\\section{Introduction}\n\nIn engineering and management applications, one often has to collect data from unknown systems, learn their transition functions, and learn to make predictions and decisions. A critical precursor of decision making is to model the system from data. We study how to learn an unknown Markov model of the system from its state-transition trajectories. When the system admits a large number of states, recovering the full model becomes sample-expensive.\n\nIn this paper, we focus on Markov processes whose transition matrix has a small rank.\nThe small rank implies that the observed process is governed by a low-dimensional latent process that we cannot observe directly. This property is (approximately) satisfied in a wide range of practical systems. Despite the large state space, the low-rank property unlocks the potential of accurately learning a full set of transition density functions from short empirical trajectories. \n\n\\subsection{Motivating Examples}\n\nPractical state-transition processes with a large number of states often exhibit low-rank structures. For example, the sequence of stops made by a taxi turns out to follow a Markov model with an approximately low-rank structure \\citep{liu2012understanding, benson2017spacey}. \nFor another example, a random walk on a lumpable network has a low-rank transition matrix \\citep{buchholz1994exact,e2008optimal}. 
Transition kernels with fast-decaying eigenvalues have also been observed in molecular dynamics \\citep{rohrdanz2011determination}; they can be used to find metastable states, coresets, and manifold structures of complicated dynamics \\citep{chodera2007automatic, coifman2008diffusion}.\n\n\nLow-rank Markov models are also related to dimension reduction for control systems and reinforcement learning. For example, the state aggregation approach for modeling a high-dimensional system can be viewed as a low-rank approximation approach \\citep{bertsekas1995dynamic, bertsekas1995neuro,singh1995reinforcement}. In state aggregation, one assumes that there exists a latent stochastic process $\\{z_t\\} \\subset [r]$ such that\n$\\mathbb{P}( s_{t+1} \\mid s_t ) = \\sum_z \\mathbb{P}( z_t =z \\mid s_t) \\mathbb{P}( s_{t+1} \\mid z_t=z),$\nwhich is equivalent to a factorization model of the transition kernel $\\P$.\nIn the context of reinforcement learning, the nonnegative factorization model has been referred to as a generalization of the rich-observation model \\citep{azizzadenesheli2016reinforcement1}. The low-rank structure allows us to model and optimize the system using significantly fewer observations and less computation. Effective methods for estimating the low-rank Markov model would pave the way to a better understanding of process data and more efficient decision making.\n\n\n\\subsection{Our approach}\nWe propose to estimate the low-rank Markov model based on an empirical trajectory of states, whose length is only proportional to the total number of states. \nWe propose two approaches based on the maximum likelihood principle and low-rank optimization. The first approach uses a convex nuclear-norm regularizer to enforce the low-rank structure and a polyhedral constraint to ensure that the optimization is over stochastic matrices. The second approach is to solve a rank-constrained optimization problem using difference-of-convex (DC) programming. 
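As a concrete illustration of the state-aggregation factorization above, a low-rank transition matrix can be built from two small stochastic factors. This is only a toy sketch: the dimensions and the Dirichlet sampling below are made-up choices for illustration, not part of the model.

```python
import numpy as np

rng = np.random.default_rng(0)
p, r = 50, 3  # hypothetical sizes: p observed states, r latent meta-states

# U[i, z] plays the role of P(z_t = z | s_t = i): each row is a distribution
U = rng.dirichlet(np.ones(r), size=p)   # p x r, rows sum to one
# V[z, j] plays the role of P(s_{t+1} = j | z_t = z): each row is a distribution
V = rng.dirichlet(np.ones(p), size=r)   # r x p, rows sum to one

P = U @ V  # the factored transition matrix

assert np.allclose(P.sum(axis=1), 1.0)  # P is again row-stochastic
assert np.linalg.matrix_rank(P) <= r    # and its rank is at most r
```

The row sums are preserved because each row of `P` is a convex combination of the rows of `V`.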
For both approaches, we provide statistical upper bounds for the Kullback-Leibler (KL) divergence between the estimator and the true transition matrix as well as the $\\ell_2$ risk. We also provide an information-theoretic lower bound to show that the proposed estimators are nearly rate-optimal. Note that the low-rank estimation of the Markov model was considered in \\cite{zhang2018optimal}, where a spectral method with a total-variation error bound was given. In comparison, the novelty of our methods lies in the use of the maximum likelihood principle and low-rank optimization, which allows us to obtain the first KL-divergence bound for learning low-rank Markov models.\n\n\nOur second approach involves solving a rank-constrained optimization problem over stochastic matrices, which is a refinement of the convex nuclear-norm approach. Due to the non-convex rank constraint, the optimization problem is difficult: to the best of our knowledge, there is no efficient approach that directly solves the rank-constrained problem. In this paper, we develop a penalty approach to relax the rank constraint and transform the original problem into a DC (difference-of-convex-functions) program. Furthermore, we develop a particular DC algorithm that solves the problem by initiating at the solution of the convex problem and successively refining it through a sequence of inner subproblems. Each subroutine is based on the multi-block alternating direction method of multipliers (ADMM). Empirical experiments show that the successive refinements through DC programming do improve the learning quality. As a byproduct of this research, we develop a new class of DC algorithms and a unified convergence analysis for solving non-convex, non-smooth problems, which, to the best of our knowledge, were not previously available in the literature. \n\n\\subsection{Contributions and paper outline}\n\nThe paper provides a full set of solutions for learning low-rank Markov models. 
The main contributions are: (1) We develop, for the first time, statistical methods for learning low-rank Markov models with rate-optimal Kullback-Leibler divergence guarantees; (2) We develop low-rank optimization methods that are tailored to the computational problems of nuclear-norm regularized and rank-constrained M-estimation; (3) A byproduct is a generalized DC algorithm that applies to nonsmooth nonconvex optimization with a convergence guarantee. \n\nThe rest of the paper is organized as follows. Section 2 surveys related literature. Section 3 proposes two maximum likelihood estimators based on low-rank optimization and establishes their statistical properties. Section 4 develops computation methods and establishes convergence of the methods. Section 5 presents the results of our numerical experiments.\n\n\n\n\\section{Related literature}\n\n\nModel reduction for complicated systems has a long history. It traces back to variable-resolution dynamic programming \\citep{moore1991variable} and state aggregation for decision processes \\citep{sutton1998reinforcement}. In the case of Markov processes, \\citet{deng2011optimal} and \\citet{deng2012model} considered low-rank reduction of Markov models with an explicitly known transition probability matrix, but not the estimation of the reduced models. Low-rank matrix approximation has proven powerful in the analysis of large-scale panel data, with numerous applications including network analysis \\citep{e2008optimal}, community detection \\citep{newman2013spectral}, ranking \\citep{negahban2016rank}, product recommendation \\citep{keshavan2010matrix}, and many more. The main goal is to impute corrupted or missing entries of a large data matrix. Statistical theory and computational methods are well understood in settings where a low-rank signal matrix is corrupted by independent Gaussian noise or its entries are missing independently. 
\n\n\nIn contrast, our problem is to estimate the transition density functions from dependent state trajectories, where statistical theory and efficient methods are underdeveloped. When the Markov model has rank $1$, it becomes an independent process. In this case, our problem reduces to estimation of a discrete distribution from independent samples \\citep{steinhaus1957problem,lehmann2006theory,han2015minimax}. For a rank-$2$ transition matrix, \\cite{huang2016recovering} proposed an estimation method using a small number of independent samples. Very recently, there have been several works on minimax learning of Markov chains. \\citet{HOP18} derived the minimax rates of estimating a Markov model in terms of a smooth class of $f$-divergences. They considered the family of $\\alpha$-minorated Markov chains, i.e., all the transition probabilities are greater than $\\alpha$. \\citet{WKo19} computed the finite-sample PAC-type minimax sample complexity of recovering the transition matrix from a state trajectory of a Markov chain, up to a tolerance in a total-variation-based (TV-based) metric. This TV-based metric does not belong to the family of the smooth $f$-divergences in \\citet{HOP18}, and their class of Markov models strictly contains the class of the $\\alpha$-minorated ones. Neither of these works considered low-rank Markov models, though. \n\nThe closest work to ours is \\cite{zhang2018optimal}, in which a spectral method via truncated singular value decomposition was introduced and the upper and lower error bounds in terms of total variation were established. \\cite{yang2017dynamic} developed an online stochastic gradient method for computing the leading singular space of a transition matrix from random walk data. To the best of our knowledge, none of the existing works has analyzed efficient recovery of a low-rank Markov model with a Kullback-Leibler divergence guarantee.\n\nHidden Markov Models (HMMs) are closely related to our low-rank Markov models. 
Note that the observation trajectory of an HMM is not necessarily Markovian. Therefore, an HMM can be regarded as a relaxed variant of low-rank Markov models. There have been many works on estimating HMMs, in particular through spectral approaches, e.g., \\citet{hsu2012spectral} and \\citet{anandkumar2014tensor}. A critical difference is that states are not fully observable in an HMM but are fully observable in low-rank Markov models. Although the HMM is more general, the low-rank Markov model is more suitable for dynamical processes where the state space is large but fully observable, for which we will establish tighter error bounds. \n\n\nOn the optimization side, we adopt DC programming to handle the rank constraint and replace it with the difference of two convex functions. DC programming was first introduced by \\cite{tao1997convex} and has become a prominent tool for handling a class of nonconvex optimization problems (see also \\cite{tao2005dc,le2012exact,le2017stochastic,lethi2018}). \nIn particular, \\citet{van2015convergence} and \\cite{wen2017proximal} considered the {\\it majorized DC algorithm}, which motivated the optimization method developed in this paper.\nHowever, both \\cite{van2015convergence} and \\citet{wen2017proximal} used the majorization technique with restricted choices of majorants, and neither considered the introduction of indefinite proximal terms. In addition, \\citet{wen2017proximal} further assumed the smooth part of the objective to be convex. \nIn comparison with the existing methods, our DC programming method applies to nonsmooth problems and is compatible with a more flexible and possibly indefinite proximal term.\n\nFinally, we would like to mention the probabilistic tools we used to derive the statistical results. Recent years have witnessed many works on measure concentration of dependent random variables, e.g., \\citet{Mar96, Kon07, KRa08, Pau15, JFS18}, etc. 
Nevertheless, these results do not suffice to establish the desired statistical guarantee, because exploiting low-rank structure requires studying the concentration of a matrix martingale in terms of the spectral norm, as shown in Lemma \\ref{lem:gradient}. The matrix Freedman inequality \\citep[][Corollary~1.3]{tropp2011freedman} turns out to be the right tool for analyzing the concentration of the matrix martingale. We also used a variant of Bernstein's inequality for general Markov chains \\citep[][Theorem~1.2]{JFS18} to derive an exponential tail bound for the state counts of the Markov chain ${\\mathcal X}$. \n\n\n\\section{Minimax rate-optimal estimation of low-rank Markov chains}\n\\label{sec:StatProperty}\n\nConsider an ergodic Markov chain ${\\mathcal X} = \\{X_0, X_1, \\ldots, X_n\\}$ on $p$ states ${\\mathcal S}=\\{s_j\\}_{j=1}^p$ with the transition probability matrix $\\mathbf{P}\\in\\mathbb{R}^{p\\times p}$ and stationary distribution $\\pi$, where $P_{ij} = \\mathbb{P}(X_1 = s_j|X_0 = s_i)$ for any $i, j \\in [p]$. Let $\\pi_{\\min} := \\min_{j \\in [p]} \\pi_j$ and $\\pi_{\\max} := \\max_{j \\in [p]} \\pi_j$. We quantify the distance between two transition matrices $\\mathbf{P}$ and $\\widehat{\\mathbf{P}}$ in Frobenius norm $\\|\\widehat{\\mathbf{P}} - \\mathbf{P}\\|_{\\rm F} = \\bigl\\{\\sum_{i,j = 1}^p (\\widehat P_{ij} - P_{ij})^2\\bigr\\}^{1\/2}$ and Kullback--Leibler divergence $D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat{\\mathbf{P}}) = \\sum_{i, j=1}^p \\pi_iP_{ij}\\log(P_{ij}\/ \\widehat P_{ij})1_{\\{P_{ij} \\neq 0\\}}$.\nSuppose that the unknown transition matrix $\\mathbf{P}$ has a small rank $r \\ll p$. Our goal is to estimate the transition matrix $\\P$ via a state trajectory of length $n$. 
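The two error metrics just defined can be evaluated directly in code. The $2$-state chain below is a made-up example (its stationary distribution is $(2/3, 1/3)$); it is only an illustration of the definitions, not of the estimators.

```python
import numpy as np

def kl_divergence(P, P_hat, pi):
    """D_KL(P, P_hat) = sum_{i,j} pi_i * P_ij * log(P_ij / P_hat_ij), over P_ij != 0."""
    d = 0.0
    for i in range(P.shape[0]):
        for j in range(P.shape[1]):
            if P[i, j] > 0:
                d += pi[i] * P[i, j] * np.log(P[i, j] / P_hat[i, j])
    return d

P = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([2 / 3, 1 / 3])                  # stationary: pi @ P == pi
P_hat = np.array([[0.85, 0.15], [0.25, 0.75]])  # a made-up estimate

assert np.allclose(pi @ P, pi)
assert kl_divergence(P, P, pi) == 0.0          # KL vanishes at the truth
fro_err = np.linalg.norm(P_hat - P, 'fro') ** 2
```

Note that the KL divergence is weighted by the stationary distribution, so errors on frequently visited states count more than errors on rare ones.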
\n\n\\subsection{Spectral gap of nonreversible Markov chains}\nWe first introduce the {\\it right ${\\mathcal L}_2$-spectral gap} of $\\mathbf{P}$ \\citep{Fil91, JFS18}, a quantity that determines the convergence speed of the Markov chain ${\\mathcal X}$ to its invariant distribution $\\pi$. \nLet ${\\mathcal L}_2(\\pi):= \\{h \\in \\Re^p: \\sum_{j \\in [p]} h_j^2 \\pi_j < \\infty\\}$ be a Hilbert space endowed with the following inner product: \n\\[\n\\inn{h_1, h_2}_{\\pi} := \\sum_{j \\in [p]} h_{1j} h_{2j}\\pi_j. \n\\]\nThe matrix $\\mathbf{P}$ induces a linear operator on ${\\mathcal L}_2(\\pi)$: $h \\mapsto \\mathbf{P} h$, which, with a slight abuse of notation, we also denote by $\\mathbf{P}$. Let $\\mathbf{P}^*$ be the adjoint operator of $\\mathbf{P}$ with respect to ${\\mathcal L}_2(\\pi)$:\n$$\\P^* = \\diag(\\pi)^{-1} \\P^\\top \\diag(\\pi).$$ \nNote that the following four statements are equivalent: (a) $\\P$ is self-adjoint; (b) $\\P^\\ast = \\P$; (c) the detailed balance condition holds: $\\pi_i P_{ij} = \\pi_j P_{ji}$; (d) the Markov chain is reversible. In our analysis, we do {\\it not} require the Markov chain to be reversible. We therefore introduce the {\\it additive reversiblization of $\\mathbf{P}$}: $(\\mathbf{P} + \\mathbf{P}^*) \/ 2$, which is a self-adjoint operator on ${\\mathcal L}_2(\\pi)$ whose largest eigenvalue is 1. 
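The adjoint and the additive reversiblization just introduced can be checked numerically. The $3$-state cyclic chain below is a made-up non-reversible example whose stationary distribution is uniform.

```python
import numpy as np

def adjoint(P, pi):
    """P* = diag(pi)^{-1} P^T diag(pi); entrywise (P*)_{ij} = pi_j P_{ji} / pi_i."""
    return P.T * pi[None, :] / pi[:, None]

# Cyclic (non-reversible) chain 0 -> 1 -> 2 -> 0 with uniform stationary law
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
pi = np.full(3, 1 / 3)

R = 0.5 * (P + adjoint(P, pi))           # additive reversiblization

assert np.allclose(R.sum(axis=1), 1.0)   # R is again a transition matrix
# detailed balance for R: diag(pi) R is symmetric, i.e., R is self-adjoint
assert np.allclose(pi[:, None] * R, (pi[:, None] * R).T)
```

For this chain `P` itself violates detailed balance ($\pi_i P_{ij} \neq \pi_j P_{ji}$), while its reversiblization `R` satisfies it by construction.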
The right spectral gap of $\\mathbf{P}$ is defined as follows: \n\\begin{definition}[Right ${\\mathcal L}_2$-spectral gap]\n\tWe say the right ${\\mathcal L}_2$-spectral gap of $\\mathbf{P}$ is $1 - \\rho_+ $ if \n\t\\[\n\t\\rho_+:= \\sup_{\\inn{h, 1}_{\\pi} = 0, \\inn{h, h}_{\\pi} = 1} \\frac{1}{2}\\inn{(\\mathbf{P} + \\mathbf{P}^*)h, h}_{\\pi} < 1, \n\t\\]\n\twhere $1$ in $\\inn{h, 1}$ refers to the all-one $p$-dimensional vector.\n\\end{definition}\nDefine the $\\epsilon$-mixing time of the Markov chain ${\\mathcal X}$ as\n\\[\n\\tau(\\epsilon) := \\min\\{t: \\max_{j \\in [p]} \\|(\\mathbf{P}^t)_{j\\cdot} - \\pi\\|_{\\mathrm{TV}} \\le \\epsilon\\}, \n\\]\nwhere $\\|(\\mathbf{P}^t)_{j\\cdot} - \\pi\\|_{\\mathrm{TV}} := 2 ^ {-1} \\lonenorm{(\\mathbf{P}^t)_{j\\cdot} - \\pi}$ is the total variation distance between $\\mathbf{P} ^ t_{j \\cdot}$ and $\\pi$. For reversible and ergodic Markov chains, \\citet[][Theorem~12.3]{LPe17} show that \n\\be\n\t\\label{eq:mixing_gap}\n\t\\tau(\\epsilon) \\le \\frac{1}{1 - \\rho_+}\\log\\biggl(\\frac{1}{\\epsilon \\pi_{\\min}}\\biggr), \n\\ee\nwhich implies that the larger the spectral gap is, the faster the Markov chain converges to the stationary distribution.\n\n\\subsection{Estimation methods and statistical results}\n\\label{sec:main_results}\n\nNow we are in a position to present our methods and statistical results. Given the trajectory $\\{X_0, X_1, \\ldots,X_n\\}$, we count the number of times that the state $s_i$ transitions to $s_j$:\n\\[ n_{ij}: = \\left|\\{1\\le k\\le n \\mid \\, X_{k-1} = s_i, X_k = s_j\\}\\right|.\\]\nLet $n_i: = \\sum_{j=1}^{p} n_{ij}$ for $i=1,\\ldots, p$ and $n := \\sum_{i=1}^p n_i$. 
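The counts $n_{ij}$ can be tabulated in a single pass over the trajectory; the short trajectory below is a made-up example with states coded as $0, \ldots, p-1$.

```python
import numpy as np

def transition_counts(traj, p):
    """n_ij = #{1 <= k <= n : X_{k-1} = s_i, X_k = s_j} for a trajectory X_0, ..., X_n."""
    N = np.zeros((p, p), dtype=int)
    for a, b in zip(traj[:-1], traj[1:]):
        N[a, b] += 1
    return N

traj = [0, 1, 1, 2, 0, 1]              # X_0, ..., X_5
N = transition_counts(traj, p=3)

assert N.sum() == len(traj) - 1        # n equals the total number of transitions
assert N[0, 1] == 2 and N[1, 1] == 1   # e.g., the transition 0 -> 1 occurs twice
```

The row sums of `N` give the visit counts $n_i$, and `N.sum()` gives $n$.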
The averaged negative log-likelihood function of $\\P$ based on the state-transition trajectory $\\{x_0, \\ldots, x_n\\}$ is\n\\begin{equation}\\label{eq:log-likelihood}\n{\\ell_n}(\\P):= - \\frac{1}{n} \\sum_{k = 1}^n \\log (\\inn{\\mathbf{P}, \\mathbf{X}_k}) = -\\frac{1}{n} \\sum_{i=1}^{p}\\sum_{j=1}^{p} n_{ij}\\log(P_{ij}), \n\\end{equation}\nwhere $\\mathbf{X}_k := e_ie_j^\\top \\in \\Re^{p \\times p}$ if $x_{k - 1} = s_i$ and $x_k = s_j$. We first impose the following assumptions on $\\mathbf{P}$ and $\\pi$. \n\\begin{assumption}\n\t\\label{asp:1}\n\t(i) ${\\rm rank}(\\mathbf{P}) = r$; (ii) There exist some positive constants $\\alpha, \\beta > 0$ such that for any $1 \\le j, k \\le p$, $P_{jk} \\in \\{0\\} \\cup [\\alpha \/ p, \\beta \/ p]$. \n\\end{assumption}\n\\begin{remark}\nThe entrywise constraints on $\\mathbf{P}$ are needed for our theoretical analysis and may not be necessary in practice. \n\tSpecifically, the upper and lower bounds for the nonzero entries of $\\mathbf{P}$\n\tensure that (i) the gradient of the log-likelihood $\\nabla \\ell_n(\\mathbf{P})$ is well controlled and exhibits exponential concentration around its population mean (see \\eqref{eq:log-likelihood} for the reason we need $\\alpha$ there); \n\t(ii) the conversion factor between the $\\ell_2$-risk $\\fnorm{\\widehat \\mathbf{P} - \\mathbf{P}}$ ($\\fnorm{\\widehat \\mathbf{P}^r - \\mathbf{P}}$ resp.) and the KL-divergence $D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat \\mathbf{P})$ ($D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat \\mathbf{P}^r)$ resp.) depends on $\\alpha$ and $\\beta$, as per Lemma \\ref{lem:kl_to_l2}. 
The entry-wise upper and lower bounds are common in statistical analysis of count data, e.g., Poisson matrix completion \\citep[Equation (10)]{cao2016poisson}, Poisson sparse regression \\citep[Assumption 2.1]{jiang2015minimax}, point autoregressive models \\citep[Definition of $\\mathcal{A}_s$]{hall2016inference}, etc.\n\\end{remark}\n\n\\begin{remark}\n\n\tIf we remove $0$ from the feasible range of $P_{jk}$, we obtain the $(\\alpha\/p)$-minoration condition: $P_{jk} \\ge \\alpha \/ p$ for all $j, k \\in [p]$. The $(\\alpha\/p)$-minoration condition implies strong mixing since combining \\citet[][pp. 237-238]{Bre99} and \\citet[][Lemma~2.2.2]{Kon07} yields $1 - \\rho_+ \\ge \\alpha$ and we can deduce that $\\tau(\\epsilon) \\le \\alpha^{-1}\\log\\{(\\epsilon \\pi_{\\min})^{-1}\\}$ given \\eqref{eq:mixing_gap}. \n\\end{remark}\n\nNext we propose and analyze a nuclear-norm regularized maximum likelihood estimator (MLE) of $\\mathbf{P}$ defined as follows: \n\\begin{equation}\n\\label{prob:convex-nuclear}\n\\begin{array}{rllll}\n\\widehat{\\P} := & \\argmin ~\\ell_n({\\mathbf{Q}}) + \\lambda \\nnorm{{\\mathbf{Q}}}\\\\\n\\mbox{s.t.} & {\\mathbf{Q}} 1_p = 1_p,\\quad \\alpha \/ p \\le Q_{ij}\\le \\beta \/ p,\\quad \\forall\\, 1\\le i,j\\le p, \n\\end{array}\n\\end{equation}\nwhere $\\lambda>0$ is a tuning parameter. Note that we cannot allow $\\mathbf{Q}$ to have zero entries as in Assumption \\ref{asp:1}, because otherwise we may have that $\\widehat P_{ij} = 0$ and $P_{ij} > 0$ for some $(i, j)$, violating the requirement in the definition of $D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P})$. Our first theorem shows that with an appropriate choice of $\\lambda$, $\\widehat \\mathbf{P}$ exhibits a sharp statistical rate. For simplicity, from now on we say $a \\gtrsim b$ ($a \\lesssim b$) if there exists a universal constant $c > 0$ ($C > 0$) such that $a \\ge cb$ ($a \\le Cb$). 
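For concreteness, the objective and feasible set of \eqref{prob:convex-nuclear} can be written out as follows. This is only an evaluation sketch (the actual solvers are developed later in the paper), and the uniform test matrix and the count matrix are made-up examples.

```python
import numpy as np

def objective(Q, N, lam):
    """ell_n(Q) + lam * ||Q||_*, with N the matrix of transition counts n_ij."""
    n = N.sum()
    return -(N * np.log(Q)).sum() / n + lam * np.linalg.norm(Q, ord='nuc')

def feasible(Q, alpha, beta):
    """Row sums equal one and alpha/p <= Q_ij <= beta/p entrywise."""
    p = Q.shape[0]
    return (np.allclose(Q.sum(axis=1), 1.0)
            and (Q >= alpha / p - 1e-12).all()
            and (Q <= beta / p + 1e-12).all())

p = 4
Q = np.full((p, p), 1 / p)        # uniform chain: rank one, feasible if alpha <= 1 <= beta
N = np.ones((p, p), dtype=int)    # made-up transition counts

assert feasible(Q, alpha=0.5, beta=2.0)
assert np.isclose(objective(Q, N, lam=0.0), np.log(p))  # plain averaged NLL
```

With `lam=0` the objective reduces to the averaged negative log-likelihood in \eqref{eq:log-likelihood}; the nuclear-norm term adds the convex low-rank penalty.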
\n\n\\begin{theorem}[Statistical guarantee for the nuclear-norm regularized estimator]\n\t\\label{thm:nuclear}\n\tSuppose the initial state $X_0$ is drawn from the stationary distribution $\\pi$ and Assumption \\ref{asp:1} holds. There exists a universal constant $C_1 > 0$, such that for any $\\xi > 1$, if we choose \n\t\\[\n\t\t\\lambda = C_1 \\biggl\\{\\biggl(\\frac{\\xi p ^ 2\\pi_{\\max} \\log p}{n\\alpha}\\biggr)^{\\! 1 \/ 2} + \\frac{\\xi p\\log p}{n \\alpha}\\biggr\\}, \n\t\\]\n\tthen whenever $n\\pi_{\\max}(1 - \\rho_+) \\ge \\max\\{\\max(20, \\xi ^ 2) \\log p, \\log n\\}$, we have that \n\t\\[\n\t\\begin{aligned}\n\t\t\\mathbb{P} \\biggl( D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}) \\gtrsim \\frac{\\xi r\\pi_{\\max}\\beta ^ 2 p \\log p}{\\pi_{\\min}\\alpha ^ 3n} + \\frac{\\xi\\pi_{\\min}}{rp\\pi_{\\max}\\log p}\\biggr) \\lesssim e^{- \\xi} + p^{-(\\xi - 1)} + p^{-10},\n\t\\end{aligned}\n\t\\]\n\tand that \n\t\\[\n\t\t\\begin{aligned}\n\t\t\\mathbb{P} \\biggl( \\fnorm{\\widehat \\mathbf{P} - \\mathbf{P}} ^ 2 \\gtrsim \\frac{\\xi r\\pi_{\\max}\\beta ^ 4\\log p}{\\pi_{\\min} ^ 2\\alpha ^ 4n} + \\frac{\\xi\\beta ^ 2}{\\alpha rp ^ 2 \\pi_{\\max} \\log p}\\biggr) \\lesssim e^{- \\xi} + p^{-(\\xi - 1)} + p^{-10}. 
\n\t\t\\end{aligned}\n\t\\]\t\n\n\\end{theorem}\n\n\n\\begin{remark}\n\t\tWhen $n \\lesssim \\{rp \\pi_{\\max}(\\log p) \\beta\/ (\\pi_{\\min} \\alpha ^ {\\!3 \/ 2})\\} ^ 2$, the second terms of both the KL-divergence and Frobenius-norm error bounds are dominated by the respective first terms, so that \n\t\t\\be\n\t\t\t\\label{eq:hd_result}\n\t\t\tD_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}) = O_{\\mathbb{P}}\\biggl(\\frac{r\\pi_{\\max}\\beta ^ 2 p \\log p}{\\pi_{\\min}\\alpha ^ 3n}\\biggr)~\\text{and}~\\fnorm{\\widehat\\mathbf{P} - \\mathbf{P}} ^ 2 = O_{\\mathbb{P}}\\biggl(\\frac{r\\pi_{\\max}\\beta ^ 4\\log p}{\\pi_{\\min} ^ 2\\alpha ^ 4n}\\biggr).\n\t\t\\ee\n\t\tWhen $\\alpha \\asymp \\beta$ and $\\pi_{\\max} \\asymp \\pi_{\\min}$, we have that $\\alpha, \\beta \\asymp 1$ and that $\\pi_{\\max}, \\pi_{\\min} \\asymp 1 \/ p$. Therefore, $D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat \\mathbf{P}) = O_{\\mathbb{P}}(rp\\log p \/ n)$ and $\\fnorm{\\widehat \\mathbf{P} - \\mathbf{P}} ^ 2 = O_{\\mathbb{P}}(rp\\log p \/ n)$. These rates are consistent with those derived in the literature on low-rank matrix estimation \\citep{NWa11, KLT11}. For large $n$, the current error bounds are sub-optimal: the second terms of the bounds are independent of $n$ and thus do not converge to zero as $n$ goes to infinity. These terms are due to the requirement that $\\widetilde D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P})$, the empirical counterpart of $D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat \\mathbf{P})$, concentrate uniformly around $D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat \\mathbf{P})$ (see Lemma \\ref{lem:uniform_law}). 
We eliminate these trailing terms through an alternative proof strategy in Section \\ref{sec:alt}, though the resulting statistical rates have worse dependence on $\\alpha$ and $\\beta$ and are thus relegated to the appendix.\n\\end{remark}\n\n\\begin{remark}\n\tWhen $r = 1$, $\\mathbf{P}$ can be written as $1_p v^\\top$ for some vector $v\\in \\Re^{p}$, and then estimating $\\mathbf{P}$ essentially reduces to estimating a discrete distribution from multinomial count data. The first term of the upper bounds in Theorem \\ref{thm:nuclear} nearly matches (up to a logarithmic factor) the classical results on the $\\ell_2$ risk of discrete distribution estimation (see, e.g., \\citet[Pg.~349]{lehmann2006theory}). \n\n\\end{remark}\n\n\nNext we move on to the second approach -- using the rank-constrained MLE to estimate $\\mathbf{P}$: \n\\begin{equation}\\label{prob:nonconvex-lowrank}\n\\begin{array}{rllll}\n\\widehat{\\P}^r := & \\argmin {\\ell_n}({\\mathbf{Q}}) \\\\\n \\mbox{s.t.} & {\\mathbf{Q}} 1_p = 1_p,\\quad \\alpha \/ p \\le Q_{ij}\\le \\beta \/ p,\\quad \\forall\\, 1\\le i,j\\le p,\\quad \\text{rank}({\\mathbf{Q}}) \\le r. \n\\end{array}\n\\end{equation}\nSimilarly to \\eqref{prob:convex-nuclear}, we cannot allow $\\mathbf{Q}$ to have zero entries. In contrast to $\\widehat \\mathbf{P}$, the rank-constrained MLE $\\widehat \\mathbf{P}^r$ enforces the prior knowledge ``$\\mathbf{P}$ is low-rank'' exactly without inducing any additional bias. It requires solving a non-convex and non-smooth optimization problem, for which we will provide an algorithm based on DC programming in Section \\ref{sec:6}. Here we first present its statistical guarantee. \n\n\\begin{theorem}[Statistical guarantee for the rank-constrained estimator]\n\t\\label{thm:rank}\n\t\tSuppose that Assumption \\ref{asp:1} holds and that $n\\pi_{\\max} (1 - \\rho_+) > \\max(20 \\log p, \\log n)$. 
There exist universal constants $C_1, C_2 > 0$ such that for any $\\xi > 0$, \n\t\t\\[\n\t\t\t\\mathbb{P}\\biggl\\{D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat \\mathbf{P}^r) \\ge \\max\\biggl(\\frac{C_1 r\\pi_{\\max} \\beta^2 p \\log p }{\\pi_{\\min}\\alpha ^ 3n}, \\frac{\\xi \\pi_{\\min}}{rp \\pi_{\\max} \\log p}\\biggr)\\biggr\\} \\le C_2e^{- \\xi}, \n\t\t\\]\n\t\tand \n\t\t\\[\n\t\t\t\\mathbb{P}\\biggl\\{\\fnorm{\\widehat \\mathbf{P}^r - \\mathbf{P}} ^ 2 \\ge \\max\\biggl(\\frac{C_1 r\\pi_{\\max} \\beta^4 \\log p }{\\pi ^ 2_{\\min}\\alpha ^ 4n}, \\frac{\\xi \\beta ^ 2}{\\alpha rp ^ 2 \\pi_{\\max}\\log p}\\biggr)\\biggr\\} \\le C_2 e^{-\\xi}. \n\t\t\\]\n\\end{theorem}\n\n\\begin{remark}\n\tThe proof for the rank-constrained estimator requires fewer inequality steps and is more straightforward than that for the nuclear-norm method. Although our upper bounds for the nuclear-norm regularized method and the rank-constrained one have the same rate, the difference between their proofs may implicitly suggest an advantage of the rank-constrained method in the constant factor, as further illustrated by our numerical studies.\n\\end{remark}\n\nTo assess the quality of the established statistical guarantee, we further provide a lower bound result below. It shows that when $\\alpha, \\beta$ are constants, both estimators $\\widehat\\mathbf{P}$ and $\\widehat \\mathbf{P}^r$ are rate-optimal up to a logarithmic factor. Informally speaking, they are not improvable for estimating the class of rank-$r$ Markov chains. 
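The elementary operation behind the rank constraint is the truncated SVD, which gives the best rank-$r$ approximation in Frobenius norm by the Eckart--Young theorem. The sketch below illustrates only this building block; naively truncating an estimate would violate the row-sum and entrywise constraints in \eqref{prob:nonconvex-lowrank}, which is why the DC algorithm developed later in the paper handles the rank constraint and the polyhedral constraints jointly.

```python
import numpy as np

def truncate_rank(M, r):
    """Best rank-r approximation of M in Frobenius norm, via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

rng = np.random.default_rng(1)
M = rng.random((6, 2)) @ rng.random((2, 6))   # made-up matrix of rank at most 2

assert np.allclose(truncate_rank(M, 2), M)    # exact when rank(M) <= r
# truncation generally breaks stochasticity: row sums drift away from one
row_sums = truncate_rank(M, 1).sum(axis=1)
```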
\n\\begin{theorem}[Minimax error lower bound for estimating low-rank Markov models]\n\t\\label{thm:lower_bound}\n\tConsider the following set of low-rank transition matrices\n\t$$\\Theta := \\bigl\\{\\mathbf{P}: \\forall j, k \\in [p], P_{jk} \\in \\{0\\} \\cup [\\alpha \/ p, +\\infty),~\\bP1_p = 1_p,~{\\rm rank}(\\mathbf{P}) \\le r\\bigr\\}.$$\n\tThere exists a universal constant $c > 0$ such that when $p(r -1) \\ge 192\\log 2$, we have\n\t\\[\n\t\t\\inf_{\\widehat{\\mathbf{P}}}\\sup_{\\mathbf{P}\\in \\Theta}\\mathbb{E}\\|\\widehat{\\mathbf{P}} - \\mathbf{P}\\|_F^2 \\geq \\frac{cp(r - 1)}{n\\alpha}. \n\t\\]\n\\end{theorem}\n\\begin{remark}\t\n\tTheorem \\ref{thm:lower_bound} shows that a smaller $\\alpha$ makes the estimation problem harder. It remains an open problem whether $\\beta$ in Assumption \\ref{asp:1} should appear in this minimax risk. \n\\end{remark}\n\nBesides the full transition matrix $\\mathbf{P}$, the leading left and right singular vectors of $\\mathbf{P}$, denoted by $\\bU, \\bV \\in \\mathbb{O}^{p\\times r}$, also play important roles in Markov chain analysis. For example, performing $k$-means on a reliable estimate of $\\bU$ or $\\bV$ can give rise to a state aggregation of the Markov chain \\citep{zhang2018optimal}. In the following, we further establish the statistical rate of estimating the singular subspaces of the Markov transition matrix, based on the previous results. \n\n\\begin{theorem}\\label{thm:uv}\n\tUnder the setting of Theorem \\ref{thm:nuclear}, let $\\widehat{\\bU}, \\widehat \\bV \\in \\mathbb{O}^{p\\times r}$ be the left and right singular vectors of $\\widehat{\\mathbf{P}}$ respectively. 
Then there exist universal constants $C_1, C_2$, such that for any $\\xi > 0$, we have that \n\t\\begin{equation*}\n\t\\max\\left\\{\\|\\sin\\Theta(\\widehat{\\bU}, \\bU)\\|_F^2, \\|\\sin\\Theta(\\widehat{\\bV}, \\bV)\\|_F^2\\right\\} \n\t\\le \\min\\biggl\\{\\max\\biggl(\\frac{C_1 r\\pi_{\\max} \\beta^4\\log p }{\\pi ^ 2_{\\min}\\alpha ^ 4n\\sigma_r^2(\\mathbf{P})}, \\frac{\\xi \\beta ^ 2}{\\alpha rp ^ 2 \\pi_{\\max}(\\log p)\\sigma_r^2(\\mathbf{P}) }\\biggr), r\\biggr\\}\n\t\\end{equation*}\n\twith probability at least $1 - C_2(e^{- \\xi} + p^{-(\\xi - 1)} + p^{-10})$. Here, $\\sigma_r(\\mathbf{P})$ is the $r$-th largest singular value of $\\mathbf{P}$ and $\\|\\sin\\Theta(\\widehat{\\bU}, \\bU)\\|_F := (r - \\|\\widehat{\\bU}^\\top\\bU\\|_F^2)^{1\/2}$ is the Frobenius norm $\\sin \\Theta$ distance between $\\widehat{\\bU}$ and $\\bU$.\n\\end{theorem}\n\n\n\n\\subsection{Proof outline of Theorems \\ref{thm:nuclear}, \\ref{thm:rank}}\n\\label{sec:statistical_analysis}\nIn this section, we elucidate the roadmap to proving Theorems \\ref{thm:nuclear} and \\ref{thm:rank}. Complete proofs are deferred to the supplementary materials. We mainly focus on Theorem \\ref{thm:nuclear} for the nuclear-norm penalized MLE $\\widehat \\mathbf{P}$, as we use similar strategies to prove Theorem \\ref{thm:rank}. \n\nWe first show in the forthcoming Lemma \\ref{lem:large_lambda} that when the regularization parameter $\\lambda$ is sufficiently large, the statistical error $\\widehat \\boldsymbol{\\Delta} := \\widehat \\mathbf{P} - \\mathbf{P}$ falls in a restricted nuclear-norm cone. This cone structure is crucial to establishing strong statistical guarantees for estimation of low-rank matrices with high-dimensional scaling \\citep{NWa11}. Define a linear subspace ${\\mathcal N} := \\{ \\mathbf{Q}: \\mathbf{Q} 1_p = 0 \\}$ \nand denote the corresponding projection operator by $\\Pi_{{\\mathcal N}}$. In other words, for any $\\mathbf{Q} \\in {\\mathcal N}$ and any $j = 1, \\ldots, p$, the summation of all the entries in the $j$th row of $\\mathbf{Q}$ equals zero. One can verify that for any $\\mathbf{Q} \\in \\Re^{p \\times p}$, $\\Pi_{{\\mathcal N}}(\\mathbf{Q}) = \\mathbf{Q} - \\mathbf{Q} 11^\\top \/ p$. Let $\\mathbf{P}=\\bU\\mathbf{D}\\bV^\\top$ be an SVD of $\\mathbf{P}$, where $\\bU, \\bV\\in \\Re^{p \\times r}$ are orthonormal and the diagonals of $\\mathbf{D}$ are in the non-increasing order. Define\n\\[\n\\begin{aligned}\n\t& {\\mathcal M}:=\\{\\mathbf{Q} \\in \\Re^{p \\times p}\\ |\\ \\text{row}(\\mathbf{Q})\\subseteq \\text{col}(\\bV), \\text{col}(\\mathbf{Q})\\subseteq \\text{col}(\\bU)\\}, \\\\\n\t& \\overline{{\\mathcal M}}^{\\perp}:=\\{\\mathbf{Q} \\in \\Re^{p \\times p}\\ |\\ \\text{row}(\\mathbf{Q})\\perp \\text{col}(\\bV), \\text{col}(\\mathbf{Q})\\perp \\text{col}(\\bU)\\},\n\\end{aligned}\n\\]\nwhere col$(\\cdot)$ and row$(\\cdot)$ denote the column space and row space respectively. We can write any $\\boldsymbol{\\Delta}\\in \\Re^{p \\times p}$ as\n\\[\n\\boldsymbol{\\Delta}=[\\bU, \\bU^\\perp]\\left[\n\\begin{array}{cc}\n\\boldsymbol{\\Gamma}_{11} & \\boldsymbol{\\Gamma}_{12} \\\\\n\\boldsymbol{\\Gamma}_{21} & \\boldsymbol{\\Gamma}_{22}\n\\end{array} \\right] [\\bV, \\bV^\\perp]^\\top.\n\\]\nDefine $\\boldsymbol{\\Delta}_{{\\mathcal W}}$ as the projection of $\\boldsymbol{\\Delta}$ onto a linear subspace ${\\mathcal W}\\subseteq\\Re^{p \\times p}$. 
Then,\n\\be\n\\begin{aligned}\n\t\\boldsymbol{\\Delta}_{{\\mathcal M}} =\\bU\\boldsymbol{\\Gamma}_{11}{\\bV}^\\top,\\quad\\boldsymbol{\\Delta}_{\\overline{\\mathcal M}^\\perp} = \\bU^{\\perp} \\boldsymbol{\\Gamma}_{22}(\\bV^{\\perp})^{\\top}, \\quad \\boldsymbol{\\Delta}_{\\overline{\\mathcal M}} = [\\bU, \\bU^{\\perp}]\\left[\n\t\\begin{array}{cc}\n\t\t\\boldsymbol{\\Gamma}_{11} & \\boldsymbol{\\Gamma}_{12} \\\\\n\t\t\\boldsymbol{\\Gamma}_{21} & \\bzero\n\t\\end{array} \\right] [\\bV, \\bV^{\\perp}]^{\\top}.\n\\end{aligned}\n\\ee\nThe lemma below shows that $\\widehat \\boldsymbol{\\Delta} := \\widehat\\mathbf{P} - \\mathbf{P}$ falls in a nuclear-norm cone if $\\lambda$ is sufficiently large. \n\\begin{lemma}\n\t\\label{lem:large_lambda}\n\tIf $\\lambda\\ge 2 \\opnorm{\\Pi_{{\\mathcal N}}(\\nabla \\ell_n(\\mathbf{P}))}$ in \\eqref{prob:convex-nuclear}, then we have that \n\t\\[\n\t\t\\nnorm{ \\widehat\\boldsymbol{\\Delta}_{\\overline{\\mathcal M}^\\perp}} \\le 3\\nnorm{\\widehat\\boldsymbol{\\Delta}_{\\overline{\\mathcal M}}} + 4\\nnorm{\\mathbf{P}_{{\\mathcal M}^\\perp}}. 
\n\t\\]\n\tIn particular, when $\\mathbf{P} \\in {\\mathcal M}$, we have that $\\nnorm{\\mathbf{P}_{{\\mathcal M}^{\\perp}}} = 0$ and that\n\t\\be\n\t\\label{eq:converter}\n\t\\nnorm{\\widehat \\boldsymbol{\\Delta}} \\le \\nnorm{\\widehat\\boldsymbol{\\Delta}_{\\overline{\\mathcal M}^{\\perp}}} + \\nnorm{\\widehat\\boldsymbol{\\Delta}_{\\overline {\\mathcal M}}} \\le 4 \\nnorm{\\widehat \\boldsymbol{\\Delta}_{\\overline {\\mathcal M}}} \\le 4(2r)^{1 \/ 2}\\fnorm{\\widehat \\boldsymbol{\\Delta}}.\n\t\\ee\n\\end{lemma}\n\nLemma \\ref{lem:large_lambda} implies that the conversion factor between the nuclear and Frobenius norms of $\\widehat \\boldsymbol{\\Delta}$ is merely $4(2r)^{1 \/ 2}$ when $\\P\\in \\mathcal{M}$, which is much smaller than the worst-case factor $p^{1 \/ 2}$ between the nuclear and Frobenius norms of general $p$-by-$p$ matrices. This property of $\\widehat \\boldsymbol{\\Delta}$ is one cornerstone for establishing Theorem \\ref{thm:nuclear} (see \\eqref{eq:stat_error_fnorm} for details).\n\n\nNext, we derive the rate of $\\opnorm{\\Pi_{{\\mathcal N}}(\\nabla \\ell_n(\\mathbf{P}))}$ to determine the order of $\\lambda$ that ensures that the condition of Lemma \\ref{lem:large_lambda} holds. \n\n\\begin{lemma}\n\t\\label{lem:gradient} \n\tUnder Assumption \\ref{asp:1}, whenever $n \\pi_{\\max} (1 - \\rho_+) \\ge 2 \\log p$, for any $\\xi > 1$, \n\t\\[\n\t\t\\mathbb{P}\\biggl\\{ \\opnorm{\\Pi_{{\\mathcal N}}(\\nabla \\ell_n(\\mathbf{P}))} \\gtrsim \\biggl(\\frac{\\xi p ^ 2\\pi_{\\max}\\log p}{n\\alpha}\\biggr)^{1 \/ 2} + \\frac{\\xi p\\log p}{n \\alpha}\\biggr\\} \\le 4p^{-(\\xi - 1)} + \\exp\\biggl(- \\frac{n\\pi_{\\max} (1 - \\rho_+)}{2}\\biggr). \n\t\\]\n\\end{lemma}\n\n\\begin{remark}\n\tLemma \\ref{lem:gradient} is essentially due to the concentration of a matrix martingale. Many existing results on measure concentration of dependent random variables \\citep{Mar96, Kon07, KRa08, Pau15} are not directly applicable because of the matrix structure of $\\nabla \\ell_n(\\mathbf{P})$. The main probabilistic tool we use here is the matrix Freedman inequality \\citep[][Corollary~1.3]{tropp2011freedman}, which characterizes the concentration behavior of a matrix martingale (see \\eqref{eq:matrix_freedman} for details). We notice two recent works, \\citet{WKo19} and \\citet{WKon19}, that use the same matrix Freedman inequality. Specifically, \\citet{WKon19} applied the matrix Freedman inequality to derive a confidence interval for the mixing time of a Markov chain based on its single trajectory, and \\citet{WKo19} used the same inequality to establish an upper bound for the sample complexity of learning a Markov chain. 
Finally, we also use a variant of Bernstein's inequality for general Markov chains \\citep[][Theorem~1.2]{JFS18} to derive an exponential tail bound for the state-visit counts of the Markov chain ${\\mathcal X}$ (see \\eqref{eq:mc_bernstein} for details).\n\\end{remark}\n\n\n\nLet ${\\mathcal C} := \\{\\mathbf{Q} \\in \\mathbb{R}^{p \\times p}: \\nnorm{\\mathbf{Q} - \\mathbf{P}} \\le 4(2r)^{1 \/ 2} \\fnorm{\\mathbf{Q} - \\mathbf{P}}, \\mathbf{Q} 1_p = 1_p, \\alpha \/ p \\le Q_{jk} \\le \\beta \/ p, \\forall (j, k) \\in [p] \\times [p]\\}$. For any $\\mathbf{Q} \\in {\\mathcal C}$, define ${\\mathcal L}(\\mathbf{Q}) := \\mathbb{E} \\{- \\log (\\inn{\\mathbf{Q}, \\mathbf{X}_i})\\}$ and $\\ell_n(\\mathbf{Q}) := n^{-1}\\sum_{i = 1}^n -\\log(\\inn{\\mathbf{Q}, \\mathbf{X}_i})$. Recall that $D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q}) = {\\mathcal L}(\\mathbf{Q}) - {\\mathcal L}(\\mathbf{P}) = \\sum_{i=1}^p \\pi_i D_{\\mathrm{KL}}(P_{i\\cdot}, Q_{i\\cdot}) = \\sum_{i = 1}^p \\sum_{j=1}^p \\pi_iP_{ij} \\log(P_{ij}\/Q_{ij})$. Define the empirical KL divergence of $\\mathbf{Q}$ from $\\mathbf{P}$ as\n\\[\n\\widetilde{D}_{\\mathrm{KL}}(\\mathbf{P},\\mathbf{Q}) := \\frac{1}{n}\\sum_{i=1}^n \\langle \\log (\\mathbf{P}) -\\log(\\mathbf{Q}), \\mathbf{X}_i\\rangle = \\ell_n(\\mathbf{Q}) - \\ell_n(\\mathbf{P}). \n\\]\nThe final ingredient of the analysis is the uniform convergence of $\\widetilde D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q})$ to $D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q})$ when $D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q})$ is large. \n\\begin{lemma}\n\t\\label{lem:uniform_law}\n\tSuppose that $n\\pi_{\\max}(1 - \\rho_+) \\ge \\max(20\\log p, \\log n)$. For any $\\eta > \\pi_{\\min} \/ (2\\pi_{\\max} rp \\log p)$, define ${\\mathcal C}(\\eta) := \\{\\mathbf{Q} \\in {\\mathcal C}: D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q}) \\ge \\eta\\}$. 
Then there exist universal constants $C_1, C_2 > 0$ such that\n\t\\begin{equation}\\label{ineq:to-show-1}\n\t\\begin{aligned}\n\t\\mathbb{P}\\biggl\\{\\forall \\mathbf{Q} \\in {\\mathcal C}(\\eta), ~|\\widetilde D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q}) - D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q})| \\leq \\frac{1}{2}D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q}) + &\\frac{C_1\\pi_{\\max} \\beta ^ 2rp\\log p}{\\pi_{\\min}\\alpha ^ 3n}\\biggr\\} \\\\\n\t& \\geq 1- C_2 \\exp\\biggl(- \\frac{\\eta \\pi_{\\max} rp \\log p}{\\pi_{\\min}}\\biggr).\n\t\\end{aligned}\n\t\\end{equation}\n\\end{lemma}\n\nTheorem \\ref{thm:nuclear} follows immediately by combining Lemmas \\ref{lem:large_lambda}, \\ref{lem:gradient} and \\ref{lem:uniform_law}. As for the rank-constrained MLE $\\widehat\\mathbf{P}^r$, let $\\widehat {\\boldsymbol{\\Delta}}(r) := \\widehat \\mathbf{P}^r - \\mathbf{P}$. Note that the rank constraint in \\eqref{prob:nonconvex-lowrank} implies that ${\\rm rank}(\\widehat \\boldsymbol{\\Delta}(r)) \\le 2r$. Thus, $\\nnorm{\\widehat\\boldsymbol{\\Delta}(r)} \\le (2r)^{1 \/ 2} \\fnorm{\\widehat\\boldsymbol{\\Delta}(r)}$ and Lemma \\ref{lem:uniform_law} remains applicable in the statistical analysis of $\\widehat \\mathbf{P}^r$. \n\n\n\n\n\n\n\n\n\n\n\n\\section{Computing Markov models using low-rank optimization}\n\nIn this section, we develop efficient optimization methods to compute the proposed estimators for the low-rank Markov model. From now on, we drop the constraint that $\\alpha \/ p \\le Q_{ij}\\le \\beta \/ p$, which is used only to derive the statistical guarantees. In other words, $\\alpha$ and $\\beta$ are motivated by the statistical theory and need not be taken into account in the optimization. \n\n\\subsection{Optimization methods for the nuclear-norm regularized likelihood problem}\n\\label{sec:optNuc}\n\nWe first consider the nuclear-norm regularized likelihood problem \\eqref{prob:convex-nuclear}. 
It is a special case of the following linearly constrained optimization problem:\n\\begin{equation}\\label{prob:gen-convex-nuc}\n\\min \\, \\left\\{ g({\\mathbf{X}}) + c \\norm{{\\mathbf{X}}}_{*}\\,\\mid\\, {\\mathcal A}(\\mathbf{X}) = b \\right\\}, \n\\end{equation}\nwhere $g:\\Re^{p \\times p} \\to (-\\infty,+\\infty]$ is a closed, convex, but possibly non-smooth function, ${\\mathcal A}:\\Re^{p\\times p} \\to \\Re^m$ is a linear map, and $b\\in\\Re^m$ and $c > 0$ are given data. If we take $\\alpha=0, \\beta=p$ in problem \\eqref{prob:convex-nuclear}, it becomes a special case of the general problem \\eqref{prob:gen-convex-nuc} with $g({\\mathbf{X}}) = \\ell_n({\\mathbf{X}}) + \\delta({\\mathbf{X}}\\ge 0)$, ${\\mathcal A}({\\mathbf{X}}) = {\\mathbf{X}} {\\bf 1}_p$, $b = {\\bf 1}_p$, and $\\delta(\\cdot)$ being the indicator function.\n\nDespite its convexity, problem \\eqref{prob:gen-convex-nuc} is highly nontrivial due to the nonsmoothness of $g$ and the presence of the nuclear norm regularizer. Here, we propose to solve it via the dual approach. \nThe dual of problem \\eqref{prob:gen-convex-nuc} is\n\\begin{equation} \n\\label{prob:D}\n\\begin{array}{rll}\n\\min & g^*(-{\\bf\\Xi}) - \\inprod{b}{y} \\\\\n\\mbox{s.t.} & {\\bf\\Xi} + {\\mathcal A}^*(y) + \\mathbb{S} = 0,\\quad \\norm{\\mathbb{S}}_2 \\le c,\n\\end{array}\n\\end{equation}\nwhere $\\norm{\\cdot}_2$ denotes the spectral norm, and $g^*$ is the conjugate function of $g$ given by\n\\begin{align*} \ng^*({\\bf\\Xi}) {}\n= \\sum_{(i,j)\\in\\Omega} \\frac{n_{ij}}{n}\\Bigl(\\log \\frac{n_{ij}}{n} - 1 - \\log(-\\Xi_{ij})\\Bigr) + \\delta( {\\bf\\Xi} \\le 0) \\quad \\forall\\,{\\bf \\Xi}\\,\\in\\Re^{p\\times p}\n\\end{align*}\nwith $\\Omega = \\{(i,j) \\mid n_{ij} \\neq 0\\}$ and $\\overline \\Omega = \\{(i,j) \\mid n_{ij} = 0\\}$. 
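Since $g^*$ separates across entries, this closed form is easy to sanity-check numerically. The sketch below is our own illustration (the weight $n_{ij}\/n$ and the negative dual value are arbitrary); it compares the closed form against a brute-force grid maximization of the defining supremum:

```python
import numpy as np

# Entrywise conjugate of g(x) = -w*log(x) over x > 0, with w = n_ij / n:
# g*(xi) = sup_{x > 0} { xi*x + w*log(x) } = w*(log w - 1 - log(-xi)) for xi < 0.
w, xi = 0.3, -1.7  # arbitrary illustration values (w > 0, xi < 0)

closed_form = w * (np.log(w) - 1.0 - np.log(-xi))

# Brute-force the supremum on a fine grid; the maximizer is x = -w/xi.
xs = np.linspace(1e-6, 10.0, 2_000_000)
brute_force = np.max(xi * xs + w * np.log(xs))

assert abs(closed_form - brute_force) < 1e-4
```

The same entrywise separability is what yields the closed-form updates used later in this subsection.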
\nGiven $\\sigma >0$, the augmented Lagrangian function ${\\mathcal L}_{\\sigma}$ associated with \\eqref{prob:D} is\n\\begin{align*}\n{\\mathcal L}_{\\sigma}({\\bf\\Xi}, y, \\mathbb{S}; {\\mathbf{X}}) = g^*(-{\\bf\\Xi}) - \\inprod{b}{y} + \\frac{\\sigma}{2} \\norm{{\\bf\\Xi} + {\\mathcal A}^*(y) + \\mathbb{S} + {\\mathbf{X}}\/\\sigma}^2 - \\frac{1}{2\\sigma}\\norm{{\\mathbf{X}}}^2.\n\\end{align*}\nWe consider popular ADMM-type methods for solving problem \\eqref{prob:D}; the comprehensive numerical study conducted in \\citep{li2016schur} justifies this choice.\nSince there are three separable blocks in \\eqref{prob:D} (namely ${\\bf\\Xi}$, $y$, and $\\mathbb{S}$), the directly extended ADMM is not applicable. Indeed, it has been shown in \\citep{chen2016direct} that the directly extended ADMM for multi-block convex minimization problems is not necessarily convergent. Fortunately, the function corresponding to the block $y$ in the objective of \\eqref{prob:D} is linear. Thus we can apply the multi-block symmetric Gauss-Seidel based ADMM (sGS-ADMM) \\citep{li2016schur}.\nIn the literature \\citep{chen2017efficient,ferreira2017semidefinite,lam2017fast,li2016schur,wang2018another}, extensive numerical experiments demonstrate that sGS-ADMM is not only convergent but also faster than the directly extended multi-block ADMM and many of its other variants. Specifically, the algorithmic framework of sGS-ADMM for solving \\eqref{prob:D} is presented in Algorithm \\ref{alg:sGS-ADMM}. 
\n\n\\begin{algorithm} \n\t\\caption{An sGS-ADMM for solving \\eqref{prob:D}}\n\t\\label{alg:sGS-ADMM}\n\t\\begin{algorithmic}\n\t\t\\STATE {\\bfseries Input:} initial point $({\\bf\\Xi}^0, y^0, \\mathbb{S}^0, {\\mathbf{X}}^0)$, penalty parameter $\\sigma > 0$, maximum iteration number $K$, and the step-length $\\gamma \\in (0,(1+ \\sqrt{5})\/2)$\n\t\t\\FOR{$k=0$ {\\bfseries to} $K$}\n\t\t\\STATE $y^{k+\\frac{1}{2}} = \\argmin_{y} {\\mathcal L}_{\\sigma}({\\bf\\Xi}^k, y, \\mathbb{S}^k; {\\mathbf{X}}^k)$\n\t\t\\STATE ${\\bf\\Xi}^{k+1} = \\argmin_{{\\bf\\Xi}} {\\mathcal L}_{\\sigma}({\\bf\\Xi} ,y^{k+\\frac{1}{2}}, \\mathbb{S}^k; {\\mathbf{X}}^k) $\n\t\t\\STATE $y^{k+1} = \\argmin_{y} {\\mathcal L}_{\\sigma}({\\bf\\Xi}^{k+1}, y, \\mathbb{S}^k; {\\mathbf{X}}^k)$\n\t\t\\STATE $\\mathbb{S}^{k+1} = \\argmin_{\\mathbb{S}} {\\mathcal L}_{\\sigma}({\\bf\\Xi}^{k+1}, y^{k+1}, \\mathbb{S}; {\\mathbf{X}}^k)$\n\t\t\\STATE\n\t\t${\\mathbf{X}}^{k+1} = {\\mathbf{X}}^k + \\gamma\\sigma({\\bf\\Xi}^{k+1} + {\\mathcal A}^*(y^{k+1}) + \\mathbb{S}^{k+1})$\n\t\t\\ENDFOR\n\t\n\t\\end{algorithmic}\n\\end{algorithm}\n\nNext, we discuss how the $k$-th iteration of Algorithm \\ref{alg:sGS-ADMM} is performed:\n\\begin{description}\n\t\\item[{\\bf Computation of $y^{k+\\frac{1}{2}}$ and $y^{k+1}$}.]\n\tSimple calculations show that $y^{k+\\frac{1}{2}}$ and $y^{k+1}$ can be obtained by solving the following linear systems:\n\t\\begin{equation*}\n\t\\left\\{ \n\t\\begin{aligned}\n\t&y^{k+\\frac{1}{2}} = \\frac{1}{\\sigma}({\\mathcal A} {\\mathcal A}^*)^{-1}\\Big( b - {\\mathcal A}\\big({\\mathbf{X}}^k + \\sigma({\\bf\\Xi}^k + \\mathbb{S}^k)\\big)\\Big ), \\\\[5pt]\n\t&y^{k+1} = \\frac{1}{\\sigma}({\\mathcal A} {\\mathcal A}^*)^{-1}\\Big( b - {\\mathcal A}\\big({\\mathbf{X}}^k + \\sigma({\\bf\\Xi}^{k+1} + \\mathbb{S}^k)\\big)\\Big ).\n\t\\end{aligned}\n\t\\right. 
\n\t\\end{equation*}\n\tIn our estimation problem, it is not difficult to verify that \n\t$ {\\mathcal A} {\\mathcal A}^* y = p y$ for any $y\\in \\Re^p$.\n\tThanks to this special structure, the above formulas can be further reduced to \n\t\\begin{equation*}\n\ty^{k+\\frac{1}{2}} = \\frac{1}{\\sigma p}\\Big( b - {\\mathcal A}\\big({\\mathbf{X}}^k + \\sigma({\\bf\\Xi}^k + \\mathbb{S}^k)\\big)\\Big) \\mbox{ and }\n\ty^{k+1} = \\frac{1}{\\sigma p}\\Big( b - {\\mathcal A}\\big({\\mathbf{X}}^k + \\sigma({\\bf\\Xi}^{k+1} + \\mathbb{S}^k)\\big)\\Big).\n\t\\end{equation*}\n\t\\item [{\\bf Computation of ${\\bf \\Xi}^{k+1}$}.] \n\tTo compute ${\\bf \\Xi}^{k+1}$, we need to solve the following optimization problem:\n\n\t\\[\n\t\\min_{\\bf\\Xi} \\left\\{ g^*(-{\\bf\\Xi}) +\\frac{\\sigma}{2}\\norm{{\\bf\\Xi} + {\\bf R}^k}^2 \\right\\},\n\t\\]\n\twhere ${\\bf R}^k \\in \\Re^{p\\times p}$ is given. Careful calculations, together with the Moreau identity \\citep[Theorem 31.5]{rockafellar2015convex}, show that\n\t\\[\n\t{\\bf \\Xi}^{k+1} = \\frac{1}{\\sigma} [{\\mathbf{Z}}^k - \\sigma {\\bf R}^k]\\, \\mbox{ and } \\,\n\t{\\mathbf{Z}}^k = \\argmin_{{\\mathbf{Z}}} \\left\\{\\sigma g({\\mathbf{Z}}) + \\frac{1}{2}\\norm{{\\mathbf{Z}} - \\sigma {\\bf R}^k}^2 \\right\\}.\n\t\\]\n\tFor our estimation problem, i.e., $g({\\mathbf{X}}) = \\ell_n({\\mathbf{X}}) + \\delta( {\\mathbf{X}}\\ge 0)$, it is easy to see that \n\t${\\mathbf{Z}}^k$ admits the following closed form:\n\t\\[\n\tZ^k_{ij} = \\frac{\\sigma R_{ij}^k + \\sigma\\sqrt{(R^k_{ij})^2 + 4 n_{ij}\/(n\\sigma)}}{2} \\quad \\mbox{if } (i,j)\\in \\Omega \\, \\mbox{ and } \\,\n\tZ^k_{ij} = \\sigma \\max(R_{ij}^k,0) \\quad \\mbox{if } (i,j)\\in \\overline{\\Omega}.\n\t\\]\n\t\\item [{\\bf Computation of $\\mathbb{S}^{k+1}$}.]\n\tThe computation of $\\mathbb{S}^{k+1}$ can be simplified as:\n\t\\[\n\t\\mathbb{S}^{k+1} = \\argmin_{\\mathbb{S}} \\left\\{\n\t\\frac{\\sigma}{2} \\norm{\\mathbb{S} + {\\bf\\Xi}^{k+1} + {\\mathcal A}^* y^{k+1} + {\\mathbf{X}}^k\/\\sigma}^2 \\mid \n\t\\norm{\\mathbb{S}}_2 \\le c 
\n\t\\right\\}.\n\t\\]\n\tLet ${\\bf W}_{k} := -({\\bf\\Xi}^{k+1} + {\\mathcal A}^* y^{k+1} + {\\mathbf{X}}^k\/\\sigma)$ admit the singular value decomposition (SVD)\n\t$\n\t{\\bf W}_{k} = {\\bf U}_{k} {\\bf\\Sigma}_k {\\bf V}_k^\\top,\n\t$\n\twhere ${\\bf U}_{k}$ and ${\\bf V}_k$ are orthogonal matrices and\n\t${\\bf \\Sigma}_k = {\\rm Diag}(\\alpha_1^k,\\ldots, \\alpha_p^k)$ is\n\tthe diagonal matrix of singular values of ${\\bf W}_k$, with $\\alpha_1^k\\ge\\ldots\\ge \\alpha_p^k\\ge 0.$ Then, by Lemma 2.1 in \\citep{jiang2014partial}, we know that\n\t\\[\n\t\\mathbb{S}^{k+1} = {\\bf U}_k\\min({\\bf \\Sigma}_k,c){\\bf V}_k^\\top,\n\t\\]\n\twhere $\\min({\\bf \\Sigma}_k,c) = {\\rm Diag}\\big(\\min(\\alpha_1^k,c),\\ldots, \\min(\\alpha_p^k,c)\\big)$.\n\n\tWe also note that in the implementation, only a partial SVD, which is much cheaper than a full SVD, is needed since $r\\ll p$.\n\\end{description}\nThe nontrivial convergence results and the sublinear non-ergodic iteration\ncomplexity of Algorithm \\ref{alg:sGS-ADMM} can be obtained from \\cite{li2016schur} and \\cite{chen2017efficient}.\n{We put the convergence theorem and a sketch of the proof in the supplementary material.}\n\n\\subsection{Optimization methods for the rank-constrained likelihood problem}\n\\label{sec:6}\n\n\n\nNext we develop the optimization method for computing the rank-constrained likelihood maximizer from \\eqref{prob:nonconvex-lowrank}. In Subsection \\ref{sec:pen}, a penalty approach is applied to transform the original intractable rank-constrained problem into a DC programming problem. We then solve this problem by a proximal DC (PDC) algorithm in Subsection \\ref{subsec:proxdca}. \nWe also discuss the solver for the subproblems involved in the proximal DC algorithm. Lastly, a unified convergence analysis of a class of majorized indefinite-proximal DC (majorized iPDC) algorithms is provided in Subsection \\ref{sec:DCA}.\n\n\\subsubsection{A penalty approach for problem \\eqref{prob:nonconvex-lowrank}. 
} \n\\label{sec:pen}\nRecall that \\eqref{prob:nonconvex-lowrank} is intractable due to the non-convex rank constraint; we therefore introduce a penalty approach to relax it. In particular, we study the following optimization problem:\n\\begin{equation}\\label{prob:gen-nonconvex-lowrank}\n\\min \\, \\left\\{ f({\\mathbf{X}}) \\,\\mid\\, {\\mathcal A}({\\mathbf{X}}) = b,\\, {\\rm rank}({\\mathbf{X}})\\le r \\right\\}, \n\\end{equation}\nwhere $f:\\Re^{p \\times p} \\to (-\\infty,+\\infty]$ is a closed proper convex, but possibly non-smooth, function.\nThe original rank-constrained maximum likelihood problem \\eqref{prob:nonconvex-lowrank} can be viewed as a special case of the general model \\eqref{prob:gen-nonconvex-lowrank}.\n\n\n\nGiven ${\\mathbf{X}}\\in\\Re^{p\\times p}$, let $\\sigma_1({\\mathbf{X}}) \\geq \\cdots \\geq \\sigma_p({\\mathbf{X}})\\geq 0$ be the singular values of ${\\mathbf{X}}$. Since ${\\rm rank}({\\mathbf{X}})\\le r$ if and only if $\\sigma_{r+1}({\\mathbf{X}}) + \\ldots + \\sigma_{p}({\\mathbf{X}}) = \\|{\\mathbf{X}}\\|_\\ast - \\|{\\mathbf{X}}\\|_{(r)} = 0$ (here $\\|{\\mathbf{X}}\\|_{(r)} = \\sum_{i=1}^r \\sigma_i({\\mathbf{X}})$ is the Ky Fan $r$-norm of ${\\mathbf{X}}$), \\eqref{prob:gen-nonconvex-lowrank} can be equivalently formulated as\n\\begin{equation*}\n\\min \\left\\{ f({\\mathbf{X}}) \\mid \\|{\\mathbf{X}}\\|_\\ast-\\|{\\mathbf{X}}\\|_{(r)}=0, \\, {\\mathcal A}({\\mathbf{X}}) = b \\right\\}.\n\\end{equation*}\nSee also \\citep[Equation (29)]{sun2010majorized}.\nThe penalized formulation of problem \\eqref{prob:gen-nonconvex-lowrank} is\n\\begin{equation}\\label{prob:nonconvex-pen-DC}\n\\min\n\\left\\{ f({\\mathbf{X}}) + c (\\|{\\mathbf{X}}\\|_\\ast - \\|{\\mathbf{X}}\\|_{(r)}) \\mid {\\mathcal A}({\\mathbf{X}}) = b \\right\\},\n\\end{equation}\nwhere $c>0$ is a penalty parameter. 
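The penalty term $\\|{\\mathbf{X}}\\|_\\ast - \\|{\\mathbf{X}}\\|_{(r)}$ is cheap to evaluate from the singular values. The following minimal sketch (our own illustration, with arbitrary random test matrices) confirms that it vanishes exactly on matrices of rank at most $r$ and is strictly positive otherwise:

```python
import numpy as np

def rank_penalty(X, r):
    """||X||_* - ||X||_(r): the sum of all singular values minus
    the Ky Fan r-norm (the sum of the r largest singular values)."""
    s = np.linalg.svd(X, compute_uv=False)
    return s.sum() - s[:r].sum()

rng = np.random.default_rng(0)
p, r = 20, 3
low_rank = rng.standard_normal((p, r)) @ rng.standard_normal((r, p))
full_rank = rng.standard_normal((p, p))

assert rank_penalty(low_rank, r) < 1e-8   # vanishes when rank(X) <= r
assert rank_penalty(full_rank, r) > 1.0   # strictly positive otherwise
```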
\nSince $\\norm{\\cdot}_{(r)}$ is convex, the objective in problem \\eqref{prob:nonconvex-pen-DC} is a difference\nof two convex functions, namely $f({\\mathbf{X}}) + c\\norm{{\\mathbf{X}}}_{*}$ and $c\\norm{{\\mathbf{X}}}_{(r)}$; that is, \\eqref{prob:nonconvex-pen-DC} is a DC program.\n\nLet ${\\mathbf{X}}_c^*$ be an optimal solution to the penalized problem \\eqref{prob:nonconvex-pen-DC}. The following proposition shows that ${\\mathbf{X}}_c^\\ast$ is also an optimal solution to \\eqref{prob:gen-nonconvex-lowrank} whenever it has rank at most $r$.\n\\begin{proposition}\\label{prop:penlowrank}\n\tIf ${\\rm rank}({\\mathbf{X}}_c^*)\\le r$, then ${\\mathbf{X}}_c^*$ is also an optimal {solution} to the original problem \\eqref{prob:gen-nonconvex-lowrank}.\n\\end{proposition}\n\n\nIn practice, one can gradually increase the penalty parameter $c$ to obtain a sufficiently low-rank solution ${\\mathbf{X}}_c^*$. In our numerical experiments, solutions with the desired rank are obtained with a properly chosen parameter $c$. \n\n\n\n\\subsubsection{A PDC algorithm for the penalized problem \\eqref{prob:nonconvex-pen-DC}.}\n\\label{subsec:proxdca}\n\n\n\nThe central idea of the DC algorithm \\citep{tao1997convex} is as follows: at each iteration, one approximates the concave part of the objective function by its affine majorant and then solves the resulting convex optimization problem. In this subsection, we present a variant of the classic DC algorithm for solving \\eqref{prob:nonconvex-pen-DC}. 
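In numerical terms, the subgradient of the concave part that the algorithm linearizes (recalled next) is obtained from a truncated SVD. A minimal sketch, added here for illustration (for a Gaussian random matrix the top $r$ singular values are almost surely distinct, so the element below is the unique choice up to the usual SVD ambiguities):

```python
import numpy as np

def kyfan_subgrad(X, r):
    """An element of the subdifferential of the Ky Fan r-norm at X:
    U_r V_r^T, built from the singular vectors of the r largest singular values."""
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] @ Vt[:r]

rng = np.random.default_rng(0)
X, r = rng.standard_normal((8, 8)), 3
W = kyfan_subgrad(X, r)

# Consistency check: <W, X> recovers the Ky Fan r-norm itself.
s = np.linalg.svd(X, compute_uv=False)
assert abs(np.sum(W * X) - s[:r].sum()) < 1e-10
```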
For the execution of the algorithm, we recall that the subdifferential of the Ky Fan $r$-norm at a point ${\\mathbf{X}}\\in\\Re^{p\\times p}$ \\citep{watson1993matrix} is\n\\[\\partial \\norm{{\\mathbf{X}}}_{(r)}=\\left\\{ \n{\\mathbf{U}} \\, {\\rm Diag}(q^*){\\mathbf{V}}^\\top \\, \\mid q^*\\in \\Delta \\right\\},\n\\]\nwhere {$\\bU$ and $\\bV$ are the matrices of left and right singular vectors of ${\\mathbf{X}}$,} and $\\Delta$ is the optimal solution set of the following problem\n\\begin{equation*}\n\\max_{q\\in \\Re^{p}} \\left\\{ \n\\sum_{i=1}^p \\sigma_i({\\mathbf{X}})q_i \\mid \n\\inprod{{\\bf 1}_p}{q} \\le r, \\, 0\\le q \\le 1\n\\right\\}.\n\\end{equation*}\nNote that one can efficiently obtain an element of $\\partial \\norm{{\\mathbf{X}}}_{(r)}$ by computing the\nSVD of ${\\mathbf{X}}$ and picking the singular vectors corresponding to the $r$ largest singular values.\nAfter these preparations, we are ready to state the PDC algorithm for problem \\eqref{prob:nonconvex-pen-DC} in Algorithm \\ref{alg:dc}.\nDifferent from the classic DC algorithm, an additional proximal term is added to ensure that solutions of the subproblems \\eqref{subprob:mmalg} exist and that the difference between two consecutive iterates converges to zero. {See Theorem \\ref{thm:convergence-alg-MM} and Remark \\ref{remk:pdc} for more details.} \n\n\n\\begin{algorithm}\n\t\\caption{A PDC algorithm for solving \\eqref{prob:nonconvex-pen-DC}}\n\t\\label{alg:dc}\n\tGiven $c>0$, $\\alpha \\ge 0$, and the stopping tolerance $\\eta$, choose initial point ${\\mathbf{X}}^0\\in \\Re^{p\\times p}$.\n\tIterate the following steps for $k=0,1,\\ldots:$\n\t\n\t{\\bf 1.} Choose ${\\mathbf{W}}_k\\in \\partial \\norm{{\\mathbf{X}}^k}_{(r)}$. 
Compute\n\t\\begin{equation}\\label{subprob:mmalg}\n\t\\begin{split}\n\t\\mathbf{X}^{k+1} = & \\argmin f({\\mathbf{X}}) + c\\left(\\norm{{\\mathbf{X}}}_* - \\inprod{{\\mathbf{W}}_k}{{\\mathbf{X}} - {\\mathbf{X}}^k} - \\norm{{\\mathbf{X}}^k}_{(r)}\\right) \n\t+ \\frac{\\alpha}{2}\\norm{{\\mathbf{X}} - {\\mathbf{X}}^k}_F^2 \\\\\n\t& \\text{subject to } {\\mathcal A}({\\mathbf{X}}) = b.\n\t\\end{split}\n\t\\end{equation}\n\t{\\bf 2.} If $\\norm{{\\mathbf{X}}^{k+1} - {\\mathbf{X}}^k}_F\\le \\eta$, stop.\n\\end{algorithm}\nWe say that ${\\mathbf{X}}$ is a critical point of problem \\eqref{prob:nonconvex-pen-DC} if \n\\[\\partial \\big(f({\\mathbf{X}}) + c\\norm{{\\mathbf{X}}}_* + \\delta({\\mathcal A}({\\mathbf{X}}) = b) \\big) \\cap \\big(c\\partial \\norm{{\\mathbf{X}}}_{(r)}\\big) \\ne \\emptyset.\\]\nWe have the following convergence results for Algorithm \\ref{alg:dc}. \n\\begin{theorem}[Convergence of Algorithm \\ref{alg:dc}]\n\t\\label{thm:convergence-alg-MM}\n\tLet $\\{{\\mathbf{X}}^k\\}$ be the sequence generated by Algorithm \\ref{alg:dc} with $\\alpha \\ge 0$. Then $\\{ f({\\mathbf{X}}^k) + c(\\norm{{\\mathbf{X}}^k}_* - \\norm{{\\mathbf{X}}^k}_{(r)})\\}$ is a non-increasing sequence. If ${\\mathbf{X}}^{k+1} = {\\mathbf{X}}^k$ for some integer $k\\ge 0$, then ${\\mathbf{X}}^k$ is a critical point of \\eqref{prob:nonconvex-pen-DC}. 
Otherwise, it holds that\n\t\\begin{align*} \n\t&\\big(f({\\mathbf{X}}^{k+1}) + c(\\norm{{\\mathbf{X}}^{k+1}}_* - \\norm{{\\mathbf{X}}^{k+1}}_{(r)})\\big) \n\t- \\big( f({\\mathbf{X}}^k) + c(\\norm{{\\mathbf{X}}^k}_* - \\norm{{\\mathbf{X}}^k}_{(r)} )\\big)\n\t\\le {} -\\frac{\\alpha}{2}\\norm{{\\mathbf{X}}^{k+1} - {\\mathbf{X}}^k}^2_F.\n\t\\end{align*}\n\tMoreover, \n\tany accumulation point of the bounded sequence $\\{{\\mathbf{X}}^k\\}$ is a critical point of problem \\eqref{prob:nonconvex-pen-DC}.\n\tIn addition, if $\\alpha >0$, it holds that $\\lim_{k\\to \\infty}\\norm{{\\mathbf{X}}^{k+1} - {\\mathbf{X}}^k}_F = 0$.\n\\end{theorem}\n\n\\begin{remark}[Adjusting parameters]\n\t\\label{remk:pdc}\n\t{\\rm In practice, a small $\\alpha >0$ is suggested to ensure a strict decrease of the objective value and the convergence \n\t\tof $\\{ \\norm{{\\mathbf{X}}^{k+1} - {\\mathbf{X}}^k}_F \\}$;\n\t\tif $f$ is strongly convex, one achieves these nice properties even when $\\alpha =0$, by the results of Theorem \\ref{thm: MMconvergence}. The penalty parameter $c$ can be adaptively adjusted according to the rank of the sequence generated by Algorithm \\ref{alg:dc}.}\n\\end{remark}\n\n\\begin{remark}[Number of iterations of Algorithm \\ref{alg:dc}]\n\t\\label{prop:convergence-alg-MM}\n\tLet $\\eta >0$ be the stopping tolerance and $F^*$ be the optimal value of problem \\eqref{prob:nonconvex-pen-DC}. By using the inequality in Theorem \\ref{thm:convergence-alg-MM}, it can be shown that if $\\alpha >0$, then Algorithm \\ref{alg:dc} terminates in no more than $K$ iterations, where\n\t\\[\n\tK = \\left\\lceil \\frac{2\\big(f({\\mathbf{X}}^0) + c(\\norm{{\\mathbf{X}}^0}_* - \\norm{{\\mathbf{X}}^0}_{(r)}) - F^* \\big)}{\\alpha \\eta^2}\\right\\rceil + 1.\n\t\\] \n\\end{remark}\n\n\\begin{remark}[Statistical properties]\n\t\tThe statistical rate we derived in Theorem 2 does not carry over to the iterates of the DC algorithm here. 
Though we show in Theorem 5 that the DC algorithm can converge to a critical point, it remains unclear whether this point is close to the global optimum and provably enjoys the statistical guarantee. Recently there have been many works conveying positive messages on the statistical properties of non-convex optimization algorithms. For example, \\citet{LWa15} showed that any stationary point of the composite objective function they considered lies within statistical precision of the true parameter. We hope to establish similar theory for the proposed DC approach in future research.\n\\end{remark}\n\nNext, we discuss how to solve the subproblems \\eqref{subprob:mmalg}. Problem \\eqref{subprob:mmalg} is still a nuclear norm penalized convex optimization problem and is a special case of model \\eqref{prob:gen-convex-nuc} with $g({\\mathbf{X}}) = f({\\mathbf{X}}) + \\inprod{{\\mathbf{W}}}{{\\mathbf{X}}} + \\frac{\\alpha }{2}\\norm{{\\mathbf{X}}}_F^2$. Hence, Algorithm \\ref{alg:sGS-ADMM} can be directly applied to solve these subproblems efficiently. When Algorithm \\ref{alg:sGS-ADMM} is executed on this new function $g$, all computations, except for the update of $\\bf\\Xi$, have already been discussed in Section \\ref{sec:optNuc}. 
To update $\\bf\\Xi$ in the process of executing Algorithm \\ref{alg:sGS-ADMM} for solving \\eqref{subprob:mmalg} with $g({\\mathbf{X}}) = \\ell_n({\\mathbf{X}}) + \\delta({\\mathbf{X}}\\ge0) + \\inprod{{\\mathbf{W}}}{{\\mathbf{X}}} + \\frac{\\alpha }{2}\\norm{{\\mathbf{X}}}_F^2$, we need to solve the following minimization problem for given ${\\bf R}\\in\\Re^{p\\times p}$ and $\\sigma > 0$,\n\\[\n{\\mathbf{Z}}^* = \\argmin_{{\\mathbf{Z}}} \\left\\{ \\sigma g({\\mathbf{Z}}) + \\frac{1}{2} \\norm{{\\mathbf{Z}} - \\sigma{\\bf R}}^2\\right\\}.\n\\] \n${\\mathbf{Z}}^*$ here can be calculated by\n\\begin{equation*}\nZ_{ij}^* = \n\\left\\{\n\\begin{aligned}\n{}&\\frac{(\\sigma R_{ij} - W_{ij}) + \\sigma\\sqrt{(R_{ij} - W_{ij}\/\\sigma)^2 + 4 (\\alpha + 1)n_{ij}\/(n\\sigma)}}{2(\\alpha + 1)} \\quad \\mbox{if } (i,j)\\in \\Omega; \\\\[5pt]\n{}&\\sigma \\max(R_{ij} - W_{ij}\/\\sigma,0) \\quad \\mbox{if } (i,j)\\in \\overline{\\Omega}.\n\\end{aligned}\n\\right.\n\\end{equation*}\n\n\n\n\\subsubsection {A unified analysis for the majorized iPDC algorithm.}\n\\label{sec:DCA}\nDue to the presence of the proximal term $\\frac{\\alpha}{2}\\norm{{\\mathbf{X}} - {\\mathbf{X}}^k}^2$ in Algorithm \\ref{alg:dc}, the classical DC analyses cannot be applied directly. In this subsection, we provide a unified convergence analysis for the majorized indefinite-proximal DC (majorized iPDC) algorithm which includes Algorithm \\ref{alg:dc} as a special instance. 
\nLet $\\mathbb{X}$ be a finite-dimensional real Euclidean space endowed with inner product $\\inprod{\\cdot}{\\cdot}$ and induced norm $\\norm{\\cdot}$.\nConsider the following optimization problem\n\\begin{equation}\\label{eq:dca model}\n\\min_{x\\in \\mathbb{X}}\\; \\theta(x)\\triangleq g(x) + p(x) - q(x),\n\\end{equation}\nwhere $g:\\mathbb{X}\\to \\Re$ is a continuously differentiable function (not necessarily convex) with a Lipschitz continuous gradient with Lipschitz modulus $L_g >0$, i.e.,\n\\[\\norm{\\nabla g(x) - \\nabla g(x')} \\le L_g \\norm{x - x'}\\quad \\forall \\, x, x'\\in \\mathbb{X}, \\]\nand $p:\\mathbb{X}\\to (-\\infty, +\\infty]$ and $q:\\mathbb{X}\\to(-\\infty, +\\infty]$ are two proper closed convex functions.\nIt is not difficult to observe that the penalized problem \\eqref{prob:nonconvex-pen-DC} is a special instance of problem \\eqref{eq:dca model}. \nFor the general model \\eqref{eq:dca model}, one can only expect the DC algorithm to converge to a critical point $\\bar{x}\\in \\mathbb{X}$ of \\eqref{eq:dca model}, i.e., a point satisfying \n\\[ \\left(\\nabla g(\\bar{x}) + \\partial p(\\bar{x}) \\right) \\cap \\partial q(\\bar{x}) \\neq \\emptyset.\\]\n\n\n\nSince $g$ is continuously differentiable with a Lipschitz continuous gradient, there exists a self-adjoint positive semidefinite linear operator ${\\mathcal G}:\\mathbb{X} \\to \\mathbb{X}$ such that for any\n$x, x'\\in \\mathbb{X}$,\n\\begin{equation*}\\label{ineq:majorization}\ng(x)\\le \\widehat{g}(x;x')\\triangleq g(x') + \\langle \\nabla g(x'), x-x'\\rangle + \\frac{1}{2}\\|x - x'\\|^2_{\\mathcal{G}}.\n\\end{equation*}\n\n\\begin{algorithm}\n\t\\caption{A majorized indefinite-proximal DC algorithm for solving problem \\eqref{eq:dca model}}\n\t\\label{alg:dca-general}\n\tGiven initial point $x^0\\in \\mathbb{X}$ and stopping tolerance $\\eta$, choose a self-adjoint, possibly\n\tindefinite, linear operator ${\\mathcal T}:\\mathbb{X} \\to \\mathbb{X}$.\n\tIterate the following steps for 
$k=0,1,\\ldots:$\n\t\n\t{\\bf 1.} Choose $\\xi^k \\in \\partial q({x}^k)$. Compute\n\t\\begin{equation}\\label{eq:subproblem}\n\tx^{k+1} \\in \\argmin_{x\\in \\mathbb{X}} \\; \\left\\{ \\widehat{\\theta}(x;{x}^k) + \\frac{1}{2}\\|x - {x}^k\\|^2_{\\mathcal{T}}\\right\\}, \n\t\\end{equation}\n\twhere $\\widehat{\\theta}(x;{x}^k) \\triangleq \\widehat{g}(x;{x}^k) + p(x) - \\big(q({x}^k) + \\langle x-{x}^k, \\xi^k\\rangle \\big).$\n\t\n\t{\\bf 2.} If $\\|{x}^{k+1}-{x}^k\\|\\le \\eta$, stop.\n\\end{algorithm}\n\n\n\nWe present the majorized iPDC algorithm for solving \\eqref{eq:dca model} in Algorithm \\ref{alg:dca-general} and provide the following convergence results. \n\\begin{theorem}[Convergence of iPDC]\\label{thm: MMconvergence}\n\tAssume that {$\\inf_{x\\in \\mathbb{X}}\\theta(x)>-\\infty$}. Let $\\{x^k\\}$ be the sequence generated by Algorithm \\ref{alg:dca-general}. \n\tIf $x^{k+1} = x^k$ for some $k\\ge 0$, then $x^k$ is a critical point of \\eqref{eq:dca model}. If $\\mathcal{G} + 2\\mathcal{T}\\succeq 0$, then\n\tany accumulation point of $\\{x^{k}\\}$, if it exists, is a critical point of \\eqref{eq:dca model}. In addition, if $ \\mathcal{G} + 2\\mathcal{T}\\succ 0$, it holds that $\\displaystyle\\lim_{k\\to\\infty}\\|{x}^{k+1} - {x}^k\\| = 0$.\n\\end{theorem}\nThe proof of Theorem \\ref{thm: MMconvergence} is provided in the supplementary material.\n\n\\begin{remark}\nHere, we discuss the roles of the linear operators ${\\mathcal G}$ and ${\\mathcal T}$.\nFirst, ${\\mathcal G}$ makes the subproblems \\eqref{eq:subproblem} in Algorithm \\ref{alg:dca-general} more amenable to efficient computations. Theorem \\ref{thm: MMconvergence} shows the algorithm is convergent if ${\\mathcal G} + 2{\\mathcal T} \\succeq 0$. This indicates that instead of adding the commonly used positive semidefinite or positive definite proximal terms, we allow ${\\mathcal T}$ to be indefinite for better practical performance. 
The computational benefit of using indefinite proximal terms has also been observed in \\citep{sun2010majorized,li2016majorized}.\nAs far as we know, Theorem \\ref{thm: MMconvergence} provides the first rigorous convergence proof of DC algorithms with indefinite proximal terms. Second, ${\\mathcal G}$ and ${\\mathcal T}$ also help to guarantee that the solutions of the subproblems \\eqref{eq:subproblem} exist.\nSince ${\\mathcal G} + 2{\\mathcal T} \\succeq 0$ and ${\\mathcal G} \\succeq 0$, we have that $2{\\mathcal G} + 2{\\mathcal T} \\succeq 0$, i.e., ${\\mathcal G} + {\\mathcal T} \\succeq 0$.\nHence, ${\\mathcal G}+2{\\mathcal T} \\succeq 0$ (${\\mathcal G}+2{\\mathcal T} \\succ 0$) implies that the subproblems \\eqref{eq:subproblem} \nare (strongly) convex. Third, the choices of ${\\mathcal G}$ and ${\\mathcal T}$ are very much problem dependent. The general principle is that ${\\mathcal G} + {\\mathcal T}$ should be as small as possible while ensuring that {$x^{k+1}$} is relatively easy to compute.\n\\end{remark}\n\n\n\n\n\n\n\n\n\n\n\n\\section{Simulation results}\n\\label{sec:num}\n\nIn this section, we conduct numerical experiments to validate our theoretical results. We first compare the proposed nuclear-norm regularized estimator and the rank-constrained estimator with previous methods in the literature using synthetic data. We then use the rank-constrained method to analyze a dataset of Manhattan taxi trips to reveal citywide traffic patterns. All of our computational results are obtained by running {\\sc Matlab} (version 9.5) on a Windows workstation (8-core, Intel Xeon W-2145 at 3.70GHz, 64 GB RAM).\n\n\\subsection{Experiments with simulated data}\nWe randomly draw the transition matrix $\\P$ as follows. Let ${\\mathbf{U}}_0, {\\mathbf{V}}_0 \\in \\Re^{p\\times r}$ be random matrices with i.i.d. 
standard normal entries and let\n\\[\n\\widetilde {\\mathbf{U}}_{[i,:]} = ({\\mathbf{U}}_0 \\circ {\\mathbf{U}}_0)_{[i,:]} \/ \\norm{({\\mathbf{U}}_0)_{[i,:]}}_2^2 \\mbox{ and } \\widetilde {\\mathbf{V}}_{[:,j]} = ({\\mathbf{V}}_0 \\circ {\\mathbf{V}}_0)_{[:,j]} \/ \\norm{({\\mathbf{V}}_0)_{[:,j]}}_2^2, \\quad i=1,\\ldots,p, \\; j=1,\\ldots, r, \n\\]\nwhere $\\circ$ is the Hadamard product and $\\widetilde {\\mathbf{U}}_{[i,:]}$ denotes the $i$-th row of $\\widetilde{\\mathbf{U}}$. The transition matrix $\\P$ is obtained via $\\P = \\widetilde{\\mathbf{U}} \\widetilde{\\mathbf{V}}^\\top$. Then we simulate a Markov chain trajectory of length $n = {\\rm round} (krp \\log(p))$ on $p$ states, $\\{X_0,\\ldots, X_n\\}$, with varying values of $k$.\n\nWe compare the performance of four procedures: the nuclear norm penalized MLE, the rank-constrained MLE, the empirical estimator, and the spectral estimator. Here, the empirical estimator is the empirical count distribution matrix defined as follows: \n$$\\tilde{\\P} = \\left(\\tilde{\\P}_{ij}\\right)_{1\\leq i, j\\leq p}, \\quad \\tilde{\\P}_{ij} = \\left\\{\\begin{array}{ll}\n\\frac{\\sum_{k =1}^n 1_{\\{X_{k-1} = i, X_k = j\\}}}{\\sum_{k =1}^n 1_{\\{X_{k-1} = i\\}}}, & \\quad \\text{when }\\sum_{k =1}^n 1_{\\{X_{k-1} = i\\}} \\geq 1;\\\\\n\\frac{1}{p}, & \\quad \\text{when } \\sum_{k =1}^n 1_{\\{X_{k-1} = i\\}} = 0.\n\\end{array}\\right.$$\nThe empirical estimator is in fact the unconstrained maximum likelihood estimator, which does not take the low-rank structure into account. The spectral estimator \\citep[Algorithm 1]{zhang2018optimal} is based on a truncated SVD. 
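For concreteness, the sampling scheme and the empirical estimator above can be sketched as follows. This is a minimal NumPy re-implementation for illustration only; the reported experiments use {\sc Matlab}.

```python
import numpy as np

def random_low_rank_transition(p, r, rng):
    # P = U~ V~^T with squared Gaussian entries; rows of U~ and columns of
    # V~ are normalized to sum to one, so P is row-stochastic with rank <= r.
    U0 = rng.standard_normal((p, r))
    V0 = rng.standard_normal((p, r))
    U = U0**2 / (U0**2).sum(axis=1, keepdims=True)
    V = V0**2 / (V0**2).sum(axis=0, keepdims=True)
    return U @ V.T

def simulate_chain(P, n, rng):
    # trajectory X_0, ..., X_n of the Markov chain with transition matrix P
    p = P.shape[0]
    states = np.empty(n + 1, dtype=int)
    states[0] = rng.integers(p)
    for t in range(n):
        states[t + 1] = rng.choice(p, p=P[states[t]])
    return states

def empirical_estimator(states, p):
    # row-normalized transition counts; uniform rows for unvisited states,
    # matching the definition of the empirical count distribution matrix
    C = np.zeros((p, p))
    for a, b in zip(states[:-1], states[1:]):
        C[a, b] += 1
    rows = C.sum(axis=1, keepdims=True)
    return np.where(rows > 0, C / np.maximum(rows, 1), 1.0 / p)
```

As noted above, `empirical_estimator` coincides with the unconstrained MLE that ignores the low-rank structure.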
In the implementation of the nuclear norm penalized estimator, the regularization parameter $\\lambda$ in \\eqref{prob:convex-nuclear} is set to $C\\sqrt{{p\\log p\/n}}$, where the constant $C$ is selected by cross-validation.\nFor each method, let $\\widehat{{\\mathbf{U}}}$ and $\\widehat{{\\mathbf{V}}}$ be the leading $r$ left and right singular vectors of the resulting estimator $\\widehat \\P$. We measure the statistical performance of $\\widehat{\\mathbf{P}}$ through three quantities: \n\\begin{align*}\n \\eta_F := \\norm{\\P - \\widehat \\P}_F^2, ~~ \\eta_{KL}:= D_{\\mathrm{KL}}(\\mathbf{P},\\widehat\\mathbf{P}), \\mbox{ and } \\eta_{UV} := \\max\\bigl\\{ \\norm{\\sin \\Theta(\\widehat{{\\mathbf{U}}},{\\mathbf{U}})}_F^2, \\norm{\\sin \\Theta(\\widehat{{\\mathbf{V}}},{\\mathbf{V}})}_F^2 \\bigr\\}.\n\\end{align*} \n\n\nWe consider the following setting with $p=1000$, $r = 10$, and $k\\in [10, 100]$.\nThe results are plotted in Figure \\ref{figure:mlevsrank}.\nOne can observe from these results that for the rank-constrained, nuclear norm penalized and spectral methods, $\\eta_F, \\eta_{KL}$ and $\\eta_{UV}$ converge to zero quickly as the number of state transitions $n$ increases, while the statistical error of the empirical estimator decreases at a much slower rate. Among the three estimators in the zoomed plots (second row of Figure \\ref{figure:mlevsrank}), the rank-constrained estimator slightly outperforms the nuclear norm penalized estimator and the spectral estimator.\nThis observation is consistent with our algorithmic design: the nuclear norm minimization procedure is actually the initial step of Algorithm \\ref{alg:dc}; thus the rank-constrained estimator can be seen as a refined version of the nuclear norm regularized estimator. 
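The three performance measures can be computed as in the following illustrative NumPy sketch. Here we take $D_{\mathrm{KL}}(\mathbf{P},\widehat{\mathbf{P}})$ to be the invariant-distribution-weighted sum of row-wise divergences (the convention used in our analysis), and we use the identity $\norm{\sin\Theta(\widehat{\mathbf{U}},\mathbf{U})}_F^2 = r - \norm{\widehat{\mathbf{U}}^\top\mathbf{U}}_F^2$ for orthonormal bases.

```python
import numpy as np

def stationary(P):
    # invariant distribution: left eigenvector of P for eigenvalue 1
    w, V = np.linalg.eig(P.T)
    pi = np.abs(np.real(V[:, np.argmax(np.real(w))]))
    return pi / pi.sum()

def sin_theta_sq(Uhat, U):
    # ||sin Theta(Uhat, U)||_F^2 = r - ||Uhat^T U||_F^2 for orthonormal bases
    return U.shape[1] - np.linalg.norm(Uhat.T @ U, 'fro') ** 2

def metrics(P, Phat, r):
    eta_F = np.linalg.norm(P - Phat, 'fro') ** 2
    pi = stationary(P)
    mask = P > 0  # rows may contain zeros; use the convention 0 * log 0 = 0
    ratio = np.where(mask, P, 1.0) / Phat
    eta_KL = np.sum(pi[:, None] * np.where(mask, P * np.log(ratio), 0.0))
    U, _, Vt = np.linalg.svd(P)
    Uh, _, Vht = np.linalg.svd(Phat)
    eta_UV = max(sin_theta_sq(Uh[:, :r], U[:, :r]),
                 sin_theta_sq(Vht[:r].T, Vt[:r].T))
    return eta_F, eta_KL, eta_UV
```

All three quantities vanish when $\widehat{\mathbf{P}} = \mathbf{P}$ and the leading singular subspaces coincide.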
\n\n\nWe also consider the case where the invariant distribution $\\pi$ is ``imbalanced'', i.e., we construct $\\P$ such that $\\min_{i=1,\\ldots,p} \\pi_i$ is quite small and some states appear significantly less frequently than others. Specifically, given $\\gamma_1,\\gamma_2 >0$, we generate a diagonal matrix ${\\mathbf{D}}$ with i.i.d. beta-distributed (${\\rm Beta}(\\gamma_1, \\gamma_2)$) diagonal elements.\nAfter obtaining $\\widetilde {\\mathbf{U}}$ and $\\widetilde {\\mathbf{V}}$ in the same way as in the beginning of this subsection, we compute $\\widetilde \\P = \\widetilde {\\mathbf{U}} \\widetilde {\\mathbf{V}}^\\top {\\mathbf{D}}$. The ground truth transition matrix $\\P$ is obtained after a normalization of $\\widetilde \\P$. Then, we simulate a Markov chain trajectory of length $n = {\\rm round}(krp\\log(p))$ on $p$ states.\nIn our experiment, we set $p = 1000$, $r = 10$, $k\\in [10,100]$, and $\\gamma_1 = \\gamma_2 = 0.5$. The detailed results are plotted in Figure \\ref{figure:betamlevsrank}. As can be seen from the figure, under the imbalanced setting, the rank-constrained, nuclear norm penalized and spectral methods perform much better than the empirical approach in terms of all three statistical performance measures ($\\eta_F$, $\\eta_{KL}$ and $\\eta_{UV}$). 
In addition, the rank-constrained estimator exhibits a clear advantage over the other two approaches.\n\n\n\\begin{figure*}\n\n\t\\begin{subfigure}{.33\\textwidth}\n\t\t\\label{figure:mle-etaF}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/normal\/\/mle-rankF-plot-n1000-r10-rolls10}\n\t\n\t\\end{subfigure}\n\t\\begin{subfigure}{.33\\textwidth}\n\t\t\\label{figure:mle-etaKL}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/normal\/\/mle-rankKL-plot-n1000-r10-rolls10}\n\t\n\t\\end{subfigure}\n\t\\begin{subfigure}{.33\\textwidth}\n\t\t\\label{figure:mle-etaUV}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/normal\/\/mle-rankUV-plot-n1000-r10-rolls10}\n\t\n\t\\end{subfigure}\n\n\t\\\\\n\t\n\t\\begin{subfigure}{.33\\textwidth}\n\t\t\\label{figure:svd-etaF}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/normal\/\/svd-rankF-plot-n1000-r10-rolls10}\n\t\n\t\\end{subfigure}\n\t\\begin{subfigure}{.33\\textwidth}\n\t\t\\label{figure:svd-etaKL}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/normal\/\/svd-rankKL-plot-n1000-r10-rolls10}\n\t\\end{subfigure}\n\t\\begin{subfigure}{.33\\textwidth}\n\t\t\\label{figure:svd-etaUV}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/normal\/\/svd-rankUV-plot-n1000-r10-rolls10}\n\t\n\t\\end{subfigure}\n\t\n\t\\caption{The first row compares the rank-constrained estimator, nuclear norm penalized estimator, spectral method, and empirical estimator in terms of $\\eta_F = \\norm{\\P - \\widehat \\P}_F^2, \\eta_{KL}= D_{\\mathrm{KL}}(\\mathbf{P},\\widehat\\mathbf{P})$, and $\\eta_{UV} = \\max\\bigl\\{ \\norm{\\sin \\Theta(\\widehat{{\\mathbf{U}}},{\\mathbf{U}})}_F^2, \\norm{\\sin \\Theta(\\widehat{{\\mathbf{V}}},{\\mathbf{V}})}_F^2 \\bigr\\}$. The second row provides the zoomed plots of the first row without the empirical estimator. Here, $n = {\\rm round}(k rp\\log p)$ with $p = 1,000$, $r = 10$ and $k$ ranging from $10$ to $100$. 
\n\t\n\t}\n\t\\label{figure:mlevsrank}\n\t\n\t\t\\begin{subfigure}{.33\\textwidth}\n\t\t\\label{figure:betamle-etaF}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/beta\/\/mle-rankF-beta-plot-n1000-r10-rolls10}\n\t\n\t\\end{subfigure}\n\t\\begin{subfigure}{.33\\textwidth}\n\t\t\\label{figure:betamle-etaKL}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/beta\/\/mle-rankKL-beta-plot-n1000-r10-rolls10}\n\t\n\t\\end{subfigure}\n\t\\begin{subfigure}{.33\\textwidth}\n\t\t\\label{figure:betamle-etaUV}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/beta\/\/mle-rankUV-beta-plot-n1000-r10-rolls10}\n\t\n\t\\end{subfigure}\n\n\t\\\\\n\t\n\t\\begin{subfigure}{.33\\textwidth}\n\t\t\\label{figure:betasvd-etaF}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/beta\/\/svd-rankF-beta-plot-n1000-r10-rolls10}\n\t\n\t\\end{subfigure}\n\t\\begin{subfigure}{.33\\textwidth}\n\t\t\\label{figure:betasvd-etaKL}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/beta\/\/svd-rankKL-beta-plot-n1000-r10-rolls10}\n\t\\end{subfigure}\n\t\\begin{subfigure}{.33\\textwidth}\n\t\t\\label{figure:betasvd-etaUV}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/beta\/\/svd-rankUV-beta-plot-n1000-r10-rolls10}\n\t\n\t\\end{subfigure}\n\t\n\t\\caption{The first row compares the rank-constrained estimator, nuclear norm penalized estimator, spectral method, and empirical estimator in terms of $\\eta_F = \\norm{\\P - \\widehat \\P}_F^2, \\eta_{KL}= D_{\\mathrm{KL}}(\\mathbf{P},\\widehat\\mathbf{P})$, and $\\eta_{UV} = \\max\\bigl\\{ \\norm{\\sin \\Theta(\\widehat{{\\mathbf{U}}},{\\mathbf{U}})}_F^2, \\norm{\\sin \\Theta(\\widehat{{\\mathbf{V}}},{\\mathbf{V}})}_F^2 \\bigr\\}$ with imbalanced\n\t\tinvariant distribution. The second row provides the zoomed plots of the first row without the empirical estimator. Here, $n = {\\rm round}(k rp\\log p)$ with $p = 1,000$, $r = 10$ and $k$ ranging from $10$ to $100$. 
\n\t}\n\t\\label{figure:betamlevsrank}\n\n\\end{figure*}\n\n\n\n\n\\subsection{Experiments with Manhattan Taxi data}\n\\label{sec:taxi}\n\nIn this experiment, we analyze a real dataset of $1.1\\times 10^7$ trip records of NYC Yellow cabs (Link: \\url{https:\/\/s3.amazonaws.com\/nyc-tlc\/trip+data\/yellow_tripdata_2016-01.csv}) in January 2016. Our goal is to partition the Manhattan island into several areas, in each of which the taxi customers share similar destination preferences. This can provide guidance for balancing the supply and demand of taxi service and optimizing the allocation of traffic resources.\n\nWe discretize the Manhattan island into a fine grid and model each cell of the grid as a state of the Markov chain; each taxi trip can thus be viewed as a state transition of this \nMarkov chain \\citep{yang2017dynamic, benson2017spacey, liu2012understanding}. For stability, we ignore the cells that have fewer than $1,000$ taxi visits. Given that the traffic dynamics typically vary over time, \nwe fit the Markov chain separately for three periods of the day, i.e., $06:00\\sim 11:59$ (morning), $12:00 \\sim 17:59$ (afternoon) and $18:00 \\sim 23:59$ (evening), for which the numbers of active states are $p = 803$, $999$ and $1,079$, respectively. \nWe apply the rank-constrained likelihood approach to obtain the estimator $\\widehat \\mathbf{P}^r$ of the transition matrix, and then apply $k$-means to the leading left singular vectors of $\\widehat{\\mathbf{P}}^r$ to classify all the states into several clusters.\nFigure \\ref{figure:lr4} presents the clustering result with $r = 4$ \nand $k = 4$ \nfor the three periods of a day.\n\nFirst of all, we notice that the locations within the same cluster are geographically close to each other. This is non-trivial: the clustering analysis does not use any GPS or location information. 
This implies that taxi customers in neighboring locations have similar destination preferences, which is consistent with common sense. Furthermore, to track the variation of the traffic dynamics over time, \nFigure \\ref{figure:pb4} visualizes the distribution of the destination choice corresponding to the center of the green cluster in the morning, afternoon and evening, respectively. We identify the varying popular destinations in different periods of the day and provide corresponding explanations in the following table: \n\\begin{table}[H]\n\t\\centering \n\t\\def\\arraystretch{0.7}\t\n\t\\begin{tabular}{c@{\\hskip 1cm}p{7cm}@{\\hskip 1cm}p{4cm}}\n\t\t\\hline\\hline\n\t\tTime & \\hfil Popular Destinations & \\hfil Explanation \\\\ \\hline\\hline\n\t\tMorning & New York--Presbyterian Medical Center,\\vspace{-.3cm} \\newline \\hfil 42--59 St. Park Ave, Penn Station & hospitals, workplaces,\\vspace{-.3cm} \\newline \\quad the train station \\\\ \\hline\n\t\tAfternoon & \\centering 66 St. Broadway & \\hfil lunch, afternoon break,\\vspace{-.3cm} \\newline short trips \\\\ \\hline\n\t\tEvening & \\centering Penn Station & returning home \\\\\\hline\\hline \n\t\\end{tabular}\n\\end{table}\n\nFinally, it might be tempting to model the taxi trips by an HMM, where regions of Manhattan correspond to hidden states. However, such a region is always part of the current observation (i.e., the location of the taxi): it is observable and is not a hidden state that has to be inferred from all past observations. As a result, although both the HMM and the low-rank Markov model could apply to taxi trips, the low-rank Markov model is simpler and more accurate. 
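The state-clustering pipeline described above, namely a rank-$r$ SVD of the estimated transition matrix followed by $k$-means on the leading left singular vectors, can be sketched as follows. This is an illustrative Python version with a plain Lloyd-style $k$-means; the experiments themselves use {\sc Matlab}.

```python
import numpy as np

def kmeans(X, k, iters=50):
    # farthest-point initialization followed by Lloyd's iterations
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def cluster_states(P_hat, r, k):
    # states with similar destination preferences have similar rows of P_hat,
    # hence similar rows in the leading-r left singular subspace
    U, _, _ = np.linalg.svd(P_hat)
    return kmeans(U[:, :r], k)
```

On a transition matrix with two groups of states sharing identical destination distributions, `cluster_states` recovers the two groups exactly.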
\n\n\n\n\n\\begin{figure*}[t!]\n\t\\centering\n\n\n\n\n\n\t\\begin{subfigure}{.32\\textwidth}\n\t\t\\label{figure:lrh1}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/taxi\/\/test-P-rank-r4hh2}\n\t\\end{subfigure} \n\t\\begin{subfigure}{.32\\textwidth}\n\t\t\\label{figure:lrh2}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/taxi\/\/test-P-rank-r4hh3}\n\t\\end{subfigure}\n\n\t\\begin{subfigure}{.32\\textwidth}\n\t\t\\label{figure:lrh3}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/taxi\/\/test-P-rank-r4hh4}\n\t\\end{subfigure}\n\t\\caption{\n\t\tMeta-state compression of the Manhattan traffic network via the rank-constrained approach with $ r = 4$: mornings (left), afternoons (middle) and evenings (right). Each color or symbol represents a meta-state. One\n\t\tcan see that the daytime state aggregation\n\t\tresults differ significantly from those of the evening.}\n\t\\label{figure:lr4}\n\\end{figure*}\n\n\n\n\\begin{figure*}[tb]\n\t\\centering\n\t\\begin{subfigure}{.32\\textwidth}\n\t\t\\label{figure:lr6h1}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/taxi\/\/P-cluster-rank-r4hh2}\n\t\\end{subfigure} \n\t\\begin{subfigure}{.32\\textwidth}\n\t\t\\label{figure:lr6h2}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/taxi\/\/P-cluster-rank-r4hh3}\n\t\\end{subfigure}\n\n\t\\begin{subfigure}{.32\\textwidth}\n\t\t\\label{figure:lr6h3}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/taxi\/\/P-cluster-rank-r4hh4}\n\t\\end{subfigure}\n\t\\caption{Visualization of the destination distributions corresponding to the pick-up locations in the green clusters in Figure \\ref{figure:lr4}: mornings (left), afternoons (middle) and evenings (right).}\n\t\\label{figure:pb4}\n\\end{figure*}\n\n\n\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nThis paper studies the recovery and state compression of low-rank Markov chains from empirical trajectories via a rank-constrained likelihood approach. 
We provide statistical upper bounds on the $\\ell_2$ risk and the Kullback--Leibler divergence between the proposed estimator and the true probability transition matrix. \nThen, a novel DC programming algorithm is developed to solve the associated rank-constrained optimization problem. The proposed algorithm non-trivially combines several recent optimization techniques, such as the penalty approach, the proximal DC algorithm, and the multi-block sGS-ADMM. We further study a new class of majorized indefinite-proximal DC algorithms for solving general non-convex non-smooth DC programming problems and provide a unified convergence analysis.\nExperiments on simulated data illustrate the merits of our approach.\n\n\n\\bibliographystyle{informs2014}\n\n\n\\section{Technical lemmas}\n\n\\begin{lemma}\n\t\t\\label{lem:kl_to_l2}\n\t\tGiven two discrete distributions $u, v \\in \\mathbb{R}^p$, if there exist $\\alpha, \\beta > 0$ such that $u_j \\in \\{0\\}\\cup [\\alpha \/ p, \\beta \/ p]$ and $v_j \\in [\\alpha \/ p, \\beta \/ p]$ for any $j \\in [p]$, then we have $$D_{\\mathrm{KL}}(u, v) \\ge \\{p\\alpha \/ (2\\beta ^ 2)\\} \\ltwonorm{u - v} ^ 2.$$ This implies that under Assumption \\ref{asp:1}, for any $\\mathbf{Q} \\in {\\mathcal C}$, $$\\fnorm{\\mathbf{P} - \\mathbf{Q}} ^ 2 \\le \\frac{2\\beta^2}{\\alpha\\pi_{\\min}p}D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q}).$$\n\\end{lemma}\n\\begin{proof}{Proof of Lemma \\ref{lem:kl_to_l2}}\n\t\t\\noindent By Taylor's theorem with the Lagrange form of the remainder, for any $j \\in [p]$ such that $u_j \\neq 0$, there exists $\\xi_j \\in [\\alpha \/ p, \\beta \/ p]$ such that \n\t\t\\[\n\t\t\t\\log(v_j) - \\log(u_j) = \\frac{v_j - u_j}{u_j} - \\frac{(v_j - u_j) ^ 2}{2\\xi_j ^2}. 
\n\t\t\\]\n\t\tTherefore, \n\t\t\\[\n\t\t\t\\begin{aligned}\n\t\t\t\tD_{\\mathrm{KL}}(u, v) & = \\sum_{j: u_j \\neq 0} u_j\\log(u_j \/ v_j) = \\sum_{j: u_j \\neq 0} (u_j - v_j) + \\sum_{j: u_j \\neq 0} \\frac{u_j(u_j - v_j) ^ 2}{2 \\xi_j ^ 2} \\\\\n\t\t\t\t& \\ge 1 - \\sum_{j: u_j \\neq 0} v_j + \\sum_{j: u_j \\neq 0} \\frac{p \\alpha(u_j - v_j) ^ 2}{2\\beta ^ 2} = \\sum_{j: u_j = 0} (v_j - u_j) + \\sum_{j: u_j \\neq 0} \\frac{p \\alpha(u_j - v_j) ^ 2}{2\\beta ^ 2} \\\\\n\t\t\t\t& \\ge \\sum_{j: u_j = 0} \\frac{p(v_j - u_j) ^ 2}{\\beta} + \\sum_{j: u_j \\neq 0} \\frac{p \\alpha(u_j - v_j) ^ 2}{2\\beta ^ 2} \\ge \\frac{p\\alpha}{2\\beta ^ 2} \\ltwonorm{u - v} ^ 2. \n\t\t\t\\end{aligned} \n\t\t\\]\n\t\tThen we have\n\t\t\\[\n\t\t\t\\fnorm{\\mathbf{P} - \\mathbf{Q}} ^ 2 = \\sum_{i \\in [p]} \\ltwonorm{P_{i\\cdot} - Q_{i\\cdot}} ^ 2 \\le \\sum_{i \\in [p]} \\frac{2\\beta ^ 2\\pi_{i}}{p\\alpha \\pi_{\\min}}D_{\\mathrm{KL}}(P_{i\\cdot}, Q_{i\\cdot}) = \\frac{2\\beta ^ 2}{p\\alpha \\pi_{\\min}}D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q}). 
\n\t\t\\]\n\\end{proof}\n\\section{Proof of Theorem \\ref{thm:nuclear}}\n\n\\begin{proof}\n\t\\noindent Given the definition of $\\widehat \\mathbf{P}$, \n\t\\begin{equation}\\label{ineq:tilde_D-P-hat-P}\n\t\t\\widetilde{D}_{\\mathrm{KL}}(\\mathbf{P},\\widehat{\\mathbf{P}}) = \\frac{1}{n}\\sum_{i=1}^n \\langle\\log(\\mathbf{P}) - \\log(\\widehat{\\mathbf{P}}), \\mathbf{X}_i\\rangle = \\ell_n(\\widehat{\\mathbf{P}}) - \\ell_n(\\mathbf{P}) \\leq \\lambda (\\nnorm{\\widehat \\mathbf{P}} - \\nnorm{\\mathbf{P}}) \\le \\lambda \\nnorm{\\mathbf{P} - \\widehat \\mathbf{P}}.\n\t\\end{equation}\n\tThen we have \n\t\\be\n\t\\label{ineq:basic}\n\t\\begin{aligned}\n\t\tD_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}) & = {\\mathcal L}(\\widehat\\mathbf{P}) - {\\mathcal L}(\\mathbf{P}) = {\\mathcal L}(\\widehat\\mathbf{P}) - \\ell_n(\\widehat\\mathbf{P}) + \\ell_n(\\widehat\\mathbf{P}) - \\ell_n(\\mathbf{P}) + \\ell_n(\\mathbf{P}) - {\\mathcal L}(\\mathbf{P}) \\\\\n\t\t& \\le {\\mathcal L}(\\widehat \\mathbf{P}) - \\ell_n(\\widehat \\mathbf{P}) + \\ell_n(\\mathbf{P}) - {\\mathcal L}(\\mathbf{P}) + \\lambda \\nnorm{\\mathbf{P} - \\widehat \\mathbf{P}} \\\\\n\t\t& = D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}) - \\widetilde D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}) + \\lambda \\nnorm{\\mathbf{P} - \\widehat \\mathbf{P}}. \n\t\\end{aligned}\n\t\\ee\n\tDefine ${\\mathcal E} := \\{\\lambda \\ge 2 \\opnorm{\\Pi_{{\\mathcal N}}(\\nabla \\ell_n(\\mathbf{P}))}\\}$. 
If ${\\mathcal E}$ holds, then by Lemma \\ref{lem:large_lambda} and then Lemma \\ref{lem:kl_to_l2}, we obtain that \n\t\\[\n\t\t\\begin{aligned}\n\t\t\tD_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}) & \\le D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}) - \\widetilde D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}) + 4(2r)^{1 \/ 2}\\lambda \\fnorm{\\mathbf{P} - \\widehat \\mathbf{P}} \\\\\n\t\t\t& \t\\le D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}) - \\widetilde D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}) + 8\\lambda \\beta\\biggl(\\frac{r D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat \\mathbf{P})}{p\\pi_{\\min} \\alpha}\\biggr)^{\\!1\/2}. \n\t\t\\end{aligned}\n\t\\]\n\tFor any $\\xi > 1$, an application of Lemma \\ref{lem:uniform_law} with $\\eta = \\xi \\pi_{\\min}\/ (rp\\pi_{\\max}\\log p)$ yields\n\t\\[\n\t\t\\begin{aligned}\n\t\t\t\\mathbb{P}\\biggl[ \\biggl\\{D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat \\mathbf{P}) \\le 16\\lambda \\beta\\biggl(\\frac{r D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat \\mathbf{P})}{p\\pi_{\\min} \\alpha}\\biggr)^{\\!1\/2} + \\frac{2C_1 r\\pi_{\\max}\\beta ^ 2 p\\log p}{\\pi_{\\min}\\alpha ^ 3n} + \\eta \\biggr\\} \\cap {\\mathcal E}\\biggr] \\ge 1 - C_2 e^{- \\xi} - \\mathbb{P}({\\mathcal E}^c), \n\t\t\\end{aligned}\n\t\\]\t\n\twhere $C_1$ and $C_2$ are exactly the same constants as in Lemma \\ref{lem:uniform_law}. Some algebra yields that \n\t\\[\n\t\t\\begin{aligned}\n\t\t\t\\mathbb{P}\\biggl[ \\biggl\\{D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat \\mathbf{P}) \\le \\frac{256\\lambda ^ 2 \\beta ^ 2 r}{p\\pi_{\\min} \\alpha} + \\frac{2C_1 r\\pi_{\\max}\\beta ^ 2 p\\log p}{\\pi_{\\min}\\alpha ^ 3n} + \\eta \\biggr\\} \\cap {\\mathcal E}\\biggr] \\ge 1 - C_2 e^{- \\xi} - \\mathbb{P}({\\mathcal E}^c). 
\n\t\t\\end{aligned}\n\t\\]\t\t\n\tBy Lemma \\ref{lem:gradient}, there exists a universal constant $C_3 > 0$ such that if we choose \n\t\\[\n\t\t\\lambda = C_3\\biggl\\{\\biggl(\\frac{\\xi p ^ 2\\pi_{\\max} \\log p}{n\\alpha}\\biggr)^{1 \/ 2} + \\frac{\\xi p\\log p}{n \\alpha}\\biggr\\}, \n\t\\]\n\tthen for any $\\xi > 1$, whenever $n\\pi_{\\max}(1 - \\rho_+) \\ge \\max(20, \\xi ^ 2)\\log p$, we have that \n\t\\[\n\t\t\\begin{aligned}\n\t\t\t\\mathbb{P} \\biggl( D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}) \\gtrsim \\frac{\\xi r\\pi_{\\max}\\beta ^ 2 p \\log p}{\\pi_{\\min}\\alpha ^ 3n} + \\frac{\\xi\\pi_{\\min}}{rp \\pi_{\\max}\\log p}\\biggr) \\lesssim e^{- \\xi} + p^{-(\\xi - 1)} + p^{-10},\n\t\t\\end{aligned}\n\t\\]\n\tas desired. The Frobenius-norm error bound follows immediately by applying Lemma \\ref{lem:kl_to_l2}. \n\\end{proof}\n\n\n\\section{Proof of Theorem \\ref{thm:rank}}\n\n\\begin{proof}\t\n\t\\noindent Given the definition of $\\widehat \\mathbf{P}^r$, \n\t\\begin{equation}\\label{ineq:tilde_D-P-hat-P-r}\n\t\\widetilde{D}_{\\mathrm{KL}}(\\mathbf{P},\\widehat{\\mathbf{P}}^r) = \\frac{1}{n}\\sum_{i=1}^n \\langle\\log(\\mathbf{P}) - \\log(\\widehat{\\mathbf{P}}^r), \\mathbf{X}_i\\rangle = \\ell_n(\\widehat{\\mathbf{P}}^r) - \\ell_n(\\mathbf{P}) \\leq 0.\n\t\\end{equation}\n\tThen we have\n\t\\be\n\t\\label{ineq:basic-r}\n\t\\begin{aligned}\n\t\tD_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}^r) & = {\\mathcal L}(\\widehat\\mathbf{P}^r) - {\\mathcal L}(\\mathbf{P}) = {\\mathcal L}(\\widehat\\mathbf{P}^r) - \\ell_n(\\widehat\\mathbf{P}^r) + \\ell_n(\\widehat\\mathbf{P}^r) - \\ell_n(\\mathbf{P}) + \\ell_n(\\mathbf{P}) - {\\mathcal L}(\\mathbf{P}) \\\\\n\t\t& \\le {\\mathcal L}(\\widehat \\mathbf{P}^r) - \\ell_n(\\widehat \\mathbf{P}^r) + \\ell_n(\\mathbf{P}) - {\\mathcal L}(\\mathbf{P}) = D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}^r) - \\widetilde D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}^r). 
\n\t\\end{aligned}\n\t\\ee\n\tFor any $\\xi > 1$, an application of Lemma \\ref{lem:uniform_law} with $\\eta = \\pi_{\\min}\\xi \/ (rp\\pi_{\\max}\\log p)$ yields\n\t\\[\n\t\t\\mathbb{P}\\biggl\\{D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat \\mathbf{P}^r) \\ge \\max\\biggl(\\frac{2C_1 r\\pi_{\\max} \\beta^2 p \\log p }{\\pi_{\\min}\\alpha ^ 3n}, \\frac{\\xi\\alpha ^ 2}{rp^ 2 \\pi_{\\max} \\log p}\\biggr)\\biggr\\} \\le C_2e^{- \\xi}, \n\t\\]\n\tas desired. The Frobenius-norm error bound immediately follows by Lemma \\ref{lem:kl_to_l2}. \n\t\n\\end{proof}\n\n\n\\section{Proof of Theorem \\ref{thm:lower_bound}}\n\n\\begin{proof}\n\t~\\hspace{-.5cm} To simplify the notation, assume without loss of generality that $p$ is a multiple of $4(r-1)$. For any $1\\leq k \\leq m$, where the integer $m$ will be specified later, consider\n\t\\begin{equation}\\label{eq:lower-bound-P^(k)2}\n\t\\begin{split}\n\t\\mathbf{P}^{(k)} = & \\begin{bmatrix}\n\t\\frac{2-\\alpha}{p} \\bone_{p\\times (p\/2)} ~~~~ \\frac{\\alpha}{p} \\bone_{p\\times (p\/2)} \n\t\\end{bmatrix} \\\\\n\t& + \\frac{\\eta(2-\\alpha)}{2p} \\begin{bmatrix}\n\t\\bzero_{(p\/2)\\times (p\/4)} & \\bzero_{(p\/2)\\times (p\/4)} & \\bzero_{(p\/2)\\times (p\/2)}\\\\\n\t~~ \\bR^{(k)} ~~ \\cdots ~~~~~ \\bR^{(k)} & -\\bR^{(k)} ~~ \\cdots ~~ -\\bR^{(k)} & \\bzero_{(p\/4)\\times (p\/2)} \\\\\n\t\\underbrace{-\\bR^{(k)} ~~ \\cdots ~~ - \\bR^{(k)}}_{l_0} & \\underbrace{~~\\bR^{(k)} ~~ \\cdots ~~~~~~ \\bR^{(k)}}_{l_0} & \\bzero_{(p\/4)\\times (p\/2)} \\\\\n\t\\end{bmatrix},\n\t\\end{split}\n\t\\end{equation}\n\twhere $l_0 = \\frac{p}{4(r - 1)}$, $\\bR^{(k)} \\in \\{-1, 1\\}^{(p\/4)\\times (r-1)}$, and $\\eta$ is some positive value to be determined later. 
Let \n\t\\begin{equation}\n\t\\mu := \\biggl(\\frac{2-\\alpha}{p} 1_{p\/2}^\\top ~~ \\frac{\\alpha}{p} 1_{p\/2}^\\top\\biggr)^\\top.\n\t\\end{equation}\n\tFirst of all, regardless of the value of $\\bR^{(k)}$, one can see that for any $k \\in [m]$, \n\t\\begin{enumerate}\n\t\t\\item ${\\rm rank}(\\mathbf{P}^{(k)})\\leq r$; \n\t\t\\item $\\mu^\\top \\mathbf{P}^{(k)} = \\mu^\\top$, and hence $\\mu$ is the invariant distribution of $\\mathbf{P}^{(k)}$; \n\t\t\\item $\\mathbf{P}^{(k)} \\in \\Theta$. \n\t\\end{enumerate} \n\tLet $\\{\\bR^{(k)}\\}_{k = 1}^m$ be i.i.d. random matrices with independent Rademacher entries, i.e., for any $k \\in [m]$, $\\{R^{(k)}_{ij}\\}_{i \\in [p\/4], j\\in [r-1]}$ are independent Rademacher variables, and $\\{\\bR^{(k)}\\}_{k \\in [m]}$ are independent. For any $k \\neq l$, one can see that $\\bigl\\{\\bigl|\\bR^{(k)}_{ij} - \\bR^{(l)}_{ij}\\bigr|\\bigr\\}$ are i.i.d. uniformly distributed on $\\{0, 2\\}$, and that \n\t$$\\mathbb{E}\\bigl|\\bR^{(k)}_{ij} - \\bR^{(l)}_{ij}\\bigr| = 1, \\quad {\\rm Var}\\bigl(\\bigl|\\bR^{(k)}_{ij} - \\bR^{(l)}_{ij}\\bigr|\\bigr)= 1,\\quad \\bigl|\\bigl|\\bR^{(k)}_{ij} - \\bR^{(l)}_{ij}\\bigr| - 1\\bigr| = 1.$$\n\tBy Bernstein's inequality \\citep[][Theorem~2.10]{BLM13}, for any $t>0$, \n\t\\begin{equation*}\n\t{\\mathbb{P}}\\biggl\\{\\biggl|\\bigl\\|\\bR^{(k)} - \\bR^{(l)}\\bigr\\|_1 - \\frac{p(r-1)}{4}\\biggr| \\ge \\biggl( \\frac{p(r - 1)t}{2}\\biggr)^{\\!1\/ 2} + t \\biggr\\} \\leq 2e^{-t}. \n\t\\end{equation*}\n\tLet $t = p(r - 1)\/ 64$ and $m = \\lfloor \\exp\\{p(r-1)\/128\\} \/ 2^{1 \/ 2} \\rfloor$. Since $p(r - 1) \\ge 192\\log 2$, we have that $m \\ge 2$. Then a union bound yields that\n\t\\begin{equation*}\n\t\\begin{split}\n\t{\\mathbb{P}}\\biggl(\\forall 1\\leq k < l \\leq m, ~ \\frac{p(r - 1)}{8} \\le \\bigl\\|\\bR^{(k)} - \\bR^{(l)}\\bigr\\|_1 \\le \\frac{3p(r - 1)}{8}\\biggr) \\ge 1 - 2m^2\\exp\\biggl(\\frac{-p(r - 1)}{64}\\biggr) > 0. 
\n\t\\end{split}\n\t\\end{equation*}\n\tHence, there exist $\\bR^{(1)},\\ldots, \\bR^{(m)} \\in \\{-1, 1\\}^{(p \/ 4)\\times (r-1)}$ such that\n\t\\begin{equation}\\label{ineq:P^k-P^l}\n\t\\forall 1\\leq k < l \\leq m,~\\frac{p(r - 1)}{8} \\le \\bigl\\|\\bR^{(k)} - \\bR^{(l)}\\bigr\\|_1 \\le \\frac{3p(r - 1)}{8}, \n\t\\end{equation}\n\twhich, given that $\\fnorm{\\bR^{(k)} - \\bR^{(l)}} ^ 2 = 2\\|\\bR^{(k)} - \\bR^{(l)}\\|_1$, further implies that \n\t\\begin{equation}\n\t\\forall 1\\leq k < l \\leq m,~\\frac{p(r - 1)}{4} \\le \\fnorm{\\bR^{(k)} - \\bR^{(l)}}^2 \\le \\frac{3p(r - 1)}{4}. \n\t\\end{equation}\n\n\n\n\n\tNow we have that \n\t\\begin{equation*}\n\t\\begin{split}\n\t\\left\\|\\mathbf{P}^{(k)} - \\mathbf{P}^{(l)}\\right\\|_1 = \\frac{2l_0\\eta(2-\\alpha)}{p} \\|\\bR^{(k)} - \\bR^{(l)}\\|_1 \\geq \\frac{\\eta p (2-\\alpha)}{16}, \\left\\|\\mathbf{P}^{(k)} - \\mathbf{P}^{(l)}\\right\\|_1 \\le \\frac{3\\eta p (2 - \\alpha)}{16}, \\\\\n\t\\fnorm{\\mathbf{P}^{(k)} - \\mathbf{P}^{(l)}} ^ 2 = \\frac{l_0\\eta ^ 2(2-\\alpha) ^ 2}{p ^ 2} \\fnorm{\\bR^{(k)} - \\bR^{(l)}} ^ 2 \\geq \\frac{\\eta ^ 2(2-\\alpha) ^ 2}{16}, \\fnorm{\\mathbf{P}^{(k)} - \\mathbf{P}^{(l)}} ^ 2 \\le \\frac{3\\eta ^ 2(2-\\alpha) ^ 2}{16}. 
\n\t\\end{split}\n\t\\end{equation*}\n\tBesides, \n\t\\begin{equation*}\n\t\\begin{split}\n\tD_{\\mathrm{KL}} ({\\mathcal X}^{(k)}, {\\mathcal X}^{(l)}) & = n\\sum_{i\\in [p]} \\pi_i D_{\\mathrm{KL}}\\bigl(\\mathbf{P}_{[i,:]}^{(k)}, \\mathbf{P}^{(l)}_{[i,:]}\\bigr)\n\t= n\\sum_{i=(p\/2)+1}^{p} \\sum_{j=1}^{p\/2} \\frac{\\alpha}{p} P_{ij}^{(k)}\\log \\bigl(P_{ij}^{(k)}\/P_{ij}^{(l)}\\bigr) \\\\\n\t& = \\frac{2n\\alpha}{p}\\sum_{i=1}^{p\/4} \\frac{2-\\alpha}{2} D_{\\mathrm{KL}}\\bigl(u^{(k)}_i, u^{(l)}_i\\bigr),\n\t\\end{split}\n\t\\end{equation*}\n\twhere $u^{(k)}_i = \\frac{2}{p}1_{p\/2} + \\frac{\\eta}{p}\\left[\\bR^{(k)}_{[i,:]} ~ \\cdots ~ \\bR^{(k)}_{[i,:]} ~ -\\bR^{(k)}_{[i,:]} ~ \\cdots ~ -\\bR^{(k)}_{[i,:]}\\right]$ corresponds to a $(p\/2)$-dimensional distribution. By \\citet[][Lemma~4]{ZWa19}, we have that \n\t\\begin{equation*}\n\tD_{\\mathrm{KL}}\\bigl(u^{(k)}_i, u^{(l)}_i\\bigr) \\leq \\frac{3l_0\\eta ^ 2}{p}\\bigl\\|\\bR_{[i,:]}^{(k)} - \\bR_{[i,:]}^{(l)}\\bigr\\|_2^2 = \\frac{6 l_0\\eta^2}{p}\\bigl\\|\\bR_{[i,:]}^{(k)} - \\bR_{[i,:]}^{(l)}\\bigr\\|_1.\n\t\\end{equation*}\n\tTherefore, \n\t\\begin{equation*}\n\t\\begin{split}\n\tD_{\\mathrm{KL}}({\\mathcal X}^{(k)}, {\\mathcal X}^{(l)}) & \\le \\frac{6n\\alpha(2-\\alpha)l_0 \\eta ^ 2}{p ^ 2}\\sum_{i=1}^{p\/4} \\bigl\\|\\bR_{[i,:]}^{(k)} - \\bR_{[i,:]}^{(l)}\\bigr\\|_1 \\leq \\frac{12n\\alpha l_0\\eta^2}{p^2}\\|\\bR^{(k)} - \\bR^{(l)}\\|_1 \\\\\n\t& \\leq \\frac{12n\\alpha l_0 \\eta^2}{p^2} \\frac{3p(r-1)}{8} = \\frac{9n\\eta^2\\alpha}{8}.\n\t\\end{split}\n\t\\end{equation*}\n\n\tBy Fano's inequality \\citep[][Lemma~3]{yu1997assouad}, we have that\n\t\\begin{equation*}\n\t\\begin{split}\n\n\t\\inf_{\\widehat{\\mathbf{P}}} \\sup_{\\mathbf{P}\\in \\Theta} \\fnorm{\\widehat{\\mathbf{P}} - \\mathbf{P}} ^ 2 \\geq & \\inf_{\\widehat{\\mathbf{P}}} \\sup_{\\mathbf{P}\\in \\{\\mathbf{P}^{(1)}, \\ldots, \\mathbf{P}^{(m)}\\}} \\fnorm{\\widehat{\\mathbf{P}} - \\mathbf{P}} ^ 2 \\geq \\frac{\\eta ^ 2(2-\\alpha) ^ 2}{16} 
\\left(1 - \\frac{9n\\eta^2\\alpha\/8 + \\log 2}{\\log m}\\right). \n\n\n\t\\end{split}\n\t\\end{equation*}\n\tThere exist universal constants $c_1, c_2 > 0$ such that when $p(r - 1) \\ge 192 \\log 2$, choosing $\\eta = c_1\\{p(r - 1) \/ (n \\alpha)\\}^{\\!1 \/ 2}$ yields that \n\t\\[\n\t\\inf_{\\widehat{\\mathbf{P}}} \\sup_{\\mathbf{P}\\in \\Theta} \\fnorm{\\widehat{\\mathbf{P}} - \\mathbf{P}} ^ 2 \\ge c_2\\frac{p(r - 1)}{n\\alpha}. \n\t\\]\n\t\n\\end{proof}\n\n\n\\section{Proof of Theorem \\ref{thm:uv}}\n\\begin{proof}\n\t~Let $\\widehat{\\bU}_{\\perp}, \\widehat{\\bV}_{\\perp} \\in \\Re^{p\\times (p-r)}$ contain orthonormal bases of the orthogonal complements of the column spaces of $\\widehat{\\bU}$ and $\\widehat{\\bV}$, respectively. Since $\\bU, \\bV, \\widehat{\\bU}$, and $\\widehat{\\bV}$ are the leading left and right singular vectors of $\\mathbf{P}$ and $\\widehat{\\mathbf{P}}$, we have\n\t\\begin{equation*}\n\t\\begin{split}\n\t\t\\|\\widehat{\\mathbf{P}} - \\mathbf{P}\\|_F \\geq & \\|\\widehat{\\bU}_{\\perp}^\\top(\\widehat{\\mathbf{P}} - \\bU\\bU^\\top \\mathbf{P})\\|_F = \\|\\widehat{\\bU}_{\\perp}^\\top \\bU\\bU^\\top\\mathbf{P}\\|_F \\geq \\|\\widehat{\\bU}^\\top_{\\perp} \\bU\\|_F \\sigma_r(\\bU^\\top \\mathbf{P}) = \\|\\sin\\Theta(\\widehat{\\bU}, \\bU)\\|_F \\sigma_r(\\mathbf{P}).\n\t\\end{split}\n\t\\end{equation*}\n\tA similar argument applies to $\\|\\sin\\Theta(\\widehat{\\bV}, \\bV)\\|_F$. 
Thus,\n\t$$\\max\\bigl(\\|\\sin\\Theta(\\widehat{\\bU}, \\bU)\\|_F, \\|\\sin\\Theta(\\widehat{\\bV}, \\bV)\\|_F\\bigr)\\le \\min\\biggl(\\frac{\\|\\widehat{\\mathbf{P}}-\\mathbf{P}\\|_F}{\\sigma_r(\\mathbf{P})}, r^{1 \/ 2}\\biggr).$$\n\tThe rest of the proof immediately follows from Theorem \\ref{thm:nuclear}.\n\\end{proof}\n\n\n\n\\section{Proof of Lemma \\ref{lem:large_lambda}}\n\n\\begin{proof}\n\t~By the inequality (52) in Lemma 3 in the Appendix of \\citet{negahban2012restricted}, we have for any $\\boldsymbol{\\Delta} \\in \\Re^{p \\times p}$, \n\t\\[\n\t\t\\nnorm{\\mathbf{P} + \\boldsymbol{\\Delta}} - \\nnorm{\\mathbf{P}} \\ge \\nnorm{\\boldsymbol{\\Delta}_{\\overline{\\mathcal M}^\\perp}} - \\nnorm{\\boldsymbol{\\Delta}_{\\overline {\\mathcal M}}} - 2\\nnorm{\\mathbf{P}_{{\\mathcal M}^\\perp}}. \n\t\\]\n\tBesides, \n\t\\[\n\t\t\\begin{aligned}\n\t\t\\ell_n(\\mathbf{P} + \\boldsymbol{\\Delta}) - \\ell_n(\\mathbf{P}) & \\ge \\inn{\\nabla \\ell_n(\\mathbf{P}), \\boldsymbol{\\Delta}} = \\inn{\\Pi_{{\\mathcal N}}(\\nabla\\ell_n(\\mathbf{P})), \\boldsymbol{\\Delta}} \\ge - |\\inn{\\Pi_{{\\mathcal N}}(\\nabla \\ell_n (\\mathbf{P})), \\boldsymbol{\\Delta}}| \\\\\n\t\t& \\ge - \\opnorm{\\Pi_{{\\mathcal N}}(\\nabla \\ell_n(\\mathbf{P}))} \\nnorm{\\boldsymbol{\\Delta} } \\ge - \\frac{\\lambda}{2} \\bigl(\\nnorm{\\boldsymbol{\\Delta}_{\\overline {\\mathcal M}}} + \\nnorm{\\boldsymbol{\\Delta}_{\\overline {\\mathcal M}^\\perp}} \\bigr). \n\t\t\\end{aligned}\n\t\\]\n\tBy the optimality of $\\widehat\\mathbf{P}$, $\\ell_n(\\widehat\\mathbf{P}) + \\lambda \\nnorm{\\widehat\\mathbf{P}} \\le \\ell_n(\\mathbf{P}) + \\lambda \\nnorm{\\mathbf{P}}$. 
Therefore, writing $\\widehat\\boldsymbol{\\Delta} := \\widehat\\mathbf{P} - \\mathbf{P}$ and taking $\\boldsymbol{\\Delta} = \\widehat\\boldsymbol{\\Delta}$ in the two displays above, \n\t\\[\n\t\t\\lambda \\bigl(\\nnorm{\\widehat\\boldsymbol{\\Delta}_{\\overline {\\mathcal M}}} + 2\\nnorm{\\mathbf{P}_{{\\mathcal M}^\\perp}} -\\nnorm{\\widehat\\boldsymbol{\\Delta}_{\\overline{\\mathcal M}^\\perp}} \\bigr) \\ge \\lambda (\\nnorm{\\mathbf{P}} - \\nnorm{\\widehat\\mathbf{P}}) \\ge - \\frac{\\lambda}{2}\\bigl( \\nnorm{\\widehat\\boldsymbol{\\Delta}_{\\overline{\\mathcal M}}} + \\nnorm{ \\widehat\\boldsymbol{\\Delta}_{\\overline{\\mathcal M}^\\perp}}\\bigr), \n\t\\]\n\tfrom which we deduce that \n\t\\[\n\t\\nnorm{ \\widehat\\boldsymbol{\\Delta}_{\\overline{\\mathcal M}^\\perp}} \\le 3\\nnorm{\\widehat\\boldsymbol{\\Delta}_{\\overline{\\mathcal M}}} + 4\\nnorm{\\mathbf{P}_{{\\mathcal M}^\\perp}}. \n\t\\]\n\\end{proof}\n\n\\section{Proof of Lemma \\ref{lem:gradient}}\n\n\\begin{proof}\n\t\t~Some algebra yields that \n\t\\be\n\t\\label{eq:log_likelihood}\n\t\\nabla \\ell_n(\\mathbf{Q} ) = \\frac{1}{n} \\sum\\limits_{i=1}^n -\\frac{\\mathbf{X}_i}{\\inn{\\mathbf{Q}, \\mathbf{X}_i}}. \n\t\\ee\n\tFor ease of notation, write $\\bZ_i := - \\mathbf{X}_i \/ \\inn{\\mathbf{P}, \\mathbf{X}_i}$. Note that $\\bZ_i$ is well-defined almost surely. Besides, \n\t\\[\n\t\\mathbb{E}(\\bZ_i | \\bZ_{i - 1}) = \\mathbb{E}(\\bZ_i | \\mathbf{X}_{i-1}) = \\sum\\limits_{j=1}^p -\\frac{e_{X_{i - 1}}e_{j}^\\top}{P_{X_{i - 1}, j}} P_{X_{i - 1}, j} = -e_{X_{i - 1}} 1^\\top. \n\t\\]\n\tThus $\\opnorm{\\bZ_i - \\mathbb{E}(\\bZ_i | \\bZ_{i - 1})} \\le p \/ \\alpha + \\sqrt{p} =: R < \\infty$. Define $\\mathbf{S}_k := \\sum_{i=1}^k \\bZ_i - \\mathbb{E}(\\bZ_i| \\bZ_{i - 1})$; then $\\{\\mathbf{S}_k\\}_{k = 1}^n$ is a matrix martingale. 
In addition, \n\t\\[\n\t\\begin{aligned}\n\t\\mathbb{E} & \\bigl\\{(\\bZ_i - \\mathbb{E}(\\bZ_i | \\bZ_{i - 1}))^\\top (\\bZ_i - \\mathbb{E} (\\bZ_i | \\bZ_{i - 1})) | \\{\\mathbf{S}_k\\}_{k = 1}^{i -1}\\bigr\\} = \\mathbb{E} \\bigl\\{(\\bZ_i - \\mathbb{E}(\\bZ_i | \\bZ_{i - 1}))^\\top (\\bZ_i - \\mathbb{E} (\\bZ_i | \\bZ_{i - 1})) | \\bZ_{i - 1}\\bigr\\} \\\\\n\t& = \\mathbb{E} (\\bZ^\\top_i \\bZ_i | \\bZ_{i - 1}) - \\mathbb{E}(\\bZ_i | \\bZ_{i - 1})^\\top \\mathbb{E} (\\bZ_i | \\bZ_{i - 1}) = \\biggl(\\sum_{j = 1}^p \\frac{e_je_j^\\top}{P_{X_{i - 1}, j}}\\biggr) - 11^\\top =: \\bW^{(1)}_i, \n\t\\end{aligned}\n\t\\]\n\tand similarly, \n\t\\[\n\t\\begin{aligned}\n\t\\mathbb{E} \\bigl\\{(\\bZ_i - \\mathbb{E}(\\bZ_i | \\bZ_{i - 1}))(\\bZ_i - \\mathbb{E} (\\bZ_i | \\bZ_{i - 1}))^\\top | \\{\\mathbf{S}_k\\}_{k = 1}^{i -1} \\bigr\\} = \\biggl(\\sum\\limits_{j = 1}^ p \\frac{ e_{X_{i - 1}}e_{X_{i - 1}}^\\top}{P_{X_{i - 1}, j}} \\biggr) - p e_{X_{i - 1}}e_{X_{i - 1}}^\\top =: \\bW^{(2)}_i. \t\n\t\\end{aligned}\n\t\\]\n\tWrite $\\opnorm{\\sum_{i=1}^n \\bW^{(1)}_i}$ as $W_n^{(1)}$, $\\opnorm{\\sum_{i = 1}^n \\bW^{(2)}_i}$ as $W_n^{(2)}$ and $\\max(W_n^{(1)}, W_n^{(2)})$ as $W_n$. By the matrix Freedman inequality \\citep[Corollary~1.3]{tropp2011freedman}, for any $t \\ge 0$ and $\\sigma^2 > 0$, \n\t\\be\n\t\\label{eq:matrix_freedman}\n\t\\mathbb{P} ( \\opnorm{\\mathbf{S}_n} \\ge t, W_n\\le \\sigma^2 ) \\le 2p \\exp\\biggl(-\\frac{t^2 \/2 }{\\sigma^2 + Rt \/ 3}\\biggr). \n\t\\ee\n\tNow we need to choose an appropriate $\\sigma^2$ so that $W_n \\le \\sigma^2$ holds with high probability. Note that $W^{(1)}_n \\le np(\\alpha^{-1} + 1)$ and $W_n^{(2)} \\le (p^2 \\alpha^{-1} - p) \\sup_{j\\in [p]} \\sum_{i = 1}^n 1_{\\{X_i = s_j\\}}$. In the following we derive a bound for $\\sup_{j\\in [p]} \\sum_{i = 1}^n 1_{\\{X_i = s_j\\}}$. 
For any $j \\in [p]$, by \\citet[Theorem~1.2]{JFS18}, which is a variant of Bernstein's inequality for Markov chains, we have that \n\t\\be\n\t\\label{eq:mc_bernstein}\n\t\\mathbb{P}\\biggl\\{\\frac{1}{n}\\sum_{i=1}^n (1_{\\{X_i = s_j\\}} - \\pi_j) > \\epsilon \\biggr\\} \\le \\exp\\biggl( -\\frac{n\\epsilon ^ 2}{2(A_1\\beta \/ p + A_2\\epsilon)}\\biggr), \n\t\\ee\n\twhere \n\t\\[\n\tA_1 = \\frac{1 + \\max(\\rho_+, 0)}{1 - \\max(\\rho_+, 0)}~~~\\textnormal{and}~~~A_2 = \\frac{1}{3}1_{\\{\\rho_{+} \\le 0\\}} + \\frac{5}{1 - \\rho_+} 1_{\\{\\rho_+ > 0\\}}. \n\t\\]\n\tSome algebra yields that for any $\\xi > 0$, \n\t\\[\n\t\\mathbb{P}\\biggl\\{\\frac{1}{n}\\sum_{i=1}^n 1_{\\{X_i = s_j\\}} - \\pi_j > \\biggl(\\frac{4A_1\\beta\\xi}{np}\\biggr)^{1 \/ 2} + \\frac{4A_2\\xi}{n} \\biggr\\} \\le \\exp( - \\xi). \n\t\\]\n\tA union bound over $j \\in [p]$ yields that \n\t\\[\n\t\\mathbb{P}\\biggl\\{ \\sup_{j \\in [p]} \\frac{1}{n}\\sum_{i=1}^n (1_{\\{X_i = s_j\\}}- \\pi_j) > \\biggl(\\frac{4A_1\\beta\\xi \\log p}{np}\\biggr)^{1 \/ 2} + \\frac{4A_2\\xi \\log p}{n}\\biggr\\} \\le p^{-(\\xi - 1)}, \n\t\\]\n\twhich implies that \n\t\\[\n\t\\mathbb{P}\\biggl\\{\\sup_{j \\in [p]} \\frac{1}{n}\\sum\\limits_{i=1}^n 1_{\\{X_i = s_j\\}} > \\pi_{\\max} + \\biggl(\\frac{4A_1\\beta\\xi \\log p}{np}\\biggr)^{1 \/ 2} + \\frac{4A_2\\xi \\log p}{n}\\biggr\\} \\le p^{-(\\xi - 1)}. \n\t\\]\n\tTherefore, whenever $n \\pi_{\\max} (1 - \\rho_+) \\ge 2 \\log p$, we have that \n\t\\[\n\t\t\\mathbb{P}\\biggl(\\sup_{j \\in [p]} \\frac{1}{n} \\sum_{i = 1}^n 1_{\\{X_i = s_j\\}} \\gtrsim \\pi_{\\max}\\biggr) \\le \\exp\\biggl(- \\frac{n\\pi_{\\max} (1 - \\rho_+)}{2}\\biggr). \n\t\\]\n\tCombining this with the bounds of $W_n^{(1)}$ and $W_n^{(2)}$, we have that \n\t\\[\n\t\t\\mathbb{P}\\biggl(W_n \\ge \\frac{C_1np ^ 2\\pi_{\\max}}{\\alpha}\\biggr) \\le \\exp\\biggl(- \\frac{n\\pi_{\\max} (1 - \\rho_+)}{2}\\biggr), \n\t\\]\n\twhere $C_1$ is a universal constant. 
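The visit-frequency concentration just derived can be illustrated numerically. The following is an editorial sketch, not part of the proof: the 4-state chain, trajectory length, seed, and tolerance are illustrative choices. The transition matrix below is doubly stochastic, so its stationary distribution is uniform and every empirical visit frequency should concentrate near $\pi_{\max} = 1/4$.

```python
import random

# Toy illustration of the visit-frequency concentration above: for a
# doubly stochastic 4-state chain the stationary law is uniform, so every
# empirical visit frequency should be close to pi_max = 1/4.
# All numbers (chain, length, seed, tolerance) are illustrative choices.
P = [[0.4, 0.3, 0.2, 0.1],
     [0.1, 0.4, 0.3, 0.2],
     [0.2, 0.1, 0.4, 0.3],
     [0.3, 0.2, 0.1, 0.4]]  # circulant, doubly stochastic => pi = (1/4, ..., 1/4)

def visit_frequencies(P, n, seed=0):
    """Simulate n steps of the chain and return empirical visit frequencies."""
    rng = random.Random(seed)
    state, counts = 0, [0] * len(P)
    for _ in range(n):
        state = rng.choices(range(len(P)), weights=P[state])[0]
        counts[state] += 1
    return [c / n for c in counts]

freqs = visit_frequencies(P, n=20000)
deviation = max(abs(f - 0.25) for f in freqs)
```

With $n = 20000$ the maximal deviation from $1/4$ is far below the crude tolerance $0.05$, in line with the $O(\sqrt{\log p / n})$ deviation term above.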
Now choosing $\\sigma^2 = C_1 np^2 \\pi_{\\max} \/ \\alpha$, we deduce that for any $t \\ge 0$, \n\t\\[\n\t\\begin{aligned}\n\t\t\\mathbb{P}(\\opnorm{\\mathbf{S}_n} \\ge t) & = \\mathbb{P}(\\opnorm{\\mathbf{S}_n} \\ge t, W_n \\le \\sigma^2 ) + \\mathbb{P}(\\opnorm{\\mathbf{S}_n} \\ge t, W_n > \\sigma^2 ) \\\\\n\t\t& \\le \\mathbb{P}(\\opnorm{\\mathbf{S}_n} \\ge t, W_n \\le \\sigma^2 ) + \\mathbb{P}(W_n > \\sigma^2 ) \\\\\n\t\t& \\le 2p \\exp\\biggl(-\\frac{t^2 \/2 }{\\sigma^2 + Rt \/ 3}\\biggr) + \\exp\\biggl(- \\frac{n\\pi_{\\max} (1 - \\rho_+)}{2}\\biggr). \n\t\\end{aligned}\n\t\\]\n\tEquivalently, for any $\\xi > 1$, \n\t\\[\n\t\t\\mathbb{P}\\biggl\\{\\biggl\\|\\frac{1}{n}\\mathbf{S}_n\\biggr\\|_2 \\gtrsim \\biggl(\\frac{\\xi p ^ 2\\pi_{\\max} \\log p}{n\\alpha}\\biggr)^{1 \/ 2} + \\frac{\\xi p\\log p}{n \\alpha}\\biggr\\} \\le 4p^{-(\\xi - 1)} + \\exp\\biggl(- \\frac{n\\pi_{\\max} (1 - \\rho_+)}{2}\\biggr). \n\t\\]\t\n\tFinally, observe that for any $i \\in [n]$, $\\Pi_{{\\mathcal N}}(\\mathbb{E}(\\bZ_i | \\bZ_{i - 1})) = \\Pi_{{\\mathcal N}}( -e_{X_{i - 1}} 1^\\top) = \\bzero$. Therefore, $\\Pi_{{\\mathcal N}}(\\nabla\\ell_n(\\mathbf{P})) = n^{-1}\\mathbf{S}_n$ and the final conclusion then follows. \t\n\\end{proof}\n\n\n\\section{Proof of Lemma \\ref{lem:uniform_law}}\n\\begin{proof}\n\t\\noindent We first split $\\mathcal{C}(\\eta)$ as the union of the sets \n\t\\begin{equation*}\n\t\\mathcal{C}_l := \\left\\{\\mathbf{Q} \\in \\mathcal{C}(\\eta): 2^{l-1}\\eta \\leq D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q}) \\leq 2^l\\eta, ~ {\\rm rank}(\\mathbf{Q})\\leq r\\right\\}, \\quad l=1,2,3,\\ldots. 
\n\t\\end{equation*}\n\tDefine\n\t\\begin{equation*}\n\t\\begin{split}\n\t\\gamma_l = & \\sup_{\\mathbf{Q} \\in \\mathcal{C}_l} \\bigl|D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q}) - \\widetilde D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q})\\bigr| \\\\\n\t= & \\sup_{\\mathbf{Q} \\in \\mathcal{C}_l} \\biggl|\\frac{1}{n}\\sum_{i=1}^n \\langle \\log(\\mathbf{P}) - \\log(\\mathbf{Q}), \\mathbf{X}_i \\rangle - \\mathbb{E}\\langle \\log(\\mathbf{P}) - \\log(\\mathbf{Q}), \\mathbf{X}_i \\rangle\\biggr|.\n\t\\end{split}\n\t\\end{equation*}\n\tFirst, we wish to apply \\citet[][Theorem~7]{Ada08} to bound $|\\gamma_l - \\mathbb{E} \\gamma_l|$. Adamczak's bound entails the following asymptotic weak variance\n\t\\[\n\t\\sigma^2 := \\sup_{\\mathbf{Q}\\in{\\mathcal C}_l} {\\rm Var} \\biggl\\{\\sum_{i = S_1 + 1}^{S_2} \\langle \\log(\\mathbf{P}) - \\log(\\mathbf{Q}), \\mathbf{X}_i \\rangle - \\mathbb{E}\\langle \\log(\\mathbf{P}) - \\log(\\mathbf{Q}), \\mathbf{X}_i \\rangle \\biggr\\} \/ \\mathbb{E} T_2. \n\t\\]\n\tWe have that \n\t\\[\n\t\\begin{aligned}\n\t\\sigma^2 & \\le \\sup_{\\mathbf{Q}\\in{\\mathcal C}_l}\\mathbb{E} \\biggl[\\biggl\\{ \\sum_{i = S_1 + 1}^{S_2} \\langle \\log(\\mathbf{P}) - \\log(\\mathbf{Q}), \\mathbf{X}_i \\rangle - \\mathbb{E}\\langle \\log(\\mathbf{P}) - \\log(\\mathbf{Q}), \\mathbf{X}_i \\rangle \\biggr\\}^{\\!2}\\biggr] \/ \\mathbb{E} T_2\\\\\n\t& = \\frac{1}{2} \\sup_{\\mathbf{Q}\\in{\\mathcal C}_l} \\sum\\limits_{j = 1}^{\\infty} \\mathbb{E} \\biggl[\\biggl\\{ \\sum_{i = S_1 + 1}^{S_2} \\langle \\log(\\mathbf{P}) - \\log(\\mathbf{Q}), \\mathbf{X}_i \\rangle - \\mathbb{E}\\langle \\log(\\mathbf{P}) - \\log(\\mathbf{Q}), \\mathbf{X}_i \\rangle \\biggr\\}^{\\!2} 1_{\\{T_2 = j\\}}\\biggr] \\\\\n\t& \\le \\frac{1}{2} \\sum\\limits_{j = 1}^{\\infty} {4j^2\\log^ 2(\\beta \/ \\alpha) }\\mathbb{P}(T_2 = j) = {2 \\log^2(\\beta \/ \\alpha) \\mathbb{E} (T_2^2)} = {12\\log ^ 2(\\beta \/ \\alpha)}, \n\tsince $\\mathbb{E}(T_2^2) = 6$. 
\n\t\\end{aligned}\n\t\\]\n\tBy \\citet[][Theorem~7]{Ada08}, there exists a universal constant $K > 1$ such that for any $\\xi > 0$, \n\t\\[\n\t\\mathbb{P}\\biggl\\{ |\\gamma_l - \\mathbb{E} \\gamma_l| \\ge K \\mathbb{E}\\gamma_l + {2\\log(\\beta \/ \\alpha)}\\biggl(\\frac{3K\\xi }{n}\\biggr)^{\\! 1 \/ 2} + \\frac{16 K\\log(\\beta \/ \\alpha)\\xi \\log n }{n}\\biggr\\} \\le K e^{- \\xi}. \n\t\\]\n\tSince $n^{- 1\/ 2} \\ge 2 n^{-1}\\log n$ for all integers $n \\ge 100$, we have for such $n$ that\n\t\\be\n\t\\label{eq:gamma_l_uniform_concentration}\n\t\\mathbb{P}\\biggl\\{ |\\gamma_l - \\mathbb{E} \\gamma_l| \\ge K \\mathbb{E}\\gamma_l + {12K\\log(\\beta \/ \\alpha)}\\biggl(\\frac{\\xi }{n}\\biggr)^{\\! 1 \/ 2}\\biggr\\} \\le K e^{- \\xi}. \t\n\t\\ee\n\tNext, we bound $\\mathbb{E} \\gamma_l$. Let $\\{\\varepsilon_i\\}_{i=1}^n$ be $n$ independent Rademacher random variables. By a symmetrization argument, \n\t\\begin{equation*}\n\t\\begin{split}\n\t\\mathbb{E}\\gamma_l = & \\mathbb{E}\\biggl(\\sup_{\\mathbf{Q} \\in \\mathcal{C}_l} \\biggl|\\frac{1}{n}\\sum_{i=1}^n \\langle \\log (\\mathbf{P}) - \\log(\\mathbf{Q}), \\mathbf{X}_i\\rangle - \\mathbb{E}\\langle \\log(\\mathbf{P}) - \\log(\\mathbf{Q}), \\mathbf{X}_i \\rangle\\biggr|\\biggr)\\\\\n\t\\leq & 2\\mathbb{E}\\biggl(\\sup_{\\mathbf{Q} \\in \\mathcal{C}_l}\\biggl|\\frac{1}{n}\\sum_{i=1}^n \\varepsilon_i \\langle \\log(\\mathbf{P}) - \\log(\\mathbf{Q}), \\mathbf{X}_i\\rangle\\biggr|\\biggr).\n\t\\end{split}\n\t\\end{equation*}\n\tLet $\\phi_i(t) = (\\alpha \/ p)\\{\\log(\\langle\\mathbf{P}, \\mathbf{X}_i\\rangle + t) - \\log(\\langle \\mathbf{P}, \\mathbf{X}_i\\rangle)\\}$. Then $\\phi_{i}(0) = 0$ and $|\\phi_i'(t)| \\leq 1$ for all $t$ such that $t + \\langle\\mathbf{P}, \\mathbf{X}_i\\rangle \\geq \\alpha\/p$. In other words, $\\phi_i$ is a contraction map for $t \\geq -\\min_{j, k \\in [p]}(P_{jk} - \\alpha\/p)$, which covers every $t = \\langle \\mathbf{Q} - \\mathbf{P}, \\mathbf{X}_i\\rangle$ with feasible $\\mathbf{Q}$. 
By the contraction principle (Theorem 4.12 in \\cite{ledoux2013probability}),\n\t\\begin{equation}\\label{ineq:expected-gamma_l}\n\t\t\\begin{split}\n\t\t\t\\mathbb{E}\\gamma_l \\leq & \\frac{2p}{\\alpha}\\mathbb{E}\\biggl(\\sup_{\\mathbf{Q} \\in \\mathcal{C}_l}\\biggl|\\frac{1}{n}\\sum_{i=1}^n\\varepsilon_i \\phi_i\\left(\\langle \\mathbf{Q}-\\mathbf{P}, \\mathbf{X}_i\\rangle\\right)\\biggr|\\biggr) \\leq \\frac{4p}{\\alpha}\\mathbb{E}\\biggl(\\sup_{\\mathbf{Q} \\in \\mathcal{C}_l}\\biggl|\\frac{1}{n}\\sum_{i = 1}^n \\varepsilon_i\\langle \\mathbf{Q}-\\mathbf{P}, \\mathbf{X}_i\\rangle\\biggr|\\biggr)\\\\\n\t\t\t\\leq & \\frac{4p}{\\alpha}\\mathbb{E}\\biggl(\\sup_{\\mathbf{Q} \\in \\mathcal{C}_l}\\biggl\\|\\frac{1}{n}\\sum_{i = 1}^n \\varepsilon_i\\mathbf{X}_i\\biggr\\|_2 \\|\\mathbf{Q}-\\mathbf{P}\\|_\\ast\\biggr) \\leq \\frac{4p}{\\alpha} \\mathbb{E}\\biggl\\|\\frac{1}{n}\\sum_{i = 1}^n \\varepsilon_i\\mathbf{X}_i\\biggr\\|_2 \\sup_{\\mathbf{Q} \\in \\mathcal{C}_l} \\|\\mathbf{Q} - \\mathbf{P}\\|_\\ast. \n\t\t\\end{split}\n\t\\end{equation}\t\n\tBy Lemma \\ref{lem:kl_to_l2}, \n\t\\begin{equation}\n\t\\label{ineq:Q-P-nuclear}\n\t\\begin{split}\n\t\t\\sup_{\\mathbf{Q} \\in \\mathcal{C}_l} \\|\\mathbf{Q} - \\mathbf{P}\\|_\\ast & \\leq \\sup_{\\mathbf{Q} \\in \\mathcal{C}_l} (2r)^{1 \/ 2} \\fnorm{\\mathbf{Q} - \\mathbf{P}} \\le 2\\beta \\biggl(\\frac{2^l \\eta r}{p\\alpha \\pi_{\\min}}\\biggr)^{\\!1 \/ 2}. \n\t\\end{split}\n\t\\end{equation}\t\n\tHence, the remaining task is to bound $\\mathbb{E}\\|n^{-1}\\sum_{i = 1}^n \\varepsilon_i\\mathbf{X}_i\\|_2$. From now on, we denote $\\varepsilon_i \\mathbf{X}_i$ by $\\bZ_i$. One can see that $(\\bZ_i)_{i = 1}^n$ is a martingale difference sequence. We wish to apply the matrix Freedman inequality \\citep[Corollary~1.3]{tropp2011freedman} to bound the average of $(\\bZ_i)_{i = 1}^n$. 
We have that\n\t\\begin{equation*}\n\t\\begin{split}\n\t\\biggl\\|\\sum_{i = 1}^n \\mathbb{E} \\bigl(\\bZ_i^\\top \\bZ_i \\vert X_{i - 1}\\bigr) \\biggr\\|_2 = & \\biggl\\|\\sum_{i=1}^n \\sum_{j=1}^p P_{X_{i - 1}, j} (e_{X_{i - 1}} e_j^\\top)^\\top (e_{X_{i - 1}}e_j^\\top)\\biggr\\|_2 = \\biggl\\|\\sum_{j=1}^p \\sum_{i=1}^n P_{X_{i - 1}, j} e_je_j^\\top\\biggr\\|_2 \\\\\n\t= & \\max_{j \\in [p]} \\sum_{i = 1}^n P_{X_{i - 1}, j} =: W_n^{(1)}\n\t\\end{split}\n\t\\end{equation*} \n\tand that \n\t\\begin{equation*}\n\t\\begin{split}\n\t\\biggl\\|\\sum_{i = 1}^n \\mathbb{E} \\bigl(\\bZ_i\\bZ_i^\\top\\vert X_{i - 1}\\bigr) \\biggr\\|_2 = & \\biggl\\|\\sum_{i=1}^n \\sum_{j=1}^p P_{X_{i - 1}, j} e_{X_{i - 1}} e_{X_{i - 1}}^\\top \\biggr\\|_2 = \\biggl\\|\\sum_{i=1}^n e_{X_{i - 1}} e_{X_{i - 1}}^\\top\\biggr\\|_2 \\\\\n\t= & \\max_{j \\in [p]} \\sum_{i = 1}^n 1_{\\{X_{i - 1} = j\\}} =: W_n^{(2)}. \n\t\\end{split}\n\t\\end{equation*}\t\n\tWe first bound $W_n^{(1)}$. Note that for any $j \\in [p]$, $\\mathbb{E} (P_{X_{i - 1}, j}) = \\pi_j $, and that\n\t\\[\n\t{\\rm Var}_{\\pi}(P_{X_{i - 1}, j}) = \\sum_{k = 1}^p \\pi_k (P_{kj} - \\pi_j)^2 = \\sum_{k = 1}^p \\pi_k P^2_{kj} - \\pi_j ^ 2 \\le \\pi_j(1 - \\pi_j). \n\t\\]\n\tBy a variant of Bernstein's inequality for Markov chains \\citep[Theorem~1.2]{JFS18}, we have that for any $j \\in [p]$, \n\t\\[\n\t\\mathbb{P}\\biggl(\\frac{1}{n} \\sum_{i = 1}^n P_{X_{i - 1}, j} - \\pi_j > \\epsilon \\biggr) \\le \\exp\\biggl\\{ -\\frac{n\\epsilon ^ 2}{2(A_1\\pi_j + A_2\\epsilon)}\\biggr\\}, \n\t\\]\n\twhere\n\t\\[\n\tA_1 := \\frac{1 + \\max(\\rho_+, 0)}{1 - \\max(\\rho_+, 0)}~~~\\textnormal{and}~~~A_2 := \\frac{1}{3}1_{\\{\\rho_{+} \\le 0\\}} + \\frac{5}{1 - \\rho_+} 1_{\\{\\rho_+ > 0\\}}. \n\t\\]\n\tA union bound yields that \n\t\\be\n\t\\label{eq:w1}\n\t\\mathbb{P}\\bigl\\{W_n^{(1)} \\ge n\\pi_{\\max} + (4nA_1 \\pi_{\\max}\\xi \\log p)^{1 \/ 2} + {4A_2\\xi \\log p} \\bigr\\} \\le p^{-(\\xi - 1)}. 
\n\t\\ee\n\tNext we bound $W_n^{(2)}$. Note that $W_n^{(2)} \\le \\max_{j \\in [p]} \\sum_{i =1}^n 1_{\\{X_{i - 1} = s_j\\}}$. Similarly, by \\citet[Theorem~1.2]{JFS18}, for any $j \\in [p]$, \n\t\\be\n\t\\label{eq:mc_bernstein2}\n\t\\mathbb{P}\\biggl\\{\\frac{1}{n}\\sum_{i=1}^n 1_{\\{X_{i - 1} = s_j\\}} - \\pi_j > \\epsilon \\biggr\\} \\le \\exp\\biggl\\{ -\\frac{n\\epsilon ^ 2}{2(A_1\\pi_j + A_2\\epsilon)}\\biggr\\}. \n\t\\ee\n\tSome algebra yields that for any $\\xi > 0$, \n\t\\[\n\t\\mathbb{P}\\biggl\\{\\frac{1}{n}\\sum_{i=1}^n 1_{\\{X_{i - 1} = s_j\\}} - \\pi_j > \\biggl(\\frac{4A_1\\pi_j\\xi }{n}\\biggr)^{1 \/ 2} + \\frac{4A_2\\xi}{n} \\biggr\\} \\le \\exp( - \\xi). \n\t\\]\n\tBy a union bound over $j \\in [p]$, \n\t\\[\n\t\\mathbb{P}\\biggl\\{ \\max_{j \\in [p]} \\frac{1}{n}\\sum_{i=1}^n 1_{\\{X_{i - 1} = s_j\\}} > \\pi_{\\max} + \\biggl(\\frac{4A_1\\pi_{\\max}\\xi \\log p}{n}\\biggr)^{1 \/ 2} + \\frac{4A_2\\xi \\log p}{n}\\biggr\\} \\le p^{-(\\xi - 1)}, \n\t\\]\n\twhich further implies that \n\t\\be\n\t\\label{eq:w2}\n\t\\mathbb{P}\\bigl\\{ W_n^{(2)} \\ge n\\pi_{\\max} + (4nA_1 \\pi_{\\max}\\xi \\log p)^{1 \/ 2} + {4A_2\\xi \\log p} \\bigr\\} \\le p^{-(\\xi - 1)}. \n\t\\ee\n\tDefine $W_n := \\max(W_n^{(1)}, W_n^{(2)})$. Let $\\mathbf{S}_n := \\sum_{i = 1}^n \\varepsilon_i\\mathbf{X}_i$. By the matrix Freedman inequality \\citep[][Corollary~1.3]{tropp2011freedman}, for any $t \\ge 0$ and $\\sigma^2 > 0$, \n\t\\be\n\t\\label{eq:matrix_freedman2}\n\t\\mathbb{P} ( \\opnorm{\\mathbf{S}_n} \\ge t, W_n\\le \\sigma^2 ) \\le 2p \\exp\\biggl(-\\frac{t^2 \/2 }{\\sigma^2 + t \/ 3}\\biggr). \n\t\\ee\n\tNow we need to choose an appropriate $\\sigma^2$ so that $W_n \\le \\sigma^2$ holds with high probability.\n\tGiven that $\\rho_+ > 0$ and $n\\pi_{\\max} \\ge 10\\xi \\log p \/ (1 - \\rho_+)$, combining \\eqref{eq:w1} and \\eqref{eq:w2} yields that \n\t\\be\n\t\\label{eq:w}\n\t\\mathbb{P}\\bigl( W_n \\ge 4n\\pi_{\\max}\\bigr) \\le 2p^{-(\\xi - 1)}. \n\t\\ee\n\tNow choosing $\\sigma^2 = 4n\\pi_{\\max}$ in \\eqref{eq:matrix_freedman2}, we deduce that \n\t\\[\n\t\\begin{aligned}\n\t\\mathbb{P}(\\opnorm{\\mathbf{S}_n} \\ge t) & = \\mathbb{P}(\\opnorm{\\mathbf{S}_n} \\ge t, W_n \\le \\sigma^2 ) + \\mathbb{P}(\\opnorm{\\mathbf{S}_n} \\ge t, W_n > \\sigma^2 ) \\\\\n\t& \\le \\mathbb{P}(\\opnorm{\\mathbf{S}_n} \\ge t, W_n \\le \\sigma^2 ) + \\mathbb{P}(W_n > \\sigma^2 ) \\\\\n\t& \\le 2p \\exp\\biggl(-\\frac{t^2 \/2 }{\\sigma^2 + t \/ 3}\\biggr) + 2p^{-(\\xi - 1)}. \n\t\\end{aligned}\n\t\\]\n\tChoose $\\xi = n \\pi_{\\max} (1 - \\rho_+) \/ (10 \\log p)$. As long as $n\\pi_{\\max}(1 - \\rho_+)\\ge \\max (20 \\log p, \\log n)$, integrating this tail bound yields that \n\t\\be\n\t\\label{eq:expectation_sn_op}\n\t\\mathbb{E} \\biggl\\|\\frac{1}{n}\\mathbf{S}_n\\biggr\\|_2 \\lesssim \\biggl(\\frac{\\pi_{\\max}\\log p}{n}\\biggr)^{\\!1 \/2}. \n\t\\ee\n\tCombining \\eqref{ineq:expected-gamma_l}, \\eqref{ineq:Q-P-nuclear} and \\eqref{eq:expectation_sn_op} yields that\n\t\\begin{equation*}\n\t\\mathbb{E} \\gamma_l \\lesssim \\frac{\\beta}{\\alpha ^ {3 \/ 2}}\\biggl(\\frac{2^{l}\\eta \\pi_{\\max}rp\\log p}{\\pi_{\\min}n}\\biggr)^{\\!1 \/2}. \n\t\\end{equation*}\n\tThen combining this with \\eqref{eq:gamma_l_uniform_concentration} yields that \n\t\\begin{equation*}\n\t\\mathbb{P}\\biggl\\{\\gamma_l \\gtrsim \\frac{\\beta}{\\alpha ^ {3 \/ 2}}\\biggl(\\frac{2^{l}\\eta \\pi_{\\max}rp\\log p}{\\pi_{\\min}n}\\biggr)^{\\!1 \/2} + \\log(\\beta \/ \\alpha)\\biggl(\\frac{\\xi }{n}\\biggr)^{\\! 1 \/ 2} \\biggr\\} \\lesssim e^{-\\xi}. \n\t\\end{equation*}\n\tLet $\\xi = 2^l \\eta \\pi_{\\max} rp \\log p \/ \\pi_{\\min}$. 
Then there exist universal constants $C_1, C_2 > 0$ such that\n\t\\begin{equation*}\n\t\\mathbb{P}\\biggl\\{\\gamma_l \\ge \\frac{C_1\\beta}{\\alpha ^ {3 \/ 2}}\\biggl(\\frac{2^{l}\\eta \\pi_{\\max}rp\\log p}{\\pi_{\\min}n}\\biggr)^{\\!1 \/2} \\biggr\\} \\le C_2\\exp\\biggl\\{- \\frac{2l \\eta \\pi_{\\max} rp \\log p}{\\pi_{\\min}}\\biggr\\}. \t\n\t\\end{equation*}\n\tWe can thus deduce that there exists a universal constant $C_3 > 0$ such that \n\t\\begin{equation*}\n\t\\begin{split}\n\t&\\mathbb{P}\\biggl( |\\widetilde{D}_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q}) - D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q})|> \\frac{1}{2}D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q}) + \\frac{C_3\\pi_{\\max} \\beta ^ 2rp\\log p}{\\pi_{\\min}\\alpha ^ 3n} \\biggr)\\\\\n\t\\leq & \\sum_{l=1}^\\infty \\mathbb{P}\\left(\\exists \\mathbf{Q} \\in \\mathcal{C}_l, ~\\left|\\widetilde{D}_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q}) - D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q})\\right|> 2^{l - 2}\\eta + \\frac{C_3\\pi_{\\max} \\beta ^ 2rp\\log p}{\\pi_{\\min}\\alpha ^ 3n}\\right)\\\\\n\t\\leq & \\sum_{l=1}^\\infty \\mathbb{P}\\biggl\\{\\gamma_l \\ge \\frac{C_1\\beta}{\\alpha ^ {3 \/ 2}}\\biggl(\\frac{2^l\\eta\\pi_{\\max}rp\\log p}{\\pi_{\\min}n}\\biggr)^{\\!1 \/2}\\biggr\\}\\\\\n\t\\leq & C_2 \\sum_{l=1}^\\infty \\exp\\biggl\\{- \\frac{2l \\eta \\pi_{\\max} rp \\log p}{\\pi_{\\min}}\\biggr\\} \\le 2C_2 \\exp\\biggl(- \\frac{\\eta \\pi_{\\max} rp \\log p}{\\pi_{\\min}}\\biggr), \n\t\\end{split}\n\t\\end{equation*}\n\twhere we use the Cauchy-Schwarz inequality in the second step. \n\\end{proof}\n\n\n\\section{Alternative statistical error analysis}\n\\label{sec:alt}\n\n\\subsection{Main results}\n\nIn this section, we provide an alternative proof strategy that follows \\citet{NRW12} to bound the statistical error of $\\widehat \\mathbf{P}$ and $\\widehat \\mathbf{P} ^ r$. 
This strategy resolves the inconsistency issue of Theorems \\ref{thm:nuclear} and \\ref{thm:rank} when $n \\gg \\{rp \\pi_{\\max}(\\log p) \\beta\/ (\\pi_{\\min} \\alpha ^ {\\!3 \/ 2})\\} ^ 2$. For any $R>0$, define a constraint set ${\\mathcal C} (\\beta, R, \\kappa):= \\{\\boldsymbol{\\Delta} \\in \\Re^{p \\times p}: \\supnorm{\\boldsymbol{\\Delta}} \\le \\beta \/ p, \\fnorm{\\boldsymbol{\\Delta}}\\le R, \\nnorm{\\boldsymbol{\\Delta}} \\le \\kappa r^{1 \/ 2}\\fnorm{\\boldsymbol{\\Delta}} \\}$. An important ingredient of this statistical analysis is the localized restricted strong convexity \\citep{NWa11, FLS18} of the loss function $\\ell_n(\\mathbf{P})$ near $\\mathbf{P}$. This property allows us to bound the distance in the parameter space by the difference in the objective function value. Define the first-order Taylor remainder term of the negative log-likelihood function $\\ell_n(\\mathbf{Q})$ around $\\mathbf{P}$ as\n\\[\n\t\\delta\\ell_n(\\mathbf{Q}; \\mathbf{P}) := \\ell_n(\\mathbf{Q}) - \\ell_n(\\mathbf{P}) - \\nabla\\ell_n(\\mathbf{P})^\\top (\\mathbf{Q} - \\mathbf{P}). \n\\]\nThe following lemma establishes the desired local restricted strong convexity. \n\\begin{lemma}\n\t\\label{lem:restricted_strong_convexity}\n\tUnder Assumption \\ref{asp:1}, there exists a universal constant $K$ such that for any $\\xi > 1$, it holds with probability at least $1 - K\\exp(-\\xi)$ that for any $\\boldsymbol{\\Delta} \\in {\\mathcal C}(\\beta, R, \\kappa)$, \n\t\\be\n\t\\begin{aligned}\n\t\\delta\\ell_n(\\mathbf{P} + \\boldsymbol{\\Delta}; \\mathbf{P}) \\ge \\frac{\\alpha ^2}{8\\beta^2}\\fnorm{\\boldsymbol{\\Delta}}^2 & - 8R\\biggl({\\frac{3K\\xi }{n}}\\biggr)^{1 \/ 2} - \\frac{8K\\xi\\alpha^2 \\log n }{\\beta^2 n} - \\frac{Kp\\kappa R}{\\beta}\\biggl(\\frac{r\\pi_{\\max}\\log p}{n}\\biggr) ^ {\\!1 \/ 2}. \n\\end{aligned}\n\t\\ee\n\\end{lemma}\n\nNow we present the statistical rates of $\\widehat \\mathbf{P}$ and $\\widehat \\mathbf{P} ^ r$. 
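Per sample, the curvature bound in Lemma \ref{lem:restricted_strong_convexity} rests on an elementary inequality for the negative logarithm: if $a$ and $a + d$ both lie in $[\alpha/p, \beta/p]$, then $-\log(a+d) + \log a + d/a \ge d^2 p^2/(8\beta^2)$. The following is an editorial sketch checking this on a grid; the values of $\alpha$, $\beta$, $p$ and the grid resolution are illustrative choices, not quantities from the paper.

```python
import math

# Scalar inequality behind the restricted strong convexity bound:
# for a and q = a + d both in [alpha/p, beta/p],
#   -log(q) + log(a) + d/a  >=  d^2 * p^2 / (8 * beta^2).
# alpha, beta, p and the grid below are illustrative choices only.
alpha, beta, p = 0.4, 1.6, 4
lo, hi = alpha / p, beta / p

violations = 0
steps = 200
for i in range(steps + 1):
    a = lo + (hi - lo) * i / steps
    for j in range(steps + 1):
        q = lo + (hi - lo) * j / steps   # q = a + d stays feasible
        d = q - a
        # first-order Taylor remainder of -log at a, evaluated at q
        remainder = -math.log(q) + math.log(a) + d / a
        if remainder < d * d * p * p / (8 * beta * beta) - 1e-12:
            violations += 1
```

The integral form of the remainder gives $\int_0^1 (1-s)\,d^2/(a+sd)^2\,\mathrm{d}s \ge d^2 p^2/(2\beta^2)$ here, so the check passes with slack; the factor $8$ in the lemma allows the larger bound $2\beta/p$ on $\langle \mathbf{P} + v\boldsymbol{\Delta}, \mathbf{X}_i\rangle$.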
\n\n\\begin{theorem}[Alternative statistical guarantee for $\\widehat \\mathbf{P}$]\n\t\\label{thm:nuclear_alt}\n\tUnder the same assumptions as in Theorem \\ref{thm:nuclear}, there exists a universal constant $C_1 > 0$ such that for any $\\xi > 1$, if we choose \n\t\\[\n\t\\lambda = C_1 \\biggl\\{\\biggl(\\frac{\\xi p ^ 2\\pi_{\\max} \\log p}{n\\alpha}\\biggr)^{\\! 1 \/ 2} + \\frac{\\xi p\\log p}{n \\alpha}\\biggr\\}, \n\t\\]\n\tthen whenever $n\\pi_{\\max}(1 - \\rho_+) \\ge \\max\\{\\max(20, \\xi ^ 2) \\log p, \\log n\\}$, we have with probability at least $1 - K\\exp(-\\xi) - 4p^{-(\\xi - 1)} - p ^{-1}$ that \n\t\\[\n\t\\begin{aligned}\n\t\\fnorm{\\widehat\\mathbf{P} - \\mathbf{P}} \\lesssim \\frac{\\beta ^ 2}{\\alpha ^ 2}\\biggl(\\frac{\\xi r p ^ 2 \\pi_{\\max} \\log p}{n \\alpha}\\biggr) ^ {\\!1 \/ 2}~~\\text{and}~~D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}) \\lesssim \\frac{\\xi \\beta ^ 6 \\pi_{\\max} r p ^ 2 \\log p}{n \\alpha ^ 7},\n\t\\end{aligned}\n\t\\]\n\twhere $K$ is the same constant as in Lemma \\ref{lem:restricted_strong_convexity}. \n\\end{theorem}\n\n\n\\begin{theorem}[Alternative statistical guarantee for $\\widehat \\mathbf{P} ^ r$]\n\t\\label{thm:rank_alt}\n\tUnder the same assumptions as in Theorem \\ref{thm:nuclear}, for any $\\xi > 1$, we have with probability at least $1 - K\\exp(-\\xi) - 4p^{-(\\xi - 1)} - p ^{-1}$ that \n\t\\[\n\t\\fnorm{\\widehat\\mathbf{P} ^ r- \\mathbf{P}} \\lesssim \\frac{\\beta ^ 2}{\\alpha ^ 2}\\biggl(\\frac{\\xi r p ^ 2 \\pi_{\\max} \\log p}{n \\alpha}\\biggr) ^ {\\!1 \/ 2} ~~\\text{and}~~D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P} ^ r) \\lesssim \\frac{\\xi \\beta ^ 6 \\pi_{\\max} r p ^ 2 \\log p}{n \\alpha ^ 7}, \n\t\\]\n\twhere $K$ is the same constant as in Lemma \\ref{lem:restricted_strong_convexity}. \n\\end{theorem}\n\nOne can see from the theorems above that the derived error bounds converge to zero as $n$ goes to infinity. 
Nevertheless, their dependence on $\\alpha$ and $\\beta$ is worse than that in Theorems \\ref{thm:nuclear} and \\ref{thm:rank} when $n \\lesssim \\{rp \\pi_{\\max}(\\log p) \\beta\/ (\\pi_{\\min} \\alpha ^ {\\!3 \/ 2})\\} ^ 2$. This is why we do not present this result in the main text. \n\n\\subsection{Proof of Lemma \\ref{lem:restricted_strong_convexity}}\n\n\\begin{proof}\n\t~Given any $\\boldsymbol{\\Delta} \\in {\\mathcal C}(\\beta, R, \\kappa)$, it holds for some $0\\le v \\le 1$ that \n\t\\be\n\t\\label{eq:rsc_first_step}\n\t\\begin{aligned}\n\t\t\\delta \\ell_n(\\mathbf{P} + \\boldsymbol{\\Delta}; \\mathbf{P}) & = \\frac{1}{2} \\text{vec}(\\boldsymbol{\\Delta})^\\top \\mathbf{H}_n(\\mathbf{P} + v\\boldsymbol{\\Delta})\\text{vec}(\\boldsymbol{\\Delta}) = \\frac{1}{2n} \\sum\\limits_{i=1}^n \\frac{\\inn{\\mathbf{X}_i, \\boldsymbol{\\Delta}}^2}{\\inn{\\mathbf{P} + v\\boldsymbol{\\Delta}, \\mathbf{X}_i}^2} \\\\\n\t\t& \\ge \\frac{1}{2n}\\sum\\limits_{i=1}^n \\frac{p ^ 2}{4\\beta^2} \\inn{\\boldsymbol{\\Delta}, \\mathbf{X}_i}^2. \n\t\\end{aligned}\n\t\\ee\n\tDefine \n\t\\[\n\t\\Gamma_n := \\sup_{\\boldsymbol{\\Delta} \\in {\\mathcal C}(\\beta, R, \\kappa)} \\biggl |\\frac{1}{n} \\sum\\limits_{i=1}^n \\inn{\\boldsymbol{\\Delta}, \\mathbf{X}_i}^2 - \\mathbb{E}(\\inn{\\boldsymbol{\\Delta}, \\mathbf{X}_i}^2) \\biggr |. \n\t\\]\n\tWe first bound the deviation of $\\Gamma_n$ from its expectation $\\mathbb{E} \\Gamma_n$. Note that $\\{\\mathbf{X}_i\\}_{i = 1}^n$ is a Markov chain on ${\\mathcal M} := \\{e_je_k^\\top\\}_{j, k = 1}^p$. Here we apply a tail inequality for suprema of unbounded empirical processes due to \\citet[][Theorem~7]{Ada08}. To apply this result, we need to verify that $\\{\\mathbf{X}_i\\}_{i = 1}^n$ satisfies the ``minorization condition'' as stated in Section 3.1 of \\citet{Ada08}. Below we characterize a specialized version of this condition. 
\n\t\n\t\\begin{condition}[Minorization condition]\n\t\t\\label{con:minorization}\n\t\tWe say that a Markov chain ${\\mathcal X}$ on ${\\mathcal S}$ satisfies the minorization condition if there exist $\\delta > 0$, a set ${\\mathcal C}\\subset {\\mathcal S}$ and a probability measure $\\nu$ on ${\\mathcal S}$ for which $\\forall_{x \\in {\\mathcal C}} \\forall_{{\\mathcal A} \\subset {\\mathcal S}} \\mathbb{P}(x, {\\mathcal A}) \\ge \\delta \\nu({\\mathcal A})$ and $\\forall_{x \\in {\\mathcal S}} \\exists_{n \\in \\mathbb{N}} \\mathbb{P}^n(x, {\\mathcal C}) > 0$. \n\t\\end{condition} \n\tOne can verify that the Markov chain $\\{\\mathbf{X}_i\\}_{i = 1}^n$ satisfies Condition \\ref{con:minorization} with $\\delta = 1\/ 2$, ${\\mathcal C} = \\{e_1e_2^\\top\\}$ and $\\nu(e_je_k^\\top) = P_{jk}1_{\\{j = 2\\}}$ for $j, k \\in [p]$.\n\t\n\tNow consider a new Markov chain $\\{(\\widetilde \\mathbf{X}_i, R_i)\\}_{i=1}^n$ constructed as follows. Let $\\{R_i\\}_{i=1}^n$ be i.i.d. Bernoulli random variables with $\\mathbb{E} R_1 = \\delta$. For any $i \\in \\{0, \\ldots, n - 1\\}$, at step $i$, if $\\widetilde \\mathbf{X}_i \\notin {\\mathcal C}$, we sample $\\widetilde \\mathbf{X}_{i + 1}$ according to $\\mathbb{P}(\\widetilde \\mathbf{X}_i, \\cdot)$; otherwise, the distribution of $\\widetilde \\mathbf{X}_{i + 1}$ depends on $R_i$: if $R_i = 1$, the chain regenerates in the sense that we draw $\\widetilde \\mathbf{X}_{i + 1}$ from $\\nu$, and if $R_i = 0$, we draw $\\widetilde \\mathbf{X}_{i + 1}$ from $(\\mathbb{P}(\\widetilde \\mathbf{X}_i, \\cdot) - \\delta \\nu(\\cdot)) \/ (1 - \\delta)$. One can verify that the sequence $\\{\\widetilde\\mathbf{X}_i\\}_{i=1}^n$ has exactly the same distribution as the original Markov chain $\\{\\mathbf{X}_i\\}_{i=1}^n$. Define $T_1 := \\inf\\{n > 0: R_n = 1\\}$ and $T_{i + 1} := \\inf\\{n > 0: R_{T_1 + \\ldots + T_i + n} = 1\\}$ for $i \\ge 1$. Note that $\\{T_i\\}_{i \\ge 1}$ are i.i.d. 
Geometric random variables with $\\mathbb{E} T_1 = 2$ and $\\|T_1\\|_{\\psi_1} \\le 4$. Let $S_0 := -1$, $S_j := T_1 + \\ldots + T_j$ and ${\\mathcal Y}_j := \\{ \\widetilde \\mathbf{X}_i\\}_{i = S_{j - 1} + 1}^{S_j}$ for $j \\ge 1$. Based on our construction, we deduce that $\\{{\\mathcal Y}_j\\}_{j \\ge 1}$ are independent. Thus we chop the original Markov chain $\\{\\mathbf{X}_i\\}_{i \\in [n]}$ into independent sequences. Finally, Adamczak's bound entails the following asymptotic weak variance\n\t\\[\n\t\\sigma^2 := \\sup_{\\boldsymbol{\\Delta}\\in{\\mathcal C}(\\beta, R, \\kappa)} {\\rm Var} \\biggl\\{\\sum_{i = S_1 + 1}^{S_2} \\inn{\\boldsymbol{\\Delta}, \\mathbf{X}_i}^2 - \\mathbb{E}(\\inn{\\boldsymbol{\\Delta}, \\mathbf{X}_i}^2) \\biggr\\} \/ \\mathbb{E} T_2. \n\t\\]\n\tWe have\n\t\\[\n\t\\begin{aligned}\n\t\\sigma^2 & \\le \\sup_{\\boldsymbol{\\Delta}\\in{\\mathcal C}(\\beta, R, \\kappa)}\\mathbb{E} \\biggl[\\biggl\\{ \\sum_{i = S_1 + 1}^{S_2} \\inn{\\boldsymbol{\\Delta}, \\mathbf{X}_i}^2 - \\mathbb{E}(\\inn{\\boldsymbol{\\Delta}, \\mathbf{X}_i}^2) \\biggr\\}^2\\biggr] \/ \\mathbb{E} T_2\\\\\n\t& = \\frac{1}{2} \\sup_{\\boldsymbol{\\Delta}\\in{\\mathcal C}(\\beta, R, \\kappa)} \\sum\\limits_{j = 1}^{\\infty} \\mathbb{E} \\biggl[\\biggl\\{ \\sum_{i = S_1 + 1}^{S_2} \\inn{\\boldsymbol{\\Delta}, \\mathbf{X}_i}^2 - \\mathbb{E}(\\inn{\\boldsymbol{\\Delta}, \\mathbf{X}_i}^2) \\biggr\\}^2 1_{\\{T_2 = j\\}}\\biggr] \\\\\n\t& \\le \\frac{1}{2} \\sum\\limits_{j = 1}^{\\infty} \\frac{j^2R^2\\beta^4}{p^4}\\mathbb{P}(T_2 = j) = \\frac{R^2 \\beta^4\\mathbb{E} (T_2^2)}{2p^4} = \\frac{3\\beta^4R^2}{p^4}. 
\n\t\\end{aligned}\n\t\\]\n\tBy \\citet[][Theorem~7]{Ada08}, there exists a universal constant $K$ such that for any $\\xi > 0$, \n\t\\be\n\t\\label{eq:1.7}\n\t\\mathbb{P}\\biggl\\{ |\\Gamma_n - \\mathbb{E} \\Gamma_n| \\ge K \\mathbb{E}\\Gamma_n + \\frac{R\\beta^2}{ p^2}\\biggl(\\frac{3K\\xi }{n}\\biggr)^{1 \/ 2} + \\frac{64K\\xi\\alpha^2 \\log n }{np^2}\\biggr\\} \\le K \\exp(-\\xi). \n\t\\ee\n\t\t\n\tNext, by the symmetrization argument and Ledoux-Talagrand contraction inequality \\citep{ledoux2013probability}, for $n$ independent and identically distributed Rademacher variables $\\{\\gamma_i\\}_{i=1}^n$, when $n \\pi_{\\max}(1 - \\rho_+) \\ge \\max(20\\log p, \\log n)$, we have that\n\t\\be\n\t\\label{eq:16}\n\t\\begin{aligned}\n\t\t\\mathbb{E} \\Gamma_n & \\le 2\\mathbb{E} \\sup_{\\substack{\\fnorm{\\boldsymbol{\\Delta}} \\le R, \\\\ \\boldsymbol{\\Delta} \\in {\\mathcal C}(\\beta, R, \\kappa)}} \\biggl | \\frac{1}{n} \\sum\\limits_{i=1}^n \\gamma_i \\inn{\\boldsymbol{\\Delta}, \\mathbf{X}_i}^2 \\biggr | \\le \\frac{8\\beta}{p}~\\mathbb{E} \\sup_{\\substack{\\fnorm{\\boldsymbol{\\Delta}} \\le R, \\\\ \\boldsymbol{\\Delta} \\in {\\mathcal C}(\\beta, R, \\kappa)}} \\biggl | \\inn{\\frac{1}{n}\\sum\\limits_{i=1}^n \\gamma_i \\mathbf{X}_i, \\boldsymbol{\\Delta}} \\biggr| \\\\\n\t\t& \\le \\frac{8\\beta\\nnorm{\\boldsymbol{\\Delta}}}{p} ~\\mathbb{E} \\biggl\\| \\frac{1}{n}\\sum\\limits_{i=1}^n \\gamma_i \\mathbf{X}_i \\biggr \\|_{2} \\le \\frac{8\\kappa\\beta r^{1 \/ 2} R}{p}\\mathbb{E} \\biggl\\| \\frac{1}{n}\\sum\\limits_{i=1}^n \\gamma_i \\mathbf{X}_i \\biggr\\|_2 \\le \\frac{8\\kappa\\beta R}{p}\\biggl(\\frac{r\\pi_{\\max}\\log p}{n}\\biggr)^{\\!1 \/ 2}, \n\t\\end{aligned}\n\t\\ee\n\twhere the penultimate inequality is due to the fact that $\\boldsymbol{\\Delta} \\in {\\mathcal C}(\\beta, R, \\kappa)$, and where the last inequality is due to \\eqref{eq:expectation_sn_op}. 
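The block-length moments used in the variance computation above come from the Geometric$(1/2)$ law of $T_2$: $\mathbb{E}\,T_2 = 2$ and $\mathbb{E}(T_2^2) = 6$, which is what turns $R^2\beta^4\mathbb{E}(T_2^2)/(2p^4)$ into $3\beta^4R^2/p^4$. An editorial sanity check via a truncated series (the truncation point is an arbitrary choice):

```python
# Moments of the Geometric(1/2) block length T used in the variance
# computation above: P(T = j) = (1/2)^j for j >= 1, so E T = 1/p = 2 and
# E T^2 = (2 - p) / p^2 = 6 with p = 1/2.  Sanity check via a truncated
# series; the truncation point j_max = 200 is an arbitrary choice.
def geometric_moment(k, p=0.5, j_max=200):
    return sum((j ** k) * p * (1 - p) ** (j - 1) for j in range(1, j_max + 1))

ET = geometric_moment(1)   # close to 2
ET2 = geometric_moment(2)  # close to 6
```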
\n\t\n\tFinally, \n\t\\be\n\t\\mathbb{E} \\inn{\\boldsymbol{\\Delta}, \\mathbf{X}_i}^2 = \\sum\\limits_{1 \\le j,k \\le p} \\pi_j P_{jk}\\Delta^2_{jk} \\ge \\frac{\\alpha^2}{p^2} \\fnorm{\\boldsymbol{\\Delta}}^2. \n\t\\ee\n\tCombining all the bounds above, we have for any $\\xi > 1$, with probability at least $1 - K \\exp(-\\xi)$, \n\t\\be\n\t\\begin{aligned}\n\t\t\\delta\\ell_n(\\mathbf{P} + \\boldsymbol{\\Delta}; \\mathbf{P}) \\ge \\frac{\\alpha ^2}{8\\beta^2}\\fnorm{\\boldsymbol{\\Delta}}^2 & - 8R\\biggl({\\frac{3K\\xi }{n}}\\biggr)^{1 \/ 2} - \\frac{8K\\xi\\alpha^2 \\log n }{\\beta^2 n} - \\frac{Kp\\kappa R}{\\beta}\\biggl(\\frac{r\\pi_{\\max}\\log p}{n}\\biggr) ^ {\\!1 \/ 2}. \n\t\\end{aligned}\n\t\\ee\n\\end{proof}\n\n\\subsection{Proof of Theorem \\ref{thm:nuclear_alt}}\n\n\\begin{proof}\n\t~For a specific $R$ whose value will be determined later, we construct an intermediate estimator $\\widehat\\mathbf{P}_{\\eta} $ between $\\widehat\\mathbf{P}$ and $\\mathbf{P}$:\n\t\\[\n\t\\widehat \\mathbf{P}_{\\eta} = \\mathbf{P} + \\eta (\\widehat \\mathbf{P} - \\mathbf{P}), \n\t\\]\n\twhere $\\eta = 1$ if $\\fnorm{\\widehat \\mathbf{P} - \\mathbf{P}} \\le R$ and $\\eta = R\/\\fnorm{\\widehat\\mathbf{P} - \\mathbf{P}}$ if $\\fnorm{\\widehat\\mathbf{P} - \\mathbf{P}} > R$. 
Let $\\widehat\\boldsymbol{\\Delta}_{\\eta} := \\widehat\\mathbf{P}_{\\eta} - \\mathbf{P}$. For any $\\xi > 1$, there exists a universal constant $C > 0$ such that when \n\t\\[\n\t\\lambda = C\\biggl\\{\\biggl(\\frac{\\xi p ^ 2\\pi_{\\max}\\log p}{n\\alpha}\\biggr)^{1 \/ 2} + \\frac{\\xi p\\log p}{n \\alpha}\\biggr\\}, \n\t\\] \n\twe have by Lemmas \\ref{lem:restricted_strong_convexity} and \\ref{lem:gradient} that with probability at least $1 - K\\exp(-\\xi) - 4p^{-(\\xi - 1)} - p ^{-1}$, \n\t\\be\n\t\\label{eq:stat_error_fnorm}\n\t\\begin{aligned}\n\t\t\\frac{\\alpha ^2}{8\\beta^2} & \\fnorm{\\widehat\\boldsymbol{\\Delta}_{\\eta}}^2 - 8R\\biggl({\\frac{3K\\xi }{n}}\\biggr)^{1 \/ 2} - \\frac{8K\\xi\\alpha^2 \\log n }{\\beta^2 n} - \\frac{Kp\\kappa R}{\\beta}\\biggl(\\frac{r\\pi_{\\max}\\log p}{n}\\biggr) ^ {\\!1 \/ 2}\\\\\n\t\t& \\le \\delta\\ell_n(\\widehat\\mathbf{P}_{\\eta}; \\mathbf{P}) \\le - \\inn{\\Pi_{{\\mathcal N}}(\\nabla\\ell_n(\\mathbf{P})), \\widehat \\boldsymbol{\\Delta}_{\\eta}} + \\lambda ( \\nnorm{\\mathbf{P}} - \\nnorm{\\widehat\\mathbf{P}_{\\eta}} ) \\\\\n\t\t& \\le - \\inn{\\Pi_{{\\mathcal N}}(\\nabla\\ell_n(\\mathbf{P})), \\widehat \\boldsymbol{\\Delta}_{\\eta}} + \\lambda\\nnorm{\\widehat\\boldsymbol{\\Delta}_{\\eta}} \\le (\\opnorm{\\Pi_{{\\mathcal N}}(\\nabla \\ell_n(\\mathbf{P}))} + \\lambda) \\nnorm{\\widehat \\boldsymbol{\\Delta}_{\\eta}} \\\\\n\t\t& \\le 8\\lambda \\nnorm{[\\widehat \\boldsymbol{\\Delta}_{\\eta}]_{\\overline{\\mathcal M}}} \\le 8\\lambda\\sqrt{r}\\fnorm{\\widehat\\boldsymbol{\\Delta}_{\\eta}},\n\t\\end{aligned}\n\t\\ee\n\twhere $K$ is the same universal constant as in Lemma \\ref{lem:restricted_strong_convexity}. 
Some algebra yields that \n\t\\be\n\t\\label{eq:5.24}\n\t\\begin{aligned}\n\t\t\\fnorm{\\widehat \\boldsymbol{\\Delta}_{\\eta}}^2 \\lesssim \\frac{\\beta ^ 2}{\\alpha ^ 2} \\max \\biggl\\{ \\frac{\\lambda ^ 2r\\beta ^ 2}{\\alpha ^ 2}, R\\biggl({\\frac{\\xi}{n}}\\biggr)^{1 \/ 2}, \\frac{\\xi \\alpha ^ 2 \\log n}{\\beta ^ 2n}, \\frac{pR}{\\beta}\\biggl(\\frac{r\\pi_{\\max}\\log p}{n}\\biggr) ^ {\\!1 \/ 2}\\biggr\\}. \n\t\\end{aligned} \n\t\\ee\n\tLetting $R ^ 2$ be greater than the RHS of the inequality above, we can find a universal constant $C_4 > 0$ such that\n\t\\[\n\t\\begin{aligned}\n\tR \\ge \\frac{C_4\\beta ^ 2}{\\alpha ^ 2}\\biggl(\\frac{\\xi r p ^ 2 \\pi_{\\max} \\log p}{n \\alpha}\\biggr) ^ {\\!1 \/ 2} =: R_0. \n\t\\end{aligned}\n\t\\]\n\tChoose $R = R_0$. Therefore, $\\fnorm{\\widehat \\boldsymbol{\\Delta}_{\\eta}} \\le R$ and $\\widehat \\boldsymbol{\\Delta}_{\\eta} = \\widehat \\boldsymbol{\\Delta}$. We can thus reach the conclusion. As to the KL-divergence, by \\citet[][Lemma~4]{zhang2018optimal}, we deduce that \n\t\\be\n\t\\label{eq:l2_to_kl}\n\tD_{\\mathrm{KL}}(\\widehat\\mathbf{P}, \\mathbf{P}) = \\sum\\limits_{j = 1}^p \\pi_j D_{\\mathrm{KL}}(\\mathbf{P}_{j\\cdot}, \\widehat\\mathbf{P}_{j\\cdot}) \\le \\sum\\limits_{j = 1}^p \\frac{\\beta^2 }{2 \\alpha^2}\\ltwonorm{\\mathbf{P}_{j\\cdot} - \\widehat\\mathbf{P}_{j\\cdot}}^2 = \\frac{\\beta^2}{2\\alpha^2} \\fnorm{\\widehat\\mathbf{P} - \\mathbf{P}}^2, \n\t\\ee\n\tfrom which we attain the conclusion. \n\\end{proof}\n\n\\subsection{Proof of Theorem \\ref{thm:rank_alt}}\n\n\\begin{proof}\t\n\t~Define ${\\widehat\\boldsymbol{\\Delta}}(r) := \\widehat \\mathbf{P}^r - \\mathbf{P}$. Since ${\\rm rank}(\\mathbf{P}) \\le r$ and ${\\rm rank}(\\widehat\\mathbf{P}^r) \\le r$, ${\\rm rank}(\\widehat\\boldsymbol{\\Delta}(r))\\le 2r$. Thus $\\nnorm{\\widehat \\boldsymbol{\\Delta}(r)} \\le (2r)^{1 \/ 2} \\fnorm{\\widehat \\boldsymbol{\\Delta}(r)}$. 
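This norm comparison for low-rank matrices is easy to check numerically. A minimal sketch (toy data, assuming numpy; not part of the proof): by Cauchy--Schwarz applied to the at most $2r$ nonzero singular values, the nuclear norm is bounded by $(2r)^{1/2}$ times the Frobenius norm.

```python
import numpy as np

# Numerical check (toy data): if rank(Delta) <= 2r, then
# ||Delta||_* <= sqrt(2r) * ||Delta||_F, by Cauchy-Schwarz applied
# to the at most 2r nonzero singular values.

rng = np.random.default_rng(1)
r, p = 3, 20
# A p x p matrix of rank at most 2r, built from thin factors.
Delta = rng.standard_normal((p, 2 * r)) @ rng.standard_normal((2 * r, p))

s = np.linalg.svd(Delta, compute_uv=False)   # singular values
nuclear = s.sum()
frobenius = np.sqrt((s ** 2).sum())

assert np.linalg.matrix_rank(Delta) <= 2 * r
assert nuclear <= np.sqrt(2 * r) * frobenius + 1e-9
```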
\n\n\tNow we follow the proof strategy of Theorem \\ref{thm:nuclear} to establish the statistical error bound for $\\widehat \\mathbf{P}^r$. Similarly, for a specific $R > 0$ whose value will be determined later, we can construct an intermediate estimator $\\widehat\\mathbf{P}^r_{\\eta} $ between $\\widehat\\mathbf{P}^r$ and $\\mathbf{P}$:\n\t\\[\n\t\\widehat \\mathbf{P}^r_{\\eta} = \\mathbf{P} + \\eta (\\widehat \\mathbf{P}^r - \\mathbf{P}), \n\t\\]\n\twhere $\\eta = 1$ if $\\fnorm{\\widehat \\mathbf{P}^r - \\mathbf{P}} \\le R$ and $\\eta = R\/\\fnorm{\\widehat\\mathbf{P}^r - \\mathbf{P}}$ if $\\fnorm{\\widehat\\mathbf{P}^r - \\mathbf{P}} > R$. Let $\\widehat \\boldsymbol{\\Delta}_\\eta(r) := \\widehat \\mathbf{P}^r_\\eta - \\mathbf{P}$. Since $\\widehat \\boldsymbol{\\Delta}_\\eta(r) \\in {\\mathcal C}(\\beta, R, \\sqrt{2})$, applying Lemma \\ref{lem:restricted_strong_convexity} yields that \n\t\\be\n\t\\begin{aligned}\n\t\t\\frac{\\alpha ^2}{8\\beta^2} & \\fnorm{\\boldsymbol{\\Delta}}^2 - 8R\\biggl({\\frac{3K\\xi }{n}}\\biggr)^{1 \/ 2} - \\frac{8K\\xi\\alpha^2 \\log n }{\\beta^2 n} - \\frac{Kp\\kappa R}{\\beta}\\biggl(\\frac{r\\pi_{\\max}\\log p}{n}\\biggr) ^ {\\!1 \/ 2}\\\\\n\t\t& \\le \\delta\\ell_n(\\widehat\\mathbf{P}^r_{\\eta}; \\mathbf{P}) \\le - \\inn{\\Pi_{{\\mathcal N}}(\\nabla{\\mathcal L}_n(\\mathbf{P})), \\widehat \\boldsymbol{\\Delta}_{\\eta}(r)} \\le \\opnorm{\\Pi_{{\\mathcal N}}(\\nabla {\\mathcal L}_n(\\mathbf{P}))} \\nnorm{\\widehat \\boldsymbol{\\Delta}_{\\eta}(r)}\\\\\n\t\t& \\le \\sqrt{2r}\\opnorm{\\Pi_{{\\mathcal N}}(\\nabla {\\mathcal L}_n(\\mathbf{P}))} \\fnorm{\\widehat \\boldsymbol{\\Delta}_{\\eta}(r)}, \n\t\\end{aligned}\n\t\\ee\n\twhich further implies that there exists $C_1$ depending only on $\\alpha$ and $\\beta$ such that \n\t\\[\n\t\\fnorm{\\widehat \\boldsymbol{\\Delta}_{\\eta}(r)}^2 \\le C_1 \\max \\biggl\\{r\\opnorm{\\Pi_{{\\mathcal N}}(\\nabla {\\mathcal L}_n(\\mathbf{P}))} ^ 2, 
R\\biggl({\\frac{\\xi}{n}}\\biggr)^{1 \/ 2}, \\frac{\\xi \\alpha ^ 2 \\log n}{\\beta ^ 2n}, \\frac{pR}{\\beta}\\biggl(\\frac{r\\pi_{\\max}\\log p}{n}\\biggr) ^ {\\!1 \/ 2}\\biggr\\}. \n\t\\]\n\tBy a contradiction argument as in the proof of Theorem \\ref{thm:nuclear}, we can choose an appropriate $R$ large enough such that $\\widehat \\mathbf{P}^r_\\eta = \\widehat \\mathbf{P}^r $ and attain the conclusion. \n\t\n\\end{proof}\n\n\n\\section{Proof of Proposition \\ref{prop:penlowrank}}\n\\begin{proof}\n\n\n\n\n\t~Since ${\\rm rank}({\\mathbf{X}}_c^*)\\le r$, we know that ${\\mathbf{X}}_c^*$ is in fact a feasible solution to the original problem (5) and $\\norm{{\\mathbf{X}}_c^*}_{*} - \\norm{{\\mathbf{X}}_c^*}_{(r)} = 0$. Therefore, for any feasible solution ${\\mathbf{X}}$ to\n\t(5), it holds that \n\t\\begin{align*} \n\tf({\\mathbf{X}}_c^*) ={}& f({\\mathbf{X}}_c^*) + c(\\norm{{\\mathbf{X}}_c^*}_{*} - \\norm{{\\mathbf{X}}_c^*}_{(r)})\\\\[5pt]\n\t\\le{}& f({\\mathbf{X}}) + c(\\norm{{\\mathbf{X}}}_* - \\norm{{\\mathbf{X}}}_{(r)})\n\t= f({\\mathbf{X}}).\n\t\\end{align*}\n\tThis completes the proof of the proposition.\n\\end{proof}\n\n\\section{Convergence and $o(1\/k)$ non-ergodic iteration complexity of Algorithm 1 (sGS-ADMM)}\n\nBefore deriving the desired results of Algorithm \\ref{alg:sGS-ADMM} for solving problem \\eqref{prob:D}, we present some notation and definitions for the subsequent analysis. Assume that the solution sets of \\eqref{prob:gen-convex-nuc} and \\eqref{prob:D} are nonempty. 
Then, the primal-dual solution pairs associated with problems \\eqref{prob:gen-convex-nuc} and \\eqref{prob:D} satisfy the following Karush-Kuhn-Tucker (KKT) system: \n\\begin{equation}\n\\label{KKT}\n0 \\in R({\\bf X}, {\\bf \\Xi}, {\\bf S}), \\quad {\\mathcal A}({\\bf X}) = b, \\quad \n{\\bf \\Xi} + {\\mathcal A}^*(y) + {\\bf S} = 0,\n\\end{equation}\nwith \n\\begin{equation*}\nR({\\bf X}, {\\bf \\Xi}, {\\bf S}) := \\begin{pmatrix}\n{\\bf \\Xi} + \\partial g({\\bf X}) \\\\[5pt]\n{\\bf X} + \\partial \\delta(\\norm{{\\bf S}}_2 \\le c)\n\\end{pmatrix}, \\quad ({\\bf X}, {\\bf \\Xi}, {\\bf S})\\in {\\rm dom}\\,g \\times \\Re^{p\\times p}\\times \\left\\{{\\bf S}\\in \\Re^{p\\times p}\\mid \\norm{{\\bf S}}_2 \\le c \\right\\}.\n\\end{equation*}\nDefine the KKT residual function $D:{\\rm dom}\\,g \\times \\Re^{p\\times p}\\times \\Re^n \\times \\left\\{{\\bf S}\\in \\Re^{p\\times p}\\mid \\norm{{\\bf S}}_2 \\le c \\right\\} \\to [0, +\\infty)$ as\n\\[\nD({\\bf X}, {\\bf \\Xi}, y, {\\bf S}):= {\\rm dist}^2(0, R({\\bf X}, {\\bf \\Xi}, {\\bf S})) + \\norm{{\\mathcal A}({\\bf X}) - b}^2 + \n\\norm{{\\bf \\Xi} + {\\mathcal A}^*(y) + {\\bf S}}^2.\n\\]\nWe say that $({\\bf X}, {\\bf \\Xi}, y, {\\bf S})\\in {\\rm dom}\\,g \\times \\Re^{p\\times p}\\times \\Re^n \\times \\left\\{{\\bf S}\\in \\Re^{p\\times p}\\mid \\norm{{\\bf S}}_2 \\le c \\right\\}$ is an $\\epsilon$-approximate primal-dual solution pair for problems \\eqref{prob:gen-convex-nuc} and \\eqref{prob:D} if\n$D({\\bf X}, {\\bf \\Xi}, y, {\\bf S}) \\le \\epsilon$. We show in the following theorem the global convergence and the $o(1\/k)$ iteration complexity results of Algorithm sGS-ADMM.\n\\begin{theorem}\n\t\\label{thm:sGS-ADMM}\n\tSuppose that the solution sets of \\eqref{prob:gen-convex-nuc} and \\eqref{prob:D} are nonempty. Let $\\{({\\bf\\Xi}^k,y^k,\\mathbb{S}^k,{\\mathbf{X}}^k)\\}$ be the sequence generated by Algorithm \\ref{alg:sGS-ADMM}. 
If $\\tau\\in(0,(1+\\sqrt{5}\\,)\/2)$, then the sequence $\\{({\\bf\\Xi}^k,y^k,\\mathbb{S}^k)\\}$ converges to an optimal solution of \\eqref{prob:D} and $\\{{\\mathbf{X}}^k\\}$ converges to an optimal solution of \\eqref{prob:gen-convex-nuc}.\n\tMoreover, there exists a constant $\\omega >0$ such that \n\t\\[\n\t\\min_{1\\le i \\le k} \\left\\{ D({\\bf X}^i, {\\bf \\Xi}^i, y^i, {\\bf S}^i) \\right\\} \\le \\frac{\\omega}{k}, \\ \\forall\\, k\\ge 1, \\quad{\\rm and} \\quad \n\t\\lim_{k\\to \\infty} \\left\\{ k \\min_{1\\le i \\le k} \\left\\{ D({\\bf X}^i, {\\bf \\Xi}^i, y^i, {\\bf S}^i) \\right\\} \\right\\}= 0.\n\t\\]\n\\end{theorem}\n\n\\begin{proof}\n\t~In order to use \\citep[Theorem 3]{li2016schur}, we need to write problem \\eqref{prob:D} as follows:\n\t\\begin{equation*} \n\t\\begin{array}{rll}\n\t\\min & g^*(-{\\bf\\Xi}) - \\inprod{b}{y} + {\\delta( \\norm{\\mathbb{S}}_2 \\le c)} \\\\\n\t\\mbox{s.t.} & {\\mathcal F} ({\\bf\\Xi}) + {\\mathcal A}_1^*(y) + {\\mathcal G}(\\mathbb{S}) = 0,\n\t\\end{array}\n\t\\end{equation*}\n\twhere ${\\mathcal F}, {\\mathcal A}_1$ and ${\\mathcal G}$ are linear operators such that for all $({\\bf \\Xi}, y, \\mathbb{S}) \\in \\Re^{p\\times p} \\times \\Re^n \\times \\Re^{p\\times p}$, ${\\mathcal F}({\\bf\\Xi}) = {\\bf \\Xi}$, \n\t${\\mathcal A}_1^*(y) = {\\mathcal A}^*(y)$ and ${\\mathcal G}(\\mathbb{S}) = \\mathbb{S}$. \n\tClearly, ${\\mathcal F} = {\\mathcal G} = {\\mathcal I}$ where ${\\mathcal I}:\\Re^{p\\times p} \\to \\Re^{p\\times p}$ is the identity map.\n\tTherefore, we have ${\\mathcal A}_1{\\mathcal A}_1^* \\succ 0$ and ${\\mathcal F}\\cF^* = {\\mathcal G}\\cG^* = {\\mathcal I} \\succ 0$. Hence, the assumptions and conditions in \\citep[Theorem 3]{li2016schur} are satisfied. The convergence results thus follow directly. 
Meanwhile, the non-ergodic iteration complexity result follows from \\citep[Theorem 6.1]{chen2017efficient}.\n\\end{proof}\n\n\n\\section{Proof of Theorems \\ref{thm:convergence-alg-MM} and \\ref{thm: MMconvergence}}\nWe only need to prove Theorem \\ref{thm: MMconvergence} \nas Theorem \\ref{thm:convergence-alg-MM} \nis a special case. To prove Theorem \\ref{thm: MMconvergence}, \nwe first introduce the following lemma.\t\n\\begin{lemma}\\label{lemma:decrease}\n\tSuppose that $\\{ {x}^k \\}$ is the sequence generated by Algorithm 3.\n\tThen $\\theta({x}^{k+1})\\le \\theta({x}^k) - \\frac{1}{2}\\|{x}^{k+1} - {x}^k\\|^2_{\\mathcal{G} + 2\\mathcal{T}}$.\n\\end{lemma}\n\\begin{proof}\n\n\n\n\n\n~For any $k\\geq 0$, by the optimality condition of problem (10) at\n\t${x}^{k+1}$, we know that \n\tthere exists $\\eta^{k+1}\\in \\partial p({x}^{k+1})$ such that\n\t\\begin{equation*}\\label{eq:major-k-optimality}\n\t0 = \\nabla g ({x}^k) + (\\mathcal{G} + \\mathcal{T})({x}^{k+1} -\n\tx^{k}) + \\eta^{k+1}-\\xi^k.\n\t\\end{equation*}\n\tThen for any $k\\ge 0$, we deduce\n\t\\begin{equation*}\n\t\\begin{array}{rl}\n\t& \\theta({x}^{k+1}) - \\theta({x}^k)\n\t\\le \\widehat{\\theta}({x}^{k+1};{x}^k) - \\theta({x}^k)\\\\[0.1in]\n\t= & p(x^{k+1}) - p(x^k) + \\langle {x}^{k+1} - {x}^k , \\nabla g({x}^k)-\\xi^k \\rangle +\n\t\\frac{1}{2} \\|{x}^{k+1} - {x}^k\\|^2_{\\mathcal{G}} \\\\[0.1in]\n\t\\le & \\langle \\nabla g(x^k)+\\eta^{k+1} -\\xi^k, {x}^{k+1} - {x}^k\\rangle+\n\t\\frac{1}{2} \\|{x}^{k+1} - {x}^k\\|^2_{\\mathcal{G}}\n\t\\\\[0.1in]\n\t= & - \\frac{1}{2}\\|x^{k+1} - x^k\\|^2_{\\mathcal{G} + 2\\mathcal{T}}.\n\t\\end{array}\n\t\\end{equation*}\n\tThis completes the proof of this lemma.\n\\end{proof}\n\nNow we are ready to prove Theorem \\ref{thm: MMconvergence}.\n\\begin{proof}\n\t~From the optimality condition at $x^{k+1}$, we have that \n\t\\[ 0 \\in \\nabla g ({x}^k) + (\\mathcal{G} + \\mathcal{T})({x}^{k+1} -\n\tx^{k}) + \\partial p(x^{k+1})-\\xi^k.\\]\n\tSince 
$x^{k+1} = x^k$, this implies that \n\t\\[ 0 \\in \\nabla g ({x}^k) + \\partial p({x}^{k})- \\partial q(x^k), \\]\n\ti.e., $x^k$ is a critical point.\n\tObserve that the sequence $\\{\\theta (x^{k})\\}$ is non-increasing since\n\t$${\\theta}(x^{k+1}) \\le \\widehat{\\theta}(x^{k+1}; x^{k}) \\le \\widehat{\\theta}(x^{k}; x^{k}) =\\theta(x^{k}), \\quad k\\geq 0.$$\n\tSuppose that there exists a subsequence $\\{x^{k_j}\\}$ converging to $\\bar{x}$, i.e., $\\bar{x}$ is one of the accumulation points of $\\{x^k\\}$.\n\tBy Lemma \\ref{lemma:decrease} and the assumption that $\\mathcal{G} + 2\\mathcal{T}\\succeq 0$, we know that for all $x\\in \\mathbb{X}$\n\t\\begin{align*}\n\t&\\widehat{\\theta}(x^{k_{j+1}};x^{k_{j+1}}) = \\theta(x^{k_{j+1}}) \\\\\n\t\\le &\\theta(x^{k_j+1})\\le \\widehat{\\theta}(x^{k_j+1};x^{k_j})\\le \\widehat{\\theta}(x;x^{k_j}).\n\t\\end{align*}\n\tBy letting $j\\to\\infty$ in the above inequality, we obtain that\n\t$$\n\t\\widehat{\\theta}(\\bar{x};\\bar{x})\\le \\widehat{\\theta}(x;\\bar{x}).\n\t$$\n\tBy the optimality condition of $\\widehat{\\theta}(x; \\bar{x})$, we have that\n\tthere exist $\\bar{u}\\in \\partial p(\\bar{x})$ and $\\bar{v}\\in \\partial q(\\bar{x})$ such that\n\t$$\n\t0 = \\nabla g(\\bar{x}) + \\bar{u} - \\bar{v}. 
\n\t$$\n\tThis implies that $\\left(\\nabla g(\\bar{x}) + \\partial p(\\bar{x}) \\right)\\cap \\partial q(\\bar{x})\\neq \\emptyset$.\n\tTo establish the rest of this theorem, \n\twe obtain from Lemma \\ref{lemma:decrease} that\n\t\\begin{align*}\n\t&\\lim_{t \\to + \\infty}\\frac{1}{2} \\sum_{k=0}^t \\|{x}^{k+1}-{x}^k\\|^2_{\\mathcal{G}+2\\mathcal{T}} \\\\\n\t\\le {}&\\liminf_{t\\to +\\infty} \\big( \\theta(x^0)\n\t-\\theta({x}^{t+1})\\big) \\le \\theta(x^0) < +\\infty \\,,\n\t\\end{align*}\n\twhich implies $ \\lim_{k\\to +\\infty} \\|{x}^{k+1} -\n\tx^{k}\\|_{\\mathcal{G}+2\\mathcal{T}} = 0.$ The proof of this theorem is thus complete by the positive definiteness of the operator $\\mathcal{G} + 2\\mathcal{T}$.\n\\end{proof}\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\subsubsection*{#1}}\n\n\\newcommand{S}{S}\n\\newcommand{s}{s}\n\\newcommand{\\pv}[1]{\\mathbf{#1}}\n\\newcommand{\\mathrm{t}}{\\mathrm{t}}\n\\newcommand{z_\\mr{diff}}{z_\\mr{diff}}\n\\newcommand{z_\\mr{same}}{z_\\mr{same}}\n\n\\newcommand{\\medskip}{\\medskip}\n\n\\newcommand{\\propertygroup}[2]{\\paragraph*{#1.~#2}}\n\\newcommand{\\property}[1]{\\emph{#1:}\\ }\n\n\\newcommand{\\dtxt}[1]{\\medskip\\begin{quote}%\n\\setlength{\\fboxsep}{2mm}%\n\\fbox{\\parbox{0.82\\textwidth}{\\sl #1}}%\n\\end{quote}\\medskip}\n\n\n\\newcommand{\\magn}[1]{|#1|}\n\\newcommand{\\pd}[1]{\\widetilde{#1}}\n\n\\newcommand{\\smp}[1]{\\Delta_{#1}}\n\\newcommand{\\ismp}[1]{\\Delta_{#1}^\\circ}\n\\newcommand{\\rstr}[2]{#1|_{#2}}\n\\newcommand{\\mr{supp}}{\\mr{supp}}\n\n\\newcommand{\\mr{c}}{\\mr{c}}\n\\newcommand{\\mr{d}}{\\mr{d}}\n\n\\newcommand{\\Dmax}[1]{D_\\mr{max}(#1)}\n\\newcommand{\\ptl}[1]{\\frac{\\partial}{\\partial 
#1}}\n\\newcommand{\\indprop}[1]{\\textbf{\\textsf{Q}}${}_{#1}$}\n\n\\newcommand{\\passage}[1]{\\paragraph{\\textit{#1}}}\n\n\\newcommand{\\lbl}[1]{\\label{#1}}\n\\renewcommand{\\iff}{\\Leftrightarrow}\n\\newcommand{\\trianglelefteq}{\\trianglelefteq}\n\\newcommand{\\approx}{\\approx}\n\\newcommand{\\simeq}{\\simeq}\n\\newcommand{\\um}[2]{U_{#1}^{#2}}\n\\newcommand{\\ip}[2]{\\langle #1, #2 \\rangle}\n\n\n\\section{Statement of the problem}\n\\lbl{sec:statement}\n\n\\passage{Basic definitions} Fix an integer $n \\geq 1$ throughout. A\n\\demph{similarity matrix} is an $n\\times n$ symmetric matrix $Z$ with entries\nin the interval $[0, 1]$, such that $Z_{ii} = 1$ for all $i$. A\n\\demph{(probability) distribution} is an $n$-tuple $\\pv{p} = (p_1, \\ldots,\np_n)$ with $p_i \\geq 0$ for all $i$ and $\\sum_{i = 1}^n p_i = 1$.\n\nGiven a similarity matrix $Z$ and a distribution $\\pv{p}$, thought of as a\ncolumn vector, we may form the matrix product $Z\\pv{p}$ (also a column\nvector), and we denote by $(Z\\pv{p})_i$ its $i$th entry.\n\n\\begin{lemma} \\lbl{lemma:positive}\nLet $Z$ be a similarity matrix and $\\pv{p}$ a distribution. Then $p_i \\leq\n(Z\\pv{p})_i \\leq 1$ for all $i \\in \\{1, \\ldots, n\\}$. In particular, if $p_i\n> 0$ then $(Z\\pv{p})_i > 0$.\n\\end{lemma}\n\n\\begin{proof}\nWe have\n\\[\n(Z\\pv{p})_i \n=\n\\sum_{j = 1}^n Z_{ij} p_j\n=\np_i + \\sum_{j \\neq i} Z_{ij} p_j\n\\geq\np_i\n\\]\nand $(Z \\pv{p})_i \\leq \\sum_{j = 1}^n 1 p_j = 1$.\n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\nLet $Z$ be a similarity matrix and let $q \\in [0, \\infty)$. 
The function\n$H_q^Z$ is defined on distributions $\\pv{p}$ by\n\\[\nH^Z_q(\\pv{p})\n=\n\\left\\{\n\\begin{array}{ll}\n\\displaystyle\n\\frac{1}{q - 1} \n\\left( 1 - \\sum_{i:\\,p_i > 0} p_i (Z\\pv{p})_i^{q - 1} \\right) &\n\\textrm{if } q \\neq 1 \\\\\n\\displaystyle\n- \\sum_{i:\\,p_i > 0} p_i \\log (Z\\pv{p})_i &\n\\textrm{if } q = 1.\n\\end{array}\n\\right.\n\\]\nLemma~\\ref{lemma:positive} guarantees that these definitions are valid. The\ndefinition in the case $q = 1$ is explained by the fact that $H_1^Z(\\pv{p}) =\n\\lim_{q\\to 1} H_q^Z(\\pv{p})$ (easily shown using l'H\\^opital's rule). We\ncall $H_q^Z$ the \\demph{entropy of order $q$}.\n\n\\passage{Notes on the literature} The entropies $H_q^Z$ were introduced in\nthis generality by Ricotta and Szeidl in 2006, as an index of the diversity of\nan ecological community~\\cite{RS}. Think of $n$ as the number of species,\n$Z_{ij}$ as indicating the similarity of the $i$th and $j$th species, and\n$p_i$ as the relative abundance of the $i$th species. Ricotta and Szeidl used\nnot similarities $Z_{ij}$ but dissimilarities or `distances' $d_{ij}$; the\nformulas above become equivalent to theirs on putting $Z_{ij} = 1 - d_{ij}$.\n\nThe case $Z = I$ goes back further. Something very similar to $H_q^I$, using\nlogarithms to base $2$ rather than base $e$, appeared in information theory in\n1967, in a paper of Havrda and Charv\\'at~\\cite{HC}. Later, the entropies\n$H_q^I$ were discovered in statistical ecology, in a 1982 paper of Patil and\nTaillie~\\cite{PT}. Finally, they were rediscovered in physics, in a 1988\npaper of Tsallis~\\cite{Tsa}.\n\nStill in the case $Z = I$, certain values of $q$ give famous quantities. The\nentropy $H_1^I$ is Shannon entropy (except that Shannon used logarithms to\nbase~$2$). 
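These special cases are easy to check numerically. The sketch below (a toy distribution, assuming numpy; the helper name is ours, not from the literature) evaluates $H_q^Z$ directly from the definition for $Z = I$, where $(Z\pv{p})_i = p_i$, and confirms that $q = 1$ recovers Shannon entropy, that $H_2^I(\pv{p}) = 1 - \sum_i p_i^2$, and that $H_1^Z$ is indeed the limit of $H_q^Z$ as $q \to 1$:

```python
import numpy as np

# Hedged sketch of H_q^Z from the definition above; "entropy" is our own
# helper name.  For Z = I we have (Zp)_i = p_i.

def entropy(q, Z, p):
    """H_q^Z(p) for a similarity matrix Z and a distribution p."""
    Zp = Z @ p
    mask = p > 0                  # sum only over i with p_i > 0
    if q == 1:
        return -np.sum(p[mask] * np.log(Zp[mask]))
    return (1.0 - np.sum(p[mask] * Zp[mask] ** (q - 1))) / (q - 1)

p = np.array([0.5, 0.3, 0.2, 0.0])
I = np.eye(4)

shannon = -np.sum(p[p > 0] * np.log(p[p > 0]))
assert np.isclose(entropy(1, I, p), shannon)      # q = 1: Shannon entropy
assert np.isclose(entropy(2, I, p), 1 - np.sum(p ** 2))
# H_1^Z is the q -> 1 limit of H_q^Z:
assert np.isclose(entropy(1 + 1e-9, I, p), entropy(1, I, p), atol=1e-6)
```

Note that the zero-probability entry is excluded from the sums, exactly as in the definition.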
The entropy $H_2^I$ is known in ecology as the Simpson or\nGini--Simpson index; it is the probability that two individuals chosen at\nrandom are of different species.\n\nFor general $Z$, the entropy of order $2$ is known as Rao's quadratic\nentropy~\\cite{Rao}. It is usually stated in terms of the matrix with $(i,\nj)$-entry $1 - Z_{ij}$, that is, the matrix of dissimilarities mentioned\nabove. \n\nOne way to obtain a similarity matrix is to start with a finite metric space\n$\\{a_1, \\ldots, a_n\\}$ and put $Z_{ij} = e^{-d(a_i, a_j)}$. Matrices of this\nkind are investigated in depth in~\\cite{MMS} and other papers cited therein.\nHere, metric spaces will only appear in two examples~(\\ref{eg:ultra}\nand~\\ref{eg:metric}).\n\n\\passage{The maximum entropy problem} Let $Z$ be a similarity matrix and\nlet $q \\in [0, \\infty)$. The maximum entropy problem is this:\n\\dtxt{%\nFor which distribution(s) $\\pv{p}$ is\n$H_q^Z(\\pv{p})$ maximal, and what is the maximum value?}\n\nThe solution is given in Theorem~\\ref{thm:main-entropy}. The terms used in\nthe statement of the theorem will be defined shortly. However, the following\nstriking fact can be stated immediately:\n\\dtxt{%\nThere is a distribution maximizing $H_q^Z$ for all\n$q$ simultaneously.}\nSo even though the entropies of different orders rank distributions\ndifferently, there is a distribution that is maximal for all of them.\n\nFor example, this fully explains the numerical coincidence noted in\nthe Results section of~\\cite{AKB}.\n\n\\passage{Restatement in terms of diversity}\nLet $Z$ be a similarity matrix. For each $q \\in [0, \\infty)$, define a\nfunction $D_q^Z$ on distributions $\\pv{p}$ by\n\\begin{eqnarray*}\nD_q^Z(\\pv{p}) &\n= &\n\\left\\{\n\\begin{array}{ll}\n\\left( 1 - (q - 1)H_q^Z(\\pv{p}) \\right)^\\frac{1}{1 - q} &\n\\textrm{if } q \\neq 1 \\\\\ne^{H_1^Z(\\pv{p})} &\n\\textrm{if }q = 1\n\\end{array}\n\\right. 
\\\\\n &= &\n\\left\\{\n\\begin{array}{ll}\n\\displaystyle\n\\left(\n\\sum_{i:\\,p_i > 0}\np_i (Z\\pv{p})_i^{q - 1}\n\\right)^\\frac{1}{1 - q} &\n\\textrm{if } q \\neq 1 \\\\\n\\displaystyle\n\\prod_{i:\\,p_i > 0}\n(Z\\pv{p})_i^{-p_i} &\n\\textrm{if } q = 1.\n\\end{array}\n\\right.\n\\end{eqnarray*}\nWe call $D_q^Z$ the \\demph{diversity of order $q$}. As for entropy, it is\neasily shown that $D_1^Z(\\pv{p}) = \\lim_{q \\to 1} D_q^Z(\\pv{p})$.\n\nThese diversities were introduced informally in~\\cite{EDC2}, and are explained\nand developed in~\\cite{MD}. The case $Z = I$ is well known in several fields:\nin information theory, $\\log D_q^I$ is called the R\\'enyi entropy of order\n$q$~\\cite{Ren}; in ecology, $D_q^I$ is called the Hill number of order\n$q$~\\cite{Hill}; and in economics, $1\/D_q^I$ is the Hannah--Kay measure of\nconcentration~\\cite{HK}.\n\nThe transformation between $H_q^Z$ and $D_q^Z$ is invertible and\norder-preserving (increasing). Hence the maximum entropy problem is\nequivalent to the maximum diversity problem:\n\\dtxt{%\nFor which distribution(s) $\\pv{p}$ is\n$D_q^Z(\\pv{p})$ maximal, and what is the maximum value?}\nThe solution is given in Theorem~\\ref{thm:main}. It will be more convenient\nmathematically to work with diversity rather than entropy. Thus, we prove\nresults about diversity and deduce results about entropy.\n\nWhen stated in terms of diversity, a further striking aspect of the solution\nbecomes apparent:\n\\dtxt{%\nThere is a distribution maximizing $D_q^Z$ for all\n$q$ simultaneously. The maximum value of $D_q^Z$ is the same for all $q$.}\nSo every similarity matrix has an unambiguous `maximum diversity', the\nmaximum value of $D_q^Z$ for any $q$.\n\nA similarity matrix may have more than one maximizing distribution---but the\ncollection of maximizing distributions is independent of $q > 0$. 
In other\nwords, a distribution that maximizes $D_q^Z$ for some $q$ actually maximizes\n$D_q^Z$ for all $q$ (Corollary~\\ref{cor:some-all}).\n\nThe diversities $D_q^Z$ are closely related to generalized means~\\cite{HLP},\nalso called power means. Given a finite set $I$, positive real numbers\n$(x_i)_{i \\in I}$, positive real numbers $(p_i)_{i \\in I}$ such that $\\sum_i\np_i = 1$, and $t \\in \\mathbb{R}$, the \\demph{generalized mean} of $(x_i)_{i \\in\nI}$, weighted by $(p_i)_{i \\in I}$, of order $t$, is\n\\[\n\\left\\{\n\\begin{array}{ll}\n\\displaystyle\n\\left( \\sum_{i \\in I} p_i x_i^t \\right)^{1\/t} &\n\\textrm{if } t \\neq 0 \\\\\n\\displaystyle\n\\prod_{i \\in I} x_i^{p_i} &\n\\textrm{if } t = 0.\n\\end{array}\n\\right.\n\\]\nFor example, if $p_i = p_j$ for all $i, j \\in I$ then the generalized means of\norders $1$, $0$ and $-1$ are the arithmetic, geometric and harmonic means,\nrespectively. \n\nGiven a similarity matrix $Z$ and a distribution $\\pv{p}$, take $I = \\{ i\n\\in \\{1, \\ldots, n\\} \\:|\\: p_i > 0\\}$. Then $1\/D_q^Z(\\pv{p})$ is the\ngeneralized mean of $((Z\\pv{p})_i)_{i \\in I}$, weighted by $(p_i)_{i \\in I}$,\nof order $q - 1$. We deduce the following.\n\n\\begin{lemma} \\label{lemma:means}\nLet $Z$ be a similarity matrix and $\\pv{p}$ a distribution. Then:\n\\begin{enumerate}\n\\item \\label{part:cts}\n$D_q^Z(\\pv{p})$ is continuous in $q \\in [0, \\infty)$\n\\item \\label{part:dec}\nif $(Z\\pv{p})_i =\n(Z\\pv{p})_j$ for all $i, j$ such that $p_i, p_j > 0$ then $D_q^Z(\\pv{p})$ is\nconstant over $q \\in [0, \\infty)$; otherwise, $D_q^Z(\\pv{p})$ is strictly\ndecreasing in $q \\in [0, \\infty)$\n\\item \\label{part:infty} \n$\\lim_{q \\to \\infty} D_q^Z(\\pv{p}) = 1\/\\max_{i: p_i > 0} (Z\\pv{p})_i$.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nAll of these assertions follow from standard results on generalized means.\nContinuity is clear except perhaps at $q = 1$, where it follows from Theorem~3\nof~\\cite{HLP}. 
Part~(\\ref{part:dec}) follows from Theorem~16 of~\\cite{HLP},\nand part~(\\ref{part:infty}) from Theorem~4. \n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\nIn the light of this, we define\n\\[\nD_\\infty^Z(\\pv{p}) \n= \n1 \/ \\max_{i:\\,p_i > 0} (Z\\pv{p})_i.\n\\]\nThere is no useful definition of $H_\\infty^Z$, since $\\lim_{q \\to \\infty}\nH_q^Z(\\pv{p}) = 0$ for all $Z$ and $\\pv{p}$.\n\n\n\n\\section{Preparatory results}\n\\lbl{sec:prep}\n\nHere we make some definitions and prove some lemmas in preparation for solving\nthe maximum diversity and entropy problems. Some of these definitions and\nlemmas can also be found in~\\cite{MMS} and~\\cite{AMSES}.\n\n\\emph{Convention:} for the rest of this work, unlabelled summations $\\sum$ are\nunderstood to be over all $i \\in \\{1, \\ldots, n\\}$ such that $p_i > 0$.\n\n\n\\passage{Weightings and magnitude}\n\n\\begin{defn}\nLet $Z$ be a similarity matrix. A \\demph{weighting} on $Z$ is a column vector\n$\\pv{w} \\in \\mathbb{R}^n$ such that\n\\[\nZ\\pv{w} \n= \n\\left(\\!\\!\n\\begin{array}{c}\n1 \\\\\n\\vdots \\\\\n1\n\\end{array}\n\\!\\!\\right).\n\\]\nA weighting $\\pv{w}$ is \\demph{non-negative} if $w_i \\geq 0$ for all\n$i$, and \\demph{positive} if $w_i > 0$ for all $i$. \n\\end{defn}\n\n\\begin{lemma}\nLet $\\pv{w}$ and $\\pv{x}$ be weightings on $Z$. Then $\\sum_{i = 1}^n w_i =\n\\sum_{i = 1}^n x_i$.\n\\end{lemma}\n\n\n\\begin{proof} Write $\\pv{u}$ for the column vector $(1\\ \\cdots\\ 1)^\\mathrm{t}$,\nwhere $(\\emptybk)^\\mathrm{t}$ means transpose. Then \n\\[\n\\sum_{i = 1}^n w_i\n=\n\\pv{u}^\\mathrm{t} \\pv{w}\n=\n(Z\\pv{x})^\\mathrm{t} \\pv{w}\n=\n\\pv{x}^\\mathrm{t} (Z \\pv{w})\n=\n\\pv{x}^\\mathrm{t} \\pv{u}\n=\n\\sum_{i = 1}^n x_i,\n\\]\nusing symmetry of $Z$.\n\\hfill\\ensuremath{\\Box} \n\\end{proof}\n\n\\begin{defn}\nLet $Z$ be a similarity matrix on which there exists at least one weighting.\nIts \\demph{magnitude} is $\\magn{Z} = \\sum_{i = 1}^n w_i$, for any weighting\n$\\pv{w}$ on $Z$. 
\n\\end{defn}\n\nFor example, if $Z$ is invertible then there is a unique weighting $\\pv{w}$ on\n$Z$, and $w_i$ is the sum of the $i$th row of $Z^{-1}$. So then\n\\[\n\\magn{Z} = \\sum_{i, j = 1}^n (Z^{-1})_{ij},\n\\]\nthe sum of all $n^2$ entries of $Z^{-1}$. This formula also appears\nin~\\cite{SP}, \\cite{Shi} and~\\cite{POP}, for closely related reasons to do\nwith diversity and its maximization.\n\n\n\\passage{Weight distributions}\n\n\\begin{defn}\nLet $Z$ be a similarity matrix. A \\demph{weight distribution} for $Z$ is a\ndistribution $\\pv{p}$ such that $(Z\\pv{p})_1 = \\cdots = (Z\\pv{p})_n$.\n\\end{defn}\n\n\\begin{lemma} \\lbl{lemma:wt-distribs}\nLet $Z$ be a similarity matrix. \n\\begin{enumerate}\n\\item If $Z$ admits a non-negative weighting then $\\magn{Z} > 0$.\n\\item If $\\pv{w}$ is a non-negative weighting on $Z$ then $\\pv{w}\/\\magn{Z}$ is\na weight distribution for $Z$, and this defines a one-to-one correspondence\nbetween non-negative weightings and weight distributions.\n\\item \nIf $Z$ admits a weight distribution then $Z$ admits a weighting and $\\magn{Z}\n> 0$. \n\\item \\lbl{part:wt-distribs-reciprocal}\nIf $\\pv{p}$ is a weight distribution for $Z$ then $(Z\\pv{p})_i =\n1\/\\magn{Z}$ for all $i$.\n\\end{enumerate}\n\\end{lemma}\n\n\n\\begin{proof}\n\\begin{enumerate}\n\\item Let $\\pv{w}$ be a non-negative weighting. Certainly $\\magn{Z} =\n\\sum_{i = 1}^n w_i \\geq 0$. Since we are assuming that $n \\geq 1$, the vector\n$\\pv{0}$ is not a weighting, so $w_i > 0$ for some $i$. Hence $\\magn{Z} > 0$.\n\n\\item The first part is clear. To see that this defines a one-to-one\ncorrespondence, take a weight distribution $\\pv{p}$, writing $(Z\\pv{p})_i = K$\nfor all $i$. Since $\\sum p_i = 1$, we have $p_i > 0$ for some $i$, and then\n$K = (Z\\pv{p})_i > 0$ by Lemma~\\ref{lemma:positive}. 
The vector $\\pv{w} =\n\\pv{p}\/K$ is then a non-negative weighting.\n\nThe two processes---passing from a non-negative weighting to a weight\ndistribution, and vice versa---are easily shown to be mutually inverse.\n\n\\item Follows from the previous parts.\n\n\\item Follows from the previous parts.\n\\hfill\\ensuremath{\\Box}\n\\end{enumerate}\n\\end{proof}\n\nThe first connection between magnitude and diversity is this:\n\\begin{lemma} \\lbl{lemma:mag-div}\nLet $Z$ be a similarity matrix and $\\pv{p}$ a weight distribution for $Z$.\nThen $D_q^Z(\\pv{p}) = \\magn{Z}$ for all $q \\in [0, \\infty]$.\n\\end{lemma}\n\n\\begin{proof}\nBy continuity, it is enough to prove this for $q \\neq 1, \\infty$. In that\ncase, using Lemma~\\ref{lemma:wt-distribs}(\\ref{part:wt-distribs-reciprocal}),\n\\[\nD_q^Z(\\pv{p})\n=\n\\left(\n\\sum p_i (Z\\pv{p})_i^{q - 1}\n\\right)^\\frac{1}{1 - q}\n=\n\\left(\n\\sum p_i \\magn{Z}^{1 - q}\n\\right)^\\frac{1}{1 - q}\n=\n\\magn{Z},\n\\]\nas required.\n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\n\\passage{Invariant distributions}\n\n\\begin{defn}\nLet $Z$ be a similarity matrix. A distribution $\\pv{p}$ is \\demph{invariant}\nif $D_q^Z(\\pv{p}) = D_{q'}^Z(\\pv{p})$ for all $q, q' \\in [0, \\infty]$.\n\\end{defn}\n\n\nSoon we will classify the invariant distributions. To do so, we need some\nmore notation and a lemma.\n\nGiven a similarity matrix $Z$ and a subset $B \\subseteq \\{1, \\ldots, n\\}$, let\n$Z_B$ be the matrix $Z$ restricted to $B$, so that $(Z_B)_{ij} = Z_{ij}$ ($i,\nj \\in B$). 
If $B$ has $m$ elements then $Z_B$ is an $m \\times m$ matrix, but\nit will be more convenient to index the rows and columns of $Z_B$ by the\nelements of $B$ themselves than by $1, \\ldots, m$.\n\nWe will also need to consider distributions on subsets of $\\{1, \\ldots, n\\}$.\nA distribution on $B \\subseteq \\{1, \\ldots, n\\}$ is said to be invariant, a weight\ndistribution, etc., if it is invariant, a weight distribution, etc., with\nrespect to $Z_B$. Similarly, we will sometimes speak of `weightings on $B$',\nmeaning weightings on $Z_B$. Distributions are understood to be on $\\{1,\n\\ldots, n\\}$ unless specified otherwise.\n\n\\begin{lemma} \\lbl{lemma:absent}\nLet $Z$ be a similarity matrix, let $B \\subseteq \\{1, \\ldots, n\\}$, and let\n$\\pv{r}$ be a distribution on $B$. Write $\\pv{p}$ for the distribution\nobtained by extending $\\pv{r}$ by zero. Then $D_q^{Z_B}(\\pv{r}) =\nD_q^Z(\\pv{p})$ for all $q \\in [0, \\infty]$. In particular, $\\pv{r}$ is\ninvariant if and only if $\\pv{p}$ is.\n\\end{lemma}\n\n\\begin{proof}\nFor $i \\in B$ we have $r_i = p_i$ and $(Z_B \\pv{r})_i = (Z\\pv{p})_i$. The\nresult follows immediately from the definition of diversity of order $q$.\n\\hfill\\ensuremath{\\Box} \n\\end{proof}\n\nBy Lemma~\\ref{lemma:mag-div}, any weight distribution is invariant, and by\nLemma~\\ref{lemma:absent}, any extension by zero of a weight distribution is\nalso invariant. We will prove that these are all the invariant distributions\nthere are.\n\nFor a distribution $\\pv{p}$ we write $\\mr{supp}(\\pv{p}) = \\{ i \\in \\{1, \\ldots,\nn\\} \\:|\\: p_i > 0\\}$, the \\demph{support} of $\\pv{p}$.\n\nLet $Z$ be a similarity matrix. 
Given $\\emptyset \\neq B \\subseteq \\{1, \\ldots,\nn\\}$ and a non-negative weighting $\\pv{w}$ on $Z_B$, let $\\pd{\\pv{w}}$ be the\ndistribution obtained by first taking the weight distribution\n$\\pv{w}\/\\magn{Z_B}$ on $B$, then extending by zero to $\\{1, \\ldots, n\\}$.\n\n\n\\begin{propn} \\lbl{propn:invt}\nLet $Z$ be a similarity matrix and $\\pv{p}$ a distribution. The following\nare equivalent:\n\\begin{enumerate}\n\\item \\lbl{part:invt-invt}\n$\\pv{p}$ is invariant\n\\item \\lbl{part:invt-support}\n$(Z\\pv{p})_i = (Z\\pv{p})_j$ for all $i, j \\in \\mr{supp}(\\pv{p})$\n\\item \\lbl{part:invt-wt-dist}\n$\\pv{p}$ is the extension by zero of a weight distribution on a nonempty\nsubset of $\\{1, \\ldots, n\\}$ \n\\item \\lbl{part:invt-weighting}\n$\\pv{p} = \\pd{\\pv{w}}$ for some non-negative weighting $\\pv{w}$ on some\nnonempty subset of $\\{1, \\ldots, n\\}$.\n\\end{enumerate}\n\\end{propn}\n\n\\begin{proof}\n(\\ref{part:invt-invt}$\\,\\Rightarrow\\,$\\ref{part:invt-support}): Follows from\nLemma~\\ref{lemma:means}. \n\n(\\ref{part:invt-support}$\\,\\Rightarrow\\,$\\ref{part:invt-wt-dist}): Suppose\nthat~(\\ref{part:invt-support}) holds, and write $B = \\mr{supp}(\\pv{p})$. The\ndistribution $\\pv{p}$ on $\\{1, \\ldots, n\\}$ restricts to a distribution\n$\\pv{r}$ on $B$. This $\\pv{r}$ is a weight distribution on\n$B$, since for all $i \\in B$ we have $(Z_B \\pv{r})_i = (Z\\pv{p})_i$,\nwhich by~(\\ref{part:invt-support}) is constant over $i \\in B$. Clearly\n$\\pv{p}$ is the extension by zero of $\\pv{r}$.\n\n(\\ref{part:invt-wt-dist}$\\,\\Rightarrow\\,$\\ref{part:invt-invt}): Suppose that $\\pv{p}$\nis the extension by zero of a weight distribution $\\pv{r}$ on a\nnonempty subset $B \\subseteq \\{1, \\ldots, n\\}$. 
Then for all $q \\in [0, \\infty]$,\n\\[\nD_q^Z(\\pv{p})\n=\nD_q^{Z_B}(\\pv{r})\n=\n\\magn{Z_B}\n\\]\nby Lemmas~\\ref{lemma:absent} and~\\ref{lemma:mag-div} respectively; hence\n$\\pv{p}$ is invariant.\n\n(\\ref{part:invt-wt-dist}$\\iff$\\ref{part:invt-weighting}): Follows from\nLemma~\\ref{lemma:wt-distribs}.\n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\nThere is at least one invariant distribution on any given similarity matrix.\nFor we may choose $B$ to be a one-element subset, which has a unique\nnon-negative weighting $\\pv{w} = (1)$, and this gives the invariant\ndistribution $\\pd{\\pv{w}} = (0, \\ldots, 0, 1, 0, \\ldots, 0)$.\n\n\n\\passage{Maximizing distributions}\n\n\n\\begin{defn} \\lbl{defn:max}\nLet $Z$ be a similarity matrix. Given $q \\in [0,\n\\infty]$, a distribution $\\pv{p}$ is \\demph{$q$-maximizing} if $D_q^Z(\\pv{p})\n\\geq D_q^Z(\\pv{p}')$ for all distributions $\\pv{p}'$. A distribution is\n\\demph{maximizing} if it is $q$-maximizing for all $q \\in [0, \\infty]$.\n\\end{defn}\n\nIt makes no difference to the definition of `maximizing' if we omit $q =\n\\infty$; nor does it make a difference to either definition if we replace\ndiversity $D_q^Z$ by entropy $H_q^Z$.\n\nWe will eventually show that every similarity matrix has a\nmaximizing distribution.\n\n\\begin{lemma} \\lbl{lemma:max}\nLet $Z$ be a similarity matrix and $\\pv{p}$ an invariant distribution. Then\n$\\pv{p}$ is $0$-maximizing if and only if it is maximizing.\n\\end{lemma}\n\n\\begin{proof}\nSuppose that $\\pv{p}$ is $0$-maximizing. Then for all $q \\in [0, \\infty]$ and\nall distributions $\\pv{p}'$, \n\\[\nD_q^Z(\\pv{p}) \n=\nD_0^Z(\\pv{p})\n\\geq\nD_0^Z(\\pv{p}')\n\\geq\nD_q^Z(\\pv{p}'),\n\\]\nusing invariance in the first step and Lemma~\\ref{lemma:means} in the\nlast. So $\\pv{p}$ is maximizing. The converse is immediate, since a\nmaximizing distribution is by definition $0$-maximizing.\n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\\begin{lemma} \\lbl{lemma:smaller-max}\nLet $Z$ be a similarity matrix and $B \\subseteq \\{1, \\ldots, n\\}$. 
Suppose that\n$\\sup_{\\pv{r}} D_0^{Z_B}(\\pv{r}) \\geq \\sup_{\\pv{p}} D_0^Z(\\pv{p})$, where the\nfirst supremum is over distributions $\\pv{r}$ on $B$ and the second is over\ndistributions $\\pv{p}$ on $\\{1, \\ldots, n\\}$. Suppose also that $Z_B$ admits\nan invariant maximizing distribution. Then so does $Z$.\n\\end{lemma}\n(In fact, $\\sup_{\\pv{r}} D_0^{Z_B}(\\pv{r}) \\leq \\sup_{\\pv{p}} D_0^Z(\\pv{p})$\nin any case, by Lemma~\\ref{lemma:absent}. So the `$\\geq$' in the statement\nof the present lemma could equivalently be replaced by `$=$'.)\n\n\\begin{proof}\nLet $\\pv{r}$ be an invariant maximizing distribution on $Z_B$. Define a\ndistribution $\\pv{p}$ on $\\{1, \\ldots, n\\}$ by extending $\\pv{r}$ by zero. By\nLemma~\\ref{lemma:absent}, $\\pv{p}$ is invariant. Using\nLemma~\\ref{lemma:absent} again,\n\\[\nD_0^Z(\\pv{p})\n=\nD_0^{Z_B}(\\pv{r})\n=\n\\sup_{\\pv{r}'} D_0^{Z_B}(\\pv{r}')\n\\geq\n\\sup_{\\pv{p}'} D_0^Z(\\pv{p}'),\n\\]\nso $\\pv{p}$ is $0$-maximizing. Then by Lemma~\\ref{lemma:max}, $\\pv{p}$ is\nmaximizing. \n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\n\\passage{Decomposition}\n\nLet $Z$ be a similarity matrix. Subsets $B$ and $B'$ of $\\{1, \\ldots, n\\}$\nare \\demph{complementary} (for $Z$) if $B \\cup B' = \\{1, \\ldots, n\\}$, $B \\cap\nB' = \\emptyset$, and $Z_{i i'} = 0$ for all $i \\in B$ and $i' \\in B'$. For\nexample, there exist nonempty complementary subsets if $Z$ can be expressed as\na nontrivial block sum\n\\[\n\\left(\\!\\!\n\\begin{array}{cc}\nX &0 \\\\\n0 &X'\n\\end{array}\n\\!\\!\\right).\n\\]\n\nGiven a distribution $\\pv{p}$ and a subset $B \\subseteq \\{1, \\ldots, n\\}$ such that\n$p_i > 0$ for some $i \\in B$, let\n$\\rstr{\\pv{p}}{B}$ be the distribution on $B$ defined by \n\\[\n(\\rstr{\\pv{p}}{B})_i \n=\n\\frac{p_i}{\\sum_{j \\in B} p_j}.\n\\] \n\n\\begin{lemma} \\lbl{lemma:complementary-basic}\nLet $Z$ be a similarity matrix, and let $B$ and $B'$ be nonempty complementary\nsubsets of $\\{1, \\ldots, n\\}$. 
Then:\n\\begin{enumerate}\n\\item \\lbl{part:complementary-weightings}\nFor any weightings $\\pv{v}$ on $Z_B$ and $\\pv{v}'$ on $Z_{B'}$,\nthere is a weighting $\\pv{w}$ on $Z$ defined by\n\\[\nw_i\n=\n\\left\\{\n\\begin{array}{ll}\nv_i &\\textrm{if } i \\in B \\\\\nv'_i &\\textrm{if } i \\in B'.\n\\end{array}\n\\right. \n\\]\n\\item \\lbl{part:complementary-invt}\nFor any invariant distributions $\\pv{r}$ on $B$ and $\\pv{r}'$ on $B'$, there\nexists an invariant distribution $\\pv{p}$ on $\\{1, \\ldots, n\\}$ such that\n$\\rstr{\\pv{p}}{B} = \\pv{r}$ and $\\rstr{\\pv{p}}{B'} = \\pv{r}'$.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\n\\begin{enumerate}\n\\item For $i \\in B$, we have\n\\[\n(Z\\pv{w})_i\n=\n\\sum_{j \\in B} Z_{ij} v_j + \\sum_{j \\in B'} Z_{ij} v'_j\n=\n\\sum_{j \\in B} (Z_B)_{ij} v_j\n=\n(Z_B \\pv{v})_i \n=\n1.\n\\]\nSimilarly, $(Z\\pv{w})_i = 1$ for all $i \\in B'$. So $\\pv{w}$ is a weighting.\n\n\\item By Proposition~\\ref{propn:invt}, $\\pv{r} = \\pd{\\pv{v}}$ for some\nnon-negative weighting $\\pv{v}$ on some nonempty subset $C \\subseteq B$.\nSimilarly, $\\pv{r}' = \\pd{\\pv{v}'}$ for some non-negative weighting $\\pv{v}'$\non some nonempty $C' \\subseteq B'$. By~(\\ref{part:complementary-weightings}),\nthere is a non-negative weighting $\\pv{w}$ on the nonempty set $C \\cup C'$\ndefined by \n\\[\nw_i\n=\n\\left\\{\n\\begin{array}{ll}\nv_i &\\textrm{if } i \\in C \\\\\nv'_i &\\textrm{if } i \\in C'.\n\\end{array}\n\\right.\n\\]\nLet $\\pv{p} = \\pd{\\pv{w}}$, a distribution on $\\{1, \\ldots, n\\}$, which is\ninvariant by Proposition~\\ref{propn:invt}. 
For $i \\in C$ we have\n\\begin{eqnarray*}\n(\\rstr{\\pv{p}}{B})_i &= &\n\\frac{p_i}{\\sum_{j \\in B} p_j}\n=\n\\frac{w_i\/\\magn{Z_{C \\cup C'}}}%\n{\\sum_{j \\in C} w_j\/\\magn{Z_{C \\cup C'}}} \\\\\n &= &\n\\frac{w_i}{\\sum_{j \\in C} w_j}\n=\n\\frac{v_i}{\\sum_{j\\in C} v_j} \\\\\n &= &\nr_i.\n\\end{eqnarray*}\nFor $i \\in B\\setminus C$ we have\n\\[\n(\\rstr{\\pv{p}}{B})_i \n=\n\\frac{p_i}{\\sum_{j \\in B} p_j}\n=\n0\n=\nr_i.\n\\]\nHence $\\rstr{\\pv{p}}{B} = \\pv{r}$, and similarly $\\rstr{\\pv{p}}{B'} =\n\\pv{r}'$. \n\\hfill\\ensuremath{\\Box}\n\\end{enumerate}\n\\end{proof}\n\n\\begin{lemma} \\lbl{lemma:complementary-D0}\nLet $Z$ be a similarity matrix, let $B$\nand $B'$ be complementary subsets of $\\{1, \\ldots, n\\}$, and let $\\pv{p}$ be a\ndistribution on $\\{1, \\ldots, n\\}$ such that $p_i > 0$ for some $i \\in B$ and\n$p_i > 0$ for some $i \\in B'$. Then \n\\[\nD_0^Z(\\pv{p})\n=\nD_0^{Z_B}(\\rstr{\\pv{p}}{B}) + \nD_0^{Z_{B'}}(\\rstr{\\pv{p}}{B'}).\n\\]\n\\end{lemma}\n\n\\begin{proof}\nBy definition,\n\\[\nD_0^Z(\\pv{p})\n=\n\\sum \\frac{p_i}{(Z\\pv{p})_i}\n=\n\\sum_{i \\in B:\\ p_i > 0} \\frac{p_i}{(Z\\pv{p})_i}\n+\n\\sum_{i \\in B':\\ p_i > 0} \\frac{p_i}{(Z\\pv{p})_i}.\n\\]\nNow for $i \\in B$, \n\\[\np_i \n=\n\\left( \\sum_{j \\in B} p_j \\right)\n\\left( \\rstr{\\pv{p}}{B} \\right)_i\n\\]\nby definition of $\\rstr{\\pv{p}}{B}$, and \n\\[\n(Z\\pv{p})_i \n=\n\\sum_{j \\in B} Z_{ij} p_j + \\sum_{j \\in B'} Z_{ij} p_j\n=\n\\sum_{j \\in B} (Z_B)_{ij} p_j\n=\n\\left( \\sum_{j \\in B} p_j \\right)\n(Z_B \\rstr{\\pv{p}}{B})_i.\n\\]\nSimilar equations hold for $B'$, so\n\\begin{eqnarray*}\nD_0^Z(\\pv{p}) &\n= &\n\\sum_{i \\in B:\\ p_i > 0} \n\\frac{(\\rstr{\\pv{p}}{B})_i}{(Z_B \\rstr{\\pv{p}}{B})_i}\n+\n\\sum_{i \\in B':\\ p_i > 0} \n\\frac{(\\rstr{\\pv{p}}{B'})_i}{(Z_{B'} \\rstr{\\pv{p}}{B'})_i} \\\\\n &= &\nD_0^{Z_B}(\\rstr{\\pv{p}}{B}) + \nD_0^{Z_{B'}}(\\rstr{\\pv{p}}{B'}),\n\\end{eqnarray*}\nas 
required.\n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\n\\begin{propn} \\lbl{propn:comp-invt-max}\nLet $Z$ be a similarity matrix and let $B$ and $B'$ be nonempty complementary\nsubsets of $\\{1, \\ldots, n\\}$. Suppose that $Z_B$ and $Z_{B'}$ each admit an\ninvariant maximizing distribution. Then so does $Z$.\n\\end{propn}\n\n\\begin{proof}\nChoose invariant maximizing distributions $\\pv{r}$ on $B$ and $\\pv{r}'$ on\n$B'$. By\nLemma~\\ref{lemma:complementary-basic}(\\ref{part:complementary-invt}), there\nexists an invariant distribution $\\pv{p}$ on $\\{1, \\ldots, n\\}$ such that\n$\\rstr{\\pv{p}}{B} = \\pv{r}$ and $\\rstr{\\pv{p}}{B'} = \\pv{r}'$. I claim that\n$\\pv{p}$ is maximizing. Indeed, let $\\pv{s}$ be a distribution on $\\{1,\n\\ldots, n\\}$. If $s_i > 0$ for some $i \\in B$ and $s_i > 0$ for some $i \\in\nB'$ then\n\\[\nD_0^Z(\\pv{s})\n=\nD_0^{Z_B}(\\rstr{\\pv{s}}{B}) + D_0^{Z_{B'}}(\\rstr{\\pv{s}}{B'}) \n\\leq\nD_0^{Z_B}(\\pv{r}) + D_0^{Z_{B'}}(\\pv{r}')\n\\]\nby Lemma~\\ref{lemma:complementary-D0}. If not then without loss of\ngenerality, $s_i = 0$ for all $i \\in B'$; then $s_i > 0$ for some $i \\in B$,\nand\n\\[\nD_0^Z(\\pv{s})\n=\nD_0^{Z_B}(\\rstr{\\pv{s}}{B})\n\\leq\nD_0^{Z_B}(\\pv{r})\n\\leq \nD_0^{Z_B}(\\pv{r}) + D_0^{Z_{B'}}(\\pv{r}')\n\\]\nby Lemma~\\ref{lemma:absent}. So in any case we have\n\\begin{eqnarray*}\nD_0^Z(\\pv{s}) \n &\\leq &\nD_0^{Z_B}(\\pv{r}) + D_0^{Z_{B'}}(\\pv{r}') \\\\\n &= &\nD_0^{Z_B}(\\rstr{\\pv{p}}{B}) + D_0^{Z_{B'}}(\\rstr{\\pv{p}}{B'}) \\\\\n &= &\nD_0^Z(\\pv{p}),\n\\end{eqnarray*}\nusing Lemma~\\ref{lemma:complementary-D0} in the last step. Hence $\\pv{p}$ is\n$0$-maximizing, and by Lemma~\\ref{lemma:max}, $\\pv{p}$ is maximizing.\n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\n\\passage{Positive definite similarity matrices}\nThe solution to the maximum diversity problem turns out to be simpler when the\nsimilarity matrix is positive definite and satisfies certain further\nconditions. 
Here are some preparatory results. They are not\nneeded for the proof of the main theorem~(\\ref{thm:main}) itself, but \nwill be used for the corollaries in Section~\\ref{sec:cors}.\n\n\n\\begin{lemma} \\lbl{lemma:pos-def-basic}\nLet $Z$ be a positive definite similarity matrix. Then $Z$ has a unique\nweighting and $\\magn{Z} > 0$.\n\\end{lemma}\n\n\\begin{proof}\nA positive definite matrix is invertible, so $Z$ has a unique weighting\n$\\pv{w}$. By the definitions of magnitude and weighting,\n\\[\n\\magn{Z}\n=\n\\sum_{i = 1}^n w_i\n=\n\\pv{w}^\\mathrm{t} Z \\pv{w}.\n\\]\nBut $n \\geq 1$, so $\\pv{0}$ is not a weighting, so $\\pv{w} \\neq \\pv{0}$; then\nsince $Z$ is positive definite, $\\pv{w}^\\mathrm{t} Z \\pv{w} > 0$.\n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\n\\begin{lemma} \\lbl{lemma:pos-def-sup}\nLet $Z$ be a positive definite similarity matrix. Then \n\\[\n\\magn{Z}\n=\n\\sup_{\\pv{x}}\n\\frac{(\\sum_{i = 1}^n x_i)^2}{\\pv{x}^\\mathrm{t} Z \\pv{x}}\n\\]\nwhere the supremum is over all column vectors $\\pv{x} \\neq \\pv{0}$. The\npoints at which the supremum is attained are exactly the nonzero scalar\nmultiples of the unique weighting on $Z$.\n\\end{lemma}\n\n\\begin{proof}\nSince $Z$ is positive definite, there is an inner product\n$\\ip{-}{-}$ on $\\mathbb{R}^n$ defined by\n\\[\n\\ip{\\pv{x}}{\\pv{y}} = \\pv{x}^\\mathrm{t} Z \\pv{y}\n\\]\n($\\pv{x}, \\pv{y} \\in \\mathbb{R}^n$). The Cauchy--Schwarz inequality states that\nfor all $\\pv{x}, \\pv{y} \\in \\mathbb{R}^n$,\n\\[\n\\ip{\\pv{x}}{\\pv{x}} \\cdot \\ip{\\pv{y}}{\\pv{y}}\n\\geq\n\\ip{\\pv{x}}{\\pv{y}}^2\n\\]\nwith equality if and only if one of $\\pv{x}$ and $\\pv{y}$ is a scalar multiple\nof the other. Let $\\pv{y}$ be the unique weighting on $Z$. 
Then the\ninequality states that for all $\\pv{x} \\in \\mathbb{R}^n$,\n\\[\n\\pv{x}^\\mathrm{t} Z \\pv{x} \\cdot \\magn{Z}\n\\geq\n\\left(\\sum_{i = 1}^n x_i \\right)^2.\n\\]\nSince $\\pv{y} \\neq \\pv{0}$, equality holds if and only if $\\pv{x}$ is a scalar\nmultiple of $\\pv{y}$. The result follows. \n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\nA vector $\\pv{x}$ is \\demph{nowhere zero} if $x_i \\neq 0$ for all $i$.\n\n\\begin{propn} \\lbl{propn:pos-def-sub}\nLet $Z$ be a positive definite similarity matrix and $B \\subseteq \\{1, \\ldots,\nn\\}$. Then $Z_B$ is positive definite and $\\magn{Z_B} \\leq \\magn{Z}$.\nThe inequality is strict if $B$ is a proper subset and the unique weighting on\n$Z$ is nowhere zero.\n\\end{propn}\n\n\\begin{proof}\nSuppose without loss of generality that $B = \\{1, \\ldots, m\\}$, where $0 \\leq m\n\\leq n$. \nLet $\\pv{y}$ be an $m$-dimensional column vector and write \n\\[\n\\pv{x} \n=\n(y_1, \\ldots, y_m, 0, \\ldots, 0)^\\mathrm{t}.\n\\]\nThen\n\\begin{equation} \\label{eq:quad-form}\n\\pv{y}^\\mathrm{t} Z_B \\pv{y}\n=\n\\sum_{i, j = 1}^m y_i (Z_B)_{ij} y_j\n=\n\\sum_{i, j = 1}^n x_i Z_{ij} x_j\n= \n\\pv{x}^\\mathrm{t} Z \\pv{x}\n\\end{equation}\nand\n\\begin{equation} \\label{eq:square}\n\\left( \\sum_{i = 1}^m y_i \\right)^2\n=\n\\left( \\sum_{i = 1}^n x_i \\right)^2.\n\\end{equation}\nBy~(\\ref{eq:quad-form}) and positive definiteness of $Z$, we have\n$\\pv{y}^\\mathrm{t} Z_B \\pv{y} \\geq 0$, with equality if and only if $\\pv{x} = 0$,\nif and only if $\\pv{y} = 0$. So $Z_B$ is positive definite. Then\nby~(\\ref{eq:quad-form}), (\\ref{eq:square}) and Lemma~\\ref{lemma:pos-def-sup},\n$\\magn{Z_B} \\leq \\magn{Z}$. \n\nNow suppose that $m < n$ and the weighting $\\pv{w}$ on $Z$ is nowhere zero.\nThe supremum in Lemma~\\ref{lemma:pos-def-sup} is attained only at nonzero\nscalar multiples of $\\pv{w}$; in particular, any vector $\\pv{x}$ at which it\nis attained satisfies $x_n \\neq 0$. 
Let $\\pv{y}$ be the unique weighting\non $Z_B$ and let $\\pv{x}$ be the corresponding $n$-dimensional column vector,\nas above. Since $x_n = 0$, we have\n\\[\n\\magn{Z_B}\n=\n\\frac{(\\sum_{i = 1}^m y_i)^2}{\\pv{y}^\\mathrm{t} Z_B \\pv{y}}\n=\n\\frac{(\\sum_{i = 1}^n x_i)^2}{\\pv{x}^\\mathrm{t} Z \\pv{x}}\n<\n\\magn{Z},\n\\]\nas required. \n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\\begin{lemma} \\lbl{lemma:scattered}\nLet $Z$ be a similarity matrix with $Z_{ij} < 1\/(n - 1)$ for all $i \\neq j$.\nThen $Z$ is positive definite, and the unique weighting on $Z$ is positive.\n\\end{lemma}\n\n\\begin{proof}\nTheorem~2 of~\\cite{AMSES} shows that $Z$ is positive definite. Now,\nfor each $i \\in \\{1, \\ldots, n\\}$ and $r \\geq 0$, put\n\\[\nc_{i, r}\n= \n\\sum_{i = i_0 \\neq \\cdots \\neq i_r} \nZ_{i_0 i_1} Z_{i_1 i_2} \\cdots Z_{i_{r - 1} i_r},\n\\]\nwhere the sum is over all $i_0, \\ldots, i_r \\in \\{1, \\ldots, n\\}$ such\nthat $i_0 = i$ and $i_{s - 1} \\neq i_s$ whenever $1 \\leq s \\leq r$. In\nparticular, $c_{i, 0} = 1$. Write $\\gamma = \\max_{j \\neq k} Z_{jk}$. Then\nfor all $r \\geq 0$,\n\\begin{equation} \\label{eq:wt-alt}\nc_{i, r + 1} \n\\leq \n\\sum_{i = i_0 \\neq \\cdots \\neq i_r \\neq i_{r + 1}}\nZ_{i_0 i_1} Z_{i_1 i_2} \\cdots Z_{i_{r - 1} i_r} \\gamma\n=\n(n - 1)\\gamma \\cdot c_{i, r}.\n\\end{equation}\nHence $c_{i, r} \\leq ((n - 1)\\gamma)^r$ for all $r \\geq 0$; and $(n - 1)\\gamma\n< 1$, so the sum $w_i := \\sum_{r = 0}^\\infty (-1)^r c_{i, r}$ converges.\nAgain using~(\\ref{eq:wt-alt}), we have $c_{i, r + 1} < c_{i, r}$ for all $r$,\nso $w_i > 0$.\n\nIt remains to show that $\\pv{w} = (w_1, \\ldots, w_n)^\\mathrm{t}$ is a weighting.\nLet $i \\in \\{1, \\ldots, n\\}$. 
Then\n\\begin{eqnarray*}\n(Z\\pv{w})_i &= &\nw_i + \\sum_{j \\neq i} Z_{ij} w_j \\\\\n &= &\nw_i + \n\\sum_{j \\neq i} Z_{ij} \n\\sum_{r = 0}^\\infty (-1)^r\n\\sum_{j = j_0 \\neq \\cdots \\neq j_r} Z_{j_0 j_1} \\cdots Z_{j_{r-1} j_r} \\\\\n &= &\nw_i - \n\\sum_{r = 0}^\\infty (-1)^{r + 1} c_{i, r + 1} \\\\\n &= &\nw_i - (w_i - c_{i, 0}) \n\\\\\n &= &\n1,\n\\end{eqnarray*}\nas required.\n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\n\\pagebreak\n\n\n\\section{The main theorem}\n\n\n\\passage{Solution to the maximum diversity problem}\n\n\\begin{thm}[Main Theorem] \\lbl{thm:main}\nLet $Z$ be a similarity matrix. Then:\n\\begin{enumerate}\n\\item \\lbl{part:main-value}\nFor all $q \\in [0, \\infty]$,\n\\begin{equation} \\label{eq:sup-max}\n\\sup_{\\pv{p}} D_q^Z(\\pv{p})\n=\n\\max_B \\magn{Z_B}\n\\end{equation}\nwhere the supremum is over all distributions $\\pv{p}$ and the\nmaximum is over all subsets $B \\subseteq \\{1, \\ldots, n\\}$ such that $Z_B$\nadmits a non-negative weighting.\n\\item \\lbl{part:main-place}\nThe maximizing distributions are precisely those of the form\n$\\pd{\\pv{w}}$, where $\\pv{w}$ is a non-negative weighting on a subset\n$B \\subseteq \\{1, \\ldots, n\\}$ such that $\\magn{Z_B}$ attains the\nmaximum~(\\ref{eq:sup-max}). \n\\end{enumerate}\nIn particular, there exists a maximizing distribution, and the maximum\ndiversity of order $q$ is the same for all $q \\in [0, \\infty]$.\n\\end{thm}\n\nFor the definitions, including that of `maximizing distribution', see\nSection~\\ref{sec:prep}.\nThe proof is given later in this section. First we make some remarks on\ncomputation and on maximum entropy. 
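Part~(\ref{part:main-value}) of the theorem invites a direct computation, elaborated in the remarks on computation below: enumerate the subsets $B$, test each $Z_B$ for a non-negative weighting, and take the largest magnitude found. Here is a minimal sketch in Python (using NumPy; an illustration only, not part of the formal development). For simplicity it solves $Z_B \pv{w} = (1, \ldots, 1)^\mathrm{t}$ only when $Z_B$ is invertible, so a singular submatrix that nevertheless admits a non-negative weighting would be missed; an exact treatment would instead decide feasibility of the linear system with non-negativity constraints, e.g.\ by linear programming.

```python
import itertools
import numpy as np

def max_diversity(Z, tol=1e-9):
    """Brute-force maximum diversity of a similarity matrix Z.

    Enumerates every nonempty subset B of {0, ..., n-1}; if the
    principal submatrix Z_B has a non-negative weighting w (a solution
    of Z_B w = (1, ..., 1) with w >= 0), records its magnitude
    |Z_B| = sum(w).  Returns the largest magnitude found and the
    subsets attaining it.  Singular Z_B are skipped in this sketch.
    """
    n = Z.shape[0]
    best, argmax = 0.0, []
    for m in range(1, n + 1):
        ones = np.ones(m)
        for B in itertools.combinations(range(n), m):
            ZB = Z[np.ix_(B, B)]
            try:
                w = np.linalg.solve(ZB, ones)
            except np.linalg.LinAlgError:
                continue  # singular Z_B: not handled in this sketch
            if (w >= -tol).all():          # non-negative weighting found
                mag = w.sum()              # the magnitude |Z_B|
                if mag > best + tol:
                    best, argmax = mag, [B]
                elif abs(mag - best) <= tol:
                    argmax.append(B)
    return best, argmax
```

For $Z = I$ this returns magnitude $n$, attained only on the full subset, consistent with the uniform distribution being the unique maximizing distribution in that case.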
\n\nThe \\demph{maximum diversity} of a similarity matrix $Z$ is $\\Dmax{Z} :=\n\\sup_\\pv{p} D_q^Z(\\pv{p})$, which by Theorem~\\ref{thm:main} is independent of\nthe value of $q \\in [0, \\infty]$.\n\n\n\\passage{Remarks on computation}\nSuppose that we are given a similarity matrix $Z$ and want to compute its\nmaximizing distribution(s) and maximum diversity. The theorem gives the\nfollowing algorithm. For each of the $2^n$ subsets $B$ of $\\{1, \\ldots, n\\}$:\n\\begin{itemize}\n\\item perform some simple linear algebra to decide whether $Z_B$ admits a\nnon-negative weighting\n\\item if it does, tag $B$ as `good' and record the magnitude $\\magn{Z_B}$\n(the sum of the entries of any weighting).\n\\end{itemize}\nThe maximum of all the recorded magnitudes is the maximum\ndiversity $\\Dmax{Z}$. For each good $B$ such that $\\magn{Z_B} =\n\\Dmax{Z}$, find all non-negative weightings $\\pv{w}$ on $Z_B$; the\ncorresponding distributions $\\pd{\\pv{w}}$ are the maximizing distributions.\n\nThis algorithm takes exponentially many steps. However, each step is fast, so\nit might be possible to handle reasonably large values of $n$ in a reasonable\nlength of time. Moreover, the results of Section~\\ref{sec:cors} may allow\nthe speed of the algorithm to be improved.\n\n\\passage{Solution to the maximum entropy problem}\nWe can translate the solution to the maximum \\emph{diversity} problem into a\nsolution to the maximum \\emph{entropy} problem. The first part, giving the\nvalue of the maximum, becomes more complicated. The second part, giving the\nmaximizing distribution(s), is unchanged.\n\n\\begin{thm} \\lbl{thm:main-entropy}\nLet $Z$ be a similarity matrix. 
Then:\n\\begin{enumerate}\n\\item \nFor all $q \\in [0, \\infty)$,\n\\begin{equation} \\label{eq:sup-max-ent}\n\\sup_{\\pv{p}} H_q^Z(\\pv{p})\n=\n\\left\\{\n\\begin{array}{ll}\n\\max_B \\frac{1}{q - 1} \\left( 1 - \\magn{Z_B}^{1 - q} \\right) &\n\\textrm{if } q \\neq 1 \\\\\n\\max_B \\log \\magn{Z_B} &\n\\textrm{if } q = 1\n\\end{array}\n\\right.\n\\end{equation}\nwhere the supremum is over all distributions $\\pv{p}$ and the\nmaxima are over all subsets $B \\subseteq \\{1, \\ldots, n\\}$ such that $Z_B$\nadmits a non-negative weighting.\n\\item\nThe maximizing distributions are precisely those of the form\n$\\pd{\\pv{w}}$, where $\\pv{w}$ is a non-negative weighting on a subset\n$B \\subseteq \\{1, \\ldots, n\\}$ such that $\\magn{Z_B}$ is maximal among all subsets\nadmitting a non-negative weighting.\n\\end{enumerate}\nIn particular, there exists a maximizing distribution.\n\\end{thm}\n\n\\begin{proof}\nThis follows almost immediately from Theorem~\\ref{thm:main}, using the\ndefinition of $D_q^Z$ in terms of $H_q^Z$. Note that on the right-hand side\nof~(\\ref{eq:sup-max-ent}), the expressions $\\frac{1}{q - 1}(1 - \\magn{Z_B}^{1\n- q})$ and $\\log \\magn{Z_B}$ are increasing, injective functions of\n$\\magn{Z_B}$, so a subset $B$ maximizes any one of them if and only if it\nmaximizes $\\magn{Z_B}$. \\hfill\\ensuremath{\\Box}\n\\end{proof}\n\nThe part of Theorem~\\ref{thm:main} stating that the maximum diversity of order\n$q$ is the same for all values of $q$ has no clean statement in terms of\nentropy.\n\n\n\\passage{Diversity of order zero}\n\nOur proof of Theorem~\\ref{thm:main} will depend on an analysis of the function\n$D_0^Z$, diversity of order zero. The first step is to find its critical\npoints, and for that we need a technical lemma.\n\n\\begin{lemma} \\lbl{lemma:skew}\nLet $m \\geq 1$, let $Y$ be an $m \\times m$ real skew-symmetric matrix, and let\n$\\pv{x} \\in (0, \\infty)^m$. 
Suppose that $Y_{ij} \\geq 0$ whenever $i \\geq j$\nand that $\\sum_{j=1}^m Y_{ij} x_j$ is independent of $i \\in \\{1, \\ldots, m\\}$.\nThen $Y = 0$.\n\\end{lemma}\n\n\\begin{proof}\nThis is true for $m = 1$; suppose inductively that $m \\geq 2$. We\nhave \n\\[\n\\sum_{j = 1}^m Y_{1j} x_j \n= \n\\sum_{j = 1}^m Y_{mj} x_j\n\\]\nwith $Y_{1j} x_j = -Y_{j1} x_j \\leq 0$ and $Y_{mj} x_j \\geq 0$ for all $j$;\nhence both sides are $0$ and $Y_{mj} x_j = 0$ for all $j$. So for all $j$ we\nhave $Y_{mj} = 0$ (since $x_j > 0$) and $Y_{jm} = 0$ (by skew-symmetry). Let\n$Y'$ be the $(m - 1) \\times (m - 1)$ matrix defined by $Y'_{ij} = Y_{ij}$.\nThen $Y'$ satisfies the conditions of the inductive hypothesis, so $Y' = 0$;\nthat is, $Y_{ij} = 0$ whenever $i, j < m$. But we already have $Y_{ij} = 0$\nwhenever $i = m$ or $j = m$, so $Y = 0$, completing the induction. \\hfill\\ensuremath{\\Box}\n\\end{proof}\n\nWrite\n\\[\n\\smp{n}\n=\n\\{ \\pv{p} \\in \\mathbb{R}^n \n\\:|\\: \n\\sum p_i = 1,\n\\ p_i \\geq 0 \n\\}\n\\]\nfor the space of distributions, and\n\\[\n\\ismp{n}\n=\n\\{ \\pv{p} \\in \\mathbb{R}^n \n\\:|\\: \n\\sum p_i = 1,\n\\ p_i > 0 \n\\}\n\\]\nfor the space of nowhere-zero distributions. The function $D_0^Z$ on\n$\\smp{n}$ is given by\n\\[\nD_0^Z(\\pv{p})\n=\n\\sum \\frac{p_i}{(Z\\pv{p})_i}.\n\\]\n(Recall the standing convention that unlabelled summations are over all $i \\in\n\\{1, \\ldots, n\\}$ such that $p_i > 0$.) It can be defined, using the same\nformula, for all $\\pv{p} \\in [0, \\infty)^n$. It is then differentiable\non $(0, \\infty)^n$, where the summation is over all $i \\in \\{1, \\ldots, n\\}$.\n\n\\begin{propn}\nLet $Z$ be a similarity matrix and $\\pv{p} \\in \\ismp{n}$. 
Then $\\pv{p}$ is a\ncritical point of $D_0^Z$ on $\\ismp{n}$ if and only if for all $i, j \\in \\{1,\n\\ldots, n\\}$,\n\\[\nZ_{ij} > 0 \\,\\Rightarrow\\, (Z\\pv{p})_i = (Z\\pv{p})_j.\n\\]\n\\end{propn}\n\n\\begin{proof}\nWe find the critical points of $D_0^Z$ on $\\ismp{n}$ using Lagrange\nmultipliers and the fact that $\\ismp{n}$ is the intersection of $(0,\n\\infty)^n$ with the hyperplane $\\{ \\pv{p} \\in \\mathbb{R}^n \\:|\\: \\sum p_i = 1\n\\}$. Write $h(\\pv{p}) = \\sum p_i - 1$.\n\nFor $k, i \\in \\{1, \\ldots, n\\}$ and $\\pv{p} \\in (0, \\infty)^n$ we have\n$\\ptl{p_k} (Z\\pv{p})_i = Z_{ik}$, giving\n\\[\n\\ptl{p_k} \n\\left(\n\\frac{p_i}{(Z\\pv{p})_i}\n\\right)\n=\n\\left\\{\n\\begin{array}{ll}\n\\displaystyle\n\\frac{1}{(Z\\pv{p})_k^2} \\sum_{j \\neq k} Z_{kj} p_j &\n\\textrm{if } k = i \\\\\n\\displaystyle\n-\\frac{1}{(Z \\pv{p})_i^2} Z_{ik} p_i &\n\\textrm{otherwise.}\n\\end{array}\n\\right.\n\\]\nFrom this and symmetry of $Z$ we deduce that for $k \\in \\{1, \\ldots, n\\}$ and\n$\\pv{p} \\in (0, \\infty)^n$,\n\\[\n\\ptl{p_k} D_0^Z(\\pv{p})\n=\n\\sum_{i = 1}^n Y_{ki} p_i\n\\]\nwhere\n\\[\nY_{ki} \n=\n\\left(\n\\frac{1}{(Z\\pv{p})_k^2} - \\frac{1}{(Z\\pv{p})_i^2} \n\\right)\nZ_{ki}.\n\\]\nOn the other hand, \n\\[\n\\ptl{p_k} h(\\pv{p})\n=\n1\n\\]\nfor all $k$. A point $\\pv{p} \\in \\ismp{n}$ is a critical point of $D_0^Z$ on\n$\\ismp{n}$ if and only if there exists a scalar $\\lambda$ such that $(\\nabla\nD_0^Z)(\\pv{p}) = \\lambda (\\nabla h)(\\pv{p})$. Hence $\\pv{p}$ is critical if\nand only if $\\sum_i Y_{ki} p_i$ is independent of $k \\in \\{1, \\ldots, n\\}$.\nSo the proposition is equivalent to the statement that, for $\\pv{p} \\in\n\\ismp{n}$, the sum $\\sum Y_{ki} p_i$ is independent of $k$ if and only if the\nmatrix $Y$ is $0$.\n\nThe `if' direction is trivial. Conversely, suppose that $\\sum_i Y_{ki} p_i$\nis independent of $k$. Assume without loss of generality that $(Z \\pv{p})_1\n\\geq \\cdots \\geq (Z\\pv{p})_n$. 
Then Lemma~\\ref{lemma:skew} applies (taking\n$\\pv{x} = \\pv{p}$), and $Y = 0$. \\hfill\\ensuremath{\\Box}\n\\end{proof}\n\nLet $\\sim$ be the equivalence relation on\n$\\{1, \\ldots, n\\}$ generated by $i \\sim j$ whenever $Z_{ij} > 0$. Thus, $i\n\\sim j$ if and only if there is a chain $i = i_1, i_2, \\ldots, i_{r-1}, i_r =\nj$ with $Z_{i_t, i_{t+1}} > 0$ for all $t$. Call $Z$ \\demph{connected} if $i\n\\sim j$ for all $i, j$.\n\n\\begin{cor} \\lbl{cor:conn-crit}\nLet $Z$ be a connected similarity matrix. Then every critical point of $D_0^Z$\nin $\\ismp{n}$ is a weight distribution. \n\\hfill\\ensuremath{\\Box}\n\\end{cor}\n\nWe are aiming to show, among other things, that there is a $0$-maximizing\ndistribution: the function $D_0^Z$ on $\\smp{n}$ attains its supremum. Its\nsupremum is finite, since by Lemma~\\ref{lemma:positive}, $D_0^Z(\\pv{p}) \\leq\nn$ for all distributions $\\pv{p}$. If $D_0^Z$ is continuous on $\\smp{n}$ then\nit certainly does attain its supremum. But in general, it is not. For\nexample, if $Z = I$ then $D_0^Z(\\pv{p})$ is the cardinality of the support of\n$\\pv{p}$, which is not continuous in $\\pv{p}$. We must therefore use another\nargument to establish the existence of a $0$-maximizing distribution. The\nfollowing lemma will help us.\n\n\\begin{lemma} \\lbl{lemma:seqs}\nLet $Z$ be a connected similarity matrix and let $(\\pv{p}^{k})_{k \\in \\nat}$\nbe a sequence in $\\smp{n}$. 
Then $(\\pv{p}^k)_{k \\in \\nat}$ has a subsequence\n$(\\pv{p}^k)_{k \\in S}$ satisfying at least one of the following conditions:\n\\begin{enumerate}\n\\item \\lbl{part:seqs-boundary}\nthere is some $i \\in \\{1, \\ldots, n\\}$ such that $p^k_i = 0$ for all $k\n\\in S$\n\\item \\lbl{part:seqs-interior}\nthe subsequence $(\\pv{p}^k)_{k \\in S}$ lies in $\\ismp{n}$ and converges\nto some point of $\\ismp{n}$\n\\item \\lbl{part:seqs-quotient}\nthe subsequence $(\\pv{p}^k)_{k \\in S}$ lies in $\\ismp{n}$, and there is\nsome $i \\in \\{1, \\ldots, n\\}$ such that\n$\\lim_{k \\in S} \\left(p^k_i\/(Z\\pv{p}^k)_i\\right) = 0$.\n\\end{enumerate}\n\\end{lemma}\nHere and in what follows, we treat sequences as families $(x_k)_{k \\in T}$\nindexed over some infinite subset $T$ of $\\nat$. A subsequence of such a\nsequence therefore amounts to an infinite subset of $T$.\n\n\\begin{proof}\nIf there exist infinitely many pairs $(k, i) \\in \\nat \\times \\{1, \\ldots, n\\}$\nsuch that $p^k_i = 0$ then there is some $i \\in \\{1, \\ldots, n\\}$ such that\n$\\{k \\in \\nat \\:|\\: p^k_i = 0\\}$ is infinite. Taking $S = \\{k \\in \\nat\n\\:|\\: p^k_i = 0\\}$ then gives condition~(\\ref{part:seqs-boundary}). \n\nSuppose, then, that there are only finitely many such pairs. We may choose a\nsubsequence $(\\pv{p}^k)_{k \\in Q}$ of $(\\pv{p}^k)_{k \\in \\nat}$ lying in\n$\\ismp{n}$. Further, since $\\smp{n}$ is compact, we may choose a subsequence\n$(\\pv{p}^k)_{k \\in R}$ of $(\\pv{p}^k)_{k \\in Q}$ converging to some point\n$\\pv{p} \\in \\smp{n}$. If $\\pv{p} \\in \\ismp{n}$\nthen $(\\pv{p}^k)_{k \\in R}$ satisfies~(\\ref{part:seqs-interior}).\n\nSuppose, then, that $\\pv{p} \\not\\in \\ismp{n}$; say $p_\\ell = 0$ where $\\ell\n\\in \\{1, \\ldots, n\\}$. 
\n\nDefine a binary relation $\\trianglelefteq$ on $\\{1, \\ldots, n\\}$ by $i \\trianglelefteq j$ if\nand only if $(p^k_i\/p^k_j)_{k \\in R}$ is bounded (that is, bounded above).\nThen $\\trianglelefteq$ is reflexive and transitive, and if $i \\trianglelefteq j$ and $p_j = 0$\nthen $p_i = 0$. Write $i \\approx j$ for $i \\trianglelefteq j \\trianglelefteq i$; then\n$\\approx$ is an equivalence relation.\n\nI claim that there exist $i, j \\in \\{1, \\ldots, n\\}$ with $Z_{ij} > 0$ and $i\n\\not\\trianglelefteq j$. For if not, the equivalence relation $\\approx$ satisfies\n$Z_{ij} > 0 \\,\\Rightarrow\\, i \\approx j$, and since $Z$ is connected, $i \\approx j$\nfor all $i, j$. But then $i \\trianglelefteq \\ell$ for all $i$, and $p_\\ell = 0$, so\n$p_i = 0$ for all $i$. This contradicts $\\pv{p}$ being a distribution,\nproving the claim.\n\nNow without loss of generality, $Z_{1n} > 0$ and $1 \\not\\trianglelefteq n$. So\n$(p^k_1\/p^k_n)_{k \\in R}$ is unbounded. We may choose an infinite subset $S\n\\subseteq R$ such that $\\lim_{k \\in S} (p^k_1\/p^k_n) = \\infty$. For all $k\n\\in S$ we have\n\\[\n\\frac{(Z\\pv{p}^k)_n}{p^k_n}\n\\geq\n\\frac{Z_{n1} p^k_1}{p^k_n}\n\\]\nwith $Z_{n1} = Z_{1n} > 0$, so\n\\[\n\\lim_{k \\in S}\n\\left(\n\\frac{(Z\\pv{p}^k)_n}{p^k_n}\n\\right)\n=\n\\infty,\n\\]\nand condition~(\\ref{part:seqs-quotient}) follows.\n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\\passage{Existence of a maximizing distribution} At the heart of\nTheorem~\\ref{thm:main} is the following result, from which we will deduce the\ntheorem itself.\n\n\\begin{propn} \\lbl{propn:heart}\nEvery similarity matrix has a maximizing distribution, and every maximizing\ndistribution is invariant. \n\\end{propn}\n\n\\begin{proof}\nLet $Z$ be a similarity matrix. 
It is enough to prove that $Z$ admits an\ninvariant maximizing distribution: for if $\\pv{p}$ and $\\pv{p}'$ are both\nmaximizing then $D_q^Z(\\pv{p}) = D_q^Z(\\pv{p}')$ for all $q$, so $\\pv{p}$ is\ninvariant if and only if $\\pv{p}'$ is.\n\nThe result holds for $n = 1$. Suppose inductively that $n \\geq 2$.\n\n\\emph{Case 1: $Z$ is not connected.} We may partition $\\{1, \\ldots, n\\}$ into\ntwo nonempty subsets, $B$ and $B'$, each of which is a union of\n$\\sim$-equivalence classes (where $\\sim$ is as defined before\nCorollary~\\ref{cor:conn-crit}). Then $B$ and $B'$ are complementary, and by\ninductive hypothesis, $Z_B$ and $Z_{B'}$ each admit an invariant maximizing\ndistribution. So by Proposition~\\ref{propn:comp-invt-max}, $Z$ admits one\ntoo. \n\n\\emph{Case 2: $Z$ is connected.} Write $\\sigma = \\sup_{\\pv{p}}\nD_0^Z(\\pv{p})$. We may choose a sequence $(\\pv{p}^k)_{k \\in \\nat}$ in\n$\\smp{n}$ with $\\lim_{k \\to \\infty} D_0^Z(\\pv{p}^k) = \\sigma$. By\nLemma~\\ref{lemma:seqs}, at least one of the following three conditions holds.\n\\begin{enumerate}\n\\item There is a subsequence $(\\pv{p}^k)_{k \\in S}$ such that (without loss of\ngenerality) $p^k_n = 0$ for all $k \\in S$. Write $B = \\{1, \\ldots, n - 1\\}$.\nDefine a sequence $(\\pv{r}^k)_{k \\in S}$ in $\\smp{n - 1}$ by \n\\[\n\\pv{r}^k = (p^k_1, \\ldots, p^k_{n - 1}).\n\\]\nThen for all $k \\in S$ we have $D_0^{Z_B}(\\pv{r}^k) = D_0^Z(\\pv{p}^k)$ (by\nLemma~\\ref{lemma:absent}), so $\\sup_{k \\in S} D_0^{Z_B}(\\pv{r}^k) = \\sigma$.\nThen by Lemma~\\ref{lemma:smaller-max} and inductive hypothesis, $Z$ admits an\ninvariant maximizing distribution.\n\n\\item There is a subsequence $(\\pv{p^k})_{k \\in S}$ in $\\ismp{n}$ convergent\nto some point $\\pv{p} \\in \\ismp{n}$. Since $D_0^Z$ is continuous on\n$\\ismp{n}$,\n\\[\nD_0^Z(\\pv{p})\n=\n\\lim_{k \\in S} D_0^Z(\\pv{p}^k)\n=\n\\sigma.\n\\]\nSo $\\pv{p}$ is $0$-maximizing. 
Now $\\pv{p}$ is a critical point of $D_0^Z$ on\n$\\ismp{n}$, and $Z$ is connected, so by Corollary~\\ref{cor:conn-crit},\n$\\pv{p}$ is a weight distribution. By Lemma~\\ref{lemma:mag-div},\n$\\pv{p}$ is invariant; then by Lemma~\\ref{lemma:max}, $\\pv{p}$ is maximizing.\n\n\\item There is a subsequence $(\\pv{p}^k)_{k \\in S}$ in $\\ismp{n}$ such that\n(without loss of generality) $\\lim_{k \\in S} \\left( p^k_n \/ (Z\\pv{p}^k)_n\n\\right) = 0$. Write $B = \\{1, \\ldots, n - 1\\}$. Define a sequence\n$(\\pv{r}^k)_{k \\in S}$ in $\\ismp{n - 1}$ by $\\pv{r}^k = \\rstr{\\pv{p}^k}{B}$\n(which is possible because $\\pv{p}^k \\in \\ismp{n}$ and $n \\geq 2$). Then for\nall $k \\in S$ and $i \\in B$,\n\\[\n\\frac{r^k_i}{(Z_B \\pv{r}^k)_i}\n=\n\\frac{p^k_i}{\\sum_{j = 1}^{n - 1} Z_{ij} p^k_j}\n=\n\\frac{p^k_i}{(Z\\pv{p}^k)_i - Z_{in} p^k_n}\n\\geq\n\\frac{p^k_i}{(Z\\pv{p}^k)_i}.\n\\]\nHence for all $k \\in S$,\n\\[\nD_0^{Z_B}(\\pv{r}^k)\n=\n\\sum_{i = 1}^{n - 1}\n\\frac{r^k_i}{(Z_B \\pv{r}^k)_i}\n\\geq\n\\sum_{i = 1}^{n - 1}\n\\frac{p^k_i}{(Z\\pv{p}^k)_i}\n=\nD_0^Z(\\pv{p}^k) - \\frac{p^k_n}{(Z\\pv{p}^k)_n}.\n\\]\nBut \n\\[\n\\lim_{k \\in S} \n\\left(\nD_0^Z(\\pv{p}^k) - \\frac{p^k_n}{(Z\\pv{p}^k)_n}\n\\right)\n=\n\\sigma - 0\n=\n\\sigma,\n\\]\nso $\\sup_{k \\in S} D_0^{Z_B}(\\pv{r}^k) \\geq \\sigma$. Then by\nLemma~\\ref{lemma:smaller-max} and inductive hypothesis, $Z$ admits an\ninvariant maximizing distribution.\n\\end{enumerate}\nSo in all cases there is an invariant maximizing distribution, completing\nthe induction.\n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\\passage{Proof of the Main Theorem,~\\ref{thm:main}}\n\\begin{enumerate}\n\\item Let $q \\in [0, \\infty]$. By Proposition~\\ref{propn:heart}, the supremum\n$\\sup_\\pv{p} D_q^Z(\\pv{p})$ is unchanged if $\\pv{p}$ is taken to run over only\nthe invariant distributions. 
By Proposition~\\ref{propn:invt}, any invariant\ndistribution is of the form $\\pd{\\pv{w}}$ for some non-negative weighting\n$\\pv{w}$ on some nonempty subset $B \\subseteq \\{1, \\ldots, n\\}$. Hence \n\\[\n\\sup_{\\pv{p}} D_q^Z(\\pv{p})\n=\n\\max_{B, \\pv{w}} D_q^Z(\\pd{\\pv{w}})\n\\]\nwhere the maximum is over all nonempty $B$ and non-negative weightings\n$\\pv{w}$ on $B$. But for any such $B$ and $\\pv{w}$ we have \n\\[\nD_q^Z(\\pd{\\pv{w}}) \n= \nD_q^{Z_B}(\\pv{w}\/\\magn{Z_B})\n=\n\\magn{Z_B}\n\\]\nby Lemmas~\\ref{lemma:absent} and~\\ref{lemma:mag-div} respectively. Hence \n\\[\n\\sup_{\\pv{p}} D_q^Z(\\pv{p})\n=\n\\max_B \\magn{Z_B}\n\\]\nwhere the maximum is now over all nonempty $B \\subseteq \\{1, \\ldots, n\\}$ such that\nthere exists a non-negative weighting on $Z_B$. And since $\\magn{\\emptyset} =\n0$, it makes no difference if we allow $B$ to be empty.\n\n\\item Any maximizing distribution is invariant, by\nProposition~\\ref{propn:heart}. The result now follows from\nProposition~\\ref{propn:invt}. \n\\hfill\\ensuremath{\\Box}\n\\end{enumerate}\n\n\n\n\n\\section{Corollaries and examples}\n\\lbl{sec:cors}\n\n\nHere we state some corollaries to the results of the previous section. The\nfirst is a companion to Lemma~\\ref{lemma:max}.\n\n\\begin{cor} \\lbl{cor:some-all}\nLet $Z$ be a similarity matrix and $q \\in (0, \\infty]$. Then a distribution\nis $q$-maximizing if and only if it is maximizing.\n\\end{cor}\n\nIn other words, if a distribution is $q$-maximizing for \\emph{some}\n$q > 0$ then it is $q$-maximizing for \\emph{all} $q \\geq 0$. The proof is\nbelow. \n\nHowever, a $0$-maximizing distribution is not necessarily maximizing. Take $Z\n= I$, for example. Then $D_0^Z(\\pv{p}) = \\sum_{i:\\,p_i > 0} 1 = $ cardinality\nof $\\mr{supp}(\\pv{p})$, so any nowhere-zero distribution is $0$-maximizing. On\nthe other hand, only the uniform distribution $(1\/n, \\ldots, 1\/n)$ is\nmaximizing. 
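This $Z = I$ example is easy to check numerically. The sketch below implements the diversity of order $q$ directly from its definition (for $q \neq 1, \infty$) and compares a skewed nowhere-zero distribution with the uniform one; the distribution values are chosen only for illustration:

```python
import numpy as np

def diversity(Z, p, q):
    # D_q^Z(p) for q != 1, infinity, written directly from the definition:
    # D_q^Z(p) = ( sum_{i: p_i > 0} p_i (Zp)_i^(q-1) )^(1/(1-q)),
    # whose q = 0 case reduces to sum_i p_i / (Zp)_i.
    p = np.asarray(p, dtype=float)
    Zp = Z @ p
    s = p > 0
    if q == 0:
        return float(np.sum(p[s] / Zp[s]))
    return float(np.sum(p[s] * Zp[s] ** (q - 1)) ** (1.0 / (1.0 - q)))

n = 4
Z = np.eye(n)                                # the identity similarity matrix
uniform = np.full(n, 1.0 / n)
skewed = np.array([0.7, 0.1, 0.1, 0.1])      # nowhere zero, non-uniform

print(diversity(Z, uniform, 0), diversity(Z, skewed, 0))  # both 4.0
print(diversity(Z, uniform, 2), diversity(Z, skewed, 2))  # 4.0 vs ~1.92
```

Both distributions attain $D_0^Z = 4$, the cardinality of the support, but only the uniform distribution also maximizes at order $2$.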
So the restriction $q \\neq 0$ cannot be dropped from\nCorollary~\\ref{cor:some-all}, nor can the word `invariant' be dropped\nfrom Lemma~\\ref{lemma:max}.\n\n\\begin{proof}\nLet $\\pv{p}$ be a $q$-maximizing distribution. Then\n\\[\nD_q^Z(\\pv{p})\n=\n\\Dmax{Z}\n\\geq\nD_0^Z(\\pv{p})\n\\geq\nD_q^Z(\\pv{p}),\n\\]\nwhere the second inequality is by Lemma~\\ref{lemma:means}. So we have\nequality throughout, and in particular $D_0^Z(\\pv{p}) = D_q^Z(\\pv{p})$. But\n$q > 0$, so by Lemma~\\ref{lemma:means}, $\\pv{p}$ is invariant. Hence for all\n$q' \\in [0, \\infty]$,\n\\[\nD_{q'}^Z(\\pv{p}) = D_q^Z(\\pv{p}) = \\Dmax{Z},\n\\]\nand therefore $\\pv{p}$ is maximizing.\n\\hfill\\ensuremath{\\Box} \n\\end{proof}\n\nThe importance of Corollary~\\ref{cor:some-all} is that if one has solved the\nproblem of maximizing entropy or diversity of any particular order $q > 0$,\nthen one has solved the problem of maximizing entropy and diversity of all\norders. In the following example, we observe that for a certain class of\nsimilarity matrices, the problem of maximizing the entropy of order $2$ has\nalready been solved in the literature; we can immediately deduce a\nmore general maximum entropy theorem.\n\n\\begin{example} \\lbl{eg:graphs}\nLet $G$ be a finite reflexive graph. Thus, $G$ consists of a finite set $\\{1,\n\\ldots, n\\}$ of vertices equipped with a reflexive symmetric binary relation\n$E$. Such graphs correspond to similarity matrices $Z$ whose entries are all\n$0$ or $1$, taking $Z_{ij} = 1$ if $(i, j) \\in E$ and $Z_{ij} = 0$\notherwise.\n\nFor each $q \\in [0, \\infty]$ and distribution $\\pv{p}$, define\n$D_q^G(\\pv{p}) = D_q^Z(\\pv{p})$, the \\demph{diversity of order $q$} of\n$\\pv{p}$ with respect to $G$. 
Thus,\n\\[\nD_q^G(\\pv{p})\n=\n\\left\\{\n\\renewcommand{\\arraystretch}{3}\n\\begin{array}{ll}\n\\displaystyle\n\\left( \\sum_{i:\\,p_i > 0} p_i\n\\left( \\sum_{j:\\,(i, j) \\in E} p_j \\right)^{q - 1}\n\\right)^\\frac{1}{1 - q} &\n\\textrm{if } q \\neq 1, \\infty \\\\\n\\displaystyle\n\\prod_{i:\\,p_i > 0} \n\\left( \\sum_{j:\\,(i, j) \\in E} p_j \\right)^{-p_i} &\n\\textrm{if } q = 1 \\\\\n\\displaystyle\n1\/\\max_{i:\\,p_i > 0} \\sum_{j:\\,(i, j) \\in E} p_j &\n\\textrm{if } q = \\infty.\n\\end{array}\n\\renewcommand{\\arraystretch}{1}\n\\right.\n\\]\n\nA set $K \\subseteq \\{1, \\ldots, n\\}$ of vertices of $G$ is \\demph{discrete} if $(i,\nj) \\not\\in E$ whenever $i, j \\in K$ with $i \\neq j$. Write $\\mr{d}(G)$\nfor the largest integer $d$ such that there exists a discrete set in $G$\nof cardinality $d$. Also, given any\nnonempty set $B \\subseteq \\{1, \\ldots, n\\}$, write $\\pv{p}^B$ for the distribution\n\\[\n\\pv{p}^B_i\n=\n\\left\\{\n\\begin{array}{ll}\n1\/|B| &\\textrm{if } i \\in B \\\\\n0 &\\textrm{otherwise}.\n\\end{array}\n\\right.\n\\]\n\n\\emph{Claim:} For all $q \\in [0, \\infty]$,\n\\[\n\\sup_{\\pv{p}} D_q^G(\\pv{p}) = \\mr{d}(G),\n\\]\nand the supremum is attained at $\\pv{p}^K$ for any discrete set $K$ of\ncardinality $\\mr{d}(G)$.\n\n\\emph{Proof:} We use the following result of Berarducci, Majer and\nNovaga~\\cite{BMN}. Let $G'$ be a finite \\emph{ir}reflexive graph with $n$\nvertices, that is, an irreflexive symmetric binary relation $E'$ on $\\{1,\n\\ldots, n\\}$. A set $K$ of vertices of $G'$ is a \\demph{clique} (or complete\nsubgraph) if $(i, j) \\in E'$ whenever $i, j \\in K$ with $i \\neq j$. Write\n$\\mr{c}(G')$ for the largest integer $c$ such that there exists a clique in\n$G'$ of cardinality $c$. Their Proposition~4.1 states that\n\\[\n\\sup_{\\pv{p}} \\sum_{(i, j) \\in E'} p_i p_j\n=\n1 - \\frac{1}{\\mr{c}(G')}\n\\]\n(which they call the `capacity' of $G'$). 
Their proof shows that the supremum\nis attained at $\\pv{p}^K$ for any clique $K$ of cardinality\n$\\mr{c}(G')$. \n\nWe are given a graph $G$. Let $G'$ be its dual graph, with the same vertex-set\nand with edge-relation $E'$ defined by $(i, j) \\in E'$ if and only if $(i, j)\n\\not\\in E$. Then $G'$ is irreflexive, a clique in $G'$ is the same as a\ndiscrete set in $G$, and $\\mr{c}(G') = \\mr{d}(G)$. For any distribution\n$\\pv{p}$,\n\\[\n\\sum_{(i, j) \\in E'} p_i p_j \n=\n1 - \\sum_{(i, j) \\in E} p_i p_j\n=\nH_2^Z(\\pv{p}).\n\\]\nLet $K$ be a clique in $G'$ of maximal cardinality, that is, a discrete set in\n$G$ of maximal cardinality. Then by~\\cite{BMN}, $\\pv{p}^K$ is 2-maximizing and\n\\[\nH_2^Z(\\pv{p}^K) \n= \n1 - \\frac{1}{\\mr{c}(G')} \n=\n1 - \\frac{1}{\\mr{d}(G)}.\n\\]\nBut it is a completely general fact that \n\\[\nH_2^Z(\\pv{p}) \n= \n1 - \\frac{1}{D_2^Z(\\pv{p})}\n\\]\nfor all $\\pv{p}$, directly from the definitions in\nSection~\\ref{sec:statement}. Hence $\\mr{d}(G) = D_2^Z(\\pv{p}^K) =\nD_2^G(\\pv{p}^K)$, and the claim holds for $q = 2$.\nCorollary~\\ref{cor:some-all} now tells us that the claim holds for all $q \\in\n[0, \\infty]$.\n\nThis class of examples tells us that a similarity matrix may have several\ndifferent maximizing distributions, and that a maximizing distribution\n$\\pv{p}$ may have $p_i = 0$ for some values of $i$. These phenomena have\nbeen observed in the ecological literature in the case $q = 2$ (Rao's\nquadratic entropy): see Pavoine and Bonsall~\\cite{PB} and references therein. \n\\end{example}\n\nComputing the maximum diversity is potentially slow, because in principle\none has to go through all $2^n$ subsets of $\\{1, \\ldots, n\\}$. But if the\nsimilarity matrix satisfies some further conditions, a maximizing distribution\ncan be found very quickly:\n\n\\begin{cor} \\lbl{cor:pos-def-div}\nLet $Z$ be a positive definite similarity matrix whose unique weighting\n$\\pv{w}$ is non-negative. Then $\\Dmax{Z} = \\magn{Z}$. 
Moreover,\n$\\pv{w}\/\\magn{Z}$ is a maximizing distribution, and if $\\pv{w}$ is\npositive then it is the unique such.\n\\end{cor}\n\n\\begin{proof}\nFollows immediately from Proposition~\\ref{propn:pos-def-sub} and\nTheorem~\\ref{thm:main}. \n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\\begin{example} \\lbl{eg:ultra}\nA similarity matrix $Z$ is \\demph{ultrametric} if $\\min\\{Z_{ij}, Z_{jk}\\} \\leq\nZ_{ik}$ for all $i, j, k$ and $Z_{ij} < 1$ for all $i \\neq j$. As shown\nbelow, every ultrametric matrix is positive definite and its weighting\nis positive. Hence its maximum diversity is its magnitude, it has a unique\nmaximizing distribution, and that distribution is nowhere zero.\n\nUltrametric matrices are closely related to \\demph{ultrametric spaces}, that\nis, metric spaces satisfying a stronger version of the triangle inequality:\n\\[\n\\max\\{ d(a, b), d(b, c) \\} \\geq d(a, c)\n\\]\nfor all points $a, b, c$. Any finite metric space $A = \\{a_1, \\ldots, a_n\\}$\ngives rise to a similarity matrix $Z$ by putting $Z_{ij} = e^{-d(a_i, a_j)}$,\nand if the space $A$ is ultrametric then so is the matrix $Z$.\n\nUltrametric matrices also arise in the quantification of biodiversity. Take a\ncollection of $n$ species, and suppose, for example, that we choose a\ntaxonomic measure of species similarity:\n\\[\nZ_{ij} =\n\\left\\{\n\\begin{array}{ll}\n1 &\\textrm{if } i = j \\\\\n0.8 &\\textrm{if $i \\neq j$ but the $i$th and $j$th species are of the same\ngenus} \\\\\n0.6 &\\textrm{if the $i$th and $j$th species are of different genera but\nthe same family} \\\\\n0 &\\textrm{otherwise.}\n\\end{array}\n\\right.\n\\]\nThis is an ultrametric matrix, so is guaranteed to have a unique maximizing\ndistribution. That distribution is nowhere zero: maximizing diversity\ndoes not eradicate any species. The same conclusion for general ultrametric\nmatrices was reached, in the case $q = 2$, by Pavoine, Ollier and\nPontier~\\cite{POP}. 
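As a quick numerical illustration of this example (the genus/family grouping below is hypothetical), one can build such a taxonomic matrix, solve $Z\pv{w} = (1, \ldots, 1)^{\mathrm{t}}$ for the unique weighting, and confirm that the weighting is positive and the maximizing distribution is nowhere zero:

```python
import numpy as np

# Hypothetical taxonomy: species 0 and 1 share a genus; species 0-3 share a
# family (across three distinct genera); species 4 is unrelated to the rest.
Z = np.array([
    [1.0, 0.8, 0.6, 0.6, 0.0],
    [0.8, 1.0, 0.6, 0.6, 0.0],
    [0.6, 0.6, 1.0, 0.6, 0.0],
    [0.6, 0.6, 0.6, 1.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 1.0],
])

w = np.linalg.solve(Z, np.ones(5))   # the unique weighting on Z
magnitude = w.sum()                  # |Z|, here equal to the maximum diversity
p = w / magnitude                    # the unique maximizing distribution

print(np.all(w > 0))                 # True: the weighting is positive
print(round(magnitude, 4))           # 2.3889
print(np.round(p, 4))                # every entry strictly positive
```

Maximizing diversity here keeps all five species, with most weight on the species that is least similar to the others.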
\n\nWe now prove that any ultrametric matrix $Z$ is positive definite with\npositive weighting. That $Z$ is positive definite was also proved by Varga\nand Nabben~\\cite{VN}, and that the weighting is positive was also proved\nin~\\cite{POP}. The following proof, which is probably not new either, seems\nmore direct.\n\nIf $n = 1$ then certainly $Z$ is positive definite and its weighting is\npositive. \n\nSuppose inductively that $n \\geq 2$. Write $z = \\min_{i, j} Z_{ij} < 1$. By\nthe ultrametric property, there is an equivalence relation $\\simeq$ on\n$\\{1, \\ldots, n\\}$ defined by $i \\simeq j$ if and only if $Z_{ij} > z$. We\nmay partition $\\{1, \\ldots, n\\}$ into two nonempty subsets, $B$ and $B'$, each\nof which is a union of $\\simeq$-equivalence classes; and without loss of\ngenerality, $B = \\{1, \\ldots, m\\}$ and $B' = \\{m + 1, \\ldots, n\\}$, where $1\n\\leq m < n$. For all $i \\leq m$ and $j \\geq m + 1$ we have $Z_{ij} \\leq z$,\nthat is, $Z_{ij} = z$. Hence \n\\[\nZ \n=\n\\left(\\!\\!\n\\begin{array}{cc}\nY &z\\um{m}{n - m} \\\\\nz\\um{n - m}{m} &Y'\n\\end{array}\n\\!\\!\\right)\n\\]\nwhere $Y$ is some $m \\times m$ matrix, $Y'$ is some $(n - m) \\times (n - m)$\nmatrix, and $\\um{k}{\\ell}$ denotes the $k \\times \\ell$ matrix all of whose\nentries are $1$. Now $Y$ and $Y'$ are ultrametric with entries in $[z, 1]$,\nso the matrices\n\\[\nX = \\frac{1}{1 - z} (Y - z\\um{m}{m}),\n\\qquad\nX' = \\frac{1}{1 - z}(Y' - z\\um{n - m}{n - m})\n\\]\nare also ultrametric. 
By inductive hypothesis, $X$ and $X'$ are\npositive definite and their respective weightings are positive.\n\nWe have\n\\begin{equation} \\lbl{eq:ultra-inductive}\nZ \n= \nz \\um{n}{n} + \n(1 - z)\n\\left(\\!\\!\n\\begin{array}{cc}\nX &0 \\\\\n0 &X'\n\\end{array}\n\\!\\!\\right).\n\\end{equation}\nThe matrix $\\um{n}{n}$ is positive-semidefinite, since $\\pv{x}^\\mathrm{t}\n\\um{n}{n} \\pv{x} = (x_1 + \\cdots + x_n)^2$ for all $\\pv{x} \\in \\mathbb{R}^n$.\nAlso $\\left(\\!\\!\n\\begin{array}{cc}\nX &0 \\\\\n0 &X'\n\\end{array}\n\\!\\!\\right)$ is positive definite, since $X$ and $X'$ are. Finally, $z \\geq 0$\nand $1 - z > 0$. It follows that $Z$ is positive definite.\n\nWrite $\\pv{v}$ and $\\pv{v}'$ for the weightings on $X$ and $X'$. Put\n\\[\n\\pv{w}\n=\n\\frac{1}{z\\left(\\sum_{i = 1}^m v_i + \\sum_{j = 1}^{n - m} v'_j\\right) \n+ (1 - z)}\n\\left(\\!\\!\n\\begin{array}{c}\nv_1 \\\\ \n\\vdots \\\\\nv_m \\\\\nv'_1 \\\\\n\\vdots \\\\\nv'_{n - m}\n\\end{array}\n\\!\\!\\right).\n\\]\nThe weightings $\\pv{v}$ and $\\pv{v'}$ are positive and $0 \\leq z \\leq 1$, so\n$\\pv{w}$ is positive. And it is routine to verify,\nusing~(\\ref{eq:ultra-inductive}), that $\\pv{w}$ is the weighting on $Z$.\n\\end{example}\n\n\\begin{example} \\lbl{eg:metric}\nTake a metric space with three points, $a_1, a_2, a_3$, and put $Z_{ij}\n= e^{-d(a_i, a_j)}$. This defines a $3 \\times 3$ similarity matrix $Z$ with\n$Z_{ij} < 1$ for all $i \\neq j$ and $Z_{ij} Z_{jk} \\leq Z_{ik}$ for all $i, j,\nk$. We will show that $Z$ is positive definite and that its unique weighting\nis positive. It follows that there is a unique maximizing distribution and\nthat the maximum diversity is $\\magn{Z}$. We give explicit expressions for\nboth.\n\nFirst, Sylvester's Criterion states that a symmetric real $n\\times n$ matrix\nis positive definite if and only if for all $m \\in \\{1, \\ldots, n\\}$, the\nupper-left $m \\times m$ submatrix has positive determinant. 
In this case:\n\\begin{itemize}\n\\item the upper-left $1 \\times 1$ matrix is $(1)$, which has determinant $1$\n\\item the upper-left $2 \\times 2$ matrix is $\\left(\\!\\! \\begin{array}{cc} 1\n&Z_{12}\\\\ Z_{12} &1 \\end{array} \\!\\!\\right)$, which has determinant $1 -\nZ_{12}^2 > 0$\n\\item the upper-left $3 \\times 3$ matrix is $Z$ itself, and\n\\begin{eqnarray*}\n\\det Z &= &\n1 - (Z_{12}^2 + Z_{23}^2 + Z_{31}^2) + 2 Z_{12} Z_{23} Z_{31} \\\\\n &= &\n(1 - Z_{12})(1 - Z_{23})(1 - Z_{31}) \n+\n(1 - Z_{12})(Z_{12} - Z_{13} Z_{32}) \\\\\n & &\n{}+\n(1 - Z_{23})(Z_{23} - Z_{21} Z_{13})\n+\n(1 - Z_{31})(Z_{31} - Z_{32} Z_{21}) \\\\\n &> &\n0.\n\\end{eqnarray*}\n\\end{itemize}\nHence $Z$ is positive definite. Next, it is easily checked that the\nunique weighting $\\pv{w}$ is given by $\\pv{w} = \\pv{v}\/\\det Z$, where, for\ninstance, \n\\begin{eqnarray*}\nv_1 &= &\n1 - (Z_{12} + Z_{13}) + (Z_{13}Z_{32} + Z_{12}Z_{23}) - Z_{23}^2 \\\\\n &= &\n(1 - Z_{12})(1 - Z_{23})(1 - Z_{31})\n+\n(1 - Z_{23})(Z_{23} - Z_{21}Z_{13}) \\\\\n &> &\n0.\n\\end{eqnarray*}\nSince $\\det Z > 0$, the weighting $\\pv{w}$ is positive. \n\nThe maximum diversity is $\\magn{Z} = w_1 + w_2 + w_3$, which is\n\\[\n1 +\n\\frac{2(1 - Z_{12})(1 - Z_{23})(1 - Z_{31})}%\n{1 - (Z_{12}^2 + Z_{23}^2 + Z_{31}^2) + 2 Z_{12}Z_{23}Z_{31}}.\n\\]\n(This expression was pointed out to me by Simon Willerton.) The unique\nmaximizing distribution $\\pv{p}$ is given by $\\pv{p} = \\pd{\\pv{w}} =\n\\pv{w}\/\\magn{Z} = \\pv{v}\/(\\magn{Z}\\det Z)$, so\n\\[\np_1\n=\n\\frac{1 - (Z_{12} + Z_{13}) + (Z_{13}Z_{32} + Z_{12}Z_{23}) - Z_{23}^2}%\n{1 - (Z_{12}^2 + Z_{23}^2 + Z_{31}^2) + 2 Z_{12}Z_{23}Z_{31}\n+ 2(1 - Z_{12})(1 - Z_{23})(1 - Z_{31})}\n\\]\nand similarly for $p_2$ and $p_3$.\n\\end{example}\n\nExample~\\ref{eg:graphs} (graphs) shows that maximizing distributions\nsometimes contain some zero entries. 
In ecological terms this means that\ndiversity is sometimes maximized by completely eradicating certain species,\nwhich may be contrary to acceptable practice. For this and other reasons, we\nmight seek conditions under which some or all of the maximizing distributions\n$\\pv{p}$ satisfy $p_i > 0$ for all $i$.\n\n\\begin{cor} \\lbl{cor:close-to-id}\nLet $Z$ be a similarity matrix such that $Z_{ij} < 1\/(n - 1)$ for all $i \\neq\nj$. Then $\\Dmax{Z} = \\magn{Z}$. Moreover, $Z$ has a unique weighting\n$\\pv{w}$, the unique maximizing distribution is $\\pv{p} = \\pv{w}\/\\magn{Z}$,\nand $p_i > 0$ for all $i$.\n\\end{cor}\n\n\\begin{proof}\nBy Lemma~\\ref{lemma:scattered}, $Z$ is positive definite and its unique\nweighting is positive. Then apply Corollary~\\ref{cor:pos-def-div}.\n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\nThe extra hypothesis on $Z$ is strong, possibly too strong for the corollary\nto be of any use in ecology: when $n$ is large, it forces $Z$ to be very close\nto the identity matrix. On the other hand, the ecological interpretation of\nCorollary~\\ref{cor:close-to-id} is clear: if we treat every species as highly\ndissimilar to every other, the distribution that\nmaximizes diversity conserves all of them.\n\n\n\n\\passage{Acknowledgements} Parts of this work were done during visits to the\nCentre de Recerca Matem\\`atica, Barcelona, and the School of Mathematics and\nStatistics at the University of Sheffield. I am grateful to both institutions\nfor their hospitality, and to Eugenia Cheng, David Jordan and Joachim Kock for\nmaking my visits both possible and pleasant. 
I thank Christina Cobbold,\nAndr\\'e Joyal and Simon Willerton for useful conversations, and Ji\\v{r}\\'\\i\\\nVelebil for tracking down a reference.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Non-ideal fabrication in fixed frequency\nqubits}\n\nLattices of coupled qubits are proposed to enable error-correction algorithms such as the `surface code' \\cite{Gambetta2017Build_s,fowler2012surface_s}. Qubits are arranged into a square grid with alternate qubits serving either data or error-checking functions. Bus-couplers provide interaction among adjacent qubits, with up to four qubits attached to each bus. A seven-qubit lattice thereby comprises 12 qubit pairs and a seventeen-qubit lattice comprises 34 pairs. However, single-junction transmon qubits are challenging to fabricate at precisely set frequencies. Among dozens of identically-fabricated qubits, the frequencies typically have a spread of $\\sigma_f \\sim 200$ MHz \\citep{privcommsrosenblatt_s}. Such imprecision will inhibit the functioning of qubit lattices. Considering a lattice of transmon qubits of frequency $\\sim 5$ GHz and anharmonicity $\\delta\/2\\pi = -340$ MHz, and considering cross-resonance gate operations, we can estimate the number of undesired interactions among these pairs. Studies of the cross-resonance gate \\citep{divincenzo2013quantum_s} indicate that these gates will be dominated by undesirable interactions if the frequency separation $|\\Delta|$ between adjacent qubits is equal to zero, a degeneracy between $f_{01}$ of the qubits; equal to $-\\delta\/2\\pi$, a degeneracy between $f_{01}$ of one qubit and $f_{12}$ of the next; or if $|\\Delta| > -\\delta\/2\\pi$ (weak interaction leading to very slow gate operation). 
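These three collision conditions are easy to encode. The sketch below is a simplified version of the estimate that follows: it draws independent frequencies for each coupled pair (ignoring that lattice qubits are shared between pairs), so its counts only roughly track the full lattice model described next:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_collisions(n_pairs, sigma_f, delta=0.340, f_mean=5.0, n_trials=2000):
    # delta = |anharmonicity|/2pi in GHz; a pair "collides" when its
    # detuning |Delta| falls within +/- delta/20 (17 MHz) of 0 or of delta,
    # or exceeds delta (too weak for a usable cross-resonance gate).
    window = delta / 20.0
    total = 0
    for _ in range(n_trials):
        f1 = rng.normal(f_mean, sigma_f, n_pairs)
        f2 = rng.normal(f_mean, sigma_f, n_pairs)
        d = np.abs(f1 - f2)
        bad = (d < window) | (np.abs(d - delta) < window) | (d > delta)
        total += int(bad.sum())
    return total / n_trials

# 12 coupled pairs (a 7-qubit lattice), sigma_f = |delta/2pi|/2 = 170 MHz
print(mean_collisions(n_pairs=12, sigma_f=0.170))
```

Because it treats every pair as independent, this sketch slightly overestimates the collision counts relative to a shared-qubit lattice, but it reproduces the qualitative trend: wider frequency spreads produce more collisions.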
In a simple Monte Carlo model, we assign to all points in the lattice a random qubit frequency from a gaussian distribution around 5 GHz, and count the number of degenerate or weak-interaction pairs, taking a range of $\\pm (\\delta\/2\\pi)\/20$, or $\\pm 17$ MHz around each degeneracy. The results appearing in Table \\ref{table:MCModelCollisions} make it evident that the likelihood of frequency collisions increases as the lattice grows.\n\n\\begin{table}[h]\n\t\\centering\n\t\\begin{tabular}{c|c|c}\n\t\tNumber & & Mean Number \\\\\n\t\tof QBs & $\\sigma_f$ & of Collisions \\\\ \\hline\n\t\t7 & $\\frac{1}{2}|\\delta\/2\\pi|$ & 2.3 \\\\ \n\t\t7 & $\\frac{3}{4}|\\delta\/2\\pi|$ & 3.6 \\\\ \n\t\t17 & $\\frac{1}{2}|\\delta\/2\\pi|$ & 6.6 \\\\ \n\t\t17 & $\\frac{3}{4}|\\delta\/2\\pi|$ & 10.6 \n\t\\end{tabular} \n\\caption{\\label{table:MCModelCollisions} Frequency-collision modeling in lattices of transmon qubits employing cross-resonance gates. Predicted number of bad gate pairs (`frequency collisions') in two different lattice sizes. 7-qubit lattice has 12 pairs and 17-qubit lattice has 34 pairs. Mean of distribution is 5 GHz and two different distribution widths $\\sigma_f$ are considered.}\n\\end{table}\n\n\n\\section{Device design and fabrication}\n\nThe device for sample A, shown in Fig. \\ref{fig:1}, has all\neight qubit\/cavities capacitively coupled to a common feedline through\nwhich individual qubit readout was achieved via a single microwave\ndrive and output line. Sample B, shown in Fig. \\ref{fig:1}, employs a design where all qubits have separate drive and readout microwave lines. As in Ref. \\cite{Takita2016Dem_s} and \\cite{ibmquantumexp_s}, this sample is designed as a lattice of coupled qubits for use in multi-qubit gate operations, although no such operations are presented in this paper. Coplanar-waveguide buses, half-wave resonant at $\\sim$6 GHz, span the space between the qubits. Each bus resonator couples together three adjacent qubits. 
As compared to Ref. \\cite{Takita2016Dem_s}, here the lattice comprises eight qubits and four buses instead of the seven qubits and two buses found in Ref. \\cite{Takita2016Dem_s}.\n\nBoth samples were fabricated using standard lithographic processing\nto pattern the coplanar waveguides, ground plane, and qubit capacitors\nfrom a sputtered Nb film on a Si substrate. In sample A the Nb films are 100 nm thick. In sample B they are 200 nm. The qubits were similar\nin design to \\citep{Sheldon_Procedure_2016_s,chow2014implementing_s,corcoles2015demonstration_s,Takita2016Dem_s} \nwith large transmon capacitor pads bridged by electron-beam patterned Al traces used to\ncreate Josephson junctions. Conventional shadow-evaporated double-angle Al-AlOx-Al was used to fabricate the junctions. Transmon capacitor pads in samples A and B have different size and separation, necessitating different SQUID loop geometries, as shown in Fig. \\ref{fig:1}. The SQUID loops for qubits on sample A were created by bridging the transmon capacitor pads with\ntwo separate $0.6-\\mu{\\rm m}$ wide Al traces and Josephson junctions, with the\nasymmetry in the junctions fabricated by increasing the width of one\njunction with respect to the other, while keeping the overlap fixed at $0.2\\, \\mu{\\rm m}$. The sum of the large and small junction areas was designed to be constant, independent of $\\alpha$.\nQubits on sample A had capacitor pads separated by $20\\, \\mu{\\rm m}$ and\nthe Al electrodes separated such that the SQUID loop area was roughly\n$400\\, \\mu{\\rm m^{2}}$. In sample B, the Nb capacitor pads were separated by $70\\, \\mu{\\rm m}$. The SQUID comprises a $\\sim 20 \\times 20\\, \\mu{\\rm m}^2$ Al loop of 2 $\\mu$m trace width, placed midway between the capacitor pads and joined to Nb leads extending from the pads. In sample B, the large and small junction differ in both width and overlap. 
In this sample, all SQUIDs of a given $\\alpha$ were fabricated identically but SQUIDs of different $\\alpha$ had different total junction area. \n\n\\begin{figure}[!b]\n\\includegraphics[width=1.0\\columnwidth]{Device_Image_Supp2}\n\n\\caption{(color online) Optical micrographs of samples including higher magnification images of qubits and SQUID loops. Sample B image is a chip of identical design to the ones used for measurements. In sample B image, labels indicate each qubit and its individual readout resonators, while unlabeled resonators are bus resonators. \n\\label{fig:1}}\n\\end{figure}\n\n\\section{Measurement setup}\n\nMeasurements of sample A were completed in a dilution\nrefrigerator (DR) at Syracuse University (SU), while sample B was measured in a DR at the IBM TJ Watson Research Center. Both samples were wire-bonded into holders designed to suppress microwave\nchip modes. Each sample was mounted to the mixing chamber of its respective DR and placed inside a cryoperm magnetic shield, thermally anchored at the mixing chamber. Both SU and IBM DRs had room-temperature $\\mu$-metal shields. Measurements for both samples were performed using standard cQED readout techniques \\citep{Reed_High_2010_s}.\n\nFor sample A, room-temperature microwave signals were supplied through attenuated coaxial\nlines, thermalized at each stage of the DR and filtered using 10\nGHz low pass filters (K\\&L) thermalized at the mixing chamber. We used\na total of 70 dB of attenuation on the drive-lines: 20 dB at $4\\, {\\rm K}$, 20\ndB at $0.7\\, {\\rm K}$ and 30 dB at the mixing chamber, with a base temperature of $30\\,{\\rm mK}$. Output measurement signals\nfrom the sample pass through another 10 GHz low-pass filter, a microwave\nswitch, and two magnetically shielded cryogenic isolators, all thermally anchored\nto the mixing chamber. 
In the case of sample A, the signal was amplified\nby a low-noise HEMT at $4\\, {\\rm K}$, passing through a Nb\/Nb superconducting\ncoaxial cable between the mixing chamber and $4\\, {\\rm K}$ stage. The signal was amplified further\nat room temperature before being mixed down to 10 MHz and digitized. The eight resonators, coupled to each qubit on sample A, had measured frequencies that ranged from $6.975 - 7.136\\, {\\rm GHz}$, separated by $20 - 25\\, {\\rm MHz}$. $\\kappa\/{2\\pi}$ linewidths for these resonators were on the order of a few hundreds of kHz. \n\nFigure \\ref{fig:1} shows the layout of the sample B chip. The $\\alpha = 15$ asymmetric-SQUID transmon reported in the paper was located at position $Q_7$. It was read out through a coplanar waveguide resonator of frequency 6.559 GHz and linewidth $\\sim$ 300 kHz, and was found to have $f_{01}^{max} = 5.387$ GHz. The fixed-frequency transmon (5.346 GHz) at position $Q_2$ was read out through a 6.418 GHz resonator having linewidth $\\sim$ 300 kHz. Sample B qubits were measured via signal wiring similar to that presented in Refs. \\cite{Takita2016Dem_s,chow2014implementing_s,corcoles2015demonstration_s,sheldon2016characterizing_s}. Drive wiring included 10 dB of attenuation at 50 K, 10 dB at 4K, 6 dB at 0.7 K, 10 dB at 100 mK, and at the mixing-chamber plate 30 dB of attenuation plus a homemade `Eccosorb' low-pass filter. Drive signals entered a microwave circulator at the mixing plate. On one set of signal wiring, the 2nd port of the circulator passed directly to qubit $Q_7$. In another set of signal wiring, the second port of the circulator passed to several different qubits via a microwave switch. Signals reflected from the device passed back through the circulator to output and amplifier circuitry. 
Output circuitry comprised a low-pass Cu powder filter, followed by two cryogenic isolators in series, followed by an additional low-pass filter, followed by superconducting NbTi coaxial cable, followed by a low-noise HEMT amplifier at 4K and an additional low-noise amplifier at room temperature. Low-pass filters were intended to block signals above $\\sim$ 10 GHz. In the case of $Q_7$, additional amplification was afforded by a SLUG amplifier \\cite{HoverAPL2014_104_152601_s} mounted at the mixing stage, biased via two bias-tee networks and isolated from the sample by an additional cryogenic isolator. Output signals were mixed down to 5 MHz before being digitized and averaged. Mixing-plate thermometer indicated a temperature of $\\sim$ 15 to 20 mK during measurements. \n\nMagnetic flux was supplied to sample A via a 6-mm inner diameter superconducting\nwire coil placed $2\\,{\\rm mm}$ above the sample. A Stanford SRS SIM928 dc voltage source with a room-temperature $2\\,{\\rm k}\\Omega$ resistor in series supplied the bias current to the coil. The flux bias current passed through brass coaxial lines that were thermally anchored\nat each stage of the DR, with a $80~{\\rm MHz}$ $\\pi$-filter at 4K and a copper powder filter on the mixing chamber. In sample B, a similar wire-wound superconducting coil was mounted about 3 mm above the qubit chip and likewise driven from a SIM928 voltage source through a room-temperature $5\\,{\\rm k}\\Omega$ bias resistor. DC pair wiring (Cu above 4K within the fridge, NbTi below) was used to drive the coil. The coil had a self-inductance of 3.9 mH and mutual inductance to the SQUID loop of $\\sim$ 1 pH. The flux coil applied a dc flux through\nall qubits with the flux level being set just prior to qubit measurement\nand maintained at a constant level throughout the measurement. For each qubit, we measured $f_{01}$ as a function of coil current and fit this against Eq. 
(1) of our paper to enable scaling of $\\Phi_0$ and subtract any offset flux, as well as to determine $f_{01}^{max}$ and asymmetry $d$. We treat the sign of flux as arbitrary. \n\n\\section{Qubit Coherence}\n\nCoherence data for both samples was collected using an automated measurement algorithm. After applying a prescribed fixed flux, the system determined the qubit frequency from Ramsey fringe fitting, optimized $\\pi$ and $\\pi\/2$ pulses at this frequency, and measured coherence. $T_{2}^{*}$ measurements were completed at a frequency detuned from the qubit frequency, with the level of detuning optimized to provide a reasonable number of fringes for fitting. All raw coherence data was visually checked to confirm that a good quality measurement was achieved. If the automated tuning routine failed to find the frequency or properly scale the $\\pi$ and $\\pi\/2$ pulses, this point was omitted from the dataset.\nFor sample A, three $T_1$ measurements were made at each flux point followed by three $T_2^*$ measurements. At each flux point, the reported $T_1$ and $T_2^*$ values and error bars comprise the mean and standard deviation of the three measurements. The corresponding $\\Gamma_{\\phi}$ value is found from these mean values and its error bar is found by propagating the errors in $T_1$ and $T_2^*$ through via partial derivative and combining these in a quadrature sum. For sample B, at each flux point first $T_1$ was measured, then $T_2^*$, three times in succession. For this device the reported $T_1$ and $T_2^*$ values comprise the mean of the three measurements and the error bars are their standard deviation. Here the reported dephasing rate $\\Gamma_{\\phi}$ comprises the mean of the three values of $\\Gamma_{\\phi}=1\/T_{2}^{*}-1\/2T_{1}$ found from the three $T_1$, $T_2^*$ pairs, and the error bar is the standard deviation. \n\n\\begin{figure}\n\\includegraphics[width=0.75\\columnwidth]{T1_vs_Freq_SYR_IBM2}\n\n\\caption{$T_{1}$ vs. 
frequency measured for all qubits discussed in the main paper. Single points included for $T_{1}$ values measured for the fixed-frequency qubits. \n\\label{fig:2}}\n\n\\end{figure}\n\nFigure \\ref{fig:2} shows $T_{1}$ plotted versus qubit frequency, measured for the qubits discussed in our paper. We observe a trend\nof increasing $T_{1}$ with decreasing qubit frequency. In sample A, each qubit's quality factor $\\omega T_{1}$ is roughly constant, consistent with dielectric loss and a frequency-independent loss tangent, as observed in other tunable superconducting qubits \\citep{barends2013coherent_s}. On sample B, $T_{1}$ decreases by about 10 $\\mu$s from the low to the high end of the frequency range, consistent with Purcell loss to the readout resonator. In addition, fine structure is occasionally observed in Fig.\n\\ref{fig:2}, where $T_{1}$ drops sharply at specific frequencies. These localized features in the $T_{1}$ frequency dependence are observed\nfor all tunable qubits that we have measured. These features, similar to those observed in Ref. \\citep{barends2013coherent_s}, are attributed to frequencies where a qubit transition is resonant with a two-level-system defect on or near the qubit. Additionally, on sample B, at a few frequency points inter-qubit coupling affects relaxation. Where the $Q_7$ qubit is nearly degenerate with $Q_6$ (at $\\sim$5.33 GHz) and with $Q_8$ (at $\\sim$5.22 GHz), coupling via the adjacent buses produces an avoided crossing in the energy spectrum. This effect is barely noticeable in both the frequency curve of Fig. 2 of our paper and the relaxation data in Fig. \\ref{fig:2} here.\n\n\\begin{figure}\n\\includegraphics[width=0.75\\columnwidth]{Ramsey_vs_Phi_All2}\n\n\\caption{$T_{2}^{*}$ vs. flux measured for the qubits discussed in the main paper. 
$T_{2}^{*}$ measured for the fixed-frequency qubits on both samples is included with dashed lines to help guide the eye.\n\\label{fig:3}}\n\n\\end{figure}\n\nFigure \\ref{fig:3} shows $T_{2}^{*}$ plotted versus flux, measured\nfor the qubits discussed in our paper.\nFor the tunable qubits on\nsample A, $T_{2}^{*}$ is greatest at the qubit sweet spots and decreases\naway from these sweet spots as $D_{\\Phi}$ increases.\nIn the $\\alpha = 15$ tunable qubit on sample B, $T_{2}^{*}$ is nearly constant over the measured half-flux-quantum range. The small frequency dependence observed in $T_{2}^{*}$ in sample B is consistent with the observed variation of $T_{1}$ with frequency, leading to the frequency-independent dephasing rate observed for this qubit\nin Fig. 3 of our paper.\n\n\\section{Relaxation Due to Coupling to the Flux Bias Line}\n\nWhile using two Josephson junctions to form a dc SQUID for the inductive element of a transmon allows its frequency to be tuned via magnetic flux, this opens up an additional channel for\nenergy relaxation via emission into the dissipative environment across the bias coil that is coupled to the qubit through a mutual inductance. This was first discussed by Koch et al.~\\citep{koch2007charge_s} regarding\na nearly symmetric split-junction transmon. We apply the same analysis\nhere to study the effect of increasing junction asymmetry on the qubit $T_{1}$ through this loss mechanism. For an asymmetric transmon, Koch et al. show in Eq.~(2.17) of Ref. \\citep{koch2007charge_s} that the Josephson portion of the qubit Hamiltonian can be written in terms of a single phase variable with a shifted minimum that depends\nupon the qubit's asymmetry and the applied flux bias. \nBy linearizing this Hamiltonian about the static flux bias point for small noise amplitudes, Koch et al. 
compute the relaxation rate for a particular current noise power from the bias impedance coupled to the SQUID loop through a mutual inductance $M$.\nWe followed this same analysis for our qubit parameters, assuming harmonic oscillator wavefunctions for the qubit ground and excited state, and obtained the dependence of $T_{1}$ due to this mechanism as a function of bias flux.\nUsing our typical device\nparameters ($E_{J} = 20\\,{\\rm GHz}$, $E_{c} = 350\\,{\\rm MHz}$, $M = 2\\,{\\rm pH}$, $R =\n50~\\Omega$) we obtain the intrinsic loss for the asymmetries\ndiscussed in our paper, shown in Fig. \\ref{fig:4}. This analysis\nagrees with the results described in Ref.~\\citep{koch2007charge_s}. For a 10\\% junction asymmetry, this contribution results in a $T_{1}$ that varies between $25\\,{\\rm ms}$ and a few seconds. As the junction asymmetry is increased,\nthe minimum $T_{1}$ value, obtained at odd half-integer multiples of $\\Phi_{0}$, decreases slightly. However, even for our $\\alpha = 15$ qubit, the calculated value of $T_{1}$ due to this mechanism never falls below $10\\,{\\rm ms}$. Therefore, although increasing\njunction asymmetry does place an upper bound on $T_{1}$ of an asymmetric transmon, this level is two orders of magnitude larger than the measured $T_{1}$ in current state-of-the-art superconducting qubits, which is limited by other mechanisms.\n\n\\begin{figure}\n\n\\includegraphics[width=0.75\\columnwidth]{T1max_vs_phi}\n\n\\caption{Dependence of $T_{1}$ on flux for asymmetric transmons, calculated for the asymmetries discussed in the main paper, due to coupling to an external flux bias following the analysis of Koch et al.~\\citep{koch2007charge_s}. Though in the main paper our symmetric qubit was an $\\alpha = 1$, in this calculation we used $\\alpha = 1.1$ so that $T_{1}$ did not diverge at $\\Phi = 0$. \n\\label{fig:4}}\n\n\\end{figure}\n\nAlso in Ref.~\\citep{koch2007charge_s}, Koch et al. described a second loss channel for a transmon related to coupling to the flux-bias line. 
In this case, the relaxation occurs due to the oscillatory current through the inductive element of the qubit -- independent of the presence of a SQUID loop -- coupling to the flux-bias line, described by an effective mutual inductance $M'$. This mutual inductance vanishes when the Josephson element of the qubit and the bias line are arranged symmetrically. With a moderate coupling asymmetry for an on-chip bias line, Koch et al. estimate that the $T_{1}$ corresponding to this loss mechanism would be of the order of 70 ms. Because this mechanism does not directly involve the presence or absence of a SQUID loop for the inductive element, the asymmetry between junctions that we employ in our asymmetric transmons will not play any role here, and this particular limit on $T_{1}$ should be no different from that for a conventional transmon. An additional potential relaxation channel may arise due to capacitive coupling to the flux-bias line, as discussed in Ref. \\cite{JohnsonDisserationYale2011_s}. However, this is expected to be negligible where a bobbin coil is used, as in our experiments.\n\n\\section{Ramsey Decay Fitting}\n\nAs described in the main paper, our analysis of qubit dephasing rates used a purely exponential fit\nto all of the measured Ramsey decays. Here we discuss why this fitting approach is appropriate for all asymmetric qubits and a large portion of\nthe coherence data measured for the symmetric qubit.\n\nOf all the qubits measured in this study, the symmetric $\\alpha = 1$ qubit\nwas most impacted by flux noise away from the qubit sweet spot because of its large energy-band gradient. Therefore,\nto illustrate the impact that flux noise has upon the Ramsey decay\nenvelope we will consider the Ramsey measurements for this qubit on\nand off the sweet spot. Example measurements are shown at flux values of 0 and 0.3~$\\Phi_0$ in Fig. \\ref{fig:5}a and b, respectively. At each flux point,\nwe fit the Ramsey decay with both a purely exponential (Fig. 
\\ref{fig:5}a\nI) and purely Gaussian form (Fig. \\ref{fig:5}a II); the residuals\nof each fit are included to compare the quality of fit in each case.\nAs has been discussed in the main paper, at the upper sweet spot, where\n$D_{\\Phi} = 0$, non-flux-dependent background dephasing\nshould dominate and the Ramsey decay should be more readily fit using\nan exponential. Figure \\ref{fig:5}a shows that this is indeed the\ncase: the purely exponential fit provides a more precise fit to the\nRamsey decay, with the residuals to this fit being smaller over the entire range compared to those corresponding to the Gaussian fit. The Ramsey decay\nshown in Fig. \\ref{fig:5}b was measured at a point where $D_{\\Phi}$\nwas the maximum measured for the $\\alpha = 1$ qubit. Here, it is clear that\na purely Gaussian form results in a better fit with smaller residuals than an exponential envelope. This indicates that, at this flux point, the $\\alpha = 1$ qubit is heavily impacted by low-frequency flux noise, as a purely 1\/f dephasing source would result in a Gaussian envelope for the decay \\citep{ithier2005decoherence_s}. Although a purely Gaussian fit form is useful\nfor illustrating the impact that flux noise has upon the Ramsey decay\nform, it is not an optimal quantitative approach for investigating dephasing in these qubits. This is because tunable transmons dephase not only due to flux noise with a roughly $1\/f$ power spectrum, but also due to other noise sources with different non-$1\/f$ power spectra \\citep{sears2012photon_s,schuster2005ac_s,gambetta2006qubit_s}. These other noise sources generally result in an exponential dephasing envelope. Also, dephasing has an intrinsic loss component that is always exponential in nature. 
Therefore, to accurately fit decay due to dephasing in these qubits, we must account for these exponential decay envelopes in any fitting approach that is not purely exponential.\n\n\\begin{figure}\n\n\\includegraphics[width=1\\columnwidth]{Ramsey_Decay_Fit}\n\\caption{Ramsey decay envelopes measured for the $\\alpha = 1$ qubit at a) the sweet spot $\\Phi=0$ and b) $\\Phi=0.3 \\Phi_{0}$ where $D_{\\Phi}$ was the largest value measured for this qubit. At each flux point, the Ramsey decay envelopes are fit with both a purely exponential {(}I{)} and Gaussian {(}II{)} fit form. Functions fitted to the measured data {(}blue open circles{)} are plotted as solid red lines.\n\\label{fig:5}}\n\n\\end{figure}\n\nTo account for the $T_{1}$ contribution to the Ramsey decay envelope in our non-exponential fitting, we take the average $T_{1}$ measured at each flux point\nand separate this from $T_{2}^{*}$ in the Ramsey fit function using\n$1\/T_{2}^{*} = 1\/T_{\\phi} + 1\/(2T_{1})$. Therefore, instead\nof fitting a $T_{2}^{*}$ time, we fit $T_{\\phi}$\ndirectly. To fit the Ramsey decay using a Gaussian fit form, we square the\ndephasing exponent within the fitting function {[}Eq. {(}\\ref{eq:1}{)}{]}. We can go one step\nfurther by not forcing an explicit fit form on the dephasing exponent, but instead adding another fit parameter $\\gamma$ {[}Eq. {(}\\ref{eq:2}{)}{]}, which would be 1 for a pure exponential and 2 for a pure Gaussian. Although a fit that is not explicitly exponential or Gaussian is not motivated directly by a particular theoretical model, by fitting Ramsey decays with this free exponent\n$\\gamma$, we gain insight into the transition from flux-noise-dominated dephasing at large $D_{\\Phi}$ to background dephasing near the sweet spots. 
The two separate fit forms described above are given by the following decay functions:\n\n\\begin{equation}\nf_{Ramsey}(t)=A+B\\{\\cos{(\\omega t+\\delta)}\\exp{(-\\Gamma_{1}t\/2)}\\exp{[-(\\Gamma_{\\phi}t)^2]}\\}\\label{eq:1},\n\\end{equation}\n\\begin{equation}\nf_{Ramsey}(t)=A+B\\{\\cos{(\\omega t+\\delta)}\\exp{(-\\Gamma_{1}t\/2)}\\exp{[-(\\Gamma_{\\phi}t)^\\gamma]}\\}\\label{eq:2},\n\\end{equation}\nwhere $A$ and $B$ are offset and magnitude constants to adjust the arbitrary measured signal, $\\omega$ is the detuning from the qubit frequency with a phase offset $\\delta$, $\\Gamma_{1}$ is the intrinsic loss rate {(}$1\/T_{1}${)} and $\\Gamma_{\\phi}$ is the dephasing rate. Here, $A$, $B$, $\\omega$, $\\delta$, $\\Gamma_{\\phi}$, and $\\gamma$ are fit parameters. All other components are fixed with values determined using the methods discussed above.\n\n\\begin{figure}\n\n\\includegraphics[width=0.75\\columnwidth]{1to1_gamma_vs_Phi}\n\\caption{$\\gamma$ vs flux extracted from fits to the Ramsey measurements on the $\\alpha = 1$ qubit using Eq. \\ref{eq:2}.\n\\label{fig:6}}\n\n\\end{figure}\n\nThis behavior is illustrated in Fig. \\ref{fig:6}, where we plot $\\gamma$\nvs. flux extracted from fits to the Ramsey measurements on the $\\alpha = 1$\nqubit using Eq. {(}\\ref{eq:2}{)}. In the flux region between $\\pm 0.1\\,\\Phi_{0}$, $\\gamma \\approx 1$,\nindicating that the dephasing envelope is primarily exponential, and thus the dominant dephasing noise affecting the qubits here does not have a $1\/f$ spectrum. At flux bias points further away from the sweet spot, $\\gamma$ shifts towards 2 as $D_{\\Phi}$ increases and appears\nto level off close to this value at flux biases above $\\sim 0.2\\,\\Phi_{0}$. Thus, in this bias regime, the dephasing envelope is primarily Gaussian and the dephasing noise influencing the qubits is predominantly low-frequency in nature with a $1\/f$-like spectrum \\citep{ithier2005decoherence_s,yoshihara2006decoherence_s}. 
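To make the procedure concrete, the variable-exponent form of Eq. (2) can be fitted with a standard nonlinear least-squares routine. The sketch below is illustrative only (it is not the analysis code used for the paper); the trace is synthetic, and the fixed $\Gamma_{1}$ and all qubit parameters are assumed values:

```python
import numpy as np
from scipy.optimize import curve_fit

GAMMA_1 = 1.0 / 40e-6  # fixed 1/T1 (s^-1) from separate relaxation measurements; assumed value

def ramsey_eq2(t, A, B, omega, delta, gamma_phi, gamma):
    # Eq. (2): T1 envelope exp(-Gamma_1*t/2) times a stretched-exponential
    # dephasing envelope exp(-(Gamma_phi*t)^gamma) under the Ramsey fringes.
    return A + B * (np.cos(omega * t + delta)
                    * np.exp(-GAMMA_1 * t / 2.0)
                    * np.exp(-(gamma_phi * t) ** gamma))

# Synthetic Ramsey trace with an assumed exponent gamma = 1.7 (illustration only).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 30e-6, 300)
data = ramsey_eq2(t, 0.5, 0.4, 2 * np.pi * 0.5e6, 0.1, 1.0 / 8e-6, 1.7)
data += 0.005 * rng.normal(size=t.size)

# Bounds keep gamma_phi and gamma positive so the stretched exponent stays real.
p0 = [0.5, 0.4, 2 * np.pi * 0.5e6, 0.0, 1e5, 1.5]
popt, _ = curve_fit(ramsey_eq2, t, data, p0=p0,
                    bounds=([0.0, 0.0, 0.0, -np.pi, 0.0, 0.5],
                            [1.0, 1.0, 1e7, np.pi, 1e6, 3.0]))
print(f"fitted gamma = {popt[5]:.2f}, T_phi = {1e6 / popt[4]:.1f} us")
```

An exponent near 1 would indicate exponential decay and one near 2 Gaussian decay, mirroring the interpretation of Fig. 6.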
\n\nWe can also visualize this variable-exponent fit by plotting $\\gamma$ vs. $D_{\\Phi}$ rather than $\\Phi$, again for the $\\alpha = 1$ qubit {(}Fig. \\ref{fig:7}{)}.\nIn this plot, $\\gamma$ approaches 2 for $D_{\\Phi}$ values around $6~\\rm GHz\/\\Phi_{0}$. We have also included vertical dashed lines on Fig. \\ref{fig:7} indicating the maximum $D_{\\Phi}$ values reached by the less tunable $\\alpha = 4$ and 7 qubits on sample A. Below these $D_{\\Phi}$ levels, $\\gamma$ is close to 1, implying that the decay envelope is nearly exponential, and thus justifying our use of an exponential decay for fitting the asymmetric qubits in the main paper.\n\n\\begin{figure}\n\\includegraphics[width=0.75\\columnwidth]{1to1_gamma_vs_Grad}\n\n\\caption{$\\gamma$ vs $D_{\\Phi}$ extracted from fits to the Ramsey measurements on the $\\alpha = 1$ qubit using Eq. \\ref{eq:2}. Dashed lines included to indicate the maximum $D_{\\Phi}$ reached by the $\\alpha = 7$ {(}black dashed line{)} and $\\alpha = 4$ {(}blue dot-dashed line{)} qubits measured on sample A.\n\\label{fig:7}}\n\n\\end{figure}\n\nAs yet another approach to fitting the Ramsey decay envelopes, we can employ a function that separates the exponential decay due to background dephasing from the Gaussian form due to dephasing from noise with a low-frequency tail.\nFor this fit, along with separating\nout the $T_{1}$ contribution to the Ramsey decay envelope, we also determine the\nnon-flux-dependent background-dephasing rate at the sweet spot,\nthen use this rate as a fixed parameter in the fitting of our Ramsey measurements\nat any given flux point. We now have a composite Ramsey fit form that\nhas three components: a $T_{1}$ contribution and a background dephasing component that are purely exponential and fixed by the fitting of separate measurements, plus a Gaussian component to capture the dephasing due to noise with a $1\/f$ spectrum. 
This leads to a composite fitting function of the form:\n\n\\begin{equation}\nf_{Ramsey}(t)=A+B\\{\\cos{(\\omega t+\\delta)}\\exp{(-\\Gamma_{1}t\/2)}\\exp{(-\\Gamma_{\\phi ,bkg}t)}\\exp{[-(\\Gamma_{\\phi}t)^2]}\\}\\label{eq:3},\n\\end{equation}\nwhere $A$ and $B$ are offset and magnitude constants to adjust the arbitrary measured signal, $\\omega$ is the detuning from the qubit frequency with a phase offset $\\delta$, $\\Gamma_{1}$ is the intrinsic loss rate {(}$1\/T_{1}${)}, $\\Gamma_{\\phi ,bkg}$ is the background dephasing rate measured at $D_{\\Phi}=0$ and $\\Gamma_{\\phi}$ is the fitted dephasing rate. Here, $A$, $B$, $\\omega$, $\\delta$, and $\\Gamma_{\\phi}$ are fit parameters. All other components are fixed with values determined using the methods discussed above. Though this fit form separates the different components of the dephasing decay well, it has one key deficiency: it assumes that the background dephasing rate is frequency independent, which is not necessarily justified, as the background dephasing mechanism may also vary with frequency. To calculate the total dephasing rate using this fit form, we add the constant background dephasing to the fitted $\\Gamma_{\\phi}$. \n\n\\begin{figure}\n\n\\includegraphics[width=0.75\\columnwidth]{1to1_RatevsGrad_4type_MAIN}\n\\caption{$\\Gamma_{\\phi}$ vs. $D_{\\Phi}$ calculated for the $\\alpha = 1$ qubit using the exponential,\nGaussian {[}Eq. {(}\\ref{eq:1}{)}{]}, $\\gamma$-exponent {[}Eq. {(}\\ref{eq:2}{)}{]}, and composite {[}Eq. {(}\\ref{eq:3}{)}{]} fitting forms.\n\\label{fig:8}}\n\\end{figure}\n\nTo understand how the explicit fitting form impacts the dephasing rate, in Fig. \\ref{fig:8} we plot $\\Gamma_{\\phi}$ vs. $D_{\\Phi}$ calculated for\nthe $\\alpha = 1$ qubit using the four different fitting forms: exponential, Gaussian {[}Eq. {(}\\ref{eq:1}{)}{]}, $\\gamma$-exponent {[}Eq. {(}\\ref{eq:2}{)}{]}, and composite {[}Eq. {(}\\ref{eq:3}{)}{]}. 
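The composite form of Eq. (3) can be implemented in the same way as Eq. (2); here $\Gamma_{1}$ and $\Gamma_{\phi,bkg}$ are held fixed and only the Gaussian flux-noise rate is fitted. This is a sketch with assumed rate values, not the actual analysis code:

```python
import numpy as np
from scipy.optimize import curve_fit

GAMMA_1 = 1.0 / 40e-6             # fixed 1/T1 (s^-1); assumed value
GAMMA_PHI_BKG = 2 * np.pi * 30e3  # fixed background dephasing rate at D_Phi = 0; assumed value

def ramsey_eq3(t, A, B, omega, delta, gamma_phi):
    # Eq. (3): fixed exponential T1 and background-dephasing envelopes
    # times a Gaussian envelope for 1/f flux-noise dephasing.
    return A + B * (np.cos(omega * t + delta)
                    * np.exp(-GAMMA_1 * t / 2.0)
                    * np.exp(-GAMMA_PHI_BKG * t)
                    * np.exp(-(gamma_phi * t) ** 2))

# Noise-free synthetic trace; the fit should recover gamma_phi = 1/(6 us).
t = np.linspace(0.0, 20e-6, 400)
data = ramsey_eq3(t, 0.5, 0.4, 2 * np.pi * 0.4e6, 0.0, 1.0 / 6e-6)
popt, _ = curve_fit(ramsey_eq3, t, data, p0=[0.4, 0.3, 2 * np.pi * 0.4e6, 0.1, 1e5])

# As in the text, the total dephasing rate is the fixed background plus the fitted rate.
total_rate = GAMMA_PHI_BKG + abs(popt[4])
print(f"fitted Gamma_phi = {abs(popt[4]):.3e} s^-1")
```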
We first note that any differences in the rate of dephasing calculated at each point using the various fit methods\nare subtle and the fits are reasonably consistent with one another within the fit error bars and scatter. We do observe, though, that a purely exponential fit results in\na dephasing rate that is slightly higher than the values from the Gaussian fits for all flux points, resulting in the largest slope and thus the highest effective flux-noise level. Therefore, we conclude that forcing a purely exponential fit to the Ramsey decay envelopes measured for qubits that are strongly influenced\nby $1\/f$ flux noise simply puts an upper bound on the absolute flux\nnoise strength. The $\\gamma$-exponent fitting approach provides a dephasing\nrate that agrees well with that extracted from the exponential fit\nform at low $D_{\\Phi}$ values where background-dephasing\nprocesses dominate. However, at higher $D_{\\Phi}$ values\nwhere the qubit is heavily impacted by $1\/f$ flux noise, the $\\gamma$-exponent\nfit provides better agreement with the Gaussian-fitted dephasing rate.\n\nThe composite fit is rigidly fixed along the $\\Gamma_{\\phi}$ axis by the value chosen\nto match the background dephasing rate, in this case chosen to match\nthe rate observed at the lowest $D_{\\Phi}$ for the pure\nexponential fit. For this reason, direct comparisons between this\nfit and the others at individual flux points are more difficult. Despite all of these potential issues, the slope of $\\Gamma_{\\phi}$ vs. $D_{\\Phi}$ is independent of the chosen background-dephasing\nrate. Therefore, this composite fit can be used to calculate a flux-noise level for this $\\alpha = 1$ qubit that takes into account both the exponential\nnature of non-flux-dependent dephasing and the Gaussian nature of\n$1\/f$ flux-noise decay. 
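The final step, converting the slope of $\Gamma_{\phi}$ vs. $D_{\Phi}$ into a $1/f$ flux-noise amplitude via $\Gamma_{\phi}=2\pi\sqrt{A_{\Phi}|\ln{(2\pi f_{IR}t)}|}D_{\Phi}$, can be sketched as follows. The infrared cutoff $f_{IR}$, the time scale $t$, and the slope numbers are placeholder values for illustration, not our measured data:

```python
import numpy as np

F_IR = 1.0     # infrared cutoff frequency in Hz (placeholder value)
T_EXP = 10e-6  # characteristic free-evolution time in s (placeholder value)

def sqrt_A_phi(slope):
    # Invert Gamma_phi = 2*pi*sqrt(A_Phi*|ln(2*pi*f_IR*t)|)*D_Phi for A_Phi^(1/2),
    # where slope = dGamma_phi/dD_Phi and D_Phi is in Hz/Phi_0.
    log_factor = abs(np.log(2.0 * np.pi * F_IR * T_EXP))
    return slope / (2.0 * np.pi * np.sqrt(log_factor))

# Illustrative round trip: generate Gamma_phi points for A_Phi^(1/2) = 1.3 micro-Phi_0,
# fit a straight line, and recover the noise amplitude from the slope.
d_phi = np.array([1.0, 2.0, 4.0, 6.0]) * 1e9  # gradient values in Hz/Phi_0
a_half_true = 1.3e-6                          # flux-noise amplitude in Phi_0
log_factor = abs(np.log(2.0 * np.pi * F_IR * T_EXP))
gamma_phi = 2.0 * np.pi * a_half_true * np.sqrt(log_factor) * d_phi
slope = np.polyfit(d_phi, gamma_phi, 1)[0]
print(f"A_Phi^(1/2) = {sqrt_A_phi(slope) * 1e6:.2f} micro-Phi_0")
```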
Using the same methods outlined in our paper, where we \nspecified $\\Gamma_{\\phi}=2\\pi\\sqrt{A_{\\Phi}|\\ln{(2\\pi f_{IR}t)}|}D_{\\Phi}$, following the approach described in Ref.~\\citep{ithier2005decoherence_s}, we use the slope of this composite fit\nto extract a $1\/f$ flux noise level of $A_{\\Phi}^{1\/2}=1.3~\\pm~0.2~\\mu \\Phi_0$. \nThis $\\sim10\\%$ reduction in the extracted flux-noise level for the $\\alpha = 1$ qubit compared to the purely exponential fit {(}$A_{\\Phi}^{1\/2}~=~1.4~\\pm~0.2~\\mu \\Phi_0${)} brings it closer to the flux-noise level extracted from the fits to the measurements on the $\\alpha = 7$ and 4 qubits: $1.3~\\pm~0.2~\\mu \\Phi_0$ and $1.2~\\pm~0.2~\\mu \\Phi_0$, respectively. The Ramsey measurements for these qubits were fit using a purely exponential fit form. It is important to note, though, that the $\\sim10\\%$ reduction in the flux-noise level extracted from the composite fit for the $\\alpha = 1$ qubit is within the errors associated with our flux-noise calculations.\n\nTo conclude this fitting study, we have shown that:\n\\begin{enumerate}\n\\item The $\\alpha = 1$ qubit in this study has a Ramsey decay envelope that is more Gaussian\nin nature at high $D_{\\Phi}$ values where the dephasing of this qubit is strongly influenced by low-frequency flux noise.\n\\item Though we have discussed different fitting approaches that better\nmodel the Ramsey decay envelope of qubits influenced by $1\/f$ flux-noise,\nusing a purely exponential decay form for the Ramsey decay simply\nputs an upper bound on the extracted flux noise strength. 
Also, the value of the flux-noise level and the dephasing rates are comparable to those we obtained with the various other fitting approaches.\n\\item Using a Ramsey fit function that takes into account both the exponential\nnature of the $T_{1}$ contribution to the decay envelope and non-flux-dependent dephasing, as well as the\nGaussian nature of dephasing due to $1\/f$ flux noise, allows us to calculate\na flux noise level for the $\\alpha = 1$ qubit that agrees well with those of the other,\nasymmetric qubits on the same sample. This is expected, as qubits of the same geometry on the same chip should experience similar flux noise \\citep{sendelbach2008magnetism_s}.\n\\end{enumerate}\n\n\\section{Dephasing Rate Discussion}\n\nIn Fig. \\ref{fig:9} we present dephasing rates for several additional qubits, plotted against $D_{\\Phi}$. These qubits were similar to those in our paper, but were prepared on additional chips and measured during additional cooldowns of our cryostats. These data are not included in our paper for reasons of clarity and consistency. However, they are presented here to support the observations found in this study across all qubits measured in both of our labs. \n\n\\begin{figure}[!b]\n\n\\includegraphics[width=1.0\\columnwidth]{RatevsGrad_SUP2}\n\\caption{$\\Gamma_{\\phi}$ vs $D_{\\Phi}$ for qubits measured during this study that were not included in the main paper. $\\Gamma_{\\phi}$ for fixed-frequency qubits included as dashed lines. Type A\/B qubits were similar in design to those on samples A\/B and were measured using similar methods and device designs as those described for the corresponding sample type.\n\\label{fig:9}}\n\\end{figure}\n\nThe first observation we make from Fig. \\ref{fig:9} is that a spread in background dephasing rates is measured between both fixed-frequency and tunable qubits. 
As discussed in our paper, these subtle variations in qubit dephasing rate are not unexpected and are commonly observed in multi-qubit devices \\citep{corcoles2015demonstration_s,chow2014implementing_s,Takita2016Dem_s}. While these variations in dephasing rate make the figure somewhat challenging to interpret, we can still draw the same conclusions from these data as those drawn in our main paper. We still observe that the dephasing rate due to flux noise increases linearly with $D_{\\Phi}$ for the lower-asymmetry qubits. Again, at lower $D_{\\Phi}$ values, below $\\sim 1\\,{\\rm GHz}\/\\Phi_0$, the rate of dephasing is constant within the experimental spread for all qubits. Here, it is important to note that, for several of the qubits shown here and those discussed in our paper, there are specific flux bias points for each qubit where the dephasing rate is anomalously high. These points almost always coincide with places where $T_{1}$ drops sharply at specific frequencies, presumably due to localized coupling to defects in these qubits. Again, this sharp frequency dependence in $T_{1}$ is not unusual for tunable superconducting qubits and is consistent with what others have observed \\citep{barends2013coherent_s}. \n\nThe relatively flux-independent dephasing rate at low $D_{\\Phi}$ is particularly apparent in the 9:1 qubits we measured. Several of these qubits exhibited the lowest background dephasing rates we observed in our study, between 20 and 40~kHz. These dephasing rates are comparable to those of current state-of-the-art superconducting qubits \\cite{Takita2016Dem_s}. No fixed-frequency qubits were included on the same chips as these 9:1 asymmetric transmons, which prevents us from making a direct comparison with non-flux-noise-driven background dephasing rates as is done in the main paper. 
Nonetheless, for these 9:1 qubits, we can clearly see that the dephasing rate is essentially flux independent below $\\sim 1\\,{\\rm GHz}\/\\Phi_0$ even at these low background dephasing levels. This reinforces our statement that asymmetric qubits with a useful level of tunability can be incorporated into future fault-tolerant superconducting qubit devices, significantly aiding scalability in these systems. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nQuantum spin systems with energy gaps have attracted considerable attention\nbecause of their rich variety of interesting phenomena.\nIn these systems, the ground state is a nonmagnetic singlet state,\nand there exists a finite energy gap between the ground state and the excited magnetic states.\nAbove a certain magnetic field $H_{\\rm c}$, the energy of one of the magnetic excited states\nbecomes lower than that of the singlet state,\nand a nonmagnetic-magnetic crossover occurs.\nRecently, special attention has been paid to the magnetic transition\nthat develops just above $H_{\\rm c}$ in these spin-gap systems.~\\cite{Rice}\nA number of novel phenomena have been reported in these fields;\nfor example, the magnetic ordering of TlCuCl$_3$ in fields is interpreted as\nthe Bose-Einstein condensation (BEC) of magnons.~\\cite{Nikuni,Oosawa}\nFor the two-dimensional dimer system SrCu$_2$(BO$_3$)$_2$,\nsuperlattice formation of localized triplets has been observed.~\\cite{Kodama}\n\nThe Haldane systems, i.e., quasi-one-dimensional Heisenberg antiferromagnets with integer spins,\nare also among the most extensively studied spin-gap systems.\nFor this class, unfortunately, field-induced magnetic ordering \nhas hardly been observed experimentally.\nThe archetypal Haldane system, Ni(C$_2$H$_8$N$_2$)$_2$NO$_2$(ClO$_4$) (NENP),~\\cite{Renard}\nshows no evidence of field-induced ordering down to 0.2 K in fields up to 13 T.~\\cite{Kobayashi92}\nInstead, the existence of an energy gap was revealed even at $H_{\\rm 
c}$.~\\cite{Kobayashi92}\nThis is explained by the existence of a staggered field on Ni sites,\nwhich arises because the principal axis of the $g$ tensor in NENP tilts\nalternately.~\\cite{Chiba,Fujiwara}\nThis fact results in a slow crossover from the nonmagnetic to the magnetically polarized state in NENP,\npreventing a field-induced phase transition.\n\nSo far, field-induced ordering in Haldane systems has been\nreported only for two cases: in Ni(C$_5$H$_{14}$N$_2$)$_2$N$_3$(PF$_6$)\nand Ni(C$_5$H$_{14}$N$_2$)$_2$N$_3$(ClO$_4$), abbreviated NDMAP and NDMAZ, respectively.\nField-induced transitions in these systems were demonstrated \nby specific heat~\\cite{Honda97,Honda98,Honda01,Kobayashi01}\nand neutron diffraction experiments.~\\cite{Chen,Zheludev01E}\nIn the ordered state, interestingly, unusual spin excitations are observed.\nESR~\\cite{Hagiwara03} and inelastic neutron-scattering experiments~\\cite{Zheludev03} on NDMAP\nhave revealed the existence of three distinct excitations in the ordered phase.\nThis feature is quite different from that of a conventional N\\'{e}el state,\nwhere the dominant excitations are the spin-wave modes.~\\cite{Zheludev02A}\nThe field-induced ordered phase in Haldane systems is thus expected to illustrate\nmany kinds of novel physics, if more examples become available.\n\nThe compound PbNi$_2$V$_2$O$_8$ is another candidate Haldane\nsystem in which field-induced ordering can be observed experimentally.\nPbNi$_2$V$_2$O$_8$ has a tetragonal crystal structure with Ni$^{2+}$ ($S=1$) ions \nforming a chain along the $c$-axis.\nMagnetic susceptibility, high-field magnetization, and inelastic neutron scattering\nexperiments were performed, and their results consistently suggest that this system\nis a Haldane-gap system.~\\cite{Uchiyama}\nThe spin gap closes at $H^{\\parallel}_{\\rm c} = 19$ T and $H^{\\perp}_{\\rm c} = 14$ T,~\\cite{Uchiyama}\nwhere $H^{\\parallel}_{\\rm c}$ and $H^{\\perp}_{\\rm c}$ are the critical fields\napplied 
parallel and perpendicular to the chain ($c$-axis), respectively.\nThese values of $H_{\\rm c}$ are within an experimentally accessible range.\nMoreover, PbNi$_2$V$_2$O$_8$ is reported to exhibit\nan impurity-induced magnetic transition around 3 K.~\\cite{Uchiyama}\nThis transition was found to be a long-range magnetic ordering \nby neutron diffraction~\\cite{Lappas}\nas well as by specific heat measurements~\\cite{Masuda}.\nThese facts suggest a relatively large interchain coupling $J_1$.\nIn fact, the $D-J_1$ plot~\\cite{Sakai90} (Sakai-Takahashi diagram) for this compound,\nwhere $D$ is the single-ion anisotropy, \nsuggests that PbNi$_2$V$_2$O$_8$ is in the spin-liquid (disordered) regime\nbut very close to the long-range ordered regime.~\\cite{Zheludev00} \nHence, one can expect that applying fields beyond $H_{\\rm c}$\nwill result in magnetic ordering.\nIn the present paper, we have investigated the magnetic properties of PbNi$_2$V$_2$O$_8$\nat $H > H_{\\rm c}$ using temperature-dependent measurements of magnetization in static fields\nup to 30 T,\nand have observed an indication of magnetic ordering above $H_{\\rm c}$.\n\n\\section{Experimental}\nA field-oriented powder sample of PbNi$_2$V$_2$O$_8$ was prepared \nas in the first report,~\\cite{Uchiyama}\nsince single crystalline samples are not yet available.\nThe powder of PbNi$_2$V$_2$O$_8$ was synthesized by a solid-state reaction\nfrom PbO (99.999\\% pure), NiO (99.99\\%) and V$_2$O$_5$ (99.99\\%).\nThe starting materials were mixed and heated in air, \nfirst at 600$^\\circ$C and subsequently at 750$^\\circ$C for several days \nwith intermittent grindings. 
\n\nThe powder X-ray diffraction (XRD) pattern agrees well with the calculated pattern\nbased on the structure refined by neutron diffraction experiments,~\\cite{Lappas,Comment}\nand no second phase was detected.\nThe powder was aligned by a magnetic field (6 T) in Stycast.\nThe orientation was checked by the (004) XRD peak.\nThe result confirmed that the $c$-axis aligns parallel to the magnetic field,\nas reported previously.~\\cite{Uchiyama}\nIn the following, we refer to magnetization measured under fields\nparallel to the $c$-axis as $M^{\\parallel}$, and\nto that under fields perpendicular to the $c$-axis as $M^{\\perp}$.\n\nMagnetization was measured by an extraction method.\nMagnetic fields up to 15 T were generated by a superconducting magnet.\nFields higher than 15 T were generated by a hybrid magnet at the Tsukuba\nMagnet Laboratory.\nFor the measurements of magnetization as a function of magnetic field,\nthe fields were swept at a rate of about 0.3 T per minute at a temperature of 1.5 K.\nFor the temperature-dependent measurements, magnetization was measured under\nconstant magnetic fields.\n\n\\section{Results and discussion}\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=8cm]{Fig1.eps}\n\\caption{\\label{fig:epsart} Field dependence of the magnetization of the\nPbNi$_2$V$_2$O$_8$ powder samples.}\n\\end{center}\n\\end{figure}\n\\begin{figure*}[t]\n\\begin{center}\n\\includegraphics[width=14cm]{Fig2.eps}\n\\caption{\\label{fig:epsart} Temperature dependence of the magnetization of \nPbNi$_2$V$_2$O$_8$ for (a) $H \\parallel c$ and (b) $H \\perp c$.\nArrows indicate $T_{\\rm min}$ defined in the text.\n}\n\\end{center}\n\\end{figure*}\nFig.1 shows the field dependence of the magnetization $M^{\\parallel}(H)$\nand $M^{\\perp}(H)$ measured at $T$ = 1.5 K.\nBoth the $M^{\\parallel}(H)$ and $M^{\\perp}(H)$ curves steeply increase above \nthe critical fields, $H_{\\rm c}^{\\parallel}$ = 19 T and $H_{\\rm c}^{\\perp}$ = 13.5 T, 
respectively. \nThese values correspond to the critical fields at which the Haldane gap closes,\nand are in good agreement with the previous report based on\npulsed-field experiments.~\\cite{Uchiyama}\n\nHere the $M(H)$ just above $H_{\\rm c}$ increases almost linearly with $H$.\nThis behavior differs from the theoretical predictions,\nwhere $M(H)$ varies in proportion to $\\sqrt{H-H_{\\rm c}}$ for axially symmetric fields\n($H \\parallel c$).~\\cite{Affleck,Takahashi}\nOne reason for this discrepancy may be a finite-temperature effect.\nThe $\\sqrt{H-H_{\\rm c}}$ dependence is easily masked by slight thermal excitation,\nas is observed in NDMAZ.~\\cite{Kobayashi01}\nAnother possible origin is imperfect powder orientation,\nwhich can lead to axially asymmetric fields even for $H \\parallel c$.\nHowever, it is also possible that the linear $M(H)$ is an intrinsic behavior of an\nantiferromagnetically ordered system.\nWhen the measured temperature is sufficiently low compared to the energy gap,\nthe system promptly enters the antiferromagnetically ordered regime above $H_{\\rm c}$,\nas is shown below.\n\n\nFig.2(a) shows the temperature dependence of the magnetization measured under\nfields parallel to the $c$-axis, $M^{\\parallel}(T)$.\nFor $H < H^{\\parallel}_{\\rm c} = 19$ T, the $M^{\\parallel}(T)$ curves show no anomalies.\nBelow 5 K, $M^{\\parallel}(T)$ has small but finite values (0.02-0.03$\\mu_{\\rm B}$),\nwhich are attributed to the saturation magnetization of impurities and\/or defects.\nFor $H$ = 22 T, there exists a cusp-like minimum at\naround $T_{\\rm min}$ = 6.4 K. 
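A cusp-like minimum of this kind can be located simply as the temperature of minimum magnetization over the measured window. The sketch below only illustrates that procedure on synthetic data with an assumed cusp shape, not on our measured curves:

```python
import numpy as np

def find_t_min(temperature, magnetization):
    # Take T_min as the temperature at which M(T) is smallest;
    # for a cusp-like minimum this coincides with the kink position.
    return float(temperature[np.argmin(magnetization)])

# Synthetic M(T) with a cusp at 6.4 K (illustrative shape only).
T = np.linspace(2.0, 12.0, 201)                              # temperature in K
M = 0.05 + 0.010 * np.abs(T - 6.4) + 0.002 * (T - 6.4) ** 2  # magnetization in mu_B
print(f"T_min = {find_t_min(T, M):.1f} K")
```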
With increasing fields, $T_{\\rm min}$ \nshifts to higher temperatures systematically.\n$T_{\\rm min}$ reaches 11.5 K at $H$ = 30 T, as is seen in the figure.\n\nFig.2(b) shows the temperature dependence of the magnetization \nmeasured under fields perpendicular to the $c$-axis, $M^\\perp(T)$.\nSimilar to the above results, $M^{\\perp}(T)$ also exhibits a cusp-like minimum\nfor $H > H^{\\perp}_{\\rm c}$ = 13.5 T.\n\nIn both Fig.2 (a) and (b), $M(T)$ shows a sharp change in its slope at $T_{\\rm min}$.\nMoreover, below $T_{\\rm min}$, $M(T)$ increases with decreasing $T$ showing a convex curve,\nas is most clearly seen in $M^{\\perp}(T)$ at $H$ = 30 T (Fig.2 (b)).\nSuch $M(T)$ curves closely resemble those observed at the field-induced magnetic ordering \nof the coupled dimer system TlCuCl$_3$.~\\cite{Oosawa,Nikuni}\nFor this compound, it has been demonstrated, from neutron diffraction~\\cite{Tanaka} and specific heat measurements,~\\cite{Oosawa01} that $T_{\\rm min}$, at which $M(T)$ has a minimum,\nis the N\\'{e}el temperature.\nSimilarly, such cusp-like anomalies in $M(T)$ curves have been shown to mark \nthe ordering temperature \nin the $S=1\/2$ alternating chain Pb$_2$V$_3$O$_9$ (ref.~\\cite{Waki})\nand the quasi-2-dimensional BaCuSi$_2$O$_6$ (ref.~\\cite{Jaime}) \nfrom specific heat measurements.\nWe hence conclude that the data shown in Fig.2 also demonstrate the occurrence of field-induced\nmagnetic ordering with N\\'{e}el temperatures around $T_{\\rm min}$.\n\nIt is notable that the Haldane system NDMAP exhibits a minimum in $M(T)$ for $H > H_{\\rm c}$\nat temperatures much higher than $T_{\\rm N}$.~\\cite{Honda01,HondaJAP}\nThe origin of this minimum is not yet clear, and is possibly related to a\ncrossover into the low-temperature Tomonaga-Luttinger (TL) liquid regime,\nas is predicted for non-interacting one-dimensional ladders.~\\cite{Wang,Wessel}\nThis is purely a one-dimensional phenomenon,\nand the three-dimensional ordering occurs at much lower temperatures.~\\cite{Wessel}\nFor those 
cases, the $M(T)$ curves around $T_{\\rm min}$ are characterized by a relatively\nbroad minimum and a concave curve.~\\cite{Honda01,HondaJAP,Wessel}\nThis is in clear contrast with the cusp-like anomalies and the convex curve below $T_{\\rm min}$\nin the present study as well as in those reported for TlCuCl$_3$ etc.,\nwhich signal three-dimensional magnetic ordering.\nIt is of course important to perform other experiments in order \nto verify the magnetic ordering at $T_{\\rm min}$.\nThe lack of single crystalline samples makes it difficult\nto measure the specific heat of this anisotropic compound.\nWe are therefore planning to measure the NMR spectra at high fields.\n\nIt may be interesting to compare the ordered state induced by fields in PbNi$_2$V$_2$O$_8$\nwith that induced by impurity doping in PbNi$_{2-x}$Mg$_x$V$_2$O$_8$.~\\cite{Uchiyama}\nESR~\\cite{Smirnov} and $\\mu$SR experiments~\\cite{Lappas} have shown that the ordered state of the latter has an inhomogeneous distribution of the magnetic moment.\nIn addition, the impurity-induced ordered state vanishes at fields higher than $H$ = 4 T,\nwhere the Haldane state with an energy gap recovers.~\\cite{Masuda}\nIn contrast, the ordered state observed in the present experiments shows up only\nabove $H_{\\rm c}$.\nThe largest value of $T_{\\rm min}$ in the present study is $\\sim$10 K for $H$ = 30 T,\nwhich is much larger than the maximum value of $T_{\\rm N}$ induced by Mg doping,\n3.3 K,~\\cite{Smirnov} or the value of $zJ_1 \\sim 0.03J \\simeq 3.1$ K,\nwith $z$ the number of nearest chains, and $J$ the intrachain coupling.~\\cite{Zheludev00}\nThis fact implies that the field-induced ordering occurs via the\nwell-developed antiferromagnetic correlation along the chain,\nand that the field-induced ordered moment is distributed\nuniformly along the chain.\n\nIn Fig.3, the values of $T_{\\rm min}$ are plotted against the applied fields.\nThis corresponds to the magnetic phase diagram for PbNi$_2$V$_2$O$_8$.\nFor both 
$H \\parallel c$ and $H \\perp c$, $T_{\\rm min}$ increases with fields.\nIt is notable that the phase boundaries for $H \\parallel c$ and $H \\perp c$ \ndo not cross each other at least within the field range measured.\nThis is in qualitative agreement with the theoretical calculation\nby Sakai,~\\cite{Sakai01} the $HT$ phase diagram calculated for a Haldane chain with\nnegative $D$.\nIndeed, $D\/J = -0.05$ is estimated for PbNi$_2$V$_2$O$_8$ from inelastic\nneutron scattering experiments.~\\cite{Zheludev00}\nIn contrast, it is reported that crossing of the phase boundaries occurs in the\nphase diagram of NDMAP and NDMAZ,~\\cite{Honda98,Kobayashi01}\nand is well explained by the theoretical calculation for positive $D$.~\\cite{Sakai00}\nThus, PbNi$_2$V$_2$O$_8$ is the first example of field-induced order in \nthe Haldane system with negative $D$.\n\n\\begin{figure}[tp]\n\\begin{center}\n\\includegraphics[width=8cm]{Fig3.eps}\n\\caption{\\label{fig:epsart} Magnetic phase diagram of PbNi$_2$V$_2$O$_8$\nsuggested from the present experiments (symbols).\n`AFO' and `Haldane-state' represent the antiferromagnetically ordered state\nand the nonmagnetic spin-singlet state with a Haldane-gap, respectively.\nOpen symbols represent the data for $H \\parallel c$, and filled ones are \nfor $H \\perp c$. 
Circles indicate $T_{\\rm min}$ determined from the $M(T)$ curves,\nwhile diamonds indicate $H_{\\rm c}$ estimated from the $M(H)$ curves.\n }\n\\end{center}\n\\end{figure}\nSince the early stages of research on the Haldane systems,\ntheir magnetic state at $H > H_{\\rm c}$ has been discussed theoretically\nin terms of the BEC picture.~\\cite{Affleck,Sorensen}\nIn the following, we discuss the possible condensed state in the ordered phase.\nIn Fig.2 (a), one can see that $M^{\\parallel}(T)$ increases \nbelow $T_{\\rm min}$ with decreasing $T$.\nSuch an increase cannot be explained by a conventional mean-field theory,~\\cite{Tachiki}\nwhich predicts an almost flat $M(T)$ below the ordering temperature.\nInstead, this increase is successfully explained by the magnon BEC theory\nas being due to the increase in magnon number as the condensation sets in.~\\cite{Nikuni}\nTo apply the magnon BEC theory to our case, it is essential that the rotational\nsymmetry around the magnetic field be conserved.~\\cite{Nikuni}\nIn the present case, $M^{\\parallel}(T)$ satisfies this requirement.\n\nIt is rather surprising that $M^{\\perp}(T)$ also increases below $T_{\\rm min}$,\nas is seen in Fig.2(b).\nHere $H$ is applied perpendicularly to $D$,\nso that the rotational symmetry of the Hamiltonian around $H$ is broken.\nIn such cases, the magnon BEC picture is not assured because the number of bosons \nis not conserved.~\\cite{Nikuni}\nIn fact, the $M^{\\perp}(T)$ of NDMAP does not increase below $T_{\\rm N}$\nbut becomes flat against $T$.~\\cite{Honda01,HondaJAP}\nSuch behavior is consistent with the Ising-like antiferromagnet that\nis predicted to develop for $H \\perp D$.~\\cite{Sakai00}\nFor the present system, the similarity of $M^{\\perp}(T)$ and $M^{\\parallel}(T)$\nmay be due to the relatively small $D$ ($D\/J = -0.05$).\nThis point should be studied more carefully.\n\nIt should be remarked, however, that the BEC picture requires some stringent conditions.\nFirst, the concentration 
of magnons must be sufficiently dilute.~\\cite{Nikuni}\nIn fact, experiments on KCuCl$_3$, isostructural to TlCuCl$_3$,\nshowed that the $M(T)$ curve becomes flat below $T_{\\rm N}$ for \nfields well above $H_{\\rm c}$.~\\cite{Oosawa02}\nThis behavior implies that under such dense-magnon conditions, \na mean-field approximation is a better description.\nThe BEC picture should hence be applied only in the region $H - H_{\\rm c} \\sim 0$.\nMoreover, it has recently been argued that some anisotropic interactions\narising from spin-orbit coupling, like the Dzyaloshinsky-Moriya (DM) interaction \nand\/or the staggered $g$ effect, can qualitatively modify the BEC description,\neven if the interactions are very weak.~\\cite{Sirker04,Sirker05}\nRecent ESR measurements on TlCuCl$_3$ have indeed suggested the existence of \nsuch interactions.~\\cite{Glazkov}\nFor the present case, the screw-like crystal structure of PbNi$_2$V$_2$O$_8$\nmay cause the DM interaction, as is suggested to explain the weak ferromagnetism \nin the isostructural SrNi$_2$V$_2$O$_8$.~\\cite{Zheludev00}\n\n\n\n\\section{Conclusions}\nWe have observed a cusp-like anomaly at $T_{\\rm min}$ in the $M(T)$ curves for $H > H_{\\rm c}$.\nThe value of $T_{\\rm min}$ increases with applied fields.\nThese observations suggest the occurrence of field-induced magnetic ordering\nin the Haldane chain system PbNi$_2$V$_2$O$_8$.\nThe magnetic phase diagram of this system up to 30 T is presented.\nThe phase boundaries for $H \\parallel c$ and $H \\perp c$ do not\ncross each other, in qualitative agreement with the $HT$ phase diagram calculated theoretically for a Haldane \nsystem with $D<0$.~\\cite{Sakai01}\n\nIn the ordered phase, it is revealed that the magnetization increases with decreasing $T$\nand the $M(T)$ curves are convex for both directions $H \\parallel c$ and $H \\perp c$.\nThese features suggest that the magnon Bose-Einstein condensation picture\nis applicable as an approximation for Haldane gap systems at least 
for $H \\parallel c$.\nHowever, possible anisotropic effects including the Dzyaloshinsky-Moriya interaction\ncan modify the description of the ordered state significantly.\n\n\\begin{acknowledgments}\nN.T. gratefully acknowledges M. Hagiwara, A. Oosawa, M. Hase, and H. Kageyama for fruitful discussions.\nHe also thanks T. Waki and K. Yoshimura for informing their results, and \nK. Hashi and H. Shinagawa for the help of field-oriented sample preparation.\n\n\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIdentifying and analyzing Cyber Threat Information (CTI) is an important part of validating security alerts and incidents~\\cite{islam2019multi, koyama2015security, menges2019unifying, mittal2019cyber}. Any piece of information that helps organizations identify, assess, and monitor cyber threats is known as CTI~\\cite{johnson2016guide}. To help a Security Operation Centre (SOC) in using CTI, existing approaches, such as a unifying threat intelligence platform~\\cite{islam2019multi, koyama2015security, menges2019unifying, MISPJournal}, aim to automatically gather and unify CTI relevant to security alerts and incidents. However, gathering CTI is not enough to perform validation tasks, as security teams need to analyze and understand CTI for defining response actions. Security teams write scripts and define rules to extract necessary information from CTI, and map alerts and incidents to CTI~\\cite{anstee2017great, elmellas2016knowledge, RFID2021, tounsi2018survey}. Whilst techniques such as defining rules and scripts can be automated, they do not help in identifying evolving threats and alerts ~\\cite{serketzis2019actionable, ward2017building, zhou2019ensemble}, because rules can only be defined for behavior of known threats. 
Thus, human understanding and resolution are required to identify, define and update CTI, rules and scripts for emerging threats to adapt changing contexts.\n\nThe vast volume of CTI makes it time-consuming for a human to analyze. Thus, to address the shortcoming of defining rules and scripts to use CTI, we present a novel framework, SmartValidator. SmartValidator identifies CTI and validates security alerts and incidents by leveraging Artificial Intelligence (AI) based automation techniques~\\cite{faiella2019enriching, qamar2017data,serketzis2019actionable}. SmartValidator follows a systematic and structured approach for reducing the human cognitive burden to continuously monitor for changes (e.g., change in attack patterns and CTI) and define the automation strategies whenever changes occur. We focus on two aspects of automation: (i)~\\textit{automatic identification} of CTI for different alerts and (ii)~\\textit{automatic validation} of alerts using identified CTI. \n\nBy \\textit{automatic identification} of CTI, we mean identifying CTI from a wide variety of sources. The increasing presence and amount of CTI over the internet demands effective techniques to automate the identification of the required CTI for validation tasks~\\cite{faiella2019enriching,menges2019unifying,noor2019machine,qamar2017data}. The sources of CTI vary with differences in alerts and incidents~\\cite{EY2017,MISP2021,RFID2021}. Examples of CTI include Indicators of Compromise (IoC) (system artifacts or observables associated with an attack), Tactics Techniques Procedures (TTP) and threat intelligence reports~\\cite{johnson2016guide}. \n\nBy \\textit{automatic validation} of alerts and incidents, we refer to validating (i.e., prioritizing, and assessing the relevance or impact of) different types of alerts and incidents generated and identified by different detectors. In this context, by detector we mean any tools or systems used for detection of malicious activities. 
An organization deploys or develops different types of detectors that generate alerts upon detection of malicious activities. Examples of such detectors include Intrusion Detection Systems (IDS), vulnerability scanners and spam detectors. Validation of different types of security alerts and incidents requires extracting information from relevant CTI~\\cite{anstee2017great, elmellas2016knowledge, RFID2021, tounsi2018survey, WINKLER2017143}. For example, a network administrator or threat hunter writes scripts to search for CTI (e.g., information about the suspicious incident) and defines rules to validate an alert. \nThere are always cases for which automatic validation would not be suitable. For example, in our scenario, automated validation is not applicable for alerts and incidents that do not have associated CTIs; hence, such scenario would require a security team to perform manual analysis.\n\nThe massive volume and variations of CTI opens the door for automatic identification of patterns and gathering insights about CTI using Natural Language Processing (NLP) and Machine Learning (ML) techniques. For instance, Sonicwall has reported 9.9 billion malware attacks in its 2020 cyber threat report~\\cite{sonicwall2020}. The threat research team of Sonicwall has come across more than~1,200 new malware variants each day. Existing studies~\\cite{ibrahim2020challenges, le2019automated, noor2019machine,RFteam2018, serketzis2019actionable, Struve2017, zahedi2018empirical, zhou2019ensemble} have highlighted the power of AI to monitor, gather and analyze security intelligence. Recent advances have also been noticed in the use of NLP and ML techniques to extract patterns from threat data and gain insight about attacks and threats. The focus of these studies are application-specific, for example, detecting anomalies~\\cite{zhou2019ensemble} or automating vulnerability assessment~\\cite{le2019automated}, which need to be updated with changing CTI and organizational needs. 
These studies required knowledge of NLP and ML to build a model for performing the assessment or detection task. Most existing SOCs are managed SOCs (SOC as a service), which are subscription based~\\cite{Nick2020, ibrahim2020challenges}. They do not have dedicated data science or ML experts to design and update the AI based system based on their need. Considering this scenario \\textit{\"Can we design an efficient system to automate and assist the validation of security incidents and alerts with changing threat data and user needs\"?}.\n\nEvolving threat landscapes and changing needs of security teams demand a dynamic AI\/ML-based validation system which can be adapted at runtime.\nFor instance, if a security expert expresses interest to validate the maliciousness of a domain \"URL\", a prediction model is built by a data scientist team that classifies a URL as malicious or non-malicious. In an ML context, this task is known as a prediction or classification task. We propose three different layers to differentiate the tasks of threat data collection, validation and prediction model building. The purpose is to hide implementation complexity of data processing, prediction models and validators from security teams. Each layer is controlled and developed by experts with dedicated capabilities. Changing threat landscapes require SOCs to request new CTI and prediction models. One possible solution to this is to build and save prediction models for all possible attributes sets. However, building all possible models whenever changes occur will incur significant resource consumption (e.g., computation time). 
Most SOCs have limited resources; hence, instead of pre-building all possible combinations of prediction models, SmartValidator is designed to build the model based on a SOC's demands.\n\nWe have implemented a Proof of Concept (PoC) system to evaluate the effectiveness and efficiency of our proposed framework with two feature engineering approaches and eight ML algorithms. We have used an IoC form of CTI~\\cite{johnson2016guide} and collected open source threat intelligence (OSINT) from public websites and a CTI platform, MISP~\\cite{MISP2021, RFID2021}. The input of the developed system is a set of attributes from a security team. For example, a security team may want to investigate the \"\\textit{domain name}\" and \"\\textit{URL}\" to identify the \"\\textit{maliciousness}\" of an incident. The developed system takes these three attributes as input where \"\\textit{domain name}\" and \"\\textit{URL}\" are the observed attribute~($ob_{attrib}$) and \"\\textit{maliciousness}\" is the unknown attribute~($un_{attrib}$). The prediction models are built for classifying\/ predicting~$un_{attrib}$ based on~$ob_{attrib}$. \n\nTo capture changing contexts (i.e., security team requirements), we have considered five $un_{attrib}$: (i)~attack, (ii)~threat type, (iii)~name, (iv)~threat level and (v)~event. Eighteen different sets of~$ob_{attrib}$ (shown in Table~\\ref{tab:tableAttribute}) are provided to validate these five attributes to demonstrate the performance of the PoC with changing requirements. We have designed the PoC to select the suitable feature engineering approaches and ML algorithms at run time. \nSeven $ob_{attrib}$ are selected by the PoC to predict \\textit{attack} and 11 $ob_{attrib}$ sets are used to predict the remaining four attributes. Hence, the PoC provided a total 51 optimal prediction models for predicting five $un_{attrib}$ based on the preferred~18~$ob_{attrib}$. 
The results show that approximately~84\\% of the models have F1-scores above~0.72 and~75\\% of the models have F1-scores above~0.8. These results imply that SmartValidator is able to assist the automatic validation of threat alerts with a high level of confidence. Most of the models that were built with data gathered from the MISP platform can effectively predict~$un_{attrib}$ based on~$ob_{attrib}$ with a higher F1-score than the models that were built with CTI gathered from public websites. This demonstrates that trusted threat intelligence is more effective in validating alerts.\n\nThe results also demonstrate the efficiency of SmartValidator with dynamic changes in the preferred set of attributes. We pre-built all possible models, which required us to run~814 experiments. Given a maximum time limit of 48 hours and a memory limit of~100GB to build each prediction model, 20\\%~of the models failed to complete within the time limit and given memory. Hence, it shows the difficulties a security team would encounter in manually constructing each model. Results further reveal that building prediction models is a time-consuming process and requires expertise that can be automated through orchestrating different tasks. Saving the feature engineering approaches and ML algorithms helps SmartValidator to use them for predicting new attributes based on changing CTI and SOC requirements. Thus, constructing prediction models at run time based on a security team's preferred attributes sets reduces the overhead and resource consumption. 
The key contributions of this work are:\n\n\\vspace{-5pt}\n\n\\begin{itemize}\n\\item A novel AI-based framework, SmartValidator, that consists of three layers to effectively and efficiently identify and classify CTI for validating security alerts with changing CTI and security team requirements.\n\\vspace{-5pt}\n\\item A PoC system that automatically built 51 models to predict five different unknown attributes with~18~observed attribute sets using two sources of OSINT.\n\\vspace{-5pt}\n\\item We demonstrated that SmartValidator can effectively select optimal prediction models to classify CTI, where approximately 75\\% of the optimal models achieve an F1-score above~0.8.\n\\vspace{-5pt}\n\\item We showed the efficiency of SmartValidator by building prediction models based on security team demands, which requires building approximately 99\\%~fewer models and thus consumes fewer resources and less time. \n\\vspace{-5pt}\n\\end{itemize}\n\n\n\nPaper organization: Section~\\ref{Sec:Motiv} presents a motivation scenario that highlights the need for SmartValidator. Section~\\ref{Sec:Preli} provides background knowledge about CTI. Section~\\ref{Sec:Framework} introduces the proposed framework, SmartValidator. Section~\\ref{section:POC} describes the large-scale experiment that is carried out for the evaluation of SmartValidator. Section~\\ref{Sec:Eval} demonstrates the effectiveness and efficiency of the proposed approach. Section~\\ref{Sec:RelatedWork} discusses related work. 
Finally, section~\\ref{Sec:Conclu} concludes the paper with future works.\n\\vspace{-10pt}\n\n\\section{Motivation Scenario} \\label{Sec:Motiv}\n\n\\begin{figure*}\n \\centering\n \\subfloat[]{\\includegraphics[width=0.7\\textwidth]{figs\/Figure1a.pdf}\\label{fig:motiva}}\n \\hfil\n \\subfloat[]{\\includegraphics[width=0.8\\textwidth]{figs\/motivB.pdf}\\label{fig:motivb}}\n \\caption{Motivation scenario illustrated (a) detection of malicious activities and validation of alerts with multiple detectors and sources of CTI respectively, (b) validation of same alerts for different set of preferences required two different validators and different CTI}\n \\label{fig:Motiv}\n \\vspace{-15pt}\n\\end{figure*}\n\nIn this section, we motivate an AI-based solution for alert validation through an example scenario. Figure~\\ref{fig:Motiv} shows a scenario where a SOC of an organization has deployed different types of detectors, validators and CTI to monitor and validate malicious behaviour in its network and business data. Figure~\\ref{fig:motiva} shows that three detectors (intrusion, phishing email, and vulnerability detectors) are deployed to detect suspicious and malicious activities of an organization. The information used by detectors varies with attack types. For example, the information that detectors use to identify an intrusion is different from identifying a phishing email\\footnote{https:\/\/github.com\/counteractive\/incident-response-plan-template\/blob\/master\/playbooks\/playbook-phishing.md} (Figure~\\ref{fig:motiva}). These detectors continuously monitor an organization's network and business data\\footnote{https:\/\/github.com\/rosenbet\/demisto\/tree\/master\/Playbooks} (e.g., emails, network traffic and business reports). \n\nMost detectors produce alerts upon detecting malicious activity that require a security team to act on it. 
These alerts require validation before analysing them for decision making\\footnote{https:\/\/www.incidentresponse.com\/playbooks\/}. In this paper, we consider that a validator performs tasks related to prioritizing and identifying the relevance or impact of alerts. Let us assume that an intrusion detector has detected a list of malicious IP addresses. A SOC has an alert validator to validate the maliciousness of the IPs~\\cite{Siemplify2019}. Figure~\\ref{fig:motiva} shows that validation of different types of alerts requires different forms of CTI. To validate IP maliciousness, blacklisted and whitelisted IP addresses are used. CTI further varies from organization to organization. Each security team has its own set of requirements (different attributes) to validate an alert. In this scenario, the security team relies on three types of CTI for alert validation. Considering that different types of alerts have different attributes and require different sources of CTI, a SOC needs three validators to validate the alerts produced by the three detectors. Figure \\ref{fig:motivb} illustrates the use of CTI to validate alerts.\n\nWe assume an alert of type $A_i \\in A$ ($A$ is the set of alerts) is produced by detector $D_1$ (e.g., IDS). Each alert is represented with different attributes (or features). The function $F_{attrib}$($A_i$) provides the attribute list of $A_i$: \n\n$F_{attrib}(A_i) \\rightarrow <f_1,\\ f_2,\\ f_3,\\ f_4,\\ f_5>$\n\\\\ where $f_1 = \\text{IP}$, $f_2 = domain$, $f_3= \\text{URL}$, $f_4 = attack\\ type$, and $f_5 = threat\\ level$. Considering that two different security roles of a SOC have different preferences and use different attributes to validate $A_i$, two validators $V_1$ and $V_2$ are built (Figure \\ref{fig:motivb}). For validator $V_1$, a security team prefers to validate \\textit{IP maliciousness} based on \\textit{\\text{IP}}, \\textit{domain} and \\textit{\\text{URL}}, thus\n\n$ob^1_{attrib} = <\\text{IP},\\ domain,\\ \\text{URL}>$ and \n\n$un^1_{attrib} = <\\text{IP}\\ maliciousness>$. 
\\\n For validator $V_2$, a security team's preference is to validate \\textit{URL threat level} using attributes \\textit{IP}, \\textit{domain}, \\textit{URL} and \\textit{attack type}, thus,\n\n$ob^2_{attrib}= <\\text{IP},\\ domain,\\ \\text{URL},\\ attack\\ type>$ and \n\n$un^2_{attrib}= <\\text{URL}\\ threat\\ level>$.\n\nFor both cases, to perform validation, validators first extract $ob_{attrib}$ and, if available, $un_{attrib}$ from $A_i$ and then identify CTI with these attributes. In most cases, a security team provides CTI sources to a validator. In the next step, CTI that have $ob_{attrib}$ and $un_{attrib}$ are identified. As shown in Figure \\ref{fig:motivb}, validator $V_1$ extracts three attributes to validate \\textit{IP maliciousness} and validator $V_2$ extracts four attributes to validate \\textit{URL threat level}. $\\text{CTI}_1$ has the attributes that are required by validator ${V_1}$. On the other hand, $\\text{CTI}_2$ has the attributes required by $V_2$ to investigate URL threat level. Therefore, threat data is extracted from $\\text{CTI}_1$ and $\\text{CTI}_2$, respectively, for further investigation.\n\nThough $V_1$ and $V_2$ investigate two different sets of attributes, the key steps (step~3.1 to step~3.4), as shown in Figure~\\ref{fig:motivb}, are the same. The tasks of $V_1$ and $V_2$ can each be formulated as an ML classification problem, where two different prediction models are required to be built. Building a prediction model involves pre-processing of data (e.g., $ob^1_{attrib}$), feature engineering, training and selecting a model, and then predicting an output ($un^1_{attrib}$). Many of the possible $ob_{attrib}$ of Cyber Threat Information (CTI) (e.g., domain, filename, description and comment) are textual features. Traditional categorical feature engineering or transformation approaches are not suitable to encode these textual features, and hence require the application of NLP techniques. 
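As a concrete illustration of why textual observed attributes need NLP-style encoding, the sketch below turns textual CTI attributes into TF-IDF weighted bag-of-words vectors. It is a stdlib-only toy (the tokenizer and the smoothing term are assumptions); a real pipeline would typically use a library vectorizer instead.

```python
import math
from collections import Counter

def tokenize(text):
    # Crude tokenizer for textual CTI attributes such as domains or comments.
    return [t for t in text.lower().replace(".", " ").replace("/", " ").split() if t]

def tfidf_vectors(docs):
    """Encode each document as a {token: tf-idf weight} dictionary."""
    tokenized = [tokenize(d) for d in docs]
    df = Counter()                      # document frequency per token
    for toks in tokenized:
        df.update(set(toks))
    n = len(docs)
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({t: (c / len(toks)) * (math.log((1 + n) / (1 + df[t])) + 1.0)
                        for t, c in tf.items()})
    return vectors

docs = ["textspeier.de phishing fraud",
        "photoscape.ch Setup.exe trojan",
        "alegroup.info ransom malspam"]
vecs = tfidf_vectors(docs)
```

Each resulting dictionary can be fed to any classifier that accepts sparse features; categorical attributes (e.g., attack type) would instead use a one-hot style encoding.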
To perform validation of $un^1_{attrib}$ and $un^2_{attrib}$ using $ob^1_{attrib}$ and $ob^2_{attrib}$, prediction models need to be built, where the inputs of the prediction models are the security team's preferences. Here, we consider the observed attributes and unknown attributes as SOCs' preferences\/requirements. We assume $AS$ is the set of a SOC's requirements, where \n\n $AS = <AS_1,\\ AS_2,\\ \\ldots>$\n\\\\ This means that for validator $V_1,\\ AS_1$ = <$ob^1_{attrib}$, $un^1_{attrib}$> \\& $AS_1 \\in AS$. For $V_2,\\ AS_2$ = <$ob^2_{attrib}$, $un^2_{attrib}$> \\& $AS_2 \\in AS$.\n\nConsidering emerging threat patterns, a SOC may deploy new detectors and update the existing rules of intrusion detectors to detect evolving anomalies. To validate the alerts of a new detector, new validators may be required. Thus, several changes can arise in the scenario of Figure~\\ref{fig:Motiv}. In the following, we present the three scenarios that we consider in this work.\n\n\\vspace{-5pt}\n\n\\begin{itemize}\n \\item Change in Alert: With changes in alert types, variation can be seen in the attributes of alerts. For example, an alert of type $A_2$ may have a different set of attributes from $A_1$, such as timestamps, date, IP, organization, tools and comments. Depending on the type of attack and detector, alert attributes change. Building prediction models with changing alerts might require the incorporation of various types of pre-processing and feature engineering approaches. \n \\vspace{-5pt}\n \\item Change in CTI: CTI are continuously changing with changing requirements from SOCs. In most cases, SOCs buy CTI from third parties, where the attributes provided by different vendors vary, or they build their own CTI platform. The validation of alerts relies on the attributes available in CTI.\n \\vspace{-5pt}\n \\item Changes in Preferred Attributes Set: A change in the preferred attributes ($AS$) requires re-designing and re-building of prediction models. 
A SOC does not always have dedicated data scientists or experts to design and build prediction models. Even though the steps of model building are repetitive (e.g., pre-processing, feature engineering, model building and selection), few changes may be required for adaptation of the variations as existing solutions are not designed to automatically work with changing attributes.\n\\end{itemize}\n\\vspace{-5pt}\nTo address these changes, we propose SmartValidator to support the flexible design of a validator following a systematic and structured approach. The proposed framework can automatically construct prediction models to validate alerts\nwith changing requirements. \n\n\\vspace{-10pt}\n\n\\section{Preliminaries} \\label{Sec:Preli}\nThis section provides background information about CTI and MISP (an Open Source Threat Intelligence Platform).\n\n\\textbf{Indicator of Compromise: } IOCs provide characteristics of cyberattacks and threats. Based on IOCs, a security team decides whether a system is affected by a particular malware or not~\\cite{ anstee2017great, elmellas2016knowledge, tounsi2018survey}. Examples of IOCs include domain name, IP, file names and md5 file hashes. Three common categories of IOCs are network indicators, host-based indicators and email indicators. IP addresses, URLs and domain names are the most popular types of network indicators. The malicious file hash or signature, registry keys, malware name and dynamic link libraries are widely used host-based indicators. An email indicator may consist of a source email address, message objects, attachments, links and source IP addresses. The source of IOCs ranges from crowd-sourcing to government-sponsored sources. Just having threat data is not enough to fully understand the context or patterns of a cyberattack. For example, threat data may contain an IP address that is used only once to attack a network. Conversely, an associated URL in threat data might have been used many times. 
Therefore, threat intelligence must be extracted from the threat data with possible IOCs and their contextual information~\\cite{ anstee2017great, elmellas2016knowledge, ward2017building, tounsi2018survey}. Table~\\ref{tab:CTIsource} shows examples of some of these websites that provide threat feeds and are utilized for gathering OSINT. Table~\\ref{tab:CTIexample} shows examples of CTI publicly available in the malware domain website~\\cite{malwareDomain2021}. \n\n\\begin{table*}[h]\n \\caption{Description of each CTI sources with the Indicator of Compromise (IOCs) they contain}\n \\begin{tabular}{ll}\n \\toprule\n \\textbf{Source} & \\textbf{Description}\\\\\n \\midrule\nC\\&C Tracker \\cite{Ctracker2019} & Contains a list of C\\&C IPs (command and control botnets), date, and a link to a manual which contains \\\\& text description and false positive risk value. \\\\\nFeodo Tracker \\cite{Ftracker2019} & Tracks the Feodo Trojan. Contains IP, port, and date. \\\\\nMalware Domain List \\cite{malwareDomain2021} & Contains a searchable list of malicious domains, IP, date, domain, reverse lookups \\\\ & and lists registrants. Mostly focused on phishing, Trojans, and exploit kits.\\\\\nRansomware Tracker \\cite{Ransomwware2019} & Provides overview of infrastructures used by Ransomware, status of URLs, IP address and \\\\ & domain names associated with Ransomware and various block list of malicious traffic. \\\\\nWHOIS data \\cite{WhoIS2019} & Provides a database of registered users and assignees of internet resources, which is widely used\\\\ & for lookup of domain names.\\\\\nZeus Tracker \\cite{Ztracker2019} & Tracks domains of Zeus Command \\& Control servers.\\\\\nOpenPhish \\cite{OpenPhish2019} & A list of phishing URLs and their targeted brand. 
\\\\\n \\bottomrule\n \\end{tabular}\n \\label{tab:CTIsource}\n\\vspace{-10pt}\n\\end{table*}\n\n\\begin{table*}[!hb]\n\\caption{An example of a list of observed malware domains with corresponding Indicator of Compromise (IOCs) from the website malwaredomainlist.com \\cite{malwareDomain2021}}\n \\centering\n \\begin{tabular}{llllll}\n \\toprule\n \\textbf{Date} & \\textbf{Domain} & \\textbf{IP} & \\textbf{Reserve Lookup} & \\textbf{Description} & \\textbf{ASN}\\\\\n\\midrule \n \n2017\/12\/04 18:50 & textspeier.de & 104.27.163.228 & - & phishing\/ fraud & 13335 \\\\\n2017\/10\/26 13:48 & photoscape.ch\/ & 31.148.219.11 & knigazdorovya.com & trojan & 14576 \\\\\n &Setup.exe \\\\\n2017\/06\/02 08:38 & sarahdaniella.com\/swift & 63.247.140.224 & coriandertest. & trojan & 19271\\\\\n & \/SWIFT\\%20\\$.pdf.ace & & hmdnsgroup.com & \\\\\n \n2017\/05\/01 16:22 & amazon-sicherheit.kunden & 63.247.140.224 & hosted-by.blazingfast.io & phishing & 49349\\\\\n& -ueberpruefung.xyz \\\\\n2017\/03\/20 10:13 & alegroup.info\/ntnrrhst & 185.61.138.74 & mccfortwayne.org & Ransom, Fake & 197695\\\\\n&&&&.PCN, Malspam \\\\\n \\bottomrule\n \\end{tabular}\n \\label{tab:CTIexample}\n \\vspace{-10pt}\n\\end{table*}\n\n\\textbf{Threat Intelligence}: Threat Intelligence, also known as security threat intelligence, is an essential piece of CTI for a cybersecurity team. According to \\textit{Recorded Future}, \"\\textit{threat intelligence is the output of the analysis based on detection, identification, classification, collection and enrichment of relevant data and information}\"~\\cite{RFID2021}.Threat intelligence helps a security team understand what causes an attack and what needs to be done to defend against it by gathering contextual information about an attack. 
For example, security teams use threat intelligence to validate security incidents or alerts and enrich threat data to get more insights about a particular security incident~\\cite{anstee2017great, elmellas2016knowledge, tounsi2018survey, RFID2021, WINKLER2017143}. The gathered data is organized in a human-readable and structured form for further analysis~\\cite{anstee2017great, elmellas2016knowledge, faiella2019enriching, RF2019, WINKLER2017143}. Open Source Intelligence (OSINT) is gathered from various websites (e.g., Zeus Tracker and Ransomware Tracker) that provide information about malware or blacklist domain\/IPs. \n\n\n\n\n\n\n\\textbf{Cyber Threat Intelligence Platform:} Threat intelligence platforms allow security communities to share and collaborate to learn more about the existing malware or threats. Using threat intelligence platforms, companies can improve their countermeasures against cyber-attacks and prepare detection and prevention mechanisms. In recent years, the cybersecurity communities have emphasized building common threat intelligence platforms to share threat information in a unified and structured way, and make CTI actionable~\\cite{faiella2019enriching, gao2018graph, menges2019unifying,mittal2019cyber, tounsi2018survey, ward2017building}. Various specifications and protocols such as STIX, TAXII, Cybox, CWE, CAPEC and CVE are widely used to describe and share threat information through common platforms~\\cite{ Stixbarnum2012standardizing, barnum2012cybox, taxiiconnolly2014trusted, RamsdaleCTISurvey2019}. Trusted Automated Exchange of Indicator Information (TAXII)~\\cite{taxiiconnolly2014trusted} is developed as a protocol for exchanging threat intelligence represented in STIX format. Both STIX and TAXII are open source and have collaborative forums~\\cite{Stixbarnum2012standardizing,taxiiconnolly2014trusted}. 
\n\n\\begin{figure}\n \\includegraphics[scale = 0.35]{figs\/MISP.jpg}\n \\caption{An example of MISP showing the evolution of a multitask Botnet}\n \\label{fig:MISP}\n \\vspace{-15pt}\n\\end{figure}\n\n\\textbf{MISP:} Malware Information Sharing Platform (MISP) is one of the most popular trusted threat intelligence platforms used by different industries to store and share CTI~\\cite{MISP2021, MISPJournal}. MISP is a framework for sharing and storing threat data in a structured way~\\cite{MISPJournal, azevedo2019pure}. MISP enables an organization to store both technical and non-technical information about attacks, threats and malware. The relationships between malware and their IOCs are also available in MISP. Rules for network Intrusion Detection Systems (IDS) can also be generated from MISP, which can be imported into an IDS and hence improve the detection of anomalies and malicious behavior. A security team queries MISP for relevant data, and it shows the details of the attack. For example, Figure~\\ref{fig:MISP} shows the details of the evolution of a multitask Botnet. Table~\\ref{tab:MISPExample} and Table~\\ref{tab:MISPvaluePercentage} of Appendix~\\ref{app:A} show the key attributes gathered from MISP for this study and the percentage of each attribute.\n\n\\vspace{-10pt}\n\n\\section{Proposed Framework} \\label{Sec:Framework}\nFigure~\\ref{fig:framework} provides an overview of our proposed framework, SmartValidator, that automates the identification and classification of CTI for validation of alerts. It comprises three layers: (i)~threat data collection layer, (ii)~threat data prediction model building layer and (iii)~threat data validation layer. 
We consider each layer to have a separation of concerns so that while updating components of one layer, a security team does not need to worry about the other layers.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.85\\textwidth]{figs\/framework.pdf}\n \\caption{An overview of SmartValidator for automated identification and validation (i.e., classification) of cyber threat data}\n \\label{fig:framework}\n \\vspace{-15pt}\n\\end{figure*}\n\n\\textit{Threat Data Collection Layer} (section~\\ref{sub:4.1}): The threat data collection layer consists of a collector that automates the identification of CTI from various sources (Figure~\\ref{fig:datalayer}). It further transforms the gathered CTI into a unified form (if required) and passes it to the threat data prediction model building layer.\n\n\\textit{Threat Data Prediction Model Building Layer} (section~\\ref{sub:4.2}): This layer has a model builder that builds models for validation of alerts based on a SOC's requirements ($ob_{attrib}$, $un_{attrib}$) using the gathered CTI (Figure~\\ref{fig:predictionlayer}). Pre-processing of data, feature engineering, training and evaluation of prediction models are performed in this layer to generate candidate validation models. The candidate models and corresponding feature sets are saved with a SOC's preferences for use by the threat data validation layer at runtime to validate alerts. \n\n\\textit{Threat Data Validation Layer} (section~\\ref{sub:4.3}): The threat data validation layer takes alerts and a SOC's requirements as input to choose suitable prediction models for alert validation. SOC requirements are interpreted by an interpreter. Based on a SOC's requirements, attributes are extracted from alerts. 
The extracted attributes are used to choose a suitable prediction model from the candidate list of saved models for predicting the unknown attributes ($un_{attrib}$) and perform validation of alerts based on observed attributes ($ob_{attrib}$).\n\nThe following sections elaborate on the core components and functionalities of each layer of SmartValidator. \n\\vspace{-5pt}\n\n\\subsection{Threat Data Collection Layer}\\label{sub:4.1}\nValidation of a security alert requires the identification of relevant CTI for building a threat data prediction model. The purpose of the prediction model is to learn the pattern of CTI for automatic validation of alerts. Here, we have formulated the validation task as a classification task. For instance, to validate IP maliciousness, a system needs to classify IPs as malicious or non-malicious, which can be achieved through a prediction model. Similar to existing studies~\\cite{faiella2019enriching, tounsi2018survey, MISPJournal}, the data collection layer gathers CTI from multiple sources and combines them into a unified format. The data collection layer employs a collector to gather CTI data from an organization's preferred sources. Considering that several types of CTI sources are used by an organization, deployment of various plugins, APIs and crawlers is required to collect CTI from these sources. Figure~\\ref{fig:datalayer} shows the processing of three types of CTI data: (i)~internet data, (ii)~business data and (iii)~external data. These are the most commonly used types of CTI. Other forms of CTI can also be integrated by following standard data processing strategies. 
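To make the classification formulation concrete, the following is a minimal sketch with scikit-learn, the library used in our PoC. The observed attributes and labels here are toy values, not real CTI.

```python
# Minimal sketch: validating IP maliciousness framed as binary classification.
# Features and labels are toy values, not real CTI records.
from sklearn.tree import DecisionTreeClassifier

# Each row holds hypothetical encoded observed attributes: [ASN, port, country code]
X_train = [[13335, 80, 1], [14576, 443, 2], [19271, 25, 3], [49349, 80, 1]]
y_train = ["malicious", "non-malicious", "malicious", "non-malicious"]

# Fit a prediction model on labelled CTI
clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# An incoming alert's observed attributes are classified to validate the alert
print(clf.predict([[13335, 80, 1]]))
```

In the full framework, the features come from the gathered CTI and the label is the unknown attribute a SOC wants validated.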
\n\n\\begin{figure}[htb]\n \\includegraphics[width=0.5\\textwidth]{figs\/dataCollectionlayer.pdf}\n \\caption{Threat data collection layer - IOCs are extracted and processed from three types of data and then combined into a unified form}\n \\label{fig:datalayer}\n \\vspace{-15pt}\n\\end{figure}\n\nFor processing internet data, the collector has web crawlers, scrapers and parsers to gather and process CTI data from web pages (Figure~\\ref{fig:datalayer}). A web crawler searches and identifies reliable sites that contain threat information and IOCs of various malware. Considering that the threat intelligence team provides a list of relevant websites or keywords of interest for gathering CTI~\\cite{listThreatIntel2021}, a crawler crawls through the internet to search for the relevant information. We propose a scraper as part of the collector for filtering out unnecessary information from crawled data. Crawling and scraping can be done on a variety of sources, such as RSS feeds, blogs and social media, but require different types of processing. A parser utilizes various information processing techniques to extract information from the output of a scraper and organise the data into a structured and language-agnostic format (e.g., a markup language or an ontology). For forums or blog posts written in \\textbf{natural language}, a parser is required to extract threat information from sentences. NLP tools and techniques (e.g., Spacy and NLTK) are used to build a parser based on the structure of a document and the information required by the security team. \n\nTo get threat feeds from databases and CTI platforms, we design API calls and queries that are part of the collector. Threat feeds can be gathered from both an organization's internal business data and external data (Figure~\\ref{fig:datalayer}). In this paper, we only gather external threat feeds. The collector can also query external data sources to find out missing information about available threat data. 
For example, after receiving an IP address, a query can be made to a WHOIS website to search for the domain name. In this way, a collector gathers different sets of data, for example, blacklisted and whitelisted IP addresses, lists of phishing websites and so on, from different types of sources. The collected data is further combined into a unified form (e.g., dataset~P, Figure~\\ref{fig:datalayer}). To combine the data into a unified form, we first normalised the data, removed redundant information and then combined them. Examples of normalisation techniques include 1NF, 2NF and so forth. Depending on validation tasks (e.g., validation of IP maliciousness or validation of domain threat level), CTI is extracted and sent to the threat data prediction model building layer to build a validation model.\n\nWe consider organizations (e.g., government or financial) may have dedicated threat intelligence teams or may use third party services to gather CTI. Any updates related to the collection of CTI, such as adding or modifying CTI sources, deploying APIs or parsers to gather and extract information from these CTI, and inclusion or deletion of new data collection and normalisation techniques, are performed in the threat data collection layer by the dedicated threat intelligence team.\n\\vspace{-5pt}\n\n\\subsection{Threat Data Prediction Model Building Layer}\\label{sub:4.2}\nThe threat data prediction model building layer is designed to build ML-based classification models using CTI and a SOC's preferences. For example, if a security team wants to validate the \\textit{maliciousness} of an IP considering the \\textit{IP}, \\textit{domain} and \\textit{URL}, an ML classification model is built that takes IP, domain and URL and predicts IP maliciousness. We consider these attributes (IP, domain, URL and IP maliciousness) as the SOC's preference where $ob_{attrib} = \\{\\textit{IP}, \\textit{domain}, \\textit{URL}\\}$ and $un_{attrib} = \\{\\textit{IP}\\ maliciousness\\}$. 
A SOC's preferences derive from organizational security requirements and alerts. \n\nFigure~\\ref{fig:predictionlayer} shows the core components and workflow of the threat data prediction model building layer. It comprises a pre-processor that pre-processes CTI (step~1, Figure~\\ref{fig:predictionlayer}) for extracting features from it. The pre-processing techniques depend on the types of attributes (e.g., categorical or text-based). For example, IP maliciousness can be either malicious or non-malicious, which is categorical. On the other hand, a domain name is text-based, as shown in Table~\\ref{tab:CTIexample}. Pre-processed data is passed to a feature engineering module where data is transformed into features (numeric form) (step 2, Figure~\\ref{fig:predictionlayer}), which are used as input for the ML algorithms. The reason behind this is that ML algorithms can only work with numerical data~\\cite{Helge2020, RFteam2018, sabir2020machine, Struve2017}. Depending on the type, size and diversity of CTI, the data science team chooses a feature engineering approach. The first two steps of Figure~\\ref{fig:predictionlayer}~leverage simple NLP techniques for pre-processing and feature engineering. Categorical values can be directly transformed into features using label encoding or one-hot encoding, and text values are transformed into features using count vectorization and TFIDF (Term Frequency-Inverse Document Frequency) techniques. The associated text cleaning and pre-processing steps for each text attribute are discussed in Appendix B1. Common pre-processing techniques such as tokenization, stop-word removal and lemmatization are performed before transforming text data into features. Hence, based on the alert attribute types, data science teams perform data pre-processing and select a feature engineering approach. 
\n\n\\begin{figure} [tbh]\n \\includegraphics[scale = 0.5]{figs\/predictionlayer.pdf}\n \\caption{Threat data prediction layer - prediction models are built to predict unknown attributes based on observed attributes}\n \\label{fig:predictionlayer}\n \\vspace{-15pt}\n\\end{figure}\n\nThe transformed data is split into training and testing datasets to build, select and evaluate a prediction model (step~3, Figure~\\ref{fig:predictionlayer}). ML algorithms are applied to train models on CTI, learning patterns in the data to derive a working model. Depending on the nature of the training data, different models are built by a data science team. Traditionally, a set of ML algorithms is applied to find an algorithm suitable for a specific dataset and user requirements~\\cite{AHMED201619, Helge2020, sabir2020machine}. To investigate the effectiveness of prediction models for the validation task, we considered a set of ML algorithms (e.g., Decision Tree, Na\u00efve Bayes, K-Nearest Neighbours and Random Forest). The details of the PoC are discussed in section~\\ref{section:POC}. As most ML algorithms have a list of hyperparameters, validation techniques (e.g., k-fold cross-validation, random cross-validation and Bayesian optimisation) are incorporated to select the hyperparameter\\footnote{Hyperparameters are user-defined values that determine details about the ML classifier before training. For example, a decision tree requires tuning its depth, and k-nearest neighbours has a tunable number of neighbours.} settings and feature engineering approach for a specific ML algorithm (step~4, Figure~\\ref{fig:predictionlayer}). \n\nThe built model's performance is evaluated using the testing dataset (step~5, Figure~\\ref{fig:predictionlayer}). 
Different types of performance metrics (e.g., precision, recall, accuracy and F1-score) are used to choose a model (also known as the optimal model) that provides the best performance (details in section~\\ref{sub:evaluationMetrics}). In this work, we mainly consider F1-score for evaluation, which is a score between~0 and~1. A higher F1-score indicates better performance of a model. Figure~\\ref{fig:predictionlayer} shows how the pre-processing techniques, feature engineering approaches and ML algorithms that are used to build threat data prediction models are stored for the future model building process. Once a prediction model is found to have performed best, that model can be rebuilt using both training and testing datasets. The best models are saved with the evaluation score (step~6, Figure~\\ref{fig:predictionlayer}) for use in the threat data validation layer.\n\n\\subsection{Threat Data Validation Layer}\\label{sub:4.3}\nWe design the threat data validation layer to (i)~collect a SOC's needs, (ii)~automatically orchestrate and request CTI and prediction models and (iii)~validate alerts.\nFigure~\\ref{fig:framework} shows the validation layer comprises an \\textit{interpreter}, \\textit{orchestrator}, \\textit{data processor} and \\textit{predictor}. In this layer, security teams of SOCs provide their preferences, $AS$, as a set of requirements, where $AS$ = {<$ob_{attrib}, un_{attrib}>$}. Security teams may also provide a minimum threshold for F1-score, which we refer to as the confidence score of prediction models. The reason for gathering a confidence score is that the performance of prediction models will differ with variation in CTI, alerts and attribute sets. A security team might need a higher F1-score while dealing with safety-critical data and sensitive information. For example, to identify IP maliciousness, a security team may request a validation model with a minimum confidence score of 0.9. 
While categorizing comments or text messages as spam, a model performance of 0.8 or above may be acceptable. The selection of F1-score values varies from application to application. Thus, instead of setting a fixed value, we consider providing security teams with the flexibility to set the confidence score based on their application needs. \n\nWe design Algorithm~\\ref{alg:Orchestrator} describing the key steps of the threat data validation layer. These steps are coordinated and orchestrated by the orchestrator. SOC preferences ($AS$ and confidence score) are the input of Algorithm~\\ref{alg:Orchestrator}. The \\textit{interpreter} receives the SOC requirements and extracts observed attributes $ob_{attrib}$, unknown attributes $un_{attrib}$ and confidence scores from them (line~3). The orchestrator checks the availability of a model with an F1-score above the confidence score for predicting $un_{attrib}$ based on $ob_{attrib}$ (line~4). If a model is available, the attributes are passed to the data processor, which pre-processes and transforms the data based on the saved pre-processing and feature engineering approaches (lines~5-7). 
Finally, the pre-processed and transformed data is sent to the predictor, which uses the available model to predict $un_{attrib}$ (line~8).\n\n\\begin{algorithm}[tbh]\n\\footnotesize\n \\caption{Model building with orchestrator in threat data validation layer}\\label{alg:Orchestrator}\n \\begin{algorithmic}[1]\n \\State \\textbf{Input: } \\texttt{AS} <$ob_{attrib}, un_{attrib}$>, \\texttt{confidence score}\n \\State \\textbf{Output: } \\texttt{predictedData}\n \\State Interpret (\\texttt{AS, confidence score})\n \\State \\texttt{IsModels}= CheckModel(\\texttt{AS, confidence score}) \n \\If{\\texttt{IsModels} true}\n \\State \\texttt{model, featureEng} = getModel(\\texttt{AS,confidence score})\n \\State \\texttt{processedData} = transformData(\\texttt{featureEng}, \\texttt{AS})\n \\State \\texttt{predictedData} = predictOutput(\\texttt{model,processedData})\n \\Else\n \\State IsData = CheckData(\\texttt{AS})\n \\If{IsData true}\n \\State \\texttt{CTIData} = RetrieveData(\\texttt{CTI, AS})\n \\State \\texttt{model} = buildModel(\\texttt{CTIData, AS, confidence score})\n \\If{\\texttt{model} is built}\n \\State go to step 6\n \\Else\n \\State go to step 19\n \\EndIf\n \\Else\n \\State RequestData(\\texttt{AS})\n \\State \\textbf{return NotApplicable} \n \\EndIf\n \\EndIf\n \\State \\textbf{return} \\texttt{predictedData}\n \\end{algorithmic}\n\\end{algorithm}\n\nIf a model is unavailable for the requested attribute set, e.g., for predicting~$un_{attrib}$ based on~$ob_{attrib}$, then the orchestrator requests the data collector module to gather the relevant CTI data for the preferred attribute sets (line~9). After identifying the relevant information, the data collector module sends the collected CTI data to the \\textit{model builder} to build a prediction model (lines~12-13). Using the CTI \\textit{data} from the data collection layer, the \\textit{model builder} follows the model building process as discussed in section~\\ref{sub:4.2} (lines~14-15). 
After building a model, it sends the model availability notification to the orchestrator. Then the same process of data processing and prediction is performed. If the requested data is unavailable, a notification is sent to a threat intelligence team and a SOC team to gather the required CTI and manually analyze the alerts, respectively (lines 18-19). To ensure that alerts are not ignored when models or CTIs are unavailable, SOC teams must be kept informed that manual analysis is required.\n\nOur proposed framework, SmartValidator, streamlines the gathering, identification and classification of CTI. SmartValidator allows a security team to respond swiftly to incoming alerts. As most information is generated in a structured way, it can be easily pre-processed and shared through a CTI platform such as MISP or Collective Intelligence Framework (CIF) to benefit diverse security teams. SmartValidator can be integrated with the existing security orchestration and automation process to validate alerts and thus work together with the existing security tools, such as Security Information and Event Management (SIEM) and Endpoint Detection and Response (EDR). Microsoft Azure Sentinel\\footnote{https:\/\/azure.microsoft.com\/en-au\/services\/azure-sentinel\/} and Splunk\\footnote{https:\/\/www.splunk.com\/} are examples of SIEM tools, while Limacharlie\\footnote{https:\/\/www.limacharlie.io\/} and Google Chronicle\\footnote{https:\/\/cloud.google.com\/blog\/products\/identity-security\/introducing-chronicle-detect-from-google-cloud} are examples of EDR tools.\n\n\\vspace{-10pt}\n\\section{Experiment Design and Setup} \\label{section:POC}\n\nWe designed and implemented a Proof of Concept (PoC) system to evaluate SmartValidator. We aimed to demonstrate the effectiveness of prediction models in validating security threat data and the efficiency of building prediction models based on a SOC's requirements. 
The goal of the PoC is to identify the relevant CTI and build prediction models based on a list of SOC's requirements. Hence, we evaluated the PoC system based on the following two Research Questions (RQ). \n\\vspace{-5pt}\n\\begin{itemize}\n \\item RQ1. How effective is machine learning in classifying CTI for SmartValidator?\n \n \\vspace{-5pt}\n \\item RQ2. How efficient is SmartValidator in selecting and building prediction models at runtime over pre-building all possible prediction models?\n\\end{itemize}\n\\vspace{-5pt}\nTo build the PoC system for SmartValidator, we implemented the core components of Figure \\ref{fig:framework}: the data collection layer presented in Figure \\ref{fig:datalayer}, the prediction layer presented in Figure \\ref{fig:predictionlayer} and an orchestrator for the validation layer described in section \\ref{sub:4.3}. \n\n\\subsection{SOC's Requirement}\n\nWe defined a set of attributes (validated attributes and observed attributes) as the SOC's requirements to carry out the experiment. These attributes were mainly given by a team different from the one that implemented the prediction models. Thus, here we considered the team who provided the requirement as part of the SOC and the other team as part of the data science team. This setting gave us the option to evaluate a variety of different SOC requirements, to appropriately assess the PoC system. In a practical scenario, these requirements would usually be defined by a security team. Among the various attributes, commonly validated attributes ($un_{attrib}$) are \\textbf{attack}, \\textbf{threat type} and \\textbf{threat level}. We considered them as the desired unknown attributes. Besides these, we also considered two other attributes, \\textbf{name} and \\textbf{event}, as the desired unknown attributes.\nExamples of these attributes are shown in Table~\\ref{tab:MISPExample} in appendix~\\ref{app:A}. 
As shown in Table \\ref{tab:MISPExample}, an example of an event title (or event) is \"\\textit{OSINT Leviathan: Espionage actor spear phishes maritime and defence targets}\".\n\nAs observed attributes ($ob_{attrib}$) vary from security team to security team, we gathered~18 different sets of observed attributes from the security team to validate the aforementioned $un_{attrib}$. As shown in Table~\\ref{tab:tableAttribute}, \\textit{IP information} (i.e., ASN, IP owner, country, and domain), \\textit{organization}, \\textit{comments about attributes}, \\textit{comments about attacks}, \\textit{event data}, \\textit{timestamp} and \\textit{category} are the attributes that we considered in the set of observed attributes. We selected these attributes from alert data that are also commonly used to validate alerts generated by different IDS. We also used the metadata of attributes such as URL, domain and filename. \n\\begin{table}[!hb]\n\\caption{List of the observed attribute sets}\n \\centering\n \\begin{tabular}{ll}\n \\toprule\n \\textbf{\\#} & \\textbf{List of attributes}\\\\\n\\midrule \n $ob^1_{attrib}$ & Date \\\\\n $ob^2_{attrib}$ & Domain\\\\\n $ob^3_{attrib}$ & IP, ASN, Owner, Country \\\\\n $ob^4_{attrib}$ & Date, Domain \\\\\n $ob^5_{attrib}$ & IP, ASN, Owner, Country, Domain \\\\\n $ob^6_{attrib}$ & IP, ASN, Owner, Country, Date \\\\\n $ob^7_{attrib}$ & IP, ASN, Owner, Country, Domain, Date \\\\\n $ob^8_{attrib}$ & IP destination, Port, IP source, ASN, Owner, \\\\ & Country, Domain, File hash, Filename\\\\\n $ob^9_{attrib}$ & IP destination, Port, IP source, ASN, Owner, \\\\ & Country, Domain, Description, Comment, File \\\\ & hash, Filename\\\\\n $ob^{10}_{attrib}$ & IP destination, Port, IP source, ASN, Owner, \\\\ & Country, Domain, Description, Comment\\\\\n $ob^{11}_{attrib}$ & IP destination, Port, IP source, ASN, Owner,\\\\ & Country, Domain, Date, Timestamp, File hash, \\\\ & Filename\\\\\n $ob^{12}_{attrib}$ & IP destination, Port, IP source, ASN, 
Owner, \\\\ & Country, Domain, Date, Timestamp, Description,\\\\ & Comment, File hash, Filename\\\\\n $ob^{13}_{attrib}$ & IP destination, Port, IP source, ASN, Owner, \\\\ & Country, Domain, Date, Timestamp, Description,\\\\ & Comment\\\\\n $ob^{14}_{attrib}$ & IP destination, Port, IP source, ASN, Owner, \\\\ &Country, Domain, Date, Timestamp\\\\\n $ob^{15}_{attrib}$ & Description, Comment, File hash, Filename\\\\\n $ob^{16}_{attrib}$ & Date, Timestamp, File hash, Filename\\\\\n $ob^{17}_{attrib}$ & Date, Timestamp, Description, Comment, \\\\ &File hash, Filename\\\\\n $ob^{18}_{attrib}$ & Date, Timestamp, Description, Comment\\\\\n \\bottomrule\n \\end{tabular}\n \\label{tab:tableAttribute}\n \\vspace{-10pt}\n\\end{table}\n\n\\subsection{Collecting CTI}\\label{subsec:5.2}\n\nWe gathered CTI from two types of sources \u2013 publicly available internet data and data from an OSINT platform, MISP. CTI gathered from these two sources is considered as dataset~1~($DS_1$) and dataset~2~($DS_2$), respectively. \n\n\\textbf{\\textit{Gathering CTI from websites}}: We obtained a list of publicly available websites from a GitHub CTI repository~\\cite{listThreatIntel2021}, which are shown in Table~\\ref{tab:CTIsource}. We selected these websites because they provided malware RSS feeds and their access was not restricted (e.g., by API limits). We built web crawlers and scrapers to gather and extract the key pieces of information from the selected websites. A parser was built to parse the information and store it in a structured format (i.e., a CSV file). The gathered data had consistent tagging and was labelled with the malware used in the attack, for example, Zeus, Citadel or Ice~IX. $DS_1$~contained 4060 events and represented the data available through public CTI feeds from websites. \n\n\\textbf{\\textit{Gathering CTI from MISP}}: We selected the MISP platform as a threat intelligence platform due to its popularity amongst businesses and the abundance of labelled data. 
We first gathered the MISP default feeds that were written in JSON format and then built a parser to extract the key attributes from them. $DS_2$~contained~213,736 events and represented the data available to an organisation from a dedicated threat intelligence platform.\n\n\\textbf{\\textit{Gathering additional attributes:}} External information was gathered utilizing the parsed attributes of both~$DS_1$ and~$DS_2$. For example, the common features amongst each source were IP, domain and date. Additional attributes were gathered from WhoIS data (e.g., a database query of the RFC~3912 protocol) for each IP. We used the Python cymruwhois\\footnote{https:\/\/pypi.org\/project\/cymruwhois\/} module to search each IP in the WhoIS database, which returned the IP's ASN (i.e., a unique global identifier), owner, and country location. In addition, the AlienVault forum was chosen as an external information source. We scraped the AlienVault forum updates using the Python module BeautifulSoup\\footnote{https:\/\/pypi.org\/project\/beautifulsoup4\/}. The AlienVault data, which consisted of natural language text descriptions, were searched to extract the associated event and threat.\n\n\\subsection{Building Data Processor}\n\nWe built a data processor to clean, pre-process and transform the collected data for building ML-based validators. \n\n\\textbf{\\textit{Cleaning and pre-processing:}} We used the Python sci-kit learn libraries to pre-process the attribute values~\\cite{chen2020deep, scikitlearn2021}. We first cleaned the data by removing null values and removing events with missing information. We observed missing information to be relatively infrequent, resulting in minimal information loss and a more robust model. For text values, we found two types of natural language features from the MISP data: (i)~text attributes, which are short paragraphs that describe an event in natural language and are often taken from blogs, and (ii)~comments. 
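The cleaning step described above (removing null values and events with missing information) can be sketched with pandas; the records below are toy values, not real $DS_1$/$DS_2$ entries.

```python
import pandas as pd

# Toy CTI records; real data would come from the parsed DS1/DS2 feeds
raw = pd.DataFrame({
    "ip": ["104.27.163.228", None, "185.61.138.74"],
    "domain": ["textspeier.de", "photoscape.ch", None],
    "threat_type": ["phishing", "trojan", "ransom"],
})

# Drop events with any missing information, as in our cleaning step
clean = raw.dropna()
print(len(raw), "->", len(clean))  # 3 -> 1
```

Because missing information was relatively infrequent in the real datasets, dropping incomplete events cost little data while keeping the feature matrix dense.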
\n\nWe first analyzed the text and comment attributes to find a suitable processing and encoding technique. Thus, simple processing techniques were undertaken to decrease the dimensionality and remove any uninformative words (e.g., articles and prepositions). Each piece of text was stripped of all non-alphabetical characters, as numbers and special characters can rapidly increase dimensionality and rarely contain valuable information. The text was then stripped of any non-noun or non-proper noun tokens, as nouns are the most informative part of the text (e.g., attack names, attack types and organisations). Finally, each word was lemmatized (i.e., changed to the base form of the word), so that similar words could be recognized. One of the key steps we followed was to tokenize the string values of attributes (i.e., domain, filename, hostname, URL) where patterns exist. We removed punctuation and special characters within a string to clean the data. We further split the text into small tokens based on a regular expression that tokenized a string using a given character. This separated each word within a value (e.g., value of domain or URL) and allowed a string to be tokenized. For example, we split the value of the URL on \"\/\/\" and \".\". The tokenized data were then encoded as integers to create a numeric form of a feature vector.\n\n\\textbf{\\textit{Feature engineering:}} We encoded the categorical variables using one-hot encoding and label encoding. One-hot encoding considers each categorical value separately and represents each categorical variable as a column. Label encoding represents each categorical value as a unique integer. Table \\ref{tab:exampleEncoding} shows an example of one-hot encoding in the first table and label encoding in the second, for three types of attacks: phishing, DDoS and SQL injection. We used the LabelEncoder() class of sci-kit learn to convert the string data into numerical values. 
An inbuilt class from the sci-kit-learn library, StandardScaler(), was used to standardize the data. It transformed data into a normalized distribution to remove outliers from the data, allowing for building more accurate prediction models. \nThe text variables (i.e., unstructured and structured natural language) did not conform to traditional one-hot or label encoding, as one-hot or label encoding interprets the text as a whole. Hence, we used two techniques, count vectorization and TFIDF, as our feature engineering approaches to encode text into numerical values. Count vectorization techniques stored each tokenized word as a column with its value being the number of times it appeared in each respective document.\nTable~\\ref{tab:exampleVectorizer} shows examples of count vectorization of two sentences. The \\textbf{TFIDF} vectorizer\\footnote{https:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.feature\\_\\\\extraction.text.TfidfVectorizer.html} worked similarly to count vectorization, except rather than storing counts it stored the TFIDF value of each word. 
TFIDF provided a metric for how 'important' a word is within a part of the text by comparing the term's frequency in a single document to the inverse of its frequency amongst all documents.\n\n\\begin{table}[h]\n\\caption{An example of one-hot encoding and label encoding for three types of attack}\n \\centering\n \n \\begin{tabular}{|c c c|c |c p{.9cm} c|}\n \\hline\n Phishing & DDoS & SQL & & & Attack & Encoded \\\\\n & & injection & & & & attack\\\\\n \\hline\n0& 1& 0&\t& 1&\tDDoS\t& 2\\\\\n1&\t0&\t0&\t&\t2&\tPhishing&\t1\\\\\n0&\t1&\t0&\t&\t3&\tDDoS&\t2\\\\\n1&\t0&\t0&\t&\t4&\tPhishing&\t1 \\\\\n1&\t0&\t0&\t&\t5&\tPhishing&\t1 \\\\\n0&\t0&\t1&\t&\t6&\tSQL&\t3\\\\\n& & & & & injection & \\\\\n\\hline\n \n \\end{tabular}\n \\label{tab:exampleEncoding}\n \\vspace{-10pt}\n\\end{table}\n\nFor the one-hot encoding setup, we used a simple count vectorizer, and for the label encoding setup we used a TFIDF vectorizer. It should be noted that whilst the dataset included text variables, the vast majority did not follow a natural language convention (e.g., domain or filename). Hence, more advanced NLP techniques, such as word embedding, could not be accurately applied. The feature engineering schemes were saved for runtime use with the model building and prediction phase. 
Appendix~\\ref{app:B} summarizes the pre-processing techniques that we followed for different attributes.\n\n\\begin{table*}[!hb]\n\\caption{Count vectors for two sentences, S1: \"Fireball is malware\" and S2: \"Malware is any program that is harmful\"}\n \\centering\n \\begin{tabular}{cccccccc}\n \\hline\n &fireball &is &malware &any &program &that &harmful\\\\\n \\hline\n S1 & 1 & 1 & 1 & 0 & 0 & 0 & 0\\\\\n \\hline\n S2 & 0 & 2 & 1 & 1 & 1 & 1 & 1 \\\\\n \\hline\n \\end{tabular}\n \\label{tab:exampleVectorizer}\n \\vspace{-10pt}\n\\end{table*}\n\n\\subsection{Building the Validation Model} \\label{sub:buildingValidateModel}\nWe built prediction models following the traditional ML pipeline (i.e., selecting ML algorithms, building prediction models, performing hyperparameter tuning and evaluating the built model). We designed a model builder to build prediction models for various attribute sets $AS$ ($ob_{attrib}$ and $un_{attrib}$). We selected eight commonly used classification algorithms~\\cite{caruana2006}: Decision Tree (DT), Random Forest (RF), K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), Ridge Classifier (RID), Na\u00efve Bayes (BAY) and eXtreme Gradient Boost (XGB)~\\cite{chen2016} to cover a wide range of classifier types. Appendix~\\ref{app:B} summarizes the ML algorithms that we considered to build the PoC system. Bayesian Optimization was used to automatically tune each model~\\cite{snoek2012}. We used a straightforward train-test split for evaluation, with~30\\% of the dataset held out for testing. In a real-world setting, the training data would be selected by the threat intelligence team to ensure data quality. The built model was optimised by performing hyperparameter tuning. The Python module sci-kit learn was used to build the prediction models, as it is one of the most popular and widely used libraries for building prediction models~\\cite{chen2020deep, scikitlearn2021, sabir2020machine}. 
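The model selection loop described above can be sketched as follows. The data is synthetic, only three of the eight classifiers are shown, and hyperparameter tuning is omitted for brevity.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for encoded CTI features and validation labels
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
# 70/30 train-test split, as in our evaluation setup
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

candidates = {
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
}
# Train each candidate and score it on the held-out test set
scores = {name: f1_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in candidates.items()}
best = max(scores, key=scores.get)  # model with the highest F1-score is saved
print(best, round(scores[best], 3))
```

In the PoC, the winning model is saved together with its feature engineering scheme and F1-score so the validation layer can reuse it at runtime.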
\n\n\\subsection{Developing the Orchestrator}\n\nWe designed and implemented a Python script to coordinate the data collector and model builder. The script worked as an orchestrator that automated the process from gathering the SOC's requirements to predicting the outputs, that is, validating alerts. For example, we took the SOC's requirements as an attribute set, $$ and the confidence score (a value between 0 and 1). The output of the script was the value of $un_{attrib}$ and the F1-score. In this process, the script first checked whether a model was available to predict $un_{attrib}$ with $ob_{attrib}$. If a model was available, it then called the data processor to process $$ and predict the value of $un_{attrib}$. \n\nIf a model was not found, it checked the availability of CTI with attributes $$. If CTI was available, the model builder was invoked and models were built following the process of model building discussed in the previous section. Here, the orchestrator used the saved feature engineering approaches and algorithms to train the models and then selected the model with the best F1-score as the optimal model. The script then checked the value of the F1-score. If the F1-score was lower than the confidence score, it requested the data science team to build the model and reported to the security team that no model was available. Otherwise, it proceeded with the next steps of data processing and predicting $un_{attrib}$, and the value of $un_{attrib}$ was returned to the security team.\n\nIf the required CTI data was not available, the orchestrator notified the security team about the CTI unavailability. For example, neither of the two CTI datasets we used contained vulnerability descriptions or values. If we provided a vulnerability description as input and requested a prediction of the severity, the orchestrator would report that no data was available.
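The orchestrator's decision flow can be sketched as below. The helper objects (`saved_models`, `cti_store`, `model_builder`) are hypothetical stand-ins, named only for illustration, for the PoC's actual data collector and model builder components.

```python
# Sketch of the orchestrator's decision flow; helper objects are hypothetical.
def orchestrate(ob_attrib, un_attrib, confidence_score,
                saved_models, cti_store, model_builder):
    key = (frozenset(ob_attrib), un_attrib)
    model = saved_models.get(key)
    if model is None:
        # No saved model: check whether CTI with these attributes exists.
        if not cti_store.has_attributes(list(ob_attrib) + [un_attrib]):
            return {"status": "no CTI data available"}
        # Build candidate models and keep the one with the best F1-score.
        model = model_builder.build_best(ob_attrib, un_attrib)
        if model.f1_score < confidence_score:
            # Below the SOC's threshold: hand over to the data science team.
            return {"status": "no model meets the confidence score"}
        saved_models[key] = model
    # Process the observed attributes and return the predicted value.
    return {"status": "ok", un_attrib: model.predict(ob_attrib)}
```

The same three outcomes described above fall out of the branches: a prediction when a sufficiently good model exists or can be built, a hand-off to the data science team when the best F1-score is below the confidence score, and a "no data available" response when the required CTI is missing.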
\n\n\\subsection{Evaluation Metrics} \\label{sub:evaluationMetrics}\n\nEvaluation metrics are needed to measure the success of a prediction model in validating security alerts and building prediction models at runtime. This determines the effectiveness and efficiency of SmartValidator. Accuracy, precision, recall and F1-score are the four commonly accepted evaluation metrics for evaluating a prediction model's performance~\\cite{chen2020deep, sabir2020machine}. The correct and incorrect predictions are calculated using the numbers of (i) True Positives (TP): correct predictions of an attribute's label, (ii) False Positives (FP): incorrect predictions of an attribute's label, (iii) True Negatives (TN): correct predictions that a threat does not have a particular label, and (iv) False Negatives (FN): incorrect predictions that a threat does not have a particular label. For example, if a model classifies a non-malicious IP address as malicious, it is counted as a false positive. If it labels a malicious IP address as malicious, it is counted as a true positive. A true negative is when a non-malicious IP is not classified as malicious. A false negative is when a malicious IP is not classified as such.
Equations \\ref{equ:accuracy} to \\ref{equ:f1score} show how accuracy, recall, precision and F1-score are calculated using TP, FP, TN and FN.\n\n\\vspace{-10pt}\n\n\\begin{equation}\\label{equ:accuracy}\n \\text{Accuracy} = \\frac{TP + TN}{TP + TN + FP + FN}\n\\end{equation} \n\n\\begin{equation}\\label{equ:recall}\n \\text{Recall} = \\frac{TP}{TP + FN} \n\\end{equation}\n\n \\begin{equation}\\label{equ:precision}\n \\text{Precision} = \\frac{TP}{TP + FP} \n \\end{equation} \n\n\\begin{equation} \\label{equ:f1score}\n \\text{F1-score} = \\frac{2 \\times (Recall \\times Precision)}{Recall + Precision} \n\\end{equation}\n\nWe assessed the effectiveness of the prediction models in validating security alerts with the F1-score, because accuracy (equation \\ref{equ:accuracy}) is not always a useful metric on its own: it does not capture class imbalance in the data. Recall (equation \\ref{equ:recall}) is a measure of robustness; it indicates whether a model fails to identify the relevant samples, e.g., fails to classify IP maliciousness correctly. It is important for the PoC system to have high recall to ensure that no malicious events are misinterpreted or ignored. Precision is the model's ability to accurately predict the positive class (malicious events), shown in equation \\ref{equ:precision}. A low value for precision indicates a high number of false positives. Thus, it is important to achieve high precision, as low precision would introduce the need for human validation of the output of SmartValidator. The F1-score can be considered the best metric for an overall evaluation, as it considers both precision and recall (equation \\ref{equ:f1score}) together and evaluates each class separately. The F1-score is unitless, as it is the harmonic mean of precision and recall, which are themselves unitless.\n\nWe defined a confidence score between 0 and 1 to be used by the security team as a threshold value for prediction models.
In our PoC, we compared the confidence score with the F1-score of the prediction model. If a model had a lower F1-score than the confidence score, the PoC discarded that model.\n\nWe defined computation time, as shown in equation \\ref{equ:comptime}, to evaluate the efficiency of building prediction models based on the SOC's needs. Computation time is the sum of the training time ($train_{time}$) and the prediction time ($predict_{time}$). Training time is the time required to build a model, and prediction time is the time required to predict unknown attributes using an optimal model.\n\n\\vspace{-15pt}\n\n\\begin{equation} \\label{equ:comptime}\n computation_{time} = train_{time} + predict_{time}\n\\end{equation}\n\n\\vspace{-10pt}\n\n\\section{Evaluation and Results}\\label{Sec:Eval}\n\nIn this section, we present the results of the developed PoC of SmartValidator to show the effectiveness and efficiency of a dynamic ML-based validator in automating and assisting the validation of security alerts under changing threat data and SOC needs.\n\n\\begin{table*}[thb]\n\\caption{Performance (F1-score) of different models for prediction of \\textit{attack} based on $ob^7_{attrib}$ using $DS_1$ and prediction of \\textit{threat type}, \\textit{threat level}, \\textit{name} and \\textit{event} based on $ob^{14}_{attrib}$ using $DS_2$}\n \\centering\n \n \\begin{tabular}{|l|c|c|c|c|c|}\n\n \\hline\n \n \\multirow{2}{*}{Model} & $DS_1\\ ob^7_{attrib}$ & \\multicolumn{4}{|c|}{$DS_2\\ ob^{14}_{attrib}$} \\\\ [1ex]\n \\cline{2-6}\n \n & attack & threat type & threat level & name & event \\\\ \n \n \\hline\nDT+LE & 0.719\t& 0.941 & 0.998\t& 0.938\t& \\textbf{0.995}\\\\\nRF+LE\t& 0.274\t& 0.727\t& 0.76\t& 0.689\t& 0.362\\\\\nKNN+LE\t& 0.594\t& 0.912\t&\t0.986\t&\t0.917\t&\t0.902\\\\\nGBAY+LE & 0.312\t& \t0.15\t&\t0.343\t&\t0.102\t&\t0.077\\\\\nRID+LE\t& 0.627\t& \t0.055\t&\t0.082\t&\t0.105\t&\t0.001\\\\\nSVM+LE\t& 0.401\t&\t0.135\t&\t0.654\t&\t0.295\t&\t0.283\\\\\nMLP+LE \t& 
\t0.546\t&\t0.762\t&\t-\t&\t0.878\t&\t-\\\\\nXGB+LE\t& \t\\textbf{0.787}\t&\t\\textbf{0.998}\t&\t\\textbf{0.999}\t&\t\\textbf{0.997}\t&\t-\\\\\n\\hline\nDT+OHE\t& \t0.587\t&\t0.917\t&\t0.988\t&\t0.905\t&\t0.926\\\\\nRF+OHE\t& \t0.501\t&\t0.916\t&\t0.948\t&\t0.912\t&\t0.87\\\\\nKNN+OHE\t& \t0.351\t&\t\\textbf{0.998}\t&\t\\textbf{0.998}\t&\t\\textbf{0.999}\t& \\textbf{0.996}\\\\\nGBAY+OHE\t& \t0.382\t&\t0.677\t&\t-\t&\t0.465\t&\t0.241\\\\\nRID+OHE\t& \t0.763\t&\t0.121\t&\t-\t&\t0.007 &\t- \\\\\nSVM+OHE\t& \t0.322\t&\t0.051\t&\t0.126\t&\t0.001\t&\t-\\\\\nMLP+OHE\t& \t0.762\t&\t0.061\t&\t-\t&\t0.079 &\t-\\\\\nXGB+OHE\t& \t\\textbf{0.784}\t&\t-\t&\t-\t&\t0.996\t&\t-\\\\\n \\hline\n \\end{tabular}\n \\label{tab:performanceModel}\n \\vspace{-10pt}\n\\end{table*}\n\n\\subsection{Evaluation of Effectiveness}\n\nWe evaluated the effectiveness of prediction models to answer RQ1: \\textit{\"How effective are prediction models in classifying CTI?\"}. \nSpecifically, we used two datasets, $DS_1$ and $DS_2$, as described in Section \\ref{subsec:5.2}, for predicting five unknown attributes ($un_{attrib}$). We collected the datasets such that each dataset was confirmed to have at least one observed value. Finally, all experiments were conducted on the collected datasets and different combinations of attribute sets. Based on self-defined SOC requirements, CTI datasets were selected and models were built. Optimal models were selected based on their effectiveness for a particular attribute set. Effectiveness is measured using the metrics described in Section \\ref{sub:evaluationMetrics}. We investigated the performance of the optimal models for classifying the threat data.\n\nWe found that 51 optimal models were returned by the PoC system based on the given requirements. 
Among them, seven models were built using $DS_1$ to predict \\textit{attack} based on $ob^1_{attrib}$ to $ob^7_{attrib}$, and the other~44 models were built using $DS_2$ to predict the four other unknown attributes based on $ob^8_{attrib}$ to $ob^{18}_{attrib}$.\n\nThe performance of different ML algorithms and encoding methods is summarized in Table~\\ref{tab:performanceModel} for the two observed attribute sets \u2013 $ob^7_{attrib}$ and $ob^{14}_{attrib}$. The results show that XGBoost (XGB) with Label Encoding (LE) achieved a near-perfect F1-score on $DS_2$. We further observed that with One Hot Encoding (OHE), K-Nearest Neighbors (KNN) performed better than XGB. However, XGB failed to train a model to predict \"\\textit{event}\" within the time and memory constraints when LE was used as the encoding method. Some of the model building processes failed as they could not finish within the allocated memory and time limits, which were 24 hours and 10GB for $DS_1$ and 48 hours and 100GB for $DS_2$. These limits were set and tested to investigate and simulate computational resource limits. \n\nFigure \\ref{fig:Effective} shows the evaluation score (F1-score) for different datasets, labels and encoding methods. Figure \\ref{fig:Effective(a)} and Figure \\ref{fig:Effective(b)} show the comparison of the different classifiers when trained on $DS_1$ and $DS_2$, respectively. We first observe that ML algorithms generally performed better using $DS_2$ data, with the exception of the Ridge classifier. This finding demonstrates the importance of CTI data and information quality. Prediction models require a large number of training examples to properly learn trends and patterns. We recommend utilizing data from CTI platforms such as MISP, as these platforms aggregate a large quantity of verified information from a variety of sources.\n\nFigure \\ref{fig:Effective(c)} shows the comparative classifier performance across both $DS_1$ and $DS_2$. 
We observe a large range in classifier performance. The variance in the best classification algorithms further motivates the need for automated model building and selection. Some ML algorithms were not as effective for this classification task. Hence, if a human were to repetitively use one algorithm to build models for different sets of attributes, the results would not be good (i.e., effective) for all models. It would also be time-consuming to validate every model. The results demonstrate that, on average, the XGB classifier performed extremely well, but KNN, as well as the tree-based classifiers (DT and RF), also performed well. MLP classifiers also appear to perform well. As MLP utilized a simple artificial neural network, this potentially motivates the investigation of more sophisticated deep learning methods in future work \\cite{FERRAG2020102419}.\n\n\\begin{figure*}\n\\centering\n \\subfloat[]{\\includegraphics[width=0.3\\textwidth]{figs\/ESDS1.pdf}\\label{fig:Effective(a)}}\n \\subfloat[]{\\includegraphics[width=0.3\\textwidth]{figs\/ESDS2.pdf}\\label{fig:Effective(b)}}\n \\subfloat[]{\\includegraphics[width=0.3\\textwidth]{figs\/EvaluationScore.pdf}\\label{fig:Effective(c)}}\n \\hfil\n \\subfloat[]{\\includegraphics[width=0.3\\textwidth]{figs\/ESlabel.pdf}\\label{fig:Effective(d)}}\n \\subfloat[]{\\includegraphics[width=0.3\\textwidth]{figs\/ESEncoding.pdf}\\label{fig:Effective(e)}}\n \\subfloat[]{\\includegraphics[width=0.3\\textwidth]{figs\/ESOptimalModel.pdf}\\label{fig:Effective(f)}}\n \n \\caption{Comparative analysis of different classifiers' performance (F1-score) while (a) using dataset 1, $DS_1$, (b) using dataset 2, $DS_2$, and (c) using both datasets, (d) predicting different labels and (e) using different encoding methods. 
(f) number of optimal models for different evaluation scores (F1-score)}\n \\label{fig:Effective}\n \\vspace{-15pt}\n\\end{figure*}\n\n\\begin{table}[]\n\\caption{Optimal classifier and encoding method with evaluation score (F1-score) for observed attributes $ob^1_{attrib}$ to $ob^7_{attrib}$ to predict \\textit{attack} using $DS_1$}\n\n \\centering\n \n \\begin{tabular}{|c|c|c|}\n\n \\hline\n \n \\multirow{2}{*}{Observed attributes} & \n \\multicolumn{2}{|c|}{attack} \\\\\n \\cline{2-3}\n \n & Optimal model & F1-score \\\\ \n \n \\hline\n$ob^1_{attrib}$ &\tKNN + LE &\t0.572 \\\\[1ex]\n$ob^2_{attrib}$ &\tRID + OHE &\t0.537\\\\[1ex]\n$ob^3_{attrib}$ &\tSVM + OHE &\t0.623\\\\[1ex]\n$ob^4_{attrib}$ &\tXGB + OHE &\t0.728\\\\[1ex]\n$ob^5_{attrib}$ &\tRID + OHE &\t0.637\\\\[1ex]\n$ob^6_{attrib}$ &\tRID + OHE &\t0.772\\\\[1ex]\n$ob^7_{attrib}$ &\tXGB + LE &\t0.787\\\\[1ex]\n\n \\hline\n \\end{tabular}\n \\label{tab:optimalModelDs1}\n \\vspace{-10pt}\n\\end{table}\n\nWe further performed a comparative analysis of predicting the five unknown attributes in Figure \\ref{fig:Effective(d)}. Models generally performed well for all prediction tasks, but performed best when classifying \\textit{threat\\_level}. This is potentially because \\textit{threat\\_level} has the lowest dimensionality (fewest classes) of the predicted attributes, and hence ample training data for each class. Correspondingly, the \\textit{event} attribute had lower performance due to its high dimensionality. $DS_1$ data was used to predict \\textit{attack}. Performance is noticeably worse for this attribute, with a mean evaluation score of approximately 0.5. This further emphasizes the importance of CTI quality.\n\nThe relative effectiveness of the two encoding strategies is analyzed and shown in Figure \\ref{fig:Effective(e)}. These two encoding strategies also provide a comparison of the two NLP techniques that we considered, which are count vectorization and TFIDF vectorization. 
One-hot encoding appeared to typically outperform label encoding, but the results are relatively similar. This also reflects the similar performance of count vectorization and TFIDF, where the count vectorizer performed slightly better, as seen in the one-hot encoding results. Similar to the previous observation, we found that the performance of the count vectorizer and TFIDF varies depending on the type of attributes and algorithms. However, classification with $DS_2$ produced effective machine learning models irrespective of the encoding \\cite{Helge2020, chen2020deep, islam2019multi, sabir2020machine}. \n\nTable \\ref{tab:optimalModelDs1} shows the optimal models for predicting \\textit{attack} using $ob^1_{attrib}$ to $ob^7_{attrib}$ that were built using $DS_1$. Table \\ref{tab:optimalModelDs2} shows the optimal models for predicting \\textit{threat type}, \\textit{threat level}, \\textit{name} and \\textit{event} based on $ob^8_{attrib}$ to $ob^{18}_{attrib}$ that were built using $DS_2$. Table \\ref{tab:optimalModelDs1} and Table \\ref{tab:optimalModelDs2} demonstrate that different classifiers performed better for different attribute sets. Thus, we cannot rely on a single algorithm. Table \\ref{tab:optimalModelDs1} shows that the IP-derived features in $ob^3_{attrib}$ (i.e., IP, ASN, owner, country) performed the best singularly, out of the three available features. $ob^5_{attrib}$, which had date and IP features, was also relatively similar in terms of predictive capability. However, by themselves the IP-derived features ($ob^3_{attrib}$) could only achieve an F1-score of 0.623. Using more available features increased the effectiveness of the prediction model, with the largest feature set achieving the best F1-score of 0.787. The addition of the domain feature to the IP feature set, which formed $ob^4_{attrib}$, did not significantly increase the effectiveness of the prediction model. 
This is likely due to the existing correlation between the IP and domain features. However, the date did appear to noticeably improve the F1-score. The large difference between the best and worst models' F1-scores highlights the importance of proper feature engineering and model selection.\n\nAnalysing the results of Table \\ref{tab:optimalModelDs2}, we found that the optimal models generally performed very well, obtaining extremely good evaluation scores (F1-scores). Several of the optimal models achieved a near-perfect score on the testing dataset. The models also performed noticeably better than those trained on $DS_1$, further showcasing the need for good features and a large dataset. The models which used time-based features performed noticeably better than similar feature sets which did not. The time-based features likely had such strong predictive power because only a few MISP events were recorded close together. However, models without the temporal features were still able to achieve an F1-score of over 0.8. The file-based features also appeared to have weak predictive power, as their inclusion or exclusion appeared to have very little impact on the evaluation score. The results of Table \\ref{tab:optimalModelDs2} further reflect that the \\textit{event} label was harder to predict than the other three labels. This was likely because \\textit{event} was the label with the highest number of classes. Interestingly, tree-based classifiers seemed to perform better for the \\textit{event} label. However, it should be noted that many of the more sophisticated models timed out for the \\textit{event} label, so their results were not recorded. It can also be seen that the \\textit{threat level} was the easiest to predict, likely because it had the lowest number of classes. 
Figure \\ref{fig:Effective(d)} displays the huge variance in F1-scores of trained models for different labels.\n\n\\begin{table*}[h]\n\\caption{Optimal models and encoding methods with evaluation score (f1-score) for attributes sets $ob^8_{attrib}$ to $ob^{18}_{attrib}$ to predict threat type, threat level, name and event}\n\n \\centering\n \n \\begin{tabular}{|p{1.2cm}|l|p{.9cm}|l|p{.9cm}|l|p{.9cm}|l|p{.9cm}|}\n\n \\hline\n \n \\multirow{2}{1.2cm}{Observed attributes} & \n \\multicolumn{2}{c|}{threat type} & \\multicolumn{2}{c|}{threat level} & \\multicolumn{2}{c|}{name} &\\multicolumn{2}{c|}{event}\\\\\n \\cline{2-9}\n \n & Optimal model & F1-score & Optimal model & F1-score & Optimal model & F1-score & Optimal model & F1-score \\\\ \n \\hline\n$ob^8_{attrib}$ &\tMLP + OHE&\t0.648&\tXGB + LE&\t0.68&\tRID + OHE&\t0.593&\tDT + LE&\t0.275\\\\[1ex]\n$ob^9_{attrib}$&\tSVM + OHE&\t0.854&\tXGB + LE&\t0.84&\tSVM + OHE&\t0.809&\tSVM + OHE&\t0.735\\\\[1ex]\n$ob^{10}_{attrib}$\t&SVM + OHE&\t0.859&\tSVM + OHE&\t0.915&\tSVM + OHE&\t0.864&\tSVM + OHE&\t0.763\\\\[1ex]\n$ob^{11}_{attrib}$\t&KNN + OHE&\t0.998&\tXGB + LE&\t0.999&\tKNN + OHE&\t0.999&\tDT + LE\t&0.898\\\\[1ex]\n$ob^{12}_{attrib}$&\tKNN + OHE&\t0.997&\tKNN + OHE&\t0.999&\tKNN + OHE&\t0.998&\tDT + LE\t&0.994\\\\[1ex]\n$ob^{13}_{attrib}$&\tKNN + OHE&\t0.998&\tXGB + LE&\t0.999&\tKNN + OHE&\t0.998&\tKNN + OHE&\t0.995\\\\[1ex]\n$ob^{14}_{attrib}$\t&KNN + OHE&\t0.998&\tXGB + LE&\t0.999\t&KNN + OHE&\t0.999&\tKNN + OHE&\t0.999\\\\[1ex]\n$ob^{15}_{attrib}$&\tSVM + OHE&\t0.873&\tSVM + OHE&\t0.862&\tSVM + OHE&\t0.813&\tSVM + OHE&\t0.725\\\\[1ex]\n$ob^{16}_{attrib}$&\tXGB + LE&\t0.998&\tXGB + LE&\t1&\tKNN + OHE&\t0.998&\tDT + LE\t&0.996\\\\[1ex]\n$ob^{17}_{attrib}$&\tXGB + LE&\t0.997&\tXGB + LE&\t0.999&\tKNN + OHE&\t0.999&\tRF + OHE&\t0.865\\\\[1ex]\n$ob^{18}_{attrib}$&\tXGB + LE&\t0.998&\tXGB + LE&\t1&\tKNN + OHE&\t0.998&\tDT + LE&\t0.996\\\\[1ex]\n\n \n \\hline\n\n \\hline\n \\end{tabular}\n \\label{tab:optimalModelDs2}\n 
\\vspace{-10pt}\n\\end{table*}\n\n\n\nWe further analyzed the classification confidence of the models of SmartValidator on the collected data. The evaluation results are visualized in Figure \\ref{fig:Effective(f)}. We considered different confidence scores (0.6 \u2013 0.9), as the preferred confidence score varies with the attribute set. Figure \\ref{fig:Effective(f)} shows that at runtime, with a confidence score of 0.8, 80\\% of the models that were built based on $DS_2$ fit the needs of a SOC. These models were built based on the saved feature engineering and ML algorithms. Figure \\ref{fig:Effective(f)} gives the security team an overview of whether they can rely on a dataset whose performance does not meet their requirements. It shows that as the confidence score increases, the number of models above the confidence score decreases. It can be seen that the models on $DS_2$ classified the attributes with a much higher confidence. We found that one of the key reasons behind this is that $DS_1$ had comparatively fewer data elements than $DS_2$. Thus, $DS_2$, which captured and correlated more variation in the data, provided better results than $DS_1$. The results show that approximately 84\\% of the 51 optimal models had an F1-score (or confidence score) above 0.72 and 75\\% of the models had an F1-score above 0.8. Most of the models that were built with data gathered from CTI platforms can effectively predict $un_{attrib}$ based on $ob_{attrib}$ with a higher F1-score than the models that were built with CTI gathered from public websites. \n\nIn summary, the MISP dataset (i.e., $DS_2$) was found to be a high-quality dataset that worked well with automated classification and thus validation of alerts. Hence, using attributes that are representative of possible threat data, prediction models can be built to effectively validate alerts with a substantial degree of accuracy, precision and recall. 
The results of $DS_2$ reflect that ML-based validation models can be used to effectively validate alerts with high-quality CTI like $DS_2$. Model choice and alternatives are seen to be important steps in finding the optimal models, as the PoC returned different models for different attribute sets.\n\n\\vspace{-5pt}\n\\subsection{Evaluation of Efficiency} \\label{subsec:6.2}\n\nTo demonstrate the efficiency of SmartValidator, we answer RQ2: \"\\textit{How efficient is SmartValidator in selecting and building prediction models on runtime over pre-building prediction models?}\".\n\nIn SmartValidator, we propose to build the models at run time based on the SOC's requirements instead of pre-building all possible models. We considered the time to build all possible model combinations of the eight aforementioned classifier algorithms as a baseline to compare the efficiency of SmartValidator. We observed that it was infeasible to pre-build models for every possible combination of features. For $DS_1$, there were 62 possible feature combinations, and for $DS_2$ there were 8190. This would increase the number of experiments for $DS_2$ to 524,160. For predicting the five unknown attributes with the 18 attribute sets, we would only need to run 1440 experiments; only 0.26\\% of the total experiments. \nThus, to calculate the efficiency, that is, the computation time, based on the SOC's requirements, the PoC ran a total of 816 experiments: 112 experiments for $DS_1$ (8 ML algorithms $\\times$ 7 input attribute sets $\\times$ 1 output attribute $\\times$ 2 encoding methods) and 704 experiments for $DS_2$ (8 ML algorithms $\\times$ 11 input attribute sets $\\times$ 4 output attributes $\\times$ 2 encoding methods) to test all combinations of the 18 observed attribute sets, prediction models, encoding methods and five unknown attributes (i.e., classification labels). \n\nWe considered the total time as a baseline to evaluate the efficiency of building the model at runtime.
Here we attempted to simulate the resource limitations of model construction in a real-world environment. We considered $DS_1$ a lightweight dataset and assigned restrictions of a 24-hour runtime and 10GB of memory, whereas $DS_2$ was a heavyweight dataset and assigned a 48-hour runtime limit and 100GB of memory. Any experiment that exceeded this run time or consumed too much memory was aborted, as it was deemed impractical due to organisations' strict resources and fast response requirements \\cite{islam2019multi, sonicwall2020}. In this section, we report the efficiency in terms of time.\n\nFor $DS_1$, all 112 experiments completed successfully. However, for $DS_2$, 169 of the 704 experiments failed to finish whilst enforcing our experimental setup. The most common classifier models to time out were MLP and XGB, as these models had a significantly larger training time. Of these failed jobs, 149 out of 169 were for either the \\textit{event} or \\textit{threat level} label, as these datasets had many more valid entries, and thus also took more time and memory to train. Similarly, 116 of the failed jobs used one-hot encoding, as this encoding method was much less efficient than label encoding, due to every possible value adding a dimension to the encoded input. However, 53 of the label encoding experiments also failed due to the size of the text features. These features were encoded with very high dimensionality due to the lack of a natural language convention. To address this issue, in future work we plan to investigate more efficient encoding methods through vocabulary-size and dimensionality reduction. \n\nTable \\ref{tab:timeDs1} and Table \\ref{tab:timeDs2} show the training time and prediction time of the optimal models that were built based on $DS_1$ and $DS_2$, respectively. The experimental results show that it is extremely inefficient to pre-build a large number of prediction models. 
For $DS_1$ the total training time was 61064 seconds (0.7 days), and for $DS_2$ the total training time was 7010279 seconds (81.1 days). The prediction time of a model was significantly faster than the training time, which further encouraged the use of validation models. On average, models made predictions in 0.6\\% of the training time for $DS_1$ (0.09 seconds), and 2.9\\% for $DS_2$ (17.29 seconds). \n\n\\begin{table}[h]\n\\caption{Training time and prediction time of optimal models in \\textbf{seconds} for attribute sets $ob^1_{attrib}$ to $ob^7_{attrib}$ to predict \\textit{attack} using $DS_1$}\n\n \\centering\n \n \\begin{tabular}{|c|l|c|c|}\n\n \\hline\n \n \\multirow{2}{*}{Model} & \n \\multicolumn{3}{|c|}{attack} \\\\\n \\cline{2-4}\n \n & Optimal model & Train time & Predict time \\\\ \n \n \\hline\n$ob^1_{attrib}$ &\tKNN + LE &\t758&\t0.676 \\\\[1ex]\n$ob^2_{attrib}$ &\tRID + OHE &\t0.90&\t0.002\\\\[1ex]\n$ob^3_{attrib}$ &\tSVM + OHE &\t482\t&0.001\\\\[1ex]\n$ob^4_{attrib}$ &\tXGB + OHE &\t2597&\t0.196\\\\[1ex]\n$ob^5_{attrib}$ &\tRID + OHE &\t5.6\t&0.001\\\\[1ex]\n$ob^6_{attrib}$ &\tRID + OHE &\t2.2\t&0.001\\\\[1ex]\n$ob^7_{attrib}$ &\tXGB + LE &\t1422.9&\t0.09\\\\[1ex]\n \\hline\n \\end{tabular}\n \\label{tab:timeDs1}\n \\vspace{-10pt}\n\\end{table}\n\n\\begin{table*}[h]\n\\caption{Training time and prediction time of optimal models in \\textbf{seconds} that were built to predict threat type, threat level, name and event based on attributes $ob^8_{attrib}$ to $ob^{18}_{attrib}$ using $DS_2$ }\n\n \\centering\n \n \\resizebox{\\textwidth}{!}{%\n \n \\begin{tabular}{|p{1.25cm}|p{1.45cm}|p{1cm}|p{.8cm}|p{1.45cm}|p{1cm}|p{.8cm}|p{1.45cm}|p{1cm}|p{.8cm}|p{1.45cm}|p{.8cm}|p{.8cm}|} \n \\hline\n \n \\multirow{2}{1.3cm}{Observed attributes} & \n \\multicolumn{3}{c|}{threat type} & \\multicolumn{3}{c|}{threat level} & \\multicolumn{3}{c|}{name} &\\multicolumn{3}{c|}{event}\\\\\n \\cline{2-13}\n \n & Optimal model &\tTrain time &\tPredict time\t& Optimal model 
&\ttrain Time&\tPredict time &\tOptimal model &\ttrain time &\tPredict time &\tOptimal model &\tTrain time&\tPredict time \\\\ \n \\hline\n$ob^8_{attrib}$ & \tMLP+OHE& \t166623& \t0.18& \tXGB+LE& \t111788& \t56.86& \tRID+OHE\t& 10959\t& 0.036& \tDT+LE& \t289.87& \t0.199\\\\[1ex]\n$ob^9_{attrib}$& \tSVM+OHE& \t1141.1\t& 0.031& \tXGB+LE& \t144473& \t139.2& \tSVM+OHE& \t925.37& \t0.016& \tSVM+OHE& \t37846\t& 0.686\\\\[1ex]\n$ob^{10}_{attrib}$& \tSVM+OHE &\t1199.66 & \t0.024& \tSVM+OHE& \t804.92& \t0.006& \tSVM+OHE\t& 994.43& \t0.014& \tSVM+OHE& \t27004& \t0.657\\\\[1ex]\n$ob^{11}_{attrib}$\t& KNN+OHE\t& 2599.72\t& 23.1& \tXGB+LE& \t19190\t& 3.115& \tKNN+OHE& \t984.94& \t5.98& \tDT+LE\t& 371.38& \t0.204\\\\[1ex]\n$ob^{12}_{attrib}$& \tKNN+OHE& \t2592.79& \t22.11& \tKNN+OHE& \t16871& \t176.6& \tKNN+OHE& \t2795.24\t& 14.56\t& DT+LE& \t534.94& \t0.197\\\\[1ex]\n$ob^{13}_{attrib}$\t& KNN+OHE\t& 3177.3& \t32.04& \tXGB+LE\t& 27067& \t4.296& \tKNN+OHE& \t1178.39& \t9.311& \tKNN+OHE& \t18065& \t259.95\\\\[1ex]\n$ob^{14}_{attrib}$\t& KNN+OHE& \t2535.45& \t22.09& \tXGB+LE& \t16036& \t2.556& \tKNN+OHE& \t951.54& \t6.713& \tKNN+OHE& \t16731& \t224.98\\\\[1ex]\n$ob^{15}_{attrib}$\t& SVM+OHE\t& 1081.05\t& 0.015& \tSVM+OHE& \t739.58& \t0.004& \tSVM+OHE& \t1019.56& \t0.007& \tSVM+OHE& \t21275& \t0.369\\\\[1ex]\n$ob^{16}_{attrib}$\t& XGB+LE& \t10552.3& \t21.38\t& XGB+LE& \t4033.701& \t1.913& \tKNN+OHE& \t899.12&\t5.679&\tDT+LE&\t102.35& \t0.213\\\\[1ex]\n$ob^{17}_{attrib}$ & \tXGB+LE& \t19602.7& \t19.32& \tXGB+LE& \t13950.9& \t3.938& \tKNN+OHE& \t1005.59& \t6.64& \tRF+OHE& \t7036.7& \t15.78\\\\[1ex]\n$ob^{18}_{attrib}$\t& XGB+LE& \t14257.7& \t18.57& \tXGB+LE& \t8569.79\t& 3.175\t& KNN+OHE\t& 949.95& \t5.611& \tDT+LE\t& 142.76& \t0.22\\\\[1ex]\n \n \\hline\n\n \\hline\n \\end{tabular}\n \\label{tab:timeDs2}\n \\vspace{-10pt}\n }\n\\end{table*}\n\n\\begin{figure*}\n\\centering\n \\subfloat[]{\\includegraphics[width=0.32\\textwidth]{figs\/timeDS1.pdf}\\label{fig:efficiency(a)}}\n 
\\subfloat[]{\\includegraphics[width=0.32\\textwidth]{figs\/timeDS2.pdf}\\label{fig:efficiency(b)}}\n \\subfloat[]{\\includegraphics[width=0.32\\textwidth]{figs\/time.pdf}\\label{fig:efficiency(c)}}\n \\hfil\n \\subfloat[]{\\includegraphics[width=0.32\\textwidth]{figs\/TimeLabel.pdf}\\label{fig:efficiency(d)}}\n \\subfloat[]{\\includegraphics[width=0.32\\textwidth]{figs\/timeEncoding.pdf}\\label{fig:efficiency(e)}}\n \n \\caption{Comparative analysis of time in \\textbf{seconds} required to train different ML-based validation models with (a) dataset 1, (b) dataset 2, (c) both datasets, (d) different labels and (e) encoding methods}\n \\label{fig:Efficiency}\n \\vspace{-15pt}\n\\end{figure*}\n\nFigure \\ref{fig:Efficiency} shows the distribution of training times (on a logarithmic scale) for different datasets, labels and encoding methods. The distributions of training times for $DS_1$, $DS_2$, and both $DS_1$ and $DS_2$ are shown in Figure \\ref{fig:efficiency(a)}, Figure \\ref{fig:efficiency(b)} and Figure \\ref{fig:efficiency(c)}, respectively. It should be noted that Figure \\ref{fig:Efficiency} does not include the run times of experiments that timed out. As shown in Figure \\ref{fig:Efficiency}, the run times for more intensive models are skewed to the left. Noticeably, the Na\u00efve Bayes (GBAY) classifiers were trained almost instantaneously, as these models did not require heavy fitting to the data. DT was similarly quick to train on both datasets, due to the simplicity of this model. Figure \\ref{fig:efficiency(a)} shows that, for $DS_1$, SVM, KNN and MLP required an average of 10-15 minutes to train. However, the XGB classifier took by far the longest time to train, with a median value of 36 minutes. For $DS_2$, the average overall training time was 217 minutes (shown in Figure \\ref{fig:efficiency(b)}), which was significantly larger than the average overall training time of 9 minutes for $DS_1$, due to the substantial dataset size increase. 
However, XGB had a significantly larger training time of over 8 hours. We observed that the runtimes of RID and SVM were quite different even though both are linear classifiers (Figure \\ref{fig:efficiency(c)}). The SVM classifier took an average of 30 minutes to train on $DS_2$ due to hyperparameter optimization, whereas the RID classifier took an average of 10 seconds, as it did not require any significant hyperparameter tuning. These observations highlight the importance of SOC requirements, as we can see a trade-off between model performance and training time. XGB is the best performing model on average, but also exhibits the largest training time. Hence, SOC analysts would need to weigh model effectiveness against efficiency.\n\nFigure \\ref{fig:efficiency(d)} displays the training time of the completed experiments for each predicted attribute. Models targeting the \\textit{attack} attribute took an average training time of only 9 minutes, as they were trained using the much smaller $DS_1$ dataset, in comparison to the other attributes, which were trained using $DS_2$. Models took an average of 2 hours to train for the \\textit{name} attribute, in comparison to \\textit{threat\\_level}, \\textit{threat\\_type} and \\textit{event}, which took a mean time of around 4.5 hours. This could be because the training set was much smaller for the \\textit{name} attribute, as fewer CTI entries were assigned such information. Similarly, it should be noted that a large portion of the \\textit{event} and \\textit{threat\\_level} experiments timed out, as the training sets were larger for these attributes, due to more valid entries.\n\nFigure \\ref{fig:efficiency(e)} shows that the training time did not differ significantly between encoding methods. This is because most of the encoding dimensionality came from the domain and filename features, which were treated as text attributes and were thus only one-hot encoded in our experiments. 
However, one-hot encoding usually exhibited larger training times, as it has much higher dimensionality and is thus less efficient. For $DS_1$, one-hot experiments took an extra 4.6 minutes on average (11.3 minutes vs 6.8 minutes). For $DS_2$, one-hot experiments took an extra 18.9 minutes on average (227.7 minutes vs 208.8 minutes).\n\nPrediction models were built by experts (i.e., data scientists) who have knowledge of ML technologies and of the pipeline to automatically validate the alerts. We observed that, as SOC requirements changed, interaction and collaboration were required between the security team and the data science team, where the security team specified the requirements and requested the models they needed. If the data required by the data science team were not available, they needed to request them from the threat intelligence team, who gathered the requested information and updated the relevant list of information.\nHence, the model required redesigning, and further actions needed to be performed to achieve the best prediction models.\nWhilst using the PoC based on SmartValidator, these interactions could be minimized by managing them through the orchestrator. In this way, the orchestrator requested the model builder to build the required validation model. Constructing the models automatically based on a SOC's needs required less time and was more feasible than constructing models for all combinations of attribute sets.\n\nIn evaluating the efficiency of SmartValidator, we found that it successfully identified and classified the threat data required for alert validation. The same framework can be used to automate the validation of newly listed alerts with new data sources. The data science team only needs to map the suitable algorithms to suitable attribute sets and define the required data sources. 
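To make the dimensionality gap behind the encoding comparison concrete, here is a minimal pure-Python sketch (not taken from the PoC's code; the country values are hypothetical) of why one-hot encoding (OHE) produces one column per distinct category value while label encoding (LE) keeps a single integer column per feature:

```python
# Minimal sketch (hypothetical values, not the SmartValidator PoC code):
# label encoding maps each category to an integer, keeping 1 column;
# one-hot encoding expands a feature into 1 indicator column per value.

def label_encode(column):
    codes = {v: i for i, v in enumerate(sorted(set(column)))}
    return [codes[v] for v in column]

def one_hot_encode(column):
    values = sorted(set(column))
    return [[1 if v == u else 0 for u in values] for v in column]

countries = ["AU", "US", "AU", "DE"]
print(label_encode(countries))    # → [0, 2, 0, 1]  (1 column)
print(one_hot_encode(countries))  # 3 indicator columns, one per country
```

With high-cardinality features such as domains and filenames, the one-hot width grows with the number of distinct values, which is consistent with the longer OHE training times reported above.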
\n\n\\vspace{-5pt}\n\\subsection{Discussion}\nWe consider the same attribute sets and CTI sources that are used for security incident and alert validation for three validation approaches. The validation approaches are (i) manual validation, (ii) pre-building prediction models and (iii) automatic construction of prediction models based on a SOC's requirements.\n\nIn manual validation, for each attribute or set of attributes, a security team first searched for the attribute types and then looked for the relevant available CTI. The security team used their previous experience to select the CTI with which to perform the validation. For example, to validate a malicious IP, a security team collected the blacklisted IP addresses and then looked for the IPs on the list. They further used the WhoIs database to identify relevant information about suspected IPs. The security team needed to manually write queries or call APIs to find and extract information from CTI, and required knowledge about the underlying CTI sources. For similar types of alerts or a changing context (i.e., a change in CTI, alerts or a SOC's requirements), the same sequence of actions was repeated, which cost significant man-hours and required knowledge about the underlying plugins, APIs, CTI sources and so on.\n\nIn a changing context, while following approach 2 (pre-building prediction models), the security team needed to request the data science team to train and build the possible prediction models for the new context. Section \\ref{subsec:6.2} reveals that building prediction models each time a change occurs is not feasible. With automatic construction of prediction models, each time a SOC requested a validation task, the models were built automatically. In a changing context, the orchestrator coordinated the data collection and model building process, which freed the security team from coordinating and communicating with the data science team and reduced the delay incurred due to communication. 
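The on-demand construction of approach (iii) can be sketched as a cache keyed by the requested attribute sets, so a model is only built the first time a SOC asks for that combination. This is a hypothetical illustration of the orchestrator's behaviour, not the PoC's actual code; the function names and attribute names are invented:

```python
# Hypothetical sketch of approach (iii): build validation models on demand,
# per SOC request, and cache them instead of pre-building every combination.
_model_cache = {}

def get_validation_model(observed_attrs, unknown_attr, build_model):
    """Return a model for this (observed -> unknown) request, building it only once."""
    key = (tuple(sorted(observed_attrs)), unknown_attr)  # order-insensitive key
    if key not in _model_cache:
        _model_cache[key] = build_model(observed_attrs, unknown_attr)
    return _model_cache[key]

# Usage: the builder callback stands in for the data-science pipeline.
build_calls = []
def toy_builder(obs, unk):
    build_calls.append((tuple(obs), unk))
    return f"model:{unk}"

m1 = get_validation_model(["ip", "asn"], "threat_level", toy_builder)
m2 = get_validation_model(["asn", "ip"], "threat_level", toy_builder)  # cache hit
print(m1, len(build_calls))  # → model:threat_level 1
```

The point of the sketch is the cost model: only the attribute combinations a SOC actually requests incur the (expensive) build step, matching the observation that on-demand construction is more feasible than pre-building all combinations.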
In this work, we have experimentally evaluated the performance of SmartValidator. We did not discuss the amount of time required by the security team to perform the validation activities or the time required for communication between the security team and the data science team. This would also include the time gap between a request being made and the security team receiving the model. In future work, we plan to evaluate SmartValidator in a real SOC environment to further demonstrate how it can be beneficial in such an environment; for example, the effort required to manage the PoC system versus the cost savings from automation. Further, we want to evaluate the maximum upfront cost of incorporating the PoC system in a real SOC environment.\n\nFor automating the construction of ML-based validation models, the PoC followed three major steps: i) collecting and processing the data, ii) training the classifier and iii) running the prediction model. Step 1 required a large amount of time (hours), as there were hundreds of thousands of data points to download and process. Step 2 took a moderate amount of time (minutes), as the data needed to be encoded and the chosen classifier needed to be trained and its hyperparameters optimised. Step 3 was fast, as it only needed seconds to apply a pre-trained model. These steps were bundled into an installable Python package which could be made publicly available. We designed the PoC in a modular fashion so that it can be integrated into other network-enabled services to gain more information about network security. The system can also easily be extended with future improvements.\n\nTo validate alerts coming from an IDS, the developed PoC system can be extended to first receive the IDS alerts over a network. 
After parsing the alert attributes (which would be similar to the attribute sets used in our PoC), the next steps are to transform the alert attributes into features, and then look for potential CTI to correlate the alert information into patterns by building prediction models to predict suspicious behavior. Furthermore, the alerts can be validated using the models, and the validated output can display the security context of the network in a graphical user interface that is easy to understand. The PoC system can be enhanced to provide an API that can be integrated as a part of a SOC's existing security system, such as middleware for EDR or SIEM.\n\nIn our experiment, we selected two types of CTI: one gathered from websites, and another from the OSINT platform MISP, which is widely used by industry as it contains high-quality data with enriched IOCs. We consider the confidence score of users to ensure that models with low evaluation scores are not selected. We also merged multiple data sources to enrich the CTI. The experimental results show that SmartValidator performs better when using MISP than when using web data. We assert that this is due to the high quality of MISP. The lower-level CTI used in our experiment can be replaced with higher-quality CTI such as TTPs, in which case SmartValidator will perform the same steps, from identifying CTI to building models and performing the validation. The prediction models can be enriched with advanced IOCs and TTPs with more details about the threats.\n\nThe key steps identified to improve the models automatically built through SmartValidator are effective feature encoding, hyperparameter optimization, data distribution, feature extraction, dimensionality reduction and classifier selection. Increasing the size of the dataset and the number of features increases the F1-score. The dimensionality of the categorical variables needs to be decreased. 
It is worth noting that our investigation is by no means exhaustive; we adopted basic ML principles and NLP techniques to develop a simple PoC. We plan to investigate more ML techniques, such as data balancing, normalization and feature selection, with diverse types of CTI. Similarly, we intend to investigate more sophisticated NLP techniques, such as word embeddings, which can capture semantic information of the natural language attributes.\n\nWe found that CTI datasets contain highly multi-variate categorical variables. High-dimensional problems like this are likely to be linearly separable, as we can separate any $d+1$ points in general position in a $d$-dimensional space with a linear classifier, regardless of how the points are labelled. We further found that, overall, ensemble classifiers such as XGB performed better than the other seven selected algorithms.\n\n\nAs shown in Figure \\ref{fig:framework}, we consider that dedicated expertise (e.g., a data scientist) is required to build prediction models, and in most cases this differs from the SOC team using the models for the validation task. The prediction models are built by experts (e.g., data scientists, ML experts or developers) who are knowledgeable about ML libraries, feature engineering and algorithms. Depending on a SOC's security team capability, the model building process can also be replaced with an Automated Machine Learning (AutoML)\\footnote{https:\/\/www.automl.org\/automl\/} framework such as Google Cloud AutoML\\footnote{https:\/\/cloud.google.com\/automl}. AutoML frameworks are designed to provide ML as a service, where a security team is required to provide the pre-processed data and, in some cases, the transformed data. Such a framework considers multiple ML algorithms in a pipeline to evaluate the performance, and performs hyperparameter tuning and validation in an attempt to improve the performance. An AutoML framework provides a list of optimal models. 
For example, TPOTClassifier\\footnote{http:\/\/epistasislab.github.io\/tpot\/api\/} is an automated ML classifier that was developed in an attempt to automate the ML pipeline in Python. It explores prediction model configurations that a data analyst or security analyst may not consider and attempts to fine-tune the model to achieve the most optimized model. Hence, the model builder of our proposed SmartValidator can be developed following the process discussed in Figure~\\ref{fig:predictionlayer} or using an AutoML framework. Thus, depending on an organization's SOC capabilities, it may use an AutoML framework instead of building the prediction models with the assistance of data scientists.\n\nSOC teams overwhelmed with a massive volume of alerts have failed to respond to security incidents even though they had the alerts and the corresponding information in their CTI\\footnote{\\url{https:\/\/www.trendmicro.com\/explore\/en_gb_soc-research}}. Hence, we assert that manual and repetitive validation tasks can be automated through SmartValidator, whereas more critical or unknown alerts and incidents would still require human involvement. In future work, we plan to extend the PoC to provide more explainable output so that a SOC can make decisions based on the validated output, where the prediction model choice and alternatives would be captured with an explanation.\n\n\\subsection{Limitation of SmartValidator}\n\n\nThe experimental results show the effectiveness of SmartValidator. However, we observed several cases in which SmartValidator is unable to perform the validation.\nFurthermore, using CTI to validate alerts for unknown or zero-day attacks might not always be practical. To empirically investigate this, we ran a separate experiment to explore the ability of SmartValidator to detect unknown alerts. For our experiment, we used the alert data of Snort and selected IP information, that is, $ob_{attrib}^3$, to predict \\textit{threat level}. 
To validate such information, we used SmartValidator to build a model trained exclusively with the MISP data ($DS_2$), so that the Snort alert data was almost entirely unseen. The Snort alert data contained 16,317 distinct IPs, of which only 78 were seen in our MISP training dataset. The prediction model achieved an F1-score of 0.307, which implies that SmartValidator has some capability for the prediction of unknown alerts, albeit limited. Even though the alerts and IPs were unseen, the prediction model was still able to detect some patterns inferred from the ASN, IP owner and country attributes. We assert that, due to their intelligent nature and their ability to learn the underlying semantic patterns, the models built with SmartValidator have the potential to validate some unseen values of unknown attributes (i.e., unknown attributes for which the model is built depending on the observed attributes). If the alert data is entirely unseen, i.e., the IP, ASN, owner and country are all absent from the training data, then SmartValidator will predict an output based on the most common value in the training data.\n\nTo validate any unknown attribute (i.e., $un_{attrib}$) with specific values, SmartValidator always needs the observed attributes as input to build a model. Therefore, even if the given value of an unknown attribute is not seen in the training data, it is still possible to make a correct prediction by learning patterns in the model building phase. For example, malicious IPs can have the same domain and threat actors. Here, an IP that has not been seen before can be identified as malicious by observing the threat actors and the domain. However, it is always possible that a trained model fails to correctly predict IPs, which is a limitation of our proposed approach. We observed that there are cases where the observed attributes are not representative enough to capture and learn patterns about the unknown attributes. 
SmartValidator will not be applicable for automating the validation tasks in these scenarios. A quantifiable investigation of this limitation is out of the scope of this work, but it is an exciting area of exploration for future research.\n\nCTI is time-sensitive. Hence, SOC teams will need to update the model whenever new CTI is available. The framework can further be extended to capture the timeliness of the CTI used for building the models and to keep track of the models with up-to-date CTI. However, not all CTI will be updated simultaneously; thus, changing all the models whenever there is an update in the CTI will not be feasible. SmartValidator can be extended to handle this situation by retraining the model when new requests come in and by retraining and evaluating the available models built based on old CTI.\n\nPoor-quality CTI sources may also affect the performance of SmartValidator. For example, it is possible that the models infer wrong patterns from bad quality data and consider legitimate or benign IPs as malicious. There is a need for empirical studies that focus on ensuring the quality of CTI. However, this is not within the scope of this study.\n\n\n\n\\subsection{Threats to validity}\n\n\\textit{Construct Validity:} Our choice of data for our evaluation setup may not be suitable. We have considered CTI data to also be representative of alert attributes, as CTI is often generated from existing security alerts from external organizations. However, the classifiers have not yet been tested thoroughly with real-world internal business data. This evaluation will be attempted in future work.\n\n\\textit{Internal Validity:} A potential concern is that our models are not properly optimised. The hyperparameter tuning was performed for a specific set of configurations, as testing all possible combinations of hyperparameters would take a large amount of time that may not be justified here. 
Moreover, knowing all the combinations of hyperparameters is quite impossible. Similarly, the features that we selected to train our models are non-exhaustive. The attribute sets were chosen based on the attributes used for validating alerts through security orchestration. The purpose was to show that SmartValidator can automate the construction of prediction models by identifying the CTI, and that the constructed prediction models can effectively validate alerts based on a SOC's requirements. The attribute list might not reflect a complete list of attributes for validating certain alerts, but our system can easily be extended to several other attack scenarios.\n\n\\textit{External Validity:} Our experiments may not generalize to other datasets. The built classifiers and prediction models were evaluated based on simulated alert attribute sets and publicly available CTI such as MISP.\n\n\\vspace{-10pt}\n\n\\section{Related Work} \\label{Sec:RelatedWork}\n\nResearch trends are seen in the use of Machine Learning (ML) and Deep Learning (DL) in the cybersecurity domain for the detection and classification of cyberattacks. Most of the existing literature focuses on using AI techniques such as NLP, ML and DL tools and techniques to identify and detect cyber attacks such as malware, network intrusion, data exploitation and vulnerabilities \\cite{AHMED201619, Helge2020, FERRAG2020102419, GAMAGE2020102767, GIBERT2020102526, sabir2020machine, softVul9108283}. ML algorithms are used to extract knowledge from open source public repositories, which is later used to analyze attacks or validate alerts. Although automation has been achieved in the detection and analysis of attacks, the validation of alerts and incidents still requires a SOC's involvement \\cite{islam2019multi}.\n\nCTI is used by security experts of SOCs to analyze and validate alerts. 
To ease the use of CTI, researchers have been trying to come up with a unified structure for sharing CTI \\cite{menges2019unifying, tounsi2018survey}. STIX \\cite{Stixbarnum2012standardizing}, TAXII \\cite{taxiiconnolly2014trusted}, CybOX \\cite{barnum2012cybox} and UCO \\cite{menges2019unifying} are popular among them. The use of Artificial Intelligence (AI) is encouraged for identifying, gathering and extracting CTI objects \\cite{RF2019, qamar2017data, Struve2017}. Various AI tools and techniques are used for knowledge extraction, representation and analytics of CTI \\cite{brazhuk2019semantic, tounsi2018survey}. For example, Zahedi et al. \\cite{zahedi2018empirical} applied topic modelling techniques such as LDA to find security-relevant topics in open source repositories such as GitHub. Another example is a system used by EY \\cite{EY2017}, which mined previous threat data and then analyzed it to give information on threats. Using this information, they can respond to attacks and continuously monitor a system. They place data collectors at points of high movement in a network, such as a server, where the system can continuously analyze data and keep the network safe. Detected attacks can be used to harvest IOCs and analyzed to discover security issues within the network. Recorded Future (a widely known CTI service provider \\cite{RFID2021}) also elaborated on the fact that threat data can be found in a large variety of places, such as Tweets, Facebook posts and emails. They also use AI to recognize patterns in emails so that phishing emails can be found based on information about the sender or the attached file.\n\nRecent advances in the CTI domain have drawn attention to the use of existing knowledge to automate the manual analysis of human experts and the enrichment of quality CTI \\cite{azevedo2019pure, edwards2017panning, noor2019machine, zhou2019ensemble}. 
For example, vulnerability descriptions from NVD-like databases are being used to predict the severity, confidentiality and availability impacts of threats. Le et al. \\cite{le2019automated} used NLP and traditional ML algorithms to perform automated vulnerability assessment using the vulnerability descriptions of open source public repositories. Noor et al. \\cite{noor2019machine} used data provided by STIX and the MITRE Corporation to identify documents related to attacks. Azevedo et al. proposed a platform, PURE, to improve the quality of CTI in the form of enriched IoCs by automatically correlating and aggregating the IoCs \\cite{azevedo2019pure}. They evaluated the performance of the proposed platform with 34 OSINT feeds gathered from MISP.\n \nOne recent study by Recorded Future laid out four ways of using AI techniques to extract CTI from a detected attack \\cite{Struve2017}. They defined risk score metrics to identify malicious network activity. This extends the classification beyond whether an attack has occurred and provides more in-depth information on the threat \\cite{RFteam2018, Struve2017}. Recorded Future has used NLP to increase the range of possible data sources by removing the restriction to structured information \\cite{Struve2017}. They utilize extracted text and perform classification for the language, topic and company. They have applied ML and NLP techniques to rank documents to identify malware data attacks. Their model also considers the different classifications needed, such as scoring a risk value. They do not always use ML for scoring a risk value, as they often have a rule-based system for the classifier to follow.\n\n\nUnlike the above-mentioned work, we propose SmartValidator to utilize NLP and ML techniques to assist in automating the validation of security alerts and incidents. 
To the best of our knowledge, this is the first attempt to use CTI, such as MISP data, to automate the classification and validation of security threat data based on a SOC's preferences. Here, we have investigated how effective ML algorithms are at classifying CTI to assist in alert validation. Unlike the existing works, where possible prediction models are pre-built, we propose to build the models on demand. We have demonstrated the efficiency of constructing prediction models dynamically.\n\\vspace{-10pt}\n\n\\section{Conclusion} \\label{Sec:Conclu}\n\nMany organizations are facing difficulty keeping pace with the changing threat landscape, as security experts need to identify and analyze threat data in most circumstances. Without automation techniques, it is impossible to reduce the burden of analyzing CTI to make timely decisions.\nIn this work, we propose a novel framework, SmartValidator, to build an effective and efficient validation tool using CTI that automates the validation of security alerts and incidents based on a SOC's preferences. In contrast to manual approaches, SmartValidator is designed so that SOCs can add their requirements without worrying about collecting CTI and using it to build a validation model. SmartValidator consists of three layers: threat data collection, threat data prediction model building and threat data validation. Different teams are responsible for updating the components of the different layers, thus freeing security teams from learning data processing and model building techniques. The validation task is designed as a classification problem that leverages existing NLP and ML techniques to extract features from CTI and learn patterns for alert validation. We developed a Proof of Concept (PoC) system to automatically extract features from CTI and build prediction models based on the preferences of SOCs. 
A SOC's preferences are collected as a set of attribute sets:~observed and unknown attributes, where the task of the PoC is to predict the unknown attributes based on the observed attributes.\n\nWe have demonstrated the effectiveness of SmartValidator by predicting \\textit{attack}, \\textit{events}, \\textit{threat type}, \\textit{threat level} and \\textit{name}. It collected and processed data from public websites and MISP. Next, CTI with the preferred attribute sets was selected to build prediction models. Eight ML algorithms were run, and the models with the highest F1-score were selected. The best model was used to predict the unknown attributes and thus validate alerts. The developed PoC constructed validation models, and can be used to validate alerts generated by threat detection tools and to find the missing information needed to store the data in a structured format. The results show that prediction models are effective in validating security alerts. Building prediction models at run time is more efficient than building prediction models for all possible attribute sets and~CTI.\n\nIn future work, we plan to extend the PoC system to reduce the amount of data that is sent to the SIEM tool, thus reducing the cost of data analysis. The system can also be extended to reduce the organizational dependence on human expertise to take actions against security threats, such as blocking ports or identifying the maliciousness of an incident. The proposed framework can assist an organization's security team to focus on decision making, rather than manually extracting and validating security alerts and incidents. 
The framework can also be leveraged to give an organization the flexibility to choose CTI suited to its application rather than using generalized CTI.\n\\vspace{5pt}\n\n\\textbf{Acknowledgements}\n\nThis work is supported by the Cyber Security Cooperative Research Centre (CSCRC), Australia.\n\n\\vspace{-10pt}\n\n\\bibliographystyle{cas-model2-names}\n\n\\section{Introduction}\n\nContinuous first-order logic is a generalization of first-order logic suitable for studying metric structures, which are mathematical structures with an underlying complete metric and with uniformly continuous functions and $\\mathbb{R}$-valued predicates. A rich class of metric structures arises from Banach spaces and expansions thereof. An active area of research in continuous logic is the characterization of inseparably categorical continuous theories.\nFor a general introduction to continuous logic, see \\cite{MTFMS}.\n\n\nIn the present work we will consider expansions of Banach spaces. We introduce the notion of an indiscernible subspace: a subspace in which the types of tuples of elements depend only on their quantifier-free types in the reduct consisting of only the metric and the constant $\\mathbf{0}$. Similarly to indiscernible sequences, indiscernible subspaces are always consistent with a Banach theory (with no stability assumption, see Theorem \\ref{thm:exist}), but are not always present in every model. We will show that an indiscernible subspace always takes the form of an isometrically embedded real Hilbert space wherein the type of any tuple depends only on its quantifier-free type in the Hilbert space. 
The notion of an indiscernible subspace is of independent interest in the model theory of Banach and Hilbert structures, and in particular here we use it to improve the results of Shelah and Usvyatsov in the context of types in the full language (as opposed to $\\Delta$-types). Specifically, in this context we give a shorter proof of Shelah and Usvyatsov's main result \\cite[Prop.\\ 4.13]{SHELAH2019106738}, we improve their result on the strong uniqueness of Morley sequences in minimal wide types \\cite[Prop.\\ 4.12]{SHELAH2019106738}, and we expand on their commentary on the ``induced structure'' of the span of a Morley sequence in a minimal wide type \\cite[Rem.\\ 5.6]{SHELAH2019106738}. This more restricted case is what is relevant to inseparably categorical Banach theories, so our work is applicable to the problem of their characterization.\n\nFinally, we present some relevant counterexamples and in particular we resolve (in the negative) the question of Shelah and Usvyatsov presented at the end of Section 5 of \\cite{SHELAH2019106738}, in which they ask whether or not the span of a Morley sequence in a minimal wide type is always a type-definable set.\n\n\\subsection{Background}\n\nFor $K \\in \\{\\mathbb{R}, \\mathbb{C}\\}$, we think of a $K$-Banach space $X$ as being a metric structure $\\mathfrak{X}$ whose underlying set is the closed unit ball $B(X)$ of $X$ with metric $d(x,y) = \\left\\lVert x - y \\right\\rVert$.\\footnote{For another equivalent approach, see \\cite{MTFMS}, which encodes Banach structures as many-sorted metric structures with balls of various radii as different sorts.} This structure is taken to have for each tuple $\\bar{a} \\in K$ an $|\\bar{a}|$-ary predicate $s_{\\bar{a}}(\\bar{x}) = \\left\\lVert \\sum_{i<|\\bar{a}|} a_i x_i \\right\\rVert$, although we will always write this in the more standard form.\n Note that we evaluate this in $X$ even if $\\sum_{i<|\\bar{a}|} a_i x_i$ is not actually an element of the structure 
$\\mathfrak{X}$. For convenience, we will also have a constant for the zero vector, $\\mathbf{0}$, and an $n$-ary function $\\sigma_{\\bar{a}}(\\bar{x})$ such that $\\sigma_{\\bar{a}}(\\bar{x}) = \\sum_{i<|\\bar{a}|} a_i x_i$ if it is in $B(X)$ and $\\sigma_{\\bar{a}}(\\bar{x}) = \\frac{\\sum_{i<|\\bar{a}|} a_i x_i}{\\left\\lVert \\sum_{i<|\\bar{a}|} a_i x_i \\right\\rVert}$ otherwise. If $|a|\\leq 1$, we will write $ax$ for $\\sigma_{a}(x)$. Note that while this is an uncountable language, it is interdefinable with a countable reduct of it (restricting attention to rational elements of $K$). These structures capture the typical meaning of the ultraproduct of Banach spaces. As is common, we will conflate $X$ and the metric structure $\\mathfrak{X}$ in which we have encoded $X$.\n\n\n\\begin{defn}\nA \\emph{Banach (or Hilbert) structure} is a metric structure which is the expansion of a Banach (or Hilbert) space. A \\emph{Banach (or Hilbert) theory} is the theory of such a structure. The adjectives \\emph{real} and \\emph{complex} refer to the scalar field $K$.\n\\end{defn}\n\n$C^{\\ast}$- and other Banach algebras are commonly studied examples of Banach structures that are not just Banach spaces.\n\n\n\n\n\n\n\nA central problem in continuous logic is the characterization of inseparably categorical countable theories, that is to say countable theories with a unique model in each uncountable density character. The analog of Morley's theorem was shown in continuous logic via related formalisms \\cite{ben-yaacov_2005, Shelah2011}, but no satisfactory analog of the Baldwin-Lachlan theorem or its precise structural characterization of uncountably categorical discrete theories in terms of strongly minimal sets is known. 
Some progress in the specific case of Banach theories has been made in \\cite{SHELAH2019106738}, in which Shelah and Usvyatsov introduce the notion of a wide type and the notion of a minimal wide type, which they argue is the correct analog of strongly minimal types in the context of inseparably categorical Banach theories.\n\n\\begin{defn}\nA type $p$ in a Banach theory is \\emph{wide} if its set of realizations consistently contains the unit sphere of an infinite dimensional real subspace.\n\nA type is \\emph{minimal wide} if it is wide and has a unique wide extension to every set of parameters.\n\\end{defn}\nIn \\cite{SHELAH2019106738}, Shelah and Usvyatsov were able to show that every Banach theory has wide complete types using the following classical concentration of measure result of Dvoretzky and Milman, which Shelah and Usvyatsov refer to as the Dvoretzky-Milman theorem.\n\n\\begin{fact}[Dvoretzky-Milman theorem] \\label{fact:DM-thm}\nLet $(X,\\left\\lVert \\cdot \\right\\rVert)$ be an infinite dimensional real Banach space with unit sphere $S$ and let $f:S \\rightarrow \\mathbb{R}$ be a uniformly continuous function.
For any $k<\\omega$ and $\\varepsilon > 0$, there exists a $k$-dimensional subspace $Y \\subset X$ and a Euclidean norm\\footnote{A norm $\\vertiii{\\cdot}$ is \\emph{Euclidean} if it satisfies the parallelogram law, $2\\vertiii{a}^2 + 2 \\vertiii{b}^2 = \\vertiii{a+b}^2 + \\vertiii{a-b}^2,$ or equivalently if it is induced by an inner product.} $\\vertiii{\\cdot}$ on $Y$ such that for any $a,b \\in S\\cap Y$, we have $\\vertiii{a} \\leq \\left\\lVert a\\right\\rVert \\leq (1 + \\varepsilon)\\vertiii{a}$ and $|f(a) - f(b)| < \\varepsilon$.\\footnote{Fact \\ref{fact:DM-thm} without $f$ is (a form of) Dvoretzky's theorem.}\n\\end{fact}\n\n\nShelah and Usvyatsov showed that in a stable Banach theory every wide type has a minimal wide extension (possibly over a larger set of parameters) and that every Morley sequence in a minimal wide type is an orthonormal basis of a subspace isometric to a real Hilbert space. Furthermore, they showed that in an inseparably categorical Banach theory, every inseparable model is prime over a countable set of parameters and a Morley sequence in some minimal wide type, analogously to how a model of a discrete uncountably categorical theory is always prime over some finite set of parameters and a Morley sequence in some strongly minimal type.\n\nThe key ingredient to our present work is the following result, due to Milman.\n It extends the Dvoretzky-Milman theorem in a manner analogous to the extension of the pigeonhole principle by Ramsey's theorem.\\footnote{The original Dvoretzky-Milman result is often compared to Ramsey's theorem, such as when Gromov coined the term \\emph{the Ramsey-Dvoretzky-Milman phenomenon} \\cite{gromov1983}, but in the context of Fact \\ref{fact:main} it is hard not to think of the $n=1$ case as being analogous to the pigeonhole principle and the $n>1$ cases as being analogous to Ramsey's theorem.}\n\n\\begin{defn}\\label{defn:main-defn}\nLet $(X,\\left\\lVert \\cdot \\right\\rVert)$ be a Banach space.
If $a_0,a_1,\\dots,a_{n-1}$ and $b_0,b_1,\\dots,\\allowbreak b_{n-1}$ are ordered $n$-tuples of elements of $X$, we say that $\\bar{a}$ and $\\bar{b}$ are \\emph{congruent} if $\\left\\lVert a_i - a_j\\right\\rVert=\\left\\lVert b_i - b_j \\right\\rVert$ for all $ i,j \\leq n$, where we take $a_{n}=b_{n}=\\mathbf{0}$. We will write this as $\\bar{a} \\cong \\bar{b}$.\n\\end{defn}\n\n\\begin{fact}[\\cite{zbMATH03376472}, Thm.\\ 3] \\label{fact:main}\nLet $S^\\infty$ be the unit sphere of a separable infinite dimensional real Hilbert space ${H}$ and let $f:(S^\\infty)^n \\rightarrow \\mathbb{R}$ be a uniformly continuous function. For any $\\varepsilon>0$ and any $k<\\omega$ there exists a $k$-dimensional subspace $V$ of $H$ such that for any $a_0,a_1,\\dots,a_{n-1},b_0,b_1,\\dots,b_{n-1}\\in S^\\infty \\cap V$ with $\\bar{a} \\cong \\bar{b}$, $|f(\\bar{a})-f(\\bar{b})| < \\varepsilon$.\n\\end{fact}\n\nNote that the analogous result for inseparable Hilbert spaces follows immediately, by restricting attention to a separable infinite dimensional subspace. Also note that by using Dvoretzky's theorem and an easy compactness argument, Fact \\ref{fact:main} can be generalized to arbitrary infinite dimensional Banach spaces.\n\n\\subsection{Connection to Extreme Amenability}\n\n A modern proof of Fact \\ref{fact:main} would go through the extreme amenability of the unitary group of an infinite dimensional Hilbert space endowed with the strong operator topology, or in other words the fact that any continuous action of this group on a compact Hausdorff space has a fixed point, which was originally shown in \\cite{10.2307\/2374298}. This connection is unsurprising. It is well known that the extreme amenability of $\\mathrm{Aut}(\\mathbb{Q})$ (endowed with the topology of pointwise convergence) can be understood as a restatement of Ramsey's theorem.
It is possible to use this to give a highbrow proof of the existence of indiscernible sequences in any first-order theory $T$:\n \n \\begin{proof}\nFix a first-order theory $T$. Let $Q$ be a family of variables indexed by the rational numbers. The natural action of $\\mathrm{Aut}(\\mathbb{Q})$ on $S_Q(T)$, the Stone space of types over $T$ in the variables $Q$, is continuous and so by extreme amenability has a fixed point. A fixed point of this action is precisely the same thing as the type of a $\\mathbb{Q}$-indexed indiscernible sequence over $T$, and so we get that there are models of $T$ with indiscernible sequences.\n \\end{proof}\n \n \n A similar proof of the existence of indiscernible subspaces in Banach theories (Theorem \\ref{thm:exist}) is possible, but requires an argument that the analog of $S_Q(T)$ is non-empty (which follows from Dvoretzky's theorem) and also requires more delicate bookkeeping to define the analog of $S_Q(T)$ and to show that the action of the unitary group of a separable Hilbert space is continuous. In the end this is more technical than a proof using Fact \\ref{fact:main} directly.\n\n\n\n\\section{Indiscernible Subspaces} \\label{sec:ind-subsp}\n\n\n\n\\begin{defn}\n Let $T$ be a Banach {theory}. Let $\\mathfrak{M}\\models T$ and let $A\\subseteq \\mathfrak{M}$ be some set of parameters.\n An \\emph{indiscernible subspace over $A$} is a real subspace $V$ of $\\mathfrak{M}$ such that for any $n<\\omega$ and any $n$-tuples $\\bar{b},\\bar{c} \\in V$, $\\bar{b} \\equiv_A \\bar{c}$ if and only if $\\bar{b} \\cong \\bar{c}$.\n\n{\\sloppy If $p$ is a type over $A$, then $V$ is an \\emph{indiscernible subspace in $p$ (over $A$)} if it is an indiscernible subspace over $A$ and $b\\models p$ for all $b\\in V$ with $\\left\\lVert b \\right\\rVert = 1$.}\n\n\n\\end{defn}\n\nNote that an indiscernible subspace is a real subspace even if $T$ is a complex Banach theory.
Also note that an indiscernible subspace in $p$ is not literally contained in the realizations of $p$, but rather has its unit sphere contained in the realizations of $p$. It might be more accurate to talk about ``indiscernible spheres,'' but we find the subspace terminology more familiar.\n\nIndiscernible subspaces are very metrically regular.\n\n\\begin{prop}\nSuppose $V$ is an indiscernible subspace in some Banach structure. Then $V$ is isometric to a real Hilbert space.\n\nIn particular, a real subspace $V$ of a Banach structure is indiscernible over $A$ if and only if it is isometric to a real Hilbert space and for every $n<\\omega$ and every pair of $n$-tuples $\\bar{b},\\bar{c}\\in V$, $\\bar{b}\\equiv_A\\bar{c}$ if and only if $\\left\\langle b_i,b_j \\right\\rangle = \\left\\langle c_i,c_j \\right\\rangle$ for all $i,j < n$.\n\\end{prop}\n\\begin{proof}\nFor any real Banach space $W$, if $\\dim W \\leq 1$, then $W$ is necessarily isometric to a real Hilbert space. If $\\dim V \\geq 2$, let $V_0$ be a $2$-dimensional subspace of $V$. A subspace of an indiscernible subspace is automatically an indiscernible subspace, so $V_0$ is indiscernible. For any two distinct unit vectors $a$ and $b$, indiscernibility implies that for any $r,s\\in \\mathbb{R}$, $\\left\\lVert r a + s b\\right\\rVert = \\left\\lVert s a + r b\\right\\rVert$, hence the unique linear map that switches $a$ and $b$ preserves $\\left\\lVert \\cdot \\right \\rVert$. This implies that the automorphism group of $(V_0, \\left\\lVert \\cdot \\right \\rVert)$ is transitive on the $\\left\\lVert \\cdot \\right\\rVert$-unit circle.
By John's theorem on maximal ellipsoids \\cite{MR0030135}, the unit ball of $\\left\\lVert \\cdot \\right \\rVert$ must be an ellipse, so $\\left\\lVert \\cdot \\right \\rVert$ is a Euclidean norm.\n\nThus every $2$-dimensional real subspace of $V$ is Euclidean and so $(V,\\left\\lVert \\cdot \\right \\rVert)$ satisfies the parallelogram law and is therefore a real Hilbert space.\n\nThe `in particular' statement follows from the fact that in a real Hilbert subspace of a Banach space, the polarization identity \\cite[Prop.\\ 14.1.2]{Blanchard2002} defines the inner product in terms of a particular quantifier-free formula:\n\\begin{equation*}\n\\left\\langle x,y \\right\\rangle = \\frac{1}{4}\\left( \\left\\lVert x + y \\right\\rVert ^2 - \\left\\lVert x - y \\right\\rVert^2 \\right).\\footnotemark \\qedhere\n\\end{equation*}\n\\end{proof}\n\\footnotetext{There is also a polarization identity for the complex inner product: $${\\left\\langle x,y \\right\\rangle_{\\mathbb{C}} = \\frac{1}{4}\\left( \\left\\lVert x + y \\right\\rVert ^2 - \\left\\lVert x - y \\right\\rVert^2 + i\\left\\lVert x - iy \\right\\rVert^2 - i \\left\\lVert x + iy \\right\\rVert^2 \\right).}$$}\n\n \n\n\n\n\n\n\n\n\n\\subsection{Existence of Indiscernible Subspaces}\n\n\n\n\nAs mentioned in \\cite[Cor.\\ 3.9]{SHELAH2019106738}, it follows from Dvoretzky's theorem that if $p$ is a wide type and $\\mathfrak{M}$ is a sufficiently saturated model, then $p(\\mathfrak{M})$ contains the unit sphere of an infinite dimensional subspace isometric to a Hilbert space. We refine this by showing that, in fact, an indiscernible subspace can be found. \n\n\\begin{thm} \\label{thm:exist}\nLet $A$ be a set of parameters in a Banach {theory} $T$ and let $p$ be a wide type over $A$. For any $\\kappa$, there is $\\mathfrak{M} \\models T$ and a subspace $V\\subseteq \\mathfrak{M}$ of dimension $\\kappa$ such that $V$ is an indiscernible subspace in $p$ over $A$.
In particular, any $\\aleph_0 + \\kappa+|A|$-saturated $\\mathfrak{M}$ will have such a subspace.\n\\end{thm}\n\\begin{proof}\nFor any set $\\Delta$ of $A$-formulas, call a subspace $V$ of a model $\\mathfrak{N}$ of $T_A$ \\emph{$\\Delta$-indiscernible in $p$} if every unit vector in $V$ models $p$ and for any $n<\\omega$ and any formula $\\varphi \\in \\Delta$ of arity $n$ and any $n$-tuples $\\bar{b},\\bar{c} \\in V$ with $\\bar{b} \\cong \\bar{c}$, we have $\\mathfrak{N}\\models \\varphi(\\bar{b}) = \\varphi(\\bar{c})$.\n\nSince $p$ is wide, there is a model $\\mathfrak{N}\\models T$ containing an infinite dimensional subspace $W$ isometric to a real Hilbert space such that for all $b\\in W$ with $\\left\\lVert b \\right\\rVert = 1$, $b\\models p$. This is an infinite dimensional $\\varnothing$-indiscernible subspace in $p$.\n\nNow for any finite set of $A$-formulas $\\Delta$ and any formula $\\varphi$ of arity $\\ell$, assume that we've shown that there is a model $\\mathfrak{N}\\models T$ containing an infinite dimensional $\\Delta$-indiscernible subspace $V$ in $p$ over $A$. We want to show that there is a model containing an infinite dimensional $\\Delta \\cup \\{\\varphi\\}$-indiscernible subspace in $p$. By Fact \\ref{fact:main}, for every $k<\\omega$ there is a $k$-dimensional subspace $W_{k}\\subseteq V$ such that for any unit vectors $b_0,\\dots,b_{\\ell -1},c_0,\\dots,c_{\\ell-1}$ in $W_{k}$ with $\\bar{b}\\cong\\bar{c}$, we have that $|\\varphi^{\\mathfrak{N}}(\\bar{b})-\\varphi^{\\mathfrak{N}}(\\bar{c})| < 2^{-k}$.
If we let $\\mathfrak{N}_k = (\\mathfrak{N},W_k)$ where we've expanded the language by a fresh predicate symbol $D$ such that $D^{\\mathfrak{N}_k}(x)=d(x,W_k)$, then an ultraproduct of the sequence $\\mathfrak{N}_k$ will be a structure $(\\mathfrak{N}_\\omega,W_\\omega)$ in which $W_\\omega$ is an infinite dimensional Hilbert space.\n\n\\emph{Claim:} $W_\\omega$ is $\\Delta\\cup\\{\\varphi\\}$-indiscernible in $p$.\n\n\\emph{Proof of claim.} Fix an $m$-ary formula $\\psi \\in \\Delta \\cup \\{\\varphi\\}$ and let $f(k)=0$ if $\\psi \\in \\Delta$ and $f(k)=2^{-k}$ if $\\psi = \\varphi$. For any $k \\geq 2m$, fix $b_0,\\dots,b_{m-1},c_0,\\dots,c_{m-1}$ in the unit ball of $W_k$; there is a $2m$-dimensional subspace $W^\\prime \\subseteq W_k$ containing $\\bar{b},\\bar{c}$. By compactness of $B(W^\\prime)^m$ (where $B(X)$ is the unit ball of $X$), we have that for any $\\varepsilon > 0$ there is a $\\delta(\\varepsilon) > 0$ such that if $|\\left\\langle b_i,b_j \\right\\rangle - \\left\\langle c_i,c_j \\right\\rangle| < \\delta(\\varepsilon)$ for all $i,j < m$ then $|\\psi^{\\mathfrak{N}}(\\bar{b})-\\psi^{\\mathfrak{N}}(\\bar{c})| \\leq f(k) + \\varepsilon$. Note that we can take the function $\\delta$ to only depend on $\\psi$, specifically its arity and modulus of continuity, and not on $k$, since $B(W^\\prime)^m$ is always isometric to $B(\\mathbb{R}^{2m})^m$. Therefore, in the ultraproduct we will have that if $|\\left\\langle b_i,b_j \\right\\rangle - \\left\\langle c_i,c_j \\right\\rangle| < \\delta(\\varepsilon)$ for all $i,j < m$, then $|\\psi^{\\mathfrak{N}_\\omega}(\\bar{b})-\\psi^{\\mathfrak{N}_\\omega}(\\bar{c})| \\leq \\varepsilon$, and thus $\\bar{b}\\cong \\bar{c} \\Rightarrow \\psi^{\\mathfrak{N}_\\omega}(\\bar{b}) = \\psi^{\\mathfrak{N}_\\omega}(\\bar{c})$, as required.
\\hfill $\\qed_{\\textit{Claim}}$\n\nNow for each finite set of $A$-formulas we've shown that there's a structure $(\\mathfrak{M}_\\Delta,V_\\Delta)$ (where, again, $V_\\Delta$ is the set defined by the new predicate symbol $D$) such that $\\mathfrak{M}_\\Delta \\models T_A$ and $V_\\Delta$ is an infinite dimensional $\\Delta$-indiscernible subspace in $p$. By taking an ultraproduct with an appropriate ultrafilter we get a structure $(\\mathfrak{M},V)$ where $\\mathfrak{M}\\models T_A$ and $V$ is an infinite dimensional subspace. $V$ is an indiscernible subspace in $p$ over $A$ by the same argument as in the claim.\n\nFinally note that by compactness we can take $V$ to have arbitrarily large dimension and that any subspace of an indiscernible subspace in $p$ over $A$ is an indiscernible subspace in $p$ over $A$, so we get the required result.\n\\end{proof}\n\n\n\n\n\n\nTogether with the fact that wide types always exist in Banach theories with infinite dimensional models \\cite[Thm.\\ 3.7]{SHELAH2019106738}, we get a corollary.\n\n\\begin{cor} \\label{cor:ind-subsp}\nEvery Banach {theory} with infinite dimensional models has an infinite dimensional indiscernible subspace in some model. In particular, every such theory has an infinite indiscernible set, namely any orthonormal basis of an infinite dimensional indiscernible subspace.\n\\end{cor}\n\n\n\\section{Minimal{ }Wide Types}\n\n\n\nCompare the following Theorem \\ref{thm:main} with this fact in discrete logic: If $p$ is a minimal type (i.e.\\ $p$ has a unique global non-algebraic extension), then an infinite sequence of realizations of $p$ is a Morley sequence in $p$ if and only if it is an indiscernible sequence.\n\nHere we are using the definition of Morley sequence for (possibly unstable) $A$-invariant types: Let $p$ be a global $A$-invariant type, and let $B\\supseteq A$ be some set of parameters. 
A sequence $\\{c_i\\}_{i< \\kappa}$ is a \\emph{Morley sequence in $p$ over $B$} if for all $i< \\kappa$, $\\mathrm{tp}(c_i\/Bc_{<i}) = p \\upharpoonright Bc_{<i}$. \nRecall that a linear map $T:X\\rightarrow Y$ between Banach spaces is an \\emph{isomorphism} if it is a continuous bijection. This is enough to imply that $T$ is invertible and that both $T$ and $T^{-1}$ are Lipschitz. An analog of Dvoretzky's theorem for $k \\geq \\omega$ would imply that every sufficiently large Banach space has an infinite dimensional subspace isomorphic to a Hilbert space, which is known to be false. Here we will see a specific example of this.\n\n The following is a well known result in Banach space {theory} (for a proof see the comment after Proposition 2.a.2 in \\cite{Lindenstrauss1996}).\n\n\\begin{fact} \\label{fact:no-no}\nFor any distinct $X,Y \\in \\{\\ell_p: 1\\leq p < \\infty\\} \\cup \\{c_0\\}$, no subspace of $X$ is isomorphic to $Y$.\n\\end{fact}\n\nNote that, whereas Corollary \\ref{cor:ind-subsp} says that every Banach theory is consistent with the partial type of an indiscernible subspace, the following corollary says that this type can sometimes be omitted in arbitrarily large models (contrast this with the fact that the existence of an Erd\\\"os cardinal implies that you can find indiscernible sequences in any sufficiently large structure in a countable language \\cite[Thm.\\ 9.3]{Kanamori2003}).\n\n\\begin{cor} \\label{cor:no-no-cor}\nFor $p \\in [1,\\infty) \\setminus \\{2\\}$, there are arbitrarily large models of $\\mathrm{Th}(\\ell_p)$ that do not contain any infinite dimensional subspaces isomorphic to a Hilbert space. \n\\end{cor}\n\\begin{proof}\nFix $p \\in [1,\\infty) \\setminus \\{2\\}$ and $\\kappa \\geq \\aleph_0$.\n Let $\\ell_p(\\kappa)$ be the Banach space of functions $f:\\kappa \\rightarrow \\mathbb{R}$ such that $\\sum_{i<\\kappa} |f(i)|^p < \\infty$.
Note that $\\ell_p(\\kappa) \\equiv \\ell_p$.\\footnote{To see this, we can find an elementary sub-structure of $\\ell_p(\\kappa)$ that is isomorphic to $\\ell_p$: Let $\\mathfrak{L}_0$ be a separable elementary sub-structure of $\\ell_p(\\kappa)$. For each $i<\\omega$, given $\\mathfrak{L}_i$, let $B_i$ be the set of all $f \\in \\ell_p(\\kappa)$ that are the indicator function of a singleton $\\{j\\}$ for some $j$ in the support of some element of $\\mathfrak{L}_i$. $B_i$ is countable. Let $\\mathfrak{L}_{i+1}$ be a separable elementary sub-structure of $\\ell_p(\\kappa)$ containing $\\mathfrak{L}_i\\cup B_i$. $\\overline{\\bigcup_{i<\\omega}\\mathfrak{L}_{i+1}}$ is equal to the closed span of $\\bigcup_{i<\\omega} B_i$ and so is a separable elementary sub-structure of $\\ell_p(\\kappa)$ isomorphic to $\\ell_p$.}\n Pick an infinite dimensional subspace $V \\subseteq \\ell_p(\\kappa)$. If $V$ is isomorphic to a Hilbert space, then any infinite dimensional separable subspace $V_0 \\subseteq V$ will also be isomorphic to a Hilbert space. There exists a countable set $A \\subseteq \\kappa$ such that $V_0 \\subseteq \\ell_p(A) \\subseteq \\ell_p(\\kappa)$. By Fact \\ref{fact:no-no}, $V_0$ is not isomorphic to a Hilbert space, which is a contradiction. Thus no such $V$ can exist.\n\\end{proof}\n\n\n\nEven assuming we start with a Hilbert space we do not get an analog of the infinitary pigeonhole principle (i.e.\\ a generalization of Fact \\ref{fact:DM-thm}).
The discussion by H\\'ajeck and Mat\\v ej in \\cite[after Thm.\\ 1]{Hajek2018} of a result of Maurey \\cite{Maurey1995} implies that there is a Hilbert theory $T$ with a unary predicate $P$ such that for some $\\varepsilon>0$ there are arbitrarily large models $\\mathfrak{M}$ of $T$ such that for any infinite dimensional subspace $V \\subseteq \\mathfrak{M}$ there are unit vectors $a,b\\in V$ with $|P^{\\mathfrak{M}}(a)-P^{\\mathfrak{M}}(b)| \\geq \\varepsilon$.\n\nStability of a theory often has the effect of making Ramsey phenomena more prevalent in its models, so there is a natural question as to whether anything similar will happen here. Recall that a function $f:S(X)\\rightarrow \\mathbb{R}$ on the unit sphere $S(X)$ of a Banach space $X$ is \\emph{oscillation stable} if for every infinite dimensional subspace $Y \\subseteq X$ and every $\\varepsilon>0$ there is an infinite dimensional subspace $Z \\subseteq Y$ such that for any $a,b\\in S(Z)$, $|f(a)-f(b)|\\leq \\varepsilon$.\n\n\n\\begin{quest}\nDoes (model theoretic) stability imply oscillation stability? That is to say, if $T$ is a stable Banach theory, is every unary formula oscillation stable on models of $T$?\n\\end{quest}\n\n\\subsection{The (Type-)Definability of Indiscernible Subspaces and Complex Banach Structures} \\label{subsec:comp}\n\nA central question in the study of inseparably categorical Banach space theories is the degree of definability of the `minimal Hilbert space' that controls a given inseparable model of the theory. Results of Henson and Raynaud in \\cite{HensonRaynaud} imply that in general the Hilbert space may not be definable. In \\cite{SHELAH2019106738}, Shelah and Usvyatsov ask whether or not the Hilbert space can be taken to be type-definable or a zeroset. 
In Example \\ref{ex:no-def} we present a simple, but hopefully clarifying, example showing that this is slightly too much to ask.\n\nIt is somewhat uncomfortable that even in complex Hilbert structures we are only thinking about \\emph{real} indiscernible subspaces rather than \\emph{complex} indiscernible subspaces. One problem is that Ramsey-Dvoretzky-Milman phenomena only deal with real subspaces in general. The other problem is that Definition \\ref{defn:main-defn} is incompatible with complex structure:\n\n\\begin{prop} \\label{prop:no-comp}\nLet $T$ be a complex Banach theory. Let $V$ be an indiscernible subspace in some model of $T$. For any non-zero $a\\in V$ and $\\lambda \\in \\mathbb{C} \\setminus \\{0\\}$, if $\\lambda a \\in V$, then $\\lambda \\in \\mathbb{R}$.\n\\end{prop}\n\\begin{proof}\nAssume that for some non-zero vector $a$, both $a$ and $ia$ are in $V$. We have that $(a,ia)\\equiv(ia,a)$, but $(a,ia)\\models d(ix,y)=0$ and $(ia,a)\\not\\models d(ix,y)=0$, which contradicts indiscernibility. Therefore we cannot have that both $a$ and $ia$ are in $V$. The same statement for $a$ and $\\lambda a$ with $\\lambda \\in \\mathbb{C}\\setminus \\mathbb{R}$ follows immediately, since $a,\\lambda a \\in V \\Rightarrow ia \\in V$. \n\\end{proof}\n\nIn the case of complex Hilbert space and other Hilbert spaces with a unitary Lie group action, this is the reason that indiscernible subspaces can fail to be type-definable. We will explicitly give the simplest example of this.\n\n\n\n\n\\begin{ex} \\label{ex:no-def}\nLet $T$ be the theory of an infinite dimensional complex Hilbert space and let $\\mathfrak{C}$ be the monster model of $T$.
$T$ is inseparably categorical, but for any partial type $\\Sigma$ over any small set of parameters $A$, $\\Sigma(\\mathfrak{C})$ is not an infinite dimensional indiscernible subspace (over $\\varnothing$).\n\n\\end{ex}\n\\begin{proof}\n$T$ is clearly inseparably categorical by the same reasoning that the theory of real infinite dimensional Hilbert spaces is inseparably categorical (being an infinite dimensional complex Hilbert space is first-order and there is a unique infinite dimensional complex Hilbert space of each infinite density character). \n\nIf $\\Sigma(\\mathfrak{C})$ is not an infinite dimensional subspace of $\\mathfrak{C}$, then we are done, so assume that $\\Sigma(\\mathfrak{C})$ is an infinite dimensional subspace of $\\mathfrak{C}$. Let $\\mathfrak{N}$ be a small model containing $A$. Since $\\mathfrak{N}$ is a subspace of $\\mathfrak{C}$, $\\Sigma(\\mathfrak{N}) = \\Sigma(\\mathfrak{C})\\cap \\mathfrak{N}$ is a subspace of $\\mathfrak{N}$. Let $v \\in \\Sigma(\\mathfrak{C})\\setminus \\Sigma(\\mathfrak{N})$. This implies that $v\\in \\mathfrak{C} \\setminus \\mathfrak{N}$, so we can write $v$ as $v_\\parallel+ v_\\perp$, where $v_\\parallel$ is the orthogonal projection of $v$ onto $\\mathfrak{N}$ and $v_\\perp$ is complex orthogonal to $\\mathfrak{N}$. Necessarily we have that $v_\\perp \\neq 0$. Let $\\mathfrak{N}^\\perp$ be the orthocomplement of $\\mathfrak{N}$ in $\\mathfrak{C}$. If we write elements of $\\mathfrak{C}$ as $(x,y)$ with $x\\in \\mathfrak{N}$ and $y\\in \\mathfrak{N}^\\perp$, then the maps $(x,y)\\mapsto (x,-y)$, $(x,y)\\mapsto (x,iy)$, and $(x,y)\\mapsto(x,-iy)$ are automorphisms of $\\mathfrak{C}$ fixing $\\mathfrak{N}$. 
Therefore $(v_\\parallel + v_\\perp) \\equiv_{\\mathfrak{N}} (v_\\parallel - v_\\perp) \\equiv_{\\mathfrak{N}} (v_\\parallel + iv_\\perp) \\equiv_{\\mathfrak{N}} (v_\\parallel -iv_\\perp)$, so we must have that $(v_\\parallel - v_\\perp),(v_\\parallel + iv_\\perp),( v_\\parallel - iv_\\perp) \\in \\Sigma(\\mathfrak{C})$ as well. Since $\\Sigma(\\mathfrak{C})$ is a subspace, we have that $v_\\perp \\in \\Sigma(\\mathfrak{C})$ and $iv_\\perp \\in \\Sigma(\\mathfrak{C})$. Thus by Proposition \\ref{prop:no-comp} $\\Sigma(\\mathfrak{C})$ is not an indiscernible subspace over $\\varnothing$.\n\\end{proof}\n\nThis example is a special case of this more general construction: If $G$ is a compact Lie group with an irreducible unitary representation on $\\mathbb{R}^n$ for some $n$ whose action is transitive on the unit sphere, then we can extend this action to $\\ell_2$ by taking the Hilbert space direct sum of countably many copies of the irreducible unitary representation of $G$, and we can think of this as a structure by adding function symbols for the elements of $G$.\nThe theory of this structure will be totally categorical and satisfy the conclusion of Example \\ref{ex:no-def}.\n\nExample \\ref{ex:no-def} is analogous to the fact that in many strongly minimal theories the set of generic elements in a model is not itself a basis\/Morley sequence.
The immediate response would be to ask the question of whether or not the unit sphere of the complex linear span (or more generally the `$G$-linear span,' i.e.\\ the linear span of $G\\cdot V$) of the indiscernible subspace in a minimal{ }wide type agrees with the set of realizations of that minimal{ }wide type, but this can overshoot:\n\n\\begin{ex} \\label{ex:bad-comp}\nConsider the structure whose universe is (the unit ball of) $\\ell_2 \\oplus \\ell_2$ (where we are taking $\\ell_2$ as a real Hilbert space), with a complex action $(x,y)\\mapsto (-y,x)$ and orthogonal projections $P_0$ and $P_1$ for the sets $\\ell_2 \\oplus \\{\\mathbf{0}\\}$ and $\\{\\mathbf{0}\\} \\oplus \\ell_2$, respectively. Let $T$ be the theory of this structure. This is a totally categorical complex Hilbert structure, but for any complete type $p$ and $\\mathfrak{M}\\models T $, $p(\\mathfrak{M})$ does not contain the unit sphere of a non-trivial complex subspace.\n\\end{ex}\n\\begin{proof}\n$T$ is bi-interpretable with a real Hilbert space, so it is totally categorical. For any complete type $p$, there are unique values of $\\left\\lVert P_0(x) \\right\\rVert$ and $\\left\\lVert P_1(x) \\right\\rVert$ that are consistent with $p$, so the set of realizations of $p$ in any model cannot contain $\\{\\lambda a\\}_{\\lambda \\in \\mathrm{U}(1)}$ for $a$, a unit vector, and $\\mathrm{U}(1) \\subset \\mathbb{C}$, the set of unit complex numbers. \n\\end{proof}\n\nThe issue, of course, being that, while we declared by fiat that this is a complex Hilbert structure, the expanded structure does not respect the complex structure.\n\nSo, on the one hand, Example \\ref{ex:bad-comp} shows that in general the unit sphere of the complex span won't be contained in the minimal{ }wide type. On the other hand, a priori the set of realizations of the minimal{ }wide type could contain more than just the unit sphere of the complex span, such as if we have an $\\mathrm{SU}(n)$ action. 
The complex (or $G$-linear) span of a set is of course part of the algebraic closure of the set in question, so this suggests a small refinement of the original question of Shelah and Usvyatsov:\n\n\\begin{quest}\nIf $T$ is an inseparably categorical Banach {theory}, $p$ is a minimal{ }wide type, and $\\mathfrak{M}$ is a model of $T$ which is prime over an indiscernible subspace $V$ in $p$, does it follow that $p(\\mathfrak{M})$ is the unit sphere of a subspace contained in the algebraic closure of $V$?\n\\end{quest}\n\nThis would be analogous to the statement that if $p$ is a strongly minimal type in an uncountably categorical discrete theory and $\\mathfrak{M}$ is a model prime over a Morley sequence $I$ in $p$, then $p(\\mathfrak{M})\\subseteq \\mathrm{acl}(I)$.\n\n\n\n\n\n\n\n\n\n\\subsection{Non-Minimal Wide Types}\n\nThe following example shows, unsurprisingly, that Theorem \\ref{thm:main} does not hold for non-minimal{ }wide types.\n\n\\begin{ex}\nLet $T$ be the theory of (the unit ball of) the infinite Hilbert space sum $\\ell_2 \\oplus \\ell_2 \\oplus \\dots$, where we add a predicate $D$ that is the distance to $S^\\infty \\sqcup S^\\infty \\sqcup \\dots$, where $S^\\infty$ is the unit sphere of the corresponding copy of $\\ell_2$. This theory is $\\omega$-stable. The partial type $\\{D = 0\\}$ has a unique global non-forking extension $p$ that is wide, but the unit sphere of the linear span of any Morley sequence in $p$ is not contained in $p(\\mathfrak{C})$.\n\\end{ex}\n\\begin{proof}\nThis follows from the fact that on $D$ the equivalence relation `$x$ and $y$ are contained in a common unit sphere' is definable by a formula, namely \\[E(x,y) = \\inf_{z,w \\in D}\\left[(d(x,z)\\dotdiv 1) + (d(z,w)\\dotdiv 1) + (d(w,y)\\dotdiv 1)\\right],\\]\nwhere $a \\dotdiv b = \\max\\{a-b,0\\}$. If $x,y$ are in the same sphere, then let $S$ be a great circle passing through $x$ and $y$ and choose $z$ and $w$ evenly spaced along the shorter path of $S$.
It will always hold that $d(x,z),d(z,w),d(w,y) \\leq 1$, so we will have $E(x,y)=0$. On the other hand, if $x$ and $y$ are in different spheres, then $E(x,y)= \\sqrt{2} -1$.\n\nTherefore a Morley sequence in $p$ is just any sequence of elements of $D$ which are pairwise non-$E$-equivalent and the unit sphere of the span of any such set is clearly not contained in $D$.\n\\end{proof}\n\n\n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nRecurrent neural networks (RNNs) and their variants like long short-term memory (LSTM) have been empirically proven to be quite successful in structured prediction applications such as machine translation \\cite{nips-sutskever:14}, Chatbot \\cite{arxiv-vinyals:15}, parsing \\cite{emnlp-ballesteros:16}, summarization \\cite{emnlp-rash:15} and image caption generation \\cite{cvpr-vinyals:15} due to their capability to bridge arbitrary time lags. \nBy far the most popular strategy to train RNNs is via the maximum likelihood principle, which consists of maximizing the probability of the next word in the sequence given the current (recurrent) state and the previous ground truth word (also known as \\emph{teacher forcing}). \nAt inference time, the ground truth previous words are unknown and are replaced by words predicted by the model itself.\n\n\\begin{figure}[t]\n \\small\n \\centering\n \\includegraphics[width=7.5cm]{sequence.pdf}\n \\caption{An example training sentence. For each time step, there is a single ground truth word (highlighted in grey) and multiple semantically and syntactically similar words (highlighted in orange). In our approach, the weighted embedding of those words at the time step $t$ is fed into the network at the next time step. Such a design makes it possible for the model to explore more feasible combinations (e.g.
``I get few red graphs'' or ``We have some yellow peaches''), and can be considered as an approximation to the beam search in a less computationally demanding way.}\n \\label{fig:sequence}\n\\end{figure}\n\nThe models trained by such a teacher forcing strategy suffer from at least three drawbacks. \nFirst, the discrepancy between training and inference, called \\emph{exposure bias} \\cite{iclr-ranzato:16}, can yield errors because the model is only exposed to the distribution of training data, instead of its own predictions at inference. \nAs a result, the errors can accumulate quickly along the sequence to be generated. \nSecond, the training loss for RNNs is usually defined at the word level using maximum likelihood estimation (MLE), while their performance is typically evaluated using discrete and non-differentiable sequence-level metrics, such as BLEU \\cite{acl-papineni:02} or ROUGE \\cite{acl-lin:04}. \nWe call this \\emph{evaluation bias}. \nThird, the whole training process is not \\emph{fully differentiable} because the chosen input at each time step prevents the gradients from back-propagating to all previous states in an end-to-end manner. \nAlthough Goodfellow et al \\shortcite{book-goodfellow:16} pointed out that the underlying graphical model of these RNNs is a complete graph, and no conditional independence assumption is made to simplify the sequence prediction, the individual output probabilities might still be conditioned on the last few predictions if we use the teacher forcing regimen \\cite{iclr-leblond:18}.
\nNote that the chain rule can no longer be applied once an incorrect prediction is replaced with its ground truth, which prevents the model from capturing long-term dependencies and recovering the probability of the whole sequence.\n\nMany approaches have been proposed to deal with the first two drawbacks of training RNNs by scheduled sampling \cite{bengio2015scheduled}, adversarial domain adaptation \cite{nips-goyal:16}, reinforcement learning \cite{iclr-ranzato:16,iclr-bahdanau:17}, or learning to search (L2S) \cite{emnlp-wiseman:16,iclr-leblond:18}, while little attention has been given to tackling the last one. \nA fully differentiable solution is capable of back-propagating the gradients through the entire sequence, which alleviates the discrepancy between training and inference by feeding RNNs with the same form of inputs, and gently bridges the gap between the training loss defined at each word and the evaluation metrics derived from the whole sequence. \nThanks to the chain rule for differentiation, the training loss of a differentiable solution is indeed a sequence-level loss that involves the joint probability of words in a sequence. \n\nWe propose a fully differentiable training algorithm for RNNs to address the issues discussed above. \nThe key idea is that at the time step $t+1$, the network takes as input a ``bundle'' of predicted words at the previous time step $t$ instead of a single ground truth. 
\nIntuitively, the input words at each time step need to be similar in terms of semantics and syntax, and they can replace each other without harming the structure of sentences (see Figure \ref{fig:sequence}).\nThe mixture of the similar words can be represented by a convex hull formed by their representations, which could be viewed as regularization to the input of recurrent neural networks.\nThis design makes it possible for the model to explore more feasible combinations (possibly unseen sequences), and can be interpreted as a computationally efficient approximation to the beam search.\n\nWe also want the number of input words to vary across time steps, unlike the end-to-end algorithm proposed by Ranzato et al \shortcite{iclr-ranzato:16}, in which they propagated as input the predicted top-$k$ words, and the value of $k$ is fixed in advance.\nAlthough different numbers of $k$ can be tested to determine the optimal value, the number of proper words that can be used heavily depends on the contexts in which they occur. \nFor a given $k$, we could introduce unnecessary noise if more words are taken as input, whereas we may prevent the model from exploring possible sequences if fewer words are involved. \nIn our architecture, an attention mechanism \cite{nips-Vaswani:17} is applied to select candidate words, and their weighted embeddings are fed into the network according to their attention scores. \nSmoothing the input in this way makes the whole process trainable and differentiable by using standard back-propagation. \n\n\n\section{Related Work}\n\nUnlike Conditional Random Fields \cite{icml-Lafferty:01} and other models that assume independence between outputs at different time steps, RNNs and their variants are capable of representing the probability distribution of sequences in a relatively general form. 
However, the most popular strategy for training RNNs, known as ``teacher forcing'', takes the ground truth as input at each time step, and makes the later predictions partly conditioned on those inputs being fed back into the model. Such a training strategy impairs their ability to learn rich distributions over entire sequences. Although some training methods, such as scheduled sampling \cite{bengio2015scheduled} and ``professor forcing'' \cite{nips-goyal:16}, are proposed to encourage the states of recurrent networks to be as similar as possible between training and inference over multiple time steps, the neural networks using greedy predictions (top-$1$) by the arguments of the maxima (abbreviated argmax) are not fully differentiable. Discrete categorical variables are involved in those greedy predictions, and become an obstacle to the efficient computation of parameter gradients.\n\nPerhaps the most natural and na\"{i}ve approach towards differentiable training is that at time step $t + 1$ we take as input all the words whose contributions are weighted by their scores instead of the ground truth word, where the scores are the predicted distribution over words from the previous time step $t$. However, the output distribution at each time step is normally not sparse enough, and thus the input may be blurred by a large number of words semantically far from the ground truth. A simple remedy would be the end-to-end algorithm proposed by Ranzato et al \shortcite{iclr-ranzato:16}, in which they propagated as input the top $k$ words predicted at the previous time step. The $k$ largest scoring words are weighted by their re-normalized scores (summing to one). Although different numbers of $k$ can be tested to choose the optimal value, the value of $k$ is fixed at each time step. It would be better to learn to determine a particular $k$ for each time step, depending on a specific context. 
For a fixed $k$, we would introduce unnecessary noise if too many words are involved, whereas we may prevent the model from exploring possible sequences if too few words are chosen.\n\nJang et al \shortcite{jang2017categorical} proposed Gumbel-Softmax, which can make the output distributions over words more sparse by approximating discrete one-hot categorical distributions with their continuous analogues. Such an approximate sampling mechanism for categorical variables can be integrated into RNNs, and trained using standard back-propagation by replacing categorical samples with Gumbel-Softmax ones. Theoretically, as the temperature $\tau$ of the Gumbel-Softmax distribution approaches $0$, samples from this distribution become one-hot. In practice, they begin with a high temperature and anneal to a small but non-zero one. The temperature, however, cannot be too close to $0$ (usually $\tau \ge 0.5$) because the variance of the gradients will be very large, which may result in gradient explosion.\n\nIdeas coming from learning to search \cite{emnlp-wiseman:16,iclr-leblond:18} or reinforcement learning, such as REINFORCE \cite{iclr-ranzato:16} and ACTOR-CRITIC \cite{iclr-bahdanau:17}, have been used to derive training losses that are more closely related to the sequence-level evaluation metrics that we actually want to optimize. Those approaches sidestep the issues associated with the discrete nature of the optimization by not requiring losses to be differentiable. While those approaches appear to be well suited to tackle the training problem of RNNs, they suffer from a very large action space, which makes them difficult to learn or search efficiently, and thus slow to train, especially for natural language processing tasks, which normally have a large vocabulary size. We reduce the effect of search error by pursuing a few next-word candidates at each time step in a pseudo-parallel way. 
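The Gumbel-Softmax relaxation discussed above can be sketched in a few lines (a minimal numpy illustration with toy sizes; this is not the implementation used by any of the cited works):

```python
import numpy as np

def gumbel_softmax(logits, tau, rng):
    # Perturb the logits with Gumbel(0, 1) noise and apply a
    # temperature-scaled softmax; as tau -> 0 the sample approaches
    # one-hot, at the cost of higher-variance gradients.
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    z = (logits + gumbel) / tau
    z = z - z.max()                  # numerical stability
    expz = np.exp(z)
    return expz / expz.sum()

rng = np.random.default_rng(0)
logits = np.log(np.array([0.6, 0.3, 0.1]))
soft = gumbel_softmax(logits, tau=1.0, rng=rng)   # smooth, high entropy
hard = gumbel_softmax(logits, tau=0.1, rng=rng)   # much closer to one-hot
```

The lower bound on $\tau$ mentioned above corresponds to never letting the division by `tau` blow up the scale of the perturbed logits.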
We note that those ideas and ours are complementary to each other and can be combined to improve performance further, although the extent of the improvements will be task dependent.\n\n\\section{Methodology}\n\nWe propose a fully differentiable method to train a sequence model, which allows the gradients to back-propagate through the entire sequence. \nFollowing \\citep{zhang2016generating,jang2017categorical,goyal2017differentiable}, the network takes as input the weighted sum of word embeddings at each time step, where the weights reflect how likely they are chosen in the previous step. \nThe following discussion is based on a general architecture that can be viewed as a variational encoder-decoder with an attention mechanism \\citep{bahdanau2014neural, kingma2014auto-encoding, bowman2016generating}. \nThe teacher forcing method, denoted as VAE-RNNSearch, is implemented with the above architecture, while we extend this architecture with a lexicon-based memory (Figure \\ref{fig:reg-model} shows the details, where the margin relaxed component should be ignored at present), called \\textbf{W}ord \\textbf{E}mbedding \\textbf{A}s \\textbf{M}emory (WEAM).\n\nGiven a source sentence $s_{1:M}$, $s_i \\in \\mathcal{V}^s$ where $\\mathcal{V}^s$ is the vocabulary of source language, a multi-layer BiLSTM encoder $f^s(\\cdot)$ is used to compute a high-level contextual word representation $e_i \\in \\mathbb{R}^d$ for each word $s_i$, where $d$ is the dimensionality of the embedded vector space,\n\\begin{equation} \\small\n e_{1:M} = f^s(s_{1:M})\n\\end{equation}\nwhich is also the entries to be attended by the decoder. \nThe final state of BiLSTM is used to compute the mean and the log variance of the latent Gaussian distribution $\\mathcal{N}(\\mu, \\sigma^2)$ in VAE with a linear layer. \n\nIn the decoding process, a seed $c$ sampled for the latent Gaussian distribution is fed into a multi-layer LSTM decoder $f^t(\\cdot)$ with multi-hop attention. 
\nThe decoder aims at predicting the distributions $p_{1:N}$ of target words $t_{1:N}$ conditioned on the seed $c$ and the encoder output $e_{1:M}$,\n\begin{equation} \small\n\label{eq:decoder}\n p_{1:N}=f^t(c, e_{1:M})\n\end{equation}\nwhere each $p_j\in [0, 1]^{|\mathcal{V}^t|}$ is the probability distribution of the $j$-th target word, and $\mathcal{V}^t$ is the vocabulary of the target language.\nThe words with the maximum probability are the predictions $\tilde{t}_{1:N}$.\n\nSpecifically, we explain the decoding process in a recurrent manner. \nAbove all, the word embedding matrix $\mathcal{M} \in \mathbb{R}^{|\mathcal{V}^t| \times d}$ is considered as a memory.\nThe recurrent function $g(\cdot)$ is a multi-layer LSTM cell incorporated with multi-hop attention over the encoder output $e_{1:M}$. \nAt timestamp $j$, the recurrent function $g(\cdot)$ outputs a hidden representation $h_{j+1}$ given the previous state $h_{j}$ and the current word embedding $v_j$, and the hidden state is further used to compute the likelihood of the predictions\n\begin{equation} \small\n h_{j+1} = g(h_j, v_j)\n\end{equation}\n\begin{equation} \small\n p_{j+1} = \text{softmax}(\mathcal{M}h_{j+1}) \n\end{equation}\nFor the starting timestamp, $h_0$ is defined as the seed $c$, and $v_0$ is the word embedding of the end-of-sentence token. \nDuring the training process, $v_j$ used in VAE-RNNSearch is the ground truth word embedding $\mathcal{M}(t_j)$ and that in WEAM is approximated by attention over word embeddings. \nThe attention score here can be obtained by reusing the predicted distribution of the next word, namely $p_j(\cdot)$. 
\nThus the input word embedding $v_j$ is approximated by\n\begin{equation} \small\n\label{eq:embed_atten}\n v_j = p_j\mathcal{M} = \sum_{w\in\mathcal{V}^t}{p_j(w)\mathcal{M}(w)}\n\end{equation}\nFrom now on, we use ground truth words to denote words fed into the network and target words to denote objective ones for the purpose of distinguishing their functions, although they refer to the same sequence.\n\nFinally, after the entire sequence is generated, the objective function of WEAM is computed from the predicted probability $p_j(\cdot)$ and the target sequence $t_{1:N}$ as well as the latent Gaussian distribution, \n\begin{equation} \small\n\begin{split}\n\label{eq:loss}\n L(\theta) = & -\sum_{j=1..N}{\log p_j(t_j)} \\\\\n & + D_{KL}(\mathcal{N}(\mu, \sigma^2) || \mathcal{N}(0,1))\n\end{split}\n\end{equation}\nIn Equation \ref{eq:loss}, the first term is the cross-entropy between the predicted probability and the target one, and the second term is the KL divergence between the approximate latent Gaussian distribution and a prior normal distribution. \nEquipped with WEAM, the training process of text generation is fully differentiable.\n\nAt inference time, instead of using the attention-weighted embedding in Equation \ref{eq:embed_atten}, we found it better to feed an exact word embedding into the neural network at each timestamp. 
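The approximation in Equation \ref{eq:embed_atten} is just a matrix-vector product between the predicted distribution and the embedding memory; a minimal numpy sketch with toy sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 5, 3                      # toy vocabulary size and embedding dim
M = rng.normal(size=(V, d))      # word embedding memory

def soft_input(p, M):
    # v_j = sum_w p_j(w) M(w): the expected embedding under p_j,
    # which always lies in the convex hull of the rows of M.
    return p @ M

p = np.array([0.7, 0.1, 0.1, 0.05, 0.05])   # predicted distribution p_j
v = soft_input(p, M)

# A one-hot p recovers exactly one row of M, so teacher forcing is the
# degenerate case of this soft input.
one_hot = np.eye(V)[2]
```

Because a one-hot distribution reduces the soft input to a single ground-truth embedding, the same decoder can be warmed up with teacher forcing and then switched to the soft input without any architectural change.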
\nAlthough the word embedding memory regularizes $v_{j+1}$ to lie in a convex hull of word embeddings, in contrast to $h_j$ (Figure \ref{fig:reg-weam}), there is still a difference from the representations of genuine words in the vocabulary, which is a finite set.\nThe bias could lead to minor mistakes, which accumulate as the prediction process goes on and are potentially harmful to future predictions.\nThus feeding an exact word embedding in inference would be helpful in generating a sequence by regularizing the input embedding and erasing potential mistakes.\n\n\begin{figure}[t]\n \centering\n \includegraphics[width=7.5cm]{model.pdf}\n \caption{\small{M-WEAM architecture. It is based on a variational encoder-decoder with attention mechanism. (a) performs an attention operation over all the word embeddings to produce their probabilities (reflecting how likely they are to be chosen); (b) retrieves the words according to their probabilities (masked to make it sparse). By applying a margin relaxed method, we rule out the words having lower probabilities than a threshold, and the remaining values are normalized to estimate the distribution of input words at the next time step.}}\n \label{fig:reg-model}\n\end{figure}\n\n\subsection{Sparse Word Distributions}\n\nAs discussed above, feeding an exact word embedding is a helpful property in inference, which is equivalent to setting a sparse distribution $p_j(\cdot)$.\nHowever, the WEAM model suffers from the long tail effect caused by the non-sparse $p_j(\cdot)$.\nMost words in the vocabulary are predicted with a very small but non-negligible probability, leading to a noisy $v_j$.\n\nA widely-used method to obtain a sparse distribution is rescaling the logits before presenting them to the final-layer classifier, equivalently\n\begin{equation} \small\n\label{eq:rescaled-logit}\n \tilde{p}_j(k) = p_j(k)^{1\/\tau}\n\end{equation}\nwhere $\tau$ is a rescaling factor decaying from one to zero \citep{zhang2016generating, 
jang2017categorical}. \nIdeally, when $\tau$ converges to zero, the distribution $\tilde{p}_j(\cdot)$ converges to a categorical one.\nHowever, in practice, $\tau$ cannot be set to a small value, which could lead to gradient explosion. \nThus an alternative strategy is setting a lowerbound for $\tau$, which is empirically $0.5$ in \citep{jang2017categorical}. \nThis value of $\tau$ is far from the theoretically optimal value (i.e. $0$), so a truly sparse categorical distribution can hardly be achieved in practice. \n\n\begin{figure}[t]\n \centering\n \begin{subfigure}{0.23\textwidth}\n \includegraphics[width=\textwidth]{fullspace.png}\n \subcaption{WEAM}\n \label{fig:reg-weam}\n \end{subfigure}\n \begin{subfigure}{0.23\textwidth}\n \includegraphics[width=\textwidth]{subspace.png}\n \subcaption{M-WEAM}\n \label{fig:reg-rweam}\n \end{subfigure}\n \caption{The difference between (a) WEAM and (b) M-WEAM in the way they form the subspace in a convex hull defined by all the word embeddings, which can be viewed as a kind of regularization. Blue points represent all the words in a vocabulary, whereas yellow ones are the unmasked words in M-WEAM. The simplex is the convex hull defined by the unmasked words chosen by the results of the lexicon-based attention.}\n\label{fig:reg}\n\end{figure}\n\nWe propose a margin relaxed method to make a sparse distribution through restricting $v_j$ to a relatively smaller subspace (see Figure \ref{fig:reg-rweam}). \nThe architecture of the M-WEAM model is shown in Figure \ref{fig:reg-model}. 
\nGiven a distribution $p_j(\cdot)$, its sparse version $\tilde{p}_j(\cdot)$ is computed by ruling out the words with low probability,\n\begin{equation} \small\n \tilde{p}_j(w) = \frac{p_j(w)mask_j(w)}{\sum_{w'\in\mathcal{V}^t}{p_j(w')mask_j(w')}}\n\end{equation}\n\begin{equation} \small\n mask_j(w) = \begin{cases} 1 & p_j(w) > \eta_j \\\\\n 0 & otherwise\n \end{cases}\n\end{equation}\nwhere $\eta_j$ is a threshold and could be determined in various ways.\nIts motivation is to alleviate the long tail effect by masking the ``long tail'' entries with close-to-zero probability.\nA possible choice is to define the threshold $\eta_j$ as \n\begin{equation} \small\n\label{eq:margin}\n \eta_j=e^{-\epsilon}\max_{k}p_j(k)\n\end{equation}\nwhere $\epsilon$ is a margin in log space.\nAs $\epsilon$ converges to $0$, $\tilde{p}_j(\cdot)$ converges to a categorical distribution as well.\nHowever, we do not intend to anneal $\epsilon$ to zero, aiming to keep the mixed meaning of multiple semantically and syntactically similar words in the training process.\nIn vector space, this mixed meaning is represented as the convex hull shown in Figure \ref{fig:reg-rweam}.\nAll the words forming this convex hull are reasonable predictions, and their representations are optimized in the training.\nIf $\epsilon$ becomes very small, there is only one word, namely one point, to be optimized, leading to slower convergence.\nBesides, an input representation $v_j$ generated from a subspace would be more robust than that assigned to only one point, preventing the model from overfitting to a specific point.\nThe optimization over a subspace could be considered as a balance between that over a point and that over the whole space in Figure \ref{fig:reg-weam}.\n\nThe re-estimated probability $\tilde{p}(\cdot)$ can also be used in the objective function in Equation \ref{eq:loss}, following the ``enough is enough'' principle.\nNote that the training objective is discriminating a target word from the others 
in the vocabulary by increasing the margin between them.\nAs training progresses, the margin becomes larger, implying that the target word can be easily picked out from the vocabulary. \nContinuing to optimize the margin between the target word and all the others would be useless, and easily leads to overfitting.\nThe ``enough is enough'' principle means that a word requires no further optimization when its likelihood is much lower than the target one. \nThus the re-estimated probability $\tilde{p}(\cdot)$ is used in the loss function to prevent the words with low probability from being over-optimized.\n\nWe denote the vanilla non-sparse version as WEAM, the one rescaling logits in Equation \ref{eq:rescaled-logit} as R-WEAM, and the proposed margin relaxed one as M-WEAM. Note that R-WEAM is slightly different from that in \citep{zhang2016generating, jang2017categorical} since the word classifier and word embeddings are tied \cite{inan2017tying}. \n\n\subsection{Warm-up Strategy}\n\nAt the beginning of the training, the words generated by WEAM models are almost random. \nConditioned on such randomly predicted words, the model fails to benefit from further training.\nIn order to alleviate this problem, the models are warmed up using the teacher forcing strategy. 
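As a concrete illustration of the margin relaxed sparsification of the previous subsection (the mask with the threshold of Equation \ref{eq:margin}, followed by renormalization), with toy numbers:

```python
import numpy as np

def margin_relaxed(p, eps=1.0):
    # Mask entries below eta = exp(-eps) * max(p), then renormalize;
    # this is the sparsification step of M-WEAM.
    eta = np.exp(-eps) * p.max()
    mask = p > eta
    q = p * mask
    return q / q.sum()

p = np.array([0.5, 0.3, 0.15, 0.04, 0.01])
q = margin_relaxed(p, eps=1.0)
# eta = exp(-1) * 0.5 ~ 0.18, so only the first two entries survive,
# and q = [0.625, 0.375, 0, 0, 0].
```

Note that the number of surviving words adapts to the sharpness of $p_j(\cdot)$, which is exactly the context-dependent alternative to a fixed top-$k$ discussed earlier.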
We present two warm-up techniques to train WEAM-family models.\n\n\subsubsection{Scheduled Sampling}\n\nScheduled sampling is a widely-used way to warm up WEAM-family models in training \citep{bengio2015scheduled}.\nWhether to use the embedding of a ground truth word or its approximation in Equation \ref{eq:embed_atten} at the next timestamp is randomly selected as:\n\begin{equation} \small \label{eq:sample-i}\n\begin{split}\n v_{j+1} & =\begin{cases}\n \mathcal{M}(w_{j+1}), & t=0 \\\\\n \sum_{w\in\mathcal{V}^t}{p_j(w)\mathcal{M}(w)}, & t=1\n \end{cases} \\\\\n t & \sim \mathcal{B}(\text{steps}\/\text{max\_steps})\n\end{split}\n\end{equation}\nwhere $t$ is sampled from a Bernoulli distribution $\mathcal{B}(\cdot)$ whose expectation has a positive correlation with training progress.\nFrom a training perspective, the probability of choosing the approximated word embedding increases linearly from zero to one.\n\n\subsubsection{Threshold}\n\nMasking words with low probability may cause a new problem: a target word may be masked and lose its opportunity for optimization.\nOnce a target word is ruled out of training, it may be ruled out forever due to its low probability, leading to poor performance and slow convergence.\nAn alternative way to solve this problem is computing the threshold with the probability of the target word, and we determine the threshold in a probabilistic way as:\n\begin{equation} \small \label{eq:gold-margin}\n\begin{split}\n \eta_j & =\begin{cases}\n e^{-\epsilon} p_j(w_{j+1}), & r=0 \\\\\n e^{-\epsilon} \max_{k}p_j(k), & r=1\n \end{cases} \\\\\n r & \sim \mathcal{B}(\max(\text{steps}\/\text{max\_steps}, \xi))\n\end{split}\n\end{equation}\nwhere $r$ is similar to $t$ with lowerbound $\xi$.\nEquation \ref{eq:gold-margin} shows that the threshold is determined by the target word at the beginning of training whereas it is determined by the predictions when the algorithm almost converges.\nAdopting a 
lowerbound $\\xi$ also leads to an attractive property that a prediction and its re sptteiaevcrget word are replaceable to each other, and thus the model is expected to capturing the phenomenon of synonyms.\n\n\\section{Experiments}\n\nWe conducted three sets of experiments to demonstrate the effectiveness of M-WEAM on various tasks including machine translation, text summarization and dialogue generation. \n\nWe employed almost the same setting to all the tasks. \nA wrapped recurrent block was used for both the encoder and decoder, defined as a sequence of LSTM, dropout, residual connection and layer normalization. \nUni-directional LSTM was used in encoder while the bi-directional LSTM was used in decoder.\nThe multi-hop attention component was also wrapped in a similar way.\nWe set the dimensionality of vector space to $256$, layers of recurrent block to $3$, dropout rate to $0.3$, lowerbound of margin $\\epsilon$ to $1$, and $\\xi$ to $0.5$.\nThe models were optimized by Adam \\cite{kingma2014adam} with $\\beta_1=0.9, \\beta_2=0.999 $ and $\\text{eps}=10^{-8}$. \nLearning rate was scheduled following \\cite{feng2018neural}, and its maximum was set to $0.001$.\nLabel smoothing was also used with $0.1$ smoothing rate. 
\nSeparate vocabularies were constructed for the source and target domains on the translation and dialogue tasks whereas a shared one was used for the summarization task.\nAll the models were implemented with PyTorch and trained on $1$ NVIDIA Titan Xp GPU.\n\n\n\subsection{Machine Translation}\n\n\begin{table}[b]\n \centering\n \begin{tabular}{l|c}\n \toprule\n Model & BLEU \\\\\n \midrule\n MIXER & $20.73$ \\\\\n BSO & $23.83$ \\\\\n $\alpha$-soft annealing & $20.60$ \\\\\n Actor-Critic & $27.49$ \\\\\n SEARNN & $28.20$ \\\\\n NPMT & $28.57$ \\\\\n \midrule\n VAE-RNNSearch & $28.97$ \\\\\n WEAM & $27.84$ \\\\\n R-WEAM & $28.13$\\\\\n \midrule\n M-WEAM & \textbf{29.17} \\\\\n \bottomrule\n \end{tabular}\n \caption{Results on IWSLT14 German-English.}\n \label{tab:de-en}\n\end{table}\n\nThe machine translation task was evaluated on two datasets, IWSLT14 German to English (De-En) \citep{cettolo2014report} and IWSLT15 English to Vietnamese (En-Vi) \citep{cettolo2015iwslt}. \nThe datasets were preprocessed basically following \cite{huang2017towards, feng2018neural}. \nFor IWSLT14 De-En, the dataset contains roughly 153K training sentences, 7K development sentences and 7K testing sentences. 
\nWords whose frequency of occurrence is lower than 3 were replaced by an ``UNK'' token.\nFor IWSLT15 En-Vi, the dataset contains 133K translation pairs in the training set, 1,553 in the validation set (TED tst2012) and 1,268 in the test set (TED tst2013).\nSimilar to De-En, words with frequency of occurrence lower than 5 were replaced by the ``UNK'' token.\n\n\begin{table}[t]\n \centering\n \begin{tabular}{l|c}\n \toprule\n Model & BLEU \\\\\n \midrule\n Hard Monotonic & $23.00$ \\\\\n NPMT & $26.91$ \\\\\n \midrule\n VAE-RNNSearch & $28.66$ \\\\\n WEAM & $27.14$ \\\\\n R-WEAM & $26.79$ \\\\\n \midrule\n M-WEAM & \textbf{28.70} \\\\\n \bottomrule\n \end{tabular}\n \caption{Results on IWSLT15 English-Vietnamese.}\n \label{tab:en-vi}\n\end{table}\n\nOn IWSLT14 De-En, we compared M-WEAM against VAE-RNNSearch \citep{bowman2016generating}, WEAM, R-WEAM \citep{zhang2016generating}, MIXER \citep{iclr-ranzato:16}, BSO \citep{emnlp-wiseman:16}, Actor-Critic \citep{iclr-bahdanau:17}, SEARNN \citep{iclr-leblond:18}, $\alpha$-soft annealing \cite{goyal2017differentiable}, and NPMT \citep{huang2017towards}.\nOn IWSLT15 En-Vi, we compared M-WEAM against VAE-RNNSearch \citep{bowman2016generating}, WEAM, R-WEAM \citep{zhang2016generating}, Hard Monotonic \citep{raffel2017online} and NPMT \citep{huang2017towards}.\n\n\nTable \ref{tab:de-en} and Table \ref{tab:en-vi} display the results on the German-English and English-Vietnamese translation tasks respectively. \nIt can be seen that M-WEAM achieves decent results on both tasks.\nCompared with the other differentiable models, M-WEAM achieves a notable gain of over $1$ BLEU point, outperforming WEAM and R-WEAM on both tasks. 
\nM-WEAM also achieves results comparable to its teacher forcing version.\nBesides, M-WEAM beats previous SOTA RNN-based models by a significant margin.\n\nInspired by \cite{keskar2016large}, where they explored the neighborhood of the minima to measure the model's sensitivity, we conducted a similar experiment on IWSLT14 De-En to show the robustness of VAE-RNNSearch and WEAM-family models. \nWe tested the models by injecting noise into the input embeddings. \nThe noise was sampled from an unbiased Gaussian distribution with the standard deviation increasing from 0 to 1. \nAs the noise increases, models with sharper local minima should suffer more from performance degradation. \n\nFigure \ref{fig:sensitivity} shows how the performance of the compared models changes against noise. \nThe M-WEAM model surpasses the teacher forcing model by nearly 3 BLEU points when the standard deviation reaches 1. \nIt thus can be inferred that M-WEAM converges to a flatter minimum than VAE-RNNSearch does.\nBesides, among all models, WEAM shows the strongest robustness to noise because it has been trained with massive noise due to the long tail effect. \nIn general, M-WEAM achieves a balance between performance and robustness.\n\n\begin{figure}[t]\n \centering\n \includegraphics[width=7.5cm]{noise.pdf}\n \caption{Sensitivity test with different Gaussian noise levels. The $x$-axis is the standard deviation of the Gaussian noise. 
\n \n }\n \label{fig:sensitivity}\n\end{figure}\n\n\n\n\subsection{Abstractive Summarization}\n\nWe evaluated the models on the Gigaword benchmark, and applied the same data preprocessing as \citep{rush2015neural, chopra2016abstractive}.\nThe results are reported with F1 ROUGE scores, including ROUGE-1, ROUGE-2 and ROUGE-L.\n\nThe results of ABS \citep{rush2015neural}, ABS+ \citep{rush2015neural}, Feats \citep{nallapati2016abstractive}, RAS-LSTM \citep{chopra2016abstractive} and RAS-Elman \citep{chopra2016abstractive} are extracted from the numbers reported by their authors. We implemented the VAE-RNNSearch \citep{bowman2016generating} as well as WEAM, and report their results using our implementations. The results of ABS and ABS+ with the greedy search are unavailable, and thus we report those with the beam search instead. We are aware that SEASS \citep{zhou2017selective} and DRGD \citep{li2017deep} are two recent studies in which higher ROUGE scores were reported on the Gigaword dataset. \nTheir results are not listed because those models are specifically tailored for the abstractive summarization task. SEASS is equipped with a selective network for salient language units, and DRGD uses a recurrent latent variable to capture the summary structure, while we focus on the general framework and training algorithm for sequence-to-sequence tasks. 
\n\n\n\\begin{table}[t]\n \\centering\n \\begin{tabular}{l|c|c|c}\n \\toprule\n Model & R-1 & R-2 & R-L \\\\\n \\midrule\n ABS$^*$ & $29.55$ & $11.32$ & $26.42$ \\\\\n ABS+$^*$ & $29.78$ & $11.89$ & $26.97$ \\\\\n Feats & $32.67$ & $15.59$ & $30.64$ \\\\\n RAS-LSTM & $31.71$ & $13.63$ & $29.31$ \\\\\n RAS-Elman & $33.10$ & $14.45$ & $30.25$ \\\\\n \n \n \\midrule\n VAE-RNNSearch & $\\textbf{33.99}$ & $15.72$ & $\\textbf{31.67}$ \\\\\n WEAM & $33.09$ & $15.05$ & $30.79$ \\\\\n R-WEAM & $32.35$ & $14.52$ & $30.10$ \\\\\n \\midrule\n \n M-WEAM & $33.87$ & $\\textbf{15.78}$ & $31.52$ \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Results on Gigaword dataset. The models indicated with $^*$ use beam search to generate summaries whereas the others use the greedy search instead.}\n \\label{tab:gigaword}\n\\end{table}\n\nTable \\ref{tab:gigaword} shows the results on Gigaword Test Set. \nAmong the fully differentiable WEAM family models, M-WEAM achieves the best performance, and outperform the WEAM and R-WEAM with at least $0.7$ improvement on the all ROUGE metrics.\nBy comparing with VAE-RNNSearch, M-WEAM achieves comparable results on all the ROUGE metrics. \nIt is worth noting that although M-WEAM performs slightly worse than VAE-RNNSearch on ROUGE-1, but it does better on ROUGE-2, which shows that M-WEAM is able to recover more longer patterns, thanks to its fully differentiable property.\n\n\\subsection{Dialogue}\n\nFinally, we evaluated the models on the task of single-turn dialogue, and used a subset of STC-weibo corpus \\cite{shang2015neural} as a dataset, where weibo is a popular Twitter-like microblogging service in China. 
\nThe complete corpus consists of roughly $4.4$M conversational post-response pairs.\nWe removed the sentences containing more than $10$ words and created a dataset with $907,809$ post-response pairs for fast training and testing.\nThe dataset was randomly split into training, validation, and testing sets, which contain $899$K, $10$K and $10$K pairs respectively. \nThe vocabulary was formed by the most frequent $40$K words in the training data.\n\nThe BLEU score is a widely used metric in dialogue evaluation to measure response relevance, whereas distinct-1\/2 is used to measure response diversity. \nAs is shown in Table \ref{tab:stc}, the M-WEAM model achieves the highest performance in terms of BLEU, but performs worse than the other models on the distinct metrics. \nUnlike machine translation, a response can be taken as an acceptable answer to many posts (or questions), and the dataset contains a number of one-to-many cases. In order to recover the probability of entire sequences, the fully differentiable model with improved training efficiency tends to generate frequently-used sequences and $n$-grams, which hurts the diversity of the generated responses. We leave this issue for future research. \nThe high relevance but low diversity of the responses generated by M-WEAM show that the proposed M-WEAM model is more desirable for tasks that require generating exact sequences.\n\n\begin{table}[t] \n \small\n \centering\n \begin{tabular}{l|c|c|c}\n \toprule\n Model & BLEU & distinct-1 & distinct-2 \\\\\n \midrule\n VAE-RNNSearch & $5.95$ & $2.39$ & $19.01$ \\\\\n WEAM & $5.67$ & \textbf{2.50} & $19.52$ \\\\\n R-WEAM & $5.73$ & $2.49$ & \textbf{19.65} \\\\\n \midrule\n M-WEAM & \textbf{6.11} & $2.39$ & $16.31$ \\\\\n \bottomrule\n \end{tabular}\n \normalsize\n \caption{Automatic evaluation results on STC-weibo dataset. 
The best results are highlighted in bold font.}\n \label{tab:stc}\n\end{table}\n\n\n\section{Conclusion}\n\nWe proposed a fully differentiable training algorithm for RNNs to alleviate the discrepancy between training and inference (exposure bias), and to bridge the gap between the training loss defined at each word and the evaluation metrics derived from the whole sequence (evaluation bias). \nThe fully differentiable property is achieved by feeding the network at each step with a ``bundle'' of words carrying similar meaning instead of a single ground truth. \nIn our solution, the network is allowed to take a different number of words as input at each time step, depending on the context. \nWe may have more candidate words at some positions, and fewer at others, when trying to generate a sentence. \nExperiments on machine translation, abstractive summarization, and open-domain dialogue response generation tasks showed that the proposed architecture and training algorithm achieve the best or comparable performance, especially in BLEU or ROUGE-2, metrics defined at the sequence level, reflecting the enhanced ability in capturing long-term dependencies and recovering the probability of the whole sequence. \nThe trained networks are also empirically shown to be more robust to noise, and consistently perform better than other competitors at different noise levels.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\label{intro}\setcounter{equation}{0}\n\t\n\tMany physical processes such as harvesting, natural disasters, shocks, etc., cause abrupt changes in their states at certain time instants. These sudden changes occur over a negligible time period and they are estimated in the form of instantaneous impulses. The theory of instantaneous impulsive systems has remarkable applications in several areas of science and engineering, for example, population dynamics, ecology, network control systems with scheduling protocols, etc. (cf. 
\\cite{NE,AM,TY}, etc). However, certain dynamics of evolution processes in pharmacotherapy cannot be modeled by instantaneous impulsive dynamical systems; for example, in the hemodynamical equilibrium of a person, the introduction of insulin into the bloodstream and its consequent absorption by the body are gradual processes that stay active for a finite time interval. Thus, we cannot describe this situation via instantaneous impulsive systems. Therefore, Hern\\'andez et al. \\cite{EHD} introduced a new class of impulses, termed non-instantaneous impulses, which start at an arbitrary fixed point and stay active on a finite time interval, and they established the existence of solutions for such a class of impulsive differential equations. Later, Wang et al. \\cite{JRW,JRY} extended this model to two general classes of impulsive differential equations, which are very important in the study of the dynamics of evolutionary processes in pharmacotherapy. For more details on the theory of non-instantaneous impulsive systems, we refer the interested readers to \\cite{YTJ,JMF}, etc., and the references therein. In addition to this, there are various real world phenomena, for example, neural networks, inferred grinding models, ecological models, heat conduction in materials with fading memory, etc., in which the current state of a system is influenced by the past states. The dynamics of such processes are characterized by delay differential equations (with finite or infinite delay), see for example, \\cite{XFL,LA,JW}, etc. From the application point of view, functional evolution systems with state-dependent delays are more prevalent and adequate, see for instance, \\cite{AO,CNF}, etc., and the references therein. \n\t\n\tThe concept of controllability plays a vital role in the study of control systems. Controllability (exact or approximate) means that the solution of a control system can be steered from an arbitrary initial state to a desired final state by using some control function. 
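In finite dimensions, this steering property can be made concrete. The discrete-time sketch below (the matrices, horizon and target are illustrative choices of ours, unrelated to the system studied in this paper) computes two control values that move a prescribed initial state exactly to a prescribed target:

```python
# Discrete-time analogue of steering: x_{k+1} = A x_k + B u_k on R^2.
# Over two steps, x2 = A^2 x0 + (A B) u0 + B u1, so steering x0 to xT
# amounts to solving a 2x2 linear system in (u0, u1).
A = [[1.0, 0.1], [0.0, 1.0]]
B = [0.0, 0.1]          # a single input entering the second component

def step(x, u):
    return [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
            A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]

# Reachability matrix [A B, B] = [[0.01, 0.0], [0.1, 0.1]] is invertible
# (Kalman rank condition), so every target is reachable in two steps.
x0, xT = [1.0, -1.0], [0.5, 2.0]
x_free = step(step(x0, 0.0), 0.0)        # free evolution A^2 x0 (u = 0)
r1, r2 = xT[0] - x_free[0], xT[1] - x_free[1]
u0 = r1 / 0.01                           # first row:  0.01*u0          = r1
u1 = (r2 - 0.1 * u0) / 0.1               # second row: 0.1*u0 + 0.1*u1  = r2
x2 = step(step(x0, u0), u1)
print(x2)   # recovers the target xT up to floating point error
```

Here exact steering works because the reachability matrix has full rank; the approximate controllability studied in this paper is the appropriate relaxation of this idea in infinite dimensions.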
In the infinite dimensional setting, in comparison with exactly controllable systems, approximately controllable systems are more prevalent and have a wide range of applications, cf. \\cite{M,TRR,TR,EZ}, etc. In the past two decades, the problem of approximate controllability of various kinds of systems (in Hilbert and Banach spaces), such as impulsive differential equations, functional differential equations, stochastic systems, Sobolev type evolution systems, etc., has been extensively studied with the help of fixed point approaches and has produced excellent results, see for instance, \\cite{SM,SAM,FUX,AGK,M,ER}, etc. \n\t\n\tOn the other hand, fractional differential equations (FDEs), which involve fractional derivatives of the form $\\frac{d^{\\alpha}}{dt^{\\alpha}}$, where $\\alpha>0$ is not necessarily an integer, have attained great importance due to their ability to model complex phenomena. They naturally appear in diffusion processes, electrical engineering, viscoelasticity, control theory of dynamical systems, quantum mechanics, biological sciences, electromagnetic theory, signal processing, finance, economics, and many other fields (cf. \\cite{HlR,AAK,VM,IPF,TPJ,YAR,MST}, etc). A comprehensive study on fractional calculus and FDEs is available in \\cite{AAK,IPF}, etc. Over the past few decades, FDEs in infinite dimensions have attracted considerable attention from researchers, and eminent contributions have been made both in theory and in applications. Several authors have studied the existence and approximate controllability results for fractional order systems in Hilbert spaces, see for instance, \\cite{SNS,NIZ,SRY,JWY}, etc., and the references therein. \n\t\n\tThe study of the approximate controllability of fractional order control systems in Banach spaces has not received much attention in the literature. 
In \\cite{NMI}, Mahmudov developed sufficient conditions for the approximate controllability of Sobolev type fractional evolution equations with the Caputo derivative in separable reflexive Banach spaces using the Schauder fixed point theorem. Later, in \\cite{MIN}, he studied the approximate controllability of fractional neutral evolution systems with infinite delay using Krasnoselskii's fixed point theorem. After that, Chalishazar et al. \\cite{DNC} extended this work by considering instantaneous impulses and examined the approximate controllability in Banach spaces. \n\t\n\tThe articles \\cite{PCY,FZM,RS,RGR}, etc., claimed the approximate controllability of fractional order systems in general Banach spaces using a resolvent operator condition. However, the resolvent operator defined in these works is valid only if the state space is a Hilbert space, whose dual is identified with the space itself (see the resolvent operator definition in the expression \\eqref{2.1} and Remark \\ref{rem2.8}). Moreover, many papers deal with fractional order impulsive systems with delays, see for instance, \\cite{ZT,ZH,YZO}, etc. In these works, the characterization of the norm or seminorm defined in the phase space involves the uniform norm, but the choice of such a norm or seminorm is not suitable in the impulsive case; for counterexamples and more details, we refer the interested readers to \\cite{GU} (a detailed discussion on this problem is also available in \\cite{SAMT}). Many articles have considered fractional order impulsive systems with non-instantaneous impulses (cf. \\cite{RMS,Ad,Zyf,Zy} etc.). In these works, the concept of the mild solution defined for the considered system is not realistic; a counterexample and an appropriate definition of the mild solution are discussed in \\cite{Fe,JRY} (see Definition \\ref{def2.6} for the mild solution definition for the system under our consideration). 
One of the main aims of this work is to resolve these issues.\n\t\n\tRecently, a few papers have been reported on the approximate controllability of non-instantaneous impulsive systems with and without delays in Hilbert spaces (cf. \\cite{RMS,SKS,SAJ}, etc). Dhayal et al. \\cite{RMS} formulated approximate controllability results for a class of fractional order non-instantaneous impulsive stochastic differential equations driven by fractional Brownian motion. In \\cite{SKS}, Kumar and Abdal derived sufficient conditions for the approximate controllability of non-instantaneous impulsive fractional semilinear measure driven control systems with infinite delay. Liu and his co-authors, in \\cite{SAJ}, investigated the approximate controllability of fractional differential equations with non-instantaneous impulses via an iterative learning control scheme. To the best of our knowledge, no work has been reported on the approximate controllability of fractional order non-instantaneous impulsive systems with state-dependent delay in Banach spaces.\n\t\n\tMotivated by the above facts, in this work, we derive sufficient conditions for the approximate controllability of fractional order non-instantaneous impulsive functional evolution equations with state-dependent delay in separable reflexive Banach spaces. Moreover, we properly define the resolvent operator in Banach spaces, which plays a crucial role in obtaining the aforementioned results (see the expression \\eqref{2.1} below). The construction of the different forms of feedback controls for fractional order semilinear systems available in the literature is also properly motivated and justified in this work (see Remarks \\ref{rem3.6} and \\ref{rem4.4} below). 
Furthermore, our paper modifies the phase space characterization to incorporate Guedda's observations in \\cite{GU}, by replacing the uniform norm on the phase space by an integral norm for the impulsive differential equations (see Example \\ref{exm2.8}). \n\t\n\tWe consider the following fractional order non-instantaneous impulsive functional evolution equation with state-dependent delay: \n\t\\begin{equation}\\label{1.1}\n\t\\left\\{\n\t\\begin{aligned}\n\t^C\\mathrm{D}_{0,t}^{\\alpha}x(t)&=\\mathrm{A}x(t)+\\mathrm{B}u(t)+f(t,x_{\\rho(t, x_t)}), \\ t\\in\\bigcup_{k=0}^{m} (\\tau_k, t_{k+1}]\\subset J=[0,T], \\\\\n\tx(t)&=h_k(t, x(t_k^{-})),\\ t\\in(t_k, \\tau_k], \\ k=1,\\dots,m, \\\\\n\tx_{0}&=\\psi\\in \\mathfrak{B},\n\t\\end{aligned}\n\t\\right.\n\t\\end{equation}\n\twhere \\begin{itemize} \\item $^C\\mathrm{D}_{0,t}^{\\alpha}$ denotes the Caputo fractional derivative of order $\\alpha$ with $\\frac{1}{2}<\\alpha<1$, \\item the operator $\\mathrm{A}:\\mathrm{D(A)}\\subset \\mathbb{X}\\to\\mathbb{X}$ is an infinitesimal generator of a $\\mathrm{C}_0$-semigroup $ \\mathcal{T}(t) $ on a separable reflexive Banach space $ \\mathbb{X}$ (having a strictly convex dual $\\mathbb{X}^*$), \\item the linear operator $\\mathrm{B}:\\mathbb{U}\\to\\mathbb{X}$ is bounded with $\\left\\|\\mathrm{B}\\right\\|_{\\mathcal{L}(\\mathbb{U},\\mathbb{X})}= \\tilde{M}$ and the control function $u\\in \\mathrm{L}^{2}(J;\\mathbb{U})$, where $\\mathbb{U}$ is a separable Hilbert space, \\item the function $ f:J\\times \\mathfrak{B}\\rightarrow \\mathbb{X} $, where $\\mathfrak{B}$ is a phase space, which will be specified in the subsequent sections, \\item for $k=1,\\ldots,m$, the functions $h_k:[t_k, \\tau_k]\\times\\mathbb{X}\\to\\mathbb{X}$ represent the non-instantaneous impulses and the fixed points $\\tau_k$ and $t_k$ satisfy $0=t_0=\\tau_0<t_1\\le\\tau_1<t_2<\\dots<t_m\\le\\tau_m<t_{m+1}=T$.\n\t\\end{itemize}\n\t\n\t\\section{Preliminaries}\\setcounter{equation}{0}\n\tIn this section, we recall some basic definitions from fractional calculus and collect the preliminary results used in the rest of the paper.\n\t\\begin{Def}\n\t\tThe \\emph{Riemann-Liouville fractional integral} of a function $f:[a,b]\\to\\mathbb{R}$ of order $q>0$ is defined as\n\t\t\\begin{align*}\n\t\tI_{a}^{q}f(t):=\\frac{1}{\\Gamma(q)}\\int_{a}^{t}\\frac{f(s)}{(t-s)^{1-q}}\\mathrm{d}s,\\ \\mbox{ for a.e. 
} \\ t\\in[a,b],\n\t\t\\end{align*}\n\t\twhere $f\\in\\mathrm{L}^1([a,b];\\mathbb{R})$ and $\\Gamma(q)=\\int_{0}^{\\infty}t^{q-1}e^{-t}\\mathrm{d}t$ is the Euler gamma function.\n\t\\end{Def}\n\t\\begin{Def}\n\t\tThe \\emph{Riemann-Liouville fractional derivative} of a function $f:[a,b]\\to\\mathbb{R}$ of order $q>0$ is given as \n\t\t\\begin{align*}\n\t\t^L\\mathrm{D}_{a,t}^{q}f(t):=\\frac{1}{\\Gamma(n-q)}\\frac{d^n}{dt^n}\\int_{a}^{t}(t-s)^{n-q-1}f(s)\\mathrm{d}s,\\ \\mbox{ for a.e. }\\ t\\in[a,b],\n\t\t\\end{align*}\n\t\twith $n-1<q<n$, $n\\in\\mathbb{N}$.\n\t\\end{Def}\n\t\\begin{Def}\n\t\tThe \\emph{Caputo fractional derivative} of a function $f:[a,b]\\to\\mathbb{R}$ of order $q>0$ is defined as \n\t\t\\begin{align*}\n\t\t^C\\mathrm{D}_{a,t}^{q}f(t):=\\ ^L\\mathrm{D}_{a,t}^{q}\\left[f(t)-\\sum_{p=0}^{n-1}\\frac{f^{(p)}(a)}{p!}(t-a)^p\\right],\\ \\mbox{ for a.e. } \\ t\\in[a,b].\n\t\t\\end{align*}\n\t\\end{Def}\n\n\n\n\t\\begin{rem}[\\cite{IPF}]\n\t\tIf a function $f\\in\\mathrm{AC}^n([a,b];\\mathbb{R})$, then the Caputo fractional derivative of $f$ can be written as\n\t\t\\begin{align*}\n\t\t^C\\mathrm{D}_{a,t}^{q}f(t)=\\frac{1}{\\Gamma(n-q)}\\int_{a}^{t}(t-s)^{n-q-1}f^{(n)}(s)\\mathrm{d}s,\\ \\mbox{ for a.e. }\\ \\ t \\in[a,b], \\ n-1<q<n, \\ n\\in\\mathbb{N}.\n\t\t\\end{align*}\n\t\\end{rem}\n\tFor $\\frac{1}{2}<\\alpha<1$ and $t\\ge0$, we define the operators\n\t\\begin{align*}\n\t\\mathcal{T}_{\\alpha}(t)x:=\\int_{0}^{\\infty}\\varphi_{\\alpha}(\\xi)\\mathcal{T}(t^{\\alpha}\\xi)x\\mathrm{d}\\xi \\ \\mbox{ and }\\ \\widehat{\\mathcal{T}}_{\\alpha}(t)x:=\\alpha\\int_{0}^{\\infty}\\xi\\varphi_{\\alpha}(\\xi)\\mathcal{T}(t^{\\alpha}\\xi)x\\mathrm{d}\\xi,\\ \\mbox{ for }\\ x\\in\\mathbb{X},\n\t\\end{align*}\n\twhere $\\varphi_{\\alpha}(\\cdot)$ is a probability density function on $(0,\\infty)$ with $\\int_{0}^{\\infty}\\xi\\varphi_{\\alpha}(\\xi)\\mathrm{d}\\xi=\\frac{1}{\\Gamma(1+\\alpha)}$.\n\t\\begin{lem}\n\t\tThe operators $\\mathcal{T}_{\\alpha}(t)$ and $\\widehat{\\mathcal{T}}_{\\alpha}(t)$ have the following properties:\n\t\t\\begin{enumerate}\n\t\t\t\\item [(i)] For any fixed $t\\geq 0$, the operators $\\mathcal{T}_{\\alpha}(t)$ and $\\widehat{\\mathcal{T}}_{\\alpha}(t)$ are linear and bounded, that is, for any $x\\in\\mathbb{X}$, $\\left\\|\\mathcal{T}_{\\alpha}(t)x\\right\\|_{\\mathbb{X}}\\le M\\left\\|x\\right\\|_{\\mathbb{X}}$ and $\\|\\widehat{\\mathcal{T}}_{\\alpha}(t)x\\|_{\\mathbb{X}}\\le\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left\\|x\\right\\|_{\\mathbb{X}}$.\n\t\t\t\\item [(ii)] If the semigroup $\\mathcal{T}(t)$ is compact for $t>0$, then the operators $\\mathcal{T}_{\\alpha}(t)$ and $\\widehat{\\mathcal{T}}_{\\alpha}(t)$ are also compact for $t>0$. \n\t\t\\end{enumerate}\n\t\\end{lem}\n\tLet us define the set \n\t\\begin{align*} \n\t\\mathrm{PC}(J;\\mathbb{X})&:=\\big\\{x:J \\rightarrow \\mathbb{X} : x\\vert_{t\\in I_k}\\in\\mathrm{C}(I_k;\\mathbb{X}),\\ I_k:=(t_k, t_{k+1}],\\ k=0,1,\\ldots,m \\ \\mbox{ and }\\ x(t_k^+)\\\\&\\qquad \\qquad \\mbox{ and }\\ x(t_k^-)\\ \\mbox{ exist for each }\\ k=1,\\ldots,m, \\ \\mbox{ and satisfy }\\ x(t_k)=x(t_k^-)\\big\\}, \n\t\\end{align*}\n\tendowed with the norm $\\left\\|x\\right\\|_{\\mathrm{PC}(J;\\mathbb{X})}:=\\sup\\limits_{t\\in J}\\left\\|x(t)\\right\\|_{\\mathbb{X}}$.\n\t\n\tWe now introduce the concept of mild solution for the system \\eqref{1.1} (cf. \\cite{JRY}). 
\n\t\\begin{Def}[Mild solution]\\label{def2.6}\n\t\tA function $x(\\cdot;\\psi,u):(-\\infty, T]\\to\\mathbb{X}$ is said to be a \\emph{mild solution} of \\eqref{1.1}, if $x_0=\\psi\\in\\mathfrak{B}$, $x\\vert_{J}\\in\\mathrm{PC}(J;\\mathbb{X})$ and $x(\\cdot)$ satisfies the following:\n\t\t\\begin{equation}\\label{2.2}\n\t\tx(t)=\n\t\t\\begin{dcases}\n\t\t\\mathcal{T}_{\\alpha}(t)\\psi(0)+\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\left[\\mathrm{B}u(s)+f(s,x_{\\rho(s, x_s)})\\right]\\mathrm{d}s,\\ t\\in[0, t_1],\\\\\n\t\th_k(t, x(t_k^-)),\\ t\\in(t_k, \\tau_k],\\ k=1,\\ldots,m,\\\\\n\t\t\\mathcal{T}_{\\alpha}(t-\\tau_k)h_k(\\tau_k, x(t_k^-))-\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_k-s)\\left[\\mathrm{B}u(s)+f(s,x_{\\rho(s, x_s)})\\right]\\mathrm{d}s\\\\\\quad+\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\left[\\mathrm{B}u(s)+f(s,x_{\\rho(s, x_s)})\\right]\\mathrm{d}s,\\ t\\in(\\tau_k,t_{k+1}],\\ k=1,\\ldots,m.\n\t\t\\end{dcases}\n\t\t\\end{equation}\n\t\\end{Def}\n\n\t\\subsection{Phase space}\n\tWe now provide the definition of the phase space $\\mathfrak{B}$ introduced in \\cite{HY}, suitably modified to incorporate impulsive systems (cf. \\cite{VOj}). The phase space $\\mathfrak{B}$ is a linear space of functions from $(-\\infty, 0]$ into $\\mathbb{X}$, equipped with the seminorm $\\left\\|\\cdot\\right\\|_{\\mathfrak{B}}$ and satisfying the following axioms:\n\t\\begin{enumerate}\n\t\t\\item [(A1)] If $x: (-\\infty, T]\\rightarrow \\mathbb{X}$ is such that $x_{0}\\in \\mathfrak{B}$ and $x|_{J}\\in \\mathrm{PC}(J;\\mathbb{X}).$ 
Then the following conditions hold:\n\t\t\\begin{itemize}\n\t\t\t\\item [(i)] $x_{t}\\in\\mathfrak{B}$ for $t\\in J$.\n\t\t\t\\item [(ii)] $\\left\\|x_{t}\\right\\|_{\\mathfrak{B}}\\leq \\Lambda(t)\\sup\\{\\left\\|x(s)\\right\\|_{\\mathbb{X}}: 0\n\t\t\t\\leq s\\leq t\\}+\\Upsilon(t)\\left\\|x_{0}\\right\\|_{\\mathfrak{B}},$ for $t\\in J$, where $\\Lambda, \\Upsilon:[0, \\infty)\\rightarrow [0, \\infty)$ are independent of $x$, the function $\\Lambda(\\cdot)$ is strictly positive and continuous, and $\\Upsilon(\\cdot)$ is locally bounded.\n\t\t\n\t\t\\end{itemize}\n\t\t\\item [(A2)] The space $\\mathfrak{B}$ is complete. \n\t\\end{enumerate} \n\tFor any $\\psi\\in \\mathfrak{B}$, the function $\\psi_{t}, \\ t\\leq 0,$ is defined as $\\psi_{t}(\\theta)=\\psi(t+\\theta),\\ \\theta \\in (-\\infty, 0].$ Then, for any function $x(\\cdot)$ satisfying the axiom (A1) with $x_{0}=\\psi$, we can extend the mapping $t\\mapsto x_{t}$ by setting $x_{t}=\\psi_{t}, \\ t\\leq0$, to the whole interval $(-\\infty, T]$. Moreover, let us introduce the set \n\t\\begin{align*}\n\t\\mathcal{Q}(\\rho^-)&=\\{\\rho(s, \\varphi):\\ \\rho(s, \\varphi)\\leq 0, \\mbox{ for } (s, \\varphi)\\in J \\times \\mathfrak{B}\\}.\n\t\\end{align*} \n\tAssume that the function $t\\mapsto \\psi_{t}$, defined from $\\mathcal{Q}(\\rho^-)$ into $\\mathfrak{B}$, is continuous, and that there exists a continuous and bounded function $\\varTheta^{\\psi}: \\mathcal{Q}(\\rho^-)\\rightarrow (0, \\infty)$ such that \n\t\\begin{align*}\n\t\\left\\|\\psi_{t}\\right\\|_{\\mathfrak{B}}&\\leq \\varTheta^{\\psi}(t)\\left\\|\\psi\\right\\|_{\\mathfrak{B}}.\n\t\\end{align*}\n\t\\begin{lem}[\\cite{HY}]{\\label{lem2.7}}\n\t\tLet $x:(-\\infty, T]\\rightarrow \\mathbb{X}$ be a function such that $x_0=\\psi$ and $x\\vert_J\\in\\mathrm{PC}(J;\\mathbb{X})$. 
Then \n\t\t\\begin{align*}\n\t\t\\left\\|x_s\\right\\|_{\\mathfrak{B}}&\\leq H_{1}\\left\\|\\psi\\right\\|_{\\mathfrak{B}}+H_{2}\\sup \\big\\{ \\left\\|x(\\theta)\\right \\|_{\\mathbb{X}}:\\theta \\in [0, \\max \\{0, s\\}] \\big\\}, \\ s\\in \\mathcal{Q}(\\rho^-) \\cup J,\n\t\t\\end{align*}\n\t\twhere $$H_{1}= \\sup\\limits_{t\\in \\mathcal{Q}(\\rho^-)} \\varTheta^{\\psi}(t) + \\sup\\limits_{t\\in J}\\Upsilon(t), \\;\\; H_{2}= \\sup\\limits_{t\\in J}\\Lambda(t).$$ \n\t\\end{lem}\n\t\\begin{Ex}\\label{exm2.8}\n\t\tLet us take $\\mathfrak{B}=\\mathrm{PC}_{r}\\times\\mathrm{L}^p_h(\\mathbb{X}), r\\ge0, 1\\le p<\\infty$, which consists of all functions $\\psi:(-\\infty,0]\\to\\mathbb{X}$ such that $\\psi\\vert_{[-r,0]}\\in \\mathrm{PC}([-r,0];\\mathbb{X})$, $\\psi$ is Lebesgue measurable on $(-\\infty,-r)$, and $h\\|\\psi(\\cdot)\\|^p_{\\mathbb{X}}$ is Lebesgue integrable on $(-\\infty,-r]$. The seminorm in $\\mathfrak{B}$ is defined as \n\t\t\\begin{align}\n\t\t\\label{Bnorm}\\left\\|\\psi\\right\\|_{\\mathfrak{B}}:=\\int_{-r}^{0}\\left\\|\\psi(\\theta)\\right\\|_\\mathbb{X}\\mathrm{d}\\theta+\\left(\\int_{-\\infty}^{-r}h(\\theta)\\left\\|\\psi(\\theta)\\right\\|^p_{\\mathbb{X}}\\mathrm{d}\\theta\\right)^{\\frac{1}{p}},\n\t\t\\end{align}\n\t\twhere the function $h:(-\\infty, 0]\\to\\mathbb{R}^+$ is locally bounded and Lebesgue integrable. Moreover, there exists a locally bounded function $H:(-\\infty,0]\\to\\mathbb{R}^+$ such that $h(t+\\theta)\\le H(t)h(\\theta),$ for all $t\\le0$ and $\\theta\\in(-\\infty,0)\\backslash \\mathcal{O}_t$, where $\\mathcal{O}_t\\subseteq(-\\infty,0)$ is a set with Lebesgue measure zero. 
\n\t\\end{Ex}\n\n\t\\subsection{Resolvent operator and assumptions}\n\tTo discuss the approximate controllability of the system \\eqref{1.1}, we first define the following operators:\n\t\\begin{equation}\\label{2.1}\n\t\\left\\{\n\t\\begin{aligned}\n\tL_0^Tu&:=\\int^{T}_{0}(T-t)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-t)\\mathrm{B}u(t)\\mathrm{d}t,\\\\\n\t\\Phi_{0}^{T}&:=\\int^{T}_{0}(T-t)^{2(\\alpha-1)}\\widehat{\\mathcal{T}}_{\\alpha}(T-t)\\mathrm{B}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(T-t)^*\\mathrm{d}t,\\\\\n\t\\mathcal{R}(\\lambda,\\Phi_{0}^{T})&:=(\\lambda \\mathrm{I}+\\Phi_{0}^{T}\\mathcal{J})^{-1},\\ \\lambda > 0,\n\t\\end{aligned}\n\t\\right.\n\t\\end{equation}\n\twhere $\\mathrm{B}^{*}$ and $\\widehat{\\mathcal{T}}_{\\alpha}(t)^*$ denote the adjoint operators of $\\mathrm{B}$ and $\\widehat{\\mathcal{T}}_{\\alpha}(t),$ respectively. It is immediate that the operator $L_0^T$ is linear and bounded for $\\frac{1}{2}<\\alpha<1$. Moreover, the map $\\mathcal{J} : \\mathbb{X} \\rightarrow 2^{\\mathbb{X}^*}$ stands for the duality mapping, which is defined as \n\t\\begin{align*}\n\t\\mathcal{J}[x]&=\\{x^* \\in \\mathbb{X}^* : \\langle x, x^* \\rangle=\\left\\|x\\right\\|_{\\mathbb{X}}^2= \\left\\|x^*\\right\\|_{\\mathbb{X}^*}^2 \\}, \\mbox{ for all } x\\in \\mathbb{X},\n\t\\end{align*}\n\twhere $\\langle \\cdot, \\cdot \\rangle $ represents a duality pairing between $\\mathbb{X}$ and $\\mathbb{X}^*$. 
Since the space $\\mathbb{X}$ is a separable reflexive Banach space with a strictly convex dual $\\mathbb{X}^*$ (see \\cite{AA}), the mapping $\\mathcal{J}$ is bijective, strictly monotonic and demicontinuous, that is, $$x_k\\to x\\ \\mbox{ in }\\ \\mathbb{X}\\ \\mbox{ implies }\\ \\mathcal{J}[x_k] \\xrightharpoonup{w} \\mathcal{J}[x]\\ \\mbox{ in } \\ \\mathbb{X}^*\\ \\mbox{ as }\\ k\\to\\infty.$$ Moreover, the inverse mapping $\\mathcal{J}^{-1}:\\mathbb{X}^*\\to\\mathbb{X}$ is also a duality mapping.\n\t\\begin{rem}\\label{rem2.8}\n\t\tIf $\\mathbb{X}$ is a separable Hilbert space (identified with its own dual), then the resolvent operator is defined as $\\mathcal{R}(\\lambda,\\Phi_{0}^{T}):=(\\lambda \\mathrm{I}+\\Phi_{0}^{T})^{-1},\\ \\lambda > 0$.\n\t\\end{rem}\n\t\\begin{lem}[Lemma 2.2, \\cite{M}]\\label{lem2.9}\n\t\tFor every $h\\in\\mathbb{X}$ and $\\lambda>0$, the equation\n\t\t\\begin{align}\\label{2.4}\\lambda z_{\\lambda}+\\Phi_{0}^{T}\\mathcal{J}[z_{\\lambda}]=\\lambda h,\\end{align}\n\t\thas a unique solution $z_{\\lambda}(h)=\\lambda(\\lambda \\mathrm{I}+\\Phi_{0}^{T}\\mathcal{J})^{-1}(h)=\\lambda\\mathcal{R}(\\lambda,\\Phi_{0}^{T})(h)$ and \\begin{align}\\label{2.5}\n\t\t\\left\\|z_{\\lambda}(h)\\right\\|_{\\mathbb{X}}=\\left\\|\\mathcal{J}[z_{\\lambda}(h)]\\right\\|_{\\mathbb{X}^*}\\leq\\left\\|h\\right\\|_{\\mathbb{X}}.\n\t\t\\end{align}\n\t\t\\begin{proof}\n\t\t\tSince the non-negative operator $\\Phi_0^T$ is linear and bounded for $\\frac{1}{2}<\\alpha<1$, proceeding in a similar way as in the proof of Lemma 2.2 in \\cite{M}, one can obtain the result.\n\t\t\\end{proof}\n\t\\end{lem}\n\t\n\t\n\t\n\t\\begin{Def}[\\cite{FUX}]\n\t\tThe system \\eqref{1.1} is said to be \\emph{approximately controllable} on $ J $, for any initial function $\\psi\\in\\mathfrak{B}$, if the closure of the reachable set is the whole space $\\mathbb{X}$, that is, $\\overline{\\mathfrak{R}(T,\\psi)}=\\mathbb{X},$ where the reachable set is defined as 
\\begin{align*}\\mathfrak{R}(T,\\psi) = \\{x(T;\\psi,u): u(\\cdot) \\in \\mathrm{L}^{2}(J;\\mathbb{U})\\}.\\end{align*}\n\t\\end{Def}\n\tWe impose the following assumptions to investigate the approximate controllability of the system \\eqref{1.1}:\n\t\\begin{Ass}\\label{as2.1} \n\t\t\\begin{enumerate}\n\t\t\t\\item [\\textit{($H0$)}] For every $h\\in\\mathbb{X}$, $z_\\lambda=z_{\\lambda}(h)=\\lambda\\mathcal{R}(\\lambda,\\Phi_{0}^{T})(h) \\rightarrow 0$ as $\\lambda\\downarrow 0$ in the strong topology, where $z_{\\lambda}(h)$ is the solution of the equation \\eqref{2.4}.\n\t\t\t\\item[\\textit{(H1)}] The $\\mathrm{C}_0$-semigroup of bounded linear operators $\\mathcal{T}(t)$ is compact for $t>0$, with a bound $M\\geq 1$ such that $\\|\\mathcal{T}(t)\\|_{\\mathcal{L(\\mathbb{X})}}\\leq M$.\n\t\t\t\\item [\\textit{(H2)}] \n\t\t\t\\begin{enumerate} \n\t\t\t\t\\item [(i)] Let $x:(-\\infty, T]\\rightarrow \\mathbb{X}$ be such that $x_0=\\psi$ and $x|_{J}\\in \\mathrm{PC}(J;\\mathbb{X})$. The function $t\\mapsto f(t, x_{\\rho(t,x_{t})}) $ is strongly measurable on $J$ and the function $f(t,\\cdot): \\mathfrak{B}\\rightarrow \\mathbb{X}$ is continuous for a.e. $t\\in J$. Also, the map $t\\mapsto f(s,x_{t})$ is continuous on $\\mathcal{Q}(\\rho^-) \\cup J,$ for every $s\\in J$. \n\t\t\t\t\\item [(ii)] For each positive integer $r$, there exist a constant $\\alpha_1\\in[0,\\alpha]$ and a function $\\gamma_{r}\\in \\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^{+}})$ such that $$ \\sup_{\\left\\| \\psi\\right\\|_{\\mathfrak{B}}\\leq r} \\left\\|f(t, \\psi)\\right\\|_{\\mathbb{X}}\\leq\\gamma_{r}(t), \\mbox{ for a.e.} \\ t \\in J \\mbox{ and } \\psi\\in \\mathfrak{B},$$ with\n\t\t\t\t$$ \\liminf_{r \\rightarrow \\infty } \\frac {\\left\\|\\gamma_r\\right\\|_{\\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^+})}}{r} = \\beta< \\infty. 
$$ \n\t\t\t\\end{enumerate}\n\t\t\t\\item [\\textit{(H3)}] The impulses $ h_k:[t_k,\\tau_k]\\times\\mathbb{X}\\to\\mathbb{X}$, for $k=1,\\dots,m$, are such that \n\t\t\t\\begin{itemize}\n\t\t\t\t\\item [(i)] The impulses $h_k(\\cdot,x):[t_k,\\tau_k]\\to\\mathbb{X}$ are continuous for each $x\\in\\mathbb{X}$. \n\t\t\t\t\\item [(ii)] Each $h_k(t,\\cdot):\\mathbb{X}\\to\\mathbb{X}$ is completely continuous, for all $t\\in[t_k,\\tau_k]$.\n\t\t\t\t\\item[(iii)] $\\left\\|h_k(t,x) \\right\\|_{\\mathbb{X}} \\leq l_k, \\mbox{ for each } t\\in [t_k, \\tau_k] \\mbox{ and } x \\in \\mathbb{X},$ where $l_k$'s are positive constants.\n\t\t\t\\end{itemize} \n\t\t\\end{enumerate}\n\t\\end{Ass}\n\tThe following version of the discrete Gronwall-Bellman lemma (cf. \\cite{Ch}) is used in the sequel. \n\t\\begin{lem}\\label{lem2.13}\n\tIf $\\{f_n\\}_{n=0}^{\\infty}, \\{g_n\\}_{n=0}^{\\infty}$ and $\\{w_n\\}_{n=0}^{\\infty}$ are non-negative sequences and $$f_n\\le g_n+\\sum_{k=0}^{n-1}w_kf_k,\\ \\text{ for }\\ n\\geq 0,$$ then $$f_n\\le g_n+\\sum_{k=0}^{n-1}g_kw_k\\exp\\left(\\sum_{j=k+1}^{n-1}w_j\\right),\\text{ for }\\ n\\geq 0.$$\n\t\\end{lem}\n\t\n\n\t\\section{Linear Control Problem} \\label{Linear}\\setcounter{equation}{0}\n\tThe present section is devoted to discussing the approximate controllability of the fractional order linear control problem corresponding to \\eqref{1.1}. 
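Before proceeding, the discrete Gronwall-Bellman inequality of Lemma \ref{lem2.13} can be sanity-checked numerically. In the sketch below (the sequences $g_n$ and $w_n$ are illustrative random data, and the bound is used in its standard form, with $\exp\big(\sum_{j=k+1}^{n-1}w_j\big)$ inside the sum), $f_n$ is taken to satisfy the hypothesis with equality, which is the extremal case:

```python
import math
import random

def gronwall_bound(g, w, n):
    # Right-hand side of the discrete Gronwall-Bellman inequality:
    #   g_n + sum_{k<n} g_k * w_k * exp(sum_{j=k+1}^{n-1} w_j)
    return g[n] + sum(
        g[k] * w[k] * math.exp(sum(w[j] for j in range(k + 1, n)))
        for k in range(n)
    )

random.seed(0)
N = 30
g = [random.uniform(0.0, 2.0) for _ in range(N)]   # non-negative data
w = [random.uniform(0.0, 0.3) for _ in range(N)]   # non-negative weights

# The extremal sequence: the hypothesis f_n <= g_n + sum_{k<n} w_k f_k
# is satisfied with equality, so this is the hardest case for the bound.
f = []
for n in range(N):
    f.append(g[n] + sum(w[k] * f[k] for k in range(n)))

ok = all(f[n] <= gronwall_bound(g, w, n) + 1e-9 for n in range(N))
print(ok)
```

The check succeeds because solving the equality recursion gives $f_n = g_n + \sum_{k<n} g_k w_k \prod_{j=k+1}^{n-1}(1+w_j)$, and $\prod(1+w_j)\le\exp(\sum w_j)$.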
To establish this result, we first obtain the existence of an optimal control by minimizing the cost functional given by\n\t\\begin{equation}\\label{3.1}\n\t\\mathcal{G}(x,u)=\\left\\|x(T)-x_{T}\\right\\|^{2}_{\\mathbb{X}}+\\lambda\\int^{T}_{0}\\left\\|u(t)\\right\\|^{2}_{\\mathbb{U}}\\mathrm{d}t,\n\t\\end{equation}\n\twhere $x(\\cdot)$ is the solution of the linear control system:\n\t\\begin{equation}\\label{3.2}\n\t\\left\\{\n\t\\begin{aligned}\n\t^C\\mathrm{D}_{0,t}^{\\alpha}x(t)&= \\mathrm{A}x(t)+\\mathrm{B}u(t),\\ t\\in J,\\\\\n\tx(0)&=\\zeta,\n\t\\end{aligned}\n\t\\right.\n\t\\end{equation}\n\twith the control $u\\in \\mathrm{L}^{2}(J;\\mathbb{U})$, $x_{T}\\in \\mathbb{X}$ and $\\lambda >0$. Since $\\mathrm{B}u\\in\\mathrm{L}^1(J;\\mathbb{X})$, the system \\eqref{3.2} has a unique mild solution $x\\in \\mathrm{C}(J;\\mathbb{X}) $ given by (see Corollary 2.2, Chapter 4, \\cite{P} and Lemma 4.68, Chapter 4, \\cite{Y})\n\t\\begin{align*}\n\tx(t)= \\mathcal{T}_{\\alpha}(t)\\zeta+\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}u(s)\\mathrm{d}s,\n\t\\end{align*}\n\tfor any $u\\in\\mathscr{U}_{\\mathrm{ad}}=\\mathrm{L}^2(J;\\mathbb{U})$ (the admissible control class). Next, we define the \\emph{admissible class} $\\mathscr{A}_{\\mathrm{ad}}$ for the system \\eqref{3.2} as\n\t\\begin{align*}\n\t\\mathscr{A}_{\\mathrm{ad}}:=\\big\\{(x,u) :x\\mbox{ is \\mbox{the unique mild solution} of }\\eqref{3.2} \\mbox{ with the control }u\\in\\mathscr{U}_{\\mathrm{ad}}\\big\\}.\n\t\\end{align*}\n\tFor any given control $u\\in\\mathscr{U}_{\\mathrm{ad}}$, the system \\eqref{3.2} has a unique mild solution, which ensures that the set $\\mathscr{A}_{\\mathrm{ad}}$ is nonempty. 
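A scalar analogue makes the minimization of the cost functional \eqref{3.1} transparent. In the sketch below (the numbers $a$, $c$, $x_T$ and $\lambda$ are illustrative choices of ours), the affine map $u\mapsto x(T)=c+au$ plays the role of the control-to-state map, and the unique minimizer of the resulting strictly convex quadratic cost is computed and verified against the first-order condition:

```python
# Scalar analogue of (3.1)-(3.2): x(T) = c + a*u, with cost
#   G(u) = (x(T) - xT)**2 + lam * u**2.
a, c, xT, lam = 2.0, 0.3, 1.7, 0.5

def G(u):
    return (c + a * u - xT) ** 2 + lam * u ** 2

# G is a strictly convex quadratic in u (the lam*u**2 term guarantees
# strict convexity), so setting G'(u) = 0 gives the unique minimizer:
u_star = a * (xT - c) / (a ** 2 + lam)

grad = 2 * a * (c + a * u_star - xT) + 2 * lam * u_star
print(abs(grad) < 1e-12)                                  # first-order condition
print(all(G(u_star) <= G(u_star + t) for t in (-1.0, -0.1, 0.1, 1.0)))
```

Note that, as in the infinite-dimensional problem, the regularization weight $\lambda>0$ is what forces uniqueness of the optimal control even when the target cannot be reached exactly.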
By using the definition of the cost functional, we can formulate the optimal control problem as:\n\t\\begin{align}\\label{3.3}\n\t\\min_{ (x,u) \\in \\mathscr{A}_{\\mathrm{ad}}} \\mathcal{G}(x,u).\n\t\\end{align}\n\tIn the next theorem, we show the existence of an optimal pair for the problem \\eqref{3.3}.\n\t\\begin{theorem}[Existence of an optimal pair]\\label{optimal}\n\t\tFor a given $\\zeta\\in\\mathbb{X}$ and fixed $\\frac{1}{2}<\\alpha<1$, there exists a unique optimal pair $(x^0,u^0)\\in\\mathscr{A}_{\\mathrm{ad}}$ for the problem \\eqref{3.3}.\n\t\\end{theorem}\n\t\\begin{proof}\n\t\tLet us assume $$\\mathcal{G} := \\inf \\limits _{u \\in \\mathscr{U}_{\\mathrm{ad}}}\\mathcal{G}(x,u).$$ Since $0\\leq \\mathcal{G} < +\\infty$, there exists a minimizing sequence $\\{u^n\\}_{n=1}^{\\infty} \\in \\mathscr{U}_{\\mathrm{ad}}$ such that $$\\lim_{n\\to\\infty}\\mathcal{G}(x^n,u^n) = \\mathcal{G},$$ where $x^n(\\cdot)$ is the unique mild solution of the system \\eqref{3.2}, corresponding to the control $u^n(\\cdot),$ for each $n\\in\\mathbb{N}$ with $x^n(0)=\\zeta$.\tNote that $x^n(\\cdot)$ satisfies\n\t\t\\begin{align}\\label{3.4}\n\t\tx^n(t)&=\\mathcal{T}_{\\alpha}(t)\\zeta+\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}u^n(s)\\mathrm{d}s,\n\t\t\\end{align} \n\t\tfor $t\\in J$. Since $0\\in\\mathscr{U}_{\\mathrm{ad}}$, without loss of generality, we may assume that $\\mathcal{G}(x^n,u^n) \\leq \\mathcal{G}(x,0)$, where $(x,0)\\in\\mathscr{A}_{\\mathrm{ad}}$. 
Using the definition of $\\mathcal{G}(\\cdot,\\cdot)$, we easily get\n\t\t\\begin{align}\\label{35}\n\t\t\\left\\|x^n(T)-x_{T}\\right\\|^{2}_{\\mathbb{X}}+\\lambda\\int^{T}_{0}\\left\\|u^n(t)\\right\\|^{2}_{\\mathbb{U}}\\mathrm{d}t\\leq \\left\\|x(T)-x_{T}\\right\\|^{2}_{\\mathbb{X}}\\leq 2\\left(\\|x(T)\\|_{\\mathbb{X}}^2+\\|x_T\\|_{\\mathbb{X}}^2\\right)<+\\infty.\n\t\t\\end{align}\n\t\tFrom the above estimate, it is clear that there exists a large $L>0$ (independent of $n$), such that \n\t\t\\begin{align}\\label{3.5}\\int_0^T \\|u^n(t)\\|^2_{\\mathbb{U}} \\mathrm{d} t \\leq L < +\\infty .\\end{align}\n\t\tUsing the expression \\eqref{3.4} and the Cauchy-Schwarz inequality, we compute\n\t\t\\begin{align*}\n\t\t\\left\\|x^n(t)\\right\\|_{\\mathbb{X}}&\\le\\left\\|\\mathcal{T}_{\\alpha}(t)\\zeta\\right\\|_{\\mathbb{X}}+\\left\\|\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}u^n(s)\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\ &\\le\\left\\|\\mathcal{T}_{\\alpha}(t)\\right\\|_{\\mathcal{L}(\\mathbb{X})}\\left\\|\\zeta\\right\\|_{\\mathbb{X}}+\\int_{0}^{t}(t-s)^{\\alpha-1}\\|\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\|_{\\mathcal{L}(\\mathbb{X})}\\left\\|\\mathrm{B}\\right\\|_{\\mathcal{L}(\\mathbb{U},\\mathbb{X})}\\left\\|u^n(s)\\right\\|_{\\mathbb{U}}\\mathrm{d}s\\nonumber\\\\\n\t\t&\\le M\\left\\|\\zeta\\right\\|_{\\mathbb{X}}+\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{t}(t-s)^{\\alpha-1}\\left\\|u^n(s)\\right\\|_{\\mathbb{U}}\\mathrm{d}s\n\t\t\\nonumber\\\\&\\le M\\left\\|\\zeta\\right\\|_{\\mathbb{X}}+\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\left(\\frac{T^{2\\alpha-1}}{2\\alpha-1}\\right)^{\\frac{1}{2}}\\left(\\int_{0}^{t}\\left\\|u^n(s)\\right\\|_{\\mathbb{U}}^2\\mathrm{d}s\\right)^{\\frac{1}{2}}\\nonumber\\\\&\\le M\\left\\|\\zeta\\right\\|_{\\mathbb{X}}+\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\left(\\frac{T^{2\\alpha-1}}{2\\alpha-1}\\right)^{\\frac{1}{2}}L^{\\frac{1}{2}}<+\\infty,\n\t\t\\end{align*}\n\t\tfor all $t\\in J$ and $\\frac{1}{2}<\\alpha<1$. 
Since $\\mathrm{L}^{2}(J;\\mathbb{X})$ is reflexive, by applying the Banach-Alaoglu theorem, we can find a subsequence $\\{x^{n_k}\\}_{k=1}^{\\infty}$ of $\\{x^n\\}_{n=1}^{\\infty}$ such that \n\t\t\\begin{align}\\label{3.6}\n\t\tx^{n_k}\\xrightharpoonup{w}x^0\\ \\mbox{ in }\\ \\mathrm{L}^{2}(J;\\mathbb{X}), \\ \\mbox{ as }\\ k\\to\\infty. \n\t\t\\end{align}\n\t\tFrom the estimate \\eqref{3.5}, we also infer that the sequence $\\{u^n\\}_{n=1}^{\\infty}$ is uniformly bounded in the space $\\mathrm{L}^2(J;\\mathbb{U})$. Further, by using the Banach-Alaoglu theorem, there exists a subsequence, say, $\\{u^{n_k}\\}_{k=1}^{\\infty}$ of $\\{u^n\\}_{n=1}^{\\infty}$ such that \n\t\t\\begin{align*}\n\t\tu^{n_k}\\xrightharpoonup{w}u^0\\ \\mbox{ in }\\ \\mathrm{L}^2(J;\\mathbb{U})=\\mathscr{U}_{\\mathrm{ad}}, \\ \\mbox{ as }\\ k\\to\\infty. \n\t\t\\end{align*}\n\t\tSince $\\mathrm{B}$ is a bounded linear operator from $\\mathbb{U}$ to $\\mathbb{X}$, we have \n\t\t\\begin{align}\\label{3.7}\n\t\t\\mathrm{B}u^{n_k}\\xrightharpoonup{w}\\mathrm{B}u^0\\ \\mbox{ in }\\ \\mathrm{L}^2(J;\\mathbb{X}),\\ \\mbox{ as }\\ k\\to\\infty.\n\t\t\\end{align}\n\t\tMoreover, by using the above convergences together with the compactness of the operator $(\\mathrm{Q}f)(\\cdot) =\\int_{0}^{\\cdot}(\\cdot-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\cdot-s) f(s)\\mathrm{d}s:\\mathrm{L}^2(J;\\mathbb{X})\\rightarrow \\mathrm{C}(J;\\mathbb{X}) $ (see Lemma \\ref{lem2.12} below), we obtain\n\t\t\\begin{align*}\n\t\t\\left\\|\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}u^{n_k}(s)\\mathrm{d}s-\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}u^{0}(s)\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\to0,\\ \\mbox{ as }\\ k\\to\\infty,\n\t\t\\end{align*}\n\t\tfor all $t\\in J$. 
We now estimate \n\t\t\\begin{align}\\label{3.8}\n\t\t\\left\\|x^{n_k}(t)-x^*(t)\\right\\|_{\\mathbb{X}}&=\\left\\|\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}u^{n_k}(s)\\mathrm{d}s-\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}u^{0}(s)\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\to 0,\\ \\mbox{ as } \\ k\\to\\infty, \\ \\mbox{for all }\\ t\\in J,\n\t\t\\end{align}\n\t\twhere \n\t\t\\begin{align*}\n\t\tx^*(t)=\\mathcal{T}_{\\alpha}(t)\\zeta+\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}u^{0}(s)\\mathrm{d}s,\\ t\\in J.\n\t\t\\end{align*}\n\t\tIt is clear from the above expression that the function $x^*\\in \\mathrm{C}(J;\\mathbb{X})$ is the unique mild solution of the equation \\eqref{3.2} with the control $u^{0}\\in\\mathscr{U}_{\\mathrm{ad}}$. Since the weak limit is unique, by combining the convergences \\eqref{3.6} and \\eqref{3.8}, we obtain $x^*(t)=x^0(t),$ for all $t\\in J$. Hence, the function $x^0$ is the unique mild solution of the system \\eqref{3.2} with the control $u^{0}\\in\\mathscr{U}_{\\mathrm{ad}}$, and the whole sequence $\\{x^n\\}_{n=1}^{\\infty}$ converges to $x^0$ in $\\mathrm{C}(J;\\mathbb{X})$. Consequently, we have $(x^0,u^0)\\in\\mathscr{A}_{\\mathrm{ad}}$.\n\t\t\n\t\tIt remains to show that the functional $\\mathcal{G}(\\cdot,\\cdot)$ attains its minimum at $(x^0,u^0)$, that is, \\emph{$\\mathcal{G}=\\mathcal{G}(x^0,u^0)$}. Since the cost functional $\\mathcal{G}(\\cdot,\\cdot)$ given in \\eqref{3.1} is continuous and convex (see Proposition III.1.6 and III.1.10, \\cite{EI}) on $\\mathrm{L}^2(J;\\mathbb{X}) \\times \\mathrm{L}^2(J;\\mathbb{U})$, it follows that $\\mathcal{G}(\\cdot,\\cdot)$ is weakly lower semi-continuous (Proposition II.4.5, \\cite{EI}). 
That is, for a sequence \n\t\t$$(x^n,u^n)\\xrightharpoonup{w}(x^0,u^0)\\ \\mbox{ in }\\ \\mathrm{L}^2(J;\\mathbb{X}) \\times \\mathrm{L}^2(J;\\mathbb{U}),\\ \\mbox{ as }\\ n\\to\\infty,$$\n\t\twe have \n\t\t\\begin{align*}\n\t\t\\mathcal{G}(x^0,u^0) \\leq \\liminf \\limits _{n\\rightarrow \\infty} \\mathcal{G}(x^n,u^n).\n\t\t\\end{align*}\n\t\tHence, we obtain \n\t\t\\begin{align*}\\mathcal{G} \\leq \\mathcal{G}(x^0,u^0) \\leq \\liminf \\limits _{n\\rightarrow \\infty} \\mathcal{G}(x^n,u^n)= \\lim \\limits _{n\\rightarrow \\infty} \\mathcal{G}(x^n,u^n) = \\mathcal{G},\\end{align*}\n\t\tand thus $(x^0,u^0)$ is a minimizer of the problem \\eqref{3.3}. Since the cost functional given in \\eqref{3.1} is strictly convex in $u$, the constraint \\eqref{3.2} is linear and the class $\\mathscr{U}_{\\mathrm{ad}}=\\mathrm{L}^2(J;\\mathbb{U})$ is convex, the optimal control obtained above is unique.\n\t\\end{proof}\n\t\n\t\n\tIn the following lemma, we prove the compactness of the operator $(\\mathrm{Q}f)(\\cdot) =\\int_{0}^{\\cdot}(\\cdot-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\cdot-s) f(s)\\mathrm{d}s:\\mathrm{L}^2(J;\\mathbb{X})\\rightarrow \\mathrm{C}(J;\\mathbb{X}) ,\\ \\mbox{ for }\\ \\frac{1}{2}<\\alpha<1,$ where we assume that $\\mathbb{X}$ is a general Banach space. The case of $\\alpha=1$ is available in Lemma 3.2, \\cite{JYONG}. \n\t\\begin{lem}\\label{lem2.12}\n\t\tSuppose that Assumption (H1) holds. Let the operator $\\mathrm{Q}:\\mathrm{L}^{2}(J;\\mathbb{X})\\rightarrow \\mathrm{C}(J;\\mathbb{X})$ be defined as\n\t\t\\begin{align}\n\t\t(\\mathrm{Q}\\psi)(t)= \\int^{t}_{0}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\psi(s)\\mathrm{d}s, \\ t\\in J,\\ \\frac{1}{2}<\\alpha<1.\n\t\t\\end{align}\n\t\tThen the operator $\\mathrm{Q}$ is compact.\n\t\\end{lem}\n\t\\begin{proof}\n\t\tWe prove that $\\mathrm{Q}$ is a compact operator by using the infinite-dimensional version of the Arzel\\'a-Ascoli theorem (see Theorem 3.7, Chapter 2, \\cite{JYONG}). 
Let a closed and bounded ball $\\mathcal{B}_R$ in $\\mathrm{L}^{2}(J;\\mathbb{X})$ be defined as \n\t\t\\begin{align*}\n\t\t\\mathcal{B}_R=\\left\\{\\psi\\in \\mathrm{L}^{2}(J;\\mathbb{X}):\\left\\|\\psi\\right\\|_{\\mathrm{L}^{2}(J;\\mathbb{X})}\\leq R\\right\\}.\n\t\t\\end{align*}\n\t\tFor $s_1,s_2\\in J$ with $s_1<s_2$, we estimate the difference $\\left\\|(\\mathrm{Q}\\psi)(s_2)-(\\mathrm{Q}\\psi)(s_1)\\right\\|_{\\mathbb{X}}$ as in \\eqref{2.8}. The semigroup $\\mathcal{T}(t)$ is compact for $t>0$, which implies that the operator $\\widehat{\\mathcal{T}}_{\\alpha}(t)$ is continuous under the uniform operator topology (see Theorem 3.2, Chapter 2, \\cite{P}). Hence, using the continuity of $\\widehat{\\mathcal{T}}_{\\alpha}(t)$ in the uniform operator topology, the right hand side of the expression \\eqref{2.8} converges to zero as $|s_2-s_1| \\rightarrow 0$. Thus, $\\mathrm{Q}\\mathcal{B}_R$ is equicontinuous on $J$. \n\t\t\n\t\tNext, we show that the set $\\mathrm{V}(t):= \\left\\{(\\mathrm{Q}\\psi)(t):\\psi\\in \\mathcal{B}_R\\right\\}$ is relatively compact in $\\mathbb{X}$, for each $t\\in J$. For $t=0$, it is easy to check that the set $\\mathrm{V}(t)$ is relatively compact in $\\mathbb{X}$. Let us take $0<t\\le T$. For given $\\eta\\in(0,t)$ and $\\delta>0$, we define\n\t\t\\begin{align*}\n\t\t(\\mathrm{Q}^{\\eta,\\delta}\\psi)(t)&=\\alpha\\int_{0}^{t-\\eta}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}(t^{\\alpha}\\xi)\\psi(s)\\mathrm{d}\\xi\\mathrm{d}s\\nonumber\\\\&=\\mathcal{T}(\\eta^{\\alpha}\\delta)\\alpha\\int_{0}^{t-\\eta}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}(t^{\\alpha}\\xi-\\eta^{\\alpha}\\delta)\\psi(s)\\mathrm{d}\\xi\\mathrm{d}s.\n\t\t\\end{align*}\n\t\tSince the operator $\\mathcal{T}(\\cdot)$ is compact, the set $\\mathrm{V}_{\\eta,\\delta}(t)=\\{(\\mathrm{Q}^{\\eta,\\delta} \\psi)(t):\\psi\\in \\mathcal{B}_R\\}$ is relatively compact in $\\mathbb{X}$. 
Hence, for a given $\\varepsilon>0$, there exist finitely many points $ x_{i}\\in\\mathbb{X}$, for $i=1,\\dots, n $, such that \n\t\t\\begin{align*}\n\t\t\\mathrm{V}_{\\eta,\\delta}(t) \\subset \\bigcup_{i=1}^{n}\\mathcal{S}(x_i, \\varepsilon\/2),\n\t\t\\end{align*}\n\t\twhere $\\mathcal{S}(x_i, \\varepsilon\/2)$ is the open ball centered at $x_i$ with radius $\\varepsilon\/2$. Let us choose $\\delta>0$ and $\\eta>0$ such that \n\t\t\\begin{align*}\n\t\t\\left\\|(\\mathrm{Q}\\psi)(t)-(\\mathrm{Q}^{\\eta,\\delta}\\psi)(t)\\right\\|_{\\mathbb{X}}&\\le\\alpha\\left\\|\\int_{0}^{t}\\int_{0}^{\\delta}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}(t^{\\alpha}\\xi-\\eta^{\\alpha}\\delta)\\psi(s)\\mathrm{d}\\xi\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\alpha\\left\\|\\int_{t-\\eta}^{t}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}(t^{\\alpha}\\xi-\\eta^{\\alpha}\\delta)\\psi(s)\\mathrm{d}\\xi\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\le\\frac{MR\\alpha t^{2\\alpha-1}}{2\\alpha-1}\\int_{0}^{\\delta}\\xi\\varphi_{\\alpha}(\\xi)\\mathrm{d}\\xi+\\frac{MR\\alpha}{\\Gamma(1+\\alpha)}\\frac{\\eta^{2\\alpha-1}}{2\\alpha-1}\\le\\frac{\\varepsilon}{2}.\n\t\t\\end{align*}\n\t\tConsequently, $$ \\mathrm{V}(t)\\subset \\bigcup_{i=1}^{n}\\mathcal{S}(x_i, \\varepsilon ). $$ Thus, for each $t\\in J$, the set $\\mathrm{V}(t)$ is relatively compact in $ \\mathbb{X}$. Then, by invoking the Arzel\\'a-Ascoli theorem, we conclude that the operator $\\mathrm{Q}$ is compact.\n\t\\end{proof}\n\tSince $\\mathbb{X}^*$ is strictly convex, the norm $\\|\\cdot\\|_{\\mathbb{X}}$ is Gateaux differentiable (cf. Fact 8.12, \\cite{MFb}). Furthermore, every separable Banach space admits an equivalent Gateaux differentiable norm (cf. Theorem 8.13, \\cite{MFb}). 
Since $\\mathcal{J}$ is single-valued, the Gateaux derivative of $\\phi(x)=\\frac{1}{2}\\|x\\|_{\\mathbb{X}}^2$ is the duality map, that is, $$\\langle\\partial_x\\phi(x),y\\rangle=\\lim_{\\varepsilon \\to 0}\\frac{\\phi(x+\\varepsilon y)-\\phi(x)}{\\varepsilon}=\\frac{1}{2}\\frac{\\mathrm{d}}{\\mathrm{d}\\varepsilon}\\|x+\\varepsilon y\\|_{\\mathbb{X}}^2\\Big|_{\\varepsilon=0}=\\langle\\mathcal{J}[x],y\\rangle,$$ for $y\\in\\mathbb{X}$, where $\\partial_x\\phi(x)$ denotes the Gateaux derivative of $\\phi$ at $x\\in\\mathbb{X}$. In fact, since $\\mathbb{U}$ is a separable Hilbert space (identified with its own dual), by Theorem 8.24, \\cite{MFb}, we infer that $\\mathbb{U}$ admits a Fr\\'echet differentiable norm. The explicit expression of the optimal control ${u}$ in the feedback form is obtained in the following lemma: \n\t\n\t\\begin{lem}\\label{lem3.1}\n\t\tLet $u$ be the optimal control satisfying \\eqref{3.3} and minimizing the cost functional \\eqref{3.1}. Then $u$ is given by \n\t\t\\begin{align*}\n\t\tu(t)=(T-t)^{\\alpha-1}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(T-t)^*\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{0}^T)p(x(\\cdot))\\right],\\ t\\in [0, T),\\ \\lambda>0,\\ \\frac{1}{2}<\\alpha<1, \n\t\t\\end{align*}\n\t\twith\n\t\t\\begin{align*}\n\t\tp(x(\\cdot))=x_{T}-\\mathcal{T}_{\\alpha}(T)\\zeta.\n\t\t\\end{align*}\n\t\\end{lem}\n\t\\begin{proof}\n\t\tLet us first consider the functional \n\t\t\\begin{align*}\n\t\t\\mathcal{I}(\\varepsilon)=\\mathcal{G}(x_{u+\\varepsilon w},u+\\varepsilon w),\n\t\t\\end{align*}\n\t\twhere $(x,u)$ is the optimal solution of \\eqref{3.3} and $w\\in \\mathrm{L}^{2}(J;\\mathbb{U})$. Also the function $x_{u+\\varepsilon w}$ is the unique mild solution of \\eqref{3.2} corresponding to the control $u+\\varepsilon w$. 
Then, it is immediate that \n\t\t\\begin{align*}\n\t\tx_{u+\\varepsilon w}(t)= \\mathcal{T}_{\\alpha}(t)\\zeta+\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}(u+\\varepsilon w)(s)\\mathrm{d}s.\n\t\t\\end{align*}\n\t\tIt is clear that $\\varepsilon=0$ is a critical point of $\\mathcal{I}(\\varepsilon)$. We now evaluate the first variation of the cost functional $\\mathcal{G}$ (defined in \\eqref{3.1}) as\n\t\t\\begin{align*}\n\t\t\\frac{\\mathrm{d}}{\\mathrm{d}\\varepsilon}\\mathcal{I}(\\varepsilon)\\Big|_{\\varepsilon=0}&=\\frac{\\mathrm{d}}{\\mathrm{d}\\varepsilon}\\bigg[\\left\\|x_{u+\\varepsilon w}(T)-x_{T}\\right\\|^{2}_{\\mathbb{X}}+\\lambda\\int^{T}_{0}\\left\\|u(t)+\\varepsilon w(t)\\right\\|^{2}_{\\mathbb{U}}\\mathrm{d}t\\bigg]_{\\varepsilon=0}\\nonumber\\\\\n\t\t&=2\\bigg[\\langle \\mathcal{J}(x_{u+\\varepsilon w}(T)-x_{T}), \\frac{\\mathrm{d}}{\\mathrm{d}\\varepsilon}(x_{u+\\varepsilon w}(T)-x_{T})\\rangle\\nonumber\\\\&\\qquad +\\lambda\\int^{T}_{0}(u(t)+\\varepsilon w(t),\\frac{\\mathrm{d}}{\\mathrm{d}\\varepsilon}(u(t)+\\varepsilon w(t)))\\mathrm{d}t\\bigg]_{\\varepsilon=0}\\nonumber\\\\\n\t\t&=2\\left\\langle\\mathcal{J}(x(T)-x_T),\\int_0^T(T-t)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-t)\\mathrm{B}w(t)\\mathrm{d}t \\right\\rangle+2\\lambda\\int_0^T(u(t),w(t))\\mathrm{d} t. 
\n\t\t\\end{align*}\n\t\tSetting the first variation of the cost functional equal to zero, we deduce that\n\t\t\\begin{align}\\label{3.9}\n\t\t0&=\\left\\langle\\mathcal{J}(x(T)-x_T),\\int_0^T(T-t)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-t)\\mathrm{B}w(t)\\mathrm{d}t\\right\\rangle+\\lambda\\int_0^T(u(t),w(t))\\mathrm{d} t\\nonumber\\\\&=\\int_0^T(T-t)^{\\alpha-1}\\left\\langle\\mathcal{J}(x(T)-x_T),\\widehat{\\mathcal{T}}_{\\alpha}(T-t)\\mathrm{B}w(t) \\right\\rangle\\mathrm{d}t+\\lambda\\int_0^T(u(t),w(t))\\mathrm{d} t\\nonumber\\\\&= \\int_0^T\\left((T-t)^{\\alpha-1}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(T-t)^*\\mathcal{J}(x(T)-x_T)+\\lambda u(t),w(t) \\right)\\mathrm{d}t,\n\t\t\\end{align}\n\t\twhere $(\\cdot,\\cdot)$ is the inner product in the Hilbert space $\\mathbb{U}$. Since $w\\in \\mathrm{L}^{2}(J;\\mathbb{U})$ is an arbitrary element (one can choose $w$ to be $(T-t)^{\\alpha-1}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(T-t)^*\\mathcal{J}(x(T)-x_T)+\\lambda u(t)$), it follows that the optimal control is given by\n\t\t\\begin{align}\\label{3.10}\n\t\tu(t)&= -\\lambda^{-1}(T-t)^{\\alpha-1}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(T-t)^*\\mathcal{J}(x(T)-x_T),\n\t\t\\end{align}\n\t\tfor a.e. $t\\in [0,T]$. From the relations \\eqref{3.9} and \\eqref{3.10}, it is clear that $u\\in\\mathrm{C}([0,T);\\mathbb{U})$. 
Using the above expression of the control, we find\n\t\n\t\t\\begin{align}\\label{3.11}\n\t\tx(T)&=\\mathcal{T}_{\\alpha}(T)\\zeta-\\int^{T}_{0}\\lambda^{-1}(T-s)^{2(\\alpha-1)}\\widehat{\\mathcal{T}}_{\\alpha}(T-s)\\mathrm{B}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(T-s)^*\\mathcal{J}(x(T)-x_T)\\mathrm{d}s\\nonumber\\\\\n\t\t&= \\mathcal{T}_{\\alpha}(T)\\zeta-\\lambda^{-1}\\Phi_{0}^T\\mathcal{J}\\left[x(T)-x_{T}\\right].\n\t\t\\end{align}\n\t\tLet us set\n\t\t\\begin{align}\\label{3.12}\n\t\tp(x(\\cdot)):=x_{T}-\\mathcal{T}_{\\alpha}(T)\\zeta.\n\t\t\\end{align}\n\t\tCombining \\eqref{3.11} and \\eqref{3.12}, we have\n\t\t\\begin{align}\\label{3.13}\n\t\tx(T)-x_{T}&=-p(x(\\cdot))-\\lambda^{-1}\\Phi_{0}^T\\mathcal{J}\\left[x(T)-x_{T}\\right].\n\t\t\\end{align}\n\t\tRewriting \\eqref{3.13} as $(\\lambda\\mathrm{I}+\\Phi_0^T\\mathcal{J})\\left[x(T)-x_{T}\\right]=-\\lambda p(x(\\cdot))$, we deduce that \n\t\t\\begin{align}\\label{3.15}\n\t\tx(T)-x_T=-\\lambda(\\lambda\\mathrm{I}+\\Phi_0^T\\mathcal{J})^{-1}p(x(\\cdot))=-\\lambda\\mathcal{R}(\\lambda,\\Phi_0^T)p(x(\\cdot)).\n\t\t\\end{align}\n\t\tFinally, from \\eqref{3.10}, we get the expression for optimal control as\n\t\t\\begin{align*}\n\t\tu(t)=(T-t)^{\\alpha-1}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(T-t)^*\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{0}^T)p(x(\\cdot))\\right],\\ \\mbox{ for } \\ t\\in [0,T),\n\t\t\\end{align*}\n\t\twhich completes the proof. \n\t\\end{proof}\n\tNext, we examine the approximate controllability of the linear control system \\eqref{3.2} through the following lemma. \n\t\\begin{lem}\\label{lem4.2}\n\t\tThe linear control system \\eqref{3.2} is approximately controllable on $J$ if and only if Assumption (H0) holds. \n\t\\end{lem}\n\tA proof of the above lemma can be obtained by proceeding similarly to the proof of Theorem 3.2, \\cite{SM}.\n\t\\begin{rem}\\label{rem3.4}\n\t\tIf Assumption (\\textit{H0}) holds, then by Theorem 2.3, \\cite{M}, we know that the operator $\\Phi_{0}^{T}$ is positive and vice versa. 
The positivity of $\\Phi_{0}^{T}$ is equivalent to $$ \\langle x^*, \\Phi_{0}^{T}x^*\\rangle=0\\Rightarrow x^*=0.$$ We know that \n\t\t\\begin{align}\n\t\t\\langle x^*, \\Phi_{0}^{T}x^*\\rangle =\\int_0^T\\left\\|(T-t)^{\\alpha-1}\\mathrm{B}^*\\widehat{\\mathcal{T}}_\\alpha(T-t)^*x^*\\right\\|_{\\mathbb{U}}^2\\mathrm{d}t.\n\t\t\\end{align}\n\t\tBy the above fact and Lemma \\ref{lem4.2}, we infer that the approximate controllability of the linear system \\eqref{3.2} is equivalent to the condition $$\\mathrm{B}^*\\widehat{\\mathcal{T}}_\\alpha(T-t)^*x^*=0,\\ 0\\le t<T\\ \\Rightarrow\\ x^*=0.$$ Moreover, by Lemma \\ref{lem3.1}, the optimal control of the corresponding regulator problem is given by\n\t\t\\begin{align*}\n\t\tu(t)=(T-t)^{\\alpha-1}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(T-t)^*\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{0}^T)p(x(\\cdot))\\right],\\ t\\in [0, T),\\ \\lambda>0,\\ \\frac{1}{2}<\\alpha<1, \n\t\t\\end{align*}\n\t\twith\n\t\t$\tp(x(\\cdot))=x_{T}-\\mathcal{T}_{\\alpha}(T)\\zeta,$ and \n\t\t\\begin{equation}\\label{3.20}\n\t\t\\Phi_{0}^{T}=\\int^{T}_{0}(T-t)^{2(\\alpha-1)}\\widehat{\\mathcal{T}}_{\\alpha}(T-t)\\mathrm{B}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(T-t)^*\\mathrm{d}t. \n\t\t\\end{equation}\n\t\\end{rem}\n\t\n\t\\section{Approximate Controllability of the Semilinear Impulsive System} \\label{semilinear}\\setcounter{equation}{0}\n\tThe purpose of this section is to investigate the approximate controllability of the fractional order semilinear impulsive system \\eqref{1.1}. 
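Before turning to the semilinear analysis, we record an illustrative special case (this display is only for orientation and is not used in the sequel, which works with the duality mapping $\\mathcal{J}$ on a reflexive Banach space): if $\\mathbb{X}$ is a Hilbert space identified with its own dual, then $\\mathcal{J}=\\mathrm{I}$ and $\\mathcal{R}(\\lambda,\\Phi_{0}^{T})=(\\lambda\\mathrm{I}+\\Phi_{0}^{T})^{-1}$, so the feedback control of Lemma \\ref{lem3.1} reduces to the familiar form\n\t\\begin{align*}\n\tu(t)=(T-t)^{\\alpha-1}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(T-t)^*(\\lambda\\mathrm{I}+\\Phi_{0}^{T})^{-1}\\left[x_{T}-\\mathcal{T}_{\\alpha}(T)\\zeta\\right],\\ t\\in[0,T).\n\t\\end{align*}\n\tThe control constructed below applies the same scheme on each subinterval $[\\tau_k,t_{k+1})$, with the resolvent $\\mathcal{R}(\\lambda,\\Phi_{\\tau_k}^{t_{k+1}})$ in place of $\\mathcal{R}(\\lambda,\\Phi_{0}^{T})$.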
In order to acquire sufficient conditions for approximate controllability, we first show that for $\\lambda>0$ and $x_T\\in\\mathbb{X}$, there exists a mild solution of the system \\eqref{1.1} with the control function defined as \n\t\\begin{align}\\label{C}\n\tu^\\alpha_{\\lambda}(t)&=\\sum_{k=0}^{m}u^\\alpha_{k,\\lambda}(t)\\chi_{[\\tau_k, t_{k+1})}(t), \\ t\\in J,\\ \\frac{1}{2}<\\alpha<1,\n\t\\end{align}\n\twhere \n\t\\begin{align*}\n\tu^\\alpha_{k,\\lambda}(t)&=(t_{k+1}-t)^{\\alpha-1}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(t_{k+1}-t)^*\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{\\tau_k}^{t_{k+1}})p_k(x(\\cdot))\\right],\n\t\\end{align*}\n\tfor $t\\in [\\tau_k, t_{k+1}),k=0,1,\\ldots,m$, with\n\t\\begin{align*}\n\tp_0(x(\\cdot))&=\\zeta_{0}-\\mathcal{T}_{\\alpha}(t_1)\\psi(0)-\\int^{t_1}_{0}(t_1-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t_1-s)f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\mathrm{d}s,\\nonumber\\\\\n\tp_k(x(\\cdot))&=\\zeta_{k}-\\mathcal{T}_{\\alpha}(t_{k+1}-\\tau_k)h_k(\\tau_k,\\tilde{x}(t_k^-))\\\\&\\quad+\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_k-s)\\left[f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})+\\mathrm{B}\\sum_{j=0}^{k-1}u^\\alpha_{j,\\lambda}(s)\\chi_{[\\tau_j, t_{j+1})}(s)\\right]\\mathrm{d}s \\\\&\\quad-\\int_{0}^{t_{k+1}}(t_{k+1}-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t_{k+1}-s)f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\mathrm{d}s\t\\\\&\\quad-\\int_{0}^{\\tau_k}(t_{k+1}-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t_{k+1}-s)\\mathrm{B}\\sum_{j=0}^{k-1}u^\\alpha_{j,\\lambda}(s)\\chi_{[\\tau_j, t_{j+1})}(s)\\mathrm{d}s, \\ k=1,\\ldots,m,\n\t\\end{align*} \n\tand $\\tilde{x}:(-\\infty,T]\\rightarrow\\mathbb{X}$ such that $\\tilde{x}(t)=\\psi(t), \\ t\\in(-\\infty,0]$, $\\tilde{x}(t)=x(t),\\ t\\in J=[0,T],$ and $\\zeta_{k}\\in \\mathbb{X}$ for $k=0,1,\\ldots,m$.\n\t\\begin{rem}\n\t\tSince the operator $\\Phi_{\\tau_k}^{t_{k+1}},$ for each $k=0,\\ldots,m,$ is non-negative, linear 
and bounded for $\\frac{1}{2}<\\alpha<1$, Lemma \\ref{lem2.9} is also valid for each $\\Phi_{\\tau_k}^{t_{k+1}},$ for $k=0,\\ldots,m$.\n\t\\end{rem}\n\tThe following theorem provides the existence of a mild solution of the system \\eqref{1.1} with the control \\eqref{C}. \n\t\n\t\\begin{theorem}\\label{thm4.3}\n\t\tLet Assumptions (H1)-(H3) hold true. Then for every $ \\lambda>0 $ and fixed $\\zeta_{k}\\in\\mathbb{X},$ for $k=0,1,\\ldots,m$, the system \\eqref{1.1} with the control \\eqref{C} has at least one mild solution on $J$, provided \n\t\t\\begin{align}\\label{cnd}\n\t\\frac{MH_{2}\\alpha\\beta}{\\Gamma(1+\\alpha)}\\frac{2T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\left\\{1+\\frac{(m+1)(m+2)\\tilde{R}}{2}+\\frac{m(m+1)\\tilde{R}^2}{2}\\sum_{j=0}^{m-1}e^{\\frac{(m+j)(m-j-1)\\tilde{R}}{2}}\\right\\}<1,\n\t\t\\end{align}\n\t\twhere $\\tilde{R}=\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\frac{2T^{2\\alpha-1}}{\\lambda(2\\alpha-1)}$ and $\\mu=\\frac{\\alpha-\\alpha_1}{1-\\alpha_1}$.\n\t\\end{theorem}\n\t\\begin{proof}\n\t\tLet us take a set $\\mathrm{E}:=\\{x\\in\\mathrm{PC}(J;\\mathbb{X}) : x(0)=\\psi(0)\\}$ with the norm $\\left\\|\\cdot\\right\\|_{\\mathrm{PC}(J;\\mathbb{X})}$. 
For each $r>0$, we consider a set $\\mathrm{E}_{r}=\\{x\\in\\mathrm{E} : \\left\\|x\\right\\|_{\\mathrm{PC}(J;\\mathbb{X})}\\le r\\}$.\n\t\t\n\t\tFor $\\lambda>0$, let us define an operator $F_{\\lambda}:\\mathrm{E}\\to\\mathrm{E}$ such that\n\t\t\\begin{eqnarray}\\label{2}\n\t\t(F_{\\lambda}x)(t)=\\left\\{\n\t\t\\begin{aligned}\n\t\t&\\mathcal{T}_{\\alpha}(t)\\psi(0)+\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x}_{\\rho(s, \\tilde{x}_s)})\\right]\\mathrm{d}s,\\ t\\in[0, t_1],\\\\\n\t\t&\th_k(t, \\tilde{x}(t_k^-)),\\ t\\in(t_k, \\tau_k],\\ k=1,\\ldots,m,\\\\\n\t\t&\\mathcal{T}_{\\alpha}(t-\\tau_k)h_k(\\tau_k,\\tilde{x}(t_k^-))-\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_k-s)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x}_{\\rho(s, \\tilde{x}_s)})\\right]\\mathrm{d}s\\\\&\\quad+\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s) \\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x}_{\\rho(s, \\tilde{x}_s)})\\right]\\mathrm{d}s,\\ t\\in(\\tau_k,t_{k+1}],\\ k=1,\\ldots,m,\n\t\t\\end{aligned}\n\t\t\\right.\n\t\t\\end{eqnarray}\n\t\twhere $u^{\\alpha}_{\\lambda}$ is given in \\eqref{C}. From the definition of $ F_{\\lambda}$, we infer that the system \\eqref{1.1} has a mild solution if the operator $ F_{\\lambda}$ has a fixed point. We divide the proof of the fact that the operator $F_{\\lambda}$ has a fixed point into the following steps. \n\t\t\\vskip 0.1in \n\t\t\\noindent\\textbf{Step (1): } \\emph{$ F_{\\lambda}(\\mathrm{E}_r)\\subset \\mathrm{E}_r,$ for some $ r $}. On the contrary, let us suppose that our claim is not true. Then for any $\\lambda>0$ and for all $r>0$, there exists $x^r\\in \\mathrm{E}_r,$ such that $\\left\\|(F_{\\lambda}x^r)(t)\\right\\|_\\mathbb{X}>r,$ for some $t\\in J$, where $t$ may depend upon $r$. 
\n\t\tFirst, by using Assumption \\ref{as2.1}, we estimate \n\t\t\\begin{align}\\label{4.4}\n\t\t\\left\\|p_0(x(\\cdot))\\right\\|_{\\mathbb{X}}&\\le\\left\\|\\zeta_0\\right\\|_{\\mathbb{X}}+\\left\\|\\mathcal{T}_{\\alpha}(t_1)\\psi(0)\\right\\|_{\\mathbb{X}}+\\int_{0}^{t_1}(t_1-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(t_1-s)f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\le\\left\\|\\zeta_0\\right\\|_{\\mathbb{X}}+M\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{t_1}(t_1-s)^{\\alpha-1}\\gamma_{r'}(s)\\mathrm{d}s\\nonumber\\\\&\\le\\left\\|\\zeta_0\\right\\|_{\\mathbb{X}}+M\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\int_{0}^{t_1}(t_1-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}\\left(\\int_{0}^{t_1}(\\gamma_{r'}(s))^{\\frac{1}{\\alpha_1}}\\mathrm{d}s\\right)^{\\alpha_1}\\nonumber\\\\&\\le\\left\\|\\zeta_0\\right\\|_{\\mathbb{X}}+M\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\frac{t_{1}^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,t_1];\\mathbb{R^{+}})}\\nonumber\\\\&\\le\\left\\|\\zeta_0\\right\\|_{\\mathbb{X}}+M\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\frac{T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^{+}})}\\nonumber\\\\&\\le\\left\\|\\zeta_0\\right\\|_{\\mathbb{X}}+M\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\frac{2T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^{+}})}=N_0,\n\t\t\\end{align} \n\t\twhere $\\mu=\\frac{\\alpha-\\alpha_1}{1-\\alpha_1}$ and $r'=H_{1}\\left\\|\\psi\\right\\|_{\\mathfrak{B}}+H_{2}r$.\tFurther, using Assumption \\ref{as2.1}, we estimate 
\n\t\t\\begin{align*}\n\t\t&\\left\\|p_k(x(\\cdot))\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\le\\left\\|\\zeta_k\\right\\|_{\\mathbb{X}}+\\left\\|\\mathcal{T}_{\\alpha}(t-\\tau_k)h_k(\\tau_k,\\tilde{x}(t_k^-))\\right\\|_{\\mathbb{X}}+\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_k-s)f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{0}^{t_{k+1}}(t_{k+1}-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(t_{k+1}-s)f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_k-s)\\mathrm{B}\\sum_{j=0}^{k-1}u^\\alpha_{j,\\lambda}(s)\\chi_{[\\tau_j,t_{j+1})}(s)\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{0}^{\\tau_k}(t_{k+1}-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(t_{k+1}-s)\\mathrm{B}\\sum_{j=0}^{k-1}u^\\alpha_{j,\\lambda}(s)\\chi_{[\\tau_j, 
t_{j+1})}(s)\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\le\\left\\|\\zeta_k\\right\\|_{\\mathbb{X}}+Ml_k+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\frac{\\tau_k^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,\\tau_k];\\mathbb{R^{+}})}+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\frac{t_{k+1}^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,t_{k+1}];\\mathbb{R^{+}})}\\nonumber\\\\&\\quad+\\frac{1}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\sum_{j=0}^{k-1}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}}\\int_{\\tau_j}^{t_{j+1}}(\\tau_k-s)^{\\alpha-1}(t_{j+1}-s)^{\\alpha-1}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{1}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\sum_{j=0}^{k-1}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}}\\int_{\\tau_j}^{t_{j+1}}(t_{k+1}-s)^{\\alpha-1}(t_{j+1}-s)^{\\alpha-1}\\mathrm{d}s\\nonumber\\\\&\\le\\left\\|\\zeta_k\\right\\|_{\\mathbb{X}}+Ml_k+\\frac{2M\\alpha}{\\Gamma(1+\\alpha)}\\frac{T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^{+}})}\\nonumber\\\\&\\quad+\\frac{2}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\sum_{j=0}^{k-1}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}}\\int_{\\tau_j}^{t_{j+1}}(t_{j+1}-s)^{2(\\alpha-1)}\\mathrm{d}s\\nonumber\\\\&\\le N_k+\\frac{2}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\sum_{j=0}^{k-1}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}}\\frac{(t_{j+1}-\\tau_j)^{2\\alpha-1}}{2\\alpha-1}\\nonumber\\\\&\\le N_k+\\frac{2T^{2\\alpha-1}}{\\lambda(2\\alpha-1)}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\sum_{j=0}^{k-1}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}}\n\t\t\\nonumber\\\\&= 
N_k+\\tilde{R}\\sum_{j=0}^{k-1}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}},\n\t\t\\end{align*}\n\t\twhere $\\tilde{R}=\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\frac{2T^{2\\alpha-1}}{\\lambda(2\\alpha-1)}$ and $N_k=\\left\\|\\zeta_k\\right\\|_{\\mathbb{X}}+Ml_k+\\frac{2T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left\\|\\gamma_{r'}\\right\\|_{\\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^+})}$ for $k=1,\\ldots,m.$ Applying the discrete Gronwall-Bellman lemma (Lemma \\ref{lem2.13}), we obtain \n\t\t\\begin{align}\\label{4.5}\n\t\t\\left\\|p_k(x(\\cdot))\\right\\|_{\\mathbb{X}}\\le&N_k+\\tilde{R}\\sum_{j=0}^{k-1}N_je^{\\frac{(k+j)(k-j-1)\\tilde{R}}{2}}=C_k, \\ \\mbox{for}\\ k=1,\\ldots,m.\n\t\t\\end{align}\n\t\tTaking $t\\in[0,t_1]$ and using the relation \\eqref{2.5}, Lemma \\ref{lem2.5} and Assumption \\ref{as2.1} \\textit{(H1)}-\\textit{(H3)}, we compute\n\t\t\\begin{align}\\label{4.21}\n\t\tr&<\\left\\|(F_{\\lambda}x^r)(t)\\right\\|_\\mathbb{X}\\nonumber\\\\&=\\left\\|\\mathcal{T}_{\\alpha}(t)\\psi(0)+\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x}_{\\rho(s, \\tilde{x}_s)})\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}} \\nonumber\\\\&\\le\\left\\|\\mathcal{T}_{\\alpha}(t)\\psi(0)\\right\\|_{\\mathbb{X}}+\\int_0^t(t-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}u^{\\alpha}_{\\lambda}(s)\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_0^t(t-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(t-s)f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\le 
M\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}+\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\!\\!\\int_0^t\\!\\!(t-s)^{\\alpha-1}\\left\\|u^{\\alpha}_{\\lambda}(s)\\right\\|_{\\mathbb{U}}\\mathrm{d}s+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\!\\!\\int_0^t\\!\\!(t-s)^{\\alpha-1}\\left\\|f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\le M\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}+\\frac{1}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\left\\|p_0(x(\\cdot))\\right\\|_{\\mathbb{X}}\\int_{0}^{t}(t-s)^{\\alpha-1}(t_1-s)^{\\alpha-1}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\int_0^t(t-s)^{\\alpha-1}\\gamma_{r'}(s)\\mathrm{d}s\\nonumber\\\\&\\le M\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}+\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\frac{t^{2\\alpha-1}}{\\lambda(2\\alpha-1)}\\left\\|p_0(x(\\cdot))\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\int_0^t(t-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}\\left\\|\\gamma_{r'}\\right\\|_{\\mathrm{L}^{\\frac{1}{\\alpha_1}}([0,t];\\mathbb{R^{+}})}\\nonumber\\\\&\\le M\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}+\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\frac{T^{2\\alpha-1}}{\\lambda(2\\alpha-1)}\\left\\|p_0(x(\\cdot))\\right\\|_{\\mathbb{X}}+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\frac{T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\left\\|\\gamma_{r'}\\right\\|_{\\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^+})}\\nonumber\\\\&\\le M\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}+\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\frac{2T^{2\\alpha-1}}{\\lambda(2\\alpha-1)}\\sum_{j=0}^{m}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}}+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\frac{2T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\left\\|\\gamma_{r'}\\right\\|_{\\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^+})}\\nonumber\\\\&\\le 
M\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}+\\tilde{R}\\sum_{j=0}^{m}C_j+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\frac{2T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\left\\|\\gamma_{r'}\\right\\|_{\\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^+})},\n\t\t\\end{align}\n\t\twhere $C_0=N_0$. For $t\\in(t_k,\\tau_k],\\ k=1,\\ldots,m$, we obtain \n\t\t\\begin{align}\\label{4.22}\n\t\tr<\\left\\|(F_{\\lambda}x^r)(t)\\right\\|_\\mathbb{X}&\\le\\left\\|h_k(t, \\tilde{x}(t_k^-))\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\le l_k\\le l_k+\\tilde{R}\\sum_{j=0}^{m}C_j+\\frac{2T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left\\|\\gamma_{r'}\\right\\|_{\\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^+})}.\n\t\t\\end{align}\n\t\tTaking $t\\in(\\tau_k,t_{k+1}], \\ k=1,\\dots,m$, we evaluate\n\t\t\\begin{align}\\label{4.23}\n\t\tr&<\\left\\|(F_{\\lambda}x^r)(t)\\right\\|_\\mathbb{X}\\nonumber\\\\&=\\bigg\\|\\mathcal{T}_{\\alpha}(t-\\tau_k)h_k(\\tau_k,\\tilde{x}(t_k^-))-\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_k-s)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right]\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat {\\mathcal{T}}_{\\alpha}(t-s)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right]\\mathrm{d}s\\bigg\\|_{\\mathbb{X}}\\nonumber\\\\&\\le\\left\\|\\mathcal{T}_{\\alpha} (t)h_k(\\tau_k, 
\\tilde{x}(t_k^-))\\right\\|_{\\mathbb{X}}+\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_k-s)\\mathrm{B}u^{\\alpha}_{\\lambda}(s)\n\t\t\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_k-s)f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s+\\int_{0}^t(t-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}u^{\\alpha}_{\\lambda}(s)\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad +\\int_{0}^t(t-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(t-s)f(s,\\tilde{x}_{\\rho(s, \\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\n\t\t\\nonumber\\\\&\\le Ml_k+\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\left\\|u^{\\alpha}_{\\lambda}(s)\\right\\|_{\\mathbb{U}}\\mathrm{d}s+\\frac{M\\alpha}\n\t\t{\\Gamma(1+\\alpha)}\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\left\\|f(s,\\tilde{x}_{\\rho(s, \\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{t}(t-s)^{\\alpha-1}\\left\\|u^{\\alpha}_{\\lambda}(s)\\right\\|_{\\mathbb{U}}\\mathrm{d}s+\\frac{M\\alpha}\n\t\t{\\Gamma(1+\\alpha)}\\int_{0}^{t}(t-s)^{\\alpha-1}\\left\\|f(s,\\tilde{x}_{\\rho(s, \\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\le 
Ml_k+\\frac{1}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\sum_{j=0}^{k-1}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}}\\int_{\\tau_j}^{t_{j+1}}(\\tau_k-s)^{\\alpha-1}(t_{j+1}-s)^{\\alpha-1}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,\\tau_k];\\mathbb{R^{+}})}\\nonumber\\\\&\\quad+\\frac{1}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\Bigg[\\sum_{j=0}^{k-1}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}}\\int_{\\tau_j}^{t_{j+1}}(t-s)^{\\alpha-1}(t_{j+1}-s)^{\\alpha-1}\\mathrm{d}s\\nonumber\\\\&\\qquad+\\left\\|p_k(x(\\cdot))\\right\\|_{\\mathbb{X}}\\int_{\\tau_k}^{t}(t-s)^{\\alpha-1}(t_{k+1}-s)^{\\alpha-1}\\mathrm{d}s\\Bigg]+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\int_{0}^{t}(t-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,t];\\mathbb{R^{+}})}\\nonumber\\\\&\\le 
Ml_k+\\frac{2}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\sum_{j=0}^{k-1}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}}\\int_{\\tau_j}^{t_{j+1}}(\\tau_k-s)^{\\alpha-1}(t_{j+1}-s)^{\\alpha-1}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,\\tau_k];\\mathbb{R^{+}})}\\nonumber\\\\&\\quad+\\frac{1}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\left\\|p_k(x(\\cdot))\\right\\|_{\\mathbb{X}}\\int_{\\tau_k}^{t}(t-s)^{\\alpha-1}(t_{k+1}-s)^{\\alpha-1}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\int_{0}^{t}(t-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,t];\\mathbb{R^{+}})}\\nonumber\\\\&\\le Ml_k+\\frac{2}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\sum_{j=0}^{k-1}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}}\\int_{\\tau_j}^{t_{j+1}}(t_{j+1}-s)^{2(\\alpha-1)}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,\\tau_k];\\mathbb{R^{+}})}\\nonumber\\\\&\\quad+\\frac{1}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\left\\|p_k(x(\\cdot))\\right\\|_{\\mathbb{X}}\\int_{\\tau_k}^{t}(t-s)^{2(\\alpha-1)}\\mathrm{d}s+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\int_{0}^{t}(t-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,t];\\mathbb{R^{+}})}\\nonumber\\\\&\\le 
Ml_k+\\frac{2}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\sum_{j=0}^{k-1}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}}\\frac{(t_{j+1}-\\tau_j)^{2\\alpha-1}}{2\\alpha-1}\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,\\tau_k];\\mathbb{R^{+}})}\\nonumber\\\\&\\quad+\\frac{1}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\left\\|p_k(x(\\cdot))\\right\\|_{\\mathbb{X}}\\frac{(t-\\tau_k)^{2\\alpha-1}}{2\\alpha-1}+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\int_{0}^{t}(t-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,t];\\mathbb{R^{+}})}\\nonumber\\\\&\\le Ml_k+\\frac{2T^{2\\alpha-1}}{\\lambda(2\\alpha-1)}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\sum_{j=0}^{k-1}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,\\tau_k];\\mathbb{R^{+}})}+\\frac{2T^{2\\alpha-1}}{\\lambda(2\\alpha-1)}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\left\\|p_k(x(\\cdot))\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\int_{0}^{t}(t-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,t];\\mathbb{R^{+}})}\\nonumber\\\\&\\le Ml_k+\\tilde{R}\\sum_{j=0}^{k}C_j+\\frac{2T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left\\|\\gamma_{r'}\\right\\|_{\\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^+})}\\nonumber\\\\&\\le 
Ml_k+\\tilde{R}\\sum_{j=0}^{m}C_j+\\frac{2T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left\\|\\gamma_{r'}\\right\\|_{\\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^+})}.\n\t\t\\end{align}\n\t\tUsing Assumption \\ref{as2.1} (\\textit{H2})(ii), we obtain\n\t\t\\begin{align*}\n\t\t\\liminf_{r \\rightarrow \\infty }\\frac {\\left\\|\\gamma_{r'}\\right\\|_{\\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^+})}}{r}&=\\liminf_{r \\rightarrow \\infty } \\left (\\frac {\\left\\|\\gamma_{r'}\\right\\|_{\\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^+})}}{r'}\\times\\frac{r'}{r}\\right)=H_2\\beta.\n\t\t\\end{align*}\n\t\tThus, dividing \\eqref{4.21}, \\eqref{4.22} and \\eqref{4.23} by $r$ and letting $r\\to\\infty$, we obtain\n\t\t\\begin{align*}\n\t\t\\frac{MH_{2}\\alpha\\beta}{\\Gamma(1+\\alpha)}\\frac{2T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\left\\{1+\\frac{(m+1)(m+2)\\tilde{R}}{2}+\\frac{m(m+1)\\tilde{R}^2}{2}\\sum_{j=0}^{m-1}e^{\\frac{(m+j)(m-j-1)\\tilde{R}}{2}}\\right\\}>1,\n\t\t\\end{align*}\n\t\twhich contradicts \\eqref{cnd}. Hence, for some $ r>0$, $F_{\\lambda}(\\mathrm{E}_{r})\\subset \\mathrm{E}_{r}.$\n\t\t\\vskip 0.1in \n\t\t\\noindent\\textbf{Step (2): } \\emph{The operator $ F_{\\lambda}$ is continuous}. 
To achieve this goal, we consider a sequence $\\{{x}^n\\}^\\infty_{n=1}\\subseteq \\mathrm{E}_r$ such that ${x}^n\\rightarrow {x}\\mbox{ in }{\\mathrm{E}_r},$ that is,\n\t\t$$\\lim\\limits_{n\\rightarrow \\infty}\\left\\|x^n-x\\right\\|_{\\mathrm{PC}(J;\\mathbb{X})}=0.$$\n\t\tFrom Lemma \\ref{lem2.7}, we infer that\n\t\t\\begin{align*}\n\t\t\\left\\|\\tilde{x_{s}^n}- \\tilde{x_{s}}\\right\\|_{\\mathfrak{B}}&\\leq H_{2}\\sup\\limits_{\\theta\\in J}\\left\\|\\tilde{x^{n}}(\\theta)-\\tilde{x}(\\theta)\\right\\|_{\\mathbb{X}}=H_{2}\\left\\|x^{n}-x\\right\\|_{\\mathrm{PC}(J;\\mathbb{X})}\\rightarrow 0 \\ \\mbox{ as } \\ n\\rightarrow\\infty,\n\t\t\\end{align*}\n\t\tfor all $s\\in\\mathcal{Q}(\\rho^-)\\cup J$. Since $\\rho(s, \\tilde{x_s^k})\\in \\mathcal{Q}(\\rho^-)\\cup J$ for all $k\\in\\mathbb{N}$, we conclude that\n\t\t\\begin{align*}\n\t\t\\left\\|\\tilde{x^n}_{\\rho(s,\\tilde{x_s^k})}-\\tilde{x}_{\\rho(s,\\tilde{x_s^k})}\\right\\|_{\\mathfrak{B}}\\rightarrow 0 \\ \\mbox{ as } \\ n\\rightarrow\\infty, \\ \\mbox{ for all }\\ s\\in J\\ \\mbox{ and }\\ k\\in\\mathbb{N}.\n\t\t\\end{align*}\n\t\tIn particular, we choose $k=n$ and use the above convergence together with Assumption \\ref{as2.1} \\textit{(H1)} to obtain\n\t\t\\begin{align}\\label{4.25}\n\t\t\\left\\|f(s, \\tilde{x^n}_{\\rho(s,\\tilde{x_s^n})})-f(s,\\tilde{x}_{\\rho(s,\\tilde{x_s})})\\right\\|_{\\mathbb{X}}&\\leq\\left\\|f(s, \\tilde{x^n}_{\\rho(s,\\tilde{x_s^n})})-f(s,\\tilde{x}_{\\rho(s,\\tilde{x_s^n})})\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\left\\|f(s, \\tilde{x}_{\\rho(s,\\tilde{x_s^n})})-f(s,\\tilde{x}_{\\rho(s,\\tilde{x_s})})\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\to 0\\ \\mbox{ as }\\ n\\to\\infty, \\mbox{ uniformly for } \\ s\\in J. 
\n\t\t\\end{align}\n\t\tFrom the above convergence and the dominated convergence theorem, we evaluate\n\t\t\\begin{align}\\label{4.26}\n\t\t\\left\\|p_0(x^n(\\cdot))-p_0(x(\\cdot))\\right\\|_{\\mathbb{X}}&\\le\\left\\|\\int^{t_1}_{0}(t_1-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t_1-s)\\left[f(s,\\tilde{x^n}_{\\rho(s,\\tilde{x^n_s})})-f(s,\\tilde{x}_{\\rho(s,\\tilde{x_s})})\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\le\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\int^{t_1}_{0}(t_1-s)^{\\alpha-1}\\left\\|f(s,\\tilde{x^n}_{\\rho(s,\\tilde{x^n_s})})-f(s,\\tilde{x}_{\\rho(s,\\tilde{x_s})})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\to 0\\ \\mbox{ as }\\ n\\to\\infty.\n\t\t\\end{align}\n\t\tUsing the convergence \\eqref{4.26} and the relation \\eqref{2.5}, we calculate\n\t\t\\begin{align*}\n\t\t\\left\\|\\mathcal{R}(\\lambda,\\Phi_{0}^{t_{1}})p_0(x^{n}(\\cdot))-\\mathcal{R}(\\lambda,\\Phi_{0}^{t_{1}})p_0(x(\\cdot))\\right\\|_{\\mathbb{X}}&=\\frac{1}{\\lambda}\\left\\|\\lambda\\mathcal{R}(\\lambda,\\Phi_{0}^{t_{1}})\\left(p_0(x^{n}(\\cdot))-p_0(x(\\cdot))\\right)\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\leq\\frac{1}{\\lambda}\\left\\|p_0(x^{n}(\\cdot))-p_0(x(\\cdot))\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\to 0 \\ \\mbox{ as } \\ n \\to \\infty.\n\t\t\\end{align*}\n\t\tSince the mapping $\\mathcal{J}:\\mathbb{X}\\to\\mathbb{X}^{*}$ is demicontinuous, we obtain\n\t\t\\begin{align}\\label{4.2}\n\t\t\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{0}^{t_{1}})p_0(x^{n}(\\cdot))\\right]\\xrightharpoonup{w}\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{0}^{t_{1}})p_0(x(\\cdot))\\right] \\ \\mbox{ as } \\ n\\to\\infty \\ \\mbox{ in }\\ \\mathbb{X}^{*}.\n\t\t\\end{align}\n\t\tFrom Lemma \\ref{lem2.5}, we infer that the operator $\\widehat{\\mathcal{T}}_{\\alpha}(t)$ is compact for $t>0$. Therefore, the operator $\\widehat{\\mathcal{T}}_{\\alpha}(t)^*$ is also compact for $t>0$. 
Hence, by using the compactness of this operator together with the weak convergence \\eqref{4.2}, one can obtain\n\t\t\\begin{align}\\label{4.1}\n\t\t&\\left\\|u^{n,\\alpha}_{0,\\lambda}(t)-u_{0,\\lambda}^{\\alpha}(t)\\right\\|_{\\mathbb{U}}\\nonumber\\\\&\\le\\left\\|(t_{1}-t)^{\\alpha-1}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(t_{1}-t)^*\\left[\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{0}^{t_{1}})p_0(x^{n}(\\cdot))\\right]-\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{0}^{t_{1}})p_0(x(\\cdot))\\right]\\right]\\right\\|_{\\mathbb{U}}\\nonumber\\\\&\\le(t_{1}-t)^{\\alpha-1}\\tilde{M}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(t_{1}-t)^*\\left[\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{0}^{t_{1}})p_0(x^{n}(\\cdot))\\right]-\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{0}^{t_{1}})p_0(x(\\cdot))\\right]\\right]\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\to 0\\ \\mbox{ as }\\ n\\to\\infty, \\ \\mbox{ for each }\\ t\\in[0,t_{1}).\n\t\t\\end{align}\n\t\tSimilarly, for $k=1$, we 
compute\n\t\t\\begin{align}\\label{4.27}\n\t\t\\left\\|p_1(x^n(\\cdot))-p_1(x(\\cdot))\\right\\|_{\\mathbb{X}}&\\le\\left\\|\\mathcal{T}_{\\alpha}(t_{2}-\\tau_1)\\left[h_1(\\tau_1,\\tilde{x^n}(t_1^-))-h_1(\\tau_1,\\tilde{x}(t_1^-))\\right]\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\left\\|\\int_{0}^{\\tau_{1}}(\\tau_{1}-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_{1}-s)\\left[f(s,\\tilde{x^n}_{\\rho(s,\\tilde{x^n_s})})-f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\left\\|\\int_{0}^{t_{2}}(t_{2}-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t_{2}-s)\\left[f(s,\\tilde{x^n}_{\\rho(s,\\tilde{x^n_s})})-f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\left\\|\\int_{0}^{\\tau_1}(\\tau_1-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_1-s)\\mathrm{B}\\left[u^{n,\\alpha}_{0,\\lambda}(s)-u^{\\alpha}_{0,\\lambda}(s)\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}} \\nonumber\\\\&\\quad+\\left\\|\\int_{0}^{\\tau_1}(t_2-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t_2-s)\\mathrm{B}\\left[u^{n,\\alpha}_{0,\\lambda}(s)-u^{\\alpha}_{0,\\lambda}(s)\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}} \\nonumber\\\\&\\le 
M\\left\\|h_1(\\tau_1,\\tilde{x^n}(t_1^-))-h_1(\\tau_1,\\tilde{x}(t_1^-))\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{\\tau_{1}}(\\tau_{1}-s)^{\\alpha-1}\\left\\|f(s,\\tilde{x^n}_{\\rho(s,\\tilde{x^n_s})})-f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{t_{2}}(t_{2}-s)^{\\alpha-1}\\left\\|f(s,\\tilde{x^n}_{\\rho(s,\\tilde{x^n_s})})-f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{t_1}(\\tau_1-s)^{\\alpha-1}\\left\\|u^{n,\\alpha}_{0,\\lambda}(s)-u^{\\alpha}_{0,\\lambda}(s)\\right\\|_{\\mathbb{U}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{t_1}(t_2-s)^{\\alpha-1}\\left\\|u^{n,\\alpha}_{0,\\lambda}(s)-u^{\\alpha}_{0,\\lambda}(s)\\right\\|_{\\mathbb{U}}\\mathrm{d}s\\nonumber\\\\&\\to0\\ \\mbox{ as }\\ n\\to\\infty, \n\t\t\\end{align}\n\t\twhere we used Assumption \\ref{as2.1} (\\textit{H3}), convergences \\eqref{4.25}, \\eqref{4.1} and the dominated convergence theorem. 
Moreover, similar to the convergence \\eqref{4.1}, one can obtain\n\t\t\\begin{align*}\n\t\t&\\left\\|u^{n,\\alpha}_{1,\\lambda}(t)-u_{1,\\lambda}^{\\alpha}(t)\\right\\|_{\\mathbb{U}}\\to 0\\ \\mbox{ as }\\ n\\to\\infty, \\ \\mbox{ for each }\\ t\\in[\\tau_1,t_{2}).\n\t\t\\end{align*}\n\t\tFurther, arguing as above for $k=2,\\ldots,m,$ one can compute \n\t\t\\begin{align*}\n\t\t&\\left\\|u^{n,\\alpha}_{k,\\lambda}(t)-u_{k,\\lambda}^{\\alpha}(t)\\right\\|_{\\mathbb{U}}\\to 0\\ \\mbox{ as }\\ n\\to\\infty, \\mbox{ for each }\\ t\\in[\\tau_k,t_{k+1}), k=2,\\ldots,m.\n\t\t\\end{align*} \n\t\tTherefore, we have\n\t\t\\begin{align}\\label{4.29}\n\t\t\\left\\|u^{n,\\alpha}_{\\lambda}(t)-u_{\\lambda}^{\\alpha}(t)\\right\\|_{\\mathbb{U}}\\to 0\\ \\mbox{ as }\\ n\\to\\infty,\\ \\mbox{ for each }\\ t\\in[\\tau_k,t_{k+1}), k=0,1,\\ldots,m.\n\t\t\\end{align}\n\t\tUsing the convergences \\eqref{4.25}, \\eqref{4.29} and the dominated convergence theorem, we arrive at \n\t\t\\begin{align*}\n\t\t\\left\\|(F_{\\lambda}x^n)(t)-(F_{\\lambda}x)(t)\\right\\|_{\\mathbb{X}}&\\le\\int_{0}^{t}(t-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}\\left[u^{n,\\alpha}_{\\lambda}(s)-u_{\\lambda}^{\\alpha}(s)\\right]\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{0}^{t}(t-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\left[f(s,\\tilde{x^n}_{\\rho(s,\\tilde{x^n_s})})-f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right]\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\le\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{t}(t-s)^{\\alpha-1}\\left\\|u^{n,\\alpha}_{\\lambda}(s)-u_{\\lambda}^{\\alpha}(s)\\right\\|_{\\mathbb{U}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{t}(t-s)^{\\alpha-1}\\left\\|f(s,\\tilde{x^n}_{\\rho(s, \\tilde{x^n_s})})-f(s,\\tilde{x}_{\\rho(s, \\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\to 0 
\\ \\mbox{ as }\\ n\\to\\infty, \\ \\mbox{ for }\\ t\\in[0,t_1].\n\t\t\\end{align*}\n\t\tSimilarly, for $t\\in(\\tau_k,t_{k+1}],\\ k=1,\\ldots,m$, we deduce that \n\t\t\\begin{align*}\n\t\t\\left\\|(F_{\\lambda}x^n)(t)-(F_{\\lambda}x)(t)\\right\\|_{\\mathbb{X}}&\\le\\left\\|\\mathcal{T}_{\\alpha}(t-\\tau_k)\\left[h_k(\\tau_k,\\tilde{x^n}(t_k^-))-h_k(\\tau_k,\\tilde{x}(t_k^-))\\right]\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_k-s)\\left[f(s,\\tilde{x^n}_{\\rho(s, \\tilde{x^n_s})})-f(s,\\tilde{x}_{\\rho(s, \\tilde{x}_s)})\\right]\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_k-s)\\mathrm{B}\\left[u^{n,\\alpha}_{\\lambda}(s)-u_{\\lambda}^{\\alpha}(s)\\right]\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{0}^{t}(t-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}\\left[u^{n,\\alpha}_{\\lambda}(s)-u_{\\lambda}^{\\alpha}(s)\\right]\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{0}^{t}(t-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\left[f(s,\\tilde{x^n}_{\\rho(s, \\tilde{x^n_s})})-f(s,\\tilde{x}_{\\rho(s, \\tilde{x}_s)})\\right]\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\le M\\left\\|h_k(\\tau_k,\\tilde{x^n}(t_k^-))-h_k(\\tau_k,\\tilde{x}(t_k^-))\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\left\\|f(s,\\tilde{x^n}_{\\rho(s, \\tilde{x^n_s})})-f(s,\\tilde{x}_{\\rho(s, \\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+ 
\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\left\\|u^{n,\\alpha}_{\\lambda}(s)-u_{\\lambda}^{\\alpha}(s)\\right\\|_{\\mathbb{U}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{t}(t-s)^{\\alpha-1}\\left\\|u^{n,\\alpha}_{\\lambda}(s)-u_{\\lambda}^{\\alpha}(s)\\right\\|_{\\mathbb{U}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{t}(t-s)^{\\alpha-1}\\left\\|f(s,\\tilde{x^n}_{\\rho(s, \\tilde{x^n_s})})-f(s,\\tilde{x}_{\\rho(s, \\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\to 0 \\ \\mbox{ as }\\ n\\to\\infty.\n\t\t\\end{align*}\n\t\tMoreover, for $t\\in(t_k, \\tau_k],\\ k=1,\\ldots,m$, applying Assumption \\ref{as2.1} (\\textit{H3}), we obtain\n\t\t\\begin{align*}\n\t\t\\left\\|(F_{\\lambda}x^n)(t)-(F_{\\lambda}x)(t)\\right\\|_{\\mathbb{X}}&\\le\\left\\|h_k(t,\\tilde{x^n}(t_k^-))-h_k(t,\\tilde{x}(t_k^-))\\right\\|_{\\mathbb{X}}\\to 0 \\ \\mbox{ as }\\ n\\to\\infty.\n\t\t\\end{align*}\n\t\tHence, it follows that $F_{\\lambda}$ is continuous.\n\t\t\\vskip 0.1in \n\t\t\\noindent\\textbf{Step (3): } \\emph{$ F_{\\lambda}$ is a compact operator.} In order to prove this claim, we use the well-known Arzela-Ascoli theorem. According to its infinite-dimensional version (see Theorem 3.7, Chapter 2, \\cite{JYONG}), it is enough to show that \n\t\t\\begin{itemize}\n\t\t\t\\item [(i)] the image of $\\mathrm{E}_r$ under $F_{\\lambda}$ is uniformly bounded (which is proved in Step (1)),\n\t\t\t\\item [(ii)] the image of $\\mathrm{E}_r$ under $F_{\\lambda}$ is equicontinuous,\n\t\t\t\\item [(iii)] for an arbitrary $t\\in J$, the set $\\mathrm{V}(t)=\\{(F_\\lambda x)(t):x\\in \\mathrm{E}_r\\}$ is relatively compact.\n\t\t\\end{itemize}\n\t\t\n\t\t\n\t\tFirst, we claim that the image of $\\mathrm{E}_r$ under $F_{\\lambda}$ is equicontinuous. 
For $s_1,s_2\\in[0,t_1]$ with $s_1<s_2$, the required estimate follows from computations analogous to those in Step (1), together with the strong continuity of the operators $\\mathcal{T}_{\\alpha}(\\cdot)$ and $\\widehat{\\mathcal{T}}_{\\alpha}(\\cdot)$; the intervals $(t_k,\\tau_k]$ and $(\\tau_k,t_{k+1}],\\ k=1,\\ldots,m,$ can be treated in a similar way. Next, we verify condition (iii). For $t\\in(0,t_1]$, given $\\eta$ with $0<\\eta<t$ and any $\\delta>0$, we define\n\t\t\\begin{align*}\n\t&\t(F_{\\lambda}^{\\eta,\\delta}x)(t)\\nonumber\\\\&=\\int_{\\delta}^{\\infty}\\varphi_{\\alpha}(\\xi)\\mathcal{T}(t^{\\alpha}\\xi)\\psi(0)\\mathrm{d}\\xi +\\alpha\\int_{0}^{t-\\eta}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((t-s)^{\\alpha}\\xi)\\mathrm{B}u^{\\alpha}_{\\lambda}(s)\\mathrm{d}\\xi\\mathrm{d}s\\nonumber\\\\&\\quad+\\alpha\\int_{0}^{t-\\eta}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((t-s)^{\\alpha}\\xi)f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\mathrm{d}\\xi\\mathrm{d}s\\nonumber\\\\&=\\mathcal{T}(\\eta^{\\alpha}\\delta)\\bigg[\\int_{\\delta}^{\\infty}\\varphi_{\\alpha}(\\xi)\\mathcal{T}(t^{\\alpha}\\xi-\\eta^{\\alpha}\\delta)\\psi(0)\\mathrm{d}\\xi\\nonumber\\\\&\\qquad\\qquad+\\alpha\\int_{0}^{t-\\eta}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((t-s)^{\\alpha}\\xi-\\eta^{\\alpha}\\delta)\\mathrm{B}u^{\\alpha}_{\\lambda}(s)\\mathrm{d}\\xi\\mathrm{d}s\\nonumber\\\\&\\qquad\\qquad+\\alpha\\int_{0}^{t-\\eta}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((t-s)^{\\alpha}\\xi-\\eta^{\\alpha}\\delta)f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\mathrm{d}\\xi\\mathrm{d}s\\bigg]\\nonumber\\\\&=\\mathcal{T}(\\eta^{\\alpha}\\delta)y(t,\\eta,\\delta),\n\t\t\\end{align*}\n\t\twhere $y(\\cdot,\\cdot,\\cdot)$ denotes the term inside the brackets. 
Using Assumption \\ref{as2.1}, one can calculate\n\t\t\\begin{align*}\n\t\t\\left\\|y(t,\\eta,\\delta)\\right\\|_{\\mathbb{X}}&\\le\\int_{\\delta}^{\\infty}\\varphi_{\\alpha}(\\xi)\\left\\|\\mathcal{T}(t^{\\alpha}\\xi-\\eta^{\\alpha}\\delta)\\psi(0)\\right\\|_{\\mathbb{X}}\\mathrm{d}\\xi\\nonumber\\\\&\\quad+\\alpha\\int_{0}^{t-\\eta}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\left\\|\\mathcal{T}((t-s)^{\\alpha}\\xi-\\eta^{\\alpha}\\delta)\\mathrm{B}u^{\\alpha}_{\\lambda}(s)\\right\\|_{\\mathbb{X}}\\mathrm{d}\\xi\\mathrm{d}s\\nonumber\\\\&\\quad+\\alpha\\int_{0}^{t-\\eta}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\left\\|\\mathcal{T}((t-s)^{\\alpha}\\xi-\\eta^{\\alpha}\\delta)f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}\\xi\\mathrm{d}s\\nonumber\\\\&\\le\n\t\tM\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}+\\frac{N_0}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\frac{t^{2\\alpha-1}-\\eta^{2\\alpha-1}}{\\lambda(2\\alpha-1)}\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\frac{t^\\mu-\\eta^\\mu}{\\mu}\\right)^{1-\\alpha_1}\\left(\\int_{0}^{t-\\eta}(\\gamma_{r'}(s))^{\\frac{1}{\\alpha_1}}\\mathrm{d}s\\right)^{\\alpha_1}<+\\infty, \\ \\mbox{ for }\\ t\\in[0,t_1].\n\t\t\\end{align*}\n\t\tThe compactness of the operator $\\mathcal{T}(\\cdot)$ implies that the set $\\mathrm{V}_{\\eta,\\delta}(t)=\\{(F^{\\eta,\\delta}_\\lambda x)(t):x\\in \\mathrm{E}_r\\}$ is relatively compact in $\\mathbb{X}$. Hence, for a given $\\varepsilon>0$, there exist finitely many points $x_{i}\\in\\mathbb{X}$, $i=1,\\dots, n$, such that \n\t\t\\begin{align*}\n\t\t\\mathrm{V}_{\\eta,\\delta}(t) \\subset \\bigcup_{i=1}^{n}\\mathcal{S}(x_i, \\varepsilon\/2).\n\t\t\\end{align*} 
Let us choose $\\delta>0$ and $\\eta>0$ such that \n\t\t\\begin{align*}\n\t\t&\\left\\|(F_{\\lambda}x)(t)-(F_{\\lambda}^{\\eta,\\delta}x)(t)\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\le\\left\\|\\int_{0}^{\\delta}\\varphi_{\\alpha}(\\xi)\\mathcal{T}(t^{\\alpha}\\xi)\\psi(0)\\mathrm{d}\\xi\\right\\|_{\\mathbb{X}}+\\alpha\\left\\|\\int_{0}^{t}\\int_{0}^{\\delta}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((t-s)^{\\alpha}\\xi)\\mathrm{B}u^{\\alpha}_{\\lambda}(s)\\mathrm{d}\\xi\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\alpha\\left\\|\\int_{t-\\eta}^{t}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((t-s)^{\\alpha}\\xi)\\mathrm{B}u^{\\alpha}_{\\lambda}(s)\\mathrm{d}\\xi\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\alpha\\left\\|\\int_{0}^{t}\\int_{0}^{\\delta}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((t-s)^{\\alpha}\\xi)f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\mathrm{d}\\xi\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\alpha\\left\\|\\int_{t-\\eta}^{t}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((t-s)^{\\alpha}\\xi)f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\mathrm{d}\\xi\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\le M\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}\\int_{0}^{\\delta}\\varphi_{\\alpha}(\\xi)\\mathrm{d}\\xi+\\frac{M^2\\tilde{M}^2N_{0}\\alpha t^{2\\alpha-1}}{\\lambda(2\\alpha-1)(\\Gamma(\\alpha+1))}\\int_{0}^{\\delta}\\xi\\varphi_{\\alpha}(\\xi)\\mathrm{d}\\xi\n\t\t\\nonumber\\\\&\\quad+\\frac{N_0}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\int_{t-\\eta}^{t}(t-s)^{\\alpha-1}(t_1-s)^{\\alpha-1}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\alpha 
t^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\left\\|\\gamma_{r'}\\right\\|_{\\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^+})}\\int_{0}^{\\delta}\\xi\\varphi_{\\alpha}(\\xi)\\mathrm{d}\\xi+\\frac{M\\alpha}{(\\Gamma(1+\\alpha))^2}\\int_{t-\\eta}^{t}(t-s)^{\\alpha-1}\\gamma_{r'}(s)\\mathrm{d}s\\nonumber\\\\&\\le\\frac{\\varepsilon}{2}.\n\t\t\\end{align*}\n\t\tConsequently, $$\\mathrm{V}(t)\\subset \\bigcup_{i=1}^{n}\\mathcal{S}(x_i, \\varepsilon ).$$\n\t\tThus, for each $t\\in [0,t_1]$, the set $\\mathrm{V}(t)$ is relatively compact in $ \\mathbb{X}$. Next, we take $ t\\in(\\tau_k,t_{k+1}],$ for $k=1,\\ldots,m$, and for a given $\\eta$ with $ 0<\\eta<\\min\\{t-\\tau_k, \\tau_k\\}$ and any $\\delta>0$, we define\n\t\t\\begin{align*}\n\t\t&(F_{\\lambda}^{\\eta,\\delta}x)(t)\\nonumber\\\\&=\\int_{\\delta}^{\\infty}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((t-\\tau_k)^{\\alpha}\\xi)h_k(\\tau_k,\\tilde{x}(t_k^-))\\mathrm{d}\\xi \\nonumber\\\\&\\quad-\\alpha\\int_{0}^{\\tau_k-\\eta}\\int_{\\delta}^{\\infty}\\xi(\\tau_k-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((\\tau_k-s)^{\\alpha}\\xi)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right]\\mathrm{d}\\xi\\mathrm{d}s\\nonumber\\\\&\\quad+\\alpha\\int_{0}^{t-\\eta}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((t-s)^{\\alpha}\\xi)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right]\\mathrm{d}\\xi\\mathrm{d}s\\nonumber\\\\&=\\mathcal{T}(\\eta^{\\alpha}\\delta)\\bigg[\\int_{\\delta}^{\\infty}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((t-\\tau_k)^{\\alpha}\\xi-\\eta^{\\alpha}\\delta)h_k(\\tau_k,\\tilde{x}(t_k^-))\\mathrm{d}\\xi\\nonumber\\\\&\\quad-\\alpha\\int_{0}^{\\tau_k-\\eta}\\int_{\\delta}^{\\infty}\\xi(\\tau_k-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((\\tau_k-s)^{\\alpha}\\xi-\\eta^{\\alpha}\\delta)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right]\\mathrm{d}\\xi\\mathrm{d}s\\nonumber\\\\&\\quad+\\alpha\\int_{0}^{t-\\eta}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((t-s)^{\\alpha}\\xi-\\eta^{\\alpha}\\delta)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right]\\mathrm{d}\\xi\\mathrm{d}s\\bigg].\n\t\t\\end{align*}\n\t\tProceeding similarly to the case $t\\in[0,t_1]$, one can prove that the set $\\mathrm{V}(t),$ for $t\\in(\\tau_k,t_{k+1}],\\ k=1,\\ldots,m$, is relatively compact in $ \\mathbb{X}$. Moreover, for $t\\in(t_k,\\tau_k],\\ k=1,\\ldots,m$, the fact that the set $\\mathrm{V}(t)$ is relatively compact follows from the compactness of the impulses $h_k,$ for $k=1,\\ldots,m$. Therefore, for each $t\\in J$, the set $\\mathrm{V}(t)=\\{(F_\\lambda x)(t):x\\in \\mathrm{E}_r\\}$ is relatively compact in $\\mathbb{X}$.\n\t\t\n\t\tHence, by invoking the Arzela-Ascoli theorem, we conclude that the operator $ F_{\\lambda}$ is compact. Then \\emph{Schauder's fixed point theorem} yields that the operator $F_{\\lambda}$ has a fixed point in $\\mathrm{E}_{r}$, which is a mild solution of the system \\eqref{1.1}.\n\t\\end{proof}\n\t\n\tIn order to prove the approximate controllability of the system \\eqref{1.1}, we replace the assumption (\\textit{H2}) by the following stronger assumption: \n\t\\begin{enumerate}\\label{as}\n\t\t\\item [\\textit{(H4)}] The function $ f: J \\times \\mathfrak{B}_{\\alpha} \\rightarrow \\mathbb{X} $ satisfies the assumption (\\textit{$H2$})(i) and there exists a function $ \\gamma\\in \\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R}^+)$ with $\\alpha_1\\in[0,\\frac{1}{2})$ such that $$ \\|f(t,\\psi)\\|_{\\mathbb{X}}\\leq \\gamma(t),\\ \\text{ for all }\\ (t,\\psi) \\in J \\times \\mathfrak{B}_{\\alpha}. $$ \n\t\\end{enumerate}\n\t\\begin{theorem}\\label{thm4.4}\n\t\tSuppose that Assumptions (H0)-(H1), (H3)-(H4) and the condition \\eqref{cnd} of Theorem \\ref{thm4.3} are satisfied. 
Then the system \\eqref{1.1} is approximately controllable.\n\t\\end{theorem}\n\t\\begin{proof}\n\t\tBy using Theorem \\ref{thm4.3}, we infer that for every $\\lambda>0$ and $\\zeta_k\\in \\mathbb{X}$, $k=0,1,\\ldots,m$, there exists a mild solution $x^{\\lambda}\\in\\mathrm{E}_r$ such that\n\t\t\\begin{equation}\\label{M}\n\t\tx^{\\lambda}(t)=\\begin{dcases}\n\t\t\\mathcal{T}_{\\alpha}(t)\\psi(0)+\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x^\\lambda}_{\\rho(s, \\tilde{x^\\lambda}_s)})\\right]\\mathrm{d}s,\\ t\\in[0, t_1],\\\\\n\t\th_k(t, \\tilde{x^\\lambda}(t_k^-)),\\ t\\in(t_k, \\tau_k],\\ k=1,\\ldots,m,\\\\\n\t\t\\mathcal{T}_{\\alpha}(t-\\tau_k)h_k(\\tau_k,\\tilde{x^\\lambda}(t_k^-))-\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_k-s)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x^\\lambda}_{\\rho(s,\\tilde{x^\\lambda}_s)})\\right]\\mathrm{d}s\\\\\\quad+\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x^\\lambda}_{\\rho(s, \\tilde{x^\\lambda}_s)})\\right]\\mathrm{d}s,\\\\ \\qquad\\qquad\\qquad t\\in(\\tau_k,t_{k+1}],\\ k=1,\\ldots,m,\n\t\t\\end{dcases}\n\t\t\\end{equation}\n\t\twith the control defined in \\eqref{C}. 
Next, we compute\n\t\t\\begin{align}\\label{4.35}\n\t\tx^{\\lambda}(T)&=\\mathcal{T}_{\\alpha}(T-\\tau_m)h_{m}(\\tau_m,\\tilde{x^\\lambda}(t_m^-))\\nonumber\\\\&\\quad-\\int_{0}^{\\tau_m}(\\tau_m-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_m-s)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x^\\lambda}_{\\rho(s,\\tilde{x^\\lambda}_s)})\\right]\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{0}^{T}(T-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-s)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x^\\lambda}_{\\rho(s,\\tilde{x^\\lambda}_s)})\\right]\\mathrm{d}s\\nonumber\\\\&=\\mathcal{T}_{\\alpha}(T-\\tau_m)h_{m}(\\tau_m,\\tilde{x^\\lambda}(t_m^-))\\nonumber\\\\&\\quad-\\int_{0}^{\\tau_m}(\\tau_m-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_m-s)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x^\\lambda}_{\\rho(s,\\tilde{x^\\lambda}_s)})\\right]\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{0}^{T}(T-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-s)f(s,\\tilde{x^\\lambda}_{\\rho(s,\\tilde{x^\\lambda}_s)})\\mathrm{d}s+\\int_{0}^{\\tau_m}(T-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-s)\\mathrm{B}u^{\\alpha}_{\\lambda}(s)\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{\\tau_m}^{T}(T-s)^{2(\\alpha-1)}\\widehat{\\mathcal{T}}_{\\alpha}(T-s)\\mathrm{B}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(T-s)^*\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{\\tau_m}^{T})p_m(x^\\lambda(\\cdot))\\right]\\mathrm{d}s\\nonumber\\\\&=\\mathcal{T}_{\\alpha}(T-\\tau_m)h_{m}(\\tau_m,\\tilde{x^\\lambda}(t_m^-))\\nonumber\\\\&\\quad-\\int_{0}^{\\tau_m}(\\tau_m-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_m-s)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x^\\lambda}_{\\rho(s,\\tilde{x^\\lambda}_s)})\\right]\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{0}^{T}(T-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-s)f(s,\\tilde{x^\\lambda}_{\\rho(s,\\tilde{x^\\lambda}_s)})\\mathrm{d}s+\\int_{0}^{\\tau_m}(T-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-s)\\mathrm{B}u^{\\alpha}_{\\lambda}(s)\\mathrm{d}s\\nonumber\\\\&\\quad+\\Phi_{\\tau_m}^{T}\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{\\tau_m}^{T})p_m(x^\\lambda(\\cdot))\\right]\\nonumber\\\\&=\\zeta_m-\\lambda\\mathcal{R}(\\lambda,\\Phi_{\\tau_m}^{T})p_m(x^\\lambda(\\cdot)).\n\t\t\\end{align}\n\t\tMoreover, since $\\mathrm{E}_r$ is bounded and $ x^{\\lambda}\\in\\mathrm{E}_r $, for each $t\\in J$ the sequence $\\{x^{\\lambda}(t)\\}$ is bounded in $\\mathbb{X}$. Then by the Banach-Alaoglu theorem, we can find a subsequence, still denoted as $ x^{\\lambda},$ such that \n\t\t\\begin{align*}\n\t\tx^\\lambda(t)\\xrightharpoonup{w}z(t) \\ \\mbox{ in }\\ \\mathbb{X} \\ \\ \\mbox{as}\\ \\ \\lambda\\to0^+,\\ t\\in J.\n\t\t\\end{align*}\n\t\tUsing the condition (\\textit{$H3$}) of Assumption \\ref{as2.1}, we obtain\n\t\t\\begin{align}\\label{4.19}\n\t\th_m(t,x^\\lambda(t_m^-))\\to h_m(t,z(t_m^-)) \\ \\mbox{ in }\\ \\mathbb{X} \\ \\ \\mbox{as}\\ \\ \\lambda\\to0^+.\n\t\t\\end{align}\n\t\tFurthermore, by using Assumption \\textit{(H4)}, we get\n\t\t\\begin{align}\n\t\t\\int_{s_1}^{s_2}\\left\\|f(s,\\tilde{x^{\\lambda}}_{\\rho(s,\\tilde{x^{\\lambda}}_s)})\\right\\|_{\\mathbb{X}}^{2}\\mathrm{d}s&\\le \\int_{s_1}^{s_2}\\gamma^2(s)\\mathrm{d} s\\leq \\left(\\int_{s_1}^{s_2}\\gamma^{\\frac{1}{\\alpha_1}}(s)\\mathrm{d}s\\right)^{2\\alpha_1}(s_2-s_1)^{1-2\\alpha_1}<+\\infty, \\nonumber\n\t\t\\end{align}\n\t\tfor any $ s_1,s_2\\in[0,T]$ with $s_1<s_2$. Hence, the set $ \\{f(\\cdot, \\tilde{x^{\\lambda}}_{\\rho(\\cdot,\\tilde{x^{\\lambda}}_{\\cdot})}): \\lambda>0\\} $ in $ \\mathrm{L}^2([s_1,s_2]; \\mathbb{X})$ is bounded. 
By an application of the Banach-Alaoglu theorem, we can find a subsequence still denoted as $ \\{f(\\cdot, \\tilde{x^{\\lambda}}_{\\rho(s,\\tilde{x^{\\lambda}}_s)}): \\lambda > 0 \\}$ such that \n\t\t\\begin{align}\\label{4.36}\n\t\tf(\\cdot, \\tilde{x^{\\lambda}}_{\\rho(s,\\tilde{x^{\\lambda}}_s)})\\xrightharpoonup{w}f(\\cdot) \\ \\mbox{ in }\\ \\mathrm{L}^2([s_1,s_2];\\mathbb{X}).\n\t\t\\end{align}\n\t\tWe now calculate \n\t\t\\begin{align}\n\t\t&\\int_{0}^{\\tau_m}\\left\\|u^{\\alpha}_{\\lambda}(s)\\right\\|^2_{\\mathbb{U}}\\mathrm{d}s\\nonumber\\\\&=\\int_{0}^{t_1}\\left\\|u^{\\alpha}_{\\lambda}(s)\\right\\|^2_{\\mathbb{U}}\\mathrm{d}s+\\int_{t_1}^{\\tau_1}\\left\\|u^{\\alpha}_{\\lambda}(s)\\right\\|^2_{\\mathbb{U}}\\mathrm{d}s+\\int_{\\tau_1}^{t_2}\\left\\|u^{\\alpha}_{\\lambda}(s)\\right\\|^2_{\\mathbb{U}}\\mathrm{d}s+\\cdots+\\int_{t_m}^{\\tau_m}\\left\\|u^{\\alpha}_{\\lambda}(s)\\right\\|^2_{\\mathbb{U}}\\mathrm{d}s\\nonumber\\\\&=\\int_{0}^{t_1}\\left\\|u^{\\alpha}_{0,\\lambda}(s)\\right\\|^2_{\\mathbb{U}}\\mathrm{d}s+\\int_{\\tau_1}^{t_2}\\left\\|u^{\\alpha}_{1,\\lambda}(s)\\right\\|^2_{\\mathbb{U}}\\mathrm{d}s+\\cdots+\\int_{\\tau_{m-1}}^{t_m}\\left\\|u^{\\alpha}_{m-1,\\lambda}(s)\\right\\|^2_{\\mathbb{U}}\\mathrm{d}s\\nonumber\\\\&\\le\\left(\\frac{M\\tilde{M}\\alpha}{\\lambda\\Gamma(1+\\alpha)}\\right)^{2}\\frac{T^{2\\alpha-1}}{2\\alpha-1}\\sum_{j=0}^{m-1}C_j^2=C,\n\t\t\\end{align}\n\t\twhere $C_k,$ for $k=1,\\ldots,m-1$ are the same as given in \\eqref{4.5} and $C_0=N_0$ given in \\eqref{4.4}. Moreover, the above estimate ensures that the sequence $\\{u^\\alpha_{\\lambda}(\\cdot): \\lambda >0\\}$ in $ \\mathrm{L}^2([0,\\tau_m]; \\mathbb{U})$ is bounded. 
Further, by the Banach-Alaoglu theorem, we can find a subsequence, still denoted as $\\{u^\\alpha_{\\lambda}(\\cdot): \\lambda >0\\}$, such that \n\t\t\\begin{align}\\label{4}\n\t\tu^\\alpha_{\\lambda}(\\cdot)\\xrightharpoonup{w}u^\\alpha(\\cdot) \\ \\mbox{ in }\\ \\mathrm{L}^2([0,\\tau_m];\\mathbb{U}).\n\t\t\\end{align}\n\t\tNext, we compute\n\t\t\\begin{align}\\label{4.37}\n\t\t\\left\\|p_{m}(x^{\\lambda}(\\cdot))-\\omega\\right\\|_{\\mathbb{X}}&\\le\\left\\|\\mathcal{T}_{\\alpha}(T-\\tau_m)(h_m(\\tau_m,\\tilde{x^\\lambda}(t_m^-))-h_m(\\tau_m,z(t_m^-)))\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\left\\|\\int_{0}^{\\tau_m}(\\tau_m-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_m-s)\\mathrm{B}\\left[u^\\alpha_{\\lambda}(s)-u^{\\alpha}(s)\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\left\\|\\int_{0}^{\\tau_m}(\\tau_m-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_m-s)\\left[f(s,\\tilde{x^\\lambda}_{\\rho(s,\\tilde{x^\\lambda}_s)})-f(s)\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\left\\|\\int_{0}^{\\tau_m}(T-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-s)\\mathrm{B}\\left[u^\\alpha_{\\lambda}(s)-u^{\\alpha}(s)\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\left\\|\\int_{0}^{T}(T-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-s)\\left[f(s,\\tilde{x^\\lambda}_{\\rho(s,\\tilde{x^\\lambda}_s)})-f(s)\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\le\\left\\|\\mathcal{T}_{\\alpha}(T-\\tau_m)(h_m(\\tau_m,\\tilde{x^\\lambda}(t_m^-))-h_m(\\tau_m,z(t_m^-)))\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\left\\|\\int_{0}^{\\tau_m}(\\tau_m-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_m-s)\\mathrm{B}\\left[u^\\alpha_{\\lambda}(s)-u^{\\alpha}(s)\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\left\\|\\int_{0}^{\\tau_m}(\\tau_m-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_m-s)\\left[f(s,\\tilde{x^\\lambda}_{\\rho(s,\\tilde{x^\\lambda}_s)})-f(s)\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\left(\\frac{T^{2\\alpha-1}-(T-\\tau_m)^{2\\alpha-1}}{2\\alpha-1}\\right)^{\\frac{1}{2}}\\left(\\int_{0}^{\\tau_m}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(T-s)\\mathrm{B}\\left[u^\\alpha_{\\lambda}(s)-u^{\\alpha}(s)\\right]\\right\\|^2_{\\mathbb{X}}\\mathrm{d}s\\right)^{\\frac{1}{2}}\\nonumber\\\\&\\quad+\\left\\|\\int_{0}^{T}(T-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-s)\\left[f(s,\\tilde{x^\\lambda}_{\\rho(s,\\tilde{x^\\lambda}_s)})-f(s)\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad\\to 0\\ \\mbox{as}\\ \\lambda\\to0^+, \n\t\t\\end{align}\n\t\twhere \n\t\t\\begin{align*}\n\t\t\\omega &=\\zeta_m-\\mathcal{T}_{\\alpha}(T-\\tau_m)h_m(\\tau_m,z(t_m^-))+\\int_{0}^{\\tau_m}(\\tau_m-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_m-s)\\left[\\mathrm{B}u^{\\alpha}(s)+f(s)\\right]\\mathrm{d}s\\nonumber\\\\&\\quad-\\int_{0}^{\\tau_m}(T-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-s)\\mathrm{B}u^{\\alpha}(s)\\mathrm{d}s-\\int_{0}^{T}(T-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-s)f(s)\\mathrm{d}s.\n\t\t\\end{align*}\n\t\tHere, we used the convergences \\eqref{4.19}, \\eqref{4.36}, \\eqref{4}, the dominated convergence theorem and the compactness of the operator $f(\\cdot)\\mapsto\\int_{0}^{\\cdot}(\\cdot-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\cdot-s) f(s)\\mathrm{d}s:\\mathrm{L}^2(J;\\mathbb{X})\\rightarrow \\mathrm{C}(J;\\mathbb{X})$ (see Lemma \\ref{lem2.12}).\n\t\t\tFinally, by using the equality \\eqref{4.35}, we evaluate 
\n\t\t\\begin{align}\n\t\t\\left\\|x^{\\lambda}(T)-\\zeta_m\\right\\|_{\\mathbb{X}}&\\le\\left\\|\\lambda\\mathcal{R}(\\lambda,\\Phi_{\\tau_m}^{T})p_m(x(\\cdot))\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\le\\left\\|\\lambda\\mathcal{R}(\\lambda,\\Phi_{\\tau_m}^{T})(p_m(x(\\cdot))-\\omega)\\right\\|_{\\mathbb{X}}+\\left\\|\\lambda\\mathcal{R}(\\lambda,\\Phi_{\\tau_m}^{T})\\omega\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\le\\left\\|\\lambda\\mathcal{R}(\\lambda,\\Phi_{\\tau_m}^{T})\\right\\|_{\\mathcal{L}(\\mathbb{X})}\\left\\|p_m(x(\\cdot))-\\omega\\right\\|_{\\mathbb{X}}+\\left\\|\\lambda\\mathcal{R}(\\lambda,\\Phi_{\\tau_m}^{T})\\omega\\right\\|_{\\mathbb{X}}.\n\t\t\\end{align}\n\t\tUsing the above inequality, \\eqref{4.37} and Assumption \\ref{as2.1} (\\textit{H0}), we obtain\n\t\t\\begin{align*}\n\t\t\\left\\|x^{\\lambda}(T)-\\zeta_m\\right\\|_{\\mathbb{X}}\\to0,\\ \\mbox{ as }\\ \\lambda\\to0^+,\n\t\t\\end{align*}\n\t\twhich ensures that the system \\eqref{1.1} is approximately controllable on $J$.\n\t\\end{proof}\n\t\\begin{rem}\\label{rem4.4}\n\t\tThe works \\cite{PCY,NIZ,SRY}, etc., considered a different kind of control for fractional-order semilinear problems. If one follows Remark \\ref{rem3.6}, the controllability operator defined in \\eqref{2.1} changes to \\eqref{3.20}\n\t\tand $u^\\alpha_{k,\\lambda}(\\cdot)$ appearing in the control defined in \\eqref{C} takes the form \n\t\t\\begin{align*}\n\t\tu^\\alpha_{k,\\lambda}(t)&=\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(t_{k+1}-t)^*\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{\\tau_k}^{t_{k+1}})p_k(x(\\cdot))\\right],\n\t\t\\end{align*}\n\t\tfor $t\\in [\\tau_k, t_{k+1}),\\ k=0,1,\\ldots,m$. The control provided in \\eqref{C} is motivated by the linear regulator problem with the cost functional defined in \\eqref{3.1}. The proofs of Theorems \\ref{thm4.3} and \\ref{thm4.4} follow in a similar way with some obvious modifications in the calculations. 
\n\t\\end{rem}\n\t\\section{Application}\\label{application}\\setcounter{equation}{0}\n\tIn this section, we discuss a concrete example to verify the results developed in the previous sections. \n\t\n\t\\begin{Ex}\\label{ex1} Let us take the following fractional heat equation with non-instantaneous impulses and delay:\n\t\t\\begin{equation}\\label{ex}\n\t\t\\left\\{\n\t\t\\begin{aligned}\n\t\t\\frac{\\partial^\\alpha z(t,\\xi)}{\\partial t^\\alpha}&=\\frac{\\partial^2z(t,\\xi)}{\\partial \\xi^2}+\\eta(t,\\xi)+\\int_{-\\infty}^{t}b(s-t)z(s-\\sigma(\\|z(t)\\|),\\xi)\\mathrm{d}s, \\\\&\\qquad \\qquad \\ t\\in\\bigcup_{k=0}^{m} (\\tau_k, t_{k+1}]\\subset J=[0,T], \\ \\xi\\in[0,\\pi], \\\\\n\t\tz(t,\\xi)&=h_k(t,z(t_k^-,\\xi)),\\ t\\in(t_k,\\tau_k],\\ k=1,\\ldots, m,\\ \\xi\\in[0,\\pi],\\\\\n\t\tz(t,0)&=0=z(t,\\pi), \\ t\\in [0, 1], \\\\\n\t\tz(\\theta,\\xi)&=\\psi(\\theta,\\xi), \\ \\xi\\in[0,\\pi], \\ \\theta\\leq0,\n\t\t\\end{aligned}\n\t\t\\right.\n\t\t\\end{equation}\n\t\twhere the function $\\eta:[0,1]\\times[0,\\pi]\\to\\mathbb{R}$ is continuous in $t$ and the function $\\sigma:[0,\\infty)\\to[0,\\infty)$ is also continuous.\n\t\\end{Ex}\n\t\\vskip 0.1 cm\n\t\\noindent\\textbf{Step 1:} \\emph{$\\mathrm{C}_0$-semigroup and phase space:} \n\tLet $\\mathbb{X}_p= \\mathrm{L}^{p}([0,\\pi];\\mathbb{R})$ with $p\\in[2,\\infty)$, and $\\mathbb{U}=\\mathrm{L}^{2}([0,\\pi];\\mathbb{R})$. Note that $\\mathbb{X}_p$ is separable and reflexive with strictly convex dual $\\mathbb{X}_p^*=\\mathrm{L}^{\\frac{p}{p-1}}([0,\\pi];\\mathbb{R})$ and $\\mathbb{U}$ is separable. We define the linear operator $\\mathrm{A}_p:\\mathrm{D}(\\mathrm{A}_p)\\subset\\mathbb{X}_p\\to\\mathbb{X}_p$ as\n\t\\begin{align*}\n\t\\mathrm{A}_pg(\\xi)= g''(\\xi),\n\t\\end{align*}\n\twhere $\\mathrm{D}(\\mathrm{A}_p)= \\mathrm{W}^{2,p}([0,\\pi];\\mathbb{R})\\cap\\mathrm{W}_0^{1,p}([0,\\pi];\\mathbb{R})$. 
Since $\\mathrm{C}_0^{\\infty}([0,\\pi];\\mathbb{R})\\subset\\mathrm{D}(\\mathrm{A}_p)$, the domain $\\mathrm{D}(\\mathrm{A}_p)$ is dense in $\\mathbb{X}_p$, and one can easily verify that the operator $\\mathrm{A}_p$ is closed. Next, we consider the following Sturm-Liouville system:\n\t\\begin{equation}\\label{59}\n\t\\left\\{\n\t\\begin{aligned}\n\t\\left(\\lambda\\mathrm{I}-\\mathrm{A}_p\\right)g(\\xi)&=l(\\xi), \\ 0<\\xi<\\pi,\\\\\n\tg(0)=g(\\pi)&=0.\n\t\\end{aligned}\n\t\\right.\n\t\\end{equation}\n\tOne can easily rewrite the above system as \n\t\\begin{align}\\label{511}\n\t\\left(\\lambda\\mathrm{I}-\\Delta\\right)g(\\xi)&=l(\\xi),\n\t\\end{align}\n\twhere $\\Delta g(\\xi)=g''(\\xi)$. Multiplying both sides of \\eqref{511} by $g|g|^{p-2}$ and then integrating over $[0,\\pi]$, we obtain\n\t\\begin{align}\\label{536}\n\t\\lambda\\int_0^{\\pi}|g(\\xi)|^p\\mathrm{d}\\xi&+(p-1)\\int_0^{\\pi}|g(\\xi)|^{p-2}|g'(\\xi)|^2\\mathrm{d}\\xi=\\int_0^{\\pi}l(\\xi)g(\\xi)|g(\\xi)|^{p-2}\\mathrm{d}\\xi.\n\t\\end{align}\n\tDiscarding the nonnegative second term on the left-hand side and applying H\\\"older's inequality to the right-hand side, we get\n\t\\begin{align*}\n\t\\lambda\\int_0^{\\pi}|g(\\xi)|^p\\mathrm{d}\\xi&\\leq \\left(\\int_0^{\\pi}|g(\\xi)|^p\\mathrm{d}\\xi\\right)^{\\frac{p-1}{p}}\\left(\\int_0^{\\pi}|l(\\xi)|^p\\mathrm{d}\\xi\\right)^{\\frac{1}{p}}.\n\t\\end{align*}\n\tThus, we have \n\t\\begin{align*}\n\t\\|\\mathcal{R}(\\lambda,\\mathrm{A}_p)l\\|_{\\mathrm{L}^p}=\\|g\\|_{\\mathrm{L}^p}\\leq\\frac{1}{\\lambda}\\|l\\|_{\\mathrm{L}^p},\n\t\\end{align*}\n\tso that we obtain \n\t\\begin{align}\n\t\\|\\mathcal{R}(\\lambda,\\mathrm{A}_p)\\|_{\\mathcal{L}(\\mathrm{L}^p)}\\leq\\frac{1}{\\lambda}.\n\t\\end{align}\n\tHence, by applying the Hille-Yosida theorem, we obtain that the operator $\\mathrm{A}_p$ generates a strongly continuous semigroup $\\{\\mathcal{T}_p(t):t\\ge0\\}$ of bounded linear operators. 
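As a quick numerical illustration of the resolvent bound $\|\mathcal{R}(\lambda,\mathrm{A}_p)\|\le 1/\lambda$ in the Hilbertian case $p=2$, one can discretise the Dirichlet Laplacian on $[0,\pi]$ by standard second-order finite differences and check the estimate directly. This is only a sanity-check sketch; the grid size and the values of $\lambda$ below are illustrative choices, not taken from the text.

```python
import numpy as np

def dirichlet_laplacian(n, length=np.pi):
    """Second-order finite-difference Laplacian on (0, length) with zero
    Dirichlet boundary conditions, acting on the n interior grid values."""
    h = length / (n + 1)
    main = -2.0 * np.ones(n)
    off = np.ones(n - 1)
    return (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2

n = 400
A = dirichlet_laplacian(n)
rng = np.random.default_rng(0)
l = rng.standard_normal(n)

for lam in [0.5, 1.0, 10.0]:
    # Solve (lambda*I - A) g = l and compare discrete L^2 norms; the grid
    # spacing cancels in the ratio ||g|| / ||l||.
    g = np.linalg.solve(lam * np.eye(n) - A, l)
    ratio = np.linalg.norm(g) / np.linalg.norm(l)
    # A is symmetric negative definite, so the bound holds exactly.
    assert ratio <= 1.0 / lam + 1e-12
```

Because the discrete Laplacian is symmetric with strictly negative eigenvalues, the spectrum of $\lambda\mathrm{I}-\mathrm{A}$ lies in $[\lambda,\infty)$, which is exactly the mechanism behind the Hille-Yosida estimate above.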
\n\t\n\tMoreover, the infinitesimal generator $\\mathrm{A}_p$ and the semigroup $\\mathcal{T}_p(t)$ can be written as\n\t\\begin{align}\n\t\\mathrm{A}_pg&= \\sum_{n=1}^{\\infty}-n^{2}\\langle g, w_{n} \\rangle w_{n},\\ g\\in \\mathrm{D}(\\mathrm{A}_p),\\nonumber\\\\\n\t\\mathcal{T}_p(t)g&= \\sum_{n=1}^{\\infty}\\exp\\left(-n^2t\\right)\\langle g, w_{n} \\rangle w_{n},\\ g\\in\\mathbb{X}_p,\n\t\\end{align}\n\twhere $w_n(\\xi)=\\sqrt{\\frac{2}{\\pi}}\\sin(n\\xi)$ are the normalized eigenfunctions corresponding to the eigenvalues $\\lambda_n=-n^2\\ (n\\in\\mathbb{N})$ of the operator $\\mathrm{A}_p$ and $\\langle g,w_n\\rangle :=\\int_0^{\\pi}g(\\xi)w_n(\\xi)\\mathrm{d}\\xi$. Further, the resolvent operator $\\mathcal{R}(\\lambda,\\mathrm{A}_p)$ is compact (see \\cite{MTM} for more details). Therefore, the generated semigroup $\\mathcal{T}_p(t)$ is compact for $t>0$. Thus, the condition (\\textit{H1}) of Assumption \\ref{as2.1} holds.\n\t\n\tWe now define the following operators\n\t\\begin{align}\n\t\\mathcal{T}_{\\alpha,p}(t)g&=\\int_{0}^{\\infty}\\varphi_{\\alpha}(\\theta)\\mathcal{T}_p(t^{\\alpha}\\theta)g\\,\\mathrm{d}\\theta=\\int_{0}^{\\infty}\\varphi_{\\alpha}(\\theta)\\sum_{n=1}^{\\infty}\\exp\\left(-n^2t^\\alpha\\theta\\right)\\langle g, w_{n} \\rangle w_{n}\\,\\mathrm{d}\\theta,\\\\\n\t\\widehat{\\mathcal{T}}_{\\alpha,p}(t)g&=\\alpha\\int_{0}^{\\infty}\\theta\\varphi_{\\alpha}(\\theta)\\mathcal{T}_p(t^{\\alpha}\\theta)g\\,\\mathrm{d}\\theta=\\alpha\\int_{0}^{\\infty}\\theta\\varphi_{\\alpha}(\\theta)\\sum_{n=1}^{\\infty}\\exp\\left(-n^2t^\\alpha\\theta\\right)\\langle g, w_{n} \\rangle w_{n}\\,\\mathrm{d}\\theta, \\label{5.7}\n\t\\end{align}\n\tfor all $g\\in\\mathbb{X}_p$. \n\t\n\tLet us take $\\mathfrak{B}=\\mathrm{PC}_{0}\\times\\mathrm{L}^1_h(\\mathbb{X})$ with $h(\\theta)=e^{\\nu\\theta}$, for some $\\nu>0$ (see Example \\ref{exm2.8}). 
Proceeding with arguments similar to those in Section 5 of \\cite{SSM}, one can verify that the space $\\mathfrak{B}=\\mathrm{PC}_{0}\\times\\mathrm{L}^1_h(\\mathbb{X})$ is a phase space, which satisfies the axioms (A1) and (A2) with $\\Lambda(t) =\\int_{-t}^{0}h(\\theta)\\mathrm{d}\\theta$ and $\\Upsilon(t)= H(-t)$.\n\tWe define $K:=\\sup\\limits_{\\theta\\in(-\\infty,0]}\\frac{|b(-\\theta)|}{h(\\theta)}$.\n\t\\vskip 0.1 cm \n\t\\noindent\\textbf{Step 2:} \\emph{Abstract formulation and approximate controllability.}\n\tLet us define $$x(t)(\\xi):=z(t,\\xi),\\ \\mbox{ for }\\ t\\in J\\ \\mbox{ and }\\ \\xi\\in[0,\\pi],$$ and the bounded linear operator $\\mathrm{B}:\\mathbb{U}\\to\\mathbb{X}_p$ as $$\\mathrm{B}u(t)(\\xi):=\\eta(t,\\xi)=\\int_{0}^{\\pi}K(\\zeta,\\xi)u(t)(\\zeta)\\mathrm{d}\\zeta, \\ t\\in J,\\ \\xi\\in [0,\\pi],$$ where $K\\in\\mathrm{C}([0,\\pi]\\times[0,\\pi];\\mathbb{R})$ with $K(\\zeta,\\xi)=K(\\xi,\\zeta),$ for all $\\zeta,\\xi\\in [0,\\pi]$. We assume that the operator $\\mathrm{B}$ is one-to-one.\n\tLet us estimate \n\t\\begin{align*}\n\t\\left\\|\\mathrm{B}u(t)\\right\\|_{\\mathbb{X}_p}^p=\\int_{0}^{\\pi}\\left|\\int_{0}^{\\pi}K(\\zeta,\\xi)u(t)(\\zeta)\\mathrm{d}\\zeta\\right|^p\\mathrm{d}\\xi.\n\t\\end{align*}\n\tApplying the Cauchy-Schwarz inequality, we have\n\t\\begin{align*}\n\t\\left\\|\\mathrm{B}u(t)\\right\\|_{\\mathbb{X}_p}^p&\\le\\int_{0}^{\\pi}\\left[\\left(\\int_{0}^{\\pi}|K(\\zeta,\\xi)|^2\\mathrm{d}\\zeta\\right)^{\\frac{1}{2}}\\left(\\int_{0}^{\\pi}|u(t)(\\zeta)|^2\\mathrm{d}\\zeta\\right)^{\\frac{1}{2}}\\right]^{p}\\mathrm{d}\\xi\\\\&=\\left(\\int_{0}^{\\pi}|u(t)(\\zeta)|^2\\mathrm{d}\\zeta\\right)^{\\frac{p}{2}}\\int_{0}^{\\pi}\\left(\\int_{0}^{\\pi}|K(\\zeta,\\xi)|^2\\mathrm{d}\\zeta\\right)^{\\frac{p}{2}}\\mathrm{d}\\xi.\n\t\\end{align*}\n\tSince the kernel $K(\\cdot,\\cdot)$ is continuous, we arrive at\n\t\\begin{align*}\n\t\\left\\|\\mathrm{B}u(t)\\right\\|_{\\mathbb{X}_p}\\le 
C\\left\\|u(t)\\right\\|_{\\mathbb{U}},\n\t\\end{align*}\n\tso that we get \n\t$\t\\left\\|\\mathrm{B}\\right\\|_{\\mathcal{L}(\\mathbb{U};\\mathbb{X}_p)}\\le C.$\n\tHence, the operator $\\mathrm{B}$ is bounded. Moreover, the symmetry of the kernel implies that $\\mathrm{B}=\\mathrm{B}^*$ (self-adjoint). For example, one can take $K(\\xi,\\zeta)=1+\\xi^2+\\zeta^2,\\ \\mbox{for all}\\ \\xi, \\zeta\\in [0,\\pi]$.\n\tThe function $\\psi:(-\\infty,0]\\rightarrow\\mathbb{X}$ is given as\n\t\\begin{align}\n\t\\nonumber \\psi(t)(\\xi)=\\psi(t,\\xi),\\ \\xi\\in[0,\\pi].\n\t\\end{align}\t \n\tNext, the functions $f:J\\times \\mathfrak{B}\\to\\mathbb{X}$ and $\\rho:J\\times \\mathfrak{B}\\to\\mathbb{R}$ are defined as\n\t\\begin{align}\n\t\\nonumber f(t,\\psi)\\xi&:=\\int_{-\\infty}^{0}b(-\\theta)\\psi(\\theta,\\xi)\\mathrm{d}\\theta,\\\\\n\t\\nonumber\\rho(t,\\psi):&=t-\\sigma(\\|\\psi(0)\\|_{\\mathbb{X}}),\n\t\\end{align}\t\n\tfor $\\xi\\in[0,\\pi]$. Clearly, $f$ is continuous and uniformly bounded by $K$. These facts guarantee that the function $f$ satisfies the condition \\textit{$(H2)$} of Assumption \\ref{as2.1} and the condition \\textit{$(H4)$}.\n\t\n\tMoreover, the impulse functions $h_k:[t_k,\\tau_k]\\times\\mathbb{X}\\to\\mathbb{X},$ for $k=1,\\ldots,m,$ are defined as \n\t\\begin{align*}\n\th_k(t,x)\\xi:=\\int_{0}^{\\pi}\\rho_k(t,\\xi,z)\\cos^2(x(t_k^-)z)\\mathrm{d}z, \\ \\mbox{ for }\\ t\\in(t_k,\\tau_k],\n\t\\end{align*}\n\twhere $\\rho_k\\in\\mathrm{C}([0,1]\\times[0,\\pi]^2;\\mathbb{R})$. It is easy to verify that the impulses $h_k,$ for $k=1,\\ldots,m,$ satisfy the condition \\textit{$(H3)$} of Assumption \\ref{as2.1}.\n\t\n\t\n\tThe system \\eqref{ex} can be transformed into the abstract form \\eqref{1.1} by using the above substitutions, and it satisfies Assumption \\ref{as2.1} \\textit{$(H1)$-$(H3)$} and Assumption (\\textit{$H4$}). It remains to verify that the associated linear system of the equation \\eqref{1.1} is approximately controllable. 
In order to prove this, we consider\n\t$$(T-t)^{\\alpha-1}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha,p}(T-t)^*x^*=0,\\ \\mbox{ for any}\\ x^*\\in\\mathbb{X}^*,\\ 0\\le t<T.$$\n\\begin{align}\n \\ensuremath{\\,\\text{d}}\\Phi_{+2} = \\ensuremath{\\,\\text{d}}\\Phi_{+2}^> + \\ensuremath{\\,\\text{d}}\\Phi_{+2}^< \\, .\n\\end{align}\nThe ordered part $\\ensuremath{\\,\\text{d}}\\Phi_{+2}^<$ corresponds to the region accessible to strongly-ordered shower paths $\\ensuremath{t_0} > t > t'$, whereas the unordered part $\\ensuremath{\\,\\text{d}}\\Phi_{+2}^>$ is inaccessible to strongly-ordered showers because of the larger intermediate scale $\\ensuremath{t_0} > t' > t$. We will use V\\protect\\scalebox{0.8}{INCIA}\\xspace's sector criterion, cf.\\ sec.~3.3 in \\cite{Brooks:2020upa}, to distinguish between the two, cf.~\\cref{subsec:2to4}.\n\nTo match the NNLO calculation with the shower, the shower needs to incorporate virtual corrections to ordinary $2\\to 3$ branchings as well as new $2\\to 4$ branchings, accounting for the simultaneous emission of two particles. These new shower terms correspond to the real-virtual and double-real corrections in the NNLO calculation. 
In addition, we need to incorporate the corresponding parton-shower counterterms.\nWe start by defining the two-particle NLO Sudakov as \\cite{Li:2016yez}\n\\begin{multline}\n \\Delta_2^\\mathrm{NLO}(\\ensuremath{t_0},t) \\\\\n = \\exp\\Bigg\\{-\\int^{\\ensuremath{t_0}}_{t}\\ensuremath{\\,\\text{d}}\\Phi_{+1}\\, {\\mathrm{A}}_{2\\mapsto3}^{(0)}(\\Phi_{+1}) w^\\mathrm{NLO}_{2\\mapsto3}(\\Phi_2,\\Phi_{+1}) \\Bigg\\} \\\\\n \\times \\exp\\Bigg\\{-\\int^{\\ensuremath{t_0}}_{t}\\ensuremath{\\,\\text{d}}\\Phi_{+2}^>\\, {\\mathrm{A}}_{2\\mapsto 4}^{(0)}(\\Phi_{+2})w^\\mathrm{LO}_{2\\mapsto4}(\\Phi_2,\\Phi_{+2}) \\Bigg\\} \\, ,\n\\label{eq:nloSudakov}\n\\end{multline}\nwhere we have introduced the $2\\mapsto4$ LO matrix-element correction factor,\n\\begin{equation}\n w^\\mathrm{LO}_{2\\mapsto4}(\\Phi_2,\\Phi_{+2}) = \\frac{{\\mathrm{R\\kern-0.15em R}}(\\Phi_2,\\Phi_{+2})}{{\\mathrm{A}}^{(0)}_{2\\mapsto4}(\\Phi_{+2}){\\mathrm{B}}(\\Phi_2)}\n\\label{eq:LOMEC2to4}\n\\end{equation}\nand the $2\\mapsto3$ NLO matrix-element correction factor $w^\\mathrm{NLO}_{2\\mapsto3}(\\Phi_{+1})$, which we write in terms of a second order correction to the LO $2\\mapsto 3$ MEC in \\cref{eq:LOMEC2to3},\n\\begin{multline}\n w^\\mathrm{NLO}_{2\\mapsto3}(\\Phi_2,\\Phi_{+1}) = w^\\mathrm{LO}_{2\\mapsto3}(\\Phi_2,\\Phi_{+1})\\\\\n \\times\\big(1+\\tilde{w}^\\mathrm{FO}_{2\\mapsto3}(\\Phi_2,\\Phi_{+1})\n +\\tilde{w}^\\mathrm{PS}_{2\\mapsto3}(\\Phi_2)\\big)\\,.\n\\label{eq:NLOMEC2to3}\n\\end{multline}\nThe coefficients $\\tilde{w}$ are given by matching the $\\Order{\\ensuremath{\\alpha_{\\mathrm{s}}}^2}$ terms\nin the expansion of the truncated shower approximation to the fixed-order result \nin \\cref{eq:expvalNNLO} \\cite{Li:2016yez,Hartgring:2013jma}.\nWe find the fixed-order contribution\n\\begin{multline}\n \\tilde{w}^\\mathrm{FO}_{2\\mapsto3}(\\Phi_2,\\Phi_{+1})=\\\\\n \\frac{{\\mathrm{R\\kern-0.15emV}}(\\Phi_2,\\Phi_{+1})}{{\\mathrm{R}}(\\Phi_2,\\Phi_{+1})} + 
\\int^{t}_{0}\\ensuremath{\\,\\text{d}}\\Phi_{+1}^\\prime\\, \\frac{{\\mathrm{R\\kern-0.15em R}}(\\Phi_2,\\Phi_{+1},\\Phi_{+1}^\\prime)}{{\\mathrm{R}}(\\Phi_2,\\Phi_{+1})} \\\\\n -\\left(\\frac{{\\mathrm{V}}(\\Phi_2)}{{\\mathrm{B}}(\\Phi_2)}+\\int^{\\ensuremath{t_0}}_{0}\\ensuremath{\\,\\text{d}}\\Phi_{+1}'\\,\\frac{{\\mathrm{R}}(\\Phi_2,\\Phi_{+1}')}{{\\mathrm{B}}(\\Phi_2)}\\right)\\;,\n\\label{eq:VMEC2to3}\n\\end{multline}\nand the second-order parton-shower matching term\n\\begin{multline}\n \\tilde{w}^\\mathrm{PS}_{2\\mapsto3}(\\Phi_2)\n = \\frac{\\ensuremath{\\alpha_{\\mathrm{s}}}}{2\\pi}\\ln\\frac{\\kappa^2\\mu_{\\rm S}^2}{\\ensuremath{\\mu}_\\mathrm{R}^2}\\\\\n + \\int^{\\ensuremath{t_0}}_{t}\\ensuremath{\\,\\text{d}}\\Phi_{+1}'\\, {\\mathrm{A}}_{2\\mapsto3}^{(0)}(\\Phi_{+1}')w^\\mathrm{LO}_{2\\mapsto3}(\\Phi_2,\\Phi_{+1}')\\;.\n\\end{multline}\nThe factor $\\kappa$ is a constant and $\\mu_\\mathrm{S}^2$ is the parton-shower renormalisation scale.\nThe two are conventionally chosen such that the logarithmic structure of eq.~\\eqref{eq:VMEC2to3}\nis reproduced, which leads to $\\mu_\\mathrm{S}=p_\\perp$ and $\\kappa^2=\\exp\\{K\/\\beta_0\\}$, with $K$ \nthe two-loop cusp anomalous dimension~\\cite{Kodaira:1981nh,Davies:1984hs,Davies:1984sp,Catani:1988vd}. 
\nThis is known as the CMW scheme~\\cite{Catani:1990rr}.\n\nNote that in \\cref{eq:nloSudakov}, the integral over ${\\mathrm{A}}_{2\\mapsto 4}^{(0)}$ is defined over \nthe range $[t, \\ensuremath{t_0}]$, since the ``ordered'' contribution with $t' < t$ is already generated by the iterated $2\\mapsto 3$ shower. The unordered region instead enters through the term\n\\begin{align}\n \\int^{\\ensuremath{t_0}}_{t}\\ensuremath{\\,\\text{d}}\\Phi_{+2}^>\\, {\\mathrm{A}}_{2\\mapsto4}^{(0)}(\\Phi_{+2})w^\\mathrm{LO}_{2\\mapsto4} (\\Phi_2,\\Phi_{+2})O(\\Phi_2,\\Phi_{+2}) \\nonumber\n\\end{align}\nand our final NNLO+PS matching formula takes the simple form:\n\\begin{equation}\n \\avg{O}_{\\mathrm{NNLO}+\\mathrm{PS}} = \\int \\ensuremath{\\,\\text{d}} \\Phi_2\\, {\\mathrm{B}}(\\Phi_2) k_\\mathrm{NNLO}(\\Phi_2) {\\mathcal{S}}_2(\\ensuremath{t_0},O) \\, .\n\\label{eq:nnlops}\n\\end{equation}\n\nWhen expanding the truncated shower operator ${\\mathcal{S}}_2$ in \\cref{eq:nnlops} up to order $\\ensuremath{\\alpha_{\\mathrm{s}}}^2$, NNLO accuracy is recovered for the observable $O(\\Phi_2)$, while $O(\\Phi_3)$ and $O(\\Phi_4)$ achieve NLO and LO accuracy, respectively.\nThis is true, because the combination of the iterated $2\\mapsto 3\\mapsto 4$ and the direct $2\\mapsto 4$ contributions to \\cref{eq:showerOpNLO} yields the correct double-real correction ${\\mathrm{R\\kern-0.15em R}}$ in \\cref{eq:expvalNNLO} by means of the LO MEC factors in \\cref{eq:LOMEC2to3,eq:LOMEC3to4,eq:LOMEC2to4}. Moreover, the NLO correction \\cref{eq:NLOMEC2to3} recovers the correct real and real-virtual corrections ${\\mathrm{R}}$ and ${\\mathrm{R\\kern-0.15emV}}$ in \\cref{eq:expvalNNLO} by means of \\cref{eq:LOMEC2to3} and \\cref{eq:VMEC2to3}.\n\n\\begin{comment}\n\\todo{SH: I think the following paragraph is not needed anymore. We could comment earlier on when we describe the counterterms. One really does not have a choice if one wants to avoid a large logarithm in the NLO weight.}\nWe want to close this section by elaborating upon the renormalisation scales in our calculation. 
While the fixed-order calculation is renormalised at the scale of the hard process, $\\ensuremath{\\mu}_\\mathrm{R}^2$, the scales in the real-radiation contributions are dictated by the parton-shower resummation, meaning that the strong coupling is evaluated at the emission scales $t \\equiv p_\\perp^2$.\nThis is reflected in our calculation above by evaluating all antenna functions at the shower scales,\n\\begin{equation}\n {\\mathrm{A}}^{(\\ell)}_{n\\mapsto n+m}(\\Phi_{+m}) \\equiv {\\mathrm{A}}^{(\\ell)}_{n\\mapsto n+m}(\\Phi_{+m}; p_{\\perp,n+m}^2) \\, ,\n\\end{equation}\nwhereas all matrix-element correction factors and the Born-local (N)NLO weights are evaluated at the renormalisation scale of the hard process,\n\\begin{align}\n w^\\mathrm{MEC}_{n\\mapsto n+m}(\\Phi_n,\\Phi_{+m}) &\\equiv w^\\mathrm{MEC}_{n\\mapsto n+m}(\\Phi_n,\\Phi_{+m}; \\ensuremath{\\mu}_\\mathrm{R}^2) \\, , \\\\\n k_\\mathrm{(N)NLO}(\\Phi_2) &\\equiv k_\\mathrm{(N)NLO}(\\Phi_2; \\ensuremath{\\mu}_\\mathrm{R}^2) \\, .\n\\end{align}\nThis allows the calculation to be reorganised in a way that all logarithms containing scale hierarchies can be reabsorbed in a multiplicative factor.\n\\end{comment}\n\n\\section{Numerical Implementation} \\label{sec:implementation}\nIn this section, we want to present all necessary components of an implementation of our NNLO matching strategy. 
These are:\n\\begin{itemize}\n \\item a framework to calculate the Born-local NNLO $K$-factors in Eq.~\\eqref{eq:born_local_nnlo_kfactor}\n \\item a shower filling the strongly-ordered \\cite{Brooks:2020bhi} and unordered \\cite{Li:2016yez} regions of the single- and double-emission phase space\n \\item tree-level MECs in strongly-ordered \\cite{Fischer:2017yja} and unordered \\cite{Giele:2011cb} shower paths\n \\item NLO MECs in the first emission \\cite{Hartgring:2013jma}\n\\end{itemize}\nWith the exception of the first point, (process-dependent) implementations of these components existed in previous V\\protect\\scalebox{0.8}{INCIA}\\xspace versions (not necessarily simultaneously), and have been described in detail in the various references.\nWe have (re-)implemented all components in a semi-automated~\\footnote{Semi-automated here refers to the fact that antenna subtraction terms are explicitly implemented for each class of processes.} fashion in the V\\protect\\scalebox{0.8}{INCIA}\\xspace antenna shower in P\\protect\\scalebox{0.8}{YTHIA}\\xspace 8.3. We access loop matrix elements via a novel M\\protect\\scalebox{0.8}{CFM}\\xspace \\cite{Campbell:1999ah,Campbell:2011bn,Campbell:2015qma,Campbell:2019dru} interface presented in \\cite{Campbell:2021vlt} and tree-level matrix elements via a new run-time interface \\cite{ComixInterface} to the C\\protect\\scalebox{0.8}{OMIX}\\xspace matrix element generator \\cite{Gleisberg:2008fv} in S\\protect\\scalebox{0.8}{HERPA}\\xspace \\cite{Gleisberg:2008ta,Sherpa:2019gpd}.\n\nOur NNLO matching algorithm can be summarised in the following steps:\n\\begin{enumerate}\n \\item[1.] Generate a phase space point according to the Born cross section ${\\mathrm{B}}(\\Phi_2)$.\n \\item[2.] Calculate the Born-local NNLO factor $k_\\mathrm{NNLO}(\\Phi_2)$ and reweight the phase space point by the result. \n \\item[3.] 
Let the phase-space maximum given by the invariant mass of the two Born partons define the starting scale for the shower, $t_\\mathrm{now} = t_0(\\Phi_2)$.\n \\item[4.] Starting from the current shower scale, $t_\\mathrm{now}$, let the $2\\mapsto 3$ and $2\\mapsto 4$ showers compete for the highest branching scale. \n \\item[5.] Update the current shower scale to be that of the winning branching, $t_\\mathrm{now} = \\mathrm{max}(t_{2\\mapsto 3},t_{2\\mapsto 4})$.\n \\item[6a.] If the winning branching is a $2\\mapsto 3$ branching, calculate the accept probability including the NLO MEC $w^\\mathrm{NLO}_{2\\mapsto3}$. \n \\begin{itemize}\n \\item If rejected, continue from step 4.\n \\item If accepted, continue with a LO shower from the resulting three-particle configuration, starting from $t_\\mathrm{now}$ and including the LO MEC $w^\\mathrm{LO}_{3\\mapsto 4}$ when calculating accept probabilities for the $3\\mapsto4$ step. \n \\end{itemize}\n When a $3\\mapsto 4$ branching is accepted (or the shower cutoff scale is reached), continue with step 7.\n \\item[6b.] If the winning branching is a $2\\mapsto 4$ branching, calculate the accept probability including the LO MEC $w^\\mathrm{LO}_{2\\mapsto4}$. \n \\begin{itemize}\n \\item If rejected, continue from step 4. \n \\item If accepted, continue with step 7.\n \\end{itemize}\n \\item[7.] Continue with a standard (possibly uncorrected) shower from the resulting four-particle configuration, starting from $t_\\mathrm{now}$. \n\\end{enumerate}\nIt should be emphasised that the matrix-element correction factors make this algorithm independent of the splitting kernels (i.e.\\ antenna functions in our case) up to the matched order and the shower merely acts as an efficient Sudakov-weighted phase-space generator. Hence, if the algorithm is stopped after step 6, an NNLO-matched result is obtained, which can be showered by any other parton shower, just as is the case for P\\protect\\scalebox{0.8}{OWHEG}\\xspace NLO matching. 
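The competition between trial generators in steps 4 to 6 can be sketched as a toy veto algorithm. This is only a schematic illustration: the constant accept probabilities stand in for the true ratios of (MEC-corrected) antenna functions to trial kernels, the trial densities are taken flat in $\ln t$, and none of the numerical values are taken from the actual VINCIA implementation.

```python
import random

random.seed(7)

T0, TC = 1.0, 1e-4     # starting and cutoff scales (illustrative values)

def next_trial(t, c):
    """Sample the next trial scale below t for a channel whose integrated
    trial rate between scales is c * log(t/t'), i.e. flat in log t."""
    r = random.random()
    return t * r**(1.0 / c)

def accept_probability(t, channel):
    # Placeholder for A * w_MEC / A_trial; any value in (0, 1] works here.
    return 0.6 if channel == "2to3" else 0.3

def shower_step(t):
    """One competition step: both channels propose a scale below t and the
    highest proposed scale wins (steps 4-5 of the algorithm)."""
    trials = {"2to3": next_trial(t, c=2.0), "2to4": next_trial(t, c=0.5)}
    winner = max(trials, key=trials.get)
    return winner, trials[winner]

t = T0
history = []
while t > TC:
    winner, t = shower_step(t)
    if t <= TC:
        break
    # Veto step: on rejection the evolution simply continues from the
    # rejected scale, which is what makes the veto algorithm correct.
    if random.random() < accept_probability(t, winner):
        history.append((winner, t))

# Accepted branchings are strictly ordered in the evolution variable.
scales = [s for _, s in history]
assert all(a > b for a, b in zip(scales, scales[1:]))
assert all(TC < s < T0 for s in scales)
```

The essential point mirrored here is that rejected trials are not regenerated from the starting scale but from the rejected scale, so the accepted sequence is distributed according to the (MEC-corrected) Sudakov form factor.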
Note, that there remains a dependence on the ordering variable, which has to be properly accounted for.\n\n\\subsection{NNLO Kinematics}\\label{subsec:kinematics}\nFor both, the unordered shower contributions and the Born-local NNLO weight, new kinematic maps are needed to reflect their direct $2\\mapsto 4$, i.e.\\ unordered or double-unresolved, nature. We utilise that the $n$-particle phase space measure\nmay be factorised into the product of a $2\\mapsto 3$ antenna phase space and the $n-1$-particle phase space measure, as well as into the product of a $2\\mapsto 4$ antenna phase space and the $n-2$-particle phase space.\nThis allows us to write the $2\\mapsto 4$ antenna phase space as the product of two $2\\mapsto 3$ antenna phase spaces,\n\\begin{multline}\n \\ensuremath{\\,\\text{d}} \\Phi_{+2} (p_I+p_K; p_i, p_{j_1}, p_{j_2}, p_{k}) \\\\\n = \\ensuremath{\\,\\text{d}} \\Phi_{+1} (p_I+p_K; \\hat{p}_i, \\hat{p}_{j}, p_{k}) \\\\\n \\times \\ensuremath{\\,\\text{d}} \\Phi_{+1} (\\hat{p}_i+\\hat{p}_j; p_i, p_{j_1}, p_{j_2}) \\, ,\n\\label{eq:PS2to4}\n\\end{multline}\ncorresponding to the kinematic mapping\n\\begin{equation}\n p_I + p_K = \\hat{p}_i + \\hat{p}_j + p_k = p_i + p_{j_1} + p_{j_2} + p_k \\, ,\n\\end{equation}\neffectively representing a tripole map \\cite{Gehrmann-DeRidder:2003pne}. In line with the phase space factorisation, the kinematic mapping is then constructed as an iteration of two on-shell $2\\mapsto 3$ antenna maps given in sec.~2.3 in \\cite{Brooks:2020upa}. 
\n\nWe have tested the validity of our kinematic maps by comparing V\\protect\\scalebox{0.8}{INCIA}\\xspace's phase-space mappings (double-gluon emission and gluon-emission-plus-splitting) to a flat sampling via R\\protect\\scalebox{0.8}{AMBO}\\xspace.\n\n\\subsection{Unordered Shower Contributions}\\label{subsec:2to4}\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.4\\textwidth]{main205-1.pdf}\n \\includegraphics[width=0.4\\textwidth]{main205-2.pdf}\n \\includegraphics[width=0.4\\textwidth]{main205-3.pdf}\n \\includegraphics[width=0.4\\textwidth]{main205-4.pdf}\n \\caption{Ratio of the evolution variable of the four-parton and three-parton configuration $\\log(p_{\\perp,4}^2\/p_{\\perp,3}^2)$ in $e^+e^-\\to 4j$. The region $> 0$ corresponds to unordered contributions not reached by strongly-ordered showers.}\n \\label{fig:orderingZDecays}\n\\end{figure*}\n\nAn important part of our proposal is the inclusion of double-unresolved radiation in the shower evolution.\nTo this end, we employ the sector-antenna framework \\cite{Brooks:2020upa} and amend it by direct $2\\mapsto 4$ branchings as described in \\cite{Li:2016yez}. 
\nIn the sector-shower approach, each branching is restricted to the region in phase space where it minimises the resolution variable, defined for final-state clusterings by\n\\begin{equation}\n Q^2_{\\mathrm{res},j} = \\begin{cases} \\frac{s_{ij}s_{jk}}{s_{IK}} & \\text{if } j \\text{ is a gluon} \\\\ s_{ij} \\sqrt{\\frac{s_{jk}}{s_{IK}}} & \\text{if } (i,j) \\text{ is a quark-antiquark pair} \\end{cases} \\, .\n\\label{eq:resVar}\n\\end{equation}\nThis is achieved by a ``sectorisation'' of phase space according to the partition of unity,\n\\begin{equation}\n 1 = \\sum\\limits_j \\Theta^\\mathrm{sct}_{j\/IK} = \\sum\\limits_j \\theta\\left(\\min\\limits_{i}\\left\\{Q^2_{\\mathrm{res},i}\\right\\} - Q^2_{\\mathrm{res},j}\\right) \\, ,\n\\label{eq:sectorVetoLO}\n\\end{equation}\nwhich is implemented in the shower evolution as an explicit veto for each trial branching.\nSince only a single branching kernel contributes per colour-ordered phase space point, sector antenna functions have to incorporate the full singularity structure associated with the respective sector. At LO, this amounts to including both the full single-collinear and single-soft limits in the antenna function. The full set of V\\protect\\scalebox{0.8}{INCIA}\\xspace's LO sector antenna functions is collected in \\cite{Brooks:2020upa}.\n\nBy construction, the default sector shower generates only strongly-ordered sequences\\footnote{This is different to virtually any other strongly-ordered shower, where recoil effects introduce unordered sequences. Such phase space points are vetoed in a sector shower.}, as the sector veto ensures that each emission is the softest (or most-collinear) in the post-branching configuration. 
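The sector veto itself can be sketched in a few lines; here only the gluon-emission branch of the resolution variable is implemented, and the invariants are arbitrary illustrative numbers rather than an actual shower configuration.

```python
def q2_res_gluon(s_ij, s_jk, s_IK):
    # Resolution variable for a gluon emission j in antenna (I, K).
    return s_ij * s_jk / s_IK

def sector_winner(invariants, s_IK):
    """Pick the sector whose clustered gluon minimises Q^2_res.
    `invariants` maps gluon label j -> (s_ij, s_jk)."""
    q2 = {j: q2_res_gluon(sij, sjk, s_IK)
          for j, (sij, sjk) in invariants.items()}
    winner = min(q2, key=q2.get)
    return winner, q2

# Toy colour-ordered configuration with two candidate gluon clusterings
# and s_IK = 100 (illustrative).
invariants = {"g1": (10.0, 40.0), "g2": (25.0, 2.0)}
winner, q2 = sector_winner(invariants, s_IK=100.0)
assert winner == "g2"        # 25*2/100 = 0.5 < 10*40/100 = 4.0

# The sector veto keeps only the winning sector: Theta = 1 for g2, else 0,
# so exactly one branching kernel contributes per phase-space point.
theta = {j: float(j == winner) for j in invariants}
assert sum(theta.values()) == 1.0
```

In the shower this comparison is carried out for every trial branching, and trials that do not minimise the resolution variable in their sector are discarded before the accept/reject step.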
The inclusion of direct $2\\mapsto 4$ branchings (which look unordered from an iterated $2\\mapsto 3$ point of view) in the sector shower is facilitated by extending the sector decomposition in \\cref{eq:sectorVetoLO} by an ordering criterion,\n\\begin{align}\n 1 &= \\sum\\limits_j \\left[\\Theta^<_{j\/IK}\\Theta^\\mathrm{sct}_{j\/IK} + \\Theta^>_{j\/IK}\\Theta^\\mathrm{sct}_{j\/IK}\\right]\\\\\n &= \\underbrace{\\sum\\limits_j \\theta\\left(\\hat{p}_{\\perp,\\hat{j}}^2 - p_{\\perp,j}^2\\right)\\Theta^\\mathrm{sct}_{j\/IK}}_{2\\mapsto 3 \\text{ (strongly ordered)}} + \\underbrace{\\sum\\limits_j \\theta\\left(p_{\\perp,j}^2 - \\hat{p}_{\\perp,\\hat{j}}^2\\right)\\Theta^\\mathrm{sct}_{j\/IK}}_{2\\mapsto4 \\text{ (unordered)}} \\nonumber \n\\end{align}\nwhere $p_\\perp^2$ denotes V\\protect\\scalebox{0.8}{INCIA}\\xspace's transverse-momentum ordering variable and hatted variables denote the intermediate node in a sequence $IL \\mapsto \\hat{i} \\hat{j} \\hat{\\ell} \\mapsto i j k \\ell$. Here, the scales $p_\\perp^2$ and $\\hat p_\\perp^2$ are uniquely defined by the ordering variable of the sector-shower emission, i.e., that emission which minimises \\cref{eq:resVar}. Direct $2\\mapsto 4$ emissions are thus restricted to the unordered region of the double-emission phase space, denoted as $\\ensuremath{\\,\\text{d}}\\Phi_{+2}^>$ in \\cref{eq:nloSudakov} and defined as\n\\begin{equation}\n \\ensuremath{\\,\\text{d}}\\Phi_{+2}^> = \\sum\\limits_{j} \\Theta^>_{j\/IK} \\Theta^\\mathrm{sct}_{j\/IK} \\ensuremath{\\,\\text{d}} \\Phi_{+2}^j \\, .\n\\end{equation}\n\nFor $2\\to 4$ emissions off quark-antiquark and gluon-gluon antennae, we use the double-real antenna functions in \\cite{GehrmannDeRidder:2004tv,GehrmannDeRidder:2005aw,GehrmannDeRidder:2005cm}. We note that NLO quark-gluon antenna functions appear in the Standard Model at lowest order for three final-state particles and are hence not of interest for our test case of $e^+e^-\\to jj$. 
We wish to point out, however, that the NLO quark-gluon antenna functions in \\cite{GehrmannDeRidder:2005hi,GehrmannDeRidder:2005cm} contain spurious singularities which have to be removed before a shower implementation is possible.\n\nAs a validation, we show in \\cref{fig:orderingZDecays} the ratio of the four-jet to three-jet evolution variable for $e^+e^- \\to 4j$ at $\\sqrt{s} = 240~\\mathrm{GeV}$. To focus on the perturbative realm, the shower evolution is constrained to the region between $\\ensuremath{t_0} = s$ and $\\ensuremath{t_\\mathrm{c}} = (5~\\mathrm{GeV})^2$. The region $>0$ corresponds to the unordered part of phase space to which strongly-ordered showers cannot contribute. Due to the use of sector showers, there is a sharp cut-off at the boundary between the ordered and unordered region, as the sector criterion ensures that the last emission is always the softest and, therefore, no recoil effects can spoil the strong ordering of the shower. As expected, the inclusion of direct $2\\to 4$ branchings gives access to the unordered parts of phase space, a crucial element of our matching method.\n\n\\subsection{LO Matrix-Element Corrections}\nIn order for the shower expansion to match the fixed-order calculation, we need (iterated) $2\\mapsto 3$ tree-level MECs and (direct) $2\\mapsto4$ tree-level MECs. 
Both take a particularly simple form in the sector-antenna framework, as will be shown below.\n\nAt leading-colour, tree-level MECs to the ordered sector shower can be constructed as \\cite{LopezVillarejo:2011ap,Fischer:2017yja}\n\\begin{align*}\n w_{2\\mapsto 3,i}^{\\mathrm{LO},\\mathrm{LC}}(\\Phi_2,\\Phi_{+1})\n &= \\frac{{\\mathrm{R}}^\\mathrm{LC}_{i}(\\Phi_2,\\Phi_{+1})}{\\sum_j \\Theta^\\mathrm{sct}_{j\/IK} A^\\mathrm{sct}_{j\/IK}(p_i,p_j,p_k){\\mathrm{B}}(\\Phi_2)} \\, ,\\\\\n w_{3\\mapsto 4,i}^{\\mathrm{LO},\\mathrm{LC}}(\\Phi_3,\\Phi_{+1})\n &= \\frac{{\\mathrm{R\\kern-0.15em R}}^\\mathrm{LC}_{i}(\\Phi_3,\\Phi_{+1})}{\\sum_j \\Theta^\\mathrm{sct}_{j\/IK} A^\\mathrm{sct}_{j\/IK}(p_i,p_j,p_k){\\mathrm{R}}^\\mathrm{LC}_i(\\Phi_3)} \\, ,\n\\end{align*}\nwhere\n\\begin{align*}\n {\\mathrm{B}}(\\Phi_2) &= \\abs{{\\mathcal{M}}_2^{(0)}(p_1,p_2)}^2 \\, , \\\\\n {\\mathrm{R}}^\\mathrm{LC}_{i}(\\Phi_3) &= \\abs{{\\mathcal{M}}_3^{(0)}(\\sigma_i\\{p_1,p_2,p_3\\})}^2 \\, , \\\\\n {\\mathrm{R\\kern-0.15em R}}^\\mathrm{LC}_{i}(\\Phi_4) &= \\abs{{\\mathcal{M}}_4^{(0)}(\\sigma_i\\{p_1,p_2,p_3,p_4\\})}^2 \\, ,\n\\end{align*}\ndenote squared leading-colour colour-ordered amplitudes with the index $i$ denoting the respective permutation $\\sigma_i$ (the number of permutations depends on the process). 
The sector veto $\\Theta^\\mathrm{sct}_{j\/IK}$ ensures that only the most singular term contributes in the denominators, rendering the fraction exceptionally simple.\n\nDirect $2\\mapsto 4$ branchings can be corrected in an analogous way, replacing the sum over $2\\mapsto3$ antenna functions with a sum of $2\\mapsto4$ ones,\n\\begin{multline*}\n w_{2\\mapsto 4,i}^{\\mathrm{LO},\\mathrm{LC}}(\\Phi_2,\\Phi_{+2})\n \\\\\n = \\frac{{\\mathrm{R\\kern-0.15em R}}^\\mathrm{LC}_{i}(\\Phi_2,\\Phi_{+2})}{\\sum_{\\{j,k\\}} \\Theta^\\mathrm{sct}_{jk\/IL} A^\\mathrm{sct}_{jk\/IL}(p_i,p_j,p_k,p_\\ell){\\mathrm{B}}(\\Phi_2)} \\, ,\n\\end{multline*}\n\nThe full-colour matrix element can be recovered on average by multiplication with a full-colour to leading-colour-summed matrix-element weight,\n\\begin{align}\n w_{2\\mapsto 3,i}^{\\mathrm{LO}} &= w_{2\\mapsto 3,i}^{\\mathrm{LO},\\mathrm{LC}}\n \\times \\frac{{\\mathrm{R}}(\\Phi_2,\\Phi_{+1})}{\\sum_j {\\mathrm{R}}^\\mathrm{LC}_{j}(\\Phi_2,\\Phi_{+1})} \\, , \\\\\n w_{3\\mapsto 4,i}^{\\mathrm{LO}} &= w_{3\\mapsto 4,i}^{\\mathrm{LO},\\mathrm{LC}}\n \\times \\frac{{\\mathrm{R\\kern-0.15em R}}(\\Phi_3,\\Phi_{+1})}{\\sum_j {\\mathrm{R\\kern-0.15em R}}^\\mathrm{LC}_{j}(\\Phi_3,\\Phi_{+1})} \\, , \\\\\n w_{2\\mapsto 4,i}^{\\mathrm{LO}} &= w_{2\\mapsto 4,i}^{\\mathrm{LO},\\mathrm{LC}}\n \\times \\frac{{\\mathrm{R\\kern-0.15em R}}(\\Phi_2,\\Phi_{+2})}{\\sum_j {\\mathrm{R\\kern-0.15em R}}^\\mathrm{LC}_{j}(\\Phi_2,\\Phi_{+2})} \\, .\n\\end{align}\n\nFor gluon splittings, multiple histories contribute even in the sector shower, because all permutations of quark lines have to be taken into account. To ensure that the MEC factors remain finite for final states with multiple quark pairs, an additional quark-projection factor has to be included. 
Since we only deal with a maximum of two quark pairs, it is given by\n\\begin{equation}\n \\rho_j = \\frac{A^\\mathrm{sct}_{j_q\/g_IX_K}(\\bar q_i, q_j, X_k)}{\\sum_{j}A^\\mathrm{sct}_{j_q\/g_IX_K}(\\bar q_i, q_j, X_k)}\n\\end{equation}\nfor $2\\to 3$ branchings and\n\\begin{equation}\n \\rho_j = \\frac{A^\\mathrm{sct}_{j_qk_{\\bar q}\/X_IY_L}(X_i, q_j,\\bar q_k, Y_\\ell)}{\\sum_{j}A^\\mathrm{sct}_{j_qk_{\\bar q}\/X_IY_L}(X_i, q_j, \\bar q_k, Y_\\ell)}\n\\end{equation}\nfor $2\\mapsto 4$ branchings.\n\n\\subsection{NLO Matrix-Element Corrections}\nMaking the antenna subtraction terms explicit, the fixed-order correction to the NLO matrix-element correction \\cref{eq:NLOMEC2to3} reads\n\\begin{align}\n &\\tilde{w}^\\mathrm{FO}_{2\\mapsto3}(\\Phi_2,\\Phi_{+1}) = \\frac{{\\mathrm{R\\kern-0.15emV}}(\\Phi_2,\\Phi_{+1})}{{\\mathrm{R}}(\\Phi_2,\\Phi_{+1})} + \\frac{{\\mathrm{I}}^\\mathrm{NLO}(\\Phi_2,\\Phi_{+1})}{{\\mathrm{R}}(\\Phi_2,\\Phi_{+1})}\n \\label{eq:VMEC2to3Numerical} \\\\\n &\\, + \\int^{t}_{0}\\ensuremath{\\,\\text{d}}\\Phi_{+1}^\\prime\\, \\left[ \\frac{{\\mathrm{R\\kern-0.15em R}}(\\Phi_2,\\Phi_{+1},\\Phi_{+1}^\\prime)}{{\\mathrm{R}}(\\Phi_2,\\Phi_{+1})} - \\frac{{\\mathrm{S}}^\\mathrm{NLO}(\\Phi_2,\\Phi_{+1},\\Phi_{+1}^\\prime)}{{\\mathrm{R}}(\\Phi_2,\\Phi_{+1})} \\right] \\nonumber\\\\\n &\\, - \\Bigg(\\frac{{\\mathrm{V}}(\\Phi_2)}{{\\mathrm{B}}(\\Phi_2)} + \\frac{{\\mathrm{I}}^\\mathrm{NLO}(\\Phi_2)}{{\\mathrm{B}}(\\Phi_2)} \\nonumber \\\\\n &\\qquad + \\int^{\\ensuremath{t_0}}_{0}\\ensuremath{\\,\\text{d}}\\Phi_{+1}'\\, \\left[\\frac{{\\mathrm{R}}(\\Phi_2,\\Phi_{+1}^\\prime)}{{\\mathrm{B}}(\\Phi_2)} - \\frac{{\\mathrm{S}}^\\mathrm{NLO}(\\Phi_2,\\Phi_{+1}^\\prime)}{{\\mathrm{B}}(\\Phi_2)} \\right]\\Bigg)\\, , \\nonumber\n\\end{align}\nwith the differential NLO antenna subtraction terms ${\\mathrm{S}}^\\mathrm{NLO}(\\Phi_2,\\Phi_{+1}^\\prime)$, ${\\mathrm{S}}^\\mathrm{NLO}(\\Phi_2,\\Phi_{+1},\\Phi_{+1}^\\prime)$ and their integrated counterparts 
${\\mathrm{I}}^\\mathrm{NLO}_{{\\mathrm{S}}}(\\Phi_2)$, ${\\mathrm{I}}^\\mathrm{NLO}_{{\\mathrm{S}}}(\\Phi_2,\\Phi_{+1})$ cf.~\\cref{eq:expvalNNLO,eq:subtTermsNLO1jet}.\nBased on the argument of the last subsection, we construct the full-colour NLO matrix-element correction as\n\\begin{align}\n w^\\mathrm{NLO}_{2\\mapsto 3,i}(\\Phi_2,\\Phi_{+1}) &= w^{\\mathrm{LO},\\mathrm{LC}}_{2\\mapsto3,i}(\\Phi_2,\\Phi_{+1}) \\frac{{\\mathrm{R}}(\\Phi_2,\\Phi_{+1})}{\\sum_j {\\mathrm{R}}^\\mathrm{LC}_{j}(\\Phi_2,\\Phi_{+1})} \\nonumber \\\\ \n &\\quad \\times (1 + \\tilde{w}^\\text{FO}_{2\\mapsto 3}(\\Phi_2,\\Phi_{+1}) + \\tilde{w}^\\text{PS}_{2\\mapsto 3}(\\Phi_2)) \\, .\n\\end{align}\n\nThe integration over the radiation phase spaces denoted $\\Phi_{+1}^\\prime$ in \\cref{eq:VMEC2to3Numerical} is done numerically, utilising antenna kinematics to map $3$-parton configurations to $4$-parton configurations (similarly for $2$-parton configurations). This phase-space generation approach will be described in detail in the next subsection in the context of the NNLO Born weight.\nNote that the radiation phase space $\\Phi_{+1}$ in \\cref{eq:VMEC2to3Numerical} is generated by the shower.\n\n\\subsection{NNLO Born Weight}\nThe Born-local NNLO weight can be calculated numerically using a ``forward-branching'' phase-space generation approach \\cite{Frixione:2007vw,Hoche:2010pf,Alioli:2010xd,Giele:2011tm,Figy:2018imt}, which has previously been applied to unweighted NLO event generation, using Catani-Seymour dipole subtraction \\cite{Campbell:2012cz}. The application to NNLO corrections to $e^+e^- \\to 2j$ using antenna subtraction has been outlined in \\cite{Weinzierl:2006ij}.\n\nGiven a Born phase space point, the real-radiation phase space is generated by uniformly sampling the shower variables $(t,\\zeta,\\phi)$ for each antenna, which represent integration channels in this context. 
As for the shower evolution, every phase space point is restricted to the sector in which the emission(s) correspond to the most-singular clusterings. \nThe momenta of the Born$+1j$ point are constructed according to the same kinematic map as the shower uses, summarised in sec.~2.3 in \\cite{Brooks:2020bhi}. Since antenna functions are azimuthally averaged, they do not cancel spin-correlations in collinear gluon branchings locally. To obtain a point-wise pole cancellation, the subtracted real correction ${\\mathrm{R}}-{\\mathrm{S}}$ can be evaluated on two correlated phase space points,\n\\begin{equation*}\n \\left\\{ \\left(t,\\zeta,\\phi\\right), \\left(t,\\zeta,\\phi+\\uppi\/2\\right)\\right\\} \\,\n\\end{equation*}\nwhich cancels the collinear spin correlation exactly, as it is proportional to $\\cos(2\\phi)$.\nTo obtain double-real radiation phase space points for the subtracted double-real correction ${\\mathrm{R\\kern-0.15em R}}-{\\mathrm{S}}$, this procedure can be iterated, yielding four angular-correlated phase space points which cancel spin correlations in double single-collinear and triple-collinear limits.\nDue to the bijective nature of the sector-antenna framework, each $3$- or $4$-particle phase-space point obtained in this way can be mapped back uniquely to its $2$-particle origin, making the NNLO weight exactly Born-local. For $e^+e^- \\to 2j$ this procedure is identical to the one in \\cite{Weinzierl:2006ij}.\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.4\\textwidth]{spiketest-nnlo-tc.pdf}\n \\includegraphics[width=0.4\\textwidth]{spiketest-nnlo-ds.pdf}\\\\\n \\includegraphics[width=0.4\\textwidth]{nnlotest-134.pdf}\n \\includegraphics[width=0.4\\textwidth]{nnlotest-243.pdf}\n \\caption{Test of the convergence of the double-real subtraction term ${\\mathrm{S}}(\\Phi_{2},\\Phi_{+2},O)$ in \\cref{eq:SNNLORR} in $e^+e^-\\to q g g \\bar q$. 
\\textsl{Top row}: progression of weight distributions from $x=10^{-2}$ to $x=10^{-4}$ in the triple-collinear limit ($s_{134}\/s_{1234} < x$) and double-soft limit ($s_{134}s_{234}\/s_{1234}^2 < x$). \\textsl{Bottom row}: trajectories $x\\cdot s_{134}$, $x\\cdot s_{234}$, $x\\to 0$ approaching the two triple collinear limits. Phase space points are not azimuthally averaged.}\n \\label{fig:subtractionNNLO}\n\\end{figure*}\n\nWe have implemented the NNLO antenna subtraction terms for processes with two massless final-state jets, cf.~e.g.~\\cite{GehrmannDeRidder:2004tv}, in V\\protect\\scalebox{0.8}{INCIA}\\xspace in a semi-automated fashion. \nAs a validation, we illustrate the convergence of the double-real radiation subtraction term \\cref{eq:SNNLORR} in the triple-collinear and double-soft limits for the process $e^+e^-\\to q g g \\bar q$ in \\cref{fig:subtractionNNLO}. Phase space points are sampled according to the kinematic map in \\cref{subsec:kinematics} and we do not make use of the azimuthal averaging alluded to above.\n\nIt should be noted that a numerical calculation of the Born-local NNLO weight is not necessary for colour-singlet decays, as the inclusive $K$-factors are well known from analytical calculations, cf.~e.g.~\\cite{Chetyrkin:1996ela,GehrmannDeRidder:2004tv} for $Z\\to q\\bar q$ (with massless quarks), \\cite{Gorishnii:1990zu,Chetyrkin:1996sr,Baikov:2005rw,DelDuca:2015zqa} for $H \\to b\\bar b$ (with massless $b$s), and \\cite{Chetyrkin:1997iv,GehrmannDeRidder:2005aw} for $H\\to gg$ (in the Higgs effective theory).\n\n\\section{Conclusions and Outlook} \\label{sec:conclusions}\nWe have presented a technique to match final-state parton showers fully-differentially to next-to-next-to-leading order calculations in processes with two final-state jets. 
To our knowledge, this is the first method of its kind.\n\nWe have outlined a full-fledged numerical implementation in the V\\protect\\scalebox{0.8}{INCIA}\\xspace antenna shower in the P\\protect\\scalebox{0.8}{YTHIA}\\xspace 8.3 event generator. Phenomenological studies employing our strategy will be presented in separate works.\n\nWe want to close by noting that, while we have focused here on the simplest case of two massless final-state jets, the use of the NNLO antenna subtraction formalism facilitates its adaptation to more complicated processes such as $e^+ e^- \\to t\\bar t$ or $e^+e^-\\to 3j$. Considering the latter, spurious singularities in the quark-gluon NNLO antenna subtraction terms need to be removed before exponentiation in the shower.\nFor future work, an extension of our method to processes with coloured initial states can be envisioned, given the applicability of NNLO antenna subtraction to hadronic collisions.\n\n\\section*{Acknowledgements} \nWe thank Aude Gehrmann-de Ridder and Thomas Gehrmann for providing us with FORM files of their antenna functions.\nWe thank Philip Ilten for the development of a general matrix-element generator interface for P\\protect\\scalebox{0.8}{YTHIA}\\xspace 8.3, which allowed us to interface C\\protect\\scalebox{0.8}{OMIX}\\xspace in this work.\nCTP is supported by the Monash Graduate Scholarship, the Monash International Postgraduate Research Scholarship, and the J.L.~William~Scholarship.\nHTL is supported by the U.S. Department of Energy under Contract No. DE-AC02-06CH11357 and the National Science Foundation under Grant No. NSF-1740142.\nThis research was supported by Fermi National Accelerator Laboratory (Fermilab), a U.S. Department of Energy, Office of Science, HEP User Facility. Fermilab is managed by Fermi Research Alliance, LLC (FRA), acting under Contract No. DE-AC02-07CH11359. 
\nThis work was further partly funded by the Australian Research Council via Discovery Project DP170100708 \u2014 \"Emergent Phenomena in Quantum Chromodynamics\".\nThis work was also supported in part by the European Union's Horizon 2020 research and innovation programme under the Marie Sk\\l{}odowska-Curie grant agreement No 722104 \u2013 MCnetITN3.\n\n\\bibliographystyle{elsarticle-num}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nNext generation 5G cellular systems will encompass frequencies from around 500 MHz all the way to around 100 GHz. For the development of new 5G systems to operate in bands above 6 GHz, there is a need for accurate radio propagation models for these bands which are not fully modeled by existing channel models below 6 GHz, as previous generations were designed and evaluated for operation at frequencies only as high as 6 GHz. One important example is the recently developed 3D-Indoor Hotspot (InH) channel model~\\cite{3GPP36873}. This paper is a summary of key results provided in a much more detailed white paper by the authors found at the link in~\\cite{5GSIG}, in addition to a 3GPP-style outdoor contribution in~\\cite{5GSIG_VTC16}. The 3GPP 3D channel model provides additional flexibility for the elevation dimension, thereby allowing modeling two dimensional antenna systems, such as those that are expected in next generation system deployments. It is important for future system design to develop a new channel model that will be validated for operation at higher frequencies (e.g., up to 100 GHz) and that will allow accurate performance evaluation of possible future technical specifications in indoor environments. Furthermore, the new models should be consistent with the models below 6 GHz. 
In some cases, the requirements may call for deviations from the modeling parameters or methodology of the existing models, but these deviations should be kept to a bare minimum and only introduced when necessary for supporting the 5G simulation use cases.\n\nThere are many existing and ongoing campaign efforts worldwide targeting 5G channel measurements and modeling. They include METIS2020~\\cite{METIS2015}, COST2100\/COST~\\cite{COST2100}, IC1004~\\cite{COSTic1004}, ETSI mmWave~\\cite{ETSI2015}, NIST 5G mmWave Channel Model Alliance~\\cite{NIST}, MiWEBA~\\cite{MiWEBA2014}, mmMagic~\\cite{mmMagic}, and NYU WIRELESS~\\cite{Rap13a,Rap15a,Rap15b,Mac15a}. METIS2020, for instance, has focused on 5G technologies and has contributed extensive studies in terms of channel modelling. Their target requirements include a wide range of frequency bands (up to 86 GHz), very large bandwidths (hundreds of MHz), fully three-dimensional and accurate polarization modelling, spherical wave modelling, and high spatial resolution. The METIS channel models consist of a map-based model, a stochastic model, and a hybrid model, which together meet the requirements of flexibility and scalability.\n\nThe COST2100 channel model is a geometry-based stochastic channel model (GSCM) that can reproduce the stochastic properties of multiple-input\/multiple-output (MIMO) channels over time, frequency, and space. The 5G mmWave Channel Model Alliance, on the other hand, is newly established and will provide guidelines for measurement calibration and methodology, modeling methodology, and parameterization in various environments, as well as a database of channel measurement campaigns.
NYU WIRELESS has conducted and published extensive urban propagation measurements at 28, 38, 60, and 73 GHz for both outdoor and indoor channels, and has created large-scale and small-scale channel models and concepts of \\emph{time cluster spatial lobes} (TCSL) to model multiple multipath time clusters that are seen to arrive in particular directions~\\cite{Rap15a,Rap13a,Rap15b,Samimi15a,Samimi15b,Samimi15c,Nie13a}.\n\nThis paper presents a brief overview of the indoor channel properties for bands up to 100 GHz based on extensive measurements and results across a multitude of bands. In addition we present a preliminary set of channel parameters suitable for indoor 5G simulations that are capable of capturing the main properties and trends.\n\n\\section{Requirements For New Channel Model}\n\nThe requirements of the new channel model that will support 5G operation across frequency bands up to 100 GHz are outlined below:\n\\begin{enumerate}\n\\item The new channel model should preferably be based on the existing 3GPP 3D channel model~\\cite{3GPP36873} but with extensions to cater for additional 5G modeling requirements and scenarios, for example:\n\t\\begin{enumerate}\n\t\t\\item Antenna arrays, especially at higher-frequency millimeter-wave bands, will very likely be 2D and dual-polarized both at the access point (AP) and at the user equipment (UE) and will hence need properly-modeled azimuth and elevation angles of departure and arrival of multipath components.\n\t\t\\item Individual antenna elements will have antenna radiation patterns in azimuth and elevation and may require separate modeling for directional performance gains. Furthermore, polarization properties of the multipath components need to be accurately accounted for in the model. \n\t\\end{enumerate}\n\\item The new channel model must accommodate a wide frequency range up to 100 GHz. 
The joint propagation characteristics over different frequency bands will need to be evaluated for multi-band operation, e.g., low-band and high-band carrier aggregation configurations. \n\\item The new channel model must support large channel bandwidths (up to 2 GHz), where:\n\t\\begin{enumerate}\n\t\t\\item The individual channel bandwidths may be in the range of 100 MHz to 2 GHz and may support carrier aggregation.\n\t\t\\item The operating channels may be spread across an assigned range of several GHz.\n\t\\end{enumerate}\n\\item The new channel model must support a range of large antenna arrays, in particular:\n\t\\begin{enumerate}\n\t\t\\item Some large antenna arrays will have very high directivity with angular resolution of the channel down to around 1.0 degree.\n\t\t\\item 5G will consist of different array types, e.g., linear, planar, cylindrical and spherical arrays, with arbitrary polarization.\n\t\t\\item The array manifold vector can change significantly when the bandwidth is large relative to the carrier frequency. As such, the wideband array manifold assumption is not valid and new modeling techniques may be required. 
It may be preferable, for example, to model departure\/arrival angles with delays across the array and follow a spherical wave assumption instead of the usual plane wave assumption.\n\t\\end{enumerate}\n\\item The new channel model must accommodate mobility, in particular (for outdoor models, although mentioned here for consistency):\n\t\\begin{enumerate}\n\t\t\\item The channel model structure should be suitable for mobility up to 350 km\/hr.\n\t\t\\item The channel model structure should be suitable for small-scale mobility and rotation of both ends of the link in order to support scenarios such as device to device (D2D) or vehicle to vehicle (V2V).\n\t\\end{enumerate}\n\\item The new channel model must ensure spatial\/temporal\/frequency consistency, in particular:\n\t\\begin{enumerate}\n\t\t\\item The model should provide spatial\/temporal\/frequency consistencies which may be characterized, for example, via spatial consistence, inter-site correlation, and correlation among frequency bands. \n\t\t\\item The model should also ensure that the channel states, such as line-of-sight (LOS)\/non-LOS (NLOS) for outdoor\/indoor locations, the second order statistics of the channel, and the channel realizations change smoothly as a function of time, antenna position, and\/or frequency in all propagation scenarios. \n\t\t\\item The spatial\/temporal\/frequency consistencies should be supported for simulations where the channel consistency impacts the results (e.g. massive MIMO, mobility and beam tracking, etc.). Such support could possibly be optional for simpler studies.\n\t\\end{enumerate}\n\\item The new channel model must be of practical computational complexity, in particular:\n\t\\begin{enumerate}\n\t\t\\item The model should be suitable for implementation in single-link simulation tools and in multi-cell, multi-link radio network simulation tools. Computational complexity and memory requirements should not be excessive. 
The 3GPP 3D channel model~\\cite{3GPP36873} is seen, for instance, as a sufficiently accurate model for its purposes, with an acceptable level of complexity. Additional modeling details may be included, at reasonable complexity, to support the greater channel bandwidths, the finer spatial and temporal resolutions, and the spatial\/temporal\/frequency consistency required for millimeter-wave modeling.\n\t\t\\item The introduction of a new modeling methodology (e.g., a map-based model) may significantly complicate the channel generation mechanism and thus substantially increase the implementation complexity of the system-level simulator. Furthermore, if one applies a completely different modeling methodology for frequencies above 6 GHz, it would be difficult to have meaningful comparative system evaluations for bands up to 100 GHz.\n\t\\end{enumerate}\n\\end{enumerate}\n\n\\section{Indoor Deployment Scenarios - Indoor (InH): Open and Closed Office, Shopping Mall}\nThe indoor scenario includes open and closed offices, corridors within offices, and shopping malls as examples. The typical office environment has open cubicle areas, walled offices, open areas, corridors, etc., where the partition walls are composed of a variety of materials like sheetrock, poured concrete, glass, cinder block, etc. For the office environment, APs are generally mounted at a height of 2-3 m either on the ceilings or walls, with UEs at heights between 1.2 and 1.5 m. Shopping malls are generally 2-5 stories high and often include an open area (``atrium\"). In the shopping-mall environment, APs are generally mounted at a height of approximately 3 m on the walls or ceilings of the corridors and shops, with UEs at heights between 1.2 and 1.5 m. The density of APs may range from one per floor to one per room, depending on the frequency band and output power. Typical indoor office and shopping mall scenarios are shown in Figures~\\ref{fig:InHsc} and~\\ref{fig:SMsc}, respectively. 
\n\\begin{figure}[b!]\n\t\\centering\n\t\\includegraphics[width = 3.7in]{InHsc.eps}\n\t\\caption{Typical Indoor Office.}\n\t\\label{fig:InHsc}\n\\end{figure}\n\\begin{figure}[b!]\n\t\\centering\n\t\\includegraphics[width = 3.7in]{SMsc.eps}\n\t\\caption{Indoor Shopping Malls.}\n\t\\label{fig:SMsc}\n\\end{figure}\n\n\\section{Characteristics of the InH Channel from 6 GHz to 100 GHz}\nMeasurements over a wide range of frequencies have been performed by the co-authors of this paper. In the following sections we outline the main observations per scenario with some comparisons to the existing 3GPP models for below 6 GHz (e.g.~\\cite{3GPP36873}). \n\nIn LOS conditions, multiple reflections from walls, floor, and ceiling give rise to waveguiding. Measurements in both office and shopping mall scenarios show that path loss exponents, based on a 1 m free space reference distance, are typically below 2 in LOS conditions, leading to more favorable path loss than predicted by Friis' free space path loss formula. The strength of the waveguiding effect is variable and the path loss exponent appears to increase very slightly with increasing frequency, possibly due to the relation between the wavelength and surface roughness. \n\nMeasurements of the small scale channel properties such as angular spread and delay spread have shown remarkable similarities between channels over a very wide frequency range. 
It appears as if the main multipath components are present at all frequencies, though with some small variations in amplitude.\n\nRecent work shows that polarization discrimination ranges between 15 and 25 dB for indoor millimeter wave channels~\\cite{Karttunen15a}, with greater polarization discrimination at 73 GHz than at 28 GHz~\\cite{Mac15a}.\n\n\\section{Penetration Inside Buildings}\nMeasurements have been reported for penetration loss for various materials at 2.5, 28, and 60 GHz for indoor scenarios~\\cite{Rap15a,Rap13a,Anderson02a,Zhao13a}, although not all materials were measured at the same frequencies. For easy comparisons, walls and drywalls were lumped together into a common dataset and different types of clear glass were lumped together into a common dataset, with the normalized penetration losses shown in Figure~\\ref{fig:NYUpenetration}. It was observed that clear glass has widely varying attenuation (20 dB\/cm at 2.5 GHz, 3.5 dB\/cm at 28 GHz, and 11.3 dB\/cm at 60 GHz). For mesh glass, penetration loss was observed to increase as a function of frequency (24.1 dB\/cm at 2.5 GHz and 31.9 dB\/cm at 60 GHz), and a similar trend was observed with whiteboard penetration loss increasing as frequency increased. At 28 GHz, indoor tinted glass resulted in a penetration loss of 24.5 dB\/cm. Walls showed very little attenuation per cm of distance at 28 GHz (less than 1 dB\/cm)~\\cite{5GSIG}. 
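Since the measurements above are quoted as attenuation rates in dB\/cm, the penetration loss for a pane of given thickness follows by simple scaling (loss in dB = rate in dB\/cm x thickness in cm). A minimal sketch, assuming loss scales linearly with thickness and using the measured rates quoted in the text; the 1.2 cm pane is an arbitrary illustrative example:

```python
# Material penetration loss, assuming linear scaling with thickness
# at a fixed frequency. Rates (dB/cm) are the measured values quoted
# in the text; keys are (material, frequency in GHz).
RATES_DB_PER_CM = {
    ("clear glass", 2.5): 20.0,
    ("clear glass", 28): 3.5,
    ("clear glass", 60): 11.3,
    ("mesh glass", 2.5): 24.1,
    ("mesh glass", 60): 31.9,
    ("tinted glass", 28): 24.5,
}

def penetration_loss_db(material, freq_ghz, thickness_cm):
    """Total penetration loss in dB for a slab of the given thickness."""
    return RATES_DB_PER_CM[(material, freq_ghz)] * thickness_cm

# A 1.2 cm clear-glass pane at 28 GHz: 3.5 dB/cm * 1.2 cm = 4.2 dB.
assert abs(penetration_loss_db("clear glass", 28, 1.2) - 4.2) < 1e-9
# Mesh-glass loss increases with frequency, as noted in the text:
assert RATES_DB_PER_CM[("mesh glass", 60)] > RATES_DB_PER_CM[("mesh glass", 2.5)]
```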
Furthermore, a simple parabolic model as a function of frequency for low-loss and high-loss building penetration is given in~\\cite{5GSIG_VTC16}.\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width = 3.7in]{NYUpenetration.eps}\n\t\\caption{2.5 GHz, 28 GHz, and 60 GHz normalized material penetration losses from indoor measurements, where common types of glass and walls were each lumped into common datasets~\\cite{Rap15b,Anderson02a,Zhao13a}.}\n\t\\label{fig:NYUpenetration}\n\\end{figure}\n\n\\section{Path loss, Shadow Fading, LOS, and Blockage Modeling}\n\\subsection{LOS Probability}\nThe definition of LOS used in this paper is discussed in this sub-section, together with other LOS models. The LOS state is determined by a map-based approach, i.e., by considering the transmitter (AP) and receiver (UE) positions and whether any buildings or walls block the direct path between the AP and the UE. The impact of objects not represented in the map, such as chairs, desks, office furniture, etc., is modelled separately using shadowing\/blocking terms. An attractive feature of this LOS definition is that it is frequency independent, as only walls are considered in the definition. \n\nSince the 3GPP 3D model~\\cite{3GPP36873} does not include an indoor scenario for LOS probability, and the indoor hotspot scenario in, e.g., the IMT-Advanced model~\\cite{ITU-M.2135-1} differs from the office scenario considered in this paper, an investigation of the LOS probability for the indoor office scenario has been conducted based on ray-tracing simulations. Different styles of indoor office environments were investigated, including an open-plan office with a cubicle area, a closed-plan office with corridors and meeting rooms, and a hybrid-plan office with both open and closed areas. 
It has been verified that, of the three models evaluated, the following model best fits the propagation in the indoor office environment:\n\\begin{equation}\\label{eq1}\nP_{LOS} = \\begin{cases}\n1, & d\\leq1.2\\text{ m}\\\\\n\\exp(-(d-1.2)\/4.7), & 1.2\\text{ m}<d<6.5\\text{ m}\\\\\n0.32\\exp(-(d-6.5)\/32.6), & d\\geq6.5\\text{ m}\n\\end{cases}\n\\end{equation}\nThe comparison against the ITU and WINNER II LOS probability models, each also refitted to the ray-tracing data, is summarised in Table~\\ref{tbl:LOScomp}, which lists the mean squared error (MSE) between each model and the data.\n\\begin{table*}\n\t\\caption{Comparison of the LOS probability models for the InH environment.}\\label{tbl:LOScomp}\n\t\\centering\n\t\\begin{tabular}{|l|l|l|c|}\n\t\t\\hline\n\t\t\\scriptsize Model & \\scriptsize Original model & \\scriptsize Updated model fit & \\scriptsize MSE \\\\ \\hline\n\t\t\\scriptsize ITU model & \\parbox{6.5cm}{\\begin{equation*}\\label{eq4} P_{LOS} = \\begin{cases} 1, & d\\leq18\\text{ m}\\\\ \\exp(-(d-18)\/27), & 18\\text{ m}<d<37\\text{ m}\\\\ 0.5, & d\\geq37\\text{ m}\\end{cases}\\end{equation*}} & \\parbox{6.5cm}{\\begin{equation*}\\label{eq5} P_{LOS} = \\begin{cases} 1, & d\\leq1\\text{ m}\\\\ \\exp(-(d-1)\/9.4), & d>1\\text{ m}\\end{cases}\\end{equation*}} & 0.0572 \\\\ \\hline\n\t\t\\scriptsize WINNER II model (A1) & \\parbox{6.7cm}{\\begin{equation*}\\tiny\\label{eq6} P_{LOS} = \\begin{cases} 1, & d\\leq2.5\\text{ m}\\\\ 1-0.9(1-(1.24-0.61\\log_{10}(d))^3)^{1\/3}, & d>2.5\\text{ m}\\end{cases}\\end{equation*}} & \\parbox{6.7cm}{\\begin{equation*}\\tiny\\label{eq7} P_{LOS} = \\begin{cases} 1, & d\\leq2.6\\text{ m}\\\\ 1-0.9(1-(1.16-0.4\\log_{10}(d))^3)^{1\/3}, & d>2.6\\text{ m}\\end{cases}\\end{equation*}} & 0.0473 \\\\ \\hline\n\t\t\\scriptsize New Model & N\/A & \\parbox{7.5cm}{\\begin{equation*}\\label{eq8} P_{LOS} = \\begin{cases} 1, & d\\leq1.2\\text{ m}\\\\ \\exp(-(d-1.2)\/4.7), & 1.2\\text{ m}<d<6.5\\text{ m}\\\\ 0.32\\exp(-(d-6.5)\/32.6), & d\\geq6.5\\text{ m}\\end{cases}\\end{equation*}} & -- \\\\ \\hline\n\t\\end{tabular}\n\\end{table*}\n\n\\subsection{Path Loss Models}\nThree multi-frequency path loss (PL) models are considered here: the close-in free space reference distance (CI) model, the CI model with a frequency-weighted path loss exponent (CIF), and the alpha-beta-gamma (ABG) model. In addition, dual-slope extensions of the ABG and CIF models, given in~(\\ref{eq11}) and~(\\ref{eq12}), are used for the NLOS condition.\n\\begin{figure*}\n\t\\begin{equation}\\label{eq11}\n\t\\resizebox{0.94\\hsize}{!}{$%\n\t\t\\mathrm{PL}_{Dual}^{ABG}(f,d)[\\mathrm{dB}] = \\begin{cases} 10\\alpha_1\\log_{10}\\left(\\frac{d}{\\text{1 m}}\\right)+\\beta_1+10\\gamma\\log_{10}\\left(\\frac{f}{\\text{1 GHz}}\\right), & 1\\text{ m}\\leq d\\leq d_{BP}\\\\ 10\\alpha_1\\log_{10}\\left(\\frac{d_{BP}}{\\text{1 m}}\\right)+\\beta_1+10\\gamma\\log_{10}\\left(\\frac{f}{\\text{1 GHz}}\\right)+10\\alpha_2\\log_{10}\\left(\\frac{d}{d_{BP}}\\right), & d>d_{BP}\n\t\\end{cases}$}\n\t\\end{equation}\n\\end{figure*}\n\\begin{figure*}\n\t\\begin{equation}\\label{eq12}\n\t\\resizebox{0.94\\hsize}{!}{$%\n\t\t\\mathrm{PL}_{Dual}^{CIF}(f,d)[\\mathrm{dB}] = \\begin{cases} \\textrm{FSPL}(f,\\text{1 m})+10n_1\\left(1+b_1\\left(\\frac{f-f_0}{f_0}\\right)\\right)\\log_{10}\\left(\\frac{d}{\\text{1 m}}\\right), & 1\\text{ m}\\leq d\\leq d_{BP}\\\\ \\textrm{FSPL}(f,\\text{1 m})+10n_1\\left(1+b_1\\left(\\frac{f-f_0}{f_0}\\right)\\right)\\log_{10}\\left(\\frac{d_{BP}}{\\text{1 m}}\\right)+10n_2\\left(1+b_2\\left(\\frac{f-f_0}{f_0}\\right)\\right)\\log_{10}\\left(\\frac{d}{d_{BP}}\\right), & d>d_{BP}\n\t\t\\end{cases}$}\n\t\\end{equation}\n\t\\hrulefill\n\n\t\\vspace*{4pt}\n\\end{figure*}\n\nIn the CI PL model, only a single parameter, the path loss exponent (PLE), needs to be determined through optimization to minimize the SF standard deviation over the measured PL data set~\\cite{Rap15b,Sun15a,Sun16a}. 
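The piecewise indoor-office LOS probability model is straightforward to evaluate. The sketch below implements it in its complete three-branch form (breakpoints at 1.2 m and 6.5 m, decay constants 4.7 m and 32.6 m, and a 0.32 factor that joins the two exponential branches almost continuously at 6.5 m; this is the same functional form later standardized for InH-Office in 3GPP TR 38.901):

```python
import math

def p_los_inh_office(d):
    """LOS probability vs. distance d in metres for the indoor-office model."""
    if d <= 1.2:
        return 1.0
    elif d < 6.5:
        return math.exp(-(d - 1.2) / 4.7)
    else:
        return 0.32 * math.exp(-(d - 6.5) / 32.6)

# LOS is certain at very short range ...
assert p_los_inh_office(1.0) == 1.0
# ... decays monotonically with distance ...
samples = [p_los_inh_office(d) for d in (2.0, 5.0, 10.0, 30.0)]
assert all(a > b for a, b in zip(samples, samples[1:]))
# ... and the branches nearly agree at d = 6.5 m, since
# exp(-(6.5 - 1.2)/4.7) ~= 0.324 ~= 0.32.
assert abs(math.exp(-5.3 / 4.7) - 0.32) < 0.01
```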
In the CI PL model there is an anchor point that ties path loss to the FSPL at 1 m, which captures the frequency dependency of the path loss and establishes a uniform standard to which all measurements and model parameters may be referred. In the CIF model there are two optimization parameters ($n$ and $b$), and since it is an extension of the CI model, it also uses a 1 m free-space close-in reference distance path loss anchor. In the ABG PL model there are three parameters which need to be optimized to minimize the SF standard deviation over the data set~\\cite{Mac15a,Sun16a}. Closed-form expressions for optimization of the model parameters for the CI, CIF, and ABG path loss models are given in~\\cite{Mac15a}, where it was shown that indoor channels experience an increase in the PLE value as the frequency increases, whereas the PLE is not very frequency dependent in outdoor UMa or UMi scenarios~\\cite{Mac15a,Rap15b,Sun15a,Sun16a,Thomas16a}. The CI, CIF, and ABG models, together with cross-polarization forms and closed-form optimization expressions, are given for indoor channels in~\\cite{Mac15a}.\n\nAnother important issue related to path loss is shadow fading. For InH, the distance dependency and frequency dependency were investigated for both the indoor office and the shopping mall. For the LOS propagation condition, the frequency and distance dependency is weak. 
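Once the parameters are fixed, evaluating these models requires no optimization at all. A minimal sketch: the CI anchor FSPL(f, 1 m) = 20 log10(4*pi*f/c) dB is the standard free-space value; the parameters n = 1.73 (InH office LOS, CI) and alpha = 3.83, beta = 17.30, gamma = 2.49 (InH office NLOS, ABG) are those reported in the path loss table, and shadow fading is omitted:

```python
import math

C = 299792458.0  # speed of light, m/s

def fspl_1m_db(f_hz):
    """Free-space path loss at the 1 m CI anchor distance, in dB."""
    return 20 * math.log10(4 * math.pi * f_hz / C)

def pl_ci_db(f_hz, d_m, n):
    """Single-slope CI model: FSPL(f, 1 m) + 10 n log10(d / 1 m)."""
    return fspl_1m_db(f_hz) + 10 * n * math.log10(d_m)

def pl_abg_db(f_ghz, d_m, alpha, beta, gamma):
    """ABG model: 10 alpha log10(d / 1 m) + beta + 10 gamma log10(f / 1 GHz)."""
    return 10 * alpha * math.log10(d_m) + beta + 10 * gamma * math.log10(f_ghz)

f = 28e9
# The CI model is anchored to free space at d = 1 m:
assert abs(pl_ci_db(f, 1.0, 1.73) - fspl_1m_db(f)) < 1e-9
# With n = 1.73 < 2, the waveguiding LOS channel loses less power
# with distance than free space (n = 2):
assert pl_ci_db(f, 50.0, 1.73) < pl_ci_db(f, 50.0, 2.0)
# NLOS (ABG) exceeds LOS (CI) at the same range:
assert pl_abg_db(28.0, 50.0, 3.83, 17.30, 2.49) > pl_ci_db(f, 50.0, 1.73)
```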
But for the NLOS propagation condition, the frequency and distance dependency is more apparent, as indicated in Table 7 of~\\cite{5GSIG}.\n\n\\begin{table}\n\t\\caption{InH and Shopping Mall Path Loss Models for LOS and NLOS.}\\label{tbl:InHPL}\n\t\\centering\n\t\\renewcommand{\\arraystretch}{1.6}\n\t\\begin{center}\n\t\\scalebox{0.82}{\n\t\t\\fontsize{8}{8}\\selectfont\n\t\t\\begin{tabular}{|p{3cm}|p{3cm}|p{3cm}|}\n\t\t\t\\hline\n\t\t\tScenario & CI\/CIF Model Parameters & ABG Model Parameters \\\\ \\hline \\hline\n\t\t\tInH-Indoor-Office-LOS & $n$=1.73, $\\sigma_{\\mathrm{SF}}$=3.02 dB & N\/A \\\\ \\hline\n\t\t\tInH-Indoor-Office-NLOS single slope (FFS) & $n$=3.19, $b$=0.06, $f_0$=24.2 GHz, $\\sigma_{\\mathrm{SF}}$=8.29 dB & $\\alpha$=3.83, $\\beta$=17.30, $\\gamma$=2.49, $\\sigma_{\\mathrm{SF}}$=8.03 dB \\\\ \\hline\n\t\t\tInH-Indoor-Office-NLOS dual slope & $n_1$=2.51, $b_1$=0.12, $f_0$=24.1 GHz, $n_2$=4.25, $b_2$=0.04, $d_{BP}$=7.8 m, $\\sigma_{\\mathrm{SF}}$=7.65 dB & $\\alpha_1$=1.7, $\\beta_1$=33.0, $\\gamma$=2.49, $d_{BP}$=6.90 m, $\\alpha_2$=4.17, $\\sigma_{\\mathrm{SF}}$=7.78 dB \\\\ \\hline\n\t\t\tInH-Shopping Malls-LOS & $n$=1.73, $\\sigma_{\\mathrm{SF}}$=2.01 dB & N\/A \\\\ \\hline\n\t\t\tInH-Shopping Malls-NLOS single slope (FFS) & $n$=2.59, $b$=0.01, $f_0$=39.5 GHz, $\\sigma_{\\mathrm{SF}}$=7.40 dB & $\\alpha$=3.21, $\\beta$=18.09, $\\gamma$=2.24, $\\sigma_{\\mathrm{SF}}$=6.97 dB \\\\ \\hline\n\t\t\tInH-Shopping Malls-NLOS dual slope & $n_1$=2.43, $b_1$=0.01, $f_0$=39.5 GHz, $n_2$=8.36, $b_2$=0.39, $d_{BP}$=110 m, $\\sigma_{\\mathrm{SF}}$=6.26 dB & $\\alpha_1$=2.9, $\\beta_1$=22.17, $\\gamma$=2.24, $d_{BP}$=147.0 m, $\\alpha_2$=11.47, $\\sigma_{\\mathrm{SF}}$=6.36 dB \\\\ \\hline\n\t\t\\end{tabular}}\n\t\\end{center}\n\\end{table}\n\n\\section{Fast Fading Modeling}\nFor InH scenarios, an investigation of fast fading modelling has been conducted based on both measurement and ray-tracing. 
Both indoor office and shopping mall environments have been investigated at frequencies including 2.9 GHz, 3.5 GHz, 6 GHz, 14 GHz, 15 GHz, 20 GHz, 28 GHz, 29 GHz, 60 GHz, and 73 GHz. Some preliminary analyses of large-scale channel characteristics have been summarized in~\\cite{5GSIG}. Although it is still too early to apply these results to the full frequency range up to 100 GHz, these preliminary investigations have provided insight into the differences induced by the greatly extended frequency range. The preliminary analysis in~\\cite{5GSIG} illustrates the frequency dependency of large-scale channel characteristics across the measured frequency range.\n\\section{Conclusion}\nThe basis for this paper is the open literature in combination with recent and ongoing propagation channel measurements performed by a majority of the co-authors of this paper, some of which are as yet unpublished. The InH propagation models are somewhat different from the outdoor UMi and UMa models in that the indoor channels are more frequency-dependent than outdoor channels, leading to the ABG and CIF frequency-dependent NLOS path loss models. In LOS conditions, waveguiding effects were observed at all measured frequencies, leading to path loss exponents below the theoretical free-space value of $n$ = 2. 
The preceding tables give an overview of these recent measurement activities in different frequency bands and scenarios, in addition to further information provided in~\\cite{5GSIG}.\n\\section*{Acknowledgment}\nThe authors would like to thank Jianhua Zhang\\textsuperscript{b} and Yi Zheng\\textsuperscript{c} who are also contributing authors of this manuscript and 5G white paper~\\cite{5GSIG}.\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzyzz b/data_all_eng_slimpj/shuffled/split2/finalzyzz new file mode 100644 index 0000000000000000000000000000000000000000..fb3b34b3cbc69dd4f80c591708e8eeb7e0363641 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzyzz @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction: Motivation \\& Outline}\n\\label{s:intro}\n\nAll natural optical media are to some extent variable in space, often in such a complex way that they are best represented with statistics. In nuclear engineering, there is increasing interest in pebble-bed reactors where the core is made of many small spheres that contain both fuel and moderator material. In contrast with classic reactor designs, their detailed 3D geometry (i.e., how the spheres stack) is quite random. Earth's cloudy atmosphere is another instance of a very clumpy 3D optical medium. These are just two examples from vastly different disciplines where a good theory for stochastic transport would be a valuable asset.\n\nBroadly speaking, three kinds of model have been proposed to account for unresolved spatial variability in a transport medium.\n\\begin{itemize}\n\\item\nThe most natural approach is ``homogenization'' where one seeks \\emph{effective} material properties that can be used in the solution of a transport problem for a uniform medium, but would make an accurate prediction of the behavior of the heterogeneous stochastic medium. 
The homogenized material properties will depend on statistical quantities (means, variances, correlations, etc.) that characterize the stochastic medium of interest. Examples for the cloudy atmosphere are in \cite{DavisEtAl1990,Cahalan1994,Cairns2000}.
\item
An alternative is to develop new transport equations to solve either analytically or numerically. Examples for the cloudy atmosphere are in \cite{AvasteVainikko1974,Stephens1988b,Davis2006}. Interestingly, the early paper by Avaste and Vainikko \cite{AvasteVainikko1974} proposes a binary mixture model that has a long and ongoing history of application to nuclear engineering, going back at least to the seminal papers by Levermore, Pomraning et al. \cite{LevermorePomraning1986,LevermoreEtAl1988}. This approach is at least conceptually more difficult than the previous one since new methods must be found to solve the new transport equations.
\item
A third approach, of intermediate complexity, is to linearly combine the answers of a number of computations for uniform media in order to approximate the answer for the spatially heterogeneous stochastic medium. Examples of application to the cloudy atmosphere are in \cite{Mullamaa1972,RonnholmEtAl1980,StephensEtAl_TTSP91,CahalanEtAl1994a,Barker1996a,Barker_etal08}.
\end{itemize}
In our experience, homogenization will work well for weaker kinds of variability and/or higher tolerance for error. A model derived using the second approach, such as the one proposed in the following pages, should be more broadly applicable. Models of the third kind can be competitive, largely due to their straightforward implementation.

In the following, we will primarily keep clouds and atmospheric optics in mind, but the generalized transport model we propose may prove to be more broadly applicable.
Accordingly, we will talk about radiative transfer (RT) and RT equations (RTEs), but the entirety of this work can be thought of as transport theory as defined by the linear Boltzmann equation.\n\n\\paragraph{Outline:}\nThe remainder of this article is organized as follows.\nSection~\\ref{s:dDim_sRTE} introduces our notations and states the standard RTE and boundary conditions for homogeneous---or random but ``homogenized''---plane-parallel media in $d$ spatial dimensions ($d=1,2,3$).\nSection~\\ref{s:dDim_gRTE} introduces our ansatz leading to a new class of generalized RTEs in integral form with power-law transmission laws. Therein, we first see how non-exponential transmission laws arise from the statistics of stochastic media, with an emphasis on the role of spatial correlations, as exemplified by the Earth's turbulent and cloudy atmosphere.\nIn Section~\\ref{s:2stream_MarCh}, the $d=1$ case gets special attention. In the framework of standard RT, it is formally identical to the well-known two-stream model. Turning to generalized RT, we derive ab initio a deterministic numerical solution in $d=1$, and use it to investigate internal radiation fields. The new generalized RT solver is based on Markov chain formalism, traditionally a tool for random walk theory (including its application to Monte Carlo methods in transport). A technical Appendix details the computational methodology used in the Markov chain code.\nSection~\\ref{s:DiffusionLimits} revisits the behavior of diffuse transmission in the absence of absorption for standard and generalized RT in the diffusion limit (i.e., asymptotically large transport optical depth). New numerical experiments in $d=2$ validate the theoretical prediction based on self-similar L\\'evy flights. 
This reduced dimensionality is easier to comprehend graphically, and also may have applications in transport phenomena on random surfaces.
In Section~\ref{s:ReciprocityViolation}, we use the single scattering limit in $d=3$ to show that generalized RT is not reciprocal under a switch of sources and detectors. This violation of angular reciprocity is in fact observed in the Earth's cloudy atmosphere---the original motivation and application of the generalized RT model.
In the final Section~\ref{s:concl}, we present our conclusions and an outlook on practical applications of our theoretical and computational advances, including a connection with recent work on atmospheric spectroscopy \cite{ConleyCollins2011}.


\section{Standard Radiative Transport in $d$ Spatial Dimensions}
\label{s:dDim_sRTE}

\subsection{RTE for Homogeneous---or Homogenized---Media in Integro-Differential Form}

Let $I(z,\mathbf{\Omega})$ denote the steady-state radiance field at level $z$ in a \emph{uniform} $d$-dimensional plane-parallel optical medium of thickness $H$,
\begin{equation}
\text{M}_d(H) = \{ \vect{x}\in\mathbb{R}^d : 0 < z < H \},
\label{e:M_d}
\end{equation}
where the direction $\mathbf{\Omega}$ ranges over the set $\Xi_d$ of unit vectors in $d$ dimensions. Inside M$_d(H)$, $I(z,\mathbf{\Omega})$ obeys the standard RTE in integro-differential form,
\begin{equation}
\left[ \mu\frac{\mathrm{d}\;}{\mathrm{d} z} + \sigma \right] I(z,\mathbf{\Omega}) = S(z,\mathbf{\Omega}),
\label{e:nD_RTE_uniform}
\end{equation}
where $\mu = \Omega_z$, $\sigma$ is the extinction coefficient, and the source function is
\begin{equation}
S(z,\mathbf{\Omega}) = \sigma_\text{s} \int\limits_{\Xi_d} p(\mathbf{\Omega}^\prime\cdot\mathbf{\Omega}) I(z,\mathbf{\Omega}^\prime)\,\mathrm{d}\mathbf{\Omega}^\prime + q(z,\mathbf{\Omega}),
\label{e:nD_RTE_SourceFunction}
\end{equation}
with $\sigma_\text{s}$ the scattering coefficient, $p(\cdot)$ the normalized phase function (PF), and $q(z,\mathbf{\Omega})$ a source term. In stochastic media, the extinction field $\sigma(\vect{x})$ fluctuates in space; fields with power-law wavenumber spectra,
\begin{equation}
E(k) \propto k^{\beta},
\label{e:PowerLaw_EnergySpectum}
\end{equation}
with $-\beta > 1$ are not noise-like---they have a stochastic continuity property---and are discussed in \S\ref{s:nonExpTran}.

In the present study, we are exclusively interested in the response of uniform or stochastic media to irradiation from an external source. If this source is collimated (highly concentrated into a single direction $\mathbf{\Omega}_0$, with $\Omega_{0z} > 0$), then we can take
\begin{equation}
q(z,\mathbf{\Omega}) = F_0 \exp(-\sigma z/\mu_0) \sigma_\text{s} p(\mathbf{\Omega}_0\cdot\mathbf{\Omega})
\label{e:nD_RTE_SourceTerm}
\end{equation}
in the uniform case, where $F_0$ (in W/m$^{d-1}$) is its uniform areal density.
We also introduce here
\begin{equation*}
\mu_0 = \cos\theta_0 = \Omega_{0z}.
\end{equation*}
Note that we have oriented the $z$-axis positively in the direction of the incoming flow of solar radiation, as is customary in atmospheric optics. The meaning of each factor in (\ref{e:nD_RTE_SourceTerm}) is clear: the incoming flux $F_0$ at $z = 0$ is attenuated exponentially (Beer's law) along the oblique path to level $z$, where it is scattered with probability $\sigma_\text{s}$ per unit of path length and, more specifically, into direction $\mathbf{\Omega}$ according to the PF value for $\theta_\text{s} = \cos^{-1}\mathbf{\Omega}_0\cdot\mathbf{\Omega}$. In this case, $I(z,\mathbf{\Omega})$ is the diffuse radiation (i.e., scattered once or more).

The appropriate boundary conditions (BCs) for the diffuse radiance that obeys (\ref{e:nD_RTE_uniform})--(\ref{e:nD_RTE_SourceTerm}) will express that none is coming in from the top of the medium: $I(0,\mathbf{\Omega}) = 0$ for $\Omega_z > 0$.
At the lower ($z = H$) boundary, we will take\n\\begin{equation}\nI(H,\\mathbf{\\Omega}) = F_-(H) \/ c_d,\n\\label{e:nD_RTE_lowerBC_Lambertian}\n\\end{equation}\nwhere\n\\begin{equation}\nF_-(H) = \\rho \\, F_+(H),\n\\label{e:rho_Lambertian}\n\\end{equation}\nfor all $\\Omega_z < 0$, where $\\rho$ is the albedo of the partially ($0 < \\rho < 1$) or totally ($\\rho = 1$) reflective surface; we have also introduced the downwelling (subscript ``$+$'') and upwelling (subscript ``$-$'') hemispherical fluxes\n\\begin{equation}\n\\begin{array}{l}\nF_+(z) = \\int\\limits_{\\Omega_z>0} \\Omega_z I(z,\\mathbf{\\Omega})\\mathrm{d}\\mathbf{\\Omega}\n + \\mu_0 F_0 \\mathrm{e}^{-\\sigma z\/\\mu_0}, \\\\\nF_-(z) = \\int\\limits_{\\Omega_z<0}|\\Omega_z|I(z,\\mathbf{\\Omega})\\mathrm{d}\\mathbf{\\Omega}.\n\\end{array}\n\\label{e:hemispherical_fluxes}\n\\end{equation}\nThis surface reflectivity model is, for simplicity, Lambertian (isotropically reflective), and the numerical constant $c_d = \\int_{\\Omega_z>0}\\Omega_z\\mathrm{d}\\mathbf{\\Omega}$ is given in Table~\\ref{t:Definitions} for $d = 1,2,3$. Naturally, we will also consider a black (purely absorbing) surface in (\\ref{e:nD_RTE_lowerBC_Lambertian}) by setting $\\rho = 0$.\n\nAlternatively, we can view $I(z,\\mathbf{\\Omega})$ as total (uncollided \\emph{and} once or more scattered) radiance, and assume $q(z,\\mathbf{\\Omega}) \\equiv 0$ inside M$_d(H)$. Radiation sources will then be represented in the expression of boundary conditions (BCs). The upper ($z = 0$) BC expresses either diffuse or collimated incoming radiation. In the former case, we have\n\\begin{equation}\nI(0,\\mathbf{\\Omega}) = F_0 \/ c_d,\n\\label{e:nD_RTE_upperBC_diffuse}\n\\end{equation}\nfor any $\\mathbf{\\Omega}$ with $\\Omega_z > 0$. In the latter case, we have\n\\begin{equation}\nI(0,\\mathbf{\\Omega}) = F_0 \\delta(\\mathbf{\\Omega}-\\mathbf{\\Omega}_0),\n\\label{e:nD_RTE_upperBC_collimated}\n\\end{equation}\nfor $\\Omega_z > 0$. 
To reconcile (\ref{e:nD_RTE_SourceTerm}) with the above BC, we notice that
\begin{equation}
I_0(z,\mathbf{\Omega}) = F_0 \exp(-\sigma z/\mu_0) \delta(\mathbf{\Omega}-\mathbf{\Omega}_0)
\label{e:uncollided}
\end{equation}
is the solution of the ODE in (\ref{e:nD_RTE_uniform}) when the r.-h. side vanishes identically (no internal sources, nor scattering), and we use (\ref{e:nD_RTE_upperBC_collimated}) as the initial condition. This uncollided radiance becomes the source of diffuse radiation immediately after scattering, hence its role in (\ref{e:nD_RTE_SourceTerm}).

In (\ref{e:uncollided}), $s = z/\mu_0$ is simply the oblique path covered by the radiation in the medium from its source at $z = s = 0$ to the location where it is detected, or scattered, or absorbed, or even escapes the medium ($s \ge H/\mu_0$). From the well-known properties of the exponential probability distribution, this makes the mean free path (MFP) $\ell$ between emission, scattering or absorption events equal to the e-folding distance $1/\sigma$.

Quantities of particular interest in many applications, including atmospheric remote sensing, are radiances at the boundaries that describe out-going radiation: $I(0,\mathbf{\Omega})$ with $\Omega_z \le 0$; $I(H,\mathbf{\Omega})$ with $\Omega_z \ge 0$. Normalized (outgoing, hemispherical) boundary fluxes,
\begin{eqnarray}
\label{e:reflectivity}
R &=& \frac{F_-(0)}{\mu_0 F_0}, \\
\label{e:transmittivity}
T &=& \frac{F_+(H)}{\mu_0 F_0},
\end{eqnarray}
are also of interest, particularly in radiation energy budget computations. In (\ref{e:reflectivity})--(\ref{e:transmittivity}), the denominator is in fact $F_+(0)$ from (\ref{e:hemispherical_fluxes}).
Therefore, for the diffuse illumination pattern in (\\ref{e:nD_RTE_upperBC_diffuse}), we only need to divide by $F_0$.\n\nFinally, a convenient non-dimensional representation of out-going radiances, at least at the upper boundary, uses the ``bidirectional reflection factor'' (BRF) form:\n\\begin{equation}\nI_\\text{BRF}(\\mathbf{\\Omega}) = \\frac{c_d I(0,\\mathbf{\\Omega})}{\\mu_0 F_0},\n\\label{e:BRF_form}\n\\end{equation}\nfor $\\mu < 0$. This is the ``effective'' albedo $\\rho$ of the medium, i.e., as defined in (\\ref{e:nD_RTE_lowerBC_Lambertian})--(\\ref{e:rho_Lambertian}), but with $z=0$ rather than $z=H$, knowing $I(0,\\mathbf{\\Omega})$ and hence $F_+(0) = \\mu_0 F_0$.\nUnlike the optical property $\\rho$ in (\\ref{e:rho_Lambertian}) and the radiative response $R$ in (\\ref{e:reflectivity}), $I_\\text{BRF}(\\mathbf{\\Omega})$ is not restricted by energy conservation to the interval $[0,1]$.\n\n\\begin{table}[ht]\n\\caption{{\\bf Definitions for }$d = 1,2,3$}\n\\label{t:Definitions}\n\\begin{center}\n\\begin{tabular}{|l||c|c|c|}\n\\hline\n$d$ & 1 & 2 & 3 \\\\\n\\hline\\hline\n$\\vect{x}$ & $z$ & $(x,z)^\\text{T}$ & $(x,y,z)^\\text{T}$ \\\\\n\\hline\n$\\mathrm{d}\\vect{x}$ & $\\mathrm{d} z$ & $\\mathrm{d} x\\mathrm{d} z$ & $\\mathrm{d} x\\mathrm{d} y\\mathrm{d} z$ \\\\\n\\hline\n$\\mathbf{\\Omega}$ & $\\pm 1$ & $(\\sin\\theta,\\cos\\theta)^\\text{T}$ & $(\\sin\\theta\\cos\\phi,\\sin\\theta\\sin\\phi,\\cos\\theta)^\\text{T}$ \\\\\n\\hline\n$\\mathrm{d}\\mathbf{\\Omega}$ & n\/a$^\\dag$ & $\\mathrm{d}\\theta$ & $\\mathrm{d}\\cos\\theta\\mathrm{d}\\phi$ \\\\\n\\hline\n$c_d$ in (\\ref{e:nD_RTE_lowerBC_Lambertian}), (\\ref{e:nD_RTE_upperBC_diffuse}) & 1 & 2 & $\\pi$ \\\\\n\\hline\n[$F_0$] in (\\ref{e:nD_RTE_SourceTerm}), (\\ref{e:nD_RTE_upperBC_diffuse})--(\\ref{e:BRF_form}) & W & W\/m & W\/m$^2$ \\\\\n\\hline\n[$I$] & W & W\/m\/rad & W\/m$^2$\/sr \\\\\n\\hline\n[$S$] = [$q$] & W\/m & W\/m$^2$\/rad & W\/m$^3$\/sr \\\\\n\\hline\n$p_{d,\\text{iso}} = 
p_0(\\mu_\\text{s})$ & 1\/2 [-] & 1\/2$\\pi$ [rad$^{-1}$] & 1\/4$\\pi$ [sr$^{-1}$] \\\\\n\\hline\n$p_g(\\mu_\\text{s})$ & $\\frac{1+g\\mu_\\text{s}}{2}$ & $\\left(\\frac{1}{2\\pi}\\right)\\frac{1-g^2}{1+g^2-2g\\mu_\\text{s}}$ & $\\left(\\frac{1}{4\\pi}\\right)\\frac{1-g^2}{(1+g^2-2g\\mu_\\text{s})^{3\/2}}$ \\cite{HG_41}\\\\\n\\hline\n$\\chi_d$ in (\\ref{e:dD_diffusion_BCs}) & 1 & $\\pi$\/4 & 2\/3 \\\\\n\\hline\n\\end{tabular} \\\\\n\\end{center}\n{\\footnotesize $\\null^\\dag$In $d=1$, angular integrals become sums over the up ($\\mu = -1$) and down ($\\mu = +1$) directions, or only downward in (\\ref{e:nD_RTE_lowerBC_Lambertian}).} \\\\\n{\\footnotesize N.B. In all cases, we use $\\mu_\\text{s} = \\cos\\theta_\\text{s}$ to denote $\\mathbf{\\Omega}\\cdot\\mathbf{\\Omega}^\\prime$, the scalar product of the ``before'' and ``after'' scattering direction vectors.}\n\\end{table}\n\nActually, in the familiar $d = 3$ dimensions, all of the above is known as ``1D'' RT theory since only the spatial dimensions with any form of variability count. If $\\sigma$, $\\sigma_\\text{s}$ and $p(\\cdot)$ depend on $z$, it is still 1D RT. One can even remove from further consideration the former quantity by adopting the standard change of variables, $z \\mapsto \\tau = \\int_0^z \\sigma(z^\\prime) \\mathrm{d} z^\\prime$. In this case, $z \\mapsto \\tau = \\sigma z$ (depth in units of MFP $\\ell = 1\/\\sigma$), then (\\ref{e:nD_RTE_uniform})--(\\ref{e:nD_RTE_SourceFunction}) become\n\\begin{equation}\n\\left[ \\mu\\frac{\\mathrm{d}\\;}{\\mathrm{d}\\tau} + 1 \\right]\nI(\\tau,\\mathbf{\\Omega}) = \\omega \\int\\limits_{\\Xi_d}\np(\\mathbf{\\Omega}^\\prime\\cdot\\mathbf{\\Omega})I(\\tau,\\mathbf{\\Omega}^\\prime)\\mathrm{d}\\mathbf{\\Omega}^\\prime + q(\\tau,\\mathbf{\\Omega}),\n\\label{e:1D_RTE}\n\\end{equation}\nwhere $\\mu$ denotes $\\Omega_z$ ($=\\cos\\theta$ if $d>1$) and $\\omega = \\sigma_\\text{s}\/\\sigma$ is the single scattering albedo (SSA). 
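As a quick numerical check of the $d=3$ phase-function row $p_g(\mu_\text{s})$ in Table~\ref{t:Definitions}, the following sketch (not part of the original analysis; the value $g=0.85$ is merely illustrative) verifies the normalization $\int_{\Xi_3} p_g\,\mathrm{d}\mathbf{\Omega} = 1$ and that the asymmetry factor of $p_g$ is indeed $g$:

```python
import numpy as np

def p_g(mu_s, g):
    """Henyey-Greenstein phase function in d=3 (per steradian), as in Table 1."""
    return (1.0 / (4.0 * np.pi)) * (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * mu_s) ** 1.5

g = 0.85  # illustrative asymmetry factor, typical of cloud droplets
# Over the sphere, dOmega = 2*pi d(mu_s); integrate on [-1, 1] with Gauss-Legendre nodes.
mu, w = np.polynomial.legendre.leggauss(200)
norm = 2.0 * np.pi * np.sum(w * p_g(mu, g))       # normalization: should be 1
asym = 2.0 * np.pi * np.sum(w * mu * p_g(mu, g))  # asymmetry factor: should equal g
print(norm, asym)
```

The same two quadratures reproduce the $d=1$ and $d=2$ rows of the table with the appropriate measure $\mathrm{d}\mathbf{\Omega}$.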
We have assumed that $\omega$ and $p(\cdot)$ are independent of $z$, hence of $\tau$, for simplicity as well as consistency with the notion of a homogenized optical medium.

Another important non-dimensional property is the total optical thickness of the medium M$_3(H)$, namely, $\tau^\star = \sigma H = H/\ell$. BCs for (\ref{e:1D_RTE}) are expressed as in (\ref{e:nD_RTE_lowerBC_Lambertian})--(\ref{e:nD_RTE_upperBC_collimated}), but at $\tau = 0,\tau^\star$.

Finally, we adopt the Henyey--Greenstein (H--G) PF model $p_g(\mu_\text{s})$ expressed in the penultimate row of Table~\ref{t:Definitions}. Its sole parameter is the asymmetry factor $g = \int_{\Xi_d}\mathbf{\Omega}^\prime\cdot\mathbf{\Omega}\, p(\mathbf{\Omega}^\prime\cdot\mathbf{\Omega})\,\mathrm{d}\mathbf{\Omega}^\prime$. The whole 1D RT problem is then determined entirely by the choice of four quantities, $\{\omega,g;\tau^\star;\rho\}$, plus $\mu_0$ if $d > 1$.

\subsection{Integral Forms of the $d$-Dimensional RTE}

Henceforth, we take $q(\tau,\mathbf{\Omega}) \equiv 0$ in (\ref{e:nD_RTE_uniform}) and, consequently, $I(\tau,\mathbf{\Omega})$ is total (uncollided and scattered) radiation and the upper BC is (\ref{e:nD_RTE_upperBC_collimated}). We will also assume in the remainder that $\rho = 0$ in the lower BC, cf. (\ref{e:nD_RTE_lowerBC_Lambertian})--(\ref{e:rho_Lambertian}), which then becomes simply $I(\tau^\star,\mathbf{\Omega}) = 0$ for $\mu < 0$. These assumptions are not essential to our goal of generalizing RT theory to account for spatial heterogeneity with long-range correlations, but they do simplify many of the following expressions that are key to the discussion.

Now suppose that we somehow know $S(\tau,\mathbf{\Omega})$ in (\ref{e:nD_RTE_uniform}), with $q(\tau,\mathbf{\Omega}) \equiv 0$. It is then straightforward to compute $I(\tau,\mathbf{\Omega})$ everywhere.
We simply use upwind integration or ``sweep:''\n\\begin{equation}\nI(\\tau,\\mathbf{\\Omega}) =\n\\left\\{ \\begin{array}{ll}\n\\int\\limits_0^\\tau S(\\tau^\\prime,\\mathbf{\\Omega})\\mathrm{e}^{-(\\tau-\\tau^\\prime)\/\\mu}\\frac{\\mathrm{d}\\tau^\\prime}{\\mu}\n+ I(0,\\mathbf{\\Omega})\\mathrm{e}^{-\\tau\/\\mu},\n& \\text{if }\\mu > 0, \\\\\n\\int\\limits_\\tau^{\\tau^\\star}\nS(\\tau^\\prime,\\mathbf{\\Omega})\\mathrm{e}^{-(\\tau^\\prime-\\tau)\/|\\mu|}\\frac{\\mathrm{d}\\tau^\\prime}{|\\mu|}\n+ I(\\tau^\\star,\\mathbf{\\Omega})\\mathrm{e}^{-(\\tau^\\star-\\tau)\/|\\mu|},\n& \\text{otherwise,}\n\\end{array} \\right.\n\\label{e:S_sweep}\n\\end{equation}\nwhere the boundary contributions are specified by the BCs. When these BCs express an incoming collimated beam at $\\tau = 0$, cf. (\\ref{e:nD_RTE_upperBC_collimated}), and an absorbing surface at $\\tau = \\tau^\\star$, cf. (\\ref{e:nD_RTE_lowerBC_Lambertian})--(\\ref{e:rho_Lambertian}) with $\\rho = 0$, this simplifies to\n\\begin{equation}\nI(\\tau,\\mathbf{\\Omega}) =\n\\left\\{ \\begin{array}{ll}\n\\int\\limits_0^\\tau S(\\tau^\\prime,\\mathbf{\\Omega})\\mathrm{e}^{-(\\tau-\\tau^\\prime)\/\\mu}\\frac{\\mathrm{d}\\tau^\\prime}{\\mu}\n+ I_0(\\tau,\\mathbf{\\Omega}),\n& \\text{if }\\mu > 0, \\\\\n\\int\\limits_\\tau^{\\tau^\\star}\nS(\\tau^\\prime,\\mathbf{\\Omega})\\mathrm{e}^{-(\\tau^\\prime-\\tau)\/|\\mu|}\\frac{\\mathrm{d}\\tau^\\prime}{|\\mu|},\n& \\text{otherwise,}\n\\end{array} \\right.\n\\label{e:simpleS_sweep}\n\\end{equation}\nwhere $I_0(\\tau,\\mathbf{\\Omega})$ is uncollided radiance from (\\ref{e:uncollided}) with $z = \\tau\/\\sigma$.\n\nWith this formal solution of the integro-differential RTE in hand, we can substitute the definition of $S(\\tau,\\mathbf{\\Omega})$ in terms of $I(\\tau,\\mathbf{\\Omega})$ expressed in (\\ref{e:nD_RTE_SourceFunction}), and obtain an integral form of the RTE:\n\\begin{equation}\nI(\\tau,\\mathbf{\\Omega}) = \\int\\limits_{\\Xi_d}\\int\\limits_0^{\\tau^\\star} 
\\mathcal{K}(\\tau,\\mathbf{\\Omega};\\tau^\\prime,\\mathbf{\\Omega}^\\prime) I(\\tau^\\prime,\\mathbf{\\Omega}^\\prime) \\mathrm{d}\\tau^\\prime\\mathrm{d}\\mathbf{\\Omega}^\\prime\n+ Q_I(\\tau,\\mathbf{\\Omega}),\n\\label{e:nD_integral_RTE}\n\\end{equation}\nwhere\n\\begin{equation}\nQ_I(\\tau,\\mathbf{\\Omega}) = \\exp(-\\tau\/\\mu_0) \\delta(\\mathbf{\\Omega}-\\mathbf{\\Omega}_0).\n\\label{e:Q_integral_RTE}\n\\end{equation}\nThis is simply the uncollided radiance field $I_0(\\tau,\\mathbf{\\Omega})$ from (\\ref{e:simpleS_sweep}) and (\\ref{e:uncollided}) where, without loss of generality, we henceforth take $F_0 = 1$. The kernel of the integral RTE is given by\n\\begin{equation}\n\\mathcal{K}(\\tau,\\mathbf{\\Omega};\\tau^\\prime,\\mathbf{\\Omega}^\\prime) =\n\\omega p_g(\\mathbf{\\Omega}\\cdot\\mathbf{\\Omega}^\\prime)\n\\Theta\\left(\\frac{\\tau-\\tau^\\prime}{\\mu}\\right)\n\\frac{\\exp(-|\\tau-\\tau^\\prime|\/|\\mu|)}{|\\mu|},\n\\label{e:K_integral_RTE}\n\\end{equation}\nwhere $\\Theta(x)$ is the Heaviside step function ($=1$ if $x \\ge 0$, $=0$ otherwise). It enforces the causal requirement of doing upwind sweeps.\n\nConversely, one can substitute (\\ref{e:simpleS_sweep}) into (\\ref{e:nD_RTE_SourceFunction}), with the adopted change of spatial coordinate ($z\\mapsto\\tau$) leading to $\\sigma_\\text{s}\\mapsto\\omega$. 
That yields the so-called ``ancillary'' integral RTE:\n\\begin{equation}\nS(\\tau,\\mathbf{\\Omega}) = \\int\\limits_{\\Xi_d}\\int\\limits_0^{\\tau^\\star} \\mathcal{K}(\\tau,\\mathbf{\\Omega};\\tau^\\prime,\\mathbf{\\Omega}^\\prime) S(\\tau^\\prime,\\mathbf{\\Omega}^\\prime) \\mathrm{d}\\tau^\\prime\\mathrm{d}\\mathbf{\\Omega}^\\prime\n+ Q_S(\\tau,\\mathbf{\\Omega}),\n\\label{e:nD_ancillary_IRTE}\n\\end{equation}\nwhere\n\\begin{equation}\nQ_S(\\tau,\\mathbf{\\Omega}) = \\omega p_g(\\mathbf{\\Omega}\\cdot\\mathbf{\\Omega}_0) \\exp(-\\tau\/\\mu_0).\n\\label{e:Q_ancillary_IRTE}\n\\end{equation}\nThe kernel is the same as given in (\\ref{e:K_integral_RTE}). However, if there were spatial variations in the optical properties, SSA $\\omega$ and\/or PF $p(\\cdot)$, then the kernels would differ in that (\\ref{e:Q_integral_RTE}) would use the starting point and (\\ref{e:nD_ancillary_IRTE}) the end point of the transition; see, e.g., \\cite{DavisKnyazikhin2005}.\n\nIf (\\ref{e:nD_integral_RTE}) is written in operator language as $I = K I + Q_I$, then it is easy to verify that the Neumann series is a constructive approach for the solution: $I = \\sum_{n=0}^\\infty I_n$, where $I_{n+1} = K I_n$, hence\n\\begin{equation}\nI = \\sum_{n=0}^\\infty K^n Q_I = (E-K)^{-1} Q_I,\n\\label{e:IRTE_soln}\n\\end{equation}\nwhere $E$ is the identity operator. This applies equally to the estimation of $S$ as a solution of (\\ref{e:nD_ancillary_IRTE}). 
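The constructive character of (\ref{e:IRTE_soln}) is easy to check on a discrete toy analogue, in which $K$ becomes a matrix of spectral radius less than unity (the matrix and source below are arbitrary illustrative choices, not a discretization of (\ref{e:K_integral_RTE})):

```python
import numpy as np

# Toy discrete analogue of I = sum_n K^n Q = (E - K)^{-1} Q.
# Row sums < 1 guarantee spectral radius < 1, mimicking single-scattering
# albedo omega < 1 together with leakage through the boundaries.
rng = np.random.default_rng(1)
n = 40
K = rng.random((n, n))
K *= 0.9 / K.sum(axis=1, keepdims=True)  # scale so every row sums to 0.9
Q = rng.random(n)                        # discrete stand-in for the source Q_I

I_direct = np.linalg.solve(np.eye(n) - K, Q)  # (E - K)^{-1} Q

I_series, term = np.zeros(n), Q.copy()
for _ in range(200):                          # Neumann iterations: I_{n+1} = K I_n
    I_series += term
    term = K @ term

print(np.allclose(I_series, I_direct))
```

Physically, the $n$-th term of the series is the contribution of radiation scattered exactly $n$ times, which is why the Neumann sum is also known as the method of successive orders of scattering.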
Once $S(\\tau,\\mathbf{\\Omega})$ is a known quantity, one can obtain the readily observable quantity $I(\\tau,\\mathbf{\\Omega})$ using (\\ref{e:simpleS_sweep}).\n\n\\noindent\n{\\it Comment on Angular Reciprocity:}\n\n\\noindent\nNote that $\\mathcal{K}(\\tau,\\mathbf{\\Omega};\\tau^\\prime,\\mathbf{\\Omega}^\\prime)$ in (\\ref{e:K_integral_RTE}) is invariant when we replace $(\\tau,\\mathbf{\\Omega};\\tau^\\prime,\\mathbf{\\Omega}^\\prime)$ with $(\\tau^\\prime,-\\mathbf{\\Omega}^\\prime;\\tau,-\\mathbf{\\Omega})$, i.e., swap positions in the medium and switch the direction of propagation. This leads to reciprocity of the radiance fields for plane-parallel slab media under the exchange of sources and detectors \\cite{Chandra1950}. In our case, we consider radiance escaping the medium in reflection ($\\tau = 0$) or transmission ($\\tau = \\tau^\\star$) since the source is external. Focusing on reflected radiance in BRF form (\\ref{e:BRF_form}), reciprocity reads as\n\\begin{equation}\nI_\\text{BRF}(0,\\mathbf{\\Omega};\\mathbf{\\Omega}_0) = I_\\text{BRF}(0,-\\mathbf{\\Omega}_0;-\\mathbf{\\Omega}),\n\\label{e:R_reciprocity}\n\\end{equation}\nwhere the second angular argument reads as a parameter (from upper BC) rather than an independent variable.\nSimilarly, we have $I_\\text{BRF}(\\tau^\\star,\\mathbf{\\Omega};\\mathbf{\\Omega}_0) = I_\\text{BRF}(\\tau^\\star,-\\mathbf{\\Omega}_0;-\\mathbf{\\Omega})$ in transmittance, using the same BRF-type normalization.\n\nWe can verify transmissive reciprocity explicitly on $I_0 = Q_I$ in (\\ref{e:Q_integral_RTE}) for uncollided radiance. Reflective reciprocity can be verified less trivially using singly-scattered radiance $I_1 = K I_0 = K Q_I$. 
Based on (\ref{e:Q_integral_RTE})--(\ref{e:K_integral_RTE}), this leads to
\begin{equation}
I_1(0,\mathbf{\Omega};\mathbf{\Omega}_0) =
\omega p_g(\mathbf{\Omega}\cdot\mathbf{\Omega}_0)
\int\limits_0^{\tau^\star} \exp(-\tau^\prime/\mu_0)
\exp(-\tau^\prime/|\mu|) \mathrm{d}\tau^\prime/|\mu|.
\label{e:1_scatter_sweep}
\end{equation}
From there, (\ref{e:BRF_form}) yields
\begin{equation}
\frac{c_d}{\mu_0} I_1(0,\mathbf{\Omega};\mathbf{\Omega}_0) = c_d
\frac{\omega p_g(\mathbf{\Omega}\cdot\mathbf{\Omega}_0)}{\mu_0+|\mu|}
\left( 1-\exp\left[-\tau^\star\left(\frac{1}{\mu_0}+\frac{1}{|\mu|}\right)\right] \right),
\label{e:1_scatter_R}
\end{equation}
with $\mu_0 > 0$ and $\mu < 0$. Noting that $-\mu > 0$ and $-\mu_0 < 0$, (\ref{e:1_scatter_R}) verifies (\ref{e:R_reciprocity}). The same can be shown for transmitted radiance.


\section{Generalized Radiative Transport in $d$ Spatial Dimensions}
\label{s:dDim_gRTE}

\subsection{Emergence of Non-Exponential Transmission Laws in the Cloudy Atmosphere}
\label{s:nonExpTran}

\subsubsection{Two-Point Correlations in Clouds According to In-Situ Probes}

We refer to Davis and Marshak \cite{DavisMarshak04} and Davis \cite{Davis2006} for detailed accounts of the optical variability we expect---and indeed observe \cite[and references therein]{Davis_etal99}---in the Earth's turbulent cloudy atmosphere.
See also Kostinski \cite{Kostinski2001} for an interestingly different approach.

The important---almost defining---characteristic of this variability is that it prevails over a broad range of scales, which translates statistically into auto-correlation properties with long ``memories.'' The traditional metric for 2-point correlations in turbulent media is the $q^\text{th}$-order structure function \cite{MoninYaglom1975},
\begin{equation}
\text{SF}_q(r) = \overline{|f(\vect{x}+\vect{r})-f(\vect{x})|^q},
\label{e:qSF_def}
\end{equation}
where $f(\vect{x})$ is a spatial variable of interest, $\vect{r}$ is a spatial increment of magnitude $r$, and the overscore denotes spatial or ensemble averaging. Structure functions are the appropriate quantities to use for fields that are non-stationary but have stationary increments.\footnote{
Following many others, we borrow here the terminology of time-series analysis since the proper language of statistical ``homogeneity'' might be confused with structural homogeneity, a usage we've already introduced in the above.}
Stationarity of the increments in $f(\vect{x})$ means that the ensemble average on the right-hand side of (\ref{e:qSF_def}) depends only on $\vect{r}$. Further assuming statistical isotropy, for simplicity, it will depend only on $r$. Norms of wavelet coefficients have become popular alternatives to the absolute increments of $f(\vect{x})$ used in (\ref{e:qSF_def}) \cite{Farge92,MuzyEtAl1994}.

As expected for all turbulent phenomena, in-situ observations in clouds invariably show that \cite{Davis_etal94,Davis_etal96,Marshak_etal97,Davis_etal99}
\begin{equation}
\overline{|f(\vect{x}+\vect{r})-f(\vect{x})|^q} \sim r^{\zeta_q},
\label{e:qSF_scaling}
\end{equation}
for $r$ ranging from meters to kilometers, where $\zeta_q$ is generally a ``multiscaling'' or ``multifractal'' property, meaning that $\zeta_q/q$ is not a constant.
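The estimator in (\ref{e:qSF_def}) is easy to exercise on synthetic data. The sketch below (illustrative, not cloud data) uses a discrete Brownian transect, for which increments are stationary and $\zeta_2 = 1$ exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
# Brownian transect: cumulative sum of white noise -> stationary increments, zeta_2 = 1.
f = np.cumsum(rng.standard_normal(1 << 16))

lags = 2 ** np.arange(0, 9)  # r = 1 .. 256 grid units
sf2 = np.array([np.mean((f[r:] - f[:-r]) ** 2) for r in lags])  # SF_2(r) estimator

slope = np.polyfit(np.log(lags), np.log(sf2), 1)[0]  # empirical zeta_2
print(slope)  # close to 1 for Brownian motion
```

Replacing the Brownian transect with aircraft LWC data and repeating the fit for several $q$ is exactly how the empirical exponents $\zeta_q$ cited below are obtained.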
Physically, this means that knowledge of one statistical moment, such as the variance SF$_2(r)$, of the absolute increments cannot be used to predict all others based on dimensional analysis. Otherwise, the scaling is deemed ``monoscaling'' or ``monofractal.''

It has long been known theoretically---and well-verified empirically---that $\zeta_2 = 2/3$ when $f$ is a component of the wind \cite{Kolmogorov41}, temperature or a passive scalar density \cite{Obukhov1949,Corrsin1951}, when the turbulence is statistically homogeneous and isotropic. This is equivalent \cite{MoninYaglom1975} to stating that energy spectra of these various quantities in turbulence are power-law with an exponent $\beta = -5/3$ in (\ref{e:PowerLaw_EnergySpectum}). It can also be shown theoretically that $\zeta_q$ is necessarily a concave function of $q$, a prediction that has also been amply verified empirically, although in practice the concavity is relatively weak.

At scales smaller than meters, cloud liquid water content (LWC) undergoes, according to reliable in-situ measurements in marine stratocumulus, an interesting transition toward higher levels of variability than expected from the scaling in (\ref{e:qSF_scaling}) \cite{Davis_etal99}. Specifically, sharp quasi-discontinuities associated with positively-skewed deviations occur at random points/times in the transect through the LWC field sampled by airborne instruments. These jumps are believed to be a manifestation of the random entrainment of non-cloudy air into the cloud \cite{GerberEtAl01}.

At sufficiently large scales, $\overline{|f(\vect{x}+\vect{r})-f(\vect{x})|^q}$ ceases to increase with $r$ as $f(\vect{x})$ becomes independent of (decorrelates from) itself at very large distances.
For non-negative properties, such as the extinction coefficient or particle density, this decoupling has to happen at least at the scale $r$ where the absolute increments (fluctuations) become commensurate in magnitude with the positive mean of the property. This rationalizes the upper limit of the scaling range for cloud LWC or droplet density at the scale of several kilometers. In the cloudy atmosphere, decorrelation can happen sooner in the vertical than in the horizontal in cases of strong stratification, i.e., stratus and stratocumulus scenarios, versus broken cumulus generated by vigorous convection.

In atmospheric RT applications to be discussed next, the outer limit of $r$ only needs to be on the order of whatever scale it takes to reach significant optical distances. That can be less than the cloud thickness in stratus/stratocumulus cases, or can stretch to the whole troposphere (the cloudy part of the atmosphere) when convection makes the dynamics more 3D than 2D.

\subsubsection{Statistical Ramifications for Cloud Optical Properties}

Our present goal is to quantify the impact of unresolved random spatial fluctuations of $\sigma(\vect{x})$ on macroscopic transport properties such as large-scale boundary fluxes or remotely observable radiances, spatially averaged in the instrument's field-of-view. In view of the importance of sweep operations in $d$-dimensional RT, we also need to understand the statistics of integrals of $\sigma(\vect{x})$ over a range of distances $s$ in an arbitrary direction $\mathbf{\Omega}$.
This is the optical path along a straight line between points $\vect{x}$ and $\vect{x}+s\mathbf{\Omega}$:
\begin{equation}
\tau(\vect{x},\vect{x}+s\mathbf{\Omega}) = \int\limits_0^s \sigma(\vect{x}+s^\prime\mathbf{\Omega})\,\mathrm{d} s^\prime.
\label{e:optical_distance}
\end{equation}
Better still, we need to characterize statistically the direct transmission factor $\exp[-\tau(\vect{x},\vect{x}+s\mathbf{\Omega})]$ that is used systematically in the upwind sweep operation. Assuming stationarity and isotropy, we define
\begin{equation}
\overline{T}(s) = \overline{\exp[-\tau(\vect{x},\vect{x}+s\mathbf{\Omega})]} = \overline{\exp[-\sigma_\text{avr}(\vect{x},\mathbf{\Omega};s)s]},
\label{e:mean_Tdir}
\end{equation}
where $\sigma_\text{avr}(\vect{x},\mathbf{\Omega};s)$ is the average extinction encountered by radiation propagating uncollided between $\vect{x}$ and $\vect{x}+s\mathbf{\Omega}$:
\begin{equation}
\sigma_\text{avr}(\vect{x},\mathbf{\Omega};s) = \frac{1}{s}\int\limits_0^s\sigma(\vect{x}+s^\prime\mathbf{\Omega})\,\mathrm{d} s^\prime.
\label{e:1ptScaleInv}
\end{equation}
This is essentially a coarse version of the random field $\sigma(\vect{x})$, smoothed over a given scale $s$. What behavior do we expect it to have?

Being at its core a material density (times a collision cross-section), increments of $\sigma(\vect{x})$ will follow (\ref{e:qSF_scaling}) when $s$ is varied, but the segment-mean $\sigma_\text{avr}(\vect{x},\mathbf{\Omega};s)$ will not depend much on the scale $s$. Indeed, comparing values of $\sigma_\text{avr}(\vect{x},\mathbf{\Omega};s)$ for different values of $s$ is really just saying, with linear transects, that the notion of a material density can be defined.
That simple proposition is usually stated as a volumetric statement: the amount of material in a volume $s^d$ is proportional to that volume, and the proportionality factor is called ``density.''

Even more fundamentally, $s$-independence of (\ref{e:1ptScaleInv}), at least in the $s \to 0$ limit,\footnote{
This limit is to be understood physically as down to the scale where noise-like fluctuations occur, which is at least the inter-particle distance in a cloud but could be larger \cite{Davis_etal99,GerberEtAl01}.}
is tantamount to saying that $\sigma(\vect{x}) = \sigma_\text{avr}(\vect{x},\mathbf{\Omega};0)$ is indeed a ``function,'' i.e., the symbol ``$\sigma(\vect{x})$'' represents a number. This is a natural consequence of increments that vanish at least on average with $s$, a property known as ``stochastic continuity,'' which incidentally does not exclude a countable number of discontinuities (e.g., sharp cloud edges in the atmosphere).
All of these ramifications come with the above-mentioned long-range correlations described by (\ref{e:qSF_scaling}) for finite positive values of $\zeta_q$.

A counter-example to such intuitive $s$-independent behavior for averages is the spatial equivalent of white noise. Indeed, if $\sigma(\vect{x})$ on the right-hand side of (\ref{e:1ptScaleInv}) could somehow represent white noise (in the discrete world, just a sequence of uncorrelated random numbers), then the left-hand side is just an estimate of its mean value based on as many samples as there are between 0 and $s$. As $s\to\infty$, this estimate is known to converge to the mean (law of large numbers) in $1/\sqrt{s}$ and, moreover, the PDF of $\sigma_\text{avr}(\vect{x},\mathbf{\Omega};s)$ is a Gaussian with variance $\propto 1/s$ (central limit theorem).
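The white-noise counter-example can be checked in a few lines (synthetic data; the window lengths are arbitrary): the window-average of uncorrelated samples fluctuates with a standard deviation decaying like $1/\sqrt{s}$, i.e., variance $\propto 1/s$:

```python
import numpy as np

rng = np.random.default_rng(2)
noise = rng.standard_normal((4000, 400))  # 4000 independent discrete "transects" of white noise

# Stand-ins for sigma_avr over windows of length s = 100 and s = 400 samples:
std_100 = noise[:, :100].mean(axis=1).std()
std_400 = noise.mean(axis=1).std()

print(std_400 / std_100)  # near sqrt(100/400) = 0.5, i.e., std ~ 1/sqrt(s)
```

Correlated fields with finite $\zeta_q$, in contrast, keep $\sigma_\text{avr}$ essentially $s$-independent over the scaling range, which is the behavior assumed in the remainder of this section.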
This is a reminder that one should not denote white (or other) noises as a function $f(\\vect{x})$ but rather as a distribution that exists only under integrals: $f(\\vect{x})\\mathrm{d}\\vect{x}$ is better, $f(\\vect{x},\\mathrm{d}\\vect{x})$ is best. That remark is key to Davis and Mineev-Weinstein's \\cite{DavisMineev11} generalization of RT to rapidly fluctuating extinction fields, possibly with anti-correlations across scales.\n\n\\subsubsection{Representation of Spatial Fluctuations with Gamma PDFs}\n\nTurbulent density fields are often found to have log-normal distributions and cloud LWC is no exception. However, Gamma distributions can be used to approximate log-normals in terms of skewness, and are much easier to manipulate. Barker et al. \\cite{Barker1996b} showed that Gamma distributions with a broad range of parameters can fit histograms of cloud optical depth reasonably well. Now, letting $\\vect{x} = (\\vec{x},z)^\\text{T}$, cloud optical depth is simply $\\sigma(\\vec{x},z)$ averaged spatially over $z$ from 0 to the fixed cloud (layer) thickness $H$, then re-multiplied by $H$; histograms were accumulated over $\\vec{x}$, in that case over many large cloudy images, where $\\vec{x}$ designates a 30-m LandSat pixel. Barker \\cite{Barker1996a} proceeded to apply this empirical finding to evaluate unresolved variability effects using the standard two-stream approximation for scattering media, as defined further on in (\\ref{e:2st_model}), for a representative case. We have used it previously for uncollided radiation \\cite{Davis2006}, and do the same here.\n\nIn summary, assume that the random number $\\sigma_\\text{avr}(\\vect{x},\\mathbf{\\Omega};s)$ in (\\ref{e:mean_Tdir})--(\\ref{e:1ptScaleInv}) is indeed statistically independent of $s$, at least over a range of values that matter for RT. This range of $s$ should encompass small, medium and large propagation distances between emission, scattering, absorption or escape events.
These events can unfold in the whole medium, or else in portions of it we might wish to think of in isolation, e.g., clouds in the atmosphere. If $\\overline{\\sigma}$ is the spatially-averaged extinction coefficient, over some or all of the transport space $(\\vect{x},\\mathbf{\\Omega})$, then we require the range of statistical independence on $s$ to go from vanishingly small to several times $1\/\\overline{\\sigma}$. Were the medium homogeneous, this last quantity would be the particle's MFP, but it is in general an underestimate of the true MFP \\cite{DavisMarshak04}, as illustrated further on in the specific case of interest here.\n\nFollowing Barker et al. \\cite{Barker1996b}, we now assume that the variability of $\\sigma_\\text{avr}(\\vect{x},\\mathbf{\\Omega};s)$, for fixed $s$, can be approximated with an ensemble of Gamma-distributed values:\n\\begin{equation}\n\\Pr\\{\\sigma,\\mathrm{d}\\sigma\\} = \\frac{(a\/\\overline{\\sigma})^a}{\\Gamma(a)} \\, \\sigma^{a-1}\\exp(-a\\sigma\/\\overline{\\sigma}) \\, \\mathrm{d}\\sigma,\n\\label{e:Gamma_pdf}\n\\end{equation}\nwhere $\\overline{\\sigma}$ is the mean and $a$ is the variability parameter\n\\begin{equation*}\na = \\frac{1}{\\overline{\\sigma^2}\/\\overline{\\sigma}^2-1},\n\\end{equation*}\nan important quantity that varies from 0$^+$ to $\\infty$ since $\\overline{\\sigma^2} \\ge \\overline{\\sigma}^2$ (Schwarz's inequality).
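In passing, (\\ref{e:Gamma_pdf}) is just the Gamma distribution with shape $a$ and scale $\\overline{\\sigma}\/a$, so both parameters can be estimated, or sanity-checked, from the first two sample moments. A minimal numpy illustration (our own, with assumed values $\\overline{\\sigma}=2$, $a=1.5$):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_bar, a = 2.0, 1.5              # assumed mean extinction and exponent
# the Gamma PDF above is a Gamma law with shape a and scale sigma_bar/a
sigma = rng.gamma(shape=a, scale=sigma_bar / a, size=1_000_000)

m1, m2 = sigma.mean(), (sigma**2).mean()
a_hat = 1.0 / (m2 / m1**2 - 1.0)     # invert the moment relation for a
print(m1, a_hat)                     # ~2.0 and ~1.5
```

Recovering $a$ by inverting the moment relation is also a quick diagnostic: a large mismatch between the fitted and moment-based $a$ flags a poor Gamma fit.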
If $\\sigma_\\text{avr}(\\vect{x},\\mathbf{\\Omega};s)$ is $\\Gamma$-distributed for fixed $s$, then so is the product $\\sigma_\\text{avr}(\\vect{x},\\mathbf{\\Omega};s)\\,s = \\tau(\\vect{x},\\vect{x}+s\\mathbf{\\Omega})$ in (\\ref{e:optical_distance}).\n\nEquation (\\ref{e:mean_Tdir}) then reads as the Laplace characteristic function of this Gamma PDF supported by the positive real axis:\n\\begin{equation}\nT_a(s) = \\int\\limits_0^\\infty\\exp(-\\sigma s)\\Pr\\{\\sigma,\\mathrm{d}\\sigma\\} = \\frac{1}{(1+\\overline{\\sigma}s\/a)^a}.\n\\label{e:Gamma_Tdir}\n\\end{equation}\nIn the limit $a\\to\\infty$, variance $\\overline{\\sigma^2}-\\overline{\\sigma}^2$ vanishes as the PDF in (\\ref{e:Gamma_pdf}) becomes degenerate, i.e., $\\delta(\\sigma-\\overline{\\sigma})$. We then retrieve Beer's law: $T_\\infty(s) = \\exp(-\\overline{\\sigma}s)$.\nFor an explicit model-based derivation of (\\ref{e:Gamma_Tdir}) where the exponent $a$ is expressed in terms of the statistical parameters, see Davis and Mineev-Weinstein's \\cite{DavisMineev11} study of scale-invariant media in the ``red-noise'' limit ($\\beta \\to 1$) where the correlations are at their longest.\n\nThe direct transmission law in (\\ref{e:Gamma_Tdir}) with a power-law tail thus generalizes the standard law of exponential decay for the cumulative probability of radiation to reach a distance $s$ (or \\emph{mean} optical distance $\\tau(s)$) from a source without suffering a collision in the material. Figure~\\ref{f:new_Tdir} illustrates both the positively skewed PDFs for $\\sigma$, at fixed $s$, in (\\ref{e:Gamma_pdf}) and the generalized transmission laws in (\\ref{e:Gamma_Tdir}) for selected values of $a$ that we will use further on in numerical experiments. In the middle panel, we can see that direct transmission probability at $\\tau = 1$ increases from $1\/\\mathrm{e} = 0.368\\cdots$ to almost 1\/2 going from $a = \\infty$ (Beer's exponential law) to a power-law with $a = 1.2$.
The rightmost panel shows that there is still appreciable probability of direct transmission when $a < \\infty$ at large optical distances where radiation is all but extinguished in the standard $a = \\infty$ case.\n\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics[width=5.5in]{new_Tdir.pdf}\n\\end{center}\n\\caption[new_Tdir]\n{\\label{f:new_Tdir}\nLeft to right: Gamma PDFs in (\\ref{e:Gamma_pdf}) for the spatial variability of $\\sigma$ at a fixed scale $s$, normalized to its ensemble mean $\\overline{\\sigma}$, for indicated values of $a$; generalized transmission laws $T_a(\\tau)$ in (\\ref{e:Gamma_Tdir}) associated with those PDFs, plotted versus $\\tau = \\overline{\\sigma}s$; same as middle panel but in semi-log axes and a broader range of $\\tau$.}\n\\end{figure}\n\nNow, viewing $s$ as a random variable that is crucial to transport theory, we have\n\\begin{equation}\nT_a(s) = \\Pr\\{\\text{step}>s\\} = \\int\\limits_s^\\infty p_a(s^\\prime)\\mathrm{d} s^\\prime.\n\\label{e:Tdir_asPDF}\n\\end{equation}\nThe PDF for a random step of length $s$ is therefore\n\\begin{equation}\np_a(s) = \\left|\\frac{\\mathrm{d} T_a}{\\mathrm{d} s}\\right|(s) = -\\,\\frac{\\mathrm{d} T_a}{\\mathrm{d} s}(s) = \\frac{\\overline{\\sigma}}{(1+\\overline{\\sigma}s\/a)^{a+1}}.\n\\label{e:Gamma_stepPDF}\n\\end{equation}\n\nIn the case of particle transport, we know that the MFP for the $a = \\infty$ case (uniform optical media) is $\\ell_\\infty = 1\/\\overline{\\sigma}$. What is it for finite $a$ (variable optical media)? One finds\n\\begin{equation*}\n\\ell_a = \\langle s \\rangle_a = -\\int\\limits_0^\\infty s \\, \\mathrm{d} T_a(s)\n = \\int\\limits_0^\\infty s \\, p_a(s)\\mathrm{d} s\n = \\frac{a}{a-1} \\, \\ell_\\infty,\n\\end{equation*}\nwhich is larger than $\\ell_\\infty$ and indeed diverges as $a \\to 1^+$. Generally speaking, the step moment $\\langle s^q \\rangle_a$ is convergent only as long as $-1< q < a$.
This immediately opens interesting questions (addressed in depth elsewhere \\cite{DavisMarshak1997} and briefly discussed further on) about the diffusion limit of this variability model when $a \\le 2$, i.e., when the 2$^\\text{nd}$-order moment of the step distribution is divergent.\n\nFigure~\\ref{f:RWs_g0_inset} demonstrates how RT unfolds in $d=2$ inside boundless conservatively scattering media where $\\overline{\\sigma}$ is unitary. The media are either uniform or stochastic but spatially-correlated in such a way that the ensemble average transmission law is of the power-law form in (\\ref{e:Gamma_Tdir}). We consider media with exponential transmission ($a = \\infty$, uniform case) and power-law transmission laws with $a$ = 10, 5, 2, 1.5, and 1.2. Scattering is assumed to be isotropic and we follow the random walk of a transported particle for 100 scatterings. For illustration, the same scattering angles are used in each of the 6 instances. For the exponential case, the random free paths are generated using the standard rule: $s = -\\log\\xi\/\\overline{\\sigma}$, where $\\xi$ is a uniform random variable on the interval (0,1). For the power-law cases, we use\n\\begin{equation}\ns = a \\times (\\xi^{-1\/a}-1)\/\\overline{\\sigma}.\n\\label{e:Gamma_step}\n\\end{equation}\nAs for the random scattering angles, we use the same sequence of 100 values of $2\\pi\\times\\xi$.\n\nIn the inset of Fig.~\\ref{f:RWs_g0_inset}, we see that all the traces start at the same point in the same direction. Physically, we can imagine an electron bound to a crystal surface hopping between holes associated with random defects \\cite{LuedtkeLandman1999}. Certain heterogeneous predator-prey and scavenging problems can also lead to 2D transport processes with a mix of small and large jumps \\cite{BuldyrevEtAl2001}. We can immediately appreciate how the MFP increases as $a \\to 1$.
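The sampling rule in (\\ref{e:Gamma_step}) is simply the inversion of $\\xi = T_a(s)$ from (\\ref{e:Gamma_Tdir}). A short numerical check (our own illustration, with an assumed unit $\\overline{\\sigma}$) confirms both the survival law and the MFP formula $\\ell_a = a\\,\\ell_\\infty\/(a-1)$:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_bar = 1.0                         # assumed unit mean extinction
xi = 1.0 - rng.random(2_000_000)        # uniform on (0, 1]

results = {}
for a in (10.0, 5.0, 2.0):              # finite-mean cases (a > 1)
    # inverse-CDF sampling of free paths: invert xi = T_a(s)
    s = a * (xi**(-1.0 / a) - 1.0) / sigma_bar
    results[a] = (s.mean(), (s > 1.0).mean())
    print(a,
          s.mean(), a / (a - 1.0),          # sample MFP vs a/(a-1)
          (s > 1.0).mean(), (1.0 + 1.0 / a)**(-a))  # survival vs T_a(1)
```

Note that for $a = 2$ the sample mean still converges to $\ell_2 = 2\ell_\infty$, but slowly, since the step variance is log-divergent there.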
At the same time, the increasing frequency of large jumps enables the cumulative traces to end further and further from their common origin as $a$ decreases from $\\infty$ to nearly unity.\n\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics[width=5.5in]{RWs_g0_inset.pdf}\n\\end{center}\n\\caption[RWs_g0_inset]\n{\\label{f:RWs_g0_inset}\nSix traces of random walks in $d=2$ dimensions with 100 isotropic scatterings and step sequences that follow power-law cumulative probabilities (\\ref{e:Gamma_Tdir}) and PDFs (\\ref{e:Gamma_stepPDF}). Both scatterings and steps use the same sequences of uniform random variables. Values of $a$ are $\\infty$ (exponential law), 10, 5, 2, 1.5 and 1.2. The last two are asymptotically self-similar L\\'evy-stable flights (steps with divergent variance), the first three are asymptotically self-similar Gaussian walks (steps with finite variance), and $a=2$ is a transitional case (steps with log-divergent variance). The inset is a $\\times 3$ zoom into the common origin of the 6 traces. More discussion in main text.}\n\\end{figure}\n\nOne can read Fig.~\\ref{f:RWs_g0_inset} with atmospheric optics in mind, albeit with each isotropic scattering representing $\\approx$$(1-g)^{-1}$ forward scatterings \\cite{Davis2006}. Recalling that $g$ is in the range 0.75 to 0.85 in various types of clouds and aerosols, this translates to 4 to 7 scatterings before directional memory is lost. For all values of $a$ there is a wide distribution of lengths of jumps between scatterings. However, for large values of $a$, the distance covered by a cluster of small steps can equally well be covered with one larger jump. This behavior is characteristic of solar radiation trapped in a single opaque cloud. In contrast, for the smallest values of $a$, it is increasingly unlikely that a cluster of smaller steps can rival in scale a single large jump.
This behavior is typical of solar radiation that is alternately trapped in clouds and bouncing between them. In other words, we are looking at a 2D version of a typical trace of a multiply scattered beam of sunlight in a 3D field of broken clouds.\n\n\\subsection{$d$-Dimensional Generalized RTE in Integral Form}\n\nOur goal is now to formulate the transport equations that describe RT in media when, as in Fig.~\\ref{f:RWs_g0_inset}, we transition from an exponential direct transmission law to a power-law counterpart.\n\nOur starting point is the integral form of the $d$-dimensional plane-parallel RTE in (\\ref{e:nD_integral_RTE}); alternatively, (\\ref{e:nD_ancillary_IRTE}) paired with (\\ref{e:simpleS_sweep}). These formulations are sufficiently general to describe RT and other linear transport processes. It gets specific to the standard form of RT theory only when we look at the makeup of the kernel $\\mathcal{K}$ in (\\ref{e:K_integral_RTE}), the source terms $Q_I$ in (\\ref{e:Q_integral_RTE}) and $Q_S$ in (\\ref{e:Q_ancillary_IRTE}).\n\nTherein, we find exponential functions that describe the propagation part of the transport. Specifically, we identify\n\\begin{equation}\nT_\\infty(\\tau) = \\exp(-\\tau)\n\\label{e:T_Beer_cum}\n\\end{equation}\nin $Q_I$, assuming $\\mu_0 = 1$ for the present discussion. The subscript $\\infty$ notation is consistent with our usage in (\\ref{e:Gamma_Tdir}) with $\\overline{\\sigma} = 1$, which is implicit in a non-dimensionalized 1D RT. This is Beer's classic law of direct transmission, the hallmark of homogeneous optical media where $\\tau(s) = \\sigma s$ with, in the present setting, $\\sigma \\equiv \\overline{\\sigma}$ (i.e., a degenerate probability distribution for $\\sigma$).\n\nIn $Q_I$, $T_\\infty(\\tau)$ is therefore at work as the cumulative probability in (\\ref{e:Tdir_asPDF}).
In the kernel $\\mathcal{K}$, as well as in $Q_S$, we again find exponential functions, but here we interpret them as a PDF:\n\\begin{equation}\n\\left|\\frac{\\mathrm{d} T_\\infty}{\\mathrm{d} \\tau}\\right|(|\\tau-\\tau^\\prime|) = T_\\infty(|\\tau-\\tau^\\prime|) = \\exp(-|\\tau-\\tau^\\prime|),\n\\label{e:T_Beer_pdf}\n\\end{equation}\nagain assuming $\\mu = 1$ for the present discussion. The fact that (\\ref{e:T_Beer_cum}) and (\\ref{e:T_Beer_pdf}) are identical functions is of course a defining property of the exponential.\n\nWhat makes us assign the ``cumulative probability'' interpretation of $\\exp(-\\tau)$ to its use in $Q_I$, and the ``PDF'' interpretation of $\\exp(-|\\tau-\\tau^\\prime|)$ to its use in $Q_S$ and $\\mathcal{K}$? The clue is the foundational transport physics. In $Q_I = I_0$, the uncollided radiation is simply detected at optical distance $\\tau$. It could have gone deeper into the medium before suffering a scattering, an absorption, a reflection, or escaping through the lower boundary. In $\\mathcal{K}$ however, it is used to obtain $I_n$ from $I_{n-1}$, as previously demonstrated for $n = 1$, cf. (\\ref{e:1_scatter_sweep}). In this case, the radiation must be stopped between $\\tau$ and $\\tau+\\delta\\tau$. It is a probability density that is invariably associated with the differential $\\mathrm{d}\\tau$.
Similarly in $Q_S$, the transport process is to stop the propagation in a given layer, specifically by a scattering event.\n\nIn order to account for unresolved random-but-correlated spatial variability of extinction $\\sigma(\\vect{x})$, we propose for the integral forms of the $d$-dimensional plane-parallel RTE the following \\emph{generalization}: use\n\\begin{equation}\n\\mathcal{K}(\\tau,\\mathbf{\\Omega};\\tau^\\prime,\\mathbf{\\Omega}^\\prime) =\n\\omega p_g(\\mathbf{\\Omega}\\cdot\\mathbf{\\Omega}^\\prime)\n\\Theta\\left(\\frac{\\tau-\\tau^\\prime}{\\mu}\\right)\n\\frac{|\\Dot{T}_a(|\\tau-\\tau^\\prime|\/|\\mu|)|}{|\\mu|},\n\\label{e:K_integral_gRTE}\n\\end{equation}\nrather than (\\ref{e:K_integral_RTE}), with\n\\begin{equation}\nQ_S(\\tau,\\mathbf{\\Omega}) = \\omega p_g(\\mathbf{\\Omega}\\cdot\\mathbf{\\Omega}_0) |\\Dot{T}_a(\\tau\/\\mu_0)|\n\\label{e:Q_ancillary_gIRTE}\n\\end{equation}\nfor (\\ref{e:nD_ancillary_IRTE}), and\n\\begin{equation}\nQ_I(\\tau,\\mathbf{\\Omega}) = T_a(\\tau\/\\mu_0) \\delta(\\mathbf{\\Omega}-\\mathbf{\\Omega}_0)\n\\label{e:Q_integral_gRTE}\n\\end{equation}\nfor (\\ref{e:nD_integral_RTE}), where $a$ can have any strictly positive value, including $\\infty$. We use the overdot notation in (\\ref{e:K_integral_gRTE})--(\\ref{e:Q_ancillary_gIRTE}) to denote the derivative of a function of a single variable, which is the case here when $\\overline{\\sigma}$ is combined with $s$ to form $\\tau$ in (\\ref{e:Gamma_Tdir}) and (\\ref{e:Gamma_stepPDF}), and $a$ is viewed as a fixed parameter.\n\n\\subsection{Are There Integro-Differential Counterparts of Generalized Integral RTEs?}\n\\label{s:id_gRTEs}\n\nIn short, the $d$-dimensional stochastic transport model we propose is simply to replace $T_\\infty(\\tau)$ in (\\ref{e:T_Beer_cum}) with $T_a(\\tau)$ for finite $a$, which we equate with $T_a(s)$ in (\\ref{e:Gamma_Tdir}) when $\\overline{\\sigma}=1$ (thus $s=\\tau$).
This logically requires the use of $\\dot{T}_a(\\tau)$ obtained similarly from $\\mathrm{d} T_a(s)\/\\mathrm{d} s$ in (\\ref{e:Gamma_stepPDF}). We thus have a well-defined transport problem using an integral formulation, to be solved analytically or numerically. Now, is there an integro-differential counterpart?\n\nWe do not yet have an answer to this question. One path forward to address it is to follow the steps of Larsen and Vasques \\cite{LarsenVasques11} who start with the classic RT\/linear Boltzmann equation in integro-differential form and transform it into a ``non-classical'' one by introducing a special kind of time-dependence that is essentially reset to epoch 0 at every scattering. Non-exponential free path distributions are thus accommodated, and a modified diffusion limit is derived in cases where $\\langle s^2 \\rangle$ is greater than $2\\langle s \\rangle^2$, its value for the exponential distribution, but not too much larger. Taine and co-authors \\cite{TaineEtAl2010} have also proposed a ``generalized'' RTE for large-scale transport through random (porous) media; this model uses an empirical counterpart of our parametric non-exponential transmission law in some parts of the computation, but retains the standard integro-differential form for the final estimation of radiance using the upwind sweep operator in (\\ref{e:S_sweep}).\n\nAnother path forward is to essentially define new differential (or more likely pseudo-differential) operators as those from which the new integral operator in (\\ref{e:K_integral_gRTE}) follows. This amounts to broadening the definition of the Green function,\n$G(\\tau,\\mathbf{\\Omega};\\tau_\\star,\\mathbf{\\Omega}_\\star) =\nT_a(|\\tau-\\tau_\\star|\/|\\mu|)\\delta(\\mathbf{\\Omega}-\\mathbf{\\Omega}_\\star)$,\nfor 1D RT in the absence of scattering, previously with $a = \\infty$, now with arbitrary values, and assigning a role to $\\partial G\/\\partial\\tau$.
This more formal approach seems to us less promising in terms of physical insights---a judgment that may be altered if a rigorous connection to the concept of fractional derivatives \\cite{MillerRoss1993} can be established. These pseudo-differential operators have indeed found many fruitful applications in statistical physics \\cite{MetzlerKlafter2000,West2003}.\n\nAlthough out of scope for the present study, there is an implicit time-dependence aspect to generalized (as well as standard) RT even if the radiance fields are steady in time. The best way to see this is to return to the inset in Fig.~\\ref{f:RWs_g0_inset}. The highlighted region (between gray brackets) shows in essence how standard and generalized 2D RT unfolds for solar illumination of a medium of optical thickness $\\approx$11 at an angle of $\\approx$30$^\\circ$ from zenith. The smaller the value of $a$, the shorter the path of the light inside the medium. The number of scatterings decreases from 25 ($a=\\infty$) to 10 ($a=1.2$). The flight time for sunlight to cross the cloudy portion of the atmosphere---at most from near-ground level to the troposphere (10 to 15~km altitude, depending on latitude)---cannot be measured directly. However, it can be estimated statistically via oxygen spectroscopy \\cite{Pfeil_etal98}. Pfeilsticker, Scholl et al. \\cite{Pfeil99,Scholl_etal06}, as well as Min, Harrison et al. \\cite{MinHarrison99,Min_etal01,Min_etal04}, have found that the more variable the atmosphere at a given mean optical thickness, the shorter the top-to-ground paths on average. 
This finding offers a degree of validation of generalized RT for applications to the Earth's cloudy atmosphere.\n\nIn the remainder of this study, we derive analytical and numerical solutions of the generalized RTE in (\\ref{e:nD_integral_RTE}) with (\\ref{e:Q_integral_gRTE})--(\\ref{e:K_integral_gRTE}), and then apply them to specific topics where standard and generalized RT differ significantly.\n\n\n\\section{Deterministic Numerical Solution in $d = 1$: The Markov Chain Approach}\n\\label{s:2stream_MarCh}\n\nIn \\S\\ref{s:dDim_sRTE}, we stated that once we adopted the H--G PF in Table~\\ref{t:Definitions} the whole 1D RT problem is determined entirely (in the absence of surface reflection) by three numbers, $\\{\\omega,g;\\tau^\\star\\}$ for a given $d$ = 1, 2, or 3, with the possible addition of $\\mu_0$ when $d > 1$. To this small parameter set, we now add the exponent $a$ of the power-law direct transmission function that distinguishes standard RT (exponential limit, $a\\to\\infty$) from its generalized counterpart ($0 < a < \\infty$). The complete parameter set is therefore $\\{\\omega,g,a;\\tau^\\star(;\\mu_0)\\}$.\n\n\\subsection{Exact Solution of the Standard RTE in $d = 1$}\n\nThe ``$d=1$'' (\\emph{literal} 1D) version of 1D RT has in fact a vast literature of its own, since it is formally identical to the two-stream RT model \\cite{Schuster1905,KubelkaMunk1931}, a classic approximation for (standard) RT in $d=3$ space. This simplified RT model is still by far the most popular way to compute radiation budgets in climate and atmospheric dynamics models \\cite{MeadorWeaver80}. We note that there is no longer an angular integral to compute in the $d$-dimensional RTE in (\\ref{e:1D_RTE}). It is understood to be replaced everywhere by a sum over two directions: ``up'' and ``down.'' Correspondingly, scattering can only be through an angle of $0$ or $\\pi$ rad: $\\mu_\\text{s} = \\pm 1$, respectively.
The $d=1$ RT problem at hand thus takes the form of a pair of coupled ODEs:\n\\begin{equation}\n\\left( \\pm\\frac{\\mathrm{d}\\;}{\\mathrm{d}\\tau} + 1 \\right) I_{\\pm}(\\tau) =\n\\omega \\left[p_{+}I_{\\pm}(\\tau)+p_{-}I_{\\mp}(\\tau)\\right] + q_{\\pm}(\\tau)\n\\label{e:2st_model}\n\\end{equation}\nwith $p_{\\pm} = (1 \\pm g)\/2$ (cf. Table~\\ref{t:Definitions}) and $q_{\\pm}(\\tau) = \\omega p_{\\pm} \\exp(-\\tau)$. This system of coupled ODEs is subject to BCs $I_{+}(0) = I_{-}(\\tau^\\star) = 0$ when $\\rho = 0$ (otherwise $I_{-}(\\tau^\\star) = \\rho I_{+}(\\tau^\\star)$).\n\nLet us use\n\\begin{equation}\nI_{\\pm}(\\tau) = \\frac{J(\\tau) \\pm F(\\tau)}{2}\n\\label{e:1D_diffusion}\n\\end{equation}\nto recast the diffuse radiance field in the above 2-stream model, where\n\\begin{eqnarray}\n\\label{e:J_2st_def}\nJ(\\tau) &=& I_+(\\tau) + I_-(\\tau), \\\\\n\\label{e:F_2st_def}\nF(\\tau) &=& I_+(\\tau) - I_-(\\tau),\n\\end{eqnarray}\nare respectively the \\emph{scalar} and \\emph{vector} fluxes.\n\nBy summing the two ODEs in (\\ref{e:2st_model}), we find an expression of radiant energy conservation:\n\\begin{equation}\n\\mathrm{d} F\/\\mathrm{d}\\tau = -(1-\\omega)J + \\omega\\exp(-\\tau).\n\\label{e:2st_conservation}\n\\end{equation}\nDifferencing (\\ref{e:2st_model}) yields\n\\begin{equation}\nF(\\tau) = (-\\mathrm{d} J\/\\mathrm{d}\\tau + \\omega g \\mathrm{e}^{-\\tau})\/(1-\\omega g).\n\\label{e:2st_model_mFick}\n\\end{equation}\nThe 1st term on the right-hand side (and the only one that survives after the 2nd one has decayed at large $\\tau$) is a non-dimensional version of Fick's law, a reminder that diffusion theory is exact in $d=1$. 
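As a sanity check on this algebra, the coupled ODEs in (\\ref{e:2st_model}) can be integrated directly by linear shooting with RK4; below is a minimal sketch (our own, assuming $\\omega = 1$, $g = 0$, $\\tau^\\star = 10$ and a black surface, $\\rho = 0$):

```python
import numpy as np

# Linear-shooting RK4 solve of the two-stream ODEs, assumed parameters:
omega, g, tau_star = 1.0, 0.0, 10.0
pp, pm = (1 + g) / 2, (1 - g) / 2

def rhs(tau, y, with_source):
    Ip, Im = y
    q = omega * np.exp(-tau) if with_source else 0.0
    dIp = -Ip + omega * (pp * Ip + pm * Im) + pp * q   # (+d/dtau + 1) I+
    dIm = Im - omega * (pm * Ip + pp * Im) - pm * q    # (-d/dtau + 1) I-
    return np.array([dIp, dIm])

def rk4(y0, with_source, n=2000):
    h, y, tau = tau_star / n, np.array(y0, float), 0.0
    for _ in range(n):
        k1 = rhs(tau, y, with_source)
        k2 = rhs(tau + h / 2, y + h / 2 * k1, with_source)
        k3 = rhs(tau + h / 2, y + h / 2 * k2, with_source)
        k4 = rhs(tau + h, y + h * k3, with_source)
        y, tau = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4), tau + h
    return y

yp = rk4([0.0, 0.0], True)    # particular solution: I+(0) = I-(0) = 0
yh = rk4([0.0, 1.0], False)   # homogeneous solution: unit I-(0), no source
c = -yp[1] / yh[1]            # enforce the BC I-(tau_star) = 0
R, T_dif, T_dir = c, yp[0] + c * yh[0], np.exp(-tau_star)
print(R, T_dif + T_dir)       # ~0.8333 and ~0.1667
```

For this conservative case the computed $R = I_{-}(0)$ and $T_\text{dif} + T_\text{dir}$ sum to unity to within the integration error, as required by (\\ref{e:2st_conservation}) when $\\omega = 1$.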
Using (\\ref{e:2st_model_mFick}) in (\\ref{e:2st_conservation})\nleads to a 1D screened Poisson equation for $J(\\tau)$:\n\\begin{equation*}\n\\left[ -\\frac{\\mathrm{d}^2\\;}{\\mathrm{d}\\tau^2} + (1-\\omega)(1-\\omega g) \\right] J(\\tau) =\n\\omega \\left[ 1+(1-\\omega)g \\right] \\exp(-\\tau),\n\\end{equation*}\nsubject to BCs, $J(0)+F(0) = J(\\tau^\\star)-F(\\tau^\\star) = 0$ when $\\rho = 0$ (black surface). Factoring in (\\ref{e:2st_model_mFick}), these are always of the 3rd (Robin) type.\n\nWhen $\\omega = 1$ (no absorption), the solution of the above pair of ODEs and BCs is\n\\begin{eqnarray}\n\\label{e:J_2st_NoAbs}\nJ(\\tau) &=& 1+R(\\tau^\\star)\\times\\left(1-\\frac{\\tau}{\\tau^\\star\/2}\\right)-\\exp(-\\tau), \\\\\n\\label{e:F_2st_NoAbs}\nF(\\tau) &=& T(\\tau^\\star)-\\exp(-\\tau).\n\\end{eqnarray}\nWe have used here boundary-escaping radiances\n\\begin{eqnarray}\n\\label{e:R_2st_NoAbs}\nR(\\tau^\\star) &=& I_{-}(0) = 1-T(\\tau^\\star), \\\\\n\\label{e:T_2st_NoAbs}\nT(\\tau^\\star) &=& I_{+}(\\tau^\\star)+\\exp(-\\tau^\\star) = \\frac{1}{1+(1-g)\\tau^\\star\/2},\n\\end{eqnarray}\nin the above representation of the solution. When $\\omega < 1$, somewhat more complex expressions result in the form of 2nd-order rational functions of $\\exp(-k\\tau)$, where $k = \\sqrt{(1-\\omega)(1-\\omega g)}$, with polynomial coefficients dependent on $\\omega$, $g$ and $\\exp(-\\tau)$. All these classic results will be used momentarily to verify the new Markov chain numerical scheme.\n\n\\subsection{Markov Chain (MarCh) Scheme}\n\nWe now adapt our ``Markov Chain'' (MarCh) formulation of standard RT in $d=3$ dimensions \\cite{Xu_EtAl_MarCh11,Xu_EtAl_MarChSurf11,Xu_EtAl_MarChLin12} to the present $d=1$ setting for generalized RT. MarCh is an under-exploited alternative to the usual methods of solving the plane-parallel RT problem first proposed by Esposito and House \\cite{Esposito_House78,Esposito79}.
It differs strongly from many of the usual approaches: discrete ordinates, spherical harmonics, adding\/doubling, matrix-operator and kindred techniques. It has more in common with source iteration (successive orders of scattering, or Gauss-Seidel iteration), and even with Monte Carlo (MC). In short, we can say that MarCh is an efficient deterministic solution of a discretized version of the integral RTE solved by MC. We illustrate in $d=1$ for simplicity, but also because, for previously articulated reasons, there may be an acute need for generalized RT in the 2-stream approximation used in climate and, more generally, atmospheric dynamical modeling.\n\nThe generalized ancillary integral RTE is expressed in generic form in (\\ref{e:nD_ancillary_IRTE}) with the kernel in (\\ref{e:K_integral_gRTE}) and the source term in (\\ref{e:Q_ancillary_gIRTE}). In $d = 1$, it yields a system of two coupled integral equations for the two possible directions in $S_\\pm(\\tau)$:\n\\begin{eqnarray}\nS_\\pm(\\tau) &=& \\omega \\left[\np_\\pm \\int\\limits_0^\\tau\nS_{+}(\\tau^\\prime) \\, \\left|\\dot{T}_a(\\tau-\\tau^\\prime)\\right| \\, \\mathrm{d}\\tau^\\prime +\np_\\mp \\int\\limits_\\tau^{\\tau^\\star}\nS_{-}(\\tau^\\prime) \\, \\left|\\dot{T}_a(\\tau^\\prime-\\tau)\\right| \\, \\mathrm{d}\\tau^\\prime\n\\right] \\nonumber \\\\\n&+& Q_{S\\pm}(\\tau),\n\\label{e:AncIRTE_1D}\n\\end{eqnarray}\nwhere\n\\begin{equation}\nQ_{S\\pm}(\\tau) = \\omega p_\\pm \\left|\\dot{T}_a(\\tau)\\right|.\n\\label{e:AncIRTE_1D_Q_Order1}\n\\end{equation}\nWe recognize here the operator form of the integral equation, $S = K S + Q_S$, which can be solved by Neumann series expansion, similarly to (\\ref{e:IRTE_soln}):\n\\begin{equation}\nS = Q_S + K Q_S + K^2 Q_S + \\cdots = (E-K)^{-1}Q_S.\n\\label{e:Neumann_S}\n\\end{equation}\n\nAs detailed in the Appendix, the pair of simultaneous integral RTEs in (\\ref{e:AncIRTE_1D}), given (\\ref{e:AncIRTE_1D_Q_Order1}),\nare finely discretized in $\\tau$ (200 layers
with $\\Delta\\tau = 0.05$, hence $\\tau^\\star = 10$), with careful attention to accuracy in the evaluation of the integrals using finite summations. The resulting matrix problem is large but tractable. It can be solved using either a truncated series expansion of matrix multiplies or the full matrix inversion depending on problem parameters (primarily, $\\tau^\\star$) and the desired accuracy.\n\nAs usual when working with the ancillary integral RTE, we finish computing radiance \\emph{detected} inside the medium using the formal solution, as in (\\ref{e:simpleS_sweep}) but in $d = 1$ format, and with the appropriate generalized transmission law:\n\\begin{equation}\n\\left\\{ \\begin{array}{l}\nI_{+}(\\tau) = \\int\\limits_0^\\tau\nS_{+}(\\tau^\\prime) \\, T_a(\\tau-\\tau^\\prime) \\, \\mathrm{d}\\tau^\\prime\n+ T_a(\\tau),\n\\\\\nI_{-}(\\tau) = \\int\\limits_\\tau^{\\tau^\\star}\nS_{-}(\\tau^\\prime) \\, T_a(\\tau^\\prime-\\tau) \\, \\mathrm{d}\\tau^\\prime.\n\\end{array} \\right.\n\\label{e:FormalSoln_1D_escape}\n\\end{equation}\nIndeed, ``detection'' implies that the radiation reaches a level, but could have gone further. A special case of detection is radiation \\emph{escaping} the medium at a boundary: $I_{+}(\\tau^\\star)$ or $I_{-}(0)$, which can also be obtained from known values of $S_\\pm$ using one or another of the expressions in (\\ref{e:FormalSoln_1D_escape}).
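Concretely, the whole pipeline fits in a few lines of numpy: cell-integrated $|\\dot{T}_a|$ weights for (\\ref{e:AncIRTE_1D}), a direct solve of the fixed point (\\ref{e:Neumann_S}), and the detection sweep (\\ref{e:FormalSoln_1D_escape}). The sketch below is our own illustrative discretization, not the MarCh production code:

```python
import numpy as np

def T(tau, a):
    """Direct transmission law: exponential limit or power law."""
    return np.exp(-tau) if np.isinf(a) else (1.0 + tau / a)**(-a)

def solve(omega, g, a, tau_star=10.0, n=400):
    d = tau_star / n
    tc = (np.arange(n) + 0.5) * d            # cell centers
    pp, pm = (1 + g) / 2, (1 - g) / 2

    # cell-integrated |dT_a/dtau| weights from source cell j to collocation
    # point tau_i, split into "down" (A) and "up" (B) propagation kernels
    diff = tc[:, None] - tc[None, :]
    w = T(np.abs(diff) - d / 2, a) - T(np.abs(diff) + d / 2, a)
    A = np.where(diff > 0, w, 0.0)
    B = np.where(diff < 0, w, 0.0)
    self_w = T(0.0, a) - T(d / 2, a)         # within-cell half contributions
    A += np.eye(n) * self_w
    B += np.eye(n) * self_w

    q = (T(tc - d / 2, a) - T(tc + d / 2, a)) / d  # direct-beam collision density
    K = omega * np.block([[pp * A, pm * B], [pm * A, pp * B]])
    Q = omega * np.concatenate([pp * q, pm * q])
    S = np.linalg.solve(np.eye(2 * n) - K, Q)      # (E - K)^-1 Q_S
    Sp, Sm = S[:n], S[n:]

    R = d * np.sum(Sm * T(tc, a))                  # escape through tau = 0
    T_dif = d * np.sum(Sp * T(tau_star - tc, a))   # diffuse escape at tau*
    return R, T_dif, T(tau_star, a)

R, T_dif, T_dir = solve(1.0, 0.0, np.inf)
print(R, T_dif + T_dir)   # ~0.833 and ~0.167 in the exponential limit
```

In the exponential limit this reproduces the two-stream closed form, while finite $a$ boosts total transmission, as expected from the power-law tail of $T_a$.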
At any rate, it is the ``cumulative probability'' version of the transmission law that is needed here.\nIn short, after implementing (\\ref{e:Neumann_S}), the final step of the numerical computation is to derive radiances $I_\\pm(\\tau)$ wherever required from the known source function $S_\\pm(\\tau)$ using a discretized version of (\\ref{e:FormalSoln_1D_escape}).\n\nIn the Appendix, the discrete-space version of the above problem is derived directly from an analogy with random walk theory using Markov chain formalism: present state, state transition probabilities, probability of stagnation, of absorption (including escape), starting position\/direction of walkers, and so on. Although intimately related to all these concepts, which are used extensively in MC modeling, the new model is deterministic since it uses standard rather than random quadrature rules. We naturally call it the Markov Chain (MarCh) approach to RT. In a recent series of papers \\cite{Xu_EtAl_MarCh11,Xu_EtAl_MarChSurf11,Xu_EtAl_MarChLin12}, we have brought it to bear on aerosol remote sensing on Earth (in $d=3$), so far only with $a=\\infty$, but including polarization.\n\n\\subsection{Illustration with Internal Fields}\n\nTo demonstrate our MarCh code for generalized transport in $d=1$, we focus on uniform or stochastic media with $\\tau^\\star = 10$ irradiated by a unitary source at its upper ($\\tau = 0$) boundary, here, to the left of each panel in Fig.~\\ref{f:MarCh}. We first assume conservative ($\\omega = 1$) and isotropic ($g = 0$) scattering. The outcome is plotted in the top two panels in the $d=1$ equivalent of a decomposition in Fourier modes (in $d=2$) or spherical harmonic modes (in $d=3$). Specifically, we have scalar flux $J = I_{+}+I_{-}$ in the left column and (negative) vector flux $-F = I_{-}-I_{+}$ in the right column. In the middle row, $g$ is raised from 0 to 0.8. In the bottom row, $\\omega$ is then lowered from unity to 0.98.
In all of these scenarios, $a$ was varied, the selected values being 1\/2, 1, 3\/2, 2, 10, and $\\infty$; the latter choice is designated as ``Beer's law'' and the others as ``power laws'' in the Figure.\n\nWhen $\\omega = 1$, radiant energy conservation requires that total net flux $F(\\tau)+T_a(\\tau)$ be constant across the medium, and equal to $T(1,g,a;\\tau^\\star)$. This was verified numerically for all values of $a$ and both values of $g$. In the right-hand panels, we see that indeed $-F(\\tau) = -T(1,g,a;\\tau^\\star)+T_a(\\tau)$; see (\\ref{e:F_2st_NoAbs}) and (\\ref{e:T_2st_NoAbs}) for the case of $a = \\infty$ and $\\omega = 1$.\n\nFor reference, Table~\\ref{t:MarCh_results} gives $R$, $T_\\text{dif}$, $T_a(\\tau^\\star)$, and absorptance $A = 1-(R+T) = 1-R-T_\\text{dif}-T_a(\\tau^\\star)$ for our three choices of $\\{\\omega,g\\}$ and all values of $a$. All the entries in Table~\\ref{t:MarCh_results} were verified to all the expressed digits using a custom MC code for generalized transport in $d = 1$. Apart from the fact that there is no oblique illumination, nor is there a distinction between collimated and diffuse illumination, the key difference between a MC for $d=1$ and $d>1$ is how to select a scattering angle.
In $d>1$, it is a continuous random variable but in $d=1$ the forward versus backward scattering decision is made based on a Bernoulli trial.\n\nAs another element of verification for the MarCh code, we recognize in the upper and middle left-hand panels of Fig.~\\ref{f:MarCh} the characteristic result for $J(\\tau)$ in the case of standard transport theory ($a\\to\\infty$) in the absence of absorption ($\\omega = 1$), namely, the linear decrease modulated by an exponential expressed in (\\ref{e:J_2st_NoAbs}).\n\nIn standard transport theory in $d = 1$, or using the 2-stream\/diffusion approximation for higher dimensions, the linear decrease of $J(\\tau)$ when $\\omega = 1$ follows directly from the constancy of $F(\\tau)$, assuming they include both diffuse and uncollided radiation; see (\\ref{e:2st_model_mFick}) and (\\ref{e:F_2st_NoAbs}), but without the exponential terms. An interesting finding here is that, although $F(\\tau)+T_a(\\tau)$ is constant for all values of $a$, the linear decrease of $J(\\tau)+T_a(\\tau)$ does not generalize from $a = \\infty$ to $a < \\infty$. We conclude that generalized RT conserves energy, as it should, both globally ($A+R+T = 1$) and locally, as expressed in\n\\begin{equation}\n\\mathrm{d} F\/\\mathrm{d}\\tau = -(1-\\omega)J(\\tau) + \\omega |\\dot{T}_a(\\tau)|,\n\\label{e:Gen_2st_model_EnCons}\n\\end{equation}\nwhich follows directly from (\\ref{e:2st_model})--(\\ref{e:F_2st_def}) when $a = \\infty$. It is not obvious how to derive (\\ref{e:Gen_2st_model_EnCons}) for the generalized transport model described by one or another of its integral RTEs, even in $d=1$.
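A $d=1$ Monte Carlo of the kind used for these verifications is straightforward; the sketch below is our own illustrative re-implementation (not the original verification code), combining power-law free paths from (\\ref{e:Gamma_step}), the Bernoulli backscatter trial with $p_{-} = (1-g)\/2$, and survival probability $\\omega$ at each collision:

```python
import numpy as np

def mc_slab(omega, g, a, tau_star=10.0, n_walkers=50_000, seed=2):
    """Tally reflectance R and total transmittance T for a d=1 slab."""
    rng = np.random.default_rng(seed)
    p_back = (1.0 - g) / 2.0
    n_R = n_T = 0
    for _ in range(n_walkers):
        tau, mu = 0.0, +1
        while True:
            xi = 1.0 - rng.random()       # uniform on (0, 1]
            # free path: exponential (a = inf) or power-law inverse CDF
            s = -np.log(xi) if np.isinf(a) else a * (xi**(-1.0 / a) - 1.0)
            tau += mu * s
            if tau < 0.0:
                n_R += 1
                break
            if tau > tau_star:
                n_T += 1
                break
            if rng.random() > omega:      # absorbed at this collision
                break
            if rng.random() < p_back:     # Bernoulli backscatter trial
                mu = -mu
    return n_R / n_walkers, n_T / n_walkers

R_inf, T_inf = mc_slab(1.0, 0.0, np.inf)
R_1, T_1 = mc_slab(1.0, 0.0, 1.0)
print(R_inf, T_inf)   # ~0.833, ~0.167
print(R_1, T_1)       # ~0.632, ~0.368 (cf. Table entries for a = 1)
```

Note that the same free-path law is used for the first flight from the boundary and for all inter-collision flights, consistent with $Q_I$ and $\mathcal{K}$ sharing the same $T_a$.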
In contrast, Fick's law in (\\ref{e:2st_model_mFick}), which relates $F$ to $\\mathrm{d} J\/\\mathrm{d}\\tau$, can only be exact when $a = \\infty$ and, moreover, when $d = 1$.\n\nAnother interesting numerical finding is that when $\\omega = 1$ and $a = \\infty$, $T$ depends only on the scaled optical thickness $(1-g)\\tau^\\star$, as is readily seen in (\\ref{e:T_2st_NoAbs}). This means that, by similarity, an isotropically scattering medium ($g = 0$) with $\\tau^\\star = 10$ has the same total transmittance $T$ as a forward scattering medium with (say) $g = 0.8$ and $\\tau^\\star = 50$. Formally, $T(1,g,\\infty;\\tau^\\star)$ is only a function of $(1-g)\\tau^\\star$. More generally, allowing $\\omega \\le 1$, we have\n\\begin{equation}\nF(\\omega,g,\\infty;\\tau^\\star) \\equiv\nf_F\\left(\\frac{1-\\omega}{1-\\omega g},(1-\\omega g)\\tau^\\star\\right)\n\\end{equation}\nfor $F = A,R,T$, where the first argument on the right-hand side is known as the similarity parameter \\cite{King1987}. This is not the case when $a < \\infty$.\n\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics[width=4in]{MarCh.pdf}\n\\end{center}\n\\caption[MarCh]\n{\\label{f:MarCh}\nInternal radiance fields $J = I_{+}+I_{-}$ (left) and $-F = -I_{+}+I_{-}$ (right) computed using the new MarCh scheme for $d = 1$ described in the Appendix. $J(\\tau)$ in (\\ref{e:J_2st_def}) and $-F(\\tau)$ in (\\ref{e:F_2st_def}) are plotted as a function of optical depth $\\tau$ into a medium with $\\tau^\\star = 10$, the unitary source being at $\\tau = 0$, for selected values of $a$. The standard exponential law obtained when $a\\to\\infty$ is designated as ``Beer's law.'' In the top two rows, no absorption is included but the phase function is varied: $p_{+} = 1\/2$ ($g = 0$) on top; $p_{+} = 0.9$ ($g = 0.8$) in the middle. 
In the bottom row, again $g = 0.8$ but $\\omega$ is reduced from unity to 0.98.}\n\\end{figure}\n\n\\begin{table}[ht]\n\\caption{Boundary fluxes $R$, $T_\\text{dif}$, $T_\\text{dir} = T_a(\\tau^\\star)$, and absorptance $A$ for the $d = 1$ stochastic medium with $\\tau^\\star = 10$ used in Fig.~\\ref{f:MarCh}.}\n\\label{t:MarCh_results}\n\\begin{center}\n\\begin{tabular}{|c|c|c||c|c|c|c|c|}\n\\hline\n$\\omega$ & $g$ & $a$ & $R$ & $T_\\text{dif}$ & $T_\\text{dir}$ & $T$ & $A$ \\\\\n\\hline\\hline\n1.00 & 0.0 & $\\infty$ & 0.833 & 0.167 & 0.000 & 0.167 & 0.000 \\\\\n\\hline\n1.00 & 0.0 & 10. & 0.814 & 0.185 & 0.001 & 0.186 & 0.000 \\\\\n\\hline\n1.00 & 0.0 & 2.0 & 0.727 & 0.245 & 0.028 & 0.273 & 0.000 \\\\\n\\hline\n1.00 & 0.0 & 1.5 & 0.693 & 0.260 & 0.047 & 0.307 & 0.000 \\\\\n\\hline\n1.00 & 0.0 & 1.0 & 0.632 & 0.277 & 0.091 & 0.368 & 0.000 \\\\\n\\hline\n1.00 & 0.0 & 0.5 & 0.500 & 0.282 & 0.218 & 0.500 & 0.000 \\\\\n\\hline\\hline\n1.00 & 0.8 & $\\infty$ & 0.500 & 0.500 & 0.000 & 0.500 & 0.000 \\\\\n\\hline\n1.00 & 0.8 & 10. & 0.475 & 0.524 & 0.001 & 0.525 & 0.000 \\\\\n\\hline\n1.00 & 0.8 & 2.0 & 0.378 & 0.594 & 0.028 & 0.622 & 0.000 \\\\\n\\hline\n1.00 & 0.8 & 1.5 & 0.345 & 0.608 & 0.047 & 0.655 & 0.000 \\\\\n\\hline\n1.00 & 0.8 & 1.0 & 0.290 & 0.619 & 0.091 & 0.710 & 0.000 \\\\\n\\hline\n1.00 & 0.8 & 0.5 & 0.192 & 0.590 & 0.218 & 0.808 & 0.000 \\\\\n\\hline\\hline\n0.98 & 0.8 & $\\infty$ & 0.422 & 0.401 & 0.000 & 0.401 & 0.176 \\\\\n\\hline\n0.98 & 0.8 & 10. 
& 0.406 & 0.432 & 0.001 & 0.433 & 0.161 \\\\\n\\hline\n0.98 & 0.8 & 2.0 & 0.335 & 0.524 & 0.028 & 0.552 & 0.112 \\\\\n\\hline\n0.98 & 0.8 & 1.5 & 0.309 & 0.546 & 0.047 & 0.593 & 0.098 \\\\\n\\hline\n0.98 & 0.8 & 1.0 & 0.264 & 0.568 & 0.091 & 0.659 & 0.077 \\\\\n\\hline\n0.98 & 0.8 & 0.5 & 0.179 & 0.557 & 0.218 & 0.775 & 0.046 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\n\\section{Diffusion Study in $d = 2$: Theory and Monte Carlo Simulation}\n\\label{s:DiffusionLimits}\n\n\\subsection{Theoretical Predictions}\n\nIn this section, we focus on $d$ = 2 spatial dimensions, partly for simplicity (fidelity with Fig.~\\ref{f:RWs_g0_inset}, where nothing is happening outside of the depicted $(x,z)^\\text{T}$-plane), partly because there are previously-mentioned two-dimensional transport processes on real substrates (including random ones where a stochastic model is in order). We focus specifically on non-absorbing media ($\\omega$ = 1) over an absorbing lower boundary ($\\rho$ = 0). Moreover, we will assume an isotropic source at the upper boundary, that is, BC in (\\ref{e:nD_RTE_upperBC_diffuse}) with $F_0$ = 1.\n\nWe will investigate transmitted fluxes, both direct and diffuse, their total $T(g,a;\\tau^\\star)$ being defined in (\\ref{e:nD_RTE_upperBC_diffuse}), but ignoring $\\mu_0$. We start with a review of the standard $a=\\infty$ case.\n\nIn \\S\\ref{s:dDim_sRTE}, the exact expression for $T(g,\\infty;\\tau^\\star)$ is given in (\\ref{e:T_2st_NoAbs}) for $d=1$ where the diffusion ODE model is mathematically exact. In $d>1$, diffusion is only a physically reasonable approximation to plane-parallel RT for very opaque highly scattering media. 
In lieu of (\\ref{e:1D_diffusion}), it is based on the 1st-order truncation \\begin{equation}\nI(\\tau,\\mathbf{\\Omega}) \\approx \\frac{J(\\tau) + d \\times F_z(\\tau) \\mu}{\\Xi_d},\n\\end{equation}\nand, in lieu of the first entry in the next-to-last row of Table~\\ref{t:Definitions}, we take\n\\begin{equation}\np_g(\\mu_\\text{s}) \\approx \\frac{1+d \\times g \\mu_\\text{s}}{\\Xi_d}.\n\\end{equation}\nThis leads to Laplace\/Helmholtz or Poisson ODEs for $J(\\tau)$, respectively for boundary and volume expressions for the sources. In plane-parallel slab geometry, the BCs are again Robin-type. When homogeneous (hence sources in the volume), they are conventionally expressed as\n\\begin{equation*}\n\\left[ J - \\frac{\\chi_d}{(1-\\omega g)} \\frac{\\mathrm{d} J}{\\mathrm{d}\\tau} \\right]_{\\tau = 0} = 0, \\;\n\\left[ J + \\frac{\\chi_d}{(1-\\omega g)} \\frac{\\mathrm{d} J}{\\mathrm{d}\\tau} \\right]_{\\tau = \\tau^\\star} = 0,\n\\label{e:dD_diffusion_BCs}\n\\end{equation*}\nwhere $\\chi_d$ is the extrapolation length, i.e., boundary values of $J\/|\\mathrm{d} J\/\\mathrm{d} z|$, expressed in transport MFPs, that is,\n\\begin{equation}\n\\ell_\\text{t} = 1\/(1-\\omega g)\\sigma.\n\\end{equation}\nClassic values for $\\chi_d$ are listed in Table~\\ref{t:Definitions} (last row). In the absence of absorption and using boundary sources, total transmission is\n\\begin{equation}\nT(g,\\infty;\\tau^\\star) \\approx \\frac{1}{1+\\tau_\\text{t}^\\star\/2\\chi_d},\n\\label{e:dD_diffusion_Ttot}\n\\end{equation}\nwhere $\\tau_\\text{t}^\\star = (1-g)\\tau^\\star = H\/\\ell_\\text{t}$ is the scaled optical thickness. This expression is identical to (\\ref{e:T_2st_NoAbs}) for $d=1$ ($\\chi_1 = 1$), but here we use $\\chi_2 = \\pi\/4$.\n\nDiffusion theory for $a < \\infty$ cases is in a far worse state since we do not know yet how to formulate generalized RT in integro-differential form. 
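Before leaving the standard $a=\\infty$ case, note that the diffusion estimate (\\ref{e:dD_diffusion_Ttot}) is easy to sanity-check numerically. The sketch below evaluates $T(g,\\infty;\\tau^\\star) \\approx 1\/(1+\\tau_\\text{t}^\\star\/2\\chi_d)$ with $\\chi_1 = 1$ and $\\chi_2 = \\pi\/4$; for $d=1$ it reproduces the $a=\\infty$, $\\omega=1$ entries of Table~\\ref{t:MarCh_results}, namely $T = 0.167$ for $g=0$ and $T = 0.5$ for $g=0.8$ at $\\tau^\\star = 10$.

```python
import math

# Extrapolation lengths chi_d: chi_1 = 1 (where diffusion is exact) and the
# classic chi_2 = pi/4 used in the main text for d = 2.
CHI = {1: 1.0, 2: math.pi / 4.0}

def diffusion_T(tau_star, g, d):
    """Standard (a = infinity) diffusion estimate of total transmission for a
    non-absorbing slab: T ~ 1/(1 + tau_t*/(2*chi_d)), tau_t* = (1-g)*tau_star."""
    tau_t = (1.0 - g) * tau_star
    return 1.0 / (1.0 + tau_t / (2.0 * CHI[d]))
```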
What is known is the asymptotic scaling of $T(g,a;\\tau^\\star)$ with respect to $\\tau_\\text{t}^\\star$. Based on the appropriate truncation of the Sparre-Anderson law of first returns \\cite{Sparre-Anderson1953}, Davis and Marshak \\cite{DavisMarshak1997} showed that\n\\begin{equation}\nT(g,a;\\tau^\\star) \\propto {\\tau_\\text{t}^\\star}^{-\\alpha\/2},\n\\label{e:scaling_diffusion_Ttot}\n\\end{equation}\nwhere $\\alpha = \\min\\{2,a\\}$ is the L\\'evy index. Recall that $a$ is the (generally non-integer) lowest order $q$ at which the moment $\\langle s^q \\rangle$ diverges for the power-law step distribution in (\\ref{e:Gamma_Tdir}). Then one of two outcomes occurs:\n\\begin{itemize}\n\\item\nIf $a \\ge 2$, hence $\\alpha = 2$, then the position of the random walk in Fig.~\\ref{f:RWs_g0_inset} is Gaussian (central limit theorem), and standard diffusion theory applies. As can be seen from (\\ref{e:dD_diffusion_Ttot}), the scaling exponent in (\\ref{e:scaling_diffusion_Ttot}) is indeed (negative) $\\alpha\/2 = 1$.\n\\item\nIf $a < 2$, hence $\\alpha = a$, then the position of the random walk in Fig.~\\ref{f:RWs_g0_inset} is L\\'evy-stable (generalized central limit theorems), and the diffusion process is ``anomalous.''\n\\end{itemize}\nThe predicted scaling in (\\ref{e:scaling_diffusion_Ttot}) will occur for any spatial dimensionality.\n\n\\subsection{Numerical Results}\n\nExtensive numerical computations were performed in $d=2$ spatial dimensions using a straightforward Monte Carlo scheme. The goal was to estimate $T(g,a;\\tau^\\star)$ for a wide range of $\\tau^\\star$ (0.125 to 4096), two choices for $g$ (0 and 0.85), and a representative selection of values for $a$: 1.2, 1.5, 2, 5, 10, and $\\infty$. 
We used (\\ref{e:Gamma_step}) to sample the distance to the next collision.\n\nThe key idiosyncrasies of Monte Carlo simulation of RT in $d=2$ lie in the two procedures for generating random angles:\n\\begin{itemize}\n\\item\nAt the departure point of the trajectory, an isotropic source in the angular half-space ($|\\theta| < \\pi\/2$) uses $\\sin\\theta_0 = 1-2\\xi$ (where $\\xi$ is a uniform random variable on [0,1]) and $\\cos\\theta_0 = \\sqrt{1-\\sin^2\\theta_0}$.\n\\item\nIf $g \\ne 0$, directional correlation is implemented by computing $\\theta_{n+1} = \\theta_n + \\theta_\\text{s}$ where $\\theta_\\text{s} = 2\\tan^{-1}\\left[ \\tan[(\\xi-1\/2)\\pi]\\times(1-g)\/(1+g) \\right]$\nbased on the corresponding H--G PF from Table~\\ref{t:Definitions} for $d = 2$.\n\\end{itemize}\nThe remaining operations (boundary-crossing detection and tallies) are similar in $d$ = 1,2,3.\n\nFigure~\\ref{f:2D_T_vs_tau_t} shows our results for $T(g,a;\\tau^\\star)$ as a function of scaled optical thickness $\\tau_\\text{t}^\\star = (1-g)\\tau^\\star$ in a log-log plot. We notice the similarity of $T(0,\\infty;\\tau^\\star)$ and $T(0.85,\\infty;\\tau^\\star)$ using the scaled optical thickness, as predicted in (\\ref{e:dD_diffusion_Ttot}): $T(g,\\infty;\\tau^\\star) \\sim T((1-g)\\tau^\\star)$ when $(1-g)\\tau^\\star \\gg 1$. Specifically, the two transmission curves overlap when plotted against $(1-g)\\tau^\\star$, at least for large values. In contrast, we see clear numerical evidence that generalized RT does not have such asymptotic similarity in $T(g,a;\\tau^\\star)$, as was previously anticipated when examining internal radiation fields in $d=1$. More precisely, the scaling exponent in (\\ref{e:scaling_diffusion_Ttot}) is, as indicated, independent of $g$, but the prefactor (and the approach to the asymptote) is not. 
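The two angle-sampling procedures itemized above translate directly into code; this sketch (ours, not the production Monte Carlo) draws the source angle $\\theta_0$ and the H--G deviation angle $\\theta_\\text{s}$ for $d=2$.

```python
import math
import random

def source_angle_2d(rng=random):
    """Isotropic boundary source over the half-space |theta| < pi/2:
    sin(theta0) = 1 - 2*xi, cos(theta0) = sqrt(1 - sin^2(theta0))."""
    s = 1.0 - 2.0 * rng.random()
    return math.asin(s)

def hg_deviation_2d(g, rng=random):
    """2-D Henyey-Greenstein deviation angle:
    theta_s = 2*atan( tan((xi - 1/2)*pi) * (1 - g)/(1 + g) ),
    so the new propagation angle is theta_{n+1} = theta_n + theta_s."""
    xi = rng.random()
    return 2.0 * math.atan(math.tan((xi - 0.5) * math.pi) * (1.0 - g) / (1.0 + g))
```

For $g = 0$ the deviation reduces to a uniform angle on $(-\\pi,\\pi)$, i.e., isotropic scattering, as it should.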
In Fig.~\\ref{f:2D_T_vs_tau_t}, we have estimated the exponents numerically, and they are close to the predicted value, $\\min\\{1,a\/2\\}$.\n\nIn summary, our modest diffusion theoretical result in (\\ref{e:scaling_diffusion_Ttot}) for generalized RT is well verified numerically, and we have gained some guidance about what to expect for a more comprehensive theory.\n\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics[width=5.33in]{2D_T_vs_tau_t.pdf}\n\\end{center}\n\\caption[2D_T_vs_tau_t]\n{\\label{f:2D_T_vs_tau_t}\n2D Monte Carlo evaluations of $T(g,a;\\tau^\\star)$ versus transport (or ``scaled'') optical thickness $\\tau^\\star_\\text{t}$ in log-log axes for $g$ = 0 (solid black) and 0.85 (dotted gray), and for $a$ = 1.2, 1.5, 2, 5, 10, and $\\infty$ (from top down). Asymptotic scaling exponents are estimated numerically using the last two values of $\\tau^\\star$ and compared with theoretical predictions in the main text.}\n\\end{figure}\n\n\n\\section{Single Scattering in $d = 3$: Violation of Angular Reciprocity}\n\\label{s:ReciprocityViolation}\n\nWe first revisit the closed-form expression we derived in standard RT for the single scattering approximation in (\\ref{e:1_scatter_R})\nfor radiances escaping the upper boundary. 
We remarked that they have the reciprocity property that reversing $\\mathbf{\\Omega}_0$ (source) and $\\mathbf{\\Omega}$ (detector) in both sign and in order gives the same answer.\n\nHere, we need to evaluate\n\\begin{equation}\nI_1(0,\\mathbf{\\Omega};\\mathbf{\\Omega}_0) =\n\\omega p_g(\\mathbf{\\Omega}\\cdot\\mathbf{\\Omega}_0)\n\\int\\limits_0^{\\tau^\\star} T_a(\\tau^\\prime\/\\mu_0) \\,\n|\\Dot{T}_a(\\tau^\\prime\/|\\mu|)| \\, \\mathrm{d}\\tau^\\prime\/|\\mu|.\n\\label{e:1_scatter_gRT_reflection}\n\\end{equation}\nFrom there, (\\ref{e:BRF_form}) yields the BRF form\n\\begin{equation}\n\\begin{array}{l}\n\\frac{\\pi}{\\mu_0} I_1(0,\\mathbf{\\Omega};\\mathbf{\\Omega}_0) = \\pi \\omega p_g(\\mathbf{\\Omega}\\cdot\\mathbf{\\Omega}_0)\n\\times \\frac{a}{\\mu_0}\n\\left( \\frac{\\mu_0}{|\\mu|} \\right)^a\n\\left( 1-\\frac{\\mu_0}{|\\mu|} \\right)^{-2a} \\\\\n\\quad\\quad\\quad\\quad\\quad\n\\times \\left[\n \\mathrm{B}\\left(2a,1-a;1-\\frac{\\mu_0}{|\\mu|}\\right)\n-\\mathrm{B}\\left(2a,1-a;a\\,\\frac{1-\\mu_0\/|\\mu|}{a+\\tau^\\star\/|\\mu|}\\right)\n\\right],\n\\end{array}\n\\label{e:1_scatter_gR}\n\\end{equation}\nwith $-1 \\le \\mu < 0$ and $0 < \\mu_0 \\le 1$, and where we use the incomplete Euler Beta function: $\\mathrm{B}(x,y;z) = \\int_0^z t^{x-1} (1-t)^{y-1} \\mathrm{d} t$.\n\nTo demonstrate that this complex expression violates the reciprocity relation in (\\ref{e:R_reciprocity}) and by how much, we have plotted in Fig.~\\ref{f:NonRecip_gRT} the ratio of $I_1(0,-\\mathbf{\\Omega}_0;-\\mathbf{\\Omega})\/|\\mu|$ to $I_1(0,\\mathbf{\\Omega};\\mathbf{\\Omega}_0)\/\\mu_0$ for a small value of $\\tau^\\star$ compatible with the single scattering approximation used in (\\ref{e:1_scatter_gRT_reflection}). 
This ratio is independent of the SSA, $\\omega$, and of azimuthal angle, $\\phi$ (assuming $\\phi_0 = 0$), in $d = 3$ since the latter appears only in the evaluation of the PF via $\\mathbf{\\Omega}_0\\cdot\\mathbf{\\Omega} = \\mu_0\\mu+\\sqrt{1-\\mu_0^2}\\sqrt{1-\\mu^2}\\cos\\phi$. As expected, the violation is stronger for smaller values of $a$ ($a$ = 1.2 and $a$ = 10 are displayed).\n\nThis violation of reciprocity is a desirable attribute of stochastic RT modeling, at least in atmospheric applications. It is indeed consistent with real-world satellite observations of reciprocity violation uncovered by DiGirolamo et al. \\cite{DiGirolamo_EtAl_1998} in spatially variable cloud scenes inside a relatively broad field of view, and readily replicated with numerical Monte Carlo simulations. These findings were soon explained theoretically by Leroy \\cite{Leroy2001}. This provides an element of validation of the new model and, by the same token, invalidates for atmospheric applications all models for RT in stochastic media based on either homogenization or linear mixing.\n\nIt is important to realize that this reciprocity violation is related ({\\it i}) to the uniform illumination of the scene and ({\\it ii}) to the spatial averaging that is inherent in the observations that the new model is designed to predict. Indeed, at the scale of a collimated source at a single point in space and a collimated receiver aimed at another direction at another point in any medium, spatially variable or not, there is a fundamental principle of reciprocity as long as the PF has it, $p(\\mathbf{\\Omega}^\\prime\\to\\mathbf{\\Omega}) = p(-\\mathbf{\\Omega}\\to-\\mathbf{\\Omega}^\\prime)$, and Helmholtz's reciprocity principles will guarantee that property under most circumstances. 
Starting from there, Case \\cite{Case1957} showed that invariance under arbitrary horizontal translation is also required to extend this (internal) ``Green's function'' reciprocity to Chandrasekhar's \\cite{Chandra1950} (external) reciprocity relations for plane-parallel slabs.\n\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics[width=2.5in]{NonRecip_weak.pdf}\n\\includegraphics[width=2.5in]{NonRecip_strong.pdf}\n\\end{center}\n\\caption[NonRecip_gRT]\n{\\label{f:NonRecip_gRT}\nContour plots of $I_1(0,-\\mathbf{\\Omega}_0;-\\mathbf{\\Omega})\/|\\mu| \\div I_1(0,\\mathbf{\\Omega};\\mathbf{\\Omega}_0)\/\\mu_0$ as functions of $\\mu_0$ and $\\mu$ for $\\tau^\\star = 0.1$ (a small value consistent with the adopted single scattering approximation) and $a=10$ (left), $a=1.2$ (right).}\n\\end{figure}\n\n\n\\section{Conclusions \\& Outlook}\n\\label{s:concl}\n\nWe have surveyed a still small but growing literature on radiative transfer (equivalently, mono-group linear transport) theory where there is no requirement for the direct transmission law---hence the propagation kernel---to be exponential in optical distance. In particular, we gather the evidence from the atmospheric radiation and turbulence\/cloud literatures that a better choice of transmission law \\emph{on average} would have a power-law tail, at least for solar radiative transfer in large domains with a strong but unresolved variability of clouds and aerosols that is shaped by turbulent dynamics. Long-range spatial correlations in the fluctuations of the extinction coefficient in the stochastic medium are essential to the emergence of power-law transmission laws, and such correlations are indeed omnipresent in turbulent media such as cloudy airmasses as well as entire cloud fields.\n\nFrom there, we modified the integral form of the radiative transfer equation to accommodate such power-law kernels. This leads to a \\emph{generalized} linear transport theory parameterized by the power-law exponent. 
This new model reverts to the standard one where exponential transmission prevails in the limit where the characteristic power-law exponent increases without bound. In the new theory however, the physics dictate that there are two specific roles for the transmission function, which lead to different but related expressions. There is no such formal distinction in standard transport theory. However, when the origins of the exponentials are carefully scrutinized from a transport physics perspective, their different functionalities become apparent.\n\nThe new transport theory, possibly with some restrictions, is likely to be one instance of the new ``non-classical'' class of transport models investigated recently by Larsen and Vasques \\cite{LarsenVasques11}. These authors were primarily motivated by fundamental questions about neutron multiplication processes in pebble-bed nuclear reactors. We do not anticipate long-range spatial correlations in these reactors so the relevant transmission laws are more likely to be modified exponentials such as found by Davis and Mineev-Weinstein \\cite{DavisMineev11} in media with very high frequency fluctuations.\n\nWe presented a unified formulation for standard and generalized transport theory in $d = 1,2,3$ spatial dimensions and their associated direction spaces. The present study first adds to previous ones the capability of a new deterministic computational scheme for solving the generalized linear transport equation, which does not have at present an integro-differential form, only an integral one. We thus address the stochastic transport problem at hand, so far only in $d=1$, using a Markov chain formalism. It is used to explore internal intensity and flux fields where numerical results shed light on questions of similarity and diffusion. Diffusion theory and the space-angle similarity captured in the scaled or ``transport'' mean-free-path are exact in $d=1$ for standard transport---not so in generalized transport. 
In $d>1$, diffusion is only an approximation applicable to opaque scattering media, and the associated similarity is only asymptotic (large optical thickness regimes).\n\nNew numerical simulations presented here in $d=2$ confirm and qualify the violation of similarity. They also confirm previous predictions about the asymptotic scaling of diffuse transmission, which is anomalous or ``L\\'evy-like'' if the characteristic exponent is less than 2. L\\'evy flights are now attracting considerable interest in laboratory as well as atmospheric optics \\cite{BarthelemyEtAl2008}. Finally, the generalized transport problem is solved in $d=3$ in the single scattering approximation. This solution is used to highlight the violation of angular reciprocity in generalized radiative transfer. This is yet another distinction between standard transport theory, including homogenization-based models for transport in stochastic media, and the new class of generalized transport models. This non-reciprocity is in fact observed in the Earth's cloudy atmosphere using reflected sunlight, and is therefore a desirable attribute for stochastic transport of solar radiation.\n\nA logical next step is to implement the Markov chain solution in $d = 3$. Monte Carlo-based predictions of the angular patterns for radiance escaping the medium on the upper (obliquely illuminated) boundary are already available for the verification process. In $d = 3$, there may be an interest in adding light polarization capability and linearizing the model with respect to the new spatial variability parameter $a$, the exponent that controls the power-law tail of the direct transmission law.\n\nFinally, we draw attention to a serendipitous development in the atmospheric radiation literature. 
While our ongoing theoretical and computational work on unresolved\/stochastic \\emph{spatial} variability of the extinction coefficient in turbulent scattering media has led to the parameterized class of power-law propagation kernels described herein, Conley and Collins \\cite{ConleyCollins2011} have independently arrived at the very same power-law parameterization for the problem of unresolved \\emph{spectral} variability of the absorption coefficient due to all manner of molecules in the Earth's atmosphere that might contribute to the thermal and solar radiative processes that build up the greenhouse effect.\n\nThis opens tantalizing questions about novel \\emph{unified} formulations of the challenging problem of radiation transport in clumpy 3D scattering media that are permeated with spatially uniform but spectrally variable absorbing gases. A first step in that direction is to assume that the scattering elements of the optical medium are in fact spatially uniform. However, our generalized radiation transport model for multiple scattering based on power-law transmission between scatterings\/absorptions still applies, and it can be invoked to capture the impact of purely spectral variability on the overall radiation transport from sources to sinks\/detectors.\n\n\n\n\\newpage\n\\section{Appendix: Markov Chain Formalism for Generalized Radiative Transfer}\n\\label{s:append}\nThis page (35) is intentionally left blank, to be replaced by corresponding Appendix page. N.B. arXiv users will find a PDF version of the Appendix as an ancillary file accessible from the abstract page.\n\n\\newpage\nThis page (36) is intentionally left blank, to be replaced by corresponding Appendix page. N.B. arXiv users will find a PDF version of the Appendix as an ancillary file accessible from the abstract page.\n\n\\newpage\nThis page (37) is intentionally left blank, to be replaced by corresponding Appendix page. N.B. 
arXiv users will find a PDF version of the Appendix as an ancillary file accessible from the abstract page.\n\n\\newpage\nThis page (38) is intentionally left blank, to be replaced by corresponding Appendix page. N.B. arXiv users will find a PDF version of the Appendix as an ancillary file accessible from the abstract page.\n\n\\newpage\nThis page (39) is intentionally left blank, to be replaced by corresponding Appendix page. N.B. arXiv users will find a PDF version of the Appendix as an ancillary file accessible from the abstract page.\n\n\\newpage\n\n\n\n\\section*{Acknowledgments}\n\nThe authors are thankful for sustained support from NASA's Radiation Sciences Programs managed by Hal Maring and Lucia Tsaoussi. We also acknowledge many fruitful discussions with Howard Barker, Bob Cahalan, Bill Collins, Martin Frank, Lee Harrison, Alex Kostinski, Ed Larsen, Shaun Lovejoy, Alexander Marshak, Qilong Min, Klaus Pfeilsticker, Anil Prinja, Ray Shaw, and Bob West. We also thank the Editors and two anonymous reviewers for many insightful comments about the submitted manuscript. This research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.\n\n\n\\textcolor{white}{\\cite{Marchuk_etal80}}\n\n\n\\newpage\n\n\\bibliographystyle{unsrt}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\nA tremendous volume of credit card transactions is conducted daily, especially since the COVID-19 pandemic. Nevertheless, this financial activity necessitates many robust resources, in terms of CPU, RAM, storage and human expertise, to detect fraudulent payments. For the year 2019, the Nelson Report estimated the worldwide financial loss from credit card fraud at \\$28.65 billion ~\\cite{CCFstat2019}. Over the last ten years, the number of users who reported at least one credit card fraud rose by 71\\% in Canada ~\\cite{CCFstat2021}. 
Credit-card fraud classification is complex to tackle due to the following challenges: {\\bf (1)} transactions are generated continuously and at high speed. In this data stream context, fraud must be detected in real-time to avoid losses on the customers' side, {\\bf (2)} Machine Learning Algorithms (MLAs) require storing a very large number of transactions to conduct batch learning. However, financial transactions must not be accumulated in an enormous quantity by the fraud detection models because of data sensitivity and confidentiality, {\\bf (3)} MLAs cannot adapt previous knowledge to newly available transactions to improve their accuracy, making the detection models obsolete and unreliable in the long run, and {\\bf (4)} the architectures of all past incremental learning approaches are pre-determined \\cite{DBLP:journals\/ci\/AnowarS21}, \\cite{DBLP:conf\/flairs\/AnowarS20}, thus, the accuracy may not improve with new data. \n\nMost credit-card fraud detection studies employed conventional MLAs that are, however, inadequate for this specific environment. In the industry, a new fraud prediction system is created from scratch every few days to learn the new behavior ~\\cite{DBLP:conf\/dsaa\/LebichotPBSHO20}. However, re-training is very time-consuming, and the learned knowledge is completely lost. To overcome the four challenges above, we develop a new adaptive learning algorithm that learns frequently and efficiently from transaction chunks or mini-batches. According to the incremental training frequency, we decide how much data to collect in each chunk. The shorter the training interval, the less sensitive data is accumulated. In our study, a chunk contains the payment transactions of one day, which is still large. 
We only process one chunk at a time in the short-term memory and discard it after each model adaptation, without storing it.\n\n\\medskip\n\nThe proposed adaptive approach combines transfer learning and incremental feature learning (a new paradigm). Thanks to transfer learning, we extract valuable features from the original ones and reuse the new features for the subsequent transaction chunks. For instance, in the image processing area, the first layers of the neural network extract fundamental features that can be reused in another image processing task. Following the same reasoning, we use the first layers to collect more beneficial features and then add a new network to utilize those features. By doing so, we take advantage of the previous chunks' knowledge. \n\nOur new Incremental Feature Learning (IFL) algorithm adapts gradually to the new transaction chunks by {\\bf (1)} preserving the previously learned knowledge and {\\bf (2)} dynamically adjusting the network architecture for each new chunk to achieve the highest performance during training. IFL expands the network topology by adding new hidden layers and units during each adaptation phase. Determining the model architecture that leads to the best performance is a complex problem. Looking for the optimal number of units in a hidden layer is already tricky for static MLAs. Our IFL approach adds hidden units one by one until the model does not converge anymore. Nevertheless, as we are changing the network architecture to increase the performance, we may over-fit the resulting model. Hence, we utilize a validation chunk during training to avoid over-fitting after each extension. More precisely, only the weights of the new hidden units are updated each time as the previous units are frozen to store the previous knowledge. Thus, less computational time is required to conduct learning for each new chunk. 
In this way, our IFL approach will always outperform other incremental learning approaches as the former continuously adapts its architecture to reach optimal accuracy. The architecture is permanently fixed for past incremental approaches, and the accuracy may not improve when new training chunks are provided. Developing such an algorithm is very challenging, requiring a deep investigation of the building blocks and libraries of MLA toolkits, such as creating a new hidden unit, adding a new connection, and freezing the weights of an old connection (not to be re-optimized).\n\n\\medskip\n\nThere is only one prior study that developed an IFL approach, but it is very elementary ~\\cite{DBLP:journals\/npl\/GuanL01}. Still, this paper does not provide details on the algorithm design and its implementation. Instead, our paper presents the steps of our new IFL approach that learns progressively from newly available chunks. Through a concrete example, we illustrate step by step the sophisticated behavior of our approach. Moreover, using a real credit-card fraud dataset, we create training, testing, and validation chunks and handle the highly imbalanced learning problem. We build four fraud classification models for the experiments: (1) the initial model trained on the first-day transaction chunk, (2) the initial model trained on the second-day transaction chunk, (3) the re-fitted model trained on the second-day chunk, and (4) the final optimal model trained on the second day and produced with our new approach. We thoroughly evaluate and compare all these learned models on unseen data. \n\\section{Related Work}\nWe review recent research on detecting credit card fraud and highlight its weaknesses. The majority of studies conducted batch learning, such as ~\\cite{Najadat2020} that explored deep learning, like BiLSTM and BiGRU, and classical learning, such as Decision Tree, Ada Boosting, Logistic Regression, Random Forest, Voting and Naive Bayes. 
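The growth loop at the heart of the adaptation phase can be sketched in a framework-agnostic way. Here `train_unit` and `val_score` are hypothetical stand-ins for the actual unit-training and validation-chunk evaluation routines; freezing is expressed by never handing the carried-over units back to the optimizer.

```python
def grow_hidden_layer(frozen_units, train_unit, val_score, max_units=100):
    """Incremental feature learning step for one new chunk: starting from the
    frozen units carried over from previous chunks, add hidden units one by
    one (fitting only the newest unit's weights) and stop as soon as the
    validation score no longer improves, to avoid over-fitting."""
    units = list(frozen_units)          # previous knowledge, weights frozen
    best_score = val_score(units)
    while len(units) < max_units:
        candidate = units + [train_unit(units)]   # only the new unit is fit
        score = val_score(candidate)
        if score <= best_score:         # no further gain: stop expanding
            break
        units, best_score = candidate, score
    return units, best_score
```

For instance, with a toy `val_score` that saturates after three units, the loop stops at exactly three, returning the smallest architecture that reaches the best validation score.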
Since the fraud dataset is highly imbalanced, the authors adopted random under-sampling, over-sampling and SMOTE. The hybrid of over-sampling, BiLSTM and BiGRU lead to the highest accuracy. Another work ~\\cite{nguyen2020deep} also assessed several MLAs, including LSTM, 2-D CNN, 1-D CNN, Random Forest, ANN and SVM, using different data sampling methods on three credit-card datasets. LSTM and 1-D CNN combined with SMOTE returned the best results. We believe LSTM can be a good option for incremental learning since this algorithm can remember past data and therefore creates predictions using the current inputs and past data, leading to a better response to the environmental changes. In both papers, LSTMs and the other models were trained on very large datasets, requiring storing sensitive information forever. Nevertheless, since user transactions are available incrementally, conventional MLAs are inappropriate for streaming data. Our proposed method aims to address the real credit-card fraud classification context. \n\n\\smallskip \n\nIn \\cite{DBLP:conf\/smc\/AnowarS20}, the authors first utilized SMOTE-ENN to handle a highly imbalanced credit-card fraud dataset and then divided the dataset into multiple training chunks to simulate incoming data. They proposed an ANN-based incremental learning approach that learns gradually from new chunks using an incremental memory model. For adjusting the model each time, the memory consists of one past chunk (so that data are not forgotten immediately) and one recent chunk (to conduct the model adaptation). The authors demonstrated that incremental learning is superior to static learning. However, using two chunks every time can be expensive computationally. Also, since the ANN topology is fixed, the model cannot adapt to significant changes in the chunk patterns. In our study, the ANN architecture is dynamic to build an optimal fraud detection model. 
Instead of using two chunks simultaneously, which requires storing more data, we use only one chunk. With transfer learning, we take advantage of the previous chunk without storing it.\n\n\\smallskip \n\nIn ~\\cite{Barris2020}, the authors introduced an incremental Gradient Boosting Trees (GBT) approach, which is an ensemble of decision trees, to minimize the loss function gradually. The ensemble is updated for each new group of transactions by appending a new Decision Tree to create a more robust GBT model. The authors developed three models: static GBT (batch learning), re-trained GBT (re-training all the transactions by including the\ninvestigated ones), and incremental GBT (ensemble). All these methods necessitate storing a tremendous quantity of user transactions. The authors divided the credit-card dataset into several sub-datasets w.r.t. time (month) and evaluated the three models for each month (four months in total). Although the re-trained GBT (performed with 1.6 million transactions) achieved the best performance, it is 3000 times slower than the incremental approach. Since re-training is too time-consuming, we aim to improve the accuracy and training time for learning incrementally. \n\\section{Credit-Card Fraud Dataset Preparation}\nWe select a public, anonymized credit-card fraud dataset consisting of 284807 users' purchasing transactions that occurred during two days in September 2013 ~\\cite{CCFD2016}. We note that public credit-card fraud datasets are scarce due to data privacy. The 2-class dataset possesses 30 predictive features obtained with the feature extraction method PCA; the two features Time and Amount were not transformed as their actual values are essential. Time represents the seconds passed between the current and the first transactions in the dataset. After examining the dataset, only $0.172\\%$ (492) of the transactions are fraudulent. We found 19 duplicated fraud records, which we eliminated. 
We are in the presence of a highly imbalanced dataset where the count of legitimate data is much higher than the count of fraudulent data. In this situation, classifiers will be biased towards the normal class and will misclassify the fraud class. \n\n\\smallskip\n\nWe divide the dataset into two chunks according to the transaction time: the first chunk is related to the first day and the second chunk to the second day. We use the first chunk to perform transfer learning, and the second chunk to perform IFL. Next, using the stratified method, we split the first chunk into 70\\% training and 30\\% testing and the second chunk into 70\\% training, 15\\% validation and 15\\% testing. Table \\ref{ccfd} (a) presents the original dataset's statistics. As given in Table \\ref{ccfd} (a), the training chunks are highly imbalanced; therefore, we adopt the well-known over-sampling method SMOTE with a new class distribution ratio of 1:3. We employ over-sampling to keep all the fraud data as they are rare events. For example, the training chunk2 has an imbalance ratio of 1:689 that we re-balance with a new ratio of 1:3. Table \\ref{ccfd} (b) presents the re-sampled train, validation and test chunks. Since Time is irrelevant for the classification task, we remove it.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=1.00\\linewidth]{figures\/DataTable.png}\n\\caption{Original and Re-Sampled Training and Testing Chunks}\n\\label{ccfd}\n\\end{figure} \n\n\n\n\n\\section{Transfer Learning and Incremental Feature Learning}\nTransfer learning is usually adopted in image processing, where an existing classifier is used to detect new objects, as the classifier can efficiently extract new features from images. In our fraud classification application, based on the initial model trained on the first-day transaction chunk, our algorithm utilizes the transformed features to train on the second-day chunk with the optimal number of hidden units. 
Instead of re-training the entire model for each new chunk, we only train the newly added units and freeze all the previous layers. This strategy will significantly decrease the training time so that the classifier identifies fraud activities much faster, especially since our application requires real-time responses. The most apparent difference between incremental learning and conventional learning is that the former does not assume the availability of sufficient training data, as data become available over time ~\\cite{DBLP:journals\/npl\/GuanL01}. So, based on these facts, improving learning for each newly supplied chunk is an excellent approach.\\\\\n\nAlgorithm 1 first conducts transfer learning and then IFL. We first build the initial network with four fully connected layers, where the first layer has the input features, the first hidden layer 500 units, the second hidden layer ten units, and the output layer one unit for the binary classification. The study ~\\cite{DBLP:conf\/smc\/AnowarS20}, which validated an example-based incremental learning approach with the same credit-card dataset, determined 500 units to be the best number for the first hidden layer. So, we train this first network with the mentioned topology on the first chunk. Then, after removing the output unit, we refit the initial model using previous knowledge and the second-day chunk. Now, the second hidden layer represents the transformed features (called tSubset), i.e., the new dimensional space more relevant to the target class because it is obtained with previous knowledge. Here, tSubset contains the values corresponding to the transformed features. For example, assume we have 100,000 transactions in the second chunk. Using the refitted model, we convert each transaction of 29 values to ten new values. So, our transformed tSubset will contain 100,000 new instances with ten new features. 
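To make the tSubset construction concrete, the following minimal sketch feed-forwards a chunk through the frozen layers of the refitted model. The weights here are random stand-ins for the weights actually learned on the first-day chunk, and we use 1,000 transactions instead of 100,000 to keep the example light:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Frozen layers of the refitted model: 29 inputs -> 500 units -> 10 units.
# These random weights are illustrative stand-ins for the learned ones.
W1, b1 = rng.normal(0.0, 0.01, (29, 500)), np.zeros(500)
W2, b2 = rng.normal(0.0, 0.01, (500, 10)), np.zeros(10)

def transform(chunk):
    """Feed-forward a chunk of shape (n, 29) through the frozen layers,
    returning the transformed features of shape (n, 10) that form tSubset."""
    return relu(relu(chunk @ W1 + b1) @ W2 + b2)

chunk2 = rng.normal(size=(1000, 29))  # stand-in for second-day transactions
t_subset = transform(chunk2)
print(t_subset.shape)                 # (1000, 10)
```

Each 29-value transaction is thus mapped to ten transformed values, and the resulting tSubset is what the first sub-NN is trained on.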
\n\n\\medskip\nWe actually extend the initial network with two sub-networks; the first NN considers the high-level features, and the second NN the low-level features. We leverage both of them to predict the output more efficiently. Also, the study ~\\cite{Wang2014EEGES} showed that creating sub-networks according to the features' relevancy can lead to a better outcome. We add the second sub-NN to further improve the fraud detection model's performance. A similar idea is used in the incremental approach defined in ~\\cite{DBLP:journals\/npl\/GuanL01} by connecting the original inputs directly to the output unit with the same motivation. Another paper ~\\cite{DBLP:journals\/isci\/WangWYDC21} explained and compared the performance results of connecting the inputs to the output unit directly. These are the reasons we use high-level features for creating the first sub-NN and then low-level features for creating the second sub-NN. Thus, we can gradually improve the accuracy using newly available data without forgetting the previous knowledge (stored in the previous weights). Moreover, the training time will be much lower as we train on the new chunk only, one unit at a time. Our approach can be used for any number of chunks by appending a new hidden layer to the first sub-NN.\n\n\\medskip\n\nAlgorithm 2 shows how to extend a sub-NN with new hidden units and connections. The main challenge is determining the number of units that should be added to attain the best performance. In transfer learning, we add a predetermined number of layers and units to the previously trained model. It can lead to too few layers\/units; therefore, the resulting model will return poor predictions. It can also lead to unnecessary layers\/units, which only increases the learning time. Since the numbers of layers and units are predetermined, all the units are trained together and not one at a time as in our proposed method. So, training will be very time-consuming. 
In our work, we determine the needed number of units by adding and training them one by one. This approach is computationally highly efficient. Moreover, instead of computing the gradient descent for all the new units at once for each epoch, which is time-consuming, we utilize the ``patience\" parameter as an early-stopping callback. Based on the convergence threshold, this parameter checks whether the error is decreasing or not over a certain number of epochs. In the experiments, if the accuracy does not increase in ten epochs, then the learning will stop. \n\n\\newpage\n\n\\begin{footnotesize}\n\\begin{algorithm}[H]\n{\\bf Inputs:} trainChunk1, trainChunk2, validChunk2, threshold, nEpoch \\\\\n\\KwResult{optimal predictive model} \n\\medskip\n{\\color{blue} (*Initial Model with chunk1*)}\\\\\niniNN $\\gets$ build network with four fully connected layers;\\\\\niniModel $\\gets$ train iniNN on trainChunk1; \\\\\n\n{\\color{blue} (*Refitted Model with chunk2*)}\\\\\nrefModel $\\gets$ delete output unit of iniModel;\\\\\nrefModel $\\gets$ feedforward trainChunk2 to refModel using past weights; \\\\\n\n{\\color{blue} (*Transformed Feature Sub-dataset*)}\\\\\ntGroup $\\gets$ extract set of transformed features;\\\\\ntSubset $\\gets$ create dataset w.r.t. tGroup and trainChunk2;\\\\\n\n{\\color{blue} (*Incremental Feature Learning*)} \\\\\n{\\color{red} (*Sub-NN Training and Extension for Transformed Features*)} \\\\\nsubNN1 $\\gets$ create network (2 hidden units and 1 output unit);\\\\\nsubNN1 $\\gets$ connect subNN1 to output layer of refModel;\\\\\nmodel1 $\\gets$ train subNN1 with tSubset by freezing past weights;\\\\\nextModel1 $\\gets$ {\\bf Algorithm2}(model1, tSubset, validChunk2, threshold, nEpoch);\\\\\n\n{\\color{red} (*Sub-NN Training and Extension for Input Features*)} \\\\\nsubNN2 $\\gets$ create network (2 hidden units);\\\\\nsubNN2 $\\gets$ connect hidden units to output unit of extModel1;\\\\\nmodel2 $\\gets$ freeze weights of previous units except hidden and 
output units;\\\\\nmodel2 $\\gets$ train subNN2 with trainChunk2;\\\\\nextModel2 $\\gets$ {\\bf Algorithm2}(model2, trainChunk2, validChunk2, threshold, nEpoch);\\\\\nreturn extModel2;\\\\\n\\caption{Transfer Learning and Incremental Feature Learning}\n\\end{algorithm}\n\n\\begin{algorithm}[H]\n{\\bf Inputs:} {model; trainData, validData, threshold, nEpoch}\\\\\n\\KwResult{extended model} \n\\medskip\npreValAcc $\\gets$ $0$;\n\\While{True}{\ncurValAcc $\\gets$ compute accuracy of model using validData;\\\\\n\\While{(curValAcc is converging with patience of nEpoch)}{\n model $\\gets$ train non-frozen weights with trainData;\\\\\n curValAcc $\\gets$ compute accuracy of model using validData;\\\\\n}\n\n\\If{(curValAcc - preValAcc $<$ threshold) and (preValAcc $\\neq$ 0)}{\n Break;\n}\npreValAcc $\\gets$ curValAcc;\\\\\nmodel $\\gets$ add new hidden unit to hidden layer;\\\\\nmodel $\\gets$ freeze weights of previous units except new hidden and output units;}\nreturn model;\n\\caption{Model Extension with New Hidden Units until No Convergence}\n\\end{algorithm}\n\\end{footnotesize}\n\n\\subsection{A Concrete Example}\nBased on the credit-card fraud dataset, we illustrate the behavior of our new algorithm in Figures 2, 3, and 4, where the green color denotes new connections and units, and red the frozen part. Figure 2 creates the initial network, trains it on chunk1, and, after deleting the output unit, utilizes the previously learned weights to create the refitted model by training on chunk2. Figure 3 adds the first sub-NN and keeps training on data of the ten features by including new hidden units and connections and freezing past parameters until the performance no longer improves. Figure 4 creates the second sub-NN using the original features, connects it to the past output unit, freezes previous weights, and trains it on chunk2 until the model no longer converges. 
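The stopping logic of Algorithm 2 can be sketched as runnable code. Here the training and validation steps are replaced by a toy accuracy curve that improves with each added unit and then saturates; `threshold` plays the role of the convergence threshold, and all names are illustrative:

```python
# Runnable sketch of Algorithm 2's stopping logic. In the real algorithm,
# the accuracy comes from training the non-frozen weights and evaluating
# on the validation chunk; here a toy saturating curve stands in for it.
def extend_with_units(val_accuracy, threshold=0.01, max_units=50):
    """Add hidden units one at a time until the validation-accuracy
    gain between consecutive extensions drops below `threshold`."""
    units, prev_acc = 2, 0.0           # each sub-NN starts with 2 units
    while units < max_units:
        cur_acc = val_accuracy(units)  # train new unit, then validate
        if prev_acc != 0.0 and cur_acc - prev_acc < threshold:
            break                      # gain too small: stop growing
        prev_acc = cur_acc
        units += 1                     # grow the hidden layer by one unit
    return units, prev_acc

# Toy accuracy curve that improves with each unit and saturates.
units, acc = extend_with_units(lambda n: 0.9 - 0.5 / n)
print(units, round(acc, 3))
```

With this toy curve, growth stops as soon as the per-unit accuracy gain falls below the 0.01 threshold, mirroring how the sub-NN stops gaining from extra hidden units.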
\n\n\n\n\n\n\n\\begin{figure}[h]\n\n\\centering\n\\subfloat[create initial model on chunk1]{\n\\includegraphics[width=0.40\\linewidth]{figures\/baseModel.png}\n}\n\\subfloat[transform inputs on chunk2 using past weights]{\n\\includegraphics[width=0.36\\linewidth]{figures\/baseModelNoOutput.png}\n} \n\\caption{Transfer Learning}\n\n\\end{figure}\n\n\\begin{figure*}[h]\n\\centering\n\\subfloat[train subNN1 \\& freeze past NN]{\n\\includegraphics[width=0.45\\linewidth]{figures\/baseModelOneLayerAdded1.png}\n}\n\\subfloat[add hidden unit(s) until no convergence]{\n\\includegraphics[width=0.50\\linewidth]{figures\/baseModelOneLayerAdded2.png}\n} \n\\caption{Network training and expansion using transformed features}\n\n\\end{figure*}\n\n\\begin{figure*}[h]\n\\centering\n\\subfloat[connect subNN2 \\& freeze past NN]{\n\\includegraphics[width=0.45\\linewidth]{figures\/subNetAdded1.png}} \n\\subfloat[add hidden unit(s) until no convergence]{\n\\includegraphics[width=0.45\\linewidth]{figures\/subNetAdded2.png}}\n\\caption{Network training and expansion using original inputs}\n\n\\end{figure*}\n\n\\section{Validation}\n\\subsection{Experiment Setup}\nSince the 29 input features of our fraud dataset have different ranges of values, with significant discrepancies between the features, we first normalize them into the same range. Then, we train the initial network (four fully connected layers) on chunk1 data using different ranges to determine the most appropriate feature scale. The range of [-5, +5] returns the best performance. Furthermore, we tune the network hyperparameters, setting the learning rate to 0.001, the batch size to 1024, the number of epochs to 100, the convergence threshold to 0.01, and the patience to 10. We utilize five quality metrics for evaluating the predictive performance: Precision, Recall, F1-score, FNR, and Time (in seconds). Due to the randomness of NNs, we run ten training sessions and ten corresponding testing sessions. 
We consider the average of the test sessions for comparing different fraud detection models. \n\n\\subsection{Feature Transformation}\nUsing the trained model's knowledge obtained with the first-day transaction chunk, we transform the input features by feed-forwarding the second-day chunk to the first model. The new features are now more relevant to the target class, and after checking their correlation values, we find them to be much higher. Figure \\ref{disNew} presents the distribution of the ten transformed features.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{figures\/D20.png}\n\\caption{Distribution of Transformed Features}\n\\label{disNew}\n\n\\end{figure}\n\n\\subsection{Performance Evaluation}\nWe first build two initial models: the first one is trained on train chunk1 and assessed on test chunk1 (presented in Table 1), and the second one is trained on train chunk2 and assessed on test chunk2 (presented in Table 2). According to the 10-round testing results of the two models, the F1-score decreased by 4.6\\% for day2 transactions. This expected decrease is due to the much higher number of instances in chunk1 (60\\% of data) compared to chunk2 (40\\% of data). Moreover, by preserving past knowledge, we re-fit the initial model (trained on chunk1) using train chunk2 to conduct transfer learning and evaluate its accuracy on test chunk2. In this case, the F1-score decreased by 5.1\\%. Changes in data patterns may have caused this decrease \\cite{DBLP:conf\/dsaa\/LebichotPBSHO20}. For instance, if concept drift occurs in the credit-card data, the model performance decreases ~\\cite{DBLP:journals\/jnca\/AbdallahMZ16}. Lastly, we develop the final optimal model using our incremental feature learning approach. The F1-score improved by 9\\% using only one chunk. Also, Recall, a significant metric in fraud detection, increased by 9.8\\%, which means we have fewer false negatives in the second chunk. 
Also, as we are training one hidden unit at a time, the training time is less than the re-fitting time. Therefore, we can conclude that the final model outperforms the initial and re-fitted models using only one chunk. \n\nAnother essential metric for fraud detection is the False Negative Rate (FNR), which refers to the rate of fraudulent transactions detected as normal ones. According to the average of the ten experiments, as shown in Figure 6, the initial-model FNR on chunk1 is $0.149$, the initial-model FNR on chunk2 is $0.219$, the re-fitting-model FNR is $0.228$, and the final-model FNR is $0.13$. These values show that our model catches more fraudulent cases when fed with new chunks, reducing the FNR over time. \n\nFinancial transactions cannot be stored for a long time by the fraud classifiers because of privacy issues. In this case, we train the model incrementally chunk by chunk and discard each chunk right away. As we observe, the proposed approach can enhance the performance using one chunk at a time and with less computational cost. \n\n\n \n\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=1.00\\linewidth]{figures\/t1.png}\n\\caption{Training and Testing on Day1 Transactions}\n\n\\end{figure} \n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=1.00\\linewidth]{figures\/t2.png}\n\\caption{Training and Testing of Different Models on Day2 Transactions}\n\n\\end{figure} \n\n\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.65\\linewidth]{figures\/FNR_CHART_EDITED.png}\n\\caption{FNR Comparison on the Four Incremental Models }\n\n\\end{figure} \n\n\\section{Conclusion}\nTo tackle the actual behavior of the credit-card fraud detection environment, we introduced a new classification algorithm that adjusts gradually and efficiently to incoming chunks based on transfer learning and IFL. Transfer learning preserves past knowledge and utilizes it to adapt to subsequent chunks. 
IFL extends the network topology incrementally by finding the optimal number of hidden units for detecting fraud in each chunk. Our hybrid approach improves the predictive accuracy without the necessity of accumulating a substantial volume of data or spending too much time on training. Our new approach can be employed on everyday credit card transactions to prevent the performance from decreasing, using current and past knowledge.\n\n\n\n\\section{Introduction}\nNowadays, a large number of real-world applications, such as fraud detection and e-commerce, are subject to changes in their data distribution over time. In the context of online learning, the data space may change over time, with new classes or new features. Such non-stationary datasets are split into two groups: concept drift, where new classes appear in the data \\cite{}, and virtual concept drift, where new features are included in the data \\cite{}. The latter has received very little attention in the literature because it is a complicated problem to solve. For instance, in fraud detection applications, the feature space may change as a response to the new fraud strategies devised by fraudsters. If the fraud model does not self-adjust to the new fraud features, it will become unreliable in the long run. In an evolving application where the feature space changes over time, it is vital to develop an adaptive machine learning algorithm that can adjust to the new features incrementally, instead of re-training the model from scratch. Conventional machine learning algorithms are suitable only for static applications where the feature space is fixed and does not change. To fill this huge gap in the literature, we are interested in exploring Incremental Feature Learning (IFL), which consists of adding one feature or a group of features at a time into the predictive algorithm. \\\\ \n \nNeural networks are the only learning method that can support IFL, by modifying their architecture as features are introduced gradually. 
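As a hypothetical illustration of the core IFL step (not the algorithm detailed below; names and shapes are purely illustrative), the following sketch enlarges an input-to-hidden weight matrix when a group of new features arrives, copying the old weights unchanged and initializing only the rows for the new features:

```python
import numpy as np

rng = np.random.default_rng(1)

def add_feature_group(W_old, n_new, std=0.01):
    """Return an enlarged weight matrix: the rows for the previously seen
    features are copied unchanged (past knowledge is preserved), while the
    rows for the new features start near zero and are the only ones trained."""
    W_new = rng.normal(0.0, std, (n_new, W_old.shape[1]))
    return np.vstack([W_old, W_new])

W = rng.normal(size=(10, 4))      # 10 original features, 4 hidden units
W2 = add_feature_group(W, 3)      # a group of 3 new features arrives
assert W2.shape == (13, 4)
assert np.array_equal(W2[:10], W) # old weights survive the extension
```

Only the three new rows would be optimized on the new data, which is what lets the network grow with the feature space instead of being re-trained from scratch.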
\\\\ \n\n{\\bf PLAN- IDEA}\n\\begin{itemize}\n\\item Cross validation for training\n\\item split to training and testing \n\\item entropy instead of correlation or not\n\\item compare between feature orderings (descending, ascending and no grouping)\n\\item the whole new set of features is consistent; when we add, we check consistency? \n\\item may remove existing features when we add new ones \n\\item Xavier initializers compared with zero\/one initialization\n\\item can we take the average of weights?????\n\\item KPCA for every new feature group\n\\end{itemize}\n\n\n\\section{Related Work}\n\n\\section{IFL Approach}\nAlgorithm 1 presents the steps of our IFL process to dynamically create the final neural network (fully connected) by incorporating the feature groups incrementally. After determining the feature groups and ordering them, we split the original dataset into the appropriate training sub-datasets. At the beginning, we first create the first NN (with 3 layers) and train it on the corresponding sub-dataset. A hidden layer always begins with two units. As long as the model's training loss is not converging, we keep adding new hidden units and their required connections (dense) to increase accuracy and decrease the loss. A model is said to be converging when the difference between the previous and current models' errors is greater than a certain threshold. We re-train the current network for each new hidden unit by optimizing only the new connections' weights and the output unit's weights and freezing all the other weights. \\\\ \n\nSubsequently, we create a new sub-network (with two fully connected layers) for each remaining feature group and keep adding hidden units and dense connections to the sub-network until the loss function no longer converges. For each newly added sub-network, we consider all the previous sub-networks' features and the current feature group. 
As presented in Algorithm 2, we develop a new idea for adding a new hidden unit to the current sub-network using the ``concatenate layer\", a building block defined in the KERAS toolkit. We first transform each hidden unit into a layer to be able to disable the optimization part of that unit. Next, we add the new unit as a layer. Then, we merge all the hidden layers using the concatenate layer. The resulting final model will possess all the dataset's features in the input layer, one hidden layer with many units, and one output unit since we are dealing with binary classification and one-value regression problems. \\\\\n\nWhen creating a new hidden unit, we adopt random initialization with a standard deviation of 0.01 for the weights, and we set the biases to zero. Also, since we want the new sub-networks and the new units to affect the output smoothly and without any sudden gap, we set the weights of the new connections to the output unit to zero when creating them for the first time.\n\n\\begin{algorithm}[h]\n \\KwData{Training Dataset, Threshold}\n \\KwResult{Optimal Predictive Model} \n \\medskip\n {\\color{red} (*Dataset preparation*)}\\\\\n Create groups of features w.r.t. relevancy to target class; \\\\\n Sort groups in descending order and save into list; \\\\\n Divide dataset into sub-datasets w.r.t. 
created groups \\\\\n \\smallskip\n \n {\\color{red} (*Build and optimize first sub-network*)} \\\\\n Create first sub-network for first group (1 input layer, 2 hidden units and 1 output unit);\\\\\n\\While{(\\textbf{error is converging})}{\n Train non-frozen weights of sub-network on first sub-dataset;\\\\\n \\If{\\textbf{(error is not converging)}}{\n \\If{(\\textbf{(PreviousError - currentError) $<$ threshold})}{\n Break;\n }\n Add new hidden unit to sub-network (*call Algorithm 2*);\\\\\n Freeze weights of previous units except new hidden and output units;\\\\\n }\n Store current error\n}\n\\smallskip\nDelete first group from list; \\\\ \n{\\color{red} (*Build next sub-networks, merge and optimize them*)} \\\\ \n \\While{(list of groups is not empty)}{\n Create sub-network for current group (1 input layer and 2 hidden units);\\\\\n Connect hidden units to output unit;\\\\\n Freeze weights of previous units except new and output units\\\\\n \\While{(\\textbf{error is converging})}{\n Train non-frozen weights on proper sub-dataset;\\\\\n \\If{(\\textbf{error is not converging})}{\n \\If{((\\textbf{PreviousError - currentError) $<$ threshold})}{\n Break;\n }\n Add new hidden unit to sub-network (*call Algorithm 2*);\\\\\n Freeze weights of previous units except new hidden and output units;\\\\\n }\n Store current error\n }\nDelete current group from list;}\\\\ \\\\ \n\\caption{Incremental Feature Learning with Constructive NNs}\n\\end{algorithm}\n\n\\begin{algorithm}[]\n{\\bf Input:} {Sub-network}\\\\\n \\KwResult{Extended sub-network with one hidden unit and dense connections} \n\\medskip\nConvert each hidden unit to a layer\\\\\nCreate one hidden unit as a layer with desired activation function\\\\\nConnect fully new hidden unit to input layer\\\\\nMerge all the hidden layers using concatenate layer\\\\\n\\caption{Adding New Hidden Unit to Current Sub-network}\n\\end{algorithm}\n\n\\bigskip\n\nImplementing such an IFL algorithm is a challenging task that 
requires a deep investigation of the finely granular building blocks of the KERAS toolkit and its libraries, as shown below: \n\\begin{itemize}\n\\item There is a Boolean variable named \\textit{trainable} assigned to each unit that can be set to FALSE so that the weights of its connections are not updated during training. When it is set to TRUE, the weights are then re-optimized.\n\\item The API called ``Functional\" allows creating custom connections between units. We utilize this mode to be able to merge two sub-networks by creating new connections. \n\\item In order to fuse several layers in Algorithm 2, we use the ``concatenation layer\" concept. This layer does not contain any trainable parameters, so it will not impact the feed-forward or back-propagation processes because it is only used to merge layers. \n\\item To determine when to stop training the current model, we also utilize the ``patience\" parameter as an early-stopping callback. This parameter checks whether the error is decreasing or not over a certain number of epochs. As an example, let us set the patience parameter to $100$; in this case, if the error does not decrease in 100 epochs, then the training is stopped.\n\\end{itemize}\n\n\\newpage\n\n\\section{Case Study 1: Credit-Card Fraud}\nWe select a robust credit-card fraud dataset consisting of 284807 user transactions that occurred during two days in September 2013 (https:\/\/www.kaggle.com\/mlg-ulb\/creditcardfraud). The 2-class dataset has 30 robust numerical features, as they were obtained with the feature extraction method PCA. However, the two features Time and Amount were not transformed. Time is irrelevant for the classification task, as it represents the number of seconds elapsed between the current transaction and the first transaction in the dataset. Nevertheless, the features come with different scales, as shown in Figure Y, which depicts three examples of feature distributions. 
{\\bf How come after applying PCA, we have these distributions- let me ask my other PhD student??? perhaps not normalized before PCA???}. \n\n\\medskip\n\nAfter examining the dataset, only $0.172\\%$ (492) of the data is fraudulent. We found 19 duplicated fraud records, which we eliminated. We are in the presence of a highly imbalanced dataset with a class distribution ratio of normal to fraud samples equal to {\\bf 1:?????.} \n \n \n \n\\begin{table}[]\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\multicolumn{3}{|c|}{ Imbalanced Dataset} \\\\ \\hline\nNormal & Fraud & Total \\\\ \\hline\n\\multicolumn{1}{|c|}{283253} & \\multicolumn{1}{c|}{473} & \\multicolumn{1}{c|}{283726} \\\\ \\hline\n\\multicolumn{3}{|c|}{Re-Balanced Dataset(1:3)} \\\\ \\hline\nNormal & Fraud & Total \\\\ \\hline\n283253 & 93473 & 376726 \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{table}[]\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\multicolumn{3}{|c|}{Training (60\\%)} \\\\ \\hline\nNormal & Fraud & Total \\\\ \\hline\n\\multicolumn{1}{|l|}{170064} & \\multicolumn{1}{l|}{55971} & \\multicolumn{1}{l|}{226035} \\\\ \\hline\n\\multicolumn{3}{|c|}{Validation (10\\%)} \\\\ \\hline\nNormal & Fraud & Total \\\\ \\hline\n28345 & 9328 & 37673 \\\\ \\hline\n\\multicolumn{3}{|c|}{Testing (30\\%)} \\\\ \\hline\nNormal & Fraud & Total \\\\ \\hline\n\\multicolumn{1}{|l|}{84844} & \\multicolumn{1}{l|}{28174} & \\multicolumn{1}{l|}{113018} \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\\end{center}\n\n\n\\subsection{Feature Grouping}\n\n\n\n\\section{Case Study 2: Multi-class dataset}\n\n\\section{Conclusion and Future Work}\n\\end{document}\n\n\\href{https:\/\/www.kaggle.com\/gauravduttakiit\/creditcard-fraud-detection-by-logistic-regression}{https:\/\/www.kaggle.com\/gauravduttakiit\/creditcard-fraud-detection-by-logistic-regression}\n\n \\item Correlation heat-map:\n \\begin{figure}[]\n \\centering\n \\caption{Correlation Heatmap}\n \\includegraphics[scale=.90]{Correlation heatmap.png}\n \\label{Correlation 
Heatmap}\n \\end{figure}\n\\end{itemize}\n\\begin{center}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn the classical approach in the $n$-dimensional Euclidean space $\\mathbb{R}^n$, the Laplacian, as the iteration of the operator nabla, $\\Delta= div\\,grad\\,=\\nabla^2$, can be seen as the composition of two differential operators, each of first order: the gradient acting on scalars and the divergence on vectors. The intrinsic definitions of these operators are both based on the geometry of the Euclidean space. In fact, the gradient of a scalar function $u$ at a point $P$ in $\\mathbb{R}^n$ is the vector with the direction of the maximal growth of $u$ from $P$ and length equal to the rate of growth of $u$ in that maximal direction. The divergence of a vector field $F$ at a point $P$ of the space is given by the outer flow of $F$, from $P$, per unit volume. Sometimes problems of potential analysis are posed on sets without any a priori geometric or algebraic structure, and there is no other way to measure the gradient of a potential $u$ than the mere difference $u(y)-u(x)$ for any two different points $x$ and $y$ of the domain of $u$. A classical instance of this situation is provided by the Kirchhoff laws in the theory of electric circuits. A resistive circuit of $n$ nodes, $\\set{1,2,\\ldots, n}$, can be schematically seen as a weighted graph. Assume that $R_{ij}$ is the electrical resistance of the connection between nodes $i$ and $j$ of the circuit. If we admit for each $R_{ij}$ any nonnegative value including $+\\infty$, and $\\Phi_{ij}$ is the potential difference between nodes $i$ and $j$, then from Ohm's law the sum of all current intensities at each node $i=1,2,\\ldots,n$ is given by $(K\\Phi)_i=\\sum_{j=1}^n \\frac{\\Phi_{ij}}{R_{ij}}=\\sum_{j=1}^n w_{ij}\\Phi_{ij}$, where $w_{ij}=\\frac{1}{R_{ij}}$. 
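This finite picture can be checked numerically. The sketch below builds an illustrative 3-node circuit (not taken from the text; the conductances are made row-stochastic so that the mean value identity takes the form $u_i=\sum_j w_{ij}u_j$) and verifies that $\Delta=K\nabla$ gives $(\Delta u)_i=\sum_j w_{ij}u_j-u_i$:

```python
import numpy as np

# Illustrative 3-node circuit: w_ij = 1/R_ij, with each row summing to 1
# so that harmonicity reduces to the weighted mean value identity.
w = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.2, 0.8, 0.0]])

def laplacian(u):
    """Compute (K∇u)_i = Σ_j w_ij (u_j - u_i) for node potentials u."""
    grad = u[None, :] - u[:, None]   # naive gradient: (∇u)_ij = u_j - u_i
    return (w * grad).sum(axis=1)    # Kirchhoff divergence of ∇u

u = np.array([1.0, 1.0, 1.0])        # constants are always harmonic
assert np.allclose(laplacian(u), 0.0)
assert np.allclose(laplacian(u), w @ u - u)   # Δu = Wu - u
```

For any potentials `u`, `laplacian(u)` coincides with `w @ u - u`, so `Δu = 0` holds precisely when `u` equals its weighted neighborhood average at every node.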
A function $\\Phi$ defined on the edges of the graph will satisfy the first Kirchhoff law if $K\\Phi=0$. The first Kirchhoff law can be seen as a conservation law in the sense that at each node of the electrical circuit all incoming electrical currents have to be balanced by all outgoing electrical currents. In this sense the operator $K$, acting on functions $\\Phi$ defined on the edges of the graph, can be seen as a divergence operator. When $\\Phi$ is $\\nabla u$, the ``naive'' gradient of $u$, $(\\nabla u)_{ij}=u_j-u_i$, and $u$ is any function defined on the nodes of the graph, we have the Laplacian operator $\\Delta = K\\nabla$. In particular, $u$ is harmonic ($\\Delta u=0$) if and only if $u$ satisfies the mean value identity at each node, i.e. $u_i =\\sum_{j=1}^n w_{ij}u_j$, whose global or local character depends on the concentration of the nonvanishing terms of the matrix $w_{ij}$. For a complete analysis see \\cite{DoyleSnellBook84}.\n\nLet us start by some basic abstract definitions. In the following sections we shall profusely exemplify and illustrate the general framework.\n\nLet $X$ be a locally compact Hausdorff space. Set $\\mathscr{C}_c(X)$ and $\\mathscr{C}_c(X\\times X)$ to denote the spaces of real valued continuous functions with compact support, defined on $X$ and $X\\times X$ respectively. We shall use capital Greek letters $\\Phi=\\Phi(x,y)$ to denote real functions defined on $X\\times X$ and small Greek letters $\\varphi=\\varphi(x)$ to denote real functions defined on $X$. Let $\\mu$ be a Borel probability measure on $X$ and $\\pi$ be a Borel probability measure on $X\\times X$. 
For a given $\\Phi\\in\\mathscr{C}_c(X\\times X)$ we say that a function $\\psi$ defined on $X$ is a \\textbf{Kirchhoff divergence of $\\Phi$ with respect to $\\mu$ and $\\pi$} if the equation\n\\begin{equation}\\label{eq:KichDivOperator}\n\\int_X \\varphi(x)\\psi(x) d\\mu(x) = \\iint_{X\\times X} \\varphi(x)\\Phi(x,y) d\\pi(x,y)\n\\end{equation}\nholds for every $\\varphi$ in $\\mathscr{C}_c(X)$. It is not difficult to provide examples showing that solutions of \\eqref{eq:KichDivOperator} may not exist and may not be unique. For nonexistence take $X=[0,1]$, $\\mu=\\delta_0$ the Dirac delta at the origin, $d\\pi= dx dy$ the area in the unit square and $\\Phi\\equiv 1$. To show a case of non-uniqueness take again $X=[0,1]$, $\\mu=\\delta_0$, $\\Phi\\equiv 1$ and $\\pi=\\delta_0\\times \\delta_0$.\n\nGiven a finite measure $\\pi$ on $X\\times X$, as usual, we denote by $\\pi^i$, $i=1,2$, the two marginal distributions of $\\pi$: for $B$ a Borel set in $X$, $\\pi^1(B)=\\pi(B\\times X)$ and $\\pi^2(B)=\\pi(X\\times B)$.\n\nThe Radon-Nikodym Theorem provides a simple criterion for existence and uniqueness, up to null sets, of a solution $\\psi$ of \\eqref{eq:KichDivOperator}, which shall be enough for our further work. Given $\\Phi\\in\\mathscr{C}_c(X\\times X)$ and $\\pi$ a probability on $X\\times X$, we write $\\pi_\\Phi$ to denote the measure $d\\pi_\\Phi(x,y)=\\Phi(x,y)d\\pi(x,y)$ and $\\pi^1_\\Phi$ and $\\pi^2_\\Phi$ to denote the first and second marginals of $\\pi_\\Phi$.\n\\begin{proposition}\n\tLet $X$ be a locally compact Hausdorff space. Let $\\mu$ be a Borel probability on $X$, $\\pi$ a Borel probability on $X\\times X$ and $\\Phi\\in\\mathscr{C}_c(X\\times X)$. 
If the first marginal $\\pi^1_\\Phi$ of $\\pi_\\Phi$ is absolutely continuous with respect to $\\mu$, then the Radon-Nikodym derivative of $\\pi^1_\\Phi$ with respect to $\\mu$, $\\frac{d\\pi^1_\\Phi}{d\\mu}$, solves \\eqref{eq:KichDivOperator}.\n\\end{proposition}\n\\begin{proof}\nNotice that since $\\pi_{\\Phi}^1$ is the first marginal of $\\pi_{\\Phi}$, we have that $\\int_X\\varphi(x) d\\pi_{\\Phi}^1(x)= \\iint_{X\\times X}\\varphi(x) d\\pi_{\\Phi}(x,y)=\\iint_{X\\times X}\\varphi(x)\\Phi(x,y) d\\pi(x,y)$ for every $\\varphi\\in\\mathscr{C}_c(X)$. On the other hand, since we are assuming $\\pi_{\\Phi}^1<<\\mu$, we have\n\\begin{equation*}\n\\iint_{X\\times X}\\varphi(x)\\Phi(x,y) d\\pi(x,y) = \\int_X\\varphi(x)d\\pi^1_\\Phi(x) = \\int_X\\varphi(x)\\frac{d\\pi_{\\Phi}^1}{d\\mu}(x) d\\mu(x)\n\\end{equation*}\nfor every $\\varphi\\in\\mathscr{C}_c(X)$. This gives \\eqref{eq:KichDivOperator} with $\\psi=\\frac{d\\pi_{\\Phi}^1}{d\\mu}$, as desired.\n\\end{proof}\n\nA probability measure $\\pi$ on $X\\times X$ is said to be a coupling (see \\cite{Villanibook}) between the probability measures $\\mu$ and $\\nu$ on $X$ if $\\pi^1=\\mu$ and $\\pi^2=\\nu$. If $\\pi$ is such a coupling, then $\\pi^1_\\Phi$ is absolutely continuous with respect to $\\pi^1=\\mu$ and also $\\pi^2_\\Phi$ is absolutely continuous with respect to $\\pi^2=\\nu$, no matter which particular $\\Phi\\in\\mathscr{C}_c(X\\times X)$ is chosen.\n\n\\begin{proposition}\n\tLet $\\mu$ and $\\nu$ be two probability measures on the Borel sets of $X$ and let $\\pi$ be a coupling of $\\mu$ and $\\nu$. 
Then for every $\\Phi\\in\\mathscr{C}_c(X\\times X)$, the measures $\\pi^1_\\Phi$ and $\\pi^2_\\Phi$ are absolutely continuous with respect to $\\mu$ and $\\nu$ respectively.\n\\end{proposition}\n\\begin{proof}\n\tSince $\\Phi$ is bounded, we have that for every Borel set $B$ in $X$,\n\\begin{align*}\n\t\\abs{\\pi^1_\\Phi (B)} & = \\abs{\\pi_\\Phi (B\\times X)}\\\\\n\t& = \\abs{\\iint_{B\\times X}\\Phi(x,y) d\\pi(x,y)}\\\\\n\t&\\leq \\norm{\\Phi}_\\infty \\pi(B\\times X)\\\\\n\t&= \\norm{\\Phi}_\\infty \\mu(B).\n\t\\end{align*}\nAlso, $\\abs{\\pi^2_\\Phi (B)}\\leq \\norm{\\Phi}_\\infty \\nu(B).$\n\\end{proof}\nThe above propositions prove the following statement.\n\\begin{theorem}\\label{thm:RadonNikodymFirstMarginal}\n\tLet $X$ be a locally compact Hausdorff space. Let $\\pi$ be a Borel probability measure on $X\\times X$. Let $\\Phi\\in\\mathscr{C}_c(X\\times X)$. Then the Radon-Nikodym derivative $\\frac{d\\pi^1_\\Phi}{d\\mu}$ solves equation \\eqref{eq:KichDivOperator} with $\\mu=\\pi^1$, the first marginal of $\\pi$, and $\\pi^1_\\Phi$ the first marginal of $\\pi_{\\Phi}(A)=\\iint_A \\Phi d\\pi$, for $A$ a Borel set in $X\\times X$.\n\\end{theorem}\nWe shall use the notation $Kir_\\pi \\Phi$ for the solution $\\frac{d\\pi^1_\\Phi}{d\\mu}$\n of \\eqref{eq:KichDivOperator} in this case. Notice that $\\Phi$ continuous and bounded suffices for the above results. 
In particular, if $f:X\\to \\mathbb{R}$ is continuous and bounded, so is $F(x,y)=f(y)-f(x)$, and we can define a Laplacian type operator based on the Kirchhoff divergence by\n \\begin{equation*}\n \\Delta_\\pi f = Kir_\\pi F.\n \\end{equation*}\n Hence, at least formally, the solution of the heat conduction problem\n \\begin{equation*}\n (P)\\,\\left\\{\n \\begin{array}{ll}\n \\frac{\\partial u}{\\partial t} = \\Delta_\\pi u,\\, & t>0, x\\in X\\\\\n u(x,0)=g(x) \\, & x\\in X\n \\end{array}\n \\right.\n \\end{equation*}\n is given by\n \\begin{equation*}\n u(x,t) = (e^{t \\Delta_\\pi}g)(x).\n \\end{equation*}\n Or, when a spectral resolution of the operator $\\Delta_\\pi$, in terms of a sequence of eigenvalues $\\lambda_i$ and eigenfunctions $\\psi_i$, is available,\n \\begin{equation*}\n u(x,t) = \\sum_i e^{t \\lambda_i} \\proin{g}{\\psi_i} \\psi_i (x)\n \\end{equation*}\n with $\\proin{g}{\\psi_i} = \\int_X g(x)\\psi_i (x) d\\mu(x)$.\n\n In the next sections we aim to explore the above abstract setting for some particular couplings.\n\n\\section{The finite case. Weighted graphs}\\label{sec:FiniteCase}\n\nLet $X=\\mathcal{V}=\\set{1,2,\\ldots,n}$ be the set of vertices of a weighted graph $\\mathcal{G}=(\\mathcal{V},E,w)$, where $E=\\set{(i,j): i\\in\\mathcal{V}, j\\in\\mathcal{V}}=X\\times X$ is the set of all edges of $\\mathcal{G}$, and $w:E\\to\\mathbb{R}^+\\cup\\{0\\}$ is the nonnegative weight of each edge, with $w_{ij}=w_{ji}$, $w_{ii}=0$ for every $i$, $w_{ij}>0$ for $i\\neq j$, and $\\sum_{i=1}^n\\sum_{j=1}^n w_{ij}=1$. The weight $w$ determines the probability measure $\\pi$ by $\\pi(A)=\\sum_{(i,j)\\in A}w_{ij}$ for a subset $A$ of $E$. Hence the first marginal $\\mu$ of $\\pi$ is given by the weights $\\mu_i=\\sum_{j=1}^n w_{ij}$. In other words, $\\mu(B)=\\sum_{i\\in B}\\mu_i=\\sum_{i\\in B}\\sum_{j=1}^n w_{ij}=\\pi(B\\times X)$. 
So that\n\\begin{equation*}\n\\int_X \\varphi d\\mu = \\sum_{i=1}^{n} \\mu_i \\varphi_i, \\textrm{ and }\n\\end{equation*}\n\\begin{equation*}\n\\iint_{X\\times X} \\Phi d\\pi =\\sum_{j=1}^n\\sum_{i=1}^n w_{ij}\\Phi_{ij}.\n\\end{equation*}\nEquation \\eqref{eq:KichDivOperator} takes the form\n\\begin{equation*}\n\\sum_{i=1}^n \\mu_i\\varphi_i \\psi_i =\\int_X \\varphi \\psi d\\mu=\n\\iint_{X\\times X} \\varphi(x)\\Phi(x,y) d\\pi =\\sum_{j=1}^n\\sum_{i=1}^n w_{ij}\\varphi_i\\Phi_{ij} = \\sum_{i=1}^n\\varphi_i \\left(\\sum_{j=1}^n w_{ij}\\Phi_{ij}\\right),\n\\end{equation*}\nwhich should hold for every $\\varphi$. Hence\n$\\mu_i\\psi_i = \\sum_{j=1}^n w_{ij}\\Phi_{ij}$,\nor, since each $\\mu_i$ is positive,\n\\begin{equation*}\n\\psi_i =(Kir\\, \\Phi)_i =\\frac{1}{\\mu_i}\\sum_{j=1}^n w_{ij}\\Phi_{ij}=\\frac{1}{\\sum_{j=1}^n w_{ij}}\\sum_{j=1}^n w_{ij}\\Phi_{ij}.\n\\end{equation*}\nThe corresponding Laplace operator of a function $f$ defined on the vertices is given by\n\\begin{equation*}\n(\\Delta_\\pi f)_i =\\frac{1}{\\mu_i}\\sum_{j=1}^n w_{ij}(f_j - f_i).\n\\end{equation*}\nThe harmonic functions in this setting are those that satisfy the mean value identity\n\\begin{equation*}\nf_i = \\frac{1}{\\sum_{j=1}^n w_{ij}}\\sum_{j=1}^n w_{ij} f_j.\n\\end{equation*}\nIn matrix notation, $\\Delta_\\pi=D^{-1}W-I$, where $D = \\textrm{diagonal}(\\mu_1,\\ldots,\\mu_n)$, and $W=(w_{ij})$. The diffusion problem\n\\begin{equation*}\n\\left\\{\n\\begin{array}{ll}\n\\frac{\\partial u}{\\partial t} = \\Delta_\\pi u,\\, & t>0\\\\\nu(i,0)=f_i \\, &i=1,\\ldots,n\n\\end{array}\n\\right.\n\\end{equation*}\nhas the solution $\\overline{u}(t)=e^{t\\Delta_\\pi}\\overline{f}$, $\\overline{f}=(f_1,\\ldots,f_n)$.\n\nWe have $e^{t\\Delta_\\pi}=e^{-t}e^{tD^{-1}W}$.\nThe general theory of Markov chains can be applied to the analysis of the steady state for the solution $\\overline{u}(t)$ of $(P)$. See \\cite{Roberts1976discrete}. 
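To illustrate, consider the complete graph on three vertices with equal weights $w_{ij}=\tfrac{1}{6}$ for $i\neq j$, so that $\sum_{i=1}^3\sum_{j=1}^3 w_{ij}=1$ and $\mu_i=\tfrac{1}{3}$ for each $i$. Then\n\\begin{equation*}\n(\\Delta_\\pi f)_i = 3\\sum_{j\\neq i}\\tfrac{1}{6}(f_j-f_i)=\\frac{f_j+f_k}{2}-f_i, \\qquad \\{i,j,k\\}=\\{1,2,3\\},\n\\end{equation*}\nso that $f$ is harmonic precisely when each of its values is the average of the other two, which forces $f$ to be constant.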
An $n\\times n$ transition matrix $B$ is said to correspond to a regular Markov chain if some power of $B$ has only positive elements. The Fundamental Limit Theorem for regular Markov chains proves that there exists a Markov matrix $M$ with all rows equal to $m=(m_1,\\ldots,m_n)$, with $m_i>0$ for every $i=1, \\ldots, n$ and $\\sum_{i=1}^n m_i=1$, such that\n\\begin{equation*}\n\\lim_{k\\to\\infty} B^k = M.\n\\end{equation*}\nThe next result is a particular instance of convergence to equilibrium (see \\cite{NorrisBook97}).\n\\begin{proposition}\\label{propo:limitMarkovmatrix}\nFor $n\\geq 3$, let $W=(w_{ij})$ be a nonnegative $n\\times n$ matrix such that $w_{ii}=0$ for every $i=1,\\ldots,n$, $w_{ij}=w_{ji}>0$ for each $i\\neq j$ and $\\sum_{i=1}^n\\sum_{j=1}^n w_{ij}=1$. Let $\\pi$ be the probability measure defined on $\\{1,\\ldots,n\\}^2$ by $\\pi(A)=\\sum_{(i,j)\\in A} w_{ij}$. Given a function $\\overline{f}=(f_1,\\ldots,f_n)$ defined on the vertices $\\mathcal{V}$ of the weighted graph $\\mathcal{G}$, set $\\overline{u}(t)= e^{t\\Delta_\\pi}\\overline{f}$ to denote the solution of $(P)$ with $t>0$. Then\n\\begin{equation*}\n\\lim_{t\\to\\infty} \\overline{u}(t) = \\left(\\sum_{j=1}^n m_j f_j\\right)\\overline{1},\n\\end{equation*}\nwhere $\\overline{1}=(1,\\ldots,1)$ and $\\overline{m}=(m_1,\\ldots,m_n)$ is the constant row of the Markov limit matrix\n\\begin{equation*}\n(D^{-1}W)^{\\infty} = \\lim_{k\\to\\infty} (D^{-1}W)^k = \\begin{pmatrix}\n\\overline{m}\\\\\n\\vdots\\\\\n\\overline{m}\n\\end{pmatrix}.\n\\end{equation*}\n\\end{proposition}\n\n\\begin{proof}\nNotice first that $\\widetilde{W}=D^{-1}W$ is the matrix of a Markov chain with positive entries except for its diagonal terms. Hence $\\widetilde{W}$ is a regular Markov chain. 
From the Fundamental Limit Theorem for regular Markov chains, we have that\n\\begin{equation*}\n\\lim_{k\\to\\infty} \\widetilde{W}^k = M = \\begin{pmatrix}\n\\overline{m}\\\\\n\\vdots\\\\\n\\overline{m}\n\\end{pmatrix}\n\\end{equation*}\nwith $\\overline{m}=(m_1,\\ldots,m_n)$, $m_i>0$ for every $i=1,\\ldots,n$ and $\\sum_{i=1}^n m_i =1$. On the other hand,\n\\begin{equation*}\n\\overline{u}(t) = e^{-t}\\sum_{l\\geq 0}\\frac{t^l}{l!}\\widetilde{W}^l\\overline{f},\n\\end{equation*}\nfor $t>0$. With the standard notation for norms in $\\mathbb{R}^n$, we have that for every $\\varepsilon>0$ there exists an integer $L$ such that for $l>L$ we have $\\norm{\\widetilde{W}^l\\overline{f}-M\\overline{f}}<\\frac{\\varepsilon}{2}$. Hence\n\\begin{align*}\n\\norm{\\overline{u}(t)-\\left(\\sum_{j=1}^n m_j f_j\\right)\\overline{1}}\n& = \\norm{\\overline{u}(t)-M\\overline{f}}\\\\\n&= \\norm{e^{-t}\\sum_{l\\geq 0}\\frac{t^l}{l!}(\\widetilde{W}^l\\overline{f}-M\\overline{f})}\\\\\n& \\leq e^{-t}\\sum_{l=0}^L\\frac{t^l}{l!}\\norm{\\widetilde{W}^l\\overline{f}-M\\overline{f}} +\n\\frac{\\varepsilon}{2}e^{-t}\\sum_{l\\geq L+1}\\frac{t^l}{l!}\\\\\n&<\\varepsilon\n\\end{align*}\nfor $t$ large enough.\n\\end{proof}\n\n\n\n\\section{Markov coupling}\nLet $X$ be a locally compact Hausdorff space. Let $\\mu$ and $\\nu$ be two probabilities on the Borel subsets of $X$. Let $\\pi$ be a probability on the Borel sets of $X\\times X$ that is absolutely continuous with respect to $\\mu\\times \\nu$.\n\\begin{lemma}\\label{lem:couplingMarkov}\nLet $X$, $\\mu$, $\\nu$ and $\\pi$ be as described. 
Then $\\pi$ is a coupling for $\\mu$ and $\\nu$ if and only if $K=\\frac{d\\pi}{d(\\mu\\times\\nu)}$ is a Markov kernel in the sense that\n\\begin{enumerate}[(i)]\n\\item $\\int_X K(x,y) d\\mu(x)=1$ for $\\nu$ almost every $y\\in X$, and\n\\item $\\int_X K(x,y) d\\nu(y)=1$ for $\\mu$ almost every $x\\in X$.\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nAssume first that $\\pi$ is a coupling for $\\mu$ and $\\nu$. Then for every Borel set $B$ in $X$ we have\n\\begin{equation*}\n\\mu(B)=\\pi(B\\times X)=\\iint_{B\\times X} \\frac{d\\pi}{d(\\mu\\times \\nu)} d\\mu d\\nu\n=\\int_B\\left(\\int_X K(x,y) d\\nu(y)\\right) d\\mu(x).\n\\end{equation*}\nHence $\\int_X K(x,y) d\\nu(y) =1$ for $\\mu$ almost every $x\\in X$, which is $(ii)$. Identity $(i)$ follows from the same argument. Assume now that $(i)$ and $(ii)$ hold; then, by Fubini's theorem,\n\\begin{equation*}\n\\pi(B\\times X)=\\iint_{B\\times X} d\\pi=\\iint_{B\\times X} K(x,y) d\\mu(x) d\\nu(y)=\\int_B\\left(\\int_X K(x,y) d\\nu(y)\\right) d\\mu(x)=\\mu(B).\n\\end{equation*}\n\\end{proof}\n\nThe next statement provides the Kirchhoff divergence operator associated with this type of Markov coupling.\n\\begin{theorem}\n\tLet $X$ be a locally compact Hausdorff topological space. Let $\\mu$ and $\\nu$ be two given Borel probability measures on $X$. Let $\\pi$ be the coupling for $\\mu$ and $\\nu$ given by\n\t\\begin{equation*}\n\t\\pi(A) = \\iint_A K(x,y) d\\mu(x) d\\nu(y)\n\t\\end{equation*}\n\twith $K$ satisfying $(i)$ and $(ii)$ in Lemma~\\ref{lem:couplingMarkov} and $A$ any Borel set in $X\\times X$. Then, for $\\Phi\\in\\mathscr{C}_c(X\\times X)$ we have\n\t\\begin{equation*}\n\tKir_\\pi \\Phi(x) = \\int_{y\\in X} K(x,y) \\Phi(x,y) d\\nu(y).\n\t\\end{equation*}\n\\end{theorem}\n\n\\begin{proof}\n\tThe measure $\\pi_{\\Phi}$ induced by $\\Phi\\in\\mathscr{C}_c(X\\times X)$ is now given by $\\pi_{\\Phi}(A)=\\iint_A \\Phi K d\\mu d\\nu$. 
Its first marginal is\\begin{equation*}\n\t\\pi_{\\Phi}^1(B) = \\pi_{\\Phi}(B\\times X)=\\int_B\\left(\\int_{y\\in X}\\Phi(x,y) K(x,y) d\\nu(y)\\right) d\\mu(x).\n\t\\end{equation*}\n\tHence\n\t\\begin{equation*}\n\tKir_\\pi \\Phi(x)=\\frac{d\\pi_{\\Phi}^1}{d\\mu} = \\int_{y\\in X}\\Phi(x,y)K(x,y) d\\nu(y),\n\t\\end{equation*}\n\tas desired.\n\\end{proof}\n\nFor a function $f\\in\\mathscr{C}_c(X)$, the function $\\Phi(x,y)=f(y)-f(x)$ is continuous and bounded and the operator $Kir_\\pi$ is well defined on $\\Phi$. The Laplacian of $f$ in this setting is, then\n\\begin{equation*}\n\\Delta_\\pi f(x) = \\int_{y\\in X} (f(y)-f(x)) K(x,y) d\\nu(y) = \\int_{y\\in X} K(x,y) f(y) d\\nu(y) - f(x).\n\\end{equation*}\nOr, in terms of operators, $\\Delta_\\pi=\\mathcal{K}-I$, with $\\mathcal{K}f(x)=\\int_{y\\in X} K(x,y)f(y) d\\nu(y)$.\nIn this setting, the corresponding diffusion (P), is given by\n$u(x,t) =e^{t\\Delta_\\pi} f(x)$,\nwith $e^{t\\Delta_\\pi}=\\sum_{k\\geq 0}\\frac{t^k}{k!}(\\mathcal{K}-I)^k= e^{-t}\\sum_{m\\geq 0}\\frac{t^m}{m!}\\mathcal{K}^m$.\n\nAs before, the existence and structure of steady states for $t\\to\\infty$ depends on the particular dynamics of the sequence $\\{\\mathcal{K}^j: j\\geq 0\\}$ of iterations of the operator $\\mathcal{K}$ induced by the kernel $K$ and the measure $\\nu$ on $X$. Next we explore the case of dyadic Markov kernels, where the spectral analysis can be explicitly carried through Haar type systems built on dyadic type families on abstract settings.\n\nLet $X=[0,1)$ with the usual distance. Let $\\mathcal{D}=\\cup_{j\\geq 0}\\mathcal{D}^j$, $\\mathcal{D}^j=\\{I^j_k=[k2^{-j},(k+1)2^{-j}): k=0,1,\\ldots,2^j-1\\}$ be the family of standard dyadic intervals in $[0,1)$. For $j\\geq 1$, $I=I^j_k\\in\\mathcal{D}^j$ the Haar function $h_I$ is given by $h_I(x)=2^{j\/2}h^0_0(2^j x-k)$, with $h^0_0(x)=\\mathcal{X}_{[0,\\tfrac{1}{2})}(x)-\\mathcal{X}_{[\\tfrac{1}{2},1)}(x)$. 
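For instance, for $j=1$ and $k=0$, so that $I=I^1_0=[0,\tfrac{1}{2})$, this formula gives\n\\begin{equation*}\nh_{I^1_0}(x)=\\sqrt{2}\\, h^0_0(2x)=\\sqrt{2}\\left(\\mathcal{X}_{[0,\\tfrac{1}{4})}(x)-\\mathcal{X}_{[\\tfrac{1}{4},\\tfrac{1}{2})}(x)\\right).\n\\end{equation*}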
The system $\\mathscr{H}\\cup \\{\\mathcal{X}_{[0,1)}\\}$, with $\\mathscr{H}=\\{h_I: I\\in\\mathcal{D}\\}$, provides an orthonormal basis for $L^2([0,1), dx)$. The dyadic family $\\mathcal{D}$ provides also a natural metric structure on $[0,1)$. For $x$ and $y$ in $[0,1)$, set $\\delta(x,y)=\\inf\\{\\abs{I}: I\\in\\mathcal{D} \\textrm{ with } x,y\\in I\\}$. Then $\\delta$ is an ultrametric on $[0,1)$ whose balls are the dyadic intervals. In fact, $B_{\\delta}(x,r)=\\{y\\in [0,1): \\delta(x,y)<r\\}$ is a dyadic interval.\n\\begin{proposition}\\label{thm:heatsolutioncoupling}\n\\begin{enumerate}\n\\item[(2)] The steady state of the diffusion is the mean value of the initial condition: $\\lim_{t\\to\\infty}u(x,t)=\\int_0^1 f(y) dy$, where $u(x,t)=e^{t\\Delta_\\pi}f(x)$ solves\n\\begin{equation*}\n\\left\\{\n\\begin{array}{ll}\n\\frac{\\partial u}{\\partial t} = \\Delta_\\pi u,\\, & t>0, x\\in [0,1)\\\\\nu(x,0)=f(x) \\, & x\\in [0,1).\n\\end{array}\n\\right.\n\\end{equation*}\n\\end{enumerate}\n\\end{proposition}\n\n\nThe formula provided by $(2)$ in Proposition~\\ref{thm:heatsolutioncoupling} shows that the steady state of this diffusion is the mean value of the initial condition, in agreement with the discrete case in Proposition~\\ref{propo:limitMarkovmatrix}, when $m_j=\\tfrac{1}{n}$ for every $j$.\n\n\\section{Deterministic coupling}\nLet $X$ be a locally compact Hausdorff topological space. Let $T$ be a Borel measurable mapping on $X$. Let $\\mu$ be a Borel probability on $X$. Let $G:X\\to X\\times X$ be given by $G(x)=(x,T(x))$. For $A$ a Borel set in $X\\times X$, define $\\pi(A)=\\mu(G^{-1}(A))$. Hence, the first marginal $\\pi^1$ of $\\pi$ is $\\mu$ and the second is $\\nu(B)=\\mu(T^{-1}(B))$. See \\cite{Villanibook} for details.\n\\begin{theorem}\nLet $X$, $T$ and $\\pi$ be as above. Then for $\\Phi\\in\\mathscr{C}_c(X\\times X)$ we have\n\\begin{equation*}\nKir_\\pi \\Phi(x) = \\Phi(x,T(x)).\n\\end{equation*}\n\\end{theorem}\n\\begin{proof}\nIn order to apply Theorem~\\ref{thm:RadonNikodymFirstMarginal}, we have to compute the first marginal $\\pi^1_\\Phi$ of $\\pi_\\Phi$. 
Notice first that since for every Borel set $A$ in $X\\times X$,\n\\begin{equation*}\n\\iint_{X\\times X}\\mathcal{X}_A(x,y) d\\pi(x,y) = \\pi(A) = \\mu(G^{-1}(A))=\\int_X \\mathcal{X}_A(x,T(x)) d\\mu(x),\n\\end{equation*}\nwe also have the formula\n$\\iint_{X\\times X}\\sigma(x,y) d\\pi(x,y) = \\int_X \\sigma(x,T(x)) d\\mu(x)$\nfor simple functions $\\sigma$ and also for bounded measurable functions. Hence\n\\begin{equation*}\n\\pi_\\Phi(A)=\\iint_A \\Phi d\\pi=\\iint_{X\\times X}\\mathcal{X}_A \\Phi d\\pi = \\int_X \\mathcal{X}_A(x,T(x)) \\Phi(x,T(x)) d\\mu(x).\n\\end{equation*}\nSo that\n\\begin{equation*}\n\\pi^1_\\Phi(B)=\\pi_\\Phi(B\\times X)=\\int_B \\Phi(x,T(x)) d\\mu(x).\n\\end{equation*}\nHence\n\\begin{equation*}\nKir_\\pi \\Phi(x)=\\frac{d\\pi^1_\\Phi(x)}{d\\mu}=\\Phi(x,T(x)).\n\\end{equation*}\n\\end{proof}\n\nFor a function $f\\in\\mathscr{C}_c(X)$ the corresponding Laplacian operator is then given by\n\\begin{equation*}\n\\Delta_\\pi f(x) = f(T(x))-f(x).\n\\end{equation*}\nOr, in operational form\n\\begin{equation*}\n\\Delta_\\pi = \\tau - I,\n\\end{equation*}\nwhere $\\tau f = f\\circ T$ and $I$ is the identity.\n\nLet us now consider the diffusion problem (P) in our current situation of the determi\\-nis\\-tic transport of $\\mu$ through $T$.\n A way to get the solution of (P) in this setting is provided by the explicit computation of $e^{t\\Delta_\\pi}=\\sum_{k=0}^{\\infty}\\frac{t^k}{k!}\\Delta^k_\\pi$.\n\n\\begin{lemma}\\label{lemma:formulaLaplacianSemigroupIterated}\nLet $X$, $\\mu$, $T$ and $\\pi$ be as before, then\n\\begin{equation*}\ne^{t\\Delta_\\pi}f = e^{-t}\\sum_{l\\geq 0}\\frac{t^l}{l!} (f\\circ T^l)\n\\end{equation*}\nfor every $f\\in \\mathscr{C}_c(X)$.\n\\end{lemma}\n\\begin{proof}\nSince $f$ is bounded, we see that the series above is absolutely convergent and that the $L^\\infty$-norm of the right hand side is bounded by the $L^\\infty$-norm of $f$. 
Since\n\\begin{equation*}\\label{eq:laplacianIterated}\n\\Delta^k_\\pi f = \\sum_{l=0}^k \\binom{k}{l}(-1)^{k-l} f\\circ T^l,\n\\end{equation*}\nwe have\n\\begin{align*}\ne^{t\\Delta_\\pi} f &= \\sum_{k\\geq 0}\\frac{t^k}{k!}\\Delta_\\pi^k f\\\\\n&= \\sum_{k\\geq 0}\\frac{t^k}{k!}\\sum_{l=0}^k \\binom{k}{l}(-1)^{k-l} f\\circ T^l\\\\\n&= \\sum_{l\\geq 0}f\\circ T^l\\sum_{k\\geq l}\\frac{(-1)^{k-l}}{k!}\\binom{k}{l} t^k\\\\\n&= \\sum_{l\\geq 0}\\frac{t^l}{l!}f\\circ T^l\\sum_{k\\geq l}\\frac{(-1)^{k-l}}{(k-l)!} t^{k-l}\\\\\n&= e^{-t}\\sum_{l\\geq 0}\\frac{t^l}{l!}f\\circ T^l.\n\\end{align*}\n\\end{proof}\nHence, in the case of the Laplacian provided by a deterministic coupling of measures, the solution of the initial value problem with initial data $f$ exhibits a wide diversity of steady states, depending on the dynamics induced by the iterated system $\\{T^l: l\\geq 0\\}$. In the next examples we only aim to illustrate this fact. For the case of ergodic mappings $T$, where $\\lim_{t\\to\\infty}u(x,t)$ is a mean value of the initial condition, we mention a preliminary result due to F.~J.~Mart\\'in-Reyes \\cite{MartinReyespreprint}. Nevertheless, perhaps more interesting from the point of view of the steady state as a classifier of couplings and transports are some particular non-ergodic cases, such as those considered next.\n\\begin{proposition}\nLet $X=[-\\tfrac{1}{2},\\tfrac{1}{2}]$, $d\\mu=dx$, $T(x)= -x$. Then for every continuous function $f$ defined on $[-\\tfrac{1}{2},\\tfrac{1}{2}]$, its even part $f_e$ is the steady state of $e^{t\\Delta}f$. Precisely,\n\\begin{equation*}\ne^{t\\Delta}f\\to f_e = \\frac{f\\circ T + f}{2}, \\quad t\\to \\infty,\n\\end{equation*}\nuniformly on $[-\\tfrac{1}{2},\\tfrac{1}{2}]$.\n\\end{proposition}\n\\begin{proof}\nLet us apply Lemma~\\ref{lemma:formulaLaplacianSemigroupIterated} in the current setting. Note that $f\\circ T^l(x)=f((-1)^lx)$. 
Hence, with $f=f_e+f_o$, where $f_o=\\frac{f(x)-f(-x)}{2}$ is the odd part of $f$, and since $f_o((-1)^l x)=(-1)^l f_o(x)$,\n\\begin{align*}\ne^{t\\Delta}f(x) &= e^{-t}\\sum_{l\\geq 0}\\frac{t^l}{l!} f((-1)^l x)\\\\\n&= e^{-t}\\sum_{l\\geq 0}\\frac{t^l}{l!}\\left(f_e(x) + f_o((-1)^l x)\\right)\\\\\n&= e^{-t}\\left(\\sum_{l\\geq 0}\\frac{t^l}{l!}\\right) f_e(x) + e^{-t}\\left(\\sum_{l\\geq 0}\\frac{(-t)^l}{l!}\\right)f_o(x)\\\\\n&= f_e(x) + e^{-2t} f_o(x).\n\\end{align*}\n\\end{proof}\n\nThe next result deals with the Cantor function in $[0,1]$.\n\\begin{proposition}\n\tLet $X=[0,1]$, $T$ the Cantor function and $\\mu$ the Hausdorff probability measure supported in the Cantor set contained in $[0,1]$. Then $\\pi=\\mu\\circ G^{-1}$, with $G(x)=(x,Tx)$, is a coupling between $\\mu$ and $dx$ in $[0,1]$. Moreover, the steady state of the solution of (P) is given by the function $g=f\\circ T$.\n\\end{proposition}\n\\begin{proof}\n\tSince $T$ is continuous, the uniform probability $\\mu$ on the Cantor set is given on intervals by $\\mu([a,b))=T(b)-T(a)$. Let $\\nu=\\mu\\circ T^{-1}$. For $[c,d]$ in $[0,1]$ with $c$ and $d$ not belonging to the set of dyadic numbers $\\{k2^{-j}: j\\geq 0; k=0,\\ldots,2^j-1\\}$, we have that $T^{-1}(\\{c\\})$ and $T^{-1}(\\{d\\})$ are singletons, say $\\alpha=T^{-1}(c)$ and $\\beta=T^{-1}(d)$. Then $\\nu([c,d])=\\mu(T^{-1}([c,d]))=\\mu([T^{-1}(c),T^{-1}(d)])=T(T^{-1}(d))-T(T^{-1}(c))=d-c$. Since the complement of the dyadic numbers of $[0,1]$ is dense in $[0,1]$, we get that $d\\nu=dx$. On the other hand, the first and second marginals of $\\pi$ are $\\mu$ and $\\nu$ respectively. The solution of\n\t\\begin{equation*}\n\t\\left\\{\n\t\\begin{array}{ll}\n\t\\frac{\\partial u}{\\partial t} = \\Delta_\\pi u,\\, & x\\in [0,1), t>0, \\\\\n\tu(x,0)=f(x) \\, & x\\in [0,1).\n\t\\end{array}\n\t\\right.\n\t\\end{equation*}\nwith $f$ continuous on $[0,1]$ is given by $u(x,t)=e^{-t}\\sum_{l\\geq 0} \\frac{t^l}{l!} f\\circ T^l(x)$. Let us compute $T^l(x)$ for $x\\in [0,1]$. 
Assume that $x\\in L^j_k$, the $k$-th middle third deleted in the $j$-th approximation of the Cantor set. The central point of $L^j_k$ is $k2^{-j}$ and $T(k2^{-j})=k2^{-j}$. Hence $T(x)=k2^{-j}$, $T^2(x)=T(k2^{-j})=k2^{-j}$. So that $T^l(x)=x$ for $l=0$ and $T^l(x)=k2^{-j}$ for every $l\\geq 1$. Thus\n\\begin{align*}\nu(x,t) &= e^{-t}\\left[f(x)+ \\left(\\sum_{l\\geq 1}\\frac{t^l}{l!}\\right)f(k2^{-j})\\right]\\\\\n&= e^{-t} f(x) + e^{-t}(e^t-1) f(k2^{-j})\\\\\n&= e^{-t} [f(x)-f(k2^{-j})] + f(k2^{-j}),\n\\end{align*}\nwhich tends to $f(k2^{-j})=f(T(x))$ as $t\\to\\infty$.\n\\end{proof}\n\n\n\\providecommand{\\bysame}{\\leavevmode\\hbox to3em{\\hrulefill}\\thinspace}\n\\providecommand{\\MR}{\\relax\\ifhmode\\unskip\\space\\fi MR }\n\\providecommand{\\MRhref}[2]{%\n\t\\href{http:\/\/www.ams.org\/mathscinet-getitem?mr=#1}{#2}\n}\n\\providecommand{\\href}[2]{#2}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\maketitle\n\n\n\\section{Introduction}\nIn~\\cite{BBM_01_Another_look}, Bourgain, Brezis and Mironescu established the following (BBM) difference quotient representation of Sobolev spaces. Let $\\Omega \\subset \\mathbf{R}^n$ be an open set with Lipschitz boundary and let $u \\in W^{1,p}(\\Omega)$. Then \n\\[\n\t\\lim_{\\varepsilon \\to 0^+} \\int_\\Omega\\int_\\Omega \\frac{|u(y) - u(x)|^p}{|y - x|^p} \\, \\rho_\\varepsilon(y-x) \\;\\mathrm{d} y \\;\\mathrm{d} x= K_{p,n} \\int_\\Omega |\\nabla u|^p,\n\\]\nwhere $K_{p,n}$ is a constant depending solely on $p \\in [1,\\infty)$ and the spatial dimension $n$. 
Here,\n$\\{\\rho_\\varepsilon\\}_{\\varepsilon > 0}\\subset L^1(\\mathbf{R}^n)$ is a family of non-negative radial probability functions, that is, \n\\begin{equation}\\label{eq:eps1}\n\\|\\rho_\\varepsilon\\|_{L^1(\\mathbf{R}^n)} = 1,\n\\end{equation}\nwhich approximates the Dirac mass at zero in the sense that\n\\begin{equation}\\label{eq:eps}\n\\lim_{\\varepsilon \\to 0^+} \\|\\rho_\\varepsilon\\|_{L^1(\\mathbf{R}^n \\setminus B_\\delta)} \n= 0 \\qquad \\text{for all} \\; \\delta > 0.\n\\end{equation}\nTheir result states that the converse also holds in the range $1 < p < \\infty$. In this regard, D\\'avila~\\cite{Davila_02} showed that a related representation holds for $BV(\\Omega)$ in the limiting case $p = 1$. \nIt is well-known that a map $u \\in L^1_\\mathrm{loc}(\\mathbf{R}^n)$ belongs to the homogeneous Sobolev space $\\dot W^{1,p}(\\mathbf{R}^n)$ if and only if \n\\begin{equation}\\label{eq:pointwiseD}\n\t\\liminf_{h \\to 0^+} \\|\\delta_{h\\xi} u \\|_{L^p} < \\infty \\quad \\text{for all directions $\\xi \\in \\mathbb{S}^{n-1}$},\n\\end{equation}\nwhere \n\\[\n\t\\delta_{\\eta} u(x)\\coloneqq \\frac{u(x + \\eta) - u(x)}{|\\eta|}, \\qquad x \\in \\mathbf{R}^n,\n\\] \nis the $|\\eta|$-scale point-wise difference quotient of $u$ in the direction $\\frac \\eta{|\\eta|}$. 
When~\\eqref{eq:pointwiseD} holds, the limit in~\\eqref{eq:pointwiseD} exists and \n\\[\n\tD_h u \\longrightarrow Du \\; \\text{strongly in $L^p(\\mathbf{R}^n)^n$, \\, as $h \\to 0^+$.}\n\\]\n\n\nMotivated by the aforementioned BBM criterion and by the results contained in~\\cite{Du_13}, where the first rigorous non-local vector calculus was developed for the gradient, curl, and divergence operators in Hilbert spaces, Mengesha and Spector~\\cite{MS} proved that the non-local gradient \\begin{align*}\n\t\\mathfrak{D}_{\\varepsilon} u & \\coloneqq n \\left(\\pv \\int \\delta_{\\xi} u \\, \\frac{\\xi}{|\\xi|} \\, \\rho_\\varepsilon(\\xi) \\;\\mathrm{d} \\xi\\right) \n\\end{align*}\n``localizes'' towards the classical gradient $D$ in the spirit of the Bourgain--Brezis--Mironescu characterization of Sobolev spaces above. More precisely, they showed that if $u$ belongs to a topological function space $(X,\\tau)$, then \n\\[\n\t\\mathfrak D_\\varepsilon u \\stackrel{\\tau}\\longrightarrow Du\n\\] \nfor a wide selection of relevant topological pairs $(X,\\tau)$; including the Sobolev space $\\dot W^{1,p}(\\mathbf{R}^n)$, and the space of functions with bounded variation $\\dot {BV}(\\mathbf{R}^n)$ endowed with the area convergence topology (which is finer than the strict topology of measures). In particular, they obtained new derivative-free characterizations of Sobolev and BV spaces in the spirit of (BBM). This line of work is also motivated by the introduction of the peridynamics model of continuum mechanics, a model defined over nonlocal integral operators describing physically relevant quantities and laws. For a more detailed bibliographical account of this model, we refer the reader to~\\cite{DM_16,MS} and references therein. 
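As a simple sanity check for the normalization of $\\mathfrak{D}_\\varepsilon$, note that in dimension $n=1$, for $u(x)=x$ we have $\\delta_\\xi u\\equiv \\operatorname{sign}(\\xi)$, hence\n\\[\n\\mathfrak{D}_\\varepsilon u = \\pv \\int \\operatorname{sign}(\\xi)\\,\\frac{\\xi}{|\\xi|}\\,\\rho_\\varepsilon(\\xi)\\,\\mathrm{d}\\xi = \\int \\rho_\\varepsilon = 1 = Du\n\\]\nfor every $\\varepsilon>0$, so the localization is exact on affine functions.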
\n\n\n\nThe objective of this paper is twofold: Firstly, we provide a simple formula identifying the natural distributional radial nonlocal operators $\\mathfrak{A}_\\rho$ associated with any general constant-coefficient first-order differential operator $\\mathcal{A}$, in terms of their associated principal symbol. Secondly, \nwe introduce the nonlocal spherical operators $\\mathscr A_s$, for which we establish smooth, distributional, $L^p$ and area-convergence localizations. With the aid of the spherical kernels, we are able to give a straightforward proof of the localization of the nonlocal radial operators \\emph{on spaces of distributions}; for $\\mathscr D, \\mathscr D'$, and $L^p$ topologies as well as the \\emph{area-convergence of measures}. In particular, we obtain a derivative-free characterization of $L^p$ and measure-valued anisotropic gradients. \n\nOur findings contain certain generalizations of the results of Du and Mengesha~\\cite{DM_16}, where localization results were already introduced and established \\emph{on function spaces}, for nonlocal operators arising from general cubic third-order tensors; with respect to the strong $L^p$ topology and the \\emph{coarser strict topology of measures} (amongst other interesting topologies). In this regard, our symbolic calculus formulas appear to substantially improve upon the transparency of the framework developed in~\\cite{DM_16}. Lastly, it is worth mentioning that our methods \\emph{differ significantly} from the ones developed in previous works, as they are based on the study of localization concerning spherical surface kernels (cf. {\\scshape Section}~\\ref{sec:spherical}). An essential advantage of this is that our proofs can be easily adapted to find localizations for kernels associated with iso-symmetric shapes (such as cubes, regular polygons, or rings) upon a simple modification of the kernels. 
This even applies to non-symmetric profiles (in this case, the localization would yield approximations for certain non-isotropic norms).\n\n\n\\section{Set up and main results}\n\nWe consider a general constant coefficient first-order homogeneous distributional differential operator $\\mathcal{A} : \\mathscr D'(\\mathbf{R}^n;V) \\to \\mathscr D'(\\mathbf{R}^n;W)$, where $V,W$ are finite-dimensional inner product spaces (identified with their own duals). Such operators can always be written in the form\n\\begin{equation}\\label{eq:A}\n\t\\mathcal{A} = \\sum_{i = 1}^n A_i \\partial_i\\;, \\qquad A_i \\in \\mathrm{Hom}(V,W).\n\\end{equation}\nThe gradient, divergence, curl, symmetric gradient, exterior differential, and del-bar operator are examples of such general operators. We recall that, in this form and up to a complex constant, the principal symbol associated with $\\mathcal{A}$ is given by the $\\mathrm{Hom}(V,W)$-valued linear polynomial \n\\[\n\t\\mathbb{A}(\\xi) \\coloneqq \\sum_{i = 1}^n \\xi_iA_i\\,,\\qquad \\xi \\in \\mathbf{R}^n.\n\\] \nIn light of our previous discussion, we define the nonlocal $\\mathcal{A}$-gradient as\n\\begin{align*}\n\t\\mathfrak{A}_\\varepsilon u(x) = \\, \\pv \\, n \\int \\frac{\\mathbb{A}(y-x)[u(y) - u(x)]}{|y-x|^2}\\, \\rho_\\varepsilon(y-x) \\, dy,\n\\end{align*}\nwhere $u : \\mathbf{R}^n \\to V$ is a sufficiently well-behaved vector field.\nBy looking at each coordinate separately and invoking the results discussed in the introduction, we deduce that if $u \\in \\dot W^{1,p}(\\mathbf{R}^n; V)$, then\n\\begin{equation}\\label{eq:first}\n\t\\mathfrak{A}_\\varepsilon u \\longrightarrow \\mathcal{A} u \\quad \\text{strongly in $L^p$}.\n\\end{equation}\nIn general, the requirement $Du \\in L^p$ is \\emph{too restrictive} to deduce this $L^p$ localization, as one would expect the $p$-integrability of $\\mathcal{A} u$ to be necessary and sufficient for its validity (cf. {\\scshape Theorem}~\\ref{thm:Lp}). 
Indeed, the Calder\\'on--Zygmund estimate $\\|D u\\|_{L^p} \\cong \\|\\mathcal{A} u\\|_{L^p}$ only holds for elliptic operators in the range $1 < p < \\infty$. The situation worsens for $p=1$, where the requirement $Du \\in L^1$ is restrictive, even for elliptic operators (cf.~\\cite{KK_16,Ornstein_62}). \n\n\\subsection{Nonlocal calculus} Motivated by this discussion, our main object of study will be the following nonlocal spherical and radial (difference quotient) operators. \n\n\\begin{definition}[Nonlocal spherical operator]For a given test function $u\\in \\mathscr D(\\mathbf{R}^n;V)$ and $s >0$, we define the \\emph{$s$-spherical operator} \n\\begin{align*}\n\t\\mathscr A_s u(x) & \\coloneqq n \\fint_{\\partial B_s} \\Omega_\\mathbb{A}(\\omega) [\\delta_\\omega u(x)] \\, dS(\\omega) \\\\[1em]\n\t& \\phantom{:}= n \\fint_{\\partial B_s} \\mathbb{A}\\left(\\frac{\\omega}{s}\\right) \\left[\\frac{u(x+ \\omega) - u(x)}{s}\\right] \\, dS(\\omega), \\qquad x \\in \\mathbf{R}^n.\n\\end{align*}\nHere, $S$ denotes the uniform $(n-1)$-dimensional Hausdorff measure on $\\partial B_s$ and $\\Omega_\\mathbb{A}$ is the zero-homogeneous profile of $\\mathbb{A}$.\n\\end{definition}\n\nHereinafter $\\rho : \\mathbf{R}^n \\to [0,\\infty)$ will denote an integrable radial Borel function with profile $\\hat \\rho : \\mathbf{R} \\to [0,\\infty)$. 
With a slight abuse of notation (between $\\rho$ and $\\varepsilon$), we define a nonlocal operator associated with $\\mathcal{A}$ through the radial map $\\rho$.\n\n\n\\begin{definition}[Nonlocal radial operator]\n For a given test vector field $u \\in\\mathscr D(\\mathbf{R}^n;V)$, we define the \\emph{nonlocal operator}\n\t\\begin{align*}\\label{eq:rho}\n\t\t\\mathfrak{A}_\\rho u(x) & \\coloneqq \\pv \\, n \\int \\Omega_\\mathbb{A}(h)[\\delta_{h} u(x)] \\, \\rho(h)\\, dh \\\\[1em]\n\t& \\phantom{:}= \\pv \\, n \\int \\frac{\\mathbb{A}(y-x)[u(y) - u(x)]}{|y-x|^2} \\, \\, \\rho(y -x)dy, \\quad x \\in \\mathbf{R}^n.\n\t\\end{align*}\n\\end{definition}\nThese nonlocal operators are linked through the Bochner integral identity\n\\[\n\t\\mathfrak{A}_\\rho u = \\int_0^\\infty n \\omega_n \\hat \\rho(s) s^{n-1} \\mathscr A_s u \\, ds.\n\\]\nIt is straightforward to verify that both $\\mathscr A_s$ and $\\mathfrak{A}_\\rho$ are bounded integral operators from $\\mathscr D(\\mathbf{R}^n;V)$ into $\\mathscr D(\\mathbf{R}^n;W)$ ---this will be formally discussed in {\\scshape Sections}~\\ref{sec:spherical} and~\\ref{sec:radial}--- and are also stable under integration by parts:\n\\begin{proposition}[Integration by parts]\nLet $u \\in \\mathscr D(\\mathbf{R}^n;V)$ and $\\varphi \\in \\mathscr D(\\mathbf{R}^n,W)$ be test functions. Then\n\\[\n\t\\int \\mathscr A_s u \\cdot \\varphi = \\int u \\cdot \\mathscr A_s^* \\varphi\n\\]\nand\n \\[\n \t\\int \\mathfrak{A}_\\rho u \\cdot \\varphi = \\int u \\cdot \\mathfrak{A}^*_\\rho \\varphi\n \\] \n Here $\\mathscr A_s^*$ and\n $\\mathfrak{A}^*_\\rho$ are the respective spherical and nonlocal operators associated with $\\mathcal{A}^* = - \\sum_{i = 1}^n A^t_i \\partial_i$, the formal adjoint of $\\mathcal{A}$. 
\n\\end{proposition}\nUsing duality, it is, therefore, possible to \\emph{uniquely and continuously} extend both $\\mathscr A_s$ and $\\mathfrak{A}_\\rho$ to all distributions taking values in the vector space $V$:\n\n\\begin{definition}[Distributional operators] Let $T \\in \\mathscr D'(\\mathbf{R}^n;V)$. The distributional spherical and nonlocal $\\mathcal{A}$-gradient are defined respectively as the distributions acting on test functions $\\varphi \\in \\mathscr D(\\mathbf{R}^n;W)$ as\n\\[\n\t\\mathscr A_s T(\\varphi) = T(\\mathscr A_s^* \\varphi) \\qquad \\text{and} \\qquad\n\t\\mathfrak{A}_\\rho T(\\varphi) = T(\\mathfrak{A}_\\rho^* \\varphi).\n\\]\n\\end{definition}\nBy construction, it follows that $\\mathscr A_s,\\mathfrak{A}_\\rho : \\mathscr D'(\\mathbf{R}^n;V) \\to \\mathscr D'(\\mathbf{R}^n;W)$ are bounded linear operators. Given that we shall often work directly with distributions, it will be useful to work with extended $L^p$-norms:\n\n\\begin{definition}[Extended $L^p$ norms] Let $1 \\le p \\le \\infty$. For a distribution $T\\in \\mathscr D'(\\mathbf{R}^n;V)$, we define the extended $L^p$-norm of $T$ as\n\t\\begin{align*}\n\t\t\\|T\\|_{p} & = \\left(\\int |T|^p\\right)^\\frac{1}{p} \\coloneqq \\sup \\set{ T[\\varphi] }{\\varphi \\in \\mathscr D(\\mathbf{R}^n;V), \\|\\varphi\\|_{L^q} \\le 1},\n\t\\end{align*}\n\twhere $q$ is the H\\\"older conjugate of $p$ satisfying $p^{-1} + q^{-1} = 1$.\n\t\\end{definition}\n\t\n\\begin{remark}If $p \\in (1,\\infty]$, then $\\|T\\|_p$ is finite if and only if $T$ can be represented by a $p$-integrable map (bounded map when $p = \\infty$). Similarly, $\\|T\\|_1$ is finite if and only if $T$ can be represented by a measure with bounded total variation.
\n\\end{remark}\n\t\nOur first result establishes extended $L^p$-norm bounds and localizations of the spherical operators in terms of the $\\mathcal{A}$-gradient (its proof is contained in {\\scshape Section}~\\ref{sec:spherical}):\n\n\\begin{theorem}[Spherical localization]\\label{thm:spherical_localization}\\ \\begin{enumerate}\n\\setlength{\\itemsep}{10pt}\\setlength{\\parskip}{10pt}\n\\item If $u \\in \\mathscr D(\\mathbf{R}^n;V)$, then \n\\[\n\t\\mathscr A_s u \\stackrel{\\mathscr D}\\longrightarrow \\mathcal{A} u \\qquad \\text{as $s \\to 0^+$}\n\\]\nand\n\\[\n\\int f(\\mathscr A_s u) \\le \\int f(\\mathcal{A} u)\n\\]\nfor all convex functions $f : W \\to \\mathbf{R}$. \n\n\\item If $T \\in \\mathscr D'(\\mathbf{R}^n;V)$, then\n\\[\n\t\\mathscr A_s T \\stackrel{\\mathscr D'}\\longrightarrow \\mathcal{A} T \\qquad \\text{as $s \\to 0^+$}\n\\]\nand\n\\[\n\t\\|\\mathscr A_s T\\|_p \\le \\|\\mathcal{A} T\\|_p \\quad \\text{for all $p \\in [1,\\infty]$.}\n\\]\n\n\n\\item If $\\mathcal{A} T \\in L^p(\\mathbf{R}^n;W)$ for some $p \\in [1,\\infty]$, then, in fact,\n\\[\n\\mathscr A_s T \\in (C \\cap L^p)(\\mathbf{R}^n;W).\n\\]\nIn this case, and for $p \\in [1,\\infty)$, the distributional convergence improves to the strong convergence\n\\[\n\t\\mathscr A_s T \\stackrel{L^p}\\longrightarrow \\mathcal{A} T \\qquad \\text{as $s \\to 0^+$}.\n\\]\n\\item If $\\mathcal{A} T\\in \\mathcal{M}(\\mathbf{R}^n;W)$, then \n\\[\n\\mathscr A_s T \\in (L^\\infty \\cap L^1)(\\mathbf{R}^n;W) \n\\]\nand the absolutely continuous measures $\\mathscr A_s T \\, \\mathscr L^n$ converge to $\\mathcal{A} T$ in the area sense of measures as $s \\to 0^+$, that is,\n\\[\n\t\\lim_{s \\to 0^+} \\int f(\\mathscr A_s T) = \\int f(\\mathcal{A}^a T) + \\int f^\\infty(\\mathcal{A}^s T) \n\\]\nfor all convex maps $f: W \\to \\mathbf{R}$ with linear growth at infinity.
(\\,\\footnote{Here, \n\\[\n\tf^\\infty(w) \\coloneqq \\lim_{t \\to \\infty} \\frac {f(tw)}{t}, \\qquad w \\in W,\n\\]\nis the recession function of $f$ and\n\\[\n\t\\mathcal{A} T = \\mathcal{A}^a T\\, \\mathscr L^n + \\mathcal{A}^s T, \\qquad |\\mathcal{A}^s T| \\perp \\mathscr L^n,\n\\]\nis the Lebesgue--Radon--Nikod\\'ym decomposition of $\\mathcal{A} T$.})\n\\end{enumerate}\n\\end{theorem}\n\nThanks to a simple measure-theoretic lemma discussed in the Appendix, we obtain the following representation for the spherical operators when acting on locally summable maps (cf. Eqn.~\\eqref{eq:rep}).\n\\begin{lemma}[Convolution representation]\\label{lem:GG}\n\tLet $u \\in L^1_\\mathrm{loc}(\\mathbf{R}^n;V)$ and further assume that $\\mathcal{A} u \\in \\mathcal{M}(\\mathbf{R}^n;W)$. Then, \n\t\\[\n\t\t\\mathscr A_s u(x) = \\frac{\\mathcal{A} u(B_s(x))}{\\omega_n s^n}\n\t\\]\nat almost every $x \\in \\mathbf{R}^n$.\n\\end{lemma}\n\nThe localization result for the spherical operators builds directly into a swift proof of the estimates and localization results for nonlocal operators, which will be described next. (The proof of this result is a direct consequence of Eqn.~\\eqref{eq:p_radial} and {\\scshape Proposition}~\\ref{prop:12}.) \n\n\\begin{theorem}\\label{thm:Lp_bounds} Let $1 \\le p < \\infty$ and let $u \\in \\mathscr D'(\\mathbf{R}^n;V)$. Then\n\\[\n\t\\|\\mathfrak{A}_\\rho u \\|_p \\le \\|\\rho\\|_{L^1}^\\frac{1}{p} \\|\\mathcal{A} u \\|_p.\n\\]\nIf moreover $\\|\\mathcal{A} u\\|_p < \\infty$, then\n\\[\n\\mathfrak{A}_\\rho u \\in L^p(\\mathbf{R}^n;W)\n\\]\nand the extended $L^1$-norm estimate\n\\[\n\t\\int |f(\\mathfrak{A}_\\rho u)| \\,d x \\le \\int |f(\\mathcal{A}^a u)| \\,d x + \\int |f^\\infty(d\\mathcal{A}^s u)|\n\\]\nholds for all convex maps $f : W \\to \\mathbf{R}$.
\n\\end{theorem}\n\n\n\n\nFor locally summable maps with locally bounded measure $\\mathcal{A}$-gradients, the operator $\\mathfrak{A}_\\rho$ is expressed, as one would expect, by the principal value integral (this follows from {\\scshape Lemma}~\\ref{lem:11}):\n\\begin{lemma}\nLet $u \\in L^1_\\mathrm{loc}(\\mathbf{R}^n;V)$ with\n$\\|\\mathcal{A} u\\|_p < \\infty$. Then, the limit\n\\[\n\t\\pv \\, n \\int \\frac{\\mathbb{A}(y-x)[u(y) - u(x)]}{|y-x|^2} \\, \\rho(y-x) \\, dy \n\\]\nexists and coincides with $\\mathfrak{A}_\\rho u(x)$ for almost every $x \\in \\mathbf{R}^n$. Here, $u(x)$ is understood as the Lebesgue representative of $u$ at $x$. \n\\end{lemma}\n\nOur first main localization result concerns both an \\emph{energy criterion} and an approximation result for the extended $L^p$-norms of the distribution $\\mathcal{A} u$, in terms of $\\{\\mathfrak{A}_{\\varepsilon} u\\}_\\varepsilon$, where $\\{\\rho_\\varepsilon\\}_{\\varepsilon > 0}$ is a family of radial probability functions satisfying~\\eqref{eq:eps1}-\\eqref{eq:eps}. The precise statement, which follows by gathering the results contained in Eqn.~\\eqref{eq:distributional_convergence}, {\\scshape Lemma}~\\ref{lem:11} and\n{\\scshape Proposition}~\\ref{prop:12}, is the following:\n\\begin{lemma}\\label{lem:Lp_extended}\nLet $\\varphi \\in \\mathscr D(\\mathbf{R}^n;V)$ and let $u \\in \\mathscr D'(\\mathbf{R}^n;V)$. Then\n\\[\n\t\\mathfrak{A}_\\varepsilon \\varphi \\stackrel{\\mathscr D}\\longrightarrow \\mathcal{A} \\varphi \\qquad \\text{and}\\qquad \\mathfrak{A}_\\varepsilon u \\stackrel{\\mathscr D'}\\longrightarrow \\mathcal{A} u.\n\\]\nThe extended $L^p$-norm limit \n\\[\n\t\\lim_{\\varepsilon \\to 0^+} \\int |\\mathfrak{A}_{\\varepsilon} u|^{p} \\in [0,\\infty]\n\\]\t\n always exists.
\n \n If moreover $u \\in L^1_\\mathrm{loc}(\\mathbf{R}^n;V)$, then\n\t\\[\n\t\t \\lim_{\\varepsilon \\to 0^+} \\, \\int \\bigg| \\pv n \\int\\frac{\\mathbb{A}(y-x)[u(y) - u(x)]}{|y-x|^2} \\, \\rho_\\varepsilon(y-x) \\;\\mathrm{d} y\\,\\bigg|^p = \\int |\\mathcal{A} u|^p .\n\t\\]\n\\end{lemma} \nThis result and {\\scshape Corollary}~\\ref{cor:2} directly imply the following localization (and characterization) criterion: \n\n\\begin{theorem}[$L^p$-localization]\\label{thm:Lp}\nLet $1 < p < \\infty$ and let $u \\in \\mathscr D'(\\mathbf{R}^n;V)$. Then\n$\\mathcal{A} u \\in L^p(\\mathbf{R}^n;W)$ if and only if \n\\[\n\t\\liminf_{\\varepsilon \\to 0^+} \\|\\mathfrak{A}_{\\varepsilon} u\\|_{p} < \\infty.\n\\]\nMoreover, in this case\n\\[\n\t\\mathfrak A_{\\varepsilon} u \\longrightarrow \\mathcal{A} u \\quad \\text{strongly in $L^p(\\mathbf{R}^n;W)$.}\n\\]\n\\end{theorem}\n\n\\begin{remark}\nFor $p = 1$, it can be shown that the assertion still holds in the following sense: if $\\mathcal{A} u \\in L^1(\\mathbf{R}^n;W)$, then the limit $\\lim_{\\varepsilon \\to 0^+} \\|\\mathfrak{A}_{\\varepsilon} u\\|_1$ exists and agrees with the $L^1$-norm of $\\mathcal{A} u$ (cf. Corollary~\\ref{cor:L1_sufficiency} below). In general, the converse fails due to concentration effects appearing in the difference quotient integral.\n\\end{remark}\n\t Our second main result covers this case precisely when $\\mathcal{A} u$ is a vector-valued Radon measure.\n\tNotice that, with our definition of extended norm, it holds that a $W$-valued distribution $T$ belongs to $\\mathcal{M}_b(\\mathbf{R}^n;W)$, the space of $W$-valued Radon measures with bounded total variation, if and only if $\\|T\\|_1 < \\infty$, in which case it also holds $\\|T\\|_1 = |T|(\\mathbf{R}^n)$.
In particular, a direct consequence of {\\scshape Theorem}~\\ref{thm:Lp_bounds} and {\\scshape Lemma}~\\ref{lem:Lp_extended} is the following criterion and approximation result:\n\n\\begin{theorem}[Area convergence localization]\\label{thm:M}\nLet $u \\in L^1_\\mathrm{loc}(\\mathbf{R}^n;V)$. Then $\\mathcal{A} u\\in \\mathcal{M}_b(\\mathbf{R}^n;W)$ if and only if \n\\[\n\t\\liminf_{\\varepsilon \\to 0^+} \\|\\mathfrak{A}_{\\varepsilon} u\\|_1 < \\infty.\n\\] \nMoreover, in this case\n\\[\n\t(\\mathfrak A_{\\varepsilon} u) \\mathscr L^n \\longrightarrow \\mathcal{A} u \\quad \\text{in the area sense of measures.}\n\\]\nIn particular, \n\\[\n\t\\lim_{\\varepsilon \\to 0} \\int f(\\mathfrak{A}_\\varepsilon u) \\, dx = \\int f(\\mathcal{A}^a u) \\, dx + \\int f^\\infty\\left(\\frac{\\mathcal{A} u}{|\\mathcal{A} u|}\\right) \\, d|\\mathcal{A}^s u|\n\\]\nfor all continuous integrands $f: W \\to \\mathbf{R}$ with linear growth such that the limit \n\\[\n\tf^\\infty(A) \\coloneqq \\lim_{t \\to \\infty} \\frac{f(tA)}{t} \n\\]\nexists for all $A \\in W$.\n\\end{theorem}\n\nThe particular advantage of the area convergence of measures (over the usual strict convergence) is that it discriminates the appearance of $L^1$ concentrations. In particular, we obtain the following refinement of {\\scshape Theorem}~\\ref{thm:M}:\n\n\\begin{corollary}\\label{cor:L1_sufficiency} \nIf $\\mathcal{A} u \\in L^1(\\mathbf{R}^n;W)$, then\n\\[\n\\mathfrak A_{\\varepsilon} u \\longrightarrow \\mathcal{A} u \\quad \\text{strongly in $L^1$.}\n\\]\n\\end{corollary}\n\n\n\\begin{remark}[More general shapes]\nThe results presented here can be deduced for more general shapes than the sphere and radial symmetry. It is straightforward from our methods that the same results can be obtained by working with any boundary $\\partial U$, where $U$ is a simply connected Lipschitz set containing $0 \\in \\mathbf{R}^n$.
In this particular case, the modified surface kernel takes the form \n\\[\n\t\\mathscr A_{U,s} u(x) = \\frac{1}{s^{n-1}\\,\\mathscr L^n(U)} \\int_{\\partial (sU)} \\mathbb{A}(\\nu(\\omega)) \\left[\\frac{u(x+ \\omega) - u(x)}{s}\\right] \\, dS(\\omega),\n\\]\nwhere $\\nu$ is the exterior normal to $sU$ at $\\partial (sU)$.\n\\end{remark}\n\n\\section{The spherical surface kernels}\\label{sec:spherical} \nIn this section, we discuss localizations concerning spherical kernels. As we shall see below, working with these lower-dimensional measure kernels arises naturally from the Gauss--Green theorem. The analysis of these kernels will be fundamental in establishing similar estimates and localizations for the radial kernels discussed in the introduction. We shall from now on consider the uniform spherical surface measure \n\\[\nm_s = \\frac{ \\mathcal{H}^{n-1} \\mres \\partial B_s}{|B_s|}, \\qquad s > 0.\n\\]\nTo it, we associate a \\emph{surface} kernel $K_s :\\mathbf{R}^n \\setminus \\{0\\} \\to \\mathrm{Hom}(V,W)$ by setting\n\\[\n\tK_s(d\\omega) \\coloneqq \\Omega_\\mathbb{A}(\\omega) \\, m_s(d\\omega),\n\\]\nwhere $\\Omega_\\mathbb{A}$ is the zero-homogeneous angular part of the symbol map $\\mathbb{A}$, i.e., \n\\[\n\t\\Omega_\\mathbb{A}(x) \\coloneqq \\mathbb{A}\\left(\\frac{x}{|x|}\\right), \\qquad x \\in \\mathbf{R}^n - \\{0\\}.\n\\]\nFor each $s>0$, $K_s$ defines a finite $\\mathrm{Hom}(V,W)$-valued measure on $\\mathbf{R}^n$ and is therefore a tensor-valued compactly supported distribution.
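To fix ideas, the following explicit computation (included here purely for illustration) specializes these objects to the divergence operator: for $\\mathcal{A} = \\operatorname{div}$, with $V = \\mathbf{R}^n$, $W = \\mathbf{R}$ and $A_i v = v_i$, one has $\\mathbb{A}(\\xi)v = v \\cdot \\xi$ and $\\Omega_\\mathbb{A}(x)v = v \\cdot \\frac{x}{|x|}$, so that\n\\[\n\tK_s \\star u(x) = \\frac{1}{\\omega_n s^n} \\int_{\\partial B_s(x)} u(y) \\cdot \\nu(y) \\, dS(y), \\qquad \\nu(y) \\coloneqq \\frac{y-x}{s},\n\\]\nthat is, $K_s \\star u(x)$ is the averaged flux of $u$ through $\\partial B_s(x)$, which the divergence theorem identifies with $\\operatorname{div} u\\,(B_s(x))\/(\\omega_n s^n)$ for smooth $u$.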
When applied on a test map $u \\in \\mathscr D(\\mathbf{R}^n;V)$, the Gauss--Green theorem yields the identity\n\\begin{align*}\n\tK_s \\star u(x) & = \\frac 1{\\omega_n s^n} \\int_{\\partial B_s(x)} \\Omega_\\mathbb{A}(y-x)u(y) \\, dS(y) = \\frac{\\mathcal{A} u(B_s(x))}{\\omega_n s^n}.\n\\end{align*} \nThe cancelation property \n\\begin{equation}\\label{eq:cancel}\n\t\\int_{\\mathbf{S}^{n-1}} \\mathbb{A}(\\omega) \\,dS(\\omega) = 0 \n\\end{equation}\nconveys the identity \n\\[\n\t\\mathscr A_s u = K_s \\star u \\qquad \\text{for all $u \\in \\mathscr D(\\mathbf{R}^n;V)$}\n\\]\nso that, in particular, we can reduce the study of $\\mathscr A_s$ to the study of the $K_s$-convolution operator. Notice that a direct consequence of this identity is the integration by parts formula\n\\[\n\t\\int \\mathscr A_s \\varphi \\cdot \\psi = \\int (K_s \\star \\varphi) \\cdot \\psi = \\int \\varphi \\cdot (\\tilde K_s^t \\star \\psi) = \\int \\varphi \\cdot \\mathscr A^*_s \\psi\n\\]\nfor all test functions $\\varphi \\in \\mathscr D(\\mathbf{R}^n;V)$ and $\\psi \\in \\mathscr D(\\mathbf{R}^n,W)$. Here, the operator $(\\tilde\\,\\sbullet\\,)$ denotes the reflection operation $x \\mapsto -x$ so that indeed $\\tilde K_s^t(h) = \\Omega_{\\mathbb{A}^*} \\, dm_s$. \n\tBy standard distribution theory, the convolution $K_s \\star T$ defines a $W$-valued distribution on $\\mathbf{R}^n$ for all $T \\in \\mathscr D'(\\mathbf{R}^n;V)$ by setting\n\\[\n\tK_s \\star T (\\varphi)\\coloneqq T(\\tilde K_s^t \\star \\varphi) \\quad \\forall \\varphi \\in \\mathscr D(\\mathbf{R}^n;W).\n\\]\nThe previous convolution representation holds for all distributions $T$ such that $\\mathcal{A} T$ is a Radon measure.
\nIndeed, a standard approximation argument implies that, in fact,\n\\[\nK_s \\star T \\in L^\\infty \\quad \\text{with} \\quad \\|K_s \\star T\\|_\\infty \\le \\frac{ |\\mathcal{A} T|(\\mathbf{R}^n)}{\\omega_n s^n}.\n\\]\nTherefore, by standard mollifier theory, we get that\n\\[\n\tK_s \\star T(x) = \\frac 1{|B_s|} \\lim_{\\varepsilon \\to 0^+} (\\mathcal{A} T)_\\varepsilon(B_s(x)) \\quad \\text{for a.e. $x \\in \\mathbf{R}^n$,}\n\\]\nwhere $(\\,\\sbullet\\,)_\\varepsilon$ denotes mollification with a standard mollifier at scale $\\varepsilon$.\nLemma~\\ref{lem:Leb} implies that $|\\mathcal{A} T|(\\partial B_s(x)) = 0$ for almost every $x \\in \\mathbf{R}^n$ and therefore also the right-hand side above converges to $\\mathcal{A} T(B_s(x))\/|B_s|$ almost everywhere on $\\mathbf{R}^n$, or equivalently,\n\\begin{equation}\\label{eq:rep}\n\tK_s \\star T = \\frac{\\mathcal{A} T(B_s(\\,\\sbullet\\,))}{|B_s|} \\quad \\text{as maps in $L^\\infty$}.\n\\end{equation}\n\n\\begin{remark}In general, $K_s \\star u$ need not be continuous for $u \\in L^1_\\mathrm{loc}(\\mathbf{R}^n;V)$ with $\\mathcal{A} u \\in \\mathcal{M}(\\mathbf{R}^n;W)$. Indeed, the vector field on the plane given by\n\\[\n\tu(x_1,x_2) = \\frac1{2\\pi} \\left(\\frac{(x_1,x_2 + 1)}{|(x_1,x_2+1)|^2} - \\frac{(x_1+1,x_2)}{|(x_1+1,x_2)|^2}\\right)\n\\]\nsatisfies $\\diverg u = \\delta_{(0,1)} - \\delta_{(1,0)}$. In particular, the map $x \\mapsto \\diverg u(B_1(x))$ is not continuous at $0 \\in \\mathbf{R}^2$.
\\end{remark}\n\nWe derive the following direct properties from this discussion, which validate the assertions contained in {\\scshape Theorem}~\\ref{thm:spherical_localization}.\n\\begin{enumerate}[(i)]\n\\item If $u \\in \\mathscr D(\\mathbf{R}^n;V)$, then \n\\[\n\tK_s \\star u \\stackrel{\\mathscr D}\\longrightarrow \\mathcal{A} u \\qquad \\text{as $s \\to 0^+$}.\n\\]\nIndeed, by the properties of differentiation for convolutions, it suffices to observe the validity of the supremum norm estimate\n\\[\n\t|\\partial^\\alpha (K_s \\star u - \\mathcal{A} u)(x)| \\le \\fint_{B_s(x)} |\\mathcal{A} \\partial^\\alpha u(y) - \\mathcal{A} \\partial^\\alpha u(x)| \\, dy \\le s\\|D^k(\\mathcal{A} u)\\|_\\infty\n\\]\nwhere $k = |\\alpha| + 1$.\n\\item If $T \\in \\mathscr D'(\\mathbf{R}^n;V)$, then a byproduct of the previous convergence is the distributional convergence\n\\[\n\tK_s \\star T \\stackrel{\\mathscr D'}\\longrightarrow \\mathcal{A} T \\qquad \\text{as $s \\to 0^+$}.\n\\]\nIndeed, observing that $-\\mathbb{A}^t$ is the principal symbol associated with the formal adjoint $\\mathcal{A}^*$ of $\\mathcal{A}$, we deduce that\n\\[\n\tK_s \\star T (u) = T (\\tilde K_s^t \\star u) \\longrightarrow T(\\mathcal{A}^* u) = \\mathcal{A} T(u).\n\\]\n\\item If $u \\in \\mathscr D(\\mathbf{R}^n;V)$, then Jensen's inequality gives\n\\begin{equation}\\label{eq:jensen1}\n\tF(K_s \\star u) \\le F(\\mathcal{A} u) \\star \\frac{\\chi_{B_s}}{|B_s|} \n\\end{equation}\nfor all convex maps $F: W \\to \\mathbf{R}$.
Young's convolution inequality conveys the estimate \n\\[\n\\|F(K_s \\star u)\\|_{L^1} \\le \\|F(\\mathcal{A} u)\\|_{L^1}.\n\\]\nIn particular, \n\\[\n\t\\|K_s \\star u\\|_{L^p} \\le \\|\\mathcal{A} u\\|_{L^p} \\quad \\text{for all $1 \\le p < \\infty$.}\n\\]\n\\item The lower semicontinuity of the extended $L^p$-norms under distributional convergence and the previous estimate give rise to the extended estimate\n\\[\n\t\\forall p \\in [1,\\infty], \\quad \\|K_s \\star T\\|_p \\le \\|\\mathcal{A} T\\|_p \\quad \\text{for all $T \\in \\mathscr D'(\\mathbf{R}^n;V)$}.\n\\]\nIn particular, again from lower semicontinuity and distributional convergence, we obtain\n\\[\n\t\\limsup_{s \\to 0^+} \\|K_s \\star T\\|_p \\le \\|\\mathcal{A} T\\|_p \\le \\liminf_{s \\to 0^+} \\|K_s \\star T\\|_p\\,,\n\\]\nwhich means that the limit exists and equals $\\|\\mathcal{A} T\\|_p$. \n\\item If $\\mathcal{A} T \\in L^p$ for some $p \\in [1,\\infty]$, then the fact that $|\\mathcal{A} T|(\\partial B_s(x)) = 0$ for all $x \\in \\mathbf{R}^n$ implies that\n\\[\nK_s \\star T \\in C(\\mathbf{R}^n;W) \\quad \\forall s > 0.\n\\]\nMoreover, for $1 < p < \\infty$, the convergence of the $L^p$ norms implies that the distributional convergence improves to the strong convergence\n\\[\n\tK_s \\star T \\stackrel{L^p}\\longrightarrow \\mathcal{A} T \\quad \\text{as $s \\to 0^+$.}\n\\]\n\n\\item Lastly, if $\\mathcal{A} T \\in \\mathcal{M}$, then~\\eqref{eq:rep}, together with the fact that approximation of measures by convolution implies convergence with respect to the area functional, gives\n\\[\n\t(K_s \\star T) \\, \\mathscr L^n \\stackrel{\\mathrm{area}}\\longrightarrow \\mathcal{A} T \\quad \\text{as $s \\to 0^+$.}\n\\]\n\\end{enumerate}\n\n\n\n\\section{The radial kernels}\\label{sec:radial}\nLet $\\rho(x) = \\hat \\rho(|x|)$ be a non-negative radial summable function and consider the kernel $\\mathcal{K}_\\rho \\in C^\\infty(\\mathbf{R}^n -
\\{0\\};\\mathrm{Hom}(V,W))$ defined by\n\\[\n\t\\mathcal{K}_\\rho(z) \\coloneqq n \\, \\Omega_\\mathbb{A}\\left(\\frac{z}{|z|}\\right)\\frac{\\rho(z)}{|z|}, \\qquad z \\in \\mathbf{R}^n \\setminus \\{0\\}.\n\\]\nThe mean-value zero property \n\\[\n\t\\int_{\\partial B_s(0)} \\mathcal{K}_\\rho(\\omega) \\, dS(\\omega) = 0 \\qquad \\text{for all $s >0$} \n\\]\nensures that the \\emph{principal value} convolution integral \n\\begin{align*}\n\t\\mathfrak{A}_\\rho u(x) & \\phantom{:}= \\pv \\int \\mathcal{K}_\\rho(y-x) u(y) \\, dy \\\\\n\t& \\coloneqq \\lim_{\\delta \\to 0} \\int_{\\mathbf{R}^n \\setminus B_\\delta(x)} \\mathcal{K}_\\rho(y-x) u(y) \\, dy\n\\end{align*}\nis well-defined for all test maps $u \\in \\mathscr D(\\mathbf{R}^n;V)$ and all $x \\in \\mathbf{R}^n$, independently of the infinitesimal sequence $\\delta \\to 0^+$. Indeed, once more, the cancelation property~\\eqref{eq:cancel} gives\n\\[\n\t\t\\mathfrak{A}_\\rho u(x) = \\pv \\int \\mathcal{K}_\\rho(y-x)[u(y) - u(x)] \\, dy.\n\\]\nSince the integrand on the right-hand side is dominated by a multiple of $\\|Du\\|_\\infty \\tau^x \\rho$, it follows that the limit in the principal value integral exists and $\\mathfrak{A}_\\rho u \\in L^\\infty(\\mathbf{R}^n;W)$, with\n\\[\n\\|\\mathfrak{A}_\\rho u\\|_{L^\\infty} \\le n \\|\\Omega_\\mathbb{A}\\|_{L^\\infty} \\|Du\\|_{L^\\infty} \\|\\rho\\|_{L^1}.\n\\]\nMoreover, the absolute integrability of the integrand above and Fubini's theorem give\n\\begin{align*}\n\t\\dpr{\\mathfrak{A}_\\rho u, \\varphi} & = \\int\\int \\dpr{u(x + h) - u(x),\\mathcal{K}_\\rho(h)^t \\varphi(x)} \\, dx \\, dh \\\\\n\t& = \\int \\int \\dpr{u(z), \\mathcal{K}_\\rho(h)^t [\\varphi(z - h) - \\varphi(z)]} \\, dz \\, dh \\\\\n\t& = -\\int \\int \\dpr{u(z), \\mathcal{K}_\\rho(y - z)^t [\\varphi(y) - \\varphi(z)]} \\, dy \\, dz = \\dpr{u,\\mathfrak{A}^*_\\rho \\varphi},\n\\end{align*}\nfor all $\\varphi \\in \\mathscr D(\\mathbf{R}^n;W)$.
This shows that the formal adjoint $(\\mathfrak{A}_\\rho)^*$ of $\\mathfrak{A}_\\rho$ is precisely $\\mathfrak{A}^*_\\rho$. By a standard duality argument, we may extend $\\mathfrak{A}_\\rho$ uniquely and continuously to all distributions $T \\in \\mathscr D'(\\mathbf{R}^n;V)$ by setting\n\\[\n\t\\mathfrak{A}_\\rho T(\\varphi) \\coloneqq T(\\mathfrak{A}_\\rho^* \\varphi), \\qquad \\varphi \\in \\mathscr D(\\mathbf{R}^n;W).\n\\]\nIn particular, this shows that, for a family $\\{\\rho_\\varepsilon\\}_{\\varepsilon > 0}$ of non-negative radial probability densities satisfying~\\eqref{eq:eps1}-\\eqref{eq:eps}, it holds\n\\begin{equation}\\label{eq:distributional_convergence}\n\t\\mathfrak{A}_{\\rho_\\varepsilon} T \\stackrel{\\mathscr D'}\\longrightarrow \\mathcal{A} T \\qquad \\text{as $\\varepsilon \\to 0^+$}\n\\end{equation}\nfor all $T \\in \\mathscr D'(\\mathbf{R}^n;V)$.\n\nLet $p \\in [1,\\infty]$ and let $q$ be its associated H\\\"older conjugate exponent. If $T \\in \\mathscr D'(\\mathbf{R}^n;V)$, then, using that the radial integral defining $\\mathfrak{A}^*_\\rho \\varphi$ is absolutely convergent as a Bochner integral of smooth maps, we deduce from the spherical kernel estimates the following estimate:\n\\begin{align*}\n\t\\mathfrak{A}_\\rho T(\\varphi) & = T(\\mathfrak{A}_\\rho^* \\varphi) \\\\\n\t& = T\\left(\\int_0^\\infty n\\omega_n\\hat\\rho(r)r^{n-1} \\tilde K_r^t \\star \\varphi \\, dr \\right) \\\\\n\t& = \\int_0^\\infty n\\omega_n\\hat\\rho(r)r^{n-1} T( \\tilde K_r^t \\star \\varphi) \\, dr \\\\\n\t& = \\int_0^\\infty n\\omega_n\\hat\\rho(r)r^{n-1} (K_r \\star T)(\\varphi) \\, dr \\\\\n\t\t& \\le \\int_0^\\infty n\\omega_n\\hat\\rho(r)r^{n-1} \\|K_r\\star T\\|_p\\|\\varphi\\|_{L^q} \\, dr \\le \\|\\rho\\|_{L^1} \\|\\mathcal{A} T\\|_p\\|\\varphi\\|_{L^q}.\n\\end{align*} \nTaking the supremum over all $\\varphi$ with $\\|\\varphi\\|_{L^q} = 1$ gives\n\\begin{equation}\\label{eq:p_radial}\n\\|\\mathfrak{A}_\\rho T\\|_p \\le \\|\\rho\\|_{L^1} \\|\\mathcal{A} T\\|_p \\quad
\\text{for all $T \\in \\mathscr D'(\\mathbf{R}^n;V)$}.\n\\end{equation} \n\n\\begin{lemma}\\label{lem:11}\nLet $u \\in L^1_\\mathrm{loc}(\\mathbf{R}^n;V)$. Then\n\\[\n\\|\\mathfrak{A}_\\rho u\\|_p^p \\le \\liminf_{\\delta \\to 0} \\int \\left| \\int_{\\mathbf{R}^n \\setminus B_\\delta(x)} \\mathcal{K}_\\rho(y-x)u(y)\\, dy \\right|^p. \n\\] \nIf moreover $\\|\\mathcal{A} u\\|_p < \\infty$, then $\\mathfrak{A}_\\rho u \\in L^p(\\mathbf{R}^n;W)$, the limit above exists, commutes with the integral and\n\\[\n\t\\mathfrak{A}_\\rho u(x) = \\pv \\int \\mathcal{K}_\\rho(y-x)u(y) \\, dy \\qquad \\text{for almost every $x \\in \\mathbf{R}^n$.}\n\\]\n\\end{lemma}\n\\begin{remark}\nNotice that $\\mathfrak{A}_\\rho u$ is absolutely continuous even if $\\mathcal{A} u$ is only a finite measure.\n\\end{remark}\n\\begin{proof}[Proof of Lemma~\\ref{lem:11}]\nFirst, we show that $\\mathfrak{A}_\\rho u$ is given by the principal value convolution $\\pv \\mathcal{K}_\\rho \\star u$. By construction, we have that $\\mathfrak{A}_\\rho u \\in \\mathcal{M}(\\mathbf{R}^n;W)$. Since the integrand \n\\[\n(x,y) \\mapsto \\dpr{\\mathcal{K}_\\rho(y-x)u(y),\\varphi(x) - \\varphi(y)}\n\\]\nis absolutely integrable on $\\mathbf{R}^n \\times \\mathbf{R}^n$, we deduce from Fubini's theorem and a few changes of variables that, for any infinitesimal sequence $\\delta \\to 0$, it holds\n\\begin{align*}\n\t\\dpr{u , \\mathfrak{A}^*_\\rho \\varphi} & = \\int \\pv \\int \\dpr{ \\mathcal{K}_\\rho(y-x) u(y),\\varphi(x)} \\, dx \\, dy \\\\\n\t& = \\int\\int \\dpr{ \\mathcal{K}_\\rho(y-x) u(y),\\varphi(x) - \\varphi(y)} \\, dx \\, dy \\\\\n\t& = \\int\\int\\dpr{\\mathcal{K}_\\rho(h)u(y),\\varphi(y - h) - \\varphi(y)} \\, dh \\, dy.\n\\end{align*}\nNow, the right-hand side can be expressed as the limit \n\\begin{align*} \n\t& = \\lim_{\\delta \\to 0} \\int_{B_\\delta(0)^c}\\int\\dpr{\\mathcal{K}_\\rho(h)[u(y) - u(y-h)],\\varphi(y - h) } \\, dy \\, dh \\\\\n\t& = \\lim_{\\delta \\to 0}\\int \\int_{B_\\delta(0)^c} \\dpr{\\mathcal{K}_\\rho(h)[u(y) - u(y-h)],\\varphi(y-h) } \\, dh \\, dy.
\\\\\n\t\t& =\\lim_{\\delta \\to 0} \\int \\int_{B_\\delta(y)^c} \\dpr{\\mathcal{K}_\\rho(y-x)[u(y) - u(x)],\\varphi(x) } \\, dx \\, dy \\\\\n\t\t\t& =\\lim_{\\delta \\to 0} \\int\\dpr{ \\int_{B_\\delta(x)^c} \\mathcal{K}_\\rho(y-x)u(y)\\, dy ,\\varphi(x) } \\, dx \\\\\n\t& \\le \\|\\varphi\\|_{L^q} \\lim_{\\delta \\to 0} \\left(\\int \\left| \\int_{\\mathbf{R}^n \\setminus B_\\delta(x)} \\mathcal{K}_\\rho(y-x)u(y)\\, dy \\right|^p \\right)^\\frac{1}{p}.\n\\end{align*}\nTaking the supremum over all $\\varphi \\in \\mathscr D(\\mathbf{R}^n;W)$ with $\\|\\varphi\\|_{L^q} = 1$, where $q$ is the H\\\"older conjugate exponent of $p$, we conclude that \n\\[\n\t\\|\\mathfrak{A}_\\rho u\\|_p^p \\le \\liminf_{\\delta \\to 0} \\int \\left| \\int_{\\mathbf{R}^n \\setminus B_\\delta(x)} \\mathcal{K}_\\rho(y-x)u(y) \\, dy \\right|^p.\n\\]\nLet us now assume that $|\\mathcal{A} u|$ is a finite measure. Since for every $r > 0$ the function $x \\mapsto |\\mathcal{A} u|(B_r(x))$ is measurable, it follows that the non-negative function defined by\n\\[\n\t f(x) \\coloneqq \\int_0^\\infty \\hat \\rho(r) r^{-1}|\\mathcal{A} u|(B_{r}(x)) \\, dr \\in [0,\\infty], \\qquad x \\in \\mathbf{R}^n,\n\\]\nis also Borel measurable on $\\mathbf{R}^n$. Moreover, \n Fubini's theorem and Young's convolution inequality convey that $f$ is $p$-integrable with $\\|f\\|_{L^p} \\le \\|\\rho\\|_{L^1} \\|\\mathcal{A} u\\|_{p}$. \nLet us now define the measurable maps \n\\[\n\tg_\\delta(x) \\coloneqq \\int_{\\mathbf{R}^n \\setminus B_\\delta(x)} \\mathcal{K}_\\rho(y-x)u(y)\\, dy, \\quad x \\in \\mathbf{R}^n.\n\\]\nBy Lemma~XX and by the generalized Gauss--Green theorem it follows that $nf$ dominates the family $\\{|g_\\delta|\\}_{\\delta > 0}$.
Since also \n\\[\ng_\\delta(x) \\stackrel{\\delta \\to 0}\\longrightarrow \\pv \\int \\mathcal{K}_\\rho(y-x)u(y) \\, dy \\quad \\text{for almost every $x \\in \\mathbf{R}^n$},\n\\] \nthe dominated convergence theorem implies that $\\mathfrak{A}_\\rho u = \\pv (\\mathcal{K}_\\rho \\star u)$.\nThis finishes the proof.\n\\end{proof}\n\n\n\\begin{proposition}\\label{prop:12}\nLet $u \\in \\mathscr D'(\\mathbf{R}^n;V)$ and further suppose that $\\mathcal{A} u \\in \\mathcal{M}_b(\\mathbf{R}^n;W)$. If $\\rho$ is a probability density, then\n\\[\n\t\\|F(\\mathfrak{A}_\\rho u)\\|_{1} \\le \\|F(\\mathcal{A}^a u)\\|_1 +\\|F^\\infty(\\mathcal{A}^s u)\\|_1\n\\]\nfor all convex functions $F : W \\to \\mathbf{R}$. In particular, \n\\[\n\t\\lim_{\\varepsilon \\to 0} \\int |F(\\mathfrak{A}_{\\rho_\\varepsilon} u)| \\, dx = \\int F(\\mathcal{A}^a u)\\, dx + \\int F^\\infty\\left(\\frac{\\mathcal{A} u}{|\\mathcal{A} u|}\\right) \\, |\\mathcal{A}^s u|(dx)\n\\]\nwhenever $F$ has linear growth at infinity. \n\\end{proposition}\n\n\n\\begin{proof}\nBy the discussion above, we infer that $F(\\mathfrak{A}_\\rho u) \\in L^1_\\mathrm{loc}$. Let $K$ be a compact set of $\\mathbf{R}^n$.
Appealing to Jensen's inequality and Young's convolution inequality, we deduce that\n\\begin{align*}\n\t\\int_K F(\\mathfrak{A}_\\rho u(x)) \\, dx & = \\int_K F\\left(\\int_0^\\infty K_s \\star u(x) \\, n\\omega_n \\hat \\rho(s) s^{n-1} \\, ds \\right) \\, dx \\\\\n\t& \\le \\int_K \\int_0^\\infty F(K_s \\star u) \\, n\\omega_n \\hat \\rho(s) s^{n-1} \\, ds \\, dx\\\\\n\t& \\le \\int_0^\\infty n\\omega_n \\hat \\rho(s) s^{n-1} \\int_K F\\left(\\mathcal{A} u \\star \\frac{\\chi_{B_s}}{|B_s|}\\right) dx \\, ds.\n\\end{align*}\nThe inequality $F(a + b) \\le F(a) + F^\\infty(b)$ and subsequent applications of Jensen's and Young's inequality permit us to estimate the inner integral above by\n\\begin{align*}\n\\int_K & F(\\mathcal{A}^a u \\star \\frac{\\chi_{B_s}}{|B_s|}) + F^\\infty(\\mathcal{A}^s u \\star \\frac{\\chi_{B_s}}{|B_s|}) \\, dx \\\\ \n& \\le \\int_K F(\\mathcal{A}^a u) \\star \\frac{\\chi_{B_s}}{|B_s|} \\, dx + \\int_K F^\\infty(\\mathcal{A}^s u) \\star \\frac{\\chi_{B_s}}{|B_s|} \\, dx \\\\\n& \\quad \\le \\|F(\\mathcal{A}^a u)\\|_1 + \\|F^\\infty(\\mathcal{A}^s u)\\|_1.\n\\end{align*}\nThe first assertion follows from this estimate by taking a sequence of compact sets $K_j \\nearrow \\mathbf{R}^n$. When $F$ has linear growth at infinity, the recession function $F^\\infty$ is a positively homogeneous convex function taking values on $\\mathbf{R}$.
Notice that \n\\[\n\tF^\\infty (\\mu) \\coloneqq F^\\infty\\left(\\frac{d\\mu}{d|\\mu|}\\right) |\\mu| \\in \\mathcal{M}(\\mathbf{R}^n), \\qquad \\forall \\mu \\in \\mathcal{M}(\\mathbf{R}^n;W).\n\\]\nBy~\\eqref{eq:distributional_convergence}, Reshetnyak's theorem (which holds under distributional convergence) and the previous proposition, we conclude that\n\\begin{align*}\n\t\\limsup_{\\varepsilon \\to 0} \\|F(\\mathfrak{A}_{\\rho_\\varepsilon} u)\\|_1 & \\le \\|F(\\mathcal{A}^a u)\\|_{1} + |F^\\infty(\\mathcal{A}^s u)|(\\mathbf{R}^n) \\\\\n\t& \\le \\liminf_{\\varepsilon \\to 0} \\|F(\\mathfrak{A}_{\\rho_\\varepsilon} u)\\|_1.\n\\end{align*}\nThis proves the second assertion.\n\\end{proof}\n\n\n\n\n\n\n\\begin{corollary}\\label{cor:2}\nLet $u \\in L^1_\\mathrm{loc}(\\mathbf{R}^n;V)$ and let $1 \\le p < \\infty$.\n\\begin{enumerate}\n\\item If $\\mathcal{A} u \\in L^p$, then\n\\[\n\t\\mathfrak{A}_{\\rho_\\varepsilon}u \\stackrel{L^p}\\longrightarrow \\mathcal{A} u \\quad \\text{as $\\varepsilon \\to 0$}.\n\\]\n\\item If $\\mathcal{A} u \\in \\mathcal{M}(\\mathbf{R}^n;W)$, then\n\\[\n\t\\mathfrak{A}_{\\rho_\\varepsilon} u \\, \\mathscr L^n \\stackrel{\\mathrm{area}}\\longrightarrow \\mathcal{A} u \\quad \\text{as $\\varepsilon \\to 0$}.\n\\]\n\\end{enumerate}\n\\end{corollary}\n\\begin{proof} By letting $F(w) = |w|^p$ in the previous proposition, we obtain that\n\\begin{equation}\\label{eq:extendedlimit} \n\t\\lim_{\\varepsilon \\to 0} \\|\\mathfrak{A}_{\\rho_\\varepsilon} u\\|_p =\\|\\mathcal{A} u\\|_p \\in [0,\\infty].\n\\end{equation}\nFor $1 < p \\le \\infty$, it follows that $\\mathcal{A} u \\in L^p(\\mathbf{R}^n;W)$ if and only if $\\liminf_{\\varepsilon \\to 0} \\|\\mathfrak{A}_{\\rho_\\varepsilon} u\\|_p < \\infty$.
\nFor $1 < p < \infty$, the Brezis--Lieb lemma implies that the distributional convergence in~\eqref{eq:distributional_convergence} improves to the convergence \n\[\n\t\mathfrak{A}_{\rho_\varepsilon} u \longrightarrow \mathcal{A} u \quad \text{strongly in $L^p$}.\n\] \nThis proves (1) for $p > 1$. \nFor $p = 1$, the previous proposition implies that $\mathfrak{A}_{\rho_\varepsilon} u$ indeed converges to $\mathcal{A} u$ in the area sense. This proves (2). We are left to show (1) for $p =1$. Notice that when $\mathcal{A} u \in L^1$, then the singular part $\mathcal{A}^s u$ vanishes, and hence from the area convergence, we get\n\[\n\t\int \sqrt{1 + |\mathfrak{A}_{\rho_\varepsilon} u|^2} \, dx \longrightarrow \int \sqrt{1 + |\mathcal{A} u|^2} \, dx.\n\]\nSince $\sqrt{1 + |\,\sbullet\,|^2}$ is uniformly strictly convex and greater than or equal to $|\,\sbullet\,|$, it follows that $\{\mathfrak{A}_{\rho_\varepsilon} u\}_\varepsilon$ is an equi-integrable family in $L^1$. The Dunford--Pettis criterion implies that the distributional convergence~\eqref{eq:distributional_convergence} lifts to $L^1$ strong convergence. \n\end{proof}\n\n\n\subsection*{Acknowledgements}This project has received funding from the Fonds National de la Recherche Scientifique - FNRS under fellowship No. 40005112 (Compensated compactness and fine properties of PDE-constrained measures).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
Several authors have observed that for relational data, this is not true \cite{lauritzen2017random,shalizi2013consistency,poole2014population,jain2010adaptive}: additional entities, or nodes in a network, are potentially related, and therefore their mere presence can be inferentially relevant.\n\nThe dependence on domain size raises several difficulties. (1) Counter-intuitiveness: if we are interested in making predictions about the 100 members of a Facebook group, learning that Facebook has gained another 1,000 users in the last hour should not change our predictions about the group of interest, unless we have specific information about the new users. (2) Complexity of Inference: We typically learn from fairly large networks and apply the results of learning to draw inferences about relatively small sub-networks. If the presence of other nodes is relevant to the marginal probabilities associated with the sub-network of interest, inference becomes computationally challenging, as it involves a large summation over the possible states of many nodes outside the sub-network. \n(3) Complexity of Learning: The observed network for learning is typically embedded in a much larger network. If marginal predictions of the observed sub-network depend on the size of the embedding network, learning should not treat the sub-network as a closed world to be analyzed in isolation. \n\nOur aim in this paper is to connect dependence on domain size with the classic concept of {\em projectivity} from statistical theory \cite{shalizi2013consistency}. An SRL model typically defines a family of probability distributions over relational structures, one for each domain size $n$. Projectivity requires that the distribution for size $m$ agrees with the marginal distribution obtained from any larger domain $n > m$. 
This ensures that the distributions for different domain sizes are consistent with each other, and that predictions about a sub-network do not depend on the presence of outside nodes.\n\nMost (if not all) SRL modeling frameworks support the construction of non-projective models, and in many cases, this is an important source of model expressivity. However, taking into consideration the significant benefits in terms of computational complexity and robustness of inference and learning results, it is worthwhile to explore projective fragments of SRL languages. After establishing the formal framework for defining and analyzing projectivity in an SRL context (Sections~\ref{sec:background},\ref{sec:projectivity}), we identify syntactic constraints for three prototypical SRL frameworks that ensure projectivity (Section~\ref{sec:fragments}). We finally take a closer look at how projectivity can justify maximum likelihood learning from observed sub-networks (Section~\ref{sec:learning}). \n\n\n\paragraph{Related Work.} Several SRL works have addressed changing domain sizes. \citeauthor{poole2014population} discuss the impact of domain size on inference~\citeyear{poole2014population}. \nConvergence of inference~\cite{jaeger1998convergence} and learning~\cite{Xiang2011} in the domain size limit has been investigated. \citeauthor{kuzelka2018relational} discuss the validity of learning from a sub-network embedded in a larger network. They focus on models that satisfy a given set of marginal probabilities, not projectivity. Statisticians have examined the projectivity of network models from the exponential family \cite{shalizi2013consistency,lauritzen2017random}, but have not considered other SRL models.\n\n\section{Background}\n\label{sec:background}\n\n\nThe following definitions provide a framework for talking about SRL systems in general.\n\nA relational \emph{signature} $S$\ contains relation symbols of varying arities. 
We write \n$r\/k$ for a relation $r$ of arity $k\geq 0$. \nA \emph{possible world} $\omega$\ (for $S$) is given by a finite domain $D=\{d_1,\ldots,d_n\}$\ on \nwhich extensions of the relations in $S$\ are defined. We then refer to $n$ as the \emph{size} of \n$\omega$. Moreover, we generally assume that $D=\{0,\ldots,n-1\}=:[n]$. Then $\Omega^{(S,n)}$ denotes the set of \nall possible worlds for the signature $S$ over the domain $[n]$. \nThis can be abbreviated as $\Omega^{(n)}$ when the signature is understood from the context. \n\n$\omega\in \Omega^{(n)}$ can also be identified with a truth value assignment \n\begin{displaymath}\n  \omega: r(\boldi) \mapsto \{\emph{true},\emph{false}\}\n\end{displaymath}\nfor all ground atoms $r(\boldi)$, where $r\/k\in S$ and $\boldi=(i_1,\ldots,i_k)\in [n]^k$.\n\nWe denote with $\Delta\Omega^{(n)}$ the set of all probability distributions on $\Omega^{(n)}$. \nA \emph{random relational structure model (RRSM)} is a family of distributions \n$\{Q^{(n)} \in\Delta\Omega^{(n)} \mid n\in\Nset\}$. The term RRSM, which was first\nintroduced in~\cite{JaegerMI02}, is intended to emphasize that this is a multi-relational generalization of\nthe classical concept of random graph models (or \emph{random networks}~\cite{lauritzen2017random}). \n\nFor this paper we limit ourselves to the consideration of \emph{exchangeable} RRSMs: a single distribution\n$Q^{(n)}$ is exchangeable, if $Q^{(n)}(\omega)=Q^{(n)}(\omega')$ whenever $\omega$ and $\omega'$ are isomorphic. An \nRRSM is exchangeable, when all $Q^{(n)}$ ($n\in\Nset$) are exchangeable. 
\nExchangeability of $Q^{(n)}$ implies that \n$Q^{(n)}(r(\boldi)=\emph{true})=Q^{(n)}(r(\boldj)=\emph{true})$ when $\boldj = \pi(\boldi)$ for some permutation $\pi$ of $[n]$.\n\nRRSMs defined by typical SRL frameworks are necessarily exchangeable when the model specification does not\nmake use of any constants referring to specific domain elements, and is not conditioned on a pre-defined\nstructure on the domain. Examples of RRSMs that do not satisfy this condition are temporal models such as\nhidden Markov models, where the model is conditioned on a linear ordering $0<1<\cdots< n-1$ of the domain, \nand the marginal distributions at different timepoints $t\neq t'$ are usually different. \n\n\n\subsection{SRL Systems}\n\nIn this section we briefly review three different SRL frameworks that can be used to define\nRRSMs. We do not give complete definitions of syntax and semantics for these frameworks here, but\nonly describe their main characteristics by way of examples. The selected systems are representatives\nfor three distinct modeling paradigms.\n\n\subsubsection{Relational Bayesian Networks}\n\nRelational Bayesian networks (RBNs)~\cite{Jaeger97UAI} are a representative of frameworks that\nare closely linked to directed graphical models. Other approaches in this group include \nprobabilistic relational models~\cite{friedman1999learning}, Bayesian logic programs~\cite{kersting2000interpreting}\nand relational logistic regression~\cite{kazemi2014relational}. \n\nAn RBN associates with each relation $r\in S$ a functional expression that defines the probabilities\nof ground atoms $r(\boldi)$ conditional on other ground atoms. Functional expressions can be formed \nby combining and nesting basic constructs, which support Boolean conditions, mixtures of distributions, \nand aggregation of dependencies using the central tool of \emph{combination functions}. 
\n\nFor example, the two formulas \n\n\\begin{displaymath}\n \\begin{array}{lll}\n \\emph{red}(X) & \\leftarrow & 0.3 \\\\\n \\emph{black}(X) & \\leftarrow & {\\tt if}\\ \\emph{red}(X):0\\ {\\tt else}:\\ 0.5\n \\end{array}\n\\end{displaymath}\ndefine Boolean attributes \\emph{red} and \\emph{black}, such that no element can be both \\emph{red}\nand \\emph{black}. One can then condition a binary \\emph{edge} relation on the colors of the nodes:\n\\begin{displaymath}\n \\begin{array}{lll}\n \\emph{edge}(X,Y) & \\leftarrow & {\\tt if}\\ \\emph{red}(X)\\, \\&\\, \\emph{red}(Y): 0.7 \\\\\n & & {\\tt else\\ if}\\ \\emph{black}(X)\\, \\&\\, \\emph{black}(Y): 0.4 \\\\\n & & {\\tt else}:\\ 0.05 \\\\\n \\end{array}\n\\end{displaymath}\n\nTogether, the three formulas for \\emph{red}, \\emph{black} and \\emph{edge} represent a simple \nstochastic block model~\\cite{snijders1997estimation}. \nThe main expressive power of RBNs derives from \\emph{combination functions} that \ncan condition a ground atom on properties of other domain entities:\n\\begin{displaymath}\n \\begin{array}{lll}\n t(X) & \\leftarrow & \\emph{noisy-or}\\{ {\\tt if}\\ \\emph{edge}(X,Y)\\, \\&\\, \\emph{red}(Y):0.2 \\mid Y \\}\n \\end{array}\n\\end{displaymath}\nThis formula makes the probability of the attribute $t$ for $X$ dependent on the number of \\emph{red}\nentities $Y$ that $X$ is connected to via an \\emph{edge}. Each such entity causes $t(X)$ to be\ntrue with probability 0.2, and the different causes $Y$ are modeled as independent via \nthe \\emph{noisy-or} combination function.\n\nAn RBN specification defines an RRSM, provided that for each $n$ the formulas induce an \nacyclic dependency structure on the atoms of $\\Omega^{(S,n)}$. 
The distribution $Q^{(n)}$ can then be \nrepresented by a Bayesian network whose nodes are the ground atoms $r(\boldi)$, and where \n$r'(\boldi')$ is a parent of $r(\boldi)$, if the truth value of $r'(\boldi')$ is needed to \nevaluate the probability formula for $r(\boldi)$. \n\n\subsubsection{Markov Logic Networks}\n\nMarkov logic networks (MLNs)~\cite{RicDom06} are the multi-relational generalization of exponential \nrandom graph models~\cite{frank1986markov,wasserman1996logit}. An MLN consists of a set\nof weighted formulas. As an example, let $S=\{\emph{red}\/1,\emph{edge}\/2\}$, and define\nthe following two weighted formulas:\n\begin{displaymath}\n  \begin{array}{lr}\n    \phi_1(X,Y) :\equiv \emph{edge}(X,Y)\wedge \emph{red}(X) \wedge \emph{red}(Y) & 1.2 \\\n    \phi_2(X,Y) :\equiv \emph{edge}(X,Y)\wedge \emph{red}(X) \wedge \neg \emph{red}(Y) & -0.2 \\\n  \end{array}\n\end{displaymath}\nThese two weighted formulas represent a homophily model for the \emph{red} attribute \nin a graph. Each pair $(i,j)\in [n]^2$ contributes a weight of 1.2 (-0.2) to a possible \nworld $\omega\in\Omega^{(S,n)}$ if $\phi_1(i,j)$ ($\phi_2(i,j)$) is true in $\omega$. The probability\n$Q^{(n)}(\omega)$ then is proportional to the exponential of the sum of all weights contributed by all the weighted formulas, \nand all possible substitutions of domain elements for the variables in the formulas.\n\n$Q^{(n)}$ can be represented by a Markov network whose nodes are the ground atoms $r(\boldi)$, and\nwhere two atoms $r(\boldi),r'(\boldi')$ are connected by an edge if these two atoms \nappear jointly in a grounding of one of the weighted formulas.\n\n\n\subsubsection{ProbLog}\n\nProbLog~\cite{DeRKimToi2007,kimmig2011implementation} is a representative of RRSMs that \nare closely linked to logic programming. Other frameworks in this class are Prism~\cite{Sato95}, and\nindependent choice logic~\cite{Poole08}. 
A ProbLog model consists of \na set of \\emph{labeled facts}, which are atoms with a probability label attached:\n\\begin{equation}\n\\label{eq:problog1}\n 0.8::\\ \\emph{red}(X)\n\\end{equation}\ntogether with \\emph{background knowledge} consisting of (non-probabilistic) definite clauses:\n\\begin{equation}\n\\label{eq:problog2}\n \\emph{edge}(X,Y) :-\\ \\emph{red}(X),\\emph{red}(Y).\n\\end{equation}\nWe note that~\\cite{kimmig2011implementation} emphasize the use of ground atoms in the labeled \nfacts. Since our interest is with generic, domain-independent RRSMs, we restrict attention\nto ProbLog models that do not contain domain constants.\n\nThe ProbLog model consisting of (\\ref{eq:problog1}) and (\\ref{eq:problog2}) defines the \nprobability of $\\omega\\in\\Omega^{(S,n)}$ as 0 if $\\omega$ does not satisfy the property that\ntwo elements are connected by an edge if and only if they both are red (i.e., \n$\\omega$ must be a minimal model of (\\ref{eq:problog2})). If $\\omega$ satisfies \n (\\ref{eq:problog2}), then its probability is $0.8^{|\\emph{red}(\\omega)|}0.2^{n-|\\emph{red}(\\omega)|} $, \nwhere $\\emph{red}(\\omega)$ denotes the set of elements $i\\in [n]$ for which \n$\\emph{red}(i)$\\ is true in $\\omega$. 
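As a sanity check on this semantics, the distribution just described can be enumerated by brute force for small domains. The following sketch is our own illustration (the helper names are ours, not part of any ProbLog system); it hard-codes the label 0.8 from the labeled fact above:

```python
import itertools

def world_probability(red_set, edge_set, n):
    """Probability of a world under  0.8::red(X)  with background
    knowledge  edge(X,Y) :- red(X), red(Y).  A world has probability 0
    unless its edge relation is exactly the one forced by the rule
    (minimal-model semantics); otherwise it is 0.8^k * 0.2^(n-k),
    where k is the number of red elements."""
    forced_edges = {(i, j) for i in red_set for j in red_set}
    if set(edge_set) != forced_edges:
        return 0.0
    k = len(red_set)
    return 0.8 ** k * 0.2 ** (n - k)

def enumerate_worlds(n):
    """All worlds of non-zero probability: one per choice of red set."""
    worlds = []
    for bits in itertools.product([False, True], repeat=n):
        red = {i for i in range(n) if bits[i]}
        edges = {(i, j) for i in red for j in red}
        worlds.append((red, edges, world_probability(red, edges, n)))
    return worlds
```

Since the \emph{red} atoms are independent, the probabilities of the $2^n$ admissible worlds sum to $(0.8+0.2)^n = 1$.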
\n\nIn order to also make rules such as (\ref{eq:problog2}) probabilistic, one can introduce\nlatent relations that represent whether rules in the background knowledge become applicable.\nIn our example, we can add a new relation \emph{rule}\/2, an additional labeled fact\n\begin{equation}\n\label{eq:problog3}\n  0.5::\ \emph{rule}(X,Y),\n\end{equation} \nand modify the background knowledge to \n\begin{equation}\n\label{eq:problog4}\n  \emph{edge}(X,Y) :-\ \emph{red}(X),\emph{red}(Y),\emph{rule}(X,Y).\n\end{equation}\nNow the resulting ProbLog model is a (partial) stochastic block model, where with probability\n0.5 two red nodes are connected by an edge (and still only pairs of red nodes can \nbe connected; further labeled facts and rules can be added to also model connections between\nnon-red nodes). \n\n\n\n\section{Projectivity}\n\label{sec:projectivity}\n\nFor\n$Q^{(n)}\in\Delta\Omega^{(n)}$ and a subset $\{i_0,\ldots,i_{m-1}\}\subseteq [n]$ we \ndenote with $Q^{(n)}\downarrow \{i_0,\ldots,i_{m-1}\}$ the marginal distribution on the sub-structures\ninduced by $\{i_0,\ldots,i_{m-1}\}$, or, equivalently, the marginal distribution on the ground atoms\n$r(\boldi)$ with $\boldi\in \{i_0,\ldots,i_{m-1}\}^k$. When $Q^{(n)}$ is exchangeable, then the induced\ndistribution is independent of the actual choice of the elements $i_h$, and we may assume that\n$\{i_0,\ldots,i_{m-1}\}=\{0,\ldots,m-1\}$. We therefore can limit attention to the \nmarginalizations\n\begin{equation}\n  \label{eq:marginalizeQn}\n  Q^{(n)}\downarrow [m] \in \Delta \Omega^{(m)}.\n\end{equation}\n\n\n\nBased on \cite{shalizi2013consistency} we define several different versions of ``projectivity'' for \nprobabilistic relational models. 
Our definitions are more restricted than the one given by Shalizi \nand Rinaldo~(\\citeyear{shalizi2013consistency}) in that \nwe specifically tailor the definitions to random relational structure models.\nFor convenience, we also include the condition of exchangeability in the following \ndefinitions, even though exchangeability and projectivity are not necessarily linked.\n\n\n\\begin{definition}\n\\label{def:proj1}\n A RRSM is \\emph{projective}, if \n \\begin{itemize}\n \\item every $Q^{(n)}$ is exchangeable\n \\item for every $n$, every $m0$, then $q(n,w)$ is increasing in $n$ (and in $w$). However, for \n$n=2$, we have $q(n,w)=1\/2$, regardless of the value of $w$. Thus, for a suitable combination\nof $n,w$ where $q(n,w)>1\/2$, we cannot find a value $w'$ such that\n$Q^{(n)}_q\\downarrow [2] = Q^{(2)}_{w'}$.\n\\end{example}\n\nThe lack of structural projectivity sets some limits to the approach of defining the\nweights in an MLN as functions of the domain size $n$~\\cite{jain2010adaptive} in \norder to compensate for the domain dependence of the model. \n\n\\begin{example}\n RBNs are not structurally projective: similarly to Example~\\ref{ex:mlnnotsp}, we can construct\na counterexample as follows. Consider the RBN\n\\begin{displaymath}\n \\begin{array}{lll}\n \\emph{edge}(X,Y) & \\leftarrow & 0.5 \\\\\n a(X) & \\leftarrow & \\emph{noisy-or}\\{ {\\tt if}\\ \\emph{edge}(X,Y):\\theta \\mid Y \\}\n \\end{array}\n\\end{displaymath}\ndefining probability distributions $Q^{(n)}_{\\theta}$ depending on the probability parameter $\\theta$.\nDefining $q(n,\\theta)$ as in (\\ref{eq:qna0}), we obtain that $q(2,\\theta)=0$, regardless of $\\theta$,\nbut $q(n,\\theta)>0$ for $n\\geq 3$ and $\\theta>0$. 
\n\end{example}\n\n\n\section{Projective Fragments of SRL Systems}\n\label{sec:fragments}\n\nIn this section we identify, for the three representative SRL frameworks introduced in \nSection~\ref{sec:background}, restrictions on model structure that give rise to projective models. \n\nFor the propositions stated in this section we give short proof sketches. Full proofs would\nneed to refer to complete formal specifications of syntax and semantics of the various \nframeworks, which we have omitted in Section~\ref{sec:background}. Even the proof sketches we\nprovide may appeal to some facts about the individual frameworks that were not explicitly\nspelled out in Section~\ref{sec:background}.\n\n\n\n\begin{proposition}\n  An RBN defines a projective RRSM if it does not contain any combination functions.\n\end{proposition}\n\n\begin{proof}[Sketch]\nConsider the Bayesian network representation $B^{(n)}$ \nof $Q^{(n)}$, and a parent-child pair $ r'(\boldi') \rightarrow r(\boldi)$ in $B^{(n)}$.\nIf the probability for the relation $r$ is defined without the use of combination functions, \nthen all constants appearing in $\boldi'$ must also be contained in $\boldi$. This implies \nthat the sub-network for the ground atoms over $[m]$ ($m\leq n$) is \nan upward-closed sub-graph of $B^{(n)}$ whose structure and probability parameters do not\ndepend on $n$. This means that the marginal distributions on the ground atoms over $[m]$\ndo not depend on $n$.\n\end{proof}\n\nEven though combination functions are the main source for the expressive power of RBNs, one\ncan still encode some relevant models with the combination-function-free fragment. \nOne example is the stochastic\nblock model shown in Section~\ref{sec:background}. Other examples are temporal models \nsuch as dynamic Bayesian networks or hidden Markov models. 
However, as mentioned in Section~\\ref{sec:background}, these models are not exchangeable, and therefore outside the scope of this paper.\n\n\n\n\\begin{proposition}\n\\label{prop:mlnfrag}\nAn MLN defines a projective RRSM if its formulas $\\phi_i$ satisfy the property that\nany two atoms appearing in $\\phi_i$ contain exactly the same variables. \n\\end{proposition}\n\n\\begin{proof}[Sketch]\nIf the condition of the proposition holds, then \nthe Markov network representation $M^{(n)}$ of $Q^{(n)}$ decomposes \ninto a system of disconnected sub-networks, where each sub-network contains ground\natoms over a fixed set of domain elements, with a cardinality at most equal to the maximal \narity of any $r\\in S$. The structure and parameterization of sub-networks containing\nonly the atoms for domain elements from $[m]$ ($m\\leq n$) are the same for all $n$, and they define a \nmarginal distribution that is independent of the nodes contained in the other sub-networks, i.e., \nindependent of $n$. \n\\end{proof}\n\nAlternatively, Proposition~\\ref{prop:mlnfrag} can also be proven by an application of \nTheorem 1 of~\\cite{shalizi2013consistency}.\nOur MLN example from Section~\\ref{sec:background} does not satisfy the restriction of \nProposition~\\ref{prop:mlnfrag}. A somewhat synthetic example that satisfies the condition is\n\n\\begin{displaymath}\n \\begin{array}{lr}\n \\emph{red}(X)\\wedge \\emph{edge}(X,X) & -1.5 \\\\\n \\emph{edge}(X,Y)\\wedge\\emph{edge}(Y,X) & 0.8 \\\\ \n \\end{array}\n\\end{displaymath}\n\nThis MLN represents a model according to which red nodes are unlikely to have self-loops, and\nthe edge relation tends to be symmetric. It is an open question whether our projective MLN fragment\ncontains more natural and practically relevant classes of models. 
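The projectivity of this synthetic MLN can also be verified numerically. The following brute-force sketch is our own illustration (not an MLN implementation): it computes the marginal of $Q^{(n)}$ on the atoms $\emph{red}(0), \emph{edge}(0,0)$ by explicit summation over all worlds, which is the same for $n=2$ and $n=3$:

```python
import itertools
import math
from collections import defaultdict

def score(red, edge, n):
    """Total weight of a world under the MLN
         red(X) & edge(X,X)       weight -1.5
         edge(X,Y) & edge(Y,X)    weight  0.8
    summed over all groundings of the variables in [n]."""
    w = 0.0
    for i in range(n):
        if red[i] and edge[(i, i)]:
            w += -1.5
    for i in range(n):
        for j in range(n):
            if edge[(i, j)] and edge[(j, i)]:
                w += 0.8
    return w

def marginal_on_first_element(n):
    """Marginal of Q^(n) on the atoms red(0), edge(0,0), by brute force."""
    marg = defaultdict(float)
    Z = 0.0
    pairs = list(itertools.product(range(n), repeat=2))
    for red in itertools.product([False, True], repeat=n):
        for bits in itertools.product([False, True], repeat=n * n):
            edge = dict(zip(pairs, bits))
            p = math.exp(score(red, edge, n))
            Z += p
            marg[(red[0], edge[(0, 0)])] += p
    return {key: v / Z for key, v in marg.items()}
```

The marginals agree because every grounding of either formula only touches atoms over a single element or a single pair, so the distribution factorizes into blocks that are unaffected by the remaining domain elements; the analogous check on the homophily MLN of Section~\ref{sec:background} would reveal the domain-size dependence discussed above.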
\n\n\n\n\\begin{proposition}\n\\label{prop:problogproj}\n A ProbLog model defines a projective RRSM, if all the background knowledge clauses satisfy \nthe property that the body of the clause does not contain any variables that are not contained in the\nhead of the clause.\n\\end{proposition}\n\n\\begin{proof}[Sketch]\nIf the stated property holds, then a proof for a ground atom $r(\\boldi)$ can only contain\nground atoms $r'(\\boldi')$ with $\\boldi'\\subseteq \\boldi$. The probability that $r(\\boldi)$ is\nprovable when $\\boldi\\subset [m]$ then only depends on the probabilities of ground labeled facts\nwith arguments from $[m]$. The joint distribution of these ground facts is independent of $n$. \n\\end{proof}\n\nThe ProbLog example of Section~\\ref{sec:background} satisfies the condition of \nProposition~\\ref{prop:problogproj}. \n\n\\subsection{Discussion}\nOur conditions for the projective RBN and ProbLog structures are very\nsimilar, and it seems that both\nfragments support more or less the same types of models. \nThe projectivity conditions are essentially limitations on probabilistic dependencies. For\nthe frameworks that (implicitly) encode a directed sampling process for possible worlds (RBN, ProbLog), \nthe limit on the dependency is that when a ground atom $r(\\boldi)$ is sampled, its distribution \ncannot depend on ground atoms containing elements other than included in $\\boldi$. However, \n$r(\\boldi)$, in turn, may influence the sampling probabilities of other atoms containing elements \nnot in $\\boldi$. As a consequence, both in the RBN and the ProbLog model of Section~\\ref{sec:background},\nthe value of $\\emph{red}(i)$ influences the probability for $\\emph{edge}(i,j)$, and as a result,\nthe two variables $\\emph{red}(i), \\emph{edge}(i,j)$ are not independent. \n\nDue to the undirected nature of MLNs, one there cannot impose an ``upstream only'' limitation on\nprobabilistic dependencies. 
As a result, the restriction imposed in Proposition~\ref{prop:mlnfrag}\nimplies that when a model defines two random variables $\emph{red}(i), \emph{edge}(i,j)$ ($i\neq j$),\nthese must be independent.\n\n\section{Projectivity and Inference}\n\label{sec:inference}\n\nIn an inference scenario, we are given an RRSM $\{Q^{(n)}\mid n\in \Nset\}$, a domain of size $n$, and\na query, which for the sake of concreteness we assume to be of the form\n\begin{displaymath}\n  P(r(\boldi)\mid (\neg)r_1(\boldi_1),\ldots,(\neg)r_h(\boldi_h)) = ?,\n\end{displaymath}\nwhere the $(\neg)r_j(\boldi_j)$ are observed evidence literals. Let $I:=\boldi\cup\cup_{j=1}^h \boldi_j$, \nand $m:=|I|$. \nTo answer the query, all we need is the marginal distribution \n$Q^{(n)}\downarrow I$. If the model is projective, then this marginal is equivalent to $Q^{(m)}$, and \nindependent of $n$. In practice this means that when inference is performed by grounding the\nrelational model over a concrete domain, we only need to ground the model over the\ndomain that contains exactly the entities mentioned in the query. \n\nApart from the clear computational advantages that projectivity affords, it also leads to \nrobustness with regard to domains that are only incompletely known or observed: in many cases\nit can be difficult to specify exactly the size of the domain in which our observed entities $I$ ``live''\n(what is the domain of a person's social network?). Here projectivity means that our inferences\nare not impacted by missing or imprecise information about the domain.\n\n\n\section{Projectivity and Learning}\n\label{sec:learning}\n\nIn a learning setting, we are interested in estimating the\nparameter $\boldtheta$ for a given parametric family of RRSMs. We assume that\nthe training data consists of one or several observed possible worlds, in the latter \ncase possibly worlds of varying sizes. 
Like~\cite{Xiang2011} and\n\cite{kuzelka2018relational} we mostly focus on the scenario where the training data \nconsists of a single possible world $\omega\in\Omega^{(m)}$, and we estimate $\boldtheta$ by maximizing\nthe likelihood\n\n\begin{equation}\n  \label{eq:thetalik}\n  L(\boldtheta|\omega) = Q^{(m)}_{\boldtheta}(\omega).\n\end{equation}\nFor simplicity we here restrict attention to pure maximum likelihood inference, but our considerations\nare also pertinent for penalized likelihood or Bayesian inference approaches.\n\nFor these learning problems, too, the dependence on the domain size $n$ is an important concern.\nConsider, for example, the problem of learning a model for an evolving (social) network. \nIs the model we learn today, while the network is of size $m$, still a good model when the\nnetwork has evolved to size $n>m$? Moreover, the data from which we learn may not be complete,\nand not contain the data about all the entities that are actually present in the relational \ndomain. Then we try to learn a model for a domain of size $n$, from data corresponding to \na domain of smaller size $m 1$. In this situation the problem becomes much harder, because one needs some understanding of the monodromy of the family $f$ over essentially arbitrary geometrically irreducible subvarieties $Z \subset S_{\mathbb{C}}$. 
To explain what we mean, let $\mathbb{V} = R^{i} f_{*} \mathbb{Z}$ and define for each such $Z \subset S_{\mathbb{C}}$:\n\begin{defn}\nThe algebraic monodromy group $\mathbf{H}_{Z}$ of $Z$ is the identity component of the Zariski closure of the monodromy representation associated to $\restr{\mathbb{V}}{Z^{\textrm{nor}}}$, where $Z^{\textrm{nor}} \to Z$ is the normalization.\n\end{defn}\n\noindent The variation $\mathbb{V}$ determines a flag variety $\ch{L}$, on which the group $\mathbf{H}_{Z}$ can be said to act after identifying $\ch{L}$ with the variety of Hodge flags on some fibre $\mathbb{V}_{s}$ for $s \in Z(\mathbb{C})$. Let $\varphi : S \to \Gamma \backslash D$ be a period map determined by $\mathbb{V}$ with $D \subset \ch{L}$ the complex submanifold of polarized Hodge flags. Then the key quantity relevant for the Lawrence-Venkatesh method is\n\begin{equation}\n\Delta = \min_{Z, s} \left[ \dim (\mathbf{H}_{Z} \cdot F^{\bullet}_{s}) - \dim \varphi(Z) \right] , \n\end{equation}\nwhere the minimum is taken over all positive-dimensional geometrically irreducible subvarieties $Z \subset S_{\mathbb{C}}$, points $s \in Z(\mathbb{C})$, and where $F^{\bullet}_{s}$ is the Hodge flag on $\mathbb{V}_{s}$.\footnote{We note here that $\varphi(Z)$ is analytically constructible, for instance by applying the main result of \cite{OMINGAGA}, so its dimension makes sense.} In particular, the Lawrence-Venkatesh method produces an integer $k$, and shows that if $\Delta \geq k$, then the Shafarevich conjecture holds for $f$.\n\nFor successful applications of the Lawrence-Venkatesh strategy for the Shafarevich problem in situations when $\dim S > 1$ we know of only the paper \cite{lawrence2020shafarevich} of Lawrence and Sawin, who are able to apply this strategy beyond the first induction step to prove a Shafarevich conjecture for hypersurfaces lying inside a fixed abelian variety $A$. 
Their methods require the auxiliary use of a Tannakian category associated to $A$, and it seems unclear what to do without this abelian variety structure present.\n\nOur main result is as follows:\n\n\begin{thm}\n\label{mainthm}\nConsider a smooth projective family $f : X \to S$ defined over $\mathcal{O}_{K,N}$ and with $S$ smooth and quasi-projective, and given an integer $d$, define\n\begin{equation}\n\Delta_{d} = \min_{Z, s} \left[ \dim (\mathbf{H}_{Z} \cdot F^{\bullet}_{s}) - \dim \varphi(Z) \right] , \n\end{equation}\nwhere the minimum ranges over all geometrically-irreducible subvarieties $Z \subset S_{\mathbb{C}}$ with $\dim Z > d$ and points $s \in Z(\mathbb{C})$. Then there exists an effective procedure which outputs an infinite sequence of integers\n\[ \kappa(1) \leq \kappa(2) \leq \cdots \leq \kappa(r) \leq \cdots < \Delta_{d} \]\nsuch that for some $r = r_{0}$ we have $\kappa(r_{0}) = \Delta_{d} - 1$.\n\nIf moreover the period map $\varphi$ is quasi-finite, one can determine $r_{0}$.\n\end{thm}\n\n\begin{rem}\nLet us make absolutely clear what is meant by the term ``effective procedure'' in \autoref{mainthm}. We mean that there exists an infinite-time algorithm (for instance, a non-halting Turing machine), which outputs a sequence of integers $\{ \kappa(r) \}_{r = 1}^{\infty}$, with the integer $\kappa(r)$ being outputted at some finite time $t_{r}$ depending on $r$. Moreover, one also has at time $t_{r}$ a proof that $\kappa(r) < \Delta_{d}$. Therefore, after time $t_{r}$, one can stop the algorithm and use the bound $\kappa(r) < \Delta_{d}$ as input to the Lawrence-Venkatesh method. One is also guaranteed that there is some $r_{0}$ so that at time $t_{r_{0}}$ the bound $\kappa(r_{0}) < \Delta_{d}$ is best possible; however, one does not necessarily have a method to determine $r_{0}$ unless $\varphi$ is quasi-finite. 
Finally, the algorithm can be described entirely in terms of algebro-geometric computations involving algebraically constructible sets, and implicit in the proof is a description of how to implement it.\n\\end{rem}\n\nThere is no fundamental obstruction which requires us to restrict to quasi-finite $\\varphi$ for the second part of \\autoref{mainthm}. Rather, the second part of \\autoref{mainthm} references some delicate arguments in \\cite{urbanik2021sets} which are only enough to handle the quasi-finite case directly, and recalling enough of the machinery of \\cite{urbanik2021sets} to carry out the argument for the general case would lead us too far astray from the main ideas. We note that one does not actually need to determine the integer $r_{0}$ referenced in \\autoref{mainthm} to apply the machinery of Lawrence and Venkatesh: one wants to be able to compute the best possible lower bound for $\\Delta_{d}$, but one is not required to actually prove that the bound one has is optimal in order to deduce diophantine finiteness results.\n\n\\subsection{The Approach of Lawrence and Venkatesh}\n\nWe begin with a preliminary observation. To show that $\\dim \\overline{S(\\mathcal{O}_{K,N})}^{\\textrm{Zar}} \\leq d$, it suffices to show, for any irreducible subscheme $T \\subset S$ of dimension $> d$ and defined over $\\mathcal{O}_{K,N}$, that the Zariski closure $\\overline{T(\\mathcal{O}_{K,N})}^{\\textrm{Zar}}$ is a proper algebraic subscheme of $T$. We therefore fix such a subscheme $T \\subset S$ with $\\dim T > d$, and seek to show that $\\dim \\overline{T(\\mathcal{O}_{K,N})}^{\\textrm{Zar}} < \\dim T$.\n\nFix a prime $p \\in \\mathbb{Z}$ not dividing $N$, and let $t \\in T(\\mathcal{O}_{K,N})$ be a point. It is conjectured that for any $i$ the representation of $\\textrm{Gal}(\\overline{K}\/K)$ on $H^{i}_{\\textrm{\\'et}}(X_{t,\\overline{K}}, \\mathbb{Q}_{p})$ is semisimple. 
If this result were to be known for all such $t$, an argument of Faltings would show that for $t \in T(\mathcal{O}_{K,N})$ there are at most finitely many possibilities for the isomorphism class of the representation of $\textrm{Gal}(\overline{K}\/K)$ on $H^{i}_{\textrm{\'et}}(X_{t,\overline{K}}, \mathbb{Q}_{p})$. To establish the non-Zariski-density of $T(\mathcal{O}_{K,N})$ it would then suffice to show that the fibres of the map\n\[ t \in T(\mathcal{O}_{K,N}) \hspace{1em} \xrightarrow{\tau} \hspace{1em} \big\{ \textrm{Gal}(\overline{K}\/K)\textrm{-rep. on }H^{i}_{\textrm{\'et}}(X_{t,\overline{K}}, \mathbb{Q}_{p}) \big\}\hspace{0.5em} \big\/ \hspace{0.5em} \textrm{iso}. \]\nare not Zariski dense. As explained by Lawrence and Venkatesh in \cite{LV}, this is essentially the original argument for the Mordell conjecture due to Faltings.\n\nThe problem with applying this strategy in general is twofold. First, the semisimplicity of $H^{i}_{\textrm{\'et}}(X_{t,\overline{K}}, \mathbb{Q}_{p})$ is not known, and for most choices of $f$ appears out of reach. Secondly, without a good geometric interpretation of the \'etale cohomology $H^{i}_{\textrm{\'et}}(X_{t,\overline{K}}, \mathbb{Q}_{p})$ it is difficult to understand $\tau$. The key insight in the paper of Lawrence and Venkatesh is that one may potentially overcome both problems by passing to a $p$-adic setting where they are more manageable.\n\nInstead of considering the global Galois representation $\rho_{t} : \textrm{Gal}(\overline{K}\/K) \to H^{i}_{\textrm{\'et}}(X_{t,\overline{K}}, \mathbb{Q}_{p})$, we consider its semisimplification $\rho^{\textrm{ss}}_{t}$, and restrict $\rho_{t}$ along the map $\textrm{Gal}(\overline{K_{v}}\/K_{v}) \to \textrm{Gal}(\overline{K}\/K)$ induced by a fixed embedding $\overline{K} \hookrightarrow \overline{K_{v}}$ to obtain $\rho_{t,v}$, where $v$ is a fixed place above $p$. 
The functors of $p$-adic Hodge theory tell us that the representation $\rho_{t,v}$ determines a triple $(H^{i}_{\textrm{dR}}(X_{t,K_{v}}), \phi_{t}, F^{\bullet}_{t})$, where $F^{\bullet}_{t}$ is the Hodge filtration on de Rham cohomology and $\phi_{t}$ is the crystalline Frobenius at $t$. If we somehow manage to consider the data $(H^{i}_{\textrm{dR}}(X_{t,K_{v}}), \phi_{t}, F^{\bullet}_{t})$ up to ``semisimplification'' (in the sense that we identify such triples when the associated \emph{global} Galois representations have isomorphic semisimplifications), our problem is then to study the fibres of the map\n\[ t \in T(\mathcal{O}_{K,N}) \hspace{0.5em} \xrightarrow{\tau_{p}} \hspace{0.5em} \big\{\textrm{``semisimplifications'' of } (H^{i}_{\textrm{dR}}(X_{t,K_{v}}), \phi_{t}, F^{\bullet}_{t}) \big\} \hspace{0.5em} \big\/ \hspace{0.5em} \textrm{iso}. \]\nand show they lie in a Zariski closed subscheme of smaller dimension.\n\nNext, we make the elementary observation that to bound the dimension of the Zariski closure of $T(\mathcal{O}_{K,N})$, it suffices to cover $T(\mathcal{O}_{K,v})$ by finitely many $v$-adic disks $D_{1}, \hdots, D_{k}$ and bound the dimension of the Zariski closure of $D_{i} \cap T(\mathcal{O}_{K,N})$ for each $i$; here $\mathcal{O}_{K,v}$ is the ring of $v$-adic integers. It can then be shown that there exists such a cover for which the Hodge bundle $\mathcal{H} = R^{i} f_{*} \Omega^{\bullet}_{X\/S}$ can be trivialized rigid-analytically over each $D_{i}$; moreover, with respect to each such trivialization the Frobenius operator $\phi_{t}$ is independent of $t \in D_{i}$. The problem then reduces to studying a varying filtration $F^{\bullet}_{t}$ on a fixed vector space $V_{p} = H^{i}_{\textrm{dR}}(X_{t_{0}})$ for some $t_{0} \in D_{i}$. 
In particular, we obtain a rigid-analytic map\n\[ D_{i} \hspace{0.5em} \xrightarrow{\psi_{p}} \hspace{0.5em} \underbrace{\{ \hspace{0.3em} \textrm{Hodge filtrations on }V_{p} \hspace{0.3em} \}}_{\ch{L}_{p}} , \]\nand those points of $\ch{L}_{p}$ arising from points $t \in T(\mathcal{O}_{K,N}) \cap D_{i}$ lie inside finitely many subvarieties $O_{i1}, \hdots, O_{i\ell}$ of $\ch{L}_{p}$ corresponding to the finitely many possible isomorphism classes of $\rho^{\textrm{ss}}_{t}$.\n\nWe are now faced with the problem of understanding the intersections $\psi_{p}(D_{i}) \cap O_{ij}$, and showing that their inverse images under $\psi_{p}$ lie in an algebraic subscheme of smaller dimension. One part of this problem is to understand the dimensions of the varieties $O_{ij}$, and to show that they are sufficiently small: this step is carried out successfully for the families of hypersurfaces studied both by Lawrence-Venkatesh and Lawrence-Sawin, and appears to be manageable in general. The more difficult object to control is $\psi_{p}(D_{i})$, for which we need to understand the variation of the filtration $F^{\bullet}$ over $D_{i}$. This, in turn, is governed by the Gauss-Manin connection $\nabla : \mathcal{H} \to \mathcal{H} \otimes \Omega^{1}_{T}$, which exists universally over $S$ after possibly increasing $N$; we may adjust $p$ so that it does not divide $N$ if necessary. The fact that $\nabla$ exists universally over $\mathcal{O}_{K,N}$ means that the same system of differential equations satisfied by $\psi_{p}$ at $t \in T(\mathcal{O}_{K,N}) \cap D_{i}$ is also satisfied by a Hodge-theoretic period map $\psi : B \to \ch{L}$ on a sufficiently small analytic neighbourhood $B \subset T(\mathbb{C})$ containing $t$, where $\ch{L}$ is a variety of Hodge flags. 
In particular, one can prove that the Zariski closures of $\psi_{p}(D_{i})$ and $\psi(B)$ have the same dimension.\n\nThe final step, which is to show that the Zariski closure of $T(\mathcal{O}_{K,N}) \cap D_{i}$ in $T$ has smaller dimension, is completed as follows. The Ax-Schanuel Theorem \cite{AXSCHAN} for variations of Hodge structures due to Bakker-Tsimerman shows that if $V$ is an algebraic subvariety of $\ch{L}$ of dimension at most $\dim \overline{\psi(B)}^{\textrm{Zar}} - \dim \psi(B)$,\footnote{The dimension $\dim \psi(B)$ can once again, at least for open neighbourhoods $B$ with a sufficiently mild geometry, be made sense of as the dimension of a locally constructible analytic set. Alternatively one can replace $\dim \psi(B)$ with $\dim \varphi(Z)$, where $\varphi$ is as before and $Z \subset T_{\mathbb{C}}$ is a component containing $B$.} then the inverse image under $\psi$ of the intersection $\psi(B) \cap V$ lies in an algebraic subvariety of $T_{\mathbb{C}}$ of smaller dimension. Choosing an isomorphism $\mathbb{C} \cong \overline{K_{v}}$ one can transfer this fact to the same statement for the map $\psi_{p}$ and in particular for $V = O_{ij}$. Our problem is finally reduced to giving a lower bound for the difference $\dim \overline{\psi(B)}^{\textrm{Zar}} - \dim \psi(B)$. Our main result now reads:\n\n\begin{thm}\n\label{mainthm2}\nDefine the quantity\n\[ \Delta_{d} := \min_{Z, \psi} \left[ \dim \overline{\psi(B)}^{\textrm{Zar}} - \dim \psi(B) \right] , \]\nwhere $Z$ ranges over all irreducible complex algebraic subvarieties $Z \subset S_{\mathbb{C}}$ of dimension greater than $d$, and where $\psi$ is any complex analytic period map determined by the variation of Hodge structures $\mathbb{V} = R^{i} f_{*} \mathbb{Z}$ and defined on a neighbourhood $B \subset Z(\mathbb{C})$. 
Then there exists an effective procedure which outputs an infinite sequence of lower bounds\n\[ \kappa(1) \leq \kappa(2) \leq \cdots \leq \kappa(r) \leq \cdots < \Delta_{d} \]\nsuch that for some $r = r_{0}$ we have $\kappa(r_{0}) = \Delta_{d} - 1$.\n\nIf moreover the period map $\varphi$ is quasi-finite, one can determine $r_{0}$.\n\end{thm}\n\n We note that by \cite[Lem. 4.10(ii)]{urbanik2021sets} it is also a consequence of the Bakker-Tsimerman Theorem that when $Z$ is geometrically irreducible, we have $\overline{\psi(B)}^{\textrm{Zar}} = \mathbf{H}_{Z} \cdot \psi(t)$ for any point $t \in Z(\mathbb{C})$, which recovers the statement of \autoref{mainthm}. \n\n\subsection{Basic Idea of the Method}\n\label{methodsketch}\n\nWe may observe that the computation of the bound described in \autoref{mainthm2} is a purely Hodge-theoretic problem, i.e., it concerns only properties of the integral variation $\mathbb{V} = R^{i} f_{*} \mathbb{Z}$ of Hodge structures on $S_{\mathbb{C}}$. Let $\mathcal{Q} : \mathbb{V} \otimes \mathbb{V} \to \mathbb{Z}$ be a polarization of $\mathbb{V}$, and let $(V, Q)$ be a fixed polarized lattice isomorphic to one (hence any) fibre of $(\mathbb{V}, \mathcal{Q})$; as it causes no harm, we will assume that $V = \mathbb{Z}^{m}$ for some $m$, and therefore sometimes write $\textrm{GL}_{m}$ for $\textrm{GL}(V)$. Let $D$ be the complex manifold parametrizing polarized Hodge structures on $(V, Q)$ with the same Hodge numbers as $(\mathbb{V}, \mathcal{Q})$. A point $h \in D$ may be viewed as a morphism $h : \mathbb{S} \to \textrm{GL}(V)_{\mathbb{R}}$, where $\mathbb{S}$ is the Deligne torus, and the Mumford-Tate group $\textrm{MT}(h)$ is the $\mathbb{Q}$-Zariski closure of $h(\mathbb{S})$. \n\nTo present our method, we introduce some terminology:\n\n\begin{notn}\nWe denote by $\ch{L}$ the $\mathbb{Q}$-algebraic variety of all Hodge flags on the lattice $V$, not necessarily polarized. 
We note that $D$ is an open submanifold of a closed $\mathbb{Q}$-algebraic subvariety $\ch{D} \subset \ch{L}$.\n\end{notn}\n\n\begin{defn}\nGiven two subvarieties $W_{1}, W_{2} \subset \ch{L}$, we say that $W_{1} \sim_{\textrm{GL}} W_{2}$ if there exists $g \in \textrm{GL}_{m}(\mathbb{C})$ such that $g \cdot W_{1} = W_{2}$. Given a variety $W \subset \ch{L}$, we call the equivalence class $\mathcal{C}(W)$ under $\sim_{\textrm{GL}}$ a \emph{type}. The dimension of a type $\mathcal{C}(W)$ is the dimension of $W$.\n\end{defn}\n\n\begin{defn}\nWe say that a type $\mathcal{C}$ is \emph{Hodge-theoretic} if $\mathcal{C} = \mathcal{C}(W)$, where $W = N(\mathbb{C}) \cdot h$ for $h \in D$ and $N$ a $\mathbb{Q}$-algebraic normal subgroup of $\textrm{MT}(h)$.\n\end{defn}\n\n\vspace{0.5em}\n\nThe first step in our algorithm is:\n\n\begin{quote}\n\textbf{Step One:} Compute a finite list of types $\mathcal{C}_{1}, \hdots, \mathcal{C}_{\ell}$ such that every Hodge-theoretic type appears somewhere in the list.\n\end{quote}\n\n\noindent When we say to compute a type $\mathcal{C}$, we mean to compute a representative $W \subset \ch{L}$ such that $\mathcal{C} = \mathcal{C}(W)$. That there are only finitely many Hodge-theoretic types is shown in \autoref{fintypesarise} below. \n\n\nThe problem given in Step One is solved in \cite[Prop. 5.4]{urbanik2021sets}; we will say little about it here. It is related to the problem of classifying Mumford-Tate groups up to conjugacy by $\textrm{GL}_{m}(\mathbb{C})$, for which one can use a constructive version of the proof in \cite[Thm. 4.14]{hodgelocivoisin}. It is also similar to the problem of classifying Mumford-Tate domains as studied in \cite[Chap. VII]{GGK}. We note that the methods of \cite[Chap. 
VII]{GGK}, when they can be carried out effectively, result in an approach for which $\\mathcal{C}_{1}, \\hdots, \\mathcal{C}_{\\ell}$ will be exactly the set of Hodge-theoretic types.\n\n\\vspace{0.5em}\n\nThe second step is more involved, and is the crux of our method. To describe it we need to introduce some terminology. \n\n\\begin{defn}\n\\label{locperdef}\nA \\emph{local period map} is a map $\\psi : B \\to \\ch{L}$ obtained as a composition $\\psi = q \\circ A$, where:\n\\begin{itemize}\n\\item[(i)] The set $B \\subset S(\\mathbb{C})$ is a connected analytic neighbourhood on which $\\mathbb{V}$ is constant and $F^{k} \\mathcal{H}$ is trivial for each $k$, where $\\mathcal{H} = \\mathbb{V} \\otimes \\mathcal{O}_{\\an{S}}$.\n\\item[(ii)] The map $A : B \\to \\textrm{GL}_{m}(\\mathbb{C})$ is a varying filtration-compatible period matrix over $B$. More precisely, there exists a basis $v^{1}, \\hdots, v^{m}$ for $\\mathcal{H}(B)$, compatible with the filtration in the sense that $F^{k} \\mathcal{H}(B)$ is spanned by $v^{1}, \\hdots, v^{i_{k}}$ for some $i_{k}$, and a flat frame $b^{1}, \\hdots, b^{m}$ for $\\mathbb{V}_{\\mathbb{C}}(B)$, such that $A(s)$ is the change-of-basis matrix from $v^{1}_{s}, \\hdots, v^{m}_{s}$ to $b^{1}_{s}, \\hdots, b^{m}_{s}$.\n\\item[(iii)] The map $q : \\textrm{GL}_{m} \\to \\ch{L}$ sends a matrix $M$ to the Hodge flag $F_{M}^{\\bullet}$ defined by the property that $F^{k}_{M}$ is spanned by the first $i_{k}$ columns.\n\\end{itemize}\n\\end{defn}\n\n\\noindent To summarize the preceding definition: a local period map is exactly a period map on $B$ except one does not necessarily compute periods with respect to the integral lattice $\\mathbb{V}(B) \\subset \\mathbb{V}_{\\mathbb{C}}(B)$ but is instead allowed to consider periods with respect to a more general complex flat frame. 
There is a natural $\textrm{GL}_{m}(\mathbb{C})$-action on the set of germs of local period maps at a point $s \in S(\mathbb{C})$, where $M \in \textrm{GL}_{m}(\mathbb{C})$ acts on the map $\psi = q \circ A$ to give $M \cdot \psi = q \circ (M \cdot A)$. This action corresponds exactly to a change of the flat frame $b^{1}, \hdots, b^{m}$, and all germs of local period maps at $s$ lie in a single $\textrm{GL}_{m}(\mathbb{C})$-orbit.\n\n\nThe construction of a local period map $\psi : B \to \ch{L}$ involves picking a basis $b^{1}, \hdots, b^{m}$ of $\mathbb{V}_{\mathbb{C}}(B)$, and hence choosing an isomorphism $\mathbb{V}_{\mathbb{C}}(B) \simeq \mathbb{C}^{m}$. When working with a local period map, we will always assume that such a basis has been chosen, and hence identify subgroups of $\textrm{GL}(\mathbb{V}_{\mathbb{C}}(B))$ with subgroups of $\textrm{GL}_{m}(\mathbb{C})$. In particular, if $Z \subset S_{\mathbb{C}}$ is a geometrically irreducible subvariety which intersects $B$, we have an induced action of $\mathbf{H}_{Z}$ on $\ch{L}$.\n\nLastly, we need:\n\n\begin{defn}\nGiven two types $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$, we say that $\mathcal{C}_{1} \leq \mathcal{C}_{2}$ if there exists $W_{i} \subset \ch{L}$ for $i = 1, 2$ such that $\mathcal{C}_{i} = \mathcal{C}(W_{i})$ and $W_{1} \subset W_{2}$.\n\end{defn}\n\n\begin{defn}\n\label{vartypedef}\nGiven a local period map $\psi : B \to \ch{L}$ and a geometrically irreducible subvariety $Z \subset S_{\mathbb{C}}$ intersecting $B$ at $s$, we call $\mathcal{C}(\overline{\psi(B \cap Z)}^{\textrm{Zar}}) = \mathcal{C}(\mathbf{H}_{Z} \cdot \psi(s))$ the \emph{type} of $Z$, and denote it by $\mathcal{C}(Z)$.\n\end{defn}\n\n\noindent For well-definedness, see \autoref{welldeflem} below. 
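\vspace{0.3em}\n\n\noindent As a toy illustration of \autoref{vartypedef} (a standard observation, not specific to any particular variation): if $Z = \{ s \}$ is a single point, then the algebraic monodromy group $\mathbf{H}_{Z}$ is trivial, and so\n\[ \mathcal{C}(Z) = \mathcal{C}\big( \{ \psi(s) \} \big) , \hspace{1.5em} \dim \mathcal{C}(Z) = 0 , \]\nis the zero-dimensional type of a single Hodge flag. More generally, $\dim \mathcal{C}(Z)$ is the dimension of the orbit $\mathbf{H}_{Z} \cdot \psi(s)$, and so measures how much the Hodge flag at $s$ is moved by the algebraic monodromy of $Z$.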
From Step One, we have computed a finite list $\\mathcal{C}_{1}, \\hdots, \\mathcal{C}_{\\ell}$ of types containing all types that can arise from the variation $\\mathbb{V}$. Our next task is then:\n\n\\begin{quote}\n\\textbf{Step Two:} For each type $\\mathcal{C}_{i}$ appearing in the list, compute a differential system $\\mathcal{T}(\\mathcal{C}_{i})$ on $S$ characterized by the property that an algebraic subvariety $Z \\subset S_{\\mathbb{C}}$ is an integral subvariety for $\\mathcal{T}(\\mathcal{C}_{i})$ if and only if $\\mathcal{C}(Z) \\leq \\mathcal{C}_{i}$, and determine the dimension of a maximal integral subvariety for this system.\n\\end{quote}\n\n\\noindent We explain precisely what we mean by ``differential system'' in \\autoref{secthree}; actually our method does something more subtle than Step Two due to the fact that we can only approximate $\\mathcal{T}(\\mathcal{C}_{i})$ up to some finite order, but for expository purposes this is the essential point. After this, we will see the problem is reduced to analyzing which of the differential systems $\\mathcal{T}(\\mathcal{C}_{i})$ admit algebraic solutions of ``exceptional'' dimension, which can be carried out using tools from functional transcendence.\n\n\\subsection{Acknowledgements}\n\nThe author thanks Brian Lawrence, Akshay Venkatesh, and Will Sawin for comments on a draft of this manuscript.\n\n\n\\section{Algebraic Monodromy Orbits up to Conjugacy}\n\nIn this section we describe an effective method for solving ``Step One'' as posed in \\autoref{methodsketch}. We will also prove some preliminary facts about types used in the introduction, and we continue with the notation established there. 
We will work in the context of a general polarizable integral variation of Hodge structure $\\mathbb{V}$ on the complex algebraic variety $S$, not necessarily coming from a projective family as in the introduction.\n\n\\subsection{Basic Properties of Types}\n\n\\begin{lem}\n\\label{finmantypes}\nFor any geometrically irreducible subvariety $Z \\subset S$ and any local period map $\\psi : B \\to \\ch{L}$ with $Z \\cap B$ non-empty, we have\n\\[ \\overline{\\psi(Z \\cap B)}^{\\textrm{Zar}} = \\mathbf{H}_{Z} \\cdot \\psi(s) , \\]\nfor any point $s \\in Z(\\mathbb{C})$.\n\\end{lem}\n\n\\begin{proof}\nIt suffices to show that \n\\[ \\overline{\\psi(C)}^{\\textrm{Zar}} = \\mathbf{H}_{Z} \\cdot \\psi(s) , \\]\nfor each analytic component $C \\subset Z \\cap B$ separately, with $s$ a point of $C$. By acting on $\\psi$ by an element of $\\textrm{GL}_{m}(\\mathbb{C})$, the claim can be reduced to the situation where the periods which determine $\\psi$ are computed with respect to a basis for the integral lattice $\\mathbb{V}(B)$, and then the claim follows from \\cite[Lem. 4.10(ii)]{urbanik2021sets}.\n\\end{proof}\n\n\\begin{lem}\n\\label{welldeflem}\nThe equivalence class under $\\sim_{\\textrm{GL}}$ of $\\overline{\\psi(B \\cap Z)}^{\\textrm{Zar}}$ is independent of $\\psi$; i.e., the type of $Z \\subset S$ is well-defined.\n\\end{lem}\n\n\\begin{proof}\nLet $p : Z^{\\textrm{sm}} \\to Z$ be a smooth resolution, and consider the variation $p^{*} \\mathbb{V}$. From the fact that germs of local period maps on $Z^{\\textrm{sm}}$ with respect to the variation $p^{*} \\mathbb{V}$ factor through germs of local period maps on $S$, we may reduce to the same problem for $Z^{\\textrm{sm}}$ and the variation $p^{*} \\mathbb{V}$, i.e., we may assume $Z = S$. 
By analytically continuing a fixed local period map $\psi$ to the universal covering $\widetilde{S} \to S$, we learn from the irreducibility of $\widetilde{S}$ that at each point $s \in S$, there exists a local period map $\psi_{s} : B_{s} \to \ch{L}$ such that $\overline{\psi_{s}(B_{s})}^{\textrm{Zar}} = \overline{\psi(B)}^{\textrm{Zar}}$. Since the Zariski closure of $\psi_{s}(B_{s})$ is determined by the germ of $\psi_{s}$ at $s$, and because all germs of local period maps at $s$ lie in a single $\textrm{GL}_{m}(\mathbb{C})$-orbit, the result follows.\n\end{proof}\n\n\begin{lem}\n\label{fintypesarise}\nThere are only finitely many Hodge-theoretic types.\n\end{lem}\n\n\begin{proof}\nWe observe that the problem reduces to the following: show that there are only finitely many $\textrm{GL}_{m}(\mathbb{C})$-equivalence classes of pairs $(h, N)$, where\n\begin{itemize}\n\item[(i)] $h \in D$ is a polarized Hodge structure; and\n\item[(ii)] $N$ is a $\mathbb{Q}$-algebraic connected normal subgroup of $\textrm{MT}(h)$;\n\end{itemize}\nwhere we regard $\textrm{GL}_{m}(\mathbb{C})$ as acting on $h$ through its action on $\ch{L}$, and on $N$ by conjugation. Note that two such equivalent pairs will generate orbits in $\ch{L}$ equivalent under $\sim_{\textrm{GL}}$. Since the groups $\textrm{MT}(h)$ are reductive and have finitely many connected normal algebraic factors, this reduces to the same problem for pairs of the form $(h, \textrm{MT}(h))$. We recall that $D$ is an open submanifold of $\ch{D}$, the flag variety of flags satisfying the first Hodge-Riemann bilinear relation (the isotropy condition), and that $\ch{D}$ is an algebraic subvariety of $\ch{L}$. We then use the fact that there are finitely many Mumford-Tate groups up to $\textrm{GL}_{m}(\mathbb{C})$-conjugacy (see \cite[Thm. 
4.14]{hodgelocivoisin}), and that for a fixed Mumford-Tate group $M$ the Hodge structures in $D$ with Mumford-Tate contained in $M$ lie inside finitely many $M(\\mathbb{C})$-orbits in $\\ch{D}$, see \\cite[VI.B.9]{GGK}.\n\\end{proof}\n\n\\subsection{Computing Types up to Conjugacy}\n\nIn this section we give some references for carrying out Step One as described in the introduction.\n\n\\begin{prop}\n\\label{MTgroupequivalgo}\nThere exists an algorithm to compute subvarieties $W_{1}, \\hdots, W_{\\ell} \\subset \\ch{L}$ such that the set of Hodge-theoretic types is a (possibly proper) subset of $\\{ \\mathcal{C}(W_{1}), \\hdots, \\mathcal{C}(W_{\\ell}) \\}$.\n\\end{prop}\n\n\\begin{proof}\nThis is solved in \\cite[Prop. 5.4]{urbanik2021sets}.\n\\end{proof}\n\nLet us comment briefly on a different approach to Step One given in \\cite[Chap. VII]{GGK}. In \\cite[Chap. VII]{GGK}, the authors describe a method for classifying both Mumford-Tate groups and Mumford-Tate domains (orbits of points $h \\in D$ under $\\textrm{MT}(h)(\\mathbb{R})$ and $\\textrm{MT}(h)(\\mathbb{C})$). Given an appropriate such classification, one can easily solve Step One by computing the decompositions of the groups $\\textrm{MT}(h)$ that arise into $\\mathbb{Q}$-simple factors. The method of \\cite[Chap. VII]{GGK} is to first classify CM Hodge structures $h_{\\textrm{CM}} \\in D$, and then give a criterion for deciding when a Lie subalgebra of $\\mathfrak{gl}(V)$ corresponds to a Mumford-Tate group generating a Mumford-Tate domain containing $h_{\\textrm{CM}}$. They carry out this classification procedure successfully when $\\dim V = 4$, and so for variations with Hodge numbers $(2, 2)$ and $(1, 1, 1, 1)$.\n\nThe method given for classifying CM Hodge structures given in \\cite[Chap. VII]{GGK} is to observe that CM Hodge structures up to isogeny are determined by certain data associated to embeddings of CM fields, and hence the first step of the procedure in \\cite[Chap. 
VII]{GGK} is to ``classify all CM fields of rank up to [$\dim V$] by [their] Galois group''. We are not aware of an effective method for carrying out this step.\footnote{The paper \cite{dodson1984structure} gives a potential approach by giving a method to classify certain abstract structures associated with Galois groups of CM fields. However one still needs to determine which such structures are actually associated to a concrete CM field.} It is also not clear to us in precisely what sense the term ``classify'' is being used; i.e., we do not know what form the data of a ``classification of CM Hodge structures'' takes, and consequently what form the resulting classification of Mumford-Tate domains will have. For this reason, we were unable to apply the methods of \cite[Chap. VII]{GGK} to prove \autoref{MTgroupequivalgo}. \n\n\n\n\section{Differential Tools and a Jet Criterion}\n\label{secthree}\n\nIn this section we introduce a collection of effectively computable algebro-geometric correspondences which can be used for studying systems of differential equations on $S$ induced by the variation $\mathbb{V}$, and then use them to solve the main problem. We have already carried out most of the work in two preceding papers \cite{periodimages} and \cite{urbanik2021sets}, so we will first need to collect some results. In this section we assume that $S$ is a $K$-variety for $K \subset \mathbb{C}$ a number field, and that $\mathbb{V}$ is a polarizable integral variation of Hodge structure on $S_{\mathbb{C}}$ such that the vector bundle $\mathcal{H} = \mathbb{V} \otimes_{\mathbb{Z}} \mathcal{O}_{\an{S_{\mathbb{C}}}}$, the filtration $F^{\bullet}$, and the connection $\nabla : \mathcal{H} \to \mathcal{H} \otimes \Omega^{1}_{S}$ all admit $K$-algebraic models. 
Moreover, we assume that we may effectively compute a description of these objects in terms of finitely-presented $K$-modules over an affine cover of $S$; for a justification of this assumption in the situation where $\mathbb{V}$ comes from a smooth projective $K$-algebraic family $f : X \to S$ see \cite[\S2]{urbanik2021sets}.\n\n\subsection{The Constructive Period-Jet Correspondence}\n\nOur algebro-geometric correspondences will be formulated using the language of \emph{jets}. Let $A^{d}_{r} = K[t_{1}, \hdots, t_{d}]\/\langle t_{1}, \hdots, t_{d} \rangle^{r+1}$, and define $\mathbb{D}^{d}_{r} = \textrm{Spec} \hspace{0.15em} A^{d}_{r}$ to be the $d$-dimensional disk of order $r$; we suppress the field $K$ in the notation. A \emph{jet space} associated to a space $X$ is a space which parametrizes maps $\mathbb{D}^{d}_{r} \to X$. More formally, for $X$ a finite-type $K$-scheme, we have:\n\n\n\n\begin{defn}\n\label{jetspacedef}\nWe define $J^{d}_{r} X$ to be the scheme representing the contravariant functor $\textrm{Sch}_{K} \to \textrm{Set}$ given by \vspace{-0.2em}\n\[ T \mapsto \textrm{Hom}_{K}(T \times_{K} \mathbb{D}^{d}_{r}, X), \hspace{1.5em} [T \to T'] \mapsto [\textrm{Hom}_{K}(T' \times_{K} \mathbb{D}^{d}_{r}, X) \to \textrm{Hom}_{K}(T \times_{K} \mathbb{D}^{d}_{r}, X)] , \]\nwhere the natural map $\textrm{Hom}_{K}(T' \times_{K} \mathbb{D}^{d}_{r}, X) \to \textrm{Hom}_{K}(T \times_{K} \mathbb{D}^{d}_{r}, X)$ is obtained by pulling back along $T \times_{K} \mathbb{D}^{d}_{r} \to T' \times_{K} \mathbb{D}^{d}_{r}$. \n\end{defn}\n\n\vspace{0.5em}\n\n\noindent That the functor defining $J^{d}_{r} X$ in \autoref{jetspacedef} is representable is handled by \cite[\S2]{periodimages}. Moreover, $J^{d}_{r}$ is itself a functor, sending a map $g : X \to Y$ to the map $J^{d}_{r} g : J^{d}_{r} X \to J^{d}_{r} Y$ that acts on points by post-composition. 
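As a basic sanity check (a standard computation, not specific to our situation): for affine space $X = \mathbb{A}^{n}_{K}$, a $T$-point of $J^{d}_{r} X$ is simply an $n$-tuple of elements of $\mathcal{O}_{T}(T) \otimes_{K} A^{d}_{r}$, and hence\n\[ J^{d}_{r} \hspace{0.1em} \mathbb{A}^{n}_{K} \hspace{0.5em} \cong \hspace{0.5em} \mathbb{A}^{n \cdot \binom{d+r}{d}}_{K} , \]\nsince $\dim_{K} A^{d}_{r} = \binom{d+r}{d}$ is the number of monomials in $t_{1}, \hdots, t_{d}$ of degree at most $r$. In the simplest non-trivial case $d = r = 1$, the disk $\mathbb{D}^{1}_{1} = \textrm{Spec} \hspace{0.15em} K[t]\/\langle t \rangle^{2}$ is the scheme of dual numbers, and $J^{1}_{1} X$ recovers the total tangent space of $X$.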
For $X$ an analytic space, there is an analogous construction that appears in \\cite[\\S2.3]{periodimages}. If $K \\subset \\mathbb{C}$ is a subfield, this construction is compatible with analytification.\n\nThe purpose of introducing jets is the following result, proven in \\cite{urbanik2021sets}, building on \\cite{periodimages}.\n\n\\begin{thm}\n\\label{jetcorresp}\nFor each $d, r \\geq 0$, a variation of Hodge structure $\\mathbb{V}$ on $S$ gives rise to a canonical map \n\\[ \\eta^{d}_{r} : J^{d}_{r} S \\to \\textrm{GL}_{m} \\backslash J^{d}_{r} \\ch{L} , \\] \nof algebraic stacks characterized by the property that for any local period map $\\psi : B \\to \\ch{L}$ and any jet $j \\in J^{d}_{r} B$ we have $\\psi \\circ j = \\eta^{d}_{r}(j)$ modulo $\\textrm{GL}_{m}(\\mathbb{C})$.\n\nMoreover, if the data $(\\mathcal{H}, F^{\\bullet}, \\nabla)$ associated to the variation $\\mathbb{V}$ admits a $K$-algebraic model, the map $\\eta^{d}_{r}$ is defined over $K$, and there exists an algorithm to compute the $\\textrm{GL}_{m}$-torsor $p^{d}_{r} : \\mathcal{P}^{d}_{r} \\to J^{d}_{r} S$ and the $\\textrm{GL}_{m}$-invariant map $\\alpha^{d}_{r} : \\mathcal{P}^{d}_{r} \\to J^{d}_{r} \\ch{L}$ which defines $\\eta^{d}_{r}$ from a presentation of the data $(\\mathcal{H}, F^{\\bullet}, \\nabla)$ in terms of finitely-presented $K$-modules.\n\\end{thm}\n\nWe note that the computability of the torsor $\\mathcal{P}^{d}_{r}$ in \\autoref{jetcorresp} has in particular the following consequence: if $\\mathcal{S} \\subset (\\textrm{GL}_{m} \\backslash J^{d}_{r} \\ch{L})(\\mathbb{C})$ is a subset which is the image under the quotient of a constructible $L$-algebraic set $\\mathcal{F} \\subset J^{d}_{r} \\ch{L}$, where $K \\subset L$ is a computable extension, then we can compute $(\\eta^{d}_{r})^{-1}(\\mathcal{S})$ by computing $p^{d}_{r}((\\alpha^{d}_{r})^{-1}(\\mathcal{F}))$. 
Thus if we define \\vspace{0.5em}\n\\begin{defn}\n\\label{Tconstdef}\nFor a constructible $L$-algebraic set $\\mathcal{F} \\subset J^{d}_{r} \\ch{L}$, with $K \\subset L$ an extension, we write\n\\[ \\mathcal{T}^{d}_{r}(\\mathcal{F}) := (\\eta^{d}_{r})^{-1}(\\textrm{GL}_{m} \\cdot \\mathcal{F}) . \\]\nMoreover, for a type $\\mathcal{C} = \\mathcal{C}(W)$, we will write either $\\mathcal{T}^{d}_{r}(\\mathcal{C})$ or $\\mathcal{T}^{d}_{r}(W)$ for the set $\\mathcal{T}^{d}_{r}(J^{d}_{r} W)$.\n\\end{defn} \\vspace{0.3em}\n\\noindent then the main consequence of the preceding discussion for our situation is the following, which is immediate from what we have said:\n\\begin{prop}\n\\label{diffconstprop}\nFor each $d, r \\geq 0$ there exists an algorithm which, given a constructible $L$-algebraic set $\\mathcal{F} \\subset J^{d}_{r} \\ch{L}$ with $K \\subset L$ a computable extension, computes $\\mathcal{T}^{d}_{r}(\\mathcal{F}) \\subset J^{d}_{r} S$. \\qed\n\\end{prop}\n\n\\subsection{Jet Conditions and Types}\n\nLet us now try to understand how computing the ``differential constraints'' induced by types $\\mathcal{C}(W)$ as in \\autoref{diffconstprop} can help us carry out Step Two of \\autoref{methodsketch}. Let $\\Gamma = \\textrm{Aut}(V, Q)(\\mathbb{Z})$, and let $\\varphi : S_{\\mathbb{C}} \\to \\Gamma \\backslash D$ be the canonical period map which sends a point $s \\in S(\\mathbb{C})$ to the isomorphism class of the polarized Hodge structure on $\\mathbb{V}_{s}$. 
By \\cite{OMINGAGA}, the map $\\varphi$ factors as $\\iota \\circ p$, where $p : S_{\\mathbb{C}} \\to T$ is a dominant map of algebraic varieties and $\\iota$ is a closed embedding of analytic spaces; it follows that for each subvariety $Z \\subset S_{\\mathbb{C}}$ the dimension of the image $\\varphi(Z)$ makes sense as the dimension of a constructible algebraic set.\n\nFix a sequence of compatible embeddings \n\\[ \\textrm{Spec} \\hspace{0.15em} K = \\mathbb{D}^{0}_{r} \\xhookrightarrow{\\iota_{0}} \\mathbb{D}^{1}_{r} \\xhookrightarrow{\\iota_{1}} \\mathbb{D}^{2}_{r} \\xhookrightarrow{\\iota_{2}} \\mathbb{D}^{3}_{r} \\xhookrightarrow{\\iota_{3}} \\mathbb{D}^{4}_{r} \\xhookrightarrow{\\iota_{4}} \\cdots \\]\nof formal disks. By acting on points via pullback, we obtain natural transformations of functors $\\textrm{res}^{d}_{e} : J^{d}_{r} \\to J^{e}_{r}$ which produce maps $J^{d}_{r} X \\to J^{e}_{r} X$ that take jets $j : \\mathbb{D}^{d}_{r} \\to X$ to their restrictions $j \\circ \\iota_{d-1} \\circ \\cdots \\circ \\iota_{e}$. We are now ready to present the key proposition for our method:\n\n\\begin{defn}\nFor a scheme $X$ (resp. analytic space $X$) denote by $J^{d}_{r,nd} X \\subset J^{d}_{r} X$ the subscheme (resp. the analytic subspace) parametrizing those maps $j : \\mathbb{D}^{d}_{r} \\to X$ which are injective on the level of tangent spaces. We call such $j$ \\emph{non-degenerate} jets.\n\\end{defn}\n\n\\begin{prop}\n\\label{mainjetprop}\nLet $\\mathcal{S}$ be a set of types containing all the Hodge-theoretic types, and let $e$ and $k$ be non-negative integers. 
Then the following are equivalent:\n\\begin{itemize}\n\\item[(i)] there exists a geometrically irreducible subvariety $Z \\subset S_{\\mathbb{C}}$ with $\\dim Z > d$, $\\dim \\varphi(Z) \\geq e$, and such that $\\dim \\mathcal{C}(Z) - \\dim \\varphi(Z) \\leq k$;\n\\item[(ii)] there exists $\\mathcal{C} \\in \\mathcal{S}$ with $\\dim \\mathcal{C} - e \\leq k$, and such that the intersection \n\\[ \\mathcal{K}^{d}_{r}(\\mathcal{C}, e, k) := \\mathcal{T}^{d+1}_{r}(\\mathcal{C}) \\cap \\mathcal{T}^{d+1}_{r}((\\textrm{res}^{d+1}_{e})^{-1}(J^{e}_{r,nd} \\ch{L})) \\cap J^{d+1}_{r,nd} S \\] \nis non-empty for each $r \\geq 0$.\n\\end{itemize}\n\\end{prop}\n\n\\begin{rem}\nIn the situation that the variation $\\mathbb{V}$ admits a local Torelli theorem, one can drop the distinction between $\\dim Z$ and $\\dim \\varphi(Z)$ and consider instead the intersections $\\mathcal{T}^{d+1}_{r}(\\mathcal{C}) \\cap J^{d+1}_{r,nd} S$ in part (ii), ignoring the middle term.\n\\end{rem} \\vspace{0.5em}\n\nThe rest of this section we devote to proving \\autoref{mainjetprop}, identifying $S$ with $S_{\\mathbb{C}}$ for ease of notation. To begin with, let us check that (i) implies (ii) by applying the definitions. If $g : S' \\to S$ is an \\'etale cover and we consider the variation $\\mathbb{V}' = g^{*} \\mathbb{V}$, then the maps $\\eta^{d}_{r}$ and $\\eta'^{d}_{r}$ obtained from \\autoref{jetcorresp} are related by $\\eta'^{d}_{r} = \\eta^{d}_{r} \\circ (J^{d}_{r} g)$. Choosing a finite index subgroup $\\Gamma' \\subset \\Gamma$ and passing to such a cover, we can reduce to the case where we have a period map $\\varphi : S \\to \\Gamma \\backslash D$ with $D \\to \\Gamma \\backslash D$ a local isomorphism. Applying \\cite{OMINGAGA} the map $\\varphi : S \\to \\Gamma \\backslash D$ factors as $\\varphi = \\iota \\circ p$, where $p : S \\to T$ is a dominant map of algebraic varieties and $\\iota$ is an analytic closed embedding. 
Then via $p$, the variety $Z$ dominates a closed subvariety $Y \\subset T$ of dimension $\\dim \\varphi(Z) \\geq e$. Shrinking $S$ (and hence $Z$) we may assume that $Z$ is smooth, and that $Z$ surjects onto a dense open subset $Y^{\\circ} \\subset Y$. Shrinking $S$ even further we may assume that $Z \\to Y^{\\circ}$ is smooth. The smoothness of $Z \\to Y^{\\circ}$ implies in particular that the induced jet space maps $J^{d}_{r} Z \\to J^{d}_{r} Y^{\\circ}$ for all choices of $d$ and $r$ are surjective.\n\nWe may choose neighbourhoods $B \\subset S(\\mathbb{C})$ and $U \\subset D$ such that $\\restr{\\pi}{U} : U \\to \\pi(U)$ is an isomorphism, both $B \\cap Z$ and $\\pi(U) \\cap Y^{\\circ}$ are non-empty, and we have a local lift $\\psi : B \\to U$ of $\\varphi$. Choose a jet $\\sigma \\in J^{e}_{r,nd} (Y^{\\circ} \\cap \\pi(U))$ and lift it along $p$ to a jet $\\widetilde{\\sigma} \\in J^{e}_{r,nd} (Z \\cap B)$ landing at a point $s \\in S(\\mathbb{C})$. Using the fact that the germ $(Z, s)$ is smooth of dimension $\\dim Z > d$, the jet $\\widetilde{\\sigma}$ can be extended to a jet $j \\in J^{d+1}_{r, nd} (Z \\cap B)$ such that $\\textrm{res}^{d+1}_{e} (j) = \\widetilde{\\sigma}$, and hence $\\textrm{res}^{d+1}_{e} (\\varphi \\circ j) = \\sigma$. From the fact that $\\restr{\\varphi}{B} = \\pi \\circ \\psi$ and the defining property of the map $\\eta^{d}_{r}$ it follows that $j$ lies inside $\\mathcal{T}^{d+1}_{r}((\\textrm{res}^{d+1}_{e})^{-1}(J^{e}_{r,nd} \\ch{L})) \\cap J^{d+1}_{r,nd} S$.
We can then take $\\mathcal{C} = \\mathcal{C}(Z)$, and the fact that $j$ factors through $Z$ implies that $j \\in \\mathcal{T}^{d+1}_{r}(\\mathcal{C})$ as well.\n\nTo prove the reverse implication, we review some preliminary facts relating to jets.\n\n\\begin{defn}\nWe say a sequence $\\{ j_{r} \\}_{r \\geq 0}$ with $j_{r} \\in J^{d}_{r} X$ is \\emph{compatible} if the projections $J^{d}_{r} X \\to J^{d}_{r-1} X$ map $j_{r}$ to $j_{r-1}$.\n\\end{defn}\n\n\\begin{lem}\n\\label{compseqlem}\nSuppose that $\\mathcal{T}_{r} \\subset J^{d}_{r} X$ is a collection of non-empty constructible algebraic sets such that the projections $J^{d}_{r} X \\to J^{d}_{r-1} X$ map $\\mathcal{T}_{r}$ into $\\mathcal{T}_{r-1}$. Then there exists a compatible sequence $\\{ j_{r} \\}_{r \\geq 0}$ with $j_{r} \\in \\mathcal{T}_{r}$ for all $r \\geq 0$.\n\\end{lem}\n\n\\begin{proof}\nSee \\cite[Lem. 5.3]{periodimages}.\n\\end{proof}\n\n\\begin{defn}\nGiven a variety $Z$ (algebraic or analytic) and $z \\in Z$ a point, we denote by $(J^{d}_{r} Z)_{z}$ the fibre above $z$ of the natural projection map $J^{d}_{r} Z \\to Z$.\n\\end{defn}\n\n\\begin{lem}\n\\label{factorthrough}\nSuppose $g : (Z, z) \\to (Y, y)$ is a map of analytic germs with $\\dim (Z, z) = d$ and $(Z, z)$ smooth, and suppose there is an infinite compatible family $j_{r} \\in (J^{d}_{r,nd} Z)_{z}$ such that $g \\circ j_{r} \\in (J^{d}_{r} X)_{y}$ for some germ $(X, y) \\subset (Y, y)$ and all $r \\geq 0$. Then $g$ factors through the inclusion $(X, y) \\subset (Y, y)$. \n\\end{lem}\n\n\\begin{proof}\nSee \\cite[Lem. 4.5]{urbanik2021sets}.\n\\end{proof}\n\n\\begin{lem}\n\\label{jetdimlem}\nSuppose that $X$ is an algebraic variety (resp. analytic space) and $x \\in X$ is a point for which the fibre $(J^{d}_{r,nd} X)_{x}$ above $x$ is non-empty for all $r \\geq 0$. Then the germ $(X, x)$ has dimension at least $d$. \n\\end{lem}\n\n\\begin{proof}\nSee \\cite[Prop.
2.7]{periodimages}.\n\\end{proof}\n\n\\begin{proof}[Proof of \\ref{mainjetprop}:]\nBy what we have said, we are reduced to showing that (ii) implies (i). The statement is unchanged by replacing $S$ with a finite \\'etale covering $g : S' \\to S$ and the variation $\\mathbb{V}$ with $g^{*} \\mathbb{V}$; as before this does not affect the hypothesis (ii) since the maps $\\eta^{d}_{r}$ and $\\eta'^{d}_{r}$ associated to $S$ and $S'$ are related by $\\eta'^{d}_{r} = \\eta^{d}_{r} \\circ (J^{d}_{r} g)$. Choosing a finite index subgroup $\\Gamma' \\subset \\Gamma$ and choosing $g$ so the monodromy of $g^{*} \\mathbb{V}$ lies in $\\Gamma'$ we may reduce to the case where $D \\to \\Gamma \\backslash D$ is a local isomorphism. Moreover, taking a further finite \\'etale cover we may apply \\cite[Cor. 13.7.6]{CMS} to reduce to the case where $\\varphi$ is proper; this requires possibly extending $S'$ to a variety $S''$ by adding a closed subvariety at infinity, but as long as we are careful to work only with jets that factor through $S'$ our proof will produce a variety $Z$ intersecting $S'$; in particular, we now assume that $\\varphi : S \\to \\Gamma \\backslash D$ is proper but redefine the sets $\\mathcal{K}^{d}_{r}$ to equal\n\\[ \\mathcal{T}^{d+1}_{r}(\\mathcal{C}) \\cap \\mathcal{T}^{d+1}_{r}((\\textrm{res}^{d+1}_{e})^{-1}(J^{e}_{r,nd} \\ch{L})) \\cap J^{d+1}_{r,nd} S^{\\circ} , \\]\nfor some open subvariety $S^{\\circ} \\subset S$.\n\nApplying the main result of \\cite{OMINGAGA}, the map $\\varphi$ once again factors as $\\varphi = \\iota \\circ p$ with $p : S \\to T$ a dominant (now proper) map of algebraic varieties. We can then consider the Stein factorization $S \\xrightarrow{q} U \\xrightarrow{r} T$ of $p$; note that $q$ is proper with connected fibres, $U$ is normal, and $r$ is finite. One can define the type of a subvariety $Y \\subset U$ exactly as in \\autoref{vartypedef} with respect to the period map $U \\to \\Gamma \\backslash D$.
From \\autoref{compseqlem} above, the assumption (ii) entitles us to a compatible sequence $\\{ j_{r} \\}_{r \\geq 0}$ of jets such that $j_{r} \\in \\mathcal{K}^{d}_{r}(\\mathcal{C}, e, k)$ for all $r \\geq 0$. Let us write $\\mathcal{C} = \\mathcal{C}(W)$ for some subvariety $W \\subset \\ch{L}$.\n\nBy construction, the jets $\\sigma_{r} = \\textrm{res}^{d+1}_{e} j_{r}$ are non-degenerate, and remain so after composing with any local period map $\\psi : B \\to D$ for which $\\sigma_{r}$ factors through $B$. This in particular implies (since $D \\to \\Gamma \\backslash D$ is a local isomorphism) that the jets $\\varphi \\circ \\sigma_{r}$ are non-degenerate, and hence so are the jets $q \\circ \\sigma_{r}$. Let $Y \\subset U$ be the smallest algebraic subvariety such that $q \\circ j_{r} \\in J^{d+1}_{r} Y$ for all $r$. We observe that there exists a component $Z$ of $q^{-1}(Y)$ of dimension at least $d+1$ that contains the image of the jets $\\{ j_{r} \\}_{r \\geq 0}$: one can see this by picking a neighbourhood of $j_{0}$ of the form $\\mathbb{C}^{\\ell} \\times \\mathbb{C}^{d+1}$ such that $j_{r}$ is constant on the first factor, and applying \\autoref{factorthrough} above to see that the restriction of $q$ to $\\{ 0 \\} \\times \\mathbb{C}^{d+1}$ factors through $Y$. Moreover, we must have $q(Z) = Y$ by minimality, and applying \\autoref{jetdimlem} to the non-degenerate sequence $\\{ q \\circ \\sigma_{r} \\}_{r \\geq 0}$ shows that $\\dim Y \\geq e$. Since $r$ is finite, this means $\\dim \\varphi(Z) \\geq e$. From the fact that local period maps on $S$ factor through local period maps on $U$ we learn that $\\mathcal{C}(Z) = \\mathcal{C}(Y)$, so the result will follow if we can show that $\\dim \\mathcal{C}(Y) - \\dim Y \\leq k$.
For ease of notation let us now write $\\tau_{r} = q \\circ j_{r}$.\n\nFix a local lift $\\psi : B \\to D$ of the period map $U \\to \\Gamma \\backslash D$ with $B \\subset U(\\mathbb{C})$ an analytic ball such that the jets $\\tau_{r}$ factor through $B$. Consider the set $\\mathcal{G}_{r} \\subset \\textrm{GL}_{m}(\\mathbb{C})$ consisting of those $g \\in \\textrm{GL}_{m}(\\mathbb{C})$ for which $\\psi \\circ \\tau_{r} \\in g \\cdot (J^{d+1}_{r} W)$. Then for each $r$ the set $\\mathcal{G}_{r}$ is algebraically constructible, and using the fact that $j_{r} \\in \\mathcal{T}^{d+1}_{r}(W)$ the set $\\mathcal{G}_{r}$ is necessarily non-empty. Since the jets $\\tau_{r}$ are compatible, the sets $\\mathcal{G}_{r}$ form a descending chain $\\mathcal{G}_{0} \\supset \\mathcal{G}_{1} \\supset \\cdots$ of non-empty constructible sets, and hence the intersection $\\bigcap_{r \\geq 0} \\mathcal{G}_{r}$ is non-empty. Let $g_{\\infty}$ be an element of this intersection. Extend $\\psi$ to a lift $\\widetilde{\\varphi}_{Y} : \\widetilde{Y} \\to D$ of $Y \\to \\Gamma \\backslash D$ to the universal covering. Then $\\widetilde{\\varphi}_{Y}(\\widetilde{Y}) \\subset D$ is a closed analytic set containing the jets $\\psi \\circ \\tau_{r}$, and hence the non-degenerate jets $\\textrm{res}^{d+1}_{e}(\\psi \\circ \\tau_{r})$. Letting $A \\subset \\widetilde{\\varphi}_{Y}(\\widetilde{Y}) \\cap (g_{\\infty} \\cdot W)$ be the minimal analytic germ through which $\\psi \\circ \\tau_{r}$ (and hence $\\textrm{res}^{d+1}_{e}(\\psi \\circ \\tau_{r})$) factors, it follows from \\autoref{jetdimlem} that $A$ has dimension at least $e$.\n\nConsider the Zariski closure $V \\subset Y$ of $\\psi^{-1}(A)$. We claim that $V = Y$. Because $Y$ was chosen minimal containing the compatible family of jets $\\{ \\tau_{r} \\}_{r \\geq 0}$, it suffices to show that each $\\tau_{r}$ factors through $V$. Consider the component of $q^{-1}(B)$ containing $j_{0}$; by choosing coordinates we may assume $q^{-1}(B) \\subset \\mathbb{C}^{\\ell} \\times \\mathbb{C}^{d+1}$ is an open neighbourhood and identify $j_{0}$ with the origin.
After a further change of coordinates we may assume $j_{r}$ is constant on the first factor, and let $F = q^{-1}(B) \\cap (\\{ 0 \\} \\times \\mathbb{C}^{d+1})$. Applying \\autoref{factorthrough} we find that $\\psi(q(F)) \\subset A$, and hence $q(F) \\subset V$. By proper base change, the map $q^{-1}(B) \\to B$ is proper, so $q(F)$ is an analytic subvariety of $B$, and by construction the jets $\\{ \\tau_{r} \\}_{r \\geq 0}$ factor through it, hence through $V$.\n\nWe are now ready to apply the Bakker-Tsimerman transcendence theorem; the jets are no longer needed. It follows from the structure theorem for period mappings \\cite[III.A]{GGK} that the closed analytic set $\\widetilde{\\varphi}_{Y}(\\widetilde{Y})$ lies inside an orbit $\\ch{D}' = \\mathbf{H}_{Y} \\cdot \\psi(\\tau_{0})$ of the algebraic monodromy of $Y$. Consider the graph $E$ of $\\widetilde{\\varphi}_{Y}$ in $Y \\times \\ch{D}'$. Then as $A$ has dimension at least $e$ and $\\psi^{-1}(A)$ is Zariski dense, there exists a component $C$ of $E \\cap (Y \\times (\\ch{D}' \\cap g_{\\infty} \\cdot W))$ of dimension at least $e$ and projecting to a Zariski dense subset of $Y$.
Applying the main theorem of \\cite{AXSCHAN} we learn that\n\\begin{align*}\n\\textrm{codim}_{Y \\times \\ch{D}'} (Y \\times (\\ch{D}' \\cap g_{\\infty} \\cdot W)) + \\textrm{codim}_{Y \\times \\ch{D}'} E &\\leq \\textrm{codim}_{Y \\times \\ch{D}'} C \\\\\n(\\dim \\ch{D}' - \\dim W) + \\dim \\ch{D}' &\\leq \\dim Y + \\dim \\ch{D}' - \\dim C \\\\\n\\dim \\ch{D}' - \\dim Y &\\leq \\dim W - \\dim C \\\\\n\\dim \\mathcal{C}(Y) - \\dim Y &\\leq \\dim \\mathcal{C} - e \\\\\n&\\leq k\n\\end{align*}\n\\end{proof}\n\n\\section{Main Results}\n\n\\subsection{Computing Bounds on $\\Delta_{d}$}\n\n\\subsubsection{Computing Lower Bounds}\n\nLet us explain the significance of \\autoref{mainjetprop} in proving \\autoref{mainthm}, i.e., giving an effective method to compute bounds for \\[ \\Delta_{d} = \\min_{\\dim Z > d} [ \\dim \\mathcal{C}(Z) - \\dim \\varphi(Z) ] , \\]\nwhere we have used \\autoref{mainthm2} and \\autoref{vartypedef} to give this equivalent expression for $\\Delta_{d}$. Since $\\Delta_{d}$ is an integer bounded by $\\dim D$, giving an effective method to compute it amounts to developing a procedure to decide, for any integer $k$, whether we have $\\Delta_{d} \\leq k$. This in turn amounts to deciding, for any integer $0 \\leq e \\leq \\dim \\varphi(S)$, whether (ii) holds in \\autoref{mainjetprop}.\n\nLet us take $\\mathcal{S}$ to be the set of types computed by Step One, and let us suppose that in fact $\\Delta_{d} > k$. Then by the equivalence in \\autoref{mainjetprop}, we should find that for any $\\mathcal{C} \\in \\mathcal{S}$ with $\\dim \\mathcal{C} - e \\leq k$, there must be some $r = r(\\mathcal{C}, e, k)$ such that $\\mathcal{K}^{d}_{r}(\\mathcal{C}, e, k)$ is empty. Moreover, verifying that such an $r$ exists for each such $\\mathcal{C}$ and $e$ proves, again by the same equivalence, that $\\Delta_{d} > k$.
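The emptiness checks just described can be organized, at each fixed jet order $r$, as a search over $k$. The sketch below is schematic: `K_is_empty` is a hypothetical oracle standing in for the actual computation of the jet-theoretic sets $\mathcal{K}^{d}_{r}(\mathcal{C}, e, k)$ (which, by their construction in \autoref{mainjetprop}, depend on $k$ only through the applicability constraint $\dim \mathcal{C} - e \leq k$), and each type $\mathcal{C}$ is represented simply by its dimension.

```python
def kappa(r, type_dims, e_max, k_max, K_is_empty):
    """Lower-bound stage r: the best k certified by the emptiness checks,
    i.e. such that K_is_empty(dim_C, e, r) holds for every type dimension
    dim_C in type_dims and every 0 <= e <= e_max with dim_C - e <= k.
    Returns -1 if no k qualifies yet at this jet order."""
    best = -1
    for k in range(k_max + 1):
        applicable = [(dim_C, e) for dim_C in type_dims
                      for e in range(e_max + 1) if dim_C - e <= k]
        if all(K_is_empty(dim_C, e, r) for dim_C, e in applicable):
            best = k      # every obstruction set vanishes at this k
        else:
            break         # larger k only adds more pairs to check
    return best
```

Since enlarging $k$ only enlarges the set of applicable pairs $(\mathcal{C}, e)$, the qualifying values of $k$ form an initial segment of the integers, which justifies the early exit.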
Consequently, we obtain the following result, which is the first half of \\autoref{mainthm}:\n\n\\begin{prop}\n\\label{lowerboundcomp}\nBy computing the sets $\\mathcal{K}^{d}_{r}(\\mathcal{C}, e, k)$ described in \\autoref{mainjetprop} in parallel, we may compute a non-decreasing sequence of lower bounds\n\\[ \\kappa(1) \\leq \\kappa(2) \\leq \\cdots \\leq \\kappa(r) \\leq \\cdots < \\Delta_{d} \\]\nsuch that for some $r = r_{0}$ we have $\\kappa(r_{0}) = \\Delta_{d} - 1$.\n\\end{prop}\n\n\\begin{proof}\nAt the $r$'th stage we compute all the sets $\\mathcal{K}^{d}_{r}(\\mathcal{C}, e, k)$ for all applicable choices of $\\mathcal{C}$, $e$ and $k$, and set $\\kappa(r)$ to be the largest $k$ for which all the applicable sets $\\mathcal{K}^{d}_{r}(\\mathcal{C}, e, k)$ (those with $\\dim \\mathcal{C} - e \\leq k$) are empty. From the discussion preceding the Proposition, the result follows.\n\\end{proof}\n\n\\subsubsection{Computing Upper Bounds}\n\n\\autoref{lowerboundcomp} does not actually give an algorithm for computing $\\Delta_{d}$, since no way is given to decide when $r = r_{0}$. For applications to the Lawrence-Venkatesh method this doesn't matter: one wants to be able to compute an optimal lower bound for $\\Delta_{d}$, but one does not have to prove that this lower bound actually equals $\\Delta_{d}$ in order to apply the diophantine finiteness machinery. Nevertheless, let us explain how one can do this in the case where $\\varphi$ is quasi-finite; under this assumption, we may drop the distinction between $\\dim Z$ and $\\dim \\varphi(Z)$, and we are instead interested in computing\n\\[ \\min_{\\dim Z > d} \\, \\left[ \\dim \\mathcal{C}(Z) - \\dim Z \\right] . \\]\n\nWhat is needed is the following:\n\n\\begin{prop}\n\\label{upperboundcomp}\nSuppose that $S$ is quasi-projective and $\\varphi$ is quasi-finite.
Then there exists a procedure that outputs an infinite sequence of upper bounds\n\\[ \\tau(1) \\geq \\tau(2) \\geq \\cdots \\geq \\tau(i) \\geq \\cdots \\geq \\Delta_{d} \\] \nsuch that for some $i = i_{0}$ we have $\\tau(i) = \\Delta_{d}$.\n\\end{prop}\n\n\\noindent Given both \\autoref{lowerboundcomp} and \\autoref{upperboundcomp} we obtain an algorithm for computing $\\Delta_{d}$ by running both procedures in parallel and terminating when $\\kappa(r) + 1 = \\tau(i)$. \n\n\\subsection{Finding Varieties that Exhibit $\\Delta_{d}$}\n\nIn this section we prove \\autoref{upperboundcomp}, assuming throughout that $S$ is quasi-projective and $\\varphi$ is quasi-finite. Let us fix a projective compactification $S \\subset \\overline{S}$ of $S$ and consider the Hilbert scheme $\\textrm{Hilb}(\\overline{S})$. There exist algorithms, for instance by working with the Pl\\\"ucker coordinates of the appropriate Grassmannian, for computing any finite subset of components of $\\textrm{Hilb}(\\overline{S})$. By \\cite[Lem. 5.10]{urbanik2021sets} we obtain the same fact for the open locus $\\textrm{Var}(S) \\subset \\textrm{Hilb}(\\overline{S})$ consisting of just those points $[\\overline{Z}]$ for which $Z = S \\cap \\overline{Z}$ is a non-empty geometrically irreducible algebraic subvariety of $S$.
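Before turning to the construction of the loci that produce the upper bounds, we record schematically how the two bound streams combine into the algorithm just described. The two iterables below are hypothetical stand-ins for the outputs of the lower-bound and upper-bound procedures.

```python
def compute_delta(lower_bounds, upper_bounds):
    # Interleave the two streams of bounds; assuming the lower bounds
    # eventually reach Delta_d - 1 and the upper bounds eventually
    # reach Delta_d, the loop terminates with the exact value.
    kappas, taus = iter(lower_bounds), iter(upper_bounds)
    k, t = next(kappas), next(taus)
    while k + 1 != t:
        k = max(k, next(kappas))  # lower bounds are non-decreasing
        t = min(t, next(taus))    # upper bounds are non-increasing
    return t
```

Assuming the lower bounds eventually reach $\Delta_{d} - 1$ and the upper bounds eventually reach $\Delta_{d}$, the termination criterion $\kappa(r) + 1 = \tau(i)$ is eventually met, and the returned value is exactly $\Delta_{d}$.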
What we will show is that there exists a procedure which outputs an infinite sequence $\\{ \\mathcal{W}_{i} \\}_{i = 1}^{\\infty}$ of constructible algebraic loci $\\mathcal{W}_{i} \\subset \\textrm{Var}(S)$, with the following two properties:\n\\begin{itemize}\n\\item[(i)] for each $i$, the type $\\mathcal{C}(Z)$ and dimension $\\dim Z$ are constant over all $[Z] \\in \\mathcal{W}_{i}$;\n\\item[(ii)] there exists some $i = i_{0}$ such that \n\\[ \\Delta_{d} = \\dim \\mathcal{C}(Z) - \\dim Z , \\]\nfor some (hence any) point $[Z] \\in \\mathcal{W}_{i}$.\n\\end{itemize}\nGiven such an algorithm the problem of computing the bound $\\tau(i)$ that appears in \\autoref{upperboundcomp} reduces to choosing a point $[Z] \\in \\mathcal{W}_{i}$, computing $\\dim \\mathcal{C}(Z) - \\dim Z$, and setting \n\\[ \\tau(i) := \\textrm{min} \\{ \\tau(i-1), \\dim \\mathcal{C}(Z) - \\dim Z \\} . \\] \n(We note that the problem of computing $\\dim \\mathcal{C}(Z)$ from $Z$ and the restriction $\\restr{(\\mathcal{H}, F^{\\bullet}, \\nabla)}{Z}$ of the algebraic data on $S$ is solved for us by \\cite[Lem. 5.8]{urbanik2021sets} by taking the family $g$ in the statement of \\cite[Lem. 5.8]{urbanik2021sets} to be a trivial family; we will say little about this problem here.)\n\nIn fact, an algorithm for computing the sets $\\mathcal{W}_{i}$ has already been given in a previous paper by the author. We begin by recalling the necessary background. We regard $S$ as a complex algebraic variety in what follows. Given two (geometrically) irreducible subvarieties $Z_{1}, Z_{2} \\subset S$ with $Z_{1} \\subset Z_{2}$, the algebraic monodromy group $\\mathbf{H}_{Z_{1}}$ may be naturally regarded as a subgroup of $\\mathbf{H}_{Z_{2}}$ (after choosing a base point $s \\in Z_{1}(\\mathbb{C})$).
Using this we define:\n\\begin{defn}\nAn irreducible complex subvariety $Z \\subset S$ is said to be \\emph{weakly special} if it is maximal among irreducible complex subvarieties of $S$ with the same algebraic monodromy group.\n\\end{defn}\nThe key fact is then the following:\n\\begin{lem}\nFor each integer $d > 0$, there exists a weakly special subvariety $Z \\subset S$ such that \n\\[ \\Delta_{d} = \\dim \\mathcal{C}(Z) - \\dim Z . \\]\n\\end{lem}\n\n\\begin{proof}\nBy \\cite[Prop. 4.18]{urbanik2021sets}, the condition that $Z$ be weakly special is equivalent to $Z$ being a maximal irreducible complex subvariety of $S$ of type $\\mathcal{C}(Z)$. Thus if we have any $Y$ which is not weakly special, there exists a weakly special $Z$ properly containing $Y$ with $\\mathcal{C}(Y) = \\mathcal{C}(Z)$, hence\n\\[ \\dim \\mathcal{C}(Z) - \\dim Z < \\dim \\mathcal{C}(Y) - \\dim Y . \\]\nIt follows that the value of $\\Delta_{d}$ can only be achieved by a weakly special variety.\n\\end{proof}\n\n\\begin{proof}[Proof of \\ref{upperboundcomp}]\nBy the \\emph{degree} of a subvariety $Z \\subset S$ we will mean the degree of its closure $\\overline{Z}$ inside $\\overline{S}$. For any integer $b$, denote by $\\textrm{Var}(S)_{b} \\subset \\textrm{Var}(S)$ the finite-type subscheme parametrizing varieties of degree at most $b$. Denote by $\\mathcal{W} \\subset \\textrm{Var}(S)$ the locus of weakly special subvarieties. Then given an integer $b$, the algorithm that appears in \\cite[Thm. 5.15]{urbanik2021sets} computes the intersection $\\mathcal{W} \\cap \\textrm{Var}(S)_{b}$ as a constructible algebraic locus.\n\nLet us describe the algorithm appearing in \\cite[Thm. 5.15]{urbanik2021sets} more precisely. Consider the types $\\mathcal{C}_{1}, \\hdots, \\mathcal{C}_{\\ell}$ computed by Step One, and define for each such type $\\mathcal{C}_{j}$ the locus\n\\[ \\mathcal{W}(\\mathcal{C}_{j}) := \\{ [Z] \\in \\textrm{Var}(S) : \\mathcal{C}(Z) \\leq \\mathcal{C}_{j} \\} . \\] \nIt is shown in \\cite[Prop.
4.31]{urbanik2021sets} that for each $j$ the locus $\\mathcal{W}(\\mathcal{C}_{j})$ is closed algebraic. We can then consider the sublocus $\\mathcal{W}(\\mathcal{C}_{j})_{\\textrm{opt}} \\subset \\mathcal{W}(\\mathcal{C}_{j})$ consisting of just those components $C \\subset \\mathcal{W}(\\mathcal{C}_{j})$ for which a generic point $[Z] \\in C$ satisfies $\\mathcal{C}(Z) = \\mathcal{C}_{j}$.\n\nIn \\cite[Prop. 5.14]{urbanik2021sets}, an algorithm is given for computing $\\mathcal{W}(\\mathcal{C}_{j})_{\\textrm{opt}} \\cap \\textrm{Var}(S)_{b}$ for each $j$. Using this, one can compute all the finitely many closed algebraic loci $C_{1}, \\hdots, C_{i_{b}}$ which arise as a component of $\\mathcal{W}(\\mathcal{C}_{j})_{\\textrm{opt}} \\cap \\textrm{Var}(S)_{b}$ for some $j$. The problem of computing $\\mathcal{W} \\cap \\textrm{Var}(S)_{b}$ is reduced to computing constructible algebraic conditions on each component $C_{i} \\subset \\mathcal{W}(\\mathcal{C}_{j})_{\\textrm{opt}}$ which define the locus $\\mathcal{W}_{i} \\subset C_{i}$ of points $[Z] \\in C_{i}$ that are weakly special of type $\\mathcal{C}_{j}$. This is taken care of by the proof of \\cite[Thm. 5.15]{urbanik2021sets}. By construction, the points in $\\mathcal{W}_{i}$ all have the same type and the same dimension, so we complete the proof by computing these loci for increasing values of $b$.\n\\end{proof}\n\n\n\\section{Application to Lawrence-Venkatesh}\n\nWe now show how the bound of \\autoref{mainthm} can be used to establish diophantine finiteness results. Similar arguments appear in \\cite{LV} and \\cite{lawrence2020shafarevich}, but as they are not precisely adapted to our setup, we give our own version. 
We recall the situation: we have a smooth projective family $f : X \\to S$ over the smooth base $S$, with everything defined over $\\mathcal{O}_{K,N}$.\\footnote{Note in particular we are assuming now that $S$ is smooth over $\\mathcal{O}_{K,N}$, which we can achieve by increasing $N$ if necessary.} The relative algebraic de Rham cohomology $\\mathcal{H} = R^{i} f_{*} \\Omega^{\\bullet}_{X\/S}$ gives a model for the Hodge bundle $\\mathbb{V} \\otimes \\mathcal{O}_{\\an{S}}$, where $\\mathbb{V} = R^{i} f_{*} \\mathbb{Z}$. By a result of Katz and Oda \\cite{katz1968}, the flat connection associated to the local system $\\mathbb{V}_{\\mathbb{C}}$ by the Riemann-Hilbert correspondence admits a model $\\nabla : \\mathcal{H} \\to \\Omega^{1}_{S} \\otimes \\mathcal{H}$ after possibly increasing $N$. Likewise, we may also assume the Hodge filtration $F^{\\bullet}$ gives a filtration of $\\mathcal{H}$ by vector subbundles. \n\nFix a prime $p$ not dividing $N$, and a place $v$ of $K$ above $p$. Then for each integral point $s \\in S(\\mathcal{O}_{K,N})$, we have a Galois representation $\\rho_{s} : \\textrm{Gal}(\\overline{K}\/K) \\to \\textrm{Aut}(H^{i}_{\\textrm{\\'et}}(X_{\\overline{K}, s}, \\mathbb{Q}_{p}))$, and an argument of Faltings \\cite[Lem 2.3]{LV} shows that the semisimplifications of the representations $\\rho_{s}$ belong to a finite set of isomorphism classes. From crystalline cohomology, each $s \\in S(\\mathcal{O}_{K,N})$, viewed as a point of $S(\\mathcal{O}_{K,v})$ where $\\mathcal{O}_{K,v}$ is the $v$-adic ring of integers, gives rise to a triple $(H^{i}_{\\textrm{dR}}(X_{s}), \\phi_{s}, F^{\\bullet}_{s})$ where $\\phi_{s}$ is the crystalline Frobenius. 
Moreover, using the functor $D_{\\textrm{cris}}$ of $p$-adic Hodge theory \\cite[Expos\\'e III]{fontaine1994corps}, the triple $(H^{i}_{\\textrm{dR}}(X_{s}), \\phi_{s}, F^{\\bullet}_{s})$ is determined up to isomorphism by the restriction $\\rho_{s,v}$ along the map $\\textrm{Gal}(\\overline{K_{v}}\/K_{v}) \\to \\textrm{Gal}(\\overline{K}\/K)$ determined by a fixed embedding $\\overline{K} \\hookrightarrow \\overline{K_{v}}$. We denote by $\\mathcal{I}(s)$ all those triples $(V, \\phi, F^{\\bullet})$ which are of the form $D_{\\textrm{cris}}(\\restr{\\rho}{\\mathbb{Q}_{p}})$, where $\\rho$ is a global Galois representation whose semisimplification is isomorphic to the semisimplification of $\\rho_{s}$. \n\n\nRecall that we have fixed the integral lattice $V = \\mathbb{Z}^{m}$, where $m$ is the dimension of the cohomology of the fibres of $f$, and a $\\mathbb{Q}$-algebraic flag variety $\\ch{L}$ of Hodge flags on $V$. In what follows we write $V_{p}$ for $V \\otimes \\mathbb{Q}_{p}$, and $\\ch{L}_{p}$ for $\\ch{L}_{\\mathbb{Q}_{p}}$. Then the key idea of the Lawrence-Venkatesh method is the following:\n\n\\begin{prop}\n\\label{LVprop}\nSuppose that for each $s \\in S(\\mathcal{O}_{K,N})$, whenever we have an endomorphism $\\phi_{s} : V_{p} \\to V_{p}$ and a flag $F^{\\bullet}_{s}$ on $V_{p}$ such that $(V_{p}, \\phi_{s}, F^{\\bullet}_{s})$ represents $\\mathcal{I}(s)$, the Hodge flags $F^{\\bullet}$ on $V_{p}$ for which $(V_{p}, \\phi_{s}, F^{\\bullet}) \\in \\mathcal{I}(s)$ lie in an algebraic subvariety $O_{s} \\subset \\ch{L}_{p}$ satisfying $\\Delta_{d} \\geq \\dim O_{s}$. Then $\\dim \\overline{S(\\mathcal{O}_{K,N})}^{\\textrm{Zar}} \\leq d$. \n\\end{prop}\n\nTo prove \\autoref{LVprop} we will need a rigid-analytic version of the Bakker-Tsimerman transcendence theorem, which we will see can be deduced formally from the complex analytic one. To set things up, let us revisit the term \\emph{local period map}, this time in the rigid analytic setting (c.f. 
\\autoref{locperdef}). We will denote by $\\mathbb{C}_{p}$ the completion of the algebraic closure $\\overline{K_{v}}$. In what follows we sometimes identify algebraic varieties with their rigid-analytifications when the context is clear.\n\n\\begin{defn}\n\\label{padiclocperdef}\nLet $K_{p}$ be a local field containing $K_{v}$, let $\\an{S_{K_{p}}}$ be the rigid-analytification of the base-change $S_{K_{p}}$ of $S$, and suppose that $B_{p} \\subset \\an{S_{K_{p}}}$ is an affinoid subdomain. Then a (rigid-analytic) local period map $\\psi : B_{p} \\to \\an{\\ch{L}_{K_{p}}}$ is a rigid-analytic map obtained as a composition $\\psi = \\an{q_{K_{p}}} \\circ A_{p}$, where:\n\\begin{itemize}\n\\item[(i)] The rigid analytifications $F^{k} \\an{\\mathcal{H}_{K_{p}}}$ are all trivial on $B_{p}$.\n\\item[(ii)] The map $A_{p} : B_{p} \\to \\an{\\textrm{GL}_{m, K_{p}}}$ is a varying filtration-compatible $p$-adic period matrix over $B_{p}$. More precisely, there exists a basis $v^{1}, \\hdots, v^{m}$ for $\\an{\\mathcal{H}_{K_{p}}}(B_{p})$, compatible with the filtration in the sense that $F^{k} \\an{\\mathcal{H}_{K_{p}}}(B_{p})$ is spanned by $v^{1}, \\hdots, v^{i_{k}}$ for some $i_{k}$, and a flat (for $\\an{\\nabla_{K_{p}}}$) frame $b^{1}, \\hdots, b^{m}$ such that $A_{p}$ gives a varying change-of-basis matrix from $v^{1}, \\hdots, v^{m}$ to $b^{1}, \\hdots, b^{m}$.\n\\item[(iii)] The map $q : \\textrm{GL}_{m} \\to \\ch{L}$ is the map that sends a matrix $M$ to the Hodge flag $F^{\\bullet}_{M}$ defined by the property that $F^{k}_{M}$ is spanned by the first $i_{k}$ columns.\n\\end{itemize}\n\\end{defn}\n\nTo prove \\autoref{LVprop} we will need a version of the Bakker-Tsimerman transcendence result for rigid-analytic local period maps, which we prove by formally transferring the same result for complex analytic local period maps. 
To avoid certain minor pathologies that can occur in the complex analytic case we will restrict to local period maps $\\psi : B \\to \\ch{L}$ which are definable in the structure $\\mathbb{R}_{\\textrm{an}, \\textrm{exp}}$; for background on definability and definable analytic spaces we refer to \\cite{van1996geometric} and \\cite{OMINGAGA}. We note that this is not a serious restriction: given any local period map $\\psi$ and any point $s \\in B$ there exists a definable restriction of $\\psi$ to a neighbourhood of $s$, a fact which is for instance easily deduced from \\cite[Prop. 4.27]{urbanik2021sets}.\n\n\\begin{lem}\n\\label{padicaxschanlem}\n~\\begin{itemize}\n\\item[(i)] Suppose that $\\psi : B \\to \\an{\\ch{L}_{\\mathbb{C}}}$ is a definable analytic local period map on $\\an{S_{\\mathbb{C}}}$. Let $V \\subset \\ch{L}_{\\mathbb{C}}$ be an algebraic subvariety satisfying $\\Delta_{d} \\geq \\dim V$. Then $\\psi^{-1}(V)$ lies in an algebraic subvariety of $S_{\\mathbb{C}}$ of dimension at most $d$.\n\\item[(ii)] Suppose that $\\psi_{p} : B_{p} \\to \\an{\\ch{L}_{\\mathbb{C}_{p}}}$ is a rigid-analytic local period map on $\\an{S_{\\mathbb{C}_{p}}}$. Let $V_{p} \\subset \\ch{L}_{p, \\mathbb{C}_{p}}$ be an algebraic subvariety satisfying $\\Delta_{d} \\geq \\dim V_{p}$. Then $\\psi_{p}^{-1}(V_{p})$ lies in an algebraic subvariety of $S_{\\mathbb{C}_{p}}$ of dimension at most $d$.\n\\end{itemize}\n\\end{lem}\n\n\\paragraph{Proof of \\autoref{padicaxschanlem}(i):} ~ \\\\\n\n\\vspace{-0.5em}\n\nThis is an application of the Bakker-Tsimerman transcendence theorem. Let $Z \\subset S_{\\mathbb{C}}$ be the Zariski closure of $\\psi^{-1}(V)$. We assume for contradiction that $\\dim Z > d$, and let $Z_{0} \\subset Z$ be a component of maximal dimension. Let $\\varphi : \\an{S_{\\mathbb{C}}} \\to \\Gamma \\backslash D$ be the canonical period map with $\\Gamma = \\textrm{Aut}(V,Q)(\\mathbb{Z})$. 
The statement is invariant under replacing $\\psi$ with a $\\textrm{GL}_{m}(\\mathbb{C})$-translate $g \\cdot \\psi$ and $V$ with $g \\cdot V$, so we may assume that $\\psi$ is a local lift of $\\varphi$. Arguing as in \\cite[Cor. 13.7.6]{CMS} we may assume that $\\varphi$ is proper, hence the image $T = \\varphi(S)$ is algebraic by \\cite{OMINGAGA}, and we may consider the Stein factorization $S \\xrightarrow{q} U \\xrightarrow{r} T$ of the map $S \\to T$. \n\nLet $Y = q(Z_{0})$, and note that $\\dim Y = \\dim \\varphi(Z_{0})$. Since $\\dim Z_{0} > d$, the definition of $\\Delta_{d}$ gives $\\Delta_{d} \\leq \\dim \\mathcal{C}(Z_{0}) - \\dim \\varphi(Z_{0}) = \\dim \\mathcal{C}(Y) - \\dim Y$, where the type of $Y$ is taken with respect to the period map $U \\to \\Gamma \\backslash D$. Moreover, this continues to hold if we replace $Y$ with a smooth resolution $Y'$. The variation of Hodge structure on $S$ descends to $U$, and hence shrinking $B$ if necessary we may factor $\\psi$ through a definable local lift on $U$. By pulling back along the resolution we obtain a definable local lift $\\psi' : B' \\to D$ of the period map $\\varphi' : Y' \\to \\Gamma \\backslash D$ such that $\\psi'^{-1}(V)$ is Zariski dense in $Y'$. We are reduced to the following situation: we have a smooth variety $Y'$ with a period map $\\varphi' : Y' \\to \\Gamma \\backslash D$, a local lift $\\psi' : B' \\to D$ such that $\\psi'^{-1}(V)$ is Zariski dense, and such that $\\dim \\mathcal{C}(Y') - \\dim Y' \\geq \\dim V$. \n\nWe now derive a contradiction from the Bakker-Tsimerman theorem. In particular, we may extend the local lift $\\psi'$ to a lift $\\widetilde{\\varphi'} : \\widetilde{Y'} \\to D$ of $\\varphi'$ to the universal cover, and consider the graph $W \\subset Y' \\times \\ch{D}'$ of the map $\\widetilde{\\varphi'}$, where $\\ch{D}'$ is the orbit $\\mathbf{H}_{Y'} \\cdot \\psi'(y)$ for some $y \\in Y'(\\mathbb{C})$.
We then have that\n\\begin{align*}\n\\textrm{codim}_{Y' \\times \\ch{D}'} (Y' \\times (V \\cap \\ch{D}')) + \\textrm{codim}_{Y' \\times \\ch{D}'} W &\\geq \\dim \\ch{D}' - \\dim V + \\dim \\ch{D}' \\\\\n&= \\dim \\mathcal{C}(Y') - \\dim V + \\dim \\mathcal{C}(Y') \\\\\n&\\geq \\dim Y' + \\dim \\mathcal{C}(Y') .\n\\end{align*}\nSince $\\psi'$ is definable, $\\psi'^{-1}(V)$ has finitely many components, and hence there exists an analytic component $C \\subset B'$ of $\\psi'^{-1}(V)$ such that $C$ is Zariski dense in $Y'$. Let $\\widetilde{C} \\subset Y' \\times \\ch{D}'$ be its graph under $\\psi'$. If $\\dim Y' = 0$ there is nothing to show, so we may assume that $\\dim \\mathcal{C}(Y') > 0$. Hence we find that $\\dim Y' + \\dim \\mathcal{C}(Y') > \\textrm{codim}_{Y' \\times \\ch{D}'} \\widetilde{C}$, and by the Bakker-Tsimerman theorem \\cite{AXSCHAN} the component $C$ lies in a proper subvariety of $Y'$, giving a contradiction. \\qed\n\n\\vspace{1em}\n\nTo prove \\autoref{padicaxschanlem}(ii) we first translate \\autoref{padicaxschanlem}(i) into a claim about rings of formal power series. In particular let $\\psi : B \\to \\ch{L}$ be a local period map with $V \\subset \\ch{L}$ an algebraic subvariety, and choose a point $s \\in B$ such that $\\psi(s) = t \\in V$. Then $\\psi$ induces a map on formal power series rings $\\widehat{\\psi}^{\\sharp} : \\widehat{\\mathcal{O}}_{\\ch{L}_{\\mathbb{C}}, t} \\to \\widehat{\\mathcal{O}}_{S_{\\mathbb{C}}, s}$. 
The claim of \\autoref{padicaxschanlem}(i) then says that if $I_{V} \\subset \\mathcal{O}_{\\ch{L}_{\\mathbb{C}}, t}$ is the ideal defining $V$ with extension $\\widehat{I}_{V}$ inside $\\widehat{\\mathcal{O}}_{\\ch{L}_{\\mathbb{C}}, t}$, then the ideal generated by $\\widehat{\\psi}^{\\sharp}(\\widehat{I}_{V})$ contains an ideal $\\widehat{I}_{Z}$ which is the extension of an ideal $I_{Z} \\subset \\mathcal{O}_{S_{\\mathbb{C}}, s}$ defining the germ of a subvariety of dimension at most $d$.\n\n\n\\paragraph{Proof of \\autoref{padicaxschanlem}(ii):} ~ \\\\\n\n\\vspace{-0.5em}\n\nThe claim is Zariski-local on $S$, so we can in particular assume that the bundles $F^{k} \\mathcal{H}$ for varying $k$ are algebraically trivial over $S$, that $S$ is affine, and by smoothness that $\\Omega^{1}_{S}$ is free. By definition, the map $\\psi : B_{p} \\to \\an{\\ch{L}_{\\mathbb{C}_{p}}}$ is associated to the following data: a filtration-compatible frame $v^{1}, \\hdots, v^{m}$, where $v^{1}, \\hdots, v^{i_{k}}$ spans $F^{k} \\an{\\mathcal{H}_{\\mathbb{C}_{p}}} (B_{p})$, and a flat frame $b^{1}, \\hdots, b^{m}$ spanning $\\an{\\mathcal{H}_{\\mathbb{C}_{p}}}(B_{p})$, where flatness means $\\an{\\nabla_{\\mathbb{C}_{p}}} b_{i} = 0$ for all $1 \\leq i \\leq m$. This data satisfies the property that $\\psi = \\an{q_{\\mathbb{C}_{p}}} \\circ A_{p}$, where $A_{p}$ is the change-of-basis matrix from the frame $v^{1}, \\hdots, v^{m}$ to the frame $b^{1}, \\hdots, b^{m}$, and $q$ is the map $\\an{\\textrm{GL}_{m,\\mathbb{C}_{p}}} \\to \\an{\\ch{L}_{\\mathbb{C}_{p}}}$ sending a matrix to the Hodge flag it represents. 
We note that changing the frame $v^{1}, \\hdots, v^{m}$ to another filtration-compatible frame $v'^{1}, \\hdots, v'^{m}$ does not change the local period map: such a change has the effect of replacing the map $A_{p}$ with $A_{p} \\cdot C$, where $C$ is a varying matrix over $B_{p}$ whose right-action on $A_{p}$ preserves the span of the first $i_{k}$ columns for each $k$, and hence $q \\circ (A_{p} \\cdot C) = q \\circ A_{p}$. We therefore lose no generality by assuming the filtration-compatible frame is the restriction to $B_{p}$ of an algebraic filtration-compatible frame over $S$. \n\nThe affinoid neighbourhood $B_{p}$ is of the form $\\textrm{Sp} \\, T$, where $T$ is an affinoid $\\mathbb{C}_{p}$-algebra. The inverse image $\\psi^{-1}(V)$ is then a closed affinoid subdomain of $B_{p}$, i.e., it corresponds to an ideal $I \\subset T$ such that $\\psi^{-1}(V)$ may be identified with $\\textrm{Sp} \\, T\/I$. If $R$ is the coordinate ring of $S_{\\mathbb{C}_{p}}$, then the map $B_{p} \\hookrightarrow \\an{S_{\\mathbb{C}_{p}}} \\to S_{\\mathbb{C}_{p}}$ induces a map $\\iota : R \\to T$, and the claim to be shown is that there exists an ideal $J \\subset R$ defining a subvariety of dimension at most $d$ such that $\\iota(J) \\subset I$. The ring $T$ is Noetherian, so the ideal $I$ admits a primary decomposition. Taking radicals, we obtain finitely many prime ideals $I_{1}, \\hdots, I_{\\ell}$ containing $I$ such that the problem reduces, for each $1 \\leq j \\leq \\ell$, to finding $J_{j} \\subset R$ defining subvarieties of dimension at most $d$ such that $\\iota(J_{j}) \\subset I_{j}$ for each $j$. The analytification map $\\an{S_{\\mathbb{C}_{p}}} \\to S_{\\mathbb{C}_{p}}$ is bijective onto the set of closed points of $S_{\\mathbb{C}_{p}}$ and induces isomorphisms on completed local rings [see whatever]. 
It follows that if we choose a maximal ideal $\\mathfrak{m} \\subset T$ containing $I_{j}$ we obtain a commuting diagram\n\\begin{center}\n\\begin{tikzcd}\nR \\arrow[r,\"\\iota\"] \\arrow[d, hook] & \\arrow[d, hook] T \\\\\n\\widehat{R}_{\\iota^{-1}(\\mathfrak{m})} \\arrow[r,\"\\sim\"] & \\widehat{\\mathcal{O}}_{B_{p}, \\mathfrak{m}} ,\n\\end{tikzcd}\n\\end{center}\nwhere the bottom arrow is an isomorphism of completed local rings, and the vertical arrows are injections. In particular, if we denote by $\\widehat{I}_{j}$ the extension of $I_{j}$ in $\\widehat{\\mathcal{O}}_{B_{p}, \\mathfrak{m}}$, it suffices to show that $\\widehat{\\iota}(J_{j}) \\subset \\widehat{I}_{j}$, where $\\widehat{\\iota}$ is the composition of the left and bottom arrow; here we have used the fact that $I_{j} = \\widehat{I}_{j} \\cap T$. \n\nFix an isomorphism $\\tau : \\mathbb{C}_{p} \\xrightarrow{\\sim} \\mathbb{C}$, which we choose to preserve the embeddings $K \\subset \\mathbb{C}$ and $K \\subset \\mathbb{C}_{p}$. Using the model for $S$ over $K$, the isomorphism $\\tau$ allows us to identify $R$ with the coordinate ring of $S_{\\mathbb{C}}$, the ideal $\\iota^{-1}(\\mathfrak{m})$ with a complex point $s \\in S(\\mathbb{C})$, the ring $\\widehat{R}_{\\iota^{-1}(\\mathfrak{m})}$ with the completed local ring $\\widehat{\\mathcal{O}}_{S_{\\mathbb{C}}, s}$. Let $t_{p}$ be the image of the point corresponding to $\\mathfrak{m}$ under $\\psi$, and let $t$ be the composition $t_{p} \\circ \\tau^{-1}$. 
Applying the isomorphism $\\tau$ at the level of formal power series, the rigid-analytic local period map $\\psi$ induces a map \n\\[ \\widehat{\\mathcal{O}}_{\\ch{L}_{\\mathbb{C}}, t} \\xrightarrow{\\tau} \\widehat{\\mathcal{O}}_{\\ch{L}_{\\mathbb{C}_{p}}, t_{p}} \\xrightarrow{\\widehat{\\psi}^{\\sharp}} \\widehat{\\mathcal{O}}_{B_{p}, \\mathfrak{m}} \\xrightarrow{\\sim} \\widehat{R}_{\\iota^{-1}(\\mathfrak{m})} \\xrightarrow{\\tau} \\widehat{\\mathcal{O}}_{S_{\\mathbb{C}}, s} , \\]\nwhose composition we denote by $\\widehat{\\eta}$. In what follows we identify the ideals $\\widehat{I}_{j}$ with their images in $\\widehat{\\mathcal{O}}_{S_{\\mathbb{C}}, s}$; by construction they are the extensions along $\\widehat{\\eta}$ of an ideal in $\\widehat{\\mathcal{O}}_{\\ch{L}_{\\mathbb{C}}, t}$ associated to the base-change of $V$ using $\\tau$. By part (i) of this theorem and our reformulation of it in terms of completed local rings, it suffices to show that $\\widehat{\\eta}$ is induced by a complex analytic local period map defined on a neighbourhood of $s$.\n\nRecall that we have a decomposition $\\psi = \\an{q_{\\mathbb{C}_{p}}} \\circ A_{p}$, where $q$ is the rigid-analytification of a $\\mathbb{Q}$-algebraic map, and $A_{p}$ gives a varying change-of-basis matrix between a filtration-compatible frame $v^{1}, \\hdots, v^{m}$ and a rigid-analytic flat frame. Recall also that we have chosen $v^{1}, \\hdots, v^{m}$ so that it is the rigid-analytification of a $K$-algebraic filtration-compatible frame $w^{1}, \\hdots, w^{m}$ over $S$. Using the decomposition $\\psi = \\an{q_{\\mathbb{C}_{p}}} \\circ A_{p}$ and the isomorphism $\\tau$ we may factor $\\widehat{\\eta}$ as $\\widehat{q} \\circ \\widehat{\\kappa}$, where $\\widehat{\\kappa} : \\widehat{\\mathcal{O}}_{S_{\\mathbb{C}}, s} \\to \\widehat{\\mathcal{O}}_{\\textrm{GL}_{m, \\mathbb{C}}, P}$ is the base-change under $\\tau$ of the map induced by $A_{p}$. 
From our definition of local period map in \\autoref{locperdef}, it suffices to show that $\\widehat{\\kappa}$ is induced by a varying change-of-basis matrix $A : B \\to \\an{\\textrm{GL}_{m,\\mathbb{C}}}$ from $w^{1}, \\hdots, w^{m}$ to a complex-analytic flat frame.\n\nThe result will follow from the fact that $A$ and $A_{p}$ satisfy a common set of $K$-algebraic differential equations whose solutions are uniquely determined by the period matrix they assign to a point in $B$. To see this, let us write $\\nabla w^{i} = \\sum_{j = 1}^{m} c_{ij} \\otimes w^{j}$ for $K$-algebraic sections $c_{ij} \\in \\Omega^{1}_{S_{K}}$. Suppose then that $b^{k} = \\sum_{i = 1}^{m} f_{ik} w^{i}$ is a flat frame on some complex analytic or rigid-analytic neighbourhood. We then have that\n\\begin{align*}\n\\nabla b^{k} &= \\nabla \\left( \\sum_{i = 1}^{m} f_{ik} w^{i} \\right) \\\\\n&= \\sum_{j = 1}^{m} df_{jk} \\otimes w^{j} + \\sum_{i = 1}^{m} f_{ik} \\left( \\sum_{j = 1}^{m} c_{ij} \\otimes w^{j} \\right) \\\\\n&= \\sum_{j = 1}^{m} \\left( df_{jk} + \\sum_{i = 1}^{m} f_{ik} c_{ij} \\right) \\otimes w^{j} ,\n\\end{align*}\nfrom which we see that $b^{k}$ giving a flat frame is equivalent to $f_{jk}$ satisfying the system of differential equations $df_{jk} = -\\sum_{i = 1}^{m} f_{ik} c_{ij}$ for all $1 \\leq j, k \\leq m$. If we choose a trivialization $dz_{1}, \\hdots, dz_{n}$ of $\\Omega^{1}_{S_{K}}$, we may write the $c_{ij}$ in terms of their coefficients $c_{ij,\\ell}$ with respect to this trivialization, and the same system of differential equations becomes \n\\begin{equation}\n\\label{diffeq}\n\\partial_{\\ell} f_{jk} = -\\sum_{i} f_{ik} c_{ij,\\ell} ;\n\\end{equation} \nhere the operator $\\partial_{\\ell}$ is defined using the dual basis to $dz_{1}, \\hdots, dz_{n}$. 
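For instance (a simple illustration of the uniqueness mechanism), in the case $m = n = 1$ the system \\eqref{diffeq} reduces to the single scalar equation\n\\[ \\partial_{1} f_{11} = - f_{11} \\, c_{11,1} , \\]\nwith formal solution $f_{11} = P \\exp\\left( - \\int c_{11,1} \\, dz_{1} \\right)$ for the initial condition $f_{11}(s) = P$; in particular, every derivative of $f_{11}$ at $s$ is a polynomial expression in $P$ and the derivatives of $c_{11,1}$ at $s$, so the solution is uniquely determined by its value at a single point.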
By differentiating \\autoref{diffeq} and substituting the lower-order differential equations into the higher-order ones, we obtain, for each sequence $\\{ \\ell_{i} \\}_{i = 1}^{e}$ with $1 \\leq \\ell_{i} \\leq n$ and $e \\geq 1$, a set of $K$-algebraic polynomials $\\xi_{\\ell_{1}, \\hdots, \\ell_{e}; jk}$ in the functions $f_{uv}$ for $1 \\leq u, v \\leq m$ with coefficients in the coordinate ring of $S_{K}$ such that \n\\begin{equation}\n\\label{diffeqpolys}\n\\partial_{\\ell_{1}} \\cdots \\partial_{\\ell_{e}} f_{jk} = \\xi_{\\ell_{1}, \\hdots, \\ell_{e}; jk}([f_{uv}]) .\n\\end{equation}\n\nBecause $S_{K}$ is smooth, given a point $s$ of $S_{K}$ the functions $z_{1} - s_{1}, \\hdots, z_{n} - s_{n}$, where $s_{i}$ is the value of $z_{i}$ on $s$, induce a coordinate system in the local and formal power series rings associated to $S_{K}$ at $s$. In these coordinates, the map $A_{p}$ is given by $f^{-1}_{p}$, where $f_{p} = [f_{uv}]$ is a rigid-analytic matrix-valued solution to the differential equations \\autoref{diffeq}. The formal map $\\widehat{\\kappa}$ obtained using the isomorphism $\\tau$ then satisfies the same set of differential equations, and in particular its derivatives of all orders at $s$ are determined using \\autoref{diffeqpolys} by the initial condition $f^{-1}(s) = P$. If we then construct an analytic solution to the differential system in \\autoref{diffeq} in a neighbourhood of $s$ satisfying $f^{-1}(s) = P$, the resulting analytic map induces the map $\\widehat{\\kappa}$ on formal power series rings. It follows that $\\widehat{\\eta}$ is induced by a local period map, which completes the proof. \\qed\n\n\\paragraph{Proof of \\autoref{LVprop}:} ~ \\\\\n\n\\vspace{-0.5em}\n\nWe denote by $\\mathcal{O}_{K,(v)}$ the ring of integers localized at the prime ideal $\\mathfrak{p}$ of $\\mathcal{O}_{K}$ corresponding to $v$. 
We begin by showing that (base changes to $K_{v}$ of) the points of $S(\\mathcal{O}_{K,(v)})$ lie inside finitely many distinguished open affinoids $B_{p} \\subset \\an{S_{K_{v}}}$ admitting local period maps $\\psi_{p} : B_{p} \\to \\an{\\ch{L}_{K_{v}}}$. This reduces to showing that there are finitely many distinguished open affinoids $B_{p} \\subset \\an{S_{K_{v}}}$ containing the points in $S(\\mathcal{O}_{K,(v)})$ over which $\\an{\\mathcal{H}_{K_{v}}}$ admits a rigid-analytic flat frame. We may cover $S$ by finitely many open subschemes $U \\subset S$ such that $\\Omega^{1}_{U}$ and the bundles $F^{k} \\mathcal{H}$ for varying $k$ are all trivial. Then any point $s \\in S(\\mathcal{O}_{K,(v)})$ factors through some element of this cover, so we may reduce to the case where $\\Omega^{1}_{S}$ and the bundles $F^{k} \\mathcal{H}$ are all trivial.\n\nProceeding as in the proof of \\autoref{padicaxschanlem}, we can choose algebraic functions $z_{1}, \\hdots, z_{n}$ on $S$ such that $d z_{1}, \\hdots, d z_{n}$ trivializes $\\Omega^{1}_{S}$. We obtain differential equations as in (\\ref{diffeqpolys}), where the polynomials $\\xi_{\\ell_{1}, \\hdots, \\ell_{e}; jk}$ are functions in the coordinate ring $R$ of $S \\times \\textrm{GL}_{m}$, and so in particular we may view them after base-changing as elements of $R_{\\mathcal{O}_{K,v}}$, where $\\mathcal{O}_{K,v}$ is the ring of $v$-adic integers. Choose a point $\\overline{s_{0}} \\in S(\\mathcal{O}_{K,N}\/\\mathfrak{p} \\mathcal{O}_{K,N})$. Then as $S$ has (by assumption) good reduction modulo $\\mathfrak{p}$, we obtain by \\cite[IV. 18.5.17]{EGA} a lift $s_{0} \\in S(\\mathcal{O}_{K,v})$. Choosing an initial condition $P \\in \\textrm{GL}_{m}(\\mathcal{O}_{K,v})$, we may use (\\ref{diffeqpolys}) to construct a map $\\psi_{s_{0}} = \\an{q_{K_{v}}} \\circ f^{-1}$, where the partial derivatives of $f$ at $s_{0}$ are given by evaluating the polynomials $\\xi_{\\ell_{1}, \\hdots, \\ell_{e}; jk}$ at $(s_{0}, P)$. 
As the coefficients of the power series defining $\\psi_{s_{0}}$ lie in $\\mathcal{O}_{K,v}$, the map $\\psi_{s_{0}}$ is defined on a residue disk $B_{p, \\overline{s_{0}}}$ of radius $|p|^{1\/[K_{v} : \\mathbb{Q}_{p}]}$, where $|\\cdot|$ is the absolute value on $\\mathbb{Q}_{p}$. Varying $s_{0}$ over the finitely many elements of $S(\\mathcal{O}_{K,N}\/\\mathfrak{p} \\mathcal{O}_{K,N})$, we obtain the desired cover.\n\nNow we wish to show that $\\dim S(\\mathcal{O}_{K,N})^{\\textrm{Zar}} \\leq d$. Recall that for each $s \\in S(\\mathcal{O}_{K,N})$, we have a set $\\mathcal{I}(s) \\subset S(\\mathcal{O}_{K,N})$ of points whose associated Galois representations have isomorphic semisimplifications. As there are finitely many possibilities for the semisimplification, it suffices to consider the sets $\\mathcal{S}(s) \\subset S(\\mathcal{O}_{K,N})$ defined by\n\\[ \\mathcal{S}(s) = \\{ s' \\in S(\\mathcal{O}_{K,N}) : \\mathcal{I}(s) = \\mathcal{I}(s') \\textrm{ and } s \\equiv s' \\textrm{ mod } \\mathfrak{p} \\} , \\]\nand show that $\\dim \\overline{\\mathcal{S}(s)}^{\\textrm{Zar}} \\leq d$. In particular, we can consider the Zariski closure of just those elements of $\\mathcal{S}(s)$ whose associated points in $S(\\mathcal{O}_{K,v})$ lie inside one of the neighbourhoods $B_{p,\\overline{s_{0}}}$ constructed above on which we have a local period map $\\psi_{s_{0}} : B_{p, \\overline{s_{0}}} \\to \\an{\\ch{L}_{K_{v}}}$. The hypothesis of the proposition tells us that the image under $\\psi_{s_{0}}$ of the points in $\\mathcal{S}(s)$ lie in a subvariety $O_{s} \\subset \\ch{L}_{\\mathbb{C}_{p}}$ satisfying $\\dim \\Delta_{d} \\geq \\dim O_{s}$; here we use the fact that the flat frame on $B_{p, \\overline{s_{0}}}$ is compatible with the Frobenius endomorphism (c.f. the discussion in \\cite[\\S3]{LV}). 
Base-changing to $\\mathbb{C}_{p}$ and applying \\autoref{padicaxschanlem} above, we find that $\\dim \\overline{\\psi_{s_{0}}^{-1}(O_{s})}^{\\textrm{Zar}} \\leq d$, hence the result.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction and Background}\\label{sec:introduction}\n\nAs a fundamental tool in discrete mathematics, Lov\\'asz extension has been deeply connected to submodular analysis \\cite{Choquet54,Lovasz}, and has been applied in many areas like combinatorial optimization, game theory, matroid theory, stochastic processes, electrical networks, computer vision and machine learning \\cite{F05-book}. There are many generalizations, such as the disjoint-pair Lov\\'asz extension and the Lov\\'asz extension on distributive lattices \\cite{F05-book,Murota03book}. Recent developments include quasi-Lov\\'asz extension on some algebraic structures and fuzzy mathematics, applications of Lov\\'asz extensions to graph cut problems and computer science, as well as Lov\\'asz-softmax loss in deep learning.\n\n\n\n\n\\vspace{0.16cm}\n\nWe shall start by looking at the original Lov\\'asz extension. For simplicity, we shall work throughout this paper with a finite and nonempty set $V=\\{1,\\cdots,n\\}$ and its power set $\\mathcal{P}(V)$. Also, we shall sometimes work on $\\mathcal{P}(V)^k:=\\{(A_1,\\cdots,A_k):A_i\\subset V,\\,i=1,\\cdots,k\\}$ and $\\mathcal{P}_k(V):=\\{(A_1,\\cdots,A_k)\\in\\ensuremath{\\mathcal{P}}(V)^k:A_i\\cap A_j=\\varnothing,\\,\\forall i\\ne j\\}$, as well as some restricted family $\\ensuremath{\\mathcal{A}}\\subset \\mathcal{P}(V)^k$. We denote the cardinality of a set $A$ by $\\#A$.\nGiven a function $f:\\mathcal{P}(V)\\to \\ensuremath{\\mathbb{R}}$, one identifies every $A\\in \\mathcal{P}(V){\\setminus\\{\\varnothing\\}}$ with its indicator vector $\\vec1_A\\in \\ensuremath{\\mathbb{R}}^V=\\ensuremath{\\mathbb{R}}^n$. 
The Lov\\'asz extension extends the domain of $f$ to the whole Euclidean space\\footnote{Some other versions in the literature only extend the domain to the cube $[0,1]^V$ or the nonnegative orthant $\\ensuremath{\\mathbb{R}}_{\\ge0}^V$. In fact, many works on Boolean lattices identify $\\ensuremath{\\mathcal{P}}(V)$ with the discrete cube $\\{0,1\\}^n$.} $\\ensuremath{\\mathbb{R}}^V$. There are several equivalent\nexpressions:\n\n\\begin{itemize}\n\\item For $\\vec x =(x_1,\\dots ,x_n)\\in \\mathbb{R}^n$, let $\\sigma:V\\cup\\{0\\}\\to V\\cup\\{0\\}$ be a bijection such that $ x_{\\sigma(1)}\\le x_{\\sigma(2)} \\le \\cdots\\le x_{\\sigma(n)}$ and $\\sigma(0)=0$, where $x_0:=0$. The Lov\\'asz extension of $f$ is defined by\n\\begin{equation}\\label{eq:Lovasum}\nf^{L}(\\vec x)=\\sum_{i=0}^{n-1}(x_{\\sigma(i+1)}-x_{\\sigma(i)})f(V^{\\sigma(i)}(\\vec x)),\n\\end{equation}\nwhere $V^0(\\vec x)=V$ and $V^{\\sigma(i)}(\\vec x):=\\{j\\in V: x_{j}> x_{\\sigma(i)}\\},\\;\\;\\;\\; i=1,\\cdots,n-1$. We can write \\eqref{eq:Lovasum} in an integral form as\n\\begin{align}\\label{eq:Lovaintegral}\nf^{L}(\\vec x)&=\\int_{\\min\\limits_{1\\le i\\le n}x_i}^{\\max\\limits_{1\\le i\\le n}x_i} f(V^t(\\vec x))d t+f(V)\\min_{1\\le i\\le n}x_i\n\\end{align}\n\nwhere $V^t(\\vec x)=\\{i\\in V: x_i>t\\}$. If we apply the M\\\"obius transformation, this becomes\n \\begin{equation}\\label{eq:LovaMobuis} f^{L}(\\vec x)=\\sum\\limits_{A\\subset V}\\sum\\limits_{B\\subset A}(-1)^{\\#A-\\#B}f(B)\\bigwedge\\limits_{i\\in A} x_i,\\end{equation}\nwhere $\\bigwedge\\limits_{i\\in A} x_i$ is the minimum over $\\{x_i:i\\in A\\}$.\n\\end{itemize}\nIt is easy to see that $f^L$ is positively one-homogeneous, PL (piecewise linear) and Lipschitz continuous \\cite{Lovasz,Bach13}. 
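As a quick numerical cross-check of the equivalent expressions \\eqref{eq:Lovasum} and \\eqref{eq:LovaMobuis} (an illustrative sketch of ours; the helper names are not from any library), one can evaluate both formulas on a random set function with $f(\\varnothing)=0$ on a three-element ground set:

```python
import itertools
import random

def lovasz_sorted(f, x):
    """f^L via the sorted-sum formula (1):
    sum_{i=0}^{n-1} (x_{sigma(i+1)} - x_{sigma(i)}) f(V^{sigma(i)}(x)), with x_0 = 0."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    vals = [0.0] + [x[i] for i in order]          # prepend x_{sigma(0)} = x_0 = 0
    total = 0.0
    for i in range(n):
        if i == 0:
            level = frozenset(range(n))           # V^0(x) = V
        else:                                     # V^{sigma(i)}(x) = {j : x_j > x_{sigma(i)}}
            level = frozenset(j for j in range(n) if x[j] > vals[i])
        total += (vals[i + 1] - vals[i]) * f(level)
    return total

def lovasz_moebius(f, x):
    """f^L via the Moebius-type formula (3): inner alternating sums times min over A."""
    n = len(x)
    total = 0.0
    for r in range(1, n + 1):
        for A in itertools.combinations(range(n), r):
            coeff = sum((-1) ** (len(A) - len(B)) * f(frozenset(B))
                        for s in range(len(A) + 1)
                        for B in itertools.combinations(A, s))
            total += coeff * min(x[i] for i in A)
    return total

# a random set function on V = {0, 1, 2} with f({}) = 0
random.seed(0)
n = 3
f_table = {frozenset(A): random.uniform(-1, 1)
           for r in range(1, n + 1)
           for A in itertools.combinations(range(n), r)}
f_table[frozenset()] = 0.0
f = lambda A: f_table[frozenset(A)]
```

One can further check on such examples that $f^L(\\vec1_A)=f(A)$ and that $f^L(\\vec x+t\\vec 1_V)=f^L(\\vec x)+tf(V)$, as stated below.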
Also, $f^L(\\vec x+t\\vec 1_V)=f^L(\\vec x)+tf(V)$, $\\forall t\\in\\ensuremath{\\mathbb{R}}$, $\\forall \\vec x\\in\\ensuremath{\\mathbb{R}}^V$, {and $f^L(\\vec1_A)=f(A)$ for any $A\\in\\ensuremath{\\mathcal{P}}(V)\\setminus\\{\\varnothing\\}$. The definition of $f^L$ does not involve the datum $f(\\emptyset)$, and thus by convention, it is natural to reset $f(\\emptyset)=0$ to match the equality $f^L(\\vec0)=0$, unless stated otherwise. For convenience, we say that $f:\\ensuremath{\\mathcal{P}}(V)\\to\\ensuremath{\\mathbb{R}}$ is a constant (resp., positive) function if $f$ is constant (resp., positive) on $\\ensuremath{\\mathcal{P}}(V)\\setminus\\{\\varnothing\\}$}. \nMoreover, a continuous function $F:\\ensuremath{\\mathbb{R}}^V\\to \\ensuremath{\\mathbb{R}}$ is the Lov\\'asz extension of some $f:\\ensuremath{\\mathcal{P}}(V)\\to\\ensuremath{\\mathbb{R}}$ if and only if $F(\\vec x+\\vec y)=F(\\vec x)+F(\\vec y)$ whenever $(x_i-x_j)(y_i-y_j)\\ge0$, $\\forall i,j\\in V$.\n\n\n\\vspace{0.13cm}\n\nIn this paper, we shall use the Lov\\'asz extension and its variants to study\nthe interplay between discrete and continuous aspects in topics such as convexity, optimization and spectral theory.\n\n\n\n\n\\vspace{0.13cm}\n\n\\textbf{Submodular and convex functions}\n\n\nSubmodular functions have emerged as a powerful concept in discrete optimization, see Fujishige's monograph \\cite{F05-book} and {Bach's works \\cite{Bach13,Bach19}}. We also refer the readers to some recent related works regarding \nsubmodular functions on hypergraphs \\cite{LM18,LM18-,LHM20}. We recall\n that a discrete function $f:\\ensuremath{\\mathcal{A}}\\to \\ensuremath{\\mathbb{R}}$ defined on an algebra $\\ensuremath{\\mathcal{A}}\\subset\\mathcal{P}(V)$ (i.e., $\\ensuremath{\\mathcal{A}}$ is closed under union and intersection) is submodular if $f(A)+f(B)\\ge f(A\\cup B)+f(A\\cap B)$, $\\forall A,B\\in\\ensuremath{\\mathcal{A}}$. 
The Lov\\'asz extension turns a submodular function into a convex one, and we can hence minimize the former by minimizing the latter:\n\\begin{theorem}[Lov\\'asz \\cite{Lovasz}]\n\t$f:\\mathcal{P}(V)\\to\\mathbb{R}$ is submodular $\\Leftrightarrow$ $f^L$ is convex.\n\\end{theorem}\n\n\\begin{center}\n\t\\begin{tikzpicture}[node distance=6cm]\n\t\n\t\\node (convex) [startstop] { Submodularity };\n\t\n\t\\node (submodular) [startstop, right of=convex, xshift=1.6cm] { Convexity };\n\t\n\t\\draw [arrow](convex) --node[anchor=south] { \\small Lov\\'asz extension } (submodular);\n\t\\draw [arrow](submodular) --node[anchor=north] { } (convex);\n\t\\end{tikzpicture}\n\\end{center}\n\n\\begin{theorem}[Lov\\'asz \\cite{Lovasz}]If $f:\\mathcal{P}(V)\\to\\mathbb{R}$ is submodular with $f(\\varnothing)=0$, then\n\t$$\\min\\limits_{A\\subset V}f(A)=\\min\\limits_{\\vec x\\in [0,1]^V}f^L(\\vec x).$$\n\\end{theorem}\n\n\\begin{center}\n\t\\begin{tikzpicture}[node distance=6cm]\n\t\n\t\\node (convex) [process] {\n\t\tSubmodular optimization };\n\t\n\t\\node (submodular) [process, right of=convex, xshift=1.6cm] {\n\t\tConvex programming };\n\t\n\t\\draw [arrow](convex) --node[anchor=south] { \\small Lov\\'asz extension } (submodular);\n\t\\draw [arrow](submodular) --node[anchor=north] { \\small} (convex);\n\t\\end{tikzpicture}\n\\end{center}\n\n\nThus, submodularity can be seen as some kind of\n `discrete convexity', and this naturally leads to many generalizations, such as bisubmodular, $k$-submodular, L-convex and M-convex, see \\cite{F05-book,Murota03book}.\nMoreover, the following classical result characterizes the class of all functions which can be expressed as Lov\\'asz extensions of submodular functions. 
\n \\begin{theorem}[Theorem 7.40 in \\cite{Murota03book}]\n \tA one-homogeneous function $F:\\ensuremath{\\mathbb{R}}^V\\to \\ensuremath{\\mathbb{R}}$ is a Lov\\'asz extension of some submodular function if and only if $F(\\vec x+t\\vec 1_V)=F(\\vec x)+tF(\\vec 1_V)$, $\\forall t\\in\\ensuremath{\\mathbb{R}}$, $\\forall \\vec x\\in\\ensuremath{\\mathbb{R}}^V$, and $F(\\vec x)+F(\\vec y)\\ge F(\\vec x\\vee \\vec y)+F(\\vec x\\wedge \\vec y)$, where the $i$-th components of $\\vec x\\vee \\vec y$ and $\\vec x\\wedge \\vec y$ are\n \t$(\\vec x\\vee \\vec y)_i=\\max\\{x_i,y_i\\}$ and $(\\vec x\\wedge \\vec y)_i=\\min\\{x_i,y_i\\}$.\n \\end{theorem}\n One may want to extend such a result to the bisubmodular or more general cases.\n In that direction, we shall obtain some results such as Proposition \\ref{pro:bisubmodular-continuous} and Theorem \\ref{thm:submodular-L-equivalent} in Section \\ref{sec:SubmodularityConvexity}. It is also worth noting that Bach investigated an interesting generalization of submodular functions by a generalized Lov\\'asz extension \\cite{Bach19}. \n\n\nSo far, research has mainly focused on `discrete convex' functions, leading to\n`Discrete Convex Analysis' \\cite{Murota98,Murota03book}, whereas the discrete non-convex setting, which is quite popular in modern sciences, has not yet received that much attention.\n\n\\vspace{0.16cm}\n\n\\textbf{Non-submodular cases}\n\nObviously, the non-convex case is so diverse and general that it cannot be directly studied by standard submodular tools. Although some publications show several results on non-submodular (i.e., non-convex) minimization based on Lov\\'asz extension \\cite{HS11}, so far, these only work for special minimizations over the whole power set. Here, we shall find applications for discrete optimization and nonlinear spectral graph theory by employing the multi-way Lov\\'asz extension on enlarged and restricted domains. 
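As a small numerical illustration of the two theorems of Lov\\'asz above (our sketch; the graph and the function are chosen for illustration only), take $f(A)=\\mathrm{cut}(A)-\\frac12\\#A$ on the triangle graph, a submodular function with $f(\\varnothing)=0$; its Lov\\'asz extension then passes a midpoint-convexity test, and its minimum over the cube $[0,1]^V$ coincides with the discrete minimum:

```python
import random
from itertools import combinations

def lovasz(f, x):
    # Lovasz extension via the sorted-sum formula; f is a set function with f({}) = 0
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    vals = [0.0] + [x[i] for i in order]
    total = 0.0
    for i in range(n):
        level = (frozenset(range(n)) if i == 0
                 else frozenset(j for j in range(n) if x[j] > vals[i]))
        total += (vals[i + 1] - vals[i]) * f(level)
    return total

# f(A) = cut(A) - |A|/2 on the triangle graph: submodular, with f({}) = 0
edges = [(0, 1), (0, 2), (1, 2)]
f = lambda A: sum((u in A) != (v in A) for u, v in edges) - 0.5 * len(A)

subsets = [frozenset(A) for r in range(4) for A in combinations(range(3), r)]
discrete_min = min(f(A) for A in subsets)   # attained at A = V with value -1.5
```

Here $f^L(\\vec x)=\\sum_{\\{i,j\\}\\in E}|x_i-x_j|-\\frac12\\sum_i x_i$, which is convex, and its minimum over the cube is attained at the vertex $\\vec 1_V$.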
\n\n\n\\vspace{0.16cm}\n\nIn summary, we are going to initiate the study of diverse continuous extensions in non-submodular settings. This paper develops a systematic framework for many aspects around the topic. We establish a universal discrete-to-continuous framework via multi-way extensions, by systematically utilizing integral representations. \nIn \\cite{JZ-prepare21}, we establish the links between discrete Morse theory and continuous Morse theory via the original Lov\\'asz extension. \nWe shall now discuss some connections with various other fields.\n\n\\vspace{0.19cm}\n\n\\textbf{Connections with combinatorial optimization}\n\nBecause of the wide range of applications of discrete mathematics in computer science, combinatorial optimization has been much studied from the mathematical perspective.\nIt is known that any combinatorial optimization problem can be equivalently expressed as a continuous optimization problem via a convex (or concave) extension, but often,\nthere is the difficulty that one cannot write down an equivalent continuous\nobjective function in closed form.\nFor practical purposes, it would be very helpful if one could transfer a\ncombinatorial optimization problem to an explicit and simple equivalent\ncontinuous optimization problem in closed form. 
Formally, in many concrete situations, it would be useful if one could get an identity of the form\n\\begin{equation}\\label{eq:D-to-C-formal}\\min\\limits_{(A_1,\\cdots,A_k)\\in \\ensuremath{\\mathcal{A}}\\cap \\ensuremath{\\mathrm{supp}}(g)}\\frac{f(A_1,\\cdots,A_k)}{g(A_1,\\cdots,A_k)}=\\inf\\limits_{\\psi\\in {\\mathcal D}(\\ensuremath{\\mathcal{A}})}\\frac{\\widetilde{f}(\\psi)}{\\widetilde{g}(\\psi)},\\end{equation}\nwhere $f,g:\\ensuremath{\\mathcal{A}}\\to [0,\\infty)$, ${\\mathcal D}(\\ensuremath{\\mathcal{A}})$ is a feasible domain determined by $\\ensuremath{\\mathcal{A}}$ only, $\\mathrm{supp}(g)$ is the support of $g$, and $\\widetilde{f}$ and $\\widetilde{g}$ are suitable continuous extensions of $f$ and $g$.\n\nSo far, only situations where $f:\\ensuremath{\\mathcal{P}}(V)\\to \\ensuremath{\\mathbb{R}}$ or $f:\\ensuremath{\\mathcal{P}}_2(V)\\to\\ensuremath{\\mathbb{R}}$ have been investigated systematically \\cite{HS11,CSZ18}, and what is lacking are situations with restrictions, that is, incomplete data.\n\nAlso, to the best of our knowledge, the known results in the literature do not work for combinatorial optimization directly on set-tuples. But most combinatorial optimization problems should be formalized in the form of set-tuples, and only a few can be represented in set form or disjoint-pair form. 
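As a first concrete instance of the `max' analogue of \\eqref{eq:D-to-C-formal} (this numerical sketch, with the path graph as a toy example, is ours): for $f=\\mathrm{cut}$ and $g(A)=\\# A$ on the nonempty subsets of the path graph on three vertices, $f^L$ is the total variation, $g^L(\\vec x)=\\sum_i x_i$ on $\\vec x\\ge\\vec 0$, and the discrete maximum of the ratio agrees with the continuous supremum:

```python
import random
from itertools import combinations

# path graph P3 on vertices {0, 1, 2}
edges = [(0, 1), (1, 2)]
cut = lambda A: sum((u in A) != (v in A) for u, v in edges)
tv = lambda x: sum(abs(x[u] - x[v]) for u, v in edges)  # Lovasz extension of cut

# discrete side: max over nonempty A of cut(A)/#A (equals 2, attained at A = {1})
disc = max(cut(set(A)) / len(A)
           for r in (1, 2, 3) for A in combinations(range(3), r))
```

Indicator vectors attain the maximum, and, in accordance with the identity, the ratio $\\mathrm{tv}(\\vec x)\/\\sum_i x_i$ never exceeds it on nonnegative vectors.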
Whenever one can find an equivalent Lipschitz function for a combinatorial problem in the field of discrete optimization, this makes useful tools available and leads to new connections.\nThat is, one wishes to establish a {\\sl discrete-to-continuous transformation} like the operator $\\sim$ in \\eqref{eq:D-to-C-formal}.\nWe will show in Section \\ref{sec:CC-transfer} that the Lov\\'asz extension and its variants \nare suitable choices for such a transformation\n(see Theorems \\ref{thm:tilde-fg-equal}, \\ref{thm:tilde-H-f} and Proposition \\ref{pro:fraction-f\/g} for details).\n\n\\vspace{0.15cm}\n\n\nTo reach these goals, we need to systematically study various generalizations of the Lov\\'asz extension. More precisely, we shall work with the following two different multi-way forms:\n\n\\begin{enumerate}[(1)]\n\n\\item Disjoint-pair version:\n for a function $f:\\ensuremath{\\mathcal{P}}_2(V)\\to\\ensuremath{\\mathbb{R}}$, its disjoint-pair\n Lov\\'asz extension is defined as\n\\begin{equation}\\label{eq:disjoint-pair-Lovasz-def-integral}\nf^{L}(\\vec x)=\\int_0^{\\|\\vec x\\|_\\infty} f(V_+^t(\\vec x),V_-^t(\\vec x))dt,\n\\end{equation}\nwhere $V_\\pm^t(\\vec x)=\\{i\\in V:\\pm x_i>t\\}$, $\\forall t\\ge0$. For $\\ensuremath{\\mathcal{A}}\\subset\\ensuremath{\\mathcal{P}}_2(V)$ and $f:\\ensuremath{\\mathcal{A}}\\to\\ensuremath{\\mathbb{R}}$, the feasible domain ${\\mathcal D}_\\ensuremath{\\mathcal{A}}$ of the disjoint-pair Lov\\'asz extension is $\\{\\vec x\\in\\ensuremath{\\mathbb{R}}^V:(V_+^t(\\vec x),V_-^t(\\vec x))\\in\\ensuremath{\\mathcal{A}},\\forall t\\ge0\\}$. \n\nIt should be noted that the disjoint-pair Lov\\'asz extension introduced by\nQi \\cite{Qi88} has been systematically investigated by Fujishige \\cite{Fujishige14,F05-book} and Murota \\cite{Murota03book} in the context of discrete convex analysis (or the theory of submodular functions). 
The \nintegral formulation \\eqref{eq:disjoint-pair-Lovasz-def-integral}, however, is \nmore convenient to obtain a closed formula of the equivalent continuous\noptimization problem for a combinatorial optimization problem. Moreover, the references and the present paper \nfocus on different aspects, with the exception of the submodularity theorem (i.e., $f$ is bisubmodular iff $f^L$ is convex).\n\n\\item $k$-way version:\nfor a function $f:\\mathcal{P}(V)^k\\to \\ensuremath{\\mathbb{R}}$, the {\\sl simple $k$-way Lov\\'asz extension} $f^L:\\ensuremath{\\mathbb{R}}^{kn}\\to \\ensuremath{\\mathbb{R}}$ is defined as\n\\begin{equation}\\label{eq:Lovasz-Form-1}\nf^L(\\vec x^1,\\cdots,\\vec x^k)=\\int_{\\min \\vec x}^{\\max \\vec x}f(V^t(\\vec x^1),\\cdots,V^t(\\vec x^k))dt+ f(V,\\cdots,V)\\min\\vec x,\n\\end{equation}\nwhere $V^t(\\vec x^i)=\\{j\\in V:x^i_j>t\\}$, $\\min\\vec x=\\min\\limits_{i,j} x^i_j$ and $\\max\\vec x=\\max\\limits_{i,j} x^i_j$. For $\\ensuremath{\\mathcal{A}}\\subset\\ensuremath{\\mathcal{P}}^k(V)$ and $f:\\ensuremath{\\mathcal{A}}\\to\\ensuremath{\\mathbb{R}}$, we take ${\\mathcal D}_\\ensuremath{\\mathcal{A}}=\\{\\vec x\\in\\ensuremath{\\mathbb{R}}^{kn}_{\\ge0}:(V^t(\\vec x^1),\\cdots,V^t(\\vec x^k))\\in\\ensuremath{\\mathcal{A}},\\forall t\\in\\ensuremath{\\mathbb{R}}\\}$ as a feasible domain of the $k$-way Lov\\'asz extension $f^L$. \n\nBy the Lov\\'asz extension of submodular functions on distributive\n lattices \\cite{F05-book,Murota03book}, our $k$-way version\n \\eqref{eq:Lovasz-Form-1} can be reduced to the classical version on\n distributive lattices. Our main purposes and key results, however, are\n different from that approach. In fact, we mainly aim to deal with discrete\n fractional programming by the $k$-way Lov\\'asz extension, while those references concentrate on submodularity and convex optimization. 
\n\\end{enumerate}\n\n All these multi-way Lov\\'asz extensions satisfy the identity Eq.~\\eqref{eq:D-to-C-formal}:\n\n\\begin{introthm}[Theorem \\ref{thm:tilde-H-f} and Proposition \\ref{pro:fraction-f\/g}]\\label{thm:tilde-fg-equal}\nGiven two functions $f,g:\\ensuremath{\\mathcal{A}}\\to [0,+\\infty)$, let $\\tilde{f}$ and $\\tilde{g}$ be two real\nfunctions on ${\\mathcal D}_\\ensuremath{\\mathcal{A}}$ satisfying $\\tilde{f}(\\vec1_{A_1,\\cdots,A_k})=f(A_1,\\cdots,A_k)$ and $\\tilde{g}(\\vec1_{A_1,\\cdots,A_k})=g(A_1,\\cdots,A_k)$. Then Eq.~\\eqref{eq:D-to-C-formal} holds if $\\tilde{f}$ and $\\tilde{g}$ further possess the properties (P1) or (P2) below. Correspondingly, if $\\tilde{f}$ and $\\tilde{g}$ fulfil (P1') or (P2),\nthere similarly holds\n$$\\max\\limits_{(A_1,\\cdots,A_k)\\in \\ensuremath{\\mathcal{A}}\\cap\\ensuremath{\\mathrm{supp}}(g)}\\frac{f(A_1,\\cdots,A_k)}{g(A_1,\\cdots,A_k)}=\\sup\\limits_{\\psi\\in {\\mathcal D}_\\ensuremath{\\mathcal{A}}\\cap\\ensuremath{\\mathrm{supp}}(\\widetilde{g})}\\frac{\\widetilde{f}(\\psi)}{\\widetilde{g}(\\psi)}.$$\nHere the optional additional conditions of $\\tilde{f}$ and $\\tilde{g}$ are:\n\n(P1) $\\tilde{f}\\ge f^L$ and $\\tilde{g}\\le g^L$.\\;\\;\\; (P1') $\\tilde{f}\\le f^L$ and $\\tilde{g}\\ge g^L$.\n\n(P2)\n$\\tilde{f}=((f^\\alpha)^L)^{\\frac1\\alpha}$ and $\\tilde{g}=((g^\\alpha)^L)^{\\frac1\\alpha}$, where $\\alpha>0$.\n\nHere $f^L$ is either the original or the disjoint-pair or the $k$-way Lov\\'asz extension.\n\\end{introthm}\n\nTheorem \\ref{thm:tilde-fg-equal}\nshows that by the multi-way Lov\\'asz extension, the combinatorial\noptimization in quotient form can be transformed to fractional\nprogramming. Based on this fractional optimization, we propose an\neffective local convergence scheme, which relaxes the Dinkelbach-type\niterative scheme and mixes the inverse power method and the steepest descent
Furthermore, many other continuous iterations, such as\nKrasnoselski-Mann iteration, and the stochastic subgradient method, could be directly applied here. We refer the readers to \\cite{JostZhang} for another development on equalities between discrete and continuous optimization problems via various generalizations of Lov\\'asz extension. \n\n\nThe power of Theorem \\ref{thm:tilde-fg-equal} is embodied in many new examples and applications including Cheeger-type problems, various\nisoperimetric constants and max $k$-cut problems (see Subsections \\ref{sec:max-k-cut}, \\ref{sec:boundary-graph-1-lap} and \\ref{sec:variantCheeger}).\nAnd moreover, we find that not only combinatorial optimization,\nbut also some combinatorial invariants like the independence number and the chromatic number, can\nbe transformed into a continuous representation by this scheme.\n\n\\begin{introthm}[Sections \\ref{sec:independent-number} and \\ref{sec:chromatic-number}]\n\\label{thm:graph-numbers}\nFor an unweighted and undirected simple graph $G=(V,E)$ with $\\#V=n$, its independence number can be represented as\n$$\\alpha(G)=\\max\\limits_{\\vec x\\in \\ensuremath{\\mathbb{R}}^n\\setminus\\{\\vec 0\\}}\\frac{\\sum\\limits_{\\{i,j\\}\\in E}(|x_i-x_j|+|x_i+x_j|)- 2\\sum\\limits_{i\\in V}(\\deg_i-1)|x_i|}{2\\|\\vec x\\|_\\infty},$$\nwhere $\\deg_i=\\#\\{j\\in V:\\{j,i\\}\\in E\\}$, $i\\in V$, and its chromatic number is\n\\begin{equation*}\n\\gamma(G)= n^2-\\max\\limits_{\\vec x\\in\\ensuremath{\\mathbb{R}}^{n^2}\\setminus\\{\\vec 0\\}}\\sum\\limits_{k\\in V}\\frac{n\\sum\\limits_{\\{i,j\\}\\in E}(|x_{ik}-x_{jk}|+|x_{ik}+x_{jk}|)+2n\\|\\vec x^{k}\\|_{\\infty}-2n\\deg_k\\|\\vec x^{k}\\|_1- 2\\|\\vec x^{,k}\\|_{\\infty}}{2\\|\\vec x\\|_\\infty},\n\\end{equation*}\nwhere $\\vec x=(x_{ki})_{k,i\\in V}$, $\\vec x^{k}=(x_{k1},\\cdots,x_{kn})$ and $\\vec x^{,k}=(x_{1k},\\cdots,x_{nk})^T$.\nThe maximum matching number of $G$ can be expressed as\n$$\\max\\limits_{\\vec 
y\\in\\ensuremath{\\mathbb{R}}^E\\setminus\\{\\vec 0\\}}\\frac{\\|\\vec y\\|_1^2}{\\|\\vec y\\|_1^2-2\\sum_{e\\cap e'=\\varnothing}y_ey_{e'}}.$$\n\\end{introthm}\n\n \\vspace{0.15cm}\n\n\n\n\nThere are some equivalent continuous reformulations of the maxcut problem and\nthe independence number of a graph in the literature. However, a continuous reformulation of the chromatic number has not yet been proposed. The main reason seems to be the complexity of coloring a graph. Hence, it is very difficult to discover a continuous form of the chromatic number by direct observation. \n\n\\begin{introthm}[Theorem \\ref{thm:tilde-H-f}]\\label{thm:tilde-fg-equal-PQ}\nGiven functions $f_1,\\cdots,f_n:\\ensuremath{\\mathcal{A}}\\to[0,+\\infty)$, and $p$-homogeneous functions $P,Q:[0,+\\infty)^n\\to[0,+\\infty)$, we have\n$$\\max\\limits_{A\\in\\ensuremath{\\mathcal{A}}}\\frac{P(f_1(A),\\cdots,f_n(A))}{Q(f_1(A),\\cdots,f_n(A))}=\\sup\\limits_{\\vec x\\in{\\mathcal D}_\\ensuremath{\\mathcal{A}}}\\frac{P(f_1^L(\\vec x),\\cdots,f_n^L(\\vec x))}{Q(f_1^L(\\vec x),\\cdots,f_n^L(\\vec x))}$$\nif $P^{\\frac1p}$ is \nsubadditive and $Q^{\\frac1p}$ is superadditive. One can replace `max' by `min' if $P^{\\frac1p}$ is \nsuperadditive and $Q^{\\frac1p}$ is subadditive.\n\\end{introthm}\n\nTheorems \\ref{thm:tilde-fg-equal}, \\ref{thm:tilde-fg-equal-PQ} and \\ref{thm:tilde-H-f} can be seen as natural and nontrivial generalizations of the related original works by Hein's group \\cite{HS11}.\n\n\\vspace{0.1cm}\n\n\\textbf{Connections with spectral graph theory}\n\nSpectral graph theory aims to derive properties of a (hyper-)graph from its eigenvalues and eigenvectors. 
Going beyond the linear case, nonlinear spectral graph theory is developed in terms of discrete geometric analysis and difference equations on (hyper-)graphs.\nEvery discrete eigenvalue problem can be formulated as a variational problem for an\nobjective functional, a Rayleigh-type quotient.\nIn some cases, this\nfunctional is natural and easy to obtain,\nsince one may compare the discrete version with its original continuous analog in geometric analysis. However, in other\nsituations, there is no such analog. Fortunately, we find a unified framework based on the multi-way Lov\\'asz extension to produce appropriate objective functions from a combinatorial problem (see Sections\n\\ref{sec:CC-transfer} and \\ref{sec:eigenvalue}).\n\nMore precisely, for a combinatorial problem with a discrete objective function of the form $\\frac{f(A)}{g(A)}$, we might obtain some correspondences by studying the \nset-valued eigenvalue problem\n$$\n \\nabla f^L (\\vec x)\\bigcap \\lambda\\nabla g^L (\\vec x) \\ne\\varnothing,\n$$\nwhich is simply called the eigenvalue problem of the function pair $(f^L,g^L)$. \nHereafter we use $\\nabla$ to denote the (Clarke) subgradient operator acting on Lipschitz functions.\n\\begin{center}\n\\begin{tikzpicture}[node distance=6cm]\n\n\\node (graph) [startstop] { combinatorial quantities };\n\n\\node (spectrum) [startstop, right of=graph, xshift=2.6cm] { eigenvalues and eigenvectors };\n\n\\draw [arrow](graph) --node[anchor=south] { \\small Spectral graph theory} (spectrum);\n\\draw [arrow](spectrum) --node[anchor=north] { } (graph);\n\\end{tikzpicture}\n\\end{center}\n\n\n\n\n\\vspace{0.16cm}\nWe shall consider the following three concepts:\n\\begin{itemize}\n\\item {\\sl Eigenvectors and eigenvalues}:\\;\\;\nThe set-valued eigenvalue problems above are usually written as $\\vec0\\in \\nabla f^L (\\vec x)-\\lambda\\nabla g^L (\\vec x)$ by using the Minkowski summation of convex sets. 
We call $\\lambda$ an eigenvalue and $\\vec x$ an eigenvector associated to $\\lambda$.\n\n\\item {\\sl Critical points and critical values}:\\;\\;\nThe set of critical points\n$\\left\\{\\vec x\\left|0\\in\\nabla \\frac{f^L(\\vec x)}{g^L(\\vec x)}\\right.\\right\\}$\n and the corresponding critical values.\n\n\n\\item {\\sl Minimax critical values} (i.e., {\\sl variational eigenvalues in Rayleigh quotient form}):\\;\\;\nThe Lusternik-Schnirelman theory tells us that the min-max\nvalues\n\\begin{equation}\\label{eq:def-c_km}\n\\lambda_{m}=\\inf_{\\Psi\\in \\Gamma_m}\\sup\\limits_{\\vec x \\in \\Psi}\\frac{f^L(\\vec x)}{g^L(\\vec x)}\n\\end{equation}\nare critical values of $f^L(\\cdot)\/g^L(\\cdot)$. Here $\\Gamma_m$ is a class of certain topological objects at level $m$, e.g., the family of subsets with L-S category (or Krasnoselskii's $\\mathbb{Z}_2$-genus) not smaller than $m$.\n\\end{itemize}\n\n There are the following relations between these three classes:\n$$\\{\\text{Eigenvalues in Rayleigh quotient}\\}\\subset\\{\\text{Critical values}\\}\\subset\\{\\text{Eigenvalues}\\}.$$\nFor linear spectral theory, the above three classes coincide. However, for the non-smooth spectral theory derived by Lov\\'asz extension, we only have the inclusion relations.\n\n\nWe have the following result on the eigenvalue problem for the disjoint-pair Lov\\'asz extension, while for the results on the original Lov\\'asz extension, we refer to Section \\ref{sec:eigenvalue} for details. \n\\begin{introthm}\n\\label{introthm:eigenvalue}\nGiven $f,g:\\ensuremath{\\mathcal{P}}_2(V)\\to\\ensuremath{\\mathbb{R}}$, then every eigenvalue of $(f^L,g^L)$ has an eigenvector of the form $\\vec 1_A-\\vec1_B$. 
Moreover, we have the following claims:\n\\begin{itemize}\n\\item If $2f(A,B)=f(A,V\\setminus A)+f(V\\setminus B,B)$ and $2g(A,B)=g(A,V\\setminus A)+g(V\\setminus B,B)$ for any $(A,B)\\in \\ensuremath{\\mathcal{P}}_2(V)\\setminus\\{(\\varnothing,\\varnothing)\\}$, then every eigenvalue of $(f^L,g^L)$ has an eigenvector of the form $\\vec 1_A-\\vec1_{V\\setminus A}$.\n\\item If $g=\\mathrm{Const}$, then for any $A\\subset V$, $\\vec 1_A-\\vec1_{V\\setminus A}$ is an eigenvector.\n\\item If $f(A,B)=\\hat{f}(A)+\\hat{f}(B)$ and $g(A,B)=\\hat{g}(A)+\\hat{g}(B)$ for some symmetric function $\\hat{f}:\\ensuremath{\\mathcal{P}}(V)\\to\\ensuremath{\\mathbb{R}}$ (i.e., $\\hat{f}(A)=\\hat{f}(V\\setminus A)$, $\\forall A$) and non-decreasing submodular function $\\hat{g}:\\ensuremath{\\mathcal{P}}(V)\\to\\ensuremath{\\mathbb{R}}_+$, then the second \neigenvalue $\\lambda_2$ of $(f^L,g^L)$ equals \n$$\\min\\limits_{\\vec x\\bot\\vec 1}\\frac{f^L(\\vec x)}{\\min\\limits_{t\\in\\ensuremath{\\mathbb{R}}}g^L(\\vec x-t\\vec 1)}=\\min\\limits_{A\\in\\mathcal{P}(V)\\setminus\\{\\varnothing,V\\}}\\frac{\\hat{f}(A)}{\\min\\{\\hat{g}(A),\\hat{g}(V\\setminus A)\\}}=\\min\\limits_{(A,B)\\in\\mathcal{P}_2(V)\\setminus\\{(\\varnothing,\\varnothing )\\}}\\max\\{\\frac{\\hat{f}(A)}{\\hat{g}(A)},\\frac{\\hat{f}(B)}{\\hat{g}(B)}\\}.$$\n\\end{itemize}\n\n\n\\end{introthm}\n\nThis generalizes recent results on the\ngraph 1-Laplacian and Cheeger's constant \\cite{HeinBuhler2010,TVhyper-13,Chang16,CSZ15,CSZ17}. 
As a new application, we show that the min-cut problem and the max-cut problem are equivalent to computing the \nsmallest nontrivial (i.e., the second) eigenvalue and the largest eigenvalue, respectively, of a certain nonlinear eigenvalue problem (see Theorem \\ref{thm:mincut-maxcut-eigen}).\n\n\\vspace{0.16cm}\n\n\\textbf{Applications to frustration in signed networks}\n\nAs a key measure for analysing signed networks, the frustration index of a signed graph quantifies how far a signature is\nfrom being balanced (see Section \\ref{sec:frustration}). Computing the frustration index is NP-hard, and few algorithms have been proposed \\cite{ArefWilson19,ArefMasonWilson20}. \n\n\n\nConsidering a signed graph $(V,E_+\\cup E_-)$ with $E_+$ (resp. $E_-$)\nthe set of positive (resp. negative) edges, based on the disjoint-pair Lov\\'asz extension, we obtain an equivalent continuous optimization of the frustration index (or the line index of balance \\cite{Harary59}):\n$$\\#E_-+\\min\\limits_{\\vec x\\ne\\vec 0}\\frac{\\sum_{\\{i,j\\}\\in E_+}|x_i-x_j|-\\sum_{\\{i,j\\}\\in E_-}|x_i-x_j|}{2\\|\\vec x\\|_\\infty}.$$\nThis new reformulation can be computed via typical algorithms in continuous optimization. \n\nAlso, we propose the eigenvalue problem\n\\begin{equation}\\label{eq:frustration-eigen}\n\\nabla\\left(\\sum_{\\{i,j\\}\\in E_+}|x_i-x_j|+\\sum_{\\{i,j\\}\\in E_-}|x_i+x_j|\\right)\\bigcap\\lambda \\nabla\\|\\vec x\\|_\\infty \\ne\\varnothing \n\\end{equation}\nand we present an iterative scheme for computing the frustration index based on the smallest eigenvalue of the nonlinear eigenvalue problem \\eqref{eq:frustration-eigen}. See Section \\ref{sec:frustration} for details and more results. \n\n\\vspace{0.16cm}\n\n Since the transformation of a combinatorial optimization problem into a\ncontinuous optimization or a nonsmooth eigenvalue problem usually leads\n to a quotient,\nthe task for fractional programming then becomes to compute an optimal value or an eigenvector. 
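As a sanity check of the frustration-index reformulation above, the following Python sketch (the five-vertex signed cycle with a single negative edge is our own toy instance, not one from the text) computes the frustration index by brute force over $\\pm1$ labellings and compares it with the quotient evaluated over ternary vectors $\\vec x\\in\\{-1,0,1\\}^n$, where the minimum is attained according to the disjoint-pair theory.

```python
import itertools

# Example signed graph (our own toy instance): a 5-cycle whose
# edge (4, 0) carries a negative sign, all other edges positive.
n = 5
E_plus = [(0, 1), (1, 2), (2, 3), (3, 4)]
E_minus = [(4, 0)]

def frustration():
    # Brute-force frustration index: minimum number of frustrated edges
    # over all +/-1 labellings (a positive edge is frustrated if its
    # endpoints differ, a negative edge if they agree).
    best = None
    for s in itertools.product([-1, 1], repeat=n):
        bad = sum(s[i] != s[j] for i, j in E_plus)
        bad += sum(s[i] == s[j] for i, j in E_minus)
        best = bad if best is None else min(best, bad)
    return best

def quotient(x):
    # The continuous quotient from the reformulation above.
    num = sum(abs(x[i] - x[j]) for i, j in E_plus)
    num -= sum(abs(x[i] - x[j]) for i, j in E_minus)
    return num / (2.0 * max(abs(v) for v in x))

# Minimize over ternary vectors, where the minimum is attained.
cont = min(quotient(x)
           for x in itertools.product([-1, 0, 1], repeat=n) if any(x))
assert len(E_minus) + cont == frustration()
print("frustration index:", frustration())
```

For this instance the cycle carries exactly one negative sign, so it is unbalanced and the frustration index is $1$; the continuous minimum equals $0$ and is attained, e.g., at the all-ones vector.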
In Section \\ref{sec:algo}, we present a general algorithm which can be used to compute the resulting continuous reformulations arising in Theorems \\ref{thm:tilde-fg-equal}, \\ref{thm:graph-numbers}, \\ref{thm:tilde-fg-equal-PQ} and \\ref{introthm:eigenvalue}. \n\n In summary, we present a systematic study for constructing nonlinear\neigenvalue problems and equivalent continuous reformulations for\ncombinatorial quantities, which capture the key properties of the original combinatorial problems. This helps to deepen the understanding of certain combinatorial problems through the corresponding eigenvalue problems and equivalent continuous reformulations. The following picture summarizes the relations between the various concepts developed and studied in this paper.\n\n\n\\begin{figure}[H]\n\\centering\n\\begin{tikzpicture}[node distance=4.5cm]\n\n\\node (CQ) [startstop] {\\begin{tabular}{l}\nCombinatorial\\\\ Quantities\n\\end{tabular}};\n\n\\node (DO) [process, right of=CQ, xshift=-1cm] { \\begin{tabular}{l}\n Discrete\\\\\n Optimization\n\\end{tabular}};\n\n\\node (CO) [startstop, right of=DO, yshift=0cm, xshift=2.6cm] {\\begin{tabular}{l}\n Continuous\\\\\n Optimization\n\\end{tabular} };\n\\node (DM) [io1, below of=CQ, xshift=0.1cm,yshift=2cm] {\\begin{tabular}{l}\n Discrete Morse theory\n\\end{tabular}\n };\n\\node (F\/G) [process, right of=DM, xshift=2.2cm] {\\begin{tabular}{l}\n(topological) Morse theory \\\\\n (metric) critical point theory\n\\end{tabular}\n};\n\\node (FG) [startstop, below of=F\/G, yshift=2.6cm,xshift=3.3cm] {\n\\begin{tabular}{r}\n(Nonlinear) Spectral theory\\\\\n\\end{tabular}};\n\n\\node (algorithm) [io1, right of=F\/G, xshift=2cm] {\\begin{tabular}{l}\n Continuous\\\\\n Programming\\\\\n\\& Algorithm\n\\end{tabular}};\n\\node (submodular) [io2, above of=DO, yshift=-2.6cm] {Submodularity};\n\\node (convexity) [io2, right of=submodular, xshift=5.2cm] {Convexity};\n\\draw [arrow](CQ) --node[anchor=south] { } (DO);\n\\draw [arrow](DO) 
--node[anchor=south] { } (CQ);\n\\draw [arrow](DO) --node[anchor=south]{Discrete-to-Continuous }node[anchor=north] {extension} (CO);\n\\draw [arrow](DM) -- node[anchor=south] {\\cite{JZ-prepare21}}node[anchor=north] { Part I } (F\/G);\n\\draw [arrow](FG) -- node[anchor=south] {} (algorithm);\n\\draw [arrow](F\/G) -- node[anchor=south] { } (FG);\n\\draw [arrow](CO) -- node[anchor=south] { } (algorithm);\n\\draw [arrow](CO) -- node[anchor=south] { } (F\/G);\n\\draw [arrow](CO) -- node[anchor=south] { } (FG);\n\\draw [arrow](submodular) -- node[anchor=south] { Lov\\'asz extension } (convexity);\n\\draw [arrow](submodular) -- node[anchor=south] { } (DO);\n\\draw [arrow](convexity) -- node[anchor=south] { } (submodular);\n\\draw [arrow](convexity) -- node[anchor=south] { } (algorithm);\n\\end{tikzpicture}\n\\caption{\\label{fig:flowchart} The relationship between the aspects\nstudied in our work.}\n\\end{figure}\n\n\n\n\\begin{notification}\nSince this paper contains many interacting parts and relevant results, some notions and concepts may have slightly distinct\nmeanings in different sections, but this will be stated at the beginning of each section.\n\\end{notification}\n\n\n\n\n\\section{A preliminary: Lov\\'asz extension and submodular functions}\n\\label{sec:Lovasz}\n\n\n\nWhile most of the results on submodularity are known in the field of discrete\nconvex analysis, we present some details in a simple manner, which should be helpful to understand our main results in Section \\ref{sec:main}.\n\n\nWe first formalize some important results about the original Lov\\'asz extension.\n\n\n\\begin{defn}\nTwo vectors $\\vec x$ and $\\vec y$ are {\\sl comonotonic} if $(x_i-x_j)(y_i-y_j)\\ge0$, $\\forall i,j\\in \\{1,2,\\cdots,n\\}$.\n\nA function $F:\\ensuremath{\\mathbb{R}}^n\\to\\ensuremath{\\mathbb{R}}$ is {\\sl comonotonic additive} if $F(\\vec x+\\vec y)=F(\\vec x)+F(\\vec y)$ for any comonotonic pair $\\vec x$ and $\\vec y$.\n\\end{defn}\nThe following proposition 
shows that a function is comonotonic additive if and only if it can be expressed as the Lov\\'asz extension of some function.\n\\begin{pro}\n\\label{pro:comonotonic-additivity}\n$F:\\ensuremath{\\mathbb{R}}^n\\to\\ensuremath{\\mathbb{R}}$ is the Lov\\'asz extension $F=f^L$ of some function $f:\\ensuremath{\\mathcal{P}}(V)\\to\\ensuremath{\\mathbb{R}}$ if and only if $F$ is comonotonic additive.\n\\end{pro}\n\nRecall the following known results:\n\n\\begin{theorem}[Lov\\'asz]\\label{thm:Lovasz}\nThe following conditions are equivalent: (1) $f$ is submodular; (2) $f^L$ is convex; (3) $f^L$ is submodular.\n\\end{theorem}\n\n\\begin{remark}\nThe fact that $f$ is submodular if and only if $f^L$ is submodular is provided by Propositions 7.38 and 7.39 in \\cite{Murota03book}. We shall give a detailed proof for a generalized version of Theorem \\ref{thm:Lovasz} (see Theorem \\ref{thm:submodular-L-equivalent}). \n\\end{remark}\n\n\n\\begin{theorem}[Murota]\\label{thm:Chateauneuf-Cornet}\n$F:\\ensuremath{\\mathbb{R}}^n\\to\\ensuremath{\\mathbb{R}}$ is the Lov\\'asz extension $F=f^L$ of some submodular $f:\\ensuremath{\\mathcal{P}}(V)\\to\\ensuremath{\\mathbb{R}}$ if and only if $F$ is positively one-homogeneous, submodular and $F(\\vec x+t\\vec 1)=F(\\vec x)+tF(\\vec 1)$, $\\forall \\vec x\\in\\ensuremath{\\mathbb{R}}^n$, $t\\in\\ensuremath{\\mathbb{R}}$.\n\\end{theorem}\n\n\\begin{remark}\nTheorem \\ref{thm:Chateauneuf-Cornet} was originally proved by establishing a one-to-one correspondence between positively \nhomogeneous L-convex functions and submodular functions (see \n Theorem 7.40 in Murota's book \\cite{Murota03book}). 
An alternative proof is\n given in \\cite{CC18}.\n\\end{remark}\n\n\nWe shall establish these results for the disjoint-pair version and the $k$-way version of the Lov\\'asz extension.\n\n\\subsection{Disjoint-pair and $k$-way Lov\\'asz extensions}\n\nSince it is natural to set $f(\\varnothing,\\varnothing)=0$, one may write \\eqref{eq:disjoint-pair-Lovasz-def-integral} as\n\\begin{equation}\\label{eq:disjoint-pair-Lovasz-def-integral2}\nf^{L}(\\vec x)=\\int_0^{\\infty} f(V_+^t(\\vec x),V_-^t(\\vec x))dt,\n\\end{equation}\nor, equivalently, as\n\\begin{equation}\\label{eq:disjoint-pair-Lovasz-def}\nf^{L}(\\vec x)=\\sum_{i=0}^{n-1}(|x_{\\sigma(i+1)}|-|x_{\\sigma(i)}|)f(V_{\\sigma(i)}^+(\\vec x),V_{\\sigma(i)}^-(\\vec x)),\n\\end{equation}\nwhere $\\sigma:V\\cup\\{0\\}\\to V\\cup\\{0\\}$ is a bijection such that $|x_{\\sigma(1)}|\\le |x_{\\sigma(2)}| \\le \\cdots\\le |x_{\\sigma(n)}|$ and $\\sigma(0)=0$, where $x_0:=0$, and\n$$V_{\\sigma(i)}^\\pm(\\vec x):=\\{j\\in V:\\pm x_j> |x_{\\sigma(i)}|\\},\\;\\;\\;\\; i=0,1,\\cdots,n-1.$$\nWe regard $\\ensuremath{\\mathcal{P}}_2(V)=3^V$ as $\\{-1,0,1\\}^n$ by identifying the disjoint pair $(A,B)$ with the ternary (indicator) vector $\\vec 1_A-\\vec1_B$.\n\nOne may compare the original and the disjoint-pair Lov\\'asz extensions by writing \\eqref{eq:disjoint-pair-Lovasz-def-integral} as\n\\begin{equation}\\label{eq:disjoint-pair-form}\n\\int_{\\min_i |x_i|}^{\\max_i |x_i|} f(V_+^t(\\vec x),V_-^t(\\vec x))dt+\\min_i |x_i| f(V_+,V_-),\n\\end{equation}\nwhere $V_\\pm=\\{i\\in V:\\pm x_i>0\\}$. 
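The summation form \\eqref{eq:disjoint-pair-Lovasz-def} is straightforward to implement. The Python sketch below (the randomly generated $f$ is our own stand-in for any concrete choice of a function on disjoint pairs) computes $f^{L}$ in this way and checks the interpolation property $f^L(\\vec 1_A-\\vec 1_B)=f(A,B)$ together with positive one-homogeneity.

```python
import itertools, random

def disjoint_pair_lovasz(f, x):
    # Summation form: sum over the sorted |x|-levels of
    # (|x_{sigma(i+1)}| - |x_{sigma(i)}|) * f(V^+_{sigma(i)}, V^-_{sigma(i)}),
    # where V^{+/-}_{sigma(i)} = { j : +/- x_j > |x_{sigma(i)}| }.
    a = [0.0] + sorted(abs(v) for v in x)
    total = 0.0
    for i in range(len(x)):
        Ap = frozenset(j for j, v in enumerate(x) if v > a[i])
        Am = frozenset(j for j, v in enumerate(x) if -v > a[i])
        total += (a[i + 1] - a[i]) * f(Ap, Am)
    return total

n = 3
random.seed(1)
vals = {}  # memoized random values; f(emptyset, emptyset) = 0 by convention
def f(A, B):
    return 0.0 if not (A or B) else vals.setdefault((A, B), random.random())

# f^L interpolates f at the ternary indicator vectors 1_A - 1_B ...
for s in itertools.product([-1, 0, 1], repeat=n):
    A = frozenset(i for i, v in enumerate(s) if v > 0)
    B = frozenset(i for i, v in enumerate(s) if v < 0)
    assert abs(disjoint_pair_lovasz(f, s) - f(A, B)) < 1e-9
# ... and is positively one-homogeneous
x = [0.7, -1.3, 0.2]
assert abs(disjoint_pair_lovasz(f, [2 * v for v in x])
           - 2 * disjoint_pair_lovasz(f, x)) < 1e-9
```

The same loop structure also realizes the integral form \\eqref{eq:disjoint-pair-Lovasz-def-integral2}, since the integrand is piecewise constant between consecutive $|x|$-levels.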
Note that \\eqref{eq:disjoint-pair-form} is very similar to \\eqref{eq:Lovaintegral}.\n\n\\begin{defn}\nGiven $V_i=\\{1,\\cdots,n_i\\}$, $i=1,\\cdots,k$, and a function $f:\\mathcal{P}(V_1)\\times \\cdots\\times \\mathcal{P}(V_k)\\to \\ensuremath{\\mathbb{R}}$, the $k$-way Lov\\'asz extension $f^L: \\ensuremath{\\mathbb{R}}^{V_1}\\times\\cdots\\times \\ensuremath{\\mathbb{R}}^{V_k}\\to \\ensuremath{\\mathbb{R}}$ can be written as\n\\begin{align*}\nf^L(\\vec x^1,\\cdots,\\vec x^k)&=\\int_{\\min \\vec x}^{\\max \\vec x}f(V^t_1(\\vec x^1),\\cdots,V^t_k(\\vec x^k))dt+ f(V_1,\\cdots,V_k)\\min\\vec x\\\\&\n=\\int_{-\\infty}^0(f(V^t_1(\\vec x^1),\\cdots,V^t_k(\\vec x^k))-f(V_1,\\cdots,V_k))dt + \\int_0^{+\\infty}f(V^t_1(\\vec x^1),\\cdots,V^t_k(\\vec x^k)) d t\n\\end{align*}\nwhere $V^t_i(\\vec x^i)=\\{j\\in V_i:x^i_j>t\\}$, $\\min\\vec x=\\min\\limits_{i,j} x^i_j$ and $\\max\\vec x=\\max\\limits_{i,j} x^i_j$.\n\\end{defn}\n\n\\begin{defn}[$k$-way analog for disjoint-pair Lov\\'asz extension]\n\n Given $V_i=\\{1,\\cdots,n_i\\}$, $i=1,\\cdots,k$, and a function $f:\\mathcal{P}_2(V_1)\\times \\cdots\\times \\mathcal{P}_2(V_k)\\to \\ensuremath{\\mathbb{R}}$, define $f^L: \\ensuremath{\\mathbb{R}}^{V_1}\\times\\cdots\\times \\ensuremath{\\mathbb{R}}^{V_k}\\to \\ensuremath{\\mathbb{R}}$ by\n $$f^L(\\vec x^1,\\cdots,\\vec x^k)=\n \\int_0^{\\|\\vec x\\|_\\infty} f(V_{1,t}^+(\\vec x^1),V_{1,t}^-(\\vec x^1),\\cdots,V_{k,t}^+(\\vec x^k),V_{k,t}^-(\\vec x^k))dt\n $$\n where $V_{i,t}^\\pm(\\vec x^i)=\\{j\\in V_i:\\pm x^i_j>t\\}$, $\\|\\vec x\\|_\\infty=\\max\\limits_{i=1,\\cdots,k} \\|\\vec x^i\\|_\\infty$. 
We can replace $\\|\\vec x\\|_\\infty$ by $+\\infty$ if we set $f(\\varnothing,\\cdots,\\varnothing)=0$.\n\\end{defn}\n\\vspace{0.19cm}\n\nSome basic properties of the multi-way Lov\\'asz extension are shown below.\n\\begin{pro}\\label{pro:multi-way-property}\nFor the multi-way Lov\\'asz extension $f^L(\\vec x)$, we have\n\\begin{enumerate}[(a)]\n\\item $f^L(\\cdot)$ is positively one-homogeneous, piecewise linear, and Lipschitz continuous.\n\\item $(\\lambda f)^L=\\lambda f^L$, $\\forall\\lambda\\in\\ensuremath{\\mathbb{R}}$.\n\n\\end{enumerate}\n\\end{pro}\n\n\\begin{pro}\\label{pro:setpair-property}\nFor the disjoint-pair Lov\\'asz extension $f^L(\\vec x)$, we have\n\\begin{enumerate}[(a)]\n\\item $f^L$ is Lipschitz continuous, and $|f^L(x)-f^L(y)|\\le 2\\max\\limits_{(A,B)\\in \\ensuremath{\\mathcal{P}}_2(V)}f(A,B) \\|x-y\\|_1$, $\\forall x,y\\in \\mathbb{R}^n$. Also, $|f^L(x)-f^L(y)|\\le 2\\sum\\limits_{(A,B)\\in \\ensuremath{\\mathcal{P}}_2(V)}f(A,B) \\|x-y\\|_\\infty$, $\\forall x,y\\in \\mathbb{R}^n$.\n\\item $f^L(-\\vec x)=\\pm f^L(\\vec x)$, $\\forall \\vec x\\in\\ensuremath{\\mathbb{R}}^V$ if and only if $f(A,B)=\\pm f(B,A)$, $\\forall (A,B)\\in \\ensuremath{\\mathcal{P}}_2(V)$.\n\\item\\label{pro:pro-c} $f^L(\\vec x+\\vec y)=f^L(\\vec x)+f^L(\\vec y)$ whenever $V_\\pm(\\vec y)\\subset V_\\pm(\\widetilde{\\vec x})$, where $\\widetilde{\\vec x}$ has components $\\widetilde{ x}_i=\\begin{cases}x_i,&\\text{ if }|x_i|=\\|\\vec x\\|_\\infty,\\\\ 0,&\\text{ otherwise}.\\end{cases}$\n\\end{enumerate}\n\\end{pro}\n\\begin{proof} (a) and (b) are actually known results and their proofs are elementary.\n(c) can be derived from the definition \\eqref{eq:disjoint-pair-Lovasz-def}.\n\\end{proof}\n\n\\begin{defn}\n\\label{def:associate-piece}\nTwo vectors $\\vec x$ and $\\vec y$ are said to be absolutely comonotonic \nif $x_iy_i\\ge0$, $\\forall i$, and $(|x_i|-|x_j|)(|y_i|-|y_j|)\\ge0$, $\\forall i,j$.\n\\end{defn}\n\n\\begin{pro}\\label{pro:setpair-character}\nA 
continuous function $F$ is a disjoint-pair Lov\\'asz extension of some function $f:\\ensuremath{\\mathcal{P}}_2(V)\\to\\ensuremath{\\mathbb{R}}$, if and only if\n$F(\\vec x)+F(\\vec y)=F(\\vec x+\\vec y)$ whenever $\\vec x$ and $\\vec y$ are absolutely comonotonic.\n\\end{pro}\n\n\\begin{proof} By the definition of the disjoint-pair Lov\\'asz extension (see \\eqref{eq:disjoint-pair-Lovasz-def}),\nwe know that $F$ is a disjoint-pair Lov\\'asz extension of some function $f:\\ensuremath{\\mathcal{P}}_2(V)\\to\\ensuremath{\\mathbb{R}}$ if and only if\n$\\lambda F(\\vec x)+(1-\\lambda)F(\\vec y)=F(\\lambda\\vec x+(1-\\lambda)\\vec y)$ for all absolutely comonotonic\nvectors $\\vec x$ and $\\vec y$, $\\forall \\lambda\\in[0,1]$. Therefore, we only need to prove the sufficiency part.\n\nFor $\\vec x\\in\\ensuremath{\\mathbb{R}}^V$, since $s\\vec x$ and $t\\vec x$ with $s,t\\ge 0$ are absolutely comonotonic,\n $F(s\\vec x)+F(t\\vec x)=F((s+t)\\vec x)$, which yields a Cauchy equation on the half-line. Thus the continuity assumption implies the linearity of $F$ on the ray $\\ensuremath{\\mathbb{R}}^+\\vec x$, which implies the property $F(t\\vec x)=tF(\\vec x)$, $\\forall t\\ge 0$, and hence $\\lambda F(\\vec x)+(1-\\lambda)F(\\vec y)=F(\\lambda\\vec x+(1-\\lambda)\\vec y)$ for any absolutely comonotonic\n vectors $\\vec x$ and $\\vec y$, $\\forall \\lambda\\in[0,1]$. 
This completes the proof.\n\\end{proof}\n\nFor relations between the original and the disjoint-pair Lov\\'asz extensions, we further have\n\\begin{pro}\\label{pro:setpair-original}\n For $h:\\ensuremath{\\mathcal{P}}(V)\\to [0,+\\infty)$ with $h(\\varnothing)=0$, and $f:\\ensuremath{\\mathcal{P}}_2(V)\\to [0,+\\infty)$ with $f(\\varnothing,\\varnothing)=0$ \\footnote{In fact, if $h(\\varnothing)\\ne 0$ or $f(\\varnothing,\\varnothing)\\ne0$, one may change the value and it does not affect the related Lov\\'asz extension.}, we have:\n\\begin{enumerate}[(a)]\n\\item If $f(A,B)=h(A)+h(V\\setminus B)-h(V)$, $\\forall (A,B)\\in \\ensuremath{\\mathcal{P}}_2(V)$, then $f^L=h^L$.\n\\item If $f(A,B)=h(A)+h(B)$ and $h(A)=h(V\\setminus A)$, $\\forall (A,B)\\in \\ensuremath{\\mathcal{P}}_2(V)$, then $f^L=h^L$.\n\\item If $f(A,B)=h(A)$, $\\forall (A,B)\\in \\ensuremath{\\mathcal{P}}_2(V)$, then $f^L(\\vec x)=h^L(\\vec x)$, $\\forall \\vec x\\in[0,\\infty)^V$.\n\\item If $f(A,B)=h(A\\cup B)$, $\\forall (A,B)\\in \\ensuremath{\\mathcal{P}}_2(V)$, then $f^L(\\vec x)=h^L(\\vec x^++\\vec x^-)$.\n\\item If $f(A,B)=h(A)\\pm h(B)$, $\\forall (A,B)\\in \\ensuremath{\\mathcal{P}}_2(V)$, then $f^L(\\vec x)=h^L(\\vec x^+)\\pm h^L(\\vec x^-)$.\n\\end{enumerate}\nHere $\\vec x^\\pm:=(\\pm \\vec x)\\vee \\vec0$.\n\\end{pro}\n\n\\begin{proof}\n\\begin{enumerate}[(a)]\n\\item Note that\n\\begin{align*}\nf^L(\\vec x)&=\\int_0^{\\|\\vec x\\|_\\infty}f(V^+_t(\\vec x),V^-_t(\\vec x))dt\n=\\int_0^{\\|\\vec x\\|_\\infty}(h(V_t(\\vec x))+h(V_{-t}(\\vec x))-h(V))dt\\\\\n&=\\int_{-\\|\\vec x\\|_\\infty}^{\\|\\vec x\\|_\\infty}h(V_t(\\vec x))dt- \\|\\vec x\\|_\\infty h(V)\n=\\int_{x_{\\sigma(1)}}^{x_{\\sigma(n)}}h(V_t(\\vec x))dt +x_{\\sigma(1)} h(V) = h^L(\\vec x),\n\\end{align*}\nwhere we use $\\|x\\|_\\infty=\\max\\{-x_{\\sigma(1)},x_{\\sigma(n)}\\}$ and $h(\\varnothing)=0$.\n\\item This is a direct consequence of (a) since $h(V)=h(\\varnothing)=0$ and $h(B)=h(V\\setminus B)$.\n\\item For any $\\vec 
x\\in \\ensuremath{\\mathbb{R}}^V$ with $x_i\\ge0$, we note that $f^L(\\vec x)=\\int_0^{\\|\\vec x\\|_\\infty}h(V^+_t(\\vec x))dt=\\int_0^{\\max x_i}h(V_t(\\vec x))dt=\\int_{\\min x_i}^{\\max x_i}h(V_t(\\vec x))dt+\\min x_i h(V)=h^L(\\vec x)$.\n\\item\nSimilar to (c), one can check that $f^L(\\vec x)=h^L(\\vec x^++\\vec x^-)$.\n\\item It is straightforward.\n\\end{enumerate}\n\\end{proof}\n\nIn the sequel, we will not distinguish the original and the disjoint-pair Lov\\'asz extensions, since the reader can infer which is meant\nfrom the domain ($\\ensuremath{\\mathcal{P}}(V)$ or $\\ensuremath{\\mathcal{P}}_2(V)$). Sometimes we work on $\\ensuremath{\\mathcal{P}}(V)$ only, and in this situation, the disjoint-pair Lov\\'asz extension acts on the redefined $f(A,B)=h(A\\cup B)$ as Proposition \\ref{pro:setpair-original} states.\n\nThe next result is useful for the application to graph coloring.\n\n\\begin{pro}\\label{pro:separable-summation}\nFor the simple $k$-way Lov\\'asz extension of $f:\\ensuremath{\\mathcal{P}}(V_1)\\times\\cdots\\times\\ensuremath{\\mathcal{P}}(V_k)\\to \\ensuremath{\\mathbb{R}}$ with the separable summation form $f(A_1,\\cdots,A_k):=\\sum_{i=1}^kf_i(A_i)$, $\\forall (A_1,\\cdots,A_k)\\in \\ensuremath{\\mathcal{P}}(V)^k$, we have $f^L(\\vec x^1,\\cdots,\\vec x^k)=\\sum_{i=1}^kf_i^L(\\vec x^i)$, $\\forall (\\vec x^1,\\cdots,\\vec x^k)$.\n\nFor $f:\\ensuremath{\\mathcal{P}}_2(V_1)\\times\\cdots \\times\\ensuremath{\\mathcal{P}}_2(V_k)\\to \\ensuremath{\\mathbb{R}}$ with the form $f(A_1,B_1,\\cdots,A_k,B_k):=\\sum_{i=1}^kf_i(A_i,B_i)$, $\\forall (A_1,B_1,\\cdots,A_k,B_k)\\in \\ensuremath{\\mathcal{P}}_2(V_1)\\times\\cdots \\times \\ensuremath{\\mathcal{P}}_2(V_k)$, there similarly holds $f^L(\\vec x^1,\\cdots,\\vec x^k)=\\sum_{i=1}^kf_i^L(\\vec x^i)$.\n\\end{pro}\n\n\\subsection{Submodularity and Convexity}\\label{sec:SubmodularityConvexity}\nIn this subsection, we give new analogs of Theorems \\ref{thm:Lovasz} and \\ref{thm:Chateauneuf-Cornet} for the 
disjoint-pair Lov\\'asz extension and the $k$-way Lov\\'asz extension. The major difference from existing results in the literature is that we work with the restricted or the enlarged domain of a function.\n\nLet us first recall the standard concepts of submodularity:\n\\begin{enumerate}[({S}1)]\n\\item A discrete function $f:\\ensuremath{\\mathcal{A}}\\to \\ensuremath{\\mathbb{R}}$ is submodular if $f(A)+f(B)\\ge f(A\\cup B)+f(A\\cap B)$, $\\forall A,B\\in\\ensuremath{\\mathcal{A}}$, where $\\ensuremath{\\mathcal{A}}\\subset\\mathcal{P}(V)$ is an algebra (i.e., $\\ensuremath{\\mathcal{A}}$ is closed under union and intersection).\n\\item A continuous function $F:\\ensuremath{\\mathbb{R}}^n\\to \\ensuremath{\\mathbb{R}}$ is submodular if\n $F(\\vec x)+F(\\vec y)\\ge F(\\vec x\\vee \\vec y)+F(\\vec x\\wedge \\vec y)$, where $(\\vec x\\vee \\vec y)_i=\\max\\{x_i,y_i\\}$ and $(\\vec x\\wedge \\vec y)_i=\\min\\{x_i,y_i\\}$, $i=1,\\cdots,n$. For a sublattice ${\\mathcal D}\\subset\\ensuremath{\\mathbb{R}}^n$ that is closed under $\\vee$ and $\\wedge$, one can define submodularity in the same way.\n\\end{enumerate}\n\n\\begin{notification} The discussion about algebras of sets can be reduced to lattices.\nClassical submodular functions on a sublattice of the Boolean lattice\n$\\{0,1\\}^n$ and their continuous versions on $\\ensuremath{\\mathbb{R}}^n$ are presented in (S1) and\n(S2), respectively. 
Bisubmodular functions on a graded sub-poset (partially\nordered set) of $\\{-1,0,1\\}^n$ are defined in \\eqref{eq:2-submodular} below.\n\\end{notification}\n\n Now, we\n recall the concept of bisubmodularity and introduce its continuous version.\n \\begin{enumerate}\n\\item[(BS1)] A discrete function $f:\\mathcal{P}_2(V)\\to \\ensuremath{\\mathbb{R}}$ is bisubmodular if $\\forall\\,(A,B),(C,D)\\in \\ensuremath{\\mathcal{P}}_2(V)$\n\\begin{equation}\\label{eq:2-submodular}\nf(A,B)+f(C,D)\\ge f((A\\cup C)\\setminus (B\\cup D),(B\\cup D)\\setminus(A\\cup C))+f(A\\cap C,B\\cap D).\n\\end{equation}\nOne can denote $A\\vee B=((A_1\\cup B_1)\\setminus (A_2\\cup B_2),(A_2\\cup B_2)\\setminus (A_1\\cup B_1))$ and $A\\wedge B=(A_1\\cap B_1,A_2\\cap B_2)$, where $A=(A_1,A_2)$, $B=(B_1,B_2)$. For a subset $\\ensuremath{\\mathcal{A}}\\subset \\mathcal{P}_2(V)$ that is closed under $\\vee$ and $\\wedge$, the bisubmodularity of $f:\\ensuremath{\\mathcal{A}}\\to\\ensuremath{\\mathbb{R}}$ can be expressed as $f(A)+f(B)\\ge f(A\\vee B)+f(A\\wedge B)$, $\\forall A,B\\in\\ensuremath{\\mathcal{A}}$.\n\\end{enumerate}\n\nIf we were to define a continuous version of bisubmodularity by simply carrying over the definition stated in (S2), we would obtain nothing new. Hence, the proof of Theorem \\ref{thm:Chateauneuf-Cornet} cannot be directly applied to our situation. 
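Bisubmodularity in the sense of \\eqref{eq:2-submodular} can be checked mechanically on small ground sets. The Python sketch below enumerates all disjoint pairs over a three-element set and verifies the inequality for the example $f(A,B)=\\#A+\\#B$; this modular example is our own illustrative choice, not one used in the text.

```python
import itertools

n = 3
V = range(n)

def pairs():
    # All disjoint pairs (A, B), encoded as ternary vectors in {-1, 0, 1}^n.
    for s in itertools.product([-1, 0, 1], repeat=n):
        A = frozenset(i for i in V if s[i] > 0)
        B = frozenset(i for i in V if s[i] < 0)
        yield A, B

def f(A, B):      # illustrative (modular, hence bisubmodular) example
    return len(A) + len(B)

def meet(AB, CD):  # (A intersect C, B intersect D)
    (A, B), (C, D) = AB, CD
    return (A & C, B & D)

def join(AB, CD):  # ((A union C) \ (B union D), (B union D) \ (A union C))
    (A, B), (C, D) = AB, CD
    return ((A | C) - (B | D), (B | D) - (A | C))

# Verify eq. (2-submodular) over all pairs of disjoint pairs.
for AB in pairs():
    for CD in pairs():
        J, M = join(AB, CD), meet(AB, CD)
        assert f(*AB) + f(*CD) >= f(*J) + f(*M)
print("f(A,B) = |A| + |B| is bisubmodular on the 3-element ground set")
```

Note that the join always produces a genuinely disjoint pair again, which is exactly the reason for the symmetric-difference construction in \\eqref{eq:2-submodular}.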
To overcome this issue, we need to provide a matched definition of bisubmodularity for functions on $\\ensuremath{\\mathbb{R}}^n$, and an appropriate and careful modification of the translation linearity\ncondition.\n\n\\begin{enumerate}\n \\item[(BS2)] A continuous function $F:\\ensuremath{\\mathbb{R}}^n\\to \\ensuremath{\\mathbb{R}}$ is bisubmodular if\n $F(x)+F(y)\\ge F(x\\vee y)+F(x\\wedge y)$, where\n $$(x \\vee y)_i=\\begin{cases}\\max\\{x_i,y_i\\},&\\text{ if } x_i,y_i\\ge0,\\\\\n \\min\\{x_i,y_i\\},&\\text{ if } x_i,y_i\\le0, \\\\\n 0,&\\text{ if } x_iy_i<0,\\end{cases}\\;\\;\\;\\;\\;\\;\\;(x \\wedge y)_i=\\begin{cases}\\min\\{x_i,y_i\\},&\\text{ if } x_i,y_i\\ge0,\\\\\n \\max\\{x_i,y_i\\},&\\text{ if } x_i,y_i\\le0, \\\\\n 0,&\\text{ if } x_iy_i<0.\\end{cases}$$\n\\end{enumerate}\n\n\\begin{pro}\\label{pro:bisubmodular-continuous}\nA function $F:\\ensuremath{\\mathbb{R}}^V\\to \\ensuremath{\\mathbb{R}}$ is a disjoint-pair Lov\\'asz extension of a bisubmodular function if and only if $F$ is (continuously) bisubmodular (in the sense of (BS2)) and for any $\\vec x\\in\\ensuremath{\\mathbb{R}}^V ,\\,t\\ge0$,\n{ \\linespread{0.95} \\begin{enumerate}\n\\item[] $F(t\\vec x)=tF(\\vec x)$ (positive homogeneity);\n\\item[] $F(\\vec x+t\\vec 1_{V^+,V^-})\\ge F(\\vec x)+F(t\\vec 1_{V^+,V^-})$ for some\\footnote{This is some kind of `translation linearity' if we adopt the assumption $F(\\vec x+t\\vec 1_{V^+,V^-})= F(\\vec x)+F(t\\vec 1_{V^+,V^-})$. } $V^\\pm\\supset V^\\pm(\\vec x)$ with $V^+\\cup V^-=V$.\n\\end{enumerate} }\n\nHenceforth, $\\vec 1_{A,B}$ is defined as $\\vec 1_A-\\vec 1_B$ for simplicity.\n\\end{pro}\n\nThe proof is a modification of the previous version on the original Lov\\'asz extension for submodular functions. \n\n\\begin{proof}\nTake the discrete function $f$ defined as $f(A_1,A_2)=F(\\vec 1_{A_1,A_2})$. One can check the bisubmodularity of $f$ directly. 
Fix an $\\vec x\\in\\ensuremath{\\mathbb{R}}^n$ and let $\\sigma:V\\cup\\{0\\}\\to V\\cup\\{0\\}$ be a bijection such that $|x_{\\sigma(1)}|\\le |x_{\\sigma(2)}| \\le \\cdots\\le |x_{\\sigma(n)}|$ and $\\sigma(0)=0$, where $x_0:=0$, and\n$$V_{\\sigma(i)}^\\pm=V_{\\sigma(i)}^\\pm(\\vec x):=\\{j\\in V:\\pm x_j> |x_{\\sigma(i)}|\\},\\;\\;\\;\\; i=0,1,\\cdots,n-1.$$\nAlso, we denote $\\vec x_{V_{\\sigma(i)}^+,V_{\\sigma(i)}^-}=\\vec x * \\vec 1_{V_{\\sigma(i)}^+\\cup V_{\\sigma(i)}^-}$ (i.e., the restriction of $\\vec x$ onto $V_{\\sigma(i)}^+\\cup V_{\\sigma(i)}^-$, with other components $0$), where $\\vec x*\\vec y:=(x_1y_1,\\cdots,x_ny_n)$.\n\nFor simplicity, in the following formulas, we identify $\\sigma(i)$ with $i$ for all $i=0,\\cdots,n$.\n\nIt follows from $|x_{i+1}|\\vec 1_{V_{i}^+,V_{i}^-}\\bigvee \\vec x_{V_{i+1}^+,V_{i+1}^-}=\\vec x_{V_{i}^+,V_{i}^-}$ and\n$$|x_{i+1}|\\vec 1_{V_{i}^+,V_{i}^-}\\bigwedge \\vec x_{V_{i+1}^+,V_{i+1}^-}= |x_{i+1}|\\vec 1_{V_{i+1}^+,V_{i+1}^-}$$ that\n\\begin{align*}\nf^{L}(\\vec x)&=\\sum_{i=0}^{n-1}(|x_{i+1}|-|x_{i}|)f(V_{i}^+,V_{i}^-)\n\\\\&=\\sum_{i=0}^{n-1}|x_{i+1}|\\left(f(V_{i}^+,V_{i}^-)-f(V_{i+1}^+,V_{i+1}^-)\\right)\n\\\\&=\\sum_{i=0}^{n-1}\\left\\{F\\left(|x_{i+1}|\\vec 1_{V_{i}^+,V_{i}^-}\\right)-F\\left(|x_{i+1}|\\vec 1_{V_{i+1}^+,V_{i+1}^-}\\right)\\right\\}\n\\\\&\\ge\\sum_{i=0}^{n-1}\\left\\{F\\left(\\vec x_{V_{i}^+,V_{i}^-}\\right)-F\\left(\\vec x_{V_{i+1}^+,V_{i+1}^-}\\right)\\right\\}=F(\\vec x).\n\\end{align*}\nOn the other hand,\n\\begin{align*}\nf^{L}(\\vec x)&=\\sum_{i=0}^{n-1}(|x_{i+1}|-|x_{i}|)f(V_{i}^+,V_{i}^-)\n=\\sum_{i=0}^{n-1}F\\left((|x_{i+1}|-|x_{i}|)\\vec 1_{V_{i}^+,V_{i}^-}\\right)\n\\\\&=\\sum_{i=0}^{n-2}\\left\\{F((|x_{i+1}|-|x_{i}|)\\vec 1_{V_{i}^+,V_{i}^-})-F((|x_{i+1}|-|x_{i}|)\\vec 1_{V^+,V^-})\\right\\}\n\\\\&\\;\\;\\;\\;\\;+\\left\\{\\sum_{i=0}^{n-2}F((|x_{i+1}|-|x_{i}|)\\vec 1_{V^+,V^-})\\right\\}+F\\left((|x_n|-|x_{n-1}|)\\vec 1_{V^+_{n-1},V^-_{n-1}}\\right)\n\\\\&\\le 
\\sum_{i=0}^{n-2}\\left\\{F\\left(\\vec x_{V_{i}^+,V_{i}^-}-|x_{i}|\\vec 1_{V^+,V^-}\\right)-F\\left(\\vec x_{V_{i+1}^+,V_{i+1}^-}-|x_{i+1}|\\vec 1_{V^+,V^-}+(|x_{i+1}|-|x_{i}|)\\vec 1_{V^+,V^-}\\right)\\right\\}\n\\\\&\\;\\;\\;\\;\\;+\\left\\{\\sum_{i=0}^{n-2}(|x_{i+1}|-|x_{i}|)F(\\vec 1_{V^+,V^-})\\right\\}+F\\left((|x_n|-|x_{n-1}|)\\vec 1_{V^+_{n-1},V^-_{n-1}}\\right)\n\\\\&\\le \\sum_{i=0}^{n-2}\\left(F(\\vec x_{V_{i}^+,V_{i}^-}-|x_{i}|\\vec 1_{V^+,V^-})-F(\\vec x_{V_{i+1}^+,V_{i+1}^-}-|x_{i+1}|\\vec 1_{V^+,V^-})\\right)+F\\left((|x_n|-|x_{n-1}|)\\vec 1_{V^+_{n-1},V^-_{n-1}}\\right)\n\\\\&=F(\\vec x)\n\\end{align*}\naccording to $(|x_{i+1}|-|x_{i}|)\\vec 1_{V^+,V^-}\\bigwedge (\\vec x_{V_{i}^+,V_{i}^-}-|x_{i}|\\vec 1_{V^+,V^-})= (|x_{i+1}|-|x_{i}|)\\vec 1_{V_{i}^+,V_{i}^-}$ and $$(|x_{i+1}|-|x_{i}|)\\vec 1_{V^+,V^-}\\bigvee (\\vec x_{V_{i}^+,V_{i}^-}-|x_{i}|\\vec 1_{V^+,V^-})=\\vec x_{V_{i+1}^+,V_{i+1}^-}-|x_{i+1}|\\vec 1_{V^+,V^-}+(|x_{i+1}|-|x_{i}|)\\vec 1_{V^+,V^-}$$\nfor $i=0,\\cdots,n-2$, as well as $\\vec x_{V_{n-1}^+,V_{n-1}^-}-|x_{n-1}|\\vec 1_{V^+,V^-}= (|x_n|-|x_{n-1}|)\\vec 1_{V^+_{n-1},V^-_{n-1}}$. Therefore, we have $F(\\vec x)=f^L(\\vec x)$. The proof is completed.\n\\end{proof}\n\n\n\\begin{pro}\\label{pro:setpair-character2}\nA continuous function $F$ is a disjoint-pair Lov\\'asz extension of some function $f:\\ensuremath{\\mathcal{P}}_2(V)\\to\\ensuremath{\\mathbb{R}}$ if and only if $F(\\vec x\\vee c\\vec 1_{V^+,V^-})+F(\\vec x-\\vec x\\vee c\\vec 1_{V^+,V^-})=F(\\vec x)$ (or $F(\\vec x\\wedge c\\vec 1_{V^+,V^-})+F(\\vec x-\\vec x\\wedge c\\vec 1_{V^+,V^-})=F(\\vec x)$), for some $V^\\pm\\supset V^\\pm(\\vec x)$ with $V^+\\cup V^-=V$, $\\forall c\\ge0$ and $\\vec x\\in\\ensuremath{\\mathbb{R}}^n$.\n\\end{pro}\n\n\\begin{proof}\nWe only need to prove that the condition implies the absolutely comonotonic additivity of $F$,\nand then apply Proposition \\ref{pro:setpair-character}. 
Note that the property $F(\\vec x\\vee c\\vec 1)+F(\\vec x-\\vec x\\vee c\\vec 1)=F(\\vec x)$ implies a summation form of $F$ which agrees with the form of the disjoint-pair Lov\\'asz extension. Then using the absolutely comonotonic additivity, we get the desired result.\n\\end{proof}\n\nThe $k$-way submodularity can be defined naturally, in analogy with (S1) and (S2):\n\\begin{enumerate}\n\\item[({KS})] Given a tuple $V=(V_1,\\cdots,V_k)$ of finite sets and $\\ensuremath{\\mathcal{A}}\\subset\\{(A_1,\\cdots,A_k):A_i\\subset V_i,\\,i=1,\\cdots,k\\}$,\na discrete function $f:\\ensuremath{\\mathcal{A}}\\to \\ensuremath{\\mathbb{R}}$ is $k$-way submodular if $f(A)+f(B)\\ge f(A\\vee B)+f(A\\wedge B)$, $\\forall A,B\\in\\ensuremath{\\mathcal{A}}$, where $\\ensuremath{\\mathcal{A}}$ is a lattice under the corresponding lattice operations join $\\vee$ and meet $\\wedge$ defined by $A\\vee B=(A_1\\cup B_1,\\cdots, A_k\\cup B_k)$ and $A\\wedge B=(A_1\\cap B_1,\\cdots, A_k\\cap B_k)$.\n\\end{enumerate}\n\n\n\\begin{theorem}\\label{thm:submodular-L-equivalent}\nUnder the assumptions and notations in (KS) above, ${\\mathcal D}_\\ensuremath{\\mathcal{A}}$ is also closed under $\\wedge$ and $\\vee$, with $\\wedge$ and $\\vee$ as in (S2).\n Moreover, the following statements are equivalent:\n \\begin{enumerate}[a)]\n \\item $f$ is $k$-way submodular on $\\ensuremath{\\mathcal{A}}$;\n\\item the $k$-way Lov\\'asz extension $f^L$ is convex on each convex subset of ${\\mathcal D}_\\ensuremath{\\mathcal{A}}$;\n\\item the $k$-way Lov\\'asz extension $f^L$ is submodular on ${\\mathcal D}_\\ensuremath{\\mathcal{A}}$.\n\\end{enumerate}\n\nIf one replaces (KS) and (S2) by (BS1) and (BS2) respectively for the bisubmodular setting, then all the above results hold analogously.\n\\end{theorem}\n\nThe proof is a slight variation of Lov\\'asz's original argument, and\nis provided here for the reader's convenience. 
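For the special case $k=1$ (the original Lov\\'asz extension), the equivalences in Theorem \\ref{thm:submodular-L-equivalent} can be probed numerically. The sketch below is only an illustration, not part of the proof: the triangle-graph cut function is an arbitrarily chosen submodular instance, and $f^L$ is evaluated through the integral formula $f^L(\\vec x)=\\int_{\\min\\vec x}^{\\max\\vec x}f(V^t(\\vec x))dt+f(V)\\min\\vec x$ used later in the text.

```python
import random

# Submodular instance chosen for illustration: the cut function of a triangle graph.
V = [0, 1, 2]
E = [(0, 1), (1, 2), (0, 2)]

def cut(A):
    """Number of edges leaving A; a standard submodular set function."""
    return sum(1 for i, j in E if (i in A) != (j in A))

def lovasz(f, x):
    """Original Lovasz extension:
    f^L(x) = integral_{min x}^{max x} f(V^t(x)) dt + f(V) * min(x),
    where V^t(x) = {i : x_i > t} (piecewise constant between sorted values)."""
    vals = sorted(set(x))
    total = f(set(V)) * vals[0]
    for a, b in zip(vals, vals[1:]):
        level = {i for i in V if x[i] > a}  # V^t for t in (a, b)
        total += (b - a) * f(level)
    return total

random.seed(0)
for _ in range(200):
    x = [random.uniform(-1, 1) for _ in V]
    y = [random.uniform(-1, 1) for _ in V]
    # (c) submodularity of f^L: f^L(x) + f^L(y) >= f^L(x v y) + f^L(x ^ y)
    join = [max(a, b) for a, b in zip(x, y)]
    meet = [min(a, b) for a, b in zip(x, y)]
    assert lovasz(cut, x) + lovasz(cut, y) >= lovasz(cut, join) + lovasz(cut, meet) - 1e-9
    # (b) convexity of f^L (midpoint version)
    mid = [(a + b) / 2 for a, b in zip(x, y)]
    assert lovasz(cut, mid) <= (lovasz(cut, x) + lovasz(cut, y)) / 2 + 1e-9
```

On indicator vectors the routine reproduces the set function itself, so it is indeed an extension.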
\n\n\\begin{proof}\nNote that $V^t(\\vec x)\\vee V^t(\\vec y)=V^t(\\vec x\\vee \\vec y)$ and $V^t(\\vec x)\\wedge V^t(\\vec y)=V^t(\\vec x\\wedge \\vec y)$, where $V^t(\\vec x):=(V^t(\\vec x^1),\\cdots,V^t(\\vec x^k))$, $\\forall t\\in\\ensuremath{\\mathbb{R}}$.\nSince $\\vec x\\in{\\mathcal D}_\\ensuremath{\\mathcal{A}}$ if and only if $V^t(\\vec x)\\in \\ensuremath{\\mathcal{A}}$, $\\forall t\\in\\ensuremath{\\mathbb{R}}$, and $\\ensuremath{\\mathcal{A}}$ is a lattice, ${\\mathcal D}_\\ensuremath{\\mathcal{A}}$ must be\na lattice that is closed under the operations $\\wedge$ and $\\vee$.\nAccording to the $k$-way Lov\\'asz extension \\eqref{eq:Lovasz-Form-1},\nwe may write\n$$f^L(\\vec x)=\\int_{-N}^Nf(V^t(\\vec x))dt-Nf(V)\n$$\nwhere $N>\\|\\vec x\\|_\\infty$ is a sufficiently large number\\footnote{Here we set $f(\\varnothing,\\cdots,\\varnothing)=0$}.\nNote that $\\vec 1_A\\vee \\vec 1_B=\\vec 1_{A\\vee B}$ and $\\vec 1_A\\wedge \\vec 1_B=\\vec 1_{A\\wedge B}$.\nCombining the above results, we immediately get\n$$f(A)+f(B)\\ge f(A\\vee B)+ f(A\\wedge B) \\; \\Leftrightarrow \\; f^L(\\vec x)+f^L(\\vec y)\\ge f^L(\\vec x\\vee \\vec y)+f^L(\\vec x\\wedge \\vec y),$$\nwhich proves (a) $\\Leftrightarrow$ (c). 
Note that for $\\vec x\\in{\\mathcal D}_\\ensuremath{\\mathcal{A}}$, $f^L(\\vec x)=\\sum\\limits_{A\\in \\ensuremath{\\mathcal{C}}(\\vec x)}\\lambda_Af(A)$ for a unique chain $\\ensuremath{\\mathcal{C}}(\\vec x)\\subset\\ensuremath{\\mathcal{A}}$ that is determined by $\\vec x$ only, and\nthe extension $f^{\\mathrm{convex}}(\\vec x):=\\inf\\limits_{\\{\\lambda_A\\}_{A\\in\\ensuremath{\\mathcal{A}}}\\in\\Lambda(\\vec x)}\\sum\\limits_{A\\in \\ensuremath{\\mathcal{A}}}\\lambda_Af(A)$ is convex on each convex subset of ${\\mathcal D}_\\ensuremath{\\mathcal{A}}$, where $\\Lambda(\\vec x):=\\{\\{\\lambda_A\\}_{A\\in\\ensuremath{\\mathcal{A}}}\\in\\ensuremath{\\mathbb{R}}^\\ensuremath{\\mathcal{A}}:\\sum\\limits_{A\\in\\ensuremath{\\mathcal{A}}}\\lambda_A\\vec 1_A=\\vec x,\\,\\lambda_A\\ge 0\\text{ whenever }A\\ne V\\}$. We only need to prove $f^L(\\vec x)=f^{\\mathrm{convex}}(\\vec x)$ if and only if $f$ is submodular. In fact, along a standard idea proposed in Lov\\'asz's original paper \\cite{Lovasz}, one could prove that for a (strictly) submodular function, the set $\\{A:\\lambda_A^*\\ne0\\}$ must be a chain, where $\\sum\\limits_{A\\in \\ensuremath{\\mathcal{A}}}\\lambda_A^*f(A)=f^{\\mathrm{convex}}(\\vec x)$ achieves the minimum over $\\Lambda(\\vec x)$, and one can then easily check that it agrees with $f^L$. The converse can be proved\nin a standard way: using the positive one-homogeneity of $f^L$ and the definition of the Lov\\'asz extension on chains, we get $f(A)+f(B)=f^L(\\vec 1_A)+f^L(\\vec 1_B)\\ge 2f^L(\\frac{1}{2}(\\vec 1_A+\\vec 1_B))=f^L(\\vec 1_A+\\vec 1_B)=f^L(\\vec 1_{A\\vee B}+\\vec 1_{A\\wedge B})=f^L(\\vec 1_{A\\vee B})+f^L(\\vec 1_{A\\wedge B})=f(A\\vee B)+f(A\\wedge B)$. Now, the proof is completed.\n\nFor the bisubmodular case, the above reasoning can be repeated with minor differences.\n\\end{proof}\n\\label{pro:how-to-be-k-way-Lovasz-submodular}\n\n\n\n\\begin{remark}\nWe give some examples in which both convexity and continuous\nsubmodularity are satisfied. 
In fact, it is easy to see that the $l^p$-norm $\\|\\vec x\\|_p$ is both convex and continuously\nsubmodular on $\\ensuremath{\\mathbb{R}}_+^n$, while the $l^1$-norm $\\|\\vec x\\|_1$ is convex and continuously\nsubmodular on the whole $\\ensuremath{\\mathbb{R}}^n$. Besides, an elementary proof shows that a one-homogeneous continuously\nsubmodular function on $\\ensuremath{\\mathbb{R}}_+^2$ must be convex.\n\\end{remark}\n\n\n\n\n\n\n\\section{Main results on optimization and eigenvalue problems}\n\\label{sec:main}\n\nWe uncover the links between combinatorial optimization and continuous programming as well as eigenvalue\nproblems in a general setting.\n\\subsection{Combinatorial and continuous optimization}\n\\label{sec:CC-transfer}\n\nAs mentioned in the introduction, the application of the Lov\\'asz extension to non-submodular optimization meets several difficulties, and in this section we begin to address them. First, we set up some useful results.\n\n\\begin{notification}\\label{notification:fL}\nIn this section, $\\ensuremath{\\mathbb{R}}_{\\ge0}:=[0,\\infty)$ denotes the set of non-negative real numbers. 
We use $f^L$ to denote the multi-way Lov\\'asz extension, which may be the original, the disjoint-pair, or the $k$-way Lov\\'asz extension.\n\\end{notification}\n\n\\begin{theorem}\\label{thm:tilde-H-f}\n Given set functions $f_1,\\cdots,f_n:\\ensuremath{\\mathcal{A}}\\to \\ensuremath{\\mathbb{R}}_{\\ge0}$, and a zero-homogeneous function $H:\\ensuremath{\\mathbb{R}}^n_{\\ge0}\\setminus\\{\\vec 0\\}\\to\\ensuremath{\\mathbb{R}}\\cup\\{+\\infty\\}$\n with $H(\\vec a+\\vec b)\\ge\\min\\{H(\\vec a),H(\\vec b)\\}$, $\n \\forall \\vec a,\\vec b\\in\\ensuremath{\\mathbb{R}}^n_{\\ge0}\\setminus\\{\\vec 0\\}$, we have\n \\begin{equation}\\label{eq:H-minimum}\n \\min\\limits_{A\\in \\ensuremath{\\mathcal{A}}'}H(f_1(A),\\cdots,f_n(A))=\\inf\\limits_{\\vec x\\in {\\mathcal D}'} H(f^L_1(\\vec x),\\cdots,f^L_n(\\vec x)),\\end{equation}\nwhere $\\ensuremath{\\mathcal{A}}'=\\{A\\in\\ensuremath{\\mathcal{A}}: (f_1(A),\\cdots,f_n(A))\\in\\mathrm{Dom}(H)\\}$, ${\\mathcal D}'=\\{\\vec x\\in{\\mathcal D}_\\ensuremath{\\mathcal{A}}\\cap\\ensuremath{\\mathbb{R}}_{\\ge0}^V:\\,(f^L_1(\\vec x),\\cdots,f^L_n(\\vec x))\\in\\mathrm{Dom}(H)\\}$ and $\\mathrm{Dom}(H)=\\{\\vec a\\in \\ensuremath{\\mathbb{R}}^n _{\\ge0}\\setminus\\{\\vec 0\\}: H(\\vec a)\\in\\ensuremath{\\mathbb{R}}\\}$.\n\\end{theorem}\n\n\\begin{proof}\nBy the property of $H$, $\\;\\forall t_i>0\\,,m\\in\\mathbb{N}^+,\\, a_{i,j}\\ge0,i=1,\\cdots,m,\\,j=1,\\cdots,n$,\n\\begin{align*}H\\left(\\sum_{i=1}^m t_i a_{i,1},\\cdots,\\sum_{i=1}^m t_i a_{i,n}\\right)&=H\\left(\\sum_{i=1}^m t_i \\vec a^i\\right)\\ge \\min_{i=1,\\cdots,m} H(t_i\\vec a^i) \\\\ &= \\min_{i=1,\\cdots,m} H(\\vec a^i)\n= \\min_{i=1,\\cdots,m} H(a_{i,1},\\cdots,a_{i,n}).\n\\end{align*}\nTherefore, in the case of the original Lov\\'asz extension, for any $\\vec x\\in{\\mathcal D}'$,\n\\begin{align}\n&H\\left(f^L_1(\\vec x),\\cdots,f^L_n(\\vec x)\\right) \\label{eq:psi-H}\n\\\\ =\\, & H\\left(\\int_{\\min \\vec x}^{\\max \\vec x}f_1(V^t(\\vec x))dt+ 
f_1(V(\\vec x))\\min \\vec x,\\cdots,\\int_{\\min \\vec x}^{\\max \\vec x}f_n(V^t(\\vec x))dt+ f_n(V(\\vec x))\\min \\vec x\\right)\\notag\n\\\\ =\\, & H\\left(\\sum_{i=1}^m (t_i-t_{i-1}) f_1(V^{t_{i-1}}(\\vec x)),\\cdots,\\sum_{i=1}^m (t_i-t_{i-1}) f_n(V^{t_{i-1}}(\\vec x)) \\right)\\notag\n\\\\ \\ge\\, & \\min_{i=1,\\cdots,m} H\\left(f_1(V^{t_{i-1}}(\\vec x)),\\cdots,f_n(V^{t_{i-1}}(\\vec x)) \\right)\\notag\n\\\\ \\ge\\, & \\min\\limits_{A\\in \\ensuremath{\\mathcal{A}}'}H(f_1(A),\\cdots,f_n(A))\\label{eq:min-H}\n\\\\ =\\, & \\min\\limits_{A\\in \\ensuremath{\\mathcal{A}}'}H(f_1^L(\\vec1_A),\\cdots,f_n^L(\\vec1_A))\\notag\n\\\\ \\ge\\, & \\inf\\limits_{\\vec x\\in {\\mathcal D}'} H(f^L_1(\\vec x),\\cdots,f^L_n(\\vec x)).\\label{eq:inf-H}\n\\end{align}\nCombining \\eqref{eq:psi-H} with \\eqref{eq:min-H}, we have $\\inf\\limits_{\\vec x\\in {\\mathcal D}'} H(f^L_1(\\vec x),\\cdots,f^L_n(\\vec x))\\ge \\min\\limits_{A\\in \\ensuremath{\\mathcal{A}}'}H(f_1(A),\\cdots,f_n(A))$, and then together with \\eqref{eq:min-H} and \\eqref{eq:inf-H}, we get the reverse inequality. Hence, \\eqref{eq:H-minimum} is proved for the original Lov\\'asz extension $f^L$. 
For multi-way settings, the proof is similar.\n\\end{proof}\n\n\\begin{remark}\\label{remark:thm-H} Duality: If one replaces $H(\\vec a+\\vec b)\\ge\\min\\{H(\\vec a),H(\\vec b)\\}$ by $H(\\vec a+\\vec b)\\le\\max\\{H(\\vec a),H(\\vec b)\\}$, then\n \\begin{equation}\\label{eq:H-maximum}\n \\max\\limits_{A\\in \\ensuremath{\\mathcal{A}}'}H(f_1(A),\\cdots,f_n(A))=\\sup\\limits_{\\vec x\\in {\\mathcal D}'} H(f^L_1(\\vec x),\\cdots,f^L_n(\\vec x)).\\end{equation}\n The proof of the identity \\eqref{eq:H-maximum} is similar to that of \\eqref{eq:H-minimum}, and thus we omit it.\n \\end{remark}\n \\begin{remark}\nA function $H:[0,+\\infty)^n\\to \\overline{\\ensuremath{\\mathbb{R}}}$ has the (MIN) property\nif\n$$H\\left(\\sum_{i=1}^m t_i \\vec a^i\\right)\\ge \\min_{i=1,\\cdots,m} H(\\vec a^i),\\;\\forall t_i>0\\,,m\\in\\mathbb{N}^+,\\,\\vec a^i\\in [0,+\\infty)^n.$$\nThe (MAX) property is formulated analogously.\n\nWe can verify that the (MIN) property is equivalent to the\nzero-homogeneity and $H(\\vec x+\\vec y)\\ge\\min\\{H(\\vec x),H(\\vec y)\\}$. A similar correspondence holds for the (MAX) property.\n\\end{remark}\n\n\\begin{remark}Theorem \\ref{thm:tilde-H-f} shows that if $H$ has the (MIN) or (MAX) property, then the corresponding combinatorial optimization is equivalent to a continuous optimization by means of \n the multi-way Lov\\'asz extension.\n Here are some examples:\n \n Given $c,c_i\\ge 0$ with $\\sum_i c_i>0$, let $H(f_1,\\cdots,f_n)=\\frac{c_1f_1+\\cdots+c_nf_n-c\\sqrt{f_1^2+\\cdots+f_n^2}}{f_1+\\cdots+f_n}$. 
Then $H$ satisfies the (MIN) property, and by Theorem \\ref{thm:tilde-H-f}, we have\n $$\\min\\limits_{A\\in \\ensuremath{\\mathcal{A}}'}\\frac{\\sum_i c_if_i(A)-c\\sqrt{\\sum_if_i^2(A)}}{\\sum_i f_i(A)}=\\inf\\limits_{\\psi\\in {\\mathcal D}'}\\frac{\\sum_i c_if_i^L(\\psi)-c\\sqrt{\\sum_i (f_i^L(\\psi))^2}}{\\sum_i f_i^L(\\psi)}.$$\n\nIf we take $H(f_1,\\cdots,f_n)=\\frac{(c_1f_1^p+\\cdots+c_nf_n^p)^{\\frac1p}}{f_1+\\cdots+f_n}$ for some $p> 1$, then $H$ satisfies the (MAX) property, and by Theorem \\ref{thm:tilde-H-f}, there holds $$\\max\\limits_{A\\in \\ensuremath{\\mathcal{A}}'}\\frac{(\\sum_i c_if_i(A)^p)^{\\frac1p}}{\\sum_i f_i(A)}=\\sup\\limits_{\\psi\\in {\\mathcal D}'}\\frac{(\\sum_i c_if_i^L(\\psi)^p)^{\\frac1p}}{\\sum_i f_i^L(\\psi)}.$$\n\n\n\\begin{proof}[Proof of Theorem \\ref{thm:tilde-fg-equal-PQ}]\nWithout loss of generality, we may assume that $P(f_1,\\cdots,f_n)$ is one-homogeneous and subadditive, while $Q(f_1,\\cdots,f_n)$ is one-homogeneous and superadditive on $(f_1,\\cdots,f_n)\\in\\ensuremath{\\mathbb{R}}_{\\ge0}^n$. \n\nThen $H(f_1,\\cdots,f_n)=\\frac{P(f_1,\\cdots,f_n)}{Q(f_1,\\cdots,f_n)}$ is zero-homogeneous on $[0,+\\infty)^n$, and \n$$H(\\vec f+\\vec g)=\\frac{P(\\vec f+\\vec g)}{Q(\\vec f+\\vec g)}\\le \\frac{P(\\vec f)+P(\\vec g)}{Q(\\vec f)+Q(\\vec g)}\\le \\max\\{\\frac{P(\\vec f)}{Q(\\vec f)},\\frac{P(\\vec g)}{Q(\\vec g)}\\}=\\max\\{H(\\vec f),H(\\vec g)\\}$$\nwhere $\\vec f=(f_1,\\cdots,f_n)$ and $\\vec g=(g_1,\\cdots,g_n)$.\n\n\n\nThen the proof is completed by Theorem \\ref{thm:tilde-H-f} \n (and Remark \\ref{remark:thm-H}).\n\\end{proof}\n\n\n\\begin{example}\\label{exam:maxcut1}\nGiven a finite graph $(V,E)$, for $\\{i,j\\}\\in E$, let $f_{\\{i,j\\}}(A)=1$ if $A\\cap\\{i,j\\}=\\{i\\}$ or $\\{j\\}$, and $f_{\\{i,j\\}}(A)=0$ otherwise. Let $g(A)=|A|$ for $A\\subset V$. It is clear that $\\frac{\\left(\\sum_{\\{i,j\\}\\in E}f_{\\{i,j\\}}^p\\right)^{\\frac1p}}{g}$ satisfies the condition of Theorem \\ref{thm:tilde-fg-equal-PQ}. 
Thus, we derive that\n$$\\max\\limits_{A\\ne\\varnothing}\\frac{|\\partial A|^{\\frac1p}}{|A|}=\\max\\limits_{A\\ne\\varnothing}\\frac{(\\sum_{\\{i,j\\}\\in E}f_{\\{i,j\\}}^p(A))^{\\frac1p}}{g(A)} =\\max\\limits_{x\\in\\ensuremath{\\mathbb{R}}_+^V}\\frac{(\\sum_{\\{i,j\\}\\in E}|x_i-x_j|^p)^{\\frac1p}}{\\sum_{i\\in V}x_i}=\\max\\limits_{x\\ne 0}\\frac{(\\sum_{\\{i,j\\}\\in E}|x_i-x_j|^p)^{\\frac1p}}{\\sum_{i\\in V}|x_i|}.$$\nSimilarly, we have\n$$\\max\\limits_{A\\ne\\varnothing}|\\partial\nA|^{\\frac1p}=\\max\\limits_{x\\in\\ensuremath{\\mathbb{R}}_+^V}\\frac{(\\sum_{\\{i,j\\}\\in\n E}|x_i-x_j|^p)^{\\frac1p}}{\\max\\limits_{i\\in V}x_i}=\\max\\limits_{x\\ne\n 0}\\frac{(\\sum_{\\{i,j\\}\\in E}|x_i-x_j|^p)^{\\frac1p}}{2\\|\\vec\n x\\|_\\infty},$$\nwhich gives a continuous representation of the Max-Cut problem. \nThe last equality holds due to the following reason: letting $F(\\vec x)=(\\sum_{\\{i,j\\}\\in\n E}|x_i-x_j|^p)^{\\frac1p}$, we can check that \n$\\max\\limits_{x\\in\\ensuremath{\\mathbb{R}}_+^V}\\frac{F(\\vec x)}{\\max_{i\\in V}x_i}$ achieves its maximum at some characteristic vector $\\vec 1_A$, and then $\\vec 1_A-\\vec 1_{V\\setminus A}$ is a maximizer of $\\frac{F(\\vec x)}{2\\|\\vec\n x\\|_\\infty}$ on $\\ensuremath{\\mathbb{R}}^V\\setminus\\vec0$. \n \n \n Similarly, $\\max\\limits_{x\\ne\n 0}\\frac{F(\\vec x)}{2\\|\\vec\n x\\|_\\infty}$ achieves its maximum at $\\vec 1_A-\\vec 1_{V\\setminus A}$ for some $A$, and then $\\vec1_A$ indicates a maximizer of $\\frac{F(\\vec x)}{\\max_{i\\in V}x_i}$ on the first orthant $\\ensuremath{\\mathbb{R}}^V_+$. We need the factor 2 because $F(\\vec 1_A-\\vec 1_{V\\setminus A})=2F(\\vec 1_A)$. 
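The continuous representation of the Max-Cut problem above can be tested numerically for $p=1$. The following sketch is only an illustration (the $4$-vertex graph is an arbitrary choice): sign vectors $\\vec 1_A-\\vec 1_{V\\setminus A}$ realize exactly the cut values of $F(\\vec x)\/(2\\|\\vec x\\|_\\infty)$, and random points never exceed the combinatorial maximum.

```python
import itertools, random

# A small graph chosen for illustration (any finite graph works).
V = range(4)
E = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]

def F(x, p=1):
    """F(x) = (sum over edges of |x_i - x_j|^p)^(1/p)."""
    return sum(abs(x[i] - x[j]) ** p for i, j in E) ** (1.0 / p)

def cut(A):
    return sum(1 for i, j in E if (i in A) != (j in A))

subsets = [set(s) for r in range(len(V) + 1) for s in itertools.combinations(V, r)]
maxcut = max(cut(A) for A in subsets)

# Sign vectors 1_A - 1_{V\A} attain exactly the cut values: F(x)/(2*||x||_inf) = cut(A).
for A in subsets:
    x = [1.0 if i in A else -1.0 for i in V]
    assert abs(F(x) / 2.0 - cut(A)) < 1e-9

# Random points never exceed the combinatorial maximum (p = 1 here).
random.seed(1)
for _ in range(500):
    x = [random.uniform(-1, 1) for _ in V]
    m = max(abs(t) for t in x)
    if m > 1e-9:
        assert F(x) / (2 * m) <= maxcut + 1e-9
```

The second loop reflects the fact that, for $p=1$, $F$ is convex, so its maximum over the box $\\|\\vec x\\|_\\infty\\le 1$ is attained at the vertices $\\{-1,1\\}^V$.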
\n \nIt should be noted that the two equivalent continuous reformulations are derived by the original and disjoint-pair Lov\\'asz extensions in the following two ways: $$\\max\\limits_{A\\ne\\varnothing}|\\partial\nA|^{\\frac1p}=\\max\\limits_{A\\in\\ensuremath{\\mathcal{P}}(V)\\setminus\\{\\varnothing\\}}\\frac{(\\sum_{\\{i,j\\}\\in E}f_{\\{i,j\\}}^p(A))^{\\frac1p}}{1} = \\max\\limits_{x\\in\\ensuremath{\\mathbb{R}}_+^V}\\frac{(\\sum_{\\{i,j\\}\\in\n E}|x_i-x_j|^p)^{\\frac1p}}{\\max_{i\\in V}x_i}$$\n where we use $f^L_{\\{i,j\\}}(\\vec x)=|x_i-x_j|$ and $1^L=\\max_{i\\in V}x_i$; \n $$\\max\\limits_{A\\ne\\varnothing}|\\partial\nA|^{\\frac1p}=\\max\\limits_{(A,B)\\in\\ensuremath{\\mathcal{P}}_2(V)\\setminus\\{ (\\varnothing,\\varnothing)\\}}\\frac{\\left(\\sum_{\\{i,j\\}\\in E}(f_{\\{i,j\\}}(A)+f_{\\{i,j\\}}(B))^p\\right)^{\\frac1p}}{2} =\\max\\limits_{x\\ne\n 0}\\frac{(\\sum_{\\{i,j\\}\\in E}|x_i-x_j|^p)^{\\frac1p}}{2\\|\\vec\n x\\|_\\infty}$$\n where we use the fact that the disjoint-pair Lov\\'asz extension of $(A,B)\\mapsto f_{\\{i,j\\}}(A)+f_{\\{i,j\\}}(B)$ is $|x_i-x_j|$ and the disjoint-pair Lov\\'asz extension of $(A,B)\\mapsto 1$ is $\\|\\vec x\\|_\\infty$.\n\\end{example}\n\n\n\\begin{example}\\label{exam:maxcut2}\nThere are many other equalities that can be obtained by Theorem \\ref{thm:tilde-fg-equal-PQ}, such as:\n$$\\min\\limits_{A\\ne\\varnothing,V}\\frac{|\\partial A|}{|A|^{\\frac1p}}=\\min\\limits_{x\\in\\ensuremath{\\mathbb{R}}^V:\\,\\min x=0}\\frac{\\sum_{\\{i,j\\}\\in E}|x_i-x_j|}{(\\sum_{i\\in V}x_i^p)^{\\frac1p}}$$\nand \n$$\\max\\limits_{(A,B)\\in\\ensuremath{\\mathcal{P}}_2(V)\\setminus\\{(\\varnothing,\\varnothing)\\}}\\frac{2|E(A,B)|^{\\frac1p}}{\\vol(A\\cup B)}= \\max\\limits_{x\\ne 0}\\frac{(\\sum_{\\{i,j\\}\\in E}(|x_i|+|x_j|-|x_i+x_j|)^p)^{\\frac1p}}{\\sum_{i\\in V}\\deg_i|x_i|}$$\nwhenever $p\\ge 1$. Here, $\\vol A =\\sum_{i\\in A} \\deg_i$.\\\\ \nThe last equality shows a variant of the dual Cheeger constant. 
A slight modification gives \n$$\\max\\limits_{A\\in\\ensuremath{\\mathcal{P}}(V)}2|\\partial A|^{\\frac1p}=\\max\\limits_{(A,B)\\in\\ensuremath{\\mathcal{P}}_2(V)}2|E(A,B)|^{\\frac1p}=\\max\\limits_{x\\ne 0}\\frac{\\left(\\sum_{\\{i,j\\}\\in E}(|x_i|+|x_j|-|x_i+x_j|)^p\\right)^{\\frac1p}}{\\|\\vec x\\|_\\infty}$$\nshowing a new continuous formulation of the Maxcut problem.\n\\end{example}\n\nTaking $n=2$ and $H(f_1,f_2)=\\frac{f_1}{f_2}$ in Theorem \\ref{thm:tilde-H-f}, then such an $H$ satisfies both (MIN) and (MAX) properties. So, we get\n$$\\min\\limits_{A\\in \\ensuremath{\\mathcal{A}}'}\\frac{f_1(A)}{f_2(A)}=\\inf\\limits_{\\psi\\in {\\mathcal D}'}\\frac{f_1^L(\\psi)}{f_2^L(\\psi)},\\;\\;\\;\\text{ and } \\;\\; \\max\\limits_{A\\in \\ensuremath{\\mathcal{A}}'}\\frac{f_1(A)}{f_2(A)}=\\sup\\limits_{\\psi\\in {\\mathcal D}'}\\frac{f_1^L(\\psi)}{f_2^L(\\psi)}.$$\nIn fact, we can get more:\n\\end{remark}\n\n\n\\begin{pro}\\label{pro:fraction-f\/g}\n Given two functions $f,g:\\ensuremath{\\mathcal{A}}\\to [0,+\\infty)$, let $\\tilde{f},\\tilde{g}:{\\mathcal D}_\\ensuremath{\\mathcal{A}}\\to \\ensuremath{\\mathbb{R}}$ satisfy $\\tilde{f}\\ge f^L$, $\\tilde{g}\\le g^L$, $\\tilde{f}(\\vec1_{A})=f(A)$ and $\\tilde{g}(\\vec1_{A})=g(A)$. 
Then\n$$\\min\\limits_{A\\in \\ensuremath{\\mathcal{A}}\\cap\\ensuremath{\\mathrm{supp}}(g)}\\frac{f(A)}{g(A)}=\\inf\\limits_{\\psi\\in {\\mathcal D}_\\ensuremath{\\mathcal{A}}\\cap\\ensuremath{\\mathrm{supp}}(\\tilde{g})}\\frac{\\widetilde{f}(\\psi)}{\\widetilde{g}(\\psi)}.$$\n\nIf we replace the condition $\\tilde{f}\\ge f^L$ and $\\tilde{g}\\le g^L$ by $\\tilde{f}\\le f^L$ and $\\tilde{g}\\ge g^L$, then $$\\max\\limits_{A\\in \\ensuremath{\\mathcal{A}}\\cap\\ensuremath{\\mathrm{supp}}(g)}\\frac{f(A)}{g(A)}=\\sup\\limits_{\\psi\\in {\\mathcal D}_\\ensuremath{\\mathcal{A}}\\cap\\ensuremath{\\mathrm{supp}}(\\tilde{g})}\\frac{\\widetilde{f}(\\psi)}{\\widetilde{g}(\\psi)}.$$\n\nFor any $\\alpha\\ne0$, the functions $\\tilde{f}=((f^\\alpha)^L)^{\\frac1\\alpha}$ and\n$\\tilde{g}=((g^\\alpha)^L)^{\\frac1\\alpha}$ satisfy the above two\nidentities. \n\\end{pro}\n\n\\begin{proof}\nIt is obvious that\n $$\\inf\\limits_{\\psi\\in {\\mathcal D}_\\ensuremath{\\mathcal{A}}\\cap\\ensuremath{\\mathrm{supp}}(\\tilde{g})}\\frac{\\widetilde{f}(\\psi)}{\\widetilde{g}(\\psi)}\\le\\min\\limits_{A\\in \\ensuremath{\\mathcal{A}}\\cap\\ensuremath{\\mathrm{supp}}(g)} \\frac{\\widetilde{f}(\\vec1_{A})}{\\widetilde{g}(\\vec1_{A})} = \\min\\limits_{A\\in \\ensuremath{\\mathcal{A}}\\cap\\ensuremath{\\mathrm{supp}}(g)}\\frac{f(A)}{g(A)}.$$\n On the other hand, for any $\\psi\\in{\\mathcal D}_\\ensuremath{\\mathcal{A}}\\cap\\ensuremath{\\mathrm{supp}}(\\tilde{g})$, $g^L(\\psi)\\ge \\tilde{g}(\\psi)>0$. Hence, there exists $t\\in (\\min \\widetilde{\\beta}\\psi-1,\\max \\widetilde{\\beta}\\psi+1)$ satisfying $g(V^t(\\psi))>0$. Here $\\widetilde{\\beta}\\psi=\\psi$ (resp., $|\\psi|$), if $f^L$ represents either the original or the $k$-way Lov\\'asz extension of $f$ (resp., either the disjoint-pair or the $k$-way disjoint-pair Lov\\'asz extension). So, the set $W(\\psi):=\\{t\\in\\ensuremath{\\mathbb{R}}: g(V^t(\\psi))>0\\}$ is nonempty. 
Since $\\{V^t(\\psi):t\\in W(\\psi)\\}$ is finite, there exists $t_0\\in W(\\psi)$ such that $\\frac{f(V^{t_0} (\\psi))}{g(V^{t_0} (\\psi))}=\\min\\limits_{t\\in W(\\psi)}\\frac{f(V^{t} (\\psi))}{g(V^{t} (\\psi))}$. Accordingly, $f(V^{t} (\\psi))\\ge \\frac{f(V^{t_0} (\\psi))}{g(V^{t_0} (\\psi))}g(V^{t} (\\psi))$ for any $t\\in W(\\psi)$, and thus\n $$f(V^{t} (\\psi))\\ge Cg(V^{t} (\\psi)),\\;\\;\\;\\text{ with }\\;\\;C=\\min\\limits_{t\\in W(\\psi)}\\frac{f(V^{t} (\\psi))}{g(V^{t} (\\psi))}\\ge0,$$\n holds for any $t\\in\\ensuremath{\\mathbb{R}}$ (because $g(V^{t} (\\psi))=0$ for $t\\in\\ensuremath{\\mathbb{R}}\\setminus W(\\psi)$, so that the above inequality automatically holds there).\n Consequently,\n \\begin{align*}&\\tilde{f}(\\psi)\\ge f^L(\\psi)\n\\\\ =\\,& \\int_{\\min \\widetilde{\\beta}\\psi}^{\\max \\widetilde{\\beta}\\psi}f(V^t(\\psi))dt+ f(V(\\psi))\\min \\widetilde{\\beta}\\psi\n\\\\ \\ge\\,& C\\left(\\int_{\\min \\widetilde{\\beta}\\psi}^{\\max \\widetilde{\\beta}\\psi}g(V^t(\\psi))dt+ g(V(\\psi))\\min \\widetilde{\\beta}\\psi\\right)\n\\\\ =\\,&C g^L(\\psi)\\ge C\\tilde{g}(\\psi).\n \\end{align*}\n It follows that\n $$\\frac{\\widetilde{f}(\\psi)}{\\widetilde{g}(\\psi)}\\ge C\\ge \\min\\limits_{A\\in \\ensuremath{\\mathcal{A}}\\cap\\ensuremath{\\mathrm{supp}}(g)}\\frac{f(A)}{g(A)}$$\n and thus the proof is completed. The dual case is similar.\n\nFor $\\alpha>0$, we can simply suppose $\\ensuremath{\\mathrm{supp}}(g)=\\ensuremath{\\mathcal{A}}$. 
Then \n \\begin{align*}\n \\min\\limits_{A\\in \\ensuremath{\\mathcal{A}}}\\frac{f(A)}{g(A)}=\\min\\limits_{A\\in \\ensuremath{\\mathcal{A}}}\\frac{ (f^\\alpha)^{\\frac1\\alpha}(A)}{(g^\\alpha)^{\\frac1\\alpha}(A)}=\\left(\\min\\limits_{A\\in \\ensuremath{\\mathcal{A}}}\\frac{f^\\alpha(A)}{ g^\\alpha(A)}\\right)^{\\frac1\\alpha}\n = \\left(\\inf\\limits_{\\psi\\in {\\mathcal D}_\\ensuremath{\\mathcal{A}}}\\frac{( f^\\alpha)^L(\\psi)}{(g^\\alpha)^L(\\psi)}\\right)^{\\frac1\\alpha}=\\inf\\limits_{\\psi\\in {\\mathcal D}_\\ensuremath{\\mathcal{A}}}\\frac{(( f^\\alpha)^L)^{\\frac1\\alpha}(\\psi)}{(( g^\\alpha)^L)^{\\frac1\\alpha}(\\psi)}.\n \\end{align*}\n \nFor $\\alpha<0$, we may suppose without loss of generality that $g(A)>0$ and $f(A)>0$ for any $A\\in\\ensuremath{\\mathcal{A}}$. Then, in this case, \n \\begin{align*}\n \\min\\limits_{A\\in \\ensuremath{\\mathcal{A}}}\\frac{f(A)}{g(A)}=\\min\\limits_{A\\in \\ensuremath{\\mathcal{A}}}\\frac{ (f^\\alpha)^{\\frac1\\alpha}(A)}{(g^\\alpha)^{\\frac1\\alpha}(A)}=\\left(\\max\\limits_{A\\in \\ensuremath{\\mathcal{A}}}\\frac{f^\\alpha(A)}{ g^\\alpha(A)}\\right)^{\\frac1\\alpha}\n= \\left(\\sup\\limits_{\\psi\\in {\\mathcal D}_\\ensuremath{\\mathcal{A}}}\\frac{( f^\\alpha)^L(\\psi)}{(g^\\alpha)^L(\\psi)}\\right)^{\\frac1\\alpha}=\\inf\\limits_{\\psi\\in {\\mathcal D}_\\ensuremath{\\mathcal{A}}}\\frac{(( f^\\alpha)^L)^{\\frac1\\alpha}(\\psi)}{(( g^\\alpha)^L)^{\\frac1\\alpha}(\\psi)}.\n \\end{align*}\n \n This completes the proof.\n\\end{proof}\n\nIt is worth noting that in Proposition \\ref{pro:fraction-f\/g}, $\\ensuremath{\\mathcal{A}}$ can be a family of some set-tuples, and $f^L$ is the multi-way Lov\\'asz extension of the corresponding $f$. We point out that we can replace the Lov\\'asz extension $f^L$ by any other extension $f^E$ with the property that $f^E\/g^E$ achieves its minimum and maximum at some $0$-$1$ vector $\\vec 1_A$ for some $A\\in\\ensuremath{\\mathcal{A}}$. 
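The ratio identity in Proposition \\ref{pro:fraction-f\/g} admits a quick numerical sanity check with $\\tilde{f}=f^L$ and $\\tilde{g}=g^L$. In the sketch below, the randomly generated nonnegative set functions and the three-element ground set are assumptions made purely for illustration.

```python
import itertools, random

random.seed(2)
n = 3
subsets = [frozenset(s) for r in range(n + 1) for s in itertools.combinations(range(n), r)]
# Random nonnegative set functions with f(empty) = g(empty) = 0 and g > 0 elsewhere.
f = {A: (random.uniform(0.5, 2.0) if A else 0.0) for A in subsets}
g = {A: (random.uniform(0.5, 2.0) if A else 0.0) for A in subsets}

def lovasz(h, x):
    """Original Lovasz extension via h^L(x) = integral of h(V^t(x)) dt + h(V) * min(x)."""
    vals = sorted(set(x))
    total = h[frozenset(range(n))] * vals[0]
    for a, b in zip(vals, vals[1:]):
        total += (b - a) * h[frozenset(i for i in range(n) if x[i] > a)]
    return total

disc = min(f[A] / g[A] for A in subsets if A)          # discrete minimum over nonempty A
# Indicator vectors attain the discrete minimum ...
cont = min(lovasz(f, [1.0 if i in A else 0.0 for i in range(n)]) /
           lovasz(g, [1.0 if i in A else 0.0 for i in range(n)]) for A in subsets if A)
assert abs(disc - cont) < 1e-12
# ... and random nonnegative points never go below it (mediant inequality:
# f^L and g^L are the same nonnegative combination of f- resp. g-values on a chain).
for _ in range(300):
    x = [random.uniform(0.0, 1.0) for _ in range(n)]
    gl = lovasz(g, x)
    if gl > 1e-9:
        assert lovasz(f, x) / gl >= disc - 1e-9
```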
\n Similarly, we have:\n\n\\begin{pro}\\label{pro:maxconvex}\nLet $f,g:\\ensuremath{\\mathcal{A}}\\to [0,+\\infty)$ be two set functions and let $f:=f_1-f_2$ and $g:=g_1-g_2$ be decompositions into differences of submodular functions.\n\nLet $\\widetilde{f}_2,\\widetilde{g}_1$ be restrictions of positively one-homogeneous convex functions onto ${\\mathcal D}_\\ensuremath{\\mathcal{A}}$, with $f_2(A)= \\widetilde{f}_2(\\vec1_{A})$ and $g_1(A)= \\widetilde{g}_1(\\vec1_{A})$.\nDefine $\\widetilde{f}=f_1^L-\\widetilde{f}_2$ and $\\widetilde{g}=\\widetilde{g}_1-g_2^L$. Then,\n$$\\min\\limits_{A\\in \\ensuremath{\\mathcal{A}}\\cap\\ensuremath{\\mathrm{supp}}(g)}\\frac{f(A)}{g(A)}=\\min\\limits_{\\vec x\\in {\\mathcal D}_\\ensuremath{\\mathcal{A}}\\cap\\ensuremath{\\mathrm{supp}}(\\widetilde{g})}\\frac{\\widetilde{f}(\\vec x)}{\\widetilde{g}(\\vec x)}.$$\n\\end{pro}\n\n\\begin{remark}\nHirai et al.\\ introduced the generalized Lov\\'asz extension of $f:\\mathcal{L}\\to\\overline{\\ensuremath{\\mathbb{R}}}$ on a graded poset $\\mathcal{L}$ (see \\cite{Hirai18,HH17-arxiv}). Since $f^L(\\vec x)=\\sum_i\\lambda_i f(\\vec p_i)$ for $\\vec x=\\sum_i\\lambda_i \\vec p_i$ lying in the orthoscheme complex $K(\\mathcal{L})$, the same results as stated in Theorem \\ref{thm:tilde-H-f} and Proposition \\ref{pro:fraction-f\/g} hold for such a generalized Lov\\'asz extension $f^L$. Propositions \\ref{pro:fraction-f\/g} and \\ref{pro:maxconvex} are also generalizations of Theorem 3.1 in \\cite{HS11} and Theorem 1 (b) in \\cite{BRSH13}.\n\\end{remark}\n\nAlthough the continuous representations translate the original problems into equally difficult\noptimization problems,\nwe should point out that the continuous reformulations ensure that many fast algorithms in continuous programming can be applied directly to certain combinatorial optimization problems. 
For example, the fractional form of the equivalent continuous optimizations shown in Theorem \\ref{thm:tilde-fg-equal} as well as Propositions \\ref{pro:fraction-f\/g} and \\ref{pro:maxconvex} implies that we can directly adopt the Dinkelbach iteration in Fractional Programming \\cite{SI83} to solve them. In addition, since the equivalent continuous formulation is Lipschitz, we can also adopt the \nstochastic subgradient method \\cite{Davis19-FoCM} to solve certain discrete optimization problems directly.\n\nTables~\\ref{tab:L-one} and \\ref{tab:L-two} and Propositions \\ref{pro:discrete:one-to-two}, \\ref{pro:discrete:one-to-k} and \\ref{pro:discrete:two-to-k} present a general correspondence between set or set-pair functions and their Lov\\'asz extensions. We shall make use of several of those in Section \\ref{sec:examples-Applications}. Note that the first four lines in Table \\ref{tab:L-one} for the original Lov\\'asz extension, and the first five lines in Table \\ref{tab:L-two} for the disjoint-pair Lov\\'asz extension are known (see \\cite{HS11,CSZ18}).\n\n\\begin{table}\n\\centering\n\\caption{\\small Original Lov\\'asz extension of some objective functions.}\n\\begin{tabular}{|l|l|}\n \\hline\n Set function $f(A)=$ & Lov\\'asz extension $f^L(\\vec x)=$ \\\\\n \\hline\n $\\#E(A,V\\setminus A)$ & $\\sum\\limits_{\\{i,j\\}\\in E}|x_i-x_j|$\\\\\n \\hline\n $C$&$C\\max_i x_i$\\\\\n \\hline\n $\\vol(A)$ & $\\sum_i \\deg_i x_i$\\\\\n \\hline\n $\\min\\{\\vol(A),\\vol(V\\setminus A)\\}$& $\\min\\limits_{t\\in \\mathbb{R}}\\|\\vec x-t \\vec 1\\|_1$\\\\\n \\hline\n $\\#A\\cdot\\#(V\\setminus A)$ & $\\sum\\limits_{i,j\\in V}|x_i-x_j|$ \\\\\n \\hline\n $\\#V(E(A,V\\setminus A))$ & $\\sum\\limits_{i=1}^n(\\max\\limits_{j\\in N(i)}x_j-\\min\\limits_{j\\in N(i)}x_j)$\n \\\\\n \\hline\n \\end{tabular}\n \\label{tab:L-one}\n\\end{table}\n\n\\begin{table}\n\\centering\n\\caption{\\small Disjoint-pair Lov\\'asz extension of several objective 
functions.}\n\\begin{tabular}{|l|l|}\n \\hline\n Objective function $f(A,B)=$ & Disjoint-pair Lov\\'asz extension $f^L(\\vec x)=$ \\\\\n \\hline\n $\\#E(A,V\\setminus A)+\\#E(B,V\\setminus B)$& $\\sum\\limits_{\\{i,j\\}\\in E}|x_i-x_j|$\\\\\n \\hline\n $\\#E(A,B)$&$\\frac12\\left(\\sum\\limits_{i\\in V}\\deg_i|x_i|- \\sum\\limits_{\\{i,j\\}\\in E}|x_i+x_j|\\right)$\\\\\n \\hline\n $C$&$C\\|\\vec x\\|_\\infty$\\\\\n \\hline\n $\\vol(A)+\\vol(B)$&$\\sum\\limits_{i\\in V}\\deg_i|x_i|$\\\\\n \\hline\n $\\min\\{\\vol(A),\\vol(V\\setminus A)\\}+\\min\\{\\vol(B),\\vol(V\\setminus B)\\}$&$\\min\\limits_{\\alpha\\in \\mathbb{R}}\\|(x_1,\\cdots,x_n)-\\alpha \\vec 1\\|$\\\\\n \\hline\n $ \\# E(A\\cup B,A\\cup B)$ & $\\sum_{i\\sim j}\\min\\{|x_i|,|x_j|\\}$ \\\\\n \\hline\n $\\# (A\\cup B)\\cdot\\# E(A\\cup B,A\\cup B)$ & $\\sum_{k\\in V,i\\sim j}\\min\\{|x_k|,|x_i|,|x_j|\\}$\\\\ \\hline\n$\\#(A\\cup B)\\cdot\\#(V\\setminus (A\\cup B))$ & $\\sum_{i>j}||x_i|-|x_j||$ \\\\ \\hline\n \\end{tabular}\n \\label{tab:L-two}\n\\end{table}\n\n\n\\begin{pro}\\label{pro:discrete:one-to-two}\nSuppose $f,g:\\ensuremath{\\mathcal{P}}(V)\\to [0,+\\infty)$ are two set functions with $g(A)>0$ for any $A\\in \\ensuremath{\\mathcal{P}}(V)\\setminus\\{\\varnothing\\}$.\nThen $$\\min\\limits_{A\\in \\ensuremath{\\mathcal{P}}(V)\\setminus\\{\\varnothing\\}}\\frac{f(A)}{g(A)}=\\min\\limits_{(A,B)\\in \\ensuremath{\\mathcal{P}}(V)^2\\setminus\\{(\\varnothing,\\varnothing)\\}}\\frac{f(A)+f(B)}{g(A)+g(B)}=\\min\\limits_{(A,B)\\in \\ensuremath{\\mathcal{P}}_2(V)\\setminus\\{(\\varnothing,\\varnothing)\\}}\\frac{f(A)+f(B)}{g(A)+g(B)},$$\nwhere the right identity needs additional assumptions like\n$f(\\varnothing)=g(\\varnothing)=0$\\footnote{This setting is natural, as the\n Lov\\'asz extension doesn't use the datum on $\\varnothing$.} or that $f$\n and $g$ are symmetric.\\footnote{A function $f:\\ensuremath{\\mathcal{P}}(V)\\to\\ensuremath{\\mathbb{R}}$ is {\\sl symmetric} if $f(A)=f(V\\setminus A)$, 
$\\forall A\\subset V$.}\nReplacing $f(B)$ and $g(B)$ by $f(V\\setminus B)$ and $g(V\\setminus B)$, all the above identities hold without any additional assumption. Clearly, replacing `min' by `max', all statements still hold.\n\\end{pro}\n\n\\begin{pro}\\label{pro:discrete:one-to-k}\nSuppose $f,g:\\ensuremath{\\mathcal{P}}(V)\\to [0,+\\infty)$ are two set functions with $g(A)>0$ for any $A\\in \\ensuremath{\\mathcal{P}}(V)\\setminus\\{\\varnothing\\}$.\nThen $$\\min\\limits_{A\\in \\ensuremath{\\mathcal{P}}(V)\\setminus\\{\\varnothing\\}}\\frac{f(A)}{g(A)}=\\min\\limits_{(A_1,\\cdots,A_k)\\in \\ensuremath{\\mathcal{P}}(V)^k}\\frac{\\sum_{i=1}^kf(A_i)}{\\sum_{i=1}^kg(A_i)}=\\min\\limits_{(A_1,\\cdots,A_k)\\in \\ensuremath{\\mathcal{P}}(V)^k}\\sqrt[k]{\\frac{\\prod_{i=1}^kf(A_i)}{\\prod_{i=1}^kg(A_i)}}=\\min\\limits_{(A_1,\\cdots,A_k)\\in \\ensuremath{\\mathcal{P}}_k(V)}\\frac{\\sum_{i=1}^kf(A_i)}{\\sum_{i=1}^kg(A_i)},$$\nwhere the last identity needs additional assumptions like $f(\\varnothing)=g(\\varnothing)=0$.\n\\end{pro}\n\n\\begin{pro}\\label{pro:discrete:two-to-k}\nSuppose $f,g:\\ensuremath{\\mathcal{P}}_2(V)\\to [0,+\\infty)$ are two set functions with $g(A,B)>0$ for any $(A,B)\\in \\ensuremath{\\mathcal{P}}_2(V)\\setminus\\{(\\varnothing,\\varnothing)\\}$.\nThen $$\\min\\limits_{(A,B)\\in \\ensuremath{\\mathcal{P}}_2(V)\\setminus\\{(\\varnothing,\\varnothing)\\}}\\frac{f(A,B)}{g(A,B)}=\\min\\limits_{(A_1,B_1,\\cdots,A_k,B_k)\\in \\ensuremath{\\mathcal{P}}_2(V)^k}\\frac{\\sum_{i=1}^kf(A_i,B_i)}{\\sum_{i=1}^kg(A_i,B_i)}=\\min\\limits_{(A_1,B_1,\\cdots,A_k,B_k)\\in \\ensuremath{\\mathcal{P}}_{2k}(V)}\\frac{\\sum_{i=1}^kf(A_i,B_i)}{\\sum_{i=1}^kg(A_i,B_i)},$$\nwhere the last identity needs additional assumptions like $f(\\varnothing,\\varnothing)=g(\\varnothing,\\varnothing)=0$\\footnote{This setting is natural, as the disjoint-pair Lov\\'asz extension doesn't use the information on $(\\varnothing,\\varnothing)$.}.\n\\end{pro}\n\nTogether with Propositions \\ref{pro:setpair-original} and \\ref{pro:discrete:one-to-two}, one may 
directly transfer the data from Table \\ref{tab:L-one} to Table \\ref{tab:L-two}. Similarly, by employing Propositions \\ref{pro:separable-summation}, \\ref{pro:discrete:one-to-k} and \\ref{pro:discrete:two-to-k}, the $k$-way Lov\\'asz extension of some special functions can be transformed to the original and the disjoint-pair versions.\n\n\\begin{pro}\\label{pro:Lovasz-f-pre}\nFor any $a0$ whenever $(A,B)\\ne(\\varnothing,\\varnothing)$, and we shall apply the following argument about polar cones to this case. \n\n\\vspace{0.16cm}\n\n\\textbf{Argument}. Let $C$ and $\\Omega$ be two convex cones in $\\ensuremath{\\mathbb{R}}^n$ such that $\\Omega\\cap ((-C)\\cup C)=\\{\\vec0\\}$. Then $\\Omega^*\\cap C^*\\ne\\{\\vec 0\\}$ and $\\Omega^*\\cap (-C^*)\\ne\\{\\vec 0\\}$, where $C^*$ indicates the polar cone of $C$. \n\nProof: Indeed, $\\Omega^*\\cap C^*=(\\Omega\\cup C)^*\\supset (\\Omega+ C)^*$, where $\\Omega+ C$ is the Minkowski summation of $C$ and $\\Omega$. If $\\Omega+ C=\\ensuremath{\\mathbb{R}}^n$, then for any $-\\vec c\\in (-C)\\setminus\\{\\vec0\\}$, there exist $\\vec a\\in\\Omega\\setminus\\{\\vec0\\}$ and $\\vec c'\\in C\\setminus\\{\\vec0\\}$ such that $\\vec a+\\vec c'=-\\vec c$. This implies $\\vec a=-\\vec c'-\\vec c\\in -C\\setminus\\{\\vec0\\}$, which contradicts the condition that $\\Omega\\cap(-C)=\\{\\vec0\\}$. Therefore, the convex cone $\\Omega+ C$ is not the whole space $\\ensuremath{\\mathbb{R}}^n$, which implies that $(\\Omega+ C)^*\\ne\\{\\vec0\\}$. Consequently, $\\Omega^*\\cap C^*\\ne\\{\\vec 0\\}$ and similarly, $\\Omega^*\\cap (-C^*)\\ne\\{\\vec 0\\}$. The proof is completed.\n\n\\vspace{0.16cm}\n\nSuppose on the contrary, that $\\nabla f^L(\\vec x)\\cap ((-\\ensuremath{\\mathbb{R}}^n_{\\mathrm{sign}(x)})\\cup \\ensuremath{\\mathbb{R}}^n_{\\mathrm{sign}(x)}) =\\varnothing$ for some $\\vec x\\in\\{-1,1\\}^n$. 
\nFix such an $\\vec x$; then $\\mathrm{cone}(\\nabla f^L(\\vec x))\\cap ((-\\ensuremath{\\mathbb{R}}^n_{\\mathrm{sign}(x)})\\cup \\ensuremath{\\mathbb{R}}^n_{\\mathrm{sign}(x)}) =\\{\\vec0\\}$, and by the above argument, we have $\\mathrm{cone}^*(\\nabla f^L(\\vec x))\\cap\\ensuremath{\\mathbb{R}}^n_{\\mathrm{sign}(x)}=\\mathrm{cone}^*(\\nabla f^L(\\vec x))\\cap(-\\ensuremath{\\mathbb{R}}^n_{\\mathrm{sign}(x)})^* \\ne\\{\\vec0\\}$. \n\nHowever, since $f$ is positive-definite, it is known \nthat $\\mathrm{cone}^*(\\nabla f^L(\\vec x))\\subset T_x(\\{\\vec y:f^L(\\vec y)\\le f^L(\\vec x)\\})$, meaning that $ T_x(\\{\\vec y:f^L(\\vec y)\\le f^L(\\vec x)\\})\\cap \\ensuremath{\\mathbb{R}}^n_{\\mathrm{sign}(x)}\\ne\\{\\vec0\\}$, where $T_x$ represents the tangent cone at $\\vec x$. Now, suppose $\\vec x=\\vec 1_{A_n}-\\vec 1_{B_n}$ with $A_n\\sqcup B_n=V$. Every permutation $\\sigma:\\{1,\\cdots,n\\}\\to \\{1,\\cdots,n\\}$ determines a sequence $\\{(A_i,B_i):i=1,\\cdots,n\\}\\subset \\ensuremath{\\mathcal{P}}_2(V)\\setminus\\{(\\varnothing,\\varnothing)\\}$ by the iterative construction: $A_1\\cup B_1=\\{\\sigma(1)\\}$ and $A_{i+1}\\cup B_{i+1}=A_i\\cup B_i\\cup \\{\\sigma(i+1)\\}$, $i=1,\\cdots,n-1$. \n\nSince $f(A_i,B_i)>0$, the positive one-homogeneity of $f^L$ gives $f^L(\\vec x^i)=1$, where $\\vec x^i:=(\\vec1_{A_i}-\\vec1_{B_i})\/f(A_i,B_i)$, $i=1,\\cdots,n$. Also, $T_{x^n}\\{\\vec y:f^L(\\vec y)\\le 1\\}=T_x(\\{\\vec y:f^L(\\vec y)\\le f^L(\\vec x)\\})$.\nWithout loss of generality, we may assume that $\\vec x=\\vec x^n$.\n\nThe definition of $f^L$ yields that $\\mathrm{conv}(\\vec0,\\vec x^1,\\cdots,\\vec x^n)\\subset \\{\\vec y:f^L(\\vec y)\\le 1\\}$. We write $\\triangle_\\sigma:=\\mathrm{conv}(\\vec0,\\vec x^1,\\cdots,\\vec x^n)$, since the construction of $\\vec x^1,\\cdots,\\vec x^n$ depends on the permutation $\\sigma$.
\nFor any $\\vec y=\\sum_{i=1}^nt_i\\vec x^i\\in\\mathrm{conv}(\\vec0,\\vec x^1,\\cdots,\\vec x^n)\\setminus \\{\\vec x\\}$, \n$(\\vec y-\\vec x)_{\\sigma(n)}x_{\\sigma(n)}=-(1-t_n)x_{\\sigma(n)}^2<0$, and thus $\\vec y-\\vec x\\not\\in \\ensuremath{\\mathbb{R}}^n_{\\mathrm{sign}(x)}$.\nHence, $T_x(\\triangle_\\sigma)\\cap \\ensuremath{\\mathbb{R}}^n_{\\mathrm{sign}(x)}=\\{\\vec0\\}$. It follows from the fact $T_x(\\{\\vec y:f^L(\\vec y)\\le 1\\})=\\bigcup_{\\sigma}T_x(\\triangle_\\sigma)$ that $T_x(\\{\\vec y:f^L(\\vec y)\\le 1\\})\\cap \\ensuremath{\\mathbb{R}}^n_{\\mathrm{sign}(x)}=\\{\\vec0\\}$. This is a contradiction. \n\n\\vspace{0.16cm}\n\nNow we turn to the general case that $f\\ge 0$. Take a sequence $\\{f_n\\}_{n\\ge 1}$ of positive-definite functions on $\\ensuremath{\\mathcal{P}}_2(V)$ such that $f_n\\to f$ as $n$ tends to $+\\infty$. Then it can be verified that for any $\\vec v_n\\in \\nabla f_n^L(\\vec x)$, all limit points of $\\{\\vec v_n\\}_{n\\ge 1}$ belong to $\\nabla f^L(\\vec x)$. Now, there exist $\\vec u_n\\in \\nabla \\|\\vec x\\|_{\\infty}$ and $\\lambda_n=f^L_n(\\vec x)\/\\|\\vec x\\|_{\\infty}>0$ such that $\\lambda_n\\vec u_n\\in \\nabla f^L_n(\\vec x)$. Then for any limit point $\\vec u$ of $\\{\\vec u_n\\}_{n\\ge 1}$, $\\vec u\\in \\nabla \\|\\vec x\\|_{\\infty}$ and $\\lambda\\vec u\\in \\nabla f^L(\\vec x)$ where $\\lambda=\\lim\\limits_{n\\to+\\infty}\\lambda_n$. Therefore, $(\\lambda,\\vec x)$ is an eigenpair of $(f^L,\\|\\cdot\\|_\\infty)$.\n\nThe proof is completed. 
\n\\end{proof}\n\nBy Propositions \\ref{pro:Lovasz-eigen} and \\ref{pro:set-pair-infty-norm}, we have \n\\begin{cor}\nIf $2f(A,B)=f(A,V\\setminus A)+f(V\\setminus B,B)$ for any $(A,B)\\in \\ensuremath{\\mathcal{P}}_2(V)$, then the set of eigenvalues of $(f^L,\\|\\cdot\\|_\\infty)$ coincides with $\\{f(A,V\\setminus A):A\\subset V\\}$, and every vector in $\\{-1,1\\}^n$ is an eigenvector.\n\\end{cor}\n\n\\begin{remark}\nThe proof of Proposition \\ref{pro:set-pair-infty-norm} heavily depends on\nthe property that $N_v(X)=-T_v(X)$ for any vertex $v$ of the hypercube $X:=\\{\\vec x:\\|\\vec x\\|_\\infty\\le1\\}$. Characterizing the class of polytopes satisfying $N_v=-T_v$ for any vertex $v$ remains an open problem, where $N_v$ is the normal cone at $v$ and $T_v$ is the tangent cone at $v$. \n\\end{remark}\n\nMotivated by Propositions \\ref{pro:Lovasz-eigen} and \\ref{pro:set-pair-infty-norm}, we suggest a combinatorial eigenvalue problem for $(f,g)$ as follows:\n\nGiven $A\\subset V$ and a permutation $\\sigma:\\{1,\\cdots,n\\}\\to \\{1,\\cdots,n\\}$, there exists a unique sequence $\\{(A_i^\\sigma,B_i^\\sigma):i=1,\\cdots,n\\}\\subset \\ensuremath{\\mathcal{P}}_2(V)\\setminus\\{(\\varnothing,\\varnothing)\\}$ satisfying $A_1^\\sigma\\subset \\cdots \\subset A_n^\\sigma=A$, $B_1^\\sigma\\subset \\cdots \\subset B_n^\\sigma=V\\setminus A$, $A_1^\\sigma\\cup B_1^\\sigma=\\{\\sigma(1)\\}$ and $A_{i+1}^\\sigma\\cup B_{i+1}^\\sigma=A_i^\\sigma\\cup B_i^\\sigma\\cup \\{\\sigma(i+1)\\}$, $i=1,\\cdots,n-1$.\nLet $\\vec u^{A,\\sigma}\\in\\ensuremath{\\mathbb{R}}^n$ be defined by\n$$u^{A,\\sigma}_i=\\begin{cases} f(A_{\\sigma^{-1}(i)}^\\sigma,B_{\\sigma^{-1}(i)}^\\sigma)-f(A_{\\sigma^{-1}(i)-1}^\\sigma,B_{\\sigma^{-1}(i)-1}^\\sigma)\n,&\\text{ if }i\\in A,\\\\\nf(A_{\\sigma^{-1}(i)-1}^\\sigma,B_{\\sigma^{-1}(i)-1}^\\sigma)-f(A_{\\sigma^{-1}(i)}^\\sigma,B_{\\sigma^{-1}(i)}^\\sigma)\n,&\\text{ if }i\\not\\in A.\\end{cases}\n$$\nDenote by $S(f,A)=\\{\\vec u^{A,\\sigma}:\\sigma\\in 
S_n\\}$ and\n$$\\nabla f(A,B):=\\mathrm{conv}\\left(\\bigcup\\limits_{\\tilde{A}:\\,A\\subset \\tilde{A}\\subset V\\setminus B}S(f,\\tilde{A})\\right),\\;\\;\\forall (A,B)\\in\\ensuremath{\\mathcal{P}}_2(V)$$\nwhere $S_n$ is the permutation group over $\\{1,\\cdots,n\\}$.\n\n\\begin{defn}[Combinatorial eigenvalue problem]\nGiven $f,g:\\ensuremath{\\mathcal{P}}_2(V)\\to \\ensuremath{\\mathbb{R}}$, the combinatorial eigenvalue problem of $(f,g)$ is to find $\\lambda\\in\\ensuremath{\\mathbb{R}}$ and $(A,B)\\in \\ensuremath{\\mathcal{P}}_2(V)\\setminus\\{(\\varnothing,\\varnothing)\\}$ such that $\\nabla f(A,B)\\cap \\lambda \\nabla g(A,B)\\ne\\varnothing$.\n\\end{defn}\n\nSince it can be verified that $\\nabla f^L(\\vec1_A-\\vec1_B)=\\nabla f(A,B)$, Proposition \\ref{pro:Lovasz-eigen} (or Theorem \\ref{introthm:eigenvalue}) implies that the combinatorial eigenvalue problem for $(f,g)$ is equivalent to the nonlinear eigenvalue problem of $(f^L,g^L)$.\n\n{ By Propositions \\ref{pro:Lovasz-eigen-pre} and \\ref{pro:Lovasz-eigen}, for a pair of functions $f$ and $g$ on $\\ensuremath{\\mathcal{P}}(V)$ (resp., $\\ensuremath{\\mathcal{P}}_2(V)$), every eigenvalue of the function pair $(f^L,g^L)$ generated by Lov\\'asz extension has an eigenvector of the form $\\vec1_A$ (resp., $\\vec1_A-\\vec1_B$) for some $A\\in\\ensuremath{\\mathcal{P}}(V)\\setminus\\{\\varnothing\\}$ (resp., $(A,B)\\in\\ensuremath{\\mathcal{P}}_2(V)\\setminus\\{(\\varnothing,\\varnothing)\\}$). We may call such a set $A$ (resp., $(A,B)$) an eigen-set of $(f,g)$. And, we are interested in the eigen-sets and the corresponding eigenvalues, which encode the \nkey information about the data structure generated by the function pair $(f,g)$. 
} \n\nNext, we study the second eigenvalue of the function pair $(f^L,g^L)$, which\nis closely related to a combinatorial Cheeger-type constant of the form \n$$\\ensuremath{\\mathrm{Ch}}(f,g):=\\min\\limits_{A\\in\\mathcal{P}(V)\\setminus\\{\\varnothing,V\\}}\\frac{f(A)}{\\min\\{g(A),g(V\\setminus A)\\}}$$\nwhere $f:\\ensuremath{\\mathcal{P}}(V)\\to\\ensuremath{\\mathbb{R}}$ is symmetric, i.e., $f(A)=f(V\\setminus A)$, $\\forall A$, and $g:\\ensuremath{\\mathcal{P}}(V)\\to\\ensuremath{\\mathbb{R}}_+$ is submodular and non-decreasing. \n\n\\begin{pro}\\label{eq:Cheeger-identity}\nLet $f_s,g_s:\\ensuremath{\\mathcal{P}}_2(V)\\to\\ensuremath{\\mathbb{R}}$ be defined by $f_s(A,B)=f(A)+f(B)$ and $g_s(A,B)=g(A)+g(B)$. Then \n$$\\ensuremath{\\mathrm{Ch}}(f,g)=\\text{the second eigenvalue of the function pair }(f_s^L,g_s^L)\\;(\\text{or equivalently }(f_s,g_s)).$$\n\\end{pro}\n\nWe need the following auxiliary proposition.\n\\begin{pro}\\label{pro:original-vs-disjoint-pair}Suppose that $g:\\ensuremath{\\mathcal{P}}(V)\\to\\ensuremath{\\mathbb{R}}_+$ is non-decreasing, i.e., $g(A)\\le g(B)$ whenever $A\\subset B$. \nLet $G:\\ensuremath{\\mathbb{R}}^n\\to\\ensuremath{\\mathbb{R}}$ be the disjoint-pair Lov\\'asz extension of the function $(A,B)\\mapsto g(A)+g(B)$. Then the Lov\\'asz extension of the function $A\\mapsto\\min\\{g(A),g(V\\setminus A)\\}$ is $\\min\\limits_{t\\in\\ensuremath{\\mathbb{R}}}G(\\vec x-t\\vec1)$.\n\\end{pro}\n\\begin{proof}\nDenote by $g_{m}(A)=\\min\\{g(A),g(V\\setminus A)\\}$ and $g_s(A,B)=g(A)+g(B)$, where $g_{m}^L$ is the original Lov\\'asz extension of $g_{m}$, and $g_s^{L}$ is the disjoint-pair Lov\\'asz extension of $g_s$. \nSince $g$ is non-decreasing, $g(V^t(\\vec x))$ must be non-increasing on $t\\in \\ensuremath{\\mathbb{R}}$, i.e., $g(V^{t_1}(\\vec x))\\ge g(V^{t_2}(\\vec x))$ whenever $t_1\\le t_2$. 
Hence, there exists $t_0\\in\\ensuremath{\\mathbb{R}}$ such that $g(V^t(\\vec x))\\ge g(V\\setminus V^t(\\vec x))$, $\\forall t\\le t_0$; and $g(V^t(\\vec x))\\le g(V\\setminus V^t(\\vec x))$, $\\forall t\\ge t_0$.\nThen \\begin{align*}\ng_{m}^L(\\vec x)&= \\int_{\\min \\vec x}^{\\max \\vec x}g_{m}(V^t(\\vec x))dt+\\min\\vec x g_{m}(V)\n\\\\&=\\int_{\\min \\vec x}^{t_0}g(V\\setminus V^t(\\vec x))dt+\\int_{t_0}^{\\max\\vec x}g(V^t(\\vec x))dt\n\\\\&=\\int_{\\min (\\vec x-t_0\\vec1)}^0g(V\\setminus V^t(\\vec x-t_0\\vec1))dt+\\int_{0}^{\\max(\\vec x-t_0\\vec1)}g(V^t(\\vec x-t_0\\vec1))dt\n\\\\&=\\int_{-\\|\\vec x-t_0\\vec1\\|_\\infty}^0g(V\\setminus V^t(\\vec x-t_0\\vec1))dt+\\int_{0}^{\\|\\vec x-t_0\\vec1\\|_\\infty}g(V^t(\\vec x-t_0\\vec1))dt\n\\\\&=\\int_{0}^{\\|\\vec x-t_0\\vec1\\|_\\infty}g(V^t(\\vec x-t_0\\vec1))+g(V\\setminus V^{-t}(\\vec x-t_0\\vec1))dt\n\\\\&=g_{s}^{L}(\\vec x-t_0\\vec1)=\\min\\limits_{t\\in\\ensuremath{\\mathbb{R}}}g_s^{L}(\\vec x-t\\vec1).\n\\end{align*}\nThe proof is completed. \n\\end{proof}\n\n\\begin{proof}[Proof of Proposition \\ref{eq:Cheeger-identity}]\nSince $f$ is symmetric, by Proposition \\ref{pro:setpair-original}, $f_s^L(\\vec x)=f^L(\\vec x)=f_m^L(\\vec x)$. \n\nSince $g$ is positive, submodular and non-decreasing, it is not difficult to check that\n$g_s$ is bisubmodular. Thus, by the equivalence of submodularity and convexity, $g_s^L$ is a convex function. 
\nTherefore, we have\n$$\\min\\limits_{\\vec x\\bot\\vec 1}\\frac{f_s^L(\\vec x)}{\\min\\limits_{t\\in\\ensuremath{\\mathbb{R}}}g_s^L(\\vec x-t\\vec 1)}=\\min\\limits_{\\text{nonconstant }\\vec x\\in\\ensuremath{\\mathbb{R}}_+^n}\\frac{f_s^L(\\vec x)}{\\min\\limits_{t\\in\\ensuremath{\\mathbb{R}}}g_s^L(\\vec x-t\\vec 1)}=\\min\\limits_{\\vec x\\in\\ensuremath{\\mathbb{R}}_+^n:\\min\\vec x=0}\\frac{f_m^L(\\vec x)}{g_m^L(\\vec x)}=\\min\\limits_{A\\ne\\varnothing,V}\\frac{f_m(A)}{g_m(A)}=\\ensuremath{\\mathrm{Ch}}(f,g),$$\nwhere the first equality is based on the fact that $\\vec x\\mapsto f_s^L(\\vec x)=f^L(\\vec x)$ and $\\vec x\\mapsto\\min\\limits_{t\\in\\ensuremath{\\mathbb{R}}}g_s^L(\\vec x-t\\vec 1)$ are translation invariant along $\\vec1$, the second equality is derived by Proposition \\ref{pro:original-vs-disjoint-pair}, and the third one follows from Theorem \\ref{thm:tilde-fg-equal}.\n\\end{proof}\nFinally, we prove that for any $A,B\\ne \\varnothing$ with $A\\cap B=\\varnothing$, $$\\max\\{\\frac{f(A)}{g(A)},\\frac{f(B)}{g(B)}\\}\\ge\\min\\{\\frac{f(A)}{\\min\\{g(A),g(V\\setminus A)\\}},\\frac{f(B)}{\\min\\{g(B),g(V\\setminus B)\\}}\\}.$$\nSuppose the contrary, and keep $f(A)=f(V\\setminus A)$ in mind. Then, we have $g(A)>g(V\\setminus A)$ and $g(B)>g(V\\setminus B)$, implying $g(A)+g(B)>g(V\\setminus A)+g(V\\setminus B)$.\nSince $A\\subset V\\setminus B$ and $g$ is non-decreasing, one has $g(A)\\le g(V\\setminus B)$. Similarly, $g(B)\\le g(V\\setminus A)$, which leads to a contradiction. \n\nCombining all the results and discussions in this section, we complete the proof of Theorem \\ref{introthm:eigenvalue}. \n\n\n\\subsection{A relaxation of a Dinkelbach-type scheme}\n \\label{sec:algo}\n\nWe would like to establish an iteration framework for finding minimum and maximum eigenvalues. These extremal eigenvalues play significant roles in optimization theory. They can be found via the so-called Dinkelbach iterative scheme \\cite{D67}. 
This will provide a good starting point for an appropriate\niterative algorithm for the resulting fractional programming. Actually, the equivalent continuous optimization has a fractional form, but such fractions have hardly been studied in the field of fractional programming \\cite{SI83}, where one usually considers optimizing the ratio of a concave function to a convex one. For convenience, we shall work in a normed space $X$ in this subsection.\n\nFor a convex function $F:X\\to \\mathbb{R}$, its sub-gradient (or sub-derivative) $\\nabla F(\\vec x)$ is defined as the collection of $\\vec u\\in X^*$ satisfying $F(\\vec y)-F(\\vec x)\\ge \\langle \\vec u,\\vec y-\\vec x\\rangle,\\;\\forall \\vec y\\in X$, where $X^*$ is the dual of $X$ and $\\langle \\vec u,\\vec y-\\vec x\\rangle$ is the action of $\\vec u$ on $\\vec y-\\vec x$.\nThe concept of a sub-gradient has been extended to Lipschitz functions; this extension is called the Clarke derivative \\cite{Clarke}:\n $$\\nabla F(\\vec x)=\\left\\{\\vec u\\in X^*\\left|\\limsup_{\\vec y\\to \\vec x, t\\to 0^+}\\frac{F(\\vec y+t\\vec h)-F(\\vec y)}{t}\\ge \\langle \\vec u,\\vec h\\rangle,\\forall \\vec h\\in X\\right.\\right\\}.$$\n It can even be generalized to the class of lower semi-continuous functions \\cite{DM94,D10}.\n\n\\begin{theorem}[Global convergence of a Dinkelbach-type scheme \\cite{D67}] \\label{thm:global convergence}\nLet $S$ be a compact set and let $F,G:S\\to\\mathbb{R}$ be two continuous functions with $G(\\vec x)>0$, $\\forall \\vec x\\in S$.
Then the sequence $\\{r^k\\}$ generated by the two-step iterative scheme\n\\begin{numcases}{}\n\\vec x^{k+1}=\\argopti\\limits_{\\vec x\\in S} \\{F(\\vec x)-r^k G(\\vec x)\\}, \\label{iter0-1}\n\\\\\nr^{k+1}=\\frac{F( \\vec x^{k+1})}{G(\\vec x^{k+1})},\n\\label{iter0}\n\\end{numcases}\n from any initial point $\\vec x^0\\in S$, converges monotonically to a global optimum of $F(\\cdot)\/G(\\cdot)$,\nwhere `opti' is `min' or `max'.\n\\end{theorem}\n\n\\begin{cor}\nIf $F\/G$ is a zero-homogeneous continuous function, then the iterative scheme \\eqref{iter0-1}\\eqref{iter0} from any initial point $\\vec x^0$ converges monotonically to a global optimum on the cone spanned by $S$ (i.e., $\\{t\\vec x: t>0, \\vec x\\in S\\}$).\n\\end{cor}\n\nWe note that Theorem \\ref{thm:global convergence} generalizes Theorem 3.1 in \\cite{CSZ15} and Theorem 2 in \\cite{CSZ18}. Since it is a Dinkelbach-type iterative algorithm in the field of fractional programming, we omit the proof.\n\nMany minimization problems in the field of fractional programming possess the form\n$$\n\\min\\,\\frac{\\text{convex }F}{\\text{concave }G},\n$$\nwhich is not necessarily a convex programming problem. The original Dinkelbach iterative scheme turns the ratio form into the inner problem \\eqref{iter0-1}, which has the form\n$$\n\\min \\; (\\text{convex }F-\\text{concave }G),\n$$\nwhich is indeed a convex programming problem.
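To make the two-step scheme \\eqref{iter0-1}--\\eqref{iter0} concrete, the following minimal Python sketch runs it on a finite feasible set, where the inner problem \\eqref{iter0-1} is solved by plain enumeration; the set $S$ and the functions $F$, $G$ here are illustrative choices, not taken from the text.

```python
# A minimal sketch of the Dinkelbach-type scheme (iter0-1)-(iter0) for a
# finite feasible set S, with the inner argmin solved by enumeration.
# The toy instance below (F, G, S) is an illustrative choice.

def dinkelbach_min(S, F, G, x0, max_iter=100, tol=1e-12):
    """Minimize F(x)/G(x) over the finite set S (assumes G > 0 on S)."""
    x = x0
    r = F(x) / G(x)                    # r^0 = F(x^0)/G(x^0)
    for _ in range(max_iter):
        # inner problem: x^{k+1} in argmin_{x in S} F(x) - r^k G(x)
        x = min(S, key=lambda y: F(y) - r * G(y))
        r_new = F(x) / G(x)            # r^{k+1}
        if abs(r - r_new) < tol:       # the values r^k decrease monotonically
            return r_new, x
        r = r_new
    return r, x

# toy instance: minimize (1 + x^2)/(1 + x) over S = {0, 1, ..., 10}
S = range(11)
F = lambda x: 1 + x * x
G = lambda x: 1 + x
r_star, x_star = dinkelbach_min(S, F, G, x0=10)
assert abs(r_star - 1.0) < 1e-9        # global minimum value is 1
```

Each round can only decrease $r^{k}$, which matches the monotone convergence asserted by the theorem.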
However, most of our examples are in the form\n$$\n\\min\\frac{\\text{convex }F}{\\text{convex }G},\n$$\ni.e., both the numerator and the denominator of the fractional objective function are convex.\nSince the difference of two convex functions may not be convex, the inner\nproblem \\eqref{iter0-1} is no longer a convex optimization problem and hence might be very difficult to solve.\n\n\nIn other practical applications, we may encounter optimization problems of the form\n$$\n\\min\\frac{\\text{convex }F_1 - \\text{convex }F_2}{\\text{convex }G_1- \\text{convex }G_2}.\n$$\nThis is NP-hard in general. Fortunately, we can construct an effective relaxation of \\eqref{iter0-1}.\n\n\nThe starting point of the relaxation step is the following classical fact:\n\\begin{pro}\\label{pro:difference-two-submodular}\nFor any function $f:\\ensuremath{\\mathcal{A}}\\to \\ensuremath{\\mathbb{R}}$, there are two submodular functions $f_1$ and $f_2$ on $\\ensuremath{\\mathcal{A}}$ such that $f=f_1-f_2$.\n\\end{pro}\n\nAlthough this is an old result, for readers' convenience, we present a short\nproof below. \n\n\\begin{proof}\nTake $g$ to be a strictly submodular function and let $$\\delta=\\min\\limits_{\\substack{A,A'\\in \\ensuremath{\\mathcal{A}}\\\\ \\{A\\vee A',A\\wedge A'\\}\\ne\\{A,A'\\}}}\\left(g(A)+g(A')-g(A\\vee A')-g(A\\wedge A')\\right)>0.$$ Set $f_2=Cg$ and $f_1=f+f_2$, where $C>0$ is chosen so large that $C\\delta\\ge\\max\\limits_{A,A'\\in \\ensuremath{\\mathcal{A}}}\\left(f(A\\vee A')+f(A\\wedge A')-f(A)-f(A')\\right)$. Then $f_2$ is strictly submodular and $f_1$ is submodular. So, $f=f_1-f_2$, which completes the proof.\n\\end{proof}\n\nThanks to Proposition \\ref{pro:difference-two-submodular}, any discrete function can be expressed as the difference of two submodular functions. Since the Lov\\'asz extension of a submodular function is convex, every Lov\\'asz extension function is the difference of two convex functions.
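Proposition \\ref{pro:difference-two-submodular} can be verified by brute force on a small ground set. In the following Python sketch, the strictly submodular function $g$ (a strictly concave function of the cardinality) and the constant $C$ are illustrative choices, not taken from the text.

```python
# Brute-force check of the decomposition f = f1 - f2 with f1, f2 submodular,
# on the lattice of all subsets of V = {0, 1, 2}.  The strictly submodular
# g and the constant C are illustrative choices.
from itertools import chain, combinations

V = (0, 1, 2)
subsets = [frozenset(c) for c in
           chain.from_iterable(combinations(V, k) for k in range(len(V) + 1))]

def is_submodular(h):
    return all(h[A] + h[B] >= h[A | B] + h[A & B]
               for A in subsets for B in subsets)

# g(A) = h(|A|) with h strictly concave, so g is submodular, strictly so on
# "crossing" pairs A, B (those with {A | B, A & B} != {A, B})
g = {A: len(A) * (2 * len(V) - len(A)) for A in subsets}

# an arbitrary set function f, which is not submodular itself
f = {A: (len(A) - 1) ** 2 for A in subsets}

C = 10.0                      # large enough that f + C*g is submodular
f2 = {A: C * g[A] for A in subsets}
f1 = {A: f[A] + f2[A] for A in subsets}

assert not is_submodular(f)
assert is_submodular(f1) and is_submodular(f2)
assert all(f1[A] - f2[A] == f[A] for A in subsets)
```

The choice of $C$ follows the proof: it must compensate the worst submodularity violation of $f$ by the strict submodularity margin $\\delta$ of $g$.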
\n\nThen, for the fractional programming derived by Theorem \\ref{thm:tilde-fg-equal} (or Propositions \\ref{pro:fraction-f\/g} and \\ref{pro:maxconvex}), both the numerator\nand denominator can be rewritten as the differences of two convex functions. This implies that a simple and efficient\nalgorithm can be obtained via further relaxing the Dinkelbach iteration by\ntechniques in DC Programming \\cite{HT99}. It should be noted that the following recent works (especially the papers by Hein's group \\cite{HeinBuhler2010,HS11,TVhyper-13,TMH18}) motivated us to investigate more on this direction:\n\\begin{enumerate}\n\\item The efficient generalization of the inverse power method proposed by Hein et al \\cite{HeinBuhler2010} and the extended steepest descent \nmethod by Bresson et al \\cite{BLUB12} deal with fractional programming in the same spirit. For more relevant papers by Hein's group, we refer to \\cite{HS11} for the RatioDCA method, and \\cite{TMH18} for the generalized RatioDCA technique.\n\\item In \\cite{MaeharaMurota15,MMM18}, the authors address difference convex programming (DC programming)\nfor discrete convex functions, in which an algorithm and a convergence result similar to Theorem \\ref{th:gsd} are presented. \n\n\\item A simple iterative algorithm based on the continuous reformulation by the disjoint-pair Lov\\'asz extension provides the best cut values for maxcut on a G-set among all existing continuous algorithms \\cite{SZZmaxcut}. \n\\end{enumerate}\n In view of these recent developments, and in order to enlarge the scope of fractional programming and RatioDCA method, it is helpful to study this\n aspect by general formulations (see also Remark \\ref{remark:very-general-RatioDCA} for the most general form). 
\n Thus, \nwe begin to establish a method based on convex programming for solving $\\min\\frac{F(\\vec x)}{G(\\vec x)}$ with $F=F_1-F_2$ and $G=G_1-G_2$ being two nonnegative functions, where $F_1,F_2,G_1,G_2$ are four nonnegative convex functions on $X$. Let $\\{H_{\\vec y}(\\vec x):\\vec y\\in X\\}$ be a family of convex differentiable functions on $X$ with $H_{\\vec y}(\\vec x)\\ge H_{\\vec y}(\\vec y)$, $\\forall \\vec x\\in X$. Consider the following three-step iterative scheme\n\\begin{subequations}\n\\label{iter1}\n\\begin{numcases}{}\n\\vec x^{k+1}\\in \\argmin\\limits_{\\vec x\\in \\mathbb{B}} \\{F_1(\\vec x)+r^k G_2(\\vec x) -(\\langle \\vec u^k,\\vec x\\rangle+r^k \\langle \\vec v^k,\\vec x\\rangle) + H_{\\vec x^k}(\\vec x)\\}, \\label{eq:twostep_x2}\n\\\\\nr^{k+1}=F( \\vec x^{k+1})\/G( \\vec x^{k+1}),\n\\label{eq:twostep_r2}\n\\\\\n \\vec u^{k+1}\\in\\nabla F_2( \\vec x^{k+1}),\\;\n \\vec v^{k+1}\\in\\nabla G_1( \\vec x^{k+1}),\n\\label{eq:twostep_s2}\n\\end{numcases}\n\\end{subequations}\nwhere $\\mathbb{B}$ is a convex body containing $\\vec 0$ as an interior point. The following slight modification \n\\begin{subequations}\n\\label{iter2}\n\\begin{numcases}{}\n\\vec y^{k+1}\\in \\argmin\\limits_{\\vec x\\in X} \\{F_1(\\vec x)+r^k G_2(\\vec x) -(\\langle \\vec u^k,\\vec x\\rangle+r^k \\langle \\vec v^k,\\vec x\\rangle) + H_{\\vec x^k}(\\vec x)\\}, \\label{eq:2twostep_x2}\n\\\\\nr^{k+1}=F( \\vec y^{k+1})\/G( \\vec y^{k+1}),~~ \\vec x^{k+1}=\\partial \\mathbb{B}\\cap\\{t\\vec y^{k+1}:t\\ge 0\\} \n\\label{eq:2twostep_r2}\n\\\\\n \\vec u^{k+1}\\in\\nabla F_2( \\vec x^{k+1}),\\;\n \\vec v^{k+1}\\in\\nabla G_1( \\vec x^{k+1}),\n\\label{eq:2twostep_s2}\n\\end{numcases}\n\\end{subequations}\n is available when $F\/G$ is zero-homogeneous and \\eqref{eq:2twostep_x2} has a\n solution. In \\eqref{eq:2twostep_r2}, $\\vec x^{k+1}$ indicates the\n normalization of $\\vec y^{k+1}$ w.r.t.
the convex body $\\mathbb{B}$; in particular, $\\vec x^{k+1}:=\\vec y^{k+1}\/\\|\\vec y^{k+1}\\|_2$ if we let $\\mathbb{B}$ be the unit ball. \n These schemes, mixing the inverse power (IP) method and the steepest descent (SD) method, can be used to compute special eigenpairs of $(F,G)$. Note that the inner problem \\eqref{eq:twostep_x2} (resp. \\eqref{eq:2twostep_x2}) is a convex optimization problem, and thus many algorithms from convex programming are applicable. We should note that the above schemes provide a generalization of the RatioDCA technique in \\cite{HS11}.\n\n\\begin{theorem}[Local convergence for the mixed IP-SD scheme]\\label{th:gsd}\nThe sequence $\\{r^k\\}$ generated by the iterative scheme \\eqref{iter1} (resp. \\eqref{iter2}) from any initial point $\\vec x^0\\in \\mathrm{supp}(G)\\cap \\mathbb{B}$ (resp. $\\vec x^0\\in \\mathrm{supp}(G)$) converges monotonically, where $\\mathrm{supp}(G)$ is the support of $G$.\n\nNext we further assume that $X$ is finite-dimensional. If one of the following additional conditions holds, then $\\lim_{k\\to+\\infty} r^k=r^*$ is an eigenvalue of the function pair $(F,G)$ \nin the sense that it fulfills $\\vec0\\in \\nabla F_1(\\vec x^*)-\\nabla F_2(\\vec x^*)-r^*\\left(\\nabla G_1(\\vec x^*)-\\nabla G_2(\\vec x^*)\\right)$, where $\\vec x^*$ is a cluster point of $\\{\\vec x^k\\}$.\n\n\\begin{itemize}\n\\item[Case 1.] For the scheme \\eqref{iter1}, $F_2$ and $G_1$ are one-homogeneous, and $F_1$ and $G_2$ are $p$-homogeneous with $p\\ge 1$, and $H_{\\vec x}=\\text{const}$, $\\forall \\vec x\\in\\mathbb{B}$. \n\\item[Case 2.1.] For the scheme \\eqref{iter2}, $F_1$, $F_2$, $G_1$ and $G_2$ are $p$-homogeneous with $p>1$.\n\\item[Case 2.2.]
For the scheme \\eqref{iter2}, $F_1$, $F_2$, $G_1$ and $G_2$\n are one-homogeneous, and $H_{\\vec x}(\\vec x)$ is a continuous\n function of $\\vec x\\in \\mathbb{B}$ and $\\forall M>0$, $\\exists C>0$ such that $H_{\\vec x}(\\vec y)>M \\|\\vec y\\|_2$ whenever $\\vec x\\in\\mathbb{B}$ and $\\|\\vec y\\|_2\\ge C$. \n\\end{itemize}\n\n\\end{theorem}\n\n\nTheorem \\ref{th:gsd} partially generalizes Theorem 3.4 in \\cite{CSZ15},\nand it is indeed an extension of both the IP and the SD method \\cite{BLUB12,CP11,M16,HeinBuhler2010}. \n\n\n\\begin{proof}[Proof of Theorem \\ref{th:gsd}]\n\nIt will be helpful to divide this proof into several parts and steps:\n\n\\begin{enumerate}\n\\item[Step 1.] We may assume $G(\\vec x^k)>0$ for any $k$.\nIn fact, the initial point $\\vec x^0$ satisfies $G(\\vec x^0)> 0$. We will show $F(\\vec x^1)=0$ if $G(\\vec x^1)=0$ and thus the iteration should be terminated at $\\vec x^1$. This tells us that we may assume $G(\\vec x^k)>0$ for all $k$ before the termination of the iteration.\n\nNote that\n\\begin{align*}&F_1(\\vec x^1)+r^0 G_2(\\vec x^1) -(\\langle \\vec u^0,\\vec x^1\\rangle+r^0 \\langle \\vec v^0,\\vec x^1\\rangle) + H_{\\vec x^0}(\\vec x^1)\\\\ \\le~& F_1(\\vec x^0)+r^0 G_2(\\vec x^0) -(\\langle \\vec u^0,\\vec x^0\\rangle+r^0 \\langle \\vec v^0,\\vec x^0\\rangle) + H_{\\vec x^0}(\\vec x^0),\n\\end{align*}\nwhich implies\n\\begin{align*}&F_1(\\vec x^1)-F_1(\\vec x^0)+r^0 (G_2(\\vec x^1)-G_2(\\vec x^0)) + H_{\\vec x^0}(\\vec x^1)-H_{\\vec x^0}(\\vec x^0)\\\\ \\le~& \\langle \\vec u^0,\\vec x^1-\\vec x^0\\rangle+r^0 \\langle \\vec v^0,\\vec x^1-\\vec x^0\\rangle\\le F_2(\\vec x^1)-F_2(\\vec x^0) +r^0 (G_1(\\vec x^1)-G_1(\\vec x^0)),\n\\end{align*}\ni.e.,\n\\begin{align}F(\\vec x^1)-F(\\vec x^0)+ H_{\\vec x^0}(\\vec x^1)-H_{\\vec x^0}(\\vec x^0)&\\le r^0 (G(\\vec x^1)-G(\\vec x^0))\\label{eq:important-inequality}\\\\&=-r^0G(\\vec x^0)=-F(\\vec x^0).\\notag\n\\end{align}\nSince the equality holds, we have $F(\\vec x^1)=0$, 
$H_{\\vec x^0}(\\vec x^1)=H_{\\vec x^0}(\\vec x^0)$, $\\langle \\vec u^0,\\vec x^1-\\vec x^0\\rangle=F_2(\\vec x^1)-F_2(\\vec x^0)$ and $\\langle \\vec v^0,\\vec x^1-\\vec x^0\\rangle=G_1(\\vec x^1)-G_1(\\vec x^0)$. So this step is finished.\n\n\n\n\\item[Step 2.] $\\{r^k\\}_{k=1}^\\infty$ is monotonically decreasing and hence convergent.\n\nSimilar to \\eqref{eq:important-inequality} in Step 1, we can arrive at\n$$F(\\vec x^{k+1})-F(\\vec x^k)+ H_{\\vec x^k}(\\vec x^{k+1})-H_{\\vec x^k}(\\vec x^k) \\le r^k (G(\\vec x^{k+1})-G(\\vec x^k)),$$\nwhich leads to\n$$F(\\vec x^{k+1})\\le r^k G(\\vec x^{k+1}).$$\nSince $G(\\vec x^{k+1})$ is assumed to be positive,\n$r^{k+1}=F(\\vec x^{k+1})\/G(\\vec x^{k+1})\\le r^k$.\n Thus, there exists $r^*\\in [r_{\\min},r^0]$ such that $\\lim\\limits_{k\\to+\\infty}r^k=r^*$.\n\\end{enumerate}\n\nIn the sequel, we assume that the dimension of $X$ is finite.\n\n\\begin{enumerate}\n\\item[Step 3.] $\\{\\vec x^k\\}$, $\\{\\vec u^k\\}$ and $\\{\\vec v^k\\}$ are sequentially compact.\n\nIn this setting, $\\mathbb{B}$ must be compact. In consequence, there exist $k_i$, $r^*$, $\\vec x^*$, $\\vec x^{**}$, $\\vec u^*$ and $\\vec v^*$ such that $\\vec x^{k_i}\\to \\vec x^*$, $\\vec x^{k_i+1}\\to \\vec x^{**}$, $\\vec u^{k_i}\\to \\vec u^*$ and $\\vec v^{k_i}\\to \\vec v^*$, as $i\\to +\\infty$.\n\n Clearly, the statements in Steps 1, 2 and 3 are also available for the scheme \\eqref{iter2}.\n\n\\item[Step 4.] For the scheme \\eqref{iter1}, $\\vec x^*$ is a minimum of $F_1(\\vec x)+r^* G_2(\\vec x) -(\\langle \\vec u^*,\\vec x\\rangle+r^* \\langle \\vec v^*,\\vec x\\rangle) + H_{\\vec x^*}(\\vec x)$ on $\\mathbb{B}$. For the scheme \\eqref{iter2}, under the additional assumptions introduced in Case 2.1 or Case 2.2, $\\vec x^*$ is a minimum of $F_1(\\vec x)+r^* G_2(\\vec x) -(\\langle \\vec u^*,\\vec x\\rangle+r^* \\langle \\vec v^*,\\vec x\\rangle) + H_{\\vec x^*}(\\vec x)$ on $X$. 
\n\nLet $g(r,\\vec y,\\vec u,\\vec v)=\\min\\limits_{ \\vec x\\in \\mathbb{B}} \\{F_1(\\vec x)+r G_2(\\vec x) -(\\langle \\vec u,\\vec x\\rangle+r \\langle \\vec v,\\vec x\\rangle) + H_{\\vec y}(\\vec x)\\}$. It is standard to verify that $g(r,\\vec y,\\vec u,\\vec v)$ is continuous on $\\mathbb{R}^{1}\\times X\\times X^*\\times X^*$ by the compactness of $\\mathbb{B}$.\n\nSince $g(r^{k_i},\\vec x^{k_i},\\vec u^{k_i},\\vec v^{k_i})=r^{k_i+1}$, taking $i\\to+\\infty$, one obtains $g(r^*,\\vec x^*,\\vec u^*,\\vec v^*)=r^*$.\n\nBy Step 3, $\\vec x^{**}$ attains the minimum of $F_1(\\vec x)+r^* G_2(\\vec x) -(\\langle \\vec u^*,\\vec x\\rangle+r^* \\langle \\vec v^*,\\vec x\\rangle) + H_{\\vec x^*}(\\vec x)$ on $\\mathbb{B}$. Suppose, on the contrary, that $\\vec x^*$ is not a minimum of $F_1(\\vec x)+r^* G_2(\\vec x) -(\\langle \\vec u^*,\\vec x\\rangle+r^* \\langle \\vec v^*,\\vec x\\rangle) + H_{\\vec x^*}(\\vec x)$ on $\\mathbb{B}$. Then\n\\begin{align*}&F_1(\\vec x^{**})+r^* G_2(\\vec x^{**}) -(\\langle \\vec u^*,\\vec x^{**}\\rangle+r^* \\langle \\vec v^*,\\vec x^{**}\\rangle) + H_{\\vec x^*}(\\vec x^{**})\\\\ <~& F_1(\\vec x^*)+r^* G_2(\\vec x^*) -(\\langle \\vec u^*,\\vec x^*\\rangle+r^* \\langle \\vec v^*,\\vec x^*\\rangle) + H_{\\vec x^*}(\\vec x^*),\n\\end{align*}\nand thus $F(\\vec x^{**})<r^* G(\\vec x^{**})$. Note that $G(\\vec x^{**})>0$, for otherwise $F(\\vec x^{**})<0$ would contradict $F\\ge 0$. Hence $F(\\vec x^{**})\/ G(\\vec x^{**})<r^*$, which contradicts $F(\\vec x^{**})\/G(\\vec x^{**})=\\lim\\limits_{i\\to+\\infty}r^{k_i+1}=r^*$. Therefore, $\\vec x^*$ is a minimum, as claimed.\n\n\\item[Case 2.1.] Suppose that $F_1$, $F_2$, $G_1$ and $G_2$ are $p$-homogeneous with $p>1$. \n\nDenote by $B:X\\to[0,+\\infty)$ the unique convex and one-homogeneous function satisfying $B(\\partial\\mathbb{B})=1$. Then the normalization of $\\vec x$ in \\eqref{eq:2twostep_r2} can be expressed as $\\vec x\/B(\\vec x)$.\n\nThe compactness of $\\{\\vec x:B(\\vec x)\\le 1\\}$ and the upper semi-continuity and compactness of subderivatives imply that $\\bigcup_{\\vec x:B(\\vec x)\\le 1}\\nabla F_2(\\vec x)$ and $\\bigcup_{\\vec x:B(\\vec x)\\le 1}\\nabla G_1(\\vec x)$ are bounded sets.
So, we have a uniform constant $C_1>0$ such that $\\|\\vec u\\|_2+r^*\\|\\vec v\\|_2\\le C_1$, $\\forall \\vec u\\in \\nabla F_2(\\vec x)$, $ \\vec v\\in \\nabla G_1(\\vec x)$, $\\forall \\vec x\\in\\mathbb{B}$. Let $C_2>0$ be such that $\\|\\vec x\\|_2\\le C_2B(\\vec x)$, and $C_3=\\min\\limits_{B(\\vec x)=1} F_1(\\vec x)>0$ (here we assume without loss of generality that $F_1(\\vec x)>0$ whenever $\\vec x\\ne \\vec 0$). For any $\\vec x$ with $B(\\vec x)\\ge \\max\\{2,(2C_1C_2\/C_3)^{\\frac{1}{p-1}}\\}$, and for any $\\vec x^*\\in \\mathbb{B}$, $ \\vec u^*\\in \\nabla F_2(\\vec x^*)$, $ \\vec v^*\\in \\nabla G_1(\\vec x^*)$, \n\\begin{align*}\n&F_1(\\vec x)+r^* G_2(\\vec x) -(\\langle \\vec u^*,\\vec x\\rangle+r^*\\langle \\vec v^*,\\vec x\\rangle) + H_{\\vec x^*}(\\vec x)\n\\\\ =~& B(\\vec x)^p F_1(\\frac{\\vec x}{B(\\vec x)})+r^*B(\\vec x)^p G_2(\\frac{\\vec x}{B(\\vec x)}) -(\\|\\vec x\\|_2\\langle \\vec u^*,\\frac{\\vec x}{\\|\\vec x\\|_2}\\rangle+r^*\\|\\vec x\\|_2\\langle \\vec v^*,\\frac{\\vec x}{\\|\\vec x\\|_2}\\rangle) + H_{\\vec x^*}(\\vec x)\n\\\\ \\ge~& B(\\vec x)^pF_1(\\frac{\\vec x}{B(\\vec x)})-\\|\\vec x\\|_2(\\|\\vec u^*\\|_2+r^*\\|\\vec v^*\\|_2 ) + H_{\\vec x^*}(\\vec x^*)\n\\\\ \\ge~& B(\\vec x)^pC_3-C_2C_1B(\\vec x) + H_{\\vec x^*}(\\vec x^*)=B(\\vec x)(B(\\vec x)^{p-1}C_3-C_2C_1) + H_{\\vec x^*}(\\vec x^*)\n> H_{\\vec x^*}(\\vec x^*)\\\\ >~& -(p-1)(F_2(\\vec x^*)+r^* G_1(\\vec x^*))+ H_{\\vec x^*}(\\vec x^*)\n\\\\ =~& F_1(\\vec x^*)+r^* G_2(\\vec x^*) -(\\langle \\vec u^*,\\vec x^*\\rangle+r^*\\langle \\vec v^*,\\vec x^*\\rangle) + H_{\\vec x^*}(\\vec x^*)\n\\end{align*}\nwhich means that the minimizers of $F_1(\\vec x)+r^* G_2(\\vec x) -(\\langle \\vec u^*,\\vec x\\rangle+r^*\\langle \\vec v^*,\\vec x\\rangle) + H_{\\vec x^*}(\\vec x)$ exist and they always lie in the bounded set $\\{\\vec x:B(\\vec x)< \\max\\{2,(2C_1C_2\/C_3)^{\\frac{1}{p-1}}\\}\\}$. Since $B(\\vec x^k)=1$, $\\{\\vec y^k\\}$ must be a bounded sequence. 
There exists $\\{k_i\\}\\subset \\{k\\}$ such that $\\vec x^{k_i} \\to \\vec x^*$, $\\vec y^{k_i+1}\\to \\vec y^{**}$, $\\vec x^{k_i+1} \\to \\vec x^{**}$ for some $\\vec x^*$, $\\vec y^{**}$ and $\\vec x^{**}=\\vec y^{**}\/B(\\vec y^{**})$. Similar to Step 4 and Case 1, $\\vec x^*$ is a minimizer of $F_1(\\vec x)+r^* G_2(\\vec x) -(\\langle \\vec u^*,\\vec x\\rangle+r^* \\langle \\vec v^*,\\vec x\\rangle)+H_{\\vec x^*}(\\vec x)$ on $X$, and thus\n\\begin{align*}\n\\vec 0 &\\in \\nabla|_{ \\vec x= \\vec x^*} \\left(F_1(\\vec x)+r^* G_2(\\vec x) -(\\langle \\vec u^*,\\vec x\\rangle+r^* \\langle \\vec v^*,\\vec x\\rangle)+H_{\\vec x^*}(\\vec x) \\right)\n\\\\&=\\nabla F_1(\\vec x^*)+r^* \\nabla G_2(\\vec x^*) -\\vec u^*-r^*\\vec v^*\n\\subset \\nabla F_1(\\vec x^*)-\\nabla F_2(\\vec x^*)+r^*\\nabla G_2(\\vec x^*)-r^*\\nabla G_1(\\vec x^*).\n\\end{align*}\n\n\\item[Case 2.2.] For the scheme \\eqref{iter2}, $F_1$, $F_2$, $G_1$ and $G_2$ are one-homogeneous; $H_{\\vec x}(\\vec x)$ is a continuous function of $\\vec x\\in \\mathbb{B}$, and for any $M>0$, there exists $C>0$ such that $H_{\\vec x}(\\vec y)>M\\cdot B(\\vec y)$ whenever $\\vec x\\in\\mathbb{B}$ and $B(\\vec y)\\ge C$. \n\nTake $M=C_1C_2+2$, where the constants $C_1$ and $C_2$ are as introduced in Case 2.1; then there exists $C>\\max\\{\\max\\limits_{\\vec x\\in \\mathbb{B}}H_{\\vec x}(\\vec x),1\\}$ such that $H_{\\vec x^*}(\\vec x)\\ge M\\cdot B(\\vec x)$ whenever $\\vec x^*\\in \\mathbb{B}$ and $B(\\vec x)\\ge C$.
\n\nSimilar to Case 2.1, for any $\\vec x^*\\in \\mathbb{B}$, $\\vec x\\in X$ with $B(\\vec x)\\ge C$, and $\\forall \\vec u^*\\in \\nabla F_2(\\vec x^*)$, $ \\vec v^*\\in \\nabla G_1(\\vec x^*)$, \n\\begin{align*}\n&F_1(\\vec x)+r^* G_2(\\vec x) -(\\langle \\vec u^*,\\vec x\\rangle+r^*\\langle \\vec v^*,\\vec x\\rangle) + H_{\\vec x^*}(\\vec x)\n\\\\>~& B(\\vec x)(C_3-C_2C_1) + (C_1C_2+2)\\cdot B(\\vec x)\n\\ge 2 B(\\vec x) >H_{\\vec x^*}(\\vec x^*)\n\\\\ =~& F_1(\\vec x^*)+r^* G_2(\\vec x^*) -(\\langle \\vec u^*,\\vec x^*\\rangle+r^*\\langle \\vec v^*,\\vec x^*\\rangle) + H_{\\vec x^*}(\\vec x^*).\n\\end{align*}\nThe remaining part can refer to Case 2.1.\n \\end{enumerate}\n \n\\end{proof}\n\n\\begin{remark}\\label{remark:very-general-RatioDCA}\nAs some direct extensions of the so-called {\\sl generalized RatioDCA} in \\cite{TMH18}, we have the following modified schemes:\n\\begin{subequations}\n\\label{iter1-}\n\\begin{numcases}{}\n\\vec x^{k+1}\\in \\argmin\\limits_{\\vec x\\in \\mathbb{B}} F_1(\\vec x)+r^k G_2(\\vec x) -(\\langle \\vec u^k,\\vec x\\rangle+r^k \\langle \\vec v^k,\\vec x\\rangle) + H_{\\vec x^k}(\\vec x)\\text{ if }r^k\\ge0, \\label{eq:twostep_x2-}\n\\\\\n\\vec x^{k+1}\\in \\argmin\\limits_{\\vec x\\in \\mathbb{B}} G_1(\\vec x)-\\langle \\vec w^k,\\vec x\\rangle-\\frac{1}{r^k}( F_1(\\vec x) - \\langle \\vec u^k,\\vec x\\rangle) + H_{\\vec x^k}(\\vec x)\\text{ if }r^k<0, \\label{eq:twostep_x22-}\n\\\\\nr^{k+1}=F( \\vec x^{k+1})\/G( \\vec x^{k+1}),\n\\label{eq:twostep_r2-}\n\\\\\n \\vec u^{k+1}\\in\\nabla F_2( \\vec x^{k+1}),\\;\n \\vec v^{k+1}\\in\\nabla G_1( \\vec x^{k+1}),\\;\\vec w^{k+1}\\in\\nabla G_2( \\vec x^{k+1})\n\\label{eq:twostep_s2-}\n\\end{numcases}\n\\end{subequations}\nand\n\\begin{subequations}\n\\label{iter2-}\n\\begin{numcases}{}\n\\vec y^{k+1}\\in \\argmin\\limits_{\\vec x\\in X} F_1(\\vec x)+r^k G_2(\\vec x) -(\\langle \\vec u^k,\\vec x\\rangle+r^k \\langle \\vec v^k,\\vec x\\rangle) + H_{\\vec x^k}(\\vec x)\\text{ if 
}r^k\\ge0, \\label{eq:2twostep_x2-}\n\\\\\n\\vec y^{k+1}\\in \\argmin\\limits_{\\vec x\\in X} G_1(\\vec x)-\\langle \\vec w^k,\\vec x\\rangle-\\frac{1}{r^k}( F_1(\\vec x) - \\langle \\vec u^k,\\vec x\\rangle) + H_{\\vec x^k}(\\vec x)\\text{ if }r^k<0, \\label{eq:2twostep_x22-}\n\\\\\nr^{k+1}=F( \\vec y^{k+1})\/G( \\vec y^{k+1}),~~ \\vec x^{k+1}=\\partial \\mathbb{B}\\cap\\{t\\vec y^{k+1}:t\\ge 0\\} \n\\label{eq:2twostep_r2-}\n\\\\\n \\vec u^{k+1}\\in\\nabla F_2( \\vec x^{k+1}),\\;\n \\vec v^{k+1}\\in\\nabla G_1( \\vec x^{k+1}),\n\\label{eq:2twostep_s2-}\n\\end{numcases}\n\\end{subequations}\nin which the previous assumption $F_1-F_2\\ge 0$ in \\eqref{iter1} and \\eqref{iter2} has been removed. For these modifications, a convergence property like Theorem \\ref{th:gsd} still holds.\n\n\\end{remark}\n\n\n\n\\begin{remark}\nTheorem \\ref{th:gsd} shows the local convergence of a general relaxation of Dinkelbach's algorithm in the spirit of DC programming.\n The DC programming consists in minimizing $F-G$ where $F$ and $G$ are convex functions. \nAs described in \\cite{MaeharaMurota15,MMM18}, both the original DC algorithm and its discrete version can be written as the simple iteration: $\\vec u^k\\in \\nabla G(\\vec x^k)$, $\\vec x^{k+1}\\in \\nabla F^\\star(\\vec u^k)$, where $F^\\star$ is the Fenchel conjugate of $F$. 
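As a toy numerical check (ours, not taken from \\cite{MaeharaMurota15,MMM18}): for $F(\\vec x)=\\frac12\\|\\vec x\\|^2$ one has $\\nabla F^\\star(\\vec u)=\\vec u$, so the iteration reduces to $\\vec x^{k+1}\\in\\nabla G(\\vec x^k)$. A minimal Python sketch with the hypothetical choice $G(\\vec x)=\\|\\vec x\\|_1$, so that $F-G$ is minimized exactly at the sign vectors:

```python
import numpy as np

def dc_iteration(x0, grad_G, steps=50):
    """DC iteration x^{k+1} in grad F*(grad G(x^k)) for F(x) = ||x||^2 / 2,
    where grad F* is the identity, so the update is x^{k+1} = grad G(x^k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = grad_G(x)
    return x

# G(x) = ||x||_1; a subgradient is the sign vector (choosing +1 at 0).
grad_G = lambda x: np.where(x >= 0, 1.0, -1.0)

x = dc_iteration(np.array([0.3, -2.0]), grad_G)
print(x)  # a fixed point: the sign vector [ 1. -1.]
```

Each coordinate of $F-G$ here is $\\frac12 x_i^2-|x_i|$, minimized at $x_i=\\pm1$, and the iteration indeed reaches such a point in one step and then stays there.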
It is known that such an iteration is equivalent to the following scheme \\begin{subequations}\n\\label{F-Giter1}\n\\begin{numcases}{}\n\\vec x^{k+1}\\in \\argmin\\limits_{\\vec x} F(\\vec x) -\\langle \\vec u^k,\\vec x\\rangle, \\label{eq:F-Gtwostep_x2}\n\\\\ \\vec u^{k+1}\\in\\nabla G( \\vec x^{k+1}).\n\\label{eq:F-Gtwostep_s2}\n\\end{numcases}\n\\end{subequations}\nMoreover,\na slight variation of the above scheme by adding a normalization step\n\\begin{subequations}\n\\label{FGiter1}\n\\begin{numcases}{}\n \\hat{\\vec x}^{k+1}\\in \\argmin\\limits_{\\vec x\\in\\ensuremath{\\mathbb{R}}^n} F(\\vec x) -\\langle \\vec u^k,\\vec x\\rangle, \\label{eq:FGtwostep_x2}\n\\\\ \\vec x^{k+1}= \\hat{\\vec x}^{k+1}\/G(\\hat{\\vec x}^{k+1})^{\\frac1p}\n\\\\ \\vec u^{k+1}\\in\\nabla G( \\vec x^{k+1}).\n\\label{eq:FGtwostep_s2}\n\\end{numcases}\n\\end{subequations}\ncan be used to solve the fractional programming $\\min F\/G$, where $F$ and $G$ are convex and $p$-homogeneous with $p>1$. This scheme is nothing but\nAlgorithm 2 in \\cite{HeinBuhler2010}.\nIn fact, we can say more about it. \n\\end{remark}\n\\begin{pro}\nLet $F$ and $G$ be convex, $p$-homogeneous and positive-definite functions on $\\ensuremath{\\mathbb{R}}^n$, where $p>1$. 
Then, for any initial point $\\vec x^0$, the sequence of the pairs $\\{(r^k,\\vec x^k)\\}_{k\\ge1}$ produced by the following scheme\n\\begin{subequations}\n\\label{iter1FG}\n\\begin{numcases}{}\n\\hat{\\vec x}^{k+1}\\in \\argmin\\limits_{\\vec x\\in \\ensuremath{\\mathbb{R}}^n} F(\\vec x) -a_k\\langle \\vec u^k,\\vec x\\rangle, \\label{eq:twostep_x2FG}\n\\\\\n\\vec x^{k+1}= b_{k+1} \n\\hat{\\vec x}^{k+1}\\; (\\mathrm{scaling}),\\;\\; r^{k+1}=F( \\vec x^{k+1})\/G( \\vec x^{k+1}),\n\\label{eq:twostep_r2FG}\n\\\\\n \\vec u^{k+1}\\in\\nabla G( \\vec x^{k+1}),\n\\label{eq:twostep_s2FG}\n\\end{numcases}\n\\end{subequations}\nconverges to an eigenpair $(r^*,\\vec x^*)$ of $(F,G)$ in the sense that $\\lim\\limits_{k\\to+\\infty}r^k= r^*$ and $\\vec x^*$ is a limit point of $\\{\\vec x^k\\}_{k\\ge1}$, whenever $a_k,b_k>0$ and both $\\{a_k\\}_{k\\ge 1}$ and $\\{b_k\\hat{\\vec x}^k\\}_{k\\ge 1}$\n are bounded away from $0$ and $\\infty$.\n\\end{pro}\n\nThe proof is very similar to the original proof of Theorem 3.1 in \\cite{HeinBuhler2010}, with an additional trick as in the proof of Case 2.1 in Theorem \\ref{th:gsd}. It can be regarded as a supplement to both Theorem 3.1 in \\cite{HeinBuhler2010} and Theorem \\ref{th:gsd}. It is also interesting that the scheme is stable under perturbations of $a_k$ and $b_k$. Besides, it can be seen that the resulting eigenvalue $r^*$ should be independent of the choice of $a_k$ and $b_k$. In fact, $r^*$ only depends on the initial data and the choice of subgradient $\\vec u^k$. \nThe assumption that $F$ is positive-definite can be removed in some sense. Indeed, if $r^k\\le0$ for some $k$, we can modify \\eqref{eq:twostep_x2FG} as $\\hat{\\vec x}^{k+1}\\in \\argmin\\limits_{\\vec x\\in \\ensuremath{\\mathbb{R}}^n} F(\\vec x) -r^kG(\\vec x)$ or $\\hat{\\vec x}^{k+1}\\in \\argmin\\limits_{\\vec x\\in \\ensuremath{\\mathbb{R}}^n} G(\\vec x) -\\frac{1}{r^k}F(\\vec x)$ when $r^k<0$. 
Then $\\{r^k\\}$ converges to the global minimum of $F\/G$.\n\n\nAnother solver for the continuous optimization $\\min\\frac{F(\\vec x)}{G(\\vec x)}$ is\n the stochastic subgradient method:\n\\begin{equation*}\n\\vec x^{k+1}=\\vec x^k-\\alpha_k(\\vec y^k+\\vec\\xi^k),\\;\\;\\;\\vec y^k\\in\\nabla\\frac{F(\\vec x^k)}{G(\\vec x^k)},\n\\end{equation*}\nwhere $\\{\\alpha_k\\}_{k\\ge1}$ is a step-size sequence and $\\{\\vec\\xi^k\\}_{k\\ge1}$ is now a sequence of random variables (the ``noise'') on some probability space. Theorem 4.2 in \\cite{Davis19-FoCM} shows that under some natural assumptions, almost surely, every limit point of the stochastic subgradient iterates $\\{\\vec x^k\\}_{k\\ge1}$ is critical for $F\/G$, and the function values $\\{\\frac{F}{G}(\\vec x^k)\\}_{k\\ge1}$ converge.\n\n\\section{Examples and Applications}\n\\label{sec:examples-Applications}\n\n\n\\subsection{Submodular vertex cover and multiway partition problems}\nAs a first immediate application of Theorem \\ref{thm:tilde-fg-equal}, we obtain an easy way to rediscover the famous identity by Lov\\'asz, and the two typical submodular optimizations -- submodular vertex cover and multiway partition problems.\n\\begin{example}\nThe identity $\\min\\limits_{A\\in\\ensuremath{\\mathcal{P}}(V)}f(A)=\\min\\limits_{\\vec x\\in[0,1]^V}f^L(\\vec x)$ discovered by Lov\\'asz in his original paper \\cite{Lovasz} can be obtained by our result. 
In fact, $$\\min\\limits_{A\\in\\ensuremath{\\mathcal{P}}(V)}f(A)=\\min\\limits_{A\\in\\ensuremath{\\mathcal{P}}(V)}\\frac{f(A)}{1}=\\min\\limits_{\\vec x\\in [0,\\infty)^V}\\frac{f^L(\\vec x)}{\\max\\limits_{i\\in V}x_i}=\\min\\limits_{\\vec x\\in [0,1]^V}\\frac{f^L(\\vec x)}{\\max\\limits_{i\\in V}x_i}=\\min\\limits_{\\vec x\\in [0,1]^V,\\max\\limits_i x_i=1}f^L(\\vec x).$$\nChecking this is easy:\n if $f\\ge 0$, then $\\min\\limits_{\\vec x\\in [0,1]^V,\\max\\limits_i x_i=1}f^L(\\vec x)=0$; if $f(A)<0$ for some $A\\subset V$, then $\\min\\limits_{\\vec x\\in [0,1]^V,\\max\\limits_i x_i=1}f^L(\\vec x)=\\min\\limits_{\\vec x\\in [0,1]^V}f^L(\\vec x)$.\n\\end{example}\n\n\\paragraph{Vertex cover number }\nA vertex cover (or node cover) of a graph is a set of vertices such that each edge of the graph is incident to at least one vertex of the set. The vertex cover number is the minimal cardinality of a vertex cover. Similarly, the independence number of a graph is the maximal number of vertices not connected by edges. 
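Both invariants are easy to compute by brute force on small instances; a minimal sketch (ours; the 4-cycle is an arbitrary test case):

```python
from itertools import combinations

def vertex_cover_number(n, edges):
    # smallest S such that every edge meets S
    for k in range(n + 1):
        for S in combinations(range(n), k):
            if all(u in S or v in S for u, v in edges):
                return k

def independence_number(n, edges):
    # largest S containing no edge
    for k in range(n, -1, -1):
        for S in combinations(range(n), k):
            if all(not (u in S and v in S) for u, v in edges):
                return k

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # the 4-cycle
print(vertex_cover_number(4, edges), independence_number(4, edges))  # 2 2
```

On the 4-cycle both numbers equal $2$, so they sum to the number of vertices.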
The sum of the vertex cover number and the independence number is the cardinality of the vertex set.\n\nBy a variation of the Motzkin-Straus theorem and Theorem \\ref{thm:graph-numbers}, the vertex cover number thus has at least two equivalent continuous representations similar to the independence number.\n\n\\paragraph{Submodular vertex cover problem}\nGiven a graph $G=(V,E)$, and a submodular function $f:\\ensuremath{\\mathcal{P}}(V)\\to[0,\\infty)$, find a vertex cover $S\\subset V$ minimizing $f(S)$.\n\nBy Theorem \\ref{thm:tilde-fg-equal},\n$$\\min\\{f(S):S\\subset V,\\,S\\text{ is a vertex cover}\\}=\\min\\limits_{\\vec x\\in{\\mathcal D}}\\frac{f^L(\\vec x)}{\\|\\vec x\\|_\\infty}=\\min\\limits_{\\vec x\\in \\widetilde{{\\mathcal D}}}f^L(\\vec x)$$\nwhere ${\\mathcal D}=\\{\\vec x\\in[0,\\infty)^V:V^t(\\vec x)\\text{ vertex cover},\\,\\forall t\\ge0\\}=\\{\\vec x\\in[0,\\infty)^V:x_i+x_j>0,\\forall\\{i,j\\}\\in E,\\,\\{i:x_i=\\max_j x_j\\}\\text{ vertex cover}\\}$, and $\\widetilde{{\\mathcal D}}=\\{\\vec x\\in{\\mathcal D}: \\|\\vec x\\|_\\infty=1\\}=\\{\\vec x\\ge\\vec 0:x_i+x_j\\ge 1,\\forall\\{i,j\\}\\in E,\\,\\{i:x_i=\\max_j x_j\\}\\text{ vertex cover}\\}$. 
Note that\n$$\\mathrm{conv}(\\widetilde{{\\mathcal D}})=\\{\\vec x:x_i+x_j\\ge 1,\\forall\\{i,j\\}\\in E,\\,x_i\\ge 0,\\forall i\\in V\\}.$$\n\nTherefore, $\\min\\limits_{\\vec x\\in \\mathrm{conv}(\\widetilde{{\\mathcal D}})}f^L(\\vec x)\\le \\min\\{f(S):\\text{ vertex cover }S\\subset V\\}$, which rediscovers the convex programming relaxation.\n\n\\paragraph{Submodular multiway partition problem}\nThis problem asks to minimize $\\sum_{i=1}^k f(V_i)$ subject to $V=V_1\\cup\\cdots\\cup V_k$, $V_i\\cap V_j=\\varnothing$, $i\\ne j$, $a_i\\in V_i$, $i=1,\\cdots,k$, where $f:\\ensuremath{\\mathcal{P}}(V)\\to\\ensuremath{\\mathbb{R}}$ is a submodular function.\n\n Letting $\\ensuremath{\\mathcal{A}}=\\{\\text{ partition }(A_1,\\cdots,A_k)\\text{ of }V:A_i\\ni a_i,\\,i=1,\\cdots,k\\}$, by Theorem \\ref{thm:tilde-fg-equal},\n $$\\min\\limits_{(A_1,\\cdots,A_k)\\in\\ensuremath{\\mathcal{A}}}\\sum_{i=1}^k f(A_i)=\\inf\\limits_{\\vec x\\in {\\mathcal D}_\\ensuremath{\\mathcal{A}}}\\frac{\\sum_{i=1}^k f^L(\\vec x^i)}{\\|\\vec x\\|_\\infty}=\\inf\\limits_{\\vec x\\in {\\mathcal D}'}\\sum_{i=1}^k f^L(\\vec x^i),$$\n where ${\\mathcal D}_\\ensuremath{\\mathcal{A}}=\\{\\vec x\\in [0,\\infty)^{kn}: (V^t(\\vec x^1),\\cdots,V^t(\\vec x^k))\\text{ is a partition}, V^t(\\vec x^i)\\ni a_i,\\forall t\\ge 0\\}=\\{\\vec x\\in [0,\\infty)^{kn}: \\vec x^i=t\\vec 1_{A_i},A_i\\ni a_i,\\forall t\\ge 0\\}$, and ${\\mathcal D}'=\\{(\\vec x^1,\\cdots,\\vec x^k):\\vec x^i\\in [0,\\infty)^V,\\,\\vec x^i=\\vec 1_{A_i},A_i\\ni a_i\\}$. 
Note that $$\\mathrm{conv}({\\mathcal D}')=\\{(\\vec x^1,\\cdots,\\vec x^k):\\sum_{i=1}^k x^i_v=1\\,\\forall v\\in V,\\,x^i_{a_i}=1,\\,x^i_v\\ge0\\}.$$ So one rediscovers the corresponding convex programming relaxation $\\min\\limits_{\\vec x\\in \\mathrm{conv}({\\mathcal D}')}\\sum_{i=1}^k f^L(\\vec x^i)$.\n \n\\subsection{Min-cut and Max-cut}\n\nGiven an undirected weighted graph $(V,E,w)$, the min-cut problem \n$$\\min\\limits_{S\\ne\\varnothing,V}|\\partial S|:=\\min\\limits_{S\\ne\\varnothing,V}|E(S,V\\setminus S)|=\\min\\limits_{S\\ne\\varnothing,V}\\sum\\limits_{i\\in S,j\\in V\\setminus S}w_{ij}$$\n and the \nmax-cut problem $$\\max\\limits_{S\\ne\\varnothing,V}|\\partial S|:=\\max\\limits_{S\\ne\\varnothing,V}|E(S,V\\setminus S)|=\\max\\limits_{S\\ne\\varnothing,V}\\sum\\limits_{i\\in S,j\\in V\\setminus S}w_{ij}$$\nhave been investigated systematically.\n\n\n\n\\begin{theorem}\\label{thm:mincut-maxcut-eigen}Let $(V,E,w)$ be a weighted undirected graph. \nThen we have the following equivalent continuous formulations of the min-cut and max-cut problems:\n$$\\min\\limits_{S\\ne\\varnothing,V}|\\partial S|=\\min\\limits_{\\min _ix_i+\\max_i x_i=0}\\frac{\\sum_{ij\\in E}w_{ij}|x_i-x_j|}{2\\|\\vec x\\|_\\infty}=\\tilde{\\lambda}_2,$$\n$$\\max\\limits_{S\\ne\\varnothing,V}|\\partial S|= \\max\\limits_{\\vec x\\ne \\vec0}\\frac{\\sum_{ij\\in E}w_{ij}|x_i-x_j|}{2\\|\\vec x\\|_\\infty}=\\tilde{\\lambda}_{\\max},$$\nwhere $\\tilde{\\lambda}_2$ and $\\tilde{\\lambda}_{\\max}$ are the second \n(i.e., the smallest nontrivial) eigenvalue and the largest eigenvalue of the nonlinear eigenvalue problem:\n\\begin{equation}\\label{eq:mincut-maxcut-eigen}\n\\vec0\\in \\nabla\\sum_{ij\\in E}w_{ij}|x_i-x_j|-\\lambda\\nabla 2\\|\\vec x\\|_\\infty. \n\\end{equation}\\end{theorem}\n\n\\begin{proof}\nWe only prove the min-cut case. 
It is clear that\n$$ \\min\\limits_{S\\ne\\varnothing,V}|\\partial S|= \\min\\limits_{A,B\\ne\\varnothing,A\\cap B=\\varnothing}\\frac{ |\\partial A|+|\\partial B|}{2}.$$\nLet $\\ensuremath{\\mathcal{A}}=\\{(A,B)\\in \\ensuremath{\\mathcal{P}}_2(V):A,B\\ne\\varnothing\\}$. Then ${\\mathcal D}_\\ensuremath{\\mathcal{A}}=\\{\\vec x\\in\\ensuremath{\\mathbb{R}}^n: \\max_i x_i=-\\min_i x_i>0\\}$, and by Theorem \\ref{thm:tilde-fg-equal}, $$\\min\\limits_{S\\ne\\varnothing,V}|\\partial S|=\\min\\limits_{(A,B)\\in\\ensuremath{\\mathcal{A}}}\\frac{ |\\partial A|+|\\partial B|}{2}=\\min\\limits_{\\vec x\\in{\\mathcal D}_\\ensuremath{\\mathcal{A}}}\\frac{\\sum_{ij\\in E}w_{ij}|x_i-x_j|}{2\\|\\vec x\\|_\\infty}. $$ \n In addition, according to Theorem \\ref{introthm:eigenvalue}, the set of the eigenvalues of $(f^L,g^L)$ coincides with \n $$\\left\\{\\frac{f^L(\\vec 1_A-\\vec 1_{V\\setminus A})}{g^L(\\vec 1_A-\\vec 1_{V\\setminus A})}:A\\subset V\\right\\}=\\left\\{\\frac{f(A,V\\setminus A)}{g(A,V\\setminus A)}:A\\subset V\\right\\}=\\left\\{\\frac{|\\partial A|+|\\partial (V\\setminus A)|}{2}:A\\subset V\\right\\}=\\{|\\partial A|:A\\subset V\\},$$\n where $f(A,B)=|\\partial A|+|\\partial B|$ and $g(A,B)=2$. Consequently, $\\min\\limits_{S\\ne\\varnothing,V}|\\partial S|$ is the second \neigenvalue of $(f^L,g^L)$. The proof is completed.\n\\end{proof}\n\nEq.~\\eqref{eq:mincut-maxcut-eigen} appears to be the first nonlinear eigenvalue problem possessing two nontrivial eigenvalues that are equivalent to two important graph optimization problems, respectively. 
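To make the discrete-to-continuous passage in Theorem \\ref{thm:mincut-maxcut-eigen} concrete: at $\\vec x=\\vec 1_S-\\vec 1_{V\\setminus S}$ the quotient $\\sum_{ij\\in E}w_{ij}|x_i-x_j|\/(2\\|\\vec x\\|_\\infty)$ evaluates exactly to $|\\partial S|$. A brute-force numerical sketch (ours; the small weighted graph is an arbitrary test case):

```python
from itertools import combinations

def cut_value(wedges, S):
    # total weight of edges crossing the cut (S, complement of S)
    return sum(w for u, v, w in wedges if (u in S) != (v in S))

def quotient(x, wedges):
    # the continuous objective sum w_ij |x_i - x_j| / (2 ||x||_inf)
    return sum(w * abs(x[u] - x[v]) for u, v, w in wedges) / (2 * max(map(abs, x)))

V = range(4)
wedges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (3, 0, 1.0), (0, 2, 0.5)]

cuts = []
for k in range(1, 4):
    for S in combinations(V, k):
        x = [1.0 if v in S else -1.0 for v in V]
        # the quotient at the signed indicator vector equals the cut value
        assert abs(quotient(x, wedges) - cut_value(wedges, set(S))) < 1e-12
        cuts.append(cut_value(wedges, set(S)))

print(min(cuts), max(cuts))  # min-cut and max-cut values: 2.5 7.0
```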
\n\nIn addition, by our results, we present a lot of equivalent continuous optimizations for the maxcut problem (see Examples \\ref{exam:maxcut1} and \\ref{exam:maxcut2}):\n\\begin{align*}\n\\max\\limits_{S\\subset V}|\\partial S|&=\\max\\limits_{x\\ne\n 0}\\frac{ \\sum_{\\{i,j\\}\\in E}w_{ij}(|x_i|+|x_j|-|x_i+x_j|)^p }{(2\\|\\vec\n x\\|_\\infty)^p}\n\\\\&=\\max\\limits_{x\\ne\n 0}\\frac{ \\sum_{\\{i,j\\}\\in E}w_{ij}|x_i-x_j|^p }{(2\\|\\vec\n x\\|_\\infty)^p}=\\max\\limits_{\\|\\vec\n x\\|_\\infty\\le\\frac12 } \\sum_{\\{i,j\\}\\in E}w_{ij}|x_i-x_j|^p \n\\end{align*}\nfor any $p\\ge1$. \n\n\\subsection{Max $k$-cut problem}\n\\label{sec:max-k-cut}\nThe max $k$-cut problem is to determine a graph $k$-cut by solving\n\\begin{equation}\\label{eq:maxk}\n\\mathrm{MaxC}_k(G)=\\max_{\\text{partition }(A_1,A_2,\\ldots,A_k)\\text{ of }V}\\sum_{i\\ne j}|E(A_i,A_j)|=\\max_{(A_1,A_2,\\ldots,A_k)\\in \\mathcal{C}_{k}(V)}\\sum_{i=1}^k|\\partial A_i|,\n\\end{equation}\nwhere $\\mathcal{C}_{k}(V)=\\{(A_1,\\ldots,A_k)\\big|A_i\\cap A_j = \\varnothing, \\bigcup_{i=1}^{k} A_i= V \\}$, and $\\partial A_i:=E(A_i,V\\setminus A_i)$. 
We may write \\eqref{eq:maxk} as\n$$ \\mathrm{MaxC}_k(G)=\\max_{(A_1,A_2,\\ldots,A_{k-1})\\in \\mathcal{P}_{k-1}(V)}\\sum_{i=1}^{k-1}|\\partial A_i|+|\\partial (A_1\\cup\\cdots\\cup A_{k-1})|.$$\nTaking $f_k(A_1,\\cdots,A_k)=\\sum_{i=1}^{k}|\\partial A_i|+|\\partial (A_1\\cup\\cdots\\cup A_{k})|$, the $k$-way Lov\\'asz extension is $$f^L_k(\\vec x^1,\\cdots,\\vec x^k)=\\sum_{i=1}^k\\sum_{j\\sim j'}|x^i_j-x^i_{j'}|+\\sum_{j\\sim j'}\\left|\\max\\limits_{i=1,\\cdots,k} x^i_j-\\max\\limits_{i=1,\\cdots,k} x^i_{j'}\\right|.$$\nApplying Theorem \\ref{thm:tilde-fg-equal}, we have\n$$ \\mathrm{MaxC}_{k+1}(G)=\\max\\limits_{\\vec x^i\\in\\ensuremath{\\mathbb{R}}^n_{\\ge0}\\setminus\\{\\vec0\\},\\,\\ensuremath{\\mathrm{supp}}(\\vec x^i)\\cap \\ensuremath{\\mathrm{supp}}(\\vec x^j)=\\varnothing}\\frac{\\sum_{i=1}^k\\sum_{j\\sim j'}|x^i_j-x^i_{j'}|+\\sum_{j\\sim j'}\\left|\\max\\limits_{i=1,\\cdots,k} x^i_j-\\max\\limits_{i=1,\\cdots,k} x^i_{j'}\\right|}{\\max\\limits_{i,j}x^i_j}.$$\n\\subsection{Relative isoperimetric constants on a subgraph with boundary}\n\\label{sec:boundary-graph-1-lap}\nGiven a finite graph $G=(V,E)$ and a subgraph, we\nconsider the Dirichlet and Neumann eigenvalue problems for the corresponding 1-Laplacian. 
For $A\\subset V$, put $\\overline{A}=A\\cup \\delta A$, where $\\delta A$ is the set of points in $A^c$ that are adjacent to some points in $A$ (see Fig.~\\ref{fig:AdeltaA}).\n\n\\begin{figure}[!h]\\centering\n\\begin{tikzpicture}[scale=0.69]\n\\draw (0,0) to (1,0);\n\\draw (0,0) to (0,1);\n\\draw (1,0) to (2,0);\n\\draw (1,0) to (1,1);\n\\draw (1,1) to (0,1);\n\\draw (0,2) to (0,1);\n\\draw (0,2) to (1,2);\n\\draw (2,0) to (2,1);\n\\draw (2,2) to (1,2);\n\\draw (2,2) to (2,1);\n\\draw (1,1) to (1,2);\n\\draw (1,1) to (2,1);\n\\draw (0,0) to (-0.9,0);\n\\draw (0,0) to (0,-0.9);\n\\draw (1,0) to (1,-0.9);\n\\draw (2,0) to (2,-0.9);\n\\draw (0,2) to (-0.9,2);\n\\draw (0,1) to (-0.9,1);\n\\draw[densely dotted] (-2,2) to (-1.1,2);\n\\draw[densely dotted] (-2,1) to (-1.1,1);\n\\draw[densely dotted] (-2,0) to (-1.1,0);\n\\draw[densely dotted] (4,2) to (3.1,2);\n\\draw[densely dotted] (4,1) to (3.1,1);\n\\draw[densely dotted] (4,0) to (3.1,0);\n\\draw[densely dotted] (2,-2) to (2,-1.1);\n\\draw[densely dotted] (1,-2) to (1,-1.1);\n\\draw[densely dotted] (0,-2) to (0,-1.1);\n\\draw[densely dotted] (2,4) to (2,3.1);\n\\draw[densely dotted] (1,4) to (1,3.1);\n\\draw[densely dotted] (0,4) to (0,3.1);\n\\draw[densely dotted] (-0.1,3) to (-1,3);\n\\draw[densely dotted] (-0.1,-1) to (-1,-1);\n\\draw[densely dotted] (2.1,3) to (3,3);\n\\draw[densely dotted] (2.1,-1) to (3,-1);\n\\draw[densely dotted] (3,-0.1) to (3,-1);\n\\draw[densely dotted] (-1,-0.1) to (-1,-1);\n\\draw[densely dotted] (3,2.1) to (3,3);\n\\draw[densely dotted] (-1,2.1) to (-1,3);\n\\draw[dashed] (-1,1.9) to (-1,1.1);\n\\draw[dashed] (-1,0.9) to (-1,0.1);\n\\draw[dashed] (3,1.9) to (3,1.1);\n\\draw[dashed] (3,0.9) to (3,0.1);\n\\draw[dashed] (1.9,-1) to (1.1,-1);\n\\draw[dashed] (0.9,-1) to (0.1,-1);\n\\draw[dashed] (1.9,3) to (1.1,3);\n\\draw[dashed] (0.9,3) to (0.1,3);\n\\node (00) at (0,0) {$\\bullet$};\n\\node (10) at (1,0) {$\\bullet$};\n\\node (11) at (1,1) {$\\bullet$};\n\\node (01) at (0,1) 
{$\\bullet$};\n\\node (02) at (0,2) {$\\bullet$};\n\\node (20) at (2,0) {$\\bullet$};\n\\node (12) at (1,2) {$\\bullet$};\n\\node (21) at (2,1) {$\\bullet$};\n\\node (22) at (2,2) {$\\bullet$};\n\\node (03) at (0,3) {$\\circ$};\n\\node (30) at (3,0) {$\\circ$};\n\\node (13) at (1,3) {$\\circ$};\n\\node (31) at (3,1) {$\\circ$};\n\\node (23) at (2,3) {$\\circ$};\n\\node (32) at (3,2) {$\\circ$};\n\\node (01') at (0,-1) {$\\circ$};\n\\node (1'0) at (-1,0) {$\\circ$};\n\\node (11') at (1,-1) {$\\circ$};\n\\node (1'1) at (-1,1) {$\\circ$};\n\\node (21') at (2,-1) {$\\circ$};\n\\node (1'2) at (-1,2) {$\\circ$};\n\\draw (2.9,0) to (2,0);\n\\draw (0,2.9) to (0,2);\n\\draw (2.9,1) to (2,1);\n\\draw (1,2.9) to (1,2);\n\\draw (2.9,2) to (2,2);\n\\draw (2,2.9) to (2,2);\n\\end{tikzpicture}\n\\caption{\\label{fig:AdeltaA} In this graph, let $A$ be the set of solid\n points, $\\delta A$ the set of hollow points. We only consider the edges\n for which one vertex is in $A$ and the other in $\\overline{A}$ (solid lines). 
We will ignore the dashed lines in $\\delta A$, and the dotted lines outside $\\overline{A}$.}\n\\end{figure}\n\nGiven $S\\subset \\overline{A}$, denote the boundary of $S$ relative to $A$ by\n$$\\partial_A S=\\{(u,v)\\in E:u\\in S\\cap A,v\\in \\delta A\\setminus S\\text{ or }u\\in S, v\\in A\\setminus S\\}.$$\nIf $S\\subset A$, then $\\partial_A S=\\{(u,v)\\in E:u\\in S, v\\in \\overline{A}\\setminus S\\}$.\n\nThe Cheeger (cut) constant of the subgraph $A$ of $G$ is defined as\n$$h(A)=\\min_{S\\subset \\overline{A}}\\frac{|\\partial_A S|}{\\min\\{\\vol(A\\cap S),\\vol(A\\setminus S)\\}}.$$\nA set pair $(S,\\overline{A}\\setminus S)$ that achieves the Cheeger constant is called a Cheeger cut.\n\nThe Cheeger isoperimetric constant\\footnote{Some authors call it the Dirichlet isoperimetric constant.} of $A$ is defined as\n$$h_1(A)=\\min_{S\\subset A}\\frac{|\\partial_A S|}{\\vol(S)},$$\nwhere a set $S$ achieving the Cheeger isoperimetric constant is called a Cheeger set. In the sequel, we fix $A\\subset V$, and we write $h(G)$ and $h_1(G)$ instead of $h(A)$ and $h_1(A)$, respectively.\n\nAccording to our generalized Lov\\'asz extension, we have\n\\begin{equation}\\label{eq:Dirichlet-Cheeger-h_1}\nh_1(G)\n=\\inf_{\\vec x\\in \\mathbb{R}^n\\setminus\\{0\\}}\\frac{\\sum_{i\\sim j} |x_i-x_j|+\\sum_{i\\in A} p_i|x_i|}{\\sum_{i\\in A}d_i|x_i|}\n\\end{equation}\nand\n$$h(G)\n=\\inf_{\\vec x\\in \\mathbb{R}^n\\setminus\\{0\\}}\\frac{\\sum_{i\\sim j,i,j\\in A}|x_i-x_j|+\\sum_{i\\sim j,i\\in A,j\\in\\delta A}|x_i-x_j|}{\\inf_{c\\in \\mathbb{R}}\\sum_{i\\in A}d_i|x_i-c|}.$$\n\nNote that the term on the right-hand side of \\eqref{eq:Dirichlet-Cheeger-h_1} can be written as\n$$\n\\inf\\limits_{ \\vec x|_{V\\setminus A}=0,\\, \\vec x\\ne 0}\\mathcal{R}_1(\\vec x)\n$$\nwhich is called the {\\sl Dirichlet $1$-Poincar\\'e constant} (see \\cite{OSY19}) over $A$,\nwhere $$\\mathcal{R}_1(\\vec x):=\\frac{\\sum\\limits_{\\{i,j\\}\\in E} |x_i-x_j|}{\\sum_i d_i|x_i|}$$\nis called the $1$-Rayleigh 
quotient of $\\vec x$.\n\nWe can consider the corresponding spectral problems.\n\\begin{itemize}\n\\item Dirichlet eigenvalue problem:\n$$\\begin{cases}\n\\Delta_1 \\vec x\\cap \\mu D \\Sgn \\vec x\\ne\\varnothing,& \\text{ in } A\\\\\n\\vec x = 0,&\\text{ on } \\delta A\n\\end{cases}$$ where $D$ is the diagonal matrix of the vertex degrees, \nthat is,\n$$\\begin{cases}\n(\\Delta_1 \\vec x)_i-\\mu d_i\\Sgn x_i\\ni 0,&i\\in A\\\\\nx_i=0,&i\\in\\delta A\n\\end{cases}$$~\nwhose component form is: $\\exists$~$c_i\\in \\Sgn(x_i)$,~$z_{ij}\\in \\Sgn(x_i-x_j)$ satisfying $z_{ji}=-z_{ij}$ and\n$$\\sum_{j\\sim i} z_{ij}+ p_ic_i\\in \\mu d_i\\Sgn(x_i),~i\\in A,$$\nin which $p_i$ is the number of neighbors of $i$ in $\\delta A$.\n\\item Neumann eigenvalue problem: There exists $c_i\\in \\Sgn(x_i)$,~$z_{ij}\\in \\Sgn(x_i-x_j)$ with $z_{ji}=-z_{ij}$ such that\n$$\n\\begin{cases}\n\\sum_{j\\sim i,j\\in \\overline{A}}z_{ij}-\\mu d_i c_i=0,&i\\in A\\\\\n\\sum_{j\\sim i,j\\in A}z_{ij}=0,&i\\in\\delta A.\n\\end{cases}\n$$\n\\end{itemize}\nFor a graph $G$ with boundary, we use $\\Delta_1^D(G)$ and $\\Delta_1^N(G)$ to denote the Dirichlet 1-Laplacian and the Neumann 1-Laplacian, respectively. 
Then\n\\begin{pro}\n$$h_1(G)=\\lambda_1(\\Delta_1^D(G))\\;\\text{ and }\\;h(G)=\\lambda_2(\\Delta_1^N(G)).$$\n\\end{pro}\n\\begin{figure}\\centering\n\\begin{tikzpicture}[scale=0.69]\n\\draw (0,0) to (2,0);\\draw (0,0) to (-0.9,0);\n\\draw (0,0) to (1,1);\n\\draw (0,0) to (1,-1);\n\\draw (1,1) to (1,-1);\\draw (1,1) to (1,1.9);\n\\draw (1,1) to (2,0);\n\\draw (1,-1) to (2,0);\\draw (1,-1) to (1,-1.9);\n\\draw (3,0) to (2,0);\\draw (3,0) to (3,0.9);\n\\draw (3,0) to (4,0);\\draw (3,0) to (3,-0.9);\n\\draw (6,0) to (4,0);\n\\draw (5,1) to (4,0);\n\\draw (5,-1) to (4,0);\n\\draw (5,1) to (6,0);\n\\draw (5,-1) to (6,0);\n\\draw (5,-1) to (5,1);\n\\draw (6,0) to (7,0);\n\\draw (8,0) to (7,0);\n\\draw (8,0) to (10,0);\n\\draw (8,0) to (9,1);\n\\draw (8,0) to (9,-1);\n\\draw (10,0) to (9,1);\n\\draw (10,0) to (9,-1);\n\\draw (9,1) to (9,-1);\n\\draw (10,0) to (10.9,0);\n\\draw (5,1) to (5,1.9);\n\\draw (5,1) to (5,-1.9);\n\\draw (9,1) to (9,1.9);\n\\draw (9,1) to (9,-1.9);\n\\draw (7,0) to (7,0.9);\n\\draw (7,0) to (7,-0.9);\n\\node (00) at (0,0) {$\\bullet$};\n\\node (11) at (1,1) {$\\bullet$};\n\\node (1-1) at (1,-1) {$\\bullet$};\n\\node (20) at (2,0) {$\\bullet$};\n\\node (30) at (3,0) {$\\bullet$};\n\\node (40) at (4,0) {$\\bullet$};\n\\node (60) at (6,0) {$\\bullet$};\n\\node (51) at (5,1) {$\\bullet$};\n\\node (5-1) at (5,-1) {$\\bullet$};\n\\node (70) at (7,0) {$\\bullet$};\n\\node (80) at (8,0) {$\\bullet$};\n\\node (100) at (10,0) {$\\bullet$};\n\\node (91) at (9,1) {$\\bullet$};\n\\node (9-1) at (9,-1) {$\\bullet$};\n\\node (-10) at (-1,0) {$\\circ$};\n\\node (110) at (11,0) {$\\circ$};\n\\node (12) at (1,2) {$\\circ$};\n\\node (1-2) at (1,-2) {$\\circ$};\n\\node (31) at (3,1) {$\\circ$};\n\\node (3-1) at (3,-1) {$\\circ$};\n\\node (71) at (7,1) {$\\circ$};\n\\node (7-1) at (7,-1) {$\\circ$};\n\\node (52) at (5,2) {$\\circ$};\n\\node (5-2) at (5,-2) {$\\circ$};\n\\node (92) at (9,2) {$\\circ$};\n\\node (9-2) at (9,-2) 
{$\\circ$};\n\\end{tikzpicture}\n\\caption{\\label{fig:k-nodal-domain} In this example, there are $3$ nodal domains of an eigenvector corresponding to the first Dirichlet eigenvalue of the graph 1-Laplacian. Each nodal domain is the vertex set of the $4$-order complete subgraph shown in the figure. }\n\\end{figure}\n\nFor a connected graph, the first eigenvector of $\\Delta_1^N(G)$ is constant\nand it has only one nodal domain while the first eigenvector of $\\Delta_1^D(G)$\n may have any number of nodal domains.\n \\begin{pro}\n For any $k\\in \\mathbb{N}^+$, there exists a connected graph $G$ with boundary such that its Dirichlet 1-Laplacian $\\Delta_1^D(G)$ has a first eigenvector (corresponding to $\\lambda_1(\\Delta_1^D(G))$) with exactly $k$ nodal domains; and its Neumann 1-Laplacian $\\Delta_1^N(G)$ possesses a second eigenvector (corresponding to $\\lambda_2(\\Delta_1^N(G))$) with exactly $k$ nodal domains.\n \\end{pro}\n\n\n \\begin{figure}\\centering\n \\begin{tikzpicture}[scale=0.8]\n\\node (1) at (0,1) {$\\bullet$};\n\\node (2) at (1,0) {$\\bullet$};\n\\node (3) at (1,1) {$\\bullet$};\n\\node (4) at (2,2) {$\\bullet$};\n\\node (5) at (3,1) {$\\bullet$};\n\\node (6) at (4,1) {$\\bullet$};\n\\node (7) at (3,0) {$\\bullet$};\n\\node (8) at (1,3) {$\\bullet$};\n\\node (9) at (1,4) {$\\bullet$};\n\\node (10) at (0,3) {$\\bullet$};\n\\node (11) at (3,3) {$\\bullet$};\n\\node (12) at (3,4) {$\\bullet$};\n\\node (13) at (4,3) {$\\bullet$};\n\\draw (0,1) to (1,1);\n\\draw (1,0) to (1,1);\n\\draw (1,1) to (2,2);\n\\draw (2,2) to (3,1);\n\\draw (3,1) to (4,1);\n\\draw (3,1) to (3,0);\n\\draw (2,2) to (1,3);\n\\draw (1,3) to (1,4);\n\\draw (1,3) to (0,3);\n\\draw (2,2) to (3,3);\n\\draw (3,3) to (3,4);\n\\draw (3,3) to (4,3);\n\\end{tikzpicture}\n\\caption{\\label{fig:second-eigen-k-nodal-domain} In this example, there are $4$ nodal domains of an eigenvector corresponding to the second Neumann eigenvalue of the graph 1-Laplacian. 
Each nodal domain is the vertex set of the $3$-order subgraph after removing the center vertex and its edges.}\n\\end{figure}\n\n\n\\subsection{Independence number}\\label{sec:independent-number}\nThe independence number $\\alpha(G)$ of an unweighted and undirected simple graph $G$ is the largest cardinality of a subset of vertices in $G$, no two of which are adjacent. It can be seen as an optimization problem $\\max\\limits_{S\\subset V \\text{ s.t. }E(S)=\\varnothing}\\#S$. However, such a graph optimization is not global, and its feasible domain is rather complicated.\nA simple remedy is to multiply by the truncation factor $(1-\\#E(S))$.\nThe independence number can then be expressed as a global optimization on the power set of vertices:\n\\begin{equation}\\label{eq:independent-multiple}\n\\alpha(G)=\\max\\limits_{S\\subset V}\\#S(1-\\#E(S)),\n\\end{equation}\nand thus the Lov\\'asz extension can be applied.\n\\begin{proof}[Proof of Eq.~\\eqref{eq:independent-multiple}] Since $G$ is simple, $\\#S$ and $\\#E(S)$ take values in the natural numbers. Therefore,\n$$\n\\#S(1-\\#E(S))\\;\\; \\begin{cases}\\le 0,&\\text{ if } E(S)\\ne\\varnothing\\text{ or }S=\\varnothing,\\\\\n\\ge 1,&\\text{ if } E(S)=\\varnothing\\text{ and }S\\ne\\varnothing.\\end{cases}\n$$\nThus, $\\max\\limits_{S\\subset V}\\#S(1-\\#E(S))=\\max\\limits_{S\\subset V \\text{ s.t. }E(S)=\\varnothing}\\#S=\\alpha(G)$.\n\\end{proof}\nHowever, Eq.~\\eqref{eq:independent-multiple} is difficult to calculate. By the disjoint-pair Lov\\'asz extension, it equals \n$$\n\\alpha(G)=\\max\\limits_{\\vec x\\ne \\vec 0}\\frac{\\|\\vec x\\|_1-\\sum\\limits_{k\\in V,i\\sim j}\\min\\{|x_k|,|x_i|,|x_j|\\}}{\\|\\vec x\\|_\\infty},\n$$\nbut we do not know how to simplify it further.\n\nFortunately, there is a known representation of the independence number as follows, and we present a proof for convenience. 
\n\\begin{pro}The independence number $\\alpha(G)$ of a finite simple graph $G=(V,E)$ satisfies\n\\begin{equation}\\label{eq:independent-difference}\n\\alpha(G)=\\max\\limits_{S\\subset V}\\left(\\#S-\\#E(S)\\right).\n\\end{equation}\n\\end{pro}\n\n\\begin{proof}\nLet $A$ be a maximum independent set of $G$; then $\\alpha(G)=\\#A=\\#A-\\#E(A)\\le \\max\\limits_{S\\subset V}\\left(\\#S-\\#E(S)\\right)$ because there is no edge connecting points in $A$.\n\nLet $B\\subset V$ satisfy $ \\#B-\\#E(B) =\\max\\limits_{S\\subset V}\\left(\\#S-\\#E(S)\\right)$. Assume the induced subgraph $(B,E(B))$ has $k$ connected components, $(B_i,E(B_i))$, $i=1,\\cdots,k$. Then $B=\\sqcup_{i=1}^k B_i$ and $E(B)=\\sqcup_{i=1}^k E(B_i)$. Since $(B_i,E(B_i))$ is connected, $\\# B_i\\le \\# E(B_i)+1$ and equality holds if and only if $(B_i,E(B_i))$ is a tree.\nNow taking $B'\\subset B$ such that $\\#(B'\\cap B_i)=1$, $i=1,\\cdots,k$, then $B'$ is an independent set and thus \\begin{align*}\n\\alpha(G)&\\ge \\#B'=k=\\sum_{i=1}^k 1\\ge \\sum_{i=1}^k (\\# B_i- \\# E(B_i))\n= \\sum_{i=1}^k\\# B_i- \\sum_{i=1}^k\\# E(B_i)\\\\&=\\#(\\cup_{i=1}^kB_i)-\\#(\\cup_{i=1}^kE(B_i)) = \\#B-\\#E(B)=\\max\\limits_{S\\subset V}\\left(\\#S-\\#E(S)\\right).\n\\end{align*}\nAs a result, Eq.~\\eqref{eq:independent-difference} is proved.\n\\end{proof}\n\nAccording to the Lov\\'asz extension, we get\n\\begin{equation}\\label{eq:independent-continuous}\n\\alpha(G)=\\max\\limits_{\\vec x\\ne \\vec 0}\\frac{\\|\\vec x\\|_1-\\sum\\limits_{i\\sim j}\\min\\{|x_i|,|x_j|\\}}{\\|\\vec x\\|_\\infty}.\n\\end{equation}\nBy the elementary identities: $\\sum_{i\\sim j}|x_i+x_j|+\\sum_{i\\sim j}|x_i-x_j|=2\\sum_{i\\sim j}\\max\\{|x_i|,|x_j|\\}=\\sum_{i\\sim j}\\left||x_i|-|x_j|\\right|+\\sum_i\\mathrm{deg}_i|x_i|$ and $\\sum_i\\mathrm{deg}_i|x_i|=\\sum_{i\\sim j}\\max\\{|x_i|,|x_j|\\}+\\sum_{i\\sim j}\\min\\{|x_i|,|x_j|\\}$, Eq.~\\eqref{eq:independent-continuous} can be reduced 
to\n\\begin{equation}\\label{eq:independent-continuous-1}\n\\alpha(G)=\\max\\limits_{\\vec x\\ne \\vec 0}\\frac{2\\|\\vec x\\|_1+I^-(\\vec x)+I^+(\\vec x)- 2\\|\\vec x\\|_{1,\\mathrm{deg}}}{2\\|\\vec x\\|_\\infty},\n\\end{equation}\nwhere $I^\\pm(\\vec x)=\\sum_{i\\sim j}|x_i\\pm x_j|$ and $\\|\\vec x\\|_{1,\\mathrm{deg}}=\\sum_i\\mathrm{deg}_i|x_i|$. One would like to write Eq.~\\eqref{eq:independent-continuous-1} as\n\\begin{equation}\\label{eq:independent-continuous-2}\n\\alpha(G)=\\max\\limits_{\\vec x\\ne \\vec 0}\\frac{I^-(\\vec x)+I^+(\\vec x)- 2\\|\\vec x\\|_{1,\\mathrm{deg}'}}{2\\|\\vec x\\|_\\infty},\n\\end{equation}\nwhere $\\|\\vec x\\|_{1,\\mathrm{deg}'}=\\sum\\limits_{i\\in V}(\\mathrm{deg}_i-1)|x_i|$.\n\n\n\\begin{remark}\nThe maximum clique number can be reformulated in a similar way. \nIn addition, we refer to \\cite{BBcontinuous05,BBcontinuous06,SBBB20} for some other continuous formulations of the independence number. \n\\end{remark}\n\n\n\n\\paragraph{Chromatic number of a perfect graph}\nBerge's strong perfect graph conjecture was proved in \\cite{Annals06}. A graph $G$ is perfect if for every induced subgraph $H$ of $G$, the chromatic number of $H$ equals the size of the largest clique of $H$. The complement of every perfect graph is perfect.\n\n\nSo for a perfect graph, we have an easy way to calculate the chromatic number. In a general simple graph, we refer to Section \\ref{sec:chromatic-number} for a transformation of the chromatic number.\n\n\\paragraph{Maximum matching }\nA matching $M$ in $G$ is a set of pairwise non-adjacent edges, none of which are loops; that is, no two edges share a common vertex.\nA maximum matching is one with the largest possible number of edges.\n\nConsider the line graph $(E,R)$ whose vertex set $E$ is the edge set of $G$, and whose edge set is $R=\\{\\{e,e'\\}:e\\cap e'\\not=\\varnothing,\\,e,e'\\in E\\}$. Then the maximum matching number of $(V,E)$ coincides with the independence number of $(E,R)$. 
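This line-graph reduction is easy to verify by brute force on small graphs; a sketch (ours; the path graph on four vertices is an arbitrary test case):

```python
from itertools import combinations

def max_matching_size(edges):
    # largest set of pairwise vertex-disjoint edges
    for k in range(len(edges), 0, -1):
        for M in combinations(edges, k):
            verts = [v for e in M for v in e]
            if len(verts) == len(set(verts)):
                return k
    return 0

def independence_number(nodes, adj):
    # largest set of nodes spanning no edge of adj
    for k in range(len(nodes), -1, -1):
        for S in combinations(nodes, k):
            if all(not (set(e) <= set(S)) for e in adj):
                return k

edges = [(0, 1), (1, 2), (2, 3)]                 # the path on 4 vertices
line_edges = [(e, f) for e, f in combinations(edges, 2)
              if set(e) & set(f)]                # adjacency of the line graph
print(max_matching_size(edges),
      independence_number(edges, line_edges))    # both equal 2
```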
So, we have an equivalent continuous optimization for a maximum matching problem.\n\nHall's Marriage Theorem provides a characterization of bipartite graphs which have a perfect matching and the Tutte theorem provides a characterization for arbitrary graphs.\n\nThe Tutte-Berge formula says that the size of a maximum matching of a graph is\n$$\\frac12\\min\\limits_{U\\subset V}\\left(\\#V+\\#U-\\#\\text{ odd connected components of }G|_{V\\setminus U}\\right).$$\nCan one transform the above discrete optimization problem into an explicit continuous optimization via some extension?\n\n\\paragraph{$k$-independence number }\nThe independence number admits several generalizations:\n the maximum size of a set of vertices in a graph whose induced subgraph has maximum degree $(k-1)$ \\cite{CaroH13}; the size of the largest $k$-colourable subgraph \\cite{Spacapan11}; the\nsize of the largest set of vertices such that any two vertices in the set are at short-path distance larger than $k$ (see \\cite{Fiol97}). 
For the $k$-independence number involving short-path distance, one can easily transform it into the following two continuous representations:\n$$\\alpha_k= \\max\\limits_{\\vec x\\in\\ensuremath{\\mathbb{R}}^V\\setminus\\{\\vec 0\\}}\\frac{\\|\\vec x\\|_1^2}{\\|\\vec x\\|_1^2-2\\sum\\limits_{\\mathrm{dist}(i,j)\\ge k+1}x_ix_j} = \\max\\limits_{\\vec x\\in \\ensuremath{\\mathbb{R}}^n\\setminus\\{\\vec 0\\}}\\frac{\\sum\\limits_{\\mathrm{dist}(i,j)\\le k}(|x_i-x_j|+|x_i+x_j|)- 2\\sum\\limits_{i\\in V}(\\deg_{k}(i)-1)|x_i|}{2\\|\\vec x\\|_\\infty},$$\nwhere $\\deg_{k}(i)=\\#\\{j\\in V:\\mathrm{dist}(j,i)\\le k\\}$, $i=1,\\cdots,n$.\n\\subsection{Various and variant Cheeger problems}\n\\label{sec:variantCheeger}\n\nSeveral Cheeger-type constants on graphs have been proposed that are different from the classical one.\n\n\\paragraph{Multiplicative Cheeger constant}\nFor instance, consider\n$$h=\\min\\limits_{\\varnothing\\ne A\\subsetneqq V}\\frac{ \\#E(A,V\\setminus A) }{\\#A\\cdot\\#(V\\setminus A)}.$$\nIt is called the normalized cut problem, which has many applications in image segmentation and spectral clustering \\cite{SM00,Luxburg07,HS11}.
By Proposition \\ref{pro:fraction-f\/g}, it is equal to\n$$\\min\\limits_{\\langle\\vec x,\\vec1\\rangle=0,\\vec x\\ne \\vec0}\\frac{\\sum_{i\\sim j}|x_i-x_j|}{\\sum_{i< j}|x_i-x_j|}.$$\n\n\\paragraph{Isoperimetric profile}\nThe isoperimetric profile $IP:\\mathbb{N}\\to [0,\\infty)$ is defined by\n$$IP(k)= \\inf\\limits_{A\\subset V,\\#A\\le k} \\frac{\\#E(A,V\\setminus A)}{\\#A}.$$\nThen by Lov\\'asz extension, it is equal to\n$$\\inf\\limits_{\\vec x\\in\\ensuremath{\\mathbb{R}}^V,\\,1\\le \\# \\ensuremath{\\mathrm{supp}}(\\vec x)\\le k}\\frac{\\sum_{\\{i,j\\}\\in E}|x_i-x_j|}{\\|\\vec x\\|_1}=\\min\\limits_{\\vec x\\in CH_k(\\ensuremath{\\mathbb{R}}^V)}\\frac{\\sum_{\\{i,j\\}\\in E}|x_i-x_j|}{\\|\\vec x\\|_1},$$\nwhere $CH_k(\\ensuremath{\\mathbb{R}}^V):=\\{\\vec x\\in\\ensuremath{\\mathbb{R}}^V:\\,\\# \\ensuremath{\\mathrm{supp}}(\\vec x)\\le k\\}$ is the union of all $k$-dimensional coordinate hyperplanes in $\\ensuremath{\\mathbb{R}}^V$.\n\n\\paragraph{Modified Cheeger constant}\n\nOn a graph $G=(V,E)$, there are three definitions of the vertex-boundary of a subset $A\\subset V$:\n\\begin{align}\n& \\partial_{\\textrm{ext}} A:=\\{j\\in V\\setminus A\\,\\left|\\,\\{j,i\\}\\in E\\text{ for some }i\\in A\\right.\\} \\label{eq:ext-vertex-boundary}\\\\\n& \\partial_{\\textrm{int}} A:=\\{i\\in A\\,\\left|\\,\\{i,j\\}\\in E\\text{ for some }j\\in V\\setminus A\\right.\\}\\label{eq:int-vertex-boundary}\\\\\n& \\partial_{\\textrm{ver}} A:=\\partial_{\\textrm{ext}} A\\cup \\partial_{\\textrm{int}} A=V(E(A,V\\setminus A))=V(\\partial_{\\textrm{edge}} A)\\label{eq:vertex-boundary}\n\\end{align}\nThe {\\sl external vertex boundary} \\eqref{eq:ext-vertex-boundary} and the {\\sl internal vertex boundary} \\eqref{eq:int-vertex-boundary} were recently introduced and studied in \\cite{Vigolo19tams,VigoloPHD}.
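For concreteness, the three vertex boundaries are straightforward to compute directly from their definitions; a minimal sketch (the path graph, the subset, and the function name are our illustrative choices):

```python
def vertex_boundaries(V, E, A):
    """External, internal and full vertex boundary of A in G = (V, E)."""
    V, A = set(V), set(A)
    E = [set(e) for e in E]
    ext = {j for j in V - A if any({i, j} in E for i in A)}   # outside, adjacent to A
    int_ = {i for i in A if any({i, j} in E for j in V - A)}  # inside, adjacent to A^c
    return ext, int_, ext | int_

# Path 0-1-2-3-4 with A = {1, 2}.
V = range(5)
E = [(0, 1), (1, 2), (2, 3), (3, 4)]
ext, int_, ver = vertex_boundaries(V, E, {1, 2})
print(sorted(ext), sorted(int_), sorted(ver))  # [0, 3] [1, 2] [0, 1, 2, 3]
```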
Research on metric measure space \\cite{HMT19} suggests to consider the {\\sl vertex boundary} \\eqref{eq:vertex-boundary}.\n\nDenote by $N(i)=\\{i\\}\\cup\\{j\\in V:\\{i,j\\}\\in E\\}$ the 1-neighborhood of $i$. Then the Lov\\'asz extensions of $\\#\\partial_{\\textrm{ext}} A$, $\\#\\partial_{\\textrm{int}} A$ and $\\#\\partial_{\\textrm{ver}} A$ are\n$$\\sum\\limits_{i=1}^n(\\max\\limits_{j\\in N(i)}x_j-x_i),\\;\\;\\;\\sum\\limits_{i=1}^n(x_i-\\min\\limits_{j\\in N(i)}x_j)\\;\\;\\text{ and }\\;\\;\\sum\\limits_{i=1}^n(\\max\\limits_{j\\in N(i)}x_j-\\min\\limits_{j\\in N(i)}x_j),$$ respectively.\nThey can be seen as the `total variation' of $\\vec x$ with respect to $V$ in $G$, while the usual {\\sl edge boundary} leads to $\\sum\\limits_{\\{i,j\\}\\in E}|x_i-x_j|$ which is regarded as the total variation of $\\vec x$ with respect to $E$ in $G$. Their disjoint-pair Lov\\'asz extensions are $$\\sum_{i=1}^n \\max\\limits_{j\\in N(i)} |x_j|-\\|\\vec x\\|_1,\\;\\;\\;\\|\\vec x\\|_1-\\sum_{i=1}^n \\min\\limits_{j\\in N(i)} |x_j|,\\;\\;\\;\\sum_{i=1}^n \\left(\\max\\limits_{j\\in N(i)} |x_j|-\\min\\limits_{j\\in N(i)} |x_j|\\right).$$\nComparing with the graph $1$-Poincare profile (see \\cite{Hume17,HMT19,Hume19arxiv}) $$P^1(G):=\\inf\\limits_{\\langle\\vec x,\\vec 1\\rangle=0,\\vec x\\ne\\vec 0}\\frac{\\sum_{i\\in V} \\max\\limits_{j\\sim i}|x_i-x_j|}{\\|\\vec x\\|_1},$$\nwe easily get the following\n\\begin{pro}\n$$ \\frac12\\max\\{h_{\\mathrm{int}}(G),h_{\\mathrm{ext}}(G)\\}\\le P^1(G)\\le h_{\\mathrm{ver}}(G):=\\min\\limits_{A\\in\\ensuremath{\\mathcal{P}}(V)\\setminus\\{\\varnothing,V\\}}\n\\frac{\\#\\partial_{\\mathrm{ver}} A}{\\min\\{\\#(A),\\#(V\\setminus A)\\}}$$\nwhere $h_{\\mathrm{int}}(G)$, $h_{\\mathrm{ext}}(G)$ and $h_{\\mathrm{ver}}(G)$ are modified Cheeger constants w.r.t. 
the type of vertex-boundary.\n\\end{pro}\n\\begin{proof}\nBy Theorem \\ref{introthm:eigenvalue}, $$h_{\\mathrm{ver}}(G)=\\min\\limits_{A\\in\\ensuremath{\\mathcal{P}}(V)\\setminus\\{\\varnothing,V\\}}\n\\frac{\\#\\partial_{\\mathrm{ver}} A}{\\min\\{\\#(A),\\#(V\\setminus A)\\}}=\\inf\\limits_{\\langle\\vec x,\\vec 1\\rangle=0,\\vec x\\ne\\vec 0}\\frac{\\sum_{i\\in V} \\max\\limits_{j\\sim i}|x_i-x_j|}{\\min\\limits_{t\\in\\ensuremath{\\mathbb{R}}}\\|\\vec x-t\\vec1\\|_1}\\ge P^1(G).$$\nOn the other hand, it is easy to check that $\\min\\limits_{t\\in\\ensuremath{\\mathbb{R}}}\\|\\vec x-t\\vec1\\|_1\\ge \\frac12 \\|\\vec x\\|_1$ whenever $\\langle\\vec x,\\vec 1\\rangle=0$. Thus, $h_{\\mathrm{ver}}(G)\\le 2P^1(G)$. The proof is then completed by noting that $\\max\\{h_{\\mathrm{int}}(G),h_{\\mathrm{ext}}(G)\\}\\le h_{\\mathrm{ver}}(G)$.\n\\end{proof}\n\n\\paragraph{Cheeger-like constant}\nSome further recent results \\cite{JM19arxiv} can be also rediscovered via Lov\\'asz extension.\n\nA main equality in \\cite{JM19arxiv} can be absorbed into the following identities:\n\\begin{align}\n\t \\max_{\\text{edges }(v,w)}\\biggl(\\frac{1}{\\deg v}+\\frac{1}{\\deg w}\\biggr)\n&=\n\\max_{\\gamma:E\\rightarrow\\mathbb{R}}\\frac{\\sum_{v\\in V}\\frac{1}{\\deg v}\\cdot \\biggl|\\sum_{e_{\\text{in}}: v\\text{ input}}\\gamma(e_{\\text{in}})-\\sum_{e_{\\text{out}}: v\\text{ output}}\\gamma(e_{\\text{out}})\\biggr|}{\\sum_{e\\in E}|\\gamma(e)|} \\notag\n\\\\&=\\max_{\\hat{\\Gamma}\\subset\\Gamma \\text{ bipartite}} \\frac{\\sum_{v\\in V}\\frac{\\deg_{\\hat{\\Gamma}}(v)}{\\deg_\\Gamma (v)}}{|E(\\hat{\\Gamma})|}, \\label{eq:mainJM19}\n\\end{align}\nwhere the left quantity is called a Cheeger-like constant \\cite{JM19arxiv}.\n\nIn fact, given $c_i\\ge 0$, $i\\in V$,\n$$\\max\\limits_{\\{i,j\\}\\in E}(c_i+c_j)=\\max\\limits_{E'\\subset E}\\frac{\\sum_{\\{i,j\\}\\in E'}(c_i+c_j)}{\\# E'},$$\nand then via Lov\\'asz extension, one immediately gets that the above constant equals 
$$\n\\max\\limits_{\\vec x\\in [0,\\infty)^{E}\\setminus\\{\\vec0\\}} \\frac{\\sum\\limits_{e=\\{i,j\\}\\in E}x_e(c_i+c_j)}{\\sum_{e\\in E}x_e}=\\max\\limits_{\\vec x\\in[0,\\infty)^{E}\\setminus\\{\\vec0\\}} \\frac{\\sum_{i\\in V}c_i\\sum_{e\\ni i}x_e}{\\sum_{e\\in E}x_e}=\\max\\limits_{\\vec x\\in \\ensuremath{\\mathbb{R}}^{E}\\setminus\\{\\vec0\\}} \\frac{\\sum_{i\\in V}c_i\\left|\\sum_{e\\ni i}x_e\\right|}{\\sum_{e\\in E}|x_e|}.\n$$\nThus, for any family $\\ensuremath{\\mathcal{E}}\\subset\\ensuremath{\\mathcal{P}}(E)$ such that $\\ensuremath{\\mathcal{E}}\\supset \\{\\{e\\}:e\\in E\\}$, we have\n$$\\max\\limits_{\\{i,j\\}\\in E}(c_i+c_j)=\\max\\limits_{\\vec x\\in \\ensuremath{\\mathbb{R}}^{E}\\setminus\\{\\vec0\\}} \\frac{\\sum_{i\\in V}c_i\\left|\\sum_{e\\ni i}x_e\\right|}{\\sum_{e\\in E}|x_e|}=\\max\\limits_{E'\\in\\ensuremath{\\mathcal{E}}}\\frac{\\sum_{\\{i,j\\}\\in E'}(c_i+c_j)}{\\# E'},$$\nwhich recovers the interesting equality \\eqref{eq:mainJM19} by taking $c_i=\\frac{1}{\\deg i}$ and $\\ensuremath{\\mathcal{E}}$ the collection of all edge sets of bipartite subgraphs.\n\nA similar simple trick gives\n\\begin{equation*}\n \\min_{(v,w)}\\frac{\\bigl|\\mathcal{N}(v)\\cap \\mathcal{N}(w)\\bigr|}{\\max\\{\\deg v,\\deg w\\}}=\\min\\limits_{\\vec x\\in \\ensuremath{\\mathbb{R}}^{E}\\setminus\\{\\vec0\\}} \\frac{\\sum_{i\\in V}\\sum_{e\\ni i}\\left|x_e\\right|\\cdot\\#\\text{ triangles containing }e}{\\sum_{e=\\{i,j\\}\\in E}|x_e|\\max\\{\\deg i,\\deg j\\}}.\n\\end{equation*}\n\n\n\\subsection{Frustration in signed networks}\n\\label{sec:frustration}\nIn this section, we apply our theory to signed graphs, a concept first introduced by Harary \\cite{Harary55}.\n\\begin{defn}\n A \\emph{signed graph} $\\Gamma$ consists of a vertex set $V$ and a set $E$ of undirected edges with a sign function\n\\begin{equation}\\label{sign1}\ns:E \\to \\{+1,-1\\}.\n\\end{equation}\nThe adjacency matrix of $(\\Gamma,s)$ is denoted by
$\\mathrm{A}^s:=(s_{ij})_{i,j\\in V}$, where $s_{ij}:=s(e)$ if $e=\\{i,j\\}\\in E$, and $s_{ij}:=0$ otherwise.\n\\end{defn}\nWhen we replace the sign function $s$ by $-s$, we shall call the resulting graph \\emph{antisigned}.\n\\begin{defn}\n The signed cycle $C_m$ (consisting of $m$ vertices that are cyclically connected by $m$ edges) is \\emph{balanced} if \n\\begin{equation}\\label{sign3}\n\\prod_{i=1}^m s(e_i)=1.\n\\end{equation}\nA signed graph $(\\Gamma,s)$ is \\emph{balanced} if every cycle contained in it is balanced.\\\\\n$(\\Gamma,s)$ is \\emph{antibalanced} if $(\\Gamma,-s)$ is balanced. \\\\\nThe frustration index of a signed graph $\\Gamma=(V,E)$ is \n\\begin{equation}\\label{eq:frustration}\n\\min_{x_i\\in \\{-1,1\\},\\forall i} \\sum_{\\{i,j\\}\\in E}|x_i-s_{ij}x_j|,\n\\end{equation} where $s_{ij}\\in\\{-1,1\\}$ indicates the sign of the edge $(i,j)$.\n\\end{defn}\nThe frustration index then vanishes iff the graph is balanced.\\\\\n\\begin{defn}\n The (normalized) Laplacian $\\Delta^s$ of a signed graph is defined by\n\\begin{equation}\n\\label{sign5}\n(\\Delta^s \\vec x)_i:= x_i-\\frac{1}{\\deg i}\\sum_{j \\sim i}s_{ij}x_j=\\frac{1}{\\deg i}\\sum_{j \\sim i}(x_i-s_{ij}x_j)\n\\end{equation}\nfor a vector $\\vec x\\in\\ensuremath{\\mathbb{R}}^V$.\n\\end{defn}\n\\begin{remark}\nThe Laplacian thus is of the form $\\Delta^s =\\mathrm{id}-\\mathrm{A}^s$, and \n when we change the signs of all the edges, that is, go from a signed graph to the corresponding antisigned graph, the operator becomes $\\Delta^{-s} =\\mathrm{id}+\\mathrm{A}^s$. Therefore, the eigenvalues simply change from $\\lambda$ to $2-\\lambda$ (and therefore, also the ordering gets reversed). \n\\end{remark}\n\nBy Proposition \\ref{pro:Lovasz-eigen}, it is easy to verify that every eigenvalue of the function pair $(F,G)$ has an eigenvector in $\\{-1,0,1\\}^n$, where $F(\\vec x)=\\sum_{\\{i,j\\}\\in E}|x_i-s_{ij}x_j|$ and $G(\\vec x)=\\|\\vec x\\|_\\infty$. 
One may relax \\eqref{eq:frustration} as \\begin{equation}\\label{eq:frustration-relax}\n\\min_{\\vec x\\in \\{-1,0,1\\}^n\\setminus\\{\\vec0\\}} \\sum_{\\{i,j\\}\\in E}|x_i-s_{ij}x_j|. \n\\end{equation}\n\n\n\nThis suggests the eigenvalue problem of $(F(\\vec x),\\|\\vec x\\|_\\infty)$ on a signed graph, \nwhere $F(\\vec x)=\\sum_{\\{i,j\\}\\in E}|x_i-s_{ij}x_j|$. Below, we show some key properties.\n\n\n\n\n\n\n\n\\begin{itemize}\n\\item\nThe coordinate form of the eigenvalue problem $\\nabla \\sum_{\\{i,j\\}\\in E}|x_i-s_{ij}x_j| \\cap \\lambda\\nabla\\|\\vec x\\|_\\infty\\ne\\varnothing$ reads as\n\n $\\exists\\,z_{ij}\\in \\Sgn(x_i-s_{ij}x_j) \\mbox{ with }z_{ij}+s_{ij}z_{ji}=0$~such that\n\\begin{align}\n\\label{eq:1LinftyN1}&\\sum\\limits_{j\\sim i} z_{ij} = 0, &i\\in D_0(\\vec x),\\\\\n\\label{eq:1LinftyN2}&\\sum\\limits_{j\\sim i} z_{ij} \\in \\lambda\\ \\mathrm{sign}(x_i)\\cdot [0,1],& i\\in D_\\pm(\\vec x),\\\\\n\\label{eq:1LinftyN3}&\\sum\\limits_{i=1}^n \\big|\\sum\\limits_{j\\sim i} z_{ij} \\big|=\\lambda,&\n\\end{align}\nwhere $D_\\pm(\\vec x) =\\{i\\in V\\big| \\pm x_i = \\|\\vec x\\|_\\infty\\}$,\nand $D_0(\\vec x)=\\{i\\in V\\big| |x_i|<\\|\\vec x\\|_\\infty\\}$.\n\\item All eigenvalues are integers in $\\{0,1,\\cdots,\\vol(V)\\}$. And each eigenvalue has an eigenvector in $\\{-1,0,1\\}^n$.\n\nProof: This is a direct consequence of Proposition \\ref{pro:Lovasz-eigen}. \n\n\\item The largest eigenvalue has an eigenvector in $\\{-1,1\\}^n$.\n\nProof: Let $\\vec 1_A-\\vec 1_B$ be an eigenvector w.r.t. the largest eigenvalue. Note that $\\vec 1_A-\\vec 1_B=\\frac12(\\vec1_A-\\vec1_{V\\setminus A}+\\vec1_{V\\setminus B}-\\vec1_B)$. By the convexity of $F$, we have $F(\\vec 1_A-\\vec 1_B)\\le\\max\\{F(\\vec1_A-\\vec1_{V\\setminus A}),F(\\vec1_{V\\setminus B}-\\vec1_B)\\}$. Hence, either $\\vec1_A-\\vec1_{V\\setminus A}$ or $\\vec1_{V\\setminus B}-\\vec1_B$ is an eigenvector w.r.t. the largest eigenvalue.
\n\n\\item \\textbf{The frustration index is an eigenvalue}.\nHowever, in general, we don't know which eigenvalue the frustration index is.\n\n Proof: We shall check that for any $A\\subset V$, the binary vector $\\vec x:=\\vec1_A-\\vec1_{V\\setminus A}$ is an eigenvector w.r.t. the eigenvalue $\\lambda:=2(|E_+(A,V\\setminus A)|+|E_-(A)|+|E_-(V\\setminus A)|)$, where $|E_+(A,V\\setminus A)|$ indicates the number of positive edges lying between $A$ and $V\\setminus A$, while $|E_-(A)|$ denotes the number of negative edges lying in $A$. Indeed, $D_+(\\vec x)=A$ and $D_-(\\vec x)=V\\setminus A$. For $i\\in A$, take $z_{ij}=1$ if $s_{ij}x_j<0$, and $z_{ij}=0$ if $s_{ij}x_j>0$. Similarly, for $i\\in V\\setminus A$, let $z_{ij}=0$ if $s_{ij}x_j<0$, and $z_{ij}=-1$ if $s_{ij}x_j>0$. It is easy to see that $z_{ij}\\in \\mathrm{Sgn}(x_i-s_{ij}x_j)$ and $z_{ij}+s_{ij}z_{ji}=0$ for any edge $ij$. Next, we verify the conditions \\eqref{eq:1LinftyN2} and \\eqref{eq:1LinftyN3}.\n\nNote that $\\sum_{j\\sim i}z_{ij}=\\#(\\{j\\in A:ij\\text{ is negative}\\}\\cup \\{j\\in V\\setminus A:ij\\text{ is positive}\\})\\in[0,\\lambda]$ for $i\\in A$, and $\\sum_{j\\sim i}z_{ij}=-\\#(\\{j\\in A:ij\\text{ is positive}\\}\\cup \\{j\\in V\\setminus A:ij\\text{ is negative}\\})\\in[-\\lambda,0]$ for $i\\in V\\setminus A$. \nTherefore, $\\sum_{i\\in V}|\\sum_{j\\sim i}z_{ij}|=2(|E_+(A,V\\setminus A)|+|E_-(A)|+|E_-(V\\setminus A)|)=\\lambda$.\n\nIn particular, for $\\vec x\\in\\{-1,1\\}^n$ that realizes the frustration index, $\\vec x$ must be an eigenvector, and the frustration index is the corresponding eigenvalue. This fact can also be derived by Proposition \\ref{pro:set-pair-infty-norm}. \n\n\\item We can use the Dinkelbach-type scheme in Section \\ref{sec:algo} directly to calculate the smallest eigenvalue. When we get an eigenvector $\\vec x$, we can take $\\vec 1_{D_+(\\vec x)}-\\vec 1_{D_-(\\vec x)}$ instead of $\\vec x$.
\n\\item We construct a recursive method to approximate the frustration index:\n\\begin{itemize}\n\\item Input a signed graph $G$, and use the Dinkelbach-type algorithm to get a subpartition $(U_{+},U_{-})$ where $U_+=D_+(\\vec x)$ and $U_-=D_-(\\vec x)$ with $\\vec x$ being an eigenvector w.r.t. the smallest eigenvalue. \n\\item Let $G$ be the signed graph induced by $V\\setminus(U_+\\cup U_-)$, and let $(U_+',U_-')$ be the subpartition found by the Dinkelbach-type algorithm; return $(U_+\\cup U_+',U_-\\cup U_-')$ or $(U_+\\cup U_-',U_-\\cup U_+')$, whichever is better.\n\\item Repeat the above process until we get a partition $(V_+,V_-)$ of $V$, which yields an approximate solution of the frustration index. There are at most $n$ iterations.\n\\end{itemize}\nIn other words, the relaxation problem \\eqref{eq:frustration-relax} can approximate the frustration index \\eqref{eq:frustration} in a recursive way. This is inspired by the recursive spectral cut algorithm for the maxcut problem proposed by Trevisan \\cite{Trevisan2012}.\n\\end{itemize}\n \n\n \nNext, we show some equivalent continuous representations of the frustration index. Let $E_+$ (resp. $E_-$) collect all the positive (resp. negative) edges of $(V,E)$.\n Note that up to a scale factor, \\eqref{eq:frustration} is equivalent to solving $\\min\\limits_{A\\subset V}|E_+(A,V\\setminus A)|+|E_-(A)|+|E_-(V\\setminus A)|$, where $|E_+(A,V\\setminus A)|$ denotes the number of positive edges between $A$ and $V\\setminus A$, while $|E_-(A)|$ indicates the number of negative edges in $A$.
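The factor-of-two relation between the $\pm1$ formulation of the frustration index and the set formulation is easy to confirm by exhaustive search on a small signed graph; a sketch (the unbalanced triangle and the function names are our illustrative choices):

```python
from itertools import product

def frustration_index(n, signed_edges):
    """Minimum of sum |x_i - s_ij * x_j| over all assignments x in {-1, 1}^n."""
    return min(sum(abs(x[i] - s * x[j]) for i, j, s in signed_edges)
               for x in product((-1, 1), repeat=n))

def min_unsatisfied(n, signed_edges):
    """min over A of |E+(A, A^c)| + |E-(A)| + |E-(A^c)|: edges violated by the cut."""
    best = float("inf")
    for mask in range(2 ** n):
        A = {i for i in range(n) if mask >> i & 1}
        # a positive edge is violated iff it is cut; a negative edge iff it is not
        bad = sum(1 for i, j, s in signed_edges
                  if (s > 0) == ((i in A) != (j in A)))
        best = min(best, bad)
    return best

# Unbalanced triangle: two positive edges and one negative edge.
edges = [(0, 1, 1), (1, 2, 1), (0, 2, -1)]
print(frustration_index(3, edges), 2 * min_unsatisfied(3, edges))  # 2 2
```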
By Lov\\'asz extension, the frustration index is equivalent to\n$$|E_-|+\\min\\limits_{\\vec x\\ne0}\\frac{\\sum_{\\{i,j\\}\\in E_+}|x_i-x_j|+\\sum_{i\\in V}\\deg_i|x_i|-\\sum_{\\{i,j\\}\\in E_-}(|x_i-x_j|+|x_i+x_j|)}{\\|x\\|_\\infty}.$$ Also, \\eqref{eq:frustration} is equivalent to $|E_-|+\\min\\limits_{A\\subset V}(|E_+(A,V\\setminus A)|-|E_-(A,V\\setminus A)|)$, and by Lov\\'asz extension, the frustration index equals \n$$|E_-|+\\min\\limits_{\\vec x\\ne0}\\frac{\\sum_{\\{i,j\\}\\in E_+}|x_i-x_j|-\\sum_{\\{i,j\\}\\in E_-}|x_i-x_j|}{2\\|x\\|_\\infty}.$$\nOne can then apply the Dinkelbach-type scheme in Section \\ref{sec:algo} straightforwardly to compute the frustration index.\n\n \n\n\n\n\\begin{remark}\nWe should point out that the notion $|E_+(A)|$ (resp. $|E_-(A)|$) indicates the number of positive (resp. negative) edges (unordered pairs) whose vertices are in $A$. Therefore, in our paper, the values of $|E_+(A)|$ and $|E_-(A)|$ are \nhalf of those of \nAtay-Liu \\cite{AtayLiu}, in which they count the ordered pairs. \n\\end{remark}\nBesides, by Theorem \\ref{thm:tilde-fg-equal-PQ} (or Theorem \\ref{thm:tilde-H-f}), we can derive another continuous formulation of the frustration index:\n$$\\min\\limits_{A\\subset V}|E_+(A,V\\setminus A)|+|E_-(A)|+|E_-(V\\setminus A)|=\\min\\limits_{x\\ne0}\\frac{\\sum\\limits_{\\{i,j\\}\\in E_+}|x_i-x_j|^\\alpha+\\sum\\limits_{\\{i,j\\}\\in E_-}(2\\|\\vec x\\|_\\infty-|x_i-x_j|)^\\alpha}{(2\\|x\\|_\\infty)^\\alpha}$$\nwhenever $0<\\alpha\\le 1$. 
It is interesting that by taking $\\alpha\\to 0^+$, we immediately get \n$$\\min\\limits_{A\\subset V}|E_+(A,V\\setminus A)|+|E_-(A)|+|E_-(V\\setminus A)|=\\min\\limits_{x\\ne0}\\sum\\limits_{\\{i,j\\}\\in E_+}\\mathrm{sign}(|x_i-x_j|)+\\sum\\limits_{\\{i,j\\}\\in E_-}\\mathrm{sign}(2\\|\\vec x\\|_\\infty-|x_i-x_j|).$$\n\n\\subsection{Modularity measure}\\label{sec:modularity-measure}\n\nFor a weighted graph $(V,(w_{ij})_{i,j\\in V})$, the modularity measure \\cite{TMH18} is defined as\n$$Q(A)=\\sum_{i,j\\in A}w_{ij}-\\frac{\\vol(A)^2}{\\vol(V)},\\;\\;\\text{where }A\\subset V,$$\nand it satisfies the following equalities (see Theorem 3.7 and Theorem 3.9 in \\cite{TMH18}, respectively)\n\\begin{equation}\\label{eq:Q-modularity-measure}\n\\max\\limits_{A\\subset V}Q(A)=\\max\\limits_{x\\ne 0}\\frac{\\sum_{i,j\\in V}(\\frac{\\deg(i)\\deg(j)}{\\vol(V)}-w_{ij})|x_i-x_j|}{4\\|\\vec x\\|_\\infty}\n\\end{equation}\nand\n\\begin{equation}\\label{eq:Q\/mu-modularity-measure-mu}\n\\max\\limits_{A\\in\\ensuremath{\\mathcal{P}}(V)\\setminus\\{\\varnothing,V\\}}\\frac{Q(A)}{\\mu(A)\\mu(V\\setminus A)}=\\max\\limits_{\\sum_{i\\in V} \\mu_ix_i=0}\\frac{\\sum_{i,j\\in V}(\\frac{\\deg(i)\\deg(j)}{\\vol(V)}-w_{ij})|x_i-x_j|}{\\mu(V)\\sum_{i\\in V}\\mu_i|x_i|}.\n\\end{equation}\n\n\nIt is clear that \\eqref{eq:Q-modularity-measure} can be obtained more directly by Theorem \\ref{thm:tilde-fg-equal}. We shall also state a new analog of \\eqref{eq:Q\/mu-modularity-measure-mu}: \n\\begin{equation}\\label{eq:Q\/mu-modularity-measure}\n\\max\\limits_{A\\in\\ensuremath{\\mathcal{P}}(V)\\setminus\\{\\varnothing,V\\}}\\frac{Q(A)}{\\mu(A)\\mu(V\\setminus A)}=\\max\\limits_{\\sum_{i\\in V} x_i=0}\\frac{\\sum_{i,j\\in V}(\\frac{\\deg(i)\\deg(j)}{\\vol(V)}-w_{ij})|x_i-x_j|}{\\sum_{i,j\\in V}\\mu_i\\mu_j|x_i-x_j|}\n\\end{equation}\nwhich can be derived straightforwardly by Theorem \\ref{thm:tilde-fg-equal}. 
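The discrete side of these identities is easy to explore by enumeration on a toy graph; a brute-force sketch (the two-triangle graph and all names are our illustrative choices), which also confirms the symmetry $Q(A)=Q(V\setminus A)$:

```python
from itertools import combinations

def modularity(W, A):
    """Q(A) = sum_{i,j in A} w_ij - vol(A)^2 / vol(V) for a symmetric weight matrix W."""
    deg = [sum(row) for row in W]
    vol = sum(deg)
    inner = sum(W[i][j] for i in A for j in A)
    return inner - sum(deg[i] for i in A) ** 2 / vol

# Two triangles joined by a single bridge edge (illustrative choice).
n = 6
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
W = [[0.0] * n for _ in range(n)]
for i, j in edges:
    W[i][j] += 1.0
    W[j][i] += 1.0

subsets = [set(c) for k in range(n + 1) for c in combinations(range(n), k)]
best = max(subsets, key=lambda A: modularity(W, A))
print(sorted(best), modularity(W, best))  # one of the triangles maximizes Q
```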
\n\nBy Proposition \\ref{pro:Lovasz-f-pre}, we immediately obtain Theorem 1 in \\cite{CRT20}, i.e., \nfor any $a,b>0$, \n$$ \\max\\limits_{-a\\le x_i\\le b,\\forall i}\\frac12\\sum_{i,j\\in V}(\\frac{\\deg(i)\\deg(j)}{\\vol(V)}-w_{ij})|x_i-x_j|=(a+b)\\max\\limits_{A\\subset V}Q(A).$$\n\n\n\\textbf{A relation with the frustration index}\n\n For a signed weighted graph with real weights $(w_{ij})_{i,j\\in V}$ and signs $s_{ij}=\\mathrm{sign}(w_{ij})$, we define the frustration index as \\begin{equation}\\label{eq:frustration-weight}\n\\min_{x_i\\in \\{-1,1\\},\\forall i} \\sum_{\\{i,j\\}}|w_{ij}|\\cdot|x_i-s_{ij}x_j|.\n\\end{equation} \n\nThe following result reveals an interesting relation between the modularity\nmeasure and the frustration index.\n\\begin{pro}For a weighted graph $(V,(w_{ij})_{i,j\\in V})$, let $\\tilde{w}_{ij}=w_{ij}-\\frac{\\deg(i)\\deg(j)}{\\vol(V)}$. \nIn the signed weighted graph $(V,(\\tilde{w}_{ij})_{i,j\\in V})$, \n $\\{i,j\\}$ is a positive (resp. negative) edge if $\\tilde{w}_{ij}>0$ (resp. $\\tilde{w}_{ij}<0$). 
Then, the frustration index of $(V,(\\tilde{w}_{ij})_{i,j\\in V})$ equals \n$2\\left(\\sum_{\\{i,j\\}:\\tilde{w}_{ij}<0}|\\tilde{w}_{ij}|-\\max\\limits_{A\\subset V}Q(A)\\right)$.\n\\end{pro}\n\n\\begin{proof}\nWe know from Section \\ref{sec:frustration} (or by Theorem \\ref{thm:tilde-fg-equal}) that the frustration index of $(V,(\\tilde{w}_{ij})_{i,j\\in V})$ equals \n$$2\\left(\\sum_{\\{i,j\\}:\\tilde{w}_{ij}<0}|\\tilde{w}_{ij}|+\\min\\limits_{\\vec x\\ne0}\\frac{\\sum_{i,j\\in V}\\tilde{w}_{ij}|x_i-x_j|}{4\\|x\\|_\\infty}\\right).$$\nThe proof is then completed by \\eqref{eq:Q-modularity-measure}.\n\\end{proof}\n\n\\subsection{Chromatic number}\\label{sec:chromatic-number}\nThe chromatic number (i.e., the smallest vertex coloring number) of a graph is the smallest number of colors needed to color the vertices\nso that no two adjacent vertices share the same color.\nGiven a simple connected graph $G=(V,E)$ with $\\#V=n$, its chromatic number $\\gamma(G)$ can be expressed as a global optimization on the $n$-power set of vertices:\n\\begin{equation}\\label{eq:coloring-number-sum}\n\\gamma(G)=\\min\\limits_{(A_1,\\cdots,A_n)\\in\\ensuremath{\\mathcal{P}}_n(V)}\\left\\{n\\sum_{i=1}^n\\#E(A_i)\n+\\sum_{i=1}^n\\sgn(\\#A_i)+n\\left(n-\\sum_{i=1}^n\\#A_i\\right)^2\\right\\}\n\\end{equation}\nand similarly, we get the following\n\\begin{pro}The chromatic number $\\gamma(G)$ of a finite simple graph $G=(V,E)$ satisfies\n\\begin{equation}\\label{eq:coloring-number}\n\\gamma(G)=\\min\\limits_{(A_1,\\cdots,A_n)\\in\\ensuremath{\\mathcal{P}}(V)^n}\\left\\{n\\sum_{i=1}^n\\#E(A_i)+\\sum_{i=1}^n\\sgn(\\#A_i)+n\\left(n-\\#\\bigcup_{i=1}^n A_i\\right)\\right\\}\n\\end{equation}\n\\end{pro}\n\n\\begin{proof}\n Let $f:\\ensuremath{\\mathcal{P}}(V)^n\\to\\ensuremath{\\mathbb{R}}$ be defined by $$f(A_1,\\cdots,A_n)=n\\sum_{i=1}^n\\#E(A_i)+\\sum_{i=1}^n\\sgn(\\#A_i)+n\\left(n-\\#\\bigcup_{i=1}^n A_i\\right).$$\n Let $\\{C_1,\\cdots,C_{\\gamma(G)}\\}$ be a proper coloring class of $G$,
and set $C_{\\gamma(G)+1}=\\cdots=C_n=\\varnothing$. Then we have $E(C_i)=\\varnothing$, $\\#\\cup_{i=1}^n C_i=n$, $\\#C_i\\ge1$ for $1\\le i\\le \\gamma(G)$, and $\\#C_i=0$ for $i> \\gamma(G)$. In consequence, $f(C_1,\\cdots,C_n)=\\gamma(G)$. Thus, it suffices to prove $f(A_1,\\cdots,A_n)\\ge \\gamma(G)$ for any $(A_1,\\cdots,A_n)\\in\\ensuremath{\\mathcal{P}}(V)^n$.\n\nIf $\\bigcup_{i=1}^n A_i\\ne V$, then $f(A_1,\\cdots,A_n)\\ge n+1> \\gamma(G)$.\n\nIf there exist at least $\\gamma(G)+1$ nonempty sets $A_1,\\cdots,A_{\\gamma(G)+1}$, then $f(A_1,\\cdots,A_n)\\ge \\gamma(G)+1> \\gamma(G)$.\n\nSo we focus on the case that $\\bigcup_{i=1}^n A_i= V$ and $A_{\\gamma(G)+1}=\\cdots=A_n=\\varnothing$. If there further exists $i\\in\\{1,\\cdots,\\gamma(G)\\}$ such that $A_i=\\varnothing$, then by the definition of the chromatic number, there is $j\\in \\{1,\\cdots,\\gamma(G)\\}\\setminus\\{i\\}$ with $E(A_j)\\ne \\varnothing$. So $f(A_1,\\cdots,A_n)\\ge n+1> \\gamma(G)$.\nAccordingly, each of $A_1,\\cdots,A_{\\gamma(G)}$ must be nonempty, and thus $f(A_1,\\cdots,A_n)\\ge\\gamma(G)$.\n\nAlso, when the equality $f(A_1,\\cdots,A_n)=\\gamma(G)$ holds, one may see from the above discussion that $A_1,\\cdots,A_{\\gamma(G)}$ are all independent sets of $G$ with $\\bigcup_{i=1}^n A_i= V$.\n\n\\end{proof}\n\nNote that\n$$\\#\\bigcup_{i=1}^n V^t(\\vec x^i)=\\#\\{j\\in V:\\exists i \\text{ s.t.
}x_{i,j}>t\\}=\\sum_{j=1}^n\\max\\limits_{i=1,\\cdots,n} 1_{x_{i,j}>t}=\\sum_{j=1}^n 1_{\\max\\limits_{i=1,\\cdots,n}x_{i,j}>t}$$\nSo the $n$-way Lov\\'asz extension of $\\#\\bigcup_{i=1}^n A_i$ is\n\\begin{align*}\n\\int_{\\min\\vec x}^{\\max\\vec x}\\#\\bigcup_{i=1}^n V^t(\\vec x^i)dt+\\min\\vec x\\#\\bigcup_{i=1}^n V(\\vec x^i)&=\\sum_{j=1}^n \\int_{\\min\\vec x}^{\\max\\vec x}1_{\\max\\limits_{i=1,\\cdots,n}x_{i,j}>t}dt+\\min\\vec x\\#V\n\\\\&=\\sum_{j=1}^n(\\max\\limits_{i=1,\\cdots,n}x_{i,j}-\\min\\vec x)+n\\min\\vec x\n\\\\&=\\sum_{j=1}^n\\max\\limits_{i=1,\\cdots,n}x_{i,j}\n\\end{align*}\nAnd the $n$-way disjoint-pair Lov\\'asz extension of $\\#\\bigcup_{i=1}^n A_i$ is $\\sum_{j=1}^n\\max\\limits_{i=1,\\cdots,n}|x_{i,j}|=\\sum_{j=1}^n\\|\\vec x^{,j}\\|_\\infty$.\n\nThe $n$-way Lov\\'asz extension of $\\sgn(\\#A_i)$ is\n\\begin{align*}\n\\int_{\\min\\vec x}^{\\max\\vec x}\\sgn(\\#V^t(\\vec x^i))dt+\\min\\vec x\\sgn(\\#V(\\vec x^i))&= \\int_{\\min\\vec x}^{\\max\\vec x^i}1dt+\\min\\vec x\\sgn(\\#V)\n\\\\&= \\max\\limits_{j=1,\\cdots,n}x_{i,j}-\\min\\vec x+\\min\\vec x= \\max\\limits_{j=1,\\cdots,n}x_{i,j}\n\\end{align*}\nand the $n$-way disjoint-pair Lov\\'asz extension of $\\sgn(\\#A_i)$ is $\\|\\vec x^{i}\\|_\\infty$. Similarly, the $n$-way disjoint-pair Lov\\'asz extension of $\\#E(A_i)$ is $\\sum_{j\\sim j'}\\min\\{|x_{i,j}|,|x_{i,j'}|\\}$. 
Thus\n\\begin{align*}\nf^L(\\vec x)&=n\\sum_{i=1}^n\\sum_{j\\sim j'}\\min\\{|x_{i,j}|,|x_{i,j'}|\\} +\\sum_{i=1}^n\\|\\vec x^{i}\\|_\\infty+n\\left(n\\|\\vec x\\|_\\infty-\\sum_{j=1}^n\\|\\vec x^{,j}\\|_\\infty\\right)\n\\\\&\n=n\\sum_{i=1}^n\\left(\\|\\vec x^i\\|_{1,\\deg}-(I^+(\\vec x^i)+I^-(\\vec x^i))\/2\\right) +\\sum_{i=1}^n\\|\\vec x^{i}\\|_\\infty+n\\left(n\\|\\vec x\\|_\\infty-\\sum_{j=1}^n\\|\\vec x^{,j}\\|_\\infty\\right)\n\\\\&\n=n^2\\|\\vec x\\|_\\infty+n\\|\\vec x\\|_{1\\text{-}deg,1}+\\|\\vec x\\|_{\\infty,1}-nI_{\\pm,1}(\\vec x)-n\\|\\vec x\\|^{\\infty, 1}.\n\\end{align*}\nAccording to Proposition \\ref{pro:fraction-f\/g} on the multi-way Lov\\'asz extension, we get\n\\begin{equation}\\label{eq:chromatic-continuous}\n\\gamma(G)= n^2-\\sup\\limits_{\\vec x\\in\\ensuremath{\\mathbb{R}}^{n^2}\\setminus\\{\\vec 0\\}}\\frac{nI_{\\pm,1}(\\vec x)+n\\|\\vec x\\|^{\\infty, 1}-n\\|\\vec x\\|_{1\\text{-}deg,1}-\\|\\vec x\\|_{\\infty,1}}{\\|\\vec x\\|_\\infty}.\n\\end{equation}\n\n\\paragraph{Clique covering number }\n\nThe clique covering number of a graph $G$ is the minimal number of cliques in $G$ needed to cover the vertex set. It is equal to the chromatic number of the graph complement of $G$. Consequently, we can explicitly write down the continuous representation of a clique covering number by employing Theorem \\ref{thm:graph-numbers}.\n\n\n{ \\linespread{0.95} \\small ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{section 1}\n\nM. J. Golay introduced Golay complementary pairs (GCPs) in his work on multislit spectrometry \\cite{Golay51}. GCPs are sequence pairs having zero aperiodic autocorrelation sums (AACS) at all non-zero time shifts \\cite{Golay61}. Due to their ideal correlation sums, GCPs have found numerous applications in modern day communication systems \\cite{Davis1999,Paterson2000,Georghiades2001,farrel2003,abdi2007,lei2014}, Radar \\cite{Spano1996,Pezeshki2008}, etc. 
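The defining zero-AACS property is easy to verify numerically; a minimal sketch using a classical length-4 binary Golay pair (the function name is ours):

```python
def aacs(a, b, u):
    """Aperiodic autocorrelation sum of the pair (a, b) at non-negative shift u."""
    n = len(a)
    return (sum(a[k] * a[k + u] for k in range(n - u)) +
            sum(b[k] * b[k + u] for k in range(n - u)))

# A classical binary GCP of length 4: the AACS vanishes at every non-zero shift.
a = [1, 1, 1, -1]
b = [1, 1, -1, 1]
print([aacs(a, b, u) for u in range(4)])  # [8, 0, 0, 0]
```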
One of the main drawbacks of GCPs is their availability for limited lengths only \\cite{Borwein2000}. To overcome this drawback and to find sequence pairs which depict the ``closest\" autocorrelation properties to those of the GCPs, Fan \\textit{et al.} proposed Z-complementary pairs (ZCPs) in 2007 \\cite{Fan2007}. ZCPs are sequence pairs having zero AACS within a certain time-shift around the in-phase position \\cite{Fan2007}. In recent years, a lot of research has been done on the existence \\cite{Li2011}, systematic constructions \\cite{liu20141,liu2014,Adhikary2016,Adhikary2018,Adhikary2020,chen2017,Adhikary20201} and applications of ZCPs \\cite{Adhikary20191,chen20192}.\n\n\\subsection{Sequence pairs with zero periodic crosscorrelation zone}\nSince the autocorrelations of the sequence pair sum up to zero at all non-zero time-shifts (or at time-shifts within a certain region in the case of ZCPs), Golay sequences have been widely used to reduce the peak-to-mean envelope power ratio in orthogonal frequency division multiplexing systems \\cite{Davis1999,Paterson2000}. However, a sequence's own periodic autocorrelation plays an important role in some applications like synchronization and detection of the signal. Working in this direction, Gong \\textit{et al.} \\cite{Gong2013} investigated the periodic autocorrelation behaviour of a single Golay sequence in 2013. To be more specific, Gong \\textit{et al.} presented two constructions of Golay sequences of length $2^m$, each displaying a periodic zero autocorrelation zone (ZACZ) of width $2^{m-2}$ and $2^{m-3}$, respectively, around the in-phase position \\cite{Gong2013}. In \\cite{Gong2013}, the authors also discussed the application of Golay sequences with large ZACZ for ISI channel estimation. Using Golay sequences with large ZACZ as channel estimation sequences (CES), the authors analysed the performance of Golay-sequence-aided channel estimation in terms of the error variance and the classical Cramer-Rao lower bound (CRLB).
The performance was also compared with the well-known sequences (Frank-Zadoff-Chu sequences and $m$-sequences) which are generally used for ISI channel estimation. It was shown in \\cite{Gong2013} that when the channel impulse response (CIR) is within the ZACZ width, the variance of the Golay sequences attains the CRLB.\n\n\n\nInspired by the work of Gong \\textit{et al.} \\cite{Gong2013}, Chen \\textit{et al.} studied the zero cross-correlation zone among the Golay sequences in 2018 and proposed Golay-ZCZ sequence sets \\cite{Chen201811}. Golay-ZCZ sequence sets are sequence sets in which each sequence has a periodic ZACZ, any two sequences have a periodic zero cross-correlation zone (ZCCZ), and the aperiodic autocorrelation sum is zero at all non-zero time shifts. Specifically, Chen \\textit{et al.} gave a systematic construction of Golay-ZCZ sequence sets consisting of $2^k$ sequences, each of length $2^m$, with $\\min\\{ZACZ,ZCCZ\\}$ equal to $2^{m-k-1}$ \\cite{Chen201811}. However, the lengths of the GCPs with large ZCZs discussed in the works of Gong \\textit{et al.} and Chen \\textit{et al.} are all powers of two \\cite{Chen201811}. To the best of the authors' knowledge, the problem of investigating the individual periodic autocorrelations of the GCPs and the periodic cross-correlations of the pairs when the length of the GCPs is non-power-of-two remains largely open. An overview of the previous works, which considers the periodic ZACZ of the individual sequences and ZCCZ of a GCP, is given in Table \\ref{Table duibi}.\n\n\\begin{table}\n \\centering\n \\caption{Golay sequences with periodic ZACZ and ZCCZ.}\n \\label{Table duibi}\n\\begin{tabular}{|c|c|c|c|c|}\n \\hline\n \n Ref.
& Length $N$ & Complementary set size $M$ & ZACZ width & ZCCZ width \\\\\\hline\n\n \\hline\n\n \\cite{Gong2013} & $2^m$ & $2$ & $2^{m-2}$ or $2^{m-3}$ & Not discussed \\\\\n \\hline\n\n \\cite{Chen201811} & $2^m$ & $2^k$ & $2^{m-k-1}$ & $2^{m-k-1}$ \\\\\n \\hline\n\n Theorem 1 & $4N$ & $2$ & $N$ & $N$ \\\\\n \\hline\n\\end{tabular}\n\\end{table}\n\nBased on the discussion on using Golay sequences as CES for ISI channel estimation in \\cite{Gong2013}, it can be seen that our proposed constructions add flexibility in choosing Golay sequences of various lengths for use as CES. Since, in practical scenarios, a longer training sequence will give rise to a higher training overhead, the selection of the training length is a trade-off between channel estimation performance and training overhead. For example, let us consider that in a practical scenario, the CIR is a length-10 vector. If only the Golay sequences in \\cite{Gong2013} or \\cite{Chen201811} are considered, then one has to use a length-64 GCP, which has a ZACZ width of 16. However, by our proposed constructions, a length-40 GCP, which has a ZACZ width of 10, can be used as a CES as described in \\cite{Gong2013}. This will improve the system performance.\n\n\n\\subsection{Two-dimensional complementary array pairs}\nIn 1978, Ohyama \\textit{et al.} introduced two-dimensional (2D) sequence sets with zero side lobes in their work on image processing \\cite{Ohyama1978}. In 1985, H. D. Luke presented some iterative constructions of higher-dimensional complementary codes \\cite{Luke1985}. In 1990, Bomer \\textit{et al.} proposed perfect binary arrays \\cite{bomer1990}. In search of one-dimensional sequences with low autocorrelation magnitudes at non-zero time-shifts, in 2007, Jedwab and Parker made remarkable progress by generalising the problem to multiple dimensions \\cite{Jedwab20071}.
Instead of searching for sequences with low correlation properties in one dimension, the authors analysed the possibility of the existence of such sequence arrays in multiple dimensions, in the hope of finding larger existence patterns. In \\cite{Jedwab20071} the authors presented a systematic construction of $m$-dimensional Golay complementary array pairs (GCAPs) from $(m+1)$-dimensional GCAPs. 2D-GCAPs are array pairs whose constituent arrays have 2D autocorrelations summing to zero for all non-zero time-shifts.\n\nGeneralizing the concept of ZCZ to 2D, Fan \\textit{et al.} introduced binary array sets with ZCZ in 2001 \\cite{Tang2001}. In 2002, Hayashi proposed a class of 2D binary sequences with ZCZ \\cite{Hayashi2004}. In 2010, Cheng \\textit{et al.} proposed another new class of 2D binary sequences with ZCZ \\cite{Cheng2010}. Recently, in 2019, Chen \\textit{et al.} proposed a systematic construction of 2D Z-complementary array pairs (ZCAPs) \\cite{pai2019}. However, the behavior of the autocorrelation of a single Golay array is still unknown. To the best of the authors' knowledge, the problem of investigating the periodic 2D autocorrelations of the constituent arrays of the 2D-GCAPs and the periodic 2D cross-correlations of the 2D-GCAPs remains largely open.\n\n\n\\subsection{Contributions}\n\nMotivated by the works of Gong \\textit{et al.} \\cite{Gong2013} and Chen \\textit{et al.} \\cite{Chen201811}, we propose a systematic construction of GCPs of non-power-of-two length, where the individual sequences have a periodic ZACZ and the periodic cross-correlation of the sequence pair also has a ZCCZ. We also extend the ideas to construct 2D-GCAPs. To be more specific, we make the following contributions in this paper:\n\\begin{enumerate}\n\t\\item Assuming a GCP of length $N$ exists, we systematically construct GCPs of length $4N$.
The proposed GCPs have $Z_{\\min}=N+1$, where $Z_{\\min}=\\min\\{ZACZ,ZCCZ\\}$.\n\t\\item We also systematically construct a 2D GCAP of size $s_1\\times 4s_2$, assuming a 2D- GCAP of size $s_1 \\times s_2$ exists. The designed 2D- GCAPs have a $2D\\text{-}Z_{\\min}=s_1\\times (s_2+1)$, where $2D\\text{-}Z_{\\min}=\\min\\{2D\\text{-}ZACZ,2D\\text{-}ZCCZ\\}$.\n\t\\item We propose a systematic construction of 2D GCAP of size $4s_1\\times 4s_2$, assuming a 2D- GCAP of size $s_1 \\times s_2$ exists. The designed 2D- GCAPs have a $2D\\text{-}Z_{\\min}=(s_1+1)\\times (s_2+1)$, where $2D\\text{-}Z_{\\min}=\\min\\{2D\\text{-}ZACZ,2D\\text{-}ZCCZ\\}$.\n\\end{enumerate}\n\n\n\n\\subsection{Organization}\nThe rest of the paper is organized as follows. In Section \\ref{section 2}, some useful notations and preliminaries are recalled.\nIn Section \\ref{section 3}, a systematic construction for GCPs of lengths non-power-of-two with large periodic ZACZ and ZCCZ is proposed.\nIn Section \\ref{section 4}, we extend the construction to 2D- GCAPs.\nFinally, we conclude the paper in Section \\ref{section 5}.\n\n\n\n\n\n\n\n\n\\section{Preliminaries}\\label{section 2}\n\\begin{definition}\n\tLet $\\mathcal{A}=[A_{i,j}]$ and $\\mathcal{B}=[B_{i,j}]$, for $0\\leq i 0$. Then the following two conditions\nare equivalent:\n\n(1) $x_1,...,x_d$ form a regular sequence on $A^+$ (by an abuse of language we say that $A^+$ is Cohen-Macaulay).\n\\newline\n(2) $A^+$ is flat over $A$.\n\\end{prop}\n\n\\begin{proof}\nWe prove the equivalence. First assume (1).\nIf $A^+$ is not flat over $A$, choose $i\\geq 1$ as large as possible so that\n$\\operatorname{Tor{}}_i^A(A\/P, A^+)\\ne 0$ for some prime $P$ in $A$. Such a choice is possible because $A$ is regular\nand large Tors vanish. If $y_1,...,y_s$ is a maximal regular sequence\nin $P$, then one can embed $A\/P$ in $A\/(y_1,...,y_s)$ with cokernel $C$.
But since our assumption\nforces $\\operatorname{Tor{}}_{i+1}^A(C,A^+) = 0$ (as $C$ has a prime filtration), and $y_1,...,y_s$ form a\nregular sequence on $A^+$, we obtain that $\\operatorname{Tor{}}_i^A(A\/P, A^+) = 0$, a contradiction.\n\nTo see that $y_1,...,y_s$ form a regular sequence, extend them to a system of parameters, and let\n$B = k[[y_1,...,y_d]]$. Then $B^+ = A^+$, and our hypothesis says that the $y$'s form a regular\nsequence.\n\nAssume (2). Flat maps preserve regular sequences in general.\n\\end{proof}\n\n\nOur method of studying regular sequences relies on local cohomology. We only need the description below.\n\n For $x\\in R$, let $K^{\\bullet}(x;R)$ denote the\ncomplex $0\\rightarrow R\\rightarrow R_x\\rightarrow 0$, graded so that the degree $0$ piece of the complex is\n$R$, and the degree $1$ piece is $R_x$. If $x_1,...,x_n\\in R$, let $K^{\\bullet}(x_1,x_2,...,x_n;R)$ denote\nthe complex $K^{\\bullet}(x_1;R)\\otimes_R ...\\otimes_R K^{\\bullet}(x_n;R)$, where in general recall that\nif $(C^{\\bullet},d_C)$ and $(D^{\\bullet}, d_D)$ are complexes, then the tensor product\nof these complexes, $(C\\otimes_RD, \\Delta)$, is by definition the complex whose $i$th\ngraded piece is $\\sum_{j+k = i} C_j\\otimes D_k$ and whose differential is determined\nby the map from $ C_j\\otimes D_k\\rightarrow (C_{j+1}\\otimes D_k) \\oplus (C_j\\otimes D_{k+1})$\ngiven by $\\Delta(x\\otimes y) = d_C(x)\\otimes y + (-1)^k x\\otimes d_D(y)$.\n\nThe modules in this complex, called the Koszul cohomology complex, are\n$$0\\rightarrow R\\rightarrow \\oplus\\sum_i R_{x_i}\\rightarrow \\oplus\\sum_{i< j} R_{x_ix_j}\\rightarrow ...\\rightarrow R_{x_1x_2\\cdots x_n}\\rightarrow 0$$\nwhere the differentials are the natural maps induced from localization, but with signs attached.\nIf $M$ is an $R$-module, we set $K^{\\bullet}(x_1,x_2,...,x_n; M) = K^{\\bullet}(x_1,x_2,...,x_n;R)\\otimes_RM$.\nWe denote the cohomology of $K^{\\bullet}(x_1,x_2,...,x_n; M)$ by $H^i_{I}(M)$, called\nthe
$i$th local cohomology of $M$ with respect to $I = (x_1,...,x_d)$.\nIt is a fact that this module only depends on the ideal generated by the $x_i$ up to radical.\nWe summarize some useful information concerning these modules.\n\n\\begin{prop}\\label{propbasechange} Let $R$ be a Noetherian ring, $I$ an ideal and $M$ an $R$-module.\nLet $\\phi: R\\rightarrow S$ be a homomorphism and let $N$ be an $S$-module.\n\\begin{enumerate}[\\quad\\rm (1)]\n\\item If $\\phi$ is flat, then $H^j_I(M)\\otimes_RS\\cong H^j_{IS}(M\\otimes_RS)$. In particular,\nlocal cohomology commutes with localization and completion.\n\\item (Independence of Base) $H^j_I(N)\\cong H^j_{IS}(N)$, where the first local cohomology\nis computed over the base ring $R$.\n\\end{enumerate}\n\\end{prop}\n\n\\begin{proof} Choose generators $x_1,...,x_n$ of $I$. The first claim follows at once from the\nfact that $K^{\\bullet}(x_1,...,x_n;M)\\otimes_RS = K^{\\bullet}(\\phi(x_1),...,\\phi(x_n);M\\otimes_RS)$, and that\n$S$ is flat over $R$, so that the cohomology of $K^{\\bullet}(x_1,...,x_n;M)\\otimes_RS$ is the cohomology\nof $K^{\\bullet}(x_1,...,x_n;M)$ tensored over $R$ with $S$.\n\nThe second claim follows from the fact that $$K^{\\bullet}(x_1,...,x_n;N) = K^{\\bullet}(x_1,...,x_n;R)\\otimes_RN =\n(K^{\\bullet}(x_1,...,x_n;R)\\otimes_RS)\\otimes_SN$$\n$$ = K^{\\bullet}(\\phi(x_1),...,\\phi(x_n);S)\\otimes_SN =\nK^{\\bullet}(\\phi(x_1),...,\\phi(x_n);N).$$\n\\end{proof}\n\\bigskip\n\n\\section{$R^+$ is Cohen-Macaulay in Positive Characteristic}\n\n\\medskip\n\nLet $R$ be a commutative ring containing a field of characteristic $p>0$, let $I\\subset R$\nbe an ideal, and let $R'$ be an $R$-algebra. The Frobenius ring homomorphism\n$f:R'\\stackrel{r\\mapsto r^p}{\\to}R'$ induces a map $f_*:H^i_I(R')\\to H^i_I(R')$ on all\nlocal cohomology modules of $R'$ called the action of the Frobenius on $H^i_I(R')$.\nFor an element $\\alpha\\in H^i_I(R')$ we denote $f_*(\\alpha)$ by $\\alpha^p$.
That the Frobenius induces such a map follows since\nthe Frobenius extends to localizations of $R$ in the obvious way, and commutes with the\nmaps in the Koszul cohomology complex, which are simply signed natural maps.\n\nThe main result is that if $R$ is a local Noetherian domain which is a homomorphic image of a\nGorenstein local ring and has positive characteristic, then $R^+$ is Cohen-Macaulay in the\nsense that every system of parameters of $R$ forms a regular sequence in $R^+$. To prove this\nresult we use the proof given in \\cite{HL}. The original proof, with slightly different assumptions, was given\nin 1992 in \\cite{HH}, as a result of developments from tight closure theory. Although tight closure has\nnow disappeared from the proof, it remains an integral part of the theory. A critical point is that we must find some\nway of annihilating nonzero local cohomology classes. \nThe next lemma is essentially the only way known to do this.\n\n\\begin{lemma} \\label{element} Let $R$ be a commutative Noetherian domain containing a field of\ncharacteristic $p>0$, let $K$ be the fraction field of $R$ and let $\\overline K$ be the algebraic closure\nof $K$. Let $I$ be an ideal of $R$ and let $\\alpha\\in H^i_I(R)$ be an element such that the elements\n$\\alpha, \\alpha^p,\\alpha^{p^2},\\dots,\\alpha^{p^t},\\dots$ belong to a finitely generated $R$-submodule of $H^i_I(R)$.\nThere exists an $R$-subalgebra $R'$ of $\\overline K$ (i.e. $R\\subset R'\\subset \\overline K$) that is finite\nas an $R$-module and such that the natural map $H^i_I(R)\\to H^i_I(R')$ induced by the natural\ninclusion $R\\to R'$ sends $\\alpha$ to 0.\n\\end{lemma}\n\\emph{Proof.} Let $A_t=\\sum_{i=0}^{t}R\\alpha^{p^i}$ be the $R$-submodule of $H^i_I(R)$ generated\nby $\\alpha,\\alpha^p,\\dots,\\alpha^{p^t}$. The ascending chain $A_1\\subset A_2\\subset A_3\\subset\\dots$\nstabilizes because $R$ is Noetherian and all $A_t$ sit inside a single finitely generated $R$-submodule\nof $H^i_I(R)$. Hence $A_s=A_{s-1}$ for some $s$, i.e.
$\\alpha^{p^s}\\in A_{s-1}$. Thus there exists an\nequation $\\alpha^{p^s}=r_1\\alpha^{p^{s-1}}+r_2\\alpha^{p^{s-2}}+\\dots+r_{s-1}\\alpha$ with $r_i\\in R$\nfor all $i$. Let $T$ be a variable and let $g(T)=T^{p^s}-r_1T^{p^{s-1}}-r_2T^{p^{s-2}}-\\dots-r_{s-1}T$.\nClearly, $g(T)$ is a monic polynomial in $T$ with coefficients in $R$ and $g(\\alpha)=0$.\n\nLet $x_1,\\dots, x_d\\in R$ generate the ideal $I$. Recall that we can calculate the local cohomology\nfrom the Koszul cohomology complex\n$C^{\\bullet}(R)$, \n$$0\\to C^0(R)\\to\\dots \\to C^{i-1}(R)\\stackrel{d_{i-1}}{\\to} C^i(R)\\stackrel{d_i}{\\to}\nC^{i+1}(R)\\to\\dots\\to C^d(R)\\to 0$$\nwhere $C^0(R)=R$ and $C^i(R)=\\oplus_{1\\leq j_1<\\dots0$, let\n$K$ be the fraction field of $R$ and let $\\overline K$ be the algebraic closure of $K$. Assume $R$ is a surjective\nimage of a Gorenstein local ring $A$. Let $\\mathfrak m$ be the maximal ideal of $R$. \nLet $i< \\dim R$\nbe a non-negative integer. There is an $R$-subalgebra $R'$ of $\\overline K$ (i.e. $R\\subset R'\\subset \\overline K$)\nthat is finite as an $R$-module and such that the natural map $H^i_{\\mathfrak m}(R)\\to H^i_{\\mathfrak m}(R')$\nis the zero map.\n\\end{thm}\n\n\\emph{Proof.} The proof comes from \\cite{HL}. Let $n=\\dim A$ and let $N={\\rm Ext}^{n-i}_A(R,A)$. \nClearly $N$ is a finite $A$-module.\n\nLet $d=\\dim R$. We use induction on $d$. For $d=0$ there is nothing to prove, so we assume that\n$d>0$ and that the theorem is proven for all smaller dimensions. Let $P\\subset R$ be a non-maximal prime ideal.\nWe claim there exists an $R$-subalgebra $R^P$ of $\\overline K$ such that\n$R^P$ is a finite $R$-module and for every $R^P$-subalgebra $R^*$ of $\\overline K$ (i.e. $R^P\\subset R^*\\subset \\overline K$)\nsuch that $R^*$ is a finite $R$-module, the image $\\mathcal I\\subset N$ of the natural map\n${\\rm Ext}^{n-i}_A(R^*,A)\\to N$ induced by the natural inclusion $R\\to R^*$ vanishes after localization at $P$,\ni.e. $\\mathcal I_P=0$.
Indeed, let $d_P=\\dim R\/P$. Since $P$ is different from the maximal ideal,\n$d_P>0$. As $R$ is a surjective image of a Gorenstein local ring, it is catenary, hence the dimension\nof $R_{P}$ equals $d-d_P$, and $i0$.\nAssume that $R$ is a surjective image of a Gorenstein local ring. Then the following hold:\n\n(a) $H^i_{\\mathfrak m}(R^+)=0$ for all $i<\\dim R$, where $\\mathfrak m$ is the maximal ideal of $R$.\n\n(b) Every system of parameters of $R$ is a regular sequence on $R^+$.\n\\end{cor}\n\n\\emph{Proof.} (a) $R^+$ is the direct limit of the finitely generated $R$-subalgebras $R'$, hence $H^i_{\\mathfrak m}(R^+)=\\varinjlim H^i_{\\mathfrak m}(R')$.\nBut Theorem~\\ref{module} implies that for each $R'$ there is $R''$ such that the map \n $H^i_{\\mathfrak m}(R')\\to H^i_{\\mathfrak m}(R'')$ in the inductive system is zero. Hence the limit is zero.\n\n(b) Let $x_1,..., x_d$ be a system of parameters of $R$. We prove that $x_1,...,x_j$ is a regular\nsequence on $R^+$ by induction on $j$. The case $j=1$ is clear, since $R^+$ is a domain.\nAssume that $j>1$ and $x_1,\\dots, x_{j-1}$ is a regular sequence on $R^+$. Set $I_t = (x_1,...,x_t)$.\nThe fact that $H^i_{\\mathfrak m}(R^+)=0$ for all $i 0$, and let $I$ be an ideal of $R$. Suppose that\n$z\\in R$ is such that $z^q\\in I^{[q]}$, where $ q = p^e$ is a power of $p$. Then there exists an integral domain\n$S$, which is a module-finite separable extension of $R$, such that $z\\in IS$.\n\\end{prop}\n\\bigskip\n\nThe point here is that there is clearly a finite inseparable extension of $R$, say $T$, such that $z\\in IT$. Simply\ntake $q$th roots of the elements $a_j$ such that $z = \\sum a_jx_j^q$ where $x_j\\in I$. \n\n\\begin{proof} Write $z = \\sum_{1\\leq j\\leq n} a_jx_j^q$ where $x_j\\in I$ as above. Consider the equations for $2\\leq i\\leq n$,\n$$ U_i^q + U_ix_1^q-a_i = 0.$$\nThese are monic separable equations and therefore have roots $u_i$ in a separable field extension of the fraction field\nof $R$.
Let $S$ be the integral closure of the ring $R[u_2,...,u_n]$. \nSince $R$ is excellent, $S$ is finite as an $R$-module. We claim that $z\\in IS$. Set\n$$u_1 = (z- \\sum_{2\\leq i\\leq n} x_iu_i)\/x_1.$$\nNote that $u_1$ is an element of the fraction field of $S$. Taking $qth$ powers we see that\n$$u_1^q = a_1 + \\sum_{2\\leq i\\leq n} u_ix_i^q.$$\nTherefore $u_1$ is integral over $S$. As $S$ is integrally closed, $u_1\\in S$. This implies that $$z = \\sum_{1\\leq i\\leq n} u_ix_i$$\nand so $z\\in IS$. \\end{proof}\n\n\\begin{disc}{\\rm There is an interesting property pertaining to our main theorem. Suppose that\n$(R,{\\gothic{m}})$ is a complete local Noetherian domain of positive characteristic, and let $x_1,...,x_d$ form\na regular sequence. If $x_1,...,x_d$ is a system of parameters, and if $R$ is not Cohen-Macaulay, then\nthere is a non-trivial relation $r_1x_1+...+r_dx_d = 0$. Non-trivial means that it does not\ncome from the Koszul relations. Since $R^+$ is Cohen-Macaulay, we can trivialize this relation in $R^+$,\nand therefore in some finite extension ring $S$ of $R$, $R\\subseteq S\\subseteq R^+$. But Theorem~\\ref{module} does not say\nwhether or not there is a fixed finite extension ring $T$, $R\\subseteq T\\subseteq R^+$ in which all relations on all\nparameters of $R$ become simultaneously trivial. Even if such a ring $T$ exists, this does not mean $T$ is\nitself Cohen-Macaulay; new relations coming from elements of $T$ may be introduced. However, there is a finite\nextension which simultaneously trivializes all relations on systems of parameters. This fact has been proved by\nMelvin Hochster and Yongwei Yao \\cite{HY}.}\n\\end{disc}\n\n\\section{Applications}\n\\bigskip\n\nThe existence of a big Cohen-Macaulay algebra has a great many applications. In some sense\nit repairs the failure of a ring to be Cohen-Macaulay. 
Hochster proved and used the existence of big Cohen-Macaulay\nmodules (the word ``big\" refers to the fact that the modules may not be finitely generated) to prove many of the homological\nconjectures. For a modern update, see \\cite{Ho1}. In general, if you can prove a theorem in the\nCohen-Macaulay case, you should immediately try to use $R^+$ to prove it in general.\nWe give several examples of this phenomenon in this section. As examples, we will prove some\nof the old homological conjectures using this approach; this is not new, but there are currently\na growing number of new homological conjectures, and it could be that characteristic $p$ methods\napply.\n\nOf course, some of the homological conjectures deal directly with systems of parameters. These are\neasy to prove once one has a Cohen-Macaulay module. For example, the next theorem gives the\nmonomial conjecture.\n\n\\begin{thm} Let $R$ be a local Noetherian ring of dimension $d$ and positive characteristic $p$.\nLet $x_1,...,x_d$ be a system of parameters. Then for all $t\\geq 1$, $(x_1\\cdots x_d)^t$ is not\nin the ideal generated by $x_1^{t+1},...,x_d^{t+1}$.\n\\end{thm}\n\n\\begin{proof} We use induction on the dimension $d$ of $R$. The case $d = 1$ is trivial.\nSuppose by way of contradiction that $d > 1$ and $(x_1\\cdots x_d)^t\\in (x_1^{t+1},...,x_d^{t+1})$.\nThis is preserved after completion, and is further preserved after modding out a minimal prime $P$\nsuch that the dimension of the completion modulo $P$ is still $d$. After these operations,\nthe images of the elements $x_i$ still form a system of parameters as well. Thus we may assume that\n$R$ is a complete local domain. We apply Theorem~\\ref{module} to conclude that $x_1,...,x_d$ is\na regular sequence in $R^+$. Write\n$$(x_1\\cdots x_d)^t = \\sum_i s_ix_i^{t+1},$$\nwhere $s_i\\in R$.
Then $x_d^t((x_1\\cdots x_{d-1})^t - s_dx_d)\\in (x_1^{t+1},...,x_{d-1}^{t+1})$.\nSince the powers of the $x_i$ also form a regular sequence in $R^+$, we conclude that\n$(x_1\\cdots x_{d-1})^t - s_dx_d\\in (x_1^{t+1},...,x_{d-1}^{t+1})R^+$. It follows that\nthere is a Noetherian complete local domain $S$ containing $R$ and module-finite over $R$\nsuch that $(x_1\\cdots x_{d-1})^t\\in (x_1^{t+1},...,x_{d-1}^{t+1},x_d)S$. But now\n$(x_1\\cdots x_{d-1})^t$ is in the ideal $(x_1^{t+1},...,x_{d-1}^{t+1})$ in the ring $S\/x_dS$,\nwhich has dimension $d-1$. Our induction shows that this is impossible. \\end{proof}\n\nNext, we apply Theorem~\\ref{module} to various intersection theorems. One of the first such intersection conjectures\nwas: \n\n\\begin{conj} Let $(R,{\\gothic{m}})$ be a local Noetherian ring, and let $M,N$ be two finitely\ngenerated nonzero $R$-modules such that $M\\otimes_RN$ has finite length. Then\n$$\\dim N \\leq \\text{pd}_R(M).$$\n\\end{conj}\n\nOf course there is nothing to prove if the projective dimension of $M$ is infinite.\nWe prove (see \\cite{Ho}):\n\n\\begin{thm}\\label{newintersection} Let $(R,{\\gothic{m}})$ be a local Noetherian ring of positive prime characteristic $p$, and let $M,N$ be two finitely\ngenerated nonzero $R$-modules such that $M\\otimes_RN$ has finite length. Then\n$$\\dim N \\leq \\text{pd}_R(M).$$\n\\end{thm}\n\n\\begin{proof}\nOne can begin by making some easy reductions. These types of reduction are very good practice\nin commutative algebra. First, note that the assumption that the tensor product has finite\nlength is equivalent to saying that $I+J$ is ${\\gothic{m}}$-primary, where $I = \\text{Ann}(N)$ and\n$J = \\text{Ann}(M)$. Then we can choose a prime $P$ containing $I$ such that $\\dim(R\/P) =\n\\dim (N)$, and observe that we can replace $N$ by $R\/P$ without loss of generality.\nIt is more difficult to change $M$, since the property of having finite projective dimension\ndoes not allow many changes.
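The statement of the monomial theorem above admits a quick sanity check in the simplest Cohen-Macaulay case $R=k[x_1,\\dots,x_d]$, where membership of a monomial in a monomial ideal reduces to divisibility by one of the generators. The sketch below (with hypothetical helper names, not part of the argument) encodes monomials as exponent tuples:

```python
# Toy check of the monomial theorem in the polynomial-ring case
# R = k[x_1,...,x_d]: is (x_1 ... x_d)^t in (x_1^{t+1},...,x_d^{t+1})?
# For monomial ideals, a monomial lies in the ideal iff some generator
# divides it, i.e. the generator's exponents are <= coordinatewise.

def monomial_in_ideal(mono, gens):
    """Monomials as exponent tuples; membership = divisibility by a generator."""
    return any(all(g <= m for g, m in zip(gen, mono)) for gen in gens)

def monomial_statement_holds(d, t):
    """Check (x_1...x_d)^t not in (x_1^{t+1},...,x_d^{t+1})."""
    mono = tuple(t for _ in range(d))                           # (x_1...x_d)^t
    gens = [tuple((t + 1) if i == j else 0 for i in range(d))   # x_j^{t+1}
            for j in range(d)]
    return not monomial_in_ideal(mono, gens)

# The exponent t can never reach t+1 in any coordinate, so the statement
# holds trivially here -- the content of the theorem is the general case.
assert all(monomial_statement_holds(d, t)
           for d in range(1, 5) for t in range(1, 5))
print("monomial statement verified for small d, t")
```

The computation only illustrates the Cohen-Macaulay model case; the force of the theorem is that the conclusion survives for arbitrary local Noetherian rings of characteristic $p$.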
\n\nLet's just suppose for a moment that $R\/P$ is Cohen-Macaulay. Since $P+J$ is ${\\gothic{m}}$-primary,\nwe can always choose $x_1,...,x_d\\in J$ whose images in $B = R\/P$ form a system of parameters\n(and thus are a regular sequence in $R\/P$). If the projective dimension of $M$\nis smaller than $d = \\dim(R\/P)$, then $Tor_d^{R}(B\/(x_1,...,x_d)B, M) = 0$. \nNotice that $Tor_0^{R}(B, M) \\ne 0$. We claim by induction that for $0\\leq i\\leq d$,\n$Tor_i^{R}(B\/(x_1,...,x_i)B, M) \\ne 0$. When $i=d$ we arrive at a contradiction.\nSuppose we know this for $i < d$. Set $B_i = B\/(x_1,...,x_i)$. The short exact sequence\n$0\\to B_i\\to B_i\\to B_{i+1}\\to 0$ obtained by multiplication\nby $x_{i+1}$ on $B_i$ induces a map of Tors when tensored with $M$. Since all $x_i$ kill $M$,\nwe obtain a surjection of $Tor_{i+1}^{R}(B_{i+1}, M)$ onto $Tor_i^{R}(B_i, M)$.\nThis finishes the induction.\n\nOf course, we don't know that $R\/P$ is Cohen-Macaulay, and in general it won't be. But now\nsuppose that we are in positive characteristic. We can first complete $R$ before beginning\nthe proof. Now $R\/P$ is a complete local domain, and $S = (R\/P)^+$ is Cohen-Macaulay in\nthe sense that $x_1,...,x_d$ form a regular sequence in this ring. The same proof works\nverbatim, provided we know that $S\\otimes_RM\\ne 0$. But this is easy; it is even nonzero\nafter passing to the residue field of $S$. \\end{proof}\n\n\nAs a corollary, we get a favorite of the old Chicago school of commutative algebra, \nthe zero-divisor conjecture (now a theorem):\n\n\n\\bigskip\n\n\\begin{thm} Let $(R,{\\gothic{m}})$ be a Noetherian local ring of characteristic $p$, and let\n$M$ be a nonzero finitely generated $R$-module having finite projective dimension. If\n$x$ is a non-zerodivisor on $M$, then $x$ is a non-zerodivisor on $R$.\n\\end{thm}\n\n\\begin{proof} This proof is taken from \\cite{PS}.
First observe that the statement of the theorem is equivalent to saying\nthat every associated prime of $R$ is contained in an associated prime of $M$. We induct on\nthe dimension of $M$ to prove this statement. If $\\dim M = 0$, then the only associated\nprime of $M$ is ${\\gothic{m}}$, which clearly contains every prime of $R$. Hence we may assume that\n$\\dim M > 0$. Let $P\\in \\text{Ass}(R)$. First suppose that there is a prime $Q\\in \\text{Supp}(M), Q\\ne {\\gothic{m}},$\nsuch that $P\\subseteq Q$. Then we can change the ring to $R_Q$ and the module to $M_Q$. By induction, $P_Q$ is\ncontained in an associated prime of $M_Q$, so lifting back gives us that $P$ is in an associated\nprime of $M$. We have reduced to the case in which $R\/P\\otimes_RM$ has finite length. By \nTheorem~\\ref{newintersection}, $\\dim (R\/P)\\leq \\text{pd}_R(M) = \\text{depth}(R) - \\text{depth}(M)$. Since\n$P$ is associated to $R$, $\\dim (R\/P)\\geq \\text{depth}(R)$ (exercise). It follows that\nthe depth of $M$ is $0$, and hence the maximal ideal is associated to $M$ (and contains $P$).\n\\end{proof} \n\n\\medskip\n\nFor a completely different type of application, we consider an old result of Grothendieck's concerning\nwhen the punctured spectrum of a local ring is connected.\nThere is a beautiful proof of Grothendieck's result in all characteristics due to Brodmann and Rung \\cite{BR}. The main\npoint here is that if $R$ is Cohen-Macaulay and the $x_i$ are parameters, then there is a very easy\nproof. It turns out that one can always assume that the $x_i$ are parameters, and then the proof\nof the Cohen-Macaulay case directly generalizes to one in characteristic $p$ using $R^+$. \nThe exact statement is:\n\n\n\\begin{thm} Let $(R,{\\gothic{m}})$ be a complete local Noetherian domain of dimension $d$, and let\n$x_1,...,x_k\\in {\\gothic{m}}$, where $k\\leq d-2$. 
Then the punctured spectrum of $R\/(x_1,...,x_k)$ is\nconnected.\n\\end{thm}\n\n\\begin{proof} We take the proof from \\cite{HH}.\nFirst assume that the $x_i$ are parameters.\nLet $I$ and $J$ give a disconnection of the punctured spectrum\nof $R\/(x_1,...,x_k)$. Choose elements $u+v$, $y+z$ which together with the $x_i$ form parameters\nsuch that $u,y\\in I$ and $v,z\\in J$. Modulo $(x_1,...,x_k)$ one has the relation\n$y(u+v) - u(y+z) = 0$. Since the parameters form a regular sequence in $R^+$, we obtain that\n$y\\in (y+z,x_1,...,x_k)R^+$. Similarly, $z\\in (y+z,x_1,...,x_k)R^+$. Write $y = c(y+z)$ modulo\n$(x_1,...,x_k)$ and $z = d(y+z)$ modulo $(x_1,...,x_k)$. Then $(1-c-d)(y+z)\\in (x_1,...,x_k)R^+$\nso that $1-c-d\\in (x_1,...,x_k)R^+$. At least one of $c$ or $d$ is a unit in $R^+$, say $c$. But then\n$y$ is not a zerodivisor modulo $(x_1,...,x_k)R^+$ and this implies that $J\\subseteq (x_1,...,x_k)R^+$.\nThen $I$ must be primary to ${\\gothic{m}}R^+$, which is a contradiction since the height of $I$ is too small.\n\n\nIt remains to reduce to the case in which the $x_i$ are parameters.\n\nWe claim that any $k$ elements are contained, up to radical, in an ideal generated by $k$ elements which are\nparameters.\n The key point is to prove this for $k = 1$. Suppose that $x = x_1$ is given. If $x$\nalready has height one we are done. If $x$ is nilpotent, choose $y$ to be any parameter in $I$. So assume\nthat $x$ is not in every minimal prime. For $n\\gg 0$, $0:x^n = 0:x^{n+1}$, and changing $x$ to $x^n$,\nwe obtain that $x$ is not a zero divisor on $R\/(0:x)$. Then there is an element $s\\in 0:x$ such that\n$y = x +s$ has height one, and we may multiply $s$ by a general element of $I$ to obtain that\n$y\\in I$. But $xy = x^2$ so that $x$ is nilpotent modulo $(y)$. Inductively choose\n$y_1,...,y_{k-1}$ which are parameters such that the ideal they generate contains $x_1,...,x_{k-1}$ up\nto radical.
Replace $R$ by $R\/(y_1,...,y_{k-1})$, and repeat the $k = 1$ step.\n\nNow if\n$I$ and $J$ disconnect the punctured spectrum of $R\/(x_1,...,x_k)$ choose any ideals $I'$ and $J'$\nof height at least $k$, not primary to the maximal ideal such that $I'$ contains $I$ up to radical\nand $J'$ contains $J$ up to radical. Choose parameters in $I'\\cap J'$ such that the $x_i$ are in the\nradical $K$ of these parameters. Then $K + I$ and $K + J$ disconnect the punctured spectrum of $R\/K$.\n\\end{proof}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\subsection{General remarks}\nA large open problem of classical general relativity is the\ncharacterization of the structure of a spacetime by initial data. The\nflat case, Minkowski spacetime, is geodesically complete. To the\nother extreme the singularity theorems by R. Penrose and S. Hawking show\nthat the spacetime cannot be geodesically complete if the data are\nlarge \\cite{HaE73TL}.\n\\\\\nIn the last years there has been remarkable progress in describing\nwhat happens if one goes from data for flat space to large data: The\nfuture of small data evolving in accordance with\nthe Einstein equation with various matter models as sources, vacuum\nand Einstein-Maxwell-Yang-Mills, looks like the future of data for\nflat space \\cite{ChK93TG,Fr93as}. Nevertheless many problems are\nstill unsolved.\n\\\\\nThose results were significantly improved by D.~Christodoulou for\nspherically symmetric models with a massless Klein-Gordon scalar field as\nsource. He was able to relate properties of the initial data to\nproperties of singularities. But even in this case of high symmetry\nthe questions left are still numerous as numerical\nsimulations by M.~Choptuik show \\cite{Ch93ua}. 
He found very\ninteresting properties, the so called echoing effect, for models which\nare in the parameter space of initial data near to the boundary which\nseparates regular from singular spacetimes.\n\\\\\nIn this paper conformal techniques are used to analyze the\nhyperboloidal initial value problem with scalar fields as\nmatter models --- for data near Minkowskian data the future of the\ninitial value\nsurface possesses a smooth future null infinity and a regular timelike\ninfinity, for large data a smooth future null infinity exists for at\nleast some time. In the second part of the introduction more about\nconformal techniques and their application for a mathematical\ndescription of asymptotically flat spacetimes will be said.\n\\\\\nAlthough the primarily treated matter model is that of the conformally\ninvariant scalar field, whose equations can be written as\n\\begin{subequations}\n\\label{model}\n\\begin{eqnarray}\n\\label{Wllngl}\n \\tilde{\\vphantom{\\phi}\\Box} \\mbox{$\\tilde{\\phi}$} - \\frac{\\mbox{$\\tilde{R}$}}{6} \\, \\mbox{$\\tilde{\\phi}$} & = & 0\n\\\\\n\\label{EinstPhys}\n ( 1 - \\frac{1}{4} \\mbox{$\\tilde{\\phi}$}^2 ) \\, \\mbox{$\\tilde{R}$}_{ab} & = &\n \\left(\n (\\mbox{$\\tilde{\\nabla}$}_a \\mbox{$\\tilde{\\phi}$}) (\\mbox{$\\tilde{\\nabla}$}_b \\mbox{$\\tilde{\\phi}$}) - \\frac{1}{2} \\, \\mbox{$\\tilde{\\phi}$} \\, \\mbox{$\\tilde{\\nabla}$}_a \\mbox{$\\tilde{\\nabla}$}_b \\mbox{$\\tilde{\\phi}$}\n - \\frac{1}{4} \\, \\mbox{$\\tilde{g}$}_{ab} (\\mbox{$\\tilde{\\nabla}$}^c \\mbox{$\\tilde{\\phi}$}) (\\mbox{$\\tilde{\\nabla}$}_c \\mbox{$\\tilde{\\phi}$})\n \\right),\n\\end{eqnarray}\nthe results obtained apply to a larger class of scalar field models,\ngiven by the class of actions (\\ref{ScalarAction}), including the massless\nKlein-Gordon field, as shown in section \\ref{SkalarAequiv}. Note that\nan arbitrary factor can be absorbed into $\\mbox{$\\tilde{\\phi}$}{}$ which changes the\ncoefficients in (\\ref{EinstPhys}). 
My notational conventions are\ndescribed in the appendix, the $\\tilde{\\vphantom{H}}$ marks quantities\nin the physical spacetime (see definition \\ref{asymFlat}). The\nenergy-momentum tensor for the conformally invariant scalar field can\nbe written as\n\\begin{equation}\n\\label{TkonfS}\n \\mbox{$\\tilde{T}$}_{ab} = (\\mbox{$\\tilde{\\nabla}$}_a \\mbox{$\\tilde{\\phi}$}) (\\mbox{$\\tilde{\\nabla}$}_b \\mbox{$\\tilde{\\phi}$}) - \\frac{1}{2} \\, \\mbox{$\\tilde{\\phi}$} \\, \\mbox{$\\tilde{\\nabla}$}_a \\mbox{$\\tilde{\\nabla}$}_b \\mbox{$\\tilde{\\phi}$}\n + \\frac{1}{4} \\, \\mbox{$\\tilde{\\phi}$}^2 \\mbox{$\\tilde{R}$}_{ab} -\n \\frac{1}{4} \\, \\mbox{$\\tilde{g}$}_{ab}\n \\left( (\\mbox{$\\tilde{\\nabla}$}^c \\mbox{$\\tilde{\\phi}$}) (\\mbox{$\\tilde{\\nabla}$}_c \\mbox{$\\tilde{\\phi}$}) + \\frac{1}{6} \\, \\mbox{$\\tilde{\\phi}$}^2 \\mbox{$\\tilde{R}$}\n \\right).\n\\end{equation}\n\\end{subequations}\nThe analytic investigation presented in this paper show the well\nposedness of the initial value problem in unphysical spacetime, which\nis a technical construct to ``compactified'' asymptotically flat\nspacetimes in analogy to the compactification of the plane of complex\nnumbers ($\\Bbb{R}^2$) into the Riemann sphere ($\\Bbb{S}^2$).\n\\\\\nOne goal\nof this work was making myself familiar with the system in unphysical\nspacetime as a preparation for numerical work showing\nthat the conformal techniques are well suited for a numerical\ninvestigation of global spacetime structure and gravitational\nradiation \\cite{HuXXIP,Hu93nu}. To lower the computational\nresources required these calculations have been done for spherical\nsymmetry. It is well known that spherically symmetric, uncharged vacuum\nmodels are Schwarzschild. The inclusion of matter removes\nthat obstacle, the spacetime may evolve dynamically. Furthermore there\nis no gravitational radiation in spherically symmetric models. Therefore\nthe matter model should also be a model for radiation. 
Scalar fields\nwith wave equations are choices for matter which model also radiation.\nThe conformally invariant scalar field has been\nchosen since the matter equations are form invariant under rescalings\nof the metric and an appropriate transformation of the scalar field\nas the name already suggests.\n\\\\\nThe scalar fields are interesting from the analytic viewpoint since\nfor the first time conformal techniques could be used for matter\nmodels whose energy-momentum tensor has non-vanishing trace.\n\\subsection{Asymptotically flat spacetimes}\nIn this paper a geometrical, coordinate independent definition of\nasymptotical flatness along the lines suggested by R.~Penrose\nwill be used. A more thorough discussion\nof the ideas and the interpretation can be found at various places in\nthe literature, e.g.\\ \\cite{Ge76as,Pe64ct}. The\ndefinitions of asymptotical flatness given in the literature differ\nslightly. The following will be used here:\n\\begin{Def}\n\\label{asymFlat}\n A spacetime $(\\tilde M, \\mbox{$\\tilde{g}$}_{ab})$ is called {\\bf asymptotically\n flat} if there is another ``unphysical'' spacetime $(M,g_{ab})$\n with boundary \\mbox{$\\cal J$}{} and a smooth embedding by which $\\tilde M$\n can be identified with $M-\\mbox{$\\cal J$}{}$ such that:\n \\begin{enumerate}\n \\item There is a smooth function $\\Omega$ on $M$ with\n \\begin{equation*}\n \\Omega \\mid_{\\tilde M} > 0 \\qquad \\mbox{and} \\qquad\n g_{ab} \\mid_{\\tilde M} = \\Omega^2 \\mbox{$\\tilde{g}$}_{ab}.\n \\end{equation*}\n \\item On \\mbox{$\\cal J$}{}\n \\begin{equation*}\n \\Omega = 0 \\qquad \\mbox{and} \\qquad \\nabla_a \\Omega \\ne 0.\n \\end{equation*}\n \\item \\label{nullCompleteness} Each null geodesic in $(\\tilde\n M,\\tilde g_{ab})$ acquires a past and a future endpoint on \\mbox{$\\cal J$}{}.\n \\end{enumerate}\n\\end{Def}\nBecause of item~\\ref{nullCompleteness} null geodesically incomplete\nspacetimes like Schwarzschild are not asymptotically flat. 
The next\ndefinition includes those spacetimes which have only an\nasymptotically flat part:\n\\begin{Def}\n\\label{weakasymFlat}\n A spacetime is called {\\bf weakly asymptotically flat} if\n definition~\\ref{asymFlat} with the exception\n of item~\\ref{nullCompleteness} is fulfilled.\n\\end{Def}\nDefinitions~\\ref{asymFlat} and~\\ref{weakasymFlat} classify spacetimes;\nthey do not require that Einstein's equation is fulfilled. One would\nlike to know:\n\\\\\nAre they compatible with the Einstein equation with sources?\nNeither definition~\\ref{asymFlat} nor \\ref{weakasymFlat} is in an\ninitial value problem form: A given spacetime is or is not classified\nas asymptotically flat. But for a physical problem one would like to\ngive ``asymptotically flat data'' and be guaranteed that they evolve\ninto an at least weakly asymptotically flat spacetime.\n\\\\\nNevertheless the geometrical description was extremely helpful in\nanalyzing asymptotically flat spacetimes and it can be successfully\nused as a guideline to construct a formalism which is better suited for\nanalyzing initial value problems. This method has been developed and\napplied to various matter sources by H.~Friedrich~\n\\cite{Fr81on,Fr83cp,Fr85ot,Fr86op,Fr86ot,Fr88os,Fr91ot}. In this paper\nit will be applied to general relativistic scalar field models.\n\\\\\nThe idea is to choose a spacelike initial value surface in the\nunphysical spacetime $(M,g_{ab})$ and to evolve it. The problems to\nbe faced are:\n\\\\\nFor Minkowski space the unphysical spacetime $(M,g_{ab})$ can be\nsmoothly extended with three points, future $(i^+)$ and past $(i^-)$\ntimelike infinity, the end point respectively the starting point of all\ntimelike geodesics of $(\\tilde M,\\mbox{$\\tilde{g}$}_{ab})$, and spacelike infinity\n$(i^0)$,\nthe\nend point of all spacelike geodesics of $(\\tilde M,\\mbox{$\\tilde{g}$}_{ab})$. 
The point $i^0$\ndivides \\mbox{$\\cal J$}{} into two disjoint parts, future $(\\mbox{$\\cal J$}^+)$ and past\n$(\\mbox{$\\cal J$}^-)$ null infinity. It is well known and has been discussed\nelsewhere that there are unsolved problems in smoothly\nextending a ``normal'' Cauchy hypersurface of $\\tilde M$ to $i^0$ if the\nspacetime has non-vanishing ADM mass. Certain curvature quantities\nblow up at $i^0$, reflecting the non-invariance of the mass under\nrescalings.\n\\\\\nBy choosing a spacelike (with respect to $g_{ab}$) hypersurface $S$\nnot intersecting $i^0$ but $\\mbox{$\\cal J$}^+$ ($\\mbox{$\\cal J$}^-$) we avoid the problems\nwith $i^0$. $S$ is called a hyperboloidal hypersurface --- the\ncorresponding initial value problem is called a hyperboloidal initial\nvalue problem (a detailed definition for the scalar field models is given in\nsection~\\ref{HypInitValProblSec}, definition~\\ref{HypInitValProbl}).\nThe domain of dependence $D(S)$ of\n$S$ will not contain the whole spacetime. The interior of $S$\ncorresponds to an everywhere spacelike hypersurface in the physical\nspacetime which approaches a null hypersurface $N$ asymptotically. If $N$\nis a light cone $L$ then the domain of dependence of $S$ is $L$. Therefore\nthe hyperboloidal initial value problem is well suited to describe the\nfuture (past) of data on the spacelike hypersurface $S$, e.~g.~a\nstellar object and the gravitational radiation caused by its\ntime evolution. It is not well suited to investigate the structure\nnear $i^0$.\n\\\\\nBut even for the hyperboloidal initial value problem there are\n``regularity'' problems at \\mbox{$\\cal J$}{}: Transforming the Einstein equation\nfrom the physical to the unphysical spacetime results in an equation\nwhich is ``singular'' for\n$\\Omega=0$. That problem is solved in this paper in analogy to\nH.~Friedrich's work. A new set of equations\nfor the unphysical spacetime will be derived and its equivalence to the\nEinstein equation on $\\tilde M$ proven. 
This new set of equations is\nused to prove the consistency of the hyperboloidal initial value\nproblem for scalar fields with (weakly) asymptotical flatness and the\nexistence of a regular future (past) timelike infinity\nfor data sufficiently close to data for Minkowski spacetime.\n\\section{Regularizing the unphysical field equations}\nA first attempt at equations determining $(M,g_{ab})$ is the rescaled\nform of the field equation in physical spacetime. A closer look at\nthe transformation of the Einstein tensor under rescalings\n$g_{ab}=\\Omega^2\\,\\mbox{$\\tilde{g}$}_{ab}$,\n\\begin{equation}\n\\label{GabTransf}\n \\mbox{$\\tilde{G}$}_{ab} =\n G_{ab}\n + 2 \\, \\Omega^{-1} \\left(\n \\nabla_a \\nabla_b \\Omega - (\\nabla^c \\nabla_c \\Omega ) \\, g_{ab}\n \\right)\n + 3 \\, \\Omega^{-2} ( \\nabla^c \\Omega ) ( \\nabla_c \\Omega ) \\, g_{ab},\n\\end{equation}\nshows that this first\nattempt fails. Either there are terms proportional to $\\Omega^{-2}$\nand $\\Omega^{-1}$, which need special care on the set ${\\cal I}$ of\npoints where $\\Omega=0$, including\n\\mbox{$\\cal J$}{}, which is part of $M$. Or, alternatively, the highest (second\norder) derivatives of the metric, hidden in the Einstein tensor, are\nmultiplied by a factor of $\\Omega^2$ and then the principal part of\nthe second order equation for the metric components vanishes on\n${\\cal I}$. This behaviour of an equation will be called singular on\n${\\cal I}$.\n\\\\\nIn this section a system of equations without the singularity on\n${\\cal I}$ will be derived from the rescaled Einstein\nequation by introducing new variables and equations.\n\\\\\nThe set of equations together with the equations for the matter\nvariables may be a system with a very complicated principal part ---\nas is the case for a conformally invariant scalar field as\nmatter model. 
A procedure is carried out to simplify the principal\npart to a form in which no equation contains derivatives of both\nmatter variables and geometry variables and the\nprincipal part of the subsystem for the geometry variables is the same\nas for the vacuum case (``standard form''). All variables already\npresent in the vacuum case are called geometry variables.\n\\\\\nIt is shown that the procedure works for the conformally invariant\nscalar field. The procedure described does not use very restrictive\nassumptions --- it is very general --- and may work for most matter\nmodels for which the unphysical matter equations can be regularized on\n${\\cal I}$.\n\\subsection{The geometry part of the system}\nAccording to the definition of asymptotical flatness\n(definition \\ref{asymFlat}) the unphysical spacetime is connected with the\nphysical spacetime through the rescaling\n\\begin{equation}\n g_{ab} \\mid_{\\tilde M} = \\Omega^2 \\, \\mbox{$\\tilde{g}$}_{ab}.\n\\end{equation}\nThis rule also determines the transformation of the connection\nand the curvature.\n\\\\\nAdditionally the transformation of the matter variables $\\tilde\n\\Phi{}$ under rescaling must be specified,\n\\begin{equation}\n \\Phi \\mid_{\\tilde M} = \\Phi[\\mbox{$\\tilde{g}$}_{ab},\\Omega,\\tilde \\Phi].\n\\end{equation}\nIt is assumed that $\\Phi{}$ has a smooth limit on \\mbox{$\\cal J$}{}, that the\nrescaled equations for the matter variables are regular on ${\\cal\n I}$\\footnote{In the general case it is not known how to achieve\n that.},\nand that there exists a tensor $T_{ab}$ which is independent of derivatives\nof $\\Omega{}$ and of curvature terms, fulfills\n\\begin{equation}\n T_{ab} \\mid_{\\tilde M} = \\Omega^{-2}\\, \\mbox{$\\tilde{T}$}_{ab},\n\\end{equation}\nand has a limit on \\mbox{$\\cal J$}{} \\footnote{From the definition of\n asymptotical flatness and the Einstein equation it follows that\n $\\Omega^{-1}\\, \\mbox{$\\tilde{T}$}_{ab}$ has a limit on 
\\mbox{$\\cal J$}{}~\\cite{AsS80ni}. The\n faster fall off and the requirements on the form of $T_{ab}$ are\n imposed for technical reasons.}.\nThe conditions required may seem very restrictive but they\ncan be fulfilled for Yang-Mills fields~\\cite{Fr91ot} and for the\nconformally invariant scalar field.\n\\\\[0.1cm]\nThe Riemann tensor will be split into its irreducible parts, the\nconformal Weyl tensor\n\\begin{equation}\n\\label{ddef}\nC_{abc}{}^d=:\\Omega\\,d_{abc}{}^d,\n\\end{equation}\nthe trace free part $\\mbox{$\\hat{R}$}_{ab}$ of the Ricci tensor $R_{ab}$ and the\nRicci scalar $R$:\n\\begin{equation}\n\\label{Ralg}\n R_{abcd} =\n \\Omega \\, d_{abcd}\n + g_{c[a} \\mbox{$\\hat{R}$}_{b]d} - g_{d[a} \\mbox{$\\hat{R}$}_{b]c}\n + \\frac{1}{6} g_{c[a} g_{b]d} R.\n\\end{equation}\nA $\\hat{\\phantom{H}}$ is used to indicate trace free parts of\ntensors.\n\\\\\nThe irreducible decomposition of the energy-momentum tensor is\n\\begin{equation}\n\\label{IrredT}\n T_{ab} = \\mbox{$\\hat{T}$}_{ab} + \\frac{1}{4} \\, g_{ab} \\, T.\n\\end{equation}\nThe irreducible parts transform under rescalings according to\n$$\n \\mbox{$\\hat{T}$}_{ab} = \\Omega^{-2} \\mbox{$\\tilde{\\hT}$}_{ab}\n$$\nand\n$$\n T = \\Omega^{-4} \\mbox{$\\tilde{T}$}.\n$$\nThe vanishing of the divergence of $\\mbox{$\\tilde{T}$}_{ab}$ becomes\n\\begin{equation}\n\\label{transdivT}\n 0 = \\tilde\\nabla^a \\tilde T_{ab} =\n \\Omega^4 \\, \\nabla^a \\mbox{$\\hat{T}$}_{ab} +\n \\frac{1}{4} \\, \\Omega^4 \\, \\nabla_b T +\n \\Omega^3 \\, T \\, \\nabla_b \\Omega.\n\\end{equation}\nFor energy-momentum tensors with non-vanishing trace, equation\n(\\ref{transdivT}), regarded as an equation for the components of the\nirreducible parts of the energy-momentum tensor $T_{ab}$,\nis singular on ${\\cal I}$. 
Since (\\ref{transdivT}) should in some way be\npart of the matter equations, problems in regularizing the matter\nequations are to be expected.\n\\subsubsection{A regular system}\nThe part of (\\ref{GabTransf}) proportional to $\\mbox{$\\Omega$}^{-2}$ is a pure\ntrace; thus the $\\mbox{$\\Omega$}^{-2}$ singularity is absent in the trace free equation.\nA decomposition into the trace and the trace free part moves the\nworst term into one equation.\n\\\\\n{}From the rescaling rule for the Ricci scalar and tensor,\n\\begin{equation}\n\\label{Rtrans}\n \\mbox{$\\tilde{R}$} = \\Omega^2 R + 6 \\, \\Omega \\, \\nabla^a \\nabla_a \\Omega -\n 12 \\, (\\nabla^a \\Omega) \\, (\\nabla_a \\Omega),\n\\end{equation}\nand\n\\begin{eqnarray}\n \\tilde{\\mbox{$\\hat{R}$}}_{ab} & := & \\mbox{$\\tilde{R}$}_{ab} - \\frac{1}{4} \\mbox{$\\tilde{g}$}_{ab} \\mbox{$\\tilde{R}$}\n\\nonumber \\\\\n & = & \\mbox{$\\hat{R}$}_{ab} + 2 \\, \\Omega^{-1} \\nabla_a \\nabla_b \\Omega\n - \\frac{1}{2} \\Omega^{-1} (\\nabla^c \\nabla_c \\Omega) \\,\n g_{ab},\n\\end{eqnarray}\n$\\mbox{$\\tilde{G}$}_{ab} = \\mbox{$\\tilde{T}$}_{ab}$, ${\\mbox{$\\tilde{G}$}=-\\mbox{$\\tilde{R}$}}$, and ${\\mbox{$\\tilde{T}$} = \\mbox{$\\tilde{G}$}}$\nit follows that\n\\begin{equation}\n\\label{SpGl}\n \\Omega \\, R + 6 \\, \\nabla^a \\nabla_a \\Omega\n - 12 \\, \\Omega^{-1} \\, (\\nabla^a \\Omega) \\, (\\nabla_a \\Omega)\n = - \\, \\Omega^3 \\, T\n\\end{equation}\nand\n\\begin{equation}\n\\label{sfGl}\n \\Omega \\, \\mbox{$\\hat{R}$}_{ab} + 2 \\, \\nabla_a \\nabla_b \\Omega -\n \\frac{1}{2} (\\nabla^c \\nabla_c \\Omega) \\, g_{ab} =\n \\Omega^3 \\, \\mbox{$\\hat{T}$}_{ab}.\n\\end{equation}\nEquation (\\ref{SpGl}) can be dealt with by the following lemma:\n\\begin{Lemma}\n From $\\mbox{$\\tilde{R}$}+\\mbox{$\\tilde{T}$}=0$ ($\\hat{=}$ (\\ref{SpGl})) at one point,\n $\\hat{\\mbox{$\\tilde{G}$}}_{ab}=\\hat{\\mbox{$\\tilde{T}$}}_{ab}$ ($\\hat{=}$ (\\ref{sfGl})), and $\\mbox{$\\tilde{\\nabla}$}^b\n {\\mbox{$\\tilde{T}$}_{ab}}=0$ 
$\\mbox{$\\tilde{R}$}+\\mbox{$\\tilde{T}$}=0$ follows everywhere.\n\\end{Lemma}\nProof:\n\\begin{equation*}\n \\mbox{$\\tilde{\\nabla}$}^a \\mbox{$\\tilde{T}$}_{ab} = \\mbox{$\\tilde{\\nabla}$}^a \\mbox{$\\tilde{\\hT}$}_{ab} + \\frac{1}{4} \\mbox{$\\tilde{\\nabla}$}_b \\mbox{$\\tilde{T}$} = 0.\n\\end{equation*}\nCombined with\n\\begin{eqnarray*}\n 0 & = & \\mbox{$\\tilde{\\nabla}$}^a \\mbox{$\\tilde{G}$}_{ab} \\\\\n & = & \\mbox{$\\tilde{\\nabla}$}^a \\tilde{\\mbox{$\\hat{G}$}}_{ab} + \\frac{1}{4} \\mbox{$\\tilde{\\nabla}$}_b \\mbox{$\\tilde{G}$}\n\\end{eqnarray*}\ngives\n\\begin{equation*}\n \\mbox{$\\tilde{\\nabla}$}_b (\\mbox{$\\tilde{T}$} + \\mbox{$\\tilde{R}$}) = 0,\n\\end{equation*}\ni.e.\\ $\\mbox{$\\tilde{T}$} + \\mbox{$\\tilde{R}$} $ is constant; since it vanishes at one\npoint, it vanishes everywhere.\n\\\\\nEquation (\\ref{SpGl}) will not be used any longer since $\\mbox{$\\tilde{\\nabla}$}^b\n{\\mbox{$\\tilde{T}$}_{ab}}=0$ can be derived from the remaining equations: contract\n(\\ref{quaSysd}) or see the discussion following (\\ref{IntegrBed}).\n\\\\\nIn the following the Ricci scalar $R$ will be regarded as an\narbitrary, given function.\nIt fixes part of the gauge freedom in the transition from the\nphysical to the unphysical spacetime as follows:\nThe equations (\\ref{SpGl}) and (\\ref{sfGl}) are invariant under\nrescalings ${(g_{ab},\\Omega)}\\mapsto{(\\bar g_{ab},\\bar\n \\Omega)}:={(\\Theta^2 g_{ab},\\Theta\\,\\Omega)}$ with $\\Theta>0$. All\nthe unphysical spacetimes $(M,\\Theta^2 g_{ab}, \\Theta\\,\\Omega)$ belong\nto the same physical spacetime $(\\tilde M,\\mbox{$\\tilde{g}$}_{ab})$.\n\\\\\nUnder the rescaling $\\bar g_{ab} = \\Theta^2\\,g_{ab}$, $R$ and $\\bar R$\nare connected by\n\\begin{equation}\n\\label{conformalGauge}\n 6\\, \\nabla^a \\nabla_a \\Theta = \\Theta R - \\Theta^3 \\bar R,\n\\end{equation}\nwhich is equation (\\ref{Rtrans}) where the covariant derivatives\n$\\nabla_a$ now correspond to the unscaled metric. 
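For completeness, (\\ref{conformalGauge}) can be checked against the standard four-dimensional rescaling identity (a short computation not spelled out in the text): for $\\bar g_{ab} = \\Theta^2 g_{ab}$,\n\\begin{equation*}\n \\bar R = \\Theta^{-2} R - 6 \\, \\Theta^{-3} \\, \\nabla^a \\nabla_a \\Theta,\n\\end{equation*}\nwhere $\\nabla_a$ belongs to $g_{ab}$; multiplying by $\\Theta^3$ and rearranging yields $6 \\, \\nabla^a \\nabla_a \\Theta = \\Theta R - \\Theta^3 \\bar R$. Setting $\\bar R = R$ in this identity immediately gives the residual gauge condition (\\ref{ConformalGaugeFix}).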
Solving\n(\\ref{conformalGauge}) for a given spacetime ($M$, $g_{ab}$) and data\nfor $\\Theta{}$ and $\\dot \\Theta{}$ on\na spacelike surface $S$, we get, at least locally, an unphysical\nspacetime with arbitrary Ricci scalar $\\bar{R}$.\n\\\\\nThere is still conformal gauge freedom left as every rescaling with\n$\\Theta>0$ and\n\\begin{equation}\n\\label{ConformalGaugeFix}\n \\nabla^a \\nabla_a \\Theta = \\frac{1}{6} \\Theta\\,R\\,\\left(1-\\Theta^2\\right)\n\\end{equation}\nleaves the Ricci scalar unchanged.\n\\\\[2\\parskip]\nEquation (\\ref{sfGl}) serves as a regular equation for\n$\\Omega{}$. Substituting ${\\omega = \\frac{1}{4} \\, \\nabla^c \\nabla_c\n \\Omega}$ yields\n\\begin{equation}\n\\label{OmGl}\n \\nabla_a \\nabla_b \\Omega = - \\, \\frac{1}{2} \\, \\Omega \\, \\mbox{$\\hat{R}$}_{ab}\n + \\omega \\, g_{ab}\n + \\frac{1}{2} \\, \\Omega^3 \\mbox{$\\hat{T}$}_{ab},\n\\end{equation}\nwhich is a second order equation for $\\Omega{}$.\n\\\\\nThe next step is to find equations for the metric and the quantities\nderived therefrom. 
Expressing the once contracted second Bianchi\nidentity (${\\nabla_{[a} R_{bc]d}{}^a =0}$) in terms of $\\mbox{$\\hat{R}$}_{ab}$ and\n$d_{abc}{}^d$\nresults in\n\\begin{equation}\n\\label{hRGl}\n \\nabla_{[a} \\mbox{$\\hat{R}$}_{b]c} = - \\, \\frac{1}{12} \\, (\\nabla_{[a} R) g_{b]c}\n - (\\nabla_d \\Omega) \\, d_{abc}{}^d\n - \\Omega \\, \\nabla_d d_{abc}{}^d.\n\\end{equation}\nThe once contracted second Bianchi identity in the physical spacetime,\n$$\n \\mbox{$\\tilde{\\nabla}$}_d \\mbox{$\\tilde{C}$}_{abc}{}^d =\n - \\mbox{$\\tilde{\\nabla}$}_{[a} (\\mbox{$\\tilde{R}$}_{b]c} - \\frac{1}{6} \\mbox{$\\tilde{g}$}_{b]c} \\mbox{$\\tilde{R}$}),\n$$\ntogether with\n$$\n \\Omega^{-1} \\mbox{$\\tilde{\\nabla}$}_d \\mbox{$\\tilde{C}$}_{abc}{}^d = \\nabla_d(\\Omega^{-1}\n C_{abc}{}^d),\n$$\nand the Einstein equation in physical spacetime provide us with\nan equation for $d_{abc}{}^d$:\n\\begin{eqnarray}\n\\label{dGl}\n \\nabla_d d_{abc}{}^d\n & = & - \\, \\Omega \\, \\nabla_{[a} \\mbox{$\\hat{T}$}_{b]c}\n - 3 \\, (\\nabla_{[a} \\Omega) \\, \\mbox{$\\hat{T}$}_{b]c}\n + g_{c[a} \\mbox{$\\hat{T}$}_{b]d} \\, (\\nabla^d \\Omega)\n \\nonumber \\\\\n & & + \\frac{1}{3} \\, (\\nabla_{[a} \\Omega) \\,\n T \\, g_{b]c}\n + \\frac{1}{12} \\Omega \\, (\\nabla_{[a}T)\n \\, g_{b]c} \\; =: \\; t_{abc}.\n\\end{eqnarray}\nWe can now derive the missing equation for $\\omega{}$ from the\nintegrability condition for (\\ref{OmGl}) and by substituting (\\ref{hRGl}):\n\\begin{eqnarray}\n\\label{omGl}\n \\nabla_a \\omega & = & - \\frac{1}{2} \\mbox{$\\hat{R}$}_{ab} \\, \\nabla^b \\Omega\n - \\frac{1}{12} \\, R \\, \\nabla_a \\Omega\n - \\frac{1}{24} \\Omega \\, \\nabla_a R\n + \\frac{1}{2} \\Omega^2 \\, \\mbox{$\\hat{T}$}_{ab} \\, \\nabla^b \\Omega\n \\nonumber \\\\\n & & - \\frac{1}{6} \\Omega^2 \\, (\\nabla_a \\Omega) \\, T\n - \\frac{1}{24} \\Omega^3 \\, \\nabla_a T.\n\\end{eqnarray}\nIn the following, in addition to abstract indices (small Latin\nletters), frame indices (underlined) and 
coordinate indices (Greek\nletters) are used. The conventions are explained in more detail in\nthe appendix.\n\\\\\nUsing $\\Omega_a := \\nabla_a \\Omega{}$, the frame $e_{\\f{i}}{}^a$, and\nthe Ricci rotation coefficients $\\gamma^a{}_{\\f{i}\\f{j}}$ as further\nvariables, we get the following first order system of tensor equations\nfor $\\Omega{}$, $\\Omega_a$, $\\omega$, $e_{\\f{i}}{}^a$,\n$\\gamma^a{}_{\\f{i}\\f{j}}$, $\\mbox{$\\hat{R}$}_{ab}$, and $d_{abc}{}^d$: \\footnote{\nThe symbol $E$ stands for equation; the first index indicates the\nquantity for which a temporary equation will be formed by setting the\ntensor $E$ equal to $0$. The tensors providing the eventually used\nforms of the equations are named with $\\N{}$, standing for null\nquantities.}\n\\begin{subequations}\n\\label{quaSys}\n\\begin{eqnarray}\n \\label{quaSysOm}\n \\label{NO}\n \\N{\\Omega}_a = E\\I{\\Omega}_a & = & \\nabla_a \\Omega - \\Omega_a = 0 \\\\\n \\label{quaSysDOm}\n \\label{NDO}\n \\N{D\\Omega}_{ab} = E\\I{D\\Omega}_{ab} & = &\n \\nabla_a \\Omega_b + \\frac{1}{2} \\Omega \\mbox{$\\hat{R}$}_{ab}\n - \\omega g_{ab} - \\frac{1}{2} \\Omega^3 \\mbox{$\\hat{T}$}_{ab} = 0 \\\\\n \\label{No}\n \\N{\\omega}_{a} = E\\I{\\omega}_a & = &\n \\nabla_a \\omega + \\frac{1}{2} \\mbox{$\\hat{R}$}_{ab} \\Omega^b\n + \\frac{1}{12} R \\Omega_a + \\frac{1}{24} \\Omega \\nabla_a R\n - \\frac{1}{2} \\Omega^2 \\mbox{$\\hat{T}$}_{ab} \\Omega^b \\nonumber \\\\\n && \\qquad + \\frac{1}{6} \\Omega^2 \\Omega_a T\n + \\frac{1}{24} \\Omega^3 \\nabla_a T = 0 \\\\\n \\label{quaSysDe}\n \\label{Ne}\n \\N{e}^a{}_{bc} = E\\I{e}^a{}_{bc} & = & T^a{}_{bc} = 0 \\\\\n \\label{quaSysDgamma}\n \\label{Ng}\n \\N{\\gamma}_{abc}{}^d = E\\I{\\gamma}_{abc}{}^d & = &\n R\\I{diff}_{abc}{}^d - R\\I{alg}_{abc}{}^d = 0 \\\\\n \\label{quaSysR}\n E\\I{R}_{abc} & = & \\nabla_{[a} \\mbox{$\\hat{R}$}_{b]c}\n + \\frac{1}{12} (\\nabla_{[a} R) g_{b]c} + \\Omega_d d_{abc}{}^d\n + \\Omega \\, t_{abc} = 0 \\\\ \n \\label{quaSysd}\n 
E\\I{d}_{abc} & = & \\nabla_d d_{abc}{}^d - t_{abc} = 0\n\\end{eqnarray}\n\\end{subequations}\nwhere (\\ref{quaSysDe}) means vanishing torsion $T^a{}_{bc}$, expressed\nin frame index form,\n\\begin{equation*}\n T^{\\f{i}}{}_{\\f{j}\\f{k}} =\n \\left( e_{\\f{j}}(e_{\\f{k}}{}^\\mu) -\n e_{\\f{k}}(e_{\\f{j}}{}^\\mu) \\right)\n e^{\\f{i}}{}_\\mu\n + \\gamma^{\\f{i}}{}_{\\f{j}\\f{k}} - \\gamma^{\\f{i}}{}_{\\f{k}\\f{j}},\n\\end{equation*}\nand (\\ref{quaSysDgamma}) means that the curvature tensor in terms of\nthe Ricci rotation coefficients, in frame index form\n\\begin{eqnarray*}\n R\\I{diff}_{\\f{i}\\f{j}\\f{k}}{}^{\\f{l}} & = &\n e_{\\f{j}}(\\gamma^{\\f{l}}{}_{\\f{i}\\f{k}})\n - e_{\\f{i}}(\\gamma^{\\f{l}}{}_{\\f{j}\\f{k}})\n - \\gamma^{\\f{l}}{}_{\\f{i}\\f{m}} \\gamma^{\\f{m}}{}_{\\f{j}\\f{k}}\n + \\gamma^{\\f{l}}{}_{\\f{j}\\f{m}} \\gamma^{\\f{m}}{}_{\\f{i}\\f{k}}\n \\nonumber \\\\\n && \\qquad\n + \\gamma^{\\f{m}}{}_{\\f{i}\\f{j}} \\gamma^{\\f{l}}{}_{\\f{m}\\f{k}}\n + \\gamma^{\\f{m}}{}_{\\f{j}\\f{i}} \\gamma^{\\f{l}}{}_{\\f{m}\\f{k}}\n - \\gamma^{\\f{l}}{}_{\\f{m}\\f{k}} T^{\\f{m}}{}_{\\f{j}\\f{i}},\n\\end{eqnarray*}\nshould equal the combination\n\\begin{equation*}\n \\Omega \\, d_{abcd}\n + g_{c[a} \\mbox{$\\hat{R}$}_{b]d} - g_{d[a} \\mbox{$\\hat{R}$}_{b]c}\n + \\frac{1}{6} g_{c[a} g_{b]d} R =: R\\I{alg}_{abc}{}^d,\n\\end{equation*}\nwhich is the irreducible decomposition of a tensor with the symmetry\nof the Riemann tensor (\\ref{Ralg}). 
Hence (\\ref{quaSysDe}) and\n(\\ref{quaSysDgamma})\nensure that $R\\I{alg}_{abc}{}^d$ is the curvature tensor corresponding\nto the connection given by the Ricci rotation coefficients, which in turn\nis the torsion free connection coming from the metric (frame).\n\\subsubsection{Complications by the matter terms}\nThe final goal is to use the terms $\\nabla_a \\Omega{}$, $\\nabla_a\n\\Omega_b$, $\\nabla_a \\omega{}$, $\\left( e_{\\f{j}}(e_{\\f{k}}{}^\\mu) -\ne_{\\f{k}}(e_{\\f{j}}{}^\\mu) \\right) e^{\\f{i}}{}_\\mu{}$,\\hfill\n${e_{\\f{j}}(\\gamma^{\\f{l}}{}_{\\f{i}\\f{k}}) -\ne_{\\f{i}}(\\gamma^{\\f{l}}{}_{\\f{j}\\f{k}})}$, $\\nabla_{[a} \\mbox{$\\hat{R}$}_{b]c}$, and\n$\\nabla_d d_{abc}{}^d$ in (\\ref{quaSys}) as principal\npart for the geometry variables of the system. I will call these terms\nthe left side of the equations and the remaining terms the right side.\nThe left side\ndoes not contain the complete principal part of the system yet as the\nenergy momentum\ntensor $T_{ab}$ and its derivatives $\\nabla_{[a} T_{b]c}$ may contain\nderivatives of the matter and geometry variables.\n\\\\\nIn the case of the conformally invariant scalar field the field equation\n(\\ref{model}) remains invariant under the rescaling\n\\begin{equation*}\n \\phi = \\Omega^{-1} \\, \\mbox{$\\tilde{\\phi}$},\n\\end{equation*}\ni.e.\n\\begin{equation*}\n {\\vphantom{\\phi}\\Box} \\phi - \\frac{R}{6} \\, \\phi = 0.\n\\end{equation*}\nThe physical energy-momentum tensor $\\mbox{$\\tilde{T}$}_{ab}$ fulfills the assumed properties,\n\\begin{eqnarray*}\n \\mbox{$\\tilde{T}$}_{ab} & = & \\mbox{$\\Omega$}^2\n \\left[\n (\\nabla_a \\phi) (\\nabla_b \\phi)\n - \\frac{1}{2} \\phi \\nabla_a \\nabla_b \\phi\n + \\frac{1}{4} \\phi^2 R_{ab}\n - \\frac{1}{4} g_{ab}\n \\left( (\\nabla^c \\phi) (\\nabla_c \\phi)\n + \\frac{1}{6} \\phi^2 R \\right)\n \\right] \\nonumber \\\\\n & =: & \\mbox{$\\Omega$}^2 \\, T_{ab}.\n\\end{eqnarray*}\n\\\\\nThe complications caused in (\\ref{quaSys}) by the right sides are\nnow 
obvious. Firstly $\\nabla_{[a} T_{b]c}$ contains $\\nabla_{[a}\n\\nabla_{b]} \\nabla_c \\phi $ terms which are eliminated with the\nidentity $\\nabla_{[a} \\nabla_{b]} \\nabla_c \\phi{} = \\frac{1}{2}\nR_{abc}{}^d \\nabla_d \\phi $. To get rid of the second and first\norder derivatives of $\\phi $ we use the first order system\n\\begin{subequations}\n\\label{NM}\n\\begin{eqnarray}\n\\label{Np}\n \\N{\\phi}_a & = & \\nabla_a \\phi - \\phi_a = 0\n\\\\\n\\label{NDp}\n \\N{D\\phi}_{ab} & = &\n \\nabla_a \\phi_b - \\hat{\\phi}_{ab} - \\frac{1}{4} \\phi_c{}^c \\, g_{ab} = 0\n\\\\\n\\label{NBp}\n \\N{\\Box\\phi} & = & \\phi_a{}^a - \\frac{R}{6} \\, \\phi = 0\n\\\\\n\\label{NDDp}\n \\N{DD\\phi}_{abc} & = &\n \\nabla _{[a} \\hat{\\phi}_{b]c}\n + \\frac{1}{6} \\, ( \\phi\\,\\nabla_{[a} R + R \\, \\phi_{[a} ) g_{b]c}\n - \\frac{1}{2} \\, R\\I{alg}_{abc}{}^d \\phi_d = 0\n\\\\\n\\label{NDBp}\n \\N{D\\Box\\phi}_a & = &\n \\nabla _{a} \\phi_b{}^b\n - \\frac{1}{6} \\, ( \\phi\\, \\nabla_a R + R \\, \\phi_{a} ) = 0.\n\\end{eqnarray}\n\\end{subequations}\nfor the variables $\\phi$, $\\phi_a$, the trace free symmetric tensor\n$\\hat{\\phi}_{ab}$ and the trace $\\phi_a{}^a$. 
The system is\nderived from $\\nabla_a\n\\left( {\\vphantom{\\phi}\\Box} \\phi - \\frac{R}{6} \\, \\phi \\right) =0$.\nSystem (\\ref{NM}) also serves as the matter part of the system for the\nunphysical spacetime.\n$t_{abc}$ is now written in a form which does not contain any\nderivatives of matter variables explicitly.\n\\\\\n$\\nabla_{[a} T_{b]c}$ and thus $t_{abc}$ still contain\nderivatives $\\nabla_{[a} \\mbox{$\\hat{R}$}_{b]c}$ of the trace free Ricci tensor.\nBy combining (\\ref{quaSysR}) and (\\ref{quaSysd}) the\nderivatives of $\\mbox{$\\hat{R}$}_{ab}$ and $d_{abc}{}^d$ can be decoupled.\n(\\ref{quaSysR}) and (\\ref{quaSysd}) become\n\\begin{equation}\n\\label{quaSysRvar}\n E'\\I{R}_{abc} = \\nabla_{[a} \\mbox{$\\hat{R}$}_{b]c}\n + \\frac{1}{12} (\\nabla_{[a} R) g_{b]c} - \\Omega_d d_{abc}{}^d\n + \\Omega m_{abc} = 0\n\\end{equation}\nand\n\\begin{equation}\n\\label{quaSysdvar}\n E'\\I{d}_{abc} = \\nabla_d d_{abc}{}^d - m_{abc} = 0,\n\\end{equation}\nwith\n\\begin{eqnarray*}\n \\lefteqn{ m_{abc} = \\frac{1}{1-\\frac{1}{4} \\, \\Omega^2\\phi^2} \\; \\times } \\\\\n \\lefteqn{ \\Big( \\, \\Omega } && \\qquad\n \\big[ \\frac{3}{2} \\, \\phi_{[a} \\phi_{b]c}\n - \\frac{1}{2} \\, g_{c[a} \\phi_{b]d} \\phi^d\n + \\frac{1}{4} \\, \\phi \\, \\Omega d_{abc}{}^d \\phi_d\n + \\frac{1}{4} \\, \\phi \\, g_{c[a} \\mbox{$\\hat{R}$}_{b]}{}^d \\phi_d\n - \\frac{3}{4} \\, \\phi \\, \\phi_{[a} \\mbox{$\\hat{R}$}_{b]c} \\\\ && \\qquad \\quad\n - \\frac{1}{12} \\, \\phi \\, \\phi_{[a} g_{b]c} R\n + \\frac{1}{4} \\, \\Omega \\, \\phi^2 d_{abc}{}^d \\Omega_d \\big]\n\\\\ && \\quad\n - 3 \\, \\Omega_{[a} \\big[ \\phi_{b]} \\phi_c\n - \\frac{1}{2} \\, \\phi \\, \\phi_{b]c}\n + \\frac{1}{4} \\, \\phi^2 \\mbox{$\\hat{R}$}_{b]c}\n + \\frac{1}{36} \\, \\phi^2 g_{b]c} R\n - \\frac{1}{3} \\, g_{b]c} \\, \\phi^d \\phi_d \\big]\n\\\\ && \\quad\n + \\Omega^d g_{c[a} \\big[ \\phi_{b]} \\phi_d\n - \\frac{1}{2} \\, \\phi \\, \\phi_{b]d}\n + \\frac{1}{4} \\, \\phi^2 \\mbox{$\\hat{R}$}_{b]d} \\big] 
\\quad \\Big).\n\\end{eqnarray*}\nNote that $m_{abc}$ may become singular for\n$1-\\frac{1}{4}\\,\\Omega^2\\phi^2= 1-\\frac{1}{4}\\,\\mbox{$\\tilde{\\phi}$}^2=0$. In the\nEinstein equations for the physical spacetime (\\ref{EinstPhys})\n$\\mbox{$\\tilde{R}$}_{ab}$ carries a factor $1-\\frac{1}{4} \\mbox{$\\tilde{\\phi}$}^2$ too.\nWe will need later that\n\\begin{equation*}\n \\N{m}_{abc} := t_{abc} - m_{abc} =\n - \\frac{1}{4} \\, \\Omega \\, \\phi^2\n \\left( \\N{R}_{abc} + \\frac{2}{3} \\, \\Omega \\, m_{[a|d|}{}^d \\,\n g_{b]c}\n\\right),\n\\end{equation*}\nwhere $\\N{R}_{abc}$ is the null quantity representing the final form\nof the equation for $\\mbox{$\\hat{R}$}_{ab}$ (\\ref{NR}).\nThe final form of the equation for $d_{abc}{}^d$ is\nobtained from (\\ref{quaSysdvar}) by replacing $E'\\I{d}_{abc}=0$ with\n\\begin{equation}\n\\label{Nd}\n \\N{d}_{abc} := E'\\I{d}_{abc} + \\frac{2}{3} m_{[a|d|}{}^d g_{b]c} = 0.\n\\end{equation}\nThis gives $\\N{d}_{abc}$ the same index symmetry properties as the Weyl tensor.\nThat replacement does not change the equation\nsince $m_{ab}{}^b=0$, as will be seen later.\n\\\\\nAnalogously we replace (\\ref{quaSysRvar}) with\n\\begin{equation}\n\\label{NR}\n \\N{R}_{abc} :=\n E'\\I{R}_{abc} - \\frac{2}{3} \\, \\Omega \\, m_{[a|d|}{}^d g_{b]c} = 0;\n\\end{equation}\nthe contraction $\\N{R}_{ab}{}^b=0$ is then the contracted second\nBianchi identity.\n\\section{Evolution equations and constraints}\nIn the following I will assume a system $\\N{}=0$ consisting of a geometry\npart of the form (\\ref{quaSysOm})\n-- (\\ref{quaSysDgamma}), (\\ref{Nd}), (\\ref{NR})\nand a matter part, which in the case of the scalar field model is system\n(\\ref{NM}). $m_{abc}$ and $t_{abc}$ are assumed to differ only by terms\nexpressible as null quantities. The energy-momentum tensor\n$T_{ab}$, its derivatives $\\nabla_a T_{bc}$, $m_{abc}$, and $t_{abc}$\nare assumed to be expressed in the variables and thus to contain no\nexplicit derivatives of the variables. 
By these assumptions the\nprincipal part of the system has block form: a geometry block and\na matter block. The two blocks are coupled through the right sides.\n\\\\\nIn this chapter the system $\\N{}=0$ will be reduced to a system of\nsymmetric hyperbolic time evolution equations, the subsidiary system.\nSufficient conditions for the equivalence of the subsidiary system and\n$\\N{}=0$ are given as conditions on $m_{abc}$. If the system can be put\ninto the described block form, no further conditions arise\nfrom the geometry part of the system for any matter model.\n\\\\\nCarrying this out explicitly is technical and lengthy; the idea can be\nsummarized as follows: All the equations of the system are\nregarded as null quantities. By requiring the vanishing of some of these null\nquantities and by choosing an appropriate gauge condition for the\ncoordinates and the frame we get a symmetric hyperbolic\nsubsidiary system of evolution equations.\nWhich null quantities to choose can best be seen by a decomposition into\nthe irreducible parts in the spinor calculus as performed\nin~\\cite{Fr91ot}.\n\\\\\nThe solution of this symmetric hyperbolic subsystem exists and is\nunique. To complete the proof we must show\nthat the solution obtained in this way is consistent with the rest of\nthe equations, i.e.\\ that all null quantities remain zero if they are\ninitially zero (``propagation of the constraints''). For that purpose a\nsymmetric hyperbolic system of time evolution equations for\nthe remaining null quantities is derived. 
Sufficient conditions for the\npropagation of the constraints are, firstly, that the evolution\nequations for the remaining null quantities are homogeneous in the null\nquantities, since then the unique solution of these evolution equations\nis the vanishing\nof all null quantities for all times if they vanish on the initial\nsurface, and, secondly, that the domain of dependence of $S$ with respect\nto the equations for the propagation of the constraints is a\nsuperset of the domain of dependence of $S$ with respect to the\nsubsidiary system.\n\\subsection{A symmetric hyperbolic subsystem of evolution\n equations}\n\\label{GaussGauge}\nIntroducing a timelike vector field $t^a$, not necessarily hypersurface\northogonal, and its orthogonal projection tensor\n$h_{ab}:=g_{ab}-t_at_b\\/(t_ct^c)$ allows us to split the\nsystem of equations into two categories: the equations containing time\nderivatives and those containing no time derivatives (the\nconstraints). The equations with time derivatives provide an\nunder\\/overdetermined system of evolution equations.\n\\\\\nThe system is\noverdetermined since for some quantities there are too many time\nevolution equations, e.g.\\ there are 12 time evolution equations from\n$\\N{R}_{abc}=0$\nfor 9 independent tensor components. Three equations are linear\ncombinations of the other 9 equations and the constraints. An\nirreducible decomposition of the tensors $\\N{}$ is a systematic way to\nanalyze these dependencies. 
Since all the types of tensor index\nsymmetries appearing in the system have been thoroughly investigated\nin~\\cite{Fr91ot}, I will only state which combinations are needed.\n\\\\\nThe system is underdetermined since 10 time evolution\nequations, for the frame and the Ricci rotation coefficients, are missing.\nBy adding\n\\begin{equation}\n\\label{KoEichFrame}\n e^{\\f{i}}{}_b \\, g^{\\f{j}\\f{k}} \\, e_{\\f{j}}{}^a \\left( \\nabla_a\n e_{\\f{k}}{}^b \\right) = - F_{\\f{i}} =\n \\gamma_{\\f{i}\\f{k}}{}^{\\f{k}}\n\\end{equation}\nand\n\\begin{equation}\n\\label{FrEichFrame}\n \\partial_{\\f{k}} \\gamma^{\\f{i}\\f{k}\\f{j}}\n + \\gamma^{\\f{i}\\f{k}\\f{j}} F_{\\f{k}}\n + \\gamma^{\\f{l}\\f{k}\\f{i}} \\, \\gamma_{\\f{l}\\f{k}}{}^{\\f{j}}\n - \\gamma^{\\f{l}\\f{k}\\f{j}} \\, \\gamma_{\\f{l}\\f{k}}{}^{\\f{i}}\n = F^{\\f{i}\\f{j}},\n\\end{equation}\nthe system becomes complete. The freedom of giving ten functions\ncorresponds to the freedom of giving the lapse and the shift to\ndetermine the coordinates and the six parameters of the Lorentz group\nto determine the frame at every point. The gauge freedom is discussed\nin full detail in~\\cite{Fr85ot,Fr91ot}.\n\\\\\nA choice which makes the system especially simple for analytic\nconsiderations is a Gaussian coordinate and frame system defined as\nfollows. Give on the spacelike initial value surface $S$ coordinates\n$x^\\mu, \\mu = 1..3$, and 3 orthonormal vector fields $e_{\\f{i}}{}^a,\n\\f{i}=\\f{1}..\\f{3}$ in this hypersurface. The affine parameter of the\ngeodesics of the hypersurface orthogonal, timelike unit vector field\n$e_{\\f{0}}{}^a$ defines the\ntime coordinate $x^0=t$. The spacelike coordinates are transported by\nthese geodesics into a neighbourhood of the initial surface. By\ngeodesic transport of $e_{\\f{i}}{}^a, \\f{i}=\\f{1}\\ldots\\f{3}$, and\n$e_{\\f{0}}{}^a$ a frame is obtained in this neighbourhood. 
By\nconstruction we have\n$$\n e_{\\f{0}}{}^0=1, \\quad e_{\\f{0}}{}^\\mu = 0 \\mbox{ for }\\mu = 1..3\n$$\nand\n$$\n \\gamma^{\\f{i}}{}_{\\f{0}\\f{k}} = 0.\n$$\nIt is well known that Gaussian coordinates develop caustics if the\nenergy momentum tensor fulfills certain energy conditions, see\ne.g.~\\cite[lemma 9.2.1]{Wa84GR}. In the\nunphysical spacetime the $\\Omega{}$ terms provide a kind of unphysical\nenergy-momentum tensor.\nWhether this energy-momentum tensor fulfills the energy\nconditions is a difficult question and not known to the author.\nNevertheless the coordinates do develop caustics, as has been shown by\nnumerical calculations \\cite{Hu93nu}.\n\\\\\nThe following combinations give a symmetric hyperbolic\nsystem for the remaining variables, as can be deduced from the\nconsiderations in~\\cite{Fr83cp}:\n\\begin{subequations}\n\\label{EvoSyst}\n\\begin{eqnarray}\n&& \\N{\\Omega}_{\\f{0}} = 0 , \\\\\n&& \\N{D\\Omega}_{\\f{0}b} = 0 , \\\\\n&& \\N{\\omega}_{\\f{0}} = 0 , \\\\\n&& \\N{e}^a{}_{b\\f{0}} = 0 , \\\\\n&& \\N{\\gamma}_{\\f{0}\\f{1}c}{}^d = 0 , \\\\\n&& g^{ab} \\N{R}_{\\f{i}ab} = 0 , \\qquad \\f{i} = \\f{1},\\f{2},\\f{3}, \\\\\n&& \\N{R}_{\\f{0}\\f{i}\\f{i}} = 0 , \\qquad \\f{i} = \\f{1},\\f{2},\\f{3}, \\\\\n&& \\N{R}_{\\f{0}\\f{i}\\f{j}} + \\N{R}_{\\f{0}\\f{j}\\f{i}} = 0 ,\n \\qquad (\\f{i},\\f{j}) = (\\f{1},\\f{2}),(\\f{1},\\f{3}),(\\f{2},\\f{3}), \\\\\n&& \\N{d}_{\\f{2}\\f{1}\\f{2}} - \\N{d}_{\\f{3}\\f{1}\\f{3}} +\n \\N{d}_{\\f{2}\\f{0}\\f{2}} - \\N{d}_{\\f{3}\\f{0}\\f{3}} = 0 , \\\\\n&& - \\N{d}_{\\f{1}\\f{0}\\f{2}} + \\N{d}_{\\f{1}\\f{2}\\f{1}} = 0 , \\\\\n&& \\N{d}_{\\f{1}\\f{0}\\f{1}} = 0 , \\\\\n&& \\N{d}_{\\f{1}\\f{0}\\f{2}} + \\N{d}_{\\f{1}\\f{2}\\f{1}} = 0 , \\\\\n&& - \\N{d}_{\\f{2}\\f{1}\\f{2}} +\n \\N{d}_{\\f{3}\\f{1}\\f{3}} + \\N{d}_{\\f{2}\\f{0}\\f{2}} -\n \\N{d}_{\\f{3}\\f{0}\\f{3}} = 0 , \\\\\n&& \\N{d}_{\\f{2}\\f{1}\\f{3}} + \\N{d}_{\\f{3}\\f{1}\\f{2}} + \\N{d}_{\\f{2}\\f{0}\\f{3}}+\n \\N{d}_{\\f{3}\\f{0}\\f{2}} = 0 , 
\\\\\n&& - \\N{d}_{\\f{1}\\f{0}\\f{3}} + \\N{d}_{\\f{1}\\f{3}\\f{1}} = 0 , \\\\\n&& - \\N{d}_{\\f{1}\\f{2}\\f{3}} = 0 , \\\\\n&& - \\N{d}_{\\f{1}\\f{0}\\f{3}} - \\N{d}_{\\f{1}\\f{3}\\f{1}} = 0 , \\\\\n&& \\N{d}_{\\f{2}\\f{1}\\f{3}} + \\N{d}_{\\f{3}\\f{1}\\f{2}} -\n \\N{d}_{\\f{2}\\f{0}\\f{3}} - \\N{d}_{\\f{3}\\f{0}\\f{2}} = 0 , \\\\\n&& \\N{\\phi}_{\\f{0}} = 0 , \\\\\n&& \\N{D\\phi}_{\\f{0}b} = 0 , \\\\\n&& g^{ab} \\N{DD\\phi}_{\\f{i}ab} = 0 , \\qquad \\f{i} = \\f{1},\\f{2},\\f{3},\n\\\\\n&& \\N{DD\\phi}_{\\f{0}\\f{i}\\f{i}} = 0 , \\qquad \\f{i} = \\f{1},\\f{2},\\f{3},\n\\\\\n&& \\N{DD\\phi}_{\\f{0}\\f{i}\\f{j}} + \\N{DD\\phi}_{\\f{0}\\f{j}\\f{i}} = 0\n, \\qquad (\\f{i},\\f{j}) = (\\f{1},\\f{2}),(\\f{1},\\f{3}),(\\f{2},\\f{3}), \\\\\n&& \\N{D\\Box\\phi}_{\\f{0}} = 0. \\yesnumber\n\\end{eqnarray}\n\\end{subequations}\nTo see that the system is really symmetric hyperbolic one has to write\ndown the system explicitly. By an appropriate definition of new\nvariables, which is obvious in the explicit form of the system, the system has\nthe structure\n\\begin{equation}\n\\label{symhypEvoSys}\n \\underline{\\underline{A_t}}\\,\\partial_t \\underline{f} +\n \\sum_{i=1}^3 \\underline{\\underline{A_{x^i}}}\\,\\partial_{x^i}\n \\underline{f} +\n \\underline{b}(\\underline{f},x^\\mu) = 0,\n\\end{equation}\nwith a diagonal matrix\n$\\underline{\\underline{A_t}}$, which is positive definite for\n$1-\\frac{1}{4}\\Omega^2\\phi^2>0$, and symmetric matrices\n$\\underline{\\underline{A_{x^i}}}$. $\\underline{f}$ is the vector built\nfrom the variables.\nAll the remaining equations are linear combinations of~(\\ref{EvoSyst})\nand constraints. Since the explicit form of the constraints is not\nneeded, I do not list them.\n\\\\\nAs the entries in $\\underline{\\underline{A_t}}$ coming from\n$\\N{R}=0$ and $\\N{d}=0$ vanish for $1-\\frac{1}{4}\\Omega^2\\phi^2=0$, the\nfollowing results apply only if $\\Omega^2\\phi^2<4$ everywhere on the\ninitial value surface $S$ and thus in a neighbourhood of $S$. 
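The structural claim above, a diagonal positive definite $\underline{\underline{A_t}}$ together with symmetric matrices $\underline{\underline{A_{x^i}}}$, is what guarantees real characteristic speeds and hence a well-posed evolution. This can be made concrete on a toy model (my own illustration, not the evolution system of this paper): the 1+1-dimensional wave equation $u_{tt}=u_{xx}$, written in first-order form for $f=(u_t,u_x)$, has exactly this symmetric hyperbolic structure.

```python
import numpy as np

# Toy illustration (NOT the full evolution system of the text):
# u_tt = u_xx for f = (u_t, u_x) reads  A_t df/dt + A_x df/dx = 0,
# with A_t the identity (diagonal, positive definite) and A_x symmetric.
A_t = np.eye(2)
A_x = np.array([[0.0, -1.0],
                [-1.0, 0.0]])   # symmetric

# Symmetric hyperbolicity forces the characteristic speeds, the
# eigenvalues of A_t^{-1} A_x, to be real (here they are -1 and +1).
speeds = np.linalg.eigvals(np.linalg.solve(A_t, A_x))
print(np.sort(speeds.real))
assert np.allclose(np.sort(speeds.real), [-1.0, 1.0])
assert np.allclose(speeds.imag, 0.0)
```

The same check applies direction by direction to any system of the form above: symmetry of the spatial matrices makes the symbol a symmetric matrix, whose spectrum is real.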
The\nphysical Einstein equations have a corresponding singularity for\n$1-\\frac{1}{4}\\mbox{$\\tilde{\\phi}$}^2=0$ (see equation \\ref{EinstPhys}).\n\\subsection{A sufficient condition for the propagation of the\n constraints}\nAccording to the analysis in~\\cite{Fr91ot}, involving the left hand\nside of the following identities, a symmetric\nhyperbolic system of evolution equations for the remaining null quantities\ncan be extracted from:\n\\begin{subequations}\n\\label{PropConstrSys}\n\\begin{equation}\n \\nabla_{[a}\\N{\\Omega}_{b]} =\n - \\frac{1}{2} T^c{}_{ab} \\nabla_c \\Omega - \\N{D\\Omega}_{[ab]}\n\\end{equation}\n\n\\begin{eqnarray}\n \\lefteqn{ \\nabla_{[a}\\N{D\\Omega}_{b]c} = } \\nonumber \\\\ && \\qquad\n \\frac{1}{2} \\N{\\gamma}_{abc}{}^d \\Omega_d\n - \\frac{1}{2} T^d{}_{ab} \\nabla_d \\Omega_c\n + \\frac{1}{2} \\Omega \\N{R}_{abc}\n + \\frac{1}{2} \\mbox{$\\hat{R}$}_{c[b} \\N{\\Omega}_{a]}\n - \\frac{3}{2} \\Omega^2 \\N{\\Omega}_{[a} \\mbox{$\\hat{T}$}_{b]c} \\nonumber \\\\ && \\qquad\n - \\frac{1}{2} \\N{\\omega}_{[a} g_{b]c}\n + \\frac{1}{3} \\Omega^2 m_{[a|d|}{}^d g_{b]c}\n + \\frac{1}{3} \\Omega^2 \\N{m}_{[a|d|}{}^d g_{b]c}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n \\lefteqn{ \\nabla_{[a}\\N{\\omega}_{b]} = } \\nonumber \\\\ && \\qquad\n - \\frac{1}{2} T^c{}_{ab} \\nabla_c \\omega\n - \\frac{1}{24} \\Omega^3 T^c{}_{ab} \\nabla_c T\n + \\frac{1}{24} \\N{\\Omega}_{[a} \\nabla_{b]} R\n + \\frac{1}{8} \\Omega^2 \\N{\\Omega}_{[a} \\nabla_{b]} T \\nonumber \\\\ && \\qquad\n + \\frac{1}{2} ( \\mbox{$\\hat{R}$} _{c[b} - \\Omega^2 \\mbox{$\\hat{T}$}_{c[b} ) \\N{D\\Omega}_{a]}{}^c\n + ( \\frac{1}{24} R + \\frac{1}{6} \\Omega^2 T ) \\N{D\\Omega}_{[ab]}\n + \\frac{1}{2} \\Omega^c \\N{R}_{abc} \\nonumber \\\\ && \\qquad\n + \\frac{1}{3} \\Omega \\, \\Omega_{[b} m_{a]c}{}^c\n + \\frac{1}{2} \\Omega \\, \\Omega^c \\N{m}_{abc}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\label{PropNR}\n \\lefteqn{ \\nabla_{[a} \\N{R}_{bc]d} = } \\nonumber \\\\ && \\qquad\n 
\\frac{1}{2} \\N{\\gamma}_{[abc]}{}^f \\mbox{$\\hat{R}$}_{fd}\n + \\frac{1}{2} \\N{\\gamma}_{[ab|d]}{}^f \\mbox{$\\hat{R}$}_{c]f}\n - \\frac{1}{2} T^f{}_{[ab} \\nabla_{|f|}\\mbox{$\\hat{R}$}_{c]d}\n - \\frac{1}{24} T^f{}_{[ab} (\\nabla_{|f|} R ) g_{c]d} \\nonumber \\\\ && \\qquad\n + \\N{D\\Omega}_{[a|f|} d_{bc]d}{}^f\n + \\Omega^f ( \\N{d}_{[ca|d|} g_{b]f} - \\N{d}_{[ca|f|} g_{b]d} )\n + \\N{\\Omega}_{[a} m_{bc]d} \\nonumber \\\\ && \\qquad\n - \\frac{2}{3} \\N{\\Omega}_{[a} m_{b|f|}{}^f g_{c]d}\n - \\frac{1}{2} \\Omega^2 \\N{\\gamma}_{[abc]}{}^f \\mbox{$\\hat{T}$}_{fd}\n - \\frac{1}{2} \\Omega^2 \\N{\\gamma}_{[ab|d|}{}^f \\mbox{$\\hat{T}$}_{c]f}\n - \\frac{1}{2} \\Omega^2 T^f{}_{[ab} \\nabla_{|f|} \\mbox{$\\hat{T}$}_{c]d} \\nonumber\n \\\\ &&\n\\qquad\n - 3 \\Omega \\N{D\\Omega}_{[ab} \\mbox{$\\hat{T}$}_{c]d}\n + \\Omega \\N{D\\Omega}_{[a}{}^f \\mbox{$\\hat{T}$}_{c|f|} g_{b]d}\n + \\frac{1}{3} \\Omega \\N{D\\Omega}_{[ab} g_{c]d}\n + \\frac{1}{12} \\Omega \\N{\\Omega}_{[a} ( \\nabla_b T ) g_{c]d}\n \\nonumber \\\\ && \\qquad\n + \\frac{1}{12} \\Omega^2 T^f{}_{[ab} ( \\nabla_{|f|} T ) g_{c]d}\n - 2 \\Omega_{[a} \\N{m}_{bc]d}\n + \\Omega_f \\N{m}_{[ca}{}^f g_{b]d} \\nonumber \\\\ && \\qquad\n - \\Omega \\nabla_{[a} \\N{m}_{bc]d}\n + 2 \\Omega_{[a} m_{c|f|}{}^f g_{b]d}\n - \\frac{2}{3} \\Omega ( \\nabla_{[a} m_{b|f|}{}^f ) g_{c]d}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n \\lefteqn{ \\nabla^{c} \\N{d}_{abc} = } \\nonumber \\\\ && \\qquad\n \\frac{1}{2} \\N{\\gamma}^c{}_{da}{}^f d_{fbc}{}^d\n + \\frac{1}{2} \\N{\\gamma}^c{}_{db}{}^f d_{afc}{}^d\n + \\frac{1}{2} \\N{\\gamma}^c{}_{dc}{}^f d_{abf}{}^d\n + \\frac{1}{2} \\N{\\gamma}^{cd}{}_{df} d_{abc}{}^f \\nonumber \\\\ && \\qquad\n - \\frac{1}{2} T^{fcd} \\nabla_f d_{abcd}\n - \\nabla^c m_{abc}\n + \\frac{2}{3} \\nabla_{[b} m_{a]c}{}^c\n\\end{eqnarray}\n\n\\begin{equation}\n \\nabla_{[a} \\N{e}^d{}_{bc]} =\n \\N{\\gamma}_{[abc]}{}^d + \\N{e}^f{}_{[ab} 
\\N{e}^d{}_{c]f}\n\\end{equation}\n\n\\begin{eqnarray}\n\\label{dNgamma}\n \\lefteqn{ \\nabla_{[f} \\N{\\gamma}_{ab]cd} = } \\nonumber \\\\ && \\qquad\n - T^g{}_{[fa} R\\I{diff}_{b]gcd}\n - \\N{\\Omega}_{[f} d_{ab]cd}\n - \\Omega ( \\N{d}_{[bf|c|} g_{a]d} - \\N{d}_{[bf|d|} g_{a]c} )\n \\nonumber \\\\ && \\qquad\n - \\N{R}_{[fb|d|} g_{a]c}\n + \\N{R}_{[fb|c|} g_{a]d}\n\\end{eqnarray}\n\n\\begin{equation}\n \\nabla_{[a}\\N{\\phi}_{b]} =\n - \\frac{1}{2} T^c{}_{ab} \\nabla_c \\phi - \\N{D\\phi}_{[ab]}\n\\end{equation}\n\n\\begin{equation}\n \\nabla_{[a}\\N{D\\phi}_{b]c} =\n \\frac{1}{2} \\N{\\gamma}_{abc}{}^d \\phi_d\n - \\frac{1}{2} T^d{}_{ab} \\nabla_d \\phi_c\n - \\N{DD\\phi}_{abc}\n - \\frac{1}{4} \\N{D\\Box\\phi}_{[a} g_{b]c}\n\\end{equation}\n\n\\begin{equation}\n \\nabla_a \\N{\\Box\\phi} =\n \\N{D\\Box\\phi}_a\n - \\frac{1}{6} R \\N{\\phi}_a\n\\end{equation}\n\n\\begin{eqnarray}\n\\label{PropNDDp}\n \\lefteqn{ \\nabla_{[a} \\N{DD\\phi}_{bc]d} = } \\nonumber \\\\ && \\qquad\n \\frac{1}{2} \\N{\\gamma}_{abc}{}^f \\hat{\\phi}_{fd}\n + \\frac{1}{2} \\N{\\gamma}_{[ab|d|}{}^f \\hat{\\phi}_{c]f}\n - \\frac{1}{2} T^f{}_{[ab} \\nabla_{|f|} \\hat{\\phi}_{c]d}\n + \\frac{1}{6} \\N{\\phi}_{[a} ( \\nabla_b R ) g_{c]d} \\nonumber \\\\ && \\qquad\n - \\frac{1}{12} \\phi T^f{}_{[ab} g_{c]d} \\nabla_f R\n + \\frac{1}{6} R \\N{D\\phi}_{[ab} g_{c]d}\n + \\frac{1}{2} T^g{}_{[ab} R\\I{diff}_{c]gd}{}^f \\phi_f\n - \\frac{1}{2} R_{[bc|d|}{}^f \\N{D\\phi}_{a]f} \\nonumber \\\\ && \\qquad\n + \\frac{1}{2} ( \\nabla_{[a} \\N{\\gamma}_{bc]d}{}^f ) \\phi_f,\n\\end{eqnarray}\nand\n\\begin{equation}\n \\nabla_{[a} \\N{D\\Box\\phi}_{b]} =\n - \\frac{1}{2} T^d{}_{ab} \\nabla_d \\phi_c{}^c\n - \\frac{1}{6} \\N{\\phi}_{[a} \\nabla_{b]} R\n - \\frac{1}{12} \\phi T^c{}_{ab} \\nabla_c R\n - \\frac{1}{6} R \\N{D\\phi}_{[ab]}.\n\\end{equation}\n\\end{subequations}\nThe last term in (\\ref{PropNDDp}) is homogeneous in null quantities as\ncan be seen from (\\ref{dNgamma}),\nThe deviation of these 
equalities is even more lengthy than the\nequalities themselves, but the essential ideas behind it can already be seen\nin the derivation of the first:\n\\begin{eqnarray*}\n \\nabla_{[a}\\N{\\Omega}_{b]} & = &\n \\nabla_{[a} \\nabla_{b]}\\Omega - \\nabla_{[a} \\Omega_{b]} \\\\\n & = &\n - \\frac{1}{2} T^c{}_{ab} \\nabla_c \\Omega - \\N{D\\Omega}_{[ab]}\n + \\frac{1}{2} \\mbox{$\\hat{R}$}_{[ab]} \\Omega\n - \\frac{1}{2} \\Omega^3 \\mbox{$\\hat{T}$}_{[ab]}\n - \\omega g_{[ab]},\n\\end{eqnarray*}\nwith the last three terms vanishing since the tensors are symmetric. Note\nthat vanishing of the torsion, $T^c{}_{ab}=0$, and\n$2\\,\\nabla_{[a}\\nabla_{b]}\\omega_c=R_{abc}{}^d \\omega_d$ cannot be\nused since they only\nhold if both the time evolution and the constraint equations for the frame\n$e_{\\f{i}}{}^\\mu $ and the Ricci rotation coefficients\n$\\gamma^a{}_{\\f{i}\\f{j}}$ hold everywhere.\n\\\\\nA sufficient set of conditions for homogeneity of the system derived\nin the null quantities is\n\\begin{subequations}\n\\label{IntegrBed}\n\\begin{equation}\n\\label{IntegrBed1}\n m_{ab}{}^b = 0 \\bmod \\N{},\n\\end{equation}\n\\begin{equation}\n\\label{IntegrBed2}\n \\nabla^c m_{abc} + \\frac{2}{3} \\nabla_{[a} m_{b]c}{}^c = 0 \\bmod \\N{},\n\\end{equation}\n\\begin{equation}\n\\label{IntegrBed3}\n \\nabla_{[a} m_{b]c}{}^c = 0 \\bmod \\N{},\n\\end{equation}\nand\n\\begin{equation}\n\\label{IntegrBed4}\n \\Omega \\nabla_{[a} \\N{m}_{bc]d} =\n f \\, \\nabla_{[a} \\N{R}_{bc]d} \\bmod \\N{}, \\quad f \\ne -1.\n\\end{equation}\n\\end{subequations}\nA straightforward but long calculation shows that these conditions are\nfulfilled by the conformally invariant scalar field with\n$f=-\\frac{1}{4}\\Omega^2\\phi^2$. Equation (\\ref{PropNR})\nbecomes singular for $1-\\frac{1}{4}\\Omega^2\\phi^2=0$.\n\\\\\nThe very technical integrability conditions (\\ref{IntegrBed}) have a\nvery simple interpretation. 
Replacing $m_{abc}$ with $t_{abc}$ --- they\nonly differ by null quantities --- the conditions\n(\\ref{IntegrBed1}--\\ref{IntegrBed3}) reduce to $\\mbox{$\\tilde{\\nabla}$}^b \\mbox{$\\tilde{T}$}_{ab} = 0$\nand ${\\mbox{$\\tilde{\\nabla}$}_b \\mbox{$\\tilde{\\nabla}$}^c \\mbox{$\\tilde{T}$}_{ac} = 0}$. Condition (\\ref{IntegrBed4}) is only\nof a technical nature; it gives the principal part a\nsimple block form.\n\\\\\n{}From the considerations in \\cite{Fr91ot} it also follows that the domain\nof dependence of $S$ with respect to the evolution equation of the\nconstraints includes the domain of dependence of $S$ with respect to\nthe subsidiary system.\n\\section{The hyperboloidal initial value problem}\n\\label{HypInitValProblSec}\nSo far a system of equations $(\\N{}=0)$ has been derived which contains,\nfor at least one choice of gauge, a symmetric hyperbolic subsystem of\nevolution equations. The remaining equations in the system --- either\nconstraints or a combination of constraints and time evolution\nequations --- will be satisfied for a solution of the evolution\nequations, if the constraints are satisfied by the initial data. If\nboth the time evolution equations and the constraints are fulfilled, $(\\tilde\nM,\\mbox{$\\tilde{g}$}_{ab},\\mbox{$\\tilde{\\phi}$})$ is a weakly asymptotically flat solution of the Einstein\nequation. This follows from the way the system $\\N{}=0$ for the\nunphysical spacetime has been derived.\n\\\\\nThe essential points in the proofs of the theorems in\n\\cite[chapter~10]{Fr91ot} are the symmetric hyperbolicity of the\nsubsidiary system and the form (\\ref{NDO}) of the equations for\n$\\Omega{}$. Therefore the same techniques can be used and the proofs\nwill not be repeated. 
The difference from\nthe model treated here lies in the derivation of the subsidiary system\nand the proof of the propagation of the constraints,\nwhich has been done in the previous chapters.\n\\subsection{The initial value problem}\nWe consider the following initial value problem:\n\\begin{Def}\n\\label{HypInitValProbl}\nA ``{\\bf hyperboloidal initial data set for the conformally invariant\nscalar field}'' consists of a pair $(\\bar S,f_0)$ such that:\n\\begin{enumerate}\n\\item $\\bar S = S \\cup \\partial S$ is a smooth manifold with boundary\n $\\partial S$ diffeomorphic to the closed unit ball in $\\Bbb{R}^3$.\n The pull-backs of the natural coordinates\n on $\\Bbb{R}^3$ are used as coordinates on $S$.\n\\item $f_0$ is the vector $\\underline f$ of functions in system\n (\\ref{EvoSyst}) written in the form (\\ref{symhypEvoSys}) at initial\n time $t_0$.\n\\item The fields provided by $f_0$ have uniformly continuous derivatives\n with respect to the coordinates of $S$ to all orders\\footnote{The\n assumption about the smoothness of the data can certainly be\n weakened from $C^\\infty{}$ to $C^n$ for sufficiently large $n$ but\n then more technical effort would be needed in the proofs.}.\n\\item On $S$: $\\Omega>0$. On $\\partial S$: $\\Omega=0$ and $\\nabla_a\n \\Omega{}$ is a future directed null vector.\n\\item The fields provided by $f_0$ satisfy the constraints following\n from $\\N{}=0$ (\\ref{quaSys} and \\ref{NM}) and the gauge conditions.\n\\end{enumerate}\n\\end{Def}\nA point which deserves special notice is the existence of a\nhyperboloidal initial data set. 
The proof that\nthose data exist has to overcome two problems.\n\\\\\nFirstly, there is the regularity of the solution on $\\partial S$, i.e.\\ the\nconsistency of the data with asymptotic flatness.\nFor scalar field data with compact support, regularity\nconditions are given in~\\cite{AnCXXxx,AnCA92ot}, which are sufficient\nfor the existence of a solution of the constraints near $\\partial S$.\n\\\\\nSecondly, there is a problem with a possible singularity of the\nequations in $\\N{}=0$ at {$1-\\frac{1}{4}\\,\\Omega^2\\phi^2=0$}.\n\\\\\nL.~Anderson and\nP.~Chrusc\\'{\\i}el are preparing a paper analyzing both\nproblems \\cite{AnCXXxx}.\n\\subsection{Theorems}\nThe ``theorems'' will be given in a form not containing every\ntechnical detail, since these technical details would make them\nlengthy; they can\nbe easily deduced from the theorems in~\\cite{Fr91ot} by replacing the\nYang-Mills matter with the (conformally invariant) scalar field.\n\\\\\nSince the constraints of $\\N{}=0$ propagate, we have:\n\\begin{Theorem}\n\\label{physUnphysequi}\n Any (sufficiently smooth) solution of the subsidiary system satisfying the\n constraints on a spacelike hypersurface $\\bar S$ and\n $1-\\frac{1}{4}\\,\\Omega^2\\phi^2>0$ defines in the\n domain of dependence with respect to $g_{ab}$ of $\\bar S$ a\n solution to the unphysical system. 
Thus $(\\tilde M,\\mbox{$\\tilde{g}$}_{ab},\\mbox{$\\tilde{\\phi}$})$ is\n a weakly asymptotically flat solution of the Einstein equation.\n\\end{Theorem}\nSince the evolution equations are symmetric hyperbolic, a unique\nsolution of the initial value problem exists for a finite time.\n{}From the combination with theorem (\\ref{physUnphysequi}) it follows:\n\\begin{corollar}\n\\label{ExUni}\n For every regular solution of the constraints\n on $\\bar S$ with ${1-\\frac{1}{4}\\,\\Omega^2\\phi^2\\mid_{\\bar S} >0}$ there exists\n locally a unique, weakly asymptotically\n flat solution of the Einstein equation.\n\\end{corollar}\nFor the Minkowski space we can extend $\\bar S$ and the solution of\nthe constraints beyond $\\partial S$ to $S'$ and\nget a solution in the unphysical spacetime which extends beyond\n$i^+$. The continuous dependence of the solution of symmetric\nhyperbolic systems on the data and the form of (\\ref{NO}),\n(\\ref{NDO}) and (\\ref{No}) (see the proof\nof theorem (10.2) in~\\cite{Fr91ot}) guarantees\nthat there is a solution covering the whole domain of dependence of\n$\\bar S$. Furthermore the proof there shows that\n$\\{p\\,|\\,\\Omega(p)=0\\}$ has an isolated critical point $i^+$, where\nall future directed timelike geodesics of $(\\tilde M,\\mbox{$\\tilde{g}$}_{ab})$ end, thus:\n\\begin{Theorem}\n For a sufficiently small deviation of the data from Minkowskian data, the\n solution of corollary~\\ref{ExUni} possesses a regular future null\n infinity and a regular future timelike infinity.\n\\end{Theorem}\n\\section{The conformal equivalence of the scalar fields}\n\\label{SkalarAequiv}\nThis section briefly reviews the equivalence transformation between\nspacetime models with scalar matter from the viewpoint of solving\nhyperboloidal initial value problems. 
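For the conformally invariant scalar field, the field redefinition entering the equivalence transformation (\ref{Transf}) can be checked by elementary quadrature: with $A=1-\frac{1}{4}\tilde\phi^2$ and $B=\frac{3}{2}$, the integrand of (\ref{allgphiTrans}) collapses to $\sqrt{3/2}/(1-\tilde\phi^2/4)$, whose integral is $\sqrt{6}\,\mbox{arctanh}(\tilde\phi/2)$. The following numerical sanity check of this reduction is my own sketch, not part of the original text:

```python
import numpy as np

# My own sanity check: for A(phi) = 1 - phi**2/4 and B = 3/2 (the
# conformally invariant scalar field), the integrand of the field
# redefinition
#   phibar(phi) = int_0^phi (1/A) * sqrt((3/2)*(dA/dphi)**2 + A*B) dphi'
# simplifies to sqrt(3/2)/(1 - phi'**2/4), and the integral equals
# sqrt(6)*arctanh(phi/2).

def integrand(p):
    A = 1.0 - p**2 / 4.0
    dA = -p / 2.0
    B = 1.5
    return np.sqrt(1.5 * dA**2 + A * B) / A

phi = 1.0
grid = np.linspace(0.0, phi, 20001)
y = integrand(grid)
numeric = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(grid))  # trapezoid rule
closed = np.sqrt(6.0) * np.arctanh(phi / 2.0)
print(numeric, closed)
assert abs(numeric - closed) < 1e-6
```

The check also makes the blow-up at $A=0$ visible: the integrand diverges as $\tilde\phi\to\pm 2$, in line with the bijection from $]-2,2[$ onto $]-\infty,\infty[$ stated below.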
Other\naspects of this equivalence transformation, especially the generation\nof exact solutions, have been studied\nin~\\cite{AcWA93ce,Be74es,Be75bh,Kl93sf,KlK93ie,Pa91mw,XaD92eg}.\n\\subsection{Local equivalence of solutions}\n\\label{GenEquivalence}\nSpacetime models $(\\tilde M,\\tilde g_{ab},\\tilde\\phi)$\nwith scalar matter $\\tilde\\phi$ described by the action\n\\begin{equation}\n\\label{ScalarAction}\n \\tilde S = \\int_{\\tilde M} \\left[ A(\\tilde\\phi) R - B(\\tilde\\phi)\n (\\tilde\\nabla_a\\tilde\\phi)\n (\\tilde\\nabla^a\\tilde\\phi)\n \\right] \\> (-\\tilde g)^{\\frac{1}{2}} \\> d^4\\tilde x\n\\end{equation}\nwill be considered. Boundary terms in the action have been omitted,\n$\\mbox{$\\tilde{g}$}{}$ is the determinant of $\\mbox{$\\tilde{g}$}_{\\mu\\nu}$.\n\\\\\nBy varying the action $\\tilde S$ with respect to $\\mbox{$\\tilde{\\phi}$}{}$ and\n$\\mbox{$\\tilde{g}$}_{ab}$ the following field equations result:\n\\begin{subequations}\n\\label{allgSys}\n\\begin{eqnarray}\n\\label{allgWell}\n B(\\mbox{$\\tilde{\\phi}$}) \\, \\tilde{\\vphantom{\\phi}\\Box} \\,\\tilde\\phi\n + \\frac{1}{2} \\, \\frac{dB}{d\\tilde\\phi} \\,\n \\left( \\mbox{$\\tilde{\\nabla}$}^a\\tilde\\phi \\right) \\, \\left( \\mbox{$\\tilde{\\nabla}$}_a\\tilde\\phi \\right)\n + \\frac{1}{2} \\, \\frac{dA}{d\\tilde\\phi} \\, \\tilde R\n \\quad & = & \\quad 0 \\qquad \\qquad\n\\\\\n A(\\tilde\\phi) \\, \\left( \\tilde R_{ab} - \\frac{1}{2} \\, \\tilde R\n \\, \\tilde g_{ab} \\right)\n + B(\\tilde \\phi) \\, \\left( \\frac{1}{2} \\,\n \\left(\\mbox{$\\tilde{\\nabla}$}^c\\tilde \\phi\\right) \\,\n \\left(\\mbox{$\\tilde{\\nabla}$}_c\\tilde \\phi\\right) \\mbox{$\\tilde{g}$}_{ab}\n - \\left(\\mbox{$\\tilde{\\nabla}$}_a\\tilde \\phi\\right) \\,\n \\left(\\mbox{$\\tilde{\\nabla}$}_b\\tilde \\phi\\right) \\right)\n\\nonumber\n\\\\\n\\label{allgGeo}\n - \\left( \\mbox{$\\tilde{\\nabla}$}_a\\mbox{$\\tilde{\\nabla}$}_b A(\\tilde\\phi) \\right)\n + \\left( 
\\mbox{$\\tilde{\\nabla}$}^c\\mbox{$\\tilde{\\nabla}$}_c A(\\tilde\\phi) \\right) \\tilde g_{ab}\n \\quad & = & \\quad 0.\n\\end{eqnarray}\n\\end{subequations}\n$A$ and $B$ are assumed to be $C^\\infty$ functions.\nFor $ B \\neq 0$ the principal part of (\\ref{allgWell}) does not vanish\nand thus (\\ref{allgWell}) is a wave equation. For that reason I assume\n$B(\\tilde\\phi) > \\epsilon > 0$ for every $\\tilde\\phi$. (\\ref{allgGeo})\nis a second order equation for the metric if $A(\\mbox{$\\tilde{\\phi}$})\\ne{}0$.\n\\\\\nIn the spacetime region $\\tilde H := \\left\\{x \\in \\tilde{M} | \\, {\\rm\n sign}(A(\\tilde\\Phi))>0 \\quad \\forall\n \\tilde\\Phi \\in [\\mbox{$\\tilde{\\phi}$}_0,\\mbox{$\\tilde{\\phi}$}(x)]\\right\\}$ the trans\\-for\\-ma\\-tion\\footnote{The\n choice of the parameter $\\mbox{$\\tilde{\\phi}$}_0$ reflects gauge freedom. Models\n where there is no $\\mbox{$\\tilde{\\phi}$}_0$ with $A(\\mbox{$\\tilde{\\phi}$}_0) > 0$ will not be\n considered.}\n\\begin{subequations}\n\\label{Transf}\n\\begin{eqnarray}\n\\label{allgphiTrans}\n \\tilde{\\bar\\phi} & = & \\int_{\\tilde\\phi_0}^{\\tilde\\phi} \\frac{1}{A}\n \\sqrt{ \\frac{3}{2} \\,\n \\left(\\frac{dA}{d\\phi} \\right)^2 + A\\,B } \\quad\n d\\phi\n\\\\\n\\label{allgTrans}\n \\tilde {\\bar g}_{ab} & = & A \\, \\tilde g_{ab}\n\\end{eqnarray}\n\\end{subequations}\ngives a solution of the system (\\ref{allgSys}) with a massless\nKlein-Gordon field\n$\\widetilde{\\bar\\phi}$ as matter model corresponding to the choice\n$(A,B)=(1,1)$ and the equations\n\\begin{subequations}\n\\label{KGGl}\n\\begin{eqnarray}\n \\widetilde{\\bar{\\vphantom{\\phi}\\Box}}\\widetilde{\\bar\\phi} & = & 0\n\\\\\n \\widetilde{\\bar{R}}_{ab} - \\frac{1}{2} \\, \\widetilde{\\bar{R}} \\,\n \\widetilde{\\bar{g}}_{ab}\n & = & \\widetilde{\\bar{T}}_{ab}[{\\widetilde{\\bar{\\phi}}}]\n\\end{eqnarray}\nwith energy momentum tensor\n\\begin{equation}\n \\widetilde{\\bar{T}}_{ab}[{\\widetilde{\\bar{\\phi}}}] =\n 
(\\widetilde{\\bar\\nabla}_a\\widetilde{\\bar{\\phi}}) \\,\n (\\widetilde{\\bar\\nabla}_b\\widetilde{\\bar{\\phi}})\n - \\frac{1}{2} \\, (\\widetilde{\\bar\\nabla}_c\n \\widetilde{\\bar{\\phi}}) \\,\n (\\widetilde{\\bar\\nabla}^c\n \\widetilde{\\bar{\\phi}})\n \\, \\widetilde{\\bar g}_{ab}.\n\\end{equation}\n\\end{subequations}\n\\\\\n{}From the assumptions about $A$ and $B$ it follows that the corresponding\nKlein-Gordon field will be unbounded approaching the\npart of the boundary of $\\tilde H$ where $A(\\mbox{$\\tilde{\\phi}$})\\rightarrow 0$.\nThe singularity in the Klein-Gordon field\nshows up at least in a singularity of the equations for $(\\tilde M,\\tilde\ng_{ab}, \\tilde \\phi)$.\n\\\\\nFor two of the scalar fields in the above class the field equations\nare very special: the already mentioned massless Klein-Gordon field\n$\\tilde{\\bar\\phi{}}$\n(\\ref{KGGl}) and the conformally invariant scalar field $\\mbox{$\\tilde{\\phi}$}{}$,\n$(A,B)=(1-\\frac{1}{4}\\mbox{$\\tilde{\\phi}$}^2,\\frac{3}{2})$ ($\\mbox{$\\tilde{\\phi}$}{}$ can be\nrescaled by an arbitrary factor).\n\\\\\nThe first, because the set of equations in the physical spacetime\nbecomes remarkably simple and has been analyzed intensively with\nanalytical (e.g.~\\cite{Ch91tf}) and numerical\n(e.g.~\\cite{Ch92CB}) methods for spacetimes with spherical symmetry.\n\\\\\nThe second, yielding the equations (\\ref{model}),\nbecause the matter equations are invariant under rescalings $g_{ab} =\n\\Omega^2 \\, \\mbox{$\\tilde{g}$}_{ab}$ and $\\phi = \\Omega^{-1} \\mbox{$\\tilde{\\phi}$}{}$.\n\\\\\nThe transformation between the two special cases is\n\\begin{subequations}\n\\begin{eqnarray}\n \\tilde{\\B{\\phi}} & = &\n \\sqrt{6} \\, \\mbox{arctanh} \\frac{\\mbox{$\\tilde{\\phi}$}}{2}\n \\\\\n \\tilde{\\B{g}}_{ab} & = & ( 1-\\frac{1}{4}\\mbox{$\\tilde{\\phi}$}^2 ) \\, \\mbox{$\\tilde{g}$}_{ab}\n\\end{eqnarray}\n\\end{subequations}\nwhich is a bijective mapping from\n$\\mbox{$\\tilde{\\phi}$} \\in ]-2,2[$ to 
$\\tilde{\\bar{\\phi}}\\in ]-\\infty,\\infty[$.\n\\\\\nThe following diagram, illustrating the relations described above, makes\nit evident that there is a variable transformation\nregularizing\nthe\nunphysical equations for the Klein-Gordon field:\n\\begin{center}\n\\unitlength1cm\n\\begin{picture}(15,6.5)\n\\addcontentsline{lof}{figure}{{\\string\\numberline\\space{}Conformal\n equivalence of scalar fields}}\n %\n \n \\put(0,0){\\makebox(2.5,2)[l]{\\shortstack[l]{unphysical\\\\spacetime}}}\n \\put(3.5,0){\\framebox(4,2){%\n \\shortstack[l]{conformal field $\\phi$,\\\\reg.\\ equations}}}\n \\put(11.5,0){\\framebox(4,2){%\n \\shortstack[l]{KG field $\\bar{\\phi}$,\\\\sing.\\ equations}}}\n %\n \n \\put(5.5,3.75){\\vector(0,-1){1.5}}\n \\put(6,2.75){\\shortstack[l]{$\\phi=\\mbox{$\\tilde{\\phi}$}\/\\Omega$\\\\$g_{ab}=\\Omega^2\\mbox{$\\tilde{g}$}_{ab}$}}\n \\put(13.5,3.75){\\vector(0,-1){1.5}}\n \\put(11,2.75){\\shortstack[r]{$\\bar{\\phi}=\\tilde{\\bar{\\phi}}\/\\bar{\\Omega}$\\\\\n $\\bar{g}_{ab}=\\bar{\\Omega}^2\\tilde{\\bar{g}}_{ab}$}}\n %\n \n \\put(0,4){\\makebox(3,2)[l]{\\shortstack[l]{physical\\\\spacetime}}}\n \\put(3.5,4){\\framebox(4,2){\\shortstack{conformal field $\\mbox{$\\tilde{\\phi}$}$\\\\\n $\\mbox{$\\tilde{\\phi}$} \\in ]{-2},{2}[$}}}\n \\put(11.5,4){\\framebox(4,2){\\shortstack{KG field\n $\\tilde{\\bar{\\phi}}$\\\\ $\\tilde{\\bar{\\phi}}\\in\n ]-\\infty,\\infty[$}}}\n %\n \n \\put(7.75,5){\\vector(1,0){3.5}}\n \\put(8.5,5.2){\\mbox{$\\bar\\phi=f(\\mbox{$\\tilde{\\phi}$})$}}\n \\put(8.5,4.6){\\mbox{$\\bar g_{ab}=\\omega^2\\mbox{$\\tilde{g}$}_{ab}$}}\n %\n\\end{picture}\n\\end{center}\nBy mapping an arbitrary scalar field $\\tilde{\\bar{\\bar \\phi}}$ with\naction (\\ref{ScalarAction}) to the Klein-Gordon field $\\tilde{\\bar\n \\phi}$ and then to the conformally invariant scalar field $\\tilde\n\\phi $, regular equations for $\\bar{\\bar \\phi}$ are obtained.\n\\subsection{The hyperboloidal initial value problem}\nSince $A(\\phi)>0$ 
on $\\bar S$ all scalar field models connected\nby transformation (\\ref{Transf}) to a\nhyperboloidal initial value problem with a conformally invariant\nscalar field as matter source are weakly asymptotically flat.\n\\\\\nFor a massless KG model there is a one-parameter gauge freedom in the\nscalar field. If $\\tilde{\\bar{\\phi}}$ is a solution, then so is\n$\\tilde{\\bar{\\phi}}+\\tilde{\\bar{\\phi}}_0$ with\n$\\tilde{\\bar{\\phi}}_0=\\mbox{const} {}$, as the energy momentum tensor depends on\nderivatives of $\\tilde{\\bar{\\phi}}$ only. This can also be seen by\nmapping a Klein-Gordon model to a Klein-Gordon model with\n$\\tilde{\\bar{\\phi}}_0\\ne{}0$ and (\\ref{Transf}). The analogue holds for\nevery scalar field model considered.\nFor the hyperboloidal initial value problem $\\mbox{$\\tilde{\\phi}$} = \\Omega\n\\phi{}$, therefore $\\mbox{$\\tilde{\\phi}$}{}$ vanishes at \\mbox{$\\cal J$}{}, fixing the gauge in\n$\\mbox{$\\tilde{\\phi}$}{}$.\n\\\\\nIn definition (\\ref{HypInitValProbl}) $1-\\frac{1}{4} \\Omega^2\n\\phi^2 \\mid_{\\bar S} >0$ was assumed. But with the Bekenstein black\nhole~\\cite{Be75bh} a weakly asymptotically flat\nsolution is known where $A(\\mbox{$\\tilde{\\phi}$})$ vanishes on a regular\npart of the spacetime. In this case the transformation gives a\npossible extension of a massless Klein-Gordon scalar field solution beyond a\nsingularity -- the Klein-Gordon field $\\widetilde{\\bar\\phi}$ and the\nmetric $\\widetilde{\\bar g}_{ab}$ degenerate there.\n\\vspace{0.5cm}\nIt is a pleasure for me to thank Helmut Friedrich, Bernd Schmidt, and\nJ\\\"urgen Ehlers for the very helpful discussions during the growth of\nthis work, which is part of my Ph.~D.\\ thesis.\n\n\\begin{appendix}\n\\section{Notation}\nThe signature of the Lorentzian metric $g_{ab}$ is $(-,+,+,+)$.\n\\\\\nWhenever possible I use abstract indices as described\nin~\\cite[chapter~2]{PeR84SA}. 
Small Latin letters denote abstract\nindices; underlined\nsmall Latin letters are frame indices. For the components of a tensor with\nrespect to coordinates small Greek letters are used. The frame\n$\\left(\\frac{\\partial}{\\partial x^\\mu}\\right)^a$ is constructed from\nthe coordinates $x^\\mu $, $e_{\\f{i}}{}^a$ denotes an arbitrary frame.\nIn this notation $v_a$ is a covector, $v_{\\f{i}}$ a scalar, namely\n$v_a\\, e_{\\f{i}}{}^a$.\n\\\\\n$v(f)$ is defined to be the action of the vector $v^a$ on the function\n$f$, i.e.\\ for every covariant derivative $\\nabla_a$: $v(f)=v^a\\,\\nabla_a f$.\n\\\\\nThe transformation between abstract, coordinate, and frame indices is\ndone by contracting with $e_{\\f{i}}{}^a$ and $e_{\\f{i}}{}^\\mu $. All\nindices may be raised and lowered with the metric $g_{AB}$ and the\ninverse $g^{AB}$. $g^{AC}\\,g_{CB} = \\delta^A{}_B$, where $A$ and $B$ are\narbitrary indices, e.g.\\ $e_{\\f{i}a}=g_{ab}\\,e_{\\f{i}}{}^b$ and\n$e^{\\f{i}}{}_a=g^{\\f{i}\\f{j}}\\,e_{\\f{j}a}$.\n\\\\\nFor a frame $e_{\\f{i}}{}^a$ and a covariant derivative\n$\\nabla_a$ the Ricci rotation coefficients\nare defined as\n\\begin{equation*}\n \\gamma^a{}_{\\f{i}\\f{j}} := e_{\\f{i}}{}^b \\nabla_b e_{\\f{j}}{}^a.\n\\end{equation*}\n{}From this definition it follows that\n\\begin{equation*}\n e_{\\f{i}}{}^a \\, e^{\\f{j}}{}_b \\, (\\nabla_a t^b) =\n e_{\\f{i}}(t^{\\f{j}}) + \\gamma^{\\f{j}}{}_{\\f{i}\\f{k}} \\, t^{\\f{k}}.\n\\end{equation*}\n\\\\\nWith respect to a coordinate frame $e_{\\mu}{}^a \\equiv\n\\left(\\frac{\\partial}{\\partial x^\\mu}\\right)^a$ the components\n$\\gamma^\\lambda{}_{\\mu\\nu}$ are the Christoffel symbols\n$\\Gamma^\\lambda{}_{\\mu\\nu}$.\n\\\\\nThe torsion $T^a{}_{bc}$ is defined by\n\\begin{equation*}\n \\nabla_a\\nabla_b f - \\nabla_b\\nabla_a f = - T^c{}_{ab} \\, \\nabla_c f,\n\\end{equation*}\nthe Riemann tensor $R_{abc}{}^d$ by\n\\begin{equation*}\n \\nabla_a\\nabla_b\\omega_c - \\nabla_b\\nabla_a\\omega_c =\n R_{abc}{}^d \\, \\omega_d - 
T^{d}{}_{ab} \\, \\nabla_d \\omega_c.\n\\end{equation*}\nContraction gives the Ricci tensor,\n\\begin{equation*}\n R_{ab} = R_{acb}{}^c,\n\\end{equation*}\nand the Ricci scalar\n\\begin{equation*}\n R = R_{ab}\\, g^{ab}.\n\\end{equation*}\nThe Einstein tensor is given by\n\\begin{equation*}\n G_{ab} = R_{ab} - \\frac{1}{2} \\, R \\, g_{ab}.\n\\end{equation*}\n\\\\\nThe speed of light $c$ is set to $1$ as the gravitational constant\n$\\kappa $ in $G_{ab}=\\kappa\\, T_{ab}$.\n\\end{appendix}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzearn b/data_all_eng_slimpj/shuffled/split2/finalzzearn new file mode 100644 index 0000000000000000000000000000000000000000..8f9af7363a450d38b2240ba5e736b7ae4e65e2fd --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzearn @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nPb and Sn $\\sqrt{3}\\times \\sqrt{3}$ adlayer structures on the (111)\nsurface of Ge have recently revealed a reversible charge density wave \n(CDW) transition to a low temperature reconstructed $3\\times 3$ phase.\n\\cite{nature,Modesti_1,Modesti_2,Avila,LeLay,Sn_carpinelli,asensio} \nA half-filled surface state band makes the high temperature phase metallic.\nThe low temperature phase is either metallic -- as seems to be the case\nfor Sn\/Ge(111) -- or weakly gapped, or pseudo-gapped, as suggested\nfor Pb\/Ge(111). \n\nRelated isoelectronic systems, like the $\\sqrt{3}$-adlayer of Si on\nthe (0001) surface of SiC \\cite{SiC} and on K\/Si(111):B,\\cite{KSi}\nshow a clear insulating behavior, with a large gap,\nno structural anomalies, no CDWs, and no transitions, at least to\nour present knowledge.\n\nThese adsorbate surfaces are altogether mysterious. \nThe very existence of a $\\sqrt{3}\\times \\sqrt{3}$ adsorbate phase, \nwith coverage 1\/3, is puzzling. \nFor isoelectronic Si on Si(111), or Ge on Ge(111), for instance, there exists \nno such phase. 
The stable low-coverage phases are $7\\times 7$ and $c(2\\times 8)$ \nrespectively, whose coverage is instead close to 1\/4. \nThey are made up of $2 \\times2$ basic building blocks, each with one adatom\nsaturating three out of four first-layer atoms, and one unsaturated\nfirst-layer atom, the ``restatom''. In this adatom-restatom block, \nthe nominally unpaired electron of the adatom and that of the restatom pair \noff together, giving rise to a stable, fully saturated, insulating surface. \nBy contrast, the $\\sqrt{3}\\times \\sqrt{3}$ phases\nwith 1\/3 coverage are very common for trivalent adsorbates, such as\nGa and In, and for pentavalent ones like As, on the same (111) surfaces.\nThese adatoms lack the unpaired electron, and can therefore lead to\na fully saturated insulating surface without the need for any restatoms. \n\nA $\\sqrt{3}\\times \\sqrt{3}$ adsorbate phase of {\\it tetravalent} adatoms \nis bound by construction to possess one unpaired electron per adatom,\ngiving rise to a very destabilizing half-filled metallic surface state band.\nSeen in this crude light, it is a puzzle why this kind of coverage should \nconstitute even a locally stable state of the surface.\n\nLooking more closely, we may speculate that SiC(0001)\\cite{SiC} \nand K\/Si(111):B,\\cite{KSi} most likely Mott-Hubbard insulators,\n\\cite{santoro,Northrup,Anisimov,KSi} are perhaps ``stabilized'' by Coulomb \nrepulsions so large as to make it difficult for electrons to move anyway. \nFor the more innocent-looking, less correlated, Pb\/Ge(111) and Sn\/Ge(111), \nthis argument is less obvious, and the puzzle remains. The $3\\times 3$ CDW \nstate -- whatever its real nature -- most likely \nserves the function of stabilizing these otherwise unstable surfaces at \nlow temperatures. Nonetheless, the periodicity chosen by the surface CDW\n-- $3\\times 3$, meaning a $\\sqrt{3}\\times \\sqrt{3}$ super-cell of adatoms -- \nis not at all evident. 
In fact, it replaces a supposedly unstable state \ncharacterized by an odd number of electrons\/cell (three), with another \nwhere the electron number (nine) is, alas, odd again.\n\nBe all that as it may, there is little doubt that the main factor driving the\nphenomena on all these surfaces appears to be precisely the half-filled --\nand extremely narrow -- surface state band. We thus\nbegin with a discussion that in principle encompasses all the\n$\\sqrt{3}\\times \\sqrt{3}$ tetravalent adsorbed surfaces.\n\nWe believe the following points to be of general validity:\n\n{\\em i)\\\/} {\\bf Poor nesting}. \n\nTwo-dimensional Fermi surface (FS) nesting in the half-filled surface \nstates\\cite{tosatti} has been repeatedly invoked as the driving \nmechanism for the\nCDW instability in the case of Pb\/Ge,\\cite{nature,asensio} but excluded \nfor the case of Sn\/Ge.\\cite{Sn_carpinelli,Sn_scandolo} \nHowever, by inspecting both photoemission $k(E)$ data,\n\\cite{Modesti_2,Avila,LeLay,asensio} and existing \nfirst-principles (LDA) calculations\\cite{nature,Sandro,Sn_carpinelli}\nof the surface half-filled band (the ``adatom dangling \nbond band''), we fail to detect a particularly good nesting of the \ntwo-dimensional FS at the surface Brillouin zone (BZ) corner \n${\\bf K}=(4\\pi\/3a,0)$. The wavevector-dependent susceptibility \ngenerated by the calculated band structure, in particular, has no \nespecially large value at this k-point, and rather peaks elsewhere \n(see inset in Fig.\\ \\ref{band_bz_chi0:fig}). \nTo be sure, there is nothing preventing in general a good nesting at \n${\\bf K}=(4\\pi\/3a,0)$, or any other k-point.\nHowever, insofar as the surface state band is really lying in a bulk gap at \neach single k-point, it should be with good accuracy \n-- by simple state counting and charge neutrality --\nprecisely half filled. \nThis implies that the filled and empty state areas should be equal. 
\nHypothetical Fermi surfaces with this kind of shape and good nesting at \n${\\bf K}=(4\\pi\/3a,0)$ do not appear to be compatible \nwith an integer electron number.\nWe thus believe lack of perfect nesting to be the case for both \nPb\/Ge and Sn\/Ge.\n\nFig.\\ \\ref{band_bz_chi0:fig}, showing a tight binding fit to the LDA surface \nband dispersion for the test-case of Si(111)\/Si,\\cite{Sandro} as well as the\ncorresponding FS and Lindhard density response function $\\chi_o(\\bf q)$,\n\\[ \\chi_o(\\bf q) \\,=\\, \\int_{BZ} \\frac{d\\bf k}{(2\\pi)^2}\n\\; \\frac{ n_{\\bf k} - n_{{\\bf k}+{\\bf q}} }\n{\\epsilon_{{\\bf k}+{\\bf q}} - \\epsilon_{\\bf k} } \\;, \\]\n$n_{\\bf k}$ and $\\epsilon_{\\bf k}$ being the occupation number and energy \nof an electron with Bloch momentum ${\\bf k}$, \nprovides a concrete illustration of these statements.\nWe note, in passing, that a strong nesting at ${\\bf K}$ is, on the contrary, \nautomatically guaranteed if the surface band acquires a uniform magnetization\nin such a way that the densities of up and down electrons become, \nrespectively, $2\/3$ and $1\/3$.\\cite{Sandro} \nThe majority spins would then fill the region external to the reduced BZ in \nFig.\\ \\ref{band_bz_chi0:fig}, and their FS would be strongly nested. \nThis suggestion, which turns out to be correct at the mean-field level, \npoints in the direction of a possible role played by magnetism in these \nsystems. \n\n{\\em ii)\\\/} {\\bf Importance of electron-electron interactions}. \n\nThe width $W$ of the surface band is relatively small:\n$W\\approx 0.5$ eV for Pb and Sn\/Ge(111), $W\\approx 0.3$ eV for SiC(0001). \nMoreover, this band is half-filled.\nThese facts call for a careful consideration of electron-electron\ninteractions, as well as of electron-phonon (e-ph), as possible sources of \ninstability. 
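As a purely illustrative aside -- our addition, not part of the original text -- the Lindhard function $\\chi_o({\\bf q})$ written under point {\\em i)\\\/} above can be evaluated numerically for a band truncated to the nearest-neighbor hopping $t_1$ alone (units $t_1=1$, lattice constant $a=1$). The grid size, the median-based determination of half filling, and the skipping of degenerate terms are all choices of this sketch, not of the paper:

```python
import numpy as np

def eps(kx, ky, t=1.0):
    # Nearest-neighbour tight-binding band on the triangular adatom lattice
    # (a = 1); this keeps only t_1 of the paper's t_1,...,t_6 fit.
    return 2.0 * t * (np.cos(kx)
                      + 2.0 * np.cos(kx / 2.0) * np.cos(np.sqrt(3.0) * ky / 2.0))

def chi0(q, t=1.0, n_grid=60, eta=1e-9):
    # T = 0 Lindhard function chi_0(q), sampled on an n_grid x n_grid mesh of
    # the reciprocal unit cell spanned by b1, b2 (equivalent to the hexagonal
    # BZ by periodicity).  Pairs with eps_{k+q} == eps_k, whose contribution
    # vanishes in the T -> 0 limit, are simply skipped.
    b1 = np.array([2.0 * np.pi, -2.0 * np.pi / np.sqrt(3.0)])
    b2 = np.array([0.0, 4.0 * np.pi / np.sqrt(3.0)])
    frac = (np.arange(n_grid) + 0.5) / n_grid
    u, v = np.meshgrid(frac, frac, indexing="ij")
    kx = u * b1[0] + v * b2[0]
    ky = u * b1[1] + v * b2[1]
    e_k = eps(kx, ky, t)
    mu = np.median(e_k)               # half filling: Fermi level at the median
    n_k = (e_k < mu).astype(float)
    e_kq = eps(kx + q[0], ky + q[1], t)
    n_kq = (e_kq < mu).astype(float)
    num, den = n_k - n_kq, e_kq - e_k
    mask = np.abs(den) > eta
    return float(np.sum(num[mask] / den[mask])) / n_grid**2

# chi_0 at the BZ corner K = (4*pi/3, 0); every surviving term of the sum is
# positive with this sign convention, so chi_0(K) > 0.
chi_K = chi0((4.0 * np.pi / 3.0, 0.0))
```

Scanning such a sketch over ${\\bf q}$ locates the susceptibility peak; whether it sits at ${\\bf K}$ or elsewhere depends on the details of the dispersion, which is precisely the point made under {\\em i)\\\/}.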
The importance of electron-electron interaction is \nunderlined by the different phenomenology\nof SiC(0001) and K\/Si(111):B with respect to Pb-Sn\/Ge(111). \nThe stronger insulating character of the former surfaces parallels\nclosely their stronger electron-electron \nrepulsions, connected both with more localized surface Wannier functions\n(see later on),\nand with reduced screening, due to larger bulk semiconducting gaps.\n\n{\\em iii)\\\/} {\\bf Weakness of LDA calculations for ground state prediction}. \n\nLDA electronic structure calculations -- an extremely well tested tool\nin many areas -- are certainly suitable for a weakly interacting system,\nsuch as the bulk semiconductor, or a passivated semiconductor surface. \nThey are less reliable, especially when they do not include spin, in \npredicting the stable state and the instabilities of a narrow band system. \nFor instance, the phenomenology of SiC(0001) -- suggesting a Mott-Hubbard \ninsulator -- is unreproducible by LDA. \nThe onset of a CDW on Sn\/Ge(111) is also not predicted by recent \nLDA calculations.\\cite{Sn_carpinelli,Sn_scandolo} While there is no\nreason to doubt the basic credibility of the one-electron band energies \nobtained from these Kohn-Sham equations, the mean-field treatment of \ninteractions, the screened local exchange, and especially \nthe neglect of magnetic\ncorrelations are the standard source of problems with LDA. As a\nconsequence, it will be necessary to worry more substantially about\ninteractions, and to use methods which, even if mean-field, permit the\ninclusion of strong correlations, including magnetic effects.\n\n{\\em iv)\\\/} \n{\\bf Interaction-driven mechanisms for $3\\times 3$ CDW instabilities}.\n\nThere are several different couplings which the surface electrons, as they hop\nweakly from one surface adatom site to another, experience, and which can influence the \nformation of the CDW, or of an insulating ground state:\na) on-site, and nearest-neighbor (n.n.) 
inter-site electron-electron repulsion; \nb) on-site effective attraction (negative Hubbard-$U$ term) of electron-phonon\norigin.\n\nBecause of poor nesting, the electron-phonon coupling alone is unlikely to \ndrive the $3\\times 3$ CDW. \nAt weak coupling, the susceptibility peak in Fig.\\ \\ref{band_bz_chi0:fig} would\nrather drive an incommensurate periodicity. \nAt strong coupling, the frustration associated with the triangular lattice \nwill favor, in general, a superconducting ground state over a \nCDW phase (see Appendix).\\cite{santos}\n\nOn the other hand, the electron-electron interaction, both on-site and, \nindependently, nearest neighbor, naturally suggests, as we shall see later, \nthe $3\\times 3$ surface periodicity, which is found experimentally. \n\nThe approach we will take is based on an extended Hubbard-Holstein model.\nIt is by necessity a ``non-first-principle'' approach and, as such,\nhas no strong predictive power. \nHowever, it is made more realistic by using parameters extracted \nfrom first-principles calculations, and we find it very helpful in clarifying \nthe possible scenarios as a function of the strength of electron-electron \ninteractions. \nBecause of this rather qualitative use, we will make no attempt to push the \naccuracy of treatment of this model to a very high level of sophistication. \nThe basic tool will be the unrestricted Hartree-Fock approximation. \nAlthough mean field, it allows magnetic solutions, favored by exchange which \nis unscreened. \n\n\\section{Model}\nEach tetravalent adatom on a (111) semiconductor surface carries a dangling\nbond -- an unpaired electron in an unsaturated orbital.\nIn the $\\sqrt{3}\\times\\sqrt{3}$ structure, the dangling bonds of the adatoms \ngive rise to a band of surface states which lies in the bulk semiconductor\ngap.\\cite{nature,Sandro} By electron counting, such a band is half-filled. 
\nOur basic starting point is the quantitatively accurate surface state band \ndispersion $\\epsilon_{\\bf k}$ which one calculates in gradient-corrected\nLDA.\\cite{nature,Sandro} \nIt is shown in Fig.\\ \\ref{band_bz_chi0:fig} for the case of Si\/Si(111). \nThe solid and dashed lines in Fig.\\ \\ref{band_bz_chi0:fig} are tight-binding \nfits to the LDA results obtained by including, respectively, up to the $6^{th}$ \nand up to the $2^{nd}$ shell of neighbors.\nThe fit with hopping integrals $t_1,t_2,\\cdots,t_6$ is quite good.\nLess good, but qualitatively acceptable, is the fit obtained using only\nnearest neighbor (n.n.) and next-nearest neighbor (n.n.n.) hopping integrals\n$t_1$ and $t_2$.\nThe Fermi surface (FS) for the half-filled surface band is shown in the\nupper inset of Fig.\\ \\ref{band_bz_chi0:fig}.\nIt is important to stress that the FS does not show good nesting properties\nat the wavevector ${\\bf q}={\\bf K}$ (the BZ corner).\nThis feature is shared by all LDA calculations on similar \nsystems.\\cite{nature,Sandro,Sn_carpinelli}\nAlbeit small, the bandwidth $W$ of the surface band is much greater than one \nwould predict by a direct overlap of adatom dangling bonds, as the adatoms\nare very far apart, for instance about $7\\AA$ on Ge(111). Hopping is\nindirect, and takes place from the adatom to the first-layer atoms\nunderneath, from there to a second-layer atom, then again to a first-layer\natom underneath the other adatom, and from there finally to the other adatom\ndangling bond. \nThus, when expressed in terms of elementary hopping processes between \nhybrid orbitals, electron hopping between two neighboring adatom dangling \nbonds is fifth order.\nAs a result, the final dispersion of the surface state band strongly \nparallels that of the closest bulk band, the valence band. 
Correspondingly,\nhybridization effects of the dangling bond orbitals with first, second, and\neven third, bulk layer orbitals are strong, as shown by the extension \ninto the bulk of the Wannier orbital associated to the LDA surface band \n(Fig.\\ \\ref{wannier:fig}).\n\nIn spite of this, we can still associate to every adatom a Wannier orbital \nand write the effective Hamiltonian for the surface band as follows:\n\\begin{equation}\nH \\,=\\, \\sum_{{\\bf k}}^{BZ} \\sum_{\\sigma} \\epsilon_{\\bf k} \nc^{\\dagger}_{{\\bf k},\\sigma} c_{{\\bf k},\\sigma} \\,+\\, H_{\\rm ph} \\,+\\, \nH_{\\rm e-ph} \\,+\\, H_{\\rm int} \\;,\n\\end{equation}\nwhere $c^{\\dagger}_{{\\bf k},\\sigma}$ is the Fourier transform of \nthe Wannier orbital, namely the surface state in a Bloch picture.\nThe sum over the wavevectors runs over the surface BZ.\n$H_{\\rm int}$ includes correlation effects which are not correctly accounted \nfor within LDA, which we parametrize as follows:\n\\begin{equation}\nH_{\\rm int} = U \\sum_{{\\bf r}} n_{{\\bf r},\\uparrow} n_{{\\bf r},\\downarrow}\n+ \\frac{1}{2} \\sum_{{\\bf r}\\ne {\\bf r}'} V_{{\\bf r}-{\\bf r}'} \n(n_{{\\bf r}}-1) (n_{{\\bf r}'}-1) \\;.\n\\end{equation}\nHere $U$ is an effective repulsion (Hubbard-$U$) for two electrons on the same \nadatom Wannier orbital, and $V_{{\\bf r}-{\\bf r}'}$ is the \ndirect Coulomb interaction between different sites ${\\bf r}$ and \n${\\bf r}'$.\\cite{non-diag:nota}\nLet $V$ be the n.n.\\ value of $V_{{\\bf r}-{\\bf r}'}$, which is, clearly, the\nlargest term.\nWe have considered two models for $V_{{\\bf r}-{\\bf r}'}$: \na model (A) in which we truncate\n$V_{{\\bf r}-{\\bf r}'}$ to n.n., and a model (B) in which\n$V_{{\\bf r}-{\\bf r}'}$ has a long range Coulombic tail of the form\n\\[ V_{{\\bf r}-{\\bf r}'} \\,=\\, \\frac{a V}{|{\\bf r}-{\\bf r}'|} \\;, \\]\nwhere $a$ is the n.n.\\ distance.\nThe results for model B are qualitatively similar to those of A, \nand will be only briefly discussed later on. 
In other words, even if\nmost of the detailed results in this paper will be based on the n.n.\\ \n$V_{{\\bf r}-{\\bf r}'}$, their validity is more general. \n\nLDA estimates of the {\\it bare} Coulomb repulsion $U_o$ and $V_o$ \nbetween two electrons respectively on the same and on neighboring Wannier \norbitals are -- for our test case of Si(111)\/Si -- of about $3.6$ eV and \n$1.8$ eV respectively.\\cite{Sandro} \nScreening effects by the underlying bulk are expected to\nreduce these repulsive energies very substantially. An order-of-magnitude\nestimate for $U$ and $V$ is obtained by dividing their bare values\nby the image-charge screening factor, $(\\epsilon +1)\/2\\approx 6$, \nyielding, for Si, $U=0.6$ eV ($10 t_1$), and $V=0.3$ eV ($5 t_1$).\nCorresponding values would be somewhat smaller for Ge(111), in view\nof a very similar dispersion \\cite{Sn_scandolo} and of a ratio of about \n4\/3 between the dielectric constants of Ge and Si. For SiC(0001), the opposite\nis true. The surface state band is extremely narrow, of order $0.3$ eV\n\\cite{pollmann}, while the bulk dielectric constant is only about $6.5$. \n\nAs for the e-ph interaction, in principle both the on-site\nWannier state energy and the hopping matrix elements between neighbors\ndepend on the positions of the adatoms. \nWithin the deformation potential approximation, we consider only a \nlinear dependence of the on-site energy on a single ionic coordinate \n(for instance, the height $z_{\\bf r}$ of the adatom measured \nfrom the equilibrium position), and take \n\\begin{equation} \\label{e-ph-ham:eqn}\nH_{\\rm e-ph} = -g \\sum_{{\\bf r}} z_{\\bf r} (n_{\\bf r}-1) \\;,\n\\end{equation}\nwith $g$ of the order of $1$ eV\/$\\AA$. 
The free-phonon term will\nhave the usual form\n\\begin{equation}\nH_{\\rm ph} = \\sum_{\\bf k}^{BZ} \\hbar \\omega_{\\bf k}\n\\left( b^{\\dagger}_{\\bf k} b_{\\bf k} + \\frac{1}{2} \\right) \\;,\n\\end{equation}\nwhere $b_{\\bf k}$ is the phonon annihilation operator, and \n$\\hbar \\omega_{\\bf k}$ a typical phonon frequency of the system, which \nwe take to be about 30 meV, independent of {\\bf k}. \n\n\\section{Phase diagram: some limiting cases} \n\\label{pd_no_g:sec}\n\nPreliminary to the full treatment of Sect.\\ \\ref{hf:sec}, \nwe consider first the purely electronic problem in the absence of \ne-ph interaction. \nWe start the discussion from particular limiting cases for which \nwell-controlled statements, or at least intuitively clear ones, can be made,\nwithout the need of any new specific calculations.\nIn the Appendix we will also consider, because it is useful in connection \nwith the electron-phonon case, the unphysical limit of strong on-site \nattraction (large and negative $U$).\n\n\\subsection{Large positive $U$: the Mott insulator.} \\label{large_u:sec}\nFor $U\\gg V,W$, the system is deep inside the Mott insulating \nregime.\\cite{Anderson_SE} The charge degrees of freedom are frozen,\nwith a gap of order U. The only dynamics is in the spin degrees of freedom.\nWithin the large manifold of spin degenerate states with exactly one electron \nper site, the kinetic energy generates, in second order perturbation theory, \na Heisenberg spin-1\/2 antiferromagnetic effective Hamiltonian\ngoverning the {\\it spin\\\/} degrees of freedom,\n\\begin{equation}\nH_{\\rm eff} = \\sum_{(ij)} J_{ij} \\, {\\bf S}_{{\\bf r}_i} \\cdot\n{\\bf S}_{{\\bf r}_j} \\;,\n\\end{equation}\nwith $J_{ij}=4|t_{ij}|^2\/U$.\\cite{Anderson_SE}\n\nFor our test case of Si(111)\/Si, the values of the hoppings are such that\n$J_1 \\approx 20$ meV, $J_2\/J_1\\approx 0.12$ while the remaining couplings \n$J_3,\\cdots$ are very small. 
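As a rough consistency check (our own estimate, not part of the original text):\na nearest-neighbor tight-binding band on the triangular lattice has total width $9 t_1$,\nso $W \\approx 0.5$ eV corresponds to $t_1 \\approx 55$ meV; with the screened value\n$U \\approx 0.6$ eV $= 10\\, t_1$ quoted above, the superexchange scale is\n\\[ J_1 \\,=\\, \\frac{4 t_1^2}{U} \\,=\\, 0.4\\, t_1 \\,\\approx\\, 22\\ {\\rm meV} \\;, \\]\nof the same order as the value $J_1 \\approx 20$ meV quoted for Si(111)\/Si.\n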
\nAntiferromagnetism is frustrated on the triangular lattice. \nZero temperature long range order (LRO) -- if present -- should be of the \nthree-sublattice $120^o$-N\\'eel type, which can be also seen as a commensurate \nspiral spin density wave (s-SDW).\n\nBecause it does not imbalance charge, this state is not further affected by\nelectron-phonon coupling.\n\nIn summary, we expect for large values of $U$ a wide-gap Mott insulator with \na s-SDW (spins lying in a plane, forming $120^o$ angles), \na $3\\times 3$ {\\it magnetic} unit cell, but uniform charge (no CDW). \nThis is, most likely, the state to be found on the Si-terminated and\nC-terminated SiC(0001) surface at T=0 \\cite{Northrup,Anisimov}.\n\n\\subsection{Strong inter-site repulsion:\nan asymmetric CDW with three inequivalent sites.}\n\\label{large_v:sec}\nThe e-ph coupling can effectively reduce $U$, but not $V$. \nTherefore, it is of interest to consider the hypothetical regime\n$WV_c$, however, an insulating asymmetric CDW (a-CDW) prevails. \nThis is simply the spin collinear version of the non-collinear phase\ndescribed in Sect.\\ \\ref{large_v:sec}. \nFig.\\ \\ref{e_cdw:fig} shows the energy per site of the most relevant\nHF solutions at $U\/t_1=10$ as a function of $V$ for model B (Coulomb tail case).\nThe s-SDW and the l-SDW cross at $V_c\\approx 6.6 t_1$ where,\nhowever, the a-CDW insulating solution starts to be the favored one.\nThis large-$V$ solution has a large $CDW$ order parameter\nwith $\\phi_{\\rho}\\ne 0$ (mod. 
$2\\pi\/3$), a concomitant l-SDW, and $m^z=1\/3$.\nBy recalling the discussion in sect.\\ \\ref{large_v:sec}, we notice that\na state with a magnetization $m^z=1\/3$ and a l-SDW is the best HF solution once\na $3\\times 3$ restriction has been applied, since a spiral SDW on the\nsingly occupied sublattice would involve a larger periodicity (phase B).\n\n{\\bf Phase D: Symmetric non-magnetic CDW metallic phase.}\nFor small values of $U$ and $V$, or for large enough e-ph coupling\n$g$, a {\\em metallic\\\/} CDW with $\\phi_{\\rho}=0$ (m-CDW) is found. \n(See Fig.\\ \\ref{hf_bands:fig}(c) for the HF bands.)\nThis phase constitutes a candidate, alternative to the magnetic phase B', \nand compatible with the main experimental facts, which \nmight be relevant for the case of Pb\/Ge(111) and of Sn\/Ge(111). \nThe degree of metallicity of this phase is much reduced relative to the \nundistorted surface (pseudo-gap). \n\nWe stress that the e-ph interaction can stabilize the \n$\\phi_{\\rho}=0$ m-CDW also at relatively large $U$, by countering $U$ with a\nlarge negative $\\Delta U=-g^2\/M\\omega^2_{\\bf K}$. \nWe demonstrate this in Fig.\\ \\ref{e_u8v2_ph:fig}, where we plot the energy\nper site as a function of $\\Delta U$ at $U\/t_1=8$ and $V\/t_1=2$, for the \nthree relevant HF solutions, i.e., the spiral SDW (phase A), the collinear\nSDW with $m^z=1\/3$ (phase A'), and the metallic non-magnetic CDW (phase D). \nThe spiral SDW is unaffected by the electron-phonon coupling. \nThe energy of the collinear SDW with $m^z=1\/3$ improves slightly \nwith increasing $g$, due to the small CDW amplitude of this phase. This effect\nis not large enough to make this phase stable in any range of couplings.\nAt a critical value of $g$, the metallic non-magnetic CDW \n(where the CDW order parameter is large, $|\\rho_{\\bf K}| \\sim 0.5$) wins over\nthe magnetic phases. 
\nThe Fourier transform of the lattice distortion at ${\\bf K}$ is given by \n$\\langle z_{\\bf K}\\rangle=(g\/M\\omega_{\\bf K}^2)\\rho_{\\bf K}\n=\\rho_{\\bf K} |\\Delta U|\/g$.\n\nA rough estimate shows that the order of magnitude\nof the electron-phonon coupling necessary to stabilize the CDW phase is not\nunreasonable.\nWith $g=1$ eV\/$\\AA$, $M_{Si}=28$, and $\\omega_{\\bf K}\\approx 30$ meV\nwe get $\\Delta U \\approx -3 t_1$, sufficient to switch from a \ns-SDW ground state to a m-CDW for $U\/t_1=8$ and $V\/t_1=2$. \nWith these values of the parameters we have $|\\rho_{\\bf K}| \\approx 0.43$, and\nwe estimate $|\\langle z_{\\bf K}\\rangle| \\approx 0.07 \\AA$. This corresponds,\nsince $\\langle z_{\\bf r} \\rangle \\sim 2\\cos({\\bf K}\\cdot {\\bf r}) \n|\\langle z_{\\bf K}\\rangle|$, to a total displacement between the adatom\ngoing up and the two going down of $\\Delta z = 3\n|\\langle z_{\\bf K}\\rangle| \\approx 0.2\\AA$. \n\nWe notice that values of $g$ much larger than those used in \nFig.\\ \\ref{e_u8v2_ph:fig} would eventually stabilize a superconducting \nground state (see Appendix).\n\n\\section{CDW order parameter and STM experiments} \\label{stm:sec}\n\nWe discuss, in the present section, the relationship between the\nCDW order parameter, as defined in Eq.\\ \\ref{rho:eqn}, and an STM map\nof the surface. \nAs the crudest approximation to the tunneling current for a given bias \n$V_{\\rm bias}$ we consider the integral of the charge density for \none-electron states within $V_{\\rm bias}$ from the Fermi level, \nweighted with the barrier tunneling factor $T(V)$,\\cite{Selloni,Tosatti:high}\n\\begin{equation}\n\\label{stm:eqn}\nJ(V_{\\rm bias},{\\bf r}=x,y;z) \\approx \\int_0^{V_{\\rm bias}} dV \\sum_{n{\\bf k}} \n|\\Psi_{n{\\bf k}}({\\bf r})|^2 \\delta (E_{n{\\bf k}}-E_F+V) T(V) \\;.\n\\end{equation}\nThe tunneling factor weights most prominently the states\nimmediately close to the Fermi level. 
\nIn view of the purely qualitative value of Eq.\\ (\\ref{stm:eqn}), \nwe have moreover decided to ignore $T(V)$ altogether and to account for \nits effect by reducing the bias voltage\n$V_{\\rm bias}$ in Eq.\\ (\\ref{stm:eqn}) to an effective value \n$V_{\\rm bias}^{\\rm eff}$. \nBy doing this, we have extracted an ``STM map'' for a point in phase A'\n($U\/t_1=9$ and $V=2$, model A)\n-- a spin-density wave where the amplitude of the CDW order parameter is \nrather small, $|\\rho_{\\bf K}|=0.039$ --\nand a point in phase D ($U\/t_1=4$ and $V=2$, model A) \n-- a pure CDW where the order parameter is quite large,\n$|\\rho_{\\bf K}|=0.4$. \nThe results for constant $z$, and $x,y$ moving from adatom A to B to C,\nare shown in Fig.\\ \\ref{stm:fig}(a) and (b), for the two cases. \nThe solid curves refer to positive bias (current flowing from the sample\nto the tip), probing occupied states close to the Fermi level. \nThe dashed curves refer to negative bias, probing unoccupied states.\nIn both cases a) and b), one of the three atoms yields a larger current at \npositive bias, while the other two atoms have larger currents at negative bias. \nThe insets show the predicted ``contrast'' between the two peak values,\n$(J_1-J_2)\/(J_1+J_2)$, $J_1$ and $J_2$ being in each case, respectively, \nthe largest and the smallest of the STM peak currents at the positions of \nthe adatoms.\nWe notice the following points: \ni) for the occupied states (positive bias) the pure CDW phase has, as expected,\na larger contrast than the magnetic phase. \nAs we neglect the tunneling factors $T(V)$, in the limit of large positive\neffective bias we recover the total asymmetry in the charge of the two \ninequivalent atoms, $(n_1-n_2)\/(n_1+n_2)$, indicated by a dashed horizontal \nline in the insets. 
\nObserve that the way this large bias limit is reached is completely different \nfor the two cases a) and b): in the magnetic case a) the contrast overshoots \nat small biases, attaining values substantially larger than the nominal CDW order\nparameter, and then goes to the limit $(n_1-n_2)\/(n_1+n_2)$ from above;\nin the pure CDW case b), on the contrary, the limit is reached monotonically\nfrom below. \nii) for empty states (negative bias) the contrast is even more surprising: at\nsmall bias it is very large in both cases a) and b). With increasing\nbias, the contrast for the pure CDW case tends monotonically to a large \nvalue, whereas the magnetic case shows a strong non-monotonicity. \n\nThese results suggest that one should look more carefully, and quantitatively,\nat the behavior of the asymmetry between STM peak currents as a function\nof the bias, including the region of relatively small biases: the different\nbehavior of the asymmetry of the magnetic case versus the pure CDW case should \nbe marked enough -- and survive in a more refined analysis including $T(V)$ --\nto make the STM map a good way of discriminating between the two scenarios. \n\n\\section{Discussion and conclusions} \\label{discussion:sec}\n\nWithin our model study we have learned that on the surfaces considered: \n\n(i) If $U$ and $V$ are ignored, there is no straight electron-phonon driven \n$3\\times 3$ CDW surface instability. \nHowever, any phase involving a CDW, for example as\na secondary order parameter attached to a primary SDW, can take\nadvantage and gain some extra stabilization energy from a small surface\nlattice distortion, via electron-phonon coupling.\n\n(ii) Electron-electron repulsion and the two-dimensional Fermi surface\nare capable of driving transitions of the undistorted metallic surface \nto a variety of states that are either insulating or in any case less \nmetallic, some possessing the $3\\times 3$ periodicity. 
\n\n(iii) This can occur via two different mechanisms: a) the inter-site\nrepulsion $V$ can stabilize insulating or semi-metallic CDWs, without \na crucial involvement of spin degrees of freedom; b) the on-site \nrepulsion $U$ can produce essentially magnetic insulators with or without \na weak accompanying $3\\times 3$ CDW, as required by symmetry.\n\n(iv) For $U$ moderate of order $W$ and for smaller $V$, an interesting state \nis realized, with a large SDW and a small accompanying CDW. The state \nis either a small-gap insulator, or a semi-metal, and may or may not \nbe associated with a net overall magnetization, depending on the nature \n(linear or spiral, respectively) of the leading SDW. \n\n(v) For $U$ and $V$ both small but finite, a metallic CDW without any magnetism\nis obtained. The same phase can also be stabilized for larger values of $U$\nby the presence of a substantial electron-phonon coupling. \nWe stress that, in this case, $V$ is the coupling responsible for the\n$3\\times 3$ symmetry of the unit cell, whereas the role of the electron-phonon \ncoupling is that of destroying magnetism by effectively decreasing $U$.\nElectron-phonon coupling alone is not sufficient to justify a commensurate\n$3\\times 3$ CDW.\n\n(vi) Either of the phases in (iv) or (v) could be natural candidates for \nexplaining the weak $3\\times 3$ CDW seen experimentally on Sn-Pb\/Ge(111). \n\n(vii) Finally, for large $U$, small $V$ (in comparison with the bandwidth $W$)\nthe Mott-Hubbard state prevails. It is a wide-gap insulator, with\na pure spiral SDW, with $3\\times 3$ overall periodicity, and coplanar \n$120^o$ long-range spin ordering at zero temperature. It possesses\nno net magnetization, and no accompanying CDW. \n\n(viii) The above is the kind of state which we expect to be realized on \nSiC(0001), and also possibly on K\/Si(111):B. \n\nAmong existing experiments, we have addressed particularly\nphotoemission and STM. 
Our calculated band structures for both the SDW\/CDW\nstate A' (iv) and the pure CDW state D (v) exhibit features which are similar \nto those found in photoemission from \nSn-Pb\/Ge(111).\\cite{Modesti_2,Avila,LeLay,asensio} \nThe simulated STM images for the two kinds of states are predicted to differ\nin their voltage dependence. \n\nFuture experiments are strongly called for, aimed at detecting whether\nmagnetic correlations are actually dominant, as we think is very likely, \non all these surfaces, or whether Sn-Pb\/Ge(111) are instead non-magnetic\nand electron-phonon driven. \nThe issue of whether magnetic long-range order -- which we definitely propose\nfor SiC(0001) and K\/Si(111):B at $T=0$, and also hypothesize for \nSn-Pb\/Ge(111) -- survives up to finite temperatures is one which we \ncannot settle at this moment. This is due to the difficulty\nin estimating the surface magnetic anisotropy, without which order\nwould of course be washed out by temperature. In any case, it should be\npossible to pursue the possibility of either magnetism or incipient\nmagnetism using the appropriate spectroscopic tools.\n\nThis line of experimental research, although undoubtedly difficult, should \nbe very exciting since it might lead to the unprecedented discovery of \nmagnetic states at surfaces possessing no transition metal ions of any kind, \nsuch as these seemingly innocent semiconductor surfaces. \n\nWe acknowledge financial support from INFM, through projects LOTUS and\nHTSC, and from EU, through ERBCHRXCT940438. \nWe thank S. Modesti, J. Lorenzana, M.C. Asensio, J. Avila, G. Le Lay, \nE.W. Plummer and his collaborators, for discussions.\n\n\\section{Appendix. Large negative $U$: a superconducting ground state.}\n\\label{neg_u:sec}\nThe limit of large negative $U$, $U\\to -\\infty$, is considered here\nto show that CDWs are not favored by on-site attraction alone. 
Instead,\na superconducting ground state is favored.\\cite{santos}\nTo see this, consider the real-space states which are the low energy\nconfigurations for $U\\to -\\infty$: they consist of $N_e\/2$ sites (if $N_e$\nis the number of electrons) each of which is occupied by a pair of electrons\nwith opposite spins.\nThe large degeneracy in this manifold of states is\n-- once again, like in the $U\\to \\infty$ case -- removed by kinetic energy\nin second order perturbation theory. By assigning a pseudo-spin-1\/2 state\nto each site (up, if occupied by a pair, down if empty) one can show that\nthe effective Hamiltonian is \\cite{santos}\n\\begin{equation}\nH_{\\rm eff} = -\\sum_{(ij)} \\frac{J^{\\perp}_{ij}}{2} \\,\n\\left( S^+_{{\\bf r}_i} S^-_{{\\bf r}_j} \\,+\\, {\\rm H.c.} \\right) \\,+\\,\n\\sum_{(ij)} J^{z}_{ij} \\, S^z_{{\\bf r}_i} S^z_{{\\bf r}_j} \\;,\n\\end{equation}\nwith $J^{\\perp}_{ij}=4|t_{ij}|^2\/|U|$ and $J^{z}_{ij}=J^{\\perp}_{ij}$.\nIf $V$-terms are added, $J^z$ is modified to\n$J^{z}_{ij}=J^{\\perp}_{ij}+4V_{ij}$.\nRestricting our consideration to the n.n. case, we are left with a\nn.n.\\ Heisenberg Hamiltonian with ferromagnetic xy-part and an \nantiferromagnetic z-part. The sign of the xy-part cannot be changed at will\nby a canonical transformation because the lattice is non-bipartite. \nThe result is that the order is in the plane (i.e., superconductivity wins)\nfor small $V$. Only if $V$ is large enough the CDW \n(i.e., order in the z-direction) will be favored. \n\nEntirely similar considerations apply to the case of strong electron-phonon\ncoupling, $g\\to \\infty$.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIt has been difficult to read the recent financial news without finding mention of Collateralized Debt Obligations (CDO's). 
These financial instruments\nprovide ways of aggregating risk from a large number of sources and reselling\nit in a number of parts, each part having different risk-reward characteristics.\nNotwithstanding the role of CDO's in the recent market meltdown, the near\nfuture will no doubt see the financial\nengineering community continuing to develop structured investment\nvehicles like CDO's. Unfortunately, computational challenges in this area are formidable.\nThe main types of these assets have several common problematic features:\n\\begin{itemize}\n\\item they pool a large number of assets\n\\item they tranche the losses.\n\\end{itemize}\nThe ``problematic'' nature of this combination is that the trancheing procedure is nonlinear;\nand as is usual, the effect of a nonlinear transformation on a high-dimensional\nsystem is often difficult to understand. Ideally, one would like a theory which gives,\nif not explicit answers, at least some guidance. Lacking theory, one is\noften forced to search for models which are computationally\nfeasible, structurally robust, and which can be reasonably well-fitted to data.\n\nWe here consider a \\emph{large deviations} (cf. \\cite{MR1739680, MR1619036, MR758258}) analysis of certain aspects of synthetic CDO's.\nThe theory of large deviations is a collection of ideas which are\noften useful in studying rare events. 
The rare events of interest here involve losses in\n(and hence pricing of)\ninvestment-grade (senior or super-senior) tranches of synthetic CDO's.\nWe would like to see how far we can take a rigorous analysis when we use\nmathematical tools, viz., large deviations, which are designed expressly to study rare events.\nThe theory of large deviations usually gives a very refined\nanalysis of rare events (more refined, for example, than one based on mean-variance calculations); what does this analysis look like for CDO's?\n\nIn the course of our analysis, we will see that\nlarge deviations theory provides a natural framework for studying large amounts\nof idiosyncratic randomness. Moreover, the theory of large deviations\nprovides a way to compare rare events and see how they transform.\nWe believe this to be an important component of a larger\nanalysis of CDO's, particularly in cases where correlation comes from only a few sources (we will pursue a simple form of this idea in Subsection \\ref{S:Correlated}). In a sequel to this paper we will consider the more challenging case of a heterogeneous pool of assets.\n\n\nThis is not the first attempt to apply large deviations to structured finance.\nLosses in pools of large assets like CDO's have been considered in \\cite{MR2022976}, \\cite{Glasserman}\\footnote{Glasserman in \\cite{Glasserman} makes important headway in understanding correlation.}, and \\cite{MR2384674} (see also \\cite{MR1644496} for another application of large deviations to finance). Moreover, effects of tranching have been considered in\n\\cite{Vielex} and \\cite{YangHurdZhang}, both of which discuss saddlepoint\neffects of tranching once the distribution of the loss process is known.\nOur interest is to identify, as much as possible, exact asymptotic formulae\nfor the price of the CDO by focussing on the effects of large amounts\nof idiosyncratic randomness. 
We find that if we interpret\nthe loss process as an occupation measure, \\emph{Sanov's theorem}\nsuggests how to proceed.\nFurthermore, it allows us to develop something of a bottom-up analysis\nwhich directly connects the CDO price to the default probabilities of the underlying bonds. It also naturally leads to a number of calculations which reflect the\n\\emph{dynamics} of the default probabilities (as opposed\nto a snapshot of the default probabilities at expiry).\n\nFinally, the \\emph{ab initio} nature of our calculations bears note\\footnote{See in particular Remark \\ref{R:Comments} and the comments at the beginning of Section \\ref{S:HAS}.}.\nA number of models, such as the generalized Poisson loss model \\cite{Brigo}, the Hawkes process \\cite{Giesecke} and others\n(cf. \\cite{Caflisch, FilOverSch}), which successfully capture some of the complexity of CDO's, have been developed and implemented.\nOur approach is limited to investment-grade tranches, and hopefully will complement some of these models and contribute to their study.\n\n\\section{CDS to CDO---a Review}\nA standard review of credit default swaps and synthetic CDO's will help us fix notation, which comes from \\cite{Brigo}.\nLet's fix an underlying probability triple $(\\Omega,\\mathscr{F},\\mathbb{P})$, where $\\mathbb{P}$ represents the risk-neutral probability measure and $\\mathbb{E}$ is the associated expectation operator.
Under the contract, the protection seller pays the protection buyer $\\$1$\n(the \\emph{notional}) when the bond defaults\\footnote{We assume for simplicity no recovery.} (a nonnegative random time $\\tau$), as long as this\ndefault occurs before\\footnote{We require default to be \\emph{strictly} before expiry; that will save us some calculations resulting from potentially positive probability of default exactly at expiry.} the expiry of the contract (time $T$). This is the\n\\emph{protection leg} of the contract. In return, the protection buyer pays the\nprotection seller a premium $S$ at a finite collection $\\mathcal{T}$ of times (such that $t\\le T$ for all $t\\in \\mathcal{T}$) until\nthe default occurs. This is the \\emph{premium} leg of the contract; see Figure \\ref{fig:CDS}.\n\\begin{figure}\n\\includegraphics[scale=0.8]{cds.pdf}\n\\caption{Credit Default Swap}\n\\label{fig:CDS}\n\\end{figure}\nTo write this mathematically, define the loss process\n\\begin{equation*} L^\\circ_t \\overset{\\text{def}}{=} \\chi_{\\{t\\ge \\tau\\}} = \\begin{cases} 1 &\\text{if $t\\ge \\tau$} \\\\ 0 &\\text{if $t<\\tau$}\\end{cases}\\end{equation*}\nfor all $t\\in \\mathbb{R}$ (of course then $L^\\circ_t=0$ for $t<0$).\nThe present values of the protection and premium legs are thus\n\\begin{gather*}e^{-\\textsf{R} \\tau}\\chi_{\\{\\tau< T\\}} = \\int_{s\\in [0,T)} e^{-\\textsf{R} s}dL^\\circ_s \\\\\nS\\sum_{t\\in \\mathcal{T}} e^{-\\textsf{R} t}\\chi_{\\{\\tau>t\\}}= S\\sum_{t\\in \\mathcal{T}}e^{-\\textsf{R} t}\\{1-\\chi_{\\{\\tau\\le t\\}}\\}= S\\sum_{t\\in \\mathcal{T}} e^{-\\textsf{R} t}\\left(1-L^\\circ_t\\right) \\end{gather*}\nwhere $\\textsf{R}$ is the riskless interest rate\\footnote{It is not difficult to see that\nthe maps $\\omega\\mapsto e^{-\\textsf{R} \\tau(\\omega)}\\chi_{\\{\\tau(\\omega)< T\\}}$ and $\\omega\\mapsto \\sum_{t\\in \\mathcal{T}} e^{-\\textsf{R} t}\\chi_{\\{\\tau(\\omega)>t\\}}$ are measurable maps from $\\Omega$ to $\\mathbb{R}$; thus the
expectations make sense.}. The value of $S$ is defined by requiring that the expectation of these two legs agree (under the risk-neutral measure).\n\n\\subsection{Synthetic CDO's}\n\nIt is an easy step to modify this notation to construct a synthetic CDO. Consider $N$ credit default swaps (each one on a different name). Each CDS has notional value $1\/N$, and the default of the $n$-th name occurs at a random nonnegative time $\\tau_n$. The \\emph{notional} loss process is thus\n\\begin{equation*} L^{(N)}_t \\overset{\\text{def}}{=} \\frac{1}{N}\\sum_{n=1}^N \\chi_{\\{\\tau_n\\le t\\}} \\end{equation*}\nfor all $t\\in \\mathbb{R}$ (as in our above discussion of credit default swaps, $L^{(N)}_t=0$ for $t<0$).\nNote that $0\\le L^{(N)}_t\\le 1$ for all $t\\ge 0$. Fix \\emph{attachment} and \\emph{detachment} points $\\alpha$ and $\\beta$ in $[0,1]$ such that $\\alpha<\\beta$. We then define the \\emph{tranched} loss process $\\bar L^{(N)}$ as\n\\begin{equation*} \\bar L^{(N)}_t \\overset{\\text{def}}{=} \\frac{(L^{(N)}_t-\\alpha)^+-(L^{(N)}_t-\\beta)^+}{\\beta-\\alpha} = \\begin{cases} 0 &\\text{if $L^{(N)}_t<\\alpha$} \\\\\n\\frac{L^{(N)}_t-\\alpha}{\\beta-\\alpha} &\\text{if $\\alpha\\le L^{(N)}_t\\le \\beta$} \\\\\n1 &\\text{if $L^{(N)}_t\\ge \\beta$}\\end{cases} \\end{equation*}\nfor all $t\\in \\mathbb{R}$. 
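The tranching map displayed above is piecewise linear in the pool loss; a minimal numerical sketch in Python (the function name is ours, not from the paper):

```python
def tranched_loss(L, alpha, beta):
    """Tranched loss ((L - alpha)^+ - (L - beta)^+) / (beta - alpha):
    zero below the attachment point alpha, one above the detachment
    point beta, and linear in between."""
    return (max(L - alpha, 0.0) - max(L - beta, 0.0)) / (beta - alpha)
```

For a tranche with attachment $3\\%$ and detachment $7\\%$, a $2\\%$ pool loss leaves the tranche untouched, a $5\\%$ loss consumes half of it, and a $10\\%$ loss consumes all of it.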
The protection and premium legs of a synthetic CDO are basically given by replacing the loss process $L^\\circ$ in a credit default swap with $\\bar L$.\nNamely, define\n\\begin{equation*}\\textbf{P}^{\\text{prot}}_N \\overset{\\text{def}}{=} \\int_{s\\in [0,T)} e^{-\\textsf{R} s}d\\bar L^{(N)}_s \\qquad\\text{and}\\qquad\n\\textbf{P}^{\\text{prem}}_N\\overset{\\text{def}}{=} \\sum_{t\\in \\mathcal{T}} e^{-\\textsf{R} t}\\left(1-\\bar L^{(N)}_t\\right); \\end{equation*}\n$S_N\\textbf{P}^{\\text{prem}}_N$ is the present value of the premium leg (where $S_N$\nis the premium) and $\\textbf{P}^{\\text{prot}}_N$ is the present value of the protection leg.\nThe protection leg thus makes payments when defaults occur, as long as at\nleast $\\alpha$ (in percent) of the names have already defaulted, and only\nas long as no more than $\\beta$ (in percent) of the names have defaulted.\nThese payments are proportioned so that they add up to at most $\\$1$. The\npremium payments, on the other hand, are made only on the proportion\nof names which are still insured (i.e., which have not yet defaulted).\nThe premium $S_N$ should then be given by equating the risk-neutral expectations of the two legs; i.e.,\n\\begin{equation}\\label{E:SN} S_N= \\frac{\\mathbb{E}[\\textbf{P}^{\\text{prot}}_N]}{\\mathbb{E}[\\textbf{P}^{\\text{prem}}_N]}. \\end{equation}\nNote that $L^{(N)}_t$ is measurable for each $t\\in \\mathbb{R}$. Since $\\bar L^{(N)}_t$ is a continuous transformation of $L^{(N)}_t$, it is also measurable. Since $0\\le e^{-\\textsf{R} s}\\le 1$, $0\\le \\bar L\\le 1$, and $\\bar L$ is nondecreasing, $\\textbf{P}^{\\text{prot}}_N$ and $\\textbf{P}^{\\text{prem}}_N$\nboth take values in $[0,1]$. Moreover, the measurability of $\\bar L$ implies that $\\textbf{P}^{\\text{prot}}_N$ and $\\textbf{P}^{\\text{prem}}_N$ are measurable.
Thus both $\\mathbb{E}[\\textbf{P}^{\\text{prot}}_N]$ and $\\mathbb{E}[\\textbf{P}^{\\text{prem}}_N]$ are well-defined, finite, and nonnegative.\nOur goal is to evaluate $S_N$ when $N$ is large. This will be accomplished in \\eqref{E:Sas}.\n\\begin{figure}\n\\includegraphics[scale=0.7]{lossprocesses.pdf}\n\\caption{Loss processes $L^{(N)}$ and $\\bar L^{(N)}$}\n\\label{fig:typ}\n\\end{figure}\n\n\\section{The Model}\\label{S:SM}\n\nLet's now think about the sources of randomness in the names. Each name is affected by its own \\emph{idiosyncratic} randomness and by \\emph{systemic} randomness (which affects all of the names). Presumably, the systemic randomness,\nwhich corresponds to macroeconomic factors, is \\emph{low-dimensional}\ncompared to the number of names. For example, there may be only a handful of macroeconomic factors which affect a pool of many thousands of names. We can capture this functionality as\n\\begin{equation}\\label{E:structural} \\chi_{\\{\\tau_n<T\\}} = \\chi_A(\\xi^\\text{I}_n,\\xi^\\text{S}) \\end{equation}\nwhere the $\\xi^\\text{I}_n$'s are i.i.d. random variables representing the idiosyncratic randomness of the names, $\\xi^\\text{S}$ is an independent random variable representing the systemic randomness, and $A$ is an appropriate measurable set. Roughly speaking, a tranche will be investment-grade if $\\mathbb{P}\\left\\{ L^{(N)}_{T-}>\\alpha\\right\\}$ is small. Guided by Chebychev's inequality, let's define\n\\begin{equation*} \\mu^{(N)} \\overset{\\text{def}}{=} \\frac{1}{N}\\sum_{n=1}^N \\mathbb{E}\\left[\\chi_A(\\xi^\\text{I}_n,\\xi^\\text{S})\\right] \\qquad \\text{and}\\qquad \\sigma^{(N)} \\overset{\\text{def}}{=} \\sqrt{\\mathbb{E}\\left[\\left(L^{(N)}_{T-}-\\mu^{(N)}\\right)^2\\right]}. \\end{equation*}\nIf $\\alpha>\\mu^{(N)}$, Chebychev's inequality gives us that\n\\begin{equation*} \\mathbb{P}\\left\\{ L^{(N)}_{T-}>\\alpha\\right\\} \\le \\frac{\\left(\\sigma^{(N)}\\right)^2}{\\left(\\alpha-\\mu^{(N)}\\right)^2}. \\end{equation*}\nIn order for this to be small, we would like that $\\sigma^{(N)}$ be small;\nthis is the point of pooling.
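In the simplest concrete case---independent names with common default probability $p$, i.e., the systemic randomness suppressed---we have $\\mu^{(N)}=p$ and $(\\sigma^{(N)})^2=\\tfrac{p(1-p)}{N}$, and the Chebychev bound can be compared against the exact binomial tail. A numerical sketch under that assumption (the helper names are ours):

```python
from math import comb

def exact_tail(N, p, alpha):
    """Exact P{L > alpha} = P{Bin(N, p) > N*alpha} when the N names
    default independently with common probability p."""
    return sum(comb(N, k) * p**k * (1 - p)**(N - k)
               for k in range(N + 1) if k > N * alpha)

def chebychev_bound(N, p, alpha):
    """The bound sigma^2/(alpha - mu)^2 with mu = p, sigma^2 = p(1-p)/N."""
    return p * (1 - p) / (N * (alpha - p) ** 2)

N, p, alpha = 200, 0.05, 0.15
tail, bound = exact_tail(N, p, alpha), chebychev_bound(N, p, alpha)
# The bound holds, but the true tail is exponentially smaller; that gap
# is what the refined large deviations analysis quantifies.
```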
For any fixed value of $x$, the conditional\nlaw of $L^{(N)}_{T-}$ given that $\\xi^\\text{S}=x$ is the law of\n$\\tfrac{1}{N}\\sum_{n=1}^N\\chi_A(\\xi^\\text{I}_n,x)$, an average of $N$ i.i.d. indicators;\nthus the conditional variance of $L^{(N)}_{T-}$ given that $\\xi^\\text{S}=x$\nis at most of order $\\tfrac{1}{4N}$. Hopefully, when we reinsert\nthe systemic randomness, the variance of $L^{(N)}$ will still be small,\nand we will indeed have an investment-grade tranche.\n\nIn fact, we can do better than Chebychev's inequality.\nBy again conditioning on $\\xi^\\text{S}$, we can write that\n\\begin{equation*} \\mathbb{P}\\left\\{ L^{(N)}_{T-}>\\alpha\\right\\} = \\mathbb{E}\\left[\\mathbb{P}\\left\\{ L^{(N) }_{T-}>\\alpha\\big|\\xi^\\text{S}\\right\\} \\right]. \\end{equation*}\nThus the tranche will be investment-grade if $\\mathbb{P}\\left\\{ L^{(N) }_{T-}>\\alpha\\big|\\xi^\\text{S}=x\\right\\}$ is small for ``most'' values of $x$ (see Remark \\ref{R:systemic}). As mentioned above,\nhowever, we know the law of $L^{(N)}_{T-}$ conditioned on $\\xi^\\text{S}$.\nNamely, \n\\begin{equation*} \\mathbb{P}\\left\\{ L^{(N) }_{T-}>\\alpha\\big|\\xi^\\text{S}=x\\right\\} = \n\\mathbb{P}\\left\\{ \\frac{1}{N}\\sum_{n=1}^N\\chi_A(\\xi^\\text{I}_n,x)>\\alpha\\right\\}. \\end{equation*}\nThis then clearly motivates a natural two-step approach.\nOur first step is to condition on the value of the systemic randomness \n(which we may think of as fixing a ``state of the world'' or a ``regime'') and concentrate\non how rare events occur due to idiosyncratic randomness (i.e., to effectively \\emph{suppress} the systemic randomness). It will turn out that\nthis is in itself a fairly involved calculation. Nevertheless, it is\nconnected with a classic problem in large deviations theory---\\emph{Sanov's theorem}. With this in hand, we should\nthen be able to return to the original problem and average over the systemic randomness (in Subsection \\ref{S:Correlated}).
Some of the\nfiner details of these effects of correlation will appear in sequels\nto this paper. Here we will restrict our interest in the effects of correlation\nto a very simple model (which is hopefully nevertheless illustrative). \n\nDefine $I\\overset{\\text{def}}{=} [0,\\infty]$ and endow $I$ with its usual topology (under which it is Polish) and its usual ordering\\footnote{$I$ is the collection of nonnegative real numbers and a non-real ``point'',\nwhich we label as $\\infty$. Define $\\wp:[0,\\pi\/2]\\to I$ as $\\wp(t) \\overset{\\text{def}}{=} \\tan(t)$ for $t\\in [0,\\pi\/2)$, and define $\\wp(\\pi\/2)\\overset{\\text{def}}{=} \\infty$. Then $\\wp$\nis a bijection. The topology\nand ordering of $I$ is that given by pushing the topology and ordering\nof $[0,\\pi\/2]$ forward through $\\wp$. Thus $I$\nis Polish and in fact compact.}; each of the default times is an $I$-valued random variable. Since we want to consider a countable\ncollection of default times, we will take\nour event space to be $\\Omega\\overset{\\text{def}}{=} I^\\mathbb{N}$ and\\footnote{As usual, for any topological space $\\mathsf{X}$, $\\mathscr{B}(\\mathsf{X})$ is the Borel sigma-algebra of subsets of $\\mathsf{X}$, and $\\mathscr{P}(\\mathsf{X})$ is the\ncollection of probability measures on $(\\mathsf{X},\\mathscr{B}(\\mathsf{X}))$.} we will take $\\mathscr{F}\\overset{\\text{def}}{=} \\mathscr{B}(I^\\mathbb{N})$.\nFix next $\\mu\\in \\mathscr{P}(I)$; we will want all of the names to be identically distributed with common law $\\mu$.\nTo reflect our initial working assumption\nthat the names are independent, we now let the risk-neutral probability $\\mathbb{P}\\in \\mathscr{P}(I^\\mathbb{N})$ be defined\nby requiring that\n\\begin{equation*} \\mathbb{P}\\left(\\bigcap_{n=1}^N\\{\\tau_n\\in A_n\\}\\right) = \\prod_{n=1}^N \\mu(A_n)
\\end{equation*}\nfor all $N\\in \\mathbb{N}$ and all $\\{A_n\\}_{n\\in \\mathbb{N}}\\subset \\mathscr{B}(I)$.\nWe also define, in the usual way,\n\\begin{equation*} F(t) \\overset{\\text{def}}{=} \\mu[0,t]. \\qquad t\\in I \\end{equation*}\nIn principle, one can recover $F$ from prices of credit default swaps.\n\n\\begin{example} Our setup includes both the Merton model and the reduced form model. For the reduced form model, let $\\lambda:(0,\\infty)\\to (0,\\infty)$ be the hazard rate and set\n\\begin{equation*} f(t) = \\lambda(t)\\exp\\left[-\\int_{s=0}^t \\lambda(s)ds\\right] \\qquad t\\in (0,\\infty)\\end{equation*}\nand let $F$ have density $f$.\nOn the other hand, for the Merton model with stock volatility $\\sigma$, risk-neutral drift $\\theta$, initial valuation $1$, and bankruptcy barrier $K\\in (0,1)$, we would have\n\\begin{equation*} f(t) = \\frac{\\ln (1\/K)}{\\sqrt{2\\pi \\sigma^2 t^3}}\\exp\\left[-\\frac{1}{2\\sigma^2 t}\\left(\\left(\\theta-\\frac{\\sigma^2}{2}\\right)t+\\ln \\frac{1}{K}\\right)^2\\right]. \\qquad t\\in (0,\\infty)\\end{equation*}\nAgain define $F$ by integrating $f$.\n\\end{example}\n\nWe can then rewrite the notional loss process as \n\\begin{equation*} L^{(N)}_t = \\frac{1}{N}\\sum_{n=1}^N\\chi_{[0,t]}(\\tau_n) = \\nu^{(N)}[0,t] \\end{equation*}\nwhere $\\nu^{(N)}$ is the empirical distribution of the $\\tau_n$'s; i.e.,\n\\begin{equation}\\label{E:nudef} \\nu^{(N)} = \\frac{1}{N}\\sum_{n=1}^N \\delta_{\\tau_n}. \\end{equation}\nWe point out that $\\nu^{(N)}$ is a random element of $\\mathscr{P}(I)$ (i.e., a random measure\\footnote{Since the map $x\\mapsto \\delta_x$ is a measurable map from $I$ to $\\mathscr{P}(I)$, each map $\\omega\\mapsto \\delta_{\\tau_n(\\omega)}$ is a measurable map from $\\Omega$ to $\\mathscr{P}(I)$. Thus for each $N$, the map $\\omega\\mapsto (\\delta_{\\tau_1(\\omega)},\\delta_{\\tau_2(\\omega)}\\dots \\delta_{\\tau_N(\\omega)})$ is a measurable map from $\\Omega$ to $(\\mathscr{P}(I))^N$.
Recalling the definition of the weak\ntopology as integration against continuous bounded functions, we then see that\nthe map $(\\mu_1,\\mu_2\\dots \\mu_N)\\mapsto \\frac{1}{N}\\sum_{n=1}^N \\mu_n$ is continuous and thus measurable as a map from $(\\mathscr{P}(I))^N$ to $\\mathscr{P}(I)$. Hence $\\nu^{(N)}$ is indeed a $\\mathscr{P}(I)$-valued random variable.}). This formulation\nis the starting point for our analysis and will lead to several insights. In particular, the (weak) law of large numbers implies that for each $t>0$, \n\\begin{equation} \\label{E:convprob}\\lim_{N\\to \\infty}L^{(N)}_t=F(t). \\qquad \\text{(in probability)}\\end{equation}\nMore generally, $\\nu^{(N)}$ tends to $\\mu$ (in the Prohorov topology on $\\mathscr{P}(I)$); for every $\\varepsilon>0$,\n\\begin{equation*} \\lim_{N\\nearrow \\infty}\\mathbb{P}\\left\\{d_{\\mathscr{P}(I)}(\\nu^{(N)},\\mu)\\ge \\varepsilon\\right\\}=0 \\end{equation*}\nwhere $d_{\\mathscr{P}(I)}$ is the Prohorov metric \\cite{MR88a:60130}.\n\nConsider now an investment-grade tranche; i.e., a senior or super-senior tranche. The attachment point for such a tranche should be set so that it is unlikely to suffer any defaults; i.e., it is unlikely that \n$\\textbf{P}^{\\text{prot}}_N$ is nonzero. Clearly \n\\begin{equation}\\label{E:support} \\left\\{\\textbf{P}^{\\text{prot}}_N\\not = 0\\right\\} = \\{L^{(N)}_{T-}>\\alpha\\} = \\{\\nu^{(N)}[0,T)>\\alpha\\}, \\end{equation}\nand comparing this with \\eqref{E:convprob}, we see that a tranche will be investment-grade if and only if an obvious requirement holds:\n\\begin{assumption}[Investment-grade]\\label{A:IG} We assume that\n\\begin{equation*} \\alpha>F(T-).\\end{equation*}\n\\end{assumption}\n\\noindent In this case, the valuation of such a tranche should depend in large part on how ``rare'' it is that \n$L^{(N)}_{T-}>\\alpha$. As $N$ becomes large, \\eqref{E:convprob} means that in fact it becomes less and less likely that $L^{(N)}_{T-}>\\alpha$.
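In the independent-names case, the probability of this rare event is an explicit binomial tail, and one can watch both its decay in $N$ and the stabilization of the per-name exponential rate $-\\tfrac{1}{N}\\ln \\mathbb{P}$, anticipating the entropy rate identified below. A sketch, assuming independent defaults with common probability $p=F(T-)$ (helper names ours):

```python
from math import comb, log

def tail_prob(N, p, alpha):
    """Exact P{L^{(N)}_{T-} > alpha} = P{Bin(N, p) > N*alpha} for N
    independent names with common default probability p = F(T-)."""
    return sum(comb(N, k) * p**k * (1 - p)**(N - k)
               for k in range(N + 1) if k > N * alpha)

p, alpha = 0.05, 0.15                  # alpha > F(T-), as in Assumption A:IG
Ns = (50, 100, 200, 400)
probs = [tail_prob(N, p, alpha) for N in Ns]
rates = [-log(q) / N for q, N in zip(probs, Ns)]
# probs shrinks rapidly with N, while the per-name rates settle down,
# suggesting P ~ exp(-N * const) up to subexponential prefactors.
```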
Note also that since $\\alpha<1$, this assumption implies that $F(T-)<1$. This is natural; if $F(T-)=1$,\nthen all defaults must have occurred before $T$, essentially precluding the possibility of constructing an investment-grade tranche.\n\nCombining our comments after \\eqref{E:SN} about the structure of $\\textbf{P}^{\\text{prot}}_N$ and\n\\eqref{E:support}, we have that\n\\begin{equation}\\label{E:bounded} 0\\le \\textbf{P}^{\\text{prot}}_N\\le \\chi_{\\{\\nu^{(N)}[0,T)>\\alpha\\}}. \\end{equation}\nHence for an investment-grade tranche, $\\mathbb{E}[\\textbf{P}^{\\text{prot}}_N]$ is small if it is unlikely that $\\nu^{(N)}[0,T)>\\alpha$ (in other words, we don't have any competition between ``big'' values of $\\textbf{P}^{\\text{prot}}_N$ and ``small'' sets).\nNote also that \\eqref{E:convprob} implies that $\\lim_{N\\to \\infty}\\bar L^{(N)}_{T-}=0$ (in probability) so that in fact\n\\begin{equation}\\label{E:Premas}\\lim_{N\\to \\infty} \\mathbb{E}[\\textbf{P}^{\\text{prem}}_N] = \\sum_{t\\in \\mathcal{T}} e^{-\\textsf{R} t}. \\end{equation}\nIn other words, if losses are unlikely, all of the premiums will most likely be paid.\nThus the nontrivial part of $S_N$ comes from the protection leg, whose value is small.\n\nLet's now step into the world of large deviations, which tells us how to \nstudy rare events. 
The asymptotics of $\\nu^{(N)}$ is exactly the subject of \n\\emph{Sanov's} theorem \\cite{MR1619036}, which states that $\\nu^{(N)}$ satisfies\na large deviations principle whose rate function is relative entropy with respect to $\\mu$; i.e.,\n\\begin{equation*} H(\\mu'|\\mu) = \\begin{cases} \\int_{t\\in I}\\ln \\frac{d\\mu'}{d\\mu}(t)\\mu'(dt) &\\text{if $\\mu'\\ll \\mu$} \\\\\n\\infty &\\text{else.}\\end{cases}\\end{equation*}\nInformally, for any $A\\in \\mathscr{B}(\\mathscr{P}(I))$,\n\\begin{equation}\\label{E:san} \\mathbb{P}\\left\\{\\nu^{(N)}\\in A\\right\\} \\overset{N\\nearrow \\infty}{\\asymp} \\exp\\left[-N\\inf_{\\mu'\\in A}H(\\mu'|\\mu)\\right]. \\end{equation}\nSince large deviations is not in the mainstream of financial mathematics (see, however, \\cite{MR1644496}),\nwe have summarized some of its foundations in Subsection \\ref{S:LD}. Combining \\eqref{E:bounded} with Sanov's theorem, we conjecture that for large $N$\n\\begin{equation*} \\mathbb{E}\\left[\\textbf{P}^{\\text{prot}}_N\\right] \\le \\mathbb{P}\\left\\{\\nu^{(N)}[0,T)>\\alpha\\right\\} \\overset{N\\nearrow \\infty}{\\asymp} \\exp\\left[-N \\mathfrak{I}(\\alpha)\\right] \\end{equation*}\nwhere\n\\begin{equation*} \\mathfrak{I}(\\alpha) \\overset{\\text{def}}{=} \\inf\\left\\{H(\\mu'|\\mu): \\mu'[0,T)\\ge \\alpha\\right\\}.
\\end{equation*}\nAlthough this looks intimidating (it is an infinite-dimensional minimization problem), in fact it has an easy solution and an explicit minimizer.\nFor $\\alpha_1$ and $\\alpha_2$ in $[0,1]$, define\n\\begin{equation*} \\hbar(\\alpha_1,\\alpha_2) \\overset{\\text{def}}{=} \\begin{cases} \\alpha_1\\ln \\frac{\\alpha_1}{\\alpha_2} + (1-\\alpha_1)\\ln \\frac{1-\\alpha_1}{1-\\alpha_2} &\\text{for $\\alpha_1$ and $\\alpha_2$ in $(0,1)$} \\\\\n\\ln \\frac{1}{\\alpha_2} &\\text{for $\\alpha_1=1$, $\\alpha_2\\in (0,1)$} \\\\\n\\ln \\frac{1}{1-\\alpha_2} &\\text{for $\\alpha_1=0$, $\\alpha_2\\in [0,1)$} \\\\\n\\infty &\\text{else.}\\end{cases}\\end{equation*}\n\\begin{proposition}\\label{P:explicit} We have that\n\\begin{equation*} \\mathfrak{I}(\\alpha) = \\hbar(\\alpha,F(T-)) = H(\\tilde \\mu^*_\\alpha|\\mu), \\end{equation*}\nwhere\n\\begin{equation}\\label{E:dacc} \\tilde \\mu^*_\\alpha(A) = \\mu(A\\cap [0,T))\\frac{\\alpha}{F(T-)} + \\mu(A\\cap [T,\\infty])\\frac{1-\\alpha}{1-F(T-)} \\end{equation}\nfor all $A\\in \\mathscr{B}(I)$.\\end{proposition}\n\\noindent The proof of this is given in Section \\ref{S:Proofs}.\nIn fact, the formula for $\\mathfrak{I}$ is what we would expect from considering only $L^{(N)}_{T-}$. We can think of $L^{(N)}_{T-}$ as counting the normalized number of\nheads in a collection of i.i.d. coin flips, where the probability of heads\n(i.e., defaults before time $T$)\nfor each coin is $F(T-)$.
The likelihood that the normalized number of heads\nis approximately $\\alpha$ is given, via Sanov's theorem, by relative entropy\nof a coin flip with bias $\\alpha$ with respect to a coin with bias $F(T-)$\n(see the comments after Theorem \\ref{T:measurechange}).\n\nWe are almost ready to state our main theorem.\nWe need one last assumption.\n\\begin{assumption}\\label{A:density} We assume that $F(T')<F(T-)$ for all $T'\\in [0,T)$.\\end{assumption}\n\\noindent Firstly, this implies that $F(T-)>0$ (since always $F(T')\\ge 0$); this is natural, since if $F(T-)=0$, then there is no possibility of any defaults by time $T$.\nSecondly, if $F$ is flat right before $T$, then any defaults by time $T$ must in fact have occurred earlier, so we can effectively reduce the time interval of interest to a smaller one.\nBy disallowing such a flat, we ensure that there is some likelihood of\ndefaults right before $T$, allowing us to carry out a quantitative analysis\nof $L^{(N)}$ right before time $T$ (see the proof of Lemma \\ref{L:TTimes}).\n\nThe goal of this paper is to formalize the asymptotics conjectured above.\nSet\n\\begin{equation}\\label{E:vkapdef} \\varkappa\\overset{\\text{def}}{=} \\ln \\left(\\frac{\\alpha}{1-\\alpha}\\frac{1-F(T-)}{F(T-)}\\right) = \\ln \\left(\\frac{\\frac{1}{F(T-)}-1}{\\frac{1}{\\alpha}-1}\\right). \\end{equation}\nIn light of Assumption \\ref{A:IG}, the second formula ensures that $\\varkappa>0$.\n\\begin{theorem}[Main]\\label{T:Main} We have that\n\\begin{multline*} \\mathbb{E}\\left[\\textbf{P}^{\\text{prot}}_N\\right] = \\frac{e^{-\\textsf{R} T}\\exp\\left[-\\varkappa\\left(\\lceil N\\alpha \\rceil -N\\alpha\\right)\\right]}{N^{3\/2}(\\beta-\\alpha)\\sqrt{2\\pi\\alpha(1-\\alpha)}}\\left\\{ \\frac{\\alpha(1-\\alpha)F(T-)(1-F(T-))}{(\\alpha-F(T-))^2} \\right.\\\\\n\\left.
+\\left(\\lceil N\\alpha \\rceil-N\\alpha\\right)\\frac{\\alpha(1-F(T-))}{\\alpha-F(T-)} + \\mathcal{E}(N)\\right\\} \\exp\\left[-N \\mathfrak{I}(\\alpha)\\right] \\end{multline*}\nwhere $\\lim_{N\\to \\infty}\\mathcal{E}(N)=0$.\\end{theorem}\n\\noindent We can recognize a number of effects here. Firstly, the $e^{-\\textsf{R} T}$\nterm reflects the fact that while by assumption losses in the CDO are unlikely, the least unlikely way for them to occur is right before expiry. The term $\\beta-\\alpha$ in the denominator reflects the tranche width; note that we are looking at large $N$-approximations here; if we were to first take asymptotics as the tranche\nwidth tends to zero, we would probably capture some different effects\n(but we expect that the exponentially small entropy term would still appear).\nThe $\\sqrt{2\\pi\\alpha(1-\\alpha)}$ reflects something like a Gaussian correction\nterm (it directly comes from the calculations of Section \\ref{S:Proofs}). The $N^{3\/2}$\nis a combination of two things. Part of it ($N^{1\/2}$) also comes from the Gaussian correction. The rest ($N$) comes from the actual size of the protection\nleg payments $\\textbf{P}^{\\text{prot}}_N$ once the attachment point has been reached.\nThe unsightly term $\\lceil N\\alpha \\rceil - N\\alpha$ comes from an unavoidable granularity in our problem; the loss process can only take on values in $\\mathbb{Z}\/N$. We expect this granularity to disappear if the notional loss takes on a continuum of values. This would be the case, for example, with random recoveries (cf. \\cite{AndersonSidenius}). Of course, by taking $\\alpha$ to be a multiple of $1\/N$, we can make this granularity disappear---at the cost of making our calculations\nlook more restrictive than they actually are.\n\nFinally, we explicitly point out that our analysis is \\emph{asymptotic} as\nthe number $N$ of names becomes large. We cannot say anything specific about\nany finite $N$. 
This is analogous to the law of large numbers; the law of\nlarge numbers cannot, for example, give information about any finite number\nof coin flips, but rather is useful in framing one's thoughts when one\nhas ``many'' coin flips.\n\nCombining \\eqref{E:Premas} and Theorem \\ref{T:Main}, we see that the asymptotic behavior of the premium $S_N$ is given by\n\\begin{equation}\\label{E:Sas} \\begin{aligned} S_N &= \\frac{1}{N^{3\/2}}\\frac{e^{-\\textsf{R} T}\\exp\\left[-\\varkappa\\left(\\lceil N\\alpha \\rceil -N\\alpha\\right)\\right]}{\\left\\{ \\sum_{t\\in \\mathcal{T}} e^{-\\textsf{R} t}\\right\\} (\\beta-\\alpha)\\sqrt{2\\pi\\alpha(1-\\alpha)}}\\left\\{ \\frac{\\alpha(1-\\alpha)F(T-)(1-F(T-))}{(\\alpha-F(T-))^2}\\right.\\\\\n&\\qquad \\left. +\\left(\\lceil N\\alpha \\rceil-N\\alpha\\right)\\frac{\\alpha(1-F(T-))}{\\alpha-F(T-)} + \\mathcal{E}'(N)\\right\\} \\exp\\left[-N \\mathfrak{I}(\\alpha)\\right] \\end{aligned}\\end{equation}\nwhere $\\lim_{N\\to \\infty}\\mathcal{E}'(N)=0$. \n\nTo close this section, we plot some ``theoretical'' prices as a function of the number $N$. By ``theoretical'', we mean the\nquantity \n\\begin{multline*} S^*_N \\overset{\\text{def}}{=} \\frac{\\exp\\left[-\\varkappa\\left(\\lceil N\\alpha \\rceil -N\\alpha\\right)\\right]}{N^{3\/2}\\sqrt{\\alpha(1-\\alpha)}}\\left\\{ \\frac{\\alpha(1-\\alpha)F(T-)(1-F(T-))}{(\\alpha-F(T-))^2} \\right.\\\\\n\\left.+\\left(\\lceil N\\alpha \\rceil-N\\alpha\\right)\\frac{\\alpha(1-F(T-))}{\\alpha-F(T-)}\\right\\} \\exp\\left[-N \\mathfrak{I}(\\alpha)\\right] \\end{multline*}\nWe have here set $\\mathcal{E}'\\equiv 0$ in \\eqref{E:Sas} and have removed the prefactor\n\\begin{equation*} \\frac{e^{-\\textsf{R} T}}{\\left\\{ \\sum_{t\\in \\mathcal{T}} e^{-\\textsf{R} t}\\right\\} (\\beta-\\alpha)\\sqrt{2\\pi}}. 
\\end{equation*}\n\\begin{figure}[b]\n\\includegraphics[width=5in, height=3in]{prices.pdf}\n\\caption{$S^*_N$ for several values of $\\alpha$}\n\\label{fig:prices}\n\\end{figure}\n\n\\subsection{Correlation}\\label{S:Correlated}\nWe can now introduce a simple model of correlation without too much trouble. Assume that $\\xi^\\text{S}$ takes values in a finite set $\\mathsf{X}$. Fix $\\{p(x);\\, x\\in \\mathsf{X}\\}$ such that $\\sum_{x\\in \\mathsf{X}}p(x)=1$ and $p(x)>0$ for all $x\\in \\mathsf{X}$;\nwe will assume that $\\xi^\\text{S}$ takes on the value $x$ with probability $p(x)$. We can think of the set $\\mathsf{X}$ as the collection of possible states of the world. If we believe in \\eqref{E:structural}, we should then be in the previous\ncase if we condition on the various values of $\\xi^\\text{S}$. To formalize this,\nfix a family $\\{\\mu(\\cdot,x)\\}_{x\\in \\mathsf{X}}\\subset \\mathscr{P}(I)$. Fix a probability measure $\\mathbb{P}$ such that\n\\begin{equation}\\label{E:corrprob} \\mathbb{P}\\left(\\bigcap_{n=1}^N\\{\\tau_n\\in A_n\\}\\right) = \\sum_{x\\in\\mathsf{X}}\\left\\{ \\prod_{n=1}^N \\mu(A_n,x)\\right\\} p(x) \\end{equation}\nfor all $N\\in \\mathbb{N}$ and all $\\{A_n\\}_{n\\in \\mathbb{N}}\\subset \\mathscr{B}(I)$.\n\nTo adapt the previous calculations to this case, we need the analogues of Assumptions \\ref{A:IG} and \\ref{A:density}.\nNamely, we need that $\\max_{x\\in \\mathsf{X}}\\mu([0,T),x)<\\alpha$ and also that $\\mu([0,T'],x)<\\mu([0,T],x)$ for all $T'\\in [0,T)$ and all $x\\in \\mathsf{X}$.\n\n\\begin{remark}\\label{R:systemic} The requirement that $\\max_{x\\in \\mathsf{X}}\\mu([0,T),x)<\\alpha$ is a particularly unrealistic one. It means that the tranche losses will be rare\nfor \\emph{all} values of the systemic parameter.
In any truly applicable\nmodel, the losses will come from a combination of bad values of the\nsystemic parameter and from tail events in the pool of idiosyncratic randomness\n(i.e., we need to balance the size of $\\mathbb{P}\\left\\{ L^{(N) }_{T-}>\\alpha\\big|\\xi^\\text{S}=x\\right\\}$ against the distribution of $\\xi^\\text{S}$).\nOne can view our effort here as a study which focusses primarily on tail events in the pool of idiosyncratic randomness.\nAny structural model which attempts to study losses due to both idiosyncratic and systemic randomness will most likely involve calculations which are similar\nin a number of ways to ours here. We will explore this issue elsewhere.\n\\end{remark}\n\nFor each $x\\in \\mathsf{X}$, define\n\\begin{equation*} \\varkappa_x\\overset{\\text{def}}{=} \\ln \\left(\\frac{\\alpha}{1-\\alpha}\\frac{1-\\mu([0,T),x)}{\\mu([0,T),x)}\\right) = \\ln \\left(\\frac{\\frac{1}{\\mu([0,T),x)}-1}{\\frac{1}{\\alpha}-1}\\right). \\end{equation*}\nThen\n\\begin{multline*} \\mathbb{E}\\left[\\textbf{P}^{\\text{prot}}_N\\right] = \\frac{e^{-\\textsf{R} T}}{N^{3\/2}(\\beta-\\alpha)\\sqrt{2\\pi\\alpha(1-\\alpha)}}\\sum_{x\\in \\mathsf{X}}\\left(\\exp\\left[-\\varkappa_x\\left(\\lceil N\\alpha \\rceil -N\\alpha\\right)\\right]\\right.\\\\\n\\times \\left.\\left\\{ \\frac{\\alpha(1-\\alpha)\\mu([0,T),x)\\left(1-\\mu([0,T),x)\\right)}{\\left(\\alpha-\\mu([0,T),x)\\right)^2} +\\left(\\lceil N\\alpha \\rceil-N\\alpha\\right)\\frac{\\alpha\\left(1-\\mu([0,T),x)\\right)}{\\alpha-\\mu([0,T),x)} + \\mathcal{E}_x(N)\\right\\} \\right.\\\\\n\\left.\\times \\exp\\left[-N \\hbar(\\alpha,\\mu([0,T),x))\\right]p(x)\\right) \\end{multline*}\nwhere $\\lim_{N\\to \\infty}\\mathcal{E}_x(N)=0$ for all $x\\in \\mathsf{X}$.
Similarly we have that\n\\begin{multline*} S_N = \\frac{1}{N^{3\/2}}\\frac{e^{-\\textsf{R} T}}{\\left\\{ \\sum_{t\\in \\mathcal{T}} e^{-\\textsf{R} t}\\right\\} (\\beta-\\alpha)\\sqrt{2\\pi\\alpha(1-\\alpha)}}\\sum_{x\\in \\mathsf{X}}\\left(\\exp\\left[-\\varkappa_x\\left(\\lceil N\\alpha \\rceil -N\\alpha\\right)\\right]\\right.\\\\\n\\left. \\times \\left\\{ \\frac{\\alpha(1-\\alpha)\\mu([0,T),x)\\left(1-\\mu([0,T),x)\\right)}{\\left(\\alpha-\\mu([0,T),x)\\right)^2}+\\left(\\lceil N\\alpha \\rceil-N\\alpha\\right)\\frac{\\alpha\\left(1-\\mu([0,T),x)\\right)}{\\alpha-\\mu([0,T),x)} + \\mathcal{E}'_x(N)\\right\\}\\right.\\\\\n\\left.\\times \\exp\\left[-N \\hbar(\\alpha,\\mu([0,T),x))\\right]p(x)\\right) \\end{multline*}\nwhere $\\lim_{N\\to \\infty}\\mathcal{E}'_x(N)=0$ for all $x\\in \\mathsf{X}$.\nIf we further assume that there is a unique $x^*\\in \\mathsf{X}$ such that $\\min_{x\\in \\mathsf{X}} \\hbar(\\alpha,\\mu([0,T),x))=\\hbar(\\alpha,\\mu([0,T),x^*))$, we then\nhave that\n\\begin{align*} \\mathbb{E}\\left[\\textbf{P}^{\\text{prot}}_N\\right] &= \\frac{e^{-\\textsf{R} T}}{N^{3\/2}(\\beta-\\alpha)\\sqrt{2\\pi\\alpha(1-\\alpha)}}\\exp\\left[-\\varkappa_{x^*}\\left(\\lceil N\\alpha \\rceil -N\\alpha\\right)\\right]\\\\\n&\\qquad \\times \\left\\{ \\frac{\\alpha(1-\\alpha)\\mu([0,T),x^*)\\left(1-\\mu([0,T),x^*)\\right)}{\\left(\\alpha-\\mu([0,T),x^*)\\right)^2} \n+\\left(\\lceil N\\alpha \\rceil-N\\alpha\\right)\\frac{\\alpha\\left(1-\\mu([0,T),x^*)\\right)}{\\alpha-\\mu([0,T),x^*)} + \\mathcal{E}(N)\\right\\} \\\\\n&\\qquad \\times \\exp\\left[-N \\hbar(\\alpha,\\mu([0,T),x^*))\\right]p(x^*)\\\\\nS_N &= \\frac{1}{N^{3\/2}}\\frac{e^{-\\textsf{R} T}}{\\left\\{ \\sum_{t\\in \\mathcal{T}} e^{-\\textsf{R} t}\\right\\} (\\beta-\\alpha)\\sqrt{2\\pi\\alpha(1-\\alpha)}}\\exp\\left[-\\varkappa_{x^*}\\left(\\lceil N\\alpha \\rceil -N\\alpha\\right)\\right]\\\\\n&\\qquad \\times \\left\\{
\\frac{\\alpha(1-\\alpha)\\mu([0,T),x^*)\\left(1-\\mu([0,T),x^*)\\right)}{\\left(\\alpha-\\mu([0,T),x^*)\\right)^2}+\\left(\\lceil N\\alpha \\rceil-N\\alpha\\right)\\frac{\\alpha\\left(1-\\mu([0,T),x^*)\\right)}{\\alpha-\\mu([0,T),x^*)} + \\mathcal{E}'(N)\\right\\}\\\\\n&\\qquad \\times \\exp\\left[-N \\hbar(\\alpha,\\mu([0,T),x^*))\\right]p(x^*) \\end{align*}\nwhere $\\lim_{N\\to \\infty}\\mathcal{E}(N)=0$ and $\\lim_{N\\to \\infty}\\mathcal{E}'(N)=0$.\n\n\n\nNote that we can use this methodology to approximately study Gaussian\ncorrelations. Fix a positive $M\\in \\mathbb{N}$ and define $x_i\\overset{\\text{def}}{=} \\tfrac{i}{M}$\nfor $i\\in \\{-M^2,-M^2+1\\dots M^2\\}$; set $\\mathsf{X} \\overset{\\text{def}}{=} \\{x_i\\}_{i=-M^2}^{M^2}$.\nDefine\n\\begin{equation*} \\Phi(x) \\overset{\\text{def}}{=} \\int_{t=-\\infty}^x\\frac{1}{\\sqrt{2\\pi}}\\exp\\left[-\\frac{t^2}{2}\\right]dt \\qquad x\\in \\mathbb{R} \\end{equation*}\nas the standard Gaussian cumulative distribution function.\nDefine\n\\begin{equation*} p(x_i) \\overset{\\text{def}}{=} \\begin{cases} \\Phi\\left(x_i+\\frac{1}{2M}\\right)-\\Phi\\left(x_i-\\frac{1}{2M}\\right) &\\qquad \\text{if $i\\in \\{-M^2+1,\\dots M^2-1\\}$} \\\\\n\\Phi\\left(x_{-M^2}+\\frac{1}{2M}\\right) &\\qquad \\text{if $i=-M^2$}\\\\\n1-\\Phi\\left(x_{M^2}-\\frac{1}{2M}\\right) &\\qquad \\text{if $i=M^2$}\\end{cases}.\\end{equation*}\nIf we have a pool of $N$ names with common probability of default $p$ by time $T$ and we want to consider a Gaussian copula with\ncorrelation $\\rho>0$ (the case $\\rho<0$ can be dealt with similarly),\nwe would take the $\\mu(\\cdot,x_i)$'s such that\n\\begin{equation*} \\mu([0,T),x_i) \\overset{\\text{def}}{=} \\Phi\\left(\\frac{\\Phi^{-1}(p)-\\rho x_i}{\\sqrt{1-\\rho^2}}\\right). \\end{equation*}\nThis is related to the calculations of \\cite{Glasserman} and \\cite{MR2384674}, which are asymptotically related to our calculations.
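This discretization needs only the standard Gaussian distribution function and its quantile function; here is a sketch using the Python standard library's \\texttt{statistics.NormalDist} (the function names are ours). As a sanity check, averaging the conditional default probabilities $\\mu([0,T),x_i)$ against the weights $p(x_i)$ should approximately recover the unconditional default probability $p$:

```python
from statistics import NormalDist

Phi, Phi_inv = NormalDist().cdf, NormalDist().inv_cdf

def systemic_grid(M):
    """Grid x_i = i/M, i in {-M^2, ..., M^2}, with discretized Gaussian
    weights p(x_i): interior points get the mass of a (1/M)-wide cell,
    the two endpoints absorb the corresponding Gaussian tails."""
    xs = [i / M for i in range(-M * M, M * M + 1)]
    ps = [Phi(xs[0] + 1 / (2 * M))]                                # left tail
    ps += [Phi(x + 1 / (2 * M)) - Phi(x - 1 / (2 * M)) for x in xs[1:-1]]
    ps += [1 - Phi(xs[-1] - 1 / (2 * M))]                          # right tail
    return xs, ps

def conditional_default_prob(p, rho, x):
    """mu([0,T), x) = Phi((Phi^{-1}(p) - rho*x) / sqrt(1 - rho^2))."""
    return Phi((Phi_inv(p) - rho * x) / (1.0 - rho * rho) ** 0.5)

xs, ps = systemic_grid(10)
mixed = sum(w * conditional_default_prob(0.05, 0.3, x)
            for x, w in zip(xs, ps))   # close to the unconditional p = 0.05
```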
We shall explore the connection with these two papers elsewhere. We note, by way of contrast with \cite{Glasserman} and \cite{MR2384674}, that our efforts give a good picture of the \emph{dynamics} of the loss process prior to expiry.\nWe also note that our model of \eqref{E:corrprob} is entirely comfortable with non-Gaussian correlation. Note also that one could (by discretization) allow the systemic parameter $\xi^\text{S}$ to be path-valued.\n\n\subsection{Large Deviations}\label{S:LD}\nWe shall here give a very short summary of the main ideas of large deviations; see \cite{MR1619036} for a comprehensive treatment. The basic observation behind\nthe theory is that a sum of exponentials behaves like the largest-growing\nexponential. For example,\n\begin{equation*} e^{-3N} + e^{7N} + e^{4N} = e^{7N}\left\{ 1 + e^{-10N} + e^{-3N}\right\} \asymp e^{7N}. \end{equation*}\nHere ``$\asymp$'' means ``having the same exponential growth''; in other words,\n$A_N\asymp B_N$ if $\lim_{N\to \infty}\tfrac1N\ln A_N=\lim_{N\to \infty}\tfrac1N\ln B_N$.\n\emph{Laplace asymptotics} extends this to integrals. This is a relevant place\nto start the study of rare events if we consider a collection $\{X_N\}_{N\in \mathbb{N}}$\nof random variables whose laws are of the form\n\begin{equation}\label{E:laplace} \mathbb{P}\{X_N\in A\} \overset{\text{def}}{=} \int_{x\in A}c_N\exp\left[-N \phi(x)\right]dx \qquad A\in \mathscr{B}(\mathbb{R}) \end{equation}\nfor some $\phi\in C(\mathbb{R})$ and some normalization constant $c_N$ (e.g., if we take $\phi(x) = \tfrac12(x-1)^2$ and $c_N = 1\/\sqrt{2\pi\/N}$, then $X_N$ will be a\nnormal random variable with mean $1$ and variance $\tfrac{1}{N}$). 
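A two-line numerical check of the display above (Python; the log-sum-exp rewriting is ours, to avoid floating-point overflow for large $N$):

```python
import math

def rate(N):
    # (1/N) * ln(e^{-3N} + e^{7N} + e^{4N}), computed via
    # ln(e^{-3N} + e^{7N} + e^{4N}) = 7N + ln(1 + e^{-10N} + e^{-3N})
    return (7 * N + math.log(1 + math.exp(-10 * N) + math.exp(-3 * N))) / N

for N in (1, 10, 100):
    print(N, rate(N))    # decreases toward the dominant exponent 7
```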
If we\nassume that $\phi$ has nice enough growth properties (so that the integrals\nin \eqref{E:laplace} are well-defined and $c_N$ has subexponential growth),\nthen Laplace asymptotics states that\n\begin{equation}\label{E:expasymp} \mathbb{P}\{X_N\in A\} \asymp \exp\left[-N \inf_{x\in A}\phi(x)\right] \end{equation}\nfor ``nice'' enough sets $A$. By taking $A=\mathbb{R}$, we see that\nwe must have that $\inf_{x\in \mathbb{R}}\phi(x)=0$. If this minimum is achieved at a single point $x^*$, then by taking $A$ as the complement of a neighborhood of $x^*$\nwe have that $X_N\to x^*$ in probability, so $\{X_N\in A\}$ is a rare\nevent for any nice enough set $A$ not containing $x^*$.\n\nOne of the main aspects of large deviations theory is something of an inverse problem. Can we have \eqref{E:expasymp} even without \eqref{E:laplace}?\nIn some cases, yes. Fix $\theta\in \mathbb{R}$\nand consider the limiting rate of growth of the logarithmic moment\ngenerating function; we have that\n\begin{equation*} \lim_{N\to \infty}\frac1N\ln \mathbb{E}\left[\exp\left[\theta N X_N\right]\right] = \lim_{N\to \infty}\frac1N\ln \int_{x\in \mathbb{R}}c_N\exp\left[N\left\{ \theta x-\phi(x)\right\}\right]dx = \sup_{x\in \mathbb{R}}\left\{ \theta x-\phi(x)\right\}. \end{equation*}\nThe key realization is that the right-hand side, which we denote by $M(\theta)$, is the \emph{Legendre-Fenchel transform} of $\phi$, and that if $\phi$ has nice convexity properties,\nwe can recover $\phi$ from $M$ by taking the Legendre-Fenchel transform\nagain; i.e., \n\begin{equation*} \phi(x) = \sup_{\theta\in \mathbb{R}}\left\{ \theta x-M(\theta)\right\}. \end{equation*}\nThe strength of this chain of arguments is that the moment generating function is\nwell-defined (but of course possibly infinite) regardless of whether\n$X_N$ is discrete or continuous. 
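The double-transform claim is easy to test numerically for the quadratic example above. A brute-force sketch (Python; the grid bounds are an arbitrary truncation of $\mathbb{R}$):

```python
def legendre(f, grid):
    # numerical Legendre-Fenchel transform: (Lf)(y) = sup_x { y*x - f(x) }
    return lambda y: max(y * x - f(x) for x in grid)

phi = lambda x: 0.5 * (x - 1) ** 2
grid = [i / 20 for i in range(-200, 201)]    # x and theta restricted to [-10, 10]

M = legendre(phi, grid)        # here M(theta) = theta + theta^2 / 2
phi_back = legendre(M, grid)   # a second transform recovers phi, since phi is convex

print([round(phi_back(x) - phi(x), 9) for x in (-2.0, 0.0, 1.0, 3.0)])
```

For nonconvex $\phi$ the double transform instead returns the convex hull of $\phi$, which is why the convexity caveat above matters.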
It even makes sense when $X_N$ \ntakes values in an infinite-dimensional topological linear space $\mathsf{X}$\nif we replace multiplication by $\theta$ with the action of a linear functional\non $\mathsf{X}$. The rigorous definition of a large deviations principle is as follows \cite{MR758258}. We say that $\{X_N;\, N\in \mathbb{N}\}$ (which we now assume\nto take values in a topological space $\mathsf{X}$) has a large deviations principle with rate function $\mathcal{I}:\mathsf{X}\to [0,\infty]$ if the following three requirements hold:\n\begin{itemize}\n\item For every $s\ge 0$, $\{x\in \mathsf{X}: \mathcal{I}(x)\le s\}$ is a compact subset of $\mathsf{X}$.\n\item For every open subset $G$ of $\mathsf{X}$,\n\begin{equation*} \varliminf_{N\to \infty}\frac{1}{N}\ln \mathbb{P}\{X_N\in G\} \ge - \inf_{x\in G}\mathcal{I}(x). \end{equation*}\n\item For every closed subset $F$ of $\mathsf{X}$,\n\begin{equation*} \varlimsup_{N\to \infty}\frac{1}{N}\ln \mathbb{P}\{X_N\in F\} \le - \inf_{x\in F}\mathcal{I}(x). \end{equation*}\n\end{itemize}\n\nReturning to our focus, which is Sanov's theorem applied to \eqref{E:nudef},\nwe have that for any $\phi\in C(I)$ (the dual of $\mathscr{P}(I)$),\n\begin{equation*} \lim_{N\to \infty}\frac1N\ln \mathbb{E}\left[\exp\left[N\int_{t\in I}\phi(t)\nu^{(N)}(dt)\right]\right] = \ln \int_{t\in I}e^{\phi(t)}\mu(dt) \end{equation*}\nand we can then show that\n\begin{equation}\label{E:dual} H(\mu'|\mu) = \sup_{\phi\in C(I)}\left\{ \int_{t\in I}\phi(t)\mu'(dt) - \ln \int_{t\in I}e^{\phi(t)}\mu(dt)\right\}. 
\\qquad \\mu'\\in \\mathscr{P}(I)\\end{equation}\nThis suggests that indeed we should have \\eqref{E:san} as interpreted as a large\ndeviations principle (Sanov's theorem).\n\n\\section{A Measure Transformation}\\label{S:Tilt}\nOne of the things which naturally occurs in proofs of large deviations\nprinciples is a \\emph{measure change} under which the unlikely event becomes more likely---the cost\nof this change of measure is exactly the desired exponential rate of decay (see \\cite{MR1619036}).\nLet's see what this looks like in our situation (see \\cite{MR1619036} for a more\ncomplete motivation of measure changes in large deviations). Define\n\\begin{equation*} \\phi^*_\\alpha(t) = \\ln \\frac{d\\tilde \\mu^*_\\alpha}{d\\mu}(t) = \\ln \\frac{\\alpha}{F(T-)}\\chi_{[0,T)}(t) + \\ln \\frac{1-\\alpha}{1-F(T-)}\\chi_{[T,\\infty]}(t)\\qquad t\\in I \\end{equation*}\n(note that since $\\mathfrak{I}(\\alpha)<\\infty$, $H(\\tilde \\mu^*_\\alpha|\\mu)<\\infty$, so $\\tilde \\mu^*_\\alpha \\ll \\mu$). \nIt is easy to verify that\n\\begin{equation*} H(\\tilde \\mu^*_\\alpha|\\mu) = \\int_{t\\in I}\\phi^*_\\alpha(t)\\tilde \\mu^*_\\alpha(dt) - \\ln \\int_{t\\in I}e^{\\phi^*_\\alpha(t)}\\mu(dt); \\end{equation*}\nthus $\\phi^*_\\alpha$ is the extremal in the variational representation \\eqref{E:dual} for $H(\\tilde \\mu^*_\\alpha|\\mu)$ (if we allow ourselves\nto extend the supremum over $C(I)$ to the collection of bounded measurable functions; it turns out that this is allowable).\nIn our analysis of $\\nu^{(N)}$ of \\eqref{E:nudef}, $\\phi^*_\\alpha$\nwill naturally give us an optimal way to ``tilt'' our original\nprobability measure so that it becomes likely that $\\nu^{(N)}[0,T)\\approx \\alpha$. 
The penalty for doing this is exactly $\\mathfrak{I}(\\alpha)$.\n\\begin{theorem}\\label{T:measurechange} We have that\n\\begin{equation*} \\mathbb{E}[\\textbf{P}^{\\text{prot}}_N] = I_Ne^{-N\\mathfrak{I}(\\alpha)} \\end{equation*}\nfor all positive integers $N$, where\n\\begin{equation}\\label{E:IDef} I_N\\overset{\\text{def}}{=} \\tilde \\mathbb{E}_N\\left[\\textbf{P}^{\\text{prot}}_N\\exp\\left[-\\varkappa\\gamma_N\\right]\\chi_{\\{\\gamma_N>0\\}}\\right] \\end{equation}\nwhere in turn\n\\begin{equation}\\label{E:MIX}\\begin{aligned} \\tilde \\mathbb{P}_N(A) &\\overset{\\text{def}}{=} \\mathbb{E}\\left[\\chi_A\\prod_{n=1}^N \\frac{d\\tilde \\mu^*_\\alpha}{d\\mu}(\\tau_n)\\right] \\qquad A\\in \\mathscr{F} \\\\\n\\gamma_N&= \\sum_{n=1}^N \\left\\{ \\chi_{[0,T)}(\\tau_n)-\\alpha\\right\\} =N(L_{T-}^{(N)}-\\alpha)\\end{aligned}\\end{equation}\nUnder $\\tilde \\mathbb{P}_N$, $\\{\\tau_1,\\tau_2\\dots \\tau_N\\}$ are independent and identically distributed with common law $\\tilde \\mu^*_\\alpha$.\n\\end{theorem}\n\\begin{proof} Set\n\\begin{equation*}\n\\Gamma_N = N\\left\\{ \\int_{t\\in I}\\phi^*_\\alpha(t)\\nu^{(N)}(dt)-\\int_{t\\in I}\\phi^*_\\alpha(t)\\tilde \\mu^*_\\alpha(dt)\\right\\}\\end{equation*}\nThen\n\\begin{equation*} \\mathbb{E}[\\textbf{P}^{\\text{prot}}_N] = \\frac{\\mathbb{E}\\left[\\textbf{P}^{\\text{prot}}_N \\exp\\left[-\\Gamma_N\\right]\\exp\\left[\\Gamma_N\\right]\\right]}{\\mathbb{E}\\left[\\exp\\left[\\Gamma_N\\right]\\right]}\\mathbb{E}\\left[\\exp\\left[\\Gamma_N\\right]\\right]. 
\end{equation*}\nNote that\n\begin{align*} \int_{t\in I}\phi^*_\alpha(t)\tilde \mu^*_\alpha(dt)&= \mathfrak{I}(\alpha)\\\n\exp\left[N \int_{t\in I}\phi^*_\alpha(t)\nu^{(N)}(dt)\right]&= \exp\left[\sum_{n=1}^N\ln \frac{d\tilde \mu^*_\alpha}{d\mu}(\tau_n)\right] = \prod_{n=1}^N \frac{d\tilde \mu^*_\alpha}{d\mu}(\tau_n) \\\n\mathbb{E}\left[\exp\left[\Gamma_N\right]\right]&= e^{-N\mathfrak{I}(\alpha)}\mathbb{E}\left[\prod_{n=1}^N \frac{d\tilde \mu^*_\alpha}{d\mu}(\tau_n)\right] = e^{-N\mathfrak{I}(\alpha)} \end{align*}\n(these equalities in fact reflect some of the basic properties of\nlarge deviations measure transformations and are intimately related with\nthe fact that $\phi^*_\alpha$ solves the variational problem \eqref{E:dual} associated with $H(\tilde \mu^*_\alpha|\mu)$).\nWe also clearly have that\n\begin{multline*}\frac{\mathbb{E}\left[\chi_A\exp\left[\Gamma_N\right]\right]}{\mathbb{E}\left[\exp\left[\Gamma_N\right]\right]}\\\n= \frac{\mathbb{E}\left[\chi_A\exp\left[N\int_{t\in I}\phi^*_\alpha(t)\nu^{(N)}(dt)\right]\right]}{\mathbb{E}\left[\exp\left[N\int_{t\in I}\phi^*_\alpha(t)\nu^{(N)}(dt)\right]\right]} = \tilde \mathbb{P}_N(A) \end{multline*}\nfor all $A\in \mathscr{F}$. The properties of $\tilde \mathbb{P}_N$ are clear from the explicit formula.\nWe next check that\n\begin{multline*}\Gamma_N=N\left\{ \ln \frac{\alpha}{F(T-)} \nu^{(N)}[0,T) + \ln \frac{1-\alpha}{1-F(T-)}\nu^{(N)}[T,\infty] \n-\ln \frac{\alpha}{F(T-)} \alpha - \ln \frac{1-\alpha}{1-F(T-)}(1-\alpha)\right\}\\\n=N\ln \frac{\alpha}{F(T-)} \left\{ \nu^{(N)}[0,T)-\alpha\right\} +N \ln \frac{1-\alpha}{1-F(T-)}\left\{ \nu^{(N)}[T,\infty]-(1-\alpha)\right\} = \varkappa\gamma_N. 
\end{multline*}\nFinally, we see that $\textbf{P}^{\text{prot}}_N$ is nonzero only if $\gamma_N>0$; we have explicitly included this\nin the expression for $I_N$.\n\end{proof}\n\noindent We note here that\n\begin{equation*} \tilde \mathbb{E}_N\left[L^{(N)}_{T-}\right] = \tilde \mu^*_\alpha[0,T)=\alpha\qquad \text{and}\qquad \tilde \mathbb{E}_N\left[\left(L^{(N)}_{T-}-\alpha\right)^2\right] = \frac{\alpha(1-\alpha)}{N}\le \frac{1}{4N}, \end{equation*}\nso by Chebyshev's inequality, we have that \n\begin{equation*} \lim_{N\to \infty}\tilde \mathbb{P}_N\left\{ \left|L^{(N)}_{T-}-\alpha\right|\ge \varepsilon\right\} = 0 \end{equation*}\nfor every $\varepsilon>0$. In other words, $L^{(N)}_{T-}$ tends to the attachment point $\alpha$ under the sequence $(\tilde \mathbb{P}_N)_{N\in \mathbb{N}}$ of probability measures\nand thus loss is not a rare event under $\tilde \mathbb{P}_N$ as $N\nearrow \infty$.\n\nWe also note that we need to understand the appropriate\nchange of measure for the empirical measure $\nu^{(N)}$ (as opposed to the\nchange of measure for the empirical sum $L^{(N)}_{T-}$) since $\textbf{P}^{\text{prot}}_N$ involves the dynamics of the loss process (and not just the probability of loss).\n\n\section{Asymptotic Analysis}\nWhere do we now stand? If we can show that $I_N$ has no exponential growth or decay (comparable to $e^{-N\mathfrak{I}(\alpha)}$) then we have successfully identified the asymptotic behavior of $\mathbb{E}[\textbf{P}^{\text{prot}}_N]$; we will have decomposed it into\nan exponentially small part and a prefactor which is of order 1 as $N\nearrow \infty$. 
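The measure transformation above is precisely the exponential tilting used in importance sampling, and Theorem \ref{T:measurechange} can be exercised numerically in the simplest two-point setting: each name defaults before $T$ with probability $q=F(T-)$, so that $\mathfrak{I}(\alpha)=\alpha\ln\frac{\alpha}{q}+(1-\alpha)\ln\frac{1-\alpha}{1-q}$. In the sketch below (Python; the values of $q$, $\alpha$, $N$ and the trial count are illustrative assumptions), names are simulated under $\tilde{\mathbb{P}}_N$, where defaults have probability $\alpha$, and each trial is reweighted by $e^{-N\mathfrak{I}(\alpha)-\varkappa\gamma_N}$; this estimates $\mathbb{P}\{L^{(N)}_{T-}>\alpha\}$ accurately even though the event is far too rare for direct simulation:

```python
import math, random

random.seed(0)
q, alpha, N = 0.01, 0.10, 100   # per-name P{default before T} = F(T-); attachment; pool size
kappa = math.log(alpha * (1 - q) / ((1 - alpha) * q))
rate_I = alpha * math.log(alpha / q) + (1 - alpha) * math.log((1 - alpha) / (1 - q))

# simulate under the tilted measure (defaults now have probability alpha) and
# reweight each trial by exp(-N*I(alpha) - kappa*gamma_N), gamma_N = #defaults - N*alpha
trials, acc = 20000, 0.0
for _ in range(trials):
    k = sum(random.random() < alpha for _ in range(N))
    if k > alpha * N:           # the loss event {L_{T-} > alpha}
        acc += math.exp(-N * rate_I - kappa * (k - alpha * N))
est = acc / trials

# exact binomial tail, for comparison
exact = sum(math.comb(N, k) * q**k * (1 - q)**(N - k)
            for k in range(int(alpha * N) + 1, N + 1))
print(est, exact)               # both are roughly 6e-9
```

The point of Theorem \ref{T:measurechange} is exactly that the integrand on the tilted side is $O(1)$: all of the exponential smallness has been factored out into $e^{-N\mathfrak{I}(\alpha)}$.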
Our goal now is to organize our thoughts about the prefactor,\nand in particular to actually extract the asymptotics of Theorem \\ref{T:Main};\ni.e., to ``do the math''.\n\nLooking at the expression \\eqref{E:IDef} for $I_N$, we see that the dominant part of $I_N$ will be where $\\gamma_N$ is order\\footnote{actually, it will be where $\\gamma_N\\ll \\sqrt{N}$} 1; if $\\gamma_N\\gg 1$, then $\\exp[-\\varkappa \\gamma_N]$ will be very small so the contribution to $I_N$ will be negligible (recall here that $\\textbf{P}^{\\text{prot}}_N$ is bounded). This suggests\nwe organize the formula for $I_N$ based on the values of $\\gamma_N$.\nNote that the range of $\\gamma_N$ when it is positive is\n$\\mathcal{S}_N\\overset{\\text{def}}{=} \\{n-N\\alpha: \\text{$n\\in \\mathbb{Z}$ and $N\\alpha\\le n\\le N$}\\}$.\n\\begin{definition} For each $N$, let $H_N:\\mathcal{S}_N\\to [0,1]$ be such that\n\\begin{equation*} H_N(\\gamma_N)= \\tilde \\mathbb{E}_N\\left[\\textbf{P}^{\\text{prot}}_N\\big|\\gamma_N\\right] \\end{equation*}\non $\\{\\gamma_N>0\\}$.\n\\end{definition}\n\\noindent Then we have that\n\\begin{equation*} I_N = \\tilde \\mathbb{E}_N\\left[H_N(\\gamma_N)\\chi_{\\{\\gamma_N>0\\}}\\exp\\left[-\\varkappa \\gamma_N\\right]\\right]. \\end{equation*}\n\\noindent It turns out that $H_N$ has very nice asymptotics.\n\\begin{lemma}\\label{L:hasymp} For all $N$, we have that\n\\begin{equation*} H_N(s) = \\frac{e^{-\\textsf{R} T}s\\left\\{ 1 + \\mathcal{E}_1(s,N)\\right\\}}{(\\beta-\\alpha)N} \\end{equation*}\nwhere\n\\begin{equation*} \\varlimsup_{N\\nearrow \\infty}\\sup_{\\substack{s\\in \\mathcal{S}_N \\\\ s\\le N^{1\/4}}}|\\mathcal{E}_1(s,N)|=0. 
\\end{equation*}\n\\end{lemma}\n\\noindent We will prove this in Section \\ref{S:HAS}.\n\nThe next step is to understand the distribution of $\\gamma_N$.\n\\begin{lemma}\\label{L:probasymp} We have that\n\\begin{equation*} \\tilde \\mathbb{P}_N\\{\\gamma_N=s\\} = \\frac{1+\\mathcal{E}_2(s,N)}{\\sqrt{2\\pi N\\alpha(1-\\alpha)}} \\end{equation*}\nfor all $N$ and all $s\\in \\mathcal{S}_N$, where\n\\begin{equation*} \\varlimsup_{N\\nearrow \\infty}\\sup_{\\substack{s\\in \\mathcal{S}_N \\\\ s\\le N^{1\/4}}}|\\mathcal{E}_2(s,N)|=0. \\end{equation*}\n\\end{lemma}\n\\noindent We will prove this in Section \\ref{S:Proofs}. Using this result, we can now start our proof of Theorem \\ref{T:Main}. Set\n\\begin{align*} \\tilde I_{1,N} &\\overset{\\text{def}}{=} \\sum_{\\substack{s\\in \\mathcal{S}_N\\\\s\\le N^{1\/4}}}(s-\\alpha) e^{-\\varkappa s}\\\\\n\\tilde I_{2,N} &\\overset{\\text{def}}{=} \\exp\\left[-\\varkappa\\left(\\lceil N\\alpha \\rceil -N\\alpha\\right)\\right]\\left\\{ \\frac{e^{-\\varkappa}}{(1-e^{-\\varkappa})^2} +\\frac{\\lceil N\\alpha \\rceil-N\\alpha}{1-e^{-\\varkappa}}\\right\\} \\end{align*}\nWe thus expect that\n\\begin{equation*} I_N \\approx \\frac{e^{-\\textsf{R} T}\\tilde I_{1,N}}{N^{3\/2}(\\beta-\\alpha)\\sqrt{2\\pi \\alpha(1-\\alpha)}}. \\end{equation*}\nWe then claim that $\\tilde I_{1,N} \\approx \\tilde I_{2,N}$. As a preliminary\nto showing this, let's recall some calculations about geometric series.\nFor $\\lambda>0$ and each positive integer $n$,\n\\begin{equation*} \\sum_{j=0}^n e^{-\\lambda j}= \\frac{1}{1-e^{-\\lambda}}- \\frac{e^{-\\lambda(n+1)}}{1-e^{-\\lambda}}. \\end{equation*}\nDifferentiating with respect to $\\lambda$, we get that\n\\begin{equation*} \\sum_{j=0}^n j e^{-\\lambda j} = \\frac{e^{-\\lambda}}{(1-e^{-\\lambda})^2} -e^{-\\lambda(n+1)}\\frac{n(1-e^{-\\lambda}) + 1}{(1-e^{-\\lambda})^2}. \\end{equation*}\nLet's bound the error terms in these expressions. Note that $\\sup_{x>0}x e^{-x} = e^{-1}$. 
For $\\lambda>0$ we have that\n\\begin{align*} \\left|e^{-\\lambda(n+1)}\\frac{n(1-e^{-\\lambda}) + 1}{(1-e^{-\\lambda})^2}\\right|\n&\\le e^{-\\lambda(n+1)}\\frac{n+1}{(1-e^{-\\lambda})^2}\\\\\n&= 2\\left\\{ \\frac{\\lambda}{2} (n+1)\\exp\\left[-\\frac{\\lambda}{2}(n+1)\\right]\\right\\} \\frac{\\exp\\left[-\\frac{\\lambda}{2}(n+1)\\right]}{\\lambda\\left(1-e^{-\\lambda}\\right)^2} \\\\\n&\\le 2e^{-1} \\frac{\\exp\\left[-\\frac{\\lambda}{2}(n+1)\\right]}{\\lambda\\left(1-e^{-\\lambda}\\right)^2} \\end{align*}\nand similarly\n\\begin{equation*} \\left|\\frac{e^{-\\lambda(n+1)}}{1-e^{-\\lambda}}\\right|\n= 2 e^{-\\lambda n\/2}\\left(1-e^{-\\lambda}\\right)\\left\\{ \\frac{\\lambda}{2}e^{-\\lambda\/2}\\right\\} \\frac{\\exp\\left[-\\frac{\\lambda}{2}(n+1)\\right]}{\\lambda\\left(1-e^{-\\lambda}\\right)^2}\\\\\n\\le 2 e^{-1}\\frac{\\exp\\left[-\\frac{\\lambda}{2}(n+1)\\right]}{\\lambda\\left(1-e^{-\\lambda}\\right)^2}. \\end{equation*}\nObserve now that\n\\begin{equation*} \\lfloor N\\alpha + N^{1\/4}\\rfloor - \\lceil N\\alpha \\rceil + 1\n\\ge N\\alpha + N^{1\/4} -1-N\\alpha -1+1 =N^{1\/4}-1 \\end{equation*}\nfor all $N\\in \\mathbb{N}$.\nCombining things and recalling that $\\varkappa>0$, we see that for all $N\\in \\mathbb{N}$,\n\\begin{equation} \\label{E:QQ} \\begin{aligned} \\tilde I_{1,N} &= \\sum_{\\substack{j\\in \\mathbb{Z}\\\\0\\le j-N\\alpha \\le N^{1\/4}}}(j-N\\alpha)\\exp\\left[-\\varkappa (j-N\\alpha)\\right]\\\\\n&= \\sum_{j=\\lceil N\\alpha \\rceil}^{\\lfloor N\\alpha + N^{1\/4}\\rfloor}(j-N\\alpha)\\exp\\left[-\\varkappa (j-N\\alpha)\\right]\\\\\n&= \\sum_{j=0}^{\\lfloor N\\alpha +N^{1\/4}\\rfloor-\\lceil N\\alpha \\rceil}(j+\\lceil N\\alpha \\rceil-N\\alpha)\\exp\\left[-\\varkappa (j+\\lceil N\\alpha \\rceil-N\\alpha)\\right]\\\\\n&= \\exp\\left[-\\varkappa (\\lceil N\\alpha \\rceil-N\\alpha)\\right]\\left\\{ \\sum_{j=0}^{\\lfloor N\\alpha +N^{1\/4}\\rfloor-\\lceil N\\alpha \\rceil}j e^{-\\varkappa j} + \\left(\\lceil N\\alpha 
\\rceil-N\\alpha\\right)\\sum_{j=0}^{\\lfloor N\\alpha +N^{1\/4}\\rfloor-\\lceil N\\alpha \\rceil}e^{-\\varkappa j}\\right\\} \\\\\n&= \\exp\\left[-\\varkappa(\\lceil N\\alpha \\rceil-N\\alpha)\\right]\\left\\{ \\frac{e^{-\\varkappa}}{(1-e^{-\\varkappa})^2} + \\left(\\lceil N\\alpha \\rceil-N\\alpha\\right)\\frac{1}{1-e^{-\\varkappa}} + \\mathcal{E}_3(N)\\right\\}\n\\end{aligned}\\end{equation}\nwhere\n\\begin{equation}\\label{E:QQQ} |\\mathcal{E}_3(N)| \\le 4e^{-1}\\frac{\\exp\\left[-\\frac{\\varkappa}{2}(N^{1\/4}-1)\\right]}{\\varkappa \\left(1-e^{-\\varkappa}\\right)^2}. \\end{equation}\nAs a consequence, we furthermore have that\n\\begin{multline*}\\left|\\tilde I_{1,N}\\right|\n\\le \\frac{e^{-\\varkappa}}{\\left(1-e^{-\\varkappa}\\right)^2} + \\frac{1}{1-e^{-\\varkappa}} + 4e^{-1}\\frac{\\exp\\left[-\\frac{\\varkappa}{2}(N^{1\/4}-1)\\right]}{\\varkappa \\left(1-e^{-\\varkappa}\\right)^2}\\\\ \n\\le \\frac{\\varkappa\\left(e^{-\\varkappa}+1-e^{-\\varkappa}\\right) + 4e^{-1}}{\\varkappa\\left(1-e^{-\\varkappa}\\right)^2}\n\\le \\frac{4e^{-1}(1+\\varkappa)}{\\varkappa\\left(1-e^{-\\varkappa}\\right)^2}. \\end{multline*}\nFrom \\eqref{E:vkapdef}, we have that\n\\begin{equation*} e^{-\\varkappa} = \\frac{1-\\alpha}{\\alpha}\\frac{F(T-)}{1-F(T-)}\\qquad \\text{and}\\qquad 1-e^{-\\varkappa}= \\frac{\\alpha-F(T-)}{\\alpha(1-F(T-))}. \\end{equation*}\nso\n\\begin{equation*} \\frac{e^{-\\varkappa}}{(1-e^{-\\varkappa})^2} = \\frac{1-\\alpha}{\\alpha}\\frac{F(T-)}{1-F(T-)}\\frac{\\alpha^2(1-F(T-))^2}{(\\alpha-F(T-))^2} = \\frac{\\alpha(1-\\alpha)F(T-)(1-F(T-))}{(\\alpha-F(T-))^2}. 
\end{equation*}\n\nWe can finally prove our desired result.\n\begin{proof}[Proof of Theorem \ref{T:Main}]\nWe have that\n\begin{equation*} I_N = \frac{e^{-\textsf{R} T}\tilde I_{2,N}}{N^{3\/2}(\beta-\alpha)\sqrt{2\pi \alpha(1-\alpha)}} + \sum_{j=1}^4 \tilde \mathcal{E}_j(N)\end{equation*}\nwhere\n\begin{align*}\n\tilde \mathcal{E}_1(N)&\overset{\text{def}}{=} \tilde \mathbb{E}_N\left[\textbf{P}^{\text{prot}}_N e^{-\varkappa \gamma_N}\chi_{\{\gamma_N>N^{1\/4}\}}\right] \\\n\tilde \mathcal{E}_2(N)&\overset{\text{def}}{=} \sum_{\substack{s\in \mathcal{S}_N\\s\le N^{1\/4}}}\frac{H_N(s)e^{-\varkappa s}\mathcal{E}_2(s,N)}{\sqrt{2\pi N\alpha(1-\alpha)}} \\\n\tilde \mathcal{E}_3(N)&\overset{\text{def}}{=} \frac{e^{-\textsf{R} T}}{\beta-\alpha}\sum_{\substack{s\in \mathcal{S}_N\\s\le N^{1\/4}}}\frac{se^{-\varkappa s}\mathcal{E}_1(s,N)}{N^{3\/2}\sqrt{2\pi \alpha(1-\alpha)}}\\\n\tilde \mathcal{E}_4(N)&\overset{\text{def}}{=} \frac{e^{-\textsf{R} T}}{\beta-\alpha}\frac{\exp\left[-\varkappa\left(\lceil N\alpha \rceil-N\alpha\right)\right]\mathcal{E}_3(N)}{N^{3\/2}\sqrt{2\pi \alpha(1-\alpha)}}\end{align*}\nThen there is a $K_1>0$ such that \n\begin{equation*} |\tilde \mathcal{E}_1(N)|\le \frac{1}{K_1}e^{-K_1 N^{1\/4}}\qquad \text{and}\qquad |\tilde \mathcal{E}_4(N)|\le \frac{1}{K_1}e^{-K_1 N^{1\/4}}\end{equation*}\nfor all $N\in \mathbb{N}$.\nFurthermore, we can fairly easily see that there is a $K_2$ such that\n\begin{equation*}\n|\tilde \mathcal{E}_2(N)|\le \frac{K_2\tilde I_{1,N}}{N^{3\/2}} \sup_{\substack{s\in \mathcal{S}_N \\ s\le N^{1\/4}}}|\mathcal{E}_2(s,N)|\qquad \text{and}\qquad |\tilde \mathcal{E}_3(N)|\le \frac{K_2 \tilde I_{1,N}}{N^{3\/2}}\sup_{\substack{s\in \mathcal{S}_N \\ s\le N^{1\/4}}}|\mathcal{E}_1(s,N)| \end{equation*}\nfor all $N\in \mathbb{N}$ (note from \eqref{E:QQ} and \eqref{E:QQQ} that $\tilde I_{1,N}$ is uniformly 
bounded in $N$). Combine things to get the stated result.\end{proof}\n\n\begin{remark}\label{R:Comments} Several comments are in order about the analysis of this section.\n\nFirstly, we re-emphasize that we first identified the law of $L^{(N)}_{T-}$ and then studied the behavior of $L^{(N)}$ right before $T$.\nFor investment-grade tranches, only this last part of $L^{(N)}$ should be\nof interest. For an investment-grade tranche, losses in general should be\nrare events; losses significantly before expiry should be \emph{very} rare\nevents. This would follow from a detailed analysis of the measure transformation\nof Section \ref{S:Tilt}.\n\nSecondly, our analysis here suggests that in more realistic models\n(i.e., not i.i.d. names), the first order of business should be a thorough\nstudy of the law of $L^{(N)}_{T-}$. This is somewhat appealing; by time $T$,\nvarious transients will presumably have died out, and some sort of macroscopic\nanalysis may be available.\n\nThe third point of interest is the asymptotics of Lemma \ref{L:probasymp}.\nThis does \emph{not} directly reflect a Poisson distribution for $L^{(N)}_{T-}$.\nA number of other studies of CDOs have modelled the loss process as a Poisson\nprocess; an interesting question would thus be to try to find a limiting regime of our calculations which leads to Poisson statistics.\n\end{remark}\n\nFinally, it would not be hard to use the measure change of Section \ref{S:Tilt}\nand calculations similar to those of this section\nto compute the expected loss given default. We will leave that to the reader.\n\n\section{Proof of Lemma \ref{L:hasymp}}\label{S:HAS}\nWe here prove Lemma \ref{L:hasymp}. To do so, we need to develop a clear picture of the dynamics of $L^{(N)}$. 
We note that the calculations of this section, though technical, provide a \\emph{direct} link to the distribution of the default times.\n\nFirst of all, we recall that the definition of $\\bar L^{(N)}$ implies that $\\bar L^{(N)}$ is nonzero only where $L^{(N)}$ exceeds $\\alpha$; since $L^{(N)}$ is nondecreasing, this will in fact be an interval. Set\n\\begin{align*} \\tau^\\alpha_N &\\overset{\\text{def}}{=} \\inf\\{r>0: \\bar L^{(N)}_r>0\\} = \\inf\\{r>0: L^{(N)}_r>\\alpha\\} \\\\\n\\tau^\\beta_N &\\overset{\\text{def}}{=} \\sup\\{r>0: \\bar L^{(N)}_r<\\beta-\\alpha\\} = \\sup\\{r>0: L^{(N)}_r<\\beta\\} \\end{align*}\nA typical graph of $\\bar L^{(N)}$ is given in Figure \\ref{fig:typ}.\nNext note that on $\\{\\gamma_N>0\\}$,\n\\begin{equation}\\label{E:AA} \\textbf{P}^{\\text{prot}}_N = \\int_{s\\in [\\tau^\\alpha_N,\\tau^\\beta_N]\\cap[0,T)}e^{-\\textsf{R} s}d\\bar L^{(N)}_s. \\end{equation}\nIf $\\gamma_N=s$ for some $s\\in \\mathcal{S}_N$, where $s\\le N^{1\/4}$, then (recall the second line of \\eqref{E:MIX}) $L^{(N)}_{T-}=\\alpha+\\frac{s}{N}$ and $\\frac{s}{N}\\ll 1$; thus $L^{(N)}_{T-}$ is close to $\\alpha$. Hence $\\tau^\\beta_N>T$ (at least if $N>(\\beta-\\alpha)^{-4\/3}$)\nand $\\tau^\\alpha_N$ should be close to $T$; it should only take a short amount\nof time for $L^{(N)}$ to increase the extra distance (which is at most $s\/N$) past $\\alpha$.\n\\begin{lemma}\\label{L:TTimes} We have that\n\\begin{equation*} \\varlimsup_{N\\nearrow \\infty}\\sup_{\\substack{s\\in \\mathcal{S}_N \\\\ s\\le N^{1\/4}}}\\tilde \\mathbb{E}_N\\left[T-\\tau^\\alpha_N\\bigg|\\gamma_N\\right]\\chi_{\\{\\gamma_N=s\\}}=0. \\end{equation*}\n\\end{lemma}\n\\noindent Let's rigorously put all of these thoughts together. Assume\nthat $N> (\\beta-\\alpha)^{-4\/3}$ and $0<\\gamma_N\\le N^{1\/4}$. Then\n$0\\le \\tau^\\alpha_N\\le T\\le \\tau^\\beta_N$ and $0\\le L^{(N)}_{\\tau^\\alpha_N-}\\le L^{(N)}_{T-}= \\alpha+\\tfrac{\\gamma_N}{N}<\\beta$. 
Hence\n\begin{equation*} \int_{s\in [\tau^\alpha_N,\tau^\beta_N]\cap [0,T)}e^{-\textsf{R} s}d\bar L^{(N)}_s\n= e^{-\textsf{R} T}\left(\bar L^{(N)}_{T-}-\bar L^{(N)}_{\tau^\alpha_N-}\right) + \int_{s\in [\tau^\alpha_N,T)}\left(e^{-\textsf{R} s}-e^{-\textsf{R} T}\right) d\bar L^{(N)}_s. \end{equation*}\nNote that\n\begin{equation*} \bar L^{(N)}_{T-}=\frac{L^{(N)}_{T-}-\alpha}{\beta-\alpha} = \frac{1}{\beta-\alpha}\frac{\gamma_N}{N} \qquad \text{and}\qquad \bar L^{(N)}_{\tau^\alpha_N-}=\frac{\left(L^{(N)}_{\tau^\alpha_N-}-\alpha\right)^+}{\beta-\alpha}. \end{equation*}\nThus\n\begin{equation*} \int_{s\in [\tau^\alpha_N,\tau^\beta_N]\cap [0,T)}e^{-\textsf{R} s}d\bar L^{(N)}_s = \frac{e^{-\textsf{R} T}}{\beta-\alpha}\frac{\gamma_N}{N} + \textsc{\tiny E}_N \end{equation*}\nwhere\n\begin{equation*} \textsc{\tiny E}_N = -e^{-\textsf{R} T}\frac{\left(L^{(N)}_{\tau^\alpha_N-}-\alpha\right)^+}{\beta-\alpha} + \int_{s\in [\tau^\alpha_N,T)}e^{-\textsf{R} s}\left\{ 1- e^{-\textsf{R} (T-s)}\right\} d\bar L^{(N)}_s. \end{equation*}\nIf $\tau^\alpha_N>0$, then $L^{(N)}_{\tau^\alpha_N-}\le \alpha$. Thus\n\begin{equation*} \left(L^{(N)}_{\tau^\alpha_N-}-\alpha\right)^+ \le \frac{\gamma_N}{N}\chi_{\{\tau^\alpha_N=0\}}\n=\frac{\gamma_N}{N}\chi_{\{T-\tau^\alpha_N=T\}}\n\le \frac{1}{T}\frac{\gamma_N}{N}\left(T-\tau^\alpha_N\right). 
\end{equation*}\nSimilarly,\n\begin{multline*} 0\le \int_{s\in [\tau^\alpha_N,T)}e^{-\textsf{R} s}\left\{ 1- e^{-\textsf{R} (T-s)}\right\} d\bar L^{(N)}_s \le \textsf{R}(T-\tau^\alpha_N)\left(\bar L^{(N)}_{T-}-\bar L^{(N)}_{\tau^\alpha_N-}\right)\n\le \frac{\textsf{R}}{\beta-\alpha}(T-\tau^\alpha_N)\left(L^{(N)}_{T-}-\alpha\right)\\\n= \frac{\textsf{R}}{\beta-\alpha}(T-\tau^\alpha_N)\frac{\gamma_N}{N} \end{multline*}\n(we use here the fact that $\bar L_{\tau^\alpha_N-}\ge 0$ and that $e^{-x}\ge 1-x$ for all $x\ge 0$).\nCombining things, we get that\n\begin{equation*} |\textsc{\tiny E}_N|\le \frac{1}{\beta-\alpha}\left\{ \frac{1}{T}+\textsf{R}\right\}(T-\tau^\alpha_N)\frac{\gamma_N}{N} \end{equation*}\non $\left\{ 0<\gamma_N\le N^{1\/4}\right\}$ if $N>(\beta-\alpha)^{-4\/3}$.\nWe can now prove Lemma \ref{L:hasymp}.\n\begin{proof}[Proof of Lemma \ref{L:hasymp}] For $s\in \mathcal{S}_N$ such that $s\le N^{1\/4}$, we have that\n\begin{equation*} \mathcal{E}_1(s,N)= (\beta-\alpha)e^{\textsf{R} T}\frac{\tilde \mathbb{E}_N\left[\textsc{\tiny E}_N\big|\gamma_N\right]}{\frac{\gamma_N}{N}}\chi_{\{\gamma_N=s\}} \le e^{\textsf{R} T}\left\{ \frac{1}{T} + \textsf{R}\right\} \tilde \mathbb{E}_N[T-\tau^\alpha_N|\gamma_N]\chi_{\{\gamma_N=s\}} \end{equation*}\nif $N>(\beta-\alpha)^{-4\/3}$.\nCombine \eqref{E:AA}, the preceding calculations, and Lemma \ref{L:TTimes}. \end{proof}\n\nWe now need to prove Lemma \ref{L:TTimes}. This is a moderately complex step.\nThe first problem is that by conditioning on $\gamma_N$, we are conditioning\non the value of $L^{(N)}$ near the \emph{endpoint} of the interval $[0,T)$\nof interest. 
The second problem is that we have a large amount\nof randomness; $L^{(N)}$ can be decomposed into $N$ (independent) processes,\none corresponding to each name.\n\nWe shall resolve these issues by using the martingale problem to decompose\n$L^{(N)}$ into a (reverse-time) zero-mean martingale and a term of bounded\nvariation\\footnote{Much of our notation will thus be in reverse time.}. We will use a martingale inequality to show that the martingale\npart is small. Thus the behavior of $L^{(N)}$ near $T$ will be given by\nthe bounded-variation part, which we can analyze via straightforward calculations.\n\nDefine now\n\\begin{equation*} Z^{(n)}_t \\overset{\\text{def}}{=} \\chi_{\\{\\tau_n< T-t\\}}=\\chi_{(t,\\infty]}(T-\\tau_n) \\qquad t\\in [0,T) \\end{equation*}\nfor each positive integer $n$ (note that the $Z^{(n)}$'s are right-continuous).\nAlso define $\\mathscr{G}_t \\overset{\\text{def}}{=} \\sigma\\{Z^{(n)}_s: 0\\le s\\le t,\\, n\\in \\{1,2\\dots\\}\\}$\nfor all $t\\in [0,T)$. Observe that\n\\begin{equation*} L^{(N)}_{t-} = \\frac{1}{N}\\sum_{n=1}^N \\chi_{[0,t)}(\\tau_n)=\\frac{1}{N}\\sum_{n=1}^N Z^{(n)}_{T-t}. \\end{equation*}\nfor all $t\\in (0,T]$.\n\n\nLet's now localize in time. 
Let $T^*\\in (0,T)$ be such that $F((T-T^*)-)>0$;\nAssumption \\ref{A:density} ensures that this is possible.\nFor all $t\\in [0,T^*]$, define\n\\begin{align*} \nA^{(n)}_t &= -\\int_{r\\in [T-t,T)} \\frac{1}{F(r)} Z^{(n)}_{(T-r)-}dF_r\\\\\nM^{(n)}_t&\\overset{\\text{def}}{=} Z^{(n)}_t-\\chi_{\\{\\tau_n0; \\end{equation*}\nthus the expressions on the right of \\eqref{E:condprob} are well-defined.\nProceeding, we compute that\n\\begin{equation*} \\tilde \\mathbb{E}_N[Z^{(n)}_t|\\mathscr{G}_s] = \\frac{\\tilde \\mathbb{P}_N\\{\\tau_n< T-t\\}}{\\tilde \\mathbb{P}_N\\{\\tau_n< T-s\\}}Z^{(n)}_s \\end{equation*}\nand hence\\footnote{Under normalization, $\\mu$ and $\\tilde \\mu^*_\\alpha$ agree\non $\\mathscr{B}[0,T)$.}\n\\begin{equation}\\label{E:LZ} \\tilde \\mathbb{E}_N[Z^{(n)}_t|\\mathscr{G}_s]-Z^{(n)}_s = \\frac{\\tilde \\mathbb{P}_N\\{\\tau_n< T-t\\}-\\tilde \\mathbb{P}_N\\{\\tau_n< T-s\\}}{\\tilde \\mathbb{P}_N\\{\\tau_n< T-s\\}}Z^{(n)}_s =\\frac{F((T-t)-)-F((T-s)-)}{F((T-s)-)}Z^{(n)}_s.\\end{equation}\n\nAgain fix $s$ and $t$ in $[0,T^*]$ such that $s\\le t$. For each positive integer $m$, define $r^m_k \\overset{\\text{def}}{=} s+(k\/m)(t-s)$ for $k\\in \\{0,1\\dots m\\}$. Using \\eqref{E:LZ}, we can write that $Z^{(n)}_t-Z^{(n)}_s = \\mathcal{A}_m+\\mathcal{M}_m$ where\n\\begin{equation*} \\mathcal{A}_m = \\sum_{k=0}^{m-1}\\left\\{ \\tilde \\mathbb{E}_n\\left[Z^{(n)}_{r^m_{k+1}}\\big|\\mathscr{G}_{r^m_k}\\right]-Z^{(n)}_{r^m_k}\\right\\} \\qquad \\text{and}\\qquad\n\\mathcal{M}_m = \\sum_{k=0}^{m-1}\\left\\{ Z^{(n)}_{r^m_{k+1}}-\\tilde \\mathbb{E}_n\\left[Z^{(n)}_{r^m_{k+1}}\\big|\\mathscr{G}_{r^m_k}\\right]\\right\\}. \\end{equation*}\nFor $C\\in \\mathscr{G}_s$,\n\\begin{equation}\\label{E:miss} \\tilde \\mathbb{E}_N\\left[\\left\\{ Z^{(n)}_t-Z^{(n)}_s-\\mathcal{A}_m\\right\\} \\chi_C\\right]\n=\\tilde \\mathbb{E}_N\\left[\\mathcal{M}_m\\chi_C\\right]=0. 
\\end{equation}\n\nWe now need to show that $\\tilde \\mathbb{P}_N$-a.s.,\n\\begin{equation}\\label{E:ggoal} \\lim_{m\\nearrow \\infty}\\mathcal{A}_m = -\\{A^{(n)}_t-A^{(n)}_s\\} \\end{equation}\nThis will require a bit of care. We first rewrite $\\mathcal{A}_m$ as a integral;\n\\begin{equation*} \\mathcal{A}_m = -\\sum_{k=0}^{m-1}\\int_{r\\in [T-r^m_{k+1},T-r^m_k)}\\frac{1}{F((T-r^m_k)-)}Z^{(n)}_{r^m_k}dF_r = \\int_{r\\in [T-t,T-s)}\\phi^m(r,\\tau_n)dF_r \\end{equation*}\nwhere\n\\begin{equation*} \\phi^m(r,t') \\overset{\\text{def}}{=} \\sum_{k=0}^{m-1}\\chi_{[T-r^m_{k+1},T-r^m_k)}(r)\n\\frac{1}{F((T-r^m_k)-)}\\chi_{\\{t'< T-r^m_k\\}} \\end{equation*}\nfor all $r\\in [T-t,T-s)$ and $t'\\in I$.\nDefining\n\\begin{equation*} \\tilde \\phi(r,t') \\overset{\\text{def}}{=} \\frac{1}{F(r-)}\\chi_{(0,r)}(t') \\end{equation*}\nfor all $r\\in [T-t,T-s]$ and $t'\\in I$, we thus have that\n\\begin{equation*} \\phi^m(r,t') =\\sum_{k=0}^{m-1}\\chi_{[T-r^m_{k+1},T-r^m_k)}(r)\\tilde \\phi(T-r^m_k,t') \\end{equation*}\nfor all $r\\in [T-t,T-s)$ and $t'\\in I$. For $r\\in [T-t,T-s)$, $F(r-)\\ge F((T-t)-)\\ge F((T-T^*)-)>0$; thus $\\tilde \\phi$ and the $\\phi^m$'s are all uniformly bounded. It is fairly easy to see that \n\\begin{equation*} \\lim_{m\\to \\infty}\\phi^m(r,t') = \\frac{1}{F(r)}\\chi_{(0,r]}(t') \\end{equation*}\nfor all $r\\in [T-t,T-s)$ and all $t'\\in I$. Thus by dominated convergence,\n\\begin{equation*} \\lim_{m\\to \\infty} \\mathcal{A}_m = -\\int_{r\\in [T-t,T-s)}\\frac{1}{F(r)}\\chi_{(0,r]}(\\tau_n)dF_r, \\end{equation*}\nand \\eqref{E:ggoal} follows.\n\nTaking the limit in \\eqref{E:miss}, we now have that\n\\begin{equation*} \\tilde \\mathbb{E}_N\\left[\\left\\{ (Z^{(n)}_t-A_t(\\tau_n))-(Z^{(n)}_s-A_s(\\tau_n))\\right\\} \\chi_C\\right]=0, \\end{equation*}\nwhich is the martingale property.\nFinally, since $M^{(n)}_0=0$, we have that $M^{(n)}$ is zero-mean.\n\\end{proof}\n\nLet's now recombine things. 
Set\n\\begin{equation*} \\tilde M^{(N)}_t \\overset{\\text{def}}{=} \\frac{1}{N}\\sum_{n=1}^N M^{(n)}_t \\qquad \\text{and}\\qquad \\tilde A^{(N)}_t \\overset{\\text{def}}{=} \\frac{1}{N}\\sum_{n=1}^N A_t(\\tau_n) \\end{equation*}\nfor $t\\in [0,T)$. Note that\n\\begin{equation}\\label{E:gmeas} L^{(N)}_{T-} = \\frac{1}{N}\\sum_{n=1}^N Z^{(n)}_0. \\end{equation}\nWe next rewrite $\\tau^\\alpha_N$ as a stopping time with respect to $\\{\\mathscr{G}_t;\\, t\\in [0,T)\\}$. Set\n\\begin{equation*} \\varrho^\\alpha_N \\overset{\\text{def}}{=} \\inf\\left\\{ t\\in [0,T): L^{(N)}_{(T-t)-}\\le \\frac{\\lfloor N\\alpha \\rfloor}{N}\\right\\}\\wedge T = \\inf\\left\\{ t\\in [0,T): \\frac{1}{N}\\sum_{n=1}^N Z^{(n)}_t\\le \\frac{\\lfloor N\\alpha \\rfloor}{N}\\right\\}\\wedge T; \\end{equation*}\nthen\\footnote{If $L^{(N)}_0>\\alpha$, then $\\varrho^\\alpha_N=T$ and $\\tau^\\alpha_N=0$.\nAssume next that $L^{(N)}_0\\le \\alpha$. Since $L^{(N)}$ is piecewise-constant and right-continuous,\nwe must have that $\\tau^\\alpha_N>0$. At time $\\tau^\\alpha_N$, we have that\n$L^{(N)}_{\\tau^\\alpha_N}>\\alpha$ and $L^{(N)}_{\\tau^\\alpha_N-}\\le \\alpha$;\nsee Figure \\ref{fig:typ}. Since $L^{(N)}$ takes values only in $\\mathbb{Z}\/N$, we\nhave that $L^{(N)}_{\\tau^\\alpha_N-}\\le \\tfrac{\\lfloor N\\alpha\\rfloor}{N}$\nand $L^{(N)}_{\\tau^\\alpha_N}\\ge \\tfrac{\\lfloor N\\alpha\\rfloor+1}{N}$.\nThus $\\varrho^\\alpha_N=T-\\tau^\\alpha_N$, as claimed.} $\\varrho^\\alpha_N=T-\\tau^\\alpha_N$.
Furthermore, $\\varrho^\\alpha_N$ is a $\\{\\mathscr{G}_t;\\, t\\in [0,T)\\}$-stopping time.\n\nIt will help to truncate $\\varrho^\\alpha_N$ at $T^*$; set\n$\\tilde \\varrho^\\alpha_N \\overset{\\text{def}}{=} \\varrho^\\alpha_N\\wedge T^*$; this is also a $\\{\\mathscr{G}_t;\\, t\\in [0,T)\\}$-stopping time and $\\tilde \\varrho^\\alpha_N\\le T^*$.\nThus\n\\begin{equation*} L^{(N)}_{(T-\\tilde \\varrho^\\alpha_N)-} = L^{(N)}_{T-} + \\tilde A^{(N)}_{\\tilde \\varrho^\\alpha_N} + \\tilde M^{(N)}_{\\tilde \\varrho^\\alpha_N}. \\end{equation*}\nIf $\\gamma_N>0$, then $L^{(N)}_{T-}>\\alpha$, and since $\\tilde \\varrho^\\alpha_N\\le \\varrho^\\alpha_N$, we have that $L^{(N)}_{(T-\\tilde \\varrho^\\alpha_N)-}\\ge \\frac{\\lfloor N\\alpha \\rfloor}{N}$ and consequently\n\\begin{multline*} - \\tilde A^{(N)}_{\\tilde \\varrho^\\alpha_N} = L^{(N)}_{T-} - L^{(N)}_{(T-\\tilde \\varrho^\\alpha_N)-} + \\tilde M^{(N)}_{\\tilde \\varrho^\\alpha_N}\n\\le L^{(N)}_{T-} - \\frac{\\lfloor N\\alpha \\rfloor}{N} + \\tilde M^{(N)}_{\\tilde \\varrho^\\alpha_N}\n\\le L^{(N)}_{T-} - \\alpha + \\frac{1}{N}+ |\\tilde M^{(N)}_{\\tilde \\varrho^\\alpha_N}|\\\\\n\\le \\gamma_N + \\frac{1}{N}+ |\\tilde M^{(N)}_{\\tilde \\varrho^\\alpha_N}|. \\end{multline*}\n\nLet's now use the fact that $\\frac{1}{N}\\sum_{n=1}^N Z^{(n)}_{\\tilde \\varrho^\\alpha_N}\\ge \\frac{\\lfloor N\\alpha \\rfloor}{N}$ to bound $\\tilde A^{(N)}_{\\tilde \\varrho^\\alpha_N}$. As we pointed out in the proof of Lemma \\ref{L:wmx}, the $Z^{(n)}$'s are nonincreasing. Also, $F\\le 1$. 
Thus for $N\\ge 2\/\\alpha$ (which implies that $\\lfloor N\\alpha \\rfloor\/N\\ge \\alpha\/2$),\nwe have the following string of inequalities.\n\\begin{multline*} -\\tilde A^{(N)}_{\\tilde \\varrho^\\alpha_N} \n=\\int_{r\\in [T-\\tilde \\varrho^\\alpha_N,T)}\\frac{1}{F(r-)}\\left(\\frac{1}{N}\\sum_{n=1}^N Z^{(n)}_{(T-r)-}\\right)dF_r \n\\ge \\left(\\frac{1}{N}\\sum_{n=1}^N Z^{(n)}_{\\tilde \\varrho^\\alpha_N-}\\right)\\int_{r\\in [T-\\tilde \\varrho^\\alpha_N,T)}dF_r\\\\\n\\ge \\left(\\frac{1}{N}\\sum_{n=1}^N Z^{(n)}_{\\tilde \\varrho^\\alpha_N}\\right)\\left\\{ F(T-)-F((T-\\tilde \\varrho^\\alpha_N)-)\\right\\} \\ge \\frac{\\alpha}{2}\\mathfrak{f}(\\tilde \\varrho^\\alpha_N) \\end{multline*}\nwhere we have defined\n\\begin{equation*} \\mathfrak{f}(\\varepsilon) \\overset{\\text{def}}{=} F(T-)-F((T-\\varepsilon)-) \\end{equation*}\nfor all $\\varepsilon\\in (0,T)$. Note that $\\lim_{\\varepsilon \\searrow 0}\\mathfrak{f}(\\varepsilon)=0$, and, thanks to Assumption \\ref{A:density}, $\\mathfrak{f}(\\varepsilon)>0$ for $\\varepsilon\\in (0,T)$.\nThus\n\\begin{equation} \\label{E:CW} \\mathfrak{f}\\left(\\tilde \\varrho^\\alpha_N\\right)\\chi_{\\{\\gamma_N>0\\}}\\le \\frac{2}{\\alpha}\\left\\{ \\gamma_N + \\frac{1}{N} + \\left|\\tilde M^{(N)}_{\\tilde \\varrho^\\alpha_N}\\right|\\right\\}\\chi_{\\{\\gamma_N>0\\}}\n\\le \\frac{2}{\\alpha}\\left\\{ \\gamma_N^+ + \\frac{1}{N} + \\left|\\tilde M^{(N)}_{\\tilde \\varrho^\\alpha_N}\\right|\\right\\}\\end{equation}\nif $N\\ge 2\/\\alpha$.\n\\begin{proof}[Proof of Lemma \\ref{L:TTimes}] We begin by taking conditional expectations of \\eqref{E:CW}.
Note that $\\gamma_N$ is $\\mathscr{G}_0$-measurable (see \\eqref{E:gmeas}).\nWe have\n\\begin{equation*} \\tilde \\mathbb{E}_N\\left[\\mathfrak{f}\\left(\\tilde \\varrho^\\alpha_N\\right)\\big|\\mathscr{G}_0\\right]\\chi_{\\{\\gamma_N>0\\}} \\le \\frac{2}{\\alpha}\\left\\{ \\gamma_N^+ + \\frac{1}{N} + \\tilde \\mathbb{E}_N\\left[\\left|\\tilde M^{(N)}_{\\tilde \\varrho^\\alpha_N}\\right|\\bigg| \\mathscr{G}_0\\right]\\right\\} . \\end{equation*}\nBy Jensen's inequality,\n\\begin{equation*} \\tilde \\mathbb{E}_N\\left[\\left|\\tilde M^{(N)}_{\\tilde \\varrho^\\alpha_N}\\right|\\bigg| \\mathscr{G}_0\\right]\\le \\tilde \\mathbb{E}_N\\left[\\left(\\tilde M^{(N)}_{\\tilde \\varrho^\\alpha_N}\\right)^2\\bigg| \\mathscr{G}_0\\right]^{1\/2} \\end{equation*}\n$\\tilde \\mathbb{P}_N$-a.s.\nWe can now use optional sampling;\n\\begin{multline*} \\tilde \\mathbb{E}_N\\left[\\left(\\tilde M^{(N)}_{\\tilde \\varrho^\\alpha_N}\\right)^2\\bigg| \\mathscr{G}_0\\right] \\le \\tilde \\mathbb{E}_N\\left[\\left(\\tilde M^{(N)}_{T_2^*}\\right)^2\\bigg| \\mathscr{G}_0\\right]\n= \\frac{1}{N^2}\\sum_{n=1}^N \\tilde \\mathbb{E}_N\\left[\\left(M^{(n)}_{T_2^*}\\right)^2\\bigg| \\mathscr{G}_0\\right]\\\\\n\\le \\frac{3}{N^2}\\sum_{n=1}^N \\left\\{ \\tilde \\mathbb{E}_N\\left[\\left(Z^{(n)}_{T_2^*}\\right)^2\\bigg| \\mathscr{G}_0\\right]+\\tilde \\mathbb{E}_N\\left[\\chi^2_{\\{\\tau_n0\\}}\\le \\frac{2}{\\alpha}\\left\\{ \\left(L^{(N)}_{T-}-\\alpha\\right)^+ + \\frac{1}{N} + \\sqrt{\\frac{3}{N}\\left\\{ 2 + \\frac{1}{F^2((T-T^*)-)}\\right\\} }\\right\\} \\end{equation*}\n$\\tilde \\mathbb{P}_N$-a.s. 
As we pointed out earlier, $\\sigma\\{\\gamma_N\\} = \\sigma\\{L^{(N)}_{T-}\\} \\subset \\mathscr{G}_0$, so by iterated conditioning, we next have that\n\\begin{equation*} \\tilde \\mathbb{E}_N\\left[\\mathfrak{f}\\left(\\tilde \\varrho^\\alpha_N\\right)\\big|\\gamma_N\\right]\\chi_{\\{\\gamma_N>0\\}}\\le \\frac{2}{\\alpha}\\left\\{ \\gamma_N^+ + \\frac{1}{N} + \\sqrt{\\frac{3}{N}\\left\\{ 2 + \\frac{1}{F^2((T-T^*)-)}\\right\\}}\\right\\} \\end{equation*}\n\nFix now $\\varepsilon\\in (0,T^*)$. If $\\tilde \\varrho^\\alpha_N<\\varepsilon$, then in fact $T-\\tau^\\alpha_N = \\tilde \\varrho^\\alpha_N<\\varepsilon$. On the other hand, if $\\tilde \\varrho^\\alpha_N\\ge \\varepsilon$, then \n\\begin{equation*} \\gamma_N = L^{(N)}_{T-}-\\alpha \\ge \\frac{\\lfloor N\\alpha \\rfloor + 1}{N}-\\alpha >0; \\end{equation*}\nthus\n\\begin{equation*} \\chi_{\\{\\tilde \\varrho^\\alpha_N>\\varepsilon\\}} \\le \\frac{1}{\\mathfrak{f}(\\varepsilon)}\\mathfrak{f}(\\tilde \\varrho^\\alpha_N)\\chi_{\\{\\gamma_N>0\\}}. \\end{equation*}\nHence\n\\begin{multline*} \\tilde \\mathbb{E}_N\\left[T-\\tau^\\alpha_N\\bigg|\\gamma_N\\right]\n\\le \\varepsilon + T \\frac{\\tilde \\mathbb{E}_N\\left[\\mathfrak{f}\\left(\\tilde \\varrho^\\alpha_N\\right)\\chi_{\\{\\gamma_N>0\\}}\\bigg| L^{(N)}_{T-}\\right]}{\\mathfrak{f}(\\varepsilon)} \\\\\n\\le \\varepsilon + \\frac{2T}{\\alpha\\mathfrak{f}(\\varepsilon)}\\left\\{ \\gamma_N^+ + \\frac{1}{N} + \\sqrt{\\frac{3}{N}\\left\\{ 2 + \\frac{1}{F^2((T-T^*)-)}\\right\\}}\\right\\} \\end{multline*}\n$\\tilde \\mathbb{P}_N$-a.s. \nIn other words,\n\\begin{equation*} \\sup_{\\substack{s\\in \\mathcal{S}_N \\\\ s\\le N^{1\/4}}}\\tilde \\mathbb{E}_N\\left[T-\\tau^\\alpha_N\\big| \\gamma_N\\right]\\chi_{\\{\\gamma_N=s\\}}\\le \n\\varepsilon + \\frac{2T}{\\alpha\\mathfrak{f}(\\varepsilon)}\\left\\{ \\frac{1}{N^{3\/4}} + \\frac{1}{N} + \\sqrt{\\frac{3}{N}\\left\\{ 2 + \\frac{1}{F^2((T-T^*)-)}\\right\\} }\\right\\}. 
\\end{equation*}\nLet $N\\nearrow \\infty$ and then let $\\varepsilon\\searrow 0$.\n\\end{proof}\n\n\\section{Proofs}\\label{S:Proofs}\n\nWe here give the deferred proofs.\n\n\\begin{proof}[Proof of Lemma \\ref{L:probasymp}] To begin, recall Stirling's formula. Let $\\tilde \\mathcal{E}_1:(-1,\\infty)\\to \\mathbb{R}$ be defined by\n\\begin{equation*} \\Gamma(x+1) \\overset{\\text{def}}{=} \\int_{u=0}^\\infty u^x e^{-u}du = \\left(\\frac{x}{e}\\right)^x\\sqrt{2\\pi x}\\left\\{ 1+\\tilde \\mathcal{E}_1(x)\\right\\}; \\end{equation*}\nfor all $x>-1$; then $\\lim_{x\\to \\infty}|\\tilde \\mathcal{E}_1(x)|=0$.\nThen for any $s=n-N\\alpha\\in \\mathcal{S}_N$,\n\\begin{multline*} \\tilde \\mathbb{P}_N\\left\\{ \\gamma_N=s\\right\\}\n= \\tilde \\mathbb{P}_N\\left\\{ \\text{$n$ of the $\\tau$'s are in $[0,T)$ and $N-n$ are in $[T,\\infty)$}\\right\\} \\\\\n= \\binom{N}{n}\\alpha^n (1-\\alpha)^{N-n}\n= \\frac{\\Gamma(N+1)}{\\Gamma(N\\alpha+s+1)\\Gamma(N(1-\\alpha)-s+1)}\\alpha^{N\\alpha+s}(1-\\alpha)^{N(1-\\alpha)-s}\n= A(N)B(s,N) \\end{multline*}\nwhere\n\\begin{align*} A(N) &= \\frac{\\Gamma(N+1)}{\\Gamma(N\\alpha+1)\\Gamma(N(1-\\alpha)+1)}\\alpha^{N\\alpha}(1-\\alpha)^{N(1-\\alpha)}\\\\\nB(s,N) &= \\frac{\\Gamma(N\\alpha+1)\\Gamma(N(1-\\alpha)+1)}{\\Gamma(N\\alpha+s+1)\\Gamma(N(1-\\alpha)-s+1)}\\alpha^s(1-\\alpha)^{-s}. 
\\end{align*}\nLet's now use Stirling's formula.\nWe have\n\\begin{align*} A(N)&= \\frac{\\left(\\frac{N}{e}\\right)^N}{\\left(\\frac{N\\alpha}{e}\\right)^{N\\alpha}\\left(\\frac{N(1-\\alpha)}{e}\\right)^{N(1-\\alpha)}}\\frac{\\sqrt{2\\pi N}}{\\sqrt{2\\pi N\\alpha}\\sqrt{2\\pi N(1-\\alpha)}}\\alpha^{N\\alpha}(1-\\alpha)^{N(1-\\alpha)} \\\\\n&\\qquad \\times \\frac{1+\\tilde \\mathcal{E}_1(N)}{\\{1+\\tilde \\mathcal{E}_1(N\\alpha)\\}\\{1+\\tilde \\mathcal{E}_1(N(1-\\alpha))\\}} \\\\\n&=\\frac{1}{\\sqrt{2\\pi N\\alpha(1-\\alpha)}}\\frac{1+\\tilde \\mathcal{E}_1(N)}{\\{1+\\tilde \\mathcal{E}_1(N\\alpha)\\}\\{1+\\tilde \\mathcal{E}_1(N(1-\\alpha))\\}}. \\end{align*}\nThus\n\\begin{equation*} A(N)=\\frac{1+\\tilde \\mathcal{E}_2(N)}{\\sqrt{2\\pi N\\alpha(1-\\alpha)}} \\end{equation*}\nwhere $\\lim_{N\\to \\infty}\\tilde \\mathcal{E}_2(N)=0$.\nTo find the asymptotics of $B$, we first let $\\tilde \\mathcal{E}_3:(-1,\\infty)\\to \\mathbb{R}$ be such that\n\\begin{equation*} \\ln (1+x) = x+\\tilde \\mathcal{E}_3(x),
\\qquad x>-1 \\end{equation*}\nThen there is a $K>0$ such that $|\\tilde \\mathcal{E}_3(x)|\\le K x^2$ for all $x\\in (-1\/2,1\/2)$.\nAgain using Stirling's formula, we have that\n\\begin{align*} B(s,N) &= \\frac{\\left(\\frac{N\\alpha}{e}\\right)^{N\\alpha}}{\\left(\\frac{N\\alpha+s}{e}\\right)^{N\\alpha+s}}\\frac{\\left(\\frac{N(1-\\alpha)}{e}\\right)^{N(1-\\alpha)}}{\\left(\\frac{N(1-\\alpha)-s}{e}\\right)^{N(1-\\alpha)-s}}\\sqrt{\\left(\\frac{N\\alpha}{N\\alpha+s}\\right)\\left(\\frac{N(1-\\alpha)}{N(1-\\alpha)-s}\\right)}\\alpha^s (1-\\alpha)^{-s}\\\\\n&\\qquad \\times \\frac{1+\\tilde \\mathcal{E}_1(N\\alpha)}{\\left\\{ 1+\\tilde \\mathcal{E}_1(N\\alpha+s)\\right\\}\\left\\{ 1+\\tilde \\mathcal{E}_1(N(1-\\alpha)-s)\\right\\}}\\\\\n &= \\left(\\frac{\\alpha}{\\alpha+\\frac{s}{N}}\\right)^{N\\alpha+s}\\left(\\frac{1-\\alpha}{1-\\alpha-\\frac{s}{N}}\\right)^{N(1-\\alpha)-s}\\frac{1}{\\sqrt{\\left(1+\\frac{s}{\\alpha N}\\right)\\left(1-\\frac{s}{N(1-\\alpha)}\\right)}}\\\\\n&\\qquad \\times \\frac{1+\\tilde \\mathcal{E}_1(N\\alpha)}{\\left\\{ 1+\\tilde \\mathcal{E}_1(N\\alpha+s)\\right\\}\\left\\{ 1+\\tilde \\mathcal{E}_1(N(1-\\alpha)-s)\\right\\}}\\\\\n& = \\frac{1}{\\left(1+\\frac{s}{N\\alpha}\\right)^{N\\alpha+s}\\left(1-\\frac{s}{N(1-\\alpha)}\\right)^{N(1-\\alpha)-s}} \\frac{1}{\\sqrt{\\left(1+\\frac{s}{\\alpha N}\\right)\\left(1-\\frac{s}{N(1-\\alpha)}\\right)}}\\\\\n&\\qquad \\times \\frac{1+\\tilde \\mathcal{E}_1(N\\alpha)}{\\left\\{ 1+\\tilde \\mathcal{E}_1(N\\alpha+s)\\right\\}\\left\\{ 1+\\tilde \\mathcal{E}_1(N(1-\\alpha)-s)\\right\\}}\\\\\n&= \\exp\\left[-(N\\alpha+s)\\left(\\frac{s}{N\\alpha} + \\tilde \\mathcal{E}_3\\left(\\frac{s}{N\\alpha}\\right)\\right)- (N(1-\\alpha)-s)\\left(-\\frac{s}{N(1-\\alpha)} + \\tilde \\mathcal{E}_3\\left(\\frac{-s}{N(1-\\alpha)}\\right)\\right)\\right]\\\\\n&\\qquad \\times \\frac{1}{\\sqrt{\\left(1+\\frac{s}{\\alpha N}\\right)\\left(1-\\frac{s}{N(1-\\alpha)}\\right)}}\\times \\frac{1+\\tilde \\mathcal{E}_1(N\\alpha)}{\\left\\{ 1+\\tilde \\mathcal{E}_1(N\\alpha+s)\\right\\}\\left\\{ 1+\\tilde \\mathcal{E}_1(N(1-\\alpha)-s)\\right\\}} \\\\\n&= \\exp\\left[-\\tilde \\mathcal{E}_4(s,N)\\right]\\frac{1}{\\sqrt{\\left(1+\\frac{s}{\\alpha N}\\right)\\left(1-\\frac{s}{N(1-\\alpha)}\\right)}}\\\\\n&\\qquad \\times \\frac{1+\\tilde \\mathcal{E}_1(N\\alpha)}{\\left\\{ 1+\\tilde \\mathcal{E}_1(N\\alpha+s)\\right\\}\\left\\{ 1+\\tilde \\mathcal{E}_1(N(1-\\alpha)-s)\\right\\}} \\end{align*}\nwhere\n\\begin{equation*} \\tilde \\mathcal{E}_4(s,N) \\overset{\\text{def}}{=} \\frac{s^2}{N}\\left(\\frac{1}{\\alpha}+\\frac{1}{1-\\alpha}\\right) + (N\\alpha+s)\\tilde \\mathcal{E}_3\\left(\\frac{s}{N\\alpha}\\right)+(N(1-\\alpha)-s)\\tilde \\mathcal{E}_3\\left(-\\frac{s}{N(1-\\alpha)}\\right). \\end{equation*}\n\nLet's now combine everything. We have that\n\\begin{equation*} \\tilde \\mathbb{P}_N\\{\\gamma_N=s\\}-\\frac{1}{\\sqrt{2\\pi N\\alpha(1-\\alpha)}} = \\frac{\\tilde \\mathcal{E}_5(s,N)-1}{\\sqrt{2\\pi N\\alpha(1-\\alpha)}} \\end{equation*}\nwhere\n\\begin{align*} \\tilde \\mathcal{E}_5(s,N)&=\\frac{1+\\tilde \\mathcal{E}_1(N)}{\\{1+\\tilde \\mathcal{E}_1(N\\alpha)\\}\\{1+\\tilde \\mathcal{E}_1(N(1-\\alpha))\\}}\\exp\\left[-\\tilde \\mathcal{E}_4(s,N)\\right]\\frac{1}{\\sqrt{\\left(1+\\frac{s}{\\alpha N}\\right)\\left(1-\\frac{s}{N(1-\\alpha)}\\right)}}\\\\\n&\\qquad \\times\\frac{1+\\tilde \\mathcal{E}_1(N\\alpha)}{\\left\\{ 1+\\tilde \\mathcal{E}_1(N\\alpha+s)\\right\\}\\left\\{ 1+\\tilde \\mathcal{E}_1(N(1-\\alpha)-s)\\right\\}}. \\end{align*}\nNote that if $|s|\\le N^{1\/4}$, then\n\\begin{equation*} \\left|\\frac{s}{N\\alpha}\\right|\\le \\frac{1}{\\alpha N^{3\/4}} \\qquad \\text{and}\\qquad \\left|\\frac{s}{N(1-\\alpha)}\\right|\\le \\frac{1}{(1-\\alpha) N^{3\/4}}. \\end{equation*}\nThus if $|s|\\le N^{1\/4}$, then for $N$ large enough\n\\begin{equation*} |\\tilde \\mathcal{E}_4(s,N)| \\le \\frac{K}{\\sqrt{N}}.
\\end{equation*}\nThe claimed statement follows.\n\\end{proof}\n\n\nLet's now start to prove Proposition \\ref{P:explicit}.\nFirst, define\n\\begin{equation*} \\mathfrak{I}_\\circ(\\alpha') \\overset{\\text{def}}{=} \\inf\\left\\{ H(\\tilde \\mu|\\mu): \\tilde \\mu\\in \\mathscr{P}(I), \\tilde \\mu[0,T)=\\alpha'\\right\\} \\end{equation*}\nfor all $\\alpha'\\in [0,1]$.\n\\begin{lemma}\\label{L:hear} We have that\n\\begin{equation*} \\mathfrak{I}_\\circ(\\alpha') = \\hbar(\\alpha',F(T-))=H(\\tilde \\mu^*_{\\alpha'}|\\mu), \\end{equation*}\nwhere $\\tilde \\mu_{\\alpha'}^*$ is given by \\eqref{E:dacc}.\n\\end{lemma}\n\\begin{proof}\nFix $\\mu'\\in \\mathscr{P}(I)$ such that $\\mu'[0,T)=\\alpha'$.\nIf $\\mu'$ is not absolutely continuous\nwith respect to $\\mu$, then $H(\\mu'|\\mu)=\\infty$; thus we assume\nthat $\\mu'$ is absolutely continuous with respect to $\\mu$.\nDefine\n\\begin{equation*} f(x) \\overset{\\text{def}}{=} \\begin{cases} x \\ln x &\\text{for $x>0$} \\\\\n0 &\\text{if $x=0$.} \\end{cases}\\end{equation*}\nThen $f$ is convex on $[0,\\infty)$. Recall that Assumptions \\ref{A:density} and \\ref{A:IG} imply that $F(T)\\in (0,1)$. Thus we can write (using Jensen's inequality) that\n\\begin{multline*} H(\\mu'|\\mu) = \\int_{t\\in I}f\\left(\\frac{d\\mu'}{d\\mu}(t)\\right)\\mu(dt)\\\\\n=\\mu[0,T)\\int_{t\\in [0,T)}f\\left(\\frac{d\\mu'}{d\\mu}(t)\\right)\\frac{\\mu(dt)}{\\mu[0,T)} + \\mu[T,\\infty]\\int_{t\\in [T,\\infty]}f\\left(\\frac{d\\mu'}{d\\mu}(t)\\right)\\frac{\\mu(dt)}{\\mu[T,\\infty]} \\\\\n\\ge \\mu[0,T)f\\left(\\int_{t\\in [0,T)}\\frac{d\\mu'}{d\\mu}(t)\\frac{\\mu(dt)}{\\mu[0,T)}\\right) + \\mu[T,\\infty]f\\left(\\int_{t\\in [T,\\infty]}\\frac{d\\mu'}{d\\mu}(t)\\frac{\\mu(dt)}{\\mu[T,\\infty]}\\right) \\\\\n= \\mu[0,T)f\\left(\\frac{\\mu'[0,T)}{\\mu[0,T)}\\right) + \\mu[T,\\infty]f\\left(\\frac{\\mu'[T,\\infty]}{\\mu[T,\\infty]}\\right) = \\hbar(\\mu'[0,T),\\mu[0,T)). 
\\end{multline*}\nWe have equality here if and only if $\\mu$-a.s.\n\\begin{equation*} \\frac{d\\mu'}{d\\mu}(t) = C_1\\chi_{[0,T)}(t) + C_2\\chi_{[T,\\infty]}(t), \\end{equation*}\nwhich holds if and only if $\\mu' = \\tilde \\mu^*_{\\alpha'}$. Collecting things together,\nwe have the claimed result.\\end{proof}\n\n\\begin{proof}[Proof of Proposition \\ref{P:explicit}]\nIn light of Lemma \\ref{L:hear}, we need to show that\n\\begin{equation} \\label{E:minc} \\inf_{\\alpha'\\ge \\alpha}\\mathfrak{I}_\\circ(\\alpha')=\\mathfrak{I}(\\alpha). \\end{equation}\nTo do this, we observe that $\\mathfrak{I}_\\circ$ is differentiable and that\n\\begin{equation*} \\mathfrak{I}_\\circ'(\\alpha') = \\ln \\left(\\frac{\\alpha'}{1-\\alpha'}\\frac{1-F(T-)}{F(T-)}\\right)=\\ln \\left(1+\\frac{\\alpha'-F(T-)}{F(T-)(1-\\alpha')}\\right) \\end{equation*}\nfor all $\\alpha'\\in (0,1)$. Thus $\\mathfrak{I}'_\\circ(\\alpha')>0$ if $\\alpha'>F(T-)$. Assumption \\ref{A:IG} then gives us \\eqref{E:minc}. Combining our arguments, we get the claimed result.\n\\end{proof}\n\n\\section{Appendix: Measurability}\nWe here verify that $\\textbf{P}^{\\text{prot}}_N$ is measurable.\nLet $D_+$ be the collection of nondecreasing functions $\\phi:\\mathbb{R}\\to [0,1]$ which are right-continuous and have left-hand limits and for which $\\phi(t)=0$ for $t<0$.\nFor $\\mu'\\in \\mathscr{P}(I)$, define $\\iota(\\mu')(t) \\overset{\\text{def}}{=} \\mu'[0,t]$ for $t\\ge 0$ and $\\iota(\\mu')(t)=0$ for $t<0$. In particular, $L^{(N)}=\\iota(\\nu^{(N)})$. It is clear that $\\iota$ maps $\\mathscr{P}(I)$ to $D_+$ and is a bijection (note that $\\mu'\\{\\infty\\} = 1- \\lim_{t\\nearrow \\infty}\\iota(\\mu')(t)$; this\nallows us to recover $\\mu'\\{\\infty\\}$ when writing down the inverse of $\\iota$). We can then topologize $D_+$ by pushing the topology of $\\mathscr{P}(I)$ forward through $\\iota$; thus $\\iota$ is continuous.
We also note that $\\{\\ell_n\\}_{n=1}^\\infty \\subset D_+$ converges to $\\ell\\in D_+$ if and only if $\\lim_{n\\to \\infty}\\ell_n(t)=\\ell(t)$\nfor all points $t\\in [0,\\infty)$ at which $\\ell$ is continuous. Thus $L^{(N)}$ is a $D_+$-valued random variable. We next define\n$\\Phi^\\circ_1:D_+\\to D_+$ as \n\\begin{equation*} \\Phi^\\circ_1(\\phi)(t) \\overset{\\text{def}}{=} \\frac{(\\phi(t)-\\alpha)^+-(\\phi(t)-\\beta)^+}{\\beta-\\alpha} \\qquad t\\in \\mathbb{R} \\end{equation*}\nfor all $\\phi\\in D_+$. Thus $\\bar L^{(N)}=\\Phi^\\circ_1(L^{(N)})$. By the above characterization of convergence in $D_+$,\nwe see that $\\Phi^\\circ_1$ is continuous; thus $\\bar L^{(N)}$ is also a $D_+$-valued\nrandom variable. Finally, define $\\Phi^\\circ_2:D_+\\to \\mathbb{R}$ as \n\\begin{equation*} \\Phi^\\circ_2(\\ell) \\overset{\\text{def}}{=} \\int_{s\\in [0,T)} e^{-\\textsf{R} s}d\\ell(s) = e^{-\\textsf{R} T}\\ell(T-)+ \\textsf{R} \\int_{s\\in (0,T)} e^{-\\textsf{R} s}\\ell(s)ds \\end{equation*}\nfor all $\\ell\\in D_+$ (we define the $d\\ell$ integral as a Lebesgue-Stieltjes\nintegral). Then $\\textbf{P}^{\\text{prot}}_N = \\Phi^\\circ_2(\\bar L^{(N)})$. We claim that\n$\\Phi^\\circ_2$ is measurable (from $D_+$ to $\\mathbb{R}$).\nLet $\\zeta\\in C^\\infty(\\mathbb{R};[0,1])$ be such that $\\zeta(t)=1$\nfor $t\\le -1$ and $\\zeta(t)=0$ for $t\\ge 0$. For each positive integer $n$\nand each $\\ell\\in D_+$, set\n\\begin{equation*} \\tilde \\Phi^n_2(\\ell) \\overset{\\text{def}}{=} \\int_{s\\in \\mathbb{R}} e^{-\\textsf{R} s}\\zeta(n(s-T))d\\ell(s) = \\int_{s=0}^\\infty e^{-\\textsf{R} s}\\left\\{ \\textsf{R} \\zeta(n(s-T))- n\\dot \\zeta(n(s-T))\\right\\} \\ell(s)ds. \\end{equation*}\nClearly $\\tilde \\Phi^n_2:D_+\\to \\mathbb{R}$ is continuous. Furthermore, by dominated convergence, $\\lim_{n\\to \\infty}\\tilde \\Phi^n_2(\\ell) = \\Phi^\\circ_2(\\ell)$ for each $\\ell$ (i.e., pointwise on $D_+$).
Being the pointwise limit of continuous functions, $\\Phi^\\circ_2$ is thus measurable.\n\nCombining all of these arguments, we conclude that $\\textbf{P}^{\\text{prot}}_N$ is indeed an $\\mathbb{R}$-valued random variable. Clearly $\\bar L^{(N)}_T\\le 1$, so $0\\le\\textbf{P}^{\\text{prot}}_N\\le 1$.\n\n\\bibliographystyle{alpha}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\n\\label{sec.intro}\n\nThe chemostat is a biotechnological process of continuous culture developed in the 50s \\citep{monod1950a,novick1950a} and is at the heart of several industrial applications as well as laboratory devices \\citep{smith1995a}. Bioreactors operating under this mode are maintained under perfect mixing conditions and usually at large bacterial population sizes.\n\nThese features allow such processes to be modeled by ordinary (deterministic) differential systems since, in large populations and under certain conditions, demographic randomness can be neglected. Moreover, perfect mixing conditions permit us to neglect the spatial distribution and express these models in terms of mean concentration in the chemostat. In its simplest version, the chemostat model is expressed as a system of two coupled ordinary differential equations respectively for biomass and substrate concentrations \\citep{smith1995a}. This approach extends to the case of several bacterial species and several substrates.
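For concreteness, this simplest version can be sketched in a standard form (the notation here is assumed, not taken verbatim from the paper: $S_t$ and $B_t$ denote the substrate and biomass concentrations, $D$ the dilution rate, ${\mathbf s}_{\textrm{\tiny\rm in}}$ the input substrate concentration, $k$ the stoichiometric coefficient, and $\mu$ the specific growth rate):

```latex
\\begin{align*}
  \\dot S_t &= D\\,({\\mathbf s}_{\\textrm{\\tiny\\rm in}}-S_t)-k\\,\\mu(S_t)\\,B_t\\,,\\\\
  \\dot B_t &= \\bigl(\\mu(S_t)-D\\bigr)\\,B_t\\,,
  \\qquad\\text{with for instance}\\quad
  \\mu(s)=\\mu_{\\textrm{\\tiny\\rm max}}\\,\\frac{s}{K+s}\\,.
\\end{align*}
```

The substrate equation here is the mass-balance counterpart of Equation \eqref{eq.substrat} of Section \ref{sec.model}.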
The simplicity of such models makes possible the development of efficient tools for automatic control and the improvement of the associated biotechnological processes. \nHowever, it is increasingly necessary to develop models beyond the standard assumption of perfect mixing with a bacterial population possessing uniform characteristics. For this purpose, several paths are available which take into account the different sources of randomness or the structuring of the bacterial population and its discrete nature. All these aspects have been somewhat neglected in previous models.\n\nIn addition, the recent development of so-called ``omics'' approaches such as genomics and large-scale DNA sequencing technology is the basis of a renewed interest in chemostat techniques \\citep{hoskisson2005a}. These bacterial cultures may also be considered as laboratory models to study selection phenomena and evolution in bacterial ecosystems.\n\nBeyond classical models based on systems of (deterministic) ordinary differential equations (ODE), which neglect any structuring of the bacterial population, bacterial growth models structured in size or mass and based on integro-differential equations (IDE) also appeared in the 60s and 70s \\citep{fredrickson1967a,ramkrishna1979a}; see also the monograph \\cite{ramkrishna2000a} on these so-called population balance equations for growth-fragmentation models.\n\n\nVarious research papers have been devoted to the stochastic modeling of the chemostat.\n\\cite{crump1979a} propose a pure-jump model for the biomass growth coupled with a differential equation for the substrate evolution. \\cite{stephanopoulos1979a} propose a model with a randomly fluctuating dilution rate, hence the noise is rather of an environmental nature, whereas in the previous model it is rather demographic. \\cite{grasman2005a} propose a chemostat model at three trophic levels where randomness appears only in the upper trophic level.
\\cite{imhof2005a} also propose a stochastic chemostat model but, as in the previous models, the noise is simply ``added'' to the classical deterministic model. In contrast, in the article \\cite{campillo2011chemostat}, the demographic noise emerges from a description of the dynamics at the microscopic level.\n\nIn recent years, many models for the evolution in chemostats have been proposed either using integro-differential equations \\citep{diekmann-odo2005a,mirrahimi2012a,mirrahimi2012b} or individual-based models (IBM) \\citep{champagnat2013a}.\n\nThere are also computer models, for example \\cite{lee-min-woo2009a} propose an IBM structured in mass for a ``batch'' culture process. In this model, as in \\citep{champagnat2013a}, the dynamics of the substrate is described by deterministic differential equations. \nIndeed, the difference in scale between a bacterial cell and a substrate molecule guarantees that, at the scale of the bacterial population, the dynamics of the substrate can be correctly represented by the fluid limit model, while the dynamics of the bacterial population is discrete and random.\n\n\nWe focus here on an individual-based model (IBM) of the chemostat. In contrast to deterministic models with continuous variables, in IBMs all variables, or at least some of them, are stochastic and discrete. These models are generally cumbersome in terms of simulation and difficult to analyze mathematically, but they can be useful in accounting for phenomena inaccessible in earlier models. The majority of IBMs are initially described in natural language with simple rules.
From there they are described as computer models in terms of algorithms; it is this approach that is often termed IBM.\nNonetheless, they can also be described mathematically using a Markov process.\nThe advantage of this approach is to allow the mathematical analysis of the IBM.\nIn particular, as we will see here, the convergence of the IBM to an integro-differential model can be demonstrated.\n The latter approach has been developed in a series of papers: for a simple model of position \\citep{fournier2004a} and for the evolution of a trait-structured population \\citep{champagnat2006b}, which is then extended to take into account the age of individuals \\citep{vietchitran2006a,vietchitran2008a}. More recently, \\citet{champagnat2013a} proposed a chemostat model with multiple resources where the bacterial population has a genetic trait subject to evolution.\n\n \nIn the context of a growth-fragmentation model, \\cite{hatzis1995a} proposed an IBM, without substrate variables, and drew a parallel between this model and an integro-differential model.\n\n\n\n\n\\medskip\n\nIn Section \\ref{sec.model} we introduce the IBM where each individual in the bacterial population is explicitly represented by its mass. We describe the phenomena which the model takes into account at a microscopic scale: \nindividual cell growth, cell division, up-take \n(substrate and bacteria are constantly withdrawn from the chemostat vessel), as well as the individual consumption described as a coupling with the ordinary differential equation which models the dynamics of the substrate. Then we describe the associated exact Monte Carlo algorithm, noting that this algorithm is asynchronous in time, i.e.
different events occur at random instants which are not predetermined.\n\nIn Section \\ref{sec.notations} we introduce some notation, then in Section \\ref{sec.processus.microscopique} we construct the stochastic process associated with the IBM as a Markov process with values in the space of finite measures over the state-space of masses.\n\nIn Section \\ref{sec.convergence} we prove the convergence, in the large population limit, of the IBM towards an integro-differential equation of the population-balance equation type \\citep{fredrickson1967a,ramkrishna1979a,ramkrishna2000a} coupled with an equation for the dynamics of the substrate. Finally, in Section \\ref{sec.simulations} we present several numerical simulations.\n\n\n\\section{The model}\n\\label{sec.model}\n\n\\subsection{Description of the dynamics}\n\nWe consider an \\emph{individual-based model (IBM) structured in mass} where the\nbacterial population is represented as individuals growing in a perfectly mixed vessel of volume $V$ (l). Each individual is solely characterized by its mass $x\\in\\X \\eqdef [0,{\\textrm{max}}]$; the model does not take spatial structure into account. At time $t$ the system is characterized by the pair:\n\\begin{align}\n\\label{eq.xi}\n (S_{t},\\nu_{t})\n\\end{align}\nwhere\n\\begin{enumerate}\n\n\\item $S_{t}$ is the \\emph{substrate concentration} (mg\/l) which is assumed to be uniform in the vessel;\n\n\\item $\\nu_{t}$ is the \\emph{bacterial population}, that is $N_{t}$ individuals\nand the mass of individual number $i$ will be denoted $x^i_{t}$ (mg) \nfor $i=1,\\dots,N_{t}$.
\nIt will be convenient to represent the population $\\{x^i_{t}\\}_{i=1,\\dots,N_{t}}$ at time $t$ as the following point measure:\n\\begin{align}\n\\label{eq.nu}\n \\nu_t(\\rmd x)=\\sum_{i=1}^{N_t}\\delta_{x_t^i}(\\rmd x)\\,.\n\\end{align}\n\\end{enumerate}\n\n\nThe dynamics of the chemostat combines \\emph{discrete evolutions}, cell division and bacterial up-take, as well as \\emph{continuous evolutions}, the growth of each individual and the dynamics of the substrate. We now describe the four components of the dynamics, first the discrete ones and then the continuous ones which occur between the discrete events.\n\n\n\n\\begin{enumerate}\n\\vskip0.8em\n\\item{\\textbf{Cell division}} --\n\\emph{Each individual of mass $x$ divides at rate $\\lambda(s,x)$ into two\nindividuals of respective masses $\\alpha\\,x$ and $(1-\\alpha)\\,x$:\n\\begin{center}\n\\includegraphics[width=5cm]{fig_division1.pdf}\n\\end{center}\nwhere $\\alpha$ is distributed according to a given probability distribution $Q(\\rmd\\alpha)$ on $[0,1]$,\nand $s$ is the substrate concentration.} \n\n\\medskip\n\nFor instance, the function $\\lambda(s,x)$ may not depend on the substrate concentration $s$ and could be of the following form, which will be used in the simulations presented in Section \\ref{sec.simulations}:\n\\begin{center}\n\\includegraphics[width=4cm]{fig_function_lambda.pdf}\n\\end{center} \nThus, below a certain mass $m_{\\textrm{\\tiny\\rm div}}$ it is assumed that the cell cannot divide. There are models where the rate also depends on the concentration $s$; see for example \\citep{daoutidis2002a,henson2003b}.\n\n\\medskip\n\nWe suppose that the distribution $Q(\\rmd\\alpha)$ is symmetric with respect to $\\frac{1}{2}$, i.e. $Q(\\rmd\\alpha)=Q(1-\\rmd \\alpha)$.
It may also admit a density $Q(\\rmd \\alpha)=q(\\alpha)\\,\\rmd \\alpha$ with the same symmetry:\n\\begin{center}\n\\includegraphics[width=4cm]{fig_division2.pdf}\n\\end{center}\n\nThus, the division kernel of an individual of mass $x$ is $ K(x,\\rmd y) = Q(\\frac{1}{x}\\,\\rmd y)$ with support $[0,x]$. In the case of perfect mitosis, an individual of mass $x$ is divided into two individuals of mass $\\frac x 2$, and then $Q(\\rmd \\alpha)=\\delta_{1\/2}(\\rmd \\alpha)$. \n\n\\medskip\n\n\\emph{It is therefore assumed that, relative to their mass, the division kernel is the same for all individuals. This allows us to reduce the model to a single division kernel. More complex scenarios can also be investigated.} \n\n\\medskip\n\n\n\\vskip0.8em\n\\item{\\textbf{Up-take}} --\n\\emph{Each individual is withdrawn from the chemostat at rate~$D$.}\nWe place ourselves in the framework of a \\emph{perfect mixing} hypothesis, where individuals are uniformly distributed in the volume $V$ independently of their mass. During a time step $\\delta$, a total volume of $D\\,V\\,\\delta$ is withdrawn from the chemostat:\n\\begin{center}\n\\includegraphics[width=10cm]{fig_removal_rate.pdf}\n\\end{center}\nand therefore, if we assume that all individuals have the same volume, considered as negligible, then during this time interval $\\delta$ an individual has probability $D\\,\\delta$ of being withdrawn from the chemostat; $D$ is the dilution rate.
This rate could possibly depend on the mass of the individual.\n\n\n\\end{enumerate}\nWhen the division of an individual occurs, the size of the population instantaneously jumps from $N_{t}$ to $N_{t}+1$; when an individual is withdrawn from the vessel, the size of the population instantaneously jumps from $N_{t}$ to $N_{t}-1$; between each discrete event the size $N_{t}$ remains constant and the chemostat evolves according to the following two continuous mechanisms:\n\\begin{enumerate}\n\\setcounter{enumi}{2}\n\\vskip0.8em\n\\item {\\textbf{Growth of each individual}} --\n\\emph{Each individual of mass $x$ grows at speed $\\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S_{t},x)$}:\n\\begin{align}\n\\label{eq.masses}\n \\dot x^i_{t} \n =\n \\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S_{t},x^i_{t})\\,, \\quad i=1,\\dots,N_{t}\n\\end{align}\nwhere $\\rho_{\\textrm{\\tiny\\rm\\!\\! g}}:\\R^2_{+}\\mapsto\\R_{+}$ is given. For the simulation we will consider the following Gompertz model:\n\\begin{align*}\n \\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(s,x) \\eqdef r(s)\\,\\log\\Big(\\frac{m_{\\textrm{\\tiny\\rm max}}}{x}\\Big)\\,x\n\\end{align*}\nwhere the growth rate $r(s)$ depends on the substrate concentration according to the Monod kinetics:\n\\begin{align*}\n r(s)=r_{\\textrm{\\tiny\\rm max}}\\,\\frac{s}{k_r+s}\n\\end{align*}\nwhere $m_{\\textrm{\\tiny\\rm max}}$ is the maximum mass that an individual can reach. In Section\n\\ref{subsec.modeles.deterministes} we also present an example of a function $\\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(s,x)$ linear in $x$ which leads to the classical chemostat model.\n\n\n\\vskip0.8em\n\\item{\\textbf{Dynamics of the substrate concentration}} -- \n\\emph{The substrate concentration evolves according to the ordinary differential equation:\n\\begin{align}\n\\label{eq.substrat}\n \\dot S_t = \\rho_{\\textrm{\\tiny\\rm\\!\\! s}}(S_{t},\\nu_{t})\n\\end{align}}\nwhere\n\\begin{align*}\n \\rho_{\\textrm{\\tiny\\rm\\!\\! s}} (s,\\nu)\n &\\eqdef\n D({\\mathbf s}_{\\textrm{\\tiny\\rm in}}-s)-k\\, \\mu(s,\\nu)\\,,\n\\\\\n \\mu(s,\\nu)\n &\\eqdef\n \\frac{1}{V} \\int_{\\XX} \\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(s,x)\\,\\nu(\\rmd x)\n = \n \\frac{1}{V} \\sum_{i=1}^{N} \\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(s,x^i)\n\\end{align*}\nwith $\\nu=\\sum_{i=1}^N \\delta_{x^i}$;\n$D$ is the dilution rate (1\/h), \n${\\mathbf s}_{\\textrm{\\tiny\\rm in}}$ is the input concentration (mg\/l),\n$k$ is the stoichiometric coefficient (inverse of the yield coefficient),\nand $V$ is the representative volume (l).\nMass balance leads to Equation \\eqref{eq.substrat} and the initial condition $S_{0}$ may be random.\n\\end{enumerate}\n\nTo ensure the existence and uniqueness of solutions of the ordinary differential equations \\eqref{eq.masses} and \\eqref{eq.substrat}, we assume\nthat the function $\\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(s,x)$ is Lipschitz continuous w.r.t. $s$ uniformly in $x$:\n\\begin{align}\n\\label{hyp.rhog.lipschitz}\n \\bigl|\\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(s_1,x)-\\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(s_2,x)\\bigr|\n \\leq \n k_g \\,|s_1-s_2|\n\\end{align}\nfor all $s_1,\\,s_2 \\geq 0$ and all $x\\in\\XX$. It is further assumed that:\n\\begin{align}\n\\label{hyp.rhog.borne}\n 0\n \\leq\n \\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(s,x)\n \\leq \n \\bar g\n\\end{align}\nfor all $(s,x)\\in\\RR_+ \\times \\X$, and that \n in the absence of substrate the bacteria do not grow:\n\\begin{align}\n\\label{hyp.rhog.nulle.en.0}\n \\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(0,x)\n =\n 0\n\\end{align}\nfor all $x\\in \\X$. To ensure that the mass of a bacterium \nstays between $0$ and $m_{\\textrm{\\tiny\\rm max}}$, it is finally assumed that:\n\\begin{align}\n\\label{hyp.rhog.nulle.en.x=mmax}\n \\rho_{\\textrm{\\tiny\\rm\\!\\! 
g}}(s,m_{\\textrm{\\tiny\\rm max}})\n =\n 0\n\\end{align}\nfor any $s\\geq 0$.\n\n\n\\subsection{Algorithm}\n\n\\begin{algorithm}\n\\begin{center}\n\\begin{minipage}{14cm}\n\\begin{algorithmic}\n\\STATE $t\\ot 0$\n\\STATE sample $(S_0,\\nu_0=\\sum_{i=1}^{N_{0}}\\delta_{x^i_{0}})$\n\\WHILE {$t\\leq t_{\\textrm{\\tiny\\rm max}}$}\n \\STATE $N \\ot \\crochet{\\nu_{t},1}$\n \\STATE $\\tau \\ot (\\bar\\lambda+D)\\,N$\n \\STATE $\\Delta t \\sim {\\textrm{\\rm Exp}}(\\tau)$\n \\STATE integrate the equations for the mass \\eqref{eq.masses} \n and the substrate \\eqref{eq.substrat} over $[t,t+\\Delta t]$\n \\STATE $t \\ot t+\\Delta t$\n \\STATE draw $x$ uniformly in $\\{x^i_{t}\\,;\\,i=1,\\dots,N_{t}\\}$ \n \\STATE $u\\sim U[0,1]$ \n \\IF {$u\\leq \\lambda(S_{t},x)\/(\\bar\\lambda+D)$}\n \\STATE $\\alpha \\sim Q$\n \\STATE $\\nu_{t} \\ot \\nu_{t} -\\delta_{x}+\\delta_{\\alpha\\,x}\n +\\delta_{(1-\\alpha)\\,x}$\n \\COMMENT{division}\n \\ELSIF{$u\\leq (\\lambda(S_{t},x)+D)\/(\\bar\\lambda+D)$}\n \\STATE $\\nu_{t} \\ot \\nu_{t} -\\delta_{x}$\n \\COMMENT{up-take}\n \\ENDIF\n\\ENDWHILE\n\\end{algorithmic}\n\\end{minipage}\n\\end{center}\n\\caption{\\itshape ``Exact'' Monte Carlo simulation of the individual-based model:\napproximations only lie in the numerical integration of the ODEs and in \nthe pseudo-random number generators.}\n\\label{algo.ibm}\n\\end{algorithm}\n\nIn the model described above, the division rate $\\lambda(s,x)$ depends on the substrate concentration $s$ and on the mass $x$ of each individual, which continuously evolve according to the system of coupled ordinary differential equations \\eqref{eq.masses} and \\eqref{eq.substrat}; to simulate the division events we therefore make use of a rejection sampling technique. 
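As a complement to Algorithm~\ref{algo.ibm}, the simulation loop can be sketched in Python as follows; the rate bound `lam_bar` plays the role of $\bar\lambda$ from the algorithm, while the concrete functions `lam`, `growth` and `substrate_drift`, the Beta splitting law and the crude Euler integration of the coupled ODEs are placeholder choices, not part of the model specification.

```python
import random

def simulate(masses, S, t_max, lam, lam_bar, D, growth, substrate_drift,
             Q=lambda: random.betavariate(5, 5), dt=1e-3):
    """Thinning simulation of the individual-based chemostat model.
    lam(s, x): division rate, bounded by lam_bar; growth(s, x) and
    substrate_drift(s, masses): right-hand sides of the mass and
    substrate ODEs (placeholders for the model's rho_g and rho_s)."""
    t = 0.0
    while t <= t_max and masses:
        tau = (lam_bar + D) * len(masses)       # bound on the total event rate
        delta = random.expovariate(tau)         # time to the next candidate event
        # Euler integration of the coupled ODEs over [t, t + delta]
        steps = max(1, int(delta / dt))
        h = delta / steps
        for _ in range(steps):
            S += h * substrate_drift(S, masses)
            masses = [x + h * growth(S, x) for x in masses]
        t += delta
        j = random.randrange(len(masses))       # uniform choice of an individual
        u = random.random()
        if u <= lam(S, masses[j]) / (lam_bar + D):          # division (accepted)
            alpha = Q()
            x = masses.pop(j)
            masses += [alpha * x, (1.0 - alpha) * x]
        elif u <= (lam(S, masses[j]) + D) / (lam_bar + D):  # up-take
            masses.pop(j)
        # otherwise: rejection, no event
    return masses, S, t
```

The division and up-take branches reproduce the two acceptance tests of the algorithm; the remaining probability mass corresponds to a rejected (phantom) event.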
It is assumed that there exists $\\bar\\lambda<\\infty$ such that:\n\\[\n \\lambda(s,x)\\leq \\bar\\lambda\n\\]\nhence an upper bound for the total event rate (division and up-take combined) at the population level is given by:\n\\[\n \\tau \\eqdef (\\bar\\lambda+D)\\,N\\,.\n\\]\nAt time $t+\\Delta t$ with $\\Delta t\\sim {\\textrm{\\rm Exp}}(\\tau)$, we determine whether an event has occurred, and its type, by acceptance\/rejection.\nTo this end, the masses of the $N$ individuals and the substrate concentration evolve according to the coupled ODEs \\eqref{eq.masses} and \\eqref{eq.substrat}.\nThen we choose uniformly at random an individual within the population $\\nu_{(t+\\Delta t)^-}$, that is, the population at time $t+\\Delta t$ before any possible event; let $x_{(t+\\Delta t)^-}$ denote its mass. Then:\n\\begin{enumerate}\n\n\\item With probability:\n\\[\n \\frac{\\bar\\lambda}{(\\bar\\lambda+D)}\n\\]\nwe determine whether division has occurred by acceptance\/rejection:\n\\begin{itemize}\n\\item division occurs, that is:\n\\begin{align}\n\\label{eq.event.division}\n \\nu_{t+\\Delta t}\n =\n \\nu_{(t+\\Delta t)^-}\n -\\delta_{x_{(t+\\Delta t)^-}} \n +\\delta_{\\alpha\\,x_{(t+\\Delta t)^-}} \n +\\delta_{(1-\\alpha)\\,x_{(t+\\Delta t)^-}} \n \\qquad\n \\textrm{with }\\alpha\\sim Q\n\\end{align}\nwith probability $\\lambda(S_{t},x_{(t+\\Delta t)^-})\/\\bar\\lambda$;\n\\item \nno event occurs with probability $1-\\lambda(S_{t},x_{(t+\\Delta t)^-})\/\\bar\\lambda$.\n\\end{itemize}\nIn conclusion, the event \\eqref{eq.event.division} occurs with probability:\n\\[\n \\frac{\\lambda\\bigl(S_{t},x_{(t+\\Delta t)^-}\\bigr)}{\\bar\\lambda}\n \\,\\frac{\\bar\\lambda}{(\\bar\\lambda+D)} \n = \n \\frac{\\lambda\\bigl(S_{t},x_{(t+\\Delta t)^-}\\bigr)}{(\\bar\\lambda+D)}\\,.\n\\]\n\n\\item With probability:\n\\[\n \\frac{D}{(\\bar\\lambda+D)}\n =\n 1-\\frac{\\bar\\lambda}{(\\bar\\lambda+D)}\n\\]\nthe individual is withdrawn, that is:\n\\begin{align}\n\\label{eq.event.soutirage}\n 
\\nu_{t+\\Delta t}\n =\n \\nu_{(t+\\Delta t)^-}\n -\\delta_{x_{(t+\\Delta t)^-}} \n\\end{align}\n\\end{enumerate}\nFinally, the events and the associated probabilities are:\n\\begin{itemize}\n\\item \ndivision \\eqref{eq.event.division} with probability\n$\\lambda(S_{t},x_{(t+\\Delta t)^-})\/(\\bar\\lambda+D)$,\n\\item\nup-take \\eqref{eq.event.soutirage} with probability \n${D}\/{(\\bar\\lambda+D)}$\n\\end{itemize}\nand no event (rejection) with the remaining probability.\nThe details are given in Algorithm~\\ref{algo.ibm}.\n\nTechnically, the numbering of individuals is as follows: at the initial time individuals are numbered from $1$ to $N$; in case of division, the daughter cell $\\alpha \\,x$ keeps the index of the parent cell and the daughter cell $(1-\\alpha)\\,x$ takes the index $N+1$; in case of up-take, individual $N$ acquires the index of the withdrawn cell.\n\n\n\\section{Notations}\n\\label{sec.notations}\n\nBefore proposing an explicit mathematical description of the process $(\\nu_{t})_{t\\geq 0}$ we introduce some notations.\n\n\n\\subsection{Point measures}\n\n\nThe notation \\eqref{eq.nu} for the bacterial population may seem somewhat abstract, but it bridges the gap between the ``discrete'' -- counting point measures --\nand the ``continuous'' -- continuous measures of the population densities -- in the context of the asymptotic large-population analysis. Indeed for any measure $\\nu(\\rmd x)$ defined on $\\R_{+}$ and any function $\\varphi:\\R_{+}\\mapsto\\R$, we define:\n\\[\n \\crochet{\\nu,\\varphi}\n \\eqdef\n \\int_{\\R_{+}}\\varphi(x)\\,\\nu(\\rmd x)\\,.\n\\]\nThis notation is valid for continuous measures as well as for point measures $\\nu_{t}(\\rmd x)$ defined by \\eqref{eq.nu}; in the latter case \n$\\crochet{\\nu_{t},\\varphi} = \\sum_{i=1}^{N_{t}} \\varphi(x^i_{t})$.\n\nIn practice, this notation gives direct access to macroscopic quantities, e.g. 
\nat time $t$ the \\emph{population size} is:\n\\[\n N_{t} = \\crochet{\\nu_{t},1}\n\\]\nand the \\emph{total biomass} is:\n\\[\n X_{t} \\eqdef \\crochet{\\nu_{t},I}\n = \\sum_{i=1}^{N_{t}} x^i_{t}\n\\]\nwhere $1(x)\\equiv 1$ and $I(x)\\equiv x$. Finally:\n\\[\n x\\in\\nu_{t}=\\sum_{i=1}^{N_{t}}\\delta_{x^i_{t}}(\\rmd x)\n\\]\nwill denote any individual among $\\{x^1_{t},\\dots,x^{N_{t}}_{t}\\}$.\n\nThe set of finite and positive measures on $\\X$ is denoted $\\M_{F}(\\XX)$,\nand $\\M(\\XX)$ is the subset of finite point measures on $\\X$:\n\\[\n \\M(\\XX)\n \\eqdef\n \\left\\{\n \t\\sum_{i=1}^N \\delta_{x^i} \\;; \\; N \\in \\NN, \\, x^i \\in \\X \n \\right\\}\n\\]\nwhere by convention $\\sum_{i=1}^0\\delta_{x^i}$ is the null measure.\n\n\n\\subsection{Growth flow}\n\n\nLet:\n\\[\n\\begin{array}{rrcl}\n A_{t} &: \n \\R_{+}\\times\\MM(\\XX) \n &\\xrightarrow[]{\\mbox{}\\hskip1em} & \n \\R_{+}\\times\\MM(\\XX)\n \\\\\n &(s,\\nu)&\\xrightarrow[]{\\mbox{}\\hskip1em} &\n A_{t} (s,\\nu)\n\\end{array}\n\\]\nbe the differential flow associated with the coupled system of ODEs \\eqref{eq.substrat}--\\eqref{eq.masses} apart from any event (division or up-take), i.e.:\n\\begin{align}\n\\label{eq.flot.A}\n A_{t}(s,\\nu)\n =\n \\Biggl(\n \t\tA^0_{t}(s,\\nu) \\;,\\; \\sum_{i=1}^N\\delta_{A^i_{t}(s,\\nu)}\n \\Biggr)\n \\quad\\textrm{with }\\nu=\\sum_{i=1}^N\\delta_{x^i}\n \\,,\n\\end{align}\nwhere $A^0_{t}(s,\\nu)$ and $(A^i_{t}(s,\\nu)\\,;\\,i=1,\\dots,N)$ are the coupled solutions of \\eqref{eq.substrat}--\\eqref{eq.masses} taken at time $t$ from the initial condition $(s,\\nu)$, that is:\n\\begin{align*}\n \\frac{\\rmd}{\\rmd t} \n A_{t}^0(s,\\nu)\n &= \\rho_{\\textrm{\\tiny\\rm\\!\\! s}}\\Bigl(A_{t}^0(s,\\nu),\\sum_{i=1}^N \\delta_{A_{t}^i(s,\\nu)}\\Bigr)\n\\\\\n &= D\\,({\\mathbf s}_{\\textrm{\\tiny\\rm in}}-A_{t}^0(s,\\nu))\n -\\frac{k}{V} \\sum_{i=1}^N \\rho_{\\textrm{\\tiny\\rm\\!\\! 
g}}(A_{t}^0(s,\\nu),A_{t}^i(s,\\nu))\\,,\n &\n A_{0}^0(s,\\nu)=s\\,,\n\\\\[0.5em]\n \\frac{\\rmd}{\\rmd t} \n A_{t}^i(s,\\nu)\n &=\n \\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(A_{t}^0(s,\\nu),A_{t}^i(s,\\nu))\\,,\n &\n A_{0}^i(s,\\nu)=x^i\n\\end{align*}\nfor $i=1,\\dots,N$. Hence the flow $A_{t}(s,\\nu)$ depends implicitly on the size $N=\\crochet{\\nu,1}$ of the population $\\nu$. \n\n\\medskip\n\nThe stochastic process $(\\nu_{t})_{t\\geq 0}$ features jump dynamics (division and up-take) and follows the dynamics of the flow $A_{t}$ between the jumps.\nWe can therefore generalize a formula that is well known for pure jump processes:\n\\begin{align}\n\\label{eq.dyn.saut.flot}\n \\Phi(S_{t},\\nu_{t})\n &=\n \\Phi(A_{t}(S_{0},\\nu_{0}))\n +\n \\sum_{u\\leq t} \n \\bigl[ \n \\Phi(A_{t-u}(S_{u},\\nu_{u}))-\\Phi(A_{t-u}(S_{u},\\nu_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}))\n \\bigr] \n \\,,\\quad t\\geq 0\n\\end{align}\nfor any function $\\Phi$ defined on $\\R\\times\\MM(\\XX)$. \n\nThe sum $ \\sum_{u\\leq t}$ contains only a finite number of terms as the process $(\\nu_{t})_{t\\geq 0}$ admits only a finite number of jumps over any finite time interval. Indeed, the number of jumps in the process $(\\nu_{t})_{t\\geq 0}$ is bounded by a linear birth and death process with \\textit{per capita} birth rate $\\bar\\lambda$\nand \\textit{per capita} death rate $D$ \\citep{allen2003a}.\n\n\n\n\n\n\n\n\\section{Microscopic process}\n\\label{sec.processus.microscopique}\n\nLet $(S_0,\\nu_0)$ denote the initial condition of the process; it is a random variable with values in $\\R_+ \\times \\M(\\X)$.\n\n\nThe equation \\eqref{eq.dyn.saut.flot} includes information on the flow, i.e. the dynamics between the jumps, but no information on the jumps themselves. 
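The pairings $\crochet{\nu,f}$ and the jump updates described above can be mirrored directly in code; a minimal Python sketch, representing a point measure by the list of its atom masses (the helper names are ours):

```python
def pair(masses, f):
    # <nu, f> = sum_i f(x_i) for the point measure nu = sum_i delta_{x_i};
    # the empty list represents the null measure, with <nu, f> = 0.
    return sum(f(x) for x in masses)

def jump_division(masses, j, alpha):
    # nu -> nu - delta_x + delta_{alpha x} + delta_{(1-alpha) x}: N -> N + 1
    x = masses[j]
    return masses[:j] + masses[j+1:] + [alpha * x, (1.0 - alpha) * x]

def jump_uptake(masses, j):
    # nu -> nu - delta_x: N -> N - 1
    return masses[:j] + masses[j+1:]

nu = [0.2, 0.5, 0.8]
assert pair(nu, lambda x: 1.0) == 3.0            # population size <nu, 1>
assert abs(pair(nu, lambda x: x) - 1.5) < 1e-12  # total biomass  <nu, I>
nu2 = jump_division(nu, 2, 0.25)                 # divide the atom 0.8
assert len(nu2) == 4 and abs(pair(nu2, lambda x: x) - 1.5) < 1e-12
```

The last assertion illustrates that a division leaves the total biomass $\crochet{\nu,I}$ unchanged, while the population size $\crochet{\nu,1}$ jumps by one.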
To obtain an explicit equation for $(S_t,\\nu_t)_{t\\geq 0}$ we introduce Poisson random measures which govern, on the one hand, the arrival of new individuals through cell division and, on the other hand, the withdrawal of individuals through up-take. To this end we consider two Poisson random measures $N_1(\\mathrm{d} u, \\mathrm{d} j, \\mathrm{d} \\alpha, \\mathrm{d} \\theta)$ and $N_2(\\mathrm{d} u, \\mathrm{d} j)$\nrespectively defined on $\\RR_+ \\times \\NN^* \\times [0,1] \\times [0,1]$ and $\\RR_+ \\times \\NN^*$ with respective intensity measures:\n\\begin{align*}\n n_1(\\mathrm{d} u, \\mathrm{d} j, \\mathrm{d} \\alpha, \\mathrm{d} \\theta)\n &\\eqdef\n \\bar\\lambda\\, \\mathrm{d} u \\,\\Big(\\sum_{k \\geq 1} \n \\delta_k(\\mathrm{d} j)\\Big) \\, Q(\\mathrm{d} \\alpha) \\, \\mathrm{d} \\theta\\,,\n\\\\\n n_2(\\mathrm{d} u, \\mathrm{d} j)\n &\\eqdef\n D\\,\\mathrm{d} u \\,\\Big(\\sum_{k \\geq 1} \\delta_k(\\mathrm{d} j)\\Big)\\,.\n\\end{align*}\nSuppose that $N_1$, $N_{2}$, $S_{0}$ and $\\nu_{0}$ are mutually independent.\nLet $(\\mathcal{F}_t)_{t \\geq 0}$ be the canonical filtration generated by $(S_0,\\nu_0)$, $N_1$ and $N_2$.\nAccording to \\eqref{eq.dyn.saut.flot}, for any function $\\Phi$ defined on \n$\\R\\times\\MM(\\XX)$:\n\\begin{align}\n\\nonumber\n &\\Phi(S_{t},\\nu_{t})\n =\n \\Phi(A_{t}(S_{0},\\nu_{0}))\n\\\\\n\\nonumber\n &\n \\quad+\n \\iiiint\\limits_{[0,t]\\times\\N^*\\times[0,1]^2}\n 1_{\\{j\\leq N_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}\\}} \\, \n 1_{\\{0\\leq \\theta \\leq \\lambda(S_{u},x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j)\/\\bar\\lambda\\}}\\,\n \\bigl[\\Phi(A_{t-u}(S_{u},\\nu_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}\n -\\delta_{x^j_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}\n +\\delta_{\\alpha\\,x^j_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}\n +\\delta_{(1-\\alpha)\\,x^j_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}))\n 
\\\\[-0.7em]&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\n\\nonumber\n -\\Phi(A_{t-u}(S_{u},\\nu_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}))\\bigr]\\,\n\t\t \t\tN_1(\\mathrm{d} u, \\mathrm{d} j, \\mathrm{d} \\alpha, \\mathrm{d} \\theta) \n\\\\[0.2em]\n\\label{eq.Phi.S.nu}\n &\\quad+\n\t\t\\iint\\limits_{[0,t]\\times\\N^*} \n\t\t\t1_{\\{j\\leq N_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}\\}} \\, \n\t\t\t \\bigl[\\Phi(A_{t-u}(S_{u},\\nu_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}-\\delta_{x^j_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}))\n\t\t\t -\\Phi(A_{t-u}(S_{u},\\nu_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}))\\bigr]\n\t\t\t \t\t\t \\,\tN_2(\\mathrm{d} u, \\mathrm{d} j)\\,.\n\\end{align}\nIn particular, we obtain the following equation for the couple $(S_{t},\\nu_{t})$:\n\\begin{align}\n\\nonumber\n &(S_{t},\\nu_{t})\n =\n A_{t}(S_{0},\\nu_{0})\n\\\\\n\\nonumber\n &\n \\qquad+\n \\iiiint\\limits_{[0,t]\\times\\N^*\\times[0,1]^2}\n 1_{\\{j\\leq N_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}\\}} \\, \n 1_{\\{0\\leq \\theta \\leq \\lambda(S_{u},x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j)\/\\bar\\lambda\\}}\\,\n \\bigl[A_{t-u}(S_{u},\\nu_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}\n -\\delta_{x^j_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}\n +\\delta_{\\alpha\\,x^j_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}\n +\\delta_{(1-\\alpha)\\,x^j_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}})\n\\\\[-0.7em]\n\\nonumber\n&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\n -A_{t-u}(S_{u},\\nu_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}})\\bigr]\\,\n\t\t \t\tN_1(\\mathrm{d} u, \\mathrm{d} j, \\mathrm{d} \\alpha, \\mathrm{d} \\theta) \n\\\\[0.2em]\n\\label{def.proc.S.nu}\n &\\qquad+\n\t\t\\iint\\limits_{[0,t]\\times\\N^*} \n\t\t\t1_{\\{j\\leq 
N_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}\\}} \\, \n\t\t\t \\big[A_{t-u}(S_{u},\\nu_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}-\\delta_{x^j_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}})\n\t\t\t -A_{t-u}(S_{u},\\nu_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}})\\big]\n\t\t\t \t\t\t \\,\tN_2(\\mathrm{d} u, \\mathrm{d} j)\\,.\n\\end{align}\n\n\\medskip\n\nFrom now on, we consider test functions $\\Phi$ of the form:\n\\[\n \\Phi(s,\\nu)\n =\n F(s,\\crochet{\\nu,f})\n\\]\nwith $F \\in C^{1,1}(\\RR^+ \\times \\RR)$ and $f \\in C^1(\\X)$.\n\n\n\\begin{lemma}\n\\label{lem.gen.inf}\nFor any $t>0$:\n\\begin{align}\n\\nonumber\n &\n F(S_t,\\crochet{\\nu_t, f})\n = \n F(S_0,\\crochet{\\nu_0, f})\n\\\\\n\\nonumber\n &\\quad\n + \\int_0^t \\left[\n\t \t\t\\rho_{\\textrm{\\tiny\\rm\\!\\! s}} (S_u,\\nu_u) \\, \n\t \t\t\\partial_s F\\left(S_u, \\, \\left<\\nu_u, f\\right>\\right)\n\t \t\t+ \\left< \\nu_u , \\, \\rho_{\\textrm{\\tiny\\rm\\!\\! 
g}}(S_u,.)\\,f'\\right> \\,\n\t \t\t\\partial_x F\\left(S_u, \\, \\left<\\nu_u, f\\right>\\right)\n\t \t\\right] \\mathrm{d} u \n\\\\\n\\nonumber \n &\\quad\n + \\iiiint\\limits_{[0,t]\\times\\N^*\\times[0,1]^2} \n\t\t1_{\\{j\\leq N_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}\\}} \\,\n\t\t1_{\\{0\\leq \\theta \\leq \\lambda(S_{u},x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j)\/\n\t\t\t\t\\bar\\lambda\\}}\\, \n\t\t\\bigl[\n\t\t\tF(S_u,\\crochet{\\nu_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}\n\t\t\t\t\t-\\delta_{x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}\n\t\t\t\t\t+\\delta_{\\alpha \\, x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}\n\t\t\t\t\t+\\delta_{(1-\\alpha)\\, x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}, f} \n\t\t\t)\n \\\\[-0.9em] \n \\nonumber\n & \\qquad\\qquad\\qquad\\qquad\\qquad\\quad\n \\qquad\\qquad\\qquad\\qquad\\qquad\n\t\t\t-\n\t\t\tF(S_u, \\crochet{\\nu_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}, f}) \n\t \\bigr] \n\t \\;\n\t\t N_1(\\mathrm{d} u, \\mathrm{d} j, \\mathrm{d} \\alpha, \\mathrm{d} \\theta) \n\\\\\n\\label{eq.F.f}\n &\\quad \n + \\iint\\limits_{[0,t]\\times\\N^*} \n\t 1_{\\{j\\leq N_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}\\}} \\, \n\t \\bigl[\n\t\tF(S_u, \\crochet{\\nu_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}-\\delta_{x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}, f})\n\t -\n\t F\\left(S_u, \\, \\left<\\nu_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}, f\\right>\\right) \n\t \\bigr] \\, N_2(\\mathrm{d} u, \\mathrm{d} j).\n\\end{align}\n\\end{lemma}\n\n\n\\begin{proof}\nFrom \\eqref{eq.Phi.S.nu}:\n\\begin{align*}\n &\n \\crochet{\\nu_t, f}\n = \n \\sum_{i=1}^{N_0} f(A_t^i(S_0,\\nu_0))\n\\\\\n &\\quad\n + \\iiiint\\limits_{[0,t]\\times\\N^*\\times[0,1]^2}\n 1_{\\{j\\leq N_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}\\}} 
\\, \n 1_{\\{0\\leq \\theta \\leq \\lambda(S_{u},x_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}^j)\/\\bar\\lambda\\}}\\,\n\\\\[-0.5em]\n\t& \\qquad\\qquad\\qquad\n \\times\\Bigl[\n \t \\sum_{i=1}^{N_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}+1}\n\t\t \tf(A_{t-u}^i(S_u,\n\t\t \t\t\\nu_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}-\\delta_{x_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}^j}\n\t\t\t\t+\\delta_{\\alpha\\,x_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}^j}\n\t\t \t +\\delta_{(1-\\alpha)\\,x_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}^j}))\n\\\\[-1em]\n\t& \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\n\t\t-\\sum_{i=1}^{N_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}}\n\t\t \tf(A_{t-u}^i(S_u,\\nu_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}))\n \\Bigl] \\,\n N_1(\\mathrm{d} u, \\mathrm{d} j, \\mathrm{d} \\alpha, \\mathrm{d} \\theta) \n\\\\\n &\\quad\n + \\iint\\limits_{[0,t]\\times\\N^*} \n\t\t1_{\\{j\\leq N_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}\\}} \\, \n\t\t\\Bigl[\n\t\t \\sum_{i=1}^{N_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}-1}\n\t\t \t f(A_{t-u}^i(S_u,\n\t\t \t\t\t\\nu_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}-\\delta_{x_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}^j}))\n\t\t -\n\t\t \\sum_{i=1}^{N_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}} f(A_{t-u}^i(S_u,\\nu_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}})) \n\t\t\\Bigl] \\, N_2(\\mathrm{d} u, \\mathrm{d} j).\n\\end{align*}\nAccording to the chain rule formula, for any $\\nu=\\sum_{i=1}^{N}\\delta_{x^i}$:\n\\begin{align*} \n f(A_{t-u}^i(s,\\nu))\n & = \n f(x^i)\n +\n \\int_u^t \t\\rho_{\\textrm{\\tiny\\rm\\!\\! 
g}}(A_{\\tau-u}^0(s,\\nu),A_{\\tau-u}^i(s,\\nu)) \\, f'(A_{\\tau-u}^i(s,\\nu))\n\t\t \\,\\mathrm{d} \\tau\n\\\\\n & = \n f(x^i)\n +\n \\int_u^t \\varphi(A_{\\tau-u}^0(s,\\nu),A_{\\tau-u}^i(s,\\nu)) \\,\\mathrm{d} \\tau\n\\end{align*}\nfor $i \\leq N$, with:\n\\begin{align*}\n \\varphi(s,x) \\eqdef \\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(s,x) \\, f'(x)\\,.\n\\end{align*}\nHence:\n\\begin{align*}\n &\\crochet{\\nu_t, f}\n = \n \\crochet{\\nu_0, f} \n\\\\\n & \\qquad \n + \\iiiint\\limits_{[0,t]\\times\\N^*\\times[0,1]^2}\n\t\t1_{\\{j\\leq N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}\\}} \\, \n\t\t1_{\\{0\\leq \\theta \\leq \\lambda(S_{u},x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j)\/\\bar\\lambda\\}}\n\\\\[-0.8em]\n &\\qquad\\qquad\\qquad\\qquad\\qquad\n \\times \\,\n\t\t\t \\left[\n\t\t\t f(\\alpha \\, x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j)\n\t\t\t + f((1-\\alpha) \\, x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j)\n\t\t\t\t- f(x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j)\n\t\t \\right] \\, \n N_1(\\mathrm{d} u, \\mathrm{d} j, \\mathrm{d} \\alpha, \\mathrm{d} \\theta) \n\\\\\n &\\qquad \n - \\iint\\limits_{[0,t]\\times\\N^*} \n\t\t\t1_{\\{j\\leq N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}\\}} \\, f(x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j) \\,\n\t\t\t \\,\tN_2(\\mathrm{d} u, \\mathrm{d} j)\n +T_0+T_1+T_2\n\\end{align*}\nwhere:\n\\begin{align*}\n T_0 \n &\\eqdef\n \\sum_{i=1}^{N_0} \\int_0^t \\varphi(A_\\tau^0(S_0,\\nu_0), A_\\tau^i(S_0,\\nu_0)) \\, \\rmd\\tau\t\n\\\\[1em]\n T_1\n &\\eqdef \n \\iiiint\\limits_{[0,t]\\times\\N^*\\times[0,1]^2}\n 1_{\\{j\\leq N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}\\}} \\,\n 1_{\\{0\\leq \\theta \\leq \\lambda(S_{u},x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j)\/\\bar\\lambda\\}} \\, \n\\\\[-0.7em]\n &\\qquad\\qquad\\qquad\n \\times\\int_u^t \\Bigl[\n\t 
\\sum_{i=1}^{N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}+1}\n\t\t \t\\varphi(A_{\\tau-u}^0(S_u,\n\t\t \t\t\t\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}-\\delta_{x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}\n\t\t\t\t\t+\\delta_{\\alpha \\, x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}\n\t\t \t\t\t+\\delta_{(1-\\alpha) \\, x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}),\n\\\\[-1em]\n &\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \n\t\t A_{\\tau-u}^i(S_u,\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}\n\t\t -\\delta_{x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}\n\t\t +\\delta_{\\alpha \\, x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}\n\t\t \t\t +\\delta_{(1-\\alpha) \\, x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}))\n\\\\[-0.6em]\n &\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\n\t -\\sum_{i=1}^{N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}\n\t\t \\varphi(A_{\\tau-u}^0(S_u,\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}),A_{\\tau-u}^i(S_u,\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}))\n\t\t\t \\Bigr]\\,\\mathrm{d} \\tau \t\n\\\\\n\t&\\qquad\\qquad\\qquad\n\t\\times N_1(\\mathrm{d} u, \\mathrm{d} j, \\mathrm{d} \\alpha, \\mathrm{d} \\theta)\n\\\\[1em]\n T_2\n &\\eqdef \n \\iint\\limits_{[0,t]\\times\\N^*} 1_{\\{j\\leq N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}\\}}\n \\; \\int_u^t\n\t\\Bigl[ \\sum_{i=1}^{N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}-1}\n\t\t \t\\varphi(A_{\\tau-u}^0(S_u,\n\t\t \t\t\t\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}-\\delta_{x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}), A_{\\tau-u}^i(S_u,\n\t\t \t\t\t\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}-\\delta_{x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}))\n\\\\[-0.6em]\n 
&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\n\t\t\t-\\sum_{i=1}^{N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}\n\t\t \t\\varphi(A_{\\tau-u}^0(S_u,\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}),A_{\\tau-u}^i(S_u,\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}})) \n\t\t\t \\Bigr] \\, \\mathrm{d} \\tau\n \\; N_2(\\mathrm{d} u, \\mathrm{d} j).\n\\end{align*}\nFubini's theorem applied to $T_1$ and $T_2$ leads to:\n\\begin{align*}\n T_1\n &= \n \\int_0^t \\iiiint\\limits_{[0,\\tau]\\times\\N^*\\times[0,1]^2}\n 1_{\\{j\\leq N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}\\}} \\, \n \\times 1_{\\{0\\leq \\theta \\leq \\lambda(S_{u},x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j)\/\\bar\\lambda\\}} \\, \n\\\\[-0.7em]\n &\\qquad\\qquad\\qquad\n \\times \\Bigl[\n\t\t\\sum_{i=1}^{N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}+1}\n\t\t \t\\varphi(A_{\\tau-u}^0(S_u,\n\t\t \t\t\t\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}\n\t\t\t\t\t-\\delta_{x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}\n\t\t\t\t\t+\\delta_{\\alpha \\, x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}\n\t\t \t\t\t+\\delta_{(1-\\alpha) \\, x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}),\n \\\\[-1em]\n &\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\n A_{\\tau-u}^i(S_u,\n \\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}-\\delta_{x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}\n +\\delta_{\\alpha \\, x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}\n\t\t \t\t+\\delta_{(1-\\alpha) \\, x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}))\n\\\\[-0.7em]\n &\\qquad\\qquad\\qquad\\qquad\\qquad\n\t\t-\\sum_{i=1}^{N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}\n\t\t 
\t\\varphi(A_{\\tau-u}^0(S_u,\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}),A_{\\tau-u}^i(S_u,\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}))\n \\Bigr] \n\\\\\n &\\qquad\\qquad\\qquad\n \\times N_1(\\mathrm{d} u, \\mathrm{d} j, \\mathrm{d} \\alpha, \\mathrm{d} \\theta) \\;\\mathrm{d} \\tau\n\\\\[1em]\n T_2\n &\\eqdef\n \\int_0^t \\iint\\limits_{[0,\\tau]\\times\\N^*} \n 1_{\\{j\\leq N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}\\}} \\, \n \\Bigl[\n\t\t\\sum_{i=1}^{N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}-1}\n\t\t \t\\varphi(A_{\\tau-u}^0(S_u,\n\t\t \t\t\t\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}-\\delta_{x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}), A_{\\tau-u}^i(S_u,\n\t\t \t\t\t\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}-\\delta_{x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}))\n \\\\[-1em]\n\t& \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \n\t\t-\\sum_{i=1}^{N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}\n\t\t \t\\varphi(A_{\\tau-u}^0(S_u,\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}),A_{\\tau-u}^i(S_u,\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}))\n \\Bigr] \n \\,\tN_2(\\mathrm{d} u, \\mathrm{d} j)\\; \\mathrm{d} \\tau\n\\end{align*}\nso, according to \\eqref{eq.Phi.S.nu}:\n\\begin{align*}\n T_0+T_1+T_2\n &= \n \\int_0^t \\crochet{\\nu_\\tau,\\varphi(S_\\tau,.)} \\, \\mathrm{d}\\tau \\,.\t\t \n\\end{align*}\nFinally,\n\\begin{align*}\n \\crochet{\\nu_t,f}\n &=\n \\crochet{\\nu_0, f} \n + \n \\int_0^t \\crochet{\\nu_u,\\rho_{\\textrm{\\tiny\\rm\\!\\! 
g}}(S_u,.)\\,f'} \\, \\mathrm{d} u \n\\\\\n &\\quad \n + \\iiiint\\limits_{[0,t]\\times\\N^*\\times[0,1]^2} \n\t1_{\\{j\\leq N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}\\}} \\, \n\t1_{\\{0\\leq \\theta \\leq \\lambda(S_{u},x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j)\/\\bar\\lambda\\}}\\,\n\t\\bigl[f(\\alpha \\,x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j)+f((1-\\alpha)\\,x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j)\n\t\t\t\t-f(x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j)\\bigr] \n\\\\[-1em]\n &\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\n \\times N_1(\\mathrm{d} u, \\mathrm{d} j, \\mathrm{d} \\alpha, \\mathrm{d} \\theta) \n\\\\\n &\\quad\n - \\iint\\limits_{[0,t]\\times\\N^*} \n\t\t\t1_{\\{j\\leq N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}\\}} \\, f(x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j) \\,\n\t\t\t \\,\tN_2(\\mathrm{d} u, \\mathrm{d} j).\n\\end{align*}\n\n\nSince $f$ and $f'$ are continuous and bounded (they are defined on the compact set $\\X$), we can conclude this proof by using the It\\^o formula for stochastic integrals with respect to Poisson random measures \\citep{rudiger2006a} to develop the differential of $F(S_{t},\\crochet{\\nu_t,f})$ using Equation \\eqref{eq.substrat} and the previous equation.\n\\carre\n\\end{proof}\n\n\n\\bigskip\n\nConsider the compensated Poisson random measures associated with $N_1$ and $N_2$:\n\\begin{align*}\n \\tilde N_1(\\mathrm{d} u, \\mathrm{d} j, \\mathrm{d} \\alpha, \\mathrm{d} \\theta)\n &\\eqdef\n N_1(\\mathrm{d} u, \\mathrm{d} j, \\mathrm{d} \\alpha, \\mathrm{d} \\theta)\n -\n n_1(\\mathrm{d} u, \\mathrm{d} j, \\mathrm{d} \\alpha, \\mathrm{d} \\theta)\\,,\n\\\\\n \\tilde N_2(\\mathrm{d} u, \\mathrm{d} j)\n &\\eqdef\n N_2(\\mathrm{d} u, \\mathrm{d} j)-n_2(\\mathrm{d} u, \\mathrm{d} j)\\,.\n\\end{align*}\n\n\\medskip\n\nAs the integrands in the Poissonian integrals of \\eqref{eq.F.f} are 
predictable, one can make use of the result of \\citet[p. 62]{ikeda1981a}:\n\\begin{proposition}\n\\label{prop.martingale}\nLet:\n\\begin{align*}\n M^1_t\n &\\eqdef \n \\iiiint\\limits_{[0,t]\\times\\N^*\\times[0,1]^2} \n 1_{\\{j\\leq N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}\\}} \\, \n 1_{\\{0\\leq \\theta \\leq \\lambda(S_{u},x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j)\/\\bar\\lambda\\}} \n\\\\[-1em]\n &\\qquad\\qquad\\qquad\\qquad\n \\times\n \\left[\n\t F\\left(S_u,\\left<\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}-\\delta_{x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}\n\t\t\t +\\delta_{\\alpha \\, x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}+\\delta_{(1-\\alpha)\\, x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}, f\\right> \\right)\n\t\t\t\t-F\\left(S_u, \\, \\left<\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}, f\\right>\\right) \n\t \\right]\n\\\\\n &\\qquad\\qquad\\qquad\\qquad\n \\times \\tilde N_1(\\mathrm{d} u, \\mathrm{d} j, \\mathrm{d} \\alpha, \\mathrm{d} \\theta)\\,,\n\\\\\n M^2_t\n &\\eqdef \n \\iint\\limits_{[0,t]\\times\\N^*}\n\t 1_{\\{j\\leq N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}\\}} \\, \n\t \\left[\n\t F\\left(S_u, \\left<\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}-\\delta_{x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}, f\\right>\\right)\n\t -\n\t F\\left(S_u, \\, \\left<\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}, f\\right>\\right) \n\t \\right] \\,\t\\tilde N_2(\\mathrm{d} u, \\mathrm{d} j)\\,.\n\\end{align*}\nWe have the following properties of martingales:\n\\begin{enumerate}\n\n\\item if for any $t\\geq 0$:\n\\begin{align*}\n &\\EE\\Bigl(\n \\int_0^t \\int_{\\X} \\lambda(S_{u}, x) \\int_0^1\n\t \\bigl|\n F(S_u,\\crochet{\\nu_u-\\delta_x+\\delta_{\\alpha \\, x}\n +\\delta_{(1-\\alpha)\\,x},f})\n\\\\\n 
&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\n -F(S_u,\\crochet{\\nu_u,f})\n\t\\bigr| \\, Q(\\mathrm{d} \\alpha)\\, \\nu_u(\\mathrm{d} x)\\,\\mathrm{d} u\n \\Bigr) < + \\infty\n\\end{align*}\nthen $(M^1_t)_{t\\geq 0}$ is a martingale;\n\\item if for any $t\\geq 0$\n\\begin{align*}\n \\EE\\Bigl(\n \\int_0^t \\int_{\\X} \n\t \\bigl|\n\t\tF(S_u,\\crochet{\\nu_u-\\delta_x,f})\n\t\t-\n\t\tF(S_u,\\crochet{\\nu_u,f})\n\t \\bigr|\n\t\t\t\\, \\nu_u(\\mathrm{d} x)\\,\\mathrm{d} u\n\t\\Bigr) < +\\infty\n\\end{align*}\nthen $(M^2_{t})_{t\\geq 0}$ is a martingale;\n\n\\item if for any $t\\geq 0$\n\\begin{align*}\n &\\EE\\Bigl(\n \\int_0^t \\int_{\\X} \\lambda(S_{u}, x) \\int_0^1 \n\t \\bigl|\n\t F(S_u,\\crochet{\n\t \\nu_u-\\delta_x+\\delta_{\\alpha \\, x}+\\delta_{(1-\\alpha)\\,x},f\n\t })\n \\\\ \n & \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\n\t\t -\n\t\t F(S_u,\\crochet{\\nu_u,f})\n\t \\bigr|^2 \\,\n\t Q(\\mathrm{d} \\alpha)\\,\\nu_u(\\mathrm{d} x)\\,\\mathrm{d} u\n \\Bigr)\n < + \\infty\n\\end{align*}\nthen $(M^1_{t})_{t\\geq 0}$ \nis a square integrable martingale with predictable quadratic variation:\n\\begin{align*}\n &\\crochet{M^1}_t\t\n \\eqdef\n \\int_0^t \\int_{\\X} \\lambda(S_{u}, x) \\, \\int_0^1 \n \\bigl[\n\t\tF(S_u,\n\t\t \\crochet{\n\t\t \\nu_u-\\delta_x+\\delta_{\\alpha \\, x}+\\delta_{(1-\\alpha)\\,x}\n\t\t ,f})\n \\\\ \n & \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\n\t\t-\n\t\tF(S_u,\\crochet{\\nu_u,f})\n\t\\bigr]^2 \\,\n\tQ(\\mathrm{d} \\alpha)\\, \\nu_u(\\mathrm{d} x)\\,\\mathrm{d} u\\,;\n\\end{align*}\n\n\\item if for any $t\\geq 0$\n\\begin{align*}\n \\EE\\Bigl(\\int_0^t \\int_{\\X} \n \\left|\n\t F(S_u,\\crochet{\\nu_u-\\delta_x,f})\n\t\t-\n\t\tF(S_u,\\crochet{\\nu_u,f})\n\t\\right|^2 \\, \\nu_u(\\mathrm{d} x)\\,\\mathrm{d} u\n \\Bigr) < + \\infty\n\\end{align*}\nthen $(M^2_{t})_{t\\geq 0}$ \nis a square integrable martingale with predictable quadratic variation:\n\\begin{align*}\n \\crochet{M^2}_t\t\n \\eqdef\n D \\, \\int_0^t \\int_{\\X} 
\n \\left[\n F(S_u,\\crochet{\\nu_u-\\delta_x,f})\n\t-\n\tF(S_u,\\crochet{\\nu_u,f})\n \\right]^2 \\, \n \\nu_u(\\mathrm{d} x)\\,\\mathrm{d} u\\,.\n\\end{align*}\n\\end{enumerate}\n\\end{proposition}\n\n\n\n\\bigskip\n\n\n\\begin{lemma}[Control of the population size]\n\\label{lem.taille.pop}\nLet $T>0$, if there exists $p\\geq 1$ such that $\\EE(\\crochet{\\nu_0,1}^p) <\\infty$, then:\n\\begin{align*}\n \\EE\\left( \n \\sup_{t \\in [0,T]} \\left< \\nu_t,1 \\right>^p \n \\right) \n \\leq C_{p,T}\n\\end{align*}\nwhere $C_{p,T}<\\infty $ depends only on $p$ and $T$.\n\\end{lemma}\n\n\\begin{proof}\nFor any $n \\in \\NN$, define the following stopping time:\n\\begin{align*}\n \\tau_n \\eqdef \\inf \\{t \\geq 0, \\, N_t \\geq n \\}\\,.\n\\end{align*}\nLemma \\ref{lem.gen.inf} applied to $F(s,x)=x^p$ and $f(x)=1$ leads to:\n\\begin{multline*}\n \\sup_{u\\in [0,t \\wedge \\tau_n]} \\crochet{\\nu_u,1}^p\n \\leq \n \\crochet{\\nu_0,1}^p\n\\\\\n + \\iiiint\\limits_{[0,t\\wedge \\tau_n]\\times\\N^*\\times[0,1]^2} \n\t \t1_{\\{j\\leq N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}\\}}\\, \n\t\t1_{\\{0\\leq \\theta \\leq \\lambda(S_{u}, x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j)\/ \\bar\\lambda\\}}\\,\n\t \t\\bigl[ \n\t\t (\\crochet{ \\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}},1}+1)^p-\\crochet{\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}},1}^p \n\t\t\\bigr] \n\\\\[-1em]\n\t\\times N_1(\\mathrm{d} u, \\mathrm{d} j,\\mathrm{d} \\alpha, \\mathrm{d} \\theta)\\,.\n\\end{multline*}\nFrom inequality $(1+y)^p-y^p\\leq C_p\\,(1+y^{p-1})$ we get:\n\\begin{multline*}\n \\sup_{u\\in [0,t \\wedge \\tau_n]} \\crochet{\\nu_u,1}^p\n \\leq \n \\crochet{\\nu_0,1}^p \n\\\\\n + C_p\\,\\iiiint\\limits_{[0,t\\wedge \\tau_n]\\times\\N^*\\times[0,1]^2} \n 1_{\\{j\\leq N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}\\}}\\,\n 1_{\\{0\\leq \\theta \\leq \\lambda(S_{u}, 
x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j)\/ \\bar\\lambda\\}}\\, \n \\bigl[1+\\crochet{\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}},1}^{p-1}\\bigr] \n\\\\[-1em]\n \\times N_1(\\mathrm{d} u, \\mathrm{d} j,\\mathrm{d} \\alpha, \\mathrm{d} \\theta)\\,.\n\\end{multline*}\nProposition \\ref{prop.martingale}, together with the inequality $(1+y^{p-1})\\,y \\leq 2\\,(1+y^p)$, gives:\n\\begin{align*}\n \\EE \\Biggl(\n \\sup_{u\\in [0,t \\wedge \\tau_n]} \\crochet{\\nu_u,1}^p \n \\Biggr)\n \\leq \n \\EE(\\crochet{\\nu_0,1}^p)\n + 2\\,\\bar\\lambda\\,C_p\\,\n \\EE\\int_0^{t} \n\t\t\\bigl( 1+\\crochet{ \\nu_{u\\wedge \\tau_n},1}^p\\bigr)\\,\\mathrm{d} u\\,.\n\\end{align*}\nFubini's theorem and Gronwall's inequality allow us to conclude that for any $T< \\infty$:\n\\begin{align*}\n \\EE\\Biggl(\n \\sup_{t\\in [0,T \\wedge \\tau_n]} \\crochet{\\nu_t,1}^p \n \\Biggr)\n \\leq \n \t\\Bigl(\\EE\\bigl(\\crochet{\\nu_0,1}^p\\bigr) + 2\\,\\bar\\lambda\\,C_p\\,T \\Bigr) \\,\n \t\\exp(2\\,\\bar\\lambda\\,C_p\\,T)\n \\leq\n C_{p,T}\n\\end{align*}\nwhere $C_{p,T}<\\infty$ as $\\EE(\\crochet{\\nu_0,1}^p)<\\infty$.\n\n\nIn addition, the sequence of stopping times $\\tau_n$ tends to infinity; otherwise there would exist $T_0<\\infty$ such that $\\PP(\\sup_n \\tau_n < T_0)=\\varepsilon_{T_0}>0$, hence $\\EE(\\sup_{t \\in [0,T_0 \\wedge \\tau_n]} \\crochet{\\nu_t,1}^p)\\geq \\varepsilon_{T_0}\\,n^p$, which contradicts the above inequality. 
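As an aside, the bound just derived can be checked numerically in the worst case it covers, namely a pure-division process in which every individual divides at the maximal rate $\bar\lambda$ and washout is ignored. This is a minimal Monte Carlo sketch; the values of `n0`, `lam_bar`, `T`, `p` and the admissible constant $C_p=2$ for $p=2$ are illustrative assumptions, not taken from the model:

```python
import math
import random

def sup_pop_pth_moment(n0, lam_bar, T, p, n_runs, seed=0):
    # Monte Carlo estimate of E[ sup_{t<=T} N_t^p ] for a pure-division
    # process: every individual divides at rate lam_bar, no washout.
    # N_t is nondecreasing, so the supremum over [0,T] is N_T^p.
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_runs):
        t, n = 0.0, n0
        while True:
            t += rng.expovariate(lam_bar * n)  # time to the next division
            if t > T:
                break
            n += 1  # one division adds one individual
        acc += n ** p
    return acc / n_runs

# Illustrative constants (not from the paper).
n0, lam_bar, T, p = 5, 1.0, 1.0, 2
C_p = 2.0  # (1+y)^p - y^p <= C_p (1+y^{p-1}) holds with C_p = 2 for p = 2
estimate = sup_pop_pth_moment(n0, lam_bar, T, p, n_runs=2000)
bound = (n0 ** p + 2 * lam_bar * C_p * T) * math.exp(2 * lam_bar * C_p * T)
print(estimate <= bound)  # → True
```

The Gronwall constant makes the bound very loose, so the simulated moment sits far below it.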
Finally, Fatou's lemma gives:\n\\begin{align*} \n \\EE\\Biggl( \\sup_{t \\in [0,T]} \\crochet{\\nu_t,1}^p \\Biggr)\n =\n \\EE \\Biggl( \n\t \\liminf_{n \\to \\infty} \n\t \\sup_{t\\in [0,T \\wedge \\tau_n]} \n\t \\crochet{\\nu_t,1}^p\n \\Biggr)\n \\leq \n \\liminf_{n \\to \\infty}\n \\EE \\Biggl(\n \\sup_{t\\in [0,T \\wedge \\tau_n]} \\crochet{\\nu_t,1}^p \n \\Biggr)\n \\leq \n C_{p,T}\\,.\n\\end{align*}\n\\vskip-1em\\carre\n\\end{proof}\n\n\\bigskip\n\n\\begin{remark}\nIn particular, if $\\EE\\crochet{\\nu_0,1}<\\infty$ and if the function $ F $ is bounded, then by Lemma \\ref{lem.taille.pop} and Proposition \\ref{prop.martingale}, $(M^1_{t})_{t\\geq 0}$ and $(M^2_{t})_{t\\geq 0}$ are martingales.\n\\end{remark}\n\n\n\\begin{lemma}\n\\label{lem.integrabilite_Sigma}\nIf $\\EE\\crochet{\\nu_0,1}+\\EE(S_0)<\\infty$ then:\n\\begin{align*}\n \\EE\\biggl(\\int_0^t |\\rho_{\\textrm{\\tiny\\rm\\!\\! s}} (S_u,\\nu_u) |\\,\\mathrm{d} u \\biggr)\n \\leq\t\n D\\, t\\,\\EE (S_0\\vee {\\mathbf s}_{\\textrm{\\tiny\\rm in}})\n + \n \\frac kV \\,\\bar{g}\\;\n \\EE\\biggl( \\int_0^t\\crochet{\\nu_u,1} \\, \\mathrm{d} u\\biggr)\n <\\infty\\,.\n\\end{align*}\n\\end{lemma}\n\n\\begin{proof}\nAs $S_u\\geq 0$ and $\\rho_{\\textrm{\\tiny\\rm\\!\\! g}}$ is a non negative function,\n\\begin{align*}\n \\rho_{\\textrm{\\tiny\\rm\\!\\! s}} (S_u,\\nu_u)\n \\leq \n D \\, {\\mathbf s}_{\\textrm{\\tiny\\rm in}}\\,.\n\\end{align*}\nFurthermore, for any $(s,x)\\in\\RR_+ \\times \\X$, $\\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(s,x)\\leq \\bar g$, and \n$S_u\\leq S_0\\vee {\\mathbf s}_{\\textrm{\\tiny\\rm in}}$ so:\n\\begin{align*}\n \\rho_{\\textrm{\\tiny\\rm\\!\\! s}} (S_u,\\nu_u)\n \\geq \n -D \\, (S_0\\vee {\\mathbf s}_{\\textrm{\\tiny\\rm in}})-\\frac kV \\, \\bar{g}\\,\\crochet{\\nu_u,1}\\,.\n\\end{align*}\nWe therefore deduce that:\n\\begin{align*}\n \\int_0^t |\\rho_{\\textrm{\\tiny\\rm\\!\\! 
s}} (S_u,\\nu_u) |\\, \\mathrm{d} u\n \\leq\t\n D\\,t\\,(S_0\\vee {\\mathbf s}_{\\textrm{\\tiny\\rm in}})\n + \n \\frac kV \\, \\bar{g}\\, \\int_0^t \\crochet{\\nu_u,1}\\, \\mathrm{d} u\\,. \n\\end{align*}\nAccording to Lemma \\ref{lem.taille.pop}, the last term is integrable \nwhich concludes the proof.\n\\carre\n\\end{proof}\n\n\\bigskip\n\n\\begin{theorem}[Infinitesimal generator]\nThe process $(S_t,\\nu_t)_{t\\geq 0}$ is Markovian with values in $\\R_{+}\\times\\MM(\\XX)$ and its infinitesimal generator is:\n\\begin{multline}\n \\L \\Phi(s,\\nu)\n \\eqdef \n \\bigl(D({\\mathbf s}_{\\textrm{\\tiny\\rm in}}-s)-k\\,\\mu(s,\\nu) \\bigr)\\,\n \\partial_s F(s, \\crochet{\\nu, f}) \t\t\n +\n \\crochet{\\nu ,\\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(s,.) \\, f'} \\,\n \\partial_x F(s, \\crochet{\\nu, f}) \n\\\\\n + \n \\int_{\\X} \\lambda(s,x) \\,\n \\int_0^1 \t\t\t \n\t \\bigl[ \n\t F(s,\n\t \\crochet{\\nu-\\delta_x+\\delta_{\\alpha\\,x}\n\t\t +\n\t\t \\delta_{(1-\\alpha)\\,x},f}\n\t\t ) \n\t\t -\n\t\t F(s, \\crochet{\\nu,f}) \n\t \\bigr]\t\t\t\n\t\t\t Q(\\mathrm{d} \\alpha) \\, \\nu(\\mathrm{d} x) \n\\\\\n\\label{eq.generateur.infinitesimal}\n\t\t+ D\\, \\int_{\\X} \n\t\t\t \\bigl[\n\t\t\tF(s,\\crochet{\\nu-\\delta_x,f})\n\t\t\t-\n\t\t\tF(s,\\left<\\nu,f\\right>) \\bigr]\\,\n\t\t\t \\,\t\\nu(\\mathrm{d} x) \n\\end{multline}\nfor any $\\Phi(s,\\nu)=F(s,\\crochet{\\nu,f})$ with $F \\in C_b^{1,1}(\\RR^+ \\times \\RR)$ and $f \\in C^{1}(\\X)$. Thereafter $\\L \\Phi(s,\\nu)$ is denoted \n$\\L F(s,\\crochet{\\nu,f})$.\n\\end{theorem}\n\n\\begin{proof}\nConsider deterministic initial conditions $S_{0}=s\\in\\R_{+}$ and $\\nu_{0}=\\nu\\in\\MM(\\XX)$. \nAccording to Lemma \\ref{lem.gen.inf}:\n\\begin{align*}\n &\n \\EE\\bigl(F(S_t,\\crochet{\\nu_t, f})\\bigr)\n =\n F(s, \\, \\left<\\nu, f\\right>)\n + \n \\EE\\biggl(\\int_0^t \n\t \\rho_{\\textrm{\\tiny\\rm\\!\\! 
s}} (S_u,\\nu_u) \\, \n\t \\partial_s F(S_u, \\, \\crochet{\\nu_u, f})\\, \\mathrm{d} u \n \\biggr)\n\\\\\n &\\ \n +\\EE \\biggl(\\int_0^t\n\t \\crochet{ \\nu_u , \\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S_u,.)\\,f'} \\;\n\t \\partial_x F\\left(S_u, \\, \\left<\\nu_u, f\\right>\\right)\\,\n\t \\mathrm{d} u\n \\biggr) \n\\\\\n &\\ \n + \\EE\\biggl(\\ \\ \n\t \\iiiint\\limits_{[0,t]\\times\\N^*\\times[0,1]^2} \\!\\!\\!\\!\n\t\t 1_{\\{j\\leq N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}\\}} \\, \n\t\t 1_{\\{0\\leq \\theta \\leq \\lambda(S_{u}, x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j)\/\n\t\t\t\t\\bar\\lambda\\}} \n\t\t\t\\Bigl[\n\t\t\t F\\bigl(\n\t\t\t S_u\n\t\t\t , \n\t\t\t \\crochet{\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}\n\t\t\t -\\delta_{x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}\n\t\t\t + \\delta_{\\alpha \\, x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}\n\t\t\t + \\delta_{(1-\\alpha)\\, x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}\n\t\t\t , f}\n\t\t\t \\bigr)\n \\\\[-1em]\n &\\qquad\\qquad\\qquad\\qquad\\qquad\n \\qquad\\qquad\\qquad\\qquad\\qquad \t \n\t\t\t -\n\t\t\t F\\bigl(\n\t\t\t S_u\n\t\t\t , \n\t\t\t \\crochet{\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}, f}\\bigr) \\Bigr]\\,\n\t\t \t N_1(\\mathrm{d} u, \\mathrm{d} j, \\mathrm{d} \\alpha, \\mathrm{d} \\theta)\n\t\t\t\\bigr)\n\\\\\n &\\ \n + \\EE\\biggl( \\ \\ \n \\iint\\limits_{[0,t]\\times\\N^*} \n\t\t1_{\\{j\\leq N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}\\}} \\, \n\t\t\\Bigl[\n\t\t\t F(S_u, \\crochet{\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}-\\delta_{x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^j}, f})\n\t\t\t -F(S_u,\\crochet{\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}, f}) \n\t \\Bigr]\\,\n\t N_2(\\mathrm{d} u,\\mathrm{d} j) \n\t \\biggr).\n\\end{align*}\nAs functions $F$, $\\partial_sF$, 
$\\partial_x F$, $f'$ and $\\rho_{\\textrm{\\tiny\\rm\\!\\! g}}$ are bounded, from Lemmas \\ref{lem.taille.pop} and \\ref{lem.integrabilite_Sigma} and Proposition \\ref{prop.martingale}, the expectations on the right side of the above equation are finite.\n\n\nFurthermore, from Proposition \\ref{prop.martingale}:\n\\begin{align*}\n &\n \\EE\\left(F\\left(S_t, \\, \\left<\\nu_t, f\\right>\\right)\\right)\n =\n F(s,\\crochet{\\nu, f})\n + \n \\EE (\\Psi(t))\n\\end{align*}\nwhere:\n\\begin{align*}\n & \\Psi(t)\n \\eqdef\n \t \\int_0^t \n\t \t\\rho_{\\textrm{\\tiny\\rm\\!\\! s}} (S_u,\\nu_u) \\, \n\t \t\\partial_s F(S_u, \\crochet{\\nu_u,f})\\, \\mathrm{d} u \n\\\\\n &\\quad \t\n +\n \\int_0^t\n\t \\crochet{\\nu_u,\\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S_u,.)\\,f'} \\;\n\t \\partial_x F(S_u, \\crochet{\\nu_u,f})\\,\n\t \\mathrm{d} u \n\\\\\n &\\quad \n + \n \\int_0^t \\int_{\\X} \\int_0^1 \n\t\t\\lambda(S_{u}, x)\\, \n\t\t\\left[\n\t\t\t F\\left(\n\t\t\t S_u\n\t\t\t ,\n\t\t\t \\crochet{\\nu_u -\\delta_{x}\n\t\t\t \t\t+\\delta_{\\alpha\\, x}\n\t\t\t \t\t+\\delta_{(1-\\alpha)\\,x}\n\t\t\t \\,,\\, \n\t\t\t f}\n\t\t\t \\right)\n\t\t\t \\right. \n\t\t\t\\\\[-0.7em]\n\t\t\t&\\qquad\\qquad\\qquad\\qquad\\qquad\n\t\t\t \\qquad\\qquad\\qquad\\qquad\n\t\t\t\\left. \n\t\t\t- F(S_u,\\crochet{\\nu_u,f}) \n\t\t\\right]\t\\,\n\t\tQ(\\mathrm{d} \\alpha)\\,\\nu_u(\\mathrm{d} x) \\, \\mathrm{d} u \n\\\\\n &\\quad \n + \n D\\, \n \\int_0^t \\int_{\\X} \n \t\\bigl[\n\t\tF(S_u,\\crochet{\\nu_u-\\delta_{x}, f}) - F(S_u,\\crochet{\\nu_u, f}) \n\t\\bigr]\\,\n\t\\nu_u(\\mathrm{d} x) \\, \\mathrm{d} u\n\t\\,.\n\\end{align*}\nAlso:\n\\begin{align*}\n &\n \\frac{\\partial}{\\partial t} \n \\Psi(t)\\Bigr|_{t=0}\n = \n \\bigl(D({\\mathbf s}_{\\textrm{\\tiny\\rm in}}-s) - k\\,\\mu(s,\\nu)\\bigr) \\, \n \\partial_s F(s, \\crochet{\\nu,f}) \n + \\crochet{\\nu,\\rho_{\\textrm{\\tiny\\rm\\!\\! 
g}}(s,.)\\,f'} \\;\n\t\\partial_x F(s,\\crochet{\\nu,f}) \n\\\\\n &\\qquad\n + \\int_{\\X}\\int_0^1 \n\t \\lambda(s, x) \\, \n\t \\Bigl[\n\t\tF\\left(\n\t\t s\n\t\t ,\n\t\t \\crochet{\\nu-\\delta_x\n\t\t +\\delta_{\\alpha \\, x}+\\delta_{(1-\\alpha)\\,x}, f\n\t\t }\n\t\t\\right)\n\t\t-\n\t\tF(s,\\crochet{\\nu,f}) \n\t \\Bigr]\\,\n\t Q(\\mathrm{d}\\alpha) \\, \\nu(\\mathrm{d} x) \n\\\\\n &\\qquad \n + D\\, \\int_{\\X} \n\t \\Bigl[\n\t\t\tF(s,\\crochet{\\nu-\\delta_x,f})\n\t\t\t-\n\t\t\tF(s,\\crochet{\\nu,f}) \n\t \\Bigr]\\,\n\t \\nu(\\mathrm{d} x),\n\\end{align*}\nhence:\n\\begin{multline*}\n \\biggl|\n \\frac{\\partial}{\\partial t} \\Psi(t)\\Bigr|_{t=0}\n \\biggr|\n \\leq \n D\\,({\\mathbf s}_{\\textrm{\\tiny\\rm in}}+s)\n +\n \\left(\n\t \\frac kV\\,\\bar g \\, \\norme{\\partial_s F}\n\t + \\bar g \\, \\norme{f'}\\,\\norme{\\partial_x F}\n\t + 2\\,(\\bar\\lambda+D)\\, \\norme{F} \n \\right) \\,\n \\crochet{\\nu,1} \\, .\n\\end{multline*}\nThe right side of the last equation is finite.\nOne may apply the theorem of differentiation under the integral sign, hence the application\n$t\\mapsto \\EE(F(S_t, \\crochet{\\nu_t, f}))$ is differentiable at $t=0$ with derivative $\\L F(s,\\crochet{\\nu, f})$ defined by \\eqref{eq.generateur.infinitesimal}.\n\\mbox{}\\hfil\\carre\n\\end{proof}\n\n\n\n\\begin{remark}\nWe define the washout time as the stopping time:\n\\begin{align*}\n \\tau_{\\textrm{\\tiny\\rm w}}\n \\eqdef\n \\inf\\{t\\geq 0\\,;\\, N_{t}=\\crochet{\\nu_{t},1}=0\\}\n\\end{align*}\nwith the convention $\\inf\\emptyset=+\\infty$.\nBefore $\\tau_{\\textrm{\\tiny\\rm w}}$ the infinitesimal generator is given by \\eqref{eq.generateur.infinitesimal}, after this time $\\nu_{t}$ is the null measure, i.e. 
the chemostat does not contain any bacteria, and the infinitesimal generator is simply reduced to the generator associated with the ordinary differential equation $\\dot S_{t}=D\\,({\\mathbf s}_{\\textrm{\\tiny\\rm in}}-S_{t})$ coupled with the null measure given by $\\crochet{\\nu_{t},f}=0$ for all $f$.\n\\end{remark}\n\n\\section{Convergence in distribution of the individual-based model}\n\\label{sec.convergence}\n\n\n\\subsection{Renormalization}\n\nIn this section we will prove that the coupled process of the substrate concentration and the bacterial population converges in distribution to a deterministic process in the space:\n\\[\n \\CC([0,T],\\R_{+}) \n \\times\n \\D([0,T],\\MM_{F}(\\X)) \n\\]\nequipped with the product metric: \n\\fenumi\\ the uniform norm on $\\CC([0,T],\\R_{+})$; \\fenumii\\ the Skorohod metric\non $\\D([0,T],\\MM_{F}(\\X))$ where $\\MM_{F}(\\X)$ is equipped with the topology\nof the weak convergence of measures (see Appendix \\ref{appendix.skorohod}).\n\nThe renormalization must have the effect that the density of the bacterial population grows to infinity. To this end, we first consider a growing volume, i.e. in the previous model the volume is replaced by:\n\\[\n V_{n} = n\\,V\n\\]\nand $(S^n_{t},\\nu^n_{t})_{t\\geq 0}$ will denote the process \\eqref{def.proc.S.nu} where $V$ is replaced by $V_{n}$ and\n $x_t^{n,1},\\dots,x_t^{n,N_t^n}$ the $N_t^n$ individuals of $\\nu_t^n$; second\n we introduce the rescaled process:\n\\begin{align}\n\\label{def.renormalisation}\n \\bar\\nu_t^n \\eqdef \\frac{1}{n}\\nu_t^n\\,,\\quad t\\geq 0\n\\end{align}\nand we suppose that:\n\\begin{align*}\n \\bar\\nu_0^n \n =\n \\frac{1}{n}\\nu_0^n\n \\xrightarrow[n\\to\\infty]{} \n \\xi_0 \n \\textrm{ in distribution in }\\MM_F(\\X) \\,.\n\\end{align*}\n$\\xi_0$ is the limit measure after renormalization of the population density at the initial time. 
It may be random, but we will assume without loss of generality that it is deterministic; moreover, we suppose that $\\crochet{\\xi_{0},1}>0$.\n\n\\bigskip\n\nTherefore, this asymptotic regime consists in simultaneously letting the volume of the chemostat and the size of the initial population tend to infinity. \n\nAs the substrate concentration is maintained at the same value, the population size tends to infinity. We will show that the rescaled process $(S^n_t,\\bar\\nu^n_t)_{t\\geq 0}$ defined by \\eqref{def.renormalisation} converges in distribution to a process $(S_t,\\xi_t)_{t\\geq 0}$ introduced later.\n\n\n\n\n\\bigskip\n\nThe process $(S^n_t,\\nu^n_t)_{t\\geq 0}$ is defined by:\n\\begin{align*}\n \\dot S_t^n \n &=\n D\\,({\\mathbf s}_{\\textrm{\\tiny\\rm in}}-S_t^n)-\\frac k{V_{n}} \\, \n \\int_{\\X} \\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S_t^n,x)\\,\\nu_t^n(\\mathrm{d} x) \n\\\\\n &= \n D({\\mathbf s}_{\\textrm{\\tiny\\rm in}}-S_t^n)-\\frac {k}{V} \\, \n \\int_{\\X} \\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S_t^n,x)\\,\\bar\\nu_t^n(\\mathrm{d} x)\n =\n \\rho_{\\textrm{\\tiny\\rm\\!\\! 
s}}(S_{t}^n,\\bar\\nu_t^n)\n\\end{align*}\nand\n\\begin{align*}\n \\bar\\nu_t^n\n &= \n \\frac 1n \\, \\sum_{j=1}^{n} \\delta_{A_t^j(S_0^n,\\nu_0^n)} \n\\\\\n &\\qquad\n + \\frac 1n \\, \n\t \\iiiint\\limits_{[0,t]\\times\\N^*\\times[0,1]^2} \n\t\t 1_{\\{i\\leq N^n_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}\\}} \\, \n\t\t 1_{\n\t\t \\{0\\leq \\theta \\leq \n\t\t \\lambda(S^n_u, x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^{n,i})\/\\bar\\lambda\\}\n\t\t }\\,\n \\Bigl[\n\t-\\sum_{j=1}^{N^n_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}} \n\t\t \\delta_{A_{t-u}^j(S^n_u,\\nu^n_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}})}\n\\\\[-1em]\n &\\qquad\\qquad\\qquad\\qquad\\qquad\n\t+\\sum_{j=1}^{N^n_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}+1}\n\t\t \t\\delta_{\n\t\t\t A_{t-u}^j(S^n_u \\, ,\\,\n\t\t \t\t\t \\nu^n_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}-\\delta_{x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^{n,i}}\n\t\t\t\t\t +\\delta_{\\alpha x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^{n,i}}\n\t\t \t\t\t +\\delta_{(1-\\alpha) x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^{n,i}})\n\t\t\t\t\t}\n \\Bigr] \\;\n N_1(\\mathrm{d} u, \\mathrm{d} i, \\mathrm{d} \\alpha, \\mathrm{d} \\theta)\n\\\\[0.5em]\n &\\qquad\n + \\frac 1n \\, \n \\iint\\limits_{[0,t]\\times\\N^*}\n 1_{\\{i\\leq N^n_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}\\}} \\, \n \\Bigl[\n\t\t-\\sum_{j=1}^{N^n_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}\n\t\t \t\\delta_{A_{t-u}^j(S^n_u,\\nu^n_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}})}\n\t\t+\\sum_{j=1}^{N^n_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}-1}\n\t\t \t\\delta_{A_{t-u}^j(S^n_u,\n\t\t \t\t\t\\nu^n_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}-\\delta_{x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^{n,i}})}\n\t\\Bigr]\n\t\t\t 
\\,\tN_2(\\mathrm{d} u, \\mathrm{d} i)\n\\end{align*}\n\n\n\\begin{remark}\n\\label{remark.cv.nu.seule}\nDue to the structure of the previous system and specifically the above equation, \nit will be sufficient to prove the convergence in distribution of the component $\\bar\\nu_t^n$ to deduce also the convergence of the component $S_{t}^n$.\n\\end{remark}\n\n\n\n\n\\subsection{Preliminary results}\n\n\n\n\\begin{lemma}\n\\label{lem.renorm}\nFor all $t>0$,\n\\begin{align*}\n &\n F(S_t^n,\\crochet{\\bar\\nu_t^n, f})\n =\n F(S_0^n,\\crochet{\\bar\\nu_0^n, f})\n\\\\\n &\\qquad\n + \n \\int_0^t \\textstyle\\Bigl(\n D\\,({\\mathbf s}_{\\textrm{\\tiny\\rm in}}-S_u^n)\n - \\frac {k}{V}\\,\\int_X \\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S_u^n,x)\\,\\bar\\nu_u^n(\\mathrm{d} x) \n \\Bigr)\n \\;\n \\partial_s F(S_u^n, \\, \\crochet{\\bar\\nu_u^n, f})\\,\\mathrm{d} u\n\\\\\n &\\qquad \n + \\int_0^t\n\t \\crochet{\\bar\\nu_u^n , \\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S_u^n,.)\\,f'} \\;\n\t \\partial_x F(S_u^n, \\, \\crochet{\\bar\\nu_u^n, f}) \\, \\mathrm{d} u \n\\\\\n &\\qquad \n + n\\,\\int_0^t \\int_{\\X} \\lambda(S^n_u, x)\\, \n\t \\int_0^1\n\t \\textstyle\n\t \\Bigl[\n\t F\\bigl(\n\t S_u^n\n\t \\,,\\,\n\t \\crochet{\\bar\\nu_u^n,f} +\\frac 1n f(\\alpha \\, x )\n\t +\\frac 1nf((1-\\alpha) \\, x)-\\frac 1nf(x)\n\t \\bigr) \n\\\\[-0.5em]\n & \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\n\t\t\t -F\\bigl(S_u^n, \\crochet{\\bar\\nu_u^n, f}\\bigr) \n\t \\Bigr]\n\t \\,Q(\\mathrm{d} \\alpha)\\, \\bar\\nu_u^n(\\mathrm{d} x)\\,\\mathrm{d} u \n\\\\\t\t\t\n &\\qquad \n + D\\,n\\, \n \\int_0^t \\int_{\\X} \n \\textstyle\\Bigl[\n\t F\\bigl(S_u^n, \\, \\crochet{\\bar\\nu_u^n, f}-\\frac 1nf(x) \\bigr)\n\t -\n\t F\\bigl(S_u^n, \\, \\crochet{\\bar\\nu_u^n, f} \\bigr) \n \\Bigr]\\, \\bar\\nu_u^n(\\mathrm{d} x)\\,\\mathrm{d} u\n\t+ Z_t^{F,f,n}\n\\end{align*}\nwhere\n\\begin{align}\n\\nonumber\n Z_t^{F,f,n}\n &\\eqdef \n 
\\iiiint\\limits_{[0,t]\\times\\N^*\\times[0,1]^2}\n\t1_{\\{i\\leq N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^n\\}} \\, \n\t1_{\\{0\\leq \\theta \\leq \\lambda(S^n_u, x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^{n,i})\/\\bar\\lambda\\}}\n\\\\[-1em]\n\\nonumber\n &\\qquad\\qquad\\qquad\\qquad\n\t\\textstyle\n\t\\Bigl[\n\t\tF\\bigl(\n\t\t S_u^n\n\t\t \\,,\\, \n\t\t \\crochet{\\bar\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^n, f}\n\t\t + \\frac 1n f(\\alpha \\,x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^{n,i})\n\t\t +\\frac 1nf((1-\\alpha)\\,x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^{n,i})\n\t\t -\\frac 1nf(x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^{n,i})\n\t\t\\bigr) \\, \n\\\\[-0.5em]\n\\nonumber\n &\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\n -F\\bigl(S_u^n, \\crochet{\\bar\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^n, f}\\bigr) \t\n\t\\Bigr]\t\t\t\n\t\\,\\tilde N_1(\\mathrm{d} u, \\mathrm{d} i, \\mathrm{d} \\alpha, \\mathrm{d} \\theta) \n\\\\\n\\label{eq.Z,F,f}\n & \\qquad \n + \\iint\\limits_{[0,t]\\times\\N^*}\n \t\\textstyle \n\t1_{\\{i\\leq N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^n\\}} \\, \n\t\\Bigl[\n\t F\\bigl(S_u^n, \\crochet{\\bar\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^n, f}\n\t -\\frac 1nf(x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^{n,i}) \\bigr)\n\t -\n\t F\\bigl(S_u^n, \\crochet{\\bar\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^n, f}\\bigr) \n\t\\Bigr] \n\t\\,\t\\tilde N_2(\\mathrm{d} u, \\mathrm{d} i)\n\\end{align}\n\\end{lemma}\n\n\\begin{proof}\nIt is sufficient to note that\n$\nF(S_t^n,\\crochet{ \\bar\\nu_t^n,f })\n\t = F(S_t^n,\\crochet{\\nu_t^n,\\frac{1}{n}f})\n$\nand to apply Lemma \\ref{lem.gen.inf}.\n\\carre\n\\end{proof}\n\n\\bigskip\n\n\\begin{lemma}\n\\label{lem.esp}\nIf $\\sup_{n\\in\\N} 
\\EE(\\crochet{\\bar\\nu_0^n,1}^p)<\\infty$ for some $p \\geq 1$, then:\n\\[\n \\sup_{n\\in\\N} \\EE\\Biggl(\\sup_{u\\in [0,t]} \\crochet{\\bar\\nu_u^n,1}^p\\Biggr)\n <\n C_{t,p}\n\\]\nwhere $C_{t,p}$ depends only on $t$ and $p$.\n\\end{lemma}\n\n\\begin{proof}\nDefine the stopping time:\n\\[\t\n\t\\tau_N^n\n\t\\eqdef\n\t\\inf \\{t\\geq 0, \\crochet{\\bar\\nu_t^n,1} \\geq N \\}\\,.\n\\]\nAccording to Lemma \\ref{lem.renorm}:\n\\begin{align*}\n &\n \\sup_{u \\in [0, t \\wedge \\tau_N^n]} \\crochet{\\bar\\nu_u^n,1}^p\n \\leq \n \\crochet{\\bar\\nu_0^n,1}^p\n\\\\\n &\\qquad\\qquad \n + n\\,\\int_0^{t \\wedge \\tau_N^n} \\int_{\\X}\n\t\t\t\\lambda(S^n_u, x)\\,\n\t\t\t\\textstyle\n\t\t\t \\left[\\left(\\crochet{ \\bar\\nu_u^n,1} + \\frac{1}{n}\\right)^p\n\t\t\t -\\crochet{ \\bar\\nu_u^n,1}^p \\right]\\, \n\t\t\t\\bar\\nu_u^n(\\mathrm{d} x)\\,\\mathrm{d} u\n\\\\\n & \\qquad\\qquad\n + \\iiiint\\limits_{[0,t\\wedge \\tau_N^n]\\times\\N^*\\times[0,1]^2}\n\t\t1_{\\{i \\leq N_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^n\\}}\\,\n\t\t1_{\\{0\\leq \\theta \\leq \\lambda(S^n_u, x_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^{i,n})\/ \\bar\\lambda\\}}\n\\\\[-1em]\n & \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\n \\times \\,\n\t\t\\textstyle\n\t\t\\left[\\left(\\crochet{\\bar\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^n,1} + \\frac{1}{n}\\right)^p\n\t\t\t -\\crochet{\\bar\\nu_{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}^n,1}^p \\right]\\, \n\t \\tilde N_1(\\mathrm{d} u, \\mathrm{d} i,\\mathrm{d} \\alpha, \\mathrm{d} \\theta)\\,.\n\\end{align*}\nFrom the inequality \n$(1+y)^p-y^p \\leq C_p\\, (1+y^{p-1})$,\none can easily check that $(\\frac 1n+y)^p-y^p \\leq \n\\frac{C_p}{n}\\, (1+y^{p-1})$. 
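This elementary inequality can also be verified numerically; in the sketch below the constant is taken as $C_p = p\,2^{p-1}$, one admissible choice (an assumption obtained from the mean value theorem, not a constant specified in the text):

```python
# Check (1/n + y)^p - y^p <= (C_p/n) * (1 + y^(p-1)) on a grid,
# with the admissible (illustrative) constant C_p = p * 2^(p-1).
def holds(p, n, y, eps=1e-12):
    C_p = p * 2 ** (p - 1)
    lhs = (1.0 / n + y) ** p - y ** p
    rhs = (C_p / n) * (1.0 + y ** (p - 1))
    return lhs <= rhs + eps

all_ok = all(
    holds(p, n, y / 10.0)
    for p in (1.0, 1.5, 2.0, 3.0, 5.0)
    for n in (1, 2, 10, 100)
    for y in range(501)  # y from 0.0 to 50.0 in steps of 0.1
)
print(all_ok)  # → True
```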
Taking expectation in the previous inequality and applying Proposition \\ref{prop.martingale} lead to:\n\\begin{align*}\n \\EE \\Biggl( \n \\sup_{u \\in [0, t \\wedge \\tau_N^n]} \\crochet{\\bar\\nu_u^n,1}^p \n \\Biggr)\n &\\leq \n \\EE\\bigl(\\crochet{ \\bar\\nu_0^n,1}^p \\bigr)\n + \n \\EE \\int_0^{t \\wedge \\tau_N^n} \n C_p\\,\n \\bigl(1+\\crochet{\\bar\\nu_u^n,1}^{p-1}\\bigr)\n\t\\int_\\X \\lambda(S^n_u, x)\\,\\bar\\nu_u^n(\\mathrm{d} x)\\, \\mathrm{d} u\n\\\\\n\t&\\leq \n\t\\EE\\bigl(\\crochet{\\bar\\nu_0^n,1 }^p \\bigr)\n\t+ \n\t\\bar\\lambda \\, C_p\\, \n\t\\int_0^t \\EE \\left(\n\t \\crochet{ \\bar\\nu_{u \\wedge \\tau_N^n}^n,1 }\n\t +\n\t \\crochet{\\bar\\nu_{u \\wedge \\tau_N^n}^n,1}^p \n\t\\right) \\, \\mathrm{d} u\\,.\n\\end{align*}\nAs:\n\\[\n \\crochet{ \\bar\\nu_{u \\wedge \\tau_N^n}^n,1} \n +\n \\crochet{ \\bar\\nu_{u \\wedge \\tau_N^n}^n,1}^p \n \\leq \n 2 \\, \\left(1+ \\crochet{\\bar\\nu_{u \\wedge \\tau_N^n}^n,1}^p \\right)\\,,\n\\]\nwe get:\n\\begin{align*}\n \\EE \\Biggl( \n \\sup_{u \\in [0, t \\wedge \\tau_N^n]} \\crochet{\\bar\\nu_u^n,1}^p \n \\Biggr)\n &\\leq \n \\EE\\left(\\crochet{\\bar\\nu_0^n,1}^p \\right)\n\t\t\t+ 2\\, \\bar\\lambda \\, C_p\\, t\n\t\t\t+ 2\\, \\bar\\lambda \\, C_p\\, \\int_0^t \n\t\t\t\\EE \\left(\\sup_{v \\in [0, u \\wedge \\tau_N^n]} \n\t\t\t \\crochet{\\bar\\nu_v^n,1}^p \\right) \\, \\mathrm{d} u\n\\end{align*}\nand from Gronwall's inequality we obtain:\n\\begin{align*}\n \\EE \\Biggl( \n \\sup_{u \\in [0, t \\wedge \\tau_N^n]} \\crochet{\\bar\\nu_u^n,1}^p \n \\Biggr)\n \\leq \n \\Bigl(\n \t\\EE(\\crochet{\\bar\\nu_0^n,1}^p ) + 2\\,\\bar\\lambda \\, C_p\\, t\n \\Bigr)\n \\,\n \\exp(2\\,\\bar\\lambda \\, C_p\\, t)\\,.\n\\end{align*}\nThe sequence of stopping times $\\tau_N^n$ tends to infinity as $N$ tends to infinity \nfor the same reasons as those set out in the proof of Lemma \\ref{lem.taille.pop}.\nFrom Fatou's lemma we deduce:\n\\begin{align*}\n \\EE \\Biggl( \n \\sup_{u \\in [0, t]} \\crochet{\\bar\\nu_u^n,1}^p 
\n \\Biggr)\n & =\n \\EE \\Biggl( \n \\liminf_{N \\to \\infty} \n \\sup_{u \\in [0, t \\wedge \\tau_N^n]} \\crochet{\\bar\\nu_u^n,1}^p \n \\Biggr)\n\\\\\n &\\leq \n \\liminf_{N \\to \\infty}\\EE \n \\Biggl(\n \\sup_{u \\in [0,t\\wedge\\tau_N^n]} \\crochet{\\bar\\nu_u^n,1}^p \n \\Biggr)\n\\\\\n &\\leq \n \\Bigl(\n \\EE\\left(\\crochet{\\bar\\nu_0^n,1}^p \\right) \n + \n\t2\\, \\bar\\lambda \\, C_p\\, t\n \\Bigr)\\, \n \\exp(2\\,\\bar\\lambda\\,C_p\\, t)\n\\end{align*}\nand as $\\sup_n \\EE\\left(\\crochet{\\bar\\nu_0^n,1}^p \\right) <\\infty$, the lemma follows.\n\\carre\n\\end{proof}\n\n\\bigskip\n\n\\begin{corollary}\nLet $f \\in C^1(\\X)$ and suppose that $\\EE(\\crochet{\\bar\\nu_0^n, 1}^2)<\\infty$;\nthen for all $t>0$:\n\\begin{align}\n\\nonumber\n \\crochet{\\bar\\nu_t^n, f}\n &=\n \\crochet{\\bar\\nu_0^n, f}\n + \\int_0^t \n\t \t\t\\left< \\bar\\nu_u^n , \\, \\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S_u^n,.)\\,f'\\right> \n\t \t\t\\, \\mathrm{d} u \n\\\\\n\\nonumber \n &\\qquad\n + \\int_0^t \\int_{\\X} \\lambda(S^n_u,x) \\, \n\t \\int_0^1 \n\t\t\t \\bigl[ f(\\alpha \\, x)+f((1-\\alpha)\\,x)-f(x)\\bigr] \\, \n\t\t\t Q(\\mathrm{d} \\alpha)\\,\\bar\\nu_u^n(\\mathrm{d} x) \\, \\mathrm{d} u\t\t\t \n\\\\\t\t\n\\label{renormalisation}\n &\\qquad \n - D\\, \\int_0^t \\int_{\\X} f(x) \\, \\bar\\nu_u^n(\\mathrm{d} x) \\, \\mathrm{d} u \n + Z_t^{f,n} \n\\end{align}\nwhere\n\\begin{align}\n\\nonumber\n Z_t^{f,n}\n &\\eqdef\n \\frac 1n \\, \n \\iiiint\\limits_{[0,t]\\times\\N^*\\times[0,1]^2} \n \t1_{\\{i\\leq N_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}^n\\}} \\, \n\t1_{\\{0\\leq \\theta \\leq \\lambda(S^n_u, x_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}^{i,n})\/\\bar\\lambda\\}} \n\\\\[-1em]\n\\nonumber\n &\\qquad\\qquad\\qquad\\qquad\t\t\n\t\\times \n\t\\left[\n\t f(\\alpha\\,x_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}^{i,n})\n\t + 
f((1-\\alpha)\\,x_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}^{i,n})\n\t - f(x_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}^{i,n})\n\t\\right] \\, \n\t\\tilde N_1(\\mathrm{d} u, \\mathrm{d} i, \\mathrm{d} \\alpha, \\mathrm{d} \\theta) \n\\\\[0.5em]\n\\label{eq.Zfn}\n &\\qquad\\qquad\\qquad\n - \\frac 1n \\, \\iint\\limits_{[0,t]\\times\\N^*}\n\t\t\t1_{\\{i\\leq N_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}^n\\}} \\, \n\t\t\tf(x_{{{u^{\\raisebox{-1pt}{\\scriptsize\\scalebox{0.5}{$-$}}}}}}^{i,n})\n\t\t\t \\,\t\\tilde N_2(\\mathrm{d} u, \\mathrm{d} i)\n\\end{align}\nis a martingale with the following predictable quadratic variation:\n\\begin{align} \n\\nonumber\n \\crochet {Z^{f,n}}_t\n &=\n \\frac 1n \n \\int_0^t \\int_{\\X} \\lambda(S^n_u, x) \\, \n \\int_0^1 \n \\left[ f(\\alpha \\, x)+f((1-\\alpha) \\, x)-f(x) \\right]^2 \\, \n\t\t\t Q(\\mathrm{d} \\alpha) \\,\\bar\\nu_u^n(\\mathrm{d} x) \\, \\mathrm{d} u \n\\\\\n\\label{var_qua}\n\t&\\qquad\\qquad\\qquad\n\t + \\frac 1n \\, D\\, \\int_0^t \\int_{\\X} \n\t\t\tf(x)^2 \\, \\bar\\nu_u^n(\\mathrm{d} x) \\, \\mathrm{d} u\\,. \n\\end{align}\n\\end{corollary}\n\n\\begin{proof}\nEquation \\eqref{renormalisation} is obtained by applying Lemma \\ref{lem.renorm} \nwith $F(s,x)=x$, then $Z^{f,n}$ is $Z^{F,f,n}$ defined by\n\\eqref{eq.Z,F,f}. Moreover as the random measures $\\tilde{N_1}$ and $\\tilde{N_2}$ are independent, we have:\n\\begin{align*}\n\\crochet {Z^{f,n}}_t\n\t = &\n\t \t\\frac 1{n^2} \\, \\crochet {M^1}_t\n\t \t+\\frac 1{n^2} \\, \\crochet {M^2}_t\n\\end{align*}\nwhere $M^1$ and $M^2$ are defined at Proposition \\ref{prop.martingale}. 
\nFrom this latter proposition and Lemma \\ref{lem.esp} we deduce the corollary.\n\\carre\n\\end{proof}\n\n\\bigskip\n\n\\begin{remark}\nThe infinitesimal generator of the renormalized process $(S^n_t,\\bar\\nu^n_t)_{t\\geq 0}$ is:\n\\begin{multline*}\n \\L^n \\Phi(s,\\nu)\n \\eqdef \n \\bigl(D({\\mathbf s}_{\\textrm{\\tiny\\rm in}}-s)-k\\,\\mu(s,\\nu) \\bigr)\\,\n \\partial_s F(s, \\crochet{\\nu, f}) \t\t\n +\n \\crochet{\\nu ,\\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(s,.) \\, f'} \\,\n \\partial_x F(s, \\crochet{\\nu, f}) \n\\\\\n \\quad+ \n n\\,\\int_{\\X} \\lambda(s,x) \\,\n \\int_0^1 \t\t\t \n\t \\bigl[ \n\t F\\bigl(s,\n\t \\crochet{\\textstyle\\nu - \\frac 1n \\delta_x + \\frac 1n \\delta_{\\alpha\\,x} \n\t + \\frac 1n \\delta_{(1-\\alpha)\\,x} \\, , \\, f}\n\t\t \\bigr) \n\t\t -\n\t\t F(s, \\crochet{\\nu,f}) \n\t \\bigr]\t\\;\t\t\n\t\t\t Q(\\mathrm{d} \\alpha) \\, \\nu(\\mathrm{d} x) \n\\\\\n\t\\quad\n\t+ n\\, D\\, \\int_{\\X} \n\t \t\\bigl[\n\t\t\tF\\bigl(s,\\crochet{\\textstyle\\nu- \\frac 1n \\delta_x \\, , \\, f} \\bigr)\n\t\t\t-\n\t\t\tF(s,\\left<\\nu,f\\right>) \n\t\t\\bigr]\\,\n\t\t\t \\,\t\\nu(\\mathrm{d} x) \n\\end{multline*}\nfor any $\\Phi(s,\\nu)=F(s,\\crochet{\\nu,f})$ with $F \\in C_b^{1,1}(\\RR^+ \\times \\RR)$ and $f \\in C^{1}(\\X)$. \nNote that this generator has the same ``substrate'' part as that of the initial generator \n\\eqref{eq.generateur.infinitesimal}, which again justifies \nRemark \\ref{remark.cv.nu.seule}.\n\\end{remark}\n\n\n\\bigskip\n\n\nTo prove the uniqueness of the solution of the limit IDE, we have\nto assume that the function $\\lambda(s,x)$ is Lipschitz continuous w.r.t. $s$ uniformly in $x$:\n\\begin{align}\n\\label{hyp.lambda.lipschitz}\n \\bigl|\\lambda(s_1,x)-\\lambda(s_2,x)\\bigr|\\leq k_\\lambda \\,|s_1-s_2|\n\\end{align}\nfor all $s_1,\\,s_2 \\geq 0$ and all $x\\in\\XX$. 
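For concreteness, here is an example of a division rate satisfying \eqref{hyp.lambda.lipschitz}; the Monod constant $k_r$ and the bounded profile $g_\lambda$ are purely illustrative and are not part of the model:

```latex
% A Monod-modulated division rate (illustrative):
\begin{align*}
  \lambda(s,x) \eqdef g_\lambda(x)\,\frac{s}{k_r+s}\,,
  \qquad 0\leq g_\lambda(x)\leq\bar\lambda\,,\quad k_r>0\,.
\end{align*}
% Since s -> s/(k_r+s) has derivative k_r/(k_r+s)^2 <= 1/k_r, for all
% s_1, s_2 >= 0 and x in \X:
\begin{align*}
  \bigl|\lambda(s_1,x)-\lambda(s_2,x)\bigr|
  \leq g_\lambda(x)\,\frac{|s_1-s_2|}{k_r}
  \leq \frac{\bar\lambda}{k_r}\,|s_1-s_2|\,,
\end{align*}
% i.e. \eqref{hyp.lambda.lipschitz} holds with k_\lambda = \bar\lambda/k_r.
```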
\nThis hypothesis as well as Hypothesis \\ref{hyp.rhog.lipschitz} \nwill also be used to demonstrate the convergence of IBM, see Theorem \\ref{theo.cv.ibm}.\n\n\n\\subsection{Convergence result}\n\n\n\\begin{theorem}[Convergence of the IBM towards the IDE]\n\\label{theo.cv.ibm}\nUnder the assumptions described above, the process $(S_t^n,\\bar\\nu_t^n)_{t\\geq 0}$ converges in distribution in the product space \n$\\CC([0,T],\\R_{+}) \n \\times\n \\D([0,T],\\MM_{F}(\\X))$\ntowards the process $(S_t,\\xi_t)_{t\\geq 0}$ solution of:\n\\begin{align}\n\\label{eq.limite.substrat.faible}\n\tS_t \n\t&= \n\tS_{0}\n\t+\n\t\\int_{0}^t\\biggl[\n\tD\\,({\\mathbf s}_{\\textrm{\\tiny\\rm in}}-S_u)-\\frac kV \\int_\\X \\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S_u,x)\\,\\xi_u(\\mathrm{d} x)\n\t\\biggr]\\,\\rmd u\\,,\n\\\\[1em]\n\\nonumber \n\t\\crochet{\\xi_t,f}\n\t&=\n\t\\crochet{\\xi_0,f}\n\t+\n\t\\int_{0}^t\\biggl[\n\t\\int_{\\X} \\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S_u,x)\\,f'(x) \\,\t\\xi_u(\\mathrm{d} x)\n\\\\\n\\nonumber \n\t&\\qquad\n\t+ \\int_{\\X} \\int_0^1\\lambda(S_u, x)\\, \n\t\\Bigl[f(\\alpha\\, x)+ f((1-\\alpha)\\,x)-f(x)\\Bigr] \\, \t \t\t\t\n\t\t\tQ(\\mathrm{d} \\alpha) \\,\\xi_u(\\mathrm{d} x) \n\\\\\n\\label{eq.limite.eid.faible}\n\t&\\qquad\n\t- D\\, \\int_{\\X} f(x) \\,\t\\xi_u(\\mathrm{d} x) \n\t\\biggr]\\,\\rmd u\\,,\n\\end{align}\nfor any $f \\in C^{1}(\\X)$.\n\\end{theorem}\n\nThe proof is in three steps\\footnote{Note that our situation is simpler than that studied by \\cite{roelly1986a} and \\cite{meleard1993a} since in our case $\\XX$ is compact: in fact in our case the weak topology -- the smallest topology which makes the applications $\\nu\\to\\crochet{\\nu,f}$ continuous for any $f$ continuous and bounded -- and the vague topology -- the smallest topology which makes the applications $\\nu\\to\\crochet{\\nu,f}$ continuous for all $f$ continuous with compact support -- are identical.}:\nfirst the uniqueness of the solution of the limit equation 
\\eqref{eq.limite.substrat.faible}-\\eqref{eq.limite.eid.faible}, \nsecond the tightness (of the sequence of distribution) of $\\bar\\nu^n$\nand lastly the convergence in distribution of the sequence.\n\n\n\n\\subsubsection*{Step 1: uniqueness of the solution of \\eqref{eq.limite.substrat.faible}-\\eqref{eq.limite.eid.faible}}\n\nLet $(S_t,\\xi_t)_{t\\geq 0}$ be a solution of \\eqref{eq.limite.substrat.faible}-\\eqref{eq.limite.eid.faible}.\nWe first show that $(\\xi_t)_t$ is of finite mass for all $t\\geq 0$:\n\\begin{align*}\n\t\\crochet{\\xi_t, 1}\n\t& = \n\t\\crochet{\\xi_0, 1}\n\t+ \n\t\\int_0^t \\int_{\\X} \\int_0^1 \\lambda(S_u, x)\\, Q(\\mathrm{d} \\alpha) \\, \t \t\n\t\t\t\\xi_u(\\mathrm{d} x)\\, \\mathrm{d} u \n\t\t- D\\, \\int_0^t \\int_{\\X} \\xi_u(\\mathrm{d} x) \\, \\mathrm{d} u\\\\ \n\t& \\leq \\left<\\xi_0, 1\\right>+(\\bar \\lambda - D) \\int_0^t \\left<\\xi_u, 1\\right> \\, \\mathrm{d} u\n\\end{align*} \nand according to Gronwall's inequality: \n$\\crochet{\\xi_t, 1} \\leq \\crochet{\\xi_0, 1} \\, e^{(\\bar \\lambda - D)\\,t} \n\t< \\infty\n$.\n\nWe introduce the following norm on $\\MM_{F}(\\XX)$:\n\\[\n \\normm{\\bar\\nu}\n \\eqdef\n \\sup \n\t\\Bigl\\{\n\t\t|\\crochet{\\bar\\nu,f}|\n\t\t\\,;\\,\n\t\tf \\in C^1(\\X),\\, \\norme{f}_\\infty \\leq 1,\\,\\norme{f'}_\\infty \\leq 1\n\t\\Bigr\\}\n\\]\nand consider two solutions $(S^1_t,\\xi^1_t)_{t\\geq 0}$ and $(S^2_t,\\xi^2_t)_{t\\geq 0}$ \nof \\eqref{eq.limite.substrat.faible}-\\eqref{eq.limite.eid.faible}. 
\n\n\nIt was previously shown that $\\xi^1_t$ and $\\xi^2_t$ are of finite mass on $\\RR_+$, so we can define:\n\\[\n C_t\n \\eqdef \\sup_{0\\leq u \\leq t} \\crochet{\\xi^1_u+\\xi^2_u , 1}\n \\,.\n\\]\nAccording to \\eqref{eq.limite.eid.faible}, for any $f \\in C^1(\\X)$ such that \n$\\norme{f}_\\infty \\leq 1$ and $\\norme{f'}_\\infty \\leq 1$ we have:\n\\begin{align*}\n\t|\\crochet{\\xi^1_t-\\xi^2_t,f}|\n\t&\\leq \n\t\\int_0^t \\biggl|\\int_\\X\n\t\tf'(x) \\, \n\t\t\\Bigl[\t \t\t\t\n\t \t\\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S^1_u,x)\\, [\\xi^1_u(\\mathrm{d} x)-\\xi^2_u(\\mathrm{d} x)] \n\\\\\n\t\t&\\qquad\\qquad\\qquad\n\t\t- [\\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S^2_u,x)-\\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S^1_u,x)]\\, \\xi^2_u(\\mathrm{d} x)\n\t\t\\Bigr]\n\t \t\\biggr|\\,\\mathrm{d} u \n\\\\\n\t\t&\\qquad +\t\\int_0^t \\biggl|\n\t\t\t\\int_\\X \\int_0^1 \n\t\t\t\t\\left[f(\\alpha\\,x)+f((1-\\alpha)\\,x)-f(x) \\right] \\,\n\t\t\t\tQ(\\mathrm{d} \\alpha)\n\\\\\n\t\t& \\qquad\\qquad\\qquad \n\t\t\t\\left[\n\t\t\t\t\\lambda(S^1_u, x) \\, [\\xi^1_u(\\mathrm{d} x)-\\xi^2_u(\\mathrm{d} x)]\n\t\t\t\t- [\\lambda(S^2_u, x)-\\lambda(S^1_u, x)]\\, \n\t\t\t\t\\xi^2_u(\\mathrm{d} x)\n\t\t\t\\right] \\biggr| \\, \\mathrm{d} u\n\\\\\n\t\t &\\qquad\n\t\t + D \\, \\int_0^t \\biggl|\n\t\t\t\t\\int_\\X f(x) \\, (\\xi^1_u(\\mathrm{d} x)-\\xi^2_u(\\mathrm{d} x))\n\t\t\t\\biggr| \\, \\mathrm{d} u\n\\\\\n\t&\\leq\n\t(\\bar g +3\\, \\bar \\lambda +D)\\,\n\t\t\t\\int_0^t \\normm{\\xi^1_u-\\xi^2_u} \\, \\mathrm{d} u \n\t\t\t+ C_t\\,( k_g + 3 \\, k_\\lambda) \\,\n\t\t\t \\int_0^t | S^1_u- S^2_u |\\, \\mathrm{d} u .\n\\end{align*}\nTaking the supremum over the functions $f$, we obtain:\n\\begin{align*}\n\t\\normm{\\xi^1_t-\\xi^2_t}\t\n\t&\\leq\n\t(\\bar g +3\\, \\bar \\lambda +D)\\,\n\t\\int_0^t \\normm{\\xi^1_u-\\xi^2_u} \\, \\mathrm{d} u\n\t+ C_t\\,( k_g + 3 \\, k_\\lambda) \\, \n\t\t\\int_0^t | S^1_u-S^2_u |\\, \\mathrm{d} u\n\t\\,.\n\\end{align*}\nMoreover, from 
\\eqref{eq.limite.substrat.faible} we get:\n\\begin{align*}\n\t&\n\t|S^1_t- S^2_t|\n\t\\leq \n\tD \\, \\int_0^t |S^1_u- S^2_u|\\,\\mathrm{d} u \n\\\\\n\t&\\quad\t\n\t+ \\frac kV \\int_0^t \\biggl|\n\t\t\\int_\\X \n\t\t\\Bigl(\t \t\t\t\n\t \t\t\t\\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S^1_u,x)\\, [\\xi^1_u(\\mathrm{d} x)-\\xi^2_u(\\mathrm{d} x)] \n\t \t\t\t- \n\t\t\t\t[\\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S^2_u,x)-\\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S^1_u,x)]\n\t\t\t\t\\, \\xi^2_u(\\mathrm{d} x)\n\t\t\\Bigr)\n\t\\biggr| \\, \\mathrm{d} u \n\\\\\n\t&\\quad\t\n\t\\leq \n\t\\Bigl(D+\\frac kV \\, C_t\\,k_g\\Bigr) \\, \n\t\\int_0^t |S^1_u- S^2_u| \\, \\mathrm{d} u\n\t+ \\frac kV \\,\\bar g \\int_0^t \\normm{\\xi^1_u-\\xi^2_u} \\, \\mathrm{d} u \\,.\n\\end{align*}\nWe define:\n\\begin{align*}\n\tM_t\n\t\\eqdef\n\t\\max \\left\\{\\bar g +3\\, \\bar \\lambda +D+ \\frac kV \\,\\bar g\n\t\t\\, , \\, C_t\\,( k_g + 3 \\, k_\\lambda)+D+\\frac kV \\, C_t\\,k_g\\right\\}\n\\end{align*}\nhence:\n\\begin{align*}\n\t\\normm{\\xi^1_t-\\xi^2_t}\n\t+|S^1_t-S^2_t|\n\t& \\leq\n\tM_t \\, \\int_0^t \\Bigl( \n\t\t\\normm{\\xi^1_u-\\xi^2_u}+|S^1_u-S^2_u|\t\n\t\\bigr) \\, \\mathrm{d} u \n\\end{align*}\nFinally from Gronwall's inequality we get \n$\\normm{\\xi^1_t-\\xi^2_t}+|S^1_t-S^2_t|=0$ for all $t\\geq 0$, hence $\\xi^1_t = \\xi^2_t$ and $S^1_t=S^2_t$.\n\n\n\n\n\\subsubsection*{Step 2: tightness of $(\\bar\\nu^n)_{n\\geq 0}$}\n\nThe tightness of \n$\\bar\\nu^n$ is equivalent to the fact that from any subsequence one can extract a subsequence that converges in distribution in the space $\\D([0,T],\\MM_{F}(\\X))$.\nAccording to \\citet[Th. 2.1]{roelly1986a} this amounts to proving the tightness \nof $\\crochet{\\bar\\nu^n,f}$ in $\\D([0,T],\\R)$ for all $f$ in a set dense in \n$\\C(\\XX)$, here we will consider $f\\in \\C^1(\\XX)$. To prove the latter result, it is sufficient to check the following Aldous-Rebolledo criteria \\citep[Cor. 
2.3.3]{joffe1986a}:\n\\begin{enumerate}\n\\item \nThe sequence $(\\langle \\bar\\nu_t^n,f \\rangle)_{n\\geq 0}$ is tight for any $t\\geq 0$.\n\n\\item \nConsider the following semimartingale decomposition:\n\\[\n\t\\crochet{\\bar\\nu_t^n,f}\n\t=\n\t\\crochet{\\bar\\nu_0^n,f}\n\t+\n\tA_t^n+Z_t^n\\,,\n\\]\nwhere $A_t^n$ is of finite variation and $Z_t^n$ is a martingale.\nFor all $t>0$, $\\epsilon>0$, $\\eta>0$ there exist $\\delta>0$ and $n_{0}$ such that for any sequence \n$\\tau_n$ of stopping times with $\\tau_{n}\\leq t$ we have:\n\\begin{align}\n\\label{eq.AR.2.1}\n \\sup_{n\\geq n_0} \\sup_{\\theta\\in[0,\\delta]}\n \\P\\Big(\n \\big|\n A^n_{\\tau_n+ \\theta} - A^n_{\\tau_n}\n \\big|\n \\geq \\eta\n \\Big)\n &\n \\leq \\epsilon\\,,\n\\\\\n\\label{eq.AR.2.2}\n \\sup_{n\\geq n_0} \\sup_{\\theta\\in[0,\\delta]}\n \\P\\Big(\n \\big|\n \\crochet{Z^n}_{\\tau_n+ \\theta} - \\crochet{Z^n}_{\\tau_n}\n \\big|\n \\geq \\eta\n \\Big)\n &\n \\leq \\epsilon\\,.\n\\end{align} \n\\end{enumerate}\n\n\n\\subsubsection*{\\it Proof of \\fenumi}\n\nFor any $K>0$,\n\\begin{align*}\n\t\\P\\bigl(|\\crochet{\\bar\\nu_t^n,f}| \\geq K\\bigr)\n\t&\\leq \n\t\\frac{1}{K}\\,\\norme{f}_\\infty \\,\n\t \\sup_{n\\in\\N} \\E\\bigl(\\crochet{\\bar\\nu_t^n,1}\\bigr)\n\\end{align*}\nand using Lemma \\ref{lem.esp}, we deduce \\fenumi.\n\n\n\n\\subsubsection*{\\it Proof of \\fenumii}\n\n\\begin{align*}\n\tA_t^n\n\t&= \n\t\\int_0^t \\crochet{\\bar\\nu_u^n,\\rho_{\\textrm{\\tiny\\rm\\!\\! 
g}}(S_u^n,.)\\,f'} \\, \\mathrm{d} u \n\\\\\n\t&\\qquad\n\t+ \\int_0^t \\int_{\\X} \\int_0^1 \n\t\t\\lambda(S_u^n, x) \\, \n\t\t\\bigl[ f(\\alpha\\,x)+f((1-\\alpha)\\,x)-f(x) \\bigr] \\, \n\t\tQ(\\mathrm{d} \\alpha) \\,\\bar\\nu_u^n(\\mathrm{d} x) \\, \\mathrm{d} u\n\\\\\n\t&\\qquad \n\t- D\\,\\int_0^t\\int_{\\X} f(x)\\,\\bar\\nu_u^n(\\mathrm{d} x) \\, \\mathrm{d} u\n\\end{align*}\nhence, according to Lemma \\ref{lem.esp}:\n\\begin{align*}\n\t\\EE | A_{\\tau_n+\\theta}^n-A_{\\tau_n}^n|\n\t&\\leq\n\t(\n\t\t\\norme{f'}_\\infty\\, \\bar{g}\n\t\t+ 3\\, \\norme{f}_\\infty\\,\\bar\\lambda\n\t\t+ D\\,\\norme{f}_\\infty\n\t)\n\t\\, C_{t,1}\\, \\theta\\,.\n\\end{align*}\nUsing \\eqref{var_qua}, we also have:\n\\begin{align*}\n\t\\EE | \\crochet{Z^n}_{\\tau_n+\\theta}-\\crochet{Z^n}_{\\tau_n}|\n\t\\leq \n\t\\frac{1}{n}\\,\\left(9 \\, \\bar \\lambda+D \\right) \\, \n\t\t\\norme{f}_\\infty^2\\, C_{t,1} \\, \\theta\\,.\n\\end{align*}\nHence\n$\\EE| A_{\\tau_n+\\theta}^n-A_{\\tau_n}^n|+\\EE | \\crochet{Z^n}_{\\tau_n+\\theta}-\\crochet{Z^n}_{\\tau_n} | \n\t\\leq C \\, \\theta$ \nand we obtain \\fenumii\\ from the Markov inequality.\n\n\n\\bigskip\n\nIn conclusion, from the Aldous-Rebolledo criteria, the sequence $(\\bar\\nu^n)_{n\\geq 0}$ is tight.\n \n\n\n\n\\subsubsection*{Step 3: convergence of the sequence $(\\bar\\nu^n)_{n\\in\\N}$}\n\n\nTo conclude the proof of the theorem it suffices to show that the sequence $(\\bar\\nu^n)_{n\\in\\N}$ has a unique accumulation point and that this point is equal to the process $\\xi$ described in Step 1. In order to characterize $\\xi$, the solution of \\eqref{eq.limite.eid.faible}, we introduce, for any given $f \\in C^{1}(\\X)$, the following function defined for all\n$\\zeta\\in\\D([0,T],\\M_F(\\X))$:\n\\begin{align}\n\\nonumber\n \\Psi_{t}(\\zeta)\n &\\eqdef\n \\crochet{\\zeta_{t},f}-\\crochet{\\zeta_{0},f}\n -\\int_{0}^t \\biggl[\n\t\t\\int_{\\X} \\rho_{\\textrm{\\tiny\\rm\\!\\! 
g}}(S^\\zeta_u,x)\\,f'(x) \\,\t\\zeta_u(\\mathrm{d} x)\n\\\\\n\\nonumber\n\t&\\qquad\n\t+ \\int_{\\X} \\int_0^1\\lambda(S^\\zeta_u, x)\\, \n\t\\bigl[f(\\alpha\\, x)+ f((1-\\alpha)\\,x)-f(x)\\bigr] \\, \t \t\t\t\n\t\t\tQ(\\mathrm{d} \\alpha) \\,\\zeta_u(\\mathrm{d} x) \n\\\\\n\\label{eq.Psi}\n\t&\\qquad\n\t- D\\, \\int_{\\X} f(x) \\,\t\\zeta_u(\\mathrm{d} x) \n\t\t\\biggr]\\,\\rmd u\n\\end{align}\nwhere $S_t^\\zeta$ is defined by:\n\\begin{align}\n\\label{eq.Psi.2}\n\tS_t^\\zeta\n\t&\\eqdef\n\tS_0\n\t+\n\t\\int_0^t \\Bigl(\n\t\tD\\,({\\mathbf s}_{\\textrm{\\tiny\\rm in}}-S_u^\\zeta)\n\t\t-\\frac kV \\int_\\X \\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S_u^\\zeta,x)\\,\\zeta_u(\\mathrm{d} x)\n\t\\Bigr) \\, \\mathrm{d} u\\,.\n\\end{align}\nHence, if $\\Psi_{t}(\\zeta)=0$ for all $t\\geq 0$ and all $f \\in C^{1}(\\X)$ then $(S^\\zeta,\\zeta)=(S,\\xi)$ where $(S,\\xi)$ is the unique solution of \\eqref{eq.limite.substrat.faible}-\\eqref{eq.limite.eid.faible}.\n\n\\bigskip\n\nWe consider a subsequence $\\bar\\nu^{n'}$ of $\\bar\\nu^{n}$ which converges in distribution in the space $\\D([0,T],\\M_F(\\X))$ and denote by $\\tilde \\nu$ its limit.\n\n\\subsubsection*{\\it Sub-step 3.1: A.s. continuity of the limit~$\\tilde \\nu$.}\n\n\\begin{lemma}\nAlmost surely, $\\tilde \\nu \\in \\CC([0,T],\\M_F(\\X))$.\n\\end{lemma}\n\n\\begin{proof}\nFor any $f\\in\\C(\\X)$ such that $\\norme{f}_\\infty \\leq 1$:\n\\begin{align*}\n\t\\bigl|\n\t \t\\crochet{\\bar\\nu_t^{n'},f}-\\crochet{\\bar\\nu_{t^-}^{n'},f}\n\t\\bigr| \n\t\\leq \n\t\\frac{1}{n'}\\, \n\t\\bigl| \n\t\t\\crochet{\\nu_t^{n'},1}-\\crochet{\\nu_{t^-}^{n'},1}\n\t\\bigr|\\,.\n\\end{align*}\nBut $| \\crochet{\\nu_t^{n'},1}-\\crochet{\\nu_{t^-}^{n'},1}|$ represents \nthe difference between the number of individuals in $\\nu_t^{n'}$ and in $\\nu_{t^-}^{n'}$, which is at most 1. 
Hence:\n\\begin{align*}\n\t\\sup_{t \\in [0,T]} \\, \n\t\\normtv{\\bar\\nu_t^{n'}-\\bar\\nu_{t^-}^{n'}}\n\t\\leq \\frac{1}{n'}\n\\end{align*}\nwhich proves that the limit process $\\tilde\\nu$ is a.s. continuous\n\\cite[Th. 10.2 p. 148]{ethier1986a} as the Prokhorov metric is dominated by the total variation metric.\n\\end{proof}\n\\carre\n\n\n\\subsubsection*{\\it Sub-step 3.2: Continuity of $\\zeta\\to\\Psi_{t}(\\zeta)$ in any $\\zeta$ continuous.}\n\n\n\\begin{lemma}\n\\label{lemma.Psi.continue}\nFor any given $t\\in[0,T]$ and $f\\in C^{1}(\\X)$, the function $\\Psi_{t}$ defined by\n\\eqref{eq.Psi} is continuous from $\\DD([0,T],\\MM_{F}(\\X))$ with values in $\\R$ in any point $\\zeta\\in\\CC([0,T],\\MM_{F}(\\X))$. \n\\end{lemma}\n\n\n\n\\proof\nConsider a sequence $(\\zeta^{n})_{n\\in\\N}$ which converges towards $\\zeta$ in\n$\\DD([0,T],\\MM_{F}(\\X))$ with respect to the Skorohod topology. As the limit $\\zeta$ is continuous we have that $\\zeta^{n}$ converges to $\\zeta$ with the uniform topology:\n\\begin{align}\n\\label{eq.lemma.Psi.continue.a}\n \\sup_{0\\leq t\\leq T} \n d_{\\textrm{\\tiny\\rm PR}}({\\zeta_t^{n},\\zeta_{t}})\n \\cv_{n\\to\\infty} 0\n\\end{align}\nwhere $d_{\\textrm{\\tiny\\rm PR}}$ is the Prokhorov metric (see Appendix \\ref{appendix.skorohod}). \n\n\nThe functions $\\lambda(s,x)$ and $\\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(s,x)$ are Lipschitz continuous functions w.r.t. $s$ uniformly in $x$ and also bounded, see \\eqref{hyp.lambda.lipschitz} and \\eqref{hyp.rhog.lipschitz}, so from \\eqref{eq.Psi.2} we can easily check that:\n\\begin{align*}\n\t|S_t^{\\zeta^{n}}-S_t ^\\zeta|\n\t&\\leq\n\tC\\,\\int_0^t \\Bigl(\n\t\t|S_u^{\\zeta^{n}}-S_u ^\\zeta|\n\t\t+ \\Bigl|\n \t\t\\int_\\X \\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S_u^{\\zeta^{n}},x)\\,\n\t\t\t\t[\\zeta^{n}_u(\\mathrm{d} x)-\\zeta_u(\\mathrm{d} x)]\n\\\\\n\t&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\n \t\t-\n\t\t\t\\int_\\X [\\rho_{\\textrm{\\tiny\\rm\\!\\! 
g}}(S_u^\\zeta,x)-\\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S_u^{\\zeta^{n}},x)]\n\t\t\t \\,\\zeta_u(\\mathrm{d} x)\n\t\t\\Bigr|\n\t\\Bigr) \\, \\mathrm{d} u\n\\\\\n\t&\\leq\n\tC\\,\\int_0^t \\Bigl(\n\t\t|S_u^{\\zeta^{n}}-S_u ^\\zeta|\n\t\t+ |\\crochet{\\zeta^{n}_u-\\zeta_u,1}|\\Bigr) \\, \\mathrm{d} u\n\\end{align*}\nand the Gronwall's inequality leads to:\n\\begin{align}\n\\label {eq.lemma.Psi.continue.b} \n |S_t^{\\zeta^{n}}-S_t ^\\zeta|\n \\leq \n C\\,\\int_{0}^t |\\crochet{\\zeta^n_{u}-\\zeta_{u},1}|\\,\\rmd u\\,.\n\\end{align}\nHere and in the rest of the proof the constant $ C $ will depend only on $ T $, $ f $ and on the parameters of the models.\nHence, from \\eqref{eq.Psi}:\n\\begin{align*}\n\t|\\Psi_t(\\zeta^{n})-\\Psi_t(\\zeta)|\n\t&\\leq \n\tC\\,\n\t\\Bigl[\n\t \t|\\crochet{\\zeta_t^{n}- \\zeta_t,1}|\n\t\t+\n\t\t|\\crochet{\\zeta_0^{n}- \\zeta_0,1}|\n\\\\\n &\\qquad\\qquad\\qquad\n\t\t+\n\t\t\\int_0^t |S_u^{\\zeta^{n}}-S_u ^\\zeta| \\, \\mathrm{d} u\n\t\t+\n\t\t\\int_0^t |\\crochet{\\zeta_u^{n}-\\zeta_u,1}| \\, \\mathrm{d} u\n\t\\Bigr]\n\\\\\n &\\leq\n\t\tC\\,\\sup_{0\\leq t\\leq T}|\\crochet{\\zeta_t^{n}-\\zeta_t,1}|\\,.\n\\end{align*}\nLet $\\delta_{t}=d_{\\textrm{\\tiny\\rm PR}}(\\zeta_t^{n},\\zeta_t)$, by definition of the Prokhorov metric:\n\\begin{align*}\n \\zeta_t^{n}(\\XX)-\\zeta_t(\\XX^{\\delta_{t}})&\\leq \\delta_{t}\\,,\n &\n \\zeta_t(\\XX)-\\zeta_t^{n}(\\XX^{\\delta_{t}})&\\leq \\delta_{t}\\,,\n\\end{align*}\nbut $\\XX^{\\delta_{t}}=\\XX$ hence $|\\zeta_t^{n}(\\XX)-\\zeta_t(\\XX)|\\leq \\delta_{t}$. 
Note finally that $|\\zeta_t^{n}(\\XX)-\\zeta_t(\\XX)| = |\\crochet{\\zeta_t^{n}-\\zeta_t,1}|$, so we get:\n\\[\n |\\Psi_t(\\zeta^{n})-\\Psi_t(\\zeta)| \n \\leq C\\,\n \\sup_{0\\leq t\\leq T}d_{\\textrm{\\tiny\\rm PR}}({\\zeta_t^{n},\\zeta_t})\n\\]\nwhich tends to zero.\n\\carre\n\n\n\n\n\\subsubsection*{\\it Sub-step 3.3: Convergence in distribution of $\\Psi_{t}(\\bar\\nu^{n'})$ to $\\Psi_{t}(\\tilde\\nu)$.}\n\nThe sequence $\\bar\\nu^{n'}$ converges in distribution to $\\tilde\\nu$ and $\\tilde\\nu(\\omega)\\in\\CC([0,T],\\MM_{F}(\\XX))$;\nmoreover the map $\\Psi_{t}$ is continuous at any point of $\\CC([0,T],\\MM_{F}(\\XX))$, thus according to the continuous mapping theorem \\cite[Th. 2.7 p. 21]{billingsley1968a} we get:\n\\begin{align} \n\\label{eq.cv.loi.Psi}\n \\Psi_{t}(\\bar\\nu^{n'})\n \\xrightarrow[n'\\to\\infty]{\\textrm{law}}\n \\Psi_{t}(\\tilde\\nu)\\,.\n\\end{align} \n\n\n\n\n\\subsubsection*{\\it Sub-step 3.4: $\\tilde\\nu=\\xi$ a.s.}\n\nFrom \\eqref{renormalisation}, for any $n\\geq 0$ we have:\n\\begin{align*}\n\t\\Psi_t(\\bar\\nu^n) = Z_t^{f,n}\n\\end{align*}\nwhere $Z_t^{f,n}$ is defined by \\eqref{eq.Zfn}. Also, \\eqref{var_qua} gives:\n\\begin{align*}\n\t\\EE(|Z_t^{f,n}|^2)\n\t&= \n\t\\EE \\crochet{Z^{f,n}}_t \n\t\\leq \\frac{1}{n}\\,(9\\,\\bar\\lambda+D) \\, \\norm{f}_{\\infty}^2\\,C_{t,1} \\, t\\,.\n\\end{align*}\nHence $\\Psi_{t}(\\bar\\nu^n)$ converges to $0$ in $L^2$, hence also in $L^1$. \nFurthermore, we easily show that:\n\\begin{align*}\n|\\Psi_{t}(\\zeta)|\n\t\\leq & C_{f,t} \\sup_{0 \\leq u \\leq t} \\crochet{\\zeta_u,1}\n\\end{align*}\nso, from Lemma \\ref{lem.esp}, $(\\Psi_{t}(\\bar\\nu^{n'}))_{n'}$ is uniformly\nintegrable. This uniform integrability combined with \\eqref{eq.cv.loi.Psi} implies:\n\\[\n 0\n = \\lim_{n' \\to \\infty} \\EE |\\Psi_t(\\bar\\nu^{n'})|\n = \\EE |\\Psi_t(\\tilde \\nu)|\\,.\n\\]\nSo $\\Psi_t(\\tilde \\nu)=0$ a.s. and $\\tilde \\nu$ is a.s. 
equal to $\\xi$ where $(S,\\xi)$ \nis the unique solution of \\eqref{eq.limite.substrat.faible}-\\eqref{eq.limite.eid.faible}.\n\n\n\\bigskip\n\nThis last step concludes the proof of Theorem \\ref{theo.cv.ibm}.\n\n\n\n\\subsection{Links with deterministic models}\n\\label{subsec.modeles.deterministes}\n\n\nEquation \\eqref{eq.limite.eid.faible} is actually a weak version of an \nintegro-differential equation that can be easily identified.\nIndeed suppose that the solution $\\xi_t$ of Equation \\eqref{eq.limite.eid.faible}\nadmits a density $p_t(x)\\,\\rmd x=\\xi_{t}(\\rmd x)$, and that $Q(\\rmd \\alpha)=q(\\alpha)\\,\\rmd \\alpha$, then the system of equations \\eqref{eq.limite.substrat.faible}-\\eqref{eq.limite.eid.faible} is a weak version of the following system:\n\\begin{align} \n\\label{eq.limite.substrat.fort}\n\t&\n\t\\frac{\\rmd}{\\rmd t} S_t = \n\tD\\,({\\mathbf s}_{\\textrm{\\tiny\\rm in}}-S_t)-\\frac kV \\int_\\X \\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S_t,x)\\,p_t(x)\\,\\mathrm{d} x\\,,\n\\\\\n\\nonumber\n\t&\n\t\\frac{\\partial}{\\partial t} p_t(x)\n\t+\\frac{\\partial}{\\partial x} \\bigl( \\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S_t,x)\\,p_t(x)\\bigr)\n\t+ \\bigl(\\lambda(S_t, x)+D \\bigr)\\,p_t(x)\n\\\\\n\\label{eq.limite.eid.fort}\n \t&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\n\t= \n\t2\\,\\int_\\X \n\t\\frac{\\lambda(S_t, z)}{z}\\,q\\left(\\frac xz \\right)\\,p_t(z)\\,\\mathrm{d} z\\,.\n\\end{align}\nIn fact, this is the population balance equation introduced by \\cite{fredrickson1967a} and \\cite{ramkrishna1979a} for growth-fragmentation models.\n\n\\bigskip\n\nIt is easy to link the model \\eqref{eq.limite.substrat.fort}-\\eqref{eq.limite.eid.fort}\nto the classic chemostat model. Indeed suppose that\nthe growth function $x \\mapsto \\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(s,x)$ is proportional to $ x $, i.e.:\n\\[\n \\rho_{\\textrm{\\tiny\\rm\\!\\! 
g}}(s,x)=\\tilde\\mu(s)\\,x\\,.\n\\]\nThe results presented now are formal insofar as a linear growth function \ndoes not verify the assumptions made in this article.\nWe introduce the bacterial concentration:\n\\[\n Y_t \\eqdef \\frac1V \\,\\int_\\X x\\, p_t(x) \\, \\mathrm{d} x\\,.\n\\] \nAs $\\sup_{0\\leq t\\leq T}\\crochet{p_{t},1}<\\infty$, from \\eqref{eq.limite.eid.fort}:\n\\begin{multline*}\n\t\\frac{\\mathrm{d}}{\\mathrm{d} t} Y_t\n\t- \\frac1V \\int_\\X x \\, \n\t\t\t\\frac{\\partial}{\\partial x}\\bigl( \\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S_t,x)\\,p_t(x)\\bigr) \\, \\mathrm{d} x\n\t+ \\frac1V \\int_\\X x \\, \\lambda(S_t, x) \\, p_t(x)\\,\\mathrm{d} x\n\t+ D\\, Y_t\n\\\\\n\t= \\frac2V\\, \\int_\\X x \\, \n\t\t\t\t\\int_\\X \n\t\t\t\t\\frac{\\lambda(S_t, z)}{z}\\,q(x\/z)\\,p_t(z)\\,\n\t\t\t\t\\mathrm{d} z \\, \\mathrm{d} x\\,,\n\\end{multline*}\nbut\n\\begin{align*}\n&\\int_\\X x \\, \n\t\\int_\\X \n\t\t\\frac{\\lambda(S_t, z)}{z}\\,\n\t\tq(x\/z)\\,p_t(z)\\,\\mathrm{d} z \\, \\mathrm{d} x\n\t\t= \t\t\n\t\t\\int_\\X \\int_0^1\t\t\t \n\t\t\tz\\, \\lambda(S_t,z)\\, \\alpha \\,q\\left(\\alpha \\right)\\,p_t(z)\\,\\mathrm{d} \\alpha \\, \\mathrm{d} z\n\\\\\n\t\t&\\qquad\\qquad = \\int_\\X \\int_0^1\t\t\t \n\t\t\tz\\, \\lambda(S_t, z)\\, \\alpha \\,q\\left(1-\\alpha \\right)\\,p_t(z)\\,\\mathrm{d} \\alpha \\, \\mathrm{d} z\n\t\t\t\\tag{by symmetry of $q$}\n\\\\\n\t\t&\\qquad\\qquad =\t\\int_\\X \\int_0^1\t\t\t \n\t\t\tz\\, \\lambda(S_t, z)\\, (1-\\alpha) \\,q\\left(\\alpha \\right)\\,p_t(z)\\,\\mathrm{d} \\alpha \\, \\mathrm{d} z\n\\\\\n\t\t&\\qquad\\qquad = - \\int_\\X \\int_0^1\t\t\t \n\t\t\tz\\, \\lambda(S_t, z)\\, \\alpha \\,q\\left(\\alpha \\right)\\,p_t(z)\\,\\mathrm{d} \\alpha \\, \\mathrm{d} z\n\t\t+ \\int_\\X \t\t\t \n\t\t\tz\\, \\lambda(S_t, z)\\,p_t(z)\\, \\mathrm{d} z\n\\end{align*}\nthus:\n\\begin{align*}\n 2\\, \\int_\\X x \\, \n\t\\int_\\X \n\t\\frac{\\lambda(S_t, z)}{z}\\,q(x\/z)\\,p_t(z)\\,\\mathrm{d} z \\, \\mathrm{d} x\n\t\t= \\int_\\X 
\t\t\t \n\t\t\tz\\, \\lambda(S_t, z)\\,p_t(z)\\, \\mathrm{d} z.\n\\end{align*}\nThe function $x\\mapsto p_t(x)$ is the population density at time $t$. On the one hand $p_0(x)$ has compact support. On the other hand the growth of each bacterium is defined by a differential equation whose right-hand side is bounded by a linear function in $x$, uniformly in $s$. Hence for all $t\\leq T$, we can uniformly bound the mass of all the bacteria and $p_{t}(x)$ has a compact support, i.e. there exists ${\\textrm{max}}$ such that the support of $p_{t}(x)$ is included in $[0,{\\textrm{max}}]$ with $p_{t}({\\textrm{max}})=0$, so we choose $\\X=[0,{\\textrm{max}}]$. Moreover $\\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S_t,0)=0$ hence:\n\\begin{align*}\n\t\\int_\\X \n\t\tx \\, \\frac{\\partial}{\\partial x}\\bigl( \\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S_t,x)\\,p_t(x)\\bigr) \n\t\\, \\mathrm{d} x\n\t= \n\t-\\int_\\X \\rho_{\\textrm{\\tiny\\rm\\!\\! g}}(S_t,x)\\,p_t(x) \\, \\mathrm{d} x.\n\\end{align*}\nFinally:\n\\begin{align*}\n\t\\frac{\\mathrm{d}}{\\mathrm{d} t} Y_t \n\t& = \n\t\t\\frac1V \\int_\\X \\rho_{\\textrm{\\tiny\\rm\\!\\! 
g}}(S_t,x)\\,p_t(x) \\, \\mathrm{d} x\n\t\t-D\\, Y_t\n\t= \\tilde\\mu(S_{t})\\,Y_{t} -D\\, Y_t\\,.\n\\end{align*}\nWe deduce that the concentrations $(Y_{t},S_{t})_{t\\geq 0}$ of biomass and substrate are the solution of the following closed system of ordinary differential equations:\n\\begin{equation}\n\\label{eq.chemostat.edo}\n\t\\begin{split}\n\t\\dot Y_t \t& = \\bigl(\\tilde\\mu(S_t)-D \\bigr)\\, Y_t\\,,\n\t\\\\\n\t\\dot S_t\t& = D\\,({\\mathbf s}_{\\textrm{\\tiny\\rm in}}-S_t)-k \\,\\tilde\\mu(S_t)\\, Y_t\\,,\n\t\\end{split}\n\\end{equation}\nwhich is none other than the classic chemostat equation \\citep{smith1995a}.\n\n\n\n\n\\section{Simulations}\n\\label{sec.simulations}\n\nIn this section we compare the behavior of the individual-based model (IBM) and two deterministic models: the integro-differential equation (IDE) \\eqref{eq.limite.substrat.fort}-\\eqref{eq.limite.eid.fort} and the classic chemostat model, represented by the ordinary differential equation (ODE) \\eqref{eq.chemostat.edo}.\nSimulations of the IBM were performed following Algorithm \\ref{algo.ibm}.\nThe integro-differential equation was solved with the numerical scheme given in Appendix \\ref{appendix.schema.num}, with a discretization step in the mass space of $\\Delta x = 2 \\times 10^{-7}$ and a discretization step in time of $\\Delta t = 5 \\times 10^{-4}$.\n\n\n\\subsection{Simulation parameters}\n\nIn the simulations proposed in this section, the division rate of an individual is given by the following function:\n\\begin{align*}\n\t\\lambda(s,x)\n\t& =\n\t\\frac{\\bar\\lambda}{\\log \\bigl(({\\textrm{max}}-m_{\\textrm{\\tiny\\rm div}}) \\, p_\\lambda +1 \\bigr)} \n\t\\,\n\t\\log \\bigl((x-m_{\\textrm{\\tiny\\rm div}}) \\, p_\\lambda +1 \\bigr) \\, 1_{\\{x \\geq m_{\\textrm{\\tiny\\rm div}}\\}}\n\\end{align*}\nwhich does not depend on the substrate concentration.\n\n\nThe division kernel $Q(\\mathrm{d} \\alpha)=q(\\alpha)\\,\\mathrm{d} \\alpha$ is given by a 
symmetric beta distribution:\n\\begin{align*}\n\tq(\\alpha)\n\t& = \n\t\\frac{1}{B(p_\\beta)}\\,\n\t\t\\bigl( \\alpha \\,(1-\\alpha) \\bigr)^{p_\\beta-1}\n\\end{align*}\nwhere $B(p_\\beta)\n\t= \\int_0^1 \\bigl( \\alpha \\,(1-\\alpha) \\bigr)^{p_\\beta-1} \\, \\mathrm{d} \\alpha$\nis a normalizing constant.\n\nIndividual growth follows a Gompertz model, with a growth rate depending on the substrate concentration:\n\\begin{align*}\n\tg(s,x)\n\t& = \n\tr_{\\textrm{\\tiny\\rm max}}\\,\\frac{s}{k_r+s} \\,\n\t\\log\\Big(\\frac{{\\textrm{max}}}{x}\\Big)\\,x\\,.\n\\end{align*}\nThe masses of individuals at the initial time are sampled according to the following probability density function:\n\\begin{align}\n\\label{eq.d}\nd(x)\n\t& =\t\n\t\t\\Biggl(\n\t\t\t\\frac{x-0.0005}{0.00025}\n\t\t\t\\,\\left(1-\\frac{x-0.0005}{0.00025}\\right)\n\t\t\\Biggr)^5 \\,\n\t\t1_{\\{0.0005 < x < 0.00075\\}}\\,.\n\\end{align}\nThis initial density will show a transient phenomenon that cannot be reproduced by the classical chemostat model described in terms of ordinary differential equations \\eqref{eq.chemostat.edo}, see Figure \\ref{fig.edo.ibm.eid}.\n\nThe simulations were performed using the parameters in Table \\ref{table.parametres}. 
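For concreteness, these ingredients can be written down in \texttt{Python}, the language also used below to integrate the ODE. The following is a minimal sketch assuming the parameter values of Table \ref{table.parametres}; all function and constant names are ours, and the initial density $d$ is coded up to its normalizing constant.

```python
import math

# Parameter values from Table \ref{table.parametres} (masses in mg, times in h)
M_MAX = 0.001      # maximal mass `max`
M_DIV = 0.0004     # division threshold m_div
LAMBDA_BAR = 1.0   # maximal division rate
P_LAMBDA = 1000.0
P_BETA = 7.0
R_MAX = 1.0        # maximal Gompertz growth rate
K_R = 10.0         # half-saturation constant (mg/l)

def division_rate(s, x):
    """Division rate lambda(s, x); it does not depend on the substrate s."""
    if x < M_DIV:
        return 0.0
    return LAMBDA_BAR * math.log((x - M_DIV) * P_LAMBDA + 1.0) \
        / math.log((M_MAX - M_DIV) * P_LAMBDA + 1.0)

def division_density(alpha):
    """Symmetric beta density q(alpha) on (0, 1)."""
    b = math.gamma(P_BETA) ** 2 / math.gamma(2.0 * P_BETA)  # B(p_beta)
    return (alpha * (1.0 - alpha)) ** (P_BETA - 1.0) / b

def growth_speed(s, x):
    """Gompertz growth g(s, x) modulated by a Monod factor in the substrate s."""
    return R_MAX * s / (K_R + s) * math.log(M_MAX / x) * x

def initial_density(x):
    """Initial mass profile d(x) of Eq. (eq.d), up to its normalizing constant."""
    if not 0.0005 < x < 0.00075:
        return 0.0
    u = (x - 0.0005) / 0.00025
    return (u * (1.0 - u)) ** 5
```

One can check on this sketch that $\lambda(s,\cdot)$ increases from $0$ at $m_{\textrm{\tiny\rm div}}$ to $\bar\lambda$ at the maximal mass, and that $q$ integrates to one.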
The parameters $V$, $N_0$ and $D$ will be specified for each simulation.\n\n\n\\begin{table}[h]\n\\begin{center}\n\\begin{tabular}{|c|c|}\n\t\\hline\n Parameters & Values \\\\\n \\hline\n $S_0$\t\t\t\t&\t5 mg\/l \\\\\n ${\\mathbf s}_{\\textrm{\\tiny\\rm in}}$\t\t\t\t&\t10 mg\/l \\\\\n ${\\textrm{max}}$\t\t\t\t&\t0.001 mg \\\\\n $m_{\\textrm{\\tiny\\rm div}}$\t\t\t\t&\t0.0004 mg \\\\\n\t$\\bar\\lambda$\t\t&\t1 h$^{-1}$\\\\\n $p_\\lambda$\t\t\t&\t1000 \\\\\n $p_\\beta$\t\t\t&\t7 \\\\\n $r_{\\textrm{\\tiny\\rm max}}$\t\t\t\t&\t1 h$^{-1}$\\\\\n $k_r$\t\t\t\t&\t10 mg\/l\\\\\n $k$\t\t\t\t\t&\t1\\\\\n \\hline\n\\end{tabular}\n\\end{center}\n\\caption{Simulation parameters.}\n\\label{table.parametres}\n\\end{table}\n\n\n\n\n\\subsection{Comparison of the IBM and the IDE}\n\nTo illustrate the convergence of the IBM to the IDE in the large population asymptotics, we performed simulations at different population sizes. To this end we varied the volume of the chemostat and the number of individuals at the initial time. We considered three cases:\n\\begin{enumerate}\n\\item small size: $V=0.05$ l and $N_0=100$,\n\\item medium size: $V=0.5$ l and $N_0=1000$,\n\\item large size: $V=5$ l and $N_0=10000$.\n\\end{enumerate} \nIn each of these three cases we simulate:\n\\begin{itemize}\n\\item 60 independent runs of the IBM;\n\\item the numerical approximation of \\eqref{eq.limite.substrat.fort}-\\eqref{eq.limite.eid.fort} using the finite difference schemes \ndetailed in Appendix \\ref{appendix.schema.num}\n\\end{itemize}\nboth with the same initial biomass concentration distribution.\n\n \n\n\\bigskip\n\nThe convergence of the IBM to the IDE is clearly illustrated\nin Figure \\ref{evol.taille.concentrations} where the evolutions of the population size, of the biomass concentration, and of the substrate concentration are represented.\n\nIn Figure \\ref{fig.evol.eid} the time evolution of the normalized mass distribution is depicted, i.e. 
the normalized solution of the IDE \\eqref{eq.limite.eid.fort}. We have represented the simulation until time $T=10$ (h) to illustrate the transient phenomenon due to the choice of the initial distribution \\eqref{eq.d}: after a short time this distribution becomes bimodal; the upper mode (large mass) grows in mass and disappears before time $T=10$ (h). The lower mode (small mass) corresponds to the mass of the bacteria resulting from the division; the upper mode corresponds to the mass of the initial bacteria before their division. Thus, the upper mode disappears quickly, through division or withdrawal. The IBM reproduces this phenomenon, see\nFigure \\ref{repartition.masse}. In contrast, the classical chemostat model presented below, see Equation \\eqref{eq.chemostat.edo}, cannot account for this phenomenon.\n\n\nFigure \\ref{repartition.masse} presents this normalized mass distribution at three different instants, $t=1,\\,4,\\, 80$ (h), and the simulation of the IDE is compared \nto 60 independent runs of the IBM, again for the three levels of population sizes described above. Depending on whether the population is large, medium or small, \nwe adapted the number of bins of the histograms so that the resulting graphics are clear. 
The convergence of the IBM solution to the IDE in the large population limit can be observed.\n\n\nIn conclusion, the IBM converges in the large population limit to the IDE, and the variability ``around'' the asymptotic model is relatively large for small or medium population sizes; note that there is no reason why the IDE should represent the mean value of the IBM.\n\n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{ccc}\n\\includegraphics[width=4.5cm]{fig_simu130521160944_size.pdf}\n&\n\\includegraphics[width=4.5cm]{fig_simu130522173621_size.pdf}\n&\n\\includegraphics[width=4.5cm]{fig_simu130605175456_size.pdf}\n\\\\\n\\includegraphics[width=4.5cm]{fig_simu130521160944_biomass.pdf}\n&\n\\includegraphics[width=4.5cm]{fig_simu130522173621_biomass.pdf}\n&\n\\includegraphics[width=4.5cm]{fig_simu130605175456_biomass.pdf}\n\\\\\n\\includegraphics[width=4.5cm]{fig_simu130521160944_substrate.pdf}\n&\n\\includegraphics[width=4.5cm]{fig_simu130522173621_substrate.pdf}\n&\n\\includegraphics[width=4.5cm]{fig_simu130605175456_substrate.pdf}\n\\\\\n\\includegraphics[width=4.5cm]{fig_simu130521160944_phasePortrait.pdf}\n&\n\\includegraphics[width=4.5cm]{fig_simu130522173621_phasePortrait.pdf}\n&\n\\includegraphics[width=4.5cm]{fig_simu130605175456_phasePortrait.pdf}\n\\\\\n\\scriptsize small population size\n&\n\\scriptsize medium population size\n&\n\\scriptsize large population size\n\\\\\n\\scriptsize $V=0.05$ l, $N_0=100$\n&\n\\scriptsize $V=0.5$ l, $N_0=1000$\n&\n\\scriptsize $V=5$ l, $N_0=10000$\n\\end{tabular}\n\\end{center}\n\\vskip-1em\n\\caption{From top to bottom: time evolutions of the population size, the biomass concentration, the substrate concentration and the phase portrait of the concentrations for the three levels of population sizes (small, medium and large). The blue curves represent the trajectories of 60 independent runs of the IBM. The green curve represents the mean value of these runs. The red curve represents the solution of the IDE. 
The dilution rate $D$ is 0.2 h$^{-1}$.}\n\\label{evol.taille.concentrations}\n\\end{figure}\n\n\n\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[trim=4.3cm 1.5cm 2.2cm 2.2cm,width=10cm]{fig_IDE_density_evolution.pdf}\n\\end{center}\n\\caption{Time evolution of the normalized mass distribution for the IDE \\eqref{eq.limite.eid.fort}: we represent the simulation until time $T=10$ (h) only to illustrate the transient phenomenon due to the choice of the initial distribution \\eqref{eq.d}. After a short time this distribution becomes bimodal; the upper mode grows in mass and disappears before $T=10$ (h).}\n\\label{fig.evol.eid}\n\\end{figure}\n\n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{ccc}\n\\includegraphics[width=4.5cm]{fig_simu130521160944_density1.pdf}\n&\n\\includegraphics[width=4.5cm]{fig_simu130522173621_density1.pdf}\n&\n\\includegraphics[width=4.5cm]{fig_simu130605175456_density1.pdf}\n\\\\\n\\includegraphics[width=4.5cm]{fig_simu130521160944_density4.pdf}\n&\n\\includegraphics[width=4.5cm]{fig_simu130522173621_density4.pdf}\n&\n\\includegraphics[width=4.5cm]{fig_simu130605175456_density4.pdf}\n\\\\\n\\includegraphics[width=4.5cm]{fig_simu130521160944_density80.pdf}\n&\n\\includegraphics[width=4.5cm]{fig_simu130522173621_density80.pdf}\n&\n\\includegraphics[width=4.5cm]{fig_simu130605175456_density80.pdf}\n\\\\\n\\scriptsize small population size\n&\n\\scriptsize medium population size\n&\n\\scriptsize large population size\n\\\\\n\\scriptsize $V=0.05$ l, $N_0=100$\n&\n\\scriptsize $V=0.5$ l, $N_0=1000$\n&\n\\scriptsize $V=5$ l, $N_0=10000$\n\\end{tabular}\n\\end{center}\n\\caption{Mass distribution at times $t=1$ (top), $t=4$ (middle) and $t=80$ (bottom) in small (left), medium (middle) and large (right) population size. For each graph, the blue histograms represent the empirical mass distributions of individuals for the 60 independent runs of the IBM. 
In order to plot the histograms we have adapted the number of bins to the population size. The red curve represents the mass distribution given by the IDE. The dilution rate $ D $ is 0.2 h$^{-1}$. Again, the convergence of the IBM solution to the IDE in the large population limit is observed.}\n\\label{repartition.masse}\n\\end{figure}\n\n\n\n\n\n\n\n\\subsection{Comparison of the IBM, the IDE and the ODE}\n\nWe now compare the IBM and the IDE to the classical chemostat model described by the system of ODE \\eqref{eq.chemostat.edo}, where the function $\\tilde\\mu$ is the specific growth rate. The growth model in both the IBM and the IDE is of Monod type, so for the ODE model we also consider the classical Monod kinetics:\n\\begin{align}\n\\label{eq.edo.monod}\n\t\\tilde\\mu(S)\n\t& = \n\t\\mu_{\\max} \\, \\frac{S}{K_s+S}\\,.\n\\end{align}\nThe parameters of this Monod law are not given in the initial model; we use \na least squares method to determine the values of the parameters $\\mu_{\\max}$ and $K_s$\nwhich minimize the quadratic distance between $(S_{t},X_{t})_{t\\leq T}$ given by \\eqref{eq.chemostat.edo} and $(S_{t},X_{t}=\\int_{\\X} x\\,p_{t}(x)\\,\\rmd x)_{t\\leq T}$ given by \\eqref{eq.limite.substrat.fort}-\\eqref{eq.limite.eid.fort}.\n\nThe numerical integration of the ODE \\eqref{eq.chemostat.edo} presents no difficulties\nand is performed by the function \\texttt{odeint} of the module \\texttt{scipy.integrate} of \\texttt{Python} with the default parameters.\n\n\\bigskip\n\nFirst we consider a simulation based on the initial mass density $d(x)$ defined by \n\\eqref{eq.d}. With this initial density both the IDE and the IBM feature a transient phenomenon described in the previous section and illustrated in Figures \\ref{fig.evol.eid} and~\\ref{repartition.masse}. 
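The least-squares calibration of the Monod parameters described above can be sketched numerically. The following is a minimal pure-Python illustration (not the code used in the paper): an Euler integration of the chemostat ODE under known parameters stands in for the IDE trajectory, and a coarse grid search recovers mu_max and K_s by minimizing the quadratic distance. All numerical values are illustrative assumptions.

```python
def monod(s, mu_max, k_s):
    # Monod specific growth rate: mu(S) = mu_max * S / (K_s + S)
    return mu_max * s / (k_s + s)

def simulate(mu_max, k_s, d=0.2, s_in=5.0, s0=4.0, x0=0.1,
             dt=0.01, n_steps=4000):
    # Euler integration of the classical chemostat ODE; the yield
    # coefficient is set to 1 for simplicity (illustrative assumption).
    s, x, traj = s0, x0, []
    for _ in range(n_steps):
        mu = monod(s, mu_max, k_s)
        s, x = s + dt * (d * (s_in - s) - mu * x), x + dt * (mu - d) * x
        traj.append((s, x))
    return traj

def fit_monod(target, mu_grid, ks_grid):
    # Grid-search least squares over (mu_max, K_s): minimize the
    # quadratic distance between candidate and target trajectories.
    best, best_err = None, float("inf")
    for mu_max in mu_grid:
        for k_s in ks_grid:
            err = sum((s - ts) ** 2 + (x - tx) ** 2
                      for (s, x), (ts, tx) in zip(simulate(mu_max, k_s),
                                                  target))
            if err < best_err:
                best, best_err = (mu_max, k_s), err
    return best

target = simulate(0.34, 2.9)          # stand-in for the IDE trajectory
mu_hat, ks_hat = fit_monod(target, [0.30, 0.34, 0.38], [2.0, 2.9, 4.0])
```

In practice one would replace the grid by a continuous optimizer, but the objective is the same quadratic distance between trajectories as in the text.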
\n\n\nFigure \\ref{fig.edo.ibm.eid} (left) shows a significant difference between the IBM and the IDE on the one hand and the ODE on the other hand: the latter model cannot account for the transient phenomenon. In the first two models, the individual bacteria are withdrawn uniformly and independently of their mass (bacteria with a large mass have the same probability of withdrawal as bacteria with a small mass), and \nat the beginning of the simulation there is a decrease in biomass,\nsince the initial density $d(x)$ places a substantial proportion of the mass on large bacteria.\nThe ODE is naturally not able to account for this phenomenon.\n\n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[width=5cm]{fig_simu130527183824_MC_biomasse.pdf}\n&\n\\includegraphics[width=5cm]{fig_simu130627163114_MC_biomasse.pdf}\n\\\\\n\\includegraphics[width=5cm]{fig_simu130527183824_MC_substrat.pdf}\n&\n\\includegraphics[width=5cm]{fig_simu130627163114_MC_substrat.pdf}\n\\\\\n\\includegraphics[width=6cm]{fig_simu130527183824_MC_phase.pdf}\n&\n\\includegraphics[width=6cm]{fig_simu130627163114_MC_phase.pdf}\n\\\\[0.7em]\n\\small initial density $d(x)$ \\eqref{eq.d}\n&\n\\small initial density $d'(x)$ \\eqref{eq.d'}\n\\end{tabular}\n\\end{center}\n\\caption{Time evolution of the biomass concentration (top), the substrate concentration (middle) and the concentration trajectories in the phase space (bottom) according to the initial mass distributions \\eqref{eq.d} (left) and \\eqref{eq.d'} (right). In blue, the trajectories of 60 independent runs of the IBM simulated with $V=3$ l and $N_0=20000$ (left), $N_0=25000$ (right); in green, the mean of the IBM runs; in red, the solution of the IDE \\eqref{eq.limite.substrat.fort}-\\eqref{eq.limite.eid.fort}; in black, the solution of the ODE \\eqref{eq.chemostat.edo}. 
The latter is fitted by the least squares method on the IDE; the parameters of the Monod law \\eqref{eq.edo.monod} are $\\mu_{\\max}=0.341$ and $K_s = 2.862$ in the first case and $\\mu_{\\max}=0.397$ and $K_s = 3.996$ in the second.\nAs the initial densities are different, $N_{0}$ is adapted so that the average initial biomass concentration is the same in both cases.\nThe dilution rate $D$ is 0.2 h$^{-1}$. In the first case, the ODE cannot account for the transient phenomenon described in the previous section, see Figures \\ref{fig.evol.eid} and \\ref{repartition.masse}, while the IDE and the IBM account for it coherently. In the second case the three models are consistent.}\n\\label{fig.edo.ibm.eid}\n\\end{figure}\n\nThis phenomenon no longer appears when one uses the following density:\n\\begin{align}\n\\label{eq.d'}\nd'(x)\n\t& =\t\n\t\t\\Biggl(\n\t\t\t\\frac{x-0.00035}{0.0003}\n\t\t\t\\,\\left(1-\\frac{x-0.00035}{0.0003}\\right)\n\t\t\\Biggr)^5 \\,\n\t\t1_{\\{0.00035 < x < 0.00065\\}}\\,.\n\\end{align}\n\nIndeed, from Figure \\ref{fig.edo.ibm.eid} (right), there is no longer any biomass decay at the beginning of the simulation, the different simulations are comparable, and \nthe ODE and the IDE match substantially.\n\n\n\n\\subsection{Study of the washout}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=10cm]{fig_simu130704121134_MC_biomasse.pdf}\n\\end{center}\n\\caption{Time evolution of the biomass concentration. In blue, 1000 independent realizations of the IBM simulated with $V=0.5$ l and $N_0=30$; in green, the mean of these runs; in red, the solution of the IDE; in black, the solution of the ODE with parameter values $\\mu_{\\max}=0.482$ and $K_s = 6.741$. The dilution rate $ D $ is 0.275 h$^{-1}$. Among the 1000 independent runs of the IBM, 111 lead to washout while the deterministic models converge to an equilibrium with strictly positive biomass. 
The mean value of the 1000 runs of the IBM reflects the washout probability, while the IDE and ODE models cannot account for this phenomenon.}\n\\label{fig.lessivage}\n\\end{figure}\n\n\n\n\\begin{figure}[p]\n\\begin{center}\n\\includegraphics[width=9cm]{fig_simu130605093815_MC_biomasse.pdf}\\\\\n\\includegraphics[width=9cm]{fig_simu130605093815_loi_tps_extinction.pdf}\n\\caption{$\\blacktriangleright$ (Top) Evolution of the biomass concentration between $t=20$ and $t=90$ h: in blue, 1000 independent runs of the IBM; in green, the mean value of these runs; in red, the solution of the IDE; in black, the solution of the ODE with parameters $\\mu_{\\max}=0.814$ and $K_s = 17.547$. The parameters are $V=10$ l and $N_0=10000$, and the dilution rate $D$ is 0.5 h$^{-1}$. For both deterministic models, the size of the population decreases exponentially fast to 0 but remains strictly positive for any finite time. However, all the runs of the IBM reach washout in finite time. $\\blacktriangleright$ (Bottom) Empirical distribution of the washout time calculated from 7000 independent runs of the IBM and plotted using a kernel smoothing in time.}\n\\label{fig.lessivage2}\n\\end{center}\n\\end{figure}\n\n\nOne of the main differences between deterministic and stochastic models lies in their way of accounting for the washout phenomenon (or extinction phenomenon in the case of an ecosystem). With a sufficiently small dilution rate $D$, the solutions of the ODE \\eqref{eq.chemostat.edo} and the IDE \\eqref{eq.limite.substrat.fort}-\\eqref{eq.limite.eid.fort} converge to an equilibrium point with strictly positive biomass. In fact, the washout is an unstable equilibrium point: apart from the line corresponding to the null biomass, the whole phase space is a basin of attraction leading to an asymptotic point with strictly positive biomass. 
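The contrast between stochastic extinction and deterministic persistence can be illustrated with a toy birth-death caricature of the population size (a deliberately simplified sketch, not the IBM itself; all rates and sizes are illustrative assumptions):

```python
import random

def hits_washout(n0, birth, dilution, n_safe, rng):
    # Embedded jump chain of a birth-death process: at each event a
    # division occurs with probability birth/(birth+dilution), a
    # withdrawal otherwise.  Returns True if the population reaches 0
    # (washout) before growing to the "safe" size n_safe.
    n = n0
    while 0 < n < n_safe:
        n += 1 if rng.random() < birth / (birth + dilution) else -1
    return n == 0

rng = random.Random(0)
runs = 1000
p_hat = sum(hits_washout(10, 0.3, 0.275, 200, rng)
            for _ in range(runs)) / runs

# The matching deterministic model n' = (birth - dilution) * n never
# reaches 0, while the branching process goes extinct with probability
# (dilution / birth) ** n0, here about 0.42:
p_exact = (0.275 / 0.3) ** 10
```

Even though the deterministic drift is positive, so that the ODE/IDE analogue converges to a strictly positive state, a sizable fraction of the stochastic runs washes out in finite time, in the spirit of the washouts observed for the IBM.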
However, from Figure \\ref{fig.lessivage}, among the 1000 independent runs of the IBM, 111 converge to washout before time $t=1000$ h; so the probability of washout at this instant is approximately 11\\%. It may be noted that the IDE and the ODE do not correspond to the average value of the IBM, since only the latter may reflect the washout in a finite time horizon.\n\nNow we consider a sufficiently large dilution rate, $D=0.5$ h$^{-1}$, corresponding to the washout conditions. Figure \\ref{fig.lessivage2} (top) presents the evolution of the biomass concentration in the different models. The runs of the IBM converge to the washout in finite time, whereas both the deterministic ODE and IDE models converge exponentially to washout without ever reaching it in finite time. Figure \\ref{fig.lessivage2} (bottom) shows the empirical distribution of the washout time calculated from 7000 independent runs of the IBM. This washout time features a relatively large variance.\n\n\n\n\n\\section*{Conclusion}\n\nWe proposed a hybrid chemostat model. On the one hand the bacterial population is modeled as an individual-based model: each bacterium is explicitly represented through its mass. On the other hand, the substrate concentration dynamics is represented as a conventional ordinary differential equation. Mechanisms acting on the bacteria are explicitly described: growth, division and substrate uptake. Consumption of the substrate is also described, and it is through this mechanism that bacteria interact.\n\n\nWe described an exact Monte Carlo simulation technique of the model. Our main result is the convergence of the IBM to an integro-differential equation model when the population size tends to infinity. 
This is a convergence in distribution of coupled stochastic processes: the first, with c\u00e0dl\u00e0g trajectories, takes values in the set of finite measures on the space of masses; the second, with continuous trajectories, takes values in the set of substrate concentrations. The limiting integro-differential equation model has been known for many years as the population-balance equation and is used in particular for growth-fragmentation models. The numerical tests have allowed us to illustrate this convergence: in large populations, the integro-differential equation model accurately reflects the behavior of the IBM and thus the randomness can be neglected. In small or medium populations this randomness is not negligible. We also proposed a numerical test where the classical chemostat model, in terms of two coupled ordinary differential equations, cannot account for a transient behavior observed with both the integro-differential model and the IBM. Finally, in the case of washout, hence in small population size, the IBM accounts for the random washout time, whereas the conventional model, like the integro-differential model, merely offers a more limited vision of this phenomenon, namely an asymptotic convergence to washout that is never reached in finite time.\n\n\n\n\\bigskip\n\n\nIt would be interesting to carry out extensive simulations of this model along several axes. First, our simulations are based on a division rate $\\lambda(s,x)$ which does not depend on the substrate concentration~$s$. One can easily consider a rate depending on~$s$; such examples do exist \\citep[see e.g.][]{henson2003b}. Similarly, in our model the individual growth dynamics is deterministic; it would be appropriate to consider a stochastic individual growth dynamics. 
To progress in this direction it would be necessary to consider specific case studies where more explicit growth dynamics would be specified.\n\nIt would also be pertinent to develop approximation methods based on moments,\nbut these methods are usually ad hoc and it would be worthwhile to rely on a rigorous and systematic approach.\n\n\nFinally, there are several relatively immediate extensions of the proposed model.\nFirst, one can imagine extending the model to the case of several species and several types of substrates.\nOne can also model the effects of aggregation of bacteria in the chemostat, for example in the form of flocculation with given rates for aggregation and fragmentation of flocs. We can also consider two classes of bacteria, attached bacteria and planktonic bacteria, to account for biofilm dynamics.\n \n\n\n\n\n\\section*{Acknowledgements}\n\nThe authors are grateful to J\u00e9r\u00f4me Harmand and Claude Lobry for discussions on the model, and to Pierre Pudlo and Pascal Neveu for their help concerning the programming of the IBM. This work is partially supported by the project ``Mod\u00e8les Num\u00e9riques pour les \u00e9cosyst\u00e8mes Microbiens'' of the French National Network of Complex Systems (RNSC call 2012). The work of Coralie Fritsch is partially supported by the Meta-omics of Microbial Ecosystems (MEM) metaprogram of INRA.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\nThe data produced by functional Magnetic Resonance Imaging (fMRI) is high-dimensional and sometimes not suitable for analyzing cognitive states \\cite{firat2015learning}. Learning efficient low-dimensional features from high-dimensional complex input spaces is crucial for the decoding of cognitive processes. 
In this paper, we explore deep learning algorithms in order to i) find a compact representation of the connectivity patterns embedded in fMRI signals, ii) detect natural groupings of these patterns, and iii) use these natural groups to extract brain networks that represent cognitive tasks.\n\nOur framework is built upon our previous work in the area \\cite{onal2016hierarchical}, where we decompose fMRI signals into various frequency sub-bands using their wavelet transforms. We further utilize the signals at different sub-bands to form multi-resolution brain networks. Recent studies have shown that brain networks formed by the correlation of voxel pairs in fMRI signals provide more information for brain decoding compared to the temporal information of single voxels \\cite{lindquist2008statistical,richiardi2013machine}. Moreover, there has been a shift in the literature toward brain decoding algorithms that are based on the connectivity patterns in the brain, motivated by the belief that these patterns provide more information about cognitive tasks than the isolated behavior of individual anatomic regions \\cite{shirer2012decoding,ekman2012predicting,onal2017new}. \n\nContrary to the methods suggested in \\cite{lindquist2008statistical,richiardi2013machine}, where supervised learning algorithms are employed for brain decoding, in this paper we investigate the common groupings in the HCP task data set to find out whether these natural groups correspond to the cognitive tasks. This approach enables us to find shared network representations of a cognitive task together with its variations across subjects. Additionally, the multi-resolution representation of the fMRI signals enables us to observe the variations of the networks among different frequency sub-bands. 
\n\nAfter constructing the brain networks representing the connectivity patterns among the anatomic regions of the brain at each sub-band, a Stacked De-noising Auto-Encoder (SDAE) is employed to learn shared connectivity features associated with a task, based on the estimated mesh networks at different sub-bands. We concatenate the learned connectivity patterns from several wavelet sub-bands and utilize them in a hierarchical clustering algorithm with a distance matrix based on their correlations. The main reason behind the concatenation of the feature matrices is that the patterns detected in the brain at different frequencies provide complementary information about the overall cognitive state of the brain. \n\nOur results show that the mesh network representation of cognitive tasks is superior to the fMRI time-series representation. We observe that the SDAE successfully learns a set of connectivity patterns which provide an increased clustering performance. The performances are further improved by fusing the learned representations from multiple time-resolutions. This shows that modeling the connectivity of the brain in multiple sub-bands of the fMRI signal leads to diverse mesh networks carrying complementary information for representing the cognitive tasks. The high rand index of $93\\%$ obtained at the output of the clustering algorithm indicates the existence of natural groups with low within-cluster variances and high between-cluster variances among the tasks. \n\nIn order to analyze the similarities and distinctions among the network topologies of the fMRI signal, we visualize the networks and their precisions at the cluster centers. The cluster precisions indicate shared connectivities among the subjects, whereas the mesh networks at the cluster centers show a representative network for each cognitive task. 
It is observed that there are high inter-subject variances in the mesh networks.\n\\section{Experimental Setup} \\label{sec:exp}\nWe use the fMRI data from HCP for $300$ subjects performing specific tasks. A subject performs seven distinct cognitive tasks during the experiment given in Table \\ref{tab:firsttable} \\cite{barch2013function}. Each task $t$ consists of $s_t$ scans of the brain volume representing the changes in the brain during the task (the underlying cognitive process). The duration and the number of scans are task-dependent but the same for all participants. The total number of scans is $S=1940$ and we use $R = 90$ anatomical regions of $116$ AAL after removing the anatomical regions in Cerebellum and Vermis. \n\\begin{table}\n\t\\centering\n\t\\resizebox{.80\\textwidth}{!}{\\begin{minipage}{\\textwidth}\n\t\t\t\\begin{tabular}{l*{7}{c}r}\n\t\t\t\t\\hline\n\t\t\t\t& Emotion & Gambling & Language & Motor & Relational & Social & WM\\\\\n\t\t\t\t\\hline\n\t\t\t\tScans & 176 & 253 & 316 & 284 & 232 & 274 & 405\\\\\n\t\t\t\t\\hline\n\t\t\t\tDurations & 2:16 & 3:12 & 3:57 & 3:34& 2:56 & 3:27 & 5:01\\\\\n\t\t\t\\end{tabular}\n\t\\end{minipage}}\n\t\\caption{Scans per Task and the Duration for each Task (min:sec).}\n\t\\label{tab:firsttable} \n\\end{table}\nRepresentative time-series data points are attained by spatially averaging the signals associated with voxels ($n$) residing in the same region ($r$) in the brain,\n\\begin{align*}\nX_r(t)=\\frac{1}{N} \\sum_{\\forall n \\in r}^{} X_n(t),\n\\end{align*}\nwhere $N$ represents the total number of voxels in region $r$.\n\\section{Hierarchical Multi-Resolution Mesh Networks (HMMNs)} \\label{sec:hmmns}\n\\begin{figure*}[!t]\n\t\\centering\n\t\\includegraphics[scale = 0.5]{Diagram}\n\t\\caption{An Overview of the Proposed Deep Learning Framework.}\n\t\\label{fig:Diagram}\n\\end{figure*}\nIn our work, we utilize the representative time series obtained for each anatomic region to build a set of local meshes. 
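The regional averaging above, which produces the representative series X_r(t) from the voxels of a region, amounts to the following; a toy sketch with hypothetical voxel data:

```python
def region_average(voxel_series, region_of_voxel, region):
    # X_r(t): average, at each time point, of all voxel time-series
    # assigned to `region` (inputs are hypothetical toy data).
    members = [ts for ts, r in zip(voxel_series, region_of_voxel)
               if r == region]
    n = len(members)
    return [sum(values) / n for values in zip(*members)]

voxels = [[1.0, 2.0], [3.0, 4.0], [10.0, 10.0]]   # 3 voxels, 2 scans
labels = ["r1", "r1", "r2"]                        # voxel-to-region map
x_r1 = region_average(voxels, labels, "r1")        # -> [2.0, 3.0]
```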
The local meshes estimated around each anatomic region are ensembled to form a mesh network. This is motivated by the fact that the structure of the brain is highly interconnected and that neurons influence each other based on the strengths of their synaptic connections \\cite{pantazatos2012decoding}. HMMNs model cognitive tasks by estimating a mesh network at each frequency sub-band. It is expected that the brain network at each sub-band provides supplementary information about the underlying brain activities. We will show that our modeling of the brain with HMMNs greatly enhances the brain decoding performance by allowing us to look at the cognitive states of the brain regions in multiple time-resolutions.\n\nAs the first step, the representative time-series $X_r(t)$ for each anatomic region $r$ are decomposed into a set of signals in different time-resolutions. This allows us to estimate and analyze how the anatomical regions process information in different frequency resolutions \\cite{thompson2015frequency}. We adopt the Discrete Wavelet Transform (DWT) as our main tool \\cite{bullmore2004wavelets}. We apply the DWT to $X_r(t)$ for all brain regions to decompose the signals into sub-band levels $l=1,2,...,11$ ($L=11$). At sub-band level $l$, we attain two sets of orthonormal components, the approximation coefficients $\\mathcal{A}=\\{a_{r,l,k}\\}$ and the detail coefficients $\\mathcal{D}=\\{d_{r,l,k}\\}$, where $k$ represents the location of the wavelet waveform in discrete-time \\cite{bullmore2004wavelets}. These coefficients may then be utilized to reconstruct the fMRI signals at each frequency level, yielding a total of $(2\\times L)+1$ fMRI time-series. 
Formally, the representative time-series at sub-band $j$ ($j\\in [0,1,...,2L]$) may be defined as,\n\\begin{align*}\nx_{j,r}(t)=\n\\begin{cases}\nX_r(t), & \\text{if}\\ j=0 \\\\\n\\sum_{k}^{}a_{r,l,k}\\Phi_{l,k}(t)~and~ l=j & \\text{if}\\ 1\\leq j\\leq L\\\\\n\\sum_{k}^{}d_{r,l,k}\\Psi_{l,k}(t)~and~ l=j-L & \\text{if}\\ j > L\n\\end{cases}\n\\end{align*}\nwhere $\\Phi_{l,k}$ is the scaling (father) wavelet and $\\Psi_{l,k}$ is the mother wavelet. More details on our approach are given in \\cite{onal2016hierarchical}. \n\nNow, we can construct a mesh network at each sub-band to represent cognitive tasks in terms of the relationships among anatomic regions. The construction of these networks helps us analyze the topological properties of the brain and extract connectivity patterns associated with a cognitive process at each sub-band. In order to demonstrate the benefits of our approach, we propose an unsupervised clustering framework which can successfully take advantage of the connectivity patterns and distinguish between different cognitive tasks at multiple sub-bands. For this purpose, we divide the entire experiment session ($S=1940$ scans) of a subject into unlabeled windows $w_i$, $i=1,...,64$, each consisting of $30$ discrete scans. The window length is determined empirically, as the shortest time-interval which provides the highest rand index at the output of the clustering. It is important to note that the windows are unlabeled and may consist of overlapping data points from different cognitive tasks. \n\nThe nodes of the mesh networks are connected to their $p$-nearest neighbors to form a star mesh around each region. The nearest $p$ neighbors of a node are the ones having the largest Pearson correlation coefficients with the node. 
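The neighbour selection just described can be sketched as follows (a pure-Python toy example; in the paper $p=40$ over $R=90$ regions):

```python
import math

def pearson(a, b):
    # Sample Pearson correlation coefficient of two equal-length series.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def p_nearest(series, r, p):
    # Indices of the p regions whose time-series are most correlated
    # with that of region r (the star-mesh neighbourhood of r).
    scores = [(pearson(series[r], s), i)
              for i, s in enumerate(series) if i != r]
    return [i for _, i in sorted(scores, reverse=True)[:p]]

series = [[1, 2, 3, 4], [2, 4, 6, 8], [4, 3, 2, 1], [1, 3, 2, 4]]
neighbours = p_nearest(series, 0, 2)   # -> [1, 3]
```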
For each mesh formed around an anatomic region $r$, the arc weights for the window $w_i$ are estimated at the sub-band $j$ using the following regularized linear model,\n\\begin{align} \\label{mesh}\nx_{j,w_i,r}=\\sum_{r^\\prime \\in N_p[r]}^{} a_{j,w_i,r,r^\\prime} x_{j,w_i,r^\\prime} + \\lambda |a_{j,w_i,r,r^\\prime}|^2 + \\epsilon_{j,w_i,r}\n\\end{align}\nwhere the regularization parameter is $\\lambda$. The mesh arc weights $a_{j,w_i,r,r^\\prime}$, defined in the $N_p[r]$ neighborhood of region $r$, are estimated by minimizing the error $\\epsilon_{j,w_i,r}$. $x_{j,w_i,r}$ is a vector representing the average voxel time-series in region $r$ at sub-band $j$ for the window $w_i$, such that,\n\\begin{align*}\nx_{j,w_i,r}=[x_{j,w_i,r}(1),x_{j,w_i,r}(2),...,x_{j,w_i,r}(30)].\n\\end{align*}\n\nThe relation defined in (\\ref{mesh}) is solved for each region $r$ with its neighbors separately. In other words, we obtain an independent local mesh around each region $r$. After estimating all the mesh arc weights, we put them under the vector $A_{j,w_i} = \\{a_{j,w_i,r,r^\\prime}\\}_{r,r^\\prime}^R$, called Mesh Arc Descriptors (MADs). We represent $G_{j,w_i}$ as an ensemble of all local meshes. Lastly, the mesh networks are estimated for the original fMRI signal, and its approximation and detail parts of different resolutions. Consequently, we form $2L + 1$ distinct mesh networks for the frequency sub-bands $\\{A_0,A_1,A_2,...,A_L,D_1,D_2,...,D_L\\}$. \n\nThe multi-resolution mesh network for a subject is defined by a connectivity graph, $G_{j,w_i} = \\{ V, A_{j,w_i}: \\forall j \\}$, for each unlabeled window $w_i$ and for each sub-band $j$. The set of vertices $V$ corresponds to the ordered set of anatomic regions and is of size $R$. Vertex attributes are the time-series $x_{j,w_i,r}$ contained in the window $w_i$, at the sub-band $j$. 
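The local ridge estimation of the mesh arc weights can be sketched for a toy region with two neighbours; this minimal example (illustrative data and regularization, not the authors' solver) uses the closed-form 2x2 normal equations:

```python
def ridge_weights(y, x1, x2, lam):
    # Solve min ||y - a1*x1 - a2*x2||^2 + lam*(a1^2 + a2^2) via the
    # normal equations (X^T X + lam*I) a = X^T y for two predictors.
    g11 = sum(v * v for v in x1) + lam
    g22 = sum(v * v for v in x2) + lam
    g12 = sum(u * v for u, v in zip(x1, x2))
    b1 = sum(u * v for u, v in zip(x1, y))
    b2 = sum(u * v for u, v in zip(x2, y))
    det = g11 * g22 - g12 * g12
    return (g22 * b1 - g12 * b2) / det, (g11 * b2 - g12 * b1) / det

# Toy window: region r's series is exactly twice its first neighbour.
x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [4.0, 3.0, 2.0, 1.0]
y = [2.0 * v for v in x1]
a1, a2 = ridge_weights(y, x1, x2, lam=0.0)   # -> (2.0, 0.0)
```

With `lam=0` this is plain least squares; a positive regularization parameter, as in the text, shrinks the arc weights and stabilizes the estimate when neighbouring series are nearly collinear.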
The arc weights $A_{j,w_i} = \\{a_{j,w_i,r,r^\\prime}\\}_{r,r^\\prime}^R$ between regions $r$ and $r^\\prime$, for each window $w_i$, are obtained from the local meshes of the representative time-series data points at sub-band $j$. This process results in $2L + 1$ distinct mesh networks, each represented by an adjacency matrix of size $R\\times R$ made up of ($\\forall_{r,r^\\prime} a_{j,w_i,r,r^\\prime}$) for each window $w_i$ ($i=1,...,64$). We concatenate the arc weights into a vector ($f_{j,w_i}$) of size $1 \\times R^2$ which embeds the brain network for the window $w_i$ at sub-band $j$. This means that for each level $j$ and each subject, we represent the entire experiment by a large unlabeled matrix of size $64 \\times (R^2)$, i.e. $F_{j,sub_s}=[f_{j,w_1},...,f_{j,w_{64}}]^T$. Next, we introduce a deep learning algorithm which learns a set of compact connectivity patterns from the embedded brain networks and consequently clusters the windows with similar connectivity patterns. Each cluster of similar connectivity patterns represents a specific cognitive task (see Table \\ref{tab:firsttable}).\n\\section{The Deep Learning Architecture} \\label{sec:dl}\nThe embedded mesh networks model the connectivity among the anatomic regions at different sub-bands of the fMRI signal under each window $w_i$ for each subject. Next, we utilize a deep learning architecture to extract a set of compact connectivity patterns from the mesh networks. We will show that the learned connectivity patterns form natural clusters corresponding to cognitive states. To meet this goal, we design a multi-layer stacked de-noising sparse auto-encoder \\cite{poultney2007efficient}.\nFor each sub-band $j$, we train an SDAE that takes the windows in the embedded brain network associated with subject $sub_s$, i.e. $f_{j,w_i} \\in F_{j,sub_s}, i=1,...,64$, as its input, and produces a vector $y$ of size $1\\times 7$. Recall that there are a total of $7$ cognitive tasks. 
The learned features represent the connectivity patterns at sub-band $j$ for subject $sub_s$ as follows,\n\\begin{align*}\nY_\\theta(F_{j,sub_s})=S(W F_{j,sub_s}+B),\n\\end{align*} \n with the auto-encoder parameter set $\\theta=[W,B]$, where $W$ is the collection of weights $\\{W_i\\}_{1:4}$, $B$ is the collection of biases $\\{B_i\\}_{1:4}$ at each neuron, and $S$ is the activation function $\\arctan$. Our sparse auto-encoder design includes an input layer of size $R^2$ (one unit per entry of $f_{j,w_i}$), three hidden layers of sizes $[500,64,21]$, an output layer of size $7$, and a sparsity parameter $\\rho$. The output of each neuron $y_i$ may be represented as $y_i=S(\\sum_{j=1}^{n} w_jx_j + b_i)$, where $n$ is the number of neurons in the previous layer and the $x_j$'s are their outputs. The objective is to minimize the mean-squared loss function $L(W,B|F_{j,sub_s})$ in the presence of an $L_2$-Ridge regularization with parameter $\\lambda_2$, which adds stability and robustness to the learning process,\n\\begin{align*}\nJ=\\arg \\min_{w_i,b_i}\\{ L(W,B|F_{j,sub_s}) + \\lambda_2 ||W||_2^2 \\}.\n\\end{align*}\n\nIn order to deal with possible noise in the input data, we follow a de-noising training procedure in which, at each learning epoch, $20\\%$ of the input features are randomly dropped. It has been shown that this de-noising procedure controls over-fitting, as reported in \\cite{wager2013dropout}. After training the above auto-encoder, one can extract the feature matrices for subject $sub_s$ at sub-band $j$ to attain $({Y_{j,sub_s}^{(64 \\times 7)}})_f$. Our results will show that our proposed deep learning algorithm is capable of removing the large intra-variance amongst input data points and can give an effective representation of the brain networks in a low-dimensional space. 
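One encoder layer's map, a dense transform followed by the arctan activation, can be sketched as follows (toy sizes and hypothetical weights, not the trained network):

```python
import math

def layer_forward(F, W, B):
    # Dense layer: y = arctan(W f + b), applied row-wise; each row of F
    # is the embedded mesh network of one window.
    out = []
    for f in F:
        z = [sum(w * x for w, x in zip(row, f)) + b
             for row, b in zip(W, B)]
        out.append([math.atan(v) for v in z])
    return out

F = [[0.5, -0.5], [1.0, 0.0]]                # 2 windows, 2 features
W = [[1.0, 1.0], [1.0, -1.0], [0.0, 2.0]]    # 3 hidden units
B = [0.0, 0.0, 0.0]
H = layer_forward(F, W, B)                   # H[0] = [0.0, pi/4, -pi/4]
```

Stacking four such layers with sizes R^2, 500, 64, 21 and 7 gives the encoder described above; the training itself (reconstruction loss, L2 penalty, input corruption) is omitted here.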
This can be considered as a non-linear mapping from a high-dimensional space to a low-dimensional space suited for clustering.\n\\section{Hierarchical Clustering}\nThe main objective behind our work is to design a data-driven cluster analysis that is suitable for discriminating between distinct connectivity patterns associated with given cognitive states at different frequency levels. We perform a hierarchical clustering on a combination of features from different frequency levels attained from the deep learning algorithm to show that the framework is capable of detecting the cognitive tasks (given in Table \\ref{tab:firsttable}) based on their connectivity manifestation in the brain networks and the features learned by the deep learning architecture.\n\nThe clustering algorithm clusters a subject's brain feature matrix $Y_f^{64 \\times (m \\times 7)}=[({Y_{1,sub_s}^{(64 \\times 7)}})_f,...,({Y_{m,sub_s}^{(64 \\times 7)}})_f]$ consisting of the concatenation of the feature matrices from $m$ different frequency levels selected from the frequency sub-bands $\\{A_0,A_1,A_2,...,A_{11},D_1,D_2,...,D_{11}\\}$. This is to show that each frequency level carries complementary information in regard to the cognitive tasks performed during the experiment. Given the $w=64$ discrete-time windows, the clustering algorithm attempts to divide the data points into $k=7$ clusters ($c_k$, $k=1,...,7$), by minimizing the following cost function,\n\\begin{align*}\nV=\\sum_{k=1}^{7} V_k = \\sum_{k=1}^{7}\\Bigl(\\sum_{y_j \\in c_k}^{}dis(y_j,c_k)\\Bigr),\n\\end{align*}\nwhere the $y_j$ are the rows of $Y_f$ and the distance $dis(y_j,c_k)$ is based on the Pearson Correlation between data points, which closely models the functional connectivity pattern in the brain from one task to another. 
The exact relation between the distance matrix and the correlation matrix is,\n\\begin{align*}\ndis(y_j,c_k)=1-Corr^2(y_j,y_k).\n\\end{align*}\n\nThe entries of the correlation matrix $Corr(y_j,y_k)$ indicate the degree to which window $y_j$ is correlated with window $y_k$. The above relation can capture the time-varying coupling between windows and consequently closely model the flow of change in brain features from one cognitive state to another \\cite{calhoun2014chronnectome}. Fig. \\ref{fig:Diagram} depicts the entire deep learning framework. After clustering the unlabeled windows into $7$ different clusters, in the next section, we will compare the resulting clusters with the labeled data points given in Table \\ref{tab:firsttable} in order to examine the performance of our proposed approach. We utilize Rand Index (RI) and Adjusted Rand Index (ARI) as performance measures for our algorithm \\cite{milligan1986study}.\n\\section{Experimental Results} \\label{sec:expres}\nIn this section, we test the validity of the suggested deep learning architecture in two groups of experiments. The first set of experiments measures the cluster validity by clustering the fMRI signal, mesh arc weights of single and multi-resolution signals and utilizing the measures of performance RI and ARI. The second group of experiments visualizes the mesh networks obtained across subjects and cognitive tasks to observe the inter-task and inter-subject variabilities. We perform within-subject clustering analyses based on the fMRI signals collected from $100$ subjects (described in Section \\ref{sec:exp}). The design parameters are selected empirically through a cross validation process based on performance. We search for the optimal design parameters based on the sets, $p \\in \\{10,20,30,40,50\\}$, $\\lambda \\in \\{16,32,128,256\\}$, $\\rho \\in \\{0.01,0.001,0.0001\\}$, and $\\lambda_2 \\in \\{0.00001,0.00055,0.0001\\}$. 
We select the design parameters $p=40$ and $\\lambda=32$ for the mesh networks (Section \\ref{sec:hmmns}), and $\\rho=0.001$ and $\\lambda_2=0.00055$ for the SDAE design (Section \\ref{sec:dl}), as optimal values. The RI and ARI values given in the tables for each experiment describe the average clustering performance over all $100$ subjects. \n\nTable \\ref{tab:table4} gives a performance comparison between the clustering of the raw fMRI data (i.e. the representative time series of each anatomic region) and the clustering of the arc weights of the mesh networks (MADs). Note that clustering the MADs increases the rand index from $68\\%$ to $84\\%$. This substantial improvement shows that connectivity patterns are much more informative than the average voxel time-series.\n\nOur next analysis involves the representation power of individual frequency sub-bands, where we examine the performance of each sub-band in detecting MADs among anatomic regions for the given tasks. This can also be interpreted as the amount of complementary information each sub-band carries in regard to the functional connectivity of the given cognitive states. For further comparison, we cluster the data after attaining the MADs at each sub-band (Section \\ref{sec:hmmns}) and also after the deep learning architecture (Section \\ref{sec:dl}). The results are stated in Table \\ref{tab:table2}. The high rand indices for all individual sub-bands confirm the benefits of analyzing the fMRI signals in multiple time-resolutions, as they show that each sub-band carries important information in regard to the mesh network arc weights and the connectivity patterns learned at the output of the stacked de-noising auto-encoders. This leads to a clustering performance in the range of $68-86\\%$ across all sub-bands. 
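The Rand index used as the performance measure above can be computed as follows (a pure-Python sketch; `sklearn.metrics.adjusted_rand_score` provides the chance-corrected ARI):

```python
from itertools import combinations

def rand_index(labels_true, labels_pred):
    # Fraction of point pairs on which the two labelings agree,
    # i.e. both place the pair in the same cluster or both separate it.
    pairs = list(combinations(range(len(labels_true)), 2))
    agree = sum((labels_true[i] == labels_true[j]) ==
                (labels_pred[i] == labels_pred[j])
                for i, j in pairs)
    return agree / len(pairs)

# A pure relabeling of the clusters leaves the Rand index at 1.0:
ri = rand_index([0, 0, 1, 1], [1, 1, 0, 0])   # -> 1.0
```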
Note that the sub-bands \\bf{A5} \\normalfont to \\bf{A11}\\normalfont, \\bf{D5} \\normalfont to \\bf{D7}\\normalfont~and \\bf{D9} \\normalfont to \\bf{D11}\\normalfont~show relatively higher performances, indicating that these frequency bands are more informative than the rest. \n\nIn order to boost the RI and ARI values given in Table \\ref{tab:table2}, we fuse the learned connectivity patterns based on a combination of sub-bands to obtain a better representation. In our last set of clustering analyses, we examine the clustering performance by ensembling multiple sub-bands. The RI and ARI values for the ensembled sub-bands, given in Table \\ref{tab:table3}, point to a substantial increase, from the best single sub-band clustering performance of $86\\%$ at sub-band A10 to $93\\%$ for the fusion of all sub-bands. This shows not only that the brain networks constructed at multiple time-resolutions provide complementary information for the clustering algorithm, but also that the proposed deep architecture is capable of detecting distinct connectivity patterns in the brain for a given cognitive task, independent of subjects.\n\nThe rather high ARI values in Table \\ref{tab:table3} confirm that utilizing the complementary information gained from different time-resolutions results in clusters with relatively low within-cluster variances and high between-cluster variances. This claim is backed by the high ARI values that result from combining the information from different sub-bands before clustering. Further, by increasing the number of subjects to $200$ in our data set, and by fusing the brain networks obtained from all $23$ sub-bands and clustering their connectivity patterns extracted by the SDAE platform, we are able to achieve a performance of $93\\%$ RI and $71\\%$ ARI. 
This experiment shows that increasing the number of subjects does not decrease the clustering performance.\n\\begin{figure*}[h]\n\t\\centering\n\t\\begin{subfigure}\n\t\t\\centering\n\t\t\\includegraphics[scale=0.2539]{Sub29_Task1_axial} \n\t\t\\includegraphics[scale=0.2539]{Sub29_Task2_axial} \n\t\t\\includegraphics[scale=0.2539]{Sub29_Task3_axial} \n\t\t\\includegraphics[scale=0.2539]{Sub29_Task4_axial} \n\t\t\\includegraphics[scale=0.2539]{Sub29_Task5_axial} \n\t\t\\includegraphics[scale=0.2539]{Sub29_Task6_axial} \n\t\t\\includegraphics[scale=0.2539]{Sub29_Task7_axial} \n\t\\end{subfigure}\n\t\\begin{subfigure}\n\t\t\\centering\n\t\t\\includegraphics[scale=0.2539]{Sub40_Task1_axial}\n\t\t\\includegraphics[scale=0.2539]{Sub40_Task2_axial}\n\t\t\\includegraphics[scale=0.2539]{Sub40_Task3_axial}\n\t\t\\includegraphics[scale=0.2539]{Sub40_Task4_axial}\n\t\t\\includegraphics[scale=0.2539]{Sub40_Task5_axial}\n\t\t\\includegraphics[scale=0.2539]{Sub40_Task6_axial}\n\t\t\\includegraphics[scale=0.2539]{Sub40_Task7_axial}\n\t\\end{subfigure}\n\t\\begin{subfigure}\n\t\t\\centering\n\t\t\\includegraphics[scale=0.2539]{Sub56_Task1_axial}\n\t\t\\includegraphics[scale=0.2539]{Sub56_Task2_axial}\n\t\t\\includegraphics[scale=0.2539]{Sub56_Task3_axial}\n\t\t\\includegraphics[scale=0.2539]{Sub56_Task4_axial}\n\t\t\\includegraphics[scale=0.2539]{Sub56_Task5_axial}\n\t\t\\includegraphics[scale=0.2539]{Sub56_Task6_axial}\n\t\t\\includegraphics[scale=0.2539]{Sub56_Task7_axial}\n\t\\end{subfigure}\n\t\\caption{Mesh Networks of 3 Subjects: the top row shows subject 29, the middle row shows subject 40, bottom row shows subject 56. 
The mesh networks at each row from left to right indicate tasks in the following order: emotion, gambling, language, motor, relational, social and working-memory.}\\label{fig:2}\t\t\n\\end{figure*}\n\\begin{figure*}[h]\n\t\\centering\n\t\\includegraphics[scale=0.252]{Pre_Task1_axial}\n\t\\includegraphics[scale=0.252]{Pre_Task2_axial}\n\t\\includegraphics[scale=0.252]{Pre_Task3_axial}\n\t\\includegraphics[scale=0.252]{Pre_Task4_axial}\n\t\\includegraphics[scale=0.252]{Pre_Task5_axial}\n\t\\includegraphics[scale=0.252]{Pre_Task6_axial}\n\t\\includegraphics[scale=0.252]{Pre_Task7_axial}\n\t\\includegraphics[scale=0.110]{Pre_Scale}\n\t\\caption{Precision of the Mesh Networks of a Subset of Subjects. The mesh networks from left to right indicate tasks in the following order: emotion, gambling, language, motor, relational, social and working-memory.}\n\t\\label{fig:Precision}\n\\end{figure*}\n\\begin{table}\n\t{\n\t\t\\centering\n\t\t\\resizebox{1.\\textwidth}{!}{\n\t\t\t\\begin{minipage}{\\textwidth}\n\t\t\t\t\\begin{tabular}{ | c | c | c | c | c | c | c|}\n\t\t \\hline\n\t\t& Rand & A. Rand & & Rand & A. Rand \\\\\n\t\t\\hline\n\t\tRaw fMRI Data & 0.68 & -0.07 &\n\t\tMAD & 0.84 & 0.37 \\\\\n\t\t\\hline\t\n\t\t\t\t\\end{tabular}\n\t\t\\end{minipage}}\n\t}\n\t\\caption{Clustering Performance Comparison.}\n\t\\label{tab:table4} \n\\end{table}\n\\begin{table}\n{\n\t\\centering\n\t\\resizebox{1.2\\textwidth}{!}{\n \\begin{minipage}{\\textwidth}\n\t\\begin{tabular}{ | c | c | c | c | c | c | }\n\t\\hline\n\tMAD & Rand & A. Rand & SDAE & Rand & A. 
Rand\\\\\n\t\\hline\n\tA0 & 0.84 & 0.37 & A0 & 0.78 & 0.11\\\\\n\t\\hline\n\tA1 & 0.83 & 0.34 & A1 & 0.76 & 0.02\\\\\n\t\\hline\n\tD1 & 0.81 & 0.28 & D1 & 0.75 & -0.04\\\\\n\t\\hline\n\tA2 & 0.77 & 0.15 & A2 & 0.74 & -0.06\\\\\n\t\\hline\n\tD2 & 0.86 & 0.47 & D2 & 0.76 & 0.11\\\\\n\t\\hline\n\tA3 & 0.75 & 0.12 & A3 & 0.74 & 0.07\\\\\n\t\\hline\n\tD3 & 0.72 & 0.15 & D3 & 0.74 & -0.34\\\\\n\t\\hline\n\tA4 & 0.68 & 0.06 & A4 & 0.77 & 0.06\\\\\n\t\\hline\n\tD4 & 0.77 & 0.24 & D4 & 0.78 & 0.15\\\\\n\t\\hline\n\tA5 & 0.68 & 0.08 & A5 & 0.80 & 0.17\\\\\n\t\\hline\n\tD5 & 0.74 & 0.17 & D5 & 0.80 & 0.16\\\\\n\t\\hline\n\tA6 & 0.75 & 0.18 & A6 & 0.81 & 0.20\\\\\n\t\\hline\n\tD6 & 0.75 & 0.17 & D6 & 0.80 & 0.20\\\\\n\t\\hline\n\tA7 & 0.87 & 0.50 & A7 & 0.80 & 0.21\\\\\n\t\\hline\n\tD7 & 0.84 & 0.37 & D7 & 0.82 & 0.26\\\\\n\t\\hline\n\tA8 & 0.85 & 0.37 & A8 & 0.80 & 0.16\\\\\n\t\\hline\n\tD8 & 0.82 & 0.27 & D8 & 0.79 & 0.14\\\\\n\t\\hline\n\tA9 & 0.85 & 0.39 & A9 & 0.83 & 0.30\\\\\n\t\\hline\n\tD9 & 0.82 & 0.28 & D9 & 0.80 & 0.12\\\\\n\t\\hline\n\tA10 & 0.82 & 0.29 & A10 & 0.86 & 0.41\\\\\n\t\\hline\n\tD10 & 0.83 & 0.30 & D10 & 0.84 & 0.20\\\\\n\t\\hline\n\tA11 & 0.79 & 0.20 & A11 & 0.82 & 0.25\\\\\n\t\\hline\n\tD11 & 0.81 & 0.26 & D11 & 0.83 & 0.29\\\\\n\t\\hline\n\t\\end{tabular}\n \\end{minipage}}\n\\caption{Clustering Performance for Individual Sub-bands.}\n\\label{tab:table2} \t\n}\\end{table}\n\\begin{table}\n{\n\\centering\n\\resizebox{0.92\\textwidth}{!}{\n \\begin{minipage}{\\textwidth}\n\t\\begin{tabular}{ | l | c | c | l | c | c | }\n\t\\hline\n\tMAD & Rand & A. Rand & SDAE & Rand & A. 
Rand \\\\\n\t\\hline\n\tAll Sub-bands & 0.91 & 0.64 & All Sub-bands& 0.93 & 0.71\\\\\n\t\\hline\n\tSub-bands 7-9 & 0.92 & 0.66 & Sub-bands 7-9& 0.90 & 0.59 \\\\\n\t\\hline\n\tSub-bands 7-11 & 0.92 & 0.66 & Sub-bands 7-11& 0.91 & 0.60 \\\\\n\t\\hline\n\tSub-bands 3-8 & 0.89 & 0.57 & Sub-bands 3-8& 0.91 & 0.64 \\\\\n\t\\hline\n\tSub-bands 3-11 & 0.90 & 0.59 & Sub-bands 3-11& 0.91 & 0.63 \\\\\n\t\\hline\n\t\\end{tabular}\n \\end{minipage}}\n}\n\\caption{Clustering Performance for Combinations of Sub-bands.}\n\\label{tab:table3} \n\\end{table}\n\nFinally, we visualize the mesh networks obtained in the original fMRI signal to observe the inter-task and inter-subject variability of the brain networks. The motivation behind performing within-subject clustering rather than across-subject clustering in this study is the well-known inter-subject variability, which may prevent the clustering algorithm from finding natural groupings in the data. In order to illustrate the inter-subject variability, we plot the mesh networks of $3$ subjects in Fig. \\ref{fig:2} and Fig. \\ref{fig:Precision} for each cognitive task. These subjects have an RI of $99\\%$, which indicates that the proposed model has successfully estimated the natural groupings for each one of these $3$ subjects. The networks shown in the aforementioned figures represent the medoids of the clusters which correspond to each one of the $7$ different tasks. The mesh networks corresponding to each of the subjects are pruned by eliminating the mesh arc weights with values less than a threshold to reach $1\\%$ sparsity for simplification. A close analysis of the mesh networks corresponding to each task for the subjects shows that the mesh networks corresponding to the same task show only small similarities across the $3$ subjects. This validates our prior claim on the existence of high inter-subject variabilities. 
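The across-subject precision used below is the element-wise inverse of the variance of the arc weights over subjects; a minimal sketch in pure Python (the $2\times2$ arc-weight matrices are hypothetical toy values):

```python
# Sketch: precision (inverse variance) of mesh arc weights across subjects,
# as visualized in the precision figure; toy matrices, not real data.

def precision_network(networks):
    """Element-wise inverse of the across-subject (population) variance."""
    n = len(networks)
    rows, cols = len(networks[0]), len(networks[0][0])
    prec = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            vals = [w[i][j] for w in networks]
            mean = sum(vals) / n
            var = sum((v - mean) ** 2 for v in vals) / n
            # a constant arc weight has zero variance, i.e. infinite precision
            prec[i][j] = 1.0 / var if var > 0 else float("inf")
    return prec

subjects = [[[0.0, 0.5], [0.2, 0.9]],
            [[0.0, 0.7], [0.4, 0.8]],
            [[0.0, 0.6], [0.6, 1.0]]]
prec = precision_network(subjects)  # high entries = stable connections
```

A thin (blue) edge in the figure corresponds to a small entry of `prec`, i.e. a connection with a large across-subject variance.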
To further investigate the inter-subject variability, we select a group of subjects with rand indices higher than $90\\%$ from the HCP task data set of $300$ individuals. We define the precision of the mesh network across the set of subjects as the inverse of the variance and calculate this value for the selected subjects. Fig. \\ref{fig:Precision} shows the pruned precision of the mesh networks of the aforementioned set of subjects with $1\\%$ sparsity. The thickness and the colors of the edges are proportional to their corresponding precision values. One may observe from Fig. \\ref{fig:Precision} that the majority of the edges are thin-blue, with only a few of them thick-red. This indicates that the majority of the mesh network connections have high standard deviations across subjects.\n\\section{Conclusion} \\label{sec:con}\nIn this paper, we proposed a framework for constructing a set of brain networks in multiple time-resolutions in order to model the connectivity patterns amongst the anatomic regions for different cognitive states. We proposed an unsupervised deep learning architecture that utilized these brain networks in multiple frequency sub-bands to learn the natural groupings of connectivity patterns in the human brain for given cognitive tasks. We showed that our suggested deep learning algorithm is capable of clustering the representative groupings into their associated cognitive states. We examined our suggested architecture on a task data set from HCP and achieved a clustering performance of $93\\%$ Rand Index and $71\\%$ Adjusted Rand Index for $200$ subjects. Lastly, we visualized the mean values and the precisions of the mesh networks at each component of the cluster mixture. We showed that the mean mesh networks at cluster centers have high inter-subject variabilities.\n\\section*{Acknowledgment}\nWe would like to thank Dr. 
Itir Onal Erturul, Arman Afrasiabi and Omer Ekmekci of Middle East Technical University and Professor Mete Ozay of Tohoku University for supporting us throughout many fruitful discussions. The work is supported by TUBITAK (Scientific and Technological Research Council of Turkey) under the grant No: 116E091.\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{Introduction}\n\\input{s_intro}\n\\section{Model problem and continuous setting} \\label{s_contiSetting}\n\\input{s_contiSetting}\n\\section{Numerical setting}\\label{sec::discretization}\n\\input{s_numSetting}\n\n\\newcommand{\\uu}{\\mathbf u}\n\\newcommand{\\pp}{\\mathbf p}\n\\newcommand{\\qq}{\\mathbf q}\n\\newcommand{\\yy}{\\mathbf y}\n\\newcommand{\\vv}{\\mathbf v}\n\\newcommand{\\ww}{\\mathbf w}\n\\newcommand{\\KK}{\\mathbf K}\n\\newcommand{\\MM}{\\mathbf M}\n\\newcommand{\\Amat}{\\mathbf A}\n\\newcommand{\\ff}{\\mathbf f}\n\\section{Numerical {topological-shape } derivative} \\label{sec::numTopShapeDer}\n\\input{s_numericalDerivatives}\n\n\\input{s_connection}\n\n\\section{Numerical Experiments}\\label{sec::numericalExamples}\n\\input{s_numericalExamples}\n\\section{Conclusions} \\label{sec::conclusion}\n\\input{s_conclusion}\n\n\n\n\n\\section*{Conflict of interest}\nThe authors declare that they have no conflict of interest.\n\n\n\\bibliographystyle{spmpsci} \n\n\n\n\n\n\\section{Connection between continuous and discrete sensitivities} \\label{sec::connection}\n\nThe {topological-shape } derivative introduced in \\eqref{eq::topDerivative} and computed for model problem \\eqref{eq::ContinuousProblem} in Theorem \\ref{thm_numTopShapeDer_formula} represents a sensitivity of the discretized problem \\eqref{eq_discr_opti_problem}. In this section, we draw some comparisons with the classical topological and shape derivatives defined on the continuous level before discretization. 
While the purpose of this paper is to follow the idea \\textit{discretize-then-differentiate}, we consider the other way here for comparison reasons.\n\n\n\n\\subsection{Connections between continuous and discrete topological derivative} \\label{sec_connTD}\nFor comparison, we also illustrate the derivation of the continuous topological derivative according to \\eqref{eq_defTD} for problem \\eqref{eq::ContinuousProblem}. We use the same Lagrangian framework as introduced in Section \\ref{sec::Lagrangian}, see also \\cite{simplified}. Given a shape $\\Omega \\in \\mathcal A$, a point $z \\in D \\setminus \\partial \\Omega$, an inclusion shape $\\omega \\subset \\mathbb R^d$ with $0 \\in \\omega$ and $\\varepsilon \\geq 0$, we define the inclusion $\\omega_\\varepsilon = z + \\varepsilon \\omega$ and the perturbed Lagrangian\n\\begin{align*}\n G(\\varepsilon, \\varphi, \\psi) := c_1 |\\Omega_\\varepsilon| + c_2 \\int_D \\tilde{\\alpha}_{\\Omega_\\varepsilon} | \\varphi - \\hat u|^2\\; \\mbox dx + \\int_D \\lambda_{\\Omega_\\varepsilon} \\nabla \\varphi \\cdot \\nabla \\psi + \\alpha_{\\Omega_\\varepsilon} \\varphi \\psi \\;\\mbox dx - \\int_D f_{\\Omega_\\varepsilon} \\psi \\; \\mbox dx\n\\end{align*}\nwhere $\\Omega_\\varepsilon = \\Omega \\setminus \\omega_\\varepsilon$ for $z \\in \\Omega$ and $\\Omega_\\varepsilon = \\Omega \\cup \\omega_\\varepsilon$ for $z \\in D \\setminus \\overline \\Omega$. For simplicity, we only consider the latter case, \\textit{i.e.}\\, $z \\in D \\setminus \\overline \\Omega$ in the sequel.\n\nNoting that $u_\\varepsilon$, $\\varepsilon \\geq 0$, is the solution to the perturbed state equation with parameter $\\varepsilon$, the topological derivative can also be written as\n\\begin{align}\n d \\mathcal J(\\Omega)(z) = \\underset{\\varepsilon \\searrow 0}{\\mbox{lim }}\\frac{1}{|\\omega_\\varepsilon|} (G(\\varepsilon, u_\\varepsilon, p) - G(0, u_0, p))\n\\end{align}\nwith the solution to the unperturbed adjoint state equation $p$. 
As in Section \\ref{sec::Lagrangian}, this leads to the topological derivative consisting of the three terms\n\\begin{align*}\nd \\redPhiObjectiveFunction(\\phi)(z) = R_1(u, p) + R_2(u, p) + R_0(u, p)\n\\end{align*}\nwhere\n\\begin{align*}\nR_1(u, p) :=& \\underset{\\varepsilon \\searrow 0}{\\mbox{lim }} \\frac{1}{|\\Omega_\\varepsilon \\triangle \\Omega|} \\int_0^1 [\\partial_u G(\\varepsilon, u_0 + s(u_\\varepsilon - u_0), p) - \\partial_u G(\\varepsilon, u, p)](u_\\varepsilon - u_0) \\mbox ds, \\\\\nR_2(u, p) :=& \\underset{\\varepsilon \\searrow 0}{\\mbox{lim }} \\frac{1}{|\\Omega_\\varepsilon \\triangle \\Omega|}[ \\partial_u G(\\varepsilon, u, p) - \\partial_u G(0, u, p)](u_\\varepsilon - u), \\\\\nR_0(u, p) := & \\underset{\\varepsilon \\searrow 0}{\\mbox{lim }} \\frac{1}{|\\Omega_\\varepsilon \\triangle \\Omega|} [ G(\\varepsilon, u, p) - G(0, u, p)],\n\\end{align*}\nprovided that these limits exist.\nIt is straightforwardly checked that for $z \\in D \\setminus \\overline \\Omega$\n\\begin{align*}\n R_0(u,p) = c_1 + c_2 (\\tilde \\alpha_1 - \\tilde \\alpha_2) (u - \\hat u)^2(z) + (\\lambda_1 - \\lambda_2) \\nabla u(z) \\cdot \\nabla p(z) + (\\alpha_1 - \\alpha_2) u(z) p(z) - (f_1-f_2)(z) p(z).\n\\end{align*}\n\n\nFor the term $R_2(u,p)$, we obtain\n\\begin{align*}\n R_2(u,p)\n =& \\underset{\\varepsilon \\searrow 0}{\\mbox{lim }} \\frac{1}{|\\omega_\\varepsilon|} \\bigg[ 2 c_2 \\int_{\\omega_\\varepsilon} (\\tilde \\alpha_{1} - \\tilde \\alpha_2) (u_0 - \\hat u)(u_\\varepsilon - u_0) \\; dy+ \\int_{\\omega_\\varepsilon} (\\lambda_{1} - \\lambda_2) \\nabla (u_\\varepsilon - u_0) \\cdot \\nabla p \\; dy \\\\\n & \\qquad + \\int_{\\omega_\\varepsilon} (\\alpha_{1} - \\alpha_2) (u_\\varepsilon - u_0) p \\; dy \n \\bigg].\n\\end{align*}\nA change of variables $y = T_\\varepsilon(x) = z + \\varepsilon x$ yields\n\\begin{align} \\label{eq_R2omega}\n \\begin{aligned}\n R_2(u,p) =& \\underset{\\varepsilon \\searrow 0}{\\mbox{lim }} \\frac{1}{|\\omega|} \\bigg[ 2 
c_2 (\\tilde \\alpha_{1} - \\tilde \\alpha_2)\\int_{\\omega} (u_0 - \\hat u)\\circ T_\\varepsilon (u_\\varepsilon - u_0) \\circ T_\\varepsilon \\; dx \\\\ & + (\\lambda_{1} - \\lambda_2) \\int_{\\omega} (\\nabla (u_\\varepsilon - u_0) )\\circ T_\\varepsilon \\cdot (\\nabla p)\\circ T_\\varepsilon \\; dx + (\\alpha_{1} - \\alpha_2) \\int_{\\omega} (u_\\varepsilon - u_0)\\circ T_\\varepsilon \\, p\\circ T_\\varepsilon \\; dx \\bigg] \n \\end{aligned}\n\\end{align}\n\nWe have a closer look at the diffusion term \n\\begin{align} \\label{eq_R2lambdaDef}\n R_2^\\lambda(u,p):= \\underset{\\varepsilon \\searrow 0}{\\mbox{lim }} \\frac{1}{|\\omega|} (\\lambda_1 - \\lambda_2)\\int_\\omega (\\nabla(u_\\varepsilon - u_0))\\circ T_\\varepsilon \\cdot (\\nabla p)\\circ T_\\varepsilon \\; \\mbox dx.\n\\end{align}\nIn the continuous setting, we now define $K_\\varepsilon := \\frac{1}{\\varepsilon}(u_\\varepsilon - u_0)\\circ T_\\varepsilon$ and use the chain rule $(\\nabla \\varphi )\\circ T_\\varepsilon = \\frac1\\varepsilon \\nabla (\\varphi \\circ T_\\varepsilon)$ to obtain\n\\begin{align}\n R_2^\\lambda (u,p)=& \\underset{\\varepsilon \\searrow 0}{\\mbox{lim }} \\frac{1}{|\\omega|} (\\lambda_1 - \\lambda_2) \\int_\\omega \\nabla K_\\varepsilon \\cdot (\\nabla p) \\circ T_\\varepsilon \\; \\mbox dx\n\\end{align}\nNext, one can show the weak convergence $\\nabla K_\\varepsilon \\rightharpoonup \\nabla K$ for $K \\in \\dot{BL}(\\mathbb R^2)$ being defined as the solution to the exterior problem\n\\begin{align*}\n \\int_{\\mathbb R^d} \\lambda_\\omega \\nabla K \\cdot \\nabla \\psi \\; \\mbox dx = -(\\lambda_1 - \\lambda_2) \\int_\\omega \\nabla u(z) \\cdot \\nabla \\psi \\; \\mbox dx \\quad \\mbox{for all }\\psi \\in \\dot{BL}(\\mathbb R^2),\n\\end{align*}\nwhere $\\dot{BL}(\\mathbb R^2) := \\{v \\in H^1_{loc}(\\mathbb R^2): \\nabla v \\in L^2(\\mathbb R^2) \\}\/_{\\mathbb R}$ is a Beppo-Levi space.\nAssuming continuity of $\\nabla p$ around the point of perturbation $z$,\nit 
follows that \n\\begin{align} \\label{eq_R2lambda}\n R_2^\\lambda(u,p) = \\frac{1}{|\\omega|} (\\lambda_1 - \\lambda_2) \\int_\\omega \\nabla K \\cdot \\nabla p(z)\\; \\mbox dx.\n\\end{align}\nIt can be shown that the other terms in \\eqref{eq_R2omega} vanish and thus $R_2(u,p) = R_2^\\lambda(u,p)$.\nFinally, it follows from the analysis in \\cite[Sec. 5]{simplified} that $R_1(u,p) + R_2(u,p) = \\frac{1}{|\\omega|}(\\lambda_1 - \\lambda_2) \\int_\\omega \\nabla K \\cdot \\nabla p(z) \\, \\mbox dx = R_2(u,p)$, thus $R_1(u,p) = 0$ and $d \\redPhiObjectiveFunction(\\Omega)(z)= R_0(u,p) + R_2^\\lambda(u,p)$.\n\n\\begin{remark}\nComparing the topological derivative formula obtained here with the sensitivity for interior nodes ${\\mathbf x}_k \\in \\mathfrak T^- \\cup \\mathfrak T^+$ obtained in Section \\ref{sec::numTopShapeDer}, we see that the term corresponding to $R_2^\\lambda(u,p)$, i.e., the term\n\\begin{align*}\n \\underset{\\varepsilon \\searrow 0}{\\mbox{lim }} \\frac{1}{|\\Omega(\\phi_\\varepsilon)\\triangle \\Omega(\\phi)|}(\\KK_\\varepsilon - \\KK_0) (\\uu_\\varepsilon - \\uu_0) \\cdot \\pp\n\\end{align*}\nin \\eqref{eq_R2discr}, vanishes in the discrete setting. 
This can be seen as follows:\nFor $u_\\varepsilon^h$, $\\varepsilon \\geq 0$, and $p^h \\in V_h$, we have the expansion in the finite element basis\n\\begin{align*}\n u_\\varepsilon^h(x) = \\sum_{i=1}^M u_\\varepsilon^{(i)} \\varphi_i , \\qquad p^h(x) = \\sum_{i=1}^M p^{(i)} \\varphi_i .\n\\end{align*}\nIf we now plug these discretized functions into \\eqref{eq_R2lambdaDef} and consider a fixed mesh size $h$, we get, on the other hand,\n\\begin{align*}\n R_2^\\lambda(u^h, p^h) = & \\underset{\\varepsilon \\searrow 0}{\\mbox{lim }} \\frac{1}{|\\omega|} (\\lambda_1 - \\lambda_2) \\int_\\omega (\\nabla(u_\\varepsilon^h - u_0^h))\\circ T_\\varepsilon \\cdot (\\nabla p^h)\\circ T_\\varepsilon \\; \\mbox dx \\\\\n =& \\underset{\\varepsilon \\searrow 0}{\\mbox{lim }} \\frac{1}{|\\omega|}(\\lambda_1 - \\lambda_2) \\sum_{i,j=1}^M (u_\\varepsilon^{(i)}-u_0^{(i)})p^{(j)} \\int_\\omega (\\nabla \\varphi_i)(z+\\varepsilon x) \\cdot (\\nabla \\varphi_j)(z+\\varepsilon x) \\; \\mbox dx = 0,\n\\end{align*}\nwhere we used the continuity of $\\varepsilon \\mapsto \\uu_\\varepsilon$ according to Lemma \\ref{lem_uepsu0}, so that $u_\\varepsilon^{(i)} - u_0^{(i)} \\to 0$ while the remaining integrals stay bounded. Note that, since the mesh and the basis functions are assumed to be fixed and independent of $\\varepsilon$, unlike in the continuous setting, here the continuity of the normal flux of the discrete solution $u_\\varepsilon^h$ across the interface $\\partial \\omega_\\varepsilon$ is not preserved. We mention that, when using an extended discretization technique that accounts for an accurate resolution of the material interface (e.g. 
XFEM \\cite{MoesDolbowBelytschko1999} or CutFEM \\cite{BurmanClausHansboLarsonMassing2014}), the corresponding discrete sensitivities would include a term corresponding to $R_2^\\lambda(u,p)$.\n\\end{remark}\n\n\\subsection{Connection between continuous and discrete shape derivative}\nThe continuous shape derivative $dg(\\Omega)(V)$ for a PDE-constrained shape optimization problem given a shape $\\Omega \\in \\mathcal A$ and a smooth vector field $V$ can also be obtained via a Lagrangian approach. For our problem \\eqref{eq::ContinuousProblem}, it can be obtained as $dg(\\Omega)(V) = \\partial_t G(0, u, p)$ with\n\\begin{align*}\n G(t, \\varphi, \\psi) :=& c_1 \\int_\\Omega \\xi(t) \\; \\mbox dx + c_2 \\int_D \\tilde{\\alpha}_{\\Omega} | \\varphi - \\hat u\\circ T_t|^2 \\xi(t) \\; \\mbox dx \\\\\n &+ \\int_D \\lambda_{\\Omega} A(t) \\nabla \\varphi \\cdot \\nabla \\psi + \\alpha_{\\Omega} \\varphi \\psi \\xi(t) \\;\\mbox dx- \\int_D f_{\\Omega} \\psi \\xi(t)\\; \\mbox dx\n\\end{align*}\nwhere $T_t(x) = x + t V(x)$, $\\xi(t) = \\mbox{det}(\\partial T_t)$, $A(t) = \\xi(t) \\partial T_t^{-1} \\partial T_t^{-T}$,\nsee \\cite{Sturm2015} for a detailed description. In the continuous setting, the shape derivative reads in its volume form\n\\begin{align*}\n d g(\\Omega)(V) = \\int_D \\mathcal S_1^{\\Omega} : \\partial V + \\mathcal S_0^\\Omega \\cdot V \\; \\mbox dx\n\\end{align*}\nwith $\\mathcal S_1^\\Omega$ and $\\mathcal S_0^\\Omega$ given in \\eqref{eq_S1} and \\eqref{eq_S0}, respectively. 
Under certain smoothness assumptions, it can be transformed into the Hadamard or boundary form\n\\begin{align} \\label{eq_dg_Hadamard}\n d g(\\Omega)(V) = \\int_{\\partial \\Omega} L \\, (V\\cdot n) \\; \\mbox dS_x\n\\end{align}\nwith $L =( \\mathcal S_1^{\\Omega, \\text{in}} - \\mathcal S_1^{\\Omega, \\text{out}})n \\cdot n$ given by\n\\begin{align} \\label{eq_L}\n L = c_1 + c_2(\\tilde \\alpha_1 - \\tilde \\alpha_2) (u - \\hat u)^2 + (\\alpha_1 - \\alpha_2) u p - (f_1 - f_2) p + L^\\lambda\n\\end{align}\nwhere $L^\\lambda$ is given by\n\\begin{align}\n L^\\lambda :=& (\\lambda_1- \\lambda_2) (\\nabla u \\cdot \\tau)(\\nabla p \\cdot\\tau) - \\left(\\frac{1}{\\lambda_1} - \\frac{1}{\\lambda_2}\\right)( \\lambda_\\Omega \\nabla u \\cdot n)( \\lambda_\\Omega \\nabla p \\cdot n) \\nonumber \\\\\n =&\\lambda_1 \\nabla u_{\\text{in}} \\cdot \\nabla p_{\\text{in}} - \\lambda_2 \\nabla u_{\\text{out}} \\cdot \\nabla p_{\\text{out}} - 2 \\lambda_1 (\\nabla u_{\\text{in}}\\cdot n) ( \\nabla p_{\\text{in}} \\cdot n) + 2 \\lambda_2 (\\nabla u_{\\text{out}}\\cdot n) ( \\nabla p_{\\text{out}} \\cdot n). \\label{eq_Llambda_inout}\n\\end{align}\nHere, $\\nabla u_{\\text{in}}, \\nabla p_{\\text{in}}$ and $\\nabla u_{\\text{out}}, \\nabla p_{\\text{out}}$ denote the restrictions of the gradients to $\\Omega$ and $D \\setminus \\overline \\Omega$, respectively, and $n$ denotes the unit normal vector pointing out of $\\Omega$.\nNote that, when using a finite element discretization which does not resolve the interface such that the gradients of the discretized state and adjoint variable are continuous, i.e. 
$\\nabla u_{h, \\text{in}} = \\nabla u_{h, \\text{out}}$ and $\\nabla p_{h, \\text{in}} = \\nabla p_{h, \\text{out}}$, \\eqref{eq_Llambda_inout} becomes\n\\begin{align} \\label{eq_Llambdah}\n L^\\lambda_h = (\\lambda_1 - \\lambda_2) \\nabla u_h \\cdot \\nabla p_h - 2(\\lambda_1 - \\lambda_2) (\\nabla u_h \\cdot n)(\\nabla p_h \\cdot n).\n\\end{align}\n\n\n\nWe now discretize the continuous shape derivative formula \\eqref{eq_dg_Hadamard} for the vector field $V^{(k)}$ that is obtained from the perturbation of the level set function $\\phi$ in (only) node ${\\mathbf x}_k$ for fixed $k \\in \\mathfrak S$. For that purpose we fix $\\phi \\in S_h^1(D)$ and the corresponding domain $\\Omega(\\phi)$. Note that we consider $V^{(k)}$ to be supported only on the discretized material interface $\\partial \\Omega(\\phi) \\cap D$. We begin with the case of the pure volume cost function by setting $c_2=0$.\n\n\\begin{proposition} \\label{prop_SD_vol}\n Let $c_2=0$ and let ${\\mathbf x}_k \\in \\mathfrak S$ be fixed. Let $V^{(k)}$ be the vector field that corresponds to a perturbation of the value of $\\phi$ at position ${\\mathbf x}_k$. Then\n \\begin{align*}\n dg(\\Omega(\\phi))(V^{(k)}) = c_1 \\sum_{l \\in C_k} d_ka_l\n \\end{align*}\n where $d_k a_l$ is as in \\eqref{eq::derivativeArea}.\n\n\\end{proposition}\n\n\\begin{proof}\n For $c_2=0$ we also have $p=0$ and thus $L = c_1$, i.e., we are in the case of pure volume minimization. From \\eqref{eq_dg_Hadamard} we know that $dg(\\Omega(\\phi))(V^{(k)}) = c_1 \\int_{\\partial \\Omega(\\phi)} V^{(k)} \\cdot n \\; \\mbox dS_x$.\n First of all, we note that the vector field $V^{(k)}$ corresponding to a perturbation of $\\phi$ at node ${\\mathbf x}_k$ is only nonzero in elements $\\tau_l$ for $l \\in C_k$ with $C_k$ as defined in \\eqref{eq::areaChangeSet}. 
Thus, the shape derivative reduces to\n \\begin{align*}\n dg(\\Omega(\\phi))(V^{(k)}) = c_1 \\sum_{l \\in C_k} \\int_{\\tau_l \\cap \\partial \\Omega(\\phi)} V^{(k)}\\cdot n \\; \\mbox dS_x.\n \\end{align*}\n We compute the vector field $V^{(k)}$ and normal vector $n$ explicitly depending on the cut situation. Recall the sets $I_{\\pos_k}^{A_+}, I_{\\pos_k}^{A_-}, I_{\\pos_k}^{B_+}, I_{\\pos_k}^{B_-}, I_{\\pos_k}^{C_+}, I_{\\pos_k}^{C_-}$ introduced in \\eqref{eq_def_Ixk_ABCpm}, see also Figure \\ref{fig::sets}. \n \n Given two points $\\pp$ and $\\qq$ and their respective level set values $a$ and $b$ of different sign, $ab<0$, we denote the root of the linear interpolating function by\n \\begin{align*}\n \\yy(\\pp, \\qq, a, b) = \\pp - \\frac{a}{b-a}(\\qq - \\pp)\n \\end{align*}\n and note that $\\frac{d}{da} \\yy(\\pp, \\qq, a, b) = - \\frac{b}{(b-a)^2}(\\qq - \\pp)$.\n\n We begin with Configuration A. For an element index $l \\in I_{\\pos_k}^{A_+}\\cup I_{\\pos_k}^{A_-}$, we denote the vertices of element $\\tau_l$ by ${\\mathbf x}_{l_1}, {\\mathbf x}_{l_2}, {\\mathbf x}_{l_3}$ and assume their enumeration in counter-clockwise order with ${\\mathbf x}_k = {\\mathbf x}_{l_1}$. The corresponding values of the given level set function $\\phi$ are denoted by $\\phi_{l_1}, \\phi_{l_2}, \\phi_{l_3}$, respectively. 
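As a quick standalone check of the edge-root formula and its derivative above (illustrative only, not part of the proof; the endpoints and level set values are hypothetical):

```python
# Verify d/da y(p, q, a, b) = -b/(b - a)^2 (q - p) by central differences.

def edge_root(p, q, a, b):
    """Root y(p,q,a,b) = p - a/(b-a) (q - p) of the linear interpolant."""
    t = -a / (b - a)
    return [p[i] + t * (q[i] - p[i]) for i in range(2)]

def edge_root_da(p, q, a, b):
    """Analytic derivative of the edge root with respect to a."""
    c = -b / (b - a) ** 2
    return [c * (q[i] - p[i]) for i in range(2)]

p, q = [0.0, 0.0], [1.0, 0.5]   # hypothetical edge endpoints
a, b = 0.3, -0.7                # level set values of opposite sign
h = 1e-6
fd = [(edge_root(p, q, a + h, b)[i] - edge_root(p, q, a - h, b)[i]) / (2 * h)
      for i in range(2)]
exact = edge_root_da(p, q, a, b)
print(max(abs(fd[i] - exact[i]) for i in range(2)))  # tiny (FD error only)
```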
We parametrize the line connecting the two roots of the perturbed level set function along the edges by\n \\begin{align*}\n p^A[\\varepsilon](s) = (1-s) \\yy({\\mathbf x}_{l_1}, {\\mathbf x}_{l_2}, \\phi_{l_1}+\\varepsilon, \\phi_{l_2}) + s \\yy({\\mathbf x}_{l_1}, {\\mathbf x}_{l_3}, \\phi_{l_1}+\\varepsilon,\\phi_{l_3})\n \\end{align*}\n and obtain the vector field corresponding to the perturbation of $\\phi_{k} = \\phi_{l_1}$ along the line $\\tau_l \\cap \\partial \\Omega(\\phi)$ as\n \\begin{align*}\n \\hat V^A(s) = \\frac{d}{d\\varepsilon} p^A[\\varepsilon](s)|_{\\varepsilon = 0} = (1-s) \\frac{- \\phi_{l_2}}{(\\phi_{l_2}-\\phi_{l_1})^2} ({\\mathbf x}_{l_2} - {\\mathbf x}_{l_1}) + s \\frac{- \\phi_{l_3}}{ (\\phi_{l_3} - \\phi_{l_1})^2} ({\\mathbf x}_{l_3}-{\\mathbf x}_{l_1}).\n \\end{align*}\n Introducing the notation $d_{ki,kj} := |\\yy({\\mathbf x}_{l_k}, {\\mathbf x}_{l_i}, \\phi_{l_k}, \\phi_{l_i})-\\yy({\\mathbf x}_{l_k}, {\\mathbf x}_{l_j}, \\phi_{l_k}, \\phi_{l_j}) |$ for the length of the interface in element $\\tau_l$, the normed tangential vector along $\\tau_l \\cap \\partial \\Omega(\\phi)$ and the normed normal vector pointing out of $\\Omega(\\phi)$ are given by\n \\begin{align*}\n \\hat t^A =& \\frac{\\yy({\\mathbf x}_{l_1}, {\\mathbf x}_{l_3}, \\phi_{l_1}, \\phi_{l_3}) - \\yy({\\mathbf x}_{l_1}, {\\mathbf x}_{l_2}, \\phi_{l_1}, \\phi_{l_2}) }{d_{13,12}} \\\\% \\quad \\mbox{ with } d_{1,3} := |\\yy({\\mathbf x}_{l_1}, {\\mathbf x}_{l_3}, \\phi_{l_1}, \\phi_{l_3})-\\yy({\\mathbf x}_{l_1}, {\\mathbf x}_{l_2}, \\phi_{l_1}, \\phi_{l_2}) | \\\\\n \\hat n^{A_+} =& \\frac{\\phi_{l_1}}{d_{13,12}} \\left( \\frac{1}{\\phi_{l_2} - \\phi_{l_1}} R ({\\mathbf x}_{l_2} - {\\mathbf x}_{l_1})- \\frac{1}{\\phi_{l_3} - \\phi_{l_1}} R ({\\mathbf x}_{l_3} - {\\mathbf x}_{l_1}) \\right)\n =-\\hat n^{A_-} \n \\end{align*}\n where $R$ denotes a 90 degree counter-clockwise rotation matrix, $R = \\begin{pmatrix} 0& -1\\\\ 1 & 0 \\end{pmatrix}$. 
Noting that $({\\mathbf x}_{l_3} - {\\mathbf x}_{l_1})^\\top R ({\\mathbf x}_{l_2} - {\\mathbf x}_{l_1}) = |\\mbox{det}J_l| = - ({\\mathbf x}_{l_2} - {\\mathbf x}_{l_1})^\\top R ({\\mathbf x}_{l_3} - {\\mathbf x}_{l_1})$ and $({\\mathbf x}_{l_j} - {\\mathbf x}_{l_1})^\\top R ({\\mathbf x}_{l_j} - {\\mathbf x}_{l_1}) = 0$, $j=2,3$, we get\n \\begin{align} \\label{eq_VAnA}\n \\hat V^A(s) \\cdot \\hat n^{A_+} = -\\frac{|\\mbox{det} J_l| \\phi_{l_1}}{d_{13,12}} \\frac{\\phi_{l_2}(\\phi_{l_3}-\\phi_{l_1}) + s \\phi_{l_1}(\\phi_{l_2} - \\phi_{l_3})}{(\\phi_{l_2}-\\phi_{l_1})^2(\\phi_{l_3}-\\phi_{l_1})^2} = - \\hat V^A(s) \\cdot \\hat n^{A_-} .\n \\end{align}\n Finally, by elementary computation we obtain for $l \\in I_{\\pos_k}^{A_+}$\n \\begin{align*}\n \\int_{\\tau_l \\cap \\partial \\Omega(\\phi)} V^{(k)} \\cdot n \\; \\mbox dS_x =& d_{13,12} \\int_0^1 \\hat V^A(t) \\cdot \\hat n^{A_+} \\; \\mbox dt \\\\\n =&| \\mbox{det} J_l| \\phi_{l_1} \\frac{-\\phi_{l_2}\\phi_{l_3} + \\frac12 \\phi_{l_1}(\\phi_{l_2}+ \\phi_{l_3})}{(\\phi_{l_2}-\\phi_{l_1})^2(\\phi_{l_3}-\\phi_{l_1})^2} \n \\end{align*}\n and the same formula with a different sign for $l \\in I_{\\pos_k}^{A_-}$. 
Proceeding analogously, we obtain for $l \\in I_{\\pos_k}^{B_+}$ and $l \\in I_{\\pos_k}^{C_+}$\n \\begin{align}\n \\hat V^B(s) \\cdot \\hat n^{B_+} =& \\frac{|\\mbox{det} J_l|}{d_{23,21}} \\frac{(1-s) \\phi_{l_2}^2 }{(\\phi_{l_2} - \\phi_{l_1})^2 (\\phi_{l_3} - \\phi_{l_2})} = - \\hat V^B(s) \\cdot \\hat n^{B_-}, \\\\\n \\hat V^C(s) \\cdot \\hat n^{C_+} = & \\frac{|\\mbox{det} J_l|}{d_{31,32}} \\frac{(1-s) \\phi_{l_3}^2}{(\\phi_{l_3}-\\phi_{l_1})^2 (\\phi_{l_2}-\\phi_{l_3})} = -\\hat V^C(s) \\cdot \\hat n^{C_-}, \\label{eq_VCnC}\n \\end{align}\n and further\n \\begin{align*}\n \\int_{\\tau_l \\cap \\partial \\Omega(\\phi)} V^{(k)} \\cdot n \\; \\mbox dS_x=& \\frac{|\\mbox{det}J_l|}{2} \\frac{\\phi_{l_2}^2}{(\\phi_{l_2}-\\phi_{l_1})^2(\\phi_{l_3}-\\phi_{l_2})}, \\quad l \\in I_{\\pos_k}^{B_+}, \\\\\n \\int_{\\tau_l \\cap \\partial \\Omega(\\phi)} V^{(k)} \\cdot n \\; \\mbox dS_x =& \\frac{|\\mbox{det}J_l|}{2} \\frac{-\\phi_{l_3}^2}{(\\phi_{l_3}-\\phi_{l_1})^2(\\phi_{l_3}-\\phi_{l_2})}, \\quad l \\in I_{\\pos_k}^{C_+},\n \\end{align*}\n respectively. Again, the formulas for $l \\in I_{\\pos_k}^{B_-}$ and $l \\in I_{\\pos_k}^{C_-}$ just differ by a different sign.\n \n Finally, comparing the computed values with the formulas of $d_k a_l$ \\eqref{eq::derivativeArea} yields the claimed result.\n\\end{proof}\n\nIn view of Proposition \\ref{prop_SD_vol}, the definition in \\eqref{eq::shapeDerivativeAlternative} and Remark \\ref{remark::area}, we see that, in the case $c_2=0$, it holds\n\\begin{align}\n \\hat d g(\\Omega(\\phi))(V^{(k)}) = \\frac{c_1 \\sum_{l\\in C_k} d_ka_l}{\\sum_{l\\in C_k} d_k \\tilde a_l} = - c_1,\n\\end{align}\nwhich is in alignment with the first term of the formula in \\eqref{eq::formulaShapeDerivative}.\n\n\nNext, we consider the general PDE-constrained case where $c_2>0$.\n\\begin{proposition} \\label{prop_SD_pde}\n Let $c_1 = 0$ and ${\\mathbf x}_k \\in \\mathfrak S$ fixed. 
Let $V^{(k)}$ be the vector field that corresponds to a perturbation of the value of $\\phi$ at position ${\\mathbf x}_k$. Then\n \\begin{align} \\label{eq_formula_dg_pde}\n \\begin{aligned}\n dg(\\Omega(\\phi))(V^{(k)}) =& \\; \\; \\;\\; \\, \\Delta \\lambda\n\t\\sum_{l \\in C_k} \\mathbf p_{l}^\\top \\mathbf k_{0,l} \\mathbf u_{l} \\,d_k a_{l} \n\t+ \\Delta \\alpha \\sum_{l \\in C_k} \\mathbf p_{l}^\\top d_k\\mathbf m^I_{l} \\,\\mathbf u_{l}\n\t \\\\\n\t & - \\Delta f \\sum_{l \\in C_k} \\mathbf p_{l}^\\top d_k\\mathbf f_l^I \n\t+c_2 \\Delta \\tilde\\alpha \\sum_{l \\in C_k} (\\mathbf u_{l}-\\hat{\\mathbf u}_{l})^\\top d_k\\mathbf m^I_{l} \\,(\\mathbf u_{l}-\\hat{\\mathbf u}_{l}) \\\\\n\t& -2 \\Delta \\lambda \\sum_{l \\in C_k}(\\nabla u_h \\cdot n^I)|_{\\tau_l \\cap \\partial \\Omega(\\phi)}(\\nabla p_h \\cdot n)|_{\\tau_l \\cap \\partial \\Omega(\\phi)} d_ka_l,\n \\end{aligned}\n \\end{align}\nwhere we use the same notation as in Theorem \\ref{thm_numTopShapeDer_formula}. In particular, $d_k \\mathbf m^I_l$ and $d_k \\mathbf f^I_l$ depend on the cut situation, $I \\in \\{A^+, A^-, B^+, B^-, C^+, C^-\\}$, and are given explicitly in Appendix \\ref{app::shapeDerivative}.\n\\end{proposition}\n\\begin{proof}\nLet an element index $l \\in C_k$ be fixed and let $\\mathbf u_l = [u_{l_1}, u_{l_2}, u_{l_3}]^\\top$, $\\mathbf p_l = [p_{l_1}, p_{l_2}, p_{l_3}]^\\top$ contain the nodal values of the finite element functions $u_h$ and $p_h$ corresponding to the three vertices ${\\mathbf x}_{l_1}$, ${\\mathbf x}_{l_2}$, ${\\mathbf x}_{l_3}$ of element $l$, respectively. Also here, the ordering is in counter-clockwise direction starting with ${\\mathbf x}_{l_1} = {\\mathbf x}_k$.\nWe compute the shape derivative \\eqref{eq_dg_Hadamard} with $L$ given in \\eqref{eq_L} after discretization (i.e., after replacing the functions $u$, $p$, $\\hat u$ by finite element approximations $u_h$, $p_h$, $\\hat u_h$). 
In particular, the term $L^\\lambda$ is approximated by \\eqref{eq_Llambdah}.\nDepending on how the material interface $\\partial \\Omega(\\phi)$ cuts through element $\\tau_l$, the normal component of the vector field $V^{(k)}$ along the line $\\tau_l \\cap \\partial \\Omega(\\phi)$ is given in \\eqref{eq_VAnA}--\\eqref{eq_VCnC}.\nFor any $I \\in \\{A^+, A^-, B^+, B^-, C^+, C^-\\}$, it can be seen by elementary yet tedious calculations that\n\\begin{align*}\n \\int_{\\tau_l \\cap \\partial \\Omega(\\phi)} p_h(x) V^{I}(x) \\cdot n^{I} \\, \\mbox dS_x =& \\; \\mathbf p_l^\\top d_k \\mathbf f_l^I,\\\\\n \\int_{\\tau_l \\cap \\partial \\Omega(\\phi)} u_h(x) p_h(x) V^{I}(x) \\cdot n^{I} \\, \\mbox dS_x =& \\; \\mathbf p_l^\\top d_k \\mathbf m^I_l \\mathbf u_l,\\\\\n \\int_{\\tau_l \\cap \\partial \\Omega(\\phi)} (u_h(x)-\\hat u_h(x))^2 V^{I}(x) \\cdot n^{I} \\, \\mbox dS_x =& \\; (\\mathbf u_l - \\hat{\\mathbf u}_l)^\\top d_k \\mathbf m^I_l (\\mathbf u_l - \\hat{\\mathbf u}_l),\n\\end{align*}\nwith $d_k \\mathbf m^I_l$ and $d_k \\mathbf f^I_l$ as given in Appendix \\ref{app::shapeDerivative}.\nAs an example, we illustrate the calculation for the second of these terms for the cut situation $I = A^+$, see Figure \\ref{fig::sets}(a). Let $u_{l,12}$ and $u_{l,13}$ denote the values of the linear function $u_h|_{\\tau_l}$ at the intersection of the interface $\\partial \\Omega(\\phi)$ with the edges that connect the point ${\\mathbf x}_{l_1}$ with ${\\mathbf x}_{l_2}$ and ${\\mathbf x}_{l_1}$ with ${\\mathbf x}_{l_3}$, respectively. Note the relations $u_{l,12} = \\frac{u_{l_2} \\phi_{l_1} - u_{l_1} \\phi_{l_2}}{\\phi_{l_1}-\\phi_{l_2}}$ and $u_{l,13} = \\frac{u_{l_3} \\phi_{l_1} - u_{l_1} \\phi_{l_3}}{\\phi_{l_1}-\\phi_{l_3}}$. Analogously we define the values $p_{l,12}$ and $p_{l,13}$. 
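The stated relations for $u_{l,12}$ and $u_{l,13}$ are nothing but linear interpolation evaluated at the zero crossing of $\phi$ along the respective edge. A minimal sketch with made-up nodal values:

```python
# Edge from x_{l_1} to x_{l_2}, parametrized by t in [0, 1]; phi_h changes
# sign on this edge, so its zero crossing lies at t* = phi1 / (phi1 - phi2).
phi1, phi2 = 0.8, -0.4   # hypothetical nodal level-set values
u1, u2 = 2.0, 5.0        # hypothetical nodal values of u_h

t_star = phi1 / (phi1 - phi2)
u_interp = u1 + t_star * (u2 - u1)                    # u_h at the crossing
u_formula = (u2 * phi1 - u1 * phi2) / (phi1 - phi2)   # relation from the text
assert abs(u_interp - u_formula) < 1e-12
```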
The function $u_h$ along the line $\\tau_l \\cap \\partial \\Omega(\\phi)$ can now be written as $\\hat u_h(s) = u_{l,12} + s (u_{l,13} - u_{l,12})$, $s \\in [0,1]$ and we get\n\\begin{align*}\n \\int_{\\tau_l \\cap \\partial \\Omega(\\phi)}& u_h(x) p_h(x) V^{I}(x) \\cdot n^{I} \\, \\mbox dS_x \\\\\n &= d_{13,12} \\int_0^1 (u_{l,12} + s (u_{l,13} - u_{l,12})) (p_{l,12} + s (p_{l,13} - p_{l,12})) \\hat V^{A^+}(s) \\cdot \\hat n^{A^+} \\; \\mbox ds \\\\\n &= \\mathbf p_l^\\top d_k \\mathbf m^{A^+}_l \\mathbf u_l\n\\end{align*}\nwhere $d_{13,12} = |\\tau_l \\cap \\partial \\Omega(\\phi)|$. The last equality is obtained by plugging in \\eqref{eq_VAnA} and straightforward (yet tedious) calculation.\nFinally, since $u_h$ and $p_h$ are linear and the normal vector is constant on $\\tau_l \\cap \\partial \\Omega(\\phi)$, we see that $L_h^\\lambda$ is constant and, using Proposition \\ref{prop_SD_vol}, we obtain\n\\begin{align*}\n \\int_{\\tau_l \\cap \\partial \\Omega(\\phi)} L_h^\\lambda(x) V^I(x)\\cdot n^I \\; \\mbox dS_x =& L_h^\\lambda(x) \\int_{\\tau_l \\cap \\partial \\Omega(\\phi)} V^I(x)\\cdot n^I \\; \\mbox dS_x \\\\\n =&\\Delta \\lambda \\left( \\mathbf p_l^\\top \\mathbf k_{0,l} \\mathbf u_l -2 (\\nabla u_h \\cdot n^I)|_{\\tau_l \\cap \\partial \\Omega(\\phi)}(\\nabla p_h \\cdot n)|_{\\tau_l \\cap \\partial \\Omega(\\phi)} \\right) d_ka_l.\n\\end{align*}\n\\end{proof}\n\nCombining the findings of Propositions \\ref{prop_SD_vol} and \\ref{prop_SD_pde} and dividing by $d_k \\tilde a$ (defined in \\ref{remark::area}), we obtain the resulting formula for the alternative definition of the shape derivative as defined in \\eqref{eq::shapeDerivativeAlternative}\n\\begin{align} \\label{eq_dhatg_discr}\n \\begin{aligned}\n \\hat d &g(\\Omega(\\phi))(V^{(k)}) = -c_1 + \\frac{1}{d_k \\tilde a} \\Bigg( \\Delta \\lambda\n\\sum_{l \\in C_k} \\mathbf p_{l}^\\top \\mathbf k_{0,l} \\mathbf u_{l} \\,d_k a_{l} \n+ \\Delta \\alpha \\sum_{l \\in C_k} \\mathbf p_{l}^\\top d_k\\mathbf 
m^I_{l} \\,\\mathbf u_{l} - \\Delta f \\sum_{l \\in C_k} \\mathbf p_{l}^\\top d_k\\mathbf f_l^I \n \\\\\n & +c_2 \\Delta \\tilde\\alpha \\sum_{l \\in C_k} (\\mathbf u_{l}-\\hat{\\mathbf u}_{l})^\\top d_k\\mathbf m^I_{l} \\,(\\mathbf u_{l}-\\hat{\\mathbf u}_{l})\n -2 \\Delta \\lambda \\sum_{l \\in C_k}(\\nabla u_h \\cdot n^I)|_{\\tau_l \\cap \\partial \\Omega(\\phi)}(\\nabla p_h \\cdot n)|_{\\tau_l \\cap \\partial \\Omega(\\phi)} d_ka_l \\Bigg).\n \\end{aligned}\n\\end{align}\n\\begin{remark}\n Note that \\eqref{eq_dhatg_discr} is obtained by discretizing the continuous shape derivative \\eqref{eq_dg_Hadamard}--\\eqref{eq_Llambda_inout}. We see immediately that \\eqref{eq_dhatg_discr} resembles the formula for the discrete {topological-shape } derivative for nodes ${\\mathbf x}_k \\in \\mathfrak S$ \\eqref{eq::formulaShapeDerivative}. The only difference is the occurrence of the last term in \\eqref{eq_dhatg_discr}, which is not accounted for when performing the sensitivity analysis in the discrete setting.\n\n\nNote that this term stems from the presence of $\\partial T_t^{-1} \\partial T_t^{-T}$ in the matrix $A(t)$, which, in turn, originates from two applications of the chain rule, $(\\nabla \\varphi) \\circ T_t = \\partial T_t^{-\\top} \\nabla (\\varphi \\circ T_t)$. 
\nSimilarly as in the case of the topological derivative in Section \\ref{sec_connTD}, the reason for this discrepancy is the fact that, for the given discretization scheme, the gradients of the finite element basis functions are constant on each element and thus $(\\nabla \\varphi) \\circ T_t = \\nabla \\varphi$ for small enough shape perturbation parameter $t$.\n\n\\end{remark}\n\n\\begin{remark}\n As a second, conceptual difference between the classical continuous shape derivative defined by \\eqref{eq::shapeDerivative} and our discrete counterpart \\eqref{eq::formulaShapeDerivative}, we recall that in the definition \\eqref{eq::topDerivative}\n we divided by the change of volume also in the case ${\\mathbf x}_k \\in \\mathfrak S$.\n\\end{remark}\n\n \n\n\\subsection{Classical topological derivative}\\label{sec::contiTopDerivative}\nLet $\\omega \\subset \\mathbb R^d$ with $0 \\in \\omega$.\nFor a point $z \\in \\Omega \\cup (D \\setminus \\overline \\Omega)$, let $\\omega_\\varepsilon := z + \\varepsilon \\omega$ denote a perturbation of the domain around $z$ of (small enough) size $\\varepsilon$ and of shape $\\omega$.\nThe continuous topological derivative of the shape function $\\redObjectiveFunction=\\redObjectiveFunction(\\Omega)$ is defined by \n\\begin{equation} \\label{eq_defTD}\nd\\redObjectiveFunction(\\Omega)(z) = \\begin{cases}\n\\lim_{\\varepsilon\\searrow 0}\\frac{\\redObjectiveFunction(\\Omega\\cup\\omega_\\varepsilon)-\\redObjectiveFunction(\\Omega)}{|\\omega_\\varepsilon|} \\quad \\text{for } z \\in D \\setminus \\overline \\Omega, \\\\ \n\\lim_{\\varepsilon\\searrow 0}\\frac{\\redObjectiveFunction(\\Omega\\setminus\\bar\\omega_\\varepsilon)-\\redObjectiveFunction(\\Omega)}{|\\omega_\\varepsilon|} \\quad \\text{for } z \\in \\Omega.\n\\end{cases}\n\\end{equation}\nNote that the topological derivative is not defined for points $z \\in \\partial \\Omega$ on the material interface.\nFor problem \\eqref{eq::ContinuousProblem} we obtain for $z \\in 
D \\setminus \\overline \\Omega$\n\\begin{align}\n \\begin{aligned}\nd\\redObjectiveFunction_2(\\Omega)(z) =&c_1+c_2(\\tilde \\alpha_1 - \\tilde \\alpha_2)(u(z)-\\hat u(z))^2 \\\\\n&+ 2\\lambda_2\\frac{\\lambda_1-\\lambda_2}{\\lambda_1+\\lambda_2}\\nabla u(z) \\cdot \\nabla p(z) +(\\alpha_1-\\alpha_2)u(z)p(z)- ( f_1 - f_2)p(z), \n \\end{aligned}\n\\end{align} \nwhereas for $z \\in \\Omega$\n\\begin{align}\n\\begin{aligned}\nd\\redObjectiveFunction_2(\\Omega)(z) =&-c_1+c_2 (\\tilde \\alpha_2 - \\tilde \\alpha_1)(u(z)-\\hat u(z))^2 \\\\\n&+ 2\\lambda_1\\frac{\\lambda_2-\\lambda_1}{\\lambda_1+\\lambda_2}\\nabla u(z) \\cdot \\nabla p(z) +(\\alpha_2-\\alpha_1)u(z)p(z)- ( f_2 - f_1)p(z),\n\\end{aligned}\n\\end{align}\nsee, e.g. \\cite{Amstutz_2006aa}.\n\n\\subsection{Classical shape derivative}\\label{sec::contiDerivative}\nWe recall the definition of the classical shape derivative as well as its formula for our model problem \\eqref{eq::ContinuousObjective}--\\eqref{eq::ContinuousProblem}. Given an admissible shape $\\Omega \\in \\mathcal A$ and a smooth vector field $V \\in C_c^\\infty(D)$ that is compactly supported in $D$, we define the perturbed domain\n\\begin{equation}\\label{eq::pertDomain}\n\\Omega_t = (\\text{id}+tV)(\\Omega),\n\\end{equation}\nfor a small perturbation parameter $t > 0$ where $\\text{id}:\\mathbb R^d \\rightarrow \\mathbb R^d$ denotes the identity operator. 
The classical shape derivative of $\\redObjectiveFunction$ at $\\Omega$ with respect to $V$ is then given by\n\\begin{equation}\\label{eq::shapeDerivative}\nd\\redObjectiveFunction(\\Omega)(V) = \\lim_{t\\searrow 0} \\frac{\\redObjectiveFunction(\\Omega_t)-\\redObjectiveFunction(\\Omega)}{t}\n\\end{equation}\nif this limit exists and the mapping $V \\mapsto d \\redObjectiveFunction(\\Omega)(V)$ is linear and continuous.\nUnder suitable assumptions it can be shown that this shape derivative admits the tensor representation\n\\begin{equation}\\label{eq::shapeDerivative_tensor}\nd\\redObjectiveFunction(\\Omega)(V) \n=\\int_D \\mathcal S_1^\\Omega : \\partial V + \\mathcal S_0^\\Omega \\cdot V dx,\n\\end{equation}\nfor some tensors $\\mathcal S_0^\\Omega \\in L^1(D, \\mathbb R^{d} )$, $\\mathcal S_1^\\Omega \\in L^1(D, \\mathbb R^{d\\times d} )$ \\cite{LaurainSturm2016}. Here, $\\partial V$ denotes the Jacobian of the vector field $V$. The structure theorem of Hadamard-Zol\\'esio \\cite[pp. 480-481]{DZ2} states that under certain smoothness assumptions the shape derivative of a shape function $\\redObjectiveFunction$ with respect to a vector field $V$ can always be written as an integral over the boundary of a scalar function $L$ times the normal component of $V$, i.e.,\n\\begin{align}\\label{eq::shapeDerivative_boundary}\nd\\redObjectiveFunction(\\Omega)(V) = \\int_{\\partial \\Omega} L \\,(V \\cdot n) \\; \\mbox ds\n\\end{align}\nwhere $n$ denotes the unit normal vector pointing out of $\\Omega$. 
For problem \\eqref{eq::ContinuousProblem} one obtains \\cite{LaurainSturm2016}\n\\begin{align} \\label{eq_S1}\n\\mathcal S_1^\\Omega =& (c_1 \\chi_\\Omega + c_2 \\tilde \\alpha_\\Omega (u-\\hat u)^2 + \\lambda_\\Omega \\nabla u \\cdot \\nabla p + \\alpha_\\Omega u p - f_\\Omega p) I - \\lambda_\\Omega \\nabla u \\otimes \\nabla p - \\lambda_\\Omega \\nabla p \\otimes \\nabla u, \\\\\n\\mathcal S_0^\\Omega =& -2 \\tilde \\alpha_\\Omega (u-\\hat u) \\nabla \\hat u \\label{eq_S0}\n\\end{align}\nwhere $I \\in \\mathbb R^{d,d}$ denotes the identity matrix, and\n\\begin{align*}\nL =& [(\\mathcal S_1^{\\Omega,\\text{in}} - \\mathcal S_1^{\\Omega,\\text{out}}) n ]\\cdot n \\\\\n=& c_1 + c_2 (\\tilde \\alpha_1- \\tilde \\alpha_2) (u-\\hat u)^2 + (\\alpha_1-\\alpha_2) u p - (f_1 - f_2) p \\\\\n&+ (\\lambda_1- \\lambda_2) (\\nabla u \\cdot \\tau)(\\nabla p \\cdot\\tau) - \\left(\\frac{1}{\\lambda_1} - \\frac{1}{\\lambda_2}\\right)( \\lambda_\\Omega \\nabla u \\cdot n)( \\lambda_\\Omega \\nabla p \\cdot n).\n\\end{align*}\nHere, $\\mathcal S_1^{\\Omega,\\text{in}}$ and $\\mathcal S_1^{\\Omega,\\text{out}}$ denote the restrictions of the tensor $\\mathcal S_1^{\\Omega}$ to $\\Omega$ and $D \\setminus \\Omega$, respectively. 
Furthermore, for two column vectors $a, b \\in \\mathbb R^d$, $a \\otimes b = a b^\\top \\in \\mathbb R^{d \\times d}$ denotes their outer product, $\\tau$ denotes the tangential vector and $p \\in H^1_0(D)$ is the solution to the adjoint equation\n\\begin{align*}\n \\int_D \\lambda_\\Omega \\nabla v \\cdot \\nabla p + \\alpha_\\Omega v p \\; \\mbox dx = - 2c_2 \\int_D \\tilde \\alpha_\\Omega (u - \\hat u)v \\; \\mbox dx \\quad \\mbox{for all } v \\in H^1_0(D).\n\\end{align*}\n\n\n\n\nMoreover, motivated by the definition of the topological derivative \\eqref{eq_defTD} with the volume of the difference of the perturbed and unperturbed domains in the denominator, we introduce the alternative definition of a shape derivative\n\\begin{align} \\label{eq::shapeDerivativeAlternative}\n\\hat d \\redObjectiveFunction(\\Omega)(V) = \\underset{t \\searrow 0}{\\mbox{lim }} \\frac{\\redObjectiveFunction(\\Omega_t)-\\redObjectiveFunction(\\Omega)}{|\\Omega_t \\triangle \\Omega|},\n\\end{align}\nwith the symmetric difference of two sets $A \\triangle B := (A \\setminus \\overline B) \\cup (B \\setminus \\overline A)$. Note that the volume of the symmetric difference in \\eqref{eq::shapeDerivativeAlternative} can be written as\n\\begin{equation}\\label{eq::diffSets}\n|\\Omega_t \\triangle \\Omega| = \\int_D |\\chi_{\\Omega_t} - \\chi_{\\Omega}| \\;\\mathrm{d} {\\mathbf x}.\n\\end{equation}\n\n\n\\begin{lemma} \\label{LEM_SYMDIF}\n Let $\\Omega$ and $V$ be smooth. Then it holds\n \\begin{align*}\n \\underset{t \\searrow 0}{\\mbox{lim }}\\frac1t \\left \\lvert \\Omega_t \\triangle \\Omega \\right \\rvert = \\int_{\\partial \\Omega} |V \\cdot n| \\; \\mbox dS_x.\n \\end{align*}\n\\end{lemma}\n\nThe proof is given in Appendix \\ref{app::proofSymDif}. 
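A quick numerical illustration of Lemma \ref{LEM_SYMDIF} (a sketch, not part of the proof): for the unit disk and $V(x) = x$, the perturbed domain is the scaled disk $(1+t)\Omega$, the symmetric difference is an annulus, and $|V \cdot n| = 1$ on $\partial \Omega$, so the limit is the perimeter $2\pi$:

```python
import math

# Omega = unit disk, V(x) = x, hence Omega_t = (1 + t) * Omega and V.n = 1
# on the boundary; the symmetric difference is the annulus 1 < |x| < 1 + t.
def sym_diff_area(t):
    return math.pi * ((1.0 + t) ** 2 - 1.0)

boundary_integral = 2.0 * math.pi  # integral of |V.n| over the unit circle

for t in (1e-2, 1e-4, 1e-6):
    # |Omega_t symdiff Omega| / t = pi * (2 + t) -> 2*pi as t -> 0
    assert abs(sym_diff_area(t) / t - boundary_integral) <= 4.0 * t
```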
\nFrom Lemma \\ref{LEM_SYMDIF}, we immediately obtain the following relation between $d \\redObjectiveFunction(\\Omega)(V)$ and $\\hat d \\redObjectiveFunction(\\Omega)(V)$.\n\n\\begin{corollary}\nSuppose that $\\redObjectiveFunction$ is shape differentiable at $\\Omega$ and that $\\Omega$ and $V$ are smooth and $\\int_{\\partial \\Omega} |V \\cdot n| \\; dS_x > 0$. Then it holds\n\\begin{align}\n\\hat d \\redObjectiveFunction(\\Omega)(V) = \\frac{ d \\redObjectiveFunction(\\Omega)(V)}{ \\int_{\\partial \\Omega} |V \\cdot n| \\; dS_x}.\n\\end{align}\n\\end{corollary}\n\\begin{proof}\n\tThis follows immediately from the definition of $\\hat d \\redObjectiveFunction(\\Omega)(V)$ by Lemma \\ref{LEM_SYMDIF} since\n\t\\begin{align}\n\t\\hat d \\redObjectiveFunction(\\Omega)(V) = \\underset{t \\searrow 0}{\\mbox{lim }}\\frac{\\redObjectiveFunction(\\Omega_t)-\\redObjectiveFunction(\\Omega)}{|\\Omega_t \\triangle \\Omega|} = \\frac{\\underset{t \\searrow 0}{\\mbox{lim }}\\frac{\\redObjectiveFunction(\\Omega_t)-\\redObjectiveFunction(\\Omega)}{t}}{\\underset{t \\searrow 0}{\\mbox{lim }}\\frac{|\\Omega_t \\triangle \\Omega|}{t}} = \\frac{ d \\redObjectiveFunction(\\Omega)(V)}{\\int_{\\partial \\Omega} |V \\cdot n| \\; dS_x}. 
\n\t\\end{align}\t\n\t\n\\end{proof}\n\\begin{remark}The condition $\\int_{\\Omega} \\mbox{div}(V) \\; dx \\neq 0$ is sufficient for $\\int_{\\partial \\Omega} |V \\cdot n| \\; dS_x > 0$, since, by the divergence theorem,\n\\begin{equation}\n0 < \\left|\\int_{\\Omega} \\mbox{div}(V) \\; dx \\right| = \\left|\\int_{\\partial \\Omega} V \\cdot n \\; dS_x \\right| \\leq \\int_{\\partial \\Omega} |V \\cdot n| \\; dS_x.\n\\end{equation}\t\n\\end{remark}\n\n\n\n\\subsection{The continuous {topological-shape } derivative}\nHere and in the following, we assume that the domain $\\Omega$ is described by a level-set function $\\phi: D \\rightarrow \\mathbb R$ via\n\\begin{subequations}\n\\begin{align}\n\t\\phi({\\mathbf x}) < 0 &\\Longleftrightarrow {\\mathbf x} \\in \\Omega, \\label{eq::levelSet1} \\\\\n\t\\phi({\\mathbf x}) > 0 &\\Longleftrightarrow {\\mathbf x} \\in D \\setminus \\overline{\\Omega}, \\label{eq::levelSet2}\\\\\n\t\\phi({\\mathbf x}) = 0 &\\Longleftrightarrow {\\mathbf x} \\in \\partial \\Omega \\cap D. 
\\label{eq::levelSet3}\n\\end{align}\n\\end{subequations}\nFor given $\\phi$, let $\\Omega(\\phi)$ denote the unique domain defined by \\eqref{eq::levelSet1}--\\eqref{eq::levelSet3}.\nIn this section, in contrast to the setting in Section \\ref{sec::contiDerivative}, we perturb $\\Omega$ indirectly by perturbing $\\phi$ such that\n$\\phi_\\varepsilon = \\operator_\\varepsilon \\phi$\nfor some operator $\\operator_\\varepsilon: C^0(D)\\rightarrow C^0(D)$ depending on $\\varepsilon \\geq 0$ with the property $\\Omega(\\operator_0 \\phi) = \\Omega(\\phi)$.\nLater on, in the discrete setting, we will distinguish between two different types of perturbation operators $\\operator_\\varepsilon$ corresponding to shape or topological perturbations of~$\\Omega$.\n\nLet, from now on, $\\redPhiObjectiveFunction(\\phi):= \\redObjectiveFunction(\\Omega(\\phi))$ denote the reduced cost function as a function of the level set function $\\phi$.\n This way, a continuous {topological-shape } derivative can be defined as \n\\begin{equation}\\label{eq::topologicalShapeDerivative}\n\td\\redPhiObjectiveFunction(\\phi) = \\lim_{\\varepsilon\\searrow 0}\\frac{\\redPhiObjectiveFunction(\\phi_\\varepsilon)-\\redPhiObjectiveFunction(\\phi)}{|\\Omega(\\phi_\\varepsilon) \\triangle \\Omega(\\phi)|}.\n\\end{equation}\nNote that this sensitivity depends on the choice of the perturbation operator $\\operator_\\varepsilon$, which can represent either a shape perturbation or a topological perturbation.\nWe will mostly be concerned with its discrete counterpart, which will be introduced in Section \\ref{sec::numTopShapeDer}.\nNote that, in the case of shape perturbations, due to the scaling $|\\Omega(\\phi_\\varepsilon)\\triangle \\Omega(\\phi)|$ instead of $\\varepsilon$ in the denominator the shape derivative is modified and does not correspond to \\eqref{eq::shapeDerivative} but rather to \\eqref{eq::shapeDerivativeAlternative}.\n\n\n\\paragraph{Relation to 
literature}\n\n\n\nThe sensitivity of shape functions with respect to perturbations of a level set function (representing a shape) was investigated in \\cite{laurain2018analyzing} for the case without PDE constraints. There, the author considers smooth level set functions and rigorously computes the G\\^{a}teaux (semi-)derivative in the direction of a smooth perturbation of the level set function, both for the case of shape and topological perturbations. In the case of shape perturbations, it is shown that the G\\^{a}teaux derivative coincides with the shape derivative \\eqref{eq::shapeDerivative} with respect to a suitably chosen vector field. On the other hand, a resemblance between the notions of G\\^{a}teaux derivative and topological derivative is shown, yet the G\\^{a}teaux derivative may vanish or not exist in cases where the topological derivative is finite.\nEvidently, this discrepancy results from the fact that the denominator in the definition of the G\\^{a}teaux derivative is always of order one whereas it is of the order of the space dimension in the topological derivative.\n\nWhile the analysis for shape and topological perturbations is carried out separately in \\cite{laurain2018analyzing}, a more unified approach is followed in \\cite{delfour2018topological,Delfour_engcomp_2022}. In these publications, the idea is to consider sensitivities with respect to domain perturbations that are obtained by the dilation of lower-dimensional objects. Here, given a set $E \\subset \\mathbb R^d$ of dimension $k \\leq d$, the dilated set of radius $r$ is given by $E_r = \\{x \\in \\mathbb R^d: d_E(x) \\leq r\\}$ where $d_E(x)$ denotes the distance of the point $x$ to the set $E$. For instance, when $E$ is chosen as a single point, the dilated set is just a ball of radius $r$ around that point and performing a sensitivity analysis with respect to the volume of the dilated object leads to the topological derivative. 
On the other hand, when $E$ is chosen as the boundary of a domain, $E_r$ can be defined using a signed distance function and corresponds to a uniform expansion of the domain. Then, a similar procedure leads to the shape derivative with respect to a uniform expansion in normal direction (i.e. $V = n$ in \\eqref{eq::pertDomain}--\\eqref{eq::shapeDerivative}).\nIn \\cite{delfour2018topological}, a sensitivity analysis for various choices of $E$ is carried out with respect to the volume of the perturbation. We note, however, that arbitrary shape perturbations are not covered and would require an extension of the theory. Comparing \\cite{laurain2018analyzing} and \\cite{delfour2018topological}, we observe that in the former paper only smooth perturbations of a level set function are admissible whereas, in the latter approach, domain perturbations by dilations can be interpreted as perturbations of level set functions by a (non-smooth) distance function.\n\nFinally, we mention \\cite{bernland2018acoustic} where a domain is represented by a discretized level set function and a shape sensitivity analysis is carried out with respect to a perturbation of the level set values close to the boundary. This procedure can be interpreted as an application of the idea of dilation to discretized shape optimization problems.\nAs the authors point out, this kind of shape sensitivity analysis is more natural compared to the standard approach based on domain transformations when employed in a level-set framework; an observation also made in \\cite[Sec. 3]{laurain2018analyzing}. The authors show numerical results for the shape optimization of an acoustic horn, but do not consider topological perturbations in this work.\n\n\\medskip\n\nAs it can be seen from \\eqref{eq::topologicalShapeDerivative}, our approach is related to the dilation concept since we also consider the sensitivity with respect to the volume of the domain perturbation $\\Omega_t \\triangle \\Omega$. 
In the following, we will investigate the {topological-shape } derivative in a discretized setting. Similarly to \\cite{bernland2018acoustic}, we will consider shape sensitivity analysis with respect to level set values on mesh nodes close to the boundary. Moreover, we will also be able to deal with topological perturbations and treat shape and topological updates in a unified way by a discretized version of \\eqref{eq::topologicalShapeDerivative}, called the numerical {topological-shape } derivative.\n\n\n\n\n\n\n\n\\subsection{Computation of the numerical {topological-shape } derivative for the area cost functional}\nBefore we compute the numerical {topological-shape } derivative \\eqref{eq::topDerivative} for the full model problem \\eqref{eq::ContinuousProblem}, we consider the case of the pure volume cost function and neglect the PDE constraint, i.e., we set $c_1 = 1, c_2 =0$ in \\eqref{eq::ContinuousObjective}.\n\nFor that purpose, we investigate\n\\begin{equation}\\label{eq::areaChange}\n\\delta_{k,\\eps} a = |\\Omega(\\discreteOperator\\phi)|-|\\Omega(\\phi)|,\n\\end{equation}\nfor $\\discreteOperator \\in \\{ T^{-\\rightarrow+}_{k,\\varepsilon},\\,T^{+\\rightarrow-}_{k,\\varepsilon},\\,\\S \\}$. 
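Such an area change can be checked numerically on a single element. A minimal Python sketch (with hypothetical nodal values and helper names of our own choosing): we perturb the level-set value at the positive node of a cut reference triangle and compare a finite difference of the element area with the hand-computed derivative for this configuration:

```python
# Area of the negative level-set region {phi_h < 0} on the reference triangle
# for a cut with a single positive node (standard linear cut geometry).
def neg_area(phi1, phi2, phi3, detJ=1.0):
    # the positive corner is cut off at fractions l1, l2 along the two edges
    l1 = phi1 / (phi1 - phi2)
    l2 = phi1 / (phi1 - phi3)
    return 0.5 * detJ * (1.0 - l1 * l2)

phi1, phi2, phi3 = 0.5, -1.0, -2.0   # hypothetical values, phi1 > 0 > phi2, phi3
eps = 1e-6

# finite difference of the area w.r.t. a perturbation of the positive nodal value
fd = (neg_area(phi1 + eps, phi2, phi3) - neg_area(phi1, phi2, phi3)) / eps

# derivative obtained by differentiating neg_area by hand at eps = 0
exact = phi1 * (phi1 * (phi2 + phi3) - 2.0 * phi2 * phi3) \
        / (2.0 * (phi1 - phi2) ** 2 * (phi1 - phi3) ** 2)
assert abs(fd - exact) < 1e-4
```

The negative sign of the derivative matches the intuition that raising the level-set value at the positive node shrinks the region $\{\phi_h < 0\}$.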
\nWe note that for the computation of \\eqref{eq::areaChange} only those ``cut elements'' are relevant which contain the node ${\\mathbf x}_k$, \\textit{i.e.}\\,\n\\begin{equation}\\label{eq::areaChangeB}\n\\delta_{k,\\eps} a = {\\sum_{l\\in C_k} \n\\delta_{k,\\varepsilon}a_l} \t\\quad \\mbox{with } \\delta_{k,\\varepsilon}a_l :=\n\\int_{\\tau_l} H(-\\discreteOperator\\phi) - H(-\\phi) \\;\\mathrm{d} {\\mathbf x},\n\\end{equation}\nwhere $H(x)$ denotes the Heaviside step function and\n\\begin{equation}\\label{eq::areaChangeSet}\nC_k = \\{l \\in I_{{\\mathbf x}_k} : \\tau_l \\cap \\partial \\Omega(\\discreteOperator\\phi) \\neq \\emptyset \\}\n\\end{equation}\n is the set of all indices of elements adjacent to ${\\mathbf x}_k$ which are intersected by the perturbed interface.\n Note that, for $\\varepsilon>0$ small enough, $C_k$ does not depend on the particular value of $\\varepsilon$.\nFor an element $\\tau_l$ with $l \\in I_{{\\mathbf x}_k}$ we denote the three vertices in counter-clockwise orientation by ${\\mathbf x}_{l_1}$, ${\\mathbf x}_{l_2}$, ${\\mathbf x}_{l_3}$ and assume that ${\\mathbf x}_k = {\\mathbf x}_{l_1}$. 
Moreover we denote $ \\phi_{l_j} := \\phi({\\mathbf x}_{l_j})$ and $\\tilde \\phi_{l_j} := \\discreteOperator\\phi({\\mathbf x}_{l_j})$ for $j=1,2,3$ and small enough $\\varepsilon$.\nIn the following, we will be interested in \n\\begin{align}\n d_k a := \\sum_{l \\in C_k} d_k a_l \\quad \\mbox{ with } d_k a_l := \\underset{\\varepsilon \\searrow 0}{\\mbox{lim }} \\frac{\\delta_{k,\\varepsilon} a_l}{\\varepsilon^o}.\n\\end{align}\nwith $\\delta_{k,\\varepsilon} a_l$ defined in \\eqref{eq::areaChange}.\nWe consider six different sets (see Figure \\ref{fig::sets} for an illustration)\n\\begin{align}\\label{eq_def_Ixk_ABCpm}\n \\begin{aligned}\nI_{{\\mathbf x}_k}^{A_+} = & \\{ l \\in I_{{\\mathbf x}_k}: \\tilde\\phi_{l_1} > 0, \\tilde\\phi_{l_2} < 0, \\tilde\\phi_{l_3} < 0 \\}, &&&\nI_{{\\mathbf x}_k}^{A_-} = & \\{ l \\in I_{{\\mathbf x}_k}: \\tilde\\phi_{l_1} < 0, \\tilde\\phi_{l_2} > 0, \\tilde\\phi_{l_3} > 0 \\}, \\\\\nI_{{\\mathbf x}_k}^{B_+} =& \\{ l \\in I_{{\\mathbf x}_k}: \\tilde\\phi_{l_1} < 0, \\tilde\\phi_{l_2} > 0, \\tilde\\phi_{l_3} < 0 \\}, &&&\nI_{{\\mathbf x}_k}^{B_-} =& \\{ l \\in I_{{\\mathbf x}_k}: \\tilde\\phi_{l_1} > 0, \\tilde\\phi_{l_2} < 0, \\tilde\\phi_{l_3} > 0 \\}, \\\\\nI_{{\\mathbf x}_k}^{C_+} =& \\{ l \\in I_{{\\mathbf x}_k}: \\tilde\\phi_{l_1} < 0, \\tilde\\phi_{l_2} < 0, \\tilde\\phi_{l_3} > 0 \\}, &&&\nI_{{\\mathbf x}_k}^{C_-} =& \\{ l \\in I_{{\\mathbf x}_k}: \\tilde\\phi_{l_1} > 0, \\tilde\\phi_{l_2} > 0, \\tilde\\phi_{l_3} < 0 \\},\n \\end{aligned}\n\\end{align}\nsuch that \n\\begin{align*}\nC_k = I_{{\\mathbf x}_k}^{A_+} \\cup I_{{\\mathbf x}_k}^{A_-} \\cup I_{{\\mathbf x}_k}^{B_+} \\cup I_{{\\mathbf x}_k}^{B_-} \\cup I_{{\\mathbf x}_k}^{C_+} \\cup I_{{\\mathbf x}_k}^{C_-}\n\\end{align*}\nwith a direct sum on the right hand side. 
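The sign pattern in \eqref{eq_def_Ixk_ABCpm} is easy to tabulate. A small sketch (the function name is ours; nodal values equal to zero are ignored here):

```python
def cut_configuration(p1, p2, p3):
    """Classify a cut element by the signs of the (perturbed) nodal level-set
    values, with local node 1 corresponding to x_k; illustration only.
    Returns None for elements that are not intersected."""
    key = (p1 > 0, p2 > 0, p3 > 0)
    table = {
        (True,  False, False): "A+", (False, True,  True ): "A-",
        (False, True,  False): "B+", (True,  False, True ): "B-",
        (False, False, True ): "C+", (True,  True,  False): "C-",
    }
    return table.get(key)  # (+,+,+) and (-,-,-) are not cut

assert cut_configuration( 0.5, -1.0, -2.0) == "A+"
assert cut_configuration(-0.5,  1.0, -2.0) == "B+"
assert cut_configuration( 1.0,  2.0,  3.0) is None
```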
We can thus split the sum in \\eqref{eq::areaChangeB} into six parts,\n\\begin{align*}\n\\delta_{k,\\eps} a = \n\\mathcal{I}_{I_{{\\mathbf x}_k}^{A_+}} + \\mathcal{I}_{I_{{\\mathbf x}_k}^{A_-}}+\n\\mathcal{I}_{I_{{\\mathbf x}_k}^{B_+}} + \\mathcal{I}_{I_{{\\mathbf x}_k}^{B_-}}+\n\\mathcal{I}_{I_{{\\mathbf x}_k}^{C_+}} + \\mathcal{I}_{I_{{\\mathbf x}_k}^{C_-}}.\n\\end{align*}\n\\begin{figure}\n\t\\begin{subfigure}[t]{0.33\\textwidth}\n\t\\includegraphics{pic_setCk8.pdf}\n\t\\subcaption{Triangle is in $I_{{\\mathbf x}_k}^{A_+}$}\t\n\t\\end{subfigure}\\hfill\n\t\\begin{subfigure}[t]{0.33\\textwidth}\n\t\\includegraphics{pic_setCk4.pdf}\n\t\\subcaption{Triangle is in $I_{{\\mathbf x}_k}^{B_+}$}\n\t\\end{subfigure}\\hfill\n\t\\begin{subfigure}[t]{0.33\\textwidth}\n\t\\includegraphics{pic_setCk7.pdf}\n\t\\subcaption{Triangle is in $I_{{\\mathbf x}_k}^{C_+}$}\n\t\\end{subfigure}\n\n\t\\begin{subfigure}[t]{0.33\\textwidth}\n\t\\includegraphics{pic_setCk9.pdf}\n\t\\subcaption{Triangle is in $I_{{\\mathbf x}_k}^{A_-}$}\t\n\t\\end{subfigure}\\hfill\n\t\\begin{subfigure}[t]{0.33\\textwidth}\n\t\t\\includegraphics{pic_setCk5.pdf}\n\t\t\\subcaption{Triangle is in $I_{{\\mathbf x}_k}^{B_-}$}\t\t\n\t\\end{subfigure}\\hfill\n\t\\begin{subfigure}[t]{0.33\\textwidth}\n\t\\includegraphics{pic_setCk6.pdf}\n\t\\subcaption{Triangle is in $I_{{\\mathbf x}_k}^{C_-}$}\t\t\n\t\\end{subfigure}\n\t\\caption{Illustration of the sets $I_{{\\mathbf x}_k}^{A_+},~I_{{\\mathbf x}_k}^{A_-},~I_{{\\mathbf x}_k}^{B_+},~I_{{\\mathbf x}_k}^{B_-},~I_{{\\mathbf x}_k}^{C_+} ,~ I_{{\\mathbf x}_k}^{C_-}$. The nodal values of the level-set functions are indicated by $-$, $+$. 
The interface is drawn in red.}\n\t\\label{fig::sets}\n\\end{figure}\n\\begin{figure}\\centering\n\\begin{subfigure}[t]{0.49\\textwidth}\\centering\n\t\\includegraphics{pic_setCk1.pdf}\n\t\\subcaption{Triangles are in $I_{{\\mathbf x}_k}^{A_+}$}\n\t\\label{fig::IAtopA}\t\n\\end{subfigure}\n\\begin{subfigure}[t]{0.49\\textwidth}\\centering\n\t\\includegraphics{pic_setCk2.pdf}\n\t\\subcaption{Triangles are in $I_{{\\mathbf x}_k}^{A_-}$}\n\t\\label{fig::IAtopB}\t\t\n\\end{subfigure}\n\\caption{Illustration of $I_{{\\mathbf x}_k}^{A_+}$ and $I_{{\\mathbf x}_k}^{A_-}$ in the case of topological perturbations.}\n\\label{fig::topDir}\n\\end{figure}\n\\paragraph{Configuration A}\nFor $l \\in I_{{\\mathbf x}_k}^{A_+}$ we have\n\\begin{equation*}\n\\begin{aligned}\n\\int_{\\tau_{l}} H(-\\discreteOperator\\phi({\\mathbf x})) \\;\\mathrm{d} {\\mathbf x} &= \n \\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}{2} \\left(1- \\int_{\\xi_1=0}^{\\ell_1} \\int_{\\xi_2=0}^{\\ell_2\\left(1-\\frac{\\xi_1}{\\ell_1}\\right)} \\,d\\xi_2d\\xi_1 \\right) \\\\\n&= \\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}{2}(1-\\ell_1\\ell_2),\n\\end{aligned}\n\\end{equation*}\nwith\n\\begin{align*}\n\t\\ell_1 = \\frac{\\phi_{l_1}+\\varepsilon}{\\phi_{l_1}+\\varepsilon-\\phi_{l_2}}, \\qquad\n\t\\ell_2 = \\frac{\\phi_{l_1}+\\varepsilon}{\\phi_{l_1}+\\varepsilon-\\phi_{l_3}}.\n\\end{align*}\nTherefore,\n\\begin{equation}\n\\begin{aligned}\\label{eq::intAplus}\n\\mathcal{I}_{I_{{\\mathbf x}_k}^{A_+}}\n&= \\sum_{l \\in I_{{\\mathbf x}_k}^{A_+}} \\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}2 \\left(-\\frac{(\\phi_{l_1}+\\varepsilon)^2}{(\\phi_{l_1}+\\varepsilon-\\phi_{l_2})(\\phi_{l_1}+\\varepsilon-\\phi_{l_3})}+\\frac{\\phi_{l_1}^2}{(\\phi_{l_1}-\\phi_{l_2})(\\phi_{l_1}-\\phi_{l_3})} \\right)\\\\\n&= \\sum_{l \\in I_{{\\mathbf x}_k}^{A_+}} \\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}2 \\left(\\frac{\\varepsilon \\phi_{l_1} \\left[\\phi_{l_1}(\\phi_{l_2} + \\phi_{l_3})-2\\phi_{l_2}\\phi_{l_3} 
\\right]+\\varepsilon^2\\left[\\phi_{l_1}(\\phi_{l_2} + \\phi_{l_3})-\\phi_{l_2}\\phi_{l_3} \\right]}{(\\phi_{l_1} - \\phi_{l_2})^2(\\phi_{l_1} - \\phi_{l_3})^2 + O(\\varepsilon)} \\right).\n\\end{aligned}\n\\end{equation}\nFor ${\\mathbf x}_k \\in \\mathfrak T^-$ and $\\discreteOperator = T^{-\\rightarrow+}_{k,\\varepsilon}$, it holds $I_{{\\mathbf x}_k}=I_{{\\mathbf x}_k}^{A_+}$ (see Figure \\ref{fig::IAtopA}) and we have to consider \\eqref{eq::intAplus} with $\\phi_{l_1} = 0$. Thus, we obtain\n\\begin{align}\n\\mathcal{I}_{I_{{\\mathbf x}_k}^{A_+}}= -\n\\sum_{l \\in I_{{\\mathbf x}_k}^{A_+}}\\frac{\\varepsilon^2\\left|{\\textup{\\text{det}}}J_l\\right|}2 \\frac{\\phi_{l_2}\\phi_{l_3} }{ \\phi_{l_2}^2\\phi_{l_3}^2 + O(\\varepsilon)},\n\\end{align}\nand conclude for this case \n\\begin{equation}\\label{eq::areaChangeTopAplus}\n d_{k}a = \\lim_{\\varepsilon\\searrow 0}\\frac{\\delta_{k,\\eps} a}{\\varepsilon^2} =- \\sum_{l \\in I_{{\\mathbf x}_k}^{A_+}}\\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}{2\\phi_{l_2}\\phi_{l_3}}.\n\\end{equation}\nMoreover, for ${\\mathbf x}_k \\in \\mathfrak T^+$ and $\\discreteOperator = T^{+\\rightarrow-}_{k,\\varepsilon}$, it holds $I_{{\\mathbf x}_k} = I_{{\\mathbf x}_k}^{A_-}$ (see Figure \\ref{fig::IAtopB}) and we have for $l \\in I_{{\\mathbf x}_k}^{A_-}$,\n\\begin{align*}\n\\int_{\\tau_{l}} H(-\\discreteOperator\\phi({\\mathbf x})) \\;\\mathrm{d} {\\mathbf x} &= \n\\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}{2} \\int_{\\xi_1=0}^{\\ell_1} \\int_{\\xi_2=0}^{\\ell_2\\left(1-\\frac{\\xi_1}{\\ell_1}\\right)} \\,d\\xi_2d\\xi_1 = \\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}{2}\\ell_1\\ell_2,\n\\end{align*}\nwhich leads to \n\\begin{equation}\n\\begin{aligned}\\label{eq::intAminus}\n\\mathcal{I}_{I_{{\\mathbf x}_k}^{A_-}}\n&= -\\sum_{l \\in I_{{\\mathbf x}_k}^{A_-}} \\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}2 \\left(\\frac{\\varepsilon \\phi_{l_1} \\left[\\phi_{l_1}(\\phi_{l_2} + \\phi_{l_3})-2\\phi_{l_2}\\phi_{l_3} 
\\right]+\\varepsilon^2\\left[\\phi_{l_1}(\\phi_{l_2} + \\phi_{l_3})-\\phi_{l_2}\\phi_{l_3} \\right]}{(\\phi_{l_1} - \\phi_{l_2})^2(\\phi_{l_1} - \\phi_{l_3})^2 + O(\\varepsilon)} \\right).\n\\end{aligned}\n\\end{equation}\nTherefore, we obtain for this case\n\\begin{equation*}\nd_{k}a = \\lim_{\\varepsilon\\searrow 0}\\frac{\\delta_{k,\\eps} a}{\\varepsilon^2} = \\sum_{l \\in I_{{\\mathbf x}_k}^{A_-}}\\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}{2\\phi_{l_2}\\phi_{l_3}}.\n\\end{equation*}\nFinally, for ${\\mathbf x}_k \\in \\mathfrak S$ and $\\discreteOperator = \\S$ we deduce from \\eqref{eq::intAplus} \n\\begin{equation}\\label{eq::shapeA}\n\\lim_{\\varepsilon\\searrow 0}\\frac{\\mathcal{I}_{I_{{\\mathbf x}_k}^{A_+}}}{\\varepsilon} = \\sum_{l \\in I_{{\\mathbf x}_k}^{A_+}}\\frac{\\left|{\\textup{\\text{det}}}J_l\\right|\\;\\phi_{l_1} \\left[\\phi_{l_1}(\\phi_{l_2} + \\phi_{l_3}) - 2\\phi_{l_2}\\phi_{l_3} \\right]}{2(\\phi_{l_1} - \\phi_{l_2})^2(\\phi_{l_1} - \\phi_{l_3})^2},\n\\end{equation}\nand from \\eqref{eq::intAminus}\n\\begin{equation} \\label{eq_IAplus_o_eps}\n\\lim_{\\varepsilon\\searrow 0}\\frac{\\mathcal{I}_{I_{{\\mathbf x}_k}^{A_-}}}{\\varepsilon} = -\\sum_{l \\in I_{{\\mathbf x}_k}^{A_-}}\\frac{\\left|{\\textup{\\text{det}}}J_l\\right|\\;\\phi_{l_1} \\left[\\phi_{l_1}(\\phi_{l_2} + \\phi_{l_3})-2\\phi_{l_2}\\phi_{l_3} \\right]}{2(\\phi_{l_1} - \\phi_{l_2})^2(\\phi_{l_1} - \\phi_{l_3})^2}.\n\\end{equation}\n\\paragraph{Configuration B}\nWe note that Configuration B can only occur for the case ${\\mathbf x}_k \\in \\mathfrak S$ and $\\discreteOperator = \\S$. 
For $l \\in I_{{\\mathbf x}_k}^{B_+}$ it holds\n\\begin{align}\\label{eq::configurationB}\n\\int_{\\tau_{l}} H(-\\S\\phi({\\mathbf x})) \\;\\mathrm{d} {\\mathbf x} = \\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}2\\left(1- \\frac{\\phi_{l_2}^2}{(\\phi_{l_2}-\\phi_{l_3})(\\phi_{l_2}-\\phi_{l_1}-\\varepsilon)} \\right),\n\\end{align}\nand\n\\begin{equation*}\n\\begin{aligned}\n\\mathcal{I}_{I_{{\\mathbf x}_k}^{B_+}}\n&= \\sum_{l \\in I_{{\\mathbf x}_k}^{B_+}}\\frac{\\left|{\\textup{\\text{det}}}J_l\\right|\\phi_{l_2}^2}2 \\frac{(\\phi_{l_2}-\\phi_{l_3})(\\phi_{l_2}-\\phi_{l_1}-\\varepsilon)-(\\phi_{l_2}-\\phi_{l_3})(\\phi_{l_2}-\\phi_{l_1})}{(\\phi_{l_2}-\\phi_{l_3})(\\phi_{l_2}-\\phi_{l_1})(\\phi_{l_2}-\\phi_{l_3})(\\phi_{l_2}-\\phi_{l_1}-\\varepsilon)}\\\\\n&= \\sum_{l \\in I_{{\\mathbf x}_k}^{B_+}}\\frac{\\left|{\\textup{\\text{det}}}J_l\\right|\\phi_{l_2}^2}2 \\frac{-\\varepsilon(\\phi_{l_2}-\\phi_{l_3})}{(\\phi_{l_2}-\\phi_{l_3})^2(\\phi_{l_2}-\\phi_{l_1})(\\phi_{l_2}-\\phi_{l_1}-\\varepsilon)}.\n\\end{aligned}\n\\end{equation*}\nThus,\n\\begin{equation}\\label{eq::shapeB}\n\\lim_{\\varepsilon\\searrow 0}\\frac{\\mathcal{I}_{I_{{\\mathbf x}_k}^{B_+}}}{\\varepsilon} = -\\sum_{l \\in I_{{\\mathbf x}_k}^{B_+}} \\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}2 \\frac{\\phi_{l_2}^2}{(\\phi_{l_2}-\\phi_{l_3})(\\phi_{l_2}-\\phi_{l_1})^2}.\n\\end{equation}\nFor the case $l \\in I_{{\\mathbf x}_k}^{B_-}$ we have\n\\begin{align}\\label{eq::configurationBminus}\n\t\\int_{\\tau_{l}} H(-\\S\\phi({\\mathbf x})) \\;\\mathrm{d} {\\mathbf x} = \\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}2 \\frac{\\phi_{l_2}^2}{(\\phi_{l_2}-\\phi_{l_3})(\\phi_{l_2}-\\phi_{l_1}-\\varepsilon)},\n\\end{align}\nand obtain\n\\begin{equation*}\n\\lim_{\\varepsilon\\searrow 0}\\frac{\\mathcal{I}_{I_{{\\mathbf x}_k}^{B_-}}}{\\varepsilon} = \\sum_{l \\in I_{{\\mathbf x}_k}^{B_-}} \\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}2 
\\frac{\\phi_{l_2}^2}{(\\phi_{l_2}-\\phi_{l_3})(\\phi_{l_2}-\\phi_{l_1})^2}.\n\\end{equation*}\n\\paragraph{Configuration C}\nAnalogously to Configuration B, we note that Configuration C can only occur for the case ${\\mathbf x}_k \\in \\mathfrak S$ and $\\discreteOperator = \\S$. We have \n\\begin{equation}\\label{eq::shapeC}\n\\lim_{\\varepsilon\\searrow 0}\\frac{\\mathcal{I}_{I_{{\\mathbf x}_k}^{C_+}}}{\\varepsilon} = -\\sum_{l \\in I_{{\\mathbf x}_k}^{C_+}}\\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}2 \\frac{\\phi_{l_3}^2}{(\\phi_{l_3}-\\phi_{l_2})(\\phi_{l_3}-\\phi_{l_1})^2},\n\\end{equation}\nand\n\\begin{equation}\n\\lim_{\\varepsilon\\searrow 0}\\frac{\\mathcal{I}_{I_{{\\mathbf x}_k}^{C_-}}}{\\varepsilon} = \\sum_{l \\in I_{{\\mathbf x}_k}^{C_-}}\\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}2 \\frac{\\phi_{l_3}^2}{(\\phi_{l_3}-\\phi_{l_2})(\\phi_{l_3}-\\phi_{l_1})^2}.\n\\end{equation}\n\nSummarizing, we have shown the following result.\n\\begin{theorem}\\label{thm::areaDerivative}\nFor ${\\mathbf x}_k \\in \\mathfrak T^-$ we have\n\\begin{align}\\label{eq_derivAreaTminus}\n d_k a = \\sum_{l \\in I_{{\\mathbf x}_k}^{A_+}} d_ka_l = \\sum_{l \\in I_{{\\mathbf x}_k}^{A_+}}(-1)\\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}{2\\phi_{l_2}\\phi_{l_3}}.\n\\end{align}\nFor ${\\mathbf x}_k \\in \\mathfrak T^+$ we have\n\\begin{align}\\label{eq_derivAreaTplus}\n d_k a = \\sum_{l \\in I_{{\\mathbf x}_k}^{A_-}} d_ka_l = \\sum_{l \\in I_{{\\mathbf x}_k}^{A_-}}\\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}{2\\phi_{l_2}\\phi_{l_3}}.\n\\end{align}\nFor ${\\mathbf x}_k \\in \\mathfrak S$ we have\n\\begin{equation}\\label{eq::derivativeArea}\n\\begin{aligned}\n d_k a =& \\sum_{l \\in C_{k}} d_k a_l\\\\=&\n \\sum_{l \\in I_{{\\mathbf x}_k}^{A_+}}\\frac{\\left|{\\textup{\\text{det}}}J_l\\right|\\;\\phi_{l_1} \\left[\\phi_{l_1}(\\phi_{l_2} + \\phi_{l_3})-2\\phi_{l_2}\\phi_{l_3} \\right]}{2(\\phi_{l_1} - \\phi_{l_2})^2(\\phi_{l_1} - \\phi_{l_3})^2} + \\sum_{l 
\\in I_{{\\mathbf x}_k}^{A_-}} (-1) \\frac{\\left|{\\textup{\\text{det}}}J_l\\right|\\;\\phi_{l_1} \\left[\\phi_{l_1}(\\phi_{l_2} + \\phi_{l_3}) - 2\\phi_{l_2}\\phi_{l_3} \\right]}{2(\\phi_{l_1} - \\phi_{l_2})^2(\\phi_{l_1} - \\phi_{l_3})^2} \\\\\n &+\\sum_{l \\in I_{{\\mathbf x}_k}^{B_+}} (-1) \\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}2 \\frac{\\phi_{l_2}^2}{(\\phi_{l_2}-\\phi_{l_3})(\\phi_{l_2}-\\phi_{l_1})^2} + \\sum_{l \\in I_{{\\mathbf x}_k}^{B_-}} \\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}2 \\frac{\\phi_{l_2}^2}{(\\phi_{l_2}-\\phi_{l_3})(\\phi_{l_2}-\\phi_{l_1})^2} \\\\\n &+\\sum_{l \\in I_{{\\mathbf x}_k}^{C_+}} (-1) \\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}2 \\frac{\\phi_{l_3}^2}{(\\phi_{l_3}-\\phi_{l_2})(\\phi_{l_3}-\\phi_{l_1})^2} + \\sum_{l \\in I_{{\\mathbf x}_k}^{C_-}}\\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}2 \\frac{\\phi_{l_3}^2}{(\\phi_{l_3}-\\phi_{l_2})(\\phi_{l_3}-\\phi_{l_1})^2}. \n\\end{aligned}\n\\end{equation}\n\\end{theorem}\n\n\\begin{remark}\\label{remark::area}\n\tThe corresponding computations for the denominators in \\eqref{eq::topDerivative}, \\textit{i.e.}\\, $|\\Omega(\\discreteOperator\\phi)\\triangle \\Omega(\\phi)|$, are closely related to the computations presented in this section for \\eqref{eq::areaChange}.\n\tDenoting, with $o=1$ for ${\\mathbf x}_k \\in \\mathfrak S$ and $o=2$ for ${\\mathbf x}_k \\in \\mathfrak T^- \\cup \\mathfrak T^+$,\n\t\\begin{align*}\n \\delta_{k,\\varepsilon} \\tilde a_l := \\int_{\\tau_l} | H(-\\discreteOperator\\phi) - H(-\\phi)| \\;\\mathrm{d} {\\mathbf x}, \\quad d_k \\tilde a_l := \\underset{\\varepsilon \\searrow 0}{\\mbox{lim}} \\frac{\\delta_{k,\\varepsilon} \\tilde a_l}{\\varepsilon^o},\n\t\\end{align*}\n\twe get that \n\t\\begin{align} \\label{eq_dkatilde}\n \\underset{\\varepsilon \\searrow 0}{\\mbox{lim }} \\frac{|\\Omega(\\discreteOperator\\phi) \\triangle \\Omega(\\phi)|}{\\varepsilon^o} = \\sum_{l\\in C_k} d_k \\tilde a_l =: d_k \\tilde a\n\t\\end{align}\n\twhere $d_k \\tilde a_l = |d_k a_l|$ with the formulas for $d_k a_l$ given in 
\\eqref{eq_derivAreaTminus}--\\eqref{eq::derivativeArea}. It is obvious that, for ${\\mathbf x}_k \\in \\mathfrak T^-$, $d_k a_l <0$ and for ${\\mathbf x}_k \\in \\mathfrak T^+$, $d_k a_l >0$. Moreover, note that, for ${\\mathbf x}_k \\in \\mathfrak S$, it holds $d_k a_l <0$ for all $l \\in C_k$. This yields that\n\t\\begin{align}\n \\frac{d_k a_l}{d_k \\tilde a_l} = \\begin{cases} - 1 & \\mbox{if }{\\mathbf x}_k \\in \\mathfrak S \\cup \\mathfrak T^-, \\\\\n \\; \\;\\, 1 & \\mbox{if } {\\mathbf x}_k \\in \\mathfrak T^+. \n \\end{cases}\n\t\\end{align}\n\\end{remark}\n\n\\begin{corollary} \\label{cor_dVol}\n From Theorem \\ref{thm::areaDerivative} and Remark \\ref{remark::area}, it follows that the numerical {topological-shape } derivative of the volume cost function $\\mbox{Vol}(\\phi) := |\\Omega(\\phi)|$ is given by\n \\begin{align*}\n d \\mbox{Vol}(\\phi)({\\mathbf x}_k) = \\begin{cases} - 1 & \\mbox{if }{\\mathbf x}_k \\in \\mathfrak S \\cup \\mathfrak T^-, \\\\\n \\; \\;\\, 1 & \\mbox{if } {\\mathbf x}_k \\in \\mathfrak T^+. \n \\end{cases}\n \\end{align*}\n\n\\end{corollary}\n\\subsection{Computation of the numerical {topological-shape } derivative via Lagrangian framework}\\label{sec::Lagrangian}\n\\newcommand{{\\uu}}{{\\uu}}\nNext, we consider the computation of the numerical {topological-shape } derivative of an optimization problem that is constrained by a discretized PDE. For that purpose, we set $c_1 =0$ in \\eqref{eq::ContinuousObjective} and $J(\\phi,\\uu) := g(\\Omega(\\phi),\\uu)$. The discretized problem reads\n\\begin{align} \\label{eq_discOptiProb_J}\n\\underset{\\phi}{\\mbox{min }} J(\\phi, \\uu)& \\\\\n\\mbox{s.t. } \\Amat_{\\phi} \\uu &= \\ff_{\\phi} \\label{eq_discOptiProb_constr}\n\\end{align}\nand we are interested in the sensitivity of $J$ when the level set function $\\phi$ representing the geometry is replaced by a perturbed level set function $\\phi_{\\varepsilon}=\\discreteOperator\\phi$. 
\n The perturbed Lagrangian for \\eqref{eq_discOptiProb_J}--\\eqref{eq_discOptiProb_constr} with respect to a perturbation of $\\phi$ reads \n\\begin{align} \\label{eq_defLagDiscr}\nL(\\varepsilon, \\uu, \\vv) =& J(\\phi_\\varepsilon, \\uu) + \\Amat_\\varepsilon\\uu \\cdot \\vv - \\ff_\\varepsilon \\cdot \\vv\n\\end{align}\nwhere we use the abbreviated notation $\\Amat_\\varepsilon := \\Amat_{\\phi_\\varepsilon}$, and $\\ff_\\varepsilon := \\ff_{\\phi_\\varepsilon}$. Moreover, for $\\varepsilon \\geq 0$, we define the perturbed state $\\uu_\\varepsilon$ as the solution to\n\\begin{align*}\n0 = \\partial_\\vv L(\\varepsilon, \\uu_\\varepsilon, \\vv),\n\\end{align*}\ni.e. $\\uu_\\varepsilon$ is the solution to\n\\begin{align} \\label{eq_defueps2}\n\\Amat_\\varepsilon \\uu_\\varepsilon = \\ff_\\varepsilon\n\\end{align}\nand the (unperturbed) adjoint state $\\pp$ as the solution to\n\\begin{align} \\label{eq_defAdjointDiscr}\n0 = \\partial_\\uu L(0, \\uu, \\pp),\n\\end{align}\nfor the state $\\uu$ given, i.e. $\\pp$ solves\n\\begin{align*}\n\\Amat^\\top \\pp = - \\partial_\\uu J(\\phi, \\uu).\n\\end{align*}\nNote that we use the notation $\\uu$ for $\\uu_{\\varepsilon = 0}$.\nThe numerical {topological-shape } derivative at the node ${\\mathbf x}_k$ can be computed as the limit\n\\begin{align*}\nd \\redPhiObjectiveFunction(\\phi)({\\mathbf x}_k) = \\underset{\\varepsilon \\searrow 0}{\\mbox{lim }} \\frac{1}{|\\Omega(\\phi_\\varepsilon) \\triangle \\Omega(\\phi)|} (J(\\phi_\\varepsilon, \\uu_\\varepsilon) - J(\\phi, \\uu) ).\n\\end{align*}\n\nWith the help of the Lagrangian \\eqref{eq_defLagDiscr}, we can rewrite the right hand side as\n\\begin{align*}\nJ(\\phi_\\varepsilon, \\uu_\\varepsilon) - J(\\phi,\\uu) = L(\\varepsilon, \\uu_\\varepsilon, \\pp) - L(0, {\\uu}, \\pp)\n\\end{align*}\nwhere we used that $\\uu_\\varepsilon$ solves \\eqref{eq_defueps2} for $\\varepsilon \\geq 0$. 
Following the approach used in \\cite{simplified}, we use the fundamental theorem of calculus as well as \\eqref{eq_defAdjointDiscr} to rewrite this as\n\\begin{align}\nJ(\\phi_\\varepsilon, \\uu_\\varepsilon) - J(\\phi,{\\uu}) =& \\int_0^1 [\\partial_\\uu L(\\varepsilon, {\\uu} + s(\\uu_\\varepsilon - {\\uu}), \\pp) - \\partial_\\uu L(\\varepsilon, \\uu, \\pp)](\\uu_\\varepsilon - {\\uu}) \\mbox ds \\\\\n&+[ \\partial_\\uu L(\\varepsilon, \\uu, \\pp) - \\partial_\\uu L(0, \\uu, \\pp)](\\uu_\\varepsilon - \\uu) \\\\\n&+ L(\\varepsilon, \\uu, \\pp) - L(0, \\uu, \\pp).\n\\end{align}\n\nThus the numerical {topological-shape } derivative can be obtained as the sum of three limits,\n\\begin{align*}\nd \\redPhiObjectiveFunction(\\phi)({\\mathbf x}_k) = R_1(\\uu, \\pp) + R_2(\\uu, \\pp) + R_0(\\uu, \\pp)\n\\end{align*}\nwhere\n\\begin{align*}\nR_1(\\uu, \\pp) :=& \\underset{\\varepsilon \\searrow 0}{\\mbox{lim }} \\frac{1}{|\\Omega(\\phi_\\varepsilon) \\triangle \\Omega(\\phi)|} \\int_0^1 [\\partial_\\uu L(\\varepsilon, {\\uu} + s(\\uu_\\varepsilon - {\\uu}), \\pp) - \\partial_\\uu L(\\varepsilon, \\uu, \\pp)](\\uu_\\varepsilon - {\\uu}) \\mbox ds, \\\\\nR_2(\\uu, \\pp) :=& \\underset{\\varepsilon \\searrow 0}{\\mbox{lim }} \\frac{1}{|\\Omega(\\phi_\\varepsilon) \\triangle \\Omega(\\phi)|}[ \\partial_\\uu L(\\varepsilon, \\uu, \\pp) - \\partial_\\uu L(0, \\uu, \\pp)](\\uu_\\varepsilon - \\uu), \\\\\nR_0(\\uu, \\pp) := & \\underset{\\varepsilon \\searrow 0}{\\mbox{lim }} \\frac{1}{|\\Omega(\\phi_\\varepsilon) \\triangle \\Omega(\\phi)|} [ L(\\varepsilon, \\uu, \\pp) - L(0, \\uu, \\pp)].\n\\end{align*}\n\nFor $J(\\phi_\\varepsilon, \\uu) = c_2 (\\uu - \\hat \\uu) \\tilde \\MM_\\varepsilon (\\uu - \\hat \\uu)$, where $\\tilde \\MM_\\varepsilon := \\tilde \\MM_{\\phi_\\varepsilon}$ represents the matrix $\\tilde \\MM$ defined in \\eqref{eq_defMtilde} with $\\Omega = \\Omega(\\phi_\\varepsilon)$, we get\n\\begin{align*}\nR_1(\\uu, \\pp) =& c_2 \\underset{\\varepsilon \\searrow 
0}{\\mbox{lim }} \\frac{1}{|\\Omega(\\phi_\\varepsilon) \\triangle \\Omega(\\phi)|} (\\uu_\\varepsilon - {\\uu})^\\top \\tilde \\MM_\\varepsilon (\\uu_\\varepsilon - {\\uu}).\n\\end{align*}\nMoreover, \n\\begin{align} \\label{eq_R2discr}\nR_2(\\uu, \\pp) =& \\underset{\\varepsilon \\searrow 0}{\\mbox{lim }} \\frac{1}{|\\Omega(\\phi_\\varepsilon) \\triangle \\Omega(\\phi)|} \\left[ 2 c_2 (\\tilde \\MM_\\varepsilon - \\tilde \\MM) (\\uu - \\hat \\uu) \\cdot (\\uu_\\varepsilon - {\\uu}) + (\\Amat_\\varepsilon - \\Amat) (\\uu_\\varepsilon - {\\uu}) \\cdot \\pp \\right],\n\\end{align}\nand\n\\begin{align} \\label{eq_R0discr}\nR_0(\\uu, \\pp) =& \\underset{\\varepsilon \\searrow 0}{\\mbox{lim }} \\frac{1}{|\\Omega(\\phi_\\varepsilon) \\triangle \\Omega(\\phi)|} \\left[ c_2({\\uu} - \\hat \\uu)^\\top(\\tilde \\MM_\\varepsilon - \\tilde \\MM) ({\\uu} - \\hat \\uu) + \\pp^\\top (\\Amat_\\varepsilon - \\Amat) \\uu - (\\ff_\\varepsilon - \\ff)\\cdot \\pp \\right].\n\\end{align}\n\n\\begin{lemma}\\label{lem_uepsu0} There exist constants $c>0, \\hat \\varepsilon >0$ such that for all $\\varepsilon \\in (0,\\hat\\varepsilon)$\n \\begin{align*}\n \\| \\uu_\\varepsilon - {\\uu} \\| \\leq c \\, \\varepsilon^o.\n \\end{align*}\n Here, $o=1$ in the case of a shape perturbation and $o=2$ in the case of a topological perturbation.\n\\end{lemma}\n\\begin{proof} \n Subtracting \\eqref{eq_defueps2} for $\\varepsilon = 0$ from the same equation with $\\varepsilon >0$, we get\n \\begin{align*}\n \\Amat_\\varepsilon (\\uu_\\varepsilon - \\uu) &= \\ff_\\varepsilon - \\ff - (\\Amat_\\varepsilon - \\Amat) \\uu \n \\end{align*}\n and thus, by the ellipticity of the bilinear form corresponding to $\\Amat_\\varepsilon$ and the triangle inequality, there is a constant $c>0$ such that for all $\\varepsilon>0$ small enough\n \\begin{align}\n \\| \\uu_\\varepsilon - \\uu \\| \\leq c ( \\| \\ff_\\varepsilon - \\ff \\| + \\| \\Amat_\\varepsilon - \\Amat \\| \\| \\uu \\| ).\n \\end{align}\n For the 
difference between the perturbed and unperturbed right hand sides we have\n \\begin{align*}\n |(\\ff_\\varepsilon - \\ff)_i| \\leq& |(f_1 - f_2)| \\int_{\\Omega(\\phi_\\varepsilon) \\triangle \\Omega(\\phi)} |\\varphi_i(x)| \\; \\mbox dx \\\\\n |(\\Amat_\\varepsilon - \\Amat)_{i,j}| \\leq & \\int_{\\Omega(\\phi_\\varepsilon) \\triangle \\Omega(\\phi)} \\bigg( |(\\lambda_1 - \\lambda_2)| |\\nabla \\varphi_j(x)| |\\nabla \\varphi_i(x)| + |(\\alpha_1 - \\alpha_2)| |\\varphi_j(x)| |\\varphi_i(x)| \\bigg)\\; \\mbox dx.\n \\end{align*}\n The result follows from the boundedness of $|\\varphi_i(x)|$ and $|\\nabla \\varphi_i(x)|$ together with \\eqref{eq_dkatilde} which implies the existence of $c>0$ such that $|\\Omega(\\phi_\\varepsilon) \\triangle \\Omega(\\phi)| \\leq c \\varepsilon^o$ (cf. Remark \\ref{remark::area}). \n\\end{proof}\nFrom Lemma \\ref{lem_uepsu0} it follows that the terms $R_1(\\uu, \\pp)$ and $R_2(\\uu, \\pp)$ vanish. We remark that this is in contrast to the continuous case, where asymptotic analysis shows that $R_2$ does not vanish. We will address this issue in more detail in Section \\ref{sec::connection}. 
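The estimate of the lemma above is, at its core, a standard perturbation bound for linear systems, and it is easy to observe numerically. The following Python sketch uses a random symmetric positive definite stand-in for the system matrix and first-order (o = 1) data perturbations; all names are illustrative and unrelated to the actual finite element assembly:

```python
import numpy as np

# Perturbation bound ||u_eps - u|| = O(eps) for a generic elliptic system
# (A + eps*dA) u_eps = f + eps*df.  A, dA, f, df are random stand-ins.
rng = np.random.default_rng(1)
n = 6
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)          # SPD, hence uniformly elliptic
f = rng.standard_normal(n)
dA = rng.standard_normal((n, n))
df = rng.standard_normal(n)

u = np.linalg.solve(A, f)
errs = []
for eps in [1e-2, 1e-3, 1e-4]:
    u_eps = np.linalg.solve(A + eps * dA, f + eps * df)
    errs.append(np.linalg.norm(u_eps - u))
    print(f"eps = {eps:.0e}   ||u_eps - u|| = {errs[-1]:.3e}")
```

Each reduction of eps by a factor of ten reduces the error by roughly the same factor, in line with the first-order bound.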
\nThus, in the discrete setting we obtain $d \\redPhiObjectiveFunction(\\phi)({\\mathbf x}_k)=R_0(\\uu, \\pp)$, i.e.,\n\\begin{align}\\label{eq::derivativeTracking}\nd \\redPhiObjectiveFunction(\\phi)({\\mathbf x}_k) &= \n\\frac{\\mathbf p^\\top(d_k \\mathbf A \\,{\\uu} - d_k \\mathbf f)\n\t+c_2 ({\\uu} - \\hat \\uu)^\\top d_k\\tilde \\MM ({\\uu} - \\hat \\uu)\n}{ d_k\\tilde a} \n\\end{align}\nwhere%\n\\begin{align} \\label{eq_derivativeLimits}\n\\begin{aligned}\nd_k \\Amat &= \\lim_{\\varepsilon\\searrow 0} \\frac{\\Amat_\\varepsilon- \\Amat}{\\varepsilon^o}, &&&\nd_k \\tilde \\MM& = \\lim_{\\varepsilon\\searrow 0} \\frac{\\tilde\\MM_\\varepsilon- \\tilde\\MM}{\\varepsilon^o}, \\\\\nd_k \\ff &= \\lim_{\\varepsilon\\searrow 0} \\frac{\\ff_\\varepsilon-\\ff}{\\varepsilon^o}, &&&\nd_k \\tilde a &= \\lim_{\\varepsilon\\searrow 0} \\frac{|\\Omega(\\phi_\\varepsilon)\\triangle \\Omega(\\phi)|}{\\varepsilon^o}, \n\\end{aligned}\n\\end{align}\nwith $o=1$ for ${\\mathbf x}_k\\in\\mathfrak S$ and $o =2$ for ${\\mathbf x}_k\\in\\mathfrak T^- \\cup \\mathfrak T^+$. To obtain \\eqref{eq::derivativeTracking}, we divided both numerator and denominator of \\eqref{eq_R0discr} by $\\varepsilon^o$ and used that the limit of the quotient coincides with the quotient of the limits provided both limits exist and the limit in the denominator does not vanish. 
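The sensitivity formula above has the familiar adjoint structure: one state solve, one adjoint solve, and an inner product with the derivatives of the system matrices. As a sanity check, the following Python sketch applies the same structure to a small generic system with random stand-in matrices (hypothetical surrogates for the stiffness, load and tracking quantities and their directional limits, not the actual finite element assembly) and compares the result with a central difference of the reduced objective:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)                  # stand-in stiffness matrix (SPD)
f = rng.standard_normal(n)                   # stand-in load vector
M = np.eye(n)                                # stand-in tracking matrix
u_hat = rng.standard_normal(n)
c2 = 1.0
dA = rng.standard_normal((n, n)); dA = 0.5 * (dA + dA.T)   # stand-in for d_k A
df = rng.standard_normal(n)                                 # stand-in for d_k f
dM = np.zeros((n, n)); dM[0, 0] = -1.0                      # stand-in for d_k M

def J(eps):
    """Reduced tracking objective along the perturbation direction."""
    u_eps = np.linalg.solve(A + eps * dA, f + eps * df)
    d = u_eps - u_hat
    return c2 * d @ (M + eps * dM) @ d

u = np.linalg.solve(A, f)                              # state
p = np.linalg.solve(A.T, -2.0 * c2 * M @ (u - u_hat))  # adjoint
dJ_adj = p @ (dA @ u - df) + c2 * (u - u_hat) @ dM @ (u - u_hat)

eps = 1e-6
dJ_fd = (J(eps) - J(-eps)) / (2.0 * eps)
print(dJ_adj, dJ_fd)   # the two values agree up to finite difference error
```

Dividing this directional derivative by the corresponding limit of the symmetric-difference area reproduces the quotient structure of the formula above.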
Next we state the numerical {topological-shape } derivative of problem \\eqref{eq::ContinuousProblem} for arbitrary constant weights $c_1, c_2 \\geq 0$ in the cost function \\eqref{eq::ContinuousObjective}.\n\n\\begin{theorem}[Numerical {topological-shape } derivative] \\label{thm_numTopShapeDer_formula}\n\tFor $l=1,\\dots,N$, let $\\mathbf u_l = [u_{l_1},u_{l_2},u_{l_3}]^\\top$ and $\\mathbf p_l = [p_{l_1},p_{l_2},p_{l_3}]^\\top$ be the nodal values for element $\\tau_{l}$ of the solution and the adjoint, and \n\t\\begin{equation*}\n\t\\mathbf k_{0,l}[i,j] = \\left(J_l^{-1}\\nabla_\\xi \\psi_j\\right)^\\top \\left(J_l^{-1}\\nabla_\\xi \\psi_i\\right), \\quad i,j =1,\\dots,3.\n\t\\end{equation*}\n\tMoreover, $u_k = u({\\mathbf x}_k)$, $p_k = p({\\mathbf x}_k)$, and $\\hat u_k = \\hat u({\\mathbf x}_k)$. \n\tFor ${\\mathbf x}_k \\in \\mathfrak T^-$ the numerical topological derivative reads\n\t\\begin{align}\\label{eq::formulaTopologicalDerivativePlus}\n\td\\redPhiObjectiveFunction(\\phi)({\\mathbf x}_k) = -c_1\n\t-\\Delta \\lambda\t\\frac{\\sum_{l \\in I_{\\pos_k}} \\frac{\\mathbf p_{l}^\\top \\mathbf k_{0,l} \\mathbf u_{l}\\left|{\\textup{\\text{det}}}J_l\\right|}{\\phi_{l_2}\\phi_{l_3}}}{\\sum_{l \\in I_{\\pos_k}}\\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}{\\phi_{l_2}\\phi_{l_3}} }\t-\\Delta \\alpha p_ku_k\t+\\Delta fp_k\t- c_2\\Delta \\tilde\\alpha(u_k-\\hat u_k)^2,\t\n\t\\end{align}\n\twhereas for ${\\mathbf x}_k \\in \\mathfrak T^+$ we have\n\t\\begin{align}\\label{eq::formulaTopologicalDerivativeMinus}\n\td\\redPhiObjectiveFunction(\\phi)({\\mathbf x}_k) = c_1 +\n\t\\Delta \\lambda\t\\frac{\\sum_{l \\in I_{\\pos_k}} \\frac{\\mathbf p_{l}^\\top \\mathbf k_{0,l} \\mathbf u_{l}\\left|{\\textup{\\text{det}}}J_l\\right|}{\\phi_{l_2}\\phi_{l_3}}}{\\sum_{l \\in I_{\\pos_k}}\\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}{\\phi_{l_2}\\phi_{l_3}} }\t+\\Delta \\alpha p_ku_k\t-\\Delta f p_k + c_2\\Delta \\tilde\\alpha(u_k-\\hat u_k)^2,\n\t\\end{align}\n\twith 
\n\t\\begin{align*}\n\t\\Delta \\alpha = \\alpha_1-\\alpha_2, \\quad\n\t\\Delta \\lambda = \\lambda_1-\\lambda_2, \\quad\n\t\\Delta f = f_1-f_2, \\quad\n\t\\Delta \\tilde\\alpha = \\tilde\\alpha_1-\\tilde\\alpha_2 .\n\t\\end{align*}\n\tFor ${\\mathbf x}_k \\in \\mathfrak S$ the numerical shape derivative reads\n\t\\begin{equation}\\label{eq::formulaShapeDerivative}\n\t\\begin{aligned}\n\td\\redPhiObjectiveFunction(\\phi)({\\mathbf x}_k) = -c_1 + & \n\t\\Delta \\lambda\n\t\\frac{\\sum_{l \\in C_k} \\mathbf p_{l}^\\top \\mathbf k_{0,l} \\mathbf u_{l} \\,d_k a_{l}}{d_k\\tilde a}\n\t+\\Delta \\alpha\\frac{\\sum_{l \\in C_k} \\mathbf p_{l}^\\top d_k\\mathbf m^I_{l} \\,\\mathbf u_{l}}{d_k\\tilde a} \n\t \\\\\n\t & -\\Delta f \\frac{\\sum_{l \\in C_k} \\mathbf p_{l}^\\top d_k\\mathbf f_l^I }{d_k\\tilde a}\n\t+c_2\\Delta \\tilde\\alpha \\frac{\\sum_{l \\in C_k} (\\mathbf u_{l}-\\hat{\\mathbf u}_{l})^\\top d_k\\mathbf m^I_{l} \\,(\\mathbf u_{l}-\\hat{\\mathbf u}_{l})}{d_k\\tilde a},\n\t\\end{aligned}\n\t\\end{equation}\n\twhere the entries of the element matrix $d_k\\mathbf m_{l}^I$ and of the element vector $d_k\\mathbf f_{l}^I$ are dependent on the local cut situation (cases $I=A^\\pm$, $B^\\pm$, $C^\\pm$) and are given in Appendix \\ref{app::shapeDerivative}. The values $d_k \\tilde a$ can be computed by \\eqref{eq::derivativeArea} considering Remark \\ref{remark::area}.\n\t\\color{black}\n\\end{theorem}\n\\begin{proof}\n\tWe evaluate \\eqref{eq::derivativeTracking} for ${\\mathbf x}_k \\in \\mathfrak T^-$ and $\\discreteOperator = T^{-\\rightarrow+}_{k,\\varepsilon}$. Thus, $o=2$ in \\eqref{eq_derivativeLimits}. 
We note that \n\t\\begin{equation}\n\t\\mathbf p^\\top d_k \\mathbf A \\,{\\uu} = \\sum_{l=1}^N \\mathbf p_{l}^\\top (d_k\\mathbf m_{l} +d_k\\mathbf k_{l}) \\mathbf u_{l}.\n\t\\end{equation}\n\tWe have for $l\\in I_{\\pos_k}$\n\t\\begin{align*}\n\t\\mathbf k_{l}(\\phi_{\\varepsilon})-\\mathbf k_l(\\phi) &= \\mathbf k_{0,l} \\left|{\\textup{\\text{det}}}J_l\\right|\\bigg( \\int_{\\xi_1=0}^1\\int_{\\xi_2=0}^{1-\\xi_1} \\lambda_1 H(-\\phi_{\\varepsilon}\\circ\\Phi_l) +\\lambda_2 H(\\phi_{\\varepsilon}\\circ\\Phi_l) \\;\\mathrm{d} \\xi_2 \\;\\mathrm{d} \\xi_1 \\\\ &\\quad-\\int_{\\xi_1=0}^1\\int_{\\xi_2=0}^{1-\\xi_1} \\lambda_1 H(-\\phi\\circ\\Phi_l) +\\lambda_2 H(\\phi\\circ\\Phi_l) \\;\\mathrm{d} \\xi_2 \\;\\mathrm{d} \\xi_1\\bigg) \\\\\n\t&=\\mathbf k_{0,l} \\left|{\\textup{\\text{det}}}J_l\\right|\\bigg( \\lambda_1\\int_{\\xi_1=0}^1\\int_{\\xi_2=0}^{1-\\xi_1} (H(-\\phi_{\\varepsilon}\\circ\\Phi_l) -H(-\\phi\\circ\\Phi_l)) \\;\\mathrm{d} \\xi_2 \\;\\mathrm{d} \\xi_1 \\\\ &\\quad+\\lambda_2\\int_{\\xi_1=0}^1\\int_{\\xi_2=0}^{1-\\xi_1} (H(\\phi_\\varepsilon\\circ\\Phi_l) - H(\\phi\\circ\\Phi_l)) \\;\\mathrm{d} \\xi_2 \\;\\mathrm{d} \\xi_1\\bigg)\n\t\\\\\n\t&= \\mathbf k_{0,l} (\\lambda_1 - \\lambda_2)\\delta_{k,\\eps} a_l\n\t\\end{align*}\n\tdue to \\eqref{eq::areaChangeB} because $|\\Omega_\\varepsilon|-|\\Omega| = -(|D \\setminus \\Omega_\\varepsilon|-|D \\setminus \\Omega|)$\n\tand with \\eqref{eq::areaChangeTopAplus} we obtain\n\t\\begin{align}\\label{eq::termA}\n\td_k\\mathbf k_l = \\underset{\\varepsilon \\searrow 0}{\\mbox{lim }} \\frac{\\mathbf k_l(\\phi_\\varepsilon) - \\mathbf k_l(\\phi)}{\\varepsilon^2}\n\t&= -\\Delta\\lambda\\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}{2\\phi_{l_2}\\phi_{l_3}} \\mathbf k_{0,l}.\n\t\\end{align}\n\tDue to \n\t\\begin{equation*}\n\t\\int_{\\xi_1=0}^{l_1} \\int_{\\xi_2=0}^{l_2\\left(1-\\frac{\\xi_1}{l_1}\\right)} \n\t\\xi_1^a \\xi_2^b \\;\\mathrm{d} \\xi_2 \\;\\mathrm{d} \\xi_1 = 
\\frac{\\epsilon^{a+b+2}}{\\phi_{l_2}^{a+1}\\phi_{l_3}^{b+1} + O(\\epsilon^{a+b+2})}\n\t\\end{equation*}\n\tfor some $a,b \\in \\mathbb{N}$, we have\n\t\\begin{equation}\\label{eq::limitChangeMass}\n\t\\begin{aligned}\n\td_k\\mathbf m_l = \\lim_{\\varepsilon\\searrow 0} \\frac{\\mathbf m_{l}(\\phi_{\\varepsilon})-\\mathbf m_l(\\phi)}{\\varepsilon^2} &= - \\lim_{\\varepsilon\\searrow 0}\t \\frac{\\Delta\\alpha}{\\varepsilon^2}\n\t\\int_{\\xi_1=0}^{l_1} \\int_{\\xi_2=0}^{l_2\\left(1-\\frac{\\xi_1}{l_1}\\right)} \n\t\\psi_i(\\xi)\\psi_j(\\xi) \\left|{\\textup{\\text{det}}}J_l\\right|\\;\\mathrm{d} \\xi_2 \\;\\mathrm{d} \\xi_1 \\\\ &= \n\t\\begin{bmatrix}\n\t-\\frac{\\Delta\\alpha\\left|{\\textup{\\text{det}}}J_l\\right|}{2\\phi_{l_2}\\phi_{l_3}} & 0 & 0 \\\\\n\t0 & 0 & 0 \\\\\n\t0 & 0 & 0 \n\t\\end{bmatrix},\n\t\\end{aligned}\n\t\\end{equation}\n\tand conclude\n\t\\begin{align}\\label{eq::termB}\n\t\\sum_{l=1}^N \\mathbf p_{l}^\\top d_k\\mathbf m_{l} \\mathbf u_{l} = -\\Delta\\alpha p_k u_k\\sum_{l \\in I_{\\pos_k}}\t\\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}{2\\phi_{l_2}\\phi_{l_3}}.\n\t\\end{align}\n\tFurthermore, with \n\t\\begin{equation}\n\t\\begin{aligned}\\label{eq::limitChangeLoad}\n\td_k\\mathbf f_l =\\lim_{\\varepsilon\\searrow 0} \\frac{\\mathbf f_{l}(\\phi_{\\varepsilon})-\\mathbf f_l(\\phi)}{\\varepsilon^2} &= - \\lim_{\\varepsilon\\searrow 0}\t \\frac{\\Delta f}{\\varepsilon^2}\n\t\\int_{\\xi_1=0}^{l_1} \\int_{\\xi_2=0}^{l_2\\left(1-\\frac{\\xi_1}{l_1}\\right)} \n\t\\psi_i(\\xi) \\left|{\\textup{\\text{det}}}J_l\\right|\\;\\mathrm{d} \\xi_2 \\;\\mathrm{d} \\xi_1 \\\\ &= \n\t\\begin{bmatrix}\n\t-\\frac{\\Delta f\\left|{\\textup{\\text{det}}}J_l\\right|}{2\\phi_{l_2}\\phi_{l_3}} \\\\\n\t0 \\\\\n\t0 \n\t\\end{bmatrix},\n\t\\end{aligned}\n\t\\end{equation}\n\tit follows that\n\t\\begin{equation}\\label{eq::termC}\n\t \\mathbf p^\\top d_k\\mathbf f = \\sum_{l=1}^N \\mathbf p_{l}^\\top d_k\\mathbf f_{l} = -\\Delta f p_k \\sum_{l \\in 
I_{\\pos_k}}\t\\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}{2\\phi_{l_2}\\phi_{l_3}}.\n\t\\end{equation}\n\tAnalogously to \\eqref{eq::limitChangeMass} we have\n\t\\begin{equation}\\label{eq::limitChangeTracking}\n\t\\begin{aligned}\n\td_k\\tilde{\\mathbf m}_l = \\lim_{\\varepsilon\\searrow 0} \\frac{\\tilde{\\mathbf m}_{l}(\\phi_{\\varepsilon})-\\tilde{\\mathbf m}_l(\\phi)}{\\varepsilon^2} = \n\t\\begin{bmatrix}\n\t-\\frac{\\Delta\\tilde\\alpha\\left|{\\textup{\\text{det}}}J_l\\right|}{2\\phi_{l_2}\\phi_{l_3}} & 0 & 0 \\\\\n\t0 & 0 & 0 \\\\\n\t0 & 0 & 0 \n\t\\end{bmatrix},\n\t\\end{aligned}\n\t\\end{equation}\n\tand obtain\n\t\\begin{equation}\\label{eq::termD}\n\t\\begin{aligned}\n\t({\\uu} - \\hat \\uu)^\\top d_k\\tilde \\MM ({\\uu} - \\hat \\uu) &= \\sum_{l \\in I_{\\pos_k}} (\\uu_{0,l} - \\hat \\uu_l)^\\top d_k\\tilde {\\mathbf m}_l (\\uu_{0,l} - \\hat \\uu_l) \\\\ &= -\\Delta \\tilde\\alpha(u_k-\\hat u_k)^2 \\sum_{l \\in I_{\\pos_k}}\t\\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}{2\\phi_{l_2}\\phi_{l_3}}.\t\n\t\\end{aligned}\n\t\\end{equation}\n\tIn the present situation, $d_k\\tilde a$ is given by the absolute value of \\eqref{eq::areaChangeTopAplus} (see also Remark \\ref{remark::area}),\n\t\\begin{equation}\\label{eq::changeAreaProof}\n\td_k\\tilde a = \\sum_{l \\in I_{{\\mathbf x}_k}^{A_+}}\\frac{\\left|{\\textup{\\text{det}}}J_l\\right|}{2\\phi_{l_2}\\phi_{l_3}}.\n\t\\end{equation}\n\tBy inserting \\eqref{eq::termA}, \\eqref{eq::termB}, \\eqref{eq::termC}, \\eqref{eq::termD}, and \\eqref{eq::changeAreaProof} in \\eqref{eq::derivativeTracking}, together with Corollary \\ref{cor_dVol}, we obtain the sought expression \\eqref{eq::formulaTopologicalDerivativePlus}. 
Formula \\eqref{eq::formulaTopologicalDerivativeMinus} can be obtained in an analogous way as \\eqref{eq::formulaTopologicalDerivativePlus}.\n\t\n\tThe formula in \\eqref{eq::formulaShapeDerivative} follows directly from \\eqref{eq::derivativeTracking} together with Corollary \\ref{cor_dVol}. The values of $d_k \\mathbf m^I_l$ and $d_k \\mathbf f^I_l$ for all possible cut situations $I \\in \\{A^+, A^-, B^+, B^-, C^+, C^-\\}$ are given in Appendix \\ref{app::shapeDerivative} and were computed using symbolic computer algebra tools. A mathematical derivation of these terms is omitted here for brevity.\n\\end{proof}\n\n\\subsection{Verification}\nThe implementation of the {topological-shape } derivative is verified by comparing the computed sensitivity values against numerical values obtained by three different approaches. These are (i) a finite difference test, (ii) an application of the complex step derivative \\cite{martins2003complex} and (iii) a test based on hyper-dual numbers developed in \\cite{fike2011development}. 
All tests are conducted for a fixed configuration.\n\nWe recall the definition of the {topological-shape } derivative \\eqref{eq::topDerivative} at a node ${\\mathbf x}_k$ of the mesh\n\\begin{align} \\label{eq_dJ_lim}\n d\\redPhiObjectiveFunction(\\phi)({\\mathbf x}_k) = \\lim_{\\varepsilon\\searrow 0}\n \\delta \\redPhiObjectiveFunction_{\\varepsilon}(\\phi)({\\mathbf x}_k) \\quad \\mbox{ with } \\quad\n \\delta \\redPhiObjectiveFunction_{\\varepsilon}(\\phi)({\\mathbf x}_k) :=\n\\frac{\\redPhiObjectiveFunction({O_{k,\\varepsilon}} \\phi)-\\redPhiObjectiveFunction(\\phi)}\n{|\\Omega({O_{k,\\varepsilon}}\\phi) \\triangle \\Omega(\\phi)|}\n\\end{align}\nwhere $O_{k,\\varepsilon}$ represents the operator $T^{-\\rightarrow+}_{k,\\varepsilon}$, $T^{+\\rightarrow-}_{k,\\varepsilon}$, $S_{k,\\varepsilon}$ depending on whether the node ${\\mathbf x}_k$ is in $\\mathfrak T^-$, $\\mathfrak T^+$ or $\\mathfrak S$, respectively.\n\n\\subsubsection{Finite difference test}\nFor the finite difference (FD) test, we compute the errors\n\\begin{align} \\label{eq_def_errors_eSeT}\ne_S^{FD}(\\varepsilon) = \\sqrt{\\sum_{{\\mathbf x}_k \\in \\mathfrak S} (\\delta \\redPhiObjectiveFunction_{\\varepsilon}(\\phi)({\\mathbf x}_k) - d\\redPhiObjectiveFunction(\\phi)({\\mathbf x}_k))^2}, \\quad\ne_T^{FD}(\\varepsilon) = \\sqrt{\\sum_{{\\mathbf x}_k \\in \\mathfrak T^- \\cup \\mathfrak T^+} (\\delta \\redPhiObjectiveFunction_{\\varepsilon}(\\phi)({\\mathbf x}_k) - d\\redPhiObjectiveFunction(\\phi)({\\mathbf x}_k))^2}\n\\end{align}\nfor a decreasing sequence of values for $\\varepsilon$. \nThe results are shown in Figure \\ref{fig:verify}(a). We observe convergence of order $\\varepsilon$ up to a point where the cancellation error dominates. 
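The observed behaviour, first-order decay of the finite difference error followed by a cancellation-dominated regime, can be reproduced on a scalar toy function. The sketch below is purely illustrative: J is a hypothetical smooth stand-in for the perturbed objective, not the discretized problem:

```python
import numpy as np

def J(eps):
    return np.exp(eps)       # toy objective with J(0) = 1 and J'(0) = 1

d_exact = 1.0
errors = {}
for eps in [1e-1, 1e-3, 1e-5, 1e-8, 1e-12]:
    d_fd = (J(eps) - J(0.0)) / eps       # forward difference quotient
    errors[eps] = abs(d_fd - d_exact)
    print(f"eps = {eps:.0e}   error = {errors[eps]:.2e}")
# the error decays like eps/2 until, for very small eps, the subtraction
# J(eps) - J(0) loses significant digits and the error grows again
```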
\n\\begin{figure}\n \\begin{tabular}{ccc}\n \\includegraphics[width=0.33\\linewidth]{verify-derivative-finite} & \\includegraphics[width=0.33\\linewidth]{verify-derivative-imag} &\n \\includegraphics[width=0.33\\linewidth]{verify_derivative_hyper} \\\\\n (a) & (b) & (c) \n \\end{tabular}\n\\caption{(a) Results of the finite difference test. (b) Results obtained with the complex step derivative. (c) Results obtained with hyper-dual numbers.}\n \\label{fig:verify}\n\\end{figure}\n\n\n\\subsubsection{Complex step derivative test}\nIn order to overcome this drawback of the finite difference test, we next consider a test based on the complex step (CS) derivative \\cite{martins2003complex}. For that purpose, using Remark \\ref{remark::area}, let us first rewrite \\eqref{eq_dJ_lim} as\n\\begin{align} \\label{eq_dJ_lim2}\n d\\redPhiObjectiveFunction(\\phi)({\\mathbf x}_k) = \\frac{\\lim_{\\varepsilon\\searrow 0}\\frac{\\redPhiObjectiveFunction({O_{k,\\varepsilon}} \\phi)-\\redPhiObjectiveFunction(\\phi)}\n{\\varepsilon^o}}{\\lim_{\\varepsilon\\searrow 0}\\frac{|\\Omega({O_{k,\\varepsilon}}\\phi) \\triangle \\Omega(\\phi)|}{\\varepsilon^o}}\n =\\frac{1}{d_k \\tilde a} \\lim_{\\varepsilon\\searrow 0}\\frac{\\redPhiObjectiveFunction({O_{k,\\varepsilon}} \\phi)-\\redPhiObjectiveFunction(\\phi)}\n{\\varepsilon^o},\n\\end{align}\nwhere $o=1$ if ${\\mathbf x}_k \\in \\mathfrak S$ and $o=2$ if ${\\mathbf x}_k \\in \\mathfrak T^- \\cup \\mathfrak T^+$. 
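The mechanism behind the complex-step derivative is easiest to see on a scalar toy function (a hypothetical stand-in, unrelated to the discretized objective): evaluating at a complex argument delivers the derivative in the imaginary part without any subtraction, so no cancellation occurs even for extremely small step sizes:

```python
import numpy as np

def f(x):
    return np.exp(x) * np.sin(x)                  # toy stand-in objective

def df(x):
    return np.exp(x) * (np.sin(x) + np.cos(x))    # exact derivative

x0 = 0.5
cs_err, fd_err = {}, {}
for h in [1e-4, 1e-8, 1e-12]:
    d_cs = np.imag(f(x0 + 1j * h)) / h            # complex step: no subtraction
    d_fd = (f(x0 + h) - f(x0)) / h                # forward difference: cancels
    cs_err[h] = abs(d_cs - df(x0))
    fd_err[h] = abs(d_fd - df(x0))
    print(f"h = {h:.0e}   CS error = {cs_err[h]:.1e}   FD error = {fd_err[h]:.1e}")
```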
Moreover,\nassuming a higher order expansion of the form\n\\begin{align} \\label{eq_expansion_op3}\n \\redPhiObjectiveFunction(O_{k,\\varepsilon}\\phi) = \\redPhiObjectiveFunction(\\phi) \n + \\varepsilon^o \\, d_k \\tilde a \\, d \\redPhiObjectiveFunction(\\phi)({\\mathbf x}_k)\n + \\varepsilon^{o+1} \\, d_k \\tilde a \\, d^2 \\redPhiObjectiveFunction(\\phi)({\\mathbf x}_k)\n + \\varepsilon^{o+2} \\, d_k \\tilde a \\, d^3 \\redPhiObjectiveFunction(\\phi)({\\mathbf x}_k) + \\mathfrak{o}(\\varepsilon^{o+2})\n\\end{align}\nwith some higher order sensitivities $\nd^2 \\redPhiObjectiveFunction(\\phi)({\\mathbf x}_k)$, $d^3 \\redPhiObjectiveFunction(\\phi)({\\mathbf x}_k)$ and assuming that \\eqref{eq_expansion_op3} also holds for complex-valued $\\varepsilon$, we can follow the idea of the complex step derivative \\cite{martins2003complex}: Setting $\\varepsilon = ih$ in \\eqref{eq_expansion_op3} with $h>0$ and $i$ the complex unit yields\n\\begin{align}\n d \\redPhiObjectiveFunction(\\phi)({\\mathbf x}_k) = \\frac{\\text{Im}(\\redPhiObjectiveFunction(O_{k,ih}\\phi))}{h \\, d_k \\tilde a} + \\mathcal O(h^2)\n\\end{align}\nin the case $o=1$ where ${\\mathbf x}_k \\in \\mathfrak S$, and \n\\begin{align*}\n d \\redPhiObjectiveFunction(\\phi)({\\mathbf x}_k) = \\frac{\\text{Re}(\\redPhiObjectiveFunction(O_{k,ih}\\phi)-\\redPhiObjectiveFunction(\\phi))}{-h^2 \\, d_k \\tilde a} + \\mathcal O(h^2)\n\\end{align*}\nin the case $o=2$ where ${\\mathbf x}_k \\in \\mathfrak T^- \\cup \\mathfrak T^+$. 
This means\n\\begin{align*}\n d \\redPhiObjectiveFunction(\\phi)({\\mathbf x}_k) = \\delta \\redPhiObjectiveFunction_h^{CS}(\\phi)({\\mathbf x}_k) + \\mathcal O(h^2)\n\\end{align*}\nwith\n\\begin{align} \\label{eq_deltaJh_CS}\n \\delta \\redPhiObjectiveFunction_h^{CS}(\\phi)({\\mathbf x}_k) := \\begin{cases}\n \\frac{\\text{Re}(\\redPhiObjectiveFunction(T^{-\\rightarrow+}_{k, ih} \\phi)-\\redPhiObjectiveFunction(\\phi))}{-h^2 \\, d_k \\tilde a}, & {\\mathbf x}_k \\in \\mathfrak T^- ,\\\\\n \\frac{\\text{Re}(\\redPhiObjectiveFunction(T^{+\\rightarrow-}_{k,ih} \\phi)-\\redPhiObjectiveFunction(\\phi))}{-h^2 \\, d_k \\tilde a}, & {\\mathbf x}_k \\in \\mathfrak T^+ ,\\\\\n \\frac{\\text{Im}(\\redPhiObjectiveFunction( S_{k,ih} \\phi))}{h \\, d_k \\tilde a}, & {\\mathbf x}_k \\in \\mathfrak S.\n \\end{cases}\n\\end{align}\nAnalogously to \\eqref{eq_def_errors_eSeT}, we define the summed errors $e_S^{CS}(h)$ and $e_T^{CS}(h)$ by just replacing $\\delta \\redPhiObjectiveFunction_\\varepsilon(\\phi)({\\mathbf x}_k)$ by $\\delta \\redPhiObjectiveFunction_h^{CS}(\\phi)({\\mathbf x}_k)$ defined above. Figure \\ref{fig:verify}(b) shows the errors $e_S^{CS}$ and $e_T^{CS}$ for a positive, decreasing sequence of $h$, where we observe quadratic decay for both errors. While the error $e_S^{CS}$ corresponding to the shape nodes ${\\mathbf x}_k \\in \\mathfrak S$ decays to machine precision, the error $e_T^{CS}$ corresponding to the interior nodes ${\\mathbf x}_k \\in \\mathfrak T^- \\cup \\mathfrak T^+$ deteriorates at some point due to the cancellation error occurring when subtracting $\\redPhiObjectiveFunction(\\phi)$ from $\\redPhiObjectiveFunction(O_{k,ih}\\phi)$ in \\eqref{eq_deltaJh_CS}.\n\n\\subsubsection{Test based on hyper-dual numbers}\nIn order to overcome this cancellation error also for the case of ${\\mathbf x}_k \\in \\mathfrak T^- \\cup \\mathfrak T^+$, we resort to hyper-dual (HD) numbers as introduced in \\cite{fike2011development}.
Here, the idea is to consider numbers with three non-real components denoted by $E_1$, $E_2$ and $E_1 E_2$ with $E_1^2 = E_2^2 = (E_1E_2)^2=0$. Assuming that expansion \\eqref{eq_expansion_op3} holds up to order $o+1$ also for such hyper-dual values of $\\varepsilon$, we can set $\\varepsilon = h E_1 + h E_2 + 0 E_1E_2$ for some $h>0$. For $o=1$, considering only the first non-real part (i.e., the $E_1$-part) and exploiting that $E_1^2 = 0$, we obtain the equality\n\\begin{align} \\label{eq_dJHD_shape}\n d \\redPhiObjectiveFunction(\\phi)({\\mathbf x}_k) = \\frac{E_1\\text{part}(\\redPhiObjectiveFunction(O_{k, h E_1 + h E_2}\\phi))}{h \\, d_k \\tilde a}\n\\end{align}\nfor ${\\mathbf x}_k \\in \\mathfrak S$. Similarly, with the same choice of $\\varepsilon$, by considering only the $E_1E_2$-part of the expansion and exploiting $E_1^2=E_2^2=E_1^2E_2^2=0$, we obtain for $o=2$\n\\begin{align} \\label{eq_dJHD_topo}\n d \\redPhiObjectiveFunction(\\phi)({\\mathbf x}_k) = \\frac{E_1E_2\\text{part}(\\redPhiObjectiveFunction(O_{k, h E_1 + h E_2}\\phi))}{2 h^2 \\, d_k \\tilde a}\n\\end{align}\nfor ${\\mathbf x}_k \\in \\mathfrak T^- \\cup \\mathfrak T^+$. In this case, the corresponding summed errors $e_S^{HD}(h)$ and $e_T^{HD}(h)$ vanish for arbitrary $h \\in \\mathbb R$. This is also observed numerically since both \\eqref{eq_dJHD_shape} and \\eqref{eq_dJHD_topo} suffer neither from a truncation nor a cancellation error. Figure \\ref{fig:verify}(c) shows that the obtained results agree up to machine precision with the derivatives obtained by \\eqref{eq::formulaTopologicalDerivativePlus}, \\eqref{eq::formulaTopologicalDerivativeMinus}, and the respective formula for the shape derivative \\eqref{eq::formulaShapeDerivative}.\n\\subsection{Application of optimization algorithm to model problem}\nFinally, we show the use of the numerical {topological-shape } derivative computed in Section \\ref{sec::numTopShapeDer} within a level-set based topology optimization algorithm.
We first state the precise model problem, before introducing the algorithm and showing numerical results.\n\\subsubsection{Problem setting}\\label{sec::numericalProblemSetting}\nWe consider the unit square $D=[0,1]^2$ and minimize the objective function \\eqref{eq::ContinuousObjective} with $c_1=0$ and $c_2=1$ subject to the PDE constraint in \\eqref{eq::ContinuousProblem}. The chosen problem parameters are shown in Table \\ref{tab::parameters}.\n\\begin{table}\n\t\\centering\n\t\\begin{tabular}{cccccccc}\n\t\t\\toprule\n\t\t$\\tilde\\alpha_1$ & $\\tilde\\alpha_2$ &\n\t\t$\\alpha_1$ & $\\alpha_2$ &\n\t\t$\\lambda_1$ & $\\lambda_2$ &\n\t\t$f_1$ & $f_2$ \\\\\n\t\t1 & 0.9 & 1 & 0.2 & 1 & 0.6 & 1 & 0.5 \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\caption{Problem parameters for the numerical experiment}\n\t\\label{tab::parameters}\n\\end{table}\nWe consider a mixed Dirichlet-Neumann problem by choosing\n\\begin{equation*}\n\\Gamma_D = \\{(x,y)\\in\\partial D| y=0 \\mbox{ or } y=1\\} , \\qquad\n\\Gamma_N = \\partial D \\setminus \\Gamma_D,\n\\end{equation*}\nand $g_D(x,y) = y$, $g_N(x,y) = 0$.\nIn order to define a desired state $\\hat u$, we choose a level-set function $\\phi_d$ which implies a desired shape $\\Omega_d$, compute the corresponding solution $u^*$ to $\\phi_d$ and set $\\hat u := u^*$. 
Then, by construction, $(\\Omega_d, \\hat u)$ is also the solution of the design optimization problem.\nIn the numerical tests we used five different meshes with 145, 545, 2113, 8321, and 33025 nodes, respectively.\nFor each mesh we obtain $\\phi_d$ by interpolation of\n\\begin{equation}\n\\bar\\phi_d(x,y) = \\left((x-0.3)^2+(y-0.4)^2-0.2^2\\right)\\left((x-0.7)^2+(y-0.7)^2-0.1^2\\right).\n\\end{equation}\nHence, $\\Omega$ consists of two (approximate) circles with radii $0.2$ and $0.1$, respectively; see Figure \\ref{fig:holes-meshes}.\n\\begin{figure}\n\t\\begin{subfigure}[t]{0.19\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{holes-meshes-3}\n\t\t\\subcaption{145 nodes}\n\t\\end{subfigure}\\hfill\n\t\\begin{subfigure}[t]{0.19\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{holes-meshes-4}\n\t\t\\subcaption{545 nodes}\n\t\\end{subfigure}\\hfill\n\t\\begin{subfigure}[t]{0.19\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{holes-meshes-5}\n\t\t\\subcaption{2113 nodes}\n\t\\end{subfigure}\\hfill\n\t\\begin{subfigure}[t]{0.19\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{holes-meshes-6}\n\t\t\\subcaption{8321 nodes}\n\t\\end{subfigure}\\hfill\n\t\\begin{subfigure}[t]{0.19\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{holes-meshes-7}\n\t\t\\subcaption{33025 nodes}\n\t\\end{subfigure}\\hfill\n\t\\caption{The different meshes and corresponding sought shapes used in the numerical experiments.}\n\t\\label{fig:holes-meshes}\n\\end{figure}\n\\subsubsection{Optimization algorithm}\\label{sec::algorithm}\n\\newcommand{i}{i}\nThe optimization algorithm we use to solve the problem introduced in Section \\ref{sec::numericalProblemSetting} is inspired by \\cite{amstutz2006new}.
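The level-set construction above can be sanity-checked numerically. The sketch below (assuming the usual level-set convention $\Omega = \{\bar\phi_d < 0\}$) evaluates $\bar\phi_d$ at the two circle centres and estimates the area of $\Omega$ by a midpoint rule on $D=[0,1]^2$; for two disjoint circles of radii $0.2$ and $0.1$ the area should be close to $\pi(0.2^2+0.1^2) \approx 0.157$:

```python
def phi_d(x, y):
    # product of two signed circle functions: negative inside exactly one of
    # the (disjoint) circles, positive outside both
    return (((x - 0.3)**2 + (y - 0.4)**2 - 0.2**2)
            * ((x - 0.7)**2 + (y - 0.7)**2 - 0.1**2))

# midpoint-rule area estimate of Omega = {phi_d < 0} on D = [0,1]^2
n = 400
h = 1.0 / n
inside = sum(1 for i in range(n) for j in range(n)
             if phi_d((i + 0.5) * h, (j + 0.5) * h) < 0)
area = inside * h * h
print(area)  # close to pi*(0.2**2 + 0.1**2) ~ 0.157
```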
\n\n\\begin{definition} \\label{def_locOpti}\n We say a level set function $\\phi \\in S_h^1(D)$ is locally optimal for the problem described by $\\mathcal J$ if\n \\begin{align}\n \\begin{cases}\n d \\mathcal J(\\phi)({\\mathbf x}_k) \\geq 0 & \\text{for } {\\mathbf x}_k \\in \\mathfrak T^- \\cup \\mathfrak T^+,\\\\\n d \\mathcal J(\\phi)({\\mathbf x}_k) = 0 & \\text{for } {\\mathbf x}_k \\in \\mathfrak S.\n \\end{cases}\n \\end{align}\n\\end{definition}\n\n\n\nWe introduce the generalized numerical {topological-shape } derivative $G_\\phi \\in S_h^1(D)$ with\n\\begin{equation} \\label{eq_defGphi}\n G_\\phi({\\mathbf x}_k) = \\begin{cases}\n -\\min(d\\redPhiObjectiveFunction(\\phi)({\\mathbf x}_k),0) \\quad &\\text{for } {\\mathbf x}_k \\in \\mathfrak T^-, \\\\[8pt]\n \\min(d\\redPhiObjectiveFunction(\\phi)({\\mathbf x}_k),0) \\quad &\\text{for } {\\mathbf x}_k \\in \\mathfrak T^+, \\\\[8pt]\n -d\\redPhiObjectiveFunction(\\phi)({\\mathbf x}_k) \\quad &\\text{for } {\\mathbf x}_k \\in \\mathfrak S.\n \\end{cases}\n\\end{equation}\nWith this definition, we immediately get the following optimality condition:\n\\begin{lemma}\n Let $\\phi \\in S_h^1(D)$ and\n\t\\begin{equation} \\label{eq_optiCondition}\n\t\t G_\\phi({\\mathbf x}_k) = 0, \\quad \\text{for } k = 1,\\dots ,M.\n\t\\end{equation}\n\tThen $\\phi$ is locally optimal in the sense of Definition \\ref{def_locOpti}.\n\\end{lemma}\nThe update of the level-set function based on the information of the {topological-shape } derivative is done by spherical linear interpolation (see also \\cite{amstutz2006new})\n\\begin{equation} \\label{eq_update_slerp}\n\t\\phi_{i+1} = \\frac{1}{\\sin(\\theta_i)}\\left(\\sin((1-\\kappa_i)\\theta_i) \\phi_i + \\sin(\\kappa_i\\theta_i)\\frac{G_{\\phi_i}}{\\|G_{\\phi_i}\\|_{L_{2}(D)}} \\right),\n\\end{equation}\nwhere $\\theta_i = \\mbox{arc cos}(( \\phi_i, G_{\\phi_i})_{L^2(D)})$ is the angle between the given level set function $\\phi_i$ and the sensitivity $G_{\\phi_i}$ in an 
$L^2(D)$-sense. Here, $\\kappa_i \\in (0,1)$ is a line search parameter which is adapted such that a decrease in the objective function is achieved. Note that, by construction, the update \\eqref{eq_update_slerp} preserves the $L^2(D)$-norm, $\\|\\phi_{i+1}\\|_{L^2(D)} = \\|\\phi_{i}\\|_{L^2(D)}$. As in \\cite{amstutz2006new,gangl2020multi}, we can also show that $\\phi$ evolves along a descent direction:\n\\begin{lemma}\n Let $\\phi_i, \\phi_{i+1} \\in S_h^1(D)$ be two subsequent iterates related by \\eqref{eq_update_slerp}. Then we have for ${\\mathbf x}_k \\in \\mathfrak T^-(\\phi_i) \\cup \\mathfrak T^+(\\phi_i)$\n \\begin{align}\n \\phi_{i}({\\mathbf x}_k) > 0 > \\phi_{i+1}({\\mathbf x}_k) \\Longrightarrow d \\mathcal J(\\phi_i)({\\mathbf x}_k) <0 \\label{eq_descentTplus}\\\\\n \\phi_{i}({\\mathbf x}_k) < 0 < \\phi_{i+1}({\\mathbf x}_k) \\Longrightarrow d \\mathcal J(\\phi_i)({\\mathbf x}_k) <0 \\label{eq_descentTminus}\n \\end{align}\n\\end{lemma}\n\\begin{proof}\n Let ${\\mathbf x}_k \\in \\mathfrak T^+(\\phi_i)$, i.e., $\\phi_i({\\mathbf x}_k) > 0$, and assume that $\\phi_{i+1}({\\mathbf x}_k) < 0$. Since $\\mbox{sin}(\\theta)>0$ and $\\mbox{sin}(s\\theta)>0$ for all $\\theta \\in (0, \\pi)$ and $s\\in (0,1)$, it follows from \\eqref{eq_update_slerp} that $G_{\\phi_i}({\\mathbf x}_k) < 0$ and thus, by \\eqref{eq_defGphi}, $d\\mathcal J(\\phi_i)({\\mathbf x}_k) < 0$ as claimed in \\eqref{eq_descentTplus}.
An analogous argument yields \\eqref{eq_descentTminus}.\n\\end{proof}\n\nWe can also show that $G_\\phi$ constitutes a descent direction for ${\\mathbf x}_k \\in \\mathfrak S$.\n\\begin{lemma}\n Let $\\phi \\in S^1_h(D)$ and suppose that\n \\begin{align} \\label{eq_limsubsuper}\n \\underset{\\varepsilon \\searrow 0}{\\mbox{lim }} \\frac{\\mathcal J(S_{k,\\varepsilon} \\phi)-\\mathcal J(\\phi)}{|\\Omega(S_{k,\\varepsilon} \\phi) \\triangle \\Omega(\\phi)|} = - \\underset{\\varepsilon \\nearrow 0}{\\mbox{lim }} \\frac{\\mathcal J(S_{k,\\varepsilon} \\phi)-\\mathcal J(\\phi)}{|\\Omega(S_{k,\\varepsilon} \\phi) \\triangle \\Omega(\\phi)|}.\n \\end{align}\n Let ${\\mathbf x}_k \\in \\mathfrak S(\\phi)$ be fixed and let $\\phi^{\\kappa}$ be the level set function according to \\eqref{eq_update_slerp} with line search parameter $\\kappa \\in (0,1)$ that is updated only in ${\\mathbf x}_k$, i.e.,\n $\\phi^{\\kappa} = a(\\kappa) \\phi + b(\\kappa) G_{\\phi} \\varphi_k$\n with $a(\\kappa) = \\sin((1-\\kappa)\\theta) \/ \\sin(\\theta)$ and $b(\\kappa) = \\sin(\\kappa \\theta) \/ \\left( \\sin(\\theta) \\|G_{\\phi}\\|_{L_{2}(D)} \\right)$. Moreover assume that $|d \\mathcal J(\\phi)({\\mathbf x}_k)|>0$. Then there exists $\\overline \\kappa \\in (0,1)$ such that for all $\\kappa \\in (0,\\overline \\kappa)$\n \\begin{align*}\n \\mathcal J(\\phi^\\kappa) < \\mathcal J(\\phi).\n \\end{align*}\n\\end{lemma}\n\n\\begin{proof}\n Suppose that $0 > d \\mathcal J(\\phi)({\\mathbf x}_k)$. 
Then it follows from \\eqref{eq::topDerivative} that $\\mathcal J(\\phi + \\varepsilon \\varphi_k) < \\mathcal J(\\phi)$ for $\\varepsilon >0$ small enough.\n Thus, since $a(\\kappa), b(\\kappa)>0$ for $\\theta \\in (0, \\pi)$ and $\\kappa \\in (0,1)$ and since $\\mathcal J(\\phi^\\kappa) = \\mathcal J(\\frac{1}{a(\\kappa)} \\phi^\\kappa)$, it follows\n \\begin{align*}\n \\mathcal J(\\phi^\\kappa) = \\mathcal J(\\phi + b(\\kappa)\/a(\\kappa) G_{\\phi} \\varphi_k) = \\mathcal J(\\phi - b(\\kappa)\/a(\\kappa) d \\mathcal J(\\phi)({\\mathbf x}_k) \\varphi_k) < \\mathcal J(\\phi)\n \\end{align*}\n for $\\kappa>0$ small enough since $-b(\\kappa)\/a(\\kappa) d\\mathcal J(\\phi)({\\mathbf x}_k)>0$ and $b(\\kappa)\/a(\\kappa) \\rightarrow 0$ as $\\kappa \\searrow 0$.\n On the other hand, if $0 < d \\mathcal J(\\phi)({\\mathbf x}_k)$, it follows from \\eqref{eq::topDerivative} and \\eqref{eq_limsubsuper} that $\\mathcal J(\\phi + \\varepsilon \\varphi_k) < \\mathcal J(\\phi)$ for $\\varepsilon <0$ small enough and further for $\\kappa$ small enough\n \\begin{align*}\n \\mathcal J(\\phi^\\kappa) = \\mathcal J(\\phi - b(\\kappa)\/a(\\kappa) d \\mathcal J(\\phi)({\\mathbf x}_k) \\varphi_k) < \\mathcal J(\\phi).\n \\end{align*}\n\\end{proof}\n\n\\begin{remark}\n In the continuous setting, the property corresponding to \\eqref{eq_limsubsuper} is fulfilled for smooth domains, which can be seen as follows. Let $\\Omega_t^V = (\\textrm{id}+t V)(\\Omega)$ and note that $\\Omega_{-t}^V = \\Omega_t^{-V}$.
Then, by Lemma \\ref{LEM_SYMDIF},\n \\begin{align*}\n \\underset{s \\nearrow 0}{\\mbox{lim }} \\frac{|\\Omega_s^V \\triangle \\Omega|}{s}\n = -\\underset{t \\searrow 0}{\\mbox{lim }} \\frac{|\\Omega_{t}^{-V} \\triangle \\Omega|}{t}\n = - \\int_{\\partial \\Omega} |V \\cdot n| \\; \\mbox d S_x = -\\underset{s \\searrow 0}{\\mbox{lim }} \\frac{|\\Omega_{s}^{V} \\triangle \\Omega|}{s}\n \\end{align*}\n and, assuming differentiability of $s \\mapsto \\mathfrak g(\\Omega_s^V)$,\n \\begin{align*}\n \\underset{s \\searrow 0}{\\mbox{lim }} \\frac{\\mathfrak g(\\Omega_s^V) - \\mathfrak g(\\Omega)}{|\\Omega_s^V \\triangle \\Omega|} =\n \\frac{\\underset{s \\rightarrow 0}{\\mbox{lim }} \\left( \\mathfrak g(\\Omega_s^V) - \\mathfrak g(\\Omega)\\right) \/ s}{\\underset{s \\searrow 0}{\\mbox{lim }} |\\Omega_s^V \\triangle \\Omega| \/ s} = -\\underset{s \\nearrow 0}{\\mbox{lim }} \\frac{\\mathfrak g(\\Omega_s^V) - \\mathfrak g(\\Omega)}{|\\Omega_s^V \\triangle \\Omega|}.\n \\end{align*}\n\n\n In the discrete case, however, situations may occur in which the limits in \\eqref{eq_limsubsuper} do not coincide, in particular when $\\phi({\\mathbf x}_k) = 0$. We remark that this issue did not seem to cause problems in our numerical experiments.\n\n\\end{remark}\n\n\\begin{remark}\n In practice it turned out to be advantageous to include a smoothing step of the level set function.
Thus, we chose the following update strategy: We first set\n \\begin{equation*}\n \\psi = \\frac{1}{\\sin(\\theta_i)}\\left(\\sin((1-\\kappa_i)\\theta_i) \\phi_i + \\sin(\\kappa_i\\theta_i)\\frac{G_{\\phi_i}}{\\|G_{\\phi_i}\\|_{L_{2}(D)}} \\right),\n \\end{equation*}\n with the same notation as above, before smoothing the level set function in $\\mathfrak T^-(\\psi) \\cup \\mathfrak T^+(\\psi)$ by\n \\begin{equation}\n\t\\hat\\psi({\\mathbf x}_k) = \\begin{cases}\n\t\t\\frac{\\sum_{i\\in R_{{\\mathbf x}_k}} \\psi({\\mathbf x}_i) }{|R_{{\\mathbf x}_k}|} \\quad &\\text{for } {\\mathbf x}_k \\in \\mathfrak T^-(\\psi)\\cup\\mathfrak T^+(\\psi), \\\\[8pt]\n\t\t\\psi({\\mathbf x}_k) \\quad &\\text{for } {\\mathbf x}_k \\in \\mathfrak S.\n\t\\end{cases}\n \\end{equation}\n Finally, the level-set function is normalized and the next iterate is given by\n \\begin{equation}\n \\phi_{i+1} = \\frac{ \\hat\\psi}{\\| \\hat\\psi\\|_{L_{2}(D)}}.\n \\end{equation}\n\\end{remark}\n\\color{black}\n\n\\subsubsection{Numerical results}\nAs an initial design for the optimization, we take the empty set, $\\Omega = \\emptyset$. This is realized by choosing $\\phi_0 = 1\/\\|1\\|_{L_2(D)}$ as the initial level set function. We use the algorithm outlined in Section \\ref{sec::algorithm}\nto update this level set function. We terminated the algorithm after a fixed number of 800 iterations.
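The norm-preservation property of the spherical interpolation step \eqref{eq_update_slerp} used in each of these iterations is easy to verify in a small sketch, here with plain Euclidean vectors standing in for the discretized level-set functions and the $L^2(D)$ inner product, and with the angle computed against the normalized sensitivity (an assumption consistent with the norm-preservation claim):

```python
import math

def slerp_update(phi, G, kappa):
    # spherical linear interpolation between the unit vector phi and the
    # normalized sensitivity G/|G|; the Euclidean dot product stands in
    # for the L2(D) inner product of the discretized level-set functions
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    Gn = [g / norm(G) for g in G]
    theta = math.acos(sum(p * g for p, g in zip(phi, Gn)))
    a = math.sin((1 - kappa) * theta) / math.sin(theta)
    b = math.sin(kappa * theta) / math.sin(theta)
    return [a * p + b * g for p, g in zip(phi, Gn)]

phi_i = [1.0, 0.0, 0.0]   # current iterate, normalized
G_i = [0.0, 2.0, 1.0]     # raw sensitivity direction
phi_next = slerp_update(phi_i, G_i, 0.3)
# the update preserves the norm: |phi_next| = |phi_i| = 1
```

Here `kappa` plays the role of the line search parameter $\kappa_i$.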
The final as well as some intermediate configurations are illustrated in Figures \\ref{fig:holes-pics2-3}-\\ref{fig:holes-pics2-7} for the five different levels of discretization.\n\\newcommand{\\myincludepic}[2]{\\begin{subfigure}{0.49\\linewidth}\\centering\\includegraphics[width=0.7\\linewidth,trim=0cm 0cm 0cm 0cm,clip]{holes-pics2-1e+20_1_1-#1}\\caption{#2 iteration}\n\\end{subfigure}}\n\\newcommand{100}{100}\n\\newcommand{\\picText}[1]{\\centering\n\t\\includegraphics[width=0.5\\linewidth]{colorbar}\n\t\\caption{Evolution of the level-set function for the #1 nodes mesh}}\n\\begin{figure\n\t\\myincludepic{3-1}{1}\n\t\\myincludepic{3-2}{2}\n\t\n\t\\myincludepic{3-10}{10}\n\t\\myincludepic{3-20}{20}\n\t\n\t\\myincludepic{3-100}{100}\n\t\\myincludepic{3-800}{800}\n\t\n\t\n\t\\picText{145}\n\t\\label{fig:holes-pics2-3}\n\\end{figure}\n\\begin{figure\n\t\\myincludepic{4-1}{1}\n\t\\myincludepic{4-2}{2}\n\t\n\t\\myincludepic{4-10}{10}\n\t\\myincludepic{4-20}{20}\n\t\n\t\\myincludepic{4-100}{100}\n\t\\myincludepic{4-800}{800}\n\t\\picText{545}\n\t\\label{fig:holes-pics2-4}\n\\end{figure}\n\\begin{figure\n\t\\myincludepic{5-1}{1}\n\t\\myincludepic{5-2}{2}\n\t\n\t\\myincludepic{5-10}{10}\n\t\\myincludepic{5-20}{20}\n\t\n\t\\myincludepic{5-100}{100}\n\t\\myincludepic{5-800}{800}\n\t\\picText{2113}\n\t\\label{fig:holes-pics2-5}\n\\end{figure}\n\\begin{figure\n\t\\myincludepic{6-1}{1}\n\t\\myincludepic{6-2}{2}\n\t\n\t\\myincludepic{6-10}{10}\n\t\\myincludepic{6-20}{20}\n\t\n\t\\myincludepic{6-100}{100}\n\t\\myincludepic{6-800}{800}\n\t\n\t\\picText{8321}\n\t\\label{fig:holes-pics2-6}\n\\end{figure}\n\\begin{figure\n\t\\myincludepic{7-1}{1}\n\t\\myincludepic{7-2}{2}\n\n\t\\myincludepic{7-10}{10}\n\t\\myincludepic{7-20}{20}\n\n\t\\myincludepic{7-100}{100}\n\t\\myincludepic{7-800}{800}\n\n\t\\picText{33025}\n\t\\label{fig:holes-pics2-7}\n\\end{figure}\n\\begin{figure\n\t\\centering\n\t\\begin{tabular}{cc}\n\t\t\\includegraphics[width=0.49\\linewidth]{holes_history_objective-1e+
20_1_1}\n\t\t&\n\t\t\\includegraphics[width=0.49\\linewidth]{holes_history_derivative2-1e+20_1_1}\n\t\t\\\\\n\t\t(a) & (b) \n\t\\end{tabular}\n\t\\caption{Evolution of the objective function (a) and the norm of the {topological-shape } derivative (b) in course of the optimization.}\n\t\\label{fig:holeshistoryobjective-1}\n\\end{figure}\n\nWe observe that in all cases the two circles are recovered with high accuracy. In Figure \\ref{fig:holeshistoryobjective-1} the evolution of the objective function as well as of the norm of the generalized numerical {topological-shape } derivative is plotted. We observe that objective function decreases fast and after 800 iterations a reduction by a factor of approximately $10^{-5}-10^{-8}$ could be achieved. Moreover, we observe that the norm of the {topological-shape } derivative decreases continuously, more and more approaching the optimality condition \\eqref{eq_optiCondition}.\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzebqz b/data_all_eng_slimpj/shuffled/split2/finalzzebqz new file mode 100644 index 0000000000000000000000000000000000000000..80d21bc29a329d509b0737122cb59cadd2b82777 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzebqz @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nRecently, a lot of interest has grown around the possibility of\napplying string inspired techniques to the non-perturbative\nregime of QCD. The starting point is the AdS\/CFT correspondence\n\\cite{Malda}, a conjectured duality between a maximally\nsupersymmetric strongly coupled conformal field theory and the\nsupergravity limit of type~IIB string theory, which involves\ntheories different from QCD. Further developments \\cite{adsqcd}\nhave tried to apply the correspondence to QCD, induced by the\nevidence of the existence of a window of energy in which QCD\nshows an approximate conformal behaviour \\cite{confqcd}. 
These\ndevelopments have taken different directions. The framework\nthrough which I move here is the so-called soft~wall model of\nAdS\/QCD \\cite{softwall}, a phenomenological model originally\nbuilt to holographically describe chiral symmetry breaking and\nthen adapted to several strong interaction processes. For a list\nof other approaches the reader can refer to \\cite{other}.\n\nIn the following, I discuss the scalar glueball sector and how\nthe spectrum and the two-point correlation function are\nrepresented in the soft wall model. Then, I comment on the\nresults, comparing them with current phenomenology and lattice\ndata.\n\n\\section{Framework}\n\nThe considered model is defined in a $5d$ curved space (the\nbulk) with metric:\n\\begin{equation}\nds^2=g_{MN} dx^M\ndx^N=\\frac{R^2}{z^2}\\,\\big(\\eta_{\\mu\\nu}dx^{\\mu}dx^{\\nu}+dz^2\\big)\n\\label{metric}\n\\end{equation}\nwith $\\eta_{\\mu\\nu}=\\mbox{diag}(-1,+1,+1,+1)$; $R$ is the AdS\ncurvature radius, and the coordinate $z$ runs in the range\n$0\\leq z < +\\infty$. QCD is supposed to live on the boundary\n$z=0$, where the element $\\eta_{\\mu\\nu}dx^{\\mu}dx^{\\nu}$ describes a\nflat Minkowski space.\n\nIn addition to the AdS metric, the model is characterized by the\npresence of a background dilaton field:\n\\begin{equation}\n\\Phi(z)=(c z)^2 \\label{dilaton}\n\\end{equation}\nexponentially coupled to the fields, whose functional form is\nchosen in such a way to have linear Regge trajectories for light\nvector mesons \\cite{softwall}; $c$ is a dimensionful parameter\nsetting the scale of QCD quantities and it is of ${\\cal\nO}(\\Lambda_{QCD})$. 
It is responsible for the breaking of conformal symmetry and is fixed by the experimental slope of the $\\rho$ meson trajectory.\n\n\n\n\\section{Scalar glueballs}\n\n$0^{++}$ glueballs can be described in QCD by the dimension four\noperator ${\\cal O}_S(x)=\\mbox{Tr}[\\beta(\\alpha_s)G^{a\\,\\mu\\nu}G_{\\mu\\nu}^a]$.\nIn the five-dimensional theory its dual field is a massless\nscalar $Y(x,z)$ \\cite{glueballspectrum}, whose action is given\nby:\n\\begin{equation}\\label{action}\n S=-\\displaystyle\\frac{1}{2k}\\int\n d^5x\\sqrt{-g}\\,e^{-\\phi}g^{MN}(\\partial_MY)(\\partial_NY)\n\\end{equation}\nwhere $k$ is a parameter introduced to give the correct\ndimension to the action. The AdS\/CFT dictionary states that this\naction is equivalent to the QCD partition function, in which the\nsource of ${\\cal O}_S(x)$ is the boundary value $Y_0(x)$ of the\nfield $Y(x,z)$. The following relation can be written:\n\\begin{equation}\n Y(x,z)=\\int d^4x^\\prime\n K(x-x^\\prime,z)Y_0(x^\\prime)\\;\\;,\n\\end{equation}\nwhere the function $K(x-x^\\prime,z)$ is called the bulk-to-boundary\npropagator, since it links the fields in the bulk with the\nsources on the boundary.\n\nQCD correlation functions can be obtained by functionally\ndifferentiating the action (\\ref{action}) with respect to $Y_0(x)$.
The\ntwo-point function obtained in this way is, in the limit\n$q^2\\rightarrow+\\infty$ \\cite{glueballcorrel,forkel}:\n\\begin{eqnarray}\\label{piads}\n \\Pi_{AdS}(q^2) & = & \\displaystyle\\frac{R^3}{k}\\biggl\\{q^4\\cdot\\displaystyle\\frac{1}{8}\\left[2-2\\gamma_E+\\ln4-\\ln(q^2\/\\nu^2)\\right]+\\nonumber\\\\\n && +q^2\\left[-\\displaystyle\\frac{\\nu^2}{2}+\\displaystyle\\frac{c^2}{4}\\left(1-4\\gamma_E+2\\ln4-2\\ln(q^2\/\\nu^2)\\right)\\right]+\\\\\n && -\\displaystyle\\frac{5c^4}{6}+\\displaystyle\\frac{2c^6}{3q^2}+{\\cal\n O}\\left(\\displaystyle\\frac{1}{q^4}\\right)\\biggr\\}\\;\\;,\\nonumber\n\\end{eqnarray}\nto be compared with the QCD result \\cite{Paver}:\n\\begin{eqnarray}\\label{piqcd}\n \\Pi_{QCD}(q^2) & = & 2\\biggl(\\displaystyle\\frac{\\beta_1}{\\pi}\\biggr)^2\\biggl(\\displaystyle\\frac{\\alpha_s}{\\pi}\\biggr)^2\\,q^4\\biggl(-\\ln{\\big(\\displaystyle\\frac{q^2}{\\nu^2}\\big)}+2\n -\\displaystyle\\frac{1}{\\varepsilon^{\\prime}}\\biggr)+4\\beta_1^2\\,\\Big(\\frac{\\alpha_{s}}{\\pi}\\Big)\\langle{\\cal\n C}_4\\rangle\\nonumber\\\\\n &&+8\\beta_1^2\\Big(\\frac{\\alpha_s}{\\pi}\\Big)^2\\frac{\\langle{\\cal C}_6\\rangle}{q^2}+{\\cal\n O}\\left(\\displaystyle\\frac{1}{q^4}\\right)\\; .\n\\end{eqnarray}\nMatching (\\ref{piads}) with (\\ref{piqcd})\n($\\frac{R^3}{8k}=2(\\beta_1\/\\pi)^2(\\alpha_s\/\\pi)^2$) one gets the fully\nanalytic form of the correlator. By casting it in the form:\n\\begin{equation}\n \\Pi_{AdS}(q^2)=\\displaystyle\\sum_{n=0}^\\infty\\displaystyle\\frac{f_n^2}{q^2+m_n^2+i\\varepsilon}\n\\end{equation}\nit is possible to find the poles $m_n^2=4c^2(n+2)$ and the\nrelated residues $f_n^2=\\langle0|{\\cal\nO}_S(0)|n\\rangle^2=\\frac{8R^3c^6}{k}\\,(n+1)(n+2)$, corresponding\nto the mass spectrum and the decay constants of the scalar\nglueballs. 
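The pole structure above is straightforward to check numerically. In the sketch below the scale is set to $c = M_{0^{++}}/(2\sqrt{2}) \approx 0.385$~GeV (an assumed value, consistent with fixing $c$ from the $\rho$ trajectory), and the linearity of the trajectory $m_n^2 = 4c^2(n+2)$ is verified:

```python
import math

c = 0.385  # GeV; assumed soft-wall scale, consistent with the rho-trajectory fit

def glueball_mass(n):
    # pole positions of the correlator: m_n^2 = 4 c^2 (n + 2)
    return 2.0 * c * math.sqrt(n + 2)

masses = [glueball_mass(n) for n in range(4)]
print(masses[0])  # lowest 0++ state: 2*sqrt(2)*c, about 1.089 GeV
# successive squared masses differ by the constant slope 4 c^2
```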
The results for the lowest lying state are:\n\\begin{itemize}\n \\item $M_{0^{++}}=1.089$~GeV\n \\item $f_{0^{++}}=\\langle0|{\\cal\n O}_S(0)|n=0\\rangle=0.763$~GeV$^3$~~~.\n\\end{itemize}\nThe $0^{++}$ glueball turns out to be heavier than the $\\rho$\nmeson, as expected by phenomenology, but slightly lighter with\nrespect to the results from lattice simulations\n\\cite{latticeglueball}.\n\nAnother point is that in the large $q^2$ expansion there is a\ndimension~two condensate, absent in QCD since there are no ways\nto construct scalar local gauge invariant quantities with that\ndimension \\cite{zakcond2}.\n\nFinally, the dimension four condensate $\\langle{\\cal\nC}_4\\rangle$ turns out to be negative in this picture, at odds\nwith the commonly used value\n$\\langle(\\alpha_s\/\\pi)G^2\\rangle\\simeq0.012$~GeV$^4$\n\\cite{khodjamirian}.\n\nA discussion on how to fix some of the problems exposed above\ncan be found in \\cite{glueballcorrel, scalarmesons}.\n\n\n\n\n\n\n\n\n\\acknowledgments I thank P.~Colangelo, F.~De~Fazio, F.~Giannuzzi\nand F.~Jugeau for collaboration.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nFor most of the history of exploring electromagnetic phenomena, it had been\nbelieved that knowledge of the electric and magnetic fields in a physical\nproblem is sufficient to define the problem. The scalar and vector potentials,\nwhose spatial and temporal derivatives yield the fields, had been regarded as\nauxiliary quantities that are useful but not essential. This conclusion was\napparently reinforced by the fact that the set of potentials to represent the\nfields is not unique. Subject to modest restrictions, there exist\ntransformations (called gauge transformations) to other sets of potentials\nthat produce the same fields.\n\nThis seemingly straightforward situation was upset by the Aharonov-Bohm effect\n\\cite{es,ab}. 
The simplest realization of this phenomenon is that an electron\nbeam passing outside a solenoid containing a magnetic field will be deflected,\neven though there is no field outside the solenoid. There is, however, a\npotential outside the solenoid that suffices to explain the deflection. The\neffect remained controversial until it was verified experimentally\n\\cite{tonomura}. This has the basic consequence that potentials are more\nfundamental than fields. The Aharonov-Bohm effect is founded on a single\nexplicitly quantum-mechanical phenomenon, and commentary about its\nsignificance has been in terms of quantum mechanics \\cite{furry,vaidman}.\n\nThe concept explored here is different from that of the Aharonov-Bohm effect,\nand much more consequential. It is shown that a change of gauge can introduce\na violation of basic symmetries, even when the usual constraints on allowable\ngauge transformations have been satisfied. Furthermore, these symmetry\nviolations can occur in classical physics as well as in quantum mechanics with\nexternal electromagnetic fields. The consequences of these results are\nprofound. There exist contrasting sets of potentials that yield exactly the\nsame fields, but where one set is consonant with physical requirements but\nanother is not. This proves directly that the selection of the proper set of\npotentials is the decisive matter, since the predicted fields are the same in\nboth cases. A corollary is that gauge transformations are not unitary\ntransformations. This contradicts the field-based assumption that gauge\ntransformations must preserve the values of measurable quantities. 
The\nassumption of unitarity (often implicit) underlies some of the influential\narticles that have been published on the subject of gauge choice.\n\nExamples employed here to demonstrate the primacy of potentials -- a charged\nparticle in interaction with a constant electric field, and a bound electron\nsubjected to a plane-wave field -- represent basic physics problems, unlike\nthe narrow specificity of the Aharonov-Bohm effect. An important feature\nrevealed by these examples is that, although the physical consequences of\nstatic or quasistatic-electric (QSE) fields are quite similar to plane-wave\neffects at the low field intensities that exist in the usual atomic, molecular\nand optical (AMO) physics processes, at the high field intensities now\nachievable with laser fields, they can be profoundly different. These\ndifferences have yet to be fully appreciated in the AMO literature, leading to\nmisconceptions that persist nearly forty years after the first laboratory\nobservation \\cite{ati} of explicit intense-field effects.\n\nA concept introduced here is that of a hierarchy of physical quantities. Since\npotentials are primary and fields are secondary, it follows that energies are\nprimary and forces are secondary. This ranking resolves the long-standing\nmystery about why the Schr\\\"{o}dinger equation cannot be written directly in\nterms of electric and magnetic fields, even though the fields were\nconventionally assumed to be basic physical quantities. All attempts to\nexpress the Schr\\\"{o}dinger equation directly in terms of fields have resulted\nin nonlocality \\cite{mandelstam,dewitt,belinfante,levy,rohrlich,priou}. This\napparent anomaly is one of the enduring puzzles of quantum mechanics. A\nhierarchy of physical quantities also serves to clarify the current confused\nsituation in the strong-laser community, where field-based intensity measures\nare employed that are inconsistent with energy-based criteria. 
One important\nexample of this misdirection is the introduction of the concept of the\n\\textquotedblleft critical electric field\\textquotedblright\n\\ \\cite{sauter,schwinger} into the discussion of strong laser effects, despite\nthe fact that lasers produce transverse fields and the critical field has\nwell-defined meaning only for longitudinal fields. The basic differences\nbetween transverse fields and longitudinal fields are also exhibited in the\nmacroscopic world in terms of the properties of extremely-low-frequency radio waves.\n\nThe range of applicability of the concepts examined here is very large, since\nit encompasses classical electromagnetism, and also relativistic and\nnonrelativistic quantum mechanics in which the electromagnetic field is\nregarded as an external classical field.\n\nThe limitation to external classical fields is significant, since it places\nthe present work outside the scope of a Yang-Mills theory \\cite{yangmills}.\nQuantum electrodynamics (QED) is a Yang-Mills theory, but standard QED does\nnot incorporate strong-field theory. Strong-field theories contain an apparent\nintensity-dependent \\textquotedblleft mass shift\\textquotedblright, discovered\nindependently by Sengupta \\cite{sengupta} and by the present author\n\\cite{hrdiss,hr62}. This mass shift can be explained in terms of a demand for\ncovariance in external fields \\cite{hrup}. The mass shift is a fundamental\nphenomenon in strong-field physics \\cite{hrje}, but it does not exist in the\ncontext of the quantized fields of QED \\cite{hrdiss,hr62,jehr}.\n\nSection II below discusses two basic examples that exhibit pairs of potential\nchoices that describe exactly the same fields, but where one set of potentials\nis physically acceptable and the other is not. One example is the simplest\npossible case: the classical interaction of a charged particle with a constant\nelectric field.
Of the two possibilities for gauge choice, one contradicts\nNoether's Theorem \\cite{noether}. There is no such problem with the\nalternative gauge. The next example is the interaction of a charged particle\nwith a plane-wave field, such as the field of a laser. In this case, the key\nfactor is that the symmetry principle in question -- preservation of the\npropagation property of a plane-wave field -- is not often mentioned, even in\nthe context of very strong fields where this symmetry is crucial \\cite{hrup}.\nThe demand for the preservation of the propagation property imposes a strong\nlimitation on possible gauge transformations. In the presence of a\nsimultaneous scalar interaction, like a Coulomb binding potential, only the\nradiation gauge is possible \\cite{hrgge}. This limitation to a unique gauge\nexists in both classical and quantum domains. An important aspect of this\nproblem is that the widely-used dipole approximation in the description of\nlaser-caused effects suppresses this symmetry, thus masking the errors that\nfollow from ignoring this basic property. Both the constant-electric-field and\nthe propagating-field examples admit of only one possible gauge. This lack of\ngauge-equivalent alternatives is extremely important, since both situations\nrepresent commonplace physical environments. This is in contrast to the\nspecialized Aharonov-Bohm effect.\n\nAn immediate consequence of the demonstrated fact that some nominally valid\ngauge transformations can have unphysical consequences is that a gauge\ntransformation is not a unitary transformation. This is discussed in Section\nIII, where it is shown to be related to the construction of exact transition amplitudes.\n\nThe impossibility of writing the Schr\\\"{o}dinger equation directly in terms of\nelectric and magnetic fields is discussed in Section IV. This is further\nevidence of the basic nature of potentials, and it also supports the notion of\na hierarchy of physical phenomena. 
Quantum mechanics can be constructed from\nclassical mechanics when expressed in terms of system functions like the\nLagrangian or Hamiltonian, whereas a Newtonian form of classical mechanics has\nsuch an extension only by extrapolation to a desired result. System functions\nare related to energies, whereas Newtonian physics involves forces, and forces\nare directly connected with electric and magnetic fields, as shown by the\nLorentz force expression.\n\nThe ambiguities inherent in the view that the $\mathbf{E}^{2}-\mathbf{B}^{2}$\nand $\mathbf{E}\cdot\mathbf{B}$ Lorentz invariants reliably characterize the\nelectrodynamic environment are another topic examined in Section IV. (The\nLorentz invariants, like all electromagnetic quantities throughout this\npaper, are stated in Gaussian units.) This concept has an important failure\nwhen both invariants are zero, since it associates propagating plane-wave\nfields with the completely different constant crossed fields. The\ncommonly-held assumption that constant crossed fields are a zero-frequency\nlimit of plane-wave fields (see, for example, Refs. \cite{nikrit66,ritus85})\nis shown to be untenable.\n\nAnother topic in Section IV is the apparent dominance of the electric\ncomponent of the Lorentz force expression at low frequencies, a field-related\nconception that draws attention away from the rising importance of the\nmagnetic component of a propagating field as the frequency declines\n\cite{hr75,hr82}. Inappropriate emphasis on the electric field has caused\nconceptual errors even in relativistic phenomena, as discussed in Section IV\nin the context of vacuum pair production. A potentials-related approach\nobviates this electric-field-dominance hazard. The concept of the\n\textit{critical field} is often mentioned in connection with strong-laser\ninteractions \cite{arb,ssb}. 
The critical field refers to that value of\nelectric field at which spontaneous pair production from the vacuum becomes\nsignificant. It has been applied to laser fields in terms of the electric\ncomponent of a plane-wave field. This is devoid of meaning for laser beams in\nvacuum because pure electric fields and plane-wave fields are disjoint\nconcepts, as is evident from Section II. The conservation conditions\napplicable to critical-field considerations cannot be satisfied by a laser.\nEven were the electric component of a laser field equal to the critical field,\npair production from the vacuum cannot occur unless there is a\ncounter-propagating field to provide the necessary conservation of momentum\n\cite{hrdiss,hr62,hr71,burke}.\nThe fact that photons convey momentum is incompatible with the concept of a\ncritical electric field for laser-induced processes.\n\nSection IV concludes with the practical problem of communicating with\nsubmerged submarines. This has been done under circumstances that emphasize\nhow different plane-wave fields are from QSE fields.\n\nSection V explores the notion of a hierarchy of physical quantities.\nPotentials are directly related to energies, so they are identified as primary\nquantities. Fields are derived from potentials, so they are secondary. Forces\nare determined by fields and so forces are also secondary. The hierarchy\nconcept is related to classical mechanics in that Newtonian physics is couched\nin terms of forces, and so it is secondary to versions of classical mechanics\nbased on energy-based system functions like the Lagrangian and Hamiltonian.\nMechanics formulated with system functions implies Newtonian mechanics, but the\nconverse is not true.\n\n\section{Symmetry violation}\n\nThe two examples presented have an important qualitative difference. 
The first\nexample -- a constant electric field -- is so elementary that the proper\nchoice of potentials is obvious, and there is no motivation to explore the\nproperties of the symmetry-violating alternative potentials. The next example\nis quite different in that the improper choice of potentials is very\nattractive to a laser-physics community that is accustomed to the dipole\napproximation. The requisite propagation property never appears within the\ndipole approximation, and its violation is thereby invisible.\n\nA preliminary step is to introduce the units and conventions employed in this\narticle, and to add some general remarks about terminology.\n\n\subsection{Units and conventions}\n\nGaussian units are employed for all electromagnetic quantities. The\nexpressions for the electric field $\mathbf{E}$ and magnetic field\n$\mathbf{B}$ in terms of the scalar potential $\phi$ and the 3-vector\npotential $\mathbf{A}$ are\n\begin{equation}\n\mathbf{E}=-\mathbf{\nabla}\phi-\frac{1}{c}\partial_{t}\mathbf{A},\qquad\mathbf{B}=\mathbf{\nabla\times A}. \label{a}\n\end{equation}\nA gauge transformation generated by the scalar function $\Lambda$ is\n\begin{equation}\n\widetilde{\phi}=\phi+\frac{1}{c}\partial_{t}\Lambda,\qquad\widetilde{\mathbf{A}}=\mathbf{A}-\mathbf{\nabla}\Lambda, \label{b}\n\end{equation}\nwhere $\Lambda$ must satisfy the homogeneous wave equation\n\begin{equation}\n\left( \frac{1}{c^{2}}\partial_{t}^{2}-\mathbf{\nabla}^{2}\right)\n\Lambda=\partial^{\mu}\partial_{\mu}\Lambda=0. \label{c}\n\end{equation}\nRelativistic quantities are expressed with the time-favoring Minkowski metric,\nwith the signature $\left( +---\right) $, where the scalar product of two\n4-vectors $a^{\mu}$ and $b^{\mu}$ is\n\begin{equation}\na\cdot b=a^{\mu}b_{\mu}=a^{0}b^{0}-\mathbf{a\cdot b}. 
\label{e}\n\end{equation}\nThe 4-vector potential $A^{\mu}$ incorporates the scalar and 3-vector\npotentials as\n\begin{equation}\nA^{\mu}:\left( \phi,\mathbf{A}\right) . \label{g}\n\end{equation}\nIn 4-vector notation, the two gauge transformation expressions in Eq.\n(\ref{b}) become the single expression\n\begin{equation}\n\widetilde{A}^{\mu}=A^{\mu}+\partial^{\mu}\Lambda. \label{h}\n\end{equation}\nBoth the initial and gauge-transformed 4-vector potentials must satisfy the\nLorenz condition\n\begin{equation}\n\partial^{\mu}A_{\mu}=0,\qquad\partial^{\mu}\widetilde{A}_{\mu}=0. \label{j}\n\end{equation}\nThe propagation 4-vector $k^{\mu}$ consists of the propagation 3-vector\n$\mathbf{k}$ as the space part, and the amplitude $\left\vert \mathbf{k}\right\vert =\omega\/c$ as the time component\n\begin{equation}\nk^{\mu}:\left( \omega\/c,\mathbf{k}\right) . \label{k}\n\end{equation}\nThe 4-vector $k^{\mu}$ defines the light cone and, according to the rule\n(\ref{e}), it is \textquotedblleft self-orthogonal\textquotedblright\n\begin{equation}\nk^{\mu}k_{\mu}=\left( \omega\/c\right) ^{2}-\mathbf{k}^{2}=0, \label{l}\n\end{equation}\nwhich is an important possibility in this non-Euclidean space.\n\nThe concept of transversality refers to the property of plane-wave fields\nexpressed in a relativistic context as \textit{covariant transversality}\n\begin{equation}\nk^{\mu}A_{\mu}=0, \label{m}\n\end{equation}\nin terms of the 4-potential $A^{\mu}$. In many textbooks on classical\nelectromagnetic phenomena, transversality is defined as \textit{geometrical\ntransversality}\n\begin{equation}\n\mathbf{k\cdot E}=0\text{ and }\mathbf{k\cdot B}=0, \label{n}\n\end{equation}\nin terms of the electric and magnetic fields. 
It can be shown that covariant\ntransversality implies geometrical transversality.\n\n\subsection{Terminology}\n\nDespite the conclusion in this paper that potentials are more basic than\nfields, it is not possible to avoid the use of the term \textquotedblleft\nfield\textquotedblright\ in a generic sense. For example, one important\nconclusion reached herein is that vector and scalar potentials provide more\ninformation than do electric and magnetic fields in the description of the\neffects of laser fields. In the preceding sentence, the term \textquotedblleft\nlaser field\textquotedblright\ is used generically to identify the radiation\ncreated by a laser, despite the particular result that potentials are the\nbetter approach in the description of that radiation. A similar problem arises\nwhen it is concluded that the dipole approximation amounts to the replacement\nof the \textquotedblleft transverse field\textquotedblright\ of a laser by the\nmore elementary \textquotedblleft longitudinal field\textquotedblright. In\neach of the phrases demarcated by quotation marks, the word \textquotedblleft\nfield\textquotedblright\ is used in a generic sense to identify an\nelectromagnetic phenomenon.\n\n\subsection{Constant electric field}\n\nThe problem of a particle of mass $m$ and charge $q$ immersed in a constant\nelectric field of magnitude $E_{0}$ is inherently one-dimensional. For present\npurposes, nothing is gained by going to three spatial dimensions. The problem\nis clearly one in which energy is conserved. By Noether's Theorem\n\cite{noether}, the Lagrangian must be independent of time $t$, so that the\nconnection between the electric field and potentials given in Eq. (\ref{a})\nmust depend only on the scalar potential $\phi$. Equation (\ref{a}) can then\nbe integrated to give the potentials\n\begin{equation}\n\phi=-xE_{0},\qquad A=0, \label{o}\n\end{equation}\nsince an additive constant of integration has no physical meaning. 
The\npotentials descriptive of this problem are unique, and given by Eq. (\ref{o}).\n\nThe Lagrangian function is the difference of the kinetic energy $T$ and the\npotential energy $U$\n\begin{align}\nL & =T-U\label{p}\\\n& =\frac{1}{2}m\overset{.}{x}^{2}+qxE_{0}. \label{p1}\n\end{align}\nThe Lagrangian equation of motion is\n\begin{equation}\n\frac{d}{dt}\frac{\partial L}{\partial\overset{.}{x}}-\frac{\partial L}{\partial x}=m\overset{..}{x}-qE_{0}=0, \label{q}\n\end{equation}\nwhich is just the elementary Newtonian equation\n\begin{equation}\nm\overset{..}{x}=qE_{0}. \label{r}\n\end{equation}\nThe simplest initial conditions for this problem -- initial position and\nvelocity set to zero -- lead to the solution\n\begin{equation}\nx=\frac{qE_{0}}{2m}t^{2}. \label{s}\n\end{equation}\nFrom Eqs. (\ref{p}) and (\ref{p1}), it follows that\n\begin{equation}\nT=\frac{1}{2m}\left( qE_{0}t\right) ^{2},\qquad U=-\frac{1}{2m}\left(\nqE_{0}t\right) ^{2},\qquad T+U=0. \label{t}\n\end{equation}\nThe anticipated conservation of energy holds true.\n\nDespite the uniqueness of the potentials of Eq. (\ref{o}), there exists an\napparently proper gauge transformation generated by the function\n\begin{equation}\n\Lambda=ctxE_{0}. \label{u}\n\end{equation}\nThe gauge-transformed potentials are\n\begin{equation}\n\widetilde{\phi}=0,\qquad\widetilde{A}=-ctE_{0}, \label{v}\n\end{equation}\nand the Lagrangian function is \cite{hrjmo}\n\begin{equation}\n\widetilde{L}=\frac{1}{2}m\overset{.}{x}^{2}-qtE_{0}\overset{.}{x}. \label{w}\n\end{equation}\nThe kinetic energy is unaltered ($\widetilde{T}=T$), but the new potential\nenergy is\n\begin{equation}\n\widetilde{U}=qtE_{0}\overset{.}{x}, \label{x}\n\end{equation}\nwhich is explicitly time-dependent. 
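The energy bookkeeping in the two gauges can be checked symbolically. The following sketch is an editorial illustration, not part of the original derivation (SymPy and the symbol names are assumptions); it verifies that the potentials of Eqs. (\ref{o}) and (\ref{v}) yield the same electric field, yet only the first choice conserves $T+U$ along the trajectory of Eq. (\ref{s}):

```python
import sympy as sp

# Illustrative symbols: q = charge, E0 = field magnitude, m = mass, c = speed of light,
# t = time, X = spatial coordinate
t, X, q, E0, m, c = sp.symbols('t X q E0 m c', positive=True)

# Gauge 1, Eq. (o): phi = -x E0, A = 0
phi1, A1 = -X * E0, sp.S(0)
# Gauge 2, Eq. (v): phi = 0, A = -c t E0
phi2, A2 = sp.S(0), -c * t * E0

# Both gauges give the same electric field E = -d(phi)/dx - (1/c) dA/dt
E1 = -sp.diff(phi1, X) - sp.diff(A1, t) / c
E2 = -sp.diff(phi2, X) - sp.diff(A2, t) / c
assert E1 == E0 and E2 == E0

# Trajectory of Eq. (s) and the kinetic energy on it
x = q * E0 * t**2 / (2 * m)
xdot = sp.diff(x, t)
T = m * xdot**2 / 2

U1 = -q * x * E0        # potential energy in gauge 1
U2 = q * t * E0 * xdot  # gauge-transformed potential energy, Eq. (x)

print(sp.simplify(T + U1))  # 0: energy conserved in gauge 1
print(sp.simplify(T + U2))  # nonzero, explicitly time-dependent: not conserved
```

The identical fields from the two potential pairs, together with the differing totals $T+U$, reproduce in miniature the symmetry-violation argument of this subsection.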
The new equation of motion is\n\begin{equation}\n\frac{d}{dt}\left( \frac{\partial\widetilde{L}}{\partial\overset{.}{x}}\right) -\frac{\partial\widetilde{L}}{\partial x}=m\overset{..}{x}-qE_{0}=0,\n\label{y}\n\end{equation}\nwhich is identical to that found in the original gauge, so that the solution\nis the same as Eq. (\ref{s}). However, the altered gauge has introduced a\nfundamental change. The gauge-transformed potential energy is evaluated as\n\begin{equation}\n\widetilde{U}=\frac{1}{m}\left( qE_{0}t\right) ^{2}, \label{z}\n\end{equation}\nso that\n\begin{equation}\n\widetilde{T}+\widetilde{U}=\frac{3}{2m}\left( qE_{0}t\right) ^{2}.\n\label{aa}\n\end{equation}\nThe total energy is not conserved, as was presaged by the explicit time\ndependence of the gauge-transformed Lagrangian (\ref{w}).\n\nHow did this happen? One constraint placed on gauge transformations (see, for\nexample, the classic text by Jackson \cite{jackson}) is that the generating\nfunction must be a scalar function that satisfies the homogeneous wave\nequation, as in Eq. (\ref{c}). This is satisfied by the function (\ref{u}).\nThe only other condition is the Lorenz condition (\ref{j}), which is satisfied\nby the potentials before and after transformation. However, there is no\ncondition that guarantees preservation of symmetries inherent in the physical\nproblem. It is not enough to employ appropriate fields; it is necessary to\nemploy the appropriate potentials to ensure that all aspects of the physical\nproblem are rendered properly.\n\nThis writer is unaware of any instance where inappropriate potentials have\nbeen accepted and employed in this exceedingly simple problem. The same cannot\nbe said for the next example.\n\n\subsection{Plane-wave field}\n\nLaser fields are of central importance in contemporary physics, and laser\nfields are plane-wave fields. 
A plane-wave field is the only electromagnetic\nphenomenon that has the ability to propagate indefinitely in vacuum without\nthe continued presence of sources. In the typical laboratory experiments with\nlasers, the practical consequence of this ability to propagate without need\nfor sources is that all fields that arrive at a target can only be a\nsuperposition of plane-wave fields. Any contamination introduced by optical\nelements like mirrors or gratings can persist for only a few wavelengths away\nfrom such elements. On the scale of a typical laboratory optical table, this\nis negligible.\n\nPlane-wave fields propagate at the speed of light in vacuum; they are\nfundamentally relativistic. The 1905 principle of Einstein is basic: the speed\nof light is the same in all inertial frames of reference \cite{einstein}. The\nmathematical statement of this principle is that any description of a\nplane-wave field can depend on the spacetime coordinate $x^{\mu}$ only as a\nscalar product with the propagation 4-vector $k^{\mu}$. The consequence of\nthis projection of the spacetime 4-vector onto the light cone is that any\nchange of gauge must be such as to be confined to the light cone. That is,\nwith the definition\n\begin{equation}\n\varphi\equiv k^{\mu}x_{\mu}, \label{ab}\n\end{equation}\nthe field 4-vector must be such that\n\begin{equation}\nA_{pw}^{\mu}=A_{pw}^{\mu}\left( \varphi\right) , \label{ac}\n\end{equation}\nwhere the subscript \textit{pw} stands for \textit{plane-wave}. When the gauge\ntransformation of Eq. (\ref{h}) is applied, the gauge-altered 4-vector\npotential is confined by the condition (\ref{ac}) to the form \cite{hrgge}\n\begin{equation}\n\widetilde{A}^{\mu}=A^{\mu}+k^{\mu}\Lambda^{\prime}, \label{ad}\n\end{equation}\nwhere the gauge-change generating function can itself depend on $x^{\mu}$ only\nin the form of $\varphi$, and\n\begin{equation}\n\Lambda^{\prime}=\frac{d}{d\varphi}\Lambda\left( \varphi\right) . 
\label{ae}\n\end{equation}\nAs is evident from Eq. (\ref{l}), transversality is maintained by the gauge\ntransformation (\ref{ad}).\n\nA further limitation arises if an electron is subjected to a scalar binding\npotential in addition to the vector potential associated with the laser field.\nA relativistic Hamiltonian function for a charged particle in a plane-wave\nfield contains a term of the form\n\begin{equation}\n\left( i\hslash\partial^{\mu}-\frac{q}{c}A^{\mu}\right) \left(\ni\hslash\partial_{\mu}-\frac{q}{c}A_{\mu}\right) . \label{ae1}\n\end{equation}\nThis occurs in the classical case, in the Klein-Gordon equation of quantum\nmechanics, and in the second-order Dirac equation of quantum mechanics\n\cite{feynmangm,schweber}. The expansion of the expression in Eq. (\ref{ae1})\ncontains the squared time part\n\begin{equation}\n\left( i\hslash\partial_{t}-\frac{q}{c}A^{0}\right) ^{2}. \label{ae2}\n\end{equation}\nIf $A^{0}$ contains contributions from both a scalar potential and the time\npart of the plane-wave 4-vector potential, then executing the square in Eq.\n(\ref{ae2}) would give a term containing the product of these two scalar\npotentials that is not physical; it does not occur in the reduction of\nrelativistic equations of motion to their nonrelativistic counterparts\n\cite{hrgge}. This applies specifically to applications in AMO physics. That\nis, it must be true that \cite{hr79,hrgge}\n\begin{equation}\nA_{pw}^{0}=\phi_{pw}=0. \label{af}\n\end{equation}\nThis means that gauge freedom vanishes. Only the \textit{radiation gauge}\n(also known as \textit{Coulomb gauge}) is possible. This is the gauge in which\nscalar binding influences are described by scalar potentials $\phi$ and laser\nfields are described by 3-vector potentials $\mathbf{A}$.\n\nConsider the gauge transformation generated by the function \cite{hr79}\n\begin{equation}\n\Lambda=-A^{\mu}\left( \varphi\right) x_{\mu}. 
\label{ag}\n\end{equation}\nThis leads to the transformed gauge\n\begin{equation}\n\widetilde{A}^{\mu}=-k^{\mu}x^{\nu}\left( \frac{d}{d\varphi}A_{\nu}\right) ,\n\label{ah}\n\end{equation}\nwhich was introduced in Ref. \cite{hr79} in an attempt to base the Keldysh\napproximation \cite{keldysh} on plane-wave fields rather than on quasistatic\nelectric fields. The transformed 4-potential can also be written as\n\cite{hr79}\n\begin{equation}\n\widetilde{A}^{\mu}=-\frac{k^{\mu}}{\omega\/c}\mathbf{r\cdot E}\left(\n\varphi\right) , \label{ai}\n\end{equation}\nthus suggesting a relativistic generalization of the nonrelativistic\n\textit{length gauge} used by Keldysh and widely employed within the AMO\ncommunity. The problem with the $\widetilde{A}^{\mu}$ of Eq. (\ref{ah}) or\n(\ref{ai}) is that it violates the symmetry (\ref{ac}) required of a\npropagating field. Nevertheless, this $\widetilde{A}^{\mu}$ satisfies the\nLorenz condition (\ref{j}) and the transversality condition (\ref{m}); and the\ngenerating function of Eq. (\ref{ag}) satisfies the homogeneous wave equation\nof Eq. (\ref{c}) \cite{hr79}. That is, all the usual requirements for a gauge\ntransformation are met even though the transformed 4-vector potential\n$\widetilde{A}^{\mu}$ of Eq. (\ref{ah}) or (\ref{ai}) violates the symmetry\nrequired of a propagating field like a laser field.\n\nThis violation of a basic requirement for a laser field has unphysical and\nhence unacceptable consequences. The most obvious is that the covariant\nstatement of the all-important \cite{hrup} ponderomotive energy $U_{p}$\nproduces a null result since\n\begin{equation}\n\widetilde{U}_{p}\sim\widetilde{A}^{\mu}\widetilde{A}_{\mu}=0 \label{aj}\n\end{equation}\nas a consequence of the self-orthogonality of the propagation 4-vector\n$k^{\mu}$. The resemblance of Eq. 
(\ref{ai}) to the length-gauge\nrepresentation of a quasistatic electric field suggests a tunneling model for\nthe relativistic case \cite{vspopov,heidelberg}, which is inappropriate for\nstrong laser fields. Tunneling can occur only through interference between\nscalar potentials, and a strong laser field is inherently vector, not scalar.\n\nThe basic defect of the potentials (\ref{ah}) or (\ref{ai}) is violation of\nthe Einstein condition of the constancy of the speed of light in all Lorentz\nframes, despite the validity of the gauge transformation leading to those\npotentials. The importance of the physical situation in which this occurs is\nrobust evidence of the significance of the proper choice of potentials, since\nthe electric and magnetic fields attained from the unacceptable potentials\n(\ref{ah}) or (\ref{ai}) are exactly the same as those that follow from\npotentials that properly satisfy the condition (\ref{ac}).\n\n\section{Gauge transformations and unitarity}\n\nUnitary transformations in quantum physics preserve the values of physical\nobservables. It was shown above that not all gauge transformations produce\nphysically acceptable results. Therefore, gauge transformations are not\nunitary transformations. This conclusion is supported by the basic structure\nof transition amplitudes.\n\nTransition amplitudes without resort to perturbation theory are best expressed\nby S matrices. These are of two (equivalent) types. The \textit{direct-time}\nor \textit{post} amplitude is\n\begin{equation}\n\left( S-1\right) _{fi}=-\frac{i}{\hslash}\int_{-\infty}^{\infty}dt\left(\n\Phi_{f},H_{I}\Psi_{i}\right) , \label{aq}\n\end{equation}\nand the \textit{time-reversed} or \textit{prior} amplitude is\n\begin{equation}\n\left( S-1\right) _{fi}=-\frac{i}{\hslash}\int_{-\infty}^{\infty}dt\left(\n\Psi_{f},H_{I}\Phi_{i}\right) . \label{ar}\n\end{equation}\nThe indices $f$ and $i$ label the final and initial states. 
The $\\Phi$ states\nare non-interacting states and the $\\Psi$ states are fully interacting states\nsatisfying, respectively, the Schr\\\"{o}dinger equation\n\\begin{align}\ni\\hslash\\partial_{t}\\Phi & =H_{0}\\Phi,\\label{as}\\\\\ni\\hslash\\partial_{t}\\Psi & =\\left( H_{0}+H_{I}\\right) \\Psi, \\label{at\n\\end{align}\nwhere $H_{I}$ is the interaction Hamiltonian.\n\nIn a gauge transformation, the matrix elements within the time integrations in\nEqs. (\\ref{aq}) and (\\ref{ar}) transform a\n\\begin{align}\n\\left( \\Phi_{f},H_{I}\\Psi_{i}\\right) & \\rightarrow\\left( \\Phi\n_{f},\\widetilde{H}_{I}\\widetilde{\\Psi}_{i}\\right) ,\\label{au}\\\\\n\\left( \\Psi_{f},H_{I}\\Phi_{i}\\right) & \\rightarrow\\left( \\widetilde{\\Psi\n}_{f},\\widetilde{H}_{I}\\Phi_{i}\\right) . \\label{av\n\\end{align}\nBecause the noninteracting states are unaltered in a gauge transformation,\nthere is no necessary\\textit{ }equivalence between the two sides of the\nexpressions in Eqs. (\\ref{au}) and (\\ref{av}).\n\nThose authors that endorse the favored status of the length gauge\n\\cite{yang,kobesmirl,beckerss,lss,jbauer} \\textquotedblleft\nsolve\\textquotedblright\\ this problem by attaching a unitary operator to all\nstates, including non-interacting states: $\\widetilde{\\Phi}=U\\Phi$,\n$\\widetilde{\\Psi}=U\\Psi$. All $U$ and $U^{-1}$ operators exactly cancel in the\nmatrix element, and the transition amplitude is unchanged. This is what leads\nto the property \\textquotedblleft gauge-invariant formalism\\textquotedblrigh\n\\ sometimes ascribed to the length gauge. 
However, this procedure amounts to\nan identity or to a change of \textit{quantum picture}, but not to a gauge transformation.\n\n\section{Fundamental contrasts in the applicability of fields and potentials}\n\nThe first example to be presented is the very basic one of the impossibility\nof expressing the Schr\\\"{o}dinger equation directly in terms of electric and\nmagnetic fields, which should be possible if fields are truly more fundamental\nthan potentials. Other direct examples of difficulties posed by the assumption\nof the fundamental importance of fields are shown, many of them having long\ngone unnoticed within the strong-field community.\n\n\subsection{Schr\\\"{o}dinger equation}\n\nThe Schr\\\"{o}dinger equation\n\begin{equation}\ni\hslash\partial_{t}\Psi\left( t\right) =H\Psi\left( t\right) , \label{ak}\n\end{equation}\nwhen viewed as a statement in a Hilbert space (that is, without selecting a\nrepresentation such as the configuration representation or the momentum\nrepresentation) states that rotating a state vector $\Psi\left(\nt\right) $ by the operator $H$ within the Hilbert space produces the same\neffect as differentiating the vector with respect to time (multiplied by\n$i\hslash$). Time $t$ is an external parameter upon which the state vectors\ndepend, which accounts for why Eq. (\ref{ak}) specifies $t$ as a label\nindependent of the Hilbert space. A unitary transformation preserves this\nequivalence. This can be stated as\n\begin{equation}\ni\hslash\partial_{t}-\widetilde{H}=U\left( i\hslash\partial_{t}-H\right)\nU^{-1}. \label{al}\n\end{equation}\nSince Eq. 
(\ref{al}) can be written as\n\begin{equation}\n\widetilde{H}=UHU^{-1}+i\hslash\left( \partial_{t}U\right) U^{-1},\n\label{am}\n\end{equation}\nthis shows explicitly that the Hamiltonian operator does not transform\nunitarily if there is any time dependence in $U$.\n\nAn important gauge transformation is that introduced by G\\\"{o}ppert-Mayer\n\cite{gm}, widely employed in the AMO community. This transformation is given\nby\n\begin{equation}\nU_{GM}=\exp\left( \frac{ie}{\hslash c}\mathbf{r\cdot A}\left( t\right)\n\right) , \label{an}\n\end{equation}\nwhich depends explicitly on time when $\mathbf{A}\left( t\right) $ describes\na laser field within the dipole approximation, meaning that Eq. (\ref{am}) is consequential.\n\nThe fact that, in general,\n\begin{equation}\n\widetilde{H}\neq UHU^{-1}, \label{ao}\n\end{equation}\nis the explanation for the curious result to be found in many papers (for\nexample, Refs. \cite{yang,kobesmirl,beckerss,lss,jbauer}) that the\n$\mathbf{r\cdot E}$ potential is a preferred potential. If any other potential\nis employed in solving the Schr\\\"{o}dinger equation, then the claim is made\nthat a transformation factor must be employed even on a non-interacting state.\nThere is a logical contradiction inherent in the requirement that a\nnon-interacting state must incorporate a factor that depends on an\ninteraction, but the list of published papers that accept this premise is much\nlonger than the salient examples cited here. The underlying problem is the\nassumption that a gauge transformation transforms the Hamiltonian unitarily.\nThat problem exists in all of the references just cited, although it is\nusually submerged in complicated manipulations. It is especially clear in Ref.\n\cite{jbauer}, where it is specified that all operators $O$ transform under a\ngauge transformation according to the unitary-transformation rule\n\begin{equation}\n\widetilde{O}=UOU^{-1},\qquad U^{-1}=U^{\dag}. 
\label{ap}\n\end{equation}\nThat specification is applied to $H$, in violation of the condition\n(\ref{am}), and to the interaction Hamiltonian $H_{I}$, with no explanation\nfor how it is possible to gauge-transform from the length-gauge interaction to\nany other gauge in view of the absence of operators in the scalar potential\n$\mathbf{r\cdot E}$. In the scheme proposed in Refs.\n\cite{yang,kobesmirl,beckerss,lss,jbauer}, if the problem is initially\nformulated in the context of the $\mathbf{r\cdot E}$ potential, it is never\npossible to transform to any other gauge. This explains the use of the phrase\n\textquotedblleft gauge-invariant formulation\textquotedblright\ with respect\nto $\mathbf{r\cdot E}$ to be found in some published works.\n\n\subsection{Locality and nonlocality}\n\nFields are derived from potentials by the calculus process of differentiation,\nas exhibited in Eq. (\ref{a}). Differentiation is carried out at a point in\nspacetime. It is \textit{local}. If potentials are to be recovered from\nfields, integration is required, drawing on information from a range of\nspacetime values; it is \textit{nonlocal}. The fact that the Schr\\\"{o}dinger\nequation requires the local information from potentials, and cannot be\ndescribed by fields without introducing nonlocality in spacetime, is direct\nevidence that potentials are more fundamental than fields.\n\n\subsection{Ambiguity in the electromagnetic field tensors}\n\nThe basic field tensor of electrodynamics is defined as\n\begin{equation}\nF^{\mu\nu}=\partial^{\mu}A^{\nu}-\partial^{\nu}A^{\mu}. \label{aj1}\n\end{equation}\nIt is important to note that this expression is in terms of the derivatives of\npotentials rather than the potentials themselves. 
Thus it is not surprising\nthat the Lorentz invariant found from the inner product of $F^{\mu\nu}$ with\nitself yields an expression in terms of fields\n\begin{equation}\nF^{\mu\nu}F_{\mu\nu}=2\left( \mathbf{B}^{2}-\mathbf{E}^{2}\right) .\n\label{aj2}\n\end{equation}\nA dual tensor can be defined as\n\begin{equation}\nG^{\mu\nu}=\frac{1}{2}\epsilon^{\mu\nu\rho\lambda}F_{\rho\lambda}, \label{aj3}\n\end{equation}\nwhere $\epsilon^{\mu\nu\rho\lambda}$ is the completely antisymmetric fourth-rank\ntensor. (The conventions of Jackson \cite{jackson} are being employed.) The\ninner product of the basic and dual tensors gives a second Lorentz invariant\n\begin{equation}\nG^{\mu\nu}F_{\mu\nu}=-4\mathbf{B\cdot E}, \label{aj4}\n\end{equation}\nalso in terms of fields. The two Lorentz invariants\n\begin{equation}\n\mathbf{E}^{2}-\mathbf{B}^{2},\quad\mathbf{E\cdot B} \label{aj5}\n\end{equation}\nare said to characterize the electrodynamic environment.\n\nAn important special case is that of transverse, propagating fields, where\n\begin{equation}\n\mathbf{E}^{2}-\mathbf{B}^{2}=0,\quad\mathbf{E\cdot B}=0. \label{aj6}\n\end{equation}\nThe properties (\ref{aj6}) are the reason that radiation fields are sometimes called\n\textquotedblleft null fields\textquotedblright. (The terms radiation field,\npropagating field, transverse field, and plane-wave field are used\ninterchangeably here.) 
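The degeneracy of the invariants of Eq. (\ref{aj5}) for propagating fields and for constant crossed fields can be made concrete numerically. The sketch below is an editorial illustration, not part of the original text; the field values and the function name are arbitrary assumptions:

```python
import numpy as np

def invariants(E, B):
    """Return the two Lorentz invariants of Eq. (aj5), up to constant factors:
    F^{mu nu}F_{mu nu} is proportional to B^2 - E^2, and
    G^{mu nu}F_{mu nu} is proportional to B.E (Gaussian units)."""
    E, B = np.asarray(E, float), np.asarray(B, float)
    return E @ E - B @ B, E @ B

# A linearly polarized plane wave sampled at one spacetime point:
# E along x, B along y, equal magnitudes, propagation along z.
amp = 0.7
pw = invariants([amp, 0.0, 0.0], [0.0, amp, 0.0])

# Constant crossed fields: the same geometric configuration, but static.
cc = invariants([1.3, 0.0, 0.0], [0.0, 1.3, 0.0])

print(pw, cc)  # both pairs vanish: the invariants cannot distinguish the two cases
```

Both pairs of invariants are identically zero, which is exactly the ambiguity discussed next: the invariants alone cannot separate a propagating field from a static crossed-field configuration.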
Radiation fields propagate at the speed of light in vacuum,\nand they have the unique character that, after initial formation, they\npropagate indefinitely in vacuum without the presence of sources.\n\nHowever, the invariants (\ref{aj6}) are not unique to radiation fields; they\napply also to \textquotedblleft constant crossed fields\textquotedblright.\nThat is, it is always possible to generate static electric and magnetic fields\nof equal magnitude that are perpendicular to each other, and that will thus possess\nzero values for both of the Lorentz invariants of the electromagnetic field.\nConstant crossed fields do not propagate, and they cannot exist without the\npresence of sources. They are unrelated to radiation fields despite sharing\nthe same values of the Lorentz invariants. Most importantly, constant crossed\nfields cannot be considered the zero-frequency limit of radiation fields,\nas they are sometimes described \cite{nikrit66,ritus85}. All radiation fields\npropagate at the speed of light for all frequencies, no matter how low. There\nis no possible zero-frequency static limit \cite{hrtun}.\n\nThere is no ambiguity when radiation fields and constant crossed fields are\nexpressed in terms of their potentials. Radiation-field potentials possess the\nperiodicity inherent in trigonometric dependence on the $\varphi$ of Eq.\n(\ref{ab}). 
This is unrelated to the $\\phi=-\\mathbf{r\\cdot E}_{0}$ potential\nof (\\ref{o}) for a constant electric field $\\mathbf{E}_{0}$, and\n$\\mathbf{A}=-\\left( \\mathbf{r\\times B}_{0}\\right) \/2$ for a constant\nmagnetic field $\\mathbf{B}_{0}$, both of which require source terms for their existence.\n\n\\subsection{Lorentz force}\n\nThe force exerted on a particle of charge $q$ moving with velocity\n$\\mathbf{v}$ in a field with electric and magnetic components $\\mathbf{E}$ and\n$\\mathbf{B}$ is given by the Lorentz force expression\n\\begin{equation}\n\\mathbf{F}=q\\left( \\mathbf{E}+\\frac{\\mathbf{v}}{c}\\times\\mathbf{B}\\right) .\n\\label{aj7}\n\\end{equation}\nIn a plane-wave field, the electric and magnetic fields are of equal\nmagnitude: $\\left\\vert \\mathbf{E}\\right\\vert =\\left\\vert \\mathbf{B}\\right\\vert\n$. Thus, under conditions where $\\left\\vert \\mathbf{v}\\right\\vert \/c\\ll1$, the\nmagnetic component of the force is minor as compared to the electric\ncomponent. The implication is that, as the field frequency declines, the\nmotion-related magnetic component reduces to an adiabatic limit. This concept\nof adiabaticity justifies the complete neglect of the magnetic field that is a\nkey element of the dipole approximation, applied within the AMO community in\nthe form\n\\begin{equation}\n\\mathbf{E}=\\mathbf{E}\\left( t\\right) ,\\quad\\mathbf{B}=0. \\label{aj8}\n\\end{equation}\nThe adiabaticity line of reasoning suggests a so-called adiabatic limit where\nthe field frequency declines to zero, and the plane wave field behaves as a\nconstant crossed field that satisfies the plane-wave condition (\\ref{aj6}).\n\nThe entire line of reasoning that involves the concepts of adiabaticity,\nadiabatic limit, and a zero-frequency limit for plane-wave fields is\nfield-based and erroneous \\cite{hr101,hrtun}.
When the problem is treated in\nterms of potentials, it becomes clear that $v\/c$ approaches unity for very\nstrong fields even when the frequency is very low, and the magnetic force\nbecomes equivalent to the electric force for very strong fields.\n\n\\subsection{Critical field}\n\nThe \\textquotedblleft critical field\\textquotedblright\\ in electrodynamics is\nrelated to the spontaneous breakdown of the vacuum into electron-positron\npairs. The critical field is defined as the electric field strength at which\nthe $\\pm mc^{2}$ limits for the rest energies of electron and positron in a\nparticle-hole picture are \\textquotedblleft tilted\\textquotedblright\\ by the\nelectric field to allow tunneling between positive and negative energy states\nwhen the spatial limits of the tunnel are separated by an electron Compton\nwavelength. This type of pair production is called Sauter-Schwinger pair\nproduction \\cite{sauter,schwinger}. It is fundamentally different from\nBreit-Wheeler pair production \\cite{bw,hrdiss,hr62} to be discussed below.\n\nThe reason for the fundamental difference is that Sauter-Schwinger pair\nproduction is a phenomenon due to electric fields and Breit-Wheeler is due to\nplane-wave fields. The distinction between pure electric fields and plane-wave\nfields could hardly be more clear, since both types of fields have unique\ngauge choices that are contrasting.\n\nThe critical field is often mentioned as a goal of strong-field laser\nfacilities, which is basically a \\textit{non-sequitur}. The critical field\napplies only to electric fields and lasers produce plane-wave fields. Even if\na laser field were sufficiently intense that its electric component had the\nmagnitude of the QSE critical field, pair production from the vacuum cannot\noccur because the photons of the laser field convey momentum. 
A\ncounter-propagating plane-wave field is necessary to satisfy momentum\nconservation as well as energy conservation in the production of pairs from\nthe vacuum. This is then the two-fields Breit-Wheeler process, which is\nunrelated to the single-field Sauter-Schwinger process.\n\n\\subsection{Pair production from the vacuum}\n\nFor many years the stated ultimate goal of large-laser programs was to achieve\na laser intensity such that the electric component of the laser is equal to\nthe critical field discussed in the preceding subsection. This magnitude of\nelectric field corresponds to an intensity of about $4.6\\times10^{29}W\/cm^{2}$.\n\nThe problem is that the Schwinger limit applies \\textit{only} to electric\nfields. It does not apply to laser fields \\cite{hrspie}, as explained above.\nThe only way to produce momentum balance with a laser field while still\nproducing pairs from the vacuum is to have the laser beam collide with\noppositely-directed photons, as proposed in Ref. \\cite{hr71}. This was\npredicted to be done on a practical basis at a linear accelerator facility\nsuch as that at SLAC\\ (Stanford Linear Accelerator Center), with a laser\nintensity of only slightly greater than $10^{18}W\/cm^{2}$. An important note\nis that the prediction of Ref. \\cite{hr71} was for the use of the\nthen-important ruby laser at a different pulse length and a different energy\nof the energetic electron beam used to produce the countervailing photon\nfield. The predicted threshold of 25 photons from the laser field in Ref.\n\\cite{hr71} is altered to 5 photons for the parameters of the experiment that\nwas actually done at SLAC in 1997 \\cite{burke}. The theoretical prediction of\nan effective threshold intensity of about $10^{18}W\/cm^{2}$ is maintained\nbecause of the essential independence of the laser frequency that was remarked\nin Ref.
\\cite{hr71}.\n\n(One \\textit{caveat} about the experiment is that it was reported as a\nhigh-order perturbative result, whereas it is readily shown to be at an\nintensity beyond the radius of convergence of perturbation theory \\cite{epjd}.\nA second problem is that it was described as \\textquotedblleft light-by-light\nscattering\\textquotedblright, which is a different process altogether. Feynman\ndiagrams of these processes have electron and positron as emergent particles\nin a pair production process, while light-by-light scattering has emergent photons.)\n\nThe striking difference between the $4.6\\times10^{29}W\/cm^{2}$ required to\nattain a laser electric component equal to the critical field and the actual\n$1.3\\times10^{18}W\/cm^{2}$ required for the SLAC experiment is evidence of a\nmisplaced focus on the electric field required for vacuum pair production. The\nrequired intensity of the laser field depends on the properties of the\ncounter-propagating field, but it is never as large as the Sauter-Schwinger\ncritical field.\n\n\\subsection{Low frequency limit of a plane-wave field}\n\nIt is conventional to view low-frequency laser-induced phenomena from the\nstandpoint of the Lorentz force expression of Eq. (\\ref{aj7}). In a plane-wave\nfield, the electric and magnetic fields are of equal magnitude: $\\left\\vert\n\\mathbf{E}\\right\\vert =\\left\\vert \\mathbf{B}\\right\\vert $. Thus, under\nconditions where $\\left\\vert \\mathbf{v}\\right\\vert \/c\\ll1$, the magnetic\ncomponent of the force is minor as compared to the electric component. The\nimplication is that, as the field frequency declines, the motion-related\nmagnetic component reduces to an adiabatic limit. This concept of adiabaticity\nappears to justify the complete neglect of the magnetic field that is a key\nelement of the dipole approximation, applied within the AMO community in the\nform given in Eq. (\\ref{aj8}). 
Adiabaticity suggests a so-called adiabatic\nlimit, where the field frequency declines to zero, and the plane wave field\nbehaves as a constant crossed field that satisfies the plane-wave condition\n(\\ref{aj6}).\n\nThe entire line of reasoning that involves the concepts of adiabaticity,\nadiabatic limit, and a zero-frequency limit for plane-wave fields is\nfield-based and erroneous \\cite{hr101,hrtun}. When the problem is treated in\nterms of potentials, it becomes clear that $v\/c$ approaches unity for very\nstrong fields. The magnetic force becomes equivalent to the electric force for\nvery strong fields.\n\nAnalysis from a potentials standpoint is in stark contrast to a fields-based approach.\n\nThe ponderomotive potential energy of a charged particle in a plane-wave field\nis a fundamental property of the particle \\cite{hrup}, and it becomes\ndivergent as the field frequency approaches zero. The immediate consequence is\nthat the limit $\\omega\\rightarrow0$ causes the dipole approximation to fail\n\\cite{hr101,hrtun}, and corresponds to an extremely relativistic environment\n\\cite{hrup}. This contradicts maximally the field-based conclusions.\n\nThe ponderomotive energy $U_{p}$ is given by\n\\begin{equation}\nU_{p}=\\frac{q^{2}}{2mc^{2}}\\left\\langle \\left\\vert A^{\\mu}A_{\\mu}\\right\\vert\n\\right\\rangle , \\label{aj9}\n\\end{equation}\nwhere the angle brackets denote a cycle average, and the absolute value needs\nto be indicated because the 4-vector potential $A^{\\mu}$ is a spacelike 4-vector\nand its square is thus negative with the metric being employed. The\nponderomotive energy is based on potentials.
When expressed in terms of field\nintensity $I$, $U_{p}$ behaves as\n\\begin{equation}\nU_{p}\\sim I\/\\omega^{2}, \\label{aj10}\n\\end{equation}\nwhich explains the relativistic property of the charged particle as the\nfrequency approaches zero.\n\n\\subsection{Extremely-low-frequency radio waves}\n\nA central issue in this article is that transverse fields and longitudinal\nfields are fundamentally different, even when they seem to have some\nproperties in common. An effective example is the matter of communicating with\nsubmerged submarines with extremely low frequency (ELF) radio waves. The U. S.\nNavy operated such a system \\cite{wikisanguine}, designed to communicate with\nsubmarines submerged at depths of the order of $100m$ over an operational\nrange of about half of the Earth's surface. The point of using extremely low\nfrequencies is the large skin depth in a conducting medium (seawater) that can\nbe achieved. What is most remarkable about the system is that the frequency of\n$76Hz$ that was used has a wavelength of about $4\\times10^{6}m$, which is\napproximately $0.62$ times the radius of the Earth. The system could convey an\nintelligible signal over about half of the Earth's surface. Considering the\nlength of the submarine (about $100m$) to be the size of the receiving\nantenna, this means that the wavelength to antenna-length ratio is about\n$4\\times10^{4}$. That is, the received radio wave is constant over the entire\nlength of the receiving antenna to a very high degree of accuracy. A\nnearly-constant electric field cannot be detected at a distance of half of our\nplanet away from its source; a nearly-constant electric field cannot penetrate\n$100m$ through a conducting medium; and so a nearly-constant electric field\ncannot convey an intelligible signal in the presence of all of these extremely\neffective barriers.
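The quoted figures are easy to reproduce (an illustrative back-of-envelope check added here; the Earth radius is an assumed standard value, not taken from the text):

```python
c = 2.998e8          # speed of light in vacuum, m/s
f = 76.0             # ELF operating frequency, Hz
R_earth = 6.371e6    # mean Earth radius, m (assumed standard value)
antenna = 100.0      # submarine length taken as the antenna size, m

wavelength = c / f                     # about 3.9e6 m
ratio_earth = wavelength / R_earth     # about 0.62 Earth radii
ratio_antenna = wavelength / antenna   # about 4e4
print(wavelength, ratio_earth, ratio_antenna)
```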
A longitudinal field is fundamentally different from a\ntransverse field.\n\nThe proposed \\textquotedblleft local constant-field\napproximation\\textquotedblright\\ (LCFA) for relativistic laser effects\n\\cite{ritus85,heidrmp} is based on the presumed similarity of low-frequency\nlaser fields to constant crossed fields. This was shown earlier in this\nmanuscript to be a meaningless association. The application involved in the\ncommunication with submerged submarines is a practical demonstration that the\nLCFA is not a valid concept.\n\n\\section{Hierarchy of physical effects}\n\nIn basic calculus, if a function $f\\left( x\\right) $ is known, then so is\n$df\/dx$; differentiation is local. If the derivative is all that is known, the\nprocess of integration to find $f\\left( x\\right) $ requires the knowledge of\n$df\/dx$ over a range of values; integration is nonlocal. In electromagnetism,\npotentials are the analog of $f\\left( x\\right) $ and fields correspond to\n$df\/dx$. The example of the Schr\\\"{o}dinger equation shows that knowledge of\npotentials is primary and fields are secondary. This ordering can be extended\nto other physical quantities. For example, the Lorentz force expression in Eq.\n(\\ref{aj7}) relates forces directly to fields, meaning that forces are\nsecondary. A further implication is that Newtonian mechanics, expressed in\nterms of forces, is secondary to Lagrangian and Hamiltonian mechanics that are\nexpressed in terms of energies; that is, in terms of potentials. It is no\naccident that mechanics textbooks show that formalisms based on system\nfunctions like the Lagrangian or Hamiltonian imply the Newtonian formalism,\nbut to go in the reverse direction requires an extrapolation of concepts.\n\nThis ordering, or hierarchy, of physical quantities has consequences in\nproblems that go beyond simple electric-only or magnetic-only fields.
In\nlaser-related problems where both electric and magnetic fields are involved,\nit is possible to arrive at invalid inferences if only secondary influences\nare regarded as controlling. An example is the common practice of viewing\nelectric fields as being the dominant quantities in long-wavelength\ncircumstances where the dipole approximation appears to be valid. This\ndisguises the fact that extremely low frequencies can lead to relativistic\nbehavior, where electric fields supply inadequate information\n\\cite{hr101,hrtun}, and inappropriate concepts such as adiabaticity exist.\n\nA cogent example is the critical field. A widely used \\textquotedblleft\nstrong-field QED parameter\\textquotedblright\\ is simply the ratio of the\nelectric field to the critical field. This has meaning only for\nstatic-electric or for QSE fields. For strong-field laser applications,\nrelevant intensity parameters are all ratios of energies. See, for example,\nthe section entitled \\textquotedblleft Measures of intensity\\textquotedblright\n\\ in Ref. \\cite{hrrev}. (The early, but still useful review article by Eberly\n\\cite{eberlyreview} is also instructive in this regard.)
In particular, the\n\\textquotedblleft free-electron intensity parameter\\textquotedblright\n\\ $z_{f}=2U_{p}\/mc^{2}$ is known to measure the coupling between an electron\nand a nonperturbatively intense plane-wave field \\cite{hrdiss,hr62,hr62b,hrup}\n, replacing the fine-structure constant of perturbative electrodynamics.\n\nA special insight arises when the $z_{f}$ parameter is related to the\nfine-structure constant $\\alpha$ \\cite{hrup}\n\\begin{equation}\nz_{f}=\\alpha\\rho\\left( 2\\lambda\\lambdabar_{C}^{2}\\right) , \\label{aj11}\n\\end{equation}\nwhere $\\rho$ is the number of photons per unit volume, and $2\\lambda\n\\lambdabar_{C}^{2}$ is approximately the volume of a cylinder of radius equal\nto the electron Compton wavelength $\\lambdabar_{C}$ and a length given by the\nwavelength $\\lambda$ of the laser field. That is, it appears that all of the\nphotons within the cylinder participate in the coupling between the electron\nand the laser field. The electron Compton wavelength is the usual interaction\ndistance for a free electron, but the wavelength can be a macroscopic\nquantity. The \\textquotedblleft strong-field physics\\textquotedblright\\ that\narises from the application of the Volkov solution to problems involving the\ninteraction of very intense radiation fields with matter\n\\cite{sengupta,hrdiss,hr62}, thus appears to be the bridge connecting quantum\nelectrodynamics with the classical electrodynamics of Maxwell.\n\nOther examples of the hazards of excessive dependence on secondary quantities\nhave been mentioned above in the context of the assumed dominance of the\nelectric component of the Lorentz force at low frequencies, and the inference\nfrom the Lorentz invariants that constant crossed fields are related to\npropagating fields.\n\n\\section{Failure of perturbation theory}\n\nPerturbation theory has been the cornerstone of QED since the Nobel-prize\nwinning work of Feynman, Schwinger, and Tomonaga.
The radius of convergence of\nperturbation theory was examined in depth a long time ago \\cite{hrdiss}. The\nmotivation for the study was the demonstration by Dyson \\cite{dyson} that,\ndespite its remarkable quantitative success, standard covariant QED has a zero\nradius of convergence for a perturbation expansion. The question about whether\na strong-field theory based on the Volkov solution of the Dirac equation\n\\cite{volkov} could be convergent was the motivation for Ref. \\cite{hrdiss}.\nThe answer was affirmative, but with intensity-dependent singularities in the\ncomplex coupling-constant plane that limited the convergence. (This explains\nthe $z_{f}$ terminology, since $z_{f}$ was allowed to be complex, and the\nquantity $z$ is often used for complex numbers.) Physically, convergence fails\nwhenever the field intensity is high enough to cause a channel closing. That\nis, if a process requires a certain threshold energy to proceed in a\nfield-free process as measured by some photon number $n_{0}$, that threshold\nenergy will be increased due to the need for a free electron to possess the\nponderomotive energy $U_{p}$ in the field. When $U_{p}$ is large enough to\ncause $n_{0}$ to index up to $n_{0}+1$, that is called a channel-closing, and\nit marks a sufficient condition for the failure of perturbation theory. The\nupper limit on perturbation theory is therefore set by\n\\begin{equation}\nU_{p}<\\hslash\\omega,\\text{ \\ or \\ }I<4\\omega^{3}, \\label{aj12}\n\\end{equation}\nwhere the last expression is in atomic units, using the equivalence\n$U_{p}=I\/4\\omega^{2}$ in atomic units, and $I$ is the laser intensity.\n\nAlthough the original convergence investigation was done for free electrons,\nthe same limit was found for atomic ionization \\cite{hr80}.\n\nThe relevance of using an index based on the ratio of the electric field to\nthe critical field as an \\textit{ad hoc }limit on perturbation theory for\nlaser effects is strongly questioned here.
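As a worked example of the bound $I<4\omega^{3}$ (added for illustration; the 800 nm photon energy and the atomic unit of intensity are assumed standard values, not taken from the text):

```python
omega = 0.05695          # photon energy of an 800 nm laser, atomic units (assumed)
AU_INTENSITY = 3.51e16   # one atomic unit of intensity in W/cm^2 (assumed)

I_limit_au = 4 * omega**3                 # perturbative limit I < 4*omega^3 (a.u.)
I_limit_Wcm2 = I_limit_au * AU_INTENSITY  # on the order of 1e13 W/cm^2

# consistency check: at the limiting intensity, U_p = I/(4*omega^2) equals omega
U_p = I_limit_au / (4 * omega**2)
assert abs(U_p - omega) < 1e-12
print(I_limit_Wcm2)
```

For near-infrared lasers the perturbative bound is thus already crossed at intensities around $10^{13}$ W/cm², far below the intensities discussed above in connection with the critical field.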
Such a basic matter as the\napplicability of perturbation methods to the treatment of the effects of\nradiation fields is governed by primary quantities like the energies $U_{p}$\nand $mc^{2}$, and not by secondary quantities like the magnitude of the\nelectric field.\n\n\\section{Overview}\n\nThe Aharonov-Bohm effect introduced a major change in electrodynamics because\nit showed that potentials are indeed more fundamental than fields. However,\nalthough the effect relates to a quantum phenomenon, it has had little effect\non the way in which quantum mechanics is employed. The ascendancy of\npotentials over fields as described above is much more consequential,\nespecially for strong-field phenomena. A simple summarizing statement is that\nelectromagnetic scalar and vector potentials convey more physical information\nthan the electric and magnetic fields derivable from them.\n\nOf special importance is the fact that two cases have been identified where\nthere can be only one acceptable set of potentials, and these two cases are of\nvery wide practical scope. One case is the constant electric field, which is\nexactly or approximately applicable to a wide variety of AMO and\ncondensed-matter applications. The unique acceptable potential for static\nelectric fields is the length-gauge, or $\\mathbf{r\\cdot E}$ potential, and\nthis is already widely employed for constant or slowly varying electric-field applications.\n\nThe other case with a unique allowable gauge is the propagating-field case,\nwhich is of profound importance in laser-based experiments. Such experiments\nconstitute an increasingly large proportion of AMO activities. An essential\nreminder is that only propagating fields can persist in the absence of\nsources, so that virtually all laser-based experiments occur in a\nsuperposition of propagating-field components. 
The radiation gauge (or Coulomb\ngauge) is the only gauge employable without risking the creation of hidden\nerrors due to improper gauge choice. It is unfortunate that, in strong-field\nlaser applications, the widespread use of the dipole approximation introduces\nprecisely that hazard of hidden errors that affects both practical\ncalculations and qualitative insights into the behavior of physical systems\nsubjected to laser fields.\n\nIt has been shown that gauge transformations are not, in general, unitary.\nThis has never previously been reported, and it can lead to further errors in\naddition to the important example explored in Section III above about the\nputative universality of the length gauge.\n\nThe concept of primary and secondary physical quantities has been introduced,\nwith over-dependence on a secondary quantity like the electric field having\nthe capability of leading to important misconceptions.\n\nThe criterion of a critical field for longitudinal fields has no relevance to\nthe transverse field of laser-induced processes.\n\nThe appearance of mixed quantum and classical quantities in ascertaining the\nlimits on perturbative methods in the application of strong-field theories\nidentifies strong-field physics as the bridge connecting quantum\nelectrodynamics and the classical electromagnetism of Maxwell.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n \nThe notion of minimal sets (in the sense of Almgren \\cite{Al76}, Reifenberg \\cite{Rei60}. See David \\cite{DJT}, Liang \\cite{topo}, etc..for other variants) is a way to try to solve Plateau's problem in the setting of sets. Plateau's problem, as one of the main interests in geometric measure theory, aims at understanding the existence, regularity and local structure of physical objects that minimize the area while spanning a given boundary, such as soap films. 
The result of this article is closely linked to two important aspects of this problem: the local behavior, and the local uniqueness of solutions.\n\nIt is known (cf. Almgren \\cite{Al76}, David \\& Semmes \\cite{DS00}) that a $d$-dimensional minimal set $E$ admits a unique tangent plane at almost every point $x$. In this case the local structure around such a point is very clear: the set $E$ is locally a minimal surface (and hence a real analytic surface) around the point, due to the famous regularity result of Allard \\cite{All72}. \n\nSo we are mostly interested in what happens around points that admit no tangent plane, namely, the singular points. \n\nIn \\cite{DJT}, David proved that the blow-up limits (''tangent objects'') of $d$-dimensional minimal sets at a point are $d$-dimensional minimal cones (minimal sets that are cones at the same time). Blow-up limits of a set at a point reflect the asymptotic behavior of the set at infinitesimal scales around this point. As a consequence, a first step to study local structures of minimal sets is to classify all possible types of singularities--that is to say, minimal cones. \n\n\\subsection{Local behavior, and classification of singularities}\n\nThe plan for the list of $d$-dimensional minimal cones in $\\mathbb R^n$ is very far from clear. Even for $d=2$, we know very little, except for the case in $\\mathbb R^3$, where Jean Taylor \\cite{Ta} gave a complete classification in 1976; the list was in fact already known a century ago in other circumstances (see \\cite{La} and \\cite{He}). They are, modulo isomorphism: a plane, a $\\mathbb Y$ set (the union of three half planes that meet along a straight line where they make angles of $120^\\circ$), and a $\\mathbb T$ set (the cone over the 1-skeleton of a regular tetrahedron centred at the origin).
See the pictures below.\n\n\\centerline{\\includegraphics[width=0.16\\textwidth]{Y.pdf} \\hskip 2cm\\includegraphics[width=0.2\\textwidth]{T.pdf}}\n\\nopagebreak[4]\n\\hskip 4cm a $\\mathbb Y$ set\\hskip 3.9cm a $\\mathbb T$ set\n\nBased on the above, a natural way to find new types of singularities is by taking unions and products of known minimal cones.\n\nFor unions: The minimality of the union of two orthogonal minimal sets of dimension $d$ can be obtained easily from a well known geometric lemma (cf. for example Lemma 5.2 of \\cite{Mo84}). Thus one suspects that if the angle between two minimal sets is not far from orthogonal, the union of them might also be minimal.\n\nIn the case of planes, the author proved in \\cite{2p} and \\cite{2ptopo} that the almost orthogonal union of several $d$-dimensional planes is Almgren and topological minimal. When the number of planes is two, this is part of Morgan's conjecture in \\cite{Mo93} on the angle condition under which a union of two planes is minimal. \n\nAs for minimal cones other than unions of planes, since they all have non-isolated singularities (after the structure Theorem \\textcolor{blue}{2.22}), the situation is much more complicated, as briefly stated in the introduction of \\cite{2T}. Up to now we are able to treat a big part of 2-dimensional cases: in \\cite{2T} we prove that the almost orthogonal union of several 2-dimensional minimal cones (in any ambient dimension) is minimal, provided that they are all measure and sliding stable, and satisfy some uniqueness condition. (The theorem is stated separately in the Almgren case and topological case in \\cite{2T}.) Moreover, this union stays measure and sliding stable, and satisfies the same uniqueness condition.
This enables us to continue obtaining infinitely many new families of minimal cones by taking a finite number of iterations of almost orthogonal unions.\n\nThe result makes good sense, because almost all known 2-dimensional minimal cones satisfy all the above conditions. The proof of this will be contained in the following papers: \n\nIn this article we prove that all 2-dimensional minimal cones in $\\mathbb R^3$ are topological and Almgren unique (Theorems \\textcolor{blue}{5.1, 5.2 and 5.6}). \n\nThen in \\cite{stablePYT} we prove that all 2-dimensional minimal cones in $\\mathbb R^n$ are measure stable, and that all 2-dimensional minimal cones in $\\mathbb R^3$ are sliding stable. \n\nBy Theorem \\textcolor{blue}{10.1} and Remark \\textcolor{blue}{10.5} of \\cite{2T}, the almost orthogonal unions of several planes in $\\mathbb R^n$ are also topological sliding and Almgren sliding stable.\n\nThe only known 2-dimensional minimal cone other than the above is the set $Y\\times Y$, the product of two 1-dimensional $\\mathbb Y$ sets. The proofs of its sliding stability and uniqueness are much more involved, so that we will treat it in a separate paper \\textcolor{blue}{\\cite{stableYXY}}.\n\nAs a small remark, compared to unions, the case of products is much more mysterious. It is not known in general whether the product of two non-trivial minimal cones stays minimal. We even do not know whether the product of a minimal cone with a line stays minimal. Moreover, if we consider the product of two concrete minimal cones (other than planes) one by one, up to now the only known result is the minimality of the product of two 1-dimensional $\\mathbb Y$ sets (cf. \\cite{YXY}).
Among all singular minimal cones, 1-dimensional $\\mathbb Y$ sets are of the simplest structure, but still, the proof of the minimality of their product is surprisingly hard.\n\n\\subsection{About uniqueness of solutions}\n\nAnother natural question about Plateau's problem is the uniqueness of solutions. \n\nIt is well known that solutions for Plateau's problem are in general not unique, even in codimension 1. The simplest example is the following: given two parallel circles in $\\mathbb R^3$, we can have three types of different solutions: the union of two disks, part of a catenoid, and a third type--a ''catenoid'' with a central disk. See the picture below. They admit different topologies, and they all exist in soap film experiments. \n\n\\centerline{\\includegraphics[width=0.2\\textwidth]{catenoid.jpg} \\hskip 2cm\\includegraphics[width=0.22\\textwidth]{catenodisk.jpg}}\n\\nopagebreak[4]\n\\hskip 3.7cm a catenoid\\hskip 2.7cm a catenoid with a central disk\n\n\\bigskip\n\nOn the other hand, we know that around a regular point $x$ of a minimal set, the solution is locally unique, because the soap film is locally a minimal graph on a convex part of the tangent plane at $x$, and the uniqueness comes from calibrations for minimal graphs.\n\nThe advantage of considering local uniqueness is that we do not have to worry about topology. One can then ask whether this local uniqueness also holds for singular points. Since blow-up limits at singular points are minimal cones, a first step is to study whether each minimal cone is the unique solution, at least under a given topology.\n\nDue to the lack of information on the structure of minimal cones of dimension greater than or equal to 3, we are still far from a general conclusion. However, from the very little information we have, we can still give a positive answer for almost all known 2-dimensional minimal cones.
See the account in the last subsection.\n\n\\subsection{Upper-semi-continuity, and the organization of the paper}\n\nBesides the main results about uniqueness, an indispensable intermediate step in the discussion of the uniqueness property is the upper-semi-continuity property for minimal sets with reasonable boundary regularity (Theorem \\textcolor{blue}{4.1}). It says that in many cases, when its boundary is not too wild, a minimal set minimizes the measure also in the class of limits of deformations, which is much larger than the class of deformations. This property is helpful in various circumstances: when we have to carry on arguments using Hausdorff limits and some properties do not pass to the limit, the upper-semi-continuity can serve as a link. As an example, it plays a very important role throughout \\cite{2T}.\n\nThe organization of the rest of the article is the following: \n\nIn Section 2 we introduce basic definitions and preliminaries for minimal sets, and properties for 2-dimensional minimal cones. \n\nThe definitions of uniqueness and some related useful properties are proved in Section 3.\n\nIn Section 4 we prove the upper-semi-continuity property for minimal sets with relatively regular boundaries (Theorem \\textcolor{blue}{4.1}). This theorem guarantees in particular that the definition of uniqueness makes good sense for minimal cones, and many other minimal sets. It is also useful in many other circumstances; see \\cite{2T} for example.\n\nWe prove topological and Almgren uniqueness for each 2-dimensional minimal cone in $\\mathbb R^3$ in Section 5.\n\n\\textbf{Acknowledgement:} This work is supported by China's Recruitment Program of Global Experts, School of Mathematics and Systems Science, Beihang University.
\n\n\n\\section{Definitions and preliminaries}\n\n\\subsection{Some useful notation}\n\n$[a,b]$ is the line segment with endpoints $a$ and $b$;\n\n$\\overrightarrow{ab}$ is the vector $b-a$;\n\n$R_{ab}$ denotes the half line issued from the point $a$ and passing through $b$;\n\n$B(x,r)$ is the open ball with radius $r$ and centered on $x$;\n\n$\\overline B(x,r)$ is the closed ball with radius $r$ and center $x$;\n\n${\\cal H} ^d$ is the Hausdorff measure of dimension $d$ ;\n\n$d_H(E,F)=\\max\\{\\sup\\{d(y,F):y\\in E\\},\\sup\\{d(y,E):y\\in F\\}\\}$ is the Hausdorff distance between two sets $E$ and $F$. \n\nFor any subset $K\\subset \\mathbb R^n$, the local Hausdorff distance in $K$ $d_K$ between two sets $E,F$ is defined as $d_K(E,F)=\\max\\{\\sup\\{d(y,F):y\\in E\\cap K\\},\\sup\\{d(y,E):y\\in F\\cap K\\}\\}$;\n\nFor any open subset $U\\subset \\mathbb R^n$, let $\\{E_n\\}_n$, $F$ be closed sets in $U$; we say that $F$ is the Hausdorff limit of $\\{E_n\\}_n$ if for any compact subset $K\\subset U$, $\\lim_n d_K(E_n,F)=0$;\n\n$d_{x,r}$ : the relative distance with respect to the ball $B(x,r)$, is defined by\n$$ d_{x,r}(E,F)=\\frac 1r\\max\\{\\sup\\{d(y,F):y\\in E\\cap B(x,r)\\},\\sup\\{d(y,E):y\\in F\\cap B(x,r)\\}\\}.$$\n\nFor any (affine) subspace $Q$ of $\\mathbb R^n$, and $x\\in Q$, $r>0$, $B_Q(x,r)$ stands for $B(x,r)\\cap Q$, the open ball in $Q$.\n\nFor any subset $E$ of $\\mathbb R^n$ and any $r>0$, we call $B(E,r):=\\{x\\in \\mathbb R^n: dist (x,E)<r\\}$ the $r$-neighborhood of $E$.\n\nSince $|\\sigma|$ is a compact subset of the open set $U\\backslash F$, there exists $\\epsilon>0$ such that the $\\epsilon$-neighborhood $B(|\\sigma|,\\epsilon)\\subset U\\backslash F$. As a result, since $F_j\\to F$, we know that for $j$ large enough, $F_j\\cap|\\sigma|=\\emptyset$. Hence $\\sigma$ is also a simplicial chain in $U\\backslash F_j$ for $j$ large. Then $\\partial\\sigma=S$ implies that $S$ represents a zero element in $H_{n-d-1}(U\\backslash F_j;G)$ for $j$ large.
This contradicts the fact that $S$ represents a non-zero element in $H_{n-d-1}(U\\backslash F_j;G)$ for all $j$.\n\nHence 2) holds.\n\\hfill$\\Box$\\bigskip\n\n\\begin{defn}[reduced set] Let $U\\subset \\mathbb R^n$ be an open set. For every closed subset $E$ of $U$, denote by\n\\begin{equation} E^*=\\{x\\in E\\ ;\\ {\\cal H} ^d(E\\cap B(x,r))>0\\mbox{ for all }r>0\\}\\end{equation}\n the closed support (in $U$) of the restriction of ${\\cal H} ^d$ to $E$. We say that $E$ is reduced if $E=E^*$.\n\\end{defn}\n\nIt is easy to see that\n\\begin{equation} {\\cal H} ^d(E\\backslash E^*)=0.\\end{equation}\nIn fact we can cover $E\\backslash E^*$ by countably many balls $B_j$ such that ${\\cal H} ^d(E\\cap B_j)=0$.\n\n\\begin{rem}\n It is not hard to see that if $E$ is Almgren minimal (resp. $G$-topological minimal), then $E^*$ is also Almgren minimal (resp. $G$-topological minimal). As a result it is enough to study reduced minimal sets. An advantage of reduced minimal sets is that they are locally Ahlfors regular (cf. Proposition 4.1 in \\cite{DS00}). Hence any approximate tangent plane of them is a true tangent plane. Since minimal sets are rectifiable (cf. \\cite{DS00} Theorem 2.11 for example), reduced minimal sets admit true tangent $d$-planes almost everywhere.\n\\end{rem}\n\nIf we regard two sets as equivalent when they are equal modulo ${\\cal H} ^d$-null sets, then a reduced set is always considered to be a good (in the sense of regularity) representative of its equivalence class. \n\n\\textbf{In the rest of the article, we only consider reduced sets.}\n\n\\begin{rem}The notion of (Almgren or $G$-topological) minimal sets does not depend much on the ambient dimension. One can easily check that $E\\subset U$ is $d$-dimensional Almgren minimal in $U\\subset \\mathbb R^n$ if and only if $E$ is Almgren minimal in $U\\times\\mathbb R^m\\subset\\mathbb R^{m+n}$, for any integer $m$.
The case of $G$-topological minimality is proved in \\cite{topo} Proposition 3.18.\\end{rem}\n\n\\subsection{Regularity results for minimal sets}\n\nWe now begin to give regularity results for minimal sets. They are in fact regularity results for Almgren minimal sets, but they also hold for all $G$-topological minimizers, after Proposition \\textcolor{blue}{2.7}. By Remark \\textcolor{blue}{2.10}, from now on all minimal sets are supposed to be reduced. \n\n\\begin{defn}[blow-up limit] Let $U\\subset\\mathbb R^n$ be an open set, let $E$ be a relatively closed set in $U$, and let $x\\in E$. Denote by $E(r,x)=r^{-1}(E-x)$. A set $C$ is said to be a blow-up limit of $E$ at $x$ if there exists a sequence of numbers $r_n$, with $\\lim_{n\\to \\infty}r_n=0$, such that the sequence of sets $E(r_n,x)$ converges to $C$ for the local Hausdorff distance in any compact set of $\\mathbb R^n$.\n\\end{defn}\n\n\\begin{rem}\n $1^\\circ$ A set $E$ might have more than one blow-up limit at a point $x$. However, it is not yet known whether this can happen for minimal sets. \n \n When a set $E$ admits a unique blow-up limit at a point $x\\in E$, denote this blow-up limit by $C_xE$.\n \n $2^\\circ$ Let $Q\\subset \\mathbb R^n$ be any subspace, and denote by $\\pi_Q$ the orthogonal projection from $\\mathbb R^n$ to $Q$. Then it is easy to see that if $E\\subset \\mathbb R^n$, $x\\in E$, and $C$ is any blow-up limit of $E$ at $x$, then $\\pi_Q(C)$ is contained in a blow-up limit of $\\pi_Q(E)$ at $\\pi_Q(x)$. \n\\end{rem}\n\n\\begin{pro}[cf. \\cite{DJT} Proposition 7.31]Let $E$ be a reduced Almgren minimal set in an open set $U$ of $\\mathbb R^n$, and let $x\\in E$. Then every blow-up limit of $E$ at $x$ is a reduced Almgren minimal cone $F$ centred at the origin, and ${\\cal H} ^d(F\\cap B(0,1))=\\theta(x):=\\lim_{r\\to 0} r^{-d}{\\cal H} ^d(E\\cap B(x,r)).$\\end{pro}\n\nAn Almgren minimal cone is just a cone which is also Almgren minimal.
We will call them minimal cones throughout this paper, since we will not talk about any other type of minimal cones. \n\n\\begin{rem}$1^\\circ$ The existence of the density $\\theta(x)$ is due to the monotonicity of the density function $\\theta(x,r):=r^{-d}{\\cal H} ^d(E\\cap B(x,r))$ at any given point $x$ for minimal sets. See for example \\cite{DJT} Proposition 5.16.\n\n$2^\\circ$ After the above proposition, the set $\\Theta(n,d)$ of all possible densities for points in a $d$-dimensional minimal set in $\\mathbb R^n$ coincides with the set of all possible densities for $d$-dimensional minimal cones in $\\mathbb R^n$. When $d=2$, this is a very small set. For example, we know that $\\pi$ is the density for a plane, $\\frac 32\\pi$ is the density for a $\\mathbb Y$ set, and for any $n$ and any other type of 2-dimensional minimal cone in $\\mathbb R^n$, its density is no less than some $d_T=d_T(n)>\\frac 32\\pi$, by \\cite{DJT} Lemma 14.12.\n\n$3^\\circ$ Obviously, a cone in $\\mathbb R^n$ is minimal if and only if it is minimal in the unit ball, if and only if it is minimal in any open subset containing the origin.\n\n$4^\\circ$ For future convenience, we also give the following notation: let $U\\subset \\mathbb R^n$ be a convex domain containing the origin. A set $C\\subset U$ is called a cone in $U$ if it is the intersection of a cone with $U$.\n\\end{rem}\n\nWe now state some regularity results on 2-dimensional Almgren minimal sets. \n\n\\begin{defn}[bi-H\\\"older ball for closed sets] Let $E$ be a closed set of Hausdorff dimension 2 in $\\mathbb R^n$.
We say that $B(0,1)$ is a bi-H\\\"older ball for $E$, with constant $\\tau\\in(0,1)$, if we can find a 2-dimensional minimal cone $Z$ in $\\mathbb R^n$ centered at 0, and $f:B(0,2)\\to\\mathbb R^n$ with the following properties:\n\n$1^\\circ$ $f(0)=0$ and $|f(x)-x|\\le\\tau$ for $x\\in B(0,2);$\n\n$2^\\circ$ $(1-\\tau)|x-y|^{1+\\tau}\\le|f(x)-f(y)|\\le(1+\\tau)|x-y|^{1-\\tau}$ for $x,y\\in B(0,2)$;\n\n$3^\\circ$ $B(0,2-\\tau)\\subset f(B(0,2))$;\n\n$4^\\circ$ $E\\cap B(0,2-\\tau)\\subset f(Z\\cap B(0,2))\\subset E.$ \n\nWe also say that $B(0,1)$ is of type $Z$ for $E$.\n\nWe say that $B(x,r)$ is a bi-H\\\"older ball for $E$ of type $Z$ (with the same parameters) when $B(0,1)$ is a bi-H\\\"older ball of type $Z$ for $r^{-1}(E-x)$.\n\\end{defn}\n\n\\begin{thm}[Bi-H\\\"older regularity for 2-dimensional Almgren minimal sets, cf. \\cite{DJT} Thm 16.1]\\label{holder} Let $U$ be an open set in $\\mathbb R^n$ and $E$ a reduced Almgren minimal set in $U$. Then for each $x_0\\in E$ and every choice of $\\tau\\in(0,1)$, there is an $r_0>0$ and a minimal cone $Z$ such that $B(x_0,r_0)$ is a bi-H\\\"older ball of type $Z$ for $E$, with constant $\\tau$. Moreover, $Z$ is a blow-up limit of $E$ at $x_0$.\n\\end{thm}\n\n\\begin{defn}[point of type $Z$] \n\n$1^\\circ$ In the above theorem, we say that $x_0$ is a point of type $Z$ (or $Z$ point for short) of the minimal set $E$. The set of all points of type $Z$ in $E$ is denoted by $E_Z$. \n\n$2^\\circ$ In particular, we denote by $E_P$ the set of regular points of $E$ and $E_Y$ the set of $\\mathbb Y$ points of $E$. Any 2-dimensional minimal cone other than a plane or a $\\mathbb Y$ set is called a $\\mathbb T$ type cone, and any point which admits a $\\mathbb T$ type cone as a blow-up limit is called a $\\mathbb T$ type point. Set $E_T=E\\backslash (E_Y\\cup E_P)$ the set of all $\\mathbb T$ type points of $E$.
Set $E_S:=E\\backslash E_P$ the set of all singular points in $E$.\n\\end{defn}\n\n\\begin{rem} Again, since we might have more than one blow-up limit for a minimal set $E$ at a point $x_0\\in E$, the point $x_0$ might be of more than one type (but all the blow-up limits at a point are bi-H\\\"older equivalent). However, if one of the blow-up limits of $E$ at $x_0$ admits the ``full-length'' property (see Remark \\textcolor{blue}{\\ref{ful}}), then in fact $E$ admits a unique blow-up limit at the point $x_0$. Moreover, we have the following $C^{1,\\alpha}$ regularity around the point $x_0$. In particular, the blow-up limit of $E$ at $x_0$ is in fact a tangent cone of $E$ at $x_0$.\n\\end{rem}\n\n\\begin{thm}[$C^{1,\\alpha}-$regularity for 2-dimensional minimal sets, cf. \\cite{DEpi} Thm 1.15]\\label{c1} Let $E$ be a 2-dimensional reduced minimal set in the open set $U\\subset\\mathbb R^n$. Let $x\\in E$ be given. Suppose in addition that some blow-up limit of $E$ at $x$ is a full length minimal cone (see Remark \\textcolor{blue}{\\ref{ful}}). Then there is a unique blow-up limit $X$ of $E$ at $x$, and $x+X$ is tangent to $E$ at $x$. In addition, there is a radius $r_0>0$ such that, for $0<r<r_0$, there is a $C^{1,\\alpha}$ diffeomorphism (for some $\\alpha>0$) $\\Phi:B(0,2r)\\to\\Phi(B(0,2r))$, such that $\\Phi(0)=x$ and $|\\Phi(y)-x-y|\\le 10^{-2}r$ for $y\\in B(0,2r)$, and $E\\cap B(x,r)=\\Phi(X)\\cap B(x,r).$ \n\nWe can also ask that $D\\Phi(0)=Id$. We call $B(x,r)$ a $C^1$ ball for $E$ of type $X$.\n\\end{thm}\n\n\\begin{rem}[full length, union of two full length cones $X_1\\cup X_2$]\\label{ful}We are not going to give the precise definition of the full length property. Instead, we just give some information here, which is enough for the proofs in this paper.\n\n$1^\\circ$ The three types of 2-dimensional minimal cones in $\\mathbb R^3$, i.e. the planes, the $\\mathbb Y$ sets, and the $\\mathbb T$ sets, all verify the full-length property (cf. \\cite{DEpi} Lemmas 14.4, 14.6 and 14.27).
Hence all 2-dimensional minimal sets $E$ in an open set $U\\subset\\mathbb R^3$ admit the local $C^{1,\\alpha}$ regularity at every point $x\\in E$. This was already known from \\cite{Ta}.\n\n$2^\\circ$ (cf. \\cite{DEpi} Remark 14.40) Let $n>3$. Note that the planes, the $\\mathbb Y$ sets and the $\\mathbb T$ sets are also minimal cones in $\\mathbb R^n$. Denote by $\\mathfrak C$ the set of all planes, $\\mathbb Y$ sets and $\\mathbb T$ sets in $\\mathbb R^n$. Let $X=\\cup_{1\\le i\\le k}X_i\\subset \\mathbb R^n$ be a minimal cone, where $X_i\\in \\mathfrak{C}, 1\\le i\\le k$, and for any $i\\ne j$, $X_i\\cap X_j=\\{0\\}$. Then $X$ also verifies the full-length property. \n\\end{rem}\n\n\\begin{thm}[Structure of 2-dimensional minimal cones in $\\mathbb R^n$, cf. \\cite{DJT} Proposition 14.1] Let $K$ be a reduced 2-dimensional minimal cone in $\\mathbb R^n$, and let $X=K\\cap \\partial B(0,1)$. Then $X$ is a finite union of great circles and arcs of great circles $C_j,j\\in J$. The $C_j$ can only meet at their endpoints, and each endpoint is a common endpoint of exactly three $C_j$, which meet with $120^\\circ$ angles. In addition, the length of each $C_j$ is at least $\\eta_0$, where $\\eta_0>0$ depends only on the ambient dimension $n$.\n\\end{thm}\n\nAn immediate corollary of the above theorem is the following:\n\n\\begin{cor}\n$1^\\circ$ If $C$ is a minimal cone of dimension 2, then each connected component of the set $C_P$ of regular points of $C$ is a sector. \n\n$2^\\circ$ Let $E$ be a 2-dimensional minimal set in $U\\subset \\mathbb R^n$. Then $\\bar E_Y=E_S$.\n\n$3^\\circ$ The set $E_S\\backslash E_Y$ consists of isolated points. \\end{cor}\n\nAs a consequence of the $C^1$ regularity for regular points and $\\mathbb Y$ points, and Corollary \\textcolor{blue}{2.23}, we have\n\\begin{cor}Let $E$ be a 2-dimensional Almgren minimal set in a domain $U\\subset \\mathbb R^n$.
Then \n\n$1^\\circ$ The set $E_P$ is open in $E$;\n\n$2^\\circ$ The set $E_Y$ is a countable union of $C^1$ curves. The endpoints of these curves are either in $E_T:=E_S\\backslash E_Y$, or lie in $\\partial U$. \n\\end{cor}\n\n\nWe also have a similar quantified version of the $C^{1,\\alpha}$ regularity (cf. \\cite{DJT} Corollary 12.25). In particular, we can use the distance between a minimal set and a $\\mathbb P$ or a $\\mathbb Y$ cone to control the constants of the $C^{1,\\alpha}$ parametrization. As a direct corollary, we have the following neighborhood deformation retract property for regular and $\\mathbb Y$ points:\n\n\\begin{cor}There exists $\\epsilon_2=\\epsilon_2(n)>0$ such that the following holds: let $E$ be a 2-dimensional Almgren minimal set in a domain $U\\subset \\mathbb R^n$. Then \n\n$1^\\circ$ For any $x\\in E_P$, and any co-dimension 1 submanifold $M\\subset U$ which contains $x$, such that $M$ is transversal to the tangent plane $T_xE+x$, if $r>0$ satisfies that $d_{x,r}(E, x+T_xE)<\\epsilon_2$, then ${\\cal H} ^1(B(x,r)\\cap M\\cap E)<\\infty$, and $B(x,r)\\cap M\\cap E$ is a Lipschitz deformation retract of $B(x,r)\\cap M$;\n\n$2^\\circ$ For any $x\\in E_Y$, and any co-dimension 1 submanifold $M\\subset U$ which contains $x$, such that $M$ is transversal to the tangent cone $C_xE+x$ and its spine, if $r>0$ satisfies that $d_{x,r}(E, x+C_xE)<\\epsilon_2$, then ${\\cal H} ^1(B(x,r)\\cap M\\cap E)<\\infty$, and $B(x,r)\\cap M\\cap E$ is a Lipschitz deformation retract of $B(x,r)\\cap M$.\n\\end{cor}\n\nAs for the regularity for minimal sets of higher dimensions, we know much less. But for points which admit a tangent plane (i.e. some blow-up limit at the point is a plane), we still have the $C^1$ regularity.\n\n\\begin{thm}[cf. \\cite{2p} Proposition 6.4]\\label{e1}For $2\\le d<n$, there exists $\\epsilon_1=\\epsilon_1(n,d)>0$ such that the following holds: let $E$ be a $d$-dimensional reduced minimal set in an open set $U\\subset\\mathbb R^n$, with $B(0,2)\\subset U$ and $0\\in E$.
If $E$ is $\\epsilon_1$ near a $d$-plane $P$ in $B(0,1)$, then $E$ coincides with the graph of a $C^1$ map $f:P\\to P^\\perp$ in $B(0,\\frac 34)$. Moreover $||\\nabla f||_\\infty<1$.\n\\end{thm}\n\n\\begin{rem}\n$1^\\circ$ This proposition is a direct corollary of Allard's famous regularity theorem for stationary varifolds. See \\cite{All72}.\n\n$2^\\circ$ After this proposition, a blow-up limit of a reduced minimal set $E$ at a point $x\\in E$ is a plane if and only if the plane is the unique approximate tangent plane of $E$ at $x$.\n\\end{rem}\n\nAfter Remark \\textcolor{blue}{2.27}, for any reduced minimal set $E$ of dimension $d$, and for any $x\\in E$ at which an approximate tangent $d$-plane exists (which is true for a.e. $x\\in E$), $T_xE$ also denotes the tangent plane of $E$ at $x$, and the blow-up limit of $E$ at $x$. \n\n\\section{Uniqueness: definitions and properties}\n\n\\begin{defn}\nLet $C$ be a $d$-dimensional reduced Almgren minimal set in a bounded domain $U$. We say that \n\n$1^\\circ$ $C$ is Almgren unique in $U$ if it is the only reduced set in $\\overline{\\cal F} (C,U)$ that attains the minimal measure. That is:\n\\begin{equation} \\forall \\mbox{ reduced set }E\\in \\overline{\\cal F} (C,U), {\\cal H} ^d(E)=\\inf_{F\\in \\overline {\\cal F} (C,U)}{\\cal H} ^d(F)\\Rightarrow E=C.\\end{equation}\n\n$2^\\circ$ $C$ is $G$-topological unique in $U$ if $C$ is $G$-topological minimal, and \n\\begin{equation} \\begin{split}\\mbox{For any reduced }d\\mbox{-dimensional }G-\\mbox{topological competitor }E\\mbox{ of }C\\mbox{ in }U,\\\\\n{\\cal H} ^d(E\\cap U)={\\cal H} ^d(C\\cap U), \\mbox{ implies }C=E;\\end{split}\\end{equation}\n\n$3^\\circ$ We say that a $d$-dimensional minimal set $C$ in $\\mathbb R^n$ is Almgren (resp. $G$-topological) unique, if it is Almgren (resp.
$G$-topological) unique in every bounded domain $U\\subset \\mathbb R^n$.\n\nWhen $G=\\mathbb Z$, we usually omit $\\mathbb Z$, and simply say topologically unique.\n\\end{defn}\n\nFor minimal cones, we have immediately:\n\n\\begin{pro}[Unique minimal cones]Let $K$ be a $d$-dimensional Almgren minimal cone in $\\mathbb R^n$. Then it is Almgren (resp. $G$-topological) unique, if and only if it is Almgren (resp. $G$-topological) unique in some bounded convex domain $U$ that contains the origin. \n\\end{pro}\n\n\\noindent Proof. By definition, the ``only if'' part is trivial. So let us prove the converse.\n\nSuppose that $K$ is a $d$-dimensional Almgren minimal cone in $\\mathbb R^n$, and is Almgren (resp. $G$-topological) unique in a bounded convex domain $U$ that contains the origin. Then since $K$ is a cone centered at the origin, $K$ is Almgren (resp. $G$-topological) unique in $rU$ for all $r>0$. Now for any other bounded domain $U'$, there exists $r$ such that $U'\\subset rU$, hence $K$ is Almgren (resp. $G$-topological) unique in $U'$.\\hfill$\\Box$\\bigskip\n\nLet us give some important remarks:\n\n\\begin{rem}\n\n$1^\\circ$ Note that for an arbitrary $d$-dimensional Almgren minimal set $C$ in $U$, by definition, $C$ only minimizes the measure in the class ${\\cal F} (C,U)$. Hence we do not necessarily have that \n\\begin{equation} {\\cal H} ^d(C)=\\inf_{F\\in \\overline {\\cal F} (C,U)}{\\cal H} ^d(F).\\end{equation}\nOn the other hand, this holds if $U$ is a uniformly convex domain. See Theorem \\textcolor{blue}{4.1} and Corollary \\textcolor{blue}{4.7} in the next section. \n\n$2^\\circ$ As a corollary of item $1^\\circ$ above, and Proposition \\textcolor{blue}{3.2}, we know that if $K$ is a $d$-dimensional minimal cone in $\\mathbb R^n$, then \\textcolor{blue}{(3.3)} holds automatically.
\n\n$3^\\circ$ The condition ${\\cal H} ^d(E)=\\inf_{F\\in \\overline{\\cal F} }{\\cal H} ^d(F)$ in \\textcolor{blue}{(3.1)} already implies that $E$ is itself a minimal set, since the class $\\overline{\\cal F} $ is stable under deformations. Also notice that ${\\cal H} ^d(E)=\\inf_{F\\in \\overline{\\cal F} }{\\cal H} ^d(F)$ is equivalent to the condition ${\\cal H} ^d(E)\\le\\inf_{F\\in \\overline{\\cal F} }{\\cal H} ^d(F)$ since $E\\in \\overline {\\cal F} $. Similarly, when $U$ is a convex domain, the condition ${\\cal H} ^d(E\\cap U)={\\cal H} ^d(C\\cap U)$ in \\textcolor{blue}{(3.2)} implies that $E$ minimizes measure among all $G$-topological competitors for $C$; since for $U$ convex all $G$-topological competitors for $E$ are also $G$-topological competitors for $C$, it follows that $E$ is $G$-topological minimal in $U$.\n\n$4^\\circ$ If $C$ is an Almgren unique minimal set in $U$, and $V\\subset U$ is a domain, then $C$ is also Almgren unique minimal in $V$.\n\\end{rem}\n\nThe next proposition shows that for minimal cones, $G$-topological uniqueness implies Almgren uniqueness: \n\n\\begin{pro}Let $K\\subset \\mathbb R^n$ be a $G$-topological unique minimal cone of dimension $d$. Then it is also Almgren unique.\n\\end{pro}\n\n\\noindent Proof. Let $K$ be a $G$-topological unique minimal cone of dimension $d$ in $\\mathbb R^n$. By Proposition \\textcolor{blue}{3.2}, it is enough to prove that $K$ is Almgren unique in the unit ball $B$. \n\nLet $F\\in \\overline{\\cal F} (K,B)$ be such that \n\\begin{equation} {\\cal H} ^d(F)=\\inf_{E\\in \\overline{\\cal F} (K,B)}{\\cal H} ^d(E)={\\cal H} ^d(K\\cap B),\\end{equation} \nwhere the last equality is by Remark \\textcolor{blue}{3.3} $1^\\circ$.\n\nBy definition of $\\overline{\\cal F} (K,B)$, there exists a sequence $F_j\\in {\\cal F} (K,B)$ that converges to $F$. By Proposition \\textcolor{blue}{2.7}, each set $F_j':=F_j\\cup (K\\backslash B)$ is a $G$-topological competitor for $K$ in $2B$.
Then by Proposition \\textcolor{blue}{2.8}, the limit $F'=F\\cup (K\\backslash B)$ is a $G$-topological competitor for $K$ in $3B$. \n\nBy \\textcolor{blue}{(3.4)}, we know that \n\\begin{equation}\\begin{split}{\\cal H} ^d(F'\\cap 3B)&={\\cal H} ^d((F\\cup (K\\backslash B))\\cap 3B)={\\cal H} ^d(F)+{\\cal H} ^d(K\\cap 3B\\backslash B)\\\\\n&={\\cal H} ^d(K\\cap 3B)=\\inf_{E\\in \\overline{\\cal F} (K,3B)}{\\cal H} ^d(E),\\end{split}\\end{equation}\nwhere the last equality is again by Remark \\textcolor{blue}{3.3} $1^\\circ$.\n\nSince $K$ is $G$-topological unique, \\textcolor{blue}{(3.5)} implies that $F'=K$, which means that $F=K\\cap B$.\\hfill$\\Box$\\bigskip\n\n\\begin{pro}[Independent of ambient dimension]Let $K\\subset \\mathbb R^m$ be a $d$-dimensional Almgren minimal cone in $\\mathbb R^m$. If $K$ is Almgren (resp. $G$-topological) unique, then for all $n\\ge m$, $K$ is also Almgren (resp. $G$-topological) unique in $\\mathbb R^n$.\n\\end{pro}\n\n\\noindent Proof. Fix any $n\\ge m$. Write $\\mathbb R^n=\\mathbb R^m\\times \\mathbb R^{n-m}$, and suppose, without loss of generality, that $K$ lives in $\\mathbb R^m\\times \\{0\\}$. Let $\\pi$ be the orthogonal projection from $\\mathbb R^n\\to \\mathbb R^m\\times \\{0\\}$. \n\nSuppose that $K$ is Almgren unique in $\\mathbb R^m$. We want to prove that $K$ is Almgren unique in $\\mathbb R^n$. Let $B_n$ denote the unit ball in $\\mathbb R^n$. Then by Proposition \\textcolor{blue}{3.2}, it is enough to prove that $K\\cap B_n$ is Almgren unique. Let $F\\in \\overline{\\cal F} (K,B_n)$ be such that \n\\begin{equation}{\\cal H} ^d(F)=\\inf_{E\\in \\overline{\\cal F} (K,B_n)}{\\cal H} ^d(E)={\\cal H} ^d(K\\cap B_n)={\\cal H} ^d(K\\cap B_m).\\end{equation} \n\nBy Remark \\textcolor{blue}{3.3, $3^\\circ$}, the condition \\textcolor{blue}{(3.6)} implies that $F$ is Almgren minimal in $B_n$.
As a result, by the convex hull property of minimal sets, we know that $F$ must be included in the convex hull of $F\\cap \\partial B_n=K\\cap \\partial B_n=K\\cap \\partial B_m\\subset \\bar B_m.$\n\n Hence $F\\in \\overline{\\cal F} (K,B_m)$. By \\textcolor{blue}{(3.6)}, and the Almgren uniqueness of $K$, we know that $F$ must be $K\\cap B_m=K\\cap B_n$.\n \n The proof for the case of $G$-topological uniqueness is similar, and we leave it to the reader. \\hfill$\\Box$\\bigskip\n \n \\section{Upper-semi-continuity}\n \n In this section we prove the upper-semi-continuity property for minimal sets with reasonable boundary regularity. It says that in many cases, when its boundary is not too wild, a minimal set also minimizes the measure in the class of limits of deformations. This serves as an indispensable part in the definition of uniqueness, as we have already seen in the last section (Remark \\textcolor{blue}{3.3}). This property also plays a very important role in \\textcolor{blue}{\\cite{2T}}. \n \n\\bigskip\n \nFor each $k\\in \\mathbb N$, let $\\Delta_k$ denote the family of (closed) dyadic cubes of side length $2^{-k}$. For $j\\le n$, let $\\Delta_{k,j}$ denote the set of all $j$-faces of elements in $\\Delta_k$. For each cube $Q$, denote by $\\Delta_j(Q)$ the set of all $j$-faces of $Q$. Set $|\\Delta_{k,j}|=\\cup_{\\sigma\\in \\Delta_{k,j}}\\sigma$ the $j$-skeleton of $\\Delta_k$.\n\n\\begin{thm}[upper semi-continuity] Let $U\\subset \\mathbb R^n$ be a bounded convex domain, and $E$ be a closed set in $\\bar U$ with locally finite $d$-Hausdorff measure. Let $C$ denote the convex hull of $E$.
Suppose that\n\n\\begin{equation} C\\cap\\partial U=E\\cap\\partial U\\end{equation}\nand\n\\begin{equation}\\mbox{There exists a bi-Lipschitz map }\\psi: Q_0\\to C\\mbox{, such that }\\psi(E\\cap \\partial U)\\subset |\\Delta_{k,d-1}|.\\end{equation} Then\n\n$1^\\circ$ $\\inf_{F\\in \\overline{\\cal F} (E,U)}{\\cal H} ^d(F)=\\inf_{F\\in {\\cal F} (E,U)}{\\cal H} ^d(F)$;\n\n$2^\\circ$ If $E$ is a $d$-dimensional minimal set in $U$, then \n\\begin{equation}{\\cal H} ^d(E)=\\inf_{F\\in \\overline{\\cal F} (E,U)}{\\cal H} ^d(F).\\end{equation}\n\\end{thm}\n\n\\begin{rem}\nThe conditions \\textcolor{blue}{(4.1) and (4.2)} can be relaxed, with essentially the same proof, but with more technical details. Here we only give the proof under these two hypotheses, which is enough for our purposes.\\end{rem}\n\n\\noindent Proof. \n\n$1^\\circ$ Since ${\\cal F} (E,U)\\subset \\overline{\\cal F} (E,U)$, we automatically have $\\inf_{F\\in \\overline{\\cal F} (E,U)}{\\cal H} ^d(F)\\le\\inf_{F\\in {\\cal F} (E,U)}{\\cal H} ^d(F)$. So let us prove the converse. \n\nTo prove the converse, we first treat the following case: suppose that $\\partial E_0\\subset |\\Delta_{k_0,d-1}|$ for some $k_0\\in \\mathbb N$.\n\nLet $\\pi$ denote the shortest distance projection from $U$ to $C$. Then $\\pi$ is 1-Lipschitz, and hence for any set $F\\subset U$, we have ${\\cal H} ^d(F)\\ge{\\cal H} ^d(\\pi(F))$. \n\nNow we need the following theorem: \n \n \\begin{thm}[Existence of minimal sets\\label{vincent}; cf.
\\cite{Fv}, Thm 6.1.7]\nLet $U\\subset\\mathbb R^n$ be an open domain, and let $0<d<n$ be an integer. Let $\\mathfrak F$ be a class of relatively closed sets in $U$, stable under deformations in $U$ and under local Hausdorff limits in $U$. Then there exist $M>0$ (depending only on $d$ and $n$), a sequence $(F_k)$ of elements of $\\mathfrak F$, and a set $E$ of dimension $d$ relatively closed in $U$ that verifies \\textcolor{blue}{(2.1)}, such that:\n \n (1) There exists a sequence of compact sets $\\{K_m\\}_{m\\in\\mathbb N}$ in $U$ with $K_m\\subset K_{m+1}$ for all $m$ and $\\cup_{m\\in \\mathbb N}K_m=U$, such that \n \\begin{equation}\\lim_{k\\to\\infty}d_H(F_k\\cap K_m,E\\cap K_m)=0\\mbox{ for all }m\\in \\mathbb N;\\end{equation}\n \n (2) For all open sets $V$ such that $\\overline V$ is relatively compact in $U$, from a certain rank,\n \\begin{equation} F_k\\mbox{ is }(M,+\\infty)\\mbox{-quasiminimal in }V;\\end{equation}\n (See \\cite{DS00} for a precise definition.) \n \n (3) ${\\cal H} ^d(E)\\le\\inf_{F\\in\\mathfrak F}{\\cal H} ^d(F)$;\n \n (4) $E$ is minimal in $U$.\n\\end{thm}\n\nWe apply Theorem \\textcolor{blue}{4.3} to the class $\\mathfrak F$ of all Hausdorff limits of elements in ${\\cal F} (E, V)$, where $V=\\mathbb R^n\\backslash (E\\cap \\partial U)$. It is easy to see that $\\mathfrak F$ is stable under deformations in $V$, and under Hausdorff limits in $V$. As a result, there exists a set $F_0\\in \\mathfrak F$, such that $F_0$ is minimal in $V$, with\n\\begin{equation} {\\cal H} ^d(F_0)=\\inf_{F\\in \\mathfrak F}{\\cal H} ^d(F).\\end{equation}\n\nSet $E_0=\\pi(F_0)$. Then $E_0\\subset C$, and\n\\begin{equation} {\\cal H} ^d(E_0)={\\cal H} ^d(\\pi(F_0))\\le {\\cal H} ^d(F_0)=\\inf_{F\\in \\mathfrak F}{\\cal H} ^d(F).\\end{equation}\n It is easy to see that $E_0\\in \\mathfrak F$. \n \n So let $E_k$ be a sequence in ${\\cal F} (E,V)$ that converges to $E_0$ in $\\mathbb R^n$. Modulo projecting to $C$, we may also suppose that $E_k\\subset C$. Suppose $E_k=\\psi_k(E)$, where $\\psi_k$ is a deformation in $V$, for each $k$.
Let $\\psi'_k(x)=\\pi\\circ\\psi_k(x)$ for $x\\in E$, $\\psi'_k=id$ on $\\mathbb R^n\\backslash C$ and on $\\{x\\in \\mathbb R^n:\\psi_k(x)=x\\}$, and extend it to the whole $\\mathbb R^n$, such that $\\psi'_k(U)=U$. Then $\\psi'_k|_U$ is a map from $U$ to $U$, and it is homotopic to the identity through the line homotopy, since $U$ is convex. Thus $\\psi'_k|_U$ is a deformation in $U$, with $\\psi'_k|_U(E)=E_k$. Therefore $E_k\\in {\\cal F} (E,U)$, and hence $E_0\\in \\overline{\\cal F} (E,U)$. \n \n Note that \n$\\overline{\\cal F} (E,U)\\subset \\mathfrak F$, hence\n\\begin{equation} {\\cal H} ^d(E_0)\\le\\inf_{F\\in \\mathfrak F}{\\cal H} ^d(F)\\le \\inf_{F\\in \\overline{\\cal F} (E,U)}{\\cal H} ^d(F)\\le{\\cal H} ^d(E_0),\\end{equation}\ntherefore\n\\begin{equation} {\\cal H} ^d(E_0)=\\inf_{F\\in \\overline{\\cal F} (E,U)}{\\cal H} ^d(F).\\end{equation}\n\nOn the other hand, we know that ${\\cal F} (E_0,U)\\subset \\mathfrak F$ as well, hence \n\\begin{equation} {\\cal H} ^d(E_0)\\le \\inf_{F\\in \\mathfrak F}{\\cal H} ^d(F)\\le \\inf_{F\\in {\\cal F} (E_0,U)}{\\cal H} ^d(F),\\end{equation}\nwhich yields that $E_0$ is minimal in $U$.\n\nWe want to prove that when $E_k$ is sufficiently close to $E_0$, we can deform it into the union of $E_0$ and a set of very small measure, so that the measure after the deformation is less than $\\inf_{F\\in {\\cal F} (E,U)}{\\cal H} ^d(F)$, which cannot happen.\n\nThe construction of such a deformation is similar to the construction in \\cite{GD03}: by minimality of $E_0$, around each regular point $x$ of $E_0$ there is a Lipschitz retraction of a neighborhood onto $E_0$ in some ball centered at $x$, with a uniform Lipschitz constant. We use a finite number of such balls to cover a big part of $E_0$, so that the part of $E_0$ which is not covered has very small measure.
When $E_k$ is close enough to $E_0$, a big part of $E_k$ is contained in the union of these balls, so we can deform $E_k$ onto $E_0$ in each of these balls, and then extend this deformation to the whole space, with the same Lipschitz constant. Outside these balls, since each $E_k$ is very close to $E_0$, we expect that the measures of $E_k$ are comparable to the measure of $E_0$, and so the measures of the images of $E_k$ outside the above balls are still small.\n\nBut in our case, there is no reason why the measures of $E_k$ should be uniformly comparable to that of $E_0$ at small scales. This issue requires more work. In short, we first have to deform $\\{E_k\\}$ into a new sequence $\\{E_k'\\}$, whose local measures are controlled by that of $E_0$, and whose limit is still $E_0$.\n\nNow let us give more details:\n\n Set \n\\begin{equation} Q'_k:=\\{Q\\in \\Delta_k: Q\\cap E_0\\ne\\emptyset\\},\\end{equation}\nand \n\\begin{equation} Q_k=\\{Q\\in \\Delta_k: \\exists Q'\\in Q'_k\\mbox{ such that }Q\\cap Q'\\ne\\emptyset\\},\\end{equation}\nthat is, $Q'_k$ is the family of elements in $\\Delta_k$ that meet $E_0$, and we get $Q_k$ by adding another layer of cubes in $\\Delta_k$ to $Q'_k$. Let $|Q_k|=\\cup_{Q\\in Q_k}Q$ be the union of elements in $Q_k$, and for each $j\\le n$, let $Q_{k,j}$ be the set of all $j$-faces of elements in $Q_k$, and let ${\\cal S} _{k,j}=\\cup_{\\sigma\\in Q_{k,j}}\\sigma$ denote the $j$-skeleton of $Q_k$. \n\nSet $\\partial E_0=E_0\\cap\\partial U$, and\n\\begin{equation} R_k:=\\{Q\\in \\Delta_k: \\exists Q'\\in \\Delta_k\\mbox{ such that } Q'\\cap \\partial E_0\\ne\\emptyset\\mbox{ and }Q\\cap Q'\\ne\\emptyset\\}.\\end{equation}\nLet $|R_k|=\\cup_{Q\\in R_k}Q$, and for each $j\\le n$, let $R_{k,j}$ be the set of all $j$-faces of elements in $R_k$, and let ${\\cal T}_{k,j}=\\cup_{\\sigma\\in R_{k,j}}\\sigma$ denote the $j$-skeleton of $R_k$.
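The passage from $Q'_k$ to $Q_k$ (adding one layer of neighboring cubes) is combinatorially simple, and the following Python sketch makes it concrete. This is only an illustration, not part of the proof: $E_0$ is replaced by a finite point sample, and each closed dyadic cube of side $2^{-k}$ is represented by the integer coordinates of its lower corner (a point lying on a face is assigned to a single cube, unlike the closed cubes of $\Delta_k$).

```python
from itertools import product

def cubes_meeting(points, k):
    # Integer indices of the dyadic cubes of side 2^-k containing some
    # sample point: a discrete stand-in for the family Q'_k.
    side = 2.0 ** (-k)
    return {tuple(int(c // side) for c in p) for p in points}

def add_layer(cubes, dim):
    # Add every cube touching a cube of the input family: the passage
    # from Q'_k to Q_k (one extra layer of neighbors in Delta_k).
    out = set(cubes)
    for q in cubes:
        for offset in product((-1, 0, 1), repeat=dim):
            out.add(tuple(c + o for c, o in zip(q, offset)))
    return out

# Toy example: 101 sample points on a horizontal segment in the plane.
sample = [(t / 100.0, 0.25) for t in range(101)]
qk_prime = cubes_meeting(sample, 2)  # cubes of side 1/4 meeting the sample
qk = add_layer(qk_prime, 2)          # one more layer around them
```

For this segment, `qk_prime` consists of the five cubes in the row containing the segment, and `qk` adds the surrounding layer (a 7-by-3 block of indices).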
\n\nIt is easy to see that \n\\begin{equation} Q'_k\\subset Q_k,\\mbox{ and } R_k\\subset Q_k,\\end{equation}\nand hence\n\\begin{equation} |R_k|\\subset |Q_k|, R_{k,j}\\subset Q_{k,j}\\mbox{, and }{\\cal T}_{k,j}\\subset {\\cal S} _{k,j}\\mbox{ for all }j\\le n.\\end{equation}\n\nLet us first give some properties for the sets ${\\cal S} _{k,d}$ and ${\\cal T}_{k,d}$, where $d$ is the dimension of $E_0$.\n\n\\begin{pro}Suppose that $\\partial E_0\\subset {\\cal S} _{k_0,d-1}$ for some $k_0\\in \\mathbb N$, that is, $\\partial E_0$ is a union of dyadic $(d-1)$-faces. Then we have\n\n $1^\\circ$ $\\lim_{k\\to\\infty} {\\cal H} ^d({\\cal T}_{k,d})= 0$;\n \n $2^\\circ$ There exists $M>0$, depending only on $n$ and $d$, such that for each $k>k_0$ and each $Q\\in Q_k$ with $Q^\\circ\\cap |R_{k-2}|=\\emptyset$, we have \n \\begin{equation} {\\cal H} ^d({\\cal S} _{k,d}\\cap Q)\\le M{\\cal H} ^d(E_0\\cap V(Q)),\\end{equation}\n where $V(Q):=B(Q,2\\sqrt 2\\, 2^{-k})$ is the $2\\sqrt 2\\, 2^{-k}$-neighborhood of $Q$.\n\\end{pro}\n\n\\noindent Proof. Let us prove $2^\\circ$. Let $k>k_0$, and let $Q\\in Q_k$ be such that $Q^\\circ\\cap |R_{k-2}|=\\emptyset$. Since $Q\\in Q_k$, we can find $y\\in E_0$ with $d(y,Q)\\le \\sqrt 2\\, 2^{-k}$; and since $Q^\\circ\\cap |R_{k-2}|=\\emptyset$, $d(y,\\partial E_0)>2^{-k+2}$. In particular, $d(y, \\partial E_0)>2\\times 2^{-k}$, which means $B(y,2\\times 2^{-k})\\subset \\mathbb R^n\\backslash \\partial E_0$. Since $E_0$ is minimal in $\\mathbb R^n\\backslash \\partial E_0$, by Ahlfors regularity for minimal sets (cf. \\cite{DS00} Proposition 4.1), \n \\begin{equation} C_2^{-1}2^{-kd}\\le {\\cal H} ^d(E_0\\cap B(y, 2^{-k}))\\le C_2 2^{-kd},\\end{equation}\n where $C_2$ is a constant that depends only on $n$ and $d$. As a result, since $B(y,2^{-k})\\subset V(Q)$, we have\n \\begin{equation} {\\cal H} ^d({\\cal S} _{k,d}\\cap Q)=\\alpha(n,d)2^{-kd}\\le C_2\\alpha(n,d){\\cal H} ^d(E_0\\cap B(y, 2^{-k}))\\le C_2\\alpha(n,d){\\cal H} ^d(E_0\\cap V(Q)).\\end{equation}\n \\hfill$\\Box$\\bigskip\n\nNext, let us construct the new sequence $E'_k$. Since $C$ is compact, we know that $d_C(E_k,E_0)\\to 0, k\\to \\infty$. As a result, modulo extracting a subsequence, we can suppose that $d_C(E_k, E_0)<2^{-k}$. Therefore, $E_k\\subset B(E_0,2^{-k})$.
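The extraction of the subsequence with $d_C(E_k,E_0)<2^{-k}$ uses nothing beyond the definition of the Hausdorff distance recalled in Section 2. For finite point samples that distance can be computed directly; the following Python sketch (an illustration only, with sets replaced by finite point lists) mirrors the formula $d_H(E,F)=\max\{\sup\{d(y,F):y\in E\},\sup\{d(y,E):y\in F\}\}$.

```python
import math

def hausdorff(E, F):
    # Hausdorff distance between two finite point sets in R^n, following
    # d_H(E, F) = max( sup_{y in E} d(y, F), sup_{y in F} d(y, E) ).
    def dist_to(p, S):
        return min(math.dist(p, q) for q in S)
    return max(max(dist_to(p, F) for p in E),
               max(dist_to(q, E) for q in F))

# Two parallel horizontal segments, sampled; the Hausdorff distance
# between them is just the vertical offset 0.5.
E = [(t / 10.0, 0.0) for t in range(11)]
F = [(t / 10.0, 0.5) for t in range(11)]
```

Note the symmetrization over both suprema: dropping either one gives only the (non-symmetric) one-sided distance, which can be much smaller.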
\n\n\\begin{pro}For each $\\epsilon>0$, there exists a sequence $\\{E'_k\\}_{k\\in \\mathbb N}\\subset {\\cal F} (E, \\mathbb R^n\\backslash \\partial E_0)$, such that \n\\begin{equation} E'_k\\subset B(E_0, 2^{-k+2}),\\end{equation} and for $k$ large, \n\\begin{equation}{\\cal H} ^d (E'_k\\backslash {\\cal S} _{k,d})<\\epsilon.\\end{equation}\nIn particular, $E'_k$ converges to $E_0$.\n\\end{pro}\n\n\\noindent Proof. Fix any $k>k_0$. \n\nSince $E_k\\subset B(E_0,2^{-k})$, $E_k\\subset |Q_k|$. And we know that $E_k$ is a deformation of $E$, hence $E_k$ has locally finite $d$-Hausdorff measure. As a result, by a standard Federer-Fleming argument (cf. Section 4.2 of \\cite{Fe}, or Section 3 of \\cite{DS00}), there exists a Lipschitz map $\\varphi_k: |Q_k|\\to |Q_k|$ (the Lipschitz constant $L_k$ depends on $k$, and $L_k\\ge 1$), such that\n\n\\begin{equation} \\varphi_k(Q)\\subset Q,\\ \\forall Q\\in Q_k,\\end{equation}\n\\begin{equation} \\varphi_k(E_k)\\subset {\\cal S} _{k,d},\\end{equation}\nand\n\\begin{equation} \\varphi_k(x)=x, \\forall x\\in {\\cal S} _{k,d}.\\end{equation}\n\nIn particular, each set $\\varphi_k(E_k)$ is contained in ${\\cal S} _{k,d}$. Note that $\\varphi_k(E_k)$ might not belong to ${\\cal F} (E,\\mathbb R^n\\backslash \\partial E_0)$, because the deformation $\\varphi_k$ may not satisfy the compactness condition \\textcolor{blue}{(2.6)}. So we need to do some slight modification.\n\nFix $\\epsilon>0$. Let $\\mu={\\cal H} ^d\\lfloor_{E_k}$; then $\\mu$ is a locally finite measure.
In particular, we know that \n\\begin{equation} \\lim_{r\\to 0}\\mu(B(\\partial E_0, r))=\\mu(\\partial E_0)={\\cal H} ^d(E_k\\cap \\partial E_0)\\le {\\cal H} ^d(\\partial E_0)=0.\\end{equation}\n\nTake $r_k>0$ such that $\\mu(B(\\partial E_0, r_k))<(3L_k+2)^{-d}\\epsilon$, that is, ${\\cal H} ^d(E_k\\cap B(\\partial E_0, r_k))<(3L_k+2)^{-d}\\epsilon$.\n\nFor any $x\\in \\mathbb R^n$, set \n\\begin{equation} t_x=\\left\\{\\begin{array}{rcl}0&,\\ &x\\in B(\\partial E_0, \\frac 12 r_k)\\\\\n1 &,\\ &x\\in B(\\partial E_0, r_k)^C;\\\\\n\\frac{2 d(x,\\partial E_0)}{r_k}-1&,\\ &x\\in B(\\partial E_0, r_k)\\backslash B(\\partial E_0, \\frac 12r_k),\n\\end{array}\\right.\\end{equation}\nand set $f_k(x)=(1-t_x)x+t_x\\varphi_k(x)$.\n\n\nThen $f_k:\\mathbb R^n\\to [0,1]$ is $3L_k+2$-Lipschitz: in fact, for any $x,y$, suppose that $d(x, \\partial E_0)\\ge d(y,\\partial E_0)$, then we know that\n\n\\begin{equation}\\begin{split} ||f_k(x)-f_k(y)||&=||[(1-t_x)x+t_x\\varphi_k(x)]-[(1-t_y)y+t_y\\varphi_k(y)]||\\\\\n&=||(1-t_x)(x-y)+t_x(\\varphi_k(x)-\\varphi_k(y))+(t_x-t_y)(\\varphi_k(y)-y)||\\\\\n&\\le ||(1-t_x)(x-y)||+||t_x(\\varphi_k(x)-\\varphi_k(y))||+||(t_x-t_y)(\\varphi_k(y)-y)||\\\\\n&\\le (1-t_x)||x-y||+(t_x)L_k||x-y||+||(t_x-t_y)(\\varphi_k(y)-y)||\\\\\n&\\le L_k||x-y||+||(t_x-t_y)(\\varphi_k(y)-y)||.\n\\end{split}\\end{equation}\nTo estimate the second term, when $d(y,\\partial E_0)\\ge r_k$, we know that $t_x=t_y=1$, and this term vanishes. So suppose that $d(y,\\partial E_0)0$, there exists $s_k>0$, and a deformation $h_k$ in $U$, such that $h_k=id$ in $B(\\partial E_0, s_k)$, and \n\\begin{equation} {\\cal H} ^d(h_k(E_k))<{\\cal H} ^d(E_0)+\\epsilon.\\end{equation}\n\\end{pro}\n\n\\noindent Proof. 
Since $E_0$ is minimal in $\\mathbb R^n\\backslash \\partial E_0$, the set of regular points $E_{0P}$ of $E_0$ is of full measure: ${\\cal H} ^d(E\\backslash E_{0P})=0$.\nBy the $C^1$ regularity (Theorem \\textcolor{blue}{2.26}) for regular points, for each $x\\in E_{0P}$, there exists $r_x>0$, with $B(x,2r_x)\\subset U$, such that for all $r0$, there exists a finite set of points $\\{x_j\\}_{1\\le j\\le m}\\subset E_{0P}$, and $r_j\\in (0,r_{x_j})$, such that the balls $B(x_j,r_j)$ are disjoint, $B(x_j, 2r_j)\\cap \\partial E_0=\\emptyset$, and ${\\cal H} ^d(E_{0P}\\backslash \\cup_{j=1}^n B(x_j, r_j))<\\frac{\\epsilon}{3M\\times 2^{d+1}\\times 5^n}$. Take $t_jr$. Then $g$ is 2-Lipschitz, and we can extend it to a 2-Lipschitz map, still denoted by $g$, from $\\mathbb R^n$ to $\\mathbb R^n$. \n\n We would like to control the measure of ${\\cal H} ^d(E'_k\\backslash (\\cup_{j=1}^n B(x_j, r_j)))$. Since the major part of $E'_k$ is included in ${\\cal S} _{k,d}$, let us first estimate ${\\cal H} ^d({\\cal S} _{k,d}\\backslash (\\cup_{j=1}^n B(x_j, r_j)))$.\n \n Take any $Q\\in Q_k$ and $Q^\\circ\\cap |R_{k-2}|=\\emptyset$, then by Proposition \\textcolor{blue}{4.4} $2^\\circ$, we know that for $k>k_0$,\n \\begin{equation} {\\cal H} ^d({\\cal S} _{k,d}\\cap Q)\\le M{\\cal H} ^d(E_0\\cap V(Q)).\\end{equation}\n \n Now if $k$ is such that $2^{-k}<\\frac 16 t$, for each $Q$ such that $Q\\backslash (\\cup_{j=1}^n B(x_j, r_j))\\ne\\emptyset$, we know that $d(Q, (\\cup_{j=1}^n B(x_j, t_j))>t-\\sqrt 2 2^{-k}$, and hence $d(V(Q), (\\cup_{j=1}^n B(x_j, t_j))>t-3\\times \\sqrt 2 2^{-k}>0$, that is $V(Q)\\cap (\\cup_{j=1}^n B(x_j, t_j)=\\emptyset$. 
Hence we have\n \\begin{equation}\\begin{split}& {\\cal H} ^d({\\cal S} _{k,d}\\backslash (|R_{k-2}|\\cup (\\cup_{j=1}^n B(x_j, r_j))))\\\\\n &\\le\\sum\\{{\\cal H} ^d({\\cal S} _{k,d}\\cap Q):Q\\in Q_k, Q^\\circ\\cap |R_{k-2}|=\\emptyset\\mbox{, and }Q\\backslash (\\cup_{j=1}^n B(x_j, r_j))\\ne\\emptyset\\}\\\\\n &\\le \\sum\\{M{\\cal H} ^d(E_0\\cap V(Q)): Q\\in Q_k \\mbox{, and }V(Q)\\cap (\\cup_{j=1}^n B(x_j, t_j)=\\emptyset\\}\\\\\n &=M\\int_{E_0}\\sum\\{\\chi_{V(Q)}:Q\\in Q_k \\mbox{, and }V(Q)\\cap (\\cup_{j=1}^n B(x_j, t_j)=\\emptyset\\} d{\\cal H} ^d\\\\\n &\\le M\\int_{E_0\\backslash (\\cup_{j=1}^n B(x_j, t_j))}\\sum_{Q\\in Q_k}\\chi_{V(Q)}.\n \\end{split}\\end{equation}\n \n Note that $\\sum_{Q\\in Q_k}\\chi_{V(Q)}\\le\\sum_{Q\\in \\Delta_k}\\chi_{V(Q)}=5^n$, hence\n \\begin{equation} \\begin{split}{\\cal H} ^d({\\cal S} _{k,d}&\\backslash (|R_{k-2}|\\cup (\\cup_{j=1}^n B(x_j, r_j))))\\le M\\int_{E_0\\backslash (\\cup_{j=1}^n B(x_j, t_j))}\\sum_{Q\\in Q_k}\\chi_{V(Q)}\\\\\n &\\le 5^nM\\int_{E_0\\backslash (\\cup_{j=1}^n B(x_j, t_j))}d{\\cal H} ^d=5^nM{\\cal H} ^d(E_0\\backslash (\\cup_{j=1}^n B(x_j, t_j)))\\\\\n &<5^nM\\times \\frac{\\epsilon}{4M\\times 2^d\\times 5^n}=\\frac{\\epsilon}{4\\times 2^d}.\n \\end{split}\\end{equation}\n \nNext let us estimate ${\\cal H} ^d({\\cal S} _{k,d}\\cap |R_{k-2}|)$. For each $Q\\in \\Delta_{k-2}$, we know that\n\\begin{equation} {\\cal H} ^d({\\cal S} _{k,d}\\cap Q)=4^{n-d}{\\cal H} ^d(S_{k-2,d}\\cap Q),\\end{equation}\nhence\n \\begin{equation} {\\cal H} ^d({\\cal S} _{k,d}\\cap |R_{k-2}|)\\le\\sum_{Q\\in R_{k-2}}{\\cal H} ^d({\\cal S} _{k,d}\\cap Q)=4^{n-d}\\sum_{Q\\in R_{k-2}}{\\cal H} ^d({\\cal S} _{k-2,d}\\cap Q)\\le C_34^{n-d}{\\cal H} ^d({\\cal T}_{k-2,d}),\\end{equation}\n where $C_3=C_3(n,d)$ is the number of cubes $Q\\in \\Delta_k$ that share a same $d$-face. 
This is a constant that only depends on $n$ and $d$.\n \n By Proposition \\textcolor{blue}{4.4} $1^\\circ$, we know that for $k$ large, ${\\cal H} ^d({\\cal S} _{k,d}\\cap |R_{k-2}|)<\\frac{\\epsilon}{4\\times 2^d}.$\n \n \nNow by Proposition \\textcolor{blue}{4.5}, we take $E'_k=f_k(E_k)$ be such that\n\\begin{equation} {\\cal H} ^d(E'_k\\backslash {\\cal S} _{k,d})<\\frac{\\epsilon}{4\\times 2^d}.\\end{equation}\n\nThen for $k$ large, we have, by\n\\begin{equation} \\begin{split}{\\cal H} ^d(g(E'_k))\\le &{\\cal H} ^d(g({\\cal S} _{k,d}))+{\\cal H} ^d(g(E'_k\\backslash {\\cal S} _{k,d}))\\\\\n\\le &{\\cal H} ^d(g({\\cal S} _{k,d}\\cap (\\cup_{j=1}^n B(x_j, r_j))))+{\\cal H} ^d(g({\\cal S} _{k,d}\\cap |R_{k-2}|))\\\\\n&+{\\cal H} ^d(g({\\cal S} _{k,d}\\backslash (|R_{k-2}|\\cup (\\cup_{j=1}^n B(x_j, r_j)))))+{\\cal H} ^d(g(E'_k\\backslash {\\cal S} _{k,d}))\\\\\n\\le &{\\cal H} ^d(E_0)+2^d[{\\cal H} ^d({\\cal S} _{k,d}\\cap |R_{k-2}|)+{\\cal H} ^d({\\cal S} _{k,d}\\backslash (|R_{k-2}|\\cup (\\cup_{j=1}^n B(x_j, r_j))))\\\\\n&+{\\cal H} ^d(E'_k\\backslash {\\cal S} _{k,d})]\\\\\n\\le &{\\cal H} ^d(E_0)+2^d(\\frac{\\epsilon}{4\\times 2^d}+\\frac{\\epsilon}{4\\times 2^d}+\\frac{\\epsilon}{4\\times 2^d})={\\cal H} ^d(E_0)+\\frac 34\\epsilon.\n\\end{split}\\end{equation}\n\nNote that $g\\circ f_k$ is the identity map in a neighborhood $B(\\partial E_0,s_k)$ of $\\partial E_0$, with $s_k=\\min\\{r_k,r\\}$. But $g\\circ f_k$ might even not be a deformation in $\\mathbb R^n\\backslash \\partial E_0$.\n\nWe still have to modify this sequence $g\\circ f_k(E_k)$ to deformations of $E_k$ in $U$. For this purpose, let $C_k$ denote the convex hull of $C\\backslash B(\\partial E_0,s_k)$. Then $C_k$ is a compact subset of $U$: in fact, since $C\\cap \\partial U=\\partial E_0$, hence $d(C\\backslash B(\\partial E_0,s_k), \\partial U)>0$. 
But $C\\backslash B(\\partial E_0, s_k)\\subset U$, and $U$ is convex, hence $d(C_k, \\partial U)>0$.\n\nLet $\\pi_k$ be the shortest distance projection to $C_k$. We define $h_k: E_k\\to (E_k\\cap B(\\partial E_0,s_k))\\cup C_k$: for $x\\in E_k\\cap B(\\partial E_0,s_k)$, $h_k(x)=x=g\\circ f_k(x)$, and for $x\\in E_k\\backslash B(\\partial E_0,s_k)$, let $h_k(x)=\\pi_k\\circ g\\circ f_k(x)$. It is easy to verify that $h_k$ is Lipschitz, $h_k=id$ outside $C_k$, and $h_k(C_k)\\subset C_k$. Moreover, we know that \n\\begin{equation} \\begin{split}{\\cal H} ^d(h_k(E_k))&\\le{\\cal H} ^d(h_k(E_k\\backslash B(\\partial E_0,s_k))+{\\cal H} ^d(h_k(E_k\\cap B(\\partial E_0,s_k)))\\\\\n&={\\cal H} ^d(\\pi_k\\circ g\\circ f_k(E_k\\backslash B(\\partial E_0,s_k))+{\\cal H} ^d(E_k\\cap B(\\partial E_0,s_k))\\\\\n&\\le {\\cal H} ^d(g\\circ f_k(E_k\\backslash B(\\partial E_0,s_k))+{\\cal H} ^d(E_k\\cap B(\\partial E_0,s_k))\\\\\n&\\le {\\cal H} ^d(g\\circ f_k(E_k)+{\\cal H} ^d(E_k\\cap B(\\partial E_0,s_k))\\\\\n&\\le {\\cal H} ^d(E_0)+\\frac 34\\epsilon+\\frac 14\\epsilon<{\\cal H} ^d(E_0)+\\epsilon.\n\\end{split}\\end{equation} \n \\hfill$\\Box$\\bigskip\n \n Note that after Proposition \\textcolor{blue}{4.6}, Theorem \\textcolor{blue}{4.1} $1^\\circ$ follows directly for the case when $\\partial E_0\\subset {\\cal S} _{k_0,d-1}$ for some $k_0\\in \\mathbb N$. Then $2^\\circ$ is a direct corollary of $1^\\circ$.\n \n For general case where \\textcolor{blue}{(4.2)} holds, we set \n\\begin{equation} Q'_k:=\\{Q\\in \\Delta_k: Q\\cap \\psi^{-1}(E_0)\\ne\\emptyset\\},\\end{equation}\nand \n\\begin{equation} Q_k=\\{Q\\in \\Delta_k: \\exists Q'\\in Q'_k\\mbox{ such that }Q\\cap Q'\\ne\\emptyset\\},\\end{equation}\n Let $|Q_k|=\\cup_{Q\\in Q_k}Q$, and for each $j\\le n$, let $Q_{k,j}$ be the set of all $j$ faces of elements in $Q_k$, and let ${\\cal S} _{k,j}=\\cup_{\\sigma\\in Q_{k,j}}\\sigma$ denote the $j$-skeleton of $Q_k$. 
\n\nSet \n\\begin{equation} R_k:=\\{Q\\in \\Delta_k: Q\\cap \\psi^{-1}(E_0\\cap U)\\ne\\emptyset\\}.\\end{equation}\nLet $|R_k|=\\cup_{Q\\in R_k}Q$, and for each $j\\le n$, let $R_{k,j}$ be the set of all $j$-faces of elements in $R_k$, and let ${\\cal T}_{k,j}=\\cup_{\\sigma\\in R_{k,j}}\\sigma$ denote the $j$-skeleton of $R_k$. \n \n Then we do all the constructions in $U$ with respect to $\\psi(Q_k)$, $\\psi(R_k)$. All the quantitative properties of $Q_k$ and $R_k$ that are used in the proof above also hold for $\\psi(Q_k)$ and $\\psi(R_k)$, since $\\psi$ is bi-Lipschitz. And the proof goes the same way.\n \\hfill$\\Box$\\bigskip\n \n \\begin{cor}Let $U\\subset \\mathbb R^n$ be a bounded convex domain, and let $E$ be a closed set in $\\bar U$ with locally finite $d$-Hausdorff measure. Then the conclusions $1^\\circ$ and $2^\\circ$ of Theorem \\textcolor{blue}{4.1} hold in either of the following cases:\n \n $1^\\circ$ $U$ is uniformly convex, and \\textcolor{blue}{(4.2)} holds;\n \n $2^\\circ$ $d=2$, $E=K$ is a 2-dimensional minimal cone, and $U$ is a convex domain.\n\n \\end{cor} \n\n\\section{Uniqueness properties for 2-dimensional minimal cones in $\\mathbb R^3$}\n\nIn this section we prove the topological and Almgren uniqueness for all 2-dimensional minimal cones in $\\mathbb R^3$.\n\n\\subsection{Planes}\n\n\\begin{thm}A 2-dimensional linear plane $P$ is Almgren and $G$-topological unique in $\\mathbb R^n$ for all $n\\ge 3$ and all abelian groups $G$.\n\\end{thm}\n\n\\noindent Proof. Let $P\\subset \\mathbb R^n$ be a 2-dimensional plane containing the origin. 
By Propositions \\textcolor{blue}{3.2 and 3.4}, to prove that $P$ is Almgren and $G$-topological unique, it is enough to prove that $P$ is $G$-topological unique in the unit ball $B$.\n\nSuppose that $E$ is a reduced $G$-topological competitor for $P$ in $B$, so that\n\\begin{equation} {\\cal H} ^2(E\\cap B)={\\cal H} ^2(P\\cap B).\\end{equation}\nBy Remark \\textcolor{blue}{3.3 $3^\\circ$}, we know that $E$ is $G$-topological and hence Almgren minimal in $B$. By the convex hull property for Almgren minimal sets, $E\\cap B$ is contained in the convex hull of $E\\cap \\partial B=P\\cap \\partial B$, which is $P\\cap \\bar B$. Hence $E\\cap B\\subset P\\cap B$. Then since both $P$ and $E$ are reduced sets, \\textcolor{blue}{(5.1)} gives that $E=P$. Hence $P$ is $G$-topological unique, and hence Almgren unique.\\hfill$\\Box$\\bigskip\n\n\\subsection{The $\\mathbb Y$ sets}\n\n\\begin{thm}Any 2-dimensional $\\mathbb Y$ set is Almgren and $G$-topological unique in $\\mathbb R^n$ for all $n\\ge 3$ and all abelian groups $G$.\n\\end{thm}\n\n\\noindent Proof. By Propositions \\textcolor{blue}{3.4 and 3.5}, it is enough to prove that $\\mathbb Y$ sets are $G$-topological unique in $\\mathbb R^3$.\n\nSo let $Y$ be a 2-dimensional $\\mathbb Y$ set in $\\mathbb R^3$. Modulo changing the coordinate system, we can suppose that the spine of $Y$ is the vertical line $Z=\\{(x,y,z)\\in \\mathbb R^3:x=y=0\\}$, and that the intersection of $Y$ with the horizontal plane $Q:=\\{z=0\\}$ is the union $Y_1$ of the three half lines $R_{oa_i},1\\le i\\le 3$, where $a_1=(1,0)$, $a_2=(-\\frac 12, \\frac{\\sqrt 3}{2})$, and $a_3=(-\\frac 12, -\\frac{\\sqrt 3}{2})$. Then $Y=Y_1\\times Z$.\n\nBy Proposition \\textcolor{blue}{3.2}, it is enough to prove that $Y$ is $G$-topological unique in the cylinder $D:=B_Q(0,1)\\times (-1,1)$.\n\nFor $t\\in (-1,1)$, let $a_i^t=(a_i,t)\\in Q\\times (-1,1)$. \n\nLet $f:\\mathbb R^3\\to \\mathbb R:f(x,y,z)=z$. 
For any set $F\\subset \\mathbb R^3$, and each $t\\in \\mathbb R$, set $F_t=f^{-1}\\{t\\}\\cap F$ the slice of $F$ at level $t$. \n\nLet $\\wideparen{a_i^ta_j^t}$ denote the open minor arc of circle of $\\partial B_Q(0,1)\\times \\{t\\}=\\partial D_t$ between $a_i^t$ and $a_j^t$, $1\\le i\\ne j\\le 3$. Then they belong to $\\mathbb R^3\\backslash D$. Since $\\wideparen{a_i^ta_j^t}, 1\\le iH^1(E_2)$.\n\nWe want to prove that ${\\cal F} $ admits a maximal element. So take a totally ordered subset ${\\cal F} _1$ of ${\\cal F} $. We will prove that ${\\cal F} _1$ admits a upper bound in ${\\cal F} $. \n\nLet $E_1$ be the intersection of all sets in ${\\cal F} _1: E_1=\\cap_{F\\in {\\cal F} _1}F$. Then $\\{a_1,a_2\\}\\subset E_1$, and for all $F\\in {\\cal F} _1$, $E_1\\subset F$. \n\nLet $H_1$ be a connected component of $F_0$ that contains $a_1$. As a connected component, it is closed in $F_0$. And since $F_0$ is closed, $H_1$ is closed. \n\nWe claim that $a_2\\in H_1$ as well. Otherwise, $a_2\\not\\in H_1$. Let $H_2=E_1\\backslash H_1$. Since both $H_i,i=1,2$ are compact, the distance $d$ between them is positive. Let $U:=B(H_1,\\frac d2)\\cap B$. Then $\\partial U$ is a compact Lipschitz curve, $a_1\\in U$, and $a_2\\in B\\backslash U$. Now for any $F\\in {\\cal F} _1$, it is connected, and contains $a_1$ and $a_2$. As a result, the set $I_F:=F\\cap \\partial U$ is non empty and closed. The family $I:=\\{I_F:F\\in {\\cal F} _1\\}$ is a class of closed set. Since ${\\cal F} _1$ is totally ordered, hence for any finite subsets $\\{F_1,\\cdots, F_k\\}\\subset {\\cal F} _1$, $\\cap_{i=1}^k F_k$ must be one of the $F_1,\\cdots, F_k$. Suppose, without loss of generality, that $\\cap_{i=1}^k F_i=F_1$. Then $\\cap_{i=1}^k I_{F_i}=I_{F_1}\\ne\\emptyset$. We have thus proved that the family $I$ has the finite intersection property. Since the elements in $I$ are subsets of the compact set $E$, we know that $\\cap_{F\\in {\\cal F} _1}I_F\\ne\\emptyset$. 
By definition, this means that $E_1\\cap \\partial U\\ne\\emptyset$. But we have supposed that $E_1=H_1\\cup H_2$, and neither of the $H_i,i=1,2$ meets $\\partial U$, contradiction.\n\nHence $a_2\\in H_1$, and then $H_1\\in {\\cal F} $. Clearly $H_1\\ge F$ for all $F\\in {\\cal F} _1$, which yields that $H_1$ is an upper bound for ${\\cal F} _1$. \n\nWe have thus proved that ${\\cal F} _1$ admits an upper bound. This holds for all totally ordered subsets ${\\cal F} _1$ of ${\\cal F} $. By Zorn's lemma, ${\\cal F} $ admits a maximal element $\\gamma$. \n\nWe claim that \n\\begin{equation}\\begin{split}\\forall p\\in \\gamma\\backslash\\{a_1,a_2\\}\\mbox{ there exist two connected sets }\\gamma_1\\mbox{ and }\\gamma_2,\\\\\n\\mbox{ such that }a_i\\in \\gamma_i\\subset \\gamma, i=1,2\\mbox{, and }\\gamma_1\\cap\\gamma_2=\\{p\\}.\\end{split}\\end{equation}\n\nLet us prove the claim. Let $d=\\min\\{|p-a_1|,|p-a_2|\\}$. Then for all $0<r<d$, we have ${\\cal H} ^1(\\gamma\\cap B(p,r))>0$. And hence the set $\\gamma\\backslash B(p,r)$ is a subset of $\\gamma$ with strictly smaller measure, and contains $a_i,i=1,2$. Since $\\gamma$ is a maximal element in ${\\cal F} $, we know that $ \\gamma\\backslash B(p,r)\\not\\in {\\cal F} $, hence it is not connected, and does not contain any connected subset that contains both $a_1$ and $a_2$.\n\nAs a result, $a_1$ and $a_2$ lie in two different connected components $H_1^r$ and $H_2^r$ of $\\gamma\\backslash B(p,r)$ for each $r\\in (0,d)$. Note that for $0<r<r'<d$ we have $H_i^{r'}\\subset H_i^r$, so we may set $H_i=\\cup_{r\\in (0,d)}H_i^r$, $i=1,2$; then each $H_i$ is connected and contains $a_i$. We claim that $p\\in \\bar H_i$, $i=1,2$. Suppose, for instance, that $p\\not\\in \\bar H_1$; then there exists $s>0$ such that $B(p,2s)\\cap H_1=\\emptyset$. Let $G=\\gamma\\backslash (H_1^s\\cup B(p,s))$. Then we have the disjoint union $\\gamma=H_1^s\\cup (\\gamma\\cap B(p,s))\\cup G$. Note that $H_1^s$ is a connected component of $\\gamma\\backslash B(p,s)$, hence the sets $G$ and $H_1^s$ are both relatively open in $H_1^s\\cup G$. As a result, there exist two disjoint open subsets $U_1$ and $U_2$, such that $H_1^s\\subset U_1$, and $G\\subset U_2$. Let $U_2'=U_2\\cup B(p,s)$, and $U_1'=U_1\\backslash \\bar B(p,s)$. 
Then $U_1'$ and $U_2'$ are disjoint open subsets, $H_1^s\\subset U_1'$, and $(\\gamma\\cap B(p,s))\\cup G\\subset U_2'$. This gives an open decomposition of $\\gamma$, which contradicts the fact that $\\gamma$ is connected.\n\nHence $p\\in \\bar H_i$, and thus $H_i\\cup\\{p\\}$ is connected, $i=1,2$. Let $\\gamma_i=H_i\\cup \\{p\\}$, and we get Claim \\textcolor{blue}{(5.3)}.\n\nNow since $a_1,a_2, a_3$ lie in the same connected component of $E$, we know that $\\gamma$ and $a_3$ lie in the same connected component $E_0$ of $E$. \n\nWe are going to define a connected set $\\gamma_3$, such that $a_3\\in \\gamma_3$, $\\gamma_3\\cup\\gamma$ is connected, and $\\gamma_3\\cap\\gamma$ is a single point.\n\nIf $a_3\\in \\gamma$, then we set $\\gamma_3=\\{a_3\\}$;\n\nOtherwise, we have $a_3\\not\\in\\gamma$. Let $\\gamma'=E_0\\backslash \\gamma$. Then $\\gamma'\\cup \\gamma$ is connected and $a_3\\in \\gamma'$. Let $\\gamma_4$ be the connected component of $\\gamma'$ that contains $a_3$. Then we claim that \n\\begin{equation}\\gamma_4\\cup \\gamma\\mbox{ is connected.}\\end{equation}\n\nIn fact, if $\\gamma_4=\\gamma'$ then it holds automatically; otherwise, if $\\gamma_4\\cup\\gamma$ is not connected, since both $\\gamma_4$ and $\\gamma$ are connected, they are the two connected components of $\\gamma_4\\cup \\gamma$, and hence there exist two disjoint open sets $U_1$ and $U_2$ of $\\mathbb R^3$ such that $\\gamma_4\\subset U_1$ and $\\gamma\\subset U_2$. Similarly, since $\\gamma_4$ is a connected component of $\\gamma'$, there exist two disjoint open sets $U_3$ and $U_4$ of $\\mathbb R^3$ such that $\\gamma_4\\subset U_3$ and $\\gamma'\\backslash \\gamma_4\\subset U_4$. Then let $U=U_1\\cap U_3$, and $V=U_2\\cup U_4$. Then $U$ and $V$ are disjoint, and $\\gamma_4\\subset U$, $E_0\\backslash\\gamma_4=\\gamma\\cup (\\gamma'\\backslash \\gamma_4)\\subset V$. This contradicts the fact that $E_0$ is connected. 
Hence Claim \\textcolor{blue}{(5.4)} holds.\n\nAs a result, $\\bar\\gamma_4\\cap \\gamma\\ne\\emptyset$, because $\\gamma$ and $\\bar\\gamma_4$ are both closed, and their union is connected.\n\n\nTake $p\\in \\bar\\gamma_4\\cap\\gamma$, and set $\\gamma_3=\\gamma_4\\cup\\{p\\}$. Then $\\gamma_3$ is connected, contains $a_3$, $\\gamma_3\\cup\\gamma$ is connected, and $\\gamma_3\\cap\\gamma=\\{p\\}$ is a single point.\n\nBy Claim \\textcolor{blue}{(5.3)}, there exist two connected sets $\\gamma_1$ and $\\gamma_2$, such that $\\gamma_1\\cap\\gamma_2=\\{p\\}$, and $a_i\\in \\gamma_i,i=1,2$. \n\nTo summarize, we get three connected subsets $\\gamma_i,1\\le i\\le 3$ of $E$, such that $a_i\\in \\gamma_i$, ${\\cal H} ^1(\\gamma_i\\cap \\gamma_j)=0$ for $i\\ne j$, and $p\\in \\cap_{i=1}^3\\gamma_i$.\n\nSince each $\\gamma_i$ is connected and contains $a_i$ and $p$, we know that \n\\begin{equation} {\\cal H} ^1(\\gamma_i)\\ge {\\cal H} ^1([p,a_i]),1\\le i\\le 3,\\end{equation}\nand hence\n\\begin{equation} {\\cal H} ^1(E)\\ge {\\cal H} ^1(\\cup_{i=1}^3\\gamma_i)=\\sum_{i=1}^3{\\cal H} ^1(\\gamma_i)\\ge \\sum_{i=1}^3{\\cal H} ^1([p,a_i]).\\end{equation}\n\nObviously the point $p\\in \\bar B$. And it is well known that the quantity $\\sum_{i=1}^3{\\cal H} ^1([p,a_i])$ attains its minimum if and only if $p$ is the Fermat point of the triangle $\\Delta_{a_1a_2a_3}$, which is just the origin $o$. In this case, \n\\begin{equation} \\sum_{i=1}^3{\\cal H} ^1([o,a_i])={\\cal H} ^1(Y_1\\cap \\bar B).\\end{equation}\nThis leads to the conclusion of Proposition \\textcolor{blue}{5.4}.\\hfill$\\Box$\\bigskip\n\nNow let us return to the proof of Theorem \\textcolor{blue}{5.2}. Let $F$ be a reduced $G$-topological competitor of $Y$ in $D$, such that \n\\begin{equation} {\\cal H} ^2(F\\cap D)={\\cal H} ^2(Y\\cap D),\\end{equation}\nand we would like to show that $F=Y$.\n\nBy Lemma \\textcolor{blue}{5.3}, we know that $F_t$ connects the three points $a_i^t,1\\le i\\le 3$. 
Then Proposition \\textcolor{blue}{5.4} tells that \n\\begin{equation} {\\cal H} ^1(F_t\\cap D_t)\\ge {\\cal H} ^1(Y_t\\cap D_t).\\end{equation}\n\nWe apply the coarea formula (cf. \\cite{Fe} 3.2.22) to the Lipschitz function $f$, and the set $F\\cap D$, and get\n\\begin{equation} {\\cal H} ^2(F\\cap D)\\ge \\int_{-1}^1{\\cal H} ^1(F_t\\cap D_t)dt\\ge \\int_{-1}^1{\\cal H} ^1(Y_t\\cap D_t)dt={\\cal H} ^2(Y\\cap D).\\end{equation}\nThen \\textcolor{blue}{(5.8)} tells that\n\\begin{equation} {\\cal H} ^1(F_t\\cap D_t)= {\\cal H} ^1(Y_t\\cap D_t) \\mbox{ for a.e. }t\\in (-1,1),\\end{equation}\nand hence \n\\begin{equation} F_t\\cap D_t=Y_t\\cap D_t\\mbox{ for a.e. }t\\in (-1,1)\\end{equation}\nby Proposition \\textcolor{blue}{5.4}. Hence we know that $F\\cap D=Y\\cap D$ modulo ${\\cal H} ^2$-null sets. But $F$ is reduced, hence $F\\cap D=Y\\cap D$. Hence $Y$ is $G$-topological unique in $D$, and hence it is $G$-topological unique in $\\mathbb R^3$ (Proposition \\textcolor{blue}{3.2}), and hence in $\\mathbb R^n$ (Proposition \\textcolor{blue}{3.5}).\n\nBy Proposition \\textcolor{blue}{3.4}, $\\mathbb Y$ sets are also Almgren unique in $\\mathbb R^n$.\\hfill$\\Box$\\bigskip\n\n\\begin{rem}It is also possible to prove Theorem \\textcolor{blue}{5.2} by paired calibration (cf. \\cite{LM94} and \\cite{Br91}). In fact, we will use this method to prove the uniqueness for $\\mathbb T$ sets in $\\mathbb R^3$ in the next subsection, and interested readers can easily find a similar proof for $\\mathbb Y$ sets. The proof in this section is more elementary in some sense, using mainly elementary topology. \n\\end{rem}\n\n\\subsection{The $\\mathbb T$ sets}\n\n \\begin{thm}Any 2-dimensional $\\mathbb T$ set is Almgren and ($\\mathbb Z$-)topological unique in $\\mathbb R^n$ for all $n\\ge 3$.\n\\end{thm}\n\n\\noindent Proof. 
By Propositions \\textcolor{blue}{3.4 and 3.5}, it is enough to prove that $\\mathbb T$ sets are topological unique in $\\mathbb R^3$.\n\nLet $T$ be a $\\mathbb T$ set centered at the origin in $\\mathbb R^3$. That is, $T$ is the cone over the 1-skeleton of a regular tetrahedron $C$ centered at the origin and inscribed in the closed unit ball $B$.\n\nBy Proposition \\textcolor{blue}{3.2}, to prove that $T$ is topological unique in $\\mathbb R^3$, it is enough to prove that $T$ is topological unique in $B$. So suppose that $E$ is a reduced topological competitor for $T$ in $B$, such that\n\\begin{equation} {\\cal H} ^2(E\\cap B)={\\cal H} ^2(T\\cap B).\\end{equation}\n\nBy Remark \\textcolor{blue}{3.3 $3^\\circ$}, we know that $E$ is minimal, and hence is rectifiable. Hence for almost all $x\\in E$, the tangent plane $T_xE$ exists.\n\nAs mentioned in the last subsection, our proof will profit from the paired calibration, so let us first give the necessary details: \n\nDenote by $a_i,1\\le i\\le 4$ the four singular points of $T\\cap \\partial B$. Let $\\Omega_i,1\\le i\\le 4$ be the four equivalent connected spherical regions of $\\partial B\\backslash T$, $\\Omega_i$ being on the opposite of $a_i$. \n\nSince $E$ is a topological competitor for $T$ in $B$, we know that $\\partial B\\backslash E=\\partial B\\backslash T=\\cup_{i=1}^4\\Omega_i$, and the four $\\Omega_i$ live in different connected components of $B\\backslash E$.\n\nFor $1\\le i\\le 4$, let $C_i$ be the connected component of $B\\backslash E$ that contains $\\Omega_i$. Let $E_i=\\partial C_i\\backslash \\partial B=\\partial C_i\\backslash \\Omega_i$. Then we know that the four $C_i,1\\le i\\le 4$ are disjoint, and $E_i\\subset E$. 
Also note that $E_i\\cap \\Omega_i\\subset E\\cap\\partial B=T\\cap \\partial B$ is of ${\\cal H} ^2$ measure zero, hence we have the essentially disjoint unions\n\\begin{equation} \\partial C_i=E_i\\cup\\Omega_i,1\\le i\\le 4.\\end{equation}\n\nSince $C_i$ are disjoint regions in $\\mathbb R^3$, we know that for almost all $x\\in E$, they belong to at most two of the $E_i$'s. So for $i\\ne j$, let $E_{ij}=E_i\\cap E_j$. Let $E_{i0}$ denote $E_i\\backslash (\\cup_{j\\ne i}E_i)$, the set of points $x$ that belongs only to $E_i$. Let $F=\\cup_{1\\le i\\le 4}E_i\\subset E\\cap B$, then we have the disjoint union\n\\begin{equation} F=[\\cup_{1\\le i\\le 4}E_{i0}]\\cup[\\cup_{1\\le id{\\cal H} ^2(x)=\\int_{E_i}d{\\cal H} ^2(x)+\\int_{\\Omega_i}d{\\cal H} ^2(x),\\end{equation}\nand hence\n\\begin{equation} \\int_{E_i}<-a_i,n_i(x)>d{\\cal H} ^2(x)=\\int_{\\Omega_i}d{\\cal H} ^2(x)={\\cal H} ^2(\\pi_i(\\Omega_i)),\\end{equation}\nwhere $\\pi_i$ is the orthogonal projection from $\\mathbb R^3$ to the plane orthogonal to $a_i$, $1\\le i\\le 4$.\nWe sum over $i$, and get\n\\begin{equation} \\sum_{1\\le i\\le 4} \\int_{E_i}<-a_i,n_i(x)>d{\\cal H} ^2(x)=\\sum_{1\\le i\\le 4}{\\cal H} ^2(\\pi_i(\\Omega_i)).\\end{equation}\nFor the left-hand-side, by the disjoint union \\textcolor{blue}{(5.15)}, we have\n\\begin{equation} \\begin{split}&\\sum_{1\\le i\\le 4} \\int_{E_i}<-a_i,n_i(x)>d{\\cal H} ^2(x)\\\\\n=&\\sum_{1\\le i\\le 4}[ \\int_{E_{i0}}<-a_i,n_i(x)>d{\\cal H} ^2(x)+(\\sum_{i\\ne j}\\int_{E_{ij}}<-a_i,n_i(x)>d{\\cal H} ^2(x))\\\\\n=&\\sum_{1\\le i\\le 4}\\int_{E_{i0}}<-a_i,n_i(x)>d{\\cal H} ^2(x)+\\sum_{1\\le i+<-a_j,n_j(x)>)d{\\cal H} ^2(x)\\\\\n=&\\sum_{1\\le i\\le 4}\\int_{E_{i0}}<-a_i,n_i(x)>d{\\cal H} ^2(x)+\\sum_{1\\le id{\\cal H} ^2(x)\\\\\n\\le&\\sum_{1\\le i\\le 4}\\int_{E_{i0}}||a_i||d{\\cal H} ^2(x)+\\sum_{1\\le id{\\cal H} ^2(x)\\le \\sum_{1\\le i\\le 4}{\\cal H} ^2(E_{i0})+\\sum_{1\\le i0$ such that in $B(x,r)$, $E$ is the graph of a $C^1$ function from $T_xE$ 
to $T_xE^\\perp$, hence for all $y\\in E\\cap B(x,r)$, the tangent plane $T_yE$ exists, and the map $f:E\\cap B(x,r)\\to G(3,2): y\\mapsto T_yE$ is continuous. But by \\textcolor{blue}{(5.23)}, we have only six choices (which are isolated points in $G(3,2)$) for $T_yE$, hence $f$ is constant, and $T_yE=T_xE$ for all $y\\in E\\cap B(x,r)$. As a result, \n\\begin{equation} E\\cap B(x,r)=(T_xE+x)\\cap B(x,r)\\end{equation}\nis a disk parallel to one of the $P_{ij}$.\n\nStill by the $C^1$ regularity Theorem \\textcolor{blue}{2.20}, the set $E_P\\cap B^\\circ$ is a $C^1$ manifold, and is open in $E$. Thus we deduce that\n\\begin{equation} \\begin{split}\\mbox{Each connected component of }E_P\\cap B^\\circ\\mbox{ is part of a plane}\\\\\n\\mbox{ that is parallel to one of the }P_{ij}.\\end{split}\\end{equation}\n\nLet us look at $E_Y$. First, $E_Y\\ne\\emptyset$: otherwise, by \\textcolor{blue}{Corollary 2.23 $2^\\circ$}, $E\\cap B^\\circ=E_P\\cap B^\\circ$, and hence is a union of planes. But $E\\cap \\partial B$ does not coincide with any union of planes.\n\nTake any $x\\in E_Y$, then by the $C^1$ regularity around $\\mathbb Y$ points (Theorem \\textcolor{blue}{2.20} and Remark \\textcolor{blue}{2.21}), there exists $r=r(x)>0$ such that in $B(x,r)$, $E$ is image of a $C^1$ diffeomorphism $\\varphi$ of a $\\mathbb Y$-set Y, and $Y$ is tangent to $E$ at $x$. Denote by $L_Y$ the spine of $Y$, and by $R_i,1\\le i\\le 3$ the three open half planes of $Y$. Then $\\varphi(R_i),1\\le i\\le 3$ are connected subsets $E_P$, hence each of them is a part of a plane parallel to one of the $P_{ij},1\\le I0\\\\\n&\\mbox{such that, in }B(x,r), E\\mbox{ is a }\\mathbb Y-\\mbox{set whose spine is }x+D_j.\\end{split}\\end{equation} \n\nNext, since we are in dimension 3, the only other possible type of singular point is of type $\\mathbb T$. 
So we are going to discuss two cases: when there exists a $\\mathbb T$ points, or there is no $\\mathbb T$ points.\n\n\\textbf{Case 1:} There exists a point $x\\in E_T$. \n\n\\begin{lem} If there exists a point $x\\in E_T$, then $T\\cap B^\\circ =E$.\n\\end{lem}\n\n\\noindent Proof. By the same argument as above, and by Theorem \\textcolor{blue}{2.20} and Remark \\textcolor{blue}{2.21}, the unique blow-up limit $C_xE$ of $E$ at $x$ must be the set $T$, and there exists $r>0$ such that in $B(x,r)$, $E$ coincides with $T+x$.\n As a result, for each segment $I_i$, at least one of its endpoints is in the unit sphere, because two parallel $\\mathbb T$-sets cannot be connected by a $\\mathbb Y$ segment.\n \n Hence all the segments $I_i$ touch the boundary $\\partial B$. That is, \n \\begin{equation} L_i\\backslash \\{\\{x\\}\\cup \\partial B)\\subset E_Y.\\end{equation}\n \n Denote by $L_i,1\\le i\\le 4$, the four spines of $T+x$. Then $L_i\\cap B^\\circ\\subset E_Y$, because $L_i\\cap B(x,r)$ is part of some $I_j\\subset E_Y$, which already has an endpoint $x$ that does not belong to $\\partial B$, hence the other endpoint must lie in $\\partial B$, which yields $I_j=L_i\\cap B^\\circ$.\n \n Now we take a one parameter family of open balls $B_s$ with radii $r\\le s\\le 1$, with $B_r=B(x,r)$, $B_1=B^\\circ$, such that\n \n $1^\\circ$ $B_s\\subsetneqq B_{s'}$ for all $st>s} B_t=\\bar B_s$ and $\\cup_{tr, (T+x)\\cap B_s\\ne E\\}$. We claim that $R=1$.\n \n Suppose this is not true. By definition of $B_s$, we know that the four spines and the six faces of $T+x$ are never tangent to $\\partial B_s$ for any $r0$ small, we know that $E\\cap B(y,t)\\cap B_R=(x+P_{ij})\\cap B(y,t)\\cap B_R$. Note that the set $(x+P_{ij})\\cap B(y,t)\\cap B_R$ is almost a half disk when $t$ is sufficiently small, hence in particular, $E\\cap B(y,t)$ cannot coincide with a $\\mathbb Y$ set or a $\\mathbb T$ set$\\Rightarrow y\\in E_P$.\n \n If $y\\in E_P$, then $y\\in x+P_{ij}$ for some $i\\ne j$. 
Then $T_yE=P_{ij}$. By \\textcolor{blue}{(5.27)}, and the fact that $R<1$, there exists $r_y>0$ such that $B(y,r_y)\\subset B^\\circ$ and $E\\cap B(y,r_y)=(P_{ij}+y)\\cap B(y,r_y)$. In other words, \n \\begin{equation}\\mbox{ there exists }r_y>0\\mbox{ such that }E\\mbox{ coincides with }T+x\\mbox{ in }B(y,r_y).\\end{equation}\n \n If $y$ is a $\\mathbb Y$ point, then it lies in one of the $L_i$. By the same argument as above, using \\textcolor{blue}{(5.28)}, we also have \\textcolor{blue}{(5.30)}.\n \n Thus \\textcolor{blue}{(5.30)} holds for all $y\\in \\partial B_R\\cap (T+x)$. Since $\\partial B_R\\cap (T+x)$ is compact, we get an $r>0$ such that $E\\cap B(B_R,r)=(T+x)\\cap B(B_R,r)$. By the continuity condition $2^\\circ$ for the family $B_s$, there exists $R'\\in (R,1)$ such that $B_{R'}\\subset B(B_R,r)$. As a consequence, $E\\cap B_{R'}=(T+x)\\cap B_{R'}$, which contradicts the definition of $R$.\n \n Hence $R=1$, and by definition of $R$, we have $(T+x)\\cap B^\\circ=E\\cap B^\\circ$. Since $E\\cap \\partial B=T\\cap \\partial B$, and $E$ is closed and reduced, $x$ must be the origin. Thus we get the conclusion of Lemma \\textcolor{blue}{5.7}.\\hfill$\\Box$\\bigskip\n \n \\textbf{Case 2:} $E_T=\\emptyset$. In this case, the same kind of argument as in Lemma \\textcolor{blue}{5.7} gives the following:\n \n \\begin{lem}Let $x$ be a $\\mathbb Y$ point in $E$ and $T_xE_Y=D_j$. Denote by $Y_j$ the $\\mathbb Y$ set whose spine is $D_j$ and whose three half planes lie in $P_{ij},i\\ne j$. Then $(Y_j+x)\\cap B=E$.\n \\end{lem}\n \n But this is impossible, because $E\\cap \\partial B=T\\cap \\partial B$, which contains no $(Y_j+x)\\cap \\partial B$ for any $x$ and $j$.\n \n Hence we have $E\\cap \\bar B=T\\cap \\bar B$, and thus $T$ is topological unique in $B$. 
We thus get Theorem \\textcolor{blue}{5.6}.\\hfill$\\Box$\\bigskip\n\n\n\n\\renewcommand\\refname{References}\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nSolvable lattice models related to Lie superalgebras \\cite{Ka} \nhave received much attention \\cite{KulSk,Kul,BS,KuR,DFI,Sa,ZBG,MR94}. \nTo construct eigenvalue formulae of transfer matrices \nfor such models is an important problem in mathematical \nphysics. To achieve this program, \n the Bethe ansatz has often been used. \n \nNowadays, there is a large literature \n(see, for example, \\cite{Kul,EK,EKS,FK,Ma,RM,PF,MR,ZB} \nand references therein) \non Bethe ansatz analysis \nfor solvable lattice models related to Lie superalgebras. \nHowever, most of them deal only with models related to \n simple representations like fundamental ones. \n Only a few people \\cite{Ma,PF,ZB} tried to deal with \n more complicated models such as fusion models \\cite{KRS} by \nthe Bethe ansatz, and \nthere was no systematic study on this subject. \n\nTo break through such situations, we have \nrecently executed \\cite{T1,T2,T3,T4} an \nanalytic Bethe ansatz systematically \nfor the type I Lie superalgebras $sl(r+1|s+1),C(s)$ \n cases. Namely, we have proposed a set of \n dressed vacuum forms (DVFs) and \n a class of functional relations for it. \n The purpose of this paper\n is to extend our recent works\n to the type II Lie superalgebras $B(r|s)=osp(2r+1|2s)$ \n ($r \\in {\\bf Z}_{\\ge 0}$, $s \\in {\\bf Z}_{\\ge 1}$) \n and $D(r|s)=osp(2r|2s)$ \n ($r \\in {\\bf Z}_{\\ge 2}$, $s \\in {\\bf Z}_{\\ge 1}$) cases. 
\n \n We can express \\cite{Kul,MR} the \n Bethe ansatz equation (BAE) (\\ref{BAE}) \n in terms of the representation-theoretic data of \n $B(r|s)$ \n ($r \\in {\\bf Z}_{\\ge 1}$, $s \\in {\\bf Z}_{\\ge 1}$) \n or $D(r|s)$ ($r \\in {\\bf Z}_{\\ge 2}$, $s \\in {\\bf Z}_{\\ge 1}$), \n as long as we adopt the distinguished simple root system \\cite{Ka}. \nOn the other hand, $B(0|s)=osp(1|2s)$ is peculiar among \n the Lie superalgebras: \n in contrast to the other Lie superalgebras, \n its simple root system is unique. \n Correspondingly, the BAEs \n (\\ref{BAE1})-(\\ref{BAE4}) associated with the root system \n of $B(0|s)$ are also unique. \n The peculiarity of these BAEs is that, so far, \n no naive description in terms of \n the simple root system exists \n for the $s$-th BAEs (\\ref{BAE3}) and (\\ref{BAE4}), \n which correspond to the odd root $\\alpha_{s}$\n with $(\\alpha_{s}|\\alpha_{s})\\ne 0$ (cf. \\cite{Kul,MR}). \n \n We assume, as our starting point, the above-mentioned \n BAEs (\\ref{BAE})-(\\ref{BAE4}) for \n $B(r|s)$ and $D(r|s)$, and \n then carry out an analytic Bethe ansatz systematically \n to construct a class of DVFs. \nIn constructing DVFs, \n{\em the pole-freeness under the BAE} \nand {\em the top term hypothesis} \n \\cite{KS1,KOS} play important roles.\n\nWe introduce skew-Young (super) diagrams $\\lambda \\subset \\mu$, \nwhich are related to tensor-like representations\n\\footnote{In this paper, we do not deal with spinorial \nrepresentations.} of $B(r|s)$ or $D(r|s)$. \nOn these skew-Young (super) diagrams, we define \n a set of admissible tableaux $B(\\lambda \\subset \\mu)$ \nwith some semi-standard-like conditions. \nThere is a one-to-one correspondence between \nthese conditions in the $B(0|s)$ case and the \n conditions in the $A_{2s}^{(2)}$ case \\cite{KS2}.\n In addition, \n in contrast to the $B(r|s)$ case, the conditions in the $D(r|s)$ case \n have a non-local nature. 
\n Next, we define a function ${\\cal T}_{\\lambda \\subset \\mu}(u)$ \n (\\ref{Tge1}) of a spectral parameter $u$ \n as a summation over $B(\\lambda \\subset \\mu)$. \n It provides the spectra of a set of transfer \nmatrices for various fusion $B(r|s)$ \nor $D(r|s)$ \nvertex models. It contains the top term \\cite{KS1,KOS}, \n which carries the highest weight of the irreducible \n representation of $B(r|s)$ or $D(r|s)$ \n labeled by a skew-Young (super) diagram \n $\\lambda \\subset \\mu$. \n In particular, the simplest example of \n ${\\cal T}_{\\lambda \\subset \\mu}(u)$, namely \n ${\\cal T}^{1}(u)={\\cal T}_{1}(u)={\\cal T}_{(1^{1})}(u)$, \n reduces to the eigenvalue formula of the transfer matrix \n \\cite{MR} of \n a vertex model related to the fundamental \n representation of $B(r|s)$ or $D(r|s)$ \n after some redefinitions. \n The BAEs (\\ref{BAE})-(\\ref{BAE4}) \n are assumed to be common to all the DVFs \n for transfer matrices with various fusion types in the auxiliary space, \n as long as they act on a common quantum space. \n Therefore, we can prove the pole-freeness of \n ${\\cal T}^{a}(u)={\\cal T}_{(1^{a})}(u)$\n for any $a \\in {\\bf Z}_{\\ge 0}$ under the common \n BAEs (\\ref{BAE})-(\\ref{BAE4}). \n We further mention a determinant formula, \n by which ${\\cal T}_{\\lambda \\subset \\mu}(u)$ can be expressed \n solely in terms of the fundamental functions $\\{ {\\cal T}^{a} \\}$, \n so that pole-freeness follows immediately. \n A set of transfer matrix functional relations among DVFs \n also follows \n from this formula.\n It is a kind of $T$-system \\cite{KNS1} \n(see also \\cite{BR,KP,KNS2,KS2,KOS,KNH,\nTK,KLWZ,Ma,PF,ZB,T1,T2,T3,T4,T5}). \n In particular, in the $B(0|s)$ case \n there is a remarkable duality among DVFs \n (see Theorem \\ref{dual-th} and (\\ref{dual2})). \nIn constructing the above-mentioned functional relations, \nthis duality among DVFs plays an important role. 
\n\nThe outline of this paper is as follows.\nIn Section 2, we briefly review the Lie superalgebras \n$B(r|s)$ and $D(r|s)$. \nIn Section 3, we carry out an analytic Bethe ansatz \nbased on the BAEs (\\ref{BAE})-(\\ref{BAE4}) associated \nwith distinguished simple root systems. \n In Section 4, we discuss transfer matrix functional \n relations. \n Section 5 is devoted to summary and discussion. \n In Appendices A.1-A.3, we prove the pole-freeness of DVFs. \nAppendix B provides generating series of \n${\\cal T}^{a}(u)$ and ${\\cal T}_{m}(u)={\\cal T}_{(m)}(u)$. \nIn this paper, we adopt notation similar to that in \\cite{KS1,KOS,T1,T2,T3,T4}. \n Finally, we note that we can recover many formulae\n in \\cite{KS1,KOS} \n for $B_{r}$ or $D_{r}$, if we set $s=0$ and redefine the vacuum parts. \n\\setcounter{equation}{0}\n\\section{Lie superalgebras}\nIn this section, we briefly review the \nLie superalgebras $B(r|s)$ and $D(r|s)$ \n(see for example \\cite{Ka,Ka2,BB,FJ,MSS,Je}). \n\nThere are several choices of simple root system, and \nthe simplest one is the distinguished simple root system.\n The distinguished simple roots read as follows: \\\\ \n\\begin{figure}\n \\setlength{\\unitlength}{0.9pt}\n \\begin{center}\n \\begin{picture}(410,50) \n \\put(10,20){\\circle{20}}\n \\put(20,20){\\line(1,0){20}}\n \\put(50,20){\\circle{20}}\n \\put(60,20){\\line(1,0){20}}\n \\put(90,20){\\line(1,0){10}}\n \\put(110,20){\\line(1,0){10}}\n \\put(130,20){\\line(1,0){20}}\n \\put(160,20){\\circle{20}}\n \\put(170,20){\\line(1,0){20}}\n \\put(7,0){$\\alpha_{1}$}\n \\put(46,0){$\\alpha_{2}$}\n \\put(156,0){$\\alpha_{s-1}$}\n %\n \\put(192.929,12.9289){\\line(1,1){14.14214}}\n \\put(192.929,27.07107){\\line(1,-1){14.14214}}\n \\put(200,20){\\circle{20}}\n \\put(210,20){\\line(1,0){20}}\n \\put(240,20){\\circle{20}}\n \\put(250,20){\\line(1,0){20}}\n \\put(280,20){\\line(1,0){10}}\n \\put(300,20){\\line(1,0){10}}\n \\put(320,20){\\line(1,0){20}}\n \\put(350,20){\\circle{20}}\n \\put(358.8,25){\\line(1,0){32.4}}\n 
\\put(358.8,15){\\line(1,0){32.4}}\n \\put(400,20){\\circle{20}}\n \\put(197,0){$\\alpha_{s}$}\n \\put(236,0){$\\alpha_{s+1}$}\n \\put(346,0){$\\alpha_{s+r-1}$}\n \\put(397,0){$\\alpha_{s+r}$}\n \\put(390,20){\\line(-1,1){14.14214}}\n \\put(390,20){\\line(-1,-1){14.14214}}\n \\end{picture}\n \\end{center}\n \\caption{Dynkin diagram for the Lie superalgebra \n $B(r|s)=osp(2r+1|2s)$ ($r \\in {\\bf Z}_{\\ge 1}, \n s \\in {\\bf Z}_{\\ge 1}$)\n corresponding to the distinguished simple \n root system: white circles denote even roots; \n a gray (a cross) circle denotes an odd root $\\alpha$ with \n $(\\alpha|\\alpha)=0$.}\n \\label{dynkin-brs}\n\\end{figure}\n\\begin{figure}\n \\setlength{\\unitlength}{0.8pt}\n \\begin{center}\n \\begin{picture}(250,50) \n \\put(10,20){\\circle{20}}\n \\put(20,20){\\line(1,0){20}}\n \\put(50,20){\\circle{20}}\n \\put(60,20){\\line(1,0){20}}\n \\put(90,20){\\line(1,0){10}}\n \\put(110,20){\\line(1,0){10}}\n \\put(130,20){\\line(1,0){20}}\n \\put(160,20){\\circle{20}}\n \\put(170,20){\\line(1,0){20}}\n \\put(7,0){$\\alpha_{1}$}\n \\put(46,0){$\\alpha_{2}$}\n \\put(156,0){$\\alpha_{s-2}$}\n %\n \\put(200,20){\\circle{20}}\n \\put(240,20){\\circle*{20}}\n \\put(208.8,25){\\line(1,0){32.4}}\n \\put(208.8,15){\\line(1,0){32.4}}\n \\put(197,0){$\\alpha_{s-1}$}\n \\put(236,0){$\\alpha_{s}$}\n \\put(230,20){\\line(-1,1){14.14214}}\n \\put(230,20){\\line(-1,-1){14.14214}}\n \\end{picture}\n \\end{center}\n \\caption{Dynkin diagram for the Lie superalgebra \n $B(0|s)=osp(1|2s)$ ($s \\in {\\bf Z}_{\\ge 1}$):\n a black circle denotes an odd root $\\alpha$ \n with $(\\alpha|\\alpha)\\ne 0$.}\n \\label{dynkin-b0s}\n\\end{figure}\n\\begin{figure}\n \\setlength{\\unitlength}{0.9pt}\n \\begin{center}\n \\begin{picture}(420,50) \n \\put(10,20){\\circle{20}}\n \\put(20,20){\\line(1,0){20}}\n \\put(50,20){\\circle{20}}\n \\put(60,20){\\line(1,0){20}}\n \\put(90,20){\\line(1,0){10}}\n \\put(110,20){\\line(1,0){10}}\n \\put(130,20){\\line(1,0){20}}\n 
\\put(160,20){\\circle{20}}\n \\put(170,20){\\line(1,0){20}}\n \\put(7,0){$\\alpha_{1}$}\n \\put(46,0){$\\alpha_{2}$}\n \\put(156,0){$\\alpha_{s-1}$}\n %\n \\put(192.929,12.9289){\\line(1,1){14.14214}}\n \\put(192.929,27.07107){\\line(1,-1){14.14214}}\n \\put(200,20){\\circle{20}}\n \\put(210,20){\\line(1,0){20}}\n \\put(240,20){\\circle{20}}\n \\put(250,20){\\line(1,0){20}}\n \\put(280,20){\\line(1,0){10}}\n \\put(300,20){\\line(1,0){10}}\n \\put(320,20){\\line(1,0){20}}\n \\put(350,20){\\circle{20}}\n %\n \\put(398,44){\\circle{20}}\n \\put(359,25){\\line(2,1){30}}\n \\put(398,-4){\\circle{20}}\n \\put(359,16){\\line(2,-1){30}}\n \\put(390,24){$\\alpha_{s+r-1}$}\n \\put(393,-24){$\\alpha_{s+r}$}\n %\n \\put(197,0){$\\alpha_{s}$}\n \\put(236,0){$\\alpha_{s+1}$}\n \\put(343,0){$\\alpha_{s+r-2}$}\n %\n \\end{picture}\n \\end{center}\n \\caption{Dynkin diagram for the Lie superalgebra \n $D(r|s)=osp(2r|2s)$ ($r \\in {\\bf Z}_{\\ge 2}, \n s \\in {\\bf Z}_{\\ge 1}$)\n corresponding to the distinguished simple \n root system.}\n \\label{dynkin-drs}\n\\end{figure}\n$B(r|s)$ ($r,s \\in {\\bf Z}_{\\ge 1}$) case \n(see Figure \\ref{dynkin-brs}):\n\\begin{eqnarray}\n && \\alpha_{i} = \\delta_{i}-\\delta_{i+1} \n \\quad i=1,2,\\dots,s-1, \\nonumber \\\\ \n && \\alpha_{s} =\\delta_{s}-\\epsilon_{1}, \\nonumber \\\\ \n && \\alpha_{s+j} = \\epsilon_{j}-\\epsilon_{j+1} ,\n \\quad j=1,2,\\dots,r-1, \\\\ \n && \\alpha_{s+r} =\\epsilon_{r}; \\nonumber\n \\end{eqnarray}\n$B(0|s)$ ($s \\in {\\bf Z}_{\\ge 1}$) case (see Figure \\ref{dynkin-b0s}):\n\\begin{eqnarray}\n && \\alpha_{i} = \\delta_{i}-\\delta_{i+1} \n \\quad {\\rm for} \\quad i=1,2,\\dots,s-1, \\nonumber \\\\ \n && \\alpha_{s} =\\delta_{s}; \\label{root-B0s}\n\\end{eqnarray}\n$D(r|s)$ ($r \\in {\\bf Z}_{\\ge 2}$, $s \\in {\\bf Z}_{\\ge 1}$) case \n(see Figure \\ref{dynkin-drs}):\n\\begin{eqnarray}\n && \\alpha_{i} = \\delta_{i}-\\delta_{i+1} \n \\quad i=1,2,\\dots,s-1, \\nonumber \\\\ \n && \\alpha_{s} 
=\\delta_{s}-\\epsilon_{1}, \\nonumber \\\\ \n && \\alpha_{s+j} = \\epsilon_{j}-\\epsilon_{j+1} ,\n \\quad j=1,2,\\dots,r-2, \\\\ \n && \\alpha_{s+r-1} =\\epsilon_{r-1}-\\epsilon_{r}, \\nonumber \\\\ \n && \\alpha_{s+r} =\\epsilon_{r-1}+\\epsilon_{r}, \\nonumber\n \\end{eqnarray}\nwhere \n $\\epsilon_{1},\\dots, \\epsilon_{r};\\delta_{1},\\dots,\\delta_{s}$ \nform a basis of the dual space of the Cartan subalgebra \nwith the bilinear \nform $(\\ |\\ )$ such that \n\\footnote{We normalized the longest simple root as \n$|(\\alpha|\\alpha)|=2$.}\n\\begin{equation}\n (\\epsilon_{i}|\\epsilon_{j})=\\delta_{i\\, j}, \\quad \n (\\epsilon_{i}|\\delta_{j})=(\\delta_{i}|\\epsilon_{j})=0 , \\quad \n (\\delta_{i}|\\delta_{j})=-\\delta_{i\\, j} . \n\\end{equation}\n $\\{\\alpha_i \\}_{i \\ne s}$ are even roots and $\\alpha_{s}$ \nis an odd root. Note that $(\\alpha_{s} | \\alpha_{s})=0$ \nin the $B(r|s)$ ($r,s \\in {\\bf Z}_{\\ge 1}$) and $D(r|s)$ \n($r \\in {\\bf Z}_{\\ge 2}$, $s \\in {\\bf Z}_{\\ge 1}$) cases, \nwhile $(\\alpha_{s} | \\alpha_{s}) \\ne 0$ in the \n$B(0|s)$ ($s \\in {\\bf Z}_{\\ge 1}$) case.\n \nLet $\\lambda \\subset \\mu$ be a skew-Young (super) diagram labeled by \nthe sequences of non-negative integers \n$\\lambda =(\\lambda_{1},\\lambda_{2},\\dots)$ and \n$\\mu =(\\mu_{1},\\mu_{2},\\dots)$ such that\n$\\mu_{i} \\ge \\lambda_{i}$ for $i=1,2,\\dots$; \n$\\lambda_{1} \\ge \\lambda_{2} \\ge \\dots \\ge 0$; \n$\\mu_{1} \\ge \\mu_{2} \\ge \\dots \\ge 0$, and let \n$\\lambda^{\\prime}=(\\lambda_{1}^{\\prime},\\lambda_{2}^{\\prime},\\dots)$ \nbe the conjugate of $\\lambda $. 
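As a quick numerical cross-check of the root data above, the following sketch (our own illustration, not part of the paper; the realization of $\\delta_{i}$ and $\\epsilon_{j}$ as coordinate vectors with an indefinite diagonal form is an assumption) builds the distinguished simple roots of $B(r|s)$, $B(0|s)$ and $D(r|s)$ and confirms that $(\\alpha_{s}|\\alpha_{s})=0$ for $B(r|s)$ ($r\\ge 1$) and $D(r|s)$, while $(\\alpha_{s}|\\alpha_{s})\\ne 0$ for $B(0|s)$, with the longest even root normalized to $|(\\alpha|\\alpha)|=2$.

```python
# Illustration only (not from the paper): realize the distinguished simple
# roots of B(r|s), B(0|s), D(r|s) in the basis (delta_1..delta_s, eps_1..eps_r)
# with (eps_i|eps_j) = delta_ij, (delta_i|delta_j) = -delta_ij, (eps_i|delta_j) = 0.

def simple_roots(kind, r, s):
    """Distinguished simple roots as coefficient vectors of length s + r."""
    dim = s + r

    def basis(i):
        v = [0] * dim
        v[i] = 1
        return v

    def sub(u, v):
        return [a - b for a, b in zip(u, v)]

    def add(u, v):
        return [a + b for a, b in zip(u, v)]

    # alpha_i = delta_i - delta_{i+1}, i = 1..s-1 (common to all three cases)
    roots = [sub(basis(i), basis(i + 1)) for i in range(s - 1)]
    if kind == 'B' and r == 0:
        roots.append(basis(s - 1))                 # alpha_s = delta_s
    else:
        roots.append(sub(basis(s - 1), basis(s)))  # alpha_s = delta_s - eps_1
        n_mid = r - 1 if kind == 'B' else r - 2    # alpha_{s+j} = eps_j - eps_{j+1}
        roots += [sub(basis(s + j), basis(s + j + 1)) for j in range(n_mid)]
        if kind == 'B':
            roots.append(basis(s + r - 1))         # alpha_{s+r} = eps_r
        else:                                      # D(r|s) tail roots
            roots.append(sub(basis(s + r - 2), basis(s + r - 1)))
            roots.append(add(basis(s + r - 2), basis(s + r - 1)))
    return roots

def form(u, v, s):
    """Bilinear form with signature (-1,...,-1, +1,...,+1): s minuses, then pluses."""
    return sum((-1 if i < s else 1) * a * b for i, (a, b) in enumerate(zip(u, v)))

print(form(simple_roots('B', 2, 1)[0], simple_roots('B', 2, 1)[0], 1))  # 0 for B(2|1)
print(form(simple_roots('B', 0, 2)[1], simple_roots('B', 0, 2)[1], 2))  # -1 for B(0|2)
print(form(simple_roots('D', 2, 1)[2], simple_roots('D', 2, 1)[2], 1))  # 2 for D(2|1)
```

Here `simple_roots` and `form` are hypothetical helper names chosen for this check; the three printed values match $(\\alpha_{s}|\\alpha_{s})=0$, $(\\alpha_{s}|\\alpha_{s})=-1$, and $(\\alpha_{s+r}|\\alpha_{s+r})=2$, respectively.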
\nIn particular, for $B(r|s)$ ($r,s \\in {\\bf Z}_{\\ge 1}$), \n$\\lambda=\\phi $, $\\mu_{r+1}\\le s$ case, \nthe Kac-Dynkin label \n$[b_{1},b_{2},\\dots , b_{s+r}]$ is related to \n the Young (super) diagram with shape \n $\\mu=(\\mu_{1},\\mu_{2},\\dots)$ as follows: \\\\ \n\\begin{eqnarray}\n && b_{i} = \\mu_{i}^{\\prime}-\\mu_{i+1}^{\\prime} \n \\qquad {\\rm for} \\qquad i\\in \\{1,2,\\dots,s-1\\}, \\nonumber \\\\ \n && b_{s} = \\mu_{s}^{\\prime}+\\eta_{1}, \\qquad \\label{Kac-Dynkin} \\\\ \n && b_{s+j} = \\eta_{j}-\\eta_{j+1} \n \\qquad {\\rm for} \\qquad j \\in \\{1,2,\\dots,r-1\\}, \\nonumber \\\\\n && b_{s+r} = 2\\eta_{r} \\nonumber,\n\\end{eqnarray} \n where $\\eta_{i}=Max\\{\\mu_{i}-s,0 \\} $. \nFor $B(0|s)$ ($s \\in {\\bf Z}_{\\ge 1}$), \n$\\lambda=\\phi $ case, \nthe Kac-Dynkin label \n$[b_{1},b_{2},\\dots , b_{s}]$ is related to \n the Young (super) diagram with shape \n $\\mu=(\\mu_{1},\\mu_{2},\\dots)$ as follows:\n\\begin{eqnarray}\n && b_{i} = \\mu_{i}^{\\prime}-\\mu_{i+1}^{\\prime} \n \\qquad {\\rm for} \\qquad i\\in \\{1,2,\\dots,s-1\\}, \\nonumber \\\\ \n && b_{s} = 2\\mu_{s}^{\\prime}. \\label{Kac-Dynkin-b0s}\n\\end{eqnarray} \nFor $D(r|s)$ case, we use only the Young (super) diagram with \nshape $\\mu=(1^{a})$ or $\\mu=(m^{1})$. \nThe Young (super) diagram with shape $\\mu=(1^{a})$ is \nrelated to the Kac-Dynkin label \n$[b_{1},b_{2},\\dots ,b_{s+r}]$ as follows:\n\\begin{eqnarray}\n && b_{j} = a\\delta_{j 1}. \\label{KD-tate}\n\\end{eqnarray} \nAnd the Young (super) diagram with shape \n $\\mu=(m^{1})$ is related to \n the Kac-Dynkin label \n$[b_{1},b_{2},\\dots , b_{s+r}]$ as follows:\n\\begin{eqnarray}\nb_{j}=\n \\left\\{\n \\begin{array}{lll}\n \\! \\delta_{j m} \n & \\! {\\rm if} \\! & m \\in \\{1,2,\\dots, s \\}, \\\\ \n \\! (m-s+1)\\delta_{j s}+(m-s)\\delta_{j s+1} \n & \\! {\\rm if} \\! & r \\in {\\bf Z}_{\\ge 3}, \n m \\in {\\bf Z}_{\\ge s+1}, \\\\ \n \\! (m-s+1)\\delta_{j s}+(m-s)(\\delta_{j s+1}+\\delta_{j s+2}) \n & \\! 
{\\rm if} \\! & r=2, m \\in {\\bf Z}_{\\ge s+1}. \n \\end{array}\n \\right. \n \\label{KD-yoko}\n\\end{eqnarray}\nAn irreducible representation of $B(0|s)$\nwith the Kac-Dynkin label $[b_{1},b_{2},\\dots , b_{s}]$\n is finite dimensional \\cite{Ka2} if and only if \n\\begin{eqnarray}\n&& b_{j} \\in {\\bf Z}_{\\ge 0} \\qquad {\\rm for} \\qquad \nj \\in \\{1,2,\\dots, s-1\\}, \\nonumber \\\\ \n&& b_{s} \\in 2{\\bf Z}_{\\ge 0}. \\label{finite}\n\\end{eqnarray}\nThe dimensionality of the irreducible representation \n $V[b_{1},b_{2},\\dots , b_{s}]$ of \n $B(0|s)$ with the highest weight \n labeled by the Kac-Dynkin label $[b_{1},b_{2},\\dots , b_{s}]$ \n is given \\cite{Ka2,Je} as follows\n\\footnote{We assume that \n$b_{j}+b_{j+1}+\\cdots +b_{s-1}=0$ if $j=s$.} \n \\begin{eqnarray}\n\\hspace{-30pt} && {\\rm dim}V[b_{1},b_{2},\\dots,b_{s}]=\n \\prod_{1\\le i < j \\le s}\n \\frac{b_{i}+b_{i+1}+\\cdots +b_{j-1}+j-i}{j-i}\\nonumber \\\\\n\\hspace{-30pt}&& \\hspace{10pt} \\times \n \\frac{b_{i}+b_{i+1}+\\cdots +b_{j-1}+\n 2(b_{j}+b_{j+1}+\\cdots +b_{s-1})+b_{s}+2s-i-j+1}\n {2s-i-j+1} \\nonumber \\\\\n\\hspace{-30pt}&& \\hspace{10pt}\\times \\prod_{1 \\le k \\le s}\n \\frac{2(b_{k}+b_{k+1}+\\cdots +b_{s-1})+b_{s}+2s-2k+1}\n {2s-2k+1}.\n \\label{dim}\n \\end{eqnarray}\n\\setcounter{equation}{0}\n\\section{Analytic Bethe ansatz} \nWe assume, as our starting point, the \nfollowing type of the Bethe ansatz equations\n\\footnote{In this paper, we deal with the case,\n as an example, \n that the quantum spaces of the transfer matrices are fundamental \nrepresentations. } \n\\cite{RW,OW,Kul,RM,KOS}. 
\\\\ \n$B(r|s)$ ($r,s \\in {\\bf Z}_{\\ge 1}$) or $D(r|s)$ ($r \\in {\\bf Z}_{\\ge 2}, s \\in {\\bf Z}_{\\ge 1}$) case:\n\\begin{eqnarray}\n- \\left\\{\n \\prod_{j=1}^{N}\n \\frac{\\Phi (u_k^{(a)}-w_{j}-1)}\n {\\Phi(u_k^{(a)}-w_{j}+1)}\n \\right\\}^{\\delta_{a 1}}\n &=&(-1)^{{\\rm deg}(\\alpha_a)} \n \\prod_{b=1}^{s+r}\\frac{Q_{b}(u_k^{(a)}+(\\alpha_a|\\alpha_b))}\n {Q_{b}(u_k^{(a)}-(\\alpha_a|\\alpha_b))}. \\label{BAE} \n\\end{eqnarray}\n$B(0|s)$ ($s \\in {\\bf Z}_{\\ge 2}$) case: \n\\begin{eqnarray}\n\\hspace{-40pt}\n&& - \\prod_{j=1}^{N}\n\\frac{ \\Phi (u_k^{(1)}-w_{j}-1)}\n { \\Phi (u_k^{(1)}-w_{j}+1)} \n = \n\\frac{Q_{1}(u_k^{(1)}-2)Q_{2}(u_k^{(1)}+1)}\n{Q_{1}(u_k^{(1)}+2)Q_{2}(u_k^{(1)}-1)}, \\label{BAE1}\n\\\\\n\\hspace{-40pt}\n&&-1= \\frac{Q_{a-1}(u_k^{(a)}+1)Q_{a}(u_k^{(a)}-2)Q_{a+1}(u_k^{(a)}+1)}\n {Q_{a-1}(u_k^{(a)}-1)Q_{a}(u_k^{(a)}+2)Q_{a+1}(u_k^{(a)}-1)}\n \\quad{\\rm for}\n \\quad 2 \\le a \\le s-1, \\label{BAE2}\\\\ \n\\hspace{-40pt}\n&&1= \\frac{Q_{s-1}(u_k^{(s)}+1)Q_{s}(u_k^{(s)}+1)Q_{s}(u_k^{(s)}-2)}\n {Q_{s-1}(u_k^{(s)}-1)Q_{s}(u_k^{(s)}-1)Q_{s}(u_k^{(s)}+2)}\n \\label{BAE3} .\n\\end{eqnarray}\n$B(0|1)$ case: \n\\begin{eqnarray}\n\\hspace{-40pt}\n&& \\prod_{j=1}^{N}\n\\frac{ \\Phi (u_k^{(1)}-w_{j}-1)}\n { \\Phi (u_k^{(1)}-w_{j}+1)} \n = \n\\frac{Q_{1}(u_k^{(1)}+1)Q_{1}(u_k^{(1)}-2)}\n {Q_{1}(u_k^{(1)}-1)Q_{1}(u_k^{(1)}+2)}\n \\label{BAE4}.\n\\end{eqnarray}\nHere $ Q_{a}(u)= \\prod_{j=1}^{N_{a}}\\Phi(u-u_j^{(a)})$; \n$ N \\in {\\bf Z }_{\\ge 0}$ is the number of \nthe lattice sites; $N_{a} \\in {\\bf Z }_{\\ge 0}$; \n$u_{j}^{(a)}, w_{j}\\in {\\bf C}$; $a,k \\in {\\bf Z}$ \n($a \\in \\{1,2,\\dots,s+r \\}$ ($r=0$ for $B(0|s)$ case); \n$\\ k \\in \\{1,2,\\dots, N_{a} \\}$); \n\\begin{eqnarray}\n {\\rm deg}(\\alpha_a)&=&\\left\\{\n \\begin{array}{@{\\,}ll}\n 0 & \\mbox{for even root} \\\\ \n 1 & \\mbox{for odd root} \n \\end{array}\n \\right. \\\\ \n &=& \\delta_{a,s}. \\nonumber \n\\end{eqnarray}\n $\\Phi$ is a function, which has zero at $u=0$. 
\nFor example, $\\Phi(u)$ has the following form\n\\begin{equation}\n\\Phi(u)=u.\n\\end{equation}\nRemarkably enough, Bethe ansatz equations can be written \nin terms of root systems of Lie algebras \\cite{RW,OW} or \nLie superalgebras \\cite{Kul,MR}. \nMartins and Ramos \\cite{MR} pointed out that \n$B(0|s)$ is an exception\n to this observation (see also \\cite{Kul}). \nTo put it more precisely, the exception lies in \nthe right-hand side of (\\ref{BAE3}) and (\\ref{BAE4}), \nwhich correspond to \nthe odd root $\\alpha_{s}$ with $(\\alpha_{s}|\\alpha_{s}) \\ne 0$. \nIn fact, one can derive (\\ref{BAE1}) and (\\ref{BAE2}) from \n(\\ref{BAE}) and (\\ref{root-B0s}), \nwhile one cannot derive (\\ref{BAE3}) and (\\ref{BAE4}). \\\\ \n{\em Remark:} \nThere are compact expressions of BAEs \nfor twisted quantum affine algebras \\cite{RW}. \n Moreover, \nthe BAEs (\\ref{BAE1})-(\\ref{BAE4}) \nresemble the BAEs for $A_{2s}^{(2)}$. \nThis resemblance presumably originates from the resemblance between \n$B(0|s)^{(1)}$ and $A_{2s}^{(2)}$. \nThus there is a possibility that the BAEs \n(\\ref{BAE1})-(\\ref{BAE4}) can also be \nwritten compactly in terms of the root system of the \nLie superalgebra $B(0|s)$ ($ s \\in {\\bf Z}_{\\ge 1} $). \nWe also point out that the expression (\\ref{BAE}) \nis not always valid for non-distinguished simple root systems. \nIn fact, we have confirmed in several cases that the Bethe \nansatz equations corresponding to the \n odd roots $\\alpha $ with $(\\alpha|\\alpha)\\ne 0$ \nhave a structure similar to \n(\\ref{BAE3}) or (\\ref{BAE4}) \nby using the correspondence \\cite{T2} between \nthe particle-hole transformation \n and the (super) Weyl reflection. 
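The mismatch just described can be made concrete. The sketch below (our own check, not from the paper; the encoding of the $Q_{b}$ shift pattern as a dictionary is an assumption) computes $(\\alpha_{a}|\\alpha_{b})$ for $B(0|3)$ from (\\ref{root-B0s}): for $2\\le a\\le s-1$ the naive right-hand side of (\\ref{BAE}) reproduces the shift pattern of (\\ref{BAE2}), while for $a=s$ it yields only a $Q_{s}(u-1)/Q_{s}(u+1)$ factor, not the $Q_{s}(u+1)Q_{s}(u-2)/(Q_{s}(u-1)Q_{s}(u+2))$ structure of (\\ref{BAE3}).

```python
# Illustration only: shift pattern of the naive root-system BAE for B(0|s), s = 3.
s = 3

def delta_vec(i):
    # coordinate vector for delta_{i+1}
    v = [0] * s
    v[i] = 1
    return v

def form(u, v):
    # (delta_i|delta_j) = -delta_ij
    return -sum(a * b for a, b in zip(u, v))

# distinguished simple roots of B(0|s): alpha_i = delta_i - delta_{i+1}, alpha_s = delta_s
alpha = [[x - y for x, y in zip(delta_vec(i), delta_vec(i + 1))] for i in range(s - 1)]
alpha.append(delta_vec(s - 1))

def naive_shifts(a):
    """Nonzero inner products (alpha_a|alpha_b): the naive BAE right-hand side
    would carry a factor Q_b(u + c)/Q_b(u - c) for each entry b -> c."""
    return {b + 1: form(alpha[a - 1], alpha[b]) for b in range(s)
            if form(alpha[a - 1], alpha[b]) != 0}

print(naive_shifts(2))   # {1: 1, 2: -2, 3: 1}: matches the shifts of the second BAE
print(naive_shifts(3))   # {2: 1, 3: -1}: does NOT match the s-th BAE
```

The function name `naive_shifts` is hypothetical; the point is only that the $a=s$ entry of the naive pattern disagrees with the actual $s$-th equation, as stated in the remark.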
\n\nWe define the set \n\\begin{eqnarray}\n && J=J_{+} \\cup J_{-},\n\\end{eqnarray}\nwhere \n\\begin{eqnarray}\nJ_{-}= \\{ 1,2,\\dots, s,\\overline{s},\\dots,\\overline{2},\n \\overline{1} \\} \\label{set} \n\\end{eqnarray}\nis common to $B(r|s)$ and $D(r|s)$, \nwhile $J_{+}$ is not:\n\\begin{eqnarray}\n && J_{+}=\\{s+1,s+2,\\dots,s+r,\\overline{s+r},\\dots,\n \\overline{s+2},\\overline{s+1} \\} \\cup \\{ 0 \\} \n \\quad {\\rm for } \\quad B(r|s), \\nonumber \\\\ \n && J_{+}=\\{s+1,s+2,\\dots,s+r,\\overline{s+r},\\dots,\n \\overline{s+2},\\overline{s+1} \\} \n \\quad {\\rm for } \\quad D(r|s). \\nonumber \n\\end{eqnarray}\nOn this set $J$, we define the total order \n\\begin{eqnarray} \n 1\\prec 2 \\prec \\cdots \\prec s+r \\prec 0 \n \\prec \\overline{s+r} \\prec \\cdots \\prec \\overline{2} \n \\prec \\overline{1} \\label{order}\n\\end{eqnarray}\n for the $B(r|s)$ case, and the partial order \n\\begin{eqnarray} \n 1\\prec 2 \\prec \\cdots \\prec s+r-1 \\prec \n \\begin{array}{c} \n s+r \\\\ \n \\\\ \n \\overline{s+r} \n \\end{array}\n \\prec \\overline{s+r-1} \\prec \\cdots \\prec \\overline{2} \n \\prec \\overline{1} \\label{order2}\n\\end{eqnarray}\nfor the $D(r|s)$ case. In contrast to the $B(r|s)$ case,\n there is no order between \n$s+r$ and $\\overline{s+r}$ in the $D(r|s)$ case. \nWe also define the grading parameter as follows: \n\\begin{equation}\n p(a)=\\left\\{\n \\begin{array}{@{\\,}ll}\n 0 & \\mbox{for $a \\in J_{+}$}, \\\\ \n 1 & \\mbox{for $a \\in J_{-}$ }.\n \\end{array}\n \\right. \\label{grading}\n\\end{equation}\nFor $a \\in J $, we define \n\\footnote{\nIn this paper, we often omit the spectral parameter $u$.}\n the following functions. 
\\\\ \n $B(r|s)$ ($r \\in {\\bf Z}_{\\ge 0},s \\in {\\bf Z}_{\\ge 1}$) case:\n\\begin{eqnarray} \n\\hspace{-20pt} && \\framebox{$a$}_{u}=\\psi_{a}(u)\n \\frac{Q_{a-1}(u-a-1)Q_{a}(u-a+2)}\n {Q_{a-1}(u-a+1)Q_{a}(u-a)}\n \\quad {\\rm for} \\quad 1 \\le a \\le s,\\nonumber \\\\\n\\hspace{-20pt} && \\framebox{$a$}_{u}=\\psi_{a}(u)\n \\frac{Q_{a-1}(u-2s+a+1)Q_{a}(u-2s+a-2)}\n {Q_{a-1}(u-2s+a-1)Q_{a}(u-2s+a)} \\nonumber \\\\ \n\\hspace{-20pt} && \\hspace{170pt} \n{\\rm for} \\quad s+1 \\le a \\le s+r,\\nonumber \\\\\n\\hspace{-20pt} && \\framebox{$0$}_{u}=\\psi_{0}(u)\n \\frac{Q_{s+r}(u-s+r+1)Q_{s+r}(u-s+r-2)}\n {Q_{s+r}(u-s+r-1)Q_{s+r}(u-s+r)},\n \\label{z+} \\\\\n\\hspace{-20pt} && \\framebox{$\\overline{a}$}_{u}=\\psi_{\\overline{a}}(u)\n \\frac{Q_{a-1}(u+2r-a-2)Q_{a}(u+2r-a+1)}\n {Q_{a-1}(u+2r-a)Q_{a}(u+2r-a-1)}\n \\nonumber \\\\ \n\\hspace{-20pt} && \\hspace{170pt} {\\rm for} \\quad \ns+1 \\le a \\le s+r, \\nonumber \\\\ \n\\hspace{-20pt} && \\framebox{$\\overline{a}$}_{u}=\\psi_{\\overline{a}}(u)\n \\frac{Q_{a-1}(u-2s+2r+a)Q_{a}(u-2s+2r+a-3)}\n {Q_{a-1}(u-2s+2r+a-2)Q_{a}(u-2s+2r+a-1)} \n \\nonumber \\\\ \n\\hspace{-20pt} && \\hspace{170pt} \n{\\rm for} \\quad 1 \\le a \\le s. 
\\nonumber \n\\end{eqnarray}\n$D(r|s)$ ($r \\in {\\bf Z}_{\\ge 2},s \\in {\\bf Z}_{\\ge 1}$) case: \n\\begin{eqnarray} \n\\hspace{-20pt} && \\framebox{$a$}_{u}=\\psi_{a}(u)\n \\frac{Q_{a-1}(u-a-1)Q_{a}(u-a+2)}\n {Q_{a-1}(u-a+1)Q_{a}(u-a)}\n \\quad {\\rm for} \\quad 1 \\le a \\le s,\\nonumber \\\\\n\\hspace{-20pt} && \\framebox{$a$}_{u}=\\psi_{a}(u)\n \\frac{Q_{a-1}(u-2s+a+1)Q_{a}(u-2s+a-2)}\n {Q_{a-1}(u-2s+a-1)Q_{a}(u-2s+a)} \n \\nonumber \\\\ \n\\hspace{-20pt} && \\hspace{160pt} \n {\\rm for} \\quad s+1 \\le a \\le s+r-2,\\nonumber \\\\\n\\hspace{-20pt} && \\framebox{$r+s-1$}_{u}=\\psi_{r+s-1}(u)\n \\frac{Q_{s+r-2}(u-s+r)Q_{s+r-1}(u-s+r-3)}\n {Q_{s+r-2}(u-s+r-2)Q_{s+r-1}(u-s+r-1)}\n \\nonumber \\\\ \n\\hspace{-20pt} && \\hspace{160pt} \\times \n \\frac{Q_{s+r}(u-s+r-3)}{Q_{s+r}(u-s+r-1)}\n ,\\nonumber \\\\\n\\hspace{-20pt} && \\framebox{$r+s$}_{u}=\\psi_{r+s}(u)\n \\frac{Q_{s+r-1}(u-s+r+1)Q_{s+r}(u-s+r-3)}\n {Q_{s+r-1}(u-s+r-1)Q_{s+r}(u-s+r-1)},\n \\label{z++} \\\\\n\\hspace{-20pt} && \n \\framebox{$\\overline{r+s}$}_{u}=\\psi_{\\overline{r+s}}(u)\n \\frac{Q_{s+r-1}(u-s+r-3)Q_{s+r}(u-s+r+1)}\n {Q_{s+r-1}(u-s+r-1)Q_{s+r}(u-s+r-1)},\n \\nonumber \\\\\n\\hspace{-20pt} && \n\\framebox{$\\overline{r+s-1}$}_{u}=\\psi_{\\overline{r+s-1}}(u)\n \\frac{Q_{s+r-2}(u-s+r-2)Q_{s+r-1}(u-s+r+1)}\n {Q_{s+r-2}(u-s+r)Q_{s+r-1}(u-s+r-1)}\n \\nonumber \\\\ \n\\hspace{-20pt} && \\hspace{160pt} \n \\times \\frac{Q_{s+r}(u-s+r+1)}{Q_{s+r}(u-s+r-1)},\n \\nonumber \\\\\n\\hspace{-20pt} && \\framebox{$\\overline{a}$}_{u}=\\psi_{\\overline{a}}(u)\n \\frac{Q_{a-1}(u+2r-a-3)Q_{a}(u+2r-a)}\n {Q_{a-1}(u+2r-a-1)Q_{a}(u+2r-a-2)}\n \\nonumber \\\\ \n\\hspace{-20pt} && \\hspace{160pt}\n {\\rm for} \\quad s+1 \\le a \\le s+r-2, \\nonumber \\\\ \n\\hspace{-20pt} && \\framebox{$\\overline{a}$}_{u}=\\psi_{\\overline{a}}(u)\n \\frac{Q_{a-1}(u-2s+2r+a-1)Q_{a}(u-2s+2r+a-4)}\n {Q_{a-1}(u-2s+2r+a-3)Q_{a}(u-2s+2r+a-2)} \n \\nonumber \\\\ \n\\hspace{-20pt} && \\hspace{160pt}\n {\\rm for} \\quad 1 \\le a \\le 
s.\\nonumber \n\\end{eqnarray}\nHere we assume $Q_{0}(u)=1$. \nThe vacuum parts of the functions \n$\\framebox{a}_{u}$ (\\ref{z+}) and (\\ref{z++})\n are given as follows. \\\\\nFor $B(r|s)$ ($r \\in {\\bf Z}_{\\ge 0},s \\in {\\bf Z}_{\\ge 1}$) case:\n\\begin{eqnarray}\n \\psi_{1}(u)&=&\\phi(u-2)\\phi(u-2s+2r-1), \\nonumber \\\\ \n \\psi_{a}(u)&=&\\phi(u)\\phi(u-2s+2r-1) \n \\quad {\\rm for} \\quad \n 2 \\preceq a \\preceq \\overline{2}, \\nonumber \\\\\n \\psi_{\\overline{1}}(u)&=&\\phi(u)\\phi(u-2s+2r+1)\n \\label{psi}.\n\\end{eqnarray}\n$D(r|s)$ ($r \\in {\\bf Z}_{\\ge 2},s \\in {\\bf Z}_{\\ge 1}$) case:\n\\begin{eqnarray}\n \\psi_{1}(u)&=&\\phi(u-2)\\phi(u-2s+2r-2), \\nonumber \\\\ \n \\psi_{a}(u)&=&\\phi(u)\\phi(u-2s+2r-2) \n \\quad {\\rm for} \\quad\n 2 \\preceq a \\preceq \\overline{2}, \\nonumber \\\\\n \\psi_{\\overline{1}}(u)&=&\\phi(u)\\phi(u-2s+2r) .\n \\label{psi-d}\n\\end{eqnarray}\nHere \n\\begin{eqnarray}\n \\phi(u)=\\prod_{j=1}^{N}\\Phi(u-w_{j}).\n\\end{eqnarray}\nUnder the BAEs (\\ref{BAE})-(\\ref{BAE4}),\n we have: \n \\footnote{\n Here $Res_{u=a}f(u)$ denotes the residue of a function $f(u)$ at $u=a $.\n }\\\\ \n $B(r|s)$ ($r,s \\in {\\bf Z}_{\\ge 1}$) case:\n\\begin{eqnarray}\n\\hspace{-45pt}&& Res_{u=d+u_{k}^{(d)}}\n (\\framebox{$d$}_{u}+\\framebox{$d+1$}_{u})=0 \n \\quad {\\rm for} \\quad 1\\le d \\le s-1 , \\label{res1} \\\\\n\\hspace{-45pt}&& Res_{u=s+u_{k}^{(s)}}\n(\\framebox{$s$}_{u}-\\framebox{$s+1$}_{u})=0 ,\n \\\\\n\\hspace{-45pt}&& Res_{u=2s-d+u_{k}^{(d)}}\n (\\framebox{$d$}_{u}+\\framebox{$d+1$}_{u})=0 \n \\quad {\\rm for} \\quad s+1\\le d \\le s+r-1 , \\label{res3} \\\\\n\\hspace{-45pt}&& Res_{u=s-r+u_{k}^{(s+r)}}\n(\\framebox{$s+r$}_{u}+\\framebox{$0$}_{u})=0 ,\n \\\\\n\\hspace{-45pt}&& Res_{u=s-r+1+u_{k}^{(s+r)}}\n(\\framebox{$0$}_{u}+\\framebox{$\\overline{s+r}$}_{u})=0 ,\n \\\\\n\\hspace{-45pt}&& Res_{u=d-2r+1+u_{k}^{(d)}}\n (\\framebox{$\\overline{d+1}$}_{u}+\\framebox{$\\overline{d}$}_{u})=0 \n \\quad {\\rm for} \\quad s+1 
\\le d \\le s+r-1 ,\\label{res6} \\\\\n\\hspace{-45pt}&& Res_{u=s-2r+1+u_{k}^{(s)}}\n (\\framebox{$\\overline{s+1}$}_{u}-\\framebox{$\\overline{s}$}_{u})=0,\n \\\\\n\\hspace{-45pt}&& Res_{u=-d+2s-2r+1+u_{k}^{(d)}}\n (\\framebox{$\\overline{d+1}$}_{u}+\\framebox{$\\overline{d}$}_{u})=0 \n \\quad {\\rm for} \\quad 1 \\le d \\le s-1 . \\label{res8} \n\\end{eqnarray}\n$B(0|s)$ ($s \\in {\\bf Z}_{\\ge 1}$) case:\n\\begin{eqnarray}\n\\hspace{-45pt}&& Res_{u=d+u_{k}^{(d)}}\n (\\framebox{$d$}_{u}+\\framebox{$d+1$}_{u})=0 \n \\quad {\\rm for} \\quad 1\\le d \\le s-1 , \\label{res1-b0s} \\\\\n\\hspace{-45pt}&& Res_{u=s+u_{k}^{(s)}}\n(\\framebox{$s$}_{u}-\\framebox{$0$}_{u})=0 ,\n \\\\\n \\hspace{-45pt}&& Res_{u=s+1+u_{k}^{(s)}}\n (\\framebox{$0$}_{u}-\\framebox{$\\overline{s}$}_{u})=0,\n \\\\\n\\hspace{-45pt}&& Res_{u=-d+2s+1+u_{k}^{(d)}}\n (\\framebox{$\\overline{d+1}$}_{u}+\\framebox{$\\overline{d}$}_{u})=0 \n\\quad {\\rm for} \\quad 1 \\le d \\le s-1 . \\label{res4-b0s} \n\\end{eqnarray}\n$D(r|s)$ ($r \\in {\\bf Z}_{\\ge 2},s \\in {\\bf Z}_{\\ge 1}$) case: \n\\begin{eqnarray}\n\\hspace{-45pt}&& Res_{u=d+u_{k}^{(d)}}\n (\\framebox{$d$}_{u}+\\framebox{$d+1$}_{u})=0 \n \\quad {\\rm for} \\quad 1\\le d \\le s-1 , \\label{res1-d} \\\\\n\\hspace{-45pt}&& Res_{u=s+u_{k}^{(s)}}\n(\\framebox{$s$}_{u}-\\framebox{$s+1$}_{u})=0 ,\n \\\\\n\\hspace{-45pt}&& Res_{u=2s-d+u_{k}^{(d)}}\n (\\framebox{$d$}_{u}+\\framebox{$d+1$}_{u})=0 \n \\quad {\\rm for} \\quad s+1\\le d \\le s+r-1 , \\label{res3-d} \\\\\n\\hspace{-45pt}&& Res_{u=s-r+1+u_{k}^{(s+r)}}\n(\\framebox{$s+r-1$}_{u}+\\framebox{$\\overline{s+r}$}_{u})=0 ,\n \\\\\n\\hspace{-45pt}&& Res_{u=s-r+1+u_{k}^{(s+r)}}\n(\\framebox{$s+r$}_{u}+\\framebox{$\\overline{s+r-1}$}_{u})=0 ,\n \\\\\n\\hspace{-45pt}&& Res_{u=d-2r+2+u_{k}^{(d)}}\n (\\framebox{$\\overline{d+1}$}_{u}+\\framebox{$\\overline{d}$}_{u})=0 \n \\quad {\\rm for} \\quad s+1 \\le d \\le s+r-1 ,\\label{res6-8} \\\\\n\\hspace{-45pt}&& Res_{u=s-2r+2+u_{k}^{(s)}}\n 
(\\framebox{$\\overline{s+1}$}_{u}-\\framebox{$\\overline{s}$}_{u})=0,\n \\\\\n\\hspace{-45pt}&& Res_{u=-d+2s-2r+2+u_{k}^{(d)}}\n (\\framebox{$\\overline{d+1}$}_{u}+\\framebox{$\\overline{d}$}_{u})=0 \n \\quad {\\rm for} \\quad 1 \\le d \\le s-1 . \\label{res8-d} \n\\end{eqnarray}\nWe assign coordinates $(i,j)\\in {\\bf Z}^{2}$ \non the skew-Young superdiagram $\\lambda \\subset \\mu$ \nsuch that the row index $i$ increases as we go downwards and the column \nindex $j$ increases as we go from the left to the right and that \n$(1,1)$ is on the top left corner of $\\mu$.\nWe define an admissible tableau $b$ \non the skew-Young superdiagram \n$\\lambda \\subset \\mu$ as a set of elements $b(i,j)\\in J$ \n labeled by the coordinates \n$(i,j)$ mentioned above, with the following rule. \\\\ \nThe admissible condition for $B(r|s)$ \n($r \\in {\\bf Z}_{\\ge 0}$, $s \\in {\\bf Z}_{\\ge 1}$): \n\\begin{enumerate}\n\\item\n\\begin{eqnarray*}\n b(i,j) \\preceq b(i,j+1), \n\\end{eqnarray*}\n\\item \n\\begin{eqnarray*}\n b(i,j) \\preceq b(i+1,j), \n\\end{eqnarray*}\n\\item \n\\begin{eqnarray*} \n b(i,j) \\prec b(i+1,j) \\quad {\\rm if}\\quad b(i,j) \n \\in J_{+} \\setminus \\{0 \\}, \n\\end{eqnarray*} \n\\item \n\\begin{eqnarray*}\n b(i,j) \\prec b(i,j+1) \\quad {\\rm if} \\quad b(i,j) \n \\in J_{-} \\cup \\{0 \\}. \n\\end{eqnarray*}\n\\end{enumerate}\nThe admissible condition for $D(r|s)$ \n($r \\in {\\bf Z}_{\\ge 2}$, $s \\in {\\bf Z}_{\\ge 1}$); \n$\\lambda=\\phi$; $\\mu=(1^{a})$:\n\\begin{enumerate}\n\\item \n\\begin{eqnarray*}\n b(k,1) \\preceq b(k+1,1) \\quad \n {\\rm if} \\quad b(k+1,1) \\in J_{-} \n\\end{eqnarray*}\n\n\\item \n\\begin{eqnarray*} \nb(k,1) \\prec b(k+1,1) \\quad \n {\\rm if} \\quad b(k+1,1) \\in J_{+} \n\\end{eqnarray*}\nunless \n\\begin{eqnarray*}\n (b(k,1),b(k+1,1))=(\\overline{s+r},s+r) \\quad {\\rm or} \n \\quad (s+r,\\overline{s+r}). 
\n\\end{eqnarray*}\n\\end{enumerate}\nThe admissible condition for $D(r|s)$ \n($r \\in {\\bf Z}_{\\ge 2}$, $s \\in {\\bf Z}_{\\ge 1}$); \n$\\lambda=\\phi$; $\\mu=(m^{1})$:\n\\begin{enumerate}\n\\item \n\\begin{eqnarray*} \nb(1,k) \\preceq b(1,k+1) \\quad \n {\\rm if} \\quad b(1,k+1) \\in J_{+}, \n\\end{eqnarray*}\n\n\\item \n\\begin{eqnarray*}\n b(1,k) \\prec b(1,k+1) \\quad \n {\\rm if} \\quad b(1,k+1) \\in J_{-}, \n\\end{eqnarray*}\n\n\\item \n$s+r$ and $\\overline{s+r}$ do not appear simultaneously.\n\\end{enumerate}\nLet $B(\\lambda \\subset \\mu)$ be \nthe set of admissible tableaux\n\\footnote{\nIn contrast to the $B(r|s)$ case, the \nadmissible condition for the $D(r|s)$ \n case has a non-local nature. This property makes it \n difficult to extend the \nadmissible condition for $D(r|s)$ \n to more general skew-Young (super) diagrams.\n} \n on $\\lambda \\subset \\mu$. \nWe shall present a function ${\\cal T}_{\\lambda \\subset \\mu}(u)$ \nwith a spectral parameter $u\\in {\\bf C}$ and \n skew-Young superdiagrams $\\lambda \\subset \\mu$, \n which is a candidate for a set of DVFs \n for various fusion types in the \n auxiliary spaces\n \\footnote{We assume that they are finite-dimensional modules of \n quantum affine superalgebras (or super Yangians) \\cite{Y1,Y2}. \n Thus ${\\cal T}_{\\lambda \\subset \\mu}(u)$ is expected to be a \n kind of (super) character of such algebras. \n At present, we cannot justify this speculation mathematically \n in general, \n since we lack a systematic representation theory of \n such algebras. \n We hope that a mathematically satisfactory \n account of our formulae will appear after further development of \n the representation theory. }\n of transfer matrices of \n $B(r|s)$ or $D(r|s)$ vertex models. 
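As a concrete check of the admissible conditions for $B(r|s)$, the sketch below (our own enumeration code, not part of the paper; the encoding of a barred entry $\\overline{k}$ as the negative integer $-k$ is an assumption) lists the admissible tableaux on a single column $\\mu=(1^{a})$ using the total order (\\ref{order}). For $a=1$ it recovers the $2s+2r+1$ single-box tableaux; for $B(2|1)$ these are exactly the seven boxes appearing in ${\\cal T}^{1}(u)$ below.

```python
# Illustration only: enumerate admissible column tableaux for B(r|s).
from itertools import product

def J_ordered(r, s):
    # total order for B(r|s): 1 < ... < s+r < 0 < bar(s+r) < ... < bar(1);
    # a barred index bar(k) is encoded as -k, and 0 stays 0.
    return list(range(1, s + r + 1)) + [0] + [-k for k in range(s + r, 0, -1)]

def column_tableaux(a, r, s):
    """Admissible tableaux on the column diagram (1^a) for B(r|s):
    entries weakly increase down the column, strictly so whenever the
    upper entry lies in J_+ minus {0} (conditions 2 and 3)."""
    order = J_ordered(r, s)
    rank = {x: i for i, x in enumerate(order)}
    strict_down = set(range(s + 1, s + r + 1)) | {-k for k in range(s + 1, s + r + 1)}
    admissible = []
    for t in product(order, repeat=a):
        ok = all(rank[t[i]] < rank[t[i + 1]] if t[i] in strict_down
                 else rank[t[i]] <= rank[t[i + 1]]
                 for i in range(a - 1))
        if ok:
            admissible.append(t)
    return admissible

boxes = column_tableaux(1, 2, 1)           # B(2|1)
print(len(boxes))                          # 7 boxes: 1, 2, 3, 0, bar3, bar2, bar1
pairs = column_tableaux(2, 2, 1)
print((0, 0) in pairs, (2, 2) in pairs)    # True False
```

Note the design point encoded here: entries of $J_{-}$ and the element $0$ may repeat down a column, while entries of $J_{+}\\setminus\\{0\\}$ must strictly increase, which is what distinguishes the superalgebra tableaux from ordinary semi-standard ones.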
\nFor the skew-Young (super) diagrams $\\lambda \\subset \\mu$, \ndefine ${\\cal T}_{\\lambda \\subset \\mu}(u)$ as follows\n\\begin{equation}\n {\\cal T}_{\\lambda \\subset \\mu}(u)=\n\\sum_{b \\in B(\\lambda \\subset \\mu)}\n\\prod_{(i,j) \\in (\\lambda \\subset \\mu)}\n(-1)^{p(b(i,j))}\n\\framebox{$b(i,j)$}_{u-\\mu_{1}+\\mu_{1}^{\\prime}-2i+2j}\t\n\\label{Tge1},\n\\end{equation}\nwhere the product is taken over the coordinates $(i,j)$ on\n $\\lambda \\subset \\mu$.\n\nWe can express ${\\cal T}_{\\lambda \\subset \\mu}(u)$ \nas determinants over matrices, whose matrix elements are \n${\\cal T}^{a}$ or ${\\cal T}_{m}$ \n\\footnote{${\\cal T}_{m}^{a}(u):={\\cal T}_{(m^{a})}(u)$; \n${\\cal T}_{m}(u):={\\cal T}_{m}^{1}(u)$; \n${\\cal T}^{a}(u):={\\cal T}_{1}^{a}(u)$; \n${\\cal T}_{m}^{0}(u)={\\cal T}_{0}^{a}(u)=1$ for \n$m,a\\in {\\bf Z}_{\\ge 0}$; \n${\\cal T}_{m}^{a}(u)=0$ if \n$m\\in {\\bf Z}_{< 0}$ or $a\\in {\\bf Z}_{< 0}$. \nSee also Appendix B.}\n(cf. \\cite{KOS,BR}). \\\\ \nFor $B(r|s)$ ($r \\in {\\bf Z}_{\\ge 0}$, $s \\in {\\bf Z}_{\\ge 1}$) case, we have \n\\begin{eqnarray}\n\\hspace{-30pt}\n{\\cal T}_{\\lambda \\subset \\mu}(u)&=&{\\rm det}_{1 \\le i,j \\le \\mu_{1}}\n ({\\cal T}^{\\mu_{i}^{\\prime}-\\lambda_{j}^{\\prime}-i+j}\n (u-\\mu_{1}+\\mu_{1}^{\\prime}-\\mu_{i}^{\\prime}-\\lambda_{j}^{\\prime}+i+j-1))\n\t\\label{Jacobi-Trudi1} \\\\ \n\\hspace{-30pt}\t\n &=& {\\rm det}_{1 \\le i,j \\le \\mu_{1}^{\\prime}}\n ({\\cal T}_{\\mu_{j}-\\lambda_{i}+i-j}\n (u-\\mu_{1}+\\mu_{1}^{\\prime}+\\mu_{j}+\\lambda_{i}-i-j+1))\t.\n\t\\label{Jacobi-Trudi2} \n\\end{eqnarray}\nFor $D(r|s)$ ($r \\in {\\bf Z}_{\\ge 2}$, $s \\in {\\bf Z}_{\\ge 1}$) case, we have \n\\begin{eqnarray}\n\\hspace{-60pt}&& \n{\\cal T}_{m}(u)={\\rm det}_{1 \\le i,j \\le m}\n ({\\cal T}^{1-i+j}\n (u-m+i+j-1))\n\t\\label{Jacobi-Trudi}. 
\n\\end{eqnarray}\nNote that the function ${\\cal T}^{1}(u)={\\cal T}_{1}(u)$ \n coincides with \nthe eigenvalue formula of a $B(r|s)$ or $D(r|s)$ vertex model \nby the algebraic Bethe ansatz\n \\cite{MR} after some redefinitions. \n\nWe remark that if $\\Phi (-u) = \\pm \\Phi (u)$, \n$\\framebox{$a$}_{u}$ is transformed to \n$ \\framebox{$\\overline{a}$}_{u}$\n\\footnote{Here we interpret $\\overline{0}$ as $0$.}\nunder the following transformations. \\\\ \n$B(r|s)$ ($r \\in {\\bf Z}_{\\ge 0}, s \\in {\\bf Z}_{\\ge 1} $) case: \n\\begin{eqnarray}\n&& u \\to -(u+2r-2s-1), \\nonumber \\\\ \n&& u_{j}^{(a)} \\to -u_{j}^{(a)}, \\label{cross1} \\\\ \n&& w_{j} \\to -w_{j}. \\nonumber \n\\end{eqnarray}\n$D(r|s)$ ($r \\in {\\bf Z}_{\\ge 2}, s \\in {\\bf Z}_{\\ge 1} $) case:\n\\begin{eqnarray}\n&& u \\to -(u+2r-2s-2), \\nonumber \\\\ \n&& u_{j}^{(a)} \\to -u_{j}^{(a)}, \\label{cross2} \\\\ \n&& w_{j} \\to -w_{j}. \\nonumber \n\\end{eqnarray}\n${\\cal T}_{m}(u)$ and ${\\cal T}^{a}(u)$ \nare invariant under the transformations (\\ref{cross1}) \nor (\\ref{cross2}). 
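\nAs a small illustrative check of (\\ref{Jacobi-Trudi}) \n(a direct expansion, using the conventions \n${\\cal T}^{0}(u)=1$ and ${\\cal T}_{1}(u)={\\cal T}^{1}(u)$ \nfrom the footnote above), the $m=2$ case reads \n\\begin{eqnarray*}\n{\\cal T}_{2}(u)={\\cal T}^{1}(u-1)\\,{\\cal T}^{1}(u+1)-{\\cal T}^{2}(u),\n\\end{eqnarray*}\nwhich resembles a $T$-system type functional relation. 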
This invariance may be\n viewed as a kind of crossing symmetry.\n\nNow we shall present examples of (\\ref{Tge1}) \nfor $B(2|1), J_{-}=\\{1,\\overline{1} \\},\nJ_{+}=\\{2,3,0,\\overline{3},\\overline{2} \\}$ case: \n\\begin{eqnarray}\n{\\cal T}^{1}(u) &=&\n -\\begin{array}{|c|}\\hline \n \t1 \\\\ \\hline\n \\end{array}\n +\\begin{array}{|c|}\\hline \n \t2 \\\\ \\hline\n \\end{array}\n +\\begin{array}{|c|}\\hline \n \t3 \\\\ \\hline\n \\end{array}\n +\\begin{array}{|c|}\\hline \n \t0 \\\\ \\hline\n \\end{array}\n +\\begin{array}{|c|}\\hline \n \t\\stackrel{\\ }{\\overline{3}} \\\\ \\hline\n \\end{array}\n +\\begin{array}{|c|}\\hline \n \t\\stackrel{\\ }{\\overline{2}} \\\\ \\hline\n \\end{array}\n -\\begin{array}{|c|}\\hline \n \t\\stackrel{\\ }{\\overline{1}} \\\\ \\hline\n \\end{array}\n\\nonumber \\\\ \n&=&\n- \\phi(-2 + u)\\phi(1 + u)\\frac{Q_{1}(1 + u)}{Q_{1}(-1 + u)} \n \\nonumber \\\\ \n &+& \n \\phi(u)\\phi(1 + u)\\frac{Q_{1}(1 + u)Q_{2}(-2 + u)}{Q_{1}(-1 + u)Q_{2}(u)}\n \\nonumber \\\\ \n &+& \n \\phi(u)\\phi(1 + u)\\frac{Q_{1}(u)Q_{2}(3 + u)}{Q_{1}(2 + u)Q_{2}(1 + u)} \n \\nonumber \\\\\n &+& \n \\phi(u)\\phi(1 + u)\\frac{Q_{2}(2 + u)Q_{3}(-1 + u)}{Q_{2}(u)Q_{3}(1 + u)} \n \\nonumber \\\\\n &+&\n \\phi(u)\\phi(1 + u)\\frac{Q_{2}(-1 + u)Q_{3}(2 + u)}{Q_{2}(1 + u)Q_{3}(u)} \n \\nonumber \\\\\n &+& \n \\phi(u)\\phi(1 + u)\\frac{Q_{3}(-1 + u)Q_{3}(2 + u)}{Q_{3}(u)Q_{3}(1 + u)}\n \\nonumber \\\\ \n &-& \n \\phi(u)\\phi(3 + u)\\frac{Q_{1}(u)}{Q_{1}(2 + u)} \n, \n\\label{t1-ex}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n{\\cal T}^{2}(u) &=&\n \\begin{array}{|c|}\\hline \n \t1 \\\\ \\hline\n \t1 \\\\ \\hline \n \\end{array}\n -\\begin{array}{|c|}\\hline \n \t1 \\\\ \\hline\n \t2 \\\\ \\hline \n \\end{array}\n -\\begin{array}{|c|}\\hline \n \t1 \\\\ \\hline\n \t3 \\\\ \\hline \n \\end{array}\n -\\begin{array}{|c|}\\hline \n \t1 \\\\ \\hline\n \t0 \\\\ \\hline \n \\end{array}\n -\\begin{array}{|c|}\\hline \n \t1 \\\\ \\hline\n \t\\stackrel{\\ }{\\overline{3}} \\\\ 
\\hline \n \\end{array}\n -\\begin{array}{|c|}\\hline \n \t1 \\\\ \\hline\n \t\\stackrel{\\ }{\\overline{2}} \\\\ \\hline \n \\end{array}\n +\\begin{array}{|c|}\\hline \n \t1 \\\\ \\hline\n \t\\stackrel{\\ }{\\overline{1}} \\\\ \\hline \n \\end{array}\n +\\begin{array}{|c|}\\hline \n \t2 \\\\ \\hline\n \t3 \\\\ \\hline \n \\end{array}\n +\\begin{array}{|c|}\\hline \n \t2 \\\\ \\hline\n \t0 \\\\ \\hline \n \\end{array}\n +\\begin{array}{|c|}\\hline \n \t2 \\\\ \\hline\n \t\\stackrel{\\ }{\\overline{3}} \\\\ \\hline \n \\end{array}\n \\nonumber \\\\\n &+& \\begin{array}{|c|}\\hline \n \t2 \\\\ \\hline\n \t\\stackrel{\\ }{\\overline{2}} \\\\ \\hline \n \\end{array}\n -\\begin{array}{|c|}\\hline \n \t2 \\\\ \\hline\n \t\\stackrel{\\ }{\\overline{1}} \\\\ \\hline \n \\end{array}\n +\\begin{array}{|c|}\\hline \n \t3 \\\\ \\hline\n \t0 \\\\ \\hline \n \\end{array}\n +\\begin{array}{|c|}\\hline \n \t3 \\\\ \\hline\n \t\\stackrel{\\ }{\\overline{3}} \\\\ \\hline \n \\end{array}\n +\\begin{array}{|c|}\\hline \n \t3 \\\\ \\hline\n \t\\stackrel{\\ }{\\overline{2}} \\\\ \\hline \n \\end{array}\n -\\begin{array}{|c|}\\hline \n \t3 \\\\ \\hline\n \t\\stackrel{\\ }{\\overline{1}} \\\\ \\hline \n \\end{array}\n +\\begin{array}{|c|}\\hline \n \t0 \\\\ \\hline\n \t0 \\\\ \\hline \n \\end{array}\n +\\begin{array}{|c|}\\hline \n \t0 \\\\ \\hline\n \t\\stackrel{\\ }{\\overline{3}} \\\\ \\hline \n \\end{array}\n +\\begin{array}{|c|}\\hline \n \t0 \\\\ \\hline\n \t\\stackrel{\\ }{\\overline{2}} \\\\ \\hline \n \\end{array}\n -\\begin{array}{|c|}\\hline \n \t0 \\\\ \\hline\n \t\\stackrel{\\ }{\\overline{1}} \\\\ \\hline \n \\end{array}\n \\nonumber \\\\\n &+&\\begin{array}{|c|}\\hline \n \t\\stackrel{\\ }{\\overline{3}} \\\\ \\hline\n \t\\stackrel{\\ }{\\overline{2}} \\\\ \\hline \n \\end{array}\n -\\begin{array}{|c|}\\hline \n \t\\stackrel{\\ }{\\overline{3}} \\\\ \\hline\n \t\\stackrel{\\ }{\\overline{1}} \\\\ \\hline \n \\end{array}\n -\\begin{array}{|c|}\\hline \n \t\\stackrel{\\ 
}{\\overline{2}} \\\\ \\hline\n \t\\stackrel{\\ }{\\overline{1}} \\\\ \\hline \n \\end{array} \n +\\begin{array}{|c|}\\hline \n \t\\stackrel{\\ }{\\overline{1}} \\\\ \\hline\n \t\\stackrel{\\ }{\\overline{1}} \\\\ \\hline \n \\end{array}\n \\nonumber \\\\\n&=& \\phi(-1+ u)\\phi(2+ u) \n\\Bigg(\n\\phi(-3 + u)\\phi(u)\\frac{Q_{1}(2 + u)}{Q_{1}(-2 + u)} \\nonumber \\\\ &+& \n \\phi(-1 + u)\\phi(2 + u)\\frac{Q_{1}(-1 + u)Q_{1}(2 + u)}\n {Q_{1}(u)Q_{1}(1 + u)} \\nonumber \\\\ &+& \n \\phi(1 + u)\\phi(4 + u)\\frac{Q_{1}(-1 + u)}{Q_{1}(3 + u)} \\nonumber \\\\ &-& \n \\phi(-1 + u)\\phi(u)\\frac{Q_{1}(2 + u)Q_{2}(-3 + u)}\n {Q_{1}(-2 + u)Q_{2}(-1 + u)} \\nonumber \\\\ &-& \n \\phi(1 + u)\\phi(2 + u)\\frac{Q_{1}(-1 + u)Q_{1}(2 + u)\n Q_{2}(-1 + u)}{Q_{1}(u)Q_{1}(1 + u)Q_{2}(1 + u)} \\nonumber \\\\ &-& \n \\phi(-1 + u)\\phi(u)\\frac{Q_{1}(-1 + u)Q_{1}(2 + u)Q_{2}(2 + u)}\n {Q_{1}(u)Q_{1}(1 + u)Q_{2}(u)} \\nonumber \\\\ &+& \n \\phi(u)\\phi(1 + u)\\frac{Q_{1}(-1 + u)Q_{1}(2 + u)Q_{2}(-1 + u)\n Q_{2}(2 + u)}{Q_{1}(u)Q_{1}(1 + u)Q_{2}(u)Q_{2}(1 + u)} \\nonumber \\\\ &-& \n \\phi(1 + u)\\phi(2 + u)\\frac{Q_{1}(-1 + u)Q_{2}(4 + u)}\n {Q_{1}(3 + u)Q_{2}(2 + u)} \\nonumber \\\\ &+& \n \\phi(u)\\phi(1 + u)\\frac{Q_{1}(2 + u)Q_{3}(-2 + u)}\n {Q_{1}(u)Q_{3}(u)} \\nonumber \\\\ \n &-& \\phi(-1 + u)\\phi(u)\\frac{Q_{1}(2 + u)\n Q_{2}(1 + u)Q_{3}(-2 + u)}{Q_{1}(u)Q_{2}(-1 + u)Q_{3}(u)}\n \\nonumber \\\\ &-& \n \\phi(-1 + u)\\phi(u)\\frac{Q_{1}(2 + u)Q_{2}(-2 + u)Q_{3}(1 + u)}\n {Q_{1}(u)Q_{2}(u)Q_{3}(-1 + u)} \\nonumber \\\\ &+& \n \\phi(u)\\phi(1 + u)\\frac{Q_{1}(2 + u)Q_{2}(-2 + u)Q_{2}(-1 + u)\n Q_{3}(1 + u)}{Q_{1}(u)Q_{2}(u)Q_{2}(1 + u)Q_{3}(-1 + u)} \\nonumber \\\\ \n &-& \n \\phi(-1 + u)\\phi(u)\\frac{Q_{1}(2 + u)Q_{3}(-2 + u)Q_{3}(1 + u)}\n {Q_{1}(u)Q_{3}(-1 + u)Q_{3}(u)} \\nonumber \\\\ &+& \n \\phi(u)\\phi(1 + u)\\frac{Q_{1}(2 + u)Q_{2}(-1 + u)Q_{3}(-2 + u)\n Q_{3}(1 + u)}{Q_{1}(u)Q_{2}(1 + u)Q_{3}(-1 + u)Q_{3}(u)} \\nonumber \\\\ &-& \n \\phi(1 + u)\\phi(2 + u)\\frac{Q_{1}(-1 + 
u)Q_{2}(3 + u)Q_{3}(u)}\n {Q_{1}(1 + u)Q_{2}(1 + u)Q_{3}(2 + u)} \\nonumber \\\\ &+& \n \\phi(u)\\phi(1 + u)\\frac{Q_{1}(-1 + u)Q_{2}(2 + u)Q_{2}(3 + u)\n Q_{3}(u)}{Q_{1}(1 + u)Q_{2}(u)Q_{2}(1 + u)Q_{3}(2 + u)} \\nonumber \\\\ &+& \n \\phi(u)\\phi(1 + u)\\frac{Q_{2}(3 + u)Q_{3}(-2 + u)Q_{3}(1 + u)}\n {Q_{2}(1 + u)Q_{3}(-1 + u)Q_{3}(2 + u)} \\nonumber \\\\ &+& \n \\phi(u)\\phi(1 + u)\\frac{Q_{2}(-2 + u)Q_{2}(3 + u)Q_{3}(u)\n Q_{3}(1 + u)}{Q_{2}(u)Q_{2}(1 + u)Q_{3}(-1 + u)Q_{3}(2 + u)}\n \\nonumber \\\\ \n &+& \\phi(u)\\phi(1 + u)\\frac{Q_{1}(-1 + u)Q_{3}(3 + u)}\n {Q_{1}(1 + u)Q_{3}(1 + u)} \\nonumber \\\\ &-& \n \\phi(1 + u)\\phi(2 + u)\\frac{Q_{1}(-1 + u)Q_{2}(u)Q_{3}(3 + u)}\n {Q_{1}(1 + u)Q_{2}(2 + u)Q_{3}(1 + u)} \\nonumber \\\\ &+& \n \\phi(u)\\phi(1 + u)\\frac{Q_{3}(-2 + u)Q_{3}(3 + u)}\n {Q_{3}(-1 + u)Q_{3}(2 + u)} \\nonumber \\\\ &+& \n \\phi(u)\\phi(1 + u)\\frac{Q_{2}(-2 + u)Q_{3}(u)Q_{3}(3 + u)}\n {Q_{2}(u)Q_{3}(-1 + u)Q_{3}(2 + u)} \\nonumber \\\\ &-& \n \\phi(1 + u)\\phi(2 + u)\\frac{Q_{1}(-1 + u)Q_{3}(u)Q_{3}(3 + u)}\n {Q_{1}(1 + u)Q_{3}(1 + u)Q_{3}(2 + u)} \\nonumber \\\\ &+& \n \\phi(u)\\phi(1 + u)\\frac{Q_{1}(-1 + u)Q_{2}(2 + u)Q_{3}(u)\n Q_{3}(3 + u)}{Q_{1}(1 + u)Q_{2}(u)Q_{3}(1 + u)Q_{3}(2 + u)}\n \\Bigg),\n \\label{t2-ex}\n\\end{eqnarray}\n\\begin{eqnarray}\n{\\cal T}_{2}(u) &=&\n-\\begin{array}{|c|c|} \\hline \n 1 & 2 \\\\ \\hline \n\\end{array}\n-\n\\begin{array}{|c|c|}\\hline\n 1 & 3 \\\\ \\hline\n\\end{array}\n-\n\\begin{array}{|c|c|}\\hline\n 1 & 0 \\\\ \\hline\n\\end{array}\n-\n\\begin{array}{|c|c|}\\hline\n 1 & \\stackrel{\\ }{\\overline{3}} \\\\ \\hline\n\\end{array}\n-\n\\begin{array}{|c|c|}\\hline\n 1 & \\stackrel{\\ }{\\overline{2}} \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|c|}\\hline\n 1 & \\stackrel{\\ }{\\overline{1}} \\\\ \\hline\n\\end{array} \n\\nonumber \\\\ \n&+&\n\\begin{array}{|c|c|}\\hline\n 2 & 2 \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|c|}\\hline\n 2 & 3 \\\\ 
\\hline\n\\end{array}\n+\n\\begin{array}{|c|c|}\\hline\n 2 & 0 \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|c|}\\hline\n 2 & \\stackrel{\\ }{\\overline{3}} \\\\ \\hline\n\\end{array} \n+\n\\begin{array}{|c|c|}\\hline\n 2 & \\stackrel{\\ }{\\overline{2}} \\\\ \\hline\n\\end{array}\n-\n\\begin{array}{|c|c|}\\hline\n 2 & \\stackrel{\\ }{\\overline{1}} \\\\ \\hline\n\\end{array}\n\\nonumber \\\\ \n&+&\n\\begin{array}{|c|c|}\\hline\n 3 & 3 \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|c|}\\hline\n 3 & 0 \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|c|}\\hline\n 3 & \\stackrel{\\ }{\\overline{3}} \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|c|}\\hline\n 3 & \\stackrel{\\ }{\\overline{2}} \\\\ \\hline\n\\end{array}\n-\n\\begin{array}{|c|c|}\\hline\n 3 & \\stackrel{\\ }{\\overline{1}} \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|c|}\\hline\n 0 & \\stackrel{\\ }{\\overline{3}} \\\\ \\hline\n\\end{array}\n\\nonumber \\\\ \n&+&\n\\begin{array}{|c|c|}\\hline\n 0 & \\stackrel{\\ }{\\overline{2}} \\\\ \\hline\n\\end{array}\n-\n\\begin{array}{|c|c|}\\hline\n 0 & \\stackrel{\\ }{\\overline{1}} \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|c|}\\hline\n \\stackrel{\\ }{\\overline{3}} & \n \\stackrel{\\ }{\\overline{3}} \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|c|}\\hline\n \\stackrel{\\ }{\\overline{3}} & \n \\stackrel{\\ }{\\overline{2}} \\\\ \\hline\n\\end{array}\n-\n\\begin{array}{|c|c|}\\hline\n \\stackrel{\\ }{\\overline{3}} & \n \\stackrel{\\ }{\\overline{1}} \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|c|}\\hline\n \\stackrel{\\ }{\\overline{2}} & \n \\stackrel{\\ }{\\overline{2}} \\\\ \\hline\n\\end{array}\n-\n\\begin{array}{|c|c|}\\hline\n \\stackrel{\\ }{\\overline{2}} & \n \\stackrel{\\ }{\\overline{1}} \\\\ \\hline\n\\end{array}\n\\nonumber \\\\ \n&=& \\phi(u)\\phi(1+ u)\n\\Bigg(\n\\phi(-3 + u)\\phi(4 + u)\\frac{Q_{1}(u)Q_{1}(1 + u)}{Q_{1}(-2 + u)Q_{1}(3 + u)}\n \\nonumber \\\\ \n &-& \n \\phi(-1 + u)\\phi(4 + u)\\frac{Q_{1}(u)Q_{1}(1 + 
u)Q_{2}(-3 + u)}\n {Q_{1}(-2 + u)Q_{1}(3 + u)Q_{2}(-1 + u)} \\nonumber \\\\ &+& \n \\phi(-1 + u)\\phi(2 + u)\\frac{Q_{1}(2 + u)Q_{2}(-3 + u)}\n {Q_{1}(-2 + u)Q_{2}(1 + u)} \\nonumber \\\\ &-& \n \\phi(-3 + u)\\phi(2 + u)\\frac{Q_{1}(2 + u)Q_{2}(-1 + u)}\n {Q_{1}(-2 + u)Q_{2}(1 + u)} \\nonumber \\\\ &-& \n \\phi(-1 + u)\\phi(4 + u)\\frac{Q_{1}(-1 + u)Q_{2}(2 + u)}\n {Q_{1}(3 + u)Q_{2}(u)} \\nonumber \\\\ &+& \n \\phi(-1 + u)\\phi(2 + u)\\frac{Q_{1}(-1 + u)\n Q_{2}(4 + u)}{Q_{1}(3 + u)Q_{2}(u)} \\nonumber \\\\ &-& \n \\phi(-3 + u)\\phi(2 + u)\\frac{Q_{1}(u)Q_{1}(1 + u)Q_{2}(4 + u)}\n {Q_{1}(-2 + u)Q_{1}(3 + u)Q_{2}(2 + u)} \\nonumber \\\\ &+& \n \\phi(-1 + u)\\phi(2 + u)\\frac{Q_{1}(u)Q_{1}(1 + u)Q_{2}(-3 + u)Q_{2}(4 + u)}\n {Q_{1}(-2 + u)Q_{1}(3 + u)Q_{2}(-1 + u)Q_{2}(2 + u)} \\nonumber \\\\ &-& \n \\phi(-1 + u)\\phi(4 + u)\\frac{Q_{1}(1 + u)Q_{2}(1 + u)Q_{3}(-2 + u)}\n {Q_{1}(3 + u)Q_{2}(-1 + u)Q_{3}(u)} \\nonumber \\\\ &+& \n \\phi(-1 + u)\\phi(2 + u)\\frac{Q_{1}(1 + u)Q_{2}(1 + u)Q_{2}(4 + u)\n Q_{3}(-2 + u)}{Q_{1}(3 + u)Q_{2}(-1 + u)Q_{2}(2 + u)Q_{3}(u)} \\nonumber \\\\ &-& \n \\phi(-1 + u)\\phi(4 + u)\\frac{Q_{1}(1 + u)Q_{2}(-2 + u)Q_{3}(1 + u)}\n {Q_{1}(3 + u)Q_{2}(u)Q_{3}(-1 + u)} \\nonumber \\\\ &+& \n \\phi(-1 + u)\\phi(2 + u)\\frac{Q_{1}(1 + u)Q_{2}(-2 + u)Q_{2}(4 + u)\n Q_{3}(1 + u)}{Q_{1}(3 + u)Q_{2}(u)Q_{2}(2 + u)Q_{3}(-1 + u)} \\nonumber \\\\ &-& \n \\phi(-1 + u)\\phi(4 + u)\\frac{Q_{1}(1 + u)Q_{3}(-2 + u)Q_{3}(1 + u)}\n {Q_{1}(3 + u)Q_{3}(-1 + u)Q_{3}(u)} \\nonumber \\\\ &+& \n \\phi(-1 + u)\\phi(2 + u)\\frac{Q_{1}(1 + u)Q_{2}(4 + u)Q_{3}(-2 + u)\n Q_{3}(1 + u)}{Q_{1}(3 + u)Q_{2}(2 + u)Q_{3}(-1 + u)Q_{3}(u)} \\nonumber \\\\ &+& \n \\phi(-1 + u)\\phi(2 + u)\\frac{Q_{2}(3 + u)Q_{3}(-2 + u)}\n {Q_{2}(-1 + u)Q_{3}(2 + u)} \\nonumber \\\\ &-& \n \\phi(-3 + u)\\phi(2 + u)\\frac{Q_{1}(u)Q_{2}(3 + u)Q_{3}(u)}\n {Q_{1}(-2 + u)Q_{2}(1 + u)Q_{3}(2 + u)} \\nonumber \\\\ &+& \n \\phi(-1 + u)\\phi(2 + u)\\frac{Q_{1}(u)Q_{2}(-3 + u)Q_{2}(3 + u)Q_{3}(u)}\n {Q_{1}(-2 
+ u)Q_{2}(-1 + u)Q_{2}(1 + u)Q_{3}(2 + u)} \\nonumber \\\\ &+& \n \\phi(-1 + u)\\phi(2 + u)\\frac{Q_{2}(-2 + u)Q_{3}(3 + u)}\n {Q_{2}(2 + u)Q_{3}(-1 + u)} \\nonumber \\\\ &+& \n \\phi(-1 + u)\\phi(2 + u)\\frac{Q_{2}(u)Q_{3}(-2 + u)Q_{3}(3 + u)}\n {Q_{2}(2 + u)Q_{3}(-1 + u)Q_{3}(u)} \\nonumber \\\\ &-& \n \\phi(-3 + u)\\phi(2 + u)\\frac{Q_{1}(u)Q_{2}(u)Q_{3}(3 + u)}\n {Q_{1}(-2 + u)Q_{2}(2 + u)Q_{3}(1 + u)} \\nonumber \\\\ &+& \n \\phi(-1 + u)\\phi(2 + u)\\frac{Q_{1}(u)Q_{2}(-3 + u)Q_{2}(u)Q_{3}(3 + u)}\n {Q_{1}(-2 + u)Q_{2}(-1 + u)Q_{2}(2 + u)Q_{3}(1 + u)} \\nonumber \\\\ &+& \n \\phi(-1 + u)\\phi(2 + u)\\frac{Q_{2}(u)Q_{2}(1 + u)Q_{3}(-2 + u)Q_{3}(3 + u)}\n {Q_{2}(-1 + u)Q_{2}(2 + u)Q_{3}(u)Q_{3}(1 + u)} \\nonumber \\\\ &+& \n \\phi(-1 + u)\\phi(2 + u)\\frac{Q_{2}(1 + u)Q_{3}(-2 + u)Q_{3}(3 + u)}\n {Q_{2}(-1 + u)Q_{3}(1 + u)Q_{3}(2 + u)} \\nonumber \\\\ &-& \n \\phi(-3 + u)\\phi(2 + u)\\frac{Q_{1}(u)Q_{3}(u)Q_{3}(3 + u)}\n {Q_{1}(-2 + u)Q_{3}(1 + u)Q_{3}(2 + u)} \n \\label{t21-ex} \\\\ &+& \n \\phi(-1 + u)\\phi(2 + u)\\frac{Q_{1}(u)Q_{2}(-3 + u)Q_{3}(u)Q_{3}(3 + u)}\n {Q_{1}(-2 + u)Q_{2}(-1 + u)Q_{3}(1 + u)Q_{3}(2 + u)} \n\\Bigg)\n. 
\\nonumber\n\\end{eqnarray}\nThanks to Theorem \\ref{th-tate} (see later) and \nthe relation (\\ref{Jacobi-Trudi1}),\n these DVFs are free of \npoles under the following BAE: \n\\begin{eqnarray}\n && \\frac{\\phi(u_{k}^{(1)}-1)}{\\phi(u_{k}^{(1)}+1)}\n =\\frac{Q_{2}(u_{k}^{(1)}-1)}\n {Q_{2}(u_{k}^{(1)}+1)} \\quad {\\rm for } \\quad \n 1 \\le k \\le N_{1}, \\nonumber \\\\ \n && -1=\\frac{Q_{1}(u_{k}^{(2)}-1)\n Q_{2}(u_{k}^{(2)}+2)Q_{3}(u_{k}^{(2)}-1)}\n {Q_{1}(u_{k}^{(2)}+1)\n Q_{2}(u_{k}^{(2)}-2)Q_{3}(u_{k}^{(2)}+1)} \n \\quad {\\rm for } \\quad \n 1 \\le k \\le N_{2}, \\nonumber \\\\ \n && -1=\\frac{Q_{2}(u_{k}^{(3)}-1)Q_{3}(u_{k}^{(3)}+1)}\n {Q_{2}(u_{k}^{(3)}+1)Q_{3}(u_{k}^{(3)}-1)},\n \\quad {\\rm for } \\quad \n 1 \\le k \\le N_{3}.\n\\label{BAE2-b21}\n\\end{eqnarray}\nNote that DVFs have so called \n{\\em Bethe-strap} structures \n\\cite{KS1,S2}, which bear resemblance to \nweight space diagrams.\n We have observed for many examples \n that ${\\cal T}_{\\lambda \\subset \\mu}(u)$ coincides with \n the Bethe-strap of the minimal connected component (cf \\cite{K2}) \n which\n include the top term \\cite{KS1,KOS} \n as the examples in \n Figure \\ref{best1}, Figure \\ref{best2} and Figure \\ref{best3}\n\\footnote{Recently we have found curious terms \n (pseudo-top terms) in many Bethe-straps (cf. \\cite{T4}). \n However we have confirmed for several examples the fact that \n the pseudo-top terms do not influence on \n connectivity of the Bethe straps (cf \\cite{KS1,KOS,K2}) \n in the whole.}. \n The top term of ${\\cal T}_{\\lambda \\subset \\mu}(u)$ carries \n a $B(r|s)$ or $D(r|s)$ weight. 
\n For example, for $B(r|s)$, $\\lambda =\\phi$, $\\mu_{r+1}\\le s$ \n case, the term corresponding to the tableau \n\\begin{equation}\n b(i,j)=\n \\left\\{\n \\begin{array}{llll}\n j & {\\rm for } & 1 \\le i \\le \\mu_{j}^{\\prime} & 1 \\le j \\le s \\\\\n i+s & {\\rm for } & 1 \\le i \\le \\mu_{j}^{\\prime} & s+1 \\le j \\le \\mu_{1}\n \\end{array}\n \\right.\n\\end{equation}\ncarries the \n$B(r|s)$ weight with the Kac-Dynkin label (\\ref{Kac-Dynkin}) \nor (\\ref{Kac-Dynkin-b0s}). \n\nThe top term\n\\footnote{Here we omit the vacuum part.} \n\\cite{KS1} of the DVF (\\ref{Tge1}) for \n $D(r|s)$, $\\lambda =\\phi, \\mu=(1^{a})$ will be \n\\begin{eqnarray}\n \\left.\n (-1)^{a}\n \\begin{array}{|c|}\\hline \n \t1 \\\\ \\hline\n 1 \\\\ \\hline\n \t\\vdots \\\\ \\hline \n \t1 \\\\ \\hline\n \\end{array}\n \\, \n \\right\\}\n \\! \n {\\tiny a} \\; \n =(-1)^{a}\n\\frac{Q_{1}(u+a)}{Q_{1}(u-a)},\n\\label{top-tate}\n\\end{eqnarray}\nwhich carries the $D(r|s)$ weight with the Kac-Dynkin label \n in (\\ref{KD-tate}). 
\nThe top term \n\\footnote{Here we omit the vacuum part.} \n \\cite{KS1} of the DVF (\\ref{Tge1}) for \n $D(r|s)$, $\\lambda =\\phi, \\mu=(m^{1})$ will be \n\\begin{eqnarray}\n&&\n(-1)^{m}\n\\underbrace{\n \\begin{array}{|c|c|c|c|}\\hline \n \t1 & 2 & \\cdots & m \\\\ \\hline\n \\end{array}\n }_{m}\n=(-1)^{m} \\frac{Q_{m}(u+1)}{Q_{m}(u-1)} \n\\qquad {\\rm if} \\quad 1 \\le m \\le s, \n\\nonumber \\\\ \n&& \n(-1)^{s}\n\\underbrace{\n \\begin{array}{|c|c|c|c|c|c|c|}\\hline \n \t1 & 2 & \\cdots & s & s+1 & \\cdots & s+1 \\\\ \\hline\n \\end{array}\n }_{m} \\label{top-yoko} \n \\\\ \n&& \\quad =(-1)^{s} \\frac{Q_{s}(u+m-s+1)Q_{s+1}(u-m+s)}\n {Q_{s}(u-m+s-1)Q_{s+1}(u+m-s)} \\nonumber \\\\ \n&& \\hspace{130pt} {\\rm if} \\quad r \\ge 3 \\quad \n{\\rm and} \\quad m \\ge s+1, \\nonumber \\\\ \n&& \\quad =(-1)^{s} \\frac{Q_{s}(u+m-s+1)Q_{s+1}(u-m+s)Q_{s+2}(u-m+s)}\n {Q_{s}(u-m+s-1)Q_{s+1}(u+m-s)Q_{s+2}(u+m-s)} \n \\nonumber \\\\ \n&& \\hspace{130pt} {\\rm if} \\quad r =2 \\quad \n{\\rm and} \\quad m \\ge s+1, \\nonumber\n\\end{eqnarray}\nwhich carries the $D(r|s)$ weight \nwith the Kac-Dynkin label in (\\ref{KD-yoko}). \n \n{\\em Remark}: \nThere is a supposition (cf. \\cite{KOS,K2}) that the \nauxiliary space of a transfer matrix \nis an irreducible one as a representation space of the Yangian \n (or quantum affine algebra) \n if the Bethe strap of the DVF is connected\n\\footnote{Here the word \"connected\" means that any two terms in the DVF \n are connected directly (or indirectly) to each other \nby arrows, as in the graphs in Figures \\ref{best1}-\\ref{best3}.}\n as a whole. \n Then a natural question arises: \n \"Is the Bethe strap of \n ${\\cal T}_{\\lambda \\subset \\mu}(u)$ (\\ref{Tge1}) always \n connected as a whole?\" The answer is no. 
\n In fact, for the $D(r|s)$ case, \n the Bethe strap of ${\\cal T}^{a}(u)$ \n is not connected \n if $0\\le r-s-1 \\le a \\le 2(r-s-1)$.\n So it is desirable to \n extract the minimal connected component of the Bethe strap \n which contains the top term (\\ref{top-tate}) \n from ${\\cal T}^{a}(u)$. \nA candidate is as follows: \n\\begin{eqnarray}\n {\\cal T}^{a}(u)-h^{a}(u){\\cal T}^{-a+2(r-s-1)}(u), \n\\end{eqnarray}\nwhere $h^{a}(u)=\\prod_{j=1}^{a+1-r+s}\n\\psi_{1}(u+a-2j+1)\\psi_{\\overline{1}}(u-a+2j-1)$. \n \nFor example, for the $D(3|1)$ case, ${\\cal T}^{2}(u)$ consists \nof 31 terms, which split into 30 terms whose Bethe strap \nis connected as a whole\n\\footnote{In this case, \n$\n-\\begin{array}{|c|}\\hline \n 2 \\\\ \\hline \n \\overline{1} \\\\ \\hline \n\\end{array}\n$ \nis a pseudo-top term (cf. \\cite{T4}).}\n and 1 isolated term \n$\nh^{2}(u)=\n\\begin{array}{|c|}\\hline \n 1 \\\\ \\hline \n \\overline{1} \\\\ \\hline \n\\end{array}\n=\\{\\phi(u-1)\\phi(u+3)\\}^{2}.\n$ Thus the Bethe strap of ${\\cal T}^{2}(u)-h^{2}(u)$ \nis connected as a whole. \n On the other hand, for the $D(2|2)$ case, ${\\cal T}^{2}(u)$ \ndoes not have such an isolated term, and the Bethe strap \nis connected as a whole \n(in this case, ${\\cal T}^{2}(u)$ has 33 terms). \nSo far, this kind of isolated term is peculiar to the $D(r|s)$ case. \nIn fact, we have not yet observed such an isolated term \nin the $B(r|s)$ case.\nSimilarly, \n the Bethe strap of ${\\cal T}_{m}(u)$ for $D(r|s)$ \n seems not to be connected \n if $0\\le s-r+1 \\le m \\le 2(s-r+1)$. 
\n A candidate for the minimal connected component of the Bethe strap \n which contains the top term (\\ref{top-yoko}) is \n\\begin{eqnarray}\n {\\cal T}_{m}(u)-h_{m}(u){\\cal T}_{-m+2(s-r+1)}(u), \n\\end{eqnarray}\nwhere $h_{m}(u)=\\prod_{j=1}^{m+r-s-1}\n\\psi_{j}(u-m+2j-1)\\psi_{\\overline{j}}(u+m-2j+1)$.\n \\rule{5pt}{10pt} \\\\ \n A remarkable resemblance between \n Bethe-straps for vector representations and crystal graphs \n \\cite{KN1,KN2} was pointed out in \\cite{KS1}. \n Whether such resemblance holds true for the Lie superalgebras \n in general or not will be an interesting question. \nThere is a remarkable coincidence between \n currents of deformed Virasoro algebra and DVFs \\cite{FR}. \n Whether such coincidence holds true for the Lie superalgebras \n in general or not will be another interesting question. \n\n\\begin{figure}\n \\setlength{\\unitlength}{1.2pt}\n \\begin{center}\n \\begin{picture}(250,40) \n \\put(-8,3){$-$}\n \\put(0,0){\\line(1,0){10}}\n \\put(10,0){\\line(0,1){10}}\n \\put(10,10){\\line(-1,0){10}}\n \\put(0,10){\\line(0,-1){10}}\n \\put(3,3){\\scriptsize{$1$}}\n %\n \\put(12,5){\\vector(1,0){20}}\n \\put(15,9){\\scriptsize{$(1,1)$}}\n \n \\put(33,3){$$}\n \\put(40,0){\\line(1,0){10}}\n \\put(50,0){\\line(0,1){10}}\n \\put(50,10){\\line(-1,0){10}}\n \\put(40,10){\\line(0,-1){10}}\n \\put(43,3){\\scriptsize{$2$}}\n %\n \\put(52,5){\\vector(1,0){20}}\n \\put(55,9){\\scriptsize{$(2,0)$}}\n \n \\put(73,3){$$}\n \\put(80,0){\\line(1,0){10}}\n \\put(90,0){\\line(0,1){10}}\n \\put(90,10){\\line(-1,0){10}}\n \\put(80,10){\\line(0,-1){10}}\n \\put(83,3){\\scriptsize{$3$}}\n %\n \\put(92,5){\\vector(1,0){20}}\n \\put(95,9){\\scriptsize{$(3,-1)$}}\n \n \\put(113,3){$$}\n \\put(120,0){\\line(1,0){10}}\n \\put(130,0){\\line(0,1){10}}\n \\put(130,10){\\line(-1,0){10}}\n \\put(120,10){\\line(0,-1){10}}\n \\put(123,3){\\scriptsize{$0$}}\n %\n \\put(132,5){\\vector(1,0){20}}\n \\put(135,9){\\scriptsize{$(3,0)$}}\n \n \\put(153,3){$$}\n 
\\put(160,0){\\line(1,0){10}}\n \\put(170,0){\\line(0,1){10}}\n \\put(170,10){\\line(-1,0){10}}\n \\put(160,10){\\line(0,-1){10}}\n \\put(163,3){\\scriptsize{$\\overline{3}$}}\n %\n \\put(172,5){\\vector(1,0){20}}\n \\put(175,9){\\scriptsize{$(2,-1)$}}\n \n \\put(197,0){\\line(1,0){10}}\n \\put(207,0){\\line(0,1){10}}\n \\put(207,10){\\line(-1,0){10}}\n \\put(197,10){\\line(0,-1){10}}\n \\put(200,3){\\scriptsize{$\\overline{2}$}}\n %\n \\put(212,5){\\vector(1,0){20}}\n \\put(215,9){\\scriptsize{$(1,-2)$}}\n \n \\put(232,3){$-$}\n \\put(240,0){\\line(1,0){10}}\n \\put(250,0){\\line(0,1){10}}\n \\put(250,10){\\line(-1,0){10}}\n \\put(240,10){\\line(0,-1){10}}\n \\put(243,3){\\scriptsize{$\\overline{1}$}}\n \\end{picture}\n \\end{center}\n \\caption{The Bethe-strap structure of \n ${\\cal T}^{1}(u)$ (\\ref{t1-ex}) for \n $B(2|1)=osp(5|2)$: \n The pair $(a,b)$ denotes the common pole $u_{k}^{(a)}+b$ of the pair \n of the tableaux connected by the arrow. \n This common pole vanishes under the BAE (\\ref{BAE2-b21}).\n The leftmost tableau corresponds to the \n \\symbol{96}highest weight \\symbol{39}, \n which is called the {\\em top term}. 
\n Such a correspondence between certain term in the DVF and a highest \n weight (to be more precise, a kind of Drinfel'd polynomial \n (cf \\cite{D,KOS}))\n may be called {\\em top term hypothesis} \\cite{KS1,KOS}.}\n \\label{best1}\n\\end{figure}\n\\begin{figure}\n \\setlength{\\unitlength}{1pt}\n \\begin{center}\n \\begin{picture}(250,470) \n \\put(120,0){\\line(1,0){10}}\n \\put(130,0){\\line(0,1){20}}\n \\put(130,20){\\line(-1,0){10}}\n \\put(120,20){\\line(0,-1){20}}\n \\put(120,10){\\line(1,0){10}}\n \\put(123,12){\\scriptsize{$\\overline{1}$}}\n \\put(123,2){\\scriptsize{$\\overline{1}$}}\n \\put(114,9){$$}\n \n \\put(120,40){\\line(1,0){10}}\n \\put(130,40){\\line(0,1){20}}\n \\put(130,60){\\line(-1,0){10}}\n \\put(120,60){\\line(0,-1){20}}\n \\put(120,50){\\line(1,0){10}}\n \\put(123,52){\\scriptsize{$\\overline{2}$}}\n \\put(123,42){\\scriptsize{$\\overline{1}$}}\n \\put(110,47){$-$}\n \n \\put(120,80){\\line(1,0){10}}\n \\put(130,80){\\line(0,1){20}}\n \\put(130,100){\\line(-1,0){10}}\n \\put(120,100){\\line(0,-1){20}}\n \\put(120,90){\\line(1,0){10}}\n \\put(123,92){\\scriptsize{$\\overline{3}$}}\n \\put(123,82){\\scriptsize{$\\overline{1}$}}\n \\put(110,87){$-$}\n \n \\put(80,120){\\line(1,0){10}}\n \\put(90,120){\\line(0,1){20}}\n \\put(90,140){\\line(-1,0){10}}\n \\put(80,140){\\line(0,-1){20}}\n \\put(80,130){\\line(1,0){10}}\n \\put(83,132){\\scriptsize{$\\overline{3}$}}\n \\put(83,122){\\scriptsize{$\\overline{2}$}}\n %\n \\put(160,120){\\line(1,0){10}}\n \\put(170,120){\\line(0,1){20}}\n \\put(170,140){\\line(-1,0){10}}\n \\put(160,140){\\line(0,-1){20}}\n \\put(160,130){\\line(1,0){10}}\n \\put(163,132){\\scriptsize{$0$}}\n \\put(163,122){\\scriptsize{$\\overline{1}$}}\n \\put(149,126){$-$}\n \n \\put(80,160){\\line(1,0){10}}\n \\put(90,160){\\line(0,1){20}}\n \\put(90,180){\\line(-1,0){10}}\n \\put(80,180){\\line(0,-1){20}}\n \\put(80,170){\\line(1,0){10}}\n \\put(83,172){\\scriptsize{$0$}}\n \\put(83,162){\\scriptsize{$\\overline{2}$}}\n %\n 
\\put(160,160){\\line(1,0){10}}\n \\put(170,160){\\line(0,1){20}}\n \\put(170,180){\\line(-1,0){10}}\n \\put(160,180){\\line(0,-1){20}}\n \\put(160,170){\\line(1,0){10}}\n \\put(163,172){\\scriptsize{$3$}}\n \\put(163,162){\\scriptsize{$\\overline{1}$}}\n \\put(149,166){$-$}\n \n \\put(40,200){\\line(1,0){10}}\n \\put(50,200){\\line(0,1){20}}\n \\put(50,220){\\line(-1,0){10}}\n \\put(40,220){\\line(0,-1){20}}\n \\put(40,210){\\line(1,0){10}}\n \\put(43,212){\\scriptsize{$0$}}\n \\put(43,202){\\scriptsize{$\\overline{3}$}}\n %\n \\put(120,200){\\line(1,0){10}}\n \\put(130,200){\\line(0,1){20}}\n \\put(130,220){\\line(-1,0){10}}\n \\put(120,220){\\line(0,-1){20}}\n \\put(120,210){\\line(1,0){10}}\n \\put(123,212){\\scriptsize{$3$}}\n \\put(123,202){\\scriptsize{$\\overline{2}$}}\n %\n \\put(200,200){\\line(1,0){10}}\n \\put(210,200){\\line(0,1){20}}\n \\put(210,220){\\line(-1,0){10}}\n \\put(200,220){\\line(0,-1){20}}\n \\put(200,210){\\line(1,0){10}}\n \\put(203,212){\\scriptsize{$2$}}\n \\put(203,202){\\scriptsize{$\\overline{1}$}}\n \\put(190,206){$-$}\n \n \\put(0,240){\\line(1,0){10}}\n \\put(10,240){\\line(0,1){20}}\n \\put(10,260){\\line(-1,0){10}}\n \\put(0,260){\\line(0,-1){20}}\n \\put(0,250){\\line(1,0){10}}\n \\put(3,252){\\scriptsize{$0$}}\n \\put(3,242){\\scriptsize{$0$}}\n %\n \\put(80,240){\\line(1,0){10}}\n \\put(90,240){\\line(0,1){20}}\n \\put(90,260){\\line(-1,0){10}}\n \\put(80,260){\\line(0,-1){20}}\n \\put(80,250){\\line(1,0){10}}\n \\put(83,252){\\scriptsize{$3$}}\n \\put(83,242){\\scriptsize{$\\overline{3}$}}\n %\n \\put(160,240){\\line(1,0){10}}\n \\put(170,240){\\line(0,1){20}}\n \\put(170,260){\\line(-1,0){10}}\n \\put(160,260){\\line(0,-1){20}}\n \\put(160,250){\\line(1,0){10}}\n \\put(163,252){\\scriptsize{$2$}}\n \\put(163,242){\\scriptsize{$\\overline{2}$}}\n %\n \\put(240,240){\\line(1,0){10}}\n \\put(250,240){\\line(0,1){20}}\n \\put(250,260){\\line(-1,0){10}}\n \\put(240,260){\\line(0,-1){20}}\n \\put(240,250){\\line(1,0){10}}\n 
\\put(243,252){\\scriptsize{$1$}}\n \\put(243,242){\\scriptsize{$\\overline{1}$}}\n \n \\put(40,280){\\line(1,0){10}}\n \\put(50,280){\\line(0,1){20}}\n \\put(50,300){\\line(-1,0){10}}\n \\put(40,300){\\line(0,-1){20}}\n \\put(40,290){\\line(1,0){10}}\n \\put(43,292){\\scriptsize{$3$}}\n \\put(43,282){\\scriptsize{$0$}}\n %\n \\put(120,280){\\line(1,0){10}}\n \\put(130,280){\\line(0,1){20}}\n \\put(130,300){\\line(-1,0){10}}\n \\put(120,300){\\line(0,-1){20}}\n \\put(120,290){\\line(1,0){10}}\n \\put(123,292){\\scriptsize{$2$}}\n \\put(123,282){\\scriptsize{$\\overline{3}$}}\n %\n \\put(200,280){\\line(1,0){10}}\n \\put(210,280){\\line(0,1){20}}\n \\put(210,300){\\line(-1,0){10}}\n \\put(200,300){\\line(0,-1){20}}\n \\put(200,290){\\line(1,0){10}}\n \\put(203,292){\\scriptsize{$1$}}\n \\put(203,282){\\scriptsize{$\\overline{2}$}}\n \\put(190,286){$-$}\n \n \\put(80,320){\\line(1,0){10}}\n \\put(90,320){\\line(0,1){20}}\n \\put(90,340){\\line(-1,0){10}}\n \\put(80,340){\\line(0,-1){20}}\n \\put(80,330){\\line(1,0){10}}\n \\put(83,332){\\scriptsize{$2$}}\n \\put(83,322){\\scriptsize{$0$}}\n %\n \\put(160,320){\\line(1,0){10}}\n \\put(170,320){\\line(0,1){20}}\n \\put(170,340){\\line(-1,0){10}}\n \\put(160,340){\\line(0,-1){20}}\n \\put(160,330){\\line(1,0){10}}\n \\put(163,332){\\scriptsize{$1$}}\n \\put(163,322){\\scriptsize{$\\overline{3}$}}\n \\put(150,326){$-$}\n \n \\put(80,360){\\line(1,0){10}}\n \\put(90,360){\\line(0,1){20}}\n \\put(90,380){\\line(-1,0){10}}\n \\put(80,380){\\line(0,-1){20}}\n \\put(80,370){\\line(1,0){10}}\n \\put(83,372){\\scriptsize{$2$}}\n \\put(83,362){\\scriptsize{$3$}}\n %\n \\put(160,360){\\line(1,0){10}}\n \\put(170,360){\\line(0,1){20}}\n \\put(170,380){\\line(-1,0){10}}\n \\put(160,380){\\line(0,-1){20}}\n \\put(160,370){\\line(1,0){10}}\n \\put(163,372){\\scriptsize{$1$}}\n \\put(163,362){\\scriptsize{$0$}}\n \\put(150,366){$-$}\n \n \\put(120,400){\\line(1,0){10}}\n \\put(130,400){\\line(0,1){20}}\n 
\\put(130,420){\\line(-1,0){10}}\n \\put(120,420){\\line(0,-1){20}}\n \\put(120,410){\\line(1,0){10}}\n \\put(123,412){\\scriptsize{$1$}}\n \\put(123,402){\\scriptsize{$3$}}\n \\put(110,406){$-$}\n \n \\put(120,440){\\line(1,0){10}}\n \\put(130,440){\\line(0,1){20}}\n \\put(130,460){\\line(-1,0){10}}\n \\put(120,460){\\line(0,-1){20}}\n \\put(120,450){\\line(1,0){10}}\n \\put(123,452){\\scriptsize{$1$}}\n \\put(123,442){\\scriptsize{$2$}}\n \\put(110,446){$-$}\n \n \\put(120,480){\\line(1,0){10}}\n \\put(130,480){\\line(0,1){20}}\n \\put(130,500){\\line(-1,0){10}}\n \\put(120,500){\\line(0,-1){20}}\n \\put(120,490){\\line(1,0){10}}\n \\put(123,492){\\scriptsize{$1$}}\n \\put(123,482){\\scriptsize{$1$}}\n \\put(114,489){$$}\n %\n \n \\put(92,118){\\vector(3,-2){26}}\n \\put(102,113){\\tiny{$(1,-1)$}}\n \n \\put(52,198){\\vector(3,-2){26}}\n \\put(62,193){\\tiny{$(2,0)$}}\n %\n \\put(132,198){\\vector(3,-2){26}}\n \\put(142,193){\\tiny{$(1,-1)$}}\n \n \\put(12,238){\\vector(3,-2){26}}\n \\put(22,233){\\tiny{$(3,1)$}}\n %\n \\put(92,238){\\vector(3,-2){26}}\n \\put(102,233){\\tiny{$(2,0)$}}\n %\n \\put(172,238){\\vector(3,-2){26}}\n \\put(182,233){\\tiny{$(1,-1)$}}\n \n \\put(52,278){\\vector(3,-2){26}}\n \\put(62,273){\\tiny{$(3,1)$}}\n %\n \\put(132,278){\\vector(3,-2){26}}\n \\put(142,273){\\tiny{$(2,0)$}}\n %\n \\put(212,278){\\vector(3,-2){26}}\n \\put(222,273){\\tiny{$(1,-1)$}}\n \n \\put(172,318){\\vector(3,-2){26}}\n \\put(182,313){\\tiny{$(2,0)$}}\n %\n \\put(92,318){\\vector(3,-2){26}}\n \\put(102,313){\\tiny{$(3,1)$}}\n \n \\put(132,398){\\vector(3,-2){26}}\n \\put(142,393){\\tiny{$(3,0)$}}\n \n \\put(158,118){\\vector(-3,-2){26}}\n \\put(128,114){\\tiny{$(3,-1)$}}\n \n \\put(198,198){\\vector(-3,-2){26}}\n \\put(168,195){\\tiny{$(2,-1)$}}\n %\n \\put(118,198){\\vector(-3,-2){26}}\n \\put(88,194){\\tiny{$(3,-2)$}}\n \n \\put(238,238){\\vector(-3,-2){26}}\n \\put(211,234){\\tiny{$(1,0)$}}\n %\n \\put(158,238){\\vector(-3,-2){26}}\n 
\\put(128,235){\\tiny{$(2,-1)$}}\n %\n \\put(78,238){\\vector(-3,-2){26}}\n \\put(49,235){\\tiny{$(3,-2)$}}\n \n \\put(38,278){\\vector(-3,-2){26}}\n \\put(8,275){\\tiny{$(3,-2)$}}\n %\n \\put(118,278){\\vector(-3,-2){26}}\n \\put(88,275){\\tiny{$(2,-1)$}}\n %\n \\put(198,278){\\vector(-3,-2){26}}\n \\put(170,273){\\tiny{$(1,0)$}}\n \n \\put(78,318){\\vector(-3,-2){26}}\n \\put(48,316){\\tiny{$(2,-1)$}}\n %\n \\put(160,318){\\vector(-3,-2){26}}\n \\put(130,313){\\tiny{$(1,0)$}}\n \n \\put(118,398){\\vector(-3,-2){26}}\n \\put(89,394){\\tiny{$(1,0)$}}\n %\n \n \\put(125,77){\\vector(0,-1){14}}\n \\put(127,69){\\tiny{$(2,-2)$}}\n %\n \\put(125,37){\\vector(0,-1){14}}\n \\put(127,29){\\tiny{$(1,-3)$}}\n \n \\put(85,157){\\vector(0,-1){14}}\n \\put(87,149){\\tiny{$(3,-1)$}}\n %\n \\put(165,157){\\vector(0,-1){14}}\n \\put(167,149){\\tiny{$(3,-2)$}}\n \n \\put(85,357){\\vector(0,-1){14}}\n \\put(87,349){\\tiny{$(3,0)$}}\n %\n \\put(165,357){\\vector(0,-1){14}}\n \\put(167,349){\\tiny{$(3,1)$}}\n \n \\put(125,437){\\vector(0,-1){14}}\n \\put(127,429){\\tiny{$(2,1)$}}\n %\n \\put(125,477){\\vector(0,-1){14}}\n \\put(127,469){\\tiny{$(1,2)$}}\n \n \\put(94,160){\\vector(3,-1){62}}\n \\put(124,152){\\tiny{$(1,-1)$}}\n \\put(156,360){\\vector(-3,-1){62}}\n \\put(117,357){\\tiny{$(1,0)$}}\n \\end{picture}\n \\end{center}\n \\caption{The Bethe-strap structure of \n ${\\cal T}^{2}(u)$ (\\ref{t2-ex}) for \n $B(2|1)$: \n The topmost tableau corresponds to \n the {\\em top term}.}\n \\label{best2}\n\\end{figure}\n\\begin{figure}\n \\setlength{\\unitlength}{1.5pt}\n \\begin{center}\n \\begin{picture}(100,310) \n \\put(40,0){\\line(1,0){20}}\n \\put(60,0){\\line(0,1){10}}\n \\put(60,10){\\line(-1,0){20}}\n \\put(40,10){\\line(0,-1){10}}\n \\put(50,0){\\line(0,1){10}}\n \\put(43,2){$\\overline{2}$}\n \\put(53,2){$\\overline{1}$}\n \\put(32,3){$-$}\n \n \\put(0,30){\\line(1,0){20}}\n \\put(20,30){\\line(0,1){10}}\n \\put(20,40){\\line(-1,0){20}}\n \\put(0,40){\\line(0,-1){10}}\n 
\\put(10,30){\\line(0,1){10}}\n \\put(3,32){$\\overline{2}$}\n \\put(13,32){$\\overline{2}$}\n %\n \\put(80,30){\\line(1,0){20}}\n \\put(100,30){\\line(0,1){10}}\n \\put(100,40){\\line(-1,0){20}}\n \\put(80,40){\\line(0,-1){10}}\n \\put(90,30){\\line(0,1){10}}\n \\put(83,32){$\\overline{3}$}\n \\put(93,32){$\\overline{1}$}\n \\put(72,33){$-$}\n \n \\put(0,60){\\line(1,0){20}}\n \\put(20,60){\\line(0,1){10}}\n \\put(20,70){\\line(-1,0){20}}\n \\put(0,70){\\line(0,-1){10}}\n \\put(10,60){\\line(0,1){10}}\n \\put(3,62){$\\overline{3}$}\n \\put(13,62){$\\overline{2}$}\n %\n \\put(80,60){\\line(1,0){20}}\n \\put(100,60){\\line(0,1){10}}\n \\put(100,70){\\line(-1,0){20}}\n \\put(80,70){\\line(0,-1){10}}\n \\put(90,60){\\line(0,1){10}}\n \\put(83,62){$0$}\n \\put(93,62){$\\overline{1}$}\n \\put(72,63){$-$}\n \n \\put(-40,90){\\line(1,0){20}}\n \\put(-20,90){\\line(0,1){10}}\n \\put(-20,100){\\line(-1,0){20}}\n \\put(-40,100){\\line(0,-1){10}}\n \\put(-30,90){\\line(0,1){10}}\n \\put(-37,92){$\\overline{3}$}\n \\put(-27,92){$\\overline{3}$}\n %\n \\put(40,90){\\line(1,0){20}}\n \\put(60,90){\\line(0,1){10}}\n \\put(60,100){\\line(-1,0){20}}\n \\put(40,100){\\line(0,-1){10}}\n \\put(50,90){\\line(0,1){10}}\n \\put(43,92){$0$}\n \\put(53,92){$\\overline{2}$}\n %\n \\put(120,90){\\line(1,0){20}}\n \\put(140,90){\\line(0,1){10}}\n \\put(140,100){\\line(-1,0){20}}\n \\put(120,100){\\line(0,-1){10}}\n \\put(130,90){\\line(0,1){10}}\n \\put(123,92){$3$}\n \\put(133,92){$\\overline{1}$}\n \\put(112,93){$-$}\n \n \\put(-40,120){\\line(1,0){20}}\n \\put(-20,120){\\line(0,1){10}}\n \\put(-20,130){\\line(-1,0){20}}\n \\put(-40,130){\\line(0,-1){10}}\n \\put(-30,120){\\line(0,1){10}}\n \\put(-37,122){$0$}\n \\put(-27,122){$\\overline{3}$}\n %\n \\put(40,120){\\line(1,0){20}}\n \\put(60,120){\\line(0,1){10}}\n \\put(60,130){\\line(-1,0){20}}\n \\put(40,130){\\line(0,-1){10}}\n \\put(50,120){\\line(0,1){10}}\n \\put(43,122){$3$}\n \\put(53,122){$\\overline{2}$}\n %\n 
\\put(120,120){\\line(1,0){20}}\n \\put(140,120){\\line(0,1){10}}\n \\put(140,130){\\line(-1,0){20}}\n \\put(120,130){\\line(0,-1){10}}\n \\put(130,120){\\line(0,1){10}}\n \\put(123,122){$2$}\n \\put(133,122){$\\overline{1}$}\n \\put(112,123){$-$}\n \n \\put(-40,150){\\line(1,0){20}}\n \\put(-20,150){\\line(0,1){10}}\n \\put(-20,160){\\line(-1,0){20}}\n \\put(-40,160){\\line(0,-1){10}}\n \\put(-30,150){\\line(0,1){10}}\n \\put(-37,152){$3$}\n \\put(-27,152){$\\overline{3}$}\n %\n \\put(40,150){\\line(1,0){20}}\n \\put(60,150){\\line(0,1){10}}\n \\put(60,160){\\line(-1,0){20}}\n \\put(40,160){\\line(0,-1){10}}\n \\put(50,150){\\line(0,1){10}}\n \\put(43,152){$2$}\n \\put(53,152){$\\overline{2}$}\n %\n \\put(120,150){\\line(1,0){20}}\n \\put(140,150){\\line(0,1){10}}\n \\put(140,160){\\line(-1,0){20}}\n \\put(120,160){\\line(0,-1){10}}\n \\put(130,150){\\line(0,1){10}}\n \\put(123,152){$1$}\n \\put(133,152){$\\overline{1}$}\n \\put(114,153){$$}\n \n \\put(-40,180){\\line(1,0){20}}\n \\put(-20,180){\\line(0,1){10}}\n \\put(-20,190){\\line(-1,0){20}}\n \\put(-40,190){\\line(0,-1){10}}\n \\put(-30,180){\\line(0,1){10}}\n \\put(-37,182){$3$}\n \\put(-27,182){$0$}\n %\n \\put(40,180){\\line(1,0){20}}\n \\put(60,180){\\line(0,1){10}}\n \\put(60,190){\\line(-1,0){20}}\n \\put(40,190){\\line(0,-1){10}}\n \\put(50,180){\\line(0,1){10}}\n \\put(43,182){$2$}\n \\put(53,182){$\\overline{3}$}\n %\n \\put(120,180){\\line(1,0){20}}\n \\put(140,180){\\line(0,1){10}}\n \\put(140,190){\\line(-1,0){20}}\n \\put(120,190){\\line(0,-1){10}}\n \\put(130,180){\\line(0,1){10}}\n \\put(123,182){$1$}\n \\put(133,182){$\\overline{2}$}\n \\put(112,183){$-$}\n \n \\put(-40,210){\\line(1,0){20}}\n \\put(-20,210){\\line(0,1){10}}\n \\put(-20,220){\\line(-1,0){20}}\n \\put(-40,220){\\line(0,-1){10}}\n \\put(-30,210){\\line(0,1){10}}\n \\put(-37,212){$3$}\n \\put(-27,212){$3$}\n %\n \\put(40,210){\\line(1,0){20}}\n \\put(60,210){\\line(0,1){10}}\n \\put(60,220){\\line(-1,0){20}}\n 
\\put(40,220){\\line(0,-1){10}}\n \\put(50,210){\\line(0,1){10}}\n \\put(43,212){$2$}\n \\put(53,212){$0$}\n %\n \\put(120,210){\\line(1,0){20}}\n \\put(140,210){\\line(0,1){10}}\n \\put(140,220){\\line(-1,0){20}}\n \\put(120,220){\\line(0,-1){10}}\n \\put(130,210){\\line(0,1){10}}\n \\put(123,212){$1$}\n \\put(133,212){$\\overline{3}$}\n \\put(112,213){$-$}\n \n \\put(0,240){\\line(1,0){20}}\n \\put(20,240){\\line(0,1){10}}\n \\put(20,250){\\line(-1,0){20}}\n \\put(0,250){\\line(0,-1){10}}\n \\put(10,240){\\line(0,1){10}}\n \\put(3,242){$2$}\n \\put(13,242){$3$}\n %\n \\put(80,240){\\line(1,0){20}}\n \\put(100,240){\\line(0,1){10}}\n \\put(100,250){\\line(-1,0){20}}\n \\put(80,250){\\line(0,-1){10}}\n \\put(90,240){\\line(0,1){10}}\n \\put(83,242){$1$}\n \\put(93,242){$0$}\n \\put(72,243){$-$}\n \n \\put(0,270){\\line(1,0){20}}\n \\put(20,270){\\line(0,1){10}}\n \\put(20,280){\\line(-1,0){20}}\n \\put(0,280){\\line(0,-1){10}}\n \\put(10,270){\\line(0,1){10}}\n \\put(3,272){$2$}\n \\put(13,272){$2$}\n %\n \\put(80,270){\\line(1,0){20}}\n \\put(100,270){\\line(0,1){10}}\n \\put(100,280){\\line(-1,0){20}}\n \\put(80,280){\\line(0,-1){10}}\n \\put(90,270){\\line(0,1){10}}\n \\put(83,272){$1$}\n \\put(93,272){$3$}\n \\put(72,273){$-$}\n \n \\put(40,300){\\line(1,0){20}}\n \\put(60,300){\\line(0,1){10}}\n \\put(60,310){\\line(-1,0){20}}\n \\put(40,310){\\line(0,-1){10}}\n \\put(50,300){\\line(0,1){10}}\n \\put(43,302){$1$}\n \\put(53,302){$2$}\n \\put(32,303){$-$}\n \n \n \\put(10,58){\\vector(0,-1){16}}\n \\put(12,49){\\scriptsize{$(2,0)$}}\n \n \\put(90,58){\\vector(0,-1){16}}\n \\put(92,49){\\scriptsize{$(3,1)$}}\n \n \\put(50,118){\\vector(0,-1){16}}\n \\put(52,109){\\scriptsize{$(3,0)$}}\n \n \\put(50,148){\\vector(0,-1){16}}\n \\put(52,139){\\scriptsize{$(2,1)$}}\n \n \\put(50,178){\\vector(0,-1){16}}\n \\put(52,169){\\scriptsize{$(2,-2)$}}\n \n \\put(50,208){\\vector(0,-1){16}}\n \\put(52,199){\\scriptsize{$(3,-1)$}}\n \n \\put(-30,118){\\vector(0,-1){16}}\n 
\\put(-28,109){\\scriptsize{$(3,1)$}}\n \n \\put(-30,148){\\vector(0,-1){16}}\n \\put(-28,139){\\scriptsize{$(3,0)$}}\n \n \\put(-30,178){\\vector(0,-1){16}}\n \\put(-28,169){\\scriptsize{$(3,-1)$}}\n \n \\put(-30,208){\\vector(0,-1){16}}\n \\put(-28,199){\\scriptsize{$(3,-2)$}}\n \n \\put(130,118){\\vector(0,-1){16}}\n \\put(132,109){\\scriptsize{$(2,1)$}}\n \n \\put(130,148){\\vector(0,-1){16}}\n \\put(132,139){\\scriptsize{$(1,2)$}}\n \n \\put(130,178){\\vector(0,-1){16}}\n \\put(132,169){\\scriptsize{$(1,-3)$}}\n \n \\put(130,208){\\vector(0,-1){16}}\n \\put(132,199){\\scriptsize{$(2,-2)$}}\n \n \\put(10,268){\\vector(0,-1){16}}\n \\put(12,259){\\scriptsize{$(2,-1)$}}\n \n \\put(90,268){\\vector(0,-1){16}}\n \\put(92,259){\\scriptsize{$(3,-2)$}}\n \n \\put(22,28){\\vector(1,-1){16}}\n \\put(31,21){\\scriptsize{$(1,-3)$}}\n \n \\put(-18,88){\\vector(1,-1){16}}\n \\put(-9,81){\\scriptsize{$(2,-2)$}}\n %\n \\put(62,88){\\vector(1,-1){16}}\n \\put(71,81){\\scriptsize{$(1,-3)$}}\n \n \\put(22,238){\\vector(1,-1){16}}\n \\put(31,231){\\scriptsize{$(3,-2)$}}\n %\n \\put(102,238){\\vector(1,-1){16}}\n \\put(111,231){\\scriptsize{$(3,-1)$}}\n \n \\put(62,298){\\vector(1,-1){16}}\n \\put(71,291){\\scriptsize{$(2,-1)$}}\n \n \\put(78,28){\\vector(-1,-1){16}}\n \\put(56,21){\\scriptsize{$(2,0)$}}\n \n \\put(118,88){\\vector(-1,-1){16}}\n \\put(96,81){\\scriptsize{$(3,0)$}}\n %\n \\put(38,88){\\vector(-1,-1){16}}\n \\put(16,81){\\scriptsize{$(3,1)$}}\n \n \\put(-2,238){\\vector(-1,-1){16}}\n \\put(-23,232){\\scriptsize{$(2,1)$}}\n %\n \\put(78,238){\\vector(-1,-1){16}}\n \\put(57,232){\\scriptsize{$(1,2)$}}\n \n \\put(38,298){\\vector(-1,-1){16}}\n \\put(17,292){\\scriptsize{$(1,2)$}}\n \n \\put(22,59){\\vector(3,-1){55}}\n \\put(46,54){\\scriptsize{$(1,-3)$}}\n \n \\put(-18,119){\\vector(3,-1){55}}\n \\put(6,114){\\scriptsize{$(2,-2)$}}\n \n \\put(62,119){\\vector(3,-1){55}}\n \\put(86,114){\\scriptsize{$(1,-3)$}}\n \n \\put(-18,149){\\vector(3,-1){55}}\n 
\\put(6,144){\\scriptsize{$(2,-2)$}}\n \n \\put(62,149){\\vector(3,-1){55}}\n \\put(86,144){\\scriptsize{$(1,-3)$}}\n \n \\put(38,179){\\vector(-3,-1){55}}\n \\put(6,176){\\scriptsize{$(2,1)$}}\n \n \\put(118,179){\\vector(-3,-1){55}}\n \\put(86,176){\\scriptsize{$(1,2)$}}\n \n \\put(38,209){\\vector(-3,-1){55}}\n \\put(6,206){\\scriptsize{$(2,1)$}}\n \n \\put(118,209){\\vector(-3,-1){55}}\n \\put(86,206){\\scriptsize{$(1,2)$}}\n \n \\put(78,269){\\vector(-3,-1){55}}\n \\put(46,266){\\scriptsize{$(1,2)$}}\n \n \\end{picture}\n \\end{center}\n \\caption{The Bethe-strap structure of \n ${\\cal T}_{2}(u)$ (\\ref{t21-ex}) for $B(2|1)$: \n The topmost tableau corresponds to the {\\em top term}.}\n \\label{best3}\n\\end{figure}\n \nWe can prove (see Appendix A.1-A.3) \nthe following Theorem, which\n is essential in the analytic Bethe ansatz.\n\\begin{theorem} \\label{th-tate}\nFor \n$a\\in {\\bf Z}_{\\ge 0}$, ${\\cal T}^{a}(u)$ \n((\\ref{Tge1}) for $\\lambda =\\phi$, $\\mu =(1^{a})$) \nis free of poles under the condition that\nthe BAEs {\\rm (\\ref{BAE})-(\\ref{BAE4})} are valid\n\\footnote{\nWe consider the case that \nthe solutions $\\{ u_{j}^{(a)} \\}$ of the \nBAEs (\\ref{BAE})-(\\ref{BAE4}) \n have a \\symbol{96}generic' distribution: \nWe assume that \n$u_{i}^{(a)}-u_{j}^{(a)} \\ne (\\alpha_{a}|\\alpha_{a})$\n for any $i,j \\in \\{1,2,\\dots , N_{a}\\} $ \n and $a\\in \\{1,2,\\dots, s+r \\}$ ($i \\ne j$)\n in the BAEs (\\ref{BAE})-(\\ref{BAE4}). \nMoreover, we assume that the color $b$ pole \n(see Appendix A.1-A.3) of ${\\cal T}^{a}(u)$ \n and the color $c$ pole \n do not coincide with each other if $b \\ne c$. \n We will need separate \n consideration for the case where this assumption does not hold. 
\n We also note that a similar assumption was made in \n \\cite{T1,T2,T3,T4}.\n}.\n\\end{theorem}\nIn proving Theorem \\ref{th-tate}, we use the following lemmas.\n\\begin{lemma} \\label{le-tate}\nFor $r \\in {\\bf Z}_{\\ge 2}$ and $b\\in \\{s+1,s+2, \\dots,s+r-1\\}$, \n\\begin{eqnarray}\n \\begin{array}{|c|l}\\cline{1-1}\n b & _{v} \\\\ \\cline{1-1} \n b+1 & _{v-2} \\\\ \\cline{1-1}\n \\end{array}\n ,\\qquad \n \\begin{array}{|c|l}\\cline{1-1}\n \\stackrel{\\ }{\\overline{b+1}} & _{v} \\\\ \\cline{1-1} \n \\stackrel{\\ }{\\overline{b}} & _{v-2} \\\\ \\cline{1-1}\n \\end{array}\n\\end{eqnarray}\ndo not contain $Q_{b}$.\n\\end{lemma}\nFor the $B(0|s)$ case, we use the following lemma: \n\\begin{lemma} \\label{le-yoko}\nFor $b\\in \\{1,2, \\dots,s-1\\}$, \n\\begin{eqnarray}\n \\begin{array}{|c|c|}\n \\multicolumn{2}{c}{\\quad} \\\\ \\hline \n b & b+1 \\\\ \\hline \n \\multicolumn{1}{c}{^{u}} & \n \\multicolumn{1}{c}{^{u+2}}\n \\end{array}\n ,\\qquad \n \\begin{array}{|c|c|}\n \\multicolumn{2}{c}{\\quad} \\\\ \\hline \n \\overline{b+1} & \\stackrel{\\ }{\\overline{b}} \\\\ \\hline \n \\multicolumn{1}{c}{^{u}} & \n \\multicolumn{1}{c}{^{u+2}}\n \\end{array} \n\\end{eqnarray}\ndo not contain $Q_{b}$, and \n\\begin{eqnarray}\n \\begin{array}{|c|c|c|}\n \\multicolumn{3}{c}{\\quad} \\\\ \\hline \n s & 0 & \\overline{s} \\\\ \\hline \n \\multicolumn{1}{c}{^{u}} & \n \\multicolumn{1}{c}{^{u+2}} &\n \\multicolumn{1}{c}{^{u+4}}\n \\end{array}\n\\end{eqnarray}\ndoes not contain $Q_{s}$.\n\\end{lemma}\nThen, owing to the relation (\\ref{Jacobi-Trudi1}), \n$ {\\cal T}_{\\lambda \\subset \\mu}(u)$ for \n$B(r|s)$ is also free of poles under \nthe condition that the BAEs (\\ref{BAE})-(\\ref{BAE4})\n are valid. \n Similarly, \n owing to the relation (\\ref{Jacobi-Trudi}), \n$ {\\cal T}_{m}(u)$ for \n$D(r|s)$ is also free of poles under \nthe condition that the BAE (\\ref{BAE})\n is valid.\n\\setcounter{equation}{0}\n\\section{Functional relations among DVFs}\nNow we introduce the 
functional relations among DVFs.\n For the $B(r|s)$ case, the following relation follows from \n the determinant formulae \n (\\ref{Jacobi-Trudi1}) or (\\ref{Jacobi-Trudi2}). \n\\begin{eqnarray}\n{\\cal T}_{m}^{a}(u-1){\\cal T}_{m}^{a}(u+1)\n={\\cal T}_{m-1}^{a}(u){\\cal T}_{m+1}^{a}(u)+\n{\\cal T}_{m}^{a-1}(u){\\cal T}_{m}^{a+1}(u) ,\n\\label{hirota}\n\\end{eqnarray}\nwhere $m,a \\in {\\bf Z}_{\\ge 1}$; \n${\\cal T}_{m}^{0}(u)={\\cal T}_{0}^{a}(u)=1$. \nThis functional relation (\\ref{hirota}) is a Hirota bilinear \ndifference equation \\cite{H} and \ncan be proved by using the Jacobi identity. \n There is a constraint on (\\ref{hirota}), \nwhich follows from the relation \n(cf. \\cite{DM,MRi} for the $sl(r|s)$ case): \\\\ \n {\\em ${\\cal T}_{\\lambda \\subset \\mu}(u) =0$ if \n $\\lambda \\subset \\mu$ contains an $a \\times m$ rectangular subdiagram \n ($a$: the number of rows, $m$: the number of columns) with \n$m \\in {\\bf Z}_{\\ge 2s+2}$ and $a \\in {\\bf Z}_{\\ge 2r+1}$.} \\\\ \nIn particular, we have \n\\begin{eqnarray}\n{\\cal T}_{m}^{a}(u)=0 \\qquad {\\rm if } \\quad \n m \\in {\\bf Z}_{\\ge 2s+2} \\quad {\\rm and}\n \\quad a \\in {\\bf Z}_{\\ge 2r+1}.\n\\label{cons}\n\\end{eqnarray}\nWe also note that the determinant formula (\\ref{Jacobi-Trudi}) \n for $D(r|s)$ reduces to the following functional relation:\n\\begin{equation}\n{\\cal T}^{1}(u-1){\\cal T}^{1}(u+1)\n={\\cal T}_{2}(u)+{\\cal T}^{2}(u) ,\n\\end{equation}\nif we set $m=2$.\n\nFrom now on in this section, we consider only the $B(0|s)$ \n($s \\in {\\bf Z}_{\\ge 1}$) case. 
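Relations of the Hirota type (\ref{hirota}) are instances of the Desnanot-Jacobi (Dodgson condensation) determinant identity applied to Jacobi-Trudi-type determinants, so they can be checked numerically. The sketch below is illustrative only: `f` is an arbitrary stand-in for a one-row function, and the index convention used in `T` is an assumption made for this example, not the paper's (\ref{Jacobi-Trudi1}).

```python
from itertools import permutations

def det(rows):
    """Exact integer determinant by Leibniz expansion (fine for tiny sizes)."""
    n = len(rows)
    total = 0
    for p in permutations(range(n)):
        sign = (-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        prod = sign
        for i in range(n):
            prod *= rows[i][p[i]]
        total += prod
    return total  # the empty (0 x 0) determinant comes out as 1

def f(k, x):
    # Arbitrary stand-in for a one-row function; the bilinear identity
    # below is a consequence of Desnanot-Jacobi, so any f works.
    return k * k * x + 3 * k + x * x + 7

def T(a, m, u):
    # Jacobi-Trudi-type determinant det_{1<=i,j<=a} f(m+i-j, u+a-i-j+1)
    # (hypothetical index convention, chosen for illustration); T(0,m,u) = 1.
    return det([[f(m + i - j, u + a - i - j + 1)
                 for j in range(1, a + 1)] for i in range(1, a + 1)])

# Hirota bilinear relation, as in (hirota):
# T(a,m,u-1) T(a,m,u+1) = T(a,m-1,u) T(a,m+1,u) + T(a-1,m,u) T(a+1,m,u)
for a in range(1, 4):
    for m in range(0, 4):
        for u in range(-2, 3):
            lhs = T(a, m, u - 1) * T(a, m, u + 1)
            rhs = (T(a, m - 1, u) * T(a, m + 1, u)
                   + T(a - 1, m, u) * T(a + 1, m, u))
            assert lhs == rhs
```

All six determinants in the relation arise as minors of a single $(a+1)\times(a+1)$ matrix, which is why the Desnanot-Jacobi identity yields the bilinear relation for any choice of `f`.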
\nNow we redefine the function ${\\cal T}_{\\lambda \\subset \\mu}(u)$\n as follows:\n\\begin{eqnarray}\n{\\cal T}_{\\lambda \\subset \\mu}(u):=\n{\\cal T}_{\\lambda \\subset \\mu}(u)\/\n\\{\n\\prod_{j=1}^{\\mu_{1}^{\\prime}}\n{\\cal F}_{\\mu_{j}-\\lambda_{j}}\n(u-\\mu_{1}+\\mu_{1}^{\\prime}+\\mu_{j}+\\lambda_{j}-2j+1)\n\\}, \n\\end{eqnarray}\nwhere \n\\begin{eqnarray}\n && {\\cal F}_{m}(u)=\n\\prod_{j=1}^{m-1}\\phi(u-m+2j+1)\\phi(u-2s-m+2j-2) \\nonumber \\\\ \n && \\hspace{160pt} {\\rm for} \\quad m \\in {\\bf Z}_{\\ge 2}, \n\\end{eqnarray}\nand \n\\begin{eqnarray} \n{\\cal F}_{1}(u)=1, \\qquad \n{\\cal F}_{0}(u)=\\{\\phi(u+1)\\phi(u-2s-2)\\}^{-1}. \n\\end{eqnarray}\nIn particular, we have \n\\begin{eqnarray}\n{\\cal T}_{m}(u)&:=&{\\cal T}_{m}(u)\/{\\cal F}_{m}(u), \\\\ \n{\\cal T}_{0}^{a}(u)&=&\\prod_{j=1}^{a} {\\cal T}_{0}(u+a-2j+1)\n \\nonumber \\\\ \n &=& \\prod_{j=1}^{a} \\phi(u+a-2j+2)\\phi(u+a-2j-2s-1). \n\\end{eqnarray}\nThere is a remarkable duality for ${\\cal T}_{m}(u)$. \n\\begin{theorem}\\label{dual-th}\nFor any $m \\in \\{0,1,\\dots,2s+1 \\}$, we have \n\\begin{eqnarray}\n {\\cal T}_{m}(u)={\\cal T}_{2s-m+1}(u).\n \\label{dual1}\n\\end{eqnarray}\n\\end{theorem}\n{\\em Outline of the proof}: \nFirst, we consider the case\n where the vacuum parts are formally trivial. \nIn proving the relation (\\ref{dual1}), we use the following relations, \nwhich can be verified by direct computation. 
\n\\begin{eqnarray}\n \\begin{array}{|c|}\n \\multicolumn{1}{c}{\\quad} \\\\ \\hline \n \\overline{a} \\\\ \\hline \n \\multicolumn{1}{c}{^{u}}\n \\end{array}\n & \\times &\n \\begin{array}{|c|c|c|c|}\n \\multicolumn{4}{c}{\\quad} \\\\ \\hline \n 1 & 2 & \\cdots \\cdots & a \\\\ \\hline \n \\multicolumn{1}{c}{^{u-2s-1}} & \n \\multicolumn{1}{c}{^{u-2s+1}} &\n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{^{u+2a-2s-3}}\n \\end{array}\n \\nonumber \\\\ \n &=&\n \\begin{array}{|c|c|c|c|}\n \\multicolumn{4}{c}{\\quad} \\\\ \\hline \n 1 & 2 & \\cdots \\cdots & a-1 \\\\ \\hline \n \\multicolumn{1}{c}{^{u-2s+1}} & \n \\multicolumn{1}{c}{^{u-2s+3}} &\n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{^{u+2a-2s-3}}\n \\end{array}\n \\label{modi1},\n\\end{eqnarray} \n\\begin{eqnarray}\n \\begin{array}{|c|}\n \\multicolumn{1}{c}{\\quad} \\\\ \\hline \n a \\\\ \\hline \n \\multicolumn{1}{c}{^{u}}\n \\end{array}\n & \\times &\n \\begin{array}{|c|c|c|c|}\n \\multicolumn{4}{c}{\\quad} \\\\ \\hline \n \\stackrel{\\ }{\\overline{a}} & \n \\cdots \\cdots & \\overline{2} & \n \\stackrel{\\ }{\\overline{1}} \\\\ \\hline \n \\multicolumn{1}{c}{^{u-2a+2s+3}} & \n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{^{u+2s-1}} &\n \\multicolumn{1}{c}{^{u+2s+1}}\n \\end{array}\n \\nonumber \\\\ \n &=&\n \\begin{array}{|c|c|c|c|}\n \\multicolumn{4}{c}{\\quad} \\\\ \\hline \n \\stackrel{\\ }{\\overline{a-1}} & \n \\cdots \\cdots & \\overline{2} & \n \\stackrel{\\ }{\\overline{1}} \\\\ \\hline \n \\multicolumn{1}{c}{^{u-2a+2s+3}} & \n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{^{u+2s-3}} &\n \\multicolumn{1}{c}{^{u+2s-1}}\n \\end{array}\n \\label{modi}\n\\end{eqnarray} \nand \n\\begin{eqnarray}\n\\begin{array}{|c|c|c|c|c|c|c|c|c|}\n \\multicolumn{9}{c}{\\quad} \\\\ \\hline \n 1 & 2 & \\cdots \\cdots & s & 0 & \n \\stackrel{\\ }{\\overline{s}} & \\cdots \\cdots & \n \\overline{2} & \\stackrel{\\ }{\\overline{1}} \\\\ \\hline \n \\multicolumn{1}{c}{^{u-2s}} & \n \\multicolumn{1}{c}{^{u-2s+2}} &\n 
\\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{^{u-2}} &\n \\multicolumn{1}{c}{^{u}} &\n \\multicolumn{1}{c}{^{u+2}} &\n \\multicolumn{1}{c}{} & \n \\multicolumn{1}{c}{^{u+2s-2}} &\n \\multicolumn{1}{c}{^{u+2s}}\n\\end{array}\n =1 \\label{const},\n\\end{eqnarray}\nwhere $a \\in \\{1,2,\\dots, s+1 \\}$ \n\\footnote{Here we assume \n$\\framebox{$s+1$}=\\framebox{$\\overline{s+1}$}=\\framebox{$0$}$.}; \n the spectral parameter increases (cf. (\\ref{Tge1})) \n as we go from left to right in each tableau. \nWe will show that any term in ${\\cal T}_{m}(u)$ coincides with \na term in ${\\cal T}_{2s-m+1}(u)$. \nWe will consider the signs originating from \nthe grading parameter (\\ref{grading}) separately. \nAny term in ${\\cal T}_{m}(u)$ can be expressed by a tableau \n$b \\in B((m^{1}))$ such that \n$b(1,k)=i_{k}$ for $1\\le k \\le \\alpha $ \n($1 \\preceq i_{1} \\prec \\cdots \\prec i_{\\alpha} \\preceq s$); \n$b(1,k)=\\overline{j_{m-k+1}}$ for $\\alpha +1\\le k \\le m $\n ($0 \\preceq \\overline{j_{m-\\alpha}}\n \\prec \\cdots \\prec \\overline{j_{1}} \\preceq \\overline{1}$); \n $\\alpha \\in {\\bf Z}$. 
\nThe term corresponding to this tableau is given as follows\n\\footnote{(\\ref{mod0}) is $1$ if $m=0$.}: \n\\begin{eqnarray}\n\\hspace{-20pt} && \\begin{array}{|c|c|c|c|c|c|}\n \\multicolumn{6}{c}{\\quad} \\\\ \\hline \n i_{1} & \\cdots \\cdots & i_{\\alpha} & \n \\stackrel{\\ }{\\overline{j_{m-\\alpha}}} & \n \\cdots \\cdots & \\overline{j_{1}} \n \\\\ \\hline \n \\multicolumn{1}{c}{^{u-m+1}} & \n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{^{u-m+2\\alpha -1}} &\n \\multicolumn{1}{c}{^{u-m+2\\alpha +1}} & \n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{^{u+m-1}}\n \\end{array}\n \\label{mod0} \\\\ \n \\hspace{-20pt} &=&\n \\begin{array}{|c|c|c|c|c|c|}\n \\multicolumn{6}{c}{\\quad} \\\\ \\hline \n i_{1} & \\cdots \\cdots & i_{\\alpha} & \n \\stackrel{\\ }{\\overline{j_{m-\\alpha}}} & \n \\cdots \\cdots & \\overline{j_{1}} \\\\ \\hline \n \\multicolumn{1}{c}{^{u-m+1}} & \n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{^{u-m+2\\alpha -1}} &\n \\multicolumn{1}{c}{^{u-m+2\\alpha +1}} & \n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{^{u+m-1}}\n \\end{array}\n \\nonumber \\\\ \n \\hspace{-20pt} && \\times \n \\begin{array}{|c|c|c|c|c|c|c|c|c|}\n \\multicolumn{9}{c}{\\quad} \\\\ \\hline \n 1 & 2 & \\cdots \\cdots & s & 0 & \n \\overline{s} & \\cdots \\cdots & \n \\stackrel{\\ }{\\overline{2}} & \\overline{1} \\\\ \\hline \n \\multicolumn{1}{c}{^{u-m+2\\alpha -2s}} & \n \\multicolumn{7}{c}{} &\n \\multicolumn{1}{c}{^{u-m+2\\alpha +2s}}\n \\end{array}\n \\label{mod2} \\\\ \n \\hspace{-20pt} &=& \n \\begin{array}{|c|c|c|}\n \\multicolumn{3}{c}{\\quad} \\\\ \\hline \n i_{1} & \\cdots \\cdots & i_{\\alpha-1} \\\\ \\hline \n \\multicolumn{1}{c}{^{u-m+1}} & \n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{^{u-m+2\\alpha -3}} \n \\end{array}\n \\times \n \\begin{array}{|c|c|c|}\n \\multicolumn{3}{c}{\\quad} \\\\ \\hline \n \\stackrel{\\ }{\\overline{j_{m-\\alpha-1}}} & \n \\cdots \\cdots & \\overline{j_{1}} \\\\ \\hline \n \\multicolumn{1}{c}{^{u-m+2\\alpha +3}} & \n 
\\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{^{u+m-1}}\n \\end{array}\n \\nonumber \\\\ \n \\hspace{-20pt} && \\times \n \\begin{array}{|c|}\n \\multicolumn{1}{c}{\\quad} \\\\ \\hline \n \\stackrel{\\ }{\\overline{j_{m-\\alpha}}} \\\\ \\hline \n \\multicolumn{1}{c}{^{u-m+2\\alpha +1}}\n \\end{array}\n \\times \n \\begin{array}{|c|c|c|c|}\n \\multicolumn{4}{c}{\\quad} \\\\ \\hline \n 1 & 2 & \\cdots \\cdots & j_{m-\\alpha} \\\\ \\hline \n \\multicolumn{1}{c}{^{u-m+2\\alpha-2s}} & \n \\multicolumn{2}{c}{} &\n \\multicolumn{1}{c}{^{u-m+2\\alpha -2s+2j_{m-\\alpha}-2}} \n \\end{array}\n \\nonumber \\\\ \n \\hspace{-20pt} && \\times \n \\begin{array}{|c|c|c|c|c|c|c|}\n \\multicolumn{7}{c}{\\quad} \\\\ \\hline \n j_{m-\\alpha}+1 & \\cdots \\cdots & s & 0 & \\overline{s} &\n \\cdots \\cdots & \n \\stackrel{\\ }{\\overline{i_{\\alpha}+1}} \\\\ \\hline \n \\multicolumn{1}{c}{^{u-m+2\\alpha-2s+2j_{m-\\alpha}}} & \n \\multicolumn{5}{c}{} &\n \\multicolumn{1}{c}{^{u-m+2\\alpha +2s-2i_{\\alpha}}} \n \\end{array}\n \\nonumber \\\\ \n \\hspace{-20pt} && \\times \n \\begin{array}{|c|}\n \\multicolumn{1}{c}{\\quad} \\\\ \\hline \n i_{\\alpha} \\\\ \\hline \n \\multicolumn{1}{c}{^{u-m+2\\alpha -1}}\n \\end{array}\n \\times \n \\begin{array}{|c|c|c|c|}\n \\multicolumn{4}{c}{\\quad} \\\\ \\hline \n \\stackrel{\\ }{\\overline{i_{\\alpha}}} & \n \\cdots \\cdots & \\overline{2} & \\overline{1} \\\\ \\hline \n \\multicolumn{1}{c}{^{u-m+2\\alpha-2i_{\\alpha}+2s+2}} & \n \\multicolumn{2}{c}{} &\n \\multicolumn{1}{c}{^{u-m+2\\alpha +2s}} \n \\end{array}\n \\label{mod3} \\\\ \n \\hspace{-20pt} &=& \n \\begin{array}{|c|c|c|}\n \\multicolumn{3}{c}{\\quad} \\\\ \\hline \n i_{1} & \\cdots \\cdots & i_{\\alpha-1} \\\\ \\hline \n \\multicolumn{1}{c}{^{u-m+1}} & \n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{^{u-m+2\\alpha -3}} \n \\end{array}\n \\times \n \\begin{array}{|c|c|c|}\n \\multicolumn{3}{c}{\\quad} \\\\ \\hline \n \\stackrel{\\ }{\\overline{j_{m-\\alpha-1}}} & \n \\cdots \\cdots & 
\\overline{j_{1}} \\\\ \\hline \n \\multicolumn{1}{c}{^{u-m+2\\alpha +3}} & \n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{^{u+m-1}}\n \\end{array}\n \\nonumber \\\\ \n \\hspace{-20pt} && \n \\times \n \\begin{array}{|c|c|c|c|}\n \\multicolumn{4}{c}{\\quad} \\\\ \\hline \n 1 & 2 & \\cdots \\cdots & j_{m-\\alpha}-1 \\\\ \\hline \n \\multicolumn{1}{c}{^{u-m+2\\alpha-2s+2}} & \n \\multicolumn{2}{c}{} &\n \\multicolumn{1}{c}{^{u-m+2\\alpha -2s+2j_{m-\\alpha}-2}} \n \\end{array}\n \\nonumber \\\\ \n \\hspace{-20pt} && \\times \n \\begin{array}{|c|c|c|c|c|c|c|}\n \\multicolumn{7}{c}{\\quad} \\\\ \\hline \n j_{m-\\alpha}+1 & \\cdots \\cdots & s & 0 & \\overline{s} &\n \\cdots \\cdots & \n \\stackrel{\\ }{\\overline{i_{\\alpha}+1}} \\\\ \\hline \n \\multicolumn{1}{c}{^{u-m+2\\alpha-2s+2j_{m-\\alpha}}} & \n \\multicolumn{5}{c}{} &\n \\multicolumn{1}{c}{^{u-m+2\\alpha +2s-2i_{\\alpha}}} \n \\end{array}\n \\nonumber \\\\ \n \\hspace{-20pt} && \\times \n \\begin{array}{|c|c|c|c|}\n \\multicolumn{4}{c}{\\quad} \\\\ \\hline \n \\stackrel{\\ }{\\overline{i_{\\alpha}-1}} & \n \\cdots \\cdots & \\overline{2} &\n \\overline{1} \\\\ \\hline \n \\multicolumn{1}{c}{^{u-m+2\\alpha-2i_{\\alpha}+2s+2}} & \n \\multicolumn{2}{c}{} &\n \\multicolumn{1}{c}{^{u-m+2\\alpha +2s-2}} \n \\end{array}\n \\label{mod4} \\\\\n \\hspace{-20pt} &=& \n \\cdots = \\begin{array}{|c|c|c|c|c|c|}\n \\multicolumn{6}{c}{\\quad} \\\\ \\hline \n J_{1} & \\cdots \\cdots & J_{s+1-m+\\alpha} & \n \\stackrel{\\ }{\\overline{I_{s-\\alpha}}} & \n \\cdots \\cdots & \\overline{I_{1}} \\\\ \\hline \n \\multicolumn{1}{c}{^{u+m-2s}} & \n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{^{u-m+2\\alpha }} &\n \\multicolumn{1}{c}{^{u-m+2\\alpha +2}} & \n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{^{u-m+2s}}\n \\end{array},\n\\label{mod5}\n\\end{eqnarray}\nwhere $\\{J_{k}\\}= \\{1,2,\\dots, s,0 \\} \n\\setminus \\{j_{1},j_{2},\\dots,j_{m-\\alpha}\\}$\n ($1 \\preceq J_{1} \\prec \\cdots \\prec J_{s+1+\\alpha -m} \\preceq 0$); 
\n$\\{\\overline{I_{k}}\\}= \\{\\overline{s},\\overline{s-1},\\dots, \\overline{1} \\}\n \\setminus \\{\\overline{i_{\\alpha}},\\overline{i_{\\alpha-1}},\n \\dots,\\overline{i_{1}} \\}$\n ($\\overline{s} \\preceq \\overline{I_{s-\\alpha}}\n \\prec \\cdots \\prec \\overline{I_{1}} \\preceq \\overline{1}$). \n (\\ref{mod2}) follows from (\\ref{const}); \n (\\ref{mod4}) follows from (\\ref{modi1}) and (\\ref{modi}). \n After repeating procedures similar to \n (\\ref{mod3})-(\\ref{mod4}), we obtain (\\ref{mod5}). \n Clearly, (\\ref{mod5}) is a term in ${\\cal T}_{2s-m+1}(u)$. \n Conversely, one can also show that any term in \n ${\\cal T}_{2s-m+1}(u)$ coincides \n with a term in ${\\cal T}_{m}(u)$. \n\nNoting the relation \n\\begin{eqnarray}\n\\hspace{-30pt}&&\\sum_{k=1}^{s+1-m+\\alpha}p(J_{k})+\n \\sum_{k=1}^{s-\\alpha}p(\\overline{I_{k}}) \n \\nonumber \\\\ \n\\hspace{-30pt}&& \\hspace{60pt}\n=\n\\left(\n \\sum_{k=1}^{s}p(k)+p(0)-\\sum_{k=1}^{m-\\alpha}p(j_{k})\n\\right)+\n\\left(\n \\sum_{k=1}^{s}p(\\overline{k})-\n \\sum_{k=1}^{\\alpha}p(\\overline{i_{k}})\n\\right) \\nonumber \\\\ \n\\hspace{-30pt}&& \\hspace{60pt}=\n2s-\\sum_{k=1}^{m-\\alpha}p(j_{k})\n-\\sum_{k=1}^{\\alpha}p(\\overline{i_{k}}) \n \\nonumber \\\\ \n\\hspace{-30pt}&& \\hspace{60pt} \n \\equiv \n\\sum_{k=1}^{m-\\alpha}p(j_{k})\n+\\sum_{k=1}^{\\alpha}p(\\overline{i_{k}}) \n\\quad {\\rm mod} \\quad 2, \n\\end{eqnarray}\nwe find that \n the overall sign for (\\ref{mod0}) coincides with that for (\\ref{mod5}). \n\nFinally, we comment on the vacuum parts. \nFrom now on, we assume that the vacuum parts are not trivial. 
\nEquivalence between the dress parts of ${\\cal T}_{m}(u)$ \nand ${\\cal T}_{2s-m+1}(u)$ has already been shown, \nso we have only to check that \nthe vacuum part of \n\\begin{eqnarray}\n \\begin{array}{|c|c|c|c|c|c|}\n \\multicolumn{6}{c}{\\quad} \\\\ \\hline \n i_{1} & \\cdots \\cdots & i_{\\alpha} & \n \\stackrel{\\ }{\\overline{j_{m-\\alpha}}} & \n \\cdots \\cdots & \\overline{j_{1}} \n \\\\ \\hline \n \\multicolumn{1}{c}{^{u-m+1}} & \n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{^{u-m+2\\alpha -1}} &\n \\multicolumn{1}{c}{^{u-m+2\\alpha +1}} & \n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{^{u+m-1}}\n \\end{array}\n \\ \/ {\\cal F}_{m}(u) \n\\end{eqnarray}\nis equivalent to that of \n\\begin{eqnarray}\n\\hspace{-10pt}\n\\begin{array}{|c|c|c|c|c|c|}\n \\multicolumn{6}{c}{\\quad} \\\\ \\hline \n J_{1} & \\cdots \\cdots & J_{s+1-m+\\alpha} & \n \\stackrel{\\ }{\\overline{I_{s-\\alpha}}} & \n \\cdots \\cdots & \\overline{I_{1}} \n \\\\ \\hline \n \\multicolumn{1}{c}{^{u+m-2s}} & \n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{^{u-m+2\\alpha }} &\n \\multicolumn{1}{c}{^{u-m+2\\alpha +2}} & \n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{^{u-m+2s}}\n \\end{array}\n \\ \/ {\\cal F}_{2s-m+1}(u). \n\\end{eqnarray}\nAll we have to do is to check this \nby direct computation for the following \nfour cases: \n(i) $i_{1}=1$ and $\\overline{j_{1}}=\\overline{1}$ \n($1 \\prec J_{1}$ and $\\overline{I_{1}} \\prec \\overline{1}$); \n(ii) $i_{1}=1$ and $\\overline{j_{1}} \\prec \\overline{1}$ \n($J_{1}=1$ and $\\overline{I_{1}} \\prec \\overline{1}$);\n(iii) $1 \\prec i_{1}$ and $\\overline{j_{1}}=\\overline{1}$ \n($1 \\prec J_{1}$ and $\\overline{I_{1}}=\\overline{1}$);\n(iv) $1 \\prec i_{1}$ and $\\overline{j_{1}} \\prec \\overline{1}$ \n($J_{1}=1$ and $\\overline{I_{1}}=\\overline{1}$). 
\n\\rule{5pt}{10pt} \\\\ \n Owing to the relation (\\ref{Jacobi-Trudi2}), \nwe can generalize the relation (\\ref{dual1}) to \n\\begin{eqnarray}\n {\\cal T}_{m}^{a}(u)={\\cal T}_{2s-m+1}^{a}(u) \n \\label{dual2} ,\n\\end{eqnarray}\nwhere $a \\in {\\bf Z}_{\\ge 1}$. \nTaking note of the relations (\\ref{dual2}) and (\\ref{cons}), \n we shall rewrite the functional relation (\\ref{hirota}) in a\n \\symbol{96}canonical' form, \nlike the original $T$-system for simple Lie algebras \\cite{KNS1}. \nSet \n$T_{m}^{(a)}(u)={\\cal T}_{a}^{m}(u)$, \n$T_{2m}^{(s)}(u)={\\cal T}_{s}^{m}(u)$ and \n$T_{0}^{(a)}(u)=T_{0}^{(s)}(u)=T_{m}^{(0)}(u)=1$\n for $a \\in \\{1,2,\\dots, s-1\\}$ and $m \\in {\\bf Z}_{\\ge 1}$, \nwhere the subscript $(n,a)$ of $T_{n}^{(a)}(u)$ \ncorresponds to the Kac-Dynkin label $[b_{1}, b_{2},\\dots ,b_{s}]$ \n for $b_{i}=n\\delta_{i a}$ (cf. (\\ref{Kac-Dynkin-b0s})).\n Then we have \n\\begin{eqnarray}\n\\hspace{-40pt} \n&& T_{m}^{(a)}(u-1)T_{m}^{(a)}(u+1)\n=T_{m-1}^{(a)}(u)T_{m+1}^{(a)}(u)+\ng_{m}^{(a)}(u)T_{m}^{(a-1)}(u)T_{m}^{(a+1)}(u) \\nonumber \\\\ \n\\hspace{-40pt}\n&& \\hspace{190pt} {\\rm for } \\qquad a \\in \\{1,2,\\dots,s-2\\}, \n\\label{T-sys1}\\\\\n\\hspace{-40pt}\n && T_{m}^{(s-1)}(u-1)T_{m}^{(s-1)}(u+1)\n=T_{m-1}^{(s-1)}(u)T_{m+1}^{(s-1)}(u) \\nonumber \\\\ \n\\hspace{-40pt}\n&& \\hspace{160pt} \n+g_{m}^{(s-1)}(u)T_{m}^{(s-2)}(u)T_{2m}^{(s)}(u) ,\\\\\n\\hspace{-40pt}\n&& T_{2m}^{(s)}(u-1)T_{2m}^{(s)}(u+1)\n=T_{2m-2}^{(s)}(u)T_{2m+2}^{(s)}(u)+\ng_{2m}^{(s)}(u)T_{m}^{(s-1)}(u)T_{2m}^{(s)}(u), \\label{T-sys3}\n\\end{eqnarray}\nwhere $g_{m}^{(b)}(u)=\n\\{\\prod_{j=1}^{m}{\\cal T}_{0}(u+2j-m-1)\\}^{\\delta_{b,1}}$ \nif $s\\in {\\bf Z}_{\\ge 2}$; \n$g_{2m}^{(1)}(u)=\\prod_{j=1}^{m}{\\cal T}_{0}(u+2j-m-1)$ \nif $s=1$. 
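Because $g_{m}^{(b)}(u)$ is a sliding product of ${\cal T}_{0}$, shifted products of $g$ telescope into a bilinear identity of their own. The following sketch checks this numerically; the function `h` is an arbitrary, hypothetical stand-in for ${\cal T}_{0}$, and the identity holds for any such choice.

```python
def h(u):
    # Arbitrary stand-in for the vacuum function T_0(u); the identity
    # below telescopes, so any integer-valued h works.
    return u * u + 5 * u + 11

def g(m, u):
    # g_m(u) = prod_{j=1..m} h(u + 2j - m - 1), a sliding product of h
    # centered at u (g(0, u) is the empty product, i.e. 1).
    prod = 1
    for j in range(1, m + 1):
        prod *= h(u + 2 * j - m - 1)
    return prod

# sliding-product identity: g(m, u+1) g(m, u-1) = g(m+1, u) g(m-1, u)
for m in range(1, 6):
    for u in range(-3, 4):
        assert g(m, u + 1) * g(m, u - 1) == g(m + 1, u) * g(m - 1, u)
```

Both sides reduce to $h(u+m)\,h(u-m)\,\bigl[\prod_{j=1}^{m-1} h(u+2j-m)\bigr]^{2}$, which is why the check never fails.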
\nNote that the function $g_{m}^{(b)}(u)$ obeys the following relations \n\\begin{eqnarray}\n&& g_{m}^{(b)}(u+1)g_{m}^{(b)}(u-1)=\ng_{m+1}^{(b)}(u)g_{m-1}^{(b)}(u) \\quad {\\rm if } \\quad \n s \\in {\\bf Z}_{\\ge 2}, \\nonumber \\\\\n&& g_{2m}^{(1)}(u+1)g_{2m}^{(1)}(u-1)=\ng_{2m+2}^{(1)}(u)g_{2m-2}^{(1)}(u) \\quad {\\rm if } \\quad s=1.\n\\end{eqnarray} \nThese functional relations (\\ref{T-sys1})-(\\ref{T-sys3}) \nmay be regarded as the $B(0|s)$ version of \nthe $T$-system. Note that the subscript $n$ of \n$T_{n}^{(s)}(u)$ can take only even values (cf. (\\ref{finite})). \nBy construction, $T_{m}^{(a)}(u)$ can be expressed \nas a determinant of a \nmatrix whose elements involve only the fundamental functions \n$T_{1}^{(1)}$, \\dots, $T_{1}^{(s-1)}$, $T_{2}^{(s)}$\n and $g_{1}^{(1)}$ for $s\\in {\\bf Z}_{\\ge 2}$; \n $T_{2}^{(1)}$ and $g_{2}^{(1)}$ for $s=1$. \n This can be summarized as follows:\n\\begin{theorem}\nFor $m\\in {\\bf Z}_{\\ge 1}$, \n\\begin{eqnarray}\n&& T_{m}^{(a)}(u)= \n {\\rm det}_{1\\le i,j \\le m}({\\cal T}_{a+i-j}(u+m-i-j+1)) \n\\nonumber \\\\ && \\hspace{150pt}\n{\\rm for} {\\quad} a \\in \\{1,2, \\dots ,s-1\\}, \\\\\n&& T_{2m}^{(s)}(u)=\n {\\rm det}_{1\\le i,j \\le m}({\\cal T}_{s+i-j}(u+m-i-j+1)) \n\\end{eqnarray}\nsolves (\\ref{T-sys1})-(\\ref{T-sys3}). \nHere ${\\cal T}_{a}(u)$ obeys the relation (\\ref{dual1}) and \nthe boundary condition \n\\begin{eqnarray}\n{\\cal T}_{a}(u)=\\left\\{\n\\begin{array}{lll}\n 0 &{\\rm if} & a <0, \\\\ \n g_{1}^{(1)}(u)& {\\rm if} & \n a=0 \\quad {\\rm and} \\quad s \\in {\\bf Z}_{\\ge 2}\\\\\n g_{2}^{(1)}(u)& {\\rm if} & \n a=0 \\quad {\\rm and} \\quad s=1 \\\\ \n T_{1}^{(a)}(u) & {\\rm if} & a \\in \\{1,2, \\dots ,s-1\\} \\\\ \n T_{2}^{(s)}(u) & {\\rm if} & a=s ,\n\\end{array}\n\\right. \n\\end{eqnarray}\nwhere \n$g_{m}^{(a)}(u)=\\{ \\prod_{j=1}^{m}g_{1}^{(1)}\n (u+2j-m-1)\\}^{\\delta_{a,1}}$ \n if $s \\in {\\bf Z}_{\\ge 2}$; \n$g_{2m}^{(1)}(u)=\\prod_{j=1}^{m}g_{2}^{(1)}(u+2j-m-1)$ \n if $ s=1 $. 
\n\\end{theorem}\n{\\em Remark}: \nThese functional relations (\\ref{T-sys1})-(\\ref{T-sys3}) \nresemble the ones for $A_{2s}^{(2)}$ \\cite{KS2}. \nThis resemblance presumably originates from the resemblance between \n$B(0|s)^{(1)}$ and $A_{2s}^{(2)}$. \n\nThere is a remarkable relation between \nthe number ${\\cal N}_{m}^{(a)}$ of the terms in \n$T_{m}^{(a)}(u)$ and the dimensionality (\\ref{dim}) \nof the Lie superalgebra $B(0|s)$. \n We conjecture that they are related to each other as follows: \n\\begin{eqnarray}\n {\\cal N}_{m}^{(a)}&=& \\sum \n {\\rm dim}V[k_{1},k_{2},\\dots, k_{a},0,\\dots,0]\n \\quad {\\rm if } \\quad a \\in \\{1,2,\\dots, s-1 \\}, \\nonumber \\\\ \n{\\cal N}_{2m}^{(s)}&=& \\sum {\\rm dim}V[k_{1},k_{2},\\dots, k_{s-1},2k_{s}]\n ,\n\\end{eqnarray}\nwhere the summation is taken over non-negative \n integers $\\{ k_{j} \\}$ \nsuch that $k_{1}+k_{2}+\\cdots +k_{a} \\le m $ and \n$k_{j} \\equiv m \\delta_{j a}$ mod $2$. \n\\begin{table}\n\\begin{center}\n \\begin{tabular}{ccccc} \\hline \n $m$ & 1 & 2 & 3 & 4 \\\\ \\hline \n ${\\cal N}_{m}^{(1)}$ & 5 & 15 & 35 & 70 \\\\ \n ${\\cal N}_{2m}^{(2)}$ & 10 & 50 & 175 & 490 \\\\ \\hline \n \\end{tabular} \n\\end{center}\n\\caption{The number ${\\cal N}_{m}^{(a)}$ of the terms \n in $T_{m}^{(a)}(u)$ for $B(0|2)$.}\n\\label{num-osp14} \n\\end{table}\n\\begin{table}\n\\begin{center}\n \\begin{tabular}{cccc} \\hline \n $[b_{1},b_{2}]$ & ${\\rm dim}V[b_{1},b_{2}]$ & \n $[b_{1},b_{2}]$ & ${\\rm dim}V[b_{1},b_{2}]$ \\\\ \\hline\n 0 0 & 1 & 0 2 & 10 \\\\ \n 1 0 & 5 & 0 4 & 35 \\\\ \n 2 0 & 14 & 0 6 & 84 \\\\ \n 3 0 & 30 & 2 2 & 81 \\\\ \\hline \n \\end{tabular} \n\\end{center}\n\\caption{The dimensionality of the module \n $V[b_{1},b_{2}]$ for $B(0|2)$.}\n\\label{dim-osp14} \n\\end{table}\nFor example, for the $B(0|2)$ case, we have \n(cf. 
Table \\ref{num-osp14} and Table \\ref{dim-osp14}): \n\\begin{eqnarray}\n {\\cal N}_{1}^{(1)}&=& {\\rm dim}V[1,0], \\nonumber \\\\ \n {\\cal N}_{2}^{(1)}&=& {\\rm dim}V[2,0]+{\\rm dim}V[0,0], \\nonumber \\\\ \n {\\cal N}_{3}^{(1)}&=& {\\rm dim}V[3,0]+{\\rm dim}V[1,0], \\nonumber \\\\ \n {\\cal N}_{2}^{(2)}&=& {\\rm dim}V[0,2], \\nonumber \\\\ \n {\\cal N}_{4}^{(2)}&=& {\\rm dim}V[0,4]+{\\rm dim}V[2,0]\n +{\\rm dim}V[0,0], \\nonumber \\\\ \n {\\cal N}_{6}^{(2)}&=& {\\rm dim}V[0,6]+{\\rm dim}V[2,2]\n +{\\rm dim}V[0,2]. \n\\end{eqnarray}\nThese relations seem to suggest a \nsuperization of the Kirillov-Reshetikhin formula \n\\cite{KR}, which gives the multiplicity of occurrence of \nthe irreducible representations of the Lie superalgebra \nin the Yangian module. \n\\setcounter{equation}{0}\n\\section{Summary and discussion}\nIn this paper, we have carried out an analytic Bethe ansatz \n based on the Bethe ansatz equations (\\ref{BAE})-(\\ref{BAE4})\n with the distinguished\n simple root systems of the type II Lie superalgebras \n $B(r|s)$ and $D(r|s)$. We have proposed \n eigenvalue formulae of transfer matrices in \n DVFs related to a class of tensor-like representations, \n and shown their pole-freeness under \n the BAEs (\\ref{BAE})-(\\ref{BAE4}). \n The key is the top term hypothesis \n and the pole-freeness under the BAE. \n A class of functional relations has been proposed for the DVFs. \n In particular, for the $B(0|s)$ case, \n a remarkable duality among DVFs was found. \n By using this, a complete set of functional relations \n is written down for the DVFs labeled by rectangular Young \n (super) diagrams. \nTo the author's knowledge, this paper is the first \n attempt to construct a {\\em systematic} theory \n of the analytic Bethe ansatz \n related to fusion $B(r|s)$ and $D(r|s)$ vertex models. \n \n In the present paper, we have executed an analytic Bethe ansatz \n only for tensor-like representations. 
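As an aside, the conjectured identities for $B(0|2)$ quoted earlier can be checked mechanically against Tables \ref{num-osp14} and \ref{dim-osp14}. The following short script (illustrative only, not part of the paper's LaTeX source) encodes the two tables verbatim and verifies each quoted identity by plain arithmetic:

```python
# Numerical check of the conjectured identities N_m^(a) = sum of dim V[...]
# for B(0|2), using only the values listed in Tables 1 and 2 of the text.

# dim V[b1, b2] for B(0|2), transcribed from Table 2
dim = {(0, 0): 1, (1, 0): 5, (2, 0): 14, (3, 0): 30,
       (0, 2): 10, (0, 4): 35, (0, 6): 84, (2, 2): 81}

# N_m^(1) and N_{2m}^(2), transcribed from Table 1
N1 = {1: 5, 2: 15, 3: 35, 4: 70}
N2 = {2: 10, 4: 50, 6: 175}

# The six example identities quoted in the text
assert N1[1] == dim[(1, 0)]
assert N1[2] == dim[(2, 0)] + dim[(0, 0)]
assert N1[3] == dim[(3, 0)] + dim[(1, 0)]
assert N2[2] == dim[(0, 2)]
assert N2[4] == dim[(0, 4)] + dim[(2, 0)] + dim[(0, 0)]
assert N2[6] == dim[(0, 6)] + dim[(2, 2)] + dim[(0, 2)]
print("all quoted identities hold")  # → all quoted identities hold
```

All six identities hold with the tabulated values, consistent with the conjecture.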
\n As for spinorial representations, details are under investigation. \n For example, in relation to the \n 64-dimensional typical representation of \n $B(2|1)$, \n we have confirmed that the Bethe-strap generated by \n the following top term\n\\footnote{Here we omit the vacuum part.}\n\\begin{eqnarray}\n\\frac{Q_{1}(u+\\frac{5}{2})Q_{3}(u-\\frac{1}{2})}\n {Q_{1}(u-\\frac{5}{2})Q_{3}(u+\\frac{1}{2})},\n\\end{eqnarray}\nwhich carries the $B(2|1)$ weight with the Kac-Dynkin label \n$(\\frac{5}{2},0,1)$, consists of 64 terms. \n\nFor the $D(r|s)$ case, we have proposed DVFs labeled by \n Young (super) diagrams with one row or one column. \n It is tempting to extend these DVFs to \n general Young (super) diagrams. \n However, this will be a difficult task \n since we lack tableaux sum expressions of \n DVFs labeled by general\n Young diagrams even for the non-superalgebra \n $D_{r}$ case \\cite{KS1}. \n One way to bypass cumbersome tableaux sum expressions \n is to construct a complete set of transfer matrix \n functional relations (a hierarchy of the $T$-system). \n By solving it, we will be able to calculate DVFs.\n\nIt is an interesting problem to derive TBA equations \nfrom our $T$-system (\\ref{T-sys1})-(\\ref{T-sys3}). \nThis can be accomplished by a procedure similar to that for the \n $sl(r|s)$ case \\cite{JKS} (see also \\cite{FK99}). \n We will report on this in \n the near future.\n\\\\\n{\\bf Acknowledgment} \\\\ \nThe author would like to thank Prof. A. Kuniba for encouragement. \nThis work is supported by a Grant-in-Aid for \nJSPS Research Fellows from the Ministry of Education, Science and \nCulture of Japan. \n\\setcounter{equation}{0}\n\\renewcommand{\\theequation}{A.1.\\arabic{equation}}\n\\section*{Appendix A.1 Outline of the proof of Theorem \n\\ref{th-tate}: $B(r|s)$ ($r,s \\in {\\bf Z}_{\\ge 1}$) case}\nFor simplicity, we assume that the vacuum parts \nare formally trivial from now on. 
\nWe prove that ${\\cal T}^{a}(u)$ \nis free of color $b$ poles, that is, \n$Res_{u=u_{k}^{(b)}+\\cdots}{\\cal T}^{a}(u)=0$ for any \n $ b \\in \\{1,2,\\dots, s+r \\} $\n under the condition that the BAE (\\ref{BAE}) is valid. \n The function $\\framebox{$c$}_{u}$ \n (\\ref{z+}) with $c \\in J $ has \n color $b$ poles only for $c=b$, $b+1$, \n$\\overline{b+1}$ or $\\overline{b}$ if \n$b\\in \\{1,2,\\dots,s+r-1 \\} $; \nfor $c=s+r$, $0$ or $\\overline{s+r}$ if $b=s+r$, \nso we shall trace only \n\\framebox{$b$}, \\framebox{$b+1$}, \n\\framebox{$\\overline{b+1}$} or \\framebox{$\\overline{b}$}\n for $b\\in\\{1,2,\\dots, s+r-1 \\}$; \n\\framebox{$s+r$}, \\framebox{$0$} \nor \\framebox{$\\overline{s+r}$} for $b=s+r$.\n Let $S_{k}$ be the partial sum of ${\\cal T}^{a}(u)$, \n which contains \n$k$ boxes among \\framebox{$b$}, \\framebox{$b+1$}, \n\\framebox{$\\overline{b+1}$} or \\framebox{$\\overline{b}$}\n for $b \\in \\{1,2,\\dots, s+r-1\\}$; \n\\framebox{$s+r$}, \\framebox{$0$} \nor \\framebox{$\\overline{s+r}$} for $b=s+r$.\n Evidently, $S_{0}$ does not have \ncolor $b$ poles. \n\n Now we examine $S_{1}$ which is a summation \n of the tableaux (with sign) of the form \n\\begin{equation} \n\\begin{array}{|c|}\\hline\n \\xi \\\\ \\hline \n \\eta \\\\ \\hline \n \\zeta \\\\ \\hline\n\\end{array},\n\\end{equation}\nwhere \\framebox{$\\xi$} and \\framebox{$\\zeta$} are columns with \ntotal length $a-1$ and they do not involve $Q_{b}$. \n\\framebox{$\\eta$} is \\framebox{$b$}, \\framebox{$b+1$}, \n\\framebox{$\\overline{b+1}$} or \\framebox{$\\overline{b}$}\n for $b \\in \\{1,2,\\dots, s+r-1\\}$; \n\\framebox{$s+r$}, \\framebox{$0$} \nor \\framebox{$\\overline{s+r}$} for $b=s+r$. \n Thanks to the relations (\\ref{res1})-(\\ref{res8}), \n $S_{1}$ is free of color $b$ poles under the BAE (\\ref{BAE}). \n Hereafter we consider $S_{k}$ for $k\\ge 2$. 
\\\\ \n$\\bullet$ \nThe case\n\\footnote{This is void for $B(r|1)$ case.}\n $b \\in \\{1,2,\\dots,s-1 \\}$: \n$S_{k} (k\\ge 2)$ is a \nsummation of the tableaux (with sign) of the form \n\\begin{eqnarray}\n&& \n\\sum_{n_{1}=0}^{k_{1}}\n\\sum_{n_{2}=0}^{k_{2}} \n\\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline\n E_{1n_{1}} \\\\ \\hline \n \\eta \\\\ \\hline \n E_{2n_{2}} \\\\ \\hline \n \\zeta \\\\ \\hline \n\\end{array}\n \\nonumber \\\\ \n&& =\n\\left(\n\\sum_{n_{1}=0}^{k_{1}} \n\\begin{array}{|c|}\\hline \n E_{1n_{1}} \\\\ \\hline \n\\end{array}\n\\right)\n\\left(\n\\sum_{n_{2}=0}^{k_{2}}\n\\begin{array}{|c|}\\hline \n E_{2n_{2}} \\\\ \\hline \n\\end{array}\n\\right)\n\\times \n\\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline \n\\end{array}\n\\times \n\\begin{array}{|c|}\\hline \n \\eta \\\\ \\hline \n\\end{array}\n\\times \n\\begin{array}{|c|}\\hline \n \\zeta \\\\ \\hline \n\\end{array}\n\\label{tableauxk2}, \n\\end{eqnarray} \nwhere \\framebox{$\\xi$}, \\framebox{$\\eta$} and \n\\framebox{$\\zeta$} are columns with total \nlength $a-k$, which do not contain \\framebox{$b$}, \n\\framebox{$b+1$}, \\framebox{$\\overline{b+1}$} and \n\\framebox{$\\overline{b}$}; \n\\framebox{$E_{1n_{1}}$} is a column \n\\footnote{We assume that \n$\\framebox{$E_{10}$}=\n\\begin{array}{|c|l}\\cline{1-1} \n b+1 & _{v} \\\\ \\cline{1-1} \n \\vdots & \\\\ \\cline{1-1} \n b+1 & _{v-2k_{1}+2}\\\\ \\cline{1-1} \n \\end{array}\n$\n and \n $\\framebox{$E_{1 k_{1}}$}=\n\\begin{array}{|c|l}\\cline{1-1} \n b & _{v} \\\\ \\cline{1-1} \n \\vdots & \\\\ \\cline{1-1} \n b & _{v-2k_{1}+2}\\\\ \\cline{1-1} \n \\end{array}\n$. 
\\\\ \\ }\nof the form: \n\\begin{eqnarray}\n \\begin{array}{|c|l}\\cline{1-1} \n b & _v \\\\ \\cline{1-1} \n \\vdots & \\\\ \\cline{1-1} \n b & _{v-2n_{1}+2}\\\\ \\cline{1-1} \n b+1 & _{v-2n_{1}} \\\\ \\cline{1-1} \n \\vdots & \\\\ \\cline{1-1} \n b+1 & _{v-2k_{1}+2}\\\\ \\cline{1-1} \n \\end{array} \n&=&\\frac{Q_{b-1}(v-b+1-2n_{1})Q_{b}(v-b+2)}\n {Q_{b-1}(v-b+1)Q_{b}(v-b-2n_{1}+2)} \n\\label{tableauxk3} \\\\ \n&& \\times \n \\frac{Q_{b}(v-b-2k_{1})Q_{b+1}(v-b+1-2n_{1})}\n {Q_{b}(v-b-2n_{1})Q_{b+1}(v-b+1-2k_{1})},\n\\nonumber\n\\end{eqnarray}\nwhere $v=u+h_{1}$: $h_{1}$ is some shift parameter \nand \\framebox{$E_{2 n_{2}}$} is a column\n\\footnote{We assume that \n$\\framebox{$E_{20}$}=\n\\begin{array}{|c|l}\\cline{1-1} \n \\stackrel{\\ }{\\overline{b}} & _{w} \\\\ \\cline{1-1} \n \\vdots & \\\\ \\cline{1-1} \n \\stackrel{\\ }{\\overline{b}} & _{w-2k_{2}+2}\\\\ \\cline{1-1} \n \\end{array}\n$\n and \n $\\framebox{$E_{2 k_{2}}$}=\n\\begin{array}{|c|l}\\cline{1-1} \n \\stackrel{\\ }{\\overline{b+1}} & _{w} \\\\ \\cline{1-1} \n \\vdots & \\\\ \\cline{1-1} \n \\stackrel{\\ }{\\overline{b+1}} \n & _{w-2k_{2}+2}\\\\ \\cline{1-1} \n \\end{array}\n$.}\n of the form:\n\\begin{eqnarray}\n \\begin{array}{|c|l}\\cline{1-1} \n \\stackrel{\\ }{\\overline{b+1}} & _w \\\\ \\cline{1-1} \n \\vdots & \\\\ \\cline{1-1} \n \\stackrel{\\ }{\\overline{b+1}} & _{w-2n_{2}+2}\\\\ \\cline{1-1} \n \\stackrel{\\ }{\\overline{b}} & _{w-2n_{2}} \\\\ \\cline{1-1} \n \\vdots & \\\\ \\cline{1-1} \n \\stackrel{\\ }{\\overline{b}} & _{w-2k_{2}+2}\\\\ \\cline{1-1} \n \\end{array} \n&=&\\frac{Q_{b-1}(w-2s+2r+b-2n_{2})}\n {Q_{b-1}(w-2s+2r+b-2k_{2})} \n\\label{tableauxk5} \\\\ \n&& \\times \n\\frac{Q_{b}(w-2s+2r+b-2k_{2}-1)}{Q_{b}(w-2s+2r+b-2n_{2}-1)}\n\\nonumber \\\\\n&& \\times \n \\frac{Q_{b}(w-2s+2r+b+1)Q_{b+1}(w-2s+2r+b-2n_{2})}\n {Q_{b}(w-2s+2r+b-2n_{2}+1)Q_{b+1}(w-2s+2r+b)},\n\\nonumber\n\\end{eqnarray} \nwhere \n$w=u+h_{2}$: $h_{2}$ is some shift parameter; \n$k=k_{1}+k_{2}$\n\\footnote{We 
assume that $\\framebox{$E_{1 n_{1}}$}=1$\n (resp. $\\framebox{$E_{2 n_{2}}$}=1$) \nfor $k_{1}=0$ (resp. $k_{2}=0$). In this case, \n\\framebox{$E_{i n_{i}}$} does \nnot have poles.}. \n\nFor $b \\in \\{1,2,\\dots, s-1\\}$, \n\\framebox{$E_{1 n_{1}}$} has color $b$ poles at\n $u=-h_{1}+b+2n_{1}+u_{p}^{(b)}$ and \n $u=-h_{1}+b+2n_{1}-2+u_{p}^{(b)}$ \n for $1 \\le n_{1} \\le k_{1}-1$; at $u=-h_{1}+b+u_{p}^{(b)}$ \nfor $n_{1}=0$; at $u=-h_{1}+b+2k_{1}-2+u_{p}^{(b)}$\n for $n_{1}=k_{1}$\n \\footnote{\nWe assume that these poles at \n$u=-h_{1}+b+2n_{1}+u_{i}^{(b)}$\n and \n $u=-h_{1}+b+2n_{1}-2+u_{q}^{(b)}$ \n do not coincide with each other \n for any $i,q \\in \\{1,2,\\dots , N_{b}\\} $: \n namely $u_{i}^{(b)}-u_{q}^{(b)} \\ne 2$.}. \nThe color $b$ residues at \n$u=-h_{1}+b+2n_{1}+u_{p}^{(b)}$ \n in \\framebox{$E_{1 n_{1}}$} and \\framebox{$E_{1 \\> n_{1}+1}$}\n cancel each other under the BAE (\\ref{BAE}). \n Thus, under the BAE\n (\\ref{BAE}), $\\sum_{n_{1}=0}^{k_{1}}\\framebox{$E_{1 n_{1}}$}$ \n is free of color $b$ poles\n (see Figure \\ref{part-bs}).\n\n\\begin{figure}\n \\setlength{\\unitlength}{1.5pt}\n \\begin{center}\n \\begin{picture}(270,40) \n \\put(0,0){$\n \\framebox{$E_{1 0}$} \n \\stackrel{0}{\\longleftarrow }\n \\framebox{$E_{1 1}$} \n \\stackrel{2}{\\longleftarrow } \n \\cdots \n \\stackrel{2n-4}{\\longleftarrow }\n \\framebox{$E_{1 n-1}$} \n \\stackrel{2n-2}{\\longleftarrow }\n \\framebox{$E_{1 n}$} \n \\stackrel{2n}{\\longleftarrow } \n \\framebox{$E_{1 n+1}$}\n \\stackrel{2n+2}{\\longleftarrow } \n \\cdots \n \\stackrel{2k_{1}-2}{\\longleftarrow }\n \\framebox{$E_{1 k_{1}}$}\n $} \n \\end{picture}\n \\end{center}\n \\caption{Partial Bethe-strap structure of \n $ E_{1 n}$ \n for color $b$ poles ($1\\le b \\le s-1$): \n The number $n$ on the arrow denotes the common color $b$ pole \n $-h_{1}+b+n+u_{k}^{(b)}$ \n of the pair of the tableaux connected by the arrow. 
\n This common pole vanishes under the BAE (\\ref{BAE}).}\n \\label{part-bs}\n\\end{figure}\n \n \\framebox{$E_{2 n_{2}}$} has color $b$ poles at\n $u=-h_{2}+2s-2r-b+2n_{2}-1+u_{p}^{(b)}$ and \n $u=-h_{2}+2s-2r-b+2n_{2}+1+u_{p}^{(b)}$ \n for $1 \\le n_{2} \\le k_{2}-1$; at \n $u=-h_{2}+2s-2r-b+1+u_{p}^{(b)}$ \nfor $n_{2}=0$ ; at \n$u=-h_{2}+2s-2r-b+2k_{2}-1+u_{p}^{(b)}$\n for $n_{2}=k_{2}$. \n The color $b$ residues at \n$u=-h_{2}+2s-2r-b+2n_{2}+1+u_{p}^{(b)}$\n in \\framebox{$E_{2 n_{2}}$} and \\framebox{$E_{2, n_{2}+1}$}\n cancel each other under the BAE (\\ref{BAE}). \n Thus, under the BAE\n (\\ref{BAE}), $\\sum_{n_{2}=0}^{k_{2}}\\framebox{$E_{2, n_{2}}$}$ \n is free of color $b$ poles. \n So is $S_{k}$. \\\\ \n$\\bullet$ \nThe case $ b=s $ : $S_{k} (k\\ge 2)$ is a \nsummation of the tableaux (with sign) of the form \n\\begin{eqnarray}\n&& \\begin{array}{|c|}\\hline \n D_{11} \\\\ \\hline \n \\eta \\\\ \\hline \n D_{21} \\\\ \\hline\n\\end{array}\n-\\begin{array}{|c|}\\hline \n D_{11} \\\\ \\hline \n \\eta \\\\ \\hline \n D_{22} \\\\ \\hline \n\\end{array}\n-\\begin{array}{|c|}\\hline\n D_{12} \\\\ \\hline \n \\eta \\\\ \\hline \n D_{21} \\\\ \\hline\n\\end{array}\n+\\begin{array}{|c|}\\hline \n D_{12} \\\\ \\hline \n \\eta \\\\ \\hline \n D_{22} \\\\ \\hline \n\\end{array} \\nonumber \\\\ \n&&=(\n\\begin{array}{|c|}\\hline \n D_{11} \\\\ \\hline \n\\end{array}\n-\\begin{array}{|c|}\\hline \n D_{12} \\\\ \\hline \n\\end{array}\n)(\n\\begin{array}{|c|}\\hline \n D_{21} \\\\ \\hline \n\\end{array}\n-\\begin{array}{|c|}\\hline \n D_{22} \\\\ \\hline \n\\end{array}\n)\n\\begin{array}{|c|}\\hline \n \\eta \\\\ \\hline \n\\end{array}\n\\end{eqnarray} \nwhere \\framebox{$\\eta$} is a column with \nlength $a-k$, which does not contain \\framebox{$s$}, \n\\framebox{$s+1$}, \\framebox{$\\overline{s+1}$} and \n\\framebox{$\\overline{s}$}; \n\\framebox{$D_{11}$} is a column \n\\footnote{We assume that \n$\\framebox{$D_{11}$}=\\framebox{$s+1$}_{v}$ if $k_{1}=1$.}\nof the 
form: \n\\begin{equation}\n\\begin{array}{|c|l}\\cline{1-1}\n s & _v \\\\ \\cline{1-1} \n \\vdots & \\\\ \\cline{1-1}\n s & _{v-2k_{1}+4} \\\\ \\cline{1-1} \n s+1 & _{v-2k_{1}+2} \\\\ \\cline{1-1} \n\\end{array}\n= \\frac{Q_{s-1}(v-s-2k_{1}+3)Q_{s}(v-s+2)Q_{s+1}(v-s-2k_{1}+1)}\n {Q_{s-1}(v-s+1)Q_{s}(v-s-2k_{1}+2)Q_{s+1}(v-s-2k_{1}+3)}; \n\\label{tableauxk1-1}\n\\end{equation}\n\\framebox{$D_{12}$} is a column \nof the form: \n\\begin{equation}\n\\begin{array}{|c|l}\\cline{1-1}\n s & _v \\\\ \\cline{1-1} \n \\vdots & \\\\ \\cline{1-1} \n s & _{v-2k_{1}+4}\\\\ \\cline{1-1}\n s & _{v-2k_{1}+2}\\\\ \\cline{1-1} \n\\end{array}\n=\\frac{Q_{s-1}(v-s-2k_{1}+1)Q_{s}(v-s+2)}\n {Q_{s-1}(v-s+1)Q_{s}(v-s-2k_{1}+2)}\n\\label{tableauxk1-2}, \n\\end{equation}\nwhere $v=u+h_{1}$: $h_{1}$ is some shift parameter; \n\\framebox{$D_{21}$} is a column\n\\footnote{We assume that \n$\\framebox{$D_{21}$}=\\framebox{$\\overline{s+1}$}_{w}$\n if $k_{2}=1$.}\n of the form: \n\\begin{eqnarray}\n\\begin{array}{|c|l}\\cline{1-1}\n \\stackrel{\\ }{\\overline{s+1}} & _w \\\\ \\cline{1-1} \n \\stackrel{\\ }{\\overline{s}} & _{w-2} \\\\ \\cline{1-1}\n \\vdots & \\\\ \\cline{1-1} \n \\stackrel{\\ }{\\overline{s}} & _{w-2k_{2}+2} \\\\ \\cline{1-1} \n\\end{array}\n&=&\\frac{Q_{s-1}(w-s+2r-2)}{Q_{s-1}(w-s+2r-2k_{2})} \n\\label{tableauxk1-3} \\\\ \n&& \\times \n\\frac{Q_{s}(w-s+2r-2k_{2}-1)Q_{s+1}(w-s+2r)}\n {Q_{s}(w-s+2r-1)Q_{s+1}(w-s+2r-2)}; \n\\nonumber \n\\end{eqnarray}\n\\framebox{$D_{22}$} is a column of the form:\n\\begin{equation}\n\\begin{array}{|c|l}\\cline{1-1}\n \\stackrel{\\ }{\\overline{s}} & _w \\\\ \\cline{1-1} \n \\stackrel{\\ }{\\overline{s}} & _{w-2} \\\\ \\cline{1-1}\n \\vdots & \\\\ \\cline{1-1} \n \\stackrel{\\ }{\\overline{s}} & _{w-2k_{2}+2}\\\\ \\cline{1-1} \n\\end{array}\n=\\frac{Q_{s-1}(w-s+2r)Q_{s}(w-s+2r-2k_{2}-1)}\n {Q_{s-1}(w-s+2r-2k_{2})Q_{s}(w-s+2r-1)} \n \\label{tableauxk1-4}, \n\\end{equation}\nwhere $w=u+h_{2}$: $h_{2}$ is some shift parameter; \n$k=k_{1}+k_{2}$\n 
\\footnote{Here we discussed the case for \n $k_{1}\\ge 1 $ and $k_{2}\\ge 1 $; \n the case for $k_{1}=0 $ or $k_{2}=0 $ can be treated similarly.}. \nObviously, the color $b=s$ residues at $v=s+2k_{1}-2+u_{j}^{(s)}$ \n in (\\ref{tableauxk1-1}) and \n (\\ref{tableauxk1-2}) cancel each other\n under the BAE (\\ref{BAE}). \nAnd the color $b=s$ residues\n at $w=s-2r+1+u_{j}^{(s)}$ \n in (\\ref{tableauxk1-3}) and\n (\\ref{tableauxk1-4}) cancel each other\n under the BAE (\\ref{BAE}). \n Thus $S_{k}$ does not \n have color $s$ poles under the BAE (\\ref{BAE}). \\\\ \n$\\bullet$ \nThe case\n\\footnote{This is void for $B(1|s)$ case.}\n $b \\in \\{s+1,s+2,\\dots, s+r-1\\}$ : \nOwing to the admissibility conditions, \nwe have only to consider $S_{k}$ for \n$k=2,3,4$. \n\n$S_{2}$ is a \nsummation of the tableaux (with sign) of the form \n\\begin{eqnarray}\n&& \n\\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline\n b \\\\ \\hline \n b+1 \\\\ \\hline \n \\zeta^{\\prime} \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|}\\hline \n \\xi^{\\prime} \\\\ \\hline\n \\stackrel{\\ }{\\overline{b+1}} \\\\ \\hline\n \\stackrel{\\ }{\\overline{b}} \\\\ \\hline\n \\zeta \\\\ \\hline\n\\end{array}\n\\end{eqnarray}\nand \n\\begin{eqnarray}\n&& \n\\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline\n b \\\\ \\hline \n \\eta \\\\ \\hline \n \\stackrel{\\ }{\\overline{b}} \\\\ \\hline\n \\zeta \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline\n b \\\\ \\hline \n \\eta \\\\ \\hline \n \\stackrel{\\ }{\\overline{b+1}} \\\\ \\hline\n \\zeta \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline\n b+1 \\\\ \\hline \n \\eta \\\\ \\hline \n \\stackrel{\\ }{\\overline{b}} \\\\ \\hline\n \\zeta \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline\n b+1 \\\\ \\hline \n \\eta \\\\ \\hline \n \\stackrel{\\ }{\\overline{b+1}} \\\\ \\hline\n \\zeta \\\\ \\hline\n\\end{array}\n \\nonumber \\\\ \n&&=\n\\begin{array}{|c|}\\hline \n \\xi 
\\\\ \\hline \n\\end{array}\n(\n\\begin{array}{|c|}\\hline \n b \\\\ \\hline \n\\end{array}\n+\n\\begin{array}{|c|}\\hline \n b+1 \\\\ \\hline \n\\end{array}\n)\n\\begin{array}{|c|}\\hline \n \\eta \\\\ \\hline \n\\end{array}\n(\n\\begin{array}{|c|}\\hline \n \\stackrel{\\ }{\\overline{b}} \\\\ \\hline \n\\end{array}\n+\n\\begin{array}{|c|}\\hline \n \\stackrel{\\ }{\\overline{b+1}} \\\\ \\hline \n\\end{array}\n)\n\\begin{array}{|c|}\\hline \n \\zeta \\\\ \\hline \n\\end{array},\n\\end{eqnarray} \nwhere \n\\{ \\framebox{$\\xi$}, \\framebox{$\\eta$}, \\framebox{$\\zeta$} \\}, \n\\{ \\framebox{$\\xi$}, \\framebox{$\\zeta^{\\prime}$} \\} and \n\\{ \\framebox{$\\xi^{\\prime}$}, \\framebox{$\\zeta$} \\}\n are columns with total \nlength $a-2$, which do not contain \\framebox{$b$}, \n\\framebox{$b+1$}, \\framebox{$\\overline{b+1}$} and \n\\framebox{$\\overline{b}$}. \nThus, owing to Lemma \\ref{le-tate},\nthe relations (\\ref{res3}) and (\\ref{res6}), $S_{2}$ does not \n have color $b$ poles under the BAE (\\ref{BAE}). 
\n \n$S_{3}$ is a \nsummation of the tableaux (with sign) of the form \n\\begin{eqnarray}\n&& \n\\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline\n b \\\\ \\hline \n \\eta \\\\ \\hline \n \\stackrel{\\ }{\\overline{b+1}} \\\\ \\hline\n \\stackrel{\\ }{\\overline{b}} \\\\ \\hline\n \\zeta \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline\n b+1 \\\\ \\hline \n \\eta \\\\ \\hline \n \\stackrel{\\ }{\\overline{b+1}} \\\\ \\hline\n \\stackrel{\\ }{\\overline{b}} \\\\ \\hline\n \\zeta \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline\n b \\\\ \\hline\n b+1 \\\\ \\hline \n \\eta^{\\prime} \\\\ \\hline \n \\stackrel{\\ }{\\overline{b}} \\\\ \\hline\n \\zeta \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline\n b \\\\ \\hline\n b+1 \\\\ \\hline \n \\eta^{\\prime} \\\\ \\hline \n \\stackrel{\\ }{\\overline{b+1}} \\\\ \\hline\n \\zeta \\\\ \\hline\n\\end{array}\n,\n\\end{eqnarray} \nwhere \\{ \\framebox{$\\xi$}, \\framebox{$\\eta$}, \\framebox{$\\zeta$} \\} \n and \\{ \\framebox{$\\xi$}, \\framebox{$\\eta^{\\prime }$},\n \\framebox{$\\zeta$} \\} \nare columns with \ntotal length $a-3$, which do not contain \\framebox{$b$}, \n\\framebox{$b+1$}, \\framebox{$\\overline{b+1}$} and \n\\framebox{$\\overline{b}$}.\nThus, owing to the relations (\\ref{res3}), (\\ref{res6}) and \nLemma \\ref{le-tate}, $S_{3}$ does not \n have color $b$ poles under the BAE (\\ref{BAE}). 
\n\n$S_{4}$ is a \nsummation of the tableaux (with sign) of the form \n\\begin{eqnarray}\n&& \n\\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline\n b \\\\ \\hline \n b+1 \\\\ \\hline\n \\eta \\\\ \\hline \n \\stackrel{\\ }{\\overline{b+1}} \\\\ \\hline\n \\stackrel{\\ }{\\overline{b}} \\\\ \\hline\n \\zeta \\\\ \\hline\n\\end{array}\n,\n\\end{eqnarray} \nwhere \\framebox{$\\xi$}, \\framebox{$\\eta$} and \n\\framebox{$\\zeta$} are columns with \ntotal length $a-4$, which do not contain \\framebox{$b$}, \n\\framebox{$b+1$}, \\framebox{$\\overline{b+1}$} and \n\\framebox{$\\overline{b}$}.\nThus, owing to the \nLemma \\ref{le-tate}, $S_{4}$ does not \n have color $b$ poles under the BAE (\\ref{BAE}).\n \\\\ \n$\\bullet$ \nThe case $b=s+r$ : \n$S_{k}$ ($k\\ge 2$) is a \nsummation of the tableaux (with sign) of the form \n\\begin{eqnarray}\n\\begin{array}{|c|l}\\cline{1-1} \n \\xi & \\\\ \\cline{1-1}\n 0 & _{v} \\\\ \\cline{1-1} \n 0 & _{v-2} \\\\ \\cline{1-1}\n \\vdots & \\\\ \\cline{1-1}\n 0 & _{v-2k+4} \\\\ \\cline{1-1}\n 0 & _{v-2k+2} \\\\ \\cline{1-1}\n \\zeta & \\\\ \\cline{1-1}\n\\end{array}\n+\n\\begin{array}{|c|l}\\cline{1-1} \n \\xi & \\\\ \\cline{1-1}\n s+r & _{v} \\\\ \\cline{1-1} \n 0 & _{v-2} \\\\ \\cline{1-1}\n \\vdots & \\\\ \\cline{1-1}\n 0 & _{v-2k+4} \\\\ \\cline{1-1}\n 0 & _{v-2k+2} \\\\ \\cline{1-1}\n \\zeta & \\\\ \\cline{1-1}\n\\end{array}\n&+&\n\\begin{array}{|c|l}\\cline{1-1} \n \\xi & \\\\ \\cline{1-1}\n 0 & _{v} \\\\ \\cline{1-1} \n 0 & _{v-2} \\\\ \\cline{1-1}\n \\vdots & \\\\ \\cline{1-1}\n 0 & _{v-2k+4} \\\\ \\cline{1-1}\n \\stackrel{\\ }{\\overline{s+r}} & _{v-2k+2} \\\\ \\cline{1-1}\n \\zeta & \\\\ \\cline{1-1}\n\\end{array}\n+\n\\begin{array}{|c|l}\\cline{1-1} \n \\xi & \\\\ \\cline{1-1}\n s+r & _{v} \\\\ \\cline{1-1} \n 0 & _{v-2} \\\\ \\cline{1-1}\n \\vdots & \\\\ \\cline{1-1}\n 0 & _{v-2k+4} \\\\ \\cline{1-1}\n \\stackrel{\\ }{\\overline{s+r}} & _{v-2k+2} \\\\ \\cline{1-1}\n \\zeta & \\\\ \\cline{1-1}\n\\end{array}\n \\nonumber \\\\ \n&& 
=\nA(v)B(v) \\times \\framebox{$\\xi$} \\times \\framebox{$\\zeta$},\n\\end{eqnarray} \nwhere $v=u+h_{3}$: $h_{3}$ is some shift parameter; \n\\framebox{$\\xi$} and \n\\framebox{$\\zeta$} are columns with total \nlength $a-k$, which do not contain \\framebox{$s+r$}, \n\\framebox{$0$} and \n\\framebox{$\\overline{s+r}$};\n\\begin{eqnarray}\n A(v)&=&\\frac{Q_{s+r}(v-s+r+1)}{Q_{s+r}(v-s+r)} \n \\nonumber \\\\ \n && +\\frac{Q_{s+r-1}(v-s+r+1)Q_{s+r}(v-s+r-1)}\n {Q_{s+r-1}(v-s+r-1)Q_{s+r}(v-s+r)}, \\\\ \nB(v)&=& \\frac{Q_{s+r}(v-s+r-2k)}{Q_{s+r}(v-s+r-2k+1)} \n\\nonumber \\\\ \n && +\\frac{Q_{s+r-1}(v-s+r-2k)Q_{s+r}(v-s+r-2k+2)}\n {Q_{s+r-1}(v-s+r-2k+2)Q_{s+r}(v-s+r-2k+1)}.\\nonumber \n\\end{eqnarray}\nOne can check that $A(v)$ and $B(v)$ are free of color $s+r$\n poles under the BAE (\\ref{BAE}). \n Thus, $S_{k}$ does not \n have color $s+r$ poles under the BAE (\\ref{BAE}). \n \\rule{5pt}{10pt} \\\\ \n {\\em Remark}: There is another proof of\n Theorem \\ref{th-tate} based on the determinant formula \n (\\ref{Jacobi-Trudi2}): \n for $b \\in \\{ 1,2,\\dots, s-1 \\}$, we prove that \n ${\\cal T}_{m}(u)$ is free of color $b$ poles, \n and then the pole-freeness of ${\\cal T}^{a}(u)$ follows \n from (\\ref{Jacobi-Trudi2}); while \n for $b \\in \\{ s,s+1, \\dots, s+r \\}$, we prove that \n ${\\cal T}^{a}(u)$ is free of color $b$ poles in the \n same way as the above-mentioned proof. \n An advantage of this alternative proof \n is that we do not encounter an \n awkward expression like (\\ref{tableauxk2}). \n We note that a similar idea is also applicable to the \n $sl(r|s)$ case \\cite{T1,T2,T3}. 
\n\\setcounter{equation}{0}\n\\renewcommand{\\theequation}{A.2.\\arabic{equation}}\n\\section*{Appendix A.2 Outline of the proof of Theorem \n\\ref{th-tate}: $B(0|s)$ ($s \\in {\\bf Z}_{\\ge 1}$) case}\n We will show that ${\\cal T}_{m}(u)$ \nis free of color $b$ poles, namely, \n$Res_{u=u_{k}^{(b)}+\\cdots}{\\cal T}_{m}(u)=0$ for any \n $ b \\in \\{1,2,\\dots, s \\} $\n under the condition that the BAE \n (\\ref{BAE1})-(\\ref{BAE4}) is valid. \n The function $\\framebox{$c$}_{u}$ \n (\\ref{z+}) with $c \\in J $ has \n color $b$ poles only for $c=b$, $b+1$, \n$\\overline{b+1}$ or $\\overline{b}$ if \n$b\\in \\{1,2,\\dots,s-1 \\} $; \nfor $c=s$, $0$ or $\\overline{s}$ if $b=s$, \nso we shall trace only \n\\framebox{$b$}, \\framebox{$b+1$}, \n\\framebox{$\\overline{b+1}$} or \\framebox{$\\overline{b}$}\n for $b\\in\\{1,2,\\dots, s-1 \\}$; \n\\framebox{$s$}, \\framebox{$0$} \nor \\framebox{$\\overline{s}$} for $b=s$.\n Let $S_{k}$ be the partial sum of ${\\cal T}_{m}(u)$, \n which contains \n$k$ boxes among \\framebox{$b$}, \\framebox{$b+1$}, \n\\framebox{$\\overline{b+1}$} or \\framebox{$\\overline{b}$}\n for $b\\in\\{1,2,\\dots, s-1 \\}$; \n\\framebox{$s$}, \\framebox{$0$} \nor \\framebox{$\\overline{s}$} for $b=s$. \n Apparently, $S_{0}$ does not have \ncolor $b$ poles. \n\n Now we examine $S_{1}$, which is a summation \n of the tableaux (with sign) of the form \n\\begin{equation} \n\\begin{array}{|c|c|c|}\\hline\n \\xi & \\eta & \\zeta \n \\\\ \\hline \n\\end{array}\n\\end{equation}\nwhere \\framebox{$\\xi$} and \\framebox{$\\zeta$} are rows with \ntotal length $m-1$ and they do not involve $Q_{b}$. \n\\framebox{$\\eta$} is \\framebox{$b$}, \\framebox{$b+1$}, \n\\framebox{$\\overline{b+1}$} or \\framebox{$\\overline{b}$}\n for $b\\in\\{1,2,\\dots, s-1 \\}$; \n\\framebox{$s$}, \\framebox{$0$} \nor \\framebox{$\\overline{s}$} for $b=s$. 
\n Owing to the relations (\\ref{res1-b0s})-(\\ref{res4-b0s}), \n $S_{1}$ is free of color $b$ poles under the BAEs \n (\\ref{BAE1})-(\\ref{BAE4}). \n From now on, we consider $S_{k}$ for $k\\ge 2$. \\\\ \n$\\bullet$ \nThe case\n\\footnote{This is void for $B(0|1)$ case.}\n $b\\in\\{1,2,\\dots, s-1 \\}$: Owing to the admissibility\n conditions, \nwe have only to consider $S_{2}$, $S_{3}$ and $S_{4}$. \n\n$S_{2}$ is a \nsummation of the tableaux (with sign) of the form \n\\begin{eqnarray}\n \\begin{array}{|c|c|c|c|}\\hline \n \\xi & b & b+1 & \\eta^{\\prime} \\\\\n \\hline \n \\end{array}\n +\n \\begin{array}{|c|c|c|c|}\\hline \n \\xi^{\\prime} & \\stackrel{\\ }{\\overline{b+1}} & \n \\overline{b} & \\zeta\n \\\\ \\hline \n \\end{array}\n\\end{eqnarray}\nand \n\\begin{eqnarray}\n && \n \\begin{array}{|c|c|c|c|c|}\\hline \n \\xi & b & \\eta & \\stackrel{\\ }{\\overline{b}} & \\zeta\n \\\\ \\hline \n \\end{array}\n+\n\\begin{array}{|c|c|c|c|c|}\\hline \n \\xi & b & \\eta & \\stackrel{\\ }{\\overline{b+1}} & \\zeta\n \\\\ \\hline \n \\end{array} \n \\nonumber \\\\ \n&& +\n \\begin{array}{|c|c|c|c|c|}\\hline \n \\xi & b+1 & \\eta & \\stackrel{\\ }{\\overline{b}} & \\zeta\n \\\\ \\hline \n \\end{array}\n+\n \\begin{array}{|c|c|c|c|c|}\\hline \n \\xi & b+1 & \\eta & \\stackrel{\\ }{\\overline{b+1}} & \\zeta\n \\\\ \\hline \n \\end{array} \n \\nonumber \\\\ \n&& =\n \\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline \n \\end{array} \n (\n \\begin{array}{|c|}\\hline \n b \\\\ \\hline \n \\end{array}\n +\n \\begin{array}{|c|}\\hline \n b+1 \\\\ \\hline \n \\end{array}\n )\n \\begin{array}{|c|}\\hline \n \\eta \\\\ \\hline \n \\end{array}\n (\n \\begin{array}{|c|}\\hline \n \\stackrel{\\ }{\\overline{b}} \\\\ \\hline \n \\end{array}\n +\n \\begin{array}{|c|}\\hline \n \\stackrel{\\ }{\\overline{b+1}} \\\\ \\hline \n \\end{array}\n )\n \\begin{array}{|c|}\\hline \n \\zeta \\\\ \\hline \n \\end{array}\n\\end{eqnarray}\nwhere (\\framebox{$\\xi$}, 
\\framebox{$\\eta^{\\prime}$}),\n(\\framebox{$\\xi^{\\prime}$}, \\framebox{$\\zeta$}) and \n(\\framebox{$\\xi$}, \\framebox{$\\eta$}, \\framebox{$\\zeta$}) \n are rows with total \nlength $m-2$, which do not contain \\framebox{$b$}, \n\\framebox{$b+1$}, \\framebox{$\\overline{b+1}$} and \n\\framebox{$\\overline{b}$}. \nThus, owing to Lemma \\ref{le-yoko},\nthe relations (\\ref{res1-b0s}) and (\\ref{res4-b0s}),\n $S_{2}$ does not \n have color $b$ poles under the BAE (\\ref{BAE1}) and (\\ref{BAE2}). \n\n $S_{3}$ is a \nsummation of the tableaux (with sign) of the form \n\\begin{eqnarray}\n && \n \\begin{array}{|c|c|c|c|c|c|}\\hline \n \\xi & b & b+1 & \\eta & \\stackrel{\\ }{\\overline{b}} & \\zeta\n \\\\ \\hline \n \\end{array}\n+\n\\begin{array}{|c|c|c|c|c|c|}\\hline \n \\xi & b & b+1 & \\eta & \\stackrel{\\ }{\\overline{b+1}} & \\zeta\n \\\\ \\hline \n \\end{array} \n \\nonumber \\\\ \n&& =\n \\begin{array}{|c|c|c|c|}\\hline \n \\xi & b & b+1 & \\eta \\\\ \\hline \n \\end{array} \n (\n \\begin{array}{|c|}\\hline \n \\stackrel{\\ }{\\overline{b}} \\\\ \\hline \n \\end{array}\n +\n \\begin{array}{|c|}\\hline \n \\stackrel{\\ }{\\overline{b+1}} \\\\ \\hline \n \\end{array}\n )\n \\begin{array}{|c|}\\hline \n \\zeta \\\\ \\hline \n \\end{array}\n\\end{eqnarray}\nand \n\\begin{eqnarray}\n && \n \\begin{array}{|c|c|c|c|c|c|}\\hline \n \\xi & b & \\eta^{\\prime} & \\stackrel{\\ }{\\overline{b+1}} \n & \\overline{b} & \\zeta\n \\\\ \\hline \n \\end{array}\n+\n\\begin{array}{|c|c|c|c|c|c|}\\hline \n \\xi & b+1 & \\eta^{\\prime} & \\stackrel{\\ }{\\overline{b+1}}\n & \\overline{b} & \\zeta\n \\\\ \\hline \n \\end{array} \n \\nonumber \\\\ \n&& =\n \\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline \n \\end{array} \n (\n \\begin{array}{|c|}\\hline \n b \\\\ \\hline \n \\end{array}\n +\n \\begin{array}{|c|}\\hline \n b+1 \\\\ \\hline \n \\end{array}\n )\n \\begin{array}{|c|c|c|c|}\\hline \n \\eta^{\\prime} & \\stackrel{\\ }{\\overline{b+1}} \n & \\overline{b} & \\zeta \\\\ 
\\hline \n \\end{array}\n\\end{eqnarray}\nwhere \n(\\framebox{$\\xi $}, \\framebox{$\\eta$}, \\framebox{$\\zeta$}) and \n(\\framebox{$\\xi$}, \\framebox{$\\eta^{\\prime}$}, \\framebox{$\\zeta$}) \n are rows with total \nlength $m-3$, which do not contain \\framebox{$b$}, \n\\framebox{$b+1$}, \\framebox{$\\overline{b+1}$} and \n\\framebox{$\\overline{b}$}. \nThus, owing to Lemma \\ref{le-yoko},\nthe relations (\\ref{res1-b0s}) and (\\ref{res4-b0s}), $S_{3}$ does not \n have color $b$ poles under the BAE (\\ref{BAE1}) and (\\ref{BAE2}).\n\n$S_{4}$ is a \nsummation of the tableaux (with sign) of the form \n\\begin{eqnarray}\n && \n \\begin{array}{|c|c|c|c|c|c|c|}\\hline \n \\xi & b & b+1 & \\eta & \\stackrel{\\ }{\\overline{b+1}} \n & \\overline{b} & \\zeta\n \\\\ \\hline \n \\end{array}\n\\end{eqnarray}\nwhere \n \\framebox{$\\xi$}, \\framebox{$\\eta$} and \\framebox{$\\zeta$} \n are rows with total \nlength $m-4$, which do not contain \\framebox{$b$}, \n\\framebox{$b+1$}, \\framebox{$\\overline{b+1}$} and \n\\framebox{$\\overline{b}$}. \nThus, owing to Lemma \\ref{le-yoko}, $S_{4}$ does not \n have color $b$ poles. \n \n$\\bullet$ The case $ b=s $ : \nOwing to the admissibility\n conditions, \nwe have only to consider $S_{2}$ and $S_{3}$. 
\n\n$S_{2}$ is a \nsummation of the tableaux (with sign) of the form \n\\begin{eqnarray}\n \\hspace{-20pt} && \n \\begin{array}{|c|c|c|c|}\\hline \n \\xi & s & 0 & \\eta \n \\\\ \\hline \n \\end{array}\n =\\frac{Q_{s-1}(v-s-1)Q_{s}(v-s+3)}{Q_{s-1}(v-s+1)Q_{s}(v-s+1)}\n \\times \n \\begin{array}{|c|}\\hline \n \\xi \n \\\\ \\hline \n \\end{array}\n \\times \n \\begin{array}{|c|}\\hline \n \\eta \n \\\\ \\hline \n \\end{array},\n \\\\ \n \\hspace{-20pt} && \n \\begin{array}{|c|c|c|c|}\\hline \n \\xi & s & \\stackrel{\\ }{\\overline{s}} & \\eta \n \\\\ \\hline \n \\end{array}\n =\\frac{Q_{s-1}(v-s-1)Q_{s}(v-s+2)}{Q_{s-1}(v-s+1)Q_{s}(v-s)}\n \\nonumber \\\\ \n \\hspace{-20pt}&& \\hspace{70pt} \\times \n\\frac{Q_{s-1}(v-s+2)Q_{s}(v-s-1)}{Q_{s-1}(v-s)Q_{s}(v-s+1)}\n\\times \n \\begin{array}{|c|}\\hline \n \\xi \n \\\\ \\hline \n \\end{array}\n \\times \n \\begin{array}{|c|}\\hline \n \\eta \n \\\\ \\hline \n \\end{array}, \\\\\n \\hspace{-20pt} && \n \\begin{array}{|c|c|c|c|}\\hline \n \\xi & 0 & \\stackrel{\\ }{\\overline{s}} & \\eta \n \\\\ \\hline \n \\end{array}\n =\\frac{Q_{s-1}(v-s+2)Q_{s}(v-s-2)}{Q_{s-1}(v-s)Q_{s}(v-s)}\n \\times \n \\begin{array}{|c|}\\hline \n \\xi \n \\\\ \\hline \n \\end{array}\n \\times \n \\begin{array}{|c|}\\hline \n \\eta \n \\\\ \\hline \n \\end{array}, \n\\end{eqnarray}\nwhere \n \\framebox{$\\xi$} and \\framebox{$\\eta$} \n are rows with total length $m-2$, which do not contain \n\\framebox{$s$}, \\framebox{$0$} and \n\\framebox{$\\overline{s}$}; $v=u+h$: $h$ is a shift parameter. 
\n The color $s$ residues at $u=u_{k}^{(s)}+s-1-h$ of \n the functions \n$\n\\begin{array}{|c|c|c|c|}\\hline \n \\xi & s & 0 & \\eta \n \\\\ \\hline \n \\end{array}\n$ and \n$\n\\begin{array}{|c|c|c|c|}\\hline \n \\xi & s & \\stackrel{\\ }{\\overline{s}} & \\eta \n \\\\ \\hline \n \\end{array}\n$ \ncancel each other under the BAE (\\ref{BAE3}) or (\\ref{BAE4}).\nThe color $s$ residues at $u=u_{k}^{(s)}+s-h$ of \n the functions \n$\n\\begin{array}{|c|c|c|c|}\\hline \n \\xi & s & \\stackrel{\\ }{\\overline{s}} & \\eta \n \\\\ \\hline \n \\end{array}\n$ \nand \n$\n \\begin{array}{|c|c|c|c|}\\hline \n \\xi & 0 & \\stackrel{\\ }{\\overline{s}} & \\eta \n \\\\ \\hline \n \\end{array}\n$\ncancel each other under the BAE (\\ref{BAE3}) or (\\ref{BAE4}). \nThus, $S_{2}$ does not \n have color $s$ poles under the BAE (\\ref{BAE3}) or (\\ref{BAE4}).\n \n$S_{3}$ is a \nsummation of the tableaux (with sign) of the form \n\\begin{eqnarray}\n \\begin{array}{|c|c|c|c|c|}\\hline \n \\xi & s & 0 & \\stackrel{\\ }{\\overline{s}} & \\eta \n \\\\ \\hline \n \\end{array}\n\\end{eqnarray}\nwhere \n \\framebox{$\\xi$} and \\framebox{$\\eta$} \n are rows with total \nlength $m-3$, which do not contain \n\\framebox{$s$}, \\framebox{$0$} and \n\\framebox{$\\overline{s}$}. \nThus, owing to Lemma \\ref{le-yoko}, $S_{3}$ does not \n have color $s$ poles. \n \nThen $ {\\cal T}_{m}(u)$ is free of poles under \nthe condition that the BAEs \n(\\ref{BAE1})-(\\ref{BAE4}) are valid; \nowing to the relation (\\ref{Jacobi-Trudi2}), \nthis also holds true for \n$ {\\cal T}_{\\lambda \\subset \\mu}(u)$. 
In particular, \n the pole-freeness of ${\\cal T}^{a}(u)$ follows \n immediately.\n\\setcounter{equation}{0}\n\\renewcommand{\\theequation}{A.3.\\arabic{equation}}\n\\section*{Appendix A.3 Outline of the proof of Theorem \n\\ref{th-tate}: $D(r|s)$ \n($r \\in {\\bf Z}_{\\ge 2}$, $s \\in {\\bf Z}_{\\ge 1}$) case}\nWe prove that ${\\cal T}^{a}(u)$ \nis free of color $b$ poles, that is, \n$Res_{u=u_{k}^{(b)}+\\cdots}{\\cal T}^{a}(u)=0$ for any \n $ b \\in \\{1,2,\\dots, s+r \\} $\n under the condition that the BAE (\\ref{BAE}) is valid. \n The function $\\framebox{$c$}_{u}$ \n (\\ref{z++}) with $c \\in J $ has \n color $b$ poles only for $c=b$, $b+1$, \n$\\overline{b+1}$ or $\\overline{b}$ if \n$b\\in \\{1,2,\\dots,s+r-1 \\} $; \nfor $c=s+r-1$, $s+r$, $\\overline{s+r}$ or $\\overline{s+r-1}$ \nif $b=s+r$, \nso we shall trace only \n\\framebox{$b$}, \\framebox{$b+1$}, \n\\framebox{$\\overline{b+1}$} or \\framebox{$\\overline{b}$}\n for $b\\in\\{1,2,\\dots, s+r-1 \\}$; \n \\framebox{$s+r-1$}, \\framebox{$s+r$}, \\framebox{$\\overline{s+r}$} \n or \\framebox{$\\overline{s+r-1}$} \n for $b=s+r$.\n Let $S_{k}$ be the partial sum of ${\\cal T}^{a}(u)$, \n which contains \n$k$ boxes among \\framebox{$b$}, \\framebox{$b+1$}, \n\\framebox{$\\overline{b+1}$} or \\framebox{$\\overline{b}$}\n for $b\\in \\{1,2,\\dots,s+r-1 \\} $; \n\\framebox{$s+r-1$}, \\framebox{$s+r$}, \\framebox{$\\overline{s+r}$}\n or \\framebox{$\\overline{s+r-1}$} \nfor $b=s+r$.\n Apparently, $S_{0}$ does not have \ncolor $b$ poles. \n\n Next we consider $S_{1}$, which is a summation \n of the tableaux (with sign) of the form \n\\begin{equation} \n\\begin{array}{|c|}\\hline\n \\xi \\\\ \\hline \n \\eta \\\\ \\hline \n \\zeta \\\\ \\hline\n\\end{array},\n\\end{equation}\nwhere \\framebox{$\\xi$} and \\framebox{$\\zeta$} are columns with \ntotal length $a-1$ and they do not contain $Q_{b}$. 
\n\\framebox{$\\eta$} is \\framebox{$b$}, \\framebox{$b+1$}, \n\\framebox{$\\overline{b+1}$} or \\framebox{$\\overline{b}$}\n for $b\\in \\{1,2,\\dots,s+r-1 \\} $; \n\\framebox{$s+r-1$}, \\framebox{$s+r$}, \\framebox{$\\overline{s+r}$} \n or \\framebox{$\\overline{s+r-1}$} for $b=s+r$. \n Owing to the relations (\\ref{res1-d})-(\\ref{res8-d}), \n $S_{1}$ is free of color $b$ poles under the BAE (\\ref{BAE}). \n From now on we consider $S_{k}$ for $k\\ge 2$. \\\\ \n$\\bullet$ \nThe case $b\\in \\{1,2,\\dots,s+r-2 \\} $: \nThe proof is similar to $B(r|s)$ ($r\\in {\\bf Z}_{\\ge 1}$) case, \nso we omit it. \\\\ \n$\\bullet$ \nThe case $b=s+r-1$ or $b=s+r$: \n$S_{2n}$ ($k=2n$, $n \\in {\\bf Z}_{\\ge 2}$ \n\\footnote{The case $n=1$ can be treated similarly.}) is a \nsummation of the tableaux (with signs) of the form \n\\begin{eqnarray}\n&&\n\\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline\n s+r-1 \\\\ \\hline \n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline \n s+r \\\\ \\hline\n \\vdots \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline\n s+r \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r-1}} \\\\ \\hline\n \\zeta \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline\n s+r-1 \\\\ \\hline \n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline \n s+r \\\\ \\hline\n \\vdots \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline\n s+r \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline\n \\zeta \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline\n s+r \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline \n s+r \\\\ \\hline\n \\vdots \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline\n s+r \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r-1}} \\\\ \\hline\n \\zeta \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|l}\\cline{1-1} \n \\xi & \\\\ \\cline{1-1}\n s+r & _{v} \\\\ \\cline{1-1} \n \\stackrel{\\ }{\\overline{s+r}} & _{v-2} \\\\ \\cline{1-1} \n s+r & _{v-4} \\\\ \\cline{1-1}\n \\vdots & 
\\\\ \\cline{1-1}\n \\stackrel{\\ }{\\overline{s+r}} & _{v-4n+6} \\\\ \\cline{1-1}\n s+r & _{v-4n+4} \\\\ \\cline{1-1}\n \\stackrel{\\ }{\\overline{s+r}} & _{v-4n+2} \\\\ \\cline{1-1}\n \\zeta & \\\\ \\cline{1-1}\n\\end{array}\n\\nonumber \\\\ \n&&=\nA(v)B(v) \\times \\framebox{$\\xi$} \\times \\framebox{$\\zeta$}\n\\label{even-nul}\n\\end{eqnarray} \nand \n\\begin{eqnarray}\n&&\n\\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline \n s+r \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline \n s+r \\\\ \\hline\n \\vdots \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline\n s+r \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline\n s+r \\\\ \\hline\n \\zeta \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline\n s+r-1 \\\\ \\hline \n s+r \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline \n s+r \\\\ \\hline\n \\vdots \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline\n s+r \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline\n s+r \\\\ \\hline\n \\zeta \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline \n s+r \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline \n s+r \\\\ \\hline\n \\vdots \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline\n s+r \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r-1}} \\\\ \\hline\n \\zeta \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|l}\\cline{1-1} \n \\xi & \\\\ \\cline{1-1}\n s+r-1 & _{v} \\\\ \\cline{1-1} \n s+r & _{v-2} \\\\ \\cline{1-1}\n \\stackrel{\\ }{\\overline{s+r}} & _{v-4} \\\\ \\cline{1-1} \n s+r & _{v-6} \\\\ \\cline{1-1}\n \\vdots & \\\\ \\cline{1-1}\n \\stackrel{\\ }{\\overline{s+r}} & _{v-4n+8} \\\\ \\cline{1-1}\n s+r & _{v-4n+6} \\\\ \\cline{1-1}\n \\stackrel{\\ }{\\overline{s+r}} & _{v-4n+4} \\\\ \\cline{1-1}\n \\stackrel{\\ }{\\overline{s+r-1}} & _{v-4n+2} 
\\\\ \\cline{1-1}\n \\zeta & \\\\ \\cline{1-1}\n\\end{array}\n\\nonumber \\\\ \n&&=\nC(v)D(v) \\times \\framebox{$\\xi$} \\times \\framebox{$\\zeta$},\n\\end{eqnarray} \nwhere $v=u+h_{1}$: $h_{1}$ is some shift parameter; \n\\framebox{$\\xi$} and \n\\framebox{$\\zeta$} are columns with total \nlength $a-2n$, which do not contain \\framebox{$s+r-1$}, \n\\framebox{$s+r$}, \\framebox{$\\overline{s+r}$} and \n\\framebox{$\\overline{s+r-1}$};\n\\begin{eqnarray}\nA(v)&=&\\frac{Q_{s+r-1}(v-s+r+1)}{Q_{s+r-1}(v-s+r-1)} \n \\nonumber \\\\ \n && +\\frac{Q_{s+r-2}(v-s+r)Q_{s+r-1}(v-s+r-3)}\n {Q_{s+r-2}(v-s+r-2)Q_{s+r-1}(v-s+r-1)}, \\nonumber \\\\ \nB(v)&=& \\frac{Q_{s+r-1}(v-s+r-4n-1)}{Q_{s+r-1}(v-s+r-4n+1)} \n\\nonumber \\\\ \n && +\\frac{Q_{s+r-2}(v-s+r-4n)Q_{s+r-1}(v-s+r-4n+3)}\n {Q_{s+r-2}(v-s+r-4n+2)Q_{s+r-1}(v-s+r-4n+1)},\n \\nonumber \\\\ \n C(v)&=&\\frac{Q_{s+r}(v-s+r+1)}{Q_{s+r}(v-s+r-1)} \n \\nonumber \\\\ \n && +\\frac{Q_{s+r-2}(v-s+r)Q_{s+r}(v-s+r-3)}\n {Q_{s+r-2}(v-s+r-2)Q_{s+r}(v-s+r-1)}, \\\\ \nD(v)&=& \\frac{Q_{s+r}(v-s+r-4n-1)}{Q_{s+r}(v-s+r-4n+1)} \n\\nonumber \\\\ \n && +\\frac{Q_{s+r-2}(v-s+r-4n)Q_{s+r}(v-s+r-4n+3)}\n {Q_{s+r-2}(v-s+r-4n+2)Q_{s+r}(v-s+r-4n+1)}.\n \\nonumber \n\\end{eqnarray} \nApparently, $A(v)$ and $B(v)$ (resp. $C(v)$ and $D(v)$) \ndo not contain $Q_{s+r}$ (resp. $Q_{s+r-1}$). \nOne can also check $A(v)$ and $B(v)$ \n(resp. $C(v)$ and $D(v)$) are free of color $s+r-1$ \n(resp. $s+r$) poles under the BAE (\\ref{BAE}). 
\n\n$S_{2n+1}$ ($k=2n+1$, $n \\in {\\bf Z}_{\\ge 2}$ \n\\footnote{The case $n=1$ can be treated similarly.}) is a \nsummation of the tableaux (with signs) of the form \n\\begin{eqnarray}\n&&\n\\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline\n s+r \\\\ \\hline \n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline \n s+r \\\\ \\hline\n \\vdots \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline\n s+r \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline\n s+r \\\\ \\hline\n \\zeta \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline\n s+r \\\\ \\hline \n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline \n s+r \\\\ \\hline\n \\vdots \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline\n s+r \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r-1}} \\\\ \\hline\n \\zeta \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline\n s+r-1 \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline \n s+r \\\\ \\hline\n \\vdots \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline\n s+r \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline\n s+r \\\\ \\hline \n \\zeta \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|l}\\cline{1-1} \n \\xi & \\\\ \\cline{1-1}\n s+r-1 & _{v} \\\\ \\cline{1-1} \n \\stackrel{\\ }{\\overline{s+r}} & _{v-2} \\\\ \\cline{1-1} \n s+r & _{v-4} \\\\ \\cline{1-1}\n \\vdots & \\\\ \\cline{1-1}\n \\stackrel{\\ }{\\overline{s+r}} & _{v-4n+6} \\\\ \\cline{1-1}\n s+r & _{v-4n+4} \\\\ \\cline{1-1}\n \\stackrel{\\ }{\\overline{s+r}} & _{v-4n+2} \\\\ \\cline{1-1}\n \\stackrel{\\ }{\\overline{s+r-1}} & _{v-4n} \\\\ \\cline{1-1}\n \\zeta & \\\\ \\cline{1-1}\n\\end{array}\n\\nonumber \\\\ \n&&=\nE(v)F(v) \\times \\framebox{$\\xi$} \\times \\framebox{$\\zeta$}\n\\end{eqnarray}\nand \n\\begin{eqnarray}\n&&\n\\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline \n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline \n s+r \\\\ \\hline \n \\stackrel{\\ 
}{\\overline{s+r}} \\\\ \\hline \n s+r \\\\ \\hline\n \\vdots \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline\n s+r \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline\n \\zeta \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline\n s+r-1 \\\\ \\hline \n s+r \\\\ \\hline \n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline \n s+r \\\\ \\hline\n \\vdots \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline\n s+r \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline\n \\zeta \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|}\\hline \n \\xi \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline \n s+r \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline \n s+r \\\\ \\hline\n \\vdots \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r}} \\\\ \\hline\n s+r \\\\ \\hline\n \\stackrel{\\ }{\\overline{s+r-1}} \\\\ \\hline \n \\zeta \\\\ \\hline\n\\end{array}\n+\n\\begin{array}{|c|l}\\cline{1-1} \n \\xi & \\\\ \\cline{1-1}\n s+r-1 & _{v} \\\\ \\cline{1-1} \n s+r & _{v-2} \\\\ \\cline{1-1} \n \\stackrel{\\ }{\\overline{s+r}} & _{v-4} \\\\ \\cline{1-1} \n s+r & _{v-6} \\\\ \\cline{1-1}\n \\vdots & \\\\ \\cline{1-1}\n \\stackrel{\\ }{\\overline{s+r}} & _{v-4n+4} \\\\ \\cline{1-1}\n s+r & _{v-4n+2} \\\\ \\cline{1-1}\n \\stackrel{\\ }{\\overline{s+r-1}} & _{v-4n} \\\\ \\cline{1-1}\n \\zeta & \\\\ \\cline{1-1}\n\\end{array}\n\\nonumber \\\\ \n&&=\nG(v)H(v) \\times \\framebox{$\\xi$} \\times \\framebox{$\\zeta$}\n\\end{eqnarray}\nwhere $v=u+h_{2}$: $h_{2}$ is some shift parameter; \n\\framebox{$\\xi$} and \n\\framebox{$\\zeta$} are columns with total \nlength $a-2n-1$, which do not contain \\framebox{$s+r-1$}, \n\\framebox{$s+r$}, \\framebox{$\\overline{s+r}$} and \n\\framebox{$\\overline{s+r-1}$};\n\\begin{eqnarray}\nE(v)&=&\\frac{Q_{s+r-1}(v-s+r+1)}{Q_{s+r-1}(v-s+r-1)} \n \\nonumber \\\\ \n && +\\frac{Q_{s+r-2}(v-s+r)Q_{s+r-1}(v-s+r-3)}\n {Q_{s+r-2}(v-s+r-2)Q_{s+r-1}(v-s+r-1)}, \\nonumber \\\\ \nF(v)&=& 
\\frac{Q_{s+r}(v-s+r-4n-3)}{Q_{s+r}(v-s+r-4n-1)} \n\\nonumber \\\\ \n && +\\frac{Q_{s+r-2}(v-s+r-4n-2)Q_{s+r}(v-s+r-4n+1)}\n {Q_{s+r-2}(v-s+r-4n)Q_{s+r}(v-s+r-4n-1)},\n \\nonumber \\\\ \n G(v)&=&\\frac{Q_{s+r}(v-s+r+1)}{Q_{s+r}(v-s+r-1)} \n \\nonumber \\\\ \n && +\\frac{Q_{s+r-2}(v-s+r)Q_{s+r}(v-s+r-3)}\n {Q_{s+r-2}(v-s+r-2)Q_{s+r}(v-s+r-1)}, \\\\ \nH(v)&=& \\frac{Q_{s+r-1}(v-s+r-4n-3)}{Q_{s+r-1}(v-s+r-4n-1)} \n\\nonumber \\\\ \n && +\\frac{Q_{s+r-2}(v-s+r-4n-2)Q_{s+r-1}(v-s+r-4n+1)}\n {Q_{s+r-2}(v-s+r-4n)Q_{s+r-1}(v-s+r-4n-1)}.\n \\nonumber \n\\end{eqnarray} \nApparently, $E(v)$ and $H(v)$ (resp. $F(v)$ and $G(v)$) \ndo not contain $Q_{s+r}$ (resp. $Q_{s+r-1}$). \nOne can also check $E(v)$ and $H(v)$ \n(resp. $F(v)$ and $G(v)$) are free of color $s+r-1$ \n(resp. $s+r$) poles under the BAE (\\ref{BAE}). \n\n Thus, $S_{k}$ has neither \n color $s+r-1$ poles nor \n color $s+r$ poles under the BAE (\\ref{BAE}). \n\\setcounter{equation}{0}\n\\renewcommand{\\theequation}{B.\\arabic{equation}}\n\\section*{Appendix B Generating series for \n${\\cal T}^{a}(u)$ and ${\\cal T}_{m}(u)$}\nThe functions ${\\cal T}^{a}(u)$ \nand ${\\cal T}_{m}(u)$ \n ($a,m \\in {\\bf Z }$; $u \\in {\\bf C }$) \n are determined by the following \n non-commutative generating series. 
\\\\ \n$B(r|s)$ case: \n\\begin{eqnarray}\n\\hspace{-30pt} && (1+\\framebox{$\\overline{1}$}X)^{-1}\\cdots \n (1+\\framebox{$\\overline{s}$}X)^{-1} (1+\\framebox{$\\overline{s+1}$}X)\n\\cdots (1+\\framebox{$\\overline{s+r}$}X)(1-\\framebox{$0$}X)^{-1}\n \\nonumber \\\\\n\\hspace{-30pt} && \\times (1+\\framebox{$s+r$}X) \\cdots (1+\\framebox{$s+1$}X)\n (1+\\framebox{$s$}X)^{-1} \\cdots (1+\\framebox{$1$}X)^{-1}\n \\nonumber \\\\ \n\\hspace{-30pt} && \\hspace{40pt} =\\sum_{a=-\\infty}^{\\infty} \n {\\cal T}^{a}(u+a-1)X^{a},\n \\end{eqnarray}\n\\begin{eqnarray}\n\\hspace{-30pt} && (1-\\framebox{$1$}X)\\cdots \n (1-\\framebox{$s$}X) (1-\\framebox{$s+1$}X)^{-1}\n\\cdots (1-\\framebox{$s+r$}X)^{-1}(1+\\framebox{$0$}X)\n \\nonumber \\\\\n\\hspace{-30pt} && \\times (1-\\framebox{$\\overline{s+r}$}X)^{-1} \n\\cdots (1-\\framebox{$\\overline{s+1}$}X)^{-1}\n (1-\\framebox{$\\overline{s}$}X) \\cdots (1-\\framebox{$\\overline{1}$}X)\n \\nonumber \\\\ \n\\hspace{-30pt} && \\hspace{40pt} =\\sum_{m=-\\infty}^{\\infty} \n {\\cal T}_{m}(u+m-1)X^{m}.\n\\end{eqnarray} \n$D(r|s)$ case: \n\\begin{eqnarray}\n\\hspace{-30pt} && (1+\\framebox{$\\overline{1}$}X)^{-1}\\cdots \n (1+\\framebox{$\\overline{s}$}X)^{-1} (1+\\framebox{$\\overline{s+1}$}X)\n\\cdots (1+\\framebox{$\\overline{s+r}$}X)\n\\nonumber \\\\ \n\\hspace{-30pt} && \\times \n(1-\\framebox{$s+r$}X\\framebox{$\\overline{s+r}$}X)^{-1}\n \\nonumber \\\\\n\\hspace{-30pt} && \\times (1+\\framebox{$s+r$}X) \\cdots (1+\\framebox{$s+1$}X)\n (1+\\framebox{$s$}X)^{-1} \\cdots (1+\\framebox{$1$}X)^{-1}\n \\nonumber \\\\ \n\\hspace{-30pt} && \\hspace{40pt} =\\sum_{a=-\\infty}^{\\infty} \n {\\cal T}^{a}(u+a-1)X^{a},\n\\end{eqnarray}\n\\begin{eqnarray}\n\\hspace{-30pt} && (1-\\framebox{$1$}X)\\cdots \n (1-\\framebox{$s$}X) (1-\\framebox{$s+1$}X)^{-1}\n\\cdots (1-\\framebox{$s+r-1$}X)^{-1} \\nonumber \\\\ \n\\hspace{-30pt} && \\times \n[(1-\\framebox{$s+r$}X)^{-1}+\n(1-\\framebox{$\\overline{s+r}$}X)^{-1}-1]\n \\nonumber \\\\\n\\hspace{-30pt} 
&& \\times (1-\\framebox{$\\overline{s+r-1}$}X)^{-1} \n\\cdots (1-\\framebox{$\\overline{s+1}$}X)^{-1}\n (1-\\framebox{$\\overline{s}$}X) \\cdots (1-\\framebox{$\\overline{1}$}X)\n \\nonumber \\\\ \n\\hspace{-30pt} && \\hspace{40pt} =\\sum_{m=-\\infty}^{\\infty} \n {\\cal T}_{m}(u+m-1)X^{m}.\n\\end{eqnarray} \nHere $X$ is a shift operator $X=e^{2\\partial_{u}}$. \nIn particular, we have ${\\cal T}^{0}(u)=1$; \n${\\cal T}_{0}(u)=1$; ${\\cal T}^{a}(u)=0$ for $a<0$; \n ${\\cal T}_{m}(u)=0$ for $m<0$. \n\\newpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nYang and Lee \\cite{yang} proposed a mechanism\nfor the occurrence of phase transitions in the thermodynamic limit\nand yielded an insight into the problem of the\nIsing ferromagnet at arbitrary temperature in an arbitrary nonzero external magnetic field\nby introducing the concept of the zeros of the grand partition function $Z(T,H)$\n(for fluid systems, $Z(T,\\mu)$)\nin the {\\it complex} magnetic-field (for fluid systems, fugacity) plane (Yang-Lee zeros).\nThey \\cite{lee} also formulated\nthe celebrated circle theorem which states that\nYang-Lee zeros of the Ising ferromagnet lie on the unit circle $x_0=e^{i\\theta}$\nin the complex $x=e^{-2H\/k_B T}$ plane.\nBelow and at the critical temperature $T_c$, Yang-Lee zeros cut the positive real axis\nat the ferromagnetic transition point $x_c=1$ ($H_c=0$) in the thermodynamic limit.\nThe spontaneous magnetization $m_0$ is determined by the density of Yang-Lee zeros\n$g(\\theta)$ on the positive real axis, i.e., $m_0=2\\pi g(\\theta=0)$.\nHowever, above the critical temperature, Yang-Lee zeros do not cut the positive real axis\nin the thermodynamic limit. 
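The circle theorem is easy to verify numerically for a toy system. The following sketch (our illustration, not code from the paper; the chain length `N = 6` and the value `y = 0.5` are arbitrary choices) brute-forces the grand partition function of a periodic one-dimensional Ising ferromagnet as a polynomial in $x=e^{-2H/k_B T}$ and checks that all of its Yang-Lee zeros lie on the unit circle $|x|=1$:

```python
# Illustration only (not from the paper): brute-force check of the Lee-Yang
# circle theorem on a small periodic 1D Ising chain.  N and y are arbitrary.
import itertools
import numpy as np

N = 6      # number of spins
y = 0.5    # y = exp(-2*beta*J), ferromagnetic J > 0

# Z(x) = sum_M omega(M) x^M, where M counts down spins and E counts
# unsatisfied bonds (both nonnegative integers).
omega = np.zeros(N + 1)
for spins in itertools.product((-1, 1), repeat=N):
    E = sum((1 - spins[i] * spins[(i + 1) % N]) // 2 for i in range(N))
    M = sum((1 - s) // 2 for s in spins)
    omega[M] += y ** E

zeros = np.roots(omega[::-1])   # np.roots wants the highest-degree coefficient first
print(np.allclose(np.abs(zeros), 1.0))   # all Yang-Lee zeros on the unit circle
```

For any ferromagnetic coupling ($0<y<1$) the zeros stay on the unit circle; only their angular gap around $\theta=0$ changes with temperature, which is the Yang-Lee edge behavior described in the text.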
There is a gap in the distribution of zeros\naround the positive real axis, that is, $g(\\theta)=0$ for $|\\theta|<\\theta_e$.\nWithin this gap, the free energy is analytic and there is no phase transition.\nThe Yang-Lee zeros at $\\theta=\\pm\\theta_e$ are called the Yang-Lee edge zeros whose\nlocations have never been known exactly for $T>T_c$ and $d\\ge2$.\nAt the critical temperature the gap disappears, i.e., $\\theta_e=0$.\nAs temperature changes from $T_c$ to $\\infty$, the Yang-Lee edge $\\theta_e$\nmoves from $\\theta=0$ to $\\theta=\\pi$ \\cite{kortman,creswick97,wang,kim98}.\n\nIn addition to Yang-Lee zeros,\nFisher showed that the thermal properties of the square-lattice Ising model for $H=0$\nare completely determined by the zeros of the canonical partition function $Z(T)$\nin the complex temperature plane (Fisher zeros) \\cite{fisher65,lu}.\nUntil now, Yang-Lee zeros and Fisher zeros\n(and other kinds of zeros for some problems)\nof numerous physical, chemical, biological, and mathematical systems\nhave been investigated extensively to understand their important properties \\cite{bena}.\n\nIf the density of zeros\n\\cite{lee,kortman,creswick97,fisher65,lu,bena,suzuki70,kenna94,kenna97,janke02,binek,binek2,kim04a,suzuki67}\nis found, the free energy, the equation of state,\nand all other thermodynamic functions can be obtained.\nHowever, very little is known about the actual form of the density of zeros.\nThe density of Yang-Lee zeros for the Ising ferromagnet has never been known exactly\nexcept for the one-dimensional case \\cite{lee}.\nSuzuki {\\it et al.} \\cite{suzuki70} obtained the exact grand\npartition functions of the Ising model on the $4\\times6$ square lattice and\nthe $3\\times3\\times3$ simple-cubic lattice, and calculated\nthe densities of Yang-Lee zeros for these small-size lattices.\nKortman and Griffiths \\cite{kortman} investigated\nthe densities of Yang-Lee zeros for the Ising ferromagnet\non the square lattice and the diamond 
lattice,\nbased on the high-field, high-temperature series expansion,\nand found that the density of zeros for the square-lattice Ising ferromagnet\ndiverges at Yang-Lee edge $\\theta_e$ in high temperatures.\nFor the $L\\times L$ square-lattice Ising model,\nCreswick and Kim \\cite{creswick97} obtained the exact grand partition functions up to $L=10$\nand the semi-exact grand partition functions up to $L=14$,\ncalculated the densities of Yang-Lee zeros for $T\\le T_c$,\nand evaluated the spontaneous magnetization $m_0(T)=2\\pi g(\\theta=0,T)$\nin the infinite-size limit.\nFrom Monte Carlo data, the density of Yang-Lee zeros\nnear the transition point has been studied\nfor the four-dimensional Ising ferromagnet \\cite{kenna94} and\nthe square-lattice XY ferromagnet \\cite{kenna97,janke02}.\nThe density of Yang-Lee zeros has also been investigated experimentally\nfor the two-dimensional Ising ferromagnet FeCl$_2$ in axial magnetic fields \\cite{binek,binek2}.\n\nUntil now, the divergence of the density of zeros at Yang-Lee edge\n(the so-called Yang-Lee edge singularity)\nfor the square-lattice Ising ferromagnet in high temperatures has been obtained only by\nthe series expansion \\cite{kortman,kurtze1,baker86}.\nFurthermore, for $T>T_c$, the finite-size effects of the density of Yang-Lee zeros\nfor the Ising ferromagnet have never been studied.\nIn this paper, we investigate the properties of the density of Yang-Lee zeros\nbased on the exact grand partition functions of the $L\\times L$ square-lattice Ising model\n($L=3\\sim16$).\nWe deal with the three different classes of phase transitions for the Ising ferromagnet,\n(1) first-order phase transition, (2) second-order phase transition,\nand (3) Yang-Lee edge singularity, in a unified framework using the densities of zeros\nfor finite-size systems.\nWe also study the density of Yang-Lee zeros\nand its divergence in high temperatures from the finite-size data.\n\n\n\\section{Density of zeros}\n\nFor a finite-size 
system of size $L$, the density of Yang-Lee zeros $g(\\theta,L)$\nat the critical temperature $T_c$ is given by \\cite{creswick97,bena}\n\\begin{equation}\ng(\\theta,L)=L^{-d+y_h}g(\\theta L^{y_h})\n\\end{equation}\nnear $\\theta=0$.\nEquation (1) implies\n\\begin{equation}\ng(\\theta)\\sim|\\theta|^{(d-y_h)\/y_h}=|\\theta|^{1\/\\delta}\n\\end{equation}\nfor the infinite system.\nAt the critical temperature the value of the magnetic scaling exponent $y_h$\nfor the Ising ferromagnet in two dimensions ($d=2$)\nis well known to be 15\/8, that is, $\\delta=15$.\nTherefore, the density of zeros vanishes at the critical point $H_c=0$ ($\\theta=0$).\nThis kind of behavior of the density of zeros at $T_c$ is the characteristic of\na second-order phase transition.\nOn the other hand, for $T 2T_c$,\nin disagreement with Suzuki's assumption.\nKurtze and Fisher \\cite{kurtze1} refined the estimation of $\\sigma$ by analyzing\nthe high-temperature series expansion for the classical $n$-vector model\nand the quantum Heisenberg model in the limit of infinite temperature,\nand reported $\\sigma=-0.163(3)$ in two dimensions (square and triangular lattices).\nBaker {\\it et al.} \\cite{baker86} analyzed the series expansion of much greater length for\nthe square-lattice Ising ferromagnet, and obtained $\\sigma=-0.1560(33)$\nat $y=e^{-2\\beta J}={2\\over3}$ ($T\\approx 5 J\/k_B$) and $\\sigma=-0.1576(34)$\nat $y={4\\over5}$ ($T\\approx 9 J\/k_B$) from integral approximants.\nRemarkably, the exponent $\\sigma$ has also been experimentally measured to be\n$-0.15(2)$ in the range $49K\\le T\\le53K$ and $-0.365$ at $T=34K$\nfor the triangular-lattice Ising ferromagnet FeCl$_2$ \\cite{binek2}.\n\nFisher \\cite{fisher78} renamed the edge zero as the Yang-Lee edge {\\it singularity}\nfor $T>T_c$, and proposed the idea that the Yang-Lee edge singularity\ncan be thought of as a new second-order phase transition with associated critical exponents.\nAbove the critical temperature the value of 
$\\sigma$ is known to be\n$\\sigma=-{1\\over6}$ in two dimensions \\cite{dhar},\nresulting in $y_h={12\\over5}$ (according to Eq.~(4)), clearly different from\nthe value of $y_h={15\\over8}$ for the Ising ferromagnet at the critical temperature.\nThe study of the Yang-Lee edge singularity has been extended to\nthe spherical model \\cite{kurtze2},\nthe quantum one-dimensional transverse Ising model \\cite{uzelac},\nthe hierarchical model \\cite{baker79},\nbranched polymers \\cite{parisi},\nthe Ising models on fractal lattices \\cite{southern},\nthe Ising systems with correlated disorder \\cite{tadic},\nfluid models with repulsive-core interactions \\cite{lai},\nand the antiferromagnetic Ising model \\cite{kim05}, etc.\n\n\n\\section{grand partition function and number of states}\n\nThe Ising model in an external magnetic field $H$ on a lattice\nwith $N_s$ sites and $N_b$ bonds is defined by the Hamiltonian\n\\begin{equation}\n{\\cal H}=J\\sum_{\\langle i,j\\rangle}(1-\\sigma_i\\sigma_j)+H\\sum_i(1-\\sigma_i),\n\\end{equation}\nwhere $J$ is the coupling constant,\n$\\langle i,j\\rangle$ indicates a sum over all nearest-neighbor pairs\nof lattice sites, and $\\sigma_i=\\pm1$.\nThe grand partition function of the Ising model is\n$$\nZ=\\sum_{\\{ \\sigma_n \\}} e^{-\\beta{\\cal H}},\n$$\nwhere $\\{ \\sigma_n \\}$ denotes a sum over $2^{N_s}$ possible spin configurations\nand $\\beta=(k_BT)^{-1}$.\nIf we define the number of states, $\\Omega(E,M)$,\nwith a given energy\n\\begin{equation}\nE={1\\over2}\\sum_{\\langle i,j\\rangle}(1-\\sigma_i\\sigma_j)\n\\end{equation}\nand a given magnetization\n\\begin{equation}\nM={1\\over2}\\sum_i(1-\\sigma_i),\n\\end{equation}\nwhere $E$ and $M$ are positive integers $0\\le E\\le N_b$ and $0\\le M\\le N_s$,\nthen the grand partition function can be written as\n\\begin{equation}\nZ(x,y)=\\sum_{E=0}^{N_b}\\sum_{M=0}^{N_s}\\Omega(E,M) y^E x^M,\n\\end{equation}\nwhere $y=e^{-2\\beta J}$ and $x=e^{-2\\beta H}$.\nFrom Eq.~(8) it is clear 
that $Z(x,y)$ is simply a polynomial in $x$ and $y$.\n\nThe {\\it exact} integer values for the number of states $\\Omega(E,M)$ of the Ising model\non the $L\\times L$ square lattices with cylindrical boundary conditions have been evaluated\nup to $L=10$ to study the density of Yang-Lee zeros for $T\\le T_c$\nusing the microcanonical transfer matrix ($\\mu$TM) \\cite{creswick97}.\nIn Ref.~[4], for $11\\le L\\le14$, we had to evaluate\nthe coefficients, for a fixed value of $y$,\n$$\n\\omega(M)=\\sum_E\\Omega(E,M)y^E\n$$\nas real numbers of finite precision due to memory limitations.\nThis semi-exact method worked well for $T\\le T_c$, and produced all zeros on the unit circle.\nHowever, for $T>T_c$ the number of zeros on the unit circle,\nobtained from $Z(x)=\\sum\\omega(M)x^M$, decreases as $T$ increases.\nTherefore, to study the density of Yang-Lee zeros for $T>T_c$ we need\nthe exact integer values for $\\Omega(E,M)$, not $\\omega(M)$.\nIn this work, the exact integer values for $\\Omega(E,M)$ are calculated up to $L=16$\n($2^{256}=1.158\\times10^{77}$ configurations).\nThe memory requirement during the $\\mu$TM calculation\nis $2\\times4\\times(N_s\/2+1)\\times(N_b+1)\\times P_m\\times 2^L$ bytes\nfor the two-dimensional Ising model,\nwhere $P_m$ is the maximum size for storing very long integer numbers.\nFor $L=16$, the memory requirement is $3.03\\times10^{11}$ bytes.\n(For comparison, the memory requirement for $L=10$, the largest size calculated in Ref.~[4],\nis $3.19\\times10^8$ bytes.\nThe largest size calculated before this work is $L=14$, whose memory requirement\nis $3.44\\times10^{10}$ bytes \\cite{kim04b}.)\nThe largest number of states for $L=16$ is\n\\begin{eqnarray}\n\\Omega(249,128)=\n207964931539339552089271840638951892633 \\cr\n537987437849943362668628644700590272,\n\\end{eqnarray}\nwhich is approximately $2.08\\times10^{74}$.\n\n\n\\section{Numerical results}\n\nFor a finite system of size $L$, the density of zeros (per site) may be defined 
as\n\\begin{equation}\ng(\\theta_k,L)={1\\over L^d}{1\\over{\\theta_{k+1}-\\theta_k}},\n\\end{equation}\nwhere $\\theta_k$ ($k=1,2,3,...$) is the argument of the $k$-th zero on the\nunit circle $x_0=e^{i\\theta}$ of $Z(x,y)$.\nThe index $k$ is counted from $x=1$ ($\\theta=0$) to $x=-1$ ($\\theta=\\pi$)\nfor the zeros with $\\theta\\ge0$. Hence, $\\theta_1(L)=\\theta_e(L)$.\nFigure 1 shows the density of Yang-Lee zeros for the $16\\times16$ Ising ferromagnet\nat various temperatures (in unit of $J\/k_B$).\nThe Yang-Lee zeros at $T=0$ are uniformly distributed on the unit circle \\cite{kim98},\nand the density of zeros is given by $g(\\theta)=1\/2\\pi$ ($=0.159...$).\nAs shown in Fig.~1, for a low temperature $T=1$, the density of zeros is\nnearly identical with $g(\\theta)=0.159$.\nThe spontaneous magnetization can be evaluated from the finite-size data for the density\nof zeros ($m_0(L)=2\\pi g(0,L)$, $L=3\\sim16$) using the BST extrapolation algorithm \\cite{bst}.\nFor $T=1$ the extrapolated value of the spontaneous magnetization is\n$m_0=0.9992757(3)$ in excellent agreement with the exact result $0.9992757519...$ \\cite{yang2}.\nAt the critical temperature $T_c$ ($=2.269...$), the maximum value of the density of zeros\nis 0.166096 at $\\theta=2.541$ where the density of zeros begins to decrease monotonically\nas $\\theta$ decreases.\nAt $T_c$ the density of zeros vanishes at $\\theta=0$ in the limit $L\\to\\infty$,\naccording to Eq.~(2).\nFor $T=3$ the density of zeros is similar to that at $T_c$, except for\na violation of the monotonic decrease of the density of zeros around $\\theta=0.2$.\nIt is interesting that the minimum value of the density of zeros at $T=3$\n($g(\\theta_e=0.126)=0.112$) is lower than that at $T_c$ ($g(0.023)=0.117$).\nTherefore, we may guess that the density of zeros for $T$ just above $T_c$, for example $T=3$,\nvanishes at Yang-Lee edge $\\theta_e$ in the infinite-size limit.\nAs shown in Fig.~1, at high temperatures such as $T=5$ and 
10, the overall density of zeros\nbecomes higher because the Yang-Lee edge $\\theta_e$ tends to $\\pi$ as $T$ rises.\nFor $T=5$ the density of zeros has a local maximum $g=0.182$ near $\\theta_e=0.532$.\nFinally, for $T=10$, the density of zeros has the global maximum $g=0.279$\nnear $\\theta_e=1.161$.\nThis fact may be a precursor of the divergence of the density of zeros at $\\theta_e$\nin the infinite-size limit for high temperatures.\n\nFigure 2 shows the densities of Yang-Lee zeros for $L=12$ and 16 at different temperatures.\nWell below the critical temperature (for example, $T=1$), the densities of zeros\nfor $L=12$ and 16 are not distinguishable, as shown in Fig.~2(a).\nTherefore, the finite-size effect for the density of zeros is very small\nwell below $T_c$.\nAt and near the critical temperature (for example, $T=2.5$ in Fig.~2(a)),\nthe densities of zeros for $L=12$ and 16 are also indistinguishable,\nexcept near $\\theta=0$, where the density of zeros $g(L)$ decreases sharply\nas the system size $L$ increases.\nWell above the critical temperature (for example, $T=10$ in Fig.~2(b)),\nthe overall forms of the density of zeros for $L=12$ and 16 are almost identical.\nThe finite-size effect for the density of zeros at $T=10$ is relatively large,\ncompared to those at $T=1$ and 2.5.\nNear the Yang-Lee edge $\\theta_e=1.161$, the density of zeros is somewhat irregular,\nand it increases as $L$ increases.\n\nIf we set $\\Theta=0$ ($\\theta=\\theta_e(L)$) in Eq.~(3), we have\n\\begin{equation}\ng(0,L)=L^{-d+y_h}g(0),\n\\end{equation}\nfrom which we can define the magnetic scaling exponent\n\\begin{equation}\ny_h(L)=d+{{{\\rm ln}[g(0,L+1)\/g(0,L)]}\\over{\\rm ln}[(L+1)\/L]}\n\\end{equation}\nfor finite lattices.\nTable I shows the values of $y_h(L)$ at $T=1$, $T_c$, and 10, respectively.\nFor $T=1$, $y_h(L)$ changes very slightly around $d=2$ as $L$ increases.\nThis kind of behavior may result from the small finite-size effect\nfor the density of zeros 
well below the critical temperature.\nAt $T_c$ and $T=10$, $y_h(L)$ increases monotonically as $L$ increases.\nIn particular, at $T=10$, $y_h(L)$ increases quickly.\n\nTable II shows the extrapolated results for the magnetic scaling exponent $y_h$\nby the BST algorithm.\nFor $T=1$, the extrapolated result clearly indicates $y_h=d=2$ that is expected\nat a strong first-order phase transition \\cite{fisher82}.\nAt $T_c$, the extrapolated value of $y_h=1.8747(7)$ is in agreement with\nthe exact value of $y_h={15\\over8}=1.875$.\nFor $T\\ge3$, we obtained the extrapolated value of $y_h=2.4$\nwhich gives $\\sigma=-{1\\over6.0}$ (the characteristic of Yang-Lee edge singularity).\nAccording to Eq.~(4), the density of zeros diverges at Yang-Lee edge $\\theta_e$\nwhen $y_h>d$.\nFinally, we can easily distinguish the three different classes of phase transitions\n(first-order phase transition, second-order phase transition,\nand Yang-Lee edge singularity)\nby estimating $y_h$ from the densities of zeros on finite lattices.\n\nNear the critical temperature $T_c=2.269...$, we expect some crossovers\nbetween different classes.\nAt $T=2.2$ we obtained $y_h=1.98(15)$ that is close to $y_h=2$\nbut is not clearly distinguished from $y_h={15\\over8}$ due to the large error.\nAt $T=2.4$ and 2.5, the extrapolated values for $y_h$ are 2.33(23) and 2.89(72),\nrespectively, indicating the existence of Yang-Lee edge singularity.\nBut their errors are too large.\nIt is interesting that the densities of zeros for finite-size systems at $T=2.4$, 2.5, and 3\nare quite similar to those at $T_c$\nbut they give the values of $y_h$ ($>d=2$) completely different\nfrom $y_h={15\\over8}$ ($T_c$\nvanishes at $\\theta_e$.\nHowever, if the data of much larger sizes can be used,\nthere will be a possibility of obtaining $y_h > 2$ even at $T=2.3$.\n\nThe finite-size data for the density of zeros $g(\\theta_e,L)$\n($L=3\\sim16$) in high temperatures have also been extrapolated to infinite size.\nIn high 
temperatures, it is convenient to deal with the inverse density of zeros\n$f(\\theta_e,L)\\equiv1\/g(\\theta_e,L)$ because\n$f(\\theta_e,L)$ vanishes with the scale factor $L^{-(y_h-d)}$, in the limit $L\\to\\infty$,\nbut $g(\\theta_e,L)$ diverges.\nTable III shows the extrapolated results for the inverse density of zeros $f(\\theta_e)$.\nFor $T\\ge5$ the extrapolated results imply the divergence of the density of zeros\nat $\\theta_e$.\nHowever, at $T=4$, we cannot conclude anything from the extrapolated result\nfor the inverse density of zeros alone because of the large error.\n\n\n\\section{phase transitions in small systems}\n\nIn the previous section, the density of zeros has been extracted\nfrom the locations of Yang-Lee zeros for finite sizes,\nusing Eq.~(10) \\cite{creswick97,suzuki70,kim04a}.\nRecently, two alternative approaches (the JK approach \\cite{janke02,janke01,janke04,alves}\nand the BMH approach \\cite{borrmann,alves}) have been proposed\nto extract the density of zeros from the locations of Fisher zeros\nin the complex temperature ($y=e^{-2\\beta J}$) plane for finite sizes\nand to study phase transitions in small systems.\nThese two approaches are based on the assumption that Fisher zeros\n$y_j=s_j+i t_j=y_c+i r_j\\exp(-i\\psi)$\nclose to the critical point $y_c$ lie on a single line for quite large $L$,\nwhere the index $j(=1,2,3,...)$ increases with distance $r_j$ from the critical point,\n$s_j={\\rm Re}(y_j)=y_c+r_j\\sin\\psi$, and $t_j={\\rm Im}(y_j)=r_j\\cos\\psi$.\n\nIn the JK approach \\cite{janke02,janke01,janke04,alves}, the density of zeros is defined as\n\\begin{equation}\n\\lambda_L(r)=L^{-d}\\sum_j\\delta(r-r_j(L)),\n\\end{equation}\nand the cumulative density of zeros $\\Lambda_L(r)\\equiv\\int_0^r \\lambda_L(r')dr'$\nbecomes a step function with\n\\begin{equation}\n\\Lambda_L(r)={j\\over L^d}\\ \\ \\ [{\\rm if}\\ r\\in(r_j,r_{j+1})]\n\\end{equation}\nfor a finite-size system of size $L$. 
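As a concrete illustration of the step function above (not part of the original computation), the cumulative density $\Lambda_L(r)=j/L^d$ can be tabulated directly from a list of zero distances $r_j$; the zeros below are made up, since the actual data are not reproduced here.

```python
import numpy as np

def jk_cumulative_density(zero_distances, r, L, d=2):
    """Step-function cumulative density Lambda_L(r) = j / L^d,
    where j counts the zeros with r_j <= r (JK approach)."""
    j = np.count_nonzero(np.sort(np.asarray(zero_distances)) <= r)
    return j / L**d

# made-up zero distances r_j for a hypothetical L = 4 system
zeros = [0.10, 0.25, 0.47, 0.80]
print(jk_cumulative_density(zeros, r=0.3, L=4))  # two zeros below r: 2/16 = 0.125
```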
Then the average cumulative density of zeros\nis given by\n\\begin{equation}\n\\Lambda_L(r_j)=\\frac{2j-1}{2L^d},\n\\end{equation}\nand it is expressed as\n\\begin{equation}\n\\Lambda_{\\infty}(r) = \\lambda_{\\infty}(0)r + a r^{w+1} + \\cdots\n\\end{equation}\nin the limit $L\\to\\infty$.\nHere, the density of zeros $\\lambda_{\\infty}(0)$ on the real axis is proportional\nto the latent heat $\\Delta e$.\nAt a first-order phase transition, the first term $\\lambda_{\\infty}(0)r$ becomes\ndominant for small $r$.\nAt a second-order phase transition, the second term $a r^{w+1}$\nis the leading one because $\\lambda_{\\infty}(0)=0$.\nFinally, the cumulative density of zeros can be approximated as\n\\begin{equation}\n\\frac{2j-1}{2L^d}=\\Lambda_L(r_j)=a_1 [t_j(L)]^{a_2} + a_3\n\\end{equation}\nbecause the parameter $r_j$ for Fisher zeros close to the critical point $y_c$\nmay be taken to be the imaginary part $t_j(L)$ due to $y_c\\sim s_j(L)$.\nA value of $a_3$ inconsistent with zero indicates the absence of a phase transition.\nThe parameter $a_2$ determines the order of a phase transition.\nA first-order phase transition corresponds to a value $a_2 \\sim 1$ for small $t$,\nwhereas a value of $a_2$ larger than $1$ indicates a second-order phase transition\nwhose specific-heat exponent is given by $\\alpha=2-a_2$.\nFor a first-order phase transition, the parameter $a_1$ corresponds to\nthe density of zeros $\\lambda_{\\infty}(0)$ on the real axis.\nThe parameter $a_2$ is also expressed as $a_2=d\/y_t$\nin terms of the thermal scaling exponent $y_t$.\nThe JK approach has also been applied to the Yang-Lee zeros of the square-lattice\nXY ferromagnet \\cite{janke02}, where the parameter $a_2$ is expressed as $a_2=d\/y_h$\nin terms of the magnetic scaling exponent $y_h$,\nthat is,\n\\begin{equation}\na_2={2d\\over d+2-\\eta}.\n\\end{equation}\nThe JK approach has also been generalized to study\nthe areal distributions of Fisher zeros \\cite{janke04}.\n\nOn the other hand, there 
are three important parameters\nin the BMH approach \\cite{borrmann,alves}.\nThe first parameter $t_1$ (the imaginary part of the first zero)\nbecomes 0 in the limit $L\\to\\infty$\nfor a true phase transition in the Ehrenfest sense \\cite{creswick97,kim98,kim04b,kims1,kims2}.\nIf there is a phase transition, we assume that the line density of zeros,\n\\begin{equation}\n\\phi(t_j)\\equiv{1\\over2}\\left({1\\over|y_j-y_{j-1}|}+{1\\over|y_{j+1}-y_j|}\\right),\n\\end{equation}\nfollows a simple power law $\\phi(t)\\sim t^{\\alpha_\\phi}$ for small $t$.\nThe second parameter $\\alpha_\\phi$ can be estimated as\n\\begin{equation}\n\\alpha_\\phi={\\ln\\phi(t_3)-\\ln\\phi(t_2)\\over\\ln t_3-\\ln t_2}\n\\end{equation}\nfrom the first four Fisher zeros ($y_1, y_2, y_3, y_4$).\nThe line of Fisher zeros also crosses the real axis\nat the critical point $y_c$ with the angle $\\psi$, yielding\nthe third parameter\n\\begin{equation}\n\\gamma_\\psi=\\tan\\psi={s_2-s_1\\over t_2-t_1}.\n\\end{equation}\nThe second and third parameters ($\\alpha_\\phi$ and $\\gamma_\\psi$) determine\nthe order of a phase transition.\nThe values of $\\alpha_\\phi=0$ ($\\alpha_\\phi$ can be less than zero for finite sizes)\nand $\\gamma_\\psi=0$ indicate a first-order phase transition.\nA second-order phase transition occurs for $0 < \\alpha_\\phi < 1$ and\narbitrary $\\gamma_\\psi$, and a higher-order phase transition for $\\alpha_\\phi > 1$\nand arbitrary $\\gamma_\\psi$.\nAlves {\\it et al.} \\cite{alves} have performed numerical comparisons\nof the JK and BMH approaches for the Fisher zeros\nof the two-dimensional four-state and five-state Potts models and of biological molecules,\nusing Monte Carlo data.\nTheir results implied that the JK approach is a little better than the BMH approach.\n\nThe approach explained in the previous sections, which may be called\nthe angular-density-of-zeros (ADOZ) approach, can also be used to study\nphase transitions in small systems. 
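For illustration, the three BMH parameters ($t_1$, $\alpha_\phi$, $\gamma_\psi$) can be computed from the first four zeros exactly as in the formulas above. The following Python sketch uses made-up zeros: four equally spaced zeros on a vertical line, which by construction reproduces the first-order signature $\alpha_\phi=0$, $\gamma_\psi=0$.

```python
import numpy as np

def bmh_parameters(y):
    """BMH parameters from the first four zeros y_1, ..., y_4
    (complex, ordered by distance from the critical point)."""
    y = np.asarray(y, dtype=complex)
    s, t = y.real, y.imag
    def phi(j):  # line density phi(t_j) at the j-th zero (1-based j = 2, 3)
        return 0.5 * (1.0 / abs(y[j - 1] - y[j - 2]) + 1.0 / abs(y[j] - y[j - 1]))
    alpha_phi = (np.log(phi(3)) - np.log(phi(2))) / (np.log(t[2]) - np.log(t[1]))
    gamma_psi = (s[1] - s[0]) / (t[1] - t[0])
    return t[0], alpha_phi, gamma_psi

# made-up zeros on a vertical line with uniform spacing (first-order pattern)
t1, alpha_phi, gamma_psi = bmh_parameters([1 + 1j, 1 + 2j, 1 + 3j, 1 + 4j])
print(t1, alpha_phi, gamma_psi)   # 1.0 0.0 0.0
```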
In Eq.~(4), the density of Yang-Lee zeros\nnear $\\Theta=0$ is given by $g(\\Theta)\\sim|\\Theta|^{(d-y_h)\/y_h}$.\nSimilarly, the density of Fisher zeros near $\\Theta=0$\nis also given by $g(\\Theta)\\sim|\\Theta|^{(d-y_t)\/y_t}$ \\cite{kim04a}.\nThe exponent $(d-y_t)\/y_t$ (or $(d-y_h)\/y_h$) corresponds to the parameter\n$a_2-1$ of the JK approach.\nAs in the BMH approach, the zero value of $t_1$\nin the limit $L\\to\\infty$ indicates the existence of a true phase transition.\nWhen there is no phase transition, Fisher (or Yang-Lee) edge singularity \\cite{kims2,kim02}\nexists if $y_t > d$ (or $y_h > d$).\nAs shown in Table I, the values of $y_h(L)$ at $T=10$ are clearly greater than $d=2$\nfor relatively large $L$, suggesting the existence of Yang-Lee edge singularity.\nWhen there is a phase transition, the value of $y_t$ (or $y_h$) determines\nthe order of the phase transition.\nTo calculate $y_t(L)$ (or $y_h(L)$), we need only the first two Fisher (or Yang-Lee)\nzeros according to Eqs.~(10) and (12).\nA value of $y_t\\approx d$ (or $y_h\\approx d$) means a first-order phase transition.\nThe values of $y_h(L)$ at $T=1$ in Table I clearly indicate a first-order phase transition.\nA second-order phase transition occurs for $d\/2\\le y_t<d$ (or $d\/2\\le y_h<d$).\nThe ideal generated by $\\mu_{ij} + \\delta_{i-1,j}$, $i>j$, is a Poisson\nideal.\n\n\\end{itemize}\n\nWe will generalize this picture to the $q$--difference case.\n\n\\subsection{The Miura transformation}\n\nLet $\\F$ be the manifold of differential operators of the\nform\n\\begin{equation} \\label{cartan}\n\\pa_s +\n\\begin{pmatrix}\nv_1 & 0 & 0 & \\hdots & 0 & 0 \\\\\n-1 & v_2 & 0 & \\hdots & 0 & 0 \\\\\n0 & -1 & v_3 & \\hdots & 0 & 0 \\\\\n\\hdotsfor{6} \\\\\n0 & 0 & 0 & \\hdots & v_{n-1} & 0 \\\\\n0 & 0 & 0 & \\hdots & -1 & v_n\n\\end{pmatrix},\n\\end{equation}\nwhere $\\sum_{i=1}^n v_i = 0$.\n\nWe have a map $\\muu: \\F \\arr \\M$, which is the composition of the\nembedding $\\F \\arr M^J_n$ and the projection $\\pi: M^J_n \\arr \\M$.\n\nUsing the definition of $\\pi$ above, 
$\\muu$ can be described explicitly\nas follows: the image of the operator \\eqref{cartan} under $\\muu$ is\nthe $n$th order differential operator\n$$\\pa_s^n + u_1(s) \\pa_s^{n-2} + \\ldots + u_{n-1}(s) = (\\pa_s +\nv_1(s)) \\ldots (\\pa_s + v_n(s)).$$\n\nThe map $\\muu$ is called the Miura transformation.\n\nWe want to describe the Poisson structure on $\\F$ with respect to\nwhich the Miura transformation is Poisson. To this end, let us\nconsider the restriction of the gauge action (\\ref{coadjoint}) to the\nopposite triangular subgroup $LN_{-};$ let $\\ol{\\mu} :M_n\\rightarrow\nL {\\frak n}_+ \\simeq L{\\frak n}_{-}^{*}$ be the corresponding moment\nmap. The manifold $\\F$ is the intersection of two level surfaces,\n$\\F = \\mu^{-1}(J)\\cap \\ol{\\mu}^{-1}(0).$ It is easy to see that it\ngives a local cross-section for both actions (in other words, the\norbits of $LN$ and $LN_{-}$ are transversal to $F_n$). Hence $F_n$\nsimultaneously provides a local model for the reduced spaces\n$M_n=\\mu^{-1}(J)\/LN$ and $\\ol{\\mu}^{-1}(0)\/LN_{-}$. The Poisson\nbracket on $\\F$ that we need to define is the so-called Dirac\nbracket (see, e.g., \\cite{Flato}), where we may regard the matrix\ncoefficients of $\\ol{\\mu}$ as subsidiary conditions, which fix the\nlocal gauge. The computation of the Dirac bracket for the diagonal\nmatrix coefficients $v_i$ is very simple, since their Poisson brackets\nwith the matrix coefficients of $\\ol{\\mu}$ all vanish on $\\F$. The\nonly correction arises due to the constraint $\\sum_{i=1}^N v_i = 0$.\n\nDenote by $v_{i,n}$ the linear functional on $\\F$, whose value on the\noperator \\eqref{cartan} is the $n$th Fourier coefficient of\n$v_i(s)$. 
We obtain the following formula for the Dirac bracket on\n$\\F$:\n\\begin{align*}\n\\{ v_{i,n},v_{i,m} \\} &= \\frac{N-1}{N} n \\delta_{n,-m}, \\\\ \\{\nv_{i,n},v_{j,m} \\} &= - \\frac{1}{N} n \\delta_{n,-m}, \\quad i<j.\n\\end{align*}\n\nWe will show that for each $A(s) \\in M^{\\al}_{n,q}$ with $\\al>1$, there\nexists $g(s) \\in LN$, such that $g(sq) A(s) g(s)^{-1} \\in\nM^{\\al-1}_{n,q}$. Since the condition is vacuous for $\\al=n$, i.e.\n$M^n_{n,q} = M^J_{n,q}$, this will imply that each $LN$--orbit in\n$M^J_{n,q}$ contains an element of the form \\eqref{qcan}.\n\nTo prove the statement for a given $\\al$, we will recursively\neliminate all entries of the $\\al$th row of $A(s)$ (except the\n$(\\al,\\al-1)$ entry), from right to left using elementary unipotent\nmatrices. Denote by $E_{i,j}(x)$ the upper unipotent matrix whose only\nnon-zero entry above the diagonal is the $(i,j)$ entry equal to\n$x$. At the first step, we eliminate the $(\\al,n)$ entry $A_{\\al,n}$\nof $A(s)$ by applying the $q$--gauge transformation \\eqref{qadjoint}\nwith $g(s) = E_{\\al-1,n}(-A_{\\al,n}(s))$. Then we obtain a new matrix\n$A'(s)$, which still belongs to $M^\\al_{n,q}$, but whose $(\\al,n)$\nentry is equal to $0$. Next, we apply the $q$--gauge transformation by\n$E_{\\al-1,n-1}(-A'_{\\al,n-1}(s))$ to eliminate the $(\\al,n-1)$ entry\nof $A'(s)$, etc. It is clear that at each step we do not spoil the\nentries that have already been set to $0$. The product of the\nelementary unipotent matrices constructed at each step gives us an\nelement $g(s) \\in LN$ with the desired property that $g(sq) A(s)\ng(s)^{-1} \\in M^{\\al-1}_{n,q}$.\n\nTo complete the proof, it suffices to remark that if $A(s)$ and\n$A'(s)$ are of the form \\eqref{qcan}, and $g(sq) A(s) g(s)^{-1} =\nA'(s)$ for some $g(s) \\in LN$, then $A(s)=A'(s)$ and $g(s)=1$.\\qed\n\nFor ${\\cal L}$ of the form \\eqref{special}, $p({\\cal L})$ equals the\noperator $L$ given by formula \\eqref{L}. 
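The elimination procedure in the proof becomes completely explicit in the classical case $\tau=\on{Id}$ (i.e. $q=1$), where the $q$--gauge action reduces to ordinary conjugation. The following Python sketch (an illustration, not part of the original text) brings a matrix with $-1$'s on the subdiagonal and zeros below it to the canonical companion-like form by elementary unipotent conjugations, recovering the canonical representative of a gauge orbit.

```python
import numpy as np

def to_normal_form(A):
    """Bring a matrix with -1 on the subdiagonal and zeros below it to the
    canonical companion-like form by unipotent conjugations A -> g A g^{-1},
    eliminating row alpha = n-1, ..., 1 from right to left (tau = Id case)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for al in range(n - 1, 0, -1):          # fix the bottom rows first
        for j in range(n - 1, al - 1, -1):  # kill entry (al, j), right to left
            x = A[al, j]
            g = np.eye(n); g[al - 1, j] = -x        # E_{al-1, j}(-x)
            g_inv = np.eye(n); g_inv[al - 1, j] = x
            A = g @ A @ g_inv
    return A

C = np.array([[2., 3., 1.], [-1., 0., 0.], [0., -1., 0.]])  # canonical form
u = np.array([[1., 2., 5.], [0., 1., 3.], [0., 0., 1.]])    # unipotent element
A = u @ C @ np.linalg.inv(u)                                # same gauge orbit
print(to_normal_form(A))   # recovers C up to round-off
```

Each elementary conjugation only touches row $\al-1$ and column $j$, so entries already set to $0$ stay zero, exactly as in the proof.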
Thus, we have identified the\nmap $\\pi_q$ with the quotient of $M^J_{n,q}$ by $LN$ and $\\Mq$ with\n$M^J_{n,q}\/LN$.\n\n\\begin{rem}\nIn the same way as above we can prove the following more general\nstatement. Let $R$ be a ring with an automorphism $\\tau$. It gives\nrise to an automorphism of $SL_n(R)$, denoted by the same\nsymbol. Define $M^J_{\\tau,n}$ as the set of elements of $SL_n(R)$\nof the form \\eqref{upper}. Let the group $N(R)$ act on\n$M^J_{\\tau,n}(R)$ by the formula $g \\cdot A = (\\tau \\cdot g) A\ng^{-1}$. Then this action of $N(R)$ is free, and the quotient is\nisomorphic to the set ${\\cal M}^J_{\\tau,n}(R)$ of elements of\n$SL_n(R)$ of the form \\eqref{qcan} (i.e. each orbit contains a unique\nelement of the form \\eqref{qcan}). Note that the proof is not sensitive\nto whether $\\tau=\\on{Id}$ or not.\n\nWhen $\\tau=\\on{Id}$, this result is well-known. It gives the classical\nnormal form of a linear operator. Moreover, in that case R.~Steinberg\nhas proved that the subset ${\\cal M}^J_{\\on{Id},n}(K)$ of $SL_n(K)$,\nwhere $K$ is an algebraically closed field, is a cross-section of the\ncollection of regular conjugacy classes in $SL_n(K)$ \\cite{St},\nTheorem 1.4. Steinberg defined an analogous cross-section for any\nsimply-connected semisimple algebraic group \\cite{St}. His results can\nbe viewed as group analogues of Kostant's results on semisimple Lie\nalgebras \\cite{Ko} (cf. \\remref{kostant}). 
Steinberg's cross-section\nis used in the definition of the discrete Drinfeld-Sokolov reduction\nin the general semisimple case (see \\cite{SS}).\\footnote{We are\nindebted to B.~Kostant for drawing our attention to \\cite{St}}\\qed\n\\end{rem}\n\n\\subsection{Deformed Miura transformation via $q$--gauge action}\n\nLet us attach to each element of $\\Fq$ the $q$--difference operator\n\\beq \\label{Lambda}\n\\La = D + \\begin{pmatrix}\n\\la_1(s) & 0 & \\hdots & 0 & 0 \\\\\n-1 & \\la_2(sq^{-1}) & \\hdots & 0 & 0 \\\\\n\\hdotsfor{5} \\\\\n0 & 0 & \\hdots & \\la_{n-1}(sq^{-n+2}) & 0 \\\\\n0 & 0 & \\hdots & -1 & \\la_n(sq^{-n+1})\n\\end{pmatrix},\n\\end{equation}\nwhere $\\prod_{i=1}^n \\la_i(sq^{-i+1}) = 1$.\n\nLet $\\wt{\\muu}_q: \\Fq \\arr \\Mq$ be the composition of the embedding\n$\\Fq \\arr M^J_{n,q}$ and $\\pi_q: M^J_{n,q} \\arr M^J_{n,q}\/LN \\simeq\n\\Mq$. Using the definition of $\\pi_q$ above, one easily finds that for\n$\\La$ given by \\eqref{Lambda}, $\\wt{\\muu}_q(\\La)$ is the operator\n\\eqref{qcan}, where $t_i(s)$ is given by formula \\eqref{formulai}.\n\nTherefore we obtain\n\n\\begin{lem}\nThe map $\\wt{\\muu}_q$ coincides with the $q$--deformed Miura\ntransformation $\\muu_q$.\n\\end{lem}\n\n\\begin{rem} \\label{qchar}\nLet $G$ be a simply-connected semisimple algebraic group over $\\C$.\nLet $V_i$ be the $i$th fundamental representation of $G$ (in the case\n$G=SL_n$, $V_i = \\Lambda^i \\C^n$), and $\\chi_i: G \\arr \\C$ be the\ncorresponding character, $\\chi_i(g) = \\on{Tr}(g,V_i)$. Define a map\n$p: G \\arr \\C^N$ by the formula $p(g) =\n(\\chi_1(g),\\ldots,\\chi_n(g))$. By construction, $p$ is constant on\nconjugacy classes. 
In the case $G=SL_n$ the map $p$ has a\ncross-section $r: \\C^n \\arr SL_n(\\C)$:\n$$(a_1,\\ldots,a_n) \\mapsto \\begin{pmatrix} a_1 & a_2 & a_3 & \\hdots &\na_{n-1} & 1 \\\\ -1 & 0 & 0 & \\hdots & 0 & 0 \\\\ 0 & -1 & 0 & \\hdots & 0\n& 0 \\\\ \\hdotsfor{6} \\\\ 0 & 0 & 0 & \\hdots & 0 & 0 \\\\ 0 & 0 & 0 &\n\\hdots & -1 & 0\n\\end{pmatrix}.$$ The composition $r \\circ p$, restricted to\n$M^J_{n,1}$, coincides with the map $\\pi_1$. Moreover, $\\wt{\\muu}_1$\ncan be interpreted as the restriction of $p$ to the subset of $SL_n$\nconsisting of matrices of the form \\eqref{Lambda}. Hence $\\wt{\\muu}_1$\nsends $(\\la_1,\\ldots,\\la_n)$ to the elementary symmetric polynomials\n$$t_i = \\sum_{j_1 < \\ldots < j_i} \\la_{j_1} \\la_{j_2} \\ldots\n\\la_{j_i},$$ which are the characters of the fundamental\nrepresentations of $SL_n$. As we mentioned above, Steinberg has defined\nan analogue of the cross-section $r$ for an arbitrary simply-connected\nsemisimple algebraic group \\cite{St}.\n\nFormula \\eqref{formulai} means that in terms of $\\la_j(z)$ the\ngenerators $t_i(z)$ of $\\W_q(\\sw_n)$ can be thought of as\n$q$--deformations of the characters of fundamental representations of\n$SL_n$. It is interesting that the same interpretation is also\nsuggested by the definition of $\\W_q(\\sw_n)$ as the center of a\ncompletion of the quantized universal enveloping algebra\n$U_q(\\sun)_{-\\k}$ \\cite{FR}. Namely, $t_i(z)$ is then defined as the\n($q$--deformed) trace of the so-called $L$--operator acting on\n$\\Lambda^i \\C^n$ considered as a representation of $U_q(\\sun)$, see\n\\cite{ext,FR} (note also that $t_i(z)$ is closely connected with a\ntransfer-matrix of the corresponding integrable spin model).\\qed\n\\end{rem}\n\nThus, we have now represented $\\Mq$ as the quotient of the submanifold\n$M^J_{n,q}$ of the manifold $M_{n,q}$ of first order $q$--difference\noperators by the action of the group $LN$ (acting by $q$--gauge\ntransformations). 
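The claim that $r$ is a section of $p$, with the $\chi_i$ reducing to elementary symmetric polynomials, is easy to check numerically. The following Python sketch (an illustration only) builds the cross-section matrix for $n=3$ and reads the fundamental characters off its characteristic polynomial; they indeed return $a_1, a_2$ and $\det = 1$.

```python
import numpy as np

def cross_section(a):
    """Steinberg-type cross-section: (a_1, ..., a_{n-1}) -> SL_n matrix with
    first row (a_1, ..., a_{n-1}, 1) and -1's on the subdiagonal."""
    n = len(a) + 1
    M = np.zeros((n, n))
    M[0, :-1] = a
    M[0, -1] = 1.0
    M[np.arange(1, n), np.arange(n - 1)] = -1.0
    return M

def fundamental_characters(M):
    """chi_i(M) = Tr(M, Lambda^i C^n), the i-th elementary symmetric polynomial
    of the eigenvalues, read off the characteristic polynomial det(tI - M)."""
    c = np.poly(M)                 # det(tI - M) = sum_k c[k] t^(n-k)
    n = M.shape[0]
    return [(-1) ** i * c[i] for i in range(1, n + 1)]

M = cross_section([2.0, 3.0])
print(np.round(fundamental_characters(M), 6))  # approximately [2. 3. 1.]
```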
We have also interpreted the $q$--deformed Miura\ntransformation in these terms. In the next sections we discuss the\nPoisson structure on $M_{n,q}$, which gives rise to the Poisson\nstructure on $\\Mq$ given by explicit formula \\eqref{p2}.\n\n\\section{Poisson structures} \\label{P}\n\n\\subsection{Overview} \\label{over}\nIn view of the previous section, the following diagram is the\n$q$--difference analogue of the diagram presented at the end of\n\\secref{diff1}.\n\n\n\\begin{center}\n\\begin{picture}(125,40)(-30,-33) \n\\put(0,-1){\\vector(0,-1){26}}\n\\put(62,-1){\\vector(0,-1){26}}\n\\put(9,3){\\vector(1,0){45}}\n\\put(9,-31){\\vector(1,0){45}}\n\\put(8.9,3.9){\\oval(1,1)[l]\n\\put(8.9,-30.1){\\oval(1,1)[l]\n\\put(-5,2){$M_{n,q}^J$}\n\\put(-26,-32){$\\Mq=M_{n,q}^J\/LN$}\n\\put(57,2){$M_{n,q}=(LG,\\,\\eta_*^q)$}\n\\put(57,-32){$M_{n,q}\/LN$}\n\n\\end{picture}\n\\end{center}\n\n\nAs in the differential case, in order to define a $q$--deformation of\nthe Drinfeld-Sokolov reduction we need to find a Poisson structure\n$\\eta_*^q$ on $M_{n,q}$ and a Poisson-Lie structure $\\eta$ on $LSL_n$\nsatisfying the following properties.\n\n\\begin{itemize}\n\n\\item[(i)] the action $LSL_n \\times M_{n,q} \\arr M_{n,q}$ by\n$q$--gauge transformations is Poisson;\n\n\\item[(ii)] the subgroup $LN$ of $LSL_n$ is admissible in the sense\nthat the algebra $S_q$ of $LN$--invariant functionals on $M_{n,q}$\nis a Poisson subalgebra of the algebra of all functionals on $M_{n,q}$;\n\n\\item[(iii)] Denote by $\\mu_{ij}$ the function on $M_{n,q}$, whose\nvalue at $D + A \\in M_{n,q}$ equals the $(i,j)$ entry of $A$. The\nideal in $S_q$ generated by $\\mu_{ij} + \\delta_{i-1,j}, i>j$, is a\nPoisson ideal.\n\n\\end{itemize}\n\nGeometrically, the last condition means that $\\Mq$ is a Poisson\nsubmanifold of the quotient $M_{n,q}\/LN$.\n\nFor the sake of completeness, we recall the notions mentioned\nabove. 
Let $M$ be a Poisson manifold, and $H$ be a Lie group, which is\nitself a Poisson manifold. An action of $H$ on $M$ is called Poisson\nif $H \\times M\\rightarrow M$ is a Poisson map (here we equip $H\\times\nM$ with the product Poisson structure). In particular, if the\nmultiplication map $H \\times H \\arr H$ is Poisson, then $H$ is called\na Poisson-Lie group.\n\nIn this section we describe the general formalism concerning problems\n(i)--(iii) above. Then in the next section we specialize to $M_{2,q}$\nand give an explicit solution of these problems.\n\n\\subsection{Lie bialgebras} \\label{bialg}\nLet $\\g$ be a Lie algebra. Recall \\cite{Dr} that $\\g$ is called a Lie\nbialgebra if $\\g^*$ also has a Lie algebra structure, such that the dual\nmap $\\delta: \\g \\arr \\Lambda^2 \\g$ is a one-cocycle. We will consider {\\em\nfactorizable} Lie bialgebras ($\\g,\\delta$) satisfying the following\nconditions:\n\n\\begin{itemize}\n\n\\item[(1)] There exists a linear map $r_+: \\g^* \\arr \\g$, such that both\n$r_+$ and $r_-= -r_+^*$ are Lie algebra homomorphisms.\n\n\\item[(2)] The endomorphism $t = r_+ -r_-$ is $\\g$-equivariant and\ninduces a linear isomorphism $\\g^*\\arr\\g$.\n\n\\end{itemize}\n\nInstead of the linear operator $r_+ \\in \\on{Hom}(\\g^*,\\g)$ one often\nconsiders the corresponding element $r$ of $\\g^{\\ot 2}$ (or a\ncompletion of $\\g^{\\ot 2}$ if $\\g$ is infinite-dimensional). The\nelement $r$ (or its image in the tensor square of a particular\nrepresentation of $\\g$) is called the classical $r$--matrix. It satisfies\nthe classical Yang-Baxter equation:\n\\begin{equation} \\label{yb}\n[r_{12},r_{13}] + [r_{12},r_{23}] + [r_{13},r_{23}] = 0.\n\\end{equation}\nIn terms of $r$, $\\delta(x)=[r,x], \\forall x \\in \\g$ (here $[a \\otimes\nb,x] = [a,x] \\otimes b + a \\otimes [b,x]$). 
The maps $r_\\pm: \\g^* \\arr\n\\g$ are given by the formulas: $r_+(y) = (y \\otimes \\on{id})(r),\nr_-(y) = - (\\on{id} \\otimes y)(r)$.\n\nProperty (2) above means that $r+\\sigma(r)$, where $\\sigma(a \\otimes\nb) = b \\otimes a$, is a non-degenerate $\\g$--invariant symmetric\nbilinear form on $\\g^*$.\n\nSet $\\g_\\pm = \\on{Im}(r_\\pm)$. Property (1) above implies that\n$\\g_\\pm \\subset \\g$ is a Lie subalgebra. The following statement is\nessentially contained in \\cite{BD} (cf. also \\cite{rmatr}).\n\n\\begin{lem}\nLet $(\\g, \\g^*)$ be a factorizable Lie bialgebra. Then\n\n(1) The subspace $\\n_\\pm = r_\\pm(\\on{Ker} r_\\mp)$ is a Lie ideal in\n$\\g_\\pm$.\n\n(2) The map $\\theta: \\g_+\/\\n_+\\arr\\g_-\/\\n_-$, which sends the\nresidue class of $r_+(X), X\\in\\g^*$, modulo $\\n_+$ to that of $r_-(X)$\nmodulo $\\n_-$, is a well-defined isomorphism of Lie algebras.\n\n(3) Let $\\D=\\g\\oplus\\g$ be the direct sum of two copies of\n$\\g$. The map $$i: \\g^* \\arr \\D, \\quad \\quad X \\mapsto\n(r_+(X),r_-(X))$$ is a Lie algebra embedding; its image $\\g^* \\subset\n\\D$ is $$\\g^* = \\{(X_+,X_-) \\in \\g_+ \\oplus \\g_- \\subset \\D | \\ol{X}_- =\n\\theta(\\ol{X}_+) \\},$$ where $\\ol{Y}_\\pm = Y \\on{mod} \\n_\\pm$.\n\\end{lem}\n\n\\begin{rem} The connection between our notation and that of\n\\cite{RIMS} is as follows: the operator $r \\in \\on{End}\\g$ of\n\\cite{RIMS} coincides with the composition of $r_+ + r_-$ up to the\nisomorphism $t = r_+ - r_-: \\g^* \\arr \\g$; the bilinear form used in\n\\cite{RIMS} is induced by $t$.\\qed\n\\end{rem}\n\n\\subsection{Poisson-Lie groups and gauge transformations}\n\\label{adj}\nLet ($G,\\eta$) (resp., \\newline ($G^*,\\eta^*$)) be a Poisson-Lie group\nwith factorizable tangent Lie bialgebra ($\\g,\\delta$) (resp.,\n($\\g^*,\\delta^*$)). Let $G_\\pm$ and $N_\\pm$ be the Lie subgroups of\n$G$ corresponding to the Lie subalgebras $\\g_\\pm$ and $\\n_\\pm$. 
We\ndenote by the same symbol $\\theta$ the isomorphism $G_+\/N_+ \\arr\nG_-\/N_-$ induced by $\\theta: \\g_+\/\\n_+ \\arr \\g_-\/\\n_-$. Then the group\n$G^*$ is isomorphic to $$\\{ (g_+,g_-) \\in G_+ \\times G_- |\n\\theta(\\ol{g}_+) = \\ol{g}_- \\},$$ and we have a map $i: G^* \\arr G$\ngiven by $i((g_+,g_-)) = g_+ (g_-)^{-1}$.\n\nExplicitly, Poisson bracket on ($G,\\eta$) can be written as follows:\n\\beq \\label{skl} \\{ \\vf,\\psi \\} = \\langle r,\\nb \\vf \\wedge \\nb \\psi -\n\\nb' \\vf \\wedge \\nb' \\psi \\rangle,\n\\end{equation}\nwhere for $x \\in G$, $\\nb \\vf(x), \\nb' \\vf(x) \\in \\g^*$ are defined by\nthe formulas:\n\\begin{align} \\label{nabla}\n\\langle \\nb \\vf(x),\\xi \\rangle &= \\frac{d}{dt} \\vf\\left( e^{t\\xi} x\n\\right)|_{t=0},\\\\ \\langle \\nb' \\vf(x),\\xi \\rangle &= \\frac{d}{dt}\n\\vf\\left( x e^{t\\xi} \\right)|_{t=0},\n\\end{align}\nfor all $\\xi \\in \\g$. Analogous formula can be written for the\nPoisson bracket on ($G^*,\\eta^*$). In formula \\eqref{skl} we use the\nstandard notation $a \\wedge b = (a \\ot b - b \\ot a)\/2$.\n\nBy definition, the action of $G$ on itself by left translations is a\nPoisson group action. There is another Poisson structure $\\eta_*$ on\n$G$ which is covariant with respect to the adjoint action of $G$ on\nitself and such that the map $i: (G^*,\\eta^*) \\arr (G,\\eta_*)$ is\nPoisson. It is given by the formula \\beq \\label{another} \\{ \\vf,\\psi\n\\} = \\langle r,\\nb \\vf \\wedge \\nb \\psi + \\nb' \\vf \\wedge \\nb' \\psi\n\\rangle - \\langle r, \\nb' \\vf \\ot \\nb \\psi - \\nb' \\psi \\ot \\nb \\vf\n\\rangle.\n\\end{equation}\n\n\\begin{prop} \\label{embed}\n(1) The map $i: G^* \\arr G$ is a Poisson map between the Poisson\nmanifolds ($G^*,\\eta^*$) and ($G,\\eta_*$);\n\n(2) The Poisson structure $\\eta_*$ on $G$ is covariant with respect to\nthe adjoint action, i.e. 
the map\n$$(G,\\eta) \\times (G,\\eta_*) \\arr (G,\\eta_*): (g,h) \\mapsto g h\ng^{-1}$$ is a Poisson map.\n\\end{prop}\n\nThese results are proved in \\cite{RIMS}, \\S~3 (see also \\cite{dual},\n\\S~2), using the notion of the Heisenberg double of $G$. Formula\n\\eqref{another} can also be obtained directly from the explicit\nformulas for the Poisson structure $\\eta^*$ and for the embedding $i$.\n\nMore generally, let $\\tau$ be an automorphism of $G$, such that the\ncorresponding automorphism of $\\g$ satisfies $(\\tau \\ot \\tau)(r) =\nr$. Define a twisted Poisson structure $\\eta_*^\\tau$ on $G$ by the\nformula\n\\begin{align}\n\\label{mainb1} \\{ \\vf,\\psi \\} &= \\langle r,\\nb \\vf \\wedge \\nb \\psi + \\nb'\n\\vf \\wedge \\nb' \\psi \\rangle \\\\ \\notag &- \\langle (\\tau \\otimes\n\\on{id})(r), \\nb' \\vf \\ot \\nb \\psi - \\nb' \\psi \\ot \\nb \\vf \\rangle,\n\\end{align}\nand the twisted adjoint action of $G$ on itself by the formula $g\n\\cdot h = \\tau(g) h g^{-1}$.\n\n\\begin{thm} \\label{act1}\n{\\em The Poisson structure $\\eta_*^\\tau$ on $G$ is covariant with\nrespect to the twisted adjoint action, i.e. the map\n$$(G,\\eta) \\times (G,\\eta_*^\\tau) \\arr (G,\\eta_*^\\tau): (g,h) \\mapsto\n\\tau(g) h g^{-1}$$ is a Poisson map.}\n\\end{thm}\n\nThis result was proved in \\cite{RIMS}, \\S~3 (see also \\cite{dual},\n\\S~2), using the notion of the twisted Heisenberg double of $G$. We\nwill use \\thmref{act1} in two cases. In the first, $G$ is the loop\ngroup of a finite-dimensional simple Lie group $\\ol{G}$, and $\\tau$ is\nthe automorphism $g(s) \\arr g(sq), q \\in \\C^\\times$. In the second, $G\n= \\ol{G}^{\\Z\/N\\Z}$, and $\\tau$ is the automorphism $(\\tau(g))_i =\ng_{i+1}$. 
In the first case twisted conjugations coincide with\n$q$--gauge transformations, and in the second case they coincide with\nlattice gauge transformations.\n\n\\subsection{Admissibility and constraints} \\label{admcon}\n\nLet $M$ be a Poisson manifold, $G$ a Poisson Lie group and $G \\times M\n\\arr M$ be a Poisson action. A subgroup $H\\subset G$ is called\nadmissible if the space $C^\\infty(M)^H$ of $H$--invariant functions on\n$M$ is a Poisson subalgebra in the space $C^\\infty(M)$ of all\nfunctions on $M$.\n\n\\begin{prop}[\\cite{RIMS},Theorem 6] \\label{admiss}\nLet $\\left( {\\frak g},{\\frak g}^{*}\\right) $ be the tangent\nLie bialgebra of $G.$ A connected Lie subgroup $H\\subset G$ with Lie algebra \n${\\frak h}\\subset {\\frak g}$ is admissible if ${\\frak h}^{\\perp }\\subset\n{\\frak g}^{*}$ is a Lie subalgebra.\n\\end{prop}\n\nIn particular, $G$ itself is admissible. Note that $H\\subset G$ is a\nPoisson subgroup if and only if ${\\frak h}^{\\perp }\\subset {\\frak\ng}^{*}$ is an ideal; in that case the tangent Lie bialgebra of $H$ is\n$\\left( {\\frak h},{\\frak g}^{*}\/{\\frak h}^{\\bot }\\right)$.\n\nLet $H\\subset G$ be an admissible subgroup, and $I$ be a Poisson ideal\nin $C^\\infty(M)^H$, i.e. $I$ is an ideal in the ring $C^\\infty(M)^H$,\nand $\\{ f,g \\} \\in C^\\infty(M)^H$ for all $f \\in I, g \\in\nC^\\infty(M)^H$. Then $C^\\infty(M)^H\/I$ is a Poisson algebra.\n\nMore geometrically, the Poisson structure on $C^\\infty(M)^H\/I$ can be\ndescribed as follows. Assume that the quotient $M\/H$ exists as a smooth\nmanifold. Then there exists a Poisson structure on $M\/H$ such that the\ncanonical projection $\\pi: M\\rightarrow M\/H$ is a Poisson map. Hamiltonian\nvector fields $\\xi_\\varphi ,\\varphi \\in \\pi ^{*}C^\\infty(M\/H),$ generate an\nintegrable distribution ${\\frak H}_\\pi$ in $TM$. The following result is\nstraightforward.\n\n\\begin{lem}\n\\label{poisson}\nLet $V\\subset M$ be a submanifold preserved by $H$. 
Then $V\/H$ is a\nPoisson submanifold of $M\/H$ if and only if $V$ is an integral\nmanifold of ${\\frak H}_\\pi.$\n\\end{lem}\n\nThe integrality condition means precisely that the ideal $I$ of all\n$H$--invariant functions on $M$ vanishing on $V$ is a Poisson ideal in\n$C^\\infty(M)^H$, and $C^\\infty(V\/H) = C^\\infty(V)^H$ $=\nC^\\infty(M)^H\/I$. If this property holds, we will say that the Poisson\nstructure on $M\/H$ can be restricted to $V\/H$. \n\n\\begin{center}\n\\begin{picture}(125,25)(-30,-18) \n\\put(0,0){\\vector(0,-1){12}}\n\\put(50,0){\\vector(0,-1){12}}\n\\put(9,3){\\vector(1,0){35}}\n\\put(9,-16){\\vector(1,0){35}}\n\\put(8.9,3.9){\\oval(1,1)[l]\n\\put(8.9,-15.1){\\oval(1,1)[l]\n\\put(-1,2){$V$}\n\\put(-3,-17){$V\/H$}\n\\put(48,2){$M$}\n\\put(47,-17){$M\/H$}\n\n\\end{picture}\n\\end{center}\n\nThe Poisson structure on $V\/H$ can be described as follows. Let\n$N_V\\subset T^{*}M\\mid_V$ be the conormal bundle of $V$. Clearly,\n$T^{*}V\\simeq T^{*}M\\mid _V\/N_V.$ Let $\\varphi ,\\psi \\in C(V)^H$ and\n$\\overline{d\\varphi},\\overline{d\\psi}\\in T^{*}M\\mid _V$ be any\nrepresentatives of $d\\varphi,d\\psi \\in T^{*}V.$ Let $P_M\\in\n\\bigwedge^2T\\,M$ be the Poisson tensor on $M$.\n\n\\begin{lem}\n\\label{Reduce}\nWe have \n\\begin{equation}\n\\left\\{ \\varphi ,\\psi \\right\\} = \\left\\langle\nP_M,\\overline{d\\varphi }\\ot \\overline{d\\psi}\\right\\rangle ;\n\\label{Pbr}\n\\end{equation}\nin particular, the right hand side does not depend on the choice of\n$\\overline{d\\varphi},\\overline{d\\psi}.$\n\\end{lem}\n\n\\begin{rem}\nIn the case of Hamiltonian action (i.e. when the Poisson structure on\n$H$ is trivial), one can construct submanifolds $V$ satisfying the\ncondition of \\lemref{poisson} using the moment map. Although a similar\nnotion of the nonabelian moment map in the context of Poisson group\ntheory is also available \\cite{Lu}, it is less convenient. 
The reason\nis that the nonabelian moment map is ``less functorial'' than the\nordinary moment map. Namely, if $G\\times M\\rightarrow M$ is a\nHamiltonian action with moment map $\\mu_G: M\\rightarrow {\\frak\ng}^{*},$ its restriction to a subgroup $H\\subset G$ is also\nHamiltonian with moment $\\mu_H=p \\circ \\mu_G$ (here $p: {\\frak\ng}^{*}\\rightarrow {\\frak h}^{*}$ is the canonical projection). If $G$\nis a Poisson-Lie group, $G^{*}$ its dual, $G\\times M\\rightarrow M$ a\nPoisson group action with moment $\\mu_G: M\\rightarrow G^{*},$ and\n$H\\subset G$ a Poisson subgroup, the action of $H$ still admits a\nmoment map. But if $H\\subset G$ is only admissible, then the\nrestricted action does not usually have a moment map. This is\nprecisely the case which is encountered in the study of the\n$q$--deformed Drinfeld-Sokolov reduction.\\qed\n\\end{rem}\n\n\\section{The $q$--deformed Drinfeld-Sokolov reduction in the case of\n$SL_2$}\n\nIn this section we apply the general results of the previous section\nto formulate a $q$--analogue of the Drinfeld-Sokolov reduction when\n$G=SL_2$.\n\n\\subsection{Choice of $r$--matrix} Let $\\g = L\\sw_2$. We would like\nto define a factorizable Lie bialgebra structure on $\\g$ in such a way\nthat the resulting Poisson-Lie structure $\\eta$ on $LSL_2$ and the\nPoisson structure $\\eta_*^q$ on $M_{2,q}$ satisfy the conditions\n(ii)--(iii) of \\secref{over}.\n\nLet $\\{ E,H,F \\}$ be the standard basis in $\\sw_2$ and $\\{ E_n,H_n,F_n\n\\}$ be the corresponding (topological) basis of $L\\sw_2 = \\sw_2 \\ot\n\\C((s))$ (here for each $A \\in \\sw_2$ we set $A_n = A \\ot s^n \\in\nL\\sw_2$). Let $\\tau$ be the automorphism of $L\\sw_2$ defined by the\nformula $\\tau(A(s)) = A(sq)$ (we assume that $q$ is generic). We have:\n$\\tau \\cdot A_n = q^n A_n$. 
To be able to use \\thmref{act1}, the\n$r$--matrix $r \\in L\\sw_2^{\\ot 2}$ defining the Lie bialgebra\nstructure on $L\\sw_2$ has to satisfy the condition $(\\tau \\ot \\tau)(r)\n= r$. Hence the invariant bilinear form on $L\\sw_2$ defined by the\nsymmetric part of $r$ should also be $\\tau$--invariant.\n\nThe Lie algebra $L\\sw_2$ has a unique (up to a non-zero constant\nmultiple) invariant non-degenerate bilinear form, which is invariant\nunder $\\tau$. It is defined by the formulas\n$$(E_n,F_m) = \\delta_{n,-m}, \\quad \\quad (H_n,H_m) = 2\n\\delta_{n,-m},$$ with all other pairings between the basis elements\nequal to $0$. This fixes the symmetric part of the element $r$. Another\ncondition on $r$ is that the subgroup $LN$ is admissible. According to\n\\propref{admiss}, this means that $L\\n_+^\\perp$ should be a Lie\nsubalgebra of $L\\sw_2^*$.\n\nA natural example of $r$ satisfying these two conditions is given by\nthe formula: \\beq \\label{new} r_0 = \\sum_{n \\in \\Z} E_n \\ot F_{-n} +\n\\frac{1}{4} H_0 \\ot H_0 + \\pol \\sum_{n>0} H_n \\ot H_{-n}.\n\\end{equation}\nIt is easy to verify that this element defines a factorizable Lie\nbialgebra structure on $\\g$. We remark that this Lie bialgebra\nstructure gives rise to Drinfeld's ``new'' realization of the\nquantized enveloping algebra associated to $L\\sw_2$\n\\cite{nr,kh-t,ked}. As we will see in the next subsection, $r_0$ cannot\nbe used for the $q$--deformed Drinfeld-Sokolov reduction. However,\nthe following crucial fact will enable us to perform the\nreduction. 
Let $L{\\frak h}$ be the loop algebra of the Cartan\nsubalgebra ${\\frak h}$ of $\\sw_2$.\n\n\\begin{lem} \\label{defi}\nFor any $\\rho \\in \\wedge^2 L{\\frak h}$, $r_0 + \\rho$ defines a\nfactorizable Lie bialgebra structure on $L\\sw_2$, such that\n$L\\n_+^\\perp$ is a Lie subalgebra of $L\\sw_2^*$.\n\\end{lem}\n\nThe fact that $r_0 +\\rho$ still satisfies the classical Yang-Baxter\nequation is a general property of factorizable $r$--matrices\ndiscovered in \\cite{BD}. \\lemref{defi} allows us to consider the\nclass of elements $r$ given by the formula\n\\beq \\label{class1} r = \\sum_{n \\in\n\\Z} E_n \\ot F_{-n} + \\frac{1}{2} \\sum_{m,n \\in \\Z} \\phi_{n,m} \\cdot\nH_n \\ot H_m,\n\\end{equation}\nwhere $\\phi_{n,m} + \\phi_{m,n} = \\delta_{n,-m}$. The condition $(\\tau\n\\ot \\tau)(r) = r$ imposes the restriction $\\phi_{n,m} = \\phi_n\n\\delta_{n,-m}$, so that \\eqref{class1} takes the form \\beq\n\\label{class} r = \\sum_{n \\in \\Z} E_n \\ot F_{-n} + \\frac{1}{2} \\sum_{n\n\\in \\Z} \\phi_n \\cdot H_n \\ot H_{-n},\n\\end{equation}\nwhere $\\phi_n + \\phi_{-n} = 1$.\n\n\\subsection{The reduction}\nRecall that $M_{2,q} = LSL_2 = SL_2((s))$ consists of $2 \\times 2$\nmatrices\n\\beq \\label{two}\nM(s) = \\begin{pmatrix} a(s) & b(s) \\\\ c(s) & d(s) \\end{pmatrix}, \\quad\n\\quad ad - bc = 1.\n\\end{equation}\nWe want to impose the constraint $c(s) = -1$, i.e. consider the\nsubmanifold $M_{2,q}^J$ and take its quotient by the (free) action of\nthe group $$LN = \\left\\{ \\begin{pmatrix} 1 & x(s) \\\\ 0 & 1\n\\end{pmatrix} \\right\\}.$$ Let $\\eta$ be the Poisson-Lie structure on\n$LSL_2$ induced by $r$ given by formula \\eqref{class}.\n\nLet $\\eta_*^q$ be the Poisson structure on $M_{2,q}$ defined by\nformula \\eqref{mainb1}, corresponding to the automorphism $\\tau: g(s)\n\\arr g(sq)$. 
The following is an immediate corollary of \\thmref{act1},\n\\propref{admiss} and \\lemref{defi}.\n\n\\begin{prop}\n(1) The $q$--gauge action of ($LSL_2,\\eta$) on ($M_{2,q},\\eta^q_*$)\ngiven by formula $g(s) \\cdot M(s) = g(sq) M(s) g(s)^{-1}$ is Poisson;\n\n(2) The subgroup $LN \\subset LSL_2$ is admissible.\n\\end{prop}\n\nThus, we have satisfied properties (i) and (ii) of \\secref{over}. Now\nwe have to choose the remaining free parameters $\\phi_n$ so as to\nsatisfy property (iii).\n\nThe Fourier coefficients of the matrix elements of the matrix $M(s)$\ngiven by \\eqref{two} define functions on $M_{2,q}$. We will use the\nnotation $a_m$ for the $m$th Fourier coefficient of $a(s)$. Let\n$R_{2,q}$ be the completion of the ring of polynomials in $a_m, b_m,\nc_m, d_m, m \\in \\Z$, defined in the same way as the ring $\\RN$ of\nSect.~2.3. Let $S_{2,q} \\subset R_{2,q}$ be the subalgebra of\n$LN$--invariant functions. Denote by $I$ the ideal of $S_{2,q}$\ngenerated by $\\{ c_n + \\delta_{n,0}, n\\in\\Z \\}$ (the defining ideal of\n$M_{2,q}^J$).\n\nProperty (iii) means that $I$ is a Poisson ideal of $S_{2,q}$, which\nis equivalent to the condition that $\\{ c_n,c_m \\} \\in I$, i.e. that\n$\\{ c_n,c_m \\}$ vanishes on $M_{2,q}^J$. This condition means that\nthe Poisson bracket of the constraint functions vanishes on the\nconstraint surface, i.e. the constraints are of first class according\nto Dirac.\n\nLet us compute the Poisson bracket between $c_n$'s. First, we list the\nleft and right gradients for the functions $a_n,b_n,c_n,d_n$ (for this\ncomputation we only need the gradients of $c_n$'s, but we will soon\nneed other gradients as well). It will be convenient for us to\nidentify $L\\sw_2$ with its dual using the bilinear form introduced in\nthe previous section.
Note that with respect to this bilinear form the\ndual basis elements to $E_n, H_n$, and $F_n$ are $F_{-n}, H_{-n}\/2$,\nand $E_{-n}$, respectively.\n\nExplicit computation gives (for shorthand, we write $a$ for $a(s)$,\netc.):\n$$\\nb a_m = s^{-m} \\begin{pmatrix} \\pol a & 0 \\\\ c & - \\pol a\n\\end{pmatrix}, \\quad \\quad \\nb b_m = s^{-m} \\begin{pmatrix} \\pol b & 0\n\\\\ d & - \\pol b \\end{pmatrix},$$\n$$\\nb c_m = s^{-m} \\begin{pmatrix} - \\pol c & a \\\\ 0 & \\pol c\n\\end{pmatrix}, \\quad \\quad \\nb d_m = s^{-m} \\begin{pmatrix} - \\pol d & b\n\\\\ 0 & \\pol d \\end{pmatrix},$$\n$$\\nb' a_m = s^{-m} \\begin{pmatrix} \\pol a & b \\\\ 0 & - \\pol a\n\\end{pmatrix}, \\quad \\quad \\nb' b_m = s^{-m} \\begin{pmatrix} - \\pol b\n& 0 \\\\ a & \\pol b \\end{pmatrix},$$\n$$\\nb' c_m = s^{-m} \\begin{pmatrix} \\pol c & d \\\\ 0 & - \\pol c\n\\end{pmatrix}, \\quad \\quad \\nb' d_m = s^{-m} \\begin{pmatrix} - \\pol d & 0\n\\\\ c & \\pol d \\end{pmatrix}.$$
setting $c_n = -\\delta_{n,0}$, we\nobtain:\n$$\\{ c_m,c_k \\}|_{M_{2,q}^J} = \\pol \\left( \\phi_m -\n\\phi_{-m} + \\phi_m q^m - \\phi_{-m} q^{-m} \\right) \\delta_{m,-k}.$$ This\ngives us the following equation on $\\phi_m$'s: $$\\phi_m - \\phi_{-m} +\n\\phi_m q^m - \\phi_{-m} q^{-m} = 0.$$ Together with the previous condition\n$\\phi_m + \\phi_{-m} = 1$, this determines $\\phi_m$'s uniquely:\n\n\\begin{thm} \\label{third}\n{\\em The Poisson structure $\\eta_*^q$ satisfies property (iii) of the\n$q$--deformed Drin\\-feld-Sokolov reduction if and only if} $$\\phi_n =\n\\frac{1}{1+q^n}.$$\n\\end{thm}\n\nConsider the $r$--matrix \\eqref{class1} with $\\phi_n = (1+q^n)^{-1}$.\nFor this $r$--matrix, the Lie algebras defined in section\n\\secref{bialg} are as follows: $\\g_\\pm = L{\\frak b}_\\mp, {\\frak n}_\\pm\n= L{\\frak n}_\\mp$, where ${\\frak n}_+ = \\C E, {\\frak n}_- = \\C F,\n{\\frak b}_\\pm = {\\frak h} \\oplus {\\frak n}_\\pm$. We have:\n$\\g_\\pm\/{\\frak n}_\\pm \\simeq L{\\frak h}$. The transformation $\\theta$\non $L{\\frak h}$ induced by this $r$--matrix is equal to $-\\tau$.\n\nExplicitly, on the tensor product of the two $2$--dimensional\nrepresentations of $\\sw_2((t))$, the $r$--matrix looks as follows:\n\\begin{equation} \\label{explicit}\n\\begin{pmatrix} \\vff & 0 & 0 & 0 \\\\ 0 & - \\vff & \\delta \\left(\n\\frac{t}{s} \\right) & 0 \\\\ 0 & 0 & - \\vff & 0 \\\\ 0 & 0 & 0 & \\vff\n\\end{pmatrix},\n\\end{equation}\nwhere $$\\phi(x) = \\pol \\sum_{n \\in \\Z} \\frac{1}{1+q^n} x^n.$$ Note\nthat $2 \\pi \\phi(xq^{1\/2})$ coincides with the power series expansion\nof the Jacobi elliptic function $dn$ (delta of amplitude).\n\nNow we have satisfied all the necessary properties on the Poisson\nstructures and hence can perform the $q$--Drinfeld-Sokolov reduction\nof \\secref{qgauge} at the level of Poisson algebras.
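For completeness, the value of $\phi_n$ in \thmref{third} follows by elementary algebra from the two conditions on $\phi_m$ displayed above; the following is a reconstruction of the omitted step (not spelled out in the text):

```latex
% From  \phi_m + \phi_{-m} = 1  and
%       \phi_m - \phi_{-m} + \phi_m q^m - \phi_{-m} q^{-m} = 0,
% i.e.  \phi_m (1+q^m) = \phi_{-m} (1+q^{-m}),
% substitute \phi_{-m} = 1 - \phi_m:
\begin{align*}
\phi_m (1+q^m) &= (1-\phi_m)(1+q^{-m}), \\
\phi_m \left( 2 + q^m + q^{-m} \right) &= 1+q^{-m}, \\
\phi_m &= \frac{1+q^{-m}}{(1+q^m)(1+q^{-m})} = \frac{1}{1+q^m},
\end{align*}
% where we used (1+q^m)(1+q^{-m}) = 2 + q^m + q^{-m}.
```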
In the next\nsubsection we check that it indeed gives us the Poisson bracket\n\\eqref{vira} on the reduced space ${\\cal M}_{2,q} = M_{2,q}^J\/LN$.\n\n\\subsection{Explicit computation of the Poisson brackets}\n\nIntroduce the generating series $$A(z) = \\sum_{n \\in \\Z} a_n z^{-n},$$\nand the same for other matrix elements of $M(s)$ given by formula\n\\eqref{two}. We fix the element $r$ by setting $\\phi_n = (1+q^n)^{-1}$\nin formula \\eqref{class} in accordance with \\thmref{third}. Denote\n\\beq\n\\label{fi} \\vf(z) = \\sum_{n \\in \\Z} (\\phi_n - \\phi_{-n}) z^n = \\sum_{n \\in\n\\Z} \\frac{1-q^n}{1+q^n} z^n.\n\\end{equation}\nUsing the formulas for the gradients of the matrix elements given in\nthe previous section and formula \\eqref{mainb1} for the Poisson\nbracket, we obtain the following explicit formulas for the Poisson\nbrackets.\n\n\\begin{align*}\n\\{ A(z),A(w) \\} &= \\vf \\left( \\frac{w}{z} \\right) A(z) A(w), \\\\\n\\{ A(z),B(w) \\} &= - \\delta \\left( \\frac{w}{z} \\right) A(z) B(w), \\\\\n\\{ A(z),C(w) \\} &= \\delta \\left( \\frac{w}{z} \\right) A(z) C(w), \\\\\n\\{ A(z),D(w) \\} &= - \\vf \\left( \\frac{w}{z} \\right) A(z) D(w),\\\\\n\\{ B(z),B(w) \\} &= 0, \\\\\n\\{ B(z),C(w) \\} &= \\delta \\left( \\frac{w}{z} \\right) A(z) D(w) -\n\\delta \\left( \\frac{wq}{z} \\right) A(z) A(w), \\\\\n\\{ B(z),D(w) \\} &= - \\delta \\left( \\frac{wq}{z} \\right) A(z) B(w), \\\\\n\\{ C(z),C(w) \\} &= 0, \\\\\n\\{ C(z),D(w) \\} &= \\delta \\left( \\frac{w}{zq} \\right) A(z) C(w),\\\\\n\\{ D(z),D(w) \\} &= \\vf \\left( \\frac{w}{z} \\right) D(z) D(w) - \\delta\n\\left( \\frac{wq}{z} \\right) C(z) B(w) + \\delta \\left( \\frac{w}{zq}\n\\right) B(z) C(w).\n\\end{align*}\n\n\\begin{rem}\nThe relations above can be presented in matrix form as follows. Let\n$$L(z) = \\begin{pmatrix} A(z) & B(z) \\\\ C(z) & D(z) \\end{pmatrix},$$\nand consider the operators $L_1 = L \\ot \\on{id}, L_2 = \\on{id} \\ot L$\nacting on $\\C^2 \\ot \\C^2$. 
The $r$--matrix \\eqref{explicit} also acts\non $\\C^2 \\ot \\C^2$. Formula \\eqref{mainb1} can be written as follows:\n\\begin{align*}\n\\{ L_1(z),L_2(w) \\} &= \\pol r^- \\left( \\frac{w}{z} \\right) L_1(z)\nL_2(w) + \\pol L_1(z) L_2(w) r^- \\left( \\frac{w}{z} \\right) \\\\ &-\nL_1(z) r \\left( \\frac{wq}{z} \\right) L_2(w) + L_2(w) \\sigma(r) \\left(\n\\frac{zq}{w} \\right) L_1(z),\n\\end{align*}\nwhere $$r^- \\left( \\frac{w}{z} \\right) = r \\left( \\frac{w}{z} \\right)\n- \\sigma(r) \\left( \\frac{z}{w} \\right) = \\begin{pmatrix} \\pol \\vpp & 0\n& 0 & 0 \\\\ 0 & - \\pol \\vpp & \\delta \\left( \\frac{w}{z} \\right) & 0 \\\\\n0 & - \\delta \\left( \\frac{w}{z} \\right) & - \\pol \\vpp & 0 \\\\ 0 & 0 & 0\n& \\pol \\vpp\n\\end{pmatrix}.$$\\qed\n\\end{rem}\n\n\\subsection{Reduced Poisson structure}\nWe know that ${\\cal M}_{2,q} = M_{2,q}^J\/LN$ is isomorphic to\n$$\\left\\{\n\\begin{pmatrix} t(s) & 1 \\\\ -1 & 0 \\end{pmatrix} \\right\\}$$ (see\nSect.~3). The ring ${\\cal R}_{2,q}$ of functionals on ${\\cal M}_{2,q}$\nis generated by the Fourier coefficients of $t(s)$. In order to\ncompute the reduced Poisson bracket between them, we have to extend\nthem to $LN$--invariant functions on the whole $M_{2,q}$. Set \\beq\n\\wt{t}(s) = a(s) c(sq) + d(sq) c(s).\n\\end{equation}\nIt is easy to check that the Fourier coefficients $\\wt{t}_m$ of\n$\\wt{t}(s)$ are $LN$--invariant, and their restrictions to $M_{2,q}^J$\ncoincide with the corresponding Fourier coefficients of $t(s)$.\n\nLet us compute the Poisson bracket between $\\wt{t}_m$'s. Set\n$$\\wt{T}(z) = \\sum_{m \\in \\Z} \\wt{t}_m z^{-m}.$$ Using the explicit\nformulas above, we find\n\\begin{align} \\notag\n\\{ \\wt{T}(z),\\wt{T}(w) \\} &= \\vf \\left( \\frac{w}{z} \\right) \\wt{T}(z)\n\\wt{T}(w) \\\\ \\label{tildet} &+ \\delta \\left( \\frac{wq}{z} \\right)\n\\Delta(z) c(w) c(wq^2) - \\delta \\left( \\frac{w}{zq} \\right) \\Delta(w)\nc(z) c(zq^2),\n\\end{align}\nwhere $\\Delta(z) = A(z)D(z) - B(z)C(z) = 1$. 
Hence, restricting to\n$M_{2,q}^J$ (i.e. setting $c(z)=1$ in formula \\eqref{tildet}), we\nobtain:\n$$\\{ T(z),T(w) \\} = \\vf \\left( \\frac{w}{z} \\right) T(z) T(w) + \\delta\n\\left( \\frac{wq}{z} \\right) - \\delta \\left( \\frac{w}{zq} \\right).$$\nThis indeed coincides with formula \\eqref{vira}.\n\n\\begin{rem}\nConsider the subring $\\wt{S}_{2,q}$ of the ring $R_{2,q}$, generated by\n$c_m, \\wt{t}_m, m \\in \\Z$. The ring $\\wt{S}_{2,q}$ consists of\n$LN$--invariant functionals on $M_{2,q}$ and hence it can serve as a\nsubstitute for the ring of functions on $M_{2,q}\/LN$. Let us compute\nthe Poisson brackets in $\\wt{S}_{2,q}$. The Poisson brackets of\n$\\wt{t}_m$'s are given by formula \\eqref{tildet}, and by construction\n$\\{ c_m,c_k \\} = 0$. It is also easy to find that $\\{ c_m,\\wt{t}_k \\}\n= 0$. Hence $\\wt{S}_{2,q}$ is a Poisson subalgebra of $R_{2,q}$. Thus, the\n$q$--deformed Drinfeld-Sokolov reduction can be interpreted as\nfollows. The initial Poisson algebra is $R_{2,q}$. We consider its\nPoisson subalgebra $\\wt{S}_{2,q}$ generated by $c_m$'s and\n$\\wt{t}_m$'s. The ideal $I$ of $\\wt{S}_{2,q}$ generated by $\\{ c_m +\n\\delta_{m,0}, m \\in \\Z \\}$ is a Poisson ideal. The quotient\n$\\wt{S}_{2,q}\/I$ is isomorphic to the $q$--Virasoro algebra ${\\cal\nR}_{2,q}$ defined in Sect.~3.\\qed\n\\end{rem}\n\n\\subsection{$q$--deformation of Miura transformation}\n\nAs was explained in Sect.~3.2, the $q$--Miura transformation of\n\\cite{FR} is the map between two (local) cross-sections of the\nprojection $\\pi_q: M_{n,q}^J \\arr M_{n,q}^J\/LN$. In the case of\n$LSL_2$, the first cross-section $$\\left\\{ \\begin{pmatrix} \\la(s) & 0\n\\\\ -1 & \\la(s)^{-1} \\end{pmatrix} \\right \\}$$ is defined by the\nsubsidiary constraint $b(s)=0$, and the second $$\\left\\{\n\\begin{pmatrix} t(s) & 1 \\\\ -1 & 0 \\end{pmatrix} \\right\n\\}$$ is defined by the subsidiary constraint $d(s)=0$. 
The map between\nthem is given by the formula $$\\muu_q: \\la(s) \\mapsto t(s) = \\la(s) +\n\\la(sq)^{-1}.$$ Now we want to recover formula \\eqref{virpois} for the\nPoisson brackets between the Fourier coefficients $\\la_n$ of $\\la(s)$,\nwhich makes the map $\\muu_q$ Poisson.\n\nWe have already computed the Poisson bracket on the second (canonical)\ncross-section from the point of view of Poisson reduction. Now we need\nto compute the Poisson bracket between the functions $a_n$'s on the\nfirst cross-section, with respect to which the map $\\muu_q$ is\nPoisson. This computation is essentially similar to the one outlined\nin Sect.~3.2. The Poisson structure on the local cross-section is\ngiven by the Dirac bracket, which is determined by the choice of the\nsubsidiary conditions, which fix the cross-section.\n\nThe Dirac bracket has the following property (see\n\\cite{Flato}). Suppose we are given constraints $\\xi_n, n \\in I$, and\nsubsidiary conditions $\\eta_n, n \\in I$, on a Poisson manifold $M$,\nsuch that $\\{ \\xi_k,\\xi_l \\} = \\{ \\eta_k,\\eta_l \\} = 0, \\forall k,l\n\\in I$. Let $f, g$ be two functions on $M$, such that $\\{ f,\\xi_k \\}$\nand $\\{ g,\\xi_k \\}$ vanish on the common level surface of all $\\xi_k,\n\\eta_k$. Then the Dirac bracket of $f$ and $g$ coincides with their\nordinary Poisson bracket.\n\nIn our case, the constraint functions are $c_m+\\delta_{m,0}, m \\in\n\\Z$, and the subsidiary conditions $b_m, m \\in \\Z$, which fix the\nlocal model of the reduced space. We have: $\\{ b_m,b_k \\} = 0$, $\\{\nc_m,c_k \\} = 0$, and $\\{ a_m,b_k \\} = 0$, if we set $b_m=0, \\forall m\n\\in \\Z$. Therefore we are in the situation described above, and the\nDirac bracket between $a_m$ and $a_k$ coincides with their ordinary\nbracket. 
In terms of the generating function $A(z) = \\sum_{m \\in \\Z}\na_m z^{-m}$ it is given by the formula $$\\{ A(z),A(w) \\} = \\vf \\left(\n\\frac{w}{z} \\right) A(z) A(w),$$ which coincides with formula\n\\eqref{virpois}. Thus, we have proved the Poisson property of the\n$q$--deformation of the Miura transformation from the point of view of\nthe deformed Drinfeld-Sokolov reduction.\n\n\\section{Lattice Virasoro algebra}\n\nIn this section we consider the lattice counterpart of the\nDrinfeld-Sokolov reduction. Our group is thus $\\GG = (SL_2)^{\\Z\/N\\Z}$,\nwhere $N$ is an integer, and $\\tau$ is the automorphism of $G$, which\nmaps $(g_i)$ to $(g_{i+1})$. Poisson structures on $\\GG$ which are\ncovariant with respect to lattice gauge transformations $x_n \\mapsto\ng_{n+1} x_n g_n^{-1}$ have been studied already in \\cite{RIMS}\n(cf. also \\cite{AFS}). In order to make the reduction over the\nnilpotent subgroup $\\NN \\subset \\GG$ feasible, we have to be careful\nin our choice of the $r$--matrix.\n\n\\subsection{Discrete Drinfeld-Sokolov reduction}\nBy analogy with the continuous case, we choose the element $r$ defining the\nLie bialgebra structure on $\\g = \\sw_2^{\\oplus \\Z\/N\\Z}$ as follows: $$r =\n\\sum_{n \\in \\Z\/N\\Z} E_n \\ot F_n + \\frac{1}{4} \\sum_{m,n \\in \\Z\/N\\Z}\n\\phi_{n,m} H_n \\ot H_m,$$ where $\\phi_{n,m} + \\phi_{m,n} = 2\n\\delta_{m,n}$. It is easy to see that $r$ defines a factorizable Lie\nbialgebra structure on $\\g$. 
For \\thmref{act1} to be applicable, $r$ has to\nsatisfy the condition $(\\tau \\ot \\tau)(r) = r$, which implies that\n$\\phi_{n,m} = \\phi_{n-m}$.\n\nAn element of $\\GG$ is an $N$--tuple $(g_i)$ of elements of $SL_2$:\n$$g_k = \\begin{pmatrix} a_k & b_k \\\\ c_k & d_k \\end{pmatrix}.$$ We\nconsider $a_k,b_k,c_k,d_k, k \\in \\Z\/N\\Z$, as the generators of the\nring of functions on $\\GG$.\n\nThe discrete analogue of the Drinfeld-Sokolov reduction consists of\ntaking the quotient $\\MM=\\GG^J\/\\NN$, where $\\GG^J = (G^J)^{\\ZN}$,\n$$G^J = \\left\\{ \\begin{pmatrix} a & b \\\\ -1 & d \\end{pmatrix}\n\\right\\},$$ and $\\NN = N^{\\ZN}$, acting on $G^J$ by the formula \\beq\n\\label{dgauge} (h_i) \\cdot (g_i) = (h_{i+1} g_i h_i^{-1}).\n\\end{equation}\nIt is easy to see that $$\\MM \\simeq \\left \\{ \\begin{pmatrix} t_i &\n1 \\\\ -1 & 0 \\end{pmatrix}_{i \\in \\ZN} \\right\\}.$$\n\nThe element $r$ with $\\phi_{n,m} = \\phi_{n-m}, \\phi_k + \\phi_{-k} =\n2\\delta_{k,0}$, defines a Lie bialgebra structure on $\\g$ and Poisson\nstructures $\\eta, \\eta_*^\\tau$ on $\\GG$. According to \\thmref{act1},\nthe action of ($\\GG,\\eta$) on ($\\GG,\\eta_*^\\tau$) given by formula\n\\eqref{dgauge} is Poisson.\n\nAs in the continuous case, for the Poisson structure $\\eta_*^\\tau$ to\nbe compatible with the discrete Drinfeld-Sokolov reduction, we must\nhave:\n\\beq \\label{vanish}\n\\{ c_n,c_m \\}|_{\\GG^J} = 0.\n\\end{equation}\nExplicit calculation analogous to the one made in the previous\nsubsection shows that \\eqref{vanish} holds if and only if $$\\phi_{n-1}\n+ 2\\phi_n + \\phi_{n+1} = 2 \\delta_{n,0} + 2 \\delta_{n+1,0}.$$ The\ninitial condition $\\phi_0 = 1$ and periodicity condition give us a\nunique solution: for odd $N$, $\\phi_k = (-1)^k$; for even $N$, $\\phi_k\n= (-1)^k \\left( 1 - \\dfrac{2k}{N} \\right)$. 
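The two closed-form solutions above are easy to verify numerically. The following self-contained check (our own illustration, not part of the paper) confirms in exact rational arithmetic, with indices taken mod $N$, that both branches satisfy the recurrence with its wraparound boundary terms, the initial condition $\phi_0 = 1$, and the skew-symmetry condition $\phi_k + \phi_{-k} = 2\delta_{k,0}$ imposed on the $r$--matrix:

```python
# Verify the stated solutions of the periodic recurrence
#   phi_{n-1} + 2*phi_n + phi_{n+1} = 2*delta_{n,0} + 2*delta_{n+1,0}   (n in Z/NZ)
# namely phi_k = (-1)^k for odd N, and phi_k = (-1)^k * (1 - 2k/N) for even N.
from fractions import Fraction

def phi(k, N):
    """Candidate solution phi_k for k = 0, ..., N-1."""
    if N % 2 == 1:
        return Fraction((-1) ** k)
    return Fraction((-1) ** k) * (1 - Fraction(2 * k, N))

def check(N):
    p = [phi(k, N) for k in range(N)]
    assert p[0] == 1  # initial condition
    for n in range(N):  # recurrence, indices wrapping around mod N
        lhs = p[(n - 1) % N] + 2 * p[n] + p[(n + 1) % N]
        rhs = 2 * (n == 0) + 2 * ((n + 1) % N == 0)
        assert lhs == rhs, (N, n, lhs, rhs)
    for k in range(N):  # phi_k + phi_{-k} = 2*delta_{k,0}
        assert p[k] + p[(-k) % N] == 2 * (k == 0)

for N in (2, 3, 4, 5, 6, 7, 9, 10, 11):
    check(N)
print("recurrence and skew-symmetry verified")
```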
In what follows we\nrestrict ourselves to the case of odd $N$ (note that in this case the\nlinear operator $\\on{id}+\\tau$ is invertible).\n\nContinuing as in the previous subsection, we define $$\\wt{t}_n = a_n\nc_{n+1} + d_{n+1} c_n, \\quad \\quad n \\in \\ZN.$$ These are\n$\\NN$--invariant functions on $\\GG$. We find in the same way as in the\ncontinuous case:\n\\beq \\label{wtt}\n\\{ \\wt{t}_n,\\wt{t}_m \\} = \\vf_{n-m} \\wt{t}_n \\wt{t}_m + \\delta_{n,m+1}\nc_m c_{m+2} - \\delta_{n+1,m} c_n c_{n+2},\n\\end{equation}\n$$\\{ \\wt{t}_n,c_m \\} = 0, \\quad \\quad \\{ c_n,c_m \\} = 0,$$ where\n$$\\vf_k = \\pol (\\phi_k - \\phi_{-k}) = \\left\\{ \\begin{array}{cc} 0, &\nk=0, \\\\ (-1)^k, & k\\neq 0.\n\\end{array} \\right.$$\nThe discrete Virasoro algebra $\\C[t_i]_{i \\in \\ZN}$ is the quotient of\nthe Poisson algebra $\\C[\\wt{t}_i,c_i]_{i \\in \\ZN}$ by its Poisson\nideal generated by $c_i+1, i \\in \\ZN$. From formula \\eqref{wtt} we\nobtain the following Poisson bracket between the generators $t_i$:\n\\beq\n\\label{dvir} \\{ t_n,t_m \\} = \\vf_{n-m} t_n t_m + \\delta_{n,m+1} -\n\\delta_{n+1,m}.\n\\end{equation}\n\nThe discrete Miura transformation is the map from the local\ncross-section $$\\left\\{ \\begin{pmatrix} \\la_n & 0 \\\\ -1 & \\la_n^{-1}\n\\end{pmatrix} \\right \\}$$ to $\\MM$,\n\\beq \\label{dmiura}\n\\la_n \\mapsto t_n = \\la_n + \\la_{n+1}^{-1}.\n\\end{equation}\nIt defines a Poisson map $\\C[\\la_i^\\pm]_{i \\in \\ZN} \\arr \\C[t_i]_{i\n\\in \\ZN}$, where the Poisson structure on the latter is given by the\nformula \\beq \\label{dheis} \\{ \\la_n,\\la_m \\} = \\vf_{n-m} \\la_n \\la_m.\n\\end{equation}\n\n\\begin{rem}\nThe Poisson algebra $\\C[t_i]_{i \\in \\ZN}$ can be considered as a\nregularized version of the $q$--deformed Virasoro algebra when\n$q=\\ep$, where $\\ep$ is a primitive $N$th root of unity. 
Indeed, we\ncan then consider $t(\\ep^i), i \\in \\ZN$, as generators and truncate,\nin all power series appearing in the relations, summations over $\\Z$ to\nsummations over $\\ZN$ divided by $N$. This means that we replace\n$\\phi(\\ep^n)$ given by formula \\eqref{vira} by\n$$\\wt{\\phi}(\\ep^n) = \\frac{1}{N} \\sum_{i \\in \\ZN}\n\\frac{1-\\ep^i}{1+\\ep^i} \\ep^{ni},$$ and $\\delta(\\ep^n)$ by\n$\\delta_{n,0}$. The formula for the Poisson bracket then becomes: $$\\{\nt(\\ep^n),t(\\ep^m) \\} = \\wt{\\phi}(\\ep^{m-n}) t(\\ep^n) t(\\ep^m) +\n\\delta_{n,m+1} - \\delta_{n+1,m}.$$ If we set $t(\\ep^i)=t_i$, we\nrecover the Poisson bracket \\eqref{dvir}, since it is easy to check\nthat $\\wt{\\phi}(\\ep^{m-n}) = \\vf_{n-m}$.\n\nOne can apply the same procedure to the $q$--deformed $\\W$--algebras\nassociated to $\\sw_N$ and obtain lattice Poisson algebras. It would be\ninteresting to see whether they are related to the lattice\n$\\W$--algebras studied in the literature, e.g., in \\cite{Be,Bo}. In\nthe case of $\\sw_2$, this connection is described in the next\nsubsection.\\qed\n\\end{rem}\n\n\\subsection{Connection with Faddeev-Takhtajan-Volkov algebra}\nThe Poisson structures \\eqref{dvir} and \\eqref{dheis} are nonlocal,\ni.e. the Poisson brackets between distant neighbors on the lattice\nare nonzero. However, one can define closely connected Poisson\nalgebras possessing local Poisson brackets; these Poisson algebras can\nactually be identified with those studied by L.~Faddeev,\nL.~Takhtajan, and A.~Volkov.\n\nLet us first recall some results of \\cite{FR} concerning the\ncontinuous case. As was explained in \\cite{FR}, one can associate a\ngenerating series of elements of the $q$--Virasoro algebra to an\narbitrary finite-dimensional representation of $\\sw_2$. The series\n$T(z)$ considered in this paper corresponds to the two-dimensional\nrepresentation. Let $T^{(2)}(z)$ be the series corresponding to the\nthree-dimensional irreducible representation of $\\sw_2$.
We have the\nfollowing identity \\cite{FR}\n$$T(z) T(zq) = T^{(2)}(z) + 1,$$ which can be taken as the definition of\n$T^{(2)}(z)$. From formula \\eqref{virmiura} we obtain:\n\\begin{align*}\nT^{(2)}(z) &= \\La(z) \\La(zq) + \\La(z) \\La(zq^2)^{-1} + \\La(zq)^{-1}\n\\La(zq^2)^{-1} \\\\ &= A(z) + A(z) A(zq)^{-1} + A(zq)^{-1},\n\\end{align*}\nwhere\n\\beq \\label{az}\nA(z) = \\La(z) \\La(zq)\n\\end{equation}\n(the series $A(z)$ was introduced in Sect.~7 of \\cite{FR}). From\nformula \\eqref{virpois} we find:\n$$\\{ A(z),A(w) \\} = \\left( \\delta \\left( \\frac{w}{zq} \\right) - \\delta\n\\left( \\frac{wq}{z} \\right) \\right) A(z) A(w).$$ It is also easy to find\n\\begin{align*} \\label{three}\n\\{ T^{(2)}(z),T^{(2)}(w) \\} &= \\left( \\delta \\left( \\frac{w}{zq}\n\\right) - \\delta \\left( \\frac{wq}{z} \\right) \\right) \\left( T^{(2)}(z)\nT^{(2)}(w) - 1 \\right) \\\\ &+ \\delta \\left( \\frac{wq^2}{z} \\right) T(w)\nT(wq^3) - \\delta \\left( \\frac{w}{zq^2} \\right) T(z) T(zq^3).\n\\end{align*}\n\nWe can use the same idea in the lattice case. Let $\\nu_n = \\la_n\n\\la_{n+1}$; this is the analogue of $A(z)$. We have: \\beq \\label{nun}\n\\{ \\nu_n,\\nu_m \\} = (\\delta_{n+1,m} - \\delta_{n,m+1}) \\nu_n \\nu_m,\n\\end{equation}\nand hence $\\C[\\nu_i^\\pm]$ is a Poisson subalgebra of $\\C[\\la_n^\\pm]$\nwith local Poisson brackets. We can also define $t^{(2)}_n = t_n\nt_{n+1} - 1$. The Poisson bracket of $t^{(2)}_n$'s is local:\n\\begin{align} \\label{t2}\n\\{ t^{(2)}_n,t^{(2)}_m \\} &= \\left( \\delta_{n+1,m} - \\delta_{n,m+1}\n\\right) \\left( t^{(2)}_n t^{(2)}_m - 1 \\right) \\\\ \\notag &+\n\\delta_{n,m+2} t_m t_{m+3} - \\delta_{n+2,m} t_n t_{n+3}.\n\\end{align}\nUnfortunately, it does not close on $t^{(2)}_n$'s, so that\n$\\C[t^{(2)}_i]$ is not a Poisson subalgebra of $\\C[t_i]$. 
But let us\ndefine formally\n\\beq \\label{sn}\ns_n = \\frac{1}{1+t^{(2)}_n} = t_n^{-1} t_{n+1}^{-1} =\n\\frac{1}{(1+\\nu_n)(1+\\nu_{n+1}^{-1})}.\n\\end{equation}\nThen from formulas \\eqref{sn} and \\eqref{nun} we find:\n\\begin{align} \\label{fad}\n\\{ s_n,s_m \\} = & s_n s_m \\big( (\\delta_{n+1,m} - \\delta_{n,m+1})(1 -\ns_n - s_m) - \\\\ \\notag & - s_{n+1} \\delta_{n+2,m} + s_{m+1}\n\\delta_{n,m+2} \\big).\n\\end{align}\nThus, the Poisson bracket closes among $s_n$'s and defines a Poisson\nstructure on $\\C[s_i]_{i\\in\\ZN}$.\n\nThe Poisson algebra $\\C[s_i]_{i\\in\\ZN}$ with Poisson bracket\n\\eqref{fad} was first introduced by Faddeev and Takhtajan in \\cite{ft}\n(see formula (54)). We see that it is connected with our version of\nthe discrete Virasoro algebra, $\\C[t_i]$, by a change of variables\n\\eqref{sn}. The Poisson algebra $\\C[\\nu_i^\\pm]$ and the Poisson map\n$\\C[\\nu_i^\\pm] \\arr \\C[s_n]$ given by formula \\eqref{sn} were\nintroduced by Volkov in \\cite{v1} (see formulas (2) and (23))\nfollowing \\cite{ft}; see also related papers \\cite{v2,fv}. 
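As a quick sanity check of the change of variables \eqref{sn} (again our own illustration, not part of the paper), one can confirm with random rational $\la_n$ that the discrete Miura map $t_n = \la_n + \la_{n+1}^{-1}$ together with $\nu_n = \la_n \la_{n+1}$ indeed gives $t_n t_{n+1} = (1+\nu_n)(1+\nu_{n+1}^{-1})$, so the expressions for $s_n$ in \eqref{sn} agree:

```python
# Check the algebraic identity behind (sn): under t_n = la_n + 1/la_{n+1} and
# nu_n = la_n * la_{n+1} (indices mod N), one has
#   t_n * t_{n+1} = (1 + nu_n) * (1 + 1/nu_{n+1}),
# hence s_n = 1/(t_n t_{n+1}) = 1/((1 + nu_n)(1 + 1/nu_{n+1})).
from fractions import Fraction
import random

random.seed(0)
N = 7  # odd lattice size
la = [Fraction(random.randint(1, 9), random.randint(1, 9)) for _ in range(N)]

t = [la[n] + 1 / la[(n + 1) % N] for n in range(N)]
nu = [la[n] * la[(n + 1) % N] for n in range(N)]

for n in range(N):
    assert t[n] * t[(n + 1) % N] == (1 + nu[n]) * (1 + 1 / nu[(n + 1) % N])
print("s_n identity verified")
```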
This map is\nconnected with our version \\eqref{dmiura} of the discrete Miura\ntransformation by a change of variables.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nPre-trained Language Models (PLMs) have been successfully adapted to a wide range of Natural Language Processing (NLP) tasks using \\emph{prompt-based} learning~\\cite{Radford:2018,GPT-2,GPT3,petroni-etal-2019-language} such as sentiment classification~\\cite{gao-etal-2021-making}, natural language inference (NLI)~\\cite{schick-schutze-2021-exploiting,schick-schutze-2022-true}, relation extraction~\\cite{shin-etal-2020-autoprompt}, cross-lingual inference~\\cite{qi-etal-2022-enhancing}.\nHowever, manually writing prompts that generalize well is very challenging for several reasons such as (a) it might not always be possible to recruit domain-expert human annotators, \n(b) human annotators might not be able to cover all corner cases by writing prompts, and\n(c) there can be disagreements between human annotators regarding the coverage of a particular prompt.\nTo address these challenges, automatic learning of discrete prompts has been proposed such as AdvTrigger~\\cite{wallace-etal-2019-universal}, AutoPrompt~\\cite[\\textbf{AP};][]{shin-etal-2020-autoprompt}, WARP~\\cite{hambardzumyan-etal-2021-warp}, and RLPrompt~\\cite{DBLP:journals\/corr\/abs-2205-12548}.\n\n\n\\begin{table*}[t]\n \\centering\n \\small\n \\begin{tabular}{lllc}\n \\toprule\n \\textbf{Relation} & \\textbf{Method} & \\textbf{Prompt} & \\textbf{P@1} \\\\ \\midrule\n \\textsf{native-language-of} (P103) & Manual & \\texttt{The native language of [X] is [Y]} & 74.54\\\\\n & AP BERT & \\texttt{[X]PA communerug speaks proper [Y]} & \\textbf{84.87}\\\\\n & AP RoBERTa & \\texttt{[X]neau optionally fluent!?\\\" traditional [Y]} & 81.61\\\\ \\midrule\n \\textsf{profession-of} (P106) & Manual & \\texttt{[X] is a [Y] by profession} & 0.73 \\\\\n & AP BERT & \\texttt{[X] supporters 
studied politicians musician turned [Y]} & 15.83 \\\\\n & AP RoBERTa & \\texttt{[X] (), astronomers businessman\u00b7former [Y]} & \\textbf{19.24} \\\\ \\midrule\n \\textsf{music-played-by} (P136) & Manual & \\texttt{[X] plays [Y] music} & 0.7\\\\\n & AP BERT & \\texttt{[X] freaking genre orchestra fiction acid [Y]} & \\textbf{59.95} \\\\\n & AP RoBERTa & \\texttt{[X] blends postwar hostage drama sax [Y]} & 52.97 \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Examples of prompts learnt by AP for the fact retrieval task for BERT and RoBERTa PLMs and the human-written manual prompts. T-REx relation ids are shown with brackets for each relation type. Precision@1 (P@1) scores are shown when each prompt is used in fact retrieval.}\n \\label{tbl:autoprompt-examples}\n\\end{table*}\n\nAlthough discrete prompt learning methods have achieved good performance in numerous downstream tasks by automatically learnt prompts, such automatic prompts seem to be significantly different from the manually-written ones.\nFor example, \\autoref{tbl:autoprompt-examples} shows manually-written and AP-learnt prompts for fact retrieval~\\cite{petroni-etal-2019-language}.\nWe see that the AP-learnt prompts for BERT~\\cite{devlin-etal-2019-bert} and RoBERTa~\\cite{RoBERTa} outperform the manual prompts in precision\\@1 (P@1) scores.\nHowever, the AP-learnt prompts contain various counter-intuitive language constructs such as punctuation (e.g. `(', `?', `!', `)'), spelling errors (e.g. 
\\emph{communerug}) etc., which seem unrelated to the target relation.\nSimilar cases can be observed for AP-learnt prompts for other tasks as well (see Appendix in \\newcite{shin-etal-2020-autoprompt}).\nIt is unrealistic that a human annotator would be able to write such prompts even if they were able to see the same training instances as used by automatic methods.\n\nConsidering the fact that discrete prompt learning methods are trained in a few-shot setting where they use only a small number of training instances, the seemingly counter-intuitive nature of the discrete prompts learnt by automatic methods raises concerns about their robustness.\nFor example, \\emph{How will the performance of a target task change if we add small random perturbations to the prompts learnt by AP?} and \\emph{Do the prompts learnt by AP generalize to out-of-domain data?}\nTo study these issues, in this paper we evaluate the robustness of discrete prompts learnt by automatic prompt learning methods and compare that with manually-written prompts and direct fine-tuning of PLMs.\n\nAn evaluation of the robustness of discrete prompts is important for two main reasons.\nFirst, given that discrete prompt learning methods are learning those prompts from a small set of training instances, it is important that they cover the core patterns that generalize to the target task and not simply capture some random artefacts in the training samples.\nSecond, unlike embedding-based continuous prompts~\\cite{li-liang-2021-prefix,lester-etal-2021-power}, discrete prompts~\\cite{wallace-etal-2019-universal,shin-etal-2020-autoprompt,DBLP:journals\/corr\/abs-2205-12548} are represented in natural language and supposed to be interpretable.\nHowever, if a discrete prompt learning method is less robust, a seemingly harmless perturbation such as removing a punctuation character can significantly alter the performance of a downstream task.\n\nIn contrast to the large body of work that has used prompts for 
fine-tuning PLMs, to the best of our knowledge, the robustness of discrete prompts to random or adversarial perturbations has not been systematically studied.\nTo address this gap, we use AP as a concrete example of a widely-used method and evaluate its robustness under different types of carefully designed perturbations.\nHowever, we note that our perturbation techniques are not limited to AP and can be used for any discrete prompt learning method.\nWe compare the performance of AP-learnt prompts against fine-tuning using Manually-written Prompts (MP), and Head-based Fine-Tuning (HFT), where we fine-tune both the classifier head and the PLM parameters. \n\nFrom our evaluation, we find several interesting facts about the robustness of discrete prompts as summarized below.\n\\begin{itemize}\n \\item Overall, when the number of training instances is increased, MP outperforms both AP and HFT on CB~\\cite{de23commitmentbank} and MNLI~\\cite{DBLP:conf\/naacl\/WilliamsNB18}, two independent benchmark datasets for NLI (\\autoref{sec:exp:datasize}).\n In particular, the performance of AP on MNLI is much worse than that on CB. 
This is in contrast to the superior performance of AP on SICK-E~\\cite{DBLP:conf\/lrec\/MarelliMBBBZ14}, another NLI dataset, as reported by \\newcite{shin-etal-2020-autoprompt}.\n \n \\item Moreover, we see a performance drop when we use discrete prompts learnt from CB for MNLI and vice-versa (\\autoref{sec:exp-cross-data}).\n These results indicate that the performance of discrete prompts learnt by AP is highly dataset-dependent and such discrete prompts do not generalize well across datasets.\n \n \\item Compared to MP, AP-learnt discrete prompts turn out to be highly sensitive to the ordering of prompt tokens (\\autoref{sec:exp:reorder}).\n \n \\item Random deletion of prompt tokens decreases performance in both AP and MP (\\autoref{sec:exp:deletion}).\n \n \\item We create an adversarial NLI dataset from randomly-sampled test instances from MNLI and CB, and manually modify the hypothesis sentences with keeping the corresponding premise sentences unchanged, such that (a) the target label would not change, and (b) would reverse an entailment label to a contradiction (or vice-versa).\n Both AP and MP remain relatively robust against the perturbations that do not change the target label, but the performance of MP drops significantly in the label-changing setting (\\autoref{sec:exp:input-perturbation}).\n This shows that AP is relatively more robust against adversarial perturbations than MP, which explains AP's superior performance in various tasks. 
\n\\end{itemize}\n\n\n\\section{Related Work}\n\\paragraph{Prompting Methods:}\nPrompting or \\emph{in-context-learning} has received wide attention as an efficient method to extract knowledge from PLMs~\\cite{GPT3,petroni-etal-2019-language,cui-etal-2021-template}.\nHowever, to manually write prompts one must possess task-specific domain knowledge.\nAs an alternative, methods that can automatically learn prompts from training data have been proposed.\nTwo distinct types of prompts have been learnt in prior work:\ndiscrete prompts (learnt as lexical sequences) and continuous prompts (learnt as embeddings).\nContinuous prompts~\\cite{li-liang-2021-prefix,lester-etal-2021-power} are parameter efficient because they learn generalizable task-specific embeddings, with performance comparable to PLM fine-tuning.\nHowever, continuous prompts cannot be learnt when a PLM is publicly unavailable and the only access to it is via an API~\\cite{GPT3}.\nMoreover, compared to discrete prompts, continuous prompts are difficult to interpret.\nLearning discrete prompts~\\cite{wallace-etal-2019-universal,shin-etal-2020-autoprompt,DBLP:journals\/corr\/abs-2205-12548} does not suffer from these limitations of continuous prompts and can be used with diverse NLP tasks.\nIn particular, as fine-tuning massive PLMs has become computationally costly, discrete prompt learning has become an attractive alternative.\n\n\n\\paragraph{Analysis of Prompting Methods:}\nPrior work has analyzed prompts from various viewpoints. 
\n\\citet{DBLP:conf\/naacl\/ScaoR21} studied the effect of training dataset size on fixed-prompt PLM fine-tuning and head-based fine-tuning and found that prompting is often worth 100s of instances on average across classification tasks.\n\\citet{kavumba-etal-2022-prompt} showed that the performance of prompt-based models varies significantly depending on the surface cues in the sentence.\n\\citet{lu-etal-2022-fantastically} found that the ordering of task inputs significantly affects performance.\n\\citet{utama-etal-2021-avoiding} focused on the reliance on lexical overlap in sentence pair classification and showed that prompt-based models fail to make predictions dependent on the lexical overlap.\nTo the best of our knowledge, the robustness of discrete prompts under different types of perturbations has not been studied in prior work, which is the main focus of this paper.\n\n\n\\section{Experiments}\n\\label{sec:experiments}\nLet us first describe the experimental settings common to all experiments.\n\n\\paragraph{Prompting and Fine-Tuning Methods: }\nWe compared the following methods. \n\n\\begin{itemize}\n \\item \\textbf{AutoPrompt}~\\cite[\\textbf{AP};][]{shin-etal-2020-autoprompt} is a representative method of discrete prompt learning.\n The learning strategy is based on a fill-in-the-blank task~\\cite{devlin-etal-2019-bert}.\n First, a manually created prompt template (e.g., \\texttt{[X] ... [Y]}) is given, and a prompt token (called a trigger token) is learnt by replacing \\texttt{}, which is a special token representing a trigger token.\n In the search for trigger tokens, the probability of \\texttt{} is converted into a class probability by using label tokens (e.g., \\{`\\emph{nobody}', `\\emph{nor}'\\} for contradiction~\\cite{shin-etal-2020-autoprompt}), and trigger tokens are selected by a gradient-guided search~\\cite{wallace-etal-2019-universal} over a candidate set of trigger tokens drawn from the vocabulary of the language model. 
\n As a template for NLI, we used the one given by \\citet{shin-etal-2020-autoprompt}, and the prompt tokens were learnt from the training dataset. \n In our experiments, we used the official implementation.\\footnote{\\url{https:\/\/github.com\/ucinlp\/autoprompt}}\n \n \\item \\textbf{Manually-written Prompts}~\\cite[\\textbf{MP};][]{schick-schutze-2021-exploiting} fine-tunes the entire masked language model on the training data, using manually-written prompts as the input and predicting the \\texttt{} tokens for the labels (e.g., `\\emph{yes}' for entailment).\n We used the template \\texttt{\\{hypothesis\\}? | , \\{premise\\}} and verbalizer (`\\emph{yes}' for entailment, `\\emph{no}' for contradiction, `\\emph{maybe}' for neutral) following prior work \\cite{schick-schutze-2021-exploiting,DBLP:conf\/naacl\/ScaoR21}.\n \\citet{schick-schutze-2021-exploiting} proposed an ensemble-based method with multiple rounds of fine-tuning using different templates.\n However, because AP uses a single template, for a fair comparison we fine-tuned the PLM using a single MP template in our experiments.\n\n \\item \\textbf{Head-based Fine-Tuning}~\\cite[\\textbf{HFT};][]{devlin-etal-2019-bert} fine-tunes the PLM with a classifier head.\n We report the head-based results trained by \\newcite{DBLP:conf\/naacl\/ScaoR21}. 
\n They trained HFT with a low learning rate ($10^{-5}$) and always with a large number of steps (at least 250), following the recommendations in prior work~\\cite{DBLP:conf\/iclr\/MosbachAK21,DBLP:conf\/iclr\/0007WKWA21}.\n Note that HFT is not a prompt-based method, so it was excluded from some experiments on the robustness of discrete prompts.\n\\end{itemize}\n\n\\paragraph{Datasets:}\nWe used NLI as an evaluation task to compare the robustness of discrete prompting methods.\nThe NLI task has been used in multiple previous studies to evaluate and\/or propose novel prompt learning methods because it is a fundamental task related to many NLP applications~\\cite{shin-etal-2020-autoprompt,DBLP:conf\/naacl\/ScaoR21,DBLP:conf\/naacl\/WebsonP22}.\nIt is important to use the same NLI task and datasets in our experiments to facilitate fair comparisons and reach reproducible conclusions. \nWe used the two datasets: CommitmentBank \\cite[\\textbf{CB};][]{de23commitmentbank}\\footnote{\\url{https:\/\/super.gluebenchmark.com\/tasks}} (a corpus of short texts), and Multi-Genre Natural Language Inference Corpus \\cite[\\textbf{MNLI};][]{DBLP:conf\/naacl\/WilliamsNB18}\\footnote{\\url{https:\/\/cims.nyu.edu\/~sbowman\/multinli\/}} (a crowdsourced collection of sentence pairs for NLI).\nEach sentence pair is labelled with \\emph{entailment}, \\emph{neutral}, or \\emph{contradiction}.\n\n\\paragraph{PLM:}\nIn our experiments, we used the same pre-trained language model to evaluate AP, MP, and HFT equally.\nSpecifically, we used RoBERTa-large (355M parameters) \\footnote{\\url{https:\/\/huggingface.co\/roberta-large}}~\\cite{RoBERTa}, which has been used in much prior work in prompt learning~\\cite{shin-etal-2020-autoprompt,DBLP:conf\/naacl\/ScaoR21}.\nThe PLM was trained on five datasets, including BookCorpus\\footnote{\\url{https:\/\/yknzhu.wixsite.com\/mbweb}}, English Wikipedia\\footnote{\\url{https:\/\/en.wikipedia.org\/wiki\/English_Wikipedia}}, 
CC-News\\footnote{\\url{https:\/\/commoncrawl.org\/2016\/10\/news-dataset-available\/}}, OpenWebText\\footnote{\\url{https:\/\/github.com\/jcpeterson\/openwebtext}}, and Stories\\footnote{\\url{https:\/\/arxiv.org\/abs\/1806.02847}}.\nThe texts were tokenised using a byte-level Byte-Pair Encoding \\cite[BPE;][]{DBLP:conf\/acl\/SennrichHB16a} vocabulary of size 50,000.\n\n\\paragraph{Evaluating the Robustness of Prompts: }\nWe used the \\emph{rate of degradation} (\\textbf{RoD})~\\cite{Meyers:2020} to evaluate robustness, which is defined as the decrease in accuracy on the target task due to the perturbations added to the prompt.\nIf the RoD of a model is small after the inclusion of a perturbation, the model is considered to be robust against that perturbation.\nSpecifically, we first calculate the respective accuracies $\\textrm{acc}_x$ and $\\textrm{acc}_{x^\\ast}$ on the same evaluation set for both a prompt $x$ and its perturbed version $x^{\\ast}$.\nUsing the average accuracies $\\textrm{avg-acc}_x$ and $\\textrm{avg-acc}_{x^\\ast}$ over $M$ prompts $x_1, \\ldots, x_M$, we calculate the RoD as $(\\textrm{avg-acc}_x - \\textrm{avg-acc}_{x^\\ast}) \/ \\textrm{avg-acc}_x = 1 - \\textrm{avg-acc}_{x^\\ast} \/ \\textrm{avg-acc}_x$.\n\n\n\\subsection{Effect of the Training Dataset Size}\n\\label{sec:exp:datasize} \nBefore moving on to the robustness experiments, we first investigate the number of training instances on which AP and MP perform best, and then use the best-performing AP and MP to evaluate their robustness in the subsequent experiments.\n\n\\paragraph{Experimental Settings:} \nWe gradually increased the size of the training dataset following the experimental setup of~\\citet{DBLP:conf\/naacl\/ScaoR21}. 
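As a concrete illustration of the RoD metric defined earlier, the following is a minimal Python sketch of the computation; the function name and example values are illustrative (the example averages match our AP token-reordering result on CB), not part of any released code.

```python
# Illustrative sketch of the rate-of-degradation (RoD) computation.
# RoD = 1 - avg-acc(perturbed) / avg-acc(original); accuracies may be
# percentages or fractions, since the ratio is scale-invariant.

def rate_of_degradation(original_accs, perturbed_accs):
    """Averages are taken over the M prompts (M = 4 in our setup).

    A small RoD means the prompts are robust to the perturbation;
    a negative RoD means the perturbation actually helped.
    """
    avg_orig = sum(original_accs) / len(original_accs)
    avg_pert = sum(perturbed_accs) / len(perturbed_accs)
    return 1.0 - avg_pert / avg_orig

# Example: average AP accuracy on CB of 68.3% before and 54.2% after
# shuffling prompt tokens gives RoD of about 0.21.
print(round(rate_of_degradation([68.3] * 4, [54.2] * 4), 2))  # 0.21
```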
\nSpecifically, we experimented with randomly sampled subsets of the training dataset having varying numbers of instances in $\\{10, 15, 20, 30, 50, 70, 100, 150, 200\\}$.\nBecause the performance of few-shot learning methods often varies due to the high feature variance in the training data, we randomly sampled four subsets for each dataset size, independently trained a model\\footnote{NVIDIA RTX A5000 was mainly used.} (i.e. trigger tokens and label tokens for AP, or a fine-tuned language model for MP and HFT) on each subset, and report the average accuracy on the validation data over the four models ($M = 4$).\nWe used the matched (examples from the same sources as the training set) validation set for MNLI.\nFor CB, we held out 50 training instances for development as in \\citet{DBLP:conf\/naacl\/ScaoR21} and used the original validation set as test data.\n\nWe searched for the optimal values of the following hyperparameters: the number of trigger tokens in \\{3, 5, 10\\}, the number of label tokens in \\{3, 5, 10\\}, and the number of tokens in a candidate set in \\{10, 50\\}. \nWe evaluated the test accuracy using the hyperparameters that had the highest accuracy on the validation data for each dataset size.\nFor the training of MP, we used the AdamW optimizer~\\cite{DBLP:conf\/iclr\/LoshchilovH19} with an initial learning rate of $10^{-5}$ and 1,000 learning steps following \\citet{DBLP:conf\/iclr\/MosbachAK21}.\n\n\n\\begin{table*}[t]\n \\centering\n \\small\n \\begin{tabular}{ccccccc}\n \\toprule\n \\textbf{Method} & \\textbf{\\#Train} & \\textbf{Template} & \\textbf{\\#Prompt tokens} & \\textbf{\\#Label tokens per class} & \\multicolumn{2}{c}{\\textbf{Avg. accuracy}} \\\\\n & & & & & \\textbf{CB} & \\textbf{MNLI} \\\\\n \\midrule\n AP & 200 & \\texttt{\\red{p} \\blue{ ... } \\red{h}} & 10 & 3 & 68.3 & 37.7 \\\\ \n MP & 200 & \\texttt{\\red{h}\\blue{? 
|} \\blue{,} \\red{p}} & 3 & 1 & \\uline{95.1} & \\uline{65.5} \\\\ \n HFT & - & \\texttt{ \\red{p} \\red{h}} & 0 & - & - & - \\\\ \n \\bottomrule\n \\end{tabular}\n \\caption{\n The average accuracy of the experiment with four training subsets of 200 instances.\n \\red{Red} represents the task inputs, \\red{\\texttt{h}} represents the hypothesis, \\red{\\texttt{p}} represents the premise, \\blue{blue} represents the prompt tokens, and \\blue{\\texttt{}} represents a trigger token.\n Unreported values are marked with `-'. \n }\n \\label{tab:pre-train-best}\n\\end{table*}\n\n\n\\paragraph{Main Results:}\n\\autoref{fig:pre-train} shows the performance\\footnote{HFT results were obtained from \\citet{DBLP:conf\/naacl\/ScaoR21}, F1-macro for CB and accuracy for MNLI.} against the training dataset size.\nWe see that in both CB and MNLI \\textbf{MP is always superior to AP}.\nFor example, with a dataset of size 200, where both AP and MP achieved their best accuracy on CB, MP's accuracy was 92.7\\%, while that of AP was considerably lower at 54.2\\%.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[clip, width=8cm]{datapoints.pdf}\n \\caption{\n Performance of AutoPrompt (AP), Manually-written Prompt (MP), and Head-based Fine-Tuning (HFT) as a function of training dataset size for CB and MNLI.\n Means and their 95\\% confidence intervals are plotted.\n The accuracy of HFT on CB is not plotted because it was not reported.\n }\n \\label{fig:pre-train}\n\\end{figure}\n\nOur results also suggest that \\textbf{the performance of discrete prompts learnt by AP is highly dataset dependent.}\n\\citet{shin-etal-2020-autoprompt} reported results for AP and HFT on SICK-E~\\cite{DBLP:conf\/lrec\/MarelliMBBBZ14}, which is another NLI dataset.\nThey concluded that AP was always superior to HFT up to training dataset sizes of 1,000 for the same RoBERTa-large PLM that we use.\nHowever, our experiments show the opposite trend (i.e. 
HFT is superior to AP).\nThis suggests that even if AP is superior to HFT on a given dataset, it is not guaranteed to be superior on a different dataset for the same task.\nFor example, the accuracy of AP on MNLI was quite low, in contrast to that on CB.\nThis result suggests that discrepancies in the domains and annotation guidelines across datasets make it difficult for AP to perform consistently.\n\n\\paragraph{Best Prompts:}\n\\autoref{tab:pre-train-best} shows the average accuracy of the models trained on 200 instances that performed well in both CB and MNLI.\nNote that there are four training subsets for each dataset size, resulting in four corresponding trained AP prompts and four PLMs fine-tuned by MP.\\footnote{We show the four best prompts learnt by AP in \\autoref{sec:supplementary}.}\nIn the robustness evaluations in \\autoref{sec:exp:reorder} through \\autoref{sec:exp:input-perturbation}, we used these learnt APs and MPs.\nIn this paper, (a) the trigger tokens learnt by AP, and (b) the manually-written prompts excluding the task inputs and mask tokens are collectively referred to as the \\emph{prompt tokens}.\n\n\n\\subsection{Token Reordering}\n\\label{sec:exp:reorder}\nAs seen from \\autoref{tbl:autoprompt-examples}, compared to MPs, where the ordering of tokens in a prompt is manually determined, discrete prompts learnt by AP appear to have no obvious ordering among their tokens.\nTo empirically investigate the importance of the token order in a discrete prompt, we conduct an experiment where we randomly shuffle the prompt tokens and measure the effect on the downstream task performance.\n\n\\paragraph{Experimental Procedure:}\nGiven a discrete prompt, we first randomly reordered its prompt tokens (e.g. 
shaded in blue in \\autoref{tab:pre-train-best}).\nNext, we used the reordered prompt with the PLM to make entailment predictions for the test instances in the CB and MNLI datasets.\nFinally, the entailment prediction accuracy (Acc) obtained with the reordered prompts was computed.\nWe repeated this evaluation 10 times for each prompt and report the averaged values and the corresponding RoD values.\n\n\\paragraph{Main Results:}\nFrom \\autoref{tab:reorder_result} we see that the accuracy drops for both AP and MP when the prompt tokens are randomly reordered.\nIn particular, the accuracy of AP drops significantly compared to that of MP.\nFor example, the accuracy of AP on CB drops by ca. 14\\% due to token reordering, while that for MP drops only by ca. 2\\%.\nIntuitively, one would expect that changing the order of prompt tokens in MP would result in a significant drop in accuracy because the meaning of the prompts would change.\nHowever, we see that this is not the case. \nThis result shows that \\textbf{the discrete prompts learnt by AP strongly rely on the token order}.\n\n\\paragraph{Additional Analysis:}\nTo further analyze the relationship between the level of perturbation introduced by reordering prompt tokens in AP and its effect on the performance, we computed the token-level edit distance \\citep[Levenshtein distance;][]{levenshtein1966binary} between each prompt and its token-shuffled version as shown in \\autoref{fig:scatter_reodering}.\nFor all four AP prompts, we see that the accuracy drops when the perturbation noise (i.e. 
measured by edit distance) increases.\nThis reconfirms the lack of robustness in discrete prompts learnt by AP to the random shuffling of prompt tokens.\n\n\\begin{table}[t]\n \\centering\n \\small\n \\begin{tabular}{clllccc}\n \\toprule\n \\textbf{Method} & \\textbf{Metrics} & \\textbf{CB} & \\textbf{MNLI} \\\\\n \\midrule\n \\multirow{2}{*}{AP} & Acc & \\textbf{54.2} & \\textbf{34.3} \\\\ \n & RoD & \\textbf{0.21} & \\textbf{0.10} \\\\ \n \\midrule\n \\multirow{2}{*}{MP} & Acc & \\uline{92.7} & \\uline{59.3} \\\\\n & RoD & \\uline{0.03} & \\uline{0.09} \\\\ \n \\bottomrule\n \\end{tabular}\n \\caption{\n Performance of reordered prompts.\n Acc denotes accuracy; RoD denotes the RoD from before the reordering (\\autoref{tab:pre-train-best}).\n The largest drops in accuracy are \\textbf{bolded} and the smallest drops are \\uline{underlined} for each method and dataset.\n AP relies more strongly on word order than MP.\n }\n \\label{tab:reorder_result}\n\\end{table}\n\n\n\\subsection{Token Deletion} \n\\label{sec:exp:deletion}\nAs seen from \\autoref{tbl:autoprompt-examples}, the discrete prompts learnt by AP perform better than MP.\nHowever, it is often difficult to determine the importance of prompt tokens to the target task due to their lack of interpretability (e.g. prompt token `\\emph{neau}' in \\autoref{tbl:autoprompt-examples}).\nTo understand the significance of individual prompt tokens to the overall discrete prompt, we conducted an experiment where we systematically deleted one or more prompt tokens at various positions from a given discrete prompt and measure the drop (if any) in the performance of the NLI task.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[clip, width=7.5cm]{reordering_edit_distance.png}\n \\caption{Edit distance and accuracy of the reordered trigger tokens. We evaluated them on the validation data of CB. 
\n The prompts numbered 0 through 3 each represent the four prompts learnt by AP (shown in \\autoref{tab:best-ap-prompts-cb}).\n Note that a point with an edit distance of zero indicates accuracy with the original trigger token.\n }\n \\label{fig:scatter_reodering}\n \\vspace{-3mm}\n\\end{figure}\n\n\\paragraph{Experimental Procedure:}\nWe evaluated two settings of prompt deletion: \\emph{single} and \\emph{multiple} token deletion.\nIn the single token deletion setting, we deleted one token at different positions in a given prompt.\nFor AP, we repeated this with each of the four discrete prompts (shown in \\autoref{tab:pre-train-best}) and report the average accuracy.\nIn the multiple token deletion setting, we delete $n \\in \\{1, 3, 5, 7\\}$ prompt tokens following three strategies:\n\\emph{Random-deletion} deletes $n$ prompt tokens randomly,\n\\emph{Front-deletion} deletes $n$ consecutive prompt tokens from the beginning of the prompt, and \\emph{Back-deletion} deletes $n$ tokens counted backwards from the end of the prompt.\nIn random-deletion, we ran 100 trials and report the average accuracy.\nAs in the previous experiments, we used four prompts for AP and report the averaged results.\n\n\\begin{table*}[t!]\n \\setlength{\\tabcolsep}{1.9mm} \n \\centering\n \\small\n \\begin{tabular}{cccccccccccccc}\n \\toprule\n \\multirow{2}{*}{\\textbf{Task}} & \\multirow{2}{*}{\\textbf{Method}} & \\multirow{2}{*}{\\textbf{Metrics}} & \\multicolumn{10}{c}{\\textbf{Position of the deleted prompt token}} & \\multirow{2}{*}{\\textbf{Orig.}} \\\\\n & & & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\\\\n \\midrule\n \\multirow{4}{*}{CB} & \n \\multirow{2}{*}{AP} & Acc & 62.1 & \\textbf{61.6} & 63.4 & 59.4 & \\uline{65.6} & \\uline{65.6} & 62.1 & 63.8 & 62.1 & 62.9 & 68.3\\\\\n & & RoD & 0.09 & \\textbf{0.10} & 0.07 & 0.13 & \\uline{0.04} & \\uline{0.04} & 0.09 & 0.07 & 0.09 & 0.08 & - \\\\\n \\cmidrule(lr){2-14}\n & \\multirow{2}{*}{MP} & Acc & 93.8 & \\textbf{93.3} & 
\\uline{96.0} & - & - & - & - & - & - & - & 95.1 \\\\\n & & RoD & 0.01 & \\textbf{0.02} & \\uline{-0.01} & - & - & - & - & - & - & - & - \\\\\n \\midrule\n \\multirow{4}{*}{MNLI} & \n \\multirow{2}{*}{AP} & Acc & \\uline{37.9} & 37.8 & \\textbf{36.6} & 37.5 & 37.5 & 37.2 & 37.5 & 37.4 & 37.5 & 37.1 & 37.7 \\\\\n & & RoD & \\uline{-0.01} & 0.00 & \\textbf{0.03} & 0.01 & 0.01 & 0.01 & 0.01 & 0.01 & 0.01 & 0.02 & - \\\\\n \\cmidrule(lr){2-14}\n & \\multirow{2}{*}{MP} & Acc & 64.5 & \\uline{65.4} & \\textbf{55.4} & - & - & - & - & - & - & - & 65.5 \\\\\n & & RoD & 0.02 & \\uline{0.00} & \\textbf{0.15} & - & - & - & - & - & - & - & - \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{\n Average accuracy was obtained after deleting a single token at different positions of a given prompt.\n The largest drops in accuracy over the deletion positions are \\textbf{bolded} and the smallest drops are \\uline{underlined} for each method and dataset.\n Column `Orig.' shows the performance of the original prompt.\n }\n \\label{tab:deletion_one}\n\\end{table*}\n\n\\paragraph{Results:}\nFrom \\autoref{tab:deletion_one} we see that the \\textbf{accuracy of both AP and MP drops even when a single token is deleted} at specific positions.\nHowever, the observed trends differ in CB and MNLI.\nFor example, AP resulted in higher RoD values in CB compared to MNLI.\nThis shows that the robustness of AP under single token deletion heavily depends on the dataset.\n\\autoref{tab:deletion_multi} shows the results for the multiple token deletion setting.\nWe see that \\textbf{the performance of both AP and MP degrades when more tokens are deleted.}\nInterestingly, the accuracy drop in CB is very small for MP even when all prompt tokens are deleted (i.e., only the task inputs and \\texttt{} were used as the input).\nThis suggests that the performance on CB is less reliant on the prompt tokens in MP.\n\n\n\\subsection{Cross-Dataset Evaluation}\n\\label{sec:exp-cross-data}\nGiven that discrete prompt 
learning methods such as AP learn prompts from a small set of training instances, it is important that the learnt prompts encode generalizable task-specific features and not random artefacts in the training sample used.\nTo study the transferability of the learnt discrete prompts from one dataset to another, we conduct a cross-dataset evaluation as described next.\n\n\\begin{table}[t!]\n \\setlength{\\tabcolsep}{0.9mm} %\n \\centering\n \\small\n \\begin{tabular}{ccccccccccccc}\n \\toprule\n \\multirow{2}{*}{\\textbf{Strategy}} & \\multirow{2}{*}{\\textbf{Method}} & \\multirow{2}{*}{\\textbf{Metrics}} & \\multicolumn{4}{c}{\\textbf{\\#Deleted Tokens}} & \\multirow{2}{*}{\\textbf{Orig.}} \\\\\n & & & 1 & 3 & 5 & 7 \\\\\n \\midrule\n \\midrule\n \\multicolumn{8}{c}{\\textbf{CB}} \\\\\n \\midrule\n \n \\multirow{4}{*}{Random}\n & \\multirow{2}{*}{AP}\n & Acc & \\uline{56.7} & 56.0 & 55.4 & \\textbf{54.8} & 68.3 \\\\\n & & RoD & \\uline{0.17} & 0.18 & 0.19 & \\textbf{0.20} & - \\\\\n \\cmidrule(lr){2-8}\n & \\multirow{2}{*}{MP}\n & Acc & \\textbf{93.3} & \\uline{94.6} & - & - & 95.1\\\\\n & & RoD & \\textbf{0.02} & \\uline{0.01} & - & - & - \\\\\n \\midrule\n \\multirow{4}{*}{Front}\n & \\multirow{2}{*}{AP}\n & Acc & \\uline{62.1} & \\textbf{49.1} & 57.6 & 57.6 & 68.3 \\\\\n & & RoD & \\uline{0.09} & \\textbf{0.28} & 0.16 & 0.16 & - \\\\\n \\cmidrule(lr){2-8}\n & \\multirow{2}{*}{MP}\n & Acc & \\textbf{93.8} & \\uline{94.6} & - & - & 95.1\\\\\n & & RoD & \\textbf{0.01} & \\uline{0.01} & - & - & - \\\\\n \\midrule\n \\multirow{4}{*}{Back}\n & \\multirow{2}{*}{AP}\n & Acc & \\uline{62.9} & 57.6 & 55.8 & \\textbf{51.3} & 68.3 \\\\\n & & RoD & \\uline{0.08} & 0.16 & 0.18 & \\textbf{0.25} & - \\\\\n \\cmidrule(lr){2-8}\n & \\multirow{2}{*}{MP}\n & Acc & \\uline{96.0} & \\textbf{94.6} & - & - & 95.1 \\\\\n & & RoD & \\uline{-0.01} & \\textbf{0.01} & - & - & - \\\\\n \\midrule\n \\midrule\n \n \\multicolumn{8}{c}{\\textbf{MNLI}} \\\\\n \\midrule\n \\multirow{4}{*}{Random}\n & 
\\multirow{2}{*}{AP}\n & Acc & \\textbf{35.8} & \\textbf{35.8} & 36.0 & \\uline{36.2} & 37.7 \\\\\n & & RoD & \\textbf{0.05} & \\textbf{0.05} & 0.05 & \\uline{0.04} & - \\\\\n \\cmidrule(lr){2-8}\n & \\multirow{2}{*}{MP}\n & Acc & \\uline{65.4} & \\textbf{52.6} & - & - & 65.5 \\\\\n & & RoD & \\uline{0.0} & \\textbf{0.20} & - & - & - \\\\\n \\midrule\n \\multirow{4}{*}{Front}\n & \\multirow{2}{*}{AP}\n & Acc & \\uline{37.9} & 36.5 & 36.2 & \\textbf{36.0} & 37.7 \\\\\n & & RoD & \\uline{-0.01} & 0.03 & 0.04 & \\textbf{0.05} & - \\\\\n \\cmidrule(lr){2-8}\n & \\multirow{2}{*}{MP}\n & Acc & \\uline{64.5} & \\textbf{52.6} & - & - & 65.5 \\\\\n & & RoD & \\uline{0.02} & \\textbf{0.20} & - & - & - \\\\\n \\midrule\n \\multirow{4}{*}{Back}\n & \\multirow{2}{*}{AP}\n & Acc & \\uline{37.1} & 36.7 & 35.7 & \\textbf{36.5} & 37.7 \\\\\n & & RoD & \\uline{0.02} & 0.03 & 0.05 & \\textbf{0.03} & - \\\\\n \\cmidrule(lr){2-8}\n & \\multirow{2}{*}{MP}\n & Acc & \\uline{55.4} & \\textbf{52.6} & - & - & 65.5 \\\\\n & & RoD & \\uline{0.15} & \\textbf{0.20} & - & - & - \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{\n Average accuracy was obtained after deleting multiple tokens from a given prompt.\n The largest drops in accuracy over the deleted tokens are \\textbf{bolded} and the smallest drops are \\uline{underlined} for each strategy and method.\n }\n \\label{tab:deletion_multi}\n\\end{table}\n\n\\paragraph{Experimental Procedure:}\nWe used one NLI dataset (e.g. CB) to learn the prompts and then use them to make entailment predictions in another NLI dataset (e.g. 
MNLI).\nWe then measured the RoD for this cross-dataset transfer, relative to the accuracy on test data from the same dataset used for training.\n\n\\paragraph{Results:}\nAs seen from \\autoref{tab:cross-dataset-eval}, \\textbf{AP-based prompts do not generalize well across datasets.}\nFor both AP and MP, RoD is larger in the transfer from CB to MNLI than in the opposite direction.\nThis implies that MNLI is a better dataset for fine-tuning a PLM for NLI using discrete prompts.\n\n\\begin{table}[t]\n\\centering\n\\small\n\\begin{tabular}{lccccc}\n\\toprule\n\\multirow{2}{*}{\\textbf{Method}}& \\multicolumn{2}{c}{\\textbf{Test Dataset}} & \\multirow{2}{*}{\\textbf{RoD}}\\\\\n& \\textbf{CB} & \\textbf{MNLI} & \\\\\n\\midrule\nAP trained on CB & 68.3 & 36.1 & \\uline{0.47} \\\\ \nAP trained on MNLI & 42.9 & 37.7 & \\uline{0.12} \\\\ \n\\midrule\nMP trained on CB & 95.1 & 43.4 & \\textbf{0.54} \\\\ \nMP trained on MNLI & 43.8 & 65.5 & \\textbf{0.33} \\\\ \n\\bottomrule\n\\end{tabular}\n\\caption{\nAccuracy and RoD for the cross-dataset evaluation, where a method (AP\/MP) is trained on one NLI dataset (CB\/MNLI) and the learnt prompts are used to make entailment predictions on a different NLI dataset.\n}\n\\label{tab:cross-dataset-eval}\n\\end{table}\n\n\n\\subsection{Adversarial Perturbations}\n\\label{sec:exp:input-perturbation}\nIntroducing carefully designed adversarial perturbations to the test instances, such as modifications to sentences that may or may not alter the original target labels, has been used as a technique for probing the robustness of models~\\cite{DBLP:journals\/corr\/GoodfellowSS14}. \nPrevious studies~\\cite{DBLP:journals\/corr\/SamantaM17,DBLP:conf\/aaai\/JinJZS20} have shown that pre-trained models can be easily fooled into making incorrect predictions by seemingly innocuous perturbations to the test instances. 
Therefore, we evaluate discrete prompt-based NLI models for their robustness against adversarially perturbed test instances.\n\n\\paragraph{Evaluation Dataset:}\nFor this purpose, we asked two annotators to manually edit hypothesis sentences in NLI test data considering two types of perturbations:\n(1) perturbations that do not change reference labels, and (2) perturbations that change reference labels.\nAn example is shown in \\autoref{tab:perturbation-data}. \n\nFor the first type of perturbation, we edited a hypothesis sentence such that its relationship with the corresponding premise remains unchanged.\nFor the second type, we edited a hypothesis sentence such that its relationship is reversed (e.g., from \\emph{entailment} to \\emph{contradiction}).\nThe premise and hypothesis pairs were sampled from CB (validation set) and MNLI (test set).\nBecause there are ca. 10,000 test instances in MNLI and it is costly to manually edit sentences, we used 100 randomly-chosen sentence pairs covering MNLI and CB.\n\n\\begin{table}[t]\n \\centering\n \\small\n \\begin{tabular}{lp{7em}p{5em}}\n \\toprule\n \\textbf{} & \\textbf{Hypothesis} & \\textbf{Label} \\\\ \n \\midrule\n Original & The Wither's only had daughters. & contradiction \\\\\n \\midrule\n Perturbation & \\\\\n \\quad w\/o label changes & The Wither's did not have sons. & contradiction \\\\\n \\quad w\/ label changes & The Wither's had a boy. & entailment \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{\n Examples of our evaluation set consisting of task inputs with perturbations. 
\n The premise sentence is `\\emph{The Wither's eldest boy, one of the four of the town militia, saluted in the old style with his stick sword.}'\n } \n \\label{tab:perturbation-data}\n\\end{table}\n\n\\iffalse\n\\begin{table}[t]\n \\setlength{\\tabcolsep}{1mm} %\n \\centering\n \\begin{tabular}{lccccc}\n \\toprule\n & \\multicolumn{2}{c}{\\textbf{\\#Perturbation}} \\\\\n & \\textbf{w\/o label changes} & \\textbf{w\/ label changes} \\\\ \n \\midrule\n CB & 50 & 49 \\\\ \n MNLI & 48 & 47 \\\\ \n \\bottomrule\n \\end{tabular}\n \\caption{\n The dataset sizes (measured by the number of instances) for the adversarial dataset.\n }\n \\label{tab:perturbation-dataset-stat}\n\\end{table}\n\\fi\n\n\\paragraph{Experimental Procedure:}\nWe computed the RoD of average accuracies obtained with original and adversarial test instances.\nSpecifically, we used the AP prompts in \\autoref{tab:pre-train-best} under three settings:\n(a) original (without perturbations), \n(b) perturbations without label changes, \nand (c) perturbations with label changes.\nThen, we calculate RoD from (a) to (b) and (a) to (c) as shown in \\autoref{tab:task-input-perturbation}.\n\n\\begin{table}[t!]\n\\small\n\\setlength{\\tabcolsep}{0.5mm} %\n\\centering\n\\begin{tabular}{ccccc}\n\\toprule\n\\textbf{Perturbation} & \\textbf{Method} & \\textbf{Metrics} & \\textbf{CB} & \\textbf{MNLI} \\\\\n\\midrule\n\\multirow{4}{*}{Original}\n & \\multirow{2}{*}{AP} \n & Acc & 54.5 & 40.5 \\\\\n & & RoD & - & - \\\\\n \\cmidrule(l){2-5}\n & \\multirow{2}{*}{MP} \n & Acc & 95.5 & 71.0 \\\\\n & & RoD & - & - \\\\\n\\midrule\n\\multirow{4}{*}{\\begin{tabular}{c}Perturbation \\\\w\/o label changes\\end{tabular}}\n & \\multirow{2}{*}{AP} \n & Acc & 55.5 & 43.2 \\\\\n & & RoD & \\uline{-0.02} & \\uline{-0.07} \\\\\n \\cmidrule(l){2-5}\n & \\multirow{2}{*}{MP} \n & Acc & 93.0 & 66.7 \\\\\n & & RoD & \\textbf{0.03} & \\textbf{0.06} \\\\\n\\midrule\n\\multirow{4}{*}{\\begin{tabular}{c}Perturbation \\\\w\/ label 
changes\\end{tabular}}\n & \\multirow{2}{*}{AP} \n & Acc & 42.3 & 39.4 \\\\\n & & RoD & \\uline{0.22} & \\uline{0.03} \\\\\n \\cmidrule(l){2-5}\n & \\multirow{2}{*}{MP} \n & Acc & 41.8 & 61.2 \\\\\n & & RoD & \\textbf{0.56} & \\textbf{0.14} \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{\nAccuracy and RoD in prompts for task inputs that include perturbations. The RoD here is the rate of degradation in the average accuracy from the original without perturbations to perturbations without label changes or perturbations with label changes.\nThe largest drops in accuracy are \\textbf{bolded} and the smallest drops are \\uline{underlined} for each perturbation and method.\n}\n\\label{tab:task-input-perturbation}\n\\end{table}\n\n\n\\paragraph{Results:}\nOverall, we see that the RoD of AP is consistently smaller than that of MP in both CB and MNLI under both types of perturbations.\nHowever, it is also clear that the accuracy obtained with AP is much smaller than that with MP.\nFor the perturbations without label changes, both AP and MP show small RoD values, compared to those with label changes.\\footnote{w\/o label change modifications slightly increase the average length of a hypothesis and AP seems to better exploit this extra information for inference resulting in a slight improvement in accuracy (negative RoD).}\nThis shows that both AP and MP are relatively robust against modifications to the hypotheses that do not significantly alter the meaning.\nHowever, when stronger perturbations are introduced that would result in label changes, the accuracy of both AP and MP drops significantly. 
\\footnote{MP is less robust compared to AP, likely as a result of overfitting to strongly perturbed training data while fine-tuning the PLM.}\nThis is a concern because it shows that \\textbf{neither AP nor MP is sufficiently robust to correctly predict the target labels when the hypothesis sentences in test data are adversarially modified.}\n\n\n\\section{Conclusion}\nWe investigated the robustness of discrete prompts under different perturbations.\nWe found that although discrete prompts remain relatively robust against token deletion, they are highly sensitive to other types of perturbations such as token shuffling.\nFor adversarial perturbations to the input, discrete prompts were robust to weak perturbations without label changes, but AP was more robust than MP for perturbations with label changes.\nMoreover, they generalize poorly across different datasets annotated for NLI.\nWe hope our analysis will inspire future work to develop methods that learn discrete prompts that are both accurate and robust.\n\n\n\\section{Limitations}\nPossible limitations of this work are:\n\\begin{itemize}\n\\item We chose the popular discrete prompt methods AP and MP and did not investigate other methods in this work. Our analysis procedure can still be applied to other discrete prompts such as AvgTrigger~\\cite{wallace-etal-2019-universal}.\n\\item We chose RoBERTa-large following previous studies of HFT~\\cite{DBLP:conf\/naacl\/ScaoR21} and AP~\\cite{shin-etal-2020-autoprompt} for reproducible and directly comparable results. Other PLMs would lead to different results, but they can also be investigated in the same way as in this work.\n\\item This work focuses on NLI because it is a fundamental natural language understanding task and remains difficult even with PLMs~\\cite{GPT3}.
Other complex downstream tasks are worth investigating for a deeper understanding of prompt-based approaches in future work.\n\\item The results and conclusions are from the English datasets and would differ in other languages. However, our methodologies do not depend on English and can be applied to other languages as important future studies. \n\\item Since there was a performance gap between MP\/HFT and AP, the accuracies by the perturbations could be affected. However, this work does not aim to find the best prompt learning method but to analyze the robustness of discrete prompts for a deeper understanding of them.\n\\end{itemize}\n\n\n\\section{Ethical Considerations}\nOur adversarial dataset came from existing datasets of CB and MNLI.\nWe visually checked the instances in the data development and found no instances with ethical concerns.\n\nOne should also be aware of social biases (e.g. gender stereotypes) in PLM. \nRoBERTa, the PLM we used in our experiments, is known to have gender biases~\\cite{DBLP:journals\/corr\/abs-2105-05541}. \nSince we used it as-is in order to follow the experimental conditions of previous studies using RoBERTa, our current results are possibly influenced by such biases.\nHowever, the consideration of the prompt robustness of this work would not pose or magnify such ethical concerns.\n\n\n\\section{Acknowledgments}\nThis research was supported by the JSPS KAKENHI (Grants-in-Aid for Scientific Research) JP22H03654.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section*{Supplementary Material}\\label{supplementary_material}\n\n\n\\section{Introduction}\n\\label{introduction}\n\n\nSeveral studies about Hybrid Unmanned Aerial Underwater Vehicles (HUAUVs) have been published recently~\\cite{drews2014hybrid, neto2015attitude, da2018comparative, maia2017design, lu2019multimodal, mercado2019aerial, horn20, aoki2021}. 
These types of vehicles enable an interesting range of new applications due to their capability to operate both in the air and underwater. These include inspection and mapping of partly submerged areas in industrial facilities, search and rescue, and others. Most of the literature in the field is still focused on vehicle design, with few published works on the theme of autonomous navigation~\\cite{bedin2021deep}. The ability to navigate in both environments and successfully transit from one to the other imposes additional challenges that must be addressed.\n\nLately, approaches based on Deep-RL have been successfully applied to navigation-related tasks for a range of mobile vehicles, including ground mobile robots~\\cite{ota2020efficient}, aerial robots~\\cite{tong2021uav,grando2022double} and underwater robots~\\cite{carlucho2018}. Based on actor-critic methods and multi-layer network structures, these approaches have achieved interesting results in mapless navigation and obstacle avoidance, and even in medium transition for HUAUVs~\\cite{bedin2021deep, de2022depth}. However, the challenges faced by this kind of vehicle leave these existing approaches still too limited, with poor generalization across different scenarios.\n\n\nIn this work, we present two new double-critic Deep-RL approaches in the context of HUAUVs to perform navigation-related tasks in a continuous state-space environment: (1)~a deterministic approach based on Twin Delayed Deep Deterministic Policy Gradient (TD3)~\\cite{fujimoto2018addressing}; and\n (2)~a stochastic approach based on Soft Actor-Critic (SAC)~\\cite{haarnoja2018soft}. We show that we can train agents that are consistently better than the state of the art at generalizing across different simulated scenarios, with improved stability in mapless navigation, obstacle avoidance and medium transitions. Our evaluation tasks included both air-to-water and water-to-air transitions.
We compared our methods with other single-critic approaches and with an adapted version of a traditional Behavior-Based Algorithm (BBA)~\\cite{marino2016minimalistic} used in aerial vehicles. Fig.~\\ref{fig:simenv} shows a snapshot of our simulation environment.\n \n\\begin{figure}[tbp!]\n \\vspace{-2mm}\n \\centering\n \\includegraphics[width=\\linewidth]{img\/sonar_v5.png}\n \\caption{Our HUAUV underwater in the first scenario (left) and its respective sonar readings (right).}\n \\label{fig:simenv}\n \\vspace{-4mm}\n\\end{figure}\n\n\nThis work provides the following main contributions:\n\n\\begin{itemize}\n\n\\item We show that our agents present a robust capacity for generalization across different environments, achieving a good performance in a complex and completely unknown environment. The robot also performs the medium transition, being capable of arriving at the desired target and avoiding collisions.\n\n\\item We show that a Long Short Term Memory (LSTM) architecture can achieve better overall performance and capacity for generalization than the state-of-the-art Multi-Layer Perceptron (MLP) architectures.\n\n\\end{itemize}\n\nThis work has the following structure: the related works are discussed in the following section (Sec. \\ref{related_works}). Following it, we present our methodology in Sec. \\ref{methodology}. The results are presented in Sec. \\ref{results} and discussed in Sec. \\ref{discussion}.\n\n\\section{Related Work}\n\\label{related_works}\n\n\nFor more traditional types of vehicles, several works have been published demonstrating how efficiently Deep-RL can solve the mapless navigation problem~\\cite{tobin2017domain}. For a ground robot, Tai~\\emph{et al.}~\\cite{tai2017virtual} demonstrated a mapless motion planner based on the DDPG algorithm, employing a 10-dimensional range finder combined with the relative distance to the target as inputs and continuous steering signals as outputs.
Recently, Deep-RL methods have also been successfully used by Ota~\\emph{et al.}~\\cite{ota2020efficient}, de Jesus~\\emph{et al.}~\\cite{jesus2019deep,jesus2021soft} and others, to accomplish mapless navigation-related tasks for terrestrial mobile robots. Singh and Thongam~\\cite{singh2018mobile} demonstrated efficient near-optimal navigation for a ground robot in dynamic environments employing an MLP to perform speed control while choosing collision-free path segments.\n\nFor UAVs, Kelchtermans and Tuytelaars \\cite{kelchtermans2017hard} demonstrated how memory could help Deep Neural Networks (DNN) for navigation in a simulated room crossing task. Tong~\\emph{et al.}~\\cite{tong2021uav} showed better than state-of-the-art convergence and effectiveness in adopting a DRL-based method combined with a LSTM to navigate a UAV in highly dynamic environments, with numerous obstacles moving fast.\n\nWhen it comes to problems involving specifically mapless navigation for UAVs, few works examine the effectiveness of Deep-RL. Grando~\\emph{et al.}~\\cite{grando2020visual} explored a Deep-RL architecture, however, navigation was constrained to a 2D space. Rodriguez \\emph{et al.}~\\cite{rodriguez2018deep} employed a DDPG-based strategy to solve the problem of landing UAVs on a moving platform. Similar to our work, they employed RotorS framework~\\cite{furrer2016rotors} combined with the Gazebo simulator. Sampedro~\\emph{et al.}~\\cite{sampedro2019fully} proposed a DDPG-based strategy for search and rescue missions in indoor environments, utilizing real and simulated visual data. Kang~\\emph{et al.}~\\cite{kang2019generalization} also used visual information, although he focused on the subject of collision avoidance. In a go-to-target task, Barros~\\emph{et al.}~\\cite{2020arXiv201002293M} applied a SAC-based method for the low-level control of a UAV. 
Double critic-based Deep-RL approaches similar to the one proposed here have also been shown to yield good results~\\cite{grando2022double}.\n\nThe HUAUV literature is still mostly concerned with vehicle design and modeling \\cite{drews2014hybrid, neto2015attitude, da2018comparative, maia2017design, lu2019multimodal, mercado2019aerial, horn20}. Two works have recently tackled the navigation problem with the medium transition of HUAUVs~\\cite{pinheiro2021trajectory, bedin2021deep}. Pinheiro~\\emph{et al.}~\\cite{pinheiro2021trajectory} focused on smoothing the medium transition problem in a simulated model on MATLAB. Grando~\\emph{et al.}~\\cite{bedin2021deep} developed Deep-RL actor-critic approaches with an MLP architecture. These two works were developed using generic distance sensing information for aerial and underwater navigation. In contrast, our work relies on more realistic sensing data, with the simulated LIDAR and sonar both being based on real-world devices.\n\nThe HUAUV presented in this paper is based on the model of Drews-Jr~\\emph{et al.}~\\cite{drews2014hybrid}, which Neto~\\emph{et al.}~\\cite{neto2015attitude} largely expanded. Our work differs from the previously discussed works by only using the vehicle's relative localization data and not its explicit localization data. We also present Deep-RL approaches based on double-critic techniques instead of a single critic, with RNN structures instead of the MLP structures traditionally used for mapless navigation of mobile robots.
We compare our approaches with state-of-the-art Deep-RL approaches and with a behavior-based algorithm \\cite{marino2016minimalistic} adapted for hybrid vehicles to show that our new methodology improves the overall capability to generalize across distinct environments.\n\n\\section{Methodology}\n\\label{methodology}\n\nIn this section, we describe our simulation environment, our hybrid vehicle, and the proposed Deep-RL, detailing the network structure for both deterministic and stochastic agents. We also introduce the task that the vehicle must accomplish autonomously and the respective reward function.\n \n\\subsection{Deterministic Deep RL}\n\nBuilding on DQN~\\cite{mnih2013playing}, Deep Deterministic Policy Gradient (DDPG)~\\cite{lillicrap2015continuous} employs an actor network whose output is a vector of real values representing the chosen action, and a second neural network to learn the target function, providing stability and making it well suited for mobile robots~\\cite{jesus2019deep}. While it provides good results, DDPG still has its problems, such as overestimating the Q-values, which can lead the policy to break. TD3~\\cite{fujimoto2018addressing} uses DDPG as its backbone and adds some improvements, such as clipped double-Q~learning with two neural networks as targets for the Bellman error loss functions, delayed policy updates, and Gaussian noise on the target action, raising its performance. \n\nOur deterministic approach is based on the TD3 technique.
The pseudocode can be seen in Algorithm~\\ref{alg:docrl_d}.\n\n\\begin{algorithm}[!htb]\n \\algsetup{linenosize=\\tiny}\n \\scriptsize\n \\caption{Deep Reinforcement Learning Deterministic}\n \\label{alg:docrl_d}\n \\begin{algorithmic}[1]\n \\STATE Initialize params of critic networks $\\theta_{1}$, $\\theta_{2}$, and actor network $\\phi$\n \\STATE Initialize params of target networks $\\phi^{\\prime}\\leftarrow\\phi$, $\\theta_{1}^{\\prime}\\leftarrow\\theta_{1}$, $\\theta_{2}^{\\prime}\\leftarrow\\theta_{2}$\n \\STATE Initialize replay buffer $\\beta$\n \\FOR{$ep = 1$ to $max\\_eps$}\n \\STATE reset environment state\n \\FOR{$t = 0$ to $max\\_steps$}\n \\IF {$t < start\\_steps$}\n \\STATE $a_{t} \\leftarrow $ env.action\\_space.sample() \n %\n \\ELSE\n \\STATE $a_{t}\\leftarrow\\mu_{\\phi}(s_t)+\\epsilon,\\ \\epsilon\\sim \\mathcal{N}(0,OU)$\n %\n \\ENDIF\n \n \\STATE $s_{t+1}$, $r_{t}$, $d_{t}$, \\_ $\\leftarrow$ env.step($a_{t}$)\n \n \\STATE store the new transition $(s_{t}, a_{t}, r_{t}, s_{t+1}, d_{t})$ into $\\beta$\n \n \\IF{$t > start\\_steps$}\n \\STATE Sample mini-batch $B$ of $N$ transitions $(s_{t}, a_{t}, r_{t}, s_{t+1}, d_{t})$ from $\\beta$\n \n \\STATE $a'\\leftarrow\\mu_{\\phi^{\\prime}}(s^{\\prime})+\\epsilon,\\ \\epsilon\\sim clip(\\mathcal{N}(0,\\tilde{\\sigma}), -c,\\ c)$ \n \n \\STATE Compute target: \\\\ $Q_{t} \\leftarrow r_{t}+\\gamma(1-d_{t})\\min_{i=1,2}Q_{\\theta_{i}^{\\prime}}(s', a')$\n \n \\STATE Update double critics with one step gradient descent:\\\\\n $\\nabla_{\\theta_i} \\frac{1}{N} \\sum_{s_t \\in B}(Q_t - Q_{\\theta_{i}}(s_{t},a_{t}))^2$ \\qquad for i=1,2\n \n \\IF {t \\% $policy\\_freq(t)$ == 0}\n \\STATE Update policy with one step gradient descent:\\\\ \n $\\nabla_{\\phi}\\frac{1}{N} \\sum_{s_t \\in B}[\\nabla_{a_{t}}Q_{\\theta_{1}}(s_{t},a_{t})\\vert _{a_{t}=\\mu_{\\phi}(s_{t})}\\nabla_{\\phi}\\mu_{\\phi}(s_{t})]$\n \n \\STATE Soft update for the target networks: \\\\\n \\STATE $\\phi^{\\prime}\\leftarrow\\tau\\phi+(1-\\tau)\\phi^{\\prime}$\n \\STATE 
$\\theta_{i}^{\\prime}\\leftarrow\\tau\\theta_{i}+(1-\\tau)\\theta_{i}^{\\prime}$ \\qquad for i=1,2\n \n \n \\ENDIF\n \\ENDIF\n \\ENDFOR\n \\ENDFOR\n \\end{algorithmic}\n\\end{algorithm}\n\nWe train for $max\\_steps$ steps in $max\\_eps$ episodes. Our approach starts by exploring random actions for the initial $start\\_steps$ steps. We use an LSTM as the actor network $\\phi$, with $\\phi^{\\prime}$ as its target. The double critics are also LSTM networks, denoted by $\\theta_{1}$ and $\\theta_{2}$, with $\\theta_{1}^{\\prime}$ and $\\theta_{2}^{\\prime}$ as their targets. Both critics are learned simultaneously, which addresses approximation error and reduces the overestimation bias in the Q-values. The actor target chooses the action $a^{\\prime}$ based on the state $s^{\\prime}$, and we add Ornstein-Uhlenbeck noise to it. The double critic targets take the tuple ($s^{\\prime}$, $a^{\\prime}$) and return two Q-values as outputs, of which only the minimum is considered. The loss is calculated as the Mean Squared Error between the approximate value from the target networks and the value from each critic network. We use Adaptive Moment Estimation (Adam) to minimize the loss.\n\nWe update the policy network less frequently than the value networks, taking into account a $policy\\_freq$ factor that increases over time by the following rule:\n\n\\begin{equation*} policy\\_freq(t)=\\left\\lfloor\\left(0.5- \\frac{t}{max\\_steps \\times 3}\\right)^{-1}\\right\\rfloor\\end{equation*}\n\n\\subsection{Stochastic Deep RL}\n\nWe also introduce a stochastic approach based on SAC~\\cite{haarnoja2018soft}, which combines off-policy updates with a stochastic actor-critic method to learn continuous action-space policies. It uses neural networks as approximation functions to learn a policy and two Q-value functions, similarly to TD3.
However, SAC utilizes the current stochastic policy to act without noise, providing better stability and performance, maximizing both the reward and the policy's entropy, encouraging the agent to explore new states and improving training speed. We use the soft Bellman equation with neural networks as function approximators to maximize entropy. The pseudocode can be seen in Algorithm \\ref{alg:docrl_s}.\n\n\\begin{algorithm}[!htb]\n \\algsetup{linenosize=\\tiny}\n \\scriptsize\n \\caption{Deep Reinforcement Learning Stochastic}\n \\label{alg:docrl_s}\n \\begin{algorithmic}[1]\n \\STATE Initialize params of critic networks $\\theta_{1}$, $\\theta_{2}$, and actor network $\\phi$\n \\STATE Initialize params of target networks $\\phi^{\\prime}\\leftarrow\\phi$, $\\theta_{1}^{\\prime}\\leftarrow\\theta_{1}$, $\\theta_{2}^{\\prime}\\leftarrow\\theta_{2}$\n \\STATE Initialize replay buffer $\\beta$\n \\FOR{$ep = 1$ to $max\\_eps$}\n \\STATE reset environment state\n \\FOR{$t = 0$ to $max\\_steps$}\n \\IF {$t < start\\_steps$}\n \\STATE $a_{t} \\leftarrow $ env.action\\_space.sample() \n %\n \\ELSE\n \\STATE $a_t\\leftarrow \\text{sample from } \\pi_{\\phi}(\\cdot|s_t)$\n %\n \\ENDIF\n \n \\STATE $s_{t+1}$, $r_{t}$, $d_{t}$, \\_ $\\leftarrow$ env.step($a_{t}$)\n \n \\STATE store the new transition $(s_{t}, a_{t}, r_{t}, s_{t+1}, d_{t})$ into $\\beta$\n \n \\IF{$t > start\\_steps$}\n \\STATE Sample mini-batch $B$ of $N$ transitions $(s_{t}, a_{t}, r_{t}, s_{t+1}, d_{t})$ from $\\beta$\n\n \\STATE $\\tilde{a}_{t} \\leftarrow \\text{sample from } \\pi_{\\phi}(\\cdot|s_t)$\n \n \\STATE $double = \\min_{i=1,2}( Q_{\\theta'_{i}}({s_{t}},{\\tilde{a}_{t}}))-\\alpha \\log \\tilde{a}_{t}$\n \n \\STATE $Q_t=r({s_{t}},{a_{t}})+\\gamma(1-d_{t})*double$ \n \n \\STATE Update double critics with one step gradient descent:\\\\\n $\\nabla_{\\theta_i} \\frac{1}{N} \\sum_{s_t \\in B}(Q_t - Q_{\\theta_{i}}(s_{t},a_{t}))^2 \\text{ for } i=1,2$\n \n \\IF {t \\% $policy\\_freq(t)$ == 0}\n \n 
\\STATE Update policy with one step gradient descent:\\\\\n $ \\nabla_{\\phi} \\frac{1}{N} \\sum_{s_t \\in B} [\\min_{i=1,2}( Q_{\\theta_{i}}({s_{t}},{\\tilde{a}_{t}}))-\\alpha \\log \\tilde{a}_{t}]$\n \n \\STATE Soft update for the target networks: \\\\\n \\STATE $\\phi^{\\prime}\\leftarrow\\tau\\phi+(1-\\tau)\\phi^{\\prime}$\n \\STATE $\\theta_{i}^{\\prime}\\leftarrow\\tau\\theta_{i}+(1-\\tau)\\theta_{i}^{\\prime}$ \\qquad for i=1,2\n\n \\ENDIF\n \\ENDIF\n \\ENDFOR\n \\ENDFOR\n \\end{algorithmic}\n\\end{algorithm}\n\nAs before, we train for $max\\_steps$ steps over $max\\_eps$ episodes, exploring random actions for the first $start\\_steps$ steps. An LSTM structure was used for the policy network $\\phi$. After sampling a batch $B$ from the memory $\\beta$, we compute the targets for the Q-functions $Q_t({r_{t}},{s_{t+1}},{d_{t}})$ and update the Q-functions. Here we also update the policy less frequently than the value networks, using the same $policy\\_freq$ factor as in our deterministic approach. \n\n\\subsection{Simulated Environments}\n\nOur experiments were conducted on the Gazebo simulator together with ROS, using the RotorS framework \\cite{furrer2016rotors} to allow the simulation of aerial vehicles with different command levels, such as angular rates, attitude and location control, and the simulation of wind as Ornstein-Uhlenbeck noise. The underwater simulation is enabled by the UUV simulator \\cite{manhaes2016uuv}, which allows the simulation of hydrostatic and hydrodynamic effects, as well as thrusters, sensors, and external perturbations. With this framework, we define the vehicle's underwater model with parameters such as the volume, additional mass, center of buoyancy, etc., as well as the characteristics of the underwater environment itself.\n\nWe developed two environments that simulate a walled water tank, with dimensions of 10$\\times$10$\\times$6 meters and a one-meter water column.
The first environment has four cylindrical columns representing subsea drilling risers. The second environment simulates complex structures, like those found in sea platforms, and contains several elements, such as walls, half walls and pipes (Fig.~\\ref{fig:env2}).\n\n\\begin{figure}[tbp!]\n \\vspace{-2mm}\n \\centering\n \\includegraphics[width=\\linewidth]{img\/env_3_v2.png}\n \\caption{Our HUAUV performing in the second scenario.}\n \\label{fig:env2}\n \\vspace{-4mm}\n\\end{figure}\n\n\\subsection{HUAUV Description}\n\n\nOur vehicle was based on the model presented by Drews-Jr~\\emph{et al.}~\\cite{drews2014hybrid}, Neto~\\emph{et al.}~\\cite{neto2015attitude} and Horn~\\emph{et al.}~\\cite{horn2019study}. We described it using its actual mechanical settings, including inertia, motor coefficients, mass, rotor velocity, and others. A ROS package containing the vehicle's description plus the Deep-RL agents can be found in the \\nameref{supplementary_material}.\n\nThe vehicle sensing was optimized to mimic real-world LIDAR and sonar. The described LIDAR is based on the UST 10LX model. It provides distance sensing of up to 10 meters with $270$\\degree~of range and $0.25$\\degree~of resolution, simulated using Gazebo's ray plugin. Our simulated FLS sonar was based on the sonar simulation plugin developed by Cerqueira \\emph{et al.}~\\cite{cerqueira2017novel}. We described an FLS sonar with 20 meters of range, a bin count of 1000 and a beam count of 256. The width and height angles of the beam were $90$\\degree~and $15$\\degree, respectively. The vehicle's relative localization data were obtained using RotorS' geometric controller.
In the real world, localization information can be obtained from a combination of standard localization sensing of hybrid vehicles, such as the Global Positioning System (GPS) and Ultra Short Baseline (USBL).\n\n\\subsection{Network Structure and Rewarding System} \\label{secapproach}\n\n\n\n\nThe state of both our approaches has a total of 26 dimensions: 20 samples from the distance sensors, the three previous actions, and three values related to the target goal, namely the vehicle's relative position to the target and the relative angles to the target in the x-y plane and the z-distance plane. When in the air, the 20 samples come from the LIDAR, taken equally spaced by $13.5\\degree$ over the $270\\degree$ range. When underwater, the distance information comes from the sonar. We likewise take 20 beams equally spaced among the total of 256, keeping the highest bin in each beam. This conversion based on the range gives us the distance towards the obstacle or the tank's wall \\cite{Santos18,Santos19}. The actions are scaled between $0$ and $0.25$ $m\/s$ for the linear velocity, from $-0.25$ $m\/s$ to $0.25$ $m\/s$ for the altitude velocity and from $-0.25$ to $0.25$ $rad$ for the $\\Delta$ yaw.\n\n\\subsubsection{Reward Function}\n\nWe proposed a binary reward function that yields a positive reward in case of success and a negative reward in case of failure or in case the episode ($ep$) ends at the 500-step limit:\n\n\\vspace{-5mm}\n\\begin{equation}\nr(s_t, a_t)= \n\\begin{cases}\n r_{arrive} & \\text{if } d_t < c_d\\\\\n r_{collide} & \\text{if } min_x < c_o\\ ||\\ ep = 500\n\\end{cases}\n\\end{equation}\n\nThe reward $r_{arrive}$ was set to 100, while the negative reward $r_{collide}$ was set to $-10$. Both the $c_d$ and $c_o$ distances were set to $0.5$ meters.\n\n\\section{Experimental Results}\n\\label{results}\n\nIn this section, the results obtained during our evaluation are shown.
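For concreteness, the binary reward function above can be sketched in a few lines (an illustrative sketch only; the constant names are ours, and the zero reward for non-terminal steps, implicit in the equation, is made explicit):

```python
R_ARRIVE, R_COLLIDE = 100.0, -10.0  # terminal rewards used in the paper
C_D = C_O = 0.5                     # goal / collision distance thresholds (meters)
EP_LIMIT = 500                      # episode step limit

def reward(dist_to_goal: float, min_range: float, step: int) -> "tuple[float, bool]":
    """Return (reward, done) for one transition."""
    if dist_to_goal < C_D:
        return R_ARRIVE, True              # reached the target
    if min_range < C_O or step == EP_LIMIT:
        return R_COLLIDE, True             # collision or episode limit
    return 0.0, False                      # non-terminal step
```

Because a new random goal is generated whenever the agent arrives before the step limit, the cumulative reward of an episode can exceed 100.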
During the training phase, we created a randomly generated goal towards which the agent should navigate. The agents trained for a maximum of 500 steps or until they collided with an obstacle or with the tank's border. In case of reaching the goal before the step limit, a new random goal was generated, allowing the total amount of reward to eventually exceed 100. A learning rate of $10^{-3}$ was used, with a minibatch of 256 samples and the Adam optimizer for all approaches, including the compared methods. We limited the training to 1500 episodes; this limit was chosen based on the stagnation of the maximum average reward received.\n\nFor each scenario and model, an extensive set of statistics was collected. The task addressed is goal-oriented navigation considering medium transition, where the robot must navigate from a starting point to an endpoint. This task was addressed in two ways in our tests: (1) starting in the air, performing the medium transition and navigating to a target underwater; and (2) the other way around, starting underwater, performing the medium transition and navigating to a target in the air. We collected the statistics for each of our proposed models (Det. and Sto.) and compared them with the performance of the state-of-the-art deterministic (Det.) and stochastic (Sto.) Deep-RL methods for HUAUVs, as well as a behavior-based algorithm \\cite{marino2016minimalistic}. These tasks were performed for 100 trials each, and we recorded the total of successful trials, the average time for both underwater ($t\\_water$) and aerial ($t\\_air$) navigation, and their standard deviations. \n\nThe models were all trained in the first environment and evaluated in both the first (same as trained) and second (never seen) environments.
We aim to outline one of the main contributions of this work, \\textit{i.e.} the robust capacity to generalize of our method across environments, in this case performing in a second, unknown and more complex environment. We set the initial position for the Air-Water (A-W) trials to (0.0, 0.0, 2.5) in the Gazebo Cartesian coordinates for the two scenarios. The target position used was (3.6, -2.4, -1.0). In both environments, the target was defined in a path with obstacles on the way. Table \\ref{table:mean_std} shows the results obtained for each environment for 100 navigation trials.\n\n\n\n\\begin{table}[bp!]\n\\vspace{-5mm}\n\\centering\n\\setlength{\\tabcolsep}{0.8pt}\n\\caption{Mean and standard deviation metrics over 100 navigation trials for all approaches in all scenarios.}\n\\label{table:mean_std}\n\\begin{tabular}{c c c c c} \n\\toprule\nEnv & Test & $t_{air}$ (s) & $t_{water}$ (s) & Success \\\\\n\\midrule\n1 & A-W Det. & $76.28$ $\\pm$ $63.20$ & $12.51$ $\\pm$ $20.71$ & 94 \\\\\n1 & A-W Sto. & $21.79$ $\\pm$ $4.57$ & $25.58$ $\\pm$ $5.70$ & $100$ \\\\\n1 & A-W Sto. Grando \\emph{et al.} \\cite{bedin2021deep} & $42.46$ $\\pm$ $62.94$ & $13.13$ $\\pm$ $15.15$ & $42$ \\\\\n1 &\\textbf{ A-W Det. Grando \\emph{et al.} \\cite{bedin2021deep}} & $\\textbf{13.84}$ $\\pm$ $\\textbf{2.11}$ & $\\textbf{5.44}$ $\\pm$ $\\textbf{1.73}$ & $\\textbf{100}$ \\\\\n1 & A-W BBA & $32.42$ $\\pm$ $1.79$ & $21.27$ $\\pm$ $0.18$ & $100$ \\\\\n1 & W-A Det. & $24.66$ $\\pm$ $10.06$ & $5.0$ $\\pm$ $0.71$ & $83$ \\\\\n1 & W-A Sto. & $79.73$ $\\pm$ $27.91$ & $5.41$ $\\pm$ $0.34$ & $100$ \\\\\n\n2 & \\textbf{A-W Det.} & $61.94$ $\\pm$ $45.29$ & $\\textbf{8.44}$ $\\pm$ $\\textbf{9.09}$ & $\\textbf{73}$ \\\\\n2 & \\textbf{A-W Sto.} & $\\textbf{14.89}$ $\\pm$ $\\textbf{1.120}$ & $18.48$ $\\pm$ $6.24$ & $\\textbf{94}$ \\\\\n2 & A-W Sto. Grando \\emph{et al.} \\cite{bedin2021deep} & - & - & $0$ \\\\\n2 & A-W Det. 
Grando \\emph{et al.} \\cite{bedin2021deep} & - & - & $0$ \\\\\n2 & A-W BBA & $39.69$ $\\pm$ $21.92$ & $11.32$ $\\pm$ $7.46$ & $28$ \\\\\n2 & \\textbf{W-A Det.} & $\\textbf{8.54}$ $\\pm$ $\\textbf{4.44}$ & $\\textbf{4.27}$ $\\pm$ $\\textbf{0.47}$ & $\\textbf{8}$ \\\\\n2 & \\textbf{W-A Sto.} & $\\textbf{15.43}$ $\\pm$ $\\textbf{13.39}$ & $\\textbf{6.60}$ $\\pm$ $\\textbf{1.75}$ & $\\textbf{10}$ \\\\\n2 & W-A Sto. Grando \\emph{et al.} \\cite{bedin2021deep} & - & - & $0$ \\\\\n2 & W-A Det. Grando \\emph{et al.} \\cite{bedin2021deep} & - & - & $0$ \\\\\n2 & W-A BBA & $34.3$ $\\pm$ $22.93$ & $6.13$ $\\pm$ $17.22$ & $8$ \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\nWe also performed a complementary comparison in the second scenario. We used the models trained in the second environment to collect statistics. For a better analysis, we also performed a comparison between models in this second environment. First, we collected the data for Deterministic and Stochastic models trained only in the first environment for 1500 episodes (Env1), as shown before. Then, we trained these models for 500 more episodes in the second environment (Both). Lastly, we compared them with Deterministic and Stochastic trained only in the second environment for 1500 episodes. Table \\ref{table:Comparistion_lstm} shows the obtained results.\n\n\n\\begin{table}[tp!]\n\\centering\n\\setlength{\\tabcolsep}{2.5pt}\n\\caption{Mean and standard deviation metrics over 100 navigation trials tested in the second simulated environment, for both deterministic and stochastic models trained only in the first environment (Env1), in both first and second environments (Both), and only in the second environment (Env2).}\n\\label{table:Comparistion_lstm}\n\\begin{tabular}{c c c c c} \n\\toprule\nModel & $t_{air}$ (s) & $t_{water}$ (s) & Success \\\\\n\\midrule\n\\textbf{A-W Det. (Env1)} & $61.94$ $\\pm$ $45.29$ & $\\textbf{8.44}$ $\\pm$ $\\textbf{9.09}$ & $\\textbf{73}$ \\\\\n\\textbf{A-W Sto. 
(Env1)} & $\\textbf{14.89}$ $\\pm$ $\\textbf{1.120}$ & $18.48$ $\\pm$ $6.24$ & $\\textbf{94}$ \\\\\n\\textbf{A-W Det. (Both)} & $\\textbf{14.14}$ $\\pm$ $\\textbf{3.77}$ & $\\textbf{8.69}$ $\\pm$ $\\textbf{3.17}$ & $\\textbf{99}$ \\\\\n\\textbf{A-W Sto. (Both)} & $16.82$ $\\pm$ $2.12$ & $14.92$ $\\pm$ $3.60$ & $\\textbf{100}$ \\\\\nA-W Det. (Env2) & $23.17$ $\\pm$ $31.12$ & $32.53$ $\\pm$ $60.28$ & $21$ \\\\\nA-W Sto. (Env2) & $19.98$ $\\pm$ $15.99$ & $49.61$ $\\pm$ $27.86$ & $87$ \\\\\n\nW-A Det. (Env1) & $8.54$ $\\pm$ $4.44$ & $4.27$ $\\pm$ $0.47$ & $8$ \\\\\nW-A Sto. (Env1) & $15.43$ $\\pm$ $13.39$ & $6.60$ $\\pm$ $1.75$ & $10$ \\\\\n\\textbf{W-A Det. (Both)} & $\\textbf{25.09}$ $\\pm$ $\\textbf{38.86}$ & $\\textbf{4.62}$ $\\pm$ $\\textbf{0.51}$ & $\\textbf{34}$ \\\\\n\\textbf{W-A Sto. (Both)} & $33.41$ $\\pm$ $11.82$ & $11.40$ $\\pm$ $2.38$ & $\\textbf{83}$ \\\\\nW-A Det. (Env2) & - & - & $0$ \\\\\nW-A Sto. (Env2) & $3.73$ $\\pm$ $2.97$ & $30.47$ $\\pm$ $9.47$ & $1$ \\\\\n\\bottomrule\n\\end{tabular}\n\\vspace{-4mm}\n\\end{table}\n\n\n\\section{Discussion}\n\\label{discussion}\n\n\nThe evaluation shows an overall increase in performance in navigation through both environments. Our approaches achieve a consistent performance of 100 successful air-to-water navigation trials, along with consistent navigation times ($14.55 \\pm 0.87$ and $11.19 \\pm 2.86$). In this same scenario, the stochastic approach performed a little worse in air-to-water navigation but outperformed the deterministic approach in water-to-air navigation. \nIn the second scenario, we can see more clearly that a double-critic-based approach with an RNN structure also has a better ability to learn and generalize over the environment, including the obstacles and the medium transition. While the state-of-the-art approaches with an MLP structure were not capable of performing the task, our approaches presented once again a consistent performance, especially in air-to-water navigation.
Our approaches showed an excellent ability to learn the tasks and the environmental difficulties, not only the scenario itself. That was further addressed in our additional evaluation with agents trained in the first environment only, in both the first and second environments, and in the second environment only. Overall, we can conclude that double-critic approaches with recurrent neural networks present a consistent ability to learn through scenarios and environments and to generalize between them. Also, our approaches outperformed the BBA algorithm \nin the rate of successful trials and average time in almost all situations.\n\nIt is important to mention that these approaches were extensively evaluated in a realistic simulation, including control issues and disturbances such as wind. Thus, the results indicate that our approach may achieve real-world application, provided that correct sensing data and relative localization are ensured. Finally, these new RNN-based approaches provided a more consistent average course of action throughout the environments.\n\n\n\n\\section{Conclusions}\n\\label{conclusion}\n\nBy using a physically realistic simulation in several water-tank-based scenarios, we showed that our approaches achieved an overall better capability to perform autonomous navigation, obstacle avoidance and medium transition than other approaches. Disturbances such as wind were successfully assimilated, and good generalization through different scenarios was achieved. 
With our simple and realistic sensing approach, which took into account only the range information, we presented overall better performance than the state-of-the-art and the classical behavior-like algorithm. Future studies with our real HUAUV are underway. \n\n\\section*{Acknowledgment}\n\n\nThe authors would like to thank the VersusAI team. This work was partly supported by CAPES, CNPq and PRH-ANP.\n\n\\vspace{-2mm}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nAngular momentum (AM) is a fundamental parameter in the evolution of galaxies. A dark matter (DM) halo spinning up in the early universe is subject to the same tidal torques as the baryons at its centre, so the total AM of both components is linked, and the specific AM, $j = J\/M$, of the baryons is well matched to $j$ of the DM (e.g. \\cite[Catalan \\& Theuns 1996]{Catalan+1996}). \n$j$ is connected to photometric morphology via the stellar mass -- specific AM -- morphology plane, first shown by \\cite[Fall 1983]{Fall83}, such that galaxies with higher $M_*$ have higher $j$, modulo morphology, with the relation for earlier type galaxies offset to lower $j$. This has since also been shown by \\cite[Romanowsky \\& Fall (2012)]{RF12}, \\cite[Obreschkow \\& Glazebrook (2014)]{OG14}, \\cite[Cortese et al. (2016)]{Cortese+2016}, \\cite[Posti et al. (2018)]{Posti+2018}, and \\cite[Sweet et al. (2018)]{Sweet+2018}.\n\nAlthough the total $j$ for baryons and DM is linked, further physical processes affect the distribution of $j$ for baryons. \\cite[van den Bosch et al. (2001)]{vdb+01} studied the probability density function of $j$ normalised to the mean of the galaxy, \\pdf, and found that dwarf galaxies had a deficit of high-$j$ and of low-$j$ material with respect to the prediction for a DM halo. They attributed this to tidal stripping of the outer, rapidly-rotating material and feedback ejecting the inner, dispersion-dominated material, respectively. 
\\cite[Sharma \\& Steinmetz (2005)]{SS05} then predicted the \\pdf\\\/ for baryonic components (see Fig.\\,\\ref{ss05}), where bulges, which are dominated by random motions, exhibit a peak at $j=0$, and disks have a \\pdf\\\/ of the form $x\\exp(-kx)$ due to their well-ordered rotation. Also see our updated predictions using the NIHAO simulations (Wang+ in prep.).\n\nThe \\pdf\\\/ encodes more physical information than photometry alone, so in this work I am investigating the utility of \\pdf\\\/ as a kinematic tracer of morphology and as a kinematic decomposition tool.\n\n\\begin{figure}\n\\begin{center}\n \\includegraphics[width=0.4\\linewidth]{SS05.png} \n \\caption{Predictions from \\cite[Sharma \\& Steinmetz 2005]{SS05} for baryonic galaxy components. The \\pdf\\\/ for the bulge peaks at $j=0$, while disk components have an exponential profile.}\n \\label{ss05}\n\\end{center}\n\\end{figure}\n\n\\section{\\pdf\\\/}\n\nI have constructed \\pdf\\\/ for a subset of the Calar Alto Legacy Integral Field Area survey (CALIFA, \\cite[Sanchez et al. 2012]{Sanchez+2012}), using observations of 25 galaxies where the stellar kinematics extend to three times the effective radius. I calculate $j_i = r_i \\times v_i$ in every spaxel $i$, where the velocity $v_i = \\sqrt{v_{i,circ}^2 + v_{i,disp}^2}$ includes the circular velocity $v_{i,circ}$ and dispersion $v_{i,disp}$ added in quadrature. The map of $j$ is then weighted by stellar surface density, as a proxy for mass, and the histogram plotted as the \\pdf\\\/. \n\nA spanning set of local examples is shown in Fig.\\,\\ref{examples}. The colour represents radial distance, with lighter colours indicating material nearer the centre. NGC 6063 is a late-type spiral galaxy with low bulge-to-total light ratio B\/T = 0.04. Its \\pdf\\\/ is broad and symmetric, and peaks near 1, reminiscent of the predictions by \\cite[Sharma \\& Steinmetz (2005)]{SS05} for pure disks. 
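The construction just described (spaxel-wise $j_i = r_i v_i$ with circular velocity and dispersion added in quadrature, a stellar-surface-density weighting as a mass proxy, and normalisation to the mass-weighted mean) can be sketched as follows. This is an illustrative sketch only: array names, the bin count and the moment-based shape statistic are my own choices, not taken from the CALIFA pipeline.

```python
import numpy as np

def pdf_j(r, v_circ, v_disp, sigma_star, n_bins=30):
    """Mass-weighted PDF of specific angular momentum, with j
    normalised to the mass-weighted mean of the galaxy."""
    v = np.sqrt(v_circ**2 + v_disp**2)   # circular velocity and dispersion in quadrature
    j = r * v                            # specific AM in each spaxel
    w = sigma_star / sigma_star.sum()    # stellar surface density as a mass proxy
    x = j / np.sum(w * j)                # normalise to the mass-weighted mean j
    pdf, edges = np.histogram(x, bins=n_bins, range=(0.0, x.max()),
                              weights=w, density=True)
    return pdf, edges

def weighted_skewness(x, w):
    """Third standardised moment of a weighted sample (shape of the PDF)."""
    w = w / w.sum()
    mu = np.sum(w * x)
    var = np.sum(w * (x - mu)**2)
    return np.sum(w * (x - mu)**3) / var**1.5
```

A disk-dominated galaxy then gives a broad, roughly symmetric histogram peaking near $x = 1$, while a bulge-dominated one gives a strongly skewed histogram peaking near $x = 0$.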
NGC 2592 is an early type galaxy with large B\/T = 0.54; its \\pdf\\\/ is strongly-skewed and peaks near $j=0$ like the spheroidal components in \\cite[Sharma \\& Steinmetz (2005)]{SS05}. Intermediate between these two extremes, the \\pdf\\\/ for NGC 7653 (B\/T = 0.33) has characteristics of both disk and bulge. Unfortunately the spatial resolution, which dictates the number of bins, is not sufficient to resolve separate components in the \\pdf\\\/.\n\nI also show a clumpy disk galaxy at $z\\sim 1.5$, using a combination of OSIRIS adaptive optics integral field spectroscopy to mitigate the effects of beam smearing in the centre of the galaxy, with deeper KMOS seeing-limited data to trace the velocity field out to higher multiples of the effective radius, using a method described in Sweet et al. (in prep.). The shape is intermediate between the pure disk and bulge-dominated local examples, likely due to the typical high-$z$ morphology of dispersion-dominated clumps embedded in a strongly-rotating disk.\n\n\\begin{figure}\n\\begin{center}\n \\includegraphics[width=0.495\\linewidth]{6063.png} \n \\includegraphics[width=0.495\\linewidth]{2592.png} \n \\includegraphics[width=0.495\\linewidth]{7653.png} \n \\includegraphics[width=0.495\\linewidth]{COSMOS_127977.png} \n \\caption{Example \\pdf\\\/ for local galaxies in the CALIFA sample and one disk at $z\\sim 1.5$. Top left: late-type spiral NGC 6063 with low B\/T ratio has a broad, symmetric \\pdf\\\/ which peaks near 1. Top right: early type NGC 2592 with high B\/T ratio has a strongly-skewed \\pdf\\\/ which peaks nearer 0. Bottom left: NGC 7653 with moderate B\/T ratio has a \\pdf\\\/ which is intermediate between the two extremes. 
Bottom right: $z\\sim 1.5$ clumpy disk galaxy COSMOS 127977 also has an intermediate \\pdf\\\/.}\n \\label{examples}\n\\end{center}\n\\end{figure}\n\nThere is an apparent trend whereby \\pdf that are more skewed and peak nearer $j=0$ correspond to galaxies that are earlier in type and have bigger bulges. This is quantified in the correlation between shape of \\pdf\\\/ (traced by skewness or kurtosis) and morphology (traced by Hubble type and B\/T ratio). For instance, Fig.\\,\\ref{t-skew} illustrates the relation between skewness and Hubble type. Much of the scatter in this correlation arises from the difficulties inherent in photometric classification of Hubble type and quantifying bulge-to-total ratio. Arguably, \\pdf\\\/ as a kinematic quantity encodes more physical information than photometry alone, so may be a more robust tracer of morphology.\n\n\\begin{figure}\n\\begin{center}\n \\includegraphics[width=0.45\\linewidth]{t-skew.png} \n \\caption{Correlation between morphology and shape of \\pdf\\\/. Galaxies with earlier Hubble type are more strongly negatively skewed.}\n \\label{t-skew}\n\\end{center}\n\\end{figure}\n\nI also see that the bulge is linked with \\pdf\\\/. Fig.\\,\\ref{bulgediskkurt} demonstrates that the shape of \\pdf\\\/ is moderately correlated with the surface brightness of the bulge, such that galaxies with bigger bulges have more strongly-tailed \\pdf. On the other hand, the \\pdf\\\/ shape is not at all correlated with the central surface brightness of the disk. This indicates that the size of the bulge is related to the distribution of $j$ within a galaxy, but disks of all sizes have similar \\pdf\\\/.\n\n\\begin{figure}\n\\begin{center}\n \\includegraphics[width=0.8\\linewidth]{bulgediskkurt.png} \n \\caption{Correlation between galaxy components and shape of \\pdf\\\/. 
Galaxies with bigger bulges have more strongly-tailed \\pdf\\\/, but the shape of \\pdf\\\/ is the same for disks of all sizes.}\n \\label{bulgediskkurt}\n\\end{center}\n\\end{figure}\n\n\n\\section{Conclusion: Utility of PDF(j)}\nThe \\pdf\\\/ traces kinematic morphology of a galaxy. It encodes more physical information than photometry alone, and is a product of the evolutionary history of the galaxy. In future, as spatial resolution increases, I predict that \\pdf\\\/ will be useful to separate out kinematic components: thin disk from thick disk and bulge, clumps from bulges, and pseudobulges from classical bulges.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn this paper, we study the quantitative long time dynamics for the spherically symmetric dispersive spacetimes satisfying the Einstein-scalar field equations. More precisely, these are spherically symmetric solutions $(\\calM,\\bfg,\\phi)$ to the Einstein-scalar field system, where $\\bfg$ is a Lorentzian metric and $\\phi$ is a real valued function on a $3+1$ dimensional manifold $\\calM$, such that $(\\calM,\\bfg)$ is future causally geodesically complete and $\\phi$ scatters locally in the scale-invariant bounded-variation (BV) norm. For these spacetimes, we establish a Price-law type decay for the scalar field $\\phi$, the Christoffel symbols associated to $\\bfg$ and all of their derivatives. To obtain the decay results, we do not need to assume any smallness of the initial data. Moreover, we show that the decay rates in this paper are sharp.\n\nThe spherically symmetric Einstein-scalar field system, being one of the simplest model of self-gravitating matter in this symmetry class, has been studied extensively both numerically and mathematically. 
In a seminal series of papers by Christodoulou \\cite{Christodoulou:1987ta}, \\cite{Christodoulou:1991}, \\cite{Christodoulou:1993bt}, \\cite{Christodoulou:1994}, \\cite{Christodoulou:1999}, he achieved a complete understanding of the singularity structure of spherically symmetric spacetime solutions to this system. The culmination of the results shows that generic\\footnote{in the BV class, i.e., the initial data for $\\partial_v(r\\phi)$ has bounded variation. More precisely, Christodoulou showed that the non-generic set of initial data has co-dimension at least two in the BV topology.} spherically symmetric initial data with one asymptotically flat end give rise to a spacetime whose global geometry is either dispersive (with a Penrose diagram represented by Figure 1) or contains a black hole region $\\mathcal B\\mathcal H$ which terminates in a spacelike curvature singularity $\\mathcal S$ (with a Penrose diagram represented by Figure 2). In particular, in either of these generic scenarios, the spacetime possesses a complete null infinity $\\mathcal I^+$ and thus obeys the weak cosmic censorship conjecture. Moreover, in either case the maximal Cauchy development of the data is inextendible with a $C^2$ Lorentzian metric and therefore also verifies the strong cosmic censorship conjecture. 
We refer the readers to \\cite{Kommemi} for a comprehensive discussion on general singularity structures for spherically symmetric spacetimes.\n\n\\begin{figure}[htbp] \\label{fig.disp}\n\\begin{center}\n \n\\includegraphics{fig_I.pdf}\n \n\\caption{The dispersive case}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\begin{center}\n \n\\includegraphics{fig_II.pdf}\n \n\\caption{The black hole case}\n\\end{center}\n\\end{figure}\n\nThe remarkable resolution of the cosmic censorship conjectures, however, gives very little information on the long time dynamics for these spacetimes except for the small data\\footnote{i.e., when the initial data is close to that of Minkowski space.} case \\cite{Christodoulou:1993bt}. In particular, not much is known about the asymptotic decay of the scalar field as measured by a far-away observer at null infinity. In the dispersive case, Christodoulou showed that the Bondi mass decays to zero along null infinity without an explicit decay rate. In the black hole case, he showed that the Bondi mass approaches the mass of the black hole, from which one can infer the non-quantitative decay for the scalar field along null infinity \\cite{Christodoulou:1993bt}.\n \nThe long time dynamics in the case where the spacetime settles to a black hole were subsequently studied in the seminal work\\footnote{In fact, they considered the more general Einstein-Maxwell-scalar field equations.} of Dafermos-Rodnianski \\cite{DR}. They proved a quantitative decay rate for the scalar field (and its derivatives) in the spacetime, including along null infinity $\\mathcal I^+$ and the event horizon $\\mathcal H^+$. The proof is based on the local conservation of energy, which is subcritical, together with techniques exploiting the conformal geometry of the spacetime and the celebrated red-shift effect along the event horizon. The result in particular justified, in a nonlinear setting, the heuristics of Price \\cite{Price}. 
It turns out that the quantitative decay rates, when combined with the results of \\cite{D}, also have interesting consequences for the strong cosmic censorship conjecture in the context of the spherically symmetric Einstein-Maxwell-scalar field system.\n\nIn this paper, we study the other generic scenario, i.e., spherically symmetric dispersive spacetime solutions to the Einstein-scalar field system. Unlike in the black hole case, the monotonic Hawking mass is \\emph{supercritical} and provides no control over the dynamics of the solution. We thus do not expect to be able to obtain quantitative decay rates for large solutions without imposing extra assumptions. Instead, we assume \\emph{a priori} the non-quantitative decay of a \\emph{critical} quantity, the BV norm\\footnote{Solutions of bounded variation were first studied by Christodoulou \\cite{Christodoulou:1993bt} and play an important role in the proof of the cosmic censorship conjectures \\cite{Christodoulou:1999}.}, but only locally in a region where the area of the orbit of the symmetry group $SO(3)$ remains uniformly bounded. Under this assumption of local BV scattering, we show that the scalar field and all its derivatives decay with a quantitative rate, reminiscent of the Price law decay rates in the black hole case. (We refer the readers to the statement of the main theorems in Section \\ref{sec.main.thm} for the precise rates that we obtain.) We prove, in particular, a quantitative decay rate for the scalar field along null infinity.\n\nOur results apply in particular to the class of solutions arising from initial data with small BV norm. Christodoulou \\cite{Christodoulou:1993bt} showed that these spacetimes are future causally geodesically complete. One can easily deduce from \\cite{Christodoulou:1993bt} that in fact these spacetimes satisfy the BV scattering assumption and therefore the solutions obey the quantitative decay estimates of our main theorem (see Theorem \\ref{thm:smallData}). 
On the other hand, our results do not require any smallness assumptions on the initial data. We conjecture that indeed our class of spacetimes contains those arising from large data:\n\\begin{conjecture}\\label{large.sol.conj}\nThere exists initial data of arbitrarily large BV norm whose maximal global development scatters locally in the BV norm.\n\\end{conjecture}\n\nIn addition to the upper bounds that we obtain in our main theorem, we also construct examples where we prove lower bounds for the solutions with the same rates as the upper bounds. In particular, there exists a class of initial data with compactly supported scalar field whose future development saturates the decay estimates in the main theorem. This shows that the decay rates are sharp. We note that the decay rate is also consistent with the numerical study of Biz\\'on-Chmaj-Rostworowski \\cite{BCR}.\n\nAs a corollary of the main result on decay, we show the following dichotomy: either the quantitative decay rates are satisfied or the solution blows up at infinity. The latter are solutions such that some scale-invariant spacetime norms become infinite (see precise definition in Definition \\ref{def.blow.up.infty}).\n\nThe decay result in this paper easily implies that the locally BV scattering solutions that we consider are stable against small, regular, \\emph{spherically symmetric} perturbations. More ambitiously, one may conjecture that\n\\begin{conjecture}\\label{stab.conj}\nSpherically symmetric locally BV scattering dispersive solutions to the Einstein-scalar field equations are stable against \\emph{non-spherically symmetric} perturbations.\n\\end{conjecture}\n\nConjecture \\ref{stab.conj}, if true, will generalize the monumental theorem on the nonlinear stability of Minkowski spacetime of Christodoulou-Klainerman \\cite{CK} (see also a simpler proof in \\cite{LR}). 
For nonlinear wave equations satisfying the null condition, it is known \\cite{Alinhac}, \\cite{Yang} that \\emph{large} solutions decaying sufficiently fast are globally stable against small perturbations. On the other hand, our main theorem shows a quantitative decay rate for spherically symmetric locally BV scattering dispersive spacetimes. Conjecture \\ref{stab.conj} can therefore be viewed as an attempt to generalize the results in \\cite{Alinhac}, \\cite{Yang} to the Einstein-scalar field system. We will address both Conjectures \\ref{large.sol.conj} and \\ref{stab.conj} in future works.\n\\\\\n\\\\\n\\noindent{\\bf Acknowledgements.} The authors would like to thank Mihalis Dafermos and Igor Rodnianski for valuable discussions. We also thank Jonathan Kommemi for providing the Penrose diagrams. \n\nJ. Luk is supported by the NSF Postdoctoral Fellowship DMS-1204493. S.-J. Oh is a Miller Research Fellow, and thanks the Miller Institute at UC Berkeley for the support.\n\n\\subsection{Outline of the paper}\nWe outline the remainder of the paper. In Section \\ref{sec.setup}, we discuss the set-up of the problem and in particular define the class of solutions considered in the main theorem, i.e., the locally BV scattering solutions. In Section \\ref{sec.main.thm}, we state the main theorems in the paper (Theorems \\ref{main.thm.1} and \\ref{main.thm.2}), their consequences and additional theorems on the optimality of the decay rates. In the same section, we outline the main ideas of the proof. In Sections \\ref{sec.anal.prop} and \\ref{sec.geom}, we explain the main analytic features of the equations and the geometry of the class of spacetimes that we consider.\n \nSections \\ref{sec.decay1} and \\ref{sec.decay2} consist of the main content of this paper. In Section \\ref{sec.decay1}, we prove the decay estimates for $\\phi$, $\\partial_v(r\\phi)$ and $\\partial_u(r\\phi)$, i.e., the first main theorem (Theorem \\ref{main.thm.1}). 
In Section \\ref{sec.decay2}, using in particular the results in Section \\ref{sec.decay1}, we derive the decay bounds for the second derivatives for $r\\phi$ and the metric components, i.e., the second main theorem (Theorem \\ref{main.thm.2}). \n\nIn the remaining sections of the paper, we turn to other theorems stated in Section \\ref{sec.main.thm}. In Section \\ref{sec.dichotomy}, we give a proof of the dichotomy alluded to above, i.e., either the quantitative decay holds or the spacetime blows up at infinity. In Section \\ref{sec:smallData}, we sketch a proof of a refinement of the conclusions of the main theorems in the small data case. Finally, in Section \\ref{sec.opt}, we prove optimality of the decay rates asserted by the main theorems.\n\n\\section{Set-up}\\label{sec.setup}\nIn this section, we define the set-up, formulate the equations in a double null coordinate system and explain the characteristic initial value problem. This will allow us to state the main theorem in the next section.\n\n\\subsection{Spherically Symmetric Einstein-Scalar-Field System (SSESF)} \\label{subsec:derivation}\nWe begin with a brief discussion on the derivation of the Spherically Symmetric Einstein-Scalar-Field System \\eqref{eq:SSESF} from the $(3+1)$-dimensional Einstein-scalar-field system. \n\nSolutions to the Einstein-scalar field equations can be represented by a triplet $(\\calM, \\bfg_{\\mu \\nu},\\phi)$, where $(\\calM, \\bfg_{\\mu \\nu})$ is a $(3+1)$-dimensional Lorentzian manifold and $\\phi$ a real-valued function on $\\calM$. 
The spacetime metric $\\bfg_{\\mu \\nu}$ and the scalar field $\\phi$ satisfy the Einstein-scalar-field system:\n\\begin{equation} \\label{eq:ES}\n\\left\\{\n\\begin{aligned}\n\t\\bfR_{\\mu \\nu} - \\frac{1}{2} \\bfg_{\\mu \\nu} R =& 2 \\bfT_{\\mu \\nu}, \\\\\n\t\\nb^{\\mu} \\partial_{\\mu} \\phi =& 0.\n\\end{aligned}\n\\right.\n\\end{equation}\nwhere $\\bfR_{\\mu \\nu}$ is the Ricci curvature of $\\bfg_{\\mu \\nu}$, $R$ is the scalar curvature, and $\\nb_{\\mu}$ is the covariant derivative given by the Levi-Civita connection on $(\\calM, \\bfg)$. The energy-momentum tensor $\\bfT_{\\mu \\nu}$ is given by the scalar field $\\phi$, i.e.\n\\begin{equation}\\label{eq:T} \n\t\\bfT_{\\mu\\nu} = \\partial_{\\mu} \\phi \\partial_{\\nu} \\phi - \\frac{1}{2} \\bfg_{\\mu \\nu} \\partial^{\\lambda} \\phi \\partial_{\\lambda} \\phi.\n\\end{equation}\n\nAssume that the solution $(\\calM, \\bfg_{\\mu \\nu}, \\phi)$ is spherically symmetric, i.e., the group $\\mathrm{SO}(3)$ of three dimensional rotations acts smoothly and isometrically on $(\\calM, \\bfg)$, where each orbit is either a point or is isometric to $\\mathbb S^{2}$ with a round metric. The scalar field $\\phi$ is required to be constant on each of the orbits. These assumptions are propagated by \\eqref{eq:ES}; hence, if $(\\calM, \\bfg_{\\mu \\nu}, \\phi)$ is a Cauchy development, then it suffices to assume spherical symmetry only on the initial data.\n\nThe quotient $\\mathcal M \/ \\mathrm{SO}(3)$ gives rise to a (1+1)-dimensional Lorentzian manifold with boundary, which we denote by $(\\calQ, g_{ab})$. The boundary $\\Gamma$ consists of fixed points of the group action. We define the \\emph{area radius function} $r$ on $\\calQ$ to be\n\\begin{equation*}\n\t r := \\sqrt{\\frac{\\mbox{Area of symmetry sphere}}{4 \\pi}}.\n\\end{equation*}\nand $r=0$ at $\\Gamma$. 
Note that each component of $\\Gamma$ is a timelike geodesic.\n\nWe assume that $\\Gamma$ is non-empty and connected, and moreover that there exist \\emph{global double null coordinates} $(u,v)$, i.e. a coordinate system $(u,v)$ covering $\\calQ$ in which the metric takes the form\n\\begin{equation} \\label{eq:defn4Met}\n\tg_{ab} \\mathrm{d} x^{a} \\cdot \\mathrm{d} x^{b} = - \\Omega^{2} \\mathrm{d} u \\cdot \\mathrm{d} v\n\\end{equation}\nfor some $\\Omega > 0$. We remark that both assumptions are easily justified if $(\\mathcal M, {\\bf g})$ is a Cauchy development of a spacelike hypersurface homeomorphic to $\\mathbb R^{3}$. \n\n\nThe metric ${\\bf g}_{\\mu \\nu}$ of $\\mathcal M$ is characterized by $\\Omega$ and $r$ and takes the form\n\\begin{equation}\n\t{\\bf g}_{\\mu \\nu} \\mathrm{d} x^{\\mu} \\cdot \\mathrm{d} x^{\\nu} = - \\Omega^{2} \\mathrm{d} u \\cdot \\mathrm{d} v + r^{2} \\mathrm{d} s^{2}_{\\mathbb S^{2}}\\label{metric}\n\\end{equation}\nwhere $\\mathrm{d} s^{2}_{\\mathbb S^{2}}$ is the standard line element on the unit sphere $\\mathbb S^{2}$. 
Therefore, we may reformulate the \\emph{spherically symmetric Einstein-scalar-field system} \\eqref{eq:SSESF} in terms of the triplet $(\\phi, r, \\Omega)$ as \n\\begin{equation} \\label{eq:SSESF} \\tag{SSESF}\n\\left\\{\n\\begin{aligned}\n\tr \\partial_{u} \\partial_{v} r =& - \\partial_{u} r \\partial_{v} r - \\frac{1}{4} \\Omega^{2}, \\\\\n\tr^{2} \\partial_{u} \\partial_{v} \\log \\Omega =& \\, \\partial_{u} r\\partial_{v} r + \\frac{1}{4} \\Omega^{2} - r^{2} \\partial_{u} \\phi \\partial_{v} \\phi, \\\\\n\tr \\partial_{u} \\partial_{v} \\phi =& - \\partial_{u} r \\partial_{v} \\phi - \\partial_{v} r \\partial_{u} \\phi, \\\\\n\t2 \\Omega^{-1} \\partial_{u} r\\, \\partial_{u} \\Omega =& \\, \\partial^{2}_{u} r + r (\\partial_{u} \\phi)^{2}, \\\\\n\t2 \\Omega^{-1} \\partial_{v} r\\, \\partial_{v} \\Omega =& \\, \\partial^{2}_{v} r + r (\\partial_{v} \\phi)^{2},\n\\end{aligned}\n\\right.\n\\end{equation}\nwith the boundary condition $r=0$ along $\\Gamma$. \n\n\\subsection{Basic assumptions, notations and conventions}\nIn this subsection, we introduce the basic assumptions on the base manifold $\\calQ$, as well as some notations and conventions that will be used in the rest of the paper.\n\n\\subsubsection*{Definition of $\\calQ$ and $\\calM$}\nDenote by $\\mathbb R^{1+1}$ the (1+1)-dimensional Minkowski space, with the standard double null coordinates $(u,v)$. Let $\\calQ$ be a (1+1)-dimensional Lorentzian manifold which is conformally embedded into $\\mathbb R^{1+1}$ with $\\mathrm{d} s^{2}_{\\calQ} = - \\Omega^{2} \\mathrm{d} u \\cdot \\mathrm{d} v$. Given a non-negative function $r$ on $\\calQ$, we define the set $\\Gamma := \\set{(u,v) \\in \\calQ : r(u,v) = 0}$, called the \\emph{axis of symmetry}. 
We also define $(\\calM, \\bfg_{\\mu \\nu})$ to be the (1+3)-dimensional Lorentzian manifold with $\\calM = \\calQ \\times \\mathbb S^{2}$ and $\\bfg_{\\mu \\nu}$ given by \\eqref{eq:defn4Met}; this is to be thought of as the full spacetime before the symmetry reduction. (We refer to \\S \\ref{subsec:derivation} for the full interpretation.)\n\n\\subsubsection*{Assumptions on the conformal geometry of $\\calQ$}\nWe assume that $\\Gamma \\subset \\calQ$ is a connected set, which is the image of a future-directed timelike curve emanating from the point $(1,1)$.\nWe also assume that $C_{1} \\subset \\calQ$, where\n\\begin{equation*}\nC_{1} = \\set{(u,v) \\in \\mathbb R^{1+1} : u=1, \\, 1 \\leq v < \\infty}.\n\\end{equation*}\n \nFurthermore, $\\calQ$ is assumed to be the domain of dependence of $\\Gamma$ and $C_{1}$ to the future, in the sense that every causal curve in $\\calQ$\nhas its past endpoint on either $\\Gamma$ or $C_{1}$. \n\n\\subsubsection*{Notations for the conformal geometry of $\\calQ$}\nDenote by $C_{u}$ (resp. $\\underline{C}_{v}$) the constant $u$ (resp. $v$) curve in $\\calQ$. Note that these are null curves in $\\calQ$.\n\nGiven $(u_{0}, v_{0}) \\in \\calQ$, we define the \\emph{domain of dependence} of the line segment $C_{u_{0}} \\cap \\set{v \\leq v_{0}}$, denoted by $\\mathcal D(u_{0}, v_{0})$, to be the set of all points $p \\in \\calQ$ such that all past-directed causal curves passing $p$ intersects $\\Gamma \\cup (C_{u_{0}} \\cap \\set{v \\leq v_{0}})$, plus the line segment $(C_{u_{0}} \\cap \\set{v \\leq v_{0}})$ itself. \n\nAlso, we define the \\emph{future null infinity} $\\mathcal I^{+}$ to be the set of ideal points $(u, +\\infty)$ such that $\\sup_{C_{u}} r = \\infty$. \n\n\\subsubsection*{Integration over null curves}\nWhenever we integrate over a subset of $C_{u}$ (resp. 
$\\underline{C}_{v}$), we will use the standard line element $\\mathrm{d} v$ (resp. $\\mathrm{d} u$), i.e., \n\\begin{align*}\n\t\\int_{\\underline{C}_v\\cap\\{u_1\\leq u\\leq u_2\\}} f = \\int_{u_1}^{u_2} f(u',v) \\, \\mathrm{d} u' \\\\ \n\t\\int_{C_u\\cap\\{v_1\\leq v\\leq v_2\\}} f = \\int_{v_1}^{v_2} f(u,v') \\, \\mathrm{d} v',\n\\end{align*} \nrespectively.\n\n\\subsubsection*{Functions of bounded variation}\nUnless otherwise specified, functions of bounded variation (BV) considered in this paper will be assumed to be right-continuous. By convention,\n\\begin{equation*}\n\t\\partial_{v} f \\, \\mathrm{d} v \\hbox{ or } \\partial_{v} f\n\\end{equation*}\nwill refer to the distributional derivative of $f$, which is a finite signed measure, and\n\\begin{equation*}\n\t\\abs{\\partial_{v} f} \\, \\mathrm{d} v \\hbox{ or } \\abs{\\partial_{v} f}\n\\end{equation*}\nwill denote the total variation measure. Unless otherwise specified, these measures will be evaluated on intervals of the form $(v_{1}, v_{2}]$. Thus, according to our conventions,\n\\begin{align*}\n\t\\int_{v_{1}}^{v_{2}} \\partial_{v} f (v) \\, \\mathrm{d} v =& f(v_{2}) - f(v_{1}), \\\\\n\t\\int_{v_{1}}^{v_{2}} \\abs{\\partial_{v} f (v)} \\, \\mathrm{d} v = & \\mathrm{T.V.}_{(v_{1}, v_{2}]} [f]. \n\\end{align*}\n\n\\subsubsection*{New variables}\nWe introduce the following notation for the directional derivatives of $r$:\n\\begin{equation*}\n\t\\lmb := \\frac{\\partial r}{\\partial v}, \\quad \\nu := \\frac{\\partial r}{\\partial u}.\n\\end{equation*}\n\nThe \\emph{Hawking mass} $m(u,v)$ is defined by the relation\n\\begin{equation}\\label{mdef}\n1 - \\frac{2m}{r} = \\partial^{a} r \\partial_{a} r = - 4 \\Omega^{-2} \\partial_{u} r \\partial_{v} r.\n\\end{equation}\n\nFor a solution to \\eqref{eq:SSESF}, the quantity $m$ possesses useful monotonicity properties (see Lemma \\ref{lem:mntn4m}), which will be one of the key ingredients of our analysis. 
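As a worked consequence of \\eqref{mdef} (recorded here for later reference): wherever $1 - \\frac{2m}{r} \\neq 0$, the defining relation can be solved for the conformal factor,\n\\begin{equation*}\n\t\\Omega^{2} = - \\frac{4 \\partial_{u} r \\, \\partial_{v} r}{1 - \\frac{2m}{r}} = - \\frac{4 \\lmb \\nu}{1 - \\frac{2m}{r}},\n\\end{equation*}\nso $\\Omega$ is determined by $r$ and $m$. This is precisely the substitution that allows $\\Omega$ to be eliminated from the system in favor of the Hawking mass.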
We define the \\emph{mass ratio} to be\n\\begin{equation*}\n\t\\mu := \\frac{2m}{r}.\n\\end{equation*}\n\nWe also define the \\emph{Bondi mass} on $C_{u}$ by $M(u) := \\lim_{v \\to \\infty} m(u, v)$, provided the limit exists. The Bondi mass $M_{i} := M(1) = \\lim_{v \\to \\infty} m(1, v)$ on the initial curve $C_{1}$ is called the \\emph{initial Bondi mass}.\n\n\\subsection{Reformulation in terms of the Hawking mass}\nThe Hawking mass as defined in \\eqref{mdef} turns out to obey useful monotonicity properties (see \\S \\ref{subsec:monotonicity}). We therefore reformulate \\eqref{eq:SSESF} in terms of $m$ and eliminate $\\Omega$. Notice that by \\eqref{metric} and \\eqref{mdef}, the metric is determined by $r$ and $m$.\n\nWe say that \\emph{$(\\phi, r, m)$ on $\\calQ$ is a solution to }(SSESF) if the following equations hold:\n\\begin{equation} \\label{eq:SSESF:dr}\n\\left\\{\n\\begin{aligned}\n\\partial_{u} \\lmb = & \\frac{\\mu}{(1-\\mu) r} \\lmb \\nu, \\\\\n\\partial_{v} \\nu = & \\frac{\\mu}{(1-\\mu) r} \\lmb \\nu,\n\\end{aligned}\n\\right.\n\\end{equation}\n\\begin{equation} \\label{eq:SSESF:dm}\n\\left\\{\n\\begin{aligned}\n2 \\nu \\partial_{u} m = & (1-\\mu) r^{2} (\\partial_{u} \\phi)^{2}, \\\\\n2 \\lmb \\partial_{v} m = & (1-\\mu) r^{2} (\\partial_{v} \\phi)^{2},\n\\end{aligned}\n\\right.\n\\end{equation}\n\\begin{equation} \\label{eq:SSESF:dphi}\n\\partial_{u} \\partial_{v} (r \\phi) = \\frac{\\mu \\lmb \\nu}{(1-\\mu)r} \\phi,\n\\end{equation}\nand moreover, the following boundary conditions hold:\n\\begin{equation*}\n\tr = 0 \\hbox{ and } m = 0 \\hbox{ along } \\Gamma.\n\\end{equation*}\n\nWe remark that using \\eqref{eq:SSESF:dr}, the wave equation \\eqref{eq:SSESF:dphi} for $\\phi$ may be rewritten in either of the following two equivalent forms:\n\\begin{align} \n\\partial_{u} (\\partial_{v} (r \\phi)) = (\\partial_{u} \\lmb) \\phi, \\label{eq:SSESF:dphi'} \\tag{\\ref{eq:SSESF:dphi}$'$} \\\\\n\\partial_{v} (\\partial_{u} (r \\phi)) = (\\partial_{v} \\nu) \\phi.
\\label{eq:SSESF:dphi''} \\tag{\\ref{eq:SSESF:dphi}$''$}\n\\end{align}\n\n\\subsection{Choice of coordinates} \\label{subsec:coordSys}\nNote that $\\calQ$ is ruled by the family of null curves $C_{u}$. Since a null curve $C_{u}$ with $u \\neq 1$ cannot intersect $C_{1}$, its past endpoint must be on $\\Gamma$. Therefore, our assumptions so far impose the following conditions on the double null coordinates $(u,v)$ on $\\calQ$: $u$ is constant on each future-directed null curve emanating from $\\Gamma$ and $v$ is constant on each conjugate null curve. However, these conditions are insufficient to give a unique choice of a coordinate system, as the system \\eqref{eq:SSESF} and assumptions so far are invariant under the change of coordinates\n\\begin{equation*}\n\tu \\mapsto U(u), \\quad v \\mapsto V(v), \\quad U(1) = V(1) = 1,\n\\end{equation*}\nfor any strictly increasing functions $U$ and $V$. To remove this ambiguity, we fix the choice of the coordinate system, once and for all, as follows.\n\nWe first fix $v$ on $C_{1}$ by relating it to the function $r$. Specifically, we will require that $v = 2r + 1$ on $C_1$, which in particular implies that \n\\begin{equation} \\label{eq:id4dvr}\n\\lmb(1, v) = \\frac{1}{2}.\n\\end{equation}\n\nNext, in order to fix $u$, we prescribe $u$ such that $\\Gamma = \\set{(u,v) : u = v}$. To do so, for every outgoing null curve $C$ in $\\calQ$, follow the incoming null curve to the past starting from $C \\cap \\Gamma$ until the point $p_*$ where it intersects the initial curve $C_1$. We then define the $u$-coordinate value for $C$ to be the $v$-coordinate value for $p_*$.
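As a simple consistency check of these normalizations, consider the trivial solution $\\phi \\equiv 0$, $m \\equiv 0$ of Minkowski space. Then $\\mu = 0$, so \\eqref{eq:SSESF:dr} gives $\\partial_{u} \\lmb = 0$, and \\eqref{eq:id4dvr} yields $\\lmb \\equiv \\frac{1}{2}$. Integrating $\\lmb$ in $v$ from the axis, where $r = 0$, we obtain\n\\begin{equation*}\n\tr(u,v) = \\frac{v-u}{2}, \\quad \\nu \\equiv - \\frac{1}{2},\n\\end{equation*}\nso that indeed $\\Gamma = \\set{(u,v) : u = v}$ under the above choice of $u$.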
\n\nWith this choice of coordinates, $\\mathcal D(u_{0}, v_{0})$ may be expressed as\n\\begin{equation*}\n\t\\mathcal D(u_{0}, v_{0}) = \\set{(u,v) \\in \\calQ : u \\in [u_{0}, v_{0}], v \\in [u, v_{0}]}.\n\\end{equation*}\n\nMoreover, if $r$ and $\\phi$ are sufficiently regular functions on $\\calQ$, then our coordinate choice leads to $\\lim_{v \\to u+}(\\lmb + \\nu) (u,v) = \\lim_{u \\to v-}(\\lmb + \\nu) (u,v) = 0$ and $\\lim_{v \\to u+} (\\partial_{v} + \\partial_{u}) (r \\phi)(u,v) = \\lim_{u \\to v-} (\\partial_{v} + \\partial_{u}) (r \\phi)(u,v) = 0$. These conditions will be incorporated into the precise formulations of solutions to \\eqref{eq:SSESF} with limited regularity in the following subsection.\n\n\\subsection{Characteristic initial value problem}\nIn this paper, we study the characteristic initial value problem for \\eqref{eq:SSESF} with data prescribed on $C_{1}$, under quite general assumptions on the regularity. In this subsection, we give precise formulations of initial data and solutions to \\eqref{eq:SSESF} to be considered in this paper.\n\nWe begin with a discussion on the constraint imposed by \\eqref{eq:SSESF} (more precisely, \\eqref{eq:SSESF:dr}--\\eqref{eq:SSESF:dphi}) on initial data for $(\\phi, r, m)$. In fact, the constraint is very simple, thanks to the fact that the data are prescribed on a characteristic (i.e., null) curve $C_{1}$. Once we prescribe $\\phi$ on $C_{1}$, the coordinate condition \\eqref{eq:id4dvr} dictates the initial values of $r$, and the initial values of $m$ are then determined by the constraint \\eqref{eq:SSESF:dm} along the $v$ direction, as well as the boundary condition $m(1, 1) = 0$.
In other words, initial data for $(\\phi, r, m)$ possess only one degree of freedom, namely the prescription of a single real-valued function $\\phi(1, v)$, or equivalently, $\\partial_{v} (r \\phi)(1, v)$.\n\nFollowing Christodoulou \\cite{Christodoulou:1993bt}, we say that an initial data set for $(\\phi, r, m)$ is of \\emph{bounded variation} (BV) if $\\partial_{v}(r \\phi)(1, \\cdot)$ is a (right-continuous) BV function on $[1, \\infty)$ with finite total variation on $(1, \\infty)$. We also define the notion of \\emph{solution of bounded variation} to \\eqref{eq:SSESF} as follows.\n\n\\begin{definition}[Bounded variation solutions to \\eqref{eq:SSESF}] \\label{def:BVsolution}\nA solution $(\\phi, r, m)$ to \\eqref{eq:SSESF} is called a \\emph{solution of bounded variation} on $\\calQ$ if on every compact domain of dependence $\\mathcal D(u_{0}, v_{0})$, the following conditions hold:\n\\begin{enumerate}\n\\item $\\sup_{\\mathcal D(u_{0}, v_{0})} (-\\nu) < \\infty$ and $\\sup_{\\mathcal D(u_{0}, v_{0})} \\lmb^{-1} < \\infty$.\n\\item $\\lmb$ is BV on each $C_{u} \\cap \\mathcal D(u_{0}, v_{0})$ uniformly in $u$, and $\\nu$ is BV on each $\\underline{C}_{v} \\cap \\mathcal D(u_{0}, v_{0})$ uniformly in $v$.\n\\item For each $a$ with $(a, a) \\in \\Gamma$,\n\\begin{equation*}\n\t\\lim_{\\epsilon \\to 0+} (\\nu + \\lmb)(a, a+\\epsilon) = 0.\n\\end{equation*}\n\\item $\\phi$ is an absolutely continuous function on each $C_{u} \\cap \\mathcal D(u_{0}, v_{0})$ with total variation bounded uniformly in $u$, and also an absolutely continuous function on each $\\underline{C}_{v} \\cap \\mathcal D(u_{0}, v_{0})$ with total variation bounded uniformly in $v$.\n\\item For each $a$ with $(a, a) \\in \\Gamma$,\n\\begin{align*}\n\\lim_{\\epsilon \\to 0} \\sup_{0 < \\delta \\leq \\epsilon} \\mathrm{T.V.}_{\\set{a-\\delta} \\times (a-\\delta, a)} [\\phi] =0, \n& \\qquad \\lim_{\\epsilon \\to 0} \\sup_{0 < \\delta \\leq \\epsilon} \\mathrm{T.V.}_{(a-\\epsilon, a-\\delta) \\times 
\\set{a-\\delta}} [\\phi] =0, \\\\\n\\lim_{\\epsilon \\to 0} \\sup_{0 < \\delta \\leq \\epsilon} \\mathrm{T.V.}_{(a, a+\\delta) \\times \\set{a+\\delta}} [\\phi] =0, \n& \\qquad \\lim_{\\epsilon \\to 0} \\sup_{0 < \\delta \\leq \\epsilon} \\mathrm{T.V.}_{\\set{a+\\delta} \\times (a+\\delta, a+\\epsilon)} [\\phi] =0.\n\\end{align*}\n\\item $\\partial_{v}(r \\phi)$ is BV on each $C_{u} \\cap \\mathcal D(u_{0}, v_{0})$ uniformly in $u$, and $\\partial_{u}(r \\phi)$ is BV on each $\\underline{C}_{v} \\cap \\mathcal D(u_{0}, v_{0})$ uniformly in $v$.\n\\item For each $a$ with $(a, a) \\in \\Gamma$,\n\\begin{equation*}\n\t\\lim_{\\epsilon \\to 0+} \\big( \\partial_{v}(r \\phi) + \\partial_{u}(r \\phi) \\big) (a, a+\\epsilon) = 0.\n\\end{equation*}\n\\end{enumerate}\n\\end{definition}\n\nWe also consider more regular data and solutions, as follows. We say that an initial data set for $(\\phi, r, m)$ is $C^{1}$ if $\\partial_{v}(r \\phi)(1, \\cdot)$ is $C^{1}$ on $[1, \\infty)$ with $\\sup_{C_{1}} \\abs{\\partial_{v}^{2}(r \\phi)} < \\infty$. 
In the following definition, we define the corresponding notion of a \\emph{$C^{1}$ solution} to \\eqref{eq:SSESF}.\n\n\\begin{definition}[$C^{1}$ solutions to \\eqref{eq:SSESF}] \\label{def:C1solution}\nA solution $(\\phi, r, m)$ to \\eqref{eq:SSESF} is called a \\emph{$C^{1}$ solution} on $\\calQ$ if the following conditions hold on every compact domain of dependence $\\mathcal D(u_{0}, v_{0})$:\n\\begin{enumerate}\n\\item $\\sup_{\\mathcal D(u_{0}, v_{0})} (-\\nu) < \\infty$ and $\\sup_{\\mathcal D(u_{0}, v_{0})} \\lmb^{-1} < \\infty$.\n\\item $\\lmb$, $\\nu$ are $C^{1}$ on $\\mathcal D(u_{0}, v_{0})$.\n\\item For each $a$ with $(a, a) \\in \\Gamma$,\n\\begin{equation*}\n\t\\lim_{\\epsilon \\to 0+} (\\nu + \\lmb)(a, a+\\epsilon) = \\lim_{\\epsilon \\to 0+} (\\nu + \\lmb)(a-\\epsilon, a) = 0.\n\\end{equation*}\n\\item $\\partial_{v}(r \\phi)$ and $\\partial_{u} (r \\phi)$ are $C^{1}$ on $\\mathcal D(u_{0}, v_{0})$.\n\\item For each $a$ with $(a, a) \\in \\Gamma$,\n\\begin{equation*}\n\t\\lim_{\\epsilon \\to 0+} \\big( \\partial_{v}(r \\phi) + \\partial_{u}(r \\phi) \\big) (a, a+\\epsilon) \n\t= \\lim_{\\epsilon \\to 0+} \\big( \\partial_{v} (r \\phi) + \\partial_{u}(r \\phi) \\big) (a - \\epsilon, a)\n\t= 0.\n\\end{equation*}\n\\end{enumerate}\n\\end{definition}\n\n\\begin{remark} \\label{rem:wp}\nBy \\cite[Theorem 6.3]{Christodoulou:1993bt}, a BV initial data set leads to a unique BV solution to \\eqref{eq:SSESF} on $\\set{(u,v) : 1 \\leq u \\leq 1+\\delta, v \\geq u}$ for some $\\delta > 0$.\nIf the initial data set is furthermore $C^{1}$, then it is not difficult to see that the corresponding solution is also $C^{1}$ (persistence of regularity).
In fact, this statement follows from the arguments in Section \\ref{sec.decay2} of this paper; see, in particular, the proof of Lemma \\ref{lem:decay2:key4nullStr}.\n\\end{remark}\n\n\n\\subsection{Local scattering in BV and asymptotic flatness}\nWe are now ready to formulate the precise notion of \\emph{locally BV scattering solutions} to \\eqref{eq:SSESF}, which is the class of solutions that we consider in this paper. In particular, for this class of solutions, we make a priori assumptions on the global geometry. \n\n\\begin{definition}[Local scattering in BV] \\label{def:locBVScat}\nWe say that a BV solution $(\\phi, r, m)$ to \\eqref{eq:SSESF} is \\emph{locally scattering in the bounded variation norm (BV)}, or a \\emph{locally BV scattering solution}, if the following conditions hold:\n\\begin{enumerate}\n\\item \\emph{Future completeness of radial null geodesics}: Every incoming null geodesic in $\\calQ$ has its future endpoint on $\\Gamma$, and every outgoing null geodesic in $\\calQ$ is future complete, i.e., has infinite affine length towards the future.
\nMoreover, there exists a global system of null coordinates $(u,v)$ and $\\calQ$ is given by\n\\begin{equation} \\label{eq:globalCoords}\n\t\\calQ = \\set{(u,v) : u \\in [1, \\infty), v \\in [u, \\infty)}.\n\\end{equation}\n\n\\item \\emph{Vanishing final Bondi mass}: The final Bondi mass vanishes, i.e.,\n\\begin{equation} \\label{eq:zeroMf}\nM_{f} := \\lim_{u \\to \\infty} M(u) = 0.\n\\end{equation}\n\n\\item \\emph{Scattering in BV in a compact $r$-region}: There exists $R > 0$ such that for $\\PD_{\\mathrm{cpt}}$ defined to be the region $\\set{(u,v) \\in \\calQ : r(u,v) \\leq R}$, we have \n\\begin{equation} \\label{eq:locBVScat}\n\t\\int_{C_{u} \\cap \\PD_{\\mathrm{cpt}}} \\abs{\\partial_{v}^{2} (r \\phi)} \\to 0, \\quad\n\t\\int_{C_{u} \\cap \\PD_{\\mathrm{cpt}}} \\abs{\\partial_{v} \\log \\lmb} \\to 0\n\\end{equation}\nas $u \\to \\infty$.\n\\end{enumerate}\n\\end{definition}\n\nSeveral remarks concerning Definition~\\ref{def:locBVScat} are in order.\n\\begin{remark} \nIn fact, the condition \\eqref{eq:globalCoords} is a consequence of future completeness of radial null geodesics and the preceding assumptions. To see this, first recall our assumption that $C_{1} = \\set{(u,v) : u = 1, v \\in [1, \\infty)}$. Hence from our choice of the coordinate $u$ and future completeness of incoming radial null geodesics, it follows that the range of $u$ must be $[1, \\infty)$. Furthermore, for each $u \\in [1, \\infty)$, the range of $v$ on $C_{u}$ is $[u, \\infty)$ by future completeness of outgoing radial null geodesics and Definition~\\ref{def:BVsolution}. 
More precisely, future completeness of $C_{u}$ implies that it can be continued past $\\set{u} \\times [u, v_{0}]$ as long as $\\int_{u}^{v_{0}} \\Omega^{2} \\, \\mathrm{d} v < \\infty$, and Definition~\\ref{def:BVsolution} implies\\footnote{We refer to the proof of Proposition~\\ref{prop:geomLocBVScat} below for details of estimating $\\frac{-\\nu}{1-\\mu}$ in terms of assumptions on $\\phi$, $\\partial_{v}(r \\phi)$ and $\\lmb$.} that $\\Omega^{2} = - \\frac{4 \\lambda \\nu}{1-\\mu}$ indeed remains bounded on $\\set{u} \\times [u, v_{0}]$ for every finite $v_{0}$.\n\\end{remark}\n\n\\begin{remark} \\label{rem:FCGC}\nFor more regular (e.g., $C^{1}$) asymptotically flat solutions, the conditions $(1)$ and $(2)$ in Definition \\ref{def:locBVScat} may be replaced by a single equivalent condition, namely requiring the full spacetime $(\\calM, \\bfg)$ to be \\emph{future causally geodesically complete} as a (1+3)-dimensional Lorentzian manifold. In particular, (2) follows from the deep work \\cite{Christodoulou:1987ta} of Christodoulou, in which it was proved that if $M_{f} > 0$ then the spacetime necessarily contains a black hole and thus is not future causally geodesically complete.\n\\end{remark}\n\n\\begin{remark}\\label{rmk.unif.int}\nAs we will see in the proof, there exists a universal $\\tilde{\\epsilon}_0$ such that (3) in Definition \\ref{def:locBVScat} can be replaced by the weaker requirement that there exist $R>0$ and $U>0$ such that\n\\begin{equation*}\n\t\\int_{C_{u} \\cap \\PD_{\\mathrm{cpt}}} \\abs{\\partial_{v}^{2} (r \\phi)} \\leq \\tilde{\\epsilon}_0, \\quad\n\t\\int_{C_{u} \\cap \\PD_{\\mathrm{cpt}}} \\abs{\\partial_{v} \\log \\lmb} \\leq \\tilde{\\epsilon}_0\n\\end{equation*}\nfor $u\\geq U$.
To simplify the exposition, we will omit the proof of this improvement.\n\\end{remark}\n\n\\begin{remark} \nFor a sufficiently regular, asymptotically flat solution to \\eqref{eq:SSESF}, Condition $(1)$ in Definition \\ref{def:locBVScat} is equivalent to requiring that the conformal compactification of $\\calQ$ is depicted by a Penrose diagram as in Figure \\ref{fig.disp} (in the introduction). For more discussion on Penrose diagrams, we refer the reader to \\cite[Appendix C]{DR} and \\cite{Kommemi}. In fact, this equivalence follows easily from the classification of all possible Penrose diagrams for the system \\eqref{eq:SSESF} given in the latter reference.\n\\end{remark}\n\n\nWe also define the precise notion of \\emph{asymptotic flatness} for initial data with BV or $C^{1}$ regularity. As we shall see soon in the main theorems, the rate of decay for the initial data, measured in $r$, is directly related to the rate of decay of the corresponding solution, in both $u$ and $r$.\n\n\\begin{definition}[Asymptotic flatness of order $\\omega'$ in BV or $C^{1}$] \\label{def:AF}\nFor $\\omega' >1$, we make the following definition.\n\n\\begin{enumerate}\n\\item We say that an initial data set is \\emph{asymptotically flat of order $\\omega'$ in BV} if $\\partial_{v} (r \\phi)(1, \\cdot) \\in \\mathrm{BV}[1, \\infty)$ and there exists $\\mathcal I_{1} > 0$ such that \n\\begin{equation}\n\t\\sup_{C_{1}} (1+r)^{\\omega'} \\abs{\\partial_{v}(r \\phi)} \\leq \\mathcal I_{1} < \\infty.\n\\end{equation}\n\n\\item We say that an initial data set is \\emph{asymptotically flat of order $\\omega'$ in $C^{1}$} if $\\partial_{v} (r \\phi)(1, \\cdot) \\in C^{1}[1, \\infty)$ and there exists $\\mathcal I_{2} > 0$ such that \n\\begin{equation}\n\t\\sup_{C_{1}} (1+r)^{\\omega'} \\abs{\\partial_{v}(r \\phi)} + \\sup_{C_{1}} (1+r)^{\\omega'+1} \\abs{\\partial_{v}^{2}(r \\phi)} \\leq \\mathcal I_{2} < \\infty.\n\\end{equation}\n\\end{enumerate}\n\\end{definition}\n\n\\begin{remark} \nThe
initial Bondi mass $M_{i} := \\lim_{v \\to \\infty} m(1, v)$ can be easily bounded: $M_{i} \\leq C \\mathcal I_{1}^{2}$; see Lemma \\ref{lem:bnd4Mi}. \n\\end{remark}\n\n\\begin{remark} \\label{rem:PhiIsZero}\nObserve that both conditions imply that $(r \\phi)(1,v)$ tends to a finite limit as $v \\to \\infty$; in particular, $\\lim_{v \\to \\infty} \\phi(1, v) = 0.$\nThis serves to fix the gauge freedom $(\\phi, r, m) \\mapsto (\\phi + c, r, m)$ for solutions to \\eqref{eq:SSESF}.\n\\end{remark}\n\n\\section{Main results}\\label{sec.main.thm}\n\\subsection{Main theorems}\nWith the definitions of locally BV scattering solutions and asymptotically flat initial data, we now have the necessary means to state the main theorems of this paper. Roughly speaking, these theorems say that locally BV scattering solutions with asymptotically flat initial data exhibit quantitative decay rates, which can be read off from the rate $\\omega'$ in Definition \\ref{def:AF}. The first theorem is for initial data and solutions in BV.\n\n\\begin{theorem}[Main Theorem in BV] \\label{main.thm.1}\nLet $(\\phi, r, m)$ be a locally BV scattering solution to \\eqref{eq:SSESF} with asymptotically flat initial data of order $\\omega'$ in BV. Then for $\\omega := \\min \\set{\\omega', 3}$, there exists a constant $A_{1} > 0$ such that \n\\begin{align} \n\t\\abs{\\phi} \\leq & A_{1} \\min \\set{u^{-\\omega}, r^{-1} u^{-(\\omega-1)}}, \\label{eq:decay1:1} \\\\\n\t\\abs{\\partial_{v}(r \\phi)} \\leq & A_{1} \\min \\set{u^{-\\omega}, r^{-\\omega}}, \\label{eq:decay1:2} \\\\\n\t\\abs{\\partial_{u} (r \\phi)} \\leq & A_{1} u^{-\\omega}. \\label{eq:decay1:3}\n\\end{align}\n\\end{theorem}\n\nThe second theorem is for initial data and solutions in $C^{1}$. \n\n\\begin{theorem}[Main Theorem in $C^{1}$] \\label{main.thm.2}\nLet $(\\phi, r, m)$ be a locally BV scattering solution to \\eqref{eq:SSESF} with asymptotically flat initial data of order $\\omega'$ in $C^{1}$.
Then, in addition to the bounds \\eqref{eq:decay1:1}--\\eqref{eq:decay1:3}, there exists a constant $A_{2} > 0$ such that \n\\begin{align} \n\t\\abs{\\partial_{v}^{2} (r \\phi)} \\leq & A_{2} \\min \\set{u^{-(\\omega+1)}, r^{-(\\omega+1)}}, \\label{eq:decay2:1} \\\\\n\t\\abs{\\partial_{u}^{2} (r \\phi)} \\leq & A_{2} u^{-(\\omega+1)}, \\label{eq:decay2:2} \\\\\n\t\\abs{\\partial_{v} \\lmb} \\leq & A_{2} \\min \\set{u^{-3}, r^{-3}}, \\label{eq:decay2:3} \\\\\n\t\\abs{\\partial_{u} \\nu} \\leq & A_{2} u^{-3}, \\label{eq:decay2:4}\n\\end{align}\nfor $\\omega := \\min \\set{\\omega', 3}$.\n\\end{theorem}\n\nSome remarks regarding the main theorems are in order.\n\n\\begin{remark}\nNotice that in Theorem \\ref{main.thm.2}, the decay rates for $\\partial_v\\lambda$ and $\\partial_u\\nu$ are independent of the order $\\omega'$ of asymptotic flatness of the initial data. This is because the scalar field terms enter the equations for $\\partial_u\\partial_v\\log\\lambda$ and $\\partial_v\\partial_u\\log\\nu$ quadratically (see equations \\eqref{eq:eq4dvdvr:normal} and \\eqref{eq:eq4dudur:normal}) and thus, as long as $\\omega'>1$, their contributions to the decay rates of $\\partial_v\\lambda$ and $\\partial_u\\nu$ are of lower order compared to the term involving the Hawking mass.\n\\end{remark}\n\n\\begin{remark}\nBy Remark \\ref{rem:wp}, a $C^{1}$ initial data set gives rise to a $C^{1}$ solution. Hence Remark \\ref{rem:FCGC} applies, and the conditions (1)--(2) of Definition \\ref{def:locBVScat} may be replaced by a single equivalent condition of \\emph{future causal geodesic completeness} of $(\\calM, {\\bf g})$ in the case of Theorem \\ref{main.thm.2}. \n\\end{remark}\n\n\\begin{remark}\nIn general, the constants $A_1$ and $A_2$ do not depend only on the size of the initial data (e.g., $\\mathcal I_{1}$, $\\mathcal I_{2}$), but rather on the full profile of the solution.
Nevertheless, for the special case of small initial total variation of $\\partial_{v}(r \\phi)$, $A_{1}$ and $A_{2}$ \\emph{do} depend only on the size of the initial data; see \\S \\ref{sec:mainThm:smallData}.\n\\end{remark}\n\n\\begin{remark}\nIf the initial data also verify higher derivative estimates, then the techniques in proving Theorems \\ref{main.thm.1} and \\ref{main.thm.2} also allow us to derive decay bounds for higher order derivatives. The proof of the higher derivative decay estimates is in fact easier than the proofs of the first and second derivative decay bounds since we have already obtained sufficiently strong control of the scalar field and the geometry of the spacetime. We will omit the details.\n\\end{remark}\n\n\\begin{remark}\nThe decay rates that we obtain in these variables imply as immediate corollaries decay rates for $\\partial_v \\phi$, $\\partial_u \\phi$, etc. See Corollaries \\ref{cor:decay1} and \\ref{cor:decay2}.\n\\end{remark}\n\n\\begin{remark} \\label{rem:coord}\nThe decay rates in the main theorems are measured with respect to the double null coordinates $(u,v)$ normalized at the initial curve and the axis $\\Gamma$ as in \\S \\ref{subsec:coordSys}. To measure the decay rate along null infinity, one can alternatively normalize the $u$ coordinate\\footnote{In particular, this normalization is used in \\cite{DR} for the black hole case. By changing the null coordinates, we can thus more easily compare the decay rates in our setting and that in \\cite{DR}.} by requiring $\\partial_{u} r=-\\frac 12$ at future null infinity. 
As we will show in \\S \\ref{sec.coord}, for the class of spacetimes considered in this paper, the decay rates with respect to this new system of null coordinates are the same up to a constant multiplicative factor.\n\\end{remark}\n\n\\begin{remark}\nIn view of Remark \\ref{rmk.unif.int}, the assumption of local BV scattering can be replaced by the \\emph{boundedness} of the \\emph{subcritical} quantities\n$$\\int_{C_{u} \\cap \\PD_{\\mathrm{cpt}}} \\abs{\\partial_{v}^{2} (r \\phi)}^p \\leq C, \\quad\n\t\\int_{C_{u} \\cap \\PD_{\\mathrm{cpt}}} \\abs{\\partial_{v} \\log \\lmb}^p \\leq C,\\quad\\mbox{for }p>1.$$\nThis is because for fixed $\\tilde{\\epsilon}_0$, one can choose $R$ to be sufficiently small (depending on $C$) and apply H\\\"older's inequality to ensure\tthat\n$$\\int_{C_{u} \\cap \\PD_{\\mathrm{cpt}}} \\abs{\\partial_{v}^{2} (r \\phi)} \\leq \\tilde{\\epsilon}_0, \\quad\n\t\\int_{C_{u} \\cap \\PD_{\\mathrm{cpt}}} \\abs{\\partial_{v} \\log \\lmb} \\leq \\tilde{\\epsilon}_0.$$\n\\end{remark}\n\n\\begin{remark}\nWe also notice that the proof of our main theorem can be carried out in an identical manner for locally BV scattering solutions arising from asymptotically flat \\emph{Cauchy data}. 
More precisely, we can consider initial data given on a Cauchy hypersurface\n$$v=f(u),\\quad\\mbox{with }C^{-1}\\leq -f'(u)\\leq C$$\nand satisfying the constraint equation together with the following bounds on the initial data:\n$$(1+r)^{\\omega'}(|\\phi|+|\\partial_v(r\\phi)|+|\\partial_u(r\\phi)|+|\\lambda-\\frac 12|+|\\nu+\\frac 12|)\\leq \\tilde{\\mathcal I}_1,$$\nand\n$$(1+r)^{\\omega'+1}(|\\partial_v^2(r\\phi)|+|\\partial_u^2(r\\phi)|+|\\partial_v \\log\\lambda|+|\\partial_u\\log\\nu|)\\leq \\tilde{\\mathcal I}_2.$$ \nThen, if we assume in addition that the spacetime is locally BV scattering to the future, the conclusions of Theorems \\ref{main.thm.1} and \\ref{main.thm.2} hold.\n\\end{remark}\n\n\\begin{remark}\nOur main theorems can also be viewed as results on upgrading qualitative decay to quantitative decay estimates. Such problems have been widely studied in the \\emph{linear} setting (without the assumption of spherical symmetry) on nontrapping asymptotically flat Lorentzian manifolds \\cite{DR2}, \\cite{Ta}, \\cite{MTT}, as well as for the obstacle problem on Minkowski space \\cite{Morawetz}, \\cite{Strauss}. In the \\emph{nonlinear} setting, we mention the work of Christodoulou--Tahvildar-Zadeh \\cite{CTZ}, who studied the energy critical 2-dimensional spherically symmetric wave map system and proved asymptotic decay for the solution and its derivatives.\n\\end{remark}\n\n\\subsection{BV scattering and the blow-up at infinity scenario}\n\nThe condition of local BV scattering in the main theorems follows if one rules out, a priori, a blow-up at infinity scenario. More precisely, we say that a solution blows up at infinity if certain scale-invariant spacetime norms are infinite, as follows:\n\n\\begin{definition}\\label{def.blow.up.infty}\nLet $(\\phi, r, m)$ be a BV solution to \\eqref{eq:SSESF} such that the condition $(1)$ of Definition \\ref{def:locBVScat} (future completeness of radial null geodesics) holds.
We say that the solution \\emph{blows up at infinity} if at least one of the following holds:\n\\begin{enumerate}\n\\item $\\displaystyle{\\sup \\lambda_{\\Gamma}^{-1} = \\infty}$, where $\\displaystyle{\\lambda_{\\Gamma}(u) := \\lim_{v \\to u+} \\lmb(u,v)}$;\n\\item $\\displaystyle{\\int_1^{\\infty}\\int_u^{\\infty}|\\partial_v\\lambda\\partial_u\\phi-\\partial_u\\lambda\\partial_v\\phi| \\mathrm{d} v \\mathrm{d} u =\\infty}$;\n\\item $\\displaystyle{\\int_1^{\\infty}\\int_u^{\\infty}|\\partial_u\\phi\\partial_v(\\nu^{-1}\\partial_u(r\\phi))-\\partial_v\\phi\\partial_u(\\nu^{-1}\\partial_u(r\\phi))| \\mathrm{d} v \\mathrm{d} u =\\infty}$.\n\\end{enumerate}\n\\end{definition} \n\n\\begin{remark}\nWe do not prove in this paper the existence or non-existence of solutions that blow up at infinity. We remark that this is analogous to the blow-up at infinity scenarios which have recently been constructed for some simpler \\emph{semilinear}, \\emph{critical} wave equations \\cite{DK}.\n\\end{remark}\n\nIt follows from our main theorem that if a solution does not blow up at infinity, it obeys quantitative decay estimates. More precisely, we have\n\\begin{theorem}[Dichotomy between blow-up at infinity and BV scattering] \\label{thm.dichotomy}\nLet $(\\phi, r, m)$ be a BV solution to \\eqref{eq:SSESF} such that the condition $(1)$ of Definition \\ref{def:locBVScat} (future completeness of radial null geodesics) holds.
Assume furthermore that the initial data for $(\\phi, r, m)$ obey the condition\\footnote{By Remark \\ref{rem:PhiIsZero}, note that this is the only condition on $\\lim_{v \\to \\infty} \\phi(1, v)$ which is consistent with asymptotic flatness.} $\\lim_{v \\to \\infty} \\phi(1,v) = 0$ and\n\\begin{equation} \\label{dichotomy.hyp}\n\\int_{C_{1}} \\abs{\\partial_{v}^{2} (r \\phi)} \\, \\mathrm{d} v + \\sup_{C_{1}} \\abs{\\partial_{v}(r \\phi)} < \\infty.\n\\end{equation}\n\nThen either\n\\begin{enumerate}\n\\item the solution blows up at infinity; or\n\\item the solution is globally BV scattering, in the sense that the conditions $(2)$ and $(3)$ of Definition \\ref{def:locBVScat} hold with $R =\\infty$.\n\\end{enumerate}\n\\end{theorem}\n\nThis theorem is established in Section \\ref{sec.dichotomy}. It then follows from our main theorems (Theorems \\ref{main.thm.1} and \\ref{main.thm.2}) that if a BV solution does not blow up at infinity and possesses asymptotically flat initial data, then it obeys quantitative decay estimates.\n\n\\subsection{Refinement in the small data in BV case} \\label{sec:mainThm:smallData}\nBy a theorem of Christodoulou \\cite{Christodoulou:1993bt}, the maximal development of data with small BV norms does not blow up at infinity. The previous theorem applies, and thus the corresponding solution is globally BV scattering, in the sense described in Theorem \\ref{thm.dichotomy}. 
Moreover, a closer inspection of the proof of the main theorems reveals that the following stronger conclusion holds in this case.\n\n\\begin{theorem} [Sharp decay for data with small BV norm] \\label{thm:smallData}\nThere exists a universal $\\epsilon_{0} > 0$ such that for $0 < \\epsilon \\leq \\epsilon_{0}$, the following statements hold.\n\n\\begin{enumerate}\n\\item If the initial data set is asymptotically flat of order $\\omega'$ in BV and\n\\begin{equation*}\n\t\\int_{C_{1}} \\abs{\\partial_{v}^{2} (r \\phi)} < \\epsilon,\n\\end{equation*}\nthen the maximal development $(\\phi, r, m)$ is globally BV scattering, in the sense that Definition \\ref{def:locBVScat} holds with arbitrarily large $R > 0$. Moreover, it satisfies the estimates \\eqref{eq:decay1:1}--\\eqref{eq:decay1:3} with $A_{1} \\leq C_{\\mathcal I_{1}} (\\mathcal I_{1} + \\epsilon)$.\n\n\\noindent Here (and similarly in (2)), we use the convention that $C_{\\mathcal I_{1}}$ depends on $\\mathcal I_1$ in a non-decreasing fashion.\\footnote{In particular, for $\\mathcal I_1$ sufficiently small, we have the estimate $A_1\\leq C(\\mathcal I_1+\\epsilon)$ for some absolute constant $C$.}\n\n\\item If, in addition, the initial data set is asymptotically flat of order $\\omega'$ in $C^{1}$, then the maximal development also satisfies \\eqref{eq:decay2:1}--\\eqref{eq:decay2:4} with $A_{2} \\leq C_{\\mathcal I_{2}} (\\mathcal I_{2} + \\epsilon)$.\n\\end{enumerate}\n\\end{theorem}\n\nThe point of this theorem is that we only need the initial total variation to be small in order to conclude pointwise decay rates; in particular, $\\mathcal I_{1}$, $\\mathcal I_{2}$ can be arbitrarily large. In this sense, Theorem \\ref{thm:smallData} generalizes both the small BV global well-posedness theorem \\cite[Theorem 6.2]{Christodoulou:1993bt} and the earlier small data scattering theorem \\cite{Christodoulou:1986ue} for data that are small in a weighted $C^1$ norm.
A proof of this theorem will be sketched in Section \\ref{sec:smallData}.\n\n\\subsection{Optimality of the decay rates}\n\nOur main theorems show upper bounds for the decay rates of the scalar field $\\phi$ and its derivatives both towards null infinity (i.e., in $r$) and along null infinity (i.e., in $u$). For $\\omega' = \\omega<3$, if the decay rate of the initial data towards null infinity also satisfies a lower bound, then we can show that both the $r$ and $u$ decay rates in Theorem \\ref{main.thm.1} are saturated. More precisely,\n\n\\begin{theorem} [Sharpness of $t^{-\\omega}$ tail for $1 < \\omega < 3$] \\label{thm.opt.1}\nLet $1 < \\omega < 3$. Suppose, in addition to the assumptions of Theorem \\ref{main.thm.1}, that there exists $V\\geq 1$ such that the initial data set satisfies the lower bound\n\\begin{equation*}\n\tr^{\\omega} \\partial_{v}(r \\phi) (1, v) \\geq L > 0,\n\\end{equation*}\nfor $v\\geq V$.\n\nThen there exists a constant $L_{\\omega} >0$ such that\n\\begin{align*}\n\t\\partial_{v} (r \\phi)(u, v) \\geq & L_{\\omega} \\min\\set{r^{-\\omega}, u^{-\\omega}}, \\\\\n\t- \\partial_{u} (r \\phi)(u, v) \\geq & L_{\\omega} u^{-\\omega},\n\\end{align*}\nfor $u$ sufficiently large.\n\\end{theorem}\n\\begin{remark}\nOne can also infer the sharpness of the decay of $\\phi$ from that of its derivatives. We will omit the details.\n\\end{remark}\n\n\nThis theorem will be proved in \\S \\ref{subsec.opt.1}. In fact, the proof of this theorem is similar to the proof of the upper bounds in the first main theorem (Theorem \\ref{main.thm.1}). We will show that after restricting to $u$ sufficiently large, the initial lower bound propagates and the nonlinear terms only give lower order contributions.
Notice also that the analogous statement is false for $\\omega'\\geq 3$, since the nonlinear terms may dominate the contribution of the initial data.\n\nFor $\\omega' \\geq 3$, we can show that the decay rates in Theorem \\ref{main.thm.1} are sharp in the following sense:\n\\begin{theorem} [Sharpness of $t^{-3}$ tail] \\label{thm.opt.2}\nFor arbitrarily small $\\epsilon > 0$, there exists a locally BV scattering solution $(\\phi, r, m)$ to \\eqref{eq:SSESF} which satisfies the following properties:\n\\begin{enumerate}\n\\item $\\partial_{v} (r \\phi)(1, v)$ is smooth, compactly supported in the $v$-variable and has total variation less than $\\epsilon$, i.e.,\n\\begin{equation*}\n\t\\int_{C_{1}} \\abs{\\partial_{v}^{2} (r \\phi)} < \\epsilon.\n\\end{equation*}\n\\item There exists a constant $L_{3} > 0$ such that\n\\begin{align*}\n\t\\partial_{v} (r \\phi) (u, v) \\geq & L_{3} \\min \\set{r^{-3}, u^{-3}}, \\\\\n\t- \\partial_{u} (r \\phi) (u, v) \\geq & L_{3} u^{-3},\n\\end{align*}\nfor $u$ sufficiently large.\n\\end{enumerate}\n\\end{theorem}\n\n\nTo prove Theorem \\ref{thm.opt.2}, we will first establish a sufficient condition for the desired lower bounds in terms of (non-vanishing of) a single real number $\\mathfrak{L}$, which is computed from information at the null infinity. This result (Lemma \\ref{lem:LB}) is proved using the decay rates proved in the main theorems, and we believe it might be of independent interest. In \\S \\ref{subsec.opt.2}, we will complete the proof of Theorem \\ref{thm.opt.2} by constructing an initial data set for which $\\mathfrak{L}$ can be bounded away from zero. 
This can be achieved by showing that the solution is close to that of a corresponding linear problem and controlling the error terms after taking $\\epsilon > 0$ to be sufficiently small and using Theorem \\ref{thm:smallData}.\n\n\\subsection{Strategy of the proof of the main theorems}\n\nRoughly speaking, the proof of decay of $\\phi$ and its derivatives can be split into three steps. In the first two steps, we control the incoming part\\footnote{We call these variables `incoming' because they obey a transport equation in the $\\partial_u$ direction.} of the derivatives of the scalar field and metric components, i.e., $\\partial_v(r\\phi)$, $\\partial_v^2(r\\phi)$ and $\\partial_v\\lmb$. To this end, we split the spacetime into the exterior region $\\PD_{\\mathrm{ext}}:=\\{(u,v)\\in \\mathcal Q: v\\geq 3u\\}$ and the interior region $\\PD_{\\mathrm{int}}:=\\{(u,v)\\in \\mathcal Q: v\\leq 3u\\}$. In the first step, we control the incoming part of the solution in the exterior region. In this region, we have $r \\gtrsim v, u$, so the negative $r$ weights in the equations give the required decay of $\\phi$ and its derivatives. We then prove bounds in the interior region in the second step. Here, we exploit certain (non-quantitative) smallness in the spacetime quantities as $u \\to \\infty$ given by the assumption of local BV scattering to propagate the decay estimates from the exterior region to the interior region all the way up to the axis. Finally, in the third step, we control the outgoing part of the solution, i.e., $\\partial_u(r\\phi)$, $\\partial_u^2(r\\phi)$ and $\\partial_u\\nu$, by showing that the decay bounds that we have proved along the axis can be propagated in the outgoing direction.\n\nWe remind the readers that the above sketch is only a heuristic argument and is not true if taken literally. 
In particular, in order to carry out this procedure, we first need to show that the local BV scattering assumption provides some control over the spacetime geometry. As we will show below, the estimates are derived in slightly different fashions for the first and the second derivatives of $r\\phi$. We note in particular that carrying out this general scheme relies heavily on the analytic structure of the Einstein-scalar field equations, including the monotonicity properties as well as the null structure of the (renormalized) equations.\n\n\\subsubsection{Estimates for first derivatives of $r\\phi$}\n\nTo obtain decay bounds for the first derivatives of $r\\phi$, we will rely on the wave equation\n$$\\partial_u \\partial_v(r\\phi)=\\frac{2 m \\lambda \\nu}{(1-\\mu)r^2}\\phi.$$\nNotice that when we solve for the incoming radiation $\\partial_v(r\\phi)$ using this as a transport equation in $u$, the right hand side does not depend explicitly on the outgoing radiation $\\partial_u(r\\phi)$. Instead, the right hand side consists of terms that are either lower order (in terms of derivatives) or satisfy a certain monotonicity property.\n\nIn particular, this equation shows that as long as $\\phi$ can be controlled, we can estimate $\\partial_v(r\\phi)$ by integrating along the incoming $u$ direction. On the other hand, we can also control $\\phi$ once a bound on $\\partial_v(r\\phi)$ is known by integrating along the outgoing $v$ direction.\n\nTo achieve the desired decay rates for $\\phi$, $\\partial_v(r\\phi)$ and $\\partial_u(r\\phi)$, we follow the three steps outlined above:\n\n\\begin{enumerate}\n\\item [(1)] Bounds\\footnote{The estimates in this region are similar to the corresponding bounds for the black hole case in \\cite{DR}. 
There, it was observed that the quantity $\\partial_v(r \\phi)$, which Dafermos-Rodnianski called an almost Riemann invariant, verifies an equation such that the right hand side has useful weights in $r$ and gives the desired decay rates.} for $\\partial_v(r\\phi)$ and $\\phi$ in $v\\geq 3u$: In the exterior region, we have $r\\gtrsim u,v$; it is therefore sufficient to prove the decay in $r$. First, we prove that $\\sup_{C_u}(1+r)\\phi$ is bounded. This is achieved in a compact region by continuity of the solution\\footnote{In particular, since we are simply using compactness, the constants in Theorem \\ref{main.thm.1} do not depend only on the size of the initial data.} and in the region of large $r$ by integrating $\\partial_v(r\\phi)$ in the outgoing direction from the compact region. Since $\\partial_v(r\\phi)$ can in turn be controlled by $\\phi$, we get the desired bound. To improve over this bound we define\n\\begin{equation*}\n\\mathcal B_{1}(U) := \\sup_{u \\in [1, U]} \\sup_{C_{u}} \\Big( u^{\\omega} \\abs{\\phi} + r u^{\\omega-1} \\abs{\\phi} \\Big)\n\\end{equation*}\nand show via the wave equation that\n$$r^\\omega|\\partial_v(r\\phi)|\\leq C(u_1)+\\epsilon(u_1)\\mathcal{B}_1(U),$$\nwhere $\\epsilon(u_1)\\to 0$ as $u_1\\to \\infty$. This gives the optimal decay rate for $\\partial_v(r\\phi)$ in the exterior region, up to an arbitrarily small loss, once $\\mathcal{B}_1(U)$ is controlled.\n\n\\item [(2)] Bounds for $\\partial_v(r\\phi)$ and $\\phi$ in $v\\leq 3u$: For the decay of the first derivatives, the interior region $\\{v\\leq 3u\\}$ is further divided into the intermediate region $\\{r\\geq R\\}$ and the compact region $\\{r\\leq R\\}$. In these two regions, the $r$ weight in the equation is not sufficient to give the sharp decay rate. Instead, we start from the decay rate for $\\partial_v(r\\phi)$ obtained in the first step in the exterior region and propagate this decay estimate inwards. 
To achieve this, we need to show that $\\int \\frac{2 m \\lambda \\nu}{(1-\\mu)r^2}$ is small when $u$ is sufficiently large. \n\n\\item [(2a)] $r\\geq R$ and $v\\leq 3u$: In the intermediate region where we still have a lower bound on $r$, the required smallness is given by the \\emph{qualitative} information that the Hawking mass approaches $0$. Thus, from some large time onwards, $\\int \\frac{2 m \\lambda \\nu}{(1-\\mu)r^2}$ becomes sufficiently small and we can integrate the wave equation directly to obtain the desired decay bounds.\n\n\\item [(2b)] $r\\leq R$ and $v\\leq 3u$: In this region, we use the local BV scattering assumption to show that $\\int_{\\{r\\leq R\\}} \\frac{2 m \\lambda \\nu}{(1-\\mu)r^2}\\to 0$ as $u\\to\\infty$. This smallness allows us to propagate the decay estimates from the curve $r=R$ to the region $r< R$. At this point, we can also recover the control for $\\mathcal{B}_1(U)$ and close the estimates in step 1. This allows us to derive all the optimal decay rates for $\\phi$ and $\\partial_v(r\\phi)$.\n\n\\item [(3)] Bounds for $\\partial_u(r\\phi)$: To achieve the bounds for $\\partial_u(r\\phi)$, first note that along the axis we have $\\partial_u(r\\phi)=-\\partial_v(r\\phi)$. Thus, by the previously derived control for $\\partial_v(r\\phi)$, we also have the decay of $\\partial_u(r\\phi)$ along the axis. We then consider the wave equation as a transport equation in the outgoing direction for $\\partial_u(r\\phi)$ to obtain the sharp decay for $\\partial_u(r\\phi)$ in the whole spacetime. \n\n\\end{enumerate}\n\n\\subsubsection{Estimates for second derivatives of $r\\phi$}\n\nAs for the first derivatives, we control the second derivatives by first integrating the equation in the exterior region up to the curve $v=3u$. We then propagate the decay bounds from the exterior region to the interior region using the estimates already derived for the first derivative of $\\phi$, as well as the local BV scattering assumption. 
However, at this level of derivatives, some new difficulties arise as we now describe.\n\\\\\n\\fparagraph{Renormalization and the null structure}\nThe assumption of local BV scattering implies that\n\\begin{eqnarray}\n\\int_{C_u\\cap\\{r\\leq R\\}} (|\\partial_v\\phi|+|\\partial_v^2(r\\phi)|)\\to 0 \\label{BV.small.1}\n\\end{eqnarray}\nas $u\\to \\infty$. When combined with Christodoulou's BV theory, this also implies that as $v\\to \\infty$, we have\n\\begin{eqnarray}\n\\int_{\\underline{C}_v\\cap\\{r\\leq R\\}} (|\\partial_u\\phi|+|\\partial_u^2(r\\phi)|) \\to 0. \\label{BV.small.2}\n\\end{eqnarray}\nNotice that on $C_u$ (resp. $\\underline{C}_v$), we only control the integral of $\\partial_v^2(r\\phi)$ and $\\partial_v\\phi$ (resp. $\\partial_u^2(r\\phi)$ and $\\partial_u\\phi$).\n\nSuppose that, when integrating along the incoming direction to control $\\partial_v^2(r\\phi)$ and $\\partial_v\\lmb$, we need to estimate terms of the form\n$$\\int_{\\underline{C}_v\\cap\\{r\\leq R\\}} |\\partial_u\\phi \\partial_v\\phi|.$$\nWe can apply the BV theory to show that for $v$ sufficiently large,\n$$\\int_{\\underline{C}_v\\cap\\{r\\leq R\\}} |\\partial_u\\phi| \\leq \\epsilon.$$\nOn the other hand, one can show that\n$$\\sup_{\\underline{C}_v\\cap\\{r\\leq R\\}} |\\partial_v\\phi|\\leq C \\sup_{J^-(\\underline{C}_v\\cap\\PD_{\\mathrm{cpt}})} |\\partial_v^2(r\\phi)|,$$\nwhich can be controlled by the quantity that we are estimating.\n\nHowever, in equation \\eqref{eq:eq4dvdvrphi:normal} for $\\partial_v^2(r\\phi)$ derived by differentiating \\eqref{eq:SSESF:dphi}, there are terms of the form\n$$\\partial_v\\phi \\partial_v\\phi$$\nsuch that neither of the factors can be controlled a priori in $L^1$ by the local BV scattering assumption. 
In other words, the equation does not obey any null condition.\n\nTo deal with this problem, we follow \\cite{Christodoulou:1993bt} and introduce the renormalized variables\n\\begin{gather*}\n\\partial_{v}^{2} (r \\phi) - (\\partial_{v} \\lmb) \\phi, \\qquad \\partial_{u}^{2} (r \\phi) - (\\partial_{u} \\nu) \\phi, \\\\\n\\partial_{v} \\log \\lmb - \\frac{\\mu}{(1-\\mu)} \\frac{\\lmb}{r} + \\partial_{v} \\phi \\Big( \\lmb^{-1} \\partial_{v} (r \\phi) - \\nu^{-1} \\partial_{u} ( r \\phi) \\Big), \\\\\n\\partial_{u} \\log (-\\nu) - \\frac{\\mu}{(1-\\mu)} \\frac{\\nu}{r} + \\partial_{u} \\phi \\Big( \\lmb^{-1} \\partial_{v} (r \\phi) - \\nu^{-1} \\partial_{u} (r \\phi) \\Big),\n\\end{gather*}\nwhich have the property that the nonlinear terms arising in the equations for these variables in fact have a null structure. In particular, we can apply the above heuristic procedure to obtain decay estimates in the compact region $r\\leq R$.\n\\\\\n\\fparagraph{Non-renormalized variables and decay towards null infinity}\n\nWhile the renormalization allows us to apply the BV theory in the interior region, it does not give the optimal $r$ decay rates in the exterior region. For example, the renormalized quantity\n$$\\partial_{v} \\log \\lmb - \\frac{\\mu}{(1-\\mu)} \\frac{\\lmb}{r} + \\partial_{v} \\phi \\Big( \\lmb^{-1} \\partial_{v} (r \\phi) - \\nu^{-1} \\partial_{u} (r \\phi) \\Big)$$\ndecays only as $r^{-2}$ towards null infinity due to the contribution of $\\frac{\\mu}{(1-\\mu)} \\frac{\\lmb}{r}$, which is weaker than the desired $r^{-3}$ decay for $\\partial_v\\log \\lmb$. 
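Indeed, recalling that $\\mu = \\frac{2m}{r}$ and that $m$ is bounded by the Bondi mass $M(u)$, while $\\lmb$ and $(1-\\mu)^{-1}$ remain bounded, we have, schematically,\n\\begin{equation*}\n\t\\abs{\\frac{\\mu}{(1-\\mu)} \\frac{\\lmb}{r}} = \\abs{\\frac{2m}{(1-\\mu) r^{2}} \\lmb} \\lesssim \\frac{M(u)}{r^{2}},\n\\end{equation*}\nand in general no better $r$ decay can be expected from this term. 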
Therefore, in order to obtain the optimal estimates everywhere in the spacetime, we need to use the variables $\\partial_v^2(r\\phi)$, $\\partial_u^2(r\\phi)$, $\\partial_v\\lmb$ and $\\partial_u\\nu$ together with their renormalized versions.\n\\\\\n\\fparagraph{Coupling of the incoming and outgoing parts}\n\nFinally, an additional challenge is that unlike the estimates for the first derivatives of the scalar field, the bounds for the incoming part of the solution $\\partial_v^2(r\\phi)$ and $\\partial_v\\lmb$ are coupled to those for the outgoing part $\\partial_u^2(r\\phi)$ and $\\partial_u\\nu$. Likewise, to control $\\partial_u^2(r\\phi)$, we need estimates for $\\partial_v^2(r\\phi)$ and $\\partial_v\\lmb$. For example, in the equation for $\\partial_{v} \\log \\lmb - \\frac{\\mu}{(1-\\mu)} \\frac{\\lmb}{r} + \\partial_{v} \\phi \\Big( \\lmb^{-1} \\partial_{v} (r \\phi) - \\nu^{-1} \\partial_{u} (r \\phi) \\Big)$, there is a term involving $\\partial_u^2(r\\phi)$ on the right hand side. In particular, in order to obtain the desired decay for $\\partial_v \\lmb$, we need to prove the decay for $\\partial_u^2(r\\phi)$ at the same time.\n\\\\\n\\fparagraph{Strategy for obtaining the decay estimates}\nWith the above difficulties in mind, we can now give a very rough sketch of the strategy of the proof.\n\n\\begin{enumerate}\n\\item [(1)] Bounds for $\\partial_v^2(r\\phi)$ and $\\partial_v\\lmb$ for large $r$: As in the case for the first derivatives, we first prove the optimal $r$ decay for $\\partial_v^2(r\\phi)$ and $\\partial_v\\lmb$ in the exterior region. To this end, we integrate the equations satisfied by the \\emph{non-renormalized} variables. We note that the error terms can all be bounded using the local BV scattering assumption and the decay estimates already proved for the first derivatives.\n\n\\item [(2)] Bounds for all second derivatives: Steps 2 and 3 for the decay bounds for the first derivatives are now coupled. 
Define \\begin{align*}\n\t\\mathcal B_{2}(U) := \\sup_{u \\in [1, U]} \\sup_{C_{u}} \\Big( & u^{\\omega} \\abs{\\partial_{v}^{2} (r \\phi)} +u^{\\omega} \\abs{\\partial_{u}^{2} (r \\phi)} \n\t\t\t\t\t+ u^{\\omega} \\abs{\\partial_{v} \\lmb} + u^{\\omega} \\abs{\\partial_{u} \\nu} \\Big).\n\\end{align*}\nWe then show that $\\mathcal B_{2}(U)$ can control the error terms arising from integrating the \\emph{renormalized} equations in the sense that we can obtain an inequality of the form\n$$\\abs{\\mbox{weighted renormalized variables}}\\leq C(u_2)+\\epsilon(u_2)\\mathcal B_{2}(U),$$\nwhere $\\epsilon(u_2)\\to 0$ as $u_2\\to \\infty$. We then prove that the renormalized variables in fact control all the weighted second derivatives in $\\mathcal B_2$. After choosing $u_2$ to be sufficiently large, we show that $\\mathcal B_{2}(U)$ is bounded independently of $U$ and thus all the second derivatives have $u^{-\\omega}$ decay.\n\\item [(3)] Optimal bounds in terms of $u$ decay: While we have obtained $u^{-\\omega}$ decay for the second derivatives, the decay rates are not the sharp rates claimed in the main theorem. To finally obtain the desired bounds, we integrate the equations of the \\emph{non-renormalized} variables and use the preliminary estimates obtained in (1) and (2) above. Here, we make use of the fact that the estimates obtained in step (2) above are sufficiently strong (both in terms of regularity and decay) to control the error terms in the non-renormalized equations.\n\\end{enumerate}\n\n\\section{Analytic properties of \\eqref{eq:SSESF}}\\label{sec.anal.prop}\nIn this section, we discuss the analytic properties of \\eqref{eq:SSESF}. These include scaling, monotonicity and the null structure of the system. 
All these features will play crucial roles in the analysis.\n\\subsection{Scaling}\nFor $a>0$, \\eqref{eq:SSESF} is invariant under the scaling of the coordinate system\n$$ u \\mapsto au,\\quad v\\mapsto av$$\ntogether with the scaling of the functions\n$$r \\mapsto ar,\\quad m\\mapsto am,\\quad \\Omega\\mapsto \\Omega,\\quad\\phi\\mapsto\\phi.$$\nThis in particular implies that the BV norms\n\\begin{equation*}\n\\int_u^{\\infty} |\\partial_v^2(r\\phi)(u,v')| \\mathrm{d} v'\n\\hbox{ and }\n\\int_u^{\\infty} |\\partial_v \\lambda(u,v')| \\mathrm{d} v'\n\\end{equation*}\nare scale invariant. Indeed, under the above scaling both $\\partial_v^2(r\\phi)$ and $\\partial_v \\lambda$ scale like $a^{-1}$, which exactly compensates the scaling of the measure $\\mathrm{d} v'$. Thus the a priori assumptions \\eqref{eq:locBVScat} are taken with respect to localized versions of scale invariant norms.\n\n\\subsection{Monotonicity properties} \\label{subsec:monotonicity}\nWe begin with basic monotonicity properties of $r$.\n\\begin{lemma}[Monotonicity of $r$] \\label{lem:mntn4r}\nLet $(\\phi, r, m)$ be a BV solution to \\eqref{eq:SSESF}. Then we have \n\\begin{equation*}\n\t\\nu < 0 \\hbox{ in } \\calQ,\n\\end{equation*}\nand\n\\begin{equation*}\n\t\\left\\{\n\t\\begin{aligned}\n\t\\lmb > 0 & \\hbox{ when } 1-\\mu > 0, \\\\\n\t\\lmb = 0 & \\hbox{ when } 1-\\mu = 0, \\\\\n\t\\lmb < 0 & \\hbox{ when } 1-\\mu < 0.\n\t\\end{aligned}\t\n\t\\right.\n\\end{equation*}\n\\end{lemma}\n\\begin{proof} \nThis was proved in \\cite[Propositions 1.1 and 1.2]{Christodoulou:1993bt}; we reproduce the proof for the reader's convenience. Note the equation\n\\begin{equation*}\n\t\\partial_{u} \\partial_{v} (r^{2}) = - \\frac{1}{2} \\Omega^{2},\n \\end{equation*}\nwhich easily follows from \\eqref{eq:SSESF}. As $\\partial_{u} r^{2} = 2 r \\partial_{u} r= 0$ on $\\Gamma$ and $r > 0$ on $\\calQ$, we easily see that $\\nu < 0$. Then from the definition of $1-\\mu$, the second conclusion also follows. 
\\qedhere\n\\end{proof}\n\nAccording to the sign of $\\lmb$, a general Penrose diagram $\\calQ$ is divided into three subregions as follows:\n\\begin{align*}\n\t\\calT := \\set{(u,v) \\in \\calQ : \\lmb < 0}, \\quad \\calA := \\set{(u,v) \\in \\calQ : \\lmb = 0}, \\quad \\calR := \\set{(u,v) \\in \\calQ : \\lmb > 0}.\n\\end{align*}\n\nThese are called the \\emph{trapped region}, \\emph{apparent horizon}, and \\emph{regular region}, respectively. The next lemma, which we borrow from \\cite{Christodoulou:1993bt}, shows that the solutions to \\eqref{eq:SSESF} considered in this paper consist only of the regular region $\\calR$. Therefore, extensive discussion of $\\calT$ and $\\calA$ will be suppressed.\n\n\\begin{lemma}[{\\cite[Proposition 1.4]{Christodoulou:1993bt}}] \\label{lem:regR}\nLet $(\\phi, r, m)$ be a BV solution to \\eqref{eq:SSESF}. Then the causal past of $\\Gamma$ in $\\calQ$ is contained in $\\calR$.\nIn particular, $\\calQ = \\calR$ if $(\\phi, r, m)$ satisfies the condition $(1)$ in Definition \\ref{def:locBVScat} (future completeness of radial null geodesics).\n\\end{lemma}\n\n\nNext, we turn to monotonicity properties of the Hawking mass $m$, which will play an important role in our paper. The following lemma is an obvious consequence of \\eqref{eq:SSESF:dm}.\n\n\\begin{lemma}[Monotonicity of $m$] \\label{lem:mntn4m}\nFor a BV solution $(\\phi, r, m)$ to \\eqref{eq:SSESF}, we have\n\\begin{equation*}\n\t\\partial_{v} m \\geq 0, \\quad \\partial_{u} m \\leq 0 \\hbox{ in } \\calR.\n\\end{equation*}\n\\end{lemma}\n\nBy the monotonicity $\\partial_{v} m \\geq 0$, the limit $M(u):=\\lim_{v \\to \\infty} m(u,v)$ exists (possibly $+\\infty$ at this point) for each $u$. This is called the \\emph{Bondi mass} at retarded time $u$. 
The following statement is an easy corollary of the preceding lemma.\n\n\\begin{corollary} [Monotonicity of the Bondi mass] \\label{cor:mntn4Bondi}\nLet $(\\phi, r, m)$ be a BV solution to \\eqref{eq:SSESF}, and suppose that $C_{u} \\subset \\calR$ for $u \\in [u_{1}, u_{2}]$. Then the Bondi mass $M(u)$ is a non-increasing function on $[u_{1}, u_{2}]$.\n\\end{corollary}\n\nThe following lemma shows that $M_{i} < \\infty$ for initial data sets considered in this paper. \n\\begin{lemma} \\label{lem:bnd4Mi}\nSuppose that $\\partial_{v}(r \\phi)(1, \\cdot)$ is asymptotically flat of order $\\omega' > 1$ in the sense of Definition \\ref{def:AF}. Then we have\n\\begin{equation} \\label{eq:bnd4Mi}\n\tM_{i} := \\lim_{v \\to \\infty} m(1, v) \\leq C \\mathcal I_{1}^{2}.\n\\end{equation}\n\\end{lemma}\n\nThis is an easy consequence of \\eqref{eq:SSESF:dm} and Lemma \\ref{lem:mntn4r}; we omit its proof.\nBy the preceding corollary, it follows that $M(u) < \\infty$ for each $u$.\n\nWe conclude this subsection with additional monotonicity properties of solutions to \\eqref{eq:SSESF}, useful for controlling the geometry of locally BV scattering solutions to \\eqref{eq:SSESF}.\n\n\\begin{lemma} \\label{lem:mntn4kpp}\n\tLet $(\\phi, r, m)$ be a BV solution to \\eqref{eq:SSESF}. For $(u,v) \\in \\calR$, we have\n\t\\begin{equation*}\n\t\t\\frac{\\lmb}{1-\\mu}(u,v) \\leq \\frac{\\lmb}{1-\\mu}(1, v),\n\t\\end{equation*}\n\tand\n\t$$\\partial_u\\lmb =\\partial_v\\nu \\leq 0.$$\n\t\\end{lemma}\n\n\\begin{proof} \nThe lemma follows from the formula\n\\begin{equation*}\n\t\\partial_{u} \\log \\abs{\\frac{\\lmb}{1-\\mu}} = - (- \\nu)^{-1} r (\\partial_{u} \\phi)^{2}\n\\end{equation*} \nand \\eqref{eq:SSESF:dr}. \\qedhere\n\\end{proof}\n\n\\subsection{Null structure of the evolution equations} \\label{subsec:nullStr}\nIn this subsection, we follow \\cite{Christodoulou:1993bt} and demonstrate that the evolution equations verify a form of null structure. 
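Schematically, a quadratic term is said to have a null structure if it pairs an incoming derivative with an outgoing one, as in\n\\begin{equation*}\n\t\\partial_{u} \\lmb \\, \\partial_{v} \\phi - \\partial_{v} \\lmb \\, \\partial_{u} \\phi,\n\\end{equation*}\nso that one factor may be estimated in $L^{1}$ and the other in $L^{\\infty}$; terms such as $(\\partial_{v} \\phi)^{2}$, which pair two derivatives in the same direction, do not have this structure. 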
In particular, the null structure occurs in the equations for the second derivatives of the scalar field and the metric. However, it is not apparent if we simply take the derivatives of the equations \\eqref{eq:SSESF:dr} and \\eqref{eq:SSESF:dphi}. Instead, we rewrite the equations in renormalized variables for which the null structure is manifest. We will perform this renormalization separately for the wave equations for $\\phi$ and for the equations for $\\lambda$ and $\\nu$.\n\n\\vspace{.1in}\n{\\it - The wave equation for $\\phi$.}\nTaking $\\partial_{v}$ of the equation \\eqref{eq:SSESF:dphi}, we obtain\n\\begin{equation*}\n\t\\partial_{u} (\\partial_{v}^{2} (r \\phi)) = \\partial_{v} (\\partial_{u} \\lmb \\, \\phi) = \\partial_{u} \\lmb \\, \\partial_{v} \\phi + (\\partial_{v} \\partial_{u} \\lmb) \\phi,\n\\end{equation*}\nor equivalently, after substituting in the first equation in \\eqref{eq:SSESF:dr},\n\\begin{equation} \\label{eq:eq4dvdvrphi:normal}\n\\partial_{u} (\\partial_{v}^{2} (r \\phi)) = \n\\frac{2m \\lmb \\nu}{(1-\\mu) r^{2}} \\, \\partial_{v} \\phi + \\frac{ \\nu}{(1-\\mu) } (\\partial_{v} \\phi)^{2} \\phi \n + \\frac{2m \\nu}{(1-\\mu) r^{2}} (\\partial_{v} \\lmb) \\phi - \\frac{4m}{(1-\\mu) r^{3}} \\lmb^{2} \\nu \\phi.\n\\end{equation}\n\nSome terms on the right hand side, such as $(1-\\mu)^{-1} \\nu (\\partial_{v} \\phi)^{2} \\phi$, do not exhibit null structure and are dangerous near $\\Gamma$. 
To tackle this, we rewrite\n\\begin{equation*}\n\t(\\partial_{v} \\partial_{u} \\lmb) \\phi = \\partial_{u} [(\\partial_{v} \\lmb) \\phi ] - \\partial_{v} \\lmb \\, \\partial_{u} \\phi.\n\\end{equation*}\n\nThus, from the first equation, we derive\n\\begin{equation} \\label{eq:eq4dvdvrphi}\n\t\\partial_{u} [\\partial_{v}^{2} (r \\phi) - (\\partial_{v} \\lmb) \\phi] = \\partial_{u} \\lmb \\, \\partial_{v} \\phi - \\partial_{v} \\lmb \\, \\partial_{u} \\phi.\n\\end{equation}\nBy switching $u$ and $v$, we obtain the following analogous equations in the conjugate direction.\n\\begin{equation} \\label{eq:eq4dudurphi:normal}\n\\partial_{v} (\\partial_{u}^{2} (r \\phi)) = \n\\frac{2m \\lmb \\nu}{(1-\\mu) r^{2}} \\, \\partial_{u} \\phi + \\frac{\\lmb }{(1-\\mu) } (\\partial_{u} \\phi)^{2} \\phi \n + \\frac{2m \\lmb}{(1-\\mu) r^{2}} (\\partial_{u} \\nu) \\phi - \\frac{4m}{(1-\\mu) r^{3}} \\lmb \\nu^{2} \\phi.\n\\end{equation}\n\n\\begin{equation} \\label{eq:eq4dudurphi}\n\t\\partial_{v} [\\partial_{u}^{2} (r \\phi) - (\\partial_{u} \\nu) \\phi] = \\partial_{v} \\nu \\, \\partial_{u} \\phi - \\partial_{u} \\nu \\, \\partial_{v} \\phi.\n\\end{equation}\n\n\n\n\\vspace{.1in}\n{\\it - The equations for $\\lmb$ and $\\nu$.}\nFrom \\eqref{eq:SSESF:dr}, we have\n\\begin{equation*}\n\t\\partial_{u} \\log \\lmb = \\frac{\\mu}{(1-\\mu) r} \\nu, \\quad \\partial_{v} \\log (-\\nu) = \\frac{\\mu}{(1-\\mu) r} \\lmb.\n\\end{equation*}\n\nTake $\\partial_{v}$, $\\partial_{u}$ of the first and second equations respectively. Using \\eqref{eq:SSESF:dr}, it is not difficult to verify that\n\\begin{align}\n\\partial_{u} \\partial_{v} \\log \\lmb\n=& \\frac{1}{(1-\\mu) } \\lmb^{-1} \\nu (\\partial_{v} \\phi)^{2} - \\frac{4m}{(1-\\mu) r^{3}} \\lmb \\nu, \\label{eq:eq4dvdvr:normal} \\\\\n\\partial_{v} \\partial_{u} \\log (-\\nu)\n=& \\frac{1}{(1-\\mu) } \\nu^{-1} \\lmb (\\partial_{u} \\phi)^{2} - \\frac{4m}{(1-\\mu) r^{3}} \\lmb \\nu. 
\\label{eq:eq4dudur:normal}\n\\end{align}\n\nTo reveal the null structure, we must carry out the renormalization as we have done for \\eqref{eq:eq4dvdvrphi}, \\eqref{eq:eq4dudurphi}. Following Christodoulou \\cite{Christodoulou:1993bt}, it is easy to check that the above two equations are equivalent to\n\\begin{equation} \\label{eq:eq4dvdvr}\n\\begin{aligned}\n& \\partial_{u} \\Big[ \\partial_{v} \\log \\lmb - \\frac{\\mu}{(1-\\mu)} \\frac{\\lmb}{r} + \\partial_{v} \\phi \\Big( \\lmb^{-1} \\partial_{v} (r \\phi) - \\nu^{-1} \\partial_{u} (r \\phi) \\Big) \\Big] \\\\\n& \\qquad = \\partial_{u} \\phi \\, \\partial_{v}\\Big( \\nu^{-1} \\partial_{u} (r \\phi) \\Big)- \\partial_{v} \\phi \\, \\partial_{u} \\Big( \\nu^{-1} \\partial_{u} (r \\phi) \\Big),\n\\end{aligned}\n\\end{equation}\nand the conjugate equation\n\\begin{equation} \\label{eq:eq4dudur}\n\\begin{aligned}\n& \\partial_{v} \\Big[ \\partial_{u} \\log (-\\nu) - \\frac{\\mu}{(1-\\mu)} \\frac{\\nu}{r} + \\partial_{u} \\phi \\Big( \\lmb^{-1} \\partial_{v} (r \\phi) - \\nu^{-1} \\partial_{u} (r \\phi) \\Big) \\Big] \\\\\n&\\qquad = - \\partial_{u} \\phi \\, \\partial_{v}\\Big( \\lmb^{-1} \\partial_{v} (r \\phi) \\Big) + \\partial_{v} \\phi \\, \\partial_{u} \\Big( \\lmb^{-1} \\partial_{v} (r \\phi) \\Big).\n\\end{aligned}\n\\end{equation}\n\n\n\\section{Basic estimates for locally BV scattering solutions} \\label{sec.geom}\nIn this section, we gather some basic estimates concerning locally BV scattering solutions. These estimates will apply, in particular, to solutions satisfying the hypotheses of Theorem \\ref{main.thm.1}.\n\n\\subsection{Integration lemmas for $\\phi$} \\label{subsec:est4phi}\nWe first derive some basic inequalities for $\\phi$, $\\lmb^{-1} \\partial_v(r\\phi)$ and $\\partial_{v} \\phi$. 
We remark that these are functional inequalities which hold under very general assumptions, and in particular do not rely on the locally BV scattering assumption.\n\n\\begin{lemma} \\label{lem:est4phi}\nLet $\\phi(u, \\cdot)$ and $r(u, \\cdot)$ be Lipschitz functions on $[u,v]$ with $\\lmb>0$ and $r(u, u) = 0$. \nThen the following inequality holds.\n\\begin{equation} \\label{eq:intEst4phi:1}\n\\abs{\\phi(u,v)} \\leq \\sup_{v' \\in [u, v]} \\Big\\vert \\frac{\\partial_{v}(r \\phi)}{\\lmb}(u, v') \\Big\\vert.\n\\end{equation}\n\nMore generally, for $u \\leq v_{1} \\leq v_{2}$, we have\n\\begin{equation} \\label{eq:intEst4phi:2}\n\\abs{r \\phi(u,v_{1}) - r \\phi(u, v_{2})} \\leq \\Big( r(u, v_{2}) - r(u, v_{1}) \\Big) \\sup_{v' \\in [v_{1}, v_{2}]} \\Big\\vert \\frac{\\partial_{v}(r \\phi)}{\\lmb}(u, v') \\Big\\vert.\n\\end{equation}\n\n\n\\end{lemma}\n\n\\begin{proof} \nWe shall prove \\eqref{eq:intEst4phi:2}, since \\eqref{eq:intEst4phi:1} then follows as a special case. Integrating $\\partial_{v} (r \\phi)(u, v')$ over $v' \\in [v_{1}, v_{2}]$, we get\n\\begin{align*}\n\t\\abs{r \\phi(u,v_{1}) - r \\phi (u, v_{2})} \n\t\\leq & \\int_{v_{1}}^{v_{2}} \\abs{\\partial_{v} (r \\phi)(u, v')} \\, \\mathrm{d} v' \\\\\n\t\\leq & \\sup_{v' \\in [v_{1}, v_{2}]} \\Big\\vert \\frac{\\partial_{v}(r \\phi)}{\\lmb} (u, v') \\Big\\vert \\, \\times \\int_{v_{1}}^{v_{2}} \\lmb(u, v') \\, \\mathrm{d} v' \\\\\n\t=& \\Big( r(u, v_{2}) - r(u, v_{1}) \\Big) \\sup_{v' \\in [v_{1}, v_{2}]} \\Big\\vert \\frac{\\partial_{v}(r \\phi)}{\\lmb} (u, v') \\Big\\vert . \\qedhere\n\\end{align*}\n\\end{proof}\n\n\\begin{lemma} \\label{lem:est4dvphi}\nLet $\\phi(u, \\cdot)$ and $r(u, \\cdot)$ be functions on $[u, v]$ such that $\\partial_v\\phi$ is integrable, $r$ is Lipschitz with $\\lambda>0$ and $r(u, u) = 0$. \nSuppose furthermore that $\\lmb^{-1} \\partial_{v} (r \\phi)(u, \\cdot)$ is BV on $[u, v]$. 
Then the following statements hold.\n\\begin{enumerate}\n\\item We have\n\\begin{equation} \\label{eq:est4dvphi:2}\n\\int_{u}^{v} \\abs{\\partial_{v} \\phi(u,v')} \\, \\mathrm{d} v' \\leq \\int_{u}^{v}\\abs{\\partial_{v}(\\lmb^{-1} \\partial_{v} ( r \\phi))(u,v')} \\, \\mathrm{d} v'.\n\\end{equation}\n\n\\item Suppose, in addition, that $\\lmb^{-1} \\partial_{v}(r \\phi)(u, \\cdot)$ is Lipschitz on $[u,v]$. Then we have\n\\begin{equation} \\label{eq:est4dvphi:1}\n\\abs{\\partial_{v} \\phi(u,v)} \\leq \\frac{1}{2} \\frac{\\sup_{v' \\in [u,v]} \\lmb(u, v')}{\\inf_{v' \\in [u,v]} \\lmb(u, v')} \\sup_{v' \\in [u, v]} \\abs{\\partial_{v} (\\lmb^{-1} \\partial_{v} ( r \\phi))(u, v')}.\n\\end{equation}\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof} \nWe proceed formally to compute\n\\begin{align*}\n\t\\partial_{v} \\phi(u,v) \n\t=& \\frac{\\lmb}{r} \\Big( \\lmb^{-1} \\partial_{v} ( r \\phi) - \\phi\\Big) (u,v) \\\\\n\t=& \\frac{\\lmb}{r^{2}} (u, v) \\int_{u}^{v} \\Big( \\int_{v'}^{v} \\partial_{v} (\\lmb^{-1} \\partial_{v} ( r \\phi)) (u, v'') \\, \\mathrm{d} v'' \\Big) \\lmb(u, v')\\, \\mathrm{d} v' \\\\\n\t=& \\frac{\\lmb}{r^{2}} (u,v) \\int_{u}^{v} r(u, v'') \\partial_{v} (\\lmb^{-1} \\partial_{v} ( r \\phi)) (u, v'') \\, \\mathrm{d} v''.\n\\end{align*}\n\nThe above computation is justified thanks to the hypotheses, where we interpret \n\\begin{equation*}\n\t\\partial_{v} (\\lmb^{-1} \\partial_{v} ( r \\phi)) (u, v'') \\, \\mathrm{d} v''\n\\end{equation*}\nto be the weak derivative of $\\lmb^{-1} \\partial_{v} ( r \\phi)$, which is a finite signed measure. For a fixed $(u, v)$, observe that\n\\begin{equation*}\n\t\\sup_{v'' \\in [u, v]} r(u, v'') \\int_{v''}^{v} \\frac{\\lmb(u,v')}{r^{2}(u,v')} \\, \\mathrm{d} v' \\leq 1.\n\\end{equation*}\n\nThis proves \\eqref{eq:est4dvphi:2}. 
For \\eqref{eq:est4dvphi:1}, note that the function $\\lmb^{-1} \\partial_{v} (r \\phi)$ is absolutely continuous on $[u,v]$, so $\\partial_{v}(\\lmb^{-1} \\partial_{v} ( r \\phi)(u, \\cdot))$ exists almost everywhere on $[u, v]$; moreover, it belongs to $L^{\\infty}$ by the Lipschitz assumption. Noting that\n\\begin{equation*}\n\t\\sup_{v' \\in [u, v]} \\frac{\\lmb(u,v')}{r^{2}(u,v')} \\int_{u}^{v'} r(u, v'') \\, \\mathrm{d} v'' \\leq \\frac{1}{2} \\frac{\\sup_{v' \\in [u,v]} \\lmb(u, v')}{\\inf_{v' \\in [u,v]} \\lmb(u, v')}\n\\end{equation*}\nwe obtain \\eqref{eq:est4dvphi:1}.\n\\end{proof}\n\n\\subsection{Geometry of locally BV scattering solutions}\nThe goal of this subsection is to prove the following proposition.\n\\begin{proposition} \\label{prop:geomLocBVScat}\nLet $(\\phi, r, m)$ be a locally BV scattering solution to \\eqref{eq:SSESF} as in Definition \\ref{def:locBVScat}. Assume furthermore that on the initial slice $C_{1}$, we have $\\lmb(1, \\cdot) = \\frac{1}{2}$ and\n\\begin{equation*}\n\t\\sup_{C_{1}} \\abs{\\partial_{v}(r \\phi)} + M_{i} < \\infty.\n\\end{equation*}\n\t\nThen there exist finite constants $K, \\Lambda > 0$ such that the following bounds hold for all $(u, v) \\in \\calQ$:\n\\begin{gather}\n\t\\Lambda^{-1} \\leq \\lmb(u,v) \\leq \\frac{1}{2} \\label{eq:bnd4dvr} \\\\\n\t\\Lambda^{-1} \\leq - \\nu(u,v) \\leq K \\label{eq:bnd4dur} \\\\\n\t1 \\leq (1-\\mu(u,v))^{-1} \\leq K \\Lambda. \\label{eq:bnd4mu}\\\\\n\t0 < \\frac{- \\nu}{1-\\mu(u,v)} \\leq K. \\label{eq:bnd4conjKpp}\n\\end{gather}\n\nMoreover, there exists a finite constant $\\Psi > 0$ such that for all $(u,v) \\in \\calQ$, we have\n\\begin{gather}\n\t\\abs{\\partial_{v}(r \\phi)(u,v)} \\leq \\Psi, \\label{eq:bnd4dvrphi} \\\\\n\t\\abs{\\phi(u,v)} \\leq \\Lambda \\Psi. 
\\label{eq:bnd4phi}\n\\end{gather}\n\\end{proposition}\n\n{\\bf Once we have this proposition, we will denote by $\\Lambda$, $K$ and $\\Psi$ the best constants such that \\eqref{eq:bnd4dvr}-\\eqref{eq:bnd4phi} hold.}\n\nBy Lemma \\ref{lem:regR}, we already know that $\\lmb > 0$, $- \\nu > 0$ and $(1-\\mu)^{-1} < \\infty$. The first three bounds, namely \\eqref{eq:bnd4dvr}--\\eqref{eq:bnd4mu}, ensure that these quantities concerning the geometry of the spacetime do not degenerate anywhere, in particular along the axis $\\Gamma$. They will be very useful in the analysis in later sections of the paper. \n\nThe proof of Proposition \\ref{prop:geomLocBVScat} will consist of several steps. We begin with elementary bounds for $\\lmb$ and $\\nu$.\n\n\\begin{lemma} \\label{lem:basicEst4dr}\nLet $(\\phi, r, m)$ be a BV solution to \\eqref{eq:SSESF} with $\\calQ = \\calR$. Then for every $(u,v) \\in \\calQ$, we have\n\\begin{align}\n\t\\lmb(u,v) \\leq& \\, \\lmb(1, v), \\label{eq:basicEst4dr:1} \\\\\n\t\\lmb^{-1}(u,v) \\leq& \\, \\lim_{u' \\to v-} \\lmb^{-1}(u',v), \\label{eq:basicEst4dr:2} \\\\\n\t\\nu(u,v) \\leq& - \\lim_{v' \\to u+} \\lmb(u, v'). \\label{eq:basicEst4dr:3}\n\\end{align}\n\\end{lemma}\n\n\\begin{proof}\n By \\eqref{eq:SSESF:dr}, we have\n\\begin{equation*}\n\\begin{aligned}\n\t\\lmb(u,v) =& \\lmb(1, v) \\exp \\Big( \\int_{1}^{u} \\Big( \\frac{2m}{(1-\\mu)r^{2}} \\nu \\Big) (u', v) \\, \\mathrm{d} u' \\Big), \\\\\n\t\\lmb^{-1}(u,v) =& \\lim_{u' \\to v-} \\lmb(u', v)^{-1} \\exp \\Big( \\int_{u}^{v} \\Big( \\frac{2m}{(1-\\mu)r^{2}} \\nu \\Big) (u', v) \\, \\mathrm{d} u' \\Big), \\\\\n\t\\nu(u,v) =& \\lim_{v' \\to u+} \\nu(u, v') \\exp \\Big( \\int_{u}^{v} \\Big( \\frac{2m}{(1-\\mu)r^{2}} \\lmb \\Big) (u, v') \\, \\mathrm{d} v' \\Big).\n\\end{aligned}\n\\end{equation*}\n\nSince $-\\nu, (1-\\mu) > 0$ everywhere, \\eqref{eq:basicEst4dr:1} and \\eqref{eq:basicEst4dr:2} follow. 
Moreover, since \n\\begin{equation*}\n\\lim_{v' \\to u+} \\nu(u,v') = - \\lim_{v' \\to u+}\\lmb(u,v'),\n\\end{equation*}\nand $\\lmb > 0$ on $\\calQ$, \\eqref{eq:basicEst4dr:3} follows as well. \\qedhere\n\\end{proof}\n\nBy Lemma \\ref{lem:regR}, $\\calQ = \\calR$ holds for a solution to \\eqref{eq:SSESF} satisfying the hypotheses of Proposition \\ref{prop:geomLocBVScat}. As an immediate corollary, we have the following easy upper bound for $\\lmb$.\n\\begin{corollary} \\label{cor:est4dr}\nLet $(\\phi, r, m)$ be a solution to \\eqref{eq:SSESF} satisfying the hypotheses of Proposition \\ref{prop:geomLocBVScat}. Then by the coordinate condition $\\lmb(1, v) = \\frac{1}{2}$ and \\eqref{eq:basicEst4dr:1}, we have\n\\begin{equation*}\n\t\\sup_{\\calQ} \\lmb \\leq \\frac{1}{2} \\, .\n\\end{equation*}\n\\end{corollary}\n\nNext, we proceed to prove the lower bounds of \\eqref{eq:bnd4dvr} and \\eqref{eq:bnd4dur}. We begin with a technical lemma concerning a large-$r$ region, which will also be useful in our proof of \\eqref{eq:bnd4dvrphi} and \\eqref{eq:bnd4phi}.\n\n\\begin{lemma} \\label{lem:babySmllPtnl:1}\nLet $(\\phi, r, m)$ be a solution to \\eqref{eq:SSESF} satisfying the hypotheses of Proposition \\ref{prop:geomLocBVScat}. Then for arbitrarily small $\\epsilon > 0$, there exists $r_{0} > 1$ such that\n\\begin{align} \n\t\\sup_{(u,v) \\in \\set{r \\geq r_{0}}} \\int_{1}^{u} \\abs{\\frac{\\mu}{(1-\\mu)} \\frac{\\nu}{r} (u', v)} \\, \\mathrm{d} u' <& \\epsilon \\, . \\label{eq:babySmllPtnl:1}\n\\end{align}\n\\end{lemma}\n\\begin{proof} \nFor $(u,v) \\in \\set{r \\geq r_{0}}$, we begin by simply estimating as follows:\n\\begin{equation*}\n\t\\abs{\\frac{\\mu}{(1-\\mu)} \\frac{\\nu}{r}} \\leq \\frac{2 M_{i}}{(1-\\frac{2M_{i}}{r_{0}})} \\frac{(- \\nu)}{r^{2}}\n\\end{equation*}\n\nThe above inequality holds as long as\\footnote{Indeed, it suffices to choose $r_0>2M_i$ here.
The condition $r_0> R$ will be used in the proof of Lemma \\ref{lem:babySmllPtnl:2}.} we choose $r_{0} > \\max \\set{2 M_{i}, R}$. Note that if $(u, v) \\in \\set{r \\geq r_{0}}$, then the null curve $\\set{(u', v) : u' \\in [1, u]}$ from the initial slice $C_{1}$ to $(u,v)$ lies entirely in $\\set{r \\geq r_{0}}$. Integrating along this curve, we obtain for $(u, v) \\in \\set{r \\geq r_{0}}$\n\\begin{equation*}\n\t\\int_{1}^{u} \\abs{\\frac{\\mu}{(1-\\mu)} \\frac{\\nu}{r}(u',v)} \\, \\mathrm{d} u' < \\frac{2 M_{i}}{(1-\\frac{2M_{i}}{r_{0}})} \\frac{1}{r_{0}}\n\\end{equation*}\n\nTaking $r_{0}$ sufficiently large, \\eqref{eq:babySmllPtnl:1} follows. \\qedhere\n\n\\end{proof}\n\n\nNext, we prove an analogous result in a large $u$ region. Key to its proof will be the identity \\eqref{eq:babySmllPtnl:pf:0} below, which will also be used to relate \\eqref{eq:babySmllPtnl:1} and \\eqref{eq:babySmllPtnl:2} to the desired lower bounds of $\\lmb$ and $-\\nu$.\n\n\\begin{lemma} \\label{lem:babySmllPtnl:2}\nLet $(\\phi, r, m)$ be a solution to \\eqref{eq:SSESF} satisfying the hypotheses of Proposition \\ref{prop:geomLocBVScat}. Then for arbitrarily small $\\epsilon > 0$, there exists $U > 1$ such that\n\\begin{align} \n\t\\sup_{v \\geq U} \\int_{U}^{v} \\abs{\\frac{\\mu}{1-\\mu} \\frac{\\nu}{r} (u', v)} \\, \\mathrm{d} u' <& \\epsilon \\, . \\label{eq:babySmllPtnl:2}\n\\end{align}\n\\end{lemma}\n\n\\begin{proof} \nLet $\\epsilon > 0$ be an arbitrary positive number. 
Using \\eqref{eq:SSESF:dr} and the fact that $1-\\mu > 0, -\\nu > 0$ on $\\calQ$, we have for any $1 \\leq u_{1} \\leq u_{2} < v$,\n\\begin{equation} \\label{eq:babySmllPtnl:pf:0}\n\t\\int_{u_{1}}^{u_{2}} \\abs{\\frac{\\mu}{1-\\mu} \\frac{\\nu}{r} (u', v)} \\, \\mathrm{d} u' = \\log \\lmb(u_{1}, v) - \\log \\lmb(u_{2}, v).\n\\end{equation}\n\nIn order to prove \\eqref{eq:babySmllPtnl:2}, it therefore suffices to exhibit $U > 1$ such that \n\\begin{equation} \\label{eq:babySmllPtnl:pf:1}\n\t\\sup_{(u, v), (u', v') \\in \\set{u \\geq U}} \\abs{\\log \\lmb(u, v) - \\log \\lmb(u', v')} < \\epsilon.\n\\end{equation} \n\nIn order to proceed, we divide $\\calQ$ into three regions: $\\PD_{\\mathrm{cpt}} := \\set{r \\leq R}$, $\\calQ_{[R, r_{0}]} := \\set{R \\leq r \\leq r_{0}}$ and $\\calQ_{[r_{0}, \\infty)} := \\set{r \\geq r_{0}}$, where $r_{0} > \\max\\{2M_i,R\\}$ is chosen via Lemma \\ref{lem:babySmllPtnl:1} so that\n\\begin{equation*} \n\t\\sup_{(u,v) \\in \\calQ_{[r_{0}, \\infty)}}\\int_{1}^{u} \\abs{\\frac{\\mu}{1-\\mu} \\frac{\\nu}{r}(u',v)} \\, \\mathrm{d} u' < \\frac{\\epsilon}{8}.\n\\end{equation*}\n\nUsing \\eqref{eq:babySmllPtnl:pf:0} and the fact that $\\log \\lmb(1, v) = \\frac{1}{2}$, the preceding inequality is equivalent to\n\\begin{equation} \\label{eq:babySmllPtnl:pf:2}\n\t\\sup_{(u,v) \\in \\calQ_{[r_{0}, \\infty)} } \\abs{\\log \\lmb(u,v) - \\frac{1}{2}} < \\frac{\\epsilon}{8}.\n\\end{equation}\n\nNext, we turn to the region $\\calQ_{[R, r_{0}]}$; here we exploit the vanishing of the final Bondi mass. Indeed, taking $U_{1}$ large enough so that $2 M(U_{1}) < R$, we may estimate\n\\begin{equation*}\n\t\\abs{\\frac{\\mu}{1-\\mu} \\frac{\\nu}{r}} \\leq \\frac{2 M(U_{1})}{(1-\\frac{2 M(U_{1})}{R}) R^{2}} (-\\nu)\\quad\\mbox{for }u\\geq U_1.\n\\end{equation*}\n\nConsider now the time-like curve given by $\\gamma_{0} := \\set{(u',v') : r(u', v') = r_{0}}$. On $\\gamma_{0} \\cap \\set{(u,v) : u \\geq U_{1}}$, note that \\eqref{eq:babySmllPtnl:pf:2} holds. 
Integrating the preceding inequality along incoming null curves emanating from $\\gamma_{0} \\cap \\set{(u,v) : u \\geq U_{1}}$, we obtain for $(u, v) \\in \\calQ_{[R, r_{0}]} \\cap \\set{(u,v) : u \\geq U_{2}}$\n\\begin{equation*} \n\t\\abs{\\log \\lmb(u,v) - \\log \\frac{1}{2}} < \\frac{\\epsilon}{8} + \\frac{2 M(U_{1}) (r_{0} - R)}{(1- \\frac{2 M(U_{1})}{R}) R^{2}} \\, ,\n\\end{equation*}\nwhere $U_{2} = U_{2}(U_{1}, r_{0})$ is the future endpoint of the incoming null curve in $\\calQ_{[R, r_{0}]}$ from the past endpoint of $\\gamma_{0} \\cap \\set{(u,v) : u \\geq U_{1}}$; more precisely, $U_{2} = \\sup \\set{u : r(u, V_{1}) \\geq R}$, where $V_{1}$ is defined by $r(U_{1}, V_{1}) = r_{0}$. Choosing $U_{1}$ sufficiently large, we then obtain\n\\begin{equation} \\label{eq:babySmllPtnl:pf:3}\n\t\\sup_{(u, v) \\in \\calQ_{[R, r_{0}]} \\cap \\set{u \\geq U_{2}}} \\abs{\\log \\lmb(u,v) - \\log \\frac{1}{2}} < \\frac{\\epsilon}{4}.\n\\end{equation}\n\nFinally, in $\\PD_{\\mathrm{cpt}}$, we use the local BV scattering condition \\eqref{eq:locBVScat} to choose $U \\geq U_{2}$ large enough so that we have\n\\begin{equation} \\label{eq:babySmllPtnl:pf:4}\n\t\\sup_{(u, v), (u, v') \\in \\PD_{\\mathrm{cpt}} \\cap \\set{u \\geq U}}\\abs{\\log \\lmb (u, v) - \\log \\lmb(u, v')} < \\frac{\\epsilon}{4}.\n\\end{equation}\n\nTo compare $\\log \\lmb(u, v)$ and $\\log \\lmb(u', v')$ with $u \\neq u'$, we use \\eqref{eq:babySmllPtnl:pf:3}, \\eqref{eq:babySmllPtnl:pf:4} and the triangle inequality. Thus, the desired conclusion \\eqref{eq:babySmllPtnl:pf:1} follows. \\qedhere\n\\end{proof}\n\nAs a corollary of the preceding lemmas and \\eqref{eq:babySmllPtnl:pf:0} (or, more directly, \\eqref{eq:babySmllPtnl:pf:1} and \\eqref{eq:babySmllPtnl:pf:2}), we immediately see that $\\lmb$ and $-\\nu$ are uniformly bounded away from zero.\n\\begin{corollary} \\label{cor:lowerBnd4dvr}\nLet $(\\phi, r, m)$ be a solution to \\eqref{eq:SSESF} satisfying the hypotheses of Proposition \\ref{prop:geomLocBVScat}.
Then there exists $0 < \\Lambda < \\infty$ such that for all $(u, v) \\in \\calQ$, we have\n\\begin{equation*}\n\t\\Lambda^{-1} \\leq \\lmb(u, v), \\quad\n\t\\Lambda^{-1} \\leq - \\nu(u,v).\n\\end{equation*}\n\\end{corollary}\n\nTogether with Corollary \\ref{cor:est4dr}, this concludes the proof of \\eqref{eq:bnd4dvr}. Next, using Lemmas \\ref{lem:est4phi}, \\ref{lem:babySmllPtnl:1}, \\ref{lem:babySmllPtnl:2} and the wave equation \\eqref{eq:SSESF:dphi} for $\\phi$, we prove \\eqref{eq:bnd4dvrphi}, \\eqref{eq:bnd4phi} in the following lemma.\n\n\\begin{lemma} \\label{lem:bnd4dvrphiphi}\nLet $(\\phi, r, m)$ be a solution to \\eqref{eq:SSESF} satisfying the hypotheses of Proposition \\ref{prop:geomLocBVScat}. Then there exists a constant $0 < \\Psi < \\infty$ such that\n\\begin{equation} \\label{eq:bnd4dvrphiphi}\n\t\\sup_{\\calQ} \\abs{\\partial_{v}(r \\phi)} \\leq \\Psi, \\quad \n\t\\sup_{\\calQ} \\abs{\\phi} \\leq \\Lambda \\Psi,\n\\end{equation}\nwhere $\\Lambda$ is the best constant such that Corollary \\ref{cor:lowerBnd4dvr} holds.\n\\end{lemma}\n\\begin{proof} \nNote that the second inequality of \\eqref{eq:bnd4dvrphiphi} is an immediate consequence of the first inequality, Lemma \\ref{lem:est4phi} and Corollary \\ref{cor:lowerBnd4dvr}. The proof of the first inequality will proceed in two steps: First, we shall show that $\\partial_{v}(r \\phi)$ is uniformly bounded on the large $r$ region, essentially via Lemma \\ref{lem:babySmllPtnl:1}. By compactness, it immediately follows that $\\partial_{v}(r \\phi)$ is uniformly bounded on the finite $u$ region. 
Then in the second step, we shall show that $\\partial_{v}(r \\phi)$ is uniformly bounded on a large $u$ region as well using Lemma \\ref{lem:babySmllPtnl:2}.\n\nBy Lemma \\ref{lem:babySmllPtnl:1}, choose $r_{0} > 0$ so that \n\\begin{equation} \\label{eq:bdd4dvrphiphi:pf:1}\n\t\\sup_{(u,v) \\in \\set{r \\geq r_{0}}} \\int_{1}^{u} \\abs{\\frac{\\mu}{1-\\mu} \\frac{\\nu}{r} (u', v)} \\, \\mathrm{d} u' < \\frac{1}{10 \\Lambda}.\n\\end{equation}\n\nWe also borrow the notation $\\calQ_{[r_{0}, \\infty)} := \\set{(u,v) : r(u,v) \\geq r_{0}}$ from the proof of Lemma \\ref{lem:babySmllPtnl:2}. Given $U \\geq 1$, define $\\Psi_{[r_{0}, \\infty)}(U)$ to be\n\\begin{equation*}\n\t\\Psi_{[r_{0}, \\infty)}(U) := \\sup_{(u, v) \\in \\calQ_{[r_{0}, \\infty)} \\cap \\set{1 \\leq u \\leq U}} \\abs{\\partial_{v} (r \\phi)(u, v)}. \n\\end{equation*}\n\nLet $(u,v) \\in \\calQ_{[r_{0}, \\infty)}$. Using \\eqref{eq:SSESF:dphi}, we then write\n\\begin{align*}\n\t\\partial_{u} \\partial_{v} ( r \\phi)\n\t= & \\frac{\\mu}{1-\\mu} \\frac{\\nu}{r} \\Big( \\frac{\\lmb}{r} (r \\phi - r_{0} \\phi_{r_{0}}) + \\frac{\\lmb}{r} r_{0} \\phi_{r_{0}} \\Big).\n\\end{align*} \n\nHere, $\\phi_{r_{0}}(u,v) := \\phi(u, v^{\\star}_{0}(u))$, where $v^{\\star}_{0}(u)$ is the unique $v$-value for which $r(u, v^{\\star}_{0}(u)) = r_{0}$. Note that the outgoing null curve from $(u, v^{\\star}_{0}(u))$ to $(u,v) \\in \\calQ_{[r_{0}, \\infty)}$ lies entirely in $\\calQ_{[r_{0}, \\infty)}$. 
Thus, by Lemma \\ref{lem:est4phi} and \\eqref{eq:bnd4dvr}, we see that for $(u, v) \\in \\calQ_{[r_{0}, \\infty)}$ with $1 \\leq u \\leq U$, \n\\begin{align*}\n\t\\abs{\\partial_{u} \\partial_{v} ( r \\phi)} \n\t\\leq & \\abs{\\frac{\\mu}{1-\\mu} \\frac{\\nu}{r}} \\Big( \\frac{(r - r_{0})}{2 r} \\Lambda \\Psi_{[r_{0}, \\infty)}(U) + \\frac{r_{0}}{2 r} \\abs{\\phi_{r_{0}}} \\Big) \\\\\n\t\\leq & \\abs{\\frac{\\mu}{1-\\mu} \\frac{\\nu}{r}} \\Big( \\Lambda \\Psi_{[r_{0}, \\infty)}(U) + \\abs{\\phi_{r_{0}}} \\Big).\n\\end{align*}\n\nIntegrating this equation over the incoming null curve from $(1, v)$ to $(u, v)$ (which lies in $\\calQ_{[r_{0}, \\infty)} \\cap \\set{1 \\leq u \\leq U}$) and using Lemma \\ref{lem:babySmllPtnl:1}, we then obtain\n\\begin{align*}\n\t\\Psi_{[r_{0}, \\infty)}(U)\t\\leq \\sup_{C_{1} \\cap \\calQ_{[r_{0}, \\infty)}} \\abs{\\partial_{v} (r \\phi)} + \\frac{1}{10} \\Psi_{[r_{0}, \\infty)}(U) + \\frac{1}{10 \\Lambda} \\sup_{\\gamma_{0} \\cap \\set{1 \\leq u \\leq U}} \\abs{\\phi}\n\\end{align*}\nwhere $\\gamma_{0}$ is the time-like curve $\\set{(u,v) : r(u,v) = r_{0}}$. Note that the first term on the right-hand side is finite by the assumptions on the initial data, whereas the last term is finite for every $1 \\leq U < \\infty$ by compactness of $\\gamma_{0} \\cap \\set{(u,v) : 1 \\leq u \\leq U}$ and continuity of $\\phi$. Then, by a simple continuity argument, it follows that $\\Psi_{[r_{0}, \\infty)}(U) < \\infty$ for every $1 \\leq U < \\infty$. Moreover, by compactness of $\\set{(u, v) : r(u,v) \\leq r_{0}, \\, 1 \\leq u \\leq U}$, as well as the uniform BV assumption on $\\partial_{v}(r \\phi)$, we also have\n\\begin{equation*}\n\t\\Psi_{[0, \\infty)}(U) := \\sup_{(u,v) \\in \\set{1 \\leq u \\leq U}} \\abs{\\partial_{v}(r \\phi)(u,v)} < \\infty.\n\\end{equation*}\n\nWe now proceed to deal with the large-$u$ region, namely $\\set{(u,v) : u \\geq U}$. 
Using Lemma \\ref{lem:babySmllPtnl:2}, we choose $U_{0} \\geq 1$ sufficiently large so that\n\\begin{equation}\n\t\\sup_{v \\geq U_{0}} \\int_{U_{0}}^{v} \\abs{\\frac{\\mu}{1-\\mu} \\frac{\\nu}{r} (u', v)} \\, \\mathrm{d} u' < \\frac{1}{10 \\Lambda}.\n\\end{equation}\n\nProceeding as before via Lemma \\ref{lem:est4phi}, we estimate for $(u,v) \\in \\set{(u,v) : u \\geq U_{0}}$ \n\\begin{align*}\n\t\\abs{\\partial_{u} \\partial_{v} (r \\phi)(u,v)} \\leq \\abs{\\frac{\\mu}{1-\\mu} \\frac{\\nu}{r}} \\, \\Lambda \\sup_{v' \\in [u, v]} \\abs{\\partial_{v}(r \\phi)(u, v')} .\n\\end{align*}\n\nIntegrating along incoming null curves from $C_{U_{0}}$, we see that\n\\begin{equation*}\n\t\\Psi_{[0, \\infty)}(U) \\leq \\Psi_{[0, \\infty)}(U_{0}) + \\frac{1}{10} \\Psi_{[0, \\infty)}(U)\n\\end{equation*}\nfor any $U \\geq U_{0}$. Absorbing the second term on the right-hand side into the left-hand side and taking $U \\to \\infty$, we obtain \\eqref{eq:bnd4dvrphiphi} with $\\Psi \\leq \\frac{10}{9} \\Psi_{[0, \\infty)}(U_{0}) < \\infty$. \\qedhere\n\\end{proof}\n\nWe are finally ready to conclude the proof of Proposition \\ref{prop:geomLocBVScat}, by proving \\eqref{eq:bnd4conjKpp}. Indeed, the upper bounds in \\eqref{eq:bnd4dur} and \\eqref{eq:bnd4mu} would then follow immediately. Moreover, the lower bound in \\eqref{eq:bnd4mu} is trivial, as $\\mu = \\frac{2m}{r} \\geq 0$.\n\n\\begin{lemma} \\label{lem:est4dur}\nLet $(\\phi, r, m)$ be a solution to \\eqref{eq:SSESF} satisfying the hypotheses of Proposition \\ref{prop:geomLocBVScat}. 
Then there exists a finite constant $K > 0$ such that for all $(u,v) \\in \\calQ$,\n\\begin{equation} \\label{eq:est4dur:key}\t\n \\frac{- \\nu}{1-\\mu} (u,v) \\leq K.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof} \nTo prove \\eqref{eq:est4dur:key}, we shall rely on the equation\n\\begin{equation} \\label{eq:mntn4durOver1-mu}\n\t\\partial_{v} \\log \\Big( \\frac{-\\nu}{1-\\mu} \\Big) = \\lmb^{-1} r (\\partial_{v} \\phi)^{2},\n\\end{equation}\nwhich may be easily derived from \\eqref{eq:SSESF:dr} and \\eqref{eq:SSESF:dm}.\n\nFor $(u, v) \\in \\calQ$, we begin by integrating \\eqref{eq:mntn4durOver1-mu} on the outgoing null curve from $(u,u) \\in \\Gamma$ to $(u,v)$, which gives\n\\begin{equation*}\n\t\\Big( \\frac{-\\nu}{1-\\mu} \\Big)(u,v) \\leq \\Big( \\lim_{v' \\to u+} \\Big( \\frac{-\\nu}{1-\\mu} \\Big) (u, v') \\Big) \\exp \\Big( \\int_{u}^{v} \\lmb^{-1} r (\\partial_{v} \\phi)^{2} (u, v') \\, \\mathrm{d} v' \\Big).\n\\end{equation*}\n\nWe claim that $\\lim_{v' \\to u+} (- \\nu) (u, v') = \\lim_{v' \\to u+} \\lmb (u, v')\\leq \\frac 12$ and $\\lim_{v' \\to u+} \\mu(u,v') = 0$. The first assertion is obvious. To prove the second one, we first use \\eqref{eq:SSESF:dm} to write\n\\begin{equation*}\nm(u,v) \\leq \\tfrac{1}{2} \\Big( \\sup_{v' \\in [u, v]} \\abs{r^{2} \\partial_{v} \\phi}(u, v') \\Big) \\int_{u}^{v} \\abs{\\partial_{v} \\phi(u, v')} \\, \\mathrm{d} v'.\n\\end{equation*}\nNow observe that $\\sup_{v' \\in [u, v]} \\abs{r^{2} \\partial_{v} \\phi}(u, v') \\leq C r(u,v) \\sup_{v' \\in [u,v]} \\abs{\\partial_{v}(r \\phi)}$, and the remaining integral goes to $0$ as $v \\to u+$, since $\\phi$ is assumed to be absolutely continuous on $C_{u}$ near the axis by Definition \\ref{def:BVsolution}. 
\n\nBy the above claim, we have\n\\begin{equation*}\n\t\\Big( \\frac{-\\nu}{1-\\mu} \\Big)(u,v) \\leq \\frac{1}{2} \\exp \\Big( \\int_{u}^{v} \\lmb^{-1} r (\\partial_{v} \\phi)^{2} (u, v') \\, \\mathrm{d} v' \\Big).\n\\end{equation*}\n\nThe lemma would therefore follow if we could prove\n\\begin{equation*}\n\\sup_{(u,v) \\in \\calQ} \\int_{u}^{v} \\lmb^{-1} r (\\partial_{v} \\phi)^{2} (u, v') \\, \\mathrm{d} v' < \\infty.\n\\end{equation*}\n\nTo achieve this, we shall divide the integral into two parts, one in $\\PD_{\\mathrm{cpt}}$ and the other in its complement $\\PD_{\\mathrm{cpt}}^c$. Indeed, defining $v^{\\star}(u)$ to be the unique $v$ value such that $r(u, v^{\\star}(u)) = R$, we divide the integral into $\\int_{u}^{v^{\\star}(u)}$ and $\\int_{v^{\\star}(u)}^{v}$. If $v < v^{\\star}(u)$, the latter integral will be taken to be zero.\n\nFor the first integral, let us begin by pulling out $\\lmb^{-1} r \\partial_{v} \\phi$ from the integral. Using the identity $\\lmb^{-1} r \\partial_{v} \\phi = \\lmb^{-1} \\partial_{v} (r \\phi) - \\phi$ we have\n\\begin{align*}\n&\\int_{u}^{v^{\\star}(u)} \\lmb^{-1} r (\\partial_{v} \\phi)^{2} (u, v') \\, \\mathrm{d} v'\\\\\n& \\quad \\leq \\sup_{v' \\in [u, v^{\\star}(u)]} \\Big( \\lmb^{-1} \\abs{\\partial_{v}(r\\phi)}(u, v') + \\abs{\\phi}(u, v') \\Big) \\int_{u}^{v^{\\star}(u)} \\abs{\\partial_{v} \\phi(u,v')} \\, \\mathrm{d} v'.\n\\end{align*}\n\nThen by Lemmas \\ref{lem:est4dvphi}, \\ref{lem:bnd4dvrphiphi} and the local BV scattering assumption, the right-hand side is uniformly bounded in $u$ from above, as desired. For the second integral, note that, by Lemma \\ref{lem:mntn4kpp} and Corollary \\ref{cor:lowerBnd4dvr}, we have\n\\begin{equation*}\n(1-\\mu)^{-1}(u,v) \\leq \\Lambda \\frac{\\lmb}{1-\\mu}(u,v) \\leq \\frac{\\Lambda}{2} \\sup_{C_{1}} (1-\\mu)^{-1}. 
\n\\end{equation*}\n\nNotice that the quantity $\\sup_{C_{1}}(1-\\mu)^{-1}$ for the initial data is finite, since $1-\\mu > 0$ everywhere and $1-\\mu(1, v) \\to 1$ as $v \\to \\infty$.\nMoreover, for $v \\geq v^{\\star}(u)$, we have $r(u, v) \\geq R$. Therefore, in view of \\eqref{eq:SSESF:dm}, we may estimate\n\\begin{align*}\n\t\\int_{v^{\\star}(u)}^{v} \\lmb^{-1} r (\\partial_{v} \\phi)^{2} \\, \\mathrm{d} v'\n\t\\leq & \\frac{\\Lambda}{R} \\sup_{C_{1}} (1-\\mu)^{-1} \\int_{v^{\\star}(u)}^{v} \\frac{1}{2} \\lmb^{-1} (1-\\mu) r^{2} (\\partial_{v} \\phi)^{2} (u, v') \\, \\mathrm{d} v' \\\\\n\t\\leq & \\frac{\\Lambda}{R} \\sup_{C_{1}} (1-\\mu)^{-1} (m(u, v) - m(u, v^{\\star}(u))) \\\\\n\t\\leq & C_{\\Lambda, R, M_{i}, \\sup_{C_{1}} (1-\\mu)^{-1}} < \\infty,\n\\end{align*}\nfrom which the lemma follows. \\qedhere\n\\end{proof}\n\nWe conclude this subsection with a pair of identities which are useful for estimating $\\int\\abs{\\partial_{u} \\lmb} \\, \\mathrm{d} u$ and $\\int \\abs{\\partial_{v} \\nu} \\, \\mathrm{d} v$ in terms of information on $\\phi$.\n\\begin{lemma} \\label{lem:auxEqs}\nFrom \\eqref{eq:SSESF}, the following identities hold:\n\\begin{align}\n\t\\int_{u}^{v} \\frac{\\mu}{1-\\mu} \\frac{\\lmb}{r} (u,v') \\, \\mathrm{d} v' = & \\log(1-\\mu)(u,v) + \\int_{u}^{v} \\lmb^{-1} r (\\partial_{v} \\phi)^{2} (u, v') \\, \\mathrm{d} v', \\label{eq:auxEqs:1} \\\\\n\t\\int_{u}^{v} \\frac{\\mu}{1-\\mu} \\frac{(-\\nu)}{r} (u',v) \\, \\mathrm{d} u' = & \\log(1-\\mu)(u,v) + \\int_{u}^{v} (-\\nu)^{-1} r (\\partial_{u} \\phi)^{2} (u', v) \\, \\mathrm{d} u'. \\label{eq:auxEqs:2}\n\\end{align}\n\\end{lemma}\n\n\\begin{proof} \n\tWe shall prove \\eqref{eq:auxEqs:1}, leaving the similar proof of \\eqref{eq:auxEqs:2} to the reader. 
\n\tFrom the proof of Lemma \\ref{lem:basicEst4dr}, we have\n\t\\begin{equation*}\n\t\t\\int_{u}^{v} \\frac{\\mu}{1-\\mu} \\frac{\\lmb}{r} (u,v') \\, \\mathrm{d} v' = \\log \\frac{\\nu(u, v)}{\\lim_{v' \\to u+} \\nu(u,v')}.\n\t\\end{equation*}\n\t\n\tComparing with the integral of \\eqref{eq:mntn4durOver1-mu}, along with the fact that $\\lim_{v' \\to u+} (1-\\mu)(u, v') = 1$, we arrive at \\eqref{eq:auxEqs:1}. \\qedhere\n\\end{proof}\n\n\\subsection{Normalization of the coordinate system}\\label{sec.coord}\n\nIn \\S \\ref{subsec:coordSys}, the coordinates are normalized such that $\\lmb$ is constant on the initial hypersurface $\\{u=1\\}$. Alternatively, one can introduce a new coordinate system $(u_{\\infty},v_{\\infty})$ which is normalized at future null infinity by requiring that $\\nu_{\\infty}\\to-\\frac 12$ along each outgoing null curve towards null infinity and requiring, as before, that $\\Gamma=\\{(u,v):u=v\\}$. We will show that the coordinate functions $u$ and $u_{\\infty}$ are comparable and thus the main theorem on the decay rates can also be stated in this alternatively normalized coordinate system.\n\nWe can compute explicitly the coordinate change, which is given by\n\\begin{equation*}\n\\frac{du_{\\infty}}{du}(u)=-2\\lim_{v\\to\\infty}\\nu(u,v),\\quad u_\\infty(1)=1\n\\hbox{ and }\nv_{\\infty}(v)=u_{\\infty}(v).\n\\end{equation*}\n\nNotice that the limit $\\displaystyle\\lim_{v\\to\\infty}\\nu(u,v)$ is well-defined due to the monotonicity of $\\nu$. Integrating, we obtain\n\\begin{equation*}\n\tu_{\\infty}(u) = - 2\\int_{1}^{u} \\Big(\\lim_{v\\to\\infty}\\nu(u',v)\\Big) \\, \\mathrm{d} u' + 1.\n\\end{equation*}\n\nBy Proposition \\ref{prop:geomLocBVScat}, the following estimate holds:\n\\begin{equation*}\n\t2(\\Lambda)^{-1} (u-1) \\leq u_{\\infty}-1 \\leq 2 K (u-1).\n\\end{equation*}\n\n\\subsection{Consequence of local BV scattering}\nIn this subsection, we give some estimates for $\\partial_u^2(r\\phi)$, $\\partial_u\\phi$ and $\\partial_u \\nu$ that follow from the local BV 
scattering assumption. To this end, we will need the analysis for solutions to \\eqref{eq:SSESF} with small bounded variation norm by Christodoulou in \\cite{Christodoulou:1993bt}. In particular, Christodoulou proved\n\\begin{theorem}[{Christodoulou \\cite[Theorem 6.2]{Christodoulou:1993bt}}]\\label{Chr.BV.Thm}\nThere exist universal constants $\\epsilon_0$ and $C_0$ such that for $\\epsilon<\\epsilon_0$, if $\\lmb(1, \\cdot) = \\frac{1}{2}$ and $\\partial_{v} (r \\phi)(1, \\cdot)$ is of bounded variation with\n\\begin{equation} \\label{Chr.BV.Thm.hyp}\n\\int_{C_1} |\\partial_v^2(r\\phi)| <\\epsilon,\n\\end{equation}\nthen its maximal development $(\\phi, r, m)$ satisfies condition $(1)$ in Definition \\ref{def:locBVScat} (future completeness of radial null geodesics) and obeys\n\\begin{gather}\n\t\\frac{1}{3} \\leq \\lmb \\leq \\frac{1}{2}, \\quad \n\t\\frac{1}{3} \\leq - \\nu \\leq \\frac{2}{3}, \\quad\n\t\\frac{2}{3} \\leq (1-\\mu) \\leq 1, \\label{Chr.BV.Thm.geom} \\\\\n\t\\sup_{u \\geq 1} \\int_{C_{u}} \\Big( \\abs{\\partial_{v} (\\lmb^{-1} \\partial_{v} (r \\phi))} + \\abs{\\partial_{v} \\phi} + \\abs{\\partial_{v} \\log \\lmb} \\Big) < C_{0} \\epsilon, \\label{Chr.BV.Thm.dv} \\\\\n\t\\sup_{v \\geq 1} \\int_{\\underline{C}_{v}} \\Big( \\abs{\\partial_{u} (\\nu^{-1} \\partial_{u} (r \\phi))} + \\abs{\\partial_{u} \\phi} + \\abs{\\partial_{u} \\log \\nu} \\Big) < C_{0} \\epsilon. \\label{Chr.BV.Thm.du}\n\\end{gather}\n\\end{theorem}\n\n\\begin{remark} \nWe remark that in \\cite[Theorem 6.2]{Christodoulou:1993bt}, it is implicitly assumed that $\\phi(1, 1) = 0$; see \\cite[Section 4]{Christodoulou:1993bt}. \nNote, however, that the bounds in the above theorem are stated in such a way that they are invariant under the transform $(\\phi, r, m) \\mapsto (\\phi + c, r, m)$, under which \\eqref{eq:SSESF} is also invariant. Any solution may then be transformed to satisfy $\\phi(1, 1) = 0$.
As a consequence, we do not need to check $\\phi(1,1) = 0$ in order to apply the theorem.\n\\end{remark}\nUsing Theorem \\ref{Chr.BV.Thm}, we prove the following bound for locally BV scattering solutions to \\eqref{eq:SSESF}.\n\n\\begin{theorem} \\label{thm:decayInCpt}\nLet $(\\phi, r, m)$ be a locally BV scattering solution to \\eqref{eq:SSESF}. For every $\\epsilon > 0$, there exists $u_{0} > 1$ such that the following estimate holds.\n\\begin{align*}\n\t\\sup_{v \\in [u_{0}, \\infty)} \\Big( \\int_{\\underline{C}_{v} \\cap \\set{u \\geq u_{0}}\\cap \\PD_{\\mathrm{cpt}}} \\abs{\\partial_{u}^{2} (r \\phi)} \n\t+ \\int_{\\underline{C}_{v} \\cap \\set{u \\geq u_{0}} \\cap \\PD_{\\mathrm{cpt}}} \\abs{\\partial_{u} \\phi} \n\t+ \\int_{\\underline{C}_{v} \\cap \\set{u \\geq u_{0}} \\cap \\PD_{\\mathrm{cpt}}} \\abs{\\partial_{u} \\log \\nu} \\Big) < \\epsilon.\n\\end{align*}\n\nMoreover, we also have\n\\begin{equation} \\label{eq:bnd4durphi}\n\t\\sup_{\\calQ} \\abs{\\partial_{u} (r \\phi)} \\leq C_{K, \\Lambda} \\Psi.\n\\end{equation}\n\\end{theorem}\n\n\\begin{proof}\nWe first show that for a locally BV scattering solution to \\eqref{eq:SSESF},\n\\begin{equation*}\n\t\\int_{C_{u} \\cap \\PD_{\\mathrm{cpt}}} \\abs{\\partial_{v} (\\lmb^{-1} \\partial_{v} (r \\phi))} \\to 0 \\hbox{ as } u \\to \\infty.\n\\end{equation*}\n\nExpanding this expression, we have\n\\begin{eqnarray*}\n\\int_{C_{u} \\cap \\PD_{\\mathrm{cpt}}} \\abs{\\partial_{v} (\\lmb^{-1} \\partial_{v} (r \\phi))}\n\\leq \\int_{C_{u} \\cap \\PD_{\\mathrm{cpt}}} \\lmb^{-1} (\\abs{ \\partial_{v}^2 (r \\phi)}+\\abs{ (\\partial_{v} \\log \\lambda) \\partial_v(r \\phi)}).\n\\end{eqnarray*}\nBy \\eqref{eq:bnd4dvr} and \\eqref{eq:bnd4dvrphi}, we have\n$$\\int_{C_{u} \\cap \\PD_{\\mathrm{cpt}}} \\abs{\\partial_{v} (\\lmb^{-1} \\partial_{v} (r \\phi))}\\leq \nC_{\\Lambda, \\Psi}\\int_{C_{u} \\cap \\PD_{\\mathrm{cpt}}} \\abs{ \\partial_{v}^2 (r \\phi)}+\\abs{ \\partial_{v} \\log \\lambda },$$\nwhich by \\eqref{eq:locBVScat} 
in Definition \\ref{def:locBVScat} (Scattering in BV in a compact $r$-region) tends to 0 as $u\\to \\infty$.\nNotice that the quantity $\\int_{C_{u} \\cap \\PD_{\\mathrm{cpt}}} \\abs{\\partial_{v} (\\lmb^{-1} \\partial_{v} (r \\phi))}$ which we have controlled is invariant under any rescaling of the coordinate $v$, and also under the transform $(\\phi, r, m) \\to (\\phi + c, r, m)$.\n\nWe now proceed to the proof of the theorem. Let $v_0$ be sufficiently large and $u^{\\star}(v_0)$ be the unique $r(u^{\\star}(v_0),v_0)=R$. By the finite speed of propagation of the equations, the solution on $\\underline{C}_{v_0}\\cap\\PD_{\\mathrm{cpt}}$ depends only on the data on $C_{u^{\\star}(v_0)} \\cap\\PD_{\\mathrm{cpt}}$.\n\nIn order to apply Theorem \\ref{Chr.BV.Thm}, we change coordinates $(u,v)\\mapsto (U(u),V(v))$ in the region bounded by $C_{u^{\\star}(v_0)}$ and $\\underline{C}_{v_0}$ to a new double null coordinate $(U,V)$ such that for $U^{\\star}=U(u^{\\star}(v_0))$, we have $\\lambda(U^{\\star},V)=\\frac 12$. To this end, define $V(v)$ by\n$$\\frac{dV}{dv}=2\\lmb(u^{\\star}(v_0),v),\\quad V(v_0)=v_0.$$\nNotice that this is acceptable since $\\lmb>0$. In order for the condition $U = V$ to hold on $\\Gamma$, we require\n$U(u)=V(u).$\nThen with respect to the coordinate $V$\n$$\\partial_V r(U^{\\star},V)=\\frac 12.$$\nBy \\eqref{eq:bnd4dvr}, we have\n$$\\Lambda^{-1}\\leq \\frac{dV}{dv}, \\frac{dU}{du}\\leq \\frac 12.$$\nMoreover,\n$$\\int_{u^{\\star}(v_0)}^{v_0} |\\frac{d^2V}{dv^2}(v')|dv'\\leq 2\\int_{u^{\\star}(v_0)}^{v_0} |\\partial_v\\lambda(u^{\\star}(v_0),v')| \\mathrm{d} v',$$\nwhich tends to $0$ as $v_0\\to \\infty$ by the assumption of local BV scattering. For $v_0$ sufficiently large, in the $(U,V)$ coordinate system, $\\int_{C_{u^{\\star}(v_0)} \\cap \\PD_{\\mathrm{cpt}}} \\abs{\\partial_{V} ((\\partial_{V} r)^{-1} \\partial_{V} (r \\phi))} dV$ is small and $\\partial_V r=\\frac 12$. 
The data satisfy the assumptions of Theorem \\ref{Chr.BV.Thm} and therefore\\footnote{More precisely, we apply Theorem \\ref{Chr.BV.Thm} to the truncated initial data $$ \\partial_{V} (r \\widetilde{\\phi})(U^{\\star}, V) = \\left\\{ \\begin{array}{cc} \\partial_{V}(r \\phi)(U^{\\star}, V) & \\hbox{ for } V < v_{0} \\\\ \\partial_{V}(r \\phi)(U^{\\star}, v_{0}) & \\hbox{ for } V \\geq v_{0} \\end{array} \\right.$$}\n$$\\int_{\\underline{C}_{v_0}\\cap\\PD_{\\mathrm{cpt}}}(\\abs{\\partial_{U} ((\\partial_{U} r)^{-1} \\partial_{U} (r \\phi))} + \\abs{\\partial_{U} \\phi} + \\abs{\\partial_{U} \\log \\partial_{U} r} ) dU \\to 0$$\nas $v_{0} \\to \\infty$. Returning to the original coordinate system $(u,v)$, the first statement easily follows.\n\nFinally, for the $L^\\infty$ estimate for $\\partial_{u}(r\\phi)$, notice that $\\abs{\\partial_{u}(r\\phi)} \\leq \\Psi$ at the axis by \\eqref{eq:bnd4dvrphi} and $(7)$ of Definition \\ref{def:BVsolution} (BV solutions to \\eqref{eq:SSESF}). Using \\eqref{eq:SSESF:dphi''}, \\eqref{eq:SSESF:dr} (in particular, the fact that $\\partial_{v} \\nu \\leq 0$), \\eqref{eq:bnd4phi} and \\eqref{eq:bnd4dur}, we have\n\\begin{equation*}\n\\abs{\\partial_{u}(r\\phi)(u,v)} \\leq \\Psi+ \\Lambda \\Psi \\int_{u}^{v} (-\\partial_{v} \\nu) \\, \\mathrm{d} v' \\leq C_{K, \\Lambda} \\Psi. \\qedhere\n\\end{equation*}\n\\end{proof}\n\n\n\\section{Decay of $\\phi$ and its first derivatives}\\label{sec.decay1}\nIn this section, we prove the first main theorem (Theorem \\ref{main.thm.1}). Throughout this section, we assume that $(\\phi, r, m)$ is a locally BV scattering solution to \\eqref{eq:SSESF} with asymptotically flat initial data of order $\\omega'$ in BV, as in Definitions \\ref{def:locBVScat} and \\ref{def:AF}, respectively. Let $\\omega = \\min \\set{\\omega', 3}$.\n\n\\subsection{Preparatory lemmas}\nThe following lemma will play a key role in the proof of both Theorems \\ref{main.thm.1} and \\ref{main.thm.2}. 
It is a consequence of the scattering assumption \\eqref{eq:locBVScat} and vanishing of the final Bondi mass.\n\n\\begin{lemma} \\label{lem:smallPtnl}\nLet $\\epsilon > 0$ be an arbitrary positive number. For $u_{1} > 1$ sufficiently large, we have\n\\begin{align} \n\t\\sup_{v \\in [u_{1}, \\infty)} \\int_{\\underline{C}_{v} \\cap \\set{u \\geq u_{1}}} \\abs{\\frac{2m \\nu}{(1-\\mu) r^{2}}} < \\epsilon, \\label{eq:smallPtnl:u} \\\\ \n\t\\sup_{u \\in [u_{1}, \\infty)} \\int_{C_{u}} \\abs{\\frac{2m \\lmb}{(1-\\mu) r^{2}}} < \\epsilon. \\label{eq:smallPtnl:v}\n\\end{align}\n\\end{lemma}\n\n\\begin{proof} \nThe first statement \\eqref{eq:smallPtnl:u} was proved in Lemma \\ref{lem:babySmllPtnl:2}; thus it only remains to prove \\eqref{eq:smallPtnl:v}.\n\nDivide $\\calQ$ into $\\PD_{\\mathrm{cpt}} :=\\calQ\\cap\\set{r \\leq R}$ and $\\PD_{\\mathrm{cpt}}^{c} := \\calQ \\setminus \\PD_{\\mathrm{cpt}}$. First, note that by \\eqref{eq:locBVScat} we have\n\\begin{align*}\n\t\\sup_{u \\in [u_{1}, \\infty)} \\int_{C_{u} \\cap \\PD_{\\mathrm{cpt}}} \\abs{\\frac{2m \\lmb}{(1-\\mu) r^{2}}} < \\epsilon\/2\n\\end{align*}\nfor $u_{1}$ sufficiently large. \nNext, we consider $\\PD_{\\mathrm{cpt}}^{c}$. Define $v^{\\star}(u) := \\sup \\set{v \\in [u, \\infty) : r(u,v) \\leq R}$; note that $r(u, v^{\\star}(u)) = R$ by continuity. We now compute\n\\begin{align*}\n\t\\int_{C_{u} \\cap \\PD_{\\mathrm{cpt}}^{c}} \\abs{\\frac{2 m \\lmb}{(1-\\mu) r^{2}}} \n\t= & \\int_{v^{\\star}(u)}^{\\infty} \\abs{\\frac{2m \\lmb}{(1-\\mu) r^{2}}(u,v')} \\, \\mathrm{d} v' \\\\\n\t\\leq & 2 K \\Lambda M(u_{1}) \\int_{v^{\\star}(u)}^{\\infty} \\frac{\\lmb}{r^{2}}(u, v') \\, \\mathrm{d} v' \\\\\n\t\\leq & 2 R^{-1} K \\Lambda M(u_{1}),\n\\end{align*}\nuniformly in $u \\geq u_{1}$. As $\\lim_{u_{1} \\to \\infty} M(u_{1}) = 0$ by \\eqref{eq:zeroMf} (vanishing final Bondi mass), the last line can be made arbitrarily small by taking $u_{1}$ sufficiently large. This proves \\eqref{eq:smallPtnl:v}.
\\qedhere\n\\end{proof}\n\nThe following lemma allows us to estimate $\\phi$ in terms of $\\abs{\\partial_{v} (r \\phi)}$.\n\\begin{lemma} \\label{lem:intEst4phi}\nThe following estimates hold.\n\\begin{align*} \n\t\\abs{\\phi}(u,v) \\leq & \\Lambda \\sup_{C_{u} } \\abs{\\partial_{v} (r \\phi)} , \\\\\n\tr u^{\\omega-1} \\abs{\\phi}(u,v) \\leq & C \\Lambda \\Big( \\sup_{C_{u}} u^{\\omega} \\abs{\\partial_{v} (r \\phi)} + \\sup_{C_{u}} r^{\\omega} \\abs{\\partial_{v} (r \\phi)} \\Big). \n\\end{align*}\n\\end{lemma}\n\n\\begin{proof} \nThe first estimate follows from Lemma \\ref{lem:est4phi} and Proposition \\ref{prop:geomLocBVScat}. The second estimate is a consequence of the first when $r(u,v) \\leq u$, so it suffices to assume $r(u,v) \\geq u$. Introducing a parameter $v_{1} \\in [u, v]$, we estimate\n\\begin{align*}\n\tr u^{\\omega-1} \\abs{\\phi}(u,v) \n\t\\leq & u^{\\omega-1} \\int_{u}^{v} \\abs{\\partial_{v} (r \\phi)(u, v')} \\, \\mathrm{d} v' \\\\\n\t\\leq & \\Lambda u^{\\omega-1}(\\sup_{C_u}|\\partial_v(r\\phi)|)\\int_u^{v_1}\\lmb(u,v')\\mathrm{d} v'+\\Lambda u^{\\omega-1}(\\sup_{C_u}r^{\\omega}|\\partial_v(r\\phi)|)\\int_{v_1}^{v}\\frac{\\lmb}{r^{\\omega}}(u,v')\\mathrm{d} v'\\\\\n\t\\leq & \\Lambda (r(u, v_{1})\/u) \\sup_{C_{u}} u^{\\omega} \\abs{\\partial_{v} (r \\phi)} + \\frac{\\Lambda}{\\omega-1} (u^{\\omega-1}\/r(u, v_{1})^{\\omega-1}) \\sup_{C_{u}} r^{\\omega} \\abs{\\partial_{v} (r \\phi)}.\n\\end{align*}\n\nChoosing $v_{1}$ so that $r(u, v_{1}) = u$ (which is possible since $r(u,v) \\geq u$), the desired estimate follows.\n\\end{proof}\n\n\\subsection{Preliminary $r$-decay for $\\phi$} \\label{subsec:decay1:rDecay}\nIn this subsection, we derive bounds for $\\phi$ which are sharp in terms of $r$-weights. 
As a consequence, they give sharp decay rates towards null infinity.\n\n\\begin{lemma} \\label{lem:decay1:cptu:0}\n\tThere exists a constant $0 < H_{1} < \\infty$ such that the following estimate holds.\n\t\\begin{equation} \\label{eq:decay1:cptu:0}\n\t\t\\sup_{\\calQ} (1+r) \\abs{\\phi} \\leq H_{1}.\n\t\\end{equation}\n\\end{lemma}\n\n\\begin{proof} \nLet $r_{1} > 0$ be a large number to be chosen below. Different arguments will be used in $\\set{r \\geq r_{1}}$ and $\\set{r \\leq r_{1}}$. For each $u \\geq 1$ let $v^{\\star}_{1}(u)$ be the unique $v$-value for which $r(u, v_{1}^{\\star}(u)) = r_{1}$. By the fundamental theorem of calculus, we have\n\\begin{equation} \\label{eq:decay1:cptu:0:pf:1}\n\tr \\phi = r_{1} \\phi(u, v^{\\star}_{1} (u)) + \\int_{v^{\\star}_{1}(u)}^{v} \\partial_{v} (r \\phi) (u, v') \\, \\mathrm{d} v'.\n\\end{equation} \n\nIntegrate \\eqref{eq:SSESF:dphi} along the incoming direction from $(1,v)$ to $(u,v)$. By Corollary \\ref{cor:mntn4Bondi} and Proposition \\ref{prop:geomLocBVScat}, we have\n\\begin{align*}\n\t\\abs{\\partial_{v} (r \\phi)(u,v)}\n\t\\leq& \\abs{\\partial_{v} (r \\phi)(1, v)} + \\abs{\\int_{1}^{u} \\frac{2m \\lmb \\nu}{(1-\\mu) r^{3}} (r\\phi) (u', v) \\, \\mathrm{d} u'} \\\\\n\t\\leq &\\abs{\\partial_{v} (r \\phi)(1, v)} + \\frac{K \\Lambda M_{i}}{2} \\frac{1}{r^{2}(u,v)} \\sup_{u' \\in [1, u]} \\abs{r \\phi(u', v)}.\n\\end{align*}\n\nSubstituting the preceding bound into \\eqref{eq:decay1:cptu:0:pf:1}, we obtain\n\\begin{equation} \\label{eq:decay1:cptu:0:pf:2}\n\\begin{aligned}\n\t\\sup_{C_{u} \\cap \\set{r \\geq r_{1}}} \\abs{r \\phi} \n\t\\leq & \\abs{r_{1} \\phi(u, v^{\\star}_{1} (u))} + \\int_{v_{1}^{\\star}(u)}^{v} \\abs{\\partial_{v} (r \\phi)(1, v')} \\, \\mathrm{d} v' \\\\\n\t& + \\frac{K \\Lambda^{2} M_{i}}{2 r_{1}} \\sup_{u' \\in [1, u]} \\sup_{C_{u'} \\cap \\set{r \\geq r_{1}}} \\abs{r \\phi}.\n\\end{aligned}\n\\end{equation}\n\nThe first term on the right-hand side is bounded by $r_{1} \\Lambda \\Psi$ by 
\\eqref{eq:bnd4phi}, whereas the second term depends only on the initial data and can be estimated in terms of $\\mathcal I_{1}$ as follows:\n\\begin{equation*}\n\t\\int_{v_{1}^{\\star}(u)}^{v} \\abs{\\partial_{v} (r \\phi)(1, v')} \\, \\mathrm{d} v' \\leq \\Lambda \\mathcal I_{1} \\int_{1}^{\\infty} (1+r(1, v'))^{-\\omega'} \\lmb(1,v') \\, \\mathrm{d} v' \\leq \\frac{\\Lambda}{\\omega'-1} \\mathcal I_{1}.\n\\end{equation*}\n\nMoreover, choosing $r_{1}$ to be large enough so that \n\\begin{equation*}\n\\frac{K \\Lambda^{2} M_{i}}{2 r_{1}} \\leq \\frac{1}{2},\n\\end{equation*}\nthe last term of \\eqref{eq:decay1:cptu:0:pf:2} can be absorbed into the left-hand side and we conclude\n\\begin{equation*}\n\t\\sup_{\\set{r \\geq r_{1}}} \\abs{r \\phi} \\leq 2 r_{1} \\Lambda \\Psi + \\frac{2}{\\omega'-1}\\Lambda \\mathcal I_{1}.\n\\end{equation*}\n\nOn the other hand, in $\\set{r \\leq r_{1}}$ we have\n\\begin{equation*}\n\t\\sup_{\\set{r \\leq r_{1}}} \\abs{r \\phi} \\leq r_{1} \\Lambda \\Psi\n\\end{equation*}\nby \\eqref{eq:bnd4phi}. Combining the bounds in $\\{r\\geq r_1\\}$ and $\\{r\\leq r_1\\}$, the lemma follows. \\qedhere\n\\end{proof}\n\n\\begin{remark} \nThe preceding argument shows that Lemma \\ref{lem:decay1:cptu:0} holds with\\footnote{Notice that while the constant $C_{\\mathcal I_{1}, K, \\Lambda}$ depends on $\\mathcal I_1$, the preceding argument moreover allows us to choose $C_{\\mathcal I_{1}, K, \\Lambda}$ to be non-decreasing in $\\mathcal I_1$. In particular, \\emph{for $\\mathcal I_1$ sufficiently small}, we have $H_{1} \\leq C_{K, \\Lambda} \\, (\\mathcal I_{1} + \\Psi)$. 
It is for this reason that we prefer to write the expression $C_{\\mathcal I_{1}, K, \\Lambda} \\, (\\mathcal I_{1} + \\Psi)$ instead of the more general $C_{\\mathcal I_{1}, K, \\Lambda, \\Psi}$.}\n\\begin{equation} \\label{eq:decay1:H1}\n\tH_{1} \\leq C_{\\mathcal I_{1}, K, \\Lambda} \\, (\\mathcal I_{1} + \\Psi).\n\\end{equation}\n\\end{remark}\n\n\n\\subsection{Propagation of $u$-decay for $\\partial_{u} (r \\phi)$}\nHere, we show that $u$-decay estimates proved for $\\partial_{v} (r \\phi)$ and $\\phi$ may be `transferred' to $\\partial_{u} (r \\phi)$; this reduces the proof of Theorem \\ref{main.thm.1} to showing only \\eqref{eq:decay1:1} and \\eqref{eq:decay1:2}. To this end, we integrate $\\partial_{v} \\partial_{u} (r \\phi)$ from the axis $\\Gamma$, along which $\\partial_{u} (r \\phi) = - \\partial_{v} (r \\phi)$. \n\n\\begin{lemma} \\label{lem:decay1:uDecay4durphi}\nSuppose that there exists a finite positive constant $A$ such that \n\\begin{equation*}\n\t\\sup_{\\calQ} \\abs{\\phi} \\leq A u^{-\\omega}, \\qquad \n\t\\sup_{\\calQ} \\abs{\\partial_{v} (r \\phi)} \\leq A u^{-\\omega}.\n\\end{equation*}\n\nThen the following estimate holds.\n\\begin{equation*}\n\t\\sup_{\\calQ} \\abs{\\partial_{u} (r \\phi)} \\leq (1+K)A u^{-\\omega}.\n\\end{equation*}\n\\end{lemma}\n\n\\begin{proof}\nFix $u \\geq 1$ and $v \\geq u$. Integrate \\eqref{eq:SSESF:dphi''} along the outgoing direction from $(u, u)$ to $(u, v)$ and take the absolute value. Using $(7)$ of Definition \\ref{def:BVsolution} (BV solutions to \\eqref{eq:SSESF}), \\eqref{eq:SSESF:dr} (in particular, $\\partial_{v} \\nu\\leq 0$), \\eqref{eq:bnd4dur} and the hypotheses, we have\n\\begin{align*}\n\t\\abs{\\partial_{u}(r\\phi)(u,v)} \n\t\\leq & \\lim_{v' \\to u+} \\abs{\\partial_{v} (r \\phi)(u, v')} + \\sup_{u \\leq v' \\leq v} \\abs{\\phi(u,v')} \\int_{u}^{v} (-\\partial_{v} \\nu) \\, \\mathrm{d} v' \\\\\n\t\\leq & A u^{-\\omega} + K A u^{-\\omega}. 
\\qedhere\n\\end{align*}\n\\end{proof}\n\n\\subsection{Full decay for $\\phi$ and $\\partial_{v} (r \\phi)$}\\label{sec.full.decay.1}\nIn this subsection, we finish the proof of Theorem \\ref{main.thm.1}. By Lemma \\ref{lem:decay1:uDecay4durphi}, it suffices to establish the full decay of $\\phi$ and $\\partial_{v} ( r \\phi)$, i.e., \\eqref{eq:decay1:1} and \\eqref{eq:decay1:2}. For the convenience of the reader, we recall these estimates below:\n\\begin{align*}\n\t\t\\abs{\\phi} \\leq & A \\min \\set{u^{-\\omega}, r^{-1} u^{-(\\omega-1)}}, \\tag{\\ref{eq:decay1:1}} \\\\\n\t\t\\abs{\\partial_{v} (r \\phi)} \\leq & A \\min \\set{u^{-\\omega}, r^{-\\omega}}. \\tag{\\ref{eq:decay1:2}}\n\\end{align*}\n\nFor $U > 1$, let\n\\begin{equation*}\n\\mathcal B_{1}(U) := \\sup_{u \\in [1, U]} \\sup_{C_{u}} \\Big( u^{\\omega} \\abs{\\phi} + r u^{\\omega-1} \\abs{\\phi} \\Big).\n\\end{equation*}\n\nNotice that this is finite for every fixed $U$ by Lemma \\ref{lem:decay1:cptu:0}. To establish the decay estimate \\eqref{eq:decay1:1}, it suffices to prove that $\\mathcal B_{1}(U)$ is bounded by a finite constant which is \\emph{independent of $U$}. We will show that this also implies \\eqref{eq:decay1:2}. Divide $\\calQ$ into $\\PD_{\\mathrm{ext}} \\cup \\PD_{\\mathrm{int}}$, defined by\n\\begin{equation*}\n\t\\PD_{\\mathrm{ext}} := \\set{(u,v) \\in \\calQ : v \\geq 3u}, \\quad \\PD_{\\mathrm{int}} := \\set{(u,v) \\in \\calQ : v \\leq 3u}.\n\\end{equation*}\n\nWe first establish a bound for $\\partial_{v} (r \\phi)$ with the sharp $r$-weight, which thus gives the sharp decay rate in $\\PD_{\\mathrm{ext}}$. \n\\begin{lemma} \\label{lem:decay1:extr}\nLet $u_{1} > 1$. 
Then for $u_{1}\\leq u\\leq U$, the following estimate holds.\n\\begin{equation} \\label{eq:decay1:extr}\n\t\\sup_{C_{u}} r^{\\omega} \\abs{\\partial_{v} (r \\phi)} \\leq \\mathcal I_{1} + C_{K, M_{i}} \\, u_{1} H_{1} + C u_{1}^{-1} K M_{i} \\, \\mathcal B_{1}(U).\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof} \nWe separate the proof into cases $\\omega \\geq 2 $ and $1<\\omega\\leq 2$.\n\n\\noindent{\\bf Case 1: $\\omega\\geq 2$}\n\nFirst, notice that\n$$|\\phi|\\leq \\mathcal B_1(U) (r^{-1}u^{-(\\omega-1)})^{\\omega-2} (u^{-\\omega})^{1-(\\omega-2)}\\leq \\mathcal B_1(U) r^{-(\\omega-2)}u^{-2}.$$\nApplying Lemma \\ref{lem:decay1:cptu:0}, we also have\n$$|\\phi|\\leq (1+r)^{-1}H_1.$$\nBy Corollary \\ref{cor:mntn4Bondi} and Proposition \\ref{prop:geomLocBVScat}, we have the following pointwise bounds:\n\\begin{equation*}\n\\sup_{u'\\in [1,u_1]} |\\frac{m\\lmb\\nu}{1-\\mu}|\\leq \\frac{K M_i}{2}\\, , \\quad\n\\sup_{u'\\in [u_{1},\\infty)} |\\frac{m\\lmb\\nu}{1-\\mu}|\\leq \\frac{K M(u_{1})}{2}\\, .\n\\end{equation*}\nTherefore, integrating \\eqref{eq:SSESF:dphi} along the incoming direction from $(1, v)$ to $(u, v)$, we have\n\\begin{align*}\n\t&\\abs{\\partial_{v} (r \\phi)(u,v)}\\\\\n\t& \\quad \\leq \\abs{\\partial_{v} (r \\phi)(1, v)} + \\abs{\\int_{1}^{u} \\frac{2m \\lmb \\nu\\phi}{(1-\\mu) r^{2}} (u', v) \\, \\mathrm{d} u'} \\\\\n\t& \\quad \\leq \\abs{\\partial_{v} (r \\phi)(1, v)} + \\frac{K M_{i}}{r^2(u,v)(1+r(u,v))} H_{1} \\int_{1}^{u_{1}} \\, \\mathrm{d} u' + \\frac{K M(u_1)}{r^{\\omega}(u,v)} \\mathcal B_{1}(U) \\int_{u_{1}}^{u} (u')^{-2} \\, \\mathrm{d} u' \\\\\n\t& \\quad \\leq \\abs{\\partial_{v} (r \\phi)(1, v)} + \\frac{u_{1} K M_{i}}{r^2(u,v)(1+r(u,v))} H_{1} + \\frac{K M(u_{1})}{u_{1} r^{\\omega}(u,v)} \\mathcal B_{1}(U).\n\\end{align*}\n\nMultiplying both sides by $r^{\\omega}(u,v)$ and using the fact that $r(u,v) \\leq r(1, v)$, we conclude\n\\begin{align*}\n\tr^{\\omega} \\abs{\\partial_{v} (r \\phi)}(u,v) \n\t\\leq & 
r^{\\omega}\\abs{\\partial_{v} (r \\phi)}(1,v) + u_{1} \\frac{r^{\\omega-2}}{(1+r)} K M_{i} \\, H_{1}+ u_{1}^{-1} K M(u_{1}) \\, \\mathcal B_{1}(U) \\\\\n\t\\leq & \\mathcal I_{1}+ C_{u_{1}, K, M_{i}} H_{1} + u_{1}^{-1} K M_{i} \\, \\mathcal B_{1}(U).\n\\end{align*}\n\n\\noindent{\\bf Case 2: $1<\\omega\\leq 2$}\n\nWe will use the following bounds for $\\phi$. First, \n$$|\\phi|\\leq \\mathcal B_1(U) (r^{-1}u^{-(\\omega-1)})^{\\omega-1} (u^{-\\omega})^{(2-\\omega)}\\leq \\mathcal B_1(U) r^{-(\\omega-1)}u^{-1}.$$\n\nAlso, Lemma \\ref{lem:decay1:cptu:0} implies\n$$|\\phi|\\leq (1+r)^{-1}H_1.$$\n\nAs in Case 1 we integrate \\eqref{eq:SSESF:dphi} along the incoming direction from $(1, v)$ to $(u, v)$:\n\\begin{align*}\n\t&\\abs{\\partial_{v} (r \\phi)(u,v)}\\\\\n\t& \\quad \\leq \\abs{\\partial_{v} (r \\phi)(1, v)} + \\abs{\\int_{1}^{u} \\frac{2m \\lmb \\nu\\phi}{(1-\\mu) r^{2}} (u', v) \\, \\mathrm{d} u'} \\\\\n\t& \\quad \\leq \\abs{\\partial_{v} (r \\phi)(1, v)} + \\frac{K \\Lambda M_{i} H_{1}}{(1+r)} \\int_{1}^{u_{1}} \\frac{-\\nu}{r^2} \\, \\mathrm{d} u' \n\t\t\t\t\t\t\t\t+ \\frac{K \\Lambda M(u_1)}{u_1} \\mathcal B_{1}(U) \\int_{u_{1}}^{u} \\frac{-\\nu}{r^{\\omega+1}} \\, \\mathrm{d} u' \\\\\n\t& \\quad \\leq \\abs{\\partial_{v} (r \\phi)(1, v)} + \\frac{\\omega K \\Lambda M_{i}}{r(u,v)(1+r(u,v))} H_{1} + \\frac{\\omega K \\Lambda M(u_{1})}{u_{1} r^{\\omega}(u,v)} \\mathcal B_{1}(U).\n\\end{align*}\n\nMultiply both sides by $r^{\\omega}$ to arrive at the conclusion as in Case 1. In this case, note that the second term is a bit better than what is claimed, as there is no dependence on $u_{1} \\geq 1$. \\qedhere\n\\end{proof}\n\n\\begin{remark} \nNote that the proof of this lemma limits $\\omega$ to be $\\leq 3$. More precisely, this limitation comes from the contribution of the right-hand side of \\eqref{eq:SSESF:dphi}.\n\\end{remark}\n\n\nWe are now ready to prove bounds \\eqref{eq:decay1:1} and \\eqref{eq:decay1:2}. 
The idea is to `propagate' the exterior decay estimate \\eqref{eq:decay1:extr} into $\\PD_{\\mathrm{int}}$ to obtain decay in $u$, using the smallness coming from Lemma \\ref{lem:smallPtnl} in the region where $u$ is sufficiently large. On the other hand, the preliminary $r$-decay estimates proved in \\S \\ref{subsec:decay1:rDecay} will give the desired $r$-decay rates in the rest of the space-time.\n\n\\begin{proof}[Proof of \\eqref{eq:decay1:1} and \\eqref{eq:decay1:2}] \nLet $1 \\leq u_{1} \\leq U$. For $(u, v) \\in \\calQ$ with $u \\in [3 u_{1}, U]$, integrate \\eqref{eq:SSESF:dphi} along the incoming direction from $(u\/3, v)$ to $(u, v)$. Then\n\\begin{equation} \\label{eq:decay1:intr:pf:1}\n\\begin{aligned}\n\t\\abs{\\partial_{v} (r \\phi) (u,v)} \\leq \n\t& \\abs{\\partial_{v} (r \\phi) (u\/3,v)} \\\\\n\t& + \\frac{1}{2} (\\sup_{u' \\in [u\/3, u]} \\sup_{C_{u'}} \\abs{\\phi}) \\int_{u\/3}^{u} \\abs{\\frac{2m \\nu}{(1-\\mu) r^{2}} (u', v)} \\, \\mathrm{d} u'.\n\\end{aligned}\n\\end{equation} \n\nMultiply both sides by $u^{\\omega}$ and estimate each term on the right-hand side. For the first term, the key observation is the following: For $v \\geq u$, the point $(u\/3, v)$ lies in $\\PD_{\\mathrm{ext}}$, where \\eqref{eq:decay1:extr} is effective. 
Indeed, note that\n\\begin{equation*}\n\t\\frac{2u}{3\\Lambda} \\leq \\Lambda^{-1} ( v- (u\/3) ) \\leq r(u\/3, v).\n\\end{equation*}\n\nThus, by \\eqref{eq:decay1:extr},\n\\begin{align*}\n\tu^{\\omega} \\abs{\\partial_{v} (r \\phi) (u\/3,v)} \n\t\\leq & (3 \\Lambda \/2)^{\\omega} \\Big( r^{\\omega}(u\/3, v) \\abs{\\partial_{v} (r \\phi) (u\/3,v)} \\Big) \\\\\n\t\\leq & (3 \\Lambda\/2)^{\\omega} \\Big( \\mathcal I_{1} + C_{u_{1}, K, M_{i}} H_{1} + C u_{1}^{-1} K M_{i} \\, \\mathcal B_{1}(U) \\Big) \\\\\n\t\\leq & C_{u_{1}, K, \\Lambda, M_{i}} (\\mathcal I_{1} + H_{1}) + C_{K, \\Lambda} M_{i} u_{1}^{-1} \\, \\mathcal B_{1}(U).\n\\end{align*}\n\nFor the second term on the right-hand side of \\eqref{eq:decay1:intr:pf:1}, we have\n\\begin{align*}\n\\frac{u^{\\omega}}{2} (\\sup_{u' \\in [u\/3, u]} \\sup_{C_{u'}} \\abs{\\phi}) \\int_{u\/3}^{u} \\abs{\\frac{2m \\nu}{(1-\\mu) r^{2}} (u', v)} \\, \\mathrm{d} u'\n\\leq & \\frac{3^{\\omega}}{2} \\Big( \\int_{u\/3}^{u} \\abs{\\frac{2m \\nu}{(1-\\mu) r^{2}} (u', v)} \\, \\mathrm{d} u' \\Big) \\mathcal B_{1}(U).\n\\end{align*}\n\nCombining these estimates, we deduce\n\\begin{equation} \\label{eq:decay1:intr:pf:2}\n\\begin{aligned}\n\t\\sup_{C_{u}} u^{\\omega} \\abs{\\partial_{v}(r \\phi)(u, v)} \\leq & C_{u_{1}, K, \\Lambda, M_{i}} (\\mathcal I_{1} + H_{1}) \\\\ \n\t&+ \\Big( C_{K, \\Lambda} M_{i} u_{1}^{-1} + C \\int_{u\/3}^{u} \\abs{\\frac{2m \\nu}{(1-\\mu) r^{2}} (u', v)} \\, \\mathrm{d} u' \\Big) \\, \\mathcal B_{1}(U).\n\\end{aligned}\n\\end{equation}\n\nRecalling the bounds of $\\phi$ in terms of $\\partial_v(r\\phi)$ in Lemma \\ref{lem:intEst4phi}, we have\n\\begin{align*}\n\t\\mathcal B_{1}(U) \n\t\\leq & (1+2\\Lambda) \\sup_{u \\in [1, U]} \\sup_{C_{u}} \\Big( u^{\\omega} \\abs{\\partial_{v}(r \\phi)} + r^{\\omega} \\abs{\\partial_{v} (r \\phi)} \\Big).\n\\end{align*}\nThe right-hand side can be controlled by \\eqref{eq:decay1:intr:pf:2} and \\eqref{eq:decay1:extr}, from which we conclude\n\\begin{equation} 
\\label{eq:decay1:intr:pf:key}\n\t\\mathcal B_{1}(U) \\leq C_{u_{1}, K, \\Lambda, M_{i}} (\\mathcal I_{1} + H_{1}) + \\Big( C_{K, \\Lambda} M_{i} u_{1}^{-1} + C \\int_{u\/3}^{u} \\abs{\\frac{2m \\nu}{(1-\\mu) r^{2}} (u', v)} \\, \\mathrm{d} u'\\Big) \\mathcal B_{1}(U).\n\\end{equation}\n\nAs a consequence of Lemma \\ref{lem:smallPtnl}, the entire coefficient in front of $\\mathcal B_{1}(U)$ can be made smaller than (say) $1\/2$ by taking $u_{1}$ sufficiently large. Since $\\mathcal B_{1}(U) < \\infty$, we can then absorb this term into the left-hand side. Observing that this bound is independent of $U > 1$, we have thus obtained \\eqref{eq:decay1:1}.\n\nTo prove \\eqref{eq:decay1:2}, simply apply \\eqref{eq:decay1:intr:pf:2} and \\eqref{eq:decay1:extr}, which shows that\n\\begin{align*}\n&\\sup_{u \\in [1, U]} \\sup_{C_{u}} \\Big( u^{\\omega} \\abs{\\partial_{v}(r \\phi)} + r^{\\omega} \\abs{\\partial_{v} (r \\phi)} \\Big)\\\\\n& \\quad \\leq C_{u_{1}, K, \\Lambda, M_{i}} (\\mathcal I_{1} + H_{1}) + \\Big( C_{K, \\Lambda} M_{i} u_{1}^{-1} + C \\int_{u\/3}^{u} \\abs{\\frac{2m \\nu}{(1-\\mu) r^{2}} (u', v)} \\, \\mathrm{d} u'\\Big) \\mathcal B_{1}(U).\n\\end{align*}\nThe boundedness of $\\mathcal B_1(U)$ that we just proved thus implies \\eqref{eq:decay1:2}.\n\\qedhere\n\\end{proof}\n\n\\begin{remark} \nAccording to the proof that we have just given, the constant $A_{1} > 0$ depends on our choice of $u_{1} > 1$, which in turn depends on how fast the coefficient in front of $\\mathcal B_{1}(U)$ in \\eqref{eq:decay1:intr:pf:key} vanishes as $u_{1} \\to \\infty$. This explains why $A_{1} > 0$ does not depend only on the size of the initial data, as remarked in Section \\ref{sec.main.thm}. 
Controlling the size of $u_{1} > 1$ under an additional small data assumption will be key to proving Statement (1) of Theorem \\ref{thm:smallData} in Section \\ref{sec:smallData}.\n\\end{remark}\n\n\\subsection{Additional decay estimates}\nWe end this section with the following decay estimates for $\\partial_{v} \\phi$, $\\partial_{u} \\phi$ and $m$.\n\\begin{corollary} \\label{cor:decay1}\nLet $(\\phi, r, m)$ be a locally BV scattering solution to \\eqref{eq:SSESF} with asymptotically flat initial data of order $\\omega'$ in BV, and define $\\omega = \\min \\set{\\omega', 3}$.\nLet $A_{1}$ be the constant in Theorem \\ref{main.thm.1}. Then the following decay estimates hold.\n\\begin{align} \n\t\\abs{\\partial_{v} \\phi} \\leq & C A_{1} \\min \\set{r^{-1} u^{-\\omega}, r^{-2} u^{-(\\omega-1)}}, \\label{eq:decay1:4} \\\\\n\t\\abs{\\partial_{u} \\phi} \\leq & C_{K} A_{1} \\, r^{-1} u^{-\\omega}, \\label{eq:decay1:5} \\\\\n\tm \\leq & C_{\\Lambda} A_{1}^{2} \\min \\set{r u^{-2\\omega}, u^{-(2\\omega-1)}}. \\label{eq:decay1:6}\n\\end{align}\n\\end{corollary}\n\\begin{proof} \n\tLet $u \\geq 1$ and $v \\geq u$. Since\n\t\\begin{equation*}\n\t\tr \\partial_{v} \\phi = \\partial_{v} ( r \\phi) - \\lmb \\phi, \\qquad\n\t\tr \\partial_{u} \\phi = \\partial_{u} ( r \\phi) - \\nu \\phi,\n\t\\end{equation*}\n\tthe estimates \\eqref{eq:decay1:4}, \\eqref{eq:decay1:5} follow from \\eqref{eq:decay1:1}--\\eqref{eq:decay1:3} and the fact that $\\sup_{\\calQ} \\abs{\\lmb} \\leq 1\/2$, $\\sup_{\\calQ} \\abs{\\nu} \\leq K$. 
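We note one elementary point used here: to pass between the two alternatives in the minimum, one may interpolate, e.g.,\n\\begin{equation*}\n\t\\min \\set{u^{-\\omega}, r^{-\\omega}} \\leq (u^{-\\omega})^{\\frac{\\omega-1}{\\omega}} (r^{-\\omega})^{\\frac{1}{\\omega}} = u^{-(\\omega-1)} r^{-1},\n\\end{equation*}\nwhich, combined with \\eqref{eq:decay1:2} and the extra factor of $r^{-1}$, yields the second alternative in \\eqref{eq:decay1:4}.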
\n\t\n\tOn the other hand, by \\eqref{eq:SSESF:dm}, we have \n\t\\begin{equation} \\label{eq:decay1:6:pf:1}\n\t\tm(u,v) = \\frac{1}{2} \\int_{u}^{v} \\lmb^{-1} (1-\\mu) r^{2} (\\partial_{v} \\phi)^{2} (u, v')\\, \\mathrm{d} v'.\n\t\\end{equation}\n\t\n\tUsing $\\abs{\\partial_{v} \\phi (u,v) } \\leq C A_{1} r^{-1} u^{-\\omega}$ (which has just been established), we obtain\n\t\\begin{equation*}\n\t\tm(u,v) \\leq C_{\\Lambda} A_{1}^{2} \\, r u^{-2\\omega},\n\t\\end{equation*}\n\twhich proves a `half' of \\eqref{eq:decay1:6}. \n\tTo prove the other `half', let us introduce a parameter $r_{1} > 0$ (to be determined later) and define $v_{1}^{\\star}(u)$ to be the unique $v$-value such that $r(u, v^{\\star}_{1}(u)) = r_{1}$. For $v \\geq v^{\\star}_{1}(u)$, divide the $v'$-integral in \\eqref{eq:decay1:6:pf:1} into $\\int_{u}^{v^{\\star}_{1}(u)} + \\int_{v^{\\star}_{1}(u)}^{v}$ and use $\\abs{\\partial_{v} \\phi (u,v) } \\leq C A_{1} \\, r^{-1} u^{-\\omega}$ for the former and $\\abs{\\partial_{v} \\phi (u,v) } \\leq C A_{1} \\, r^{-2} u^{-(\\omega-1)}$ for the latter. As $m(u,v)$ is non-decreasing in $v$, we then arrive at the estimate\n\t\\begin{equation*}\n\t\t\\sup_{C_{u}} m \\leq C_{\\Lambda} A_{1}^{2} \\, r_{1} u^{-2\\omega} + C_{\\Lambda} A_{1}^{2} \\, r_{1}^{-1} u^{-2(\\omega-1)}.\n\t\\end{equation*}\n\t\nChoosing $r_{1} = u$, we obtain \\eqref{eq:decay1:6}. \\qedhere\n\\end{proof}\n\n\\section{Decay of second derivatives}\\label{sec.decay2}\nIn this section, we establish our second main theorem (Theorem \\ref{main.thm.2}). Throughout the section, we assume that $(\\phi, r, m)$ is a locally BV scattering solution to \\eqref{eq:SSESF} with asymptotically flat initial data of order $\\omega'$ in $C^{1}$, as in Definitions \\ref{def:locBVScat} and \\ref{def:AF}. As discussed in Remark \\ref{rem:wp}, $(\\phi, r, m)$ is then a $C^{1}$ solution to \\eqref{eq:SSESF}. As before, let $\\omega = \\min\\set{\\omega', 3}$. 
\n\n\\subsection{Preparatory lemmas}\nThe following lemma, along with Lemma \\ref{lem:smallPtnl}, provides the crucial smallness for our proof of Theorem \\ref{main.thm.2}.\n\\begin{lemma} \\label{lem:smallDphi}\nFor every $\\epsilon > 0$, there exists $u_{2} > 1$ such that\n\\begin{align}\n\t\\sup_{v \\in [u_{2}, \\infty)} \\int_{\\underline{C}_{v} \\cap \\set{u \\geq u_{2}}} \\abs{\\partial_{u} \\phi} < \\epsilon, \\label{eq:smallDphi:u} \\\\ \n\t\\sup_{u \\in [u_{2}, \\infty)} \\int_{C_{u}} \\abs{\\partial_{v} \\phi} < \\epsilon. \\label{eq:smallDphi:v}\n\\end{align}\n\\end{lemma}\n\\begin{proof} \nWe will only prove \\eqref{eq:smallDphi:u}, leaving the similar proof of \\eqref{eq:smallDphi:v} to the reader. As in the proof of Lemma \\ref{lem:smallPtnl}, we divide $\\calQ$ into $\\PD_{\\mathrm{cpt}}:=\\calQ\\cap\\{r\\leq R\\}$ and $\\PD_{\\mathrm{cpt}}^{c} := \\calQ \\setminus \\PD_{\\mathrm{cpt}}$, and argue separately. First, by Theorem \\ref{thm:decayInCpt}, we have\n\\begin{equation*}\n\t\\sup_{v \\in [u_{2}, \\infty)} \\int_{\\underline{C}_{v} \\cap \\set{u \\geq u_{2}} \\cap \\PD_{\\mathrm{cpt}}} \\abs{\\partial_{u} \\phi} < \\epsilon\/2, \n\\end{equation*}\nfor $u_{2}$ sufficiently large. Next, to derive \\eqref{eq:smallDphi:u} in $\\PD_{\\mathrm{cpt}}^{c}$, we define $u^{\\star}(v) := \\sup \\set{u \\in [u_{2}, v] : r(u,v) \\geq R}$, where we use the convention $u^{\\star}(v) = u_{2}$ when the set is empty. 
Then using Proposition \\ref{prop:geomLocBVScat} and the Cauchy--Schwarz inequality, we compute\n\\begin{align*}\n\t\\int_{\\underline{C}_{v} \\cap \\set{u \\geq u_{2}} \\cap \\PD_{\\mathrm{cpt}}^{c}} \\abs{\\partial_{u} \\phi} \n\t= & \\int_{u_{2}}^{u^{\\star}(v)} \\abs{\\partial_{u} \\phi(u',v)} \\, \\mathrm{d} u' \\\\\n\t\\leq & \\sqrt{\\frac{2 K \\Lambda}{R}} \\sqrt{\\int_{u_{2}}^{u^{\\star}(v)} \\frac{1}{2}(-\\nu)^{-1} (1-\\mu) r^{2} (\\partial_{u} \\phi)^{2} (u', v) \\, \\mathrm{d} u'} \\\\\n\t\\leq & \\sqrt{\\frac{2 K \\Lambda}{R} m(u_{2}, v)} \\leq \\sqrt{\\frac{2 K \\Lambda}{R} M(u_{2})}.\n\\end{align*}\n\nBy \\eqref{eq:zeroMf} (vanishing final Bondi mass), $\\lim_{u_{2} \\to \\infty} M(u_{2}) = 0$, and \\eqref{eq:smallDphi:u} thus follows. \\qedhere\n\\end{proof}\n\nThe next lemma allows us to estimate the first derivative of $\\phi$ at $(u,v)$ in terms of information on $C_{u} \\cap \\set{(u, v'): u \\leq v' \\leq v}$.\n\\begin{lemma} \\label{lem:dphi}\nFor every $(u,v) \\in \\calQ$, the following inequalities hold.\n\\begin{align*}\n\t& \\abs{\\partial_{v} \\phi(u,v)} \\leq \t\t\t\\frac{\\Lambda^{2}}{4} \\sup_{u \\leq v' \\leq v} \\abs{\\partial_{v}^{2}(r\\phi)(u, v')} \\\\\n\t& \\phantom{\\abs{\\partial_{v} \\phi(u,v)} \\leq} \t+ \\frac{\\Lambda^{3}}{4} \\sup_{u \\leq v' \\leq v} \\abs{\\partial_{v} (r \\phi)(u, v')} \\sup_{u \\leq v' \\leq v} \\abs{\\partial_{v} \\lmb (u, v')}, \\\\\n\t& \\abs{\\partial_{u} \\phi(u,v)} \\leq \\Lambda \\sup_{u \\leq v' \\leq v} (- \\nu)(u, v') \\abs{\\partial_{v} \\phi(u,v')}.\n\\end{align*}\n\\end{lemma}\n\n\\begin{proof} \nThe first inequality is an easy consequence of \\eqref{eq:est4dvphi:1} in \\S \\ref{subsec:est4phi}. To prove the second inequality, we start from the equation\n\\begin{equation*}\n\t\\partial_{v} (r \\partial_{u} \\phi) = - \\nu \\partial_{v} \\phi,\n\\end{equation*}\nwhich follows from \\eqref{eq:SSESF:dr} and \\eqref{eq:SSESF:dphi}. 
Therefore, we have\n\\begin{align*}\n\t\\abs{\\partial_{u} \\phi (u,v)} \n\t\\leq & \\frac{1}{r(u,v)} \\int_{u}^{v} (-\\nu) \\abs{\\partial_{v} \\phi} (u, v') \\, \\mathrm{d} v',\n\\end{align*}\nfrom which the second inequality easily follows. \\qedhere\n\\end{proof}\n\nIn the next lemma, we show that improved estimates for $m$ near $\\Gamma$ hold if we assume an $L^{\\infty}$ control of ${\\partial_{v} \\phi}$.\n\\begin{lemma} \\label{lem:muOverR}\nFor every $(u,v) \\in \\calQ$, the following inequalities hold:\n\t\\begin{align} \n\t\\frac{\\mu}{r}(u,v) \\leq & \\Lambda^{2} \\sup_{u \\leq v' \\leq v} \\abs{\\partial_{v} (r \\phi)(u, v')} \\sup_{u \\leq v' \\leq v} \\abs{\\partial_{v} \\phi (u, v')}, \\label{eq:muOverR:1} \\\\\n\t\\frac{\\mu}{r^{2}}(u,v) \\leq & \\frac{\\Lambda^{2}}{3} \\sup_{u \\leq v' \\leq v} \\abs{\\partial_{v} \\phi(u, v')}^{2}. \\label{eq:muOverR:2}\n\t\\end{align}\n\\end{lemma}\n\\begin{proof} \n\tRecall $\\mu = 2m\/r$. By \\eqref{eq:SSESF:dm}, we have\n\t\\begin{equation*}\n\t\t2 m(u,v) = \\int_{u}^{v} (1-\\mu) \\lmb^{-1} r^{2} (\\partial_{v} \\phi)^{2} (u, v') \\, \\mathrm{d} v'.\n\t\\end{equation*}\n\t\n\tPulling everything except $r^{2} \\lmb$ outside the integral and using $\\int_{u}^{v} r^{2} \\lmb(u, v') \\, \\mathrm{d} v' = (1\/3) r^{3}(u,v)$, we obtain \\eqref{eq:muOverR:2}. 
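Here, the identity $\\int_{u}^{v} r^{2} \\lmb (u, v') \\, \\mathrm{d} v' = (1\/3) r^{3}(u,v)$ follows from $\\lmb = \\partial_{v} r$ and the vanishing of $r$ on the axis:\n\\begin{equation*}\n\t\\int_{u}^{v} r^{2} \\partial_{v'} r (u, v') \\, \\mathrm{d} v' = \\frac{1}{3} \\big( r^{3}(u,v) - r^{3}(u,u) \\big) = \\frac{1}{3} r^{3}(u,v).\n\\end{equation*}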
\n\tOn the other hand, using $\\lmb^{-1} r \\partial_{v} \\phi = \\lmb^{-1} \\partial_{v} (r \\phi) - \\phi$ and $\\int_{u}^{v} r \\lmb(u, v') \\, \\mathrm{d} v' = (1\/2) r^{2}(u,v)$, we easily deduce\n\\begin{equation*}\n\t\\frac{\\mu}{r}(u,v) \\leq \\frac{1}{2} \\sup_{u \\leq v' \\leq v} \\Big( \\Lambda^{2} \\abs{\\partial_{v} (r \\phi)(u, v')} + \\Lambda \\abs{\\phi(u, v')} \\Big) \\abs{\\partial_{v} \\phi(u, v')}.\n\\end{equation*}\n\t\n\tFrom the fact that $\\abs{\\phi(u,v)} \\leq \\Lambda \\sup_{u \\leq v' \\leq v} \\abs{\\partial_{v} (r \\phi)(u, v')}$, \\eqref{eq:muOverR:1} easily follows.\\qedhere\n\\end{proof}\n\n\\subsection{Preliminary $r$-decay for $\\partial_{v}^{2} (r \\phi)$ and $\\partial_{v} \\lmb$}\nIn this subsection, we establish decay estimates for $\\partial_{v}^{2} (r \\phi)$ and $\\partial_{v} \\lmb$ which are sharp in terms of $r$-weights in the region $\\PD_{\\mathrm{ext}}$. We remind the reader of the decomposition $\\calQ = \\PD_{\\mathrm{ext}} \\cup \\PD_{\\mathrm{int}}$, where\n\\begin{equation*}\n\t\\PD_{\\mathrm{ext}} = \\set{(u,v) \\in \\calQ : v \\geq 3u}, \\quad \\PD_{\\mathrm{int}} = \\set{(u,v) \\in \\calQ : v \\leq 3u}.\n\\end{equation*}\n\nIn particular, note that $r \\geq 2 \\Lambda^{-1} u > 0$ in $\\PD_{\\mathrm{ext}}$.\n\n\\begin{lemma} \\label{lem:decay2:rDecay}\nThe following estimates hold.\n\\begin{align}\n\t \\sup_{\\PD_{\\mathrm{ext}}} r^{3} \\abs{\\partial_{v} \\lmb} \\leq & C_{K, \\Lambda} A_{1}^{2}, \t \\label{eq:decay2:rDecay:1} \\\\\n\t\\sup_{\\PD_{\\mathrm{ext}}} r^{\\omega+1} \\abs{\\partial_{v}^{2} (r \\phi)} \\leq & C \\mathcal I_{2} + C_{K, \\Lambda, M_{i}} A_{1}^{3}. \\label{eq:decay2:rDecay:2}\n\\end{align}\n\\end{lemma}\n\n\\begin{proof} \nWe begin by proving \\eqref{eq:decay2:rDecay:1}. 
Recall \\eqref{eq:eq4dvdvr:normal}:\n\\begin{equation*} \\tag{\\ref{eq:eq4dvdvr:normal}}\n\\partial_{u} \\partial_{v} \\log \\lmb\n= \\frac{1}{(1-\\mu)} \\lmb^{-1} \\nu (\\partial_{v} \\phi)^{2} - \\frac{4 m}{(1-\\mu) r^{3}} \\lmb \\nu.\n\\end{equation*}\n\nNote that $\\partial_{v} \\log \\lmb= 0$ on $C_{1}$ by our choice of coordinates. Therefore, integrating the preceding equation along the incoming direction from $(1,v)$ to $(u,v)$, we have\n\\begin{equation*}\n\t\\abs{\\partial_{v} \\log \\lmb(u, v)} \\leq \\int_{1}^{u} \\abs{\\frac{1}{(1-\\mu)} \\lmb^{-1} \\nu (\\partial_{v} \\phi)^{2} (u', v)} \\, \\mathrm{d} u' + \\int_{1}^{u} \\abs{\\frac{4 m}{(1-\\mu) r^{3}} \\lmb \\nu (u', v)} \\, \\mathrm{d} u'.\n\\end{equation*}\n\nThen \\eqref{eq:decay2:rDecay:1} follows using Proposition \\ref{prop:geomLocBVScat}, \\eqref{eq:decay1:4} and \\eqref{eq:decay1:6}. We remark that the power of $r$ is dictated by the second integral.\n\nThe proof of \\eqref{eq:decay2:rDecay:2} is very similar. We start by recalling \\eqref{eq:eq4dvdvrphi:normal}:\n\\begin{equation*} \\tag{\\ref{eq:eq4dvdvrphi:normal}}\n\\partial_{u} (\\partial_{v}^{2} (r \\phi)) = \n\\frac{2m \\lmb \\nu}{(1-\\mu) r^{2}} \\, \\partial_{v} \\phi + \\frac{ \\nu}{(1-\\mu) } (\\partial_{v} \\phi)^{2} \\phi \n + \\frac{2m \\nu}{(1-\\mu) r^{2}} (\\partial_{v} \\lmb) \\phi - \\frac{4m}{(1-\\mu) r^{3}} \\lmb^{2} \\nu \\phi.\n\\end{equation*}\n\nFor $u \\geq 1$, we have $r(u,v) \\leq r(1,v)$; moreover, by hypothesis, we have the estimate for the initial data term \n$$(1+r(1,v))^{\\omega'+1} \\abs{\\partial_{v}^{2}(r \\phi)(1,v)} \\leq \\mathcal I_{2} \\, .$$\nTherefore, by the fundamental theorem of calculus, it suffices to bound\n\\begin{align*}\n& \\int_{1}^{u} \\abs{\\frac{2m \\lmb \\nu}{(1-\\mu) r^{2}} \\, \\partial_{v} \\phi (u', v)} \\, \\mathrm{d} u' + \\int_{1}^{u} \\abs{\\frac{ \\nu}{(1-\\mu) } (\\partial_{v} \\phi)^{2} \\phi (u', v)} \\, \\mathrm{d} u' \\\\\n& \\quad + \\int_{1}^{u} \\abs{\\frac{2m 
\\nu}{(1-\\mu) r^{2}} (\\partial_{v} \\lmb) \\phi (u', v) } \\, \\mathrm{d} u' + \\int_{1}^{u} \\abs{\\frac{4m}{(1-\\mu) r^{3}} \\lmb^{2} \\nu \\phi (u', v)} \\, \\mathrm{d} u'\n\\end{align*}\nby $C_{K, \\Lambda, M_{i}} A_{1}^{3} r^{-(\\omega+1)}$. This is an easy consequence of Proposition \\ref{prop:geomLocBVScat}, \\eqref{eq:decay1:1}, \\eqref{eq:decay1:4}, \\eqref{eq:decay1:6} and \\eqref{eq:decay2:rDecay:1}, which has just been established. Note that the last term is what limits $\\omega \\leq 3$. \\qedhere\n\\end{proof}\n\n\\subsection{Propagation of $u$-decay for $\\partial_{u}^{2} (r \\phi)$ and $\\partial_{u} \\nu$}\nHere, we show that certain $u$-decay for $\\partial_{u}^{2} (r \\phi)$ and $\\partial_{u} \\nu$ proved in $\\PD_{\\mathrm{int}}$ can be propagated to $\\calQ$. The technique employed is very similar to that in the previous subsection.\n\\begin{lemma} \\label{lem:decay2:uDecayInExtr}\nFor $U \\geq 1$, suppose that there exist a finite positive constant $A$ and exponents $k_{1}, k_{2}$ such that\n\\begin{equation*}\n\t0 \\leq k_{1} \\leq 2\\omega+1, \\quad\n\t0 \\leq k_{2} \\leq 3\\omega+1, \n\\end{equation*}\nand for $u \\in [1, U]$, we have\n\\begin{equation*}\n\t\\sup_{C_{u} \\cap \\PD_{\\mathrm{int}}} u^{k_{1}} \\abs{\\partial_{u} \\nu} \\leq A, \\quad\n\t\\sup_{C_{u} \\cap \\PD_{\\mathrm{int}}} u^{k_{2}} \\abs{\\partial_{u}^{2}(r\\phi)} \\leq A.\n\\end{equation*}\n\nThen for $u \\in [1, U]$, the following estimates hold.\n\\begin{align}\n\t\\sup_{C_{u}} u^{k_{1}} \\abs{\\partial_{u} \\nu} \\leq & C_{K, \\Lambda} A + C_{K, \\Lambda} A_{1}^{2}, \\label{eq:decay2:uDecayInExtr:1} \\\\\n\t\\sup_{C_{u}} u^{k_{2}} \\abs{\\partial_{u}^{2}(r\\phi)} \\leq & A + C_{K, \\Lambda} A_{1}^{3} + C_{K, \\Lambda} A_{1}^{3} \\, \\sup_{C_{u}} u \\abs{\\partial_{u} \\nu}. 
\\label{eq:decay2:uDecayInExtr:2}\n\\end{align}\n\nFurthermore, the following alternative to \\eqref{eq:decay2:uDecayInExtr:2} also holds.\n\\begin{equation} \\label{eq:decay2:uDecayInExtr:3}\n\\sup_{C_{u}} u^{k_{2}} \\abs{\\partial_{u}^{2}(r\\phi)} \n\t\\leq A + C_{K, \\Lambda} A_{1}^{3} + \n\tC_{K, \\Lambda} \\Psi \\int_{3u}^{\\infty} \\abs{\\frac{2 m \\lmb}{(1-\\mu) r^{2}}}(u, v') \\, \\mathrm{d} v' \\cdot \\sup_{C_{u}} u^{k_{2}} \\abs{\\partial_{u} \\nu}. \n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nLet us begin with \\eqref{eq:decay2:uDecayInExtr:1}. Recall \\eqref{eq:eq4dudur:normal}\n\\begin{equation*} \\tag{\\ref{eq:eq4dudur:normal}}\n\\partial_{v} \\partial_{u} \\log \\nu\n= \\frac{1}{(1-\\mu)} \\lmb \\nu^{-1} (\\partial_{u} \\phi)^{2} - \\frac{4m}{(1-\\mu) r^{3}} \\lmb \\nu.\n\\end{equation*}\n\nGiven $(u,v) \\in \\PD_{\\mathrm{ext}}$ (with $u \\in [1,U]$), let us integrate this equation along the outgoing direction from $(u, 3u)$ to $(u,v)$, take the absolute value and multiply by $u^{k_{1}}$. 
Using the hypothesis\n\\begin{equation*}\n\t\\sup_{\\PD_{\\mathrm{int}} \\cap \\set{(u,v) \\in \\calQ : u \\in [1,U]}} u^{k_{1}} \\abs{\\partial_{u} \\nu} \\leq A,\n\\end{equation*}\n\\eqref{eq:decay2:uDecayInExtr:1} is reduced to showing\n\\begin{align} \n\tu^{k_{1}} \\int_{3u}^{\\infty} \\abs{\\frac{1}{(1-\\mu)} \\lmb \\nu^{-1} (\\partial_{u} \\phi)^{2} (u, v)} \\, \\mathrm{d} v \\leq & C_{K, \\Lambda} A_{1}^{2}, \\label{eq:decay2:uDecayInExtr:pf:1} \\\\ \n\tu^{k_{1}} \\int_{3u}^{\\infty} \\abs{\\frac{4m}{(1-\\mu) r^{3}} \\lmb \\nu (u, v)} \\, \\mathrm{d} v \\leq & C_{K, \\Lambda} A_{1}^{2}, \\label{eq:decay2:uDecayInExtr:pf:2}\n\\end{align}\nfor $u \\in [1, U]$.\n\nUsing Proposition \\ref{prop:geomLocBVScat} and \\eqref{eq:decay1:5}, the left-hand side of \\eqref{eq:decay2:uDecayInExtr:pf:1} is bounded by\n\\begin{equation*}\n\tC_{K, \\Lambda} A_{1}^{2} \\, u^{k_{1} - 2\\omega} \\int_{3u}^{\\infty} \\frac{1}{r^{2}} \\lmb \\, \\mathrm{d} v \n\t\\leq C_{K, \\Lambda} A_{1}^{2} \\, u^{k_{1}- 2\\omega} r^{-1}(u, 3u).\n\\end{equation*}\n\nAs $u \\geq 1$ and $r(u, 3u) \\geq 2 \\Lambda^{-1} u$, \\eqref{eq:decay2:uDecayInExtr:pf:1} follows. Similarly, by \\eqref{eq:bnd4mu} and \\eqref{eq:decay1:6}, the left-hand side of \\eqref{eq:decay2:uDecayInExtr:pf:2} is also bounded by $C_{K, \\Lambda} A_{1}^{2} \\, u^{k_{1}- 2\\omega} r^{-1}(u, 3u)$, from which \\eqref{eq:decay2:uDecayInExtr:pf:2} immediately follows.\n\nNext, we turn to \\eqref{eq:decay2:uDecayInExtr:2} and \\eqref{eq:decay2:uDecayInExtr:3}; as they are proved similarly as before, we will only outline the main points. 
Recall \\eqref{eq:eq4dudurphi:normal}:\n\\begin{equation*} \\tag{\\ref{eq:eq4dudurphi:normal}}\n\\partial_{v} (\\partial_{u}^{2} (r \\phi)) = \n\\frac{2m \\lmb \\nu}{(1-\\mu) r^{2}} \\, \\partial_{u} \\phi + \\frac{\\lmb }{(1-\\mu) } (\\partial_{u} \\phi)^{2} \\phi \n + \\frac{2m \\lmb}{(1-\\mu) r^{2}} (\\partial_{u} \\nu) \\phi - \\frac{4m}{(1-\\mu) r^{3}} \\lmb \\nu^{2} \\phi.\n\\end{equation*}\n\nFix $(u,v) \\in \\PD_{\\mathrm{ext}}$ with $u \\in [1,U]$. We then integrate the preceding equation along the outgoing direction from $(u, 3u)$ to $(u, v)$, take the absolute value and multiply by $u^{k_{2}}$. In order to prove \\eqref{eq:decay2:uDecayInExtr:2}, in view of the hypothesis\n\\begin{equation*}\n\t\\sup_{\\PD_{\\mathrm{int}} \\cap \\set{(u, v) \\in \\calQ : u \\in [1, U]}} u^{k_{2}} \\abs{\\partial_{u}^{2} (r \\phi)} \\leq A,\n\\end{equation*}\nit suffices to establish the following estimates for $u \\in [1, U]$:\n\\begin{align*}\n\tu^{k_{2}} \\int_{3u}^{\\infty} \\abs{\\frac{2m \\lmb \\nu}{(1-\\mu) r^{2}} \\, \\partial_{u} \\phi (u,v)}\\, \\mathrm{d} v \n\t\\leq & C_{K, \\Lambda} A_{1}^{3}, \\\\\n\tu^{k_{2}} \\int_{3u}^{\\infty} \\abs{\\frac{\\lmb }{(1-\\mu) } (\\partial_{u} \\phi)^{2} \\phi (u,v)}\\, \\mathrm{d} v \n\t\\leq & C_{K, \\Lambda} A_{1}^{3}, \\\\\n\tu^{k_{2}} \\int_{3u}^{\\infty} \\abs{\\frac{2m \\lmb}{(1-\\mu) r^{2}} (\\partial_{u} \\nu) \\phi (u,v)}\\, \\mathrm{d} v \n\t\\leq & C_{K, \\Lambda} A_{1}^{3} \\, \\sup_{\\calQ} u \\abs{\\partial_{u} \\nu}, \\\\\n\tu^{k_{2}} \\int_{3u}^{\\infty} \\abs{\\frac{4m}{(1-\\mu) r^{3}} \\lmb \\nu^{2} \\phi (u,v)} \\, \\mathrm{d} v\n\t\\leq & C_{K, \\Lambda} A_{1}^{3}.\n\\end{align*}\n\nThe proofs of these estimates are similar to those of \\eqref{eq:decay2:uDecayInExtr:pf:1} and \\eqref{eq:decay2:uDecayInExtr:pf:2}; we omit the details. 
To prove \\eqref{eq:decay2:uDecayInExtr:3}, we replace the third estimate by\n\\begin{equation*}\n\tu^{k_{2}} \\int_{3u}^{\\infty} \\abs{\\frac{2m \\lmb}{(1-\\mu) r^{2}} (\\partial_{u} \\nu) \\phi (u,v)}\\, \\mathrm{d} v \n\t\\leq C_{K, \\Lambda} \\Psi \\int_{3u}^{\\infty} \\abs{\\frac{2 m \\lmb}{(1-\\mu) r^{2}} }(u, v') \\, \\mathrm{d} v' \\cdot \\sup_{C_{u}} u^{k_{2}} \\abs{\\partial_{u} \\nu},\n\\end{equation*}\nwhich is an easy consequence of Proposition \\ref{prop:geomLocBVScat}.\n\\qedhere\n\\end{proof}\n\n\n\\subsection{Full decay for $\\partial_{v}^{2} (r \\phi)$, $\\partial_{u}^{2} (r \\phi)$, $\\partial_{v} \\lmb$ and $\\partial_{u} \\nu$} \\label{sec.full.decay.2}\nWith all the preparations so far, we are finally ready to prove Theorem \\ref{main.thm.2}. Our proof consists of two steps.\nThe first step is to use the local BV scattering assumption to prove a preliminary decay rate of $u^{-\\omega}$ for $\\partial_{v}^{2} (r \\phi)$, $\\partial_{u}^{2} (r \\phi)$, $\\partial_{v} \\lmb$ and $\\partial_{u} \\nu$. In this step, it is crucial to pass to the \\emph{renormalized variables} and exploit the null structure of \\eqref{eq:SSESF}, in order to utilize the a priori bounds in the local BV scattering assumption. The second step is to upgrade these decay rates to those claimed in Theorem \\ref{main.thm.2}. Thanks to the preliminary $u^{-\\omega}$ decay from the first step, the null structure is not necessary at this point.\n\nWe now begin with the first step. 
The null structure of \\eqref{eq:SSESF} as demonstrated in \\S \\ref{subsec:nullStr} is used in an essential way.\n\n\\begin{proposition} \\label{prop:decay2:nullStr}\nThere exists a finite constant $A_{2}' > 0$ such that the following estimates hold.\n\\begin{align*}\n\t& \\abs{\\partial_{v}^{2} (r \\phi)} \\leq A_{2}' u^{-\\omega}, \\quad\n\t\\abs{\\partial_{u}^{2} (r \\phi)} \\leq A_{2}' u^{-\\omega}, \\\\\n\t& \\abs{\\partial_{v} \\lmb} \\leq A_{2}' u^{-\\omega}, \\qquad\n\t\\abs{\\partial_{u} \\nu} \\leq A_{2}' u^{-\\omega}.\n\\end{align*}\n\\end{proposition}\n\n\\begin{proof} \nFor $U > 1$, we define \n\\begin{equation} \\label{eq:decay2:def4B2}\n\t\\mathcal B_{2}(U) := \\sup_{u \\in [1, U]} \\sup_{C_{u}} \\Big( u^{\\omega} \\abs{\\partial_{v}^{2} (r \\phi)} +u^{\\omega} \\abs{\\partial_{u}^{2} (r \\phi)} \n\t\t\t\t\t+ u^{\\omega} \\abs{\\partial_{v} \\lmb} + u^{\\omega} \\abs{\\partial_{u} \\nu} \\Big).\n\\end{equation}\n \n\n\nNotice that the above is finite for every fixed $U$ due to Lemmas \\ref{lem:decay2:rDecay} and \\ref{lem:decay2:uDecayInExtr}. As indicated earlier, we need to use the null structure of the system \\eqref{eq:SSESF} as in \\S \\ref{subsec:nullStr}. 
For convenience, we define the shorthands\n\\begin{align*}\n\tF_{1} := & \\partial_{v}^{2} (r \\phi) - (\\partial_{v} \\lmb) \\phi, \\\\\n\tG_{1} := & \\partial_{u}^{2} (r \\phi) - (\\partial_{u} \\nu) \\phi, \n\\end{align*}\nand\n\\begin{align*}\n\tF_{2} := & \\partial_{v} \\log \\lmb - \\frac{\\lmb}{(1-\\mu)} \\frac{\\mu}{r} + \\partial_{v} \\phi \\Big( \\lmb^{-1} \\partial_{v} (r \\phi) - \\nu^{-1} \\partial_{u} ( r \\phi) \\Big), \\\\\n\tG_{2} := & \\partial_{u} \\log (-\\nu) - \\frac{\\nu}{(1-\\mu)} \\frac{\\mu}{r} + \\partial_{u} \\phi \\Big( \\lmb^{-1} \\partial_{v} (r \\phi) - \\nu^{-1} \\partial_{u} (r \\phi) \\Big).\n\\end{align*}\n\nThen \\eqref{eq:eq4dvdvrphi}, \\eqref{eq:eq4dudurphi}, \\eqref{eq:eq4dvdvr} and \\eqref{eq:eq4dudur} may be rewritten in the following fashion.\n\\begin{align} \n& \\partial_{u} F_{1} = \\partial_{u} \\lmb \\, \\partial_{v} \\phi - \\partial_{v} \\lmb \\, \\partial_{u} \\phi, \\label{eq:decay2:nullStr:pf:1} \\\\\n& \\partial_{u} F_{2} = \\partial_{u} \\phi \\, \\partial_{v}\\Big( \\nu^{-1} \\partial_{u} (r \\phi) \\Big)- \\partial_{v} \\phi \\, \\partial_{u} \\Big( \\nu^{-1} \\partial_{u} (r \\phi) \\Big), \\label{eq:decay2:nullStr:pf:2} \\\\\n& \\partial_{v} G_{1} = \\partial_{v} \\nu \\, \\partial_{u} \\phi - \\partial_{u} \\nu \\, \\partial_{v} \\phi,\\label{eq:decay2:nullStr:pf:3} \\\\\n& \\partial_{v} G_{2} = - \\partial_{u} \\phi \\, \\partial_{v} \\Big( \\lmb^{-1} \\partial_{v} (r \\phi) \\Big) + \\partial_{v} \\phi \\, \\partial_{u} \\Big( \\lmb^{-1} \\partial_{v} (r \\phi) \\Big). 
\\label{eq:decay2:nullStr:pf:4}\n\\end{align}\n\nThe following lemma is the key technical component of the proof.\n\n\\begin{lemma} \\label{lem:decay2:key4nullStr}\nThere exist a finite positive constant $C = C_{A_{1}, \\mathcal I_{2}, K, \\Lambda}$ and a positive function $\\epsilon(u)$ satisfying\n\\begin{equation*}\n\t\\epsilon(u) \\to 0 \\hbox{ as } u \\to \\infty\n\\end{equation*}\nsuch that the following inequalities hold for $1 \\leq u_{2} \\leq U$:\n\\begin{align}\n\t\\sup_{\\PD_{\\mathrm{int}} \\cap \\set{(u,v) \\in \\calQ : u \\in [3 u_{2}, U]}} \\Big( u^{\\omega} \\abs{F_{1}} + u^{\\omega} \\abs{G_{1}} \\Big)\n\t \\leq & C_{\\Lambda} \\mathcal I_{2} + C_{K, \\Lambda, M_{i}} A_{1}^{3} + \\epsilon(u_{2}) \\mathcal B_{2}(U), \\label{eq:decay2:key4nullStr:1} \\\\\n\t \\sup_{\\PD_{\\mathrm{int}} \\cap \\set{(u,v) \\in \\calQ : u \\in [3 u_{2}, U]}} \\Big( u^{\\omega} \\abs{F_{2}} + u^{\\omega} \\abs{G_{2}} \\Big)\n\t \\leq & C_{K, \\Lambda} A_{1}^{2} + \\epsilon(u_{2}) \\mathcal B_{2}(U). \\label{eq:decay2:key4nullStr:2} \n\\end{align}\n\\end{lemma}\n\nWe defer the proof of this lemma until later. 
Instead, we first finish the proof of Proposition \\ref{prop:decay2:nullStr}, assuming Lemma \\ref{lem:decay2:key4nullStr}.\n\n\\noindent\\emph{Proof of Proposition \\ref{prop:decay2:nullStr}.}\nFirst, we claim that \\eqref{eq:decay2:key4nullStr:1} and \\eqref{eq:decay2:key4nullStr:2} imply\n\\begin{equation} \\label{eq:decay2:nullStr:pf:5}\n\t\\sup_{\\PD_{\\mathrm{int}} \\cap \\set{(u,v) \\in \\calQ : u \\in [3 u_{2}, U]}} u^{\\omega} \\Big( \\abs{\\partial_{v}^{2} (r \\phi)} + \\abs{\\partial_{u}^{2} (r \\phi)} + \\abs{\\partial_{v} \\lmb} + \\abs{\\partial_{u} \\nu} \\Big)\n\t\\leq H_{2} + (\\epsilon + \\epsilon')(u_{2}) \\mathcal B_{2}(U),\n\\end{equation}\nfor some constant $0 < H_{2} < \\infty$ and some positive function $\\epsilon'(u_{2})$ which tends to zero as $u_{2} \\to \\infty$.\n\nThe point is that $F_{1}$, $G_{1}$, $F_{2}$, $G_{2}$ control $\\partial_{v}^{2} (r \\phi)$, $\\partial_{u}^{2} (r \\phi)$, $\\partial_{v} \\lmb$, $\\partial_{u} \\nu$, respectively, up to higher order terms, which may be absorbed into the second term on the right-hand side. Indeed, consider $u \\in [3 u_{2}, U]$. For $F_{1}$ and $G_{1}$, we estimate\n\\begin{align*}\n\t& u^{\\omega} \\abs{\\partial_{v}^{2} (r \\phi)(u,v)} = u^{\\omega} \\abs{F_{1} + (\\partial_{v} \\lmb) \\phi}(u,v) \\leq u^{\\omega} \\abs{F_{1}(u,v)} + \\sup_{C_{u}} \\abs{\\phi} \\cdot \\mathcal B_{2}(U), \\\\\n\t& u^{\\omega} \\abs{\\partial_{u}^{2} (r \\phi)(u,v)} = u^{\\omega} \\abs{G_{1} + (\\partial_{u} \\nu) \\phi}(u,v) \\leq u^{\\omega} \\abs{G_{1}(u,v)} + \\sup_{C_{u}} \\abs{\\phi} \\cdot \\mathcal B_{2}(U),\n\\end{align*}\nwhich are acceptable, since $\\sup_{C_{u}} \\abs{\\phi} \\to 0$ uniformly in $u \\geq 3 u_{2}$ as $u_{2} \\to \\infty$ by Theorem \\ref{main.thm.1}. 
\nFor $F_{2}$, we use Proposition \\ref{prop:geomLocBVScat} to estimate\n\\begin{align*}\n\tu^{\\omega} \\abs{\\partial_{v} \\lmb} = & u^{\\omega} \\lmb \\Big\\vert F_{2} + \\frac{\\lmb}{1-\\mu} \\frac{\\mu}{r} + \\partial_{v} \\phi ( \\lmb^{-1} \\partial_{v} (r \\phi) - \\nu^{-1} \\partial_{u} (r \\phi) ) \\Big\\vert \\\\\n\t\\leq &\\frac{1}{2} u^{\\omega} \\abs{F_{2}} + \\frac{K \\Lambda}{4} u^{\\omega} \\abs{\\frac{\\mu}{r}} + \\frac{\\Lambda}{2} u^{\\omega} \\abs{\\partial_{v} \\phi} \\Big( \\abs{\\partial_{v} (r \\phi)} + \\abs{\\partial_{u} (r \\phi)} \\Big).\n\\end{align*}\n\nApplying \\eqref{eq:muOverR:1} (from Lemma \\ref{lem:muOverR}) to the second term on the last line, and then using Lemma \\ref{lem:dphi} to control $u^{\\omega} \\abs{\\partial_{v} \\phi}$, we arrive at \n\\begin{equation*}\nu^{\\omega} \\abs{\\partial_{v} \\lmb(u,v)} \\leq \\frac{1}{2} u^{\\omega} \\abs{F_{2}(u,v)} + C_{K, \\Lambda} \\, \\Psi \\sup_{C_{u}} \\Big( \\abs{\\partial_{v} (r \\phi)} + \\abs{\\partial_{u} (r \\phi)} \\Big) \\cdot \\mathcal B_{2}(U),\n\\end{equation*}\nwhich is acceptable in view of Theorem \\ref{main.thm.1}. 
Proceeding similarly, but also using the second inequality of Lemma \\ref{lem:dphi} to control $\\abs{\\partial_{u} \\phi}$, we obtain\n\\begin{equation*}\nu^{\\omega} \\abs{\\partial_{u} \\nu(u,v)} \\leq K u^{\\omega} \\abs{G_{2}(u,v)} + C_{K, \\Lambda} \\, \\Psi \\sup_{C_{u}} \\Big( \\abs{\\partial_{v} (r \\phi)} + \\abs{\\partial_{u} (r \\phi)} \\Big) \\cdot \\mathcal B_{2}(U).\n\\end{equation*}\n\nCombining these estimates with \\eqref{eq:decay2:key4nullStr:1} and \\eqref{eq:decay2:key4nullStr:2}, we conclude \\eqref{eq:decay2:nullStr:pf:5} with\n\\begin{align} \n\tH_{2} =& C_{\\Lambda} \\mathcal I_{2} + C_{K, \\Lambda, M_{i}} A_{1}^{3} + C_{K, \\Lambda} A_{1}^{2}, \n\t\\label{eq:decay2:H2} \\\\\n\t\\epsilon'(u_{2}) =& C \\sup_{u \\geq 3u_{2}} \\abs{\\phi} + C_{K, \\Lambda} \\Psi \\sup_{u \\geq 3u_{2}} \\Big( \\abs{\\partial_{v}(r \\phi)} + \\abs{\\partial_{u}(r \\phi)} \\Big).\n\t\\label{eq:decay2:eps'}\n\\end{align}\n\nNext, note that the (non-decreasing) function\n\\begin{equation} \\label{eq:decay2:def4H'2}\n\tH'_{2}(u_{2}) := \\sup_{\\PD_{\\mathrm{int}} \\cap \\set{(u,v) \\in \\calQ : u \\in [1, 3u_{2}]}} u^{\\omega} \\Big( \\abs{\\partial_{v}^{2} (r \\phi)} + \\abs{\\partial_{u}^{2} (r \\phi)} + \\abs{\\partial_{v} \\lmb} + \\abs{\\partial_{u} \\nu} \\Big) \\geq 0\n\\end{equation}\nis always \\emph{finite} for any fixed $u_{2} \\geq 1$, as the region $\\PD_{\\mathrm{int}} \\cap \\set{(u,v) \\in \\calQ : u \\in [1, 3u_{2}]}$ is compact and each of these terms is a continuous function, since $(\\phi, r, m)$ is a $C^{1}$ solution (see Definition \\ref{def:C1solution}). 
Combining with \\eqref{eq:decay2:nullStr:pf:5}, we obtain\n\\begin{equation*} \n\t\\sup_{\\PD_{\\mathrm{int}} \\cap \\set{(u,v) \\in \\calQ : u \\in [1, U]}} u^{\\omega} \\Big( \\abs{\\partial_{v}^{2} (r \\phi)} + \\abs{\\partial_{u}^{2} (r \\phi)} + \\abs{\\partial_{v} \\lmb} + \\abs{\\partial_{u} \\nu} \\Big)\n\t\\leq H_{2} + H'_{2}(u_{2}) + (\\epsilon + \\epsilon')(u_{2}) \\mathcal B_{2}(U),\n\\end{equation*}\nfor every $u_2\\in [1,U]$.\n\nNow apply \\eqref{eq:decay2:uDecayInExtr:1}, \\eqref{eq:decay2:uDecayInExtr:3} in Lemma \\ref{lem:decay2:uDecayInExtr} to $\\partial_{u}^{2}(r \\phi)$, $\\partial_{u} \\nu$. Apply also Lemma \\ref{lem:decay2:rDecay} (along with the fact that $r(u,v) \\geq 2 \\Lambda^{-1} u$ in $\\PD_{\\mathrm{ext}}$ and $\\omega \\leq 3$) to $\\partial_{v}^{2} (r \\phi)$, $\\partial_{v} \\lmb$ in $\\PD_{\\mathrm{ext}}$. Then we see that there exist a non-negative and non-decreasing function $H''_{2}(u_2)$ and a positive function $\\epsilon''(u_{2})$ such that\n\\begin{equation*} \n\t\\mathcal B_{2}(U) \\leq H_{2}''(u_2) + \\epsilon''(u_{2}) \\mathcal B_{2}(U),\n\\end{equation*}\nand $\\epsilon''(u_{2}) \\to 0$ as $u_{2} \\to \\infty$. Taking $u_{2}$ sufficiently large, the second term on the right-hand side can be absorbed into the left-hand side; then we conclude that $\\mathcal B_{2}(U) \\leq C_{A_{1}, K, \\Lambda} H''_{2}(u_{2})$. As this bound is independent of $U$, Proposition \\ref{prop:decay2:nullStr} then follows. 
\\qedhere\n\\end{proof}\n\n\\begin{remark} \nUsing \\eqref{eq:decay2:uDecayInExtr:1}, \\eqref{eq:decay2:uDecayInExtr:3} in Lemma \\ref{lem:decay2:uDecayInExtr} and \\eqref{eq:decay2:rDecay:1}, \\eqref{eq:decay2:rDecay:2} in Lemma \\ref{lem:decay2:rDecay}, the functions $H_{2}''(u_{2})$ and $\\epsilon''(u_{2})$ can be explicitly bounded from above as follows:\n\\begin{align}\n\tH_{2}''(u_2) \\leq & C_{K, \\Lambda} \\Big( 1 + \\Psi \\int_{3}^{\\infty} \\abs{\\frac{2 m \\lmb}{(1-\\mu) r^{2}}}(u, v') \\, \\mathrm{d} v' \\Big) \\cdot (H_{2} + H'_{2}(u_{2}) + A_{1}^{2} + A_{1}^{3}) \t\n\t\\label{eq:decay2:H2''} \\\\\n\t\t\t& + C \\mathcal I_{2} + C_{K, \\Lambda} A_{1}^{2} + C_{K, \\Lambda, M_{i}} A_{1}^{3} \n\t\\notag \\\\\n\t\\epsilon''(u_{2}) \\leq & C_{K, \\Lambda} \\Big( 1 + \\Psi \\int_{3}^{\\infty} \\abs{\\frac{2 m \\lmb}{(1-\\mu) r^{2}}}(u, v') \\, \\mathrm{d} v' \\Big) \\cdot (\\epsilon + \\epsilon')(u_{2}).\n\t\\label{eq:decay2:eps''}\n\\end{align}\n\nThese bounds will be useful in our proof of Theorem \\ref{thm:smallData} in Section \\ref{sec:smallData}.\n\\end{remark}\n\nAt this point, in order to complete the proof of Proposition \\ref{prop:decay2:nullStr}, we are only left to prove Lemma \\ref{lem:decay2:key4nullStr}.\n\\begin{proof}[Proof of Lemma \\ref{lem:decay2:key4nullStr}]\nLet $(u, v) \\in \\PD_{\\mathrm{int}}$ (i.e., $v \\in [u, 3u]$) with $u \\in [3 u_{2}, U]$. In this proof, we will use the notation $\\epsilon(u_{2})$ to refer to a positive quantity which may be made arbitrarily small by choosing $u_{2}$ large enough, and which may vary from line to line.\n\nWe first estimate $F_1$ and $F_2$. 
Integrating \\eqref{eq:decay2:nullStr:pf:1} and \\eqref{eq:decay2:nullStr:pf:2} along the incoming direction from $(u\/3, v)$ to $(u,v)$, we obtain\n\\begin{align*}\n\t\\abs{F_{1}(u,v)} \\leq & \\abs{F_{1}(u\/3, v)} + \\int_{u\/3}^{u} \\abs{\\partial_{u} \\lmb \\, \\partial_{v} \\phi(u', v)} + \\abs{\\partial_{v} \\lmb \\, \\partial_{u} \\phi (u', v)} \\, \\mathrm{d} u', \\\\\n\t\\abs{F_{2}(u,v)} \\leq & \\abs{F_{2}(u\/3, v)} + \\int_{u\/3}^{u} \\abs{\\partial_{u} \\phi \\, \\partial_{v}( \\nu^{-1} \\partial_{u} (r \\phi) )(u', v)} + \\abs{\\partial_{v} \\phi \\, \\partial_{u} ( \\nu^{-1} \\partial_{u} (r \\phi) )(u', v)} \\, \\mathrm{d} u'.\n\\end{align*}\n\nMultiply both sides of these inequalities by $u^{\\omega}$. For $v \\in [u, 3u]$, note that $(u\/3, v) \\in \\PD_{\\mathrm{ext}}$ and $u \\leq (3 \\Lambda\/2) r(u\/3, v)$. \nTherefore, using Theorem \\ref{main.thm.1} for $\\phi$, $\\partial_v(r\\phi)$, Corollary \\ref{cor:decay1} for $\\partial_{v} \\phi$, Lemma \\ref{lem:muOverR} for $\\mu\/r$ and Lemma \\ref{lem:decay2:rDecay} for $\\partial_{v}^{2}(r\\phi)$, $\\partial_{v} \\lmb$, we have\n\\begin{align*}\n\tu^{\\omega} \\abs{F_{1} (u\/3, v)} \n\t\t\\leq & C_{\\Lambda} r^{\\omega} \\Big( \\abs{\\partial_{v}^{2} (r \\phi)} + \\abs{(\\partial_{v} \\lmb) \\phi} \\Big) (u\/3, v) \\\\\n\t\t\\leq & C_{\\Lambda} \\mathcal I_{2} + C_{K, \\Lambda, M_{i}} A_{1}^{3}, \\\\\n\tu^{\\omega} \\abs{F_{2} (u\/3, v)} \n\t\t\\leq & C_{\\Lambda} r^{\\omega} \\Big( \\abs{\\lmb^{-1} \\partial_{v} \\lmb} + \\frac{\\mu}{(1-\\mu)} \\frac{\\lmb}{r} + \\abs{\\partial_{v} \\phi ( \\lmb^{-1} \\partial_{v} (r \\phi) - \\nu^{-1} \\partial_{u} ( r \\phi) )} \\Big) (u\/3, v)\\\\\n\t\t\\leq & C_{K, \\Lambda} A_{1}^{2}.\n\\end{align*}\n\nTherefore, we only need to deal with the $u'$-integrals. 
For $u \\in [3 u_{2}, U]$, we claim that\n\\begin{align}\n\tu^{\\omega} \\int_{u\/3}^{u} \\abs{\\partial_{u} \\lmb(u', v)} \\abs{\\partial_{v} \\phi(u',v)} \\, \\mathrm{d} u' \\leq & \\epsilon(u_{2}) \\mathcal B_{2}(U), \\label{eq:decay2:key4nullStr:pf:1} \\\\\n\tu^{\\omega} \\int_{u\/3}^{u} \\abs{\\partial_{v} \\lmb(u', v)} \\abs{\\partial_{u} \\phi(u',v)}\\, \\mathrm{d} u' \\leq & \\epsilon(u_{2}) \\mathcal B_{2}(U), \\label{eq:decay2:key4nullStr:pf:2} \\\\\n\tu^{\\omega} \\int_{u\/3}^{u} \\abs{\\partial_{u} \\phi(u', v)} \\abs{\\partial_{v} (\\nu^{-1} \\partial_{u} (r \\phi))(u', v)} \\, \\mathrm{d} u' \\leq & \\epsilon (u_{2}) \\mathcal B_{2}(U), \\label{eq:decay2:key4nullStr:pf:3} \\\\\n\tu^{\\omega} \\int_{u\/3}^{u} \\abs{\\partial_{v} \\phi(u', v)} \\abs{\\partial_{u} (\\nu^{-1} \\partial_{u} (r \\phi))(u', v)} \\, \\mathrm{d} u' \\leq & \\epsilon (u_{2}) \\mathcal B_{2}(U). \\label{eq:decay2:key4nullStr:pf:4}\n\\end{align}\n\n\\pfstep{Proof of \\eqref{eq:decay2:key4nullStr:pf:1}}\nWe proceed similarly as in the proof of Theorem \\ref{main.thm.1}. 
By \\eqref{eq:SSESF:dr}, \\eqref{eq:bnd4dvrphi} and Lemma \\ref{lem:dphi}, we estimate\n\\begin{align*}\n& u^{\\omega} \\int_{u\/3}^{u} \\abs{\\partial_{u} \\lmb(u', v)} \\abs{\\partial_{v} \\phi(u',v)} \\, \\mathrm{d} u' \\\\\n& \\quad \\leq C_{\\Lambda} \\Big( \\int_{u_{2}}^{v} \\abs{\\frac{2m \\nu}{(1-\\mu) r^{2}}(u', v)} \\, \\mathrm{d} u' \\Big) \\sup_{u' \\in [u\/3, u]} \\sup_{C_{u'}} (u')^{\\omega} (\\abs{\\partial^{2}_{v}(r \\phi)} + \\Psi \\abs{\\partial_{v} \\lmb} ) \\\\\n& \\quad \\leq C_{\\Lambda, \\Psi} \\Big( \\int_{u_{2}}^{v} \\abs{\\frac{2m \\nu}{(1-\\mu) r^{2}}(u', v)} \\, \\mathrm{d} u' \\Big) \\mathcal B_{2}(U).\n\\end{align*}\n\nThus \\eqref{eq:decay2:key4nullStr:pf:1} follows by Lemma \\ref{lem:smallPtnl}.\n\n\\pfstep{Proof of \\eqref{eq:decay2:key4nullStr:pf:2}}\nWe have\n\\begin{align*}\nu^{\\omega} \\int_{u\/3}^{u} \\abs{\\partial_{v} \\lmb(u', v)} \\abs{\\partial_{u} \\phi(u',v)}\\, \\mathrm{d} u'\n\\leq & C \\Big( \\int_{u_{2}}^{v} \\abs{\\partial_{u} \\phi(u', v)} \\, \\mathrm{d} u' \\Big) \\sup_{u' \\in [u\/3, u]} \\sup_{C_{u'}} (u')^{\\omega} \\abs{\\partial_{v} \\lmb} \\\\\n\\leq & C \\Big( \\int_{u_{2}}^{v} \\abs{\\partial_{u} \\phi(u', v)} \\, \\mathrm{d} u' \\Big) \\mathcal B_{2}(U).\n\\end{align*}\n\nThus \\eqref{eq:decay2:key4nullStr:pf:2} follows by Lemma \\ref{lem:smallDphi}.\n\n\\pfstep{Proof of \\eqref{eq:decay2:key4nullStr:pf:3}}\nWe start with the identity\n\\begin{equation*}\n\t\\partial_{v} (\\nu^{-1} \\partial_{u} (r\\phi)) = - \\frac{2m}{(1-\\mu)r^{2}} \\lmb ( \\nu^{-1} \\partial_{u}(r \\phi) - \\phi),\n\\end{equation*}\nwhich is readily verifiable using \\eqref{eq:SSESF:dr} and \\eqref{eq:SSESF:dphi}. 
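For the reader's convenience, here is the short computation behind this verification; we use the forms $\\partial_{v} \\nu = \\frac{2m \\lmb \\nu}{(1-\\mu) r^{2}}$ and $\\partial_{u} \\partial_{v} (r \\phi) = \\frac{2m \\lmb \\nu}{(1-\\mu) r^{2}} \\phi$ of \\eqref{eq:SSESF:dr} and \\eqref{eq:SSESF:dphi} recalled earlier. Indeed,\n\\begin{equation*}\n\t\\partial_{v} (\\nu^{-1} \\partial_{u} (r\\phi)) \n\t= - \\nu^{-2} (\\partial_{v} \\nu) \\, \\partial_{u} (r \\phi) + \\nu^{-1} \\partial_{v} \\partial_{u} (r \\phi) \n\t= - \\frac{2m \\lmb}{(1-\\mu) r^{2}} \\Big( \\nu^{-1} \\partial_{u} (r \\phi) - \\phi \\Big),\n\\end{equation*}\nas claimed. 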
By \\eqref{eq:bnd4phi} and \\eqref{eq:bnd4durphi}, we estimate\n\\begin{align*}\n& u^{\\omega} \\int_{u\/3}^{u} \\abs{\\partial_{u} \\phi(u', v)} \\abs{\\partial_{v} (\\nu^{-1} \\partial_{u} (r \\phi))(u', v)} \\, \\mathrm{d} u' \\\\\n& \\quad \\leq C_{K, \\Lambda} \\Psi \\Big( \\int_{u_{2}}^{v} \\abs{\\frac{2m \\nu}{(1-\\mu) r^{2}} (u', v)} \\, \\mathrm{d} u' \\Big) \\, \\sup_{u' \\in [u\/3, u]} \\sup_{C_{u'}} (u')^{\\omega}\\abs{\\partial_{u} \\phi}.\n\\end{align*}\n\nThe $u'$-integral vanishes as $u_{2} \\to \\infty$ by Lemma \\ref{lem:smallPtnl}. On the other hand, by Lemma \\ref{lem:dphi} and Proposition \\ref{prop:geomLocBVScat}, we have\n\\begin{equation} \\label{eq:decay2:key4nullStr:pf:3:1}\n\\sup_{C_{u'}} (u')^{\\omega}\\abs{\\partial_{u} \\phi}\n\\leq C_{K, \\Lambda} \\sup_{C_{u'}} (u')^{\\omega}\\abs{\\partial_{v} \\phi} \n\\leq C_{K, \\Lambda, \\Psi} \\mathcal B_{2}(U),\n\\end{equation}\nfor any $u' \\in [1, U]$.\nTherefore, \\eqref{eq:decay2:key4nullStr:pf:3} follows.\n\n\\pfstep{Proof of \\eqref{eq:decay2:key4nullStr:pf:4}}\nHere we divide the integral into two, one in $\\PD_{\\mathrm{cpt}}$ and the other outside. Recall the notation $u^{\\star}(v) = \\sup \\set{u \\in [1, v] : r (u,v) \\geq R}$. Below, we will consider the case $u^{\\star}(v) \\in [u\/3, u]$, i.e., when the line segment $\\set{(u', v) \\in \\calQ : u' \\in [u\/3, u]}$ crosses $\\set{r = R}$; the other case is easier, and can be handled with a minor modification.\n\nWe first deal with the integral over the portion in $\\PD_{\\mathrm{cpt}}$. We claim that\n\\begin{equation*}\n\tu^{\\omega} \\int_{u^{\\star}(v)}^{u} \\abs{\\partial_{v} \\phi(u', v)} \\abs{\\partial_{u} (\\nu^{-1} \\partial_{u} (r \\phi))(u', v)} \\, \\mathrm{d} u' \\leq \\epsilon (u_{2}) \\mathcal B_{2}(U). 
\n\\end{equation*}\n\nThis is an easy consequence of the bound for $|\\partial_v\\phi|$ in Lemma \\ref{lem:dphi}, the fact that $u$, $u'$ are comparable over the domain of integration, and\n\\begin{equation*}\n\t\\sup_{v \\in [u_{2}, \\infty)} \\int_{\\underline{C}_{v} \\cap \\set{u \\geq u_{2}} \\cap \\PD_{\\mathrm{cpt}}} \\abs{\\partial_{u} (\\nu^{-1} \\partial_{u} (r \\phi))} \\to 0 \\hbox{ as } u_{2} \\to \\infty,\n\\end{equation*}\nwhich follows from \\eqref{eq:decay1:3}, \\eqref{eq:bnd4dur} and Theorem \\ref{thm:decayInCpt}.\n\nWe now consider the remaining contribution to the integral. We begin as follows.\n\\begin{align*}\n\t& u^{\\omega} \\int_{u\/3}^{u^{\\star}(v)} \\abs{\\partial_{v} \\phi(u', v)} \\abs{\\partial_{u} (\\nu^{-1} \\partial_{u} (r \\phi))(u', v)} \\, \\mathrm{d} u' \\\\\n\t& \\quad \\leq C_{K, \\Lambda} \\Big( \\int_{u\/3}^{u^{\\star}(v)} \\abs{\\partial_{v} \\phi(u', v)} \\, \\mathrm{d} u' \\Big) \\sup_{u' \\in [u\/3, u^{\\star}(v)]} \\sup_{C_{u'}} (u')^{\\omega} (\\abs{\\partial_{u}^{2}(r \\phi)} + \\Psi \\abs{\\partial_{u} \\nu}) \\\\\n\t& \\quad \\leq C_{K, \\Lambda, \\Psi} \\Big( \\int_{u\/3}^{u^{\\star}(v)} \\abs{\\partial_{v} \\phi(u', v)} \\, \\mathrm{d} u' \\Big) \\mathcal B_{2}(U).\n\\end{align*}\n\nFor $u' \\in [u\/3, u^{\\star}(v)]$, we have $r (u', v) \\geq R$. Thus, by \\eqref{eq:decay1:4}, we have\n\\begin{equation*}\n\t\\int_{u\/3}^{u^{\\star}(v)} \\abs{\\partial_{v} \\phi(u', v)} \\, \\mathrm{d} u' \\leq \\frac{C_{K} A_{1}}{R} \\int_{u_{2}}^{\\infty} (u')^{-\\omega} \\, \\mathrm{d} u' \\leq \\frac{C_{K} A_{1}}{R} u_{2}^{-(\\omega-1)},\n\\end{equation*}\nwhich vanishes as $u_{2} \\to \\infty$. Therefore, in the case under consideration, \\eqref{eq:decay2:key4nullStr:pf:4} follows.\n\n\\vspace{.5em}\n\nWe have therefore obtained the desired bounds for $F_1$ and $F_2$. Next, we estimate $G_1$ and $G_2$. 
Let us integrate \\eqref{eq:decay2:nullStr:pf:3} and \\eqref{eq:decay2:nullStr:pf:4} along the outgoing direction from $(u, u)$ on the axis to $(u,v)$. Then we obtain\n\\begin{align*}\n\t\\abs{G_{1}(u,v)} \\leq & \\lim_{v' \\to u+} \\abs{G_{1}(u, v')} + \\int_{u}^{v} \\abs{\\partial_{v} \\nu \\, \\partial_{u} \\phi (u, v')} + \\abs{\\partial_{u} \\nu \\, \\partial_{v} \\phi (u, v')} \\, \\mathrm{d} v', \\\\\n\t\\abs{G_{2}(u,v)} \\leq & \\lim_{v' \\to u+} \\abs{G_{2}(u, v')} + \\int_{u}^{v} \\abs{\\partial_{v} \\phi \\, \\partial_{u} ( \\lmb^{-1} \\partial_{v} (r \\phi) ) (u, v')} + \\abs{\\partial_{u} \\phi \\, \\partial_{v} ( \\lmb^{-1} \\partial_{v} (r \\phi) ) (u, v')} \\, \\mathrm{d} v'.\n\\end{align*}\n\nNote that\n\\begin{equation*}\n\t\\lim_{v \\to u+} \\frac{\\mu}{r} (u, v) = 0, \\quad\n\t\\lim_{v \\to u+} \\Big( \\lmb^{-1} \\partial_{v} (r \\phi)(u,v) - \\nu^{-1} \\partial_{u} ( r \\phi) (u,v) \\Big) = 0,\n\\end{equation*}\nsince $(\\phi, r, m)$ is a $C^{1}$ solution. It follows that $\\lim_{v \\to u+} \\partial_{u} \\partial_{v} (r \\phi)(u, v) = 0$ and $\\lim_{v \\to u+} \\partial_{u} \\partial_{v} r (u,v) = 0$. 
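Let us briefly justify the last two limits; here we again use the forms $\\partial_{u} \\partial_{v} r = \\frac{2m \\lmb \\nu}{(1-\\mu) r^{2}}$ and $\\partial_{u} \\partial_{v} (r \\phi) = \\frac{2m \\lmb \\nu}{(1-\\mu) r^{2}} \\phi$ of \\eqref{eq:SSESF:dr} and \\eqref{eq:SSESF:dphi}. Recalling that $\\mu = 2m\/r$, these may be rewritten as\n\\begin{equation*}\n\t\\partial_{u} \\partial_{v} r = \\frac{\\lmb \\nu}{(1-\\mu)} \\frac{\\mu}{r}, \\qquad\n\t\\partial_{u} \\partial_{v} (r \\phi) = \\frac{\\lmb \\nu}{(1-\\mu)} \\frac{\\mu}{r} \\, \\phi,\n\\end{equation*}\nand both right-hand sides tend to zero as $v \\to u+$, since $\\lmb$, $\\nu$, $(1-\\mu)^{-1}$ and $\\phi$ remain bounded (by Proposition \\ref{prop:geomLocBVScat} and Theorem \\ref{main.thm.1}) while $\\frac{\\mu}{r} \\to 0$. 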
Moreover, we also have\n\\begin{align*}\n\t& \\lim_{v \\to u+} \\partial_{v}^{2} (r \\phi) (u,v) = - \\lim_{v \\to u+} \\partial_{u}^{2} (r \\phi) (u,v), \\quad\n\t\\lim_{v \\to u+} \\partial_{v} \\lmb (u, v)= - \\lim_{v \\to u+} \\partial_{u} \\nu (u,v).\n\\end{align*}\n\nAs a consequence, \n\\begin{equation*}\n\t\\lim_{v' \\to u+} G_{1}(u, v') = - \\lim_{v' \\to u+} F_{1}(u, v'), \\quad\n\t\\lim_{v' \\to u+} G_{2}(u, v') = \\lim_{v' \\to u+} F_{2}(u, v').\n\\end{equation*}\n\nTherefore, by the previous estimates for $F_{1}, F_{2}$, we have\n\\begin{align*}\n\t& u^{\\omega} \\lim_{v' \\to u+} \\abs{G_{1}(u, v')} \\leq C_{\\Lambda} \\mathcal I_{2} + C_{K, \\Lambda, M_{i}} A_{1}^{3} + \\epsilon(u_{2}) \\mathcal B_{2}(U), \\\\\n\t& u^{\\omega} \\lim_{v' \\to u+} \\abs{G_{2}(u, v')} \\leq C_{K, \\Lambda} A_{1}^{2} + \\epsilon(u_{2}) \\mathcal B_{2}(U),\n\\end{align*}\nwhich are acceptable. Recalling that we are considering $(u,v) \\in \\PD_{\\mathrm{int}}$, hence $v \\in [u, 3u]$, we are now left to establish the following estimates: \n\\begin{align}\n\tu^{\\omega} \\int_{u}^{3u} \\abs{\\partial_{v} \\nu(u, v')} \\abs{\\partial_{u} \\phi(u,v')} \\, \\mathrm{d} v' \\leq & \\epsilon(u_{2}) \\mathcal B_{2}(U), \\label{eq:decay2:key4nullStr:pf:5} \\\\\n\tu^{\\omega} \\int_{u}^{3u} \\abs{\\partial_{u} \\nu(u, v')} \\abs{\\partial_{v} \\phi(u,v')}\\, \\mathrm{d} v' \\leq & \\epsilon(u_{2}) \\mathcal B_{2}(U), \\label{eq:decay2:key4nullStr:pf:6} \\\\\n\tu^{\\omega} \\int_{u}^{3u} \\abs{\\partial_{v} \\phi(u, v')} \\abs{\\partial_{u} (\\lmb^{-1} \\partial_{v} (r \\phi))(u, v')} \\, \\mathrm{d} v' \\leq & \\epsilon (u_{2}) \\mathcal B_{2}(U), \\label{eq:decay2:key4nullStr:pf:7} \\\\\n\tu^{\\omega} \\int_{u}^{3u} \\abs{\\partial_{u} \\phi(u, v')} \\abs{\\partial_{v} (\\lmb^{-1} \\partial_{v} (r \\phi))(u, v')} \\, \\mathrm{d} v' \\leq & \\epsilon (u_{2}) \\mathcal B_{2}(U). 
\\label{eq:decay2:key4nullStr:pf:8}\n\\end{align}\n\n\\pfstep{Proof of \\eqref{eq:decay2:key4nullStr:pf:5}}\nSubstituting $\\partial_{v} \\nu$ by \\eqref{eq:SSESF:dr} and using \\eqref{eq:decay2:key4nullStr:pf:3:1}, we have\n\\begin{align*}\nu^{\\omega} \\int_{u}^{3u} \\abs{\\partial_{v} \\nu(u, v')} \\abs{\\partial_{u} \\phi(u,v')} \\, \\mathrm{d} v'\n\\leq & K \\Big( \\int_{u}^{\\infty} \\abs{\\frac{2m \\lmb}{(1-\\mu) r^{2}}(u, v')} \\, \\mathrm{d} v' \\Big) \\sup_{v' \\in [u, 3u]} u^{\\omega} \\abs{\\partial_{u} \\phi(u,v')} \\\\\n\\leq & C_{K, \\Lambda, \\Psi} \\Big( \\sup_{u \\geq 3u_{2}} \\int_{u}^{\\infty} \\abs{\\frac{2m \\lmb}{(1-\\mu) r^{2}}(u, v')} \\, \\mathrm{d} v' \\Big) \\mathcal B_{2}(U).\n\\end{align*}\n\nThus \\eqref{eq:decay2:key4nullStr:pf:5} follows by Lemma \\ref{lem:smallPtnl}.\n\n\\pfstep{Proof of \\eqref{eq:decay2:key4nullStr:pf:6}}\nWe have\n\\begin{align*}\nu^{\\omega} \\int_{u}^{3u} \\abs{\\partial_{u} \\nu(u, v')} \\abs{\\partial_{v} \\phi(u,v')}\\, \\mathrm{d} v'\n\\leq & \\int_{u}^{\\infty} \\abs{\\partial_{v} \\phi(u, v')} \\, \\mathrm{d} v' \\sup_{v' \\in [u, 3u]} u^{\\omega} \\abs{\\partial_{u} \\nu(u, v')} \\\\\n\\leq & \\Big( \\sup_{u \\geq 3u_{2}} \\int_{u}^{\\infty} \\abs{\\partial_{v} \\phi(u, v')} \\, \\mathrm{d} v' \\Big) \\mathcal B_{2}(U).\n\\end{align*}\n\nThus \\eqref{eq:decay2:key4nullStr:pf:6} follows by Lemma \\ref{lem:smallDphi}.\n\n\\pfstep{Proof of \\eqref{eq:decay2:key4nullStr:pf:7}}\nBy \\eqref{eq:SSESF:dr} and \\eqref{eq:SSESF:dphi}, we have the identity\n\\begin{equation*}\n\t\\partial_{u} (\\lmb^{-1} \\partial_{v} (r\\phi)) = - \\frac{2m}{(1-\\mu)r^{2}} \\nu ( \\lmb^{-1} \\partial_{v}(r \\phi) - \\phi).\n\\end{equation*}\n\nThen by Proposition \\ref{prop:geomLocBVScat}, we have\n\\begin{align*}\n&u^{\\omega} \\int_{u}^{3u} \\abs{\\partial_{v} \\phi(u, v')} \\abs{\\partial_{u} (\\lmb^{-1} \\partial_{v} (r \\phi))(u, v')} \\, \\mathrm{d} v' \\\\\n& \\quad \\leq C_{K, \\Lambda} \\Psi \\Big( \\int_{u}^{\\infty} 
\\abs{\\frac{2m \\lmb}{(1-\\mu) r^{2}} (u, v')} \\, \\mathrm{d} v' \\Big) \\sup_{v' \\in [u, 3u]} u^{\\omega} \\abs{\\partial_{v} \\phi(u,v')}.\n\\end{align*}\n\nNow \\eqref{eq:decay2:key4nullStr:pf:7} follows by Lemmas \\ref{lem:smallPtnl} and \\ref{lem:dphi} and \\eqref{eq:bnd4dvrphi}.\n\n\\pfstep{Proof of \\eqref{eq:decay2:key4nullStr:pf:8}}\nAs in the proof of \\eqref{eq:decay2:key4nullStr:pf:4}, we will divide the integral into two pieces. More precisely, let us define $v^{\\star}(u)$ to be the unique $v$-value such that $r(u, v^{\\star}(u)) = R$. Assuming $v^{\\star}(u) \\in [u, 3u]$, the integral $\\int_{u}^{3u}$ will be divided into $\\int_{u}^{v^{\\star}(u)}$ and $\\int_{v^{\\star}(u)}^{3u}$. The remaining case $v^{\\star}(u) > 3u$ can be dealt with by adapting the argument for the first integral.\n\nFor the first integral, we claim that\n\\begin{equation*}\n\tu^{\\omega} \\int_{u}^{v^{\\star}(u)} \\abs{\\partial_{u} \\phi(u, v')} \\abs{\\partial_{v} (\\lmb^{-1} \\partial_{v} (r \\phi))(u, v')} \\, \\mathrm{d} v' \\leq \\epsilon(u_{2}) \\mathcal B_{2}(U).\n\\end{equation*}\n\nFrom the locally BV scattering assumption \\eqref{eq:locBVScat}, we have\n\\begin{equation*}\n\t\\sup_{u \\in [3 u_{2}, \\infty)} \\int_{C_{u} \\cap \\PD_{\\mathrm{cpt}}} \\abs{\\partial_{v} (\\lmb^{-1} \\partial_{v} (r \\phi))} \\to 0 \\hbox{ as } u_{2} \\to \\infty.\n\\end{equation*}\n\nCombined with \\eqref{eq:decay2:key4nullStr:pf:3:1}, the claim follows.\n\nNext, we turn to the second integral. 
By \\eqref{eq:bnd4dvr} and \\eqref{eq:bnd4dvrphi}, we estimate\n\\begin{align*}\n& u^{\\omega} \\int_{v^{\\star}(u)}^{3u} \\abs{\\partial_{u} \\phi(u, v')} \\abs{\\partial_{v} (\\lmb^{-1} \\partial_{v} (r \\phi))(u, v')} \\, \\mathrm{d} v' \\\\\n& \\quad \\leq \\sup_{v' \\in [v^{\\star}(u), 3u]} u^{\\omega} \\abs{\\partial_{v} (\\lmb^{-1} \\partial_{v} (r \\phi))(u, v')} \\int_{v^{\\star}(u)}^{3u} \\abs{\\partial_{u} \\phi(u, v')} \\, \\mathrm{d} v' \\\\\n& \\quad \\leq C_{\\Lambda, \\Psi} \\mathcal B_{2}(U) \\int_{v^{\\star}(u)}^{3u} \\abs{\\partial_{u} \\phi(u, v')} \\, \\mathrm{d} v'.\n\\end{align*}\n\nFor $v' \\in [v^{\\star}(u), 3u]$, we have $r(u, v') \\geq R$. Thus, by \\eqref{eq:decay1:5}, we have\n\\begin{equation*}\n\t\\int_{v^{\\star}(u)}^{3u} \\abs{\\partial_{u} \\phi(u, v')} \\, \\mathrm{d} v' \\leq \\frac{C_{K} A_{1}}{R} \\int_{u}^{3u} u^{-\\omega} \\, \\mathrm{d} v' \\leq \\frac{C_{K} A_{1}}{R} u_{2}^{-(\\omega-1)},\n\\end{equation*}\nwhich vanishes as $u_{2} \\to \\infty$, and therefore finishes the proof of \\eqref{eq:decay2:key4nullStr:pf:8}. We remark that the fact that we are in $\\PD_{\\mathrm{int}}$ is used crucially here, as otherwise the integral would not be convergent. 
\n\\end{proof}\n\n\\begin{remark} \nIn the case where we have global BV scattering (i.e., conditions $(2)$ and $(3)$ of Definition \\ref{def:locBVScat} are satisfied with $R=\\infty$), we can take $R = \\infty$ in the preceding argument to obtain the following explicit upper bound on $\\epsilon(u_{2})$:\n\\begin{equation} \\label{eq:decay2:eps}\n\\begin{aligned}\n\t\\epsilon(u_{2}) \n\t\\leq & C_{K, \\Lambda, \\Psi} \\sup_{v \\in [u_{2}, \\infty)} \\int_{\\underline{C}_{v} \\cap \\set{u \\geq u_{2}}} \\abs{\\frac{2m \\nu}{(1-\\mu) r^{2}}}\n\t\t+C \\sup_{v \\in [u_{2}, \\infty)} \\int_{\\underline{C}_{v} \\cap \\set{u \\geq u_{2}}} \\abs{\\partial_{u} \\phi} \\\\\n\t\t& + C_{K, \\Lambda, \\Psi} \\sup_{v \\in [u_{2}, \\infty)} \\int_{\\underline{C}_{v} \\cap \\set{u \\geq u_{2}}} \\abs{\\partial_{u} (\\nu^{-1} \\partial_{u} (r \\phi))} \\\\\n\t\t& +C_{K, \\Lambda, \\Psi} \\sup_{u \\geq 3u_{2}} \\int_{C_{u}} \\abs{\\frac{2m \\lmb}{(1-\\mu) r^{2}}}\n\t\t+C \\sup_{u \\geq 3u_{2}} \\int_{C_{u}} \\abs{\\partial_{v} \\phi} \\\\\n\t\t& +C_{K, \\Lambda, \\Psi} \\sup_{u \\in [3 u_{2}, \\infty)} \\int_{C_{u}} \\abs{\\partial_{v} (\\lmb^{-1} \\partial_{v} (r \\phi))}.\n\\end{aligned}\n\\end{equation}\n\nThis will be useful in our proof of the sharp decay rate in the case of small BV norm (Theorem \\ref{thm:smallData}) in Section \\ref{sec:smallData}.\n\\end{remark}\n\nIn the second step of our proof of Theorem \\ref{main.thm.2}, we use the preliminary $u^{-\\omega}$ decay proved in Proposition \\ref{prop:decay2:nullStr} to obtain the optimal $u$-decay. 
Key to this step is the following proposition, which claims optimal $u$-decay in $\PD_{\mathrm{int}}$.\n\n\begin{proposition} \label{prop:decay2:final}\nThere exists a constant $0 < A_{2}'' < \infty$ such that the following estimates hold.\n\begin{align} \n\sup_{\PD_{\mathrm{int}}} u^{\omega+1} \abs{\partial_{v}^{2} (r \phi)} \leq & A_{2}'',\n\t\label{eq:decay2:final:1} \\\n\sup_{\PD_{\mathrm{int}}} u^{\omega+1} \abs{\partial_{u}^{2} (r \phi)} \leq & A_{2}'', \n\t\label{eq:decay2:final:2} \\\n\sup_{\PD_{\mathrm{int}}} u^{3} \abs{\partial_{v} \lmb} \leq & A_{2}'', \n\t\label{eq:decay2:final:3} \\\n\sup_{\PD_{\mathrm{int}}} u^{3} \abs{\partial_{u} \nu} \leq & A_{2}''. \n\t\label{eq:decay2:final:4}\n\end{align}\n\end{proposition}\n\nOnce we establish Proposition \ref{prop:decay2:final}, the desired decay for $\partial_{v}^{2}(r \phi)$ and $\partial_{v} \lmb$ follows from Lemma \ref{lem:decay2:rDecay} and the fact that $r \geq 2 \Lambda^{-1} u$ in $\PD_{\mathrm{ext}}$. Furthermore, the desired decay for $\partial_{u}^{2}(r \phi)$ and $\partial_{u} \nu$ follows from Lemma \ref{lem:decay2:uDecayInExtr}.\n\n\begin{proof} \nThanks to the fact that we have pointwise bounds for a sufficient number of derivatives (albeit with sub-optimal decay) near $\Gamma$ at this point, it suffices to work with the `non-renormalized' equations \eqref{eq:eq4dvdvrphi:normal}, \eqref{eq:eq4dudurphi:normal}, \eqref{eq:eq4dvdvr:normal} and \eqref{eq:eq4dudur:normal}. In particular, we need not utilize the null structure of \eqref{eq:SSESF}. \n\nLet $(u,v) \in \PD_{\mathrm{int}}$ (i.e., $v \in [u, 3u]$) with $u \geq 3$. We begin with \eqref{eq:decay2:final:1}. 
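Throughout the proof, we will use the following elementary weight exchange when passing to the point $(u\/3, v) \in \PD_{\mathrm{ext}}$: since $r(u\/3, v) \geq (2\/3) \Lambda^{-1} u$, we have, for any exponent $p > 0$,\n\begin{equation*}\n\tu^{p} \leq \Big( \frac{3 \Lambda}{2} \Big)^{p} r^{p}(u\/3, v),\n\end{equation*}\nwhich accounts for the constants multiplying the boundary terms in the integrations below.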
Integrating $\\partial_{u} \\partial_{v}^{2}(r\\phi)$ in the $u$-direction from $u\/3$ to $u$, multiplying by $u^{\\omega+1}$ and using $r(u\/3, v) \\geq (2\/3) \\Lambda^{-1} u$, we obtain\n\\begin{equation} \\label{eq:decay2:final:1:pf}\n\tu^{\\omega+1}\\abs{\\partial_{v}^{2}(r \\phi)}(u,v)\n\t\\leq C_{\\Lambda} r^{\\omega+1} \\abs{\\partial_{v}^{2}(r \\phi)}(u\/3, v) + u^{\\omega+1} \\int_{u\/3}^{u} \\abs{\\partial_{u} \\partial_{v}^{2}(r \\phi)}(u', v) \\, \\mathrm{d} u'.\n\\end{equation}\n\nSince $(u\/3, v) \\in \\PD_{\\mathrm{ext}}$, the first term on the right-hand side is bounded by $\\leq C_{\\Lambda} \\mathcal I_{2} + C_{K, \\Lambda, M_{i}} A_{1}^{3}$, thanks to Lemma \\ref{lem:decay2:rDecay}. To estimate the $u'$-integral, we substitute $\\partial_{u} \\partial_{v}^{2} (r \\phi)$ by \\eqref{eq:eq4dvdvrphi:normal}. Then applying Proposition \\ref{prop:geomLocBVScat}, Lemma \\ref{lem:dphi}, Lemma \\ref{lem:muOverR}, Theorem \\ref{main.thm.1} and Proposition \\ref{prop:decay2:nullStr}, we obtain\n\\begin{equation*}\n\t\\abs{\\partial_{u} \\partial_{v}^{2}(r \\phi)}(u', v)\n\t\\leq C_{A_{1}, K, \\Lambda} (u')^{-3\\omega} A_{1} (A_{2}')^{2}.\n\\end{equation*}\n\nThus we have\n\\begin{equation} \\label{eq:decay2:final:1:explicit}\n\tu^{\\omega+1}\\abs{\\partial_{v}^{2}(r \\phi)}(u,v)\n\t\\leq C_{\\Lambda} \\mathcal I_{2} + C_{K, \\Lambda, M_{i}} A_{1}^{3} + C_{A_{1}, K, \\Lambda} A_{1} (A_{2}')^{2},\n\\end{equation}\nwhere we have used the fact that $\\omega > 1$, and thus $3 \\omega - 1 > \\omega + 1$ to throw away the $u$-weight in the last term. This proves \\eqref{eq:decay2:final:1}.\n\nNext, we prove \\eqref{eq:decay2:final:2}. 
Integrating $\\partial_{v} \\partial_{u}^{2} (r \\phi)$ in the $v$-direction from $u+$ to $v$ and multiplying by $u^{\\omega+1}$, we have\n\\begin{equation} \\label{eq:decay2:final:2:pf}\n\tu^{\\omega+1} \\abs{\\partial_{u}^{2}(r \\phi)}(u,v)\n\t\\leq \\lim_{v' \\to u+} u^{\\omega+1} \\abs{\\partial_{u}^{2} (r \\phi)}(u, v') + u^{\\omega+1} \\int_{u}^{3u} \\abs{\\partial_{v} \\partial_{u}^{2} (r \\phi)}(u, v') \\, \\mathrm{d} v'.\n\\end{equation}\n\nRecall that $\\lim_{v' \\to u+} \\partial_{u}^{2} (r \\phi)(u, v') = \\lim_{v' \\to u+} \\partial_{v}^{2} (r \\phi)(u, v')$, as $(\\phi, r, m)$ is a $C^{1}$ solution. Thus the first term on the right-hand side can be estimated via \\eqref{eq:decay2:final:1:explicit}. Substitute $\\partial_{v} \\partial_{u}^{2} (r \\phi)$ by \\eqref{eq:eq4dudurphi:normal} and apply, as before, Proposition \\ref{prop:geomLocBVScat}, Lemma \\ref{lem:dphi}, Lemma \\ref{lem:muOverR}, Theorem \\ref{main.thm.1} and Proposition \\ref{prop:decay2:nullStr}. Then we have\n\\begin{equation*}\n\t\\abs{\\partial_{v} \\partial_{u}^{2} (r \\phi)}(u, v')\n\t\\leq C_{A_{1}, K, \\Lambda} u^{-3\\omega} A_{1} (A_{2}')^{2}.\n\\end{equation*}\n\nIt now follows that \n\\begin{equation} \\label{eq:decay2:final:2:explicit}\n\tu^{\\omega+1}\\abs{\\partial_{u}^{2}(r \\phi)}(u,v)\n\t\\leq C_{\\Lambda} \\mathcal I_{2} + C_{K, \\Lambda, M_{i}} A_{1}^{3} + C_{A_{1}, K, \\Lambda} A_{1} (A_{2}')^{2},\n\\end{equation}\nwhich proves \\eqref{eq:decay2:final:2}.\n\nAt this point, combining Lemma \\ref{lem:dphi}, Theorem \\ref{main.thm.1}, Lemma \\ref{lem:decay2:rDecay} and \\eqref{eq:decay2:final:1:explicit}, note that we have the following improved $u$-decay for $\\partial_{v} \\phi$:\n\\begin{equation} \\label{eq:decay2:final:impDvphi} \n\\begin{aligned}\n\t\\sup_{\\calQ} u^{\\omega+1} \\abs{\\partial_{v}\\phi} \n\t\\leq C_{\\Lambda} \\sup_{\\calQ} (u^{\\omega+1} \\abs{\\partial_{v}^{2}(r \\phi)} + u A_{1} \\abs{\\partial_{v} \\lmb})\n\t\\leq 
B,\n\end{aligned}\n\end{equation}\nwhere \n\begin{equation} \label{eq:decay2:final:aux}\nB := C_{\Lambda} \mathcal I_{2} + C_{K, \Lambda, M_{i}} A_{1}^{3} + C_{A_{1}, K, \Lambda} A_{1} (A_{2}')^{2} + C_{\Lambda} A_{1} A_{2}'.\n\end{equation}\n\nWe now turn to \eqref{eq:decay2:final:3}. Integrating $\partial_{u} \partial_{v} \log \lmb$ in the $u$-direction from $u\/3$ to $u$, multiplying by $u^{3}$ and using $r(u\/3, v) \geq (2\/3) \Lambda^{-1} u$, we obtain\n\begin{equation} \label{eq:decay2:final:3:pf}\n\tu^{3} \abs{\partial_{v} \log \lmb }(u, v) \leq \n\tC r^{3} \abs{\partial_{v} \log \lmb}(u\/3, v) + u^{3} \int_{u\/3}^{u} \abs{\partial_{u} \partial_{v} \log \lmb}(u', v) \, \mathrm{d} u'.\n\end{equation}\n\nSince $(u\/3, v) \in \PD_{\mathrm{ext}}$, the first term on the right-hand side is bounded by $C_{K, \Lambda} A_{1}^{2}$, by Lemma \ref{lem:decay2:rDecay} and the fact that $\lmb^{-1} \leq \Lambda$. Next, substituting $\partial_{u} \partial_{v} \log \lmb$ by \eqref{eq:eq4dvdvr:normal}, applying Proposition \ref{prop:geomLocBVScat}, Lemma \ref{lem:muOverR}, Lemma \ref{lem:dphi} and using the improved bound \eqref{eq:decay2:final:impDvphi}, we have \n\begin{equation*}\n\t\abs{\partial_{u} \partial_{v} \log \lmb}(u',v) \leq C_{K, \Lambda} B^{2} (u')^{-2(\omega+1)}.\n\end{equation*}\n\nTherefore\n\begin{equation} \label{eq:decay2:final:3:explicit}\n\tu^{3} \abs{\partial_{v} \lmb}(u,v) \leq C_{K, \Lambda} A_{1}^{2} + C_{K, \Lambda} B^{2},\n\end{equation}\nwhere we used $2(\omega + 1) - 1 > 3$ to throw away the $u$-weight in the last term. This proves \eqref{eq:decay2:final:3}.\n\nFinally, we prove \eqref{eq:decay2:final:4}. 
Integrating $\\partial_{v} \\partial_{u} \\log \\nu$ in the $v$-direction from $u+$ to $v$ and multiplying by $u^{3}$, we have\n\\begin{equation} \\label{eq:decay2:final:4:pf}\n\tu^{3} \\abs{\\partial_{u} \\log \\nu}(u,v) \\leq \\lim_{v' \\to u+} u^{3} \\abs{\\partial_{u} \\log \\nu}(u, v') + u^{3} \\int_{u}^{3u} \\abs{\\partial_{v} \\partial_{u} \\log \\nu}(u, v') \\, \\mathrm{d} v'.\n\\end{equation}\n\nSince $\\lim_{v' \\to u+} \\partial_{u} \\nu(u, v') = - \\lim_{v' \\to u+} \\partial_{v} \\lmb(u, v')$, the first term is bounded by \\eqref{eq:decay2:final:3:explicit}. Furthermore, substituting $\\partial_{v} \\partial_{u} \\log \\nu$ by \\eqref{eq:eq4dudur:normal} and applying Proposition \\ref{prop:geomLocBVScat}, Lemma \\ref{lem:muOverR}, Lemma \\ref{lem:dphi} and using the improved bound \\eqref{eq:decay2:final:impDvphi}, we have \n\\begin{equation*}\n\t\\abs{\\partial_{v} \\partial_{u} \\log \\nu}(u, v') \\leq C_{K, \\Lambda} B^{2} u^{-2(\\omega+1)}.\n\\end{equation*}\n\nAs before, it follows that\n\\begin{equation} \\label{eq:decay2:final:4:explicit}\n\tu^{3} \\abs{\\partial_{u} \\nu}(u,v) \\leq C_{K, \\Lambda} A_{1}^{2} + C_{K, \\Lambda} B^{2},\n\\end{equation}\nwhich proves \\eqref{eq:decay2:final:4}.\n\\end{proof}\n\n\\begin{remark} \nCombining \\eqref{eq:decay2:final:1:explicit}, \\eqref{eq:decay2:final:2:explicit}, \\eqref{eq:decay2:final:3:explicit} and \\eqref{eq:decay2:final:4:explicit}, we see that Proposition \\ref{prop:decay2:final} holds with\n\\begin{equation} \\label{eq:decay2:A2''}\n\tA_{2}'' \\leq C_{\\Lambda} \\mathcal I_{2} + C_{K, \\Lambda, M_{i}} A_{1}^{3} + C_{A_{1}, K, \\Lambda} A_{1} (A_{2}')^{2}\n\t\t+ C_{K, \\Lambda} A_{1}^{2} + C_{K, \\Lambda} B^{2}\n\\end{equation}\nwhere $B$ is as in \\eqref{eq:decay2:final:aux}.\n\\end{remark}\n\n\\begin{remark} \nAccording to the argument of this subsection, note that the size of $A_{2}'$ in Proposition \\ref{prop:decay2:nullStr} depends on the choice of $u_{2}$ through the term 
$H_{2}'(u_{2})$, where the size of $u_{2}$ depends on the rate of convergence of $\\epsilon''(u_{2}) \\to 0$ as $u_{2} \\to \\infty$. This explains why $A_{2}$ does not depend only on the size of the initial data, as remarked in Section \\ref{sec.main.thm}. On the other hand, as stated in Statement (2) of Theorem \\ref{thm:smallData}, we shall show that in the case of small BV initial data, $A_2$ depends only on the size of the initial data. To achieve this, we show in Section \\ref{sec:smallData} that we may take $u_{2} = 1$ under this small data assumption. \n\\end{remark}\n\n\n\\subsection{Additional decay estimates}\nAs in the previous section, we conclude this section by providing additional decay rates concerning second derivatives of $\\phi$, $r$ and improved decay for $m$ near $\\Gamma$.\n\n\\begin{corollary} \\label{cor:decay2}\nLet $(\\phi, r, m)$ be a locally BV scattering solution to \\eqref{eq:SSESF} with asymptotically flat $C^{1}$ initial data of order $\\omega'$.\nLet $A_{1}$ and $A_{2}$ be the constants in Theorems \\ref{main.thm.1} and \\ref{main.thm.2}, respectively. 
Then the following bounds hold.\n\begin{align} \n\t\abs{\partial_{v} \phi} \leq & C_{\Lambda} (A_{1} + A_{2} + A_{1} A_{2}) \min \set{u^{-(\omega+1)}, r^{-2} u^{-(\omega-1)}}, \label{eq:decay2:5} \\\n\t\abs{\partial_{u} \phi} \leq & C_{K, \Lambda} (A_{1} + A_{2} + A_{1} A_{2}) \min \set{u^{-(\omega+1)}, r^{-1} u^{-\omega}}, \label{eq:decay2:6} \\\n\t\abs{\partial_{v}^{2} \phi} \leq & C_{\Lambda} (A_{1} + A_{2} + A_{1} A_{2}) \min \set{r^{-1} u^{-(\omega+1)}, r^{-3} u^{-(\omega-1)}}, \label{eq:decay2:7}\\\n\t\abs{\partial_{u} \partial_{v} \phi} \leq & C_{K, \Lambda} (A_{1} + A_{2} + A_{1} A_{2}) \min \set{r^{-1} u^{-(\omega+1)}, r^{-2} u^{-\omega}}, \label{eq:decay2:8}\\\n\t\abs{\partial_{u}^{2} \phi} \leq & C_{K, \Lambda} (A_{1} + A_{2} + A_{1} A_{2}) \, r^{-1} u^{-(\omega+1)}, \label{eq:decay2:9}\\\n\t\abs{\partial_{u} \partial_{v} r} \leq & C_{K, \Lambda} (A_{1} + A_{2} + A_{1} A_{2})^{2} \min \set{r u^{-(2\omega+2)}, r^{-2} u^{-(2\omega-1)}}, \label{eq:decay2:10} \\\n\tm \leq & C_{K, \Lambda} (A_{1} + A_{2} + A_{1} A_{2})^{2} \min \set{r^{3} u^{-(2\omega+2)}, u^{-(2\omega-1)}}. \label{eq:decay2:11}\n\end{align}\n\end{corollary}\n\nThis corollary follows immediately from the estimates derived in Theorem \ref{main.thm.2}. We sketch the proof below.\n\n\n\begin{proof} \nFirst, note that \eqref{eq:decay2:5} and \eqref{eq:decay2:6} follow from Corollary \ref{cor:decay1}, Theorem \ref{main.thm.2} and Lemma \ref{lem:dphi}. 
\nNext, \\eqref{eq:decay2:7} and \\eqref{eq:decay2:9} are easy consequences of the preceding estimates, Theorems \\ref{main.thm.1}, \\ref{main.thm.2} and the identities\n\\begin{equation*}\n\tr \\partial_{v}^{2} \\phi = \\partial_{v}^{2} ( r\\phi) - (\\partial_{v} \\lmb) \\phi - 2 \\lmb \\partial_{v} \\phi, \\quad\n\tr \\partial_{u}^{2} \\phi = \\partial_{u}^{2} ( r\\phi) - (\\partial_{u} \\nu) \\phi - 2 \\nu \\partial_{u} \\phi.\n\\end{equation*}\n\nOn the other hand, for \\eqref{eq:decay2:8}, we use the identity\n\\begin{equation*}\n\tr \\partial_{u} \\partial_{v} \\phi = - \\lmb \\partial_{u} \\phi - \\nu \\partial_{v} \\phi,\n\\end{equation*}\nwhich may be verified from \\eqref{eq:SSESF:dr} and \\eqref{eq:SSESF:dphi}.\n\nNext, \\eqref{eq:decay2:11} follows from Corollary \\ref{cor:decay1}, Lemma \\ref{lem:muOverR} and \\eqref{eq:decay2:5}. Finally, using Corollary \\ref{cor:est4dr}, Lemma \\ref{lem:est4dur}, \\eqref{eq:decay2:11} and the equation \\eqref{eq:SSESF:dr}, we conclude \\eqref{eq:decay2:10}. \\qedhere\n\\end{proof}\n\n\\section{Decay and blow up at infinity}\\label{sec.dichotomy}\n\nIn this section, we prove Theorem \\ref{thm.dichotomy}, i.e., unless the solution blows up at infinity, a `future causally geodesically complete' solution scatters in BV.\n\nTake a BV solution to \\eqref{eq:SSESF} satisfying the hypotheses of Theorem \\ref{thm.dichotomy}, which does not blow up at infinity. Note, in particular, that $\\calQ = \\calR$ by $(1)$ of Definition \\ref{def:locBVScat} and Lemma \\ref{lem:regR}. 
In order to prove Theorem \\ref{thm.dichotomy}, our goal is to show that such a spacetime is in fact BV scattering, i.e., (1), (2) and (3) in Definition \\ref{def:locBVScat} hold and moreover (3) holds with $R=\\infty$.\n\nThe main step will be to show that there exists a constant $C_{\\Lambda}$ such that for every $\\epsilon > 0$, there exists $U$ such that for every $u\\geq U$, we have\n\\begin{equation}\\label{dichotomy.goal}\n\\int_{C_u}|\\partial_v^2(r\\phi)| + \\int_{C_u}|\\partial_v \\lambda| \\leq C_{\\Lambda} \\epsilon.\n\\end{equation}\nThis will be achieved in a sequence of Lemmas and Propositions below.\n\nBefore we proceed, we first prove a preliminary bound on $\\lmb$:\n\\begin{proposition}\nThere exists $0<\\Lambda<\\infty$ such that\n\\begin{equation} \\label{dic.bnd4dvr}\n\t\\Lambda^{-1} \\leq \\lmb(u,v) \\leq \\frac{1}{2}.\n\\end{equation}\n\\end{proposition}\n\\begin{proof}\nBy $(1)$ in Definition \\ref{def.blow.up.infty}, there exists $0 < \\Lambda < \\infty$ such that $\\sup \\lmb_{\\Gamma}^{-1} \\leq \\Lambda$. As $\\lim_{u \\to v-} \\lmb_{\\Gamma}(u) = \\lim_{u \\to v-} \\lmb(u', v)$ (see \\cite[Section 7]{Christodoulou:1993bt}), it follows from Lemma \\ref{lem:basicEst4dr} that for every $(u,v) \\in \\calQ$, we have the estimate \\eqref{dic.bnd4dvr}.\n\\end{proof}\n\nWe now proceed to show \\eqref{dichotomy.goal}. The first step is to show that for $u$ sufficiently large, the integrals along $C_u$ of $|F_1|$ and $|F_2|$ are small. Here, we recall the notation in the proof of Proposition \\ref{prop:decay2:nullStr}, i.e.,\n\\begin{align*}\nF_{1} := & \\partial_{v}^{2} (r \\phi) - (\\partial_{v} \\lmb) \\phi, \\\\\n\tF_{2} := & \\partial_{v} \\log \\lmb - \\frac{\\lmb}{(1-\\mu)} \\frac{\\mu}{r} + \\partial_{v} \\phi \\Big( \\lmb^{-1} \\partial_{v} (r \\phi) - \\nu^{-1} \\partial_{u} ( r \\phi) \\Big).\n\\end{align*}\nOnce we obtain the desired bounds for $F_1$ and $F_2$, we then derive \\eqref{dichotomy.goal} from these bounds. 
This will be the most technical part (see discussions in Remark \ref{technical.rmk.dichotomy}).\n\nFirst, we bound the integrals of $F_1$ and $F_2$ in the following proposition:\n\begin{proposition}\label{8.1.1}\nFor every $\epsilon>0$, there exists $V$ sufficiently large such that the following bound holds for $u\geq V$:\n\begin{equation} \label{dichotomy.beginning}\n\int_{C_u} (|F_1|+|F_2|)(u,v) \leq 3\epsilon.\n\end{equation}\n\end{proposition}\n\begin{proof}\nBy $(2)$ and $(3)$ in Definition \ref{def.blow.up.infty}, we have\n$$\int_1^{\infty}\int_u^{\infty}|\partial_v\lambda\partial_u\phi- \partial_u\lambda\partial_v\phi | \mathrm{d} v \mathrm{d} u <\infty,$$\nand\n$$\int_1^{\infty}\int_u^{\infty}|\partial_u\phi\partial_v(\nu^{-1}\partial_u(r\phi))-\partial_v\phi\partial_u(\nu^{-1}\partial_u(r\phi))| \mathrm{d} v \mathrm{d} u <\infty.$$\n\nThus, by choosing $V$ sufficiently large, we have\n\begin{equation}\n\int_V^{\infty}\int_u^{\infty}|\partial_v\lambda\partial_u\phi-\partial_u\lambda\partial_v\phi| \mathrm{d} v \mathrm{d} u <\epsilon,\label{non.blowup.1}\n\end{equation}\nand\n\begin{equation}\n\int_V^{\infty}\int_u^{\infty}|\partial_u\phi\partial_v(\nu^{-1}\partial_u(r\phi))-\partial_v\phi\partial_u(\nu^{-1}\partial_u(r\phi))| \mathrm{d} v \mathrm{d} u <\epsilon.\label{non.blowup.2}\n\end{equation}\n\nFrom the initial conditions, we easily see that $F_{1}(1, \cdot)$, $F_{2}(1, \cdot)$ obey $\int_{C_{1}} (\abs{F_{1}} + \abs{F_{2}}) < \infty$. 
\nThus, by choosing $V$ larger if necessary, we have\n\begin{equation}\n\int_V^{\infty} (|F_1|+|F_2|)(1,v) \mathrm{d} v\leq \epsilon.\label{data.BV.bd}\n\end{equation}\nNotice that by equations \eqref{eq:decay2:nullStr:pf:1} and \eqref{eq:decay2:nullStr:pf:2}, the estimates \eqref{non.blowup.1} and \eqref{non.blowup.2} control $\iint |\partial_u F_1| \mathrm{d} u \mathrm{d} v$ and $\iint |\partial_u F_2| \mathrm{d} u \mathrm{d} v$. Thus, we have\n$$\int_{\max\set{u, V}}^{\infty} (|F_1|+|F_2|)(u,v) \mathrm{d} v\leq 3\epsilon$$\nfor every $u\geq 1$. In particular, for $u\geq V$, we have\n\begin{equation*}\n\int_{C_u} (|F_1|+|F_2|)(u,v) \leq 3\epsilon,\n\end{equation*}\nas desired.\n\end{proof}\n\nThe inequality \eqref{dichotomy.beginning} is the starting point for our proof of \eqref{dichotomy.goal}. More precisely, our basic strategy is to use a continuous induction on $v$, beginning from the axis, to remove the quadratic and higher terms from \eqref{dichotomy.beginning} and infer \eqref{dichotomy.goal}. \n\n\begin{remark} \label{technical.rmk.dichotomy}\nBefore beginning the proof in earnest, we would like to point out two technical nuisances that we confront: \nFirst, in order to estimate the scalar field $\phi$ itself from $F_{1}$ and $F_{2}$, we need to integrate essentially from null infinity\footnote{More precisely, $\phi$ is determined from $\partial_{v} (r \phi)$, which in turn can be determined from $\int \abs{\partial_{v}^{2}(r \phi)}$ by integrating from $v = \infty$. Another conceptual reason why information near $v = \infty$ is relevant for estimating $\phi$ is that the initial condition $\lim_{v \to \infty} \phi(1, v) = 0$ implies that $\lim_{v \to \infty} \phi(u, v)= 0$ for every $u \geq 1$. See the discussion before \eqref{dic.0.2}.}, which is opposite to the direction of our method of continuity. 
Second, as $\\partial_{v}(r \\phi)$ is only assumed to be BV, the left-hand side of \\eqref{dichotomy.goal} is not continuous in $v$ in general. To overcome the first, we make use of the invariance of \\eqref{eq:SSESF} and $F_{1}$, $F_{2}$ under the change $\\phi \\mapsto \\phi + c$. To take care of the second, we carefully keep track of the evolution of discontinuities of $\\partial_{v}(r \\phi)$.\n\\end{remark}\n\nNotice that in order to obtain \\eqref{dichotomy.goal} from \\eqref{dichotomy.beginning}, we only need to integrate on a \\emph{fixed} hypersurface $C_u$. We now fix $u_0\\geq V$ and define a new function $\\overline{\\phi}_{u_{0}}$ by\n\\begin{equation} \\label{dic.0.0}\n\t\\overline{\\phi}_{u_{0}}(u,v) := \\phi(u,v) - \\lim_{v' \\to u_{0} +} \\phi(u_{0}, v').\n\\end{equation}\n\nAs remarked before, note that \\eqref{eq:SSESF} is invariant under the change $(\\phi, r, m) \\mapsto (\\overline{\\phi}_{u_{0}}, r, m)$, i.e., $(\\overline{\\phi}_{u_{0}}, r, m)$ is still a solution to \\eqref{eq:SSESF}. 
Moreover, it is easy to check that $F_{1}$ and $F_{2}$ are also invariant under this change, i.e.,\n\begin{equation} \label{eq:inv4F12}\n\begin{aligned}\n\tF_{1} =& \partial_{v}^{2} (r \overline{\phi}_{u_{0}}) - (\partial_{v} \lmb) \overline{\phi}_{u_{0}}, \\\n\tF_{2} =& \partial_{v} \log \lmb - \frac{\lmb}{(1-\mu)} \frac{\mu}{r} + \partial_{v} \overline{\phi}_{u_{0}} \Big( \lmb^{-1} \partial_{v} (r \overline{\phi}_{u_{0}}) - \nu^{-1} \partial_{u} ( r \overline{\phi}_{u_{0}}) \Big).\n\end{aligned}\n\end{equation}\n\nThe new scalar field has been chosen so that $\overline{\phi}_{u_{0}}(u_{0}, \cdot)$, $\partial_{v}(r \overline{\phi}_{u_{0}})(u_{0}, \cdot)$ and $\partial_{u}(r \overline{\phi}_{u_{0}})(u_{0}, \cdot)$ vanish at the axis, i.e.,\n\begin{equation} \label{dic.0.1}\n\lim_{v \to u_{0}+}\overline{\phi}_{u_{0}}(u_{0},v) = \lim_{v \to u_{0}+}\partial_{v}(r \overline{\phi}_{u_{0}}) (u_{0},v) = \lim_{v \to u_{0}+} \partial_{u}(r \overline{\phi}_{u_{0}})(u_{0},v) = 0.\n\end{equation}\n\nWe claim that the original scalar field $\phi(u, v)$ obeys the condition \n\begin{equation}\label{claim.vanishing}\n\lim_{v \to \infty} \phi(u_{0}, v) = 0\n\end{equation} \nfor every $u_{0} \geq 1$.\nTherefore, by the definition given in \eqref{dic.0.0}, we see that $\phi$ and $\overline{\phi}_{u_0}$ are also related by\n\begin{equation} \label{dic.0.2}\n\t\phi(u,v) = \overline{\phi}_{u_{0}}(u,v) - \lim_{v' \to \infty} \overline{\phi}_{u_{0}}(u, v').\n\end{equation}\n\n\nTo establish the claim \eqref{claim.vanishing}, we proceed as in the proof of Lemma~\ref{lem:decay1:cptu:0}, but work with $\phi$ rather than $r \phi$. \nFix $u_{0} > 1$ and let $r_{1} > 0$ be a large number to be determined. For each $u \geq 1$, let $v^{\star}_{1}(u)$ be the unique $v$-value such that $r(u, v^{\star}_{1}(u)) = r_{1}$. Consider $(u, v) \in \set{1 \leq u \leq u_{0}} \cap \set{r \geq r_{1}}$. 
Using the uniform bound of $m$ and $\frac{\lmb}{1-\mu}$ in terms of the data at $u= 1$ (which holds thanks to monotonicity), we may integrate \eqref{eq:SSESF:dphi} along the incoming direction to estimate\n\begin{equation*}\n\t\abs{\partial_{v} (r \phi) (u, v) - \partial_{v} (r \phi)(1, v)} \n\t\leq \frac{C_{0}}{r(u, v)} \sup_{u' \in [1, u]} \abs{\phi(u', v)},\n\end{equation*}\nwhere $C_{0}$ depends only on the data at $u=1$. Integrating both sides in the outgoing direction from $v_{1}^{\star}(u)$ to $v$ (using \eqref{dic.bnd4dvr} for the right-hand side) and dividing by $r = r(u, v)$, we obtain\n\begin{equation} \label{claim.vanishing.key}\n\begin{aligned}\n\t\abs{\phi(u, v)} \leq & \frac{r_{1}}{r} \abs{\phi(u, v^{\star}_{1}(u))} + \frac{r(1, v^{\star}_{1}(u))}{r} \abs{\phi(1, v^{\star}_{1}(u))} \\\n\t& + \frac{r(1, v)}{r} \abs{\phi(1, v)}\t\t+ \frac{C_{0} \Lambda}{r} \log \Big( \frac{r}{r_{1}}\Big) \sup_{1 \leq u' \leq u, \, v \geq v^{\ast}_{1}(u)} \abs{\phi}.\n\end{aligned}\n\end{equation}\nNow the idea is to use \eqref{claim.vanishing.key} to first show that $\phi$ is bounded on the region $\set{1 \leq u \leq u_{0}}$, and then use \eqref{claim.vanishing.key} again with the additional boundedness of $\phi$ to conclude that \eqref{claim.vanishing} holds.\nTo begin with, observe that $\phi$ is bounded on each compact subset of $\calQ$, since it is a BV solution in the sense of Definition~\ref{def:BVsolution}. Combined with the hypothesis that $\phi(1, v) \to 0$ as $v \to \infty$, we see that the first three terms are bounded by a constant that depends on $r_{1}$. On the other hand, by taking $r_{1}$ sufficiently large, the coefficient $(C_{0} \Lambda \/ r) \log ( r \/ r_{1} )$ of the last term can be made arbitrarily small for $r \geq r_{1}$. 
This smallness allows us to absorb the last term to the left-hand side, and conclude the desired boundedness of $\phi$ on the region $\set{1 \leq u \leq u_{0}}$.\nThen plugging in $u = u_{0}$ and the uniform bound for $\phi$ into \eqref{claim.vanishing.key}, the claim \eqref{claim.vanishing} follows from the hypothesis $\lim_{v \to \infty} \phi(1, v) = 0$.\n\nLet\n\begin{eqnarray*}\nI_1(u, v)&:=&\int_{u}^v |\partial_v^2(r \overline\phi_{u_{0}})| (u,v') \, \mathrm{d} v' ,\quad I_2(u, v):=\int_{u}^v |\partial_v\lambda | (u,v') \, \mathrm{d} v' .\n\end{eqnarray*}\n\nIn the following two lemmas, we will show that\n\begin{align} \nI_1(u_0, v) \leq & 3\epsilon+ C_{\Lambda} I_{1}(u_0,v) I_2(u_0,v) , \label{main.ineq.dichotomy.1} \\\nI_2(u_0, v)\leq & 3\epsilon+ C_{\Lambda} I_{1}(u_0,v)^{2} (1+I_{1}(u_0,v))^{2} (1+I_{2}(u_0,v))^{2} e^{C_{\Lambda} I_{1}(u_0,v)^{2} (1+I_{2}(u_0,v))} \label{main.ineq.dichotomy.2}\n\end{align}\nfor every $V \leq u_0 \leq v$, with $C_{\Lambda}$ independent of $u_0$ and $v$.\n\n\begin{lemma} \label{lem.dic.1}\nThere exists a constant $C_{\Lambda} > 0$ such that for every $V \leq u_0 \leq v$,\n\begin{equation*}\nI_1(u_0,v) \leq 3\epsilon+ C_{\Lambda} I_{1}(u_0,v) I_2(u_0,v).\n\end{equation*}\n\end{lemma}\n\begin{proof}\nIn this proof, we fix $u_0 \geq V$ and use the abbreviations\n\begin{equation} \label{eq:abbrev-overline-phi}\n\overline{\phi} := \overline{\phi}_{u_{0}} , \quad\n\partial_{v}(r \overline{\phi}) := \partial_{v}(r \overline{\phi}_{u_{0}}) \n\ \hbox{ and } \ \n\partial_{v}^{2}(r \overline{\phi}) := \partial_{v}^{2}(r \overline{\phi}_{u_{0}}).\n\end{equation}\nBy Lemma \ref{lem:est4phi}, we have\n\begin{equation} \label{phi.est}\n\t\abs{\overline{\phi}(u_0,v)} \n\t\leq \frac{1}{r} \int_{u_0}^{v} \abs{\partial_{v}(r \overline{\phi}) (u_0, v')} \, \mathrm{d} v' \n\t\leq \Lambda \sup_{u_0 \leq v' \leq v} \abs{\partial_{v}(r 
\\overline{\\phi})(u_0,v')}.\n\\end{equation}\n\nBy the fundamental theorem of calculus and \\eqref{dic.0.1}, note that\n\\begin{equation} \\label{dic.1.0}\n\\sup_{u_0 \\leq v' \\leq v} |\\partial_{v}(r \\overline{\\phi})(u_0,v')|\\leq I_1(u_0,v).\n\\end{equation}\n\nThus, recalling the definition of $F_1$ in \\eqref{eq:inv4F12}, we have\n\\begin{equation*}\n\\begin{split}\nI_1(u_0,v) \\leq &\\int_{u_0}^v |F_1(u_0,v')|dv'+\\int_{u_0}^v|\\partial_v\\lmb||\\overline{\\phi}|(u_0,v')dv' \\\\\n\\leq &\\int_{u_0}^v |F_1(u_0,v')|dv'+ \\Lambda\\, I_1(u_0,v) I_2(u_0,v) \n\t\t\\leq 3\\epsilon+C_{\\Lambda} I_{1}(u_0,v) I_2(u_0,v). \\qedhere\n\\end{split}\n\\end{equation*}\n\\end{proof}\n\nWe now move on to estimate $I_2(u_0,v)$.\n\\begin{lemma} \\label{lem.dic.2}\nThere exists a constant $C_{\\Lambda} > 0$ such that for every $V \\leq u_0 \\leq v$,\n\\begin{equation*} \nI_2(u_0,v)\\leq 3\\epsilon+ C_{\\Lambda} I_{1}(u_0,v)^{2} (1+I_{1}(u_0,v))^{2} (1+I_{2}(u_0,v))^{2} e^{C_{\\Lambda} I_{1}(u_0,v)^{2} (1+I_{2}(u_0,v))}. 
\n\\end{equation*}\n\\end{lemma}\n\\begin{proof}\nAgain, we fix $u_0 \\geq V$ and use the abbreviation \\eqref{eq:abbrev-overline-phi}, as well as\n\\begin{equation} \\label{eq:abbrev-du-overline-phi}\n\t\\partial_{u} (r \\overline{\\phi}) (u,v) := \\partial_{u} (r \\overline{\\phi}_{u_{0}})(u,v) .\n\\end{equation}\nRecalling the equation for $F_2$ in \\eqref{eq:inv4F12}, in order to control $I_2(u_0,v)$ from $F_2$, we need to estimate $\\int_{u_0}^{v} (\\frac{\\lambda}{1-\\mu} \\frac{\\mu}{r})(u_0,v') \\mathrm{d} v'$ and $\\int_{u_0}^v \\partial_{v} \\overline{\\phi} ( \\lmb^{-1} \\partial_{v} (r \\overline{\\phi}) - \\nu^{-1} \\partial_{u} ( r \\overline{\\phi})) (u_0,v') \\mathrm{d} v'.$\nBy Lemma \\ref{lem:auxEqs},\n$$\\int_{u_0}^{v} (\\frac{\\lambda}{1-\\mu} \\frac{\\mu}{r})(u_0,v') \\mathrm{d} v' = \\log (1-\\mu(u_0,v))+\\int_{u_0}^v \\frac{r(\\partial_v \\overline{\\phi})^2}{\\lambda}(u_0,v') \\mathrm{d} v'.$$\nSince $\\calQ =\\calR$, the integrand on the left-hand side is non-negative. Notice furthermore that since $\\mu \\geq 0$, $\\log (1-\\mu(u_0,v)) <0$. Thus, \n\\begin{eqnarray*}\n\\int_{u_0}^{v} (\\frac{\\lambda}{1-\\mu} \\frac{\\mu}{r})(u_0,v') \\mathrm{d} v'\n&\\leq & |\\int_{u_0}^v \\frac{r(\\partial_v \\overline{\\phi})^2}{\\lambda}({u_0},v') \\mathrm{d} v'|\\\\\n&\\leq & \\int_{u_0}^v |\\partial_v \\overline{\\phi}({u_0},v')||(\\frac{\\partial_v(r\\overline{\\phi})}{\\lambda}- \\overline{\\phi})({u_0},v')| \\mathrm{d} v'\\\\\n&\\leq & 2 \\Lambda I_{1}({u_0},v) \\int_{u_0}^v |\\partial_v \\overline{\\phi}({u_0},v')| \\, \\mathrm{d} v' ,\n\\end{eqnarray*}\nwhere we have used \\eqref{phi.est} and \\eqref{dic.1.0} on the last line. 
Using Lemma \\ref{lem:est4dvphi}, we estimate the integral on the last line by\n\\begin{equation} \\label{dic.2.0}\n\\int_{u_0}^v |\\partial_v \\overline{\\phi}({u_0},v')|dv' \\leq\n\\int_{u_0}^v|\\partial_v(\\lambda^{-1}\\partial_v(r \\overline{\\phi}))({u_0},v')| \\mathrm{d} v' ,\n\\end{equation}\nand the right hand side can in turn be estimated using \\eqref{dic.1.0} by\n\\begin{eqnarray}\n\\int_{u_0}^v|\\partial_v(\\lambda^{-1}\\partial_v(r \\overline{\\phi}))({u_0},v')| \\mathrm{d} v' \n&\\leq &\\int_{u_0}^v \\lambda^{-1}|\\partial_v^2(r \\overline{\\phi})({u_0},v')| \\mathrm{d} v'+\\int_{u_0}^v \\lambda^{-2}|\\partial_v\\lambda\\partial_v(r \\overline{\\phi})({u_0},v')| \\mathrm{d} v'\\notag\\\\\n&\\leq &\\Lambda I_1({u_0},v)+ \\Lambda^2 I_{1}({u_0},v) I_2({u_0},v). \\notag\n\\end{eqnarray}\n\nTherefore, we have\n\\begin{equation}\\label{dic.2.1}\n\\int_{u_0}^{v} (\\frac{\\lambda}{1-\\mu} \\frac{\\mu}{r})({u_0},v') \\mathrm{d} v'\\leq C_{\\Lambda} I_1({u_0},v)^{2} (1+ I_{2}({u_0},v)).\n\\end{equation}\n\n\n\nWe now move on to bound $\\int_{u_0}^v \\partial_{v} \\overline{\\phi} \\, \\lmb^{-1} \\partial_{v} (r \\overline{\\phi}) ({u_0}, v') \\mathrm{d} v'$. Using \\eqref{dic.1.0} and \\eqref{dic.2.0}, we easily estimate\n\\begin{align}\n\\int_{u_0}^v \\abs{\\partial_{v} \\overline{\\phi} \\, \\lmb^{-1} \\partial_{v} ( r \\overline{\\phi}) ({u_0},v')} \\mathrm{d} v' \\notag\n\\leq & \\Lambda \\int_{{u_0}}^{v} \\abs{\\partial_{v} \\overline{\\phi}}({u_0}, v') \\, \\mathrm{d} v' \\sup_{{u_0} \\leq v' \\leq v} \\abs{\\partial_{v} ( r \\overline{\\phi}) ({u_0},v')} \\\\\n\\leq & C_{\\Lambda} I_{1}({u_0},v)^{2} (1+I_{2}({u_0},v)). \\label{dic.2.2}\n\\end{align}\n\nFinally, we are only left to bound $- \\int_{u_0}^v \\partial_{v} \\overline{\\phi} \\, \\nu^{-1} \\partial_{u} ( r \\overline{\\phi}) ({u_0},v') \\mathrm{d} v'$. 
As before, we begin by estimating\n\\begin{align}\n\\int_{u_0}^v \\abs{\\partial_{v} \\overline{\\phi} \\, \\nu^{-1} \\partial_{u} ( r \\overline{\\phi}) ({u_0},v')} \\mathrm{d} v'\n\\leq & \\int_{{u_0}}^{v} \\abs{\\partial_{v} \\overline{\\phi}}({u_0}, v') \\, \\mathrm{d} v' \\sup_{{u_0} \\leq v' \\leq v} \\abs{\\nu^{-1} \\partial_{u} ( r \\overline{\\phi}) ({u_0},v')} \\notag \\\\\n\\leq & C_{\\Lambda} I_{1}({u_0},v) (1+I_{2}({u_0},v)) \\sup_{{u_0} \\leq v' \\leq v} \\abs{\\nu^{-1} \\partial_{u} ( r \\overline{\\phi}) ({u_0},v')}. \\label{dic.2.3}\n\\end{align}\n\nIn this case, we do not wish to pull out $\\nu$ as we have not assumed any bound on it. Instead, we consider $\\nu^{-1} \\partial_{u} ( r \\overline{\\phi})$ as a whole and note that\n\\begin{equation}\\label{inv.wave.eqn}\n\\partial_{v}(\\nu^{-1} \\partial_{u} ( r \\overline{\\phi}) ) \n= \t- \\Big( \\frac{\\lmb}{1-\\mu} \\frac{\\mu}{r} \\Big) \\nu^{-1} \\partial_{u} ( r \\overline{\\phi}) \n\t+ \\Big( \\frac{\\lmb}{1-\\mu} \\frac{\\mu}{r} \\Big) \\overline{\\phi}.\n\\end{equation}\nThe equation \\eqref{inv.wave.eqn} holds since by \\eqref{eq:SSESF:dr} and \\eqref{eq:SSESF:dphi}, we have \n\\begin{equation*}\n\\partial_{v}(\\nu^{-1} \\partial_{u} ( r {\\phi}) ) \n= \t- \\Big( \\frac{\\lmb}{1-\\mu} \\frac{\\mu}{r} \\Big) \\nu^{-1} \\partial_{u} ( r {\\phi}) \n\t+ \\Big( \\frac{\\lmb}{1-\\mu} \\frac{\\mu}{r} \\Big) {\\phi}\n\\end{equation*}\nand moreover both the left hand side and the right hand side of the equation are invariant under the transformation $\\phi\\to \\phi+c$.\n\nTherefore, by the variation of constants formula and \\eqref{dic.0.1}, we have\n\\begin{equation*}\n\t\\nu^{-1} \\partial_{u} (r \\overline{\\phi})(u_0, v)\n\t= e^{-J({u_0},v)} \\int_{{u_0}}^{v} e^{J({u_0},v')} \\frac{\\lmb}{1-\\mu} \\frac{\\mu}{r} \\, \\overline{\\phi} ({u_0}, v') \\, \\mathrm{d} v',\n\\end{equation*}\nwhere\n\\begin{equation*}\n\tJ({u_0},v) := \\int_{{u_0}}^{v} \\frac{\\lmb}{1-\\mu} \\frac{\\mu}{r} ({u_0}, v') 
\\, \\mathrm{d} v'.\n\\end{equation*}\n\nBy \\eqref{dic.1.0} and \\eqref{dic.2.1}, we have\n\\begin{equation*}\n\t\\sup_{{u_0} \\leq v' \\leq v}\\abs{\\nu^{-1} \\partial_{u} (r \\overline{\\phi}) ({u_0},v')} \n\t\\leq C_{\\Lambda} I_{1}({u_0},v)^{3} (1+I_{2}({u_0},v)) e^{C_{\\Lambda} I_{1}({u_0},v)^{2} (1+I_{2}({u_0},v))}.\n\\end{equation*}\n\nThen by \\eqref{dic.2.3}, we conclude that \n\\begin{equation} \\label{dic.2.4}\n\\int_{u_0}^v \\abs{\\partial_{v} \\overline{\\phi} \\, \\nu^{-1} \\partial_{u} ( r \\overline{\\phi}) ({u_0},v')} \\mathrm{d} v'\n\t\\leq C_{\\Lambda} I_{1}({u_0},v)^{4} (1+I_{2}({u_0},v))^{2} e^{C_{\\Lambda} I_{1}({u_0},v)^{2} (1+I_{2}({u_0},v))}.\n\\end{equation}\n\nCombining \\eqref{dic.2.1}, \\eqref{dic.2.2} and \\eqref{dic.2.4}, we conclude that \\eqref{main.ineq.dichotomy.2} holds. \\qedhere\n\\end{proof}\n\nNext, we apply \\eqref{main.ineq.dichotomy.1} and \\eqref{main.ineq.dichotomy.2} to show that \n\\begin{proposition} \\label{dic.main.prop}\nFor ${u_0}$ sufficiently large and $v\\geq {u_0}$, we have\n$$I_1({u_0},v)+I_2({u_0},v)\\leq C_{\\Lambda} \\epsilon.$$\n\\end{proposition}\n\\begin{remark}\nIf it is the case that $\\int_{u_0}^v(|\\partial_v^2(r\\overline{\\phi})| + |\\partial_v \\lambda|)({u_0},v')dv'$ is continuous in $v$ for each fixed ${u_0}$, then the desired conclusion follows from \\eqref{main.ineq.dichotomy.1} and \\eqref{main.ineq.dichotomy.2} via a simple continuity argument in $v$. In particular, the conclusion follows in the case where the initial data of $\\partial_v(r \\phi)$ are in $W^{1,1}$ or $C^{1}$. The only remaining difficulty is therefore to control the size of the delta function singularities in $\\partial_v^2(r\\overline{\\phi})$ in the general case where we only have a BV solution.\n\\end{remark}\n\\begin{proof}\nWe begin by studying the propagation of discontinuities for a BV solution to \\eqref{eq:SSESF}. 
In the general case where $\\partial_v(r \\phi)(1, \\cdot)$ is only in BV and contains jump discontinuities (at which $\\partial_{v}(r \\phi)(1, \\cdot)$ is assumed to be right-continuous), notice that the jump discontinuities for a BV function are discrete, i.e., they occur only at a (possibly infinite) increasing sequence of points $v_{1} < v_{2} < \\cdots$. Take $C_{\\Lambda} > 1$ to be larger than the maximum of the constants from \\eqref{main.ineq.dichotomy.1} and \\eqref{main.ineq.dichotomy.2}. First, a standard continuity argument using \\eqref{main.ineq.dichotomy.1} and \\eqref{main.ineq.dichotomy.2} implies that if \n$$\\lim_{v\\to v_i+}\\int_{u_0}^v |\\partial_v^2(r\\overline{\\phi}) ({u_0},v')| \\mathrm{d} v'\\leq 5 C_{\\Lambda}\\epsilon\n\\hbox{ and }\n\\lim_{v\\to v_i+}\\int_{u_0}^v |\\partial_v \\lambda({u_0},v')| \\mathrm{d} v'\\leq 5\\epsilon,$$\n(with the convention $v_{0} := {u_0}$) then\n$$\\int_{u_0}^v |\\partial_v^2(r\\overline{\\phi}) ({u_0},v')| \\mathrm{d} v'\\leq 4 C_{\\Lambda}\\epsilon\n\\hbox{ and }\n\\int_{u_0}^v |\\partial_v \\lambda({u_0},v')| \\mathrm{d} v'\\leq 4 \\epsilon,$$\nfor $v_i< v< v_{i+1}$.\n\nAssume, for the sake of contradiction, that the conclusion of the proposition is not satisfied. Recall that the integral of $\\abs{\\partial_v\\lambda}$ is continuous. Thus, for some $v_i$ with $i > 0$,\n$$\\lim_{v\\to v_i-}\\int_{u_0}^v |\\partial_v^2(r\\overline{\\phi}) ({u_0},v')| \\mathrm{d} v'\\leq 4 C_{\\Lambda}\\epsilon$$\nholds, but at the same time\n$$\\lim_{v\\to v_i+}\\int_{u_0}^v |\\partial_v^2(r\\overline{\\phi}) ({u_0},v')| \\mathrm{d} v' > 5 C_{\\Lambda}\\epsilon.$$\n\nHowever, we have seen that the size of the jump in $I_1({u_0},v)$ is bounded by $\\epsilon$, which is smaller than $C_{\\Lambda} \\epsilon$ if $C_{\\Lambda}>1$. 
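For concreteness, the arithmetic behind the ensuing contradiction is simple: since the jump of $I_1({u_0},\\cdot)$ at $v_i$ is at most $\\epsilon$,\n$$\\lim_{v\\to v_i+}\\int_{u_0}^v |\\partial_v^2(r\\overline{\\phi}) ({u_0},v')| \\mathrm{d} v'\n\\leq \\lim_{v\\to v_i-}\\int_{u_0}^v |\\partial_v^2(r\\overline{\\phi}) ({u_0},v')| \\mathrm{d} v' + \\epsilon\n\\leq 4 C_{\\Lambda}\\epsilon + \\epsilon\n< 5 C_{\\Lambda}\\epsilon,$$\nwhere the last inequality uses $C_{\\Lambda} > 1$.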
This leads to a contradiction and thus the conclusion of the proposition holds.\n\\end{proof}\n\nWe are now ready to conclude the proof of Theorem \\ref{thm.dichotomy}.\n\\begin{proof} [Conclusion of Proof of Theorem \\ref{thm.dichotomy}]\nWe first establish \\eqref{dichotomy.goal}. In what follows, we use the abbreviations in \\eqref{eq:abbrev-overline-phi} and \\eqref{eq:abbrev-du-overline-phi}, such as $\\overline{\\phi} = \\overline{\\phi}_{u_{0}}$ etc. The idea is to transform back, $(\\overline{\\phi}, r, m) \\mapsto (\\phi, r, m)$, using \\eqref{dic.0.2}. Note that $\\abs{\\partial_{v} \\lmb}$ remains the same under this change, so it suffices to estimate $\\abs{\\partial_{v}^{2} (r \\phi)}$. By \\eqref{dic.2.0} and Proposition \\ref{dic.main.prop}, for sufficiently large ${u_0}$, the limit $\\overline{\\phi}({u_0}, \\infty) := \\lim_{v \\to \\infty} \\overline{\\phi}({u_0},v)$ exists and satisfies\n\\begin{equation*}\n\t\\abs{\\overline{\\phi}({u_0}, \\infty)} \\leq C_{\\Lambda} \\epsilon,\n\\end{equation*}\nwhere we note that $C_{\\Lambda}$ is independent of ${u_0}$.\n\nBy \\eqref{dic.0.2}, we have $\\phi(u,v) = \\overline{\\phi}(u,v) - \\overline{\\phi}(u, \\infty)$ for all $u$. 
Thus, using Proposition \\ref{dic.main.prop}, we estimate\n\\begin{align*}\n\t\\int_{{u_0}}^{\\infty} \\abs{\\partial_{v}^{2} ( r\\phi)({u_0},v)} \\, \\mathrm{d} v\n\t= & \\int_{{u_0}}^{\\infty} \\abs{\\partial_{v}^{2} ( r \\overline{\\phi}({u_0},v) - r \\overline{\\phi}({u_0}, \\infty))} \\, \\mathrm{d} v \\\\\n\t\\leq & \\int_{{u_0}}^{\\infty} \\abs{\\partial_{v}^{2} (r \\overline{\\phi})({u_0},v)} \\, \\mathrm{d} v \n\t\t+ \\abs{\\overline{\\phi}({u_0},\\infty)} \\int_{{u_0}}^{\\infty} \\abs{\\partial_{v} \\lmb ({u_0},v)} \\, \\mathrm{d} v \\\\\n\t\\leq & C_{\\Lambda} (\\epsilon + \\epsilon^{2}).\n\\end{align*}\nSince $u_0\\geq V$ is arbitrary, this proves \\eqref{dichotomy.goal}.\n\nFinally, we prove that the conditions $(2)$ and $(3)$ of Definition \\ref{def:locBVScat} hold.\nIndeed, since $\\partial_{v} \\log \\lmb = \\lmb^{-1} \\partial_{v} \\lmb$, $(3)$ in Definition \\ref{def:locBVScat} follows from \\eqref{dichotomy.goal} and \\eqref{dic.bnd4dvr}. In fact, $(3)$ in Definition \\ref{def:locBVScat} holds with arbitrarily large $R > 0$. Next, by \\eqref{eq:SSESF:dm}, non-negativity of $1-\\mu$ and $\\mu$ (by Lemma \\ref{lem:mntn4r}) and the fact that $m$ is invariant under $\\phi \\mapsto \\overline{\\phi}$,\n\\begin{equation*}\n\tm(u_0, v) \n\t\\leq \\frac{1}{2} \\sup_{u_0 \\leq v' \\leq v} \\abs{(\\lmb^{-1} \\partial_{v}(r \\overline{\\phi}) - \\overline{\\phi})(u_0, v')} \\int_{u_0}^{v} \\abs{\\partial_{v} \\overline{\\phi}(u_0, v')} \\, \\mathrm{d} v'\n\\end{equation*}\nwhere the right-hand side is $\\leq C_{\\epsilon, \\Lambda} \\epsilon$ (with $C_{\\epsilon, \\Lambda}$ non-decreasing in $\\epsilon$) by the estimates proved so far. Therefore, $(2)$ of Definition \\ref{def:locBVScat} follows.\nThis concludes the proof of Theorem \\ref{thm.dichotomy}. \\qedhere\n\\end{proof}\n\n\\section{Refinement in the small data case} \\label{sec:smallData}\nIn this section, we sketch a proof of Theorem \\ref{thm:smallData}. 
The idea is to revisit the proofs of the main theorems (Theorems \\ref{main.thm.1} and \\ref{main.thm.2}), and notice that all the required smallness can be obtained by taking initial total variation of $\\partial_{v}(r \\phi)$ small. Key to this idea is the following lemma.\n\n\\begin{lemma} \\label{lem:smallData}\nThere exist universal constants $\\epsilon_{0}$ and $C_{0}$ such that for $\\epsilon < \\epsilon_{0}$, the following holds. Suppose that $\\lmb(1, \\cdot) = \\frac{1}{2}$ and $\\partial_{v}(r \\phi)(1, \\cdot)$ is of bounded variation with\n\\begin{equation} \\label{eq:smallData:hyp}\n\t\\int_{C_{1}} \\abs{\\partial_{v}^{2} (r \\phi)} < \\epsilon.\n\\end{equation}\n\nSuppose furthermore that $\\lim_{v \\to \\infty} \\phi(1, v) = 0$. Then the maximal development $(\\phi, r, m)$ satisfies condition $(1)$ of Definition \\ref{def:locBVScat} (future completeness of radial null geodesics) and obeys\n\\begin{gather}\n\t\\sup_{v \\in [1, \\infty)} \\int_{\\underline{C}_{v}} \\abs{\\frac{\\mu }{(1-\\mu)} \\frac{\\nu}{r}} \n\t+ \\sup_{u \\in [1, \\infty)} \\int_{C_{u}} \\abs{\\frac{\\mu }{(1-\\mu)} \\frac{\\lmb}{r}} \\leq C_{0} \\epsilon^{2},\t\\label{eq:smallData:smallPtnl} \\\\\n\t\\sup_{v \\in [1, \\infty)} \\int_{\\underline{C}_{v}} \\abs{\\partial_{u} \\phi}\n\t+ \\sup_{u \\in [1, \\infty)} \\int_{C_{u}} \\abs{\\partial_{v} \\phi} \\leq C_{0} \\epsilon,\t\t\t\t \\label{eq:smallData:smallDphi} \\\\\n\t\\sup_{v \\in [1, \\infty)} \\int_{\\underline{C}_{v}} (\\abs{\\partial_{u}^{2} (r \\phi)} + \\partial_{u} \\log \\nu)\n\t+ \\sup_{u \\in [1, \\infty)} \\int_{C_{u}} (\\abs{\\partial_{v}^{2} (r \\phi)} + \\partial_{v} \\log \\lmb) \\leq C_{0} \\epsilon.\t\t\t\t \\label{eq:smallData:smallTV} \n\\end{gather}\n\nMoreover, the bounds in Proposition \\ref{prop:geomLocBVScat} hold with\n\\begin{equation} \\label{eq:smallData:geom}\n\tK + \\Lambda \\leq C_{0}, \\quad\n\t\\Psi \\leq C_{0} \\epsilon.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof} \nThis lemma is an 
easy consequence of Theorem \\ref{Chr.BV.Thm} and Lemma \\ref{lem:auxEqs} once we show \n\\begin{equation*}\n\\sup_{\\calQ} \\abs{\\partial_{v} (r \\phi)} \\leq C_{0} \\epsilon,\n\\end{equation*}\nusing the additional condition $\\lim_{v \\to \\infty} \\phi(1, v) = 0$. By Lemma \\ref{lem:est4dvphi}, note that $\\int_{C_{1}} \\abs{\\partial_{v} \\phi} \\leq C \\epsilon$; therefore, integrating from $v = \\infty$, we have $\\lim_{v \\to 1+} \\abs{\\phi(1, v)} \\leq C \\epsilon$. Then using \\eqref{eq:smallData:hyp} to integrate from $v = 1$, where we note that $\\lim_{v \\to 1+}\\phi(1, v) = \\lim_{v \\to 1+} \\partial_{v}(r \\phi)(1, v)$, we obtain\n\\begin{equation*}\n\t\\sup_{C_{1}} \\abs{\\partial_{v} (r \\phi)} \\leq C \\epsilon.\n\\end{equation*}\n\nUsing \\eqref{eq:SSESF:dphi'}, $\\partial_{u} \\lmb \\leq 0$, Lemma \\ref{lem:est4phi} (to control $\\abs{\\phi}$ from $\\abs{\\partial_{v}(r \\phi)}$) and $\\frac{1}{3} \\leq \\lmb \\leq \\frac{1}{2}$ (by Theorem \\ref{Chr.BV.Thm}), it follows that\n\\begin{align*}\n\t\\sup_{\\mathcal D(1, v)}\\abs{\\partial_{v}(r \\phi)} \n\t\\leq & \\sup_{1 \\leq v' \\leq v} \\abs{\\partial_{v} (r \\phi)(1, v')} + \\sup_{(u,v) \\in \\mathcal D(1,v)} \\sup_{1 \\leq u' \\leq u} \\abs{\\phi(u', v)} \\int_{1}^{u} (- \\partial_{u} \\lmb)(u', v) \\, \\mathrm{d} u' \\\\\n\t\\leq & C \\epsilon + \\frac{1}{2} \\sup_{\\mathcal D(1,v)} \\abs{\\partial_{v}(r \\phi)},\n\\end{align*} \nwhich proves $\\sup_{\\calQ} \\abs{\\partial_{v} (r \\phi)} \\leq C_{0} \\epsilon$, as desired. 
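To spell out the final absorption step: writing $S(v) := \\sup_{\\mathcal D(1,v)} \\abs{\\partial_{v}(r \\phi)}$, which is finite for each fixed $v$ (as the solution is BV), the preceding chain of inequalities reads $S(v) \\leq C \\epsilon + \\frac{1}{2} S(v)$, and hence\n\\begin{equation*}\n\tS(v) \\leq 2 C \\epsilon \\quad \\hbox{uniformly in } v,\n\\end{equation*}\nwhich gives the asserted bound on $\\sup_{\\calQ} \\abs{\\partial_{v}(r \\phi)}$.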
\\qedhere\n\\end{proof}\n\nEquipped with Lemma \\ref{lem:smallData}, we now proceed to outline the proof of Theorem \\ref{thm:smallData}.\n\\begin{proof} [Proof of $(1)$ in Theorem \\ref{thm:smallData}]\nThat $(\\phi, r, m)$ is globally BV scattering follows from Theorem \\ref{thm.dichotomy} and the fact that initial data with small total variation cannot lead to a development which blows up at infinity; the latter fact follows from the results proved by Christodoulou \\cite[Section 4, Theorem 6.2]{Christodoulou:1993bt}. \n\nIt remains to prove that \\eqref{eq:decay1:1}--\\eqref{eq:decay1:3} hold with $A_{1} \\leq C_{\\mathcal I_{1}} (\\mathcal I_{1} + \\epsilon)$ if $\\epsilon > 0$ is sufficiently small. By \\eqref{eq:decay1:H1}, it follows that Lemma \\ref{lem:decay1:cptu:0} holds with $H_{1} \\leq C_{\\mathcal I_{1}} (\\mathcal I_{1} + \\epsilon)$, and \\eqref{eq:decay1:extr} in Lemma \\ref{lem:decay1:extr} becomes\n\\begin{equation*} \\tag{\\ref{eq:decay1:extr}$'$}\n\t\\sup_{C_{u}} r^{\\omega} \\abs{\\partial_{v} (r \\phi)} \\leq C_{\\mathcal I_{1}} u_{1} (\\mathcal I_{1} + \\epsilon) + C M_{i} u_{1}^{-1} \\mathcal B_{1}(U).\n\\end{equation*}\n\nNote that $M_{i} \\leq C \\mathcal I_{1}^{2}$. Then repeating the arguments in \\S \\ref{sec.full.decay.1}, we see that \\eqref{eq:decay1:intr:pf:key} becomes\n\\begin{equation*} \\tag{\\ref{eq:decay1:intr:pf:key}$'$}\n\t\\mathcal B_{1}(U) \\leq C_{\\mathcal I_{1}} u_{1} (\\mathcal I_{1} + \\epsilon) + C (\\mathcal I_{1}^{2} u_{1}^{-1} + \\epsilon^{2}) \\mathcal B_{1}(U). \n\\end{equation*}\n\nIt is important to note that the constant $C$ in the last term does not depend on $\\mathcal I_{1}$. Take $u_{1} = 1000 C (1+\\mathcal I_{1})^{2}$. 
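With this choice of $u_{1}$, which depends only on $\\mathcal I_{1}$, the coefficient of $\\mathcal B_{1}(U)$ in the last display is under control: indeed,\n\\begin{equation*}\n\tC (\\mathcal I_{1}^{2} u_{1}^{-1} + \\epsilon^{2}) = \\frac{\\mathcal I_{1}^{2}}{1000 (1+\\mathcal I_{1})^{2}} + C \\epsilon^{2} \\leq \\frac{1}{1000} + C \\epsilon^{2} \\leq \\frac{1}{2}\n\\end{equation*}\nonce $\\epsilon$ is small enough, so the term involving $\\mathcal B_{1}(U)$ on the right-hand side may be absorbed into the left-hand side at the cost of a factor of $2$.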
Then for $\\epsilon > 0$ sufficiently small (independent of $\\mathcal I_{1}$), we derive\n\\begin{equation*}\n\t\\mathcal B_{1}(U) \\leq C_{\\mathcal I_{1}} (\\mathcal I_{1} + \\epsilon)\n\\end{equation*}\n\nIt then follows that \\eqref{eq:decay1:1} and \\eqref{eq:decay1:2} hold with $A_{1} \\leq C_{\\mathcal I_{1}} (\\mathcal I_{1} + \\epsilon)$. Applying Lemma \\ref{lem:decay1:uDecay4durphi}, we conclude that \\eqref{eq:decay1:3} holds with $A_{1} \\leq C_{\\mathcal I_{1}} (\\mathcal I_{1} + \\epsilon)$ as well. \\qedhere\n\\end{proof}\n\n\\begin{proof} [Proof of $(2)$ in Theorem \\ref{thm:smallData}]\nWe need to prove that \\eqref{eq:decay2:1}--\\eqref{eq:decay2:4} hold with $A_{2} \\leq C_{\\mathcal I_{2}} (\\mathcal I_{2} + \\epsilon)$. The key is to show that Proposition \\ref{prop:decay2:nullStr} holds with\n\\begin{equation} \\label{eq:smallData:pf2:key}\n\tA_{2}' \\leq C_{\\mathcal I_{2}} (\\mathcal I_{2} + \\epsilon)\n\\end{equation}\n\nIndeed, by the explicit bounds on the constants (in particular, \\eqref{eq:decay2:rDecay:1}, \\eqref{eq:decay2:rDecay:2}, \\eqref{eq:decay2:uDecayInExtr:1}, \\eqref{eq:decay2:uDecayInExtr:2}, \\eqref{eq:decay2:final:aux} and \\eqref{eq:decay2:A2''}), the desired conclusion easily follows once \\eqref{eq:smallData:pf2:key} is established.\n\nNote that $\\mathcal I_{1} \\leq \\mathcal I_{2}$ by definition, and thus $A_{1} \\leq C_{\\mathcal I_{2}} (\\mathcal I_{2} + \\epsilon)$ by the preceding proof. 
We furthermore claim that the following statements hold:\n\\begin{itemize}\n\\item Lemma \\ref{lem:decay2:key4nullStr} holds with\n\\begin{equation} \\label{eq:smallData:pf2:1}\n\t \\epsilon(u_{2}) \\leq C \\epsilon,\n\\end{equation}\nfor every $u_{2} \\geq 1$.\n\n\\item We have\n\\begin{equation} \\label{eq:smallData:pf2:2}\n\t H_{2}'(1) \\leq C_{\\mathcal I_{2}} (\\mathcal I_{2} + \\epsilon),\n\\end{equation}\nwhere we remind the reader that\n\\begin{equation*}\nH_{2}'(1) = \\sup_{\\set{(u,v) : u \\in [1, 3], \\, v \\in [u, 3u]}} u^{\\omega} \\Big( \\abs{\\partial_{v}^{2}(r \\phi)} + \\abs{\\partial_{u}^{2}(r \\phi)} + \\abs{\\partial_{v} \\lmb} + \\abs{\\partial_{u} \\nu}\\Big)\n\\end{equation*}\naccording to \\eqref{eq:decay2:def4H'2}.\n\\end{itemize}\n\nThe first claim follows easily from Lemma \\ref{lem:smallData} and \\eqref{eq:decay2:eps}. For the second claim, since $1 \\leq u \\leq 3$, it suffices to prove\\footnote{Recall that $\\mathcal D(1,9) = \\set{(u,v) : u \\in [1, 3], \\, v \\in [u, 3u]}$ is the domain of dependence of $C_{1} \\cap \\set{1 \\leq v \\leq 9}$.}\n\\begin{equation*}\n\t \\sup_{\\mathcal D(1,9)} \\Big( \\abs{\\partial_{v}^{2}(r \\phi)} + \\abs{\\partial_{u}^{2}(r \\phi)} + \\abs{\\partial_{v} \\lmb} + \\abs{\\partial_{u} \\nu}\\Big)\\leq C (\\mathcal I_{2} + \\epsilon),\n\\end{equation*}\nwhich follows from a persistence of regularity argument, similar to our proof of Lemma \\ref{lem:decay2:key4nullStr}. \n\nTo conclude the proof, recall that we had\n\\begin{equation*}\n\t\\mathcal B_{2}(U) \\leq H_{2}''(u_2) + \\epsilon''(u_{2}) \\mathcal B_{2}(U),\n\\end{equation*}\nwhere $\\mathcal B_{2}(U)$ was defined in \\eqref{eq:decay2:def4B2}, and $H_{2}''(u_2)$, $\\epsilon''(u_{2})$ obey the bounds in \\eqref{eq:decay2:H2''} and \\eqref{eq:decay2:eps''} respectively. Thanks to \\eqref{eq:smallData:pf2:1}, it follows that we may take $u_{2} = 1$ and $\\epsilon''(1) \\leq C\\epsilon$, where $C$ does not depend on $\\mathcal I_{2}$. 
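To see how these claims conclude the argument, note that once $C \\epsilon \\leq \\frac{1}{2}$, the preceding inequality with $u_{2} = 1$ can be absorbed:\n\\begin{equation*}\n\t\\mathcal B_{2}(U) \\leq H_{2}''(1) + C \\epsilon \\, \\mathcal B_{2}(U) \\quad \\Longrightarrow \\quad \\mathcal B_{2}(U) \\leq 2 H_{2}''(1).\n\\end{equation*}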
Next, since $u_{2} = 1$, we see that $H_{2}''(1) \\leq C_{\\mathcal I_{2}} (\\mathcal I_{2} + \\epsilon)$ by \\eqref{eq:smallData:pf2:2}. Therefore, for $\\epsilon > 0$ sufficiently small (independent of $\\mathcal I_{2}$), we conclude that\n\\begin{equation*}\n\t\\mathcal B_{2}(U) \\leq C_{\\mathcal I_{2}} (\\mathcal I_{2} + \\epsilon),\n\\end{equation*}\nwhich proves that Proposition \\ref{prop:decay2:nullStr} holds with \\eqref{eq:smallData:pf2:key}, as desired. \\qedhere\n\\end{proof}\n\n\n\\section{Optimality of the decay rates} \\label{sec.opt}\nIn this section, we show the optimality of the decay rates obtained above, i.e., we prove Theorems \\ref{thm.opt.1} and \\ref{thm.opt.2}. \n\n\\subsection{Optimality of the decay rates, in the case $1 < \\omega' < 3$} \\label{subsec.opt.1}\nIn this subsection, we prove Theorem \\ref{thm.opt.1}. More precisely, we will demonstrate that the proof of the upper bounds for $\\phi$ and its derivatives can in fact be sharpened to give also lower bounds for $\\partial_v(r\\phi)$ and $\\partial_u(r\\phi)$ if the initial data satisfy appropriate lower bounds for $\\omega<3$.\n\n\\begin{proof}[Proof of Theorem \\ref{thm.opt.1}]\n\nWe first prove the lower bound for $\\partial_v(r\\phi)$. We split the spacetime into the exterior region $\\PD_{\\mathrm{ext}}$ and interior region $\\PD_{\\mathrm{int}}$ as before. Notice that in the exterior region, $u\\lesssim r$ and it suffices to prove a lower bound for $r^\\omega\\partial_v(r\\phi)$. Similarly, in the interior region, $r\\lesssim u$ and it suffices to prove a lower bound for $u^\\omega\\partial_v(r\\phi)$.\n\nRevisiting the proof of Lemma \\ref{lem:decay1:extr}, we note that instead of controlling $\\partial_v(r\\phi)$ by the initial data and error terms, we can bound the difference between $\\partial_v(r\\phi)(u,v)$ and the corresponding initial value of $\\partial_v(r\\phi)(1,v)$. 
More precisely, from the proof of Lemma \\ref{lem:decay1:extr}, we have\n\n\\begin{align*}\n\t\\abs{\\partial_{v} (r \\phi)(u,v)-\\partial_{v} (r \\phi)(1, v)}\n\t\\leq \\frac{u_{1} K M_{i}}{r^2(u,v)(1+r(u,v))} H_{1} + \\frac{K M(u_{1})}{u_{1} r^{\\omega}(u,v)} \\mathcal B_{1}(U)\n\\end{align*}\nin the case $2<\\omega<3$ and\n\\begin{align*}\n\t\\abs{\\partial_{v} (r \\phi)(u,v)-\\partial_{v} (r \\phi)(1, v)}\n\t\t\\leq \\frac{\\omega K M_{i}}{r(u,v)(1+r(u,v))} H_{1} + \\frac{\\omega K M(u_{1})}{u_{1} r^{\\omega}(u,v)} \\mathcal B_{1}(U).\n\\end{align*}\nin the case $1<\\omega\\leq 2$. By the decay results proved in \\S \\ref{sec.full.decay.1}, we have\n$$\\sup_u(H_{1}+ \\mathcal B_{1}(u))\\leq A$$\nfor some constant $A$. Therefore, by choosing $u_1$ sufficiently large, we have in the region $3u\\leq v$,\n$$r^{\\omega}\\abs{\\partial_{v} (r \\phi)(u,v)-\\partial_{v} (r \\phi)(1, v)}\\leq \\frac{L}{4},$$\nas long as $u\\geq u_1$.\nWe now apply the assumption on the lower bound for the initial data $r^{\\omega} \\partial_{v} (r \\phi)(1, v)\\geq L$ for $v\\geq V$. Choosing $u$ larger if necessary, we can assume that $u\\geq V$. Then, we derive that in $3u\\leq v$,\n$$r^{\\omega}\\partial_{v} (r \\phi)(u,v)\\geq \\frac{L}{2}.$$\n\nWe now move to the interior region where $3u \\geq v$. To this end, we improve the bounds in \\eqref{eq:decay1:intr:pf:1}. First, notice that the lower bound in the exterior region implies that there exists $L'$ such that \n\\begin{eqnarray}\nu^{\\omega}\\partial_{v} (r \\phi)(u,v)\\geq L'\\label{lower.bd.L1}\n\\end{eqnarray}\nfor $3u \\leq v$. 
Then, integrating \\eqref{eq:SSESF:dphi} along the incoming direction from $(u\/3, v)$ to $(u, v)$, we get\n\\begin{equation*}\n\\begin{aligned}\n\t\\abs{\\partial_{v} (r \\phi) (u,v)-\\partial_{v} (r \\phi) (u\/3,v)} \n\t\\leq \\frac{1}{2} (\\sup_{u' \\in [u\/3, u]} \\sup_{C_{u'}} \\abs{\\phi}) \\int_{u\/3}^{u} \\abs{\\frac{2m \\nu}{(1-\\mu) r^{2}} (u', v)} \\, \\mathrm{d} u'.\n\\end{aligned}\n\\end{equation*} \nBy Theorem \\ref{main.thm.1}, we have \n$$\\sup_{C_u}|\\phi|\\leq A_{1} u^{-\\omega}$$\nfor some $A_{1} > 0$. Lemma \\ref{lem:smallPtnl} implies that \n$$\\int_{u\/3}^{u} \\abs{\\frac{2m \\nu}{(1-\\mu) r^{2}} (u', v)} \\, \\mathrm{d} u'\\to 0$$\nas $u\\to \\infty$. Thus the right hand side can be bounded by $\\frac{L' u^{\\omega}}{2}$ after choosing $u$ to be sufficiently large.\nCombining this with the lower bound \\eqref{lower.bd.L1}, we have\n$$u^{\\omega}\\partial_{v} (r \\phi) (u,v) \\geq \\frac{L'}{2}$$\nfor $3u \\leq v$ and $u$ sufficiently large.\n\nWe now proceed to obtain the lower bound for $\\partial_u(r\\phi)$ by revisiting the proof of Lemma \\ref{lem:decay1:uDecay4durphi}. Integrating \\eqref{eq:SSESF:dphi} along the outgoing direction from $(u, u)$ to $(u, v)$, we have\n\\begin{align}\\label{est.durphi.diff}\n\t\\abs{\\partial_{u} (r \\phi)(u,v) - \\lim_{v' \\to u+} \\partial_{u} (r \\phi)(u, v')}\n\t\\leq & \\int_{C_{u}} \\abs{\\frac{\\mu\\lmb\\nu}{(1-\\mu)r}\\phi}.\n\\end{align}\nAs before, we Theorem \\ref{main.thm.1}, i.e.,\n$$\\sup_{C_u}|\\phi|\\leq A_{1} u^{-\\omega}$$\nfor some $A_{1} > 0$. 
By Lemma \\ref{lem:smallPtnl} and the upper bound \\eqref{eq:bnd4dur} for $|\\nu|$, we have\n$$\\int_{C_{u}} \\abs{\\frac{\\mu\\lmb\\nu}{(1-\\mu)r}} \\to 0, \\quad\\mbox{as }u\\to\\infty.$$\nTherefore, we can choose $u$ sufficiently large such that\n$$u^{\\omega} \\int_{C_{u}} \\abs{\\frac{\\mu\\lmb\\nu}{(1-\\mu)r}\\phi}\\leq \\frac{L'}{4}.$$\nReturning to \\eqref{est.durphi.diff} and recalling that for $u$ large,\n$$-\\lim_{v' \\to u+} \\partial_{u} (r \\phi)(u, v')=\\lim_{v' \\to u+} \\partial_{v} (r \\phi)(u, v')\\geq \\frac{L'}{2}u^{-\\omega},$$\nwe have\n$$-\\partial_{u} (r \\phi)(u,v) \\geq \\frac{L'}{4}u^{-\\omega}$$\nfor $u$ sufficiently large, as desired. \\qedhere\n\n\\end{proof}\n\n\\subsection{Key lower bound lemma}\nThe goal of the remainder of this section is to prove Theorem \\ref{thm.opt.2}. In this subsection we establish the following result, which provides a sufficient condition for the desired lower bounds on the decay of $\\phi$ in terms of a number (called $\\mathfrak{L}$) computed on $\\mathcal I^{+}$. This will be an important ingredient for our proof of Theorem \\ref{thm.opt.2} in the next subsection.\n\n\\begin{lemma}[Key lower bound lemma] \\label{lem:LB}\nLet $(\\phi, r, m)$ be a $C^{1}$ solution to \\eqref{eq:SSESF} which is locally BV scattering and asymptotically flat initial data of order $\\omega = 3$ in $C^{1}$. \nSuppose furthermore that \n\\begin{equation*}\n\t\\mathfrak{L} := \\lim_{v \\to \\infty} r^{3} \\partial_{v} (r \\phi)(1, v) + \\int_{1}^{\\infty} (M \\nu_{\\infty} \\Phi)(u) \\, \\mathrm{d} u \\neq 0,\n\\end{equation*}\nwhere $M(u) := \\lim_{v \\to \\infty} m(u, v)$, $\\nu_{\\infty}(u) := \\lim_{v \\to \\infty} \\nu(u,v)$ and $\\Phi(u) := \\lim_{v \\to \\infty} r \\phi(u,v)$. 
\nThen there exist constants $U, L_{3} > 0$ such that the following lower bounds for the decay of $\\partial_{v}(r \\phi)$, $\\partial_{u} (r \\phi)$ hold on $\\set{(u, v) : u \\geq U}$.\n\\begin{align} \n\t\\abs{\\partial_{v}(r \\phi)(u, v)} \\geq & L_{3} \\min \\set{r(u,v)^{-3}, u^{-3}}, \\label{eq:LB:1} \\\\\n\t\\abs{\\partial_{u}(r \\phi)(u, v)} \\geq & L_{3} u^{-3}. \\label{eq:LB:2}\n\\end{align}\n\\end{lemma}\n\n\\begin{remark} \nNote that $\\partial_{v}(r \\phi)$ and $\\partial_{u}(r \\phi)$ have definite signs by \\eqref{eq:LB:1}, \\eqref{eq:LB:2}. In fact, the proof below shows that the signs of $\\partial_{v} (r \\phi)$ and $-\\partial_{u} (r \\phi)$ agree with that of $\\mathfrak{L}$.\n\\end{remark}\n\n\n\\begin{proof} \nWithout loss of generality, assume that $\\mathfrak{L} > 0$. For $0 < \\eta \\leq 1$, define the \\emph{$\\eta$-exterior region} by\n\\begin{equation*}\n\t\\PD_{\\mathrm{ext}}^{\\eta} := \\set{(u,v) \\in \\calQ : u \\leq \\eta v}.\n\\end{equation*}\n\n\\pfstep{Step 1} In the first step, we make precise the relation between $r$ and $v$ in $\\PD_{\\mathrm{ext}}^{\\eta}$ for small $\\eta$. 
We claim that $r \\sim v\/2$ in this region; more precisely,\n\\begin{equation} \\label{eq:LB:pf:1}\n\\abs{\\frac{r(u,v)}{v} - \\frac{1}{2}} \\leq \\eta C_{A_{1}, A_{2}, K, \\Lambda}.\n\\end{equation}\n\nIntegrating by parts, we have\n\\begin{align*}\nr(u,v) \n= \\int_{u}^{v} \\lmb(u, v') \\, \\mathrm{d} v' \n= - \\int_{u}^{v} \\partial_{v} \\lmb(u, v') v' \\, \\mathrm{d} v' + v \\lmb(u, v) - u \\lmb(u, u).\n\\end{align*}\n\nTo make the leading term $v \\lmb(u,v)$ and the small number $\\frac{u}{v}$ explicit, we rewrite the last expression as follows:\n\\begin{equation*}\n\tr(u,v) = v \\Big[ \\lmb(u,v) - \\frac{u}{v} \\Big( \\lmb(u,u) + \\int_{u}^{v} \\partial_{v} \\lmb(u, v') \\frac{v'}{u} \\, \\mathrm{d} v' \\Big) \\Big].\n\\end{equation*}\n\nRecall that $\\lmb$ is uniformly bounded from above and below on $\\calQ$, i.e., $\\Lambda^{-1} \\leq \\lmb \\leq 1\/2$. Moreover, by the decay estimates for $\\partial_{v} \\lmb$ proved in Theorem \\ref{main.thm.2}, we have\n\\begin{equation*}\n\\sup_{(u,v) \\in \\calQ} \\int_{u}^{v} \\abs{\\partial_{v} \\lmb(u, v')} \\frac{v'}{u} \\, \\mathrm{d} v' \\leq C_{A_{2}}.\n\\end{equation*}\n\nAs a consequence,\n\\begin{equation*}\n\\abs{\\frac{r(u,v)}{v} - \\lmb(u,v)} \\leq \\eta C_{A_{2}, \\Lambda}.\n\\end{equation*}\n\nThus \\eqref{eq:LB:pf:1} will follow once we establish\n\\begin{equation} \\label{eq:LB:pf:1:1}\n\t\\abs{\\lmb(u,v) - \\frac{1}{2}} \\leq \\eta^{2} C_{A_{1}, A_{2}, K, \\Lambda}.\n\\end{equation}\n\nThis inequality is proved by integrating the decay estimate \\eqref{eq:decay2:10} for $\\partial_{u} \\lmb = \\partial_{u} \\partial_{v} r$ along the incoming direction, starting from the normalization $\\lmb(1, v) = 1\/2$. \nHere, we use the easy geometric fact that if $(u,v)$ lies in $\\PD_{\\mathrm{ext}}^{\\eta}$, then so does the incoming null curve from $(1,v)$ to $(u,v)$. 
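Explicitly, \\eqref{eq:LB:pf:1} then follows from the triangle inequality: for $(u,v) \\in \\PD_{\\mathrm{ext}}^{\\eta}$,\n\\begin{equation*}\n\t\\abs{\\frac{r(u,v)}{v} - \\frac{1}{2}} \\leq \\abs{\\frac{r(u,v)}{v} - \\lmb(u,v)} + \\abs{\\lmb(u,v) - \\frac{1}{2}} \\leq \\eta C_{A_{2}, \\Lambda} + \\eta^{2} C_{A_{1}, A_{2}, K, \\Lambda} \\leq \\eta C_{A_{1}, A_{2}, K, \\Lambda},\n\\end{equation*}\nwhere the last step uses $0 < \\eta \\leq 1$.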
\n\n\\pfstep{Step 2} We claim that for $U_{1} \\geq 1$ sufficiently large and $0 < \\eta \\leq 1$ suitably small, we have\n\\begin{equation} \\label{eq:LB:pf:2}\n\t\\partial_{v} (r \\phi)(u,v) \\geq \\frac{\\mathfrak{L}}{2} \\Big( \\frac{v}{2} \\Big)^{-3}\n\\end{equation}\nfor $(u, v) \\in \\PD_{\\mathrm{ext}}^{\\eta} \\cap \\set{u \\geq U_{1}}$.\n\nWe begin with\n\\begin{equation} \\label{eq:LB:pf:2:1}\n\t\\Big( \\frac{v}{2} \\Big)^{3} \\partial_{v}(r \\phi)(u,v) = \\Big( \\frac{v}{2} \\Big)^{3} \\partial_{v}(r \\phi)(1, v) + \\Big( \\frac{v}{2} \\Big)^{3} \\int_{1}^{u} \\frac{2m \\lmb \\nu}{(1-\\mu) r^{3}} r \\phi(u', v) \\, \\mathrm{d} u',\n\\end{equation}\nobtained by integrating the $\\partial_{u} \\partial_{v} (r \\phi)$ equation and multiplying by $(v\/2)^{3}$. To prove \\eqref{eq:LB:pf:2}, it suffices to show that the right-hand side of \\eqref{eq:LB:pf:2:1} is bounded from below by $\\mathfrak{L}\/2$ for $(u, v) \\in \\PD_{\\mathrm{ext}}^{\\eta} \\cap \\set{u \\geq U_{1}}$ with sufficiently large $U_{1} \\geq 1$ and small $0 < \\eta \\leq 1$.\n\nNote that $r = \\frac{v-1}{2}$ on $C_{1}$, and $v \\geq \\eta^{-1}$ if $(u,v) \\in \\PD_{\\mathrm{ext}}^{\\eta}$. Thus for $(u,v) \\in \\PD_{\\mathrm{ext}}^{\\eta}$ and $0 < \\eta \\leq 1$ sufficiently small, we have\n\\begin{equation*}\n\\abs{\\Big( \\frac{v}{2} \\Big)^{3} \\partial_{v} (r \\phi)(1, v) - \\lim_{v \\to \\infty} r^{3} \\partial_{v} (r \\phi)(1, v)} < \\frac{\\mathfrak{L}}{8}.\n\\end{equation*}\n\nIn order to proceed, it is useful to keep in mind the following technical point: For $U_{1} \\geq 1$, by the decay estimates \\eqref{eq:decay1:1} and \\eqref{eq:decay1:6}, we have\n\\begin{equation} \\label{eq:LB:pf:2:2}\n\t\\sup_{v \\geq U_{1}} \\int_{U_{1}}^{v} \\abs{\\frac{2m \\lmb \\nu}{1-\\mu} r \\phi (u', v)} \\, \\mathrm{d} u' \\leq U_{1}^{-6} C_{A_{1}, \\Lambda}.\n\\end{equation}\n\nIn what follows, let $(u, v) \\in \\PD_{\\mathrm{ext}}^{\\eta} \\cap \\set{u \\geq U_{1}}$. 
Using \\eqref{eq:LB:pf:1}, \\eqref{eq:LB:pf:2:2} and the fact that the null segment from $(1, v)$ to $(u, v)$ lies in $\\PD_{\\mathrm{ext}}^{\\eta}$, we get\n\\begin{equation*}\n\\Big\\vert \\Big( \\frac{v}{2} \\Big)^{3} \\int_{1}^{u} \\frac{2m \\lmb \\nu}{(1-\\mu) r^{3}} r \\phi(u', v) \\, \\mathrm{d} u' - \\int_{1}^{u} \\frac{2m \\lmb \\nu}{1-\\mu} r \\phi(u', v) \\, \\mathrm{d} u'\\Big\\vert\n\\leq \\eta C_{A_{1}, A_{2}, K, \\Lambda}.\n\\end{equation*}\n\nTaking $U_{1} \\geq 1$ large enough and using \\eqref{eq:LB:pf:2:2}, we may arrange \n\\begin{equation*}\n\t\\sup_{v \\geq U_{1}} \\int_{U_{1}}^{v} \\abs{\\frac{2m \\lmb \\nu}{1-\\mu} r \\phi (u', v)} \\, \\mathrm{d} u' + \\int_{U_{1}}^{\\infty} \\abs{M \\nu_{\\infty} \\Phi (u')} \\, \\mathrm{d} u' < \\frac{\\mathfrak{L}}{8}.\n\\end{equation*} \n\nOn the other hand, note that $2m \\lmb \\nu (1-\\mu)^{-1} r \\phi (u, v) \\to M \\nu_{\\infty} \\Phi(u)$ for each $u \\geq 1$ as $v \\to \\infty$. Therefore, by the dominated convergence theorem, for $0 < \\eta \\leq 1$ sufficiently small (so that $v$ is large), we have\n\\begin{align*}\n\t\\abs{\\int_{1}^{U_{1}} \\frac{2m \\lmb \\nu}{1-\\mu} r \\phi (u', v) \\, \\mathrm{d} u' - \\int_{1}^{U_{1}} M \\nu_{\\infty} \\Phi(u') \\, \\mathrm{d} u'} < \\frac{\\mathfrak{L}}{8}.\n\\end{align*}\n\nPutting these together and taking $0 < \\eta \\leq 1$ sufficiently small, we conclude \\eqref{eq:LB:pf:2}.\n\n\n\\pfstep{Step 3} Next, we claim that there exists $U_{2} = U_{2}(U_{1}, A_{2}, \\Lambda, K, \\eta) \\geq 1$ such that $U_{2} \\geq U_{1}$ and for $(u,v) \\in (\\calQ \\setminus \\PD_{\\mathrm{ext}}^{\\eta}) \\cap \\set{u \\geq U_{2}}$, we have\n\\begin{align} \n\t\\partial_{v}(r \\phi)(u, v) \\geq 2 \\eta^{3} \\mathfrak{L} \\, u^{-3}.\t\t\\label{eq:LB:pf:3}\n\\end{align}\n\nCombined with \\eqref{eq:LB:pf:2} (keeping in mind that $r \\sim v\/2$ in $\\PD_{\\mathrm{ext}}^{\\eta}$ by \\eqref{eq:LB:pf:1}), this would establish \\eqref{eq:LB:1}.\n\nTake $U_{2} \\geq \\eta^{-1} 
U_{1}$, and consider $(u, v) \\in (\\calQ \\setminus \\PD_{\\mathrm{ext}}^{\\eta}) \\cap \\set{u \\geq U_{2}}$. Integrating \\eqref{eq:SSESF:dphi}, we have\n\\begin{equation*}\n\t\\partial_{v} (r \\phi)(u,v) = \\partial_{v} (r \\phi)(\\eta u, v) + \\int_{\\eta u}^{u} \\frac{2m \\lmb \\nu}{r^{2}} \\phi(u', v) \\, \\mathrm{d} u'.\n\\end{equation*}\n \nNote that $(\\eta u, v) \\in \\PD_{\\mathrm{ext}}^{\\eta} \\cap \\set{u \\geq U_{1}}$ since $v \\geq u$ and $\\eta u \\geq \\eta U_{2} \\geq U_{1}$. Therefore, by \\eqref{eq:LB:pf:2} and the fact that $\\eta^{-1} u > v$ (as $(u,v) \\in \\calQ \\setminus \\PD_{\\mathrm{ext}}^{\\eta}$), the first term on the right-hand side obeys the lower bound\n\\begin{equation*}\n\t\\partial_{v} (r \\phi)(\\eta u, v) \\geq \\Big( \\frac{\\mathfrak{L}}{2} \\Big) \\Big( \\frac{v}{2} \\Big)^{-3} > 4 \\eta^{3} \\mathfrak{L} \\, u^{-3}.\n\\end{equation*}\n\nOn the other hand, using \\eqref{eq:decay1:1} and \\eqref{eq:decay2:11}, we have\n\\begin{equation*}\n\t\\abs{\\int_{\\eta u}^{u} \\frac{2m \\lmb \\nu}{r^{2}} \\phi(u', v) \\, \\mathrm{d} u'}\n\t\\leq C_{A_{1}, A_{2}, \\Lambda, K} \\int_{\\eta u}^{u} \\frac{1}{(u')^{10}} \\, \\mathrm{d} u' \n\t\\leq C_{A_{1}, A_{2}, \\Lambda, K, \\eta} \\, u^{-9}.\n\\end{equation*}\n\nTaking $U_{2}$ large enough, we conclude that \\eqref{eq:LB:pf:3} holds.\n\n\\pfstep{Step 4} Finally, we claim that there exists $U = U(U_{2}, A_{2}, \\Lambda, K, \\eta) \\geq 1$ such that $U \\geq U_{2} \\geq U_{1}$ and for $(u,v) \\in \\set{u \\geq U}$, we have\n\\begin{align} \n\t- \\partial_{u}(r \\phi)(u, v) \\geq \\eta^{3} \\mathfrak{L} \\, u^{-3}.\t\t\\label{eq:LB:pf:4}\n\\end{align}\n\nThis would prove \\eqref{eq:LB:2}, thereby completing the proof of Lemma \\ref{lem:LB}.\n\nOur argument will be very similar to the previous step. Take $U \\geq U_{2}$ and consider $(u,v) \\in \\set{u \\geq U}$. 
Integrating \\eqref{eq:SSESF:dphi} along the outgoing direction, we have\n\\begin{equation*}\n\t- \\partial_{u}(r \\phi)(u,v) = - \\partial_{u}(r \\phi)(u, u) - \\int_{u}^{v} \\frac{2 m \\lmb \\nu}{r^{2}} \\phi(u, v') \\, \\mathrm{d} v'.\n\\end{equation*}\n\nRecall that $\\lim_{v \\to u+} \\partial_{u} (r \\phi)(u, v) = - \\lim_{v \\to u+} \\partial_{v}(r \\phi)(u,v)$. By \\eqref{eq:LB:pf:3} and the fact that $u \\geq U \\geq U_{2}$, we see that the first term on the right-hand side obeys the lower bound\n\\begin{equation*}\n\t- \\partial_{u} (r \\phi)(u,u) \\geq 2 \\eta^{3} \\mathfrak{L} \\, u^{-3}.\n\\end{equation*}\n\nOn the other hand, using \\eqref{eq:decay1:1} and \\eqref{eq:decay2:11}, we have\n\\begin{equation*}\n\t\\abs{\\int_{u}^{v} \\frac{2 m \\lmb \\nu}{r^{2}} \\phi(u, v') \\, \\mathrm{d} v'}\n\t\\leq C_{A_{1}, A_{2}, K} \\int_{u}^{v} \\min \\set{u^{-10}, r^{-2} u^{-8}} \\, \\lmb \\, \\mathrm{d} v' \n\t\\leq C_{A_{1}, A_{2}, K} \\, u^{-9}.\n\\end{equation*}\n\nTaking $U$ sufficiently large, we conclude that \\eqref{eq:LB:pf:4} holds. \\qedhere\n\\end{proof}\n\n\\subsection{Optimality of the decay rates, in the case $\\omega' \\geq 3$} \\label{subsec.opt.2}\nIn this subsection, we prove Theorem \\ref{thm.opt.2} by studying the solution to \\eqref{eq:SSESF} arising from the initial value\n\\begin{equation*}\n\t\\partial_{v}(r \\phi)(1, v) = \\epsilon \\widetilde{\\chi}\\Big( \\frac{v - v_{0}}{N} \\Big),\n\\end{equation*}\nwhere $\\widetilde{\\chi} : (-\\infty, \\infty) \\to [0, \\infty)$ is a smooth function such that\n\\begin{equation*}\n\\mathrm{supp} \\, \\widetilde{\\chi} \\subset (-1\/2, 1\/2), \\quad\n\\int_{\\mathbb R} \\widetilde{\\chi} = 1.\n\\end{equation*}\n\nWe also require that $v_{0} \\geq 2$ and $N \\leq v_{0}$. 
With such data, the initial total variation is of size $\\leq C\\epsilon$, i.e.,\n\\begin{equation*}\n\t\\int_{1}^{\\infty} \\abs{\\partial_{v}^{2} (r \\phi) (1, v)} \\, \\mathrm{d} v \\leq \\epsilon \\int_{-\\infty}^{\\infty} \\abs{ \\widetilde{\\chi}\\,' \\Big( \\frac{v - v_{0}}{N} \\Big) } \\, \\frac{\\mathrm{d} v}{N} \\leq C \\epsilon.\n\\end{equation*}\n\nWe also see that $\\mathcal I_{1} \\leq C \\epsilon v_{0}^{3}$ and $\\mathcal I_{2} \\leq C \\epsilon v_{0}^{4} \/ N$ with $\\omega' = 3$, as\n\\begin{equation*}\n\t\\sup_{v \\in [1, \\infty)} (1+r)^{3} \\abs{\\partial_{v} (r \\phi)}(1,v) \\leq C \\epsilon v_{0}^{3}, \\quad\n\t\\sup_{v \\in [1, \\infty)} (1+r)^{4} \\abs{\\partial_{v}^{2} (r \\phi)}(1,v) \\leq C \\epsilon \\frac{v_{0}^{4}}{N}.\n\\end{equation*}\n\nWe are now ready to give a proof of Theorem \\ref{thm.opt.2}. The idea is to compute $\\mathfrak{L}$ to the leading order (which turns out to be $- c\\epsilon^{3}$ for some $c > 0$), and then control the lower order terms by taking $\\epsilon > 0$ sufficiently small and applying Theorem \\ref{thm:smallData}.\n\n\\begin{proof} [Proof of Theorem \\ref{thm.opt.2}]\nFor this proof, we fix $v_{0} = 4$ and $N = 1$. We use the shorthand\n\\begin{equation*}\n\t\\chi(v) := \\widetilde{\\chi}(v-4).\n\\end{equation*}\n\nBy the preceding discussion on the size of initial data, we see that Theorem \\ref{thm:smallData} applies when $\\epsilon > 0$ is sufficiently small. 
Therefore, there exists a constant $C > 0$ independent of $\\epsilon > 0$ such that Theorems \\ref{main.thm.1}, \\ref{main.thm.2} and Proposition \\ref{prop:geomLocBVScat} hold with\n\\begin{equation} \\label{eq:opt2:eps}\n\tA_{1}, A_{2} \\leq C \\epsilon, \\quad K, \\Lambda \\leq C.\n\\end{equation}\n\nWe begin by showing\n\\begin{equation} \\label{eq:opt2:dvrphi}\n\t\\partial_{v}(r \\phi)(u,v) = \\epsilon \\chi(v) + \\mathrm{Err}_{1}(u,v),\n\\end{equation}\nwhere\n\\begin{equation} \\label{eq:opt2:Err1}\n\t\\abs{\\mathrm{Err}_{1}(u,v)} \\leq C \\epsilon^{3} \\min \\set{u^{-3}, r(u,v)^{-3}}.\n\\end{equation}\n\nThe argument is similar to the proof of Theorem \\ref{thm.opt.1}, but this time we rely on Theorem \\ref{thm:smallData} to make the dependence of $\\mathrm{Err}_{1}$ on $\\epsilon$ explicit. Indeed, by \\eqref{eq:SSESF:dphi}, we have\n\\begin{equation*}\n\t\\abs{\\mathrm{Err}_{1}(u,v)}\n\t\\leq \\int_{1}^{u} \\abs{\\frac{\\mu \\lmb \\nu}{(1-\\mu)r} \\phi} (u', v) \\, \\mathrm{d} u'.\n\\end{equation*}\n\nEstimating the right-hand side using Theorem \\ref{main.thm.1}, Proposition \\ref{prop:geomLocBVScat} and Corollary \\ref{cor:decay2}, and using \\eqref{eq:opt2:eps} to make the $\\epsilon$-dependence explicit, we obtain \\eqref{eq:opt2:Err1}.\n\nIntegrating \\eqref{eq:opt2:dvrphi}, we also have\n\\begin{align*} \n\tr\\phi(u,v) \t= & \\int_{u}^{v} \\partial_{v}(r \\phi)(u, v') \\, \\mathrm{d} v' \\\\\n\t\t\t= & \\epsilon \\int_{u}^{v} \\chi(v') \\, \\mathrm{d} v' + \\int_{u}^{v} \\mathrm{Err}_{1}(u, v') \\, \\mathrm{d} v' \\\\\n\t\t\t= & \\epsilon X(u,v) + \\mathrm{Err}_{2}(u,v)\n\\end{align*}\nwhere $X(u, v) := \\int_{u}^{v} \\chi(v') \\, \\mathrm{d} v'$ and $\\mathrm{Err}_{2}(u,v) := \\int_{u}^{v} \\mathrm{Err}_{1}(u, v') \\, \\mathrm{d} v'$. 
Integrating \\eqref{eq:opt2:Err1}, and using the bound $C^{-1} \\leq \\lmb \\leq 1\/2$, we easily obtain\n\\begin{equation} \\label{eq:opt2:Err2}\n\t\\abs{\\mathrm{Err}_{2}(u,v)} \\leq C \\epsilon^{3} \\min \\set{r u^{-3}, u^{-2}}.\t\n\\end{equation}\n\nIn particular, taking $v \\to \\infty$, we see that\n\\begin{equation} \\label{eq:opt2:Phi}\n\t\\abs{\\Phi(u) - \\epsilon X(u, \\infty)} \\leq C \\epsilon^{3} u^{-2}.\n\\end{equation}\n\nWe now proceed to estimate $M(u)$. We begin with the easy observation\n\\begin{equation} \\label{eq:opt2:UBforM}\n\tM(u) \\leq C \\epsilon^{2} u^{-5},\n\\end{equation}\nwhich follows from Corollary \\ref{cor:decay2} and \\eqref{eq:opt2:eps}. On the other hand, recalling the definition of $M(u)$ from \\eqref{eq:SSESF:dm} and using the elementary inequality $(a+b)^{2} \\geq \\frac{1}{2} a^{2} - b^{2}$,\n\\begin{align*}\n\tM(u)\t= & \\frac{1}{2} \\int_{u}^{\\infty} \\frac{1-\\mu}{\\lmb} [ \\partial_{v}(r\\phi) - \\frac{\\lmb}{r} (r \\phi) ]^{2} (u,v) \\, \\mathrm{d} v \\\\\n\t\t\\geq & \\frac{\\epsilon^{2}}{4} \\int_{u}^{\\infty} \\frac{1-\\mu}{\\lmb}(u,v) [\\chi (v) - \\frac{\\lmb}{r} X (u,v) ]^{2} \\, \\mathrm{d} v \n\t\t\t- \\frac{1}{2}\\int_{u}^{\\infty} \\frac{1-\\mu}{\\lmb} [\\mathrm{Err}_{1} - \\frac{\\lmb}{r} \\mathrm{Err}_{2}]^{2} (u,v) \\, \\mathrm{d} v .\n\\end{align*}\n\nBy \\eqref{eq:opt2:eps}, \\eqref{eq:opt2:Err1} and \\eqref{eq:opt2:Err2}, we have\n\\begin{equation*}\n\\abs{\\frac{1}{2} \\int_{u}^{\\infty} \\frac{1-\\mu}{\\lmb} [\\mathrm{Err}_{1} - \\frac{\\lmb}{r} \\mathrm{Err}_{2}]^{2} (u,v) \\, \\mathrm{d} v} \\leq C \\epsilon^{6}.\n\\end{equation*}\n\nFurthermore, note that $(1-\\mu) \\geq (K \\Lambda)^{-1} \\geq C^{-1} > 0$ by Proposition \\ref{prop:geomLocBVScat} and \\eqref{eq:opt2:eps}. Also, for $(u,v) \\in [1,2] \\times [8, \\infty)$, note that $\\chi(v) = 0$ and $X(u,v) = 1$. 
Therefore, for $1 \\leq u \\leq 2$, there exists $c > 0$ (independent of $\\epsilon > 0$) such that\n\\begin{align*}\n\\frac{1}{4}\\int_{u}^{\\infty} \\frac{1-\\mu}{\\lmb}(u,v) [\\chi - \\frac{\\lmb}{r} X]^{2} (u,v) \\, \\mathrm{d} v\n\\geq & (4 C)^{-1} \\int_{u}^{\\infty} [\\chi - \\frac{\\lmb}{r} X]^{2} (u,v) \\, \\lmb^{-1} (u,v) \\mathrm{d} v \\\\\n\\geq & (4 C)^{-1} \\int_{8}^{\\infty} \\frac{\\lmb}{r^{2}} (u,v) \\, \\mathrm{d} v \\\\\n\\geq & c\\,.\n\\end{align*}\n\nTherefore, we conclude that\n\\begin{equation} \\label{eq:opt2:LBforM}\n\tM(u) \\geq c \\epsilon^{2} - C \\epsilon^{6} \\qquad \\hbox{ for } 1 \\leq u \\leq 2.\n\\end{equation}\n\nWe are now ready to compute $\\mathfrak{L}$ and complete the proof. We begin by observing that \n\\begin{equation*}\n\\lim_{v \\to \\infty} r^{3} \\abs{\\partial_{v}(r \\phi)(1,v)} = 0\n\\end{equation*}\nby our choice of data. Therefore,\n\\begin{align*}\n\t- \\mathfrak{L}\n\t\t = &\\int_{1}^{\\infty} M (-\\nu_{\\infty}) \\Phi(u) \\, \\mathrm{d} u \\\\\n\t\t= & \\epsilon \\int_{1}^{\\infty} M (u) (-\\nu_{\\infty}) (u) X(u,\\infty) \\, \\mathrm{d} u \n\t\t\t+ \\int_{1}^{\\infty} M(u) (-\\nu_{\\infty})(u) \\mathrm{Err}_{2}(u, \\infty) \\, \\mathrm{d} u. 
\n\\end{align*}\n\nBy Proposition \\ref{prop:geomLocBVScat}, \\eqref{eq:opt2:eps}, \\eqref{eq:opt2:Err2} and \\eqref{eq:opt2:UBforM}, we have\n\\begin{equation*}\n\\abs{\\int_{1}^{\\infty} M(u) (-\\nu_{\\infty})(u) \\mathrm{Err}_{2}(u, \\infty) \\, \\mathrm{d} u } \\leq C \\epsilon^{5}.\n\\end{equation*}\n\nOn the other hand, by Proposition \\ref{prop:geomLocBVScat}, \\eqref{eq:opt2:eps} and \\eqref{eq:opt2:LBforM}, we have (taking $c > 0$ smaller if necessary)\n\\begin{align*}\n\t\t\\epsilon \\int_{1}^{\\infty} M(u) (-\\nu_{\\infty})(u) X(u, \\infty) \\, \\mathrm{d} u \n\t\t\\geq & \\epsilon \\int_{1}^{2} M(u) (-\\nu_{\\infty})(u) X(u, \\infty) \\, \\mathrm{d} u \\\\\n\t\t\\geq & \\Lambda^{-1} \\epsilon \\int_{1}^{2} M(u) \\, \\mathrm{d} u\n\t\t\\geq c \\epsilon^{3} - C \\epsilon^{7}.\n\\end{align*}\n\nTherefore, taking $\\epsilon > 0$ sufficiently small, we see that $- \\mathfrak{L} > \\frac{c}{2} \\epsilon^{3} > 0$. \\qedhere\n\\end{proof}\n\n\\bibliographystyle{amsplain}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzffpf b/data_all_eng_slimpj/shuffled/split2/finalzzffpf new file mode 100644 index 0000000000000000000000000000000000000000..9c62f250ebe88e0e85d6e5ac0b73d2fa675c6234 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzffpf @@ -0,0 +1,5 @@ +{"text":"\\section{\\label{sec:level1}Introduction}\n\nProcessive motor proteins or molecular motors~\\cite{howard,bray}\n(such as kinesin, cytoplasmic dynein, and myosin V) ``walk'' along\nmolecular tracks (microtubules and actin filaments) taking {\\it\nobserved mechanical steps} of well defined (mean) spacing $d$ , each\nstep being of {\\it ``negligibly short''} ($\\lesssim 100\\,\\mu s$)\nduration relative to the mean time(s) between steps that are of\norder $1$ to 
$20\\,\\textrm{ms}$\n~\\cite{howard,bray,mehta99,rief00,nishiyama02,block03,nishiyama03,yildiz03,snyder04,uemura04,baker04,oiwa05,carter05,taniguchi05,clemen05,guydosh06,toba06}.\nSteps may be taken {\\it forwards} ($+$) or {\\it backwards} ($-$).\nThe mean velocity, $V$, observed over runs of 10's to 100's of steps, is a\n{\\it function} of the load $\\textbf{F}=(F_x,F_y,F_z)$ exerted on the\nmotor and the fuel concentration [ATP] (for most\ncases)~\\cite{fisher99,kolomeisky03,fisher05} and, in general, of\nother features of the aqueous solution including the pH, ionic\nstrength, temperature $T$, and other reagents\/reactants (such as\n[ADP], [Pi], [AMP-PNP], [BeF$_2$],\netc.)~\\cite{howard,block03,carter05,guydosh06}.\n\nMotor proteins are enzymatic {\\it catalysts} that, following\nbiochemical knowledge and principles, turn over one ``fuel molecule''\n(usually ATP) for each full step via (in the simplest case) a {\\it\nlinear sequence of reversible kinetic transitions} (or {\\it\nreactions}) embodying $N$ (bio)chemical states per\nturnover~\\cite{fisher99,kolomeisky03,fisher05,fisher99a,kolomeisky00,kim05,bustamante01}.\nThis situation is embodied in the following {\\it basic sequential\nmodel}\n\\begin{align}\n&\\cdots\\begin{array}{c}\n u_{N-1}\\\\\n \\xleftarrow{\\quad\\quad\\quad}\\\\\n \\vspace{-10 mm}\\\\\n \\xrightarrow{\\quad\\quad\\quad}\\\\\n w_{0\\equiv N}\n\\end{array}\n[0]_l\\begin{array}{c}\n u_0\\\\\n \\xleftarrow{\\quad\\quad\\quad}\\\\\n \\vspace{-10 mm}\\\\\n \\xrightarrow{\\quad\\quad\\quad}\\\\\n w_1\n\\end{array}\n(1)_l\\begin{array}{c}\n u_1\\\\\n \\xleftarrow{\\quad\\quad\\quad}\\\\\n \\vspace{-10 mm}\\\\\n \\xrightarrow{\\quad\\quad\\quad}\\\\\n w_2\n\\end{array}\n(2)_l\\begin{array}{c}\n u_2\\\\\n \\xleftarrow{\\quad\\quad\\quad}\\\\\n \\vspace{-10 mm}\\\\\n \\xrightarrow{\\quad\\quad\\quad}\\\\\n w_3\n\\end{array}\n\\cdots\\nonumber\\\\\n&\\cdots\\begin{array}{c}\n u_{N-2}\\\\\n \\xleftarrow{\\quad\\quad\\quad}\\\\\n \\vspace{-10 mm}\\\\\n 
\\xrightarrow{\\quad\\quad\\quad}\\\\\n w_{N-1}\n\\end{array}\n(N-1)_l\\begin{array}{c}\n u_{N-1}\\\\\n \\xleftarrow{\\quad\\quad\\quad}\\\\\n \\vspace{-10 mm}\\\\\n \\xrightarrow{\\quad\\quad\\quad}\\\\\n w_N\n\\end{array}\n[N]_l\\equiv[0]_{l+1}\\begin{array}{c}\n u_0\\\\\n \\xleftarrow{\\quad\\quad}\\\\\n \\vspace{-10 mm}\\\\\n \\xrightarrow{\\quad\\quad}\\\\\n w_1\n\\end{array}\n\\cdots,\\label{BasicModel}\n\\end{align}\n\nwhich is understood to repeat periodically as the motor moves\nprocessively along its track. The subscript $l=1,2,\\cdots$ labeling\nthe $N$ basic states $[0]$, $(1)$, $\\cdots(N-1)$, denotes the sites\non the linear track spaced at distance $d$ apart.\n\nBy convention the state $(i)=[0]$ is ``bound'' or\n``nucleotide-free'' so that the transition $[0]\\rightarrow(1)$\nrepresents the {\\it binding} of one fuel molecule to the awaiting\nmotor. Thus we write\n\\begin{equation}\nu_0=k_0[\\mbox{ATP}],\\label{ATPdepend}\n\\end{equation}\nwhere the pseudo-first-order rate constant $k_0$ and all the\nremaining rate constants $u_i$, $w_j$ depend also on $\\textbf{F}$.\nBut under fixed conditions ($\\textbf{F}$, [ATP], $\\cdots$), the\nrates do not change. 
Note that this formulation embodies the {\\it\ntight coupling principle} of one fuel molecule being consumed per\n(forward) step~\\cite{howard,fisher99,kolomeisky03,bustamante01}.\nThis is {\\it assumed} in the basic model, which also neglects\nirreversible {\\it detachments} from the track (which, however, can\nbe included readily in\nprinciple~\\cite{fisher99,fisher99a,kolomeisky00}).\n\nWhen convenient we will allow the state labels $i$, $j$, $\\cdots$ to\ntake values outside the basic range $[0,N-1]$; for that reason we\nadopt the periodicity convention\n\\begin{equation}\nu_{i+lN}\\equiv u_i,\\;\\;w_{j+lN}\\equiv\nw_j,\\;\\;l=0,\\pm1,\\pm2,\\cdots.\\label{Convention}\n\\end{equation}\n\nNow, in the simplest experimental situation, as observed for\nkinesin, no {\\it mechanical substeps} are\ndetected~\\cite{nishiyama02,carter05} to within the noise level\n(which amounts to $\\Delta x\\lesssim 1\\,\\textrm{nm}$). Furthermore to\nwithin the resolution time ($\\lesssim 100\\,\\mu s$), successive steps\noccur at times, say, $\\cdots, t_{k-1}, \\;t_k, \\;t_{k+1}, \\cdots$.\nThus, between the identifiable mechanical steps of (mean) magnitude\n$d$, the motor {\\it dwells} in a {\\it mechanical state} that, within\nthe noise level $\\Delta x$, appears well defined with no\nsystematically detectable {\\it substeps}, forwards or backwards.\nThen, individual {\\it dwell times} in the mechanical states, namely,\n\\begin{equation}\n\\tau_k=t_k-t_{k-1},\\label{DefDwellTimes}\n\\end{equation}\ncan be measured to reasonable precision and averages may be\ncomputed, over ``many'' observations encompassing, say, $n$ steps,\nto yield an {\\it overall mean dwell time}\n\\begin{equation}\n\\tau=\\left<\\tau_k\\right>\\approx\\frac{1}{n}\\sum_{k=1}^n\\tau_k.\\label{OverallMeanDwellTimes}\n\\end{equation}\nHere and below we use the ``asymptotically equals'' symbol $\\approx$\nto indicate an approximate equality that becomes exact in a long run\nunder steady-state conditions.\n\nGiven 
a (sufficiently long) sequence of $n$ observed steps with\n$n_+$ forward steps and $n_-$ backward steps, we can also define the\n(steady-state) {\\it step splitting probabilities} or back-step and\nforward-step fractions\n\\begin{equation}\n\\pi_+\\approx n_+\/n,\\qquad\\pi_-\\approx n_-\/n,\\label{StepSplProb}\n\\end{equation}\nwhere, since $n=n_++n_-$, one has\n\\begin{equation}\n\\pi_++\\pi_-=1.\\label{SumOfProbIsOne}\n\\end{equation}\n\nFurthermore, dwell times before a $+$ or $-$ step can be (and have\nbeen~\\cite{carter05}) measured separately leading to distinct {\\it\nprior dwell times}\n\\begin{equation}\n\\tau_+=\\frac{1}{n_+}\\mbox{$\\sum^{+}$}\\tau_k\\qquad\\mbox{and}\\qquad\\tau_-=\\frac{1}{n_-}\\mbox{$\\sum^{-}$}\\tau_k,\\label{PriorDwellTimes}\n\\end{equation}\nthe restricted sums including just $+$ or $-$ steps, respectively.\n\nTo the degree that the runs are long so that $\\pi_+$ and $\\pi_-$ may\nbe accurately considered as probabilities one must evidently also\nhave\n\\begin{equation}\n\\pi_+\\tau_++\\pi_-\\tau_-=\\tau.\\label{TotalDwellasSum}\n\\end{equation}\n\nAs discussed recently in some detail~\\cite{fisher05,kim05}, each\nindividual biochemical state $(i)$, may be characterized by a\ndefinite (mean) {\\it longitudinal location} in physical space, i.e.,\n{\\it along} the track, which we supposed aligned with the $x$\ncoordinate, and, possibly, {\\it transverse} to the track, the\n$y$-coordinate, or {\\it normal} to the track, the $z$-coordinate.\nHence the basic model implies the existence of {\\it substeps}, say,\nof magnitude\n\\begin{equation}\nd_j=x_{j+1}-x_j,\\label{StepMagnitude}\n\\end{equation}\nbetween successive mechanochemical states~\\cite{foot0}. However, the\ngreat majority of these mechanical displacements will be {\\it\nhidden} by noise and so {\\it unobservable}. 
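The elementary estimators~(\ref{StepSplProb}), (\ref{PriorDwellTimes}) and the identity~(\ref{TotalDwellasSum}) amount to a simple tabulation over a recorded stepping sequence. A minimal sketch (the function name, the `(dwell, sign)` record layout, and the synthetic data are our illustrative assumptions, not from the text):

```python
# Sketch: step fractions pi_+/-, prior dwell times tau_+/-, and the
# overall mean dwell time tau, estimated from a stepping record.

def step_statistics(steps):
    """steps: list of (tau_k, sign) pairs, sign = +1 or -1.

    Returns (pi_plus, pi_minus, tau_plus, tau_minus, tau); the identity
    pi_+ tau_+ + pi_- tau_- = tau holds exactly for these estimators."""
    n = len(steps)
    plus = [t for t, s in steps if s > 0]
    minus = [t for t, s in steps if s < 0]
    pi_p, pi_m = len(plus) / n, len(minus) / n
    tau_p = sum(plus) / len(plus) if plus else 0.0
    tau_m = sum(minus) / len(minus) if minus else 0.0
    tau = sum(t for t, _ in steps) / n
    return pi_p, pi_m, tau_p, tau_m, tau

# Synthetic example: six forward steps, two back-steps.
steps = [(2.0, +1), (3.0, +1), (1.0, -1), (2.5, +1),
         (4.0, +1), (1.5, -1), (2.0, +1), (3.0, +1)]
pi_p, pi_m, tau_p, tau_m, tau = step_statistics(steps)
assert abs(pi_p * tau_p + pi_m * tau_m - tau) < 1e-12
```

The exactness of the closing assertion mirrors the derivation above: the weighted restricted means recombine into the unrestricted mean.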
This is the crucial\nissue.\n\nThe evidence (in particular for kinesin~\\cite{nishiyama02,carter05})\nreveals the existence of one principal or {\\bf major mechanical\nsubstep} of magnitude\n\\begin{equation}\nd_M=x_{M+1}-x_M\\simeq d,\\label{MajorMechStep}\n\\end{equation}\nthat corresponds to a specific transition\n$(M)\\!\\!\\rightarrow\\!\\!(M+1)$ for a {\\it forward} or $+$ {\\it step}.\nSuch a unique forward-step is sometimes called a ``power stroke''.\nThen, clearly, within the basic model a back-step ($-$) corresponds\nto the specific transition $(M+1)\\!\\rightarrow\\!(M)$.\n\nFor simplicity we will initially suppose that there is {\\it only\none} such single, well defined and observable principal mechanical\ntransition in the processive reaction cycle: it will be referred to\nas a {\\bf major transition} while all other smaller, unobservable\ndisplacements, presumed ``hidden,'' will be termed {\\it substeps}.\n\nIt is of interest, all the same, to analyze situations in which,\nwithin the full cycle, there are a number of {\\it visible} (or\nobservable) {\\it substeps}. Indeed, an initial substep large enough\nto be readily observable was predicted for myosin V by Kolomeisky\nand Fisher~\\cite{kolomeisky03} on the basis of dwell-time data\nobtained at different [ATP] and force levels~\\cite{mehta99}; it was\nlater observed unambiguously by Uemura {\\it et al.}~\\cite{uemura04}. Thus,\nin Sec.~\\ref{sec:level5} below, we analyze the case of $K>1$ visible\nsubsteps per cycle.\n\nWhen a full cycle entails several\nbiochemical or mechanochemical substeps, while only major mechanical\ntransitions are detectable, one {\\it cannot} in general decide\nunambiguously whether a motor executed the detectable steps (or\npower strokes) with or without completing a full cycle. In such\ncases, the previous expressions cannot be applied to account for the\nobserved step fractions and dwell times $\\pi_{\\pm}$ and\n$\\tau_{\\pm}$. Instead, the results must be modified to allow for the\nambiguity arising from the hidden substeps. 
It transpires, as we\nshow below, that this rather subtle and at first-sight\ninconsequential small difference actually leads to significant\nchanges in the load dependence especially (but not exclusively) when\nthe fractions of back-steps and forward-steps are similar in\nfrequency, i.e., on approaching stall conditions when the velocity,\n$V$, becomes small relative to its load-free\nvalue~\\cite{fisher99,kolomeisky03,fisher05}.\n\n\\begin{figure*}\n\\includegraphics[angle=0, scale=0.70]{PapeFig1Bsm}\n\\caption{\\label{fig:wide} Schematic graphs illustrating why the\nfull-cycle interpretation is not adequate for describing stepping\ndata: Plot {\\bf (a)} depicts a hypothetical time series of forward\nsteps for an ($N=4$) cycle with $M=2$. The (0,1)'s indicate that the\nstates $[0]_l$ and $(1)_l$ are both certain to occur at least once\nbetween the pairs of states $(3)_{l-1}$ and $(2)_l$. (See the text.)\nThen {\\bf (b)} represents a similar time series but with one major\nback-step. The question-marks between the forward-back and\nback-forward steps indicate that one does not know if the motor has\never passed through the states $[0]_l$ and\/or $(1)_l$ in these\nintervals. Finally, {\\bf (c)} and {\\bf (d)} depict two equivalent\ntime series for a motor with $N=3$ and $M=1$ but with different\nnoise levels. The first plot allows one to identify only major\nforward and backward transitions at times $t_k$ (marked on the axis)\nwhile the states $[0]_l$ and corresponding substeps are hidden in\nthe noise. However, the second plot reveals all the transitions and\nsubsteps, so that the $t^0_k$, marking the beginning (or end) of\neach full cycle, can be determined. These schematic examples\ndemonstrate that detectable transitions do not necessarily\ncorrespond to a full biochemical cycle so that a proper statistical\nanalysis must take account of substeps hidden in the noise. 
Notice,\nindeed, that the three major transitions identified as forward and\nback steps at times $t_3$, $t_4$ and $t_5$ (on the left) are\nassociated with only a single complete cycle from $t^0_2$ to\n$t^0_3$.}\n\\end{figure*}\n\nTo clarify the issues involved, suppose the cycle has four states\n($N=4$) and the major transition occurs between states $(2)_l$ and\n$(3)_l$ (i.e., $M=2$). Then in stepping time series such as\nillustrated in Figs.~1(a) and (b), one can identify all the moments\nof time at which the motor leaves state $(2)_l$ and reaches state\n$(3)_l$ on moving forwards or when it leaves state $(3)_l$ for state\n$(2)_l$ on moving backwards. When successive forward steps are\nrealized, as illustrated in~Fig.~1(a), one knows that the motor must\npass through the remaining two states, $[0]$ and $(1)$, at some\npoints between the major transitions [see Fig.~1(a)] because state\n$(2)$ cannot otherwise be reached {\\it following} state $(3)$ at the\nsame site $l$. Thus in a sequence of three successive $+$ steps one\ncan conclude that the middle step is associated with a complete\n(forward) cycle. The corresponding observations are equally true for\nsuccessive back steps. On the other hand, when the overall stepping\nsequence encompasses {\\it both} back-steps {\\it and} forward-steps,\nwhich is the interesting (and usual) situation [see Fig.~1(b)], it\nis {\\it impossible}, for example, to be sure that the motor has\ncompleted a full forward cycle when a (detectable) forward step is\nfollowed by a back step; likewise, one cannot tell if a full back\ncycle was completed. In such cases, the full-cycle assumption is not\nvalid.\n\nThe full-cycle assumption can be inadequate even when only a run of\nforward steps is seen, as in Fig.~1(a), in that back steps may\nmerely be infrequent. This is somewhat counterintuitive since one\nmight well argue that each $+$ step does then correspond to a full\ncycle. 
However, if the enzymatic cycle is reversible there is always\nthe possibility of a completed back step; thus explicit expressions\nin terms of the basic rates $\\{u_j,\\;w_j\\}$ will differ when\npotentially hidden substeps are allowed for. Nevertheless, as we\ndemonstrate in Sec.~\\ref{sec:level4}, there are various cases in\nwhich the quantitative differences may be small.\n\nTo demonstrate further the consequences of different conceivable\ninterpretations, consider an ($N=3$)-state motor with two possible\nsubsteps. Fig.~1(c) illustrates a stepping series with a relatively\nhigh noise level so that only the major transitions, say\n$(1)_l\\rightleftharpoons(2)_l$ for $M=1$, at times $t_k$ (with the\ncorresponding dwell times $\\tau_k=t_k-t_{k-1}$) can be measured. On\nthe other hand, Fig.~1(d) shows {\\it exactly the same} series of\nsteps, but with a much lower noise level revealing the previously\nobscured small substeps, $[0]_l\\rightleftharpoons(1)_l$ and\n$(2)_l\\rightleftharpoons[0]_{l+1}$. In the latter case, one can\ndetermine the times $t^0_k$ when the motor reaches the bound state\n$[0]_l$ for the first time (i.e., when a cycle is completed). And\nthen one can reliably determine the number $\\check{n}_+$ of full\nforward, and $\\check{n}_-$ of full backward {\\it cycles}. In\ngeneral, when both forward- and back-steps are present the mean\nvalues of the cycle times $\\check{\\tau}_i=t^0_i-t^0_{i-1}$ (and so\n$\\check{\\tau}$, $\\check{\\tau}_+$, and $\\check{\\tau}_-$) are quite\ndifferent from the mean step-to-step dwell times $\\tau$, $\\tau_+$\nand $\\tau_-$, that one can obtain from the noisy stepping series in\nFig.~1(c). The difference between the splitting probabilities,\n$\\check{\\pi}_{\\pm}$ and $\\pi_{\\pm}$, is even more obvious. 
For\nexample in Fig.~1(d) one has {\\it only} one back cycle since one\nmust not consider the major transitions at times $t_3$ and $t_4$ as\nindicating full stepping cycles because the motor never actually\nreached the next bound state $[0]$: hence from this sequence one\nshould estimate $\\check{\\pi}_+\\simeq5\/6$ and\n$\\check{\\pi}_-\\simeq1\/6$ and $\\check{\\pi}_+\/\\check{\\pi}_-\\simeq5$.\nConversely in Fig.~1(c) one would count 6 forward and 2 back steps\n(or to be more precise, major transitions) leading to the estimates\n$\\pi_+\\simeq3\/4$ and $\\pi_-\\simeq1\/4$ so that $\\pi_+\/\\pi_-\\simeq3$.\n\nFrom a mathematical viewpoint, although most of the transitions and\nbiomechanochemical states remain unseen, there is one bright spot!\nSpecifically, in light of the basic feature or model assumption\nembodied in Eq.~(\\ref{BasicModel}), at the instant of time before\nthe moment, say $t_k$, at which a $+$ step occurs, one can be {\\it\nsure} the motor was in state ($M$) while in the {\\it instant} just\n{\\it after} $t_k$ the motor is in state ($M+1$); and, likewise, just\n{\\it before a backward} ($-$) {\\it step} the state ($M+1$) is\noccupied, while just {\\it after} a back-step the state ($M$) is\ndefinitely occupied. 
Together with the standard Markovian premise of\nchemical kinetics, namely, that once in a well defined chemical\nstate the subsequent departures are independent of the mode of\narrival, this crucial observation enables the systematic calculation\nof splitting probabilities and conditional dwell times for general\n$N>1$ via the {\\it Theory of First Passage Times:} specifically, as\nwe now explain, we may use the analysis as formulated by van\nKampen~\\cite{kampen}.\n\n\n\n\\section{\\label{sec:level2}Conditional splitting probabilities and dwell times}\n\nBefore undertaking explicit calculations to obtain expressions for\n$\\pi_+$, $\\pi_-$, and $\\tau_+$, $\\tau_-$, in terms of the $u_i$ and\n$w_j$ for general $N$ and $M$, we introduce some further statistical\nproperties that are straightforward to observe experimentally and\nmight prove mechanistically informative. At the same time, they\nenter naturally into the first-passage analysis that is presented\nin Sec.~\\ref{sec:level3}.\n\nIn addition to the prior dwell times defined\nin~(\\ref{PriorDwellTimes}) one may {\\it separately} observe {\\it\npost dwell times} by measuring intervals {\\it following after} $+$\nor $-$ major steps: we will label the corresponding mean dwell times\n$\\tau_{+\\diamond}$ and $\\tau_{-\\diamond}$, where the subscript\n$\\diamond$ is read as `diamond' and denotes, here and below, a $+$\n{\\it or} a $-$ step. However, such dwell times may be {\\it\ntruncated} by {\\it detachments} (or dissociations or disconnections)\nin which the motor leaves the track (essentially irreversibly) so\nending a run. The {\\it rates of detachment from states} $(i)$, say\n$\\delta_i$, can certainly be included in the basic sequential\nkinetic model~\\cite{fisher99,fisher99a,kolomeisky00}; but in the\n{\\it first instance} they may be {\\it neglected} provided, as we\nwill suppose, only time intervals between observed $+$ or $-$\nmechanical steps are considered. 
(Their effects, however, would be\nsignificant if dwell times {\\it prior to detachments or immediately\nfollowing attachments} were considered which might, indeed, prove\ninformative.)\n\nNeglecting such ``initial'' and ``final'' dwell times (although the\nformer have been examined by Veigel {\\it et al.} for myosin V in\nseeking {\\it observable} mechanical substeps~\\cite{veigel05}) one\nmay still observe the {\\it four distinct conditional mean dwell\ntimes}:\n\\begin{equation}\n\\begin{array}{ll}\n\\!\\!\\tau_{++}: & \\mbox{between two successive forward (+) steps},\\\\\n\\!\\!\\tau_{+-}: & \\mbox{between a + step followed by a back-step},\\\\\n\\!\\!\\tau_{-+}: & \\mbox{between a back-step followed by a + step},\\\\\n\\!\\!\\tau_{--}: & \\mbox{between two successive back-steps},\n\\end{array}\\label{DefPairwiseDwells}\n\\end{equation}\ndefined, as in~(\\ref{PriorDwellTimes}), in terms of the observed\nintervals $\\tau^{++}_k=t^{(+)}_k-t^{(+)}_{k-1}$, averaged over\n$n_{++}$ pairs of successive $+$ steps, and likewise for $n_{+-}$\npairs of $+$ steps followed by $-$ steps, etc.\n\nAnother aspect is to note that for realistic runs of limited length,\ndeviations of order $1\/n$ will arise. Thus, for example, for a run\nof length $n=n_++n_-$ {\\it starting} with a $+$ step the overall\nmean dwell time is given by\n\\begin{equation}\n\\tau=[(n_+-1)\\tau_++n_-\\tau_-]\/(n-1),\\label{TauCorrection1}\n\\end{equation}\nthere being only $(n-1)$ measurable (prior) dwell times\n$\\tau_l=t_l-t_{l-1}$. Using the definitions (\\ref{StepSplProb}) then\nyields\n\\begin{equation}\n\\tau=\\pi_+\\tau_++\\pi_-\\tau_--\\frac{\\pi_-(\\tau_+-\\tau_-)}{n-1}.\\label{TauCorrection2}\n\\end{equation}\nIn fitting asymptotic ($n\\gg 1$) expressions to real data from\nfinite runs such systematic deviations should be recognized. 
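In practice, the four conditional mean dwell times of~(\ref{DefPairwiseDwells}) and their finite-run estimates come from one and the same tabulation over adjacent step pairs. A minimal sketch (the function name and the `(dwell, sign)` record layout, with each dwell attributed to the step it precedes, are our illustrative assumptions):

```python
# Sketch: tabulating tau_{++}, tau_{+-}, tau_{-+}, tau_{--} from a
# stepping record of (tau_k, sign) pairs; tau_k is the dwell time
# preceding the k-th step and sign = +1 or -1 is its direction.

def conditional_dwells(steps):
    """Return the conditional mean dwell times keyed by
    (previous step, next step), e.g. ("+", "-") for tau_{+-}."""
    sums = {(a, b): [0.0, 0] for a in "+-" for b in "+-"}
    for (_, s_prev), (tau_k, s_next) in zip(steps, steps[1:]):
        key = ("+" if s_prev > 0 else "-", "+" if s_next > 0 else "-")
        sums[key][0] += tau_k
        sums[key][1] += 1
    return {k: (tot / cnt if cnt else 0.0) for k, (tot, cnt) in sums.items()}

# Tiny example: +, +, -, + with dwells 1, 2, 3, 4 gives one pair of
# each observed type; unobserved pairs (here --) are reported as 0.
d = conditional_dwells([(1.0, +1), (2.0, +1), (3.0, -1), (4.0, +1)])
```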
Here,\nhowever, we will neglect such finite-$n$ or end-effects.\n\nTo proceed further it is also helpful to introduce the {\\it pairwise\nstep probabilities} $\\;\\pi_{++}$ and $\\pi_{+-}$ defined as the\nprobability that a $+$ step is followed by a $+$ or, respectively,\nby a $-$ step, and likewise, $\\pi_{-+}$ and $\\pi_{--}$. These then\nsatisfy\n\\begin{equation}\n\\pi_{++}+\\pi_{+-}=1\\qquad\\mbox{and}\\qquad\\pi_{-+}+\\pi_{--}=1.\\label{PairwiseSplNorm}\n\\end{equation}\n\nAgain, in a finite run of $n$ steps one can divide the $n-1$\nsuccessive pairs into $n_{++}$ of $+$ steps followed by a $+$ step,\nand so on, and use $\\pi_{++}\\approx n_{++}\/(n_{++}+n_{+-})$,\n$\\pi_{+-}\\approx n_{+-}\/(n_{+-}+n_{++})$, etc. Noting that in a\ngiven run one must have $|n_{+-}-n_{-+}|\\leq 1$, and neglecting\nfinite-$n$ corrections, leads to the valuable relation\n\\begin{equation}\n\\pi_{+}\\pi_{+-}=\\pi_{-}\\pi_{-+}.\\label{ValuableRelation}\n\\end{equation}\nFrom this follow the connections\n\\begin{equation}\n\\pi_+=1-\\pi_-=\\pi_{-+}\/(\\pi_{+-}+\\pi_{-+}),\\label{StepSplInPairSplPl}\n\\end{equation}\n\\begin{equation}\n\\frac{1}{\\pi_+}=1+\\frac{\\pi_{+-}}{\\pi_{-+}}\\qquad\\mbox{and}\\qquad\\frac{1}{\\pi_-}=1+\\frac{\\pi_{-+}}{\\pi_{+-}}.\\label{StepSplInPairSplMi}\n\\end{equation}\nTogether with~(\\ref{PairwiseSplNorm}) these relations show that the\npair $\\pi_{+-}$ and $\\pi_{-+}$ or, equivalently, $\\pi_{++}$ and\n$\\pi_{--}$ serve to determine all the back\/forward or splitting\nprobabilities.\n\nIt is worthwhile to carry these considerations a stage further by\nrecognizing the Markovian character of the basic $N$-state\nmodel~(\\ref{BasicModel}). 
Thus, neglecting detachments, the four\ndivision or splitting probabilities $\\pi_{++}$, $\\pi_{+-}$,\n$\\pi_{-+}$, and $\\pi_{--}$ satisfying~(\\ref{PairwiseSplNorm}) can be\nregarded as the elements of a $2\\!\\times\\!2$ stepping matrix,\n$[\\pi_{\\alpha\\beta}]$, that stochastically determines the\ntransitions from one (major) step, $+$ or $-$, to the next. By\nvirtue of the conservation of probability, the largest eigenvalue is\n$\\;\\lambda_0=1;\\;$ but the second eigenvalue, which determines the\ndecay per step of step-step correlations, is just\n\\begin{eqnarray}\n\\lambda_1&=&1-\\pi_{+-}-\\pi_{-+}\\nonumber\\\\\n&=&\\pi_{++}-\\pi_{-+}=\\pi_{++}+\\pi_{--}-1.\\label{Eigenvalue2}\n\\end{eqnarray}\nThis vanishes when $\\pi_{+-}=\\pi_{-+}=\\frac{1}{2}$, which corresponds\nto $\\pi_+=\\pi_-$ and hence to {\\it stall conditions} in which the\nmean velocity, $V$, vanishes.\n\nCounting arguments similar to those\nyielding~(\\ref{TauCorrection1})-(\\ref{ValuableRelation}) also lead\nto relations for the conditional mean dwell times. For completeness\nand consistency with later expressions, we utilize the\n$+\/\\!\\!-$ `diamond' notation introduced before. 
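The eigenvalue relation~(\ref{Eigenvalue2}) is easy to confirm numerically for the row-stochastic stepping matrix; a small self-contained check (the numerical values of $\pi_{+-}$ and $\pi_{-+}$ are arbitrary illustrations):

```python
import math

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the characteristic polynomial."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4.0 * det)   # discriminant is (b + c)^2 >= 0 here
    return (tr + disc) / 2.0, (tr - disc) / 2.0

# Stepping matrix with rows (pi_++, pi_+-) and (pi_-+, pi_--).
pi_pm, pi_mp = 0.3, 0.45
lam0, lam1 = eigenvalues_2x2(1 - pi_pm, pi_pm, pi_mp, 1 - pi_mp)
assert abs(lam0 - 1.0) < 1e-9                    # conservation of probability
assert abs(lam1 - (1 - pi_pm - pi_mp)) < 1e-9    # lambda_1 = 1 - pi_+- - pi_-+
```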
For the prior dwell\ntimes, we thus find\n\\begin{eqnarray}\n&&\\tau_+\\equiv\\tau_{\\diamond+}=\\pi_{++}\\tau_{++}+\\pi_{+-}\\tau_{-+},\\label{StepDwlInPairDwlPl}\\\\\n&&\\tau_-\\equiv\\tau_{\\diamond-}=\\pi_{-+}\\tau_{+-}+\\pi_{--}\\tau_{--},\\label{StepDwlInPairDwlMi}\n\\end{eqnarray}\nwhich, in turn, are fully consistent with the relation\n(\\ref{TotalDwellasSum}) for $\\tau$ in terms of $\\tau_+$ and $\\tau_-$\nwhere we should note\n\\begin{equation}\\label{PiPMDiamond}\n \\pi_+\\equiv\\pi_{\\diamond+}\\equiv\\pi_{+\\diamond}\\quad\\mbox{and}\\quad\\pi_-\\equiv\\pi_{\\diamond-}\\equiv\\pi_{-\\diamond}.\n\\end{equation}\nThen the {\\it post dwell times} likewise satisfy\n\\begin{eqnarray}\n&&\\tau_{+\\diamond}=\\pi_{++}\\tau_{++}+\\pi_{+-}\\tau_{+-},\\label{PostDwlInPairDwlPl}\\\\\n&&\\tau_{-\\diamond}=\\pi_{-+}\\tau_{-+}+\\pi_{--}\\tau_{--},\\label{PostDwlInPairDwlMi}\n\\end{eqnarray}\nwhile the overall mean dwell time is given by\n\\begin{equation}\\label{OverallDwlTime}\n \\tau\\equiv\\tau_{\\diamond\\diamond}=\\pi_+\\tau_{+\\diamond}+\\pi_-\\tau_{-\\diamond}.\n\\end{equation}\n\nEach of these pairwise fractions and dwell times can be obtained\nfrom the same experimental data (i.e., stepping time series) that\nhave been used experimentally to obtain the step splitting\nprobabilities and the prior dwell times in the course of studying\nthe dynamics of a motor as a function of load and [ATP], etc. 
But by
observing such further independent statistical parameters one can
test the basic theory more completely and hope to obtain more
reliable and better constrained fitting values for the rates determining
the full mechanochemical cycle.

At a more detailed level it is also useful to define
$\;\;n_{i,+}^{\rho\sigma}\;\;$ and $\;\;n_{j,-}^{\rho\sigma}\;\;$
with $\;\;\rho,\sigma=\diamond,+,\;\mbox{or}\;-$, as the mean numbers
of forward and backward transitions, possibly hidden, from
states $(i)$ and $(j)$, respectively, in the intervals between
(major) steps {\it subject} to the conditions specified by the pair
$(\rho,\sigma)$. If these transitions prove to be detectable, they
can be counted and used in fitting parameters; but if they pertain
to hidden transitions (e.g., the hydrolysis of ATP, etc.), it is of
interest to estimate how often they occur given specific rates. The
appropriate calculations on the basis of the
model~(\ref{BasicModel}) are developed below in
Sec.~\ref{sec:level3e}.

It is appropriate here to mention various hidden-Markov methods,
etc.~\cite{smith01,milescu06,mckinney06,milescu06a}, that have been
derived and employed to locate steps in the presence of noise (and
to fit their amplitudes, or kinetic parameters, etc.).
These
approaches require an input stochastic
model~\cite{smith01,milescu06a,foot1}; we believe the present
approach could provide a valuable complement in the extraction of
kinetic parameters from such experimental data since, as we will
see, it reveals the kinds of behavior different models can generate.

\section{\label{sec:level3}Explicit calculations}
\subsection{\label{sec:level3a}Formulation and notation}
The various stepping fractions, dwell times, etc., introduced in
Secs.~\ref{sec:level1} and \ref{sec:level2} can be derived explicitly
in terms of the basic kinetic rates by using van Kampen's analysis
for one-dimensional, nearest-neighbor first-passage
processes~\cite{kampen}. Accordingly, following the basic sequential
model~(\ref{BasicModel}), with the $N$-periodicity
conventions~(\ref{Convention}) for the sequential forward and
backward rates, $u_i$ and $w_j$, we envisage a random walker on a
one-dimensional lattice with sites labeled $i,j=0,\pm1,\pm2,\cdots$,
corresponding, in turn, to the motor states $(i)$, $(j)$, etc.
(again subject to the periodicity convention). If the single major
or observable step per cycle corresponds to the transitions
$(M)\rightleftharpoons(M+1)$ with $M\in[0,N-1]$, we introduce
(following~\cite{kampen}) absorbing boundaries on the left and the
right via
\begin{equation}\label{SpecifyLR}
 L\equiv M\qquad \mbox{and}\qquad R\equiv M+1+N.
\end{equation}

If, for given initial conditions at time $t=0$ (to be selected
below), $q_i(t)$ is the probability that the motor/walker is in
state $(i)$ at time $t$, we may construct the $N\!\times\!N$
transition matrix ${\bf A}=[A_{i,j}]$ with elements
\begin{equation}
A_{i,j}=u_j\delta_{i,j+1}+w_j\delta_{i+1,j}-(u_j+w_j)\delta_{i,j},\label{ElemA}
\end{equation}
where $i,j\in[L+1,R-1]=[M+1,M+N]$.
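As a concrete illustration of the construction~(\ref{ElemA}), the following sketch assembles ${\bf A}$ for hypothetical rates $u_j$, $w_j$ (the numerical values are arbitrary), with the states $M+1,\ldots,M+N$ relabeled $0,\ldots,N-1$ for convenience, and checks that interior columns conserve probability while the end columns leak flux into the absorbing boundaries $L$ and $R$:

```python
# Build the N x N first-passage matrix A of Eq. (ElemA) for
# hypothetical forward and backward rates u_j, w_j.
u = [2.0, 1.5, 3.0]          # forward rates u_j (illustrative values)
w = [0.5, 1.0, 0.25]         # backward rates w_j
N = len(u)

A = [[0.0] * N for _ in range(N)]
for j in range(N):
    A[j][j] = -(u[j] + w[j])        # total loss rate from state j
    if j + 1 < N:
        A[j + 1][j] = u[j]          # forward gain: j -> j+1
    if j - 1 >= 0:
        A[j - 1][j] = w[j]          # backward gain: j -> j-1

# Interior columns sum to zero (probability conserved); the first and
# last columns lose probability to the absorbing boundaries L and R
# at rates w_0 and u_{N-1}, respectively.
col_sums = [sum(A[i][j] for i in range(N)) for j in range(N)]
print(col_sums)
```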
Then if
$\textbf{q}^{\textrm{T}}=[q_{M+1},q_{M+2},\cdots,q_{M+N}]$ is the
state vector, the governing rate equations are
\begin{equation}
\frac{d\textbf{q}(t)}{dt}\;=\;\textbf{A}\textbf{q}(t).\label{RateEqnForq}
\end{equation}

This completes the first-passage formulation~\cite{kampen}. Before
proceeding, however, we record some convenient notation for the
various products and sums of the rate constants that enter the
analysis. To that end, our first definition~\cite{foot2}
is of the $(m\!\geqslant\!1)$-term product
\begin{equation}
\Gamma_{l,m}=\prod_{j=1}^{m}\frac{w_{l+j}}{u_{l+j}},\label{DefGamma}
\end{equation}
which, by periodicity, is invariant under $l\Rightarrow l\pm N$.
Likewise, the $N$-term product $\Gamma_{l,N}$ is independent of $l$,
yielding, specifically,
\begin{equation}
\Gamma_N=\Gamma_{l,N}=\prod_{j=0}^{N-1}\frac{w_j}{u_j},\label{GammaN}
\end{equation}
\cite{foot2}. Then for all $l=0,\pm1,\pm2,\ldots$ a central role
will be played by the $(n-1)$-term sum
\begin{equation}
\Delta_{l,n}=\sum_{m=1}^{n-1}\Gamma_{l,m}\qquad(n\geqslant1),\label{DefDelta}
\end{equation}
where, for the empty sum, we set $\Delta_{l,1}\equiv0$. Indeed,
these sums appear in previous
analyses~\cite{fisher99,fisher99a,kolomeisky00,foot2} via
``renormalized'' inverse forward rates (or transition times)
\begin{equation}
r_l=u_l^{-1}(1+\Delta_{l,N}).\label{TransitionTimes}
\end{equation}
Specifically, these enter into the
expression~\cite{fisher99,fisher99a,kolomeisky00}
\begin{equation}
V/d=(1-\Gamma_N)\left/\mbox{$\sum_{l=0}^{N-1}r_l$}\right.\label{Velocity}
\end{equation}
for the {\it mean velocity} $V$, which we recall here for
convenience of reference. (Note that $d$ is the mean spacing of
sites along the track.)
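The chain of definitions~(\ref{DefGamma})-(\ref{Velocity}) translates directly into a short computation; the rates below are hypothetical, and for $N=1$ the sketch reduces, as it must, to $V/d=u_0-w_0$:

```python
# Mean velocity V/d from Eq. (Velocity), assembling Gamma_{l,m},
# Delta_{l,N} and the renormalized inverse rates r_l as defined above.
def velocity_over_d(u, w):
    N = len(u)

    def gamma(l, m):                     # Gamma_{l,m}; indices taken mod N
        p = 1.0
        for j in range(1, m + 1):
            p *= w[(l + j) % N] / u[(l + j) % N]
        return p

    gamma_N = gamma(0, N)                # N-term product, independent of l
    # r_l = u_l^{-1} (1 + Delta_{l,N}), with Delta_{l,N} = sum_{m=1}^{N-1}.
    r = [(1.0 + sum(gamma(l, m) for m in range(1, N))) / u[l]
         for l in range(N)]
    return (1.0 - gamma_N) / sum(r)

print(velocity_over_d([2.0, 1.5], [0.5, 1.0]))   # hypothetical N = 2 rates
```

For $N=2$ the result agrees with the known closed form $(u_0u_1-w_0w_1)/(u_0+u_1+w_0+w_1)$, and choosing rates with $\Gamma_N=1$ makes the numerator, and hence $V$, vanish.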
One sees directly from this that {\it stall
conditions}, i.e., $V=0$, are determined by $\Gamma_N(\{u_i,w_j\})=1$.
The situation near stall will be a major focus for our discussions
in Sec.~\ref{sec:level4}.

The analysis of van Kampen~\cite{kampen} may now be put to work.
Readers uninterested in the details may skip to the next section or
peruse the main results, namely,
(\ref{SplProbPlPl})-(\ref{StepSplProbMi}) for $\pi_{++}$, etc.,
(\ref{DwelPlPl})-(\ref{DwelMiMi}) for $\tau_{++}$, etc., and
(\ref{DwelPl})-(\ref{DwelTot}) for $\tau_{+}$, $\tau_{-}$ and
$\tau$.


\subsection{\label{sec:level3b}Pairwise step splitting probabilities}
To proceed, let $\pi_k^L$ be the total probability that a motor
starting at $t=0$ in state $(k)$ with $L