\section{Introduction}
As a theoretically well-developed realm of machine learning, kernel methods have achieved considerable success in a broad range of fields~\cite{steinwart2008support,shawe2004kernel}.
Specifically, the use of kernel functions allows for implicit non-linear transformations that map feature spaces into reproducing kernel Hilbert spaces (RKHSs), which makes kernel methods suitable for non-linear applications.
However, memory and computation bottlenecks arise when dealing with large-scale datasets.
To address this issue, much effort has been devoted to developing a variety of computationally efficient schemes~\cite{fine2001efficient,rahimi2007random,le2013fastfood,avron2014subspace}.

Among all competing scaling-up schemes, the Nystr\"{o}m method, first introduced to the machine learning community by \citet{williams2001using}, has demonstrated its efficiency in terms of memory and computation time.
So far, several different approaches that employ the Nystr\"{o}m method have been proposed to scale up different types of kernel machines.
Among various pioneering studies, \citet{williams2001using} suggested replacing the Gram matrix with the Nystr\"{o}m-based approximate one for kernel ridge regression (KRR).
Specifically, the training of KRR can easily be sped up by taking advantage of the low-rank decomposition of the approximate Gram matrix.
For convenience, this approach is named the Gram matrix substitution approach (GSA) in this paper.
Inspired by an equivalent way to linearize the kernel support vector machine (KSVM), \citet{lan2019scaling} proposed a low-rank linearization approach (LLA) that makes use of the low-rank structure of the Nystr\"{o}m-based approximate Gram matrix to linearize KSVM, after which efficient linear solvers can be utilized~\cite{lin2008trust,keerthi2008sequential,hsieh2008dual}.
The idea of LLA was also adopted and studied in scaling up dictionary learning~\cite{golts2016linearized}.
Besides, Nystr\"{o}m computational regularization (NCR) was developed for KRR; it restricts feasible solutions to lie in the span of the selected landmarks in the RKHS~\cite{rudi2015less}.
Recently, NCR has also been extended to kernel principal component analysis (KPCA)~\cite{sterge2020gain}.
However, previous studies generally analyze these three approaches separately, without considering their underlying relationships.
Therefore, it remains an open question which approach's philosophy is more promising when applied to other kernel machines.

Even though approximation errors for each of these three approaches have been established, they are specific to certain types of kernel machines.
For instance, the prediction errors of GSA for KRR and KSVM were studied by \citet{cortes2010impact}.
LLA came with an approximation error for KSVM~\cite{lan2019scaling}, but its established error is locally estimated from the best low-rank approximate solution rather than a non-approximate optimal one.
The generalization performance of NCR was developed specifically for KRR~\cite{rudi2015less} and kernel classification~\cite{jin2013improved} under carefully imposed assumptions.
A natural question, therefore, is whether approximation error analysis can be carried out in a general setting for these approaches.

In this paper, motivated by the column inclusion property of Gram matrices, we propose a subspace projection approach (SPA) for running Nystr\"{o}m-based kernel machines in general.
Unlike other studies that rely on the RKHS~\cite{yang2012nystrom,jin2013improved,rudi2015less}, our analysis is based on the Hilbert space.
The main advantage of this simplification is its convenience in handling the geometry of data, which is instrumental in reaching the conclusions of interest.
Specifically, aided by the setting of SPA, we first recast LLA into an equivalent optimization problem.
This equivalence quickly leads to the revelation that NCR is a specific case of LLA.
Thus, we will mainly focus on LLA.
Then, we carefully study when SPA can serve as an alternative perspective for analyzing LLA.
Our conclusion is that either a certain kind of sampling strategy used in the Nystr\"{o}m method or the representer theorem is enough to guarantee the equivalence between SPA and LLA.
One significant implication is that the analysis developed for SPA also works for NCR and LLA.
In particular, we establish approximation error bounds (i.e., bounds on the accuracy of the computed approximate solutions) for SPA in a general setting.
Moreover, the view of SPA also clearly demonstrates the relations between LLA (including NCR) and GSA.
First, the analytical forms of the two computed approximate solutions only differ in one term.
Second, GSA can be implemented as efficiently as LLA (including NCR) by sharing the same training procedure.
Such an equivalent implementation for GSA does not add to the computational cost.
All these results lead to the conjecture that GSA can provide more accurate solutions than LLA (including NCR).
As provided by our analysis, the accuracy of the two corresponding approximate solutions can be exactly computed.
Therefore, we carry out experiments with classification tasks to support our conjecture.

The contributions of this work can be summarized as follows:
\begin{itemize}
\item Our proposed SPA provides an alternative geometric interpretation for analyzing LLA. Meanwhile, we show that NCR is a specific case of LLA. In a nutshell, the mechanism behind LLA is that it projects all data in the new feature space before normally running kernel machines.

\item We deduce an approximation error bound for SPA (including LLA and NCR) that holds for kernel machines in general.

\item The view of SPA reveals that the analytical forms of the computed approximate solutions from LLA and GSA only differ in one term. Also, GSA can be implemented as efficiently as LLA by sharing the same training procedure.

\item Since our analytical framework provides ways of computing the accuracy of these approximate solutions, experiments with classification tasks are performed to verify our conjecture that GSA can provide more accurate solutions than LLA.
\end{itemize}

The rest of this paper is organized as follows.
Section~\ref{background} reviews the requisite background.
In Section~\ref{Method}, we introduce our proposed SPA, study what it provides for LLA, NCR and GSA, and establish sufficient conditions that lead to the equivalence between SPA and LLA.
In Section~\ref{sec:experiments}, we perform experiments with classification tasks to support our conjecture about LLA (including NCR) and GSA.
Finally, this paper is concluded in Section~\ref{conclusion}.

\section{Background}
\label{background}
\subsection{Notation}
We focus on a real Hilbert space $ \mathcal{H} $ with its endowed inner product $ \langle \cdot,\cdot \rangle_{\mathcal{H}} $, which serves as a new feature space when using a kernel function.
In this paper, bold lowercase letters represent (column) vectors.
For instance, $ \mathbf{a} \in \mathbb{R}^{p} $ is a column vector and $ \mathbf{a}_{\mathcal{H}} $ is a vector in $ \mathcal{H} $.
Bold uppercase letters denote matrices or tuples of vectors.
For example, $ \mathbf{A} $ is a matrix whereas $ \mathbf{A}_{\mathcal{H}} \in \mathcal{H}^{p} $ is a $ p $-tuple $ (\mathbf{a}_{\mathcal{H}}^{1},\mathbf{a}_{\mathcal{H}}^{2},\dots,\mathbf{a}_{\mathcal{H}}^{p}) $.
Non-bold letters are used to denote scalars or functions.
Given a matrix $ \mathbf{A} $, let $ \mathbf{a}_{i} $ be the $ i $-th column of $ \mathbf{A} $, $ \mathbf{A}^{\dagger} $ be its pseudo-inverse, and $ A_{ij} $ be its $ (i,j) $-th entry.
More definitions are listed in Table~\ref{tab:md}.
Since $ (\mathbf{A}_{\mathcal{H}}\mathbf{B})\mathbf{C} = \mathbf{A}_{\mathcal{H}}(\mathbf{B}\mathbf{C}) $, writing $ \mathbf{A}_{\mathcal{H}}\mathbf{B}\mathbf{C} $ is without ambiguity.
Besides, it can be checked quickly that $ \langle \mathbf{Y}_{\mathcal{H}}\mathbf{A}, \mathbf{Z}_{\mathcal{H}}\mathbf{B} \rangle_{\mathcal{H}} = \mathbf{A}^{T}\langle \mathbf{Y}_{\mathcal{H}}, \mathbf{Z}_{\mathcal{H}} \rangle_{\mathcal{H}}\mathbf{B} $ and $ \langle \mathbf{Y}_{\mathcal{H}}, \mathbf{Z}_{\mathcal{H}} \rangle_{\mathcal{H}}^{T} = \langle \mathbf{Z}_{\mathcal{H}}, \mathbf{Y}_{\mathcal{H}} \rangle_{\mathcal{H}} $.

\begin{table}
\small
\caption{Mathematical definitions used in this paper}
\label{tab:md}
\begin{center}
\begin{small}
\begin{tabular}{ll}
\toprule
Notation & Definition\\
\midrule
$ \Phi: \mathbb{R}^{d} \mapsto \mathcal{H} $ & A feature map\\
$ \mathbf{Y}_{\mathcal{H}} = \mathbf{A}_{\mathcal{H}}\mathbf{B} $ & $ \mathbf{y}^{i}_{\mathcal{H}} = \sum_{j}b_{ji}\mathbf{a}^{j}_{\mathcal{H}} $ for all $ i $\\
$ \mathbf{Z} = \langle \mathbf{A}_{\mathcal{H}}, \mathbf{B}_{\mathcal{H}} \rangle_{\mathcal{H}} $ & $ Z_{ij} = \langle \mathbf{a}^{i}_{\mathcal{H}}, \mathbf{b}^{j}_{\mathcal{H}} \rangle_{\mathcal{H}} $ for all $ i,j $ \\
$ \mathbf{Z}_{\mathcal{H}} = \mathbf{A}_{\mathcal{H}}+\mathbf{B}_{\mathcal{H}} $ & $ \mathbf{z}^{i}_{\mathcal{H}} = \mathbf{a}^{i}_{\mathcal{H}} + \mathbf{b}^{i}_{\mathcal{H}} $ for all $ i $\\
$ \mathbf{Y}_{\mathcal{H}} = \alpha\mathbf{A}_{\mathcal{H}} $ & $ \mathbf{y}^{i}_{\mathcal{H}} = \alpha\mathbf{a}^{i}_{\mathcal{H}} $ for all $ i $, and $ \alpha \in \mathbb{R} $ \\
$ \mathrm{span}(\mathbf{A}_{\mathcal{H}}) $ & $ \{ \sum_{i}\alpha_{i}\mathbf{a}_{\mathcal{H}}^{i}\,:\, \alpha_{i} \in \mathbb{R} \} $ \\
$ \mathbf{A}_{\mathcal{H}} = \Phi(\mathbf{A}) $ & $ \mathbf{a}_{\mathcal{H}}^{i} = \Phi(\mathbf{a}_{i}) $ for all $ i $\\
$ \|\mathbf{A}\|_{2} $ & Spectral norm of the matrix $ \mathbf{A} $\\
$ \|\mathbf{A}\|_{*} $ & Trace norm of the matrix $ \mathbf{A} $\\
$ \|\mathbf{A}\|_{F} $ & Frobenius norm of the matrix $ \mathbf{A} $\\
$ \| \mathbf{A}_{\mathcal{H}} \|_{\mathcal{H}S}$ & Hilbert-Schmidt norm $ (\sum_{i}\| \mathbf{a}^{i}_{\mathcal{H}} \|_{\mathcal{H}}^{2})^{\frac{1}{2}} $\\
$ \| \mathbf{A}_{\mathcal{H}} \|_{op} $ & Operator norm $ \sup_{\| \boldsymbol\alpha \|_{F}=1}\| \mathbf{A}_{\mathcal{H}}\boldsymbol\alpha \|_{\mathcal{H}} $\\
\bottomrule
\end{tabular}
\end{small}
\end{center}
\end{table}

Let $ \mathbf{X} \in \mathbb{R}^{d\times n} $ denote a set of training data, where $ d $ and $ n $ refer to the number of features and data points, respectively.
$ \Phi:\mathbb{R}^{d} \mapsto \mathcal{H} $ denotes a feature map of a selected kernel function.
The advantages of using the Hilbert space rather than the RKHS will become apparent in our analysis.
Let $ \mathbf{X}_{\mathcal{H}} = \Phi(\mathbf{X}) \in \mathcal{H}^{n} $ and $ \mathbf{K} = \langle \mathbf{X}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}} \in \mathbb{R}^{n\times n} $.

\subsection{Kernel Machines}
In this paper, we consider a general form of kernel machines as follows:
\begin{equation} \label{op:kernel models}
\begin{aligned}
\argmin_{f\in\mathcal{H}} \hat{\mathcal{R}}(f,\mathbf{X}_{\mathcal{H}}) =& \cfrac{1}{n}\sum_{i=1}^{n}\mathcal{L}(\langle f, \mathbf{x}_{\mathcal{H}}^{i} \rangle_{\mathcal{H}}, y_{i}) + \Omega(\|f\|_{\mathcal{H}}^{2})\\
\text{ subject to }& f \in \mathrm{span}(\mathbf{X}_{\mathcal{H}}),
\end{aligned}
\end{equation}
where $ \hat{\mathcal{R}} $ is an objective function, $ \mathcal{L} $ is a loss function, $ \mathbf{y} $ is a vector of labels, and $ \Omega : [0,+\infty] \mapsto [-\infty,+\infty] $ is a regularizing function.
If $ \mathcal{H} $ is assumed to be a reproducing kernel Hilbert space, then $ \langle f, \mathbf{x}_{\mathcal{H}}^{i} \rangle_{\mathcal{H}} = f(\mathbf{x}_{i}) $, and thus we treat $ f $ as a function rather than a vector in $ \mathcal{H} $.
The constraint $ f \in \mathrm{span}(\mathbf{X}_{\mathcal{H}}) $ mainly results from the representer theorem.
Note that the sufficient conditions leading to the representer theorem vary~\cite{Schlkopf2001AGR,dinuzzo2012representer,yu2013characterizing}.

The merit of this constraint is that it makes the problem~\eqref{op:kernel models} solvable.
That is, the problem~\eqref{op:kernel models} is equivalent to
\begin{equation} \label{op:solvable models}
\begin{gathered}
\argmin_{\boldsymbol\alpha \in \mathbb{R}^{n}} \cfrac{1}{n}\sum_{i=1}^{n}\mathcal{L}(\boldsymbol\alpha^{T}\mathbf{k}_{i}, y_{i}) + \Omega(\boldsymbol\alpha^{T}\mathbf{K}\boldsymbol\alpha)
\end{gathered}
\end{equation}
with $ f=\mathbf{X}_{\mathcal{H}}\boldsymbol\alpha $, since $ \langle f, \mathbf{x}_{\mathcal{H}}^{i} \rangle_{\mathcal{H}} = \boldsymbol\alpha^{T}\mathbf{k}_{i} $ and $ \|f\|_{\mathcal{H}}^{2} = \boldsymbol\alpha^{T}\mathbf{K}\boldsymbol\alpha $.
Hence, one can obtain an optimal solution to the problem~\eqref{op:kernel models} by optimizing the problem~\eqref{op:solvable models}.
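For concreteness, the reduction from the problem~\eqref{op:kernel models} to the problem~\eqref{op:solvable models} can be sketched in code. The snippet below is a minimal illustration (Python with NumPy; the squared loss $ \mathcal{L}(a,y)=(a-y)^{2} $ and $ \Omega(t)=\lambda t $, i.e., KRR, are illustrative choices on our part, and all variable names are ours):
\begin{verbatim}
import numpy as np

def solve_krr_dual(K, y, lam):
    """One minimizer of problem (2) for KRR: the first-order
    condition gives (K + n*lam*I) alpha = y."""
    n = K.shape[0]
    return np.linalg.solve(K + n * lam * np.eye(n), y)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 100))       # training data, d x n
sq = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
K = np.exp(-sq / 2.0)                   # Gaussian kernel, gamma = 1
y = rng.standard_normal(100)
alpha = solve_krr_dual(K, y, lam=1e-2)  # then f = X_H alpha
\end{verbatim}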
\subsection{Nystr\"{o}m Method}
Without scalable techniques, the running time for optimizing the problem~\eqref{op:solvable models} is in general $ \mathcal{O}(n^{3}) $, which is quite computationally expensive.
Fortunately, the Nystr\"{o}m method is able to reduce the running time significantly.
The main idea of the Nystr\"{o}m method is to generate a small set of landmarks $ \mathbf{C}_{\mathcal{H}} \in \mathcal{H}^{m} $ ($ m \ll n $) to efficiently ``represent'' the training data by, e.g., approximating $ \mathbf{K} $ by a matrix $ \widetilde{\mathbf{K}} $ that is cheaper to calculate.
So far, the sampling strategies for generating $ \mathbf{C}_{\mathcal{H}} $ have been extensively studied~\cite{drineas2005nystrom,drineas2012fast,wang2019scalable}, and there is a wide range of choices~\cite{gittens2016revisiting,oglic2017nystrom,pourkamali2018randomized,wang2019scalable}.
On the other hand, there are already several well-studied Nystr\"{o}m methods for using $ \mathbf{C}_{\mathcal{H}} $ to obtain $ \widetilde{\mathbf{K}} $~\cite{li2014large,wang2013improving,lim2015double}.
As a recent advance, \citet{lim2018multi} proposed a multi-scale Nystr\"{o}m method that further shapes $ \mathbf{C}_{\mathcal{H}} $ into a multi-layer structure so that a good balance between approximation and running time can be achieved while increasing $ m $.
Generally, $ \widetilde{\mathbf{K}} $ admits the form $ \widetilde{\mathbf{K}} = \mathbf{K}_{nm}\mathbf{M}\mathbf{M}^{T}\mathbf{K}_{mn} $, where $ \mathbf{K}_{nm} = \langle \mathbf{X}_{\mathcal{H}}, \mathbf{C}_{\mathcal{H}} \rangle_{\mathcal{H}} $, $ \mathbf{K}_{mn} = \mathbf{K}_{nm}^{T} $, $ \mathbf{M} \in \mathbb{R}^{m\times s} $ is a method-dependent variable, and $ s\leq m \ll n $.

Note that the form $ \widetilde{\mathbf{K}} = \mathbf{G}^{T}\mathbf{G} $ with $ \mathbf{G} = \mathbf{M}^{T}\mathbf{K}_{mn} \in \mathbb{R}^{s\times n} $ and $ s\leq m\ll n $ is the key that enables both GSA and LLA to scale up kernel machines.
For example, GSA can reduce the training time of KRR from $ \mathcal{O}(n^{3}) $ to $ \mathcal{O}(nms) $~\cite{williams2001using}.

\subsection{Gram Matrix Substitution Approach (GSA)}
As suggested by \citet{williams2001using}, one way to make use of $ \mathbf{C}_{\mathcal{H}} $ is to simply replace $ \mathbf{K} $ by $ \widetilde{\mathbf{K}} = \mathbf{G}^{T}\mathbf{G} $ with $ \mathbf{G} = \mathbf{M}^{T}\mathbf{K}_{mn} $ in the problem~\eqref{op:solvable models}, leading to
\begin{equation} \label{op:approximate_solvable models}
\begin{gathered}
\argmin_{\boldsymbol\alpha \in \mathbb{R}^{n}} \cfrac{1}{n}\sum_{i=1}^{n}\mathcal{L}(\boldsymbol\alpha^{T}\widetilde{\mathbf{k}}_{i}, y_{i}) + \Omega(\boldsymbol\alpha^{T}\widetilde{\mathbf{K}}\boldsymbol\alpha) .
\end{gathered}
\end{equation}
Then, if $ \hat{\boldsymbol\alpha} $ is optimal to the problem~\eqref{op:approximate_solvable models}, $ f^{\mathrm{GSA}} = \mathbf{X}_{\mathcal{H}}\hat{\boldsymbol\alpha} $ is an approximate solution computed by using GSA.
Notably, the optimization can be accelerated by taking advantage of the low-rank decomposition $ \widetilde{\mathbf{K}} = \mathbf{G}^{T}\mathbf{G} $.
For example, for each vector $ \mathbf{z} \in \mathbb{R}^{n} $ and each scalar $ c > 0 $, $ \widetilde{\mathbf{K}}\mathbf{z} $ can be implemented as $ \mathbf{G}^{T}(\mathbf{G}\mathbf{z}) $, or, aided by the Woodbury formula, $ (\widetilde{\mathbf{K}}+c\mathbf{I})^{-1}\mathbf{z} $ can equally be replaced by $ \frac{1}{c}(\mathbf{z}-\mathbf{G}^{T}(\mathbf{G}\mathbf{G}^{T}+c\mathbf{I})^{-1}(\mathbf{G}\mathbf{z})) $.
Note that the replacement provided by the Woodbury formula reduces the running time from $ \mathcal{O}(n^{3}) $ to $ \mathcal{O}(ns^{2}) $ where $ s\ll n $.
However, such an approach is inconvenient to apply when the optimization procedure is given as a black box, which is often the case.

\subsection{Low-rank Linearization Approach (LLA)} \label{sub:lla}
Since the inspiration for LLA is an equivalent linearization of KSVM, it was only studied and analyzed for KSVM when proposed. However, its mechanism can be described for kernel machines in general.
Specifically, the goal of LLA is to find a finite-dimensional approximate feature map for $ \Phi:\mathbb{R}^{d}\mapsto \mathcal{H} $ by looking into the approximate Gram matrix $ \widetilde{\mathbf{K}} $.
Since $ \widetilde{\mathbf{K}} = \mathbf{G}^{T}\mathbf{G} $ with $ \mathbf{G} = \mathbf{M}^{T}\mathbf{K}_{mn} $, LLA treats $ \mathbf{G} $ as the sought-after mapped training data in the approximate feature space.
Specifically, for a data point $ \mathbf{z} \in \mathbb{R}^{d} $, the map $ \widetilde{\Phi}: \mathbf{z} \mapsto \mathbf{M}^{T}\langle \mathbf{C}_{\mathcal{H}}, \Phi(\mathbf{z}) \rangle_{\mathcal{H}} $ exactly maps $ \mathbf{X} $ into $ \mathbf{G} $ in a column-by-column manner, and is thus considered the desired approximate feature map.
In a nutshell, LLA first maps all data by using the finite-dimensional approximate feature map $ \widetilde{\Phi} $ so as to linearize the kernel-based optimization problem. Precisely,
LLA solves a linearized version of the problem~\eqref{op:kernel models} as follows,
\begin{equation} \label{op:solvableLLA}
\begin{gathered}
\argmin_{\mathbf{w} \in \mathbb{R}^{s}} \hat{\mathcal{R}}(\mathbf{w},\mathbf{G}) = \cfrac{1}{n}\sum_{i=1}^{n}\mathcal{L}(\mathbf{w}^{T}\mathbf{g}_{i}, y_{i}) + \Omega(\|\mathbf{w}\|_{F}^{2}) ,
\end{gathered}
\end{equation}
which can be solved much more efficiently by using well-developed linear solvers since $ s\ll n $.
If $ \hat{\mathbf{w}} $ is an optimal solution to the problem~\eqref{op:solvableLLA}, then for each data point $ \mathbf{z} \in \mathbb{R}^{d} $, the corresponding prediction is $ \hat{\mathbf{w}}^{T}\mathbf{M}^{T}\langle \mathbf{C}_{\mathcal{H}}, \Phi(\mathbf{z}) \rangle_{\mathcal{H}} $.
Specifically, LLA suggests using the standard Nystr\"{o}m method~\cite{williams2001using} to form $ \mathbf{G} $, in which the approximate Gram matrix is $ \widetilde{\mathbf{K}}^{\mathrm{std}} = \mathbf{K}_{nm}\mathbf{K}_{mm}^{\dagger}\mathbf{K}_{mn} $ where $ \mathbf{K}_{mm} = \langle \mathbf{C}_{\mathcal{H}}, \mathbf{C}_{\mathcal{H}} \rangle_{\mathcal{H}} $.
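To make the linearization concrete, the following is a minimal sketch of LLA with the standard Nystr\"{o}m method (Python with NumPy; the Gaussian kernel, the uniform sampling of landmarks, and all variable names are illustrative assumptions on our part):
\begin{verbatim}
import numpy as np

def gauss_kernel(A, B, gamma=1.0):
    """Gaussian kernel matrix between the columns of A (d x p) and B (d x q)."""
    sq = ((A[:, :, None] - B[:, None, :]) ** 2).sum(axis=0)
    return np.exp(-sq / (2.0 * gamma ** 2))

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 500))             # training data, d x n
C = X[:, rng.choice(500, 50, replace=False)]  # m = 50 landmarks, uniform sampling

K_mm = gauss_kernel(C, C)
K_mn = gauss_kernel(C, X)
w2, V = np.linalg.eigh(K_mm)                  # K_mm = V diag(w2) V^T
keep = w2 > 1e-10                             # drop (near-)zero directions; s <= m
M = V[:, keep] / np.sqrt(w2[keep])            # M = V Sigma^{-1}

G = M.T @ K_mn                                # mapped training data, s x n
phi_tilde = lambda Z: M.T @ gauss_kernel(C, Z)    # approximate feature map
# sanity check: G^T G recovers K_nm K_mm^+ K_mn
assert np.allclose(G.T @ G, K_mn.T @ np.linalg.pinv(K_mm) @ K_mn, atol=1e-6)
\end{verbatim}
After this linearization, any off-the-shelf linear solver can be trained on the columns of $ \mathbf{G} $, and test points are mapped through \texttt{phi\_tilde}.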
\subsection{Nystr\"{o}m Computational Regularization (NCR)}
As another approach, NCR aims to scale up kernel machines by replacing the constraint in the problem~\eqref{op:kernel models} by $ f \in \mathrm{span}(\mathbf{C}_{\mathcal{H}}) $~\cite{rudi2015less,jin2013improved,sterge2020gain}.
However, a noticeable drawback of this approach is that it cannot be straightforwardly generalized to other types of kernel machines, since each related study developed an analytical optimal solution exclusively for a certain kind of kernel machine.

\section{Proposed Approach}
\label{Method}
\subsection{Motivation and Modeling}
Our main inspiration is Observation 7.1.10 in \cite{horn2012matrix}, which leads to the following proposition.
We offer a more straightforward proof that provides a clear geometric interpretation.
\begin{proposition}[Column Inclusion Property of Gram Matrices] \label{prop:CIP}
Let $ \mathbf{k} = \langle \mathbf{X}_{\mathcal{H}}, \Phi(\mathbf{z}) \rangle_{\mathcal{H}} $, where $ \mathbf{z} \in \mathbb{R}^{d} $ is an unseen data point. Then there exists $ \boldsymbol\beta \in \mathbb{R}^{n} $ such that $ \mathbf{k} = \mathbf{K}\boldsymbol\beta $.
\end{proposition}
\emph{Proof.}
Let $ \mathbf{S}_{\mathcal{H}} $ be an orthogonal basis of $ \mathrm{span}(\mathbf{X}_{\mathcal{H}}) $, which can be obtained by performing the Gram-Schmidt process on $ \mathbf{X}_{\mathcal{H}} $. Let $ \mathbf{z}_{\mathcal{H}} = \Phi(\mathbf{z}) $; then it holds that
\begin{equation}
\begin{aligned}
\langle \mathbf{X}_{\mathcal{H}}, \mathbf{z}_{\mathcal{H}} \rangle_{\mathcal{H}} =&\langle \mathbf{S}_{\mathcal{H}}\langle \mathbf{S}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}}, \mathbf{z}_{\mathcal{H}} \rangle_{\mathcal{H}} \\
=& \langle \mathbf{X}_{\mathcal{H}}, \mathbf{S}_{\mathcal{H}} \rangle_{\mathcal{H}}\langle \mathbf{S}_{\mathcal{H}}, \mathbf{z}_{\mathcal{H}} \rangle_{\mathcal{H}} \\
=& \langle \mathbf{X}_{\mathcal{H}}, \mathbf{S}_{\mathcal{H}}\langle \mathbf{S}_{\mathcal{H}}, \mathbf{z}_{\mathcal{H}} \rangle_{\mathcal{H}} \rangle_{\mathcal{H}} .
\end{aligned}
\end{equation}
Since $ \mathrm{span}(\mathbf{S}_{\mathcal{H}}) = \mathrm{span}(\mathbf{X}_{\mathcal{H}}) $, there exists $ \boldsymbol\beta \in \mathbb{R}^{n} $ such that $ \mathbf{X}_{\mathcal{H}}\boldsymbol\beta = \mathbf{S}_{\mathcal{H}}\langle \mathbf{S}_{\mathcal{H}}, \mathbf{z}_{\mathcal{H}} \rangle_{\mathcal{H}} $. Consequently, $ \mathbf{k} = \mathbf{K}\boldsymbol\beta $. \hfill $ \Box $
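Proposition~\ref{prop:CIP} can also be checked numerically: every kernel column $ \mathbf{k} $ lies in the range of $ \mathbf{K} $. A small sketch (Python with NumPy; the Gaussian kernel and the data are illustrative):
\begin{verbatim}
import numpy as np

def gauss(A, B):
    sq = ((A[:, :, None] - B[:, None, :]) ** 2).sum(axis=0)
    return np.exp(-sq / 2.0)

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 80))     # training data, d x n
z = rng.standard_normal((5, 1))      # an unseen data point

K = gauss(X, X)
k = gauss(X, z)                      # k = <X_H, Phi(z)>
beta = np.linalg.pinv(K) @ k         # least-squares coefficients
print(np.linalg.norm(K @ beta - k))  # ~ 0 up to numerical precision
\end{verbatim}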
Note that in our proof, $ \mathbf{S}_{\mathcal{H}}\langle \mathbf{S}_{\mathcal{H}}, \mathbf{z}_{\mathcal{H}} \rangle_{\mathcal{H}} $ simply projects the unseen data point $ \mathbf{z}_{\mathcal{H}} $ onto $ \mathrm{span}(\mathbf{X}_{\mathcal{H}}) $.
In other words, the constraint $ f \in \mathrm{span}(\mathbf{X}_{\mathcal{H}}) $ implicitly projects all data, including the mapped training data $ \mathbf{X}_{\mathcal{H}} $ and each mapped unseen data point $ \mathbf{z}_{\mathcal{H}} $, onto the finite-dimensional subspace $ \mathrm{span}(\mathbf{X}_{\mathcal{H}}) $ before training and testing.

Motivated by Proposition~\ref{prop:CIP}, a natural way to make kernel machines scalable by using the Nystr\"{o}m method is to use a set of landmarks $ \mathbf{C}_{\mathcal{H}} $ to first learn a meaningful orthogonal basis $ \mathbf{B}_{\mathcal{H}} \in \mathcal{H}^{s} $ (which means $ \langle \mathbf{B}_{\mathcal{H}}, \mathbf{B}_{\mathcal{H}} \rangle_{\mathcal{H}} = \mathbf{I} $) with the constraint $ \mathbf{B}_{\mathcal{H}} = \mathbf{C}_{\mathcal{H}}\mathbf{A} $.
Here, $ \mathbf{A} \in \mathbb{R}^{m\times s} $ is a learning variable and $ s \leq m $ denotes the dimension of the targeted subspace.
When a learned $ \mathbf{B}_{\mathcal{H}} $ is given, our proposed approach is to project all data onto $ \mathrm{span}(\mathbf{B}_{\mathcal{H}}) $ before training and testing.
In this way, the projected training data will be $ \widetilde{\mathbf{X}}_{\mathcal{H}} = \mathbf{B}_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}} $, and the problem~\eqref{op:kernel models} becomes
\begin{equation} \label{op:projected kernel models}
\begin{aligned}
\argmin_{f\in\mathcal{H}} \hat{\mathcal{R}}(f,\widetilde{\mathbf{X}}_{\mathcal{H}}) =& \cfrac{1}{n}\sum_{i=1}^{n}\mathcal{L}(\langle f, \widetilde{\mathbf{x}}_{\mathcal{H}}^{i} \rangle_{\mathcal{H}}, y_{i}) + \Omega(\|f\|_{\mathcal{H}}^{2})\\
\text{ subject to }& f \in \mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}}),
\end{aligned}
\end{equation}
which we call the subspace projection approach (SPA) in this paper.
Note that the problem above is equivalent to the problem~\eqref{op:approximate_solvable models} with $ f = \widetilde{\mathbf{X}}_{\mathcal{H}}\boldsymbol\alpha $.
In the following, we will show how NCR and LLA can be equivalently transformed into the problem~\eqref{op:projected kernel models}.
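In a finite-dimensional feature space, the SPA construction can be written out explicitly. The following minimal sketch (Python with NumPy; the linear kernel $ \Phi = \mathrm{id} $ and all names are illustrative choices on our part) builds an orthogonal basis $ \mathbf{B}_{\mathcal{H}} = \mathbf{C}_{\mathcal{H}}\mathbf{A} $, projects the training data, and confirms that the projected data reproduce the approximate Gram matrix used in the problem~\eqref{op:approximate_solvable models}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
d, n, m = 20, 200, 8
X = rng.standard_normal((d, n))          # linear kernel: Phi is the identity
C = X[:, :m]                             # landmarks: first m training points

K_mm, K_mn = C.T @ C, C.T @ X
w2, V = np.linalg.eigh(K_mm)
keep = w2 > 1e-10
A = V[:, keep] / np.sqrt(w2[keep])       # A^T K_mm A = I

B = C @ A                                # orthonormal basis of span(C)
X_proj = B @ (B.T @ X)                   # projected training data
K_tilde = K_mn.T @ A @ A.T @ K_mn        # = <X_proj, X_proj>
print(np.allclose(X_proj.T @ X_proj, K_tilde))  # True
\end{verbatim}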
It is worth mentioning that each optimal solution to the problem~\eqref{op:projected kernel models} automatically projects all unseen data onto $ \mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}}) $ before predicting.
Specifically, given an unseen data point $ \mathbf{z} \in \mathbb{R}^{d} $ with $ \mathbf{z}_{\mathcal{H}} = \Phi(\mathbf{z}) $, since $ \widetilde{\mathbf{X}}_{\mathcal{H}} = \mathbf{B}_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \widetilde{\mathbf{X}}_{\mathcal{H}} \rangle_{\mathcal{H}} $, we have
\begin{equation}
\begin{aligned}
\langle \widetilde{\mathbf{X}}_{\mathcal{H}}, \mathbf{z}_{\mathcal{H}} \rangle_{\mathcal{H}} =& \langle \mathbf{B}_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \widetilde{\mathbf{X}}_{\mathcal{H}} \rangle_{\mathcal{H}}, \mathbf{z}_{\mathcal{H}} \rangle_{\mathcal{H}} \\
=& \langle \widetilde{\mathbf{X}}_{\mathcal{H}}, \mathbf{B}_{\mathcal{H}} \rangle_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \mathbf{z}_{\mathcal{H}} \rangle_{\mathcal{H}} \\
=& \langle \widetilde{\mathbf{X}}_{\mathcal{H}}, \mathbf{B}_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \mathbf{z}_{\mathcal{H}} \rangle_{\mathcal{H}} \rangle_{\mathcal{H}}.
\end{aligned}
\end{equation}
In this equality, $ \mathbf{B}_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \mathbf{z}_{\mathcal{H}} \rangle_{\mathcal{H}} $ is the projection of $ \mathbf{z}_{\mathcal{H}} $ onto $ \mathrm{span}(\mathbf{B}_{\mathcal{H}}) $.
So, there is no need to explicitly project unseen data onto $ \mathrm{span}(\mathbf{B}_{\mathcal{H}}) $.

\subsection{Further Justification of the Use of $ \mathbf{B}_{\mathcal{H}} $}
\label{sub:justification}
The use of the orthogonal basis $ \mathbf{B}_{\mathcal{H}} $ is consistent with the aforementioned useful form $ \widetilde{\mathbf{K}} = \mathbf{K}_{nm}\mathbf{MM}^{T}\mathbf{K}_{mn} $.
Precisely, given the projected training data $ \widetilde{\mathbf{X}}_{\mathcal{H}} $, the expected approximate Gram matrix is $ \widetilde{\mathbf{K}} = \langle \widetilde{\mathbf{X}}_{\mathcal{H}}, \widetilde{\mathbf{X}}_{\mathcal{H}} \rangle_{\mathcal{H}} = \langle \mathbf{X}_{\mathcal{H}},\mathbf{B}_{\mathcal{H}} \rangle_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}} = \mathbf{K}_{nm}\mathbf{AA}^{T}\mathbf{K}_{mn} $.
Note that $ \mathbf{M} $ is a method-dependent variable when selecting Nystr\"{o}m methods to form $ \widetilde{\mathbf{K}} $, whereas $ \mathbf{A} $ is a learning variable when searching for a meaningful orthogonal basis $ \mathbf{B}_{\mathcal{H}} $ with the constraint $ \mathbf{B}_{\mathcal{H}} = \mathbf{C}_{\mathcal{H}}\mathbf{A} $.
Since all our results are based on the use of $ \mathbf{B}_{\mathcal{H}} $, we take $ \mathbf{A} = \mathbf{M} $ in the following.
Notably, it has been demonstrated that for the standard Nystr\"{o}m method~\cite{williams2001using}, the one-shot Nystr\"{o}m method~\cite{fowlkes2004spectral}, the double-shot Nystr\"{o}m method~\cite{lim2015double}, and the multi-scale Nystr\"{o}m method~\cite{lim2018multi}, the corresponding approximate Gram matrix $ \widetilde{\mathbf{K}} $ exactly admits the form $ \widetilde{\mathbf{K}} = \langle \mathbf{X}_{\mathcal{H}}, \mathbf{B}_{\mathcal{H}} \rangle_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}} $, where $ \mathbf{B}_{\mathcal{H}} = \mathbf{C}_{\mathcal{H}}\mathbf{A} $ and $ \langle \mathbf{B}_{\mathcal{H}},\mathbf{B}_{\mathcal{H}} \rangle_{\mathcal{H}} = \mathbf{I} $.
More details can be found in \cite{lim2015double,lim2018multi}.

Meanwhile, the use of $ \mathbf{B}_{\mathcal{H}} $ can be justified by its relations with how accurate the corresponding approximate Gram matrix $ \widetilde{\mathbf{K}} = \langle \widetilde{\mathbf{X}}_{\mathcal{H}}, \widetilde{\mathbf{X}}_{\mathcal{H}} \rangle_{\mathcal{H}} $ is, relations that are depicted by the following two lemmas. Notably, they hold under the sole assumption $ \langle \mathbf{B}_{\mathcal{H}}, \mathbf{B}_{\mathcal{H}} \rangle_{\mathcal{H}} = \mathbf{I} $.
\begin{lemma} \label{lem:relation between K and B}
\begin{gather}
\| \mathbf{K} - \widetilde{\mathbf{K}} \|_{*} = \|\mathbf{X}_{\mathcal{H}}-\mathbf{B}_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}}\|_{\mathcal{H}S}^{2} , \label{eq:k=B1}\\
\| \mathbf{K} - \widetilde{\mathbf{K}} \|_{2} = \|\mathbf{X}_{\mathcal{H}}-\mathbf{B}_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}}\|_{op}^{2} . \label{eq:k=B2}
\end{gather}
\end{lemma}
When the Hilbert space $ \mathcal{H} $ is assumed to be finite-dimensional, the operator norm $ \|\cdot\|_{op} $ and the Hilbert-Schmidt norm $ \|\cdot\|_{\mathcal{H}S} $ reduce to the spectral norm $ \|\cdot\|_{2} $ and the Frobenius norm $ \|\cdot\|_{F} $, respectively.
Basically, Lemma~\ref{lem:relation between K and B} indicates that learning a meaningful orthogonal basis $ \mathbf{B}_{\mathcal{H}} $ is exactly equivalent to searching for a good approximation $ \widetilde{\mathbf{K}} $.

\begin{lemma} \label{lem:reconstruction error}
Given two data points $ \mathbf{p}, \mathbf{q} \in \mathbb{R}^{d} $, let $ \widetilde{\mathbf{p}}_{\mathcal{H}} = \mathbf{B}_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \Phi(\mathbf{p}) \rangle_{\mathcal{H}} $ and $ \widetilde{\mathbf{q}}_{\mathcal{H}} = \mathbf{B}_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \Phi(\mathbf{q}) \rangle_{\mathcal{H}} $. Then the reconstruction error $ |\langle \widetilde{\mathbf{p}}_{\mathcal{H}}, \widetilde{\mathbf{q}}_{\mathcal{H}} \rangle_{\mathcal{H}} - \langle \Phi(\mathbf{p}), \Phi(\mathbf{q}) \rangle_{\mathcal{H}}| $ is $ 0 $ if either $ \Phi(\mathbf{p}) $ or $ \Phi(\mathbf{q}) $ belongs to $ \mathrm{span}(\mathbf{B}_{\mathcal{H}}) $.
\end{lemma}

This result generalizes Proposition 2 in the previous study~\cite{zhang2010clustered}.
To be specific, the proposition there proves that if the set of landmarks $ \mathbf{C}_{\mathcal{H}} $ contains two training data points from $ \mathbf{X}_{\mathcal{H}} $, say $ \mathbf{x}_{\mathcal{H}}^{i} $ and $ \mathbf{x}_{\mathcal{H}}^{j} $, then $ \widetilde{K}_{ij} = K_{ij} $.
Here, $ \widetilde{\mathbf{K}} = \mathbf{K}_{nm}\mathbf{K}_{mm}^{\dagger}\mathbf{K}_{mn} $ where $ \mathbf{K}_{mm} = \langle \mathbf{C}_{\mathcal{H}}, \mathbf{C}_{\mathcal{H}} \rangle_{\mathcal{H}} $, which is the result of using the standard Nystr\"{o}m method to form $ \widetilde{\mathbf{K}} $.
This fact follows straightforwardly from Lemma~\ref{lem:reconstruction error}.
Note that the corresponding embedded orthogonal basis $ \mathbf{B}_{\mathcal{H}}^{\mathrm{std}} \in \mathcal{H}^{s} $ is the one spanning $ \mathrm{span}(\mathbf{C}_{\mathcal{H}}) $~\cite{lim2015double}.
In particular, $ \langle \widetilde{\mathbf{X}}_{\mathcal{H}}^{\mathrm{std}}, \widetilde{\mathbf{X}}_{\mathcal{H}}^{\mathrm{std}} \rangle_{\mathcal{H}} = \mathbf{K}_{nm}\mathbf{K}_{mm}^{\dagger}\mathbf{K}_{mn} $ where $ \widetilde{\mathbf{X}}_{\mathcal{H}}^{\mathrm{std}} = \mathbf{B}_{\mathcal{H}}^{\mathrm{std}}\langle \mathbf{B}_{\mathcal{H}}^{\mathrm{std}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}} $.
If $ \mathbf{C}_{\mathcal{H}} $ contains $ \mathbf{x}_{\mathcal{H}}^{i} $ and $ \mathbf{x}_{\mathcal{H}}^{j} $, then the two data points belong to $ \mathrm{span}(\mathbf{B}_{\mathcal{H}}^{\mathrm{std}}) = \mathrm{span}(\mathbf{C}_{\mathcal{H}}) $. By Lemma~\ref{lem:reconstruction error}, we have $ \widetilde{K}_{ij} = \langle \widetilde{\mathbf{x}}_{\mathcal{H}}^{i}, \widetilde{\mathbf{x}}_{\mathcal{H}}^{j} \rangle_{\mathcal{H}} = \langle \mathbf{x}_{\mathcal{H}}^{i}, \mathbf{x}_{\mathcal{H}}^{j} \rangle_{\mathcal{H}} = K_{ij} $.
After all, Lemma~\ref{lem:reconstruction error} suggests that the closer the training data $ \mathbf{X}_{\mathcal{H}} $ are to $ \mathrm{span}(\mathbf{B}_{\mathcal{H}}) $, the smaller the reconstruction errors will be.
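Lemma~\ref{lem:reconstruction error} is easy to observe numerically. In the sketch below (Python with NumPy; the Gaussian kernel and the standard Nystr\"{o}m approximation are illustrative choices on our part), the landmarks are drawn from the training data, so the Gram entries between landmark points are reconstructed exactly:
\begin{verbatim}
import numpy as np

def gauss(A, B):
    sq = ((A[:, :, None] - B[:, None, :]) ** 2).sum(axis=0)
    return np.exp(-sq / 2.0)

rng = np.random.default_rng(3)
X = rng.standard_normal((5, 100))
idx = rng.choice(100, 10, replace=False)  # landmarks taken from X
C = X[:, idx]

K = gauss(X, X)
K_tilde = gauss(X, C) @ np.linalg.pinv(gauss(C, C)) @ gauss(C, X)
# entries between landmark points are reconstructed exactly (Lemma 2)
print(np.abs(K_tilde[np.ix_(idx, idx)] - K[np.ix_(idx, idx)]).max())  # ~ 0
\end{verbatim}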
\subsection{NCR: A Specific Case of LLA}
To explore the relationships among LLA, NCR and SPA, we introduce the following problem for a given learned orthogonal basis $ \mathbf{B}_{\mathcal{H}} $; as we will show, it is exactly an optimization problem of LLA that searches for solutions directly in $ \mathcal{H} $:
\begin{equation} \label{op:recastLLA}
\begin{aligned}
\argmin_{f\in\mathcal{H}} \hat{\mathcal{R}}(f, \mathbf{X}_{\mathcal{H}}) = & \cfrac{1}{n}\sum_{i=1}^{n}\mathcal{L}(\langle f, \mathbf{x}_{\mathcal{H}}^{i} \rangle_{\mathcal{H}}, y_{i}) + \Omega(\|f\|_{\mathcal{H}}^{2})\\
\text{ subject to } & f \in \mathrm{span}(\mathbf{B}_{\mathcal{H}}).
\end{aligned}
\end{equation}
The problem above is exactly equivalent to the problem~\eqref{op:solvableLLA} with $ f = \mathbf{B}_{\mathcal{H}}\mathbf{w} $, by noticing that $ \langle \mathbf{B}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}} = \mathbf{A}^{T}\mathbf{K}_{mn} = \mathbf{G} $.
Note that $ \mathbf{A}=\mathbf{M} $ in our settings, as stated in Subsection~\ref{sub:justification}.
Therefore, if $ \hat{\mathbf{w}} $ is an optimal solution to the problem~\eqref{op:solvableLLA}, which is LLA, the solution $ \hat{f} = \mathbf{B}_{\mathcal{H}}\hat{\mathbf{w}} $ is optimal to the problem~\eqref{op:recastLLA}. Then, we verify that $ \hat{f} $ and LLA share the same prediction for each data point $ \mathbf{z} \in \mathbb{R}^{d} $.
As provided in Subsection~\ref{sub:lla}, the prediction from LLA is $ \hat{\mathbf{w}}^{T}\mathbf{A}^{T}\langle \mathbf{C}_{\mathcal{H}}, \Phi(\mathbf{z}) \rangle_{\mathcal{H}} $.
This result is exactly the same as the prediction $ \langle \hat{f},\Phi(\mathbf{z}) \rangle_{\mathcal{H}} = \hat{\mathbf{w}}^{T}\langle \mathbf{B}_{\mathcal{H}}, \Phi(\mathbf{z}) \rangle_{\mathcal{H}} $ when using $ \hat{f} $.
Thus, we conclude that the problem~\eqref{op:recastLLA} is an alternative optimization problem for LLA.
The significance of this result is that an approximate solution generated from LLA can be expressed as $ \mathbf{B}_{\mathcal{H}}\hat{\mathbf{w}} \in \mathcal{H} $, and it is the key to showing that NCR is a specific case of LLA in the following.

\begin{proposition} \label{prop:NCR}
NCR is a specific case of LLA.
\end{proposition}

We illustrate this result with KRR, whose loss and regularizing functions are $ \mathcal{L}(a,y)=(a-y)^{2} $ and $ \Omega(t)=\lambda t $ with $ \lambda > 0 $, respectively.
As provided by \citet{rudi2015less}, the analytical optimal solution for NCR to KRR is
\begin{equation}
\begin{gathered}
\mathbf{C}_{\mathcal{H}} (\mathbf{K}_{mn}\mathbf{K}_{nm}+\lambda_{0}\mathbf{K}_{mm})^{\dagger}\mathbf{K}_{mn}\mathbf{y}
\end{gathered}
\end{equation}
where $ \mathbf{K}_{mm} = \langle \mathbf{C}_{\mathcal{H}}, \mathbf{C}_{\mathcal{H}} \rangle_{\mathcal{H}} $ and $ \lambda_{0} = n\lambda $.
By contrast, when using LLA with the standard Nystr\"{o}m method, the optimal solution to the problem~\eqref{op:solvableLLA} in terms of KRR is $ \hat{\mathbf{w}}^{\mathrm{KRR}} = (\mathbf{G}\mathbf{G}^{T}+\lambda_{0}\mathbf{I})^{-1}\mathbf{G}\mathbf{y} $ where $ \mathbf{G} \gets (\mathbf{V}\boldsymbol\Sigma^{-1})^{T}\mathbf{K}_{mn} $.
Here, $ \mathbf{V} $ and $ \boldsymbol\Sigma $ are from the spectral decomposition $ \mathbf{K}_{mm} = \mathbf{V}\boldsymbol\Sigma^{2}\mathbf{V}^{T} $, where all diagonal entries of $ \boldsymbol\Sigma $ are positive.
Meanwhile, the orthogonal basis induced by the standard Nystr\"{o}m method is $ \mathbf{B}_{\mathcal{H}}^{\mathrm{std}} = \mathbf{C}_{\mathcal{H}}\mathbf{V}\boldsymbol\Sigma^{-1} $.
Then, the optimal solution for LLA to KRR is
\begin{equation}
\begin{gathered}
\mathbf{B}_{\mathcal{H}}^{\mathrm{std}}\hat{\mathbf{w}}^{\mathrm{KRR}}=\mathbf{B}_{\mathcal{H}}^{\mathrm{std}}(\mathbf{G}\mathbf{G}^{T}+\lambda_{0}\mathbf{I})^{-1}\mathbf{G}\mathbf{y} .
\end{gathered}
\end{equation}
Since the optimal solution to the problem~\eqref{op:solvableLLA} in terms of linear ridge regression is unique, the equivalence between the problems~\eqref{op:solvableLLA} and~\eqref{op:recastLLA} immediately leads to the following corollary, which is also verified directly in Appendix~\ref{app:NCR2LLA}.
\begin{corollary} \label{cor:NCR2LLA}
For KRR, the two analytical optimal solutions generated by using NCR, and LLA with $ \mathbf{B}_{\mathcal{H}}^{\mathrm{std}} $, are exactly the same, i.e.,
\begin{equation}
\begin{gathered}
\mathbf{C}_{\mathcal{H}} (\mathbf{K}_{mn}\mathbf{K}_{nm}+\lambda_{0}\mathbf{K}_{mm})^{\dagger}\mathbf{K}_{mn}\mathbf{y} = \mathbf{B}_{\mathcal{H}}^{\mathrm{std}}\hat{\mathbf{w}}^{\mathrm{KRR}} .
\end{gathered}
\end{equation}
\end{corollary}
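Corollary~\ref{cor:NCR2LLA} can also be verified numerically. A minimal sketch (Python with NumPy; the linear kernel is an illustrative choice so that both sides live in $ \mathbb{R}^{d} $, and all names are ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
d, n, m, lam0 = 10, 150, 8, 0.5
X = rng.standard_normal((d, n)); y = rng.standard_normal(n)
C = X[:, :m]                               # landmarks from the training data

K_mm, K_mn = C.T @ C, C.T @ X
K_nm = K_mn.T
# NCR closed form (Rudi et al., 2015)
ncr = C @ np.linalg.pinv(K_mn @ K_nm + lam0 * K_mm) @ K_mn @ y

# LLA with the standard-Nystrom basis B_std = C V Sigma^{-1}
w2, V = np.linalg.eigh(K_mm)
keep = w2 > 1e-10
B = C @ (V[:, keep] / np.sqrt(w2[keep]))
G = B.T @ X
lla = B @ np.linalg.solve(G @ G.T + lam0 * np.eye(G.shape[0]), G @ y)

print(np.linalg.norm(ncr - lla))           # ~ 0 (Corollary 1)
\end{verbatim}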
\subsection{Implications from Equivalence between SPA and LLA}
Our next question is whether SPA is equivalent to LLA.
If so, what does such an equivalence provide?
Before moving forward, it will be helpful to lay down the following definitions.
\begin{definition}
LLA and SPA are said to be strongly equivalent if LLA and SPA share the same set of optimal solutions.
\end{definition}
\begin{definition}
LLA and SPA are said to be weakly equivalent if whenever $ f \in \mathrm{span}(\mathbf{B}_{\mathcal{H}}) $ is optimal to LLA, the projection of $ f $ onto $ \mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}}) $ is optimal to both LLA and SPA.
\end{definition}

Note that the weak equivalence implies that each optimal solution to SPA is also optimal to LLA.
Before studying the sufficient conditions for these two types of equivalence, we first assume that the strong equivalence holds and explore what it provides.
Let a learned orthogonal basis $ \mathbf{B}_{\mathcal{H}} $ be given, and let $ \hat{\mathbf{w}} $ be an optimal solution to the problem~\eqref{op:solvableLLA}.
Then, an immediate result is that there must exist an optimal solution $ \hat{\boldsymbol\alpha} $ to the problem~\eqref{op:approximate_solvable models} such that
\begin{equation} \label{eq:xa=bw}
\begin{gathered}
\widetilde{\mathbf{X}}_{\mathcal{H}}\hat{\boldsymbol\alpha} = \mathbf{B}_{\mathcal{H}}\hat{\mathbf{w}} .
\end{gathered}
\end{equation}

This equality indicates that the optimal solution sought by LLA, which is $ f^{\mathrm{LLA}} = \mathbf{B}_{\mathcal{H}}\hat{\mathbf{w}} $, can be equally expressed as $ f^{\mathrm{LLA}} = \widetilde{\mathbf{X}}_{\mathcal{H}}\hat{\boldsymbol\alpha} $, which is optimal to SPA~\eqref{op:projected kernel models}.
There are three significant messages conveyed by the equality~\eqref{eq:xa=bw}.

\emph{First}, it provides a more convenient way of analyzing how accurate $ f^{\mathrm{LLA}} $ is.
Precisely, let $ \widetilde{\boldsymbol\alpha} $ be an optimal solution to the problem~\eqref{op:solvable models}; then $ f^{*} = \mathbf{X}_{\mathcal{H}}\widetilde{\boldsymbol\alpha} $ is a non-approximate optimal solution to the kernel machine~\eqref{op:kernel models}.
An approximation error bound for LLA (including NCR) in a general setting can then be easily developed.
Also, the approximation error $ \| f^{\mathrm{LLA}}-f^{*} \|_{\mathcal{H}} $ can be explicitly computed.
These results are summarized in the following proposition.

\begin{proposition} \label{prop:approximation error}
Assume the strong equivalence between LLA and SPA holds. Then, for kernel machines in general, the approximation error for LLA (including NCR) satisfies
\begin{gather}
\left\lVert f^{\mathrm{LLA}}-f^{*} \right\rVert_{\mathcal{H}}^{2} = \hat{\boldsymbol\alpha}^{T}\widetilde{\mathbf{K}}\hat{\boldsymbol\alpha} + \widetilde{\boldsymbol\alpha}^{T}\mathbf{K}\widetilde{\boldsymbol\alpha} - 2\hat{\boldsymbol\alpha}^{T}\widetilde{\mathbf{K}}\widetilde{\boldsymbol\alpha} ,\label{eq:LLA-f*}\\
\|f^{\mathrm{LLA}}-f^{*}\|_{\mathcal{H}} \leq \|\mathbf{K}-\widetilde{\mathbf{K}}\|_{2}^{\frac{1}{2}}\|\widetilde{\boldsymbol\alpha}\|_{F} + \|\mathbf{K}\|_{2}^{\frac{1}{2}}\|\hat{\boldsymbol\alpha}-\widetilde{\boldsymbol\alpha}\|_{F} . \label{ieq:LLA}
\end{gather}
\end{proposition}

The approximation error bound~\eqref{ieq:LLA} shows that the accuracy of the approximate solutions computed through LLA is mainly determined by 1) the Gram matrix approximation error $ \| \mathbf{K} - \widetilde{\mathbf{K}} \|_{2}^{\frac{1}{2}} $ and 2) the continuity of the kernel machine $ \| \hat{\boldsymbol\alpha} - \widetilde{\boldsymbol\alpha} \|_{F} $, which is machine-dependent. In particular, \citet{cortes2010impact} proved that for KRR, $ \|\hat{\boldsymbol\alpha}-\widetilde{\boldsymbol\alpha}\|_{F} \leq \mathcal{O}(\|\mathbf{K}-\widetilde{\mathbf{K}}\|_{2}) $, and for KSVM, $ \|\hat{\boldsymbol\alpha}-\widetilde{\boldsymbol\alpha}\|_{F} \leq \mathcal{O}(\|\mathbf{K}-\widetilde{\mathbf{K}}\|_{2}^{\frac{1}{4}}) $.
Combining these results, we have the following corollary.

\begin{corollary}
Suppose the strong equivalence between LLA and SPA holds; then
\begin{gather}
\| f^{\mathrm{LLA}}-f^{*} \|_{\mathcal{H}} \leq \mathcal{O}(\| \mathbf{K}-\widetilde{\mathbf{K}} \|_{2}^{\frac{1}{2}}) \text{ for KRR, }\\
\| f^{\mathrm{LLA}}-f^{*} \|_{\mathcal{H}} \leq \mathcal{O}(\| \mathbf{K}-\widetilde{\mathbf{K}} \|_{2}^{\frac{1}{4}}) \text{ for KSVM. } \label{ieq:ksvm}
\end{gather}
\end{corollary}

\emph{Second}, the difference between LLA and GSA becomes clear. Note that $ f^{\mathrm{GSA}} = \mathbf{X}_{\mathcal{H}}\hat{\boldsymbol\alpha} $ is the approximate solution sought by GSA. Comparing $ f^{\mathrm{LLA}} = \widetilde{\mathbf{X}}_{\mathcal{H}}\hat{\boldsymbol\alpha} $ with $ f^{\mathrm{GSA}} = \mathbf{X}_{\mathcal{H}}\hat{\boldsymbol\alpha} $, it is expected that GSA can provide more accurate solutions than LLA.
Indeed, in comparison with the non-approximate optimal solution $ f^{*} = \mathbf{X}_{\mathcal{H}}\widetilde{\boldsymbol\alpha} $ to the problem~\eqref{op:kernel models}, there are two approximate terms in $ f^{\mathrm{LLA}} = \widetilde{\mathbf{X}}_{\mathcal{H}}\hat{\boldsymbol\alpha} $ while there is only one in $ f^{\mathrm{GSA}} = \mathbf{X}_{\mathcal{H}}\hat{\boldsymbol\alpha} $. Note that the approximation error $ \| f^{\mathrm{GSA}} - f^{*} \|_{\mathcal{H}} $ for GSA can be computed by
\begin{equation}
\begin{gathered} \label{eq:GSA-f*}
\left\lVert f^{\mathrm{GSA}}-f^{*} \right\rVert_{\mathcal{H}}^{2} = (\widetilde{\boldsymbol\alpha}-\hat{\boldsymbol\alpha})^{T}\mathbf{K}(\widetilde{\boldsymbol\alpha}-\hat{\boldsymbol\alpha}).
\end{gathered}
\end{equation}
Aided by the equalities~\eqref{eq:LLA-f*} and~\eqref{eq:GSA-f*}, we run experiments with classification tasks to verify our conjecture that GSA can provide more accurate solutions than LLA.

\emph{Third}, the equality~\eqref{eq:xa=bw} suggests that $ \hat{\boldsymbol\alpha} $ can be computed from $ \hat{\mathbf{w}} $.
As suggested by other studies~\cite{lan2019scaling,jin2013improved}, the reason to abandon GSA is that $ \hat{\boldsymbol\alpha} $ cannot be calculated efficiently when the related solvers are used as a black box.
By contrast, $ \hat{\mathbf{w}} $ can easily be obtained by using efficient linear solvers.
However, computing $ \hat{\boldsymbol\alpha} $ from $ \hat{\mathbf{w}} $ is as computationally efficient as calculating $ \hat{\mathbf{w}} $ itself.

\begin{corollary} \label{cor: LLA2GSA}
If SPA and LLA are strongly equivalent, and $\hat{\mathbf{w}}$ is optimal to the problem~\eqref{op:solvableLLA}, then
\begin{equation} \label{step:LLA2GSA}
\begin{gathered}
\hat{\boldsymbol\alpha} \gets \mathbf{G}^{\dagger}\hat{\mathbf{w}} \text{ with } \mathbf{G} = \mathbf{A}^{T}\mathbf{K}_{mn} = \langle \mathbf{B}_{\mathcal{H}},\mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}}
\end{gathered}
\end{equation}
is an optimal solution to the problem~\eqref{op:approximate_solvable models}.
\end{corollary}
\emph{Proof.} Since $ \mathbf{B}_{\mathcal{H}}\mathbf{G}\hat{\boldsymbol\alpha} = \widetilde{\mathbf{X}}_{\mathcal{H}}\hat{\boldsymbol\alpha} = \mathbf{B}_{\mathcal{H}}\hat{\mathbf{w}} $ and the components of $ \mathbf{B}_{\mathcal{H}} $ are linearly independent, the equality $ \mathbf{G}\hat{\boldsymbol\alpha} = \hat{\mathbf{w}} $ holds, which indicates $ \hat{\boldsymbol\alpha} = \mathbf{G}^{\dagger}\hat{\mathbf{w}} $. \hfill $ \Box $

Note that the running time for yielding $ \mathbf{G} $ is $ \mathcal{O}(nms) $ when the standard Nystr\"{o}m method is taken, and the time for calculating $ \mathbf{G}^{\dagger} $ is always $ \mathcal{O}(ns^{2}) $ where $ s\leq m $. Therefore, the step~\eqref{step:LLA2GSA} does not add to the computational cost, while it enables us to use GSA as efficiently as LLA.

Even though Corollary~\ref{cor: LLA2GSA} is based on the strong equivalence between LLA and SPA, it in fact holds even if there is only weak equivalence between LLA and SPA.

\begin{proposition} \label{prop:w2a}
Suppose the weak equivalence between LLA and SPA holds.
If $ \hat{\mathbf{w}} $ is optimal to the problem~\eqref{op:solvableLLA}, then $ \mathbf{G}^{\dagger}\hat{\mathbf{w}} $ is optimal to the problem~\eqref{op:approximate_solvable models}.
\end{proposition}
To conclude, LLA and GSA can be implemented so as to share the same training procedure, as summarized in Algorithm~\ref{alg:general}.

\begin{algorithm}[!t]
\small
\caption{Algorithms for running LLA (including NCR) or GSA}
\label{alg:general}
\begin{algorithmic}
\Statex {\bfseries Training phase}
\Statex {\bfseries Input:} Data $ \mathbf{X}_{\mathcal{H}} = \Phi(\mathbf{X}) $, a vector of labels $ \mathbf{y} $, the number of landmarks $ m $, the targeted low dimension $ s \leq m $.
\Statex ~~~1) Generate a set of landmarks $ \mathbf{C}_{\mathcal{H}} \in \mathcal{H}^{m} $ by a specific sampling strategy \cite{kumar2012sampling,sun2015review,gittens2016revisiting}.
\Statex ~~~2) Compute $ \mathbf{K}_{nm}^{T} = \mathbf{K}_{mn} = \langle \mathbf{C}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}} $ and $ \mathbf{K}_{mm} = \langle \mathbf{C}_{\mathcal{H}}, \mathbf{C}_{\mathcal{H}} \rangle_{\mathcal{H}} $.
\Statex ~~~3) Obtain $ \mathbf{A} \in \mathbb{R}^{m\times s} $ that satisfies $ \mathbf{A}^{T}\mathbf{K}_{mm}\mathbf{A} = \mathbf{I} $ by using a specific Nystr\"{o}m method \cite{lim2015double,lim2018multi}.
\Statex ~~~4) Compute $ \mathbf{G} \gets \mathbf{A}^{T}\mathbf{K}_{mn} $.
\Statex ~~~5) Get $ \hat{\mathbf{w}} $ by using an efficient linear solver upon $ \mathbf{G} $.
\Statex {\bfseries Output}: $ \hat{\mathbf{w}} $.
\\\hrulefill
\Statex {\bfseries Testing phase} (LLA: low-rank linearization approach; GSA: Gram matrix substitution approach)
\Statex {\bfseries Input:} A test data point $ \mathbf{z} $.
\Statex ~~~~{\bf If implementing LLA: } $ \mathbf{t} \gets \mathbf{A}^{T}\langle \mathbf{C}_{\mathcal{H}}, \Phi(\mathbf{z}) \rangle_{\mathcal{H}} $.
\Statex ~~~~{\bf If implementing GSA: } $ \mathbf{t} \gets (\mathbf{G}^{\dagger})^{T}\langle \mathbf{X}_{\mathcal{H}}, \Phi(\mathbf{z}) \rangle_{\mathcal{H}} $.
\Statex \textbf{Prediction:} $ \hat{\mathbf{w}}^{T}\mathbf{t} $.
\end{algorithmic}
\end{algorithm}

When there is only weak equivalence, although the computed approximate solution $ f^{\mathrm{LLA}} $ may not satisfy the equality~\eqref{eq:xa=bw}, the projection of $ f^{\mathrm{LLA}} $ onto $ \mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}}) $ always meets the equality~\eqref{eq:xa=bw}.
Therefore, all the analysis above still works when using the projection of $ f^{\mathrm{LLA}} $ instead.
Moreover, since the weak equivalence implies that each optimal solution to SPA is also optimal to LLA, SPA always serves as an alternative perspective for LLA (including NCR).

\subsection{Sufficient Conditions for the Equivalence}
The remaining question is when the two types of equivalence hold.
First, note that in the problem~\eqref{op:recastLLA}, it holds that $ \langle f, \mathbf{x}_{\mathcal{H}}^{i} \rangle_{\mathcal{H}} = \langle f, \widetilde{\mathbf{x}}_{\mathcal{H}}^{i} \rangle_{\mathcal{H}} $ for each $ 1\leq i \leq n $ and each feasible solution $ f \in \mathrm{span}(\mathbf{B}_{\mathcal{H}}) $.
This result is due to
\begin{equation}
\begin{gathered}
\langle \mathbf{B}_{\mathcal{H}}, \widetilde{\mathbf{X}}_{\mathcal{H}} \rangle_{\mathcal{H}} = \langle \mathbf{B}_{\mathcal{H}}, \mathbf{B}_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}} \rangle_{\mathcal{H}} \\= \langle \mathbf{B}_{\mathcal{H}},\mathbf{B}_{\mathcal{H}} \rangle_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}} = \langle \mathbf{B}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}} .
\end{gathered}
\end{equation}
Therefore, the problem~\eqref{op:recastLLA} is exactly equivalent to
\begin{equation} \label{op:recastLLA2}
\begin{aligned}
\argmin_{f\in\mathcal{H}} \hat{\mathcal{R}}(f, \widetilde{\mathbf{X}}_{\mathcal{H}}) =& \cfrac{1}{n}\sum_{i=1}^{n}\mathcal{L}(\langle f, \widetilde{\mathbf{x}}_{\mathcal{H}}^{i} \rangle_{\mathcal{H}}, y_{i}) + \Omega(\|f\|_{\mathcal{H}}^{2})\\
\text{ subject to }& f \in \mathrm{span}(\mathbf{B}_{\mathcal{H}}),
\end{aligned}
\end{equation}
where $ \mathbf{B}_{\mathcal{H}} $ is a learned orthogonal basis.
One can observe that the only difference between the problems~\eqref{op:projected kernel models} and~\eqref{op:recastLLA2} lies in the constraint.
Therefore, it is obvious that the strong equivalence holds when
\begin{equation} \label{eq:span(X)=span(B)}
\begin{gathered}
\mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}}) = \mathrm{span}(\mathbf{B}_{\mathcal{H}}) .
\end{gathered}
\end{equation}
Since $ \widetilde{\mathbf{X}}_{\mathcal{H}} = \mathbf{B}_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}} $, we already have $ \mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}}) \subseteq \mathrm{span}(\mathbf{B}_{\mathcal{H}}) $. However, the other direction does not necessarily hold, for example, when the set of landmarks $ \mathbf{C}_{\mathcal{H}} $ contains some points that are not located inside $ \mathrm{span}(\mathbf{X}_{\mathcal{H}}) $.
To the best of our knowledge, this could only happen when non-kernel k-means clustering sampling strategies are used~\cite{zhang2010clustered,pourkamali2018randomized}.
For other sampling strategies, $ \mathbf{C}_{\mathcal{H}} $ comes with an associated condition $ \mathbf{C}_{\mathcal{H}} = \mathbf{X}_{\mathcal{H}}\mathbf{P} $, where $ \mathbf{P} \in \mathbb{R}^{n\times m} $ is a sampling matrix.
This equality immediately asserts that $ \mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}}) = \mathrm{span}(\mathbf{B}_{\mathcal{H}}) $.
\begin{proposition} \label{prop:C=XP}
If $ \mathbf{C}_{\mathcal{H}} = \mathbf{X}_{\mathcal{H}}\mathbf{P} $, it holds that $ \mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}}) = \mathrm{span}(\mathbf{B}_{\mathcal{H}}) $. Hence, SPA and LLA are strongly equivalent.
\end{proposition}

Even if $ \mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}}) $ is a proper subset of $ \mathrm{span}(\mathbf{B}_{\mathcal{H}}) $, the two types of equivalence are closely related to the representer theorem.
In particular, we need to categorize the representer theorem.
For the kernel machine~\eqref{op:kernel models}, each solution $ f \in \mathcal{H} $ can be uniquely decomposed as $ f = f_{r} + f_{n} $ such that $ f_{r} \in \mathrm{span}(\mathbf{X}_{\mathcal{H}}) $ and $ f_{n} \in \mathrm{span}(\mathbf{X}_{\mathcal{H}})^{\perp} $ (the orthogonal complement of $ \mathrm{span}(\mathbf{X}_{\mathcal{H}}) $). There are two specific types of the representer theorem.
\begin{definition}
The objective of the kernel machine~\eqref{op:kernel models} is said to satisfy the strong representer theorem if whenever a solution $ f \in \mathcal{H} $ comes with $ f_{n} \not= \mathbf{0} $, it holds that $ \hat{\mathcal{R}}(f_{r}) < \hat{\mathcal{R}}(f) $.
\end{definition}
\begin{definition}
The objective of the kernel machine~\eqref{op:kernel models} is said to satisfy the weak representer theorem if $ \hat{\mathcal{R}}(f_{r}) \leq \hat{\mathcal{R}}(f) $ for each solution $ f \in \mathcal{H} $.
\end{definition}

The difference between the strong and the weak type of the representer theorem is that the weak one does not exclude the case in which there exists an optimal solution that is not located inside $ \mathrm{span}(\mathbf{X}_{\mathcal{H}}) $.
Previous studies have already provided some sufficient conditions for these two types of the representer theorem~\cite{Schlkopf2001AGR,dinuzzo2012representer,yu2013characterizing}.
For example, if the regularizing function $ \Omega:[0,+\infty] \mapsto [-\infty,+\infty] $ is strictly increasing, then the objective of the kernel machine~\eqref{op:kernel models} satisfies the strong representer theorem; if $ \Omega $ is non-decreasing, then it satisfies the weak representer theorem instead.
With this categorization, the equivalence between SPA and LLA can be characterized by the following proposition.

\begin{proposition} \label{prop:reprst to equiv}
If the objective of the kernel machine~\eqref{op:kernel models} satisfies the strong (respectively, weak) representer theorem, then SPA and LLA are strongly (respectively, weakly) equivalent.
\end{proposition}

With the proposition above, it is easy to see that some well-established kernel machines, e.g., KRR, KSVM, and KPCA, satisfy the strong equivalence, since their optimization problems can be expressed in the form~\eqref{op:kernel models} where $ \Omega $ is strictly increasing~\cite{dinuzzo2012representer}.

To sum up, the strong (respectively, weak) representer theorem leads to strong (respectively, weak) equivalence between SPA and LLA.
Alternatively, for most sampling strategies, the associated condition $ \mathbf{C}_{\mathcal{H}} = \mathbf{X}_{\mathcal{H}}\mathbf{P} $ alone is sufficient to ensure the strong equivalence between LLA and SPA.
Therefore, we conclude that the mechanism behind LLA is that it projects all data in the new feature space onto $ \mathrm{span}(\mathbf{B}_{\mathcal{H}}) $ before running kernel machines as usual.\n\t\n\t\n\t\n\t\begin{figure*}[h]\n\t\n\t\n\t\n\t\n\t\t\centering\n\t\t\begin{tabular}{@{\hskip 0ex}r@{\hskip 0ex}cccc}\n\t\t\t& {\small Gaussian Sampling} & {\small Uniform Sampling} & {\small Leverage score Sampling} & {\small K-Means Clustering Sampling}\n\t\t\t\\\n\t\t\t\multirow{2}{*}[0.5ex]{\rotatebox[origin=c]{90}{usps}}\n\t\t\t& \includegraphics[width=0.225\linewidth]{usps_Gaussian_G_Bound2021}\n\t\t\t& \includegraphics[width=0.225\linewidth]{usps_Gaussian_U_Bound2021}\n\t\t\t& \includegraphics[width=0.225\linewidth]{usps_Gaussian_LS_Bound2021}\n\t\t\t& \includegraphics[width=0.225\linewidth]{usps_Gaussian_KMC_Bound2021} \\\n\t\t\t& \includegraphics[width=0.225\linewidth]{usps_Gaussian_G_Compare2021}\n\t\t\t& \includegraphics[width=0.225\linewidth]{usps_Gaussian_U_Compare2021}\n\t\t\t& \includegraphics[width=0.225\linewidth]{usps_Gaussian_LS_Compare2021}\n\t\t\t& \includegraphics[width=0.225\linewidth]{usps_Gaussian_KMC_Compare2021}\n\t\t\t\\\n\t\t\t\multirow{2}{*}[0.5ex]{\rotatebox[origin=c]{90}{gisette}}\n\t\t\t& \includegraphics[width=0.225\linewidth]{gisette_Gaussian_G_Bound2021}\n\t\t\t& \includegraphics[width=0.225\linewidth]{gisette_Gaussian_U_Bound2021}\n\t\t\t& \includegraphics[width=0.225\linewidth]{gisette_Gaussian_LS_Bound2021}\n\t\t\t& \includegraphics[width=0.225\linewidth]{gisette_Gaussian_KMC_Bound2021} \\\n\t\t\t& \includegraphics[width=0.225\linewidth]{gisette_Gaussian_G_Compare2021}\n\t\t\t& \includegraphics[width=0.225\linewidth]{gisette_Gaussian_U_Compare2021}\n\t\t\t& \includegraphics[width=0.225\linewidth]{gisette_Gaussian_LS_Compare2021}\n\t\t\t& \includegraphics[width=0.225\linewidth]{gisette_Gaussian_KMC_Compare2021}\n\t\t\end{tabular}\t\t\n\t\t\caption{Comparison between Gram matrix substitution approach (GSA) and low-rank linearization approach (LLA) in terms of approximation error and classification accuracy (with $\nu$-SVM). Every two rows correspond to a specific dataset. 
Each column is related to a certain sampling strategy.}\n\t\t\label{fig:result1} \n\t\end{figure*}\n\t\n\t\n\t\begin{figure*}[h]\n\t\n\t\n\t\n\t\n\t\t\centering\n\t\t\begin{tabular}{@{\hskip 0ex}r@{\hskip 0ex}cccc}\n\t\t\t& {\small Gaussian Sampling} & {\small Uniform Sampling} & {\small Leverage score Sampling} & {\small K-Means Clustering Sampling}\n\t\t\t\\\n\t\t\t\multirow{2}{*}[0.5ex]{\rotatebox[origin=c]{90}{dna}}\n\t\t\t& \includegraphics[width=0.22\linewidth]{dna_Gaussian_G_Bound2021}\n\t\t\t& \includegraphics[width=0.22\linewidth]{dna_Gaussian_U_Bound2021}\n\t\t\t& \includegraphics[width=0.22\linewidth]{dna_Gaussian_LS_Bound2021}\n\t\t\t& \includegraphics[width=0.22\linewidth]{dna_Gaussian_KMC_Bound2021} \\\n\t\t\t& \includegraphics[width=0.22\linewidth]{dna_Gaussian_G_Compare2021}\n\t\t\t& \includegraphics[width=0.22\linewidth]{dna_Gaussian_U_Compare2021}\n\t\t\t& \includegraphics[width=0.22\linewidth]{dna_Gaussian_LS_Compare2021}\n\t\t\t& \includegraphics[width=0.22\linewidth]{dna_Gaussian_KMC_Compare2021}\n\t\t\t\\\n\t\t\t\multirow{2}{*}[0.5ex]{\rotatebox[origin=c]{90}{phishing}}\n\t\t\t& \includegraphics[width=0.22\linewidth]{phishing_Gaussian_G_Bound2021}\n\t\t\t& \includegraphics[width=0.22\linewidth]{phishing_Gaussian_U_Bound2021}\n\t\t\t& \includegraphics[width=0.22\linewidth]{phishing_Gaussian_LS_Bound2021}\n\t\t\t& \includegraphics[width=0.22\linewidth]{phishing_Gaussian_KMC_Bound2021} \\\n\t\t\t& \includegraphics[width=0.22\linewidth]{phishing_Gaussian_G_Compare2021}\n\t\t\t& \includegraphics[width=0.22\linewidth]{phishing_Gaussian_U_Compare2021}\n\t\t\t& \includegraphics[width=0.22\linewidth]{phishing_Gaussian_LS_Compare2021}\n\t\t\t& \includegraphics[width=0.22\linewidth]{phishing_Gaussian_KMC_Compare2021}\n\t\t\end{tabular}\t\t\n\t\t\caption{Comparison between Gram matrix substitution approach (GSA) and low-rank linearization approach (LLA) in terms of approximation error and classification accuracy (with $\nu$-SVM). Every two rows correspond to a specific dataset. Each column is related to a certain sampling strategy.}\n\t\t\label{fig:result2} \n\t\end{figure*}\n\t\n\t\n\t\n\t\section{Experiments} \n\t\label{sec:experiments}\n\tBear in mind that the intent of our experiments is to verify that GSA can provide more accurate solutions than LLA (including NCR). \n\tBeing more accurate does not mean that the performance will necessarily be better, \n\tbut it is expected to be true when the non-approximate optimal solution $ f^{*} $ to the kernel machine~\eqref{op:kernel models} performs well. \n\tTo the best of our knowledge, when the Nystr\"{o}m method is compared with other scalable techniques, LLA is always employed \cite{lan2019scaling,hsieh2014divide}. \n\tTherefore, it is of interest to see the performances of GSA versus LLA.\n\tSpecifically, we use the step provided by Proposition~\ref{prop:w2a} to efficiently optimize GSA, which is presented in Algorithm~\ref{alg:general}.\n\t\n\n\n\t\n\n\tFollowing previous studies \cite{zhang2010clustered,lan2019scaling,hsieh2014divide}, we focus on classification tasks, and the Gaussian kernel $ \exp(-\| \mathbf{x}-\mathbf{y} \|_{F}^{2} \/ 2\gamma^{2}) $ is used for all datasets. Four datasets from the LIBSVM archive (\url{https:\/\/www.csie.ntu.edu.tw\/~cjlin\/libsvmtools\/datasets\/}) are employed, which are listed in Table~\ref{tab:datasets}. \n\tIn our experiments, \texttt{NuSVC} from \texttt{sklearn} is employed for implementing $ \nu $-SVM.
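\n\tTo make the pipeline concrete, the following is a minimal sketch in Python (with \texttt{numpy} and \texttt{sklearn}) of the shared linearized training procedure under the Gaussian kernel. It assumes the landmarks are already sampled; the orthogonal basis it builds is the standard Nystr\"{o}m one, $ \mathbf{B}_{\mathcal{H}}^{\mathrm{std}} = \mathbf{C}_{\mathcal{H}}\mathbf{V}\boldsymbol\Sigma^{-1} $, whose construction is detailed in the next subsection, and all variable names and hyperparameter values here are illustrative placeholders rather than our exact implementation.\n\t\begin{verbatim}\nimport numpy as np\nfrom sklearn.svm import NuSVC\n\ndef gaussian_kernel(A, B, gamma):\n    # pairwise Gaussian kernel between rows of A and rows of B\n    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)\n    return np.exp(-d2 \/ (2 * gamma ** 2))\n\ndef nystrom_features(X, C, gamma):\n    # spectral decomposition of K_mm gives B_std = C V Sigma^{-1}\n    sig2, V = np.linalg.eigh(gaussian_kernel(C, C, gamma))\n    keep = sig2 > 1e-12  # drop numerically zero directions\n    H = V[:, keep] \/ np.sqrt(sig2[keep])\n    # each row is the projected representation of a data point\n    return gaussian_kernel(X, C, gamma) @ H\n\n# X_train, y_train, X_test, m, gamma are assumed to be given\nidx = np.random.choice(len(X_train), m, replace=False)  # uniform sampling\nC = X_train[idx]\nclf = NuSVC(nu=0.1, kernel='linear', max_iter=1000)\nclf.fit(nystrom_features(X_train, C, gamma), y_train)\ny_pred = clf.predict(nystrom_features(X_test, C, gamma))\n\end{verbatim}\n\tTraining the linear $ \nu $-SVM on these projected features realizes LLA; the additional step from Proposition~\ref{prop:w2a} that recovers the GSA solution from the same training procedure is omitted here for brevity.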
\n\tTwo metrics are used for evaluation, including 1) approximation error, and 2) classification accuracy.\n\tAll experiments are conducted on a computer with 8 $ \times $ 2.40 GHz Intel(R) Core(TM) i7-4700HQ CPU and 16 GB of RAM. \n\t\n\t\begin{table}[!t]\n\t\t\centering\n\t\t\caption{Summary of Datasets and Hyperparameters}\n\t\t\label{tab:datasets}\n\t\t\scriptsize\n\t\t\begin{tabular}{ccccccc} \toprule\n\t\t\t\textbf{Dataset} & \#training data & \#test data & \#feature & \#class & $ \gamma $ & $ \nu $ \\ \midrule\n\t\t\t\textbf{usps} & 7,291 & 2,007 & 256 & 10 & 10 & 0.1 \\\n\t\t\n\t\t\t\textbf{gisette} & 4,800 & 1,200 & 5,000 & 2 & 70 & 0.2 \\\n\t\t\t\textbf{phishing} & 8,388 & 2,097 & 9,947 & 4 & 10 & 82\\\n\t\t\t\textbf{dna} & 2,000 & 1,186 & 180 & 3 & 200 & 0.3 \\ \bottomrule\n\t\t\end{tabular}\t\n\t\end{table}\n\t\n\t\n\t\n\t\subsection{Experiment Setting}\n\t\n\tFour sampling strategies are used here, including a) Gaussian sampling, b) uniform sampling, c) leverage score sampling, and d) k-means clustering sampling. \n\tThe details of these sampling strategies can be found in \cite{gittens2016revisiting,zhang2010clustered}. \n\tSince the considered sampling strategies involve randomness, for each ratio ($ m\/n $) of landmarks to training data, the averaged results with the standard deviations over the first $ 30 $ random seeds are reported.\n\tFor each dataset, the ratio $ m\/n $ is gradually increased from $ 1\% $ to $ 10\% $.\n\t\n\t\n\tFor each dataset, we randomly split the whole dataset into a training set ($ 64\% $), a validation set ($ 16\% $) and a test set ($ 20\% $) if it is not previously divided. \n\tWe tune the hyperparameters $ \nu $ and $ \gamma $ based on the training and validation sets.\n\tThe considered ranges for $ \gamma $ and $ \nu $ are $ [10^{-3}, 10^{3}] $ and $ [0.1, 0.9] $, respectively.\n\tThe finally chosen values of $ \nu $ and $ \gamma $ are listed in Table~\ref{tab:datasets}. \n\tBesides, the maximum number of iterations for \texttt{NuSVC} is fixed at $ 1,000 $. \n\tThroughout our experiments, we use the orthogonal basis $ \mathbf{B}_{\mathcal{H}}^{\mathrm{std}} $ that is embedded in the standard Nystr\"{o}m method for forming $ \widetilde{\mathbf{K}} $. \n\t$ \mathbf{B}_{\mathcal{H}}^{\mathrm{std}} \in \mathcal{H}^{s} $ can be calculated as follows: 1) by spectral decomposition, $ \langle \mathbf{C}_{\mathcal{H}}, \mathbf{C}_{\mathcal{H}} \rangle_{\mathcal{H}} = \mathbf{V}\boldsymbol\Sigma^{2}\mathbf{V}^{T} $, where all diagonal entries in $ \boldsymbol\Sigma $ are positive, and then 2) $ \mathbf{B}_{\mathcal{H}}^{\mathrm{std}} = \mathbf{C}_{\mathcal{H}}\mathbf{V}\boldsymbol\Sigma^{-1} $ is the desired orthogonal basis.\n\tHere, the dimension $ s $ of the targeted subspace is the dimension of $ \mathrm{span}(\mathbf{C}_{\mathcal{H}}) $. \n\t\n\t\n\t\n\t\n\n\t\n\t\n\t\n\t\n\t\subsection{Results}\n\tThe experimental results on four datasets are reported in Figure~\ref{fig:result1} and Figure~\ref{fig:result2}. \n\tFrom these two figures, we have several interesting observations. \n\t\emph{First}, one can see that \n\n\tall approximate solutions get more accurate as the ratio $ m\/n $ increases.
\n\tThe reason could be that, as the ratio $ m\/n $ of landmarks to training data increases, the Gram matrix approximation error gets smaller; \n\tand as indicated by the approximation error bound~\eqref{ieq:ksvm}, $ \| f^{\mathrm{LLA}}-f^{*} \|_{\mathcal{H}} $ is bounded by $ \| \mathbf{K}-\widetilde{\mathbf{K}} \|_{2}^{\frac{1}{4}} $ for KSVM. \n\tTherefore, it is expected that the curve of $ \| f^{\mathrm{LLA}}-f^{*} \|_{\mathcal{H}} $ versus the ratio $ m\/n $ will go down. \n\t\emph{Second}, GSA is significantly more accurate than LLA on the usps dataset.\n\tOn the gisette and phishing datasets, however, LLA can achieve more accurate approximate solutions than GSA when the ratio $ m\/n $ is close to $ 1\% $. \n\tMeanwhile, LLA is almost as accurate as GSA on the dna dataset. \n\t\emph{In addition}, more accurate solutions (i.e., lower approximation errors) do not necessarily lead to better performance in terms of classification accuracy.\n\tBy comparing the performances over the gisette and phishing datasets, we observe that even though GSA provides modestly more accurate solutions on both datasets when the ratio $ m\/n $ is close to $ 10\% $, GSA performs better than LLA over the gisette dataset but becomes slightly worse over the phishing dataset. \n\t\n\n\tIn a nutshell, even though LLA is commonly used as an exemplar of using the Nystr\"{o}m method to scale up kernel machines, we should not forget that GSA has the potential to provide more accurate approximate solutions or perform better.\n\t\n\t\n\t\section{Conclusion}\n\t\label{conclusion}\n\tMotivated by the column inclusion property of Gram matrices, we propose a subspace projection approach (SPA) as a cornerstone to study the relations among several well-studied approaches (i.e., LLA, NCR and GSA). \n\tSpecifically, the setting of SPA provides a way to reformulate LLA, which in turn reveals that NCR is a specific case of LLA. \n\tWhen either the selected sampling strategy satisfies the equality $ \mathbf{C}_{\mathcal{H}} = \mathbf{X}_{\mathcal{H}}\mathbf{P} $ or the objective of kernel machine~\eqref{op:kernel models} meets the representer theorem, SPA serves as an alternative perspective for analyzing LLA. \n\tThe equivalence of LLA and SPA leads to three significant implications.\n\t\emph{First}, approximation errors for LLA in a general setting can be built up with little effort.\n\t\emph{Second}, it reveals that the analytical forms of the approximate solutions computed through LLA and GSA only differ in one term.\n\t\emph{In addition}, GSA can be implemented as efficiently as LLA by sharing the same training procedure. \n\tAll the analytical results lead to the conjecture that GSA can provide better solutions than LLA, which is confirmed by our experiments with classification tasks. In a nutshell, the mechanism behind LLA is that it projects all data onto $ \mathrm{span}(\mathbf{B}_{\mathcal{H}}) $ before normal training and testing.\n\t\n\t\n\t\n\t\n\t\appendices\n\t\section{Proofs of Lemmas~\ref{lem:relation between K and B} and~\ref{lem:reconstruction error}}\n\tFor Lemma~\ref{lem:relation between K and B}, let $ \widetilde{\mathbf{X}}_{\mathcal{H}} = \mathbf{B}_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}} $.
Since\n\t\begin{align*}\n\t\t\langle \mathbf{X}_{\mathcal{H}}, \widetilde{\mathbf{X}}_{\mathcal{H}} \rangle_{\mathcal{H}} =& \langle \mathbf{X}_{\mathcal{H}}, \mathbf{B}_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}} \rangle_{\mathcal{H}} \\\n\t\t=& \langle \mathbf{X}_{\mathcal{H}}, \mathbf{B}_{\mathcal{H}} \rangle_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}}\\\n\t\t=& \langle \mathbf{X}_{\mathcal{H}}, \mathbf{B}_{\mathcal{H}} \rangle_{\mathcal{H}} \langle \mathbf{B}_{\mathcal{H}}, \mathbf{B}_{\mathcal{H}} \rangle_{\mathcal{H}} \langle \mathbf{B}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}}\\\n\t\t=& \langle \mathbf{B}_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}}, \mathbf{B}_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}} \rangle_{\mathcal{H}} \\\n\t\t=& \langle \widetilde{\mathbf{X}}_{\mathcal{H}}, \widetilde{\mathbf{X}}_{\mathcal{H}} \rangle_{\mathcal{H}} ,\n\t\end{align*}\n\twe have\n\t\begin{gather*}\n\t\t\begin{aligned}\n\t\t\t&\langle \mathbf{X}_{\mathcal{H}}-\widetilde{\mathbf{X}}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}}-\widetilde{\mathbf{X}}_{\mathcal{H}} \rangle_{\mathcal{H}} \\\n\t\t\t=& \langle \mathbf{X}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}} + \langle \widetilde{\mathbf{X}}_{\mathcal{H}}, \widetilde{\mathbf{X}}_{\mathcal{H}} \rangle_{\mathcal{H}} - 2 \langle \mathbf{X}_{\mathcal{H}}, \widetilde{\mathbf{X}}_{\mathcal{H}} \rangle_{\mathcal{H}}\\\n\t\t\t=&\mathbf{K} + \widetilde{\mathbf{K}} - 2\widetilde{\mathbf{K}} \\\n\t\t\t=& \mathbf{K}-\widetilde{\mathbf{K}} .\n\t\t\end{aligned}\n\t\end{gather*}\n\tThis equality implies\n\t\begin{gather*}\n\t\t\begin{aligned}\n\t\t\t\left\lVert \mathbf{X}_{\mathcal{H}}-\widetilde{\mathbf{X}}_{\mathcal{H}} \right\rVert_{\mathcal{H}S}^{2}\n\t\t\t= \mathrm{trace}(\mathbf{K} - \widetilde{\mathbf{K}})\n\t\t\t= \left\lVert \mathbf{K} - \widetilde{\mathbf{K}} \right\rVert_{*} . \n\t\t\end{aligned}\n\t\end{gather*}\n\tHere, the last equality holds because $ \mathbf{K} \succeq \widetilde{\mathbf{K}} $. In other words, $ \mathbf{K}-\widetilde{\mathbf{K}} $ is positive semi-definite.\n\t\n\tFor the other inequality, it is sufficient to prove that $ \| \mathbf{A}_{\mathcal{H}} \|_{op}^{2} = \| \langle \mathbf{A}_{\mathcal{H}}, \mathbf{A}_{\mathcal{H}} \rangle_{\mathcal{H}} \|_{2} $ for each $ \mathbf{A}_{\mathcal{H}} \in \mathcal{H}^{k} $. For convenience, let $ \mathbf{M} = \langle \mathbf{A}_{\mathcal{H}}, \mathbf{A}_{\mathcal{H}} \rangle_{\mathcal{H}} $.
\n\tSince $ \mathbf{M} $ is positive semi-definite, spectral decomposition gives a factorization $ \mathbf{M} = \mathbf{B}^{T}\mathbf{B} $.\n\tAccording to the definitions of operator norm and spectral norm, there exist $ \mathbf{x},\mathbf{y} \in \mathbb{R}^{k} $ such that $ \| \mathbf{x} \|_{F} = \| \mathbf{y} \|_{F} = 1, \| \mathbf{A}_{\mathcal{H}}\mathbf{x} \|_{\mathcal{H}} = \| \mathbf{A}_{\mathcal{H}} \|_{op} $ and $ \| \mathbf{By} \|_{F} = \| \mathbf{B} \|_{2} $.\n\tTherefore,\n\t\begin{align*}\n\t\t\| \mathbf{A}_{\mathcal{H}} \|_{op}^{2} & = \| \mathbf{A}_{\mathcal{H}}\mathbf{x} \|_{\mathcal{H}}^{2} = \mathbf{x}^{T}\mathbf{Mx} \leq \| \mathbf{M} \|_{2}\| \mathbf{x} \|_{F}^{2} = \| \mathbf{M} \|_{2} .\n\t\end{align*}\n\tOn the other hand,\n\t\begin{align*}\n\t\t\| \mathbf{M} \|_{2} &= \| \mathbf{B}^{T}\mathbf{B} \|_{2} = \| \mathbf{B} \|_{2}^{2} = \| \mathbf{By} \|_{F}^{2} = \mathbf{y}^{T}\mathbf{M}\mathbf{y} \\\n\t\t&= \| \mathbf{A}_{\mathcal{H}}\mathbf{y} \|_{\mathcal{H}}^{2} \leq \| \mathbf{A}_{\mathcal{H}} \|_{op}^{2} .\n\t\end{align*}\n\tTherefore, we reach the equality $ \| \mathbf{A}_{\mathcal{H}} \|_{op}^{2} = \| \mathbf{M} \|_{2} $. As previously shown, $ \langle \mathbf{X}_{\mathcal{H}}-\widetilde{\mathbf{X}}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}}-\widetilde{\mathbf{X}}_{\mathcal{H}} \rangle_{\mathcal{H}} = \mathbf{K}-\widetilde{\mathbf{K}} $, which leads to\n\t\begin{gather*}\n\t\t\left\lVert \mathbf{X}_{\mathcal{H}}-\widetilde{\mathbf{X}}_{\mathcal{H}} \right\rVert_{op}^{2} = \left\lVert \mathbf{K}-\widetilde{\mathbf{K}} \right\rVert_{2} .\n\t\end{gather*} \hfill $ \Box $\n\t\n\tFor Lemma~\ref{lem:reconstruction error},\n\tlet $ \mathbf{p}_{\mathcal{H}} = \Phi(\mathbf{p}), \mathbf{q}_{\mathcal{H}} = \Phi(\mathbf{q}), \mathbf{p}'_{\mathcal{H}} = \mathbf{p}_{\mathcal{H}}-\widetilde{\mathbf{p}}_{\mathcal{H}} $ and $ \mathbf{q}'_{\mathcal{H}} = \mathbf{q}_{\mathcal{H}}-\widetilde{\mathbf{q}}_{\mathcal{H}} $.
\n\tNote that\n\t\begin{gather*}\n\t\t\langle \widetilde{\mathbf{p}}_{\mathcal{H}}, \mathbf{q}'_{\mathcal{H}} \rangle_{\mathcal{H}} = \langle \widetilde{\mathbf{p}}_{\mathcal{H}}, \mathbf{q}_{\mathcal{H}} \rangle_{\mathcal{H}} - \langle \widetilde{\mathbf{p}}_{\mathcal{H}}, \widetilde{\mathbf{q}}_{\mathcal{H}} \rangle_{\mathcal{H}} = 0\n\t\end{gather*}\n\tsince\n\t\begin{align*}\n\t\t\langle \widetilde{\mathbf{p}}_{\mathcal{H}}, \mathbf{q}_{\mathcal{H}} \rangle_{\mathcal{H}} &= \langle \mathbf{B}_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \mathbf{p}_{\mathcal{H}} \rangle_{\mathcal{H}}, \mathbf{q}_{\mathcal{H}} \rangle_{\mathcal{H}} \\\n\t\t&= \langle \mathbf{p}_{\mathcal{H}}, \mathbf{B}_{\mathcal{H}} \rangle_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \mathbf{q}_{\mathcal{H}} \rangle_{\mathcal{H}}\\\n\t\t&= \langle \mathbf{p}_{\mathcal{H}}, \mathbf{B}_{\mathcal{H}} \rangle_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \mathbf{B}_{\mathcal{H}} \rangle_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \mathbf{q}_{\mathcal{H}} \rangle_{\mathcal{H}}\\\n\t\t&= \langle \mathbf{B}_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \mathbf{p}_{\mathcal{H}} \rangle_{\mathcal{H}}, \mathbf{B}_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \mathbf{q}_{\mathcal{H}} \rangle_{\mathcal{H}} \rangle_{\mathcal{H}} \\\n\t\t&= \langle \widetilde{\mathbf{p}}_{\mathcal{H}}, \widetilde{\mathbf{q}}_{\mathcal{H}} \rangle_{\mathcal{H}} .\n\t\end{align*}\t\n\tLikewise, $ \langle \widetilde{\mathbf{q}}_{\mathcal{H}}, \mathbf{p}_{\mathcal{H}}' \rangle_{\mathcal{H}} = 0 $. \n\tThen,\n\t\begin{align*}\n\t\t&|\langle \widetilde{\mathbf{p}}_{\mathcal{H}}, \widetilde{\mathbf{q}}_{\mathcal{H}} \rangle_{\mathcal{H}} - \langle \mathbf{p}_{\mathcal{H}}, \mathbf{q}_{\mathcal{H}} \rangle_{\mathcal{H}}| \\\n\t\t=& | \langle \mathbf{p}_{\mathcal{H}}', \mathbf{q}_{\mathcal{H}}' \rangle_{\mathcal{H}}+\langle \widetilde{\mathbf{p}}_{\mathcal{H}}, \mathbf{q}_{\mathcal{H}}' \rangle_{\mathcal{H}} + \langle \mathbf{p}_{\mathcal{H}}',\widetilde{\mathbf{q}}_{\mathcal{H}} \rangle_{\mathcal{H}} | \\\t\t\n\t\t=& |\langle \mathbf{p}'_{\mathcal{H}}, \mathbf{q}'_{\mathcal{H}} \rangle_{\mathcal{H}}|\n\t\t\leq \| \mathbf{p}'_{\mathcal{H}} \|_{\mathcal{H}}\| \mathbf{q}'_{\mathcal{H}} \|_{\mathcal{H}} .\n\t\end{align*}\n\tTherefore, the lemma holds since $ \| \mathbf{p}'_{\mathcal{H}} \|_{\mathcal{H}} = 0 $ if and only if $ \mathbf{p}_{\mathcal{H}} \in \mathrm{span}(\mathbf{B}_{\mathcal{H}}) $. \hfill $ \Box $\n\t\n\t\section{Proof of Corollary~\ref{cor:NCR2LLA}} \label{app:NCR2LLA}\n\tTo be self-contained, we provide all tools that are needed for our proof here.\n\t\n\t\n\t\begin{lemma} \label{lem:T}\n\t\tLet $ \mathbf{A}_{\mathcal{H}} \in \mathcal{H}^{p} $ be given, and let $ \rho $ denote the dimension of $ \mathrm{span}(\mathbf{A}_{\mathcal{H}}) $. Then, there is an isomorphism $ T_{A} $, which is also an isometry, between $ \mathrm{span}(\mathbf{A}_{\mathcal{H}}) $ and $ \mathbb{R}^{\rho} $.\n\t\end{lemma}\n\t\n\t\emph{Proof.} If the vectors in $ \mathbf{A}_{\mathcal{H}} $ are not linearly independent, gradually remove one redundant vector at a time from $ \mathbf{A}_{\mathcal{H}} $ until the remaining vectors (denoted by $ \hat{\mathbf{A}}_{\mathcal{H}} \in \mathcal{H}^{\rho} $) are linearly independent.
Then, $ \mathrm{span}(\mathbf{A}_{\mathcal{H}}) = \mathrm{span}(\hat{\mathbf{A}}_{\mathcal{H}}) $. \n\tThe linear independence of $ \hat{\mathbf{A}}_{\mathcal{H}} $ guarantees that $ \hat{\mathbf{K}} = \langle \hat{\mathbf{A}}_{\mathcal{H}}, \hat{\mathbf{A}}_{\mathcal{H}} \rangle_{\mathcal{H}} \in \mathbb{R}^{\rho\times\rho} $ is invertible, meaning that it is positive definite. Therefore, by Cholesky decomposition, $ \hat{\mathbf{K}} = \mathbf{B}^{T}\mathbf{B} $ where $ \mathbf{B} \in \mathbb{R}^{\rho\times\rho} $ is invertible. Define a linear map $ T_{A} : \mathrm{span}(\mathbf{A}_{\mathcal{H}}) \mapsto \mathbb{R}^{\rho} $ by setting\n\t\begin{equation}\n\t\t\begin{gathered}\n\t\t\tT_{A}(\sum_{i=1}^{\rho}\alpha_{i}\hat{\mathbf{a}}^{i}_{\mathcal{H}}) = \sum_{i=1}^{\rho}\alpha_{i}\mathbf{b}_{i} ,\n\t\t\end{gathered}\n\t\end{equation}\n\twhere $ \mathbf{b}_{i} $ denotes the $ i $-th column of $ \mathbf{B} $.\n\tOne can check that $ T_{A} $ is indeed a linear bijection, and thus an isomorphism. Moreover, $ \langle \mathbf{x}_{\mathcal{H}}, \mathbf{y}_{\mathcal{H}} \rangle_{\mathcal{H}} = T_{A}(\mathbf{x}_{\mathcal{H}})^{T}T_{A}(\mathbf{y}_{\mathcal{H}}) $ for all $ \mathbf{x}_{\mathcal{H}},\mathbf{y}_{\mathcal{H}} \in \mathrm{span}(\mathbf{A}_{\mathcal{H}}) $ indicates that $ T_{A} $ is an isometry. \hfill $ \Box $\n\t\n\t\begin{lemma}[Compact SVD on $ \mathcal{H} $] \label{lem:svd}\n\t\tLet $ \mathbf{A}_{\mathcal{H}} \in \mathcal{H}^{k} $ be given. By spectral decomposition, $ \langle \mathbf{A}_{\mathcal{H}}, \mathbf{A}_{\mathcal{H}} \rangle_{\mathcal{H}} = \mathbf{R}\boldsymbol\Lambda^{2}\mathbf{R}^{T} $ where $ \mathbf{R}^{T}\mathbf{R} = \mathbf{I} $ and all diagonal entries in $ \boldsymbol\Lambda $ are positive. Let $ \mathbf{Y}_{\mathcal{H}} = \mathbf{A}_{\mathcal{H}}\mathbf{R}\boldsymbol\Lambda^{-1} $, and thus $ \langle \mathbf{Y}_{\mathcal{H}}, \mathbf{Y}_{\mathcal{H}} \rangle_{\mathcal{H}} = \mathbf{I} $. Then, there is $ \mathbf{A}_{\mathcal{H}} = \mathbf{Y}_{\mathcal{H}}\boldsymbol\Lambda\mathbf{R}^{T} $.\n\t\end{lemma}\n\t\emph{Proof.} Firstly, construct $ T_{A} $ according to Lemma~\ref{lem:T}. Then, $ T_{A}(\mathbf{A}_{\mathcal{H}})^{T}T_{A}(\mathbf{A}_{\mathcal{H}}) = \langle \mathbf{A}_{\mathcal{H}}, \mathbf{A}_{\mathcal{H}} \rangle_{\mathcal{H}} = \mathbf{R}\boldsymbol\Lambda^{2}\mathbf{R}^{T} $. Let $ \mathbf{Y} = T_{A}(\mathbf{A}_{\mathcal{H}})\mathbf{R}\boldsymbol\Lambda^{-1} $, then $ T_{A}(\mathbf{A}_{\mathcal{H}}) = \mathbf{Y}\boldsymbol\Lambda\mathbf{R}^{T} $ is a compact SVD. \n\tBesides, $ T_{A}^{-1}(\mathbf{Y}) = \mathbf{A}_{\mathcal{H}}\mathbf{R}\boldsymbol\Lambda^{-1} = \mathbf{Y}_{\mathcal{H}} $ due to the linearity of $ T_{A}^{-1} $. Noting that\n\t\begin{gather*}\n\t\t\mathbf{A}_{\mathcal{H}} = T_{A}^{-1}(\mathbf{Y}\boldsymbol\Lambda\mathbf{R}^{T}) = \mathbf{Y}_{\mathcal{H}}\boldsymbol\Lambda\mathbf{R}^{T},\n\t\end{gather*}\n\tthe proof is completed.
\hfill $ \Box $\n\t\n\t\n\t\n\t\emph{Proof of Corollary~\ref{cor:NCR2LLA}}.\n\tNote that $ \mathbf{K}_{nm}^{T} = \mathbf{K}_{mn} = \langle \mathbf{C}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}}, \mathbf{K}_{mm} = \langle \mathbf{C}_{\mathcal{H}}, \mathbf{C}_{\mathcal{H}} \rangle_{\mathcal{H}} $ and $ \mathbf{B}_{\mathcal{H}}^{\mathrm{std}} = \mathbf{C}_{\mathcal{H}}\mathbf{V}\boldsymbol\Sigma^{-1} $ where $ \mathbf{V} $ and $ \boldsymbol\Sigma $ come from the spectral decomposition $ \mathbf{K}_{mm} = \mathbf{V}\boldsymbol\Sigma^{2}\mathbf{V}^{T} $. Here, all diagonal entries in $ \boldsymbol\Sigma $ are positive. According to Lemma~\ref{lem:svd}, we have $ \mathbf{C}_{\mathcal{H}} = \mathbf{B}_{\mathcal{H}}^{\mathrm{std}}\boldsymbol\Sigma\mathbf{V}^{T} $.\n\tOur goal is to show the equality\n\t\begin{gather*}\n\t\t\mathbf{C}_{\mathcal{H}} (\mathbf{K}_{mn}\mathbf{K}_{nm}+\lambda_{0}\mathbf{K}_{mm})^{\dagger}\mathbf{K}_{mn}\mathbf{y} = \mathbf{B}_{\mathcal{H}}^{\mathrm{std}}\hat{\mathbf{w}}^{\mathrm{KRR}}\t\n\t\end{gather*}\n\twhere\n\t\begin{gather*}\n\t\t\hat{\mathbf{w}}^{\mathrm{KRR}} = (\mathbf{G}\mathbf{G}^{T}+\lambda_{0}\mathbf{I})^{-1}\mathbf{G}\mathbf{y} , \\\n\t\t\mathbf{G} = (\mathbf{V}\boldsymbol\Sigma^{-1})^{T}\mathbf{K}_{mn}.\n\t\end{gather*}\n\t\n\tSince\n\t\begin{gather*}\n\t\t\mathbf{B}_{\mathcal{H}}^{\mathrm{std}}\hat{\mathbf{w}}^{\mathrm{KRR}} = \mathbf{C}_{\mathcal{H}}\mathbf{H}(\mathbf{H}^{T}\mathbf{K}_{mn}\mathbf{K}_{nm}\mathbf{H}+\lambda_{0}\mathbf{I})^{-1}\mathbf{H}^{T}\mathbf{K}_{mn}\mathbf{y}\n\t\end{gather*}\n\twhere $ \mathbf{H} = \mathbf{V}\boldsymbol\Sigma^{-1} $,\n\tit is sufficient to prove that\n\t\begin{gather*}\n\t\t(\mathbf{K}_{mn}\mathbf{K}_{nm}+\lambda_{0}\mathbf{K}_{mm})^{\dagger} = \mathbf{H}(\mathbf{H}^{T}\mathbf{K}_{mn}\mathbf{K}_{nm}\mathbf{H}+\lambda_{0}\mathbf{I})^{-1}\mathbf{H}^{T} .\n\t\end{gather*}\n\t\n\tLet $ \mathbf{M} = \boldsymbol\Sigma\langle \mathbf{B}_{\mathcal{H}}^{\mathrm{std}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}}\langle \mathbf{X}_{\mathcal{H}}, \mathbf{B}_{\mathcal{H}}^{\mathrm{std}} \rangle_{\mathcal{H}}\boldsymbol\Sigma $; then we have\n\t\begin{gather*}\n\t\t\mathbf{K}_{mn}\mathbf{K}_{nm} = \mathbf{V}\mathbf{M}\mathbf{V}^{T} .\n\t\end{gather*}\n\t\n\tSince\n\t\begin{gather*}\n\t\t\begin{aligned}\n\t\t\t&\mathbf{H}^{T}\mathbf{K}_{mn}\mathbf{K}_{nm}\mathbf{H} + \lambda_{0}\mathbf{I}\\\n\t\t\t=& \mathbf{H}^{T}(\mathbf{K}_{mn}\mathbf{K}_{nm}+\lambda_{0}\mathbf{K}_{mm})\mathbf{H}\\\n\t\t\t=& \mathbf{H}^{T}\mathbf{V}(\mathbf{M}+\lambda_{0}\boldsymbol\Sigma^{2})\mathbf{V}^{T}\mathbf{H} \\\n\t\t\t=& \boldsymbol\Sigma^{-1}(\mathbf{M}+\lambda_{0}\boldsymbol\Sigma^{2})\boldsymbol\Sigma^{-1} ,\n\t\t\end{aligned}\n\t\end{gather*}\n\twe obtain\n\t\begin{gather*}\n\t\t\begin{aligned}\n\t\t\t&\mathbf{H}(\mathbf{H}^{T}\mathbf{K}_{mn}\mathbf{K}_{nm}\mathbf{H}+\lambda_{0}\mathbf{I})^{-1}\mathbf{H}^{T} \\\n\t\t\t=& \mathbf{H}\boldsymbol\Sigma(\mathbf{M}+\lambda_{0}\boldsymbol\Sigma^{2})^{-1}\boldsymbol\Sigma\mathbf{H}^{T} \\\n\t\t\t=& \mathbf{V}(\mathbf{M}+\lambda_{0}\boldsymbol\Sigma^{2})^{-1}\mathbf{V}^{T} \\\n\t\t\t=& (\mathbf{V}(\mathbf{M}+\lambda_{0}\boldsymbol\Sigma^{2})\mathbf{V}^{T})^{\dagger} \\\n\t\t\t=& (\mathbf{K}_{mn}\mathbf{K}_{nm}+\lambda_{0}\mathbf{K}_{mm})^{\dagger}
.\n\t\t\end{aligned}\n\t\end{gather*} \hfill $ \Box $\n\t\n\t\section{Proof of Proposition~\ref{prop:approximation error}}\n\tSince $ f^{\mathrm{LLA}} = \widetilde{\mathbf{X}}_{\mathcal{H}}\hat{\boldsymbol\alpha} $ and $ f^{*} = \mathbf{X}_{\mathcal{H}}\widetilde{\boldsymbol\alpha} $, we have\n\t\begin{gather*}\n\t\t\begin{aligned}\n\t\t\t& \| f^{\mathrm{LLA}} - f^{*} \|_{\mathcal{H}}^{2}\\\n\t\t\t=& \langle \widetilde{\mathbf{X}}_{\mathcal{H}}\hat{\boldsymbol\alpha}-\mathbf{X}_{\mathcal{H}}\widetilde{\boldsymbol\alpha}, \widetilde{\mathbf{X}}_{\mathcal{H}}\hat{\boldsymbol\alpha}-\mathbf{X}_{\mathcal{H}}\widetilde{\boldsymbol\alpha} \rangle_{\mathcal{H}}\\\n\t\t\t=& \hat{\boldsymbol\alpha}^{T}\widetilde{\mathbf{K}}\hat{\boldsymbol\alpha} + \widetilde{\boldsymbol\alpha}^{T}\mathbf{K}\widetilde{\boldsymbol\alpha} - 2\hat{\boldsymbol\alpha}^{T}\widetilde{\mathbf{K}}\widetilde{\boldsymbol\alpha}\\\n\t\t\t=& (\hat{\boldsymbol\alpha}-\widetilde{\boldsymbol\alpha})^{T}\widetilde{\mathbf{K}}(\hat{\boldsymbol\alpha}-\widetilde{\boldsymbol\alpha}) + \widetilde{\boldsymbol\alpha}^{T}(\mathbf{K}-\widetilde{\mathbf{K}})\widetilde{\boldsymbol\alpha}\\\n\t\t\t\leq& \| \widetilde{\mathbf{K}} \|_{2}\| \hat{\boldsymbol\alpha}-\widetilde{\boldsymbol\alpha} \|_{F}^{2} + \| \mathbf{K}-\widetilde{\mathbf{K}} \|_{2}\| \widetilde{\boldsymbol\alpha} \|_{F}^{2} \\\n\t\t\t\leq& \| \mathbf{K} \|_{2}\| \hat{\boldsymbol\alpha}-\widetilde{\boldsymbol\alpha} \|_{F}^{2} + \| \mathbf{K}-\widetilde{\mathbf{K}} \|_{2}\| \widetilde{\boldsymbol\alpha} \|_{F}^{2} .\n\t\t\end{aligned}\n\t\end{gather*}\n\tThe last inequality is due to $ \widetilde{\mathbf{K}} \preceq \mathbf{K} \implies \| \widetilde{\mathbf{K}} \|_{2} \leq \| \mathbf{K} \|_{2} $. \n\tTaking the square root on both sides leads to\n\t\begin{align*}\n\t\t&\| f^{\mathrm{LLA}} - f^{*} \|_{\mathcal{H}} \\\n\t\t\leq& \left( \| \mathbf{K} \|_{2}\| \hat{\boldsymbol\alpha}-\widetilde{\boldsymbol\alpha} \|_{F}^{2} + \| \mathbf{K}-\widetilde{\mathbf{K}} \|_{2}\| \widetilde{\boldsymbol\alpha} \|_{F}^{2} \right)^{\frac{1}{2}}\\\n\t\t\leq& \left( \| \mathbf{K} \|_{2}\| \hat{\boldsymbol\alpha}-\widetilde{\boldsymbol\alpha} \|_{F}^{2} \right)^{\frac{1}{2}} + \left( \| \mathbf{K}-\widetilde{\mathbf{K}} \|_{2}\| \widetilde{\boldsymbol\alpha} \|_{F}^{2} \right)^{\frac{1}{2}} \\\n\t\t=& \| \mathbf{K} \|_{2}^{\frac{1}{2}}\| \hat{\boldsymbol\alpha}-\widetilde{\boldsymbol\alpha} \|_{F} + \| \mathbf{K}-\widetilde{\mathbf{K}} \|_{2}^{\frac{1}{2}}\| \widetilde{\boldsymbol\alpha} \|_{F} .\n\t\end{align*} \hfill $ \Box $\n\t\n\t\n\t\section{Proof of Proposition~\ref{prop:w2a}}\n\t\begin{lemma}[Theorem 4.11 in~\cite{rudin2006real}] \label{lem:rn}\n\t\tSuppose a subspace $ \mathcal{S} \subseteq \mathcal{H} $ is closed, and let $ \mathcal{S}^{\perp} $ be its orthogonal complement $ \{ \mathbf{x}_{\mathcal{H}} \in \mathcal{H} \mid \langle \mathbf{x}_{\mathcal{H}}, \mathbf{s}_{\mathcal{H}} \rangle_{\mathcal{H}} = 0 \text{ for all } \mathbf{s}_{\mathcal{H}} \in \mathcal{S} \} $.
Then, there are two linear maps $ r:\mathcal{H}\mapsto \mathcal{S} $ and $ n:\mathcal{H}\mapsto \mathcal{S}^{\perp} $ such that for each $ \mathbf{x}_{\mathcal{H}} \in \mathcal{H} $, $ \mathbf{x}_{\mathcal{H}} = r(\mathbf{x}_{\mathcal{H}}) + n(\mathbf{x}_{\mathcal{H}}) $.\n\t\end{lemma}\n\t\n\t\n\tThe claim is trivially true if $ \mathbf{B}_{\mathcal{H}}\hat{\mathbf{w}} \in \mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}}) $, because this implies that the equality~\eqref{eq:xa=bw} holds, and thus Corollary~\ref{cor: LLA2GSA} applies.\n\tTherefore, we suppose that $ \mathbf{B}_{\mathcal{H}}\hat{\mathbf{w}} \not\in \mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}}) $. \n\t\n\tAccording to Lemma~\ref{lem:T}, $ \mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}}) $ is bijectively isometric to another complete space. This implies $ \mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}}) $ is closed, and thus by Lemma~\ref{lem:rn}, $ \hat{\mathbf{w}} $ can be decomposed as\n\t\begin{gather*}\n\t\t\hat{\mathbf{w}} = \mathbf{w}_{r} + \mathbf{w}_{n}\n\t\end{gather*}\n\tsuch that $ \mathbf{B}_{\mathcal{H}}\mathbf{w}_{r} \in \mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}}), \mathbf{B}_{\mathcal{H}}\mathbf{w}_{n} \in \mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}})^{\perp} $, which means $ \mathbf{B}_{\mathcal{H}}\mathbf{w}_{r} \perp \mathbf{B}_{\mathcal{H}}\mathbf{w}_{n} $. \n\t\n\tThe weak equivalence between SPA and LLA indicates that $ \mathbf{B}_{\mathcal{H}}\mathbf{w}_{r} $ is optimal to both SPA and LLA. By Corollary~\ref{cor: LLA2GSA}, $ \mathbf{G}^{\dagger}\mathbf{w}_{r} $ is an optimal solution to the problem~\eqref{op:approximate_solvable models}.\n\tNote that $ \mathbf{G} = \langle \mathbf{B}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}} $.\n\tIt suffices to prove that\n\t\begin{gather*}\n\t\t\mathbf{G}^{\dagger}\hat{\mathbf{w}} = \mathbf{G}^{\dagger}\mathbf{w}_{r} ,\n\t\end{gather*}\n\twhich is equivalent to saying that $ \mathbf{G}^{\dagger}\mathbf{w}_{n} = \mathbf{0} $. \n\tThis equality will be true if $ \mathbf{w}_{n} \in \mathrm{span}(\mathbf{G})^{\perp} $. \n\tSince $ \widetilde{\mathbf{X}}_{\mathcal{H}} = \mathbf{B}_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}} = \mathbf{B}_{\mathcal{H}}\mathbf{G} $ and $ \mathbf{B}_{\mathcal{H}}\mathbf{w}_{n} \in \mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}})^{\perp} $, we have\n\t\begin{gather*}\n\t\t\mathbf{0} = \langle \widetilde{\mathbf{X}}_{\mathcal{H}}, \mathbf{B}_{\mathcal{H}}\mathbf{w}_{n} \rangle_{\mathcal{H}} = \mathbf{G}^{T}\langle \mathbf{B}_{\mathcal{H}},\mathbf{B}_{\mathcal{H}} \rangle_{\mathcal{H}}\mathbf{w}_{n} = \mathbf{G}^{T}\mathbf{w}_{n} .\n\t\end{gather*}\n\tTherefore, $ \mathbf{w}_{n} \in \mathrm{null}(\mathbf{G}^{T}) = \mathrm{span}(\mathbf{G})^{\perp} $ where $ \mathrm{null}(\mathbf{G}^{T}) = \{ \mathbf{x}\mid \mathbf{G}^{T}\mathbf{x} = \mathbf{0} \} $.
\hfill $ \Box $\n\t\n\t\n\t\section{Proof of Proposition~\ref{prop:C=XP}}\n\tNote that we already have $ \mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}}) \subseteq \mathrm{span}(\mathbf{B}_{\mathcal{H}}) $ since $ \widetilde{\mathbf{X}}_{\mathcal{H}} = \mathbf{B}_{\mathcal{H}}\langle \mathbf{B}_{\mathcal{H}}, \mathbf{X}_{\mathcal{H}} \rangle_{\mathcal{H}} $.\n\tIt suffices to prove $ \mathrm{span}(\mathbf{B}_{\mathcal{H}}) \subseteq \mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}}) $.\n\tSince $ \mathbf{B}_{\mathcal{H}} = \mathbf{C}_{\mathcal{H}}\mathbf{A} = \mathbf{X}_{\mathcal{H}}\mathbf{PA} $, we have\n\t\begin{gather*}\n\t\t\widetilde{\mathbf{X}}_{\mathcal{H}} = \mathbf{B}_{\mathcal{H}}\mathbf{A}^{T}\mathbf{P}^{T}\mathbf{K} .\n\t\end{gather*}\n\tThe desired result $ \mathrm{span}(\mathbf{B}_{\mathcal{H}}) \subseteq \mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}}) $ will follow if $ \mathbf{A}^{T}\mathbf{P}^{T}\mathbf{K} \in \mathbb{R}^{s\times n} $ has full row rank, which is guaranteed by the fact that\n\t\begin{gather*}\n\t\t\mathbf{A}^{T}\mathbf{P}^{T}\mathbf{KP}\mathbf{A} = \langle \mathbf{B}_{\mathcal{H}}, \mathbf{B}_{\mathcal{H}} \rangle_{\mathcal{H}} = \mathbf{I} .\n\t\end{gather*}\n\t\hfill $ \Box $\n\t\n\t\section{Proof of Proposition~\ref{prop:reprst to equiv}}\n\tSuppose that the objective of kernel machine~\eqref{op:kernel models} satisfies the strong representer theorem, and let $ \hat{\mathbf{w}} $ be an optimal solution to the problem~\eqref{op:solvableLLA}. We will prove the proposition by contradiction. \n\tLet $ f = \mathbf{B}_{\mathcal{H}}\hat{\mathbf{w}} $, which is optimal to LLA~\eqref{op:recastLLA}. \n\tSimilar to the discussion in the proof of Proposition~\ref{prop:w2a}, $ f $ can be uniquely decomposed as $ f = f_{r}+f_{n} $ such that $ f_{r} \in \mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}}) $ and $ f_{n} \in \mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}})^{\perp} $. \n\tIf $ f $ is not optimal to the SPA~\eqref{op:projected kernel models}, there are two cases:\n\t\n\t1) If $ f \not\in \mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}}) $, then $ f_{n} \not= \mathbf{0} $. The fact that the kernel machine~\eqref{op:kernel models} satisfies the strong representer theorem indicates that $ \hat{\mathcal{R}}(f_{r}) < \hat{\mathcal{R}}(f) $, a contradiction.\n\t\n\t2) If $ f \in \mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}}) $, there exists another solution $ \hat{f} \in \mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}}) \subseteq \mathrm{span}(\mathbf{B}_{\mathcal{H}}) $ such that $ \hat{\mathcal{R}}(\hat{f}) < \hat{\mathcal{R}}(f) $, a contradiction.\n\t\n\tTherefore, $ f = \mathbf{B}_{\mathcal{H}}\hat{\mathbf{w}} $ must be optimal to SPA~\eqref{op:projected kernel models}.\n\t\n\tSuppose the objective of the kernel machine~\eqref{op:kernel models} satisfies the weak representer theorem. Let $ f \in \mathrm{span}(\mathbf{B}_{\mathcal{H}}) $ be an optimal solution to LLA, and denote its projection onto $ \mathrm{span}(\widetilde{\mathbf{X}}_{\mathcal{H}}) $ by $ \overline{f} $.\n\tThe weak representer theorem gives $ \hat{\mathcal{R}}(\overline{f}) \leq \hat{\mathcal{R}}(f) $, while the optimality of $ f $ implies $ \hat{\mathcal{R}}(f) \leq \hat{\mathcal{R}}(\overline{f}) $; hence $ \hat{\mathcal{R}}(f) = \hat{\mathcal{R}}(\overline{f}) $, and thus $ \overline{f} $ is optimal to both SPA~\eqref{op:projected kernel models} and LLA~\eqref{op:recastLLA}.
\hfill $ \Box $\n\t\n\t\n\t\n\n\t\section*{Acknowledgment}\n\tW.~Li and D.~Zhang were supported in part by the National Key R\&D Program of China (Nos. 2018YFC2001600 and 2018YFC2001602), the National Natural Science Foundation of China (Nos.~61732006, 61876082, and 61861130366), and the Royal Society-Academy of Medical Sciences Newton Advanced Fellowship (No.~NAF$\backslash$R1$\backslash$180371).\n\t\n\t\n\n\n\t\ifCLASSOPTIONcaptionsoff\n\t\newpage\n\t\fi\n\t\n\t\n\t\footnotesize\n\n\t\bibliographystyle{IEEEtranN}\n\t","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\section{Introduction}\n\nSentiment analysis is one of the most famous Natural Language Processing (NLP) tasks. This task has been used in social network services~\cite{pang2008opinion,asur2010predicting}, e-retailing, advertising~\cite{qiu2010dasa,jin2007sensitive}, question answering systems~\cite{somasundaran2007qa,stoyanov2005multi} and many other domains. It focuses on the automatic prediction of polarity or sentiment in tweets or reviews. While most computer science research in this field has focused on strict positive\/negative sentiment analysis, the three dominant theories~\cite{marsella2010computational,stelmack1991galen} of emotion agree that humans express or operate with much more nuanced emotion representations. In other words, tweets and reviews, in recent times, include non-standard representations of emotion like emoticons, emojis, etc. The task of sentiment analysis has become increasingly complex due to the addition of creatively spelt words (e.g., ``gm\" for ``good morning\", ``hpy\" for ``happy\") and hashtags, particularly in the case of tweets.\n\nCurrent research in sentiment analysis is gearing towards evaluating emotion intensity in a text to identify and quantify discrete emotions, which can help in the above-mentioned applications and many new ones. Here, intensity refers to the degree or quantity of an emotion such as anger, fear, joy, or sadness. For example, consider the three statements ``The product is awesome and delivery is before time\", ``It was waste of money and time\" and ``This TV is ok ok product at this budget range\". The above three statements respectively express the level of satisfaction as very happy, very sad and moderately happy. This illustrates the different intensities of happiness of the particular person. Similarly, a person expresses different intensities of other emotions like anger, frustration, etc.\n\n\section{Related Work}\nIn the literature, there has been an increasing focus on building sentiment classification\/prediction models through various approaches like rule mining, machine learning or deep learning. A brief overview of the efforts of the scientific community towards sentiment-related models can be found in ~\cite{pang2008opinion,paltoglou2010online,wilson2004just,liu2012survey}.\n\nMany prior works on emotion detection have used manual strategies to map emotion categories to emotional expressions. However, such manual categorization requires an understanding of the emotional content of each expression, which is a time-consuming and arduous task. In~\cite{warriner2013norms}, emotions are projected as points in a 3-dimensional space of valence (positiveness-negativeness), arousal (active-passive), and dominance (dominant-submissive).
Building on this theory, huge effort has been devoted to creating valence lexicons like MPQA~\cite{wilson2005recognizing}, Norms Lexicon~\cite{warriner2013norms}, NRC Emotion Lexicon~\cite{mohammad2017emotion}, WordNet Affect Lexicon~\cite{baccianella2010sentiwordnet} and many others. However, these lexicon-based approaches usually ignore the intensity of emotions and sentiment, which provides important information for fine-grained sentiment analysis. Current research shifts towards automatic emotion classification, which has been proposed for many different kinds of text, including tweets~\cite{mohammad2015using,mohammad2017emotion}. \n\nExisting approaches to analyze intensity are based simply on lexicons, word-embeddings, combinational features and supervised learning. ~\cite{nielsen2011new} introduced lexicon-based methods that rely on lexicons to assign an intensity score to each word in the tweet. However, such methods do not consider the semantic information in the text. Some supervised methods like deep neural networks were applied to tweet sentiment analysis to predict the polarity~\cite{dos2014deep}.\nAlthough deep learning methods outperform lexicon-based methods, as shown in~\cite{dos2014deep}, they cannot capture the fine-grained property of the sentiment in a text. To capture this fine-grained aspect of a sentiment,~\cite{mohammad2016sentiment} proposed to identify the intensity of emotion in texts. To further expand the scope of emotion analysis, ~\cite{W17-5205,SemEval2018Task1} introduced the EmoInt-2017 and SemEval-2018 shared tasks, where the top-performing teams used deep learning models such as CNN, RNN, LSTMs~\cite{goel2017prayas,koper2017ims} and classifiers like Support Vector Machine or Random Forest ~\cite{duppada2017seernet,koper2017ims}. In the above two tasks, some participants used ensemble-based approaches, such as simply averaging the outputs of two top-performing models~\cite{duppada2017seernet,DBLP:conf\/semeval\/DuppadaJH18} or taking the weighted average of the predicted outputs of three different deep neural network based models~\cite{goel2017prayas}. \nThe subtasks of SemEval-2018 Task-1, AIT~\cite{SemEval2018Task1} are detailed in Section 3.\n\nThe structure of the paper is as follows. Section 3 describes the dataset. Section 4 describes the approach we use to build the model, while Section 5 discusses our preprocessing and feature extraction. Section 6 presents comparative results of various models along with the analysis of the results. Section 7 presents concluding remarks and future work. \n\n\section{Dataset Description}\nWe used the dataset from SemEval-2018 Task 1: AIT \footnote{\url{https:\/\/competitions.codalab.org\/competitions\/17751\#learn_the_details-datasets}} for training our system. \nThere is a total of five subtasks: EI-reg (Emotion Intensity regression), EI-oc (Emotion Intensity ordinal classification), V-reg (Valence regression), V-oc (Valence ordinal classification) and E-c (Emotion multi-label classification). Each subtask has three datasets: train, dev, and test. In this paper, we worked on all five subtasks mentioned above. The dataset details are briefly shown in Table~\ref{dataset}.
\n\n\begin{table}[ht]\n\centering\n\begin{tabular}{||c|c c c c||} \hline \hline\n\textbf{Dataset} & \textbf{Train} & \textbf{Dev} & \textbf{Test} & \textbf{Total} \\ \hline \hline\nEI-reg, EI-oc & & & &\\\nanger &1701& 388 &1002 & 3091 \\ \nfear &2252& 389 &986 & 3011 \\ \n\njoy &1616& 290 &1105 & 2905 \\ \nsadness &1533& 397 &975 & 2905 \\ \nV-reg, V-oc &1181& 886 &3259 & 2567 \\ \nE-c &6838 &886 &3259 &10953 \\ \bottomrule\n\end{tabular}\t\n\caption{SemEval-2018 Task-1 Dataset Details}\n\label{dataset}\n\end{table}\n\n\n\n\n\section{Approach}\nWe took inspiration from the Mixture of Experts (MoE)~\cite{jacobs1991adaptive,nowlan1991evaluation} regression and classification models, where each expert tunes to some subset of all the features. \n\subsection{MoE Description}\nIn this subsection, we briefly describe the MoE model to enable the readers to relate our model to the MoE architecture. The MoE architecture consists of a number of experts and a gating network. In MoE, there are parameters for each of the experts and a separate set of parameters for the gating network. The expert and gate parameters are trained simultaneously using Expectation Maximization~\cite{jordan1994hierarchical} or a gradient descent approach~\cite{jordan1995convergence}.\n\nConsider the following regression problem. Let $X = \{\mathbf{x}^{(n)}\}_{n=1}^{N}$ be $N$ input vectors (samples) and $Y = \{\mathbf{y}^{(n)}\}_{n=1}^{N}$ be the corresponding $N$ targets. Then, the MoE model is described in terms of the parameters $\theta = \{\theta_{g},\theta_{e}\}$, where $\theta_{g}$ is the set of gate parameters and $\theta_{e}$ is the set of expert parameters. Given a sample $\mathbf{x}$ from among the $N$ samples, the total probability of predicting target $\mathbf{y}$ can be written in terms of the experts as \n\begin{eqnarray}\n\label{eq:prob}\n\nonumber\nP(\mathbf{y}|\mathbf{x}, \theta) &=& \sum_{i=1}^{I} P(\mathbf{y}, i|\mathbf{x}, \theta) \\\n\nonumber\n &=& \sum_{i=1}^{I} P(i|\mathbf{x}, \theta_{g}) P(\mathbf{y}|i, \mathbf{x}, \theta_{e}) \\\n &=& \sum_{i=1}^{I} g_{i}(\mathbf{x}, \theta_{g}) P(\mathbf{y}|i, \mathbf{x}, \theta_{e})\n\end{eqnarray}\nwhere $I$ is the number of experts, the function $g_{i}(\mathbf{x}, \theta_{g}) = P(i|\mathbf{x}, \theta_{g})$ represents the probability of selecting the $i^{th}$ expert given $\mathbf{x}$ and $P(\mathbf{y}|i, \mathbf{x}, \theta_{e})$ represents the probability of the $i^{th}$ expert producing $\mathbf{y}$ on seeing $\mathbf{x}$.\n\nThe MoE training maximizes the log-likelihood of the probability in equation~\ref{eq:prob} to learn the parameters $\theta$~\cite{yuksel2012twenty}.\n\n\begin{figure*}[h]\n\begin{center}\n\includegraphics[width=0.8\textwidth]{moemodel_final}\n\caption{ Proposed Experts model}\n\label{fig:MOE}\n\end{center}\n\end{figure*}\n\n\subsection{Our Proposed Approach}\nWe use a similar architecture, with some modifications, to measure the intensity of an emotion in a tweet (regression) or predict an emotion intensity class (classification). In our proposed approach, we pre-train each expert, to get parameters $\theta_{e}$, on the training samples, unlike the traditional MoE model. Each expert, in itself, can be a separate Regression\/Classification model like a Multi-layer Perceptron (MLP) model, a Long Short-Term Memory (LSTM) model, or any other model that best suits the data and task at hand.
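\n\nAs a concrete illustration, the snippet below sketches the gate-training stage, assuming the experts are already pre-trained and their predictions on the training set are precomputed; the error function and gradient updates it implements are exactly those derived in the remainder of this section, while the names and hyperparameter values (learning rate, number of epochs) are illustrative placeholders rather than our tuned settings.\n\begin{verbatim}\nimport numpy as np\n\ndef softmax(w):\n    e = np.exp(w - w.max())\n    return e \/ e.sum()\n\ndef train_gate(preds, y, eta=0.01, epochs=50):\n    # preds[i, n]: prediction of pre-trained expert i on sample n\n    I, N = preds.shape\n    w, b = np.zeros(I), np.zeros(I)\n    for _ in range(epochs):\n        for n in range(N):\n            prob = softmax(w)\n            err = y[n] - preds[:, n] + b  # y[i] - yhat[i] + b[i]\n            w -= eta * 0.5 * prob * (1 - prob) * err ** 2\n            b -= eta * prob * err\n    return w, b\n\n# One plausible way to combine the experts at test time:\n# prob = softmax(w); y_hat = prob @ (test_preds + b[:, None])\n\end{verbatim}\nSince each expert is already fit, this stage only learns the $I$ gate weights and biases, which keeps its training cost negligible compared to training the experts themselves.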
Once each expert is trained separately, we train only the gating network using a gradient descent approach. The detailed description of the model is depicted in Figure~\ref{fig:MOE} and explained below.\n\nWe build different models - Neural Network Classifier\/Regressor, Gradient Boosting Classifier\/Regressor, XGBoost Classifier\/Regressor, Random Forest Classifier\/Regressor, Lasso Regressor and Light Gradient Boosting Classifier\/Regressor - and train each of them with the extracted feature vector of each tweet. We obtain this feature vector by concatenating all of the features discussed in Section 5. We assign parameters $\theta_g$, weights and bias, for each Classifier\/Regressor at the gating network. Later, we train the gating network to fit the predicted $\hat{y}$ of each expert $i$ with the actual $y$ and learn the best $\theta_g$.\n\nLet $w[i]$ denote the weight of expert $i$ at the gating network. Let $b[i]$ denote the bias term for expert $i$ at the gating network. Let $I$ be the number of experts. We define the Error function ($E$) as \\\n\begin{equation*}\nE = \sum_{i=1}^{I} \frac{1}{2} prob[i] \Big (\mathbf{y}[i] - \hat{\mathbf{y}}[i] + b[i] \Big )^2\n\end{equation*}\nwhere $prob[i]$ is the softmax probability of weight $w[i]$, and $\mathbf{y}[i]$ is the actual $\mathbf{y}$ of the $i^{th}$ expert for some sample $\mathbf{x}$. Similarly, $\hat{\mathbf{y}}[i]$ is the predicted $\mathbf{y}$ of the $i^{th}$ expert for the same sample $\mathbf{x}$. It is to be noted that $\forall i\hspace{3pt} \mathbf{y}[i] = \mathbf{y}$. \n\nFor each sample $\mathbf{x}$ and $\mathbf{y}$, we train the gating network using the update equations of gradients as follows:\\\n\begin{align*}\n\frac{\partial E}{\partial w[i]} &= \frac{1}{2} prob[i] \Big (1 - prob[i] \Big) \Big(\mathbf{y}[i] - \hat{\mathbf{y}}[i] + b[i] \Big)^2 \\\n\frac{\partial E}{\partial b[i]} &= prob[i] \Big(\mathbf{y}[i] - \hat{\mathbf{y}}[i] + b[i] \Big)\n\end{align*}\nand\n\begin{align*}\nw[i] &= w[i] - \eta \, \frac{\partial E}{\partial w[i]} \\\nb[i] &= b[i] - \eta \, \frac{\partial E}{\partial b[i]}\n\end{align*}\nwhere $\eta$ is the learning rate.\n\n\section{Preprocessing \& Feature Extraction}\nTo preprocess each tweet, we first break all the contractions (like ``can't\" to ``cannot\", ``I'm\" to ``I am\", etc.), followed by spelling corrections, decoding special words and acronyms (like ``e g\" to ``eg\", ``fb\" to ``facebook\", etc.) and symbol replacements (like ``\$\" to ``dollar\", ``=\" to ``is equal to\", etc.). Then, we tokenize each tweet using the NLTK tweet tokenizer \footnote{\url{https:\/\/www.nltk.org\/api\/nltk.tokenize.html}}\n\nThe basic idea of using different experts and eclectic features is from the intuition that each expert learns from different aspects of the concatenated features. Hence, we explored and extracted a variety of features, and used only those that perform best among all the explored ones; these are explained in the following subsections. \n\subsection{Deep-Emoji Features}\nDeep-Emoji~\cite{felbo2017using} performs prediction using a model trained on a dataset of 1,246 million tweets and achieves state-of-the-art performance in sentiment, emotion, and sarcasm detection. We can use the architecture of Deep-Emoji and train the model using millions of tweets from social media to get a better representation of new data.
Using the pre-trained Deep-Emoji model, we extracted two different set of features - one, 64-dimensional vector from the softmax layer and the other, 2304-dimensional vector from attention layer. \n\n\\subsection{Word-Embedding Features}\nIn this paper, we tried four different pre-trained word-embedding approaches such as Word2Vec~\\cite{mikolov2013distributed}, GloVe~\\cite{pennington2014glove}, Edinburgh Twitter Corpus~\\cite{petrovic2010edinburgh} and FastText~\\cite{bojanowski2017enriching} for generating word vectors. We used the GloVe model of 300 dimensions.\n\n\\subsection{Skip-Thought Features}\nSkip-Thoughts vectors~\\cite{kiros2015skip} model is in the framework of encoder-decoder models. Here, an encoder maps words to sentence vector and a decoder is used to generate the surrounding sentences. The main advantage of Skip-Thought vectors is that it can produce highly generic sentence representations from an encoder that share both semantic and syntactic properties of surrounding sentences. Here, we used Skip-Thought vector encoder model to produce a 4800 dimension vector representation of each tweet.\n\n\\subsection{Lexicon Features}\nWe also chose various lexicon features for the model. The lexicon features include AFINN Lexicon~\\cite{nielsen2011new} (calculates positive and negative sentiment scores from the lexicon), MPQA Lexicon~\\cite{wilson2005recognizing} (calculates the number of positive and negative words from the lexicon), Bing Liu Lexicon~\\cite{bauman2017aspect} (calculates the number of positive and negative words from the lexicon), NRC Affect Intensities, NRC-Word-Affect Emotion Lexicon, NRC Hash-tag Sentiment Lexicon, Sentiment140 Lexicon~\\cite{go2009twitter} (calculates positive and negative sentiment score provided by the lexicon in which tweets are annotated by lexicons), and SentiWordNet~\\cite{baccianella2010sentiwordnet} (calculates positive, negative, and neutral sentiment score). The final feature vector is the concatenation of all the individual features.\n\n\\subsection{Hash-tag Intensity Features}\nThe work by~\\cite{mohammad2017emotion} describes that removal of the emotion word hashtags causes the emotional intensity of the tweet to drop. This indicates that emotion word hashtags are not redundant with the rest of the tweet in terms of the overall intensity. Here, we used Depeche mood dictionary~\\cite{staiano2014depeche} to get the intensities of hashtag words. We average the intensities of all hashtags of a single tweet to get the total intensity score.\n\n\\subsection{Stylometric Features}\nTweets and other electronic messages (e-mails, posts, etc.) are written far shorter, way more informal and much richer in terms of expressive elements like emoticons and aspects at both syntax and structure level, etc. Common techniques use stylometric features~\\cite{anchieta2015using} which are categorized into 5 different types: lexical, syntactic, structural, content specific, and idiosyncratic. In this paper, we used 7 stylometric features such as ``number of emoticons\", ``number of nouns\", ``number of adverbs\", ``number of adjectives\", ``number of punctuations\", ``number of words\", and ``average word length\".\n\n\n\\subsection{Unsupervised Sentiment Neuron Features}\nUnsupervised sentiment neuron model~\\cite{radford2017learning} provides an excellent learning representation of sentiment, despite being trained only on the text of Amazon reviews. A linear model using this representation results in good accuracy. 
This model represents a 4096 feature vector for any given input tweet or text.\n\n\n\n\\section{Experimental Setup \\& Results}\n\nTo train our proposed approach, we consider a total of five learning models, one for each expert: Gradient Boosting, XGBoost, Light Gradient Boosting, Random Forest, and Neural Network(NN) for subtasks EI-reg and V-reg. While for the subtasks EI-oc and V-oc, we consider all the models except NN model. For subtask E-c, we consider all the models except Light Gradient Boosting model.\n\n\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{||c|c||} \\hline \\hline\n\n\\textbf{Model} & \\textbf{Parameters}\\\\ \\hline \\hline\n & n\\_estimators: 3000, \\\\ \nGradient Boosting & Learning rate: 0.05 \\\\\n & Max\\_depth: 4\\\\ \\hline\n & n\\_estimators: 100\\\\ \nXGBoosting & learning\\_rate: 0.1\\\\\n& max\\_depth: 3\\\\ \\hline\n& Optimizer: adam \\\\\nNeural Network & Activation : relu\\\\ \\hline\n& n\\_estimators: 250\\\\ \nRandom Forest & max\\_depth: 4 \\\\ \\hline\n & n\\_estimators: 720 \\\\ \nLight Gradient Boosting & learning\\_rate: 0.05 \\\\\n& num\\_leaves: 5 \\\\ \\hline \\hline\n\\end{tabular}\n\\caption{Model-Parameters}\n\\label{model:parameters}\n\\end{table}\n\n\\setlength{\\tabcolsep}{1.3pt}\n\n\n\\begin{table*}[ht]\n\\centering\n\\begin{tabular}{||c|c c c c c|c c c c c||} \\hline \\hline\n\n & \\multicolumn{5}{c|}{\\textbf{EI-reg (Pearson (all instances))}} & \\multicolumn{5}{c||}{\\textbf{EI-reg (Pearson (gold in 0.5-1))}}\\\\ \n\\textbf{Team} & \\textbf{macro-avg} & \\textbf{anger} & \\textbf{fear} & \\textbf{joy} & \\textbf{sadness} & \\textbf{macro-avg} & \\textbf{anger} & \\textbf{fear} & \\textbf{joy} & \\textbf{sadness}\\\\ \\hline \\hline\nSeerNet & 0.799(1) & 0.827 & 0.779 & 0.792 & 0.798 & 0.638(1) & 0.708 & 0.608 & 0.708 & 0.608\\\\ \nNTUA-SLP & 0.776(2) & 0.782 & 0.758 & 0.771 & 0.792 & 0.610(2) & 0.636 & 0.595 & 0.636 & 0.595\\\\ \nPlusEmo2Vec & 0.766(3) & 0.811 & 0.728 & 0.773 & 0.753 & 0.579(5) & 0.663 & 0.497 & 0.663 & 0.497\\\\ \n\npsyML & 0.765(4) & 0.788 & 0.748 & 0.761 & 0.761 & 0.593(4) & 0.657 & 0.541 & 0.657 & 0.541\\\\ \n~\\textbf{Experts Model} & ~\\textbf{0.753(5)} & ~\\textbf{0.789} & ~\\textbf{0.742} & ~\\textbf{0.748} & ~\\textbf{0.733} & ~\\textbf{0.598(3)} & ~\\textbf{0.656} & ~\\textbf{0.582} & ~\\textbf{0.546} & ~\\textbf{0.608}\\\\ \nMedian Team & 0.653(23) & 0.654 & 0.672 & 0.648 & 0.635 & 0.490(23) & 0.526 & 0.497 & 0.420 & 0.517\\\\ \nBaseline & 0.520(37) & 0.526 & 0.525 & 0.575 & 0.453 & 0.396(37) & 0.455 & 0.302 & 0.476 & 0.350\\\\ \\hline \\hline\n\\multicolumn{9}{c}{Note : The numbers inside parenthesis in both macro-avg columns represent the rank}\n\\end{tabular}\n\\caption{Comparison of Regression results of various models with our Experts Model}\n\\label{EI-reg}\n\\end{table*}\n\n\\begin{table*}[ht]\n\\centering\n\\begin{tabular}{||c|c c c c c|c c c c c||} \\hline \\hline\n\n &\\multicolumn{5}{c|}{\\textbf{EI-oc (Pearson (all classes))}} & \\multicolumn{5}{c||}{\\textbf{EI-oc (Pearson (some emotion))}}\\\\ \n\\textbf{Team} &\\textbf{macro-avg}& \\textbf{anger} & \\textbf{fear} & \\textbf{joy} &\\textbf{sadness} & \\textbf{macro-avg} & \\textbf{anger}& \\textbf{fear} & \\textbf{joy} & \\textbf{sadness}\\\\ \\hline \\hline\nSeerNet & 0.695(1) & 0.706 & 0.637 & 0.720 & 0.717 & 0.547(1) & 0.559 & 0.458 & 0.610 & 0.560\\\\ \nPlusEmo2Vec & 0.659(2) & 0.704 & 0.528 & 0.720 & 0.683 & 0.501(4) & 0.548 & 0.320 & 0.604 & 0.533\\\\ \n\npsyML & 0.653(3) & 0.670 & 0.588 & 0.686 & 0.667 & 0.505(3) & 0.517 & 0.468 & 
0.570 & 0.463\\\\ \nAmobee & 0.646(4) & 0.667 & 0.536 & 0.705 & 0.673 & 0.480(5) & 0.458 & 0.367 & 0.603 & 0.493\\\\ \n\\textbf{Experts Model} & ~\\textbf{0.636(5)} & ~\\textbf{0.658} & ~\\textbf{0.576} & ~\\textbf{0.666} & ~\\textbf{0.644} & ~\\textbf{0.520(2)} & ~\\textbf{0.493} & ~\\textbf{0.502} & ~\\textbf{0.579} & ~\\textbf{0.509}\\\\ \nMedian Team & 0.530(17) & 0.530 & 0.470 & 0.552 & 0.567 & 0.415(17) & 0.408 & 0.310 & 0.494 & 0.448 \\\\\nBaseline & 0.394(26) & 0.382 & 0.355 & 0.469 & 0.370 & 0.296(26) & 0.315 & 0.183 & 0.396 & 0.289\\\\ \\hline \\hline\n\\multicolumn{11}{c}{Note: The numbers inside parentheses in both macro-avg columns represent the rank}\n\\end{tabular}\n\\caption{Comparison of classification results of various models with our Experts Model}\n\\label{EI-oc}\n\\end{table*}\n\n\n\n\n\n\\subsection{Training Strategy}\nAt the input layer, we used a concatenation vector of all features: Deep-Emoji, Skip-Thought, Lexicons, Stylometric, BoW, Tf-IDF, Glove, Word2Vec, Edinburgh, and HashTagIntensity, which is the same for each expert. We combined both training and dev data and used them for training our model. The training is validated by a stratified K-fold approach, in which the model is repeatedly trained on K-1 folds and the remaining fold is used for validation. \n\nIn order to tune the hyper-parameters of our experts model, we adopt grid-search cross-validation for each learning model and set the model-specific parameters accordingly. Table~\\ref{model:parameters} shows the parameter settings for all experts.\n\n\n\n\n\n\n\\subsection{Results}\n\n\\begin{table*}[t]\n\\centering\n\\begin{minipage}{.45\\linewidth}\n\\centering\n\\begin{tabular}{||c|c c||} \\hline \\hline\n\n&\\multicolumn{2}{c||}{\\textbf{V-reg (Pearson)}} \\\\\n\\textbf{Team} &\\textbf{(all instances)} & \\textbf{(gold in 0.5-1)}\\\\ \\hline \\hline\n SeerNet & 0.873 & 0.697 \\\\ \n TCS Research & 0.861 & 0.680\\\\ \nPlusEmo2Vec & 0.860 & 0.691\\\\ \nNTUA-SLP & 0.851 & 0.688\\\\\n Amobee & 0.843 & 0.644 \\\\\n \\textbf{Experts Model} &\\textbf{0.830} & \\textbf{0.670}\\\\\n Median Team & 0.784 & 0.509 \\\\\n Baseline& 0.585 & 0.449\\\\ \\hline \\hline\n\\end{tabular}\n\\caption{Comparison of Valence-reg results of various models with our Experts Model}\n\\label{V-reg}\n\\end{minipage}\n\\hspace{0.5cm}\n\\begin{minipage}{.45\\linewidth}\n\\centering\n\\begin{tabular}{||c|c c||} \\hline \\hline\n\n&\\multicolumn{2}{c||}{\\textbf{V-oc (Pearson)}}\\\\ \n\\textbf{Team} &\\textbf{(all instances)} &\\textbf{(gold in 0.5-1)}\\\\ \\hline \\hline\nSeerNet &0.836 & 0.884\\\\\nPlusEmo2Vec & 0.833 & 0.878\\\\ \nAmobee & 0.813 & 0.865\\\\ \npsyML & 0.802 & 0.869\\\\ \nEiTAKA &0.796 & 0.838\\\\\n\\textbf{Experts Model} & \\textbf{0.738}&\\textbf{0.773} \\\\\nMedian Team & 0.682 & 0.754\\\\\nBaseline &0.509 &0.560 \\\\ \\hline \\hline\n\\end{tabular}\n\\caption{Comparison of Valence-oc results of various models with our Experts Model}\n\\label{V-oc}\n\\end{minipage}\n\\end{table*}\n\n\\setlength{\\tabcolsep}{1.5pt}\n\\begin{table}[!ht]\n\\centering\n\\begin{tabular}{||c|c c c||} \\hline \\hline\n\n&\\multicolumn{3}{c||}{\\textbf{E-c}} \\\\\n\\textbf{Team} &\\textbf{(acc.)} & \\textbf{(micro F1)} & \\textbf{(macro F1)}\\\\ \\hline \\hline\n NTUA-SLP & 0.588(1) & 0.701 & 0.528 \\\\ \n TCS Research & 0.582(2) & 0.693 & 0.530\\\\ \nPlusEmo2Vec & 0.576(4) & 0.692 & 0.497\\\\ \npsyML & 0.574(5) & 0.697 & 0.574\\\\\n \\textbf{Experts Model} &\\textbf{0.578(3)} & 
\\textbf{0.691} & \\textbf{0.581}\\\\\n Median Team & 0.471(17) & 0.599 & 0.464 \\\\\n Baseline& 0.442(21) & 0.570 & 0.443\\\\ \\hline \\hline\n \\multicolumn{4}{c}{Note: The numbers inside parentheses in the} \\\\\n \\multicolumn{4}{c}{accuracy column represent the rank}\n\\end{tabular}\n\\caption{Comparison of E-c results of various models with our Experts Model}\n\\label{E-c}\n\\end{table}\n \nTo evaluate our computational model, we compare our results with the SemEval-2018 Task-1 (Affect in Tweets) baseline results, the top five performers, and the Median Team (as per the SemEval-2018 results). The results for EI-reg, EI-oc, V-reg, V-oc and E-c are shown in Tables~\\ref{EI-reg},~\\ref{EI-oc},~\\ref{V-reg},~\\ref{V-oc} and~\\ref{E-c}, respectively. The tables illustrate (a) the results obtained by our proposed approach, (b) the top five performers in SemEval-2018, (c) the results obtained by a baseline SVM system using unigrams as features, and (d) the Median Team among all submissions. From Tables~\\ref{EI-reg} and~\\ref{EI-oc}, we observe that our model (considering only the macro-averaged Pearson correlation) for EI-reg and EI-oc stands within the top 5 places among 48 submissions. A quick walk-through of Table~\\ref{EI-reg} for individual emotions shows that anger and fear rank $3^{rd}$ and $4^{th}$ respectively for EI-reg Pearson (all instances), while for EI-reg Pearson (gold in 0.5-1), fear stands at the $3^{rd}$ position and sadness equals the score of the top performer. Similarly, Table~\\ref{EI-oc} for the classification results shows that anger and fear rank $4^{th}$ and $3^{rd}$ respectively for EI-oc Pearson (all classes), and for EI-oc Pearson (some emotion), anger, fear, joy and sadness stand at positions $4^{th}$, $1^{st}$, $4^{th}$ and $3^{rd}$ respectively. \nIt is to be noted that in both Tables~\\ref{EI-reg} and~\\ref{EI-oc}, the numbers inside parentheses under the column ``macro-avg\" represent the rank according to the macro-avg Pearson scores. These values show that our model stands at the $3^{rd}$ and $2^{nd}$ positions in EI-reg Pearson (gold in 0.5-1) and EI-oc Pearson (some emotion) respectively. \nTables~\\ref{V-reg} and~\\ref{V-oc} illustrate that the results from our model are among the top 10 submissions for subtasks V-reg and V-oc. \nTable~\\ref{E-c} shows the results of multi-label emotion classification (11 classes). 
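The macro-averaged Pearson scores that determine the ranks in Tables~\\ref{EI-reg} and~\\ref{EI-oc} are plain averages of the four per-emotion correlations. The following is a minimal sketch of this computation (the score arrays are hypothetical placeholders; we assume the \\texttt{scipy} implementation of the Pearson correlation):\n\\begin{verbatim}\n# Sketch: macro-averaged Pearson score over the four emotions.\nimport numpy as np\nfrom scipy.stats import pearsonr\n\nEMOTIONS = ['anger', 'fear', 'joy', 'sadness']\n\ndef macro_avg_pearson(gold, pred):\n    # gold, pred: dicts mapping emotion -> array of intensity scores\n    return np.mean([pearsonr(gold[e], pred[e])[0] for e in EMOTIONS])\n\\end{verbatim}\n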
\nOur model is among the top 3 submissions for the Jaccard similarity (accuracy) metric, in the top 5 for the micro F1 metric, and tops the submissions for the macro F1 metric.\n\n\n\n\\subsection{Metrics}\n\n\\begin{figure*}[!htb]\n\\centering\n\\begin{minipage}{.47\\textwidth}\n\\includegraphics[width=\\linewidth,height=5cm]{Plot_45_anger}\n\n\\begin{center}\n(a) Emotion: Anger\n\\end{center}\n\\end{minipage}\n\\qquad\n\\begin{minipage}{.47\\textwidth}\n\\includegraphics[width=\\linewidth,height=5cm]{Plot_45_fear}\n\n\\begin{center}\n(b) Emotion: Fear\n\\end{center}\n\\end{minipage}\n\\qquad\n\\begin{minipage}{.47\\textwidth}\n\\includegraphics[width=\\linewidth,height=5cm]{Plot_45_joy}\n\n\\begin{center}\n(c) Emotion: Joy\n\\end{center}\n\\end{minipage}\n\\qquad\n\\begin{minipage}{.47\\textwidth}\n\\includegraphics[width=\\linewidth,height=5cm]{Plot_45_sadness}\n\n\\begin{center}\n(d) Emotion: Sadness\n\\end{center}\n\\end{minipage}\n\\caption{Feature importance: Comparison of Pearson scores for each feature vector \\& the concatenated vector}\n\\label{fig:2fig}\n\\end{figure*}\n\nWe use the competition metric, the Pearson Correlation Coefficient with the gold ratings\/labels from SemEval-2018 Task-1 AIT, for EI-reg, EI-oc, V-reg and V-oc. Further, the macro-average was calculated by averaging the correlation scores of the four emotions anger, fear, joy, and sadness for the tasks EI-reg and EI-oc. Along with the Pearson Correlation Coefficient, we use some additional metrics for each subtask. The additional metric used for the EI-reg and V-reg tasks was the Pearson correlation calculated only over the subset of test samples whose intensity score was greater than or equal to 0.5. For the classification subtasks EI-oc and V-oc, we use as additional metric the Pearson correlation calculated only over the classes corresponding to some emotion (low, moderate, or high emotion). However, for the multi-label emotion classification E-c, we used the official evaluation metrics: Jaccard similarity (accuracy), micro-averaged F1 score and macro-averaged F1 score over all the classes.\n\nFigure~\\ref{fig:2fig} shows the influence of each feature type on the scores for predicting the intensity or emotion. We can observe from Figure~\\ref{fig:2fig} that the Pearson scores for the ``Deep-Emoji\" and ``Deep-Emoji-Softmax\" features dominate those of the other feature types. The Skip-Thought, Lexicons, Glove, and Edinburgh features contribute approximately equally for each of the four emotions. However, the Stylometric features and the features from Unsupervised sentiment neurons perform the worst. \n\n\n\n\n\n\n\n\\section{Conclusion}\nIn this paper, we have proposed a novel approach inspired by the standard Mixture-of-Experts model to predict the intensity of an emotion (regression), the level of an emotion (classification), or multiple emotion labels (multi-label classification). Experimental results show that our proposed approach can effectively deal with the emotion detection problem and stands in the top 5 when compared with the SemEval-2018 Task-1 AIT results and baseline results.\nAs most of the Pearson scores are in the range of 0.50 to 0.75, there is a lot of scope for improvement in predicting emotions or quantifying the emotion intensity through various other approaches, which are yet to be unfolded. The source code is publicly available at~\\url{https:\/\/goo.gl\/NktJhF} so that researchers\nand developers can work on this exciting problem collectively. 
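For concreteness, the following is a minimal sketch of the per-expert training procedure described in the Training Strategy section. The library APIs from \\texttt{scikit-learn}, \\texttt{xgboost} and \\texttt{lightgbm} are assumed; the feature matrix \\texttt{X} and targets \\texttt{y}, as well as the simple averaging used to combine the experts, are illustrative placeholders rather than our exact pipeline. The grids contain the Table~\\ref{model:parameters} values:\n\\begin{verbatim}\n# Sketch of the expert training loop (regression subtasks).\nimport numpy as np\nfrom sklearn.ensemble import (GradientBoostingRegressor,\n                              RandomForestRegressor)\nfrom sklearn.model_selection import GridSearchCV, KFold\nfrom xgboost import XGBRegressor\nfrom lightgbm import LGBMRegressor\n\n# One (estimator, parameter grid) pair per expert.\nEXPERTS = {\n    'gb': (GradientBoostingRegressor(),\n           {'n_estimators': [3000], 'learning_rate': [0.05],\n            'max_depth': [4]}),\n    'xgb': (XGBRegressor(),\n            {'n_estimators': [100], 'learning_rate': [0.1],\n             'max_depth': [3]}),\n    'rf': (RandomForestRegressor(),\n           {'n_estimators': [250], 'max_depth': [4]}),\n    'lgbm': (LGBMRegressor(),\n             {'n_estimators': [720], 'learning_rate': [0.05],\n              'num_leaves': [5]}),\n}\n\ndef train_experts(X, y, n_splits=10):\n    # Grid-search cross-validation for every expert,\n    # cf. the Training Strategy section.\n    cv = KFold(n_splits=n_splits, shuffle=True, random_state=0)\n    fitted = {}\n    for name, (estimator, grid) in EXPERTS.items():\n        search = GridSearchCV(estimator, grid, cv=cv)\n        fitted[name] = search.fit(X, y).best_estimator_\n    return fitted\n\ndef predict(fitted, X):\n    # Placeholder combination step: average the expert predictions.\n    return np.mean([m.predict(X) for m in fitted.values()], axis=0)\n\\end{verbatim}\n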
\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\label{intro}\n\n\nWe consider {\\em a linear phonon Boltzmann equation} in contact with a heat bath at the origin.\nThis equation describes the evolution, after a proper kinetic limit,\nof the phonon energy in a chain of harmonic oscillators\nwith random scattering of velocities, where one oscillator is in contact with a heat bath at temperature $T$.\n\nWe denoting {by}\n$W(t,y,k)$ the energy density of phonons at time $t\\ge0$,\nwith respect to their position $y\\in{\\mathbb R}$ and frequency variable\n$k\\in{\\mathbb T}$ - the one dimensional circle, understood as the\n interval $[-1\/2,1\/2]$ with identified endpoints. The heat bath creates an interface localized at $y=0$.\nOutside the interface the density satisfies\n\\begin{equation}\n \\label{eq:8}\n \\begin{array}{ll}\n \\partial_tW(t,y,k) + \\bar\\omega'(k) \\partial_y W(t,y,k) = \\gamma_0 L W(t,y,k), &\n\\quad (t,y,k)\\in{\\mathbb R}_+\\times {\\mathbb R}_*\\times{\\mathbb T}_*\n\\end{array}\n\\end{equation}\nHere \n$$\n{\\mathbb R}_+:=(0,+\\infty), \\quad {\\mathbb R}_*:={\\mathbb R}\\setminus\\{0\\},\\quad\n{\\mathbb T}_*:={\\mathbb T}\\setminus\\{0\\},\n$$\nThe parameter $\\gamma_0>0$ represents the phonon scattering rate.\nThe scattering operator $L$, acting only on the $k$-variable,\nis given by \n$$\nLF(k):= {\\displaystyle \\int_{{\\mathbb T}}}R(k,k')\n\\left[F\\left(k'\\right) - F\\left(k\\right)\\right]dk',\\quad k\\in{\\mathbb T}\n$$\nfor $F$ belongs to $ B_b({\\mathbb T})$ - the set of bounded measurable, real valued functions.\nHere\n\\begin{equation}\n\\label{barom}\n\\bar\\omega'(k)=\\frac{\\omega'(k)}{2\\pi},\\quad k\\in{\\mathbb T},\n\\end{equation}\n{where $ \\omega:{\\mathbb T}\\to[0,+\\infty)$ is {\\em the dispersion relation} of the harmonic chain}.\n\nThe interface conditions that describe the interaction of a phonon\nwith a thermostat (placed at $y=0$), at temperature $T>0$, are given as follows:\n\n- the outgoing densities are given in terms of the incoming ones as\n\\begin{equation}\\label{feb1408}\n \\begin{split}\nW(t,0^+, k)&=p_-(k)W(t,0^+, -k)+p_+(k)W(t,0^-,k)+T\\mathfrak g(k), \\quad \\hbox{ for $0< k\\le 1\/2$},\\\\\nW(t,0^-, k)&=p_-(k)W(t,0^-,-k) + p_+(k)W(t,0^+, k)+T\\mathfrak g(k), \\quad \\hbox{ for $-1\/2< k< 0$}.\n\\end{split}\n\\end{equation}\nwhere $p_-,p_+,{\\frak g}:{\\mathbb T}\\to(0,1]$ are continuous and\n\\begin{equation}\n\\label{012304}\np_+(k)+p_-(k)+ \\mathfrak g(k)=1. \n\\end{equation}\n\n\nIn other words, $p_-(k)$ and $p_+(k)$ are the reflection and\ntransmission coefficients across the interface, respectively. They correspond to the\nprobabilities of the phonon being reflected, or transmitted, by the interface.\nThe quantity\n$T\\mathfrak g(k)$ is the phonon production rate by the thermostat as well as the absorption rate of the\nfrequency $k$ phonon by the interface. The parameter $T>0$ is the heat bath temperature.\nThis equation has been obtained in \\cite{bos}, without the heat bath,\nas the Boltzmann-Grad limit of the energy density \nfunction for a microscopic model of a heat conductor consisting\nof a one dimensional chain of harmonic oscillators, with\ninter-particle scattering conserving the energy and volume.\nIn the presence of the thermostat, but with no scattering (the case $\\gamma_0=0$),\nthe limit has been proved \\cite{kors}.\nIt is believed that the limit also holds in case of the presence of\nscattering, i.e. when $\\gamma_0>0$. 
\n\n\nWe are interested in the asymptotics of the solutions\nto \\eqref{eq:8} under the diffusive scaling, i.e. the limit, as $\\epsilon\\to0$, for\n$W^{\\varepsilon}(t,y,k) = W(t\/\\varepsilon^2,y\/\\varepsilon,k)$, i.e. we consider the equation\n\\begin{equation}\\label{resc:eq}\\begin{split}\n &\\partial_t W^{\\varepsilon}(t,y,k)\n +\\frac 1 {\\varepsilon} \\; \\bar\\omega'(k) \\partial_y W^{\\varepsilon}(t,y,k)= \n {\\frac{\\gamma }{\\varepsilon^2}} \\int_{{\\mathbb T}}R(k,k') \\left[\n W^{\\varepsilon}\\left(t,y,k'\\right) - W^{\\varepsilon}\\left(t,y,k\\right)\\right]\\;\n dk', \\qquad y\\neq 0,\\\\\n&W^{\\varepsilon}(0,y,k)=W_0(y,k),\n\\end{split}\\end{equation} \nwith the interface conditions \\eqref{feb1408}.\nLet $R(k) = \\int R(k,k') dk'$. \nIn our main result, see Theorem \\ref{thm011302-19a} below,\nwe prove that under the assumption \n\\begin{equation}\n\\label{041402-19a}\n\\int_{{\\mathbb T}}\\frac{\\omega'(k)^2}{R(k)} dk<+\\infty,\n\\end{equation}\nand some other technical hypotheses, formulated in Sections \\ref{sec2.2}\nand \\ref{sec2.3} below,\nfor any $G\\in C_0^\\infty({\\mathbb R}\\times{\\mathbb T})$ -compactly supported, smooth\nfunction - we have\n\\begin{equation}\n\\label{061502-19}\n\\lim_{\\epsilon\\to0}\\int_{{\\mathbb R}\\times{\\mathbb T}} W^{\\varepsilon}(t,y,k)G(y,k)dydk=\\int_{{\\mathbb R}\\times{\\mathbb T}} \\rho(t,y)G(y,k)dydk,\n\\end{equation}\nwhere\n\\begin{align}\n\\label{heat10}\n&\n\\partial_t\\rho(t,y)=D\\partial_y^2\\rho(t,y),\\quad (t,y)\\in{\\mathbb R}_+\\times{\\mathbb R}_*,\\\\\n &\n \\rho(t,0^\\pm)\\equiv T\\\\\n &\n \\rho(0,y)=\\rho_0(y):=\\int_{{\\mathbb T}}W_0(y,k)dk. \\nonumber\n\\end{align}\nThe diffusion coefficient is given by \n\\begin{equation}\nD = \\frac{1}{\\gamma} \\int \\omega'(k) (-L)^{-1}\\omega'(k) \\; dk \\label{eq:D}\n\\end{equation}\nthat is finite by the assumption \\eqref{041402-19a} and the properties of $R(k,k')$ made in section\n\\ref{sec2.2}.\n\nThe result implies that only the absorption and the creation of phonons at the interface\n matter in this diffusive scale. Phonons that are reflected or transmitted will come back to the interface, due to scattering, and eventually get absorbed in a shorter time scale.\n\nThe diffusive limit has been considered, without\nthe presence of\ninterface, in \\cite{b,jko,mmm}. It has been shown there that, if \\eqref{041402-19a}\nis in force, then the solutions of the initial problem \\eqref{resc:eq} converge, as in \\eqref{061502-19},\nto $\\rho(t,y)$ - the solution of the Cauchy problem for the heat\nequation \\eqref{heat10}.\nWhen the condition \\eqref{041402-19a} is violated a superdiffusive\nscaling may be required and the limit could be a fractional\ndiffusion. This case has been also considered in\n\\cite{bb,AMP,jko,mmm}.\n\nThe case of the diffusive limit of the solution of a kinetic equation\nwith an absorbing boundary\n has been considered in e.g. \\cite{LK, BLP, DL, BSS,BBGS}.\nThe diffusive limit with some other boundary conditions has\nalso been discussed in the review paper\n\\cite{BGS}, see the references contained\ntherein. 
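\nFor the reader's convenience, let us record how \eqref{resc:eq} arises from \eqref{eq:8} (here we take $\gamma_0=\gamma$): by the chain rule,\n$$\n\partial_tW^{\varepsilon}(t,y,k)=\frac{1}{\varepsilon^{2}}\,(\partial_tW)\Big(\frac{t}{\varepsilon^2},\frac{y}{\varepsilon},k\Big),\qquad \partial_yW^{\varepsilon}(t,y,k)=\frac{1}{\varepsilon}\,(\partial_yW)\Big(\frac{t}{\varepsilon^2},\frac{y}{\varepsilon},k\Big),\n$$\nso evaluating \eqref{eq:8} at the point $(t\/\varepsilon^2,y\/\varepsilon,k)$ and multiplying by $\varepsilon^{-2}$ yields \eqref{resc:eq}. The interface conditions \eqref{feb1408} are invariant under this scaling, since they involve no derivatives.\n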
A related result for the radiative transport equation with some\n reflection\/transmission condition has been obtained in \cite{BR} for the steady state,\n giving rise to different boundary conditions.\n We are not aware of a similar result in the dynamical case, as considered\n in the present paper.\n\nThe result for a fractional diffusive limit with the interface\ncondition is the subject of the paper \cite{kro}.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Preliminaries and the statement of the main result}\n\n\n\n\\subsection{Weak solution of the kinetic equation with an interface} \n\nIn what follows we denote ${\mathbb R}_-:=(-\infty,0)$. Consider \n$\n\widetilde W(t,y,k):=W(t,y,k)-T$. It satisfies the equation\n\eqref{eq:8} with the interface given by\n\begin{equation}\label{feb1408h}\n\widetilde W(t,0^+, k)=p_-(k)\widetilde W(t,0^+, -k)+p_+(k)\widetilde W(t,0^-,k), \hbox{ for $0< k\le 1\/2$},\n\end{equation}\nand\n\begin{equation}\label{feb1410h}\n\widetilde W(t,0^-, k)=p_-(k)\widetilde W(t,0^-,-k) + p_+(k)\widetilde W(t,0^+, k), \quad \hbox{ for $-1\/2< k< 0$}.\n\end{equation}\n\n\n\begin{df}\n\label{df013001-19}\nA function\n$\widetilde W:\bar {\mathbb R}_+\times{\mathbb R}\times {\mathbb T}\to{\mathbb R}$ is called a (weak)\nsolution to equation \eqref{eq:8} with the interface conditions\n\eqref{feb1408h} and \eqref{feb1410h}, provided that\n it belongs to\n$L^2_{\rm loc}({\mathbb R}_+,L^2({\mathbb R}\times{\mathbb T}))$, its restrictions\n$\widetilde W_\iota$ to ${\mathbb R}_+\times{\mathbb R}_{\iota}\times {\mathbb T}$, $\iota\in\{-,+\}$,\nextend to continuous functions on ${\mathbb R}_+\times\bar{\mathbb R}_{\iota}\times\n{\mathbb T}$ that satisfy \eqref{feb1408h} and \eqref{feb1410h},\n and\n\begin{align}\n\label{062504x}\n&\n0=\int_0^{+\infty}\int_{{\mathbb R}\times {\mathbb T}}\widetilde W(t,y,\n k)\left[\partial_t\varphi(t,y,k)+ \; \bar\omega'(k) \partial_y\n \varphi(t,y,k)+\gamma L\varphi(t,y,k)\right]dt dydk\nonumber\\\n&+\int_{{\mathbb R}\times {\mathbb T}}W_0(y,\n k)\varphi(0,y,k)dydk\n\end{align}\nfor any test function $\varphi\in\nC_0^\infty(\bar{\mathbb R}_+\times{\mathbb R}_*\times {\mathbb T})$.\n\end{df}\n\n\n\n\\subsection{Assumption about the dispersion relation and the\n scattering kernel} \n\\label{sec2.2}\n\nWe assume that\n$\omega(\cdot)$ is even, belongs to $C^\infty({\mathbb T}\setminus\{0\})$,\ni.e., it is smooth outside $k=0$. 
Furthermore we assume that $\omega(\cdot)$ is unimodal,\nwhich implies that $k\omega'(k) \ge 0$ for $k\in (-1\/2, 1\/2)$.\n\nWe assume that the scattering kernel is symmetric\n\begin{equation}\n\label{sym}\nR(k,k')=R(k',k),\n\end{equation}\npositive, except for the $0$ frequency, i.e.\n\begin{equation}\n\label{pos}\nR(k,k')>0,\quad (k,k')\in{\mathbb T}_*^2,\n\end{equation}\nand the total scattering kernel\n\begin{equation}\n\label{tot}\nR(k):=\int_{{\mathbb T}}R(k,k')dk'\n\end{equation}\nis such that the stochastic kernel\n\begin{equation}\n\label{stoch}\np(k,k'):=\frac{R(k,k')}{R(k)}\in C^\infty({\mathbb T}^2).\n\end{equation}\nIn addition we assume that \eqref{041402-19a} is in force.\n\n\n\n{\bf Example.} Suppose that\n$$\nR(k)\sim R_0|\sin(\pi k)|^{\beta},\quad |k|\ll1\n$$\nfor some $\beta\ge 0$ and $R_0>0$\nand \n\begin{equation}\n\label{A}\n\omega'(k)\sim 2\omega_0' \,{\rm sign}(k)\,|\sin(\pi k)|^{\kappa},\quad |k|\ll 1\n\end{equation}\nfor some $\kappa\ge 0$.\nThen \eqref{041402-19a} holds, provided that\n\begin{equation}\n\label{041402-19}\n0\le \beta<1+2\kappa.\n\end{equation}\n\n\n\n\n\\subsection{About the reflection, transmission and\n absorption coefficients}\n\\label{sec2.3}\n\nIn \cite{kors} the coefficients $p_{\pm}(k)$ and $\mathfrak g(k)$ are obtained from the microscopic dynamics\nand depend on the dispersion relation as follows.\n\n\nLet $\gamma>0$ (the thermostat strength) and\n\begin{equation}\n\label{tg}\n\tilde g(\lambda) := \left( 1 + \gamma \int_{{\mathbb T}} \frac{\lambda dk}{\lambda^2 + \omega^2(k)} \right)^{-1}, \quad {\rm Re}\,\lambda>0.\n\end{equation} \nIt turns out, see \cite{kors}, that \n\begin{equation}\n\label{012410}\n|\tilde g(\lambda)|\le 1,\quad \lambda\in \mathbb C_+:=[\lambda\in \mathbb C:\, {\rm Re}\,\lambda>0].\n\end{equation} \nThe function $\tilde g(\cdot)$ is analytic on $ \mathbb C_+$. \nBy Fatou's theorem\nwe know that\n\begin{equation}\n\label{nu}\n\nu(k) :=\lim_{\epsilon\to0+}\tilde g(\epsilon-i\omega(k))\n\end{equation}\nexists a.e. in ${\mathbb T}$ and in any $L^p({\mathbb T})$ sense for\n$p\in[1,\infty)$. \nDenote \n\begin{align}\n\label{022304}\n&\mathfrak g(k) := \frac{\gamma |\nu(k)|^2}{|\bar\omega'(k)|},\n\qquad \mathfrak P(k) := \frac{\gamma \nu(k)}{2|\bar\omega'(k)|} \nonumber\\\n&\np_+(k):= \left|1 - \mathfrak P(k)\right|^2 \\\n&\np_-(k):=\left|\mathfrak P(k)\right|^2.\nonumber\n\end{align}\nIt has been shown in Section 10 of \cite{kors} that\n\begin{equation}\n\label{feb1402}\n{\rm Re}\,\nu(k)=\left(1+\frac{\gamma}{2|\bar\omega'(k)|}\right)|\nu(k)|^2.\n\end{equation}\nThis identity implies in particular that\n\eqref{012304} is in force. Indeed, $p_+(k)+p_-(k)=1-2{\rm Re}\,\mathfrak P(k)+2|\mathfrak P(k)|^2$, and substituting \eqref{feb1402} into the definition of $\mathfrak P$ in \eqref{022304} yields $p_+(k)+p_-(k)=1-\gamma|\nu(k)|^2|\bar\omega'(k)|^{-1}=1-\mathfrak g(k)$.\n\n\n\n\n\n\n\\subsection{Scaled kinetic equations and the formulation of the main result}\n\nConsider $W^{\varepsilon}$, the solution of the rescaled kinetic equation\n\eqref{resc:eq}. Our main result can be stated as follows.\n\begin{thm}\n\label{thm011302-19a}\nSuppose that $W_0(y,k)=T+\widetilde W_0(y,k)$, where $\widetilde W_0\in\nL^2({\mathbb R}\times{\mathbb T})$. 
Under the assumptions made about the scattering kernel\n$R(\cdot,\cdot)$ and the dispersion relation $\omega(\cdot)$,\nfor any test function $\varphi\in C_0^\infty({\mathbb R}_+\times\n{\mathbb R}\times{\mathbb T})$ we have\n\begin{equation}\n\label{011302-19}\n\lim_{\epsilon\to0}\int_0^{+\infty}dt\int_{{\mathbb R}\times{\mathbb T}}W^{\varepsilon}(t,y,k)\varphi(t,y,k)dydk=\int_0^{+\infty}dt\int_{{\mathbb R}\times{\mathbb T}}\rho(t,y)\varphi(t,y,k)dydk,\n\end{equation}\nwhere $\rho(\cdot,\cdot)$ is the solution of the heat equation\n\begin{align}\n\label{heat1}\n&\n\partial_t\rho(t,y)=D\partial_y^2\rho(t,y),\quad (t,y)\in{\mathbb R}_+\times{\mathbb R}_*,\nonumber\\\n&\n\\\n&\n\rho(0,y)=\rho_0(y):=\int_{{\mathbb T}}W_0(y,k)dk,\quad \rho(t,0)\equiv\n T,\,t>0.\nonumber\n\end{align}\nHere, the coefficient $D>0$ is given by \eqref{def:D1} below.\n\end{thm}\n\n\n\n\n\nDefining $\widetilde W^{\varepsilon} = W^{\varepsilon} - T$, one can easily see that it also\nsatisfies \eqref{resc:eq} with\n\begin{equation}\label{feb1408vet}\n\widetildeW^{\varepsilon}(t,0^+, k)=p_-(k) \widetildeW^{\varepsilon}(t,0^+, -k)+p_+(k) \widetildeW^{\varepsilon}(t,0^-,k), \hbox{ for $0\le k\le 1\/2$},\n\end{equation}\nand\n\begin{equation}\label{feb1410vet}\n\widetildeW^{\varepsilon}(t,0^-, k)=p_-(k) \widetildeW^{\varepsilon}(t,0^-,-k) + p_+(k) \widetildeW^{\varepsilon}(t,0^+, k), \quad \hbox{ for $-1\/2\le k\le 0$}.\n\end{equation}\nThe initial condition\n$\n\widetildeW^{\varepsilon}(0,y,k)=\widetilde W_0(y,k):=W_0(y,k)-T\n$ \nbelongs to\n$L^2({\mathbb R}\times{\mathbb T})$.\nThis means that it is enough to prove Theorem \ref{thm011302-19a} for $T=0$.\nThis proof is presented in Section \ref{sec4}.\n\n\n\n\n\n\n\n\n\\section{Some auxiliaries}\n\n\n\n\\subsection{Some functional spaces}\nLet $H_+^1$ be the Hilbert space obtained as the completion of the\nSchwartz class ${\mathcal S}({\mathbb R}\times {\mathbb T})$ in the norm\n $$\n \|\varphi\|_{H^1_+}^2:= \|\varphi\|_{L^2({\mathbb R}\times {\mathbb T})}^2+\int_0^{+\infty}\int_{{\mathbb T}}|\omega'(k)|[\partial_y\varphi(y,k)]^2 dydk.\n $$\n Similarly, we introduce $H^1_-$.\n\nLet\n ${\mathcal H}$ be the Hilbert space obtained as the completion of ${\mathcal S}({\mathbb R}\times {\mathbb T})$ in the norm\n \begin{equation}\n\label{calH}\n \|\varphi\|_{{\mathcal H}}^2:= \|\varphi\|_{H^1_-}^2+ \|\varphi\|_{H^1_+}^2.\n \end{equation}\n Let also\n $$\n \|\varphi\|_{{\mathcal H}_0}^2:= \int_{{\mathbb T}}|\omega'(k)| \mathfrak g(k)\n \left\{[\varphi(0^+,k)]^2+[\varphi(0^-,k)]^2\right\}dk+ \int_{{\mathbb R}\times{\mathbb T}^2} [\varphi(y,k')-\varphi(y,k)]^2dy dk dk'.\n $$\n\n\n\n\\subsection{A priori bounds}\n\nComputing the time derivative we have\n\begin{equation}\n \label{eq:14}\n \frac12 \frac{d}{dt} \|\widetildeW^{\varepsilon}(t)\|_{L^2}^2 = -\frac{\gamma}{2\varepsilon^2} \int_{-\infty}^{\infty} dy \mathcal D(\widetildeW^{\varepsilon}(t,y, \cdot)) - \frac{1}{2\varepsilon} \int_{{\mathbb T}} \; \bar\omega'(k) \left[\widetildeW^{\varepsilon}(t,0^-, k)^2-\widetildeW^{\varepsilon}(t,0^+, k)^2\n \right] dk ,\n\end{equation}\nwith\n\begin{equation}\n\label{cD}\n \mathcal D(f):=\int_{{\mathbb T}^2}R(k,k')[f(k)-f(k')]^2dkdk' .\n\end{equation}\nTaking into account \eqref{feb1408vet} and \eqref{feb1410vet} we obtain\n\begin{align*}\n&\int_{{\mathbb T}} \; \bar\omega'(k) \left\{
[\\widetildeW^{\\varepsilon}(t,0^-, k)]^2-[\\widetildeW^{\\varepsilon}(t,0^+, k)]^2 \\right\\} dk\n\\\\\n&\n=\\int_{0}^{1\/2} \\; \\bar\\omega'(k) \\left\\{ [\\widetildeW^{\\varepsilon}(t,0^-, k)]^2-\\left[p_-(k) \\widetildeW^{\\varepsilon}(t,0^+, -k)+p_+(k) \\widetildeW^{\\varepsilon}(t,0^-,k)\\right]^2 \\right\\} dk\\\\\n&\n+\\int_{-1\/2}^0 \\; \\bar\\omega'(k) \\left\\{ \\left[p_-(k) \\widetildeW^{\\varepsilon}(t,0^-,-k) + p_+(k) \\widetildeW^{\\varepsilon}(t,0^+, k)\\right]^2-[\\widetildeW^{\\varepsilon}(t,0^+, k)]^2 \\right\\} dk.\n\\end{align*}\nAfter straightforward calculations (recall that coefficients $p_\\pm(k)\n$ are even, while $\\bar\\omega'(k)$ is odd) we conclude that the right hand\nside equals\n\\begin{align*}\n \\int_{0}^{1\/2} \\; \\bar\\omega'(k) & \\left\\{ \\left(\\widetildeW^{\\varepsilon}(t,0^-, k)^2+ \\widetildeW^{\\varepsilon}(t,0^+, -k)^2\\right)\n \\left(1-p_+^2(k)-p_-^2(k)\\right)\\right. \\\\\n&\n\\left.-\n4p_-(k)p_+(k)\\widetildeW^{\\varepsilon}(t,0^+, -k)\\widetildeW^{\\varepsilon}(t,0^-,k)\\right\\} dk.\n\\end{align*}\nSince\n$p_+(k)+p_-(k)\\le 1$\nwe have $1-p_+^2(k)-p_-^2(k)\\ge 0$. In addition,\n\\begin{align*}\n&\n{\\rm det}\n\\left[\n\\begin{array}{ll}\n1-p_+^2(k)-p_-^2(k)&-2p_-(k)p_+(k)\\\\\n&\\\\\n-2p_-(k)p_+(k)&1-p_+^2(k)-p_-^2(k)\n\\end{array}\n\\right]\n=\\left[1-(p_+(k)+p_-(k))^2\\right]\\left[1-(p_+(k)-p_-(k))^2\\right].\n\\end{align*}\nUsing \\eqref{012304} we conclude that the quadratic form\n$$\n(x,y)\\mapsto \\left(1-p_+^2(k)-p_-^2(k)\\right)(x^2+y^2)-4p_-(k)p_+(k)xy\n$$\nis positive definite as long as $p_+(k)+p_-(k)<1$. The eigenvalues of the form can be determined from the equation\n\\begin{align*}\n&\n0\n=\\left[1-\\lambda-(p_+(k)+p_-(k))^2\\right]\\left[1-\\lambda-(p_+(k)-p_-(k))^2\\right],\n\\end{align*}\nwhich yields\n$$\n\\lambda_+:=1-(p_+(k)-p_-(k))^2,\\quad \\lambda_-:=1-(p_+(k)+p_-(k))^2\n$$\nand $\\lambda_+>\\lambda_-$. 
Note that\n$$\n2 \mathfrak g(k)\ge \lambda_-=\mathfrak g(k)\left[1+p_+(k)+p_-(k)\right]\ge \mathfrak g(k).\n$$\nEquality \eqref{eq:14} allows us to obtain the following a priori bounds\n\begin{equation}\n \label{eq:16}\n \begin{split}\n &\|\widetildeW^{\varepsilon}(t)\|_{L^2({\mathbb R}\times{\mathbb T})}^2 \le \|\widetilde W_0\|_{L^2({\mathbb R}\times{\mathbb T})}^2,\\\n &\int_0^t ds \int_{{\mathbb R}} \mathcal D(\widetildeW^{\varepsilon}(s,y, \cdot)) dy\le\n \frac{\varepsilon^2}{\gamma} \|\widetilde W_0\|_{L^2({\mathbb R}\times{\mathbb T})}^2, \\\n &\int_0^t ds \int_0^{1\/2}\bar\omega'(k) \mathfrak g(k) dk \; \left(\n \widetildeW^{\varepsilon}(s,0^-, k)^2 + \widetildeW^{\varepsilon}(s,0^+,-k)^2\right) \le \varepsilon \|\widetilde W_0\|_{L^2({\mathbb R}\times{\mathbb T})}^2.\n \end{split}\n\end{equation}\nBy \eqref{feb1408vet} and \eqref{feb1410vet} we obtain that\n\begin{equation}\n \label{eq:2}\n \begin{split}\n \widetildeW^{\varepsilon}(s,0^+, k)^2 \le \widetildeW^{\varepsilon}(s,0^-, k)^2 + \widetildeW^{\varepsilon}(s,0^+, -k)^2, \quad k\in (0,1\/2)\\\n \widetildeW^{\varepsilon}(s,0^-, k)^2 \le \widetildeW^{\varepsilon}(s,0^-, -k)^2 + \widetildeW^{\varepsilon}(s,0^+, k)^2, \quad k\in (-1\/2,0).\n\end{split}\n\end{equation}\nThen using the unimodality of $\omega(k)$ it follows that\n\begin{equation}\n \label{eq:16a}\n \int_0^t ds \int_{{\mathbb T}} dk |\omega'(k)| \mathfrak g(k) \; \left(\n \widetildeW^{\varepsilon}(s,0^-, k)^2 + \widetildeW^{\varepsilon}(s,0^+,k)^2\right) \le\n 2\varepsilon \|\widetilde W_0\|_{L^2({\mathbb R}\times{\mathbb T})}^2.\n\end{equation}\n\n\n\n\n\\subsection{Uniform continuity at $y=0$}\n\nSuppose that $y>0$. 
\nLet\n\\begin{equation}\n\\label{Vep}\nV_{\\varepsilon}(t,y,k):=\\int_0^t\\widetildeW^{\\varepsilon}(s,y,k)ds.\n\\end{equation}\nSince $\\widetildeW^{\\varepsilon}(s,y, k)$ satisfies \\eqref{resc:eq} we can write\n\\begin{equation}\\label{resc:eq-1}\\begin{split}\n\\varepsilon\\left[ \\widetildeW^{\\varepsilon}(t,y,k)- \\widetildeW^{\\varepsilon}(0,y,k)\\right]\n + \\; \\bar\\omega'(k)\\partial_yV_{\\varepsilon}(t,y,k)= F_\\epsilon(t,y,k),\n\\end{split}\\end{equation} \nwith\n$$\n F_\\epsilon(t,y,k):={\\frac{\\gamma }{\\varepsilon}} \\int_{{\\mathbb T}} R(k,k')\\left[V_{\\varepsilon}\\left(t,y,k'\\right) - V_{\\varepsilon}\\left(t,y,k\\right)\\right]\\; dk', \\qquad y\\neq 0.\n$$\nHence, using Cauchy-Schwarz inequality, we get\n\\begin{align*}\n&\n\\|F_\\epsilon(t,\\cdot)\\|^2_{L^2({\\mathbb R}\\times{\\mathbb T})}=\\left({\\frac{\\gamma\n }{\\varepsilon}}\\right)^2\\int_{{\\mathbb R}\\times{\\mathbb T}}dydk\\left\\{\\int_0^tds\\int_{{\\mathbb T}}\n R(k,k')\\left[\\widetildeW^{\\varepsilon}(s,y,k') - \\widetildeW^{\\varepsilon}(s,y,k)\\right]\\;\n dk'\\right\\}^2\\\\\n&\n\\le \\left({\\frac{\\gamma}{\\varepsilon}}\\right)^2\\int_{{\\mathbb R}\\times{\\mathbb T}}dydk\\left\\{\\int_0^tds\\int_{{\\mathbb T}}\n R(k,k')dk'\\right\\}\\left\\{\\int_0^tds\\int_{{\\mathbb T}}\n R(k,k')\\left[\\widetildeW^{\\varepsilon}(s,y,k') - \\widetildeW^{\\varepsilon}(s,y,k)\\right]^2\\;\n dk'\\right\\}\\\\\n&\n\\le t\\|R\\|_\\infty \\left({\\frac{\\gamma}{\\varepsilon}}\\right)^2 \\int_0^tds\\int_{{\\mathbb R}}\\mathcal\n D(\\widetildeW^{\\varepsilon}(s,y, \\cdot)) dy.\n\\end{align*}\nUsing the second estimate of \\eqref{eq:16} we conclude that for each $t_0>0$\n\\begin{equation}\n\\label{Feps}\n\\sup_{\\epsilon\\in(0,1]}\\sup_{t\\in[0,t_0]}\\|F_\\epsilon(t,\\cdot)\\|_{L^2({\\mathbb R}\\times{\\mathbb T})} \\le\n\\gamma t_0\\|R\\|_\\infty \\|\\widetilde W_0\\|_{L^2({\\mathbb R}\\times{\\mathbb T})}^2 <+\\infty.\n\\end{equation}\nFrom \\eqref{resc:eq-1} we conclude that\n\\begin{equation}\\label{resc:eq-2}\\begin{split}\n\\partial_yV_{\\varepsilon}(t,y,k)= \\frac{\\tilde F_\\epsilon(t,y,k)}{\\omega'(k)},\\qquad\ny, \\bar\\omega'(k)\\neq 0,\n\\end{split}\\end{equation} \nwhere\n$$\n \\tilde F_\\epsilon(t,y,k)= F_\\epsilon(t,y,k)-\\varepsilon\\left[ \\widetildeW^{\\varepsilon}(t,y,k)- \\widetildeW^{\\varepsilon}(0,y,k)\\right].\n$$\nFrom \\eqref{Feps} and the first estimate of \\eqref{eq:16} we conclude\n$$\n\\sup_{\\epsilon\\in(0,1]}\\sup_{t\\in[0,T]}\\|\\tilde F_\\epsilon(t,\\cdot)\\|_{L^2({\\mathbb R}\\times{\\mathbb T})}=:\\tilde F_*(T)<+\\infty.\n$$\n\n\n\nWe have \n\\begin{equation}\n\\label{x-2}\n\\|V_{\\varepsilon}(t,\\cdot)\\|_{H^1_\\pm}\\le \\|\\tilde F_\\epsilon(t,\\cdot)\\|_{L^2({\\mathbb R}\\times {\\mathbb T})}\\quad t\\ge0.\n\\end{equation}\n Since \n$\n\\dot V_\\epsilon(t)=\\widetilde W_\\epsilon(t),\n$\nfrom the first estimate of \\eqref{eq:16} we conclude that for any $t_0>0$\n$$\n\\sup_{\\epsilon\\in(0,1]}\\left\\|\\dot V_{\\varepsilon}\\right\\|_{L^\\infty([0,t_0];L^2({\\mathbb R}\\times{\\mathbb T}))}<+\\infty.\n$$\nFrom \\eqref{x-2} we get also (cf \\eqref{calH})\n$$\n\\sup_{\\epsilon\\in(0,1]}\\left\\|V_{\\varepsilon}\\right\\|_{L^\\infty([0,t_0];{\\mathcal H})}<+\\infty.\n$$\nSummarizing, we have shown the following.\n\\begin{prop}\n\\label{prop012404}\nFor any $t_0>0$\n\\begin{equation}\n \\label{012404a}\nC(t_0):=\\sup_{\\epsilon\\in(0,1]}\\left(\\left\\|V_{\\varepsilon}\\right\\|_{L^\\infty([0,t_0];{\\mathcal\n H})}+\\left\\|\\dot 
V_{\\varepsilon}\\right\\|_{L^\\infty([0,t_0];L^2({\\mathbb R}\\times{\\mathbb T}))}\\right)<+\\infty\n\\end{equation}\nand \n$$\n\\lim_{\\varepsilon\\to0+}\\left\\|V_{\\varepsilon}\\right\\|_{L^\\infty([0,t_0];{\\mathcal H}_0)}=0.\n$$\n\\end{prop}\n\n\n\\bigskip\n\nDenote by $W^{1,\\infty}_{0}([0,t_0];L^2({\\mathbb R}\\times{\\mathbb T}))$ the\ncompletion of the space of smooth functions $f:[0,t_0]\\to L^2({\\mathbb R}\\times{\\mathbb T})$ satisfying\n$f(0)=0$, with respect to the norm\n$$\n\\|f\\|_{W^{1,\\infty}_{0}([0,t_0];L^2({\\mathbb R}\\times{\\mathbb T}))}:=\\sup_{t\\in[0,t_0]}\\|\\dot f\\|_{L^2({\\mathbb R}\\times{\\mathbb T})}.\n$$\n As a consequence of the above proposition we immediately conclude the following.\n\\begin{corollary} \n\\label{cor012604}\nThe family $\\left(V_{\\varepsilon}(\\cdot)\\right)_{\\epsilon\\in(0,1]}$ is bounded in\n$\nW^{1,\\infty}_{0}([0,t_0];L^2({\\mathbb R}\\times{\\mathbb T}))\\cap L^\\infty([0,t_0];{\\mathcal H})\n$ for aby $t_0>0$.\nAny $\\star$-weak limit point $V(\\cdot)$ of $V_{\\varepsilon}(\\cdot)$, as $\\varepsilon\\to0+$, satisfies \nthe following:\n\\begin{itemize}\n\\item[1)] $V(t,y,k)\\equiv \\bar V(t,y):=\\mathlarger{\\int}_{{\\mathbb T}}V(t,y,k)dk$ for\n $(t,y,k)\\in{\\mathbb R}_+\\times{\\mathbb R}\\times {\\mathbb T})$\n\\item[2)] the mapping ${\\mathbb R}_+\\times{\\mathbb R}_\\iota\\ni (t,y)\\mapsto \\bar\n V(t,y)$ extends to a mapping from $C(\\bar{\\mathbb R}_+\\times\\bar{\\mathbb R}_\\iota)$,\n $\\iota\\in\\{-,+\\}$,\n\\item[3)] $V(t,0^\\pm)=0$ for each $t>0$,\n\\item[4)] $V(0,y)\\equiv 0$, $y\\in{\\mathbb R}$. \n\\end{itemize}\n\\end{corollary}\n\n\n\n\n\\section{Proof of Theorem \\ref{thm011302-19a}}\n\n\\label{sec4}\n\n\n\nThanks to the above estimates, we conclude that the solutions $\\widetildeW^{\\varepsilon}(\\cdot)$\nare $\\star$-weakly sequentially compact in $L^\\infty\\left([0,+\\infty);L^2_w({\\mathbb R}\\times {\\mathbb T})\\right)$,\nwhere $L^2_w({\\mathbb R}\\times {\\mathbb T})$ denotes \n$L^2({\\mathbb R}\\times {\\mathbb T})$ with the weak topology.\nSuppose that $\\bar W(t,y,k)$ is a limiting point for some subsequence $(\\widetilde W^{\\varepsilon_{n}}(s,y, k))$, \nwhere $\\varepsilon_n\\to0$. For convenience sake we shall denote the\nsubsequence by $(\\widetildeW^{\\varepsilon}(s,y, k))$. Thanks to \\eqref{eq:16}\nfor each $t>0$ we have (cf \\eqref{cD})\n\\begin{equation}\n\\label{052504}\n\\lim_{\\epsilon\\to0} \\int_0^tds\\int_{{\\mathbb R}}{\\mathcal D}\\left(\\widetildeW^{\\varepsilon}(s,y, \\cdot)\\right)dy =0\n\\end{equation}\nthus $\\bar W(t,y,k)\\equiv \\rho(t,y)$, for a.e. \n$(t,y,k)\\in{\\mathbb R}_+\\times{\\mathbb R}\\times{\\mathbb T}$.\n\\begin{lm}\n\\label{lm031402-19}\nEquation\n\\begin{equation}\n\\label{cor}\n-L X_1=\\bar\\omega'\n\\end{equation}\nhas a unique solution such that \n$$\n\\int_{{\\mathbb T}}X_1(k)R(k)dk=0\n$$\nand\n\\begin{equation}\n\\label{chi2}\n\\int_{{\\mathbb T}}X_1^2(k)R(k)dk<+\\infty.\n\\end{equation}\n\\end{lm} \n{\\em Proof. 
} Let $\\mu$ be a Borel probability measure on ${\\mathbb T}$ given by\n$$\n\\mu(dk)=\\frac{R(k)}{\\bar R}dk,\n$$ \nwhere\n$$\n\\bar R:=\\int_{{\\mathbb T}}R(k)dk.\n$$\nWe can reformulate \\eqref{cor} as\n\\begin{equation}\n\\label{cor1}\nX_1-PX_1=\\frac{\\bar\\omega'}{R},\n\\end{equation}\nwhere, by virtue of \\eqref{041402-19a}, the right hand side belongs to $L^2(\\mu)$ and $P:L^2(\\mu)\\to L^2(\\mu)$ is a symmetric operator on $L^2(\\mu)$ given by\n$$\nPF(k):=\\int_{{\\mathbb T}}p(k,k')F(k')dk',\\quad F\\in L^2(\\mu).\n$$\nThe operator is a compact contraction\nand, since\n$$\n\\int_{{\\mathbb T}}F(k) (I-P)F(k)F(k)\\mu(dk)={\\mathcal D}(F)\n$$\nwe conclude that $1$ is a simple eigenvalue, with the respective\neigenspace spanned on the eigenvector $F_0\\equiv 1$. Thus the conclusion\nof the lemma follows, as $\\bar\\omega'\/R\\perp F_0$.\n$\\Box$ \n\n\\bigskip\n\n\\begin{prop}\n\\label{prop021402-19}\nFor any function $\\varphi\\in C_0^\\infty\\left({\\mathbb R}_+\\times {\\mathbb R}_*\\right)$ we have\n\\begin{equation}\n\\label{022404}\n\\int_0^{+\\infty}\\int_{{\\mathbb R}}\\rho(t,y)\\left[\\partial_t\\varphi(t,y)+D\\partial_{yy}^2 \\varphi(t,y)\\right]dt dy=0.\n\\end{equation}\nwith \n\\begin{equation}\\label{def:D1}\n00.\n$$ \nThis, combined with \\eqref{051502-19}, implies that\n\\begin{equation}\n\\rho(t,y)=\\frac{1}{\\sqrt{4\\pi D t}}\\int_{{\\mathbb R}_\\pm}\\left\\{\\exp\\left\\{-\\frac{(y-y')^2}{4Dt}\\right\\}-\\exp\\left\\{-\\frac{(y+y')^2}{4Dt}\\right\\}\\right\\}\\rho_0(y')dy' ,\\quad t,\\,\\pm y>0 \\label{eq:3},\n\\end{equation}\nwhich satisfies the conclusion of Theorem \\ref{thm011302-19a} for $T=0$.\nThe only thing yet to be shown is the proof of Proposition \\ref{prop011502-19}.\n\n\n\n\\bigskip \n\n\\subsection*{Proof of Proposition \\ref{prop011502-19}}\nAccording to \\eqref{resc:eq-1} we have\n\\begin{equation}\\label{resc:eq-2}\\begin{split}\n\\partial_tV_{\\varepsilon}(t,y,k)\n + \\; \\frac{1}{\\varepsilon}\\bar\\omega'(k)\\partial_yV_{\\varepsilon}(t,y,k)= \\frac{\\gamma}{\\epsilon^2}LV_\\epsilon(t,y,k)+ \\widetildeW^{\\varepsilon}(0,y,k)\n\\end{split}\\end{equation} \nand obviously $V_\\epsilon(0,y,k)=0$ a.e.\nLet $\\varphi_\\varepsilon(t,y,k)$ be given by \\eqref{022404a}.\nFrom \\eqref{resc:eq-2} we can write\n\\begin{align}\n\\label{062504a}\n&\n0=\\int_0^{+\\infty}\\int_{{\\mathbb R}\\times {\\mathbb T}}\\partial_t\\left[V_\\varepsilon(t,y, k)\\varphi_\\varepsilon(t,y,k)\\right]dt dydk\\nonumber\\\\\n&\n=\\int_0^{+\\infty}\\int_{{\\mathbb R}\\times {\\mathbb T}}\\left\\{\\left[V_\\varepsilon(t,y, k)\\partial_t\\varphi_\\varepsilon(t,y,k)-\\frac 1 {\\varepsilon} \\; \\bar\\omega'(k) \\partial_y V_\\varepsilon(t,y,k)\\varphi_\\varepsilon(t,y,k)\\right.\\right.\\\\\n&\n\\left. 
\\left.+\\frac{\\gamma}{\\varepsilon^2}LV_\\varepsilon(t,y, k)\\varphi_\\varepsilon(t,y,k)\\right]+\\widetildeW^{\\varepsilon}(0,y,k) \\varphi_\\varepsilon(t,y,k)\\right\\}dt dydk\\nonumber\\\\\n&\n=\\int_0^{+\\infty}\\int_{{\\mathbb R}\\times {\\mathbb T}}\\left\\{V_\\varepsilon(t,y, k)\\left[\\partial_t\\varphi_\\varepsilon(t,y,k)+\\frac 1 {\\varepsilon} \\; \\bar\\omega'(k) \\partial_y \\varphi_\\varepsilon(t,y,k)\\right.\\right.\\nonumber\\\\\n&\n\\left.\\left.+\\frac{\\gamma }{\\varepsilon^2}L\\varphi_\\varepsilon(t,y,k)\\right]+ W_0(y,k)\\varphi_\\varepsilon(t,y,k)\\right\\}dt dydk.\\nonumber\n\\end{align}\nSubstituting from \\eqref{022404a} we obtain that the term in the square brackets has the form\n\\eqref{form},\nwith\n$I$,\n$\nI\\!I\n$\nand\n$\nI\\!I\\!I_\\epsilon\n$ as given in \\eqref{051502-19}. \nTaking the limit in \\eqref{062504a} we obtain \\eqref{022404a}.$\\Box$ \n\n\n\n\n\n\n\n\n\\bigskip\n\n\n\n\n\n\n \n{\\small\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\n\n\nThis is the first part of a series of papers on the long-time behavior of K\\\"ahler-Ricci flows on Fano manifolds. We will solve a long-standing conjecture\nin low dimensional case.\n\n\nLet $M$ be a Fano $n$-manifold. Consider the normalized K\\\"ahler-Ricci flow:\n\\begin{equation}\\label{KRF}\n\\frac{\\partial g}{\\partial t}\\,=\\,g\\,-\\,{\\rm Ric}(g).\n\\end{equation}\nIt was proved in \\cite{Ca85} that \\eqref{KRF} has a global solution $g(t)$ in the case that $g(0)=g_0$ has canonical K\\\"ahler class, i.e., $2\\pi c_1(M)$ as its\nK\\\"ahler class.\nThe main problem is to understand the limit of $g(t)$ as $t$ tends to $\\infty$.\nA desirable picture for the limit is given in the following folklore conjecture \\footnote{It has been often referred\nas the Hamilton-Tian conjecture in literatures, e.g., in \\cite{Pe02}. Also see \\cite{Ti97} for a formulation of this conjecture.}\n\n\\begin{conj}[\\cite{Ti97}]\n\\label{conj:HT}\n$(M,g(t))$ converges (at least along a subsequence) to a shrinking K\\\"ahler-Ricci soliton with mild singularities.\n\\end{conj}\n\nHere,``mild singularitie'' may be understood in two ways: (i) A singular set of codimension at least $4$, and (ii) a singular set of a normal variety. The first interpretation concerns the differential geometric part of the problem where the convergence is taken in the Gromov-Hausdorff topology, while in the second interpretation the spaces $(M,g(t))$ converge as algebraic varieties in some projective space. By extending the partial $C^0$-estimate conjecture \\cite{Ti10} to the K\\\"ahler-Ricci flow, one can show that these two approaches are actually equivalent (see Theorem \\ref{regularity:2} below and Section 5).\n\n\nThis conjecture implies another famous conjecture, the Yau-Tian-Donaldson conjecture, in the case of Fano manifolds. The Yau-Tian-Donaldson conjecture states that a Fano manifold $M$ admits a K\\\"ahler-Einstein metric if and only if it is K-stable. The necessary part of the conjecture is proved by the first named author in \\cite{Ti97}. Last Fall, the first named author gave a proof for the sufficient part (see \\cite{Ti12}) by establishing the partial $C^0$-estimate for conic K\\\"ahler-Einstein metrics.\nAnother proof was given in \\cite{ChDoSu1, ChDoSu2, ChDoSu3}. 
As we will see in the sections below, the essential step in the resolution of conjecture (\ref{conj:HT}), as\nin the proof of the Yau-Tian-Donaldson conjecture, is the Cheeger-Gromov convergence of the K\"ahler-Ricci flow.\n\n\nLet us recall some known facts on the K\"ahler-Ricci flow. By the noncollapsing result of Perelman \cite{Pe02}, there is a positive constant $\kappa$ depending only on $g_0$ such that\n\begin{equation}\label{volume noncollaping:0}\n\vol_{g(t)}(B_{g(t)}(x,r))\geq\kappa r^{2n},\hspace{0.5cm}\forall t\geq 0, r\leq 1.\n\end{equation}\nAlso by Perelman, the diameter and scalar curvature of $g(t)$ are uniformly bounded (see \cite{SeTi08} for a proof). Since the volume stays the same along the K\"ahler-Ricci flow, the noncollapsing property (\ref{volume noncollaping:0}) implies that for any sequence $t_i \rightarrow\infty$,\nby taking a subsequence if necessary, $(M,g(t_i))$ converge to a limiting length metric space $(M_\infty,d)$ in the Gromov-Hausdorff topology:\n\begin{equation}\label{e13}\n(M,g(t_i))\stackrel{d_{GH}}{\longrightarrow}(M_\infty,d).\n\end{equation}\nThe remaining question is the regularity of the limit $M_\infty$. In the case of Del-Pezzo surfaces, or in higher dimensions with the additional assumption of uniformly bounded Ricci curvature or Bakry-\'Emery-Ricci curvature, the regularity of $M_\infty$ has been checked, cf. \cite{Se05}, \cite{ChWa12} and \cite{TiZh12}.\nIf $M$ admits K\"ahler-Einstein metrics a priori, Perelman first claimed that the K\"ahler-Ricci flow converges to a smooth K\"ahler-Einstein metric and showed a few\ncrucial estimates towards his proof. Tian-Zhu gave a proof of this and generalized it to the case of K\"ahler-Ricci solitons under the assumption that the metric is invariant by the holomorphic vector field of the Ricci soliton \cite{TiZhu07}; see also \cite{TiZhu13, TZZZ}.\n\n\nThe main result of this paper is the following\n\n\begin{theo}\label{regularity:1}\nLet $(M,g(t))$, $t_i$ and $(M_\infty, d)$ be given as above. Suppose that for some uniform constants $p>n$ and $\Lambda< \infty$,\n\begin{equation}\label{Ricci:Lp0}\n\int_M|Ric(g(t))|^pdv_{g(t)}\,\leq\,\Lambda.\n\end{equation}\nThen the limit $M_\infty$ is smooth outside a closed subset $\mathcal{S}$ of (real) codimension $\geq 4$ and $d$ is induced by a smooth K\"ahler-Ricci soliton $g_\infty$ on $M_\infty\backslash\mathcal{S}$. Moreover, $g(t_i)$ converge to $g_\infty$ in the $C^\infty$-topology outside $\mathcal{S}$.\footnote{The convergence with these properties is also referred to as the convergence in the Cheeger-Gromov topology, see \cite{Ti90} for instance.}\n\end{theo}\n\n\n\begin{rema}\nIn view of the main result in \cite{TZZZ}, one should be able to prove that under the assumption of Theorem \ref{regularity:1}, the K\"ahler-Ricci flow $g(t)$ converges globally to $(M_\infty, g_\infty)$ in the Cheeger-Gromov topology as $t$ tends to $\infty$. If $M$ admits a shrinking K\"ahler-Ricci soliton, then by the uniqueness theorem of Berndtsson \cite{Be13} and Berman-Boucksom-Essydieux-Guedj-Zeriahi \cite{BB12}, the K\"ahler-Ricci flow should converge to the Ricci soliton. This will be discussed in a future paper.\n\end{rema}\n\n\n\nThe proof relies on Perelman's pseudolocality theorem \cite{Pe02} of Ricci flow and a regularity theory for manifolds with integral bounded Ricci curvature. 
The latter is a generalization of the regularity theory of Cheeger-Colding \cite{ChCo97, ChCo00} and Cheeger-Colding-Tian \cite{ChCoTi02} for manifolds with bounded Ricci curvature. We remark that the uniform noncollapsing condition (\ref{volume noncollaping:0}) also plays a role in the regularity theory; see Section 2 for further discussions.\n\n\nThe central issue is to check the integral condition of Ricci curvature under the K\"ahler-Ricci flow. Indeed we can prove the following partial integral estimate:\n\n\begin{theo}\nLet $(M,g(t))$ be as above. There exists some constant $\Lambda$ depending on $g_0$ such that\n\begin{equation}\label{Ricci:L4}\n\int_M|Ric(g(t))|^4dv_{g(t)}\,\leq\,\Lambda.\n\end{equation}\n\end{theo}\n\n\nTherefore, by the regularity result, we have\n\n\begin{coro}\nConjecture \ref{conj:HT} holds for dimension $n\leq3$.\n\end{coro}\n\n\nInspired by \cite{DoSu12} as well as \cite{Ti12, Ti13}, as a\nconsequence of Theorem \ref{regularity:1}, we establish the partial $C^0$ estimate for the K\"ahler-Ricci flow (see Section 5 for details).\nAs a direct corollary, we refine the regularity in Theorem \ref{regularity:1}.\n\n\begin{theo}\label{regularity:2}\nSuppose $(M,g(t_i))\stackrel{d_{GH}}{\longrightarrow}(M_\infty,g_\infty)$ as phrased in Theorem \ref{regularity:1}. Then\n$M_\infty$ is a normal projective variety and ${\mathcal S}$ is a subvariety of complex codimension at least $2$.\n\end{theo}\n\n\n\n\begin{rema}\nIf we consider a K\"ahler-Ricci flow on a normal Fano orbifold, then the limit $M_\infty$ is also a normal variety. The main ingredients in the proof of our regularity of K\"ahler-Ricci flow remain valid for orbifolds. Actually, using the convexity of the regular set one can generalize the regularity theory of Cheeger-Colding and Cheeger-Colding-Tian to orbifolds with integral bounded Ricci curvature. Moreover, Perelman's estimates on Ricci potentials and local volume noncollapsing, as well as the pseudolocality theorem, remain valid for the orbifold K\"ahler-Ricci flow.\n\end{rema}\n\n\n\nThe partial $C^0$ estimate of K\"ahler-Einstein manifolds plays the key role in Tian's program to resolve the Yau-Tian-Donaldson conjecture, see \cite{Ti90}, \cite{Ti97}, \cite{Ti10}, \cite{DoSu12} and \cite{Ti12} for example. An extension of the partial $C^0$ estimate to shrinking K\"ahler-Ricci solitons was given in \cite{PSS12}. These works are based on the compactness of Cheeger-Colding-Tian \cite{ChCoTi02} and its generalizations to K\"ahler-Ricci solitons by \cite{TiZh12}.\nBesides these known cases, the partial $C^0$ estimate conjecture proposed in \cite{Ti90, Ti90b} is still open in general.\n\n\nFinally we show the Yau-Tian-Donaldson conjecture from the Hamilton-Tian conjecture by the method of K\"ahler-Ricci flow. One can follow the arguments in \cite{Ti10} and \cite{Ti12}. Let $M$ be K-stable as defined in \cite{Ti97}. Suppose $(M,g(t_i))$, $t_i\rightarrow\infty$, converges in the Cheeger-Gromov topology to a shrinking K\"ahler-Ricci soliton $(M_\infty,g_\infty)$ (possibly with singularities) as in Theorem \ref{regularity:1}. We are going to show that $M_\infty$ is isomorphic to $M$ and $g_\infty$ is Einstein in Section 6, that is, we have\n\begin{theo}\nSuppose that $M$ is K-stable. 
If $(M,g(t_i))\\stackrel{d_{GH}}{\\longrightarrow}(M_\\infty,g_\\infty)$ as phrased in Theorem \\ref{regularity:1}, then $M_\\infty$ coincides with $M$ and $g_\\infty$ is a K\\\"ahler-Einstein metric.\n\\end{theo}\n\n\n\n\n\n\\begin{coro}\nThe Yau-Tian-Donaldson conjecture holds for dimension $\\leq 3$.\n\\end{coro}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Manifolds with integral bounded Ricci curvature}\n\nIn this section, following lines of Cheeger-Colding \\cite{ChCo96, ChCo97, ChCo00}, Cheeger-Colding-Tian \\cite{ChCoTi02} and Colding-Naber \\cite{CoNa12}, we\ndevelop a regularity theory for manifolds with integral bounded Ricci curvature. Let $(M,g)$ be an $m$-dimensional Riemannian manifold satisfying\n\\begin{equation}\\label{Ricci:Lp1}\n\\int_M|Ric_-|^pdv\\leq\\Lambda\n\\end{equation}\nfor some constants $\\Lambda<\\infty$ and $p>\\frac{m}{2}$, where $Ric_-=\\max_{|v|=1}\\big(0,-Ric(v,v)\\big)$. We may assume $\\Lambda\\ge 1$ in generality.\nFor applications to the regularity theory\nof K\\\"ahler-Ricci flow, we shall focus on the case when the manifold $(M,g)$ are uniformly locally noncollapsing in the sense that\n\\begin{equation}\\label{noncollapsing}\n\\vol(B(x,r))\\geq \\kappa r^m,\\hspace{0.3cm}\\forall x\\in M,r\\leq 1,\n\\end{equation}\nwhere $\\kappa>0$ is a fixed constant. It is remarkable that different phenomena would happen if we replace the condition (\\ref{noncollapsing}) by noncollapsing in a definite scale such as $\\vol(B(p,1))\\geq\\kappa$. Actually, due to an example of Yang \\cite{Ya92}, for any $p>0$, there exists Gromov-Hausdorff limit space of $m$-manifolds with uniformly $L^p$ bounded Riemannian curvature and $\\vol(B(x,1))\\geq\\kappa$ for any $x$ whose tangent cone at some points may collapse.\n\n\nThe geometry of manifolds with integral bounded Ricci curvature has been studied extensively by Dai, Petersen, Wei et al., see \\cite{PeWe00} and references therein. It is also pointed out in \\cite{PeWe00} that there should exist a Cheeger-Gromov convergence theory for such manifolds. The critical assumption added here is (\\ref{noncollapsing}). The regularity theory without this uniform noncollapsing condition is much more subtle and needs further study.\n\n\nWe start by reviewing some known results for manifolds satisfying (\\ref{Ricci:Lp1}) which are proved in \\cite{PeWe97, PeWe00}. Together with the segment inequalities proved in Subsection 2.4, these estimates will be sufficient to give a direct generalization of the regularity theory of Cheeger-Colding \\cite{ChCo96, ChCo97, ChCo00} and Cheeger-Colding-Tian \\cite{ChCoTi02} under noncollapsing condition (\\ref{noncollapsing}); cf. \\cite{PeWe00}. Then we derive some analytical results including the short-time heat kernel estimate on manifolds with additional assumption (\\ref{noncollapsing}) and apply these to derive the Hessian estimate to the parabolic approximations of distance functions as in \\cite{CoNa12}. 
This makes it possible to give a generalization of Colding-Naber's work on the H\"older continuity of tangent cones \cite{CoNa12} on the limit spaces of manifolds satisfying (\ref{Ricci:Lp1}) and (\ref{noncollapsing}).\n\n\nFor simplicity we will denote by $C(a_1,a_2,\cdots)$ a positive constant which depends on the variables $a_1,a_2,\cdots$ but may vary in different situations.\n\n\n\n\\subsection{Preliminary results}\n\n\n\nFor any $x\in M$, let $(t,\theta)\in\mathbb{R}^+\times S_x^{m-1}$ be the polar coordinate at $x$, where $S_x^{m-1}$ is the fiber of the unit sphere bundle at $x$. Write the Riemannian volume form in this coordinate as\n\begin{equation}\label{volume form}\ndv\,=\,{\cal A}(t,\theta)dt\wedge d\theta.\n\end{equation}\nLet $r(y)=d(x,y)$ denote the distance function to $x$. Then an immediate computation in the polar coordinate shows\n\begin{equation}\label{Laplace comparison:1}\n\triangle r\,=\,\frac{\partial}{\partial r}\log{\cal A}(r,\cdot).\n\end{equation}\nAs in \cite{PeWe97}, introduce the error function of the Laplacian comparison of distances\n\begin{equation}\label{Laplace comparison:2}\n\psi(r,\theta)\,=\,\bigg(\triangle r(\exp_x(r\theta))-\frac{m-1}{r}\bigg)_+,\n\end{equation}\nwhere $a_+=\max(a,0)$. Notice that $\psi$ depends on the base point $x$. For any subset $\Gamma\subset S_x$ define\n$$B_\Gamma(x,r)\,=\,\{\exp_x(t\theta)|0\leq t<r,\,\theta\in\Gamma\}.$$\nPetersen-Wei \cite{PeWe97} proved the following integral estimate of the comparison error $\psi$:\n\begin{equation}\label{Laplace comparison:3}\n\int_{B_\Gamma(x,r)}\psi^{2p}dv\,\leq\,C(m,p)\int_{B_\Gamma(x,r)}|Ric_-|^{p}dv,\quad\forall r>0,p>\frac{m}{2},\n\end{equation}\nwhere $C(m,p)=\big(\frac{(m-1)(2p-1)}{2p-m}\big)^p$. Based on this integral estimate, Petersen-Wei proved the following relative volume comparison theorem:\n\n\n\n\begin{theo}[\cite{PeWe97}]\nFor any $p>\frac{m}{2}$ there exists $C(m,p)$ such that the following holds\n\begin{equation}\label{volume comparison:1}\n\frac{d}{dr}\bigg(\frac{\vol(B_\Gamma(x,r))}{r^{m}}\bigg)^{\frac{1}{2p}}\,\leq\, C(m,p)\bigg(\frac{1}{r^{m}}\int_{B_\Gamma(x,r)}|Ric_-|^pdv\bigg)^{\frac{1}{2p}},\,\forall r>0.\n\end{equation}\nIntegrating gives, for any $r_2>r_1>0$,\n\begin{eqnarray}\label{vlume comparison:2}\n&&\bigg(\frac{\vol(B_\Gamma(x,r_2))}{r_2^m}\bigg)^{\frac{1}{2p}}-\bigg(\frac{\vol(B_\Gamma(x,r_1))}{r_1^m}\bigg)^{\frac{1}{2p}}\nonumber\\\n&&\hspace{3cm}\leq C(m,p)\bigg(r_2^{2p-m}\int_{B_\Gamma(x,r_2)}|Ric_-|^pdv\bigg)^{\frac{1}{2p}}.\n\end{eqnarray}\n\end{theo}\n\n\n\begin{rema}\nThe quantity $r^{2p-m}\int_{B_\Gamma(x,r)}|Ric_-|^pdv$ in the above inequality {\rm(\ref{vlume comparison:2})} is scaling invariant. Therefore, under the global integral condition of Ricci curvature {\rm(\ref{Ricci:Lp1})}, the volume ratio $\frac{\vol(B_\Gamma(x,r))}{r^m}$ will become almost monotone whenever the radius $r$ in consideration is sufficiently small. 
This implies in particular the metric cone structure of the tangent cone on noncollapsing limit spaces.\n\\end{rema}\n\n\\begin{rema}\nUnder additional assumption {\\rm(\\ref{noncollapsing})}, the relative volume comparison {\\rm(\\ref{vlume comparison:2})} gives rise to a volume doubling property of concentric metric balls of small radii \\cite{PeWe00}.\n\\end{rema}\n\n\n\\begin{coro}\nUnder the assumption {\\rm(\\ref{Ricci:Lp1})}, the volume has the upper bound\n\\begin{equation}\\label{volume comparison:3}\n\\vol(B_\\Gamma(x,r))\\,\\leq\\,|\\Gamma|\\cdot r^{m}+C(m,p)\\Lambda r^{2p},\\hspace{0.5cm}\\forall r>0,\n\\end{equation}\nwhere $|\\Gamma|$ denotes the measure of $\\Gamma$ as a subset of unit sphere.\n\\end{coro}\n\n\nThe upper bound of volume of geodesic balls can be refined to the upper bound of areas of geodesic spheres as follows.\n\n\n\\begin{lemm}\\label{volume comparison:9}\nUnder the assumption {\\rm(\\ref{Ricci:Lp1})}, we have\n\\begin{equation}\\label{volume comparison:7}\n\\vol(\\partial B(x,r))\\,\\leq\\, C(m,p,\\Lambda)\\cdot r^{m-1},\\,\\mbox{ when }r\\leq 1,\n\\end{equation}\nand\n\\begin{equation}\\label{volume comparison:8}\n\\vol(\\partial B(x,r))\\,\\leq\\, C(m,p,\\Lambda)\\cdot r^{2p-1},\\,\\mbox{ when }r\\geq 1.\n\\end{equation}\n\\end{lemm}\n\\begin{proof}\nWhen $r\\leq 1$, this is exactly the Lemma 3.2 of \\cite{DaWe04}. We next use iteration to prove the case $r>1$. For simplicity we only consider $r=2^k$ for $k$ being any positive integers. Other radii bigger than 1 can be attained by a finite step iteration starting from a unique radius between $\\frac{1}{2}$ and $1$.\n\nBy (\\ref{Laplace comparison:1}) and (\\ref{Laplace comparison:2}),\n\\begin{equation}\\nonumber\n\\frac{\\partial}{\\partial t}\\frac{{\\cal A}(t,\\theta)}{t^{m-1}}\\,\\leq\\,\\psi(t,\\theta)\\frac{{\\cal A}(t,\\theta)}{t^{m-1}}.\n\\end{equation}\nIntegrating over the direction space $S_x^{m-1}$ gives\n$$\\frac{d}{dt}\\frac{\\int_{S_x}{\\cal A}(t,\\theta)d\\theta}{t^{m-1}}\\,\\leq\\,\\frac{\\int_{S_x}\\psi(t,\\theta){\\cal A}(t,\\theta)d\\theta}{t^{m-1}}.$$\nIntegrating over an interval of radius $[r,2r]$ gives\n\\begin{eqnarray}\n\\frac{\\int_{S_x}{\\cal A}(2r,\\theta)d\\theta}{(2r)^{m-1}}-\n\\frac{\\int_{S_x}{\\cal A}(r,\\theta)d\\theta}{r^{m-1}}\n&\\leq& \\int_r^{2r}\\frac{\\int_{S_x}\\psi(t,\\theta){\\cal A}(t,\\theta)d\\theta}{t^{m-1}}dt\\nonumber\\\\\n&\\leq& \\frac{1}{r^{m-1}}\\int_{B(x,2r)}\\psi dv.\\nonumber\n\\end{eqnarray}\nBy the integral version of mean curvature comparison (\\ref{Laplace comparison:3}) and volume comparison (\\ref{volume comparison:3}),\n$$\\int_{B(x,2r)}\\psi dv\\leq\\big(\\int_{B(x,2r)}\\psi^{2p}\\big)^{\\frac{1}{2p}}\\vol(B(x,2r))^{\\frac{2p-1}{2p}}\\leq C(m,p,\\Lambda)(2r)^{2p-1}.$$\nThus,\n$$\\frac{\\int_{S_x}{\\cal A}(2r,\\theta)d\\theta}{(2r)^{m-1}}\\leq\n\\frac{\\int_{S_x}{\\cal A}(r,\\theta)d\\theta}{r^{m-1}}+C(m,p,\\Lambda)(2r)^{2p-m}.$$\nPut $r_k=2^k$, $k\\geq 0$. 
An iteration then gives\n$$\\int_{S_x}{\\cal A}(r_k,\\theta)d\\theta\\leq C(m,p,\\Lambda)r_k^{2p-1},$$\nas desired.\n\\end{proof}\n\nLet $\\partial B_\\Gamma(x,r)=:\\{y=\\exp_x(r\\theta)|\\theta\\in\\Gamma,\\,d(x,y)=r\\}.$ By the proof of Lemma 3.2 in \\cite{DaWe04} we also have the following volume estimate of $\\partial B_\\Gamma$ in terms of $|\\Gamma|$.\n\n\\begin{lemm}\nUnder the assumption {\\rm(\\ref{Ricci:Lp1})}, we have\n\\begin{equation}\\label{volume comparison:11}\n\\vol(\\partial B_\\Gamma(x,r))\\,\\leq\\,C(m,p,\\Lambda)\\cdot\\big(|\\Gamma|r^{m-1}+ r^{2p-1}\\big),\\,\\mbox{ when }r\\leq 1.\n\\end{equation}\n\\end{lemm}\n\n\n\n\nNext we recall a nice cut-off which is constructed by Petersen-Wei following the idea of Cheeger-Colding \\cite{ChCo96}. In the following of this subsection we assume {\\rm(\\ref{Ricci:Lp1})} and {\\rm(\\ref{noncollapsing})} hold.\n\n\\begin{lemm}[\\cite{PeWe00}]\\label{cut-off:1}\nThere exist $r_0=r_0(m,p,\\kappa,\\Lambda)$ and $C=C(m,p,\\kappa,\\Lambda)$ such that on any $B(x,r)$, $r\\le r_0$, there exists a\ncut-off $\\phi\\in C_0^\\infty(B(x,r))$ which satisfies\n\\begin{equation}\n\\phi\\ge 0,\\,\\phi\\equiv 1\\mbox{ in }B(x,\\frac{r}{2}),\n\\end{equation}\nand\n\\begin{equation}\n\\|\\nabla\\phi\\|_{C^0}^2+\\|\\triangle\\phi\\|_{C^0}\\leq C r^{-2}.\n\\end{equation}\n\\end{lemm}\n\n\nAs in \\cite{CoNa12} one can extend the construction to a slightly general case, by using a covering technique based on the volume doubling property. Let $E$ be a closed subset of $M$. Denote the $r$-neighborhood of $E$ by\n$$U_r(E)=:\\{x\\in M| d(x,E)0$, there exists $C=C(m,p,\\kappa,\\Lambda,R)$ such that the following holds. Let $E$ be any closed subset and $00$, there exists $C=C(m,p,\\kappa,\\Lambda,R)$ such that\n\\begin{equation}\\label{localsobolev:2}\nC_s(B(x,R))\\leq C,\\,\\forall x\\in M.\n\\end{equation}\n\\end{coro}\n\n\nHere, the local Sobolev constant $C_s(B(x,R))$ is defined to be the minimum value of $C_s$ such that\n\\begin{equation}\\label{localsobolev:3}\n\\bigg(\\int f^{\\frac{2m}{m-2}}dv\\bigg)^{\\frac{m-2}{m}}\\,\\leq\\, C_s\\int\\big(|\\nabla f|^2+f^2\\big)dv,\\,\\forall f\\in C_0^\\infty(B(x,R)).\n\\end{equation}\n\n\n\n\n\n\n\n\n\\subsection{Heat Kernel estimate}\n\n\nThe aim of this subsection is to prove a heat kernel estimate as well as some geometric inequalities for heat equations on manifolds with integral bounded Ricci curvature.\n\n\nLet $M$ be a Riemannian manifold satisfying (\\ref{Ricci:Lp1}) and (\\ref{noncollapsing}) for some constants $p>\\frac{m}{2}$, $\\Lambda>1$ and $\\kappa>0$. We start with the mean value inequality and gradient estimate to heat equations.\n\n\nDenote by $\\oint_A=\\frac{1}{\\vol(A)}\\int_A$ the average integration over the set $A$.\n\n\n\\begin{lemm}\nThere exists $C=C(m,p,\\kappa,\\Lambda)$ such that the following holds. For any $0\\frac{m}{2}$ at any time slice $t$. Then\n\\begin{equation}\\label{mean value:3}\n\\oint_{B(x,r)}f(\\cdot,0)dv\\leq C\\big(f(x,r^2)+r^{2-\\frac{m}{q}}\\cdot\\sup_{t\\in[0,r^2]}\\|\\xi_+(t)\\|_q\\big),\\, \\forall x\\in M, r\\leq\\sqrt{\\tau}.\n\\end{equation}\n\\end{coro}\n\\begin{proof}\nThe idea follows \\cite{CoNa12}. 
A direct calculation shows\n\\begin{eqnarray}\n\\frac{d}{dt}\\int f(y,t)H(x,y,r^2-t)dv(y)&=&\\int H(x,y,r^2-t)(\\frac{\\partial}{\\partial t}-\\triangle)f(y,t)dv(y)\\nonumber\\\\\n&\\ge&-\\int H(x,y,r^2-t)\\xi_+(y,t)dv(y).\\nonumber\n\\end{eqnarray}\nThen, by the upper bound of $H$,\n\\begin{eqnarray}\n\\int H(x,y,r^2-t)\\xi_+(y,t)dv(y)&\\le& C(r^2-t)^{-\\frac{m}{2}}\\int\\xi_+(y,t)e^{-\\frac{d^2(x,y)}{5(r^2-t)}}dv(y)\\nonumber\\\\\n&\\le&C(r^2-t)^{-\\frac{m}{2}}\\|\\xi_+(t)\\|_q\\bigg(\\int e^{-\\frac{q}{q-1}\\frac{d^2(x,y)}{5(r^2-t)}}dv(y)\\bigg)^{1-\\frac{1}{q}}\\nonumber\\\\\n&\\le&C(r^2-t)^{-\\frac{m}{2q}}\\|\\xi_+(t)\\|_q.\\nonumber\n\\end{eqnarray}\nIntegrating from $0$ to $r^2$ and applying the lower bound of $H$ and upper bound of $\\vol(B(x,r))$, we have\n\\begin{eqnarray}\nf(x,r^2)&\\ge&\\int f(y,0)H(x,y,r^2)dv(y)-\\int_0^{r^2}C(r^2-t)^{-\\frac{m}{2q}}\\|\\xi_+(t)\\|_qdt\\nonumber\\\\\n&\\ge& C^{-1}\\oint_{B(x,r)} f(y,0)dv(y)-Cr^{2(1-\\frac{m}{2q})}\\sup_{t\\in[0,r^2]}\\|\\xi_+(t)\\|_q.\\nonumber\n\\end{eqnarray}\nThe required estimate now follows directly.\n\\end{proof}\n\n\n\\begin{coro}\\label{sub mean value:2}\nAssume as above. There exist constants $\\tau=\\tau(m,p,\\kappa,\\Lambda)$ and $C=C(m,p,q,\\kappa,\\Lambda)$ such that the following holds. Let $f$ be a nonnegative function on $M$ satisfying\n\\begin{equation}\\label{super harmonic}\n\\triangle f\\leq\\xi\n\\end{equation}\nwhere $\\xi\\in L^q$ for some $q>\\frac{m}{2}$. Then\n\\begin{equation}\\label{mean value:4}\n\\oint_{B(x,r)}fdv\\leq C\\big(f(x)+r^{2-\\frac{m}{q}}\\cdot\\|\\xi\\|_q\\big),\\, \\forall x\\in M, r\\leq\\sqrt{\\tau}.\n\\end{equation}\n\\end{coro}\n\n\nThe crucial application is when $f$ is the distance function $d$, in which case we have\n$$\\triangle d\\leq\\frac{n-1}{d}+\\psi,$$\nwhere $\\psi$ has a uniform $L^{2p}$ bound in terms of $\\int|Ric_-|^pdv$ by {\\rm(\\ref{Laplace comparison:3})}.\n\n\\begin{rema}\nThere exists an estimate of same type as in Corollary \\ref{sub mean value:1} even if $\\|\\xi(t)\\|_q$ is not bounded but satisfies certain growth condition as $t$ approach 0, for example $\\|\\xi(t)\\|_q\\leq Ct^{-1+\\epsilon}$ for some $\\epsilon>0$. See Lemma 2.23 for an application.\n\\end{rema}\n\n\n\n\\begin{rema}\nTrivial examples show that the order of $r$, namely $(2-\\frac{m}{q})$, in the estimates {\\rm(\\ref{mean value:3})} and {\\rm(\\ref{mean value:4})} is sharp. It infers that the estimates for the parabolic approximations in the next subsection are best.\n\\end{rema}\n\n\n\n\n\n\\subsection{Parabolic approximations}\n\n\nLet $M$ be a complete Riemannian manifold of dimension $m$ which satisfies {\\rm(\\ref{Ricci:Lp1})} and {\\rm(\\ref{noncollapsing})} for some $\\kappa>0$, $p>\\frac{m}{2}$ and $\\Lambda\\ge 1$.\n\n\nLet us represent some notations we shall use in this subsection. Let $\\tau=\\tau(m,p,\\kappa,\\Lambda)$ denote the constant in Corollary \\ref{sub mean value:2} and $\\delta<\\tau$ be fixed small positive constant. In the following of this subsection $C=C(m,p,\\kappa,\\Lambda,\\delta)$ will always be a positive constant depending on the parameters $m,p,\\kappa,\\Lambda,\\delta$.\n\nPick two base points $p^\\pm\\in M$ with $d=d(p^+,p^-)\\leq\\frac{1}{20}\\sqrt{\\tau}$. 
Define the annulus\n$$A_{r,s}=A_{rd,sd}(\\{p,q\\}),~~~00$.\n\\end{coro}\n\nA similar argument as in the proof of Corollary \\ref{sub mean value:1} also gives\n\n\\begin{lemm}\nThe followings hold\n\\begin{equation}\\label{parabolic approximate:2}\n\\triangle {\\bf b}^+_t,\\,\\triangle {\\bf b}^-_t,\\,\\triangle {\\bf e}_t\\,\\leq\\,C\\big( d^{-1}+ t^{-\\frac{m}{4p}}\\big),\\,\\forall 0\\frac{m}{2}$ and $\\Lambda\\ge 1$. Then, the following holds for any $B=B(z,R)$, $R\\le 1$,\n\\begin{eqnarray}\\label{segment:2}\n\\int_{B\\times B}\\mathcal{F}_f(x,y)dv(x)dv(y)&\\leq& 2^{m+1} R\\vol(B)\\int_{B(z,2R)}fdv\\\\\n&+&C(m,p,\\Lambda)R^{m+2-\\frac{m}{2p}}\\vol(B)\\|f\\|_{C^0(B(z,2R))}.\\nonumber\n\\end{eqnarray}\n\\end{prop}\n\\begin{proof}\nDenote by $\\gamma=\\gamma_{x,y}$ a minimal geodesic from $x$ to $y$. Write\n$$\\mathcal{F}_1(x,y)=\\int_0^{\\frac{d(x,y)}{2}}f(\\gamma(t))dt,\\,\n\\mathcal{F}_2(x,y)=\\int_{\\frac{d(x,y)}{2}}^{d(x,y)}f(\\gamma(t))dt.$$\nBy symmetry, as in \\cite{ChCo96}, it suffices to establish a bound of $\\int_{B\\times B}\\mathcal{F}_2(x,y)dv(x)dv(y)$.\n\nFix $x\\in B$. If $y=\\exp_x(r\\theta)\\in B$, $r=d(x,y)$,\n\\begin{eqnarray}\n{\\cal A}(r,\\theta)\\int_{\\frac{r}{2}}^{r}f(\\gamma(t))dt&\\leq&\n2^{m-1}\\int_{\\frac{r}{2}}^r f(\\gamma(t)){\\cal A}(t,\\theta)dt\\nonumber\\\\\n&&+2^{m-1}\\int_{\\frac{r}{2}}^r\n\\bigg(\\int_t^\\rho f(\\gamma(t))\\psi(\\tau,\\theta){\\cal A}(\\tau,\\theta)d\\tau\\bigg)dt\\nonumber\\\\\n&\\leq&2^{m-1}\\int_0^r f(\\gamma(t)){\\cal A}(t,\\theta)dt\\nonumber\\\\\n&&+2^{m}R\\|f\\|_{C^0}\\int_0^r\\psi(\\tau,\\theta){\\cal A}(\\tau,\\theta)d\\tau\\nonumber.\n\\end{eqnarray}\nIntegrating over $B$ gives\n$$\\int_{B}\\mathcal{F}_2(x,y)dv(y)\\leq2^{m}R\\int_{B(z,2R)}fdv+2^{m}R^2\\|f\\|_{C^0}\\int_{B(x,2R)}\\psi.$$\nBy (\\ref{Laplace comparison:3}) and volume growth estimate (\\ref{volume comparison:3}) we get\n$$\\int_{B(x,2R)}\\psi\\leq C(m,p,\\Lambda)R^{m(1-\\frac{1}{2p})}.$$\nThis is sufficient to complete the proof.\n\\end{proof}\n\n\nFor any $x,y\\in M$ let $\\gamma_{x,y}$ be a minimizing normal geodesic connecting $x$ and $y$.\n\n\n\\begin{prop}\\label{segment:3}\nLet $f\\in C^\\infty(B(z,3R))$, $R\\le 1$, satisfing $|\\nabla f|\\,\\le\\,\\Lambda^{'}.$ For any $\\eta>0$ the following holds\n\\begin{eqnarray}\\label{segment:4}\n&&\\int_{B(z,R)\\times B(z,R)}\\big|\\langle\\nabla f,\\dot{\\gamma}_{x,y}\\rangle(x)-\\langle\\nabla f,\\dot{\\gamma}_{x,y}\\rangle(y)\\big|dv(x)dv(y)\\\\\n&&\\hspace{1cm}\\le C(m)\\eta^{-1}R^{m+1}\\int_{B(z,3R)}\\big|\\Hess f\\big|dv\n+C(m,p,\\Lambda)\\cdot\\Lambda^{'}\\cdot(R^{2p}+\\eta R^m)\\vol(B).\\nonumber\n\\end{eqnarray}\n\\end{prop}\n\\begin{proof}\nLet $B=B(z,R)$. Fix $x\\in B$ and view points of $B$ in polar coordinate at $x$. Define for any $\\theta\\in S_x$ the maximum radius $r(\\theta)$ such that $\\exp_x(r\\theta)\\in B$ and $d(x,\\exp_x(r\\theta))=r$. Obviously $r(\\theta)\\le 2R$. Let $\\gamma_\\theta(t)=\\exp_x(t\\theta)$, $t\\le r(\\theta)$, be a radial geodesic for $\\theta\\in S_x$. 
Then\n\\begin{eqnarray}\n\\int_{\\{x\\}\\times B}\\big|\\langle\\nabla f,\\dot{\\gamma}_{x,y}\\rangle(x)-\\langle\\nabla f,\\dot{\\gamma}_{x,y}\\rangle(y)\\big|dv(y)\n&\\le&\\int_{S_x}\\int_0^{r(\\theta)}\\big|\\langle\\nabla f,\\dot{\\gamma}_\\theta\\rangle(0)-\\langle\\nabla f,\\dot{\\gamma}_\\theta\\rangle(t)\\big|{\\cal A}(t,\\theta)dtd\\theta\\nonumber\\\\\n&\\le&\\int_0^{2R}\\int_{S_x}\\big|\\langle\\nabla f,\\dot{\\gamma}_\\theta\\rangle(0)-\\langle\\nabla f,\\dot{\\gamma}_\\theta\\rangle(t)\\big|{\\cal A}(t,\\theta)d\\theta dt\\nonumber.\n\\end{eqnarray}\nThen we divide the integration into two parts, for each $t\\in[0,2r]$,\n$$\\int_{\\{{\\cal A}(t,\\theta)\\le \\eta^{-1}t^{m-1}\\}}\\big|\\langle\\nabla f,\\dot{\\gamma}_\\theta\\rangle(0)-\\langle\\nabla f,\\dot{\\gamma}_\\theta\\rangle(t)\\big|{\\cal A}(t,\\theta)d\\theta\\,\\le\\,\\eta^{-1}t^{m-1}\\int_{S_x}\\big|\\langle\\nabla f,\\dot{\\gamma}_\\theta\\rangle(0)-\\langle\\nabla f,\\dot{\\gamma}_\\theta\\rangle(t)\\big|d\\theta,$$\n$$\\int_{\\{{\\cal A}(t,\\theta)> \\eta^{-1}t^{m-1}\\}}\\big|\\langle\\nabla f,\\dot{\\gamma}_\\theta\\rangle(0)-\\langle\\nabla f,\\dot{\\gamma}_\\theta\\rangle(t)\\big|{\\cal A}(t,\\theta)d\\theta\\,\\le\\,2\\Lambda^{'}\\int_{\\{{\\cal A}(t,\\theta)>\\eta^{-1}t^{m-1}\\}}{\\cal A}(t,\\theta)d\\theta.$$\nBy (\\ref{volume comparison:7}),\n$$\\big|\\{{\\cal A}(t,\\theta)>\\eta^{-1}t^{m-1}\\}\\big|\\,\\le\\, C(m,p,\\Lambda)\\eta.$$\nThen (\\ref{volume comparison:11}) gives\n$$\\int_{\\{{\\cal A}(t,\\theta)>\\eta^{-1}t^{m-1}\\}}{\\cal A}(t,\\theta)d\\theta\\,\\le\\,C(m,p,\\Lambda)(\\eta t^{m-1}+t^{2p-1}).$$\nTherefore,\n\\begin{eqnarray}\n&&\\int_{\\{x\\}\\times B}\\big|\\langle\\nabla f,\\dot{\\gamma}_{x,y}\\rangle(x)-\\langle\\nabla f,\\dot{\\gamma}_{x,y}\\rangle(y)\\big|dv(y)\\nonumber\\\\\n&&\\le\n\\eta^{-1}\\int_0^{2R}\\int_{S_x}\\big|\\langle\\nabla f,\\dot{\\gamma}_\\theta\\rangle(0)-\\langle\\nabla f,\\dot{\\gamma}_\\theta\\rangle(t)\\big|t^{m-1}d\\theta dt\n+C(m,p,\\Lambda)\\cdot\\Lambda^{'}\\cdot (R^{2p}+\\eta R^m)\\nonumber\\\\\n&&\\le C(m)\\eta^{-1}R^{m-1}\\int_0^{2R}\\int_{S_x}\\big|\\langle\\nabla f,\\dot{\\gamma}_\\theta\\rangle(0)-\\langle\\nabla f,\\dot{\\gamma}_\\theta\\rangle(t)\\big|d\\theta dt\n+C(m,p,\\Lambda)\\cdot\\Lambda^{'}\\cdot (R^{2p}+\\eta R^m)\\nonumber\\\\\n&&\\le C(m)\\eta^{-1}R^m\\int_0^{2R}\\int_{S_x}\\big|\\Hess f\\big|(\\gamma_\\theta(t))d\\theta dt\n+C(m,p,\\Lambda)\\cdot\\Lambda^{'}\\cdot (R^{2p}+\\eta R^m)\\nonumber.\n\\end{eqnarray}\nIntegrating over $x\\in B$ gives\n\\begin{eqnarray}\n&&\\int_{B\\times B}\\big|\\langle\\nabla f,\\dot{\\gamma}_{x,y}\\rangle(x)-\\langle\\nabla f,\\dot{\\gamma}_{x,y}\\rangle(y)\\big|dv(x)dv(y)\\nonumber\\\\\n&&\\le C(m)\\eta^{-1}R^m\\int_0^{2R}\\int_{SB}\\big|\\Hess f\\big|(\\gamma_\\theta(t))d\\theta dt\n+C(m,p,\\Lambda)\\cdot\\Lambda^{'}\\cdot (R^{2p}+\\eta R^m)\\vol(B)\\nonumber\\\\\n&&\\le C(m)\\eta^{-1}R^{m+1}\\int_{B(z,3R)}\\big|\\Hess f\\big|dv\n+C(m,p,\\Lambda)\\cdot\\Lambda^{'}\\cdot(R^{2p}+\\eta R^m)\\vol(B).\\nonumber\n\\end{eqnarray}\nThe last inequality uses the invariance of Liouville measure under geodesic flow.\n\\end{proof}\n\n\n\n\n\n\\subsection{Almost rigidity structures}\n\n\n\nLet $M$ be a complete Riemannian manifold of dimension $m$ which satisfies {\\rm(\\ref{Ricci:Lp1})} and {\\rm(\\ref{noncollapsing})} for some $\\kappa>0$, $p>\\frac{m}{2}$ and $\\Lambda\\ge 1$. 
The local almost rigidity properties below can be proved exactly as in \\cite{ChCo96, ChCo97}.\n\nFor any $\\epsilon>0$ small, there exist positive constants $\\delta$, $r_0$ depending on $m,p,\\kappa,\\Lambda$ and $\\epsilon$ such that the following theorems \\ref{almost rigidity:1}-\\ref{almost rigidity:4} hold.\n\n\\begin{theo}[\\cite{PeWe00}, Almost splitting]\\label{almost rigidity:1}\nLet $p^\\pm\\in M$ with $d=d(p^+,p^-)\\le r_0$. If $x\\in M$ satisfies $d(p^\\pm,x)\\ge\\frac{1}{5} d$ and\n\\begin{equation}\nd(p^+,x)+d(p^-,x)-d\\,\\le\\,\\delta^2 d,\n\\end{equation}\nthen there exists a complete length space $X$ and $B((0,x^*),r)\\subset\\mathbb{R}\\times X$ such that\n\\begin{equation}\nd_{GH}\\big(B(x,\\delta d),B((0,x^*),\\delta d)\\big)\\,\\leq\\,\\epsilon d.\n\\end{equation}\n\\end{theo}\n\n\n\\begin{theo}[\\cite{PeWe00}, Volume convergence]\\label{almost rigidity:2}\nIf $x\\in M$ satisfies\n\\begin{equation}\nd_{GH}\\big(B(x,r),B_r\\big)\\,\\leq\\,\\delta r,\n\\end{equation}\nfor some $r\\le r_0$, where $B_r$ denotes an Euclidean ball of radius $r$, then\n\\begin{equation}\\label{volumeconcollap2}\n\\vol(B(x,r))\\,\\geq\\,(1-\\epsilon)\\vol(B_r).\n\\end{equation}\n\\end{theo}\n\n\n\n\\begin{theo}[Almost metric cone]\\label{almost rigidity:3}\nIf $x\\in M$ satisfies\n\\begin{equation}\\label{almost volume cone}\n\\frac{\\vol(B(x,2r))}{\\vol(B_{2r})}\\,\\geq\\,(1-\\delta)\\frac{\\vol(B(x,r))}{\\vol(B_r)},\n\\end{equation}\nfor some $r\\le r_0$, then there exists a compact length space $X$ with\n\\begin{equation}\\label{diameter bound}\n\\diam(X)\\,\\leq\\,(1+\\epsilon)\\pi\n\\end{equation}\nsuch that, for metric ball $B(o^*,r)\\subset C(X)$ centered at the vertex $o^*$,\n\\begin{equation}\nd_{GH}\\big(B(x,r),B(o^*,r)\\big)\\,\\leq\\,\\epsilon r.\n\\end{equation}\n\\end{theo}\n\n\n\\begin{theo}\\label{almost rigidity:4}\nIf $x\\in M$ satisfies\n\\begin{equation}\n\\vol(B(x,2r))\\,\\geq\\,(1-\\delta)\\vol(B_{2r})\n\\end{equation}\nfor some $r\\le r_0$, then\n\\begin{equation}\nd_{GH}\\big(B(x,r),B_r\\big)\\,\\leq\\,\\epsilon r.\n\\end{equation}\n\\end{theo}\n\n\n\n\\subsection{$C^\\alpha$ structure in almost Euclidean region}\n\nLet $M$ be a complete Riemannian manifold of dimension $m$ which satisfies {\\rm(\\ref{noncollapsing})} for some $\\kappa>0$. Instead of {\\rm(\\ref{Ricci:Lp1})} we also assume the following $L^p$ bound of Ricci curvature\n\\begin{equation}\\label{Ricci:Lp3}\n\\int_M|Ric|^p\\,\\leq\\,\\Lambda,\n\\end{equation}\nfor some $p>\\frac{m}{2}$ and $\\Lambda\\ge 1$.\n\n\nFix $\\alpha\\in(0,1)$ and $\\theta>0$. For $x\\in M$, define the \\textit{$C^\\alpha$ harmonic radius} at $x$, denoted by $r_g^{\\alpha,\\theta}(x)$, to be the maximal radius $r$ such that there exists a harmonic coordinate $\\textrm{X}=(x^1,\\cdots,x^{2n}):B(x,r)\\rightarrow\\mathbb{R}^{2n}$ which satisfies\n\\begin{equation}\\label{harmonic coordinate:1}\ne^{-\\theta}(\\delta_{ij})\\leq(g_{ij})\\leq e^\\theta(\\delta_{ij})\n\\end{equation}\nas matrices, and\n\\begin{equation}\\label{harmonic coordinate:2}\n\\sup_{i,j}\\big(\\|g_{ij}\\|_{C^0}+r^\\alpha\\|g_{ij}\\|_{C^\\alpha}\\big)\\leq e^\\theta,\n\\end{equation}\nwhere $g_{ij}=(\\textrm{X}^{-1})^*g(\\frac{\\partial}{\\partial x^i},\\frac{\\partial}{\\partial x^j})$ is defined on the domain $\\textrm{X}(B(x,r))$. In harmonic coordinates, the $L^p$ bound of Ricci curvature gives the $L^{2,p}$ bound of the metric tensor $g_{ij}$ which in turn implies the $C^\\alpha$ regularity of metric. 
Following the arguments in \\cite{An90} and \\cite{Pe96} one can prove\n\n\\begin{theo}\nFor any $\\delta,\\theta\\in (0,1)$ and $0<\\alpha<2-\\frac{m}{p}$, there exist $\\eta>0$ and $r_0>0$ such that the following holds: if $x\\in M$ satisfies\n\\begin{equation}\\label{harmonic coordinate:3}\n\\vol(B(x,r))\\,\\ge\\,(1-\\eta)\\vol(B_r)\n\\end{equation}\nfor some $r\\le r_0$, then\n\\begin{equation}\\label{harmonic coordinate:4}\nr_g^{\\alpha,\\theta}(x)\\,\\geq\\,\\delta r.\n\\end{equation}\n\\end{theo}\n\n\\begin{coro}\nAssume as in above theorem. If $x\\in M$ satisfies {\\rm(\\ref{harmonic coordinate:3})}, then the isoperimetric constant of $B(x,\\delta r)$ has a lower bound\n\\begin{equation}\n\\Isop(B(x,\\delta r))\\geq(1-\\theta)\\Isop(\\mathbb{R}^m).\n\\end{equation}\n\\end{coro}\n\n\n\n\n\n\\subsection{Structure of the limit space}\n\n\n\nLet $(M_i,g_i)$ be a sequence of Riemannian manifolds of dimension $m$ which satisfies {\\rm(\\ref{noncollapsing})} and {\\rm(\\ref{Ricci:Lp3})} for some $\\kappa,\\Lambda>0$ and $p>\\frac{m}{2}$ independent of $i$. Then (\\ref{volume comparison:3}) gives us the uniform upper bound of volume growth. By Gromov's first convergence theorem, there exists a complete length metric space $(Y,d)$ such that,\n\\begin{equation}\\label{convergence3}\n(M_i,g_i)\\stackrel{d_{GH}}{\\longrightarrow}(Y,d)\n\\end{equation}\nalong a subsequence in the pointed Gromov-Hausdorff topology.\n\n\\begin{theo}\\label{theorem:Lplimit}\nAssume as above, the followings hold,\n\\begin{itemize}\n\\item[(i)] for any $r>0$ and $x_i\\in M_i$ such that $x_i\\rightarrow x_\\infty\\in Y$, we have\n\\begin{equation}\\label{volumeconvergence}\n\\vol(B(x_i,r))\\rightarrow\\mathcal{H}^{m}(B(x_\\infty,r)),\n\\end{equation}\nwhere $\\mathcal{H}^{m}$ denotes the $m$-dimensional Hausdorff measure;\n\\item[(ii)] For any $x\\in Y$ and any sequence $\\{r_j\\}$ with $\\lim r_j= 0$, a subsequence of $(Y, r^{-2}_j d, x)$ converges to a metric space $({\\cal C}_x, d_x, o)$. Any such a $({\\cal C}_x, d_x, o)$ is a metric cone with vertex $o$ and splits off lines isometrically;\n\\item[(iii)] $Y=\\mathcal{S}\\cup\\mathcal{R}$ such that $\\mathcal{S}$ is a closed set of codimension $\\geq 2$ and $\\mathcal{R}$ is convex in $Y$; $\\mathcal{R}$ consists of points whose tangent cone is $\\mathbb{R}^m$;\n\\item[(iv)] There is a $C^{1,\\alpha}$-smooth structure on $\\mathcal{R}$ and a $C^{\\alpha}$, $\\forall\\alpha<2-\\frac{m}{p}$, metric $g_\\infty$ there which induces $d$; moreover, $g_i$ converges to $g_\\infty$ in the $C^{1,\\alpha}$ topology on $\\mathcal{R}$;\n\\item[(v)] The singular set $\\mathcal{S}$ has codimension $\\geq 4$ if each $(M_i, g_i)$ is K\\\"ahlerian.\n\\end{itemize}\n\\end{theo}\n\nThe proofs of (i)-(iv), except the convexity of $\\mathcal{R}$, are standard, following the same line as that of Cheeger-Colding and Cheeger-Colding-Tian; see \\cite{Ch03, ChCo96, ChCo97, ChCoTi02}. In the K\\\"ahler setting, the convergence of the metric and complex structure takes place in the $C^\\alpha\\cap L^{2,p}$ topology on $\\mathcal{R}$ (cf. \\cite{Pe96}), so $g_\\infty$ is K\\\"ahler with respect to the limit complex structure in the weak sense. However, the $L^{2,p}$ convergence of $g_i$ will be enough to carry out the slice argument as in \\cite{ChCoTi02} or \\cite{Ch03} to show the codimension 4 property of the singular set $\\mathcal{S}$. 
The convexity of $\\mathcal{R}$ is a consequence of the following local H\\\"older continuity of geodesic balls in the interior of geodesic segments and the local $C^\\alpha$ structure of the regular set; see \\cite{CoNa12} for details.\n\n\n\\begin{theo}\nLet $(M,g)$ be a complete Riemannian manifolds of dimension $m$ which satisfies {\\rm(\\ref{noncollapsing})} and {\\rm(\\ref{Ricci:Lp3})} for some $\\kappa,\\Lambda>0$ and $p>\\frac{m}{2}$. There is $\\alpha=\\alpha(p,m)>0$ such that the following holds. For any $\\delta>0$ small, we can find positive constants $C$, $r_0$ depending on $m,p,\\kappa,\\Lambda,\\delta$ such that on any normal geodesic $\\gamma:[0,l]\\rightarrow M$ of length $l\\le 1$,\n\\begin{equation}\\label{Holder continuity}\nd_{GH}\\big(B_r(\\gamma(s)),B_r(\\gamma(t))\\big)\\le\\frac{C}{\\delta l}|s-t|^\\alpha r,\n\\end{equation}\nwhenever\n$$00$ and $p>\\frac{m}{2}$. To apply the segment inequality established in section 2.4, we replace the Hessian estimate along a geodesic connecting $x$ and $y$, namely $\\int_{\\gamma_{x,y}}|\\Hess {\\bf b}^\\pm_{r^2}|$ where ${\\bf b}^\\pm_{r^2}$ is the parabolic approximation of distance function defined in subsection 2.3 with base points $p=\\gamma(0)$ and $q=\\gamma(l)$, by the integrand in (\\ref{segment:4})\n$$F(x,y)=:\\big|\\langle\\nabla {\\bf b}^\\pm_{r^2},\\dot{\\gamma}_{x,y}\\rangle(x)-\\langle\\nabla {\\bf b}^\\pm_{r^2},\\dot{\\gamma}_{x,y}\\rangle(y)\\big|.$$\nDefine $I^r_{t-t^{'}}$, $T^r_\\eta$ and $T^r_{\\eta}(x)$ for $x\\in T^r_\\eta,\\eta>0$ as in \\cite{CoNa12}, just by replacing the upper bound of $e_{p,q}$ in $T^r_\\eta$ by $\\eta^{-1}r^{2-\\frac{m}{2p}}$. The points in $T^r_\\eta(x)$ behaves very well under the gradient flow associated to the distance function at $p$. In the most simple case, by the Hessian estimate and (\\ref{parabolic approximate:15}) in subsection 2.3, if $\\gamma(t)\\in T^r_\\eta$, $t\\in[\\delta,1-\\delta]$, and $x\\in T^r_\\eta(\\gamma(t))\\cap T^r_\\eta$ as considered in page 1210 of \\cite{CoNa12} , the distortion of distance under the geodesic flow can be estimated as follows\n\\begin{eqnarray}\nd(\\gamma_{p,x}(t^{'}),\\gamma(t^{'}))-d(x,\\gamma(t))&\\le& C\\eta^{-2}\\big[\\delta^{-1}r^{-\\frac{m}{4p}}\\sqrt{t-t^{'}}+\\delta r^{-1}(t-t^{'})+r^{2p-m-1}(t-t^{'})\\big]r\\nonumber\\\\\n&\\le&C\\eta^{-2}\\big[\\delta^{-1}(t-t^{'})^{\\frac{1}{2}-\\frac{m}{4p}}+\\delta+(t-t^{'})^{2p-m}\\big]r\\nonumber\n\\end{eqnarray}\nfor all $\\delta>0$ and $t^{'}\\le t\\le t^{'}+r$. It follows by picking $\\delta=(t-t^{'})^{\\frac{1}{4}-\\frac{m}{8p}}$,\n$$d(\\gamma_{p,x}(t^{'}),\\gamma(t^{'}))-d(x,\\gamma(t))\\,\\le\\,C\\eta^{-2}(t-t^{'})^{\\frac{1}{4}-\\frac{m}{8p}}r,\\,\\forall \\delta\\le t^{'}\\le t\\le t^{'}+r\\le 1-\\delta.$$\nFor general case where $\\gamma(t)$ is not in $T^r_\\eta$, one can follow \\cite{CoNa12} to get a precise $\\alpha=\\alpha(p,m)$ such that (\\ref{Holder continuity}) holds for certain constant $C$.\n\\end{proof}\n\n\n\n\n\n\n\n\\section{Regularity under K\\\"aher-Ricci flow}\n\n\n\nIn this section, we prove Theorem \\ref{regularity:1}. We will first show that the regular set $\\mathcal{R}$ of the limit space is smooth, and then apply the Pseudolocality theorem of Perelman \\cite{Pe02}.\n\n\nLet us fix some notions first. Assume $(M;J)$ is a compact K\\\"ahler manifold of (complex) dimension $n$. Let $g$ and $\\nabla^L$ denote a K\\\"ahler metric and associated Levi-Civita connection. 
In local complex coordinate $(z^1,\\cdots,z^n)$, define $g_{i\\bar{j}}=g(\\frac{\\partial}{\\partial z^i},\\frac{\\partial}{\\partial\\bar{z}^j})$ and $R_{i\\bar{j}}=Ric(\\frac{\\partial}{\\partial z^i},\\frac{\\partial}{\\partial\\bar{z}^j})$, etc. Let $\\nabla_i$ and $\\nabla_{\\bar{j}}$ be the abbreviations of $\\nabla^L_{\\frac{\\partial}{\\partial z^i}}$ and $\\nabla^L_{\\frac{\\partial}{\\partial\\bar{z}^j}}$ for simplicity. Define the projections of Levi-Civita connection onto the $(1,0)$ and $(0,1)$ spaces as\n$$\\nabla\\,=\\,\\nabla_{i}\\otimes dz^i,\\,\\bar{\\nabla}\\,=\\,\\nabla_{\\bar{i}}\\otimes d\\bar{z}^i.$$\nDefine the rough Laplacian acting on tensor fields $\\triangle=g^{i\\bar{j}}\\nabla_i\\nabla_{\\bar{j}}.$\n\n\n\nFrom now on, $(M;J)$ will be a compact Fano $n$-manifold and $g_0$ is a K\\\"ahler metric in the anti-canonical class $2\\pi c_1(M;J)$. Let $g(t)$ be the solution to the volume normalized K\\\"ahler-Ricci flow\n\\begin{equation}\n\\frac{\\partial g}{\\partial t}\\,=\\,g\\,-\\,{\\rm Ric}(g)\n\\end{equation}\nwith initial $g(0)=:g_0$. By $\\partial\\bar{\\partial}$-lemma, there exists a family of real-valued functions $u(t)$, called Ricci potentials of $g(t)$, which are determined by\n\\begin{equation}\\label{Ricci potential}\ng_{i\\bar{j}}-R_{i\\bar{j}}\\,=\\,\\partial_i\\partial_{\\bar{j}}u,\\,\\frac{1}{\\V}\\int e^{-u(t)}dv_{g(t)}\\,=\\,1\n\\end{equation}\nwhere $\\V=\\int dv_g$ denotes the volume of the K\\\"ahler-Ricci flow. By Perelman's estimate (see \\cite{SeTi08} for a proof), there exists $C$ depending only on initial metric $g_0$ such that\n\\begin{equation}\\label{perelman bound:Ricci potential1}\n\\|u(t)\\|_{C^0}+\\|\\nabla u(t)\\|_{C^0}+\\|\\triangle u(t)\\|_{C^0}\\,\\le\\, C.\n\\end{equation}\nBy Perelman's noncollapsing theorem for Ricci flow \\cite{Pe02}, there exist positive constants $\\kappa$ and $D$ depending on $g_0$ such that\n\\begin{equation}\\label{perelman bound:noncollapsing}\n\\vol(B_{g(t)}(x,r))\\,\\ge\\,\\kappa r^{2n},\\,\\forall x\\in M,r\\leq 1\n\\end{equation}\n\\begin{equation}\\label{perelman bound:diameter}\n\\diam(M,g(t))\\,\\le\\, D.\n\\end{equation}\n\n\nThe following formulas for $u(t)$ can be easily checked under the K\\\"ahler-Ricci flow\n\\begin{eqnarray}\n\\frac{\\partial}{\\partial t}u&=&\\triangle u+u-a;\\\\\n\\frac{\\partial}{\\partial t}|\\nabla u|^2&=&\\triangle|\\nabla u|^2-|\\nabla\\nabla u|^2-|\\nabla\\bar{\\nabla}u|^2+|\\nabla u|^2;\\\\\n\\frac{\\partial}{\\partial t}\\triangle u&=&\\triangle\\triangle u-|\\nabla\\bar{\\nabla} u|^2+\\triangle u;\\\\\n\\frac{\\partial}{\\partial t}|\\nabla\\bar{\\nabla}u|^2&=&\\triangle|\\nabla\\bar{\\nabla}u|^2-2|\\nabla\\nabla\\bar{\\nabla}u|^2\n+2R_{i\\bar{j}k\\bar{l}}\\nabla_{\\bar{i}}\\nabla_lu\\nabla_j\\nabla_{\\bar{k}}u\\\\\n\\frac{\\partial}{\\partial t}|\\nabla\\triangle u|^2&=&\\triangle|\\nabla\\triangle u|^2-|\\nabla\\nabla\\triangle u|^2-|\\nabla\\bar{\\nabla}\\triangle u|^2+|\\nabla\\triangle u|^2\\nonumber\\\\\n&&-\\nabla_i|\\nabla\\bar{\\nabla}u|^2\\nabla_{\\bar{i}}\\triangle u-\\nabla_i\\triangle u\\nabla_{\\bar{i}}|\\nabla\\bar{\\nabla}u|^2.\n\\end{eqnarray}\nHere,\n\\begin{equation}\\label{average potential}\na(t)\\,=\\,\\frac{1}{\\V}\\int u(t)e^{-u(t)}dv_{g(t)},\n\\end{equation}\nis the average of $u(t)$. By the Jensen inequality, $a(t)\\leq 0$. 
It is known that $a(t)$ increases along the K\\\"ahler-Ricci flow \\cite{Zh11}, so we may assume\n\\begin{equation}\n\\lim_{t\\rightarrow\\infty} a(t)\\,=\\,a_\\infty.\n\\end{equation}\n\n\n\n\\subsection{Long-time behavior of Ricci potentials}\n\nWe will show that the Ricci potentials $u(t)$ behaves very well as $t\\rightarrow\\infty$ under the K\\\"ahler-Ricci flow, namely its gradient field tends to be holomorphic in the $L^2$ topology. This implies that the limit of K\\\"ahler-Ricci flow should be K\\\"ahler-Ricci soliton (in certain weak topology).\n\n\n\n\\begin{prop}\\label{prop1}\nUnder the K\\\"ahler-Ricci flow,\n\\begin{equation}\\label{L2bound:Ricci potential1}\n\\int_0^\\infty\\int_M|\\nabla\\nabla u|^2dvdt<\\infty.\n\\end{equation}\nIn particular,\n\\begin{equation}\\label{L2bound:Ricci potential2}\n\\int_M|\\nabla\\nabla u|^2dv\\rightarrow 0,\\hspace{0.5cm}\\mbox{ as }t\\rightarrow\\infty.\n\\end{equation}\n\\end{prop}\n\n\n\\begin{prop}\\label{prop2}\nUnder the K\\\"ahler-Ricci flow,\n\\begin{equation}\\label{L2bound:Ricci potential3}\n\\int_t^{t+1}\\int_M\\big|\\nabla(\\triangle u-|\\nabla u|^2+u)\\big|^2dvdt\\rightarrow 0,\\hspace{0.5cm}\\mbox{ as }t\\rightarrow\\infty,\n\\end{equation}\n\\begin{equation}\\label{L2bound:Ricci potential4}\n\\int_M(\\triangle u-|\\nabla u|^2+u-a)^2dv\\rightarrow 0,\\hspace{0.5cm}\\mbox{ as }t\\rightarrow\\infty.\n\\end{equation}\n\\end{prop}\n\n\\begin{rema}\nProposition \\ref{prop1} gives a hint of the global convergence of a K\\\"ahler-Ricci flow. Assuming the boundedness of curvature, this has been proved by Ache \\cite{Ac12}.\n\\end{rema}\n\n\n\\begin{rema}\nRecall that a Riemannian manifold $(M,g)$ is a shrinking Ricci soliton if\n\\begin{equation}\nRic+\\Hess f=\\lambda g\n\\end{equation}\nfor some $f\\in C^\\infty(M;\\mathbb{R})$ and $\\lambda>0$. In the case $M$ is Fano and $g\\in 2\\pi c_1(M)$, the manifold is a shrinking Ricci soliton (always called shrinking K\\\"ahler-Ricci soliton) only if $\\lambda=1$ and $f$ equals the Ricci potential $u$. In other words, $(M,g)$ is a shrinking K\\\"ahler-Ricci soliton if and only if\n\\begin{equation}\n\\nabla\\nabla u\\,=\\,0.\n\\end{equation}\nMoreover, applying the Bianchi identity, it can be checked that $(M,g)$ is a shrinking K\\\"ahler-Ricci soliton if and only if the Shur type identity holds\n\\begin{equation}\n\\triangle u-|\\nabla u|^2+u\\,=\\,a.\n\\end{equation}\n\\end{rema}\n\n\nTo prove the proposition, we need Perelman's entropy functional (compare Perelman's original definition in \\cite{Pe02}): For any K\\\"ahler metric $g\\in 2\\pi c_1(M)$, let\n\\begin{equation}\n\\mathcal{W}(g,f)\\,=\\,\\frac{1}{\\V}\\int_M(s+|\\nabla f|^2+f-n)e^{-f}dv\\,~~~\\forall f\\in C^\\infty(M;\\mathbb{R})\n\\end{equation}\nand define\n\\begin{equation}\n\\mu(g)\\,=\\,\\inf\\bigg\\{\\mathcal{W}(g,f)\\bigg|\\int_Me^{-f}dv=\\V\\bigg\\},\n\\end{equation}\nwhere $s$ is the scalar curvature of $g$. It is known a smooth minimizer of $\\mu$, though may not be unique, always exists \\cite{Ro81}. 
The entropy admits a natural upper bound\n$$\\mu(g)\\,\\le\\,\\frac{1}{\\V}\\int_Mue^{-u}dv=:a\\leq 0.$$\n\n\nConsider the entropy under the K\\\"ahler-Ricci flow $g(t)$: for any solution $f(t)$ to the backward heat equation\n\\begin{equation}\\label{backheat}\n\\frac{\\partial f}{\\partial t}\\,=\\,-\\triangle f+|\\nabla f|^2+\\triangle u,\n\\end{equation}\nwe have\n\\begin{equation}\n\\frac{d}{dt}\\mathcal{W}(g,f)\\,=\\,\\frac{1}{\\V}\\int_M\\big(|\\nabla\\bar{\\nabla}(u-f)|^2+|\\nabla\\nabla f|^2\\big)e^{-f}dv.\n\\end{equation}\nThis implies Perelman's monotonicity\n\\begin{equation}\n\\mu(g_0)\\,\\le\\,\\mu(g(t))\\,\\le\\, 0,\\,\\forall t\\geq 0.\n\\end{equation}\n\nWe also need the following lemma to prove the propositions:\n\n\n\n\\begin{lemm}\\label{lemma2}\nFor any $g=g(t)$ and smooth function $f$ we have\n\\begin{equation}\n\\int_M|\\nabla\\nabla f|^2dv\\,\\le\\, C(g_0)\\int_M|\\nabla\\bar{\\nabla}f|^2dv.\n\\end{equation}\n\\end{lemm}\n\\begin{proof}\nBy adding a constant we may assume $f$ satisfies $\\int fe^{-u}dv=0$. Then the weighted Poincar\\'e inequality \\cite{Fu} gives\n\\begin{equation}\\nonumber\n\\int f^2e^{-u}dv\\leq\\int|\\nabla f|^2e^{-u}dv.\n\\end{equation}\nBy Perelman's estimate to $u$ (\\ref{perelman bound:Ricci potential1}), $$\\int f^2dv\\leq C(g_0)\\int|\\nabla f|^2dv.$$\nThus,\n$$\\int|\\nabla f|^2dv=-\\int f\\triangle fdv\\leq\\frac{1}{2C}\\int f^2dv+2C\\int(\\triangle f)^2dv$$\nfrom which it follows\n$$\\int|\\nabla f|^2dv\\leq C(g_0)\\int(\\triangle f)^2dv.$$\n\n\nDoing integration by parts gives\n\\begin{eqnarray}\n\\int|\\nabla\\nabla f|^2dv&=&\\int\\big((\\triangle f)^2-R_{i\\bar{j}}\\nabla_{\\bar{i}}f\\nabla_jf\\big)dv\\nonumber\\\\\n&=&\\int\\big((\\triangle f)^2-|\\nabla f|^2+\\nabla_i\\nabla_{\\bar{j}}u\\nabla_{\\bar{i}}f\\nabla_jf\\big)dv.\\nonumber\n\\end{eqnarray}\nThe last term on the right hand side can be estimated as follows\n\\begin{eqnarray}\n\\int\\nabla_i\\nabla_{\\bar{j}}u\\nabla_{\\bar{i}}f\\nabla_jfdv&=&-\\int\\nabla_{\\bar{j}}u\\big(\\triangle f\\nabla_jf+\\nabla_{\\bar{i}}f\\nabla_i\\nabla_jf\\big)dv\\nonumber\\\\\n&\\leq&\\int\\big((\\triangle f)^2+\\frac{1}{2}|\\nabla\\nabla f|^2+\\|\\nabla u\\|_{C^0}^2|\\nabla f|^2\\big)dv.\\nonumber\n\\end{eqnarray}\nCombining these estimates we have\n\\begin{equation}\\nonumber\n\\int|\\nabla\\nabla f|^2dv\\leq\\int\\big(4(\\triangle f)^2+2C|\\nabla f|^2\\big)dv\\leq C\\int(\\triangle f)^2dv\\leq C\\int|\\nabla\\bar{\\nabla}f|^2dv,\n\\end{equation}\nthe desired estimate.\n\\end{proof}\n\n\nNow we give the proof of the propositions:\n\n\n\\begin{proof}[Proof of the Proposition \\ref{prop1}]\nFor any time $t=k\\ge 1$, choose a normalized minimizer of $\\mu(g(k))$, say $f_k$, satisfing $\\int_Me^{-f_k}dv=\\V$. Let $f_k(t)$ be the solution to (\\ref{backheat}) on the time interval $[k-1,k]$. Then we have\n\\begin{equation}\\nonumber\n\\frac{1}{\\V}\\int_{k-1}^k\\int_M\\big(|\\nabla\\bar{\\nabla}(u-f_k)|^2+|\\nabla\\nabla f_k|^2\\big)e^{-f_k}dvdt\\leq\\mu(g(k))-\\mu(g(k-1)).\n\\end{equation}\nIt is proved that $|f_k(t)|\\leq C(g_0)$ for any $t\\in[k-1,k]$ (see \\cite{TiZh12, TiZhu13, TZZZ}). 
Thus,\n\\begin{equation}\\nonumber\n\\int_{k-1}^k\\int_M\\big(|\\nabla\\bar{\\nabla}(u-f_k)|^2+|\\nabla\\nabla f_k|^2\\big)dvdt\\leq C(g_0)\\big(\\mu(g(k))-\\mu(g(k-1))\\big).\n\\end{equation}\nSumming up $k=1,2,\\cdots,$ and using $\\mu(g(t))\\leq 0$ for all $t$, we conclude\n\\begin{equation}\\label{e301}\n\\sum_{k=1}^\\infty\\int_{k-1}^k\\int_M\\big(|\\nabla\\bar{\\nabla}(u-f_k)|^2+|\\nabla\\nabla f_k|^2\\big)dvdt\\leq C(g_0).\n\\end{equation}\nApplying Lemma \\ref{lemma2} to $u-f_k$ we get\n\\begin{equation}\\nonumber\n\\int|\\nabla\\nabla(u-f_k)|^2dv\\leq C(g_0)\\int|\\nabla\\bar{\\nabla}(u-f_k)|^2dv,\\hspace{0.5cm}\\forall t\\in[k-1,k].\n\\end{equation}\nCombining with (\\ref{e301}) it gives\n\\begin{equation}\\nonumber\n\\int_0^\\infty\\int_M|\\nabla\\nabla u|^2dv\\leq\\sum_{k=1}^\\infty\\int_M\\big(2|\\nabla\\nabla(u-f_k)|^2+2|\\nabla\\nabla f_k|^2\\big)dv\\leq C(g_0).\n\\end{equation}\nThis proves (\\ref{L2bound:Ricci potential1}).\n\nTo prove (\\ref{L2bound:Ricci potential2}), it will be sufficient to show\n\\begin{equation}\\label{e302}\n\\frac{d}{dt}\\int|\\nabla\\nabla u|^2dv\\leq C(g_0),\\hspace{0.5cm}\\forall t\\geq 0.\n\\end{equation}\nAn easy calculation shows\n\\begin{equation}\n\\frac{\\partial}{\\partial t}|\\nabla\\nabla u|^2=\\triangle|\\nabla\\nabla u|^2-|\\bar{\\nabla}\\nabla\\nabla u|^2-|\\nabla\\nabla\\nabla u|^2-2R_{i\\bar{j}k\\bar{l}}\\nabla_{\\bar{i}}\\nabla_{\\bar{k}}u\\nabla_j\\nabla_lu.\n\\end{equation}\nIntegrating this formula gives\n\\begin{eqnarray}\n\\frac{d}{dt}\\int|\\nabla\\nabla u|^2dv&=&\\int\\big(-|\\bar{\\nabla}\\nabla\\nabla u|^2-|\\nabla\\nabla\\nabla u|^2+\\triangle u|\\nabla\\nabla u|^2\\nonumber\\\\\n&&\\hspace{3cm}-2R_{i\\bar{j}k\\bar{l}}\\nabla_{\\bar{i}}\\nabla_{\\bar{k}}u\\nabla_j\\nabla_lu\\big)dv\n\\nonumber\\\\\n&\\leq&\\int\\big(\\triangle u|\\nabla\\nabla u|^2+2\\nabla_{\\bar{j}}R_{k\\bar{l}}\\nabla_{\\bar{k}}u\\nabla_j\\nabla_lu\n+2R_{i\\bar{j}k\\bar{l}}\\nabla_{\\bar{k}}u\\nabla_{\\bar{i}}\\nabla_j\\nabla_lu\\big)dv\n\\nonumber\\\\\n&\\leq&\\int\\bigg((\\|\\triangle u\\|_{C^0}+\\|\\nabla u\\|_{C^0}^2)|\\nabla\\nabla u|^2+|\\nabla\\nabla\\bar{\\nabla} u|^2+\\|\\nabla u\\|_{C^0}^2|Rm|^2\\bigg)dv.\\nonumber\n\\end{eqnarray}\nThe desired estimate (\\ref{e302}) now follows from Perelman's estimate to $u$ (\\ref{perelman bound:Ricci potential1}) and the general estimates (\\ref{integral bound:1}), (\\ref{integral bound:4}) in next section.\n\\end{proof}\n\n\\begin{proof}[Proof of proposition \\ref{prop2}]\nFirst observe that, by using Ricci potential equation,\n$$\\nabla_i(\\triangle u-|\\nabla u|^2+u)=\\nabla_{\\bar{j}}\\nabla_i\\nabla_ju-\\nabla_i\\nabla_ju\\nabla_{\\bar{j}}u.$$\nThus,\n$$|\\nabla(\\triangle u-|\\nabla u|^2+u)|^2\\leq 2(|\\nabla u|^2|\\nabla\\nabla u|^2+|\\nabla_{\\bar{j}}\\nabla_i\\nabla_ju|^2).$$\nTo prove (\\ref{L2bound:Ricci potential3}) it suffices to show $\\int_t^{t+1}\\int|\\nabla_{\\bar{j}}\\nabla_i\\nabla_ju|^2dvdt\\rightarrow 0$.\n\nIntegrating by parts and using the second Bianchi identity,\n\\begin{eqnarray}\n\\int|\\nabla_{\\bar{j}}\\nabla_i\\nabla_ju|^2dv&=&\\int\\nabla_{\\bar{j}}\\nabla_i\\nabla_ju\\nabla_k\\nabla_{\\bar{i}}\\nabla_{\\bar{k}}udv\\nonumber\\\\\n&=&-\\int\\nabla_i\\nabla_ju\\nabla_{\\bar{j}}\\big(\\nabla_{\\bar{i}}\\triangle u+R_{\\bar{i}k}\\nabla_{\\bar{k}}u\\big)dv\\nonumber\\\\\n&=&-\\int\\nabla_i\\nabla_ju\\big(\\nabla_{\\bar{j}}\\nabla_{\\bar{i}}\\triangle 
u+\\nabla_{\\bar{j}}R_{\\bar{i}k}\\nabla_{\\bar{k}}u\n+R_{\\bar{i}k}\\nabla_{\\bar{j}}\\nabla_{\\bar{k}}u\\big)dv\\nonumber\\\\\n&=&-\\int\\nabla_i\\nabla_ju\\nabla_{\\bar{j}}\\nabla_{\\bar{i}}\\triangle udv+\\int\\nabla_i\\nabla_ju\\nabla_{\\bar{j}}\\nabla_{\\bar{i}}\\nabla_ku\\nabla_{\\bar{k}}udv\\nonumber\\\\\n&&-\\int|\\nabla\\nabla u|^2dv+\\int\\nabla_i\\nabla_ju\\nabla_{\\bar{i}}\\nabla_ku\\nabla_{\\bar{j}}\\nabla_{\\bar{k}}udv\\nonumber\\\\\n&=&-\\int\\nabla_i\\nabla_ju\\nabla_{\\bar{j}}\\nabla_{\\bar{i}}\\triangle udv+\\int\\nabla_i\\nabla_ju\\nabla_{\\bar{j}}\\nabla_{\\bar{i}}\\nabla_ku\\nabla_{\\bar{k}}udv\\nonumber\\\\\n&&-\\int|\\nabla\\nabla u|^2dv-\\int\\nabla_ku\\big(\\nabla_i\\nabla_ju\\nabla_{\\bar{i}}\\nabla_{\\bar{j}}\\nabla_{\\bar{k}}u\n+\\nabla_{\\bar{i}}\\nabla_i\\nabla_ju\\nabla_{\\bar{j}}\\nabla_{\\bar{k}}u\\big)\\nonumber.\n\\end{eqnarray}\nThen, by Schwarz inequality,\n\\begin{eqnarray}\n\\int_{t-1}^t\\int|\\nabla_{\\bar{j}}\\nabla_i\\nabla_ju|^2dvdt&\\leq&\\big(\\int_{t-1}^t\\int|\\nabla\\nabla u|^2dvdt\\big)^{1\/2}\\cdot\n\\bigg[\\big(\\int_{t-1}^t\\int|\\nabla\\nabla\\triangle u|^2dvdt\\big)^{1\/2}\\nonumber\\\\\n&&+\\big(\\int_{t-1}^t\\int|\\nabla u|^2(|\\nabla\\nabla\\nabla u|^2+|\\nabla\\nabla\\bar{\\nabla}u|^2+|\\bar{\\nabla}\\nabla\\nabla u|^2)\\big)^{1\/2}\\bigg].\\nonumber\n\\end{eqnarray}\nApplying (\\ref{integral bound:4}) and the $L^2$ bound of $\\nabla\\nabla\\triangle u$ (see Remark \\ref{remark201}) we get (\\ref{L2bound:Ricci potential3}).\n\n\nSet $h=\\triangle u-|\\nabla u|^2+u-a$. Noticing that $\\int he^{-u}dv=0$, by weighted Poincar\\'e inequality, and using uniform bound of $u$, we derive\n\\begin{equation}\\nonumber\n\\int h^2dv\\leq C(g_0)\\int|\\nabla(\\triangle u-|\\nabla u|^2+u)|^2dv.\n\\end{equation}\nThus,\n\\begin{equation}\\nonumber\n\\int_t^{t+1}\\int h^2dvdt\\rightarrow 0,\\hspace{0.5cm}\\mbox{ as }t\\rightarrow\\infty.\n\\end{equation}\nTo show (\\ref{L2bound:Ricci potential4}), it suffices to prove\n$$\\frac{d}{dt}\\int h^2dv\\leq C(g_0)(1+\\int h^2dv).$$\nActually, by\n\\begin{equation}\\nonumber\n\\frac{\\partial}{\\partial t}h=\\triangle h+h-\\frac{d}{dt}a+|\\nabla_i\\nabla_ju|^2,\n\\end{equation}\nwe have\n\\begin{eqnarray}\\nonumber\n\\frac{d}{dt}\\int h^2dv&=&\\int2h(\\triangle h+h-\\frac{d}{dt}a+|\\nabla_i\\nabla_ju|^2+\\frac{1}{2}h\\triangle u)dv\\\\\n&\\leq&\\int2h(h+|\\nabla_i\\nabla_ju|^2+\\frac{1}{2}h\\triangle u)dv\\nonumber\\\\\n&\\leq&(3+\\|\\triangle u\\|_{C^0})\\int h^2dv+\\int|\\nabla\\nabla u|^4dv.\\nonumber\n\\end{eqnarray}\nThe required estimate follows from (\\ref{integral bound:4}) in next section.\n\\end{proof}\n\n\n\n\n\n\\subsection{Regularity of the limit}\n\n\nFor any sequence $t_i\\rightarrow\\infty$, define a family of K\\\"ahler-Ricci flows\n\\begin{equation}\n(M,g_i(t))\\,=\\,(M,g(t_i+t)),\\,t\\geq -1.\n\\end{equation}\nLet $u_i(t)$ denote associated Ricci potentials, which satisfy the uniform bound\n\\begin{equation}\\label{perelman bound:Ricci potential2}\n\\|u_i\\|_{C^0}+\\|\\nabla u_i\\|_{C^0}+\\|\\triangle u_i\\|_{C^0}\\,\\le\\, C(g_0).\n\\end{equation}\nFurthermore, by (\\ref{L2bound:Ricci potential2}), for any $t\\geq -1$,\n\\begin{equation}\\label{limit0}\n\\int_M\\big|\\nabla\\nabla u_i(t)\\big|^2dv_{g_i}\\rightarrow0,\\,\\mbox{ as }i\\rightarrow\\infty.\n\\end{equation}\n\n\nBy the convergence theorem in Section 2, passing to a subsequence if necessary we may assume at time 
$t=0$,\n\\begin{equation}\\label{limit1}\n(M,g_i(0))\\stackrel{d_{GH}}{\\longrightarrow}(M_\\infty,d).\n\\end{equation}\nThe space $M_\\infty=\\mathcal{S}\\cup\\mathcal{R}$, where $\\mathcal{R}$ is a smooth complex manifold with a $C^{\\alpha}$ complex structure $J_\\infty$ and a $C^\\alpha$ metric $g_\\infty$ which induces $d$, while $\\mathcal{S}$ is a closed singular set of codimension $\\geq 4$. Moreover, under the Gromov-Hausdorff convergence,\n\\begin{equation}\\label{L2pconvergence}\n(g_i(0),u_i(0))\\stackrel{C^\\alpha\\cap L^{2,p}}{\\longrightarrow}(g_\\infty,u_\\infty)\\,\\mbox{ on }\\mathcal{R}.\n\\end{equation}\nThe convergence of $u_i(0)$ follows from the elliptic regularity to $\\triangle u_i(0)=n-s(g_i(0))\\in L^p$. It is obvious $u_\\infty$ is globally Lipschitz on $M_\\infty$ by Perelman's estimate.\n\n\n\n\n\n\\begin{prop}\nSuppose {\\rm(\\ref{L2pconvergence})} holds, then $g_\\infty$ is smooth and satisfies\n\\begin{equation}\\label{limitsoliton1}\nRic(g_\\infty)+\\Hess u_\\infty\\,=\\,g_\\infty,\\,\\mbox{ on }\\mathcal{R}.\n\\end{equation}\nMoreover, $J_\\infty$ is smooth on $\\mathcal{R}$ and $g_\\infty$ is K\\\"ahler with respect to $J_\\infty$.\n\\end{prop}\n\\begin{proof}\nWe first show $g_\\infty$ is smooth and satisfies (\\ref{limitsoliton1}). The strategy is to apply a bootstrap as in \\cite{Pe96}; the difference is the existence of a twisted function term. In local harmonic coordinate $(x^1,\\cdots,x^{2n})$, the soliton equation (\\ref{limitsoliton1}) is equivalent to\n\\begin{equation}\\label{limitsoliton2}\ng^{\\alpha\\beta}\\frac{\\partial^2g_{\\gamma\\delta}}{\\partial x^\\alpha\\partial x^\\beta}\\,=\\,-\\frac{\\partial^2u_\\infty}{\\partial x^\\gamma\\partial x^\\delta}+Q(g,\\partial g)_{\\gamma\\delta}+T(g^{-1},\\partial g,\\partial u)_{\\gamma\\delta}+g_{\\gamma\\delta},\n\\end{equation}\nwhere $Q$ is a quadratical term while $T$ is a trilinear term of their variables.\nBy Proposition \\ref{prop1}, (\\ref{limitsoliton2}) holds in $L^2(\\mathcal{R})$. Since both $u_\\infty$ and $g_\\infty$ are in $L^{2,p}$, equation (\\ref{limitsoliton2}) holds even in $L^p(\\mathcal{R})$. On the other hand, by Proposition \\ref{prop2} and (\\ref{L2pconvergence}), we have that,\n\\begin{equation}\\label{limitsoliton3}\ng^{\\alpha\\beta}\\frac{\\partial^2 u_\\infty}{\\partial x^\\alpha\\partial x^\\beta}\\,=\\,g^{\\alpha\\beta}\\frac{\\partial u_\\infty}{\\partial x^\\alpha}\\frac{\\partial u_\\infty}{\\partial x^\\beta}-2u_\\infty+2a_\\infty,\n\\end{equation}\nin $L^p$ topology. 
A bootstrap argument to the elliptic systems (\\ref{limitsoliton3}) and (\\ref{limitsoliton2}) shows that $g_\\infty$ and $u_\\infty$ are actually smooth on $\\mathcal{R}$.\n\n\nSince $g_\\infty$ is smooth and $\\nabla_{g_\\infty}J_\\infty=0$, the elliptic regularity shows that $J_\\infty$ is also smooth.\n\\end{proof}\n\n\n\n\n\\subsection{Smooth convergence on the regular set}\n\n\nIn order to prove the smooth convergence of the K\\\"ahler-Ricci flow on the regular set, we need the following version of Perelman's pseudolocality theorem: there exists $\\epsilon_P,\\delta_P>0$ and $r_P>0$, which depend on $p,\\Lambda$ in the Theorem \\ref{regularity:1} such that for any space-time point $(x_0,t_0)\\in M\\times[-1.\\infty)$ in any flow $g_i(t)$ constructed in the previous subsection, if\n\\begin{equation}\\label{pseudolocality1}\n\\vol_{g_i(t_0)}\\big(B_{g_i(t_0)}(x_0,r)\\big)\\,\\ge\\,(1-\\epsilon_P)\\vol(B_r)\n\\end{equation}\nfor some $r\\leq r_P$, where $\\vol(B_r)$ denotes the volume of Euclidean ball of radius $r$ in $\\mathbb{R}^{2n}$, then we have the following curvature estimate\n\\begin{equation}\\label{pseudolocality2}\n|Rm_{g_i}(x,t)|\\,\\le\\,\\frac{1}{t-t_0},\\,\\forall x\\in B_{g_i(t)}(x_0,\\epsilon_Pr),\\,t_00$ is the constant in (\\ref{harmonic coordinate:3}). One can assume $\\eta\\le\\epsilon_P$ in application. In other words, in view of Shi's higher derivative estimate to curvature \\cite{Sh89}, the region around $x_0$ is almost Euclidean in the $C^\\infty$ topology at time $t_0+\\epsilon_P^2r^2$.\n\n\nNotice that Perelman's pseudolocality theorem is originally stated for Ricci flow \\cite{Pe02}. In our application, the sequence of K\\\"ahler flows $g_i(t)$ comes from a Ricci flow by scalings with a definite control (by Perelman's estimate to scalar curvature (\\ref{perelman bound:Ricci potential1})). The condition (\\ref{pseudolocality1}) implies the local $C^\\alpha$ structure at $x_0$ ( see Section 2.6).\n\n\nWe start to prove the part of smooth convergence in our Main Theorem.\n\n\n\\begin{proof}[Proof of Theorem \\ref{regularity:1}]\nRecall that we have a family of flows $(M,g_i(t)),-1\\le t<\\infty$, which converges at $t=0$ in the Cheeger-Gromov topology to a limit space $(M_\\infty,d)$. The regular set $\\mathcal{R}$ is a smooth complex manifold with a smooth metric $g_\\infty$ which induces $d$; the singular set $\\mathcal{S}$ is closed and has codimension $\\geq 4$. Moreover, the metric $g_i(0)$ converges to $g_\\infty$ in the $C^\\alpha$ sense on $\\mathcal{R}$. The goal is to show that $g_i(0)$ converges smoothly to $g_\\infty$.\n\n\nFor any radius $00$, $i\\ge 1$ and $-1\\le t_0\\le 0$ a radius $00\n\\end{equation}\nfor a sequence of $\\ell\\rightarrow\\infty$.\n\\end{theo}\n\n\n\nIn the proof of Theorem \\ref{th:partial-1}, two ingredients are important: Gradient estimate to pluri-anti-canonical sections and H\\\"ormander's $L^2$ estimate to $\\bar{\\partial}$-operator on (0,1)-forms. When Ricci curvature is bounded below, these estimates are standard and well-known, cf. 
\\cite{Ti90} and \\cite{DoSu12}.\nIn our case of K\\\"ahler-Ricci flow, the arguments should be modified because of the lack of Ricci curvature bound.\n\nAt any time $t$, for any holomorphic section $\\sigma\\in H^0(M,K_M^{-\\ell})$, we have\n\\begin{equation}\\label{e18}\n\\triangle|\\sigma|^2=|\\nabla\\sigma|^2-n\\ell|\\sigma|^2\n\\end{equation}\nand the Bochner formula\n\\begin{equation}\n\\triangle|\\nabla\\sigma|^2=|\\nabla\\nabla\\sigma|^2+|\\bar{\\nabla}\\nabla\\sigma|^2-(n+2)\\ell|\\nabla\\sigma|^2+\\langle Ric(\\nabla\\sigma,\\cdot),\\nabla\\sigma\\rangle.\\label{e19}\n\\end{equation}\nIn view of Ricci potentials, above formula can be rewritten as\n\\begin{equation}\n\\triangle|\\nabla\\sigma|^2=|\\nabla\\nabla\\sigma|^2+|\\bar{\\nabla}\\nabla\\sigma|^2\n-\\big((n+2)\\ell-1\\big)|\\nabla\\sigma|^2-\\langle\\partial\\bar{\\partial}u(\\nabla\\sigma,\\cdot),\\nabla\\sigma\\rangle.\\label{e110}\n\\end{equation}\n\nRecall that by \\cite{ZhQ} or \\cite{Ye07}, there is a uniform bound of the Sobolev constant along the K\\\"ahler-Ricci flow. This makes it possible to\napply the standard iteration arguments of Nash-Moser to the above equations on $\\sigma$ and $\\nabla\\sigma$.\nIn order to do this, we need to deal with the extra and bad term $\\langle\\partial\\bar{\\partial}u(\\nabla\\sigma,\\cdot),\\nabla\\sigma\\rangle$ in the iteration process by using an integration by parts and then applying Perelman's gradient estimate on $u$.\nThen we can conclude the following $L^\\infty$ estimate and gradient estimate on $\\sigma$.\n\n\\begin{lemm}\\label{lemm:gradient}\nThere exist constant $C=C(g_0)$ such that for any $\\ell\\geq 1$, $t\\geq 0$, and $\\sigma\\in H^0(M,K_M^{-\\ell})$, we have\n\\begin{equation}\\label{gradient estimate:holomorphic section}\n\\|\\sigma\\|_{C^0}+\\ell^{-\\frac{1}{2}}\\|\\nabla\\sigma\\|_{C^0}\\,\\le\\, C\\ell^{\\frac{n}{2}}\\bigg(\\int_M|\\sigma|^2dv\\bigg)^{1\/2}.\n\\end{equation}\n\\end{lemm}\n\n\nThe $L^2$ estimate to the $\\bar{\\partial}$ operator is established firstly for K\\\"ahler-Einstein surfaces in \\cite{Ti90}. The following is a similar estimate for the K\\\"ahler-Ricci flow.\n\n\n\\begin{lemm}\\label{lemm:L2}\nThere exists $\\ell_0$ depending on $g_0$ such that for any $\\ell\\geq \\ell_0$, $t\\geq 0$ and $\\sigma\\in C^\\infty(M,T^{0,1}M\\otimes K_M^{-\\ell})$ with $\\bar{\\partial}\\sigma=0$, we can find a solution $\\bar{\\partial} \\vartheta=\\sigma$ which satisfies\n\\begin{equation}\\label{L2 estimate}\n\\int_M|\\vartheta|^2dv\\,\\le\\,4\\ell^{-1}\\int_M|\\sigma|^2dv.\n\\end{equation}\n\\end{lemm}\n\\begin{proof}\nIt suffices to show that the Hodge Laplacian $\\triangle_{\\bar{\\partial}}=\\bar{\\partial}\\bar{\\partial}^*+\\bar{\\partial}^*\\bar{\\partial}\\geq\\frac{\\ell}{4}$ as an operator on $C^\\infty(M,T^{0,1}M\\otimes K_M^{-\\ell})$ when $\\ell$ is sufficiently large. 
Actually, this implies (i) $H^{0,1}(M,K_M^{-\\ell})=0$ and thus $\\bar{\\partial} \\vartheta=\\sigma$ is solvable when $\\bar{\\partial}\\sigma=0$ and (ii) the first positive eigenvalue of $\\triangle_{\\bar{\\partial}}$ on $C^\\infty(M,K_M^{-\\ell})$ is $\\geq\\frac{\\ell}{4}$ so that (\\ref{L2 estimate}) holds for some solution $\\vartheta$.\n\n\n\nThe following Weitzenb\\\"{o}ch type formulas hold for any $\\sigma\\in C^\\infty(M,T^{0,1}M\\otimes K_M^{-\\ell})$,\n\\begin{equation}\\label{e114}\n\\triangle_{\\bar{\\partial}}\\sigma\\,=\\,\\bar{\\nabla}^*\\bar{\\nabla}\\sigma+Ric(\\sigma,\\cdot)+\\ell\\sigma,\n\\end{equation}\n\\begin{equation}\\label{e115}\n\\triangle_{\\bar{\\partial}}\\sigma\\,=\\,\\nabla^*\\nabla\\sigma-(n-1)\\ell\\sigma.\n\\end{equation}\nA combination gives\n\\begin{equation}\\label{e116}\n\\triangle_{\\bar{\\partial}}\\sigma\\,=\\,(1-\\frac{1}{2n})\\bar{\\nabla}^*\\bar{\\nabla}\\sigma+\\frac{1}{2n}\\nabla^*\\nabla\\sigma\n+(1-\\frac{1}{2n})Ric(\\sigma,\\cdot)+\\frac{\\ell}{2}\\sigma.\n\\end{equation}\nMultiplying with $\\sigma$ and integrating over $M$, we obtain\n\\begin{eqnarray}\n\\int\\langle\\triangle_{\\bar{\\partial}}\\sigma,\\sigma\\rangle&=&\\int\\big((1-\\frac{1}{2n})|\\bar{\\nabla}\\sigma|^2+\\frac{1}{2n}|\\nabla\\sigma|^2\n+\\frac{\\ell}{2}|\\sigma|^2\\big)\\nonumber\\\\\n&&+(1-\\frac{1}{2n})\\int\\big(|\\sigma|^2-\\langle\\nabla\\bar{\\nabla}u(\\sigma,\\cdot),\\sigma\\rangle\\big),\\nonumber\n\\end{eqnarray}\nwhere the bad term $\\int\\langle\\nabla\\bar{\\nabla}u(\\sigma,\\cdot),\\sigma\\rangle$ can be estimated as follows\n\\begin{eqnarray}\n\\int\\langle\\nabla\\bar{\\nabla}u(\\sigma,\\cdot),\\sigma\\rangle&=&-\\int\\bar{\\nabla}u\\big(\\langle\\nabla\\sigma,\\sigma\\rangle\n+\\langle\\sigma,\\bar{\\nabla}\\sigma\\rangle\\big)\\nonumber\\\\\n&\\leq&\\frac{1}{2n}\\int\\big(|\\nabla\\sigma|^2+|\\bar{\\nabla}\\sigma|^2\\big)+C\\int|\\sigma|^2\\nonumber\n\\end{eqnarray}\nwhere $C$ depend on $n$ and $\\|\\nabla u\\|_{C^0}$. Thus,\n$$\\int\\langle\\triangle_{\\bar{\\partial}}\\sigma,\\sigma\\rangle\\,\\ge\\,\\big(\\frac{\\ell}{2}-C\\big)\\int|\\sigma|^2,\\,\\forall\\sigma\\in\nC^\\infty(M,T^{0,1}M\\otimes K_M^{-\\ell}).$$\nIn particular, $\\triangle_{\\bar{\\partial}}\\geq\\frac{\\ell}{4}$ when $\\ell$ is large enough.\n\\end{proof}\n\n\nThe partial $C^0$ estimate for K\\\"ahler-Ricci flow will follow from a parallel argument as one did in the K\\\"ahler-Einstein case\nin \\cite{Ti12}, \\cite{Ti13} and \\cite{DoSu12}. We will adopt the notations and follow the arguments in \\cite{Ti13}.\n\nAccording to our results of Section 3, for any $r_j\\mapsto 0$, by taking a subsequence if necessary,\nwe have a tangent cone ${\\cal C}_x$ of $(M_\\infty,\\omega_\\infty)$ at $x$, where ${\\cal C}_x$ is the limit $\\lim_{j\\to \\infty} (M_\\infty, r_j^{-2} \\omega_\\infty, x)$ in the Gromov-Hausdorff topology, satisfying:\n\n\\vskip 0.1in\n\\noindent\n${\\bf TZ}_1$. Each ${\\cal C}_x$ is regular outside a closed subcone ${\\cal S}_x$ of complex codimension at least $2$.\nSuch a ${\\cal S}_x$ is the singular set of ${\\cal C}_x$;\n\n\\vskip 0.1in\n\\noindent\n${\\bf TZ}_2$. There is a natural K\\\"ahler Ricci-flat metric $g_x$ on ${\\cal C}_x \\backslash {\\cal S}_x$ which is also a cone metric. 
Its K\\\"ahler form $\\omega_x$ is equal to $\\sqrt{-1} \\,\\partial\\bar\\partial \\rho_x^2$ on the regular part of ${\\cal C}_x$, where $\\rho_x$ denotes the distance function from the vertex of ${\\cal C}_x$, denoted by $x$ for simplicity.\n\\vskip 0.1in\nWe will denote by $L_x$ the trivial bundle ${\\cal C}_x\\times \\CC$ over ${\\cal C}_x$ equipped with\nthe Hermitian metric $e^{-\\rho_x^2}\\,|\\cdot |^2$. The curvature of this Hermitian metric is given by $\\omega_x$.\n\nWithout loss of generality, we may assume that for each $j$, $r_j^{-2}= k_j$ is an integer.\n\nFor any $ \\epsilon \\,>\\,0$, we put\n$$V(x; \\epsilon)\\,=\\,\\{ \\,y \\,\\in \\, {\\cal C}_x \\,|\\, y\\,\\in\\, B_{\\epsilon^{-1}}(0,g_x)\\,\\backslash \\,\\overline{B_{\\epsilon}(0,g_x)},\\,\\, d(y, {\\cal S}_x )\\, > \\,\\epsilon\\,\\,\\},$$\nwhere $B_R(o,g_x)$ denotes the geodesic ball of $({\\cal C}_x, g_x)$ centered at the vertex and with radius $R$.\n\nFor any $\\epsilon > 0$, whenever $j$ is sufficiently large, there are diffeomorphisms\n$$\\phi_j: V(x;\\frac{\\epsilon}{4})\\mapsto M_\\infty\\backslash {\\cal S}$$\nsatisfying:\n\\vskip 0.1in\n\\noindent\n(1) $d(x, \\phi_j(V(x; \\epsilon))) \\,<\\, 10 \\epsilon r_j$ and $\\phi_j(V(x;\\epsilon)) \\subset B_{(1+\\epsilon^{-1}) r_j}(x)$, where\n$B_R(x)$ the geodesic ball of $(M_\\infty, \\omega_\\infty)$ with radius $R$ and center at $x$;\n\\vskip 0.1in\n\\noindent\n(2) If $g_\\infty$ is the K\\\"ahler metric with the K\\\"ahler form $\\omega_\\infty$ on $M_\\infty\\backslash {\\cal S}$, then\n\\begin{equation}\n\\label{eq: bound-1}\n\\lim_{j\\to \\infty} ||r_j^{-2} \\phi_j^*g_\\infty - g_x ||_{C^6(V(x; \\frac{\\epsilon}{2}))} \\,=\\,0,\n\\end{equation}\nwhere the norm is defined in terms of the metric $g_x$.\n\n\\begin{lemm}\n\\label{lemm:par-1}\nFor any $\\delta$ sufficiently small, there are a sufficiently large $\\ell \\,=\\,k_j$ and an isomorphism $\\psi $ from the trivial bundle ${\\cal C}_x\\times \\CC$ onto\n$K_{M_\\infty}^{- \\ell}$\nover $V(x;\\epsilon)$ commuting with $\\phi\\,=\\,\\phi_j$ satisfying:\n\\begin{equation}\n\\label{eq:est-2}\n|\\psi(1)|_\\infty^{2} \\,=\\, e^{-\\rho_x^2} ~~~{\\rm and}~~~ ||\\nabla \\psi ||_{C^4(V(x; \\epsilon))} \\,\\le \\, \\delta,\n\\end{equation}\nwhere $|\\cdot|_\\infty^2$ denotes the induced norm on $K_{M_\\infty}^{-\\ell}$ by $e^{-\\frac{1}{n} u_\\infty} g_\\infty$,\n$\\nabla$ denotes the covariant derivative with respect to the norms\n$|\\cdot|_\\infty^2$ and $e^{-\\rho_x^2}\\, |\\cdot |^2$.\n\\end{lemm}\n\nWe refer the readers to \\cite{Ti13} for its proof. Actually, it is easier in our case here since the singularity\n${\\cal S}_x$ is of complex codimension at least $2$.\n\nLet $\\epsilon\\,>\\,0$ and $\\delta\\,>\\,0$ be sufficiently small and be determined later. Choose $\\ell$, $\\phi$ and $\\psi$ as in Lemma \\ref{lemm:par-1}, then\nthere is a section\n$\\tau \\,=\\, \\psi( 1 )$ of $K_{M_\\infty}^{-\\ell}$ on $\\phi(V(x; \\epsilon))$ satisfying:\n$$|\\tau|_\\infty^2\\,=\\,e^{-\\rho_x^2}.$$\nBy Lemma \\ref{lemm:par-1}, for some uniform constant $C$, we have\n$$|\\bar\\partial \\tau|_\\infty \\, \\le C \\,\\delta . 
$$\n\nSince ${\\cal S}_x$ has codimension at least $4$, we can easily construct a smooth function $\\gamma_{\\bar\\epsilon}$ on ${\\cal C}_x$\nfor each $\\bar \\epsilon \\,>\\,0$ with properties: $\\gamma_{\\bar\\epsilon }(y)\\,=\\,1$ if $d(y,{\\cal S}_x)\\,\\ge\\,\\bar\\epsilon$,\n$0\\,\\le\\,\\gamma_{\\bar\\epsilon} \\,\\le\\,1$, $\\gamma_{\\bar\\epsilon} (y)\\,=\\,0$ in an neighborhood of ${\\cal S}_x$ and\n$$\\int_{B_{{\\bar\\epsilon}^{-1}}(o,g_x)} \\,|\\nabla \\gamma_{\\bar\\epsilon}|^2\\, \\omega_x^n\\,\\le\\,\\bar\\epsilon.$$\nMoreover, we may have $|\\nabla \\gamma_{\\bar\\epsilon}|\\,\\le\\, C$ for some constant $C\\,=\\,C(\\bar\\epsilon)$.\n\nWe define for any $y\\,\\in \\,V(x; \\epsilon)$\n$$\n\\tilde \\tau (\\phi(y))\\,=\\,\\eta(2 \\delta \\rho_x(y)) \\, \\gamma_{\\bar\\epsilon}(y)\n\\,\\tau (\\phi(y)).\n$$\nwhere $\\eta$ is a cut-off function satisfying:\n$$\\eta(t)\\,= \\,1~~{\\rm for}~~t\\,\\le\\, 1,~~\\eta(t)\\,=\\,0~~{\\rm for}~~ t\\,\\ge\\, 2~~{\\rm and}~~|\\eta'(t)|\\,\\le\\, 1.$$\n\nChoose $\\bar\\epsilon$ such that $V(x; \\epsilon)$ contains the support of $\\gamma_{\\bar \\epsilon}$.\nand $\\gamma_{\\bar \\epsilon} \\,=\\,1$ on $V(x; \\delta_0)$, where $\\delta_0\\, >\\,0$ is determined later.\n\nIt is easy to see that $\\tilde \\tau$ vanishes outside $\\phi(V(x; \\epsilon))$, so it extends to a smooth section of $K_{M_\\infty}^{-\\ell}$ on\n$M_\\infty$. Furthermore, $\\tilde \\tau $ satisfies:\n\\vskip 0.1in\n\\noindent\n(i) $\\tilde \\tau \\,=\\, \\tau $ on $\\phi(V(x; \\delta_0))$;\n\n\\vskip 0.1in\n\\noindent\n(ii) There is an $\\nu\\,=\\,\\nu(\\delta,\\epsilon)$ such that\n$$\\int_{M_\\infty} |\\bar\\partial \\tilde \\tau|_\\infty^2\\, \\omega_\\infty^n \\,\\le \\, \\nu\\, r^{2n-2}.$$\nNote that we can make $\\nu$ as small as we want so long as $\\delta$, $\\epsilon$ and $\\bar \\epsilon$ are sufficiently small.\n\\vskip 0.1in\n\nSince $(M,g(t_i))$ converge to $(M_\\infty, g_\\infty)$ and\nthe Hermitian metrics $h(t_i)$ on $K_M^{-\\ell}$ converge to $h_\\infty$ on $M_\\infty\\backslash {\\cal S}$ in the $C^\\infty$-topology.\nTherefore, there are diffeomorphisms\n$$\\tilde\\phi_i : M_\\infty\\backslash {\\cal S}\\,\\mapsto\\, M $$\nand smooth isomorphisms\n$$F_i:K_{M_\\infty}^{-\\ell}\\,\\mapsto\\, K_M^{-\\ell} $$\nover $M$, satisfying:\n\n\\vskip 0.1in\n\\noindent\n${\\bf C}_1$: $\\tilde\\phi_i(M_\\infty\\backslash N_{1\/i}({\\cal S}))\\,\\subset \\,M$, where $N_{\\varepsilon}({\\cal S})$ is the $\\varepsilon$-neighborhood of ${\\cal S}$;\n\\vskip 0.1in\n\\noindent\n${\\bf C}_2$: $\\pi_i \\circ F_i\\,=\\, \\tilde\\phi_i\\circ \\pi_\\infty$, where $\\pi_i$ and $\\pi_\\infty$ are corresponding projections;\n\n\\vskip 0.1in\n\\noindent\n${\\bf C}_3$: $||\\tilde\\phi_i^*g(t_i) - g_\\infty||_{C^2(M_\\infty\\backslash T_{1\/i}({\\cal S}))}\\,\\to\\,0$ as $i\\to \\infty$;\n\n\\vskip 0.1in\n\\noindent\n${\\bf C}_4$: $||F_i^* h(t_i) - h_\\infty||_{C^4(M_\\infty\\backslash T_{1\/i}({\\cal S}))} \\,\\to \\, 0$ as $i\\to \\infty$.\n\n\\vskip 0.1in\nPut $\\tilde\\tau_i\\,=\\,F_i(\\tilde\\tau)$, then we deduce from the above\n\\vskip 0.1in\n\\noindent\n(i) $\\tilde \\tau_i \\,=\\, F_i(\\tau) $ on $\\tilde\\phi_i (\\phi(V(x; \\delta_0)))$;\n\n\\vskip 0.1in\n\\noindent\n(ii) For $i$ sufficiently large, we have\n$$\\int_{M} |\\bar\\partial \\tilde \\tau_i|_i^2\\, dV_{g(t_i)} \\,\\le \\, 2 \\nu\\, r^{2n-2},$$\nwhere $|\\cdot|_i$ denotes the Hermitian norm corresponding to $h(t_i)$.\n\\vskip 0.1in\n\n\nBy the $L^2$-estimate in Lemma \\ref{lemm:L2}, we get a section $v_i$ of 
$K_{M}^{-\\ell }$ such that\n$$\\bar\\partial v_i \\,=\\, \\bar\\partial \\tilde \\tau_i$$\nand\n$$\\int_{M_\\infty} |v_i|_{i}^2 \\,dV_{g(t_i)} \\,\\le \\, \\frac{1}{\\ell} \\int_{M} |\\bar\\partial \\tilde \\tau_i|_{i}^2 \\,dV_{g(t_i)} \\,\\le\\, 3 \\nu\\, r^{2n} .$$\n\nPut $\\sigma_i \\,=\\, \\tilde \\tau_i \\,- \\,v_i$, it is a holomorphic section of $K_{M}^{-\\ell}$.\nOne can show the $C^4$-norm of $\\bar\\partial v_i$ on $\\tilde\\phi_i(\\phi(V(x; \\delta_0)))$ is bounded from above by\n$c \\delta $ for a uniform constant $c$. By the standard elliptic estimates, we have\n$$\\sup _{\\tilde\\phi(\\phi(V(x; 2\\delta_0)\\cap B_1(o,g_x)))} |v_i|_{i}^2 \\,\\le \\, C \\,(\\delta_0 r)^{-2n}\\, \\int_{M_i} |v_i|_i^2\\, dV_{g(t_i)}\n\\,\\le\\, C \\,\\delta_0 ^{-2n} \\,\\nu .$$\nHere $C$ denotes a uniform constant. For any given $\\delta_0$, if $\\delta$ and $\\epsilon$ are sufficiently small, then we can make $\\nu$ such that\n$$ 8 C\\,\\nu\\,\\le\\, \\delta_0^{2n}.$$\nIt follows\n$$|\\sigma_i|_i \\,\\ge\\, |F_i(\\tau)|_i\\,-\\,|v_i|_i\\,\\ge \\,\\frac{1}{2}~~~{\\rm on}~~\\tilde\\phi_i(\\phi(V(x; \\delta_0)\\cap B_1(o,g_x))).$$\nOn the other hand, by applying Lemma \\ref{lemm:gradient} to $\\sigma_i$, we get\n$$\\sup_{M} |\\nabla \\sigma_i|_i \\,\\le\\, C'\n\\ell ^{\\frac{n+1}{2}} \\left (\\int_{M} |\\sigma_i|_i^2\\,dV_{g(t_i)}\\right )^{\\frac{1}{2}} \\,\\le\\, C'\\, r^{-1} .$$\nSince the distance $d(x, \\phi(\\delta_0 u)) $ is less than $ 10 \\delta_0 r$ for some $u \\,\\in\\,\\partial B_1(o,g_x)$, if $i$ is sufficiently large,\nwe deduce from the above estimates\n$$|\\sigma_i|_i (x_i)\\, \\ge\\, 1\/4 - C'\\,\\delta_0,$$\nhence, if we choose $\\delta_0$ such that $C' \\delta_0 < 1\/8$, then $\\rho_{\\omega_i,\\ell} (x_i) > 1\/8$.\n\nTheorem \\ref{th:partial-1}, i.e., the partial $C^0$-estimate\nfor $g(t_i)$ in the K\\\"ahler-Ricci flow, is proved.\n\n\nUsing the same arguments as those in proving Theorem 5.9 in \\cite{Ti13}, we can deduce Theorem \\ref{regularity:2} from Theorem \\ref{th:partial-1}.\n\n\n\n\\section{A corollary of Conjecture \\ref{conj:HT}}\n\n\nIn this last section, we will show how to deduce the Yau-Tian-Donaldson conjecture in case of Fano manifolds from Conjecture \\ref{conj:HT}. The key is to prove\nthat there is an uniform lower bound for Mabuchi's $K$-energy along the K\\\"ahler-Ricci flow provided the partial $C^0$ estimate and $K$-stability of the manifold.\n\nLet $\\omega(t)$ be the K\\\"ahler form of the K\\\"ahler-Ricci flow $g(t)$. For K\\\"ahler metrics $\\omega_1,\\omega_2\\in2\\pi c_1$, denote by $K(\\omega_1,\\omega_2)$\nthe relative Mabuchi's $K$-energy from $\\omega_1$ to $\\omega_2$ (the function $M$ in \\cite{Ma86}).\n\n\n\\begin{theo}\\label{K-energy}\nSuppose the partial $C^0$ estimate {\\rm(\\ref{e118})} holds for a sequence of times $t_i\\rightarrow\\infty$. If $M$ is $K$-stable, then the $K$-energy is bounded below under the K\\\"ahler-Ricci flow\n\\begin{equation}\nK(\\omega(0),\\omega(t))\\,\\ge\\, -C(g_0).\n\\end{equation}\n\\end{theo}\n\\begin{proof}\nIt is well known that $K(\\omega(0),\\omega(t))$ is non-increasing in $t$ (cf. \\cite{TiZhu07}). So it suffices to show a uniform lower bound of $K(\\omega(0),\\omega_i)$ where $\\omega_i=\\omega(t_i)$. We will establish this by using a result of S. Paul. 
It is proved in \\cite{Pa12, Pa12b} that if $M$ is $K$-stable, then the $K$-energy\nis bounded from below on the space of Bergman-type metrics which arise from the Kodaira embedding via bases of $K_M^{-\\ell}$.\n\nFix an integer $\\ell>0$ sufficiently large such that $K_M^{-\\ell}$ is very-ample and $M$ is $K$-stable with respect to $K_M^{-\\ell}$. Any orthonormal basis $\\{s_{t_i,\\ell,k}\\}_{k=0}^{N_{\\ell}}$ of $H^0(M,K_M^{-\\ell})$ at $t_i$ defines an embedding\n$$\\Phi_i:M\\rightarrow\\mathbb{C}P^{N_\\ell}.$$\nLet $\\omega_{FS}$ be the Fubini-Study metric on $\\mathbb{C}P^{N_\\ell}$ and put $\\tilde{\\omega}_i=\\frac{1}{\\ell}\\Phi_i^*\\omega_{FS}$, the Bergman metric associated to $\\Phi_i$. For any $i\\ge 1$, there exists a $\\sigma_i\\in SL(N_\\ell+1,\\mathbb{C})$ such that $\\Phi_i=\\sigma_i\\circ \\Phi_1$. By the result of \\cite{Pa12b}, we have\n$$K(\\tilde{\\omega}_1,\\tilde{\\omega}_i)\\,\\ge\\,-C, $$\nwhere $C$ is a uniform constant. By the cocycle condition of the $K$-energy,\n$$K(\\omega(0),\\omega_i)+K(\\omega_i,\\tilde{\\omega}_i)\\,=\\,K(\\omega(0),\\tilde{\\omega}_i)\\,=\\,K(\\omega(0),\\tilde{\\omega}_1)\n+K(\\tilde{\\omega}_1,\\tilde{\\omega}_i)\\,\\ge\\, -C.$$\nTherefore, to show that $K(\\omega(0),\\omega_i)$ is bounded from below, we only need to get an upper bound for $K(\\omega_i,\\tilde{\\omega}_i)$.\n\n\nPut $\\tilde{\\rho}_i=\\frac{1}{\\ell}\\rho_{t_i,\\ell}$, where $\\rho_{t_i,\\ell}$ is defined by (\\ref{e117}) with $t=t_i$. Then\n$$\\omega_i\\,=\\,\\tilde{\\omega}_i\\,+\\,\\sqrt{-1}\\partial\\bar{\\partial}\\,\\tilde{\\rho}_i.$$\nThe $K$-energy has the following explicit expression \\cite{Ti},\n\\begin{eqnarray}\nK(\\omega_i,\\tilde{\\omega}_i)&=&\\int_M\\log\\frac{\\tilde{\\omega}_i^n}{\\omega_i^n}\\tilde{\\omega}_i^n\n+\\int_Mu(t_i)\\big(\\tilde{\\omega}^n_i-\\omega_i^n\\big)\\nonumber\\\\\n&&\\,-\\sum_{k=0}^{n-1}\\frac{n-k}{n+1}\\int_M\\sqrt{-1}\\partial\\tilde{\\rho}_i\\wedge\\bar{\\partial}\\tilde{\\rho}_i\n\\wedge\\omega_i^k\\wedge\\tilde{\\omega}_i^{n-k-1},\\nonumber\n\\end{eqnarray}\nwhere $u(t_i)$ is the Ricci potential at time $t_i$ of the K\\\"ahler-Ricci flow. Thus,\n$$K(\\omega_i,\\tilde{\\omega}_i)\\,\\le\\,\\int_M\\log\\frac{\\tilde{\\omega}_i^n}{\\omega_i^n}\\tilde{\\omega}_i^n\n+\\int_Mu(t_i)\\big(\\tilde{\\omega}^n_i-\\omega_i^n\\big).$$\nBy Perelman's estimate, we have $|u(t_i)|\\le C(g_0)$. It follows\n$$K(\\omega_i,\\tilde{\\omega}_i)\\,\\le\\,\\int_M\\log\\frac{\\tilde{\\omega}_i^n}{\\omega_i^n}\\tilde{\\omega}_i^n+C.$$\nFinally, by using the partial $C^0$-estimate and applying the gradient estimate in Lemma \\ref{lemm:gradient} to each $s_{t_i,\\ell,k}$,\nwe have\n$$\\tilde{\\omega}_i\\,\\le\\, C(g_0)\\cdot\\omega_i.$$\nThis gives a desired upper bound of $K(\\omega_i,\\tilde{\\omega}_i)$, and consequently, a lower bound of $K(\\omega(0),\\omega_i)$. The proof is now completed.\n\\end{proof}\n\nTheorem \\ref{K-energy} implies that the limit $M_\\infty$ must be K\\\"ahler-Einstein (see \\cite{TiZhu07} for example). Then its automorphism group\nmust be reductive as a corollary of the uniqueness theorem due to B. Berndtsson and R. Berman (see \\cite{Be13}). It follows that if $M_\\infty$ is not equal to $M$,\nthere is a $\\CC^*$-action $\\{\\sigma(s)\\}_{s\\in \\CC^*}\\subset SL(N_\\ell+1,\\CC)$ such that $\\sigma(s)\\cdot\\Phi_1(M)$ converges to the embedding of $M_\\infty$ in $\\CC P^{N_\\ell}$.\nThis contradicts to the K-stability since the Futaki invariant of $M_\\infty$ vanishes. 
Hence, there is a K\\\"ahler-Einstein metric on $M=M_\\infty$.\n\n\\begin{rema}\nIn fact, using a very recent result of S. Paul \\cite{Pa12c} and the same argument as those in the proof of Theorem \\ref{K-energy},\nwe can prove directly that the K-energy is proper along the K\\\"ahler-Ricci flow, so\nthe flow converges to a K\\\"ahler-Einstein metric on the same underlying K\\\"ahler manifold.\n\\end{rema}\n\nAs a final remark, we outline a method of directly producing a non-trivial holomorphic vector field on $M_\\infty$ if it is different from $M$.\nSuppose $M_\\infty$ is not isomorphic to $M$. Let $\\lambda(t)$ be the smallest eigenvalue of the weighted Laplace $\\triangle_u=\\triangle-g^{i\\bar{j}}\\partial_iu\\partial_{\\bar{j}}$ at time $t$, where $u=u(t)$ is the Ricci potential of $g(t)$ defined in (\\ref{Ricci potential}).\nThe Poincar\\'e inequality \\cite{Fu} shows $\\lambda(t)> 1$. According to Theorem 1.5 of \\cite{Zh11}, $\\lambda(t_i)\\rightarrow 1$ as $i\\rightarrow\\infty$.\nIf we denote by $\\theta(t_i)$ an eigenfunction of $\\lambda(t_i)$ satisfying the normalization:\n$$\\int|\\theta_i|^2e^{-u(t_i)}dv_{g(t_i)}=1,$$\nthen by the Nash-Moser iteration, we have the following gradient estimate:\n\n\n\\begin{lemm}\nThere exists $C=C(g_0)$ such that any eigenfunction $\\theta$, at any time $t$, satisfying\n\\begin{equation}\ng^{i\\bar{j}}\\big(\\partial\\partial_{\\bar{j}}\\theta-\\partial_iu\\partial_{\\bar{j}}\\theta\\big)\\,=\\,\\lambda\\theta\n\\end{equation}\nhas the gradient estimates\n\\begin{equation}\n\\|\\bar{\\partial}\\theta\\|_{C^0}+\\|\\partial\\theta\\|_{C^0}\\,\\le\\, C\\lambda^{\\frac{n+1}{2}}\\|\\theta\\|_{L^2}.\n\\end{equation}\n\\end{lemm}\nIt follows that $\\theta(t_i)$ converges to a nontrivial eigenfunction $\\theta_\\infty$ with eigenvalue $1$ on the limit variety $M_\\infty$. By an easy calculation,\n$$\\int_M|\\bar{\\nabla}\\bar{\\nabla}\\theta_i|^2e^{-u(t_i)}dv_{g(t_i)}\\,=\\,\\lambda(t_i)\\big(\\lambda(t_i)-1\\big)\\rightarrow 0.$$\nTogether with Perelman's $C^0$ estimate on $u$, this yields a bounded holomorphic vector field on $M_\\infty$ as the gradient field of $\\theta_\\infty$.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{intro}\n\nIt is well known that the space of homogeneous type introduced \nby Coifman and Weiss \\cite{CW71, CW77} provides a natural setting for\nthe study of function spaces and the boundedness of operators. A \\emph{quasi-metric space} $(X,d)$ is a\nnon-empty set $X$ equipped with a \\emph{quasi-metric} $d$, that is, a non-negative function defined on\n$X\\times X$, satisfying that, for any $x,\\ y,\\ z\\in X$,\n\\begin{enumerate}\n\\item $d(x,y)=0$ if and only if $x=y$;\n\\item $d(x,y)=d(y,x)$;\n\\item there exists a constant $A_0\\in[1,\\infty)$ such that $d(x,z)\\le A_0[d(x,y)+d(y,z)]$.\n\\end{enumerate}\nThe ball $B$ on $X$ centered at $x_0\\in X$ with radius $r\\in(0,\\fz)$ is defined by setting\n$$\nB:=\\{x\\in X:\\ d(x,x_0)\\psi_\\az^k(\\cdot)\n=\\sum_{k\\in\\zz}\\int_X Q_k(\\cdot,y)f(y)\\,d\\mu(y)\\qquad\\textup{in}\\; L^2(X)\n\\end{align}\nand the kernels $\\{Q_k\\}_{k\\in\\zz}$ were proved in \\cite[Lemma~10.1]{AH13} to satisfy conditions (ii), (iii)\nand (v) of Definition \\ref{def:eti} below.\nIt was essentially proved in \\cite[Lemma~3.6 and (3.22)]{HLW16} that the kernels $\\{Q_k\\}_{k\\in\\zz}$ satisfy\nthe ``second difference regularity condition'', with exponentially decay. 
This inspires us to introduce a new\nkind of \\emph{approximations of the identity with exponential decay} (for short, $\\exp$-ATI); see Definition\n\\ref{def:eti} below.\n\nRecall that any \\emph{approximations of the identity} (for short, ATI) on RD-spaces or Ahlfors-$n$ regular\nspaces appeared in the literature only has the polynomial decay (see \\cite{HMY08,dh09}).\nThe $\\exp$-ATI turns out to be an approximations of the identity used in \\cite{HMY08}.\nHowever, as it is pointed out in Remark \\ref{rem-add} below, even the approximations of the identity with\nbounded supports can not provide the exponential decay factor like\n$$ \\exp\\lf\\{-\\nu\\lf[\\frac{\\max\\{d(x, \\CY^k)),\\,d(x,\\CY^k)\\}}{\\dz^k}\\r]^a\\r\\}$$\nin the right-hand sides of \\eqref{eq:etisize}, \\eqref{eq:etiregx} and \\eqref{eq:etidreg} below; we explain\nthe symbols $a,\\ \\dz$ and $\\CY^k$ in Section \\ref{pre} below.\nThe evidence for the importance of such an exponential decay factor can be found in \\cite[Lemma~8.3]{AH13}\nwhich establishes the following estimate\n$$\n\\sum_{\\dz^k\\ge r}\\frac 1{\\mu(B(x, \\dz^k))}\\exp\\lf\\{-\\nu\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\n\\ls \\frac 1{\\mu(B(x,r))},\\qquad \\forall\\, x\\in X,\\ r\\in(0,\\infty),\n$$\nwith the implicit positive constant independent of $x\\in X$ and $r\\in(0,\\fz)$. Observe that this estimate can\nbe used as a replacement of the reverse doubling condition of $\\mu$.\n\nAnother motivation of this article is the creationary works of Han et al.\\ \\cite{HHL16,hhl17,HLW16}\nin which they attempted to show that \\eqref{wrf} holds true when $f$ belongs to the distribution space\non spaces of homogeneous type (see Section \\ref{ati} below). Indeed, it was established in\n\\cite[Theorem 3.4]{HLW16} the following \\emph{wavelet reproducing formula}: for any $f$ in the space\n$\\GO{\\bz,\\gz}$ of test functions with $0<\\bz'<\\bz\\le\\eta$ and $0<\\gz'<\\gz\\le\\eta$ (see Definition\n\\ref{def:test} below),\n\\begin{equation*}\nf=\\sum_{k\\in\\zz}\\sum_{\\az\\in\\CG_k}\\lf\\psi_\\az^k \\qquad \\textup{in}\\; \\GO{\\bz',\\gz'},\n\\end{equation*}\nwhere $\\eta\\in(0,1)$ denotes the regularity exponent of the wavelets from \\cite{AH13}.\nThis wavelet reproducing formula was used in \\cite{HLW16} to obtain a Littlewood-Paley theory of Hardy spaces\non product spaces. It is the creative combination of the wavelet theory of \\cite{AH13} with the existed\ndistributional theory on spaces of homogeneous type, which motivates us to consider analogous versions of\n({\\bf CCRF}) and ({\\bf DCRF}) on spaces of homogeneous type in the sense of distributions.\n\nLet us mention here that a Calder\\'{o}n reproducing formula\nfor functions in the intersection of $L^2(X)$ and the Hardy space $H^p(X)$ was established in\n\\cite[Proposition 2.5]{HHL16}, and then was used to obtain atomic decompositions of Hardy spaces defined via\nthe Littlewood-Paley wavelet square functions. A deficit of \\cite[Proposition 2.5]{HHL16} is that it does not\nhave exactly the analogous version of ({\\bf DCRF}).\nOne might also mention here that the range of $p\\in(\\omega\/(\\omega+\\eta),1]$ in \\cite[Proposition 2.5]{HHL16}\nseems to be \\emph{problematic}. 
This is because the regularity exponent of the approximations of the identity in\n\\cite[p.\\ 3438]{HHL16} is $\\thz$ [indeed, $\\thz$ is from the regularity of the quasi-metric $d$ in\n\\eqref{regular-d}], which leads to that the regularity exponent in \\cite[(2.6)]{HHL16} should be\n$\\min\\{\\thz,\\eta\\}$ and hence the correct range of $p$ in \\cite[Proposition 2.5]{HHL16} (indeed, all results of\n\\cite{HHL16}) seems to be $(\\omega\/[\\omega+\\min\\{\\thz,\\eta\\}],1]$. This range of $p$ is not optimal.\n\nVia the aforementioned newly introduced $\\exp$-ATI, we follow the Coifman idea in \\cite{DJS} (see also\n\\cite{HMY08}) to establish the (in)homogeneous continuous\/discrete Calder\\'on reproducing formulae.\nLet $\\{Q_k\\}_{k\\in\\zz}$ bs an $\\exp$-ATI as in Definition \\ref{def:eti}. Then, for any $N\\in\\nn$, we write\n$$\nI=\\lf(\\sum_{k=-\\fz}^\\fz Q_k\\r)\\lf(\\sum_{l=-\\fz}^\\fz Q_l\\r)=\\sum_{|l|>N}\\sum_{k=-\\fz}^\\fz Q_{k+l}Q_k\n+\\sum_{k=-\\fz}^{\\fz}Q_k^N Q_k=:R_N+T_N,\n$$\nwhere $I$ denotes the \\emph{identity operator}.\nWhen $N$ is sufficiently large, if we can prove that the operators norms of $R_N$ on both $L^2(X)$ and the\nspace $\\GOO{\\bz,\\gz}$ of test functions (see Definition \\ref{def:test} below) are all smaller than $1$, then\n$T_N$ is invertible in both $L^2(X)$ and $\\GOO{\\bz,\\gz}$, where $\\bz,\\ \\gz\\in(0,\\eta)$. After setting\n$\\wz{Q}_k:=T_N^{-1}Q_k$ for any $k\\in\\zz$, we then have\n\\begin{equation}\\label{eq:fcrf}\nI=\\sum_{k=-\\fz}^\\fz\\wz{Q}_kQ_k,\n\\end{equation}\nwhich is the homogeneous continuous Calder\\'on reproducing formula. Moreover, \\eqref{eq:fcrf}\nholds true in the space of test functions, as well as its dual space, and also in $L^p(X)$ with any given\n$p\\in(1,\\fz)$.\n\nThe difficulty to establish these Calder\\'{o}n reproducing formulae lies in the treatment of $R_N$. This is\nmainly because of the lack of the regularity of a quasi-metric.\nFor any $x_0\\in X$ and $r\\in(0,\\infty)$, let $\\GO{x_0, r, \\bz,\\gz}$ be the space of test functions (see\nDefinition \\ref{def:test} below).\nRecall that, in the setting of RD-spaces, the boundedness of $R_N$ on $\\GO{x_0, r, \\bz,\\gz}$ was ensured by\n\\cite[Theorem 2.18]{HMY08}. However,\nthe proof of \\cite[Theorem 2.18]{HMY08} needs the existence of the\n$1$-ATI with bounded support (see \\cite[Definition 2.3 and Theorem 2.6]{HMY08}).\nFor a space of homogeneous type, the existence of the $\\eta$-ATI with bounded support\nis hard to prove due to the lack of the regularity of the quasi-metric $d$. Indeed, it is still unknown\nwhether or not a corresponding theorem similar to \\cite[Theorem 2.18]{HMY08} still holds true on a space of\nhomogeneous type. To overcome this essential difficulty, we observe that, for any $f\\in\\GO{x_0, r, \\bz,\\gz}$\nand $x\\in X$,\n$$R_Nf(x)=\\lim_{M\\to\\fz} R_{N,M}f(x)$$\nholds true both in $L^2(X)$ and locally uniformly (see Lemma \\ref{lem:ccrf3} below),\nwhere each $R_{N,M}$ is associated to an integral kernel, still denoted by $R_{N,M}$, in the following way\n$$\nR_{N,M}g(x)=\\int_X R_{N,M}(x,y)g(y)\\,d\\mu(y), \\qquad \\forall\\, g\\in\\bigcup_{p\\in[1,\\fz]} L^p(X),\\ \\forall\\,x\\in X,\n$$\nwith $R_{N,M}$ being a standard Calder\\'on-Zygmund kernel and satisfying the ``second difference regularity\ncondition''. Thus, the boundedness of $R_N$ on $\\GO{x_0, r, \\bz,\\gz}$ can then be reduced to the\ncorresponding boundedness of operators like $R_{N,M}$, while the latter is obtained in Theorem \\ref{thm:Kbdd}\nbelow. 
This is the key creative point used in this article to obtain the desired homogeneous continuous\nCalder\\'on reproducing formulae.\n\nThe above discussion mainly works for the proof of homogeneous continuous Calder\\'{o}n reproducing formula.\nFor the homogeneous discrete one, we formally apply the mean value\ntheorem to \\eqref{eq:fcrf}. For the inhomogeneous ones, the\ndifficulties we meet are similar to those for homogeneous ones.\nFor their detailed proofs, see Sections \\ref{hdrf} and \\ref{idrf} below, respectively.\n\nCompared with the Calder\\'{o}n reproducing formulae on RD-spaces (or Ahlfors-$n$ regular spaces) in\n\\cite{HMY08}, which holds true in the space $\\mathring\\CG_0^\\ez(\\bz,\\gz)$ of test functions with $\\ez$\n\\emph{strictly} smaller than the H\\\"older regularity exponent $\\eta$\nof the approximations of the identity, here\nall Calder\\'{o}n reproducing formulae obtained in this article\nhold true in $\\mathring\\CG_0^\\eta(\\bz,\\gz)$, that is, the regularity exponent of the space of test functions\ncan attain the corresponding one of the approximations of the identity.\n\nFollowing \\cite[pp.\\ 587--588]{CW77}, throughout this article, we always make the following assumptions:\nfor any point $x\\in X$, assume that the balls $\\{B(x,r)\\}_{r\\in(0,\\infty)}$\nform a basis of open neighborhoods of $x$; assume that $\\mu$ is Borel regular, which means that open sets are measurable and every set $A\\subset X$\nis contained in a Borel set $E$ satisfying that $\\mu(A)=\\mu(E)$; we also\nassume that $\\mu(B(x, r))\\in(0,\\fz)$ for any $x\\in X$ and $r\\in(0,\\infty)$.\nFor the presentation concision,\nwe always assume that $(X,d,\\mu)$ is non-atomic [namely, $\\mu(\\{x\\})=0$ for any\n$x\\in X$] and $\\diam (X)=\\fz$. It is known that $\\diam (X)=\\fz$ implies that\n$\\mu(X)=\\fz$ (see, for example, \\cite[Lemma 8.1]{AH13}).\n\nThe organization of this article is as follows.\n\nSection \\ref{pre} deals with approximations of the identity on $(X, d, \\mu)$.\nIn Section \\ref{ati}, we recall the notions of both the approximations of the identity with polynomial decay\n(for short, ATI) and the space of test functions from \\cite{HMY08}, and then state some often\nused related estimates. In Section \\ref{eti}, motivated by the wavelet theory established in \\cite{AH13}, we\nintroduce a new kind of approximations of the identity with exponential decay (for short, $\\exp$-ATI), and then\nestablish several equivalent characterizations of $\\exp$-ATIs and discuss \nthe relationship between $\\exp$-ATIs and ATIs.\n\nSection \\ref{BDD} concerns the boundedness of Calder\\'{o}n-Zygmund-type operators on spaces of test functions.\nIn Section \\ref{s3.1}, we show that Calder\\'{o}n-Zygmund operators whose kernels satisfying the second\ndifference regularity condition and some other size and regularity conditions\nare bounded on spaces of test functions with cancellation, whose proof is long and separated into two\nsubsections (see Sections \\ref{size} and \\ref{reg}). Section \\ref{s3.4} deals with the boundedness of\nCalder\\'{o}n-Zygmund-type operators on spaces of test functions without cancellation.\nCompared with \\cite[Theorem 2.18]{HMY08}, the condition used here is a little bit stronger,\nbut the proof is easier and enough for us to build the Calder\\'{o}n reproducing formulae in Sections\n\\ref{hcrf}, \\ref{hdrf} and \\ref{irf} below.\n\nIn Section \\ref{hcrf}, we start our discussion by dividing the identity into a main operator $T_N$ and a\nremainder $R_N$. 
In Section \\ref{com}, we prove that compositions of two $\\exp$-ATIs have properties similar\nto those of an $\\exp$-ATI.\nWith this and the conclusions in Section \\ref{BDD}, we prove, in Section \\ref{RN}, that the operator norms of\n$R_N$ on both $L^2(X)$ and spaces of test functions can be small enough if\n$N$ is sufficiently large. This ensures the existence of $T_N^{-1}$ which leads to the\nhomogeneous continuous Calder\\'{o}n reproducing formulae in Section \\ref{pr}.\n\nIn Section \\ref{hdrf}, by a method similar to that used in \\cite[Section 4]{HMY08}, we split the $k$-level\ndyadic cubes $Q_\\az^k$ into a sum of dyadic subcubes in level $k+j$.\nThe remainder for the discrete case contains $R_N$ and another part $G_N$ [see \\eqref{eq:defGR} below].\nIn Section \\ref{sec5.1}, we treat the boundedness of $G_N$ on both $L^2(X)$ and spaces of test functions, and\nfurther establish homogeneous discrete Calder\\'on reproducing formulae in Section \\ref{pr2}.\n\nIn Section \\ref{irf}, we obtain inhomogeneous continuous and discrete Calder\\'on reproducing formulae, whose\nproofs are similar to those of homogeneous ones presented in Sections \\ref{hcrf} and \\ref{hdrf}.\n\n\n\nLet us make some conventions on notation. Throughout this article, we use $A_0$ to denote the coefficient\nappearing in the \\emph{quasi-triangular inequality} of $d$, the parameter $\\omega$ means the \\emph{upper\ndimension constant} in \\eqref{eq:doub}, and $\\eta$ is defined to be the\n\\emph{smoothness index of wavelets} (see Theorem \\ref{thm:wave} below). Denote by $\\dz$ a\n\\emph{small positive number}, for example, $\\dz\\le (2A_0)^{-10}$, which comes from constructing the dyadic\ncubes on $X$ (see Theorem \\ref{thm:dys} below).\nFor any $p\\in[1,\\fz]$, we use $p'$ to denote its \\emph{conjugate index}, namely, $1\/p+1\/p'=1$.\nThe \\emph{symbol $C$} denotes a positive constant which is independent of the main parameters involved, but\nmay vary from line to line. We use $C_{(\\az,\\bz,\\ldots)}$ to denote a positive constant depending on the\nindicated parameters $\\az$, $\\bz,\\ldots$. The \\emph{symbol $A \\ls B$} means that\n$A \\le CB$ for some positive constant $C$, while $A \\sim B$ is used as an abbreviation of $A \\ls B \\ls A$. We\nalso use $A\\ls_{\\az,\\bz,\\ldots}B$ to indicate that here the implicit positive constant depends on $\\az$,\n$\\bz$, \\ldots and, similarly, $A\\sim_{\\az,\\bz,\\ldots}B$.\nFor any (quasi)-Banach spaces $\\mathcal X,\\,\\mathcal Y$ and any operator $T$, we use\n$\\|T\\|_{\\mathcal X\\to\\mathcal Y}$ to denote the \\emph{operator norm} of $T$ from $\\mathcal X$ to $\\mathcal Y$.\nFor any $j,\\ k\\in\\rr$, let $j\\wedge k:=\\min\\{j,k\\}$.\n\n\\section{Approximations of the identity}\\label{pre}\n\nThis section concerns approximations of the identity on $(X, d, \\mu)$.\nIn Section \\ref{ati}, we recall the notions of both the approximations of the identity with polynomial decay\nand the space of test functions from \\cite{HMY08}, and then state some often used related estimates.\nIn Section \\ref{eti}, we recall the dyadic systems established in \\cite{HK12} and the wavelet systems built in\n\\cite{AH13}, which further motivate us to introduce a new kind of approximations of the identity with\nexponential decay. 
Equivalence definitions and properties of $\\exp$-ATIs are discussed in Section \\ref{eti}.\n\n\n\\subsection{Approximations of the identity with polynomial decay}\\label{ati}\n\nFor any $x,\\ y\\in X$ and $r\\in(0,\\infty)$, we adopt the notation\n$$V(x,y):=\\mu(B(x,d(x,y)))\n\\qquad \\textup{and}\\qquad V_r(x):=\\mu(B(x,r)).\n$$\nWe recall the following notion of approximations of the identity constructed in \\cite{HMY08}.\n\n\\begin{definition}\\label{def:ati}\nLet $\\bz\\in(0,1]$ and $\\gz\\in(0,\\fz)$. A sequence $\\{P_k\\}_{k\\in\\zz}$ of bounded linear integral operators\non $L^2(X)$ is called an \\emph{approximation of the identity of order $(\\bz,\\gz)$} [for short,\n$(\\bz,\\gz)$-ATI] if there exists a positive constant $C$ such that, for any $k\\in\\zz$, the kernel of operator\n$P_k$, a function on $X\\times X$, which is also denoted by $P_k$, satisfying\n\\begin{enumerate}\n\\item (the \\emph{size condition}) for any $x,\\ y\\in X$,\n\\begin{equation}\\label{eq:atisize}\n|P_k(x,y)|\\le C\\frac 1{V_{\\dz^k}(x)+V(x,y)}\\lf[\\frac{\\dz^k}{\\dz^k+d(x,y)}\\r]^\\gz;\n\\end{equation}\n\\item (the \\emph{regularity condition}) if $d(x,x')\\le (2A_0)^{-1}[\\dz^k+d(x,y)]$, then\n\\begin{align}\\label{eq:atisregx}\n&|P_k(x,y)-P_k(x',y)|+|P_k(y,x)-P_k(y, x')|\\\\\n&\\quad\\le C\\lf[\\frac{d(x,x')}{\\dz^k+d(x,y)}\\r]^\\bz\\frac 1{V_{\\dz^k}(x)+V(x,y)}\n\\lf[\\frac{\\dz^k}{\\dz^k+d(x,y)}\\r]^\\gz;\\notag\n\\end{align}\n\\item (the \\emph{second difference regularity condition}) if $d(x,x')\\le (2A_0)^{-2}[\\dz^k+d(x,y)]$ and\n$d(y,y')\\le (2A_0)^{-2}[\\dz^k+d(x,y)]$, then\n\\begin{align}\\label{eq:atidreg}\n&|[P_k(x,y)-P_k(x',y)]-[P_k(x,y')-P_k(x',y')]|\\\\\n&\\quad \\le C\\lf[\\frac{d(x,x')}{\\dz^k+d(x,y)}\\r]^\\bz\n\\lf[\\frac{d(y,y')}{\\dz^k+d(x,y)}\\r]^\\bz\\frac 1{V_{\\dz^k}(x)+V(x,y)}\\lf[\\frac{\\dz^k}{\\dz^k+d(x,y)}\\r]^\\gz;\\noz\n\\end{align}\n\\item for any $x,\\ y\\in X$,\n\\begin{equation*}\n\\int_X P_k(x,y')\\,d\\mu(y')=1=\\int_X P_k(x',y)\\,d\\mu(x').\n\\end{equation*}\n\\end{enumerate}\n\\end{definition}\n\nLet $L_\\loc^1(X)$ be the space of all locally integrable functions on $X$.\nDenote by $\\CM$ the \\emph{Hardy-Littlewood maximal operator} defined by setting, for any $f\\in L_\\loc^1(X)$ and\n$x\\in X$,\n\\begin{equation}\\label{eq:defmax}\n\\CM(f)(x):=\\sup_{r\\in(0,\\fz)}\\frac 1{\\mu(B(x,r))}\\int_{B(x,r)} |f(y)|\\,d\\mu(y).\n\\end{equation}\nFor any $p\\in(0,\\fz]$, we use the \\emph{symbol $L^p(X)$} to denote the set of all Lebesgue measurable functions\n$f$ such that\n$$\n\\|f\\|_{L^p(X)}:=\\lf[\\int_X |f(x)|^p\\,d\\mu(x)\\r]^{\\frac 1p}<\\fz\n$$\nwith the usual modification made when $p=\\fz$.\n\nNow we list some basic properties of $(\\bz,\\gz)$-ATIs. 
For their proofs, see \\cite[Proposition 2.7]{HMY08}.\n\\begin{proposition}\\label{prop:basic}\nLet $\\{P_k\\}_{k\\in\\zz}$ be an $(\\bz,\\gz)$-{\\rm ATI}, with $\\bz\\in(0,1]$ and $\\gz\\in(0,\\fz)$, and $p\\in[1,\\fz)$.\nThen there exists a positive constant $C$ such that\n\\begin{enumerate}\n\\item for any $x,\\ y\\in X$,\n$\\int_X|P_k(x,y')|\\,d\\mu(y')\\le C $ and $\\int_X|P_k(x',y)|\\,d\\mu(x')\\le C$;\n\\item for any $f\\in L^1_\\loc(X)$ and $x\\in X$, $|P_kf(x)|\\le C\\CM(f)(x)$;\n\\item for any $f\\in L^p(X)$, $\\|P_kf\\|_{L^p(X)}\\le C\\|f\\|_{L^p(X)}$, which also holds true when $p=\\fz$;\n\\item for any $f\\in L^p(X)$, $\\|f-P_kf\\|_{L^p(X)}\\to 0$ as $k\\to\\fz$.\n\\end{enumerate}\n\\end{proposition}\n\nLet us recall the notions of both the space of test functions and the space of distributions, whose\nfollowing versions were originally given in \\cite{HMY08} (see also \\cite{HMY06}).\n\n\\begin{definition}[test functions]\\label{def:test}\nLet $x_1\\in X$, $r\\in(0,\\fz)$, $\\bz\\in(0,1]$ and $\\gz\\in(0,\\fz)$. A function $f$ on $X$ is called a\n\\emph{test function of type $(x_1,r,\\bz,\\gz)$}, denoted by $f\\in\\CG(x_1,r,\\bz,\\gz)$, if there exists a positive\nconstant $C$ such that\n\\begin{enumerate}\n\\item (the \\emph{size condition}) for any $x\\in X$,\n\\begin{equation}\\label{eq:size}\n|f(x)|\\le C\\frac{1}{V_r(x_1)+V(x_1,x)}\\lf[\\frac r{r+d(x_1,x)}\\r]^\\gz;\n\\end{equation}\n\n\\item (the \\emph{regularity condition}) for any $x,\\ y\\in X$ satisfying $d(x,y)\\le (2A_0)^{-1}[r+d(x_1,x)]$,\n\\begin{equation}\\label{eq:reg}\n|f(x)-f(y)|\\le C\\lf[\\frac{d(x,y)}{r+d(x_1,x)}\\r]^\\bz\n\\frac{1}{V_r(x_1)+V(x_1,x)}\\lf[\\frac r{r+d(x_1,x)}\\r]^\\gz.\n\\end{equation}\n\\end{enumerate}\nFor any $f\\in\\CG(x_1,r,\\bz,\\gz)$, define the norm\n$$\n\\|f\\|_{\\CG(x_1,r,\\bz,\\gz)}:=\\inf\\{C\\in(0,\\fz):\\ \\text{\\eqref{eq:size} and \\eqref{eq:reg} hold true}\\}.\n$$\nDefine\n$$\n\\mathring{\\CG}(x_1,r,\\bz,\\gz):=\\lf\\{f\\in\\CG(x_1,r,\\bz,\\gz):\\ \\int_X f(x)\\,d\\mu(x)=0\\r\\}\n$$\nequipped with the norm $\\|\\cdot\\|_{\\mathring{\\CG}(x_1,r,\\bz,\\gz)}:=\\|\\cdot\\|_{\\CG(x_1,r,\\bz,\\gz)}$. Fixed\n$x_0\\in X$,\nwe denote $\\CG(x_0,1,\\bz,\\gz)$ [resp., $\\mathring{\\CG}(x_0,1,\\bz,\\gz)$], simply, by $\\CG(\\bz,\\gz)$ [resp.,\n$\\mathring{\\CG}(\\bz,\\gz)$].\n\\end{definition}\n\nFix $x_0\\in X$. For any $x\\in X$ and $r\\in(0,\\fz)$, we find that $\\CG(x,r,\\bz,\\gz)=\\CG(x_0,1,\\bz,\\gz)$\nwith equivalent norms, but\nthe equivalent positive constants depend on $x$ and $r$.\n\nFix $\\epsilon\\in(0,1]$ and $\\bz,\\ \\gz\\in(0,\\epsilon]$. Let $\\CG^\\epsilon_0(\\bz,\\gz)$ [resp.,\n$\\mathring\\CG^\\epsilon_0(\\bz,\\gz)$] be the completion of the set $\\CG(\\epsilon,\\epsilon)$ [resp.,\n$\\mathring\\CG(\\ez,\\ez)$] in $\\CG(\\bz,\\gz)$. Furthermore, if $f\\in\\CG^\\epsilon_0(\\bz,\\gz)$ [resp.,\n$f\\in\\mathring\\CG^\\epsilon_0(\\bz,\\gz)$], we then let $\\|f\\|_{\\CG^\\epsilon_0(\\bz,\\gz)}:=\\|f\\|_{\\CG(\\bz,\\gz)}$\n[resp., $\\|f\\|_{\\mathring\\CG^\\epsilon_0(\\bz,\\gz)}:=\\|f\\|_{\\CG(\\bz,\\gz)}$].\nThe \\emph{dual space} $(\\CG^\\epsilon_0(\\bz,\\gz))'$ [resp., $(\\mathring{\\CG}^\\epsilon_0(\\bz,\\gz))'$] is defined\nto be the set of all continuous linear functionals from $\\CG^\\epsilon_0(\\bz,\\gz)$ [resp.,\n$(\\mathring{\\CG}^\\epsilon_0(\\bz,\\gz))'$] to $\\cc$, equipped with the weak-$*$ topology. 
The spaces\n$(\\CG^\\epsilon_0(\\bz,\\gz))'$ and $(\\mathring{\\CG}^\\epsilon_0(\\bz,\\gz))'$ are called the \\emph{spaces of\ndistributions}.\n\nWe conclude this subsection with some estimates from \\cite[Lemma~2.1]{HMY08}, which are proved by using\n\\eqref{eq:doub}.\n\n\\begin{lemma}\\label{lem-add}\nLet $\\bz,\\ \\gz\\in(0,\\infty)$.\n\\begin{enumerate}\n\\item For any $x,\\ y\\in X$ and $r\\in(0,\\fz)$, $V(x,y)\\sim V(y,x)$ and\n$$\nV_r(x)+V_r(y)+V(x,y)\\sim V_r(x)+V(x,y)\\sim V_r(y)+V(x,y)\\sim \\mu(B(x, r+d(x,y))),\n$$\nwhere the equivalent positive constants are independent of $x$, $y$ and $r$.\n\\item There exists a positive constant $C$ such that, for any $x_1\\in X$ and $r\\in(0,\\infty)$,\n$$\\int_X\\frac{1}{V_r(x_1)+V(x_1,y)}\\lf[\\frac r{r+d(x_1,y)}\\r]^\\gz\\,d\\mu(y)\\le C.$$\n\\item There exists a positive constant $C$ such that, for any $x\\in X$ and $R\\in(0,\\infty)$,\n$$\\int_{d(x,y)\\le R}\\frac 1{V(x,y)}\\lf[\\frac{d(x,y)}{R}\\r]^\\bz\\,d\\mu(y)\\le C\n\\quad \\textup{and}\\quad\n\\int_{d(x, y)\\ge R}\\frac{1}{V(x,y)}\\lf[\\frac R{d(x,y)}\\r]^\\bz\\,d\\mu(y)\\le C.$$\n\\item There exists a positive constant $C$ such that, for any $x_1\\in X$ and $r,\\ R\\in(0,\\infty)$,\n$$\n\\int_{d(x, x_1)\\ge R}\\frac{1}{V_r(x_1)+V(x_1,x)}\\lf[\\frac r{r+d(x_1,x)}\\r]^\\gz\\,d\\mu(x)\n\\le C\\lf(\\frac{r}{r+R}\\r)^\\gz.\n$$\n\\end{enumerate}\n\\end{lemma}\n\n\\subsection{Approximations of the identity with exponential decay}\\label{eti}\n\nThe main aim of this section is to introduce the approximations of the identity with exponential decay.\nRecall that\nHyt\\\"{o}nen and Kariema \\cite{HK12} established a system of dyadic cubes, which is re-stated in the following\ntheorem.\n\n\\begin{theorem}[{\\cite[Theorem 2.2]{HK12}}]\\label{thm:dys}\nFix constants $0B>0\\; \\textup{and}\\; a\\in(0,1],\n\\end{align*}\nimplies that\n$$[d(x',y)]^a\\ge (2A_0)^{-a}[d(x,y)-\\dz^k]^a \\ge (2A_0)^{-a}\\lf\\{[d(x,y)]^a-[\\dz^k]^a\\r\\}.$$\nSimilarly, we have\n\\begin{align*}\n[\\max\\{d(x, \\CY^k),\\,d(y,\\CY^k)\\}]^a\n&\\le [\\max\\{\\dz^k+2A_0d(x', \\CY^k),\\,d(y,\\CY^k)\\}]^a\\\\\n&\\le (\\dz^k)^a+[2A_0\\max\\{d(x', \\CY^k),\\,d(y,\\CY^k)\\}]^a.\n\\end{align*}\nThus, we obtain\n\\begin{align}\\label{eq-add5}\n&\\lf[\\frac{\\dz^k+d(x,y)}{d(x,x')}\\r]^\\eta\\exp\\lf\\{-\\nu\\lf[\\frac{d(x',y)}{\\dz^k}\\r]^a\\r\\} \\exp\\lf\\{-\\nu\\lf[\\frac{\\max\\{d(x', \\CY^k),\\,d(y,\\CY^k)\\}}{\\dz^k}\\r]^a\\r\\}\\\\\n&\\quad \\le \\lf[\\frac{\\dz^k+d(x,y)}{\\dz^k}\\r]^\\eta\\exp\\lf\\{2(2A_0)^{-a}\\nu\\r\\}\\exp\\lf\\{-\\nu\\lf[\\frac{d(x,y)}{2A_0\\dz^k}\\r]^a\\r\\} \\notag\\\\\n&\\qquad\\times\\exp\\lf\\{-\\nu\\lf[\\frac{\\max\\{d(x, \\CY^k),\\,d(y,\\CY^k)\\}}{2A_0\\dz^k}\\r]^a\\r\\}.\\notag\n\\end{align}\nFrom this and \\eqref{eq:etisize}, we easily deduce (iii)$'$.\n\nNow we prove (iv)$'$. 
Let us consider the case $\\dz^k(2A_0)^{-1}[r+d(x_0,x)]}{d(x,y)<10A_0^4r}}|K(x,y)|[|f(y)|+|f(x)|]\\,d\\mu(y)\\\\\n&\\ls \\frac 1{V_r(x_1)}\\int_{d(x,y)\\le(2A_0)^{-1}[r+d(x_1,x)]}\\frac 1{V(x,y)}\\lf[\\frac{d(x,y)}{r+d(x_1,x)}\\r]^\\bz\n\\,d\\mu(y)\\\\\n&\\quad+\\frac 1{V_r(x_1)}\\int_{(2A_0)^{-1}r (2A_0)^{-1}[r+d(x_1,y)]$, then, by\n\\eqref{eq:1} and $R\\le 2A_0 d(x_1,y)$, we obtain\n$$\n|f_1(y)-f_1(y')|\\le|f(y)|+|f(y')|\\ls\\lf[\\frac{d(y,y')}{r+d(x_1,y)}\\r]^\\bz\\frac 1{V_R(x_1)}\n\\lf(\\frac rR\\r)^\\gz\\ls\\lf[\\frac{d(y,y')}R\\r]^\\bz\\frac 1{V_R(x_1)}\\lf(\\frac rR\\r)^\\gz.\n$$\nIf $d(y,y')\\le (2A_0)^{-1}[r+d(x_1,y)]$, then, by the definition of $u_1$ and the regularity condition of $f$,\nwe find that\n\\begin{align*}\n|f_1(y)-f_1(y')|&\\le|f(y)||u_1(y)-u_1(y')|+|u_1(y')||f(y)-f(y')|\\\\\n&\\ls\\frac 1{V_1(x_1 )+V(x_1 ,y)}\\lf[\\frac r{r+d(x_1,y)}\\r]^\\gz\n\\min\\lf\\{1,\\;\\lf[\\frac{d(y,y')}R\\r]^\\eta\\r\\}\\\\\n&\\quad+\\lf[\\frac{d(y,y')}{r+d(x_1,y)}\\r]^\\bz\\frac 1{V_r(x_1)+V(x_1,y)}\n\t\\lf[\\frac r{r+d(x_1 ,y)}\\r]^\\gz\\\\\n\t&\\ls\\lf[\\frac{d(y,y')}R\\r]^\\bz\\frac 1{V_R(x_1 )}\\lf(\\frac rR\\r)^\\gz.\n\t\\end{align*}\nThis proves \\eqref{eq:2}.\n\n\nSince $B(x,(2A_0)^{-2}R)\\cap B(x_1,(2A_0)^{-2}R)=\\emptyset$, it follows that\n$0\\le u_3\\le 1$ and\n$\nu_3(x)=0$ if $x\\in B(x,(2A_0)^{-3}R)\\cup B(x_1,(2A_0)^{-3}R)$. Combining this with the size condition of $f$ implies \\eqref{eq:3}.\nFrom \\eqref{eq:3} and Lemma \\ref{lem-add}(iv), it follows directly \\eqref{eq:4}.\n\nTo prove \\eqref{eq:5}, since $f\\in\\mathring{\\CG}(x_1,r,\\bz,\\gz)$, we deduce that $\\int_X f(y)\\,d\\mu(y)=0$,\nwhich, together with \\eqref{eq:4}, \\eqref{eq:1} and $\\supp f_1\\subset B(x,(2A_0)^{-2}R)$, implies that\n$$\n\\lf|\\int_X f_2(y)\\,d\\mu(y)\\r|=\\lf|\\int_X [f_1(y)+f_3(y)]\\,d\\mu(y)\\r|\\ls\n\\int_{B(x,(2A_0)^{-2}R)}|f_1(y)|\\,d\\mu(y)+\\lf(\\frac rR\\r)^\\gz\\ls \\lf(\\frac rR\\r)^\\gz.\n$$\nTherefore, we complete the proofs of \\eqref{eq:1} through \\eqref{eq:5}.\n\t\nNow we continue the proof of \\eqref{eq:Tfsize} in the case $R=d(x_1, x)\\ge 2A_0r$. Notice that, in this case,\nthe right-hand side of \\eqref{eq:Tfsize} is comparable with $\\frac 1{V_R(x_1)}(\\frac rR)^\\gz$.\nBy Lemma \\ref{lem:scf}, we find a function $u$ such that\n$\\chi_{B(x,(2A_0)^{-2}R)}\\le u\\le\\chi_{B(x, (2A_0)^{-1}R)}$ and $\\|u\\|_{\\dot{C}^\\eta(X)}\\ls R^{-\\eta}$. Then $f_1=uf_1$ and\n\\begin{align*}\nTf_1(x)&=\\int_ X K(x,y)u(y)f_1(y)\\,d\\mu(y)\\\\\n&=\\int_X K(x,y)u(y)[f_1(y)-f_1(x)]\\,d\\mu(y)+f_1(x)\\int_X K(x,y)u(y)\\,d\\mu(y)=:\\Gamma_1+\\Gamma_2.\n\\end{align*}\nBy Lemma \\ref{lem:3.24} and \\eqref{eq:1}, we conclude that\n$|\\Gamma_2|\\ls|f_1(x)|\\ls\\frac 1{V_R(x_1)}(\\frac rR)^\\gz$.\nFrom the fact that $\\supp u\\subset B(x,(2A_0)^{-1}R)$, \\eqref{eq:Ksize}, \\eqref{eq:2} and Lemma\n\\ref{lem-add}(iii), we deduce that\n\\begin{align*}\n|\\Gamma_1|&\\le \\int_{d(x,y)<(2A_0)^{-1}R}|K(x,y)||f_1(y)-f_1(x)|\\,d\\mu(y)\\\\\n&\\ls\\frac 1{V_R(x_1)}\\lf(\\frac rR\\r)^\\gz\n\\int_{d(x,y)<(2A_0)^{-1}R}\\frac 1{V(x,y)}\\lf[\\frac{d(x,y)}R\\r]^\\bz\\,d\\mu(y)\\ls\\frac 1{V_R(x_1)}\\lf(\\frac rR\\r)^\\gz.\n\\end{align*}\nCombining the estimates of $\\Gamma_1$ and $\\Gamma_2$, we obtain the desired estimate of $Tf_1(x)$.\n\nNow we estimate $Tf_2(x)$. 
Indeed, from \\eqref{eq:Ksize}, \\eqref{eq:Kreg}, \\eqref{eq:5}, $|f_2|\\le |f|$, the\nsize condition of $f$ and Lemma \\ref{lem-add}(iii), we deduce that\n\\begin{align*}\n|Tf_2(x)|&\\le\\lf|\\int_X [K(x,y)-K(x,x_1)]f_2(y)\\,d\\mu(y)\\r|+|K(x,x_1)|\\lf|\\int_X f_2(y)\\,d\\mu(y)\\r|\\\\\n&\\ls \\int_{d(x_1,y)<(2A_0)^{-2}R} |K(x,y)-K(x,x_1)||f_2(y)|\\,d\\mu(y)+\\frac 1{V(x,x_1)}\\lf(\\frac rR\\r)^\\gz\\\\\n&\\ls\\int_{d(x_1,y)<(2A_0)^{-2}R}\\lf[\\frac{d(x_1,y)}R\\r]^{s}\\frac 1{V(x,x_1)}\\lf(\\frac r{r+d(x_1,y)}\\r)^\\gz\n\\frac 1{V(x_1,y)}\\,d\\mu(y)+\\frac 1{V_R(x_1)}\\lf(\\frac rR\\r)^\\gz\\\\\n&\\ls\\frac 1{V_R(x_1)}\\lf(\\frac rR\\r)^\\gz\\lf\\{\\int_{d(x_1,y)<(2A_0)^{-2}R}\\lf[\\frac{d(x_1,y)}R\\r]^{s-\\gz}\n\\frac 1{V(x_1,y)}\\,d\\mu(y)+1\\r\\}\\sim\\frac 1{V_R(x_1)}\\lf(\\frac rR\\r)^\\gz.\n\\end{align*}\n\nFinally, by \\eqref{eq:3}, \\eqref{eq:Ksize} and \\eqref{eq:4}, we conclude that\n\\begin{align*}\n|Tf_3(x)|&\\le\\int_{d(x,y)\\ge (2A_0)^{-3}R}|K(x,y)||f_3(y)|\\,d\\mu(y)\\\\\n&\n\\ls\\int_{d(x,y)\\ge (2A_0)^{-3}R} \\frac 1{V(x,y)}|f_3(y)|\\,d\\mu(y)\\ls\\frac 1{V_R(x_1)}\\lf(\\frac rR\\r)^\\gz.\n\\end{align*}\n\nSince $Tf=Tf_1+Tf_2+Tf_3$, we summarize the estimates of $\\{Tf_i(x)\\}_{i=1}^3$ and then complete\nthe proof of \\eqref{eq:Tfsize} in the case $R=d(x_1, x)\\ge 2A_0r$. This finishes the proof of\n\\eqref{eq:Tfsize}.\n\\end{proof}\n\n\n\\subsection{Proof of the regularity estimate \\eqref{eq:Tfreg}}\\label{reg}\n\n\\begin{proof}[Proof of \\eqref{eq:Tfreg}]\nAssume that $\\rho:=d(x,x')\\le(2A_0)^{-1}[r+d(x_1,x)]$, we show \\eqref{eq:Tfreg} by considering the following three cases.\n\n{\\it Case 1) $\\rho\\ge (3A_0)^{-4}[r+d(x_1,x)]$.} In this case, we have\n$d(x,x')\\sim r+d(x_1,x)\\sim r+d(x_1,x')$, so that the size condition \\eqref{eq:Tfsize} and the doubling\ncondition \\eqref{eq:doub} directly imply \\eqref{eq:Tfreg}.\n\n{\\it Case 2) $\\rho<(3A_0)^{-4}[r+d(x_1,x)]$ and $R:=d(x_1,x)(2A_0)^{-1}[r+d(x_1,x)]}\n}|K(x,y)[|f(y)|+|f(x)|]\\,d\\mu(y)=:\\Gamma_{1,1}+\\Gamma_{1,2}.\n\\end{align*}\nBy \\eqref{eq:Ksize}, the regularity condition of $f$ and Lemma \\ref{lem-add}(ii), we conclude that\n\\begin{align*}\n\\Gamma_{1,1}&\\ls\\int_{d(x,y)<8A_0^3\\rho}\\frac 1{V(x,y)}\\lf[\\frac{d(x,y)}{r+d(x_1,x)}\\r]^\\bz\n\\frac 1{V_r(x_1)}\\,d\\mu(y)\\ls \\lf(\\frac\\rho{r}\\r)^\\bz\\frac 1{V_r(x_1)}.\n\\end{align*}\nLikewise, using the size condition of $f$, we obtain\n\\begin{align*}\n\\Gamma_{1,2}&\\ls\\frac 1{V_r(x_1)}\n\\int_{d(x,y)<8A_0^3\\rho}\\frac 1{V(x,y)}\\lf[\\frac{d(x,y)}{r+d(x_1,x)}\\r]^\\bz\\,d\\mu(y)\\ls \\lf(\\frac\\rho{r}\\r)^\\bz\\frac 1{V_r(x_1)}.\n\\end{align*}\nTherefore,\n$\n|\\Gamma_1(x)|\\ls (\\frac\\rho{r})^\\bz\\frac 1{V_r(x_1)}.\n$\n\nNotice that, if $y\\in\\supp\\varphi\\subset B(x,(2A_0)^3\\rho)$, then $d(x',y)<9A_0^4\\rho$, so, by an argument\nsimilar to the estimation of $\\Gamma_1(x)$, we also conclude that\n$|\\Gamma_1(x')|\\ls (\\frac\\rho{r})^\\bz\\frac 1{V_r(x_1)}$.\nThus, we have\n$$|\\Gamma_1(x)-\\Gamma_1(x')|\\le |\\Gamma_1(x)|+|\\Gamma_1(x')|\\ls \\lf(\\frac\\rho{r}\\r)^\\bz\\frac 1{V_r(x_1)}.$$\n\nTo estimate $|\\Gamma_2(x)-\\Gamma_2(x')|$, by \\eqref{eq:Kcany} and the fact $\\int_X K(x,y)\\,d\\mu(y)=\\int_X K(x',y)\\,d\\mu(y)$, we write\n\\begin{align*}\n\\Gamma_2(x)-\\Gamma_2(x')\n&=\\int_X [K(x,y)-K(x',y)][f(y)-f(x)]\\zeta(y)\\,d\\mu(y)\\\\\n&\\quad+[f(x)-f(x')]\\int_X K( x',y)\\vz(y)\\,d\\mu(y)=:\\Gamma_{2,1}+\\Gamma_{2,2}.\n\\end{align*}\nTo estimate $\\Gamma_{2,1}$, by $0\\le\\zeta\\le 1$ and the support condition of $\\zeta$, we 
have\n\\begin{align*}\n|\\Gamma_{2,1}|&\\le\\int_{\\gfz{d(x,y)\\ge(2A_0)^2\\rho}{d(x,y)\\le (2A_0)^{-1}[r+d(x_1,x)]}}\n|K(x,y)-K(x',y)||f(y)-f(x)|\\,d\\mu(y)\\\\\n&\\quad+\\int_{\\gfz{d(x,y)\\ge(2A_0)^2\\rho}{(x,y)>(2A_0)^{-1}[r+d(x_1,x)]}}\n|K(x,y)-K(x',y)|[|f(y)|+|f(x)|]\\,d\\mu(y)=:\\Gamma_{2,1,1}+\\Gamma_{2,1,2}.\n\\end{align*}\nBy \\eqref{eq:Ksize}, the regularity condition of $f$, $d(x,x_1)(2A_0)^{-1}[r+d(x_1,x)]}}\n\\lf[\\frac{d(x,x')}{d(x,y)}\\r]^{s}\\frac 1{V(x,y)}\\,d\\mu(y)\\\\\n&\\ls \\lf(\\frac\\rho{r}\\r)^\\bz\\frac 1{V_r(x_1)}\n\\int_{\nd(x,y)\\ge(2A_0)^2\\rho}\\lf[\\frac{\\rho}{d(x,y)}\\r]^{s-\\bz}\\frac 1{V(x,y)}\\,d\\mu(y)\\ls \\lf(\\frac\\rho{r}\\r)^\\bz\\frac 1{V_r(x_1)}.\n\\end{align*}\nWe therefore obtain $\n|\\Gamma_{2,1}|\\ls(\\frac\\rho{r})^\\bz\\frac 1{V_r(x_1)}.$\n\nNow we estimate $\\Gamma_{2,2}$. By Lemma \\ref{lem:3.24}, the regularity of $f$ and $d(x, x_1)r_0$\nand hence $d(x,x')=\\rho\\le (3A_0)^{-4}[r_0+R]\\le (2A_0)^{-1}d(x,y)$. By this,\n\\eqref{eq:Kregb} and Lemma \\ref{lem-add}(ii), we conclude that\n\\begin{align*}\n|Tf_2(x)-Tf_2(x')|&\\le\\int_{d(x_1,y)\\le (2A_0)^{-2}R} |K(x,y)-K(x',y)||f_2(y)|\\,d\\mu(y)\\\\\n&\\ls\\int_{d(x_1,y)\\le (2A_0)^{-2}R}\\lf[\\frac{d(x,x')}{d(x,y)}\\r]^s\n\\frac 1{V(x,y)}\\lf[\\frac{r_0}{d(x,y)}\\r]^\\sigma|f(y)|\\,d\\mu(y)\\\\\n&\\ls\\lf[\\frac{d(x,x')}{R}\\r]^s\\frac 1{V_R(x)}\\lf(\\frac{r_0}{R}\\r)^\\sigma\\\\\n&\\ls\\lf[\\frac{d(x,x')}{r_0+d(x_1,x)}\\r]^s\\frac 1{V_{r_0}(x_1)+V(x_1,x)}\n\\lf[\\frac{r_0}{r_0+d(x_1,x)}\\r]^\\gz.\n\\end{align*}\nThis proves \\eqref{eq:Tfreg} and hence finishes the proof of Theorem \\ref{thm:Kibdd}.\n\\end{proof}\n\n\n\\section{Homogeneous continuous Calder\\'{o}n reproducing formulae}\\label{hcrf}\n\nSuppose that $\\{Q_k\\}_{k=-\\fz}^\\fz$ is an $\\exp$-ATI. For any fixed $N\\in\\nn$, we have\n\\begin{equation}\\label{eq:defRT}\nI=\\sum_{k=-\\fz}^\\fz Q_k=\\sum_{k=-\\fz}^\\fz\\sum_{l=-\\fz}^\\fz Q_{k+l}Q_k=\n\\sum_{k=-\\fz}^\\fz Q_k^NQ_k+\\sum_{k=-\\fz}^\\fz\\sum_{|l|>N} Q_{k+l}Q_k=:T_N+R_N\n\\quad\\text{in}\\; L^2(X),\n\\end{equation}\nwhere $Q_k^N:=\\sum_{|l|\\le N}Q_{k+l}$ for any $k\\in\\zz$. To deal with the remainder $R_N$\n(see Section \\ref{RN}), we need to estimate compositions of $\\exp$-ATIs first (see Section \\ref{com}).\nThen the homogeneous continuous reproducing formula is established in Section \\ref{pr}.\n\n\\subsection{Compositions of two exp-ATIs}\\label{com}\n\nIn this subsection, we consider the compositions of two $\\exp$-ATIs.\n\\begin{lemma}\\label{lem:ccrf1}\nLet $\\{\\wz Q_j\\}_{j\\in\\zz}$ and $\\{{Q}_k\\}_{k\\in\\zz}$ be two $\\exp$-{\\rm ATI}s.\nFix $\\eta'\\in(0,\\eta)$. 
Then, for any $j,\\ k\\in\\zz$, the kernel of\n$\\wz{Q}_jQ_k$, still denoted by $\\wz{Q}_jQ_k$, has the following properties:\n\\begin{enumerate}\n\\item for any $x,\\ y\\in X$,\n\\begin{align}\\label{eq:mixsize}\n\\lf|\\wz{Q}_jQ_k(x,y)\\r|&\\le C\\dz^{|j-k|\\eta}\\frac 1{V_{\\dz^{j\\wedge k}}(x)}\n\\exp\\lf\\{-c\\lf[\\frac{d(x,y)}{\\dz^{j\\wedge k}}\\r]^a\\r\\}\\exp\\lf\\{-c\\lf[\\frac{d(x,\\CY^{j\\wedge k})}{\\dz^{j\\wedge k}}\\r]^a\\r\\};\n\\end{align}\n\\item if $d(x,x')\\le(2A_0)^{-2}d(x,y)$ with $x\\neq y$, then\n\\begin{align}\\label{eq:mixregx}\n&\\lf|\\wz{Q}_jQ_k(x,y)-\\wz{Q}_jQ_k(x',y)\\r|+\\lf|\\wz{Q}_jQ_k(y,x)-\\wz{Q}_jQ_k(y,x')\\r|\\\\\n&\\quad\\le C\\dz^{|j-k|(\\eta-\\eta')}\\lf[\\frac{d(x,x')}{\\dz^{j\\wedge k}}\\r]^{\\eta'}\n\\frac 1{V_{\\dz^{j\\wedge k}}(x)}\n\\exp\\lf\\{-c\\lf[\\frac{d(x,y)}{\\dz^{j\\wedge k}}\\r]^a\\r\\}\\exp\\lf\\{-c\\lf[\\frac{d(x,\\CY^{j\\wedge k})}{\\dz^{j\\wedge k}}\\r]^a\\r\\};\\noz\n\\end{align}\n\\item if $d(x,x')\\le(2A_0)^{-3}d(x,y)$ and $d(y,y')\\le(2A_0)^{-3}d(x,y)$ with $x\\neq y$, then\n\\begin{align}\\label{eq:mixdreg}\n&\\lf|\\lf[\\wz{Q}_jQ_k(x,y)-\\wz{Q}_jQ_k(x',y)\\r]-\\lf[\\wz{Q}_jQ_k(x,y')-\\wz{Q}_jQ_k(x',y')\\r]\\r|\\\\\n&\\noz\\quad\\le C\\dz^{|j-k|(\\eta-\\eta')}\\lf[\\frac{d(x,x')}{\\dz^{j\\wedge k}}\\r]^{\\eta'}\n\\lf[\\frac{d(y,y')}{\\dz^{j\\wedge k}}\\r]^{\\eta'}\n\\frac 1{V_{\\dz^{j\\wedge k}}(x)}\n\\exp\\lf\\{-c\\lf[\\frac{d(x,y)}{\\dz^{j\\wedge k}}\\r]^a\\r\\}\\\\\n&\\qquad\\times\\exp\\lf\\{-c\\lf[\\frac{d(x,\\CY^{j\\wedge k})}{\\dz^{j\\wedge k}}\\r]^a\\r\\};\n\\noz\n\\end{align}\n\\item for any $x,\\ y\\in X$,\n\\begin{equation}\\label{eq:mixcan}\n\\int_X \\wz{Q}_jQ_k(x,y')\\,d\\mu(y')=0=\\int_X \\wz{Q}_jQ_k(x',y)\\,d\\mu(x'),\n\\end{equation}\n\\end{enumerate}\nwhere, in (i), (ii) and (iii), $C$ and $c$ are positive constants independent of $j$, $k$, $x$, $y$, $x'$ and\n$y'$.\nMoreover, according to Remark \\ref{rem:andef}, in (i), (ii) and (iii), the factors ${V_{\\dz^{j\\wedge k}}(x)}$\nand $d(x,\\CY^{j\\wedge k})$ can be replaced, respectively, by ${V_{\\dz^{j\\wedge k}}(y)}$\nand $d(y,\\CY^{j\\wedge k})$, but with the factor $\\exp\\{-c[\\frac{d(x,y)}{\\dz^{j\\wedge k}}]^a\\}$\nreplaced by $\\exp\\{-c'[\\frac{d(x,y)}{\\dz^{j\\wedge k}}]^a\\}$ for some $c'\\in(0,c)$ independent of $j$, $k$, $x$,\n$y$, $x'$ and $y'$.\n\\end{lemma}\n\\begin{proof}\nFor any $j,\\ k\\in\\zz$ and $x,\\ y\\in X$, we have $\\wz{Q}_jQ_k(x,y)=\\int_X\\wz{Q}_j(x,z)Q_k(z,y)\\,d\\mu(z)$.\nBy \\eqref{eq:etisize}, the dominated convergence theorem and the cancellation property of\n$\\wz{Q}_j$ and $Q_k$ (see Definition \\ref{def:eti}), we then have \\eqref{eq:mixcan}.\nSo it remains to prove \\eqref{eq:mixsize}, \\eqref{eq:mixregx} and \\eqref{eq:mixdreg}.\nBy symmetry, we may as well assume that $j\\ge k$.\n\nFirst we show \\eqref{eq:mixsize}. 
By the cancellation of $\\wz{Q}_j$, we have\n\\begin{align*}\n\\lf|\\wz{Q}_jQ_k(x,y)\\r|&\n=\\lf|\\int_X\\wz{Q}_j(x,z)[Q_k(z,y)-Q_k(x,y)]\\,d\\mu(z)\\r|\\\\\n&\\le\\int_{d(x,z)\\le\\dz^k}\\lf|\\wz{Q}_j(x,z)\\r||Q_k(z,y)-Q_k(x,y)|\\,d\\mu(z)+\n\\int_{d(x,z)>\\dz^k}\\lf|\\wz{Q}_j(x,z)\\r||Q_k(z,y)|\\,d\\mu(z)\\\\\n&\\quad+|Q_k(x,y)|\\int_{d(x,z)>\\dz^k}\\lf|\\wz{Q}_j(x,z)\\r|\\,d\\mu(z)=:\\RY_1+\\RY_2+\\RY_3.\n\\end{align*}\nFrom the size condition of $\\wz{Q}_j$, the smooth condition of $Q_k$ and Lemma \\ref{lem-add}(ii), we deduce\nthat\n\\begin{align*}\n\\RY_1&\\ls\\int_{d(x,z)\\le\\dz^k}\\frac 1{\\sqrt{V_{\\dz^j}(x)V_{\\dz^j}(z)}}\n\\exp\\lf\\{-\\nu\\lf[\\frac{d(x,z)}{\\dz^j}\\r]^a\\r\\}\n\\lf[\\frac{d(x,z)}{\\dz^k}\\r]^\\eta\\frac 1{\\sqrt{V_{\\dz^k}(x)V_{\\dz^k}(y)}}\\\\\n&\\quad\\times\\exp\\lf\\{-\\nu\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\\exp\\lf\\{-\\nu\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\n\\,d\\mu(z)\\\\\n&\\ls\\dz^{(j-k)\\eta}\\frac 1{\\sqrt{V_{\\dz^k}(x)V_{\\dz^k}(y)}}\\exp\\lf\\{-\\nu\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\nu\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\\\\n&\\quad\\times\\int_{d(x,z)\\le\\dz^k}\\frac 1{\\sqrt{V_{\\dz^j}(x)V_{\\dz^j}(z)}}\\lf[\\frac{d(x,z)}{\\dz^j}\\r]^\\eta\n\\exp\\lf\\{-\\nu\\lf[\\frac{d(x,z)}{\\dz^j}\\r]^a\\r\\}\\,d\\mu(z)\\\\\n&\\ls\\dz^{(j-k)\\eta}\\frac 1{\\sqrt{V_{\\dz^k}(x)V_{\\dz^k}(y)}}\\exp\\lf\\{-\\nu\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\nu\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}.\n\\end{align*}\nBy the size conditions of $\\wz{Q}_j$, $Q_k$ and Lemma \\ref{lem-add}(ii), we conclude that\n\\begin{align*}\n\\RY_2&\\ls\\int_{d(x,z)>\\dz^k} \\frac 1{\\sqrt{V_{\\dz^j}(x)V_{\\dz^j}(z)}}\\exp\\lf\\{-\\nu\\lf[\\frac{d(x,z)}{\\dz^j}\\r]^a\\r\\}\\\\\n&\\quad\\times\n\\frac 1{\\sqrt{V_{\\dz^k}(x)V_{\\dz^k}(y)}}\\exp\\lf\\{-\\nu\\lf[\\frac{d(y,z)}{\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\nu\\lf[\\frac{d(y,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\,d\\mu(z).\n\\end{align*}\nNotice that $a\\in(0,1]$. 
From the inequality $[d(x,y)]^a\\le A_0^a([d(x,z)]^a+[d(y,z)]^a)$, it follows that\n\\begin{align}\\label{eq-add7}\n\\exp\\lf\\{-\\frac\\nu2\\lf[\\frac{d(x,z)}{\\dz^j}\\r]^a\\r\\}\\exp\\lf\\{-\\frac\\nu2\\lf[\\frac{d(y,z)}{\\dz^k}\\r]^a\\r\\}\\le \\exp\\lf\\{-\\frac\\nu2\\lf[\\frac{d(x,y)}{A_0\\dz^k}\\r]^a\\r\\}.\n\\end{align}\nSimilarly, from the fact $[d(x,\\CY^k)]^a\\le A_0^a([d(x,y)]^a+[d(y,\\CY^k)]^a)$, it also follows that\n\\begin{align}\\label{eq-add8}\n\\exp\\lf\\{-\\frac\\nu4\\lf[\\frac{d(x,y)}{A_0\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\nu\\lf[\\frac{d(y,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\n\\le \\exp\\lf\\{-\\frac\\nu4\\lf[\\frac{d(x,\\CY^k)}{A_0\\dz^k}\\r]^a\\r\\}.\n\\end{align}\nCombining the above two formulae and Lemma \\ref{lem-add}(ii), together with the fact\n$\\exp\\{-\\frac\\nu4[\\frac{d(x,z)}{\\dz^j}]^a\\}\\le \\exp\\{-\\frac\\nu4\\dz^{(k-j)a}\\}\\ls \\dz^{(j-k)\\eta}$, we find that\n\\begin{align*}\n\\RY_2&\\ls\\dz^{(j-k)\\eta}\\frac 1{\\sqrt{V_{\\dz^k}(x)V_{\\dz^k}(y)}} \\exp\\lf\\{-\\frac\\nu4\\lf[\\frac{d(x,y)}{A_0\\dz^k}\\r]^a\\r\\}\\exp\\lf\\{-\\frac\\nu4\\lf[\\frac{d(x,\\CY^k)}{A_0\\dz^k}\\r]^a\\r\\}.\n\\end{align*}\nAgain, by the size conditions of $\\wz{Q}_j$ and $Q_k$, and Lemma \\ref{lem-add}(ii), we obtain\n\\begin{align*}\n\\RY_3&\\ls\\frac 1{\\sqrt{V_{\\dz^k}(x)V_{\\dz^k}(y)}}\\exp\\lf\\{-\\nu\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\nu\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\\\\\n&\\quad\\times\\int_{d(x,z)>\\dz^k}\\frac 1{\\sqrt{V_{\\dz^j}(x)V_{\\dz^j}(z)}}\\exp\\lf\\{-\\nu\\lf[\\frac{d(x,z)}{\\dz^j}\\r]^a\\r\\}\\,d\\mu(z)\\\\\n&\\ls\\exp\\lf\\{-\\frac\\nu 2 \\dz^{(k-j)a}\\r\\}\\frac 1{\\sqrt{V_{\\dz^k}(x)V_{\\dz^k}(y)}}\\exp\\lf\\{-\\nu\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\nu\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\},\n\\end{align*}\nwhich is desired, because $\\exp\\{-\\frac\\nu 2 \\dz^{(k-j)a}\\}\\ls \\dz^{(j-k)\\eta}$.\nTherefore, we obtain the size condition \\eqref{eq:mixsize}.\n\nNext we consider $|\\wz{Q}_jQ_k(x,y)-\\wz{Q}_jQ_k(x',y)|$ in \\eqref{eq:mixregx}, where\n$d(x,x')\\le(2A_0)^{-2}d(x,y)$ with $x\\neq y$.\nNotice that $d(x,x')\\le(2A_0)^{-2}d(x,y)$ implies that $(\\frac 43A_0)^{-1}d(x',y)\\le d(x,y)\\le \\frac 43A_0d(x',y)$.\nOn one hand, by this and \\eqref{eq:mixsize}, we find that\n\\begin{align}\\label{eq-add6}\n&\\lf|\\wz{Q}_jQ_k(x,y)-\\wz{Q}_jQ_k(x',y)\\r|\\\\\n&\\quad\\le\\lf|\\wz{Q}_jQ_k(x,y)\\r|+\\lf|\\wz{Q}_jQ_k(x',y)\\r|\\notag\\\\\n&\\quad\\ls\\dz^{(j-k)\\eta}\\frac 1{\\sqrt{V_{\\dz^k}(x)V_{\\dz^k}(y)}} \\exp\\lf\\{-c\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\\exp\\lf\\{-c\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}.\\notag\n\\end{align}\nOn the other hand, we write\n\\begin{align*}\n&\\lf|\\wz{Q}_jQ_k(x,y)-\\wz{Q}_jQ_k(x',y)\\r|\\\\\n&\\quad=\\lf|\\int_X\\lf[\\wz{Q}_j(x,z)-\\wz{Q}_j(x',z)\\r][Q_k(z,y)-Q_k(x,y)]\\,d\\mu(z)\\r|\\\\\n&\\quad\\le\\int_{d(x,z)<2A_0d(x,x')}\\lf|\\wz{Q}_j(x,z)-\\wz{Q}_j(x',z)\\r||Q_k(z,y)-Q_k(x,y)|\\,d\\mu(z)\\\\\n&\\qquad+\\int_{2A_0d(x,x')\\le d(x,z)\\le(2A_0)^{-1}d(x,y)}\\cdots\n+\\int_{d(x,z)>(2A_0)^{-1}d(x,y)}\\cdots=:\\RY_4+\\RY_5+\\RY_6.\n\\end{align*}\nFor the term $\\RY_4$, noticing that $d(x,z)<2A_0d(x,x')\\le (2A_0)^{-1}d(x,y)$, we apply the\nregularity condition to $|Q_k(z,y)-Q_k(x,y)|$ [see Remark \\ref{rem:andef}(iii)], and then use Lemma\n\\ref{lem-add}(ii) to deduce that\n\\begin{align*}\n\\RY_4&\\ls\\frac 
1{\\sqrt{V_{\\dz^k}(x)V_{\\dz^k}(y)}}\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\\\\n&\\quad\\times\\int_{d(x,z)<2A_0d(x,x')}\n\\lf[\\lf|\\wz{Q}_j(x,z)\\r|+\\lf|\\wz{Q}_j(x',z)\\r|\\r]\\lf[\\frac{d(x,z)}{\\dz^k}\\r]^\\eta\\,d\\mu(z)\n\\\\\n&\\ls\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta\\frac 1{\\sqrt{V_{\\dz^k}(x)V_{\\dz^k}(y)}}\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}.\n\\end{align*}\nTo deal with $\\RY_5$, we first use the regularity conditions to estimate both $|\\wz{Q}_j(x,z)-\\wz{Q}_j(x',z)|$\nand $|Q_k(z,y)-Q_k(x,y)|$ [see Remark \\ref{rem:andef}(iii)], and then use Lemma \\ref{lem-add}(ii) to conclude\nthat\n\\begin{align*}\n\\RY_5&\\ls\\frac 1{\\sqrt{V_{\\dz^k}(x)V_{\\dz^k}(y)}}\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\} \\\\\n&\\quad\\times\\int_X \\lf[\\frac{d(x,x')}{\\dz^j}\\r]^\\eta\n\\frac 1{\\sqrt{V_{\\dz^j}(x)V_{\\dz^j}(z)}}\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,z)}{\\dz^j}\\r]^a\\r\\}\n\\lf[\\frac{d(x,z)}{\\dz^k}\\r]^\\eta\\,d\\mu(z)\\\\\n&\\ls\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta\\frac 1{\\sqrt{V_{\\dz^k}(x)V_{\\dz^k}(y)}}\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}.\n\\end{align*}\nFor the term $\\RY_6$, noticing that $d(x,x')\\le (2A_0)^{-2}d(x,y)<(2A_0)^{-1}d(x,z)$, we apply the regularity of $|\\wz{Q}_j(x,z)-\\wz{Q}_j(x',z)|$ [see Remark \\ref{rem:andef}(iii)],\ntogether with the size conditions of $Q_k$ when $d(x,z)> \\dz^k$ and the regularity of $Q_k$ when $d(x,z)\\le \\dz^k$,\nto obtain\n\\begin{align*}\n\\RY_6&\\ls\\int_{d(x,z)>(2A_0)^{-1}d(x,y)}\n\\lf[\\frac{d(x,x')}{\\dz^j}\\r]^\\eta\\frac 1{\\sqrt{V_{\\dz^j}(x)V_{\\dz^j}(z)}}\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,z)}{\\dz^j}\\r]^a\\r\\}\n\\min\\lf\\{1,\\;\\lf[\\frac{d(x,z)}{\\dz^k}\\r]^\\eta\\r\\}\\\\\n&\\quad\\times\\frac 1{V_{\\dz^k}(y)}\\lf(\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y,z)}{\\dz^k}\\r]^a\\r\\}\n+\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\\r)\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\,d\\mu(z).\n\\end{align*}\nAs in the estimation of \\eqref{eq-add7}, we always have\n$$\n\\exp\\lf\\{-\\frac{\\nu'} 2\\lf[\\frac{d(x,z)}{\\dz^j}\\r]^a\\r\\}\n\\lf(\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y,z)}{\\dz^k}\\r]^a\\r\\}\n+\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\\r)\n\\ls \\exp\\lf\\{-\\frac{\\nu'}2\\lf[\\frac{d(x,y)}{A_0\\dz^k}\\r]^a\\r\\}\n$$\nfor some positive constant $c$ independent of $x,\\ y,\\ z$ and $k,\\ j$. 
Consequently,\n\\begin{align*}\n\\RY_6&\\ls\n\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta\\frac 1{V_{\\dz^k}(y)}\\exp\\lf\\{-\\frac{\\nu'}2\\lf[\\frac{d(x,y)}{A_0\\dz^k}\\r]^a\\r\\} \\exp\\lf\\{-\\nu'\\lf[\\frac{d(y,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\\\\n&\\quad\\times\n\\int_{d(x,z)>(2A_0)^{-1}d(x,y)}\n\\lf[\\frac{d(x,z)}{\\dz^j}\\r]^\\eta\\frac 1{\\sqrt{V_{\\dz^j}(x)V_{\\dz^j}(z)}}\\exp\\lf\\{-\\frac{\\nu'} 2\\lf[\\frac{d(x,z)}{\\dz^j}\\r]^a\\r\\}\\,d\\mu(z)\\\\\n&\\ls\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta\\frac 1{\\sqrt{V_{\\dz^k}(x)V_{\\dz^k}(y)}}\\exp\\lf\\{-\\frac{\\nu'}4\\lf[\\frac{d(x,y)}{A_0^2\\dz^k}\\r]^a\\r\\} \\exp\\lf\\{-\\frac{\\nu'}4\\lf[\\frac{d(x,\\CY^k)}{A_0^2\\dz^k}\\r]^a\\r\\},\n\\end{align*}\nwhere in the last step we also used \\eqref{eq-add8}.\n\nCombining the estimates of $\\RY_4$ through $\\RY_6$, we conclude that\n\\begin{align}\\label{eq-add9}\n&\\lf|\\wz{Q}_jQ_k(x,y)-\\wz{Q}_jQ_k(x',y)\\r|\\\\\n&\\quad\\ls \\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta\\frac 1{\\sqrt{V_{\\dz^k}(x)V_{\\dz^k}(y)}}\\exp\\lf\\{-\\frac{\\nu'}4\\lf[\\frac{d(x,y)}{A_0\\dz^k}\\r]^a\\r\\} \\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,\\CY^k)}{A_0\\dz^k}\\r]^a\\r\\}.\\notag\n\\end{align}\nTaking the geometric means between \\eqref{eq-add6} and \\eqref{eq-add9}, we then obtain the desired estimate of\n$|\\wz{Q}_jQ_k(x,y)-\\wz{Q}_jQ_k(x',y)|$ in \\eqref{eq:mixregx}.\n\n\nNow we estimate $|\\wz{Q}_jQ_k(y,x)-\\wz{Q}_jQ_k(y,x')|$ in \\eqref{eq:mixregx}.\nAgain $d(x,x')\\le(2A_0)^{-2}d(x,y)$ implies that $(\\frac 43A_0)^{-1}d(x',y)\\le d(x,y)\\le \\frac 43A_0d(x',y)$.\nOn one hand, by \\eqref{eq:mixsize}, we have\n\\begin{align}\\label{eq-add66}\n\\lf|\\wz{Q}_jQ_k(y,x)-\\wz{Q}_jQ_k(y,x')\\r|\\ls\\frac {\\dz^{(j-k)\\eta}}{\\sqrt{V_{\\dz^k}(x)V_{\\dz^k}(y)}} \\exp\\lf\\{-c\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\\exp\\lf\\{-c\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}.\n\\end{align}\nOn the other hand, using the size condition of $\\wz Q_j$, as well the size and the regularity conditions of\n$Q_k$, and invoking (i) and (ii) of Remark \\ref{rem:andef}, we find that\n\\begin{align*}\n&\\lf|\\wz{Q}_jQ_k(y,x)-\\wz{Q}_jQ_k(y,x')\\r|\\\\\n&\\quad=\\lf|\\int_X\\wz{Q}_j(y,z)[Q_k(z,x)-Q_k(z,x')]\\,d\\mu(z)\\r|\\\\\n&\\quad\\ls \\int_X \\frac1{V_{\\dz^j}(y)V_{\\dz^k}(z)}\n\\min\\lf\\{1,\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta\\r\\}\\exp\\lf\\{-\\nu'\\lf[\\frac{d(z,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\\\\n&\\qquad\\times\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y,z)}{\\dz^j}\\r]^a\\r\\}\n\\lf(\\exp\\lf\\{-\\nu'\\lf[\\frac{d(z,x)}{\\dz^k}\\r]^a\\r\\}\n+\\exp\\lf\\{-\\nu'\\lf[\\frac{d(z,x')}{\\dz^k}\\r]^a\\r\\}\\r)\\,d\\mu(z).\n\\end{align*}\nNoticing that $k\\le j$, by \\eqref{eq-add8}, we have\n\\begin{align*}\n&\\min\\lf\\{1,\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta\\r\\}\\exp\\lf\\{-\\nu'\\lf[\\frac{d(z,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\frac{\\nu'}{4}\\lf[\\frac{d(y,z)}{\\dz^j}\\r]^a\\r\\}\\\\\n&\\quad\\ls\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta \\exp\\lf\\{-\\nu'\\lf[\\frac{d(z,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\frac{\\nu'}{4}\\lf[\\frac{d(y,z)}{\\dz^k}\\r]^a\\r\\}\\\\\n&\\quad\\ls\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta \\exp\\lf\\{-\\frac{\\nu'}{4}\\lf[\\frac{d(y,\\CY^k)}{A_0\\dz^k}\\r]^a\\r\\}.\n\\end{align*}\nBy \\eqref{eq:doub}, we obtain\n$\n\\frac1{V_{\\dz^k}(z)}\\ls \\frac1{V_{\\dz^k}(x)} [1+\\frac{d(x,z)}{\\dz^k}]^\\omega,\n$\nwhich, together with $d(x,x')\\ls d(x,y)$, further implies 
that\n\\begin{align*}\n&\\frac1{V_{\\dz^k}(z)}\\lf(\\exp\\lf\\{-\\nu'\\lf[\\frac{d(z,x)}{\\dz^k}\\r]^a\\r\\}\n+\\exp\\lf\\{-\\nu'\\lf[\\frac{d(z,x')}{\\dz^k}\\r]^a\\r\\}\\r)\\\\\n&\\quad\\ls \\frac1{V_{\\dz^k}(x)}\\lf[1+\\frac{d(x,y)}{\\dz^k}\\r]^\\omega \\lf(\\exp\\lf\\{-\\frac{\\nu'}2\\lf[\\frac{d(z,x)}{\\dz^k}\\r]^a\\r\\}\n+\\exp\\lf\\{-\\frac{\\nu'}2\\lf[\\frac{d(z,x')}{\\dz^k}\\r]^a\\r\\}\\r).\n\\end{align*}\nAs in the estimation of \\eqref{eq-add7}, we always have\n\\begin{align*}\n&\\exp\\lf\\{-\\frac{\\nu'}2\\lf[\\frac{d(y,z)}{\\dz^j}\\r]^a\\r\\}\\lf(\\exp\\lf\\{-\\frac{\\nu'}2\\lf[\\frac{d(z,x)}{\\dz^k}\\r]^a\\r\\}\n+\\exp\\lf\\{-\\frac{\\nu'}2\\lf[\\frac{d(z,x')}{\\dz^k}\\r]^a\\r\\}\\r)\\\\\n&\\quad\\le \\exp\\lf\\{-\\frac{\\nu'}2\\lf[\\frac{d(y,x)}{A_0\\dz^k}\\r]^a\\r\\}\n+\\exp\\lf\\{-\\frac{\\nu'}2\\lf[\\frac{d(y,x')}{A_0\\dz^k}\\r]^a\\r\\} \\le 2\\exp\\lf\\{-\\frac{\\nu'}2\\lf[\\frac{d(y,x)}{2A_0^2\\dz^k}\\r]^a\\r\\}.\n\\end{align*}\nApplying the above estimates, Lemma \\ref{lem-add}(ii) and \\eqref{eq-add8}, we conclude that\n\\begin{align*}\n&\\lf|\\wz{Q}_jQ_k(y,x)-\\wz{Q}_jQ_k(y,x')\\r|\\\\\n&\\quad\\ls \\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta\n\\frac1{V_{\\dz^k}(x)}\\exp\\lf\\{-\\frac{\\nu'}4\\lf[\\frac{d(y,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\frac{\\nu'}4\\lf[\\frac{d(y,x)}{2A_0^2\\dz^k}\\r]^a\\r\\}\\\\\n&\\qquad\\times\\int_X \\frac1{V_{\\dz^j}(y)}\n\\exp\\lf\\{-\\frac{\\nu'} 4\\lf[\\frac{d(y,z)}{\\dz^j}\\r]^a\\r\\}\\,d\\mu(z)\\\\\n&\\quad\\ls \\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta \\frac1{V_{\\dz^k}(x)}\n\\exp\\lf\\{-c\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\exp\\lf\\{-c\\lf[\\frac{d(y,x)}{\\dz^k}\\r]^a\\r\\}.\n\\end{align*}\nTaking the geometric means between the last estimate and \\eqref{eq-add66}, we obtain the desired estimate of\n$|\\wz{Q}_jQ_k(y,x)-\\wz{Q}_jQ_k(y,x')|$ in \\eqref{eq:mixregx}.\n\nFinally, we prove \\eqref{eq:mixdreg}.\nIf $d(x,x')\\le(2A_0)^{-3}d(x,y)$ and $d(y,y')\\le(2A_0)^{-3}d(x,y)$ with $x\\neq y$, by \\eqref{eq:mixsize} and\n(i) and (ii) of Remark \\ref{rem:andef}, we conclude that\n\\begin{align}\\label{eq:*1}\n&\\lf|\\lf[\\wz{Q}_jQ_k(x,y)-\\wz{Q}_jQ_k(x',y)\\r]-\\lf[\\wz{Q}_jQ_k(x,y')-\\wz{Q}_jQ_k(x',y')\\r]\\r|\\\\\n&\\quad\\ls\\dz^{(j-k)\\eta}\\frac{1}{V_{\\dz^k}(x)}\\exp\\lf\\{-c\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-c\\lf[\\frac{d(y,\\CY^k)}{\\dz^k}\\r]^a\\r\\}.\\noz\n\\end{align}\nOnce we have proved that, when $d(x,x')\\le(2A_0)^{-3}d(x,y)$ and $d(y,y')\\le(2A_0)^{-3}d(x,y)$,\n\\begin{align}\\label{eq:mixdreg1}\n&\\lf|\\lf[\\wz{Q}_jQ_k(x,y)-\\wz{Q}_jQ_k(x',y)\\r]-\\lf[\\wz{Q}_jQ_k(x,y')-\\wz{Q}_jQ_k(x',y')\\r]\\r|\\\\\n&\\quad\\ls\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta\\lf[\\frac{d(y,y')}{\\dz^k}\\r]^\\eta\\frac{1}{V_{\\dz^k}(x)}\n\\exp\\lf\\{-c\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\\exp\\lf\\{-c\\lf[\\frac{d(y,\\CY^k)}{\\dz^k}\\r]^a\\r\\},\\noz\n\\end{align}\nthen taking the geometric mean between \\eqref{eq:*1} and \\eqref{eq:mixdreg1} gives \\eqref{eq:mixdreg}.\nThus, it remains to prove \\eqref{eq:mixdreg1}.\n\nTo show \\eqref{eq:mixdreg1}, we observe that\n\\begin{align*}\n&\\lf|\\lf[\\wz{Q}_jQ_k(x,y)-\\wz{Q}_jQ_k(x',y)\\r]-\\lf[\\wz{Q}_jQ_k(x,y')-\\wz{Q}_jQ_k(x',y')\\r]\\r|\\\\\n&\\quad\\le\\int_X\\lf|\\wz{Q}_j(x,z)-\\wz{Q}_j(x',z)\\r||Q_k(x,y)-Q_k(z,y)-Q_k(x,y')+Q_k(z,y')|\\,d\\mu(z)\\\\\n&\\quad=\\sum_{i=1}^3\\int_{W_i}\\lf|\\wz{Q}_j(x,z)-\\wz{Q}_j(x',z)\\r|\n|Q_k(x,y)-Q_k(z,y)-Q_k(x,y')+Q_k(z,y')|\\,d\\mu(z)=:\\sum_{i=1}^3\\RZ_i,\n\\end{align*}\nwhere\n\\begin{align*}\nW_1&:=\\{z\\in X:\\ 
d(x,z)<2A_0d(x,x')\\},\\\\\nW_2&:=\\{z\\in X:\\ 2A_0d(x,x')\\le d(x,z)\\le (2A_0)^{-2}d(x,y)\\}\n\\end{align*}\nand\n$$\nW_3:=\\{z\\in X:\\ d(x,z)\\ge(2A_0)^{-2}d(x,y)\\}.\n$$\nNotice that $d(x,z)<2A_0d(x,x')\\le(2A_0)^{-2}d(x,y)$ when $z\\in W_1$. By this, (i) and (iii) of Remark\n\\ref{rem:andef} for $Q_k$ and Lemma \\ref{lem-add}(iii), we conclude that\n\\begin{align*}\n\\RZ_1&\\ls\\lf[\\frac{d(y,y')}{\\dz^k}\\r]^\\eta\\frac 1{V_{\\dz^k}(x)}\n\\exp\\lf\\{-\\frac{\\nu'}2\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\\\\n&\\quad\\times\\int_{W_1}\\lf[\\lf|\\wz{Q}_j(x,z)\\r|+\\lf|\\wz{Q}_j(x',z)\\r|\\r]\\lf[\\frac{d(x,z)}{\\dz^k}\\r]^\\eta\n\\,d\\mu(z)\\\\\n&\\ls\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta\\frac 1{V_{\\dz^k}(x)}\\lf[\\frac{d(y,y')}{\\dz^k}\\r]^\\eta\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\\exp\\lf\\{-\\frac{\\nu'}2\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}.\n\\end{align*}\n\nFrom (i) and (iii) of Remark \\ref{rem:andef} for $Q_k$ and $\\wz{Q}_j$, together with Lemma \\ref{lem-add}(ii),\nwe deduce that\n\\begin{align*}\n\\RZ_2&\\ls\\lf[\\frac{d(y,y')}{\\dz^k}\\r]^\\eta\\frac 1{V_{\\dz^k}(x)}\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\\\\n&\\quad\\times\\int_{W_2}\\lf[\\frac{d(x,x')}{\\dz^j}\\r]^\\eta\\frac{1}{V_{\\dz^j}(x)}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,z)}{\\dz^j}\\r]^a\\r\\}\\lf[\\frac{d(x,z)}{\\dz^k}\\r]^\\eta\\,d\\mu(z)\\\\\n&\\ls\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta\\lf[\\frac{d(y,y')}{\\dz^k}\\r]^\\eta\\frac 1{V_{\\dz^k}(x)}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}.\n\\end{align*}\n\nTo estimate $\\RZ_3$, we claim that, for any $z\\in X$,\n\\begin{align}\\label{eq-x0}\n&|[Q_k(x,y)-Q_k(x,y')]-[Q_k(z,y)-Q_k(z,y')]|\\\\\n&\\quad\\ls\\lf[\\frac{d(y,y')}{\\dz^k}\\r]^\\eta\\lf[\\frac{d(x,z)}{\\dz^k}\\r]^\\eta\n\\lf[\\frac{1}{V_{\\dz^k}(y)}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\r.\\noz\\\\\n&\\qquad\n+\\frac{1}{V_{\\dz^k}(y')}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,y')}{\\dz^k}\\r]^a\\r\\}\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y',\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\noz\\\\\n&\\qquad\n+\\frac{1}{V_{\\dz^k}(y)}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(z,y)}{\\dz^k}\\r]^a\\r\\}\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\noz\\\\\n&\\qquad\n\\lf.+\\frac{1}{V_{\\dz^k}(y')}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(z,y')}{\\dz^k}\\r]^a\\r\\}\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y',\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\r].\\noz\n\\end{align}\n\nIndeed, if $d(y,y')\\le\\dz^k$ and $d(x,z)\\le \\dz^k$, then \\eqref{eq-x0} follows from the second difference\nregularity condition of $Q_k$ and Remark \\ref{rem:andef}(i).\nIf $d(y,y')>\\dz^k$ and $d(x,z)>\\dz^k$, then we use the size condition of $Q_k$ and Remark \\ref{rem:andef}(i)\nto obtain \\eqref{eq-x0}. 
If $d(y,y')\\le\\dz^k$ and $d(x,z)> \\dz^k$, then, by the regularity of $Q_k$ and Remark\n\\ref{rem:andef}(i), we have\n\\begin{align}\\label{eq-x1}\n&|[Q_k(x,y)-Q_k(x,y')]-[Q_k(z,y)-Q_k(z,y')]|\\\\\n&\\quad\\le |Q_k(x,y)-Q_k(x,y')|+|Q_k(z,y)-Q_k(z,y')|\\noz\\\\\n&\\quad\\ls\\lf[\\frac{d(y,y')}{\\dz^k}\\r]^\\eta \\exp\\lf\\{-\\nu'\\lf[\\frac{d(y,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\frac{1}{V_{\\dz^k}(y)}\n\\noz\\\\\n&\\qquad\\times\\lf(\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}+\\exp\\lf\\{-\\nu'\\lf[\\frac{d(z,y)}{\\dz^k}\\r]^a\\r\\}\\r).\\noz\n\\end{align}\nUsing $d(x,z)> \\dz^k$, we directly multiply $[d(x,z)\/\\dz^k]^\\eta$ in the right-hand side\nof \\eqref{eq-x1} to obtain \\eqref{eq-x0}. When $d(y,y')>\\dz^k$ and $d(x,z)\\le \\dz^k$, a symmetric argument\nalso leads to \\eqref{eq-x0}.\n\nThe treatment for the four terms in the bracket of the right-hand side of \\eqref{eq-x0} is similar. We only\nconsider the last term, which is also the\nmost difficult one. Thus, the estimation of $\\RZ_3$ is reduced to the estimation of\n\\begin{align*}\n\\wz\\RZ_3\n:={}&\\int_{W_3} \\lf|\\wz{Q}_j(x,z)-\\wz{Q}_j(x',z)\\r| \\lf[\\frac{d(y,y')}{\\dz^k}\\r]^\\eta\\lf[\\frac{d(x,z)}{\\dz^k}\\r]^\\eta \\\\\n&\\times\n\\frac{1}{V_{\\dz^k}(y')}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(z,y')}{\\dz^k}\\r]^a\\r\\}\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y',\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\,d\\mu(z).\n\\end{align*}\nWhen $z\\in W_3$, we have $d(x,x')\\le(2A_0)^{-3}d(x,y)\\le (2A_0)^{-1}d(x,z)$, so that we can apply (i) and\n(iii) of Remark \\ref{rem:andef} for $\\wz Q_j$, \\eqref{eq-add7} and Lemma \\ref{lem-add}(ii) to conclude that\n\\begin{align*}\n\\wz\\RZ_3\n&\\ls \\int_{W_3} \\lf[\\frac{d(x,x')}{\\dz^j}\\r]^\\eta \\lf[\\frac{d(y,y')}{\\dz^k}\\r]^\\eta\\lf[\\frac{d(x,z)}{\\dz^k}\\r]^\\eta \\frac{1}{V_{\\dz^j}(x)V_{\\dz^k}(y')} \\\\\n&\\quad\\times\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,z)}{\\dz^j}\\r]^a\\r\\}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(z,y')}{\\dz^k}\\r]^a\\r\\}\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y',\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\,d\\mu(z)\\\\\n&\\ls\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta \\lf[\\frac{d(y,y')}{\\dz^k}\\r]^\\eta\n\\frac{1}{V_{\\dz^k}(y')} \\exp\\lf\\{-\\frac{\\nu'} 2\\lf[\\frac{d(x,y')}{A_0\\dz^k}\\r]^a\\r\\}\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y',\\CY^k)}{\\dz^k}\\r]^a\\r\\}.\n\\end{align*}\nBy $d(y,y')\\le (2A_0)^{-2}d(x,y)$, we have $(\\frac 43A_0)^{-1}d(x,y)\\le d(x,y')\\le \\frac 43A_0 d(x,y)$ and hence\n$$d(x, \\CY^k)\\le A_0^2[d(x,y)+d(y,y')+d(y',\\CY^k)]\\le 2A_0^2 [d(x,y)+d(y',\\CY^k)],$$\nwhich further implies that\n$$\n\\exp\\lf\\{-\\frac{\\nu'} 2\\lf[\\frac{d(x,y')}{A_0\\dz^k}\\r]^a\\r\\}\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y',\\CY^k)}{\\dz^k}\\r]^a\\r\\}\n\\le \\exp\\lf\\{-\\frac{\\nu'} 8\\lf[\\frac{d(x,y)}{2A_0^2\\dz^k}\\r]^a\\r\\}\\exp\\lf\\{-\\frac{\\nu'} 8\\lf[\\frac{d(x,\\CY^k)}{2A_0^2\\dz^k}\\r]^a\\r\\}.\n$$\nAlso, the condition $d(y,y')\\le (2A_0)^{-2}d(x,y)$ and \\eqref{eq:doub} imply that\n$$\n\\frac{1}{V_{\\dz^k}(y')}\\ls \\frac{1}{V_{\\dz^k}(x)} \\lf[\\frac{\\dz^k+d(y',x)}{\\dz^k}\\r]^\\omega \\ls\n\\frac{1}{V_{\\dz^k}(y)} \\lf[1+\\frac{d(x,y)}{\\dz^k}\\r]^\\omega.\n$$\nTherefore, we obtain\n\\begin{align*}\n\\wz\\RZ_3\n\\ls\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta \\lf[\\frac{d(y,y')}{\\dz^k}\\r]^\\eta\n\\frac{1}{V_{\\dz^k}(x)} \\exp\\lf\\{-c\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\\exp\\lf\\{-c\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\},\n\\end{align*}\nso does $\\RZ_3$. Combining the estimates of $\\RZ_1$, $\\RZ_2$ and $\\RZ_3$, we obtain\n\\eqref{eq:mixdreg1} and hence \\eqref{eq:mixdreg}. 
This finishes the proof of Lemma \\ref{lem:ccrf1}.\n\\end{proof}\n\n\\begin{corollary}\\label{cor:mixb}\nLet all the notation be as in Lemma \\ref{lem:ccrf1}. Then $\\wz{Q}_jQ_k$ satisfies the following estimates:\n\\begin{enumerate}\n\\item if $d(x,x')\\le(2A_0)^{-1}d(x,y)$ with $x\\neq y$, then\n\\begin{align}\\label{eq:mixregxb}\n&\\lf|\\wz{Q}_jQ_k(x,y)-\\wz{Q}_jQ_k(x',y)\\r|+\\lf|\\wz{Q}_jQ_k(y,x)-\\wz{Q}_jQ_k(y,x')\\r|\\\\\n&\\quad\\le C\\dz^{|j-k|(\\eta-\\eta')}\\lf[\\frac{d(x,x')}{d(x,y)}\\r]^{\\eta'}\n\\frac 1{V_{\\dz^{j\\wedge k}}(x)}\\exp\\lf\\{-c'\\lf[\\frac{d(x,y)}{\\dz^{j\\wedge k}}\\r]^a\\r\\}\\noz\\\\\n&\\qquad\\times\\exp\\lf\\{-c'\\lf[\\frac{d(x,\\CY^{j\\wedge k})}{\\dz^{j\\wedge k}}\\r]^a\\r\\};\\noz\n\\end{align}\n\\item if $d(x,x')\\le(2A_0)^{-2}d(x,y)$ and $d(y,y')\\le(2A_0)^{-2}d(x,y)$ with $x\\neq y$, then\n\\begin{align}\\label{eq:mixdregb}\n&\\lf|\\lf[\\wz{Q}_jQ_k(x,y)-\\wz{Q}_jQ_k(x',y)\\r]-\\lf[\\wz{Q}_jQ_k(x,y')-\\wz{Q}_jQ_k(x',y')\\r]\\r|\\\\\n&\\noz\\quad\\le C\\dz^{|j-k|(\\eta-\\eta')}\\lf[\\frac{d(x,x')}{\\dz^{j\\wedge k}}\\r]^{\\eta'}\n\\lf[\\frac{d(y,y')}{\\dz^{j\\wedge k}}\\r]^{\\eta'}\n\\frac 1{V_{\\dz^{j\\wedge k}}(x)}\\exp\\lf\\{-c'\\lf[\\frac{d(x,y)}{\\dz^{j\\wedge k}}\\r]^a\\r\\}\\\\\n&\\qquad\\times\\exp\\lf\\{-c'\\lf[\\frac{d(x,\\CY^{j\\wedge k})}{\\dz^{j\\wedge k}}\\r]^a\\r\\},\n\\noz\n\\end{align}\n\\end{enumerate}\nwhere $C$ and $c'$ are positive constants independent of $k,\\ j\\in\\zz$ and $x,\\ y,\\ x',\\ y'\\in X$.\n\\end{corollary}\n\n\\begin{proof}\nFor (i), if $d(x,x')\\le(2A_0)^{-2}d(x,y)$, then \\eqref{eq:mixregxb} holds true by \\eqref{eq:mixregx} and\n\\begin{equation}\\label{eq:*2}\n\\lf[\\frac{d(x,y)}{\\dz^{j\\wedge k}}\\r]^\\eta\\exp\\lf\\{-c\\lf[\\frac{d(x,y)}{\\dz^{j\\wedge k}}\\r]^a\\r\\}\n\\ls\\exp\\lf\\{-c'\\lf[\\frac{d(x,y)}{\\dz^{j\\wedge k}}\\r]^a\\r\\}.\n\\end{equation}\nIf $(2A_0)^{-2}d(x,y)N}Q_{k_1+l_1}Q_{k_1}\\r)\\lf(\\sum_{|l_2|>N}Q_{k_2+l_2}Q_{k_2}\\r)^*\\r\\|_{L^2(X)\\to L^2(X)}\\\\\n&\\quad\\le\\sum_{|l_1|>N}\\sum_{|l_2|>N}\n\\lf\\|Q_{k_1+l_1}Q_{k_1}\\lf(Q_{k_2+l_2}Q_{k_2}\\r)^*\\r\\|_{L^2(X)\\to L^2(X)}\\noz\\\\\n&\\quad\\ls\\sum_{|l_1|>N}\\sum_{|l_2|>N}\\dz^{(|l_1|+|l_2|)\\eta\\thz}\\dz^{|k_1-k_2|\\eta(1-\\thz)}\\noz\n\\sim \\dz^{2N\\eta\\thz}\\dz^{|k_1-k_2|\\eta(1-\\thz)}.\n\\end{align}\nCombining this with Lemma \\ref{lem:CSlem} implies that\n$\n\\|R_N\\|_{L^2(X)\\to L^2(X)}\\ls \\dz^{N\\eta\\thz}.\n$\nTaking $\\eta':=\\eta\\thz$, we then complete the proof of Lemma \\ref{lem:ccrf2}.\n\\end{proof}\n\nTo consider the boundedness of $R_N$ on spaces of test functions, we begin with the following proposition.\n\n\\begin{proposition}\\label{prop:sizeRN}\nFor any $N\\in\\nn$, suppose that $R_N$ is defined as in \\eqref{eq:defRT} and $\\eta'\\in(0,\\eta)$. 
Then\n$R_N$ satisfies \\eqref{eq:Ksize}, \\eqref{eq:Kreg} and \\eqref{eq:Kdreg}\nwith $C_T$ therein replaced by $C\\dz^{(\\eta-\\eta')N}$, where $C$ is a positive\nconstant $C$ independent of $N$.\n\\end{proposition}\n\nThe proof of Proposition \\ref{prop:sizeRN} is based on the following several lemmas.\n\\begin{lemma}[{\\cite[Lemma 8.3]{AH13}}]\\label{lem:expsum}\nFor any fixed $a,\\ c\\in(0,\\fz)$, there exists a positive constant $C$ such that, for any $r\\in(0,\\fz)$\nand $x\\in X$,\n$$\n\\sum_{\\dz^k\\ge r}\\frac 1{V_{\\dz^k}(x)}\\exp\\lf\\{-c\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\n\\le \\frac C{V_r(x)}.\n$$\n\\end{lemma}\n\\begin{lemma}\\label{lem:sum2}\nFor any fixed $a,\\ c\\in(0,\\fz)$, there exists a positive constant $C$ such that, for any $x,\\ y\\in X$\nwith $x\\neq y$,\n$$\n\\sum_{k=-\\fz}^\\fz\\frac 1{V_{\\dz^k}(x)}\\exp\\lf\\{-c\\lf[\\frac{d(x,y)}{\\dz^k}\\r]\\r\\}\n\\exp\\lf\\{-c\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\n\\le \\frac C{V(x,y)}.\n$$\n\\end{lemma}\n\\begin{proof}\nTake $r:=d(x,y)$. Due to Lemma \\ref{lem:expsum}, to show this lemma, it suffices to prove that\n\\begin{equation*}\n\\RY:=\\sum_{\\dz^kN}|Q_{k+l}Q_{k}(x,y)|\\\\\n&\\ls\\sum_{l=N+1}^\\fz\\dz^{\\eta l}\\sum_{k=-\\fz}^\\fz\n\\frac 1{V_{\\dz^k}(x)}\\exp\\lf\\{-c\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-c\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\\\\n&\\quad+\\sum_{l=-\\fz}^{-N-1}\\dz^{-\\eta l}\\sum_{k=-\\fz}^\\fz\n\\frac 1{V_{\\dz^{k+l}}(x)}\\exp\\lf\\{-c\\lf[\\frac{d(x,y)}{\\dz^{k+l}}\\r]^a\\r\\}\n\\exp\\lf\\{-c\\lf[\\frac{d(x,\\CY^{k+l})}{\\dz^{k+l}}\\r]^a\\r\\}\\ls\\dz^{\\eta N}\\frac 1{V(x,y)}.\n\\end{align*}\n\nNext we prove \\eqref{eq:Kreg}. Suppose that $d(x,x')\\le (2A_0)^{-1}d(x,y)$ with $x\\neq y$. Then, from\n\\eqref{eq:mixregxb} and Lemma \\ref{lem:sum2}, we deduce that\n\\begin{align*}\n&|R_N(x,y)-R_N(x',y)|+|R_N(y,x)-R_N(y,x')|\\\\\n&\\quad\\le\\sum_{|l|>N}^\\fz\\sum_{k=-\\fz}^\\fz\n[|Q_{k+l}Q_{k}(x,y)-Q_{k+l}Q_k(x',y)|+|Q_{k+l}Q_{k}(y,x)-Q_{k+l}Q_k(y,x')|]\\\\\n&\\quad\\ls\\lf[\\frac{d(x,x')}{d(x,y)}\\r]^{\\eta'}\\lf[\\sum_{l=N+1}^\\fz\\dz^{(\\eta-\\eta') l}\\sum_{k=-\\fz}^\\fz\n\\frac 1{V_{\\dz^k}(x)}\\exp\\lf\\{-c'\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-c'\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\r.\\\\\n&\\qquad\\lf.+\\sum_{l=-\\fz}^{-N-1}\\dz^{-(\\eta-\\eta')l}\\sum_{k=-\\fz}^\\fz\n\\frac 1{V_{\\dz^{k+l}}(x)}\\exp\\lf\\{-c'\\lf[\\frac{d(x,y)}{\\dz^{k+l}}\\r]^a\\r\\}\n\\exp\\lf\\{-c'\\lf[\\frac{d(x,\\CY^{k+l})}{\\dz^{k+l}}\\r]^a\\r\\}\\r]\\\\\n&\\quad\\ls\\dz^{(\\eta-\\eta') N}\\lf[\\frac{d(x,x')}{d(x,y)}\\r]^{\\eta'}\\frac 1{V(x,y)}.\n\\end{align*}\n\nFinally, we show \\eqref{eq:Kdreg}. 
By \\eqref{eq:mixdregb} and Lemma \\ref{lem:sum2}, we have\n\\begin{align*}\n&|[R_N(x,y)-R_N(x',y)]-[R_N(x,y')-R_N(x',y')]|\\\\\n&\\quad\\le\\sum_{|l|>N}^\\fz\\sum_{k=-\\fz}^\\fz\n|[Q_{k+l}Q_{k}(x,y)-Q_{k+l}Q_k(x',y)]-[Q_{k+l}Q_{k}(x,y')-Q_{k+l}Q_k(x',y')]|\\\\\n&\\quad\\ls\\lf[\\frac{d(x,x')}{d(x,y)}\\r]^{\\eta'}\\lf[\\frac{d(y,y')}{d(x,y)}\\r]^{\\eta'}\\\\\n&\\qquad\\times\\lf[\\sum_{l=N+1}^\\fz\\dz^{(\\eta-\\eta') l}\\sum_{k=-\\fz}^\\fz\n\\frac 1{V_{\\dz^k}(x)}\\exp\\lf\\{-c'\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-c'\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\r.\\\\\n&\\qquad\\lf.+\\sum_{l=-\\fz}^{-N-1}\\dz^{-(\\eta-\\eta')l}\\sum_{k=-\\fz}^\\fz\n\\frac 1{V_{\\dz^{k+l}}(x)}\\exp\\lf\\{-c'\\lf[\\frac{d(x,y)}{\\dz^{k+l}}\\r]^a\\r\\}\n\\exp\\lf\\{-c'\\lf[\\frac{d(x,\\CY^{k+l})}{\\dz^{k+l}}\\r]^a\\r\\}\\r]\\\\\n&\\quad\\ls\\dz^{(\\eta-\\eta') N}\\lf[\\frac{d(x,x')}{d(x,y)}\\r]^{\\eta'}\\lf[\\frac{d(y,y')}{d(x,y)}\\r]^{\\eta'}\n\\frac 1{V(x,y)}.\n\\end{align*}\nThis finishes the proof of Proposition \\ref{prop:sizeRN}.\n\\end{proof}\n\nLet $x_1\\in X$, $r\\in(0,\\fz)$ and $\\bz,\\ \\gz\\in(0,\\eta)$.\nTo prove the boundedness of $R_N$ on $\\mathring{\\CG}(x_1,r,\\bz,\\gz)$, we cannot use Theorem \\ref{thm:Kbdd}\ndirectly, since it is not clear whether or not $R_N$ satisfies conditions (b) and (c) of Theorem\n\\ref{thm:Kbdd}. To overcome this difficulty, for any $M\\in\\nn$, define\n\\begin{equation}\\label{eq:defRNM}\nR_{N,M}:=\\sum_{|k|\\le M}\\sum_{N<|l|\\le M}Q_{k+l}Q_k.\n\\end{equation}\nClearly, it is easy to see that, for any $f\\in C^\\bz(X)$ with $\\bz\\in(0,\\eta]$ and $x\\in X$,\n$$\nR_{N,M}f(x)=\\int_X R_{N,M}(x,y)f(y)\\,d\\mu(y).\n$$\nMoreover, for any $x\\in X$,\n$$\n\\int_X R_{N,M}(x,y)\\,d\\mu(y)=0=\\int_X R_{N,M}(y,x)\\,d\\mu(y).\n$$\nNotice that Lemma \\ref{lem:ccrf2} and Proposition \\ref{prop:sizeRN} hold true with $R_N$ replaced by $R_{N,M}$, with\nall the constants involved independent of $M$ and $N$. Besides,\nsince $\\int_{X} R_{N,M}(x,y)\\,d\\mu(x)=0$ for any $y\\in X$, from the Fubini theorem, it\nfollows that $\\int_X R_{N,M}f(x)\\,d\\mu(x)=0$. Applying these and\nTheorem \\ref{thm:Kbdd}, we know that, for any $f\\in\\mathring{\\CG}(x_1,r,\\bz,\\gz)$,\n\\begin{equation}\\label{eq:RNMG}\n\\|R_{N,M}f\\|_{\\mathring\\CG(x_1,r,\\bz,\\gz)}\\ls\\dz^{(\\eta-\\eta')N}\\|f\\|_{\\mathring\\CG(x_1,r,\\bz,\\gz)}\n\\end{equation}\nwith $\\eta'\\in(\\max\\{\\bz,\\gz\\},\\eta)$, where the implicit positive constant is independent of $M, N$, $x_1$ and\n$r$. To pass form $R_{N,M}$ in \\eqref{eq:RNMG} to $R_N$, we consider the relationship between $R_Nf$ and\n$R_{N,M}f$.\n\n\\begin{lemma}\\label{lem:ccrf3} Let $f\\in\\CG(x_1,r,\\bz,\\gz)$ with $x_1\\in X$, $r\\in(0,\\fz)$ and $\\bz,\\ \\gz\\in(0,\\eta)$.\nFix $N\\in\\nn$. Then the following assertions hold true:\n\\begin{enumerate}\n\\item $\\lim_{M\\to\\fz} R_{N,M}f=R_Nf$ in $L^2(X)$;\n\\item for any $x\\in X$, the sequence $\\{R_{N,M}f(x)\\}_{M=N+1}^\\infty$ converges locally uniformly to some\nelement, denoted by $\\widetilde{R_N} f(x)$, where $\\widetilde{R_N }f$ differs from $R_N f$ at most on a set of\n$\\mu$-measure $0$;\n\\item the operator $\\widetilde {R_N}$ can be uniquely extended from $\\CG(x_1,r,\\bz,\\gz)$ to $L^2(X)$, with the\nextension operator coincides with $R_N$. 
In this sense, for any $f\\in\\CG(x_1,r,\\bz,\\gz)$ and almost every\n$x\\in X$,\n\\begin{equation*\n\\lim_{M\\to\\fz} R_{N,M}f(x)=\\widetilde{R_N}f(x)=R_Nf(x).\n\\end{equation*}\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nWe first prove (i). We only need to prove that, for any $f\\in L^2(X)$,\n\\begin{equation}\\label{eq:limL2}\n\\lim_{M\\to\\fz} \\lf\\|R_{N,M}f-R_Nf\\r\\|_{L^2(X)}=0.\n\\end{equation}\nIndeed, when $M>N$, write\n$$\nR_N-R_{N,M}=\\sum_{|k|\\le M}\\sum_{|l|>M}Q_{k+l}Q_k+\\sum_{|k|>M}\\sum_{|l|>N}Q_{k+l}Q_k\n$$\nAn argument similar to that used in the proof of \\eqref{eq:**} implies that, for any $k_1,\\ k_2\\in\\zz$ and\n$\\thz\\in(0,1)$,\n$$\n\\lf\\|\\lf(\\sum_{|l_1|>M}Q_{k_1+l_1}Q_{k_1}\\r)^*\\lf(\\sum_{|l_2|>M}Q_{k_2+l_2}Q_{k_2}\\r)\\r\\|_{L^2(X)\\to L^2(X)}\n\\ls\\dz^{2M\\eta\\thz}\\dz^{|k_1-k_2|\\eta(1-\\thz)}\n$$\nand\n$$\n\\lf\\|\\lf(\\sum_{|l_1|>M}Q_{k_1+l_1}Q_{k_1}\\r)\\lf(\\sum_{|l_2|>M}Q_{k_2+l_2}Q_{k_2}\\r)^*\\r\\|_{L^2(X)\\to L^2(X)}\n\\ls\\dz^{2M\\eta\\thz}\\dz^{|k_1-k_2|\\eta(1-\\thz)}.\n$$\nCombining this with Lemma \\ref{lem:CSlem}, we obtain\n$$\n\\lf\\|\\sum_{|k|\\le M}\\sum_{|l|>M}Q_{k+l}Q_k\\r\\|_{L^2(X)\\to L^2(X)}\\ls \\dz^{M\\eta\\thz}.\n$$\nTherefore, $\\sum_{|k|\\le M}\\sum_{|l|>M}Q_{k+l}Q_kf\\to 0$ in $L^2(X)$ when $M\\to\\fz$.\n\nBy Lemmas \\ref{lem:ccrf2} and \\ref{lem:CSlem}, we know that\n$$\nR_Nf=\\sum_{k=-\\fz}^\\fz\\sum_{|l|>N}Q_{k+l}Q_kf=\\lim_{M\\to\\fz}\\sum_{|k|\\le M}\\sum_{|l|>N}Q_{k+l}Q_kf\n\\quad\\text{in $L^2(X)$},\n$$\nwhich implies that\n$$\n\\lim_{M\\to\\fz}\\lf\\|\\sum_{|k|>M}\\sum_{|l|>N}Q_{k+l}Q_kf\\r\\|_{L^2(X)}=0.\n$$\nThus, we obtain \\eqref{eq:limL2}. This finishes the proof of (i).\n\nNow we prove (ii). Let $f\\in\\CG(x_1,r,\\bz,\\gz)$.\nTo prove that $\\{R_{N,M}f\\}_{M=1}^\\fz$ is a locally uniformly convergent sequence, fixing an arbitrary point $x\\in X$,\nwe only need to find a positive sequence $\\{c_{k,l}\\}_{k,\\ l=-\\fz}^\\fz$ such that\n\\begin{equation*}\n\\sup_{y\\in B(x,r)}|Q_{k+l}Q_kf(y)|\\le c_{k,l}\\qquad \\textup{and}\\qquad \\sum_{k=-\\fz}^\\fz\\sum_{l=-\\fz}^\\fz c_{k,l}<\\fz.\n\\end{equation*}\nWe claim that\n\\begin{align}\\label{claim-add}\n\\sup_{y\\in B(x,r)}|Q_{k+l}Q_kf(y)|\\ls\n\\begin{cases}\n\\displaystyle \\dz^{\\eta |l|}\\frac 1{V_{\\dz^{[k\\wedge (k+l)]}}(x)}\n\\exp\\lf\\{-c\\lf[\\frac{d(x,\\CY^{[k\\wedge (k+l)]})}{A_0\\dz^{[k\\wedge (k+l)]}}\\r]^a\\r\\}\n&\\textup{if}\\;\\dz^{[k\\wedge (k+l)]}\\ge r,\\\\\n\\displaystyle\\dz^{\\eta |l|}\\frac 1{V_r(x_1)}\\lf(\\frac{\\dz^{[k\\wedge (k+l)]}}r\\r)^\\bz\n&\\textup{if}\\;\\dz^{[k\\wedge (k+l)]}(2A_0)^{-1}[r+d(x_1,y)]$ to conclude that\n\\begin{align*}\n|f(z)-f(y)|\\ls \\lf[\\frac{d(y,z)}{r+d(x_1,y)}\\r]^\\bz\\frac 1{V_r(x_1)} \\ls \\lf[\\frac{d(y,z)}{r}\\r]^\\bz\\frac 1{V_r(x_1)}.\n\\end{align*}\nFrom this, \\eqref{eq:mixsize} and Lemma \\ref{lem-add}(ii), it follows that\n\\begin{align*}\n|Q_{k+l}Q_kf(y)|&\\ls\\dz^{\\eta l}\\frac 1{V_r(x_1)} \\lf(\\frac{\\dz^k}r\\r)^\\bz\n\\int_X\\frac 1{V_{\\dz^k}(y)}\n\\exp\\lf\\{-c\\lf[\\frac{d(y,z)}{\\dz^k}\\r]^a\\r\\}\\lf[\\frac{d(y,z)}{\\dz^k}\\r]^\\bz\\,d\\mu(z)\\\\\n&\\ls\\dz^{\\eta l}\\frac 1{V_r(x_1)}\\lf(\\frac{\\dz^k}r\\r)^\\bz.\n\\end{align*}\nThis finishes the proof of \\eqref{claim-add}. 
Thus, for any $x\\in X$, the sequence $\\{R_{N,M}f(x)\\}_{M=N+1}^\\infty$ converges locally uniformly to some element, which is denoted by $\\widetilde{R_N} f(x)$.\n\nBy \\eqref{eq:limL2} and the Riesz theorem, we know that\nthere exists an increasing sequence $\\{M_j\\}_{j\\in\\nn}\\subset\\nn$ which tends to $\\infty$ such that\n\\begin{equation*}\n\\lim_{j\\to\\fz} R_{N,M_j}f(x)=R_Nf(x)\\quad\\text{$\\mu$-almost every $x\\in X$}.\n\\end{equation*}\nConsequently,\n$\\wz{R_N}f(x)=\\lim_{M\\to\\fz} R_{N,M}f(x)=\\lim_{j\\to\\fz} R_{N,M_j}f(x)=R_Nf(x)$ for $\\mu$-almost every $x\\in X$.\nThis finishes the proof of (ii).\n\n\nFor any $f\\in \\CG(x_1,r,\\bz,\\gz)$, since $\\wz{R_N} f$ is well defined and\n$\\wz{R_N}f(x)=R_Nf(x)$ for $\\mu$-almost every $x\\in X$, it follows, from the boundedness of $R_N$ on $L^2(X)$\nand the density of $\\CG(x_1,r,\\bz,\\gz)$ in $L^2(X)$, that $\\wz{R_N}$ can be uniquely extended to a boundedness\noperator on $L^2(X)$. The extension operator, still denoted by $\\wz{R_N}$,\nsatisfies that $R_Ng=\\wz{R_N}g$ both in $L^2(X)$ and almost everywhere for all $g\\in L^2(X)$. This finishes the\nproof of (iii) and hence of Lemma \\ref{lem:ccrf3}.\n\\end{proof}\n\nDue to Lemma \\ref{lem:ccrf3}, it is not necessary to distinguish $\\wz R_N$ and $R_N$.\nAs a consequence of \\eqref{eq:RNMG} and the\ndominated convergence theorem, we easily deduce the following boundedness of $R_N$, the details being omitted.\n\\begin{proposition}\\label{prop-add}\nLet $x_1\\in X$, $r\\in(0,\\fz)$ and $\\bz,\\ \\gz\\in(0,\\eta)$.\nThen, for any $N\\in\\nn$ and $f\\in\\mathring{\\CG}(x_1,r,\\bz,\\gz)$,\n\\begin{equation*}\n\\|R_Nf\\|_{\\mathring{\\CG}(x_1,r,\\bz,\\gz)}\\le C\\dz^{(\\eta-\\eta')N}\\|f\\|_{\\mathring{\\CG}(x_1,r,\\bz,\\gz)},\n\\end{equation*}\nwhere $\\eta'\\in(\\max\\{\\bz,\\gz\\},\\eta)$ and $C$ is a positive constant independent of $x_1$, $r$, $N$ and $f$.\n\\end{proposition}\n\n\n\\subsection{Proofs of homogeneous continuous Calder\\'{o}n reproducing formulae}\\label{pr}\n\nThis section is devoted to the proofs of homogeneous Calder\\'{o}n continuous reproducing formulae.\nWe start with following several lemmas.\n\\begin{lemma}\\label{lem:propQkN}\nLet $\\{Q_k\\}_{k\\in\\zz}$ be an $\\exp$-{\\rm ATI} and $Q_k^N:=\\sum_{|l|\\le N}Q_{k+l}$ for any $k\\in\\zz$ and $N\\in\\nn$.\nThen there exist positive constants $C_{(N)}$ and $c_{(N)}$, depending on $N$, but being independent of $k$,\nsuch that\n\\begin{enumerate}\n\\item for any $x,\\ y\\in X$,\n\\begin{equation}\\label{eq:QkNsize}\n\\lf|Q_k^N(x,y)\\r|\\le C_{(N)}\\frac 1{V_{\\dz^k}(x)}\\exp\\lf\\{-c_{(N)}\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\};\n\\end{equation}\n\\item for any $x,\\ x',\\ y\\in X$ with $d(x,x')\\le\\dz^k$,\n\\begin{align}\\label{eq:QkNregx}\n&\\lf|Q_k^N(x,y)-Q_k^N(x',y)\\r|+\\lf|Q_k^N(y,x')-Q_k^N(y,x)\\r|\\\\\n&\\quad\\le C_{(N)}\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta\n\\frac 1{V_{\\dz^k}(x)}\\exp\\lf\\{-c_{(N)}\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\};\\noz\n\\end{align}\n\\item for any $x,\\ x',\\ y,\\ y'\\in X$ with $d(x,x')\\le\\dz^k$ and $d(y,y')\\le\\dz^k$,\n\\begin{align*\n&\\lf|\\lf[Q_k^N(x,y)-Q_k^N(x',y)\\r]-\\lf[Q_k^N(x,y')-Q_k^N(x',y')\\r]\\r|\\\\\n&\\quad\\le C_{(N)}\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta\\lf[\\frac{d(y,y')}{\\dz^k}\\r]^\\eta\n\\frac 1{V_{\\dz^k}(x)}\\exp\\lf\\{-c_{(N)}\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\};\\noz\n\\end{align*}\n\\item for any $x,\\ y\\in X$, $\\int_X Q_k^N(x,y')\\,d\\mu(y')=0=\\int_X Q_k^N(x',y)\\,d\\mu(x')$.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nFrom 
the cancellation of $Q_k$, it is easy to see that (iv) holds true. Noticing that the constants $C_{(N)}$\nand $c_{(N)}$ are allowed to depend on $N$, we obtain (i) directly by the size condition of $Q_k$\nand Remark \\ref{rem:andef}(i).\n\nTo see (ii) and (iii), we make the following observation. Fix $N\\in\\nn$ and $\\tau\\in(0,\\fz)$. Then, for any\n$k\\in\\zz$ and $l\\in\\{-N,-N+1,\\ldots,N-1,N\\}$,\nwhen $d(x,x')\\le\\tau\\dz^{k+l}$ and $d(y,y')\\le\\tau\\dz^{k+l}$, the regularity condition (resp., the second\ndifference regularity condition) of $Q_{k+l}$ in \\eqref{eq:etiregx} [resp., \\eqref{eq:etidreg}] remains true by\nusing the size conditon of $Q_{k+l}$ (resp., the regularity of $Q_{k+l}$),\nwith all constants involved depending on $\\tau$ but independent of $k$, $l$, $x$, $y$, $x'$ and $y'$.\n\nUsing the above observation, we easily obtain (ii) and (iii), via taking\n$\\tau:=1$ when $l\\in\\{0,\\ldots,N\\}$, and $\\tau:=\\dz^{-N}$ when $l\\in\\{-N,-N+1,\\ldots,-1\\}$. This finishes the\nproof of Lemma \\ref{lem:propQkN}.\n\\end{proof}\n\n\\begin{lemma}\\label{lem-add2} Let $\\{Q_k\\}_{k=-\\fz}^\\fz$ be an $\\exp$-{\\rm ATI}.\nFor any $k\\in\\zz$, let $E_k:=Q_k^NQ_k=\\sum_{|l|\\le N} Q_{k+l}Q_k$. Then there exist positive constants\n$C_{(N)}$ and $c_{(N)}$, depending on $N$, but being independent of $k$, such that the integral kernel of\n$E_k$, still denoted by $E_k$, satisfies the following:\n\\begin{enumerate}\n\\item for any $x,\\ y\\in X$,\n\\begin{equation*}\n|E_k(x,y)|\\le C_{(N)}\\frac{1}{V_{\\dz^k}(x)}\\exp\\lf\\{-c_{(N)}\\lf[\\frac{d(x,y)}{\\dz^{k}}\\r]^a\\r\\}\n\\exp\\lf\\{-c_{(N)}\\lf[\\frac{d(x,\\CY^k)}{\\dz^{k}}\\r]^a\\r\\};\n\\end{equation*}\n\\item for any $x,\\ y,\\ y'\\in X$ with $d(y,y')\\le\\dz^k$ or $d(y,y')\\le (2A_0)^{-1}[\\dz^k+d(x, y)]$,\n\\begin{align*}\n&|E_k(x,y)-E_k(x,y')|\\\\\n&\\quad\\le C_{(N)}\\lf[\\frac{d(y,y')}{\\dz^k}\\r]^\\eta\n\\frac{1}{V_{\\dz^k}(x)}\\exp\\lf\\{-c_{(N)}\\lf[\\frac{d(x,y)}{\\dz^{k}}\\r]^a\\r\\}\n\\exp\\lf\\{-c_{(N)}\\lf[\\frac{d(x,\\CY^k)}{\\dz^{k}}\\r]^a\\r\\}.\n\\end{align*}\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nApplying Lemma \\ref{lem:propQkN} and Remark \\ref{rem:andef}(i)\nand following the proofs of (i) and (ii) of Lemma \\ref{lem:ccrf1}, we\ndirectly obtain (i) and (ii) for $d(y,y')\\le\\dz^k$.\nFurther, applying (i) and proceeding as in the proof of Proposition \\ref{prop:etoa} [see Remark\n\\ref{rem:andef}(iii)], we find that (ii) remains true when $d(y,y')\\le(2A_0)^{-1}[\\dz^k+d(x,y)]$. This\nfinishes the proof of Lemma \\ref{lem-add2}.\n\\end{proof}\n\n\\begin{lemma}\\label{lem-add3}\nLet $\\{Q_k\\}_{k=-\\fz}^\\fz$ be an $\\exp$-{\\rm ATI}.\nThen, for any $k\\in\\zz$ and $f\\in\\CG(\\eta,\\eta)$, $Q_kf\\in\\CG(\\eta,\\eta)$.\n\\end{lemma}\n\n\\begin{proof}\nNotice that $\\CG(x_0,\\dz^k,\\eta,\\eta)$ and $\\CG(\\eta,\\eta)$ coincide in the sense of equivalent norms, with\nthe equivalent positive constants depending on $k$, but this is harmless for the proof of this lemma. 
Thus,\nwithout loss of generality, we may as well assume that $\\|f\\|_{ \\CG(x_0,\\dz^k,\\eta,\\eta)}= 1$ and, to prove\nthis lemma, it suffices to show $Q_k f\\in \\CG(x_0,\\dz^k,\\eta,\\eta)$.\n\nFor any $x\\in X$, by the size conditions of $Q_k$ and $f$, we write\n\\begin{align*}\n|Q_kf(x)|&=\\lf|\\int_X Q_k(x,y)f(y)\\,d\\mu(y)\\r|\\\\\n&\\ls\\int_X \\frac{1}{V_{\\dz^k}(x)}\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\} \\frac 1{V_{\\dz^k}(x_0)+V(x_0,y)}\\lf[\\frac{\\dz^k}{\\dz^k+d(x_0,y)}\\r]^\\eta\\,d\\mu(y).\n\\end{align*}\nObserve that, for any $y\\in X$, the quasi-triangle inequality of $d$ implies that either $d(x, y)\\ge d(x_0, x)\/(2A_0)$\nor $d(x_0, y)\\ge d(x_0, x)\/(2A_0)$. Also, notice that\n\\begin{align}\\label{eq-xxx}\n\\frac{1}{V_{\\dz^k}(x)}\\ls \\frac{1}{\\mu(B(x, \\dz^k+d(x,y)))}\\lf[\\frac{\\dz^k+d(x,y)}{\\dz^k}\\r]^\\omega.\n\\end{align}\nFrom these and Lemma \\ref{lem-add}(ii), it follows that, for any $x\\in X$,\n\\begin{align}\\label{eq-x4}\n|Q_kf(x)|\\ls \\frac1{V_{\\dz^k}(x_0)+V(x_0,x)}\n\\lf[\\frac{\\dz^k}{\\dz^k+d(x_0,x)}\\r]^\\eta.\n\\end{align}\n\nNow we consider the regularity of $Q_k f$.\nFor any $x,\\ x'\\in X$ satisfying $d(x,x')\\le(2A_0)^{-2}[\\dz^k+d(x_0,x)]$,\nby the fact that $\\int_X [Q_k(x,y)-Q_k(x',y)]\\,d\\mu(y)=0$, we write\n\\begin{align*}\n|Q_kf(x)-Q_kf(x')|&=\\lf|\\int_X [Q_k(x,y)-Q_k(x',y)][f(y)-f(x)]\\,d\\mu(y)\\r|\\\\\n&\\le\\int_{d(x,y)\\le(2A_0)^{-1}[\\dz^k+d(x_0,x)]} |Q_k(x,y)-Q_k(x',y)||f(y)-f(x)|\\,d\\mu(y)\\\\\n&\\quad+\\int_{d(x,y)>(2A_0)^{-1}[\\dz^k+d(x_0,x)]}|Q_k(x,y)-Q_k(x',y)||f(y)|\\,d\\mu(y)\\\\\n&\\quad+|f(x)|\\int_{d(x,y)>(2A_0)^{-1}[\\dz^k+d(x_0,x)]}|Q_k(x,y)-Q_k(x',y)|\\,d\\mu(y)\n=:\\RZ_1+\\RZ_2+\\RZ_3.\n\\end{align*}\nWe first deal with $\\RZ_1$. By the size condition of $Q_k$ and Remark \\ref{rem:andef}(i), we conclude that\n$$\n|Q_k(x,y)-Q_k(x',y)|\\ls\\frac{1}{V_{\\dz^k}(y)}\\lf(\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\n+\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x',y)}{\\dz^k}\\r]^a\\r\\}\\r).\n$$\nIf, in addition, $d(x,x')\\le(2A_0)^{-1}[\\dz^k+d(x,y)]$, then, by Remark \\ref{rem:andef}(iii), the right-hand\nside of the above formula can be multiplied by another\nterm $[\\frac{d(x,x')}{\\dz^k+d(x,y)}]^\\eta$ by the regularity of $Q_k$. 
By this, the regularity\nof $f$ and Lemma \\ref{lem-add}(ii), we have\n\\begin{align*}\n\\RZ_1&\\ls\\frac 1{V_{\\dz^k}(x_0)+V(x_0,x)}\\lf[\\frac{\\dz^k}{\\dz^k+d(x_0,x)}\\r]^\\eta\n\\int_{d(x,y)\\le(2A_0)^{-1}[\\dz^k+d(x_0,x)]}\\lf[\\frac{d(x,y)}{\\dz^k+d(x_0,x)}\\r]^\\eta\\\\\n&\\quad\\times\\min\\lf\\{1,\\lf[\\frac{d(x,x')}{\\dz^k+d(x,y)}\\r]^\\eta\\r\\}\\frac 1{V_{\\dz^k}(y)}\n\\lf(\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\n+\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x',y)}{\\dz^k}\\r]^a\\r\\}\\r)\\,d\\mu(y)\\\\\n&\\ls\\lf[\\frac{d(x,x')}{\\dz^k+d(x_0,x)}\\r]^\\eta\\frac1{V_{\\dz^k}(x_0)+V(x_0,x)}\n\\lf[\\frac{\\dz^k}{\\dz^k+d(x_0,x)}\\r]^\\eta.\n\\end{align*}\nNotice that, when $d(x,y)>(2A_0)^{-1}[\\dz^k+d(x_0,x)]$, we have\n$$d(x,x')\\le(2A_0)^{-2}[\\dz^k+d(x_0,x)]\n<(2A_0)^{-1}d(x,y)\\le(2A_0)^{-1}[\\dz^k+d(x,y)].$$ Thus, from the regularity of $Q_k$, Remark \\ref{rem:andef}(i)\nand Lemma \\ref{lem-add}(ii), we deduce that\n\\begin{align*}\n\\RZ_2&\\ls\\int_{d(x,y)>(2A_0)^{-1}[\\dz^k+d(x_0,x)]}\\lf[\\frac{d(x,x')}{\\dz^k+d(x,y)}\\r]^\\eta\n\\frac{1}{V_{\\dz^k}(x)+V(x,y)}\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}|f(y)|\\,d\\mu(y)\\\\\n&\\ls\\lf[\\frac{d(x,x')}{\\dz^k+d(x_0,x)}\\r]^\\eta\n\\frac{1}{V_{\\dz^k}(x)+V(x_0,x)}\\lf[\\frac{\\dz^k}{\\dz^k+d(x_0,x)}\\r]^\\eta.\n\\end{align*}\nSimilarly, by Remark \\ref{rem:andef}(i) and Lemma \\ref{lem-add}(ii), we conclude that\n\\begin{align*}\n\\RZ_3&\\ls\\frac 1{V_{\\dz^k}(x_0)+V(x_0,x)}\\lf[\\frac{\\dz^k}{\\dz^k+d(x_0,x)}\\r]^\\eta\\\\\n&\\quad\\times\\int_{d(x,y)>(2A_0)^{-1}[\\dz^k+d(x_0,x)]}\\lf[\\frac{d(x,x')}{\\dz^k+d(x,y)}\\r]^\\eta\n\\frac{1}{V_{\\dz^k}(x)+V(x,y)}\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\\,d\\mu(y)\\\\\n&\\ls\\lf[\\frac{d(x,x')}{\\dz^k+d(x_0,x)}\\r]^\\eta\n\\frac 1{V_{\\dz^k}(x_0)+V(x_0,x)}\\lf[\\frac{\\dz^k}{\\dz^k+d(x_0,x)}\\r]^\\eta.\n\\end{align*}\nCombining this with $\\RZ_1$ through $\\RZ_3$, we find that, when $d(x,x')\\le(2A_0)^{-2}[\\dz^k+d(x_0,x)]$,\n\\begin{align}\\label{eq-x5}\n|Q_kf(x)-Q_kf(x')|\\ls\\lf[\\frac{d(x,x')}{\\dz^k+d(x_0,x)}\\r]^\\eta\n\\frac 1{V_{\\dz^k}(x_0)+V(x_0,x)}\\lf[\\frac{\\dz^k}{\\dz^k+d(x_0,x)}\\r]^\\eta,\n\\end{align}\n\nWhen $(2A_0)^{-2}[\\dz^k+d(x_0,x)](2A_0)^{-1}[1+d(x_0,x)]}|E_k(x,y)||f(y)|\\,d\\mu(y)\\\\\n&\\quad+|f(x)|\\int_{d(x,y)>(2A_0)^{-1}[1+d(x_0,x)]}|E_k(x,y)|\\,d\\mu(y)\n=:\\RZ_{1,1}+\\RZ_{1,2}+\\RZ_{1,3}.\n\\end{align*}\nFrom Lemma \\ref{lem-add2}(i), the regularity condition of $f$, Lemma \\ref{lem-add}(ii) and $\\gz'>\\gz$, we\ndeduce that\n\\begin{align*}\n\\RZ_{1,1}&\\ls\\frac 1{V_1(x_0)+V(x_0,x)}\\lf[\\frac 1{1+d(x_0,x)}\\r]^{\\gz'}\\int_X\n\\frac 1{V_{\\dz^k}(x)}\\exp\\lf\\{-c\\lf[\\frac{d(x,y)}{\\dz^{k}}\\r]^a\\r\\}\n\\lf[\\frac{d(x,y)}{1+d(x_0,y)}\\r]^{\\bz'}\\,d\\mu(y)\\\\\n&\\ls\\frac 1{V_1(x_0)+V(x_0,x)}\\lf[\\frac 1{1+d(x_0,x)}\\r]^{\\gz'}\\int_X\\frac {\\dz^{k\\bz'}}{V_{\\dz^k}(x)}\\exp\\lf\\{-c\\lf[\\frac{d(x,y)}{\\dz^{k}}\\r]^a\\r\\}\n\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^{\\bz'}\\,d\\mu(y)\\\\\n&\\ls\\dz^{|k|\\bz'}\\frac 1{V_1(x_0)+V(x_0,x)}\\lf[\\frac 1{1+d(x_0,x)}\\r]^{\\gz}.\n\\end{align*}\nFor the term $\\RZ_{1,2}$, applying Lemma \\ref{lem-add2}(i) and the regularity condition of $f$, we conclude that\n\\begin{align*}\n\\RZ_{1,2}&\\ls\\int_{d(x,y)>(2A_0)^{-1}[1+d(x_0,x)]}\\frac 1{V_{\\dz^k}(x)}\n\\exp\\lf\\{-c\\lf[\\frac{d(x,y)}{\\dz^{k}}\\r]^a\\r\\}\\frac 1{V_1(x_0)}\\,d\\mu(y).\n\\end{align*}\nObserve that the doubling condition \\eqref{eq:doub} implies that $V_1(x_0)+V(x_0,x)\\ls [1+d(x_0, x)]^\\omega 
V_1(x_0)$.\nMeanwhile, if $d(x,y)>(2A_0)^{-1}[1+d(x_0,x)]$ and $k\\in\\zz_+$, then\n$$\\exp\\lf\\{-\\frac c2\\lf[\\frac{d(x,y)}{\\dz^{k}}\\r]^a\\r\\}\\le \\exp\\lf\\{-\\frac c 2\\lf[\\frac{(2A_0)^{-1}[1+d(x_0,x)]}{\\dz^{k}}\\r]^a\\r\\}\n\\ls \\lf[\\frac{\\dz^{k}}{1+d(x_0, x)}\\r]^{\\omega+\\gz+\\bz'}.$$\nBy these and Lemma \\ref{lem-add}(ii), we further obtain\n\\begin{align*}\n\\RZ_{1,2}&\\ls\\dz^{|k|(\\omega+\\gz+\\bz')}\\frac 1{V_1(x_0)+V(x_0,x)}\\lf[\\frac 1{1+d(x_0,x)}\\r]^{\\gz}.\n\\end{align*}\nNow we estimate $\\RZ_{1,3}$. Again, using the fact that the conditions $d(x,y)>(2A_0)^{-1}[1+d(x_0,x)]$ and\n$k\\in\\zz_+$, we obtain $\\exp\\{-\\frac c2[\\frac{d(x,y)}{\\dz^{k}}]^a\\}\\ls \\dz^{k\\bz'}$.\nFrom this, Lemma \\ref{lem-add2}(i), the regularity condition of $f$ and Lemma \\ref{lem-add}(ii), it follows that\n\\begin{align*}\n\\RZ_{1,3}&\\ls|f(x)|\n\\int_{d(x,y)>(2A_0)^{-1}[1+d(x_0,x)]}\\frac 1{V_{\\dz^k}(x)}\\exp\\lf\\{-c\\lf[\\frac{d(x,y)}{\\dz^{k}}\\r]^a\\r\\}\\,d\\mu(y)\\\\\n&\\ls\\dz^{k\\bz'}\\frac 1{V_1(x_0)+V(x_0,x)}\\lf[\\frac 1{1+d(x_0,x)}\\r]^{\\gz}.\n\\end{align*}\nCombining the estimates of $\\RZ_{1,1}$ through $\\RZ_{1,3}$, we obtain \\eqref{eq:sumsize} when $k\\in\\zz_+$.\n\nNext we prove \\eqref{eq:sumsize} for the case $k\\in\\zz\\setminus\\zz_+$. Notice that, for any $x\\in X$,\n\\begin{align*}\n\\lf|Q_{k}^NQ_kf(x)\\r|\n&=\\lf|\\int_X [E_k(x,y)-E_k(x,x_0)]|f(y)|\\,d\\mu(y)\\r|\\\\\n&\\le\\int_{d(x_0,y)\\le(2A_0)^{-1}[\\dz^k+d(x_0, x)]}|E_k(x,y)-E_k(x,x_0)||f(y)|\\,d\\mu(y)\\\\\n&\\quad+\\int_{d(x_0,y)>(2A_0)^{-1}[\\dz^k+d(x_0, x)]}|E_k(x,y)||f(y)|\\,d\\mu(y)\\\\\n&\\quad+|E_k(x,x_0)|\\int_{d(x_0,y)>(2A_0)^{-1}[\\dz^k+d(x_0, x)]}|f(y)|\\,d\\mu(y)\n=:\\RZ_{1,4}+\\RZ_{1,5}+\\RZ_{1,6}.\n\\end{align*}\nTo estimate $\\RZ_{1,4}$, we choose $\\wz\\gz\\in(\\gz,\\gz')$. 
By Lemma \\ref{lem-add2}(ii), $\\wz\\gz<\\eta$ and the\nsize condition of $f$, we have\n\\begin{align*}\n\\RZ_{1,4}&\\ls\\int_X\n\\lf[\\frac{d(x_0,y)}{\\dz^k}\\r]^{\\wz\\gz}\\frac 1{V_{\\dz^k}(x_0)}\\exp\\lf\\{-c\\lf[\\frac{d(x_0,x)}{\\dz^k}\\r]^a\\r\\}\n\\frac 1{V_1(x_0)+V(x_0,y)}\\lf[\\frac 1{1+d(x_0,y)}\\r]^{\\gz'}\\,d\\mu(y).\n\\end{align*}\nSince $\\dz^k\\ge 1$, it follows that\n\\begin{align}\\label{eq-x2}\n&\\frac 1{V_{\\dz^k}(x_0)}\\exp\\lf\\{-c\\lf[\\frac{d(x_0,x)}{\\dz^k}\\r]^a\\r\\}\\\\\n&\\quad\\ls \\frac 1{\\mu(B(x_0, \\dz^k+d(x_0, x)))} \\lf[\\frac{\\dz^k+d(x_0, x)}{\\dz^k}\\r]^\\omega \\lf[1+\\frac{d(x_0,x)}{\\dz^k}\\r]^{-\\omega-\\gz}\\noz\\\\\n&\\quad\\ls \\frac 1{\\mu(B(x_0, 1+d(x_0, x)))} \\lf[\\frac{\\dz^k}{\\dz^k+d(x_0,x)}\\r]^{\\gz}\n\\ls \\dz^{k\\gz}\\frac 1{V_1(x_0)+V(x_0,x)}\\lf[\\frac 1{1+d(x_0,x)}\\r]^{\\gz}.\\noz\n\\end{align}\nAlso, notice that\n$[\\frac{d(x_0,y)}{\\dz^k}]^{\\wz\\gz} [\\frac 1{1+d(x_0,y)}]^{\\gz'} \\le \\dz^{-k\\wz\\gz}[\\frac 1{1+d(x_0,y)}]^{\\gz'-\\wz\\gz}.$\nCombining these with Lemma \\ref{lem-add}(ii), we find that\n\\begin{align*}\n\\RZ_{1,4}&\\ls\\dz^{k(\\gz-\\wz\\gz)}\\frac 1{V_1(x_0)+V(x_0,x)}\\lf[\\frac 1{1+d(x_0,x)}\\r]^{\\gz}\\\\\n&\\quad\\times\\int_{d(x_0,y)\\le\\dz^k}\n\\frac 1{V_1(x_0)+V(x_0,y)}\\lf[\\frac 1{1+d(x_0,y)}\\r]^{\\gz'-\\wz\\gz}\\,d\\mu(y)\\\\\n&\\ls\\dz^{(\\wz\\gz-\\gz)|k|}\\frac 1{V_1(x_0)+V(x_0,x)}\\lf[\\frac 1{1+d(x_0,x)}\\r]^{\\gz}.\n\\end{align*}\nFor the term $\\RZ_{1,5}$, from Lemma \\ref{lem-add2}(i), the size condition of $f$ and Lemma \\ref{lem-add}(iii),\nwe deduce that\n\\begin{align*}\n\\RZ_{1,5}&\\ls\\int_{d(x_0,y)>(2A_0)^{-1}[\\dz^k+d(x_0, x)]}\\lf[\\frac{d(x_0,y)}{\\dz^k}\\r]^{\\gz'-\\gz}\\frac 1{V_{\\dz^k}(x)}\n\\exp\\lf\\{-c\\lf[\\frac{d(x,y)}{\\dz^{k}}\\r]^a\\r\\}\\\\\n&\\quad\\times\\frac 1{V_1(x_0)+V(x_0,y)}\\lf[\\frac 1{1+d(x_0,y)}\\r]^{\\gz'}\\,d\\mu(y)\\\\\n&\\ls\\frac 1{V_1(x_0)+V(x_0,x)}\\lf[\\frac 1{1+d(x_0,x)}\\r]^{\\gz}\\dz^{(\\gz-\\gz')k}\\int_X\\frac 1{V_{\\dz^k}(x)}\n\\exp\\lf\\{-c\\lf[\\frac{d(x,y)}{\\dz^{k}}\\r]^a\\r\\}\\,d\\mu(y)\\\\\n&\\ls \\dz^{(\\gz'-\\gz)|k|}\\frac 1{V_1(x_0)+V(x_0,x)}\\lf[\\frac 1{1+d(x_0,x)}\\r]^{\\gz}.\n\\end{align*}\nTo estimate $\\RZ_{1,6}$, we again choose $\\wz\\gz\\in(\\gz,\\gz')$.\nApplying Lemma \\ref{lem-add2}(i), \\eqref{eq-x2}, the size condition of $f$ and Lemma \\ref{lem-add}(ii), we\nproceed as in the estimate of $\\RZ_{1,4}$ to derive that\n\\begin{align*}\n\\RZ_{1,6}&\\ls\\frac 1{V_{\\dz^k}(x_0)}\\exp\\lf\\{-c\\lf[\\frac{d(x,x_0)}{\\dz^{k}}\\r]^a\\r\\}\n\\int_{d(x_0,y)>(2A_0)^{-1}[\\dz^k+d(x_0,x)]}\\lf[\\frac{d(x_0,y)}{\\dz^k}\\r]^{\\wz\\gz}\\\\\n&\\qquad\\times\\frac 1{V_1(x_0)+V(x_0,y)}\\lf[\\frac{1}{1+d(x_0,y)}\\r]^{\\gz'}\\,d\\mu(y)\\\\\n&\\ls\\dz^{(\\wz\\gz-\\gz)|k|}\\frac 1{V_1(x_0)+V(x_0,x)}\\lf[\\frac 1{1+d(x_0,x)}\\r]^{\\gz}.\n\\end{align*}\nCombining the estimates of $\\RZ_{1,4}$ through $\\RZ_{1,6}$, we obtain \\eqref{eq:sumsize} when\n$k\\in\\zz\\setminus\\zz_+$. 
This finishes the proof of \\eqref{eq:sumsize} and hence of Step 1).\n\n{\\it Step 2) Proof of the convergence of \\eqref{eq:hcrf} in $\\mathring{\\CG}^\\eta_0(\\bz,\\gz)$ when $f\\in \\mathring{\\CG}^\\eta_0(\\bz,\\gz)$.}\n\nIf $f\\in \\mathring{\\CG}^\\eta_0(\\bz,\\gz)$, then there exists a sequence\n$\\{g_n\\}_{m=1}^\\infty\\subset\\GO{\\eta,\\eta}$ such that $\\lim_{n\\to\\infty}\\|f-g_n\\|_{\\GO{\\bz,\\gz}}=0$.\nBy the already proved result in Step 1), we know that every $g_n$ satisfies\n$$\n\\lim_{L\\to\\infty}\\lf\\|g_n-\\sum_{|k|\\le L}\\wz{Q}_kQ_kg_n\\r\\|_{\\mathring{\\CG}(\\bz,\\gz)}=0.\n$$\nFor any $N,\\ L\\in\\nn$, define\n\\begin{align}\\label{TNL}\n\\wz T_{N,L}:=\\sum_{|k|\\le L}{Q}_k^NQ_k=\\sum_{|k|\\le L}\\sum_{|l|\\le N}Q_{k+l}Q_k.\n\\end{align}\nRepeating the proof of Lemma \\ref{lem:ccrf2}, we find that, for any fixed $\\thz\\in(0,1)$ and any\n$k_1,\\ k_2\\in\\zz$,\n\\begin{align*}\n&\\lf\\|\\lf(\\sum_{|l_1|\\le N}Q_{k_1+l_1}Q_{k_1}\\r)\\lf(\\sum_{|l_2|\\le N}Q_{k_2+l_2}Q_{k_2}\\r)^*\\r\\|\n_{L^2(X)\\to L^2(X)}\\\\\n&\\quad\\le\\sum_{|l_1|\\le N}\\sum_{|l_2|\\le N}\n\\lf\\|Q_{k_1+l_1}Q_{k_1}\\lf(Q_{k_2+l_2}Q_{k_2}\\r)^*\\r\\|_{L^2(X)\\to L^2(X)}\\noz\\\\\n&\\quad\\ls\\sum_{l_1=-\\fz}^\\fz\\sum_{l_2=-\\fz}^\\fz\\dz^{(|l_1|+|l_2|)\\eta\\thz}\\dz^{|k_1-k_2|\\eta(1-\\thz)}\n\\sim \\dz^{|k_1-k_2|\\eta(1-\\thz)}\\noz.\n\\end{align*}\nSimilar estimate also holds true for\n$$\n\\lf\\|\\lf(\\sum_{|l_1|\\le N}Q_{k_1+l_1}Q_{k_1}\\r)^*\\lf(\\sum_{|l_2|\\le N}Q_{k_2+l_2}Q_{k_2}\\r)\\r\\|\n_{L^2(X)\\to L^2(X)}\n$$\ndue to the symmetry. Then, by Lemma \\ref{lem:CSlem}, we conclude that $\\wz{T}_{N,L}$ is bounded on $L^2(X)$\nwith its operator norm independent of $N$ and $L$.\nMoreover, repeating the proof of Proposition \\ref{prop:sizeRN} with the summation $\\sum_{|l|>N}$ therein\nreplaced by $\\sum_{|l|\\le N}$, we know that the kernel $\\wz T_{N,L}$ satisfies all the conditions of Theorem\n\\ref{thm:Kbdd}, with $c_0:=0$ and $C_T$ therein being a positive constant independent of $L$ and $N$.\nThus, from Theorem \\ref{thm:Kbdd}, it follows that $\\wz T_{N,L}$ is a bounded operator on\n$\\mathring\\CG(\\bz,\\gz)$.\n\nFurther, recall that $T_N^{-1}Q_k^N=\\wz{Q}_k$ and $\\|T_N^{-1}\\|_{\\mathring{\\CG}(\\bz,\\gz)\\to\n\\mathring{\\CG}(\\bz,\\gz)}\\le 2$. 
Therefore,\n\\begin{align*}\n&\\lf\\|f-\\sum_{|k|\\le L}\\wz{Q}_kQ_kf\\r\\|_{\\mathring{\\CG}(\\bz,\\gz)}\\\\\n&\\quad\\le \\lf\\|f-g_n\\r\\|_{\\mathring{\\CG}(\\bz,\\gz)}+\\lf\\|g_n-\\sum_{|k|\\le L}\\wz{Q}_kQ_kg_n\\r\\|_{\\mathring{\\CG}(\\bz,\\gz)}\n+\\lf\\|T_N^{-1}\\lf(\\sum_{|k|\\le L}Q_k^NQ_k(g_n-f)\\r)\\r\\|_{\\mathring{\\CG}(\\bz,\\gz)}\\\\\n&\\quad\\ls \\|f-g_n\\|_{\\mathring{\\CG}(\\bz,\\gz)}+\\lf\\|g_n-\\sum_{|k|\\le L}\\wz{Q}_kQ_kg_n\\r\\|_{\\mathring{\\CG}(\\bz,\\gz)},\n\\end{align*}\nwhich tends to $0$ as $n,\\ L\\to \\infty$.\n\nWe still need to prove that $\\sum_{|k|\\le L}\\wz{Q}_kQ_kf$ can be approximated by a sequence of functions in\n$\\mathring\\CG(\\eta,\\eta)$ in the norm of $\\mathring{\\CG}(\\bz,\\gz)$.\nNotice that the boundedness of $T_N^{-1}$ and $\\wz{T}_{N,L}$ on $\\GO{\\bz,\\gz}$ implies that\n$$\n\\lim_{n\\to\\infty}\\lf\\|\\sum_{|k|\\le L}\\wz{Q}_kQ_kf\n-\\sum_{|k|\\le L}\\wz{Q}_kQ_kg_n\\r\\|_{\\mathring{\\CG}(\\bz,\\gz)}\n=\\lim_{n\\to\\infty}\\lf\\|T_N^{-1}\\wz{T}_{N,L}(f-g_n)\\r\\|_{\\mathring{\\CG}(\\bz,\\gz)}=0.\n$$\nIf we know that $\\sum_{|k|\\le L}\\wz{Q}_kQ_kg_n\\in \\mathring{\\CG}_0^\\eta(\\bz,\\gz)$, then\n$\\sum_{|k|\\le L}\\wz{Q}_kQ_kg_n$ can be approximated by functions in $\\mathring\\CG(\\eta,\\eta)$, so does $\\sum_{|k|\\le L}\\wz{Q}_kQ_kf$.\n\nSuppose that $h\\in\\mathring{\\CG}^\\eta_0(\\bz,\\gz)$. Then there exists\n$\\{h_j\\}_{j=1}^\\infty\\subset\\mathring{\\CG}(\\eta,\\eta)$ such that $\\|h-h_j\\|_{\\mathring{\\CG}(\\bz,\\gz)}\\to 0$ as $j\\to\\infty$.\nBy Lemma \\ref{lem-add3}, we find that $Q_k^NQ_k h_j=\\sum_{|l|\\le N}Q_{k+l}Q_k h_j\\in\\mathring{\\CG}(\\eta,\\eta)$ for\nany $k\\in\\zz$ and $j\\in\\nn$. By the definition of $T_N$ and the arguments in the proof of Step 1), we conclude that\n$\\wz T_{N,L}h_j=\\sum_{|k|\\le L}Q_k^NQ_k h_j$ converges to $T_N h_j$ in $\\mathring{\\CG}(\\bz,\\gz)$ as $L\\to\\infty$. Therefore, every $T_N h_j\\in\\mathring{\\CG}^\\eta_0(\\bz,\\gz)$.\nRecall that $R_N$ is bounded on $\\mathring{\\CG}(\\bz,\\gz)$\nwith operator norm at most $1\/2$. Thus, $T_N$ is also bounded on $\\mathring{\\CG}(\\bz,\\gz)$, which implies that\n $\\|T_Nh-T_Nh_j\\|_{\\mathring{\\CG}(\\bz,\\gz)}\\to 0$ as $j\\to\\infty$. We therefore obtain $T_N h\\in \\mathring{\\CG}^\\eta_0(\\bz,\\gz)$, so does\n $R_N h$ for any $h\\in \\mathring{\\CG}^\\eta_0(\\bz,\\gz)$. Further, applying\n \\begin{align*}\n T_N^{-1} h =(I-R_N)^{-1}= \\sum_{j=0}^\\infty R_N^j h,\n \\end{align*}\nwe know that $T_N^{-1} h \\in \\mathring{\\CG}^\\eta_0(\\bz,\\gz)$.\n\nAgain applying Lemma \\ref{lem-add3}, we also know that $h:=Q_k^NQ_k g_n\\in \\mathring\\CG(\\eta,\\eta)\\subset\n\\mathring\\CG_0^\\eta(\\bz,\\gz)$. Thus, $\\wz{Q}_kQ_kg_n=T_N^{-1}h$ belongs to $ \\mathring\\CG_0^\\eta(\\bz,\\gz)$, so\ndoes $\\sum_{|k|\\le L}\\wz{Q}_kQ_kg_n$. This finishes the proof of Step 2).\n\n{\\it Step 3) Proof of the convergence of \\eqref{eq:hcrf} in $L^p(X)$ when $f\\in L^p(X)$ with any given\n$p\\in(1,\\infty)$.}\n\nFor any $L\\in\\nn$, define $\\wz{T}_L:=\\sum_{|k|\\le L}\\wz{Q}_kQ_k$. 
Notice that $\\wz{T}_L$ is associated to an\nintegral kernel\n$$\n\\wz T_L(x,y)=\\sum_{|k|\\le L}\\wz{Q}_kQ_k(x,y) =\\sum_{|k|\\le L}\\int_X\\wz{Q}_k(x,z)Q_k(z,y)\\,d\\mu(y),\n\\quad\\forall\\;x,\\ y\\in X.\n$$\nWith $\\wz T_{N,L}$ as defined in \\eqref{TNL}, we have $\\wz T_L=T_N^{-1}T_{N,L}$.\nRecall that both $T_N^{-1}$ and $T_{N,L}$ are bounded on $L^2(X)$ with the operator norm independent of $N$\nand $L$, so does $\\wz T_L$.\n\nNext, we show that $\\wz T_L$ is a standard Calder\\'on-Zygmund kernel.\nWe first prove the size condition. Indeed, for any $x,\\ y\\in X$ with $x\\neq y$, by Remark \\ref{rem:andef}(i)\nand the size condition of $\\wz Q_k$, we have\n\\begin{align*}\n&\\lf|\\wz{Q}_kQ_k(x,y)\\r|\\\\\n&\\quad=\\lf|\\int_X\\wz{Q}_k(x,z)Q_k(z,y)\\,d\\mu(y)\\r|\\\\\n&\\quad\\ls\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y,\\CY^k)}{\\dz^k}\\r]^a\\r\\} \\int_X \\frac 1{V_{\\dz^k}(x)+V(x,z)}\n\\lf[\\frac{\\dz^k}{\\dz^k+d(x,z)}\\r]^\\gz\\frac 1{V_{\\dz^k}(y)}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(z,y)}{\\dz^k}\\r]^a\\r\\}\\,d\\mu(z).\n\\end{align*}\nFor the last integral, we separate $X$ into two domains $\\{z\\in X:\\ d(z,x)\\ge(2A_0)^{-1}d(x,y)\\}$ and\n$\\{z\\in X:\\ d(z,y)\\ge(2A_0)^{-1}d(x,y)\\}$. Then, from Lemma \\ref{lem-add}(ii), we deduce that\n\\begin{align*}\n&\\int_X \\frac 1{V_{\\dz^k}(x)+V(x,z)}\\lf[\\frac{\\dz^k}{\\dz^k+d(x,z)}\\r]^\\gz\\frac 1{V_{\\dz^k}(y)}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(z,y)}{\\dz^k}\\r]^a\\r\\}\\,d\\mu(z)\\\\\n&\\quad\\ls\\frac 1{V_{\\dz^k}(x)+V(x,y)}\\lf[\\frac{\\dz^k}{\\dz^k+d(x,z)}\\r]^\\gz\\int_{d(z,x)\\ge (2A_0)^{-1}d(x,y)}\n\\frac 1{V_{\\dz^k}(z)}\\exp\\lf\\{-\\nu'\\lf[\\frac{d(z,y)}{\\dz^k}\\r]^a\\r\\}\\,d\\mu(z)\\\\\n&\\qquad+\\frac{1}{V_{\\dz^k}(y)}\\exp\\lf\\{-\\frac{\\nu'} 2\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\n\\int_{d(z,y)\\ge (2A_0)^{-1}d(x,y)}\\frac 1{V_{\\dz^k}(x)+V(x,z)}\\lf[\\frac{\\dz^k}{\\dz^k+d(x,z)}\\r]^\\gz\\,d\\mu(z)\\\\\n&\\quad\\ls\\frac 1{V_{\\dz^k}(x)+V(x,y)}\\lf[\\frac{\\dz^k}{\\dz^k+d(x,z)}\\r]^\\gz.\n\\end{align*}\nTherefore, by the two formulae above, we find that\n$$\n\\lf|\\wz{Q}_kQ_k(x,y)\\r|\\ls\\frac 1{V_{\\dz^k}(x)+V(x,y)}\\lf[\\frac{\\dz^k}{\\dz^k+d(x,z)}\\r]^\\gz\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y,\\CY^k)}{\\dz^k}\\r]^a\\r\\},\n$$\nwhich, consequently, implies that\n\\begin{align}\\label{eq:wTLsize}\n\\lf|\\wz{T}_L(x,y)\\r|\n\\ls\\sum_{\\dz^k\\ge d(x,y)}\\frac 1{V_{\\dz^k}(y)}\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\n+\\sum_{\\dz^kN}\\sum_{k=-\\fz}^\\fz Q_{k+l}Q_kf(x)\\noz\\\\\n=:{}&\\sum_{k=-\\fz}^\\fz G_{k, N}f(x)+R_Nf(x)=:G_Nf(x)+R_Nf(x),\\noz\n\\end{align}\nwhere $R_N$ is as in \\eqref{eq:defRT}.\nWe have shown, in the previous section, that $R_N$ is bounded on both $L^2(X)$ and\n$\\mathring{\\CG}(x_1,r,\\bz,\\gz)$, where $x_1\\in X$, $r\\in(0,\\fz)$ and\n$\\bz,\\ \\gz\\in(0,\\eta)$. We will show, in Section \\ref{sec5.1} below, that the operator $G_N$ is also bounded\non both $L^2(X)$ and $\\mathring{\\CG}(x_1,r,\\bz,\\gz)$. Thus, if $f$ belongs to $L^2(X)$ [resp.,\n$\\mathring{\\CG}(x_1,r,\\bz,\\gz)$], then $\\CS_Nf$ in \\eqref{eq:S} is a well defined function in $L^2(X)$ [resp.,\nin $\\mathring{\\CG}(x_1,r,\\bz,\\gz)$]. The proofs for the homogeneous discrete reproducing formulae are presented\nin Section \\ref{pr2}.\n\n\\subsection{Boundedness of the remainder $\\CR_N$}\\label{sec5.1}\n\nBased on the discussion after \\eqref{eq:defGR}, the boundedness of $\\CR_N$ can be reduced to the study\nof the boundedness of $G_N$ on $L^2(X)$ and spaces of test functions. 
We need the following lemma.\n\n\\begin{lemma}\\label{lem:GkN}\nFor any $k\\in\\zz$ and $N\\in\\nn$, let $G_{k,N}$ be defined as in \\eqref{eq:defGR}, that is, for any $x,\\ y\\in X$,\n$$\nG_{k,N}(x,y)=\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\int_{Q_\\az^{k,m}}Q_k^N(x,z)\n\\lf[Q_k(z,y)-Q_k\\lf(y_\\az^{k,m},y\\r)\\r]\\,d\\mu(z).\n$$\nThen there exist positive constants $C_{(N)}$ and $c_{(N)}$, depending on $N$, but being independent of $k$,\n$j_0$ and $y_\\az^{k,m}$, such that $G_{k,N}$ satisfies:\n\\begin{enumerate}\n\\item for any $x,\\ y\\in X$,\n\\begin{equation}\\label{eq:Gksize}\n|G_{k,N}(x,y)|\\le C_{(N)}\\dz^{j_0\\eta}\\frac 1{V_{\\dz^k}(x)}\\exp\\lf\\{-c'_{(N)}\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-c'_{(N)}\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\};\n\\end{equation}\n\\item for any $x,\\ x',\\ y\\in X$ with $d(x,x')\\le\\dz^k$ or $d(x,x')\\le(2A_0)^{-1}[\\dz^k+d(x,y)]$,\n\\begin{align}\\label{eq:Gkregx}\n&|G_{k,N}(x,y)-G_{k,N}(x',y)|+|G_k(y,x)-G_k(y,x')|\\\\\n&\\quad\\le C_{(N)}\\dz^{j_0\\eta}\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta\n\\frac 1{V_{\\dz^k}(x)}\\exp\\lf\\{-c'_{(N)}\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-c'_{(N)}\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\noz;\n\\end{align}\n\\item for any $x,\\ x',\\ y,\\ y'\\in X$ with $d(x,x')\\le\\dz^k$ and $d(y,y')\\le\\dz^k$, or\n$d(x,x')\\le(2A_0)^{-2}[\\dz^k+d(x,y)]$ and $d(y,y')\\le(2A_0)^{-2}[\\dz^k+d(x,y)]$,\n\\begin{align}\\label{eq:Gkdreg}\n&|[G_{k,N}(x,y)-G_{k,N}(x',y)]-[G_{k,N}(x,y')-G_{k,N}(x',y')]|\\\\\n&\\quad\\le C_{(N)}\\dz^{j_0\\eta}\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta\\lf[\\frac{d(y,y')}{\\dz^k}\\r]^\\eta\n\\frac 1{V_{\\dz^k}(x)}\\exp\\lf\\{-c'_{(N)}\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\\noz\\\\\n&\\qquad\\times\\exp\\lf\\{-c'_{(N)}\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\};\\noz\n\\end{align}\n\\item for any $x\\in X$, $\\int_X G_k(x,y)\\,d\\mu(y)=0=\\int_X G_k(y,x)\\,d\\mu(y)$.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nWe first prove (i). Indeed, from \\eqref{j_0}, it follows that, for any $z\\in Q_\\az^{k,m}$,\n\\begin{align}\\label{eq-xx}\nd\\lf(z,y_\\az^{k,m}\\r)\\le(2A_0)^2C^\\natural\\dz^{k+j_0}\\le(2A_0)^{-2}\\dz^k\\le\\dz^k.\n\\end{align}\nWith this, applying \\eqref{eq:QkNsize}, Remark \\ref{rem:andef}(i), \\eqref{eq-add7}, Lemma \\ref{lem-add}(ii)\nand \\eqref{eq-add8}, we conclude that, for any $x,\\ y\\in X$,\n\\begin{align*}\n|G_{k,N}(x,y)|&\\ls\\frac 1{V_{\\dz^k}(x)}\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\int_{Q_\\az^{k,m}}\n\\exp\\lf\\{-c\\lf[\\frac{d(x,z)}{\\dz^k}\\r]^a\\r\\}\\lf[\\frac{d(z,y_\\az^{k,m})}{\\dz^k}\\r]^\\eta\\frac 1{V_{\\dz^k}(z)}\n\\\\\n&\\quad\\times\\exp\\lf\\{-\\nu'\\lf[\\frac{d(z,y)}{\\dz^k}\\r]^a\\r\\}\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\,d\\mu(z)\\\\\n&\\ls\\dz^{j_0\\eta}\\frac 1{V_{\\dz^k}(x)}\\exp\\lf\\{-\\frac{c\\wedge\\nu'}2\\lf[\\frac{d(x,y)}{A_0\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\\\\n&\\quad\\times\\int_X\\frac{1}{V_{\\dz^k}(z)}\\exp\\lf\\{-\\frac{\\nu'}{2}\\lf[\\frac{d(z,y)}{\\dz^k}\\r]^a\\r\\}\n\\,d\\mu(z)\\\\\n&\\ls\\dz^{j_0\\eta}\\frac 1{V_{\\dz^k}(x)}\\exp\\lf\\{-\\frac{c\\wedge\\nu'}4\\lf[\\frac{d(x,y)}{A_0\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\frac{c\\wedge\\nu'}4\\lf[\\frac{d(x,\\CY^k)}{A_0^2\\dz^k}\\r]^a\\r\\},\n\\end{align*}\nas desired. Moreover, by the dominated convergence theorem and the cancellation of $Q_k$, we also obtain (iv).\n\nSuppose now that $y\\in X$ and $x,\\ x'\\in X$ satisfying $d(x,x')\\le\\dz^k$. 
Then, from \\eqref{eq:QkNregx}, the\nregularity condition of $Q_k$, Remark \\ref{rem:andef}(i), \\eqref{eq-add7}, Lemma \\ref{lem-add}(ii) and\n\\eqref{eq-add8}, we deduce that\n\\begin{align*}\n&|G_{k,N}(x,y)-G_{k,N}(x',y)|\\\\\n&\\quad\\le\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\int_{Q_\\az^{k,m}}\n\\lf|Q_k^N(x,z)-Q_k^N(x',z)\\r|\\lf|Q_k(z,y)-Q_k\\lf(y_\\az^{k,m},y\\r)\\r|\\,d\\mu(z)\\\\\n&\\quad\\ls\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta\n\\frac 1{V_{\\dz^k}(x)}\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\int_{Q_\\az^{k,m}}\n\\exp\\lf\\{-c\\lf[\\frac{d(x,z)}{\\dz^k}\\r]^a\\r\\}\\lf[\\frac{d(z,y_\\az^{k,m})}{\\dz^k}\\r]^\\eta\\frac 1{V_{\\dz^k}(z)}\\\\\n&\\qquad\\times\\exp\\lf\\{-\\nu'\\lf[\\frac{d(z,y)}{\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\,d\\mu(z)\\\\\n&\\quad\\ls\\dz^{j_0\\eta}\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta\\frac 1{V_{\\dz^k}(x)}\n\\exp\\lf\\{-\\frac{c\\wedge\\nu'}2\\lf[\\frac{d(x,y)}{A_0\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\\\\n&\\qquad\\times\\int_X\\frac{1}{V_{\\dz^k}(z)}\\exp\\lf\\{-\\frac{\\nu'}{2}\n\\lf[\\frac{d(z,y)}{\\dz^k}\\r]^a\\r\\}\\,d\\mu(z)\\\\\n&\\quad\\ls\\dz^{j_0\\eta}\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta\n\\frac 1{V_{\\dz^k}(x)}\\exp\\lf\\{-\\frac{c\\wedge\\nu'}4\\lf[\\frac{d(x,y)}{A_0\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\frac{c\\wedge\\nu'}4\\lf[\\frac{d(x,\\CY^k)}{ A_0^2\\dz^k}\\r]^a\\r\\}.\n\\end{align*}\nSimilarly, when $d(x,x')\\le\\dz^k$, by \\eqref{eq:QkNsize}, Remark \\ref{rem:andef}(i),\n\\eqref{eq-add7} and Lemma \\ref{lem-add}(ii), we conclude that\n\\begin{align*}\n&|G_{k,N}(y,x)-G_{k,N}(y,x')|\\\\\n&\\quad\\le\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\int_{Q_\\az^{k,m}}\n\\lf|Q_k^N(y,z)\\r|\\lf|Q_k(z,x)-Q_k\\lf(y_\\az^{k,m},x\\r)-Q_k(z,x')+Q_k\\lf(y_\\az^{k,m},x'\\r)\\r|\\,d\\mu(z)\\\\\n&\\quad\\ls\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta\n\\frac 1{V_{\\dz^k}(x)}\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\int_{Q_\\az^{k,m}}\\frac 1{V_{\\dz^k}(z)}\n\\exp\\lf\\{-c\\lf[\\frac{d(y,z)}{\\dz^k}\\r]^a\\r\\}\\lf[\\frac{d(z,y_\\az^{k,m})}{\\dz^k}\\r]^\\eta\\\\\n&\\qquad\\times\\exp\\lf\\{-\\nu'\\lf[\\frac{d(z,x)}{\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\,d\\mu(z)\\\\\n&\\quad\\ls\\dz^{j_0\\eta}\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta\n\\frac 1{V_{\\dz^k}(x)}\\exp\\lf\\{-\\frac{c\\wedge\\nu'}2\\lf[\\frac{d(x,y)}{2A_0\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\n\\end{align*}\nBy the two formulae above, we obtain (ii) when $d(x,x')\\le\\dz^k$. Using (i) and arguing as Proposition\n\\ref{prop:etoa} [see also Remark \\ref{rem:andef}(iii)], we conclude that \\eqref{eq:Gkregx} remains true when\n$d(x,x')\\le(2A_0)^{-1}[\\dz^k+d(x,y)]$.\n\nBased on (ii) and the proof of Proposition\n\\ref{prop:etoa} [see also Remark \\ref{rem:andef}(iii)], we only show that (iii) holds true when\n$x,\\ x',\\ y,\\ y'\\in X$ satisfying $d(x,x')\\le\\dz^k$ and $d(y,y')\\le\\dz^k$. 
In this case, applying\n$\\eqref{eq:QkNregx}$, the second difference regularity of $Q_k$, Remark \\ref{rem:andef}(i), \\eqref{eq-add7},\nLemma \\ref{lem-add}(ii) and \\eqref{eq-add8}, we obtain\n\\begin{align*}\n&|[G_{k,N}(x,y)-G_{k,N}(x',y)]-[G_{k,N}(x,y')-G_{k,N}(x',y')]|\\\\\n&\\quad\\le\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\n\\int_{Q_\\az^{k,m}}\\lf|Q_k^N(x,z)-Q_k^N(x',z)\\r|\n\\lf|Q_k(z,y)-Q_k\\lf(y_\\az^{k,m},y\\r)-Q_k(z,y')+Q_k\\lf(y_\\az^{k,m},y'\\r)\\r|\\,d\\mu(z)\\\\\n&\\quad\\ls\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta\\lf[\\frac{d(y,y')}{\\dz^k}\\r]^\\eta\n\\frac 1{V_{\\dz^k}(x)}\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\int_{Q_\\az^{k,m}}\n\\exp\\lf\\{-c\\lf[\\frac{d(x,z)}{\\dz^k}\\r]^a\\r\\}\\lf[\\frac{d(z,y_\\az^{k,m})}{\\dz^k}\\r]^\\eta\\frac 1{V_{\\dz^k}(z)}\\\\\n&\\qquad\\times\\exp\\lf\\{-\\nu'\\lf[\\frac{d(z,y)}{\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\,d\\mu(z)\\\\\n&\\quad\\ls\\dz^{j_0\\eta}\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta\\lf[\\frac{d(y,y')}{\\dz^k}\\r]^\\eta\n\\frac 1{V_{\\dz^k}(x)}\\exp\\lf\\{-\\frac{c\\wedge\\nu'}2\\lf[\\frac{d(x,y)}{A_0\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\\\\\n&\\qquad\\times\\int_X\\frac{1}{V_{\\dz^k}(z)}\\exp\\lf\\{-\\frac{\\nu'}{2}\\lf[\\frac{d(z,y)}{\\dz^k}\\r]^a\\r\\}\\,d\\mu(z)\\\\\n&\\quad\\ls\\dz^{j_0\\eta}\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta\\lf[\\frac{d(y,y')}{\\dz^k}\\r]^\\eta\n\\frac 1{V_{\\dz^k}(x)}\\exp\\lf\\{-\\frac{c\\wedge\\nu'}4\\lf[\\frac{d(x,y)}{A_0\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-\\frac{c\\wedge\\nu'}4\\lf[\\frac{d(x,\\CY^k)}{ A_0^2\\dz^k}\\r]^a\\r\\}.\n\\end{align*}\nThis proves (iii) and hence finishes the proof of Lemma \\ref{lem:GkN}.\n\\end{proof}\n\nTo consider the boundedness of $G_N$ on the spaces of test functions, we use the method\nsimilar to the proof of $R_N$ in Section \\ref{RN}. For any $N,\\ M\\in\\nn$ and $x,\\ y\\in X$, define\n\\begin{equation}\\label{eq:defGM}\nG_N^{(M)}(x,y):=\\sum_{|k|\\le M} G_{k,N}(x,y)\n=\\sum_{|k|\\le M}\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\int_{Q_\\az^{k,m}}Q_k^N(x,z)\n\\lf[Q_k(z,y)-Q_k\\lf(y_\\az^{k,m},y\\r)\\r]\\,d\\mu(z).\n\\end{equation}\n\n\\begin{proposition}\\label{prop:GNM}\nFor any $N,\\ M\\in\\nn$, let $G_N$ and $G_N^{(M)}$ be as in \\eqref{eq:defGR} and \\eqref{eq:defGM}. Let\n$x_1\\in X$, $r\\in(0,\\infty)$ and $\\bz,\\ \\gz\\in(0,\\eta)$. 
Then there exists a positive constant $C_{(N)}$,\ndepending on $N$, but being independent of $M$, $j_0$, $x_1$, $r$ and $y_\\az^{k,m}$, such that\n\\begin{equation}\\label{eq:GL2}\n\\|G_N\\|_{L^2(X)\\to L^2(X)}+\\lf\\|G_N^{(M)}\\r\\|_{L^2(X)\\to L^2(X)}\\le C_{(N)}\\dz^{j_0\\eta}\n\\end{equation}\nand\n\\begin{equation}\\label{eq:GMG}\n\\lf\\|G_{N}^{(M)}\\r\\|_{\\mathring{\\CG}(x_1,r,\\bz,\\gz)\\to\\mathring{\\CG}(x_1,r,\\bz,\\gz)}\\le C_{(N)}\\dz^{j_0\\eta}.\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\nApplying (i) and (ii) of Lemma \\ref{lem:GkN} and Remark \\ref{rem:andef}, we follow the proof of Lemma\n\\ref{lem:ccrf1}(i) with $\\wz Q_j$ replaced by $G_{k,N}$ or $G_{k,N}^*$, and $Q_k$ replaced by $G_{l,N}$ or\n$G_{l,N}^*$, to deduce that, for any $k,\\ l\\in\\zz$ and $x,\\ y\\in X$,\n$$\n\\lf|G_{k,N}^*G_{l,N}(x,y)\\r|+\\lf|G_{k,N}G_{l,N}^*(x,y)\\r|\\le \\wz C\\dz^{j_0\\eta}\\dz^{|k-l|\\eta}\\frac 1{V_{\\dz^{k\\wedge l}}(x)}\n\\exp\\lf\\{-\\wz{c}\\lf[\\frac{d(x,y)}{\\dz^{k\\wedge l}}\\r]^a\\r\\},\n$$\nwhere $\\wz C$ and $\\wz{c}$ are positive constants independent of $k$, $l$, $x$, $y$, $j_0$ and $y_\\az^{k,m}$.\nCombining this with Proposition \\ref{prop:basic}(iii), we find that, for any $k,\\ l\\in\\zz$,\n$$\n\\lf\\|G_{k,N}^*G_{l,N}\\r\\|_{L^2(X)\\to L^2(X)}+\\lf\\|G_{k,N}G_{l,N}^*\\r\\|_{L^2(X)\\to L^2(X)}\n\\ls_N\\dz^{j_0\\eta}\\dz^{|k-l|\\eta}.\n$$\nThen \\eqref{eq:GL2} follows from Lemma \\ref{lem:CSlem}.\n\nNow we show \\eqref{eq:GMG}. Let $M\\in\\nn$. By the definition of $G_{N}^{(M)}$ and \\eqref{eq:Gksize}, for\nany $f\\in C^\\bz(X)$ and $x\\in X$, we have\n$$\nG_N^{(M)}f(x)=\\int_X G_N^{(M)}(x,y)f(y)\\,d\\mu(y).\n$$\nIt remains to prove that $G_N^{(M)}$ satisfies \\eqref{eq:Ksize}, \\eqref{eq:Kreg} and \\eqref{eq:Kdreg} with\n$C_T:=C\\dz^{j_0\\eta}$, where $C$ is a positive constant depending on $N$, but being independent of $M$ and\n$j_0$. For the size condition, by \\eqref{eq:Gksize} and Lemma \\ref{lem:sum2}, we conclude that, for any\n$x,\\ y\\in X$ with $x\\neq y$,\n\\begin{align*}\n\\lf|G_N^{(M)}(x,y)\\r|&\\le\\sum_{|k|\\le M}|G_{k,N}(x,y)|\\\\\n&\\ls\\dz^{j_0\\eta}\\sum_{k=-\\fz}^\\fz\\frac 1{V_{\\dz^k}(x)}\\exp\\lf\\{-c'\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\n\\exp\\lf\\{-c'\\lf[\\frac{d(x,\\CY^k)}{\\dz^k}\\r]^a\\r\\}\n\\ls\\dz^{j_0\\eta}\\frac 1{V(x,y)}.\n\\end{align*}\nSimilarly, by \\eqref{eq:Gkregx} [resp., \\eqref{eq:Gkdreg}] and Lemma \\ref{lem:sum2}, we obtain the regularity\ncondition (resp., the second difference regularity condition).\nNotice that $G_N^{(M)}$ is bounded on $L^2(X)$ with its operator norm independent of $j_0$ and $M$ [see\n\\eqref{eq:GL2}] and the kernel of $G_N^{(M)}$ has\ncancellation condition. Then, applying Theorem \\ref{thm:Kbdd}, we obtain \\eqref{eq:GMG}. This finishes the\nproof of Proposition \\ref{prop:GNM}.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:GMlim}\nLet $f\\in\\CG(x_1,r,\\bz,\\gz)$ with $x_1\\in X$, $r\\in(0,\\fz)$ and $\\bz,\\ \\gz\\in(0,\\eta)$. Fix $N\\in\\nn$.\nThen the following assertions are true:\n\\begin{enumerate}\n\\item $\\lim_{M\\to\\fz} G_N^{(M)}=G_Nf$ in $L^2(X)$;\n\\item for any $x\\in X$, the sequence $\\{G_{N}^{(M)}f(x)\\}_{M=1}^\\infty$ converges locally uniformly to some\nelement, denoted by $\\widetilde{G_N} f(x)$, where $\\widetilde{G_N}f$ differs from $R_N f$ at most a set of\n$\\mu$-measure $0$;\n\\item the operator $\\widetilde {G_N}$ can be uniquely extended from $\\CG(x_1,r,\\bz,\\gz)$ to $L^2(X)$, with the\nextension operator coinciding with $G_N$. 
In this sense, for any $f\\in\\CG(x_1,r,\\bz,\\gz)$ and $x\\in X$,\n\\begin{equation*}\n\\lim_{M\\to\\fz}G_N^{(M)}f(x)=\\widetilde{G_N}f(x)=G_Nf(x).\n\\end{equation*}\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nTo obtain (i), applying Lemma \\ref{lem:CSlem} and \\eqref{eq:defGM}, we find that, for any $f\\in L^2(X)$,\n$$\nG_Nf=\\sum_{k=-\\fz}^\\fz G_{k,N}f=\\lim_{M\\to\\fz}G_N^{(M)}f\\quad\\text{in $L^2(X)$}.\n$$\n\nNext we prove (ii). Let $f\\in\\CG(x_1,r,\\bz,\\gz)$. To prove that $\\{G_{N}^{(M)}f\\}_{M=1}^\\fz$ is a locally\nuniformly convergent sequence, for any fixed point $x\\in X$, we only need to find a positive sequence\n$\\{c_{k}\\}_{k=-\\fz}^\\fz$ such that\n\\begin{equation}\\label{eq:sfin2}\n\\sup_{y\\in B(x,r)}|G_{k,N}f(y)|\\le c_{k}\\qquad \\textup{and}\\qquad\n\\sum_{k=-\\fz}^\\fz c_{k}<\\fz.\n\\end{equation}\nWe claim that\n\\begin{align}\\label{claim-add2}\n\\sup_{y\\in B(x,r)}|G_{k,N}f(y)|\\ls\n\\begin{cases}\n\\displaystyle \\frac 1{V_{\\dz^{k}}(x)}\\exp\\lf\\{-c\\lf[\\frac{d(x,\\CY^{k})}{A_0\\dz^{k}}\\r]^a\\r\\}\n&\\qquad \\textup{if}\\;\\dz^{k}\\ge r,\\\\\n\\displaystyle \\frac 1{V_r(x_1)}\\lf(\\frac{\\dz^k}r\\r)^\\bz&\\qquad \\textup{if}\\;\\dz^k<r.\n\\end{cases}\n\\end{align}\nIndeed, \\eqref{claim-add2} implies \\eqref{eq:sfin2} and hence (ii); moreover, (iii) follows from (i), (ii)\nand a standard density argument. This finishes the proof of Lemma \\ref{lem:GMlim}.\n\\end{proof}\n\nThe following lemma gives a pointwise estimate of $Q_kf$, which is needed in the proof of Theorem\n\\ref{thm:hdrf} below.\n\n\\begin{lemma}\\label{lem-add5}\nLet $k\\in\\zz$, $x_0\\in X$, $\\bz,\\ \\gz\\in(0,\\eta)$ and $f\\in\\CG(x_0,\\dz^k,\\bz,\\gz)$ with\n$\\|f\\|_{\\CG(x_0,\\dz^k,\\bz,\\gz)}\\le 1$. Then, for any $y,\\ u\\in X$,\n\\begin{align}\\label{eq-xxx1}\n|Q_kf(y)|\\ls\\lf[1+\\frac{d(u,y)}{\\dz^k}\\r]^{\\bz+\\omega+\\gz}\\lf[\\frac{\\dz^k}{\\dz^k+d(u, x_0)}\\r]^{\\bz}\n\\frac 1{V_{\\dz^k}(x_0)+V(x_0,u)}\\lf[\\frac {\\dz^k}{\\dz^k+d(x_0,u)}\\r]^{\\gz}.\n\\end{align}\n\\end{lemma}\n\n\\begin{proof}\nLet $y,\\ u\\in X$. From the cancellation of $Q_k$, it follows that\n\\begin{align*}\n|Q_kf(y)|&=\\lf|\\int_X Q_k(y,z)[f(z)-f(u)]\\,d\\mu(z)\\r|\\\\\n&\\le\\int_{d(u, z)\\le(2A_0)^{-1}[\\dz^k+d(x_0,u)]}|Q_k(y,z)||f(z)-f(u)|\\,d\\mu(z)\\\\\n&\\quad+\\int_{d(u, z)>(2A_0)^{-1}[\\dz^k+d(x_0,u)]}|Q_k(y,z)||f(z)|\\,d\\mu(z)\\\\\n&\\quad+|f(u)|\\int_{d(u, z)>(2A_0)^{-1}[\\dz^k+d(x_0,u)]}|Q_k(y,z)|\\,d\\mu(z)\\\\\n&=:\\mathrm{J}_1+\\mathrm{J}_2+\\mathrm{J}_3.\n\\end{align*}\nBy the size condition of $Q_k$, Remark \\ref{rem:andef}(i), the regularity condition of $f$, \\eqref{eq-xx} and\nLemma \\ref{lem-add}(ii), we conclude that\n\\begin{align*}\n\\mathrm{J}_1\n&\\ls \\int_{d(u, z)\\le(2A_0)^{-1}[\\dz^k+d(x_0,u)]}\\frac1{V_{\\dz^k}(z)}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y,z)}{\\dz^k}\\r]^a\\r\\}\\lf[\\frac{d(u, z)}{\\dz^k+d(x_0,u)}\\r]^{\\bz}\\\\\n&\\quad\\times\\frac 1{V_{\\dz^k}(x_0)+V(x_0,u)}\\lf[\\frac {\\dz^k}{\\dz^k+d(x_0,u)}\\r]^{\\gz}\\,d\\mu(z)\\\\\n&\\ls\\lf[1+\\frac{d(u, y)}{\\dz^k}\\r]^{\\bz}\\lf[\\frac{\\dz^k}{\\dz^k+d(u, x_0)}\\r]^{\\bz}\\frac 1{V_{\\dz^k}(x_0)+V(x_0,u)}\\lf[\\frac {\\dz^k}{\\dz^k+d(x_0,u)}\\r]^{\\gz},\n\\end{align*}\nwhere, in the last step, we used the inequality\n\\begin{align}\\label{eq-xx5}\n\\frac{d(u, z)}{\\dz^k}\\le \\frac{A_0^2[d(u, y)+d(y,z)]}{\\dz^k}\n\\ls\\lf[1+\\frac{d(u, y)}{\\dz^k}\\r]\\lf[1+\\frac{d(y,z)}{\\dz^k}\\r]\n\\end{align}\nand the fact that the last term $1+\\frac{d(y,z)}{\\dz^k}$ is absorbed by the factor\n$\\exp\\{-\\nu'[\\frac{d(y,z)}{\\dz^k}]^a\\}$.\n\nBy the size condition of $Q_k$, Remark \\ref{rem:andef}(i), the size condition of $f$, the fact that\n$[\\frac{d(u, z)}{\\dz^k+d(x_0,u)}]^{\\bz}\\ge (2A_0)^{-\\bz}$ and \\eqref{eq-xx5}, we have\n\\begin{align*}\n\\mathrm{J}_2&\\ls\\int_{d(u, z)>(2A_0)^{-1}[\\dz^k+d(x_0,u)]}\\frac1{V_{\\dz^k}(z)}\n\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y,z)}{\\dz^k}\\r]^a\\r\\}\\lf[\\frac{d(u, z)}{\\dz^k+d(x_0,u)}\\r]^{\\bz}\\\\\n&\\quad\\times\\frac 1{V_{\\dz^k}(x_0)+V(x_0,z)}\\lf[\\frac {\\dz^k}{\\dz^k+d(x_0,z)}\\r]^{\\gz}\\,d\\mu(z)\\\\\n&\\ls \\lf[1+\\frac{d(u,y)}{\\dz^k}\\r]^{\\bz}\\lf[\\frac{\\dz^k}{\\dz^k+d(u, x_0)}\\r]^{\\bz}\n\\int_X\\lf[1+\\frac{d(y,z)}{\\dz^k}\\r]^\\bz\\frac1{V_{\\dz^k}(z)}\\\\\n&\\quad\\times\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y,z)}{\\dz^k}\\r]^a\\r\\}\\frac 1{V_{\\dz^k}(x_0)+V(x_0,z)}\n\\lf[\\frac {\\dz^k}{\\dz^k+d(x_0,z)}\\r]^{\\gz}\\,d\\mu(z)\\\\\n&\\ls \\lf[1+\\frac{d(u,y)}{\\dz^k}\\r]^{\\bz}\\lf[\\frac{\\dz^k}{\\dz^k+d(u, x_0)}\\r]^{\\bz}\n\\int_X\\frac1{V_{\\dz^k}(z)}\\exp\\lf\\{-\\frac{\\nu'}2\\lf[\\frac{d(y,z)}{\\dz^k}\\r]^a\\r\\}\\\\\n&\\quad\\times\\frac 
1{V_{\\dz^k}(x_0)+V(x_0,z)}\n\\lf[\\frac {\\dz^k}{\\dz^k+d(x_0,z)}\\r]^{\\gz}\\,d\\mu(z).\n\\end{align*}\nFor the last integral displayed above, we separate $X$ into\n$\\{z\\in X:\\ d(y,z)\\ge d(x_0,y)\/(2A_0)\\}$ and $\\{z\\in X:\\ d(x_0,z)\\ge d(x_0,y)\/(2A_0)\\}$. Then, by Lemma\n\\ref{lem-add}(ii), we conclude that\n\\begin{align*}\n&\\int_X\\frac1{V_{\\dz^k}(z)}\\exp\\lf\\{-\\frac{\\nu'}2\\lf[\\frac{d(y,z)}{\\dz^k}\\r]^a\\r\\}\n\\frac 1{V_{\\dz^k}(x_0)+V(x_0,z)}\\lf[\\frac {\\dz^k}{\\dz^k+d(x_0,z)}\\r]^{\\gz}\\,d\\mu(z)\\\\\n&\\quad\\ls\\frac1{V_{\\dz^k}(x_0)}\\exp\\lf\\{-\\frac{\\nu'}4\\lf[\\frac{d(x_0,y)}{2A_0\\dz^k}\\r]^a\\r\\}\n\\int_{d(y,z)\\ge (2A_0)^{-1}d(x_0,y)}\\frac 1{V_{\\dz^k}(x_0)+V(x_0,z)}\n\\lf[\\frac {\\dz^k}{\\dz^k+d(x_0,z)}\\r]^{\\gz}\\,d\\mu(z)\\\\\n&\\qquad+\\frac 1{V_{\\dz^k}(x_0)+V(x_0,y)}\\lf[\\frac {\\dz^k}{\\dz^k+d(x_0,y)}\\r]^{\\gz}\n\\int_{d(x_0,z)\\ge (2A_0)^{-1}d(x_0,y)}\n\\frac1{V_{\\dz^k}(z)}\\exp\\lf\\{-\\frac{\\nu'}2\\lf[\\frac{d(y,z)}{\\dz^k}\\r]^a\\r\\}\\,d\\mu(z)\\\\\n&\\quad\\ls\\frac 1{V_{\\dz^k}(x_0)+V(x_0,y)}\\lf[\\frac {\\dz^k}{\\dz^k+d(x_0,y)}\\r]^{\\gz}.\n\\end{align*}\nFrom this, the doubling condition \\eqref{eq:doub} and \\eqref{eq-xx5}, it follows that\n\\begin{align*}\n\\mathrm{J}_2&\\ls \\lf[1+\\frac{d(u, y)}{\\dz^k}\\r]^{\\bz}\\lf[\\frac{\\dz^k}{\\dz^k+d(u, x_0)}\\r]^{\\bz}\n\\frac1{V_{\\dz^k}(x_0)+V(y, x_0)} \\lf[\\frac{\\dz^k}{\\dz^k+d(y, x_0)}\\r]^{\\gz}\\\\\n&\\ls \\lf[1+\\frac{d(u, y)}{\\dz^k}\\r]^{\\bz+\\omega+\\gz}\\lf[\\frac{\\dz^k}{\\dz^k+d(u, x_0)}\\r]^{\\bz}\n\\frac 1{V_{\\dz^k}(x_0)+V(x_0,u)}\\lf[\\frac {\\dz^k}{\\dz^k+d(x_0,u)}\\r]^{\\gz}.\n\\end{align*}\nFinally, for the term $\\mathrm{J}_3$, by the size conditions of $f$ and $Q_k$, Remark \\ref{rem:andef}(i),\nthe fact that $[\\frac{d(u, z)}{\\dz^k+d(x_0,u)}]^{\\bz}\\ge (2A_0)^{-\\bz}$ on the domain of integration,\n\\eqref{eq-xx5} and Lemma \\ref{lem-add}(ii), we obtain\n\\begin{align*}\n\\mathrm{J}_3&\\ls\\frac 1{V_{\\dz^k}(x_0)+V(x_0,u)}\\lf[\\frac {\\dz^k}{\\dz^k+d(x_0,u)}\\r]^{\\gz}\n\\int_{d(u,z)>(2A_0)^{-1}[\\dz^k+d(x_0,u)]}\\lf[\\frac{d(u, z)}{\\dz^k+d(x_0,u)}\\r]^{\\bz}\n\\frac 1{V_{\\dz^k}(z)}\\\\\n&\\quad\\times\\exp\\lf\\{-\\nu'\\lf[\\frac{d(y,z)}{\\dz^k}\\r]^a\\r\\}\\,d\\mu(z)\\\\\n&\\ls\\lf[1+\\frac{d(u,y)}{\\dz^k}\\r]^\\bz\\lf[\\frac{\\dz^k}{\\dz^k+d(u, x_0)}\\r]^{\\bz}\\frac 1{V_{\\dz^k}(x_0)+V(x_0,u)}\\lf[\\frac {\\dz^k}{\\dz^k+d(x_0,u)}\\r]^{\\gz}\n\\int_{d(u,z)>(2A_0)^{-1}[\\dz^k+d(x_0,u)]}\\frac 1{V_{\\dz^k}(z)}\\\\\n&\\quad\\times \\exp\\lf\\{-\\frac{\\nu'}2\\lf[\\frac{d(y,z)}{\\dz^k}\\r]^a\\r\\}\\,d\\mu(z)\\\\\n&\\ls\\lf[1+\\frac{d(u,y)}{\\dz^k}\\r]^\\bz\\lf[\\frac{\\dz^k}{\\dz^k+d(u, x_0)}\\r]^{\\bz}\\frac 1{V_{\\dz^k}(x_0)+V(x_0,u)}\n\\lf[\\frac {\\dz^k}{\\dz^k+d(x_0,u)}\\r]^{\\gz}.\n\\end{align*}\n\nCombining the estimates of $\\mathrm{J}_1$ through $\\mathrm{J}_3$, we obtain \\eqref{eq-xxx1},\nwhich completes the proof of Lemma \\ref{lem-add5}.\n\\end{proof}\n\n\\begin{theorem}\\label{thm:hdrf}\nLet $\\{Q_k\\}_{k=-\\fz}^\\fz$ be an $\\exp$-{\\rm ATI} and $\\bz,\\ \\gz\\in(0,\\eta)$.\nFor any $k\\in\\zz$, $\\az\\in\\CA_k$ and $m\\in\\{1,\\ldots,N(k,\\az)\\}$, suppose that $y_\\az^{k,m}$ is an arbitrary point in $Q_\\az^{k,m}$. 
Then, for any $i\\in\\{0,1,2\\}$, there\nexists a sequence $\\{\\wz{Q}_k^{(i)}\\}_{k=-\\fz}^\\fz$ of bounded linear operators on $L^2(X)$ such that, for any\n$f$ in $\\GOO{\\bz,\\gz}$ [resp., $L^p(X)$ with $p\\in(1,\\fz)$],\n\\begin{align}\\label{eq:hdrf}\nf(\\cdot)&=\\sum_{k=-\\fz}^\\fz\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\int_{Q_\\az^{k,m}}\\wz{Q}_k^{(0)}\n(\\cdot,y)\\,d\\mu(y)Q_kf\\lf(y_\\az^{k,m}\\r)\\\\\n&=\\sum_{k=-\\fz}^\\fz\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\wz{Q}^{(1)}_k\\lf(\\cdot,y_\\az^{k,m}\\r)\n\\int_{Q_\\az^{k,m}}Q_kf(y)\\,d\\mu(y)\\noz\\\\\n&=\\sum_{k=-\\fz}^\\fz\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\mu\\lf(Q_\\az^{k,m}\\r)\n\\wz{Q}^{(2)}_k\\lf(\\cdot,y_\\az^{k,m}\\r)Q_kf\\lf(y_\\az^{k,m}\\r),\\noz\n\\end{align}\nwhere all the summations converge in the sense of $\\GOO{\\bz,\\gz}$ [resp., $L^p(X)$ with $p\\in(1,\\fz)$].\nMoreover, the kernels of $\\wz{Q}_k^{(0)}$, $\\wz{Q}^{(1)}_k$ and $\\wz{Q}^{(2)}_k$ satisfy\nthe size condition \\eqref{eq:atisize}, the regularity condition \\eqref{eq:atisregx}\nonly for the first variable, and also the following cancellation condition: for any $x\\in X$,\n\\begin{align}\\label{eq:x00}\n\\int_X \\wz{Q}_k^{(i)}(x,y)\\,d\\mu(y)=0=\\int_X\\wz{Q}_k^{(i)}(y,x)\\,d\\mu(y), \\qquad \\forall\\,i\\in\\{0,1,2\\}.\n\\end{align}\n\\end{theorem}\n\n\\begin{proof}\nWe only prove the first equality in \\eqref{eq:hdrf}.\nIndeed, to obtain the second and the third equalities in \\eqref{eq:hdrf}, instead of $\\CS_N$ in\n\\eqref{eq:S}, we only need to consider\n$$\n\\CS_N^{(1)}f(x):=\\sum_{k=-\\fz}^\\fz\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}Q_k^N\\lf(x,y_\\az^{k,m}\\r)\n\\int_{Q_\\az^{k,m}}Q_kf(y)\\,d\\mu(y),\\qquad \\forall\\,x\\in X,\n$$\nrespectively,\n$$\n\\CS_N^{(2)}f(x):=\\sum_{k=-\\fz}^\\fz\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\mu\\lf(Q_\\az^{k,m}\\r)\nQ_k^N\\lf(x,y_\\az^{k,m}\\r)Q_kf\\lf(y_\\az^{k,m}\\r),\\qquad \\forall\\,x\\in X.\n$$\nOne finds that the corresponding remainders $\\CR_N^{(1)}:= I-\\CS_N^{(1)}$ and $\\CR_N^{(2)}:=I-\\CS_N^{(2)}$\nsatisfy the same estimate as $\\CR_N$ in Proposition \\ref{prop:GG}. The remaining arguments are similar,\nthe details being omitted.\n\nNow we prove the first equality in \\eqref{eq:hdrf}. Due to Remark \\ref{rem:j0N}, the operator $\\CS_N^{-1}=(I-\\CR_N)^{-1}$ satisfies\n$$\n\\lf\\|\\CS_N^{-1}\\r\\|_{L^2(X)\\to L^2(X)}\\le 2 \\quad\\text{and}\\quad\n\\lf\\|\\CS_N^{-1}\\r\\|_{\\GOX{x_1,r,\\bz,\\gz}\\to\\GOX{x_1,r,\\bz,\\gz}}\\le 2.\n$$\nfor any $x_1\\in X$ and $r\\in(0,\\fz)$. For any $k\\in\\zz$ and $x,\\ y\\in X$, define\n$$\n\\wz{Q}_k^{(0)}(x,y):=\\wz{Q}_k(x,y):=\\CS_N^{-1}\\lf(Q_k^N(\\cdot,y)\\r)(x).\n$$\nBy Lemma \\ref{lem:propQkN} and the proof of Proposition \\ref{prop:etoa}, we find that $Q_k^N(\\cdot, y)\\in\n\\mathring\\CG(y, \\dz^k, \\bz,\\gz)$ with $\\|\\cdot\\|_{\\CG(y, \\dz^k, \\bz,\\gz)}$-norm independent of $k$ and $y$,\nwhich implies that\n$\\{\\wz{Q}_k\\}_{k=-\\fz}^\\fz$ satisfies the size condition \\eqref{eq:atisize} and the regularity condition \\eqref{eq:atisregx}\nonly for the first variable. The proof of \\eqref{eq:x00} is similar to that of \\eqref{eq:wzQcan}, with $R_N$\ntherein replaced by $\\CR_N$, the details being omitted. Moreover, for any $f\\in L^2(X)$, \\eqref{eq:hdrf}\nconverges in $L^2(X)$. 
We divide the remaining arguments into three steps.\n\n{\\it Step 1) Proof of the convergence of \\eqref{eq:hdrf} in $\\GO{\\bz,\\gz}$ when $f\\in\\GO{\\bz',\\gz'}$\nwith $\\bz'\\in(\\bz,\\eta)$ and $\\gz'\\in(\\gz,\\eta)$.}\n\nWithout loss of generality, we may assume that $\\|f\\|_{\\GO{\\bz',\\gz'}}=1$.\nFor any $k\\in\\zz$ and $M\\in\\nn$, define\n$$\\CA_{k,M}:=\\{\\az\\in\\CA_k:\\ d(x_0,z_\\az^k)\\le M\\}\\quad \\textup{and}\\quad \\CA_{k,M}^\\complement:=\\CA_k\\setminus\\CA_{k,M}=\\{\\az\\in\\CA_k:\\ d(x_0,z_\\az^k)>M\\}.$$\nTo obtain the convergence of \\eqref{eq:hdrf} in $\\GO{\\bz,\\gz}$, it suffices to show that\n\\begin{equation}\\label{eq:limb}\n\\lim_{L\\to\\fz}\\lim_{M\\to\\fz}\n\\lf\\|f-\\sum_{|k|\\le L}\\sum_{\\az\\in\\CA_{k,M}}\\sum_{m=1}^{N(k,\\az)}\\int_{Q_\\az^{k,m}}\\wz{Q}_k(\\cdot,y)\\,d\\mu(y)\nQ_kf\\lf(y_\\az^{k,m}\\r)\\r\\|_{\\GO{\\bz,\\gz}}=0.\n\\end{equation}\nWriting $f=\\CS_N^{-1}\\CS_N f$ and noticing that $\\CS_N^{-1}$ is bounded on $\\GO{\\bz,\\gz}$, we only need to prove\n\\begin{equation}\\label{eq:limb1}\n\\lim_{L\\to\\fz} \\lf\\|\\sum_{|k|\\ge L+1}\\sum_{\\az\\in\\CA_{k}}\n\\sum_{m=1}^{N(k,\\az)}\\int_{Q_\\az^{k,m}}Q_k^N(\\cdot,y)\\,d\\mu(y)Q_kf\\lf(y_\\az^{k,m}\\r)\\r\\|_{\\GO{\\bz,\\gz}}=0\n\\end{equation}\nand\n\\begin{equation}\\label{eq:limb2}\n\\lim_{L\\to\\fz}\\lim_{M\\to\\fz}\n\\lf\\|\\sum_{|k|\\le L}\\sum_{\\az\\in\\CA_{k,M}^\\complement}\n\\sum_{m=1}^{N(k,\\az)}\\int_{Q_\\az^{k,m}}Q_k^N(\\cdot,y)\\,d\\mu(y)Q_kf\\lf(y_\\az^{k,m}\\r)\\r\\|_{\\GO{\\bz,\\gz}}=0.\n\\end{equation}\n\nInvoking the definition of the kernel $F_k$ in Lemma \\ref{lem-add4}, we find that, for any $f\\in\\GO{\\bz,\\gz}$,\n$$\nF_kf(x)=\\int_X F_k(x,y)f(y)\\,d\\mu(y),\\qquad \\forall\\,x\\in X\n$$\nand hence that \\eqref{eq:limb1} is equivalent to that\n\\begin{equation}\\label{eq:limb3}\n\\lim_{L\\to\\fz} \\lf\\|\\sum_{|k|\\ge L+1} F_kf\\r\\|_{\\GO{\\bz,\\gz}}=0.\n\\end{equation}\nComparing Lemma \\ref{lem-add4} with Lemma \\ref{lem-add3}, we find that $F_k$ satisfies the same estimate as\n$E_k=Q_k^NQ_k$ defined in Lemma \\ref{lem-add3}. Therefore, repeating the estimations of \\eqref{eq:sumsize} and\n\\eqref{eq:sumreg}, with $Q_k^NQ_k$ replaced by $F_k$, and using the Fubini theorem,\nwe find that there exists a positive constant\n$\\sigma\\in(0,\\fz)$ such that, for any $k\\in\\zz$ and $f\\in\\GOO{\\bz,\\gz}$,\n$\\|F_kf\\|_{\\GO{\\bz,\\gz}}\\ls\\dz^{|k|\\sigma}\\|f\\|_{\\GO{\\bz',\\gz'}}$, which implies \\eqref{eq:limb1}.\n\nSince the summation in $k$ in \\eqref{eq:limb2} has only finite terms,\nthe proof of \\eqref{eq:limb2} can be reduced to proving that,\n for any fixed $k\\in\\zz$,\n\\begin{equation}\\label{eq:limb2aim}\n\\lim_{M\\to\\fz}\\lf\\|\\sum_{\\az\\in\\CA_{k,M}^\\complement}\n\\sum_{m=1}^{N(k,\\az)}\\int_{Q_\\az^{k,m}}Q_k^N(\\cdot,y)\\,d\\mu(y)Q_kf\\lf(y_\\az^{k,m}\\r)\\r\\|_{\\GO{\\bz,\\gz}}=0.\n\\end{equation}\nNoticing that $\\GO{\\bz,\\gz}=\\mathring\\CG(x_0,\\dz^k,\\bz,\\gz)$, we may as well consider the $\\|\\cdot\\|_{\\mathring\\CG(x_0,\\dz^k,\\bz,\\gz)}$-norm in \\eqref{eq:limb2aim}. To simplify the notation, we let\n$$\nH_M(x):= \\sum_{\\az\\in\\CA_{k,M}^\\complement}\n\\sum_{m=1}^{N(k,\\az)}\\int_{Q_\\az^{k,m}}Q_k^N(x,y)\\,d\\mu(y)Q_kf\\lf(y_\\az^{k,m}\\r),\\quad\\forall\\, x\\in X.\n$$\n\nChoose $M\\in\\nn$ large enough such that $M\\ge 2A_0C^\\natural\\dz^{-L}$. 
Then, when $|k|\\le L$\nand $\\az\\in \\CA_{k,M}^\\complement$, we know that, for any $y\\in Q_\\az^{k,m}$,\n\\begin{equation}\\label{star2}\nd(y,x_0)\\ge A_0^{-1}d\\lf(z_\\az^{k},x_0\\r)-d\\lf(z_\\az^{k},y\\r)\\ge A_0^{-1}M-C^\\natural\\dz^k\\ge M\/(2A_0).\n\\end{equation}\nBased on $f\\in \\GO{\\bz',\\gz'}=\\mathring\\CG(x_0,\\dz^k,\\bz',\\gz')$ and \\eqref{eq-xxx1} (with both $y$ and $u$\ntherein replaced by $y_\\az^{k,m}$), we know that, for any $y\\in Q_\\az^{k,m}$,\n\\begin{align*}\n\\lf|Q_kf\\lf(y_\\az^{k,m}\\r)\\r|\n&\\ls \\frac1{V_{\\dz^k}(x_0)+V(y_\\az^{k,m}, x_0)} \\lf[\\frac{\\dz^k}{\\dz^k+d(y_\\az^{k,m}, x_0)}\\r]^{\\gz'}\n\\sim \\frac1{V_{\\dz^k}(x_0)+V(y, x_0)} \\lf[\\frac{\\dz^k}{\\dz^k+d(y, x_0)}\\r]^{\\gz'}.\n\\end{align*}\nFrom this, \\eqref{eq:QkNsize}, and \\eqref{star2}, it follows that, for any $x\\in X$,\n\\begin{align*}\n|H_M(x)| \\ls \\int_{d(y, x_0)>M\/(2A_0)} \\frac1{V_{\\dz^k}(x)}\n\\exp\\lf\\{-c\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\\frac1{V_{\\dz^k}(x_0)+V(y, x_0)} \\lf[\\frac{\\dz^k}{\\dz^k+d(y, x_0)}\\r]^{\\gz'}\\,d\\mu(y).\n\\end{align*}\nFor any $y\\in X$, the quasi-triangle inequality of $d$ implies that either\n$d(y, x)\\ge d(x,x_0)\/(2A_0)$ or $d(y, x_0)\\ge d(x,x_0)\/(2A_0)$.\nWith this and \\eqref{eq-xxx}, the last integral can be further controlled by\n\\begin{align*}\n&\\frac1{V_{\\dz^k}(x_0)+V(x, x_0)} \\lf[\\frac{\\dz^k}{\\dz^k+d(x, x_0)}\\r]^{\\gz}\n\\lf\\{\\int_{\\gfz{d(y, x_0)>M\/(2A_0)}{d(y, x)\\ge d(x,x_0)\/(2A_0)}} \\frac1{V_{\\dz^k}(x_0)+V(y, x_0)} \\lf[\\frac{\\dz^k}{\\dz^k+d(y, x_0)}\\r]^{\\gz'}\\,d\\mu(y)\\r.\\\\\n&\\quad+\\lf.\\int_{\\gfz{d(y, x_0)>M\/(2A_0)}{d(y, x_0)\\ge d(x,x_0)\/(2A_0)}} \\frac1{V_{\\dz^k}(x)}\n\\exp\\lf\\{-c\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\}\\lf[\\frac{\\dz^k}{\\dz^k+M}\\r]^{\\gz'-\\gz}\\,d\\mu(y)\\r\\}.\n\\end{align*}\nThis, together with Lemma \\ref{lem-add}(ii), implies that, for any $x\\in X$,\n\\begin{align}\\label{eq-xx1}\n|H_M(x)|\n\\ls M^{\\gz-\\gz'}\\frac1{V_{\\dz^k}(x_0)+V(x, x_0)} \\lf[\\frac{\\dz^k}{\\dz^k+d(x, x_0)}\\r]^{\\gz}.\n\\end{align}\n\nAssume for the moment that, when $d(x,x')\\le (2A_0)^{-1}[\\dz^k+d(x,x_0)]$,\n\\begin{align}\\label{eq-xx2}\n|H_M(x)-H_M(x')|\\ls \\lf[\\frac{d(x,x')}{\\dz^k+d(x,x_0)}\\r]^{\\bz'}\\frac1{V_{\\dz^k}(x_0)+V(x, x_0)}\n \\lf[\\frac{\\dz^k}{\\dz^k+d(x, x_0)}\\r]^{\\gz}.\n\\end{align}\nMeanwhile, when $d(x,x')\\le (2A_0)^{-1}[\\dz^k+d(x,x_0)]$, we have $V_{\\dz^k}(x_0)+V(x, x_0)\\sim V_{\\dz^k}(x_0)+V(x', x_0)$\nand $\\dz^k+d(x, x_0)\\sim \\dz^k+d(x', x_0)$, which, combined with \\eqref{eq-xx1}, gives\n\\begin{align*}\n|H_M(x)-H_M(x')|\\ls \\frac{M^{\\gz-\\gz'} }{V_{\\dz^k}(x_0)+V(x, x_0)} \\lf[\\frac{\\dz^k}{\\dz^k+d(x, x_0)}\\r]^{\\gz}.\n\\end{align*}\nThen, taking the geometric mean of the above two estimates, we obtain\n\\begin{align}\\label{eq-xx3}\n|H_M(x)-H_M(x')|\\ls M^{(\\gz-\\gz')(1-\\bz\/\\bz')} \\lf[\\frac{d(x,x')}{\\dz^k+d(x,x_0)}\\r]^{\\bz}\\frac1{V_{\\dz^k}(x_0)+V(x, x_0)} \\lf[\\frac{\\dz^k}{\\dz^k+d(x, x_0)}\\r]^{\\gz}.\n\\end{align}\nFrom \\eqref{eq-xx1} and \\eqref{eq-xx3}, \\eqref{eq:limb2aim} follows directly.\n\nNow we show that \\eqref{eq-xx2} holds true when $d(x,x')\\le (2A_0)^{-1}[\\dz^k+d(x,x_0)]$. 
From\n\\eqref{eq:QkNregx} when $d(x,x')\\le \\dz^k$ and \\eqref{eq:QkNsize} when $d(x,x')>\\dz^k$, we deduce that\n\\begin{align*}\n\\lf|Q_k^N(x,y)-Q_k^N(x',y)\\r|\n&\\ls\\min\\lf\\{1,\\lf[\\frac{d(x,x')}{\\dz^k}\\r]^\\eta\\r\\} \\lf[ \\frac1{V_{\\dz^k}(x)}\n\\exp\\lf\\{-c\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\} \\r.\\\\\n&\\quad\\lf.+\\frac1{V_{\\dz^k}(x')}\n\\exp\\lf\\{-c\\lf[\\frac{d(x',y)}{\\dz^k}\\r]^a\\r\\}\\r].\n\\end{align*}\nNotice that the condition $d(x,x')\\le (2A_0)^{-1}[\\dz^k+d(x,x_0)]$ implies that $V_{\\dz^k}(x_0)+V(x_0,x)\n\\sim V_{\\dz^k}(x_0)+V(x_0,x')$ and $\\dz^k+d(x_0,x)\\sim \\dz^k+d(x_0,x')$.\nWith this and $f\\in \\GO{\\bz',\\gz'}=\\mathring\\CG(x_0,\\dz^k,\\bz',\\gz')$, we apply \\eqref{eq-xxx1}\n(with $u=x$ or $u=x'$ therein) to deduce that\n\\begin{align*}\n\\lf|Q_kf\\lf(y_\\az^{k,m}\\r)\\r|\n&\\ls \\lf[\\frac{\\dz^k}{\\dz^k+d(x, x_0)}\\r]^{\\bz'}\n\\frac 1{V_{\\dz^k}(x_0)+V(x_0,x)}\\lf[\\frac {\\dz^k}{\\dz^k+d(x_0,x)}\\r]^{\\gz'}\\\\\n&\\quad\\times\n\\min\\lf\\{ \\lf[1+\\frac{d(x, y_\\az^{k,m})}{\\dz^k}\\r]^{\\bz'+\\omega+\\gz'},\n\\lf[1+\\frac{d(x', y_\\az^{k,m})}{\\dz^k}\\r]^{\\bz'+\\omega+\\gz'}\\r\\}.\n\\end{align*}\nAlso, due to \\eqref{eq-xx}, the variable $y_\\az^{k,m}$ in the right-hand side of the above estimate of $|Q_kf(y_\\az^{k,m})|$ can be replaced by any\npoint $y\\in Q_\\az^{k,m}$. Therefore, by Lemma \\ref{lem-add}(ii), we conclude that\n\\begin{align*}\n|H_M(x)-H_M(x')|\n&\\ls \\lf[\\frac{d(x,x')}{\\dz^k+d(x, x_0)}\\r]^{\\bz'}\n\\frac 1{V_{\\dz^k}(x_0)+V(x_0,x)}\\lf[\\frac {\\dz^k}{\\dz^k+d(x_0,x)}\\r]^{\\gz'} \\\\\n&\\quad\\times\n\\int_X\n\\lf(\\frac1{V_{\\dz^k}(x)}\\exp\\lf\\{-c\\lf[\\frac{d(x,y)}{\\dz^k}\\r]^a\\r\\} +\\frac1{V_{\\dz^k}(x')}\n\\exp\\lf\\{-c\\lf[\\frac{d(x',y)}{\\dz^k}\\r]^a\\r\\}\\r)\\\\\n&\\quad\\times\\min\\lf\\{ \\lf[1+\\frac{d(x, y)}{\\dz^k}\\r]^{\\bz'+\\omega+\\gz'},\n\\lf[1+\\frac{d(x',y)}{\\dz^k}\\r]^{\\bz'+\\omega+\\gz'}\\r\\}\\,d\\mu(y)\\\\\n&\\ls \\lf[\\frac{d(x,x')}{\\dz^k+d(x, x_0)}\\r]^{\\bz'}\n\\frac 1{V_{\\dz^k}(x_0)+V(x_0,x)}\\lf[\\frac {\\dz^k}{\\dz^k+d(x_0,x)}\\r]^{\\gz'}.\n\\end{align*}\nThis proves \\eqref{eq-xx2}, and hence finishes the proof of Step 1).\n\n{\\it Step 2) Proof of the convergence of \\eqref{eq:hdrf} in $\\GOO{\\bz,\\gz}$ when $f\\in\\GOO{\\bz,\\gz}$.}\n\nWe first claim that $\\CS_N^{-1}=(I-\\CR_N)^{-1}$ maps $\\GOO{\\bz,\\gz}$ continuously into $\\GOO{\\bz,\\gz}$.\nIndeed, recalling that Remark \\ref{rem:j0N} says that $\\|\\CR_N\\|_{\\GO{x_1,r,\\bz,\\gz}\\to\\GO{x_1,r,\\bz,\\gz}}\\le \\frac 12$, it suffices to show that $\\CR_N$ maps $\\GOO{\\bz,\\gz}$ into $\\GOO{\\bz,\\gz}$.\nSince $\\CR_N=I-\\CS_N$, we only need to show that $\\CS_Nh\\in\\GOO{\\bz,\\gz}$ whenever $h\\in \\GOO{\\bz,\\gz}$.\n\nIndeed, for any $h\\in\\mathring{\\CG}^\\eta_0(\\bz,\\gz)$, there exists\n$\\{h_j\\}_{j=1}^\\infty\\subset\\mathring{\\CG}(\\eta,\\eta)$ such that $\\|h-h_j\\|_{\\mathring{\\CG}(\\bz,\\gz)}\\to 0$ as\n$j\\to\\infty$. Notice that, for any $j\\in\\nn$,\n$$\n\\CS_Nh_j(\\cdot)=\\sum_{k=-\\fz}^\\fz\\sum_{\\az\\in\\CA_{k}}\n\\sum_{m=1}^{N(k,\\az)}\\int_{Q_\\az^{k,m}}Q_k^N(\\cdot,y)\\,d\\mu(y)Q_kh_j\\lf(y_\\az^{k,m}\\r),\n$$\nwhere the series converges in $\\mathring\\CG(\\bz,\\gz)$ due to \\eqref{eq:limb1} and \\eqref{eq:limb2}.\nSince every $\\int_{Q_\\az^{k,m}}Q_k^N(\\cdot,y)\\,d\\mu(y)\\in \\mathring\\CG(\\eta,\\eta)$, from\n\\eqref{eq:limb2aim}, it follows that, for any $k\\in\\zz$, $F_kh_j\\in\\GOO{\\bz,\\gz}$. 
This, together with the\ndefinition of $F_k$ and \\eqref{eq:limb1}, further implies that $\\CS_Nh_j\\in\\GOO{\\bz,\\gz}$.\nMoreover, by the fact that\n$$\n\\|\\CS_Nh-\\CS_Nh_j\\|_{\\GO{\\bz,\\gz}}=\\|(h-h_j)-\\CR_N(h-h_j)\\|_{\\GO{\\bz,\\gz}}\\le2\\|h-h_j\\|_{\\GO{\\bz,\\gz}}\\to 0\n$$\nas $j\\to\\infty$, we obtain $\\CS_Nh\\in\\mathring{\\CG}^\\eta_0(\\bz,\\gz)$. This finishes the proof of the claim.\nMoreover, repeating the proof of Step 2) in the proof of Theorem \\ref{thm:hcrf} with $T_N$ and $R_N$\nreplaced, respectively, by $\\CS_N$ and $\\CR_N$, we find that both $\\CS_N$ and $\\CS_N^{-1}$ are bounded on\n$\\GOO{\\bz,\\gz}$.\n\nNext, we use the above claim to conclude the proof of Step 2). For any $f\\in\\GOO{\\bz,\\gz}$, there exists\n$\\{h_j\\}_{j=1}^\\fz\\subset\\GO{\\eta,\\eta}$ such that $\\|f-h_j\\|_{\\GO{\\bz,\\gz}}\\to 0$ as $j\\to\\fz$. For any\n$k\\in\\zz$, $L,\\ M\\in\\nn$ and $x,\\ y\\in X$, define\n$$\nF_{k,M}(x,y):=\\sum_{\\az\\in\\CA_{k,M}}\\sum_{m=1}^{N(k,\\az)}\\int_{Q_\\az^{k,m}}Q_k^N(x,z)\\,d\\mu(z)\nQ_k\\lf(y_\\az^{k,m},y\\r)\n$$\nand $F_M^{(L)}:=\\sum_{|k|\\le L} F_{k,M}$. Repeating the proof of Lemma \\ref{lem-add4} with $F_k$ replaced by\n$F_{k,M}$ and the sum $\\sum_{\\az\\in\\CA_k}$ by $\\sum_{\\az\\in\\CA_{k,M}}$, we find that $F_{k,M}$\nsatisfies (i) through (iv) of Lemma \\ref{lem-add4}, with the implicit positive constants independent of $M$.\nThis, combined with Lemma \\ref{lem:GkN}, implies that $F_{k,M}$ satisfies the same estimates as $G_{k,N}$ with\nthe implicit positive constants independent of $M$ and the factor $\\dz^{j_0\\eta}$ removed. Therefore, following\nthe proof of Proposition \\ref{prop:GNM}, with $G_N^{(M)}$ replaced by $F_M^{(L)}$, we further conclude that\n$F_M^{(L)}$ is bounded on $\\GO{\\bz,\\gz}$ with its operator norm independent of $M$ and $L$. By this,\n\\eqref{eq:limb} and the boundedness of $\\CS_N$ on $\\GOO{\\bz,\\gz}$, we conclude that\n\\begin{align*}\n\\lf\\|\\CS_Nf-F_M^{(L)}f\\r\\|_{\\GOO{\\bz,\\gz}}&\\le\\lf\\|\\CS_N(f-h_j)\\r\\|_{\\GO{\\bz,\\gz}}+\n\\lf\\|h_j-F_M^{(L)}h_j\\r\\|_{\\GO{\\bz,\\gz}}+\\lf\\|F_M^{(L)}(h_j-f)\\r\\|_{\\GO{\\bz,\\gz}}\\\\\n&\\ls \\lf\\|f-h_j\\r\\|_{\\GO{\\bz,\\gz}}+\\lf\\|h_j-F_M^{(L)}h_j\\r\\|_{\\GO{\\bz,\\gz}}\n\\to \\lf\\|f-h_j\\r\\|_{\\GO{\\bz,\\gz}}\n\\end{align*}\nas $L,\\ M\\to\\fz$. If we let $j\\to\\fz$ and use the boundedness of $\\CS_N^{-1}$ on $\\GOO{\\bz,\\gz}$, then we know\nthat \\eqref{eq:hdrf} converges in $\\GOO{\\bz,\\gz}$. This finishes the proof of Step 2).\n\n{\\it Step 3) Proof of the convergence of \\eqref{eq:hdrf} in $L^p(X)$ when $f\\in L^p(X)$ with any given\n$p\\in(1,\\fz)$.}\n\nFor any $k\\in\\zz$ and $M,\\ L\\in\\nn$, recall that\n$$\nF_{k,M}(x,y)=\\sum_{\\az\\in\\CA_{k,M}}\\sum_{m=1}^{N(k,\\az)}\\int_{Q_\\az^{k,m}}Q_k^N(x,z)\\,d\\mu(z)\nQ_k\\lf(y_\\az^{k,m},y\\r),\\qquad\\forall\\, x,\\ y\\in X\n$$\nand $F_M^{(L)}=\\sum_{|k|\\le L} F_{k,M}$.\nNotice that $F_{k,M}$ satisfies all conditions in Lemma \\ref{lem-add4} with the implicit positive constants\nindependent of $M$ [this has been proved in Step 2)]. 
Following the proof of \\eqref{eq:GL2} with $G_{k,N}$\nreplaced by $F_{k,M}$ and $G_N^{(M)}$ by $F_M^{(L)}$,\nwe know that the operator $F_{M}^{(L)}$ is bounded on $L^2(X)$, and so is $\\CS_N^{-1}F_{M}^{(L)}$.\n\nLet us write\n$$\n\\wz F_{k, M}f:= \\CS_N^{-1}F_{k,M}f=\\sum_{\\az\\in\\CA_{k,M}}\\sum_{m=1}^{N(k,\\az)}\\int_{Q_\\az^{k,m}}\\wz{Q}_k(\\cdot,y)\\,d\\mu(y)\nQ_kf\\lf(y_\\az^{k,m}\\r).\n$$\nConsequently, $ \\sum_{|k|\\le L} \\wz F_{k, M}$ is bounded on $L^2(X)$, with its operator norm independent\nof $M$ and $L$. Notice that each $\\wz F_{k, M}$ is associated to an integral kernel\n$$\n\\wz F_{k, M}(x,y)=\\sum_{\\az\\in\\CA_{k,M}}\\sum_{m=1}^{N(k,\\az)}\\int_{Q_\\az^{k,m}}\\wz{Q}_k(x,z)\\,d\\mu(z)\nQ_k\\lf(y_\\az^{k,m},y\\r),\\qquad\\forall\\, x,\\ y\\in X.\n$$\nApplying the size and the regularity conditions of $\\wz Q_k$, and following the estimations of\n\\eqref{eq:wTLsize}, \\eqref{eq:wTLregx} and \\eqref{eq:wTLregy} in Step 3) of the proof of Theorem \\ref{thm:hcrf},\nwe easily conclude that\n\\begin{align*}\n\\sum_{|k|\\le L}\\lf|\\wz{F}_{k,M}(x,y)\\r|\\ls \\frac1{V(x,y)}\n\\end{align*}\nand that, when $d(x,x')\\le (2A_0)^{-1}d(x,y)$ with $x\\neq y$,\n\\begin{align*}\n\\sum_{|k|\\le L}\\lf|\\wz{F}_{k,M}(x,y)-\\wz{F}_{k,M}(x',y)\\r|\n+\\sum_{|k|\\le L}\\lf|\\wz{F}_{k,M}(y,x)-\\wz{F}_{k,M}(y,x')\\r|\n&\\ls \\lf[\\frac{d(x,x')}{d(x,y)}\\r]^{\\bz_1}\\frac1{V(x,y)},\n\\end{align*}\nwhere $\\bz_1\\in(0,\\bz\\wedge\\gz)$ and the implicit positive constants are independent of $M$, $L$, $x$, $x'$\nand $y$. Thus, $\\sum_{|k|\\le L} \\wz F_{k, M}$ has a standard $\\bz_1$-Calder\\'on-Zygmund kernel.\nFrom the well-known Calder\\'on-Zygmund theory on spaces of homogeneous type in \\cite{CW71}, it follows that\nthe operator $\\sum_{|k|\\le L} \\wz F_{k, M}$ is bounded on $L^p(X)$ for any $p\\in(1,\\infty)$, with its\noperator norm independent of $M$ and $L$.\n\nWith this, by a standard density argument as in Step 3) of the proof of Theorem \\ref{thm:hcrf}, we\nconclude that \\eqref{eq:hdrf} converges in $L^p(X)$ when $f\\in L^p(X)$ with $p\\in(1,\\fz)$. This finishes the\nproof of Step 3) and hence of Theorem \\ref{thm:hdrf}.\n\\end{proof}\n\nWe state some other homogeneous discrete Calder\\'on reproducing formulae,\nwith the proofs omitted due to their similarity.\n\n\\begin{theorem}\\label{thm:hdrf3}\nLet all the notation be as in Theorem \\ref{thm:hdrf}. Then there exist sequences\n$\\{\\overline{Q}_k^{(0)}\\}_{k=-\\fz}^\\fz$, $\\{\\overline{Q}_k^{(1)}\\}_{k=-\\fz}^\\fz$ and\n$\\{\\overline{Q}_k^{(2)}\\}_{k=-\\fz}^\\fz$ of bounded linear operators on $L^2(X)$ such that, for any $f$ in\n$\\GOO{\\bz,\\gz}$ [resp., $L^p(X)$ with any given $p\\in(1,\\fz)$],\n\\begin{align}\\label{eq:hdrf3}\nf(\\cdot)&=\\sum_{k=-\\fz}^\\fz\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\int_{Q_\\az^{k,m}}Q_k(\\cdot,y)\\,d\\mu(y)\n\\overline{Q}_k^{(0)}f\\lf(y_\\az^{k,m}\\r)\\\\\n&=\\sum_{k=-\\fz}^\\fz\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}Q_k\\lf(\\cdot,y_\\az^{k,m}\\r)\n\\int_{Q_\\az^{k,m}}\\overline{Q}^{(1)}_kf(y)\\,d\\mu(y)\\noz\\\\\n&=\\sum_{k=-\\fz}^\\fz\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\mu\\lf(Q_\\az^{k,m}\\r)\nQ_k\\lf(\\cdot,y_\\az^{k,m}\\r)\\overline{Q}^{(2)}_kf\\lf(y_\\az^{k,m}\\r),\\noz\n\\end{align}\nwhere all the series converge in $\\mathring{\\CG}^\\eta_0(\\bz,\\gz)$ [resp., $L^p(X)$ with any given\n$p\\in(1,\\fz)$]. 
Moreover, for any $k\\in\\zz$, the kernel of $\\overline{Q}_k^{(i)}$ with $i\\in\\{0,1,2\\}$ satisfies\nthe size condition \\eqref{eq:atisize}, the regularity condition \\eqref{eq:atisregx}\nonly for the second variable, and also the cancellation condition \\eqref{eq:x00}.\n\\end{theorem}\n\n\\begin{remark}\\label{rem:hdrf}\nIn Theorems \\ref{thm:hdrf} and \\ref{thm:hdrf3}, for any $i\\in\\{0,1,2\\}$, all the estimates satisfied by\n$\\wz{Q}_k^{(i)}$ and\n$\\overline{Q}_k^{(i)}$ are independent of $\\{y_\\az^{k,m}:\\ \\az\\in\\CA_k,\\ m\\in\\{1,\\ldots,N(k,\\az)\\}\\}$, and also\nindependent of $\\bz$ and $\\gz$ whenever $(\\bz,\\gz)$ belongs to a compact subset $K$ of $(0,\\eta)^2$\n(but, in this case, may depend on $K$).\n\\end{remark}\n\nSince we already have Theorems \\ref{thm:hdrf} and \\ref{thm:hdrf3}, a duality argument implies the\nfollowing conclusion, the details being omitted.\n\n\\begin{theorem}\\label{thm:hdrf4}\nLet all the notation be as in Theorems \\ref{thm:hdrf} and \\ref{thm:hdrf3}. Then, for any $f\\in(\\GOO{\\bz,\\gz})'$, both\n\\eqref{eq:hdrf} and \\eqref{eq:hdrf3} hold true in $(\\GOO{\\bz,\\gz})'$.\n\\end{theorem}\n\n\\section{Inhomogeneous Calder\\'{o}n reproducing formulae}\\label{irf}\n\nIn this section, we establish Calder\\'{o}n reproducing formulae by using the newly defined inhomogeneous\napproximation of the identity.\n\\begin{definition}\\label{def:ieti}\nA sequence $\\{Q_k\\}_{k=0}^\\fz$ of bounded linear operators on $L^2(X)$ is called an \\emph{inhomogeneous\napproximation of the identity with exponential decay} (for short, $\\exp$-IATI) if $\\{Q_k\\}_{k=0}^\\fz$\nhas the following properties:\n\\begin{enumerate}\n\\item $\\sum_{k=0}^\\fz Q_k=I$ in $L^2(X)$;\n\\item for any $k\\in\\nn$, $Q_k$ satisfies (ii) through (v) in Definition \\ref{def:eti};\n\\item $Q_0$ satisfies (ii), (iii) and (iv) in Definition \\ref{def:eti} with $k:=0$ but without the term\n$$\n\\exp\\lf\\{-\\nu\\lf[\\max\\lf\\{d\\lf(x,\\CY^0\\r),d\\lf(y,\\CY^0\\r)\\r\\}\\r]^a\\r\\};\n$$\nmoreover, $\\int_X Q_0(x,y)\\,d\\mu(y)=1=\\int_X Q_0(y,x)\\,d\\mu(y)$ for any $x\\in X$.\n\\end{enumerate}\n\\end{definition}\nVia the above $\\exp$-IATIs, we show inhomogeneous continuous and discrete Calder\\'{o}n reproducing formulae in\nSections \\ref{icrf} and \\ref{idrf}, respectively.\n\n\\subsection{Inhomogeneous continuous Calder\\'{o}n reproducing formulae}\\label{icrf}\n\nBy Definition \\ref{def:ieti}, we write\n\\begin{equation}\\label{eq:defTR2}\nI=\\sum_{k=0}^\\fz Q_k=\\sum_{k=0}^\\fz\\lf(\\sum_{l=0}^\\fz Q_l\\r)Q_k\n=\\sum_{k=0}^\\fz Q_k^NQ_k+\\sum_{k=0}^\\fz\\sum_{|l|>N}Q_{k+l}Q_k=:\\FT_N+\\FR_N,\n\\end{equation}\nwhere\n\\begin{align}\\label{eq-z1}\nQ_k^N:=\\begin{cases}\n\\displaystyle \\sum_{l=0}^{k+N}Q_l & \\text{if $k\\in\\{0,\\ldots,N\\}$,}\\\\\n\\displaystyle \\sum_{l=k-N}^{k+N}Q_l & \\text{if $k\\in\\{N+1,N+2,\\ldots\\}$}\\\\\n\\end{cases}\n\\qquad\n\\textup{and}\\qquad Q_k:=0\\;\\textup{if}\\; k\\in\\zz\\setminus\\zz_+.\n\\end{align}\nTherefore, for any $x\\in X$,\n\\begin{equation}\\label{eq:QkNint}\n\\int_X Q_k^N(x,y)\\,d\\mu(y)=\\int_X Q_k^N(y,x)\\,d\\mu(y)=\\begin{cases}\n1 & \\text{if $k\\in\\{0,\\ldots,N\\}$,}\\\\\n0 & \\text{if $k\\in\\{N+1,N+2,\\ldots\\}$.}\n\\end{cases}\n\\end{equation}\nNext, we consider the\nboundedness of $\\FR_N$ on $L^2(X)$ and $\\CG(\\bz,\\gz)$. To this end, we prove\nthe following two lemmas.\n\n\\begin{lemma}\\label{lem:RNi}\nFix $N\\in\\nn$ and $\\eta'\\in(0,\\eta)$. 
Then $\\FR_N$ in \\eqref{eq:defTR2} is a standard\n$\\eta'$-Calder\\'on-Zygmund operator with the kernel satisfying (a) of Theorem \\ref{thm:Kbdd} and\n(d) and (e) of Theorem \\ref{thm:Kibdd} with $s:=\\eta'$, $r_0:=1$,\n$\\sigma\\in(0,\\infty)$, $C_T:=C\\dz^{(\\eta-\\eta')N}$ and\n$\\|\\FR_{N}\\|_{L^2(X)\\to L^2(X)}\\le C\\dz^{\\eta'N}$, where $C$ is a positive constant independent of $N$.\n\\end{lemma}\n\n\\begin{proof}\nIn the definition of $\\FR_N$, when $\\min\\{k+l,k\\}=0$, we have\n$Q_{k+l}Q_k=Q_{l}Q_0$ with $l>N$ or $Q_{k+l}Q_k=Q_0Q_k$ with $k>N$, where we recall that $Q_0$ has no cancellation.\nThus,\n\\begin{align}\\label{eq-z0}\n\\FR_N=\\sum_{\\gfz{k>0,\\,k+l> 0}{|l|>N}}Q_{k+l}Q_k+\\sum_{l>N}Q_{l}Q_0+\\sum_{k>N} Q_0Q_k.\n\\end{align}\nFollowing the proofs of Lemma \\ref{lem:ccrf2} and Proposition \\ref{prop:sizeRN},\nwe deduce that the first term in the right-hand side of \\eqref{eq-z0} is a standard Calder\\'on-Zygmund operator\nwith the kernel satisfying (a) of Theorem \\ref{thm:Kbdd}.\n\nLet $\\sigma\\in(0,\\infty)$ and $\\eta'\\in(0,\\eta)$.\nNotice that, for any $x,\\ y\\in X$ and $k,\\ l\\in\\zz$ satisfying $\\min\\{k+l,k\\}\\ge 0$,\n\\begin{align}\\label{eq-z}\n\\exp\\lf\\{-\\frac c2\\lf[\\frac{d(x,y)}{\\dz^{(k+l)\\wedge k}}\\r]^a\\r\\}\\ls \\lf[\\frac{\\dz^{(k+l)\\wedge k}}{d(x,y)}\\r]^\\sigma\\ls \\lf[\\frac{1}{d(x,y)}\\r]^\\sigma.\n\\end{align}\nWhen $d(x,y)\\ge 1$, by Lemma \\ref{lem:ccrf1}(i), Remark \\ref{rem:andef}(i), \\eqref{eq-z} and Lemma\n\\ref{lem:sum2}, we have\n\\begin{align*}\n\\sum_{\\gfz{k>0,\\,k+l> 0}{|l|>N}}\\lf|Q_{k+l}Q_k(x,y)\\r|\n&\\ls \\sum_{\\gfz{k>0,\\,k+l> 0}{|l|>N}}\\dz^{|l|\\eta}\\frac 1{V_{\\dz^{(k+l)\\wedge k}}(x)}\n\\exp\\lf\\{-c\\lf[\\frac{d(x,y)}{\\dz^{(k+l)\\wedge k}}\\r]^a\\r\\}\\exp\\lf\\{-c\\lf[\\frac{d(x,\\CY^{(k+l)\\wedge\nk})}{\\dz^{(k+l)\\wedge k}}\\r]^a\\r\\}\\\\\n&\\ls \\dz^{N\\eta}\\lf[\\frac{1}{d(x,y)}\\r]^\\sigma \\frac1{V(x,y)}.\n\\end{align*}\nThis shows that the first term in the right-hand side of \\eqref{eq-z0} satisfies (d) of Theorem \\ref{thm:Kibdd}.\n\nWhen $d(x,y)\\ge 1$ and $d(x,x')\\le(2A_0)^{-1}d(x,y)$, then, from Corollary \\ref{cor:mixb}(i), \\eqref{eq-z} and\nLemma \\ref{lem:sum2}, we deduce that\n\\begin{align*}\n&\\sum_{\\gfz{k>0,\\,k+l> 0}{|l|>N}}\n|Q_{k+l}Q_{k}(x,y)-Q_{k+l}Q_k(x',y)|\\\\\n&\\quad\\ls \\sum_{\\gfz{k>0,\\,k+l> 0}{|l|>N}}\\dz^{|l|(\\eta-\\eta')}\\lf[\\frac{d(x,x')}{d(x,y)}\\r]^{\\eta'}\\frac 1{V_{\\dz^{(k+l)\\wedge k}}(x)}\n\\exp\\lf\\{-c\\lf[\\frac{d(x,y)}{\\dz^{(k+l)\\wedge k}}\\r]^a\\r\\}\\exp\\lf\\{-c\\lf[\\frac{d(x,\\CY^{(k+l)\\wedge k})}{\\dz^{(k+l)\\wedge k}}\\r]^a\\r\\}\\\\\n&\\quad\\ls \\dz^{(\\eta-\\eta') N} \\lf[\\frac{1}{d(x,y)}\\r]^\\sigma \\lf[\\frac{d(x,x')}{d(x,y)}\\r]^{\\eta'}\\frac1{V(x,y)}.\n\\end{align*}\nThus, the first term in the right-hand side of \\eqref{eq-z0} satisfies (e) of Theorem \\ref{thm:Kibdd} .\n\nDue to Remark \\ref{rem:mix}, following the proofs of Lemma \\ref{lem:ccrf1} and Corollary \\ref{cor:mixb}, with\n$\\wz Q_jQ_k$ therein replaced by $Q_lQ_0$ and $Q_0Q_k$ with $l,\\ k>N$, we find that the cancellation of $Q_0$\nis not needed (see Remark \\ref{rem:mix}) and hence, all conclusions of Lemma \\ref{lem:ccrf1} and Corollary\n\\ref{cor:mixb} hold true for $Q_lQ_0$ and $Q_0Q_k$, only with the factor $\\exp\\{-c[d(x,\\CY^0)]^a\\}$ therein\nremoved. 
Therefore, by Proposition \\ref{prop:basic}(ii) and Lemma \\ref{lem:HL}, we find that\n$\\|Q_lQ_0\\|_{L^2(X)\\to L^2(X)}\\ls\\dz^{l\\eta}$ and $\\|Q_0Q_k\\|_{L^2(X)\\to L^2(X)}\\ls\\dz^{k\\eta}$, which further\nimply that\n$$\n\\sum_{l>N}\\|Q_lQ_0\\|_{L^2(X)\\to L^2(X)}+\\sum_{k>N}\\|Q_0Q_k\\|_{L^2(X)\\to L^2(X)}\\ls\\dz^{N\\eta}.\n$$\nThen we deduce that the second and the third terms in the right-hand side of \\eqref{eq-z0} are bounded on\n$L^2(X)$.\n\nFor any $x,\\ y\\in X$ with $x\\neq y$, by the proof of Lemma \\ref{lem:ccrf1}(i) and \\eqref{eq:doub}, we obtain\n\\begin{align*}\n\\sum_{l>N}\\lf|Q_{l}Q_0(x,y)\\r|\n&\\ls \\dz^{N\\eta} \\frac {\\exp\\lf\\{-c[{d(x,y)}]^a\\r\\}}{V_{1}(x)}\\ls \\dz^{N\\eta}\n\\min\\lf\\{1,\\ \\lf[\\frac1{d(x,y)}\\r]^\\sigma\\r\\} \\frac1{V(x,y)}.\n\\end{align*}\nMoreover, when $d(x,x')\\le(2A_0)^{-1}d(x,y)$ with $x\\neq y$, by the proof of Corollary \\ref{cor:mixb}(i) and\n\\eqref{eq:doub}, we find that\n\\begin{align*}\n&\\sum_{l>N}\\lf|Q_{l}Q_0(x,y)-Q_{l}Q_0(x',y)\\r|+\\sum_{l>N}\\lf|Q_{l}Q_0(y,x)-Q_{l}Q_0(y,x')\\r|\\\\\n&\\quad \\ls \\dz^{N(\\eta-\\eta')} \\lf[\\frac{d(x,x')}{d(x,y)}\\r]^{\\eta'} \\frac {\\exp\\lf\\{-c[{d(x,y)}]^a\\r\\}}{V_{1}(x)}\\ls \\dz^{N(\\eta-\\eta')}\\min\\lf\\{1,\\ \\lf[\\frac1{d(x,y)}\\r]^\\sigma\\r\\} \\lf[\\frac{d(x,x')}{d(x,y)}\\r]^{\\eta'}\\frac1{V(x,y)}.\n\\end{align*}\nWhen $d(x,x')\\le(2A_0)^{-2}d(x,y)$ and $d(y,y')\\le (2A_0)^{-2}d(x,y)$ with $x\\neq y$, by the proof of\nCorollary \\ref{cor:mixb}(ii) and \\eqref{eq:doub}, we also obtain\n\\begin{align*}\n&\\sum_{l>N} |[Q_lQ_0(x,y)-Q_lQ_0(x',y)]-[Q_lQ_0(x,y')-Q_lQ_0(x',y')]|\\\\\n&\\quad\\ls\\dz^{N(\\eta-\\eta')} \\lf[\\frac{d(x,x')}{d(x,y)}\\r]^{\\eta'} \\lf[\\frac{d(y,y')}{d(x,y)}\\r]^{\\eta'} \\frac {\\exp\\lf\\{-c[{d(x,y)}]^a\\r\\}}{V_{1}(x)}\\ls\\dz^{(\\eta-\\eta') N}\\lf[\\frac{d(x,x')}{d(x,y)}\\r]^{\\eta'}\\lf[\\frac{d(y,y')}{d(x,y)}\\r]^{\\eta'}\n\\frac 1{V(x,y)}.\n\\end{align*}\nSimilarly, $\\sum_{k>N} Q_0Q_k$ also satisfies the last three formulae displayed above. Therefore, we know that\nthe second and the third terms in the right-hand side of \\eqref{eq-z0} are also standard\n$\\eta'$-Calder\\'on-Zygmund operators with the kernel satisfying (a) of Theorem \\ref{thm:Kbdd} and (d) and (e)\nof Theorem \\ref{thm:Kibdd}. 
This finishes the proof of Lemma \\ref{lem:RNi}.\n\\end{proof}\n\nTo overcome the deficiency that $\\FR_N$ does not satisfy (b) and (c) of Theorem \\ref{thm:Kbdd}, we establish\nthe following lemma via an argument similar to that used in the proof of Lemma \\ref{lem:ccrf3}, with $R_N$\nreplaced by $\\FR_N$ and $R_{N,M}$ by $\\FR_{N,M}$ defined below, the details being omitted.\n\n\\begin{lemma}\\label{lem:RNMi}\nLet $\\{Q_k\\}_{k=0}^\\fz$ be an $\\exp$-{\\rm IATI} and, for any $N\\in\\nn$, $\\FR_N$ be defined as in \\eqref{eq:defTR2}.\nFor any $M\\in\\nn$, let\n$$\n\\FR_{N,M}:=\\sum_{k=0}^M\\sum_{N<|l|\\le M} Q_{k+l}Q_k.\n$$\nThen the following assertions hold true:\n\\begin{enumerate}\n\\item all the conclusions of Lemma \\ref{lem:RNi} remain true for $\\FR_{N,M}$ with all the involved positive constants\nindependent of $M$;\n\\item for any $x,\\ y\\in X$, $\\int_X \\FR_{N,M}(x,y')\\,d\\mu(y')=0=\\int_X \\FR_{N,M}(x',y)\\,d\\mu(x')$;\n\\item for any $f\\in L^p(X)$ with $p\\in[1,\\fz]$ and any $x\\in X$,\n$\\FR_{N,M}f(x)=\\int_X \\FR_{N,M}(x,y)f(y)\\,d\\mu(y)$;\n\\item for any $f\\in\\CG(\\bz,\\gz)$ with $\\bz,\\ \\gz\\in(0,\\eta)$, the sequence $\\{\\FR_{N,M}f(x)\\}_{M=1}^\\fz$\nconverges locally uniformly to an element, denoted by $\\wz{\\FR_N}(f)(x)$, where $\\wz{\\FR_N}(f)$ differs from\n$\\FR_Nf$ at most on a set of $\\mu$-measure $0$;\n\\item if we extend $\\wz{\\FR_N}$ to a bounded linear operator on\n$L^2(X)$, still denoted by $\\wz{\\FR_N}$, then, for any $f\\in L^2(X)$, $\\wz{\\FR_N}f=\\FR_Nf$ in $L^2(X)$ and\nalmost everywhere.\n\\end{enumerate}\n\\end{lemma}\n\nApplying Theorems \\ref{thm:Kbdd} and \\ref{thm:Kibdd} to the operator $\\FR_{N,M}$ and then passing to the limit\n$M\\to\\fz$, we obtain the following conclusion, the details being omitted.\n\n\\begin{proposition}\\label{prop-add-x}\nLet $x_1\\in X$, $r\\in(0,\\fz)$, $\\bz,\\ \\gz\\in(0,\\eta)$ and $\\eta'\\in(\\max\\{\\bz,\\gz\\},\\eta)$.\nThen there exists a positive constant $C$, independent of $x_1$, $r$ and $N$, such that\n$$\n\\|\\FR_N\\|_{L^2(X)\\to L^2(X)}\\le C\\dz^{\\eta'N}\n$$\nand\n$$\n\\|\\FR_N\\|_{\\mathring{\\CG}(x_1,r,\\bz,\\gz)\\to\\mathring{\\CG}(x_1,r,\\bz,\\gz)}\n+\\|\\FR_N\\|_{\\CG(x_1,1,\\bz,\\gz)\\to \\CG(x_1,1,\\bz,\\gz)}\\le C\\dz^{(\\eta-\\eta')N}.\n$$\n\\end{proposition}\n\nNow we show the following inhomogeneous continuous Calder\\'on reproducing formulae.\n\n\\begin{theorem}\\label{thm:icrf}\nLet $\\bz,\\ \\gz\\in(0,\\eta)$ and $\\{Q_k\\}_{k=0}^\\fz$ be an $\\exp$-{\\rm IATI}. Then there exist $N\\in\\nn$ and\na sequence $\\{\\wz{Q}_k\\}_{k=0}^\\fz$ of bounded linear operators on $L^2(X)$ such that, for any\n$f\\in\\go{\\bz,\\gz}$ [or $L^p(X)$ with any given $p\\in(1,\\fz)$],\n\\begin{equation}\\label{eq:icrf}\nf=\\sum_{k=0}^\\fz \\wz{Q}_kQ_kf,\n\\end{equation}\nwhere the series converges in $\\go{\\bz,\\gz}$ [or $L^p(X)$ with any given $p\\in(1,\\fz)$].\nMoreover, for any $k\\in\\zz_+$, $\\wz{Q}_k$ satisfies the size condition \\eqref{eq:atisize}, the regularity\ncondition \\eqref{eq:atisregx} only for the first variable and the following integration condition: for any\n$x\\in X$,\n\\begin{equation}\\label{eq:iwzQint}\n\\int_X \\wz{Q}_k(x,y)\\,d\\mu(y)=\\int_X \\wz{Q}_k(y,x)\\,d\\mu(y)=\\begin{cases}\n1& \\text{if\\ \\ $k\\in\\{0,\\ldots,N\\}$,}\\\\\n0& \\text{if\\ \\ $k\\in\\{N+1,N+2,\\ldots\\}$.}\n\\end{cases}\n\\end{equation}\n\\end{theorem}\n\n\\begin{proof}\nWe briefly sketch the main ideas of the proof of this theorem due to its similarity to the proof of Theorem\n\\ref{thm:hcrf}. 
Based on Proposition \\ref{prop-add-x}, we choose $N\\in\\nn$ large enough such that, for any\n$x_1\\in X$ and $r\\in(0,\\fz)$,\n$$\n\\max\\lf\\{\\|\\FR_N\\|_{L^2(X)\\to L^2(X)},\n\\|\\FR_N\\|_{\\mathring{\\CG}(x_1,r,\\bz,\\gz)\\to\\mathring{\\CG}(x_1,r,\\bz,\\gz)},\n\\|\\FR_N\\|_{\\CG(x_1,1,\\bz,\\gz)\\to \\CG(x_1,1,\\bz,\\gz)}\\r\\}\\le 1\/2.\n$$\nConsequently, $\\FT_N:=I-\\FR_N$ is invertible on $L^2(X)$, $\\CG(x_1,1,\\bz,\\gz)$ and $\\GOX{x_1,r,\\bz,\\gz}$, with\nall operator norms at most $2$. For any $k\\in\\zz_+$ and $x,\\ y\\in X$, let\n$$\n\\wz{Q}_k(x,y):=\\FT_N^{-1}\\lf(Q_k^N(\\cdot,y)\\r)(x)\n$$\nwith $Q_k^N$ as in \\eqref{eq-z1}. Then it is easy to show that \\eqref{eq:icrf} holds true in $L^2(X)$.\n\nBy Lemma \\ref{lem:propQkN}, we know that, when $k\\in\\{N+1,N+2,\\ldots\\}$,\n$Q_k^N(\\cdot,y)=\\sum_{l=k-N}^{k+N}Q_{l}Q_k(\\cdot,y)\\in\\GOX{y,\\dz^k,\\bz,\\gz}$\nand, by Lemma \\ref{lem:ccrf1} and Corollary \\ref{cor:mixb} for $Q_lQ_k$, we also know that,\nwhen $k\\in\\{0,\\ldots,N\\}$,\n$Q_k^N(\\cdot,y)=\\sum_{l=0}^{k+N}Q_{l}Q_k(\\cdot,y)\\in\\CG(y,1,\\bz,\\gz)$.\nThe boundedness of $\\FT_N^{-1}$ implies that $\\wz{Q}_k(\\cdot,y)\\in \\GOX{y,\\dz^k,\\bz,\\gz}$\nwhen $k\\in\\{N+1,N+2,\\ldots\\}$,\nand $\\wz{Q}_k(\\cdot,y)\\in \\CG(y,1,\\bz,\\gz)$ when $k\\in\\{0,\\ldots,N\\}$.\nIf $k\\in\\{0,\\ldots,N\\}$, then $\\CG(y,1,\\bz,\\gz)=\\CG(y,\\dz^k,\\bz,\\gz)$ with norms depending only on $N$.\nTherefore, for any $k\\in\\zz_+$, the kernel of\n$\\wz{Q}_k$ satisfies the size condition \\eqref{eq:atisize}, and the regularity\ncondition \\eqref{eq:atisregx} for the first variable.\n\nSimilarly to the proof of \\eqref{eq:wzQcan}, for any $x\\in X$, we have $\\int_X \\wz{Q}_k(x,y)\\,d\\mu(y)=0=\\int_X\n\\wz{Q}_k(y,x)\\,d\\mu(y)$ when $k\\in\\{N+1,N+2,\\ldots\\}$. When $k\\in\\{0,\\ldots,N\\}$, we write\n$$\n\\wz{Q}_k(x,y)=\\FT_N^{-1}\\lf(Q_k^N(\\cdot,y)\\r)(x)=\\sum_{j=0}^\\fz \\lf(\\lf(\\FR_N\\r)^jQ_k^N(\\cdot,y)\\r)(x).\n$$\nBy Lemma \\ref{lem:RNMi} and the dominated convergence theorem, we find that, for any $x\\in X$,\n$\\FR_Ng\\in\\GOX{x_1,1,\\bz,\\gz}$ whenever $g\\in\\CG(x_1,1,\\bz,\\gz)$.\nBy this and the definition of $Q_k^N$, we conclude that, for any $y\\in X$,\n$\\FR_NQ_k^N(\\cdot,y)\\in\\GO{y,1,\\bz,\\gz}$. 
This, together with the boundedness of $\\FR_{N}$ on\n$\\GO{y,1,\\bz,\\gz}$ (see Proposition \\ref{prop-add-x}), implies that, for any $j\\in\\nn$ and $y\\in X$,\n$$\n\\int_X (\\FR_N)^jQ_k^N(x,y)\\,d\\mu(x)=0.\n$$\nOn the other hand, from the Fubini theorem, \\eqref{eq:QkNint} and the cancellation of $\\FR_{N,M}$ [see Lemma\n\\ref{lem:RNMi}(ii)], we deduce that, for any $M\\in\\nn$ and $x\\in X$,\n$$\n\\int_X \\FR_{N,M}Q_k^N(x,y)\\,d\\mu(y)=\\int_X\\FR_{N,M}(x,z)\\int_X Q_k^N(z,y)\\,d\\mu(y)\\,d\\mu(z)=0.\n$$\nThen, repeating the proof of \\eqref{eq:RNjcan} with $R_{N,M}$ replaced by $\\FR_{N,M}$ and $R_N$ by $\\FR_N$, we find\nthat, for any $j\\in\\nn$ and $x\\in X$,\n$$\n\\int_X (\\FR_N)^jQ_k^N(x,y)\\,d\\mu(y)=0.\n$$\nBy these and \\eqref{eq:QkNint}, we conclude that, for any $k\\in\\{0,\\ldots,N\\}$ and $x,\\ y\\in X$,\n$$\n\\int_X \\wz{Q}_k(x,y')\\,d\\mu(y')=\\sum_{j=0}^\\fz\\int_X\\lf(\\FR_N\\r)^jQ_k^N(x,y')\\,d\\mu(y')\n=\\int_XQ_k^N(x,y')\\,d\\mu(y')=1\n$$\nand, similarly,\n$$\n\\int_X \\wz{Q}_k(x',y)\\,d\\mu(x')=\\sum_{j=0}^\\fz\\int_X\\lf(\\FR_N\\r)^jQ_k^N(x',y)\\,d\\mu(x')\n=\\int_XQ_k^N(x',y)\\,d\\mu(x')=1.\n$$\nThis finishes the proof of \\eqref{eq:iwzQint}.\n\nNow we prove that, when $\\bz'\\in(\\bz,\\eta)$, $\\gz'\\in(\\gz,\\eta)$ and $f\\in\\CG(\\bz',\\gz')$,\n\\begin{equation}\\label{eq:ilim}\n\\lim_{L\\to\\fz} \\lf\\|f-\\sum_{k=0}^L\\wz{Q}_kQ_kf\\r\\|_{\\CG(\\bz,\\gz)}=0.\n\\end{equation}\nNotice that $f=\\FT_N^{-1}\\FT_N f$ and $\\FT_N^{-1}$ is bounded on $\\CG(\\bz,\\gz)$. Then it suffices to show that\n\\begin{equation}\\label{eq:isum}\n\\lim_{L\\to\\fz}\\lf\\|\\FT_Nf-\\sum_{k=0}^LQ_k^NQ_kf\\r\\|_{\\CG(\\bz,\\gz)}=\n\\lim_{L\\to\\fz}\\lf\\|\\sum_{k=L+1}^\\fz Q_k^NQ_kf\\r\\|_{\\CG(\\bz,\\gz)}=0.\n\\end{equation}\nTo simplify our discussion, we can assume that $L\\ge N+1$, so that $Q_k^N$ has the cancellation properties.\nThus, the proofs of \\eqref{eq:sumsize} and \\eqref{eq:sumreg} for the case $k\\ge L+1$ also imply\n\\eqref{eq:isum}, and hence \\eqref{eq:ilim}.\n\nNow we prove the convergence of \\eqref{eq:icrf} in $\\go{\\bz,\\gz}$. For any $L\\in\\nn$, define\n$$\n\\FT_{N,L}:=\\sum_{k=0}^L Q_k^NQ_k=\\sum_{k=0}^L\\sum_{|l|\\le N}Q_{k+l}Q_k.\n$$\nThen, repeating the proof of Lemma\n\\ref{lem:RNMi} with $\\FR_{N,M}$ replaced by $\\FT_{N,L}$ and the sum $\\sum_{N<|l|\\le M}$ replaced by\n$\\sum_{|l|\\le N}$, we find that $\\FT_{N,L}$ satisfies all the assumptions of Theorem \\ref{thm:Kibdd} with\n$c_0:=1$, $r:=1$ and $C_T$ a positive constant independent of $L$. Therefore, by Theorem \\ref{thm:Kibdd}, we\nconclude that $\\FT_{N,L}$ is bounded on $\\CG(\\bz,\\gz)$ with its operator norm independent of $L$. Thus, by the\ndensity argument used in Step 2) of the proof of Theorem \\ref{thm:hcrf}, we deduce that $\\FT_N$ is\nbounded on $\\go{\\bz,\\gz}$ and hence the series in \\eqref{eq:isum} converges in $\\go{\\bz,\\gz}$. Moreover, since\n$I=\\FT_N+\\FR_N$, it then follows that $\\FR_N$ is bounded on $\\go{\\bz,\\gz}$ with\n$$\n\\|\\FR_N\\|_{\\go{\\bz,\\gz}\\to\\go{\\bz,\\gz}}\\le \\|\\FR_N\\|_{\\CG(\\bz,\\gz)\\to\\CG(\\bz,\\gz)}\\le \\frac 12.\n$$\nTherefore, $\\FT_N^{-1}$ is bounded on $\\go{\\bz,\\gz}$, which, together with the convergence of\n\\eqref{eq:isum} in $\\go{\\bz,\\gz}$, implies that \\eqref{eq:icrf} converges in $\\go{\\bz,\\gz}$.\n\nNext we prove that \\eqref{eq:icrf} holds true in $L^p(X)$ with any given $p\\in(1,\\fz)$. 
For any $L\\in\\nn$, let\n$\\wz{\\FT}_L:=\\sum_{k=0}^L \\wz{Q}_kQ_k$.\nFollowing the arguments used in Step 3) of the proof of Theorem \\ref{thm:hcrf},\nwith $T_L$ replaced by $\\wz\\FT_L$, we deduce that, when $L\\in\\{N+1,N+2,\\ldots\\}$,\n$\\sum_{k=N+1}^L \\wz{Q}_kQ_k$ is a $\\bz_1$-Calder\\'on-Zygmund operator for some $\\bz_1\\in(0,\\bz\\wedge\\gz)$,\nso that it is bounded on $L^p(X)$. Meanwhile, by Proposition \\ref{prop:basic}(iii),\nwe know that $\\sum_{k=0}^{N} \\wz{Q}_kQ_k$ is also bounded on $L^p(X)$. Altogether, we have\n$\\|\\wz{\\FT}_L\\|_{L^p(X)\\to L^p(X)}\\ls 1$,\nwhere the implicit positive constant is independent of $L$.\nFrom this and a density argument as that used in Step 3) of the proof of Theorem \\ref{thm:hcrf}, we obtain the\nconvergence of \\eqref{eq:icrf} in $L^p(X)$. This finishes the proof of Theorem \\ref{thm:icrf}.\n\\end{proof}\n\nSimilarly, we also have the following two theorems, the details being omitted.\n\\begin{theorem}\\label{thm:icrf2}\nSuppose that $\\bz,\\ \\gz\\in(0,\\eta)$ and $\\{Q_k\\}_{k=0}^\\fz$ is an $\\exp$-{\\rm IATI}.\nThen there exist $N\\in\\nn$ and a sequence $\\{\\overline{Q}_k\\}_{k=0}^\\fz$ of bounded linear operators on\n$L^2(X)$ such that, for any $f\\in\\go{\\bz,\\gz}$ [resp., $L^p(X)$ with any given $p\\in(1,\\fz)$],\n\\begin{equation}\\label{eq:icrf2}\nf=\\sum_{k=0}^\\fz Q_k\\overline{Q}_kf,\n\\end{equation}\nwhere the series converges in $\\go{\\bz,\\gz}$ [resp., $L^p(X)$ with any given $p\\in(1,\\fz)$]. Moreover, for any\n$k\\in\\zz_+$, the kernel of $\\overline{Q}_k$ satisfies the size condition \\eqref{eq:atisize}, the regularity\ncondition \\eqref{eq:atisregx} only for the second variable and\n\\eqref{eq:iwzQint}.\n\\end{theorem}\n\n\\begin{theorem}\\label{thm:icrf3}\nLet all the notation be as in Theorems \\ref{thm:icrf} and \\ref{thm:icrf2}. Then, for any $f\\in(\\go{\\bz,\\gz})'$,\n\\eqref{eq:icrf} and \\eqref{eq:icrf2} hold true in $(\\go{\\bz,\\gz})'$.\n\\end{theorem}\n\n\\begin{remark}\\label{rem:icrf}\nSimilarly to Remark \\ref{rem:r2}, we can show that, if $K$ is a compact subset of $(0,\\eta)^2$, then\nTheorems \\ref{thm:icrf}, \\ref{thm:icrf2} and \\ref{thm:icrf3} hold true with the implicit positive constants\nindependent of $(\\bz,\\gz)\\in K$, but depending on $K$.\n\\end{remark}\n\n\\subsection{Inhomogeneous discrete Calder\\'{o}n reproducing formulae}\\label{idrf}\n\nIn this subsection, we consider the inhomogeneous discrete Calder\\'{o}n reproducing formulae.\nWe use the same notation as in Section \\ref{hdrf}. Here we omit the details\nbecause the proofs are combinations of those in Sections \\ref{icrf} and \\ref{hdrf}.\n\n\\begin{theorem}\\label{thm:idrf}\nLet $\\{Q_k\\}_{k=0}^\\fz$ be an $\\exp$-{\\rm IATI} and $\\bz,\\ \\gz\\in(0,\\eta)$. Assume that every $y_\\az^{k,m}$\nis an arbitrary point in $Q_\\az^{k,m}$. 
Then there exist $N\\in\\nn$ and sequences $\\{\\wz{Q}^{(i)}_k\\}_{k=0}^\\fz$, $i\\in\\{1,2,3\\}$, of bounded\nlinear operators on $L^2(X)$ such that, for any $f\\in\\go{\\bz,\\gz}$ [resp., $L^p(X)$ with any given\n$p\\in(1,\\fz)$],\n\\begin{align}\\label{eq:idrf}\nf(\\cdot)&=\\sum_{k=0}^N\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\int_{Q_\\az^{k,m}}\\wz{Q}^{(1)}_k(\\cdot,y)\\,d\\mu(y)\nQ_{\\az,1}^{k,m}(f)\\\\\n&\\quad+\\sum_{k=N+1}^\\fz\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\int_{Q_\\az^{k,m}}\\wz{Q}^{(1)}_k(\\cdot,y)\n\\,d\\mu(y)Q_kf\\lf(y_\\az^{k,m}\\r)\\noz\\\\\n&=\\sum_{\\az\\in\\CA_0}\\sum_{m=1}^{N(0,\\az)}\\int_{Q_\\az^{0,m}}\\wz{Q}^{(2)}_0(\\cdot,y)\\,d\\mu(y)Q_{\\az,1}^{0,m}(f)\n\\noz\\\\\n&\\quad+\\sum_{k=1}^\\fz\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\mu\\lf(Q_\\az^{k,m}\\r)\n\\wz{Q}^{(2)}_k\\lf(\\cdot,y_\\az^{k,m}\\r)Q_{\\az,1}^{k,m}(f)\\noz\\\\\n&=\\sum_{\\az\\in\\CA_0}\\sum_{m=1}^{N(0,\\az)}\\int_{Q_\\az^{0,m}}\\wz{Q}^{(3)}_0(\\cdot,y)\\,d\\mu(y)Q_{\\az,1}^{0,m}(f)\n\\noz\\\\\n&\\quad+\\sum_{k=1}^N\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\mu\\lf(Q_\\az^{k,m}\\r)\n\\wz{Q}^{(3)}_k\\lf(\\cdot,y_\\az^{k,m}\\r)Q_{\\az,1}^{k,m}(f)\\noz\\\\\n&\\quad+\\sum_{k=N+1}^\\fz\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\mu\\lf(Q_\\az^{k,m}\\r)\n\\wz{Q}^{(3)}_k\\lf(\\cdot,y_\\az^{k,m}\\r)Q_kf\\lf(y_\\az^{k,m}\\r),\\noz\n\\end{align}\nwhere the series converge in $\\go{\\bz,\\gz}$ [resp., $L^p(X)$ with any given $p\\in(1,\\fz)$] and, for any\n$z\\in X$,\n\\begin{equation}\\label{eq:defQa1kn}\nQ_{\\az,1}^{k,m}(z):=\\frac 1{\\mu(Q_{\\az}^{k,m})}\\int_{Q_\\az^{k,m}}Q_k(u,z)\\,d\\mu(u).\n\\end{equation}\nMoreover, for any $i\\in\\{1,2,3\\}$ and $k\\in\\zz_+$, $\\wz{Q}^{(i)}_k$ has the same properties as\n$\\wz Q_k$ in Theorem \\ref{thm:icrf}.\n\\end{theorem}\n\n\\begin{remark}\\label{rem:pridrf}\nWe only explain the decomposition of $I$ used to derive the first equality in \\eqref{eq:idrf}. For any\n$f\\in L^2(X)$ and $x\\in X$, by \\eqref{eq:defTR2}, we write\n\\begin{align*}\nf(x)&=\\sum_{k=0}^\\fz Q_k^NQ_kf(x)+\\FR_Nf(x)\\\\\n&=\\sum_{k=0}^{N}\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\int_{Q_{\\az}^{k,m}}Q_k^N(x,y)Q_kf(y)\\,d\\mu(y)\n+\\sum_{k=N+1}^{\\fz}\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\int_{Q_{\\az}^{k,m}}Q_k^N(x,y)Q_kf(y)\\,d\\mu(y)\\\\\n&\\quad+\\FR_Nf(x)\\\\\n&=\\lf[\\sum_{k=0}^{N}\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\int_{Q_{\\az}^{k,m}}Q_k^N(x,y)\\,d\\mu(y)\nQ_{\\az,1}^{k,m}(f)\\r.\\\\\n&\\quad+\\lf.\\sum_{k=N+1}^\\fz\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\int_{Q_{\\az}^{k,m}}Q_k^N(x,y)\\,d\\mu(y)\nQ_kf\\lf(y_{\\az}^{k,m}\\r)\\r]+\\FR_Nf(x)\\\\\n&\\quad+\\sum_{k=0}^{N}\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\frac 1{\\mu(Q_\\az^{k,m})}\n\\int_{Q_{\\az}^{k,m}}Q_k^N(x,y)\\int_{Q_{\\az}^{k,m}}[Q_kf(y)-Q_kf(u)]\\,d\\mu(u)\\,d\\mu(y)\\\\\n&\\quad+\\sum_{k=N+1}^\\fz\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\int_{Q_{\\az}^{k,m}}Q_k^N(x,y)\n\\lf[Q_kf(y)-Q_kf\\lf(y_{\\az}^{k,m}\\r)\\r]\\,d\\mu(y)\\\\\n&=:\\mathscr{S}_Nf(x)+\\FR_Nf(x)+\\mathscr{R}_N^1f(x)+\\mathscr{R}_N^2f(x).\n\\end{align*}\nThen, we can use the methods in Sections \\ref{hdrf} and \\ref{icrf} to consider the boundedness of\nthe remainders $\\FR_N$, $\\mathscr{R}_N^1$ and $\\mathscr{R}_N^2$ on both $L^2(X)$ and $\\CG(\\bz,\\gz)$, the\ndetails being omitted.\n\\end{remark}\n\n\\begin{theorem}\\label{thm:idrf2}\nLet $\\{Q_k\\}_{k=0}^\\fz$ be an $\\exp$-{\\rm IATI} and $\\bz,\\ \\gz\\in(0,\\eta)$. Assume that every $y_\\az^{k,m}$\nis an arbitrary point in $Q_\\az^{k,m}$. 
Then there exist $N\\in\\nn$ and sequences $\\{\\overline{Q}^{(i)}_k\\}_{k=0}^\\fz$,\n$i\\in\\{1,2,3\\}$, of bounded linear operators on $L^2(X)$ such that, for any $f\\in\\go{\\bz,\\gz}$ [resp., $L^p(X)$\nwith any given $p\\in(1,\\fz)$],\n\\begin{align}\\label{eq:idrf2}\nf(\\cdot)&=\\sum_{k=0}^N\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\int_{Q_\\az^{k,m}}Q_k(\\cdot,y)\\,d\\mu(y)\n\\overline{Q}_{\\az,1}^{(1),k,m}(f)\\\\\n&\\quad+\\sum_{k=N+1}^\\fz\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\int_{Q_\\az^{k,m}}Q_k(\\cdot,y)\\,d\\mu(y)\n\\overline{Q}^{(1)}_kf\\lf(y_\\az^{k,m}\\r)\\noz\\\\\n&=\\sum_{\\az\\in\\CA_0}\\sum_{m=1}^{N(0,\\az)}\\int_{Q_\\az^{0,m}}Q_0(\\cdot,y)\\,d\\mu(y)\n\\overline{Q}_{\\az,1}^{(2),0,m}(f)\\noz\\\\\n&\\quad+\\sum_{k=1}^\\fz\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\mu\\lf(Q_\\az^{k,m}\\r)\nQ_k\\lf(\\cdot,y_\\az^{k,m}\\r)\\overline{Q}_{\\az,1}^{(2),k,m}(f)\\noz\\\\\n&=\\sum_{\\az\\in\\CA_0}\\sum_{m=1}^{N(0,\\az)}\\int_{Q_\\az^{0,m}}Q_0(\\cdot,y)\\,d\\mu(y)\\overline{Q}_{\\az,1}^{(3),0,m}\n(f)\\noz\\\\\n&\\quad+\\sum_{k=1}^N\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\mu\\lf(Q_\\az^{k,m}\\r)\nQ_k\\lf(\\cdot,y_\\az^{k,m}\\r)\\overline{Q}_{\\az,1}^{(3),k,m}(f)\\noz\\\\\n&\\quad+\\sum_{k=N+1}^\\fz\\sum_{\\az\\in\\CA_k}\\sum_{m=1}^{N(k,\\az)}\\mu\\lf(Q_\\az^{k,m}\\r)\n\\overline{Q}^{(3)}_k\\lf(\\cdot,y_\\az^{k,m}\\r)Q_kf\\lf(y_\\az^{k,m}\\r),\\noz\n\\end{align}\nwhere the series converge in $\\go{\\bz,\\gz}$ [resp., $L^p(X)$ with any given $p\\in(1,\\fz)$] and, for any\n$i\\in\\{1,2,3\\}$, $\\overline{Q}_{\\az,1}^{(i),k,m}$ is defined as in \\eqref{eq:defQa1kn}, with $Q_k$ replaced by\n$\\overline{Q}^{(i)}_k$. Moreover, for any\n$i\\in\\{1,2,3\\}$ and $k\\in\\zz_+$, $\\overline{Q}^{(i)}_k$ has the same properties as $\\overline{Q}_k$ in Theorem\n\\ref{thm:icrf2}.\n\\end{theorem}\n\\begin{theorem}\\label{thm:idrf3}\nLet all the notation be as in Theorems \\ref{thm:idrf} and \\ref{thm:idrf2}. Then, for any $f\\in(\\go{\\bz,\\gz})'$,\nall equalities in \\eqref{eq:idrf} and \\eqref{eq:idrf2} hold true in $(\\go{\\bz,\\gz})'$.\n\\end{theorem}\n\n\\begin{remark}\\label{rem:idrf}\nSimilarly to Remark \\ref{rem:r2}, if $K$ is a compact subset of $(0,\\eta)^2$, then\nTheorems \\ref{thm:idrf} through \\ref{thm:idrf3} hold true with the implicit positive constants\nindependent of $(\\bz,\\gz)\\in K$, but depending on $K$.\n\\end{remark}\n
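We end this section with a direct verification of the normalization \\eqref{eq:QkNint}; this is only a\nsketch, in which we use the facts that $\\int_X Q_0(x,y)\\,d\\mu(y)=1$ and that, for any $l\\in\\nn$,\n$\\int_X Q_l(x,y)\\,d\\mu(y)=0$ [the cancellation inherited from Definition \\ref{def:eti} via Definition\n\\ref{def:ieti}(ii)]. Indeed, by \\eqref{eq-z1}, for any $x\\in X$,\n$$\n\\int_X Q_k^N(x,y)\\,d\\mu(y)=\\begin{cases}\n\\displaystyle\\sum_{l=0}^{k+N}\\int_X Q_l(x,y)\\,d\\mu(y)=1+0+\\cdots+0=1 & \\text{if $k\\in\\{0,\\ldots,N\\}$,}\\\\\n\\displaystyle\\sum_{l=k-N}^{k+N}\\int_X Q_l(x,y)\\,d\\mu(y)=0 & \\text{if $k\\in\\{N+1,N+2,\\ldots\\}$,}\n\\end{cases}\n$$\nbecause the index $l=0$ appears in the first sum but not in the second; the same computation applies to\n$\\int_X Q_k^N(y,x)\\,d\\mu(y)$.\n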