\\section{Introduction}\nStochastic hybrid systems are dynamical systems \ncomprising continuous and discrete dynamics interleaved with probabilistic noise and stochastic events \\cite{BL06a}. \nBecause of their versatility and generality, methods for analysis and design of stochastic hybrid systems \ncarry great promise in \nmany safety-critical applications \\cite{BL06a}. Examples of such applications include power networks, automotive, finance, \nair traffic control, biology, telecommunications, and embedded systems. \nStochastic \\emph{switched} systems are a relevant subclass of stochastic hybrid systems. \nThey consist of a finite (discrete) set of modes of operation, \neach of which is associated with continuous probabilistic dynamics; \nfurther, their discrete dynamics, in the form of mode changes, are governed by a non-probabilistic control (switching) signal. \n\nIt is known \\cite{liberzon} that switched systems can be endowed with global behaviors that are not characteristic of the behavior of any of their modes: \nfor instance, global instability may arise from a suitable choice over time of the discrete switches between a set of stable modes. \nThis is but one of the many features that make switched systems theoretically interesting. \nWith focus on \\emph{stochastic} switched systems, \ndespite recent progress on basic dynamical analysis focused on stability properties \\cite{debasish}, \nthere are no notable results in the literature targeting more complex objectives, \nsuch as those dealing with verification or (controller) synthesis for logical specifications. Examples of such specifications include linear temporal logic formulae and automata on infinite strings; specifications of this kind are not amenable to classical approaches for stochastic processes. \n\nA promising direction to investigate these general properties is the use of \\emph{symbolic models}. \nSymbolic models are abstract descriptions of the original dynamics, \nwhere each abstract state (or symbol) corresponds to an aggregate of states in the concrete system. \nWhen a finite symbolic model \nis obtained and is formally put in relationship with the original system, \none can leverage automata-theoretic techniques for controller synthesis over the finite model \\cite{MalerPnueliSifakis95} \nto automatically synthesize controllers for the original system. \nTowards this goal, \na relevant approach is the construction of finite-state symbolic models that are \\emph{bisimilar} to the original system. \nUnfortunately, the class of continuous (-time and -space) dynamical systems admitting exactly bisimilar finite-state symbolic models is quite restrictive \\cite{AHLP00,LPS00} and in particular it covers mostly non-probabilistic models. The results in \\cite{manuela} provide a notion of exact stochastic bisimulation for a class of stochastic hybrid systems; however, \\cite{manuela} provides neither an abstraction algorithm nor a treatment of the synthesis problem. \nTherefore, rather than requiring exact bisimilarity, \none can resort to \\emph{approximate bisimulation} relations \\cite{girard}, \nwhich introduce a metric between the trajectories of the abstract and the concrete models, \nand require boundedness in time of this distance.
\n\nThe construction of approximately bisimilar symbolic models has been extensively studied for \nnon-probabilistic control systems, possibly affected by disturbances \n\\cite{majid4,pola,pola1} and references therein, \nas well as for \nnon-probabilistic switched systems \\cite{girard2}. \nHowever, stochastic systems, particularly when endowed with hybrid dynamics, \nhave been explored only scarcely. \nWith focus on these models, \na few existing results deal with abstractions of discrete-time stochastic processes \\cite{AAPLS07,abate1,azuma}. \nResults for continuous-time models cover probabilistic rectangular hybrid automata \\cite{sproston} \nand stochastic dynamical systems under some contractivity assumptions \\cite{abate}. \nFurther, the results in \\cite{julius1} only {\\em check} the relationship between an uncountable abstraction and a given class of stochastic hybrid systems via the notion of stochastic (bi)simulation function. However, these results do not provide any {\\em construction} of approximations, nor do they deal with {\\em finite} abstractions, \nand moreover appear to be computationally tractable only in the case where no input is present. \nThe recent results in \\cite{majid8} and \\cite{majid11} investigate the construction of finite bisimilar abstractions for continuous-time stochastic control systems, without any hybrid dynamics, and randomly switched stochastic systems, respectively, where the discrete dynamics of the latter are governed by a random, uncontrolled signal. Finally, the recently proposed techniques in \\cite{majid10} improve\nthe ones in \\cite{majid8} by not requiring state-space discretization but only input set discretization. \nIn summary, \nto the best of our knowledge there is no comprehensive work on the automatic construction of finite bisimilar abstractions for continuous-time stochastic switched systems in which the discrete dynamics are governed by a non-probabilistic control signal. \n\nThe main contributions of this work consist in showing the existence and the construction of approximately bisimilar symbolic models for incrementally stable stochastic switched systems using two different techniques: one requires a discretization of the state space, whereas the other does not. Note that all the techniques provided in \\cite{majid4,pola,pola1,girard2,AAPLS07,abate1,azuma,sproston,abate,majid8,majid11} are based only on the discretization of state sets. Therefore, they suffer severely from \\emph{the curse of dimensionality} due to gridding those sets, which is especially problematic for models with high-dimensional state sets. We also provide a simple criterion by which one can choose, for a given stochastic switched system, the more suitable of the two proposed approaches (based on the size of the resulting abstraction). Another advantage of the second proposed approach is that it allows one to construct symbolic models with probabilistic output values, possibly resulting in less conservative symbolic abstractions than those obtained with the first proposed approach and with the ones in \\cite{majid8,majid11}, which allow for non-probabilistic output values only. Furthermore, the second proposed approach allows one to construct symbolic models for any given precision $\\varepsilon$ and any given sampling time, whereas the first proposed approach and the ones in \\cite{majid8,majid11} may not be applicable for a given sampling time.
\n\n\nIncremental stability is a property on which the main proposed results of this paper rely. This type of stability requires uniform asymptotic stability of every trajectory, rather than stability of an equilibrium point or of a particular time-varying trajectory. \nIn this work, we characterize incremental stability in terms of a so-called common Lyapunov function or, alternatively, of multiple (mode-dependent) Lyapunov functions. \nThe main results are illustrated by synthesizing controllers (switching signals) for two examples. \nFirst, we consider a room temperature control problem (admitting a common Lyapunov function) for a six-room building. \nWe synthesize a switching signal regulating the temperature toward a desired level; this problem is not tractable using the first proposed technique. \nThe second example illustrates the use of multiple Lyapunov functions (one per mode) using the first proposed approach. A preliminary investigation on the construction of bisimilar symbolic models for stochastic switched systems using the first proposed approach (requiring state space discretization) appeared in \\cite{majid9}. In this paper we present a detailed and mature description of the results presented in \\cite{majid9}, including proofs, and we propose a second approach that does not require any space discretization. \n\\vspace{-0.2cm}\n\n\\section{Stochastic Switched Systems}\n\\subsection{Notation} \nThe symbols ${\\mathbb{N}}$, ${\\mathbb{N}}_0$, ${\\mathbb Z}$, ${\\mathbb{R}}$, ${\\mathbb{R}}^+$, and ${\\mathbb{R}}_0^+$ denote the set of natural, nonnegative integer, integer, real, positive real, and nonnegative real numbers, respectively. The symbols $I_n$, $0_n$, and $0_{n\\times{m}}$ denote the identity matrix, zero vector, and zero matrix in ${\\mathbb{R}}^{n\\times{n}}$, ${\\mathbb{R}}^n$, and ${\\mathbb{R}}^{n\\times{m}}$, respectively. Given a set $A$, define $A^1=A$ and $A^{n+1}=A\\times A^n$ for any $n\\in{\\mathbb{N}}$. Given a vector \\mbox{$x\\in\\mathbb{R}^{n}$}, we denote by $x_{i}$ the $i$--th element of $x$, and by $\\Vert x\\Vert$ the infinity norm of $x$, namely, \\mbox{$\\Vert x\\Vert=\\max\\{|x_1|,|x_2|,\\ldots,|x_n|\\}$}, where $|x_i|$ denotes the absolute value of $x_i$. \nGiven a matrix $P=\\{p_{ij}\\}\\in{\\mathbb{R}}^{n\\times{n}}$, we denote by $\\text{Tr}(P)=\\sum_{i=1}^np_{ii}$ the trace of $P$.\nThe \\emph{diagonal set} $\\Delta\\subset{\\mathbb{R}}^n\\times{\\mathbb{R}}^n$ is defined as: $\\Delta=\\left\\{(x,x) \\mid x\\in {\\mathbb{R}}^n\\right\\}$.\n\nThe closed ball centered at $x\\in{\\mathbb{R}}^{n}$ with radius $\\varepsilon$ is defined by \\mbox{$\\mathcal{B}_{\\varepsilon}(x)=\\{y\\in{\\mathbb{R}}^{n}\\,|\\,\\Vert x-y\\Vert\\leq\\varepsilon\\}$}. A set $B\\subseteq {\\mathbb{R}}^n$ is called a \n{\\em box} if $B = \\prod_{i=1}^n [c_i, d_i]$, where $c_i,d_i\\in {\\mathbb{R}}$ with $c_i < d_i$ for each $i\\in\\set{1,\\ldots,n}$.\nThe {\\em span} of a box $B$ is defined as $\\mathit{span}(B) = \\min\\set{ | d_i - c_i| \\mid i=1,\\ldots,n}$. By defining $[{\\mathbb{R}}^n]_{\\eta}=\\left\\{a\\in {\\mathbb{R}}^n\\mid a_{i}=k_{i}\\eta,k_{i}\\in\\mathbb{Z},i=1,\\ldots,n\\right\\}$, the \nset \\mbox{$\\bigcup_{p\\in[{\\mathbb{R}}^n]_{\\eta}}\\mathcal{B}_{\\lambda}(p)$} is a \ncountable covering of ${\\mathbb{R}}^n$ for any $\\eta\\in{\\mathbb{R}}^+$ and $\\lambda\\geq\\eta\/2$. \nFor a box $B\\subseteq{\\mathbb{R}}^n$ and $\\eta \\leq \\mathit{span}(B)$,\ndefine the $\\eta$-approximation $[B]_\\eta = [{\\mathbb{R}}^n]_{\\eta}\\cap{B}$.
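As a quick numerical illustration of this notation, the following Python sketch (an informal aid of ours, with hypothetical helper names, not part of the formal development) computes $[B]_\\eta$ for a box $B$ and tests membership in the finite covering $\\bigcup_{p\\in[B]_{\\eta}}\\mathcal{B}_{\\lambda}(p)$ discussed next:
\\begin{verbatim}
import itertools, math

def box_approximation(box, eta):
    # [B]_eta: lattice points (k_1*eta, ..., k_n*eta), k_i integer, lying in B
    axes = []
    for (c, d) in box:
        k_min, k_max = math.ceil(c / eta), math.floor(d / eta)
        axes.append([k * eta for k in range(k_min, k_max + 1)])
    return list(itertools.product(*axes))

def in_some_ball(x, points, lam):
    # is x in the union of infinity-norm balls B_lam(p) over p in points?
    return any(max(abs(xi - pi) for xi, pi in zip(x, p)) <= lam
               for p in points)

B = [(0.0, 1.0), (0.0, 2.0)]            # a box in R^2 with span(B) = 1
grid = box_approximation(B, eta=0.5)    # [B]_{0.5}
print(in_some_ball((0.3, 1.7), grid, lam=0.5))   # True: the balls cover B
\\end{verbatim}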
\nNote that $[B]_{\\eta}\\neq\\varnothing$ for any $\\eta\\leq\\mathit{span}(B)$. \nGeometrically, for any $\\eta\\in{\\mathbb{R}^+}$ with $\\eta\\leq\\mathit{span}(B)$ and $\\lambda\\geq\\eta$, the collection of sets \n\\mbox{$\\{\\mathcal{B}_{\\lambda}(p)\\}_{p\\in [B]_{\\eta}}$}\nis a finite covering of $B$, i.e., \\mbox{$B\\subseteq\\bigcup_{p\\in[B]_{\\eta}}\\mathcal{B}_{\\lambda}(p)$}. \nWe extend the notions of $\\mathit{span}$ and of {\\em approximation} to finite unions of boxes as follows.\nLet $A = \\bigcup_{j=1}^M A_j$, where each $A_j$ is a box.\nDefine $\\mathit{span}(A) = \\min\\set{\\mathit{span}(A_j)\\mid j=1,\\ldots,M}$,\nand for any $\\eta \\leq \\mathit{span}(A)$, define $[A]_\\eta = \\bigcup_{j=1}^M [A_j]_\\eta$.\n\nA continuous function \\mbox{$\\gamma:\\mathbb{R}_{0}^{+}\\rightarrow\\mathbb{R}_{0}^{+}$} is said to belong to class $\\mathcal{K}$ if it is strictly increasing and \\mbox{$\\gamma(0)=0$}; $\\gamma$ is said to belong to class $\\mathcal{K}_{\\infty}$ if \\mbox{$\\gamma\\in\\mathcal{K}$} and $\\gamma(r)\\rightarrow\\infty$ as $r\\rightarrow\\infty$. A continuous function \\mbox{$\\beta:\\mathbb{R}_{0}^{+}\\times\\mathbb{R}_{0}^{+}\\rightarrow\\mathbb{R}_{0}^{+}$} is said to belong to class $\\mathcal{KL}$ if, for each fixed $s$, the map $\\beta(r,s)$ belongs to class $\\mathcal{K}$ with respect to $r$ and, for each fixed nonzero $r$, the map $\\beta(r,s)$ is decreasing with respect to $s$ and $\\beta(r,s)\\rightarrow 0$ as \\mbox{$s\\rightarrow\\infty$}. We identify a relation \\mbox{$R\\subseteq A\\times B$} with the map \\mbox{$R:A \\rightarrow 2^{B}$} defined by $b\\in R(a)$ iff \\mbox{$(a,b)\\in R$}. Given a relation \\mbox{$R\\subseteq A\\times B$}, $R^{-1}$ denotes the inverse relation defined by \\mbox{$R^{-1}=\\{(b,a)\\in B\\times A:(a,b)\\in R\\}$}.\n\n\\subsection{Stochastic switched systems}\\label{sss}\nLet $(\\Omega, \\mathcal{F}, \\mathds{P})$ be a probability space endowed with a filtration $\\mathds{F} = (\\mathcal{F}_t)_{t\\geq 0}$ satisfying the usual conditions of completeness and right-continuity \\cite[p.\\ 48]{ref:KarShr-91}. Let $(W_t)_{t \\ge 0}$ be a $\\widehat{q}$-dimensional $\\mathds{F}$-adapted Brownian motion \\cite{oksendal}. The class of stochastic switched systems considered in this paper is formalized as follows. \n\n\\begin{definition}\n\\label{Def_control_sys}A stochastic switched system $\\Sigma$ is a tuple $\\Sigma=(\\mathbb{R}^{n},\\mathsf{P},\\mathcal{P},F,G)$, where\n\\begin{itemize}\n\\item $\\mathbb{R}^{n}$ is the state space;\n\n\\item $\\mathsf{P}=\\left\\{1,\\ldots,m\\right\\}$ is a finite set of modes;\n\n\\item $\\mathcal{P}$ is a subset of the set of piecewise constant c\\`adl\\`ag (i.e., right-continuous and with left limits) functions from ${\\mathbb{R}}_0^+$ to $\\mathsf{P}$ with a finite number of discontinuities on every bounded interval in ${\\mathbb{R}}_0^+$ (no Zeno behaviour); \n\n\\item $F=\\left\\{f_1,\\ldots,f_m\\right\\}$ is such that for any $p\\in\\mathsf{P}$, $f_p:{\\mathbb{R}}^n\\rightarrow{\\mathbb{R}}^n$ is globally Lipschitz continuous; \n\n\\item $G=\\left\\{g_1,\\ldots,g_m\\right\\}$ is such that for any $p\\in\\mathsf{P}$, $g_p:{\\mathbb{R}}^n\\rightarrow{\\mathbb{R}}^{n\\times{\\widehat{q}}}$ is globally Lipschitz continuous with Lipschitz constant $Z_p\\in{\\mathbb{R}}_0^+$.
\n\\end{itemize}\n\\end{definition}\n\nA continuous-time stochastic process \\mbox{$\\xi:\\Omega \\times {\\mathbb{R}}_0^+ \\rightarrow \\mathbb{R}^{n}$} is said to be a \\textit{solution process} of $\\Sigma$ if there exists a switching signal $\\upsilon\\in\\mathcal{P}$ satisfying \n\\begin{small}\n\\begin{equation}\n\\label{eq00}\n\t\\diff \\xi= f_{\\upsilon}(\\xi)\\diff t+g_{\\upsilon}(\\xi)\\diff W_t,\n\\end{equation}\n\\end{small}$\\mathds{P}$-almost surely ($\\mathds{P}$-a.s.), at each time $t\\in{\\mathbb{R}}_0^+$ whenever $\\upsilon$ is continuous. \nLet us emphasize that $\\upsilon$ is a piecewise constant function defined over ${\\mathbb{R}}_0^+$ and taking values in $\\mathsf{P}$, \nwhich simply dictates in which mode the solution process $\\xi$ is located at any time $t \\in {\\mathbb{R}}_0^+$. \n\nFor any given $p\\in\\mathsf{P}$, we denote by $\\Sigma_p$ the subsystem of $\\Sigma$ defined by the stochastic differential equation (SDE)\n\\begin{small}\n\\begin{equation}\n\\label{eq10}\n\t\\diff \\xi= f_{p}(\\xi)\\diff t+g_{p}(\\xi)\\diff W_t, \n\\end{equation}\n\\end{small}where $f_{p}$ is known as the drift and\n$g_{p}$ as the diffusion.\nGiven an initial condition that is a random variable measurable in $\\mathcal{F}_0$, a solution process of $\\Sigma_p$ exists and is uniquely determined owing to the assumptions on $f_p$ and on $g_p$ \n\\cite[Theorem 5.2.1, p.\\ 68]{oksendal}. \n \nWe further write $\\xi_{a \\upsilon}(t)$ to denote the value of the solution process of $\\Sigma$ at time $t\\in{\\mathbb{R}}_0^+$ under the switching signal $\\upsilon$ from initial condition $\\xi_{a \\upsilon}(0) = a$ $\\mathds{P}$-a.s., in which $a$ is a random variable that is measurable in $\\mathcal{F}_0$. \n\nFinally, note that a solution process of $\\Sigma_p$ is also a solution process of $\\Sigma$ corresponding to the constant switching signal $\\upsilon(t)=p$, for all $t\\in{\\mathbb{R}}_0^+$. We also use $\\xi_{ap}(t)$ to denote the value of the solution process of $\\Sigma_p$ at time $t\\in{\\mathbb{R}}_0^+$ from the initial condition $\\xi_{a p}(0) = a$ $\\mathds{P}$-a.s.\n\\vspace{-0.2cm}\n\n\\section{Notions of Incremental Stability}\\label{stability}\nThis section introduces some stability notions for stochastic switched systems, \nwhich generalize the notions of incremental global asymptotic stability ($\\delta$-GAS) \\cite{angeli} for non-probabilistic dynamical systems and of incremental global uniform asymptotic stability ($\\delta$-GUAS) \\cite{girard2} for non-probabilistic switched systems. \nThe main results presented in this work rely on the stability assumptions discussed in this section. \n\n\\begin{definition}\n\\label{dGAS}\nThe stochastic subsystem $\\Sigma_p$ is incrementally globally asymptotically stable in the $q$th moment ($\\delta$-GAS-M$_q$), where $q\\geq1$, if there exists a $\\mathcal{KL}$ function $\\beta_p$ such that for any $t\\in{\\mathbb{R}_0^+}$ and any ${\\mathbb{R}}^n$-valued random variables $a$ and $a'$ that are measurable in $\\mathcal{F}_0$, the following condition is satisfied:\n\\begin{small}\n\\begin{equation}\n\\mathds{E} \\left[\\left\\Vert \\xi_{ap}(t)-\\xi_{a'p}(t)\\right\\Vert^q\\right] \\leq\\beta_p\\left( \\mathds{E}\\left[ \\left\\Vert a-a' \\right\\Vert^q\\right], t \\right). \\label{delta_SGAS}\n\\end{equation}\n\\end{small}\n\\end{definition}\n\nIt can be easily checked that a $\\delta$-GAS-M$_q$ stochastic subsystem $\\Sigma_p$ is $\\delta$-GAS \\cite{angeli} in the absence of any noise.
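As a simple illustration of Definition \\ref{dGAS} (this example is ours and is not used in the sequel), consider the scalar subsystem $\\diff\\xi=-\\xi\\diff t+\\sigma\\xi\\diff W_t$ with $\\sigma\\in{\\mathbb{R}}$. It\\^{o}'s formula applied to $\\left(\\xi_{ap}(t)-\\xi_{a'p}(t)\\right)^2$ yields $\\frac{\\diff}{\\diff t}\\mathds{E}\\left[\\left(\\xi_{ap}(t)-\\xi_{a'p}(t)\\right)^2\\right]=(\\sigma^2-2)\\,\\mathds{E}\\left[\\left(\\xi_{ap}(t)-\\xi_{a'p}(t)\\right)^2\\right]$, so that condition (\\ref{delta_SGAS}) holds with $q=2$ and $\\beta_p(r,s)=r\\mathsf{e}^{-(2-\\sigma^2)s}$, which is a $\\mathcal{KL}$ function whenever $\\sigma^2<2$: the subsystem is $\\delta$-GAS-M$_2$ for a sufficiently small diffusion gain.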
\nFurther, note that when $f_p(0_n)=0_n$ and $g_p(0_n)=0_{n\\times{\\widehat{q}}}$ (drift and diffusion terms vanish at the origin), \nthen $\\delta$-GAS-M$_q$ implies global asymptotic stability in the $q$th moment (GAS-M$_q$) \\cite{debasish}, \nwhich means that all the trajectories of $\\Sigma_p$ converge in the $q$th moment to the (constant) trajectory $\\xi_{0_np}(t)=0_n$ (namely, the equilibrium point), for all $t\\in{\\mathbb{R}}_0^+$. \nWe extend the notion of $\\delta$-GAS-M$_q$ to stochastic switched systems as follows. \n\n\\begin{definition}\n\\label{dGUAS}\nA stochastic switched system $\\Sigma$ is incrementally globally uniformly asymptotically stable in the $q$th moment ($\\delta$-GUAS-M$_q$), where $q\\geq1$, if there exists a $\\mathcal{KL}$ function $\\beta$ such that for any $t\\in{\\mathbb{R}_0^+}$, any ${\\mathbb{R}}^n$-valued random variables $a$ and $a'$ that are measurable in $\\mathcal{F}_0$, and any switching signal ${\\upsilon}\\in\\mathcal{P}$, the following condition is satisfied:\n\\begin{small}\n\\begin{equation}\n\\mathds{E} \\left[\\left\\Vert \\xi_{a\\upsilon}(t)-\\xi_{a'\\upsilon}(t)\\right\\Vert^q\\right] \\leq\\beta\\left( \\mathds{E}\\left[ \\left\\Vert a-a' \\right\\Vert^q\\right], t \\right). \\label{delta_SGUAS}\n\\end{equation}\n\\end{small}\n\\end{definition}\n\nEssentially, Definition \\ref{dGUAS} extends Definition \\ref{dGAS} uniformly over any possible switching signal $\\upsilon\\in\\mathcal{P}$. \nAs expected, this notion generalizes known ones in the literature: \nit can be easily seen that a $\\delta$-GUAS-M$_q$ stochastic switched system $\\Sigma$ is $\\delta$-GUAS \\cite{girard2} in the absence of any noise \nand that, whenever $f_p(0_n)=0_n$ and $g_p(0_n)=0_{n\\times{\\widehat{q}}}$ for all $p\\in\\mathsf{P}$, then $\\delta$-GUAS-M$_q$ implies global uniform asymptotic stability in the $q$th moment (GUAS-M$_q$) \\cite{debasish}. \n\nFor non-probabilistic systems the $\\delta$-GAS property can be characterized by $\\delta$-GAS Lyapunov functions \\cite{angeli}. \nAlong these lines, we describe $\\delta$-GAS-M$_q$ \nin terms of the existence of some {\\em incremental Lyapunov functions}, defined as follows. \n\n\\begin{definition}\n\\label{delta_SGAS_Lya}\nConsider a stochastic subsystem $\\Sigma_p$ and a continuous function $V_p:\\mathbb{R}^n\\times\\mathbb{R}^n\\rightarrow\\mathbb{R}_0^+$ that is twice continuously differentiable on \n$({\\mathbb{R}}^n\\times{\\mathbb{R}}^n)\\backslash\\Delta$. \nFunction $V_p$ is called a $\\delta$-GAS-M$_q$ Lyapunov function for $\\Sigma_p$, \nwhere $q\\geq1$, \nif there exist $\\mathcal{K}_{\\infty}$ functions \n$\\underline{\\alpha}_p$, $\\overline{\\alpha}_p$ and a constant $\\kappa_p\\in\\mathbb{R}^+$, such that\n\\begin{itemize}\n\\item[(i)] $\\underline{\\alpha}_p$ (resp. $\\overline \\alpha_p$) is a convex (resp. 
concave) function;\n\n\\item[(ii)] for any $x,x'\\in\\mathbb{R}^n$, $\\underline{\\alpha}_p\\left(\\left\\Vert x-x'\\right\\Vert^q\\right)\\leq{V_p}(x,x')\\leq\\overline{\\alpha}_p\\left(\\left\\Vert x-x'\\right\\Vert^q\\right)$;\n\n\\item[(iii)] for any $x,x'\\in\\mathbb{R}^n$, such that $x\\neq x'$,\n\\begin{small}\n\\begin{align*}\n\t&\\mathcal{L} V_p(x, x') \n\t := \\left[\\partial_xV_p~~\\partial_{x'}V_p\\right] \\begin{bmatrix} f_p(x)\\\\f_p(x')\\end{bmatrix}+\\frac{1}{2} \\text{Tr} \\left(\\begin{bmatrix} g_p(x) \\\\ g_p(x') \\end{bmatrix}\\left[g_p^T(x)~~g_p^T(x')\\right] \\begin{bmatrix}\n\\partial_{x,x} V_p & \\partial_{x,x'} V_p\\\\ \\partial_{x',x} V_p & \\partial_{x',x'} V_p\n\\end{bmatrix}\t\\right)\\leq-\\kappa_pV_p(x,x'). \n\\end{align*} \n\\end{small}\n\\end{itemize}\n\\end{definition}\nThe operator $\\mathcal{L}$ is the infinitesimal generator associated to the stochastic subsystem $\\Sigma_p$, defined by the SDE in \\eqref{eq10} \\cite[Section 7.3]{oksendal}. \nThe symbols $\\partial_x$ and $\\partial_{x,x'}$ denote first- and second-order partial derivatives with respect to $x$ and $x'$, respectively. \n\n\nThe following theorem describes $\\delta$-GAS-M$_q$ in terms of the existence of a $\\delta$-GAS-M$_q$ Lyapunov function.\n\n\\begin{theorem}\n\\label{the_Lya}\nA stochastic subsystem $\\Sigma_p$ is $\\delta$-GAS-M$_q$ if it admits a $\\delta$-GAS-M$_q$ Lyapunov function. \n\\end{theorem}\n\n\\begin{proof}\n\tThe proof is a consequence of the application of Gronwall's inequality and of Ito's lemma \\cite[p. 80 and 123]{oksendal}. Assume that there exists a $\\delta$-GAS-M$_q$ Lyapunov function in the sense of Definition \\ref{delta_SGAS_Lya}. For any $t\\in{\\mathbb{R}}_0^+$, and any ${\\mathbb{R}}^n$-valued random variables $a$ and $a'$ that are measurable in $\\mathcal{F}_0$, we obtain\n\t\t\\begin{small}\n\t\t\\begin{align*}\n\t\t\t\\mathds{E} \\left[ V_p(\\xi_{ap}(t),\\xi_{a'p}(t)) \\right]&=\\mathds{E} \\left[V_p(a,a') + \\int_0^{t} \\mathcal{L}V_p(\\xi_{ap}(s),\\xi_{a'p}(s))ds\\right]\\le\\mathds{E} \\left[ V_p(a,a') + \\int_0^{t} \\left(-\\kappa_p V_p(\\xi_{ap}(s),\\xi_{a'p}(s))\\right)ds\\right] \\\\\n\t\t\t&\\le-\\kappa_p \\int_0^{t} \\mathds{E} \\left[ V_p(\\xi_{ap}(s),\\xi_{a'p}(s))\\right ]ds+\\mathds{E}\\left[V_p(a,a')\\right]\\nonumber,\n\t\t\\end{align*}\n\t\t\\end{small}which, by virtue of Gronwall's inequality, leads to \n\t\t\\begin{small}\n\\begin{align}\n\t\\nonumber\n\t\t\\mathds{E} &\\left[ V_p(\\xi_{ap}(t),\\xi_{a'p}(t)) \\right]\\le \\mathds{E}[V_p(a,a')] \\mathsf{e}^{-\\kappa_p t}.\n\\end{align}\n\\end{small}Hence, using property (ii) in Definition \\ref{delta_SGAS_Lya}, we have\n\t\\begin{small}\n\t\\begin{align}\\nonumber\n\t\t\\underline{\\alpha}_p\\left(\\mathds{E} \\left[ \\left\\|\\xi_{ap}(t)-\\xi_{a'p}(t)\\right\\|^q\\right]\\right)&\\le \\mathds{E} \\left[ \\underline{\\alpha}_p\\left(\\left\\| \\xi_{ap}(t)-\\xi_{a'p}(t)\\right\\|^q\\right) \\right]\\le\\mathds{E} \\left[ V_p\\left(\\xi_{ap}(t),\\xi_{a'p}(t)\\right)\\right]\\leq \\mathds{E}\\left[V_p(a,a')\\right] \\mathsf{e}^{-\\kappa_p t}\\\\\\notag&\\leq \\mathds{E}\\left[\\overline{\\alpha}_p\\left(\\left\\| a - a' \\right\\|^q\\right)\\right] \\mathsf{e}^{-\\kappa_p t}\\le \\overline{\\alpha}_p\\left( \\mathds{E}\\left[\\left\\| a - a' \\right\\|^q\\right] \\right) \\mathsf{e}^{-\\kappa_p t},\n\t\\end{align}\\end{small}where the first and last inequalities follow from property (i) and Jensen's inequality \\cite[p. 310]{oksendal}. 
Since $\\underline{\\alpha}_p\\in\\mathcal{K}_\\infty$, we obtain\n\t\\begin{small}\n\\begin{align}\\nonumber\n\\mathds{E} \\left[ \\left\\|\\xi_{ap}(t)-\\xi_{a'p}(t)\\right\\|^q\\right]\\le\\underline{\\alpha}_p^{-1} \\left(\\overline{\\alpha}_p\\left( \\mathds{E}\\left[ \\|a - a' \\|^q\\right] \\right) \\mathsf{e}^{-\\kappa_p t}\\right).\n\\end{align}\n\\end{small}Therefore, by introducing the function\n\n\n\t\t$\\beta_p\\left(r,s\\right) := \\underline{\\alpha}_p^{-1}\\left(\\overline{\\alpha}_p\\left(r\\right)\\mathsf{e}^{-\\kappa_ps} \\right)$,\t\n\t\n\n\tcondition (\\ref{delta_SGAS}) is satisfied. Hence, the stochastic subsystem $\\Sigma_p$ is $\\delta$-GAS-M$_q$.\t\n\\end{proof}\n\nLet us now direct our attention from subsystems to the overall switched model. \nAs qualitatively stated in the introduction, \nit is known that a non-probabilistic switched system, whose subsystems are all $\\delta$-GAS, may exhibit some unstable behaviors under fast switching signals \\cite{girard2} and, hence, may not be $\\delta$-GUAS. \nThe same phenomenon can happen for a stochastic switched system endowed with $\\delta$-GAS-M$_q$ subsystems. \nThe $\\delta$-GUAS property of non-probabilistic switched systems can be established by using a common Lyapunov function, \nor alternatively via multiple Lyapunov functions that are mode-dependent \\cite{girard2}. \nThis leads to the following extensions for the $\\delta$-GUAS-M$_q$ property of stochastic switched systems. \n\nAssume that for any $p\\in\\mathsf{P}$, the stochastic subsystem $\\Sigma_p$ admits a $\\delta$-GAS-M$_q$ Lyapunov function $V_p$, satisfying conditions (i)-(iii) in Definition \\ref{delta_SGAS_Lya} with $\\mathcal{K}_\\infty$ functions $\\underline\\alpha_p$, $\\overline\\alpha_p$, and a constant $\\kappa_p\\in{\\mathbb{R}}^+$. \nLet us introduce the $\\mathcal{K}_\\infty$ functions $\\underline\\alpha$ and $\\overline\\alpha$ and the positive constant $\\kappa$ for use in the rest of the paper as follows: $\\underline\\alpha=\\min\\left\\{\\underline\\alpha_1,\\ldots,\\underline\\alpha_m\\right\\}$, \n$\\overline\\alpha=\\max\\left\\{\\overline\\alpha_1,\\ldots,\\overline\\alpha_m\\right\\}$, \nand $\\kappa=\\min\\left\\{\\kappa_1,\\ldots,\\kappa_m\\right\\}$. \nWe first show a result based on the existence of a common Lyapunov function in which $\\underline\\alpha=\\underline\\alpha_1=\\cdots=\\underline\\alpha_m$ and $\\overline\\alpha=\\overline\\alpha_1=\\cdots=\\overline\\alpha_m$. \n\n\\begin{theorem}\\label{theorem2}\nConsider a stochastic switched system $\\Sigma$. \nIf there exists a function $V$ that is a common $\\delta$-GAS-M$_q$ Lyapunov function for all the subsystems $\\left\\{\\Sigma_1,\\ldots,\\Sigma_m\\right\\}$, \nthen $\\Sigma$ is $\\delta$-GUAS-M$_q$.\n\\end{theorem}\n\n\\begin{proof}\nThe proof is a consequence of the application of Gronwall's inequality and of It\\^{o}'s lemma \\cite[p. 80 and 123]{oksendal}. For any ${\\mathbb{R}}^n$-valued random variables $a$ and $a'$ that are measurable in $\\mathcal{F}_0$, any switching signal $\\upsilon\\in\\mathcal{P}$, and for all $t\\in{\\mathbb{R}}_0^+$ where $\\upsilon$ is continuous, we have $\\mathcal{L}V\\left(\\xi_{a\\upsilon}(t),\\xi_{a'\\upsilon}(t)\\right)\\leq-\\kappa V(\\xi_{a\\upsilon}(t),\\xi_{a'\\upsilon}(t))$. 
\nUsing the continuity of $V$ and of the solution process $\\xi$, for all $t\\in{\\mathbb{R}}_0^+$ one gets\n\\begin{small}\n\t\t\\begin{align*}\n\t\t\t\\mathds{E} \\left[ V(\\xi_{a\\upsilon}(t),\\xi_{a'\\upsilon}(t)) \\right]&\\le\\mathds{E} \\left[ V(a,a') + \\int_0^{t} \\left(-\\kappa V(\\xi_{a\\upsilon}(s),\\xi_{a'\\upsilon}(s))\\right)ds\\right] \\le-\\kappa\\int_0^{t} \\mathds{E} \\left[ V(\\xi_{a\\upsilon}(s),\\xi_{a'\\upsilon}(s))\\right ]ds+\\mathds{E}\\left[V(a,a')\\right]\\nonumber,\n\t\t\\end{align*}\n\t\t\\end{small}which, by virtue of Gronwall's inequality, leads to \n\t\t\\begin{small}\n\\begin{align}\n\t\\nonumber\n\t\t\\mathds{E} &\\left[ V(\\xi_{a\\upsilon}(t),\\xi_{a'\\upsilon}(t)) \\right]\\le \\mathds{E}[V(a,a')] \\mathsf{e}^{-\\kappa t}.\n\\end{align}\n\\end{small}\nSince the $\\mathcal{K}_\\infty$ functions $\\underline\\alpha$ and $\\overline\\alpha$ are convex and concave, respectively, using Jensen's inequality we have\n\\begin{small}\n\t\\begin{align}\\nonumber\n\t\t\\underline\\alpha\\left(\\mathds{E} \\left[ \\left\\Vert\\xi_{a\\upsilon}(t)-\\xi_{a'\\upsilon}(t)\\right\\Vert^q\\right]\\right)&\\le \\mathds{E} \\left[ \\underline\\alpha\\left(\\left\\Vert \\xi_{a\\upsilon}(t)-\\xi_{a'\\upsilon}(t)\\right\\Vert^q\\right) \\right]\\le\\mathds{E} \\left[ V\\left(\\xi_{a\\upsilon}(t),\\xi_{a'\\upsilon}(t)\\right)\\right]\\leq \\mathds{E}\\left[V(a,a')\\right] \\mathsf{e}^{-\\kappa t}\\\\\\notag&\\leq \\mathds{E}\\left[\\overline\\alpha\\left(\\left\\Vert a - a' \\right\\Vert^q\\right)\\right] \\mathsf{e}^{-\\kappa t}\\le \\overline\\alpha\\left( \\mathds{E}\\left[\\left\\Vert a - a' \\right\\Vert^q\\right] \\right) \\mathsf{e}^{-\\kappa t}.\n\t\\end{align}\n\t\\end{small}\nSince $\\underline\\alpha\\in\\mathcal{K}_\\infty$, we obtain \\begin{small}$$\\mathds{E} \\left[ \\left\\|\\xi_{a\\upsilon}(t)-\\xi_{a'\\upsilon}(t)\\right\\|^q\\right]\\le\\underline{\\alpha}^{-1} \\left(\\overline{\\alpha}\\left( \\mathds{E}\\left[ \\|a - a' \\|^q\\right] \\right) \\mathsf{e}^{-\\kappa t}\\right),$$\\end{small}for all $t\\in{\\mathbb{R}}_0^+$. Then condition (\\ref{delta_SGUAS}) holds with the function $\\beta(r,s):=\\underline\\alpha^{-1}\\left(\\overline\\alpha({r})\\mathsf{e}^{-\\kappa s}\\right)$. \n\\end{proof}\n\nWhen a common $\\delta$-GAS-M$_q$ Lyapunov function $V$ fails to exist, \nthe $\\delta$-GUAS-M$_q$ property of $\\Sigma$ can still be established by resorting to multiple $\\delta$-GAS-M$_q$ Lyapunov functions (one per mode) over a restricted set of switching signals. \nMore precisely, let $\\mathcal{P}_{\\tau_d}$ be a subset of the set of switching signals $\\upsilon$ with \\emph{dwell time} \n$\\tau_d\\in{\\mathbb{R}}_0^+$, \nwhere $\\upsilon$ is said to have dwell time $\\tau_d$ if the switching times $t_1,t_2,\\ldots$ (occurring at the discontinuity points of $\\upsilon$) satisfy $t_1>\\tau_d$ and $t_i-t_{i-1}\\geq\\tau_d$, for all $i\\geq{2}$. \nWe now show a stability result based on the existence of multiple Lyapunov functions.\n\n\\begin{theorem}\\label{multiple_lyapunov}\nLet $\\tau_d\\in{\\mathbb{R}}_0^+$, and consider a stochastic switched system $\\Sigma_{\\tau_d}=(\\mathbb{R}^{n},\\mathsf{P},\\mathcal{P}_{\\tau_d},F,G)$. 
Assume that for any $p\\in\\mathsf{P}$, there exists a $\\delta$-GAS-M$_q$ Lyapunov function $V_p$ for subsystem $\\Sigma_{\\tau_d,p}$ and that in addition there exists a constant $\\mu\\geq1$ such that\n\\begin{small}\n\\begin{equation}\\label{eq0}\n\\forall{x,x'}\\in{\\mathbb{R}}^n,~~\\forall{p,p'\\in\\mathsf{P}},~~V_p(x,x')\\leq\\mu V_{p'}(x,x').\n\\end{equation}\n\\end{small}\nIf $\\tau_d>\\log{\\mu}\/\\kappa$, then $\\Sigma_{\\tau_d}$ is $\\delta$-GUAS-M$_q$.\n\\end{theorem}\n\n\\begin{proof}\nThe proof is inspired by that of Theorem 2.8 in \\cite{girard2} for the non-probabilistic case. We show the result for the case in which the switching signal has an infinite number of discontinuities (switching times); the proof for the case of finitely many discontinuities is similar. Let $a$ and $a'$ be any ${\\mathbb{R}}^n$-valued random variables that are measurable in $\\mathcal{F}_0$, $\\upsilon\\in\\mathcal{P}_{\\tau_d}$, $t_0=0$, and let $p_{i+1}\\in\\mathsf{P}$ denote the value of the switching signal on the open interval $(t_i,t_{i+1})$, for $i\\in{\\mathbb{N}}_0$. Using (iii) in Definition \\ref{delta_SGAS_Lya} for all $i\\in{\\mathbb{N}}_0$ and $t\\in(t_i,t_{i+1})$, one gets\\begin{small}$$\\mathcal{L} V_{p_{i+1}}\\left(\\xi_{a\\upsilon}(t),\\xi_{a'\\upsilon}(t)\\right)\\leq-\\kappa V_{p_{i+1}}\\left(\\xi_{a\\upsilon}(t),\\xi_{a'\\upsilon}(t)\\right).$$\\end{small} Similar to the proof of Theorem \\ref{theorem2}, for all $i\\in{\\mathbb{N}}_0$ and $t\\in[t_i,t_{i+1}]$, we have \n\\begin{small}\n\\begin{align}\\label{eq1}\n\\mathds{E}&\\left[V_{p_{i+1}}(\\xi_{a\\upsilon}(t),\\xi_{a'\\upsilon}(t))\\right]\\leq\\mathds{E}\\left[V_{p_{i+1}}(\\xi_{a\\upsilon}(t_i),\\xi_{a'\\upsilon}(t_i))\\right]\\mathsf{e}^{-\\kappa(t-t_i)}. \n\\end{align}\n\\end{small}\nIn particular, for $t=t_{i+1}$ and from (\\ref{eq0}), it can be checked that for all $i\\in{\\mathbb{N}}_0$:\n\\begin{small}\n\\begin{align*}\n\\mathds{E}&\\left[V_{p_{i+2}}(\\xi_{a\\upsilon}(t_{i+1}),\\xi_{a'\\upsilon}(t_{i+1}))\\right]\\leq\\mu\\mathsf{e}^{-\\kappa(t_{i+1}-t_i)}\\mathds{E}\\left[V_{p_{i+1}}(\\xi_{a\\upsilon}(t_i),\\xi_{a'\\upsilon}(t_i))\\right]. \n\\end{align*}\n\\end{small}\nUsing this inequality, we prove by induction that for all $i\\in{\\mathbb{N}}_0$\n\\begin{small}\n\\begin{equation}\\label{eq2}\n\\mathds{E}\\left[V_{p_{i+1}}(\\xi_{a\\upsilon}(t_i),\\xi_{a'\\upsilon}(t_i))\\right]\\leq\\mu^i\\mathsf{e}^{-\\kappa{t_i}}\\mathds{E}\\left[V_{p_1}(a,a')\\right].\n\\end{equation} \n\\end{small}\nFrom (\\ref{eq1}) and (\\ref{eq2}), for all $i\\in{\\mathbb{N}}_0$ and $t\\in[t_i,t_{i+1}]$, one obtains\\begin{small}$$\\mathds{E}\\left[V_{p_{i+1}}(\\xi_{a\\upsilon}(t),\\xi_{a'\\upsilon}(t))\\right]\\leq\\mu^i\\mathsf{e}^{-\\kappa{t}}\\mathds{E}\\left[V_{p_1}(a,a')\\right].$$\\end{small} Since the switching signal $\\upsilon$ has dwell time $\\tau_d$, we have $t_i\\geq i\\tau_d$ and hence, for all $t\\in[t_i,t_{i+1}]$, $t\\geq i\\tau_d$. 
Since $\\mu\\geq1$, then for all $i\\in{\\mathbb{N}}_0$ and $t\\in[t_i,t_{i+1}]$, one has $\\mu^i=\\mathsf{e}^{i\\log{\\mu}}\\leq\\mathsf{e}^{\\left(\\log{\\mu}\/\\tau_d\\right)t}.$ Therefore, for all $i\\in{\\mathbb{N}}_0$ and $t\\in[t_i,t_{i+1}]$, we get\\begin{small}$$\\mathds{E}\\left[V_{p_{i+1}}(\\xi_{a\\upsilon}(t),\\xi_{a'\\upsilon}(t))\\right]\\leq\\mathsf{e}^{\\left(\\left(\\log{\\mu}\/\\tau_d\\right)-\\kappa\\right)t}\\mathds{E}\\left[V_{p_1}(a,a')\\right].$$\\end{small} Using functions $\\underline\\alpha,\\overline\\alpha$ and Jensen's inequality, and for all $t\\in{\\mathbb{R}}_0^+$, where $t\\in[t_i,t_{i+1}]$ for some $i\\in{\\mathbb{N}}_0$, we have\n\\begin{small}\n\\begin{align}\\nonumber\n\\underline\\alpha\\left(\\mathds{E}\\left[\\left\\Vert\\xi_{a\\upsilon}(t)-\\xi_{a'\\upsilon}(t)\\right\\Vert^q\\right]\\right)&\\leq\\underline\\alpha_{p_{i+1}}\\left(\\mathds{E}\\left[\\left\\Vert\\xi_{a\\upsilon}(t)-\\xi_{a'\\upsilon}(t)\\right\\Vert^q\\right]\\right)\\leq\\mathds{E}\\left[\\underline\\alpha_{p_{i+1}}\\left(\\left\\Vert\\xi_{a\\upsilon}(t)-\\xi_{a'\\upsilon}(t)\\right\\Vert^q\\right)\\right]\\leq\\mathds{E}\\left[V_{p_{i+1}}(\\xi_{a\\upsilon}(t),\\xi_{a'\\upsilon}(t))\\right]\\\\\\notag&\\leq\\mathsf{e}^{\\left(\\left(\\log{\\mu}\/\\tau_d\\right)-\\kappa\\right)t}\\mathds{E}\\left[V_{p_1}(a,a')\\right]\\leq\\mathsf{e}^{\\left(\\left(\\log{\\mu}\/\\tau_d\\right)-\\kappa\\right)t}\\mathds{E}\\left[\\overline\\alpha_{p_1}\\left(\\Vert a-a'\\Vert^q\\right)\\right]\\\\\\notag&\\leq\\mathsf{e}^{\\left(\\left(\\log{\\mu}\/\\tau_d\\right)-\\kappa\\right)t}\\overline\\alpha_{p_1}\\left(\\mathds{E}\\left[\\Vert a-a'\\Vert^q\\right]\\right)\\leq\\mathsf{e}^{\\left(\\left(\\log{\\mu}\/\\tau_d\\right)-\\kappa\\right)t}\\overline\\alpha\\left(\\mathds{E}\\left[\\left\\Vert a-a'\\right\\Vert^q\\right]\\right).\n\\end{align}\n\\end{small}\nTherefore, for all $t\\in{\\mathbb{R}}_0^+$\n\\begin{small}\n\\begin{align*}\n\\mathds{E}&\\left[\\left\\Vert\\xi_{a\\upsilon}(t)-\\xi_{a'\\upsilon}(t)\\right\\Vert^q\\right]\\leq\\underline\\alpha^{-1}\\left(\\mathsf{e}^{\\left(\\left(\\log{\\mu}\/\\tau_d\\right)-\\kappa\\right)t}\\overline\\alpha\\left(\\mathds{E}\\left[\\Vert a-a'\\Vert^q\\right]\\right)\\right). \n\\end{align*}\n\\end{small}\nThen condition (\\ref{delta_SGUAS}) holds with the function $\\beta(r,s):=\\underline\\alpha^{-1}\\left(\\overline\\alpha({r})\\mathsf{e}^{\\left(\\left(\\log{\\mu}\/\\tau_d\\right)-\\kappa\\right)s}\\right)$ which is a $\\mathcal{KL}$ function since by assumption $\\log\\mu\/\\tau_d-\\kappa<0$. The same inequality holds for switching signals with a finite number of discontinuities, \nhence the stochastic switched system $\\Sigma_{\\tau_d}$ is $\\delta$-GUAS-M$_q$. \n\\end{proof}\n\n\n\n\\begin{comment}\n\nWe look next into special instances where these functions can be easily computed based on the model dynamics. \nThe first result provides a sufficient condition for a particular function $V_p$ to be a $\\delta$-GAS-M$_q$ Lyapunov function for a stochastic subsystem $\\Sigma_p$, when $q\\in[1,2]$.\n\n\\begin{lemma}\\label{lem:lyapunov}\n\tConsider a stochastic subsystem $\\Sigma_p$. 
Let $q\\in[1,2]$, $P_p\\in{\\mathbb{R}}^{n\\times{n}}$ be a symmetric positive definite matrix, and the function $V_p: {\\mathbb{R}}^n \\times {\\mathbb{R}}^n \\rightarrow {\\mathbb{R}}_0^{+}$ be defined as follows:\n\n\t\\begin{align}\n\t\\label{V}\n\t\t\tV_p(x,x'):=\\left(\\frac{1}{q}\\left(x-x'\\right)^TP_p\\left(x-x'\\right)\\right)^{\\frac{q}{2}},\t\n\t\\end{align}\n\n\tand satisfies\n\n\t\\begin{align}\n\t\\notag\n\t(x-x')^T P_p(f_p(x)-f_p(x'))+& \\frac{1}{2} \\left \\| \\sqrt{P_p} \\left( g_p(x) - g_p(x')\\right) \\right \\|_F^2\\\\\\label{nonlinear ineq cond}&\\le-\\kappa_p \\left(V_p(x,x')\\right)^{\\frac{2}{q}},\t\n\t\\end{align}\n\n\tor, if $f_p$ is differentiable, satisfies\n\n\t\\begin{align}\\notag\n\t\t(x-x')^T P_p \\partial_x f_p(z)(x-x') +& \\frac{1}{2} \\left \\| \\sqrt{P_p} \\left( g_p(x) - g_p(x')\\right) \\right \\|_F^2\\\\\\label{nonlinear ineq cond1}&\\le-\\kappa_p \\left(V_p(x,x')\\right)^{\\frac{2}{q}},\t\n\t\\end{align}\n\n\tfor all $x,x',z\\in{\\mathbb{R}}^n$, and for some constant $\\kappa_p\\in{\\mathbb{R}}^+$. Then $V_p$ is a $\\delta$-GAS-M$_q$ Lyapunov function for $\\Sigma_p$.\n\\end{lemma}\n\n\\begin{proof}\nIt is not difficult to check that the function $V_p$ in \\eqref{V} satisfies properties (i) and (ii) of Definition \\ref{delta_SGAS_Lya} with functions $\\underline{\\alpha}_p(y) :=\\left(\\frac{1}{q}\\lambda_{\\min}\\left({P_p}\\right)\\right)^{\\frac{q}{2}}y$ and $\\overline{\\alpha}_p(y) := \\left(\\frac{n}{q}\\lambda_{\\max}\\left({P_p}\\right)\\right)^{\\frac{q}{2}}y$. It then suffices to verify property (iii). We verify property (iii) for the case that $f_p$ is differentiable and using condition (\\ref{nonlinear ineq cond1}). The proof, using condition (\\ref{nonlinear ineq cond}), is completely similar by just removing the inequality in the proof including derivative of $f_p$. 
By the definition of $V_p$ in (\\ref{V}), for any $x,x'\\in{\\mathbb{R}}^n$ such that $x\\neq{x'}$, and for $q\\in[1,2]$, one has\t\n\n\t\\begin{align*}\n\t\t\\partial_xV_p&= -\\partial_{x'} V_p=(x-x')^TP_p\\left(\\widetilde{V}(x,x')\\right)^{\\frac{q}{2}-1},\\\\\n\t\t\\partial_{x,x}V_p&= \\partial_{x',x'}V_p= -\\partial_{x,x'}V_p=P_p\\left(\\widetilde{V}(x,x')\\right)^{\\frac{q}{2}-1}\\\\&\\qquad+\\frac{q-2}{q}P_p(x-x')(x-x')^TP_p\\left(\\widetilde{V}(x,x')\\right)^{\\frac{q}{2}-2},\n\t\\end{align*}\t\n\n\twhere $\\widetilde{V}(x,x')=\\frac{1}{q}\\left(x-x'\\right)^TP_p\\left(x-x'\\right).$\n\tTherefore, following the definition of $\\mathcal{L}$, and for any \\mbox{$x,x',z\\in{\\mathbb{R}}^n$} such that $x\\neq{x'}$, one obtains:\n\n\t\\begin{align}\\nonumber\n\t\t&\\mathcal{L} V_p(x, x')\\\\\\notag&= (x-x')^TP_p\\left(\\widetilde{V}(x,x')\\right)^{\\frac{q}{2}-1}\\left(f_p(x)-f_p(x')\\right)\\\\\\notag&\\quad+\\frac{1}{2} \\text{Tr} \\left(\\begin{bmatrix} g_p(x) \\\\ g_p(x') \\end{bmatrix}\\left[g_p^T(x)~~g_p^T(x')\\right] \\begin{bmatrix}\n\\partial_{x,x} V_p& -\\partial_{x,x}V_p\\\\ -\\partial_{x,x}V_p& \\partial_{x,x}V_p\n\\end{bmatrix}\\right)\\\\\\notag\n\t\t& = (x-x')^TP_p\\left(\\widetilde{V}(x,x')\\right)^{\\frac{q}{2}-1}\\left(f_p(x) - f_p(x')\\right)\\\\\\notag\t\t\n\t\t& \\quad + \\frac{1}{2}\\text{Tr}\\left( \\left(g_p(x)-g_p(x')\\right) \\left ( g_p^T(x) - g_p^T(x') \\right) \\partial_{x,x} V_p \\right)\t\\\\\\notag\t\n\t\t& = (x-x')^TP_p\\left(\\widetilde{V}(x,x')\\right)^{\\frac{q}{2}-1}\\left(f_p(x) - f_p(x')\\right)\\\\\\notag&\\quad+\\frac{1}{2}\\left \\| \\sqrt{P_p} \\left( g_p(x) - g_p(x')\\right) \\right \\|_F^2\\left(\\widetilde{V}(x,x')\\right)^{\\frac{q}{2}-1}\\\\\\notag&\\quad +\\frac{q-2}{q}\\left \\|(x-x')^TP_p\\left( g_p(x) - g_p(x')\\right) \\right \\|_F^2\\left(\\widetilde{V}(x,x')\\right)^{\\frac{q}{2}-2} \\\\\\notag\n\t\t&\\le(x-x')^TP_p\\left(\\widetilde{V}(x,x')\\right)^{\\frac{q}{2}-1} \\left(f_p(x) - f_p(x')\\right)\\\\\\notag&\\quad+\\frac{1}{2}\\left \\| \\sqrt{P_p} \\left( g_p(x) - g_p(x')\\right) \\right \\|_F^2\\left(\\widetilde{V}(x,x')\\right)^{\\frac{q}{2}-1}\\\\\n\t\t\\label{1}\n\t\t& \\le \\bigg[(x-x')^TP_p\\partial_x f_p(z)(x-x')\\\\\\notag&\\quad+\\frac{1}{2}\\left \\| \\sqrt{P_p} \\left( g_p(x) - g_p(x')\\right) \\right \\|_F^2\\bigg]\\left(\\widetilde{V}(x,x')\\right)^{\\frac{q}{2}-1}\\\\\\notag\n\t\t&\\le -\\kappa_p V_p(x,x').\n\t\\end{align}\n\n\tIn \\eqref{1}, $z \\in {\\mathbb{R}}^n$ where the mean value theorem \n\n\tis applied to the differentiable function $x \\mapsto f_p(x)$ at points $x,x'$. \\qed\t\n\\end{proof}\n\nThe next result provides a condition that is equivalent to (\\ref{nonlinear ineq cond}) or to (\\ref{nonlinear ineq cond1}) for affine stochastic subsystems $\\Sigma_p$ \n(that is, for subsystems with affine drift and linear diffusion terms) \nin the form of a linear matrix inequality (LMI), which can be easily solved by numerical optimization tools. \n\n\\begin{corollary}\\label{corollary1}\n\tConsider a stochastic subsystem $\\Sigma_p$, \n\twhere for any $x\\in{\\mathbb{R}}^n$, $f_p(x):= A_px+b_p$ for some $A_p\\in{\\mathbb{R}}^{n\\times{n}},b_p\\in{\\mathbb{R}}^{n}$, \n\tand $g_p(x) := \\begin{bmatrix} \\sigma_{1,p} x~&\\sigma_{2,p}x~&\\ldots~& \\sigma_{\\widehat{q},p} x\\end{bmatrix}$ for some $\\sigma_{i,p} \\in {\\mathbb{R}}^{n \\times n}$, where $i=1,\\ldots,\\widehat{q}$. 
\n\tThen, \n\tfunction $V_p$ in \\eqref{V} is a $\\delta$-GAS-M$_q$ Lyapunov function for $\\Sigma_p$ if there exists a positive constant $\\widehat\\kappa_p\\in{\\mathbb{R}}^+$ satisfying the following LMI:\n\n\t\\begin{align}\n\t\\label{LMI}\n\t\tP_pA_p + A_p^TP_p + \\sum_{i=1}^{\\widehat{q}} \\sigma_{i,p}^T P_p \\sigma_{i,p} \\preceq -\\widehat\\kappa_p P_p.\n\t\\end{align}\n\n\\end{corollary}\n\n\\begin{proof}\n\t\tThe corollary is a particular case of Lemma \\ref{lem:lyapunov}. It suffices to show that for affine dynamics the LMI \\eqref{LMI} yields the condition in \\eqref{nonlinear ineq cond} or \\eqref{nonlinear ineq cond1}. First it is straightforward to observe that\n\t\n\t\\begin{align*}\n\t\t&\\left \\| \\sqrt{P_p} \\left( g_p(x) - g_p(x')\\right) \\right \\|_F^2 \\\\&\\quad= \\text{Tr}\\left( \\left(g_p(x)-g_p(x')\\right)^T P_p \\left(g_p(x)-g_p(x')\\right) \\right) \\\\&\\quad=\\left(x-x'\\right)^T \\sum_{i=1}^{\\widehat{q}}\\sigma_{i,p}^T P_p \\sigma_{i,p}(x-x'),\n\t\t\\end{align*}\n\t\n\t\tand that \n\t\n\t\t\\begin{align*}\n\t(x-x')^TP_p&\\partial_x f_p(z)(x-x')=(x-x')^TP_p\\left(f_p(x)-f_p(x')\\right)\\\\&=\\frac{1}{2}(x-x')^T\\left(P_pA_p+A_p^TP_p\\right)(x-x'),\n\t\\end{align*}\n\n\tfor any $x,x',z\\in{\\mathbb{R}}^n$. Now suppose there exists $\\widehat\\kappa_p\\in{\\mathbb{R}}^+$ such that \\eqref{LMI} holds. It can be readily verified that the desired assertion of \\eqref{nonlinear ineq cond} or \\eqref{nonlinear ineq cond1} follows by choosing $\\kappa_p=\\widehat\\kappa_p\/2$. \\qed\n\\end{proof}\n\nNotice that \nCorollary \\ref{corollary1} allows us to obtain tighter upper bounds for the inequalities (\\ref{delta_SGAS}) and (\\ref{delta_SGUAS}) for any $p\\in\\mathsf{P}$, \nby selecting appropriate matrices $P_p$ satisfying the LMI in (\\ref{LMI}). \n\n\\end{comment}\n\nIn order to show some of the main results of the paper in Section \\ref{existence}, \nwe need the following technical result, \nwhich provides an upper bound on the distance (in the $q$th moment) between the solution processes of $\\Sigma_p$ (resp. $\\Sigma_{\\tau_d,p}$) and the corresponding non-probabilistic subsystem $\\overline\\Sigma_p$ (resp. $\\overline\\Sigma_{\\tau_d,p}$), obtained by disregarding the diffusion term $g_p$. \nFrom now on, we use the notation $\\overline\\xi_{xp}$ to denote the trajectory of $\\overline\\Sigma_p$ (resp. $\\overline\\Sigma_{\\tau_d,p}$) starting from the initial condition $x$ and satisfying the ordinary differential equation (ODE) $\\dot{\\overline\\xi}_{xp}=f_p\\left(\\overline\\xi_{xp}\\right)$. \n\n\\begin{lemma}\n\\label{lem:moment est}\n\tConsider a stochastic subsystem $\\Sigma_p$ (resp. $\\Sigma_{\\tau_d,p}$) such that $f_p(0_n)=0_n$ and $g_p(0_n) = 0_{n\\times{\\widehat{q}}}$.\t\n\tSuppose $q\\geq2$ and there exists a $\\delta$-GAS-M$_q$ Lyapunov function $V_p$ for $\\Sigma_p$ (resp. $\\Sigma_{\\tau_d,p}$) such that its Hessian is a positive semidefinite matrix in ${\\mathbb{R}}^{2n\\times2n}$ and $\\partial_{x,x}V_p(x,x')\\preceq P_p$, $\\forall x,x'\\in{\\mathbb{R}}^n$ and some positive semidefinite matrix $P_p\\in{\\mathbb{R}}^{n\\times{n}}$. 
Then for any $x\\in{\\mathbb{R}}^n$, we have \\begin{small}$\\mathds{E} \\left[\\left\\Vert\\traj{\\xi}{x}{p}(t)-\\traj{\\overline \\xi}{x}{p}(t)\\right\\Vert^q\\right] \\le h_x^p(t)$\\end{small}, \n\twhere \n\t\\begin{small}\n\t\\begin{align}\\nonumber\n\th_x^p(t)=&\\underline\\alpha_p^{-1}\\left(\\frac{1}{2}\\left\\Vert{\\sqrt{P_p}}\\right\\Vert^2\\min\\{n,\\widehat{q}\\}Z_p^2\\mathsf{e}^{-\\kappa_p t}\\int_0^t\\left(\\beta_p\\left(\\left\\Vert{x}\\right\\Vert^q,s\\right)\\right)^{\\frac{2}{q}}ds\\right),\n\t\\end{align}\n\t\\end{small}$Z_p$ is the Lipschitz constant, introduced in Definition \\ref{Def_control_sys}, and $\\beta_p$ is the $\\mathcal{KL}$ function\\footnote{Using a $\\delta$-GAS-M$_q$ Lyapunov function $V_p$, one can always choose $\\beta_p(r,s)=\\underline{\\alpha}_p^{-1}\\left(\\overline{\\alpha}_p\\left(r\\right)\\mathsf{e}^{-\\kappa_ps}\\right)$, as shown in Theorem \\ref{the_Lya}.} appearing in (\\ref{delta_SGAS}).\n\\end{lemma}\n\nIt can be readily seen that the nonnegative valued function $h_x^p$ tends to zero as $t \\rightarrow 0$, $t\\rightarrow+\\infty$, or as $Z_p\\rightarrow0$, and is identically zero if the diffusion term is identically zero (i.e., $Z_p=0$), which is the case for $\\overline\\Sigma_p$ (resp. $\\overline\\Sigma_{\\tau_d,p}$).\n\n\\begin{proof}\nThe proof is similar to the proof of Lemma 3.7 in \\cite{majid8}, where one needs to eliminate all the terms $\\gamma(\\cdot)$. \n\\end{proof}\n\n\\begin{comment}\nThe following lemma provides a similar result in line with that of Lemma \\ref{lem:moment est} for a stochastic subsystem $\\Sigma_p$ admitting a $\\delta$-GAS-M$_q$ Lyapunov function $V_p$ as in \\eqref{V}, where $q\\in[1,2]$.\n\n\\begin{lemma}\\label{lem:moment est1}\nConsider a stochastic subsystem $\\Sigma_p$ such that $f_p(0_n)=0_n$ and $g_p(0_n) = 0_{n\\times{\\widehat{q}}}$.\t Suppose that the function $V_p$ in \\eqref{V} satisfies \\eqref{nonlinear ineq cond} or \\eqref{nonlinear ineq cond1} for $\\Sigma_p$. For any $x$ in a compact set $\\mathsf{D}\\subset{\\mathbb{R}}^n$, we have $\\mathds{E} \\left[\\left\\Vert\\traj{\\xi}{x}{p}(t)-\\traj{\\overline \\xi}{x}{p}(t)\\right\\Vert^2\\right] \\le h_p(g_p,t),$\t \n\twhere \n\t\\begin{small}\n\t\\begin{align}\\nonumber\n\th_p(g_p,t)=&\\frac{q\\left\\Vert\\sqrt{P_p}\\right\\Vert^2n^2\\min\\left\\{n,\\widehat{q}\\right\\}Z_p^2\\mathsf{e}^{\\frac{-2\\kappa_p}{q}t}}{4\\lambda_{\\min}^2({P_p}){\\kappa_p}}\\\\\\notag&\\qquad\\cdot\\lambda_{\\max}({P_p})\\left(1-\\mathsf{e}^{\\frac{-2\\kappa_p}{q}t}\\right)\\sup_{x\\in \\mathsf{D}}\\left\\{\\Vert{x}\\Vert^2\\right\\},\n\t\\end{align}\n\t\\end{small}and $Z_p$ is the Lipschitz constant, introduced in Definition \\ref{Def_control_sys}.\n\\end{lemma}\n\nHere also it can be readily verified that the nonnegative valued function $h_p$ tends to zero as $t \\rightarrow 0$, $t\\rightarrow+\\infty$, or as $Z_p\\rightarrow0$.\n\n\\begin{proof}\nThe proof is similar to the proof of Lemma 3.9 in \\cite{majid8}. One just needs to enforce $L_u=0$ in the proof of Lemma 3.9 in \\cite{majid8}.\n\\end{proof}\n\nFor an affine stochastic subsystem $\\Sigma_p$,\nthe following corollary provides a possibly less conservative expression for $h_p$, based on parameters of drift and diffusion. 
\n\n\\begin{corollary}\\label{corollary2}\nConsider a stochastic subsystem $\\Sigma_p$, where for all $x\\in{\\mathbb{R}}^n$, $f_p(x):= A_px+b_p$, for some $A_p\\in{\\mathbb{R}}^{n\\times{n}},b_p\\in{\\mathbb{R}}^{n}$, and $g_p(x) := \\left[\\sigma_{1,p} x~~\\sigma_{2,p}x~~\\ldots~~\\sigma_{\\widehat{q},p} x\\right]$, for some $\\sigma_{i,p} \\in {\\mathbb{R}}^{n \\times n}$, where $i=1,\\ldots,\\widehat{q}$. Suppose there exists a positive definite matrix $P_p\\in{\\mathbb{R}}^{n\\times{n}}$ satisfying (\\ref{LMI}) for $\\Sigma_p$.\nFor any $x$ in a compact set $\\mathsf{D}\\subset{\\mathbb{R}}^n$, we have \n\t \t$\\mathds{E} \\left[\\left\\Vert\\traj{\\xi}{x}{p}(t)-\\traj{\\overline \\xi}{x}{p}(t)\\right\\Vert^2\\right] \\le h_p(g_p,t),$\t \n\n\twhere \n\\begin{small}\n\\begin{align}\\nonumber\n\th_p(g_p,t)&=\\frac{n\\lambda_{\\max}\\left(\\sum\\limits_{i = 1}^{\\widehat{q}} \\sigma^T_{i,p}P_p\\sigma_{i,p}\\right)\\mathsf{e}^{-\\widehat{\\kappa} t}}{\\lambda_{\\min}({P_p})}\\\\\\notag&\\cdot\\int_0^t\\left(\\left\\Vert\\mathsf{e}^{A_ps}\\right\\Vert\\sup_{x\\in{\\mathsf{D}}}\\left\\{\\left\\Vert x\\right\\Vert\\right\\}+\\left(\\int_0^{s}\\left\\Vert\\mathsf{e}^{A_pr}b_p\\right\\Vert dr\\right)\\right)^2ds.\n\t\\end{align}\n\\end{small}\n\\end{corollary}\n\n\\begin{proof}\nThe proof is similar to the proof of Corollary 3.10 in \\cite{majid8}. One just needs to enforce $B=b_p$ and $\\Vert u\\Vert=1$, for any $u\\in\\mathsf{U}$, in the proof of Corollary 3.10 in \\cite{majid8}.\n\\end{proof}\n\\end{comment}\n\nThe interested reader is referred to the results in \\cite{majid8}, which provide a result in line with that of Lemma \\ref{lem:moment est} for an (affine) stochastic subsystem $\\Sigma_p$ (resp. $\\Sigma_{\\tau_d,p}$) admitting a specific type of $\\delta$-GAS-M$_q$ Lyapunov functions. \nFor later use, we introduce the function $h_x(t)=\\max\\left\\{h_x^1(t),\\ldots,h_x^m(t)\\right\\}$ for all $t\\in{\\mathbb{R}}_0^+$.\n\\vspace{-0.2cm}\n\n\\section{Systems and Approximate Equivalence Notions}\\label{symbolic}\nWe employ the notion of \\emph{system}, introduced in \\cite{paulo}, to provide (in Sec. \\ref{existence}) an alternative description of stochastic switched systems that can be later directly related to their symbolic models. \n\\begin{definition}\\label{system}\nA system $S$ is a tuple \n$S=(X,X_0,U,\\longrightarrow,Y,H),$\nwhere\n$X$ is a set of states (possibly infinite), \n$X_0\\subseteq X$ is a set of initial states (possibly infinite), \n$U$ is a set of inputs (possibly infinite),\n$\\longrightarrow\\subseteq X\\times U\\times X$ is a transition relation,\n$Y$ is a set of outputs, and\n$H:X\\rightarrow Y$ is an output map.\n\\end{definition}\n\nWe write $x\\rTo^{u}x'$ if \\mbox{$(x,u,x')\\in\\longrightarrow$}. \nIf $x\\rTo^{u}x'$, we call state $x'$ a \\mbox{$u$-successor}, or simply a successor, of state $x$.\nFor technical reasons, we assume that for each $x\\in X$, there is some $u$-successor of $x$, for some $u\\in U$ -- let us remark that this is always the case for the systems considered later in this paper. 
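To make Definition \\ref{system} concrete, the following Python sketch (ours; all names are illustrative and not part of the formal development) encodes a finite system as an explicit transition relation, including a check of the determinism property defined next:
\\begin{verbatim}
from collections import defaultdict

class System:
    # a finite system S = (X, X0, U, ->, Y, H), with Y implicit as the image of H
    def __init__(self, X, X0, U, trans, H):
        self.X, self.X0, self.U, self.H = set(X), set(X0), set(U), H
        self.post = defaultdict(set)      # post[(x, u)] = set of u-successors
        for (x, u, xp) in trans:
            self.post[(x, u)].add(xp)

    def successors(self, x, u):
        return self.post[(x, u)]

    def is_deterministic(self):
        # at most one u-successor for every state-input pair
        return all(len(s) <= 1 for s in self.post.values())

S = System(X={'a', 'b'}, X0={'a'}, U={1, 2},
           trans=[('a', 1, 'b'), ('b', 2, 'a'), ('b', 1, 'b')],
           H=lambda x: x)
print(S.successors('a', 1), S.is_deterministic())   # {'b'} True
\\end{verbatim}
Storing the transition relation as a map from state-input pairs to successor sets makes both the determinism check and the enumeration of the finite runs introduced below immediate.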
\n\nA system $S$ is said to be\n\\begin{itemize}\n\\item \\textit{metric}, if the output set $Y$ is equipped with a metric\n$\\mathbf{d}:Y\\times Y\\rightarrow\\mathbb{R}_{0}^{+}$;\n\n\\item \\textit{finite} (or \\textit{symbolic}), if $X$ and $U$ are finite sets;\n\n\\item \\textit{deterministic}, if for any state $x\\in{X}$ and any input $u\\in{U}$, there exists at most one \\mbox{$u$-successor}.\n\\end{itemize}\n\nFor a system $S=(X,X_0,U,\\longrightarrow,Y,H)$, given any initial state $x_0\\in{X_0}$, a finite state run generated from $x_0$ is a finite sequence of transitions: \n\\begin{small}\n\\begin{align}\n\\label{run}\nx_0\\rTo^{u_0}x_1\\rTo^{u_1}x_2\\rTo^{u_2}\\cdots\\rTo^{u_{n-2}}x_{n-1}\\rTo^{u_{n-1}}x_n,\n\\end{align}\n\\end{small}such that $x_i\\rTo^{u_i}x_{i+1}$ for all $0\\leq i<n$.\n\nFor a given fixed sampling time $\\tau$, the precision $\\varepsilon$ is lower bounded by\n\\begin{small}\n\\begin{equation}\\label{lower_bound}\n\\varepsilon\\geq\\left(\\underline\\alpha^{-1}\\left(\\frac{\\widehat\\gamma\\left(\\left(h_{[X_{\\tau0}]_\\eta}(\\tau)\\right)^{\\frac{1}{q}}\\right)}{1-\\mathsf{e}^{-\\kappa\\tau}}\\right)\\right)^{\\frac{1}{q}}.\n\\end{equation}\n\\end{small} \nOne can easily verify that the lower bound on $\\varepsilon$ in (\\ref{lower_bound}) goes to zero as $\\tau$ goes to infinity or as $Z_p \\rightarrow 0$, for any $p\\in\\mathsf{P}$, where $Z_p$ is the Lipschitz constant introduced in Definition \\ref{Def_control_sys}. \n\nNote that $S_{\\mathsf{q}}(\\Sigma)$ has a countable number of states and it is finite if one is interested in the dynamics of $\\Sigma$ on a compact $\\mathsf{D}\\subset{\\mathbb{R}}^n$, which is always the case in practice.\n\n\\begin{proof}\nWe start by proving \\mbox{$S_{\\tau}(\\Sigma)\\preceq^{\\varepsilon}_\\mathcal{S}S_{{\\mathsf{q}}}(\\Sigma)$}.\nConsider the relation $R\\subseteq X_{\\tau}\\times X_{{\\mathsf{q}}}$ defined by \n{$\\left(x_{\\tau},x_{{\\mathsf{q}}}\\right)\\in R$}\nif and only if \n{$\\mathbb{E}\\left[V\\left(H_{\\tau}(x_{\\tau}),H_{{\\mathsf{q}}}(x_{{\\mathsf{q}}})\\right)\\right]=\\mathbb{E}\\left[V\\left(x_{\\tau},x_{{\\mathsf{q}}}\\right)\\right]\\leq\\underline\\alpha\\left(\\varepsilon^q\\right)$}. Consider any \\mbox{$\\left(x_{\\tau},x_{{\\mathsf{q}}}\\right)\\in R$}. Condition (i) in Definition \\ref{APSR} is satisfied because\n\\begin{small}\n\\begin{equation} \n\\label{convexity1}\n\\left(\\mathbb{E}\\left[\\Vert x_{\\tau}-x_{{\\mathsf{q}}}\\Vert^q\\right]\\right)^{\\frac{1}{q}}\\leq\\left(\\underline\\alpha^{-1}\\left(\\mathbb{E}\\left[V(x_{\\tau},x_{{\\mathsf{q}}})\\right]\\right)\\right)^{\\frac{1}{q}}\\leq\\varepsilon.\n\\end{equation}\n\\end{small}\nWe used the convexity assumption of $\\underline\\alpha$ and the Jensen inequality \\cite{oksendal} to show the inequalities in (\\ref{convexity1}). Let us now show that condition (ii) in Definition\n\\ref{APSR} holds. \nConsider the transition \\mbox{$x_{\\tau}\\rTo^{p}_{\\tau} x'_{\\tau}=\\xi_{x_{\\tau}p}(\\tau)$} $\\mathds{P}$-a.s. in $S_{\\tau}(\\Sigma)$. 
Since $V$ is a common Lyapunov function for $\\Sigma$, we have\n\\begin{small}\n\\begin{align}\\label{b02}\n\\mathbb{E}\\left[V(x'_{\\tau},\\xi_{x_{{\\mathsf{q}}}p}(\\tau))\\right] &\\leq \\mathds{E}\\left[V(x_\\tau,x_{\\mathsf{q}})\\right] \\mathsf{e}^{-\\kappa\\tau}\\leq \\underline\\alpha\\left(\\varepsilon^q\\right) \\mathsf{e}^{-\\kappa\\tau}.\n\\end{align}\n\\end{small}\nSince \\mbox{${\\mathbb{R}}^n\\subseteq\\bigcup_{p\\in[\\mathbb{R}^n]_{\\eta}}\\mathcal{B}_{\\eta}(p)$}, there exists \\mbox{$x'_{{\\mathsf{q}}}\\in{X}_{{\\mathsf{q}}}$} such that \n\\begin{equation}\n\\left\\Vert\\overline{\\xi}_{x_{{\\mathsf{q}}}p}(\\tau)-x'_{{\\mathsf{q}}}\\right\\Vert\\leq\\eta, \\label{b04}\n\\end{equation}\nwhich, by the definition of $S_{\\mathsf{q}}(\\Sigma)$, implies the existence of the transition $x_{{\\mathsf{q}}}\\rTo^{p}_{{\\mathsf{q}}}x'_{{\\mathsf{q}}}$ in $S_{{\\mathsf{q}}}(\\Sigma)$. \nUsing Lemma \\ref{lem:moment est}, the concavity of $\\widehat\\gamma$, the Jensen inequality \\cite{oksendal}, the inequalities (\\ref{supplement}), (\\ref{bisim_cond}), (\\ref{b02}), (\\ref{b04}), and the triangle inequality, we obtain\n\\begin{small}\n\\begin{align*}\n\\mathbb{E}\\left[V(x'_{\\tau},x'_{{\\mathsf{q}}})\\right]&=\\mathbb{E}\\left[V(x'_\\tau,\\xi_{x_{{\\mathsf{q}}}p}(\\tau))+V(x'_{\\tau},x'_{\\mathsf{q}})-V(x'_\\tau,\\xi_{x_{{\\mathsf{q}}}p}(\\tau))\\right]= \\mathbb{E}\\left[V(x'_{\\tau},\\xi_{x_{{\\mathsf{q}}}p}(\\tau))\\right]+\\mathbb{E}\\left[V(x'_{\\tau},x'_{\\mathsf{q}})-V(x'_\\tau,\\xi_{x_{{\\mathsf{q}}}p}(\\tau))\\right]\\\\\\notag&\\leq\\underline\\alpha\\left(\\varepsilon^q\\right)\\mathsf{e}^{-\\kappa\\tau}+\\mathbb{E}\\left[\\widehat\\gamma\\left(\\left\\Vert\\xi_{x_{{\\mathsf{q}}}p}(\\tau)-x'_{{\\mathsf{q}}}\\right\\Vert\\right)\\right]\\leq\\underline\\alpha\\left(\\varepsilon^q\\right)\\mathsf{e}^{-\\kappa\\tau}+\\widehat\\gamma\\left(\\mathbb{E}\\left[\\left\\Vert\\xi_{x_{{\\mathsf{q}}}p}(\\tau)-\\overline{\\xi}_{x_{{\\mathsf{q}}}p}(\\tau)+\\overline{\\xi}_{x_{{\\mathsf{q}}}p}(\\tau)-x'_{{\\mathsf{q}}}\\right\\Vert\\right]\\right)\n\\\\\\notag&\\leq\\underline\\alpha\\left(\\varepsilon^q\\right)\\mathsf{e}^{-\\kappa\\tau}+\\widehat\\gamma\\left(\\mathbb{E}\\left[\\left\\Vert\\xi_{x_{{\\mathsf{q}}}p}(\\tau)-\\overline{\\xi}_{x_{{\\mathsf{q}}}p}(\\tau)\\right\\Vert\\right]+\\left\\Vert\\overline{\\xi}_{x_{{\\mathsf{q}}}p}(\\tau)-x'_{{\\mathsf{q}}}\\right\\Vert\\right)\\\\\\notag&\\leq\\underline\\alpha\\left(\\varepsilon^q\\right)\\mathsf{e}^{-\\kappa\\tau}+\\widehat\\gamma\\left(\\left(h_{[X_{\\tau0}]_\\eta}(\\tau)\\right)^{\\frac{1}{q}}+\\eta\\right)\\leq\\underline\\alpha\\left(\\varepsilon^q\\right).\n\\end{align*}\n\\end{small}Therefore, we conclude that $\\left(x'_{\\tau},x'_{{\\mathsf{q}}}\\right)\\in{R}$ and that condition (ii) in Definition \\ref{APSR} holds. 
Since $X_{\\tau 0}\\subseteq\\bigcup_{p\\in[\\mathbb{R}^n]_{\\eta}}\\mathcal{B}_{\\eta}(p)$, \nfor every $x_{\\tau 0}\\in{X_{\\tau 0}}$ there always exists \\mbox{$x_{{\\mathsf{q}} 0}\\in{X}_{{\\mathsf{q}} 0}$} such that $\\Vert{x_{\\tau0}}-x_{\\mathsf{q}0}\\Vert\\leq\\eta.$ Then,\n\\begin{small}\n\\begin{align}\\nonumber\n\\mathbb{E}\\left[V({x_{\\tau0}},x_{\\mathsf{q}0})\\right]=V({x_{\\tau0}},x_{\\mathsf{q}0})&\\leq\\overline\\alpha\\left(\\Vert x_{\\tau0}-x_{\\mathsf{q}0}\\Vert^q\\right)\\leq\\overline\\alpha\\left(\\eta^q\\right)\\leq\\underline\\alpha\\left(\\varepsilon^q\\right),\n\\end{align}\n\\end{small}because of (\\ref{bisim_cond1}) and since $\\overline\\alpha$ is a $\\mathcal{K}_\\infty$ function.\nHence, \\mbox{$\\left(x_{\\tau0},x_{\\mathsf{q}0}\\right)\\in{R}$}, implying that \\mbox{$S_{\\tau}(\\Sigma)\\preceq^{\\varepsilon}_\\mathcal{S}S_{{\\mathsf{q}}}(\\Sigma)$}. In a similar way, we can prove that \\mbox{$S_{{\\mathsf{q}}}(\\Sigma)\\preceq^{\\varepsilon}_{\\mathcal{S}}S_{\\tau}(\\Sigma)$} by showing that $R^{-1}$ is an $\\varepsilon$-approximate simulation relation from $S_{\\mathsf{q}}(\\Sigma)$ to $S_\\tau(\\Sigma)$. \n\\end{proof}\n\nNote that the results in \\cite[Theorem 4.1]{girard2} for non-probabilistic models are fully recovered by the statement in Theorem \\ref{main_theorem1} if the stochastic switched system $\\Sigma$ is not affected by any noise, \nimplying that $h_x^p(t)$ is identically zero for all $p\\in\\mathsf{P}$ and all $x\\in{\\mathbb{R}}^n$, \nand that the $\\delta$-GAS-M$_q$ common Lyapunov function simply reduces to a $\\delta$-GAS one. \n\n\n\\subsubsection{Multiple Lyapunov functions}\nIf a common $\\delta$-GAS-M$_q$ Lyapunov function does not exist or cannot be practically found, \none can still attempt to compute approximately bisimilar symbolic models by seeking mode-dependent Lyapunov functions and by restricting the set of switching signals using a condition on the dwell time $\\tau_d=\\widehat{N}\\tau$ for some $\\widehat{N}\\in{\\mathbb{N}}$. \n\nConsider a stochastic switched system $\\Sigma_{\\tau_d}$ and a pair $\\mathsf{q}=(\\tau,\\eta)$ of quantization parameters, where $\\tau$ is the sampling time and $\\eta$ is the state space quantization parameter. 
\nGiven $\\Sigma_{\\tau_d}$ and $\\mathsf{q}$, consider the following system:\n\\begin{small}\n\\begin{equation}\\label{T3}\nS_{\\mathsf{q}}\\left(\\Sigma_{\\tau_d}\\right)=(X_{\\mathsf{q}},X_{{\\mathsf{q}}0},U_{\\mathsf{q}},\\rTo_{\\mathsf{q}},Y_{\\mathsf{q}},H_{\\mathsf{q}}),\n\\end{equation}\n\\end{small}where $X_{\\mathsf{q}}=[{\\mathbb{R}}^n]_\\eta\\times\\mathsf{P}\\times\\left\\{0,\\ldots,\\widehat{N}-1\\right\\}$, $X_{{\\mathsf{q}}0}=[{\\mathbb{R}}^n]_\\eta\\times\\mathsf{P}\\times\\left\\{0\\right\\}$, $U_{\\mathsf{q}}=\\mathsf{P}$, $Y_{\\mathsf{q}}=Y_\\tau$, and\n\\begin{itemize}\n\\item $\\left(x_{\\mathsf{q}},p,i\\right)\\rTo_{\\mathsf{q}}^{p}\\left(x'_{\\mathsf{q}},p',i'\\right)$ if there exists $x'_{\\mathsf{q}}\\in[{\\mathbb{R}}^n]_\\eta$ such that $\\left\\Vert\\overline{\\xi}_{x_{\\mathsf{q}}p}(\\tau)-x'_{\\mathsf{q}}\\right\\Vert\\leq\\eta$ and one of the following holds:\n\n\\begin{itemize}\n\\item $i<\\widehat{N}-1$, $p'=p$, and $i'=i+1$;\n\n\\item $i=\\widehat{N}-1$, $p'=p$, and $i'=\\widehat{N}-1$;\n\n\\item $i=\\widehat{N}-1$, $p'\\neq p$, and $i'=0$.\n\\end{itemize}\n\n\n\\item $H_{\\mathsf{q}}(x_\\mathsf{q},p,i)=x_\\mathsf{q}$ for any $(x_\\mathsf{q},p,i)\\in[{\\mathbb{R}}^n]_\\eta\\times\\mathsf{P}\\times\\left\\{0,\\ldots,\\widehat{N}-1\\right\\}$.\n\\end{itemize}\n\n\nWe present the second main result of this subsection, \nwhich relates the existence of multiple Lyapunov functions for a stochastic switched system to that of a symbolic model, based on the state space discretization. In order to show the next result, we assume that $f_p(0_n)=0_n$ only if $\\Sigma_p$ is not affine and that $g_p(0_n)=0_{n\\times\\widehat{q}}$ for any $p\\in\\mathsf{P}$.\n\\begin{theorem}\\label{main_theorem2}\nConsider a stochastic switched system $\\Sigma_{\\tau_d}$. Let us assume that for any $p\\in\\mathsf{P}$, there exists a $\\delta$-GAS-M$_q$ Lyapunov function $V_p$, of the form explained in Lemma \\ref{lem:moment est}, for subsystem $\\Sigma_{\\tau_d,p}$. Moreover, assume that (\\ref{eq0}) holds for some $\\mu\\geq1$. If $\\tau_d>\\log{\\mu}\/\\kappa$, for $X_{\\tau0}=X_0\\times\\mathsf{P}\\times\\left\\{0\\right\\}$, where $X_0={\\mathbb{R}}^n$, any $\\varepsilon\\in{\\mathbb{R}}^+$, \nand any pair $\\mathsf{q}=(\\tau,\\eta)$ of quantization parameters satisfying\n\\begin{small}\n\\begin{align}\\label{bisim_cond_mul1}\n\\overline\\alpha\\left(\\eta^q\\right)&\\leq\\underline\\alpha\\left(\\varepsilon^q\\right),\\\\\\label{bisim_cond_mul}\n\\widehat\\gamma\\left(\\left(h_{[X_0]_\\eta}(\\tau)\\right)^{\\frac{1}{q}}+\\eta\\right)&\\leq\\frac{\\frac{1}{\\mu}-\\mathsf{e}^{-\\kappa\\tau_d}}{1-\\mathsf{e}^{-\\kappa\\tau_d}}\\left(1-\\mathsf{e}^{-\\kappa\\tau}\\right)\\underline\\alpha\\left(\\varepsilon^q\\right),\n\\end{align}\n\\end{small}we have that \\mbox{$S_{\\mathsf{q}}\\left(\\Sigma_{\\tau_d}\\right)\\cong_{\\mathcal{S}}^{\\varepsilon}S_{\\tau}\\left(\\Sigma_{\\tau_d}\\right)$}.\n\\end{theorem} \n\nIt can be readily seen that when we are interested in the dynamics of $\\Sigma_{\\tau_d}$ on a compact $\\mathsf{D}\\subset{\\mathbb{R}}^n$ of the form of a finite union of boxes, implying that $X_0=\\mathsf{D}$, and for a precision $\\varepsilon$, there always exists a sufficiently large value of $\\tau$ and a sufficiently small value of $\\eta$, such that $\\eta\\leq\\mathit{span}(\\mathsf{D})$ and the conditions in (\\ref{bisim_cond_mul1}) and (\\ref{bisim_cond_mul}) are satisfied.
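\nTo illustrate how conditions (\\ref{bisim_cond_mul1}) and (\\ref{bisim_cond_mul}) constrain the choice of the pair $\\mathsf{q}=(\\tau,\\eta)$, consider the following worked instance, in which all functions and constants are purely illustrative assumptions rather than quantities derived from a concrete system: let $\\underline\\alpha(r)=\\frac{1}{2}r$, $\\overline\\alpha(r)=2r$, $\\widehat\\gamma(r)=r$, $q=2$, $\\kappa=1$, $\\mu=1.2$, $\\tau=0.5$, and $\\widehat{N}=4$, so that $\\tau_d=2>\\log\\mu\/\\kappa\\approx0.18$. For the precision $\\varepsilon=0.2$, condition (\\ref{bisim_cond_mul1}) reads $2\\eta^2\\leq\\frac{1}{2}\\varepsilon^2=0.02$, i.e. $\\eta\\leq0.1$, while condition (\\ref{bisim_cond_mul}) becomes\n\\begin{small}\n\\begin{align}\\notag\n\\left(h_{[X_0]_\\eta}(\\tau)\\right)^{\\frac{1}{2}}+\\eta\\leq\\frac{\\frac{1}{1.2}-\\mathsf{e}^{-2}}{1-\\mathsf{e}^{-2}}\\left(1-\\mathsf{e}^{-0.5}\\right)\\cdot0.02\\approx6.4\\times10^{-3}.\n\\end{align}\n\\end{small}Hence, both conditions are met by any $\\eta\\leq6.4\\times10^{-3}-\\left(h_{[X_0]_\\eta}(\\tau)\\right)^{\\frac{1}{2}}$, provided the moment bound satisfies $h_{[X_0]_\\eta}(\\tau)<\\left(6.4\\times10^{-3}\\right)^2\\approx4.1\\times10^{-5}$; otherwise no quantization parameter $\\eta$ works and, consistently with the lower bound discussed next, one has to enlarge $\\tau$ or $\\varepsilon$.\n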
For a given fixed sampling time $\\tau$, the precision $\\varepsilon$ is lower bounded by \n\\begin{small}\n\\begin{equation}\\label{lower_bound_mul}\n\\varepsilon\\geq\\left(\\underline\\alpha^{-1}\\left(\\frac{\\widehat\\gamma\\left(\\left(h_{[X_0]_\\eta}(\\tau)\\right)^{\\frac{1}{q}}\\right)}{1-\\mathsf{e}^{-\\kappa\\tau}}\\cdot\\frac{1-\\mathsf{e}^{-\\kappa\\tau_d}}{\\frac{1}{\\mu}-\\mathsf{e}^{-\\kappa\\tau_d}}\\right)\\right)^{\\frac{1}{q}}.\n\\end{equation}\n\\end{small} \nThe properties of the bound in (\\ref{lower_bound_mul}) are analogous to those of the case of a common Lyapunov function. \n\nNote that $S_{\\mathsf{q}}\\left(\\Sigma_{\\tau_d}\\right)$ has a countable number of states; it is finite whenever one is interested in the dynamics of $\\Sigma_{\\tau_d}$ on a compact set $\\mathsf{D}\\subset{\\mathbb{R}}^n$, which is always the case in practice.\n\n\\begin{proof}\nThe proof is inspired by the proof of Theorem 4.2 in \\cite{girard2} for non-probabilistic switched systems. We start by proving \\mbox{$S_{\\tau}\\left(\\Sigma_{\\tau_d}\\right)\\preceq^{\\varepsilon}_\\mathcal{S}S_{{\\mathsf{q}}}\\left(\\Sigma_{\\tau_d}\\right)$}.\nConsider the relation $R\\subseteq X_{\\tau}\\times X_{{\\mathsf{q}}}$ defined by \n{$\\left(x_{\\tau},p_1,i_1,x_{{\\mathsf{q}}},p_2,i_2\\right)\\in R$}\nif and only if $p_1=p_2=p$, $i_1=i_2=i$, and $\\mathbb{E}\\left[V_p\\left(H_{\\tau}(x_{\\tau},p_1,i_1),H_{{\\mathsf{q}}}(x_{{\\mathsf{q}}},p_2,i_2)\\right)\\right]=\\mathbb{E}\\left[V_p\\left(x_{\\tau},x_{{\\mathsf{q}}}\\right)\\right]\\leq\\delta_i$, where $\\delta_0,\\ldots,\\delta_{\\widehat{N}}$ are given recursively by\\begin{small}$$\\delta_0=\\underline\\alpha\\left(\\varepsilon^q\\right),~~\\delta_{i+1}=\\mathsf{e}^{-\\kappa\\tau}\\delta_i+\\widehat\\gamma\\left(\\left(h_{[X_0]_\\eta}(\\tau)\\right)^{\\frac{1}{q}}+\\eta\\right).$$\\end{small} One can easily verify that\n\\begin{small}\n\\begin{align}\\nonumber\n\\delta_i=&\\mathsf{e}^{-i\\kappa\\tau}\\underline\\alpha\\left(\\varepsilon^q\\right)+\\widehat\\gamma\\left(\\left(h_{[X_0]_\\eta}(\\tau)\\right)^{\\frac{1}{q}}+\\eta\\right)\\frac{1-\\mathsf{e}^{-i\\kappa\\tau}}{1-\\mathsf{e}^{-\\kappa\\tau}}\\\\\\label{delta_i}=&\\frac{\\widehat\\gamma\\left(\\left(h_{[X_0]_\\eta}(\\tau)\\right)^{\\frac{1}{q}}+\\eta\\right)}{1-\\mathsf{e}^{-\\kappa\\tau}}+\\mathsf{e}^{-i\\kappa\\tau}\\left(\\underline\\alpha\\left(\\varepsilon^q\\right)-\\frac{\\widehat\\gamma\\left(\\left(h_{[X_0]_\\eta}(\\tau)\\right)^{\\frac{1}{q}}+\\eta\\right)}{1-\\mathsf{e}^{-\\kappa\\tau}}\\right).\n\\end{align}\n\\end{small}\nSince $\\mu\\geq1$, and from (\\ref{bisim_cond_mul}), one has \\begin{small}$$\\widehat\\gamma\\left(\\left(h_{[X_0]_\\eta}(\\tau)\\right)^{\\frac{1}{q}}+\\eta\\right)\\leq(1-\\mathsf{e}^{-\\kappa\\tau})\\underline\\alpha\\left(\\varepsilon^q\\right).$$\\end{small} It follows from (\\ref{delta_i}) that $\\delta_0\\geq\\delta_1\\geq\\cdots\\geq\\delta_{\\widehat{N}-1}\\geq\\delta_{\\widehat{N}}$.
From (\\ref{bisim_cond_mul}) and since $\\tau_d=\\widehat{N}\\tau$, we get \n\\begin{small}\n\\begin{align}\\label{Ntozero}\n\\delta_{\\widehat{N}}=&\\mathsf{e}^{-\\kappa\\tau_d}\\underline\\alpha\\left(\\varepsilon^q\\right)+\\widehat\\gamma\\left(\\left(h_{[X_0]_\\eta}(\\tau)\\right)^{\\frac{1}{q}}+\\eta\\right)\\frac{1-\\mathsf{e}^{-\\kappa\\tau_d}}{1-\\mathsf{e}^{-\\kappa\\tau}}\\leq\\mathsf{e}^{-\\kappa\\tau_d}\\underline\\alpha\\left(\\varepsilon^q\\right)+\\left(\\frac{1}{\\mu}-\\mathsf{e}^{-\\kappa\\tau_d}\\right)\\underline\\alpha\\left(\\varepsilon^q\\right)=\\frac{\\underline\\alpha\\left(\\varepsilon^q\\right)}{\\mu}.\n\\end{align}\n\\end{small}\nWe can now prove that $R$ is an $\\varepsilon$-approximate simulation relation from $S_{\\tau}\\left(\\Sigma_{\\tau_d}\\right)$ to $S_{{\\mathsf{q}}}\\left(\\Sigma_{\\tau_d}\\right)$.\nConsider any \\mbox{$\\left(x_{\\tau},p,i,x_{{\\mathsf{q}}},p,i\\right)\\in R$}. Using the convexity of $\\underline\\alpha_p$, the fact that it is a $\\mathcal{K}_\\infty$ function, and the Jensen inequality \\cite{oksendal}, we have:\n\\begin{small}\n\\begin{align}\\nonumber\n&\\underline\\alpha\\left(\\mathbb{E}\\left[\\Vert H_\\tau(x_{\\tau},p,i)-H_{\\mathsf{q}}(x_{{\\mathsf{q}}},p,i)\\Vert^q\\right]\\right)=\\underline\\alpha\\left(\\mathbb{E}\\left[\\Vert x_{\\tau}-x_{{\\mathsf{q}}}\\Vert^q\\right]\\right)\\\\\\notag&\\leq\\underline\\alpha_p\\left(\\mathbb{E}\\left[\\Vert x_{\\tau}-x_{{\\mathsf{q}}}\\Vert^q\\right]\\right)\\leq\\mathds{E}\\left[\\underline\\alpha_p\\left(\\Vert x_{\\tau}-x_{{\\mathsf{q}}}\\Vert^q\\right)\\right]\\leq\\mathbb{E}\\left[V_p(x_{\\tau},x_{{\\mathsf{q}}})\\right]\\leq\\delta_i\\leq\\delta_0.\n\\end{align}\n\\end{small}Therefore, we obtain\n\\begin{small}\n$\\left(\\mathbb{E}\\left[\\Vert x_{\\tau}-x_{{\\mathsf{q}}}\\Vert^q\\right]\\right)^{\\frac{1}{q}}\\leq\\left(\\underline\\alpha^{-1}\\left(\\delta_0\\right)\\right)^{\\frac{1}{q}}\\leq\\varepsilon$,\n\\end{small}because of $\\underline\\alpha\\in\\mathcal{K}_\\infty$. Hence, condition (i) in Definition \\ref{APSR} is satisfied.\nLet us now show that condition (ii) in Definition\n\\ref{APSR} holds. \nConsider the transition $(x_{\\tau},p,i)\\rTo^{p}_{\\tau} (x'_{\\tau},p',i')$ in $S_{\\tau}\\left(\\Sigma_{\\tau_d}\\right)$, where $x'_{\\tau}=\\xi_{x_{\\tau}p}(\\tau)$ $\\mathds{P}$-a.s. Since $V_p$ is a $\\delta$-GAS-M$_q$ Lyapunov function for subsystem $\\Sigma_{\\tau_d,p}$, we have\n\\begin{small}\n\\begin{align}\\label{b06}\n\\mathbb{E}\\left[V_p(x'_{\\tau},\\xi_{x_{{\\mathsf{q}}}p}(\\tau))\\right] &\\leq \\mathds{E}\\left[V_p(x_\\tau,x_{{\\mathsf{q}}})\\right] \\mathsf{e}^{-\\kappa\\tau}\\leq\\mathsf{e}^{-\\kappa\\tau}\\delta_i.\n\\end{align}\n\\end{small}\nSince \\mbox{${\\mathbb{R}}^n\\subseteq\\bigcup_{p\\in[\\mathbb{R}^n]_{\\eta}}\\mathcal{B}_{\\eta}(p)$}, there exists \\mbox{$x'_{{\\mathsf{q}}}\\in[\\mathbb{R}^n]_{\\eta}$} such that \n\\begin{small}\n\\begin{equation}\n\\left\\Vert\\overline{\\xi}_{x_{{\\mathsf{q}}}p}(\\tau)-x'_{{\\mathsf{q}}}\\right\\Vert\\leq\\eta.
\\label{b07}\n\\end{equation}\n\\end{small}Using Lemma \\ref{lem:moment est}, the fact that $\\widehat\\gamma$ is a $\\mathcal{K}_\\infty$ function, the concavity of $\\widehat\\gamma_p$ in \\eqref{supplement}, the Jensen inequality \\cite{oksendal}, the inequalities (\\ref{supplement}), (\\ref{b06}), (\\ref{b07}), and the triangle inequality, we obtain\n\\begin{small}\n\\begin{align}\\nonumber\n\\mathbb{E}\\left[V_p(x'_{\\tau},x'_{{\\mathsf{q}}})\\right]&=\\mathbb{E}\\left[V_p(x'_\\tau,\\xi_{x_{{\\mathsf{q}}}p}(\\tau))+V_p(x'_{\\tau},x'_{\\mathsf{q}})-V_p(x'_\\tau,\\xi_{x_{{\\mathsf{q}}}p}(\\tau))\\right]= \\mathbb{E}\\left[V_p(x'_{\\tau},\\xi_{x_{{\\mathsf{q}}}p}(\\tau))\\right]+\\mathbb{E}\\left[V_p(x'_{\\tau},x'_{\\mathsf{q}})-V_p(x'_\\tau,\\xi_{x_{{\\mathsf{q}}}p}(\\tau))\\right]\\\\\\notag&\\leq\\mathsf{e}^{-\\kappa\\tau}\\delta_i+\\mathbb{E}\\left[\\widehat\\gamma_p\\left(\\left\\Vert\\xi_{x_{{\\mathsf{q}}}p}(\\tau)-x'_{{\\mathsf{q}}}\\right\\Vert\\right)\\right]\\leq\\mathsf{e}^{-\\kappa\\tau}\\delta_i+\\widehat\\gamma_p\\left(\\mathbb{E}\\left[\\left\\Vert\\xi_{x_{{\\mathsf{q}}}p}(\\tau)-x'_{{\\mathsf{q}}}\\right\\Vert\\right]\\right)\\\\\\label{b08}\n&\\leq\\mathsf{e}^{-\\kappa\\tau}\\delta_i+\\widehat\\gamma\\left(\\mathbb{E}\\left[\\left\\Vert\\xi_{x_{{\\mathsf{q}}}p}(\\tau)-\\overline{\\xi}_{x_{{\\mathsf{q}}}p}(\\tau)\\right\\Vert\\right]+\\left\\Vert\\overline{\\xi}_{x_{{\\mathsf{q}}}p}(\\tau)-x'_{{\\mathsf{q}}}\\right\\Vert\\right)\\leq\\mathsf{e}^{-\\kappa\\tau}\\delta_i+\\widehat\\gamma\\left(\\left(h_{[X_0]_\\eta}(\\tau)\\right)^{\\frac{1}{q}}+\\eta\\right)=\\delta_{i+1}.\n\\end{align}\n\\end{small}\nWe now examine three separate cases:\n\\begin{itemize}\n\\item If $i<\\widehat{N}-1$, then $p'=p$, and $i'=i+1$; since, from (\\ref{b08}), $\\mathds{E}\\left[V_p(x'_{\\tau},x'_{{\\mathsf{q}}})\\right]\\leq\\delta_{i+1}$, we conclude that $(x'_{\\tau},p,i+1,x'_{{\\mathsf{q}}},p,i+1)\\in R$; \n\n\\item If $i=\\widehat{N}-1$, and $p'=p$, then $i'=\\widehat{N}-1$; since, from (\\ref{b08}), $\\mathds{E}\\left[V_p(x'_{\\tau},x'_{{\\mathsf{q}}})\\right]\\leq\\delta_{\\widehat{N}}\\leq\\delta_{\\widehat{N}-1}$, we conclude that $(x'_{\\tau},p,\\widehat{N}-1,x'_{{\\mathsf{q}}},p,\\widehat{N}-1)\\in R$; \n\n\\item If $i=\\widehat{N}-1$, and $p'\\neq{p}$, then $i'=0$; from (\\ref{Ntozero}) and (\\ref{b08}), $\\mathds{E}\\left[V_p(x'_{\\tau},x'_{{\\mathsf{q}}})\\right]\\leq\\delta_{\\widehat{N}}\\leq\\delta_0\/\\mu$. From (\\ref{eq0}), it follows that $\\mathds{E}\\left[V_{p'}(x'_{\\tau},x'_{{\\mathsf{q}}})\\right]\\leq\\mu \\mathds{E}\\left[V_p(x'_{\\tau},x'_{{\\mathsf{q}}})\\right]\\leq\\delta_0$. Hence, $(x'_{\\tau},p',0,x'_{{\\mathsf{q}}},p',0)\\in R$.\n\\end{itemize}\nTherefore, we conclude that condition (ii) in Definition \\ref{APSR} holds. Since $X_0\\subseteq\\bigcup_{p\\in[\\mathbb{R}^n]_{\\eta}}\\mathcal{B}_{\\eta}(p)$, for every $\\left(x_{\\tau 0},p,0\\right)\\in X_{\\tau0}$ there always exists $\\left(x_{{\\mathsf{q}}0},p,0\\right)\\in{X}_{{\\mathsf{q}}0}$ such that $\\Vert{x_{\\tau0}}-x_{{\\mathsf{q}}0}\\Vert\\leq\\eta$.
Then,\n\\begin{small}\n\\begin{align}\\nonumber\n&\\mathbb{E}\\left[V_p(H_\\tau(x_{\\tau0},p,0),H_{\\mathsf{q}}(x_{{\\mathsf{q}}0},p,0))\\right]=V_p({x_{\\tau0}},x_{{\\mathsf{q}}0})\\leq\\overline\\alpha_p\\left(\\left\\| x_{\\tau0}-x_{{\\mathsf{q}}0}\\right\\|^q\\right)\\leq\\overline\\alpha\\left(\\Vert x_{\\tau0}-x_{{\\mathsf{q}}0}\\Vert^q\\right)\\leq\\overline\\alpha\\left(\\eta^q\\right)\\leq\\underline\\alpha\\left(\\varepsilon^q\\right),\n\\end{align}\n\\end{small}because of (\\ref{bisim_cond_mul1}) and since $\\overline\\alpha$ is a $\\mathcal{K}_\\infty$ function.\nHence, $V_p(x_{\\tau0},x_{{\\mathsf{q}}0})\\leq\\delta_0$ and \\mbox{$\\left(x_{\\tau0},p,0,x_{{\\mathsf{q}}0},p,0\\right)\\in{R}$} implying that \\mbox{$S_{\\tau}(\\Sigma_{\\tau_d})\\preceq^{\\varepsilon}_\\mathcal{S}S_{{\\mathsf{q}}}(\\Sigma_{\\tau_d})$}. In a similar way, we can prove that \\mbox{$S_{{\\mathsf{q}}}\\left(\\Sigma_{\\tau_d}\\right)\\preceq^{\\varepsilon}_{\\mathcal{S}}S_{\\tau}\\left(\\Sigma_{\\tau_d}\\right)$} by showing that $R^{-1}$ is an $\\varepsilon$-approximate simulation relation from $S_{\\mathsf{q}}(\\Sigma_{\\tau_d})$ to $S_\\tau(\\Sigma_{\\tau_d})$. \n\\end{proof}\n\nAs before, Theorem \\ref{main_theorem2} \nsubsumes \\cite[Theorem 4.2]{girard2} over non-probabilistic models. \n\n\\subsection{Second approach}\nThis subsection contains the second set of main results of the paper, providing bisimilar symbolic models without any state space discretization.\n\n\\subsubsection{Common Lyapunov function}\nFirst, we show one of the main results of this subsection on the construction of symbolic models based on the existence of a common $\\delta$-GAS-M$_q$ Lyapunov function. \nWe proceed by introducing two fully symbolic systems for the concrete system $\\Sigma$.\nConsider a stochastic switched system $\\Sigma$ and a triple $\\overline{\\mathsf{q}}=\\left(\\tau,N,x_s\\right)$ of parameters, where $\\tau$ is the sampling time, $N\\in{\\mathbb{N}}$ is a \\emph{temporal horizon}, and $x_s\\in{\\mathbb{R}}^n$ is a \\emph{source state}.\nGiven $\\Sigma$ and $\\overline{\\mathsf{q}}$, consider the following systems:\n\\begin{small}\n\\begin{align}\\notag\nS_{\\overline{\\mathsf{q}}}(\\Sigma)&=(X_{\\overline{\\mathsf{q}}},X_{\\overline{\\mathsf{q}}0},U_{\\overline{\\mathsf{q}}},\\rTo_{\\overline{\\mathsf{q}}},Y_{\\overline{\\mathsf{q}}},H_{\\overline{\\mathsf{q}}}),\\\\\\notag\\overline{S}_{\\overline{\\mathsf{q}}}(\\Sigma)&=(X_{\\overline{\\mathsf{q}}},X_{\\overline{\\mathsf{q}}0},U_{\\overline{\\mathsf{q}}},\\rTo_{\\overline{\\mathsf{q}}},Y_{\\overline{\\mathsf{q}}},\\overline{H}_{\\overline{\\mathsf{q}}}),\n\\end{align}\n\\end{small}where $X_{\\overline{\\mathsf{q}}}=\\mathsf{P}^N$, $X_{\\overline{{\\mathsf{q}}}0}=X_{\\overline{\\mathsf{q}}}$, $U_{\\overline{\\mathsf{q}}}=\\mathsf{P}$, $Y_{\\overline{\\mathsf{q}}}=Y_{\\tau}$, and\n\\begin{itemize}\n\\item $x_{\\overline{{\\mathsf{q}}}}\\rTo_{\\overline{\\mathsf{q}}}^{p}x'_{\\overline{\\mathsf{q}}}$, where $x_{\\overline{{\\mathsf{q}}}}=(p_1,p_2,\\ldots,p_N)$, if and only if $x'_{\\overline{{\\mathsf{q}}}}=(p_2,\\ldots,p_N,p)$;\n\\item $H_{\\overline{\\mathsf{q}}}(x_{\\overline{{\\mathsf{q}}}})=\\xi_{x_sx_{\\overline{{\\mathsf{q}}}}}(N\\tau)$ $\\left(\\overline{H}_{\\overline{\\mathsf{q}}}(x_{\\overline{{\\mathsf{q}}}})=\\overline\\xi_{x_sx_{\\overline{{\\mathsf{q}}}}}(N\\tau)\\right)$.\n\\end{itemize}\n\nNote that we have abused notation by identifying $x_{\\overline{{\\mathsf{q}}}}=(p_1,p_2,\\ldots,p_N)$ with a switching signal obtained by the concatenation of modes $p_i$ \\big(i.e.
$x_{\\overline{{\\mathsf{q}}}}(t)=p_i$ for any $t\\in[(i-1)\\tau,i\\tau[$\\big) for $i=1,\\ldots,N$. Notice that the proposed system $S_{\\overline{{\\mathsf{q}}}}(\\Sigma)$ $\\left(\\text{resp.}~\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma)\\right)$ is symbolic and deterministic in the sense of Definition \\ref{system}. Note that $H_{\\overline{{\\mathsf{q}}}}$ and $\\overline{H}_{\\overline{{\\mathsf{q}}}}$ are mappings from a non-probabilistic point $x_{\\overline{{\\mathsf{q}}}}$ to the random variable $\\xi_{x_sx_{\\overline{{\\mathsf{q}}}}}(N\\tau)$ and to the one with a Dirac probability distribution centered at $\\overline\\xi_{x_sx_{\\overline{{\\mathsf{q}}}}}(N\\tau)$, respectively. One can readily verify that the transition relation of $S_{\\overline{\\mathsf{q}}}(\\Sigma)$ (resp. $\\overline{S}_{\\overline{\\mathsf{q}}}(\\Sigma)$) admits a very compact representation in the form of a shift operator, and that such symbolic systems do not require any continuous space discretization; a small illustration is provided after Lemma \\ref{lemma4} below.\n\nBefore providing the main results, we need the following technical lemmas.\n\n\\begin{lemma}\\label{lemma1}\nConsider a stochastic switched system $\\Sigma$, admitting a common $\\delta$-GAS-M$_q$ Lyapunov function $V$, and consider its corresponding symbolic model $\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma)$. We have:\n\\begin{small}\n\\begin{align}\\label{upper_bound}\n\\overline\\eta\\leq&\\left(\\underline\\alpha^{-1}\\left(\\mathsf{e}^{-\\kappa N\\tau}\\max_{p\\in \\mathsf{P}}V\\left(\\overline\\xi_{x_sp}(\\tau),x_s\\right)\\right)\\right)^{1\/q},\n\\end{align}\n\\end{small}\nwhere\n\\begin{small}\n\\begin{align}\\label{eta}\n\\overline\\eta:=\\max_{\\substack{p\\in\\mathsf{P},x_{\\overline{{\\mathsf{q}}}}\\in X_{\\overline{{\\mathsf{q}}}}\\\\x_{\\overline{{\\mathsf{q}}}}\\rTo_{\\overline{{\\mathsf{q}}}}^{p}x'_{\\overline{{\\mathsf{q}}}}}}\\left\\Vert\\overline\\xi_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{\\overline{{\\mathsf{q}}}})p}(\\tau)-\\overline{H}_{\\overline{{\\mathsf{q}}}}\\left(x'_{\\overline{{\\mathsf{q}}}}\\right)\\right\\Vert.\n\\end{align}\n\\end{small}\n\\end{lemma}\n\n\\begin{proof}\nLet $x_{\\overline{{\\mathsf{q}}}}\\in X_{\\overline{{\\mathsf{q}}}}$, where $x_{\\overline{{\\mathsf{q}}}}=\\left(p_1,p_2,\\ldots,p_N\\right)$, and $p\\in U_{\\overline{{\\mathsf{q}}}}$. Using the definition of $\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma)$, one obtains $x_{\\overline{{\\mathsf{q}}}}\\rTo_{\\overline{{\\mathsf{q}}}}^{p}x'_{\\overline{{\\mathsf{q}}}}$, where $x'_{\\overline{{\\mathsf{q}}}}=\\left(p_2,\\ldots,p_N,p\\right)$.
Since $V$ is a common $\\delta$-GAS-M$_q$ Lyapunov function for $\\Sigma$ and based on the manipulations in the proof of Theorem \\ref{theorem2}, we have:\n\\begin{small}\n\\begin{align}\\notag\n\\underline\\alpha\\left(\\left\\Vert\\overline\\xi_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{\\overline{{\\mathsf{q}}}})p}(\\tau)-\\overline{H}_{\\overline{{\\mathsf{q}}}}\\left(x'_{\\overline{{\\mathsf{q}}}}\\right)\\right\\Vert^q\\right)&\\leq V\\left(\\overline\\xi_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{\\overline{{\\mathsf{q}}}})p}(\\tau),\\overline{H}_{\\overline{{\\mathsf{q}}}}\\left(x'_{\\overline{{\\mathsf{q}}}}\\right)\\right)=V\\left(\\overline\\xi_{\\overline\\xi_{x_sx_{\\overline{{\\mathsf{q}}}}}(N\\tau)p}(\\tau),\\overline\\xi_{x_sx'_{\\overline{{\\mathsf{q}}}}}(N\\tau)\\right)\\\\\\notag&=V\\left(\\overline\\xi_{\\overline\\xi_{x_sp_1}(\\tau)(p_2,\\ldots,p_N,p)}(N\\tau),\\overline\\xi_{x_s(p_2,\\ldots,p_N,p)}(N\\tau)\\right)\\leq\\mathsf{e}^{-\\kappa N\\tau}V\\left(\\overline\\xi_{x_sp_1}(\\tau),x_s\\right).\n\\end{align}\n\\end{small}\nHence, one gets\n\\begin{small}\n\\begin{align}\\label{upper_bound2}\n\\left\\Vert\\overline\\xi_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{\\overline{{\\mathsf{q}}}})p}(\\tau)-\\overline{H}_{\\overline{{\\mathsf{q}}}}\\left(x'_{\\overline{{\\mathsf{q}}}}\\right)\\right\\Vert\\leq\\left(\\underline\\alpha^{-1}\\left(\\mathsf{e}^{-\\kappa N\\tau}V\\left(\\overline\\xi_{x_sp_1}(\\tau),x_s\\right)\\right)\\right)^{1\/q},\n\\end{align}\n\\end{small}because of $\\underline\\alpha\\in\\mathcal{K}_\\infty$. Since the inequality \\eqref{upper_bound2} holds for all $x_{\\overline{{\\mathsf{q}}}}\\in X_{\\overline{{\\mathsf{q}}}}$ and $p\\in U_{\\overline{{\\mathsf{q}}}}$, and $\\underline\\alpha\\in\\mathcal{K}_\\infty$, inequality \\eqref{upper_bound} holds. \n\\end{proof}\n\nThe next lemma provides a similar result as the one of Lemma \\ref{lemma1}, but by using the symbolic model $S_{\\overline{{\\mathsf{q}}}}(\\Sigma)$ rather than $\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma)$.\n\\begin{lemma}\\label{lemma4}\nConsider a stochastic switched system $\\Sigma$, admitting a common $\\delta$-GAS-M$_q$ Lyapunov function $V$, and consider its corresponding symbolic model $S_{\\overline{{\\mathsf{q}}}}(\\Sigma)$. One has:\n\\begin{small}\n\\begin{align}\\label{upper_bound4}\n\\widehat\\eta\\leq&\\left(\\underline\\alpha^{-1}\\left(\\mathsf{e}^{-\\kappa N\\tau}\\max_{p\\in\\mathsf{P}}\\mathds{E}\\left[V\\left(\\xi_{x_sp}(\\tau),x_s\\right)\\right]\\right)\\right)^{1\/q},\n\\end{align}\n\\end{small}\nwhere\n\\begin{small}\n\\begin{align}\\label{eta1}\n\\widehat\\eta:=\\max_{\\substack{p\\in\\mathsf{P},x_{\\overline{{\\mathsf{q}}}}\\in X_{\\overline{{\\mathsf{q}}}}\\\\x_{\\overline{{\\mathsf{q}}}}\\rTo_{\\overline{{\\mathsf{q}}}}^px'_{\\overline{{\\mathsf{q}}}}}}\\mathds{E}\\left[\\left\\Vert\\xi_{H_{\\overline{{\\mathsf{q}}}}(x_{\\overline{{\\mathsf{q}}}})p}(\\tau)-H_{\\overline{{\\mathsf{q}}}}\\left(x'_{\\overline{{\\mathsf{q}}}}\\right)\\right\\Vert\\right].\n\\end{align}\n\\end{small}\n\\end{lemma}\n\n\\begin{proof}\nThe proof is similar to the one of Lemma \\ref{lemma1} and can be shown by using convexity of $\\underline\\alpha$ and Jensen inequality \\cite{oksendal}. \n\\end{proof}\n\nWe can now present the first main result of this subsection, relating the existence of a common $\\delta$-GAS-M$_q$ Lyapunov function to the construction of a bisimilar finite abstraction without any continuous space discretization. 
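\nBefore doing so, we briefly illustrate the shift structure of the proposed abstractions and the role of the bound \\eqref{upper_bound} on a worked instance, in which all values are purely illustrative assumptions: for $\\mathsf{P}=\\left\\{1,2\\right\\}$ and $N=3$, the abstraction $\\overline{S}_{\\overline{\\mathsf{q}}}(\\Sigma)$ has $m^N=2^3=8$ states; from the state $x_{\\overline{{\\mathsf{q}}}}=(1,2,1)$, the input $p=2$ yields the unique successor $x'_{\\overline{{\\mathsf{q}}}}=(2,1,2)$, whose output $\\overline{H}_{\\overline{{\\mathsf{q}}}}(x'_{\\overline{{\\mathsf{q}}}})=\\overline\\xi_{x_sx'_{\\overline{{\\mathsf{q}}}}}(3\\tau)$ is obtained from the source state $x_s$ under the switching signal $(2,1,2)$. Moreover, if $\\underline\\alpha(r)=\\frac{1}{2}r$, $q=2$, $\\kappa=1$, $\\tau=0.5$, and $\\max_{p\\in\\mathsf{P}}V\\left(\\overline\\xi_{x_sp}(\\tau),x_s\\right)=1$, the bound \\eqref{upper_bound} gives $\\overline\\eta\\leq\\left(2\\mathsf{e}^{-0.5N}\\right)^{\\frac{1}{2}}$, which decays exponentially with the temporal horizon $N$; e.g., $N=20$ already guarantees $\\overline\\eta\\leq10^{-2}$.\n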
In order to show the next result, we assume that $f_p(0_n)=0_n$ only if $\\Sigma_p$ is not affine and $g_p(0_n)=0_{n\\times\\widehat{q}}$ for any $p\\in\\mathsf{P}$.\n\n\\begin{theorem}\\label{main_theorem}\nConsider a stochastic switched system $\\Sigma$ admitting a common $\\delta$-GAS-M$_q$ Lyapunov function $V$, of the form of the one explained in Lemma \\ref{lem:moment est}. Let $\\overline\\eta$ be given by \\eqref{eta}. For any $\\varepsilon\\in{\\mathbb{R}}^+$ and any triple $\\overline{\\mathsf{q}}=\\left(\\tau,N,x_s\\right)$ of parameters satisfying\n\\begin{small}\n\\begin{align}\n\\label{bisim_cond11}\n\\mathsf{e}^{-\\kappa\\tau}\\underline\\alpha\\left(\\varepsilon^q\\right)+\\widehat\\gamma\\left(\\left(h_{x_s}((N+1)\\tau)\\right)^{\\frac{1}{q}}+\\overline\\eta\\right)&\\leq\\underline\\alpha\\left(\\varepsilon^q\\right),\n\\end{align}\n\\end{small}the relation \\begin{small}$$R=\\left\\{\\left(x_\\tau,x_{\\overline{{\\mathsf{q}}}}\\right)\\in X_\\tau\\times X_{\\overline{{\\mathsf{q}}}}\\,\\,|\\,\\,\\mathds{E}\\left[V\\left(x_\\tau,\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{\\overline{{\\mathsf{q}}}})\\right)\\right]\\leq\\underline\\alpha\\left(\\varepsilon^q\\right)\\right\\}$$\\end{small}is an $\\varepsilon$-approximate bisimulation relation between $\\overline{S}_{\\overline{\\mathsf{q}}}(\\Sigma)$ and $S_{\\tau}(\\Sigma)$.\n\\end{theorem}\n\n\\begin{proof}\nWe start by proving that $R$ is an $\\varepsilon$-approximate simulation relation from $S_{\\tau}(\\Sigma)$ to $\\overline{S}_{{\\overline{{\\mathsf{q}}}}}(\\Sigma)$. Consider any \\mbox{$\\left(x_{\\tau},x_{{\\overline{{\\mathsf{q}}}}}\\right)\\in R$}. Condition (i) in Definition \\ref{APSR} is satisfied because\n\\begin{small}\n\\begin{equation}\n\\label{convexity}\n(\\mathbb{E}[\\Vert x_{\\tau}-\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}})\\Vert^q])^{\\frac{1}{q}}\\leq\\left(\\underline\\alpha^{-1}\\left(\\mathbb{E}\\left[V\\left(x_{\\tau},\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}})\\right)\\right]\\right)\\right)^{\\frac{1}{q}}\\leq\\varepsilon.\n\\end{equation}\n\\end{small}We used the convexity assumption of $\\underline\\alpha$ and the Jensen inequality \\cite{oksendal} to show the inequalities in (\\ref{convexity}). Let us now show that condition (ii) in Definition\n\\ref{APSR} holds. Consider the transition \\mbox{$x_{\\tau}\\rTo^{p}_{\\tau} x'_{\\tau}=\\xi_{x_{\\tau}p}(\\tau)$} $\\mathds{P}$-a.s. in $S_{\\tau}(\\Sigma)$. Since $V$ is a common $\\delta$-GAS-M$_q$ Lyapunov function for $\\Sigma$, we have (cf. 
proof of Theorem \\ref{theorem2})\n\\begin{small}\n\\begin{align}\\label{b020}\n\\mathbb{E}[V(x'_{\\tau},\\xi_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}})p}(\\tau))] \\leq \\mathds{E}[V(x_\\tau,\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}}))] \\mathsf{e}^{-\\kappa\\tau}\\leq \\underline\\alpha(\\varepsilon^q) \\mathsf{e}^{-\\kappa\\tau}.\n\\end{align}\n\\end{small}Note that, by the definition of $\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma)$, there exists $x_{{\\overline{{\\mathsf{q}}}}}\\rTo^{p}_{{\\overline{{\\mathsf{q}}}}}x'_{{\\overline{{\\mathsf{q}}}}}$ in $\\overline{S}_{{\\overline{{\\mathsf{q}}}}}(\\Sigma)$.\nUsing Lemma \\ref{lem:moment est}, the concavity of $\\widehat\\gamma$, the Jensen inequality \\cite{oksendal}, equation \\eqref{eta}, the inequalities (\\ref{supplement}), (\\ref{bisim_cond11}), (\\ref{b020}), and the triangle inequality, we obtain\n\\begin{small}\n\\begin{align*}\n\\mathbb{E}[V(x'_{\\tau},\\overline{H}_{\\overline{{\\mathsf{q}}}}(x'_{{\\overline{{\\mathsf{q}}}}}))]=&\\mathbb{E}[V(x'_\\tau,\\xi_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}})p}(\\tau))+V(x'_{\\tau},\\overline{H}_{\\overline{{\\mathsf{q}}}}(x'_{\\overline{{\\mathsf{q}}}}))-V(x'_\\tau,\\xi_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}})p}(\\tau))]\\\\ \\notag\n=& \\mathbb{E}[V(x'_{\\tau},\\xi_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}})p}(\\tau))]+\\mathbb{E}[V(x'_{\\tau},\\overline{H}_{\\overline{{\\mathsf{q}}}}(x'_{\\overline{{\\mathsf{q}}}}))-V(x'_\\tau,\\xi_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}})p}(\\tau))]\\\\\\notag\\leq&\\underline\\alpha(\\varepsilon^q)\\mathsf{e}^{-\\kappa\\tau}+\\mathbb{E}[\\widehat\\gamma(\\Vert\\xi_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}})p}(\\tau)-\\overline{H}_{\\overline{{\\mathsf{q}}}}(x'_{{\\overline{{\\mathsf{q}}}}})\\Vert)]\\\\\\notag\n\\leq&\\underline\\alpha(\\varepsilon^q)\\mathsf{e}^{-\\kappa\\tau}+\\widehat\\gamma(\\mathbb{E}[\\Vert\\xi_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}})p}(\\tau)-\\overline{\\xi}_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}})p}(\\tau)+\\overline{\\xi}_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}})p}(\\tau)-\\overline{H}_{\\overline{{\\mathsf{q}}}}(x'_{{\\overline{{\\mathsf{q}}}}})\\Vert])\\\\\\notag\n\\leq&\\underline\\alpha(\\varepsilon^q)\\mathsf{e}^{-\\kappa\\tau}+\\widehat\\gamma(\\mathbb{E}[\\Vert\\xi_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}})p}(\\tau)-\\overline{\\xi}_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}})p}(\\tau)\\Vert]+\\Vert\\overline{\\xi}_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}})p}(\\tau)-\\overline{H}_{\\overline{{\\mathsf{q}}}}(x'_{{\\overline{{\\mathsf{q}}}}})\\Vert)\\\\\\notag\n\\leq&\\underline\\alpha(\\varepsilon^q)\\mathsf{e}^{-\\kappa\\tau}+\\widehat\\gamma((h_{x_s}((N+1)\\tau))^{\\frac{1}{q}}+\\overline\\eta)\\leq\\underline\\alpha(\\varepsilon^q).\n\\end{align*}\n\\end{small}Therefore, we conclude that \\mbox{$\\left(x'_{\\tau},x'_{{\\overline{{\\mathsf{q}}}}}\\right)\\in{R}$} and that condition (ii) in Definition \\ref{APSR} holds.\n\nIn a similar way, we can prove that $R^{-1}$ is an\n$\\varepsilon$-approximate simulation relation from $\\overline{S}_{{\\overline{{\\mathsf{q}}}}}(\\Sigma)$ to $S_{\\tau}(\\Sigma)$ implying that $R$ is
an $\\varepsilon$-approximate bisimulation relation between $\\overline{S}_{{\\overline{{\\mathsf{q}}}}}(\\Sigma)$ and $S_\\tau(\\Sigma)$.\n\\end{proof}\n\nNote that one can also use any over-approximation of $\\overline\\eta$ such as the one in \\eqref{upper_bound} instead of $\\overline\\eta$ in condition \\eqref{bisim_cond11}. By choosing $N$ sufficiently large, one can enforce $h_{x_s}((N+1)\\tau)$ and $\\overline\\eta$ to be sufficiently small. Hence, it can be readily seen that for a given precision $\\varepsilon$,\nthere always exists a sufficiently large value of $N$ such that the condition in (\\ref{bisim_cond11}) is satisfied.\n\nNote that the results in \\cite{corronc} for non-probabilistic models are fully recovered by the statement in Theorem \\ref{main_theorem} if $\\Sigma$ is not affected by any noise.\n\nThe next theorem provides a result that is similar to the one of Theorem \\ref{main_theorem}, but by using the symbolic model $S_{{\\overline{{\\mathsf{q}}}}}(\\Sigma)$.\n\n\\begin{theorem}\\label{main_theorem3}\nConsider a stochastic switched system $\\Sigma$, admitting a common $\\delta$-GAS-M$_q$ Lyapunov function $V$. Let $\\widehat\\eta$ be given by \\eqref{eta1}. For any $\\varepsilon\\in{\\mathbb{R}}^+$ and any triple $\\overline{\\mathsf{q}}=\\left(\\tau,N,x_s\\right)$ of parameters satisfying\n\\begin{small}\n\\begin{align}\n\\label{bisim_cond3}\n\\mathsf{e}^{-\\kappa\\tau}\\underline\\alpha\\left(\\varepsilon^q\\right)+\\widehat\\gamma\\left(\\widehat\\eta\\right)&\\leq\\underline\\alpha\\left(\\varepsilon^q\\right),\n\\end{align}\n\\end{small}\nthe relation \\begin{small}$$R=\\left\\{(x_\\tau,x_{\\overline{{\\mathsf{q}}}})\\in X_\\tau\\times X_{\\overline{{\\mathsf{q}}}}\\,\\,|\\,\\,\\mathds{E}\\left[V(x_\\tau,H_{\\overline{{\\mathsf{q}}}}(x_{\\overline{{\\mathsf{q}}}}))\\right]\\leq\\underline\\alpha\\left(\\varepsilon^q\\right)\\right\\}$$\\end{small}is an $\\varepsilon$-approximate bisimulation relation between ${S}_{\\overline{\\mathsf{q}}}(\\Sigma)$ and $S_{\\tau}(\\Sigma)$.\n\\end{theorem}\n\n\\begin{proof}\nThe proof is similar to the one of Theorem \\ref{main_theorem}.\n\\end{proof}\n\nHere, one can also use any over-approximation of $\\widehat\\eta$ such as the one in \\eqref{upper_bound4} instead of $\\widehat\\eta$ in condition \\eqref{bisim_cond3}. Finally, we establish the results on the existence of the symbolic model $\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma)$ (resp. $S_{\\overline{{\\mathsf{q}}}}(\\Sigma)$) such that \\mbox{$\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma)\\cong_{\\mathcal{S}}^{\\varepsilon}S_\\tau(\\Sigma)$} (resp. \\mbox{$S_{\\overline{{\\mathsf{q}}}}(\\Sigma)\\cong_{\\mathcal{S}}^{\\varepsilon}S_\\tau(\\Sigma)$}).\n\n\\begin{theorem}\\label{main_theorem5}\nConsider the result in Theorem \\ref{main_theorem}. If we choose: \\begin{small}$$X_{\\tau0}=\\{x\\in{\\mathbb{R}}^n\\,\\,|\\,\\,\\Vert x-\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}0})\\Vert\\leq\\left(\\overline\\alpha^{-1}\\left(\\underline\\alpha\\left(\\varepsilon^q\\right)\\right)\\right)^{\\frac{1}{q}},~\\forall x_{{\\overline{{\\mathsf{q}}}}0}\\in X_{{\\overline{{\\mathsf{q}}}}0}\\},$$\\end{small}then we have \\mbox{$\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma)\\cong_{\\mathcal{S}}^{\\varepsilon}S_\\tau(\\Sigma)$}.\n\\end{theorem}\n\n\\begin{proof}\nWe start by proving that \\mbox{$S_{\\tau}(\\Sigma)\\preceq^{\\varepsilon}_\\mathcal{S}\\overline{S}_{{\\overline{{\\mathsf{q}}}}}(\\Sigma)$}.
For every $x_{\\tau 0}\\in{X_{\\tau 0}}$, there always exists \\mbox{$x_{{\\overline{{\\mathsf{q}}}} 0}\\in{X}_{{\\overline{{\\mathsf{q}}}} 0}$} such that $\\Vert{x_{\\tau0}}-\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}0})\\Vert\\leq\\left(\\overline\\alpha^{-1}\\left(\\underline\\alpha\\left(\\varepsilon^q\\right)\\right)\\right)^{\\frac{1}{q}}$. Then,\n\\begin{small}\n\\begin{align}\\nonumber\n\\mathbb{E}\\left[V\\left({x_{\\tau0}},\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}0})\\right)\\right]&=V\\left({x_{\\tau0}},\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}0})\\right)\\leq\\overline\\alpha\\left(\\left\\Vert x_{\\tau0}-\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}0})\\right\\Vert^q\\right)\\leq\\underline\\alpha\\left(\\varepsilon^q\\right),\n\\end{align}\n\\end{small}since $\\overline\\alpha$ is a $\\mathcal{K}_\\infty$ function.\nHence, \\mbox{$\\left(x_{\\tau0},x_{{\\overline{{\\mathsf{q}}}}0}\\right)\\in{R}$} implying that \\mbox{$S_{\\tau}(\\Sigma)\\preceq^{\\varepsilon}_\\mathcal{S}\\overline{S}_{{\\overline{{\\mathsf{q}}}}}(\\Sigma)$}. In a similar way, we can show that \\mbox{$\\overline{S}_{{\\overline{{\\mathsf{q}}}}}(\\Sigma)\\preceq^{\\varepsilon}_{\\mathcal{S}}S_{\\tau}(\\Sigma)$}, equipped with the relation $R^{-1}$, which completes the proof.\n\\end{proof}\n\nThe next theorem provides a similar result as the one of Theorem \\ref{main_theorem5}, but by using the symbolic model $S_{\\overline{{\\mathsf{q}}}}(\\Sigma)$.\n\n\\begin{theorem}\\label{main_theorem6}\nConsider the results in Theorem \\ref{main_theorem3}. If we choose: \\begin{small}\n\\begin{align}\\notag\nX_{\\tau0}=\\{&a\\in \\mathcal{X}_0\\,\\,|\\,\\,\\left(\\mathds{E}\\left[\\left\\Vert a-H_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}0})\\right\\Vert^q\\right]\\right)^{\\frac{1}{q}}\\leq\\left(\\overline\\alpha^{-1}\\left(\\underline\\alpha\\left(\\varepsilon^q\\right)\\right)\\right)^{\\frac{1}{q}},~\\forall x_{{\\overline{{\\mathsf{q}}}}0}\\in X_{{\\overline{{\\mathsf{q}}}}0}\\},\n\\end{align}\n\\end{small}then we have \\mbox{$S_{\\overline{{\\mathsf{q}}}}(\\Sigma)\\cong_{\\mathcal{S}}^{\\varepsilon}S_\\tau(\\Sigma)$}.\n\\end{theorem}\n\n\\begin{proof}\nThe proof is similar to the one of Theorem \\ref{main_theorem5}.\n\\end{proof}\n\n\\subsubsection{Multiple Lyapunov functions}\nHere, we provide results on the construction of symbolic models for $\\Sigma_{\\tau_d}$ without any continuous space discretization.\nConsider a stochastic switched system $\\Sigma_{\\tau_d}$ and a triple $\\overline{\\mathsf{q}}=\\left(\\tau,N,x_s\\right)$ of parameters.\nGiven $\\Sigma_{\\tau_d}$ and $\\overline{\\mathsf{q}}$, consider the following systems:\n\\begin{small}\n\\begin{align}\\notag\nS_{\\overline{\\mathsf{q}}}(\\Sigma_{\\tau_d})&=(X_{\\overline{\\mathsf{q}}},X_{\\overline{\\mathsf{q}}0},U_{\\overline{\\mathsf{q}}},\\rTo_{\\overline{\\mathsf{q}}},Y_{\\overline{\\mathsf{q}}},H_{\\overline{\\mathsf{q}}}),\\\\\\notag\\overline{S}_{\\overline{\\mathsf{q}}}(\\Sigma_{\\tau_d})&=(X_{\\overline{\\mathsf{q}}},X_{\\overline{\\mathsf{q}}0},U_{\\overline{\\mathsf{q}}},\\rTo_{\\overline{\\mathsf{q}}},Y_{\\overline{\\mathsf{q}}},\\overline{H}_{\\overline{\\mathsf{q}}}),\n\\end{align}\n\\end{small}consisting of: $X_{\\overline{\\mathsf{q}}}=\\mathsf{P}^{N}\\times\\{0,\\ldots,\\widehat{N}-1\\}$, $U_{\\overline{\\mathsf{q}}}=\\mathsf{P}$, $Y_{\\overline{\\mathsf{q}}}=Y_\\tau$, 
and\n\\begin{small}\n\\begin{itemize}\n\\item \\begin{small}\n\\begin{itemize}\n\\item if $N\\leq\\widehat{N}-1$: $X_{{\\overline{{\\mathsf{q}}}}0}=\\left\\{\\left(p,\\ldots,p,N\\right)\\,\\,|\\,\\,\\forall p\\in\\mathsf{P}\\right\\}$; \n\\item if $N>\\widehat{N}-1$: $X_{{\\overline{{\\mathsf{q}}}}0}=\\{(\\overbrace{p_1,\\ldots,p_1}^{m_1~\\text{times}},\\dots,\\overbrace{p_k,\\ldots,p_k}^{m_k~\\text{times}},i)|~~\\exists k\\in{\\mathbb{N}}~~\\text{s.t.}~~ m_1,\\ldots,m_{k-1}\\geq{\\widehat{N}},~i=\\min\\{m_k-1,\\widehat{N}-1\\},~p_1,\\ldots,p_k\\in\\mathsf{P}\\}$;\n\\end{itemize}\\end{small}\n\\item $\\left(p_1,p_2,\\ldots,p_N,i\\right)\\rTo_{\\overline{\\mathsf{q}}}^{p_N}\\left(p_2,\\ldots,p_N,p,i'\\right)$ if one of the following holds:\n\n\\begin{itemize}\n\\item $i<\\widehat{N}-1$, $p=p_N$, and $i'=i+1$;\n\\item $i=\\widehat{N}-1$, $p=p_N$, and $i'=\\widehat{N}-1$;\n\\item $i=\\widehat{N}-1$, $p\\neq p_N$, and $i'=0$;\n\\end{itemize}\n\\item $H_{\\overline{\\mathsf{q}}}(x_{\\overline{{\\mathsf{q}}}},i)=\\xi_{x_sx_{\\overline{{\\mathsf{q}}}}}(N\\tau)$ $\\left(\\overline{H}_{\\overline{\\mathsf{q}}}(x_{\\overline{{\\mathsf{q}}}},i)=\\overline\\xi_{x_sx_{\\overline{{\\mathsf{q}}}}}(N\\tau)\\right)$ for any $(x_{\\overline{{\\mathsf{q}}}},i)\\in X_{\\overline{{\\mathsf{q}}}}$, where $x_{\\overline{{\\mathsf{q}}}}=\\left(p_1,\\ldots,p_N\\right)$.\n\\end{itemize}\n\\end{small}\n\nNotice that the proposed system $S_{\\overline{{\\mathsf{q}}}}(\\Sigma_{\\tau_d})$ $\\left(\\text{resp.}~\\overline{S}_{{\\overline{{\\mathsf{q}}}}}(\\Sigma_{\\tau_d})\\right)$ is symbolic and deterministic in the sense of Definition \\ref{system}. Note that the set $X_{{\\overline{{\\mathsf{q}}}}0}$ is chosen in such a way that it respects the dwell time of switching signals (i.e. being in each mode at least $\\tau_d=\\widehat{N}\\tau$ seconds).\n\nBefore providing the second main result of this subsection, we need the following technical results, similar to the ones in Lemmas \\ref{lemma1} and \\ref{lemma4}.\n\n\\begin{lemma}\\label{lemma11}\nConsider a stochastic switched system $\\Sigma_{\\tau_d}$, admitting multiple $\\delta$-GAS-M$_q$ Lyapunov functions $V_p$, and consider its corresponding symbolic model $\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma_{\\tau_d})$. Moreover, assume that (\\ref{eq0}) holds for some $\\mu\\geq1$. 
If $\\tau_d>\\log{\\mu}\/\\kappa$, then we have:\n\\begin{small}\n\\begin{align}\n\\label{upper_bound5}\n\\overline\\eta\\leq&\\left(\\underline\\alpha^{-1}(\\mathsf{e}^{-(\\kappa-\\log\\mu\/\\tau_d) N\\tau}\\max_{p,p'\\in \\mathsf{P}}V_{p'}(\\overline\\xi_{x_sp}(\\tau),x_s))\\right)^{1\/q},\n\\end{align}\n\\end{small}\nwhere\n\\begin{small}\n\\begin{align}\\label{eta2}\n\\overline\\eta:=\\max_{\\substack{(x_{\\overline{{\\mathsf{q}}}},i)\\in X_{\\overline{{\\mathsf{q}}}}\\\\(x_{\\overline{{\\mathsf{q}}}},i)\\rTo^p_{\\overline{{\\mathsf{q}}}}(x'_{\\overline{{\\mathsf{q}}}},i')}}\\Vert\\overline\\xi_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{\\overline{{\\mathsf{q}}}},i)p}(\\tau)-\\overline{H}_{\\overline{{\\mathsf{q}}}}\\left(x'_{\\overline{{\\mathsf{q}}}},i'\\right)\\Vert.\n\\end{align}\n\\end{small}\n\\end{lemma}\n\nThe proof is similar to the proof of Lemma \\ref{lemma1}.\n\nThe next lemma provides a result similar to that of Lemma \\ref{lemma11}, but by using the symbolic model $S_{\\overline{{\\mathsf{q}}}}(\\Sigma_{\\tau_d})$ rather than $\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma_{\\tau_d})$.\n\\begin{lemma}\\label{lemma44}\nConsider a stochastic switched system $\\Sigma_{\\tau_d}$, admitting multiple $\\delta$-GAS-M$_q$ Lyapunov functions $V_p$, and consider its corresponding symbolic model ${S}_{\\overline{{\\mathsf{q}}}}(\\Sigma_{\\tau_d})$. Moreover, assume that (\\ref{eq0}) holds for some $\\mu\\geq1$. If $\\tau_d>\\log{\\mu}\/\\kappa$, then we have:\n\\begin{small}\n\\begin{align}\\label{upper_bound6}\n\\widehat\\eta\\leq&(\\underline\\alpha^{-1}(\\mathsf{e}^{-(\\kappa-\\log\\mu\/\\tau_d) N\\tau}\\max_{p,p'\\in\\mathsf{P}}\\mathds{E}[V_{p'}\\left(\\xi_{x_sp}(\\tau),x_s\\right)]))^{\\frac{1}{q}},\n\\end{align}\n\\end{small}\nwhere\n\\begin{small}\n\\begin{align}\\label{eta3}\n\\widehat\\eta:=\\max_{\\substack{(x_{\\overline{{\\mathsf{q}}}},i)\\in X_{\\overline{{\\mathsf{q}}}}\\\\(x_{\\overline{{\\mathsf{q}}}},i)\\rTo^p_{\\overline{{\\mathsf{q}}}}(x'_{\\overline{{\\mathsf{q}}}},i')}}\\mathds{E}[\\Vert\\xi_{H_{\\overline{{\\mathsf{q}}}}(x_{\\overline{{\\mathsf{q}}}},i)p}(\\tau)-H_{\\overline{{\\mathsf{q}}}}\\left(x'_{\\overline{{\\mathsf{q}}}},i'\\right)\\Vert].\n\\end{align}\n\\end{small}\n\\end{lemma}\nThe proof is similar to the proof of Lemma \\ref{lemma4}.\n\nNow, we present the second main result of this subsection, relating the existence of multiple Lyapunov functions to that of bisimilar finite abstractions without any continuous space discretization. In order to show the next result, we assume that $f_p(0_n)=0_n$ only if $\\Sigma_{\\tau_d,p}$ is not affine and $g_p(0_n)=0_{n\\times\\widehat{q}}$ for any $p\\in\\mathsf{P}$.\n\n\\begin{theorem}\\label{main_theorem33}\nConsider a stochastic switched system $\\Sigma_{\\tau_d}$. Let us assume that for any $p\\in\\mathsf{P}$, there exists a $\\delta$-GAS-M$_q$ Lyapunov function $V_p$, of the form explained in Lemma \\ref{lem:moment est}, for subsystem $\\Sigma_{\\tau_d,p}$. Moreover, assume that (\\ref{eq0}) holds for some $\\mu\\geq1$. Let $\\overline\\eta$ be given by \\eqref{eta2}.
If $\\tau_d>\\log{\\mu}\/\\kappa$, for any $\\varepsilon\\in{\\mathbb{R}}^+$, \nand any triple $\\overline{\\mathsf{q}}=\\left(\\tau,N,x_s\\right)$ of parameters satisfying\n\\begin{small}\n\\begin{align}\n\\label{bisim_cond_mul0}\n\\widehat\\gamma(\\left(h_{x_s}\\left((N+1)\\tau\\right)\\right)^{\\frac{1}{q}}+\\overline\\eta)&\\leq\\frac{\\frac{1}{\\mu}-\\mathsf{e}^{-\\kappa\\tau_d}}{1-\\mathsf{e}^{-\\kappa\\tau_d}}(1-\\mathsf{e}^{-\\kappa\\tau})\\underline\\alpha(\\varepsilon^q),\n\\end{align}\n\\end{small}there exists an $\\varepsilon$-approximate bisimulation relation $R$ between $\\overline{S}_{\\overline{\\mathsf{q}}}(\\Sigma_{\\tau_d})$ and $S_{\\tau}(\\Sigma_{\\tau_d})$ defined as follows:\\\\ $\\left(x_{\\tau},p_1,i_1,x_{{\\overline{{\\mathsf{q}}}}},i_2\\right)\\in R$, where $x_{\\overline{{\\mathsf{q}}}}=(\\overline{p}_1,\\ldots,\\overline{p}_N)$, if and only if $p_1=\\overline{p}_N=p$, $i_1=i_2=i$, and \\begin{small}$$\\mathbb{E}[V_p(H_{\\tau}(x_{\\tau},p_1,i_1),\\overline{H}_{{\\overline{{\\mathsf{q}}}}}(x_{{\\overline{{\\mathsf{q}}}}},i_2))]=\\mathbb{E}[V_p(x_{\\tau},\\overline\\xi_{x_sx_{\\overline{{\\mathsf{q}}}}}(N\\tau))]\\leq\\delta_i,$$\\end{small}where $\\delta_0,\\ldots,\\delta_{\\widehat{N}-1}$ are given recursively by \\begin{small}$\\delta_{i+1}=\\mathsf{e}^{-\\kappa\\tau}\\delta_i+\\widehat\\gamma\\left(\\left(h_{x_s}\\left((N+1)\\tau\\right)\\right)^{\\frac{1}{q}}+\\overline\\eta\\right)$\\end{small} and $\\delta_0=\\underline\\alpha\\left(\\varepsilon^q\\right)$.\n\\end{theorem} \n\n\\begin{proof}\nConsider the relation $R\\subseteq X_{\\tau}\\times X_{{\\overline{{\\mathsf{q}}}}}$ defined by $\\left(x_{\\tau},p_1,i_1,x_{{\\overline{{\\mathsf{q}}}}},i_2\\right)\\in R$, where $x_{\\overline{{\\mathsf{q}}}}=(\\overline{p}_1,\\ldots,\\overline{p}_N)$, if and only if $p_1=\\overline{p}_N=p$, $i_1=i_2=i$, and \\begin{small}$$\\mathbb{E}[V_p(H_{\\tau}(x_{\\tau},p_1,i_1),\\overline{H}_{{\\overline{{\\mathsf{q}}}}}(x_{{\\overline{{\\mathsf{q}}}}},i_2))]=\\mathbb{E}[V_p(x_{\\tau},\\overline\\xi_{x_sx_{\\overline{{\\mathsf{q}}}}}(N\\tau))]\\leq\\delta_i,$$\\end{small}where $\\delta_0,\\ldots,\\delta_{\\widehat{N}}$ are given recursively by \\begin{small}$$\\delta_0=\\underline\\alpha\\left(\\varepsilon^q\\right),~~\\delta_{i+1}=\\mathsf{e}^{-\\kappa\\tau}\\delta_i+\\widehat\\gamma\\left(\\left(h_{x_s}\\left((N+1)\\tau\\right)\\right)^{\\frac{1}{q}}+\\overline\\eta\\right).$$\\end{small}One can easily verify that\n\\begin{small}\n\\begin{align}\\label{delta_i0}\n\\delta_i&=\\mathsf{e}^{-i\\kappa\\tau}\\underline\\alpha(\\varepsilon^q)+\\widehat\\gamma((h_{x_s}((N+1)\\tau))^{\\frac{1}{q}}+\\overline\\eta)\\frac{1-\\mathsf{e}^{-i\\kappa\\tau}}{1-\\mathsf{e}^{-\\kappa\\tau}}\\\\\\notag&=\\frac{\\widehat\\gamma((h_{x_s}((N+1)\\tau))^{\\frac{1}{q}}+\\overline\\eta)}{1-\\mathsf{e}^{-\\kappa\\tau}}+\\mathsf{e}^{-i\\kappa\\tau}(\\underline\\alpha(\\varepsilon^q)-\\frac{\\widehat\\gamma((h_{x_s}((N+1)\\tau))^{\\frac{1}{q}}+\\overline\\eta)}{1-\\mathsf{e}^{-\\kappa\\tau}}).\n\\end{align}\n\\end{small}\nSince $\\mu\\geq1$, and from (\\ref{bisim_cond_mul0}), one has \\begin{small}$$\\widehat\\gamma\\left(\\left(h_{x_s}\\left((N+1)\\tau\\right)\\right)^{\\frac{1}{q}}+\\overline\\eta\\right)\\leq(1-\\mathsf{e}^{-\\kappa\\tau})\\underline\\alpha\\left(\\varepsilon^q\\right).$$\\end{small}It follows from (\\ref{delta_i0}) that $\\delta_0\\geq\\delta_1\\geq\\cdots\\geq\\delta_{\\widehat{N}-1}\\geq\\delta_{\\widehat{N}}$.
From (\\ref{bisim_cond_mul0}) and since $\\tau_d=\\widehat{N}\\tau$, we get \n\\begin{small}\n\\begin{align}\\label{Ntozero0}\n\\delta_{\\widehat{N}}=&\\mathsf{e}^{-\\kappa\\tau_d}\\underline\\alpha(\\varepsilon^q)+\\widehat\\gamma((h_{x_s}((N+1)\\tau))^{\\frac{1}{q}}+\\overline\\eta)\\frac{1-\\mathsf{e}^{-\\kappa\\tau_d}}{1-\\mathsf{e}^{-\\kappa\\tau}}\\leq\\mathsf{e}^{-\\kappa\\tau_d}\\underline\\alpha(\\varepsilon^q)+(\\frac{1}{\\mu}-\\mathsf{e}^{-\\kappa\\tau_d})\\underline\\alpha(\\varepsilon^q)=\\frac{\\underline\\alpha(\\varepsilon^q)}{\\mu}.\n\\end{align}\n\\end{small}We start by proving that $R$ is an $\\varepsilon$-approximate simulation relation from $S_{\\tau}(\\Sigma_{\\tau_d})$ to $\\overline{S}_{{\\overline{{\\mathsf{q}}}}}(\\Sigma_{\\tau_d})$. Consider any \\mbox{$\\left(x_{\\tau},p,i,x_{{\\overline{{\\mathsf{q}}}}},i\\right)\\in R$}. Using the convexity of $\\underline\\alpha_p$, the fact that it is a $\\mathcal{K}_\\infty$ function, and the Jensen inequality \\cite{oksendal}, we have:\n\\begin{small}\n\\begin{align}\\nonumber\n\\underline\\alpha(\\mathbb{E}[\\Vert H_\\tau(x_{\\tau},p,i)-\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}},i)\\Vert^q])&=\\underline\\alpha(\\mathbb{E}[\\Vert x_{\\tau}-\\overline\\xi_{x_sx_{\\overline{{\\mathsf{q}}}}}(N\\tau)\\Vert^q])\\leq\\underline\\alpha_p(\\mathbb{E}[\\Vert x_{\\tau}-\\overline\\xi_{x_sx_{\\overline{{\\mathsf{q}}}}}(N\\tau)\\Vert^q])\\leq\\mathds{E}[\\underline\\alpha_p(\\Vert x_{\\tau}-\\overline\\xi_{x_sx_{\\overline{{\\mathsf{q}}}}}(N\\tau)\\Vert^q)]\\\\\\notag&\\leq\\mathbb{E}[V_p(x_{\\tau},\\overline\\xi_{x_sx_{\\overline{{\\mathsf{q}}}}}(N\\tau))]\\leq\\delta_i\\leq\\delta_0.\n\\end{align}\n\\end{small}Therefore, we obtain\n\\begin{small}\n$(\\mathbb{E}[\\Vert x_{\\tau}-\\overline\\xi_{x_sx_{\\overline{{\\mathsf{q}}}}}(N\\tau)\\Vert^q])^{\\frac{1}{q}}\\leq(\\underline\\alpha^{-1}(\\delta_0))^{\\frac{1}{q}}\\leq\\varepsilon$,\n\\end{small}\nbecause of $\\underline\\alpha\\in\\mathcal{K}_\\infty$. Hence, condition (i) in Definition \\ref{APSR} is satisfied.\nLet us now show that condition (ii) in Definition\n\\ref{APSR} holds. \nConsider the transition $(x_{\\tau},p,i)\\rTo^{p}_{\\tau} (x'_{\\tau},p',i')$ in $S_{\\tau}\\left(\\Sigma_{\\tau_d}\\right)$, where $x'_{\\tau}=\\xi_{x_{\\tau}p}(\\tau)$ $\\mathds{P}$-a.s.
Since $V_p$ is a $\\delta$-GAS-M$_q$ Lyapunov function for subsystem $\\Sigma_{\\tau_d,p}$, we have\n\\begin{small}\n\\begin{align}\\label{b060}\n\\mathbb{E}[V_{p}(x'_{\\tau},\\xi_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}},i)p}(\\tau))] &\\leq \\mathds{E}[V_{p}(x_\\tau,\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}},i))] \\mathsf{e}^{-\\kappa\\tau}\\leq\\mathsf{e}^{-\\kappa\\tau}\\delta_i.\n\\end{align}\n\\end{small}Using Lemma \\ref{lem:moment est}, the fact that $\\widehat\\gamma$ is a $\\mathcal{K}_\\infty$ function, the concavity of $\\widehat\\gamma_p$ in \\eqref{supplement}, the Jensen inequality \\cite{oksendal}, equation \\eqref{eta2}, the inequalities (\\ref{supplement}) and (\\ref{b060}), and the triangle inequality, we obtain\n\\begin{small}\n\\begin{align}\\nonumber\n\\mathbb{E}[V_{p}(x'_{\\tau},\\overline{H}_{\\overline{{\\mathsf{q}}}}(x'_{{\\overline{{\\mathsf{q}}}}},i'))]&=\\mathbb{E}[V_{p}(x'_\\tau,\\xi_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}},i)p}(\\tau))+V_{p}(x'_{\\tau},\\overline{H}_{\\overline{{\\mathsf{q}}}}(x'_{{\\overline{{\\mathsf{q}}}}},i'))-V_{p}(x'_\\tau,\\xi_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}},i)p}(\\tau))]\\\\ \\notag\n&= \\mathbb{E}[V_{p}(x'_\\tau,\\xi_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}},i)p}(\\tau))]+\\mathbb{E}[V_{p}(x'_{\\tau},\\overline{H}_{\\overline{{\\mathsf{q}}}}(x'_{{\\overline{{\\mathsf{q}}}}},i'))-V_{p}(x'_\\tau,\\xi_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}},i)p}(\\tau))]\\\\\\notag&\\leq\\mathsf{e}^{-\\kappa\\tau}\\delta_i+\\mathbb{E}[\\widehat\\gamma_{p}(\\Vert\\xi_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}},i)p}(\\tau)-\\overline{H}_{\\overline{{\\mathsf{q}}}}(x'_{{\\overline{{\\mathsf{q}}}}},i')\\Vert)]\\leq\\mathsf{e}^{-\\kappa\\tau}\\delta_i+\\widehat\\gamma_{p}(\\mathbb{E}[\\Vert\\xi_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}},i)p}(\\tau)-\\overline{H}_{\\overline{{\\mathsf{q}}}}(x'_{{\\overline{{\\mathsf{q}}}}},i')\\Vert])\\\\\\notag\n&\\leq\\mathsf{e}^{-\\kappa\\tau}\\delta_i+\\widehat\\gamma(\\mathbb{E}[\\Vert\\xi_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}},i)p}(\\tau)-\\overline{\\xi}_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}},i)p}(\\tau)+\\overline{\\xi}_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}},i)p}(\\tau)-\\overline{H}_{\\overline{{\\mathsf{q}}}}(x'_{{\\overline{{\\mathsf{q}}}}},i')\\Vert])\\\\\\notag\n&\\leq\\mathsf{e}^{-\\kappa\\tau}\\delta_i+\\widehat\\gamma(\\mathbb{E}[\\Vert\\xi_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}},i)p}(\\tau)-\\overline{\\xi}_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}},i)p}(\\tau)\\Vert]+\\Vert\\overline{\\xi}_{\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}},i)p}(\\tau)-\\overline{H}_{\\overline{{\\mathsf{q}}}}(x'_{{\\overline{{\\mathsf{q}}}}},i')\\Vert)\\\\\\label{b080}\n&\\leq\\mathsf{e}^{-\\kappa\\tau}\\delta_i+\\widehat\\gamma((h_{x_s}((N+1)\\tau))^{\\frac{1}{q}}+\\overline\\eta)=\\delta_{i+1}.\n\\end{align}\n\\end{small}\nWe now examine three separate cases:\n\\begin{itemize}\n\\item If $i<\\widehat{N}-1$, then $p'=p$, and $i'=i+1$; since, from (\\ref{b080}),
\\begin{small}$\\mathds{E}\\left[V_p\\left(x'_{\\tau},\\overline{H}_{\\overline{{\\mathsf{q}}}}\\left(x'_{{\\overline{{\\mathsf{q}}}}},i'\\right)\\right)\\right]\\leq\\delta_{i+1}$\\end{small}, we conclude that \\begin{small}$(x'_{\\tau},p,i+1,x'_{{\\overline{{\\mathsf{q}}}}},i+1)\\in R$\\end{small}; \n\n\\item If $i=\\widehat{N}-1$, and $p'=p$, then $i'=\\widehat{N}-1$; since, from (\\ref{b080}), \\begin{small}$\\mathds{E}\\left[V_p\\left(x'_{\\tau},\\overline{H}_{\\overline{{\\mathsf{q}}}}\\left(x'_{{\\overline{{\\mathsf{q}}}}},i'\\right)\\right)\\right]\\leq\\delta_{\\widehat{N}}\\leq\\delta_{\\widehat{N}-1}$\\end{small}, we conclude that \\begin{small}$(x'_{\\tau},p,\\widehat{N}-1,x'_{{\\overline{{\\mathsf{q}}}}},\\widehat{N}-1)\\in R$\\end{small}; \n\n\\item If $i=\\widehat{N}-1$, and $p'\\neq{p}$, then $i'=0$; from (\\ref{Ntozero0}) and (\\ref{b080}), \\begin{small}$\\mathds{E}\\left[V_p\\left(x'_{\\tau},\\overline{H}_{\\overline{{\\mathsf{q}}}}\\left(x'_{{\\overline{{\\mathsf{q}}}}},i'\\right)\\right)\\right]\\leq\\delta_{\\widehat{N}}\\leq\\delta_0\/\\mu$\\end{small}. From (\\ref{eq0}), it follows that \\begin{small}$\\mathds{E}\\left[V_{p'}(x'_{\\tau},\\overline{H}_{\\overline{{\\mathsf{q}}}}\\left(x'_{{\\overline{{\\mathsf{q}}}}},i'\\right))\\right]\\leq\\mu \\mathds{E}\\left[V_p\\left(x'_{\\tau},\\overline{H}_{\\overline{{\\mathsf{q}}}}\\left(x'_{{\\overline{{\\mathsf{q}}}}},i'\\right)\\right)\\right]\\leq\\delta_0$\\end{small}. Hence, \\begin{small}$(x'_{\\tau},p',0,x'_{{\\overline{{\\mathsf{q}}}}},0)\\in R$\\end{small}.\n\\end{itemize}\nTherefore, we conclude that condition (ii) in Definition \\ref{APSR} holds. In a similar way, we can prove that $R^{-1}$ is an\n$\\varepsilon$-approximate simulation relation from $\\overline{S}_{{\\overline{{\\mathsf{q}}}}}(\\Sigma_{\\tau_d})$ to $S_{\\tau}(\\Sigma_{\\tau_d})$ implying that $R$ is an $\\varepsilon$-approximate bisimulation relation between $\\overline{S}_{{\\overline{{\\mathsf{q}}}}}(\\Sigma_{\\tau_d})$ and $S_\\tau(\\Sigma_{\\tau_d})$.\n\\end{proof}\n\nNote that one can use any over-approximation of $\\overline\\eta$ such as the one in \\eqref{upper_bound5} instead of $\\overline\\eta$ in condition \\eqref{bisim_cond_mul0}. By choosing $N$ sufficiently large, one can enforce $h_{x_s}((N+1)\\tau)$ and $\\overline\\eta$ to be sufficiently small. Hence, it can be readily seen that for a given precision $\\varepsilon$,\nthere always exists a sufficiently large value of $N$ such that the condition in (\\ref{bisim_cond_mul0}) is satisfied.\n\nThe next theorem provides a result that is similar to the one of Theorem \\ref{main_theorem33}, but by using the symbolic model $S_{{\\overline{{\\mathsf{q}}}}}(\\Sigma_{\\tau_d})$.\n\n\\begin{theorem}\\label{main_theorem22}\nConsider a stochastic switched system $\\Sigma_{\\tau_d}$. Let us assume that for any $p\\in\\mathsf{P}$, there exists a $\\delta$-GAS-M$_q$ Lyapunov function $V_p$ for subsystem $\\Sigma_{\\tau_d,p}$. Moreover, assume that (\\ref{eq0}) holds for some $\\mu\\geq1$. Let $\\widehat\\eta$ be given by \\eqref{eta3}.
If $\\tau_d>\\log{\\mu}\/\\kappa$, for any $\\varepsilon\\in{\\mathbb{R}}^+$, \nand any triple $\\overline{\\mathsf{q}}=\\left(\\tau,N,x_s\\right)$ of parameters satisfying\n\\begin{small}\n\\begin{align}\n\\label{bisim_cond_mul10}\n\\widehat\\gamma\\left(\\widehat\\eta\\right)&\\leq\\frac{\\frac{1}{\\mu}-\\mathsf{e}^{-\\kappa\\tau_d}}{1-\\mathsf{e}^{-\\kappa\\tau_d}}\\left(1-\\mathsf{e}^{-\\kappa\\tau}\\right)\\underline\\alpha\\left(\\varepsilon^q\\right),\n\\end{align}\n\\end{small}there exists an $\\varepsilon$-approximate bisimulation relation $R$ between ${S}_{\\overline{\\mathsf{q}}}(\\Sigma_{\\tau_d})$ and $S_{\\tau}(\\Sigma_{\\tau_d})$ defined as follows:\\\\ $\\left(x_{\\tau},p_1,i_1,x_{{\\overline{{\\mathsf{q}}}}},i_2\\right)\\in R$, where $x_{\\overline{{\\mathsf{q}}}}=(\\overline{p}_1,\\ldots,\\overline{p}_N)$, if and only if $p_1=\\overline{p}_N=p$, $i_1=i_2=i$, and \\begin{small}$$\\mathbb{E}[V_p(H_{\\tau}(x_{\\tau},p_1,i_1),{H}_{{\\overline{{\\mathsf{q}}}}}(x_{{\\overline{{\\mathsf{q}}}}},i_2))]=\\mathbb{E}[V_p(x_{\\tau},\\xi_{x_sx_{\\overline{{\\mathsf{q}}}}}(N\\tau))]\\leq\\delta_i,$$\\end{small}where $\\delta_0,\\ldots,\\delta_{\\widehat{N}-1}$ are given recursively by \\begin{small}$\\delta_0=\\underline\\alpha\\left(\\varepsilon^q\\right),~\\delta_{i+1}=\\mathsf{e}^{-\\kappa\\tau}\\delta_i+\\widehat\\gamma\\left(\\widehat\\eta\\right)$\\end{small}.\n\\end{theorem} \n\n\\begin{proof}\nThe proof is similar to the one of Theorem \\ref{main_theorem33}.\n\\end{proof}\n\nNote that one can also use any over-approximation of $\\widehat\\eta$ such as the one in \\eqref{upper_bound6} instead of $\\widehat\\eta$ in condition \\eqref{bisim_cond_mul10}. Finally, we establish the results on the existence of the symbolic model $\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma_{\\tau_d})$ (resp. $S_{\\overline{{\\mathsf{q}}}}(\\Sigma_{\\tau_d})$) such that \\mbox{$\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma_{\\tau_d})\\cong_{\\mathcal{S}}^{\\varepsilon}S_\\tau(\\Sigma_{\\tau_d})$} (resp. \\mbox{$S_{\\overline{{\\mathsf{q}}}}(\\Sigma_{\\tau_d})\\cong_{\\mathcal{S}}^{\\varepsilon}S_\\tau(\\Sigma_{\\tau_d})$}).\n\n\\begin{theorem}\\label{main_theorem55}\nConsider the result in Theorem \\ref{main_theorem33}. If we choose: \\begin{small}\n\\begin{align}\\notag\nX_{\\tau0}=\\big\\{&\\left(x,p,i\\right)\\,\\,|\\,\\,x\\in{\\mathbb{R}}^n,\\left\\Vert x-\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}0},i)\\right\\Vert\\leq\\left(\\overline\\alpha_p^{-1}\\left(\\delta_i\\right)\\right)^{\\frac{1}{q}},p=p_N,\\notag\\forall (p_1,\\ldots,p_N,i)\\in X_{{\\overline{{\\mathsf{q}}}}0}\\big\\},\n\\end{align}\n\\end{small}then we have \\mbox{$\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma_{\\tau_d})\\cong_{\\mathcal{S}}^{\\varepsilon}S_\\tau(\\Sigma_{\\tau_d})$}.\n\\end{theorem}\n\n\\begin{proof}\nWe start by proving that \\mbox{$S_{\\tau}(\\Sigma_{\\tau_d})\\preceq^{\\varepsilon}_\\mathcal{S}\\overline{S}_{{\\overline{{\\mathsf{q}}}}}(\\Sigma_{\\tau_d})$}. For every $\\left(x_{\\tau 0},p,i\\right)\\in{X_{\\tau 0}}$, there always exists \\mbox{$\\left(x_{{\\overline{{\\mathsf{q}}}} 0},i\\right)\\in{X}_{{\\overline{{\\mathsf{q}}}} 0}$}, where $x_{{\\overline{{\\mathsf{q}}}}0}=(p_1,\\ldots,p_N)$, such that $p=p_N$ and \\begin{small}$\\left\\Vert{x_{\\tau0}}-\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}0},i)\\right\\Vert\\leq\\left(\\overline\\alpha_p^{-1}\\left(\\delta_i\\right)\\right)^{\\frac{1}{q}}$\\end{small}.
Then,\n\\begin{footnotesize}\n\\begin{align}\\nonumber\n\\mathbb{E}\\left[V_p\\left({x_{\\tau0}},\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}0},i)\\right)\\right]&=V_p\\left({x_{\\tau0}},\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}0},i)\\right)\\leq\\overline\\alpha_p(\\Vert x_{\\tau0}-\\overline{H}_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}0},i)\\Vert^q)\\leq\\delta_i,\n\\end{align}\n\\end{footnotesize}since $\\overline\\alpha_p$ is a $\\mathcal{K}_\\infty$ function.\nHence, \\mbox{$\\left(x_{\\tau0},p,i,x_{{\\overline{{\\mathsf{q}}}}0},i\\right)\\in{R}$} implying that $S_{\\tau}(\\Sigma_{\\tau_d})\\preceq^{\\varepsilon}_\\mathcal{S}\\overline{S}_{{\\overline{{\\mathsf{q}}}}}(\\Sigma_{\\tau_d})$. In a similar way, we can show that \\mbox{$\\overline{S}_{{\\overline{{\\mathsf{q}}}}}(\\Sigma_{\\tau_d})\\preceq^{\\varepsilon}_{\\mathcal{S}}S_{\\tau}(\\Sigma_{\\tau_d})$}, equipped with the relation $R^{-1}$, which completes the proof.\n\\end{proof}\n\nThe next theorem provides a result similar to that of Theorem \\ref{main_theorem55}, but by using the symbolic model $S_{\\overline{{\\mathsf{q}}}}(\\Sigma_{\\tau_d})$.\n\n\\begin{theorem}\\label{main_theorem66}\nConsider the results in Theorem \\ref{main_theorem22}. If we choose: \\begin{small}\n\\begin{align}\\notag\nX_{\\tau0}=\\big\\{&\\left(a,p,i\\right)\\,\\,|\\,\\,a\\in\\mathcal{X}_0,\\left(\\mathds{E}\\left[\\left\\Vert a-H_{\\overline{{\\mathsf{q}}}}(x_{{\\overline{{\\mathsf{q}}}}0},i)\\right\\Vert^q\\right]\\right)^{\\frac{1}{q}}\\leq\\left(\\overline\\alpha_p^{-1}\\left(\\delta_i\\right)\\right)^{\\frac{1}{q}}, p=p_N, \\forall (p_1,\\ldots,p_N,i)\\in X_{{\\overline{{\\mathsf{q}}}}0}\\big\\},\n\\end{align}\n\\end{small}then we have \\mbox{$S_{\\overline{{\\mathsf{q}}}}(\\Sigma_{\\tau_d})\\cong_{\\mathcal{S}}^{\\varepsilon}S_\\tau(\\Sigma_{\\tau_d})$}.\n\\end{theorem}\n\n\\begin{proof}\nThe proof is similar to the one of Theorem \\ref{main_theorem55}.\n\\end{proof}\n\n\\begin{remark}\\label{remark1}\nThe symbolic model $S_{\\overline{{\\mathsf{q}}}}(\\Sigma)$ (resp. $S_{\\overline{{\\mathsf{q}}}}(\\Sigma_{\\tau_d})$), computed by using the parameter $\\overline{\\mathsf{q}}$ provided in Theorem \\ref{main_theorem3} (resp. Theorem \\ref{main_theorem22}), has fewer states than (or at most as many states as) the symbolic model $\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma)$ (resp. $\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma_{\\tau_d})$), computed by using the parameter ${\\overline{{\\mathsf{q}}}}$ provided in Theorem \\ref{main_theorem} (resp. Theorem \\ref{main_theorem33}), while having the same precision. However, the symbolic models $S_{\\overline{{\\mathsf{q}}}}(\\Sigma)$ and $S_{\\overline{{\\mathsf{q}}}}(\\Sigma_{\\tau_d})$ have states with probabilistic output values, rather than non-probabilistic ones, which makes the control synthesis over them more involved.\n\\end{remark}\n\n\\begin{remark}\nThe control synthesis over $\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma)$ (resp. $\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma_{\\tau_d})$) is simple as the outputs are non-probabilistic points. For ${S}_{\\overline{{\\mathsf{q}}}}(\\Sigma)$ (resp. ${S}_{\\overline{{\\mathsf{q}}}}(\\Sigma_{\\tau_d})$) it is less intuitive and more involved.
We refer the interested readers to \\cite[Subsection 5.3]{majid10} explaining how one can synthesize controllers over finite metric systems with random output values.\n\\end{remark}\n\n\\subsection{Comparison between the two proposed approaches}\n\nNote that given any precision $\\varepsilon$ and sampling time $\\tau$, one can always use the results proposed in Theorems \\ref{main_theorem5} and \\ref{main_theorem55} to construct symbolic models $\\overline{S}_{\\mathsf{q}}(\\Sigma)$ and $\\overline{S}_{\\mathsf{q}}(\\Sigma_{\\tau_d})$, respectively, that are $\\varepsilon$-approximately bisimilar to $S_\\tau(\\Sigma)$ and $S_\\tau(\\Sigma_{\\tau_d})$, respectively. However, the results proposed in Theorems \\ref{main_theorem1} and \\ref{main_theorem2} cannot be applied for any sampling time $\\tau$ if the precision $\\varepsilon$ is lower than the thresholds introduced in inequalities \\eqref{lower_bound} and \\eqref{lower_bound_mul}, respectively (cf. the first case study). Furthermore, while the results in Theorems \\ref{main_theorem1} and \\ref{main_theorem2} only provide symbolic models with non-probabilistic output values, the ones in Theorems \\ref{main_theorem6} and \\ref{main_theorem66} provide symbolic models with probabilistic output values as well, which can result in less conservative symbolic models (cf. Remark \\ref{remark1} and the first case study).\n\nOne can compare the results provided in Theorems \\ref{main_theorem5} and \\ref{main_theorem55} with the results provided in Theorems \\ref{main_theorem1} and \\ref{main_theorem2}, respectively, in terms of the sizes of the symbolic models. One can readily verify that the precision of the symbolic model $\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma)$ (resp. $\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma_{\\tau_d})$) and that of $S_{\\mathsf{q}}(\\Sigma)$ (resp. $S_{\\mathsf{q}}(\\Sigma_{\\tau_d})$) are approximately the same as long as the state space quantization parameter $\\eta$ is equal to the parameter $\\overline\\eta$ in \\eqref{eta} (resp. in \\eqref{eta2}), i.e. \\begin{small}$\\eta\\leq\\left(\\underline\\alpha^{-1}\\left(\\mathsf{e}^{-\\kappa N\\tau}\\overline{\\eta}_0\\right)\\right)^{1\/q}$\\end{small} (resp. \\begin{small}$\\eta\\leq\\left(\\underline\\alpha^{-1}\\left(\\mathsf{e}^{-(\\kappa-\\log\\mu\/{\\tau_d})N\\tau}\\widehat\\eta_0\\right)\\right)^{1\/q}$\\end{small}), where \\begin{small}$\\overline{\\eta}_0=\\max_{p\\in\\mathsf{P}}V\\left(\\overline\\xi_{x_sp}(\\tau),x_s\\right)$\\end{small} (resp. \\begin{small}$\\widehat\\eta_0=\\max_{p,p'\\in\\mathsf{P}}V_{p'}\\left(\\overline\\xi_{x_sp}(\\tau),x_s\\right)$\\end{small}). The reason their precisions are approximately (not exactly) the same is that we use $(h_{x_s}((N+1)\\tau))^{1\/q}$ in conditions \\eqref{bisim_cond11} and \\eqref{bisim_cond_mul0} rather than $(h_{[X_{\\tau0}]_\\eta}(\\tau))^{1\/q}$ (resp. $(h_{[X_0]_\\eta}(\\tau))^{1\/q}$) that is used in condition \\eqref{bisim_cond} (resp. \\eqref{bisim_cond_mul}). By assuming that $(h_{x_s}((N+1)\\tau))^{1\/q}$ and $(h_{[X_{\\tau0}]_\\eta}(\\tau))^{1\/q}$ (resp. $(h_{[X_0]_\\eta}(\\tau))^{1\/q}$) are much smaller than $\\overline{\\eta}$ and $\\eta$, respectively, or $h_{x_s}((N+1)\\tau)\\approx h_{[X_{\\tau0}]_\\eta}(\\tau)$ (resp. 
$h_{x_s}((N+1)\\tau)\\approx h_{[X_0]_\\eta}(\\tau)$), the precisions are the same.\n\nThe numbers of states of the proposed symbolic models $\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma)$ and $\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma_{\\tau_d})$ are $m^N$ and $m^{N}\\times\\widehat{N}$, respectively. Assume that we are interested in the dynamics of $\\Sigma$ (resp. $\\Sigma_{\\tau_d}$) on a compact set $\\mathsf{D}\\subset{\\mathbb{R}}^n$. Since the sets of states of the proposed symbolic models $S_{\\mathsf{q}}(\\Sigma)$ and $S_{\\mathsf{q}}(\\Sigma_{\\tau_d})$ are $\\left[\\mathsf{D}\\right]_{\\eta}$ and $\\left[\\mathsf{D}\\right]_{\\eta}\\times\\mathsf{P}\\times\\{0,\\ldots,\\widehat{N}-1\\}$, respectively, their sizes are $\\left\\vert\\left[\\mathsf{D}\\right]_{\\eta}\\right\\vert=\\frac{K}{\\eta^n}$ and $\\frac{K}{\\eta^n}\\times{m}\\times\\widehat{N}$, respectively, where $K$ is a positive constant proportional to the volume of $\\mathsf{D}$. Hence, it is more convenient to use the proposed symbolic models $\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma)$ and $\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma_{\\tau_d})$ rather than the ones $S_{\\mathsf{q}}(\\Sigma)$ and $S_{\\mathsf{q}}(\\Sigma_{\\tau_d})$, respectively, as long as:\n\\begin{small}\n\\begin{align}\\nonumber\n&m^N\\leq\\frac{K}{\\left(\\underline\\alpha^{-1}\\left(\\mathsf{e}^{-\\kappa N\\tau}\\overline\\eta_0\\right)\\right)^{n\/q}}~~\\text{and}~~m^{N-1}\\leq\\frac{K}{\\left(\\underline\\alpha^{-1}\\left(\\mathsf{e}^{-(\\kappa-\\log\\mu\/{\\tau_d}) N\\tau}\\widehat\\eta_0\\right)\\right)^{n\/q}},\n\\end{align}\n\\end{small}respectively. Without loss of generality, one can assume that $\\underline\\alpha({r})=r$ for any $r\\in{\\mathbb{R}}_0^+$. Hence, for sufficiently large values of $N$, it is more convenient to use the proposed symbolic models $\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma)$ and $\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma_{\\tau_d})$ in comparison with the ones $S_{\\mathsf{q}}(\\Sigma)$ and $S_{\\mathsf{q}}(\\Sigma_{\\tau_d})$, respectively, as long as:\n\\begin{small}\n\\begin{align}\\label{criterion}\nm\\mathsf{e}^{\\frac{-\\kappa\\tau n}{q}}\\leq1,~~\\text{and}~~m\\mathsf{e}^{\\frac{-(\\kappa-\\log\\mu\/{\\tau_d})\\tau n}{q}}\\leq1,\n\\end{align}\n\\end{small}respectively.\n\n\\begin{comment}\n\\begin{remark}[Relationship with notions in the literature] \nThe approximate bisimulation notion in Definition \\ref{APSR} is structurally different from the probabilistic version discussed for finite state, \\emph{discrete-time} labeled Markov chains in \\cite{DLT08}, which can also be extended to continuous-space processes as in \\cite{DP02}. \nThe notion in this work can be instead related to the approximate probabilistic bisimulation notion discussed in \\cite{abate,julius1}, \nwhich lower bounds the probability that the Euclidean distance between trajectories of abstract and of concrete models remains close over a given time horizon: \nboth notions hinge on distances over trajectories, rather than over transition probabilities as in \\cite{DP02,DLT08}. \nAlong these lines, \\cite[Theorem 5.12]{majid8} can be readily employed to establish a probabilistic approximate bisimulation relation between $S_\\tau(\\Sigma)$ (resp. $S_\\tau\\left(\\Sigma_{\\tau_d}\\right)$) and $S_{\\mathsf{q}}(\\Sigma)$ (resp. 
$S_{\\mathsf{q}}\\left(\\Sigma_{\\tau_d}\\right)$) point-wise in time: \nthis result is sufficient to work with LTL specifications that need to be satisfied at a single time instance, \nsuch as next ($\\bigcirc$) and reach ($\\diamondsuit$). \\cite[Theorems 5.8, 5.10]{majid8} can be employed to instead establish probabilistic approximate bisimulation relations between $S_\\tau(\\Sigma)$ (resp. $S_\\tau\\left(\\Sigma_{\\tau_d}\\right)$) and $S_{\\mathsf{q}}(\\Sigma)$ (resp. $S_{\\mathsf{q}}\\left(\\Sigma_{\\tau_d}\\right)$) simultaneous in time, \nover an infinite and finite time horizon, respectively. \n\\end{remark}\n\\end{comment}\n\n\\section{Examples}\n\n\\subsection{Room temperature control (common Lyapunov function)}\nConsider the stochastic switched system $\\Sigma$, which is a simple thermal model of a six-room building, as depicted schematically in Figure \\ref{fig1}, described by the following stochastic differential equations:\n\\begin{footnotesize}\n\\begin{align}\\label{room}\n \n \n\\hspace{-1.5mm}\\begin{array}{l}\n \\diff{\\xi}_1=\\big(\\alpha_{21}\\left(\\xi_2-\\xi_1\\right)+\\alpha_{31}\\left(\\xi_3-\\xi_1\\right)+\\alpha_{51}\\left(\\xi_5-\\xi_1\\right)+\\alpha_{e1}\\left(T_e-\\xi_1\\right)+\\alpha_{f1}\\left(T_{f1}-\\xi_1\\right)\\delta_{p2}\\big)\\diff{t}+\\left(\\sigma_{1,1}\\delta_{p1}+(1-\\delta_{p1})\\sigma_1\\right)\\xi_1\\diff{W^1_t},\\\\ \n\\diff{\\xi}_2=\\left(\\alpha_{12}\\left(\\xi_1-\\xi_2\\right)+\\alpha_{42}\\left(\\xi_4-\\xi_2\\right)+\\alpha_{e2}\\left(T_e-\\xi_2\\right)\\right)\\diff{t}+\\left(\\sigma_{2,1}\\delta_{p1}+(1-\\delta_{p1})\\sigma_2\\right)\\xi_2\\diff{W^2_t},\\\\\n\\diff{\\xi}_3=\\left(\\alpha_{13}\\left(\\xi_1-\\xi_3\\right)+\\alpha_{43}\\left(\\xi_4-\\xi_3\\right)+\\alpha_{e3}\\left(T_e-\\xi_3\\right)\\right)\\diff{t}+\\left(\\sigma_{3,1}\\delta_{p1}+(1-\\delta_{p1})\\sigma_3\\right)\\xi_3\\diff{W^3_t},\\\\\n\\diff{\\xi}_4=\\big(\\alpha_{24}\\left(\\xi_2-\\xi_4\\right)+\\alpha_{34}\\left(\\xi_3-\\xi_4\\right)+\\alpha_{64}\\left(\\xi_6-\\xi_4\\right)+\\alpha_{e4}\\left(T_e-\\xi_4\\right)+\\alpha_{f4}\\left(T_{f4}-\\xi_4\\right)\\delta_{p3}\\big)\\diff{t}+\\left(\\sigma_{4,1}\\delta_{p1}+(1-\\delta_{p1})\\sigma_4\\right)\\xi_4\\diff{W^4_t},\\\\ \n\\diff{\\xi}_5=\\left(\\alpha_{15}\\left(\\xi_1-\\xi_5\\right)+\\alpha_{e5}\\left(T_e-\\xi_5\\right)\\right)\\diff{t}+\\left(\\sigma_{5,1}\\delta_{p1}+(1-\\delta_{p1})\\sigma_5\\right)\\xi_5\\diff{W^5_t},\\\\\n\\diff{\\xi}_6=\\left(\\alpha_{46}\\left(\\xi_4-\\xi_6\\right)+\\alpha_{e6}\\left(T_e-\\xi_6\\right)\\right)\\diff{t}+\\left(\\sigma_{6,1}\\delta_{p1}+(1-\\delta_{p1})\\sigma_6\\right)\\xi_6\\diff{W^6_t},\\\\ \n\\end{array}\n \n\\end{align}\n\\end{footnotesize}where the terms $W_t^i$, $i=1,\\ldots,6$, denote standard Brownian motions, and $\\delta_{pi}=1$ if $i=p$ and $\\delta_{pi}=0$ otherwise.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=8cm]{room}\n\\end{center}\n\\caption{A schematic of the six-room building.}\n\\label{fig1}\n\\end{figure}\n\nNote that $\\xi_i$, $i=1,\\ldots,6$, denotes the temperature in each room, \n$T_e=10$ (degrees Celsius) is the external temperature, and $T_{f1}=T_{f4}=100$ are the temperatures of two heaters\\footnote{Here, we assume that at most one heater is on at each instant of time.} that can both be switched off ($p=1$), the first heater ($T_{f1}$) on and the second one ($T_{f4}$) off ($p=2$), or vice versa ($p=3$). 
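\n\nFor illustration purposes only, the first equation in \\eqref{room} can be translated into a mode-dependent Euler--Maruyama update. The following minimal sketch is not part of the formal development: the function and variable names are our own (hypothetical) choices, the dictionary of drift parameters and the noise constants take the values reported below, and the fixed step size is an assumption of the simulation scheme rather than of the model.\n\\begin{footnotesize}\n\\begin{verbatim}\nimport numpy as np\n\ndef xi1_step(xi, p, dt, rng, a, T_e=10.0, T_f1=100.0,\n             sigma_1_1=0.002, sigma_1=0.003):\n    # drift of room 1 in mode p; the heater term is active only if p = 2\n    drift = (a['21'] * (xi[1] - xi[0]) + a['31'] * (xi[2] - xi[0])\n             + a['51'] * (xi[4] - xi[0]) + a['e1'] * (T_e - xi[0])\n             + a['f1'] * (T_f1 - xi[0]) * (1.0 if p == 2 else 0.0))\n    # diffusion: sigma_{1,1} in mode p = 1, sigma_1 otherwise\n    diffusion = (sigma_1_1 if p == 1 else sigma_1) * xi[0]\n    dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment over dt\n    return xi[0] + drift * dt + diffusion * dW\n\\end{verbatim}\n\\end{footnotesize}\n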
\nThe drifts $f_p$ and diffusion terms $g_p$, $p = 1,2,3$, can be readily read off from \\eqref{room} and are affine and linear, respectively. \nThe parameters of the drifts are chosen as follows: $\\alpha_{21}=\\alpha_{12}=\\alpha_{13}=\\alpha_{31}=\\alpha_{42}=\\alpha_{24}=\\alpha_{34}=\\alpha_{43}=\\alpha_{15}=\\alpha_{51}=\\alpha_{46}=\\alpha_{64}=5\\times10^{-2}$, $\\alpha_{e1}=\\alpha_{e4}=5\\times10^{-3}$, $\\alpha_{e2}=\\alpha_{e3}=\\alpha_{e5}=\\alpha_{e6}=3.3\\times10^{-3}$, and $\\alpha_{f1}=\\alpha_{f4}=3.6\\times10^{-3}$. \nThe noise parameters are chosen as $\\sigma_{i,1}=0.002$ and $\\sigma_i=0.003$, for $i=1,\\ldots,6$. \n\nIt can be readily verified that the function $V(x,x')=\\sqrt{(x-x')^T(x-x')}$ satisfies the LMI condition (9) in \\cite{majid9} with $q=1$, $P_p=I_6$, and $\\widehat\\kappa_p=0.0076$, for any $p\\in\\{1,2,3\\}$. Hence, $V$ is a common $\\delta$-GAS-M$_1$ Lyapunov function for $\\Sigma$, satisfying conditions (i)-(iii) in Definition \\ref{delta_SGAS_Lya} with $q=1$, $\\underline\\alpha_p({r})=\\overline\\alpha_p({r})=r$, $\\forall r\\in{\\mathbb{R}}_0^+$, and $\\kappa_p=0.0038$, for any $p\\in\\{1,2,3\\}$. Using the results of Theorem \\ref{theorem2}, one gets that the function $\\beta(r,s)=\\mathsf{e}^{-\\kappa_p{s}}r$ satisfies property \\eqref{delta_SGUAS} for $\\Sigma$.\n\nFor a \\emph{source state}\\footnote{Note that here we computed the source state as $x_s=\\arg\\min_{x\\in{\\mathbb{R}}^n}\\max_{p\\in\\mathsf{P}}V(\\overline\\xi_{xp}(\\tau),x)$ in order to have the smallest upper bound for $\\overline\\eta$ as in \\eqref{upper_bound}.} $x_s=[18,17.72,17.72,18,17.46,17.46]^T$, a given sampling time $\\tau=30$ time units, and a selected precision $\\varepsilon=1$, the parameter $N$ for $\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma)$, based on inequality (\\ref{bisim_cond11}) in Theorem \\ref{main_theorem}, is obtained as $13$, and one gets $\\overline\\eta\\leq0.1144$, where $\\overline\\eta$ is given in \\eqref{eta}. Therefore, the resulting cardinality of the set of states for $\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma)$ is $3^{13}=1594323$.\n\nNow, consider that the objective is to design a control policy forcing the trajectories of $\\Sigma$, starting from the initial condition $x_0=[11.7,11.7,11.7,11.7,11.7,11.7]^T$, to reach the region $\\mathsf{D}=[19~22]^6$ in finite time and remain there forever. \nThis objective can be encoded via the LTL specification $\\Diamond\\Box\\mathsf{D}$. \n\nIn Figure \\ref{fig2}, we show several realizations of the trajectory $\\xi_{x_0\\upsilon}$ stemming from initial condition $x_0$ (top panels), \nas well as the corresponding evolution of the synthesized switching signal $\\upsilon$ (bottom panel). \nFurthermore, in Figure \\ref{fig3}, we show the average value over 10000 experiments of the distance in time of the solution process $\\xi_{x_0\\upsilon}$ to the set $\\mathsf{D}$, namely $\\left\\Vert\\xi_{x_0\\upsilon}(t)\\right\\Vert_\\mathsf{D}$, where the point-to-set distance is defined as $\\Vert{x}\\Vert_\\mathsf{D}=\\inf_{d\\in{\\mathsf{D}}}\\Vert x-d\\Vert$. 
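\n\nAs a side note, for the axis-aligned box $\\mathsf{D}=[19~22]^6$, the point-to-set distance used in Figure \\ref{fig3} admits a simple closed form: the closest point of $\\mathsf{D}$ is obtained by clipping each component onto the interval. The sketch below is a minimal illustration of this computation; the function name is ours.\n\\begin{footnotesize}\n\\begin{verbatim}\nimport numpy as np\n\ndef dist_to_box(x, lo=19.0, hi=22.0):\n    # ||x||_D = inf over d in D of ||x - d||; for a box the infimum\n    # is attained at the componentwise clipping of x onto [lo, hi]^n\n    return np.linalg.norm(x - np.clip(x, lo, hi))\n\nx0 = np.full(6, 11.7)   # initial condition of the case study\nprint(dist_to_box(x0))  # initial distance to D = [19 22]^6\n\\end{verbatim}\n\\end{footnotesize}\n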
\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=12cm]{overal}\n\\end{center}\n\\caption{A few realizations of the solution process $\\xi_{x_0\\upsilon}$ (top panel) and the corresponding evolution of the obtained switching signal $\\upsilon$ (bottom panel).}\n\\label{fig2}\n\\end{figure}\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=10cm]{mean}\n\\end{center}\n\\caption{The average values (over 10000 experiments) of the distance of the solution process $\\xi_{x_0\\upsilon}$ to the set $\\mathsf{D}$ in different vertical scales.}\n\\label{fig3}\n\\end{figure}\n\nTo compute the exact size of the symbolic model proposed in Theorem \\ref{main_theorem1}, we consider the dynamics of $\\Sigma$ over the subset $\\mathsf{W}=[11.7~22]^6\\subset{\\mathbb{R}}^6$. Note that using the sampling time $\\tau=30$, the results in Theorem \\ref{main_theorem1} cannot be applied because the precision $\\varepsilon$ has to be lower bounded by $2.7$ as in inequality \\eqref{lower_bound}. Using the larger precision $\\varepsilon=2.8$, the same sampling time $\\tau=30$, and the inequalities \\eqref{bisim_cond1} and \\eqref{bisim_cond}, we obtain the state space quantization parameter as $\\eta\\leq0.02$. Therefore, if one uses $\\eta=0.02$, the cardinality of the state set of the symbolic model $S_{\\mathsf{q}}(\\Sigma)$ is equal to $\\left(\\frac{22-11.7}{0.02}\\right)^6=1.8657\\times10^{16}$, which is much higher than that of $\\overline S_{\\overline{{\\mathsf{q}}}}(\\Sigma)$, i.e. $1594323$, while having an even larger (i.e., coarser) precision.\n\n\\begin{remark}\nBy considering the dynamics of $\\Sigma$ over the set $\\mathsf{W}$, a confidence level of at least $1-10^{-5}$, and using Hoeffding's inequality \\cite{hoeffding}, one can verify that the number of samples should be at least $74152$ to empirically compute the upper bound of $\\widehat\\eta$ in \\eqref{upper_bound4}. We compute $\\widehat\\eta\\leq0.1208$ when $x_s=[18~17.72~17.72~18~17.46~17.46]^T$, $N=13$, and $\\tau=30$. Using the results in Theorem \\ref{main_theorem3} and the same parameters $\\overline{\\mathsf{q}}$ as the ones in $\\overline{S}_{\\overline{\\mathsf{q}}}(\\Sigma)$, one obtains $\\varepsilon=0.6$ in \\eqref{bisim_cond3}. Therefore, $S_{\\overline{\\mathsf{q}}}(\\Sigma)$, with confidence at least $1-10^{-5}$, provides a less conservative precision than $\\overline{S}_{\\overline{\\mathsf{q}}}(\\Sigma)$, while having the same size as $\\overline{S}_{\\overline{\\mathsf{q}}}(\\Sigma)$. \n\\end{remark}\n\n\\begin{remark}\nAnother advantage of the second approach in comparison with the first one is that one can construct only the relevant part of the abstraction, given an initial condition and the specification, as was done in this example. \n\\end{remark}\n\n\\subsection{Multiple Lyapunov functions}\n Consider the following stochastic switched system borrowed from \\cite{girard2} and additionally affected by noise: \n\\begin{small}\n\\begin{align}\\nonumber\n\\Sigma:\n\\left\\{\n\\begin{array}{l}\n \\diff\\xi_1=\\left(-0.25\\xi_1+p\\xi_2+(-1)^p0.25\\right)\\diff{t}+0.01\\xi_1\\diff{W_t^1},\\\\\n \\diff\\xi_2=\\left(\\left(p-3\\right)\\xi_1-0.25\\xi_2+(-1)^p\\left(3-p\\right)\\right)\\diff{t}+0.01\\xi_2\\diff{W_t^2},\n \\end{array}\\right.\n\\end{align}\n\\end{small}where $p=1,2$. \nThe noise-free version of $\\Sigma$ is endowed with stable subsystems; \nhowever, it can globally exhibit unstable behaviors for some switching signals \\cite{girard2}. 
\nSimilarly, $\\Sigma$ does not admit a common $\\delta$-GAS-M$_q$ Lyapunov function. \nWe are left with the option of seeking multiple Lyapunov functions. \nIt can indeed be shown that each subsystem $\\Sigma_p$ admits a $\\delta$-GAS-M$_1$ Lyapunov function of the form \\begin{small}$V_p(x_1,x_2)=\\sqrt{(x_1-x_2)^TP_p(x_1-x_2)}$\\end{small}, with \\begin{small}$P_1=\\left[ {\\begin{array}{cc}\n2&0\\\\\n0&1\\\\\n \\end{array} } \\right]$\\end{small} and \\begin{small}$P_2=\\left[ {\\begin{array}{cc}\n1&0\\\\\n0&2\\\\\n \\end{array} } \\right].$\\end{small}\nThese $\\delta$-GAS-M$_1$ Lyapunov functions have the following characteristics: \n$\\underline\\alpha({r})=r$, $\\overline\\alpha({r})=2r$, $\\kappa=0.2498$. \nNote that $V^2_p(x_1,x_2)$ is also a $\\delta$-GAS-M$_2$ Lyapunov function for $\\Sigma_p$, where $p\\in\\{1,2\\}$, satisfying the requirements in Lemma \\ref{lem:moment est}. \nFurthermore, the assumptions of Theorem \\ref{multiple_lyapunov} hold by choosing a parameter $\\mu=\\sqrt{2}$ and a dwell time $\\tau_d=2>\\log{\\mu}\/\\kappa$. \nIn conclusion, the stochastic switched system $\\Sigma$ is $\\delta$-GUAS-M$_1$. \n \nLet us work within the set $\\mathsf{D}=[-5,~5]\\times[-4,~4]$ of the state space of $\\Sigma$. \nFor a sampling time $\\tau=0.5$, using inequality (\\ref{lower_bound_mul}), the precision $\\varepsilon$ is lower bounded by $1.07$. \nFor a chosen precision $\\varepsilon=1.2$, the discretization parameter $\\eta$ of $S_{{\\mathsf{q}}}(\\Sigma)$, \nobtained from Theorem \\ref{main_theorem2}, \nis equal to $0.0083$. \nThe resulting number of states in $S_{{\\mathsf{q}}}(\\Sigma_{\\tau_d})$ is $9310320$, occupying 3.4 MB of memory, where the computation of the abstraction $S_{\\mathsf{q}}(\\Sigma_{\\tau_d})$ has been performed via the software tool \\textsf{CoSyMA}~\\cite{CoSyMA} on an iMac with CPU $3.5$GHz Intel Core i$7$.\nThe CPU time needed for computing the abstraction amounted to $22$ seconds. \n \nConsider the objective of designing a controller (switching signal) forcing the first moment of the trajectories of $\\Sigma$ to stay within $\\mathsf{D}$, \nwhile always avoiding the set $\\mathsf{Z}=[-1.5,1.5]\\times[-1,1]$. \nThis corresponds to the following LTL specification: $\\Box\\mathsf{D}\\wedge\\Box\\neg{\\mathsf{Z}}$. \nThe CPU time needed for synthesizing the controller amounted to $12.46$ seconds. \nFigure \\ref{fig22} displays several realizations of the closed-loop trajectory $\\xi_{x_0\\upsilon}$, \nstemming from the initial condition $x_0=(-4,-3.8)$ (left panel), \nas well as the corresponding evolution of the switching signal $\\upsilon$ (right panel). \nFurthermore, Figure \\ref{fig22} (middle panels) shows the average value (over 10000 experiments) of the distance in time of the solution process $\\xi_{x_0\\upsilon}$ to the set $\\mathsf{D}\\backslash{\\mathsf{Z}}$, namely $\\left\\Vert\\xi_{x_0\\upsilon}(t)\\right\\Vert_{\\mathsf{D}\\backslash{\\mathsf{Z}}}$. \nNotice that the empirical average distance is significantly lower than the theoretical precision $\\varepsilon=1.2$. \n\n\\begin{figure}\n\\includegraphics[width=13cm]{overal1}\n\\caption{Several realizations of the closed-loop trajectory $\\xi_{x_0\\upsilon}$ with initial condition \\mbox{$x_0=(-4,-3.8)$} (left panel). \nAverage values (over 10000 experiments) in time of the distance of the solution process $\\xi_{x_0\\upsilon}$ to the set $\\mathsf{W}=\\mathsf{D}\\backslash{\\mathsf{Z}}$, in different vertical scales (middle panel). 
\nEvolution of the synthesized switching signal $\\upsilon$ (right panel).}\n\\label{fig22}\n\\end{figure}\n\nNote that using the same sampling time $\\tau=0.5$, the same precision $\\varepsilon=1.2$, and the inequalities \\eqref{bisim_cond_mul0} in Theorem \\ref{main_theorem33}, we obtain the temporal horizon as $N=22$. Therefore, the cardinality of the state set of the symbolic model $\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma_{\\tau_d})$ is equal to $2^{22}=4194304$, which is roughly half that of $S_{{\\mathsf{q}}}(\\Sigma_{\\tau_d})$, i.e. $9310320$.\n\n\n\\section{Conclusions}\nThis work has shown that any stochastic switched system $\\Sigma$ (resp. $\\Sigma_{\\tau_d}$), admitting a common (multiple) $\\delta$-GAS-M$_q$ Lyapunov function(s), and within a compact set of states, admits an approximately bisimilar symbolic model $S_{\\mathsf{q}}(\\Sigma)$ (resp. $S_{\\mathsf{q}}(\\Sigma_{\\tau_d})$) requiring a space discretization or $S_{\\overline{{\\mathsf{q}}}}(\\Sigma)\/\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma)$ (resp. $S_{\\overline{{\\mathsf{q}}}}(\\Sigma_{\\tau_d})\/\\overline{S}_{\\overline{{\\mathsf{q}}}}(\\Sigma_{\\tau_d})$) without any space discretization. Furthermore, we have provided a simple criterion by which one can choose the most suitable of the two proposed abstraction approaches (based on the size of the abstraction) for a given stochastic switched system.\nThe constructed symbolic models can be used to synthesize controllers enforcing complex logic specifications, \nexpressed via linear temporal logic or as automata on infinite strings. \n\n\n\\section{Acknowledgements}\nThe authors would like to thank Ilya Tkachev for fruitful technical discussions. \n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\nTeleoperational control allows the operator to complete tasks at a safe, remote location. This is accomplished by having a human operator send control signals from a remote console, traditionally known as \\textit{master}, to the robot performing the task, traditionally known as \\textit{slave}. The slave then returns feedback to the master through sensors, such as cameras and haptic devices, which gives the operator the feeling of \\textit{telepresence} through displays and force feedback. Through improvement of telepresence and robotic systems, teleoperational systems are becoming commonplace in a wide range of applications such as underwater exploration, space robotics, mobile robots, and telesurgery \\cite{TeleoperationHistory, yipDasJournal}. \n\nWith the advent of surgical robots such as the da Vinci\\textregistered{} Surgical System, attempts have been made to investigate the feasibility of remote telesurgery. Anvari et al. in 2005 were the first to establish a remote telesurgical service and reported on 21 remote laparoscopic surgeries with an observed delay of 135-140 msec over a 400 km distance \\cite{firstTeleRobotic}. The signal latency grows as the distance increases, and prior reports have cited 300 msec as the maximum time delay at which surgeons began to consider the operation unsafe \\cite{TransatlanticSurgery, LatencyEffects}. When using satellite communication between London and Toronto, a report measured a delay of $560.7 \\pm 16.5$ msec \\cite{reasonForDelay}. This unavoidable time delay is the greatest obstacle for safe and effective remote telesurgery because it leads to overshoot and oscillations. 
These undesirable effects have been observed with as little as 50 msec of time delay in surgical tasks \\cite{SteadyHand, TreatmentPlanning}. The challenges of teleoperating under delay have been investigated for over 50 years, and the field is too expansive to cover in this paper. A broader overview can be found in Hokayem and Spong's historical survey \\cite{TeleoperationHistory}.\n\nThe first study on the effects of delay in teleoperational systems was by Ferrell in 1965, who found that experienced operators use a strategy called move and wait \\cite{MoveAndWait}. A stable system is realized by inputting a new control command and then waiting to see the effect. Ferrell and Sheridan then developed supervisory control to address the problem of delay \\cite{supervisoryControl}. This gives a level of autonomy to the system such that the operator supervises rather than explicitly inputting trajectory motions. Therefore, supervisory control can only be implemented practically at this time in structured environments.\n\nIn the case of teleoperating with haptic feedback, also known as \\textit{bilateral} teleoperation, delay has been experimentally and theoretically shown to create instability within the system \\cite{experimentalInstability, theoryInstability}. Techniques such as wave variables have been used to dampen the unstable overshoot in bilateral control under delay \\cite{waveVariables}. However, they have been shown to increase the time to complete a task \\cite{yipBadHaptic, yipBadHaptic_2}, and therefore teleoperating without haptic feedback can often be a better alternative.\n\nPredictive displays circumvent the deleterious effects from teleoperating under delay by giving the operator immediate feedback through a virtual prediction. Early predictive displays focused on space robotics, where delays can reach up to 7 sec \\cite{spaceTeleopDelay2} \\cite{spaceTeleopDelay3}. Winck et al. recently applied this approach by creating an entire predicted virtual environment that is displayed to the operator and is also used to generate haptic feedback \\cite{spaceTeleopDelay}. Currently, this is not applicable to telesurgery because of the unique 3D geometry found in a surgical environment. The scale of the operation requires high precision tracking to create reliable predictions, and obstacles such as tissue cannot be modeled as rigid bodies nor accurately predicted.\n\nIn this paper, we present new motion scaling solutions for conducting telesurgery under delay that:\n\\begin{enumerate}\n\t\\item reduce errors to improve patient safety,\n\t\\item have intuitive control since the delay can change from one operation to another due to variable distances, and\n\t\\item have a simple implementation so they can be easily deployed to any teleoperated system.\n\\end{enumerate}\n\nTo show the performance of the solutions with statistical significance, we conducted a user study on the da Vinci\\textregistered{} Surgical System. The user study was designed such that there is independence from unwanted variables such as participant exhaustion and experience, and such that the best-performing solution has a short learning curve and is therefore intuitive. Both the time to complete the task and an error metric were used to evaluate task performance, and we measured that our proposed solutions decreased the number of errors at the cost of time. 
In fact, 16 out of the 17 participants from the user study performed best, with regard to error rate, using our proposed solutions.\n\n\\section{Methods} \n\nTeleoperation systems utilize a scaling factor between the master-arm movements and the slave-arm movements to adapt the teleoperator to the slave workspace. A general teleoperation system with delay, shown in Fig. \\ref{fig:flowChart}, can be expressed as:\n\\begin{equation}\n\ts_m[n] = scale_m(m[n] - m[n-1]) + s_m[n-1]\n\\end{equation}\n\\begin{equation}\n\ts[n] = s_m[n-n_d\/2]\n\\end{equation}\nwhere $scale_m$ is the scaling factor previously described. It is worth noting that teleoperation systems described in this manner using position control, such as the da Vinci\\textregistered{} Surgical System, only apply this scaling to position and not orientation, rather mapping orientation commands one-to-one so there is no obvious mismatch between user wrist orientation and robot manipulator orientations. The solutions we present are all modifications of Equations (1) and (2).\n\n\\begin{figure}[b]\n\t\\centering\n\t\\includegraphics[width=0.49\\textwidth]{Teleop_Flowchart.png}\n\t\\caption{Flowchart of a teleoperation system with round trip delay of $d$ where $m$ is the master's position, $s_m$ is the target slave position from the master, $s$ is the slave position, and $I$ is the feedback to the operator. For the sake of simplicity, the entire system is assumed to have a sampling rate of $f_s$, and we let $n_d = f_sd$.}\n\t\\label{fig:flowChart}\n\\end{figure}\n\n\\subsection{Constant Scaling Solution}\n\nThe first solution simply reduces the scaling factor in Equation (1). Lowering the scaling factor in robotic surgery has been shown to increase task completion time and improve accuracy \\cite{motionScaling1, motionScaling2}. We extend this to teleoperating under delay and hypothesize that decreasing the constant scale will reduce the number of errors at the cost of time when operating under delay. This idea is similar to the move and wait strategy, creating stability by slowing down the motions. However, here the motions are slowed continuously, while the move and wait strategy discretizes the motions and requires the operator to have experience working under the delay. \n\n\\subsection{Positional Scaling Solution}\n\nPositional scaling builds on the constant scaling solution by decreasing the scaling as the slave arms move towards an obstacle. This reduces the time cost of the constant scaling solution by slowing down the operator's motions only in areas of the workspace that require higher precision. Since this deals with obstacle detection, the positional scaling will be implemented on the slave side through the following equations:\n\\begin{equation}\n\tscale_s = \\text{min}(maxscale, \\text{max}(minscale, k*r[n]))\n\\end{equation}\n\\begin{equation}\n\ts[n] = scale_s(s_m[n-n_d\/2] - s_m[n-n_d\/2-1]) + s[n-1]\n\\end{equation}\nwhere $r$ is the distance from $s_m$ to the nearest obstacle, $k$ is the rate at which $scale_s$ changes, and $minscale$ and $maxscale$ set the lower and upper bounds of $scale_s$, respectively. This is paired with Equation (1), and can be thought of as a secondary layer of scaling based on proximity to an obstacle (a minimal sketch of these update rules is given at the end of this subsection).\n\nTo implement this scaling method, computing the distance $r$ requires the locations of the obstacles in the environment. Methods such as feature-based 3D tissue reconstruction from \\cite{tissue3DReconstruction} or dense pixel-based depth reconstruction can be used. 
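\n\nThe sketch below illustrates Equations (1), (3) and (4) in a minimal form. The helper names are ours, the buffering that delays $s_m$ by $n_d\/2$ samples is left to the caller, and the default values of $k$, $minscale$ and $maxscale$ mirror the ones chosen for the user study described later.\n\\begin{footnotesize}\n\\begin{verbatim}\ndef master_update(m_now, m_prev, s_m_prev, scale_m):\n    # Equation (1): incremental position mapping on the master side\n    return scale_m * (m_now - m_prev) + s_m_prev\n\ndef positional_scale(r, k=100.0, minscale=0.5, maxscale=1.0):\n    # Equation (3): proximity-based scaling, clamped to its bounds\n    return min(maxscale, max(minscale, k * r))\n\ndef slave_update(s_m_now, s_m_prev, s_prev, scale_s):\n    # Equation (4): re-scale the delayed master target on the slave side\n    return scale_s * (s_m_now - s_m_prev) + s_prev\n\\end{verbatim}\n\\end{footnotesize}\n\n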
In the presented user study, the task environment is known and distances $r$ are found analytically. \n\n\\begin{figure*}[t]\n\t\\centering\n\t\\begin{subfigure}{.24\\textwidth}\n \t\t\\centering\n \t\t\\includegraphics[width=1\\linewidth]{example_error1.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}{.24\\textwidth}\n\t\t\\centering\n \t\t\\includegraphics[width=1\\linewidth]{example_error2.png}\n\t\\end{subfigure}\n\n\t\\begin{subfigure}{.24\\textwidth}\n \t\t\\centering\n \t\t\\includegraphics[width=1\\linewidth]{example_error3.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}{.24\\textwidth}\n\t\t\\centering\n \t\t\\includegraphics[width=1\\linewidth]{example_error4.png}\n\t\\end{subfigure}\n\t\\caption{Photos of example errors from the user study. From left to right the errors are: stretch ring on peg, touch peg, drop ring, and stretch ring during handoff.}\n\t\\label{fig:sampleErrors}\n\\end{figure*}\n\n\\subsection{Velocity Scaling Solution}\n\nVelocity scaling is founded on the idea that an operator will naturally move slower when higher accuracy is required. This is inspired by the software controlling computer mice and trackpads, which natively utilize this strategy by default (unless disabled by the user). Combining velocity scaling with the constant scaling solution's hypothesis, the scaling factor in Equation (1) is increased proportionally to the input translational velocity as shown:\n\\begin{equation}\n\tscale_m = v_1 + v_2 \\big| \\big| \\dot{m}_m[n] \\big| \\big|\n\\end{equation}\nwhere $v_1$ is the base scaling, $v_2$ is the rate at which the additional velocity-based scaling is applied, $\\dot{m}$ is the translational velocity, and $|| \\cdot ||$ is the magnitude. Equation (2) is then used to set the slave position on the slave side. This allows for a low base scaling to get the benefits of the constant scaling solution when the operator is making small, precise motions. It also increases the scaling when the operator makes large motions, which require less precision, in order to reduce the cost of time.\n\nTo find $\\dot{m}$ on a real system where there is noise, the following filter can be used: \n\\begin{equation}\n\t\\dot{m}_m[n] = \\frac{m[n] + m[n-1] - m[n-2] - m[n-3]}{4\/f_s} \n\\end{equation}\\\\\nThis represents a running average with weights 0.25, 0.5, and 0.25 for three individual velocity measurements.\n\n\\section{Experiment}\n\nTo measure the effectiveness of the solutions, a user study with 17 participants was conducted on the da Vinci\\textregistered{} Surgical System. The delayed feedback to the user comes from a stereoscopic 1080p laparoscopic camera running at 30 FPS. The camera feed is displayed to a console for stereo viewing by the operator. Both pairs of master and slave arms have seven degrees of freedom. A modified version of the Open-Source da Vinci Research Kit \\cite{DVRK} was used on a computer with an Intel\\textregistered{} Core\\texttrademark{} i9-7940X Processor and NVIDIA's GeForce GTX 1060.\n\n\nTo ensure that the solutions are tested in a realistic, high-delay condition, a round trip delay of 750 msec was simulated for the delayed environment in the user study. This follows a recent report showing that satellite communication between London and Toronto measured a round trip delay of $560.7 \\pm 16.5$ msec for a telesurgical task \\cite{reasonForDelay}. \n\n\\subsection{Task}\nA peg transfer task is used as a test scenario. Fig. \\ref{fig:taskEnv} shows a photo from the endoscope used by the operator in our study. 
The task involves:\n\n\\begin{enumerate}\n\\item Lifting a rubber o-ring from either the front left or right peg with the corresponding left or right arm\n\\item Passing the ring to the other arm\n\\item Placing the ring on the other front peg\n\\item Repeating steps 1 to 3 for the back pair of pegs\n\\end{enumerate}\n\n\\begin{figure}[b]\n\t\\centering\n\t\\includegraphics[width=0.48\\textwidth]{TaskEnvironment.png}\n\t\\caption{Task environment for the user study as seen from the endoscope. The scale of the environment ensures that participants must be accurate in order to perform well with regard to errors.}\n\t\\label{fig:taskEnv}\n\\end{figure}\n\nThis task and environment were chosen because they involve complex motions such as hand-off, contain regions where larger movements are safe, and yet still require precise motions during parts of the trajectory due to the tight fit between the rings and the pegs.\n\n\\subsection{Metrics}\n\nThe metrics to evaluate task performance are the time to complete the task and an error metric. The error metric is an enumeration of different types of errors that we observed. These errors are weighted according to Table 1 and summed to get a weighted error. The weights were chosen such that the severity of the error would be reflected properly in the error metric. Example errors are shown in Fig. \\ref{fig:sampleErrors}. A sample video was created and shown to the participants before the study to show the task and the different types of errors.\n\n\\begin{table}[h]\n\\caption{\\\\Weights associated with type of error}\n\\begin{center}\n\\begin{tabular}{ |c|c| } \n\\hline \n\t\\textbf{Error} & \\textbf{Weight} \\\\ \\hline\n\tTouch peg & 1 \\\\ \\hline\n\tTouch ground & 2 \\\\ \\hline\n\tStretch ring during hand-off for a second or less & 2 \\\\ \\hline\n\tDrop ring & 3 \\\\ \\hline\n\tStretch ring on peg for a second or less & 4 \\\\ \\hline\n\tStretch ring for an additional second & 4 \\\\ \\hline\n\tStretch\/move peg & 10 \\\\ \\hline\n\tKnock down peg & 20 \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\subsection{Procedure}\nThe scaling scenarios for the participants to complete the peg transfer task are as follows in both 0 msec and 750 msec round trip delay:\n\\begin{enumerate}\n\t\\item Constant scaling of 0.3\n\t\\item Constant scaling of 0.2\n\t\\item Constant scaling of 0.1\n\t\\item Position scaling\n\t\\item Velocity scaling\n\\end{enumerate}\n\nFor constant scaling, the corresponding scaling value listed above was used for $scale_m$. For positional scaling, the following simplification was made: $r$ is the minimum 2D distance from the center of each of the four pegs to the target tool-tip position projected on the plane constructed from the top of the four pegs ($ax + by + z = c$). The following equations are used to find the least-squares solution of the plane:\n\n\\begin{equation}\n A = \n\\begin{bmatrix}\nx_1 & y_1 & -1 \\\\\nx_2 & y_2 & -1 \\\\\nx_3 & y_3 & -1 \\\\\nx_4 & y_4 & -1 \\\\\n\\end{bmatrix}\n\\end{equation}\n\\begin{equation}\n \\begin{bmatrix}\n a & b & c\n \\end{bmatrix}^T\n = -(A^T A)^{-1}A^T\n \\begin{bmatrix}\n z_1 & z_2 & z_3 & z_4\n \\end{bmatrix}^T\n\\end{equation}\nwhere $\\begin{bmatrix} x_i & y_i & z_i \\end{bmatrix}^T$ is the position of the center of the $i$-th peg. 
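\n\nA minimal NumPy sketch of this plane fit is given below; it assumes the four peg centers are stacked in a $4\\times3$ array and are not collinear, so that $A^TA$ is invertible, and the function name is ours.\n\\begin{footnotesize}\n\\begin{verbatim}\nimport numpy as np\n\ndef fit_peg_plane(pegs):\n    # pegs: 4x3 array of peg-center positions [x_i, y_i, z_i]\n    A = np.column_stack([pegs[:, 0], pegs[:, 1], -np.ones(len(pegs))])\n    z = pegs[:, 2]\n    # Equation (8): least-squares solution of a*x + b*y + z = c\n    a, b, c = -np.linalg.solve(A.T @ A, A.T @ z)\n    return a, b, c\n\\end{verbatim}\n\\end{footnotesize}\n\n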
The following is computed in real time to project the target tool-tip position and find $r$:\n\\begin{equation}\n e = \\frac{\\begin{bmatrix} a & b & 1 \\end{bmatrix}^T}{\\big|\\big|\\begin{bmatrix} a & b & 1 \\end{bmatrix}^T\\big|\\big|}\n\\end{equation}\n\\begin{equation}\n s_p[n] = s_m[n-n_d\/2] - \\begin{bmatrix} 0 & 0 & c \\end{bmatrix}^T - \\text{ proj}_e s_m[n-n_d\/2]\n\\end{equation}\n\\begin{equation}\n r[n] = \\text{ }\\underset{i}{\\operatorname{min}}\\text{ } \\big|\\big| s_p[n] - \\begin{bmatrix} x_i & y_i & z_i \\end{bmatrix}^T \\big|\\big|\n\\end{equation}\n\nThrough experimentation, $scale_m = 0.2$, $k = 100$, $minscale= 0.5$, and $maxscale = 1.0$ were chosen for positional scaling. Fig. \\ref{fig:positionScalingPlot} shows a generated map of the value of $scale_s$ using these parameters. $scale_s$ is calculated and applied to both arms individually. These positional scaling parameters mean that the total scaling from master to slave ranges from 0.1 to 0.2.\n\n\\begin{figure}[b]\n\t\\centering\n\t\\includegraphics[width=0.48\\textwidth]{pos_scale_heat_map.png}\n\t\\caption{Map showing the value of $scale_s$ for the positional scaling used in the user study. This map is from a bird's-eye view of the pegs, which are located at: (30, 40), (30, 60), (70, 40), and (70, 60). }\n\t\\label{fig:positionScalingPlot}\n\\end{figure}\n\nFor velocity scaling, the values of $v_1 = 0.1$ and $v_2 = 100$ were found through experimentation. Similar to positional scaling, this is calculated and applied to both arms individually.\n\nThe procedure for each participant was as follows:\n\\begin{enumerate}\n\t\\item Practice: repeat the task twice with constant scaling of 0.2 under no delay\n\t\\item Record: perform the task with constant scaling of 0.2 under no delay\n\t\\item Practice: select a new scaling method at random under no delay and take 20 seconds to become accustomed to the new scenario\n\t\\item Record: perform the task with the previously selected scaling solution\n\t\\item Repeat steps 3 and 4 with all scaling solutions \n\t\\item Repeat steps 3 and 4 with constant scaling of 0.2 \n\t\\item Repeat 1-6 with round trip delay of 750 msec.\n\\end{enumerate}\n\nStep 1 ensures that participants get over the learning curve of the task and the system. By comparing the results of steps 2 and 6, we can ensure that the user overcame the learning curve in step 1 and has not degraded in performance due to exhaustion. This is critical since it shows that the performance of a participant is only affected by the delay and different scaling methods. To further break any potential correlation, the randomization of the order of scaling methods in steps 3 and 4 was used. Each participant has only 20 seconds to practice in the new scenario (step 3), so that solutions achieving the best results can be considered intuitive.\n\n\n\\section{Results}\n\nTo show there is no significant change in performance between steps 2 and 6 of the procedure, the results are compared using two-sided paired-sample t-tests. Under no delay, $p = 0.670$ and $p = 0.054$ are computed for the weighted error and the time to complete the task, respectively. For the roundtrip delay of 750 msec, $p = 0.633$ and $p = 0.192$ are computed for the weighted error and the time to complete the task, respectively. Therefore, no statistically significant ($p<0.05$) change in performance was measured. 
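\n\nFor reference, such $p$-values can be reproduced with a standard statistics package; the sketch below uses synthetic per-participant data purely as a placeholder for the recorded measurements.\n\\begin{footnotesize}\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import ttest_rel\n\nrng = np.random.default_rng(0)\n# placeholder weighted errors of the 17 participants in steps 2 and 6\nerrors_step2 = rng.normal(13.0, 8.0, size=17)\nerrors_step6 = errors_step2 + rng.normal(0.0, 2.0, size=17)\n\n# paired-sample t-test (two-sided is the scipy default)\nt_stat, p_value = ttest_rel(errors_step2, errors_step6)\nprint(p_value)  # p >= 0.05: no statistically significant change\n\\end{verbatim}\n\\end{footnotesize}\n\n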
Combining this result with the randomization of order in step 3 of the procedure implies that the measured results will be uncorrelated with participant experience and exhaustion and only affected by the delay and scaling solution.\n\nThe results showing which solution performed best and second best for each individual participant under delay are shown in Fig. \\ref{fig:bestScaling}. Only one out of seventeen participants performed best, with regard to weighted error, using the baseline scalings (constant scaling of 0.2 and 0.3). Furthermore, from Fig. \\ref{fig:bestScaling} it is evident that even when including the second-best performing solution, the vast majority of participants performed better with our proposed solutions with regard to weighted error. \n\nThe statistics of the results are shown in Tables 2 and 3 for roundtrip delays of 750 msec and 0 sec, respectively. The $p$-values are generated by comparing against the baseline scaling with a two-sided paired t-test to show if the solution had a statistically significant ($p < 0.05$) effect on performance. The box plots of the results are shown at the end of the paper in Fig. \\ref{fig:allDataPlots}. All the proposed solutions do achieve statistically significantly lower weighted error under delay except for positional scaling compared against constant scaling of 0.2. The proposed solutions were also measured to cause a statistically significant increase in time under delay when compared against constant scaling of 0.2. Therefore, the solutions are working as intended, increasing the accuracy at the cost of time. \n\n\\begin{figure*}[t]\n\t\\centering\n\t\\begin{subfigure}{.4\\textwidth}\n \t\t\\centering\n \t\t\\includegraphics[width=1\\linewidth]{best_scaling_participant_error.png}\n\t\\end{subfigure}\n\t\\qquad\n\t\\qquad\n\t\\begin{subfigure}{.4\\textwidth}\n\t\t\\centering\n \t\t\\includegraphics[width=1\\linewidth]{best_scaling_participant_time.png}\n\t\\end{subfigure}\n\t\\caption{Counts of participants for which scaling solution performed best and second best from the user study under a roundtrip delay of 750 msec. Left is weighted error and right is time to complete task.}\n\t\\label{fig:bestScaling}\n\\end{figure*}\n\n\\begin{table*}[t]\n\\centering\n\\caption{\\\\Statistics from user study trials under a roundtrip delay of 750 msec. $p$-value is generated from two-sided paired t-test.}\n\\def\\arraystretch{1.25}\n\\setlength\\tabcolsep{1.25em}\n\\begin{tabular}{c|ccc|ccc}\n & \\multicolumn{3}{c|}{\\textbf{Weighted Error}} & \\multicolumn{3}{c}{\\textbf{Time (sec)}} \\\\ \n & $mean \\pm std$ & $p$-value & $p$-value & $mean \\pm std$ & $p$-value & $p$-value \\\\ \n & & vs. c = 0.3 & vs. c = 0.2 & & vs. c = 0.3 & vs. c = 0.2 \\\\\\hline\n \\textbf{c = 0.3} & 16.4 $\\pm$ 13.3 & --- & 0.305 & 106 $\\pm$ 32.0 & --- & 0.224 \\\\\n \\textbf{c = 0.2} & 13.2 $\\pm$ 8.22 & 0.305 & --- & 92.5 $\\pm$ 31.8 & 0.224 & --- \\\\\n \\textbf{c = 0.1} & 9.29 $\\pm$ 9.62 & 0.0149 & 0.0480 & 131 $\\pm$ 42.8 & 0.000316 & $<0.0001$ \\\\\n \\textbf{pos.} & 12.3 $\\pm$ 14.6 & 0.0245 & 0.572 & 116 $\\pm$ 38.0 & 0.173 & 0.0474 \\\\\n \\textbf{vel.} & 9.31 $\\pm$ 8.46 & 0.00926 & 0.00114 & 113 $\\pm$ 35.1 & 0.284 & 0.00617 \\\\\n\\end{tabular}\n\\end{table*}\n\n\\begin{table*}[t]\n\\centering\n\\caption{\\\\Statistics from user study trials under a roundtrip delay of 0 sec. 
$p$-value is generated from two-sided paired t-test.}\n\\def\\arraystretch{1.25}\n\\setlength\\tabcolsep{1.25em}\n\\begin{tabular}{c|ccc|ccc}\n & \\multicolumn{3}{c|}{\\textbf{Weighted Error}} & \\multicolumn{3}{c}{\\textbf{Time (sec)}} \\\\ \n & $mean \\pm std$ & $p$-value & $p$-value & $mean \\pm std$ & $p$-value & $p$-value \\\\ \n & & vs. c = 0.3 & vs. c = 0.2 & & vs. c = 0.3 & vs. c = 0.2 \\\\\\hline\n \\textbf{c = 0.3} & 6.00 $\\pm$ 5.60 & --- & 0.266 & 40.3 $\\pm$ 13.7 & --- & 0.320 \\\\\n \\textbf{c = 0.2} & 4.47 $\\pm$ 3.40 & 0.266 & --- & 44.3 $\\pm$ 17.6 & 0.320 & --- \\\\\n \\textbf{c = 0.1} & 5.35 $\\pm$ 5.73 & 0.686 & 0.486 & 72.7 $\\pm$ 27.9 & $<0.0001$ & $<0.0001$ \\\\\n \\textbf{pos.} & 6.12 $\\pm$ 5.01 & 0.833 & 0.243 & 60.0 $\\pm$ 20.5 & 0.000246 & 0.000100 \\\\\n \\textbf{vel.} & 4.76 $\\pm$ 4.49 & 0.259 & 0.784 & 53.2 $\\pm$ 24.5 & 0.0123 & 0.0965 \\\\\n\\end{tabular}\n\\end{table*}\n\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\begin{subfigure}{0.4\\textwidth}\n \\centering\n \\includegraphics[width=\\linewidth]{box_plot_error_results_d0.png}\n \\end{subfigure} \n \\qquad\\qquad\n \\begin{subfigure}{0.4\\textwidth} \n \\centering \n \\includegraphics[width=\\linewidth]{box_plot_error_results_d750.png}\n \\end{subfigure}\n \\vskip\\baselineskip\n \\begin{subfigure}{0.4\\textwidth} \n \\centering \n \\includegraphics[width=\\linewidth]{box_plot_time_results_d0.png}\n \\end{subfigure}\n \\qquad\\qquad\n \\begin{subfigure}{0.4\\textwidth} \n \\centering \n \\includegraphics[width=\\linewidth]{box_plot_time_results_d750.png}\n \\end{subfigure}%\n \\caption{Performance results of all tested scaling methods from the user study. Plots on the left are results from no delay and plots on the right are results with a roundtrip delay of 750 msec.} \n \n \\label{fig:allDataPlots}\n\\end{figure*} \n\n\\section{Discussion}\n\nThe user study results of constant scaling of 0.1 under delay match our initial hypothesis that decreasing constant scaling slows down the operator's motions to achieve higher accuracy. As seen in Table 2, constant scaling of 0.1 yields a statistically significant decrease in errors at the cost of increasing the time to complete the task. However, the hypothesis does not appear to hold true when comparing the performance of constant scaling of 0.3 and 0.2 under delay. We believe this is because participants had so many errors, such as ring drops, when using constant scaling of 0.3, that it severely affected their time to complete the task.\n\nBy comparing the relative task completion times of positional scaling and constant scaling of 0.1 under delay, it is evident that positional scaling did decrease the cost of time as intended via dynamic scaling. However, positional scaling did not retain the weighted error performance of constant scaling of 0.1 when under delay. We believe this occurs because our implementation of positional scaling does not count the slave arms as obstacles. Therefore, during the ring pass, which typically occurs in the center of the plot in Fig. \\ref{fig:positionScalingPlot}, there is little to no additional scaling to improve the accuracy. The data from the user study supports this since 2 ring stretches and 7 ring drops during hand off were recorded for positional scaling under delay, and 0 ring stretches and 2 ring drops during hand off were recorded for constant scaling of 0.1 under delay. \n\nThe final result is the performance of velocity scaling, which performed the best of all of the proposed solutions under delay. 
It has a weighted-error performance similar to that of constant scaling of 0.1 at a lower cost of time. Under delay, velocity scaling on average reduced the weighted error rate by 43\\% and 29\\%, while increasing the time to complete the task by a statistically insignificant margin and by 22\\%, when compared against constant scaling of 0.3 and 0.2, respectively.\n\n\\section{Conclusion}\n\nThe results from the user study confirm, with statistical significance, our original hypothesis, which states that decreasing the constant scaling factor will improve accuracy at the cost of time. Velocity scaling was shown to improve on this by achieving the same accuracy with a lower cost of time. Positional scaling did not perform as well, and it has the added challenge of requiring environmental information. Participants were also given little time to adjust to the proposed scaling solutions, so the performance improvements imply that they are intuitive. Furthermore, the simplicity of the constant scaling and velocity scaling solutions allows them to be easily deployed on any teleoperational system under delay. Future work involves dynamically learning the appropriate scales for individual operators and delay conditions.\n\n\\section{Acknowledgements}\nWe thank all the participants from the user study for their time, all the members of the Advanced Robotics and Controls Lab at the University of California San Diego for the intellectual discussions and technical help, and the UCSD Galvanizing Engineering in Medicine (GEM) program and the US Army AMEDD Advanced Medical Technology Initiative (AAMTI) for the funding.\n\\balance\n\\bibliographystyle{ieeetr}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\nKnowledge graphs (KGs) possess a large number of facts about the real world in a structured manner. \nEach fact consists of two entities and their relation in the form of a triple like \\textit{(subject entity, relation, object entity)}. \nKGs are becoming increasingly important because they can simplify knowledge acquisition, storage, and reasoning, and successfully support many knowledge-powered tasks such as semantic search and recommendation. \nAs different KGs are constructed by different people for different purposes using different data sources,\nthey are far from complete, yet complementary to each other. \nKnowledge from a single KG usually cannot satisfy the requirements of downstream tasks, calling for the integration of multiple KGs for knowledge fusion.\nEntity alignment, which aims to identify equivalent entities from different KGs, is a fundamental task in KG fusion.\nIn recent years, there has been an increasing interest in embedding-based entity alignment \\cite{chen2017multilingual,wang2018cross,cao2019multi,sun2020knowledge}. \nIt encodes entities into vector space for similarity calculation, such that entity alignment can be achieved via nearest neighbor search.\nHowever, current embedding-based models still face two major challenges regarding alignment learning and inference.\n\n\\textbf{Challenge 1:} How to effectively represent entities in the alignment learning phase?\nExisting embedding-based methods do not fully explore the inherent structures of KGs for entity embedding. 
\nMost of them only utilize a one-sided view of the structures, such as relation triples \\cite{chen2017multilingual,sun2017cross}, paths \\cite{zhu2017iterative,guo2019learning}, \nand neighborhood subgraphs \\cite{wang2018cross, sun2020knowledge, cao2019multi, wu2019relation, wu2019jointly, li2019semi, wu2020neighborhood, xu2019cross, yang2019aligning}.\nIntuitively, all these multi-context views could be helpful for embedding learning and provide additional evidence for entity alignment.\nMulti-view learning \\cite{zhang2019multi} is not the best solution to aggregate these features as it requires manual feature selection to determine useful views. \nMoreover, KGs are heterogeneous, meaning that not all features in a helpful view contribute to alignment learning; for example, neighborhood subgraphs may contain noisy neighbors \\cite{sun2020knowledge}. \nHence, it is necessary to flexibly and selectively incorporate different types of structural contexts for entity embedding.\n\n\\textbf{Challenge 2:} How to make full use of the learned embeddings and achieve informed alignment learning? Existing models directly rely on entity embeddings to estimate their semantic similarity, which is used for alignment decisions while ignoring the correlation between entities and relations in the KG. Although some studies \\cite{wu2019jointly, zhu2021relation} have tried to conduct joint training of entities and relations to improve the quality of the learned embeddings, the alignment of each entity is still conducted independently, from a local perspective. In fact, entity alignment is a holistic process: the neighboring entities and relations of the aligned pair should also be equivalent. The methods \\cite{zeng2020collective,zeng2021reinforcement} conduct entity alignment from a global perspective by modelling it as a stable matching problem. However, such reasoning is treated as a post-processing step that does not utilize the holistic information to inform the embedding learning module. We argue that holistic inference can provide crucial clues to achieve reliable entity alignment, and the inferred knowledge can offer reliable alignment evidence and help to build and strengthen trust in black-box embedding-based entity alignment.\n\n\\textbf{Contributions:} In this work, we introduce \\textbf{IMEA\\xspace}, an \\textbf{I}nformed \\textbf{M}ulti-context \\textbf{E}ntity \\textbf{A}lignment model, to tackle the above obstacles.\nMotivated by the strong expressiveness and representation learning ability of Transformer \\cite{vaswani2017attention}, \nwe design a novel Transformer architecture to encode the multiple structural contexts in a KG while capturing the deep alignment interaction across different KGs.\nSpecifically, we apply two Transformers to encode the neighborhood subgraphs and entity paths, respectively.\nTo capture the semantics of relation triples, we introduce relation regularization based on TransE \\cite{bordes2013translating} at the input layer.\nWe generate entity paths by random walks across two KGs. 
\nTo capture the deep alignment interactions, we replace the entity in a path with its counterpart in the seed alignment (i.e., training data) to generate a new semantically equivalent path.\nIn this way, a path may contain entities from the two KGs that remain to be aligned, \nand the Transformer is trained to fit such a cross-KG entity sequence and its equivalent path.\nThe two Transformers share parameters.\nTheir training objectives are central entity prediction and masked entity prediction, respectively.\nThe self-attention mechanism of Transformer can recognize and highlight helpful features in an end-to-end manner. \n\nFurthermore, we design a holistic reasoning process for entity alignment, to compensate for the imperfection of embedding similarities.\nIt compares entities by reasoning over their neighborhood subgraphs based on both embedding similarities and the holistic relation and entity functionality (i.e., their importance for identifying similar entities and relations).\nThe inferred alignment probabilities are injected back into the Transformer as soft labels to assist alignment learning. In this way, both the positive and negative alignment evidence obtained from the holistic inference can be utilized to adjust soft labels and guide embedding learning, thus improving the overall performance and reliability of entity alignment.\n\nWe conduct extensive experiments including ablation studies and case studies on the benchmark dataset OpenEA \\cite{sun13benchmarking} to evaluate the effectiveness of our IMEA\\xspace model. Experimental results demonstrate that IMEA\\xspace achieves state-of-the-art performance by combining the multi-context Transformer and holistic reasoning,\nand that the global evidence is quite effective in informing alignment learning.\n\n\\section{Related Work}\n\\textbf{KG Contexts for Entity Alignment.} \nExisting structure-based entity alignment methods utilize different types of structural contexts to generate representative KG embeddings. \nA line of studies learns from \\textit{relation triples} and encodes local semantic information of both entities and relations. \nTransE \\cite{bordes2013translating} is one of the most fundamental embedding techniques that has been adopted in a series of entity alignment models such as MTransE \\cite{chen2017multilingual}, JAPE \\cite{sun2017cross}, BootEA \\cite{sun2018bootstrapping} and SEA \\cite{pei2019semi}.\nInspired by the success of graph neural networks (GNNs) like GCN \\cite{kipf2016semi} and GAT \\cite{velickovic2018graph} in graph embedding learning,\nrecent entity alignment studies incorporate various GNN variants to encode \\textit{subgraph contexts} for neighborhood-aware KG embeddings. \nFor example, GCN-Align \\cite{wang2018cross} employs vanilla GCN for KG structure modeling, \nbut suffers from the heterogeneity of different KGs. \nTo address this issue, MuGNN \\cite{cao2019multi} preprocesses KGs by KG completion and applies an attention mechanism to highlight salient neighbors.\nOther work improves neighborhood-aware entity alignment by introducing relation-aware GNNs \\cite{li2019semi,ye2019vectorized,yu2020generalized,mao2020mraea,mao2020relational}, multi-hop GNNs \\cite{sun2020knowledge,mao2021boosting} and hyperbolic GNNs \\cite{sun2020knowledge}.\nBesides the triple and neighborhood context, \nRSN4EA \\cite{guo2019learning} and IPTransE \\cite{zhu2017iterative} further explore long-term dependencies along \\textit{relation paths} in KGs. 
\nThere are also some entity alignment models that consider additional information such as \\textit{entity names} \\cite{xu2019cross,wu2019relation,wu2019jointly,wu2020neighborhood}, \\textit{entity attributes} \\cite{trisedya2019entity,zhang2019multi,liu2020exploring} and \\textit{textual descriptions} \\cite{chen2018co}.
\nAlthough structural contexts are abundant and always available in KGs, few existing EA models have tried to integrate multiple types of them jointly into the embedding learning process. In this work, we introduce a Transformer to capture multi-context interactions and learn more comprehensive entity embeddings.
\n\n\\noindent\\textbf{Collective Entity Alignment.}
\nEntities are usually aligned independently in existing EA models. Only two methods have attempted to achieve collective entity alignment.
\nCEAFF \\cite{zeng2020collective, zeng2021reinforcement} formulates alignment inference as a stable matching problem. 
\nRMN \\cite{zhu2021relation} designs an iterative framework to leverage positive interactions between entity and relation alignment. However, it can only capture the interactions from seed entity alignment and requires extra pre-aligned relations, which is not always feasible in practice. Furthermore, both methods treat alignment learning and alignment inference separately, e.g., by regarding holistic reasoning as a post-processing step to adjust the alignment results \\cite{zeng2020collective}. By contrast, our IMEA\\xspace model can inject the holistic knowledge obtained during alignment inference into the Transformer via soft label editing, so as to guide the embedding learning process.
\n\n\\section{Methodology}
\n\\subsection{Framework}
\nWe define a KG as $\\mathcal{G}=(\\mathcal{E},\\mathcal{R},\\mathcal{T})$, 
\nwhere $\\mathcal{E}$ and $\\mathcal{R}$ denote the sets of entities and relations, respectively, and $\\mathcal{T}\\subset\\mathcal{E}\\times\\mathcal{R}\\times\\mathcal{E}$ is the set of relation triples. Each triple can be represented as $(s, r, o)$, where $s\\in\\mathcal{E}$ and $o\\in\\mathcal{E}$ denote the subject (head entity) and object (tail entity), respectively, and $r\\in\\mathcal{R}$ is the relation.
\nGiven two heterogeneous KGs, i.e., a source KG $\\mathcal{G}$ and a target KG $\\mathcal{G}'$, entity alignment (EA) aims to find as many aligned entity pairs $\\mathcal{A} = \\{(e, e') \\in \\mathcal{E} \\times \\mathcal{E}'| e \\equiv e'\\}$ as possible, 
\nwhere the equivalence relationship $\\equiv$ indicates that entities $e$ and $e'$ from different KGs represent the same real-world identity. 
\nTypically, to jump-start alignment learning, some seed entity alignment $\\mathcal{A}^s$ is provided as training data.
\n\n\\begin{figure*}[!t]
\n \\centering
\n \\includegraphics[width=0.68\\linewidth]{image\/framework.pdf}
\n \\caption{\\label{fig:framework.pdf}Framework of the proposed informed multi-context entity alignment model.}
\n\\end{figure*}
\n\nFigure \\ref{fig:framework.pdf} shows the overall framework of our IMEA\\xspace (Informed Multi-context Entity Alignment) model, which contains three main components.
\nFirst, we introduce a Transformer as the contextualized encoder to capture the multiple structural contexts (i.e., triples, neighborhood subgraphs and paths) and generate representative KG embeddings for alignment learning. 
\nNext, we identify candidate target entities for each source entity based on their embedding similarity and perform holistic reasoning for each candidate entity pair. 
\nFinally, we inform the embedding learning module by injecting the alignment evidence into the training loss using our proposed soft label editing strategy. 
\nWe will introduce each component in detail in the following sections.
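\nBefore detailing each component, we give a minimal Python-style sketch of one training round under this framework. All names in the sketch (\\texttt{sample\\_batches}, \\texttt{holistic\\_reasoning}, \\texttt{edit\\_soft\\_labels}) are hypothetical placeholders for the modules described in the following subsections, not part of any released API.
\n\\begin{verbatim}
\n# Hypothetical outline of one IMEA training round (names are placeholders).
\ndef train_round(kg1, kg2, seed_alignment, model, optimizer):
\n    for batch in sample_batches(kg1, kg2, seed_alignment):
\n        # 1) Multi-context Transformer: neighborhood, path and triple losses.
\n        loss = (model.neighbor_loss(batch.neighborhoods)
\n                + model.path_loss(batch.cross_kg_paths)
\n                + model.relation_loss(batch.triples))
\n        optimizer.zero_grad()
\n        loss.backward()
\n        optimizer.step()
\n    # 2) Holistic reasoning over candidate pairs found by embedding similarity.
\n    evidence = holistic_reasoning(model.entity_embeddings(), kg1, kg2)
\n    # 3) Inform the encoder: edit the soft labels used by the prediction losses.
\n    model.soft_labels = edit_soft_labels(model.soft_labels, evidence)
\n\\end{verbatim}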
\n\\subsection{Multi-context Transformer}
\nFigure \\ref{fig:transformer} presents the architecture of our multi-context Transformer. 
\nSpecifically, we apply two Transformers to encode the neighborhood context and path context, respectively.
\nThe neighborhood context is generated by sampling neighboring entities. 
\nTo capture the relation semantics, we introduce a relation regularization based on the translational embedding function \\cite{bordes2013translating} at the input layer.
\nThe path context is represented as an entity sequence generated by random walks. 
\nTo capture the deep alignment interactions across different KGs, 
\nwe replace the entity in a path with its counterpart in the seed entity alignment to generate a new semantically equivalent path.
\nIn this way, a path may contain entities from the two KGs that remain to be aligned, 
\nand the Transformer seeks to fit such a cross-KG entity sequence and its equivalent path to capture the alignment information between two KGs.
\n\n\\subsubsection{Intra-KG Neighborhood Encoding}
\nNeighborhood information is critical in representing KG structures for entity alignment, as similar entities usually have similar neighborhood subgraphs. 
\nConsidering that an entity may have a very large number of neighbors, 
\nwe repeatedly sample $n$ neighbors of each entity during training and feed these neighbors into the Transformer to capture their interactions. 
\nSpecifically, given an entity $e$ and its sampled neighbors $(e_1, e_2,..., e_n)$, 
\nwe regard the neighborhood as a sequence and feed it into a Transformer with $L$ stacked self-attention layers. 
\nThe input embeddings of the Transformer are $(\\mathbf{h}_{e_1}^0, \\mathbf{h}_{e_2}^0, \\cdots, \\mathbf{h}_{e_n}^0)$, 
\nand the output of the $L$-th layer is generated as follows:
\n\\begin{equation}
\n \\mathbf{h}_{e_1}^L, \\mathbf{h}_{e_2}^L, \\cdots, \\mathbf{h}_{e_n}^L = \\text{Encoder}(\\mathbf{h}_{e_1}^0, \\mathbf{h}_{e_2}^0, \\cdots, \\mathbf{h}_{e_n}^0).
\n\\end{equation}
\nThe training objective is to predict the central entity $e$ with the input neighborhood context $\\mathbf{c}_e=\\text{MeanPooling}(\\mathbf{h}_{e_1}^L,\\mathbf{h}_{e_2}^L,\\cdots,\\mathbf{h}_{e_n}^L)$.
\nThis can be regarded as a classification problem.
\nThe prediction probability for each candidate entity $e_i$ is given by
\n\\begin{equation}
\n \\label{eq:ea_pro}
\n p_{e_i} = \\frac{\\sigma(\\mathbf{c}_e,\\mathbf{h}_{e_i}^0)}{\\sum_{e_j \\in \\mathcal{E}}\\sigma(\\mathbf{c}_e,\\mathbf{h}_{e_j}^0)},
\n\\end{equation}
\nwhere $\\sigma(\\cdot,\\cdot)$ denotes the inner product.
\nLet $\\mathbf{q}_{e}$ be the one-hot label vector for predicting entity $e$. 
\nFor example, $\\mathbf{q}_{e}=[1,0,0,0,0]$ means that the entity at the first position is the target.
\nWe adopt the label smoothing strategy to get soft labels, e.g., $\\hat{\\mathbf{q}}_{e}=[0.8,0.05,0.05,0.05,0.05]$,
\nand use cross-entropy, i.e., $\\text{CE}(x,y)=-\\sum x \\log y$, to calculate the loss between the soft label vector and the prediction distribution over all candidates $\\mathbf{p}_{e}$. 
\nFormally, the overall loss is defined as
\n\\begin{equation}
\n \\label{eq:neigh_loss}
\n \\mathcal{L}_\\text{neighbor} = \\sum_{e\\in \\mathcal{E}\\cup\\mathcal{E}'} \\text{CE}(\\hat{\\mathbf{q}}_{e}, \\mathbf{p}_{e}).
\n\\end{equation}
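\nFor concreteness, the following PyTorch-style sketch instantiates this objective with toy sizes. It is our own minimal reading of the equations above rather than the released implementation: the normalization in Eq.~(\\ref{eq:ea_pro}) is realized as a softmax over inner products for numerical stability, and the smoothed label vectors are assumed to be given.
\n\\begin{verbatim}
\nimport torch
\nimport torch.nn as nn
\nimport torch.nn.functional as F
\n
\nnum_entities, dim, n_layers = 1000, 64, 2      # toy sizes, not the paper's
\nentity_emb = nn.Embedding(num_entities, dim)   # input embeddings h^0
\nlayer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
\nencoder = nn.TransformerEncoder(layer, num_layers=n_layers)
\n
\ndef neighbor_loss(neighbors, soft_label):
\n    # neighbors: (batch, n) sampled neighbor ids; soft_label: (batch, |E|).
\n    hL = encoder(entity_emb(neighbors))        # L stacked self-attention layers
\n    c = hL.mean(dim=1)                         # mean pooling -> context c_e
\n    logits = c @ entity_emb.weight.t()         # inner products with every h^0
\n    log_p = F.log_softmax(logits, dim=-1)      # softmax form of the normalization
\n    return -(soft_label * log_p).sum(-1).mean()  # cross-entropy with soft labels
\n\\end{verbatim}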
\n\\subsubsection{Cross-KG Path Encoding}
\nTo capture the alignment information, we use seed entity alignment to bridge two KGs and perform random walks to generate cross-KG paths.
\nFor example, a typical path of length $m$ can be denoted as $(e_1, e_2',...,e_m)$.
\nThen we randomly mask an entity in the path and get $(e_1, [MASK],...,e_m)$.
\nSimilar to neighborhood encoding, we feed the masked path into the Transformer to get its representations:
\n\\begin{equation}
\n \\mathbf{h}_{e_1}^L, \\mathbf{h}_{[MASK]}^L, \\cdots, \\mathbf{h}_{e_m}^L = \\text{Encoder}(\\mathbf{h}_{e_1}^0, \\mathbf{h}_{[MASK]}^0, \\cdots, \\mathbf{h}_{e_m}^0).
\n\\end{equation}
\nThe output representation of $[MASK]$ is used to predict the masked entity $e_2'$. 
\nHere we use $\\mathbf{h}_{[MASK]}^L$ to replace $\\mathbf{c}_e$ in Eq. (\\ref{eq:ea_pro}) to calculate the prediction probability.
\nSince the $[MASK]$ representation captures the information from all entities in the path, our model can capture the deep alignment interactions across two KGs.
\nIf $e_2'$ has a counterpart $e_2$ in the seed entity alignment, 
\nour model also uses the $[MASK]$ representation to predict $e_2$, 
\nwhich means there are two labels for this prediction task.
\nThe loss is given by:
\n\\begin{equation}
\n \\label{eq:path_loss}
\n \\mathcal{L}_\\text{path} = \\sum_{e\\in \\mathcal{E}\\cup\\mathcal{E}'} \\text{CE}(\\hat{\\mathbf{q}}_{e}, \\mathbf{p}_{e}) + \\sum_{(e, e')\\in \\mathcal{A}^{s}} \\text{CE}(\\hat{\\mathbf{q}}_{e'}, \\mathbf{p}_{e}).
\n\\end{equation}
\n\n\\begin{figure*}
\n \\centering
\n \\includegraphics[width=0.8\\linewidth]{image\/transformer.pdf}
\n \\caption{\\label{fig:transformer}Illustration of the multi-context Transformer.}
\n\\end{figure*}
\n\n\\subsubsection{Relation Regularization}
\nRelation triples are the most fundamental structural contexts of KGs, 
\nand have already demonstrated their effectiveness for entity alignment.
\nTo encode relation triples into the Transformer, we propose a relation regularization module at the input layer without increasing the model complexity or parameter count.
\nThe objective of regularization comes from the widely-used translational embedding model TransE \\cite{bordes2013translating} that interprets a relation embedding as the translation vector from its subject entity to its object entity.
\nGiven a relation triple $(s, r, o)$, we expect the embeddings to satisfy $\\mathbf{h}_s^0 + \\mathbf{r} = \\mathbf{h}_o^0$, 
\nwhere $\\mathbf{r}$ denotes the embedding of relation $r$. 
\nTo realize such translation, we add an additional regularization loss by minimizing the translation error over all relation triples:
\n\\begin{equation}
\n \\label{eq:relation_loss}
\n \\mathcal{L}_\\text{relation} = \\sum_{(s, r, o)\\in \\mathcal{T}\\cup\\mathcal{T}'}\\| \\mathbf{h}_s^0 + \\mathbf{r} - \\mathbf{h}_o^0\\|_2.
\n\\end{equation}
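\nA companion sketch for these two objectives is given below, reusing \\texttt{entity\\_emb} and \\texttt{encoder} from the previous snippet. Reserving the last embedding row for the $[MASK]$ token and omitting the second (seed counterpart) label term of Eq.~(\\ref{eq:path_loss}) are simplifying assumptions, not the exact implementation.
\n\\begin{verbatim}
\nMASK_ID = num_entities - 1                     # assume the last row is [MASK]
\nrel_emb = nn.Embedding(100, dim)               # toy relation embedding table
\n
\ndef path_loss(paths, mask_pos, soft_label):
\n    # paths: (batch, m) entity ids of cross-KG random walks.
\n    masked = paths.clone()
\n    masked[:, mask_pos] = MASK_ID              # randomly chosen position
\n    hL = encoder(entity_emb(masked))
\n    logits = hL[:, mask_pos] @ entity_emb.weight.t()  # predict masked entity
\n    log_p = F.log_softmax(logits, dim=-1)
\n    # A seed counterpart would contribute a second soft-label term here.
\n    return -(soft_label * log_p).sum(-1).mean()
\n
\ndef relation_loss(subj, rel, obj):
\n    # TransE-style input-layer regularization: h_s + r should equal h_o.
\n    err = entity_emb(subj) + rel_emb(rel) - entity_emb(obj)
\n    return err.norm(p=2, dim=-1).sum()
\n\\end{verbatim}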
\n\\subsection{Holistic Relation and Entity Alignment}
\nGiven the learned embeddings, we propose a holistic reasoning method for the joint alignment of entities and relations. We use the input embeddings to conduct this process, i.e., $\\mathbf{e} = \\mathbf{h}_{e}^0$.
\nDifferent from existing methods that only consider entity embedding similarities to identify aligned entities, we also compare relations and neighbors to compute entity similarities.
\nThe basic idea is that similar entities tend to have higher embedding similarity, and their relations and neighboring entities should also be similar. 
\n\n\\subsubsection{Relation and Entity Functionality}
\nIntuitively, when comparing two entities, different relational neighbors have different importance. 
\nFor example, relation \\textit{birthplace} is more important than relation \\textit{liveIn}, 
\nbecause a person can only be born in one place but can live in multiple places. 
\nBased on this, if we know two entities denote the same person, we can deduce that their neighbors along the relation \\textit{birthplace} also indicate the same place. 
\nWe use relation functionality \\cite{suchanek2011paris} to quantify the importance of relations.
\nIt measures the ratio of distinct subjects to distinct subject-object pairs of a relation.
\nFor example, given $r\\in\\mathcal{R}$, its functionality is calculated as:
\n\\begin{equation}
\n f(r) = \\frac{|\\{s|(s,r,o) \\in \\mathcal{T}\\}|}{|\\{(s, o)|(s,r,o) \\in \\mathcal{T}\\}|},
\n\\end{equation}
\nwhere $|\\cdot|$ denotes the size of the set. Correspondingly, the inverse relation functionality is defined as:
\n\\begin{equation}
\n f^{-1}(r) = \\frac{|\\{o|(s,r,o) \\in \\mathcal{T}\\}|}{|\\{(s, o)|(s,r,o) \\in \\mathcal{T}\\}|}.
\n\\end{equation}
\n\nAnalogously, we propose entity functionality for comparing two relations. It defines the importance of an entity in uniquely determining a relation. 
\nRich connectivity of an entity dilutes its importance. 
\nTherefore, we define entity functionality based on the number of triples in which an entity appears as a subject (functionality) or an object (inverse functionality):
\n\\begin{align}
\n \\begin{split}
\n f(e) = \\frac{1}{|\\{(e,r,o)|(e,r,o) \\in \\mathcal{T}\\}|}, \\\\ 
\n f^{-1}(e) = \\frac{1}{|\\{(s,r,e)|(s,r,e) \\in \\mathcal{T}\\}|}.
\n\\end{split}
\n\\end{align}
\nAll of these functionality features can be computed in advance, as they are only related to the statistics of KG connectivity.
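\nBecause these statistics are purely structural, they can be pre-computed with a single pass over the triple lists. A straightforward sketch, assuming \\texttt{triples} is a list of $(s, r, o)$ id tuples:
\n\\begin{verbatim}
\nfrom collections import defaultdict
\n
\ndef functionalities(triples):
\n    subj, objs, pairs = defaultdict(set), defaultdict(set), defaultdict(set)
\n    out_deg, in_deg = defaultdict(int), defaultdict(int)
\n    for s, r, o in triples:
\n        subj[r].add(s); objs[r].add(o); pairs[r].add((s, o))
\n        out_deg[s] += 1; in_deg[o] += 1
\n    f_r     = {r: len(subj[r]) / len(pairs[r]) for r in pairs}  # f(r)
\n    f_r_inv = {r: len(objs[r]) / len(pairs[r]) for r in pairs}  # f^-1(r)
\n    f_e     = {e: 1.0 / d for e, d in out_deg.items()}          # f(e)
\n    f_e_inv = {e: 1.0 / d for e, d in in_deg.items()}           # f^-1(e)
\n    return f_r, f_r_inv, f_e, f_e_inv
\n\\end{verbatim}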
\n\n\\subsubsection{Relation Alignment Inference}
\nTo assist entity alignment, we first find relation alignment in an unsupervised fashion. 
\nAs each relation connects many entities, we utilize each relation's subject and object entity embeddings to approximate the relation embedding. 
\nRelations have the property of direction. Considering the heterogeneity of KGs, where relations from different KGs may be equivalent but in opposite directions, we generate embeddings for both relations and inverse relations by swapping the subject and object entity sets. 
\nThe embeddings of connected entities are weighted by entity functionality. 
\nThe approximated embeddings of each relation $\\textbf{r}$ and its inverse $\\textbf{r}_{inv}$ are calculated as follows:
\n\\begin{equation}
\n \\mathbf{r} = [\\mathbf{S}, \\mathbf{O}],\\,\\,\\,\\mathbf{r}_{inv} = [\\mathbf{O}, \\mathbf{S}],
\n\\end{equation}
\nwhere $\\textbf{S}$ and $\\textbf{O}$ represent the embeddings of the subject entity set $\\mathcal{E}_{\\text{r}}^{\\text{subj}}$ and object entity set $\\mathcal{E}_{\\text{r}}^{\\text{obj}}$ of a relation $r$, and
\n$[\\cdot]$ denotes concatenation. 
\nThe embeddings of the two sets can be generated by:
\n\\begin{align}
\n \\begin{split}
\n \\textbf{S} = \\sum_{s \\in\\mathcal{E}_{\\text{r}}^{\\text{subj}}} f(s) \\times \\mathbf{s},\\,\\,\\,
\n \\textbf{O} = \\sum_{o \\in \\mathcal{E}_{\\text{r}}^{\\text{obj}}}f^{-1}(o) \\times \\mathbf{o}.
\n \\end{split}
\n\\end{align}
\nFinally, we define the alignment score of a pair of relations as the cosine similarity of the approximated relation embeddings:
\n\\begin{equation}
\n \\text{align}(r, r') = \\cos(\\mathbf{r}, \\mathbf{r}').
\n\\end{equation}
\nHere, $r$ can be either a relation or an inverse relation.
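\nA sketch of this approximation is shown below, reusing the functionality dictionaries from the previous snippet and assuming \\texttt{ent\\_emb} is the matrix of input entity embeddings and every relation occurs in at least one triple:
\n\\begin{verbatim}
\nimport torch
\nimport torch.nn.functional as F
\n
\ndef relation_embedding(triples_of_r, ent_emb, f_e, f_e_inv):
\n    subjects = {s for s, _, _ in triples_of_r}
\n    objects  = {o for _, _, o in triples_of_r}
\n    S = sum(f_e[s] * ent_emb[s] for s in subjects)      # weighted subject set
\n    O = sum(f_e_inv[o] * ent_emb[o] for o in objects)   # weighted object set
\n    return torch.cat([S, O]), torch.cat([O, S])         # r and r_inv
\n
\ndef align_score(r_vec, r_vec_prime):
\n    # Cosine similarity of the approximated relation embeddings.
\n    return F.cosine_similarity(r_vec, r_vec_prime, dim=0)
\n\\end{verbatim}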
\n\n\\subsubsection{Entity Alignment Inference}
\nAfter approximating the relation alignment score, 
\nwe are able to match the relational neighbors (i.e., triples) of two entities from different KGs. 
\nFigure \\ref{fig:relation_align.pdf} shows the process of measuring the equivalence between two entity neighborhood sets, which consists of the following steps:
\n\n\\begin{figure}
\n \\centering
\n \\includegraphics[width=0.6\\linewidth]{image\/relation_align.pdf}
\n \\caption{\\label{fig:relation_align.pdf}Illustration of entity alignment inference.}
\n\\end{figure}
\n\n\\noindent\\textbf{Step 1. Pair-wise matching matrix generation.}
\nLet $N_e$ and $N_{e'}$ denote the neighborhood sets of $e$ and $e'$, respectively.
\nWe first compute the pair-wise triple-level matching score for $e_i \\in N_e$ and $e_j \\in N_{e'}$.
\nWe take into consideration both relation direction and functionality.
\nIf the neighbor is outgoing, we utilize the inverse relation functionality $f^{-1}(r)$ for weighting; otherwise, we adopt its functionality $f(r)$. 
\nFor brevity, we use the matching of two outgoing neighbors to illustrate the calculation of the similarity matrix. 
\nAssuming that the relations connecting $e_i$ and $e_j$ to their respective center entities are $r_i$ and $r_j$, we define the similarity function $\\phi$ of $e_i \\in N_e$ and $e_j \\in N_{e'}$ as follows:
\n\\begin{equation}
\n \\phi(e_i,e_j) = f^{-1}(r_i) \\times f^{-1}(r_j) \\times \\text{align}(r_i, r_j) \\times \\cos(\\mathbf{e}_i,\\mathbf{e}_j).
\n\\end{equation}
\n\\noindent\\textbf{Step 2. Neighborhood intersection calculation.}
\nWe then acquire the intersection of two neighborhood sets $N_e \\cap N_{e'}$ based on the matching matrix. 
\nAs the size of the smaller set indicates the maximum number of possibly matched entities that two sets may have, we conduct a max-pooling operation along the smaller dimension to identify possibly matched entities. 
\nFormally, assuming $|N_e| < |N_{e'}|$, the possibly matched entities are defined as:
\n\\begin{equation}
\n \\Psi = \\{(e_i, e_j)| i = 1,\\cdots,|N_e|, j=\\argmax_{j \\in [1,|N_{e'}|]}\\phi(e_i, e_j)\\}.
\n\\end{equation}
\nThen, we use a dynamic threshold $\\eta$ to filter out impossible matches:
\n\\begin{equation}
\n \\Theta = \\{(e_i, e_j)| (e_i, e_j) \\in \\Psi,\\ \\phi(e_i, e_j) > \\eta\\}.
\n\\end{equation} 
\nFinally, we sum up all the remaining similarities as the equivalence score of the two neighborhood sets.
\nThis process can be regarded as computing the intersection of two sets.
\nIn other words, the intersection score of $N_e$ and $N_{e'}$ is the aggregation of the triple-level scores from the intersection entity set:
\n\\begin{equation}
\n N_e \\cap N_{e'} = \\sum_{(e_i, e_j) \\in \\Theta}\\phi(e_i, e_j).
\n\\end{equation}
\n\n\\noindent\\textbf{Step 3. Neighborhood union calculation.} 
\nNext, we compute the union of neighborhoods to normalize the above equivalence score.
\nWe aggregate the weighted edges after the max-pooling step.
\nSpecifically, we use the relation functionality of connected edges as the weight and calculate the pair-wise functionality multiplication to approximate the union of two neighborhood sets. 
\nTaking outgoing edges as an example, for $e_i\\in N_e$ and $e_j \\in N_{e'}$, we have:
\n\\begin{equation} 
\n \\phi^r(e_i, e_j) = f^{-1}(r_i)\\times f^{-1}(r_j).
\n\\end{equation}
\nThen, we aggregate the pair-wise functionality of possibly matched neighbors as the union score of two neighborhood sets $N_e \\cup N_{e'}$:
\n\\begin{equation}
\n N_e \\cup N_{e'} = \\sum_{(e_i, e_j) \\in \\Psi}\\phi^r(e_i, e_j).
\n\\end{equation}
\n\n\\noindent\\textbf{Step 4. Holistic alignment probability.}
\nFinally, the entity alignment probability is calculated in a Jaccard-like fashion based on their neighborhoods, as defined below:
\n\\begin{equation}
\n \\text{prob}(e, e') = \\frac{N_e \\cap N_{e'}}{N_e \\cup N_{e'}}.
\n\\end{equation}
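\nThe four steps can be condensed into the following sketch for outgoing neighbors. Representing a neighborhood as a list of (entity id, relation id) pairs, wrapping the relation alignment score as \\texttt{rel\\_align}, and fixing $\\eta$ to a constant are simplifications of the procedure described above.
\n\\begin{verbatim}
\nimport torch.nn.functional as F
\n
\ndef alignment_probability(nbrs, nbrs_p, ent_emb, f_r_inv, rel_align, eta=0.6):
\n    if len(nbrs) > len(nbrs_p):              # iterate over the smaller set
\n        nbrs, nbrs_p = nbrs_p, nbrs
\n    inter, union = 0.0, 0.0
\n    for ei, ri in nbrs:
\n        # Step 1: pair-wise matching scores against all neighbors of e'.
\n        scores = [f_r_inv[ri] * f_r_inv[rj] * rel_align(ri, rj)
\n                  * F.cosine_similarity(ent_emb[ei], ent_emb[ej], dim=0).item()
\n                  for ej, rj in nbrs_p]
\n        j = max(range(len(scores)), key=scores.__getitem__)  # max-pooling
\n        _, rj = nbrs_p[j]
\n        if scores[j] > eta:                  # Step 2: threshold-filtered score
\n            inter += scores[j]
\n        union += f_r_inv[ri] * f_r_inv[rj]   # Step 3: union of weighted edges
\n    return inter / union if union > 0 else 0.0  # Step 4: Jaccard-like probability
\n\\end{verbatim}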
\n\n\\subsection{Soft Label Editing by Alignment Evidence}
\nAfter acquiring the holistic alignment probability of important entity pairs,
\ni.e., the top-$k$ similar target counterparts of each source entity, 
\nwe inform the multi-context Transformer of such alignment clues by editing the soft labels.
\nWe design two types of alignment evidence.
\nSpecifically, we collect entity pairs with high alignment probability, i.e., $\\text{prob}(e, e') \\geq \\gamma_{\\text{pos}}$, as positive evidence.
\nOn the contrary, if $\\text{prob}(e, e') \\leq \\gamma_{\\text{neg}}$, the two entities form a negative alignment pair.
\nThen, we inject both positive and negative alignment evidence into the embedding module by editing the soft labels for the prediction tasks.
\nThe idea is that if the target entity has several counterparts in positive pairs, all counterparts should also be labeled as $\\min(s, p)$, where $s$ denotes the soft label of the target entity and $p$ is the calculated alignment probability.
\nWe still use the soft label vector $\\hat{\\mathbf{q}}_{e}=[0.8,0.05,0.05,0.05,0.05]$ as an example, where the entity at the first position is the target. 
\nSuppose it has two predicted counterparts at the second and third positions with probabilities of $0.5$ and $0.9$, respectively. In this case, the soft label vector is rewritten as $\\hat{\\mathbf{q}}_{e}=[0.8,0.5,0.8,0.05,0.05]$, and it is further normalized to serve as the labels for training the Transformer.
\nFor negative counterparts, the corresponding label values are set to $0$, so that they are excluded from the prediction.
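\nA minimal sketch of this editing step on a single label vector is given below; encoding negative evidence as \\texttt{None} is our own convention for illustration.
\n\\begin{verbatim}
\ndef edit_soft_labels(q, target_pos, evidence):
\n    # q: smoothed label vector; evidence: {position: probability or None}.
\n    s = q[target_pos]                        # soft label of the true target
\n    for pos, p in evidence.items():
\n        q[pos] = 0.0 if p is None else min(s, p)
\n    total = sum(q)
\n    return [v / total for v in q]            # renormalize into a distribution
\n
\n# The running example: counterparts at positions 1 and 2 with probabilities
\n# 0.5 and 0.9 turn [0.8, 0.05, 0.05, 0.05, 0.05] into
\n# [0.8, 0.5, 0.8, 0.05, 0.05] before renormalization.
\nedit_soft_labels([0.8, 0.05, 0.05, 0.05, 0.05], 0, {1: 0.5, 2: 0.9})
\n\\end{verbatim}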
\\\\\n\t\t\\cmidrule(lr){2-4} \\cmidrule(lr){5-7} \\cmidrule(lr){8-10} \\cmidrule(lr){11-13} \n\t\t& Hits@1 & Hits@5 & MRR & Hits@1 & Hits@5 & MRR & Hits@1 & Hits@5 & MRR & Hits@1 & Hits@5 & MRR \\\\\n\t\t\t\\midrule\n\t\t\tMTransE & 0.138 & 0.261 & 0.202 & 0.140 & 0.264 & 0.204 & 0.210 & 0.358 & 0.282 & 0.244 & 0.414 & 0.328 \\\\\n\t\t\tIPTransE & 0.158 & 0.277 & 0.219 & 0.226 & 0.357 & 0.292 & 0.221 & 0.352 & 0.285 & 0.396 & 0.558 & 0.474 \\\\\n\t\t\tAlignE & 0.294 & 0.483 & 0.388 & 0.423 & 0.593 & 0.505 & 0.385 & 0.587 & 0.478 & 0.617 & 0.776 & 0.691 \\\\\n\t\t\tGCN-Align & 0.230 & 0.412 & 0.319 & 0.317 & 0.485 & 0.399 & 0.324 & 0.507 & 0.409 & 0.528 & 0.695 & 0.605 \\\\\n\t\t\tSEA & 0.225 & 0.399 & 0.314 & 0.341 & 0.502 & 0.421 & 0.291 & 0.470 & 0.378 & 0.490 & 0.677 & 0.578 \\\\\n\t\t\tRSN4EA & 0.293 & 0.452 & 0.371 & 0.430 & 0.570 & 0.497 & 0.384 & 0.533 & 0.454 & 0.620 & 0.769 & 0.688 \\\\\n\t\t\tAliNet* & 0.266 & 0.444 & 0.348 & 0.405 & 0.546 & 0.471 & 0.369 & 0.535 & 0.444 & \\underline{0.626} & \\underline{0.772} & \\underline{0.692} \\\\\n\t\t\tHyperKA* & 0.231 & 0.426 & 0.324 & 0.239 & 0.432 & 0.332 & 0.312 & 0.535 & 0.417 & 0.473 & 0.696 & 0.574\\\\\n\t\t\tKE-GCN* & \\underline{0.305} & \\underline{0.513} & \\underline{0.405} & \\underline{0.459} & \\underline{0.634} & \\underline{0.541} & \\textbf{0.426} & \\textbf{0.620} & \\textbf{0.515} & 0.625 & 0.791 & 0.700 \\\\\n\t\t\t\\midrule\n\t\t\tIMEA\\xspace & \\textbf{0.329} & \\textbf{0.526} & \\textbf{0.424} & \\textbf{0.467} & \\textbf{0.641} & \\textbf{0.549} & \\underline{0.419} & \\underline{0.614} & \\underline{0.508} & \\textbf{0.664} & \\textbf{0.817} & \\textbf{0.733}\\\\\n\t\t\t\\bottomrule\n\t\\end{tabular}}\n\t\\label{tab:ent_alignment_100k}\n\\end{table*}\n\n\\subsubsection{Baselines}\nTo evaluate the effectiveness of our proposed IMEA\\xspace model, we compare it with existing state-of-the-art supervised structure-based entity alignment methods. \nThey can be roughly divided into the following categories.\n\n\\begin{itemize}\n\\item Triple-based models that capture the local semantics information of relation triples based on TransE, including MTransE \\cite{chen2017multilingual}, JAPE \\cite{sun2017cross}, AlignE \\cite{sun2018bootstrapping} and SEA \\cite{pei2019semi}.\n\\item Neighborhood-based models that use GNNs to exploit subgraph structures for entity alignment, including GCN-Align \\cite{wang2018cross}, AliNet \\cite{sun2020knowledge}, HyperKA \\cite{sun2020knowledge}, KE-GCN \\cite{yu2020generalized}.\n\\item Path-based models that explore the long-term dependency across relation paths, including IPTransE \\cite{zhu2017iterative} and RSN4EA\\cite{guo2019learning}.\n\\end{itemize}\n\nOur model and the above baselines all focus on the structural information of KGs. For a fair comparison, we ignore other models that incorporate side information (e.g., attributes, entity names and descriptions) like RDGCN \\cite{wu2019relation}, KDCoE \\cite{chen2018co} and AttrGNN \\cite{liu2020exploring}.\n\n\\subsection{Results and Analysis}\n\\subsubsection{Main Results}\nTables \\ref{tab:ent_alignment_15k} and \\ref{tab:ent_alignment_100k} present the entity alignment results on the OpenEA 15K and 100K datasets, respectively.\nResults labelled by * are reproduced using the released source code with careful tuning of hyper-parameters, and the other results are taken from OpenEA \\cite{sun13benchmarking}. 
\nWe can see that IMEA\\xspace achieves the \\textbf{best} or \\underline{second-best} performance compared with these state-of-the-art baselines. 
\nIt outperforms the best-performing baselines on Hits@1 by $1\\%-7\\%$, except for the EN-DE-15K and D-W-100K datasets. 
\nFor example, on the 15K datasets, IMEA\\xspace achieves a gain of $7.1\\%$ in Hits@1 over HyperKA and of $7.9\\%$ over the latest method KE-GCN on D-Y-15K.
\nOverall, KE-GCN achieves the second-best performance. 
\nIt exceeds our performance by 1.9\\% on EN-DE-15K and 0.7\\% on D-W-100K due to its strength in combining advanced knowledge embedding models with GCNs. 
\nHowever, the limited structural context of its embedding module prevents it from achieving further improvements. 
\nOn the 100K datasets, many baselines fail to achieve promising results due to the more complex KG structures and larger alignment space.
\nIMEA\\xspace also achieves the best results on most large-scale datasets (and the second-best on D-W-100K), which demonstrates the practicability of our model. 
\nThis is because IMEA\\xspace can integrate multiple structural contexts, and the informed knowledge can further improve the performance by holistic reasoning. 
\nIn summary, the results show the superiority of IMEA\\xspace.
\n\n\\subsubsection{Different Path Length}
\n\\begin{figure*}
\n \\centering
\n \\includegraphics[width=0.9\\linewidth]{image\/path_len.pdf}
\n \\caption{\\label{fig:path_len}Performance and training time w.r.t. path length on 15K datasets.}
\n\\end{figure*}
\nAs a path is one of the crucial structural contexts for both embedding and alignment learning, we evaluate the effect of path length on entity alignment. 
\nFigure \\ref{fig:path_len} reports the IMEA\\xspace performance w.r.t. different path lengths on the 15K datasets. 
\nAs expected, entity alignment accuracy (Hits@1 and MRR) improves as the path length increases.
\nThis is because a longer path can boost the cross-KG interaction and mitigate the heterogeneity issue between different KGs. 
\nThe performance increase is more evident on EN-FR-15K due to the higher structural heterogeneity in this dataset. 
\nHowever, it is also noticeable that the training time per epoch increases as the path length grows. 
\nHence, we choose a path length of 5 in our setting as an effectiveness-efficiency trade-off.
\n\n\\subsubsection{Complementarity of Different Contexts}
\n\n\\begin{table}[!t]
\n\\centering
\n\\caption{Complementarity of contexts on 15K datasets.}
\n\\resizebox{0.99\\linewidth}{!}{
\n\\begin{tabular}{lcccccccc} 
\n\\toprule
\n\\multirow{2}{*}{Methods} & \\multicolumn{2}{c}{EN-FR} & \\multicolumn{2}{c}{EN-DE} & \\multicolumn{2}{c}{D-W} & \\multicolumn{2}{c}{D-Y} \\\\
\n\\cmidrule(lr){2-9}
\n& Hits@1 & MRR & Hits@1 & MRR & Hits@1 & MRR & Hits@1 & MRR \\\\
\n\\midrule
\nIMEA\\xspace & 0.458 & 0.574 & 0.639 & 0.724 & 0.527 & 0.626 & 0.639 & 0.712 \\\\ 
\n\\,\\, w\/o relation & 0.378 & 0.482 & 0.609 & 0.684 & 0.460 & 0.552 & 0.599 & 0.667\\\\
\n\\,\\, w\/o neighbor & 0.431 & 0.547 & 0.612 & 0.681 & 0.494 & 0.595 & 0.635 & 0.708\\\\
\n\\bottomrule
\n\\end{tabular}}
\n\\label{tab:context-effect}
\n\\end{table}
\n\nTable~\\ref{tab:context-effect} gives the results of the ablation study on the triple and neighborhood contexts on the 15K datasets (paths are necessary for alignment learning).
\nIMEA\\xspace ``w\/o relation'' and ``w\/o neighbor'' represent the variants that remove relation triples or neighborhood information from our multi-context Transformer. 
\nThe performance of both variants drops. 
\nWhen the relation triple context is removed, Hits@1 decreases by $4\\%$ to $8\\%$, and MRR drops by $4\\%$ to $9\\%$. 
\nThe decline is smaller when the neighborhood is absent, with a $0.5\\%$ to $3\\%$ drop in Hits@1 and a $0.5\\%$ to $4\\%$ drop in MRR. 
\nOverall, the absence of either the relation or the neighbor context results in a performance drop on all datasets, and relation triples contribute more than the neighborhood context.
\n\n\\subsubsection{Effect of Informed Process}
\nThe ablation study on the effect of the holistic reasoning process is shown in Table \\ref{tab:inf-effect}, 
\nwhere IMEA\\xspace (w\/o inf) represents the variant without using the inferred holistic knowledge. It can be observed that the alignment clues captured by holistic inference bring improvements of $1\\%$ to $2.5\\%$ in Hits@1 and $1\\%$ to $2\\%$ in MRR on all datasets. 
\nThis quantitatively validates the effectiveness of our holistic reasoning process and its ability to inform embedding and alignment learning.
\n\n\\begin{table}[!t]
\n\\centering
\n\\caption{Effect of holistic inference on 15K datasets.}
\n\\resizebox{0.99\\linewidth}{!}{
\n\\begin{tabular}{lcccccccc} 
\n\\toprule
\n\\multirow{2}{*}{Methods} & \\multicolumn{2}{c}{EN-FR} & \\multicolumn{2}{c}{EN-DE} & \\multicolumn{2}{c}{D-W} & \\multicolumn{2}{c}{D-Y} \\\\
\n\\cmidrule(lr){2-9}
\n& Hits@1 & MRR & Hits@1 & MRR & Hits@1 & MRR & Hits@1 & MRR \\\\
\n\\midrule
\nIMEA\\xspace & 0.458 & 0.574 & 0.639 & 0.724 & 0.527 & 0.626 & 0.639 & 0.712 \\\\ 
\n\\,\\, w\/o inf & 0.431 & 0.546 & 0.614 & 0.703 & 0.502 & 0.598 & 0.629 & 0.704\\\\
\n\\bottomrule
\n\\end{tabular}}
\n\\label{tab:inf-effect}
\n\\end{table}
\n\n\\subsection{Case Study on Holistic Reasoning}
\nIn this section, we conduct case studies to further demonstrate the effectiveness of our holistic reasoning, i.e., how the positive and negative evidence obtained from holistic reasoning can be used to fix alignment errors caused by the learned embeddings.
\n\n\\begin{figure}[t]
\n \\centering
\n \\begin{subfigure}[b]{0.95\\linewidth}
\n \\centering
\n \\includegraphics[width=1\\linewidth]{image\/case2.pdf}
\n \\caption{\\label{fig:case2.pdf}}
\n \\end{subfigure}
\n \\begin{subfigure}[b]{0.95\\linewidth}
\n \\includegraphics[width=1\\linewidth]{image\/case3.pdf}
\n \\caption{\\label{fig:case3.pdf}}
\n \\end{subfigure}
\n \\caption{Examples of aligned entity pairs discovered by holistic inference from D-Y-15K. The dark arrow and dark rectangle indicate the matched relations and entities recognized by the inference, respectively.}
\n \\label{fig:correct example}
\n\\end{figure}
\n\n\\begin{figure}[t]
\n \\centering
\n \\begin{subfigure}[b]{0.5\\textwidth}
\n \\centering
\n \\includegraphics[width=1\\linewidth]{image\/case1_0.pdf}
\n \\caption{\\label{fig:case1_0.pdf}Incorrect alignment pairs.}
\n \\end{subfigure}
\n \\begin{subfigure}[b]{0.5\\textwidth}
\n \\centering
\n \\includegraphics[width=1.0\\linewidth]{image\/case1_1.pdf}
\n \\caption{\\label{fig:case1_1.pdf}Correct alignment pairs.}
\n\\end{subfigure}
\n\\caption{Example of a negative alignment from EN-FR-15K that can be corrected by holistic reasoning. 
\nThe vertical and horizontal axes represent the entity's neighbors from EN and FR, respectively. Each neighbor consists of the relation (with its functionality) and the entity (either subject ``s'' or object ``o''). The top figure reports the relation alignment while the bottom figure shows the neighbor alignment. 
Darker cells indicate higher alignment probability between the corresponding relations or neighbors.
\n}
\n\\label{fig: incorrect example}
\n\\end{figure}
\n\n\\subsubsection{Positive Evidence Benefits Embedding Learning}
\nFigure \\ref{fig:case2.pdf} illustrates a case of positive alignment evidence from D-Y-15K. 
\nThe learned embedding similarity between these two equivalent entities is only $0.59$. 
\nSuch an inaccurate embedding distance is caused by the sparse connections between the two entities. 
\nLong-tail entities suffer from the isolation issue. Hence, two entities with sparse triples have few chances to be pushed closer during training. 
\nAnother reason for the low embedding similarity is the large degree difference between the two entities, since the embedding model tends to generate dissimilar representations for entities with different degrees.
\nFortunately, our holistic inference can effectively capture the positive interaction between entities by matching their neighborhood subgraphs at a more fine-grained level (i.e., triple-level). 
\nFirst, it can capture the matched relations, i.e., \\textit{``is-part-of''} and \\textit{``is-located-in''}, as well as \\textit{``official-language''} and \\textit{``has-official-language''}. 
\nAs the relational neighbors of the two entities are not always long-tail entities, the matched neighbors (denoted by dark blue) can be successfully captured, which leads to a positive evidence score of $0.92$.
\nFigure \\ref{fig:case3.pdf} demonstrates another example from D-Y-15K, which shows that the holistic process is especially suitable for handling the heterogeneity issue between KGs. 
\nFrom our observation, even though two entities have many relational neighbors in common, the generated embeddings tend to be more dissimilar if the relations are in different directions.
\nThe embedding similarity in Figure \\ref{fig:case3.pdf} is only $0.56$. 
\nAs the holistic process considers relation direction, it successfully captures the equivalent relations \\textit{``writer''} and \\textit{``created''} with different directions, and the similar relations \\textit{``birthplace''} and \\textit{``is-citizen-of''}, 
\nwhich leads to a high inference score of $0.89$.
\nOverall, for those equivalent entities with low embedding similarity,
\nour holistic inference can successfully capture their actual alignment interactions by comparing their neighborhood subgraphs, thus fixing errors caused by unreliable embedding learning.
\n\n\\subsubsection{Negative Evidence Benefits Embedding Learning}
\nOur holistic reasoning can also capture the negative alignment evidence between entity embeddings that are wrongly located close to each other, and correctly accumulate positive clues between matched entities. 
\nFigure \\ref{fig: incorrect example} shows an example from EN-FR-15K about two songs by ``Whitney Houston'', where the incorrectly aligned entity pairs in Figure \\ref{fig:case1_0.pdf} have a slightly higher embedding similarity ($0.99$) than the corresponding ground-truth in Figure \\ref{fig:case1_1.pdf} ($0.97$). Such an embedding mistake can be corrected by our holistic reasoning based on the neighborhood information.
\nThe neighboring entities' embeddings could also be inaccurate, which results in a closer embedding distance between the entity and its incorrect match. 
\nHowever, by holistic inference, we observe that the incorrectly aligned entities in Figure \\ref{fig:case1_0.pdf} only share some unimportant neighbors with low relation functionality such as \\textit{``genre''}, \\textit{``artist''}, \\textit{``producer''} and \\textit{``composer''}, leading to a limited alignment score of $0.32$. On the contrary, the ground-truth alignment in Figure \\ref{fig:case1_1.pdf} demonstrates a large number of common neighbors, among which positive matching clues are observed on some crucial neighbors with large relation functionality like \\textit{``album''} (the corresponding entities \\textit{``It's not right but It's okay''} are matched). Hence, it accumulates more positive evidence and achieves a higher alignment probability of $0.76$. In this way, holistic inference successfully identifies the aligned entity pair in Figure \\ref{fig:case1_0.pdf} as a negative prediction from the embedding module (with a smaller alignment probability), and injects such feedback into the Transformer to adjust embedding learning, thus improving the final entity alignment performance.
\n\n\\section{Conclusion and Future Work}
\nIn this paper, we propose an informed multi-context entity alignment model (IMEA\\xspace). It consists of a multi-context Transformer to capture multiple structural contexts for alignment learning, and a holistic reasoning process to generate global alignment evidence, which is iteratively injected back into the Transformer via soft label editing to guide further embedding training. Extensive experiments on the OpenEA dataset verify the superiority of IMEA\\xspace over existing state-of-the-art baselines. As future work, we will incorporate extra side information of KGs to enhance embedding learning in IMEA\\xspace, and generate human-understandable interpretations of entity alignment results based on the informed holistic reasoning, thus achieving explainability of embedding-based EA models.
\n\n\\begin{acks}
\nThis work was partially supported by the Australian Research Council under Grants No. DE210100160 and DP200103650.
\nZequn Sun's work was supported by Program A for Outstanding PhD Candidates of Nanjing University.
\n\\end{acks}
\n\n\\bibliographystyle{ACM-Reference-Format}
\n\\balance
\nIt encodes entities into vector space for similarity calculation, such that entity alignment can be achieved via nearest neighbor search.\nHowever, current embedding-based models still face two major challenges regarding alignment learning and inference.\n\n\\textbf{Challenge 1:} How to effectively represent entities in the alignment learning phase?\nExisting embedding-based methods do not fully explore the inherent structures of KGs for entity embedding. \nMost of them only utilize a one-sided view of the structures, such as relation triples \\cite{chen2017multilingual,sun2017cross}, paths \\cite{zhu2017iterative,guo2019learning}, \nand neighborhood subgraphs \\cite{wang2018cross, sun2020knowledge, cao2019multi, wu2019relation, wu2019jointly, li2019semi, wu2020neighborhood, xu2019cross, yang2019aligning}.\nIntuitively, all these multi-context views could be helpful for embedding learning and provide additional evidence for entity alignment.\nMulti-view learning \\cite{zhang2019multi} is not the best solution to aggregate these features as it requires manual feature selection to determine useful views. \nBesides, KGs are heterogeneous, indicating that not all features in a helpful view could contribute to alignment learning, such as the noisy neighbors in neighborhood subgraphs \\cite{sun2020knowledge}. \nHence, it is necessary to flexibly and selectively incorporate different types of structural contexts for entity embedding.\n\n\\textbf{Challenge 2:} How to make full use of the learned embeddings and achieve informed alignment learning? Existing models directly rely on entity embeddings to estimate their semantic similarity, which is used for alignment decisions while ignoring the correlation between entities and relations in the KG. Although some studies \\cite{wu2019jointly, zhu2021relation} have tried to conduct joint training of entity and relation to improve the quality of the learned embeddings, alignment of each entity is still conducted independently from a local perspective. In fact, entity alignment is a holistic process: the neighboring entities and relations of the aligned pair should also be equivalent. The methods \\cite{zeng2020collective,zeng2021reinforcement} conduct entity alignment from a global perspective by modelling it as a stable matching problem. Whereas, such reasoning is regarded as a post-processing step without utilizing the holistic information to inform the embedding learning module. We argue that holistic inference can provide crucial clues to achieve reliable entity alignment, and the inferred knowledge can offer reliable alignment evidence and help to build and strengthen the trust towards black-box embedding-based entity alignment.\n\n\\textbf{Contributions:} In this work, we introduce \\textbf{IMEA\\xspace}, an \\textbf{I}nformed \\textbf{M}ulti-context \\textbf{E}ntity \\textbf{A}lignment model, to tackle the above obstacles.\nMotivated by the strong expressiveness and representation learning ability of Transformer \\cite{vaswani2017attention}, \nwe design a novel Transformer architecture to encode the multiple structural contexts in a KG while capturing the deep alignment interaction across different KGs.\nSpecifically, we apply two Transformers to encode the neighborhood subgraphs and entity paths, respectively.\nTo capture the semantics of relation triples, we introduce relation regularization based on TransE \\cite{bordes2013translating} at the input layer.\nWe generate entity paths by random walks across two KGs. 
\nTo capture the deep alignment interactions, we replace the entity in a path with its counterpart in the seed alignment (i.e., training data) to generate a new semantically equivalent path.\nIn this way, a path may contain entities from the two KGs that remain to be aligned, \nand the Transformer is trained to fit such a cross-KG entity sequence and its equivalent path.\nThe two Transformers share parameters.\nTheir training objectives are central entity prediction and mask entity prediction, respectively.\nThe self-attention mechanism of Transformer can recognize and highlight helpful features in an end-to-end manner. \n\nFurthermore, we design a holistic reasoning process for entity alignment, to complement the imperfection of embedding similarities.\nIt compares entities by reasoning over their neighborhood subgraphs based on both embedding similarities and the holistic relation and entity functionality (i.e., the importance for identifying similar entities and relations).\nThe inferred alignment probabilities are injected back into the Transformer as soft labels to assist alignment learning. In this way, both the positive and negative alignment evidence obtained from the holistic inference can be utilized to adjust soft labels and guide embedding learning, thus improving the overall performance and reliability of entity alignment.\n\nWe conduct extensive experiments including ablation studies and case studies on the benchmark dataset OpenEA \\cite{sun13benchmarking} to evaluate the effectiveness of our IMEA\\xspace model. Experimental results demonstrate that IMEA\\xspace can achieve the state-of-the-art performance by combining multi-context Transformer and holistic reasoning,\nand the global evidence is quite effective in informing alignment learning.\n\n\\section{Related Work}\n\\textbf{KG Contexts for Entity Alignment.} \nExisting structure-based entity alignment methods utilize different types of structural contexts to generate representative KG embeddings. \nA line of studies learns from \\textit{relation triples} and encodes local semantic information of both entities and relations. \nTransE \\cite{bordes2013translating} is one of the most fundamental embedding techniques that has been adopted in a series of entity alignment models such as MTransE \\cite{chen2017multilingual}, JAPE \\cite{sun2017cross}, BootEA \\cite{sun2018bootstrapping} and SEA \\cite{pei2019semi}.\nInspired by the success of graph neural networks (GNNs) like GCN \\cite{kipf2016semi} and GAT \\cite{velickovic2018graph} in graph embedding learning,\nrecent entity alignment studies incorporate various GNN variants to encode \\textit{subgraph contexts} for neighborhood-aware KG embeddings. \nFor example, GCN-Align \\cite{wang2018cross} employs vanilla GCN for KG structure modeling, \nbut suffers from the heterogeneity of different KGs. \nTo address this issue, MuGNN \\cite{cao2019multi} preprocesses KGs by KG completion and applies an attention mechanism to highlight salient neighbors.\nOther work improves neighborhood-aware entity alignment by introducing relation-aware GNNs \\cite{li2019semi,ye2019vectorized,yu2020generalized,mao2020mraea,mao2020relational}, multi-hop GNNs \\cite{sun2020knowledge,mao2021boosting} and hyperbolic GNNs \\cite{sun2020knowledge}.\nBesides the triple and neighbourhood context, \nRSN4EA \\cite{guo2019learning} and IPTransE \\cite{zhu2017iterative} further explore long-term dependency along \\textit{relation paths} in KGs. 
\nThere are also some entity alignment models that consider additional information such as \\textit{entity names} \\cite{xu2019cross,wu2019relation,wu2019jointly,wu2020neighborhood}, \\textit{entity attributes} \\cite{trisedya2019entity,zhang2019multi,liu2020exploring} and \\textit{textual descriptions} \\cite{chen2018co}.\nAlthough structural contexts are abundant and always available in KGs, few existing EA models have tried to integrate them into the embedding learning process. In this work, we introduce Transformer to capture multi-context interactions and learn more comprehensive entity embeddings.\n\n\\noindent\\textbf{Collective Entity Alignment.}\nEntities are usually aligned independently in existing EA models. Only two methods have attempted to achieve collective entity alignment.\nCEAFF \\cite{zeng2020collective, zeng2021reinforcement} formulates alignment inference as a stable matching problem. \nRMN \\cite{zhu2021relation} designs an iterative framework to leverage positive interactions between entity and relation alignment. However, it can only capture the interactions from seed entity alignment and requires extra pre-aligned relations, which is not always feasible in practice. Furthermore, both methods treat alignment learning and alignment inference separately, e.g., by regarding holistic reasoning as a post-processing step to adjust the alignment results \\cite{zeng2020collective}. By contrast, our IMEA\\xspace model can inject the holistic knowledge obtained during alignment inference into the Transformer via soft label editing, so as to guide the embedding learning process.\n\n\\section{Methodology}\n\\subsection{Framework}\nWe define a KG as $\\mathcal{G}=(\\mathcal{E},\\mathcal{R},\\mathcal{T})$, \nwhere $\\mathcal{E}$ and $\\mathcal{R}$ denote the sets of entities and relations, respectively, and $\\mathcal{T}\\subset\\mathcal{E}\\times\\mathcal{R}\\times\\mathcal{E}$ is the set of relation triples. Each triple can be represented as $(s, r, o)$, where $s\\in\\mathcal{E}$ and $o\\in\\mathcal{E}$ denote the subject (head entity) and object (tail entity), respectively, and $r\\in\\mathcal{R}$ is the relation.\nGiven two heterogeneous KGs, i.e., a source KG $\\mathcal{G}$ and a target KG $\\mathcal{G}'$, entity alignment (EA) aims to find as many aligned entity pairs as possible $\\mathcal{A} = \\{(e, e') \\in \\mathcal{E} \\times \\mathcal{E}'| e \\equiv e'\\}$, \nwhere the equivalent relationship $\\equiv$ indicates that entities $e$ and $e'$ from different KGs represent the same real-world identity. \nTypically, to jump-start alignment learning, some seed entity alignment $\\mathcal{A}^s$ is provided as training data.\n\n\\begin{figure*}[!t]\n \\centering\n \\includegraphics[width=0.68\\linewidth]{image\/framework.pdf}\n \\caption{\\label{fig:framework.pdf}Framework of the proposed informed multi-context entity alignment model.}\n\\end{figure*}\n\nFigure \\ref{fig:framework.pdf} shows the overall framework of our IMEA\\xspace (Informed Multi-context Entity Alignment) model, which contains three main components:\nFirstly, we introduce Transformer as the contextualized encoder to capture the multiple structural contexts (i.e., triples, neighborhood subgraphs and paths) and generate representative KG embeddings for alignment learning. \nNext, we identify candidate target entities for each source entity based on their embedding similarity and perform holistic reasoning for each candidate entity pair. 
\nFinally, we inform the embedding learning module by injecting the alignment evidence into the training loss using our proposed soft label editing strategy. \nWe will introduce each component in detail in the following sections.\n\n\\subsection{Multi-context Transformer}\nFigure \\ref{fig:transformer} presents the architecture of our multi-context transformer. \nSpecifically, we apply two Transformers to encode the neighborhood context and path context, respectively.\nThe neighborhood context is generated by sampling neighboring entities. \nTo capture the relation semantics, we introduce a relation regularization based on the translational embedding function \\cite{bordes2013translating} at the input layer.\nThe path context is represented as an entity sequence generated by random walks. \nTo capture the deep alignment interactions across different KGs, \nwe replace the entity in a path with its counterpart in the seed entity alignment to generate a new semantically equivalent path.\nIn this way, a path may contain entities from the two KGs that remain to be aligned, \nand the Transformer seeks to fit such a cross-KG entity sequence and its equivalent path to capture the alignment information between two KGs.\n\n\\subsubsection{Intra-KG Neighbourhood Encoding}\nNeighborhood information is critical in representing KG structures for entity alignment, as similar entities usually have similar neighborhood subgraphs. \nConsidering that an entity may have a very large number of neighbors, \nwe repeatedly sample $n$ neighbors of each entity during training and feed these neighbors into Transformer to capture their interactions. \nSpecifically, given an entity $e$ and its sampled neighbours $(e_1, e_2,..., e_n)$, \nwe regard the neighbourhood as a sequence and feed it into a Transformer with $L$ self-attention layers stacked. \nThe input embeddings of Transformer are $(\\mathbf{h}_{e_1}^0, \\mathbf{h}_{e_2}^0, \\cdots, \\mathbf{h}_{e_n}^0)$, \nand the output of the $L$-th layer is generated as follows:\n\\begin{equation}\n \\mathbf{h}_{e_1}^L, \\mathbf{h}_{e_2}^L, \\cdots, \\mathbf{h}_{e_n}^L = \\text{Encoder}(\\mathbf{h}_{e_1}^0, \\mathbf{h}_{e_2}^0, \\cdots, \\mathbf{h}_{e_n}^0).\n\\end{equation}\nThe training objective is to predict the central entity $e$ with the input neighborhood context $\\mathbf{c}_e=\\text{MeanPooling}(\\mathbf{h}_{e_1}^L,\\mathbf{h}_{e_2}^L,\\cdots,\\mathbf{h}_{e_n}^L)$.\nThis can be regarded as a classification problem.\nThe prediction probability for each $e_i$ is given by\n\\begin{equation}\n \\label{eq:ea_pro}\n p_{e_i} = \\frac{\\sigma(\\mathbf{c}_e,\\mathbf{h}_{e_i}^0)}{\\sum_{e_j \\in \\mathcal{E}}\\sigma(\\mathbf{c}_e,\\mathbf{h}_{e_j}^0)},\n\\end{equation}\nwhere $\\sigma()$ denotes inner product.\nLet $\\mathbf{q}_{e}$ be the one-hot label vector for predicting entity $e$. \nFor example, $\\mathbf{q}_{e}=[1,0,0,0,0]$ means that the entity at the first position is the target.\nWe adopt the label smoothing strategy to get soft labels, e.g., $\\hat{\\mathbf{q}}_{e}=[0.8,0.05,0.05,0.05, 0.05]$,\nand use cross-entropy, i.e., $\\text{CE}(x,y)=-\\sum x \\log y$, to calculate the loss between the soft label vector and the prediction distribution over all candidates $\\mathbf{p}_{e}$. 
\nFormally, the overall loss is defined as\n\\begin{equation}\n \\label{eq:neigh_loss}\n \\mathcal{L}_\\text{neighbor} = \\sum_{e\\in \\mathcal{E}\\cup\\mathcal{E}'} \\text{CE}(\\hat{\\mathbf{q}}_{e}, \\mathbf{p}_{e}).\n\\end{equation}\n\n\\subsubsection{Cross-KG Path Encoding}\nTo capture the alignment information, we use seed entity alignment to bridge two KGs and perform random walks to generate cross-KG paths.\nFor example, a typical path of length $m$ can be denoted as $(e_1, e_2',...,e_m)$.\nThen we randomly mask an entity in the path and get $(e_1, [MASK],...,e_m)$.\nSimilar to neighborhood encoding, we feed the masked path into Transformer to get its representations:\n\\begin{equation}\n \\mathbf{h}_{e_1}^L, \\mathbf{h}_{[MASK]}^L, \\cdots, \\mathbf{h}_{e_m}^L = \\text{Encoder}(\\mathbf{h}_{e_1}^0, \\mathbf{h}_{[MASK]}^0, \\cdots, \\mathbf{h}_{e_m}^0).\n\\end{equation}\nThe output representation of $[MASK]$ is used to predict the masked entity $e_2'$. \nHere we use $\\mathbf{h}_{[MASK]}^L$ to replace $\\mathbf{c}_e$ in Eq. (\\ref{eq:ea_pro}) to calculate the prediction probability.\nSince the $[MASK]$ representation captures the information from all entities in the path, our model can capture the deep alignment interactions across two KGs.\nIf $e_2'$ has a counterpart $e_2$ in the seed entity alignment, \nour model also uses the $[MASK]$ representation to predict $e_2$, \nwhich means there are two labels for this prediction task.\nThe loss is given by:\n\\begin{equation}\n \\label{eq:path_loss}\n \\mathcal{L}_\\text{path} = \\sum_{e\\in \\mathcal{E}\\cup\\mathcal{E}'} \\text{CE}(\\hat{\\mathbf{q}}_{e}, \\mathbf{p}_{e}) + \\sum_{(e, e')\\in \\mathcal{A}^{s}} \\text{CE}(\\hat{\\mathbf{q}}_{e'}, \\mathbf{p}_{e}).\n\\end{equation}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.8\\linewidth]{image\/transformer.pdf}\n \\caption{\\label{fig:transformer}Illustration of multi-context Transformer.}\n\\end{figure*}\n\n\\subsubsection{Relation Regularization}\nRelation triples are the most fundamental structural contexts of KGs, \nand have already demonstrated the effectiveness for entity alignment.\nTo encode relation triples into Transformer, we propose a relation regularization module at the input layer without increasing the model complexity and parameters.\nThe objective of regularization comes from the widely-used translational embedding model TransE \\cite{bordes2013translating} that interprets a relation embedding as the translation vector from its subject entity to object entity.\nGiven a relation triple $(s, r, o)$, we expect the embeddings to hold that $\\mathbf{h}_s^0 + \\mathbf{r} = \\mathbf{h}_o^0$, \nwhere $\\mathbf{r}$ denotes the embedding of relation $r$. \nTo realize such translation, we add a additional regularization loss by minimizing the translation error over all relation triples:\n\\begin{equation}\n \\label{eq:relation_loss}\n \\mathcal{L}_\\text{relation} = \\sum_{(s, r, o)\\in \\mathcal{T}\\cup\\mathcal{T}'}\\| \\mathbf{h}_s^0 + \\mathbf{r} - \\mathbf{h}_o^0\\|_2.\n\\end{equation}\n\n\\subsection{Holistic Relation and Entity Alignment}\nGiven the learned embeddings, we propose a holistic reasoning method for the joint alignment of entities and relations. 
We use the input embeddings to conduct this process, i.e., $\\mathbf{e} = \\mathbf{h}_{e}^0$.\nDifferent from existing methods that only consider entity embedding similarities to identify aligned entities, we also compare relations and neighbors to compute entity similarities.\nThe basic idea is that similar entities tend to have higher embedding similarity, and their relations and neighboring entities should also be similar. \n\n\\subsubsection{Relation and Entity Functionality}\nIntuitively, when comparing two entities, different relational neighbors have different importance. \nFor example, relation \\textit{birthplace} is more important than relation \\textit{liveIn}, \nbecause a person can only be born in one place but can live in multiple places. \nBased on this, if we know two entities denote the same person, we can deduce that their neighbors along the relation \\textit{birthplace} also indicate the same place. \nWe use relation functionality \\cite{suchanek2011paris} to quantify the importance of relations.\nIt measures the number of distinct subjects per distinct subject-object pair of a relation.\nFor example, given $r\\in\\mathcal{R}$, its functionality is calculated as:\n\\begin{equation}\n f(r) = \\frac{|\\{s|(s,r,o) \\in \\mathcal{T}\\}|}{|\\{(s, o)|(s,r,o) \\in \\mathcal{T}\\}|},\n\\end{equation}\nwhere $|\\cdot|$ denotes the size of the set. Correspondingly, the inverse relation functionality is defined as:\n\\begin{equation}\n f^{-1}(r) = \\frac{|\\{o|(s,r,o) \\in \\mathcal{T}\\}|}{|\\{(s, o)|(s,r,o) \\in \\mathcal{T}\\}|}.\n\\end{equation}\n\nAnalogously, we propose entity functionality for comparing two relations. It defines the importance of an entity in uniquely determining a relation. \nRich connectivity of an entity reduces its importance. \nTherefore, we define entity functionality based on the number of triples an entity participates in, as a subject (functionality) or an object (inverse functionality):\n\\begin{align}\n \\begin{split}\n f(e) = \\frac{1}{|\\{(e,r,o)|(e,r,o) \\in \\mathcal{T}\\}|}, \\\\ \n f^{-1}(e) = \\frac{1}{|\\{(s,r,e)|(s,r,e) \\in \\mathcal{T}\\}|}.\n\\end{split}\n\\end{align}\nAll of these functionality features can be computed in advance, as they are only related to the statistics of KG connectivity.\n\n\\subsubsection{Relation Alignment Inference}\nTo assist entity alignment, we first find relation alignment in an unsupervised fashion. \nAs each relation connects many entities, we utilize each relation's subject and object entity embeddings to approximate the relation embedding. \nRelations are directional. Considering the heterogeneity of KGs, where relations from different KGs may be equivalent but oriented in opposite directions, we generate embeddings for both relations and inverse relations by swapping the subject and object entity sets. \nThe embeddings of connected entities are weighted by entity functionality. \nThe approximated embeddings of each relation $\\textbf{r}$ and its inverse $\\textbf{r}_{inv}$ are calculated as follows:\n\\begin{equation}\n \\mathbf{r} = [\\mathbf{S}, \\mathbf{O}],\\,\\,\\,\\mathbf{r}_{inv} = [\\mathbf{O}, \\mathbf{S}],\n\\end{equation}\nwhere $\\textbf{S}$ and $\\textbf{O}$ represent the embedding of the subject entity set $\\mathcal{E}_{\\text{r}}^{\\text{subj}}$ and object entity set $\\mathcal{E}_{\\text{r}}^{\\text{obj}}$ of a relation $r$.\n$[\\cdot]$ denotes concatenation. 
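\nBefore assembling the relation embeddings below, we remark that all of these functionality statistics reduce to simple set counts over the triples; a minimal sketch in plain Python (illustrative only, with triples given as $(s, r, o)$ tuples):\n\\begin{verbatim}\nfrom collections import defaultdict\n\ndef relation_functionality(triples):\n    # f(r) = |{s}| / |{(s, o)}| and\n    # f^-1(r) = |{o}| / |{(s, o)}| per relation.\n    subj = defaultdict(set)\n    obj = defaultdict(set)\n    pairs = defaultdict(set)\n    for s, r, o in triples:\n        subj[r].add(s)\n        obj[r].add(o)\n        pairs[r].add((s, o))\n    f = {r: len(subj[r]) / len(pairs[r]) for r in pairs}\n    f_inv = {r: len(obj[r]) / len(pairs[r]) for r in pairs}\n    return f, f_inv\n\ndef entity_functionality(triples):\n    # f(e) = 1 / #triples with e as subject;\n    # f^-1(e) uses the object position instead.\n    as_subj = defaultdict(int)\n    as_obj = defaultdict(int)\n    for s, r, o in triples:\n        as_subj[s] += 1\n        as_obj[o] += 1\n    f = {e: 1.0 / n for e, n in as_subj.items()}\n    f_inv = {e: 1.0 / n for e, n in as_obj.items()}\n    return f, f_inv\n\\end{verbatim}\n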
\nThe embeddings of the two sets can be generated by:\n\\begin{align}\n \\begin{split}\n \\textbf{S} = \\sum_{s \\in\\mathcal{E}_{\\text{r}}^{\\text{subj}}} f(s) \\times \\mathbf{s},\\,\\,\\,\n \\textbf{O} = \\sum_{o \\in \\mathcal{E}_{\\text{r}}^{\\text{obj}}}f^{-1}(o) \\times \\mathbf{o}.\n \\end{split}\n\\end{align}\nFinally, we define the alignment score of a pair of relations as the cosine similarity of the approximated relation embeddings:\n\\begin{equation}\n \\text{align}(r, r') = \\cos(\\mathbf{r}, \\mathbf{r}').\n\\end{equation}\nHere, $r$ can be either a relation or a reverse relation.\n\n\\subsubsection{Entity Alignment Inference}\nAfter approximating the relation alignment score, \nwe are able to match the relational neighbors (i.e., triples) of two entities from different KGs. \nFigure \\ref{fig:relation_align.pdf} shows the process of measuring the equivalence between two entity neighborhood sets, which consists of the following steps:\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.6\\linewidth]{image\/relation_align.pdf}\n \\caption{\\label{fig:relation_align.pdf}Illustration of entity alignment inference.}\n\\end{figure}\n\n\\noindent\\textbf{Step 1. Pair-wise matching matrix generation.}\nLet $N_e$ and $N_{e'}$ denote the neighborhood sets of $e$ and $e'$, respectively.\nWe first compute the pair-wise triple-level matching score for $e_i \\in N_e$ and $e_j \\in N_{e'}$.\nWe take into consideration both relation direction and functionality.\nIf the neighbor is outgoing, we utilize the inverse relation functionality $f^{-1}(r)$ for weighting; otherwise, we adopt its functionality $f(r)$. \nFor brevity, we use the matching of two outgoing neighbors to illustrate the calculation of the similarity matrix. \nAssuming that the relations connecting $e_i$ and $e_j$ to their respective central entities are $r_i$ and $r_j$, respectively, we define $\\phi$ as the similarity function of $e_i \\in N_e$ and $e_j \\in N_{e'}$ as follows:\n\\begin{equation}\n \\phi(e_i,e_j) = f^{-1}(r_i) \\times f^{-1}(r_j) \\times \\text{align}(r_i, r_j) \\times \\cos(\\mathbf{e}_i,\\mathbf{e}_j).\n\\end{equation}\n\\noindent\\textbf{Step 2. Neighborhood intersection calculation.}\nWe then acquire the intersection of two neighborhood sets $N_e \\cap N_{e'}$ based on the matching matrix. \nAs the size of the smaller set indicates the maximum number of possibly matched entities that two sets may have, we conduct the max-pooling operation along the smaller dimension to identify possibly matched entities. \nFormally, assuming $|N_e| < |N_{e'}|$, the possibly matched entities are defined as:\n\\begin{equation}\n \\Psi = \\{(e_i, e_j)| i = 1,\\cdots,|N_e|, j=\\argmax_{j \\in [1,|N_{e'}|]}\\phi(e_i, e_j)\\}.\n\\end{equation}\nThen, we use a dynamic threshold $\\eta$ to filter out unlikely matches:\n\\begin{equation}\n \\Theta = \\{(e_i, e_j)| (e_i, e_j) \\in \\Psi,\\ \\phi(e_i, e_j) > \\eta\\}.\n\\end{equation} \nFinally, we sum up all the remaining similarities as the equivalence score of the two neighborhood sets.\nThis process can be regarded as computing the intersection of two sets.\nIn other words, the intersection score of $N_e$ and $N_{e'}$ is the aggregation of the triple-level scores from the intersection entity set:\n\\begin{equation}\n N_e \\cap N_{e'} = \\sum_{(e_i, e_j) \\in \\Theta}\\phi(e_i, e_j).\n\\end{equation}\n\n\\noindent\\textbf{Step 3. 
Neighborhood union calculation.} \nNext, we compute the union of neighborhoods to normalize the above equivalence score.\nWe aggregate the weighted edges after the max-pooling step.\nSpecifically, we use the relation functionality of connected edges as the weight and calculate the pair-wise functionality multiplication to approximate the union of two neighborhood sets. \nTaking outgoing edges as an example, for $e_i\\in N_e$ and $e_j \\in N_{e'}$, we have:\n\\begin{equation} \n \\phi^r(e_i, e_j) = f^{-1}(r_i)\\times f^{-1}(r_j).\n\\end{equation}\nThen, we aggregate the pair-wise functionality of possibly matched neighbors as the union score of two neighborhood sets $N_e \\cup N_{e'}$:\n\\begin{equation}\n N_e \\cup N_{e'} = \\sum_{(e_i, e_j) \\in \\Psi}\\phi^r(e_i, e_j).\n\\end{equation}\n\n\\noindent\\textbf{Step 4. Holistic alignment probability. }\nFinally, the entity alignment probability is calculated in a Jaccard-like fashion based on their neighborhoods, as defined below:\n\\begin{equation}\n \\text{prob}(e, e') = \\frac{N_e \\cap N_{e'}}{N_e \\cup N_{e'}}.\n\\end{equation}\n\n\\subsection{Soft Label Editing by Alignment Evidence}\nAfter acquiring the holistic alignment probability of important entity pairs,\ni.e., the top-$k$ similar target counterparts of each source entity, \nwe inform the multi-context Transformer of such an alignment clue to edit the soft labels.\nWe design two types of alignment evidence.\nSpecifically, we collect entity pairs with high alignment probability, i.e., $\\text{prob}(e, e') \\geq \\gamma_{\\text{pos}}$, as positive evidence.\nOn the contrary, if $\\text{prob}(e, e') \\leq \\gamma_{\\text{neg}}$, the two entities form a negative alignment pair.\nThen, we inject both positive and negative alignment evidence into the embedding module by editing the soft labels for prediction tasks.\nThe idea is that if the target entity has several counterparts in positive pairs, all counterparts should also be labeled as $\\min(s, p)$, where $s$ denotes the soft label of the target entity and $p$ is the calculated alignment probability.\nWe still use the soft label vector $\\hat{\\mathbf{q}}_{e}=[0.8,0.05,0.05,0.05,0.05]$ as an example, where the entity at the first position is the target. \nSuppose it has two predicted counterparts at the second and third positions with the probabilities of $0.5$ and $0.9$, respectively. In this case, the soft label vector is rewritten as $\\hat{\\mathbf{q}}_{e}=[0.8,0.5,0.8,0.05,0.05]$, and it is further normalized to serve as the labels for training the Transformer.\nFor negative counterparts, the corresponding label values are set to $0$, which excludes them from the prediction.\n\n\\section{Experiment}\n\\subsection{Experiment Setting}\n\\subsubsection{Datasets}\nWe use the benchmark dataset (V1) in OpenEA \\cite{sun13benchmarking} for evaluation as it follows the data distribution of real KGs. \nIt contains two cross-lingual settings extracted from multi-lingual DBpedia (English-to-French and English-to-German), and two monolingual settings among popular KGs (DBpedia-to-Wikidata and DBpedia-to-YAGO). \nEach setting has two sizes with 15K and 100K pairs of reference entity alignment, respectively, without reference relation alignment. \nWe follow the data splits in OpenEA where $20\\%$ of the alignment is used for training, $10\\%$ for validation, and $70\\%$ for testing. \n\n\\subsubsection{Implementation Details}\nOur IMEA\\xspace model is implemented based on the OpenEA library. 
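\nAs a concrete illustration of the soft label editing described above, consider the following sketch on the running toy example (plain Python; the helper is hypothetical and not part of the OpenEA library):\n\\begin{verbatim}\nimport numpy as np\n\ndef edit_soft_labels(q, positives, negatives, s=0.6):\n    # q: soft label vector; positives: {index: prob};\n    # negatives: indices whose label is set to zero.\n    q = np.asarray(q, dtype=float).copy()\n    for idx, p in positives.items():\n        q[idx] = min(s, p)   # positive evidence\n    for idx in negatives:\n        q[idx] = 0.0         # negative evidence\n    return q / q.sum()       # renormalize for training\n\n# Target at position 0, counterparts at positions 1, 2.\nq = [0.8, 0.05, 0.05, 0.05, 0.05]\nedited = edit_soft_labels(q, positives={1: 0.5, 2: 0.9},\n                          negatives=[], s=0.8)\n\\end{verbatim}\nBefore the final renormalization, this reproduces the edited vector $[0.8,0.5,0.8,0.05,0.05]$ from the example above.\n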
\nWe initialize the trainable parameters with Xavier initialization \\cite{Xavier} and optimize the loss with Adam \\cite{Adam}.\nAs for hyper-parameters, the number of sampled neighbors is $3$, and the length of paths is $5$.\nThe batch size is $4096$, and the learning rate is $0.0001$.\nThe hidden size of Transformer is $256$. \nIt has $8$ layers with $8$ attention heads in each layer. \nWe conduct the soft label editing process based on holistic inference from the second training epoch.\nThe dynamic threshold of hard filter is $\\eta = \\max(0.6, 0.5 * (\\max_{(e_i, e_j)\\in \\Theta}\\phi(e_i, e_j) + \\text{mean}_{(e_i, e_j)\\in \\Theta}\\phi(e_i, e_j)))$. \nThe soft label value is $s=0.6$. \nThe thresholds for selecting positive and negative pairs are set as $\\gamma_{\\text{pos}} = 0.62$ and $\\gamma_{\\text{neg}} = 0.01$, respectively.\nBy convention, we use Hits@1, Hits@5 and mean reciprocal rank (MRR) as the evaluation metrics, and higher scores indicate better performance. \nWe release our source code at GitHub\\footnote{\\url{https:\/\/github.com\/JadeXIN\/IMEA}}.\n\n\n\\begin{table*}[!t]\n\t\\centering\n\t\\caption{Entity alignment results on 15K datasets.}\n\t\\resizebox{0.9\\textwidth}{!}{\n\t\t\\begin{tabular}{lcclcclcclccl}\n\t\t\t\\toprule\n\t\t\t\\multirow{2}{*}{Methods} & \\multicolumn{3}{c}{EN-FR-15K} & \\multicolumn{3}{c}{EN-DE-15K} & \\multicolumn{3}{c}{D-W-15K} & \\multicolumn{3}{c}{D-Y-15K} \\\\\n\t\t\t\\cmidrule(lr){2-4} \\cmidrule(lr){5-7} \\cmidrule(lr){8-10} \\cmidrule(lr){11-13} \n\t\t\t& Hits@1 & Hits@5 & MRR & Hits@1 & Hits@5 & MRR & Hits@1 & Hits@5 & MRR & Hits@1 & Hits@5 & MRR \\\\ \\midrule\n\t\t\tMTransE & 0.247 & 0.467 & 0.351 & 0.307 & 0.518 & 0.407 & 0.259 & 0.461 & 0.354 & 0.463 & 0.675 & 0.559 \\\\\n\t\t\tIPTransE & 0.169 & 0.320 & 0.243 & 0.350 & 0.515 & 0.430 & 0.232 & 0.380 & 0.303 & 0.313 & 0.456 & 0.378 \\\\\n\t\t\tAlignE & 0.357 & 0.611 & 0.473 & 0.552 & 0.741 & 0.638 & 0.406 & 0.627 & 0.506 & 0.551 & 0.743 & 0.636 \\\\\n\t\t\tGCN-Align & 0.338 & 0.589 & 0.451 & 0.481 & 0.679 & 0.571 & 0.364 & 0.580 & 0.461 & 0.465 & 0.626 & 0.536 \\\\\n\t\t\tSEA & 0.280 & 0.530 & 0.397 & 0.530 & 0.718 & 0.617 & 0.360 & 0.572 & 0.458 & 0.500 & 0.706 & 0.591 \\\\\n\t\t\tRSN4EA & 0.393 & 0.595 & 0.487 & 0.587 & 0.752 & 0.662 & 0.441 & 0.615 & 0.521 & 0.514 & 0.655 & 0.580 \\\\\n\t\t\tAliNet* & 0.364 & 0.597 & 0.467 & 0.604 & 0.759 & 0.673 & 0.440 & 0.628 & 0.522 & 0.559 & 0.690 & 0.617 \\\\\n\t\t\tHyperKA* & 0.353 & 0.630 & 0.477 & 0.560 & 0.780 & 0.656 & 0.440 & 0.686 & 0.548 & \\underline{0.568} & \\underline{0.777} & \\underline{0.659} \\\\\n\t\t\tKE-GCN* & \\underline{0.408} & \\underline{0.670} & \\underline{0.524} & \\textbf{0.658} & \\textbf{0.822} & \\textbf{0.730} & \\underline{0.519} & \\underline{0.727} & \\underline{0.608} & 0.560 & 0.750 & 0.644\\\\\n\t\t\t\\midrule\n\t\t\tIMEA\\xspace & \\textbf{0.458} & \\textbf{0.720} & \\textbf{0.574} & \\underline{0.639} & \\underline{0.827} & \\underline{0.724} & \\textbf{0.527} & \\textbf{0.753} & \\textbf{0.626} & \\textbf{0.639} & \\textbf{0.804} & \\textbf{0.712} \\\\\n\t\t\t\\bottomrule\n\t\\end{tabular}}\n\t\\label{tab:ent_alignment_15k}\n\\end{table*}\n\n\\begin{table*}[!t]\n\t\\centering\n\t\\caption{Entity alignment results on 100K datasets.}\n\t\\resizebox{0.9\\textwidth}{!}{\n\t\\begin{tabular}{lcclcclcclccl}\n\t\t\\toprule\n\t\t\\multirow{2}{*}{Methods} & \\multicolumn{3}{c}{EN-FR-100K} & \\multicolumn{3}{c}{EN-DE-100K} & \\multicolumn{3}{c}{D-W-100K} & \\multicolumn{3}{c}{D-Y-100K} 
\\\\\n\t\t\\cmidrule(lr){2-4} \\cmidrule(lr){5-7} \\cmidrule(lr){8-10} \\cmidrule(lr){11-13} \n\t\t& Hits@1 & Hits@5 & MRR & Hits@1 & Hits@5 & MRR & Hits@1 & Hits@5 & MRR & Hits@1 & Hits@5 & MRR \\\\\n\t\t\t\\midrule\n\t\t\tMTransE & 0.138 & 0.261 & 0.202 & 0.140 & 0.264 & 0.204 & 0.210 & 0.358 & 0.282 & 0.244 & 0.414 & 0.328 \\\\\n\t\t\tIPTransE & 0.158 & 0.277 & 0.219 & 0.226 & 0.357 & 0.292 & 0.221 & 0.352 & 0.285 & 0.396 & 0.558 & 0.474 \\\\\n\t\t\tAlignE & 0.294 & 0.483 & 0.388 & 0.423 & 0.593 & 0.505 & 0.385 & 0.587 & 0.478 & 0.617 & 0.776 & 0.691 \\\\\n\t\t\tGCN-Align & 0.230 & 0.412 & 0.319 & 0.317 & 0.485 & 0.399 & 0.324 & 0.507 & 0.409 & 0.528 & 0.695 & 0.605 \\\\\n\t\t\tSEA & 0.225 & 0.399 & 0.314 & 0.341 & 0.502 & 0.421 & 0.291 & 0.470 & 0.378 & 0.490 & 0.677 & 0.578 \\\\\n\t\t\tRSN4EA & 0.293 & 0.452 & 0.371 & 0.430 & 0.570 & 0.497 & 0.384 & 0.533 & 0.454 & 0.620 & 0.769 & 0.688 \\\\\n\t\t\tAliNet* & 0.266 & 0.444 & 0.348 & 0.405 & 0.546 & 0.471 & 0.369 & 0.535 & 0.444 & \\underline{0.626} & \\underline{0.772} & \\underline{0.692} \\\\\n\t\t\tHyperKA* & 0.231 & 0.426 & 0.324 & 0.239 & 0.432 & 0.332 & 0.312 & 0.535 & 0.417 & 0.473 & 0.696 & 0.574\\\\\n\t\t\tKE-GCN* & \\underline{0.305} & \\underline{0.513} & \\underline{0.405} & \\underline{0.459} & \\underline{0.634} & \\underline{0.541} & \\textbf{0.426} & \\textbf{0.620} & \\textbf{0.515} & 0.625 & 0.791 & 0.700 \\\\\n\t\t\t\\midrule\n\t\t\tIMEA\\xspace & \\textbf{0.329} & \\textbf{0.526} & \\textbf{0.424} & \\textbf{0.467} & \\textbf{0.641} & \\textbf{0.549} & \\underline{0.419} & \\underline{0.614} & \\underline{0.508} & \\textbf{0.664} & \\textbf{0.817} & \\textbf{0.733}\\\\\n\t\t\t\\bottomrule\n\t\\end{tabular}}\n\t\\label{tab:ent_alignment_100k}\n\\end{table*}\n\n\\subsubsection{Baselines}\nTo evaluate the effectiveness of our proposed IMEA\\xspace model, we compare it with existing state-of-the-art supervised structure-based entity alignment methods. \nThey can be roughly divided into the following categories.\n\n\\begin{itemize}\n\\item Triple-based models that capture the local semantics information of relation triples based on TransE, including MTransE \\cite{chen2017multilingual}, JAPE \\cite{sun2017cross}, AlignE \\cite{sun2018bootstrapping} and SEA \\cite{pei2019semi}.\n\\item Neighborhood-based models that use GNNs to exploit subgraph structures for entity alignment, including GCN-Align \\cite{wang2018cross}, AliNet \\cite{sun2020knowledge}, HyperKA \\cite{sun2020knowledge}, KE-GCN \\cite{yu2020generalized}.\n\\item Path-based models that explore the long-term dependency across relation paths, including IPTransE \\cite{zhu2017iterative} and RSN4EA\\cite{guo2019learning}.\n\\end{itemize}\n\nOur model and the above baselines all focus on the structural information of KGs. For a fair comparison, we ignore other models that incorporate side information (e.g., attributes, entity names and descriptions) like RDGCN \\cite{wu2019relation}, KDCoE \\cite{chen2018co} and AttrGNN \\cite{liu2020exploring}.\n\n\\subsection{Results and Analysis}\n\\subsubsection{Main Results}\nTables \\ref{tab:ent_alignment_15k} and \\ref{tab:ent_alignment_100k} present the entity alignment results on the OpenEA 15K and 100K datasets, respectively.\nResults labelled by * are reproduced using the released source code with careful tuning of hyper-parameters, and the other results are taken from OpenEA \\cite{sun13benchmarking}. 
\nWe can see that IMEA\\xspace achieves the \\textbf{best} or \\underline{second-best} performance compared with these SOTA baselines. \nIt outperforms the best-performing baselines on Hits@1 by $1\\%-7\\%$, except for the EN-DE-15K and D-W-100K datasets. \nFor example, on 15K datasets, IMEA\\xspace achieves a gain of $7.1\\%$ by Hits@1 compared with HyperKA and $7.9\\%$ against the latest method KE-GCN on D-Y-15K.\nOverall, KE-GCN achieves the second-best performance. \nIt exceeds our performance by 1.9\\% on EN-DE-15K and 0.7\\% on D-W-100K due to its strength in combining advanced knowledge embedding models with GCNs. \nHowever, the limited structure context of its embedding module prevents it from acquiring further improvement. \nOn the 100K datasets, many baselines fail to achieve promising results due to the more complex KG structures and larger alignment space.\nIMEA\\xspace also acquires the best results on most large-scale datasets, except on D-W-100K where it is second-best, which demonstrates the practicability of our model. \nThis is because IMEA\\xspace can integrate multiple structural contexts, and the informed knowledge can further improve the performance by holistic reasoning. \nIn summary, the results show the superiority of IMEA\\xspace.\n\n\\subsubsection{Different Path Lengths}\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.9\\linewidth]{image\/path_len.pdf}\n \\caption{\\label{fig:path_len}Performance and training time w.r.t.\\ path length on 15K datasets.}\n\\end{figure*}\nAs a path is one of the crucial structural contexts for both embedding and alignment learning, we evaluate the effect of path length on entity alignment. \nFigure \\ref{fig:path_len} reports the IMEA\\xspace performance w.r.t.\\ different path lengths on 15K datasets. \nAs expected, entity alignment accuracy (Hits@1 and MRR) improves as the path length increases.\nThis is because a longer path can boost the cross-KG interaction and mitigate the heterogeneity issue between different KGs. \nThe performance increase is more evident on EN-FR-15K due to the higher structural heterogeneity in this dataset. \nHowever, it is also noticeable that the training time per epoch increases as the path length grows. \nHence, we choose a path length of 5 in our setting as an effectiveness-efficiency trade-off.\n\n\\subsubsection{Complementarity of Different Contexts}\n\n\\begin{table}[!t]\n\\centering\n\\caption{Complementarity of context on 15K datasets.}\n\\resizebox{0.99\\linewidth}{!}{\n\\begin{tabular}{lcccccccc} \n\\toprule\n\\multirow{2}{*}{Methods} & \\multicolumn{2}{c}{EN-FR} & \\multicolumn{2}{c}{EN-DE} & \\multicolumn{2}{c}{D-W} & \\multicolumn{2}{c}{D-Y} \\\\\n\\cmidrule(lr){2-9}\n& Hits@1 & MRR & Hits@1 & MRR & Hits@1 & MRR & Hits@1 & MRR \\\\\n\\midrule\nIMEA\\xspace & 0.458 & 0.574 & 0.639 & 0.724 & 0.527 & 0.626 & 0.639 & 0.712 \\\\ \n\\,\\, w\/o relation & 0.378 & 0.482 & 0.609 & 0.684 & 0.460 & 0.552 & 0.599 & 0.667\\\\\n\\,\\, w\/o neighbor & 0.431 & 0.547 & 0.612 & 0.681 & 0.494 & 0.595 & 0.635 & 0.708\\\\\n\\bottomrule\n\\end{tabular}}\n\\label{tab:context-effect}\n\\end{table}\n\nTable~\\ref{tab:context-effect} gives the results of the ablation study on the triple and neighborhood contexts on the 15K datasets (paths are necessary for alignment learning).\nIMEA\\xspace ``w\/o relation'' and ``w\/o neighbor'' represent the variants obtained by removing relation triples or neighborhood information from our multi-context Transformer. \nThe performance of both variants drops. 
\nWhen the relation triple context is removed, the Hits@1 decreases by $4\\%$ to $8\\%$, and MRR drops by $4\\%$ to $9\\%$. \nThe decline is smaller when the neighborhood is absent, with a $0.5\\%$ to $3\\%$ drop on Hits@1 and a $0.5\\%$ to $4\\%$ drop on MRR. \nOverall, the absence of either the relation or the neighbor context results in a performance drop on all datasets, and relation triples contribute more than the neighborhood context.\n\n\\subsubsection{Effect of Informed Process}\nThe ablation study on the effect of the holistic reasoning process is shown in Table \\ref{tab:inf-effect}, \nwhere IMEA\\xspace (w\/o inf) represents the variant without using the inferred holistic knowledge. It can be observed that the alignment clue captured by holistic inference brings an improvement of $1\\%$ to $2.5\\%$ on Hits@1 and $1\\%$ to $2\\%$ on MRR on all datasets. \nThis quantitatively validates the effectiveness of our holistic reasoning process and its ability to inform embedding and alignment learning.\n\n\\begin{table}[!t]\n\\centering\n\\caption{Effect of holistic inference on 15K datasets.}\n\\resizebox{0.99\\linewidth}{!}{\n\\begin{tabular}{lcccccccc} \n\\toprule\n\\multirow{2}{*}{Methods} & \\multicolumn{2}{c}{EN-FR} & \\multicolumn{2}{c}{EN-DE} & \\multicolumn{2}{c}{D-W} & \\multicolumn{2}{c}{D-Y} \\\\\n\\cmidrule(lr){2-9}\n& Hits@1 & MRR & Hits@1 & MRR & Hits@1 & MRR & Hits@1 & MRR \\\\\n\\midrule\nIMEA\\xspace & 0.458 & 0.574 & 0.639 & 0.724 & 0.527 & 0.626 & 0.639 & 0.712 \\\\ \n\\,\\, w\/o inf & 0.431 & 0.546 & 0.614 & 0.703 & 0.502 & 0.598 & 0.629 & 0.704\\\\\n\\bottomrule\n\\end{tabular}}\n\\label{tab:inf-effect}\n\\end{table}\n\n\\subsection{Case Study on Holistic Reasoning}\nIn this section, we conduct case studies to further demonstrate the effectiveness of our holistic reasoning, i.e., how the positive and negative evidence obtained from holistic reasoning can be used to fix alignment errors caused by the learned embeddings.\n\n\\begin{figure}[t]\n \\centering\n \\begin{subfigure}[b]{0.95\\linewidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{image\/case2.pdf}\n \\caption{\\label{fig:case2.pdf}}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.95\\linewidth}\n \\includegraphics[width=1\\linewidth]{image\/case3.pdf}\n \\caption{\\label{fig:case3.pdf}}\n \\end{subfigure}\n \\caption{Examples of aligned entity pairs discovered by holistic inference from D-Y-15K. The dark arrow and dark rectangle indicate the matched relations and entities recognized by the inference, respectively.}\n \\label{fig:correct example}\n\\end{figure}\n\n\\begin{figure}[t]\n \\centering\n \\begin{subfigure}[b]{0.5\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{image\/case1_0.pdf}\n \\caption{\\label{fig:case1_0.pdf}Incorrect alignment pairs.}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.5\\textwidth}\n \\centering\n \\includegraphics[width=1.0\\linewidth]{image\/case1_1.pdf}\n \\caption{\\label{fig:case1_1.pdf}Correct alignment pairs.}\n\\end{subfigure}\n\\caption{Example of a negative alignment from EN-FR-15K which can be corrected by the holistic reasoning. \nThe vertical and horizontal axes represent the entities' neighbors from EN and FR, respectively. Each neighbor consists of the relation (with its functionality) and the entity (either subject ``s'' or object ``o''). The top figure reports the relation alignment while the bottom figure shows the neighbor alignment. 
Darker cells indicate higher alignment probability between the corresponding relations or neighbors.\n}\n\\label{fig: incorrect example}\n\\end{figure}\n\n\\subsubsection{Positive Evidence Benefits Embedding Learning}\nFigure \\ref{fig:case2.pdf} illustrates a case of positive alignment evidence from D-Y-15K. \nThe learned embedding similarity between these two identical entities is only $0.59$. \nSuch an inaccurate embedding distance is caused by the sparse connections between the two entities. \nLong-tail entities suffer from the isolation issue. Hence, two entities with sparse triples would have few chances to be pushed closer during training. \nAnother reason for the low embedding similarity is the large degree difference between the two entities, since the embedding model tends to generate dissimilar representations for entities with different degrees.\nFortunately, our holistic inference can effectively capture the positive interaction between entities by matching their neighborhood subgraphs at a more fine-grained level (i.e., triple-level). \nFirst, it catches the matched relations, i.e., \\textit{``is-part-of''} and \\textit{``is-located-in''}, \\textit{``official-language''} and \\textit{``has-official-language''}. \nAs the relational neighbors of the two entities are not always long-tail entities, the matched neighbors (denoted by dark blue) can be successfully captured, which leads to a positive evidence score of $0.92$.\nFigure \\ref{fig:case3.pdf} demonstrates another example from D-Y-15K, which shows that the holistic process is especially suitable for handling the heterogeneity issue between KGs. \nFrom our observation, even though two entities have many relational neighbors in common, the generated embeddings tend to be more dissimilar if the relations are in different directions.\nThe embedding similarity in Figure \\ref{fig:case3.pdf} is only $0.56$. \nAs the holistic process considers relation direction, it successfully captures the equivalent relations \\textit{``writer''} and \\textit{``created''} with different directions, and the similar relations \\textit{``birthplace''} and \\textit{``is-citizen-of''}, \nwhich leads to a high inference score of $0.89$.\nOverall, for those equivalent entities with low embedding similarity,\nour holistic inference can successfully capture their actual alignment interactions by comparing their neighborhood subgraphs, thus fixing the error caused by unreliable embedding learning.\n\n\\subsubsection{Negative Evidence Benefits Embedding Learning}\nOur holistic reasoning can also capture negative alignment evidence between entity embeddings that are wrongly located close to each other, and correctly accumulate positive clues between matched entities. \nFigure \\ref{fig: incorrect example} shows an example from EN-FR-15K about two songs by ``Whitney Houston'', where the incorrectly aligned entity pairs in Figure \\ref{fig:case1_0.pdf} have a slightly higher embedding similarity ($0.99$) than the corresponding ground-truth in Figure \\ref{fig:case1_1.pdf} ($0.97$). Such an embedding mistake can be corrected by our holistic reasoning based on the neighborhood information.\nThe neighboring entities' embeddings could also be inaccurate, which results in a closer embedding distance between the entity and its incorrect match. 
\nHowever, by holistic inference, we observe that the incorrectly aligned entities in Figure \\ref{fig:case1_0.pdf} only share some unimportant neighbors with low relation functionality such as \\textit{``genere''}, \\textit{``artist''}, \\textit{``producer''} and \\textit{``composer''}, leading to a limited alignment score of $0.32$. On the contrary, the ground-truth alignment in Figure \\ref{fig:case1_1.pdf} demonstrates a large number of common neighbors, among which positive matching clues are observed on some crucial neighbors with large relation functionality like \\textit{``album''} (the corresponding entities \\textit{``It's not right but It's okay''} are matched). Hence, it accumulates more positive evidence and achieves a higher alignment probability of $0.76$. In this way, holistic inference successfully identifies the aligned entity pair in Figure \\ref{fig:case1_0.pdf} as a negative prediction from the embedding module (with a smaller alignment probability), and injects such feedback into the Transformer to adjust embedding learning, thus improving the final entity alignment performance.\n\n\\section{Conclusion and Future Work}\nIn this paper, we propose an informed multi-context entity alignment model (IMEA\\xspace). It consists of a multi-context Transformer to capture multiple structural contexts for alignment learning, and a holistic reasoning process to generate global alignment evidence, which is iteratively injected back into the Transformer via soft label editing to guide further embedding training. Extensive experiments on the OpenEA dataset verify the superiority of IMEA\\xspace over existing state-of-the-art baselines. As future work, we will include extra side information of KGs to enhance embedding learning in IMEA\\xspace, and generate human-understandable interpretations of entity alignment results based on the informed holistic reasoning, thus achieving explainability of embedding-based EA models.\n\n\\begin{acks}\nThis work was partially supported by the Australian Research Council under Grant No. DE210100160 and DP200103650.\nZequn Sun's work was supported by Program A for Outstanding PhD Candidates of Nanjing University.\n\\end{acks}\n\n\\bibliographystyle{ACM-Reference-Format}\n\\balance\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
However, geometric skeleton-based representations of a shape provide a compact and intuitive description of the shape for modeling, synthesis, compression, and analysis. Generating or extracting such representations is significantly different from segmentation and recognition tasks, as they condense both local and global information about the shape, and often combine topological and geometrical recognition.\n\nWe observe that the main challenges for such shape abstraction tasks are (i) the inherent dimensionality reduction from the shape to the skeleton, (ii) the domain change as the true skeletal representation would be best expressed in a continuous domain, and (iii) the trade-off between the noise and representative power for the skeleton to prohibit overbranching but still preserve the shape. Although the lower dimensional representation is a clear advantage for shape manipulation, it raises the challenge of characterizing features and representation, especially for deep learning. Computational methods for skeleton extraction are abundant, but are typically not robust to noise on the boundary of the shape (see \\cite{Tagliasacchi2016}, for example). Small changes in the boundary result in large changes to the skeleton structure, with long branches describing insignificant bumps on the shape. Even for clean extraction methods such as Durix's robust skeleton \\cite{Durix2019} used for our dataset, changing the resolution of an image changes the value of the threshold for keeping only the desirable branches. Training a neural network to learn to extract a clean skeleton directly, without relying on a threshold, would be a significant contribution to skeleton extraction. In addition, recent deep learning algorithms have shown great results in tasks requiring dimensionality reduction and such approaches could be easily applied to the shape abstraction task we describe in this paper.\n\nOur observation is that deep learning approaches are useful for proposing generalizable and robust solutions since classical skeletonization methods lack robustness. Our motivation arises from the fact that such deep learning approaches need comprehensive datasets, similar to 3D shape understanding benchmarks based on ShapeNet~\\cite{shapenet}, SUNCG~\\cite{suncg}, and SUMO~\\cite{sumo} datasets, with corresponding challenges. The tasks and expected results from such networks should also be well-formulated in order to evaluate and compare them properly. We chose skeleton extraction as the main task, to be investigated in pixel, point, and parametric domains in increasing complexity. \n\nIn order to solve the proposed problem with deep learning and direct more attention to geometric shape understanding tasks, we introduce the SkelNetOn Challenge. We aim to bring together researchers from computer vision, computer graphics, and mathematics to advance the state of the art in topological and geometric shape analysis using deep learning. The datasets created and released for this competition will serve as reference benchmarks for future research in deep learning for shape understanding. Furthermore, different input and output data representations can become valuable testbeds for the design of robust computer vision and computational geometry algorithms, as well as understanding deep learning models built on representations in 3D and beyond. 
The three SkelNetOns are defined below:\n\\begin{itemize}\n\\item \\textbf{Pixel SkelNetOn:}\nAs the most common data format for segmentation or classification models, our first domain poses the challenge of extracting the skeleton pixels from a given shape in an image. \n\n\\item \\textbf{Point SkelNetOn:} \nThe second challenge track investigates the problem in the point domain, where both the shapes and the skeletons are represented by point clouds. \n\n\\item \\textbf{Parametric SkelNetOn:} \nThe last domain aims to push the boundaries to find a parametric representation of the skeleton of a shape, given its image. The participants are expected to output skeletons of shapes represented as parametric curves with a radius function. \n\n\\end{itemize}\n\nIn the next section, we introduce how the skeletal models are generated. We then inspect the characteristics of our datasets and the annotation process (Section~\\ref{sec:data}), give a description of the tasks and formulations of the evaluation metrics (Section~\\ref{sec:tasks}), and introduce state-of-the-art methods as well as our baselines (Section~\\ref{sec:res}). The results of the competition will be presented in the Deep Learning for Geometric Shape Understanding Workshop during the 2019 International Conference on Computer Vision and Pattern Recognition (CVPR) in Long Beach, CA on June 17th, 2019. As of April 17th, more than 200 participants have registered in the SkelNetOn competitions and there are 37 valid submissions in the leaderboards over the three tracks. Public leaderboards and the workshop papers are listed on our website\\footnote{http:\/\/ubee.enseeiht.fr\/skelneton\/}.\n\n\\section{Skeletal Models and Generation}\\label{sec:skeletons}\nThe Blum medial axis (BMA) is the original skeletal model, consisting of a collection of points equidistant from the boundary (the skeleton) and their corresponding distances (radii)~\\cite{Blum1973}. The BMA produces skeleton points both inside and outside the shape boundary. Since each set, interior and exterior, reconstructs the boundary exactly, we select the interior skeleton to avoid redundancy. For the interior BMA, skeleton points are centers of circles maximally inscribed within the shape and the radii are those of the associated circles. For a discrete sampling of the boundary, Voronoi approaches to estimating the medial axis are well-known, and proven to converge to the true BMA as the boundary sampling becomes dense~\\cite{ogniewicz1994}. In the following, we refer to the skeleton points and radius of the shape together as the interior BMA. \n\nThe skeleton offers an intuitive and low dimensional representation of the shape that has been exploited for shape recognition, shape matching, and shape animation. However, this representation also suffers from poor robustness: small perturbations of the boundary may cause long branches to appear that model only a small boundary change. Such branches are uninformative about the shape. These perturbations also depend on the domain of the shape representation, since the noise on the boundary may be the product of coarse resolution, non-uniform sampling, and approximate parameterization. Many approaches have been proposed to remove these uninformative branches from an existing skeleton~\\cite{Attali1997,Chazal2005,Giesen2009}, whereas some more recent methods offer a skeletonization algorithm that directly computes a clean skeleton~\\cite{Durix2019,Leborgne2014}. \n\nWe base our ground truth generation on this second approach. 
First, we apply one-pixel dilation and erosion operations on the original image to close some negligible holes and remove isolated pixels that might change the topology of the skeleton. We manually adjust the shapes if the closing operation changes the shape topology. Then, the skeleton is computed with 2-, 4-, and 6-pixel thresholds. In other words, the Hausdorff distance from the shape represented by the skeleton to the shape used for the skeletonization is at most 2, 4, or 6 pixels~\\cite{Durix2019}. We then manually select the skeleton that is visually the most coherent for the shape from among the three approximations to produce a skeleton which has the correct number of branches. Finally, we manually remove some isolated skeleton points or branches if spurious branches still remain. \n\nWe compiled several shape repositories~\\cite{Bronstein2007,Bronstein2008,leonardshape16,Sebastian2004}\nfor our dataset with $1,725$ shapes in 90 categories. We used the aforementioned skeleton extraction pipeline for obtaining the initial skeletons, and created shapes and skeletons in other domains using the shape boundary, skeleton points, and skeleton edges.\n\n\\section{Datasets}\\label{sec:data}\nWe converted the shapes and their corresponding ground truth skeletal models into three representation domains: pixels, points, and B\\'ezier curves. This section will discuss these datasets derived from the initial skeletonization. \n\n\\subsection{Shapes and Skeletons in Pixels}\n\\label{sec:skelimage}\nThe image dataset consists of $1,725$ black and white images given in portable network graphics format with size $256\\times256$ pixels, split into $1,218$ training images, $241$ validation images, and $266$ test images.\n\nWe provide samples from every class in both the test and validation sets. There are two types of images: the shape images which represent the shapes in our dataset (Figure~\\ref{fig:pixels}), and the skeleton images which represent the skeletons corresponding to the shape images (Figure~\\ref{fig:pixels}). In the shape images, the background is annotated with black pixels and the shape with a closed polygon filled with white pixels. In the skeleton images, the background is annotated with black pixels and the skeleton is annotated with white pixels. The shapes have no holes; some of them are natural, while others are man-made. If one overlaps a skeleton image with its corresponding shape image, the skeleton will lie in the ``middle'' of the shape (i.e., it would be an approximation of the shape's skeleton). \n \n \\begin{figure}[hbt!]\n\t\\centering\n \t\t\\includegraphics[width=1\\linewidth]{img\/new_img_skel.png}\n \t\\caption{\\textbf{Pixel Dataset.} A subset of shape and skeleton image pairs is demonstrated from our pixel dataset.\n \t} \\label{fig:pixels}\n \\end{figure}\n \nFor generating shape images, the inside of the shape boundaries mentioned in Section~\\ref{sec:skeletons} is rendered in white, whereas the outside is rendered in black. The renders are then cropped, padded, and downsampled to $256\\times 256$. No noise was added or removed; therefore, all expected noise is due to pixelation or resizing effects. For generating skeleton images, the skeleton points as well as all pixels linearly falling between two consecutive skeleton points are rendered in white, on a black background. 
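\nA minimal sketch of this rasterization step is given below, using \\texttt{skimage.draw.line} for the discrete segments (an assumption on tooling made for illustration; the actual generation scripts may differ):\n\\begin{verbatim}\nimport numpy as np\nfrom skimage.draw import line\n\ndef render_skeleton(points, size=256):\n    # Draw white pixels along every segment between\n    # consecutive skeleton points, on black background.\n    img = np.zeros((size, size), dtype=np.uint8)\n    for (r0, c0), (r1, c1) in zip(points, points[1:]):\n        rr, cc = line(r0, c0, r1, c1)\n        img[rr, cc] = 255\n    return img\n\\end{verbatim}\n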
By definition of the skeletons from the original generation process, we assume adjacency within the 8-neighborhood in the pixel domain, and provide connected skeletons.\n\n\\subsection{Shapes and Skeletons in Points}\nSimilar to the image dataset, the point dataset consists of $1,725$ shape point clouds and corresponding ground truth skeleton point clouds, given in the basic point cloud export format \\texttt{.pts}. Sample shape point clouds and their corresponding skeleton point clouds are shown in Figure~\\ref{fig:dataset:points}.\n\n\n \\begin{figure}[hb]\n \t\t\\centering\n \t\t\\includegraphics[width=1\\linewidth]{img\/pt_skel_zoom.png}\n \t\\caption{\\textbf{Point Dataset.} A subset of our shape and skeleton point cloud pairs is demonstrated. We also emphasize the point sampling using two close-ups at the bottom right.}\n \t\t\\label{fig:dataset:points}\n \\end{figure}\n\nThe dataset is again split into $1,218$ training point clouds, $241$ validation point clouds, and $266$ test point clouds. We derive the point dataset by extracting the boundaries of the shapes (mentioned in Section~\\ref{sec:skeletons}) as two-dimensional point clouds. We fill the closed region within this boundary by points that implicitly lie on a grid with granularity $h=1$. After experimenting with over\/under sampling the point cloud with different values of $h$, we end up with this balancing value because the generated point clouds were representative enough to not lose details, and still computationally reasonable to process. Even though the average discretization parameter is given as $h=1$ in the provided dataset, we shared scripts\\footnote{https:\/\/github.com\/ilkedemir\/SkelNetOn} in the competition starting kit to coarsen or populate the provided point clouds so participants are able to experiment with different granularities of the point cloud representation. To break the regularity observed in the pixel domain, we add some uniformly distributed noise, avoiding any structural dependency in subsequently computed results. The noise is scaled by the sampling density, and we also provide scripts to apply noise with other probability distributions, such as Gaussian noise. \n\nGround truth for the skeletal models is given as a second point cloud which only contains the points representing the skeleton. To compute the skeletal point clouds, we computed the proximity of each point in the shape point cloud to the skeleton points and skeleton edges from the original dataset. Shape points closer than a threshold (depending on $h$) to any original skeleton points or edges in Euclidean space are accepted for the skeleton point cloud. This generation process allows one-to-one matching of each point in the skeleton point cloud to a point in the shape point cloud, thus the ground truth can be converted to labels if the task at hand is treated as a segmentation task. \n\n\\subsection{Parametric Skeletons}\\label{sec:skelprm}\nFinally, the parametric skeleton dataset consists of $1,725$ shape images and corresponding ground truth parametric skeletons, exported in tab separated \\texttt{.csv} format. The dataset is again split into $1,218$ training shapes, $241$ validation shapes, and $266$ test shapes. The shape images are created as discussed in Section~\\ref{sec:skelimage}, and parametric skeletons are modeled as degree five B\\'ezier curves. 
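\nFor reference, a degree five B\\'ezier curve with control points $b_0,\\dots,b_5$ evaluates as $B(t)=\\sum_{i=0}^{5}\\binom{5}{i}(1-t)^{5-i}t^{i}\\,b_i$ for $t\\in[0,1]$; a minimal evaluation sketch follows (illustrative only, not the dataset generation code):\n\\begin{verbatim}\nimport numpy as np\nfrom math import comb\n\ndef bezier(ctrl, t):\n    # ctrl: (6, 3) control points with (x, y, r) rows;\n    # t: 1D array of parameters in [0, 1].\n    n = len(ctrl) - 1\n    basis = np.stack(\n        [comb(n, i) * (1 - t) ** (n - i) * t ** i\n         for i in range(n + 1)])\n    return basis.T @ np.asarray(ctrl, dtype=float)\n\nt = np.linspace(0.0, 1.0, 50)\ncurve = bezier(np.random.rand(6, 3), t)  # (50, 3)\n\\end{verbatim}\n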
Each curve corresponds to a branch of the skeleton, where the first two coordinates describe the $\\{x,y\\}$ location in the image of an inscribed circle center, and the third coordinate is the radius $r$ associated with the point. The output is a vector containing the 3D $(x, y, r)$ coordinates of the control points of each branch:\n\\begin{equation}\n v = [x_0^0, y_0^0, r_0^0, x_1^0, y_1^0, r_1^0, \\ldots, x_5^0, y_5^0, r_5^0, x_0^1, y_0^1, r_0^1, \\ldots],\n\\end{equation}\nwhere $b_i^j = (x_i^j,y_i^j,r_i^j)$ is the $i$-th control point of the $j$-th branch in the medial axis.\n\nFrom the simply connected shapes of the dataset mentioned in Section~\\ref{sec:skeletons}, we first extract a clean medial axis representation. For a simply connected shape, the skeleton is a tree, whose joints and endpoints are connected by curves, which we call proto-branches. Unfortunately, the structure of the tree is not stable. Because skeletons are unstable in the presence of noise, a new branch due to a small perturbation of the boundary could appear and break a proto-branch into two branches. Moreover, the tree structure gives a dependency and partial order between the branches, not a total order. To obtain a canonical parametric representation of the skeleton, we first merge branches that have been broken by a less important branch. We then order the reduced set of branches according to branch importance. For both steps, we use a salience measure, the Weighted Extended Distance Function (WEDF) on the skeleton~\\cite{leonardshape16}, to determine the relative importance of branches. The WEDF function has been shown to measure relative importance of shape parts in a way that matches human perception \\cite{carlier2016}. \n\n\\paragraph{Merging branches.} \nFirst, we identify pairs of adjacent branches that should be joined to represent a single curve: a branch is split into two parts at a junction induced by a child branch of lower importance if the WEDF function is continuous across the junction. When two child branches joining the parent branch are of equal importance, then the junction is considered an end point of the parent curve. Figure~\\ref{fig:parametric} shows the resulting curves.\n\n\\paragraph{Computation of the B\\'ezier approximation.} \nEach individual curve resulting from the merging process is approximated by a B\\'ezier curve of degree five, whose control points have three parameters $(x,y,r)$, where $(x,y)$ are coordinates, and $r$ is the radius. The end points of the curve are interpolated, and the remaining points are determined by a least squares approximation. \n\n\\paragraph{Branch ordering.}\nWe then order the branches by importance to have a canonical representation. We estimate a branch's importance by the maximal WEDF value of a point in the branch. The branches can then be ordered, and their successive list of control points is the desired output.\n\n\\begin{figure}[hbt!]\n \t\t\\centering\n \t\t\\includegraphics[width=0.5\\linewidth]{img\/parametric_mousse.pdf}\n \t\t\\caption{\\textbf{Skeletal Branches.} The red curve passes continuously through the arm and nose curves. However, legs (bottom) and ears (top) split the main red curve into two parts of equal importance, becoming the end-points. 
}\n \t\t\\label{fig:parametric}\n\\end{figure}\n\n\\section{Tasks and Evaluations}\\label{sec:tasks}\nAlthough there are different choices of evaluation metrics specific to each data modality, we formulate our metrics in accordance with the tasks for each domain.\n\n\\subsection{Pixel Skeleton Classification}\nGenerating skeleton images from the given shape images can be posed as a pixel-wise binary classification task, or an image generation task. This makes it possible to evaluate performance by comparing a generated skeleton image, pixel by pixel, to its ground truth skeleton image. Such a comparison automatically accounts for common errors seen in skeleton extraction algorithms such as lack of connectivity, double-pixel width, and branch complexity. \n\nHowever, using a simple $L1$ loss measurement would provide a biased evaluation of image similarity. One can see this by looking at any of the skeleton images: the black background pixels far outnumber the white skeleton ones, giving the former much more significance in the final value of the computed $L1$ loss. To minimize the effects of class imbalance, the evaluation is performed using the $F1$ score, which takes into account the number of skeleton and background pixels in the ground truth and generated images. This is consistent with metrics used in the literature and will enable further comparisons with future work~\\cite{shen2017deepskeleton}. The number of skeleton pixels (positive) or background pixels (negative) is first counted in both the generated and ground truth skeleton images. The $F1$ score is then calculated from the harmonic average of the precision and recall values as follows: \n \\begin{eqnarray}\n \tF1 = \\dfrac{2 \\times \\text{precision} \\times \\text{recall}}{\\text{precision} + \\text{recall}},\n \\end{eqnarray}\nusing \n\\begin{eqnarray}\n \\text{Precision} = \\dfrac{TP} {TP + FP} \\nonumber \\\\\n \\text{Recall} = \\dfrac{TP}{TP + FN},\n\\end{eqnarray}\nwhere $TP$, $FN$, and $FP$ stand for the numbers of true positive, false negative, and false positive pixels, respectively. \n\n\\subsection{Point Skeleton Extraction}\nGiven a 2D point set representing a shape, the goal of the point skeleton extraction task is to output a set of point coordinates corresponding to the given shape's skeleton. This can be approached as a binary point classification task or a point generation task, both of which end up producing a skeleton point cloud that approximates the shape skeleton. The output set of skeletal points need not be part of the original input point set. The evaluation metric for this task needs to be invariant to the number and ordering of the points. The metric should also be flexible for different point sampling distributions representing the same skeleton. Therefore, the results are evaluated using the symmetric Chamfer distance function, defined by:\n\\begin{eqnarray}\nCh(A,B) = \\frac{1}{|A|}\\sum_{a\\in A} \\min _ {b\\in B} ||a-b||_2 + \\nonumber \\\\ \\frac{1}{|B|}\\sum_{b\\in B} \\min _ {a\\in A} ||a-b||_2,\n\\end{eqnarray}\nwhere $A$ and $B$ represent the skeleton point sets to be compared, $|\\cdot|$ denotes set cardinality, and $\\|\\cdot\\|_2$ denotes the Euclidean distance between two points. We use a fast vectorized \\texttt{Numpy} implementation of this function in order to compute Chamfer distances quickly in our evaluation script.\n\n\\subsection{Parametric Skeleton Generation}\nThe parametric skeleton extraction task aims to recover the medial axis as a set of 3D parametric curves from an input image of a shape. 
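\nBefore turning to the parametric metric, we note that the symmetric Chamfer distance above admits a very short vectorized implementation; a minimal \\texttt{Numpy} sketch (illustrative and equivalent in spirit, though not necessarily identical, to our evaluation script):\n\\begin{verbatim}\nimport numpy as np\n\ndef chamfer(A, B):\n    # A: (n, 2), B: (m, 2) point sets.\n    # Pairwise Euclidean distances, then the mean of\n    # nearest-neighbor distances in both directions.\n    D = np.linalg.norm(\n        A[:, None, :] - B[None, :, :], axis=-1)\n    return D.min(axis=1).mean() + D.min(axis=0).mean()\n\\end{verbatim}\n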
Following the shape and parametric skeleton notations introduced in Section~\\ref{sec:skelprm}, different metrics can be proposed to evaluate such a representation. In particular, we can measure either the distance of the output medial axis to the ground-truth skeleton, or its distance to the shape described in the input image. Although the second method looks better adapted to the task, it does not take into account several properties of the medial axis. It would be difficult, for example, to penalize disconnected branches or redundant parts of the medial axis.\n\nWe evaluate our results by distance to the ground truth medial axes in our database, since the proposed skeletal representation in the dataset already guarantees the properties introduced above and in Section~\\ref{sec:skelprm}, and the branches are ordered deterministically. We use the mean squared distance between the control points on the original and predicted branches as:\n\\begin{equation}\n MSD(b,\\tilde{b}) = \\frac{1}{6}\\sum_{i=0}^5 \\left( (x_i-\\tilde{x}_i)^2 + (y_i-\\tilde{y}_i)^2 + (r_i-\\tilde{r}_i)^2 \\right),\n\\end{equation}\nwhere $b = {(x_i,y_i,r_i)}_{i=\\{0..5\\}}$ is a branch in the ground truth data, and $\\tilde{b} = {(\\tilde{x}_i,\\tilde{y}_i,\\tilde{r}_i)}_{i=\\{0..5\\}}$ is the corresponding branch in the output data.\n\nThe evaluation metric needs to take into account models with an incorrect number of branches, since this number is different for each shape. We penalize each missing (or extra) branch in the output with a measure on the length of the branch in the ground truth (or in the output). We use a measure called missing branch error (MBE) for each missing or extra branch $b$:\n\\begin{equation}\n MBE(b) = \\frac{1}{5}\\sum_{i=0}^4 \\left( (x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2 \\right)\n + \\frac{1}{6}\\sum_{i=0}^5 r_i^2.\n\\end{equation}\n\nFinally, the evaluation function between an output vector $\\tilde{V}$ and its associated ground truth $V$ is defined as: \n\\begin{equation}\nD(V,\\tilde{V}) = \\frac{1}{N_b} \\left( \n \\sum_{j=0}^{n_b-1} MSD(b^j,\\tilde{b}^j) + \n \\sum_{j=n_b}^{N_b-1} MBE(\\hat{b}^j)\n \\right ),\n\\end{equation}\nwhere $n_b$ and $N_b$ are, respectively, the minimal and maximal numbers of branches between the ground truth and the output, and $\\hat{b}$ denotes the branches of the vector containing the maximal number of branches.\n\n\\section{State-of-the-art and Baselines}\\label{sec:res}\nSkeleton extraction has been an interesting topic of research in different domains. We briefly introduce these approaches in addition to our own deep-learning-based baselines. The participants are expected to surpass these baselines in order to be eligible for the prizes. \n\n\\subsection{Pixel Skeleton Results}\nEarly work on skeleton extraction was based on using segmented images as input \\cite{bai2007skeleton}. A comprehensive overview of previous approaches using pre-segmented inputs is presented in \\cite{saha2016survey}. With the advancement of neural network technology, the most recent approaches perform skeletonization from natural images with fully convolutional neural networks \\cite{shen2017deepskeleton,shen2016object}. In this challenge, we consider the former type of skeletonization task using pre-segmented input images.\n\nWe formulate the task of skeletonization as image translation from pre-segmented images to skeletons with a conditional adversarial neural network. For our baseline result, we use a vanilla pix2pix model as proposed in~\\cite{isola2017image}. 
We apply a distance transform to preprocess the binary input shape as illustrated in Figure~\\ref{fig:dis}. We found that this preprocessing of the input data enhances the neural network learning significantly. The model is trained with stochastic gradient descent using the $L1$-image loss for $200$ epochs. We measure performance in terms of the $F1$ score, achieving $\\textbf{0.6244}$ on the proposed validation set. \n\\begin{figure}[ht]\n\t\\centering\n \t\\begin{subfigure}{0.5\\linewidth}\n \t\t\\centering\n \t\t\\includegraphics[width=0.7\\linewidth]{img\/bat-7.png}\n \t\t\\caption{}\n \t\t\\label{fig:dis-shape}\n \t\\end{subfigure}\n \t\\begin{subfigure}{0.49\\linewidth}\n \t\t\\centering\n \t\t\\includegraphics[width=0.7\\linewidth]{img\/bat-7-inputs.png}\n \t\t\\caption{}\n \t\t\\label{fig:dis-dis}\n \t\\end{subfigure}\\\\\n \t\\begin{subfigure}{0.5\\linewidth}\n \t\t\\centering\n \t\t\\includegraphics[width=0.7\\linewidth]{img\/bat-7-outputs.png}\n \t\t\\caption{}\n \t\t\\label{fig:dis-out}\n \t\\end{subfigure}\n \t\\begin{subfigure}{0.49\\linewidth}\n \t\t\\centering\n \t\t\\includegraphics[width=0.7\\linewidth]{img\/bat-7-targets.png}\n \t\t\\caption{}\n \t\t\\label{fig:dis-target}\n \t\\end{subfigure}\\\\ \t\n\t\\caption{\\textbf{Pixel SkelNetOn Baseline.} (a) Original shape image. (b) Distance transformed shape image. (c) Baseline prediction result. (d) Ground truth skeleton. }\n\t\\label{fig:dis}\n\\end{figure}\n\n\\subsection{Point Skeleton Results}\nSkeleton and medial axis extraction from point clouds have been extensively explored in 2D and 3D domains by using several techniques such as Laplacian contraction~\\cite{cao10}, mass transport~\\cite{jalba16}, medial scaffold~\\cite{leymarie07}, locally optimal projection~\\cite{Lipman07}, maximal tangent ball~\\cite{Ma2012}, and local $L_1$ medians~\\cite{Huang13}. Although these approaches extract approximate skeletons, the dependency on sampling density, non-uniform and incomplete shapes, and the inherent noise in the point clouds are still open topics that deep learning approaches can handle implicitly. One such recent approach (P2PNet~\\cite{p2pnet}) uses a Siamese architecture to build spatial transformer networks for learning how points can be moved from a shape surface to the medial axis and vice versa.\n\nBased on the definition of this task in Section~\\ref{sec:tasks}, it can be formulated as (i) a generation task to create a point cloud by learning the distribution from the original shapes and skeleton points, (ii) a segmentation task to label the points of a shape as skeletal and non-skeletal points, and (iii) a spatial task to learn the pointwise translations to transform the given shape to its skeleton. We chose the second approach to utilize state-of-the-art point networks. First, we obtained ground truth skeletons as labels for the original point cloud (skeletal points labeled as $1$, and non-skeletal points as $2$). Then we experimented with the PointNet++~\\cite{pointnet} architecture's part-segmentation module to classify each point into two classes, defined by the labels. As the two classes are inherently imbalanced (the skeleton contains far fewer points than the rest of the shape), we observed that the skeleton class collapses. Similarly, if we over-sample the skeleton, the non-skeleton class collapses. To overcome this problem, we experimented with dynamic weighting of samples, so that the skeletal points are trusted more than non-skeletal points. 
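\nA minimal sketch of such per-class re-weighting of the point-wise loss is shown below (plain \\texttt{Numpy}; the weight values are illustrative and the labels are mapped to $\\{0,1\\}$ for brevity):\n\\begin{verbatim}\nimport numpy as np\n\ndef weighted_point_loss(log_probs, labels, class_w):\n    # log_probs: (num_points, 2) log class probabilities;\n    # labels: (num_points,) in {0, 1};\n    # class_w: per-class weights, e.g. larger for the\n    # rare skeleton class so it does not collapse.\n    w = class_w[labels]\n    nll = -log_probs[np.arange(len(labels)), labels]\n    return (w * nll).sum() / w.sum()\n\nclass_w = np.array([1.0, 4.0])  # trust skeletal points\n\\end{verbatim}\n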
The weights, as well as the learning rate, are adjusted every $10^{th}$ epoch to keep the classes balanced. Our approach achieved an accuracy of $58.93\\%$, measured by the mean IoU over all classes. Even though the mIoU is a representative metric for our task, in the future we would like to evaluate our results further by calculating the Chamfer distance between skeletons in shape space.\n\n\\subsection{Parametric Medial Axis Results}\n\nParametric representations of the medial axis have been proposed before, in particular representations with intuitive control parameters. For example, Yushkevich et al.~\\cite{yushkevich2003} use a cubic B-spline model with control points and radii; similarly, Lam et al.~\\cite{lam2007} use piecewise B\\'ezier curves.\n\nWe train a ResNet-50~\\cite{He2016} modified for regression. Inputs are segmented binary images, and outputs are six control points in $\\mathbb{R}^3$ that give $(x,y,r)$ values for the degree-five B\\'ezier curves used to model medial axis branches.\n\nThe parametric data is the most challenging of the three ground truths presented here, for two reasons. First, the number of entries in the ground truth varies with the number of branches in the skeleton. Second, the B\\'ezier control points do not encode information about the connectivity of the skeleton graph. To overcome the first challenge, we set all shapes to have $5$ branches. For those shapes with fewer than $5$ branches, we select the medial point with the maximum WEDF value (a center of the shape) as the control points of any non-existent branch, and set the corresponding radii to zero. We do not address the second issue for our baseline results. We use a simple mean squared error on the output coordinates as our loss function. While more branches are desirable to obtain quality models, there is a trade-off between the number of meaningless entries in the ground truth of a simple shape (with few branches) and providing an adequate number of branches in the ground truth of a complex shape (with many branches).\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth]{img\/param_results.png}\n \\caption{\\textbf{Parametric SkelNetOn Baseline.} Input image (left), ground truth parametric medial axis (middle), prediction from our model (right).}\n \\label{fig:paramres}\n\\end{figure}\n\nWe obtain a validation loss of $1405$ pixels, with a corresponding training loss of $121$ pixels, using early stopping. As a reference, the loss at the beginning of training is on the order of $50,000$ pixels.\n\nAs shown in Figure \\ref{fig:paramres}, branch connectivity is only occasionally produced by our trained model. The model does well with the location, orientation, and scale of the shape, but often misses long and narrow appendages such as the guitar neck (third row). Future work will incorporate additional constraints into the architecture to encourage connectivity of branches. \n\n\\section{Conclusions}\nWe presented the SkelNetOn Challenge at the Deep Learning for Geometric Shape Understanding workshop, in conjunction with CVPR~2019. Our challenge provides datasets, competitions, and a workshop structure around skeleton extraction in three domains: Pixel SkelNetOn, Point SkelNetOn, and Parametric SkelNetOn. We introduced our dataset analysis, formulated our evaluation metrics following our competition tasks, and shared our preliminary results as baselines building on the previous work in each domain. 
\n\nWe believe that SkelNetOn has the potential to become a fundamental benchmark for the intersection of deep learning and geometry understanding. We foresee that the challenge and the workshop will enable more collaborative research among different disciplines at the crossroads of shapes. Ultimately, we envision that such deep learning approaches can be used to extract expressive parametric and hierarchical representations that can be utilized for generative models \\cite{3dgan} and for proceduralization \\cite{demir18}.\n\\section*{Acknowledgments}\nWithout the contributions of the rest of our technical and program committees, this workshop would not have happened, so thank you, everyone; in particular, Veronika Schulze for her contributions throughout the project, Daniel Aliaga as the papers chair, and Bedrich Benes and Bernhard Egger for their support. The authors thank AWM's Women in Shape (WiSH) Research Network, funded by the NSF AWM Advances! grant. We also thank the University of Trier Algorithmic Optimization group, who provided funding for the workshop where this challenge was devised. We further thank NVIDIA for being the prize sponsor of the competitions. Lastly, we would like to acknowledge the support and motivation of the CVPR chairs and crew. \n{\\small\n\\bibliographystyle{ieee}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
These exhibit a multifractal structure as well, as shown in Abe~\\cite{A15} for the random walk on the $N\\times N$ torus.\n\nIn the present paper we study the precise statistics of the thick and thin points for a two-dimensional simple random walk run for times of the order of the cover time. Closely related to this is the set of \\myemph{avoided points}, i.e., those not visited at all, as well as the set of \\myemph{light points}, where the local time is at most a given constant. We show that all these level sets are intimately connected with the corresponding (so-called intermediate) level sets of the Discrete Gaussian Free Field studied earlier by O.~Louidor and the second author~\\cite{BL4}. In particular, their limiting statistics are captured by the Liouville Quantum Gravity measures introduced and studied by Duplantier and Sheffield~\\cite{DS}.\n\n\\subsection{Setting for the random walk}\n\\label{sec1.2}\\noindent\nIn order to take full advantage of the prior work~\\cite{BL4} on the DGFF, we will consider a slightly different setting than that of references~\\cite{ET60,A15}. Indeed, our random walk will behave as the simple random walk only inside a large finite subset of~$\\mathbb Z^2$; when it exits this set it reenters in the next step through a uniformly-chosen boundary edge.\n\nTo describe the dynamics of our random walk, consider first a general finite, unoriented, connected graph $G = (V\\cup\\{\\varrho\\},E)$, where~$\\varrho$ is a distinguished vertex (not belonging to~$V$). We assume that each edge~$e\\in E$ is endowed with a number~$c_e>0$, called the conductance of~$e$. Let~$X$ denote a continuous-time (constant-speed) Markov chain on~$V\\cup\\{\\varrho\\}$ which makes jumps at independent exponential times to a neighbor selected with the help of transition probabilities\n\\begin{equation}\n\\cmss P (u, v) := \n\\begin{cases} \n\\frac{c_{e}}{\\pi(u)}, &~\\text{if}~e:=(u, v) \\in E, \n\\\\\n0, &~\\text{otherwise},\n\\end{cases}\n\\end{equation}\nwhere $\\pi(u)$ is the sum of~$c_e$ over all edges incident with~$u$. We will use~$P^u$ to denote the law of~$X$ with~$P^u(X_0=u)=1$. \n\nGiven a path~$X$ of the above Markov chain, the local time at~$v\\in V\\cup\\{\\varrho\\}$ at time~$t$ is then given by \n\\begin{equation} \n\\label{E:local_time}\n\\ell_t^V(v) := \\frac1{\\pi(v)}\\int_0^t\\text{\\rm d}\\mkern0.5mu s\\,1_{\\{X_s=v\\}},\\quad t\\ge0,\n\\end{equation}\nwhere the normalization by $\\pi(v)$ ensures that the leading-order growth of~$t\\mapsto\\ell_t^V(v)$ is the same for all vertices. \nWe will henceforth work in the time parametrization by the local time at the distinguished vertex~$\\varrho$. For this we set $\\hat\\tau_{\\varrho}(t):=\\inf\\{s\\ge0\\colon\\ell_s^V(\\varrho)\\ge t\\}$ and denote\n\\begin{equation}\n\\label{E:LVt}\nL_t^V(v):=\\ell_{\\hat\\tau_{\\varrho}(t)}^V(v).\n\\end{equation}\nIn this parametrization, $t$ is the (leading-order) value of~$L^V_t$ at the typical vertex of~$V$.\n\nOur derivations will make heavy use of the connection of the above Markov chain with an instance of the Discrete Gaussian Free Field (DGFF). 
Denoting by\n\\begin{equation}\nH_{v} := \\inf \\bigl\\{t \\geq 0 \\colon X_t = v \\bigr\\}\n\\end{equation}\nthe first hitting time of vertex~$v$, this DGFF is the centered Gaussian process $\\{h_v^{V}\\colon v \\in V\\}$ with covariances given by\n\\begin{equation}\n\\label{E:cov}\n\\mathbb E\\bigl(h_u^{V} h_v^{V} \\bigr) = G^{V} (u, v) :=E^u\\bigl(\\ell_{H_{\\varrho}}^V(v)\\bigr).\n\\end{equation} \nHere and henceforth,~$\\mathbb E$ denotes expectation with respect to the law~$\\mathbb P$ of~$h^V$. The field naturally extends to~$\\varrho$ by~$h^V_\\varrho=0$. \n\n\\smallskip\nReturning to random walks on~$\\mathbb Z^2$, in our setting $V$ stands for a large finite subset~$V\\subset\\mathbb Z^2$ while~$\\varrho$ represents the ``\\myemph{boundary vertex};'' namely, the set of vertices outside~$V$, but with a neighbor in~$V$. The set of edges~$E$ is that between the nearest-neighbor pairs in~$V$ plus all the edges from~$V$ to~$\\mathbb Z^2\\smallsetminus V$ that now ``end'' in~$\\varrho$; see Fig.~\\ref{fig1}. The transition rule of the Markov chain is that of the simple random walk on the underlying graph; indeed, all conductances take a unit value, $c_e:=1$, at all the involved edges including those incident with~$\\varrho$. The DGFF associated with this network then corresponds to the ``standard'' DGFF in~$V$ (cf.~the review by Biskup~\\cite{B-notes}) with zero boundary conditions outside~$V$ except that our normalization is slightly different from the one used in~\\cite{B-notes} --- indeed, our fields are smaller by a multiplicative factor of~$2$ than those in \\cite{B-notes}.\n\n\\begin{figure}[t]\n\\centerline{\\includegraphics[height = 2.0in]{.\/figs\/domain.pdf}\n}\n\\begin{quote}\n\\small \n\\vglue-0.2cm\n\\caption{\n\\label{fig1}\nThe graph corresponding to~$V$ being the square of $6\\times 6$ vertices. Each vertex on the outer perimeter of~$V$ has an edge to the ``boundary vertex''~$\\varrho$; the corner vertices have two edges to~$\\varrho$. The ``boundary vertex'' plays the role of the wired boundary condition used often in statistical mechanics. For us this ensures that the associated DGFF vanishes outside~$V$.}\n\\normalsize\n\\end{quote}\n\\end{figure}\n\nFor the lattice domains, we will take sequences of subsets of~$\\mathbb Z^2$ that approximate, in the scaling limit, well-behaved continuum domains. 
The following definitions are taken from Biskup and Louidor~\\cite{BL2}:\n\n\n\\begin{definition} \nAn admissible domain \nis a bounded open subset of $\\mathbb{R}^2$ \nthat consists of a finite number of connected components\nand whose boundary is composed of a finite number of connected sets each of which has \na positive Euclidean diameter.\n\\end{definition} \n\nWe write $\\mathfrak{D}$ to denote the family of all admissible domains and let $d_{\\infty} (\\cdot, \\cdot)$ denote the $\\ell^{\\infty}$-distance on $\\mathbb{R}^2$.\n\n\\begin{definition}\nAn admissible lattice approximation of $D \\in \\mathfrak{D}$ is a sequence $D_N\\subset\\mathbb{Z}^2$ such that the following holds: There is~$N_0\\in\\mathbb N$ such that for all~$N\\ge N_0$ we have\n\\begin{equation}\n\\label{E:1.8i}\nD_N \\subseteq \\Bigl\\{x \\in \\mathbb{Z}^2 : \nd_{\\infty}\\bigl(\\ffrac{x}{N}, \\mathbb{R}^2 \\smallsetminus D\\bigr) > \\frac{1}{N} \\Bigr\\}\n\\end{equation}\nand, for any~$\\delta>0$ there is also~$N_1\\in\\mathbb N$ such that for all~$N\\ge N_1$,\n\\begin{equation}\nD_N \\supseteq \\bigl\\{x \\in \\mathbb{Z}^2 : \nd_{\\infty} (\\ffrac{x}{N}, \\mathbb{R}^2 \\smallsetminus D) > \\delta \\bigr\\}.\n\\end{equation}\n\\end{definition} \n\nAs shown in \\cite{BL2}, these choices ensure that the discrete harmonic measure on~$D_N$ tends, under the scaling of space by~$N$, weakly to the harmonic measure on~$D$. This yields a precise asymptotic expansion of the associated Green functions; see \\cite[Chapter~1]{B-notes} for a detailed exposition. In particular, we have $G^{D_N}(x,x)=g\\log N+O(1)$ for\n\\begin{equation}\ng:=\\frac1{2\\pi}\n\\end{equation}\nwhenever~$x$ is deep inside~$D_N$. (This is by a factor~$4$ smaller than the corresponding constant in~\\cite{B-notes,BL2}.)\n\nOur random walk will invariably start from the ``boundary vertex''~$\\varrho$; throughout we will thus write~$P^\\varrho$ for the corresponding law of the Markov chain~$X$. (This law depends on~$N$ but we suppress that notationally.) \n\n\n\n\\section{Main results}\n\\noindent\nOur aim in this work is to describe the random walk at times that correspond to a $\\theta$-multiple of the \\myemph{cover time}, for every~$\\theta>0$. Recall that the cover time of a graph is the first time that every vertex of the graph has been visited. Although this is a random quantity, it is quite well concentrated (provided that the maximal hitting time is of smaller order than the expected cover time; see Aldous~\\cite{Aldous}).\n In particular, at the cover time of~$D_N$ the local time at a typical vertex is asymptotic to~$2g(\\log N)^2$. This suggests that we henceforth take~$t$ proportional to $(\\log N)^2$ as $N\\to\\infty$. 
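For the reader who wishes to experiment, the setting of Section~\\ref{sec1.2} is easy to simulate. The following sketch (the side length, time horizon and use of plain Python are illustrative choices only) runs the constant-speed walk on a lattice square with the wired boundary vertex and returns the local-time field $L^{D_N}_t$ from \\eqref{E:LVt}:
\\begin{verbatim}
import random
from collections import defaultdict

def simulate_local_time(n=60, t=10.0, seed=0):
    # Walk on an n-by-n square V with boundary vertex RHO;
    # stop once the local time at RHO reaches t.
    rng = random.Random(seed)
    RHO = None
    perim = [(x, y) for x in range(n) for y in range(n)
             if x in (0, n - 1) or y in (0, n - 1)]
    corners = [(x, y) for (x, y) in perim
               if x in (0, n - 1) and y in (0, n - 1)]
    rho_edges = perim + corners   # corners carry two edges to RHO

    def neighbors(v):
        if v is RHO:              # re-enter via a uniform boundary edge
            return rho_edges
        x, y = v
        return [(x + dx, y + dy)
                if 0 <= x + dx < n and 0 <= y + dy < n else RHO
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

    hold = defaultdict(float)     # total holding time at each vertex
    v = RHO
    while hold[RHO] / len(rho_edges) < t:
        hold[v] += rng.expovariate(1.0)   # Exp(1) holding time
        v = rng.choice(neighbors(v))
    # local time = holding time / pi(v), with pi(v) = 4 on V
    return {u: s / 4.0 for u, s in hold.items() if u is not RHO}
\\end{verbatim}
Thresholding fields generated this way visualizes thick, thin or avoided points of the kind depicted in the figures below.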
\n\n\\subsection{Maximum, minimum and exceptional sets}\nLet us begin by noting the range of values that the local time takes on~$D_N$:\n\n\\begin{theorem}\n\\label{thm-minmax}\nLet~$\\{t_N\\}$ be a positive sequence such that, for some~$\\theta>0$,\n\\begin{equation}\n\\label{E:1.12}\n\\lim_{N\\to\\infty}\\frac{t_N}{(\\log N)^2}=2g\\theta.\n\\end{equation}\nThen for any $D\\in\\mathfrak D$, any admissible sequence~$\\{D_N\\}$ of lattice approximations of~$D$, the following limits hold in $P^{\\varrho}$-probability: \n\\begin{equation}\n\\label{E:max}\n\\frac1{(\\log N)^2}\\,\\max_{x\\in D_N} L^{D_N}_{t_N}(x)\\,\\,\\,\\underset{ N\\to\\infty}\\longrightarrow\\,\\,\\,2 g\\bigl(\\sqrt\\theta+1\\bigr)^2\n\\end{equation}\nand\n\\begin{equation}\n\\label{E:min}\n\\frac1{(\\log N)^2}\\,\\min_{x\\in D_N} L_{t_N}^{D_N}(x)\\,\\,\\,\\underset{ N\\to\\infty}\\longrightarrow\\,\\,\\,2 g\\bigl[(\\sqrt\\theta-1)\\vee0\\,\\bigr]^2.\n\\end{equation}\n\\end{theorem}\n\nThese conclusions have previously been obtained by Abe~\\cite[Corollary~1.3]{A15} for the continuous-time walk on the $N\\times N$ torus.\nAs can be checked from \\eqref{E:min}, the cover time indeed corresponds to $\\theta=1$. Noting that the typical value of the local time at a~$\\theta$-multiple of the cover time is asymptotic to~$2g\\theta(\\log N)^2$, we are naturally led to consider the set of $\\lambda$-\\myemph{thick points},\n\\begin{equation}\n\\label{E:Lplus}\n\\mathcal T_N^{+}(\\theta,\\lambda):=\\Bigl\\{x\\in D_N\\colon L_{t_N}^{D_N}(x)\\ge 2g(\\sqrt\\theta+\\lambda)^2(\\log N)^2\\Bigr\\},\\quad \\lambda\\in(0,1],\n\\end{equation}\nand $\\lambda$-\\myemph{thin points},\n\\begin{equation}\n\\label{E:Lminus}\n\\mathcal T_N^{-}(\\theta,\\lambda):=\\Bigl\\{x\\in D_N\\colon L_{t_N}^{D_N}(x)\\le 2g(\\sqrt\\theta-\\lambda)^2(\\log N)^2\\Bigr\\},\\quad\\lambda\\in(0,\\sqrt\\theta\\wedge1],\n\\end{equation}\nwhere the upper bounds on~$\\lambda$ reflect \\twoeqref{E:max}{E:min}. \nAs a boundary case of $\\mathcal T_N^{-}(\\theta,\\lambda)$, we single out the set of $r$-\\myemph{light points},\n\\begin{equation}\n\\mathcal L_N(\\theta,r):=\\bigl\\{x\\in D_N\\colon L_{t_N}^{D_N}(x)\\le r\\bigr\\},\\quad r\\ge0,\n\\end{equation}\nincluding the special case of the set of \\myemph{avoided points},\n\\begin{equation}\n\\AA_N(\\theta):=\\bigl\\{x\\in D_N\\colon L_{t_N}^{D_N}(x)=0\\bigr\\}.\n\\end{equation}\nBy \\eqref{E:min}, the latter two sets will only be relevant for~$\\theta\\in(0,1]$.\nOur aim is to describe the scaling limit of these sets in the limit as~$N\\to\\infty$.\n\n\\begin{figure}[t]\n\\centerline{\\includegraphics[width=0.4\\textwidth]{.\/figs\/thick.pdf}\n\\includegraphics[width=0.4\\textwidth]{.\/figs\/thin.pdf}\n}\n\\begin{quote}\n\\small \n\\vglue-0.2cm\n\\caption{\n\\label{fig2}\nPlots of the $\\lambda$-thick (left) and $\\lambda$-thin (right) level sets for the same sample of the random walk on a square of side length~$1000$ and parameter choices $\\theta:=10$ and $\\lambda=0.1$.}\n\\normalsize\n\\end{quote}\n\\end{figure}\n\n\n\\subsection{Digression on exceptional sets of DGFF}\nAs noted previously, the second author and O.~Louidor~\\cite{BL4} have addressed similar questions in the context of the DGFF. There the maximum of~$h^{D_N}$ is asymptotic to~$2\\sqrt g\\log N$ and so the set of $\\lambda$-thick points is naturally defined as the set where the field exceeds $2\\lambda\\sqrt g\\log N$. 
It was noted that taking a limit of these sets directly does not lead to interesting conclusions as, after scaling space by~$N$, they become increasingly dense in~$D$. A proper way to capture their structure is via the random measure\n\\begin{equation}\n\\label{E:etaDGFF}\n\\eta_N^D:=\\frac1{K_N}\\sum_{x\\in D_N}\\delta_{x\/N}\\otimes\\delta_{h^{D_N}_x-a_N},\n\\end{equation}\nwhere~$\\{a_N\\}$ is a centering sequence with the asymptotic $a_N\\sim 2\\lambda\\sqrt{g}\\log N$ and\n\\begin{equation}\n\\label{E:1.19e}\nK_N:=\\frac{N^2}{\\sqrt{\\log N}}\\,\\text{\\rm e}\\mkern0.7mu^{-\\frac{a_N^2}{2g\\log N}}.\n\\end{equation}\nIn \\cite[Theorem~2.1]{BL4} it was then shown that, for each~$\\lambda\\in(0,1)$ there is~$\\fraktura c(\\lambda)>0$ such that, in the sense of vague convergence of measures on~$\\overline D\\times(\\mathbb R\\cup\\{\\infty\\})$,\n\\begin{equation}\n\\label{E:1.19}\n\\eta_N^D\\,\\,\\underset{N\\to\\infty}{\\,\\overset{\\text{\\rm law}}\\longrightarrow\\,}\\,\\,\\fraktura c(\\lambda)\\,Z^D_\\lambda(\\text{\\rm d}\\mkern0.5mu x)\\otimes\\text{\\rm e}\\mkern0.7mu^{-\\alpha\\lambda h}\\text{\\rm d}\\mkern0.5mu h,\n\\end{equation}\nwhere $\\alpha:=2\/\\sqrt g$ and~$Z_\\lambda^D$ is a random measure in~$D$ called the Liouville Quantum Gravity (LQG) at parameter~$\\lambda$-times critical. The constant~$\\fraktura c(\\lambda)$, given explicitly in terms of~$\\lambda$ and the constants in the asymptotic expansion of the potential kernel on~$\\mathbb Z^2$, allows us to take~$Z^D_\\lambda$ to be normalized so that, for each Borel set~$A\\subseteq D$,\n\\begin{equation}\n\\label{E:1.19a}\n\\mathbb E Z_\\lambda^D(A)=\\int_A r^{\\,D}(x)^{2\\lambda^2}\\text{\\rm d}\\mkern0.5mu x,\n\\end{equation} \nwhere~$r^D$ is an explicit function supported on~$D$ that, for~$D$ simply connected, is simply the conformal radius; see~\\cite[(2.10)]{BL4}. \n\nA construction of the LQG measures goes back to Kahane's Multiplicative Chaos theory~\\cite{Kahane}; they were recently reintroduced and further studied by Duplantier and Sheffield~\\cite{DS}. Shamov~\\cite{Shamov} characterized the LQG measures for all~$\\lambda\\in(0,1)$ by their expected value and the behavior under Cameron-Martin shifts of the underlying continuum Gaussian Free Field.\n\n\n\\subsection{Thick and thin points}\n\\label{sec2.3}\\noindent\nInspired by the above developments, we will thus encode the level sets $\\mathcal T_N^{\\pm}(\\theta,\\lambda)$ via the random measures\n\\begin{equation}\n\\label{E:zetaND}\n\\zeta^D_N:=\\frac1{W_N}\\sum_{x\\in D_N}\\delta_{x\/N}\\otimes\\delta_{(L_{t_N}^{D_N}(x)-a_N)\/\\log N}\\,,\n\\end{equation}\nwhere~$\\{a_N\\}$ is a centering sequence and $\\{t_N\\}$ is a sequence of times, both growing proportionally to~$(\\log N)^2$, and\n\\begin{equation}\n\\label{E:WN}\nW_N:=\\frac{N^2}{\\sqrt{\\log N}}\\text{\\rm e}\\mkern0.7mu^{-\\frac{(\\sqrt{2t_N}-\\sqrt{2a_N})^2}{2g\\log N}}.\n\\end{equation}\nThe normalization by~$\\log N$ in the second delta-mass in \\eqref{E:zetaND} indicates that we are tracking variations of the local time of scale~$\\log N$. 
(This is also the order of the variation of the local time between nearest neighbors.)\nWe then get: \n\n\\begin{theorem}[Thick points]\n\\label{thm-thick}\nSuppose $\\{t_N\\}$ and $\\{a_N\\}$ are positive sequences such that, for some $\\theta>0$ and some $\\lambda\\in(0,1)$,\n\\begin{equation}\n\\label{E:1.20}\n\\lim_{N\\to\\infty}\\frac{t_N}{(\\log N)^2}=2g\\theta\\quad\\text{\\rm and}\\quad\\lim_{N\\to\\infty}\\frac{a_N}{(\\log N)^2}=2g(\\sqrt\\theta+\\lambda)^2.\n\\end{equation}\nFor any~$D\\in\\mathfrak D$, any sequence $\\{D_N\\}$ of admissible approximations of~$D$, and for~$X$ sampled from~$P^{\\varrho}$, in the sense of vague convergence of measures on $\\overline D\\times(\\mathbb R\\cup\\{\\infty\\})$,\n\\begin{equation}\n\\label{E:1.21dis}\n\\zeta^D_N\\,\\,\\,\\underset{N\\to\\infty}{\\,\\overset{\\text{\\rm law}}\\longrightarrow\\,}\\,\\,\\, \\frac{\\theta^{1\/4}}{2\\sqrt{g}\\,(\\sqrt\\theta+\\lambda)^{3\/2}}\\,\\,\\fraktura c(\\lambda) Z_\\lambda^{D}(\\text{\\rm d}\\mkern0.5mu x)\\otimes\\text{\\rm e}\\mkern0.7mu^{-\\alpha(\\theta,\\lambda) h}\\text{\\rm d}\\mkern0.5mu h, \n\\end{equation}\nwhere $\\alpha(\\theta,\\lambda):=\\frac1g\\frac{\\lambda}{\\sqrt\\theta+\\lambda}$\nand $\\fraktura c(\\lambda)$ is as in \\eqref{E:1.19}.\n\\end{theorem}\n\nFor the thin points, we similarly obtain:\n\n\\begin{theorem}[Thin points]\n\\label{thm-thin}\nSuppose $\\{t_N\\}$ and $\\{a_N\\}$ are positive sequences such that, for some $\\theta>0$ and some $\\lambda\\in(0,1\\wedge\\sqrt\\theta)$,\n\\begin{equation}\n\\label{E:1.22}\n\\lim_{N\\to\\infty}\\frac{t_N}{(\\log N)^2}=2g\\theta\\quad\\text{\\rm and}\\quad\\lim_{N\\to\\infty}\\frac{a_N}{(\\log N)^2}=2g(\\sqrt\\theta-\\lambda)^2.\n\\end{equation}\nFor any~$D\\in\\mathfrak D$, any sequence $\\{D_N\\}$ of admissible approximations of~$D$, and for~$X$ sampled from~$P^{\\varrho}$, in the sense of vague convergence of measures on $\\overline D\\times(\\mathbb R\\cup\\{-\\infty\\})$, \n\\begin{equation}\n\\label{E:1.23dis}\n\\zeta^D_N\\,\\,\\,\\underset{N\\to\\infty}{\\,\\overset{\\text{\\rm law}}\\longrightarrow\\,}\\,\\,\\, \\frac{\\theta^{1\/4}}{2\\sqrt{g}\\,(\\sqrt\\theta-\\lambda)^{3\/2}}\\,\\fraktura c(\\lambda) Z_\\lambda^{D}(\\text{\\rm d}\\mkern0.5mu x)\\otimes\\text{\\rm e}\\mkern0.7mu^{+\\tilde\\alpha(\\theta,\\lambda) h}\\text{\\rm d}\\mkern0.5mu h,\n\\end{equation}\nwhere~$\\tilde\\alpha(\\theta,\\lambda):=\\frac1g\\frac{\\lambda}{\\sqrt\\theta-\\lambda}$ and $\\fraktura c(\\lambda)$ is as in \\eqref{E:1.19}.\n\\end{theorem}\n\nNote that, under \\eqref{E:1.20} or \\eqref{E:1.22}, the above implies\n\\begin{equation}\n|\\mathcal T^\\pm_N(\\theta,\\lambda)|=N^{2(1-\\lambda^2)+o(1)},\n\\end{equation}\nwhere~$o(1)\\to0$ in probability.\nThis conclusion has previously been obtained by the first author in~\\cite[Theorem~1.2]{A15}, albeit for random walks on tori and under a different parametrization of the level sets. The present theorems tell us considerably more. Indeed, they imply that points picked at random from $\\mathcal T^\\pm_N(\\theta,\\lambda)$ have asymptotically the same statistics as those picked from the set where the DGFF is above the $\\lambda$-multiple of its absolute maximum. \n\nThe connection with the DGFF becomes nearly perfect if instead of $\\log N$ we normalize the second coordinate of~$\\zeta^D_N$ by $\\sqrt{2a_N}$. In that parametrization, the resulting measure \\myemph{coincides} (up to reversal of the second coordinate for the thin points) with that for the DGFF up to an overall normalization constant. 
This demonstrates \\myemph{universality} of the Gaussian Free Field in these extremal problems. \n\n\\subsection{Light and avoided points}\nThe level sets \\twoeqref{E:Lplus}{E:Lminus} are naturally nested which indicates that, for~$\\theta\\in(0,1)$, also the set of $r$-light points~$\\mathcal L_N(\\theta,r)$ and avoided points $\\AA_N(\\theta)$ should correspond to an intermediate level set of the DGFF, this time with~$\\lambda:=\\sqrt\\theta$. As the next theorem shows, this is true albeit under a different normalization:\n\n\\begin{theorem}[Light points]\n\\label{thm-light}\nSuppose $\\{t_N\\}$ is a positive sequence such that\n\\begin{equation}\n\\label{E:1.29}\n\\theta:=\\frac1{2g}\\lim_{N\\to\\infty}\\frac{t_N}{(\\log N)^2}\\in (0,1).\n\\end{equation}\nFor any~$D\\in\\mathfrak D$, any sequence $\\{D_N\\}$ of admissible approximations of~$D$, and for~$X$ sampled from~$P^{\\varrho}$, consider the measure\n\\begin{equation}\n\\label{E:varthetaND}\n\\vartheta^D_N:=\\frac1{\\widehat W_N }\\sum_{x\\in D_N}\\delta_{x\/N}\\otimes\\delta_{L_{t_N}^{D_N}(x)},\n\\end{equation}\nwhere\n\\begin{equation}\n\\label{E:1.31}\n\\widehat W_N :=N^2\\text{\\rm e}\\mkern0.7mu^{-\\frac{t_N}{g\\log N}}.\n\\end{equation}\nThen, in the sense of vague convergence of measures on~$\\overline D\\times[0,\\infty)$,\n\\begin{equation}\n\\label{E:2.22ii}\n\\vartheta^D_N\\,\\,\\,\\underset{N\\to\\infty}{\\,\\overset{\\text{\\rm law}}\\longrightarrow\\,}\\,\\,\\, \\sqrt{2\\pi g}\\,\\fraktura c(\\sqrt\\theta)\\,\\, Z_{\\sqrt{\\theta}\\,}^{D}(\\text{\\rm d}\\mkern0.5mu x)\\otimes\\mu(\\text{\\rm d}\\mkern0.5mu h),\n\\end{equation}\nwhere $\\fraktura c(\\lambda)$ is as in \\eqref{E:1.19} and~$\\mu$ is the Borel measure \n\\begin{equation}\n\\label{E:1.33}\n\\mu(\\text{\\rm d}\\mkern0.5mu h):=\\delta_0(\\text{\\rm d}\\mkern0.5mu h)+\\biggl(\\,\\sum_{n=0}^\\infty\\frac1{n!(n+1)!}\\Bigl(\\frac{\\alpha^2\\theta}2\\Bigr)^{n+1} h^n\\biggr)1_{(0,\\infty)}(h)\\,\\text{\\rm d}\\mkern0.5mu h.\n\\end{equation}\n\\end{theorem}\n\nNote that the density of the continuous part of the measure in \\eqref{E:1.33} is uniformly positive on~$[0,\\infty)$ and grows exponentially in~$\\sqrt h$. Naturally, the atom at zero has the interpretation of the contribution of the avoided points and so we also get:\n\n\\begin{theorem}[Avoided points]\n\\label{thm-avoid}\nSuppose $\\{t_N\\}$ is a sequence such that \\eqref{E:1.29} holds. For any $D\\in\\mathfrak D$, any sequence $\\{D_N\\}$ of admissible approximations of~$D$, and for~$X$ sampled from~$P^{\\varrho}$, consider the measure\n\\begin{equation}\n\\label{E:kappaND}\n\\kappa^D_N:=\\frac1{\\widehat W_N }\\sum_{x\\in D_N}1_{\\{L_{t_N}^{D_N}(x)=0\\}}\\,\\delta_{x\/N},\n\\end{equation}\nwhere~$\\widehat W_N $ is as in \\eqref{E:1.31}. Then, in the sense of vague convergence of measures on~$\\overline D$,\n\\begin{equation}\n\\label{E:kappa-lim}\n\\kappa^D_N\\,\\,\\,\\underset{N\\to\\infty}{\\,\\overset{\\text{\\rm law}}\\longrightarrow\\,}\\,\\,\\, \\sqrt{2\\pi g}\\,\\fraktura c(\\sqrt\\theta)\\,\\, Z_{\\sqrt{\\theta}\\,}^{D}(\\text{\\rm d}\\mkern0.5mu x),\n\\end{equation}\nwhere $\\fraktura c(\\lambda)$ is again as in \\eqref{E:1.19}.\n\\end{theorem}\n\nWe conclude that, at times asymptotic to a $\\theta$-multiple of the cover time with~$\\theta<1$, the total number of avoided points is~proportional to~$\\widehat W_N =N^{2(1-\\theta)+o(1)}$. 
Moreover, when normalized by~$\\widehat W_N $, it tends in law to a constant times the total mass of~$Z^D_{\\sqrt\\theta}$.\n\n\n\\begin{figure}[t]\n\\centerline{\\includegraphics[width=0.4\\textwidth]{.\/figs\/avoided-points-1.pdf}\n\\includegraphics[width=0.4\\textwidth]{.\/figs\/avoided-points-3.pdf}\n}\n\\begin{quote}\n\\small \n\\vglue-0.2cm\n\\caption{\n\\label{fig2b}\nThe sets of avoided points for a sample of the random walk on a square of side-length $N=2000$ observed at times corresponding to $\\theta$-multiple of the cover time for $\\theta:=0.1$ (left) and $\\theta=0.3$ (right).}\n\\normalsize\n\\end{quote}\n\\end{figure}\n\n\n\n\\subsection{Local structure: thick and thin points}\nSimilarly to the case of the DGFF treated in \\cite{BL4}, the convergence of the point measures associated with the exceptional sets can be extended to include information about the local structure of the exceptional sets under consideration. \nFor the case of thick and thin points, this structure is captured by the measure on Borel subsets of~$D\\times\\mathbb R\\times\\mathbb R^{\\mathbb Z^2}$ (under the product topology) defined by\n\\begin{equation}\n\\label{E:zeta-local}\n\\widehat \\zeta^D_N :=\n\\frac{1}{W_N} \\sum_{x \\in D_N} \\delta_{x\/N} \\otimes \n\\delta_{(L_{t_N}^{D_N} (x) - a_N)\/\\log N} \n\\otimes \\delta_{\\{(L_{t_N}^{D_N} (x) - L_{t_N}^{D_N} (x+z))\/\\log N\\colon z \\in \\mathbb{Z}^2 \\}}. \n\\end{equation}\nIn order to express the limit measure, we need to introduce the DGFF $\\phi$ on $\\Bbb{Z}^2$ pinned to zero at the origin.\nThis is a centered Gaussian field on $\\Bbb{Z}^2$ with law~$\\nu^0$ and covariance\n\\begin{equation}\n\\label{E:phi-cov}\nE_{\\nu^0} (\\phi_x \\phi_y) = \\fraktura a(x) + \\fraktura a(y) - \\fraktura a (x-y),\n\\end{equation}\nwhere $\\fraktura a \\colon \\Bbb{Z}^2 \\to [0, \\infty)$ is the potential kernel,\ni.e., the unique function with $\\fraktura a (0) = 0$ which is discrete harmonic \non $\\Bbb{Z}^2 \\smallsetminus \\{0 \\}$ and satisfies $\\fraktura a (x) = g \\log |x| + O(1)$\nas $|x| \\to \\infty$. For the thick points, we then get:\n\n\\begin{theorem}[Local structure of the thick points]\n\\label{thm-thick-local}\nUnder the conditions of Theorem \\ref{thm-thick}\nand denoting by $\\zeta^D$ the limit measure on the right of (\\ref{E:1.21dis}), \n\\begin{equation}\n\\widehat \\zeta^D_N\\,\\,\\,\\underset{N\\to\\infty}{\\,\\overset{\\text{\\rm law}}\\longrightarrow\\,}\\,\\,\\, \\zeta^D \\otimes \\nu_{\\theta, \\lambda},\n\\end{equation}\nwhere $\\nu_{\\theta, \\lambda}$ is the law of $2\\sqrt{g} (\\sqrt{\\theta} + \\lambda) \n(\\phi + \\alpha\\lambda \\fraktura a)$ under $\\nu^0$.\n\\end{theorem}\n\nFor the thin points, we in turn get:\n\n\\begin{theorem}[Local structure of the thin points]\n\\label{thm-thin-local}\nUnder the condition of Theorem \\ref{thm-thin} and\ndenoting by $\\zeta^D$ the limit measure on the right of (\\ref{E:1.23dis}),\n\\begin{equation}\n\\widehat \\zeta^D_N\\,\\,\\,\\underset{N\\to\\infty}{\\,\\overset{\\text{\\rm law}}\\longrightarrow\\,}\\,\\,\\, \\zeta^D \\otimes \\widetilde \\nu_{\\theta, \\lambda},\n\\end{equation}\nwhere $\\widetilde \\nu_{\\theta, \\lambda}$ is the law of $2\\sqrt{g} (\\sqrt{\\theta} - \\lambda) \n(\\phi + \\alpha \\lambda\\fraktura a)$ under $\\nu^0$.\n\\end{theorem} \n\nAs shown in \\cite{BL4}, the field $\\phi+\\lambda\\alpha\\fraktura a$ describes the local structure of the DGFF near the points where it takes values (close to) $2\\sqrt g\\lambda\\log N$. 
As before, the prefactor $2\\sqrt g(\\sqrt\\theta\\pm\\lambda)$ disappears when instead of~$\\log N$ we normalize the third coordinate of~$\\widehat\\zeta^D_N$ by~$\\sqrt{2a_N}$. The above results thus extend the universality of the DGFF to the local structure as well. \n\n\\subsection{Local structure: avoided points}\nThe local structure of the local time near the avoided points will be radically different. Indeed, in the vicinity of an avoided point, the local time will remain of order unity and so a proper way to extend the measure~$\\kappa_N^D$ is\n\\begin{equation}\n\\widehat \\kappa_N^D :=\n\\frac{1}{\\widehat W_N}\n\\sum_{x \\in D_N}\n1_{\\{L_{t_N}^{D_N} (x) = 0 \\}} \\delta_{x\/N} \\otimes \\delta_{\\{L_{t_N}^{D_N} (x+z) \\colon z \\in \\Bbb{Z}^2 \\}},\n\\end{equation}\nwhich is now a Borel measure on $D\\times[0,\\infty)^{\\mathbb Z^2}$. Moreover, near an avoided point~$x$, the walk itself should behave as if conditioned not to hit~$x$. This naturally suggests that its trajectories will look like the two-dimensional \\myemph{random interlacements} introduced recently by Comets, Popov and Vachkovskaia~\\cite{CPV} and Rodriguez~\\cite{R18}, building on earlier work of Sznitman~\\cite{S12} and Teixeira~\\cite{T09} in transient dimensions. In order to state our limit theorem, we need to review some of the main conclusions from \\cite{CPV,R18}.\n\n\\begin{figure}[t]\n\\centerline{\\includegraphics[width = 3.2in]{.\/figs\/occupation-time-field-0.pdf}\n\\hglue-1.7cm\\includegraphics[width = 3.2in]{.\/figs\/occupation-time-field-24.pdf}\n}\n\\begin{quote}\n\\small \n\\vglue-0.2cm\n\\caption{\n\\label{fig3b}\nSamples of the occupation-time field near two randomly-selected avoided points of a random walk run for a $0.2$-multiple of the cover time in a square of side-length $N=2000$. Only the square of side-length $81$ centered at the chosen avoided point is depicted.}\n\\normalsize\n\\end{quote}\n\\end{figure}\n\n\nFirst we need some notation. Let~$W$ be the set of all doubly-infinite\ntransient random-walk trajectories on~$\\mathbb Z^2$; namely, piece-wise constant right-continuous maps $X\\colon\\mathbb R\\to\\mathbb Z^2$ that make jumps only between nearest neighbors and spend only finite time (as measured by the Lebesgue measure) in every finite subset of~$\\mathbb Z^2$. We endow~$W$ with the $\\sigma$-field~$\\mathcal{W}$ generated by finite-dimensional coordinate projections, $\\mathcal{W}:=\\sigma(X_t\\colon t\\in\\mathbb R)$.\nFor $A \\subset \\mathbb{Z}^2$ finite, we write $W_A$ for the subset of~$W$ of the trajectories that visit~$A$.\n\nNext we will put a measure $Q_A^{0, \\mathbb Z^2}$ on~$W_A$ as follows. Let $\\mathfrak h_A$ denote the harmonic measure of~$A$ from infinity for the simple random walk on $\\mathbb{Z}^2$. Let~$\\widehat{P}^x$ denote the law of a constant-speed continuous-time random walk on $\\mathbb{Z}^2 \\smallsetminus \\{0\\}$ started at~$x$ with conductance $\\fraktura a (y) \\fraktura a (z)$ at nearest-neighbor edges~$(y,z)$ in~$\\mathbb Z^2$. 
\nFor all cylindrical events $E^+, E^- \\in \\sigma(X_t\\colon t\\ge0)$ and any $x \\in \\mathbb Z^2$, we then set\n\\begin{multline}\n\\qquad\nQ_A^{0, \\mathbb Z^2} \\Bigl((X_{-t})_{t \\geq 0} \\in E^-,\\,X_0 = x,\\,(X_t)_{t \\geq 0} \\in E^+\\Bigr)\n\\\\\n:= 4\\,\\fraktura a (x) \\mathfrak h_A (x) \\widehat{P}^x(E^+) \\widehat{P}^x (E^-\\,|\\,H_A = \\infty).\n\\qquad\n\\end{multline}\nNote that, since cylindrical events are unable to distinguish left and right path continuity, writing $(X_{-t})_{t \\geq 0} \\in E^-$ is meaningful. The transience of~$\\widehat{P}^x$ implies $\\widehat{P}^x(H_A=\\infty)>0$ and so the conditioning on the right is non-singular. \n\nThe measure $Q_A^{0, \\mathbb Z^2}$ represents the (un-normalized) law of doubly-infinite trajectories of the simple random walk that hit~$A$ (recall that $\\mathfrak{h}_A(x)=0$ unless $x\\in A$) but avoid~$0$ for all times (by Doob's $h$-transform, $\\widehat{P}^x$ is a law of the simple random walk on~$\\mathbb Z^2$ conditioned to avoid~$0$). As the main results of \\cite{CPV,R18} show, the normalization is chosen such that these measures are consistent, albeit only after factoring out time shifts. To state this precisely, we need some more notation. Regarding two trajectories $w,w'\\in W$ as equivalent if they are time shifts of each other --- i.e., if there is $t\\in\\mathbb R$ such that $w(s)=w'(s+t)$ for all~$s\\in\\mathbb R$ --- we use~$W^{\\star}$ to denote the quotient space of~$W$ induced by this equivalence relation. Writing $\\Pi_\\star \\colon W \\to W^{\\star}$ for the canonical projection, the induced $\\sigma$-field on~$W^{\\star}$ is given by $\\mathcal{W}^\\star:=\\{E\\subseteq W^\\star\\colon \\Pi_\\star^{-1}(E)\\in \\mathcal W\\}$. Note that $W_A^{\\star} := \\Pi_\\star (W_A)\\in\\mathcal{W}^\\star$. \n\nTheorems~3.3 and~4.2 of~\\cite{R18} (building on~\\cite[Theorem~2.1]{T09}, see also \\cite[page 133]{CPV}) then ensure the existence of a (unique) measure on~$W^{\\star}$ such that for any finite $A \\subset \\mathbb{Z}^2$ and any $E\\in\\mathcal W^\\star$,\n\\begin{equation}\n\\nu^{0, \\mathbb Z^2}(E \\cap W_A^\\star) = Q_A^{0, \\mathbb Z^2}\\circ\\Pi_\\star^{-1}(E\\cap W_A^\\star).\n\\end{equation}\nSince $Q_A^{0, \\mathbb Z^2}$ is a finite measure and the set of finite $A\\subset\\mathbb Z^2$ is countable, $\\nu^{0,\\mathbb Z^2}$ is $\\sigma$-finite. We may thus consider a Poisson point process on~$W^\\star\\times[0,\\infty)$ with intensity $\\nu^{0, \\mathbb Z^2}\\otimes{\\rm Leb}$. Given a sample~$\\omega$ from this process, which we may write as $\\omega = \\sum_{i\\in\\mathbb N} \\delta_{(w_i^{\\star}, u_i)}$, and any $u\\in[0,\\infty)$, we define the \\myemph{occupation time field at level~$u$} by\n\\begin{equation}\n\\label{E:occ-time-field}\nL_u(x) := \\sum_{i \\in \\mathbb N} 1_{\\{u_i \\leq u\\}}\\,\\frac{1}{4} \\int_\\mathbb R \\text{\\rm d}\\mkern0.5mu t\\, 1_{\\{w_i (t) = x \\}},\n\\quad x \\in \\mathbb{Z}^2,\n\\end{equation}\nwhere $w_i \\in W$ is any representative of the class of trajectories marked by~$w_i^\\star$; i.e., $\\Pi_\\star (w_i) = w_i^{\\star}$. (The integral does not depend on the choice of the representative.) 
We are now ready to state the convergence of the measures $\\widehat \\kappa_N^D$.\n\n\n\\begin{theorem}[Local structure of the avoided points]\n\\label{thm-avoid-local}\nUnder the conditions of Theorem \\ref{thm-avoid} and for~$\\kappa^D$ denoting the measure on the right of \\eqref{E:kappa-lim},\n\\begin{equation}\n\\widehat \\kappa^D_N\\,\\,\\,\\underset{N\\to\\infty}{\\,\\overset{\\text{\\rm law}}\\longrightarrow\\,}\\,\\,\\, \n\\kappa^D \\otimes \\nu_{\\theta}^{\\text{\\rm RI}},\n\\end{equation}\nwhere $\\nu_{\\theta}^{\\text{\\rm RI}}$ is the law of the occupation time field \n$(L_u(x))_{x \\in \\Bbb{Z}^2}$ at $u:=\\pi\\theta$.\n\\end{theorem}\n\nWe expect a similar result to hold for the light points as well but with the random interlacements replaced by a suitably modified version that allows the walks to hit the origin while accumulating only a given (order-unity) amount of local time there.\n\n\n\n\\section{Main ideas, extensions and outline}\n\\label{sec2}\\nopagebreak\\noindent\nLet us proceed with a brief overview of the main ideas of the proof and then a list of possible extensions and refinements. We also outline the remainder of this paper.\n\n\\subsection{Main ideas}\n\\label{sec2.1}\\noindent\nAs already noted, the key to all developments in this paper is the connection between the local time $L^V_t$ and the associated DGFF~$h^V$.\nOur initial take on this connection was through the fact that the DGFF represents the fluctuations of~$L^V_t$ at large times via\n\\begin{equation}\n\\label{E:GFFlimit}\n\\frac{L_t^V(\\cdot)-t}{\\sqrt{2t}}\\,\\,\\,\\underset{t\\to\\infty}{\\,\\overset{\\text{\\rm law}}\\longrightarrow\\,}\\,\\,\\, h^V.\n\\end{equation}\n(This also dictated the parametrization in the earlier work on this problem, e.g.,~\\cite{A15}.) However, as noted at the end of Subsection~\\ref{sec2.3}, for the thick and thin points, the effective~$t$ in the correspondence \\eqref{E:GFFlimit} of the local time with the DGFF is~$a_N$, rather than~$t_N$. In any case, approximating the local time by the DGFF becomes accurate only \\myemph{beyond} the times of the order of the cover time. \n\nWe thus base our proofs on a deeper version of this connection, known under the name \\myemph{Second Ray-Knight Theorem} after Ray~\\cite{Ray} and Knight~\\cite{Knight} or \\myemph{Dynkin isomorphism} after Dynkin~\\cite{D83}, although the statement we use is due to Eisenbaum, Kaspi, Marcus, Rosen and Shi~\\cite{EKMRS} (with an interesting new proof by Sabot and Tarres~\\cite{Sabot-Tarres}): \n\n\\begin{theorem}[Dynkin isomorphism]\n\\label{thm-Dynkin}\nFor each~$t>0$ there exists a coupling of $L^V_t$ (sampled under~$P^\\varrho$) and two copies of the DGFF $h^V$ and~$\\tilde h^V$ such that\n\\begin{equation}\n\\label{E:Dynkin1}\nh^V\\text{\\rm\\ and } L^V_t \\text{\\rm\\ are independent}\n\\end{equation}\nand\n\\begin{equation}\n\\label{E:Dynkin2}\nL_t^V(u)+\\frac12(h_u^V)^2 = \\frac12\\bigl(\\tilde h_u^V+\\sqrt{2t}\\bigr)^2, \\quad u\\in V.\n\\end{equation}\n\\end{theorem}\n\n\\noindent\nThis is usually stated as a distributional identity; the coupling version is then a result of abstract-nonsense theorems in probability (see Zhai~\\cite[Section~5.4]{Zhai}). 
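As a quick sanity check of the normalizations (a side computation, not needed later), taking expectations in \\eqref{E:Dynkin2} and using that $\\mathbb E\\bigl[(h_u^V)^2\\bigr]=\\mathbb E\\bigl[(\\tilde h_u^V)^2\\bigr]=G^V(u,u)$ while $\\mathbb E\\,\\tilde h^V_u=0$ yields
\\begin{equation}
E^\\varrho L_t^V(u)=\\frac12\\bigl(G^V(u,u)+2t\\bigr)-\\frac12 G^V(u,u)=t,\\qquad u\\in V,
\\end{equation}
in agreement with the interpretation of~$t$ as the leading-order value of~$L^V_t$ at a typical vertex; see \\eqref{E:LVt}.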
\n\nThe identity \\eqref{E:Dynkin2} suggests a natural idea:\nIf we could simply disregard the DGFF on the left-hand side, the relation would tie the level set corresponding to~$L^{D_N}_{t_N}\\approx a_N$ to the level sets of the DGFF where\n\\begin{equation}\n\\text{either}\\quad \\tilde h^{D_N}\\approx\\sqrt{2a_N}-\\sqrt{2t_N}\\quad\\text{or}\\quad \\tilde h^{D_N}\\approx-\\sqrt{2a_N}-\\sqrt{2t_N}.\n\\end{equation}\nFor~$a_N\\to\\infty$, the second level set lies further away from the mean of~$\\tilde h^{D_N}$ than the first and its contribution can therefore be disregarded. (This is true for the thick and thin points; for the light and avoided points both levels play a similar role.) One could then simply hope to plug into the existing result \\eqref{E:1.19}. \n\nUnfortunately, since $\\text{\\rm Var}(h^{D_N}_x)$ is of order~$\\log N$, the square of the DGFF on the left of \\eqref{E:Dynkin2} is typically of the size of the anticipated fluctuations of~$L^{D_N}_{t_N}$ and so it definitely affects the limiting behavior of the whole quantity. The main technical challenge of the present paper is thus to understand the contribution of this term precisely. A key observation that makes this possible is that even for~$x\\in D_N$ where $L^{D_N}_{t_N}(x)+\\frac12(h^{D_N}_x)^2$ takes exceptional values, the DGFF $h^{D_N}_x$ remains typical (and $L^{D_N}_{t_N}(x)$ is thus dominant). This requires proving fairly sharp single-site tail estimates for the local time and combining them with the corresponding tail bounds for the DGFF. \n\nOnce that is done, we include the field~$h^{D_N}$, properly scaled, as a third ``coordinate'' of the point process and study weak subsequential limits of the resulting measures. For instance, for the thick and thin points this concerns the measure\n\\begin{equation}\n\\label{E:2.5i}\n\\frac1{W_N}\\sum_{x\\in D_N}\\delta_{x\/N}\\otimes\\delta_{(L^{D_N}_{t_N}(x)-a_N)\/\\log N}\\otimes\\delta_{h^{D_N}_x\/\\sqrt{\\log N}}.\n\\end{equation}\nHere the key is to show that the DGFF part acts, in the limit, as an explicit \\myemph{deterministic} measure. In particular, for the thick and thin points this means that if~$\\zeta^D_N$ converges to some~$\\zeta^D$ along a subsequence of~$N$'s, the measure in \\eqref{E:2.5i} converges to $\\zeta^D\\otimes\\mathfrak g$ where~$\\mathfrak g$ is the normal law~$\\mathcal N(0,g)$; see Lemma~\\ref{lemma-add-field}.\n\nDenoting by~$\\ell$ the second variable and by~$h$ the third variable in \\eqref{E:2.5i}, the Dynkin isomorphism now tells us that the ``law'' of $\\ell+\\frac{h^2}2$ under any weak subsequential limit of the measures in \\eqref{E:2.5i} is the same as the limit ``law'' of the DGFF level set at scale~$\\sqrt{2a_N}-\\sqrt{2t_N}$ (for the thick points) which we know from \\eqref{E:1.19}. This produces a convolution-type identity for subsequential limits of the local-time point process. Some technical work then shows that this identity has a unique solution which can be identified explicitly in all cases of interest.\n\n\nOur control of the local structure of the exceptional points will also rely on the above isomorphism theorem. For the thick and thin points, we will then simply plug into Theorem~2.1 of~\\cite{BL4} that describes the local structure of intermediate level sets of the DGFF. 
For the avoided points, we will also need the \\myemph{Pinned Isomorphism Theorem} of Rodriguez~\\cite[Theorem~5.5]{R18} that links the random-interlacement occupation-time field $(L_u(x))_{x\\in\\mathbb Z^2}$ introduced in \\eqref{E:occ-time-field} to the pinned DGFF~$\\phi$ defined via \\eqref{E:phi-cov} as follows:\n\n\\begin{theorem}[Pinned Isomorphism Theorem]\n\\label{thm-pinned-iso}\nLet~$u>0$ and suppose that $(L_u(x))_{x\\in\\mathbb Z^2}$ with law $\\nu^{\\text{\\rm RI}}_{u\/\\pi}$ is independent of~$\\{\\phi_x\\colon x\\in\\mathbb Z^2\\}$ with law~$\\nu^0$. Then\n\\begin{equation}\n\\label{E:3.6uiw}\nL_u+\\frac12\\phi^2\\,\\,\\overset{\\text{\\rm law}}=\\,\\frac12\\bigl(\\phi+ 2 \\sqrt{2u}\\,\\fraktura a\\bigr)^2,\n\\end{equation}\nwhere~$\\fraktura a$ is the potential kernel. (The extra factor of~2 compared to \\cite[Theorem~5.5]{R18} is due to different normalizations of the local time, the pinned field and the potential kernel.) \n\\end{theorem}\n\nIt is exactly the need for a generalization of this theorem that blocks us from extending control of the local structure to the light points. Indeed, we expect that, for the light points, the associated process is still that of random interlacements but with the local time at the origin fixed to a given number. Developing the theory of this process explicitly goes beyond the scope of the present paper.\n\n\n\n\\subsection{Extensions and refinements}\n We see a number of possible ways the existing conclusions may be refined, so let us discuss these in some more detail.\n\n\\smallskip\\noindent\n\\textsl{Other ``boundary'' conditions: } Perhaps the most significant deficiency of our setting is the somewhat unnatural mechanism by which the walk returns to~$D_N$ after each exit. Perhaps contrary to the intuition one might have, this does not lead to the local time exploding near the boundary; see Fig.~\\ref{fig3} or the fact that~$Z^D_\\lambda$ puts no mass on~$\\partial D$. The main reason for using the specific setting worked out here is that it allows us to seamlessly plug in the existing results from~\\cite{BL4} on the ``intermediate'' level sets of the~DGFF. The natural alternatives are\n\\settowidth{\\leftmargini}{(11111)}\n\\begin{enumerate}\n\\item[(1)] running the walk on an $N\\times N$ torus, or\n\\item[(2)]\nrunning the walk as a simple random walk on all of~$\\mathbb Z^2$ but only recording the local time spent inside~$D_N$.\n\\end{enumerate}\nThe latter setting is perhaps most natural; unfortunately, it does not fit into the scheme where the connection with a DGFF can naturally be pulled out. The former setting in turn requires developing level-set analysis of a torus DGFF pinned at one point.\n\n\\begin{figure}[t]\n\\centerline{\\includegraphics[height = 2.4in]{.\/figs\/walk-3.pdf}\n\\includegraphics[width = 2.8in]{.\/figs\/local-time-3.pdf}\n}\n\\begin{quote}\n\\small \n\\vglue-0.2cm\n\\caption{\n\\label{fig3}\nLeft: Plot of the trajectory of the random walk on a $200\\times200$ square run for a $0.3$-multiple of the cover time. The time runs in the vertical direction. Right: The corresponding local time profile. 
Note that while short excursions near the boundary are numerous, most of the contribution to the local time profile comes from the excursions that reach ``deep'' into the domain.}\n\\normalsize\n\\end{quote}\n\\end{figure}\n\n\n\n\\smallskip\\noindent\n\\textsl{Time parametrization: }\nAnother feature for which our setting may not be considered most natural is the time parametrization by the time spent at the ``boundary vertex.'' A reasonable question is then what happens when we instead use the parametrization by the actual time of the walk (continuous-time parametrization), or even by the number of discrete steps that the walk has taken (discrete-time parametrization). The main problem here is the lack of a direct connection with the underlying DGFF; instead, one has to rely on approximations.\n\nPreliminary calculations have so far shown that, at least approximately, the local time in the continuous-time parametrization is still connected with the DGFF as in \\eqref{E:Dynkin2} but now with the field $\\tilde h^{D_N}$ reduced by its arithmetic mean over~$D_N$. This implies that, for both continuous and discrete-time analogues of the measures $\\zeta^D_N$, $\\vartheta^D_N$ and~$\\kappa^D_N$, their $N\\to\\infty$ limits still take the product form as in \\eqref{E:1.21dis}, \\eqref{E:1.23dis}, \\eqref{E:2.22ii} and \\eqref{E:kappa-lim}, respectively, albeit now with~$Z^D_\\lambda$ replaced by a suitable substitute reflecting the reduction of the continuum GFF by its arithmetic mean. We intend to establish these statements rigorously in a follow-up work.\n\n\n\\smallskip\\noindent\n\\textsl{Critical cases: }\nOther natural problems to examine are the various borderline behaviors left untouched in the present paper; for instance, $\\lambda:=1$ for the $\\lambda$-thick points and~$\\lambda:=\\sqrt\\theta\\wedge1$ for the $\\lambda$-thin points as well as~$\\theta:=1$ for the avoided points. In analogy with the corresponding question for the DGFF (Biskup and Louidor~\\cite{BL1,BL2,BL3}), we expect that the corresponding measures will require a different scaling --- essentially, boosting by an additional factor of~$\\log N$ --- and the limit spatial behavior will be governed by the \n\\myemph{critical} LQG measure $Z^D_1$. For the simple random walk on a homogeneous tree of depth~$n$, this program has already been carried out by the first author~(Abe~\\cite{A18}). A breakthrough result along these lines describing the limit law of the cover time on homogeneous trees has recently been posted by Cortines, Louidor and Saglietti~\\cite{CLS}. \n\n\n\\smallskip\\noindent\n\\textsl{Brownian local time: } \nYet another interesting extension concerns the corresponding problem for the Brownian local time. This requires working with the $\\epsilon$-cover time defined as the first time when every disc of radius~$\\epsilon>0$ inside~$D$ has been visited; the limit behavior is then studied as~$\\epsilon\\downarrow0$. We actually expect that, with proper definitions, very similar conclusions will hold here as well, although we presently do not see any other way to prove them than by approximations via random walks.\n\nJego~\\cite{J18b} recently posted a preprint that proves the existence of a scaling limit for the process associated, similarly to our~$\\zeta^D_N$ from \\eqref{E:zetaND}, with the local-time thick points of the Brownian path killed upon \\myemph{first exit} from~$D$. 
As it turns out, the limit measure still factors into a product of a random spatial part, defined via limits of exponentials of the root of the local time, and an exponential measure. However, although the spatial part of the measure obeys the expectation identity of the kind \\eqref{E:1.19a}, it is certainly not one of the LQG measures $Z^D_\\lambda$ above, due to the limited time horizon of the Brownian path. Characterizing Jego's limit measure more directly is a natural next challenge.\n\n\n\n\\subsection{Outline}\nThe rest of this note is organized as follows. In the next section (Section~\\ref{sec3b}) we derive tail estimates for the local time that will come in handy later in the proofs. These are used to prove tightness of the corresponding point measures. Section~\\ref{sec4} then gives the proof of convergence for the measure associated with $\\lambda$-thick points following the outline from Section~\\ref{sec2.1}. This proof is then used as a blueprint for the corresponding proofs for the $\\lambda$-thin points (Section~\\ref{sec5}) and the light and avoided points (Section~\\ref{sec6}).\nThe results on local structure are proved at the very end (Section~\\ref{sec8}).\n\n\n\\section{Tail estimates and tightness}\n\\label{sec3b}\\noindent\\nopagebreak\nWe are now ready to commence the proofs of our results. All of our derivations will pertain to the continuous-time Markov chain started, and with the local time parametrized by the time spent, at the ``boundary vertex.'' \nLet us pick a domain~$D\\in\\mathfrak D$ and a sequence~$\\{D_N\\}$ of admissible approximations of~$D$ and consider these fixed throughout the rest of this paper. Recall that we write $\\zeta^D_N$, $\\vartheta^D_N$ and~$\\kappa^D_N$ for the measures in \\eqref{E:zetaND}, \\eqref{E:varthetaND} and \\eqref{E:kappaND}, respectively.\n\n\\subsection{Upper tails}\nWe start with estimates on the tails of the random variable~$L^{D_N}_{t_N}(x)$ which then readily imply tightness of the random measures of interest. We first derive these estimates in the general setting of a random walk on a graph with distinguished vertex~$\\varrho$ and only then specialize to~$N$-dependent domains in the plane. We begin with the upper tail:\n\n\\begin{lemma}[Local time upper tail]\n\\label{lemma-upper}\nConsider the random walk on~$V\\cup\\{\\varrho\\}$ as detailed in Section~\\ref{sec1.2}.\nFor all~$a,t>0$ and all~$b\\in\\mathbb R$ such that $a+b>t$, and all~$x\\in V$,\n\\begin{equation}\n\\label{E:2.2a}\nP^\\varrho\\bigl(L_t^V(x)\\ge a+b\\bigr)\\le \\frac{\\sqrt{G^V(x,x)}}{\\sqrt{2(a+b)}-\\sqrt{2t}}\\,\n\\,\\text{\\rm e}\\mkern0.7mu^{-\\frac{(\\sqrt{2a}-\\sqrt{2t})^2}{2G^V(x,x)}}\\,\\text{\\rm e}\\mkern0.7mu^{-b\\frac{\\sqrt{2a}-\\sqrt{2t}}{G^V(x,x)\\sqrt{2a}}}\\,.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proofsect}{Proof}\nWe will conveniently use estimates developed in earlier work on this problem. Denoting by $(Y_s)_{s \\geq 0}$ the $0$-dimensional Bessel process and writing $P_Y^a$ for its law with $P_Y^a(Y_0=a)=1$, Lemma 3.1(e) of Belius, Rosen and Zeitouni~\\cite{BRZ} shows \n\\begin{equation}\n\\label{E:Bessel}\nL_t^V(x)~\\text{under }P^\\varrho \n\\,\\,\\,\\overset{\\text{\\rm law}}=\\,\\,\\,\n\\frac{1}{2} \\left(Y_{G^V (x, x)} \\right)^2~\\text{under } P_Y^{\\sqrt{2t}}.\n\\end{equation}\nLet $P_B^r$ be a law under which\n$(B_s)_{s \\geq 0}$ is a standard Brownian motion on $\\mathbb{R}$ starting at~$r$. 
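For orientation (a standard fact recalled here for the reader's convenience), the $\\delta$-dimensional Bessel process solves $\\text{\\rm d}\\mkern0.5mu Y_s=\\text{\\rm d}\\mkern0.5mu B_s+\\frac{\\delta-1}{2}\\frac{\\text{\\rm d}\\mkern0.5mu s}{Y_s}$ up to its first hitting time of zero; for $\\delta=0$ this reads
\\begin{equation}
\\text{\\rm d}\\mkern0.5mu Y_s=\\text{\\rm d}\\mkern0.5mu B_s-\\frac{1}{2Y_s}\\,\\text{\\rm d}\\mkern0.5mu s,\\qquad s<H_0,
\\end{equation}
with~$Y$ absorbed at zero at time~$H_0$. By Girsanov's theorem, this drift is what produces the Radon-Nikodym factor displayed next.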
The process~$Y$ is absolutely continuous with respect to~$B$ up to the first time it hits zero; after that~$Y$ vanishes identically. The Radon-Nikodym derivative takes the explicit form (see, for example, \\cite[(2.12)]{BRZ})\n\\begin{equation} \n\\label{E:Bessel-to-Brown}\n\\frac{\\text{\\rm d}\\mkern0.5mu P_Y^r}{\\text{\\rm d}\\mkern0.5mu P_B^r}\\Bigr|_{\\mathcal{F}_{ H_0\\wedge t}}\n= \\sqrt{\\frac{r}{B_t}} \\exp \\left\\{- \\frac{3}{8} \\int_0^t \\text{\\rm d}\\mkern0.5mu s\\,\\frac{1}{B_s^2} \\right\\},\\quad\\text{on }\\{H_0 > t \\},\n\\end{equation}\nwhere $\\mathcal{F}_t$ is the $\\sigma$-field generated by the process up to time~$t$ and~$H_a$ is the first time the process hits level~$a$.\n \n\nThe identification \\eqref{E:Bessel} along with the assumption~$a+b>0$ translates the event $\\{L_t^V(x)\\ge a+b\\}$ to~$\\{Y_{G^V(x,x)}\\ge\\sqrt{2(a+b)}\\}$ intersected with~$\\{H_0>G^V(x,x)\\}$. For~$r:=\\sqrt{2t}$, the assumption~$a+b>t$ implies that the quantity in \\eqref{E:Bessel-to-Brown} is less than one everywhere on the event of interest. Hence,\n\\begin{equation}\n\\label{E:4.4}\n\\begin{aligned}\nP^\\varrho\\bigl(L_t^V(x)\\ge a+b\\bigr)&\\le P_B^{\\sqrt{2t}}\\Bigl(B_{G^V(x,x)}\\ge \\sqrt{2(a+b)}\\,\\Bigr)\n\\\\\n&= P_B^0\\Bigl(B_{G^V(x,x)}\\ge\\sqrt{2(a+b)}-\\sqrt{2t}\\Bigr).\n\\end{aligned}\n\\end{equation}\n In order to get \\eqref{E:2.2a} from this, we invoke the Gaussian estimate $P(\\mathcal N(0,\\sigma^2)\\ge x)\\le\\sigma x^{-1}\\text{\\rm e}\\mkern0.7mu^{-\\frac{x^2}{2\\sigma^2}}$ valid for all~$x>0$ along with the calculation\n\\begin{equation}\n\\begin{aligned}\n\\Bigl(\\sqrt{2(a+b)}-\\sqrt{2t}\\Bigr)^2\n&=2(a+b)+2t-2\\sqrt{2a}\\sqrt{2t}\\Bigl(1+\\frac{b}{a}\\Bigr)^{1\/2}\n\\\\\n&\\ge 2(a+b)+2t-2\\sqrt{2a}\\sqrt{2t}\\Bigl(1+\\frac{b}{2a}\\Bigr)\n\\\\\n&=\\bigl(\\sqrt{2a}-\\sqrt{2t}\\bigr)^2+2b\\frac{\\sqrt{2a}-\\sqrt{2t}}{\\sqrt{2a}},\n\\end{aligned}\n\\end{equation}\nwhere in the middle line we used that $(1+x)^{1\/2}\\le 1+x\/2$ holds for all~$x>-1$.\n\\end{proofsect}\n\nFrom this we readily obtain:\n\n\\begin{corollary}[Tightness for the thick points]\n\\label{cor-tightness-upper}\nSuppose that $t_N$ and~$a_N$ are such that the limits in \\eqref{E:1.20} exist for some~$\\theta>0$ and some $\\lambda\\in(0,1)$. For each~$b\\in\\mathbb R$, there is~$c_1(b)\\in(0,\\infty)$ such that for all~$A\\subset\\mathbb R^2$ closed,\n\\begin{equation}\n\\label{E:2.1}\n\\limsup_{N\\to\\infty}\\,\nE^\\varrho\\bigl[\\,\\zeta^D_N\\bigl(A\\times[b,\\infty)\\bigr)\\bigr]\\le c_1(b)\\,{\\rm Leb}(A\\cap D).\n\\end{equation}\n\\end{corollary}\n\n\\begin{proofsect}{Proof}\nIt suffices to prove the bound for all $b<0$ with~$|b|$ sufficiently large. \nPick~$x\\in D_N$. If~$G^{D_N}(x,x)\\ge \\frac g{b^2}\\log N$, then Lemma~\\ref{lemma-upper} with~$a:=a_N$, $t:=t_N$ and~$b$ replaced by~$b\\log N$ and the uniform bound~$G^{D_N}(x,x)\\le g\\log N+c$ give\n\\begin{equation}\n\\label{E:2.2}\nP^\\varrho\\bigl(L^{D_N}_{t_N}(x)\\ge a_N+b\\log N\\bigr)\n\\le \\frac{\\tilde c}{\\sqrt{\\log N}}\\,\\text{\\rm e}\\mkern0.7mu^{-\\frac{(\\sqrt{2a_N}-\\sqrt{2t_N})^2}{2g\\log N}}\\,\\text{\\rm e}\\mkern0.7mu^{\\beta |b|^3},\n\\end{equation}\nfor some constants~$\\tilde c<\\infty$ and~$\\beta>0$ independent of~$b$ and~$N$, once~$N$ is sufficiently large. This is order~$W_N\/N^2$. 
If, on the other hand, $G^{D_N}(x,x)\\le \\frac g{b^2}\\log N$, then we use that~$G^{D_N}(x,x)\\ge\\frac14$ in the second exponential on the right of \\eqref{E:2.2a} to get\n\\begin{equation}\n\\label{E:2.2b}\nP^\\varrho\\bigl(L^{D_N}_{t_N}(x)\\ge a_N+b\\log N\\bigr)\n\\le \\frac{\\tilde c'}{\\sqrt{\\log N}}\\,\\text{\\rm e}\\mkern0.7mu^{-b^2\\frac{(\\sqrt{2a_N}-\\sqrt{2t_N})^2}{2g\\log N}}\\,\\text{\\rm e}\\mkern0.7mu^{\\beta' |b|\\log N},\n\\end{equation}\nwhere again~$\\tilde c'<\\infty$ and~$\\beta'>0$ do not depend on~$b$ or~$N$ once~$N$ is sufficiently large. Since the first exponent in \\eqref{E:2.2b} is order~$\\log N$, for~$|b|$ large enough, this is again at most order~$W_N\/N^2$. Now write $A_\\epsilon:=\\{x\\in\\mathbb R^2\\colon \\text{\\rm d}_\\infty(x,A)<\\epsilon\\}$ and note that, in light of \\eqref{E:1.8i}, we have \n\\begin{equation}\n\\label{E:3.9ui}\n\\#\\bigl\\{x\\in D_N\\colon x\/N\\in A\\bigr\\}\\le N^2{\\rm Leb}(A_{1\/N}\\cap D).\n\\end{equation}\nSumming the relevant bound from \\twoeqref{E:2.2}{E:2.2b} over~$x\\in D_N$ with~$x\/N\\in A$, the claim follows by noting that, since~$A$ is closed, ${\\rm Leb}(A_{1\/N}\\cap D)\\to{\\rm Leb}(A\\cap D)$ as~$N\\to\\infty$. \n\\end{proofsect}\n\n\n\\subsection{Lower tails}\nFor the lower tail we similarly get:\n\n\\begin{lemma}[Local time lower tail]\n\\label{lemma-lower}\nConsider the random walk on~$V\\cup\\{\\varrho\\}$ as in Section~\\ref{sec1.2}.\nFor all~$a,t>0$ and all~$b'0$ and $a+b0$ and each~$x\\in V$,\n\\begin{equation}\n\\label{E:L-vanish}\nP^\\varrho\\bigl(L_t^V(x)=0\\bigr) = \\text{\\rm e}\\mkern0.7mu^{-\\frac{t}{G^V(x,x)}}.\n\\end{equation}\nIn fact, for every~$b\\ge0$, we have\n\\begin{equation}\n\\label{E:L-vanish2}\nP^\\varrho\\bigl(L_t^V(x)\\le b\\bigr) \\le\\text{\\rm e}\\mkern0.7mu^{-\\frac{t}{G^V(x,x)}\\exp\\{-\\frac{b}{G^V(x,x)}\\}} \\le \\text{\\rm e}\\mkern0.7mu^{-\\frac{t}{G^V(x,x)}+b\\frac{t}{G^V(x,x)^2}}.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proofsect}{Proof}\nHere we proceed by a direct argument based on excursion decomposition (see, however, Remark~\\ref{rem-Bessel-works}). Writing $\\hat H_u$ for the first time to return to~$u$ after the walk left~$u$, consider the following independent random variables: \n\\settowidth{\\leftmargini}{(1111)}\n\\begin{enumerate}\n\\item[(1)] $N:=$ Poisson$(t\/G^V(x,x))$,\n\\item[(2)] $\\{Z_n\\colon n\\ge1\\}:=$ Geometric with parameter~$p:=P^x(H_\\varrho<\\hat H_x)$,\n\\item[(3)] $\\{T_{k,j}\\colon k,j\\ge1\\}:=$ Exponentials with mean one.\n\\end{enumerate}\nWe then claim\n\\begin{equation}\n\\label{E:3.13}\n\\pi(x)L_t^V(x) \\,\\,\\overset{\\text{\\rm law}}=\\,\\,\\sum_{k=1}^N \\sum_{j=1}^{Z_k}T_{k,j}.\n\\end{equation}\nTo see this, note that thanks to the parametrization by the local time at~$\\varrho$, the value~$L_t^V(x)$ is accumulated through a Poisson$(\\pi(\\varrho)t)$ number of independent excursions that start and end at~$\\varrho$. Each excursion that actually visits~$x$, which happens with probability $P^\\varrho(H_x<\\hat H_\\varrho)$, contributes a Geometric$(p)$-number of independent exponential random variables to the total time the walk spends at~$x$. By Poisson thinning, the number of excursions that visit~$x$ is Poisson with parameter~$\\pi(\\varrho)P^\\varrho(H_x<\\hat H_\\varrho)t$. We claim that this equals $t\/G^V(x,x)$. 
Indeed, since the walk is constant speed, reversibility gives\n\\begin{equation}\n\\pi(\\varrho)P^\\varrho(H_x<\\hat H_\\varrho) =\\pi(x)P^x(H_\\varrho<\\hat H_x).\n\\end{equation}\nAs was just noted, under~$P^x$ the quantity $\\pi(x)\\ell_{H_\\varrho}(x)$ is the sum of Geometric($p$) independent exponentials of mean one. From \\eqref{E:cov} we then get $\\pi(x)G^V(x,x)=1\/p$.\n\nWith \\eqref{E:3.13} in hand, to get \\eqref{E:L-vanish} we just observe that, modulo null sets, the sum in~\\eqref{E:3.13} vanishes only if~$N=0$. To get \\eqref{E:L-vanish2} we note that, for~$L_t^V(x)\\le b$ we must have $\\sum_{j=1}^{Z_k}T_{k,j}\\le b\\pi(x)$ for each $k=1,\\dots,N$. The probability that the sum of~$Z_k$ independent exponentials is less than~$b\\pi(x)$ equals $1-\\text{\\rm e}\\mkern0.7mu^{-bp\\pi(x)}$, and that this happens for all $k=1,\\dots,N$ thus has probability at most \n\\begin{equation}\n\\sum_{n=0}^\\infty \\frac{(t\/G^V(x,x))^n}{n!}\\bigl[ 1-\\text{\\rm e}\\mkern0.7mu^{-bp\\pi(x)}\\bigr]^n\\text{\\rm e}\\mkern0.7mu^{-\\frac{t}{G^V(x,x)}}\n=\\text{\\rm e}\\mkern0.7mu^{-\\frac{t}{G^V(x,x)} \\text{\\rm e}\\mkern0.7mu^{-bp\\pi(x)}}.\n\\end{equation}\nThe claim again follows from $1\/p=\\pi(x)G^V(x,x)$ and the bound $\\text{\\rm e}\\mkern0.7mu^{-x}\\ge1-x$. \n\\end{proofsect}\n\n\n\\begin{remark}\n\\label{rem-Bessel-works}\nWe note that a proof based on the connection with the $0$-dimensional Bessel process is also possible. Indeed, by~Belius, Rosen and Zeitouni~\\cite[(2.7)]{BRZ}, given $x > 0$ the law of $(Y_s)^2$ under~$P^x_Y$ is given by\n\\begin{equation}\n\\text{\\rm e}\\mkern0.7mu^{- \\frac{x}{2s}} \\delta_0 (\\text{\\rm d}\\mkern0.5mu y) + 1_{(0, \\infty)} (y) \\,\\frac{1}{2s} \\,\\sqrt{\\frac{x}{y}} \\,I_1 \\Bigl(\\frac{\\sqrt{xy}}{s}\\Bigr)\n\\text{\\rm e}\\mkern0.7mu^{- \\frac{x+y}{2s}} \\text{\\rm d}\\mkern0.5mu y,\n\\end{equation}\nwhere $I_1 (z) := \\sum_{k=0}^{\\infty} \\frac{(z\/2)^{2k+1}}{k! (k+1)!}$.\nThe identity \\eqref{E:L-vanish} follows immediately from \\eqref{E:Bessel} and\n\\begin{equation}\nI_1 \\Bigl(\\frac{\\sqrt{2ts}}{G^V (x,x)} \\Bigr) \\leq \\frac{\\sqrt{2ts}}{2G^V(x,x)} \\,\\text{\\rm e}\\mkern0.7mu^{\\frac{ts}{2 G^V(x,x)^2}}\n\\end{equation}\nthen implies the inequality in \\eqref{E:L-vanish2} as well.\n\\end{remark}\n\n\nFrom Lemma~\\ref{lemma-vanish} we get:\n\n\\begin{corollary}[Tightness for the light and avoided points]\n\\label{cor-light}\nSuppose $t_N$ is such that \\eqref{E:1.29} holds with some~$\\theta\\in(0,1)$. \nFor each~$b>0$ there is a constant~$c_2(b)\\in(0,\\infty)$ such that for each $A\\subset\\mathbb R^2$ closed,\n\\begin{equation}\n\\label{E:vartheta-lim}\n\\limsup_{N\\to\\infty}\\,E^\\varrho\\bigl[\\,\\vartheta^D_N(A\\times[0,b])\\bigr]\\le c_2(b)\\,{\\rm Leb}(A\\cap D).\n\\end{equation}\nIn particular,\n\\begin{equation}\n\\limsup_{N\\to\\infty}\\,E^\\varrho\\bigl[\\,\\kappa^D_N(A)\\bigr]\\le c_2(b)\\,{\\rm Leb}(A\\cap D).\n\\end{equation}\n\\end{corollary}\n\n\\begin{proofsect}{Proof}\n It suffices to prove just \\eqref{E:vartheta-lim} and that for~$b>0$ sufficiently large. Denote~$\\tilde c:=\\sup_{N\\ge1}t_N\/(\\log N)^{2}$. 
We then claim\n\\begin{equation}\n\\label{E:4.21ie}\nP^\\varrho\\bigl(L^{D_N}_{t_N}(x)\\le b \\bigr)\\le \\text{\\rm e}\\mkern0.7mu^{-bt_N(\\log N)^{-1}}+\\text{\\rm e}\\mkern0.7mu^{-\\frac{t_N}{G^{D_N}(x,x)}+\\tilde c b^3\\text{\\rm e}\\mkern0.7mu^{8b}}.\n\\end{equation}\nIndeed, the first term arises for~$x$ with $G^{D_N}(x,x)\\le b^{-1}\\text{\\rm e}\\mkern0.7mu^{-4b}\\log N$ by the first inequality in \\eqref{E:L-vanish2} along with~$G^{D_N}(x,x)\\ge\\frac14$. The second term controls the remaining~$x$; we invoke the second inequality in \\eqref{E:L-vanish2} along with $bt_N\/G^{D_N}(x,x)^2\\le \\tilde cb^3\\text{\\rm e}\\mkern0.7mu^{8b}$. \n\nFor~$b$ sufficiently large, the first term on the right of \\eqref{E:4.21ie} is~$o(\\widehat W_N\/N^2)$ independently of~$x\\in D_N$. The second term is in turn $O( \\widehat W_N\/N^2)$, with the implicit constant depending on~$b$, by the fact that that~$G^{D_N}(x,x)\\le g\\log N+c$, uniformly in~$x\\in D_N$. The sum over such~$x\\in D_N$ with~$x\/N\\in A$ is now handled via \\eqref{E:3.9ui}. \n\\end{proofsect}\n\n\\subsection{Some corollaries}\nCombining the conclusions of Lemmas~\\ref{lemma-lower} and~\\ref{lemma-vanish}, we can now derive the easier halves of Theorem~\\ref{thm-minmax} albeit for the continuous-time process parametrized by the time at the ``boundary vertex:''\n\n\\begin{lemma}\n\\label{lemma-minmax}\nSuppose~$\\theta>0$ is related to $t_N$ as in \\eqref{E:1.12}. Then for each~$\\epsilon>0$, the bounds\n\\begin{equation}\n\\label{E:max-upper}\n\\frac{1}{(\\log N)^2}\\max_{x\\in D_N} L^{D_N}_{t_N}(x)\\le 2 g\\bigl(\\sqrt\\theta+1\\bigr)^2+\\epsilon\n\\end{equation}\nand\n\\begin{equation}\n\\label{E:min-lower}\n\\frac{1}{(\\log N)^2}\\min_{x\\in D_N} L^{D_N}_{t_N}(x)\\ge 2 g\\bigl[(\\sqrt\\theta-1)\\vee0\\bigr]^2-\\epsilon\n\\end{equation}\nhold with $P^\\varrho$-probability tending to one as~$N\\to\\infty$.\n\\end{lemma}\n\n\\begin{proofsect}{Proof}\nFor the maximum, pick~$\\epsilon>0$ and abbreviate $a_N:=2 g\\bigl(\\sqrt\\theta+1+\\epsilon\\bigr)^2(\\log N)^2$. Then use \\eqref{E:2.2a} with~$b:=0$ and $a:=a_N$ to bound the probability that $L^{D_N}_{t_N}(x)\\ge a_N$ by order $N^{-2(1+\\epsilon)+o(1)}$ uniformly in~$x\\in D_N$. The union bound then gives \\eqref{E:max-upper}.\n\nFor the minimum, it suffices to deal with the case~$\\theta>1$. We pick~$\\epsilon>0$ such that $\\sqrt\\theta>1+\\epsilon$. Abbreviate $a_N:=2 g\\bigl(\\sqrt\\theta-1-\\epsilon\\bigr)^2(\\log N)^2$ and apply Lemma~\\ref{lemma-lower} to get, for any~$b>0$, \n\\begin{equation}\n\\label{E:3.18}\n\\begin{aligned}\nP^\\varrho\\bigl(L^{D_N}_{t_N}&(x)\\le \\,a_N\\bigr)\n\\\\\n&=P^\\varrho\\bigl(L^{D_N}_{t_N}(x)\\le b \\bigr)+P^\\varrho\\bigl(b0$ and some $\\lambda\\in(0,\\sqrt\\theta\\wedge1)$. For all~$b\\in\\mathbb R$ there is~$c_3(b)\\in(0,\\infty)$ such that for all~$A\\subset\\mathbb R^2$ closed,\n\\begin{equation}\n\\limsup_{N\\to\\infty}\\,\nE^\\varrho\\bigl[\\,\\zeta^D_N\\bigl(A\\times(-\\infty,b]\\bigr)\\bigr]\\le c_3(b)\\,{\\rm Leb}(A\\cap D).\n\\end{equation}\n\\end{corollary}\n\n\\begin{proofsect}{Proof}\nWe proceed as in the proof of Lemma~\\ref{lemma-minmax}. Let~$a_N\\sim 2g(\\sqrt\\theta-\\lambda)^2(\\log N)^2$ be as given, pick $\\epsilon\\in(0,\\sqrt\\theta-\\lambda)$ an abbreviate~$\\hat a_N:= 2g\\epsilon^2(\\log N)^2$. 
Then for any $b' > 0$,\n\\begin{multline}\n\\label{E:3.18ie}\n\\quad P^\\varrho\\bigl(L^{D_N}_{t_N}(x)\\le \\,a_N + b\\log N\\bigr)\n=P^\\varrho\\bigl(\\,L^{D_N}_{t_N}(x)\\le b' \\bigr)\n\\\\+P^\\varrho\\bigl(b'0$ and some~$\\lambda\\in(0,1)$. Introduce the auxiliary centering sequence\n\\begin{equation}\n\\label{E:4.1}\n\\widehat a_N:=\\sqrt{2a_N}-\\sqrt{2t_N}\n\\end{equation}\nand note that $\\widehat a_N\\sim 2\\lambda\\sqrt g\\log N$ as~$N\\to\\infty$. The arguments below make frequent use of the coupling of~$L_{t_N}^{D_N}$ and an independent DGFF $h^{D_N}$ to another DGFF~$\\tilde h^{D_N}$ via the Dynkin isomorphism (Theorem~\\ref{thm-Dynkin}). We will use these notations throughout and write~$\\widehat\\eta_N^D$ to denote the DGFF process associated with~$\\tilde h^{D_N}$ and the centering sequence~$\\widehat a_N$. A key point to note is that~$W_N$ then coincides with normalizing constant from \\eqref{E:1.19e}. \n\n\\subsection{Tightness considerations}\nThe proof of Theorem~\\ref{thm-thick} naturally divides into two parts. In the first part we dominate~$\\zeta^D_N$ using~$\\widehat\\eta_N^D$ and control the effect of adding~$h^{D_N}$ to the local time~$L^{D_N}_{t_N}$. The second part is then a derivation, and a solution, of a convolution-type identity linking the weak-limits of~$\\zeta^D_N$ to those of~$\\widehat\\eta_N^D$. Our tightness considerations start by the following domination lemma: \n\n\\begin{lemma}[Domination by DGFF process]\n\\label{lemma-domination}\nFor any~$b\\in\\mathbb R$ and any measurable~$A\\subseteq D$,\n\\begin{equation}\n\\label{E:4.7a}\n\\zeta^D_N\\bigl(A\\times[b,\\infty)\\bigr)\\,\\,\\overset{\\text{\\rm law}}\\le\\,\\,\\widehat\\eta_N^D\\Bigl(A\\times\\bigl[\\tfrac1{2\\sqrt g}\\tfrac{b}{\\sqrt\\theta+\\lambda},\\infty\\bigr)\\Bigr)\n+o(1)\n\\end{equation}\nwhere~$o(1)\\to0$ in probability as $N\\to\\infty$. Similarly, for any measurable~$A\\subseteq D\\times D$ and any~$b\\in\\mathbb R$,\n\\begin{equation}\n\\label{E:4.7b}\n\\zeta^D_N\\otimes\\zeta^D_N\\bigl(A\\times[b,\\infty)^2\\bigr)\\,\\,\\overset{\\text{\\rm law}}\\le\\,\\,\\widehat\\eta_N^D\\otimes\\widehat\\eta_N^D\\Bigl(A\\times\\bigl[\\tfrac1{2\\sqrt g}\\tfrac{b}{\\sqrt\\theta+\\lambda},\\infty\\bigr)^2\\Bigr)\n+o(1).\n\\end{equation}\n\\end{lemma}\n\n\\begin{proofsect}{Proof}\nLet us start by \\eqref{E:4.7a}. The Dynkin isomorphism shows\n\\begin{equation}\n\\label{E:4.4u}\nL^{D_N}_{t_N}\\le L^{D_N}_{t_N}+\\frac12 (h^{D_N})^2= \\frac12\\bigl(\\tilde h^{D_N}+\\sqrt{2t_N}\\bigr)^2.\n\\end{equation}\nHence,\nthe expression in \\eqref{E:4.7a} is bounded as\n\\begin{equation}\n\\label{E:4.10}\n\\zeta^D_N\\bigl(A\\times[b,\\infty)\\bigr)\n\\le\\frac1{W_N}\\sum_{\\begin{subarray}{c}\nx\\in D_N\\\\x\/N\\in A\n\\end{subarray}}\n1_{\\{|\\tilde h^{D_N}_x+\\sqrt{2t_N}|\\ge\\sqrt{2a_N+2b\\log N}\\}}.\n\\end{equation}\nPick any~$b'0$ be such that\n\\begin{equation}\n\\label{E:4.8}\n\\bigl|f(x,\\ell,h)\\bigr|\\le \\Vert f\\Vert_\\infty 1_A(x)1_{[-b,\\infty)}(\\ell)1_{[-b,b]}(h).\n\\end{equation}\nAbbreviate $L'_N(x):=(L^{D_N}_{t_N}(x)-a_N)\/\\log N$ and $h'_N(x):=h^{D_N}_x\/\\sqrt{\\log N}$. 
Writing $\\text{Var}_{\\mathbb P}$, resp., $\\text{Cov}_{\\mathbb P}$ for the conditional variance, resp., covariance given the local time, we have\n\\begin{multline}\n\\label{E:4.11}\n\\quad\\text{Var}_{\\mathbb P}\\bigl(\\langle\\zeta^{D,\\text{ext}}_N,f\\rangle\\bigr)\n\\\\\n=\\frac1{W_N^2}\\sum_{x,y\\in D_N}\\text{Cov}_{\\mathbb P}\\Bigl(f\\bigl(\\ffrac xN, L'_N(x),h'_N(x)\\bigr),f\\bigl(\\ffrac yN, L'_N(y),h'_N(y)\\bigr)\\Bigr).\n\\quad\n\\end{multline}\nPick~$\\epsilon>0$ and split the sum according to whether~$|x-y|\\ge\\epsilon N$ or not. In the former cases, we use the Gibbs-Markov decomposition to write~$h^{D_N}$ using the value~$h^{D_N}_x$ and an independent DGFF in~$D_N\\smallsetminus\\{x\\}$ as\n\\begin{equation}\n\\label{E:5.12i}\nh^{D_N}\\,\\overset{\\text{\\rm law}}=\\, h^{D_N}_x\\fraktura b_{D_N,x}(\\cdot)+\\widehat h^{D_N\\smallsetminus\\{x\\}},\\quad h^{D_N}_x\\independent \\widehat h^{D_N\\smallsetminus\\{x\\}},\n\\end{equation}\nwhere~$\\fraktura b_{D_N,x}\\colon\\mathbb Z^2\\to[0,1]$ is the unique function that is discrete harmonic on~$D_N\\smallsetminus\\{x\\}$, vanishes outside~$D_N$ and equals one at~$x$. A key point, proved with the help of monotonicity of~$D\\mapsto\\fraktura b_{D,x}(y)$ with respect to set inclusion, is\n\\begin{equation}\n\\label{E:5.13i}\n\\max_{\\begin{subarray}{c}\nx,y\\in D_N\\\\|x-y|\\ge\\epsilon N\n\\end{subarray}}\n\\fraktura b_{D_N,x}(y)\\le\\frac{c(\\epsilon)}{\\log N},\n\\end{equation}\nwhere~$c(\\epsilon)\\in(0,\\infty)$ is independent of~$N$. The uniform continuity of~$f$ in the third variable then shows that the portion of the sum in \\eqref{E:4.11} corresponding to~$|x-y|\\ge\\epsilon N$ vanishes in the limit~$N\\to\\infty$. \n\nUsing the bound in \\eqref{E:4.8}, the part of the sum in \\eqref{E:4.11} corresponding to~$|x-y|\\le\\epsilon N$ is bounded by $\\Vert f\\Vert_\\infty^2$ times\n\\begin{equation}\n\\label{E:4.12}\n\\zeta^D_N\\otimes\\zeta^D_N\\Bigl(\\bigl\\{(x,y)\\colon |x-y|\\le\\epsilon\\bigr\\}\\times[-b,\\infty)^2\\Bigr).\n\\end{equation}\nBy Lemma~\\ref{lemma-domination}, this is stochastically dominated by\n\\begin{equation}\n\\widehat\\eta_N^D\\otimes\\widehat\\eta_N^D\\Bigl(\\bigl\\{(x,y)\\colon |x-y|\\le\\epsilon\\bigr\\}\\times\n[-\\tfrac{1}{2\\sqrt{g}} \\tfrac{b}{\\sqrt{\\theta}+\\lambda},\\infty)^2\\Bigr)+o(1).\n\\end{equation}\nAs $\\{(x,y)\\colon |x-y|\\le\\epsilon\\}$ is closed and $\\widehat a_N\\sim2\\lambda\\sqrt g\\log N$ as~$N\\to\\infty$, \\eqref{E:1.19} and the Portmanteau Theorem show that this expression is, in the limit~$N\\to\\infty$, stochastically dominated by a \\normalcolor $b$-dependent constant times\n\\begin{equation}\nZ^D_\\lambda\\otimes Z^D_\\lambda\\bigl(\\{(x,y)\\colon |x-y|\\le\\epsilon\\}\\bigr).\n\\end{equation}\nAs~$\\epsilon\\downarrow0$, this tends to zero a.s. due to the fact that~$Z^D_\\lambda$ has no point masses a.s.\n\nWe conclude that $\\text{Var}_{\\mathbb P}(\\langle\\zeta^{D,\\text{ext}}_N,f\\rangle)$\ntends to zero in $P^\\varrho$-probability. 
This implies\n\\begin{equation}\n\\label{E:4.20}\n\\langle\\zeta^{D,\\text{ext}}_N,f\\rangle-\\mathbb E\\bigl(\\langle\\zeta^{D,\\text{ext}}_N,f\\rangle\\bigr)\\,\\,\\underset{N\\to\\infty}\\longrightarrow\\,\\,0,\\quad\\text{in $P^\\varrho\\otimes\\mathbb P$-probability}.\n\\end{equation}\nTo infer the desired claim, abbreviate\n\\begin{equation}\nf_{\\mathfrak g}(x,\\ell):=\\int\\mathfrak g(\\text{\\rm d}\\mkern0.5mu h) f(x,\\ell,h)\n\\end{equation}\nand note that, since $A$ in \\eqref{E:4.8} is compact,~$h^{D_N}_x\/\\sqrt{\\log N}$ tends in law to~$\\mathcal N(0,g)$ uniformly for all $x\\in\\{y\\in D_N\\colon y\/N\\in A\\}$. The continuity of~$f$ along with \\eqref{E:4.8} yield \n\\begin{equation}\n\\label{E:4.22}\n\\mathbb E\\bigl(\\langle\\zeta^{D,\\text{ext}}_N,f\\rangle\\bigr)-\\langle\\zeta^D_N,f_{\\mathfrak g}\\rangle\\,\\,\\underset{N\\to\\infty}\\longrightarrow\\,\\,0,\\quad\\text{in $P^\\varrho$-probability}.\n\\end{equation}\nCombining \\eqref{E:4.20} and \\eqref{E:4.22} we then get \\eqref{E:4.7}.\n\\end{proofsect}\n\n\nAs a consequence of the above lemmas, we now get:\n\n\\begin{lemma}\n\\label{lemma-conv}\nRecall that~$\\mathfrak g$ is the law of~$\\mathcal N(0,g)$. Given~$f\\in C_{\\text{\\rm c}}(D\\times\\mathbb R)$ with~$f\\ge0$, let\n\\begin{equation}\n\\label{E:4.24}\nf^{\\ast\\mathfrak g}(x,\\ell):=\\int\\mathfrak g(\\text{\\rm d}\\mkern0.5mu h)f\\Bigl(x,\\tfrac1{2\\sqrt g(\\sqrt\\theta+\\lambda)}\\bigl(\\ell+\\tfrac{h^2}2\\bigr)\\Bigr).\n\\end{equation}\nThen for every subsequential weak limit~$\\zeta^D$ of~$\\zeta^D_N$, simultaneously for all~$f$ as above,\n\\begin{equation}\n\\label{E:4.25a}\n\\langle\\zeta^D,f^{\\ast\\mathfrak g}\\rangle\\,\\overset{\\text{\\rm law}}=\\, \\fraktura c(\\lambda)\\int Z^D_\\lambda(\\text{\\rm d}\\mkern0.5mu x)\\otimes\\text{\\rm e}\\mkern0.7mu^{-\\alpha\\lambda h}\\text{\\rm d}\\mkern0.5mu h \\,\\,f(x,h),\n\\end{equation}\nwhere, we recall,~$\\alpha:=2\/\\sqrt g$ and~$\\fraktura c(\\lambda)$ is as in \\eqref{E:1.19}.\n\\end{lemma}\n\n\\begin{proofsect}{Proof}\nPick~$f$ as above and let~$f^{\\ast\\mathfrak g}$ be as in \\eqref{E:4.24}. Let the DGFF~$\\tilde h^{D_N}$ be related to the local time~$L^{D_N}_{t_N}$ and an independent DGFF~$h^{D_N}$ via the Dynkin isomorphism. Recalling \\eqref{E:4.1}, we then have\n\\begin{equation}\n\\label{E:4.26}\n\\begin{aligned}\n\\langle\\widehat\\eta_N^D, f\\rangle&=\\frac1{W_N}\\sum_{x\\in D_N}f\\bigl(\\ffrac xN,\\tilde h_x^{D_N}-\\widehat a_N\\bigr)\n\\\\\n&=\\frac1{W_N}\\sum_{x\\in D_N}f\\Bigl(\\ffrac xN,\\sqrt{2L^{D_N}_{t_N}(x)+(h_x^{D_N})^2}-\\sqrt{2a_N}\\Bigr),\n\\end{aligned}\n\\end{equation}\nwhere the sign ambiguity of the square root was resolved by noting that the compact support of~$f$ forces~$\\tilde h_x^{D_N}$ to be strictly positive once~$N$ is sufficiently large. By the same reasoning, for~$x$ to contribute to the sum, $2L^{D_N}_{t_N}(x)+(h_x^{D_N})^2$ must be close to~$2a_N$. Let $\\chi\\colon[0,\\infty)\\to[0,1]$ be non-increasing, \ncontinuous with~$\\chi(x)=1$ for~$0\\le x\\le1$ and $\\chi(x)=0$ for~$x\\ge2$. 
Invoking Lemma~\\ref{lemma-no-conspire}, we now truncate the values of~$h_x^{D_N}$ using~$\\chi$ and rewrite \\eqref{E:4.26} as a random quantity whose~$L^\\infty$ norm is at most a constant times $\\Vert f\\Vert_\\infty\\text{\\rm e}\\mkern0.7mu^{-\\beta M^2}$ uniformly in~$N$ plus the quantity\n\\begin{equation}\n\\label{E:4.27}\n\\frac1{W_N}\\sum_{x\\in D_N}f\\Bigl(\\ffrac xN,\\sqrt{2a_N+2[\\,L^{D_N}_{t_N}(x)-a_N]+(h_x^{D_N})^2}-\\sqrt{2a_N}\\Bigr)\\chi\\Bigl(\\frac{|h_x^{D_N}|}{M\\sqrt{\\log N}}\\Bigr).\n\\end{equation}\nThe truncation of the field ensures that, for~$x$ to contribute to the sum, \\myemph{both}~$(h_x^{D_N})^2$ and $L^{D_N}_{t_N}(x)-a_N$ must be at most order~$\\log N$. Expanding the square root and using the uniform continuity of~$f$ along with the tightness of~$\\zeta^D_N$ to replace $a_N$ by its asymptotic expression then recasts \\eqref{E:4.27} as\n\\begin{equation}\n\\label{E:4.28}\n\\frac1{W_N}\\sum_{x\\in D_N}f_{\\text{ext}}\\Bigl(\\ffrac xN,\\tfrac{L^{D_N}_{t_N}(x)-a_N}{\\log N}, \\tfrac{h_x^{D_N}}{\\sqrt{\\log N}}\\Bigr)\\chi\\Bigl(\\frac{|h_x^{D_N}|}{M\\sqrt{\\log N}}\\Bigr)+o(1),\n\\end{equation}\nwhere\n\\begin{equation}\nf_{\\text{ext}}(x,\\ell,h):=f\\Bigl(x,\\tfrac1{2\\sqrt g(\\sqrt\\theta+\\lambda)}\\bigl(\\ell+\\tfrac{h^2}2\\bigr)\\Bigr).\n\\end{equation}\nThe function $\\ell,h\\mapsto f_{\\text{ext}}(x,\\ell,h)\\chi(|h|\/M)$ that effectively appears in \\eqref{E:4.28} is compactly supported in both variables; Lemma~\\ref{lemma-add-field} then shows that, along subsequences where~$\\zeta^D_N$ converges in law to some~$\\zeta^D$, the expression in \\eqref{E:4.28} converges to $\\langle\\zeta^D, f^{\\ast\\mathfrak g}_M \\rangle$ where~$f^{\\ast\\mathfrak g}_M$ is defined by \\eqref{E:4.24} with~$\\mathfrak g(\\text{\\rm d}\\mkern0.5mu h)$ replaced by $\\chi(|h|\/M)\\mathfrak g(\\text{\\rm d}\\mkern0.5mu h)$. \nFrom the known convergence of~$\\widehat\\eta_N^D$ (see \\eqref{E:1.19}) we thus conclude \n\\begin{equation}\n\\langle\\zeta^D,f^{\\ast\\mathfrak g}_M\\rangle +O(\\text{\\rm e}\\mkern0.7mu^{-\\beta M^2})\\,\\,\\overset{\\text{\\rm law}}=\\,\\, \\fraktura c(\\lambda)\\int Z^D_\\lambda(\\text{\\rm d}\\mkern0.5mu x)\\otimes\\text{\\rm e}\\mkern0.7mu^{-\\alpha\\lambda h}\\text{\\rm d}\\mkern0.5mu h \\,\\,f(x,h),\n\\end{equation}\nwhere $O(\\text{\\rm e}\\mkern0.7mu^{-\\beta M^2})$ is a random quantity with $L^\\infty$-norm at most a constant times~$\\text{\\rm e}\\mkern0.7mu^{-\\beta M^2}$. Taking~$M\\to\\infty$ via Monotone Convergence Theorem now gives \\eqref{E:4.25a}.\n\\end{proofsect}\n\nWorking towards the proof of Theorem~\\ref{thm-thick}, a key remaining point to show is that the class of~$f^{\\ast\\mathfrak g}$ arising from functions~$f$ for which the integral on the right of \\eqref{E:4.25a} converges absolutely is sufficiently rich so that \\eqref{E:4.25a} determines the measure~$\\zeta^D$ uniquely. For this we note that, by an application of the Dominated and Monotone Convergence Theorems, \\eqref{E:4.25a} extends from~$C_{\\text{\\rm c}}(D\\times\\mathbb R)$ to the class of functions~$(x,h)\\mapsto 1_A(x)f(h)$, where $A\\subset D$ is open with~$\\overline A\\subset D$ and $f\\in C_{\\text{\\rm c}}^\\infty(\\mathbb R)$ with~$f\\ge0$. 
The transformation \\eqref{E:4.24} only affects the second variable on which it takes the form $f\\mapsto (f\\ast\\fraktura e)\\circ\\fraktura s$, where the convolution is with the function\n\\begin{equation}\n\\label{E:5.33}\n \\fraktura e(z):=\\sqrt{\\frac\\beta\\pi}\\,\\frac{\\text{\\rm e}\\mkern0.7mu^{\\beta z}}{\\sqrt{-z}}1_{(-\\infty,0)}(z)\\quad\\text{for}\\quad\\beta:=\\alpha\\bigl(\\sqrt\\theta+\\lambda\\bigr)\n\\end{equation}\nand where $h\\mapsto\\fraktura s(h)$ is the scaling map\n\\begin{equation}\n\\label{E:5.34}\n\\fraktura s(h):=\\frac{h}{2\\sqrt g(\\sqrt\\theta+\\lambda)}.\n\\end{equation}\nAs it turns out, it then suffices to observe:\n\n\\begin{lemma}\n\\label{lemma-unique}\nLet $\\mu_\\lambda(\\text{\\rm d}\\mkern0.5mu h):=\\text{\\rm e}\\mkern0.7mu^{-\\alpha\\lambda h}\\text{\\rm d}\\mkern0.5mu h$. Assuming~$\\theta>0$, there is at most one Radon measure~$\\nu$ on~$\\mathbb R$ such that for all~$f\\in C_{\\text{\\rm c}}^\\infty(\\mathbb R)$ with $f\\ge0$,\n\\begin{equation}\n\\label{E:5.35iw}\n\\bigl\\langle\\nu,f\\ast\\fraktura e\\bigr\\rangle=\\langle\\mu_\\lambda,f\\rangle.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proofsect}{Proof}\nWriting \\eqref{E:5.35iw} explicitly using integrals and using the fact that the class of all $f\\in C_{\\text{\\rm c}}^\\infty(\\mathbb R)$ with $f\\ge0$ separates Radon measures on~$\\mathbb R$ shows\n\\begin{equation}\n\\int_{\\mathbb R}\\nu(\\text{\\rm d}\\mkern0.5mu s)\\fraktura e(s-h) = \\text{\\rm e}\\mkern0.7mu^{-\\alpha\\lambda h},\\quad h\\in\\mathbb R.\n\\end{equation}\nAbbreviating $\\nu_{\\lambda}(\\text{\\rm d}\\mkern0.5mu h):=\\text{\\rm e}\\mkern0.7mu^{\\alpha\\lambda h}\\nu(\\text{\\rm d}\\mkern0.5mu h)$ and $\\fraktura e_{\\lambda}(h):=\\text{\\rm e}\\mkern0.7mu^{-\\alpha\\lambda h}\\fraktura e(h)$, this can be recast as\n\\begin{equation}\n\\int_{\\mathbb R}\\nu_{\\lambda}(\\text{\\rm d}\\mkern0.5mu s)\\fraktura e_{\\lambda}( s-h ) = 1,\\quad h\\in\\mathbb R.\n\\end{equation}\nIntegrating this against suitable test functions with respect to the Lebesgue measure and applying Dominated Convergence, we conclude\n\\begin{equation}\n\\bigl\\langle\\nu_\\lambda,f\\ast\\fraktura e_\\lambda\\bigr\\rangle=\\langle{\\rm Leb},f\\rangle,\\quad f\\in \\mathcal S(\\mathbb R),\n\\end{equation}\nwhere~$\\mathcal S(\\mathbb R)$ is the Schwartz class of functions on~$\\mathbb R$. Note that this identity entails that the integral on the left-hand side converges absolutely.\n\nSince $\\mathcal S(\\mathbb R)$ separates Radon measures on~$\\mathbb R$, to conclude the statement from \\eqref{E:5.35iw} it suffices to prove that, for $\\theta>0$,\n\\begin{equation}\n\\label{E:5.39iw}\nf\\mapsto f\\ast\\fraktura e_\\lambda\\text{ is a bijection of }\\mathcal S(\\mathbb R) \\text{ onto itself}.\n\\end{equation}\nThe Fourier transform maps~$\\mathcal S(\\mathbb R)$ bijectively onto itself and so we may as well prove \\eqref{E:5.39iw} in the Fourier picture. For this we note that, as $\\theta>0$ we have~$\\tilde\\beta:=\\beta-\\alpha\\lambda>0$ and so $z\\mapsto\\fraktura e_\\lambda(z)$ decays exponentially as~$z\\to-\\infty$. 
In particular, $\\fraktura e_\\lambda$ is integrable and so in the Fourier transform,~$f\\mapsto f\\ast\\fraktura e_\\lambda$ is reduced to the multiplication by\n\\begin{equation}\n\\widehat\\fraktura e_\\lambda(k):=\\int_\\mathbb R\\text{\\rm d}\\mkern0.5mu z\\,\\fraktura e_\\lambda(z)\\text{\\rm e}\\mkern0.7mu^{2\\pi\\text{\\rm i}\\mkern0.7mu kz}\n= \\sqrt{\\frac{\\beta}{\\tilde\\beta}}\\,\\frac1{|1+4\\pi^2 \\tilde\\beta^{-2}k^2|^{1\/4}}\\,\\text{\\rm e}\\mkern0.7mu^{-\\text{\\rm i}\\mkern0.7mu\\theta(k)},\n\\end{equation}\nwhere~$\\theta(k)$ is the unique number in~$[0,\\frac\\pi4]$ such that\n\\begin{equation}\n \\cos\\theta(k)=\\sqrt{\\frac{1+|1+4\\pi^2 \\tilde\\beta^{-2}k^2|^{-1\/2}}2}.\n\\end{equation}\n We now check that $k\\mapsto\\widehat\\fraktura e_\\lambda(k)$ is bounded and~$C^\\infty(\\mathbb R)$ which implies that $f\\mapsto \\fraktura e_{\\lambda} \\ast f$ maps~$\\mathcal S(\\mathbb R)$ into~$\\mathcal S(\\mathbb R)$. The map is also injective because~$|\\widehat\\fraktura e_\\lambda|>0$ and onto because $|\\widehat\\fraktura e_\\lambda(k)|^{-1}$ is bounded by a power of~$|k|$. Hence \\eqref{E:5.39iw} follows. \n\\end{proofsect}\n\nWe are now ready to give:\n\n\\begin{proofsect}{Proof of Theorem~\\ref{thm-thick}}\nConsider a subsequential limit~$\\zeta^D$, pick~$f\\in C_{\\text{\\rm c}}(\\mathbb R)$ with~$f\\ge0$ and let~$A\\subset D$ be open with~$\\overline A\\subset D$. Using the notation \\twoeqref{E:5.33}{E:5.34} we then have \n\\begin{equation}\n\\bigl\\langle\\zeta^D,(1_A\\otimes f)^{\\ast\\mathfrak g}\\bigr\\rangle=\\bigl\\langle\\zeta_A^D,(f\\ast\\fraktura e)\\circ\\fraktura s\\bigr\\rangle=\\bigl\\langle\\zeta_A^D\\circ\\fraktura s^{-1},f\\ast\\fraktura e\\bigr\\rangle,\n\\end{equation}\nwhere~$\\zeta_A^D$ is a Borel measure on~$\\mathbb R$ defined by $\\zeta_A^D(B):=\\zeta^D(A\\times B)$. Writing $\\mu_\\lambda(\\text{\\rm d}\\mkern0.5mu h):=\\text{\\rm e}\\mkern0.7mu^{-\\alpha\\lambda h}\\text{\\rm d}\\mkern0.5mu h$, the identity \\eqref{E:4.25a} then translates into\n\\begin{equation}\n\\label{E:5.43nw}\n\\bigl\\langle\\zeta_A^D\\circ\\fraktura s^{-1},f\\ast\\fraktura e\\bigr\\rangle \\,\\overset{\\text{\\rm law}}=\\, \\fraktura c(\\lambda)Z^D_\\lambda(A)\\langle\\mu_\\lambda,f\\rangle,\n\\end{equation}\nwhere the equality in law holds simultaneously for all~$A$ and~$f$ as above. \n\nTo infer the product form of~$\\zeta^D$ from \\eqref{E:5.43nw}, define (for a given~$A$ and a given realization of~$\\zeta^D$) a Borel measure on~$\\mathbb R$ by\n\\begin{equation}\n\\label{E:5.44nw}\n\\nu:=\\Bigl[\\alpha\\lambda \\bigl\\langle\\zeta_A^D\\circ\\fraktura s^{-1},1_{[0,\\infty)}\\ast\\fraktura e\\bigr\\rangle\\Bigr]^{-1}\\zeta^D_A,\n\\end{equation}\nwhere the conditions on~$A$ imply $Z^D_\\lambda(A)>0$ a.s.\\ and so, by \\eqref{E:5.43nw}, the quantity in the square bracket is strictly positive a.s. By \\eqref{E:5.43nw} we have $\\langle\\nu\\circ\\fraktura s^{-1},f\\ast\\fraktura e\\rangle=\\langle\\mu_\\lambda,f\\rangle$ for all~$f\\in C_{\\text{\\rm c}}(\\mathbb R)$ and so, by Lemma~\\ref{lemma-unique}, $\\nu\\circ\\fraktura s^{-1}$, and thus also~$\\nu$, is determined uniquely. In particular, $\\nu$ is the same for all~$A$ as above and for a.e.~realization of~$\\zeta^D$. Using \\eqref{E:5.43nw} in \\eqref{E:5.44nw} then shows $\\zeta^D_A(\\text{\\rm d}\\mkern0.5mu h)\\,\\overset{\\text{\\rm law}}=\\,\\fraktura c(\\lambda)Z^D_\\lambda(A)\\nu(\\text{\\rm d}\\mkern0.5mu h)$. 
As this holds simultaneously for all~$A$ as above, Remark~\\ref{rem-restrict} permits us to conclude\n\\begin{equation}\n\\zeta^D\\,\\overset{\\text{\\rm law}}=\\, \\fraktura c(\\lambda)Z^D_\\lambda\\,\\otimes\\,\\nu,\n\\end{equation}\nwhere~$\\nu$ is a uniquely-determined deterministic Radon measure on~$\\mathbb R$. \n\nIt remains to derive the explicit form of~$\\nu$ which, thanks to its uniqueness, we can do by plugging the desired expression on the left-hand side of \\eqref{E:4.25a} and checking for equality. Abbreviate $\\tilde\\alpha:=\\alpha(\\theta,\\lambda)$ and note that\n\\begin{equation}\n\\tilde\\alpha = \\frac1{2\\sqrt g(\\sqrt\\theta+\\lambda)}\\alpha\\lambda.\n\\end{equation}\nPick~$f\\in C_{\\text{\\rm c}}(D\\times\\mathbb R)$ and perform the following calculation where, in the last step, we invoke the substitution $r:=\\frac1{2\\sqrt g(\\sqrt\\theta+\\lambda)}(\\ell+\\frac{h^2}2)$ and separate integrals using Fubini-Tonelli:\n\\begin{equation}\n\\label{E:4.38}\n\\begin{aligned}\n\\int_{D\\times\\mathbb R}&Z^D_\\lambda(\\text{\\rm d}\\mkern0.5mu x)\\otimes\\text{\\rm e}\\mkern0.7mu^{-\\tilde \\alpha\\ell}\\text{\\rm d}\\mkern0.5mu\\ell\\,f^{\\ast\\mathfrak g}(x,\\ell)\n\\\\\n&=\\int_{D\\times\\mathbb R\\times\\mathbb R}Z^D_\\lambda(\\text{\\rm d}\\mkern0.5mu x)\\otimes\\text{\\rm e}\\mkern0.7mu^{-\\tilde \\alpha\\ell}\\text{\\rm d}\\mkern0.5mu\\ell\\otimes\\mathfrak g(\\text{\\rm d}\\mkern0.5mu h)\\,\nf\\Bigl(x,\\tfrac1{2\\sqrt g(\\sqrt\\theta+\\lambda)}\\bigl(\\ell+\\tfrac{h^2}2\\bigr)\\Bigr)\n\\\\\n&=\\int_{D\\times\\mathbb R\\times\\mathbb R}Z^D_\\lambda(\\text{\\rm d}\\mkern0.5mu x)\\otimes\\text{\\rm e}\\mkern0.7mu^{-\\tilde\\alpha(\\ell+\\frac{h^2}2)}\\text{\\rm d}\\mkern0.5mu\\ell\\otimes\\text{\\rm e}\\mkern0.7mu^{\\,\\tilde\\alpha\\frac{h^2}2}\\mathfrak g(\\text{\\rm d}\\mkern0.5mu h)\\,\nf\\Bigl(x,\\tfrac1{2\\sqrt g(\\sqrt\\theta+\\lambda)}\\bigl(\\ell+\\tfrac{h^2}2\\bigr)\\Bigr)\n\\\\\n&=2\\sqrt g(\\sqrt\\theta+\\lambda)\\Bigl(\\int_\\mathbb R\\mathfrak g(\\text{\\rm d}\\mkern0.5mu h)\\text{\\rm e}\\mkern0.7mu^{\\,\\tilde\\alpha\\frac{h^2}2}\\Bigr)\n\\int_{D\\times\\mathbb R}Z^D_\\lambda(\\text{\\rm d}\\mkern0.5mu x)\\otimes\\text{\\rm e}\\mkern0.7mu^{-\\alpha \\lambda r}\\text{\\rm d}\\mkern0.5mu r\\,f(x,r)\\,.\n\\end{aligned}\n\\end{equation}\nAs $\\tilde\\alpha<1\/g$, the first integral on the last line converges to the root of $(1-\\tilde\\alpha g)^{-1}=\\frac{\\sqrt\\theta+\\lambda}{\\sqrt\\theta}$ while \\eqref{E:4.25a} equates the second integral to $\\langle\\zeta^D,f^{\\ast\\mathfrak g}\\rangle$ in law. This implies \n\\begin{equation}\n \\zeta^D \\,\\,\\overset{\\text{\\rm law}}=\\,\\, \\frac{\\theta^{1\/4}}{2\\sqrt{g}\\,(\\sqrt\\theta+\\lambda)^{3\/2}}\\,\n\\fraktura c(\\lambda)Z^D_\\lambda(\\text{\\rm d}\\mkern0.5mu x)\\otimes\\text{\\rm e}\\mkern0.7mu^{-\\tilde \\alpha\\ell}\\text{\\rm d}\\mkern0.5mu\\ell. \n\\end{equation}\nIn particular, all weakly converging subsequences of~$\\{\\zeta^D_N\\colon N\\ge1\\}$ converge to this~$\\zeta^D$, thus proving the desired claim.\n\\end{proofsect}\n\n\\normalcolor\n\n\\section{Thin points}\n\\label{sec5}\\noindent\\nopagebreak\n Our next task is the convergence of point measures $\\zeta^D_N$ associated with $\\lambda$-thin points. The argument proceeds very much along the same sequence of lemmas as for the $\\lambda$-thick points and so we will concentrate on the steps where a different reasoning is needed. 
Throughout we assume that~$t_N$ and~$a_N$ are sequences satisfying \\eqref{E:1.22} with some~$\\theta>0$ and some~$\\lambda\\in(0, 1 \\wedge \\sqrt\\theta)$. The auxiliary centering sequence~$\\widehat a_N$ is now defined by \n\\begin{equation}\n\\label{E:5.1}\n\\widehat a_N:=\\sqrt{2t_N}-\\sqrt{2a_N}\n\\end{equation}\nwhich ensures that we still have $\\widehat a_N\\sim 2\\lambda\\sqrt g\\log N$ as~$N\\to\\infty$. Appealing to the coupling of~$L^{D_N}_{t_N}$ and~$h^{D_N}$ to~$\\tilde h^{D_N}$ via Dynkin isomorphism, we use~$\\widehat\\eta_N^D$ to denote the point process associated with~$\\tilde h^{D_N}$ and the centering sequence~$-\\widehat a_N$. \n\nThe proof again opens up by proving suitable tightness and joint-convergence statements. We start with an analogue of Lemma~\\ref{lemma-no-conspire}: \n\n\\begin{lemma}\n\\label{lemma-no-conspire2}\nLet $0<\\beta<\\frac1{2g}\\frac{\\sqrt\\theta}{\\sqrt\\theta-\\lambda}$. Then for each~$b\\in\\mathbb R$ there is~$c_5(b)\\in(0,\\infty)$ and, for each $M\\ge0$, there is~$N'=N'(b,M)$ such that for all~$N\\ge N'$ and all~$x\\in D_N$,\n\\begin{equation}\nP^\\varrho\\otimes\\mathbb P\\biggl(L^{D_N}_{t_N}(x)+\\frac{(h^{D_N}_x)^2}2 \\le a_N+b\\log N,\\,\\frac{|h^{D_N}_x|}{\\sqrt{\\log N}}\\ge M\\biggr)\\le c_5(b)\\frac{W_N}{N^2}\\text{\\rm e}\\mkern0.7mu^{-\\beta M^2}.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proofsect}{Proof}\nLet us again for simplicity just deal with the case~$b=0$. Pick~$0<\\delta<\\sqrt\\theta-\\lambda$. Then the probability in question is bounded by\n\\begin{multline}\n\\qquad\nP^\\varrho\\Bigl(L^{D_N}_{t_N}(x)\\le 2g(\\sqrt\\theta-\\lambda-\\delta)^2(\\log N)^2\\Bigr)\n\\\\+P^\\varrho\\Bigl(2g(\\sqrt\\theta-\\lambda-\\delta)^2(\\log N)^2\\le L^{D_N}_{t_N}(x)\\le a_N-\\frac12M^2\\log N\\Bigr).\n\\qquad\n\\end{multline}\nInvoking the calculation in \\eqref{E:3.18}, the first term is at most order~$N^{-2(\\lambda+\\delta)^2+o(1)}$ which is $o(W_N\/N^2)$. The second term is now bounded using Lemma~\\ref{lemma-lower} and the fact that, by the uniform bound $G^{D_N}(x,x) \\le g \\log N + c$ with~$c$ independent of~$N$, we have \n\\begin{equation}\n\\min_{x\\in D_N}\n\\frac{\\log N\\bigl(\\sqrt{2t_N}-\\sqrt{2a_N}\\bigr)}{G^{D_N}(x,x)\\sqrt{2a_N}}\\ge\\frac1g\\,\\frac{\\lambda}{\\sqrt\\theta-\\lambda}+o(1)\n\\end{equation}\nin the limit~$N\\to\\infty$. \nIndeed, this \\normalcolor shows that the last exponential in \\eqref{E:4.7a2} for the choice \n$b:=-\\frac12M^2\\log N$ is less than $\\text{\\rm e}\\mkern0.7mu^{-\\beta M^2}$ once~$N$ is sufficiently large.\n\\end{proofsect}\n\nNext we will give an analogue of Lemma~\\ref{lemma-add-field} which we restate \\textit{verbatim}, albeit with a somewhat different proof: \n\n\\begin{lemma}\n\\label{lemma-add-field2}\nSuppose $\\{N_k\\}$ is a subsequence along which~$\\zeta^D_N$ converges in law to~$\\zeta^D$. Then\n\\begin{equation}\n\\frac1{W_N}\\sum_{x\\in D_N}\\delta_{x\/N}\\otimes\\delta_{(L^{D_N}_{t_N}(x)-a_N)\/\\log N}\\otimes\\delta_{h_x^{D_N}\/\\sqrt{\\log N}}\\,\\,\\underset{\\begin{subarray}{c}\nN=N_k\\\\k\\to\\infty\n\\end{subarray}}{\\overset{\\text{\\rm law}}\\longrightarrow}\\,\\,\n\\zeta^D\\otimes\\mathfrak g,\n\\end{equation}\nwhere~$\\mathfrak g$ is the law of~$\\mathcal N(0,g)$.\n\\end{lemma}\n\n\\begin{proofsect}{Proof}\nLet~$\\zeta^{D,\\text{ext}}_N$ denote the measure on the left and let~$f\\in C_{\\text{\\rm c}}(D\\times\\mathbb R\\times\\mathbb R)$ obey~$f\\ge0$. 
As for Lemma~\\ref{lemma-add-field}, the argument hinges on proving\n\\begin{equation}\n\\text{\\rm Var}_{\\mathbb P}\\bigl(\\langle\\zeta^{D,\\text{ext}}_N,f\\rangle\\bigr)\\,\\underset{N\\to\\infty}\\longrightarrow\\,0,\\quad \\text{in $P^\\varrho$-probability},\n\\end{equation}\nwhere~$\\text{\\rm Var}_{\\mathbb P}$ denotes the variance with respect to the law of~$h^{D_N}$, conditional on~$L^{D_N}_{t_N}$. \nInvoking \\eqref{E:4.11}, we treat the sum over the pairs~$|x-y|\\ge\\epsilon N$ via \\twoeqref{E:5.12i}{E:5.13i}. \\normalcolor\nThe key difference is that we no longer have the domination of~$\\zeta^D_N$ by a DGFF process and so we have to control the sum over the pairs $x,y\\in D_N$ with~$|x-y|\\le\\epsilon N$ differently. \n\nSince~$f$ is non-negative and compactly supported in the third variable, we in fact just need to show that, for any~$M>0$, the $L^1(\\mathbb P)$-norm of \n\\begin{multline}\n\\label{E:5.3}\n\\quad\\frac1{W_N^2}\\sum_{\\begin{subarray}{c}\nx,y\\in D_N\\\\|x-y|\\le\\epsilon N\n\\end{subarray}}\n1_{\\{L^{D_N}_{t_N}(x)\\le a_N+M^2\\log N\\}}1_{\\{|h^{D_N}_x|\\le M\\sqrt{\\log N}\\}}\n\\\\\n\\times1_{\\{L^{D_N}_{t_N}(y)\\le a_N+M^2\\log N\\}}1_{\\{|h^{D_N}_y|\\le M\\sqrt{\\log N}\\}}\n\\quad\n\\end{multline}\nvanishes in $P^\\varrho$-probability in the limit as~$N\\to\\infty$ and~$\\epsilon\\downarrow0$. To this end we note that, dropping the indicators involving the DGFF, \\eqref{E:5.3} is bounded by $[\\zeta^D_N(D\\times(-\\infty,M^2])]^2$ which by Corollary~\\ref{cor-tightness-lower} is bounded in probability as~$N\\to\\infty$. Therefore, it suffices to prove that \\eqref{E:5.3} vanishes in the stated limits in $P^\\varrho\\otimes\\mathbb P$-probability. \n\nTo this end pick~$b>\\frac{M^2}{\\sqrt{g}(\\sqrt{\\theta}-\\lambda)}$ and note that, as soon as~$N$ is sufficiently large, the asymptotic forms of~$a_N$ along with the Dynkin isomorphism yield \n\\begin{multline}\n\\qquad\n1_{\\{L^{D_N}_{t_N}(x)\\le a_N+M^2\\log N\\}}1_{\\{|h^{D_N}_x|\\le M\\sqrt{\\log N}\\}}\n\\\\\n\\le 1_{\\{(\\tilde h^{D_N}_x+\\sqrt{2t_N})^2\\le 2a_N+ 3 M^2\\log N\\}}\n\\le 1_{\\{\\tilde h^{D_N}_x\\le -\\widehat a_N+b\\}}.\n\\qquad\n\\end{multline}\n It follows that \\eqref{E:5.3} is bounded by \n\\begin{equation}\n\\widehat\\eta_N^D\\otimes\\widehat\\eta_N^D\\Bigl(\\bigl\\{(x,y)\\colon|x-y|\\le\\epsilon\\bigr\\}\\times(-\\infty,b]^2\\Bigr)\n\\end{equation}\n whose~$N\\to\\infty$ and~$\\epsilon\\downarrow0$ limits are now handled as before.\n\\end{proofsect}\n\nOur next task is a derivation of a convolution identity that will, as for the thick points, ultimately characterize the limit measure uniquely: \n\n\\begin{lemma}\n\\label{lemma-conv2}\nGiven~$f\\in C_{\\text{\\rm c}}(D\\times\\mathbb R)$ with~$f\\ge0$, let (abusing our earlier notation) \n\\begin{equation}\nf^{\\ast\\mathfrak g}(x,\\ell):=\\int\\mathfrak g(\\text{\\rm d}\\mkern0.5mu h)f\\Bigl(x,\\tfrac1{2\\sqrt g(\\sqrt\\theta-\\lambda)}\\bigl(\\ell+\\tfrac{h^2}2\\bigr)\\Bigr).\n\\end{equation}\nThen for every subsequential weak limit~$\\zeta^D$ of~$\\zeta^D_N$, simultaneously for all~$f$ as above, \n\\begin{equation}\n\\label{E:4.25b}\n\\langle\\zeta^D, f^{\\ast\\mathfrak g}\\rangle\\,\\overset{\\text{\\rm law}}=\\, \\fraktura c(\\lambda)\\int Z^D_\\lambda(\\text{\\rm d}\\mkern0.5mu x)\\otimes\\text{\\rm e}\\mkern0.7mu^{\\alpha\\lambda h}\\text{\\rm d}\\mkern0.5mu h \\,\\,f(x,h),\n\\end{equation}\nwhere~$\\alpha:=2\/\\sqrt g$ and~$\\fraktura c(\\lambda)$ is as in 
\\eqref{E:1.19}.\n\\end{lemma}\n\n\\begin{proofsect}{Proof}\nPick~$f$ as above and let~$\\chi$ be the function as in the proof of Lemma~\\ref{lemma-conv}. The fact that~$f$ has compact support gives\n\\begin{equation}\n\\frac1{W_N}\\sum_{x\\in D_N}f\\bigl(\\ffrac xN, \\tilde h^{D_N}_x+\\widehat a_N\\bigr)\n=\\frac1{W_N}\\sum_{x\\in D_N} f\\Bigl(\\ffrac xN,-\\sqrt{2a_N}+\\sqrt{2L^{D_N}_{t_N}(x)+(h^{D_N}_x)^2}\\Bigr)\n\\end{equation}\nand Lemma~\\ref{lemma-no-conspire2} then bounds this by~$O(\\text{\\rm e}\\mkern0.7mu^{-\\beta M^2})$ \nplus\n\\begin{equation}\n\\frac1{W_N}\\sum_{x\\in D_N} f\\Bigl(\\ffrac xN,-\\sqrt{2a_N}+\\sqrt{2L^{D_N}_{t_N}(x)+(h^{D_N}_x)^2}\\Bigr)\\chi\\Bigl(\\frac{|h_x^{D_N}|}{M\\sqrt{\\log N}}\\Bigr).\n\\end{equation}\nThe truncation of the field now forces $L^{D_N}_{t_N}-a_N$ to be order $\\log N$.\nExpanding the square root and using the uniform continuity with the help of Corollary~\\ref{cor-tightness-lower} rewrites this as\n\\begin{equation}\n\\label{E:5.28}\n\\frac1{W_N}\\sum_{x\\in D_N}\\tilde f_{\\text{ext}}\\Bigl(\\ffrac xN,\\tfrac{L^{D_N}_{t_N}(x)-a_N}{\\log N}, \\tfrac{h_x^{D_N}}{\\sqrt{\\log N}}\\Bigr)\\chi\\Bigl(\\frac{|h_x^{D_N}|}{M\\sqrt{\\log N}}\\Bigr)+o(1),\n\\end{equation}\nwhere\n\\begin{equation}\n\\tilde f_{\\text{ext}}(x,\\ell,h):=f\\Bigl(x,\\tfrac1{2\\sqrt g(\\sqrt\\theta-\\lambda)}\\bigl(\\ell+\\tfrac{h^2}2\\bigr)\\Bigr).\n\\end{equation}\nThe rest of the proof now proceeds as before. (The exponential on the right-hand side of \\eqref{E:4.25b} does not get a negative sign because~$\\widehat\\eta_N^D$ is centered along negative sequence of order~$\\log N$.)\n\\end{proofsect}\n\n\n\nUsing Dominated and Monotone Convergence, we now readily extend \\eqref{E:4.25b} to functions of the form $1_A\\otimes f$ where~$A\\subset D$ is open with~$\\overline A\\subset D$ and~$f\\in C_{\\text{\\rm c}}(\\mathbb R)$ obeys~$f\\ge0$. For such~$f$ we then get\n\\begin{equation}\n(1_A\\otimes f)^{\\ast\\mathfrak g}=1_A\\otimes (f\\ast\\fraktura e')\\circ\\fraktura s'\n\\end{equation}\nwhere~$\\fraktura e'$ is given by the same formula as~$\\fraktura e$ in \\eqref{E:5.33} but with~$\\beta$ replaced by\n\\begin{equation}\n\\beta':=\\alpha\\bigl(\\sqrt\\theta-\\lambda\\bigr)\n\\end{equation}\nand~$\\fraktura s'(h):=h\/(2\\sqrt g(\\sqrt\\theta-\\lambda))$. We then state:\n\n\\begin{lemma}\n\\label{lemma-unique2}\nLet $\\mu_\\lambda'(\\text{\\rm d}\\mkern0.5mu h):=\\text{\\rm e}\\mkern0.7mu^{\\alpha\\lambda h}\\text{\\rm d}\\mkern0.5mu h$. Assuming~$\\theta>0$, there is at most one Radon measure~$\\nu$ on~$\\mathbb R$ such that for all~$f\\in C_{\\text{\\rm c}}^\\infty(\\mathbb R)$ with $f\\ge0$,\n\\begin{equation}\n\\label{E:5.35iw2}\n\\bigl\\langle\\nu,f\\ast\\fraktura e'\\bigr\\rangle=\\langle\\mu_\\lambda',f\\rangle.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proofsect}{Proof}\nAs in the proof of Lemma~\\ref{lemma-unique}, we recast \\eqref{E:5.35iw2} as\n\\begin{equation}\n\\bigl\\langle\\nu_\\lambda,f\\ast\\fraktura e'_\\lambda\\bigr\\rangle=\\langle{\\rm Leb},f\\rangle\n\\end{equation}\nwhere $\\nu_\\lambda(\\text{\\rm d}\\mkern0.5mu h)=\\text{\\rm e}\\mkern0.7mu^{-\\alpha\\lambda h} \\nu(\\text{\\rm d}\\mkern0.5mu h)$ and $\\fraktura e_\\lambda'(h)=\\text{\\rm e}\\mkern0.7mu^{\\alpha\\lambda h}\\fraktura e'(h)$. Since $\\tilde\\beta':=\\beta'+\\alpha\\lambda =\\alpha\\sqrt\\theta>0$, we again get that~$\\fraktura e_\\lambda'$ is integrable. 
Replacing~$\\tilde\\beta$ by~$\\tilde\\beta'$, the rest of the argument is then identical to that in the proof of Lemma~\\ref{lemma-unique}.\n\\end{proofsect}\n\n\n\nWe are now ready to give:\n\n\\begin{proofsect}{Proof of Theorem~\\ref{thm-thin}}\nThe argument proving that \\eqref{E:4.25b} determines $\\zeta^D$ uniquely is the same as for the thick points so we just need to perform the analogue of the calculation in \\eqref{E:4.38}. Denoting \n\\begin{equation}\n\\widehat\\alpha:=\\frac1{2\\sqrt g\\,(\\sqrt\\theta-\\lambda)}\\alpha\\lambda,\n\\end{equation}\nwe get\n\\begin{equation}\n\\label{E:4.38a}\n\\begin{aligned}\n\\int_{D\\times\\mathbb R}&Z^D_\\lambda(\\text{\\rm d}\\mkern0.5mu x)\\otimes\\text{\\rm e}\\mkern0.7mu^{\\widehat \\alpha\\ell}\\text{\\rm d}\\mkern0.5mu\\ell\\,\\,\\tilde f^{\\ast\\mathfrak g}(x,\\ell)\n\\\\\n&=\\int_{D\\times\\mathbb R\\times\\mathbb R}Z^D_\\lambda(\\text{\\rm d}\\mkern0.5mu x)\\otimes\\text{\\rm e}\\mkern0.7mu^{\\widehat \\alpha\\ell}\\text{\\rm d}\\mkern0.5mu\\ell\\otimes\\mathfrak g(\\text{\\rm d}\\mkern0.5mu h)\\,\nf\\Bigl(x,\\tfrac1{2\\sqrt g(\\sqrt\\theta-\\lambda)}\\bigl(\\ell+\\tfrac{h^2}2\\bigr)\\Bigr)\n\\\\\n&=\\int_{D\\times\\mathbb R\\times\\mathbb R}Z^D_\\lambda(\\text{\\rm d}\\mkern0.5mu x)\\otimes\\text{\\rm e}\\mkern0.7mu^{\\widehat\\alpha(\\ell+\\frac{h^2}2)}\\text{\\rm d}\\mkern0.5mu\\ell\\otimes\\text{\\rm e}\\mkern0.7mu^{\\,-\\widehat\\alpha\\frac{h^2}2}\\mathfrak g(\\text{\\rm d}\\mkern0.5mu h)\\,\nf\\Bigl(x,\\tfrac1{2\\sqrt g(\\sqrt\\theta-\\lambda)}\\bigl(\\ell+\\tfrac{h^2}2\\bigr)\\Bigr)\n\\\\\n&=2\\sqrt g(\\sqrt\\theta-\\lambda)\\Bigl(\\int_\\mathbb R\\mathfrak g(\\text{\\rm d}\\mkern0.5mu h)\\text{\\rm e}\\mkern0.7mu^{\\,-\\widehat\\alpha\\frac{h^2}2}\\Bigr)\n\\int_{D\\times\\mathbb R}Z^D_\\lambda(\\text{\\rm d}\\mkern0.5mu x)\\otimes\\text{\\rm e}\\mkern0.7mu^{\\alpha \\lambda r}\\text{\\rm d}\\mkern0.5mu r\\,f(x,r)\\,.\n\\end{aligned}\n\\end{equation}\nThe Gaussian integral on the last line equals the root of $\\frac{\\sqrt\\theta-\\lambda}{\\sqrt\\theta}$. It follows that~$\\zeta^D_N$ converges in law to the measure\n\\begin{equation}\n\\frac{\\theta^{1\/4}}{2\\sqrt{g}\\,(\\sqrt\\theta-\\lambda)^{3\/2}}\\,\n\\fraktura c(\\lambda)Z^D_\\lambda(\\text{\\rm d}\\mkern0.5mu x)\\otimes\\text{\\rm e}\\mkern0.7mu^{\\widehat \\alpha\\ell}\\text{\\rm d}\\mkern0.5mu\\ell.\n\\end{equation}\nThis is the desired claim.\n\\end{proofsect}\n\n\n\n\\section{Light and avoided points}\n\\label{sec6}\\noindent\\nopagebreak\nIn this section we will deal with the point measures $\\vartheta^D_N$ and~$\\kappa^D_N$ associated with the light and avoided points, respectively. The argument follows the blueprint of the proof for the~$\\lambda$-thick (and~$\\lambda$-thin) points although important differences arise due to a different scaling of~$\\widehat W_N$ with~$N$ compared to~$W_N$. We begin with an analogue of Lemma~\\ref{lemma-add-field} where this issue becomes quite apparent.\n\n\\begin{lemma}\n\\label{lemma-7.1}\nSuppose $\\{N_k\\}$ is a subsequence along which~$\\vartheta^D_N$ converges in law to~$\\vartheta^D$. 
Then\n\\begin{equation}\n\\frac{\\sqrt{\\log N}}{\\widehat W_N }\\sum_{x\\in D_N}\\delta_{x\/N}\\otimes\\delta_{L^{D_N}_{t_N}(x)}\\otimes\\delta_{h^{D_N}_x}\n\\,\\,\\underset{\\begin{subarray}{c}\nN=N_k\\\\k\\to\\infty\n\\end{subarray}}{\\overset{\\text{\\rm law}}\\longrightarrow}\\,\\,\n\\vartheta^D\\otimes\\frac1{\\sqrt{2\\pi g}}\\,{\\rm Leb}.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proofsect}{Proof}\nLet $\\vartheta^{D,\\text{ext}}_N$ denote the measure on the left and pick~$f\\in C_{\\text{\\rm c}}(D\\times[0,\\infty)\\times\\mathbb R)$. \\normalcolor Suppose that~$f(x,\\ell,h)=0$ unless~$x\\in A$, where $A$ is an open set with~$\\overline A\\subset D$, and unless $\\ell,|h|\\le M$ for some~$M>0$. Noting that the probability density of~$h^{D_N}_x$ is~$(1+o(1))(2\\pi g\\log N)^{-1\/2}$ with~$o(1)\\to0$ as~$N\\to\\infty$ uniformly over any compact interval shows, with the help of the tightness of~$\\{\\vartheta^D_N\\colon N\\ge1\\}$ proved in Corollary~\\ref{cor-light}, that\n\\begin{equation}\n\\label{E:6.2u}\n\\mathbb E\\bigl\\langle\\vartheta^{D,\\text{ext}}_N,f\\bigr\\rangle=o(1)+\\frac1{\\sqrt{2\\pi g}}\\bigl\\langle\\vartheta^D_N\\otimes{\\rm Leb},f\\bigr\\rangle,\n\\end{equation}\nwhere~$o(1)\\to0$ in $P^\\varrho$-probability as~$N\\to\\infty$. As before, we just need to prove concentration of $\\langle\\vartheta^{D,\\text{ext}}_N,f\\rangle$ around the (conditional) expectation with respect to~$h^{D_N}$. \n\nConsider the conditional variance $\\text{\\rm Var}_{\\mathbb P}(\\langle\\vartheta^{D,\\text{ext}}_N,f\\rangle)$ and use the additive structure of $\\langle\\vartheta^{D,\\text{ext}}_N,f\\rangle$ to write it as the sum of\n\\begin{equation}\n\\label{E:7.3ie} \n\\mathcal C_N(x,y):= \n\\text{\\rm Cov}_{\\mathbb P}\\Bigl(f\\bigl(\\ffrac xN,L^{D_N}_{t_N}(x),h^{D_N}_x\\bigr),\nf\\bigl(\\ffrac yN,L^{D_N}_{t_N}(y),h^{D_N}_y\\bigr)\\Bigr)\n\\end{equation}\nover all pairs~$x,y\\in A_N:=\\{z\\in D_N\\colon z\/N\\in A\\}$. In order to estimate this quantity, for any $x,y\\in D_N$ let~$\\rho^{D_N}_{x,y}\\colon\\mathbb R^2\\to[0,\\infty)$ denote the joint probability density of $(h^{D_N}_x,h^{D_N}_y)$ with respect to the Lebesgue measure on~$\\mathbb R^2$. Abbreviate\n\\begin{multline}\n\\quad\nF(h_x,h_y;h_x',h_y') := \\frac12\\Bigl(f\\bigl(\\ffrac xN,L^{D_N}_{t_N}(x),h_x\\bigr)-f\\bigl(\\ffrac xN,L^{D_N}_{t_N}(x),h_x'\\bigr)\\Bigr)\n\\\\\n\\times\\Bigl(f\\bigl(\\ffrac yN,L^{D_N}_{t_N}(y),h_y\\bigr)-f\\bigl(\\ffrac yN,L^{D_N}_{t_N}(y),h_y'\\bigr)\\Bigr).\\quad\n\\end{multline} \nInvoking the identity $\\text{\\rm Cov}(X,Y)=\\frac12E((X-X')(Y-Y'))$, where $(X',Y')$ is an independent copy of~$(X,Y)$, we get\\begin{equation}\n\\label{E:7.5ie}\n\\mathcal C_N(x,y)=\\int_{[-M,M]^4}\\text{\\rm d}\\mkern0.5mu h_x\\,\\text{\\rm d}\\mkern0.5mu h_y\\,\\text{\\rm d}\\mkern0.5mu h_x'\\,\\text{\\rm d}\\mkern0.5mu h_y'\\,\\,\\rho^{D_N}_{x,y}(h_x,h_y)\\rho^{D_N}_{x,y}(h_x',h_y')F(h_x,h_y;h_x',h_y'),\n\\end{equation}\nwhere the restriction on the domain of integration reflects the support restrictions on~$f$ in the third variable.\n\n\nOur way of control \\eqref{E:7.5ie} hinges on the observation that, once we replace both probability densities by constants, the integral vanishes. For these constants we will choose the value of the probability density at $(h,h')=(0,0)$. 
To account for the errors, we thus introduce\n\\begin{equation}\n\\label{E:7.6ie}\n\\delta_N(M,r):=\\max_{\\begin{subarray}{c}\nx,y\\in A_N\\\\|x-y|\\ge r\n\\end{subarray}}\\,\\sup_{|h|,|h'|\\le M}\\,\n\\Bigl|\\,\\log \\rho^{D_N}_{x,y}(h,h')-\\log\\rho^{D_N}_{x,y}(0,0)\\Bigr|.\n\\end{equation}\nUsing that $|\\text{\\rm e}\\mkern0.7mu^x-1|\\le \\text{\\rm e}\\mkern0.7mu^{|x|}-1$ along with\n\\begin{equation}\n\\bigl|F(h_x,h_y;h_x',h_y')\\bigr|\\le 2\\Vert f\\Vert_\\infty^2\\,1_{\\{L^{D_N}_{t_N}(x)\\le M\\}}1_{\\{L^{D_N}_{t_N}(y)\\le M\\}},\n\\end{equation}\nas implied by the restrictions on the support of~$f$, we thus get\n\\begin{equation}\n\\label{E:7.8ie}\n\\mathcal C_N(x,y)\\le2\\Vert f\\Vert_\\infty^2\\,1_{\\{L^{D_N}_{t_N}(x)\\le M\\}}1_{\\{L^{D_N}_{t_N}(y)\\le M\\}}\\,\\rho^{D_N}_{x,y}(0,0)^2\\bigr(\\text{\\rm e}\\mkern0.7mu^{2\\delta_N(M,|x-y|)}-1\\bigr).\n\\end{equation}\nIt remains to analyze the $N$-dependence of the terms~$\\rho^{D_N}_{x,y}(0,0)$ and~$\\delta_N(M,|x-y|)$.\n\nUsing \\eqref{E:5.12i} and the fact that $\\fraktura b_{D_N,x}(y)=G^{D_N}(x,y)\/G^{D_N}(x,x)$, the law of~$h^{D_N}_y$ conditioned on~$h^{D_N}_x$ is normal with mean $h^{D_N}_x\\fraktura b_{D_N,x}(y)$ and variance \n\\begin{equation}\n\\label{E:7.9ie}\n\\sigma_N^2(x,y):=G^{D_N}(y,y)-\\frac{G^{D_N}(x,y)^2}{G^{D_N}(x,x)}.\n\\end{equation}\nUsing the shorthand~$\\sigma_N^2(x):=G^{D_N}(x,x)$ for the variance of~$h^{D_N}_x$ and~$\\phi(y)$ for the value of~$\\fraktura b_{D_N,x}(y)$, we thus have\n\\begin{equation}\n\\rho_{x,y}^{D_N}(h_x,h_y) = \\frac1{2\\pi\\sigma_N(x)\\sigma_N(x,y)}\\,\\text{\\rm e}\\mkern0.7mu^{-\\frac{h_x^2}{2\\sigma_N^2(x)}-\\frac{[h_y-h_x\\phi(y)]^2}{2\\sigma_N^2(x,y)}}\\,.\n\\end{equation}\nThanks to~$\\overline A\\subset D$, the difference $\\sigma_N^2(x)-g\\log N$ is bounded uniformly in $x\\in A_N$ and $N\\ge1$.\nInvoking the representation of the Green function using the potential kernel (see, e.g.,~\\cite[Lemma~B.3]{BL3}) and then the asymptotic growth of the potential kernel (see, e.g., \\cite[Lemma~B.4]{BL3}) we also get that, for any~$r_N\\to\\infty$,\n\\begin{equation}\n\\min_{\\begin{subarray}{c}\nx,y\\in A_N\\\\|x-y|\\ge r_N\n\\end{subarray}}\\sigma^2_N(x,y)\\,\\underset{N\\to\\infty}\\longrightarrow\\,\\infty.\n\\end{equation}\nPutting these together with~$\\phi(y)\\in[0,1]$, \\eqref{E:7.6ie} then yields\n\\begin{equation}\n(\\log N)\\Bigl(\\,\\max_{\\begin{subarray}{c}\nx,y\\in A_N\\\\|x-y|\\ge r_N\n\\end{subarray}}\\rho_{x,y}^{D_N}(0,0)^2\\Bigr)\\bigr(\\text{\\rm e}\\mkern0.7mu^{2\\delta_N(M,r_N)}-1\\bigr)\\,\\underset{N\\to\\infty}\\longrightarrow\\,0.\n\\end{equation}\nUsing this in \\eqref{E:7.8ie} we conclude that, as soon as~$r_N\\to\\infty$,\n\\begin{equation}\n\\label{E:7.11oe}\n\\frac{\\log N}{\\widehat W_N^2}\\sum_{\\begin{subarray}{c}\nx,y\\in D_N\\\\|x-y|\\ge r_N\n\\end{subarray}}\n\\mathcal C_N(x,y)= o(1) \\,\\vartheta^D_N\\bigl(D\\times[0,M]\\bigr)^2,\n\\end{equation}\nwhere~$o(1)$ is a numerical sequence tending to zero as~$N\\to\\infty$. 
By the tightness of $\\{\\vartheta^D_N\\colon N\\ge1\\}$ (cf Corollary~\\ref{cor-light}), \\eqref{E:7.11oe} thus tends to zero in~$P^\\varrho$-probability.\n\nTo infer that $\\text{\\rm Var}_{\\mathbb P}(\\langle\\vartheta^{D,\\text{ext}}_N,f\\rangle)$ vanishes as~$N\\to\\infty$ in $P^\\varrho$-probability, we just note that, by dropping one of the indicators in \\eqref{E:7.8ie}, the sum over $x,y\\in D_N$ complementary to those in \\eqref{E:7.11oe} is at most order $r_N^2(\\log N)\/\\widehat W_N$ times~$\\Vert f\\Vert_\\infty^2\\vartheta^D_N(D\\times[0,M])$. As~$\\widehat W_N$ grows polynomially with~$N$ for our choices of~$\\theta$, there is a way to choose~$r_N\\to\\infty$ so that this tends to zero in $P^\\varrho$-probability as well.\n\\end{proofsect}\n\nNext we prove an analogue of Lemma~\\ref{lemma-conv}:\n\n\\begin{lemma}\nGiven~$f\\in C_{\\text{\\rm c}}(D\\times[0,\\infty))$ with~$f\\ge0$ denote\n\\begin{equation}\nf^{\\ast{\\rm Leb}}(x,\\ell):=\\frac1{\\sqrt{2\\pi g}}\\int_{\\mathbb R}\\text{\\rm d}\\mkern0.5mu h\\,\\,f\\bigl(x,\\ell+\\tfrac{h^2}2\\bigr).\n\\end{equation}\nThen for every weak subsequential limit~$\\vartheta^D$ of~$\\vartheta^D_N$,\n\\begin{equation}\n\\label{E:6.3}\n\\bigl\\langle\\vartheta^D,f^{\\ast{\\rm Leb}}\\bigr\\rangle\\,\\overset{\\text{\\rm law}}=\\,\\fraktura c(\\sqrt\\theta)\\int Z^D_{\\sqrt\\theta}(\\text{\\rm d}\\mkern0.5mu x)\\otimes\\text{\\rm e}\\mkern0.7mu^{\\alpha\\sqrt\\theta\\,h}\\text{\\rm d}\\mkern0.5mu h \\,\\,f\\bigl(x,\\tfrac{h^2}2\\bigr)\n\\end{equation}\nsimultaneously for all~$f$ as above. \n\\end{lemma}\n\n\\begin{proofsect}{Proof}\nPick~$f\\in C_{\\text{\\rm c}}(D\\times[0,\\infty))$ with~$f\\ge0$ and set $f^{\\text{ext}}(x,\\ell,h):=f(x,\\ell+\\tfrac12h^2)$. Then\n\\begin{equation}\n\\begin{aligned}\n\\sum_{x\\in D_N}f\\Bigl(\\ffrac xN,\\frac12(\\tilde h^{D_N}_x+\\sqrt{2t_N}\\bigr)^2\\Bigr)\n&=\\sum_{x\\in D_N}f\\Bigl(\\ffrac xN,L^{D_N}_{t_N}(x)+\\frac12(h^{D_N}_x)^2\\Bigr)\n\\\\\n&=\\sum_{x\\in D_N}f^{\\text{ext}}\\Bigl(\\ffrac xN, L^{D_N}_{t_N}(x),h^{D_N}_x\\Bigr).\n\\end{aligned}\n\\end{equation}\nSince~$f^{\\text{ext}}$ is compactly supported in all variables, Lemma~\\ref{lemma-7.1} tells us that, after multiplying by $\\sqrt{\\log N}\\,\/\\widehat W_N $ and specializing~$N$ to the subsequence along which~$\\vartheta^D_N$ tends in law to~$\\vartheta^D$, the right-hand side tends to $\\bigl\\langle\\vartheta^D,f^{\\ast{\\rm Leb}}\\bigr\\rangle$. \\normalcolor By \\eqref{E:1.19} and the fact that~$\\sqrt{2t_N}\\sim2\\sqrt g\\sqrt\\theta\\log N$, the left-hand side tends to the measure on the right of \\eqref{E:6.3}.\n\\end{proofsect}\n\nWith these in hand we are ready to prove convergence of~$\\vartheta^D_N$'s:\n\n\\begin{proofsect}{Proof of Theorem~\\ref{thm-light}}\nPick~$A\\subset D$ open with $\\overline A\\subset D$. 
Taking a sequence of compactly supported functions converging upward to $f(x,h):=1_A(x)\\text{\\rm e}\\mkern0.7mu^{-s h}1_{[0,\\infty)}(h)$, where~$s>0$, and denoting\n\\begin{equation}\n\\tilde\\mu_A(B):=\\vartheta^D(A\\times B),\n\\end{equation}\nthe Tonelli and Monotone Convergence Theorems yield\n\\begin{equation}\n\\label{E:7.18iw}\n\\int_0^\\infty\\tilde\\mu_A(\\text{\\rm d}\\mkern0.5mu\\ell)\\text{\\rm e}\\mkern0.7mu^{-\\ell s} \\,\\overset{\\text{\\rm law}}=\\, \\sqrt{2\\pi g}\\,\\fraktura c(\\sqrt\\theta)Z^D_{\\sqrt\\theta}(A)\\,\\text{\\rm e}\\mkern0.7mu^{\\frac{\\alpha^2\\theta}{2s}},\\quad s>0.\n\\end{equation}\nNote that $s\\mapsto \\text{\\rm e}\\mkern0.7mu^{\\frac{\\alpha^2\\theta}{2s}}$ is the Laplace transform of the measure in \\eqref{E:1.33}. Since the Laplace transform determines Borel measures on $[0,\\infty)$ uniquely, the claim follows by the fact that the right-hand side is a Borel measure in~$A$ which is determined by its values on~$A$ open with the closure in~$D$. \n\\end{proofsect}\n\nIn order to extend Theorem~{\\ref{thm-light} to the control of the measure $\\kappa^D_N$ associated with the avoided points, we need the following estimate:\n\n\\begin{lemma}\n\\label{lemma-6.3}\nLet~$A\\subset D$ be open with~$\\overline A\\subset D$. Then\n\\begin{equation}\n\\lim_{\\epsilon\\downarrow0}\\,\\limsup_{N\\to\\infty}\\,\\frac{N^2}{\\widehat W_N }\\max_{\\begin{subarray}{c}\nx\\in D_N\\\\ x\/N\\in A\n\\end{subarray}}\nP^\\varrho\\Bigl(00\\Bigr)\n\\\\\n&=\\mathbb P\\Bigl(\\frac12(\\tilde h^{D_N}_x-\\sqrt{2t_N})^2\\le 2\\epsilon\\Bigr)-\\mathbb P\\Bigl(\\frac12(h^{D_N}_x)^2\\le 2\\epsilon\\Bigr)P^\\varrho\\bigl(L^{D_N}_{t_N}(x)=0\\bigr).\n\\end{aligned}\n\\end{equation}\nThe fact that $|G^{D_N}(x,x)- g\\log N|$ is bounded uniformly for all~$x\\in D_N$ with~$x\/N\\in A$ then shows\n\\begin{equation}\n\\mathbb P\\Bigl(\\frac12(\\tilde h^{D_N}_x-\\sqrt{2t_N})^2\\le \\epsilon\\Bigr)\n=\\bigl(2+o(1)\\bigr)\\sqrt{2\\epsilon}\\,\\frac1{\\sqrt{2\\pi G^{D_N}(x,x)}}\\,\n\\,\\text{\\rm e}\\mkern0.7mu^{-\\frac{t_N}{G^{D_N}(x,x)}}\n\\end{equation}\nwhile\n\\begin{equation}\nP\\Bigl(\\frac12 (h^{D_N}_x)^2\\le\\epsilon\\Bigr)= \n\\bigl(2+o(1)\\bigr)\\sqrt{2\\epsilon}\\,\\frac1{\\sqrt{2\\pi G^{D_N}(x,x)}},\n\\end{equation}\nwhere $o(1)\\to0$ as~$\\epsilon\\downarrow0$ uniformly in~$N\\ge1$ and~$x$ as above. In light of \\eqref{E:L-vanish}, the right-hand side of \\eqref{E:monster} divided by $\\widehat W_N \/N^2$-times the DGFF probability on the extreme left tends to zero as~$N\\to\\infty$ and~$\\epsilon\\downarrow0$. \n\\end{proofsect}\n\nWe are ready to give:\n\n\\begin{proofsect}{Proof of Theorem~\\ref{thm-avoid}}\nTake~$f_n\\in C_{\\text{\\rm c}}([0,\\infty))$ such that~$f_n(h):=(1-nh)\\vee0$ and pick~$A\\subset D$ open with $\\overline A\\subset D$. Then\n\\begin{equation}\n E^\\varrho \\bigl|\\langle\\kappa^D_N,1_A\\rangle-\\langle\\vartheta^D_N,1_A\\otimes f_n\\rangle\\bigr|\n\\le\\frac2{\\widehat W_N }\\sum_{\\begin{subarray}{c}\nx\\in D_N\\\\ x\/N\\in A\n\\end{subarray}}P^\\varrho\\bigl(0 0$ and some $\\lambda \\in (0,1)$ and recall the notation~$\\widehat \\zeta^D_N$ for the extended point measures from \\eqref{E:zeta-local} that describe the $\\lambda$-thick points along with their local structure. 
Let $\widehat a_N$ be the sequence given by \eqref{E:4.1}.
We will compare~$\widehat \zeta_N^D$ to the point measures
\begin{equation}
\label{E:full-etaDGFF}
\widehat \eta^D_N :=
\frac{1}{W_N} \sum_{x \in D_N} \delta_{x/N} \otimes 
\delta_{\widetilde h_x^{D_N} - \widehat a_N} 
\otimes \delta_{\left\{\frac{2\sqrt{2 a_N}+2(\widetilde h_x^{D_N} - \widehat a_N) + (\widetilde h_{x+z}^{D_N} - \widetilde h_x^{D_N})}{2\log N} (\widetilde h_x^{D_N} - \widetilde h_{x+z}^{D_N})~:~z \in \mathbb{Z}^2 \right\}}
\end{equation} 
associated with the DGFF~$\widetilde h^{D_N}$. For that we need:

\begin{lemma}[Gradients of squared DGFF]
\label{lemma-gradient-DGFF}
For all $b \in\mathbb R$, all $M \ge1$ and all $r > 0$, 
\begin{multline}
\label{E:9.3}
\lim_{N \to \infty} \frac{1}{W_N}\sum_{x \in D_N} P^{\varrho} 
\bigl(L_{t_N}^{D_N} (x) \geq a_N + b \log N \bigr) \\
\times \mathbb P \biggl(\,\bigcup_{ z \in \Lambda_r (0)}\Bigl\{
\bigl|(h_x^{D_N})^2 - (h_{x+z}^{D_N})^2 \bigr| > (\log N)^{3/4},
\,|h_x^{D_N}| \leq M \sqrt{\log N}\Bigr\}\biggr)
= 0,
\end{multline}
where $\Lambda_r (x) := \{z \in \Bbb{Z}^2 \colon |z-x| \leq r \}$.
\end{lemma}

\begin{proofsect}{Proof}
When $|h_x^{D_N}| \leq M \sqrt{\log N}$,
we have
\begin{equation}
\bigl|(h_x^{D_N})^2 - (h_{x+z}^{D_N})^2 \bigr| \leq
 \bigl|h_x^{D_N} - h_{x+z}^{D_N} \bigr|^2 + 2M \sqrt{\log N} \bigl|h_x^{D_N} - h_{x+z}^{D_N} \bigr|.
\end{equation}
Thus, for $M\ge1$, the term corresponding to $x \in D_N$ 
on the left-hand side of \eqref{E:9.3} is bounded from above by
\begin{equation}
\label{E:8.4nw}
\sum_{z \in \Lambda_r (0)}
P^{\varrho}
\bigl(L_{t_N}^{D_N} (x) \geq a_N + b \log N \bigr) 
\mathbb{P} \Bigl(\bigl|h_x^{D_N} - h_{x+z}^{D_N} \bigr| > (4M)^{-1}(\log N)^{1/4} \Bigr).
\end{equation}
For $\epsilon > 0$, abbreviate $D_N^{\epsilon} := \{x \in D_N \,\colon\,\text{\rm d}_\infty(x, D_N^{\text{c}}) > \epsilon N \}$. 
Then for any $x \in D_N^{\epsilon}$ and $z \in \Lambda_r (0)$,
$\text{Var}_{\mathbb P} (h_x^{D_N} - h_{x+z}^{D_N})$ is equal to
\begin{multline}
\qquad
G^{D_N} (x, x) + G^{D_N} (x+z,x+z) - 2G^{D_N} (x,x+z) 
\\= g\log N + g \log N - 2 g \log (N/(1+|z|)) + O(1) 
\\\leq 2g \log (1+r) + O(1).
\qquad
\end{multline}
The standard Gaussian tail estimate bounds \eqref{E:8.4nw} by $o(1)P^\varrho(L_{t_N}^{D_N} (x) \geq a_N + b \log N)$ with $o(1)\to0$ uniformly in~$x\in D_N^\epsilon$. Lemma \ref{lemma-upper} subsequently shows that the sum over $x \in D_N^{\epsilon}$ on the left-hand side of \eqref{E:9.3} is $o(1)$ as $N \to \infty$.
The sum over $x \in D_N \smallsetminus D_N^{\epsilon}$ is bounded from above by
$E^\varrho\bigl(\zeta_N^D \bigl((D \smallsetminus D^{\epsilon}) \times [b, \infty)\bigr)\bigr)$, which tends to $0$ 
as $N \to \infty$ followed by $\epsilon \downarrow 0$ by Corollary \ref{cor-tightness-upper}. 
\end{proofsect}

We are ready to give:

\begin{proofsect}{Proof of Theorem~\ref{thm-thick-local}}
Pick any $f = f(x, \ell, \phi) \in C_{\text{c}} (D \times \mathbb R \times \mathbb R^{\mathbb{Z}^2})$
which depends only on a finite number of coordinates of $\phi$, say, those in $\Lambda_r (0)$
for some $r > 0$. 
The following identity is key for the entire proof:
\begin{multline}
\qquad
\left\{\sqrt{2 a_N}+(\widetilde h_x^{D_N} - \widehat a_N) + \frac{1}{2}(\widetilde h_{x+z}^{D_N} - \widetilde h_x^{D_N}) \right\}
(\widetilde h_x^{D_N} - \widetilde h_{x+z}^{D_N})
\\
= L_{t_N}^{D_N} (x) - L_{t_N}^{D_N} (x+z) + \frac{1}{2} (h_x^{D_N})^2 - \frac{1}{2} (h_{x+z}^{D_N})^2.
\qquad
\end{multline} 
To verify this, note that, thanks to $\sqrt{2a_N}-\widehat a_N=\sqrt{2t_N}$, the left-hand side equals $\frac12(\widetilde h_x^{D_N}+\sqrt{2t_N})^2-\frac12(\widetilde h_{x+z}^{D_N}+\sqrt{2t_N})^2$ which, by the coupling identity applied at both~$x$ and~$x+z$, equals the right-hand side. We then get
\begin{multline}
\qquad
\label{E:9.10}
\langle \widehat \eta_N^D, f \rangle
=\frac{1}{W_N}
\sum_{x \in D_N} f \Biggl(\frac{x}{N}, \sqrt{2 L_{t_N}^{D_N} (x) + (h_x^{D_N})^2} - \sqrt{2a_N}, 
\\
\biggl\{\frac{\nabla_z L_{t_N}^{D_N} (x)}{\log N}
+ \frac{\nabla_z (h^{D_N})^2 (x)}{2 \log N}\colon z \in \mathbb{Z}^2 \biggr\} \Biggr),
\qquad
\end{multline}
where, given a function $s\colon \Bbb{Z}^2 \to \mathbb R$, we abbreviated $\nabla_z s (x) := s (x) - s (x+z)$.

In order to control gradients of the DGFF squared that appear on the right-hand side of \eqref{E:9.10}, set
\begin{equation}
\label{E:gradient-squared-DGFF}
G_{N, r} (x) := \bigcap_{z \in \Lambda_r (0)}\Bigl\{\bigl|\nabla_z (h^{D_N})^2 (x)\bigr| \leq (\log N)^{3/4}
\Bigr\}
\end{equation}
and let, as before, $\chi \colon [0, \infty) \to [0,1]$ be a non-increasing, continuous function
with $\chi(x) = 1$ for $0 \leq x \leq 1$ and $\chi(x) = 0$ for $x \geq 2$.
By Lemmas \ref{lemma-no-conspire} and \ref{lemma-gradient-DGFF},
we may truncate \eqref{E:9.10} by introducing $1_{G_{N, r} (x)}$ and $\chi (|h_x^{D_N}|/(M \sqrt{\log N}))$
for $M > 0$ under the sum and write $\langle \widehat \eta_N^D, f \rangle$ as the sum of a random quantity whose $L^1$-norm is at most a constant times $\|f\|_{\infty} \text{\rm e}\mkern0.7mu^{- \beta M^2}$
uniformly in~$N$ and the quantity
\begin{multline}
\label{E:9.12}
\quad
\frac{1}{W_N}
\sum_{x \in D_N} 1_{G_{N,r} (x)} f \Biggl(\frac xN, \sqrt{2 L_{t_N}^{D_N} (x) + (h_x^{D_N})^2} - \sqrt{2a_N}, \\
\biggl\{\frac{\nabla_z L_{t_N}^{D_N} (x)}{\log N}
+ \frac{\nabla_z (h^{D_N})^2 (x)}{2 \log N}\colon z \in \mathbb{Z}^2 \biggr\} \Biggr)
 \chi \biggl(\frac{|h_x^{D_N}|}{M \sqrt{\log N}} \biggr).
 \quad
\end{multline}
Using the uniform continuity of $f$
along with Corollary \ref{cor-tightness-upper} and
Lemma \ref{lemma-gradient-DGFF},
we may rewrite \eqref{E:9.12} as the sum of a random quantity which tends to $0$
as $N \to \infty$ in probability and the quantity
\begin{equation}
\label{E:9.13}
\frac{1}{W_N} \sum_{x \in D_N} f_{\text{ext}} \Biggl(\frac xN, \frac{L_{t_N}^{D_N} (x) - a_N}{\log N},
\Bigl\{\frac{\nabla_z L_{t_N}^{D_N} (x)}{\log N} \Bigr\}_{z \in \mathbb{Z}^2},
\frac{h_x^{D_N}}{\sqrt{\log N}} \Biggr) \chi \biggl(\frac{|h_x^{D_N}|}{M \sqrt{\log N}} \biggr),
\end{equation}
where we introduced
\begin{equation}
f_{\text{ext}} (x, \ell, \phi, h) := f 
\biggl(x, \frac{1}{2 \sqrt{g} (\sqrt{\theta} + \lambda)}
\bigl(\ell + \tfrac{1}{2} h^2 \bigr), \phi \biggr).
\end{equation}
This replacement is permissible since, uniformly for $L_{t_N}^{D_N}(x)=a_N+O(\log N)$ and $|h_x^{D_N}|\le2M\sqrt{\log N}$, we have $\sqrt{2 L_{t_N}^{D_N} (x) + (h_x^{D_N})^2} - \sqrt{2a_N}=\frac{1+o(1)}{2\sqrt g(\sqrt\theta+\lambda)}\bigl(\frac{L_{t_N}^{D_N}(x)-a_N}{\log N}+\frac{(h_x^{D_N})^2}{2\log N}\bigr)$, thanks to $\sqrt{2a_N}=(2\sqrt g(\sqrt\theta+\lambda)+o(1))\log N$.
Note that Corollary \ref{cor-tightness-upper} implies that $\{\widehat \zeta_N^D\colon N\ge1\}$ is tight. 
Let $\widehat \zeta^D$ be a subsequential weak limit of $\widehat \zeta_N^D$ along a subsequence $\{N_k\}$.
By the same argument as in the proof of Lemma \ref{lemma-add-field},
as $k \to \infty$ followed by $M \to \infty$, 
$\langle \widehat \eta_{N_k}^D, f \rangle$ converges in law to
\begin{equation}
\label{E:8.13nw}
\int \widehat \zeta^D (\text{\rm d}\mkern0.5mu x\,\text{\rm d}\mkern0.5mu\ell\,\text{\rm d}\mkern0.5mu\phi) \otimes \mathfrak{g} (\text{\rm d}\mkern0.5mu h)
f_{\text{ext}}(x, \ell, \phi, h).
\end{equation}
On the other hand, noting that~$\sqrt{2a_N}/\log N\to 2\sqrt g(\sqrt\theta+\lambda)$, \cite[Theorem 2.1]{BL4} shows that $\langle \widehat \eta_N^D, f \rangle$ converges, as $N \to \infty$, in law to
\begin{equation}
 \int \fraktura c(\lambda)Z_{\lambda}^D (\text{\rm d}\mkern0.5mu x) \otimes \text{\rm e}\mkern0.7mu^{- \alpha \lambda h} \text{\rm d}\mkern0.5mu h \otimes 
\nu_{\theta, \lambda} (\text{\rm d}\mkern0.5mu \phi) f(x, h, \phi).
\end{equation}
The arguments in the proof of Theorem \ref{thm-thick} show that the class of functions $f_{\text{ext}}$ arising from~$f\in C_{\text{\rm c}}(D\times\mathbb R\times\mathbb R^{\mathbb Z^2})$ above determines the measure~$\widehat\zeta^D$ uniquely from \eqref{E:8.13nw}; the calculation \eqref{E:4.38} then gives
\begin{equation}
\widehat \zeta^D \,\,\overset{\text{\rm law}}=\,\,
\frac{\theta^{1/4}}{2\sqrt{g} (\sqrt{\theta} + \lambda)^{3/2}} \fraktura c(\lambda) 
Z_{\lambda}^D (\text{\rm d}\mkern0.5mu x) \otimes \text{\rm e}\mkern0.7mu^{- \alpha (\theta, \lambda) \ell} \text{\rm d}\mkern0.5mu\ell \otimes 
\nu_{\theta, \lambda} (\text{\rm d}\mkern0.5mu\phi).
\end{equation}
This is the desired claim.
\end{proofsect}

\subsection{Local structure of thin points}
We move to the proof of convergence of the point measures $\widehat \zeta_N^D$ 
associated with the $\lambda$-thin points. The proof follows very much the same steps as for the thick points, so we will be brief. 
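The one asymptotic input that changes is the centering. In analogy with the convergence $\sqrt{2a_N}/\log N\to2\sqrt g(\sqrt\theta+\lambda)$ used for the thick points, under \eqref{E:1.22} with $\lambda\in(0,1\wedge\sqrt\theta)$ one now has
\begin{equation}
\frac{\sqrt{2a_N}}{\log N}\,\underset{N\to\infty}\longrightarrow\,2\sqrt g\,\bigl(\sqrt\theta-\lambda\bigr),
\end{equation}
which is what produces the factor $\sqrt\theta-\lambda$ in the formulas below; moreover, the field is now centered at $-\widehat a_N$ instead of~$\widehat a_N$.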
Assume that~$a_N$ and~$t_N$ satisfy \eqref{E:1.22} with some $\theta > 0$ and some $\lambda \in (0, 1 \wedge \sqrt{\theta} )$.
As a counterpart to Lemma \ref{lemma-gradient-DGFF}, we need the following:

\begin{lemma}[Gradients of squared DGFF]
\label{lemma-control-gradient-DGFF2}
For all $b > 0$, all $M \ge1$ and all $r > 0$,
\begin{multline}
\lim_{N \to \infty}
\frac{1}{W_N}\sum_{x \in D_N} 
P^{\varrho} \Bigl(a_N - b \log N \leq L_{t_N}^{D_N} (x) \leq a_N + b \log N \Bigr) \\
\times 
\mathbb P \biggl(\,\bigcup_{ z \in \Lambda_r (0)}\Bigl\{
\bigl|(h_x^{D_N})^2 - (h_{x+z}^{D_N})^2 \bigr| > (\log N)^{3/4},
\,|h_x^{D_N}| \leq M \sqrt{\log N}\Bigr\}\biggr)
= 0.
\end{multline}
\end{lemma}

\begin{proofsect}{Proof}
The proof is almost the same as that of Lemma \ref{lemma-gradient-DGFF}
except that we use Lemma \ref{lemma-lower} 
and Corollary \ref{cor-tightness-lower}
instead of Lemma \ref{lemma-upper} and Corollary \ref{cor-tightness-upper}, respectively.
\end{proofsect}

We are again ready to give:

\begin{proofsect}{Proof of Theorem~\ref{thm-thin-local}}
Set 
\begin{equation}
\widehat a_N := \sqrt{2t_N} - \sqrt{2a_N}
\end{equation}
and pick any
$f = f(x, \ell, \phi) \in C_{\text{c}} (D \times \mathbb R \times \mathbb R^{\mathbb{Z}^2})$
that depends only on a finite number of coordinates of~$\phi$.
Let $\widehat \eta_N^D$ be the point process obtained from \eqref{E:full-etaDGFF}
by replacing~$\widehat a_N$ by~$- \widehat a_N$. Using the calculation
\begin{multline}
\qquad
\left\{\sqrt{2 a_N}+(\widetilde h_x^{D_N} + \widehat a_N) + \frac{1}{2}(\widetilde h_{x+z}^{D_N} - \widetilde h_x^{D_N}) \right\}
(\widetilde h_x^{D_N} - \widetilde h_{x+z}^{D_N}) 
\\
= L_{t_N}^{D_N} (x) - L_{t_N}^{D_N} (x+z) + \frac{1}{2} (h_x^{D_N})^2 - \frac{1}{2} (h_{x+z}^{D_N})^2,
\qquad
\end{multline}
which is verified exactly as before, now using $\sqrt{2a_N}+\widehat a_N=\sqrt{2t_N}$, we again arrive at \eqref{E:9.10} for $\langle\widehat\eta^D_N,f\rangle$.
Using Corollary \ref{cor-tightness-lower} and
Lemmas \ref{lemma-no-conspire2} and \ref{lemma-control-gradient-DGFF2},
we rewrite \eqref{E:9.10} as the sum of a random quantity whose $L^1$-norm 
is at most a constant times $\|f\|_{\infty} \text{\rm e}\mkern0.7mu^{- \beta M^2}$
uniformly in~$N$ and the quantity \eqref{E:9.13},
where, in this case,
\begin{equation}
f_{\text{ext}} (x, \ell, \phi, h) := f \biggl(x, \frac{1}{2 \sqrt{g} (\sqrt{\theta} - \lambda)}
\bigl(\ell + \tfrac{1}{2} h^2 \bigr), \phi \biggr).
\end{equation}
Note that Corollary \ref{cor-tightness-lower} implies the tightness of $\{\widehat \zeta_N^D\colon N\ge1\}$.
Let $\widehat \zeta^D$ be any subsequential weak limit of $\widehat \zeta_N^D$ along a subsequence~$\{N_k\}$. 
By the same argument as in the proof of Lemma~\ref{lemma-add-field2},
as $k \to \infty$ followed by $M \to \infty$, $\langle \widehat \eta_{N_k}^D, f \rangle$ tends in law to
\begin{equation}
\int \widehat \zeta^D (\text{\rm d}\mkern0.5mu x\,\text{\rm d}\mkern0.5mu\ell\,\text{\rm d}\mkern0.5mu\phi) \otimes \mathfrak{g} (\text{\rm d}\mkern0.5mu h)
f_{\text{ext}}(x, \ell, \phi, h).
\end{equation}
On the other hand, by \cite[Theorem 2.1]{BL4}, as $N \to \infty$,
$\langle \widehat \eta_N^D, f \rangle$ converges in law to
\begin{equation}
 \int \fraktura c(\lambda)Z_{\lambda}^D (\text{\rm d}\mkern0.5mu x) \otimes \text{\rm e}\mkern0.7mu^{\alpha \lambda h} \text{\rm d}\mkern0.5mu h \otimes 
\widetilde \nu_{\theta, \lambda} (\text{\rm d}\mkern0.5mu\phi) f(x, h, \phi).
\end{equation}
The arguments in the proof of Theorem \ref{thm-thin} and the calculation \eqref{E:4.38a} then show
\begin{equation}
\widehat \zeta^D \,\,\overset{\text{\rm law}}=\,\,
\frac{\theta^{1/4}}{2\sqrt{g} (\sqrt{\theta} - \lambda)^{3/2}} \,\fraktura c(\lambda) \,
Z_{\lambda}^D (\text{\rm d}\mkern0.5mu x) \otimes \text{\rm e}\mkern0.7mu^{\widetilde \alpha (\theta, \lambda) \ell} \text{\rm d}\mkern0.5mu\ell \otimes 
\widetilde \nu_{\theta, \lambda} (\text{\rm d}\mkern0.5mu\phi).
\end{equation}
This is the desired claim.
\end{proofsect}


\subsection{Local structure of avoided points}
In this section we will prove the convergence of the point measures 
associated with the local structure of the avoided points. The proof will make use of the Pinned Isomorphism Theorem (see Theorem~\ref{thm-pinned-iso}) but does so only at the very end. Most of the argument consists of careful manipulations with the doubly extended measure
\begin{equation}
\widehat \kappa_N^{D, \text{ext}} :=
\frac{\sqrt{\log N}}{\widehat W_N}
\sum_{x \in D_N}
1_{\{L_{t_N}^{D_N} (x) = 0 \}} \delta_{x/N} \otimes \delta_{\{L_{t_N}^{D_N} (x+z) \,:\,z \in \Bbb{Z}^2 \}}
\otimes \delta_{h_x^{D_N}} \otimes \delta_{\{\widehat h_{x+z}^{D_N \smallsetminus \{x\}}\,:\,z \in \Bbb{Z}^2 \}},
\end{equation}
where, for $\fraktura b_{D_N,x}$ as in \eqref{E:5.12i},
\begin{equation}
\label{E:8.24uiw}
\widehat h^{D_N \smallsetminus \{x \}}_z :=h^{D_N}_z-h^{D_N}_x\fraktura b_{D_N,x}(z),\quad z\in\mathbb Z^2.
\end{equation}
By \eqref{E:5.12i}, $\widehat h^{D_N \smallsetminus \{x \}}$ is the field $h^{D_N}$ conditioned on $h^{D_N}_x=0$. In particular, 
\begin{equation}
\label{E:8.25uiw}
\widehat h^{D_N \smallsetminus \{x \}}\independent h^{D_N}_x.
\end{equation}
Corollary \ref{cor-light} implies that $\{\widehat \kappa_N^{D, \text{ext}} \colon N\ge1\}$ is tight with respect to vague convergence of measures on the product space $D\times[0,\infty)^{\mathbb Z^2}\times\mathbb R\times\mathbb R^{\mathbb Z^2}$. We now claim:

\begin{lemma}
\label{lemma-avoid-local-add}
Suppose $\{N_k\}$ is a subsequence along which $\widehat \kappa_N^D$
converges in law to $\widehat \kappa^D$. 
Then
\begin{equation}
\widehat \kappa_N^{D, \text{\rm ext}} \,\,\,\underset{\begin{subarray}{c} N = N_k \\ k\to\infty \end{subarray}}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\,
 \frac{1}{\sqrt{2\pi g}} \,\widehat \kappa^D \otimes {\rm Leb} \otimes \nu^0.
\end{equation}
\end{lemma}

\begin{proofsect}{Proof}
Let $f = f(x, \ell, h, \phi)\colon D\times[0,\infty)^{\mathbb Z^2}\times\mathbb R\times\mathbb R^{\mathbb Z^2} \to \mathbb R$
be a continuous, compactly supported function
that depends only on a finite number of coordinates of $\ell$ and $\phi$, say, 
those in $\Lambda_{r_0} (0)$ for some~$r_0>0$.
Suppose in addition that $f(x, \ell, h, \phi) = 0$
unless $x \in A$ for some open $A \subset D$ with $\overline A \subset D$, and unless
$|h| \leq M$ and $\ell_z, |\phi_z| \leq M$ for all $z \in \Lambda_{r_0} (0)$ for some $M > 0$.

Noting that only the second pair of the variables of~$\widehat \kappa_N^{D, \text{ext}}$ is affected by the expectation~$\mathbb E$ with respect to the law of~$h^{D_N}$, we now claim
\begin{equation}
\label{E:8.26uiw}
\Bbb{E} \langle \widehat \kappa_N^{D, \text{ext}}, f \rangle
= \frac{1}{\sqrt{2\pi g}}\,\langle \widehat \kappa_N^D \otimes {\rm Leb} \otimes \nu^0, f \rangle + o(1),
\end{equation}
where $o(1) \to 0$ in $P^{\varrho}$-probability as $N \to \infty$ and where~$\nu^0$ is the law of the pinned DGFF. As in the proof of Lemma~\ref{lemma-7.1}, \eqref{E:8.26uiw} follows by noting that the probability density of $h_x^{D_N}$ multiplied by~$\sqrt{\log N}$ tends to $(2\pi g)^{-1/2}$ uniformly over any compact interval and by the fact that $(\widehat h_{x+z}^{D_N \smallsetminus \{x\}})_{z \in \Lambda_{r_0} (0)}$ tends in law to $(\phi_z)_{z \in \Lambda_{r_0}(0)}$ (which can be gleaned from the representation of the Green function by the potential kernel, see \cite[Lemma B.3]{BL3}, and the asymptotic expression for the potential kernel, see \cite[Lemma B.4]{BL3}). These two convergences may be applied jointly in light of the independence \eqref{E:8.25uiw} and, thanks to the tightness of~$\{\widehat\kappa^D_N\colon N\ge1\}$, the Bounded Convergence Theorem.

In order to prove the lemma, it thus suffices to show that the conditional variance
$\text{Var}_{\mathbb P} (\langle \widehat \kappa_N^{D, \text{ext}}, f \rangle)$
tends to zero in $P^{\varrho}$-probability as $N \to \infty$.
For this, writing $A_N:=\{x\in\mathbb Z^2\colon x/N\in A\}$, we note that
\begin{equation}
\text{Var}_{\mathbb P} (\langle \widehat \kappa_N^{D, \text{ext}}, f \rangle)
=\frac{\log N}{(\widehat W_N)^2}\sum_{x,y\in A_N}1_{\{L_{t_N}^{D_N} (x) = 0\}}1_{\{L_{t_N}^{D_N} (y) = 0\}}\,\mathcal{C}_N(x,y)
\end{equation}
where $\mathcal{C}_N(x,y)$ denotes the quantity 
\begin{equation}
\text{Cov}_{\mathbb P}
\Bigl(f\bigl(\tfrac xN,\, L_{t_N}^{D_N} (x+\cdot),\,
h_x^{D_N},\, \widehat h_{x+\cdot}^{D_N \smallsetminus \{x \}} \bigr),\,
f\bigl(\tfrac yN,\, L_{t_N}^{D_N} (y+\cdot),\,
h_y^{D_N},\, \widehat h_{y+\cdot}^{D_N \smallsetminus \{y \}} \bigr) \Bigr).
\end{equation}
We will now proceed to estimate $\mathcal{C}_N (x, y)$ by arguments similar to those invoked in the proof of Lemma~\ref{lemma-7.1}. 

To lighten the notation, we will henceforth write $\phi^{(x)}$ for $(\widehat h_{x+z}^{D_N \smallsetminus \{x \}})_{z \in \Lambda_{r_0}(0)}$. 
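Let us also record, for later use, the covariance structure of~$\phi^{(x)}$. Since the independence \eqref{E:8.25uiw} forces $\text{Cov}_{\mathbb P}(\widehat h^{D_N\smallsetminus\{x\}}_{x+z},h^{D_N}_x)=0$, the definition \eqref{E:8.24uiw} yields $\fraktura b_{D_N,x}(x+z)=G^{D_N}(x,x+z)/G^{D_N}(x,x)$ and hence
\begin{equation}
\text{Cov}_{\mathbb P}\bigl(\phi^{(x)}_z,\phi^{(x)}_{z'}\bigr)
= G^{D_N}(x+z,x+z') - \frac{G^{D_N}(x,x+z)\,G^{D_N}(x,x+z')}{G^{D_N}(x,x)},
\qquad z,z'\in\Lambda_{r_0}(0).
\end{equation}
In particular, for~$x$ with $x/N\in A$, the variance of~$\phi^{(x)}_z$ is $2g\log(1+|z|)+O(1)$; it thus stays bounded as~$N\to\infty$ and, for~$z\ne0$, uniformly positive, as will be used below.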
Let $\rho_{x,y}^{D_N}$ be the joint probability density of $(h_x^{D_N}, h_y^{D_N})$ and let $\widehat \rho_{x,y}^{D_N}$ be the joint probability density of $(\phi^{(x)},\phi^{(y)})$. Writing
$F(h_x, h_y, \phi^{(x)}, \phi^{(y)}; \tilde h_x, \tilde h_y, \tilde \phi^{(x)}, \tilde \phi^{(y)})$ for the product
\begin{multline}
\quad
\frac{1}{2} \biggl[
 f\Bigl(\tfrac xN,\, L_{t_N}^{D_N} (x+\cdot),\,
h_x,\, \phi^{(x)} \Bigr) 
- f\Bigl(\tfrac xN,\, L_{t_N}^{D_N} (x+\cdot),\,
\tilde h_x,\, \tilde\phi^{(x)}\Bigr) \biggr] 
\\
\times \biggl[
 f\Bigl(\tfrac yN,\, L_{t_N}^{D_N} (y+\cdot),\,
h_y,\, \phi^{(y)} \Bigr) 
- f\Bigl(\tfrac yN,\, L_{t_N}^{D_N} (y+\cdot),\,
\tilde h_y,\, \tilde\phi^{(y)} \Bigr) \biggr]
\quad
\end{multline}
and abbreviating $\Omega_M:= [-M, M]^4 \times ([-M, M]^{\Lambda_{r_0}(0)})^4$, we then get
\begin{multline}
\label{E:9.31}
\mathcal{C}_N (x,y)=\int_{\Omega_M}
\text{\rm d}\mkern0.5mu h_x \,\text{\rm d}\mkern0.5mu h_y \,\text{\rm d}\mkern0.5mu \tilde h_x \,\text{\rm d}\mkern0.5mu \tilde h_y\,
\text{\rm d}\mkern0.5mu\phi^{(x)} \,\text{\rm d}\mkern0.5mu \tilde\phi^{(x)} \,\text{\rm d}\mkern0.5mu \phi^{(y)} \,\text{\rm d}\mkern0.5mu\tilde\phi^{(y)}
\rho_{x,y}^{D_N} (h_x, h_y) \rho_{x, y}^{D_N} (\tilde h_x, \tilde h_y)
\\
\times
\widehat \rho_{x,y}^{D_N} \bigl(\phi^{(x)}, \phi^{(y)}\bigr) 
\,\widehat \rho_{x,y}^{D_N} \bigl(\tilde\phi^{(x)}, \tilde\phi^{(y)}\bigr)
\,F\bigl(h_x, h_y, \phi^{(x)}, \phi^{(y)}; \tilde h_x, \tilde h_y, \tilde\phi^{(x)}, \tilde\phi^{(y)}\bigr).
\end{multline}
Let $\delta_N (M, r)$ abbreviate the quantity
\begin{equation}
\max_{\begin{subarray}{c} x, y \in A_N \\ |x-y| \geq r \end{subarray}}\,\,
\sup_{h_x, h_y \in[-M,M]} \,\,
\sup_{\begin{subarray}{c} 
\phi^{(x)},\phi^{(y)}\in[-M,M]^{\Lambda_{r_0} (0)}
\end{subarray}}\,\,
\Biggl|\,\log \frac{\rho_{x,y}^{D_N} (h_x, h_y)\,\widehat\rho_{x,y}^{D_N}(\phi^{(x)},\phi^{(y)})}
{\rho_{x,y}^{D_N}(0,0)\,\widehat\rho_{x}^{D_N}(\phi^{(x)})\,\widehat\rho_{y}^{D_N}(\phi^{(y)})}\Biggr|,
\end{equation}
where $\widehat\rho_{x}^{D_N}$ and $\widehat\rho_{y}^{D_N}$ denote the probability densities of $\phi^{(x)}$ and $\phi^{(y)}$, respectively, with respect to the Lebesgue measure on~$\mathbb R^{\Lambda_{r_0}(0)}$.
Invoking the bound
\begin{multline}
\quad
\rho_{x,y}^{D_N} (h_x, h_y) \rho_{x,y}^{D_N} (\tilde h_x, \tilde h_y)
\widehat \rho_{x,y}^{D_N} (\phi^{(x)}, \phi^{(y)}) \widehat\rho_{x,y}^{D_N} (\tilde\phi^{(x)}, \tilde\phi^{(y)})
\\
\leq \rho_{x,y}^{D_N} (0,0)^2 
\widehat\rho_x^{D_N} (\phi^{(x)}) \widehat\rho_y^{D_N} (\phi^{(y)}) \widehat\rho_x^{D_N} (\tilde\phi^{(x)})
 \widehat\rho_y^{D_N} (\tilde\phi^{(y)}) \bigl(\text{\rm e}\mkern0.7mu^{2 \delta_N (M, |x-y|)} - 1 \bigr)
\\+ \rho_{x,y}^{D_N} (0,0)^2 \widehat\rho_x^{D_N} (\phi^{(x)}) \widehat\rho_y^{D_N} (\phi^{(y)}) \widehat\rho_x^{D_N} (\tilde\phi^{(x)})
 \widehat\rho_y^{D_N} (\tilde\phi^{(y)}),
 \quad
\end{multline}
and noting that the integral of the second term against $F(\cdots)$ vanishes by the symmetry of the corresponding product measure under $(h_x,\phi^{(x)})\leftrightarrow(\tilde h_x,\tilde\phi^{(x)})$,
we have
\begin{equation}
\label{E:8.34uiw}
\mathcal{C}_N (x, y) \leq 32M^4 \Vert f\Vert_\infty^2
\,\rho_{x,y}^{D_N} (0, 0)^2 \bigl(\text{\rm e}\mkern0.7mu^{2 \delta_N (M, |x-y|)} - 1 \bigr),
\end{equation}
where we also bounded $F(\cdots)$ by $2\Vert f\Vert_\infty^2$ 
and noted that the integral over~$\Omega_M$ of the product of the individual densities of $\phi^{(x)}$, $\phi^{(y)}$, $\tilde\phi^{(x)}$ and $\tilde\phi^{(y)}$ is at most $(2M)^4$. 
 

It remains to control the~$N$-dependent terms on the right-hand side of \eqref{E:8.34uiw}.
Let $r_N$ be a positive sequence with $\lim_{N \to \infty} r_N = \infty$
and $\lim_{N \to \infty} \log r_N/\log N = 0$.
Using the representation of the Green function by the potential kernel (cf \cite[Lemma B.3]{BL3})
and the asymptotic expression for the potential kernel (cf \cite[Lemma B.4]{BL3}), we have
\begin{equation}
\max_{\begin{subarray}{c}
x,y\in A_N\\|x-y|\ge r_N
\end{subarray}}
\,\,\max_{z, z^{\prime} \in \Lambda_{r_0} (0)}
\mathbb E\bigl(\phi^{(x)}_z \phi^{(y)}_{z^{\prime}}\bigr)\,\underset{N\to\infty}\longrightarrow\,0.
\end{equation}
Since the variances of $\phi^{(x)}_z$ for $z\in\Lambda_{r_0}(0)\smallsetminus\{0\}$ stay uniformly positive, the argument leading to \eqref{E:7.9ie} shows
\begin{equation}
\max_{\begin{subarray}{c}
x,y\in A_N\\|x-y|\ge r_N
\end{subarray}}\,\,
\sup_{\phi^{(x)},\phi^{(y)}\in[-M,M]^{\Lambda_{r_0}(0)}}\,\biggl|\,\frac{\widehat\rho_{x,y}^{D_N} (\phi^{(x)}, \phi^{(y)})}{\widehat\rho_x^{D_N} (\phi^{(x)}) 
\widehat\rho_y^{D_N} (\phi^{(y)})}-1\biggr|
\,\underset{N\to\infty}\longrightarrow\,0.
\end{equation}
This implies $\delta_N (M, r_N) \to0$ as $N \to \infty$ and so, with $o(1)\to0$ in $P^\varrho$-probability as~$N\to\infty$,
\begin{equation}
\label{E:9.35}
\frac{\log N}{\widehat W_N^2} \sum_{\begin{subarray}{c} x, y \in A_N \\ |x-y| \geq r_N \end{subarray}}
 1_{\{L_{t_N}^{D_N} (x) = 0\}}1_{\{L_{t_N}^{D_N} (y) = 0\}}\, 
\mathcal{C}_N (x, y) = o(1) \kappa_N^D (D)^2.
\end{equation}
The sum over $x, y \in A_N$ with~$|x-y|\le r_N$
is at most of order $\|f\|_\infty^2 \kappa_N^D (D)$ times $r_N^2(\log N)/\widehat W_N$. The latter factor is~$o(1)$ as $N \to \infty$ by our choice of~$r_N$. The claim follows from the tightness of~$\{\kappa^D_N(D)\colon N\ge1\}$.
\end{proofsect}

We are now ready to give:

\begin{proofsect}{Proof of Theorem \ref{thm-avoid-local}}
Consider the coupling from Theorem~\ref{thm-Dynkin} between the local time~$L^{D_N}_{t_N}$ and two copies~$h^{D_N}$ and~$\tilde h^{D_N}$ of the DGFF in~$D_N$, with the former independent of~$L^{D_N}_{t_N}$. Recall the definition of $\widehat h^{D_N\smallsetminus\{x\}}$ from \eqref{E:8.24uiw}, write $\phi^{(x)}_z:=\widehat h^{D_N\smallsetminus\{x\}}_{x+z}$ and abbreviate $\nabla_z s(x):= s(x)-s(x+z) $. 
Then for each $x \in D_N$ and $z \in \Bbb{Z}^2$, we have
\begin{multline}
\label{E:9.37}
\qquad
\Bigl(\tilde h_x^{D_N} + \sqrt{2t_N} - \frac{1}{2} \nabla_z \tilde h^{D_N} (x) \Bigr)
\Bigl(- \nabla_z \tilde h^{D_N} (x) \Bigr) \\
= - \nabla_z L_{t_N}^{D_N} (x) 
+ \frac{1}{2} \bigl(\,\phi^{(x)}_z
+ \fraktura b_{D_N,x} (x+z) h_x^{D_N}\bigr)^2
- \frac{1}{2} (h_x^{D_N})^2.
\qquad
\end{multline}
Let $\Phi_x (z)$ and $\Psi_x (z)$ denote the left-hand side and the right-hand side
of \eqref{E:9.37}, respectively.
Then for each 
$f \in C_{\text{c}} (D \times [0, \infty) \times \mathbb R^{\Bbb{Z}^2})$,
\begin{multline}
\label{E:9.38}
\quad
\frac{\sqrt{\log N}}{\widehat W_N}
\sum_{x \in D_N}
f \left(x/N,\, \frac{1}{2} (\tilde h_x^{D_N} + \sqrt{2t_N})^2,\,
\{\Phi_x (z)\,:\,z \in \Bbb{Z}^2\} \right) 
\\
= \frac{\sqrt{\log N}}{\widehat W_N}
\sum_{x \in D_N}
f \left(x/N,\, L_{t_N}^{D_N} (x) + \frac{1}{2} (h_x^{D_N})^2,\,
\{\Psi_x (z)\,:\,z \in \Bbb{Z}^2\} \right).
\quad
\end{multline}
Next pick $F \in C_{\text{c}} (\mathbb R^{\Bbb{Z}^2})$ that
depends only on a finite number of coordinates, say, in~$\Lambda_r (0)$,
and obeys $F(\phi) = 0$ unless $|\phi_z| \leq M$ for all $z \in \Lambda_r (0)$ for some $M > 0$. Then set $f(x, \ell, \phi) := 1_A(x) f_n (\ell) F(\phi)$,
where $A \subset D$ is an open set with $\overline{A} \subset D$ and $f_n\colon [0, \infty) \to [0, 1]$
are given by $f_n (\ell) := (1 - n \ell) \vee 0$. The Bounded Convergence Theorem ensures that \eqref{E:9.38} applies to these~$f$'s as well, so we will now explicitly compute both sides (suitably scaled) in the \myemph{joint} distributional limit as $N\to\infty$ and~$n\to\infty$. Note that taking the limit jointly preserves pointwise equality.

Starting with the right-hand side of \eqref{E:9.38}, using the uniform continuity of $F$ and Corollary \ref{cor-light} we may rewrite it as the sum of a random quantity whose $L^1$-norm under $P^{\varrho} \otimes \mathbb P$
is at most $o(1) n^{-1/2}$, with $o(1) \to 0$ as $n \to \infty$, and the quantity
\begin{equation}
\label{E:9.39}
\frac{\sqrt{\log N}}{\widehat W_N}
\sum_{x\in A_N}
f_n \biggl(L_{t_N}^{D_N} (x) + \frac{1}{2} (h_x^{D_N})^2 \biggr)
F \biggl(\Bigl\{L_{t_N}^{D_N} (x+z) + \frac{1}{2} (\phi^{(x)}_z)^2
\colon z \in \Bbb{Z}^2 \Bigr\} \biggr),
\end{equation}
where we denoted $A_N:=\{x\in\mathbb Z^2\colon x/N\in A\}$.
Decomposing the sum into the part with $L_{t_N}^{D_N} (x) = 0$
and the part with $L_{t_N}^{D_N} (x) > 0$,
and applying Lemma \ref{lemma-6.3} to the latter,
we rewrite \eqref{E:9.39}
as
\begin{equation}
\label{E:29}
\frac{\sqrt{\log N}}{\widehat W_N} \sum_{x\in A_N}
1_{\{L_{t_N}^{D_N} (x) = 0 \}}\,
f_n \Bigl(\frac{1}{2} (h_x^{D_N})^2 \Bigr)
F \biggl(\Bigl\{L_{t_N}^{D_N} (x+z) + \frac{1}{2} (\phi^{(x)}_z)^2
\colon z \in \Bbb{Z}^2 \Bigr\} \biggr)
\end{equation}
plus a random quantity whose $L^1$-norm under $P^{\varrho} \otimes \mathbb P$
is at most $o(1) n^{-1/2}$ with $o(1) \to 0$ as $N \to \infty$
followed by $n \to \infty$.
Let $\widehat \kappa^D$ be a (subsequential) weak limit of $\widehat \kappa_N^D$ along a subsequence $\{N_k\}$. 
By Lemma \ref{lemma-avoid-local-add}, as $k \to \infty$, \eqref{E:29} converges in law to
\begin{multline}
\qquad
\frac{1}{\sqrt{2 \pi g}} 
\int \widehat \kappa^D (\text{\rm d}\mkern0.5mu x\,\text{\rm d}\mkern0.5mu\ell)\otimes \text{\rm d}\mkern0.5mu h \otimes \nu^0 (\text{\rm d}\mkern0.5mu\phi) 
1_A (x) f_n\bigl(\tfrac{h^2}2\bigr) 
F \biggl(\Bigl\{\ell_z + \frac{1}{2} \phi_z^2\colon z \in \Bbb{Z}^2 \Bigr\} \biggr) 
\\
= \frac{4}{3\sqrt{\pi g n}} \int \widehat \kappa^D (\text{\rm d}\mkern0.5mu x\,\text{\rm d}\mkern0.5mu\ell)\otimes \nu^0 (\text{\rm d}\mkern0.5mu\phi) 
1_A (x)
F \biggl(\Bigl\{\ell_z + \frac{1}{2} \phi_z^2\colon z \in \Bbb{Z}^2 \Bigr\} \biggr),
\qquad
\end{multline}
where we used the explicit form of~$f_n$ to perform the integral over~$h$. Multiplying this by $\frac{3}{4} \sqrt{\frac{n}{2}}$ and taking $n\to\infty$, we obtain
\begin{equation}
\label{E:8.43uiw}
\frac1{\sqrt{2\pi g}}\int \widehat \kappa^D_A (\text{\rm d}\mkern0.5mu\ell)\otimes \nu^0 (\text{\rm d}\mkern0.5mu\phi) 
F \biggl(\Bigl\{\ell_z + \frac{1}{2} \phi_z^2\colon z \in \Bbb{Z}^2 \Bigr\} \biggr),
\end{equation}
where $\widehat\kappa^D_A(B):=\widehat\kappa^D(A\times B)$. This is the $N\to\infty$ and~$n\to\infty$ limit of the (rescaled) right-hand side of \eqref{E:9.38}.


Concerning the left-hand side of \eqref{E:9.38}, whenever~$A$ is such that ${\rm Leb}(\partial A)=0$ (which implies $Z^D_{\sqrt\theta}(\partial A)=0$ a.s.), \cite[Theorem~2.1]{BL4} yields convergence to
\begin{equation}
\label{E:9.42}
\fraktura c(\sqrt{\theta}) Z_{\sqrt{\theta}}^D (A)
\int \text{\rm d}\mkern0.5mu h\otimes \nu_{\sqrt{\theta}} (\text{\rm d}\mkern0.5mu\phi)
\,\text{\rm e}\mkern0.7mu^{\alpha \sqrt{\theta} h} \,f_n\bigl(\tfrac{h^2}2\bigr)\, F\biggl(\Bigl\{\bigl(h-\tfrac{1}{2} \phi_z\bigr) (-\phi_z)\colon z \in \Bbb{Z}^2 \Bigr\} \biggr),
\end{equation}
where $\nu_{\sqrt{\theta}}$ is the law of $\phi+\alpha\sqrt{\theta}\, \fraktura a$ with~$\phi$ distributed according to~$\nu^0$.
Using that
\begin{equation}
\int \text{\rm d}\mkern0.5mu h\,\,\text{\rm e}\mkern0.7mu^{\alpha \sqrt{\theta} h} f_n\bigl(\tfrac{h^2}2\bigr) 
= \frac{4\sqrt{2}}{3\sqrt{n}} + O(n^{-3/2}),\quad n\to\infty,
\end{equation}
\eqref{E:9.42} multiplied by $\frac{3}{4} \sqrt{\frac{n}{2}}$ converges, as~$n\to\infty$, to
\begin{equation}
\label{E:8.46uiw}
\fraktura c(\sqrt{\theta}) Z_{\sqrt{\theta}}^D (A)
\int \nu^0 (\text{\rm d}\mkern0.5mu\phi) F\Bigl(\bigl\{\tfrac{1}{2} \bigl(\phi_z+\alpha\sqrt\theta\,\fraktura a(z)\bigr)^2\colon z \in \Bbb{Z}^2 \bigr\}\Bigr).
\end{equation} 
This is the $N\to\infty$ and~$n\to\infty$ limit of the (rescaled) left-hand side of \eqref{E:9.38}.

We now finally have a chance to invoke the Pinned Isomorphism Theorem of~\cite{R18}. 
Indeed, since $2 \sqrt{2u} =\alpha\sqrt\theta$ implies $u=\pi\theta$, \eqref{E:3.6uiw} equates \eqref{E:8.46uiw} (and thus also \eqref{E:8.43uiw}) with
\begin{equation}
\label{E:8.47uiw}
\fraktura c(\sqrt{\theta}) Z_{\sqrt{\theta}}^D (A)
\int \nu_{\theta}^{\text{\rm RI}} (\text{\rm d}\mkern0.5mu\ell) \otimes \nu^0 (\text{\rm d}\mkern0.5mu\phi)
F\Bigl(\bigl\{\ell_z + \tfrac{1}{2} \phi_z^2\colon z \in \Bbb{Z}^2 \bigr\}\Bigr).
\end{equation} 
The Bounded Convergence Theorem extends the equality of \eqref{E:8.43uiw} and \eqref{E:8.47uiw} to~$F$ of the form $F(\ell):=\exp\{-\sum_{z\in\Lambda_r(0)}b_z\ell_z\}$ for any~$b_z\ge0$. For such~$F$, the terms~$\frac12\phi_z^2$ contribute the same deterministic multiplicative factor to both expressions and thus effectively drop out; the Cram\'er-Wold device then implies
\begin{equation}
\widehat\kappa_A^D(\text{\rm d}\mkern0.5mu \ell) \overset{\text{\rm law}}= \sqrt{2\pi g}\,\fraktura c(\sqrt{\theta})Z^D_{\sqrt\theta}(A)\,\nu_{\theta}^{\text{\rm RI}}(\text{\rm d}\mkern0.5mu\ell).
\end{equation}
As this holds for all open~$A\subset D$ with~$\overline A\subset D$, the claim follows.
\end{proofsect}




\section*{Acknowledgments}
\nopagebreak\noindent
The first author has been supported in part by JSPS KAKENHI, Grant-in-Aid for Early-Career Scientists 18K13429. The second author has been partially supported by the NSF award DMS-1712632.


\bibliographystyle{abbrv}