diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzkdfw" "b/data_all_eng_slimpj/shuffled/split2/finalzzkdfw" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzkdfw" @@ -0,0 +1,5 @@ +{"text":"\\section{A Dark Lens candidate}\n\nIn a campaign to study the optical properties of faint VLA FIRST \n(Faint Images of the Radio Sky at Twenty-Centimeters) sources with the \nHubble Space Telescope, Russell et al. [\\pos{1}] identified an optical arc\n4 arcseconds away from FIRST J121839.7 +295325 (hereafter J1218+2953). \nThere was no optical identification of the radio source itself.\nThe possible relation between the two objects was further investigated by\nRyan et al. [\\pos{2}], who proposed that the arc may be a gravitationally lensed\nimage and the radio source may belong to the lensing object. There are two \nmajor difficulties with this interpretation. First of all the lensing galaxy\nis not seen. Moreover the redshifts of the radio source and the optical arc\nare not known. Ryan et al. [\\pos{2}] estimated a redshift range of\n$0.80.8$), but it has a relatively bright radio flux density ($z<1.5$). \nThey carried out photometric redshift measurements of the optical arc with \nthe SAO Multi-Mirror Telescope. A weak spectral break was seen at around\n4300 \\AA, which could be due to the Balmer\/4000 \\AA ~break at $z\\sim0.13$, \nor due to the Lyman-break at $z\\sim2.5$. Probability analysis showed \nthat the Lyman-break is much more likely, which suggests that the arc is \nlocated at a high redhsift. Obviously, if the spectral break is related to \nthe Balmer series then the redshift is much smaller and the two objects are \nunrelated.\n\nRyan et al. [\\pos{2}] carried out gravitational lens modelling and found that\nthe mass distribution must be elliptical to produce such a long arc. However \nthe observed sub-structure within the arc, and especially a bright knot at the\nend, cannot be easily explained by gravitational lensing.\nThey predict a secondary image which appears close to a pointlike source in the\nHST image, but this is likely due to a \"hot pixel\". According to the model, the enclosed \nmass within the Einstein-radius of 1.3 arcseconds is $10^{12\\pm0.5}M_{\\odot}$.\n\nIt is not clear how such a massive object may remain hidden. Even if there is strong\nobscuration by dust, there should be an infrared counterpart detected. IR imaging with \nthe SAO Wide-field camera showed no galaxy with limiting magnitudes of $J=22.0$\\,mag \nand $H=20.7$\\,mag [\\pos{2}]. Ryan et al. conclude that either there is an early-type\ngalaxy with significant amount of dark matter, or this could be a massive system\nwith an AGN that is completely obscured, with dynamic mass-to-light ratio exceeding\n100 $M_{\\odot}L_{\\odot}^{-1}$.\n\n\\section{VLBI imaging of FIRST J1218+2953}\n\n \\begin{figure*}\n \\centering\n \\vspace{20pt}\n \\includegraphics[bb=33 120 582 714,clip,angle=0,width=7cm]{SHORT.ps}\n \\includegraphics[bb=36 123 531 641,clip,angle=0,width=7cm]{UVPLOT.ps}\n \\caption{\\label{short} Left: naturally weighted image of J1218+2953 from 2 hours of short \ne-EVN observations at 1.6 GHz. The first contour is drawn at 0.4 mJy\/beam, the following ones \nare multiples of this by $\\sqrt{2}$. The beam was 30.3 $\\times$ 23.8 mas, oriented at \nPA 3.3 degrees. The peak brightness was about 2.3 mJy\/beam. The total cleaned flux density \nrecovered was less then 15 mJy. Right: $uv$-coverage of the observations. 
The shortest MERLIN \nbaselines (Jb2-Cm-Kn) are a great value for imaging of sources extending to several mas. \nHowever in this case the short observations were still not enough to recover most of the \nflux density.} \n \\end{figure*}\n\n\\subsection{Short e-EVN observations}\n\nWe applied for short e-EVN observations in December 2008, to test the AGN hypothesis\nfor J1218+2953. The total flux density of the source at 1.6 GHz is 33 mJy, which \ncan be easily detected with limited EVN resources if it is compact. The observations\ntook place on 23 January 2009 at a data rate of 512 Mbps with 2-bit sampling, dual \npolarization, and lasted for 2 hours. The array consisted of Cambridge and Knockin \n(MERLIN telescopes, limited to 128 Mbps), Jodrell Bank MkII, Medicina, Onsala, Torun and \nthe Westerbork Synthesis Radio Telescope (WSRT). \nThe target was phase-referenced to the nearby calibrator J1217+3007. \nThe data were pipelined and then imaged in Difmap [\\pos{3}]. The source was detected\nand was resolved to two components, separated by about 500~mas (see Fig. \\ref{short}). \n\nThis result opened up new possibilities for the interpretation of the radio source.\nAlthough it was not predicted from the \"best-fit\" gravitational lens model of Ryan et al.,\none may speculate that these two components are gravitationally lensed images of the\nsame background source that gives rise to the optical arc images, or perhaps a completely\nunrelated background source. Alternatively, we may see a core-jet system or\na medium-size symmetric object (MSO). To distinguish between these scenarios we put in an\nobserving proposal by the 1 February 2009 deadline (just 8 days following the e-EVN\nobservations), for full-track e-EVN observations at 5 and 1.6 GHz.\n\n\\subsection{Follow-up experiments}\n\nJ1218+2953 was observed at 5 GHz on 24-25 March for 8 hours. The array this time\nincluded the 100m Effelsberg telescope as well. Four telescopes (Ef, On, Tr, Wb) sent data \nto the correlator at 1024 Mbps rate, the rest at 512 Mbps or lower. The 1.6 GHz\nobservations were carried out on 21-22 April 2009 at 512 Mbps data rate. In the array\nKnockin was replaced by Darnhall, the 76m Lovell Telescope was used instead of the MkII\nin Jodrell Bank, and Arecibo joined as well (for 2 hours and 20 minutes). These observations \nand the data processing were similar to the short project described above. In addition, we \nreduced the synthesis array data from the WSRT that was obtained during the VLBI run.\n\nThe 5 GHz image resolves the South-East component into an elongated, slightly curved jet-like\nstructure, which does not point towards the North-West component (see Fig.~\\ref{fulltracks}). \nThere is no very compact component that could be firmly identified as a core. The total \ncleaned flux density is only 3 mJy compared to the total WSRT flux density of 9 mJy, indicating \nthat most of the source is resolved out in this image. The 1.6 GHz image reveals an even more \ncomplex, but more continuous structure. The total cleaned flux density was about 20 mJy, close\nto the total WSRT flux density of 27 mJy. \n\n\\section{Interpretation of the results}\n\nThese preliminary e-EVN results show that the radio source near the optical arc has a complex\nstructure. The spectrum of the components is steep; that of the faint component near the phase \ncentre is somewhat flatter. 
Because of the apparent quasi-continuous structure, the various \ncomponents are likely not gravitationally lensed images of an unrelated background source. \n\nComparing the total cleaned flux density of about 20 mJy to the WSRT flux density of 27 mJy, \nit is evident that most of the flux density is recovered at 1.6 GHz with the EVN. This indicates \nthat the radio source and the optical arc cannot be lensed image pairs of the same background \nobject, because in that case most of the radio flux density would be present near the optical \narc since that image is strongly magnified (if the arc is indeed a lensed image). \n\nFurther, gravitational lensing should be achromatic; thus were the pair of components to the \nSE and NW in the 1.6 GHz image (Fig.~\\ref{fulltracks}, bottom panel) lensed images, the flux \ndensity ratio of the inner:outer components of each should be similar, a condition that is \nclearly violated. \n\nThe most likely scenario is that the images show a single compact steep-spectrum (CSS) source \n(projected linear size < 20 $h^{-1}$ kpc), that might be categorized as a medium-size symmetric \nobject (MSO, projected linear size > 1 $h^{-1}$ kpc) as well [\\pos{4}]. There is thus evidence \nfor AGN activity in the radio source. Note that if the optical arc is lensed by an object related \nto this AGN, then the mass-centre of the lens should be at the position of the AGN, which \nconstrains further modelling of the system. \n\nFinally we note that this research benefited strongly from two aspects of the (e-)EVN: \nthe additional short MERLIN spacings were of great use in recovering flux density on \nseveral-hundred mas scales, and the simultaneous recording of WSRT synthesis array data was \nvery important for the intepretation of our results. \n\n \\begin{figure*}\n \\centering\n \\vspace{20pt}\n \\includegraphics[bb=33 138 582 685,clip,angle=0,width=9.5cm]{5GHz_map.ps}\n \\includegraphics[bb=33 138 582 685,clip,angle=0,width=9.5cm]{1.6GHz_map.ps}\n \\caption{\\label{fulltracks} Full-track 5 GHz (top) and 1.6 GHz (bottom) e-EVN maps of J1218+2953.\n The peak brightnesses are 360 $\\mu$Jy\/beam and and 2.7 mJy\/beam, respectively. The lowest contour\n levels are set at the 3~sigma noise level (49 $\\mu$Jy\/beam and 75 $\\mu$Jy\/beam, respectively)\n and they increase by a factor of $\\sqrt{2}$. Both maps were naturally weighted; at 1.6 GHz\n a Gaussian taper was applied additionally, with half amplitude at 10 M$\\lambda$. \n The beam was 15.4 $\\times$ 8.8 mas, oriented at PA $-$30.5 degrees at 5 GHz, and \n 30.9 $\\times$ 23.3 mas, oriented at PA $-$13.3 degrees at 1.6 GHz.} \n \\end{figure*}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro} Exponential random graphs are among the most\nwidespread models for real-world networks as they are able to\ncapture a broad variety of common network tendencies by\nrepresenting a complex global structure through a set of tractable\nlocal features. These rather general models are exponential\nfamilies of probability distributions over graphs, in which\ndependence between the random edges is defined through certain\nfinite subgraphs, in imitation of the use of potential energy to\nprovide dependence between particle states in a grand canonical\nensemble of statistical physics. 
They are particularly useful when\none wants to construct models that resemble observed networks as\nclosely as possible, but without going into details of the\nspecific process underlying network formation. For history and a\nreview of developments, see e.g. Fienberg \\cite{F1} \\cite{F2},\nFrank and Strauss \\cite{FS}, H\\\"{a}ggstr\\\"{o}m and Jonasson\n\\cite{HJ}, Newman \\cite{N}, and Wasserman and Faust \\cite{WF}.\n\nMany of the investigations into exponential random graphs employ\nthe elegant theory of graph limits as developed by Lov\\'{a}sz and\ncoauthors (V.T. S\\'{o}s, B. Szegedy, C. Borgs, J. Chayes, K.\nVesztergombi, ...) \\cite{BCLSV1} \\cite{BCLSV2} \\cite{BCLSV3}\n\\cite{Lov} \\cite{LS}. Building on earlier work of Aldous\n\\cite{Aldous} and Hoover \\cite{Hoover}, the graph limit theory\nconnects sequences of graphs to a unified graphon space equipped\nwith a cut metric. Though the theory itself is tailored to dense\ngraphs, serious attempts have been made at formulating parallel\nresults for sparse graphs \\cite{AL} \\cite{BS} \\cite{BCCZ1}\n\\cite{BCCZ2} \\cite{CD2}. Since networks are often very large in\nsize, a pressing objective in studying exponential models is to\nunderstand their asymptotic tendencies. From the point of view of\nextremal combinatorics and statistical mechanics, emphasis has\nbeen made on the variational principle of the limiting\nnormalization constant, concentration of the limiting probability\ndistribution, phase transitions and asymptotic structures. See\ne.g. Aristoff and Zhu \\cite{AZ}, Chatterjee and Diaconis\n\\cite{CD}, Kenyon et al. \\cite{KRRS}, Kenyon and Yin \\cite{KY},\nLubetzky and Zhao \\cite{LZ1} \\cite{LZ2}, Radin and Sadun\n\\cite{RS1} \\cite{RS2}, and Radin and Yin \\cite{RY}.\n\nDespite their flexibility, conventionally used exponential random\ngraphs admittedly have one shortcoming. They cannot directly model\nweighted networks as the underlying probability space consists of\nsimple graphs only, i.e., edges are either present or absent.\nSince many substantively important networks are weighted, this\nlimitation is especially problematic. The need to extend the\nexisting exponential framework is thus justified, and several\ngeneralizations have been proposed \\cite{K} \\cite{RPW}\n\\cite{WDBCD}. An alternative interpretation for simple graphs is\nsuch that the edge weights are iid and satisfy a Bernoulli\ndistribution. This work will instead assume that the iid edge\nweights follow a generic common distribution and rigorously\nanalyze the associated phase transitions and critical phenomena.\n\nThe rest of this paper is organized as follows. In Section\n\\ref{weight} we provide basics of graph limit theory and introduce\nkey features of edge-weighted exponential random graphs. In\nSection \\ref{statement} we summarize some important general\nresults, including a variational principle for the limiting\nnormalization constant (Theorems \\ref{main1} and \\ref{main2}) and\nan associated concentration of measure (Theorem \\ref{main3})\nindicating that almost all large graphs lie near the maximizing\nset. Theorems \\ref{main4} and \\ref{gen} then give simplified\nversions of these theorems in the ``attractive'' region of the\nparameter space where the parameters $\\beta_2,...,\\beta_k$ are all\nnon-negative. 
In Section \\ref{app} we specialize to exponential\nmodels where the edge weights are uniformly distributed and show\nin Theorem \\ref{phase} the existence of a first order phase\ntransition curve ending in a second order critical point. Lastly,\nin Section \\ref{discuss} we investigate the asymptotic phase\nstructure of a directed model where a large deviation principle is\nmissing.\n\n\\section{Edge-weighted exponential random graphs}\n\\label{weight} Let $G_n\\in \\mathcal{G}_n$ be an edge-weighted\nundirected labeled graph on $n$ vertices, where the edge weights\n$x_{ij}$ between vertex $i$ and vertex $j$ are iid real random\nvariables having a common distribution $\\mu$. Any such graph\n$G_n$, irrespective of the number of vertices, may be represented\nas an element $h^{G_n}$ of a single abstract space $\\mathcal{W}$\nthat consists of all symmetric measurable functions $h(x,y)$ from\n$[0,1]^2$ into $\\mathbb{R}$ (referred to as ``graph limits'' or\n``graphons''), by setting $h^{G_n}(x,y)$ as the edge weight\nbetween vertices $\\lceil nx \\rceil$ and $\\lceil ny \\rceil$ of\n$G_n$. For a finite simple graph $H$ with vertex set\n$V(H)=[k]=\\{1,...,k\\}$ and edge set $E(H)$ and a simple graph\n$G_n$ on $n$ vertices, there is a notion of density of graph\nhomomorphisms, denoted by $t(H, G_n)$, which indicates the\nprobability that a random vertex map $V(H) \\to V(G_n)$ is\nedge-preserving,\n\\begin{equation}\n\\label{t} t(H, G_n)=\\frac{|\\text{hom}(H,\nG_n)|}{|V(G_n)|^{|V(H)|}}.\n\\end{equation}\nFor a graphon $h\\in \\mathcal{W}$, define the graphon homomorphism\ndensity\n\\begin{equation}\n\\label{tt} t(H, h)=\\int_{[0,1]^k}\\prod_{\\{i,j\\}\\in E(H)}h(x_i,\nx_j)dx_1\\cdots dx_k.\n\\end{equation}\nThen $t(H, G_n)=t(H, h^{G_n})$ by construction, and we take\n(\\ref{tt}) as the definition of graph homomorphism density $t(H,\nG_n)$ for an edge-weighted graph $G_n$. This graphon\ninterpretation enables us to capture the notion of convergence in\nterms of subgraph densities by an explicit ``cut distance'' on\n$\\mathcal{W}$:\n\\begin{equation}\nd_{\\square}(f, h)=\\sup_{S, T \\subseteq [0,1]}\\left|\\int_{S\\times\nT}\\left(f(x, y)-h(x, y)\\right)dx\\,dy\\right|\n\\end{equation}\nfor $f, h \\in \\mathcal{W}$. The common distribution $\\mu$ for the\nedge weights yields probability measure $\\mathbb P_n$ and the associated\nexpectation $\\mathbb E_n$ on $\\mathcal{G}_n$, and further induces\nprobability measure $\\mathbb Q_n$ on the space $\\mathcal{W}$ under the\ngraphon representation.\n\nA non-trivial complication is that the topology induced by the cut\nmetric is well defined only up to measure preserving\ntransformations of $[0,1]$ (and up to sets of Lebesgue measure\nzero), which may be thought of vertex relabeling in the context of\nfinite graphs. To tackle this issue, an equivalence relation\n$\\sim$ is introduced in $\\mathcal{W}$. We say that $f\\sim h$ if\n$f(x, y)=h_{\\sigma}(x, y):=h(\\sigma x, \\sigma y)$ for some measure\npreserving bijection $\\sigma$ of $[0,1]$. Let $\\tilde{h}$\n(referred to as a ``reduced graphon'') denote the equivalence\nclass of $h$ in $(\\mathcal{W}, d_{\\square})$. 
Since $d_{\\square}$\nis invariant under $\\sigma$, one can then define on the resulting\nquotient space $\\tilde{\\mathcal{W}}$ the natural distance\n$\\delta_{\\square}$ by $\\delta_{\\square}(\\tilde{f},\n\\tilde{h})=\\inf_{\\sigma_1, \\sigma_2}d_{\\square}(f_{\\sigma_1},\nh_{\\sigma_2})$, where the infimum ranges over all measure\npreserving bijections $\\sigma_1$ and $\\sigma_2$, making\n$(\\tilde{\\mathcal{W}}, \\delta_{\\square})$ into a metric space.\nWith some abuse of notation we also refer to $\\delta_{\\square}$ as\nthe ``cut distance''. After identifying graphs that are the same\nafter vertex relabeling, the probability measure $\\mathbb P_n$ yields\nprobability measure $\\tilde{\\mathbb P}_n$ and the associated expectation\n$\\tilde{\\mathbb E}_n$ (which coincides with $\\mathbb E_n$). Correspondingly,\nthe probability measure $\\mathbb Q_n$ induces probability measure\n$\\tilde{\\mathbb Q}_n$ on the space $\\tilde{\\mathcal{W}}$ under the\nmeasure preserving transformations.\n\nBy a $k$-parameter family of exponential random graphs we mean a\nfamily of probability measures $\\mathbb P_n^{\\beta}$ on $\\mathcal{G}_n$\ndefined by, for $G_n\\in\\mathcal{G}_n$,\n\\begin{equation}\n\\label{pmf} \\mathbb P_n^{\\beta}(G_n)=\\exp\\left(n^2\\left(\\beta_1\nt(H_1,G_n)+\\cdots+\n \\beta_k t(H_k,G_n)-\\psi_n^{\\beta}\\right)\\right)\\mathbb P_n(G_n),\n\\end{equation}\nwhere $\\beta=(\\beta_1,\\dots,\\beta_k)$ are $k$ real parameters,\n$H_1,\\dots,H_k$ are pre-chosen finite simple graphs (and we take\n$H_1$ to be a single edge), $t(H_i, G_n)$ is the density of graph\nhomomorphisms, $\\mathbb P_n$ is the probability measure induced by the\ncommon distribution $\\mu$ for the edge weights, and\n$\\psi_n^{\\beta}$ is the normalization constant,\n\\begin{equation}\n\\label{psi} \\psi_n^{\\beta}=\\frac{1}{n^2}\\log \\mathbb E_n\n\\left(\\exp\\left(n^2 \\left(\\beta_1 t(H_1,G_n)+\\cdots+\\beta_k\nt(H_k,G_n)\\right) \\right)\\right).\n\\end{equation}\nWe say that a phase transition occurs when the limiting\nnormalization constant $\\displaystyle\n\\psi^\\beta_\\infty:=\\lim_{n\\to\n \\infty}\\psi_n^{\\beta}$ has a singular point, as it is the\ngenerating function for the limiting expectations of other random\nvariables,\n\\begin{equation}\n\\label{E} \\lim_{n\\to \\infty}\\mathbb E_n^\\beta t(H_i, G_n)=\\lim_{n\\to\n\\infty}\\frac{\\partial}{\\partial\n\\beta_i}\\psi_n^\\beta=\\frac{\\partial}{\\partial\n\\beta_i}\\psi_\\infty^\\beta,\n\\end{equation}\n\\begin{equation}\n\\label{Cov} \\lim_{n\\to \\infty}n^2\\left(\\mathbb C\\textrm{ov}_n^\\beta\n\\left(t(H_i, G_n), t(H_j, G_n)\\right)\\right)=\\lim_{n\\to\n\\infty}\\frac{\\partial^2}{\\partial \\beta_i\n\\partial \\beta_j}\\psi_n^\\beta=\\frac{\\partial^2}{\\partial \\beta_i\n\\partial \\beta_j}\\psi_\\infty^\\beta.\n\\end{equation}\nThe exchange of limits in (\\ref{E}) and (\\ref{Cov}) is nontrivial,\nbut may be justified using similar techniques as in Yang and Lee\n\\cite{YL}. Since homomorphism densities $t(H_i, G_n)$ are\npreserved under vertex relabeling, the probability measure\n$\\tilde{\\mathbb P}_n^\\beta$ and the associated expectation\n$\\tilde{\\mathbb E}_n^\\beta$ (which coincides with $\\mathbb E_n^\\beta$) may\nlikewise be defined.\n\n\\begin{definition}\nA phase is a connected region of the parameter space $\\{\\beta\\}$,\nmaximal for the condition that the limiting normalization constant\n$\\psi_\\infty^\\beta$ is analytic. 
There is a $j$th-order transition\nat a boundary point of a phase if at least one $j$th-order partial\nderivative of $\\psi_\\infty^\\beta$ is discontinuous there, while\nall lower order derivatives are continuous.\n\\end{definition}\n\nMore generally, we may consider exponential models where the terms\nin the exponent defining the probability measure contain functions\non the graph space other than homomorphism densities. Let $T:\n\\tilde{\\mathcal{W}} \\rightarrow \\mathbb{R}$ be a bounded\ncontinuous function. Let the probability measure $\\mathbb P^T_n$ and the\nnormalization constant $\\psi^T_n$ be defined as in (\\ref{pmf}) and\n(\\ref{psi}), that is,\n\\begin{equation}\n\\label{pmf2}\n\\mathbb P^T_n(G_n)=\\exp\\left(n^2(T(\\tilde{h}^{G_n})-\\psi^T_n)\\right)\\mathbb P_n(G_n),\n\\end{equation}\n\\begin{equation}\n\\label{psi2} \\psi^T_n=\\frac{1}{n^2}\\log \\mathbb E_n \\exp\\left(n^2\nT(\\tilde{h}^{G_n}) \\right),\n\\end{equation}\nThen the probability measure $\\tilde{\\mathbb P}_n^T$ and the associated\nexpectation $\\tilde{\\mathbb E}_n^T$ (which coincides with $\\mathbb E_n^T$) may\nbe defined in a similar manner.\n\nWe will assume that the common distribution $\\mu$ on the edge\nweights has \\textit{finite support}, which implies that the\ngraphon space $\\mathcal{W}$ under consideration is a finite subset\nof $\\mathbb{R}$. These $L^\\infty$ graphons generalize graphons\nthat take values in $[0,1]$ only and are better suited for generic\nedge-weighted graphs instead of just simple graphs. The ``finite\nsupport'' assumption also assures that the moment generating\nfunction $M(\\theta)=\\int e^{\\theta x}\\mu(dx)$ is finite for all\n$\\theta$ and the conjugate rate function of Cram\\'{e}r, $I:\n\\mathbb{R}\\rightarrow \\mathbb{R}$, where\n\\begin{equation}\n\\label{I} I(x)=\\sup_{\\theta\\in \\mathbb{R}}\\left(\\theta x-\\log\nM(\\theta)\\right)\n\\end{equation}\nis nicely defined. The domain of the function $I$ can be extended\nto $\\tilde{\\mathcal{W}}$ in the usual manner:\n\\begin{equation}\n\\label{II} I(\\tilde{h})=\\int_{[0,1]^2}I(h(x,y))dxdy,\n\\end{equation}\nwhere $h$ is any representative element of the equivalence class\n$\\tilde{h}$. It was shown in \\cite{CV} that $I$ is well defined on\n$\\tilde{\\mathcal{W}}$ and is lower semi-continuous under the cut\nmetric $\\delta_\\square$. The space $(\\tilde{\\mathcal{W}},\n\\delta_{\\square})$ enjoys many other important properties that are\nessential for the study of exponential random graph models. It is\na compact space and homomorphism densities $t(H, \\cdot)$ are\ncontinuous functions on it. In fact, homomorphism densities\ncharacterize convergence under the cut metric: a sequence of\ngraphs converges if and only if its homomorphism densities\nconverge for all finite simple graphs, and the limiting\nhomomorphism densities then describe the resulting graphon.\n\n\\section{Statement of results}\n\\label{statement} The normalization constant plays a central role\nin statistical mechanics because it encodes essential information\nabout the structure of the probability measure; even the existence\nof its limit bears important consequences. In the case of\nexponential random graphs, the computational tools currently used\nby practitioners to compute this constant become unreliable for\nlarge networks \\cite{BBS} \\cite{SPRH}. This problem however can be\ncircumvented if we know a priori that the limit of the\nnormalization constant exists. 
One can then choose a ``scaled\ndown'' network model with a smaller number of vertices and use the\nexact value of the normalization constant in the scaled down model\nas an approximation to the normalization constant in the larger\nmodel, and a computer program that can evaluate the exact value of\nthe normalization constant for moderate sized networks would serve\nthe purpose \\cite{Hunter}. The following Theorem \\ref{main1} is a\ngeneralization of the corresponding result (Theorem 3.1) in\nChatterjee and Diaconis \\cite{CD}, where they assumed that the\nedge weights $x_{ij}$ between vertices $i$ and $j$ are iid real\nrandom variables satisfying a special common distribution --\nBernoulli having values $1$ and $0$ each with probability $1\/2$.\nFor the sake of completeness and to motivate further discussions\nin Theorem \\ref{main2}, we present the proof details below.\n\n\\begin{theorem}\n\\label{main1} Let $T: \\tilde{\\mathcal{W}} \\rightarrow \\mathbb{R}$\nbe a bounded continuous function. Let $\\psi_n^T$ and $I$ be\ndefined as before (see (\\ref{psi2}), (\\ref{I}) and (\\ref{II})).\nThen the limiting normalization constant $\\displaystyle\n\\psi^T_\\infty:=\\lim_{n\\rightarrow \\infty}\\psi_n^T$ exists, and is\ngiven by\n\\begin{equation}\n\\label{setmax} \\psi^T_\\infty=\\sup_{\\tilde{h}\\in\n\\tilde{\\mathcal{W}}}\\left(T(\\tilde{h})-\\frac{1}{2}I(\\tilde{h})\\right).\n\\end{equation}\n\\end{theorem}\n\n\\begin{proof}\nFor each Borel set $\\tilde{A} \\subseteq \\tilde{\\mathcal{W}}$ and\neach $n$, define\n\\begin{equation}\n\\label{n} \\tilde{A}^n=\\{\\tilde{h}\\in \\tilde{A}:\n\\tilde{h}=\\tilde{h}^{G_n} \\text{ for some } G_n\\in\n\\mathcal{G}_n\\}.\n\\end{equation}\nFix $\\epsilon>0$. Since $T$ is a bounded function, there is a\nfinite set $R$ such that the intervals $\\{(c, c +\\epsilon): c\\in\nR\\}$ cover the range of $T$. Since $T$ is a continuous function,\nfor each $c\\in R$, the set $\\tilde{U}_{c}$ consisting of reduced\ngraphons $\\tilde{h}$ with $c0$ there exist $C, \\gamma>0$ such that for all $n$ large\nenough,\n\\begin{equation}\n\\mathbb P_n^T(\\delta_\\square(\\tilde{h}^{G_n}, \\tilde{H})>\\eta)\\leq\nCe^{-n^2\\gamma}.\n\\end{equation}\nThis is stronger than ordinary convergence in probability, which\nonly shows that the probability decays to zero but does not give\nthe speed. Utilizing the fast exponential decay rate, we can then\nresort to probability estimates in the product space. By\nBorel-Cantelli,\n\\begin{equation}\n\\sum_{n=1}^\\infty \\mathbb P_n^T(\\delta_\\square(\\tilde{h}^{G_n},\n\\tilde{H})>\\eta)<\\infty \\text{ implies that }\n\\mathbb P_n^T(\\delta_\\square(\\tilde{h}^{G_n}, \\tilde{H})>\\eta \\text{\ninfinitely often})=0.\n\\end{equation}\nThe conclusion hence follows.\n\\end{proof}\n\nWhen the bounded continuous function $T$ on $\\tilde{\\mathcal{W}}$\nis given by the sum of graph homomorphism densities\n$T=\\sum_{i=1}^k \\beta_i t(H_i, \\cdot)$, the statement of Theorems\n\\ref{main1} and \\ref{main3} may be simplified in the\n``attractive'' region of the parameter space where the parameters\n$\\beta_2,...,\\beta_k$ are all non-negative, as seen in the\nfollowing Theorems \\ref{main4} and \\ref{gen}.\n\n\\begin{theorem}\n\\label{main4} Consider a general $k$-parameter exponential random\ngraph model (\\ref{pmf}). Suppose $\\beta_2,...,\\beta_k$ are\nnon-negative. 
Then the limiting normalization constant\n$\\displaystyle \\psi_\\infty^\\beta$ exists, and is given by\n\\begin{equation}\n\\label{lmax} \\psi_{\\infty}^{\\beta}=\\sup_{u}\\left(\\beta_1\nu^{e(H_1)}+\\cdots+\\beta_k u^{e(H_k)}-\\frac{1}{2}I(u)\\right)\\\\,\n\\end{equation}\nwhere $e(H_i)$ is the number of edges in $H_i$, $I$ is the\nCram\\'{e}r function (\\ref{I}) (\\ref{II}), and the supremum is\ntaken over all $u$ in the domain of $I$, i.e., where $I<\\infty$.\n\\end{theorem}\n\n\\begin{proof}\nThe proof of this theorem follows a similar line of reasoning as\nin the proof of the corresponding result (Theorem 4.1) in\nChatterjee and Diaconis \\cite{CD}, where the exact form of $I$ for\nthe Bernoulli distribution was given. The crucial step is\nrecognizing that $I$ is convex. So though the exact form of $I$ is\nnot always obtainable for a generic common distribution $\\mu$, the\nlog of the moment generating function $M(\\theta)$ is convex and\n$I$, being the Legendre transform of a convex function, must also\nbe convex.\n\\end{proof}\n\n\\begin{theorem}\n\\label{gen} Let $G_n$ be an exponential random graph drawn from\n(\\ref{pmf}). Suppose $\\beta_2,...,\\beta_k$ are non-negative. Then\n$G_n$ behaves like an Erd\\H{o}s-R\\'{e}nyi graph $G(n, u^*)$ in the\nlarge n limit, where $u^*$ is picked randomly from the set $U$ of\nmaximizers of (\\ref{lmax}).\n\\end{theorem}\n\n\\begin{proof}\nThe assertions in this theorem are direct consequences of Theorems\n\\ref{main3} and \\ref{main4}.\n\\end{proof}\n\n\\section{An application: Uniformly distributed edge weights}\n\\label{app} Consider the set $\\mathcal{G}_n$ of all edge-weighted\nundirected labeled graphs on $n$ vertices, where the edge weights\n$x_{ij}$ between vertex $i$ and vertex $j$ are iid real random\nvariables uniformly distributed on $(0,1)$. The common\ndistribution for the edge weights yields probability measure\n$\\mathbb P_n$ and the associated expectation $\\mathbb E_n$ on $\\mathcal{G}_n$.\nGive the set of such graphs the probability\n\\begin{equation}\n\\mathbb P_n^\\beta(G_n)=\\exp\\left(n^2\\left(\\beta_1 t(H_1, G_n)+\\beta_2\nt(H_2, G_n)-\\psi_n^\\beta\\right)\\right)\\mathbb P_n(G_n),\n\\end{equation}\nwhere $\\beta=(\\beta_1, \\beta_2)$ are $2$ real parameters, $H_1$ is\na single edge, $H_2$ is a finite simple graph with $p\\geq 2$\nedges, and $\\psi_n^\\beta$ is the normalization constant,\n\\begin{equation}\n\\label{dpsi} \\psi_n^\\beta=\\frac{1}{n^2}\\log\n\\mathbb E_n\\left(\\exp\\left(n^2\\left(\\beta_1 t(H_1, G_n)+\\beta_2 t(H_2,\nG_n)\\right)\\right)\\right).\n\\end{equation}\nThe associated expectation $\\mathbb E_n^\\beta$ may be defined\naccordingly, and a phase transition occurs when the limiting\nnormalization constant $\\displaystyle \\psi^\\beta_\\infty=\\lim_{n\\to\n \\infty}\\psi_n^{\\beta}$ has a singular point.\n\n\\begin{theorem}\n\\label{phase} For any allowed $H_2$, the limiting normalization\nconstant $\\psi_\\infty^\\beta$ is analytic at all $(\\beta_1,\n\\beta_2)$ in the upper half-plane $(\\beta_2\\geq 0)$ except on a\ncertain decreasing curve $\\beta_2=r(\\beta_1)$ which includes the\nendpoint $(\\beta_1^c, \\beta_2^c)$. 
The derivatives\n$\\frac{\\partial}{\\partial \\beta_1}\\psi_\\infty^\\beta$ and\n$\\frac{\\partial}{\\partial \\beta_2}\\psi_\\infty^\\beta$ have (jump)\ndiscontinuities across the curve, except at the end point where,\nhowever, all the second derivatives $\\frac{\\partial^2}{\\partial\n\\beta_1^2}\\psi_\\infty^\\beta$, $\\frac{\\partial^2}{\\partial \\beta_1\n\\partial \\beta_2}\\psi_\\infty^\\beta$ and $\\frac{\\partial^2}{\\partial\n\\beta_2^2}\\psi_\\infty^\\beta$ diverge.\n\\end{theorem}\n\n\\begin{corollary}\nFor any allowed $H_2$, the parameter space $\\{(\\beta_1, \\beta_2):\n\\beta_2\\geq 0\\}$ consists of a single phase with a first order\nphase transition across the indicated curve $\\beta_2=r(\\beta_1)$\nand a second order phase transition at the critical point\n$(\\beta_1^c, \\beta_2^c)$.\n\\end{corollary}\n\n\\begin{figure}\n\\centering\n\\includegraphics[clip=true, height=3.5in]{curve.pdf}\n\\caption{The V-shaped region (with phase transition curve\n$r(\\beta_1)$ inside) in the $(\\beta_1, \\beta_2)$ plane. Graph\ndrawn for $p=2$.} \\label{Vshape}\n\\end{figure}\n\n\\begin{proof}\nThe moment generating function for the uniform $(0,1)$\ndistribution is given by $M(\\theta)=(e^\\theta-1)\/\\theta$. The\nassociated Cram\\'{e}r function $I$ is finite on $(0,1)$,\n\\begin{equation}\n\\label{Iu} I(u)=\\sup_{\\theta\\in \\mathbb{R}}\\left(\\theta u-\\log\n\\frac{e^\\theta-1}{\\theta}\\right),\n\\end{equation}\nbut does not admit a closed-form expression. As shown in Theorem\n\\ref{main4}, the limiting normalization constant\n$\\psi_\\infty^\\beta$ exists and is given by\n\\begin{equation}\n\\label{smax} \\psi_\\infty^\\beta=\\sup_{0\\leq u \\leq 1}\n\\left(\\beta_1u+\\beta_2u^p-\\frac{1}{2}I(u)\\right).\n\\end{equation}\nA significant part of computing phase boundaries for the\n$2$-parameter exponential model is then a detailed analysis of the\ncalculus problem (\\ref{smax}). However, as straightforward as it\nsounds, since the exact form of $I$ is not obtainable, getting a\nclear picture of the asymptotic phase structure is not that easy\nand various tricks need to be employed.\n\nConsider the maximization problem for $l(u; \\beta_1,\n\\beta_2)=\\beta_1u+\\beta_2u^p-\\frac{1}{2}I(u)$ on the interval $[0,\n1]$, where $-\\infty<\\beta_1<\\infty$ and $0\\leq \\beta_2<\\infty$ are\nparameters. The location of maximizers of $l(u)$ on the interval\n$[0, 1]$ is closely related to properties of its derivatives\n$l'(u)$ and $l''(u)$,\n\\begin{equation}\nl'(u)=\\beta_1+p\\beta_2u^{p-1}-\\frac{1}{2}I'(u),\n\\end{equation}\n\\begin{equation*}\nl''(u)=p(p-1)\\beta_2u^{p-2}-\\frac{1}{2}I''(u).\n\\end{equation*}\n\n\\begin{figure}\n\\centering\n\\includegraphics[clip=true, width=6in, height=2.5in]{below.pdf}\n\\caption{Outside the V-shaped region, $l(u)$ has a unique local\nmaximizer (hence global maximizer) $u^*$. Graph drawn for\n$\\beta_1=-5$, $\\beta_2=3.5$, and $p=2$.} \\label{below}\n\\end{figure}\n\nFollowing the duality principle for the Legendre transform between\n$I(u)$ and $\\log M(\\theta)$ \\cite{ZRM}, we first analyze\nproperties of $l''(u)$ on the interval $[0, 1]$. Recall that\n\\begin{equation}\nI(u)+\\log \\frac{e^\\theta-1}{\\theta}=\\theta u,\n\\end{equation}\nwhere $\\theta$ and $u$ are implicitly related. 
Taking derivatives,\nwe have\n\\begin{equation}\n\\label{dual} u=\\left.\\left(\\log\n\\frac{e^\\theta-1}{\\theta}\\right)\\right|_\\theta',\n\\end{equation}\n\\begin{equation*}\nI''(u)\\cdot \\left.\\left(\\log\n\\frac{e^\\theta-1}{\\theta}\\right)\\right|_\\theta''=1.\n\\end{equation*}\nConsider the function\n\\begin{equation}\nm(u)=\\frac{I''(u)}{2p(p-1)u^{p-2}}\n\\end{equation}\non $[0, 1]$. By (\\ref{dual}), analyzing properties of $m(u)$\ntranslates to analyzing properties of the function\n\\begin{eqnarray}\nn(\\theta)&=&2p(p-1)\\left.\\left(\\log\n\\frac{e^\\theta-1}{\\theta}\\right)\\right|_\\theta''\\cdot\n\\left(\\left.\\left(\\log\n\\frac{e^\\theta-1}{\\theta}\\right)\\right|_\\theta'\\right)^{p-2}\\\\\\notag\n&=&2p(p-1)\\left(\\frac{e^{2\\theta}-e^\\theta(\\theta^2+2)+1}{(e^\\theta-1)^2\\theta^2}\\right)\\left(\\frac{e^\\theta(\\theta-1)+1}{(e^\\theta-1)\\theta}\\right)^{p-2}\n\\end{eqnarray}\nover $\\mathbb{R}$, where $m(u)n(\\theta)=1$ and $u$ and $\\theta$\nsatisfy the dual relationship. We recognize that\n\\begin{equation}\n\\lim_{\\theta\\rightarrow -\\infty}n(\\theta)=0,\n\\end{equation}\n\\begin{equation*}\n\\lim_{\\theta\\rightarrow\n0}n(\\theta)=\\frac{p(p-1)}{3}\\left(\\frac{1}{2}\\right)^{p-1},\n\\end{equation*}\n\\begin{equation*}\n\\lim_{\\theta\\rightarrow \\infty}n(\\theta)=0,\n\\end{equation*}\nwhich implies that $n(\\theta)$ achieves a finite global maximum.\nWe claim that the global maximum can only be attained at\n$\\theta_0\\geq 0$. This would further imply that there is a finite\nglobal minimum for $m(u)$, and it can only be attained at $u_0\\geq\n\\frac{1}{2}$. First suppose $p=2$. Then $n'(\\theta_0)=0$ easily\nimplies that $\\theta_0=0$. Now suppose $p>2$. Then since\n$n'(\\theta_0)=0$, we have\n\\begin{equation}\n\\label{compare}\n\\left.\\left(\\frac{e^{2\\theta}-e^\\theta(\\theta^2+2)+1}{(e^\\theta-1)^2\\theta^2}\\right)\\right|_{\\theta_0}'\\left.\\left(\\frac{e^\\theta(\\theta-1)+1}{(e^\\theta-1)\\theta}\\right)\\right|_{\\theta_0}\n=-(p-2)\\left.\\left(\\frac{e^{2\\theta}-e^\\theta(\\theta^2+2)+1}{(e^\\theta-1)^2\\theta^2}\\right)^2\\right|_{\\theta_0}<0.\n\\end{equation}\nThis says that $\\theta_0$ is on the portion of $\\mathbb{R}$ where\n$\\left(e^{2\\theta}-e^\\theta(\\theta^2+2)+1\\right)\/\\left((e^\\theta-1)^2\\theta^2\\right)$\nis decreasing, i.e., $\\theta_0>0$. As $\\theta$ goes from $0$ to\n$\\infty$, the left hand side of (\\ref{compare}) first decreases\nfrom $0$ then increases to $0$ and is always negative, whereas the\nright hand side of (\\ref{compare}) monotonically increases from\n$-(p-2)\/144$ to $0$ and approaches $0$ at a rate faster than the\nleft hand side, and so there exists a $\\theta_0$ which makes both\nsides equal. Our numerical computations show that $n'(\\theta_0)=0$\nis uniquely defined, and $n(\\theta)$ increases from $-\\infty$ to\n$\\theta_0$ and decreases from $\\theta_0$ to $\\infty$. 
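As an illustration, the critical values $\\theta_0$, $n(\\theta_0)$, $u_0$ and $m(u_0)$ can be reproduced with a short numerical computation. The following minimal sketch (a plain grid search over $\\theta$, chosen here purely for illustration and not necessarily the routine behind the numbers reported in this paper) relies only on the closed-form expressions for $(\\log M(\\theta))'$ and $(\\log M(\\theta))''$ written above and on the duality $m(u_0)\\,n(\\theta_0)=1$.
\\begin{verbatim}
# Approximate check of theta_0, n(theta_0), u_0, m(u_0) for several p.
# The grid search and the grid bounds are illustrative choices only.
import numpy as np

def logM1(t):   # (log M)'(theta) = u(theta), cf. the duality relation above
    return (np.exp(t) * (t - 1.0) + 1.0) / ((np.exp(t) - 1.0) * t)

def logM2(t):   # (log M)''(theta)
    num = np.exp(2 * t) - np.exp(t) * (t ** 2 + 2.0) + 1.0
    den = (np.exp(t) - 1.0) ** 2 * t ** 2
    return num / den

def n(t, p):    # n(theta) = 2p(p-1) (log M)''(theta) ((log M)'(theta))^(p-2)
    return 2.0 * p * (p - 1.0) * logM2(t) * logM1(t) ** (p - 2)

for p in (2, 3, 5, 10):
    theta = np.linspace(-10.0, 10.0, 4001)       # theta = 0 gives a removable 0/0
    with np.errstate(divide="ignore", invalid="ignore"):
        vals = n(theta, p)
    i = np.nanargmax(vals)                       # global maximizer theta_0 of n(theta)
    theta0, n0 = theta[i], vals[i]
    u0, m0 = logM1(theta0), 1.0 / n0             # u_0 = (log M)'(theta_0), m(u_0) n(theta_0) = 1
    print(f"p={p}: theta0={theta0:.3f}  n(theta0)={n0:.4f}  u0={u0:.4f}  m(u0)={m0:.4f}")
\\end{verbatim}
For $p=2$ the grid cannot hit $\\theta_0=0$ exactly, so the reported values are only accurate up to the grid resolution.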
See Table\n\\ref{table1} for some of these critical values.\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{ccccc}\n$p$ & $\\theta_0$ & $n(\\theta_0)$ & $u_0$ & $m(u_0)$ \\\\\n\\hline \\hline \\\\\n$2$ & $0$ & $0.3333$ & $0.5$ & $3$ \\\\\n$3$ & $1.3251$ & $0.5575$ & $0.6073$ & $1.7937$ \\\\\n$5$ & $2.9869$ & $0.8324$ & $0.7183$ & $1.2014$ \\\\\n$10$ & $5.6256$ & $1.0894$ & $0.8259$ & $0.9180$ \\\\ \\\\\n\\end{tabular}\n\\end{center}\n\\caption{Critical values for $m(u)$ and $n(\\theta)$ as a function\nof $p$.} \\label{table1}\n\\end{table}\n\nThe above analysis shows that for $\\beta_2\\leq m(u_0)$,\n$l''(u)\\leq 0$ over the entire interval $[0, 1]$; while for\n$\\beta_2>m(u_0)$, $l''(u)$ takes on both positive and negative\nvalues, and we denote the transition points by $u_1$ and $u_2$\n($u_1m(u_0)$, $l'(u)$ is decreasing from $0$ to $u_1$,\nincreasing from $u_1$ to $u_2$, and then decreasing again from\n$u_2$ to $1$.\n\nThe analytic properties of $l''(u)$ and $l'(u)$ help us\ninvestigate properties of $l(u)$ on the interval $[0, 1]$. Being\nthe Legendre transform of a smooth function, $I(u)$ is a smooth\nfunction, and grows unbounded when $u$ approaches $0$ or $1$. By\nthe duality principle for the Legendre transform, $I'(u)=\\theta$\nwhere $\\theta$ and $u$ are linked through (\\ref{dual}), so\n$I'(0)=-\\infty$ and $I'(1)=\\infty$. This says that $l(u)$ is a\nsmooth function, $l(0)=l(1)=-\\infty$, $l'(0)=\\infty$ and\n$l'(1)=-\\infty$, so $l(u)$ can not be maximized at $0$ or $1$. For\n$\\beta_2\\leq m(u_0)$, $l'(u)$ crosses the $u$-axis only once,\ngoing from positive to negative. Thus $l(u)$ has a unique local\nmaximizer (hence global maximizer) $u^*$. For $\\beta_2>m(u_0)$,\nthe situation is more complicated. If $l'(u_1)\\geq 0$ (resp.\n$l'(u_2)\\leq 0$), $l(u)$ has a unique local maximizer (hence\nglobal maximizer) at a point $u^*>u_2$ (resp. $u^*2$,\nsince $g'(\\theta_0)=0$, we have\n\\begin{multline}\n\\left.\\left(\\frac{e^\\theta(\\theta-1)+1}{(e^\\theta-1)\\theta}\\right)\\right|_{\\theta_0}'\\left.\\left(\\frac{e^{2\\theta}-e^\\theta(\\theta^2+2)+1}{(e^\\theta-1)^2\\theta^2}\\right)\\right|_{\\theta_0}-\n\\left.\\left(\\frac{e^{2\\theta}-e^\\theta(\\theta^2+2)+1}{(e^\\theta-1)^2\\theta^2}\\right)\\right|_{\\theta_0}'\\left.\\left(\\frac{e^\\theta(\\theta-1)+1}{(e^\\theta-1)\\theta}\\right)\\right|_{\\theta_0}\n\\\\=(p-1)\\left.\\left(\\frac{e^{2\\theta}-e^\\theta(\\theta^2+2)+1}{(e^\\theta-1)^2\\theta^2}\\right)^2\\right|_{\\theta_0},\n\\end{multline}\nwhich yields the same solution as (\\ref{compare}) after a simple\ntransformation. Thus $g(\\theta)$ decreases from $-\\infty$ to\n$\\theta_0$ and increases from $\\theta_0$ to $\\infty$. This further\nimplies that there is a finite global minimum for $f(u)$ attained\nat $u_0$, and $f(u)$ decreases from $0$ to $u_0$ and increases\nfrom $u_0$ to $1$. Both $\\theta_0$ and $u_0$ coincide with the\ncritical values listed in Table \\ref{table1}. See Table\n\\ref{table2}. We conclude that $l'(u_1)\\geq 0$ for $\\beta_1\\geq\n-f(u_0)$ and $\\beta_2=m(u_0)$. 
The only possible region in the\n$(\\beta_1, \\beta_2)$ plane where $l'(u_1)<0m(u_0):=\\beta_2^c$.\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{ccccc}\n$p$ & $\\theta_0$ & $g(\\theta_0)$ & $u_0$ & $f(u_0)$ \\\\\n\\hline \\hline \\\\\n$2$ & $0$ & $3$ & $0.5$ & $3$ \\\\\n$3$ & $1.3251$ & $1.3222$ & $0.6073$ & $1.3222$ \\\\\n$5$ & $2.9869$ & $0.1059$ & $0.7183$ & $0.1059$ \\\\\n$10$ & $5.6256$ & $-1.1723$ & $0.8259$ & $-1.1723$ \\\\ \\\\\n\\end{tabular}\n\\end{center}\n\\caption{Critical values for $f(u)$ and $g(\\theta)$ as a function\nof $p$.} \\label{table2}\n\\end{table}\n\nWe examine the behavior of $l'(u_1)$ and $l'(u_2)$ more closely\nwhen $\\beta_1$ and $\\beta_2$ are chosen from this region. Recall\nthat $u_1a(\\beta_1)$ and $l'(u_2)>0$ for $u_2>b(\\beta_1)$. As\n$\\beta_1\\rightarrow -\\infty$, $a(\\beta_1)\\rightarrow 0$ and\n$b(\\beta_1)\\rightarrow 1$. $a(\\beta_1)$ is an increasing function\nof $\\beta_1$, whereas $b(\\beta_1)$ is a decreasing function, and\nthey satisfy $f(a(\\beta_1))=f(b(\\beta_1))=-\\beta_1$. The\nrestrictions on $u_1$ and $u_2$ yield restrictions on $\\beta_2$,\nand we have $l'(u_1)<0$ for $\\beta_20$ for $\\beta_2>m(b(\\beta_1))$. As $\\beta_1\\rightarrow\n-\\infty$, $m(a(\\beta_1))\\rightarrow \\infty$ and\n$m(b(\\beta_1))\\rightarrow \\infty$. $m(a(\\beta_1))$ and\n$m(b(\\beta_1))$ are both decreasing functions of $\\beta_1$, and\nthey satisfy $l'(u_1)=0$ when $\\beta_2=m(a(\\beta_1))$ and\n$l'(u_2)=0$ when $\\beta_2=m(b(\\beta_1))$. As $l'(u_2)>l'(u_1)$ for\nevery $(\\beta_1, \\beta_2)$, the curve $m(b(\\beta_1))$ must lie\nbelow the curve $m(a(\\beta_1))$, and together they generate the\nbounding curves of the $V$-shaped region in the $(\\beta_1,\n\\beta_2)$ plane with corner point $(\\beta_1^c, \\beta_2^c)$ where\ntwo local maximizers exist for $l(u)$. See Figures \\ref{Vshape},\n\\ref{lower} and \\ref{upper}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[clip=true, width=6in, height=2.5in]{lower.pdf}\n\\caption{Along the lower bounding curve $m(b(\\beta_1))$ of the\nV-shaped region, $l'(u)$ has two zeros $u_1^*$ and $u_2^*$, but\nonly $u_1^*$ is the global maximizer for $l(u)$. Graph drawn for\n$\\beta_1=-5.7$, $\\beta_2=5$, and $p=2$.} \\label{lower}\n\\end{figure}\n\nFix an arbitrary $\\beta_1<\\beta_1^c$, we examine the effect of\nvarying $\\beta_2$ on the graph of $l'(u)$. It is clear that\n$l'(u)$ shifts upward as $\\beta_2$ increases and downward as\n$\\beta_2$ decreases. As a result, as $\\beta_2$ gets large, the\npositive area bounded by the curve $l'(u)$ increases, whereas the\nnegative area decreases. By the fundamental theorem of calculus,\nthe difference between the positive and negative areas is the\ndifference between $l(u_2^*)$ and $l(u_1^*)$, which goes from\nnegative ($l'(u_2)=0$, $u_1^*$ is the global maximizer) to\npositive ($l'(u_1)=0$, $u_2^*$ is the global maximizer) as\n$\\beta_2$ goes from $m(b(\\beta_1))$ to $m(a(\\beta_1))$. Thus there\nmust be a unique $\\beta_2$: $m(b(\\beta_1))<\\beta_22$. Analytical\ncalculations give that $u-u^p<(1-u)-(1-u)^p$ for\n$00$), the global maximizer $u^*$ of\n$l(u)$ satisfies $u^*\\leq \\frac{1}{2}$. This implies the desired\nresult.\n\\end{proof}\n\n\\begin{figure}\n\\centering\n\\includegraphics[clip=true, width=6in, height=2.5in]{along.pdf}\n\\caption{Along the phase transition curve $r(\\beta_1)$, $l(u)$ has\ntwo local maximizers $u_1^*$ and $u_2^*$, and both are global\nmaximizers for $l(u)$. 
Graph drawn for $\\beta_1=-5$, $\\beta_2=5$,\nand $p=2$.} \\label{along}\n\\end{figure}\n\n\\section{Further discussion}\n\\label{discuss} But what if the common distribution $\\mu$ on the\nedge weights does not have finite support? Unfortunately, we do\nnot have a generic large deviation principle in this case. As an\nexample, we will examine the asymptotic phase structure for a\ncompletely solvable exponential model and hope the analysis would\nprovide some inspiration. Let $G_n\\in \\mathcal{G}_n$ be an\nedge-weighted directed labeled graph on $n$ vertices, where the\nedge weights $x_{ij}$ from vertex $i$ to vertex $j$ are iid real\nrandom variables whose common distribution $\\mu$ is standard\nGaussian. As in the undirected case, the common distribution for\nthe edge weights yields probability measure $\\mathbb P_n$ and the\nassociated expectation $\\mathbb E_n$ on $\\mathcal{G}_n$. Give the set of\nsuch graphs the probability\n\\begin{equation}\n\\mathbb P_n^\\beta(G_n)=\\exp\\left(n^2\\left(\\beta_1e(G_n)+\\beta_2s(G_n)-\\psi_n^\\beta\\right)\\right)\\mathbb P_n(G_n),\n\\end{equation}\nwhere $\\beta=(\\beta_1, \\beta_2)$ are $2$ real parameters, $H_1$ is\na directed edge, $H_2$ is a directed $2$-star, $e(G_n)$ and\n$s(G_n)$ are respectively the directed edge and $2$-star\nhomomorphism densities of $G_n$,\n\\begin{equation}\ne(G_n)=\\frac{1}{n^2}\\sum_{1\\leq i,j\\leq n} x_{ij}, \\hspace{0.2cm}\ns(G_n)=\\frac{1}{n^3}\\sum_{1\\leq i,j,k \\leq n} x_{ij}x_{ik},\n\\end{equation}\nand $\\psi_n^\\beta$ is the normalization constant,\n\\begin{equation}\n\\label{dpsi} \\psi_n^\\beta=\\frac{1}{n^2}\\log\n\\mathbb E_n\\left(\\exp\\left(n^2\\left(\\beta_1e(G_n)+\\beta_2s(G_n)\\right)\\right)\\right).\n\\end{equation}\nThe associated expectation $\\mathbb E_n^\\beta$ may be defined\naccordingly, and a phase transition occurs when the limiting\nnormalization constant $\\displaystyle \\psi^\\beta_\\infty=\\lim_{n\\to\n \\infty}\\psi_n^{\\beta}$ has a singular point.\n\nPlugging the formulas for $e(G_n)$ and $s(G_n)$ into (\\ref{dpsi})\nand using the iid property of the edge weights, we have\n\\begin{eqnarray}\n\\psi_n^\\beta&=&\\frac{1}{n^2}\\log\n\\mathbb E_n\\left(\\exp\\left(\\beta_1\\sum_{i=1}^n\\left(\\sum_{j=1}^n\nx_{ij}\\right)+\\frac{\\beta_2}{n}\\sum_{i=1}^n\\left(\\sum_{j=1}^n\nx_{ij}\\right)^2\\right)\\right)\\\\\n&=&\\frac{1}{n}\\log\\mathbb E \\left(\\exp\\left(\\beta_1\nY+\\frac{\\beta_2}{n}Y^2\\right)\\right),\\notag\n\\end{eqnarray}\nwhere $Y=\\sum_{j=1}^n x_{1j}$ satisfies a Gaussian distribution\nwith mean $0$ and variance $n$, and $\\mathbb E$ is the associated\nexpectation. We compute, for $\\beta_2<\\frac{1}{2}$,\n\\begin{equation}\n\\mathbb E \\left(\\exp\\left(\\beta_1\nY+\\frac{\\beta_2}{n}Y^2\\right)\\right)=\\int_{-\\infty}^\\infty\ne^{\\beta_1y+\\frac{\\beta_2}{n}y^2}\\frac{1}{\\sqrt{2\\pi\nn}}e^{-\\frac{y^2}{2n}}dy\n=\\frac{1}{\\sqrt{1-2\\beta_2}}e^{\\frac{n\\beta_1^2}{2(1-2\\beta_2)}},\n\\end{equation}\nwhich implies that\n\\begin{equation}\n\\psi_\\infty^\\beta=\\frac{\\beta_1^2}{2(1-2\\beta_2)}.\n\\end{equation}\nThis is a smooth function in terms of the parameters $\\beta_1$ and\n$\\beta_2$, and so $\\psi_\\infty^\\beta$ does not admit a phase\ntransition.\n\n\\section*{Acknowledgements}\nThis work originated at the Special Session on Topics in\nProbability at the 2016 AMS Western Spring Sectional Meeting,\norganized by Tom Alberts and Arjun Krishnan. Mei Yin's research\nwas partially supported by NSF grant DMS-1308333. 
She thanks Sean\nO'Rourke and Lingjiong Zhu for helpful conversations.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:Intro}\n\nSequence modelling is an important problem in NLP, as many NLP tasks can be modelled as sequence-to-sequence decoding.\nAmong them are POS tagging, chunking, named entity recognition \\cite{Collobert-2011-NLP-1953048.2078186}, Spoken Language Understanding (SLU) for human-computer interactions \\cite{demori08-SPM}, and also machine translation \\cite{Sutskever-2014-SSL-2969033.2969173,DBLP-journals-corr-BahdanauCB14}.\n\nIn other cases, NLP tasks can be decomposed, at least in principle, in several subtasks, the first of which is a sequence modelling problem.\nFor instance, syntactic parsing can be performed by applying syntactic analysis to POS-tagged sentences \\cite{Collins-1997-TGL}; coreference chain detection \\cite{Soon2001,Ng2002,Grouin.etAL:I2B2:2011} can be decomposed into mention detection and coreferent mention linking; and structured named entity detection \\cite{Grouin.etAL:I2B2:2011,dinarelli2012-eacl,DINARELLI-ROSSET:OCR-NER:LREC2012}, can be done by first detecting simple entity components then combining them to construct complex tree-shaped entities.\n\nMost of these tasks can also be performed by a single model: either as a joint architecture like the joint model for POS tagging and syntactic analysis from \\cite{Rush-2012-IPP-2390948.2391112} or with a fully end-to-end model like the one developed by \\cite{D17-1018} for coreference detection.\nIn any case, these models still include at some point a sequence modelling module that could be improved by studying successful models for the related sequence labelling tasks.\n\nThis is even more true for neural models, since designing a single complex neural architecture for a complex problem may indeed lead to sub-optimal learning.\nFor this reason, it may be more desirable to train a sequence labelling model alone at first and to learn to perform the other steps using the pre-trained parameters of the first step's model, as is done for instance when using pre-trained lexical embeddings in a downstream model \\cite{lample2016neural,Ma-Hovy-ACL-2016}.\nIn that case, care must be taken to avoid too unrelated downstream tasks that could lead to \\emph{Catastrophic forgetting} \\cite{kemker2018measuring}, though some hierarchical multi-task architectures have proven successful \\cite{N18-1172}.\n\nFinally, \\cite{DBLP-journals-corr-VinyalsKKPSH14} has shown that it is possible to model syntactic analysis as a sequence labelling problem by adapting a \\emph{Seq2seq} model. 
As a consequence, we could actually design a unified multi-task learning neural architecture for a large class of NLP problems, by recasting them as sequence decoding tasks.\n\nRecurrent Neural Networks (RNNs) hold state-of-the-art results in many NLP tasks, and in particular in sequence modelling problems \\cite{lample2016neural,Ma-Hovy-ACL-2016,dinarelli_hal-01553830,D17-1018}.\nGated RNNs such as GRU and LSTM are particularly effective for sequence labelling thanks to an architecture that allows them to use long-range information in their internal representations \\cite{werbos-bptt,Hochreiter-1997-LSTM,Cho-2014-GatedRecurrentUnits}.\n\nIn this paper we focus our work to searching for more effective neural models for sequence labelling tasks such as POS tagging or Spoken Language Understanding (SLU).\nSeveral very effective solutions already exist for these problems, in particular the sequence-to-sequence model \\cite{Sutskever-2014-SSL-2969033.2969173} (\\emph{Seq2seq} henceforth), the \\emph{Transformer} model \\cite{46201}, and the whole family of models using a neural CRF layer on top of one or several LSTM or GRU layers \\cite{Hochreiter-1997-LSTM,Cho-2014-GatedRecurrentUnits,lample2016neural,Ma-Hovy-ACL-2016,Vukotic.etal_2016,LSTM-CNN-NER-2015,huang2015bidirectional}.\n\nWe propose an alternative neural architecture to those mentioned above.\nThis architecture uses GRU recurrent layers as internal memory capable of taking into account arbitrarily long contexts of both input (words and characters), and output (labels).\nOur architecture is a variant of the \\emph{Seq2seq} model where two different decoders are used instead of only one of the original architecture. The first decoder goes backward through the sequence, outputting label predictions, using the hidden states of the encoder and its own previous hidden states and label predictions as input.\nThe second decoder is a more standard forward decoder that uses the hidden states of the encoder, the hidden states and \\emph{future} predictions generated by the backward decoder and its own previous hidden states and predictions to output labels.\nWe name this architecture \\emph{Seq2biseq}, as it generates output sequences from output-wise bidirectional, global decisions.\n\nOur work is inspired by previous work published in \\cite{dinarelli_hal-01553830,Dupont-etAl-LDRNN-CICling2017,2016:arXiv:DinarelliTellier:NewRNN,DinarelliTellier:RNN:CICling2016}, where bidirectional output-wise decisions were taken using a simple recurrent network.\nA similar idea, called \\textit{deliberation network}, has been proposed in \\cite{NIPS2017_6775}, where however two forward decoders were used. In this respect we believe that using a backward decoder for the first pass may encode more different, expressive information for the second, forward pass.\nOur architecture takes global decisions like a \\emph{LSTM+CRF} model \\cite{lample2016neural} thanks to the use of the two decoders. 
These take global context into account on both sides of a given position of the input sequence.\n\nWe compare our solution with state-of-the-art models for SLU and POS-tagging in particular the models described in \\cite{dinarelli_hal-01553830,Dupont-etAl-LDRNN-CICling2017} and in \\cite{lample2016neural}.\nIn order to make a direct comparison, we evaluate our models on the same tasks: a French SLU task provided with the MEDIA corpus \\cite{Bonneau-Maynard2006-media}, and the well-known task of POS-tagging of the Wall Street Journal portion of the Penn Treebank \\cite{Marcus93buildinga}.\n\nOur results are all reasonably close to the state of the art, and most of them are actually better.\n\nThe paper is organized as follows: in the next section we describe the state-of-the-art of neural models for sequence labelling.\nIn the section~\\ref{sec:GRU-IRNN} we describe the neural model we propose in this paper, while in the section~\\ref{sec:eval} we describe the experiments we performed to evaluate our models.\nWe draw our conclusions in the section~\\ref{sec:Conclusions}\n\n\\section{State of the Art}\n\\label{sec:SOTA}\n\nThe two main neural architectures used for sequence modelling are the \\emph{Seq2seq} model \\cite{Sutskever-2014-SSL-2969033.2969173} and a group of models where a neural CRF output layer is stacked on top of one or several LSTM or GRU layers \\cite{Hochreiter-1997-LSTM,Cho-2014-GatedRecurrentUnits,lample2016neural,Ma-Hovy-ACL-2016,Vukotic.etal_2016,LSTM-CNN-NER-2015,huang2015bidirectional}.\n\nThe \\emph{Seq2seq} model, also known as \\emph{encoder-decoder}, uses a first module to encode the input sequence as a single vector $c$.\nIn the version of this model proposed in \\cite{Sutskever-2014-SSL-2969033.2969173} $c$ is the hidden state of the encoder after seeing the whole input sequence.\nA second module decodes the output sequence using its previous predictions and $c$ as input.\n\nThe subsequent work of \\cite{DBLP-journals-corr-BahdanauCB14} extends this model with an attention mechanism.\nThis mechanism provides the decoder with a dynamic representation of the input that depends on the decoding step, which proved to be more efficient for translating long sentences.\n\nThis mechanism has also been turned out to be effective for other NLP tasks \\cite{D17-1018,DBLP-journals-corr-KimDHR17,simonnet-hal-01433202}.\n\nConcerning models using a neural CRF output layer \\cite{Ma-Hovy-ACL-2016,lample2016neural}, a first version was already described in \\cite{Collobert-2011-NLP-1953048.2078186}.\nThese solutions use one or more recurrent hidden layers to encode input items (words) in context. Earlier simple recurrent layers like \\emph{Elman} and \\emph{Jordan} \\cite{Elman90findingstructure,jordan-serial}, which showed limitations for learning long-range dependencies \\cite{Bengio-1994-RNN-Learning-Difficulty}, have been replaced by more sophisticated layers like LSTM and GRU \\cite{Hochreiter-1997-LSTM,Cho-2014-GatedRecurrentUnits}, which reduced such limitations by using gates.\n\nIn this type of neural models, a first representation of the prediction is computed with a local output layer.\nIn order to compute global predictions with a CRF neural layer, the \\emph{Viterbi} algorithm is applied over the sequence of local predictions \\cite{Collobert-2011-NLP-1953048.2078186,Mesnil-RNN-2015}.\n\nA more recent neural architecture for sequence modelling is the \\emph{Transformer} model \\cite{46201}. 
This model use an innovative deep non-recurrent neural architecture, relying heavily on attention mechanisms \\cite{DBLP-journals-corr-BahdanauCB14} and skip connections \\cite{Bengio03aneural} to overcome limitations of recurrent networks in propagating the learning signal over long paths. The Transformer model has been designed for computational efficiency reasons, but it captures long-range contexts with multiple attention mechanisms (multi-head attention) applied to the whole input sequence. Skip-connections guarantee that the learning signal is back-propagated effectively to all the network layers.\n\nConcerning previous works on the same tasks used in this work, namely MEDIA \\cite{Bonneau-Maynard2006-media} and the Penn Treebank (WSJ) \\cite{Marcus93buildinga}, several publications have been produced starting from $2007$ (MEDIA) and $2002$ (WSJ) \\cite{raymond07-luna,dinarelli09:Interspeech,Hahn.etAL-SLUJournal-2010,Dinarelli2010.PhDThesis,dinarelli2011:emnlp,Dinarelli.etAl-SLU-RR-2011}, applying several different models like \\emph{SVM} and \\emph{CRF} \\cite{Vapnik98-book,lafferty01-crf}.\nStarting from 2013 several works also focused on neural models. At first simple recurrent networks have been used \\cite{RNNforSLU-Interspeech-2013,RNNforLU-Interspeech-2013,Vukotic.etal_2015}. In the last few years also more sophisticated models have been studied \\cite{SLUwithLSTM-NN-IEEEWshop-2014,Vukotic.etal_2016,dinarelli_hal-01553830}.\n\n\\section{The Seq2biseq Neural Architecture}\n\\label{sec:GRU-IRNN}\n\n\\begin{figure}\n\\center\n\\begin{tikzpicture}\n\t\\def$h_{w_1}$, $h_{w_2}$, $h_{w_3}${$w_1$, $w_2$, $w_3$}\n\t\\def$\\overrightarrow{e_1}$, $\\overrightarrow{e_2}$, $\\overrightarrow{e_3}${$\\overrightarrow{e_1}$, $\\overrightarrow{e_2}$, $\\overrightarrow{e_3}$}\n\t\\def$\\overleftarrow{e_1}$, $\\overleftarrow{e_2}$, $\\overleftarrow{e_3}${$\\overleftarrow{e_1}$, $\\overleftarrow{e_2}$, $\\overleftarrow{e_3}$}\n\t\n\t\\begin{scope}[local bounding box=net]\n\t\\foreach \\w [count=\\wi from 1, remember=\\wi as \\lastwi] in $h_{w_1}$, $h_{w_2}$, $h_{w_3}$ {\n\t\\ifnum\\wi>1\n \t\\node[right=6.5em of w\\lastwi.center, text height=1.5ex, text depth=0.25ex, anchor=center] (w\\wi) {\\w};\n \\else\n \t\\node[text height=1.5ex, text depth=0.25ex] (w\\wi) {\\w};\n \\fi\n\t}\n\t\\foreach \\w [count=\\wi from 1, remember=\\wi as \\lastwi] in $h_{w_1}$, $h_{w_2}$, $h_{w_3}$ {\n\t\t\\draw[->] (w\\wi) -- +(0, 2em) node[draw, anchor=south, rounded corners=1pt, inner sep=0.3em] (ew\\wi) {encoder};\n\t\t\\ifnum\\wi>1\n\t\t\t\\draw[<->] (ew\\lastwi) -- (ew\\wi);\n\t\t\\fi\n\t}\n\t\n\t\\foreach \\w [count=\\wi from 1, remember=\\wi as \\lastwi] in $h_{w_1}$, $h_{w_2}$, $h_{w_3}$ {\n\t\t\\draw[->] (ew\\wi) -- +(-0.5, 2em) node[draw, anchor=south, rounded corners=1pt, inner sep=0.3em] (bdec\\wi) {$\\overleftarrow{decoder}$};\n\t\t\\ifnum\\wi>1\n\t\t\t\\draw[->] (bdec\\wi) -- (bdec\\lastwi);\n\t\t\\fi\n\t}\n\t\n\t\\foreach \\w [count=\\wi from 1, remember=\\wi as \\lastwi] in $h_{w_1}$, $h_{w_2}$, $h_{w_3}$ {\n\t\t\\draw[->,red] (ew\\wi) to[bend right=30] +(+0.5, 5em) node[draw, anchor=south, rounded corners=1pt, inner sep=0.3em] (fdec\\wi) {$\\overrightarrow{decoder}$};\n\t\t\\draw[->] (bdec\\wi) -- (fdec\\wi);\n\t\t\\ifnum\\wi>1\n\t\t\t\\draw[->] (fdec\\lastwi) -- (fdec\\wi);\n\t\t\\fi\n\t}\n\t\n\t\\foreach \\y [count=\\yi from 1, remember=\\yi as \\lastyi] in $\\overrightarrow{e_1}$, $\\overrightarrow{e_2}$, $\\overrightarrow{e_3}$ {\n \t\\draw[red,->] (fdec\\yi) -- +(0, 3em) node[text height=1.5ex, 
text depth=0.25ex, anchor=south] (fy\\yi) {\\y};\n \t\\ifnum\\yi>1\n \t\t\\draw[red,->] (fy\\lastyi) to[bend right=30] (fdec\\yi);\n \t\\fi\n\t}\n\t\n\t\\foreach \\y [count=\\yi from 1, remember=\\yi as \\lastyi] in $\\overleftarrow{e_1}$, $\\overleftarrow{e_2}$, $\\overleftarrow{e_3}$ {\n\t\t\\draw[->] (bdec\\yi) -- +(0, 5.3em) node[text height=1.5ex, text depth=0.25ex, anchor=south] (by\\yi) {\\y};\n\t}\n\t\n\t\\draw[->] (by3) to[bend left=42] (bdec2.east);\n\t\\draw[->] (by2) to[bend left=42] (bdec1.east);\n\n \\end{scope}\n\n\\end{tikzpicture}\n \\caption{Overall network structure}\\label{fig:network-structure}\n\\end{figure}\n\nAs an alternative to the \\emph{Seq2seq} and \\emph{LSTM+CRF} neural models for sequence labelling, we propose in this paper a new neural architecture inspired from the original \\emph{Seq2seq} model and from models described in \\cite{dinarelli_hal-01553830,Dupont-etAl-LDRNN-CICling2017}.\nFigure~\\ref{fig:network-structure} shows the overall architecture.\n\nOur architecture is similar to the \\emph{Seq2seq} model in that we use modules to encode a long-range context on the output side similar to the decoder of the \\emph{Seq2seq} architecture.\nThe similarity with respect to models described in \\cite{dinarelli_hal-01553830,Dupont-etAl-LDRNN-CICling2017} is the use of a bidirectional context on the output side in order to take into account previous, but also future predictions for the current model decision. Future predictions are computed by an independent decoder which processes the input sequence backward.\n\nOur architecture extends the \\emph{Seq2seq} original model through the use of an additional backward decoder that allows taking into account both past and future information at decoding time.\nOur architecture also improves the models described in \\cite{dinarelli_hal-01553830,Dupont-etAl-LDRNN-CICling2017} since it uses more sophisticated layers to model long-range contexts on the output side, while previous models used fixed-size windows and simple linear hidden layers.\nThanks to these modifications our model makes predictions informed by a global distributional context, which approximates a global decision function.\nWe also improve the character-level word representations by using a similar solution to the one proposed in \\cite{Ma-Hovy-ACL-2016}.\n\nOur neural architecture is based on the use of GRU recurrent layers at word, character and label levels.\nGRU is an evolution of the LSTM recurrent layer which has often shown better capacities to model contextual information \\cite{Cho-2014-GatedRecurrentUnits,Vukotic.etal_2016}.\n\nIn order to make notation clear, in the following sections, bidirectional GRU hidden layers are noted $\\GRU$, while we use $\\fGRU$ and $\\bGRU$ for a forward and backward hidden layer respectively.\nFor the output of these layers we use respectively $h_{w_i}$, $\\overrightarrow{h_{e_i}}$ and $\\overleftarrow{h_{e_i}}$, with a letter as index to specialize the GRU layer for a specific input (e.g. $w$ for the GRU layer used for words, $e$ for labels, or entities, and so on), and an index $i$ to indicate the index position in the current sequence.\nFor example $\\overleftarrow{h_{e_i}}$ is the backward hidden state, at current position ($i$), of the GRU layer for labels.\nThe models described in this work always use as input words, characters and labels. Their respective embedding matrices are all noted $E_x$, with $x$ denoting the different input unit types (e.g. 
$E_w$ is the embedding matrix for words), and their dimensions $D_x$.\n\n\\subsection{Character-level Representations}\n\\label{subsec:char_rep}\n\nThe character-level representation of words was computed at first as in \\cite{Ma-Hovy-ACL-2016}, substituting a GRU to the LSTM layer:\nthe characters $c_{w, 1}, \\dots, c_{w, n}$ of a word $w$ are first represented as a sequence $S_c(w)$ of $n$ $D_c$-dimensional embeddings. These are fed to the $\\GRU_c$ layer. The final state $h_c(w)$ is kept as the character level representation of $w$.\n\nWe improved this module so that it generates a character-level representation using all the hidden states generated by $\\GRU_c$:\n\\begin{equation}\\label{eq:charrep}\n \\begin{aligned}\n S_c(w) &= (E_{c}(c_{w, 1}), \\dots, E_c(c_{w, n})) \\\\\n (h_c(c_{w, 1}), \\dots, h_c(c_{w, n})) &= \\GRU_c(S_{c}(w), h_0^c) \\\\\n h_c(w) &= \\FFNN( Sum( h_c(c_{w, 1}), \\dots, h_c(c_{w, n}) ) )\n \\end{aligned}\n\\end{equation}\n$\\FFNN$ is again a general, possibly multi-layer Feed-Forward Neural Network with non-linear activation functions. This new architecture was inspired by \\cite{46201}, where $\\FFNN$s were used to extract deeper features at each layer.\n\nPreliminary experiments have shown that this character-level representation is more effective than the one inspired by the work of \\cite{Ma-Hovy-ACL-2016}.\n\n\\subsection{Word-level Representations}\n\\label{subsec:word_rep}\n\nWords are first mapped into embeddings, then the embedding sequence is processed by a $\\GRU_{w}$ bidirectional layer.\nUsing the same notation as for characters, a sequence of words $S = w_1, \\dots, w_N$ is converted into embeddings $E_w(w_i)$ with $1\\leq i \\leq N$.\nWe denote $S_i = w_1, \\dots, w_i$ the sub-sequence of $S$ up to the words $w_i$.\nIn order to augment the word representations with their character-level representations, and to use a single distributed representation, we concatenate the character-level representations $h_{c}(w_i)$ (eq.~\\ref{eq:charrep}) to the word embeddings before feeding the $\\GRU_w$ layer with the whole sequence. 
Formally:\n\\begin{equation}\\label{eq:lex-rep}\n \\begin{aligned}\n S_w &= (E_{w}(w_{1}), \\dots, E_{w}(w_{N})) \\\\\n S^{lex} &= ([E_{w}(w_{1}), h_{c}(w_1)], \\dots, [E_{w}(w_{N}), h_{c}(w_N)]) \\\\\n h_{w_i} &= \\GRU_w(S_i^{lex}, h_{w_{i-1}})\n \\end{aligned}\n\\end{equation}\nWhere we used $S_w$ for the whole sequence of word embeddings generated from the word sequence $S$.\n\nIn the same way, $S^{\\mathrm{lex}}$ is the sequence obtained concatenating word embeddings and character-level representations, which constitute the lexical-level information given as input to the model.\n$[~]$ is the matrix (or vector) concatenation, and we also used the notation $S_i^{\\mathrm{lex}}$ for the sub-sequence of $S^{\\mathrm{lex}}$ up to position $i$.\n\n\\subsection{Label-level Representations}\n\\label{subsec:label_rep}\n\nIn order to obtain label representations encoding long-range contexts, we use a $\\GRU$ hidden layer also on label embeddings.\nWe apply first a backward step on label embeddings in order to compute representations that will be used as future label predictions, or right context, in the following forward step.\nUsing the same notation as used previously, we have:\n\\begin{equation}\\label{eq:label-bw}\n \\overleftarrow{h_{e_i}} = \\bGRU_e(E_l(e_{i+1}), \\overleftarrow{h_{e_{i+1}}})\n\\end{equation}\nfor $i = N \\dots 1$.\nWe note that here we use the label on the right of the current position, $e_{i+1}$, $e_{i}$ is not known at time step $i$.\n\nThe hidden state $\\overleftarrow{h_{e_{i+1}}}$ is the hidden state computed at previous position in the backward step, thus associated to the label on the right of the current label to be predicted. In other words we interpret $\\overleftarrow{h_{e_{i}}}$ as the right context of the (unknown) label $e_i$, instead of as the in-context representation of $e_i$ itself, and similarly for $\\overleftarrow{h_{e_{i+1}}}$.\nThe right context of $e_i$, $\\overleftarrow{h_{e_i}}$, is used to predict $e_i$ at time step $i$.\n\nIn the same way, we compute the representation of the left context of the label $e_i$ by scanning the input sequence forward, which gives:\n\\begin{equation}\\label{eq:label-fw}\n \\overrightarrow{h_{e_i}} = \\fGRU_{e}(E_l(e_{i-1}), \\overrightarrow{h_{e_{i-1}}})\n\\end{equation}\nfor $i = 1 \\dots N$.\nThe neural components described so far are already sufficient to build rich architectures.\nHowever, we believe that the information from the lexical context is useful not only to disambiguate the current word in-context, but also to disambiguate the contextual representations used for label prediction.\nIndeed, in sequence labelling labels only provide abstract lexical or semantic information. 
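To make the encoders just described concrete, the following minimal PyTorch sketch combines the character-level representation of Eq.~(\\ref{eq:charrep}) with the word-level layer of Eq.~(\\ref{eq:lex-rep}). The layer sizes, the single-layer FFNN and the tensor shapes are illustrative assumptions rather than the exact implementation.
\\begin{verbatim}
import torch
import torch.nn as nn

class CharWordEncoder(nn.Module):
    # Sketch of GRU_c (characters) and GRU_w (words); sizes are illustrative.
    def __init__(self, n_chars, n_words, d_char=30, d_char_hid=100,
                 d_word=200, d_hid=300):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, d_char)
        self.char_gru = nn.GRU(d_char, d_char_hid, bidirectional=True,
                               batch_first=True)
        # FFNN applied to the sum of the character hidden states (Eq. charrep)
        self.char_ffnn = nn.Sequential(nn.Linear(2 * d_char_hid, d_char_hid),
                                       nn.Tanh())
        self.word_emb = nn.Embedding(n_words, d_word)
        # bidirectional GRU over [word embedding ; char rep] (Eq. lex-rep)
        self.word_gru = nn.GRU(d_word + d_char_hid, d_hid, bidirectional=True,
                               batch_first=True)

    def char_rep(self, chars):
        # chars: (N, max_chars) character indices, one row per word
        h, _ = self.char_gru(self.char_emb(chars))
        return self.char_ffnn(h.sum(dim=1))          # (N, d_char_hid)

    def forward(self, words, chars):
        # words: (1, N) word indices of one sentence
        lex = torch.cat([self.word_emb(words),
                         self.char_rep(chars).unsqueeze(0)], dim=-1)
        h_w, _ = self.word_gru(lex)                  # h_{w_i} for i = 1..N
        return h_w
\\end{verbatim}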
It thus seems reasonable to think that they are not sufficient to effectively encode features in the label context representations $\\overleftarrow{h_{e_i}}$ and $\\overrightarrow{h_{e_i}}$.\n\nFor this reason, we add to the input of the layers $\\bGRU_{e}$ and $\\fGRU_{e}$ the lexical hidden representation $h_{w_i}$ computed by the $\\GRU_{w}$ layer.\nTaking this into account, the computation of the right context for the current label prediction becomes:\n\\begin{equation}\\label{eq:label-bw-le}\n \\overleftarrow{h_{e_i}} = \\bGRU_{e}([h_{w_i}, E_l(e_{i+1})], \\overleftarrow{h_{e_{i+1}}})\n\\end{equation}\nThe computation of the left context is done in a similar way.\n\nThis modification makes the layers $\\bGRU_{e}$ and $\\fGRU_{e}$ in our architecture similar to the decoder of a \\emph{Seq2seq} architecture \\cite{Sutskever-2014-SSL-2969033.2969173}.\nThe modules $\\bGRU_{e}$ and $\\fGRU_{e}$ are indeed like two decoders from an architectural point of view, but also they encode the contextual information in the same way using gated recurrent layers.\n\nHowever, the full architecture differs from a traditional \\emph{Seq2seq} model by the use of an additional decoder, capable of modelling the right label context, while the original model used a single decoder, modelling only the left context.\nThe idea of using two decoders is inspired mainly by the evidence that both left and right output-side contexts are equally informative for the current prediction.\n\nAnother difference with respect to the \\emph{Seq2seq} model is that the $\\bGRU_{e}$ and $\\fGRU_{e}$ layers have access to the lexical-level hidden states $h_{w_i}$.\nThis allows these layers to take the current lexical context into account and is thus more adapted to sequence labelling than using the same representation of the input sentence for all the positions, which is the solution of the original \\emph{Seq2seq} model.\n\nAs we mentioned above, the \\emph{Seq2seq} model has been improved with an attention mechanism \\cite{DBLP-journals-corr-BahdanauCB14}, which is another way to provide the model with a lexical representation focusing dynamically on different parts of the input sequence depending on the position $i$.\nThis attention mechanism has also proved to be efficient for sequence labelling, and it might be that our architecture could benefit from it too, but this is out of our scope for this article and we leave it for future work.\\footnote{This is currently in progress}\n\nWe can motivate the use of the lexical information $h_{w_i}$ in the decoders $\\bGRU_{e}$ and $\\fGRU_{e}$ with complex systems theory considerations, as suggested in \\cite{2017-NoRNN}.\n\\cite{Holland-1999-ECO-520475} state that a complex system, either biological or artificial, is not equal to the sum of its components.\nMore precisely, the behaviour of a complex system evolves during its existence and shows the emergence of new functionalities, which can not be explained by simply considering the system's components individually.\n\\cite{RePEc-wop-safiwp-93-11-070} qualitatively characterizes the evolution of a complex system's behaviour with three different types of adaptation, two of which are particularly interesting in the context of this work and can be concisely named \\emph{aggregation} and \\emph{specialization}.\n\nIn the first, several components of the system adapt in order to become a single \\emph{aggregated} component from a functioning point of view.\nIn \\emph{specialization}, several initially identical components of the system 
adapt to perform different functionalities.\nThese adaptations may take place at different unit levels, a neuron, a simple layer, or a whole module.\n\nThe most evident cases of \\emph{specialization} are the gates of the LSTM or GRU layers \\cite{Cho-2014-GatedRecurrentUnits}, as well as the attention mechanism \\cite{DBLP-journals-corr-BahdanauCB14}.\nIndeed, the $\\mathbf{z}$ and $\\mathbf{r}$ gates of a GRU recurrent layer are defined in the exact same way, with the same number of parameters, and they use exactly the same input information.\n\nHowever, during the evolution of the system \u2014 that is, during the learning phase \u2014 the $\\mathbf{r}$ gate adapts (specialises) to become the reset gate, which allows the network to forget the past information, when it is not relevant for the current prediction step.\nIn the same way, the $\\mathbf{z}$ gate becomes the equivalent of the input gate of a LSTM, which controls the amount of input information that will affect the current prediction.\n\nIn our neural architecture we can observe \\emph{aggregation}: the layers $\\bGRU_{e}$ and $\\fGRU_{e}$ adapt at the whole layer level, they become like gates which filter label-level information that is not useful for the current prediction.\nIn the same way as the input to gates of GRU or LSTM is made of current input and previous hidden state, the input to the $\\bGRU_{e}$ and $\\fGRU_{e}$ layers is made of lexical level and previous label level information, both needed to discriminate the abstract semantic information provided by the labels alone.\nWe will show in the evaluation section the effectiveness provided by this choice.\n\nWhile both of the two decoders used in our models are equivalent to the decoder of the original \\emph{Seq2seq} architecture, we believe it is interesting to analyse the contribution of each piece of information given as input to this component, which we will show in the evaluation section.\n\n\\subsection{Output Layer}\n\\label{subsec:output}\n\nOnce all pieces of information needed to predict the current label are computed, the output of the backward step is computed as follows:\n\\begin{equation}\\label{eq:backward-model}\n \\begin{aligned}\n o_{bw} &= W_{bw} [h_{w_{i}}, \\overleftarrow{h_{e_i}}] + b_{bw} \\\\\n e_{i} &= \\argmax(\\logsoftmax(o_{bw}))\n \\end{aligned}\n\\end{equation}\nWe start the backward step using a conventional symbol (\\texttt{}) as end-of-sentence marker.\nWe repeat the backward step prediction for the whole input sequence. 
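As an illustration of the backward decoder just described (Eq.~\\ref{eq:label-bw-le} and Eq.~\\ref{eq:backward-model}), the following PyTorch sketch runs one right-to-left pass over a sentence. The hidden sizes, the index of the conventional end-of-sentence label and the zero initial state are assumptions for illustration; at training time, the gold label at position $i+1$ would be fed instead of the prediction.
\\begin{verbatim}
import torch
import torch.nn as nn

class BackwardLabelDecoder(nn.Module):
    # One-step GRU over labels, scanning the sentence right to left.
    def __init__(self, n_labels, d_label=150, d_lex=600, d_hid=300):
        super().__init__()
        self.label_emb = nn.Embedding(n_labels, d_label)
        self.gru_cell = nn.GRUCell(d_lex + d_label, d_hid)
        self.out = nn.Linear(d_lex + d_hid, n_labels)        # W_bw, b_bw

    def forward(self, h_w, eos_label):
        # h_w: (N, d_lex) lexical hidden states h_{w_i} of one sentence
        N = h_w.size(0)
        h = h_w.new_zeros(1, self.gru_cell.hidden_size)
        prev = torch.tensor([eos_label], device=h_w.device)  # start symbol
        states, preds = [None] * N, [None] * N
        for i in reversed(range(N)):                         # i = N ... 1
            x = torch.cat([h_w[i:i + 1], self.label_emb(prev)], dim=-1)
            h = self.gru_cell(x, h)                          # Eq. label-bw-le
            o = self.out(torch.cat([h_w[i:i + 1], h], dim=-1))  # Eq. backward-model
            prev = torch.log_softmax(o, dim=-1).argmax(dim=-1)
            states[i], preds[i] = h.squeeze(0), prev.squeeze(0)
        return torch.stack(states), torch.stack(preds)
\\end{verbatim}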
The process is shown in figure~\\ref{pic:back-dec}.\n\n\\begin{figure}\n\\center\n\\begin{tikzpicture}\n\t\\def$h_{w_1}$, $h_{w_2}$, $h_{w_3}${$h_{w_1}$, $h_{w_2}$, $h_{w_3}$}\n\t\\def$\\overrightarrow{e_1}$, $\\overrightarrow{e_2}$, $\\overrightarrow{e_3}${$\\overleftarrow{e_{1}}$, $\\overleftarrow{e_{2}}$, $\\overleftarrow{e_{3}}$}\n\t\n\t\\begin{scope}[local bounding box=net]\n\t\\foreach \\w [count=\\wi from 1, remember=\\wi as \\lastwi] in $h_{w_1}$, $h_{w_2}$, $h_{w_3}$ {\n\t\\ifnum\\wi>1\n \t\\node[right=6.5em of w\\lastwi.center, text height=1.5ex, text depth=0.25ex, anchor=center] (w\\wi) {\\w};\n \\else\n \t\\node[text height=1.5ex, text depth=0.25ex] (w\\wi) {\\w};\n \\fi\n\t}\n\t\n\t\\foreach \\w [count=\\wi from 1, remember=\\wi as \\lastwi] in $h_{w_1}$, $h_{w_2}$, $h_{w_3}$ {\n\t\t\\draw[->] (w\\wi) -- +(0, 2em) node[draw, anchor=south, rounded corners=1pt, inner sep=0.3em] (gruw\\wi) {$\\overleftarrow{GRU_l}$};\n\t\t\\ifnum\\wi>1\n\t\t\t\\draw[<-] (gruw\\lastwi) -- (gruw\\wi);\n\t\t\\fi\n\t}\n\t\n\t\\foreach \\w [count=\\wi from 1, remember=\\wi as \\lastwi] in $\\overrightarrow{e_1}$, $\\overrightarrow{e_2}$, $\\overrightarrow{e_3}$ {\n\t\t\\draw[->] (gruw\\wi) -- +(0, 2em) node[draw, text width=1.5cm, text centered, anchor=south, rounded corners=1pt, inner sep=0.3em] (lin\\wi) {log-soft linear};\n\t\t\\draw[->] (w\\wi) to[bend left=55] (lin\\wi);\n\t}\n\t\n\t\\foreach \\w [count=\\wi from 1, remember=\\wi as \\lastwi] in $\\overrightarrow{e_1}$, $\\overrightarrow{e_2}$, $\\overrightarrow{e_3}$ {\n\t\t\\draw[->] (lin\\wi) -- +(0, 2.5em) node[text height=1.5ex, text depth=0.25ex, anchor=south] (o\\wi) {\\w};\n\t\t\\ifnum\\wi>1\n\t\t\t\\draw[->,dashed] (o\\wi) -- (gruw\\lastwi);\n\t\t\\fi\n\t}\n\t\n\t\\node[right=6.5em of o3.center, text height=1.5ex, text depth=0.25ex, anchor=center] (eos) {$$};\n\t\\draw[->,dashed] (eos) to[bend left] (gruw3);\n\n \\end{scope}\n\n\\end{tikzpicture}\n\\caption{Structure of the backward decoder}\\label{pic:back-dec}\n\\end{figure}\n\nThis allows to have all the pieces of information needed to predict the current label in the forward step, at character and word level, but also at right and left label context level, with respect to the current position to be labeled:\n\\begin{equation}\\label{eq:bidirectional-model}\n \\begin{aligned}\n o_i &= W_o [\\mathbf{\\overrightarrow{h_{e_i}}}, h_{w_i}, \\mathbf{\\overleftarrow{h_{e_i}}}] + b_o \\\\\n e_i &= \\argmax(\\logsoftmax(o_i))\n \\end{aligned}\n\\end{equation}\nA high-level schema of the forward pass is shown in figure~\\ref{pic:for-dec}.\n\n\\begin{figure}\n\\center\n\\begin{tikzpicture}\n\t\\def$h_{w_1}$, $h_{w_2}$, $h_{w_3}${$h_{w_1}$, $h_{w_2}$, $h_{w_3}$}\n\t\\def$\\overrightarrow{e_1}$, $\\overrightarrow{e_2}$, $\\overrightarrow{e_3}${$\\overrightarrow{e_1}$, $\\overrightarrow{e_2}$, $\\overrightarrow{e_3}$}\n\t\n\t\\begin{scope}[local bounding box=net]\n\t\\foreach \\w [count=\\wi from 1, remember=\\wi as \\lastwi] in $h_{w_1}$, $h_{w_2}$, $h_{w_3}$ {\n\t\\ifnum\\wi>1\n \t\\node[right=6.5em of w\\lastwi.center, text height=1.5ex, text depth=0.25ex, anchor=center] (ew\\wi) {\\w};\n \\else\n \t\\node[text height=1.5ex, text depth=0.25ex] (ew\\wi) {\\w};\n \\fi\n\t}\n\t\n\t\\foreach \\w [count=\\wi from 1, remember=\\wi as \\lastwi] in $h_{w_1}$, $h_{w_2}$, $h_{w_3}$ {\n\t\t\\draw[->] (ew\\wi) -- +(-0.5, 2em) node[draw, anchor=south, rounded corners=1pt, inner sep=0.3em] (bdec\\wi) {$\\overleftarrow{decoder}$};\n\t\t\\ifnum\\wi>1\n\t\t\t\\draw[->] (bdec\\wi) -- 
(bdec\\lastwi);\n\t\t\\fi\n\t}\n\t\n\t\\foreach \\w [count=\\wi from 1, remember=\\wi as \\lastwi] in $h_{w_1}$, $h_{w_2}$, $h_{w_3}$ {\n\t\t\\draw[->,red] (ew\\wi) to[bend right=30] +(+0.5, 5em) node[draw, anchor=south, rounded corners=1pt, inner sep=0.3em] (fdec\\wi) {$\\overrightarrow{decoder}$};\n\t\t\\draw[->] (bdec\\wi) -- (fdec\\wi);\n\t\t\\ifnum\\wi>1\n\t\t\t\\draw[->] (fdec\\lastwi) -- (fdec\\wi);\n\t\t\\fi\n\t}\n\t\n\t\\foreach \\y [count=\\yi from 1, remember=\\yi as \\lastyi] in $\\overrightarrow{e_1}$, $\\overrightarrow{e_2}$, $\\overrightarrow{e_3}$ {\n \t\\draw[red,->] (fdec\\yi) -- +(0, 3em) node[text height=1.5ex, text depth=0.25ex, anchor=south] (fy\\yi) {\\y};\n \t\\ifnum\\yi>1\n \t\t\\draw[red,->] (fy\\lastyi) to[bend right=30] (fdec\\yi);\n \t\\fi\n\t}\n\n \\end{scope}\n\n\\end{tikzpicture}\n\\caption{High-level schema of the forward pass}\\label{pic:for-dec}\n\\end{figure}\n\nThe $\\logsoftmax$ function computes log-probabilities and it is thus suited for the loss-function used to learn the model described in the next section.\n\nWe note that the forward decoder is in fact a bidirectional decoder, as it uses both backward and forward hidden states $\\overrightarrow{h_{e_i}}$ and $\\overleftarrow{h_{e_i}}$ for the current prediction.\n\nThe hypothesis motivating the architecture of our neural models is the following: gated hidden layers such as LSTM and GRU can keep relatively long contexts in memory and to extract from them the information that is relevant to the current model prediction.\nThis is supported by the findings in recent works, such as \\cite{P18-2116}, which shows that most of the modelling power of gated RNN comes from their ability to compute at each step a context-dependent weighted sum on their inputs, in a way that is akin to dynamical attention mechanism.\nAs an immediate consequence, we think that using such hidden layers is an effective way to keep in memory a relatively long context on the output item level, that is labels, as well as on the input item level, that is words, characters and possibly other information.\n\nAn alternative, non-recurrent architecture, the Transformer model \\cite{46201} has been proposed with the goal of using attention mechanisms to overcome the learning issues of RNN in contexts where the learning signal has to back-propagate through very long paths.\nHowever, the recent work of \\cite{Dehghani2018UniversalT} shows that integrating a concept of recurrence in Transformers can improve their performances in some contexts.\nThis leads us to believe that recurrence is a fundamental feature for neural architectures for NLP and all of the domains where data are sequential by nature.\n\nAs a side note, the main features of the Transformer model - the multi-head attention mechanism and the skip connections \\cite{46201} - could in principle be integrated into our architecture.\nInvestigations of the costs and benefits of such additions is left for future work.\n\nFinally, while the decision function of our model remains local, its decisions are informed by global information at the word, character and label level thanks to the use of long-range contexts encoded by the GRU layers.\nIn that sense, it can be interpreted as an approximation of a global decision function and provides a viable alternative to the use of a CRF output layer \\cite{lample2016neural,Ma-Hovy-ACL-2016}.\n\n\\subsection{Learning}\n\\label{subsec:learning}\n\nOur models are learned by minimizing the negative log-likelihood $\\mathcal{LL}$ with respect to 
the data. Formally:\n\\begin{equation}\\label{eq:LL}\n -\\mathcal{LL}(\\Theta | D) = -\\sum_{d=1}^{|D|} \\sum_{i=1}^{N_d} \\frac{1}{2}(\\text{log-p}(\\overrightarrow{e_i})+\\text{log-p}(\\overleftarrow{e_i})) + \\frac{\\lambda}{2} \\left | \\Theta \\right |^2\n\\end{equation}\n$\\text{log-p}(\\overrightarrow{e_i})$ and $\\text{log-p}(\\overleftarrow{e_i})$ are the log-probabilities of the predictions of the forward and backward decoders, respectively; by optimizing both terms we strengthen the global character of our model's predictions.\nThe first sum scans the learning data $D$ of size $|D|$, while the second sum scans each learning sequence $S_d$, of size $N_d$.\n\nGiven the relatively small size of the data we use for the evaluation, and the relatively high complexity of the models proposed in this paper, we add an $L_2$ regularization term to the cost function with a $\\lambda$ coefficient.\nThe cost function is minimized with the \\emph{Back-propagation Through Time} algorithm (BPTT) \\cite{werbos-bptt}, provided natively by the \\emph{Pytorch} library (see section~\\ref{subsec:settings}).\n\n\\section{Evaluation}\n\\label{sec:eval}\n\n\\subsection{Data}\n\\label{subsec:data}\n\nWe evaluate our models on two tasks, one of Spoken Language Understanding (SLU) and one of POS tagging, namely \\emph{MEDIA} and \\emph{WSJ} respectively.\nThese tasks have been widely used in the literature \\cite{Vukotic.etal_2015,Vukotic.etal_2016,dinarelli_hal-01553830,Ma-Hovy-ACL-2016,2018-ijcai-lstm-ldrnn} and thus allow for a direct comparison of results.\n\n\\textbf{The French MEDIA corpus} \\cite{Bonneau-Maynard2006-media} was created for the evaluation of spoken dialog systems in the domain of hotel information and reservation in France.\nIt is made of $1~250$ human-machine dialogs acquired with a \\textit{Wizard-of-OZ} approach, where \\num{250} users followed \\num{5} different reservation scenarios.\n\nThe data have been manually transcribed and annotated with domain concepts, following a rich ontology.\nSemantic components can be combined to build relatively complex semantic classes.\\footnote{For example, the label \\texttt{localisation} can be combined with the components \\texttt{ville} (city), \\texttt{distance-relative} (relative-distance), \\texttt{localisation-relative-g\u00e9n\u00e9rale} (general-relative-localisation), \\texttt{rue} (street), etc.}\n\nStatistics on the training, development and test data of the MEDIA corpus are shown in table~\\ref{tab:MEDIAStats}.\nThe MEDIA task can be modelled as a sequence labelling task by segmenting concepts over words with the BIO formalism \\cite{Ramshaw95-BIO}.\nAn example of a sentence with its semantic annotation is shown in table~\\ref{tab:ATIS-MEDIA-exemple}.\nFor completeness, we also show some of the word classes available for this task, which allow models to generalize better.\nHowever, our model does not use these classes, as explained in section~\\ref{subsec:settings}.\n\n\\textbf{The English corpus Penn Treebank} \\cite{Marcus93buildinga}, and in particular the section of the corpus corresponding to the articles of the Wall Street Journal (WSJ), is one of the best-known and most widely used corpora for the evaluation of models for sequence labelling.\n\nThe task consists of annotating each word with its Part-of-Speech (POS) tag.
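As a small illustration of the BIO segmentation mentioned above, the sketch below turns word-aligned concept annotations into BIO labels in the style of table~\\ref{tab:ATIS-MEDIA-exemple}; the input format is a simplifying assumption.
\\begin{verbatim}
def to_bio(words, concepts):
    # concepts: one concept name per word, or None when no concept applies
    bio, prev = [], None
    for w, c in zip(words, concepts):
        if c is None:
            bio.append("O")
        elif c == prev:
            bio.append(c + "-I")     # continuation of the same concept
        else:
            bio.append(c + "-B")     # beginning of a new concept
        prev = c
    return bio

# e.g. to_bio(["l'", "hotel"], ["BDObject", "BDObject"])
#      -> ["BDObject-B", "BDObject-I"]
\\end{verbatim}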
We use the most common split of this corpus, where sections from \\num{0} to \\num{18} are used for training ($38~219$ sentences, $912~344$ tokens), sections from \\num{19} to \\num{21} are used for validation ($5~527$ sentences, $131~768$ tokens), and sections from \\num{22} to \\num{24} are used for testing ($5~462$ sentences, $129~654$ tokens).\n\n\\begin{table}[t]\n\t \\centering\n\t \\scriptsize\n\t \\begin{tabular}{|ccc|}\n\t \\hline\n\t \\multicolumn{3}{|c|}{MEDIA corpus example} \\\\\n\t \\textbf{Words} & \\textbf{Classes} & \\textbf{Labels} \\\\\n\t \\hline\n Oui & - & Answer-B \\\\\n l' & - & BDObject-B \\\\\n hotel & - & BDObject-I \\\\\n le & - & Object-B \\\\\n prix & - & Object-I \\\\\n \u00e0 & - & Comp.-payment-B \\\\\n moins & relative & Comp.-payment-I \\\\\n cinquante & tens & Paym.-amount-B \\\\\n cinq & units & Paym.-amount-I \\\\\n euros & currency & Paym.-currency-B \\\\\n \\hline\n\t \\end{tabular}\n\t \\caption{An example of sentence with its semantic annotation and word classes, taken from the French corpus MEDIA. The translation of the sentence in English is ``Yes, the hotel with a price less than fifty euros per night''}\n\t \\label{tab:ATIS-MEDIA-exemple}\n \\end{table}\n\n\\begin{table}[t]\n\\begin{minipage}{1.0\\linewidth}\n \\centering\n \\scriptsize\n \\begin{tabular}{|l|rr|rr|rr|}\n \\hline\n & \\multicolumn{2}{|c|}{Training} & \\multicolumn{2}{|c|}{Validation} & \\multicolumn{2}{|c|}{Test}\\\\\n \\hline\n \\# sentences &\\multicolumn{2}{|c|}{12~908} &\\multicolumn{2}{|c|}{1~259}&\\multicolumn{2}{|c|}{3~005} \\\\\n \\hline\n \\hline\n & \\multicolumn{1}{|c}{Words} & \\multicolumn{1}{c|}{Concepts} & \\multicolumn{1}{|c}{Words} & \\multicolumn{1}{c|}{Concepts} &\n \\multicolumn{1}{|c}{Words} & \\multicolumn{1}{c|}{Concepts} \\\\\n \\hline\n \\# words & 94~466 & 43~078 & 10~849 & 4~705 & 25~606 & 11~383 \\\\\n \\# dict. 
& 2~210 & 99 & 838 & 66 & 1~276 & 78 \\\\\n \\# OOV\\% & -- & -- & 1,33 & 0,02 & 1,39 & 0,04 \\\\\n \\hline\n \\end{tabular}\n \\caption{Statistics on the French MEDIA corpus}\n \\label{tab:MEDIAStats}\n \\end{minipage}\n\\end{table}\n\n\\subsection{Experimental settings}\n\\label{subsec:settings}\n\nIn order to keep our architecture as general as possible, we limit our model inputs to the strict word (and character) information available in the raw text data and ignore the additional features available in the MEDIA dataset.\n\nFor convenience, the hyperparameters of our system have been tuned by simple independent linear searches on the validation data \u2014 rather than a grid search on the full hyperparameters space.\n\nAll of the parameters of neural layers are initialised with the Pytorch 0.4.1 default initializers\\footnote{Uniform random initialization for the GRU layers and \\cite{LeakyReLU-PReLU-2015} initialization for the linear layers.} and trained by SGB with a \\num{0.9} momentum for \\num{40} epochs on MEDIA, and ADAM optimizer for \\num{52} epochs on WSJ, keeping the model that gave the best accuracy on the development data set.\n\nFor training, we start with a learning rate of \\num{0.125} that we decay linearly after each epoch to end up at \\num{0} at the end of the chosen number of training epochs.\nFollowing \\cite{PracticalRecommendations-Bengio-2012}, we also apply a random dropout to the embeddings and the output of the hidden layers that we optimized to a rate of \\num{0.5}, and $L_2$ regularization to all the parameters with an optimal coefficient of \\num{e-4}.\n\nFinally, we have conducted experiments to find the optimal layer sizes, which gave us \\num{200}, \\num{150} and \\num{30} for word, labels and character embeddings respectively, \\num{100} for the $\\GRU_c$ layer and \\num{300} for all the other GRU layers.\nThose values are for the MEDIA task; for WSJ only the word embeddings and hidden layer sizes (respectively \\num{300} and \\num{150}) are different.\n\nIn order to reduce the training time, we use mini-batches of size\\footnote{Using larger batches is faster but degrades the overall accuracy.} \\num{100}.\nIn the current neural network frameworks, all the sequences in a mini-batch must have the same length, which we enforced at first by padding all of the sentences with the conventional symbol \\texttt{} to the length of the longest one.\nHowever this caused two problems: first, there are a few unusually long sentences in the datasets we used, for instance, there is a single sentence of \\num{198} words in MEDIA.\nSecondly, in order to compute automatically the gradients of the parameters, Pytorch keeps in memory the whole graph of operations performed on the input of the model \\cite{paszke2017automatic}, which was far too large for the hardware we used, since for our model, we have to keep track of all the operations at all of the timesteps.\n\nWe found two solutions to these problems.\nThe first was to train on fixed-length, overlapping sub-sequences, or segments\\footnote{Shifting each segment one token ahead with respect to the previous}, truncated from the whole sentences, which did not appear to impair the performances significantly and allowed us to avoid more involved solutions such as back-propagation through time with memorization \\cite{NIPS2016_6221}.\nThe second was to cluster sentences by their length. 
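The first solution above (fixed-length, overlapping segments shifted one token ahead) can be sketched as follows; the treatment of sentences shorter than the segment length is an assumption.
\\begin{verbatim}
def overlapping_segments(sentence, seg_len=10):
    # Returns all fixed-length windows, each shifted one token ahead
    # with respect to the previous one.
    if len(sentence) <= seg_len:
        return [sentence]
    return [sentence[i:i + seg_len]
            for i in range(len(sentence) - seg_len + 1)]

# e.g. overlapping_segments([1, 2, 3, 4, 5], seg_len=3)
#      -> [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
\\end{verbatim}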
This makes small clusters for unusually long sentences, which thus fit in memory, and big clusters of average-length sentences, which are further split into sub-clusters to keep an optimal balance between the learning signals of different clusters and to spare us from having to find adaptive learning rates for different clusters.\n\nIn the optimization phase, we found that the first solution works far better for the MEDIA task.\nWe believe that this is due to the noisy nature of the corpus (speech transcription), and to its relatively small size.\nUsing fixed-length segments reduces the amount of noise the network must filter, while the fact that segments shift and overlap makes the network more robust, as it can see any token as the beginning of a segment, which in turn helps overcome the scarcity of the dataset.\nThis robustness is not needed when using larger amounts of grammatically well-formed textual data, like the WSJ corpus.\nIndeed, the two solutions gave similar results on this corpus; we thus preferred sentence clusters, which is a more intuitive solution and may better fit bigger data sets.\n\nAfter performing these optimizations on the development set for each task, we kept the best models and evaluated them on the corresponding test sets, which we report and discuss in the next section.\n\nAll of our development and experiments were done on \\emph{2.1 GHz Intel Xeon E5-2620} CPUs and \\emph{GeForce GTX 1080} GPUs.\\footnote{1600 MHz, 2560 cores}\n\n\\subsection{Results}\n\\label{subsec:results}\n\nResults presented in this section on the MEDIA corpus are means over \\num{10} runs, while results on the WSJ corpus are obtained in a single run, as this seems to be the most common practice.\\footnote{We note that results over different runs on the WSJ have a very small variation, less than or equal to \\num{0.01} accuracy points}\n\nConcerning the MEDIA task, since the model selection during the training phase is done based on the accuracy on the development data, we show accuracy in addition to the F1 measure and the Concept Error Rate (CER), as is common practice in the literature on this task.\nThe F1 measure is computed with the script made available to the community for the \\emph{CoNLL} evaluation campaign.\\footnote{\\url{https:\/\/github.com\/robertostling\/efselab\/blob\/master\/3rdparty\/conlleval.perl}} The CER is computed by Levenshtein alignment between the reference annotation and the model hypothesis, with an algorithm very similar to the one implemented in the \\emph{sclite} toolkit.\\footnote{\\url{http:\/\/www1.icsi.berkeley.edu\/Speech\/docs\/sctk-1.2\/sclite.htm}}\n\nSince our model is similar to the \\emph{Seq2seq} model, but uses two decoders, in the remainder of this paper it will be named \\emph{Seq2Biseq}.\nThe model is trained using gold labels in the training data, while at test time the model uses predicted labels to build left and right label-level contexts.
This corresponds to the best strategy, according to \\cite{RNNforSLU-Interspeech-2013}.\n\nWe compare our results to those obtained by running the software developed for \\cite{dinarelli_hal-01553830}\\footnote{Available upon request at \\url{http:\/\/www.marcodinarelli.it\/software.php}} and tuning its hyperparameters\\footnote{The optimal settings being more or less those provided in the original article}.\n\n\\begin{table}[t]\n\\centering\n \\begin{tabular}{|l|r|c|c|}\n\n \\hline\n \\textbf{Model} & \\textbf{Accuracy} & \\textbf{F1 measure} & \\textbf{CER} \\\\ \\hline\n \\hline\n \\multicolumn{4}{|c|}{\\textbf{MEDIA DEV}}\\\\ \\hline\n \\hline\n Seq2Biseq \t\t\t\t&\t89.11\t\t&\t85.59\t&\t11.46\t\t\\\\\n Seq2Biseq$_{le}$ \t\t\t&\t89.42\t&\t86.09\t&\t10.58\t\\\\\n Seq2Biseq$_{le}$ seg-len 15\t\t&\t\\textbf{89.97}\t&\t\\textbf{86.57}\t&\t\\textbf{10.42}\t\\\\\n \\emph{fw}-Seq2Biseq$_{le}$ seg-len 15\t&\t89.51\t&\t85.94\t&\t11.40 \\\\\n \\hline\n\n \\end{tabular}\n\\caption{Comparison of results on the development data of the MEDIA corpus, with and without the lexical information (Seq2Biseq$_{le}$) as input to the modules $\\bGRU_{e}$ and $\\bGRU_{e}$}\n\\label{tab:lex-label-gru}\n\\end{table}\n\nConcerning our hypothesis about the capability of our models to encode a long-range context, and to filter out useless information with respect to the current labelling decision, we show results of two (sets of) experiments to validate such hypothesis.\n\nIn the first one, we compare the results obtained by models with and without the use of the lexical information as input to the decoders $\\bGRU_{e}$ and $\\bGRU_{e}$ (section~\\ref{subsec:label_rep}).\nThese results are shown in the first two lines of the table~\\ref{tab:lex-label-gru}.\nThe model using the lexical information is indicated with Seq2Biseq$_{le}$ in the table (for \\textbf{l}abels and l\\textbf{e}xical information).\nAs we can see in the table, this model obtains much better results than the one not using the lexical information as input to the label decoders.\nThis confirms that this information helps discriminating the semantic information provided by labels at a given processing step of the input sequence.\n\nIn the second experiment, we test the capability of our models to filter out useless semantic information, that is on the label side, for the current labelling decision.\nIn order to do this, we increase the size of the segments in the learning phase: \\num{15} instead of \\num{10} by default.\nIt is important to note that in the context of a SLU task, where input sequences are transcriptions of human speech, using longer segments is possibly risky, since a longer context may be much more noisy even if it is slightly more informative.\n\nMoreover, the models in the literature applied to the MEDIA task and using a fixed-size window to capture contextual information, never use a window wider than \\num{3} tokens around the current token to be labelled.\nThis confirms the difficulty to extract useful information from a longer context.\nResults of this experiment are shown in the third line of table~\\ref{tab:lex-label-gru}.\nOur hypothesis seems to be also valid in this case, as models using segments of length \\num{15} obtain better results than those using the default size of \\num{10} and this with respect to all the evaluation metrics.\n\nWe note that, while the effectiveness of the decoder's architecture of the \\emph{Seq2seq} model does not need any more to be proved, these results still provide possibly interesting 
analyses in the particular context of sequence labelling.\\footnote{The \\emph{Seq2seq} model has been designed and mainly used for machine translation}\n\nIn order to show the advantage provided by the use of two decoders instead of only one like in the original \\emph{Seq2seq} model, we show results obtained using only one decoder for the left label-side context in table~\\ref{tab:lex-label-gru}\nThese results are indicated in the table with \\emph{fw-Seq2Biseq$_{le}$ seg-len 15} (this model corresponds basically to the original \\emph{Seq2seq}). This model is exactly equivalent to our best model \\emph{Seq2Biseq$_{le}$ seg-len 15}, the only difference is that it uses only the left label context. As we can see, this model is much less effective than the version using two decoders, which also confirms that the right context on the output side (labels) is very informative.\n\nOur hypothesis concerning the \\emph{aggregation} specialization of our model during the learning phase seems also confirmed (section~\\ref{subsec:label_rep}).\nThe fact that the Seq2Biseq$_{le}$ model obtains better results than the simpler model Seq2Biseq tends to confirm the hypothesis.\n\nIndeed, if the model Seq2Biseq$_{le}$ gave more importance to the lexical information than the semantic information given by labels at the input of the decoders $\\bGRU_{e}$ and $\\fGRU_{e}$, its better results would not have a clear explanation, as both Seq2Biseq$_{le}$ and Seq2Biseq models (table~\\ref{tab:lex-label-gru}) use the lexical information separately (indicated with $h_{w_i}$ in the equation~\\ref{eq:lex-rep}).\n\nSince the information provided by labels alone is already taken into account by the model Seq2Biseq, we can deduct that the Seq2Biseq$_{le}$ model can extract more effective semantic representations, and this even when we provide it with longer contexts (with segments of size $15$).\n\n\\begin{table}[t]\n\\centering\n \\begin{tabular}{|l|r|c|c|c|}\n\n \\hline\n \\textbf{Model} & \\textbf{Accuracy} & \\textbf{F1 measure} & \\textbf{CER} & \\textbf{p-value}\\\\ \\hline\n \\hline\n \\multicolumn{5}{|c|}{\\textbf{MEDIA DEV}}\\\\ \\hline\n \\hline\n LD-RNN$_{\\mathrm{deep}}$\t\t\t\t&\t89.26 (0.16)\t&\t85.79 (0.24)\t&\t10.72 (0.14) & -- \\\\ \\hline\n Seq2Biseq$_{le}$ seg-len 15 \t& 89.97 (0.20)\t& 86.57 (0.22)\t& 10.42 (0.26) & 0.043 \\\\\n Seq2Biseq$_{\\text{2-opt}}$ \t& \\textbf{90.22} (0.14)\t& \\textbf{86.88} (0.16)\t& \\textbf{9.97} (0.24) & -- \\\\ \\hline\n \\hline\n \\multicolumn{5}{|c|}{\\textbf{MEDIA TEST}}\\\\ \\hline\n \\hline\n LD-RNN$_{\\mathrm{deep}}$\t\t\t\t&\t89.51 (0.21)\t&\t87.31 (0.19)\t& 10.02 (0.17) & -- \\\\ \\hline\n Seq2Biseq$_{le}$ seg-len 15 \t& 89.57 (0.12)\t& 87.50 (0.17)\t& 10.26 (0.19) & 0.047 \\\\\n Seq2Biseq$_{\\text{2-opt}}$ \t& \\textbf{89.79} (0.22)\t& \\textbf{87.69} (0.20)\t& \\textbf{9.93} (0.28) & -- \\\\\n \\hline\n\n \\end{tabular}\n \\caption{Comparison of results obtained on the MEDIA corpus by the system LD-RNN$_{\\mathrm{deep}}$, ran by ourselves for this work, and our model Seq2Biseq$_{le}$, using segments of size $15$ (see section~\\ref{subsec:settings}).}\n\\label{tab:ldrnn-vs-gru-ldrnn-clen15}\n\\end{table}\n\nIn another set of experiments, we compared our model with the one proposed in \\cite{dinarelli_hal-01553830}, from which we inspired our neural architecture.\nWe downloaded the software associated to the paper\\footnote{Described at \\url{http:\/\/www.marcodinarelli.it\/software.php} and available upon request}, and we ran experiments on the MEDIA corpus in 
the same conditions as our experiments. We used the deep variant of the model described in \\cite{dinarelli_hal-01553830}, LD-RNN$_{\\mathrm{deep}}$, which gives the best results on MEDIA. The results of these experiments are shown in the table~\\ref{tab:ldrnn-vs-gru-ldrnn-clen15}.\nAs we can see in the table, on the development data of the MEDIA task (MEDIA DEV), our model is more effective than the LD-RNN$_{\\mathrm{deep}}$ of \\cite{dinarelli_hal-01553830}, which holds the state-of-the-art on this task.\nThese gains are also present for the test data (MEDIA TEST), even if they are smaller, and the LD-RNN$_{\\mathrm{deep}}$ model is still the more effective in terms of Concept Error Rate (CER).\n\nWe would like to underline that we did not perform an exhaustive optimization of all the hyper-parameters.\\footnote{This because it takes a lot of time, but more importantly because we believe a good model should give good results without too much effort, otherwise a previous model which already proved comparably effective should be preferred}\nAs we can see in table~\\ref{tab:ldrnn-vs-gru-ldrnn-clen15}, results obtained with the model LD-RNN$_{\\mathrm{deep}}$ on the test data are always better than those obtained on the development data. In contrast, our model obtains a worse accuracy, which leads the model selection in the training phase, on the test data. This lack of generalization may indicate a sub-optimal parameter choice or an over-training problem.\n\nIn the table~\\ref{tab:ldrnn-vs-gru-ldrnn-clen15} we also report standard deviations on the \\num{10} experiments (between parentheses), and the results of the significance tests performed on the output of our model and of the model LD-RNN$_{\\mathrm{deep}}$. We used the significance test described in \\cite{Yeh00moreaccurate}, which applies on the output of the two compared systems, and it is suited for the evaluation metrics used most often in NLP.\\footnote{In contrast to several other significance tests, this test doesn't make any assumption on the classes independence, nor on the representative coverage of the sample} We re-implemented the significance test script based on the one described in \\cite{sigf06}.\\footnote{\\url{https:\/\/nlpado.de\/~sebastian\/software\/sigf.shtml}}\nOur model is compared to the LD-RNN$_{\\mathrm{deep}}$ model in terms of F1 measure, which is more constraining than the accuracy and as constraining as the CER.\nThe result of the significance test is given in the column \\emph{p-value} of the table, and it represents the probability that the gain is not significant. 
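For reference, the sketch below shows an approximate randomization test of the kind implemented by the script we re-implemented; the number of shuffles and the exact shuffling scheme are assumptions.
\\begin{verbatim}
import random

def approx_randomization_pvalue(out_a, out_b, metric, n_shuffles=10000, seed=0):
    # out_a / out_b: per-sentence outputs of the two compared systems.
    # metric: maps a list of outputs to a corpus-level score (e.g. F1).
    rng = random.Random(seed)
    observed = abs(metric(out_a) - metric(out_b))
    count = 0
    for _ in range(n_shuffles):
        shuf_a, shuf_b = [], []
        for a, b in zip(out_a, out_b):
            if rng.random() < 0.5:        # randomly swap the systems' outputs
                a, b = b, a
            shuf_a.append(a)
            shuf_b.append(b)
        if abs(metric(shuf_a) - metric(shuf_b)) >= observed:
            count += 1
    # p-value: probability that the observed gain is not significant
    return (count + 1) / (n_shuffles + 1)
\\end{verbatim}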
Most often the gains are considered significant with a p-value equal or smaller than $0.05$.\n\n\\begin{table}[t]\n\\centering\n \\begin{tabular}{|l|rcc|}\n\n \\hline\n \\textbf{Model} & \\textbf{Accuracy} & \\textbf{F1 measure} & \\textbf{CER} \\\\ \\hline\n \\hline\n \\multicolumn{4}{|c|}{\\textbf{MEDIA TEST}}\\\\ \\hline\n \\hline\n BiGRU+CRF \\cite{dinarelli_hal-01553830}\t\t&\t-- \t&\t86.69\t\t&\t10.13\t\\\\\\hline\n LD-RNN$_{\\mathrm{deep}}$ \\cite{dinarelli_hal-01553830}\t& -- \t\t&\t87.36\t\t&\t\\textbf{9.8}\t\t\\\\\n LD-RNN$_{\\mathrm{deep}}$\t\t\t\t&\t89.51\t\t&\t87.31\t\t&\t10.02\t\t\\\\\\hline\n Seq2Biseq$_{le}$ seg-len 15 \t&\t89.57\t&\t87.50\t&\t10.26\t\\\\\n Seq2Biseq$_{\\text{2-opt}}$ \t& \\textbf{89.79}\t& \\textbf{87.69}\t& 9.93 \\\\ \\hline\n\n \\end{tabular}\n\\caption{Comparison of results on MEDIA with our best models and the best models in the literature}\n\\label{tab:gru-ldrnn-clen15-vs-SOTA-media}\n\\end{table}\n\nWe ran another set of experiments on the MEDIA task with our best model in order to compare to the best models in the literature on this task, which are those described in \\cite{dinarelli_hal-01553830}.\nIn particular we compared our results to the models using a neural CRF output layer for modelling label sequences and take global decisions.\n\nThe results of these experiments are shown in the table~\\ref{tab:gru-ldrnn-clen15-vs-SOTA-media}.\nIn this table we indicate simply with LD-RNN$_{\\mathrm{deep}}$ the results obtained in our experiments using the software \\emph{LD-RNN}\\footnote{\\url{http:\/\/www.marcodinarelli.it\/software.php}}, while we add the reference \\cite{dinarelli_hal-01553830} after LD-RNN$_{\\mathrm{deep}}$ to indicate that results have been taken directly from the reference.\nAs we can see, the only new outcome in this table with respect to those already shown in previous tables, is the best CER of $9.8$ obtained by the model LD-RNN$_{\\mathrm{deep}}$ published in \\cite{dinarelli_hal-01553830}.\nThese results are obtained however using also the word-classes available with the MEDIA corpus. Our model is still more effective than the others in terms of accuracy and F1 measure, providing thus the new state-of-the-art results on this task.\n\nThe experiments performed on the MEDIA task with different variants of our model allowed us to find the best neural architecture for sequence modelling. In order to have a more general view on the effectiveness of our model on the problem of sequence labelling, we performed some experiments of POS tagging on the WSJ corpus, which is a well-known benchmark for sequence labelling, used since more than $15$ years.\nIn order to show the effectiveness of the model alone, without the impact of any external resources, we performed experiments without using pre-trained embeddings. This is however a quite common practice and can lead to remarkable improvements \\cite{Ma-Hovy-ACL-2016}.\n\nOn this task we compare to the model \\emph{LD-RNN$_{\\mathrm{deep}}$} of \\cite{dinarelli_hal-01553830}, and to the model \\emph{LSTM-CRF} of \\cite{Ma-Hovy-ACL-2016}. To the best of our knowledge the latter is one of the rare work on neural models where results are given also without pre-trained embeddings, allowing a direct comparison. The \\emph{LSTM-CRF} model is moreover one of the best models on the WSJ corpus when using embeddings pre-trained with GloVe \\cite{pennington2014glove}.\n\nThe results of the POS tagging task on the WSJ corpus are shown in the table~\\ref{tab:comp-WSJ}. 
As we can see our model obtains the best results among those not using any pre-trained embeddings.\nOur results are however worse than those obtained with pre-trained embeddings, which constitute the state-of-the-art on this task.\nIn this respect, we would like to underline that the overall best results are obtained with a neural model described in \\cite{2018-ijcai-lstm-ldrnn}. This model is only slightly better than the \\emph{LSTM-CRF} model, which we outperform when not using pre-trained embeddings. Moreover the model proposed in \\cite{2018-ijcai-lstm-ldrnn} (\\emph{LSTM+LD-RNN} in the table) is very similar to our model.\n\n\\begin{table}[t]\n\\centering\n \\begin{tabular}{|l|c|c|}\n\n \\hline\n \\textbf{Model} & \\multicolumn{2}{|c|}{\\textbf{Accuracy}} \\\\ \\hline\n \\hline\n & WSJ DEV & WSJ TEST \\\\ \\hline\n \\hline\n LD-RNN$_{\\mathrm{deep}}$\t\t\t\t\t\t\t& 96.90\t& 96.91 \\\\\n LSTM+CRF \\cite{Ma-Hovy-ACL-2016}\t\t\t\t& -- \t\t& 97.13 \\\\\n Seq2Biseq\t\t\t\t\t\t\t\t& 97.13\t& 97.20 \\\\\n Seq2Biseq$_{\\text{2-opt}}$\t\t\t\t\t\t& \\textbf{97.33}\t& \\textbf{97.35} \\\\\n \\hline\n \\hline\n LSTM+CRF + Glove \\cite{Ma-Hovy-ACL-2016}\t\t& 97.46 \t& 97.55 \\\\\n LSTM+LD-RNN + Glove \\cite{2018-ijcai-lstm-ldrnn}\t& -- \t\t& 97.59 \\\\ \\hline\n\n \\end{tabular}\n\\caption{Comparison of our model with the model \\emph{LD-RNN$_{\\mathrm{deep}}$}, and the best models of the literature, on the POS tagging task of the WSJ corpus}\n\\label{tab:comp-WSJ}\n\\end{table}\n\nIn order to compare our model to the model LD-RNN$_{\\mathrm{deep}}$ also in terms of complexity and computation efficiency, we show in the table~\\ref{tab:comp-params-train-time} the number of parameters as well as the training time on the MEDIA and WSJ corpora.\nFor the sake of completeness, we also report the number of parameters of the other models mentioned in this paper. 
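The parameter counts discussed below can be obtained directly from a PyTorch module; the small helper below is an illustration and is not part of the compared systems.
\\begin{verbatim}
def count_parameters(model):
    # Number of trainable parameters of a PyTorch module,
    # as reported in the comparison table.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# e.g. count_parameters(model) -> 2,139,950 for Seq2Biseq_le on MEDIA
\\end{verbatim}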
Except for the model \\emph{GRU+CRF}, for which we took the number of parameters from the reference \\cite{dinarelli_hal-01553830} (hidden layers of size $200$), all the other numbers are computed based on the same layer sizes.\n\nWe can see in the table~\\ref{tab:comp-params-train-time} that the training time for our model is longer than for the model LD-RNN$_{\\mathrm{deep}}$ on the MEDIA task.\nThis is because our neural architecture is considerably more complex and, since the corpus is relatively small, we cannot fully take advantage of GPU parallelism.\n\nThis explanation is confirmed on the WSJ corpus, where the training time of our model is much smaller than the time needed by the LD-RNN$_{\\mathrm{deep}}$ model, despite this corpus being considerably bigger than MEDIA.\\footnote{The model LD-RNN$_{\\mathrm{deep}}$ is coded in Octave, and while it can run on GPUs, this framework is not fully optimized to scale on GPUs}\nThe times needed for testing are not reported in the table; they are negligible for both models, never exceeding a few minutes.\n\n\\begin{table}[t]\n\\centering\n \\begin{tabular}{|l|c|c|c|}\n\n \\hline\n \\textbf{Model} & \\textbf{\\# of parameters} & \\multicolumn{2}{|c|}{\\textbf{Training time}} \\\\ \\hline\n \\hline\n & MEDIA & MEDIA & WSJ \\\\ \\hline\n \\hline\n Seq2Biseq$_{le}$\t\t\t\t\t& 2,139,950 & 3h30' & 16h-17h \\\\\n LD-RNN$_{\\mathrm{deep}}$\t\t\t\t\t& 2,551,700 & 1h30' & $>$ 6 days \\\\\n \\hline\n \\hline\n GRU+CRF \\cite{dinarelli_hal-01553830}\t& 2,328,360 & -- & -- \\\\\n \\emph{Seq2seq}\t\t\t\t\t\t& 1,703,450 & -- & -- \\\\\n \\emph{Seq2seq+Att.}\t\t\t\t\t& 2,244,050 & -- & -- \\\\ \\hline\n\n \\end{tabular}\n\\caption{Comparison of the neural models proposed or mentioned in this paper, in terms of number of parameters, and of training time for our model and the model LD-RNN$_{\\mathrm{deep}}$}\n\\label{tab:comp-params-train-time}\n\\end{table}\n\nWhile the results described in this paper can be considered satisfactory, considering the complexity of our neural network with respect to the LD-RNN$_{\\mathrm{deep}}$ model, we were surprised to find that the gains were not larger on the MEDIA task.\nAt first we thought that our network suffered from overfitting on such a small task and that, given the complexity of our network, nothing could be done to solve this problem beyond reducing the total number of parameters.\nHowever, after a quick analysis of the output of our model on the MEDIA development data, we found clear signs revealing that our model was actually ignoring the learning signal coming from the backward decoder (eq.~\\ref{eq:backward-model}).\n\nSince our neural network was explicitly designed to take both left and right label-side contexts into account, we thought that the problem was coming from the learning phase.
In particular we thought that our model was underfitting due to the problem of \\textit{very-long back-propagation paths} described in \\cite{46201}, and which motivated the design of the Transformer model, without recurrent layers and with skip connections to enforce the back-propagation of the learning signal.\nWe adopted a different approach: we applied two different optimizers to the two decoders, one for a negative log-likelihood computed with the output of the backward decoder (only $\\text{log-p}(\\overleftarrow{e_i})$, see eq.~\\ref{eq:backward-model}), and another one for the global negative log-likelihood computed from the output of both forward and backward decoders (see equation~\\ref{eq:LL}).\nWe note that the forward decoder also uses predictions and hidden states of the backward decoder, the second optimizer thus also refines the parameters of the backward decoder with left, forward information.\n\nWe ran new experiments in exactly the same conditions as described before, the only difference being that we used these two optimizers.\nThe final results are reported in table~\\ref{tab:ldrnn-vs-gru-ldrnn-clen15} for MEDIA and in the table~\\ref{tab:comp-WSJ} for the WSJ, where the model learned using two optimizers is indicated with Seq2Biseq$_{\\text{2-opt}}$.\n\nAs we can see in the tables, the results improved on both tasks, on both development and test data, and in terms of all the evaluation metrics.\nTo the best of our knowledge, the results obtained on MEDIA are the best on this task, except for the CER where the model LD-RNN$_{\\mathrm{deep}}$ using class features is still the best (9.8 vs. our 9.93 on the test set). Also, the results obtained on the WSJ corpus are the best obtained without any external resource and without pre-trained embeddings. 
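A minimal sketch of the two-optimizer training step just described is given below. The way the model exposes the two decoders' log-probabilities, the optimizer types and the parameter groups are assumptions; the text only specifies that one optimizer minimizes the backward decoder's negative log-likelihood and the other the global objective of Eq.~(\\ref{eq:LL}).
\\begin{verbatim}
def train_step(model, batch, opt_backward, opt_global):
    # model(batch) is assumed to return per-position log-probabilities
    # of the gold labels for the backward and forward decoders.
    logp_bw, _ = model(batch)
    loss_bw = -logp_bw.sum()                  # backward-decoder loss only
    opt_backward.zero_grad()
    loss_bw.backward()
    opt_backward.step()                       # built over the backward decoder

    logp_bw, logp_fw = model(batch)
    loss = -0.5 * (logp_fw + logp_bw).sum()   # global objective (Eq. LL)
    opt_global.zero_grad()
    loss.backward()
    opt_global.step()                         # built over all parameters
    return loss.item()
\\end{verbatim}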
We leave the integration of pre-trained embeddings as future work.\n\n\\section{Conclusions}\n\\label{sec:Conclusions}\n\nIn this article, we propose a new neural architecture for sequence modelling heavily based on GRU recurrent hidden layers.\nWe use these layers to encode long-range contextual information at several levels: words, characters and labels.\n\nOur main contribution is the use of two different decoders for label prediction, one modelling a backward (future, or right) label context, and one modelling a forward label context.\nThe combination of the two contexts allows our model to take labelling decisions informed by a global context, approximating a global decision function.\nAnother contribution is the use of two different optimizers to separately optimize the two decoders.\nThis further improves the results obtained on the two evaluation tasks studied in this work.\n\nThe results obtained are state-of-the-art on the MEDIA task.\nOn the POS tagging task of the WSJ corpus, our results are state-of-the-art if we do not consider the models that use pre-trained word embeddings, and still close to the state-of-the-art if we do so.\n\n\\section{Acknowledgements}\n\nThis work is part of the \"Investissements d'Avenir\" program overseen by the French National Research Agency ANR-10-LABX-0083 (Labex EFL), and is also supported by the ANR DEMOCRAT (Describing and Modelling Reference Chains: Tools for Corpus Annotation and Automatic Processing) project ANR-15-CE38-0008.\n\n\\bibliographystyle{splncs2016}\n\n\\section{Introduction}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{images\/new-teaser-v1.PNG}\n \\caption{Our VRAG encodes video-level embeddings from spatial intra-frame and temporal inter-frame information for CBVR.} \\vspace{-3mm}\n \\label{fig:teaser}\n\\end{figure}\n\nThe volume of videos on the Internet has grown exponentially with the inception of media-sharing websites such as Facebook, Twitch and Youtube.\nContent-based Video Retrieval (CBVR) is important on these platforms for applications such as video recommendation and video filtering.\n\nIn CBVR, evaluating video similarity is a key component. There are predominantly two types of approaches for inferring video-to-video similarity: frame-level approaches and video-level approaches. Video-level methods encode videos into fixed-size embeddings, and measure video similarity through the similarity of their embeddings. On the other hand, frame-level methods derive video similarity by aggregating pairwise similarities between video frames. These two approaches form a dichotomy: video-level approaches offer faster evaluation speeds, while frame-level methods provide more accurate video similarities at significant overheads. Although practical implementations of frame-level methods may employ optimizations such as summarizing the video into a shorter sequence of frames~\\cite{vid-summary:1708-09545, vid-summary:1812-01969, vid-summary:zhou2018deep}, video-level methods remain orders of magnitude more efficient than frame-level methods when the number of videos scales to billions.
Consequently, video-level methods remain highly relevant for real-world applications despite its inferior performance to frame-level methods.\n\nRecently, several video-level CBVR works~\\cite{lbow, Baraldi2018LAMVLT, kordopatis2017dml, baseline:tmk} propose the same design principle of extracting local frame-level features, and then aggregating these features into fixed-size video embeddings. As a result, most of these works focus on proposing a better frame-level feature extractor and\/or video-level feature pooling methods. For example, DML~\\cite{kordopatis2017dml} and LBoW~\\cite{lbow} extract Maximum Activation of Convolution (MAC) features from each frame and then apply mean pooling and bag-of-words, respectively, to get video-level representations. LAMV~\\cite{Baraldi2018LAMVLT} later propose representing frames at a finer granularity using Region Maximum Activation of Convolutions (R-MAC) and learnable pooling of these frame descriptors in the Fourier Domain~\\cite{baseline:tmk}. While prior works have made strides in frame-level feature extraction and video-level pooling, these approaches encode each frame separately, and do not model the spatial and temporal interactions inherent within videos. \n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\columnwidth]{images\/shot-level-video-retrieval-smaller.PNG}\n \\caption{Our VRAG in a shot-level video retrieval setting.} \\vspace{-3mm}\n \\label{fig:shot-retrieval-diagram}\n\\end{figure}\n\nIn view that prior video-level approaches lack modelling of spatio-temporal interactions within videos, we propose VRAG: a region attention graph-based framework for content-based video retrieval. As shown in Figure~\\ref{fig:teaser}, our VRAG models videos at the fine-grained region-level as a graph. Each node of the graph represents a R-MAC feature vector. To model interactions between region nodes, we encode spatial relations through complete subgraphs between regions in the same frame and temporal relations through fully connected edges across adjacent frames. We further augment these spatial and temporal connections through self-attention, which modulates the strength of these associations through the affinities between their region-level content. Consequently, we transform each region into context-aware embeddings by selectively aggregating features from neighboring regions via Graph Attention~\\cite{graph-attention} layers. To generate video-level embeddings, we learn attention weights for each region and selectively aggregate region-level embeddings into a fixed size representation. We model our attention pooling on the intuition that important video regions add significant context to other regions and derive the attention weights from the pairwise affinities between regions across multiple Graph Attention layers. \n\nOur VRAG improves the state-of-the-art for video-level approaches across video retrieval tasks~\\cite{dataset:fivr200k, dataset:evve, dataset:cc-web-video}, and reduces the performance gap between video-level and frame-level retrieval approaches. On Event Video Retrieval~\\cite{dataset:evve}, our VRAG also achieves higher retrieval scores over the state-of-the-art for frame-level retrieval, ViSiL~\\cite{kordopatiszilos2019visil}, while being more than 50$\\times$ faster. We further propose to reduce the gap between video-level and frame-level approaches by segmenting videos into shots and representing videos over multiple shot embeddings using our VRAG, as illustrated in Figure~\\ref{fig:shot-retrieval-diagram}. 
Our shot-level VRAG evaluates the similarity between videos by aggregating pairwise similarities between their shot embeddings. In our experiments, we show that our shot-level VRAG bridges the gap between video-level and frame-level approaches on most video retrieval tasks~\\cite{dataset:fivr200k, dataset:cc-web-video}\nat faster evaluation speeds than frame-level methods, and higher retrieval precision over video-level approaches. \n\n\\section{Related Work}\n\n\\subsection{Video Retrieval}\n\n\\textbf{Multi-modal video retrieval} performs video retrieval through multi-modal embeddings. Recent methods use pretrained expert networks to extract video representations across different modalities, e.g. optical flow, audio, and introduce novel techniques to fuse these representations into a multi-modal embedding~\\cite{collaborative-experts, moEE, mm-transformer}. \\cite{collaborative-experts} models pairwise interactions between modalities and aggregates across modalities via attention pooling while \\cite{mm-transformer} use Transformers~\\cite{attention-is-all-you-need} to fuse features from different modalities.\n\n\\textbf{Video-to-text retrieval} is a popular instance of cross-modal video retrieval, where relevant videos are retrieved given a query sentence and vice versa. The video-to-text pipeline extracts video and text representations and map related video-text pairs into the same representation~\\cite{collaborative-experts, howto100m}.\n\n\\subsection{Content-based Video Retrieval (CBVR)}\nCBVR methods can be broadly classified into video-level~\\cite{baseline:tmk, Baraldi2018LAMVLT, kordopatis2017dml, lbow} and frame-level~\\cite{temporal_network, baseline:dp, kordopatiszilos2019visil} approaches.\n\n\\textbf{Video-level approaches} encode videos into fixed size embeddings, i.e. vectors. These methods extract features from video frames and temporally pool frame features into a fixed size representation. The similarity between videos is then determined through the similarities between their embeddings e.g. cosine similarity. Compared to prior video-level~\\cite{kordopatis2017dml, lbow, baseline:tmk, Baraldi2018LAMVLT} methods that encode video frames independently, our VRAG models spatio-temporal relations between\/within frames through Graph Convolution, and adds video-level context into the frame representations.\n\n\\textbf{Frame-level approaches}~\\cite{temporal_network, baseline:dp} represent videos as sequences of video frame features that scale with the video length. To evaluate video similarity, these methods aggregate pairwise similarities between video frames to derive the similarity between videos. Recently, ViSiL~\\cite{kordopatiszilos2019visil} propose using Convolutional Neural Networks (CNN) layers and the (Symmetric) Chamfer Similarity to aggregate pairwise frame similarities, and achieves state-of-the-art frame-level CBVR. However, ViSiL requires $n^2$ forward passes to evaluate pairwise similarities between $n$ videos, while our VRAG is orders of magnitude faster since it uses only $n$ forward passes to generate video embeddings.\n\n\\section{Our Approach}\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.85\\textwidth]{images\/wider-fig.PNG}\n \\caption{Our Video Region Attention Graph Network (VRAG). (Left) Graph structure. (Right) VRAG Network.}\n \\label{fig:VRAG-architecture}\n\\end{figure*}\n\nWe present our Video Region Attention Graph (VRAG) network in Figure~\\ref{fig:VRAG-architecture}. 
\n\n\\textbf{\\label{sect:frame-repr}Input Frame Representation.} Following ViSiL, we represent each video frame using concatenated Region Maximum Activation of Convolution~\\cite{rmac} (R-MAC) features from intermediate convolution layers. We pass each video frame through a pretrained ConvNet, i.e. ResNet50~\\cite{resnet} pre-trained on ImageNet, with $L$ layers to generate intermediate activation maps, $f_\\text{conv}: \\mathcal{I}^{(t)} \\mapsto \\left\\{\\mathcal{M}^{(t)}_1, \\ldots, \\mathcal{M}^{(t)}_L\\right\\}$. Given that each intermediate activation map $\\mathcal{M}_l^{(t)} \\in \\mathbb{R}^{H_l \\times W_l \\times C_l}$ has varying spatial dimensions, we transform the activation maps into the same spatial dimension by defining different R-MAC kernel sizes for each layer. Subsequently, we concatenate the intermediate R-MAC features channel-wise to represent each video frame, i.e. $f_\\text{rmac}: \\left\\{\\mathcal{M}^{(t)}_1, \\ldots, \\mathcal{M}^{(t)}_L\\right\\} \\mapsto \\mathcal{X}^{(t)} \\in \\mathbb{R}^{R \\times C}$, where $R$ refers to the number of R-MAC features in each frame, and \n$C=\\sum_{l=1}^{L}C_l$\nis the dimension of the concatenated vector. We then encode a video of length $T$ into $N=T\\times R$ R-MAC features, i.e. $\\mathbf{X} = \\left[\\mathcal{X}^{(1)}|\\mathcal{X}^{(2)}| \\ldots| \\mathcal{X}^{(T)}\\right] \\in \\mathbb{R}^{N \\times C}$.\n\n\\textbf{Spatial and Temporal Graph Structure.}\nWe represent each R-MAC feature vector $\\mathbf{x}_i \\in \\mathbf{X}=\\left[\\mathbf{x}_1, \\ldots, \\mathbf{x}_N\\right] \\in \\mathbb{R}^{N \\times C}$ as a node $\\mathcal{V}_i$ in the graph $\\mathcal{G} = \\{\\mathcal{V}, \\mathcal{E}_\\text{spatial}, \\mathcal{E}_\\text{temporal}\\}$ of our VRAG.\nWe then add two sets of graph edges: 1) $\\mathcal{E}_\\text{spatial}$ is a set of edges that connects region nodes in the same frame $\\mathcal{I}^{(t)}$ to capture spatial relationships between nodes within a frame; this set includes a self-referencing edge for each node. 2) $\\mathcal{E}_\\text{temporal}$ refers to edges that connect the regions of each frame $\\mathcal{I}^{(t)}$ to the regions of its adjacent frames $\\mathcal{I}^{(t-1)}$ and $\\mathcal{I}^{(t+1)}$ to capture temporal relations between video frames. \n\n\\textbf{Learning Region Embeddings.} From the video region graph $\\mathcal{G} = \\{\\mathcal{V}, \\mathcal{E}_\\text{spatial}, \\mathcal{E}_\\text{temporal}\\}$ constructed earlier, we learn region-level embeddings using Graph Convolutional Network~\\cite{gcn, graph-attention} (GCN) layers. To increase the duration of videos that can be processed, we first reduce the dimensions of the R-MAC region vectors $\\mathbf{X} \\in \\mathbb{R}^{N\\times C}$ using a fully-connected layer with non-linear activation, i.e. $f_{r}: \\mathbf{X} \\mapsto \\mathbf{X}^{(0)} \\in \\mathbb{R}^{N\\times C^\\prime}$.\nWe then pass the region vectors through $K$ Graph Attention~\\cite{graph-attention} layers to generate intermediate region features, i.e. $g^{(k)}: \\mathbf{X}^{(k-1)} \\mapsto \\mathbf{X}^{(k)} \\in \\mathbb{R}^{N \\times C^\\prime}$ for $k \\in \\left[1, \\ldots, K\\right]$. Specifically, we aggregate the features from neighboring regions as follows: \n\n\\paragraph{1)} To enable interactions between neighboring regions with complementary representations, we use the key-query self-attention mechanism~\\cite{attention-is-all-you-need} to describe the similarity between regions. 
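\n\nTo make the graph construction above concrete, the following minimal sketch (in NumPy; the helper name and its arguments are illustrative and not part of our implementation) assembles the edge sets $\\mathcal{E}_\\text{spatial}$ and $\\mathcal{E}_\\text{temporal}$ for the $N = T \\times R$ region nodes, assuming the nodes are ordered frame by frame.\n\\begin{verbatim}\nimport numpy as np\n\ndef build_region_graph_edges(num_frames, regions_per_frame):\n    # Node t * R + r is region r of frame t. Spatial edges fully connect\n    # regions of the same frame (self-loops included); temporal edges fully\n    # connect regions of adjacent frames, in both directions.\n    spatial, temporal = [], []\n    for t in range(num_frames):\n        frame_nodes = [t * regions_per_frame + r\n                       for r in range(regions_per_frame)]\n        for i in frame_nodes:\n            for j in frame_nodes:\n                spatial.append((i, j))\n        if t + 1 < num_frames:\n            next_nodes = [(t + 1) * regions_per_frame + r\n                          for r in range(regions_per_frame)]\n            for i in frame_nodes:\n                for j in next_nodes:\n                    temporal.append((i, j))\n                    temporal.append((j, i))\n    return np.array(spatial), np.array(temporal)\n\n# Example: a 3-frame clip with R = 9 regions per frame, as in our experiments.\nE_spatial, E_temporal = build_region_graph_edges(3, 9)\n\\end{verbatim}\nOnly regions within the same frame and in adjacent frames are connected, so the number of edges grows linearly with the video length; the strength of each connection is then modulated by the key-query self-attention mechanism. 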
Specifically, we generate key and query embeddings using linear transformations from the input region vectors $\\mathbf{x}_i^{(k-1)}$, i.e.:\n\\begin{equation}\n\\small\n \n f_\\text{query}^{(k)} : \\mathbf{x}_i^{(k-1)} \\mapsto \\mathbf{a}_i^{(k)} \\in \\mathbb{R}^{C^\\prime}, \\quad\n f_\\text{key}^{(k)} : \\mathbf{x}_i^{(k-1)} \\mapsto \\mathbf{b}_i^{(k)} \\in \\mathbb{R}^{C^\\prime}.\n \n \\label{eqn:kq-embeddings}\n\\end{equation}\n\n\\vspace{-2mm}\n\\paragraph{2)} The similarity between a region $i$ and its neighboring region $j \\in \\mathcal{N}(i)$ is defined using the dot product of their query and key embeddings, i.e. $s_{ij}^{(k)}= \\mathbf{a}_i^{(k)} \\cdot \\mathbf{b}_{j}^{(k)}$. The output of the Graph Attention layer $g^{(k)}$ is derived as:\n\\begin{equation}\n \n w_{ij}^{(k)} = \\frac{e^{s_{ij}^{(k)}}}{\\sum_{n \\in \\mathcal{N}(i)}{e^{s_{in}^{(k)}}}}, \\quad\n f^{(k)} : \\sum_{j \\in \\mathcal{N}(i)} w_{ij}^{(k)} \\mathbf{x}_{j}^{(k-1)} \\mapsto \\mathbf{x}_i^{(k)},\n \n\\end{equation}\nwhere $f^{(k)}$ is a linear layer with a non-linear activation.\n\n\\vspace{-2mm}\n\\paragraph{3)} Finally, given that adding graph convolution layers may lead to over-smoothing of region representations~\\cite{gcn:over-smoothing1, chen2019measuring}, we propose concatenating region representations along the depth of our network to preserve discriminative features, i.e. $f_\\text{concat}: \\left\\{\\mathbf{X}, \\mathbf{X}^{(0)}, \\ldots, \\mathbf{X}^{(K)}\\right\\} \\rightarrow \\mathbf{R} \\in \\mathbb{R}^{N \\times \\left(C + (K+1)C^\\prime\\right)}$.\n\n\\textbf{Video Embedding.} We aggregate region embeddings $\\mathbf{R}=[\\mathbf{r}_1, \\ldots, \\mathbf{r}_N] \\in \\mathbb{R}^{N \\times \\left(C + (K+1)C^\\prime\\right)}$ using weighted sum aggregation to generate a pooled region representation:\n $\\overline{\\mathbf{r}}=\\sum^{N}_{i=1}\\beta_i \\mathbf{r}_i$,\nwhere $\\beta_i$ is the attention weight for region $\\mathbf{r}_i$. The pooled representation $\\overline{\\mathbf{r}}$ is fed into two linear layers to generate the video-level embedding $\\mathbf{v}$, i.e.: $f_\\text{mlp}:\\overline{\\mathbf{r}} \\mapsto \\mathbf{v} \\in \\mathbb{R}^D$. We model our attention weights $\\boldsymbol{\\beta} = [\\beta_1, \\ldots, \\beta_N] \\in \\mathbb{R}^N$ from the intuition that important regions in a video act as anchor regions that add context to other regions, and should have high affinities with other regions in the video. Therefore, we compute the attention weights $\\boldsymbol{\\beta}$ as follows:\n\\vspace{-2mm}\n\\paragraph{1)} We reuse the key and query embeddings of each Graph Attention layer $g^{(k)}$ from Equation~\\ref{eqn:kq-embeddings} to compute $K$ pairwise affinity matrices $\\mathcal{A}^{(k)}$ between regions, i.e.\n\\begin{equation}\n \\begin{aligned}\n \\mathcal{A}^{(k)}&=\\mathbf{A}^{(k)} \\times \\left(\\mathbf{B}^{(k)}\\right)^\\top \\in \\mathbb{R}^{N \\times N}, \\\\\n \\mathbf{A}^{(k)} &= [\\mathbf{a}_i^{(k)}, \\ldots, \\mathbf{a}^{(k)}_N] \\in \\mathbb{R}^{N \\times C^\\prime}, \\\\\n \\mathbf{B}^{(k)} &= [\\mathbf{b}_i^{(k)}, \\ldots, \\mathbf{b}^{(k)}_N] \\in \\mathbb{R}^{N \\times C^\\prime}. \\\\\n \\end{aligned}\n\\end{equation}\nThe row $\\mathcal{A}_i^{(k)}$ returns the pairwise affinity between region $i$ and every other region $j \\in [1, \\ldots, N]$ at Graph Attention layer $k$. 
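\n\nFor illustration, the following is a minimal sketch in NumPy of a single Graph Attention layer $g^{(k)}$ as defined above; the dense boolean neighbour mask, the explicit weight matrices and the ELU activation written out here are illustrative stand-ins for the learned maps $f_\\text{query}^{(k)}$, $f_\\text{key}^{(k)}$ and $f^{(k)}$, and the layer also returns the dense affinity matrix $\\mathcal{A}^{(k)}$ because it is reused later for the pooling weights.\n\\begin{verbatim}\nimport numpy as np\n\ndef elu(x, alpha=1.0):\n    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))\n\ndef graph_attention_layer(X, adj, W_query, W_key, W_out, b_out):\n    # X: (N, C') region features, adj: (N, N) boolean neighbour mask N(i).\n    Q = X @ W_query                      # query embeddings a_i\n    K = X @ W_key                        # key embeddings b_j\n    S = Q @ K.T                          # pairwise affinities s_ij\n    S_masked = np.where(adj, S, -np.inf)           # keep only j in N(i)\n    S_masked = S_masked - S_masked.max(axis=1, keepdims=True)\n    W = np.exp(S_masked)\n    W = W / W.sum(axis=1, keepdims=True)           # softmax weights w_ij\n    X_new = elu((W @ X) @ W_out + b_out)           # f^(k) of the aggregate\n    return X_new, S                      # S plays the role of A^(k)\n\n# Toy usage: N = 4 regions, C' = 8, fully connected neighbourhood.\nrng = np.random.default_rng(0)\nN, Cp = 4, 8\nX0 = rng.normal(size=(N, Cp))\nadj = np.ones((N, N), dtype=bool)\nWq, Wk, Wo = (rng.normal(scale=0.1, size=(Cp, Cp)) for _ in range(3))\nX1, A1 = graph_attention_layer(X0, adj, Wq, Wk, Wo, np.zeros(Cp))\n\\end{verbatim}\nStacking $K$ such layers and concatenating $\\left\\{\\mathbf{X}, \\mathbf{X}^{(0)}, \\ldots, \\mathbf{X}^{(K)}\\right\\}$ depth-wise then yields the region embeddings $\\mathbf{R}$, while the per-layer affinities are kept for the attention pooling described next. 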
For each region $i$, we average the affinities in $\\mathcal{A}_i^{(k)}$ and concatenate the averaged affinities over $K$ Graph Attention layers to obtain the affinity vector $\\boldsymbol{\\alpha}_i \\in \\mathbb{R}^{K}$. \n\n\\vspace{-2mm}\n\\paragraph{2)} We feed $\\boldsymbol{\\alpha}_i$ into a single linear layer to derive the unnormalized attention weights i.e. $f_\\text{att}: \\boldsymbol{\\alpha}_i \\mapsto \\tilde{\\beta}_i$ and normalize the region attention weights \nvia softmax, i.e.:\n\\begin{equation}\n \\beta_i = \\frac{e^{\\tilde\\beta_i}}{\\sum^{N}_{j=1}e^{\\tilde\\beta_j}}.\n\\end{equation}\n\n\\subsection{Training VRAG}\nWe train VRAG using the triplet margin loss~\\cite{triplet-margin-loss}. From a triplet of video-level embeddings $\\left(\\mathbf{v}, \\mathbf{v}^+, \\mathbf{v}^-\\right)$ that correspond to the anchor, positive and negative video respectively, we compute the loss:\n\\begin{equation}\n \\mathcal{L} = \\left\\lfloor c\\left(\\mathbf{v}, \\mathbf{v}^-\\right) - c\\left(\\mathbf{v}, \\mathbf{v}^{+}\\right) + m\\right\\rfloor_+,\n\\end{equation}\nwhere $c(\\cdot, \\cdot)$ computes the cosine similarity between embeddings, $m$ refers to the margin. During training, we sample triplets using the triplet mining scheme consistent with ~\\cite{kordopatiszilos2019visil}.\n\n\\section{Shot Representations}\n\nIn this section, we introduce an intermediate approach utilizing shot representations for video retrieval. We use the shot-boundary algorithm in Section~\\ref{sect:sba} to divide a video $\\left[\\mathcal{I}^{(1)}, \\ldots, \\mathcal{I}^{(T)}\\right]$ into non-overlapping shots and use VRAG to encode each shot into embeddings. Using shots can reduce noise within VRAG embeddings by removing spurious edges across shot boundaries.\nIn our shot-level approach, we evaluate video similarity by aggregating the pairwise similarities between shot-level embeddings.\n\n\\textbf{\\label{sect:sba}Shot Boundary Algorithm.} In our shot boundary algorithm, we identify the boundaries between video shots by comparing the similarities between consecutive frames. Specifically, we flatten the R-MAC features from each frame $\\mathcal{X}^{(t)} \\in \\mathbb{R}^{R \\times C}$ into the frame representation vector $\\mathcal{F}^{(t)} \\in \\mathbb{R}^{(R\\times C)\\times1}$. The similarities between consecutive frames is then the cosine similarity between their representations vectors $\\mathcal{F}^{(t)}$. We mark the frame $t$ as the start of a new video shot when the cosine similarity between $\\mathcal{F}^{(t)}$ and $\\mathcal{F}^{(t-1)}$ is lower than the minimum cosine similarity threshold $\\tau_s$. Generally, a higher $\\tau_s$ creates more shots and resembles frame-level approaches while a smaller $\\tau_s$ reduces the number of shots and closely approximates video-level methods.\n\n\\textbf{\\label{sect:aggregate-shots}Aggregating Shot Similarities.} Under our shot-level approach, we evaluate the similarity between videos $\\mathbf{X}$ and $\\mathbf{X}^\\prime$ from the pairwise similarities between their shot embeddings, i.e. $\\left[\\mathbf{o}_1, \\ldots, \\mathbf{o}_N\\right]$ and $\\left[\\mathbf{o}_1^\\prime, \\ldots, \\mathbf{o}_M^\\prime\\right]$. 
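\n\nAs a concrete illustration of the shot-boundary rule above and of the aggregation that follows, the sketch below (NumPy; the helper names are illustrative, and the shot embeddings are assumed to be $\\ell_2$-normalised) segments a video into shots by thresholding the cosine similarity of consecutive flattened R-MAC frame vectors, and then scores a video pair with a Chamfer-style max-mean aggregation over the pairwise shot-similarity matrix, which is made precise in the equations that follow.\n\\begin{verbatim}\nimport numpy as np\n\ndef cosine(u, v, eps=1e-12):\n    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + eps))\n\ndef segment_into_shots(frame_vectors, tau_s=0.75):\n    # frame_vectors: (T, R*C) flattened R-MAC frame representations F^(t).\n    # Returns a list of shots, each given by the frame indices it contains.\n    shots = [[0]]\n    for t in range(1, len(frame_vectors)):\n        if cosine(frame_vectors[t], frame_vectors[t - 1]) < tau_s:\n            shots.append([t])        # similarity dropped: start a new shot\n        else:\n            shots[-1].append(t)\n    return shots\n\ndef chamfer_similarity(shots_a, shots_b, symmetric=False):\n    # shots_a: (N, D), shots_b: (M, D) L2-normalised shot embeddings.\n    S = shots_a @ shots_b.T                      # pairwise cosine similarities\n    cs = S.max(axis=1).mean()                    # CS(X, X')\n    if symmetric:\n        cs = 0.5 * (cs + S.max(axis=0).mean())   # SCS(X, X')\n    return float(cs)\n\\end{verbatim}\n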
We compute the cosine similarity between pairs of shots and build a pairwise shot cosine similarity matrix $\\mathcal{S} \\in [-1, 1]^{N \\times M}$ where $\\mathcal{S}_{ij}$ returns the cosine similarity between shots $\\mathbf{o}_i$ and $\\mathbf{o}_j^\\prime$.\n\nWe aggregate these pairwise similarities into a similarity metric between videos. We use two aggregation schemes proposed in \\cite{kordopatiszilos2019visil}: Chamfer similarity (CS) which takes the maximum similarity along the columns of $\\mathcal{S}$ followed by average pooling of the maximum similarity vector, i.e. \n\\begin{equation}\n s = \\operatorname{CS}(\\mathbf{X}, \\mathbf{X}^\\prime)=\\frac{1}{N}\\sum_{i=1}^N \\left(\\max_{j \\in 1, \\ldots, M} \\mathcal{S}_{ij} \\right),\n\\end{equation}\nSymmetric Chamfer Similarity (SCS) which computes the average Chamfer similarity from $\\mathcal{S}$ and $\\mathcal{S^\\top}$, i.e.:\n\\begin{equation}\n \\begin{aligned}\n s = \\operatorname{SCS}(\\mathbf{X}, \\mathbf{X}^\\prime) = \\frac{\\operatorname{CS}(\\mathbf{X}, \\mathbf{X}^\\prime) + \\operatorname{CS}(\\mathbf{X}^\\prime, \\mathbf{X})}{2}.\n \\end{aligned}\n\\end{equation}\n\n\\section{Experiments}\n\nWe evaluate video retrieval performance using mean Average Precision~\\cite{dataset:cc-web-video} (mAP). Additionally, we provide qualitative results in our supplementary material.\n\n\\subsection{Experiment Settings}\n\nWe evaluate VRAG and our shot-level approach over several video retrieval settings: Near-Duplicate Video Retrieval (NDVR); Event Video Retrieval (EVR); and Fine-grained Incident Video Retrieval (FIVR).\n\n\\vspace{0mm}\n\\paragraph{Datasets.} For evaluation, we use CC\\_WEB\\_VIDEO~\\cite{dataset:cc-web-video} for NDVR, EVVE~\\cite{dataset:evve} for EVR, and FIVR200K~\\cite{dataset:fivr200k} and its subset FIVR5K~\\cite{dataset:fivr200k, kordopatiszilos2019visil} for FIVR. During training, we sample triplets of videos from VCDB~\\cite{dataset:vcdb}.\n\n\\noindent \\textbf{CC\\_WEB\\_VIDEO}~\\cite{dataset:cc-web-video} contains 13,129 videos with 24 query videos. In this dataset, near-duplicate videos correspond to videos stored in different formats e.g. .flv; .wmv; and videos with minor content differences e.g. photometric variations like lighting. We evaluate on the original annotations from \\cite{dataset:cc-web-video} and the cleaned annotations from \\cite{kordopatiszilos2019visil}. \n\n\\noindent\\textbf{EVVE}~\\cite{dataset:evve} contains 2,995 videos from Youtube with 620 query videos over 13 events. \n\n\\noindent \\textbf{FIVR200K}~\\cite{dataset:fivr200k} contains 100 query videos and a total of 225,960 videos from 4,687 Wikipedia events. 
The dataset groups similar videos into four categories: 1) Near-Duplicate (ND) videos which share all scenes with the query video; 2) Duplicate Scene (DS) videos that share at least one scene with the query video; 3) Complementary Scene (CS) videos that share at least one segment with the query video, but from a different viewpoint; 4) Incident Scene (IS) videos that capture the same incident as the query video without sharing segments.\nFrom these video categories, the dataset introduces three video retrieval settings: 1) Duplicate Scene Video Retrieval (DSVR) which only considers ND and DS videos as positives; 2) Complementary Scene Video Retrieval (CSVR) that extends DSVR to include CS videos; and 3) Incident Scene Video Retrieval (ISVR) which marks all four categories of videos as similar videos.\n\n\\vspace{0mm}\n\\noindent \\textbf{FIVR5K}~\\cite{dataset:fivr200k, kordopatiszilos2019visil} is a subset of FIVR200K, with 50 query videos and 5,000 database videos. To create FIVR5K, the authors~\\cite{kordopatiszilos2019visil} select the 50 hardest query videos on the DSVR task from FIVR200K, using \\cite{lbow} to measure difficulty.\n\n\\vspace{0mm}\n\\noindent \\textbf{VCDB}~\\cite{dataset:vcdb} consists of a core dataset with 528 videos and a background dataset with 100,000 distractor videos. The core dataset contains a total of 9,236 pairs of partial copies among its 528 videos.\n\n\\paragraph{Implementation Details.}\nFor each video, we sample frames at one-second intervals. We extract R-MAC features from each bottleneck layer of ResNet50~\\cite{resnet}, i.e. $L=4$ and $C=3840$. We set the number of regions per frame $R=9$, the intermediate dimension $C^\\prime=512$, and the video-level embedding dimension $D=4096$. We use $K=3$ Graph Attention~\\cite{graph-attention} layers and the ELU~\\cite{nonlinearity:elu} non-linearity.\nWe train VRAG on a single GTX 1080 Ti using a batch size of one over 120 epochs. The maximum duration of each video clip in a training triplet is $W=64$s, and we sample 1,000 triplets from each triplet pool, giving a total of 2,000 training iterations per epoch. We optimize VRAG using the triplet margin loss with $m=0.2$ and the Adam~\\cite{optimizer:adam} optimizer with a fixed learning rate of $3 \\times 10^{-7}$. During inference, VRAG processes up to 2,000 frames on 11GB of GPU memory, which amounts to over 30 minutes of video.\n\n\n\\subsection{Ablation Studies}\n\n\\paragraph{Our VRAG.}\nAt the region level, we use attention to aggregate features from neighboring regions. We compare our attention aggregation mechanism with alternatives such as max aggregation and average aggregation, i.e. GCN~\\cite{gcn}. In Table~\\ref{tab:ablation:region-aggregation}, we show that our choice of region-level attention aggregation improves performance across all FIVR5K settings.\n\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{|c|c|c|c|c|}\n \\hline\n Method & Region Agg. & DSVR & CSVR & ISVR \\\\\n \\hline\\hline\n \\multirow{3}{*}{VRAG} & Max & 0.520 & 0.537 & 0.508 \\\\\n \\cline{2-5}\n & Average & 0.518 & 0.532 & 0.506 \\\\\n \\cline{2-5}\n & Attention & \\textbf{0.532} & \\textbf{0.548} & \\textbf{0.518} \\\\\n \\hline\n \\end{tabular}\n \\caption{Comparison on FIVR5K over different choices of region-level aggregation. 
Attention-weighted summation is used to pool region embeddings for all choices of region-level aggregation.}\n \\label{tab:ablation:region-aggregation}\n\\end{table}\n\nAt the video-level, we pool region embeddings to a fixed size vector $\\bar{\\mathbf{r}}$ using an attention-weighted summation. We compare our attention pooling with standard methods, i.e. max pooling and average pooling. In Table~\\ref{tab:ablation:region-pooling}, we demonstrate that our learnable pooling gives better performance over conventional pooling techniques. \n\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{|c|c|c|c|c|}\n \\hline\n Method & Region Pooling & DSVR & CSVR & ISVR \\\\\n \\hline\\hline\n \\multirow{3}{*}{VRAG} & Max & 0.423 & 0.435 & 0.415 \\\\\n \\cline{2-5}\n & Average & 0.505 & 0.513 & 0.499 \\\\\n \\cline{2-5}\n & Attention & \\textbf{0.532} & \\textbf{0.548} & \\textbf{0.518} \\\\\n \\hline\n \\end{tabular}\n \\caption{\\label{tab:ablation:region-pooling} {Comparison on FIVR5K over different video-level pooling techniques.}}\n\\end{table}\nWe also compare our choice of self-attention to an alternative with reduced parameters. In Table~\\ref{tab:ablation:attention}, we observe that using separate key\/query parameters allows for more complex relations between regions and significantly improves retrieval performance in FIVR5K.\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{|c|c|c|c|c|}\n \\hline\n Method & Attention & DSVR & CSVR & ISVR \\\\\n \\hline\\hline\n \\multirow{2}{*}{VRAG} & $f_\\text{query}^{(k)} = f_{key}^{(k)}$ & 0.493 & 0.510 & 0.490 \\\\\n \\cline{2-5}\n & $f_\\text{query}^{(k)} \\neq f_{key}^{(k)}$ & \\textbf{0.532} & \\textbf{0.548} & \\textbf{0.518} \\\\\n \\hline\n \\end{tabular}\n \\caption{Comparison on FIVR5K using different implementations of attention.}\n \\label{tab:ablation:attention}\n\\end{table}\n\nFinally, we propose concatenating the region-level embeddings to reduce over-smoothing effects observed in GCNs~\\cite{gcn:over-smoothing1, chen2019measuring}. We compare against three baselines: 1) Using only the output from the final Graph Attention layer; 2) Concatenating the outputs from all Graph Attention layers; 3) Concatenating the outputs from all Graph Attention layers and $f_r$. In Table~\\ref{tab:ablation:region-embed}, we show that multi-layer concatenation of region-level embeddings greatly improves retrieval performance. Furthermore, we observe a monotonic increase in performance as we concatenate region representations across more layers.\n\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{|c|c|c|c|c|}\n \\hline\n Method & Concat. Layers & DSVR & CSVR & ISVR \\\\\n \\hline\\hline\n \\multirow{4}{*}{VRAG} & Final Graph Attn. & 0.452 & 0.463 & 0.442 \\\\\n \\cline{2-5}\n & All Graph Attn. & 0.481 & 0.497 & 0.477 \\\\\n \\cline{2-5}\n & All Graph Attn. + $f_r$ & 0.507 & 0.526 & 0.498 \\\\\n \\cline{2-5}\n & All Layers & \\textbf{0.532} & \\textbf{0.548} & \\textbf{0.518} \\\\\n \\hline\n \\end{tabular}\n \\caption{Comparison on FIVR5K over different levels of depth-wise concatenation.}\n \\label{tab:ablation:region-embed}\n\\end{table}\n\n\\paragraph{Shot-level Video Retrieval.}\nIn shot-level video retrieval, i.e. VRAG-S, we compare different choices of $\\tau_s$ for our shot boundary algorithm. In Table~\\ref{tab:ablation:shot-level}, we demonstrate that increasing the number of shots i.e $\\tau_s = 0.75$ results in better performance in FIVR5K. 
Our results are also consistent with the frame-level approach ViSiL~\\cite{kordopatiszilos2019visil}, where aggregating similarities using Chamfer Similarity (CS) gives better performance than Symmetric Chamfer Similarity (SCS). We also observe that our shot-level approach bridges the large gap in performance between video-level approaches, i.e. VRAG, and frame-level approaches, i.e. ViSiL~\\cite{kordopatiszilos2019visil}, on FIVR5K.\n\n\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{|c|c|c|c|c|c|}\n \\hline\n Method & $\\tau_s$ & Shot Agg. & DSVR & CSVR & ISVR \\\\ \n \\hline\\hline\n VRAG & - & - & 0.532 & 0.548 & 0.518 \\\\\n \\hline \\hline\n \\multirow{4}{*}{VRAG-S} & \\multirow{2}{*}{0.5} & CS & 0.493 & 0.512 & 0.483 \\\\\n \\cline{3-6}\n && SCS & 0.463 & 0.477 & 0.464 \\\\\n \\cline{2-6}\n & \\multirow{2}{*}{0.75} & CS & \\textbf{0.709} & \\textbf{0.704} & \\textbf{0.636} \\\\\n \\cline{3-6}\n & & SCS & 0.596 & 0.606 & 0.575 \\\\\n \\hline \\hline \n \\multirow{2}{*}{ViSiL} & \\multirow{2}{*}{-} & CS & \\textbf{0.880} & \\textbf{0.869} & \\textbf{0.777} \\\\\n \\cline{3-6}\n & & SCS & 0.830 & 0.823 & 0.731 \\\\\n \\hline \\hline\n \\end{tabular}\n \\caption{Comparison on FIVR5K over different shot-level hyperparameters.}\n \\label{tab:ablation:shot-level}\n\\end{table}\n\n\\subsection{Comparison with Baseline Methods}\n\nWe compare VRAG and our shot-level approach with existing approaches as baselines across several video retrieval tasks. We evaluate our performance on CC\\_WEB\\_VIDEO~\\cite{dataset:cc-web-video} for NDVR, EVVE~\\cite{dataset:evve} for EVR, and FIVR200K~\\cite{dataset:fivr200k} and FIVR5K~\\cite{kordopatiszilos2019visil, dataset:fivr200k} for FIVR.\n\\vspace{0mm}\n\\paragraph{Video-level Methods.} For LBoW~\\cite{lbow}, we use the results from \\cite{video-verification-fake-news} for CC\\_WEB\\_VIDEO and FIVR200K. We re-implement LBoW, i.e. LBoW$^\\dagger$, with a codebook of 1000 visual words built on VCDB using KMeans++, for EVVE and FIVR5K. For DML~\\cite{kordopatis2017dml}, we obtain results for CC\\_WEB\\_VIDEO and FIVR200K from \\cite{video-verification-fake-news}, and use the released source code\\footnote{https:\/\/github.com\/MKLab-ITI\/ndvr-dml} for EVVE and FIVR5K, i.e. DML*. \nFor TMK~\\cite{baseline:tmk} and LAMV~\\cite{Baraldi2018LAMVLT} on EVVE, we found discrepancies between their evaluation script and the original script from \\cite{dataset:evve}. We provide corrected results, i.e. TMK*, LAMV*, using the original EVVE evaluation script~\\cite{dataset:evve}. LAMV was not available in the released source code\\footnote{\\label{footnote:lamv}https:\/\/github.com\/facebookresearch\/videoalignment}, and we use TMK with frequency normalization (0.534 mAP on EVVE) as a close proxy for LAMV (0.536 mAP on EVVE~\\cite{Baraldi2018LAMVLT}). We also evaluate TMK* and LAMV* on FIVR5K and FIVR200K. For other video-level methods, we report results from \\cite{kordopatiszilos2019visil}.\n\n\\vspace{-3mm}\n\\paragraph{Frame-level Methods.} On EVVE, the authors of ViSiL~\\cite{kordopatiszilos2019visil} were only able to download, process and evaluate on approximately 80\\% of the dataset. We provide complete results for ViSiL using the released source code\\footnote{https:\/\/github.com\/MKLab-ITI\/visil.git}, i.e. ViSiL$_\\text{sym}$* and ViSiL$_{v}$*. 
For other frame-level methods, we report the updated results using deep network features in \\cite{kordopatiszilos2019visil}.\n\n\\textbf{Near-duplicate Video Retrieval.}\n\nIn Table~\\ref{tab:results:ccweb}, we compare our approach with other video-level and frame-level approaches on CC\\_WEB\\_VIDEO~\\cite{dataset:cc-web-video}, and demonstrate that our approach achieves state-of-the-art performance for video-level NDVR. Similar to FIVR5K, we observe that our shot-level approach has intermediate performance to frame-level and video-level approaches. \nWe also compare the run times for video-level VRAG and shot-level VRAG with ViSiL$_v$~\\cite{kordopatiszilos2019visil} after extracting R-MAC features from each video frame. Video-level and shot-level VRAG takes 33mins and 52mins, respectively, while ViSiL uses 109mins to infer video similarities. \n\n\\begin{table}[!htbp]\n\\small\n\\setlength\\tabcolsep{3pt}\n \\begin{tabular}{|l|l|c|c|c|c|}\n \\hline \\hline\n Type & Method & cc\\_web & cc\\_web* & cc\\_web$_c$ & cc\\_web$_c$* \\\\\n \\hline \\hline\n \\multirow{3}{*}{Video} & \\text{LBoW} & 0.957 & 0.906 & - & - \\\\\n & \\text{DML} & \\textbf{0.971} & 0.941 & 0.979 & 0.959 \\\\\n & \\textbf{Ours} & \\textbf{0.971} & \\textbf{0.952} & \\textbf{0.980} & \\textbf{0.967} \\\\\n \\hline \\hline\n \n \\multirow{2}{*}{Shot} & \\textbf{Ours (CS)} & 0.975 & 0.955 & \\textbf{0.987} & \\textbf{0.977} \\\\\n & \\textbf{Ours (SCS)} & \\textbf{0.976} & \\textbf{0.959} & 0.986 & \\textbf{0.977} \\\\\n \\hline\\hline\n \\multirow{6}{*}{Frame} & \\text{CTE} & \\textbf{0.996} & - & - & - \\\\\n & \\text{DP} & 0.975 & 0.958 & 0.990 & 0.982 \\\\\n & \\text{TN} & 0.978 & 0.965 & 0.991 & 0.987 \\\\\n & \\text{ViSiL}$_f$ & 0.984 & 0.969 & 0.993 & 0.987 \\\\\n & \\text{ViSiL}$_\\text{sym}$ & 0.982 & 0.969 & 0.991 & 0.988 \\\\\n & \\text{ViSiL}$_v$ & 0.985 & \\textbf{0.971} & \\textbf{0.996} & \\textbf{0.993} \\\\\n \\hline\n \\end{tabular}\n \\caption{Results on four different versions of CC\\_WEB\\_VIDEO. 
(*) denotes evaluation on the entire data set and the subscript $c$ uses cleaned annotations.} \\vspace{-3mm}\n \\label{tab:results:ccweb}\n\\end{table}\n\n\\begin{table*}[tb]\n\\setlength\\tabcolsep{5pt}\n\\footnotesize\n\\centering\n\\begin{tabular}{|l||l||c||ccccccccccccc|}\n\\hline\nApproach & Method & mAP & \\multicolumn{13}{c|}{per event class} \\\\\n\\hline \\hline\n\\multirow{9}{*}{Video}\n & LBoW$^\\dagger$ & 0.469 & 0.323 & 0.373 & 0.062 & 0.392 & 0.306 & 0.232 & 0.205 & 0.127 & 0.060 & 0.376 & 0.233 & 0.769 & 0.713 \\\\ \n & \\text{DML*} & 0.472 & 0.437 & 0.368 & 0.052 & 0.385 & 0.242 & 0.275 & 0.205 & 0.105 & 0.085 & 0.414 & 0.245 & 0.783 & 0.656 \\\\\n &\\text{TMK*} & 0.469 & 0.508 & 0.306 & 0.139 & 0.366 & 0.294 & 0.244 & 0.208 & 0.125 & 0.152 & 0.287 & 0.213 & 0.810 & 0.614 \\\\\n & \\text{LAMV*} & 0.493 & 0.649 & 0.321 & 0.157 & 0.411 & 0.319 & 0.241 & 0.224 & 0.124 & 0.194 & 0.257 & 0.191 & 0.857 & 0.660 \\\\\n & \\text{LAMV+QE*} & 0.541 & 0.795 & 0.413 & \\textbf{\\underline{0.160}} & \\underline{0.546} & \\underline{0.376} & 0.297 & 0.235 & 0.124 & 0.236 & 0.257 & 0.185 & 0.907 & 0.754 \\\\\n & \\text{LAMV} & 0.536 & 0.715 & 0.383 & 0.158 & 0.461 & 0.387 & 0.227 & 0.247 & 0.138 & 0.222 & 0.273 & 0.273 & 0.908 & 0.691 \\\\\n & \\text{LAMV+QE} & 0.587 & 0.837 & 0.500 & 0.126 & \\textbf{0.588} & \\textbf{0.455} & \\textbf{0.343} & \\textbf{0.267} & 0.142 & 0.230 & 0.293 & 0.216 & \\textbf{0.950} & 0.770 \\\\\n & \\textbf{Ours} & 0.623 & 0.792 & 0.675 & 0.072 & 0.496 & 0.329 & 0.292 & \\underline{0.256} & 0.241 & \\underline{\\textbf{0.497}} & 0.692 & 0.378 & 0.928 & 0.770 \\\\\n & \\textbf{Ours+QE} & \\textbf{\\underline{0.653}} & \\textbf{\\underline{0.888}} & \\textbf{\\underline{0.743}} & 0.042 & 0.505 & 0.342 & \\underline{0.304} & 0.247 & \\underline{\\textbf{0.280}} & 0.489 & \\underline{\\textbf{0.782}} & \\underline{\\textbf{0.410}} & \\underline{0.943} & \\underline{\\textbf{0.835}} \\\\\n \\hline\\hline\n \n \\multirow{2}{*}{Shot} & \\textbf{Ours (CS)} & 0.539 & 0.796 & 0.599 & 0.077 & \\textbf{0.515} & 0.203 & \\textbf{0.266} & 0.190 & 0.098 & 0.222 & 0.589 & 0.299 & 0.836 & \\textbf{0.775} \\\\\n & \\textbf{Ours (SCS)} & \\textbf{0.606} & \\textbf{0.832} &\\textbf{ 0.722} & \\textbf{0.155} & 0.494 & \\textbf{0.336} & 0.265 & \\textbf{0.236} & \\textbf{0.177} & \\textbf{0.366} & \\textbf{0.620} & \\textbf{0.304} & \\textbf{0.925} & 0.670 \\\\\n \\hline\\hline\n \\multirow{3}{*}{Frame} & \\text{ViSiL}$_{f}$ & 0.589 & 0.889 & 0.570 & 0.169 & 0.432 & 0.345 & \\textbf{0.393} & \\textbf{0.297} & 0.181 & 0.479 & 0.564 & 0.369 & 0.885 & 0.799 \\\\\n & \\text{ViSiL}$_\\text{sym}$* & 0.612 & \\textbf{0.923} & \\textbf{0.724} & \\textbf{0.301} & 0.573 & \\textbf{0.418} & 0.276 & 0.291 & \\textbf{0.200} & \\textbf{0.544} & 0.396 & 0.339 & \\textbf{0.938} & 0.753 \\\\\n & \\text{ViSiL}$_\\text{v}$* & \\textbf{0.618} & 0.920 & 0.713 & 0.222 & \\textbf{0.589} & 0.350 & 0.345 & 0.276 & 0.169 & 0.444 & \\textbf{0.567} & \\textbf{0.375} & 0.909 &\\textbf{0.842} \\\\\n \\hline \\hline\n \\end{tabular}\n \\caption{Comparison with state-of-the-art EVR approaches on EVVE. We use the same event class ordering as \\cite{dataset:evve}. For video-level approaches, we also underline the result with the highest mAP, excluding the results with discrepancies in the evaluation script, i.e. LAMV and LAMV+QE. 
We report results obtained from the original EVVE evaluation script.} \\vspace{-2mm}\n \\label{tab:results:evve}\n\\end{table*}\n\n\\textbf{Fine-grained Incident Video Retrieval.} We evaluate on FIVR5K and FIVR200K in Table \\ref{tab:results:fivr5k} and Table \\ref{tab:results:fivr200k} respectively. Our results include the run times for available frame-level and video-level methods after extracting frame-level features. VRAG achieves state-of-the-art performance over video-level methods on FIVR datasets.\nGenerally, we observe a huge difference in retrieval performance between video-level and frame-level approaches on FIVR. Although our shot-level approach bridges the gap between video-level and frame-level while taking $1.4\\times$ longer than video-level VRAG, the dramatic disparity between the frame-level and video-level approaches warrants a qualitative inspection of FIVR200K.\n\n\\begin{table}[ht]\n \\footnotesize\n \\begin{tabular}{|l|l|c|c|c|c|}\n \\hline \\hline\n Type & Method & DSVR & CSVR & ISVR & Time\\\\\n \\hline \\hline\n \\multirow{6}{*}{Video} & \\text{LBoW$^\\dagger$} & 0.351 & 0.320 & 0.298 & 4m 9s \\\\\n & \\text{DML*} & 0.354 & 0.351 & 0.331 & 3m 29s \\\\\n & \\text{TMK*} & 0.411 & 0.416 & 0.388 & 19m 9s \\\\ \n & \\text{LAMV*} & 0.498 & 0.488 & 0.426 & 20m 24s \\\\\n & \\textbf{Ours} & \\textbf{0.532} & \\textbf{0.548} & \\textbf{0.518} & 8m 27s \\\\\n \\hline \\hline\n \n \\multirow{2}{*}{Shot} & \\textbf{Ours (CS)} & \\textbf{0.709} & \\textbf{0.704} & \\textbf{0.636} & \\multirow{2}{*}{10m 35s}\\\\\n & \\textbf{Ours (SCS)} & 0.596 & 0.606 & 0.575 & \\\\\n \\hline\\hline\n \\multirow{3}{*}{Frame} \n & \\text{ViSiL}$_{f}$ & 0.838 & 0.832 & 0.739 & - \\\\\n & \\text{ViSiL}$_\\text{sym}$ & 0.830 & 0.823 & 0.731 & 45m 24s \\\\\n & \\text{ViSiL}$_v$ & \\textbf{0.880} & \\textbf{0.869} & \\textbf{0.777} & 46m 39s \\\\\n \\hline\n \\end{tabular}\n \n \\caption{Results on FIVR5K} \\vspace{-3mm}\n \\label{tab:results:fivr5k}\n\\end{table}\n\\vspace{-3mm}\n\\begin{table}[ht]\n \\footnotesize\n \\begin{tabular}{|l|l|c|c|c|c|}\n \\hline \\hline\n Type & Method & DSVR & CSVR & ISVR & Time\\\\\n \\hline \\hline\n \\multirow{6}{*}{Video} & \\text{LBoW} & 0.378 & 0.361 & 0.297 & 3h 50m \\\\\n & \\text{DML} & 0.398 & 0.351 & 0.331 & 3h 31m \\\\\n & \\text{TMK*} & 0.417 & 0.394 & 0.319 & 18h 51m \\\\ \n & \\text{LAMV*} & \\textbf{0.489} & 0.459 & 0.364 & 20h 30m \\\\\n & \\textbf{Ours} & 0.484 & \\textbf{0.470} & \\textbf{0.399} & 6h 50m \\\\\n \\hline \\hline\n \\multirow{2}{*}{Shot} & \\textbf{Ours (CS)} & \\textbf{0.723} & \\textbf{0.678} & \\textbf{0.554} & \\multirow{2}{*}{9h 34m} \\\\\n & \\textbf{Ours (SCS)} & 0.536 & 0.504 & 0.422 & \\\\\n \\hline\\hline\n \\multirow{5}{*}{Frame}\n & \\text{DP} & 0.775 & 0.740 & 0.632 & - \\\\\n & \\text{TN} & 0.724 & 0.699 & 0.589 & - \\\\\n & \\text{ViSiL}$_{f}$ & 0.843 & 0.797 & 0.660 & - \\\\\n & \\text{ViSiL}$_\\text{sym}$ & 0.830 & 0.823 & 0.731 & 63h 31m \\\\\n & \\text{ViSiL}$_{v}$ & \\textbf{0.880} & \\textbf{0.869} & \\textbf{0.777} & 66h 18m\\\\\n \\hline \\hline\n \\end{tabular}\n \n \\caption{Results on FIVR200K} \\vspace{-2mm}\n \\label{tab:results:fivr200k}\n\\end{table}\n\nWe found that most FIVR200K queries comprise of single shots while their Duplicate Scene (DS) videos are sequences of visually diverse shots. Additionally, most shots in the DS videos have low visual and semantic correspondence with the query segment. Consequently, video-level methods are likely to yield noisy representations that are distinct to the query. 
On the other hand, higher-fidelity approaches, i.e. frame-level and shot-level retrieval, which segment the video into multiple representations, can preserve the query video segment and obtain more accurate video similarities. We show examples in the supplementary material.\n\n\n\n\n\\subsubsection{Event Video Retrieval}\nIn Table~\\ref{tab:results:evve}, we evaluate our performance on EVVE~\\cite{dataset:evve}. Without test augmentations, i.e. Query Expansion, VRAG outperforms the state-of-the-art video-level and frame-level methods. We apply Average Query Expansion~\\cite{query-expansion} to VRAG, i.e. VRAG+QE, which demonstrates further performance gains. Our shot-level approach is competitive with video-level and frame-level approaches and uses 75 minutes of evaluation time. In contrast, video-level VRAG takes approximately 12 minutes for inference on EVVE, while ViSiL~\\cite{kordopatiszilos2019visil} uses 18 hours. \nWe attribute the difference in efficiency to paradigm differences between video-level and frame-level approaches: our video-level method requires $620 + 2375 = 2995$ forward passes through VRAG to encode the $n=2995$ video embeddings. In contrast, ViSiL directly outputs the similarity between every query-candidate video pair. This requires $620 \\times 2375 \\approx 1.5\\times10^6$ forward passes through ViSiL, which is orders of magnitude larger than the number of videos $n=2995$. \n\n\n\n\n\\section{Conclusion}\n\nIn this work, we introduce the Video Region Attention Graph Network (VRAG), which utilizes self-attention during region aggregation and region pooling to generate video-level embeddings for efficient video retrieval. \nSpecifically, we represent videos at a finer granularity through region-level features and encode video spatio-temporal dynamics through region-level relations. \nOur VRAG improves the state-of-the-art performance of video-level methods across multiple video retrieval datasets. We also introduce an intermediate approach between video-level and frame-level video retrieval that utilizes shots for retrieval. We demonstrate that our shot-level approach bridges the gap in performance between video-level and frame-level approaches, obtaining higher video retrieval performance at a marginal increase in computational cost over video-level approaches.\n\n\n\\section{Qualitative Results}\n\n\\subsection{T-SNE Embeddings}\n\nIn this section, we visualize our video-level representations as T-SNE embeddings for EVVE~\\cite{dataset:evve} in Figure~\\ref{fig:tsne-evve} and FIVR5K~\\cite{dataset:fivr200k, kordopatiszilos2019visil} in Figure~\\ref{fig:tsne-fivr5k}.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=1\\columnwidth]{images\/evve-final.png}\n \\caption{T-SNE embedding visualizations on EVVE. (Left) DML~\\cite{kordopatis2017dml}. (Right) Our VRAG.}\n \\label{fig:tsne-evve}\n\\end{figure}\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=1\\columnwidth]{images\/fivr5k-final.png}\n \\caption{T-SNE embedding visualizations on FIVR-5K. (Left) DML~\\cite{kordopatis2017dml}. (Right) Our VRAG.}\n \\label{fig:tsne-fivr5k}\n\\end{figure}\n\nAcross both datasets, we observe that our VRAG forms more visible and well-separated T-SNE embedding clusters compared to the existing video-level method DML~\\cite{kordopatis2017dml}.\n\n\n\\subsection{Retrieval Results}\n\nAdditionally, we include examples of retrieval results on the EVVE~\\cite{dataset:evve} dataset. In our retrieval examples, we visualize the five closest database videos to the query videos. 
We demonstrate positive results in Figures~{\\ref{fig:pos:geyser} to \\ref{fig:pos:performance}}. From Figures~\\ref{fig:pos:flood} and \\ref{fig:pos:performance}, we see that our VRAG can retrieve videos of similar events taken from different viewpoints. In Figures~\\ref{fig:pos:geyser} and \\ref{fig:pos:water-park}, VRAG retrieves videos of the same event taken from different points in time.\n\nFigures~\\ref{fig:neg:geyser} to \\ref{fig:neg:performance} demonstrate some failure cases of our method with negatively-related videos marked in red borders. In Figures~{\\ref{fig:neg:geyser}, \\ref{fig:neg:riot} and \\ref{fig:neg:performance}}, we see that our VRAG pulls negative videos that are semantically related to the query video. Specifically, Figure~\\ref{fig:neg:geyser} includes a negative video from a different geyser, Figure~\\ref{fig:neg:riot} has a negative video from another riot, and Figure~\\ref{fig:pos:performance} shows a negative video from a separate live performance. However, our VRAG may confuse videos \nin instances where distractor videos contain similar elements to query videos as positives. For example in Figure~\\ref{fig:neg:protest}, VRAG retrieves a negative video depicting a crowded sports stadium as a close positive to a query video of a protest with multiple scenes of crowds.\n\n\\begin{figure*}[h]\n \\centering\n \\includegraphics[width=1.5\\columnwidth]{images\/qualitative\/positives\/0bqBMJHjqdo_censored.jpg}\n \\caption{Positive geyser event retrieval example.}\n \\label{fig:pos:geyser}\n\\end{figure*}\n\n\\begin{figure*}[h]\n \\centering\n \\includegraphics[width=1.5\\columnwidth]{images\/qualitative\/positives\/5aiVFD2jOKE_censored.jpg}\n \\caption{Positive flooding event retrieval example.}\n \\label{fig:pos:flood}\n\\end{figure*}\n\n\\begin{figure*}[h]\n \\centering\n \\includegraphics[width=1.5\\columnwidth]{images\/qualitative\/positives\/GOg-JbebB3w.png}\n \\caption{Positive water park event retrieval example.}\n \\label{fig:pos:water-park}\n\\end{figure*}\n\n\\begin{figure*}[h]\n \\centering\n \\includegraphics[width=1.5\\columnwidth]{images\/qualitative\/positives\/H3m66ZlyIhs.png}\n \\caption{Positive live performance retrieval example.}\n \\label{fig:pos:performance}\n\\end{figure*}\n\n\\begin{figure*}[h]\n \\centering\n \\includegraphics[width=1.5\\columnwidth]{images\/qualitative\/negatives\/0Y5NdarmalI.png}\n \\caption{Negative geyser event retrieval example. Negative videos have red borders.}\n \\label{fig:neg:geyser}\n\\end{figure*}\n\n\\begin{figure*}[h]\n \\centering\n \\includegraphics[width=1.5\\columnwidth]{images\/qualitative\/negatives\/2sr4odOy33I_censored.jpg}\n \\caption{Negative protests event retrieval example. Negative videos have red borders.}\n \\label{fig:neg:protest}\n\\end{figure*}\n\n\\begin{figure*}[h]\n \\centering\n \\includegraphics[width=1.5\\columnwidth]{images\/qualitative\/negatives\/BOAU395mfJw_censored.jpg}\n \\caption{Negative riots event retrieval example. Negative videos have red borders.}\n \\label{fig:neg:riot}\n\\end{figure*}\n\n\\begin{figure*}[h]\n \\centering\n \\includegraphics[width=1.5\\columnwidth]{images\/qualitative\/negatives\/LY9lCuarBOA.png}\n \\caption{Negative live performance retrieval example. 
Negative videos have red borders.}\n \\label{fig:neg:performance}\n\\end{figure*}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\setcounter{equation}{0} Consider the linear parabolic equation\n\\begin{align}\\label{PDE0}\n\\left\\{\n\\begin{array}{ll}\n\\displaystyle\n\\partial_tu-\\sum_{i,j=1}^d\\partial_{i}\\big(a_{ij}(x)\\partial_j\nu\\big)+c(x)u =f-\\sum_{j=1}^d\\partial_jg_j\n&\\mbox{in}~~\\Omega\\times(0,T),\\\\[5pt]\n\\displaystyle \\sum_{i,j=1}^da_{ij}(x)n_i\\partial_j u=\\sum_{j=1}^dg_jn_j\n&\\mbox{on}~~\\partial\\Omega\\times(0,T),\\\\[8pt]\nu(\\cdot,0)=u^0 &\\mbox{in}~~\\Omega,\n\\end{array}\n\\right.\n\\end{align}\nwhere $\\Omega$ is a bounded smooth domain in $\\mathbb{R}^d$ $(d\\geq 2)$, $T$\nis a given positive number, $f$ and ${\\bf g}=(g_1,\\cdots,g_d)$ are\ngiven functions. The Galerkin finite element method (FEM) for the\nabove equation seeks $\\{u_h(t)\\in S_h\\}_{t> 0}$\nsatisfying the parabolic finite element equation:\n\\begin{align}\\label{FEEq0}\n\\left\\{\n\\begin{array}{ll}\n\\displaystyle \\big(\\partial_tu_h,v_h\\big)\n+\\sum_{i,j=1}^d\\big(a_{ij}\\partial_j\nu_h,\\partial_iv_h\\big)+\\big(cu_h,v_h\\big)\n=(f,v_h)+\\sum_{j=1}^d(g_j,\\partial_jv_h), ~~\\forall~v_h\\in\nS_h,\\\\[8pt]\nu_h(0)=u^0_h ,\n\\end{array}\n\\right.\n\\end{align}\nwhere $S_h$, $00}$ and\n$\\{E_h(t)=e^{-tA_h}\\}_{t>0}$ denote the semigroups generated by the\noperators $-A$ and $-A_h$, respectively.\nFrom the theory of parabolic equations, we know that\n$\\{E(t)\\}_{t>0}$ extends to an analytic semigroup on\n$C(\\overline\\Omega)$, satisfying\n\\begin{align}\n&\\|E(t)v\\|_{L^\\infty}\n+t\\|\\partial_tE(t)v\\|_{L^\\infty}\\leq C\\|v\\|_{L^\\infty}\n,\\quad\\forall~v\\in C(\\overline\\Omega) .\n\\end{align}\nIts counterpart for the discrete finite element\noperator is the analyticity of the semigroup $\\{E_h(t)\\}_{t>0}$\non $L^\\infty\\cap S_h$:\n\\begin{align}\n&\\|E_h(t)v_h\\|_{L^\\infty} +t\\|\\partial_tE_h(t)v_h\\|_{L^\\infty}\\leq\nC\\|v_h\\|_{L^\\infty} ,\\quad\\forall~v_h\\in S_h,~ \\forall~t>0.\n\\label{STLEst}\n\\end{align}\nAlong with the approach of analytic semigroup, one may reach more\nprecise analysis of the finite element solution, such as\nmaximum-norm error estimates of semi-discrete Galerkin FEMs\n\\cite{Tho, TW1,Wah}, resolvent estimates of elliptic finite element\noperators \\cite{Bak, BTW,CLT}, error analysis of fully discrete FEMs for\nparabolic equations \\cite{LN, Pal,Tho}, and the discrete maximal\n$L^p$ regularity \\cite{Gei1,Gei2}.\n\nA related topic is the space-time maximum-norm\nstability estimate for inhomogeneous\nequations ($f$ or $g_j$ may not be identically\nzero):\n\\begin{align}\n\\|u_h\\|_{L^\\infty(\\Omega\\times(0,T))}\\leq\nC_T\\|u_h^0\\|_{L^\\infty}+C_Tl_h\\|u\\|_{L^\\infty(\\Omega\\times(0,T))},\n\\quad\\forall~T>0\n. \\label{STLEst2}\n\\end{align}\nUnder certain regularity assumptions on $u$, a straightforward\napplication of the above inequality is the maximum-norm error\nestimate:\n\\begin{align}\n\\| u - u_h \\|_{L^\\infty(\\Omega\\times(0,T))} \\leq C_T\\| u^0 - u_h^0\n\\|_{L^\\infty} +C_Tl_h h^{r+1} \\| u \\|_{L^\\infty( (0,T);\nW^{r+1,\\infty})} . \\label{error}\n\\end{align}\n\n\nIn the last several decades, many efforts have been devoted to the\nstability-analyticity estimate (\\ref{STLEst}) and the space-time\nstability estimate (\\ref{STLEst2}). Schatz et. al. \\cite{STW1}\nestablished (\\ref{STLEst}) for $d=2$ and $r=1$, with constant\ncoefficients $a_{ij}$, by using a weighted-norm technique. 
Later,\nNitsche and Wheeler \\cite{NW} proved (\\ref{STLEst2}) for $d=2,3$ and\n$r\\geq 4$ with constant coefficients. Rannacher \\cite{Ran} proved\n(\\ref{STLEst})-(\\ref{STLEst2}) in convex polygons with\nconstant coefficients, and Chen \\cite{Chen} improved the results to\n$1\\leq d\\leq 5$. A more precise analysis was given by Schatz et al\n\\cite{STW2}, where they proved that \\refe{STLEst}-\\refe{STLEst2}\nhold with $l_h=1$ for $r\\geq 2$ and $l_h=\\ln(1\/h)$ for $r=1$, and\nthey showed that the logarithmic factor is necessary for $r=1$. In\n\\cite{STW2}, the proof was given under the condition that the parabolic\nGreen's function\nsatisfies \n\\begin{align}\n|\\partial_t^\\alpha\\partial_x^\\beta G(t, x, y)|\\leq\nC(t^{1\/2}+|x-y|)^{-(d+2\\alpha+|\\beta|)} e^{-\\frac{|x-y|^2}{Ct}}\n\\label{g-cond} ,~~\\forall~\\alpha\\geq 0,|\\beta|\\geq 0,\n\\end{align}\nwhich holds when the coefficients $a_{ij}(x)$\nare smooth enough \\cite{EI70}. The stability\nestimate (\\ref{STLEst}) was also studied in \\cite{Bak, TW2}\nfor the Dirichlet boundary condition and in \\cite{Cro} for\na lumped mass method. Moreover, Leykekhman \\cite{Ley}\nshowed the stability estimate (\\ref{STLEst2}) in a more general weighted norm,\nand Hansbo \\cite{Han} investigated the related $L^s\\rightarrow L^r$ stability\nestimate. Also see \\cite{TW1,Wah} for some\nworks in the one-dimensional space. Clearly, all these results were\nestablished for parabolic equations with the coefficient $a_{ij}(x)$\nbeing smooth enough. Related maximum-norm error estimates of\nGalerkin FEMs in terms of an elliptic projection and the associated elliptic Green's function\ncan be found in \\cite{BSTW,ELWZ,Lin2, LTW, Nit,Ran,Tho, Whe}.\nSome other nonlinear models were analyzed in\n\\cite{Dob2}. Again, these works were\nbased on the assumption that the coefficients $a_{ij}$ are smooth\nenough.\n\n\nIn many physical applications, the coefficients $a_{ij}$ may\nnot be smooth enough. One of examples is the\nincompressible miscible flow in porous media \\cite{Dou,LS1},\nwhere $[a_{ij}]_{i,j=1}^d$ denotes the diffusion-dispersion tensor\nwhich is Lipschitz continuous in many cases. In this case,\nthe solution is in $L^p((0,T);W^{2,q})$ for $10}|E_h(t)v_h|\\big\\|_{L^q}\\leq C_q\\|v_h\\|_{L^q},\n~~\\forall~v_h\\in S_h,~~10$, the Bochner spaces\n\\cite{Yos} $L^p((0,T);X)$ and $W^{1,p}((0,T);X)$ are equipped with\nthe norms\n\\begin{align*}\n&\\|f\\|_{L^p((0,T);X)} =\\left\\{\n\\begin{array}{ll}\n\\displaystyle\\biggl(\\int_0^T\n\\|f(t)\\|_X^pdt\\biggl)^\\frac{1}{p},\n&\n1\\leq p<\\infty\n,\\\\[10pt]\n\\displaystyle{\\rm ess\\,sup}_{t\\in(0,T)}\\|f(t)\\|_X,\n& p=\\infty,\\end{array}\n\\right.\n\\\\[8pt]\n&\\|f\\|_{W^{1,p}((0,T);X)}=\n\\|f\\|_{L^p((0,T);X)}\n+\\|\\partial_tf\\|_{L^p((0,T);X)} ,\n\\end{align*}\nand we set\n$ Q_T:=\\Omega\\times(0,T)$. 
For\nnonnegative integers $k_1$ and $k_2$, we define\n\\begin{align*}\n&\\|f\\|_{W^{k_1,k_2}_{p,q}( Q_T)}\n:=\\|f\\|_{L^p((0,T);L^{q}(\\Omega))}\n+\\|\\partial_t^{k_1}f\\|_{L^p((0,T);L^{q}(\\Omega))}\n+\\|f\\|_{L^p((0,T);W^{k_2,q}(\\Omega))} ,\n\\end{align*}\nand\n\\begin{align*}\n&\\|f\\|^{(h)}_{W^{k,0}_{p}( Q_T)}\n:=\\|f\\|_{L^p( Q_T)}\n+\\biggl(\\int_0^T\\sum_{|\\alpha|\\leq k}\\sum_{l=1}^L\n\\int_{\\tau^h_l}|\\partial^\\alpha f|^p{\\rm d} x{\\rm d} t \\biggl)^{\\frac{1}{p}} ,\n\\end{align*}\nwhere $\\tau^h_l$, $l=1,\\cdots,L$, denote elements of a quasi-uniform triangulation of $\\Omega$.\n\nFor the simplicity of notations,\nin the following sections, we\nwrite $L^p$, $W^{k,p}$, $C^{k+s}$ and\n$W^{k_1,k_2}_{p,q}$ as the abbreviations of $L^p(\\Omega)$,\n$W^{k,p}(\\Omega)$, $C^{k+s}(\\overline\\Omega)$ and\n$W^{k_1,k_2}_{p,q}( Q_T)$, respectively. We also set\n$L^p( Q_T)=\nL^p((0,T);L^p)$,\n$W^{k_1,k_2}_{p}\n=W^{k_1,k_2}_{p,p}$\nfor nonnegative integer $k_1,k_2$ and\n$1\\leq p\\leq\\infty$, and\n $L^p_h:=L^p\\cap S_h$. For any domain $Q\\subset\n Q_T$, we define $$Q^t:=\\{x\\in\\Omega:~ (x,t)\\in Q\\}$$\nand\n$$\n\\|f\\|_{L^{p,q}(Q)}:=\\biggl(\\int_0^T\\biggl(\\int_{Q^t}|f(x,t)|^q{\\rm d}\nx\\biggl)^{\\frac{p}{q}}{\\rm d} t\\biggl)^{\\frac{1}{p}}\n,\\quad\\mbox{for}~ 1\\leq p,q<\\infty ,\n$$\nand we use the abbreviations\n$$\n(\\phi,\\varphi):=\\int_\\Omega \\phi(x)\\varphi(x){\\rm d} x,\\qquad\n[u,v]:=\\iint_{ Q_T} u(x,t)v(x,t){\\rm d} x{\\rm d} t .\n$$\nWe write $w(t)=w(\\cdot,t)$ as abbreviation for any function $w$ defined on $ Q_T$.\n\n\nMoreover, we set $a(x)=[a_{ij}(x)]_{d\\times d}$ as\na coefficient matrix and define the operators\n\\begin{align*}\n&A:H^1\\rightarrow H^{-1},\\qquad~\\, A_h:S_h\\rightarrow S_h,\\\\\n&R_h:H^1\\rightarrow S_h, \\qquad~~\\, P_h:L^2\\rightarrow S_h, \\\\\n&\\overline\\nabla\\cdot:(H^1)^d\\rightarrow H^{-1},\\quad\n\\overline\\nabla_h\\cdot:(H^1)^d\\rightarrow S_h,\n\\end{align*}\nby\n\\begin{align*}\n&\\big(A\\phi,v\\big)= \\big(a\\nabla \\phi,\\nabla\nv\\big)+\\big(c\\phi,v\\big) \\qquad~~~\n\\mbox{for all~~$\\phi ,v \\in H^1$},\\\\[3pt]\n&\\big(A_h\\phi_h,v\\big)= \\big(a\\nabla \\phi_h,\\nabla\nv\\big)+\\big(c\\phi_h,v\\big) \\quad\\,\n\\mbox{for all~~$\\phi_h\\in S_h$, $v\\in S_h$},\\\\[3pt]\n&\\big(A_hR_hw,v\\big)=\\big(Aw,v\\big) \\qquad\\qquad\\qquad~~ \\mbox{for\nall~~$w \\in H^1$ and $v\\in S_h$} ,\n\\\\[3pt]\n&\\big(P_h\\phi,v\\big)=\\big(\\phi,v\\big) \\qquad\\qquad\\qquad\\qquad~~~\n\\mbox{for all~~$\\phi \\in L^2$\nand $v \\in S_h$} ,\\\\\n&\\big(\\overline\\nabla\\cdot{\\bf w},v\\big)=-\\big({\\bf w},\\nabla v\\big)\n\\qquad\\qquad\\qquad~\\, \\mbox{for all~~${\\bf w} \\in (H^1)^d$ and\n$v \\in H^1$,} \\\\\n&\\big(\\overline\\nabla_h\\cdot{\\bf w},v\\big)=-\\big({\\bf w},\\nabla\nv\\big) \\qquad\\qquad\\qquad \\mbox{for all~~${\\bf w} \\in (H^1)^d$ and $v\n\\in S_h$} .\n\\end{align*}\nClearly,\n$R_h$ is the Ritz projection operator associated to the elliptic\noperator $A$ and $P_h$ is the $L^2$ projection operator onto the finite element space,\nwhich satisfy\n\\begin{align*}\n&\\|u-P_hu\\|_{W^{m,p}} \\leq C\\|u\\|_{W^{m,p}},\\qquad\\quad\\, 1\\leq p\\leq\\infty,\\\\\n&\\|u-R_hu\\|_{W^{m,p}} \\leq Ch^{1-m}\\|u\\|_{W^{1,p}},\\quad 12$ and $q=(d+2)p\/(d+2+p)0$ with $G(0,\\cdot,x_0)=\\delta_{x_0}$}\n.\n\\end{align}\nThe corresponding {\\it regularized Green's function} $\\Gamma(t,x,x_0)$\nis defined by\n\\begin{align}\\label{GMFdef}\n&\\partial_t\\Gamma(\\cdot,\\cdot,x_0)+A\\Gamma(\\cdot,\\cdot,x_0)=0\\quad\n\\mbox{for~ $t>0$~ 
with~\n$\\Gamma(0,\\cdot,x_0)=\\widetilde\\delta_{x_0}$},\n\\end{align}\nand the {\\it discrete Green's function} $\\Gamma_h(\\cdot,\\cdot,x_0)$\nis defined as the solution of the equation\n\\begin{align}\n&\\partial_t\\Gamma_{h}(\\cdot,\\cdot,x_0)\n+ A_h\\Gamma_{h}(\\cdot,\\cdot,x_0)=0 \\label{GMhFdef}\n\\quad\\mbox{for~ $t>0$~ with~\n$\\Gamma_{h}(0,\\cdot,x_0)=P_h\\delta_{x_0}=P_h\\widetilde\\delta_{x_0}$},\n\\end{align}\nwhere $P_h$ is the $L^2$ projection onto the finite element space.\nNote that\n$\\Gamma(t,x,x_0)$ and $\\Gamma_h(t,x,x_0)$ are symmetric with respect\nto $x$ and $x_0$.\n\nBy the fundamental estimates of parabolic equations \\cite{FS}\nand from Appendix B of \\cite{Gei1}, we know that the\nGreen's function $G$ satisfies\n\\begin{align}\n&|G(t,x,y)|\\leq\nC(t^{1\/2}+|x-y|)^{-d}e^{-\\frac{|x-y|^2}{Ct}},\\label{FEstP}\\\\\n&|\\partial_tG(t,x,y)|\\leq Ct^{-d\/2-1}e^{-\\frac{|x-y|^2}{Ct}} ,\\label{FtEstP}\\\\\n&|\\partial_{tt}G(t,x,y)|\\leq Ct^{-d\/2-2}e^{-\\frac{|x-y|^2}{Ct}} .\\label{FtEstP2}\n\\end{align}\nBy estimating $\\Gamma(t,x,x_0)=\\int_\\Omega\nG(t,x,y)\\widetilde\\delta_{x_0}(y){\\rm d} y$, we see that\n(\\ref{FEstP})-(\\ref{FtEstP2}) also hold when $G$ is replaced by\n$\\Gamma$ and when $\\max(t^{1\/2},|x-y|)\\geq 2h$.\n\nFor any open subset $D\\subset\\Omega$, we set $\\overline\nD^\\partial ={\\rm int}(D)\\cup (\\overline D \\cap\n\\partial\\Omega)$. Let $S_h(D)$ denote the restriction of\nfunctions in $S_h$ to $D$, and let $S_h^0(\\overline\nD^\\partial)$ denote the functions in $S_h$ with the support in\n$\\overline D^\\partial$. For a given subset $D\\subset\\Omega$,\nwe set $D_{d} =\\{x\\in \\Omega : {\\rm dist}(x,D)\\leq d\\}$\nfor $d> 0$.\nWe denote by $I_h : W^{1,1}(\\Omega) \\rightarrow S_h$ the\noperator given in \\cite{STW2} having the following properties:\nif $d \\geq kh$, then\n$$\\| I_hv - v\\|_{W^{s,p}(D_d)}^{(h)} \\leq\nCh^{l-s}\\|v\\|_{W^{l,p}(D_{2d})},\n\\quad~\\mbox{for~ $0\\leq s\\leq l\\leq r $ ~and~ $ 1\\leq p\\leq\n\\infty$,}$$\nand if supp$(v)\\subset \\overline D ^\\partial_d$, then\n$I_hv\\in S^0_h(\\overline D_{2d}^\\partial)$; also, if\n$v|_{D_d}\\in S_h(D_d)$, then $I_hv = v $ on $D$ and the\nbound above may be replaced by\n$Ch^{l-s}\\|v\\|_{W^{l,p}(D_{2d}\\backslash D)}$.\n\nFor any integer $j$, we define $d_j=2^{-j}$.\nLet $J_1=1$ and $J_0=0$, and let $J_*$ be an integer\nsatisfying $2^{-J_*}= C_*h$ with $C_*\\geq 16$ to\nbe determined later, thus $J_*=\\log_2[1\/(C_*h)]\\leq 2\\ln(2+1\/h)$.\nFor the given constant $C_*$, we have $J_10$\nsuch that when $hd$ and $\\alpha>0$ such that \nfor any integer $J_0\\leq j\\leq J_*$, we have\n\\begin{align}\n&d_j^{-4}\\vertiii{\\Gamma(\\cdot,\\cdot,x_0)}_{2,Q_j(x_0)}\n+d_j^{-2}\\vertiii{\\partial_{t}\n\\Gamma(\\cdot,\\cdot,x_0)}_{2,Q_j(x_0)} \\nonumber\\\\\n&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\n+\\vertiii{\\partial_{tt}\\Gamma(\\cdot,\\cdot,x_0)}_{2,Q_j(x_0)}\n\\leq Cd_j^{-d\/2-5}, \\label{GFest01}\\\\\n&\\|\\partial_t\\partial_{x_i}G(\\cdot,\\cdot,x_0)\n\\|_{L^\\infty(Q_j(x_0))}+\\|\\partial_t\n\\partial_{x_i}\\Gamma(\\cdot,\\cdot,x_0)\\|_{L^\\infty(Q_j(x_0))}\\leq\nCd_j^{-d -3},\\label{GFest02}\\\\\n&\\|\\partial_{x_i x_l}G(\\cdot,\\cdot,x_0) \\|_{L^{\\infty,p_1}(\\cup_{k\\leq\nj}Q_k(x_0))}\\leq Cd_j^{-d-2+d\/p_1} \\label{GFest03}\n,\\\\\n&\\|\\partial_t\\Gamma(\\cdot,\\cdot,x_0)\\|_{L^1(\\Omega\\times(T,\\infty))}\n+\\|t\\partial_{tt}\\Gamma(\\cdot,\\cdot,x_0)\\|_{L^1(\\Omega\\times(T,\\infty))}\\leq\nC ,\\label{GFest0423}\\\\\n&d_j^{-\\alpha}\\|\\partial_{x_i 
y_l}\nG(\\cdot,\\cdot,x_0)\\|_{L^\\infty(Q_k(x_0))} +\\|\\partial_{x_i\ny_l}G(\\cdot,\\cdot,x_0)\\|_{C^{\\alpha,\\alpha\/2}(\\overline\nQ_k(x_0))}\\leq Cd_j^{-d -2-\\alpha}\\label{GFest05} ,\\\\\n&\\|\\Gamma_h(1,\\cdot,x_0)\\|_{L^2}+\\|\\partial_t\\Gamma_h(1,\\cdot,x_0)\\|_{L^2}\n+\\|\\partial_{tt}\\Gamma_h(1,\\cdot,x_0)\\|_{L^2}\\leq\nC\\|\\Gamma_h(\\cdot,\\cdot,x_0)\\|_{L^2(\\Omega\\times(1\/2,1])} ,\n\\label{dlky60}\n\\end{align}\nfor $i,l=1,2,\\cdots,d$.\n}\n\\end{lemma}\n\\noindent{\\it Proof}~~~\nFor the given $x_0$ and $j$, we define a coordinate transformation\n$x-x_0=d_j\\widetilde x$ and $t=d_j^2\\widetilde t$, and $\\widetilde\nG(\\widetilde t,\\widetilde x):=G(t,x,x_0)$, $\\widetilde\nG_{y_l} (\\widetilde t,\\widetilde x):=\\partial_{y_l} G(t,x,y)|_{y=x_0}$, $\\widetilde\na(\\widetilde x):=a(x)$, \n$\\widetilde\nc(\\widetilde x):=c(x)$,\n$\\widetilde Q_k=\\{(\\widetilde x,\\widetilde\nt)\\in\\mathbb{R}^{d+1}:(x,t)\\in Q_k\\}$, $\\widetilde Q_k'=\\widetilde\nQ_{k-1}\\cup \\widetilde Q_k\\cup \\widetilde Q_{k+1}$, \n$\\widetilde \\Omega_k=\\{(\\widetilde x,\\widetilde\nt)\\in\\mathbb{R}^{d+1}:(x,t)\\in \\Omega_k\\}$, $\\widetilde \\Omega_k'=\\widetilde\n\\Omega_{k-1}\\cup \\widetilde \\Omega_k\\cup \\widetilde \\Omega_{k+1}$,\n$\\widetilde \\Omega=\\{\\widetilde x\\in\\mathbb{R}^d: x \\in\n\\Omega\\}$, and $\\widetilde Q_{\\widetilde T}=\\{(\\widetilde\nx,\\widetilde t)\\in\\mathbb{R}^{d+1}: (x,t) \\in Q_T\\}$.\nThen $\\widetilde G(\\widetilde t,\\widetilde\nx)$ and $\\widetilde G_{y_l} (\\widetilde t,\\widetilde\nx)$ are solutions of the equations\n\\begin{align}\n&\\partial_{\\widetilde t}\\widetilde G-\\nabla_{\\widetilde\nx}\\cdot(\\widetilde a\\nabla_{\\widetilde x}\n\\widetilde G)+\\widetilde c\\widetilde G=0 \\quad\\mbox{in}~~\\widetilde Q_{j}', \\label{GFest06}\\\\\n&\\partial_{\\widetilde t}\\widetilde G_{y_l}-\\nabla_{\\widetilde\nx}\\cdot(\\widetilde a\\nabla_{\\widetilde x}\n\\widetilde G_{y_l})\n+\\widetilde c\\widetilde G_{y_l}=0 \\quad\\mbox{in}~~\\widetilde Q_{j}' .\n\\label{GFest066}\n\\end{align}\n\nBy the estimates of parabolic equations \n(see Lemma A.1 in Appendix), \nwe have\n\\begin{align}\n&|\\!|\\!|\\partial_{\\widetilde t}\\widetilde\nG|\\!|\\!|_{\\widetilde Q_j}+|\\!|\\!|\\widetilde\nG|\\!|\\!|_{2,\\widetilde Q_j}\n+|\\!|\\!|\\partial_{\\widetilde t}\\widetilde\nG|\\!|\\!|_{2,\\widetilde Q_j}\n+|\\!|\\!|\\partial_{\\widetilde t\\widetilde t}\\widetilde\nG|\\!|\\!|_{2,\\widetilde Q_j}\n\\leq C|\\!|\\!|\\widetilde G|\\!|\\!|_{\\widetilde Q_j'} , \\label{dkjq}\\\\\n&\\|\\partial_{\\widetilde x_i}\\widetilde\nG\\|_{L^\\infty(\\widetilde Q_j)}\n+\\|\\partial_{\\widetilde x_i}\\widetilde\nG\\|_{C^{\\alpha,\\alpha\/2}(\\overline{\\widetilde Q}_j)}\n+\\|\\partial_{\\widetilde x_i}\\partial_{\\widetilde x_l}\\widetilde\nG\\|_{L^{\\infty,p_1}(\\widetilde Q_j)}\n \\leq C|\\!|\\!|\\widetilde G|\\!|\\!|_{\\widetilde Q_j'}\n\\\\\n&\\|\\partial_{\\widetilde x_i}\\widetilde\nG_{y_l}\\|_{L^\\infty(\\widetilde Q_j)} \n+\\|\\partial_{\\widetilde x_i}\\widetilde\nG_{y_l}\\|_{C^{\\alpha,\\alpha\/2}(\\overline{\\widetilde Q}_j)}\n\\leq C|\\!|\\!|\\widetilde G_{y_l}|\\!|\\!|_{\\widetilde Q_j'} .\n\\label{dkjq3}\n\\end{align}\n \nTransforming back to the $(x,t)$ coordinates, \\refe{dkjq}-\\refe{dkjq3}\nreduce to\n\\begin{align*}\n&d_j^{-4}\\vertiii{ G}_{2,Q_j}\n+d_j^{-2}\\vertiii{\\partial_{t}\nG}_{2,Q_j}+\\vertiii{\\partial_{tt}G}_{2,Q_j}\\leq\nCd_j^{-6}\\vertiii{ G}_{Q_j'} ,\\\\\n& d_j^{-2}\\|\\partial_{x_i}G\\|_{L^\\infty( 
Q_j)}\n+\\|\\partial_t\\partial_{x_i}G\\|_{L^\\infty( Q_j)}\\leq Cd_j^{-d\/2\n-4}\\vertiii{ G}_{Q_j'},\\\\\n&\n\\|\\partial_{x_i}\\partial_{x_l}G\\|_{L^{\\infty,p_1}( Q_j)}\\leq Cd_j^{-d\/2\n-3+d\/p_1}\\vertiii{ G }_{Q_j'} ,\\\\\n& d_j^{-\\alpha}\\|\\partial_{x_i}\n\\partial_{y_l}G\\|_{L^\\infty(Q_j)}\n+\\|\\partial_{x_i}\\partial_{y_l}G\n\\|_{C^{\\alpha,\\alpha\/2}(\\overline\nQ_j)} \\leq Cd_j^{-d\/2 -2-\\alpha}\n\\vertiii{ \\partial_{y_l}G }_{Q_j'}\\leq Cd_j^{-1-\\alpha}\n\\|\\partial_{y_l}G \\|_{L^\\infty(Q_j')}.\n\\end{align*}\nFrom the Green function estimate (\\ref{FEstP}), we see that\n$\\vertiii{ G}_{Q_j'}\\leq Cd_j^{-d\/2+1}$ and so\n\\begin{align}\n&d_j^{-4}\\vertiii{ G}_{2,Q_j}\n+d_j^{-2}\\vertiii{\\partial_{t}\nG}_{2,Q_j}+\\vertiii{\\partial_{tt}G}_{2,Q_j}\n\\leq Cd_j^{-d\/2-5}, \\label{GFest07}\\\\\n&d_j^{-2}\\|\\partial_{x_i}G\\|_{L^\\infty(Q_j)}\n+\\|\\partial_t\\partial_{x_i}G\\|_{L^\\infty(Q_j)}\\leq Cd_j^{-d\n-3},\\label{GFes03}\\\\\n&\\|\\partial_{x_i x_l}G\\|_{L^{\\infty,p_1}(Q_j)}\\leq Cd_j^{-d\n-2+d\/p_1} \\label{Gsnn} ,\\\\\n&d_j^{-\\alpha}\\|\\partial_{x_i}\\partial_{y_l}\nG\\|_{L^\\infty(Q_j)}+\\|\\partial_{x_i}\\partial_{y_l}\nG\\|_{C^{\\alpha,\\alpha\/2}(\\overline Q_j)} \\leq Cd_j^{\n-1-\\alpha}\\|\\partial_{y_l}G \\|_{L^\\infty(Q_j')}\n\\leq\nCd_j^{-d-2-\\alpha} , \\label{Gmm2}\n\\end{align}\nwhere we have used (\\ref{GFes03}) in deriving (\\ref{Gmm2}).\nClearly, (\\ref{Gsnn}) further implies that\n\\begin{align}\\label{fd00}\n&\\|\\partial_{x_i x_l}G\\|_{L^{\\infty,p_1}(\\cup_{k\\leq j}Q_k)}\\leq\nCd_j^{-d -2+d\/p_1} .\n\\end{align}\n\n\n\nBy estimating $\\Gamma(t,x)=\\int_\\Omega\nG(t,x,y)\\widetilde\\delta_{x_0}(y){\\rm d} y$, we can see that the\nestimates (\\ref{GFest07})-(\\ref{fd00}) also hold when $G$ is\nreplaced by $\\Gamma$.\n\nFrom the inequalities\n(\\ref{FtEstP})-(\\ref{FtEstP2}) we derive that\n\\begin{align}\\label{dlk80}\n&\\|\\partial_tG(t,\\cdot,x_0)\\|_{L^\\infty}\n+\\|\\partial_t\\Gamma (t,\\cdot,x_0)\\|_{L^\\infty}\n\\nonumber\\\\\n&+\\|t\\partial_{tt}G(t,\\cdot,x_0)\\|_{L^\\infty}\n+\\|t\\partial_{tt}\\Gamma (t,\\cdot,x_0)\\|_{L^\\infty}\\leq\nCt^{-d\/2-1}~~\\mbox{for}~~t\\geq 1\/4,\n\\end{align}\nwhich implies (\\ref{GFest0423}).\n\nFinally, we note that the inequality (\\ref{dlky60}) follows\nfrom basic energy estimates.\n\nThe proof of Lemma \\ref{GFEst1} is complete.\n~\\vrule height8pt width 5pt depth 0pt\\bigskip\n\n\n\n\n\n\\subsection{Proof of Lemma \\ref{GMhEst}}\n\\label{dka7}\n\n\\begin{lemma}\\label{LocEst}\n{\\it\nSuppose that $z(\\cdot,t)\\in H^1$, $z_t(\\cdot,t)\\in L^2$ and $z_h(\\cdot,t)\\in S_h$\nfor each fixed $t\\in[0,T]$, and suppose that $e=z_h-z$ satisfies the equation\n$$\n(e_t,\\chi)+(a\\nabla e,\\nabla \\chi)+(c\ne,\\chi)=0,\\quad\\forall~\\chi\\in S_h,~t>0 ,\n$$\nwith $z(\\cdot,0)=0$ and $z_h(\\cdot,0)=z_{0h}$ on $\\Omega_j'$.\nThen for any $q>0$ there exists a constant $C_q$ such that\n\\begin{align*}\n\\vertiii{e_t}_{Q_j} + d_j^{-1}\\vertiii{e}_{1,Q_j}\n\\leq\nC_q\\big(I_j(z_{0h})+X_j(I_hz-z)\n+H_j(e)+d_j^{-2}\\vertiii{e}_{Q_j'}\\big),\n\\end{align*}\nwhere\n\\begin{align*}\n&I_j(z_{0h})=\\|z_{0h}\\|_{1,\\Omega_j'}\n+d_j^{-1}\\|z_{0h}\\|_{\\Omega_j'},\\\\\n&X_j(I_hz-z)=d_j\\vertiii{\\partial_t(I_hz-z)}_{1,Q_j'}\n+\\vertiii{\\partial_t(I_hz-z)}_{Q_j'}\n+d_j^{-1}\\vertiii{I_hz-z}_{1,Q_j'}+\nd_j^{-2}\\vertiii{I_hz-z}_{Q_j'},\\\\\n&H_j(e)=(h\/d_j)^q\\big(\\vertiii{e_t}_{Q_j'}\n+d_j^{-1}\\vertiii{e}_{1,Q_j'}\\big) .\n\\end{align*}\n}\n\\end{lemma}\n\nThe above lemma was proved in \\cite{STW2} (Section 5 and Section 
6) only for\nparabolic equations with smooth coefficients. However, we can see from the proof\nthat the lemma still holds when $a_{ij}\\in W^{1,\\infty}(\\Omega)$ and\n$c\\in L^\\infty(\\Omega)$ satisfy (\\ref{coeffcond}).\nMoreover, for parabolic equations with smooth coefficients,\nLemma \\ref{GMhEst} was proved in \\cite{STW2} by applying Lemma\n\\ref{LocEst} with the additional assumption (\\ref{g-cond}). Here, we\nshall prove Lemma \\ref{GMhEst} directly from Lemma \\ref{GFEst1} and\nLemma \\ref{LocEst}.\n\nFirst, we prove \\refe{FFEst1}. \nLet $\\mu_j=[h\\ln(2+1\/h)]^{-1}+d_j^{-1}$ and define\n\\begin{align}\\label{KdKj0}\nK_j=\n\\vertiii{\\partial_tF}_{Q_j}+\nd_j^2\\vertiii{\\partial_{tt}F}_{Q_j}\n+\n\\mu_j\n\\vertiii{F}_{1,Q_j} ,\n\\end{align}\nand\n\\begin{align}\\label{KdKj}\n{\\cal K}:=\\sum_{j}d_j^{1+d\/2}K_j.\n\\end{align}\nFrom Section 4 of \\cite{STW2} we see that (\\ref{FFEst1}) holds if we can prove that ${\\cal K}\\leq C$ for some positive constant $C$ which is independent of $h$, $J_*$ and $C_*$.\n\nTo prove the boundedness of ${\\cal K}$, we set $e=F$ and $e=F_t$\nin Lemma \\ref{LocEst}, respectively. Since in either case $z(0)=0$ on $\\Omega_j'$,\nwe obtain\n\\begin{align}\nK_j\\leq\nC(\\widehat{I_j}+\\widehat{X_j}+\\widehat{H_j}+\\mu_jd^{-1}_j\\vertiii{\nF }_{Q'_j} ),\n\\end{align}\nwhere, by using the exponential decay of \n$|P_h\\widetilde\\delta_{x_0}(y)|\\leq Ch^{-d}e^{-C|y-x_0|\/h}$ \n\\cite{STW2}, we have\n\\begin{align*}\n&\\widehat{I_j}=d_j^2\\|F_t(0)\\|_{1,\\Omega_j'}\n+d_j\\|F_t(0)\\|_{\\Omega_j'}\n+d_j\\mu_j\\|F(0)\\|_{1,\\Omega_j'}+\\mu_j\\|F(0)\\|_{\\Omega_j'}\n\\leq Ch^{-1-d\/2}e^{-Cd_j\/h},\\\\\n&\\widehat{X_j}=\nd_j^3\\vertiii{(I_h\\Gamma-\\Gamma)_{tt}}_{1,Q_j'}\n+d_j^2\\vertiii{(I_h\\Gamma-\\Gamma)_{tt}}_{Q_j'}\n+\\mu_jd_j^2\\vertiii{(I_h\\Gamma-\\Gamma)_t}_{1,Q_j'}\n+\\mu_jd_j\\vertiii{(I_h\\Gamma-\\Gamma)_t}_{Q_j'}\\\\\n&\\qquad~\n+\\mu_j\\vertiii{I_h\\Gamma-\\Gamma}_{1,Q_j'}\n+\\mu_jd_j^{-1}\\vertiii{I_h\\Gamma-\\Gamma}_{Q_j'}\\\\\n&\\quad~\n\\leq (d_j^3h+d_j^2h^2)\\vertiii{\\partial_{tt}\\Gamma}_{2,Q_j''}\n+(\\mu_jd_j^2h+\\mu_jd_jh^2)\\vertiii{\\partial_{t}\\Gamma}_{2,Q_j''}\n+(\\mu_jh+\\mu_jd_j^{-1}h^2)\\vertiii{\\Gamma}_{2,Q_j''}\\\\\n&\\quad~\n\\leq C hd_j^{-d\/2-2}+C(\n[\\ln(2+1\/h)]^{-1}+h\/d_j)d_j^{-d\/2-1},\\\\\n&\\widehat{H_j}=\\big(h\/d_j\\big)^q \\big(d_j^2\\vertiii{F_{tt}}_{Q_j'}\n+d_j\\vertiii{F_{t}}_{1,Q_j'}\n+\\mu_jd_j\\vertiii{F_t}_{Q_j'}+\\mu_j\\vertiii{F}_{1,Q_j'}\n\\big)\\\\\n&\\quad~\n\\leq \\big(h\/d_j\\big)^q\n\\big(d_j^2\\vertiii{F_{tt}}_{Q_T}+d_j\\vertiii{F_{t}}_{1,Q_T}\n+\\mu_jd_j\\vertiii{F_t}_{Q_T}+\\mu_j\\vertiii{F}_{1,Q_T}\n\\big).\n\\end{align*}\nThe last term $\\widehat{H_j}$ was estimated in \\cite{STW2} via\nenergy estimates, with $\\sum_{j}d_j^{1+d\/2}\\widehat{H_j}\\leq C$.\nTherefore,\n\\begin{align}\\label{dlj6}\n{\\cal K}=\\sum_{j}d_j^{1+d\/2}K_j\\leq\nC+C\\sum_{j}d_j^{d\/2}\\mu_j\\vertiii{F}_{Q_j'} .\n\\end{align}\n\nTo estimate $\\vertiii{F}_{Q_j'}$, we apply a duality argument.\nLet $w$ be the solution of the backward parabolic equation\n$$\n-\\partial_tw+Aw=v\\quad\\mbox{with}~~w(T)=0,\n$$\nwhere $v$ is a function which is supported in $Q_j'$ and\n$\\vertiii{v}_{Q_j'}=1$. 
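The reason for introducing the dual solution $w$ is the following elementary duality identity (recorded here as a brief intermediate step; we assume, as in the rest of this section, that $\vertiii{\cdot}_{Q_j'}$ denotes the space-time $L^2(Q_j')$ norm and that $[\cdot,\cdot]$ is the space-time inner product appearing in \refe{dka6} below):\n
\begin{align*}\n\vertiii{F}_{Q_j'}=\sup\Big\{[F,v]:~ v ~\mbox{is supported in}~ Q_j'\n~\mbox{and}~ \vertiii{v}_{Q_j'}\leq 1\Big\} ,\n\end{align*}\n
so that it suffices to bound $[F,v]$ for every such $v$; this is done by testing the backward equation with $F$.\n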
Multiplying the above equation by $F$,\nwith integration by parts we get\n
\begin{align}\label{dka6}\n[F,v]=(F(0),w(0))+[F_t,w]+\sum_{i,j=1}^d[a_{ij}\partial_j\nF,\partial_i w]+[cF,w],\n\end{align}\n
where\n\begin{align*}\n(F(0),w(0))&=(P_h\widetilde\delta_{x_0}-\widetilde\delta_{x_0},w(0))\\\n
&=(P_h\widetilde\delta_{x_0}-\widetilde\delta_{x_0},w(0)-I_hw(0))\\\n
&=\n(P_h\widetilde\delta_{x_0},w(0)-I_hw(0))_{\Omega_j''}\n+(P_h\widetilde\delta_{x_0}-\widetilde\delta_{x_0},\nw(0)-I_hw(0))_{(\Omega_j'')^c}\\\n
&:=I_1+I_2 .\n\end{align*}\n
Since\n$|P_h\widetilde\delta_{x_0}(y)|\leq Ch^{-d}e^{-C|y-x_0|\/h}$ \n\cite{STW2}, we derive\nthat\n
\begin{align}\n&|I_1|\leq\nCh\|P_h\widetilde\delta_{x_0}\|_{L^2(\Omega_j'')}\|w(0)\|_{H^1(\Omega)}\leq\nCd_j^{d\/2}h^{-d+1}e^{-Cd_j\/h}\vertiii{v}_{Q_j'}\leq\nCh^{2}d_j^{-d\/2 -1}, \label{SF2}\\\n
&|I_2|\leq C\|P_h\widetilde\delta_{x_0}-\n\widetilde\delta_{x_0}\|_{L^{p_1'}}\n\|w(0)-I_hw(0)\|_{L^{p_1}((\Omega_j'')^c)}\leq\nCh^{2-d\/p_1}\|w(0)\|_{W^{2,p_1}((\Omega_j'')^c)} .\n\label{SF22}\n\end{align}\n
We proceed to estimate $\|w(0)\|_{W^{2,p_1}((\Omega_j'')^c)}$.\nLet $D_j$ be a set containing $(\Omega_j'')^c$ whose\ndistance to $\Omega_j'$ is larger than $C^{-1}d_j$. Since\n
$$\n\partial_{x_i}\partial_{x_j}w(x,0)\n=\int_0^{T}\int_{\Omega}\n\partial_{x_i}\partial_{x_j}G(s,x,y)v(y,s){\rm d} y{\rm d} s ,\n$$\n
and since\n$$\n|x-y|+s^{1\/2}\geq C_1^{-1}d_j \quad\mbox{for $x\in D_j$ and $(y,s)\in\nQ_j'$}\n$$\n
for some positive constant $C_1$, by taking the $L^{p_1}(D_j)$ norm with\nrespect to $x$ and using (\ref{GFest03}) we\nfurther derive that\n
\begin{align}\n\|\partial_{x_i}\partial_{x_j}w(0)\|_{L^{p_1}(D_j)}\n&\leq C\sup_{y\in\Omega}\|\partial_{x_i}\partial_{x_j}\nG(\cdot,\cdot,y)\|_{L^{\infty,p_1}(\cup_{k\leq\nj+\log_2C_1}Q_k(y))}\|v\|_{L^{1}(Q_j')}\nonumber\\\n
&\leq C d_j^{-d -2+d\/p_1}\|v\|_{L^{1}(Q_j')}\n \nonumber\\\n
&\leq C d_j^{-d\/2 -1+d\/p_1}\vertiii{v}_{Q_j'} \nonumber\\\n
&=C d_j^{-d\/2 -1+d\/p_1} . 
\label{SF3}\n\end{align}\n
From (\ref{SF2})-(\ref{SF3}), we see that the \nfirst term on the right-hand side of \refe{dka6} is bounded by\n
\begin{align}\n|(F(0),w(0))| \leq Ch^{2}d_j^{-d\/2 -1}+Ch^{2}d_j^{-d\/2\n-1}(h\/d_j)^{-d\/p_1} \leq Ch^{2}d_j^{-d\/2 -1}(h\/d_j)^{-d\/p_1} ,\n\label{f69}\n\end{align}\n
and the remaining terms are bounded by\n
\begin{align}\label{sd80}\n&[F_t,w]+\sum_{i,j=1}^d[a_{ij}\partial_j F,\partial_iw]+[cF,w]\nonumber\\\n
&=[F_t,w-I_hw]+\sum_{i,j=1}^d[a_{ij}\partial_j F,\partial_i\n(w-I_hw)] +[cF,w-I_hw]\nonumber\\\n
&\leq\n\sum_{*,i}C(h^2\vertiii{F_t}_{Q_i}+h\vertiii{F}_{1,Q_i})\vertiii{w}_{2,Q_i'}\n.\n\end{align}\n
To estimate $\vertiii{w}_{2,Q_i'}$ we consider the expression\n
$$\n\partial_{x_i}\partial_{x_j}w(x,t)\n=\int_0^{T}\int_{\Omega}\n\partial_{x_i}\partial_{x_j}G(s-t,x,y)v(y,s)1_{s>t}\,{\rm d} y{\rm d} s .\n$$\n
If $i\leq j-2$ (so that $d_i>d_j$), then $w(t)=0$ for\n$t>d_j^2$ (because $v$ is supported in $Q_j$), $|x-y|\sim d_i$\nand $s-t\in(0,d_i^2)$ for $t< d_j^2$, $(x,t)\in Q_i$ and\n$(y,s)\in Q_j'$. Hence we obtain\n
\begin{align*}\n\vertiii{\partial_{x_i}\partial_{x_j}w}_{Q_i'}\n\leq\sup_{y}\vertiii{\partial_{x_i}\partial_{x_j}G(\cdot,\cdot,y)}_{Q_i(y)}\n\|v\|_{L^1(Q_j)}\leq\nCd_i^{-d\/2-1}d_j^{d\/2+1}\vertiii{v}_{Q_j}\leq\nC(d_j\/d_i)^{d\/2+1} .\n\end{align*}\n
If $i\geq j+2$ (so that $d_i\leq d_j$), then\n$\max(|s-t|^{1\/2},|x-y|)\geq d_{j+2}$ for $(x,t)\in Q_i$, thus for\n$1\/2=1\/\bar p_1+1\/p_1$ we have\n
\begin{align*}\n\vertiii{\partial_{x_i}\partial_{x_j}w}_{Q_i'}\n&=\sup_{(y,s)\in Q_T}\vertiii{\partial_{x_i}\partial_{x_j}G(s-\cdot,\cdot,y)\n1_{\cup_{k\leq j+2}Q_{k}(y)}}_{Q_i'}\n\|v\|_{L^1(Q_j')}\\\n
&\leq Cd_i^{ 1+d \/\bar p_1}\sup_{y}\|\partial_{x_i}\n\partial_{x_j}G(\cdot,\cdot,y)\|_{L^{\infty,p_1}(\cup_{k\leq j+2}Q_{k}'(y))}\n\vertiii{v}_{Q_j'}d_j^{d\/2+1}\n\\\n
&\leq Cd_i^{1+d \/\bar p_1}d_j^{-d -2+d\/p_1}d_j^{d\/2+1}\\[5pt]\n
&= C(d_i\/d_j)^{1+d\/2-d\/p_1} .\n\end{align*}\n
If $|i-j|\leq 1$, then by applying the standard energy estimate we get\n$\vertiii{w}_{2,Q_T}\leq C\vertiii{v}_{Q_T}=C$.\n
Combining the three cases, we have proved\n$$\vertiii{w}_{2,Q_i'}\leq C\min\big(d_i\/d_j,d_j\/d_i\big)^{1+d\/2-d\/p_1}:=C m_{ij}.$$\n
Substituting \refe{f69}-\refe{sd80} into (\ref{dka6}) gives \n
\begin{align}\n\vertiii{F}_{Q_j'}\leq Ch^2d_j^{-d\/2-1}\n(h\/d_j)^{-d\/p_1}+C\sum_{*,i}m_{ij}\n(h^2\vertiii{F_t}_{Q_i}\n+h\vertiii{F}_{1,Q_i}) .\n\end{align}\n
By noting that $p_1>d$, \refe{dlj6} reduces to\n
\begin{align*}\n{\cal K}&\leq C+C\sum_j (h\/d_j)^{1-d\/p_1}+C\sum_j d_j^{d\/2}\n\mu_j \sum_{*,i}m_{ij}\big(h^2\vertiii{F_t}_{Q_i}\n+h\vertiii{F}_{1,Q_i} \big)\\\n
&\leq\nC+C\sum_{*,i}\left(h^2\vertiii{F_t}_{Q_i}+h\vertiii{F}_{1,Q_i}\n\right)\sum_jd^{d\/2}_j\mu_jm_{ij}\\\n
&\leq\nC+C\sum_{*,i}\left(h^2\vertiii{F_t}_{Q_i}+h\vertiii{F}_{1,Q_i}\n\right)\nd^{1+d\/2}_i\mu_id^{-1}_i\\\n
&\leq C+\left(h\vertiii{F_t}_{Q_*}\n+\vertiii{F}_{1,Q_*} \right)d_{J_*}^{d\/2}\n\big(1\/\ln(2+1\/h)+h\/d_{J_*}\big)\\\n
&~~~+C\sum_id^{1+d\/2}_i\n\left(\vertiii{F_t}_{Q_i}+\mu_i\vertiii{F}_{1,Q_i}\right\n)\left(\frac{h}{d_i}\right)\\\n
&\leq\nC+CC^{d\/2}_*+C\sum_id^{1+d\/2}_iK_i\left(\frac{h}{d_i}\right)\\\n
&\leq C_2+C_2C^{d\/2}_*+C_2C^{-1}_*{\cal K}\n\end{align*}\n
for some positive constant $C_2$. \nBy choosing $C_*=16+2C_2$, the above inequality shows\nthat ${\cal K}$ is bounded. 
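More precisely, this is a short absorption argument. Assuming, as the definitions (\ref{KdKj0})-(\ref{KdKj}) indicate, that only the finitely many dyadic indices $J_0\leq j\leq J_*$ enter the sum, ${\cal K}$ is finite for each fixed $h$, and the choice $C_*=16+2C_2$ gives $C_2C_*^{-1}\leq 1\/2$; hence the last inequality can be absorbed:\n
\begin{align*}\n{\cal K}\leq C_2+C_2C_*^{d\/2}+\tfrac{1}{2}{\cal K}\n\quad\Longrightarrow\quad\n{\cal K}\leq 2C_2\big(1+C_*^{d\/2}\big) ,\n\end{align*}\n
with a right-hand side independent of $h$ and $J_*$.\n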
As we have mentioned,\nthe boundedness of ${\cal K}$ implies (\ref{FFEst1}).\n\n
Next, we prove \refe{FFEst2}. From the definition of ${\cal K}$ in (\ref{KdKj0})-(\ref{KdKj}),\nwe further derive that\n
\begin{align*}\n\vertiii{\partial_tF}_{L^2(\Omega\times(1\/4,1))}+\n \vertiii{\partial_{tt}F}_{L^2(\Omega\times(1\/4,1))} \leq C .\n\end{align*}\n
The above inequality and (\ref{dlk80}) imply that\n
\begin{align*}\n\vertiii{\partial_t\Gamma_h}_{L^2(\Omega\times(1\/4,1))}+\n \vertiii{\partial_{tt}\Gamma_h}_{L^2(\Omega\times(1\/4,1))} \leq C ,\n\end{align*}\n
which together with (\ref{dlky60}) gives\n
\begin{align*}\n\|\partial_t\Gamma_h(1,\cdot,x_0)\|_{L^2}+\n\|\partial_{tt}\Gamma_h(1,\cdot,x_0)\|_{L^2} \leq C .\n\end{align*}\n
Differentiating the equation (\ref{GMhFdef}) with respect to $t$ and\nmultiplying the result by $\partial_t\Gamma_h$, we get\n
\begin{align*}\n&\frac{{\rm d}}{{\rm d} t}\|\partial_t\Gamma_h(t,\cdot,x_0)\|_{L^2}^2\n+c_0\|\partial_t\Gamma_h(t,\cdot,x_0)\|_{L^2}^2 \\\n
&\leq \frac{{\rm d}}{{\rm d} t}\|\partial_t\Gamma_h(t,\cdot,x_0)\|_{L^2}^2\n+(A_h\partial_t\Gamma_h(t,\cdot,x_0),\partial_t\Gamma_h(t,\cdot,x_0)) \\\n
&= 0\n\end{align*}\n
for $t\geq 1$, which further gives\n
\begin{align*}\n\|\partial_t\Gamma_h(t,\cdot,x_0)\|_{L^2}^2\leq\ne^{-c_0(t-1)}\|\partial_t\Gamma_h(1,\cdot,x_0)\|_{L^2}^2\n\leq Ce^{-c_0(t-1)} .\n\end{align*}\n
Similarly, we can prove that\n
\begin{align*}\n\|\partial_{tt}\Gamma_h(t,\cdot,x_0)\|_{L^2}^2\n\leq Ce^{-c_0(t-1)} .\n\end{align*}\n
From (\ref{FFEst1}), (\ref{GFest0423})\nand the last two inequalities, we derive (\ref{FFEst2}) for the case\n$h<h_0$. This completes the proof of Lemma \ref{GMhEst}.\n~\vrule height8pt width 5pt depth 0pt\bigskip\n\n
\subsection{Proof of (\ref{STLEst})-(\ref{STLEst2})}\n\n
When $h<h_0$, the case $T=1$ of (\ref{STLEst})-(\ref{STLEst2}) follows from the estimates of the\ndiscrete Green's function $\Gamma_h$ established in Lemma \ref{GMhEst}.\nThe case $T>1$ follows from the case $T=1$ by iterations:\n
\begin{align*}\n\|u_h\|_{L^\infty(\Omega\times(k,k+1])}\leq\nC\|u_h\|_{L^\infty(\Omega\times(k-1,k])}+Cl_h\|u\|_{L^\infty(\Omega\times(0,T))},\n\quad\forall~k\geq 1.\n\end{align*}\n\n
When $h\geq h_0$ and $f\equiv g_j\equiv 0$, the standard energy\nestimates of (\ref{FEEq0}) give\n
\begin{align*}\n\|u_h(t)\|_{L^2}+t\|\partial_tu_h(t)\|_{L^2}\leq C\|u_h(0)\|_{L^2} .\n\end{align*}\n
By using an inverse inequality, we further derive that\n
\begin{align*}\n\|u_h(t)\|_{L^\infty}+t\|\partial_tu_h(t)\|_{L^\infty}\n\leq Ch_0^{-d\/2}(\|u_h(t)\|_{L^2}+t\|\partial_tu_h(t)\|_{L^2})\n\leq C\|u_h(0)\|_{L^2} \leq C\|u_h(0)\|_{L^\infty},\n\end{align*}\n
which implies (\ref{STLEst}).\n\n
When $h\geq h_0$ while $f$ or $g_j$ may not be identically zero, we decompose\nthe solution of (\ref{FEEq0}) as\n$u_h=\widetilde u_h+v_h$, where $\widetilde u_h$ and $v_h$ are solutions of\nthe equations\n
\begin{align}\label{Ga513}\n&\left\{\n\begin{array}{ll}\n\partial_t\widetilde u_h+A_h\widetilde u_h = f_h-\overline\nabla_h\cdot{\bf g} ,\n\\[3pt]\n\widetilde u_h(0)=P_hu^0 ,\n\end{array}\n\right.\n\end{align}\n
and\n
\begin{align}\label{Eqv9}\n&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\left\{\n\begin{array}{ll}\n\partial_tv_h+A_hv_h = 0 ,\n\\[3pt]\nv_h(0)=u^0_h-P_hu^0,\n\end{array}\n\right.\n\end{align}\n
respectively. Write the equation (\ref{PDE0}) as\n
\begin{align}\label{Ga903}\n&\left\{\n\begin{array}{ll}\n\partial_tu+Au\n=f -\overline\nabla\cdot {\bf g} &\mbox{in}~\Omega,\n\\[3pt]\nu(0)=u^0 &\mbox{in}~\Omega ,\n\end{array}\n\right.\n\end{align}\n
and let $w_h=\widetilde u_h-P_hu$. 
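The error equation for $w_h$ displayed below can be derived as follows (a sketch, assuming that $(A_h\phi_h,\chi)=(a\nabla \phi_h,\nabla\chi)+(c\phi_h,\chi)$ for $\phi_h,\chi\in S_h$ and that $R_h$ denotes the Ritz projection associated with this bilinear form, as introduced earlier in the paper): testing (\ref{Ga513}) and (\ref{Ga903}) with $\chi\in S_h$ and subtracting (the two right-hand sides coincide when tested against $\chi\in S_h$), and using that $(\partial_t(u-P_hu),\chi)=0$ and $(a\nabla(u-R_hu),\nabla\chi)+(c(u-R_hu),\chi)=0$, we get\n
\begin{align*}\n(\partial_tw_h,\chi)+(a\nabla(\widetilde u_h-R_hu),\nabla\chi)\n+(c(\widetilde u_h-R_hu),\chi)=0 ,\quad\forall~\chi\in S_h ,\n\end{align*}\n
which is precisely the equation below, rewritten in terms of $A_h$ and $w_h=\widetilde u_h-P_hu$.\n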
The difference of (\\ref{Ga513}) and\n(\\ref{Ga903}) gives\n\\begin{align*}\n&\\left\\{\n\\begin{array}{ll}\n\\partial_t w_h+A_h w_h = A_h(R_hu-P_hu) ,\n\\\\[3pt]\nw_h(0)=0 .\n\\end{array}\n\\right.\n\\end{align*}\nMultiplying the above equation by $w_h$, we obtain\n\\begin{align*}\n\\|w_h\\|_{L^\\infty((0,T);L^2)}\n&\\leq C\\|R_hu-P_hu\\|_{L^2((0,T);H^1)}\\\\\n&\\leq Ch_0^{-1}\\|R_hu-P_hu\\|_{L^2((0,T);L^2)}\\\\\n&\\leq C_{T}\\|R_hu-P_hu\\|_{L^\\infty((0,T);L^\\infty)}\\\\\n&\\leq C_{T}\\|u\\|_{L^\\infty( Q_T)} ,\n\\end{align*}\nwhere we have used the inequality \n$\\|R_hu\\|_{L^\\infty}\\leq C_{h_0}\\|u\\|_{L^\\infty}$\nin the last step. By using an inverse inequality we further derive that\n\\begin{align*}\n\\|w_h\\|_{L^\\infty( Q_T)}\\leq\nCh_0^{-d\/2}\\|w_h\\|_{L^\\infty((0,T);L^2)}\\leq C_T\\|u\\|_{L^\\infty( Q_T)}.\n\\end{align*}\n\nApplying (\\ref{STLEst}) to the equation (\\ref{Eqv9}) we obtain\n$$\n\\|v_h\\|_{L^\\infty( Q_T)}\\leq C\\|u_h^0-P_hu^0\\|_{L^\\infty}\n\\leq C\\|u_h^0\\|_{L^\\infty}+C\\|u\\|_{L^\\infty( Q_T)} .\n$$\nThe last two inequalities imply (\\ref{STLEst2}) for the case $h\\geq h_0$.\n\n\\subsection{Proof of (\\ref{smgest})}\n\n\nWe define the truncated Green function $G_{\\rm tr}^*$ in the\nfollowing way. Let $\\eta$ be a nonnegative smooth function on $\\mathbb{R}$\nsuch that $\\eta(\\rho)=0$ for $|\\rho|\\leq 1\/2$ and $\\eta(\\rho)=1$ for\n$|\\rho|\\geq 1$. If we set $\\chi(t,x,y)=\\eta\\big(|x-y|^4+t^2\\big)$\nand $\\chi_\\epsilon(t,x,y)=\n\\chi(t\/\\epsilon^2,x\/\\epsilon,y\/\\epsilon)$, then $\\chi_\\epsilon$ is a\n$C^\\infty$ function of $x,y$ and $t$. It is easy to see that $\\chi_\\epsilon=0$ when\n$\\max(|x-y|,\\sqrt{t})<\\epsilon\/2$, and $\\chi_\\epsilon=1$ when\n$\\max(|x-y|,\\sqrt{t})>\\epsilon$, and $|\\partial^{\\alpha_1}_t\n\\partial^{\\beta_1}_x\\partial^{\\beta_2}_y\n\\chi_\\epsilon(t,x,y)|\\leq C\\epsilon^{-2\\alpha_1\n-|\\beta_1|-|\\beta_2|}$.\n\nFor $d_{J_*}=C_*h$,\n$\\chi_{d_*}(\\cdot,\\cdot,y)=0$ in the domain\n$Q_{*\/2}(y):=\\{(x,t)\\in Q_T: \\max(|x-y|,\\sqrt{t})d_{J_*}$.\n\nFor the fixed trianglular element $\\tau_l^h$ and\nthe point $x_0\\in \\tau_l^h$, the function\n$\\widetilde \\delta_{x_0}$ is supported in\n$\\tau_l^h\\subset \\Omega_*(x_0)$ with\n$\\int_\\Omega\\widetilde\\delta_{x_0}(y){\\rm d} y=1$\n(see the notations in Section \\ref{fnot}). 
Therefore, by using\nLemma \ref{GFEst1} we see that\n
\begin{align*}\n&\iint_{\Omega_\infty\backslash\nQ_*(x_0)}|\partial_t\Gamma(\tau,x,x_0)\n-\partial_tG(\tau,x,x_0)|{\rm d} x{\rm d}\tau\\\n
&= \iint_{ Q_T\backslash Q_*(x_0)}\biggl|\int_\Omega\n\partial_tG(\tau,x,y)\widetilde\delta_{x_0}(y){\rm d}\ny-\partial_tG(\tau,x,x_0)\biggl|{\rm d} x{\rm d}\tau\\\n
&~~~\n+\iint_{\Omega\times(T,\infty)}|\partial_t\Gamma(\tau,x,x_0)\n-\partial_tG(\tau,x,x_0)|{\rm d} x{\rm d}\tau\\\n
&\leq\nCh\iint_{\max(|x-y|,\tau^{1\/2})>\frac{1}{2}C_*h}\sup_{y\in\tau_l^h}\big|\n\nabla_y\partial_tG(\tau,x,y)\big|{\rm d} x{\rm d}\tau\n+C \\\n
&\leq Ch\sum_{j}\iint_{Q_j'(x_0)}\sup_{y\in\tau_l^h}\big|\n\nabla_y\partial_tG(\tau,x,y)\big|{\rm d} x{\rm d}\tau\n+C\\\n
&\leq C\sum_{j}\frac{h}{d_j} +C \\\n
&\leq C .\n\end{align*}\n\n
Multiplying (\ref{GMFdef}) by $\partial_t\Gamma$ and integrating the\nresult, we get\n
\begin{align*}\n&\|\partial_t\Gamma(\cdot,\cdot,x_0)\|_{L^2(Q_T)}\leq\nC\|\widetilde \delta_{x_0}\|_{H^1(\Omega)}\leq Ch^{-d\/2-1} ,\n\end{align*}\n
which implies that\n
\begin{align*}\n&\iint_{Q_*(x_0)}|\partial_t\Gamma(\tau,x,x_0)|{\rm d} x{\rm d}\tau\n\leq\nd_{J_*}^{d\/2+1}\|\partial_t\Gamma(\cdot,\cdot,x_0)\|_{L^2(Q_T)}\leq\nC .\n\end{align*}\n
It is easy to check that\n
\begin{align}\n|\partial_tG_{\rm tr}^*(t,x,y)|\leq Cd_{J_*}^{-d-2}\n\quad\mbox{for}~\max(|x-y|,t^{1\/2}) \leq d_{J_*}\n\end{align}\n
and so\n
\begin{align}\n&\iint_{Q_*(x_0)} |\partial_tG_{\rm tr}^*(t,x,x_0)|{\rm d}\nx{\rm d}\tau\leq Cd_{J_*}^{-d-2} d_{J_*}^{d+2} \leq C .\n\end{align}\n
It follows that\n
\begin{align*}\n&\iint_{\Omega_\infty}|\partial_t\Gamma(\tau,x,x_0)\n-\partial_tG_{\rm tr}(\tau,x,x_0)|{\rm d} x{\rm d}\tau\\\n
&\leq \iint_{\Omega_\infty\backslash\nQ_*}|\partial_t\Gamma(\tau,x,x_0)\n-\partial_tG(\tau,x,x_0)|{\rm d} x{\rm d}\tau\n+\iint_{Q_*}(|\partial_t\Gamma(\tau,x,x_0)|+\n|\partial_tG_{\rm tr}(\tau,x,x_0)|){\rm d} x{\rm d}\tau\\\n
&\leq C .\n\end{align*}\n
From Lemma \ref{GMhEst} and the last inequality, we see that\n
\begin{align*}\n&\iint_{\Omega_\infty}|\partial_t\Gamma_{h}(\tau,x,x_0)\n-\partial_tG_{\rm tr}(\tau,x,x_0)|{\rm d} x{\rm d}\tau\\\n
&\leq\iint_{\Omega_\infty}|\partial_t\Gamma_{h}(\tau,x,x_0)\n-\partial_t\Gamma(\tau,x,x_0)|{\rm d} x{\rm d}\tau\n+\iint_{\Omega_\infty}|\partial_t\Gamma(\tau,x,x_0)\n-\partial_tG_{\rm tr}(\tau,x,x_0)|{\rm d} x{\rm d}\tau\\\n
&\leq C.\n\end{align*}\n\n
Since both $\Gamma_h(\tau,x,y)$ and $G^*_{\rm tr}(\tau,x,y)$\nare symmetric with respect to $x$ and $y$, from the last\ninequality we see that the kernel\n$K(x,y)=\int_0^\infty|\partial_t\Gamma_h(\tau,x,y)\n-\partial_tG^*_{\rm tr}(\tau,x,y)|{\rm d}\tau$\nsatisfies\n
\begin{align*}\n&\sup_{y\in\Omega}\int_\Omega K(x,y){\rm d}\nx+\sup_{x\in\Omega}\int_\Omega K(x,y){\rm d} y\leq C .\n\end{align*}\n
By Schur's lemma \cite{Kra}, the operator $M_K$ defined by\n$M_Ku_h(x)=\int_\Omega K(x,y)u_h(y){\rm d} y$ is bounded on\n$L^q(\Omega)$ for any $1\leq q\leq\infty$, i.e.\n
\begin{align}\label{Ms1}\n\|M_Ku_h\|_{L^q}\leq C\|u_h\|_{L^q} ,\quad 1\leq q\leq \infty\n.\n\end{align}\n\n
Let\n$E^*_{\rm tr}(t)u_h(x)=\int_\Omega G_{\rm tr}^*(t,x,y)u_h(y){\rm d} y$.\nWe have\n
\begin{align*}\n&\sup_{t>0}|E_h(t)u_h(x)|\\\n
&\n\leq\sup_{t>0}|(E_h(t)-E^*_{\rm\ntr}(t))u_h(x)|+\sup_{t>0}|E^*_{\rm 
tr}(t)u_h(x)|\\\n
&\n\leq\n|(P_h\delta_{x},u_h)|\n+\sup_{t>0}\biggl|\int_0^t\n\int_\Omega\big(\n\partial_t\Gamma_h(\tau,x,y)\n-\partial_tG^*_{\rm tr}(\tau,x,y)\big)u_h(y){\rm d}\ny{\rm d}\tau\biggl|+\sup_{t>0}|E^*_{\rm tr}(t)u_h(x)|\\\n
&\leq\n|(P_h\delta_{x},u_h)|\n+\int_0^\infty\n\int_\Omega\n|\partial_t\Gamma_h(\tau,x,y)\n-\partial_tG^*_{\rm tr}(\tau,x,y)|\,|u_h(y)|{\rm d} y{\rm d}\n\tau+\sup_{t>0}E(t)|u_h|(x),\\\n
&\n=:|u_h(x)|+M_Ku_h(x)+\sup_{t>0}E(t)|u_h|(x)\n\end{align*}\n
where\n
\begin{align*}\n&\|M_Ku_h\|_{L^q}\leq C\|u_h\|_{L^q} , \qquad\qquad\forall~\n1\leq q\leq \infty,~~\mbox{by (\ref{Ms1})},\\[5pt]\n
&\|\sup_{t>0}E(t)|u_h|\|_{L^q}\leq C_q\|u_h\|_{L^q},\n\quad\forall~1<q<\infty ,\\[5pt]\n
&\|\sup_{t>0}E(t)|u_h|\|_{L^\infty}\leq \|u_h\|_{L^\infty},\n\quad~ \mbox{by the maximum principle} .\n\end{align*}\n
This proves (\ref{smgest}) for the case $h<h_0$.\nWhen $h\geq h_0$, by (\ref{STLEst}) and the inverse inequality we have\n
\begin{align*}\n\big\|\sup_{t>0}|E_h(t)v_h|\big\|_{L^q}\n\leq C\sup_{t>0}\|E_h(t)v_h\|_{L^\infty}\n\leq C\|v_h\|_{L^\infty}\leq Ch_0^{-d\/q}\|v_h\|_{L^q} .\n\end{align*}\n\n
The proof of (\ref{smgest}) is completed.\n\n\n
\subsection{Proof of (\ref{LpqSt1})-(\ref{LpqSt3})}\n\n\n
Since the operator $E_h(t)$ is symmetric, i.e.\n$(E_h(t)u_h,v_h)=(u_h,E_h(t)v_h)$ for any $u_h,v_h\in S_h$,\nfrom (\ref{STLEst}) we derive that, by a\nduality argument and by interpolation \cite{BL},\n
\begin{align}\n&\|E_h(t)v_h\|_{L^q} + t\|\partial_tE_h(t)v_h\|_{L^q} \leq\nC\|v_h\|_{L^q}, \quad\mbox{for}~~ 1\leq q\leq\infty , \label{anf3}\n\end{align}\n
which means that $\{E_h(t)\}_{t>0}$ is an analytic semigroup on $L^q_h$.\n\n
First, we prove (\ref{LpqSt3}).\nFor the case $u^0_h\equiv {\bf g}\equiv 0$,\nwe rewrite the equation (\ref{FEEq0}) as\n
\begin{align}\label{Ga5}\n&\left\{\n\begin{array}{ll}\n\partial_tu_h+A_hu_h=f_h ,\n\\[5pt]\nu_h(0)=0 ,\n\end{array}\n\right.\n\end{align}\n
where $f_h=P_hf$.\nFrom \cite{Weis1,Weis2}, we know that the maximal $L^p$\nregularity (\ref{LpqSt3}) holds if and only if one of the following sets is\n$R$-bounded in ${\cal L}(L^q_h,L^q_h)$ with $R$-bound independent of $h$:\\\n
(i)~ $\{\lambda(\lambda+A_h)^{-1}:|{\rm arg}(\lambda)|<\n\pi\/2+\theta\}$ for some\n$0<\theta< \pi\/2 \,$,\\\n
(ii)~ $\{E_h(t),~tA_hE_h(t):t>0\} \,$, \\\n
(iii)~ $\{E_h(z):|{\rm arg}(z)|<\theta\}$ for some $0<\theta< \pi\/2\n\,$.\n\n
Moreover, from Lemma 4.c in \cite{Weis2} we know that the set in (iii) is $R$-bounded in\n${\cal L}(L^q_h,L^q_h)$ for some $\theta=\theta_{\kappa_q}>0$ if the\nanalytic semigroup $\{E_h(z)\}$ satisfies the maximal estimate:\n
\begin{align*}\n\biggl\|\sup_{t>0}\biggl|\frac{1}{t} \int_0^tE_h(s)u_h{\rm d}\ns\biggl|\biggl\|_{L^q}\leq \kappa_q\|u_h\|_{L^q},\quad\forall~u_h\in\nL^q_h(\Omega) .\n\end{align*}\n
Since the last inequality is a consequence of the maximal semigroup estimate\n(\ref{smgest}), we have thus proved the maximal $L^p$ regularity\n(\ref{LpqSt3}).\n\n
Secondly, we prove \refe{LpqSt1} and \refe{LpqSt2}. 
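The argument below is again based on an error splitting. With the notation introduced in the next paragraph, it rests on the elementary identity (recorded here only for orientation)\n
\begin{align*}\nu_h-P_hu=(\widetilde u_h-P_hu)+v_h=w_h+v_h ,\n\end{align*}\n
so that bounds for $w_h$ and $v_h$ immediately translate into bounds for $u_h-P_hu$.\n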
For the general case $u^0_h\neq 0$\nor ${\bf g}\neq 0$, we let $u_h=\widetilde u_h+v_h$, where\n$\widetilde u_h$ and $v_h$ are the solutions of the equations\n
\begin{align}\label{Ga51}\n&\left\{\n\begin{array}{ll}\n\partial_t\widetilde u_h+A_h\widetilde u_h = f_h-\overline\nabla_h\cdot{\bf g} ,\n\\[3pt]\n\widetilde u_h(0)=P_hu^0 ,\n\end{array}\n\right.\n\end{align}\n
and\n
\begin{align}\n&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\left\{\n\begin{array}{ll}\n\partial_tv_h+A_hv_h = 0 ,\n\\[3pt]\nv_h(0)=u^0_h-P_hu^0,\n\end{array}\n\right.\n\end{align}\n
respectively. Write the equation (\ref{PDE0}) as\n
\begin{align}\label{Ga90}\n&\left\{\n\begin{array}{ll}\n\partial_tu+Au\n=f -\overline\nabla\cdot {\bf g} &\mbox{in}~\Omega,\n\\[3pt]\nu(0)=u^0 &\mbox{in}~\Omega ,\n\end{array}\n\right.\n\end{align}\n
and let $w_h=\widetilde u_h-P_hu$. The difference of (\ref{Ga51}) and\n(\ref{Ga90}) gives\n
\begin{align*}\n&\left\{\n\begin{array}{ll}\n\partial_t w_h+A_h w_h = A_h(R_hu-P_hu) ,\n\\[3pt]\nw_h(0)=0 .\n\end{array}\n\right.\n\end{align*}\n
Multiplying the above equation by $A_h^{-1}$, we get\n
\begin{align*}\n&\left\{\n\begin{array}{ll}\n\partial_t A_h^{-1}w_h+A_h A_h^{-1}w_h = R_hu-P_hu ,\n\\[3pt]\nA_h^{-1}w_h(0)=0 ,\n\end{array}\n\right.\n\end{align*}\n
and using (\ref{LpqSt3}) we derive that\n
\begin{align*}\n\|w_h\|_{L^p((0,T);L^q)}\leq C_{p,q}\|R_hu-P_hu \|_{L^p((0,T);L^q)} .\n\end{align*}\n
On the other hand, it is easy to derive that $\|v_h(t)\|_{L^2}\leq\nCe^{-t\/C}\|u^0_h-P_hu^0\|_{L^2}$, which with (\ref{anf3})\ngives (via interpolation)\n
\begin{align*} \|v_h(t)\|_{L^q}\leq\nCe^{-t\/C_q}\|u^0_h-P_hu^0\|_{L^q} ~~~\mbox{for}~~1< q<\infty .\n\end{align*}\n
The last two inequalities imply (\ref{LpqSt1}).\n\n
If $u^0_h\equiv u^0\equiv f\equiv 0$, then $v_h=0$\nand by using Lemma \ref{s00} we derive that\n
\begin{align*}\n\|u_h\|_{L^p((0,T);W^{1,q})}\n&=\|\widetilde u_h\|_{L^p((0,T);W^{1,q})} \\\n
&\leq \|w_h\|_{L^p((0,T);W^{1,q})}\n+\|P_hu\|_{L^p((0,T);W^{1,q})}\\\n
&\leq Ch^{-1}\|w_h\|_{L^p((0,T);L^q)}\n+C\|u\|_{L^p((0,T);W^{1,q})}\\\n
&\leq C_{p,q}h^{-1}\|R_hu-P_hu \|_{L^p((0,T);L^q)}\n+C\|u\|_{L^p((0,T);W^{1,q})} \\\n
&\leq C_{p,q}\|u\|_{L^p((0,T);W^{1,q})} \\\n
&\leq C_{p,q}\|{\bf g}\|_{L^p((0,T);L^q)} .\n\end{align*}\n
This proves the inequality (\ref{LpqSt2}).\n\n
The proof of Theorem \ref{MainTHM1} is completed. ~\vrule height8pt width 5pt depth 0pt\n\n\n\n
\section*{Appendix --- \nInterior estimates of parabolic equations on $\bf\widetilde Q_j'$}\n
\renewcommand{\thelemma}{A.\arabic{lemma}}\n\renewcommand{\theproposition}{A.\arabic{lemma}}\n\renewcommand{\theequation}{A.\arabic{equation}}\n\setcounter{lemma}{0} \setcounter{equation}{0}\n