diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzivyd" "b/data_all_eng_slimpj/shuffled/split2/finalzzivyd" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzivyd" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThere has been recently spectacular progress on the existence theory of minimal hypersurfaces led by the landmark work of F. Marques and A. Neves on the Almgren-Pitts minmax theory \\cite{MaNe14, MaNe17, MaNe16, MaNe18} and the volume spectrum \\cite{LiMaNe18} (jointly with Y. Liokumovich). For instance, by Marques-Neves \\cite{MaNe17} and A. Song \\cite{Song18}, we now know that any closed Riemannian manifold of dimension $3\\leq n\\leq 7$ contains {\\it infinitely many} smoothly embedded closed minimal hypersurfaces. In addition, when the metric on $M$ is {\\it generic}, Irie-Marques-Neves \\cite{IrMaNe18} showed that the union of embedded minimal hypersurfaces is {\\it dense}. Soon after, Marques-Neves-Song \\cite{MaNeSo17} proved that there is a sequence of such minimal hypersurfaces that is actually {\\it equidistributed}. Finally, in 3 dimensions and again for generic metrics, Chodosh-Mantoulidis \\cite{CM18} showed the existence of minimal surfaces with arbitrarily large area, genus, and Morse index.\n\nIt is interesting to ask in general how the index, topology, and geometry (area and curvature) of minimal surfaces relate to each other. The answer can be subtle. For instance, in presence of positive Ricci curvature, Choi--Schoen \\cite{ChSc85} proved that a sequence of minimal surfaces with bounded genus must have bounded area, curvature, and index. In a general Riemannian three-manifolds, however, it is not possible to bound the area nor the index of an embedded minimal surface by the genus, even if one assumes positive scalar curvature, as it can be seen in examples constructed by Colding--De Lellis \\cite{CoDe05}. On the other hand, jointly with O Chodosh and D. Ketover \\cite{CKM17}, we were able to show that manifolds positive scalar curvature, uniform index bounds do imply uniform area and genus bounds. However, without ambient curvature assumptions, there can be sequences of minimal surfaces with uniformly bounded index but arbitrarily large genus and area \\cite{CKM17}.\n \nThe purpose of this note is to prove the following estimate:\n\\begin{theo}\\label{thm:main}\nLet $(M^3,g)$ be a fixed closed Riemannian three-manifold. For any natural number $I$, there exists a constant $C=C(M,g,I)>0$ such that if $\\Sigma_j$ is a sequence of two-sided, closed, connected, embedded minimal surfaces with $\\Index(\\Sigma_j)\\leq I$, then\n $$\\genus(\\Sigma_j)1$. These surfaces have the property that, for each $k$, the projection $p:S \\times \\SS^{1} \\rightarrow S$ restricted to $S_k$ is a covering map of $S$ of order $k$. In particular, their Euler characteristic satisfy $\\chi(S_k)=k\\chi(S)$, so the genus of $S_k$ grows linearly in $k$.\n\nFixing a constant curvature metric in $S$, we may then minimize area in each homotopy class of $\\Sigma_k$ in $S \\times \\SS^{1}$ using \\cite{ScYa79} and obtain as a result a stable minimal surface $\\Sigma_k$ which is embedded (after passing to a one-sided quotient, if necessary) by \\cite{FHS83}. 
These will have uniformly bounded curvature in $k$, by \\cite{Sch83}, and thus area growing at least linearly in $k$ since $p|_{\\Sigma_k}: \\Sigma_k \\rightarrow S$ is a $k$-cover.\n\\end{rema}\n\n\\begin{rema}\nColding--Minicozzi \\cite{CoMi00} showed that there is a $C^{2}$-open set of metrics $g$ on any manifold $M$ so that there is a sequence of embedded stable minimal tori $\\Sigma_{j}$ with $\\area_{g}(\\Sigma_{j})\\to\\infty$, and thus the reverse inequality in Theorem \\ref{thm:main} does not hold in general. \n\\end{rema}\n\n\\begin{rema}\nThe estimate in Theorem \\ref{thm:main} seems to be new in the literature even in the particular case of stable minimal surfaces. \n\\end{rema}\n\n\\begin{rema}\n\tH. Rosenberg and B. Meeks have told me they have a different approach to obtain a similar bound to Theorem \\ref{thm:main}.\n\\end{rema}\n\n\\subsection{Acknowledgements} I would like to thank H. Rosenberg for asking me whether such a bound was true. I would also like to thank O. Chodosh and J. Nogueira for interesting discussions about laminations.\n\n\n\n\\section{Preliminaries}\n\n\\subsection{Minimal laminations} Minimal laminations are a natural generalization of minimal surfaces. They are very convenient in dealing with a sequence of surfaces without area bounds. We recall their definition:\n\n\n\\begin{defi}\nA closed set $\\cL$ of $M^3$ is called a \\textit{minimal lamination} if it is the union of pairwise disjoint, connected, injectively immersed minimal surfaces, which we call \\textit{leaves}. For each point $x\\in \\cL$, we ask for the existence of a neighborhood $\\Omega$ of $x$ and a $C^{0,\\alpha}$ local coordinate chart $\\Phi:\\Omega \\rightarrow \\mathbb{R}^3$ in whose image the leaves of $\\cL$ appear as slices of the form $(\\mathbb{R}^2\\times\\{t\\}) \\cap \\Phi(\\Omega)$.\n\\end{defi}\n\n\\begin{rema}\\label{rem:stable}\nAs an illustration of how laminations naturally appear when dealing with embedded minimal surfaces with no area bounds, consider a sequence of closed stable minimal surfaces $\\Sigma_j$ in $(M^3,g)$. Suppose their areas are blowing up, $i.e.,$ $|\\Sigma_j|\\rightarrow\\infty$\\footnote{Such sequences of stable minimal surfaces with unbounded area appear, for example, for \\textit{every metric} on the 3-torus; see Example 1.13 of \\cite{CKM17}.}. By curvature estimates of Schoen \\cite{Sch83}, $\\Sigma_j$ must have uniformly bounded second fundamental form, that is: \n$$|\\textrm{II}_{\\Sigma_j}|\\leq C$$\nfor some $C>0$. Thus, after passing to a subsequence, $\\Sigma_j$ converges locally smoothly to a minimal lamination $\\cL$ (see, for instance, page 475 of \\cite{MeRo06}). \n\\end{rema}\n\n\\subsection{Branched surfaces}\\label{sec:branched} To analyze the laminations that might appear as limits of sequences of minimal surfaces, we will consider branched surfaces that carry them. We remark that such methods were pioneered by Williams \\cite{Wi74} and have been successfully used in geometric topology on several occasions, $e.g.$ see \\cite{Li06,Li07} and, more recently, \\cite{CoGa18}. They are higher dimensional analogs of train tracks carrying geodesic laminations, see \\cite{PeHa92}.\n\n\n\\begin{defi}\nA {\\it branched surface} $B$ on a three-manifold $M$ is a two-complex consisting of a finite union of surfaces that combine along the 1-skeleton in such a way that $B$ has a well-defined tangent space at every point, with generic singularities. 
Every point $p \\in B$ thus has a neighborhood in\n$M$ which is homeomorphic to one of the local models in Figure \\ref{fig:branched}:\n\\end{defi}\n\\begin{figure}[h]\n\t\\begin{tikzpicture}\n\\draw (-7,-1) -- (-6,1) -- (-4,1) -- (-5,-1) -- cycle;\n\n\\draw (-3,-1) -- (-2,1) -- (0,1) -- (-1,-1) -- cycle;\n\t\\draw (-1,-0.5) -- (0,1.5);\n\t\\draw (-1,1) to [bend right =10] (0,1.5);\n\t\\draw (-2,-1) to [bend right =10] (-1,-0.5);\n\t\\draw[dashed] (-2,-1) -- (-1,1);\n\n\\draw (1,-1) -- (2,1) -- (4,1) -- (3,-1) -- cycle;\n\\draw (3,-0.5) -- (4,1.5);\n\\draw (3,1) to [bend right =10] (4,1.5);\n\\draw (2,-1) to [bend right =10] (3,-0.5);\n\\draw[dashed] (2,-1) -- (3,1);\n\\draw (1.5,-1.25) -- (3.5,-1.25);\n\\draw[dashed] (1.5,0) -- (3.5,0);\n\\draw (1.5,0) to [bend right =10] (1.5,-1.25);\n\\draw (3.5,0) to [bend right =10] (3.5,-1.25);\n\\end{tikzpicture}\t\\caption{Local models for a branched surface}\\label{fig:branched}\n\\end{figure}\nFor a branched surface $B$, we define its \\textit{branched locus} $L$ to be the set of points in $B$ which do not have a neighborhood diffeomorphic to $\\mathbb{R}^2$. We call the closure of each component of $B\\setminus L$ a \\textit{branch sector}. As with train tracks, a branched surface has a well-defined normal bundle in a 3-manifold and for any $\\varepsilon>0$ sufficiently small, an $\\varepsilon$-tubular neighborhood $N_\\varepsilon(B)$ can be foliated by intervals transverse to $B$ as an $I$-bundle in such a way that collapsing these intervals collapses $N(B)$ to a new branched surface which can be canonically identified with $B$. \n\nWe say that a surface $\\Sigma$ (or, similarly, a lamination $\\cL$) is \\textit{fully carried} by $B$ if there exists $\\varepsilon>0$ such that $\\Sigma \\subset N_\\varepsilon(B)$ and $\\Sigma$ transversely intersects every $I$-fiber of $N_\\varepsilon(B)$. If $\\pi:N(B)\\rightarrow B$ is the projection that collapses every $I$-fiber to a point and $b_1,\\ldots,b_N$ are the components of $B\\setminus L$, we let $x_i= |\\Sigma \\cap \\pi^{-1}(b_i)|$ for each $b_i$. We can describe $\\Sigma$ combinatorially via a non-negative integer coordinate $(x_1,\\ldots,x_N)\\in\\mathbb{R}^N$ satisfying an obvious system of branch equations coming from the intersections of the respective branch sectors, see \\cite{FlOe84,Oe88}:\n\n\\begin{figure}[h]\n\t\\begin{tikzpicture}\n\n\t\\draw (-0.25,-1) -- (0.75,1) -- (5.25,1) -- (4.25,-1) -- cycle;\n\t\\draw (4,0.5) -- (5,2.5);\n\t\\draw (3,1) to [bend right =12] (5,2.5);\n\t\\draw (2,-1) to [bend right =12] (4,0.5);\n\t\\draw[dashed] (2,-1) -- (3,1);\n\t\n\t\\node at (1.5,0) {$x_i$};\n\t\\node at (4,-.2) {$x_j$};\t\n\t\\node at (3.4,0.55) {$x_k$};\n\t\n\\end{tikzpicture}\\caption{Branch equations $x_i=x_j+x_k$}\\label{fig:brancheq}\n\\end{figure}\n\n\n\n\n\n\\begin{prop}\\label{prop:lam}\nEvery lamination $\\cL$ in $M$ is fully carried by a branched surface $B$.\n\\end{prop}\n\\begin{proof}\nThis follows by general topological arguments, see \\cite{GaOe89,Ha80s}. However, we sketch an argument for the laminations relevant to the proof of Theorem \\ref{thm:main}.\n\nSuppose $\\cL$ is a minimal lamination which is the limit of a sequence of closed embedded minimal surfaces $\\Sigma_j$ with $|\\textrm{II}_{\\Sigma_j}|\\leq C$ for some fixed $C>0$. 
Following a standard argument on page 475 of \\cite{MeRo06}, we may find $r>0$ sufficiently small and cover $\\cL$ with extrinsic balls $B_r(x_1),B_r(x_2),\\ldots,B_r(x_k)$, where $x_1,x_2,\\ldots,x_k$ belong to $\\cL$, such that the intersection of any leaf with any of the balls $B_r(x_i)$ is either empty or the graph of a function with small gradient over a subset of a disk passing through $x_i$. Such disks can then be isotoped and glued to form the desired branched surface.\n\\end{proof}\t\n\\begin{rema}\\label{rem:branched}\nWe remark that the argument above implies that the constructed branched surface $B$ also carries the surfaces $\\Sigma_j$ for $j$ sufficiently large. In fact, $\\Sigma_j \\subset N_{2r}(B)$.\n\\end{rema}\n\n\n\n\n\n\n\\subsection{Local structure and surgery theorems} In Remark \\ref{rem:stable}, we saw that a sequence of stable minimal surfaces converges, locally smoothly and after passing to a subsequence, to a minimal lamination. In this section we recall the main results of \\cite{CKM17}, which guarantee that closed embedded minimal surfaces with uniformly bounded index behave in a qualitatively similar way, up to controlled errors. What follows is a combination of Theorem 1.17 and Corollary 1.19 of \\cite{CKM17}:\n\n\\begin{theo}\\label{thm:ckm}\nThere exist functions $\\tilde r(I)$ and $\\tilde m(I)$ with the following property. Fix a closed three-manifold $(M^{3},g)$ and a natural number $I \\in \\mathbb{N}$. Suppose $\\Sigma_{j}\\subset (M,g)$ is a sequence of closed embedded minimal surfaces with $\\Index(\\Sigma_{j})\\leq I$. Then, after passing to a subsequence, there is $C>0$ and a finite set of points $\\mathcal B_{j}\\subset \\Sigma_{j}$ with cardinality $|\\mathcal B_{j}|\\leq I$ so that the curvature of $\\Sigma_{j}$ is uniformly bounded away from the set $\\mathcal B_{j}$, i.e.,\n\\[\n|\\sff_{\\Sigma_{j}}|(x) \\min\\{1,d_{g}(x,\\mathcal B_{j})\\}\\leq C,\n\\]\nbut not at $\\mathcal B_{j}$, i.e.,\n\\[\n\\liminf_{j\\to\\infty} \\min_{p\\in\\mathcal B_{j}}|\\sff_{\\Sigma_{j}}|(p) = \\infty.\n\\]\nPassing to a further subsequence, the points $\\mathcal B_{j}$ converge to a set of points $\\mathcal B_{\\infty}$ and the surfaces $\\Sigma_{j}$ converge locally smoothly, away from $\\mathcal B_{\\infty}$, to some lamination $\\cL \\subset M \\setminus \\mathcal B_{\\infty}$. The lamination has removable singularities, i.e., there is a smooth lamination $\\widetilde\\cL\\subset M$ so that $\\cL = \\widetilde\\cL\\setminus\\mathcal B_{\\infty}$. Moreover, there exists $\\varepsilon_{0}>0$ smaller than the injectivity radius of $(M,g)$ so that $\\mathcal B_{\\infty}$ is $4\\varepsilon_{0}$-separated and for any $\\varepsilon \\in (0,\\varepsilon_{0}]$, taking $j$ sufficiently large guarantees that there exist embedded surfaces $\\widetilde \\Sigma_{j}\\subset (M^{3},g)$ satisfying:\n\\begin{enumerate}[itemsep=5pt, topsep=5pt]\n\t\\item The new surfaces $\\widetilde\\Sigma_{j}$ agree with $\\Sigma_{j}$ outside of $B_{\\varepsilon}(\\mathcal B_{\\infty})$. 
\n\t\\item The components of $\\Sigma_{j}\\cap B_{\\varepsilon}(\\mathcal B_{\\infty})$ that do not intersect the spheres $ \\partial B_{\\varepsilon}(\\mathcal B_{\\infty})$ transversely and the components that are topological disks appear in $\\widetilde\\Sigma_{j}$ without any change.\n\t\\item The curvature of $\\widetilde\\Sigma_{j}$ is uniformly bounded, i.e.\n\t\\[\n\t\\limsup_{j\\to\\infty}\\sup_{x\\in\\widetilde\\Sigma_{j}}|\\sff_{\\widetilde\\Sigma_{j}}|(x) <\\infty.\n\t\\]\n\t\\item Each component of $\\widetilde\\Sigma_{j}\\cap B_{\\varepsilon}(\\mathcal B_{\\infty})$ which is not a component of $\\Sigma_{j}\\cap B_{\\varepsilon}(\\mathcal B_{\\infty})$ is a topological disk of area at most $2\\pi\\varepsilon^{2}(1+o(\\varepsilon))$.\n\t\\item The genus drops in a controlled manner, i.e.,\n\t\\[\n\t\\genus(\\Sigma_{j})-\\tilde r(I) \\leq \\genus(\\widetilde\\Sigma_{j}) \\leq \\genus(\\Sigma_{j}).\n\t\\]\n\t\\item The number of connected components increases in a controlled manner, i.e.,\n\t\\[\n\t|\\pi_{0}(\\Sigma_{j})|\\leq |\\pi_{0}(\\widetilde\\Sigma_{j})| \\leq |\\pi_{0}(\\Sigma_{j})| + \\tilde m(I).\n\t\\]\n\t\\item While $\\widetilde\\Sigma_{j}$ is not necessarily minimal, it is asymptotically minimal in the sense that $\\lim_{j\\to\\infty} \\Vert H_{\\widetilde\\Sigma_{j}}\\Vert_{L^{\\infty}(\\widetilde\\Sigma_{j})} = 0$.\n\\end{enumerate}\nFinally, the new surfaces $\\widetilde\\Sigma_{j}$ converge locally smoothly to the smooth minimal lamination $\\widetilde \\cL$.\n\\end{theo} \n\n\\section{End of proof of Theorem \\ref{thm:main}}\n\nWe argue by contradiction: if the desired constant does not exist, we may find a sequence $\\Sigma_j$ of closed, two-sided, embedded minimal surfaces with $\\Index(\\Sigma_j)\\leq I$ and such that:\n\\begin{equation}\\label{eqproof}\n\\frac{\\genus(\\Sigma_j)}{\\area(\\Sigma_j)} \\nearrow \\infty.\n\\end{equation}\nBy Theorem 1.1 of \\cite{CKM17}, this implies $\\area(\\Sigma_j)\\rightarrow\\infty$; otherwise, bounded index and bounded area would imply bounded genus and thus contradict \\eqref{eqproof}.\n\nPassing to a subsequence if necessary, we may then apply Theorem \\ref{thm:ckm} to $\\Sigma_j$ and obtain, after surgery, a sequence of nearly minimal surfaces $\\widetilde\\Sigma_{j}$ which, by $(4)$, $(5)$, $(6)$, we may also assume to be connected and to satisfy:\n\\begin{equation}\\label{eqprooftil}\n\\frac{\\genus(\\widetilde\\Sigma_j)}{\\area(\\widetilde\\Sigma_j)} \\nearrow \\infty.\n\\end{equation}\nThis crucially uses the fact that the surgery procedure deletes at most $I$ components and replaces them with disks of comparable area. \n\n\nMoreover, again by Theorem \\ref{thm:ckm}, $\\widetilde\\Sigma_j$ converges locally smoothly to a smooth minimal lamination $\\widetilde{\\cL}$. By Proposition \\ref{prop:lam}, we can find a branched surface $B$ that fully carries $\\widetilde{\\cL}$. As in Section \\ref{sec:branched}, let $L$ be the branched locus of $B$ and $b_1,b_2,\\ldots,b_N$ its branch sectors, $i.e.$, the components of $B\\setminus L$. Each $\\Sigma_j$ then corresponds to a non-negative integer coordinate:\n$$\\Sigma_j \\longrightarrow (x^j_1,x^j_2,\\ldots,x^j_N)\\in\\mathbb{R}^N.$$\nMoreover, by Remark \\ref{rem:branched}, $\\Sigma_j \\subset N_{2r}(B)$ for $j$ sufficiently large, and it may be reconstructed from disks which are graphical over the sectors $b_1,\\ldots,b_N$. Thus, \n$$x^j_1|b_1|+ x^j_2|b_2| +\\cdots +x^j_N|b_N| = {O}(|\\Sigma_j|),$$\nwhere $|b_i|$ is the area of the branch sector $b_i$. 
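\nExplicitly, since the branch sectors $b_1,\\ldots,b_N$ are finitely many and depend only on $B$ (and not on $j$), their areas are bounded from below by $c_B:=\\min_{1\\leq i\\leq N}|b_i|>0$ (a notation introduced here only for convenience), so that the previous estimate can be rewritten as\n$$\\sum_{i=1}^{N} x^j_i \\leq \\frac{1}{c_B}\\left(x^j_1|b_1|+\\cdots +x^j_N|b_N|\\right) = {O}(|\\Sigma_j|).$$\n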
\n\nOn the other hand, the Euler characteristic of $\\Sigma_j$ can also be estimated using a triangulation obtained by gluing the graphical pieces over the sectors $b_1,\\ldots,b_N$. This yields the estimate\n$$ |\\chi(\\Sigma_j)| = {O}(x^j_1+x^j_2+\\cdots+x^j_N),$$\nwhich, combined with the previous bound, gives $\\genus(\\Sigma_j) = {O}(\\area(\\Sigma_j))$; this contradicts equation \\eqref{eqprooftil} and we are done.\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nBlazars are the most extreme objects in the class of active galactic nuclei (AGNs). They can be divided into flat spectrum radio quasars (FSRQs) and BL Lac objects (BL Lacs) based on the presence or not of broad emission lines in their optical spectra \\citep[e.g.\\,,][]{Stickel1991}.\nTheir spectral energy distribution (SED) is typically dominated by two non-thermal components, extending from the radio band to $\\gamma$ rays. \nThe low-frequency component is due to synchrotron emission by relativistic electrons within the jet, and its peak ($\\nu^{\\mathrm{Syn}}_{\\mathrm{peak}}$) can be found in the spectral region extending from radio to soft X-ray energies.\nDepending on the $\\nu^{\\mathrm{Syn}}_{\\mathrm{peak}}$ position, blazars are further classified as: low synchrotron peaked (LSP; $\\nu^{\\mathrm{Syn}}_{\\mathrm{peak}} < 10^{14}$\\,Hz), intermediate synchrotron peaked (ISP; $10^{14}$\\,Hz $\\leq \\nu^{\\mathrm{Syn}}_{\\mathrm{peak}} < 10^{15}$\\,Hz), and high synchrotron peaked (HSP; $\\nu^{\\mathrm{Syn}}_{\\mathrm{peak}} > 10^{15}$\\,Hz) \\citep{Abdo2010a}. In general, FSRQs are predominantly LSP objects, while BL Lacs can belong to any of the three classes.\n\nThe high-frequency component of the SED, peaking from X-ray to $\\gamma$-ray bands, is commonly assumed to originate from inverse Compton (IC) scattering of low energy photons by relativistic electrons in the jet.\nThe scattered photons may be the same photons produced by the synchrotron mechanism \\citep[synchrotron-self-Compton, SSC, e.g.\\,,][]{Maraschi1992, Abdo2010a, Hovatta2014}, or photons from external sources such as the accretion disk, the broad line region and\/or the dusty torus \\citep[external Compton, EC, e.g.\\,,][]{Ghisellini1996, Sikora2008}.\nHadronic models, in which relativistic protons within the jet are ultimately responsible for the observed emission, have also been proposed \\citep[e.g.\\,,][]{Levinson2006, Bottcher2007}.\n\nIn recent years, the Large Area Telescope (LAT) on board the \\fermi\\ satellite confirmed that blazars dominate the census of the $\\gamma$-ray sky \\citep{Acero2015}.\nExploring the possible correlation between radio and $\\gamma$-ray emission is a fundamental step towards understanding the physics and the emission processes in blazars, and this topic was the subject of several works \\citep[e.g.\\,,][]{Kovalev2009, Ghirlanda2010, Giroletti2010, Mahony2010, Nieppola2011, Piner2014, Giroletti2016}. \n\n\\citet{Ackermann2011} revealed a positive and highly significant correlation between radio and $\\gamma$-ray emission in the energy range between 100\\,MeV and 100\\,GeV for the AGNs included in the \\fermi -LAT first source catalog \\citep[1FGL, ][]{Abdo2010b}. In that work, the authors made use of both archival interferometric 8\\,GHz data and concurrent single dish 15\\,GHz observations from the Owens Valley Radio Observatory observing program \\citep{Richards2011}.\nIn particular, \\citet{Ackermann2011} found that the correlation strength decreases when higher $\\gamma$-ray energies are considered. 
A similar result is reported in a more recent work by \\citet{Mufakharov2015}, in which the authors explore the correlation between radio emission at cm wavelengths and $\\gamma$-ray emission at $E > 100$\\,MeV for 123 1FGL \\fermi\\ blazars. In that work, based on quasi-simultaneous observations, the authors find a positive and statistically significant correlation between the two emission bands, which weakens when higher $\\gamma$-ray energies are used.\n\nThe possible correlation between radio and very-high energy (VHE, $E > 0.1$\\,TeV) $\\gamma$ rays still remains largely unexplored, mainly due to the lack of a homogeneous coverage of the VHE sky.\nCurrently, VHE observations of blazars are conducted by imaging atmospheric Cherenkov telescopes (IACTs), which mainly operate in pointing mode with a limited sky coverage, and usually observe sources in a flaring state. \nAll of these limitations, which introduce a strong bias in VHE catalogs and make it difficult to assess any possible radio-VHE correlation, will be overcome by the advent of the new generation Cherenkov Telescope Array \\citep[CTA, ][]{Actis2011}.\n\n\\begin{table}\n\\caption{\\small Composition of the two source samples extracted from the 1FHL and 2FHL catalogs.}\n\\label{tab_sample_composition} \n\\centering \n\\small \n\\setlength{\\tabcolsep}{9pt}\n\\renewcommand{\\arraystretch}{1.3}\n\\begin{tabular}{lccc} \n\\hline\n\\hline \nSource type & Catalog & Num. of Sources & Sources with $z$ \\\\\n\\hline\n\\hline \nAll sources & 1FHL & 237 & 147 \\\\\n & 2FHL & 131 & 76 \\\\\nBL Lac & 1FHL & 173 & 100 \\\\\n & 2FHL & 112 & 63 \\\\\nFSRQ & 1FHL & 44 & 44 \\\\ \n & 2FHL & 5 & 5 \\\\\nHSP & 1FHL & 103 & 60 \\\\ \n & 2FHL & 84 & 48 \\\\\nISP & 1FHL & 45 & 23 \\\\\n & 2FHL & 18 & 7 \\\\ \nLSP & 1FHL & 58 & 52 \\\\\n & 2FHL & 23 & 17 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\nAmong the 61 currently known TeV blazars included in the online TeVCat\\footnote{\\url{http:\/\/tevcat.uchicago.edu\/}} catalog, which contains all of the TeV sources so far detected, 75\\% (46 objects) belong to the HSP class\\footnote{We refer to the catalog version 3.400.}. \nIn general, HSP blazars show peculiar features such as lower Compton dominance, lower synchrotron luminosity, and parsec-scale jets that are less variable in flux density and structure than those of other blazars \\citep[e.g.\\,,][]{Giroletti2004, Piner2008, Lico2012, Blasi2013, Lico2014}. \n\nAt present, the first and second \\fermi -LAT catalogs of high-energy $\\gamma$-ray sources, 1FHL and 2FHL \\citep[][]{Ackermann2013, Ackermann2016}, represent the best compromise for addressing the connection between radio and hard $\\gamma$-ray emission. The 1FHL (in the 10 - 500\\,GeV energy range) and the 2FHL (in the 50\\,GeV - 2\\,TeV energy range) catalogs provide us with two large, deep and unbiased samples of $\\gamma$-ray sources in an energy range approaching and partly overlapping with the VHE band.\n\nIn this work we investigate the possible radio-VHE correlation by performing a statistical analysis on the 1FHL and 2FHL AGN samples, mostly composed of HSP blazars, by using the method developed by \\citet{Pavlidou2012}. \nA preliminary version of the third Fermi-LAT catalog of high-energy gamma-ray sources (3FHL\\footnote{Preliminary 3FHL release: arXiv:1702.00664}) has been recently released by the \\fermi -LAT Collaboration, but it is not yet published. 
For this reason we do not use the 3FHL sources in the present analysis, and the 3FHL will be the subject of a future work.\nAt radio frequencies we make use of very long baseline interferometry (VLBI) flux densities, which are representative of the emission from the innermost (milliarcsecond) source region, where the $\\gamma$-ray emission is likely produced.\n\nThe paper is organized as follows: in Sect.~\\ref{sec.catalogs} we describe the catalogs used in this work and the sample construction; we present the results in Sect.~\\ref{sec.results} and we discuss them in Sect.~\\ref{sec.discussion}. Throughout the paper we use a $\\Lambda$CDM cosmology with $h = 0.71$, $\\Omega_m = 0.27$, and $\\Omega_\\Lambda=0.73$ \\citep{Komatsu2011}.\n\n\\begin{figure\n\\begin{center}\n\\includegraphics[bb=20 4 490 340, width= 0.5\\textwidth, clip]{histo_flux.ps} \\\\ \n\\end{center}\n\\caption{\\small $\\gamma$-ray energy flux distribution for the 1FHL-n (top-left panel) and 2FHL-n (bottom-left panel) samples. VLBI flux density distribution for the 1FHL-n (top-right panel) and 2FHL-n (bottom-right panel) samples. The black solid lines represent the full source samples, while the blue and red dashed lines represent BL Lacs and FSRQs, respectively.}\n\\label{histo_flux}\n\\end{figure}\n\n\n\\begin{figure*\n\\begin{center}\n\\includegraphics[bb=6 0 495 350, width= 0.45\\textwidth, clip]{1FHL_scatter_plot_combo.ps} \n\\includegraphics[bb=0 0 485 350, width= 0.45\\textwidth, clip]{1FHL_3FGL_scatter_plot_combo.ps} \\\\\n\\end{center}\n\\caption{\\small VLBI flux density vs. 1FHL (left panel) and 3FGL (right panel) energy flux scatter plots for the full 1FHL-n sample. HSP, ISP and LSP sub-classes are indicated in blue, green and red colors, respectively. Sources with no spectral classification are indicated in black color. The filled and empty symbols represent sources with or without redshift, respectively.}\n\\label{1fhl_scatter_plot_combo}\n\\end{figure*}\n\n\\begin{figure*\n\\begin{center} \n\\includegraphics[bb=5 155 490 355, clip]{1FHL_multi_scatter_plots.ps}\n\\end{center}\n\\caption{\\small VLBI flux density vs. 1FHL (upper panels) and 3FGL (lower panels) energy flux scatter plots for BL Lacs, FSRQs, HSPs, ISPs,and LSPs, belonging to the 1FHL-n sample. The black and red symbols represent sources with or without redshift, respectively.}\n\\label{1fhl_scatter_plots}\n\\end{figure*}\n\n\\begin{table*}\n\\caption{\\small Results of the correlation analysis between 1FHL (10 - 500\\,GeV) energy fluxes and VLBI flux densities for various 1FHL-n sub-samples. 
For comparison the same analysis was performed by using 3FGL (0.1 - 300\\,GeV) energy fluxes.}\n\\label{tab_1fhl_corr} \n\\centering \n\\small \n\\setlength{\\tabcolsep}{9pt}\n\\renewcommand{\\arraystretch}{1.3}\n\\begin{tabular}{lccccc} \n\\hline\n\\hline \nSource type & Catalog & Number of Sources & Number of $z$-bins & r-Pearson & Significance \\\\ \n\\hline \nAll sources & 1FHL & 147 & 14 & -0.05 & $ 0.59 $ \\\\\n & 3FGL & 147 & 14 & 0.71 & $ <10^{-6} $ \\\\\nBL Lac & 1FHL & 100 & 9 & 0.12 & $ 0.55 $ \\\\ \n & 3FGL & 100 & 9 & 0.70 & $ <10^{-6} $ \\\\ \nFSRQ & 1FHL & 44 & 4 & -0.01 & $ 0.99 $ \\\\ \n & 3FGL & 44 & 4 & 0.49 & $ <10^{-6} $ \\\\\nHSP & 1FHL & 60 & 5 & 0.57 & $ 1.0\\times 10^{-6} $ \\\\ \n & 3FGL & 60 & 5 & 0.77 & $ <10^{-6} $ \\\\\nISP & 1FHL & 23 & 2 & 0.19 & $ 0.40 $ \\\\ \n & 3FGL & 23 & 2 & 0.46 & $ 2.5\\times 10^{-2} $ \\\\ \nLSP & 1FHL & 52 & 5 & 0.21 & $ 0.12 $ \\\\ \n & 3FGL & 52 & 5 & 0.43 & $ 3.0\\times 10^{-6} $ \\\\ \n\\hline\n\\end{tabular}\n\\end{table*}\n\n\n\\section{Catalogs and sample selection}\n\\label{sec.catalogs}\n\\subsection{1FHL}\nThe 1FHL catalog is based on LAT data accumulated during the first 3 years of \\fermi\\ operation (from 2008 August to 2011 August), providing us with a uniform and deep all-sky survey in the 10 - 500\\,GeV energy range.\nThe 1FHL contains 514 $\\gamma$-ray sources detected with Test Statistic (TS) larger than 25 (significance above $\\sim 4\\sigma$), and provides for each source the position (with a mean $95\\%$ positional confidence radius of $\\sim5.3$ arcmin), the spectral and variability properties, as well as the associations with sources at other wavelengths.\n65 ($\\sim$13\\%) out of the 1FHL sources do not have any plausible low-frequency association and are classified as unassociated $\\gamma$-ray sources (UGS).\nAmong the 449 associated 1FHL sources, 393 are AGNs while the remaining ones (12\\%) are sources of Galactic nature (i.e. pulsars, supernova remnants, and pulsar wind nebulae).\nWe note that 88\\% of the 1FHL associated sources are statistically associated with blazars (the 75\\% of the entire 1FHL catalog), which clearly indicates that the LAT sky at energies $>10$\\,GeV is dominated by blazars.\n\n\n\\subsection{2FHL}\nThe 2FHL catalog is based on data accumulated during the first 6.5 years of the \\fermi\\ mission, from 2008 August to 2015 April, at the highest LAT energy range between 50\\,GeV and 2\\,TeV.\nThe 2FHL contains 360 $\\gamma$-ray sources detected above 4$\\sigma$ significance and represents the largest, deepest and unbiased sample of $\\gamma$-ray sources in the VHE domain: about 80\\% (284\/360) of the 2FHL sources have photons detected at $E > 100$\\,GeV.\nFor each source the 2FHL catalog provides: the position (with a mean positional confidence radius of $\\sim4.0$ arcmin at 95\\% confidence level), the spectral and variability properties, and the possible multi-frequency association.\nThe vast majority of the 2FHL sources are AGNs (76\\%), and 98\\% of them are statistically associated with blazars. \nOf the remaining 2FHL sources, 11\\% are of Galactic nature, while 13\\% (48 sources) are UGS or associated with a TeV source of unknown type. \n\n\n\\subsection{3FGL\nThe third \\fermi -LAT source catalog \\citep[3FGL, ][]{Acero2015} is based on LAT data accumulated during the first 4 years of the mission (from 2008 August to 2012 July). 
The 3FGL contains 3033 $\\gamma$-ray sources detected above 4$\\sigma$ significance at energies between 100\\,MeV and 300\\,GeV, and represents the deepest catalog in this energy range.\nAbout $35\\%$ of the 3FGL sources have no clear counterpart at low frequencies.\nFor each source the 3FGL catalog provides the source location region (with a mean $95\\%$ positional confidence radius of $\\sim6.2$ arcmin), the flux measurements in different energy bins, the spectral properties, and the multi-wavelength associations.\n\n\n\\subsection{Radio fundamental catalog}\nThe Radio Fundamental Catalog\\footnote{\\url{http:\/\/astrogeo.org\/rfc\/}} (RFC) collects and provides archival VLBI flux densities and precise positions (accuracy at milliarcsecond level), at several frequencies (between 2 and 22\\,GHz), for thousands of compact radio sources. \nThe RFC makes use of all the available VLBI observations obtained during the past 35 years under absolute astrometry and geodesy programs.\nThe latest available RFC release (rfc\\_2016c), used in this work, was updated on 2016\/07\/18 and contains $11448$ objects.\n\n\\subsection{Sample selection and construction}\nBy considering the high Galactic latitude (|$b$|>10$^{\\circ}$) AGN distribution in the sky, we notice that $68$\\% (i.e.\\,165\/243) of 1FHL BL Lacs and $64$\\% (i.e.\\,43\/67) of 1FHL FSRQs are found in the northern hemisphere, while $75$\\% (i.e. 30\/40) of AGNs of unknown type are found in the southern hemisphere. A similar fraction is found for the 2FHL AGNs.\nThis asymmetry in the source count distribution does not have a physical origin, but is rather due to the sparser optical coverage in the southern hemisphere, which prevents an accurate source association. \nTo avoid any possible bias introduced by the source distribution asymmetry due to the lack of a spectroscopic classification, we focus our attention on the 1FHL and 2FHL AGNs with declination $\\delta > 0^{\\circ}$. \n\nDue to the large positional uncertainty of the $\\gamma$-ray sources we make use of the coordinates of the proposed low-energy counterparts as listed in the 1FHL\/2FHL catalogs.\nTo obtain high-resolution radio observations we cross-match our sample with the RFC.\nSince the available RFC 5\\,GHz VLBI flux densities are consistent with those at 8\\,GHz (their average spectral index is $0.0 \\pm 0.1$), we use either 5\\,GHz or 8\\,GHz RFC flux densities for our analysis.\nFor those sources not included in the RFC we use the 5\\,GHz Very Long Baseline Array flux densities reported in \\citet{Lico2016}.\n\nThe resulting samples that we use for the correlation analysis contain 237 1FHL sources (hereafter 1FHL-n) and 131 2FHL sources (hereafter 2FHL-n). For some sources of the selected samples either the optical (BL Lacs and FSRQs) or spectral (HSPs, ISPs and LSPs) classification is not available. The details and the composition of both samples are reported in Table~\\ref{tab_sample_composition}. \n\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[bb=6 0 495 348, width= 0.45\\textwidth, clip]{2FHL_scatter_plot_combo.ps} \n\\includegraphics[bb=0 0 485 348, width= 0.45\\textwidth, clip]{2FHL_3FGL_scatter_plot_combo.ps} \\\\\n\\end{center}\n\\caption{\\small VLBI flux density vs. 2FHL (left panel) and 3FGL (right panel) energy flux scatter plots for the full 2FHL-n sample. HSP, ISP and LSP sub-classes are indicated in blue, green and red colors, respectively. Sources with no spectral classification are indicated in black color. 
The filled and empty symbols represent sources with or without redshift, respectively.}\n\\label{2fhl_scatter_plot_combo}\n\\end{figure*}\n\n\n\\begin{figure*\n\\begin{center} \n\\includegraphics[bb=5 155 490 355, clip]{2FHL_multi_scatter_plots.ps}\n\\end{center}\n\\caption{\\small VLBI flux density vs. 2FHL (upper panels) and 3FGL (lower panels) energy flux scatter plots for BL Lacs, FSRQs, HSPs, ISPs,and LSPs, belonging to the 2FHL-n sample. The black and red symbols represent sources with or without redshift, respectively.}\n\\label{2fhl_scatter_plots}\n\\end{figure*}\n\n\\begin{table*}[!h]\n\\centering\n\\caption{\\small Results of the correlation analysis between 2FHL (50\\,GeV - 2\\,TeV) energy fluxes and VLBI flux densities for various 2FHL-n sub-samples. For comparison the same analysis was performed by using 3FGL (0.1 - 300\\,GeV) energy fluxes.}\n\\label{tab_2fhl_corr} \n\\small\n\\setlength{\\tabcolsep}{9pt}\n\\renewcommand{\\arraystretch}{1.3} \n\\begin{tabular}{lccccc} \n\\hline\n\\hline \nSource type & Catalog & Number of Sources & Number of $z$-bins & r-Pearson & Significance \\\\ \n\\hline \nAll sources & 2FHL & 76 & 7 & 0.13 & $ 0.36 $ \\\\\n & 3FGL & 76 & 7 & 0.72 & $ <10^{-6} $ \\\\\nBL Lac & 2FHL & 63 & 6 & 0.23 & $ 0.34 $ \\\\\n & 3FGL & 63 & 6 & 0.73 & $ <10^{-6} $ \\\\\nHSP - with $z$& 2FHL & 48 & 4 & 0.57 & $ 7.0\\times 10^{-6} $ \\\\\n & 3FGL & 48 & 4 & 0.58 & $ <10^{-6} $ \\\\\nHSP - all${\\scriptsize ^1}$ & 2FHL & 84 & 8 & 0.61 & $ <10^{-6} $ \\\\\n & 3FGL & 84 & 8 & 0.53 & $ <10^{-6} $ \\\\\n\\hline\n\\end{tabular}\n\\begin{threeparttable\n\\begin{tablenotes}\n\\item [1] \\footnotesize Full 2FHL-n HSP sample. For those sources without known $z$, we assign a redshift randomly selected among the 2FHL-n source sample with known redshifts (see Sect.~\\ref{2fhl-n_analysis}). \n\\end{tablenotes}\n\\end{threeparttable}\n\\end{table*}\n\n\n\n\\section{Results}\n\\label{sec.results}\nIn Sect.~\\ref{1fhl-n_analysis} we investigate the correlation between radio VLBI and $\\gamma$-ray emission at $E > 10$\\,GeV for the 147 sources with known $z$ of the 1FHL-n sample. By using the same strategy, in Sect.~\\ref{2fhl-n_analysis} we perform the correlation analysis by considering the 76 2FHL-n sources with known $z$ detected in the energy range between 50\\,GeV and 2\\,TeV.\n\nWe explore the possible correlation for the full sample and for the different subsets of blazars, divided according to the optical (BL Lacs and FSRQs) and spectral (HSPs, ISPs and LSPs) classification, by using\n1FHL\/2FHL energy fluxes and the 3FGL energy fluxes as a reference. \n\nTo assess the statistical significance of the correlation results between radio VLBI and $\\gamma$-ray emission we use the method of surrogate data proposed by \\citet{Pavlidou2012}, that was used in the analysis presented in \\citet{Ackermann2011}.\nThis method, based on permutations of the luminosities, takes into account the various observational biases (e.g.\\,, Malmquist bias and common distance effects) which can apparently enhance or dilute any intrinsic luminosity correlation.\nSince the method of surrogate data requires the calculation of luminosities, for the correlation analysis we only consider sources with known redshift. As a consequence the number of sources in the samples could be significantly reduced and the redshift distribution may be altered. 
This condition mainly affects the class of BL Lacs and HSP objects: only about half of them, in the 1FHL-n and 2FHL-n samples, have redshifts.\n\\citet{Pavlidou2012} and \\citet{Ackermann2011} showed that the correlation significance generally increases when more sources are added, for reasonable assumptions on the redshift distribution of sources without a known $z$.\n\n\\subsection{1FHL-n AGN sample}\n\\label{1fhl-n_analysis}\nThe distribution of the $\\gamma$-ray energy fluxes above 10\\,GeV of the 1FHL-n sample ($S_{\\gamma, \\mathrm{1FHL}}$) is shown in the top-left panel of Fig.~\\ref{histo_flux} (solid black line). $S_{\\gamma, \\mathrm{1FHL}}$ has a median value of $\\sim 6.4 \\times 10^{-12}$ \\eflux and covers about three orders of magnitude, ranging from $1.1 \\times 10^{-12}$ to $3.2 \\times 10^{-10}$ \\eflux.\nBL Lacs and FSRQs are represented by blue and red dashed line, respectively, with BL Lacs reaching the highest $\\gamma$-ray energy flux values.\nThe VLBI flux density distribution is shown in the top-right panel of Fig.~\\ref{histo_flux} (solid black line), and has a median value of $57$ mJy. BL Lacs tend to cluster at lower flux density values (median value $42$ mJy) than FSRQs (median value $372$ mJy).\n\nIn Fig.~\\ref{1fhl_scatter_plot_combo} we show the scatter plots of the VLBI flux density vs. 1FHL (left panel) and 3FGL (right panel) energy flux. The different colors represent the three spectral types: LSP (red), ISP (green) and HSP (blue) objects. The filled and empty symbols indicate those sources with known and unknown redshift, respectively.\nIn Fig.~\\ref{1fhl_scatter_plots} we show the scatter plots of the VLBI flux density vs. 1FHL (upper panel) and 3FGL (lower panel) energy flux for each blazar sub-class (BL Lacs, FSRQs, HSPs, ISPs, and LSPs).\nThe results of the correlation analysis are summarized in Table~\\ref{tab_1fhl_corr}. We report the number of sources in each subset, the number of redshift bins used for the permutations (with each bin containing at least 10 objects), the resulting Pearson product-moment correlation coefficient ($r$), and the corresponding statistical significance ($p$), which represents the probability to obtain a correlation, from intrinsically uncorrelated data, at least as strong as the one observed in the real sample.\n\nWhen we use the 3FGL energy fluxes (0.1 - 300\\,GeV) we find a strong positive correlation with a high statistical significance (chance probability $p<10^{-6}$) for the full sample ($r = 0.71$), and for all of the considered HSP\/ISP\/LSP and FSRQ\/BL Lac blazar sub-classes (see Table~\\ref{tab_1fhl_corr}).\n\nOn the other hand, by considering the full 1FHL-n sample, VLBI flux densities and 1FHL energy fluxes are uncorrelated ($r = -0.05$). Even when we separately consider BL Lacs and FSRQs we do not find any correlation between radio VLBI and $\\gamma$-ray at $E > 10$\\,GeV emission. 
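\n\nThe statistical significances listed in Tables~\\ref{tab_1fhl_corr} and \\ref{tab_2fhl_corr} are obtained with the permutation scheme outlined above. For illustration purposes only, a minimal sketch of such a surrogate-data test (in Python) could look as follows; the array names are purely illustrative, K-corrections are omitted, and the actual analysis follows the prescriptions of \\citet{Pavlidou2012}:\n\\begin{verbatim}\nimport numpy as np\nfrom astropy.cosmology import FlatLambdaCDM\n\ndef surrogate_significance(s_radio, s_gamma, z, n_bins=5, n_perm=10000, seed=0):\n    # Schematic surrogate-data test (Pavlidou et al. 2012): the observed\n    # flux-flux correlation is compared with the correlations obtained after\n    # scrambling the radio luminosities among sources within redshift bins.\n    rng = np.random.default_rng(seed)\n    cosmo = FlatLambdaCDM(H0=71, Om0=0.27)        # cosmology adopted in this paper\n    d_l = cosmo.luminosity_distance(z).to_value('cm')\n    l_radio = 4.0 * np.pi * d_l**2 * s_radio      # K-corrections omitted in this sketch\n    log_s_gamma = np.log10(s_gamma)\n    r_obs = np.corrcoef(np.log10(s_radio), log_s_gamma)[0, 1]\n    # equal-occupancy redshift bins (each source keeps its own redshift)\n    ranks = np.argsort(np.argsort(z))\n    bin_idx = (ranks * n_bins) // z.size\n    n_higher = 0\n    for _ in range(n_perm):\n        perm = np.arange(z.size)\n        for b in range(n_bins):                   # shuffle only within each z-bin\n            members = np.where(bin_idx == b)[0]\n            perm[members] = rng.permutation(members)\n        # surrogate radio fluxes: permuted luminosities at the original distances\n        s_radio_surr = l_radio[perm] / (4.0 * np.pi * d_l**2)\n        r_surr = np.corrcoef(np.log10(s_radio_surr), log_s_gamma)[0, 1]\n        if r_surr >= r_obs:\n            n_higher += 1\n    return r_obs, n_higher / n_perm               # Pearson r and significance p\n\\end{verbatim}\n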
\nThe correlation coefficient shows a different behavior when each spectral blazar sub-class is considered.\nHSPs are the only blazar sub-class showing a strong ($r = 0.57$) and significant ($p = 1\\times10^{-6}$) correlation, while for LSP and ISP objects in the 1FHL energy range we find a weak correlation.\n\nHSP blazars are therefore the only blazar sub-class showing a strong and significant correlation between radio VLBI and 1FHL\/3FGL $\\gamma$-ray emission.\n\n\n\\subsection{2FHL-n AGN sample}\n\\label{2fhl-n_analysis}\nIn the bottom-left panel of Fig.~\\ref{histo_flux} we show the distribution of the $\\gamma$-ray energy fluxes above 50\\,GeV ($S_{\\gamma, \\mathrm{2FHL}}$) of the 2FHL-n sample (solid black line).\n$S_{\\gamma, \\mathrm{2FHL}}$ ranges from $8.3 \\times 10^{-13}$ to $3.3 \\times 10^{-10}$ \\eflux, with a median value of $\\sim 3.2 \\times 10^{-12}$ \\eflux.\nBL Lacs and FSRQs are represented by blue and red dashed line, respectively.\nWe show the VLBI flux density distribution in the bottom-right panel of Fig.~\\ref{histo_flux} (solid black line), which has a median value of $41$ mJy. BL Lacs and FSRQs have median values of $38$ and $106$ mJy, respectively.\n\nIn Fig.~\\ref{2fhl_scatter_plot_combo} we show the scatter plots of the VLBI flux density vs. 2FHL (left panel) and 3FGL (right panel) energy flux. The different colors represent the three spectral types: LSP (red), ISP (green) and HSP (blue) objects. The filled and empty symbols indicate those sources with known and unknown redshift, respectively.\nThe scatter plots of the VLBI flux density vs. 2FHL (upper panel) and 3FGL (lower panel) energy flux for each blazar sub-class (BL Lacs, FSRQs, HSPs, ISPs, and LSPs) are shown in Fig.~\\ref{2fhl_scatter_plots}.\n\nWe note that in the 2FHL-n sample for some blazar sub-classes the number of sources with known $z$ is less that 20 (e.g.\\,, there are only five objects classified as FSRQs) and therefore is not large enough for obtaining a statistically significant result. For this reason, we perform the correlation analysis only for the full 2FHL-n sample, for the BL Lac class, and for the HSP sub-sample.\nIn Table~\\ref{tab_2fhl_corr} we summarize the correlation analysis results by reporting the number of sources in each subset, the number of redshift bins used for the permutations (with each bin containing at least 10 objects), the correlation coefficient $r$, and the corresponding statistical significance.\n\nWhen the 3FGL energy fluxes are used we find a strong correlation for all the considered blazar sub-classes. On the other hand, VLBI flux densities and 2FHL energy fluxes are uncorrelated both for the full sample and for BL Lac objects (see Table~\\ref{tab_2fhl_corr}). 
On the contrary, for blazars of HSP type a strong ($r = 0.57$) and significant ($p = 7\\times10^{-6}$) correlation is found.\n\nHSP objects are again the only blazar sub-class for which we find a strong and significant correlation both in the 2FHL and 3FGL $\\gamma$-ray energy ranges.\n\nAs mentioned earlier, the method of surrogate data for the statistical significance can only be applied to sources with known $z$.\nThis requirement can play an important role in the case of HSP objects of our sample, considering that for about half of them the redshift is unknown.\nFor this reason, we perform the correlation analysis for the full 2FHL-n HSP sample (84 objects), by assigning a redshift value for the sources without known redshift, randomly selected from the sources of the 2FHL-n sample with known redshifts. In this way we assume the same redshift distribution.\nWe find a correlation coefficient $r = 0.61$, with $p <10^{-6}$ (see Table~\\ref{tab_2fhl_corr}).\nWe note that similar results are obtained if we assume that the unknown redshifts are systematically higher ($\\Delta z=0.5$) than the known ones.\n\n\n\\section{Discussion and conclusions}\n\\label{sec.discussion}\nAs revealed by the observations performed by \\fermi -LAT and the ground-based IACTs, blazars dominate the census of the $\\gamma$-ray sky. Exploring the possible correlation between radio and $\\gamma$-ray emission is a central issue to understand the blazar emission processes and physics.\n\nA strong and significant correlation between radio and $\\gamma$-ray emission was found in several works \\citep[e.g.\\,,][]{Kovalev2009, Ghirlanda2010, Nieppola2011}. However, the correlation strength seems to decrease when higher $\\gamma$-ray energies are considered \\citep[see ][]{Ackermann2011, Mufakharov2015}.\nIn the present work we explore the possible existence of a correlation between radio and GeV-TeV $\\gamma$-ray emission, by using the most complete and unbiased AGN samples available at present, extracted from the 1FHL and 2FHL catalogs.\nAn important novelty of our analysis is that at radio frequencies we use VLBI flux densities, which are more representative of the innermost source regions, where the $\\gamma$-ray emission is produced, than single dish or interferometric observations with arcsecond-scale resolution.\n\nThe present work points out that (1) HSP blazars are the only sub-class showing a strong ($r = 0.6$) and significant ($p <10^{-6}$) correlation between radio VLBI emission and $\\gamma$ rays with $E > 10$\\,GeV, (2) the radio-$\\gamma$-ray correlation is found for all classes when the 0.1 - 300\\,GeV 3FGL energy range is considered.\n\nThe correlation that we find when we consider the 0.1 - 300\\,GeV LAT $\\gamma$-ray band is stronger than what was found by \\citet{Ackermann2011}. This result may be a direct consequence of the fact that at radio frequencies we use VLBI flux densities, which probe the radio emission from the regions close to the $\\gamma$-ray emission zone, while previous works considered low-resolution radio data with possible contamination from extended structures.\nSuch a strong correlation between radio and $\\gamma$-ray at $E > 100$\\,MeV emission was also revealed by \\citet{Ghirlanda2011} in a sample of 230 \\fermi\\ AGNs. 
In that work the authors made use of 20\\,GHz Australia telescope compact array observations, such a high frequency is representative of the emission from the jet base, with no significant contribution from the large-scale structures.\n\nThe strong and significant radio and $\\gamma$-ray connection vanishes when $\\gamma$ rays approaching the VHE domain are considered for all of the blazar sub-classes, with the exception of HSP objects. \nThis effect, suggested in previous dedicated analysis, is well constrained and quantified in the present work by using the largest AGN samples currently available at $E > 10$\\,GeV and by taking into account the various observational bias.\n\n\\begin{figure\n\\begin{center}\n\\includegraphics[bb= 84 74 690 550, width= 0.45\\textwidth, clip]{LSP_ISP_HSP_LAT_range.eps} \n\\end{center}\n\\caption{\\small Schematic representation of LSP (upper curve), ISP (middle curve) and HSP (lower curve) blazar spectral classification, according to the position of $\\nu^{\\mathrm{Syn}}_{\\mathrm{peak}}$. The green filled area represents the $0.1 - 300$\\,GeV 3FGL energy range, while the black dotted and dash-dotted vertical lines represent the 1FHL and 2FHL energy thresholds, respectively.}\n\\label{schema_SEDs}\n\\end{figure}\n\nWe explain the results of the present analysis within the framework of the blazar SED and its interpretation.\nIn Fig.~\\ref{schema_SEDs} we show a schematic representation of the LSP (upper curve), ISP (middle curve) and HSP (lower curve) blazar spectral classification, according to the position of $\\nu^{\\mathrm{Syn}}_{\\mathrm{peak}}$. The green filled area represents the $0.1 - 300$\\,GeV 3FGL energy range.\n\nBy inspecting the SED properties of LSP objects it emerges that in general they have soft $\\gamma$-ray spectra (with a 3FGL median photon index $\\Gamma_{\\rm LSP, 3FGL}=2.2$) and their high energy component peak occurs at energies lower than those sampled by the LAT (upper curve in Fig.~\\ref{schema_SEDs}).\nMoreover, the 1FHL and 2FHL energy ranges (black dotted and dash-dotted vertical lines in Fig.~\\ref{schema_SEDs}, respectively) are limited to the highest energies of LAT, where the emission has severely dropped. In the 1FHL and 2FHL energy ranges the median value for the LSP photon index is $\\Gamma_{\\rm LSP, 1FHL}=2.9$ and $\\Gamma_{\\rm LSP, 2FHL}=4.3$, respectively, pinpointing a severe steepening of their spectra, and therefore a fast decrease of their flux in the 1FHL\/2FHL energy range.\nIn general the brightest blazars are of LSP type, and their spectral break, above few (1-10)\\,GeV, is due to severe cooling losses of the emitting particles \\citep[e.g.\\,,][]{Tavecchio2009, Finke2010, Harris2012, Stern2014}. In addition, FSRQs are rich of ambient photons that may cause $\\gamma \\gamma$ absorption, producing an additional energy cutoff.\nISP blazars share most of these features, except that their high energy emission peak may fall in the softest part of the LAT energy band.\n\nHSP objects are less powerful than LSPs\/ISPs and the energy losses are less severe. This is directly reflected in the position of the high energy component, which peaks above $\\sim100$\\,GeV, at much higher energies than LSP\/ISP objects. \nIn HSP blazars, the part of the high energy spectrum affected by cooling effects is well beyond the energy range sampled by the LAT \\citep{Ghisellini1998}, showing a rising spectrum both in the 3FGL and 1FHL\/2FHL energy ranges (lower curve in Fig.~\\ref{schema_SEDs}). 
\nThis is reflected in their harder spectra than those of LSPs ($\\Gamma_{\\rm HSP, 3FGL}=1.9$), mostly in the highest $\\gamma$-ray energy ranges ($\\Gamma_{\\rm HSP, 1FHL}=2.1$ and $\\Gamma_{\\rm HSP, 2FHL}=2.8$). \nThe connection of the observed bolometric luminosity and the shape of the blazar SED is described by the so-called blazar sequence, in which both the low- and high-energy emission SED peaks shift to lower frequencies when the total power increases \\citep{Fossati1998, Ghisellini1998, Ghisellini2017}. \nAs a consequence, for the brightest objects, in general of LSP type, in the 1FHL\/2FHL energy range we are sampling the part of the spectrum where the high energy emission is strongly decreasing. On the contrary, for HSP objects the high energy emission SED peak is usually found within 1FHL\/2FHL energy range. \nThis sampling effect, which mainly affects LSP objects, can be connected with the fact that we find a correlation between radio and gamma-ray emission only for HSPs.\n\n\nRegarding the optical blazar sub-classes, we note that when the $0.1 - 300$\\,GeV 3FGL energy range is considered a strong correlation is found for both optical blazar sub-classes, with BL Lacs showing a higher correlation coefficient ($r = 0.70$) with respect to FSRQs ($r = 0.49$). The different correlation strength may be ascribed to the intrinsically different properties of the two optical blazar sub-classes. \nThe rich ambient photon field and the usually higher distance of FSRQs with respect to BL Lacs make their $\\gamma$-ray spectrum softer, likely weakening the correlation. \nHowever, when $\\gamma$ rays at $E > 10$\\,GeV are considered the correlation vanishes for both FSRQs and BL Lacs. This is because the FSRQ and BL Lac classification is only based on the properties of their optical spectra without taking into account the different energy and spectral properties.\n\nThe sources of both 1FHL-n and 2FHL-n samples span a wide range of $z$ (from 0.01 up to 2.2), and the redshift distribution is different among the different blazar sub-classes.\nFor this reason, to further validate our results we run the correlation analysis by using K-corrected radio flux densities and $\\gamma$-ray energy fluxes. When using K-corrected quantities it is important to have a reliable estimation of both $z$ and the spectral indexes in the two observing bands for not introducing additional uncertainties. By assuming an average spectral index $\\alpha = 0$ in the radio band, and by using the best-fit power-law photon index provided by the 3FGL catalog in the $\\gamma$-ray energy band, we obtain results which are in good agreement and consistent with those presented in Tables~\\ref{tab_1fhl_corr} and \\ref{tab_2fhl_corr}.\n\nWithin this simple picture, an important issue to be taken into account is the variability argument.\nBlazars are strongly variable objects at all frequencies, showing intensity variations on timescales ranging from several years to a few days. In particular, at TeV energies they show variability on timescales as short as a few minutes \\citep[e.g.\\,, ][]{Aharonian2007}. 
\nBy considering that our data are not taken simultaneously and that we are using average values for the $\\gamma$-ray fluxes and radio flux densities from single observations, the variability can affect and spoil a possible correlation for these sources.\nIn particular, since the variability is more pronounced above the SED peak, this effect is more relevant for LSP and ISP objects, whose high energy emission SED peak occurs at lower energies than those sampled in the 1FHL\/2FHL energy range. \nConversely, given that in HSP objects the high energy emission SED peak is found in general at energies above $\\sim100$\\,GeV, in the 1FHL\/2FHL energy range they are not as variable as LSPs\/ISPs, and their correlation should be less affected by the use of non-simultaneous data.\n\n\\citet{Ackermann2011} revealed for the first time that the correlation between radio and $\\gamma$-ray emission is stronger when concurrent observations are considered. \nHowever, also in the case of concurrent observations there are some caveats to be taken into account. The radio and $\\gamma$-ray emission vary on different time scales, with the $\\gamma$-ray variability being in general more rapid. Blazars often show strong outbursts or long-term periods of enhanced activity both at radio and $\\gamma$-ray frequencies observed with time lags, due to the optical depth effects at radio frequencies \\citep[see e.g.\\,,][]{Ghirlanda2011, Fuhrmann2014}. \n\nAs it emerges from the results of the present analysis, to properly characterize the radio vs. VHE emission connection, both extensive long-term VLBI monitoring and systematic VHE sky surveys are required. The new generation aperture synthesis radio telescope Square Kilometer Array (SKA), in synergy with the new generation ground-based VHE $\\gamma$-ray instrument CTA, will provide us with the best chance to investigate the existence of a radio-VHE emission connection.\n\n\n\\begin{acknowledgement}\n\\begin{small}\nWe acknowledge financial contribution from grant PRIN-INAF-2014. \nFor this paper we made use of NASA's Astrophysics Data System and the TOPCAT software \\citep{Taylor2005}.\nThis research has made use of the NASA\/IPAC Extragalactic Database NED, which is operated by the JPL, California Institute of Technology, under contract with the National Aeronautics and Space Administration. \nThe \\textit{Fermi}-LAT Collaboration acknowledges support for LAT development, operation, and data analysis from NASA and DOE (United States), CEA\/Irfu and IN2P3\/CNRS (France), ASI and INFN (Italy), MEXT,\nKEK, and JAXA (Japan), and the K.A.~Wallenberg Foundation, the Swedish Research Council, and the National Space Board (Sweden). Science analysis support in the operations phase from INAF (Italy) and CNES (France) is also gratefully acknowledged.\n\\end{small}\n\\end{acknowledgement}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:intro}\nPersonalizing the user experience is a continuously growing challenge for various digital applications. \nThis is of particular importance when recommending releases on the Netflix platform, when digesting the latest Yahoo news, or for helping users to find their next musical obsession. \n\nAmong the different approaches towards personalization, matrix factorization ranks among the most popular ones \\cite{koren2009matrix, zhou2008large}. \nIn this line of work, data is represented in the form of a user-item matrix, encoding user-item interactions in the form of binary or real values. 
\nMatrix factorization aims at decomposing a matrix into latent representations designed to accurately reconstruct observed interaction values. \nMost interestingly, these latent features are also used to predict missing (or unknown) ratings (i.e. if item $j$ is exposed to user $i$, what would be his rating). \nHowever, by trying to predict the unknown ratings based on a model trained on the observed ratings, the recommender systems implicitly assume that the distribution of the observed ratings is representative of the distribution of the unknown ones. \nThis is called the \\emph{missing at random} assumption \\cite{little2002statistical}, and it is probably a wrong asumption in most real-world applications. \nIn the case of a movie recommender system, for example, users rate movies that they have seen, and their choices are biased by their interests.\n\nIn this work, building on the \\emph{not missing at random assumption} ~\\cite{missing_at_random, steck2010training} we make the hypothesis that it is more likely for an unknown item to be weakly rated, this due to the huge amounts of existing items coupled to the limited number of items a user may be interested in. \nThis translates into a strong prior suggesting that unknown ratings should be reconstructed from latent features as small values (i.e. close to 0).\nWhile this assumption may be wrong for specific cases, such constraints act as a good regularizer that helps in significantly improving the recommendations. \n\nOur work is not the first to propose new interpretation of the missing data in a matrix factorization framework \\cite{implicit_feedback_confidence, ALS_unknown_weighting, weighting_sampling,steck2010training}. \nHowever, to the best of our knowledge, we are the first to propose an \\emph{online learning} mechanism that sets an explicit prior on unknown values and this, without any significant additional cost.\nWe introduce a method to update our model each time a new rating is observed with a time complexity independent of the size of the data (i.e. the total number of users, items, and ratings). \nThis fast update mechanism allows keeping the model up to date when a flow of new users, items and ratings enters the system. \n\nThe contributions of this work are as follows:\n\\begin{squishlist}\n\n\\item We extend the squared loss, the absolute loss\\ and the generalized Kullback-Liebler divergence\\ to take into account an explicit prior on unknown values. \n\n\\item For each loss function, we derive an efficient \\emph{online learning algorithm} to update the parameters of the model with a complexity independent of the data size.\n\n\\item We validate the hypothesis that applying an explicit prior on missing ratings improves the recommendations in a static and in a dynamic setting on three public datasets.\n\n\n\\item Our methods are easy to implement and we provide an open-source implementation of the squared loss~and absolute loss. 
\n\\end{squishlist}\n\n\nThe rest of this paper is organized as follows.\nSection~\\ref{sec:recommendation-problem} summarizes the recommendation problem and Section~\\ref{sec:missing-data} formulates how to apply priors on unknown values in the context of recommendation.\nSection~\\ref{sec:loss-funtcions} extends three loss functions and shows how they can be optimized in a static and dynamic fashion.\nSection~\\ref{sec:experiments} presents our experimental results and Section~\\ref{sec:related-work} discusses works related to our study.\nSection~\\ref{sec:conclusion} concludes this paper.\n\\section{The recommendation problem}\\label{sec:recommendation-problem}\n\nBefore addressing the challenge of interpreting missing data, let us state the standard recommendation problem. \n\nWe have at our disposal $m$ items rated by $n$ different users, where the rating given by the $i^{\\text{th}}$ user to the $j^{\\text{th}}$ item is denoted by $r_{ij}$. \nIn many real applications, these ratings take an integer value between 1 and 5. \nIn this work, we assume that ratings are positive and that an item rated by user $i$ with a high numerical value is preferred by this user over items she ranked with lower numerical values. \nWe denote by $\\mathcal{R}$ the set of all known ratings, and by $\\mathcal{R}_{i\\bullet}$ and $\\mathcal{R}_{\\bullet j}$ the set of known ratings of user $i$ and item $j$, respectively.\nIf $r_{ij} \\notin \\mathcal{R}$ we say that the rating is \\emph{unknown}.\n\nFor a while, the objective of recommender systems has been to predict the \\emph{value} of unknown ratings \\cite{koren2009matrix}. \nIt is now widely accepted that a more practical goal is to correctly \\emph{rank} the unknown ratings for each user, while the actual value of the rating is of little interest \\cite{balakrishnan2012collaborative, cremonesi2010performance, lee2014local, ALS_unknown_weighting}.\nThis has led to a change in the way methods are evaluated (in terms of ranking metrics such as NDCG, AUC or MAP, instead of rating prediction metrics as measured by RMSE).\nWe embrace that shift towards \\emph{ranking}, and the purpose of adding a prior on the unknown ratings is not to improve matrix factorization techniques in terms of RMSE, but in terms of ranking metrics.\n\nMatrix factorization methods produce for each user and each item a vector of $k$ ($<< n$ and $m$) real values that we call \\emph{latent features}.\nWe denote by ${w}_{i}$ the row vector containing the $k$ features of the $i^{\\text{th}}$ user, and $h_{j}$ the row vector, composed of $k$ features, associated to the $j^{\\text{th}}$ item.\nAlso, we denote by $\\mathbf{W}$ the $n \\times k$ matrix whose $i^{\\text{th}}$ row is $\\mathbf{w}_{i}$, and $\\mathbf{H}$ as the $k \\times m$ matrix whose $j^{\\text{th}}$ column is $\\mathbf{h}_{j}^{T}$.\nMatrix factorization is presented as an optimization problem, whose general form is:\n\n\\begin{align}\n\\argmin_{\\mathbf{W}, \\mathbf{H}} \\sum_{i,j|r_{ij} \\in \\mathcal{R}} E\\left( r_{ij}, \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\right) + R(\\mathbf{W}, \\mathbf{H})\n\\label{eq:classic_problem}\n\\end{align}\nwhere $R$ is a regularization term (often $L_{1}$ or $L_{2}$ norms), and $E$ measures the error that the latent model makes on the observed ratings.\nMost often, $E$ is the squared error.\n\nUsing a matrix factorization approach for predicting unknown ratings relies on the hypothesis that a model accurately predicting observed rating generalizes well to unknown ratings.\nIn the following 
section, we argue that the former hypothesis is easily challenged.\n\\section{Interpreting missing data}\\label{sec:missing-data}\n\nLaunchCast is Yahoo's former music service, where users could, among other things, rate songs.\nIn a survey of 2006, users were asked to rate randomly selected songs \\cite{missing_at_random}.\nThe distribution of ratings of random songs was then compared to the distribution of voluntary ratings.\nThe experiment concluded that the distribution of the ratings for random songs was strongly dominated by low ratings, while the voluntary ratings had a distribution close to uniform \\cite{missing_at_random}.\n\nIntuitively, a simple process could explain the results: users chose to rate songs they listen to, and listen to music they expect to like, while avoiding genres they dislike.\nTherefore, most of the songs that would get a bad rating are not voluntary rated by the users.\nSince people rarely listen to random songs, or rarely watch random movies, we should expect to observe in many areas a difference between the distribution of ratings for random items and the corresponding distribution for the items selected by the users. \nThis observation has a direct impact on the presumed capacity of matrix factorization to generalize a model based on observed ratings to unknown ratings.\n\n\nBuilding on the \\emph{not missing at random assumption} ~\\cite{missing_at_random,steck2010training}, we propose to incorporate in the optimization problem stated in Equation \\ref{eq:classic_problem} a prior about the unknown ratings, in order to limit the bias caused by learning on observed ratings:\n\n\\begin{align}\n\\begin{split}\n\\argmin_{\\mathbf{W}, \\mathbf{H}} & \\sum_{i,j|r_{ij} \\in \\mathcal{R}} E\\left( r_{ij}, \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\right) \\\\ & + \\alpha \\sum_{i,j|r_{ij} \\notin \\mathcal{R}} E\\left( \\hat{r}_{0}, \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\right) + R(\\mathbf{W}, \\mathbf{H})\n\\end{split}\n\\label{eq:prior_problem}\n\\end{align}\n\nThe objective function (Equation \\ref{eq:prior_problem}) has now two parts (besides the regularization): the first part fits the model to the observed ratings, and the second part drives the model toward a prior estimate $\\hat{r}_{0}$ on the unknown ratings. \nIn absence of further knowledge about a specific dataset, we suggest to use $\\hat{r}_{0} = 0$, the worst rating, as a prior estimate. \nThe coefficient $\\alpha$ allows to balance the influence of the unknown ratings, and the original formulation is obtained with $\\alpha = 0$. \nWe expect $\\alpha$ to be small to deal with the problem of class imbalance. \nIndeed, in real-life applications the number of known ratings $|\\mathcal{R}|$ is very small in comparison to the number of unknown ratings ($nm - |\\mathcal{R}|$), and if $\\alpha$ is close to 1, or larger, the second term of the objective function will completely dominate the other parts and drive all the users' and items' features to zero. \nIt is therefore important to find a right balance between the influence of the few known ratings and of the many unknown ones. \n\nIn order to have a more intuitive feeling of the influence of both parts of the objective function we introduce $\\rho = \\alpha (nm - |\\mathcal{R}|) \/ |\\mathcal{R}|$, which can be interpreted as an influence ratio between unknown and known ratings. 
\nIf $\\rho = 0$, the unknown ratings are ignored, if $\\rho = 1$, both the known ratings and the unknown ratings have the same global influence on the objective function, if $\\rho = 2$, the unknown ratings are twice as important as the known ratings, etc.\n\nA more involved model could assume an adaptive $\\rho$ per user or item, which could lead to additional, albeit small, gains.\nHowever, this implies more parameters to tune, more cumbersome equations to explain and an involved process to prove that the complexity of the method remains the same.\nDue to limited space, instead, we provide a general demonstration of the method and leave the adaptive model for future work.\n\\section{Loss functions}\\label{sec:loss-funtcions}\n\nAn obvious difficulty raised by the new optimization problem introduced earlier is the apparent increase in complexity. \nThe naive complexity of evaluating this objective function is $O(nmk)$, while it is $O(|\\mathcal{R}|k)$ for classical matrix factorization approaches (Equation \\ref{eq:classic_problem}).\nIn this section, we demonstrate how it is possible to use our new model without the naive additional cost, and present a way to perform fast updates to incorporate new ratings in the model.\n\nTo this end, we show the applicability of our method when $E$ is the squared loss~in Section~\\ref{sec:square-loss} and the absolute loss~in Section~\\ref{sec:absolute-loss}.\nFor the sake of demonstration, we also discuss its applicability on the generalized Kullback-Liebler divergence~in Section~\\ref{sec:kullback-leibler}.\nFinally, in Section~\\ref{sec:static-dynamic} we outline how the method can be enforced in a static setting, and a dynamic setting with continuous updates of new ratings, items and users.\n\n\\subsection{Squared Loss}\\label{sec:square-loss}\n\nBy considering $E$ as the squared loss, and $R$ as the $L_{1}$ regularization, the optimization problem becomes:\n\n\\begin{align}\n\\begin{split}\n\\argmin_{\\mathbf{W}, \\mathbf{H}} & \\sum_{i,j|r_{ij} \\in \\mathcal{R}} \\left( r_{ij} - \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\right) ^{2} \\\\ & + \\alpha \\sum_{i,j|r_{ij} \\notin \\mathcal{R}} \\left( \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\right) ^{2} \\\\ & + \\lambda \\left( \\sum_{i = 1}^{n} || \\mathbf{w}_{i} || _{1} + \\sum_{j = 1}^{m} || \\mathbf{h}_{j} || _{1} \\right)\n\\end{split}\n\\label{eq:opt_problem}\n\\end{align}\n\nFor the sake of simplicity, let us forget about the regularization term of the objective function for now (adding it to the following development is trivial), and let us call $L(\\mathbf{W}, \\mathbf{H},\\mathcal{R})$ the objective function without regularization. 
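To make the weighting concrete before deriving the efficient form, the following sketch (a minimal illustration under assumed names, not the released implementation) treats $\\mathbf{W}$ and $\\mathbf{H}$ as dense NumPy arrays, uses a 0-1 mask of known ratings, converts the influence ratio $\\rho$ into $\\alpha$, and evaluates $L$ naively at the $O(nmk)$ cost that the rest of this section avoids:

\\begin{verbatim}
import numpy as np

def alpha_from_rho(rho, n, m, num_known):
    # rho = alpha * (n*m - |R|) / |R|   =>   invert for alpha
    return rho * num_known / (n * m - num_known)

def naive_objective(W, H, R, mask, alpha, r0=0.0):
    # W: n x k, H: k x m, R: n x m ratings, mask: 1 where r_ij is known
    P = W @ H                                  # every w_i h_j^T, O(nmk)
    known = mask * (R - P) ** 2                # error on observed ratings
    unknown = (1.0 - mask) * (r0 - P) ** 2     # prior r0 on unknown ratings
    return known.sum() + alpha * unknown.sum()
\\end{verbatim}

With $\\hat{r}_{0} = 0$, the second term reduces to the one appearing in Equation \\ref{eq:opt_problem} below.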
\nWe want to be able to update the features of one user or of one item in a time independent of the size of the dataset ($n,m,|\\mathcal{R}|$).\nIn the remainder, we show that it is possible to compute $\\partial L \/ \\partial \\mathbf{w}_{i}$ and $\\partial L \/ \\partial \\mathbf{h}_{j}$ with a complexity linear in the number of ratings provided by user $i$ ($|\\mathcal{R}_{i\\bullet}|$) or given to item $j$ ($|\\mathcal{R}_{\\bullet j}|$), respectively.\nOn most datasets, and for most users and items, we have $|\\mathcal{R}_{i\\bullet}| \\ll m$ and $|\\mathcal{R}_{\\bullet j}| \\ll n$, and, therefore computing the gradient for one user or one item is fast.\n\nFirst, let us separate $L$ in $n$ blocks $l_{i}^{w}$ that contain only the terms of $L$ depending on $\\mathbf{w}_{i}$:\n\n\\begin{align}\nl_{i}^{w} = \\sum_{j|r_{ij} \\in \\mathcal{R}} \\left( r_{ij} - \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\right) ^{2} + \\alpha \\sum_{j|r_{ij} \\notin \\mathcal{R}} \\left( \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\right) ^{2}\n\\label{eq:l_w_i}\n\\end{align}\n\nNotice that we have:\n\\begin{align*}\nL = \\sum_{i} l_{i}^{w} \\quad \\text{and} \\quad\t\\frac{\\partial L}{\\partial \\mathbf{w}_{i}} = \\frac{\\partial l_{i}^{w}}{\\partial \\mathbf{w}_{i}}\n\\end{align*}\n\nIf we adopt a naive computation, the second term of Equation (\\ref{eq:l_w_i}) is more time expensive because most items are not rated by the user.\nHowever, the sum on unknown ratings (i.e. $\\sum_{j|r_{ij} \\notin \\mathcal{R}}$), can be formulated as the difference between the sum on all items (i.e. $\\sum_{j = 1}^{m}$) and the sum on rated items only (i.e. $\\sum_{j|r_{ij} \\in \\mathcal{R}}$) . By so doing, the sum on unknown ratings disappears from the computations:\n\n\\begin{align}\n\\sum_{j|r_{ij} \\notin \\mathcal{R}} \\left( \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\right) ^{2} &= \\sum_{j = 1}^{m} \\left( \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\right) ^{2} - \\sum_{j|r_{ij} \\in \\mathcal{R}} \\left( \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\right) ^{2} \\\\\n&= \\mathbf{w}_{i} \\mathbf{S^h} \\mathbf{w}_{i}^{T} - \\sum_{j|r_{ij} \\in \\mathcal{R}} \\left( \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\right) ^{2}\n\\label{eq:sl_trick}\n\\end{align}\n\nwhere we have posed $\\mathbf{S^h} = \\sum_{j} \\mathbf{h}_{j}^{T}\\mathbf{h}_{j}$, a $k \\times k$ matrix independent of $i$ (i.e. it is the same matrix for all $l_{i}^{w}$). 
Assuming that $\\mathbf{S^h}$ is known, we can now compute $l_{i}^{w}$ and $\\partial L\/\\partial \\mathbf{w}_{i}$ with a complexity of $O(|\\mathcal{R}_{i\\bullet}|k + k^2)$.\nFrom Equations \\ref{eq:l_w_i} and \\ref{eq:sl_trick}, we obtain:\n\n\\begin{align}\n\\begin{split}\nl_{i}^{w} = \\sum_{j|r_{ij} \\in \\mathcal{R}} \\left[ ( r_{ij} - \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} ) ^{2} - \\alpha ( \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} ) ^{2} \\right] + \\alpha \\mathbf{w}_{i} \\mathbf{S^h} \\mathbf{w}_{i}^{T} \n\\end{split}\n\\label{eq:block}\n\\end{align}\n\nWe can easily derive:\n\n\\begin{align}\n\\begin{split}\n\\frac{\\partial L}{\\partial \\mathbf{w}_{i}} = -2 \\sum_{j|r_{ij} \\in \\mathcal{R}} \\left[ r_{ij} - (1 - \\alpha) \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\right] \\mathbf{h}_{j} + 2 \\alpha \\mathbf{w}_{i} \\mathbf{S^h}\n\\end{split}\n\\label{eq:gradient}\n\\end{align}\n\nSymmetrically, if $\\mathbf{S^w} = \\sum_{i} \\mathbf{w}_{i}^{T}\\mathbf{w}_{i}$, we have:\n\n\\begin{align}\n\\begin{split}\nl_{j}^{h} = \\sum_{i|r_{ij} \\in \\mathcal{R}} \\left[ ( r_{ij} - \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} ) ^{2} - \\alpha ( \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} ) ^{2} \\right] + \\alpha \\mathbf{h}_{j} \\mathbf{S^w} \\mathbf{h}_{j}^{T} \n\\end{split}\n\\label{eq:fast_block}\n\\end{align}\nand:\n\\begin{align}\n\\begin{split}\n\\frac{\\partial L}{\\partial \\mathbf{h}_{j}} = -2 \\sum_{i|r_{ij} \\in \\mathcal{R}} \\left[ r_{ij} - (1 - \\alpha) \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\right] \\mathbf{w}_{i} + 2 \\alpha \\mathbf{h}_{j} \\mathbf{S^w}\n\\end{split}\n\\label{eq:fast_gradient}\n\\end{align}\n\nAssuming that $\\mathbf{S^w}$ is known, the complexity of computing $l_{j}^{h}$ or $\\partial L\/\\partial \\mathbf{h}_{j}$ is now $O(|\\mathcal{R}_{\\bullet j}|k + k^2)$, and the complexity of computing it for every $j \\in \\{1, \\ldots, m\\}$ is $O(|\\mathcal{R}|k + k^2)$.\n\n\\subsection{Absolute Loss}\\label{sec:absolute-loss}\n\nA similar development can be done when the squared loss is replaced by the absolute loss.\nWith the absolute loss, $L$ becomes:\n\n\\begin{align*}\n\\begin{split}\nL = \\sum_{i,j|r_{ij} \\in \\mathcal{R}} \\left| r_{ij} - \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\right| + \\alpha \\sum_{i,j|r_{ij} \\notin \\mathcal{R}} \\left| \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\right|\n\\end{split}\n\\end{align*}\n\nAs with the squared loss, we divide $L$ into $l_{i}^{w}$ and $l_{j}^{h}$. 
\n\n\\begin{align*}\nl_{i}^{w} = \\sum_{j|r_{ij} \\in \\mathcal{R}} \\left| r_{ij} - \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\right| + \\alpha \\sum_{j|r_{ij} \\notin \\mathcal{R}} \\left| \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\right|\n\\end{align*}\n\nAs with the squared loss, we will change the expression of $l_{i}^{w}$ to remove the sum over all unknown ratings, but in this case we have to impose non-negativity of the features to go further.\nIf $\\mathbf{W}, \\mathbf{H} \\geq 0$, we have $\\left| \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\right| = \\mathbf{w}_{i}\\mathbf{h}_{j}^{T}$, and therefore:\n\n\\begin{align}\n\\sum_{j|r_{ij} \\notin \\mathcal{R}} \\left| \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\right| &= \\sum_{j = 1}^{m} \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} - \\sum_{j|r_{ij} \\in \\mathcal{R}} \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\\\\n&= \\mathbf{w}_{i} \\left( \\sum_{j = 1}^{m} \\mathbf{h}_{j}^{T} \\right) - \\sum_{j|r_{ij} \\in \\mathcal{R}} \\mathbf{w}_{i}\\mathbf{h}_{j}^{T}\n\\label{eq:al_trick}\n\\end{align}\n\nHere, instead of $\\mathbf{S^w}$ and $\\mathbf{S^h}$, we will define $\\mathbf{s}_{w} = \\sum_{i = 1}^{n} \\mathbf{w}_{i}$ and $\\mathbf{s}_{h} = \\sum_{j = 1}^{m} \\mathbf{h}_{j}$.\nWe can now express $l_{i}^{w}$ and $\\partial L\/ \\partial \\mathbf{w}_{i}$ efficiently:\n\n\\begin{align}\nl_{i}^{w} = \\sum_{j|r_{ij} \\in \\mathcal{R}} \\left( \\left| r_{ij} - \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\right| - \\alpha \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\right) + \\alpha \\mathbf{w}_{i} \\mathbf{s}_{h}^{T} \n\\label{eq:al_fast_block}\n\\end{align}\nso that:\n\\begin{align}\n\\frac{\\partial L}{\\partial \\mathbf{w}_{i}} = \\sum_{j|r_{ij} \\in \\mathcal{R}} \\left( \\text{sign}\\left(\\mathbf{w}_{i}\\mathbf{h}_{j}^{T} - r_{ij} \\right) - \\alpha \\right) \\mathbf{h}_{j} + \\alpha \\mathbf{s}_{h}^{T}\n\\label{eq:al_fast_gradient}\n\\end{align}\nwhere $\\text{sign}(x) = x\/|x|$ if $x \\neq 0$, and equals $0$ otherwise.\nAssuming $\\mathbf{s}_{h}$ is known, the complexity of computing $l_{i}^{w}$ or $\\partial L\/ \\partial \\mathbf{w}_{i}$ is now $O(|\\mathcal{R}_{i\\bullet}|k)$.\nThe corresponding expression of $l_{j}^{h}$ and $\\partial L\/ \\partial \\mathbf{h}_{j}$ is trivial, and the complexity to compute them is $O(|\\mathcal{R}_{\\bullet j}|k)$.\n\n\\subsection{Generalized Kullback-Leibler Divergence}\\label{sec:kullback-leibler}\n\nFor the sake of demonstration on other common loss functions in matrix factorization, we show here the applicability of the sparsity trick on the generalized Kullback-Liebler divergence~(GKL)~\\cite{mult_NMF, lin2007projected}.\nWe do not elaborate further on this function in the rest of the paper.\n\nThe generalized Kullback-Liebler divergence\\ is defined as follows:\n\n\\begin{align}\nD(r_{ij}||\\mathbf{w}_{i}\\mathbf{h}_{j}^{T}) = r_{ij} \\log (\\frac{r_{ij}}{\\mathbf{w}_{i}\\mathbf{h}_{j}^{T}}) - r_{ij} + \\mathbf{w}_{i}\\mathbf{h}_{j}^{T}\n\\label{eq:gkl}\n\\end{align}\n\nThe GKL is not defined when $r_{ij} = 0$. 
In the following we extend the GKL by using its limit value:\n\n\\begin{align}\nD(0||\\mathbf{w}_{i}\\mathbf{h}_{j}^{T}) := \\lim\\limits_{r \\to 0} D(r||\\mathbf{w}_{i}\\mathbf{h}_{j}^{T}) = \\mathbf{w}_{i}\\mathbf{h}_{j}^{T}\n\\label{eq:gkl_0}\n\\end{align}\n\nUsing Equation \\ref{eq:gkl} and \\ref{eq:gkl_0}, $L$ becomes:\n\n\\begin{align*}\n\\begin{split}\nL = \\sum_{i,j|r_{ij} \\in \\mathcal{R}} \\left( r_{ij} \\log (\\frac{r_{ij}}{\\mathbf{w}_{i}\\mathbf{h}_{j}^{T}}) - r_{ij} + \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\right) + \\alpha \\sum_{i,j|r_{ij} \\notin \\mathcal{R}} \\mathbf{w}_{i}\\mathbf{h}_{j}^{T}\n\\end{split}\n\\end{align*}\n\nWe now follow the same development as with the other losses.\nWe define $l_{i}^{w}$:\n\n\\begin{align*}\n\\begin{split}\nl_{i}^{w} = \\sum_{j|r_{ij} \\in \\mathcal{R}} \\left( r_{ij} \\log (\\frac{r_{ij}}{\\mathbf{w}_{i}\\mathbf{h}_{j}^{T}}) - r_{ij} + \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\right) + \\alpha \\sum_{j|r_{ij} \\notin \\mathcal{R}} \\mathbf{w}_{i}\\mathbf{h}_{j}^{T}\n\\end{split}\n\\end{align*}\n\nIn the case of the GKL, the process to remove the sum on unknown ratings is the same as with the absolute loss, except that in absence of absolute value we do not have to impose non-negativity of the features:\n\n\\begin{align}\n\\sum_{j|r_{ij} \\notin \\mathcal{R}} \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} = \\mathbf{w}_{i} \\mathbf{s}_{h}^{T} - \\sum_{j|r_{ij} \\in \\mathcal{R}} \\mathbf{w}_{i}\\mathbf{h}_{j}^{T}\n\\label{eq:gkl_trick}\n\\end{align}\n\nThis leads to:\n\n\\begin{align}\n\\begin{split}\nl_{i}^{w} = & \\sum_{j|r_{ij} \\in \\mathcal{R}} \\left(r_{ij} \\log (\\frac{r_{ij}}{\\mathbf{w}_{i}\\mathbf{h}_{j}^{T}}) - r_{ij} + (1 - \\alpha) \\mathbf{w}_{i}\\mathbf{h}_{j}^{T} \\right) \\\\ \n& + \\alpha \\mathbf{w}_{i} \\mathbf{s}_{h}^{T} \n\\end{split}\n\\end{align}\n\nNow we can easily derive $\\partial L\/\\partial \\mathbf{w}_{i}$:\n\n\\begin{align}\n\\frac{\\partial L}{\\partial \\mathbf{w}_{i}} = \\sum_{j|r_{ij} \\in \\mathcal{R}} \\left( - \\frac{r_{ij}}{\\mathbf{w}_{i}\\mathbf{h}_{j}^{T}} + (1 - \\alpha) \\right) \\mathbf{h}_{j} + \\alpha \\mathbf{s}_{h}^{T}\n\\end{align}\n\nThe corresponding expression of $l_{j}^{h}$ and $\\partial L\/ \\partial \\mathbf{h}_{j}$ is obtained symmetrically.\nAs with the absolute loss, the complexity of computing $l_{i}^{w}$ or $\\partial L\/ \\partial \\mathbf{w}_{i}$ is now $O(|\\mathcal{R}_{i\\bullet}|k)$ (it is $O(|\\mathcal{R}_{\\bullet j}|k)$ for $l_{j}^{h}$ and $\\partial L\/ \\partial \\mathbf{h}_{j}$).\n\n\\subsection{Static and Dynamic Factorization}\\label{sec:static-dynamic}\n\nWe introduce an online algorithm to learn the latent factors from the input data in a static setting, and show how it can accommodate updates in a dynamic setting. 
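Both procedures below repeatedly take a gradient step on the features of a single user or item. As a concrete illustration for the squared loss, the sketch below (our own minimal version, with assumed variable names, not the released code) evaluates Equation (\\ref{eq:gradient}) for one user and keeps $\\mathbf{S^h}$ consistent after an item step; the item gradient of Equation (\\ref{eq:fast_gradient}) and the other losses follow the same pattern:

\\begin{verbatim}
import numpy as np

def grad_w_i(w_i, H, ratings_i, Sh, alpha):
    # w_i: (k,), H: k x m, ratings_i: list of (j, r_ij) pairs of user i
    # Sh: k x k matrix sum_j h_j^T h_j, maintained incrementally
    g = np.zeros_like(w_i)
    for j, r in ratings_i:
        h_j = H[:, j]
        g += -2.0 * (r - (1.0 - alpha) * (w_i @ h_j)) * h_j
    return g + 2.0 * alpha * (w_i @ Sh)        # O(|R_i| k + k^2)

def refresh_Sh(Sh, h_old, h_new):
    # after a gradient step on item j, update Sh in O(k^2)
    return Sh - np.outer(h_old, h_old) + np.outer(h_new, h_new)
\\end{verbatim}

A line search over the step size only requires $l_{i}^{w}$ of Equation (\\ref{eq:block}), which is computed from the same quantities.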
\n\n\n\\subsubsection{Static Factorization}\nIn order to factorize a whole new set of data we propose to use a randomized block coordinate descent \\cite{richtarik2014iteration}.\nAt each iteration, all the users and items are traversed in a random order.\nFor each of them a gradient step is performed on their features while keeping the other features constant.\n\nWe can use a line search \\cite{boyd2009convex} to determine the size of the gradient step because the variation of $L$ for a modification of $\\mathbf{w}_{i}$ is entirely determined by $l_{i}^{w}$ and can therefore be computed efficiently.\nLine search allows to avoid the burden of tuning the step size, proper to stochastic gradient descent (SGD) methods \\cite{duchi2011adaptive}.\nMoreover, using line search guarantees the convergence of the value of the objective function.\nIndeed, each gradient step decreases (or rather \\emph{cannot increase}) the objective function which is bounded from below.\nThis implies that the variation of the objective function converges to zero.\n\nThe complete procedure for the factorization through randomized block coordinate descent is summarized in Algorithm \\ref{alg:static}.\n\n\\spara{Complexity.}\nIn the case of the squared loss, the computation of fast gradient step relies on knowing $\\mathbf{S^w}$ and $\\mathbf{S^h}$.\nTheir initial value is computed in $O(nk^2)$ and $O(mk^2)$, respectively, and the cost of updating them after each gradient step is $O(k^2)$.\nThe total complexity of an iteration of our algorithm is therefore $O(|\\mathcal{R}|k + (n + m)k^2)$, as good as the best factorization methods that do not use priors on unknown ratings \\cite{gemulla2011large}.\n\nIn the case of the absolute loss~and generalized Kullback-Liebler divergence, the computation uses $\\mathbf{s}_{w}$ and $\\mathbf{s}_{h}$.\nTheir initial value is computed in $O(nk)$ and $O(mk)$, while the cost of updating them is $O(k)$.\nThe total complexity of one iteration then becomes $O(|\\mathcal{R}|k + (n + m)k)$, which is lower than the squared loss' complexity.\nHowever, this usually comes at a cost on the performance of the results, as we will show in the experiments in Section~\\ref{sec:experiments}.\n\n\n\\begin{algorithm}[!t]\n\\caption{Randomized block coordinate descent}\n\\label{alg:static}\n\\begin{algorithmic}[1]\n\\REQUIRE $\\,$ \\\\\n -- The ratings $\\mathcal{R}$.\\\\\n -- The number of features $k$.\\\\ \\algrule\n\\STATE Initialize $\\mathbf{W}$ and $\\mathbf{H}$.\n\\STATE Compute $\\mathbf{S^w}$ and $\\mathbf{S^h}$ ($\\mathbf{s}_{w}$ and $\\mathbf{s}_{h}$)\n\\WHILE{not converged}\n\t\\FORALL{user and item, traversed in a random order} \n\t\t\\STATE In the case of a user ($i$) \\textbf{do}\n\t\t\\STATE \\hspace{\\algorithmicindent} Perform a gradient step on $\\mathbf{w}_{i}$ using line search\n\t\t\\STATE \\hspace{\\algorithmicindent} Update $\\mathbf{S^w}$ ($\\mathbf{s}_{w}$)\n\t\t\\STATE In the case of an item ($j$) \\textbf{do}\n\t\t\\STATE \\hspace{\\algorithmicindent} Perform a gradient step on $\\mathbf{h}_{j}$ using line search\n\t\t\\STATE \\hspace{\\algorithmicindent} Update $\\mathbf{S^h}$ ($\\mathbf{s}_{h}$)\n\t\\ENDFOR\n\\ENDWHILE\n\\end{algorithmic}\n\\end{algorithm}\n\n\\subsubsection{Fast Updates}\nThe expressions of $l_{i}^{w}$, $l_{j}^{h}$, and their gradients (Equations (\\ref{eq:block}), (\\ref{eq:gradient}), (\\ref{eq:fast_block}) and (\\ref{eq:fast_gradient})) allow us to compute the latent representations of one user or one item in a time independent of the number of users 
and items in the system.\nWe can use that ability to design a simple algorithm for updating an existing factorization when a new rating is added to $\\mathcal{R}$:\nIf user $i$ rates item $j$, we iteratively perform gradient steps for $\\mathbf{w}_{i}$ and $\\mathbf{h}_{j}$, keeping all other features constant.\nThis relies on the assumption that a new rating will only affect significantly the user and item that are directly concerned with it.\nAlthough this assumption can be disputed, we will show in our experiments (Section \\ref{sec:dynamic}) that our update algorithm produces recommendations of stable quality, indicating that limiting our updates to the directly affected users and items does not degrade the factorization over time.\n\nWhen ratings are produced by new users or given to new items, a new set of features for that user or item is created before performing the local optimization.\nVarious initialization strategies could be explored here.\nHowever, as we show in our experimental results, assigning a random value to one of the features and setting the others to zero performs well in practice.\nThe update procedure is summarized in Algorithm \\ref{alg:update}.\n\n\\spara{Complexity.}\nAs mentioned earlier, our update algorithm is independent of the number of users or items in the system, making it suitable for very large datasets.\nEach iteration of the update algorithm is composed of two gradient steps (one on the user's features, and one on the item's features).\nIn particular, the complexity of one iteration is $O((|\\mathcal{R}_{i\\bullet}| + |\\mathcal{R}_{\\bullet j}|)k + k^2)$ for the squared loss, and only $O((|\\mathcal{R}_{i\\bullet}| + |\\mathcal{R}_{\\bullet j}|)k)$ for the absolute loss~and the GKL.\nThis difference in complexity becomes significant when $k$ is large with regards to the average number of ratings per user and per item.\n\nUpdates based on classic SGD methods have an even smaller complexity ($O(k))$), but we will show in Section \\ref{sec:experiments} that our method produces recommendations of much higher quality, while still being able to satisfy applications requiring low-latency updates.\n\n\n\\begin{algorithm}[!t]\n\\caption{Update algorithm}\n\\label{alg:update}\n\\begin{algorithmic}[1]\n\\REQUIRE $\\,$ \\\\\n -- The new rating $r_{ij}$.\\\\\n -- The ratings of user $i$ ($\\mathcal{R}_{i\\bullet}$) and of item $j$ ($\\mathcal{R}_{\\bullet j}$).\\\\ \\algrule\n\\STATE If $\\mathbf{w}_{i}$ ($\\mathbf{h}_{j}$) does not exist, initialize it (for example by setting a random feature to 1).\n\\STATE Add $r_{ij}$ to $\\mathcal{R}_{i\\bullet}$ and $\\mathcal{R}_{\\bullet j}$.\n\\WHILE{not converged}\n\\STATE{Perform a gradient step on $\\mathbf{w}_{i}$ using line search}\n\\STATE{Update $\\mathbf{S^w}$ ($\\mathbf{s}_{w}$)}\n\\STATE{Perform a gradient step on $\\mathbf{h}_{j}$ using line search}\n\\STATE{Update $\\mathbf{S^h}$ ($\\mathbf{s}_{h}$)}\n\\ENDWHILE\n\\end{algorithmic}\n\\end{algorithm}\n\\section{Experiments}\n\\label{sec:experiments}\n\nWe perform several experiments to demonstrate the following key points:\n\\begin{squishlist}\n\\item Using priors on the unknown values leads to overall improved quality of ranking, in a static or dynamic setting.\n\\item The quality does not degrade with time, i.e., as more updates are added, the model does not lose accuracy.\n\\item Our methods can outperform traditional techniques on various large datasets.\n\\end{squishlist}\n\nIn our experiments, we test the performance of the squared loss~(SL) and the 
absolute loss~(AL) with and without prior on unknown values.\nIn Section~\\ref{sec:exp-setup} we describe our experimental setup: the benchmarked datasets used, the performance metrics recorded and how we tune the various parameters of the models tested during the experiments.\nThen, in Sections~\\ref{sec:static} and~\\ref{sec:dynamic} we describe the results of our methods in a \\emph{static} and \\emph{dynamic} learning setting, respectively, and how they compare with state-of-the-art methods.\nIn Section~\\ref{sec:delay} we illustrate the importance of fast updates by studying the impact of having a delay between the arrival of new ratings and the update of the factorization.\nIn Section~\\ref{sec:exp-param-influence} we investigate in depth the influence of parameter values selected in the two loss functions (squared and absolute loss).\nFinally, details allowing the reproducibility of the results are given in Section~\\ref{sec:exp-reproducibility}.\n\n\\subsection{Experimental Setup}\\label{sec:exp-setup}\n\nHere we briefly describe the experimental setup used for the static and dynamic learning and how the parameters of the different methods are tuned.\n\n\\subsubsection{Datasets}\n\nDuring the experiments, we use three datasets with distinct features.\nTable~\\ref{table:datasets} summarizes the characteristics of these datasets which provide different challenges to the recommendation task:\n\n\\begin{squishlist}\n\\item \\dataset{Movielens}: This is the well-known movie ratings dataset produced by the Grouplens project.\nWe use the version containing 1 million ratings, with at least 20 ratings for each user.\n\\item \\dataset{FineFoods}: This is a collection of ratings about food products extracted from the Amazon comments~\\cite{mcauley2013hidden}.\nThe dataset is much sparser than \\dataset{Movielens}, with most users having only a handful of ratings, making it a very hard dataset for the recommendation task.\n\\item \\dataset{AmazonMovies}: This is a larger collection of ratings extracted from the movie section of Amazon~\\cite{mcauley2013hidden}.\nThis dataset is also sparser than \\dataset{Movielens}, although not as sparse as \\dataset{FineFoods}.\n\\end{squishlist}\n\n\\begin{table}[t]\n\\caption{Characteristics of the datasets used.}\n\\centering\n\\normalsize\n\\tabcolsep=0.4cm\n\\begin{tabular}{lrrr}\n\\toprule\nDataset\t\t\t&\tUsers\t&\tItems\t&\tRatings\t\\\\\n\\midrule\n\\dataset{Movielens}\t&\t6,040\t&\t3,706\t&\t1,000,209\t\\\\\n\\dataset{FineFoods}\t&\t256,059\t&\t74,258\t&\t568,454\t\\\\\n\\dataset{AmazonMovies}\t&\t889,176\t&\t253,059\t&\t7,831,442\t\\\\\n\\bottomrule\n\\end{tabular}\n\\label{table:datasets}\n\\end{table}\n\n\\subsubsection{Evaluation Metrics}\n\nWe measure two standards metrics used in ranking evaluation: (1) Normalized Discounted Cumulative Gain (NDCG) \\cite{balakrishnan2012collaborative, lee2014local} and, (2) area under ROC curve (AUC) \\cite{ALS_unknown_weighting, rendle2009bpr, transductive_NMF}.\n\nNDCG rewards methods that rank items with the highest observed rating at the top of the ranking.\nThe \\emph{discounted} aspect of NDCG comes from the fact that relevant items ranked at low positions of the ranking contribute less to the final score than relevant items at top positions. \n\n\nIn the static experiments, we also report the NDCG computed on the rated items only. \nThis metric does not consider the real world case scenario which consists of ranking all items since we do not know in advance which item will be rated or not. 
\nIntuitively, by biasing our objective through the introduction of priors on unknown rating we may loose performance when ranking rated items only, while performing better when considering all the items. \n\nWe use AUC to evaluate the ability of the different methods to predict which items are going to be rated.\nAUC measures whether the items whose ratings were held out during learning are ranked higher than unrated items.\nThe perfect ranking has an AUC of 1, while the average AUC for random ranking is $0.5$.\n\n\\subsubsection{Parameter Tuning}\n\nTable~\\ref{table:parameters} shows the parameters of the various models and the values tested during parameter tuning.\nFor each test, the parameters' values producing the best ranking on the validation sets (measured by NDCG for the static test and AUC for the dynamic test) were selected to be used.\nSee Sections \\ref{sec:static} and \\ref{sec:dynamic} for the description of the validation sets.\n\n\\begin{table}[t]\n\\caption{List of the parameters of each method, and set of values tested during the parameters tuning of the squared loss (SL) and absolute loss (AL), with and without prior on unknown values, as well as the multiplicative update algorithm (Mult-NMF), Alternating Least Square (ALS-UV), and Vowpal Wabbit (VW).\n$k$: number of features, $\\lambda$: regularization coefficient, $\\rho$: unknown\/known influence ratio, $\\gamma$: learning rate.}\n\\centering\n\\tabcolsep=0.3cm\n\\begin{tabular}{lcl}\n\\toprule\nMethod\t\t\t\t\t&\tParameter\t\t&\tTested Values\t\\\\\n\\midrule\n\\multirow{3}*{\\parbox{1.4cm}{SL\/AL with prior}} & $k$\t& 5, 10, 20, 50, 100, 200 \\\\\n\t\t\t\t\t\t&\t$\\lambda$\t\t& 0, 0.01, 0.1, 1, 10 \\\\\n\t\t\t\t\t\t&\t$\\rho$\t\t& 0.3, 0.7, 1, 2 \\\\\n\\midrule\n\\multirow{2}*{\\parbox{1.9cm}{SL\/AL without prior}}\t& $k$ & 5, 10, 20, 50, 100, 200 \\\\\n\t\t\t\t\t\t&\t$\\lambda$\t\t& 0, 0.01, 0.1, 1, 10 \\\\\n\\midrule\n\\multirow{2}*{ALS-UV}\t\t\t&\t$k$\t\t\t& 20, 50, 100, 200, 500 \\\\\n\t\t\t\t\t\t&\t$\\lambda$\t\t& 0, 0.001, 0.01, 0.05, 0.1 \\\\\n\\midrule\nMult-NMF\t\t\t\t\t\t&\t$k$\t\t\t& 20, 50, 100, 200, 500 \\\\\n\\midrule\n\\multirow{3}*{\\parbox{1.5cm}{VW}}\t&\t$k$ \t& 20, 50, 100, 200, 500 \\\\\n\t\t\t\t\t\t&\t$\\lambda$\t\t& 0 1e-5 1e-2 \\\\\n\t\t\t\t\t\t&\t$\\gamma$ \t& 0.01, 0.02, 0.05, 0.1, 0.2 \\\\\n\\bottomrule\n\\end{tabular}\n\\label{table:parameters}\n\\end{table}\n\n\n\n\\subsection{Static Learning}\\label{sec:static}\n\n\\spara{Research question.}\nIn a static mode, we test to which extent using a prior on unknown ratings improves the ranking of items when recommended to users. \n\n\\spara{Process followed.}\nThe test set was constructed by randomly selecting 1000 users, and splitting the ratings of those users in half, keeping the first 50\\% of the ratings in the training set, according to timestamp, and the last 50\\% in the test set.\nThe same process (selecting 1000 users and splitting their ratings) was then applied three times on the training set in order to create three training\/validation pairs of sets.\nOn each run, the parameters producing on average the best NDCG over the three validation sets were then used to factorize the full training set, and evaluated on the test set. 
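The chronological split described above can be sketched as follows (a simple illustration assuming a pandas DataFrame with user, item, rating and timestamp columns; the column names are ours):

\\begin{verbatim}
import pandas as pd

def split_by_time(df, n_users=1000, seed=0):
    # keep all ratings of non-selected users for training;
    # for each selected user: first half (by timestamp) -> train,
    # second half -> test
    users = df['user'].drop_duplicates().sample(n_users, random_state=seed)
    train, test = [df[~df['user'].isin(users)]], []
    for u in users:
        r_u = df[df['user'] == u].sort_values('timestamp')
        half = len(r_u) // 2
        train.append(r_u.iloc[:half])
        test.append(r_u.iloc[half:])
    return pd.concat(train), pd.concat(test)
\\end{verbatim}

Applying the same routine again to the resulting training set yields the training and validation pairs.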
\n\n\\spara{Baseline.}\nWe report the results achieved by two traditional well-known algorithms: UV matrix decomposition solved with Alternating Least Square (ALS-UV) \\cite{zhou2008large}, and non-negative matrix factorization with the multiplicative update algorithm (Mult-NMF) \\cite{mult_NMF}.\nBoth ALS-UV and Mult-NMF use the squared loss.\n\n\\spara{Results.}\nThe results, averaged over 10 runs, are shown in Table \\ref{table:static_results}. \nWe can observe that for both the squared loss~and the absolute loss, and on all datasets, by adding a prior on the unknown ratings we improve significantly the rankings of the items recommended to users over rankings obtained by the same techniques when they do not put a prior on the unknown ratings (and also over rankings obtained by state-of-the-art approaches ALS-UV and Mult-NMF).\nIn particular, our implementation of the squared loss~with prior outperforms all other methods, as confirmed by a Mann-Withney U test with a confidence level of 1\\%.\n\nOn the \\dataset{Movielens} dataset, the results of Mult-NMF and ALS-UV are, as expected, similar to the ones of our implementation of the squared loss~without prior.\nIndeed, those methods optimize the same objective function, and differ only by their algorithm.\nInterestingly, on the sparser \\dataset{FineFoods} and \\dataset{AmazonMovies} dataset, our randomized block coordinate gradient descend method outperforms Mult-NMF and ALS-UV, even without prior on the unknown ratings.\n\nFurthermore, in the three tested data sets, when only considering the rated items, the loss in ranking performance is never significant (see NDCG-RI with and without prior in Table \\ref{table:static_results}).\nIn other words, while improving on the global ranking, the performance does not deteriorate when considering only the subset of rated items.\n\n\n\\begin{table} \n\\centering\n\\scriptsize\n\\caption{Comparison of our introduced algorithm in static learning on the datasets \\dataset{Movielens}, \\dataset{FineFoods} and \\dataset{AmazonMovies}. 
\nValues in bold hold for the method that outperform all the other methods according to a Mann-Withney U test with a confidence level of 1\\%.\nAverage values are shown alongside their standard deviation over 10 runs.}\n\\tabcolsep=0.07cm\n\\begin{tabular}{l c c c}\n\\hline\n\t\t\t& \\multicolumn{3}{c}{\\dataset{Movielens}}\t\t\t\t\t\t\\\\\n\t\t\t& NDCG-RI\t & NDCG \t\t\t\t\t& AUC\t\t\t\t\t\\\\\n\\hline\nSL w\/ prior \t&0.885 $\\pm$ 0.0014& \\bf{0.5046} $\\pm$ 0.0013\t& \\bf{0.8695} $\\pm$ 0.0012\t\t\\\\\nSL w\/o prior & 0.886 $\\pm$ 0.0015 & 0.3597 $\\pm$ 0.0012\t\t& 0.6548 $\\pm$ 0.0014\t\t\\\\\nAL w\/ prior\t& 0.8683 $\\pm$ 0.0030 & 0.4452 $\\pm$ 0.0009\t\t& 0.8134 $\\pm$ 0.0011\t\t\t\\\\\nAL w\/o prior & 0.8794 $\\pm$ 0.0031 &0.3801 $\\pm$ 0.0106\t\t& 0.6927 $\\pm$ 0.0322\t\\\\\nMult-NMF\t\t& 0.8433 $\\pm$ 0.0007 & 0.3758 $\\pm$ 0.0006\t\t& 0.7011 $\\pm$ 0.0009\t\t\\\\\nALS-UV\t\t& 0.8332 $\\pm$ 0.0014 & 0.3292 $\\pm$ 0.0004\t\t& 0.5839 $\\pm$ 0.0005\t\t\\\\\n\\hline\n\t\t\t& \\multicolumn{3}{c}{\\dataset{FineFoods}}\t\t\t\t\t\t\\\\\n\t\t\t& NDCG-RI\t & NDCG \t\t\t\t\t& AUC\t\t\t\t\t\\\\\n\\hline\nSL w\/ prior\t&0.887 $\\pm$ 0.0016& \\bf{0.1237} $\\pm$ 0.0039\t& \\bf{0.8452} $\\pm$ 0.0074\t\\\\\nSL w\/o prior\t& 0.888 $\\pm$ \t0.0158 & 0.1023 $\\pm$ 0.0022\t\t& 0.8314 $\\pm$ 0.0058\t\t\\\\\nAL w\/ prior\t&0.8722 $\\pm$ 0.0142 & 0.1026 $\\pm$ 0.0030\t\t& 0.8412 $\\pm$ 0.0047\t\t\\\\\nAL w\/o prior &0.8730 $\\pm$ 0.0260\t& 0.0923 $\\pm$ 0.0008\t\t& 0.7294 $\\pm$ 0.0143\t\t\\\\\nMult-NMF\t\t& 0.8476 $\\pm$ 0.0084\t& 0.0830 $\\pm$ 0.0008\t\t& 0.3403 $\\pm$ 0.0052\t\t\\\\\nALS-UV\t\t& 0.8653 $\\pm$ 0.025\t& 0.0873 $\\pm$ 0.0009\t\t& 0.5485 $\\pm$ 0.0114\t\t\\\\\n\\hline\n\t\t\t& \\multicolumn{3}{c}{\\dataset{AmazonMovies}}\t\t\t\t\t\\\\\n\t\t\t& NDCG-RI\t & NDCG \t\t\t\t\t& AUC\t\t\t\t\t\\\\\n\\hline\nSL w\/ prior\t&0.8992 $\\pm$ 0.0101 & \\bf{0.1887} $\\pm$ 0.0088\t& \\bf{0.9276} $\\pm$ 0.0031\t\\\\\nSL w\/o prior & 0.9035 $\\pm$ 0.0089 & 0.1103 $\\pm$ 0.0008\t\t& 0.8656 $\\pm$ 0.0033\t\t\t\t\\\\\nAL w\/ prior\t&0.8804 $\\pm$ 0.0077 & 0.1348 $\\pm$ 0.0035\t\t& 0.8634 $\\pm$ 0.0045\t\t\\\\\nAL w\/o prior\t&0.8854 $\\pm$ 0.0102 & 0.1002 $\\pm$ 0.0012\t\t& 0.7625 $\\pm$ 0.0051\t\t\t\\\\\nMult-NMF\t\t& 0.8498 $\\pm$ 0.0026 & 0.0959 $\\pm$ 0.0004\t\t& 0.6330 $\\pm$ 0.0040\t\t\\\\\nALS-UV\t\t& 0.8658 $\\pm$ 0.0034 & 0.0906 $\\pm$ 0.0003\t\t& 0.6601 $\\pm$ 0.0061\t\t\\\\\n\\hline\n\\end{tabular}\n\\label{table:static_results}\n\\end{table}\n\n\n\n\\subsection{Dynamic Learning}\\label{sec:dynamic}\n\n\\spara{Research question.}\nIn this section, we target two research questions:\n\n\\begin{enumerate}\n\\item We test whether our update algorithm is able to sustain stable quality of recommendations over time;\n\\item We test to which extent using priors on unknown ratings improves the ranking of recommended items when the model is updated each time a new instance is encountered. 
By so doing, the system is evaluated on more realistic scenarios where the cases of cold items and users are considered as well.\n\\end{enumerate}\n\n\\spara{Process followed.}\nWe order the ratings by timestamps and separate the ratings in three blocks: first the training, then the validation and finally the test block (see Table \\ref{table:datasets_block_size} for the size of each block in the different datasets).\n\n\\begin{table}[h]\n\\caption{Number of ratings in each block for the dynamic learning.}\n\\centering\n\\tabcolsep=0.15cm\n\\begin{tabular}{lrrr}\n\\toprule\nDataset\t\t\t&\tTraining\t&\tValidation\t&\tTest\t\\\\\n\\midrule\n\\dataset{Movielens}\t&\t500,000\t&\t100,000\t&\t100,000\t\\\\\n\\dataset{FineFoods}\t&\t400,000\t&\t100,000\t&\t68,000\t\\\\\n\\dataset{AmazonMovies}\t&\t5,000,000\t&\t1,000,000\t&\t1,000,000\t\\\\\n\\bottomrule\n\\end{tabular}\n\\label{table:datasets_block_size}\n\\end{table}\n\nThe evaluation is performed as follows: an initial model is built based on all the ratings present before the test block, then, for each rating of the test block, two steps are performed in the following order:\n\\begin{enumerate}\n\t\\item The current model is evaluated by computing the AUC over the new (user, item) pair. \n\tNotice that in this case, computing the AUC means computing the proportion of items not yet rated by the user that the model ranks lower than the item that was just rated.\n\tAn AUC of 1 means that the new item was the top recommendation of the method for that user.\n\t\\item The model is updated using the new rating. It is worth noticing that the rating may concern a new user or item, and, therefore, features for that new user\/item have to be added to the model.\n\\end{enumerate}\n\nParameter tuning is done as described above, but starting at the beginning of the validation block and ending before the test block. 
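The per-rating AUC used in step 1 of this protocol admits a very simple implementation; the sketch below (names assumed, scores taken as $\\mathbf{w}_{i}\\mathbf{h}_{j}^{T}$) returns the fraction of items not yet rated by user $i$ that the model scores below the newly rated item:

\\begin{verbatim}
import numpy as np

def auc_for_new_rating(w_i, H, rated_items, j_new):
    scores = w_i @ H                          # scores of all m items
    others = np.setdiff1d(np.arange(H.shape[1]),
                          list(rated_items) + [j_new])
    if len(others) == 0:
        return 1.0
    return float(np.mean(scores[others] < scores[j_new]))
\\end{verbatim}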
\nThe values of parameters tested are the same as in the static test (see Table \\ref{table:parameters}).\n\n\\spara{Baseline.}\nWe compared our methods to Vowpal Wabbit (VW).\nVW\\ is a machine learning framework solving different optimization problems for classification and ranking, by implementing a carefully optimized, stochastic gradient descent (SGD) using feature hashing \\cite{weinberger2009feature} and adaptive gradient steps \\cite{duchi2011adaptive}.\nWe are using the VW's implementation of low-rank interactions\\footnote{\\url{https:\/\/github.com\/JohnLangford\/vowpal_wabbit\/tree\/master\/demo\/movielens}} based on factorization machines~\\cite{DBLP:conf\/icdm\/Rendle10}.\n\n\\spara{Results.}\nFigure~\\ref{fig:dynamic_results-movielens} shows how the average AUC evolves as new ratings enter the system.\nWe first observe that the quality of the results does not decrease over time, indicating that our update algorithm can work for long periods of time without propagating or amplifying errors.\nAs in the static experiment, we confirm that adding a prior on unknown ratings improves the quality of the ranking and, again, this is maintained across time.\nMoreover, the SGD approach of VW\\ is outranked in each dataset by our approach with prior.\n\n\\begin{figure}\n\\centering\n\t\\includegraphics[scale=0.5]{dynamic_results_moving_avg.pdf}\n\\caption{Performance comparison with respect to average AUC for the various methods tested in dynamic learning on \\dataset{Movielens}, \\dataset{FineFoods} and \\dataset{AmazonMovies}.\nResults are averaged over 20 runs.}\n\\label{fig:dynamic_results-movielens}\n\\end{figure}\n\n\\subsection{Impact of delayed updates}\\label{sec:delay}\n\n\\spara{Research Question.}\nWe test the performance of delayed models produced by our methods in delivering recommendations to users.\n\n\\spara{Process followed.}\nIn order to address this question, we simulate a recommender system that is not able to incorporate new ratings in the model as soon as they enter the system.\nTo do so, we modify the process of dynamic learning presented in Section~\\ref{sec:dynamic} to impose a delay between the arrival of a new rating and the update of the factorization.\nMore precisely, after the $i^{\\text{th}}$ rating is given by a user, the model is updated up to the $(i - d)^{\\text{th}}$ rating ($d$ being the arbitrary delay).\nThis way, the model is always $d$ ratings behind the last one arrived (the ratings are sorted by real time of arrival).\nIn real applications, the delay would probably vary, depending on the level of activity of the users.\nHowever, this experiment gives a first impression of the impact of delays on the recommendation task.\n\n\\spara{Results.}\nFigure~\\ref{fig:delay} shows the impact of a delay on the average AUC of the squared loss~and absolute loss~with prior for a dense dataset like \\dataset{Movielens} and a sparse dataset like \\dataset{FineFoods}.\nWe observe that even a small delay can affect the quality of the recommendation, depending on the characteristics of the data.\nFor \\dataset{Movielens}, if the model is behind by $5-10$ ratings, the average AUC drops by $3\\%$, and it goes down by about $14\\%$ when the model is behind by $1000$ ratings, and this applies to both loss functions.\nOn the other hand, for the much sparser \\dataset{FineFoods}, the effect is more apparent.\nWith only $5-10$ ratings behind, the model's AUC already drops by $10\\%$.\n\nTo show the effect of fast updates on weakly-engaged (or cold) users, we also report 
the impact of delays on those users for both \\dataset{Movielens} and \\dataset{FineFoods} with the squared loss\\ which performs best (Figure~\\ref{fig:delay}, Cold Users). \nWe define such users as the ones that rated at most two items.\nAs hypothesized, the cost induced by delayed predictions (for five ratings delayed) is higher for cold users.\nWe observe a relative drop in AUC of 11.8\\% and 13.4\\% for weakly-engaged users on \\dataset{Movielens} and \\dataset{FineFoods}, respectively, while when considering all the users, the relative drop is 1.1\\% and 12.9\\%, respectively.\n\nIn such sparse scenarios, cold users perform only a handful of actions before deciding to abandon the site or not.\nTherefore, it is important to consider cold users in the model as soon as they arrive, to keep them engaged by fast, efficient and good recommendations.\n\n\\begin{figure}\n\\centering\n\t\\includegraphics[scale=0.7]{delay.pdf}\n\\caption{Average AUC of the squared loss~and absolute loss~with prior on \\dataset{Movielens} and \\dataset{FineFoods} for various delays $d$, imposed as a number of ratings that the model is behind the current rating.}\n\\label{fig:delay}\n\\end{figure}\n\n\\subsection{Parameter Analysis}\\label{sec:exp-param-influence}\n\n\\spara{Research Question.}\nWe test to which extent the number of features ($k$), the weight of the prior ($\\rho$) and the regularization coefficient ($\\lambda$) affect the AUC and the runtime per update on our loss functions.\n\nFigure \\ref{fig:parameter-influence} shows the results of this investigation for different values of these parameters, for both squared loss~and absolute loss~and on each dataset.\nThe results are obtained using the dynamic learning process.\n\n\\spara{Number of features.}\nConcerning the quality of ranking (AUC), we observe the usual overfitting\/under-fitting trade-off (Figure~\\ref{fig:parameter-influence}(a)).\nThe optimal number of features depends on the dataset as well as on the loss function used, suggesting that a careful tuning of that parameter is always needed.\n\nIn some cases, speed constraints will force the use a suboptimal number of features.\nIndeed, the update runtime heavily depends on the number of features.\nFigure~\\ref{fig:parameter-influence}(d) suggests a linear relationship between runtime and number of features.\nFor both losses there is indeed a linear role of the number of features in the theoretical complexity (Section \\ref{sec:static-dynamic}).\nNotice, however, that the theoretical complexity of the squared loss\\ also has a quadratic term that becomes dominant for large number of features (with regards to the number of ratings per users).\nAlso note that while the squared loss\\ produces better AUC, the absolute loss\\ is able to sustain higher update rates, and can therefore be the loss of choice when speed is the first criterion.\n\n\\spara{Regularization coefficient.} \nThe influence of $\\lambda$ seems rather limited, except for high values that cause both the AUC and the update runtime to drop (Figure\\ref{fig:parameter-influence}(b) and (e)).\nA small regularization is supposed to increase the quality of the model by reducing overfitting, but this effect is not visible here.\nThe reason may be that the role of regularization is already taken by the prior on unknown ratings.\nIntroducing the prior seems to have the side effect of making the regularization obsolete (or redundant).\nIn fact, we confirm this with the results for $\\lambda = 0$ which demonstrate no impact on the 
quality or runtime.\nAgain, we see that setting a prior on unknown ratings increases the quality of recommendations without increasing the complexity of the solution.\nWhile it adds a term and a parameter to the objective function, it allows to remove one and its associated parameters.\n\n\\spara{Unknown\/known influence ratio.}\nThe ratio $\\rho$ influences the performance of the squared loss~algorithm in the following way: the AUC increases when a prior on unknown values is added ($\\rho > 0$), but the exact value of $\\rho$ has little influence (in the observed range) (Figure~\\ref{fig:parameter-influence}(c)).\nThe absolute loss is more sensitive to the value of $\\rho$, with the AUC decreasing when $\\rho$ becomes too large (on \\dataset{Movielens} and \\dataset{AmazonMovies}).\nHowever, in both cases, and on all datasets, giving the same weight to the known and unknown ratings ($\\rho = 1$) offers a significant improvement over not using a prior, suggesting that $\\rho = 1$ can be used as a first guideline, avoiding the burden of further parameter tuning.\n\nThe update runtime is also affected by $\\rho$, decreasing when $\\rho$ increases (Figure~\\ref{fig:parameter-influence}(f)).\nThe explanation can be that the prior on unknown ratings acts as a regularizer, driving features towards 0, and in doing so speeding up the convergence.\n\n\\spara{Runtime.}\nIn general, our technique demonstrates low running time which is heavily dependent on the number of features used, and less on the regularization applied or the ratio of unknown over known values.\nThese results demonstrate that our method can satisfy applications requiring low-latency updates.\n\n\\begin{figure*}[htbp]\n\\begin{center}\n\t\\includegraphics[scale=0.45]{parameter_influence.pdf}\n\t\\caption{Influence of parameter values on the AUC and runtime for the models produced by the squared loss~(SL) and absolute loss~(AL). 
Y-error bars declare a standard deviation on the average value of each metric.}\n\\label{fig:parameter-influence}\n\\end{center}\n\\end{figure*}\n\n\\subsection{Reproducibility of Results}\\label{sec:exp-reproducibility}\n\nThe implementation of the algorithms introduced in Section~\\ref{sec:loss-funtcions} is available on Github:\\\\\n{\\small \\url{https:\/\/github.com\/rdevooght\/MF-with-prior-and-updates}}.\n\nFor both ALS-UV and Mult-NMF we use the implementation of GraphChi, an open source tool for graph computation with impressive performance~\\cite{kyrola2012graphchi}.\n\nThe code and documentation of Vowpal Wabbit is available on its Github page:\\\\\n\\url{https:\/\/github.com\/JohnLangford\/vowpal_wabbit\/wiki}.\n\nThe \\dataset{Amazon} datasets are available on the SNAP webpage:\\\\\n\\url{http:\/\/snap.stanford.edu\/data\/index.html}\n\nThe \\dataset{Movielens} dataset is available on the Grouplens page:\\\\\n\\url{http:\/\/www.grouplens.org\/datasets\/movielens\/}.\n\\section{Related Work}\\label{sec:related-work}\n\nThe problem of recommending products based on the actions and feedback from other users (rather than based on content similarity) is often called \\emph{collaborative filtering}, and dates back 20 years ago, with works such as Tapestry~\\cite{tapestry} and Grouplens~\\cite{grouplens}.\nThe field is now dominated by methods based on matrix factorization, with algorithms such as ALS~\\cite{zhou2008large}, the multiplicative update rule~\\cite{mult_NMF}, and the stochastic gradient descent method (SGD)~\\cite{gemulla2011large, koren2009matrix}.\n\nThe missing at random assumption has yet to get the attention it deserves in collaborative filtering. \nBoth ~\\cite{missing_at_random} and ~\\cite{steck2010training} have validated the hypothesis of ratings missing not at random.\nPractical propositions for the interpretation of missing data can be found in the fields of one-class collaborative filtering and collaborative filtering based on implicit feedback, where the missing at random assumption is often obviously untenable~\\cite{implicit_feedback_confidence, ALS_unknown_weighting, weighting_sampling}.\n~\\cite{transductive_NMF} offers an interesting approach where missing ratings are considered as optimization variables; they use an EM algorithm to optimize in turn the factorization and the estimation of missing values. Unfortunately, that method has a high complexity, and the proposed approximations that work with large problems remove some of the method's appeal.\n\nNone of those works, however, consider the real world, dynamic scenario of continuously observing new ratings, users and items.\nOther works \\cite{gaillard2015time, rendle2008online} focus on the dynamic update of matrix factorization (mainly through the use of SGD), but those, on the other hand, implicitly rely on the missing at random assumption, and therefore suffer from lower accuracy in predictions. Other state-of-art methods for matrix factorization scale by relying on stochastic gradient computation \\cite{chin2015fast, chin2015learning}, while we rely on exact gradient approach. 
In this work, at the difference of what is mostly seen on scalable machine learning techniques nowadays \\cite{dean2012large}, we base our approach on coordinate random block descent to compute exact gradient in order to deal with missing data of large scale matrices.\n\\section{Conclusions}\\label{sec:conclusion}\nIn this work we proposed a new, simple, and efficient, way to incorporate a prior on unknown ratings in several loss functions commonly used for matrix factorization.\nWe experimentally demonstrated the importance of adding such a prior to solve the problem of collaborative ranking.\nWe also tackled the problem of updating the factorization when new users, items and ratings enter the system.\nWe believe that this problem is central to real applications of recommendation systems, because new users constantly enter those systems and the factorization must be kept up to date to give them recommendations immediately after their first few interactions with the platform.\nWe offer an update algorithm whose complexity is independent of the size of the data, making it a good approach for large datasets.\nIn the future, we would like to explore how our methods perform under real workloads of updates with variable arrival rates of ratings per user and item.\nFurthermore, we would like to test the performance of our methods in platforms built to analyze streams of data such as Storm, Twitter's Distributed Processing Engines platform.\n\\section{Acknowledgements}\nR. Devooght is supported by the Belgian Fonds pour la Recherche dans l'Industrie et l'Agriculture (FRIA, 1.E041.14).\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{Conclusion}\n\n\n The definition of driving cost is highly ambiguous and should encode human's driving behavior as well as the environment characteristic such as road structure. In this work, we proposed a novel fully self-supervised approach for estimating high dimensional CMs. Due to the importance of the prediction in CME, part of the network is dedicated for predicting high-dimensional OGMs. Input and predicted OGMs, contextual information as well as the human driving behavior are then utilized to extract features required to encode expert's demonstration. We applied the proposed method to Argoverse dataset and illustrated the effectiveness of our approach in different planning horizons. \n\n\n\n\n\\section{Experiments} \\label{sec:exp}\n\n\\begin{figure*}[!t]\n\t\\centering\n\t\\includegraphics[trim = 0mm 0mm 0mm 0mm,width = 18cm]{sections\/figs\/final_results.pdf}\n\n\t\\caption{\\small Low cost trajectories selected by the proposed method. In these experiments the algorithm is forced to pick the trajectories from different driving modes as described in Section \\ref{sec:exp-2-obj}. Each row, is a different scenario where the first column shows the planned trajectory. In next columns the car is moved according to that trajectory.\n\t}\n\t\\label{fig:results}\n\\end{figure*}\n\n\\begin{table*}\n\t\\footnotesize\n\t\\begin{center}\n\t\t\\begin{tabular}{|cc||ccc||ccc||ccc|}\n\t\t\t\\hline\n\t\t\t\\multicolumn{2}{|c||}{Algorithms} & \\multicolumn{3}{c||}{$\\tau=1$ sec $T=1$ sec} & \\multicolumn{3}{c||}{$\\tau=2$ sec $T=2$ sec}& \\multicolumn{3}{c|}{$\\tau=1$ sec $T=3$ sec}\\\\\n\t\t\t\\hline\t\t\t\n\t\t\tAlg. & Arch. 
& minADE & CR(\\%) & RV(\\%) & minADE & CR(\\%) & RV(\\%) & minADE & CR(\\%) & RV(\\%)\\\\\n\t\t\t\\hline\n\t\t\t\n\t\t\n\t\t\t& \\multicolumn{1}{l||}{BC-MLP} & \\textbf{0.05} & \\textbf{0.00} & 0.07 & \\textbf{0.79} & 2.36 & 2.61 & 3.18 & 7.11 & 5.73 \\\\ \n\t\t\t\\multirow{-2}{*}{BC}& \\multicolumn{1}{l||}{BC-LSTM} & 0.08 & \\textbf{0.00} & 0.09 & 0.84 & 2.57 & 1.87 & 3.01 & 6.19 & 5.03\\\\\n\t\t\t\\hline\n\t\t\t\n\t\t\n\t\t\tRuleCM & \\multicolumn{1}{c||}{} & 1.03 & 0.34 & \\textbf{0.00} & 2.21 & 3.18 & \\textbf{0.00} & 3.24 & 4.93 & 0.09 \\\\\n\t\t\t\\hline\n\t\t\t\n\t\t\t\n\t\t\n\t\t\t&\\multicolumn{1}{l||}{MFP.1} & 0.21 & \\textbf{0.00} & \\textbf{0.00} & 1.92 & 1.18 & 0.86 & 3.78 & 4.33 & 2.98 \\\\\n\t\t\t\\multirow{-2}{*}{MFP$_{K=6}$} & \\multicolumn{1}{l||}{MFP.3} & 0.21 & 0.09 & 0.05 & 1.92 & 2.07 & 0.97 & 3.78 & 5.96 & 3.59 \\\\\n\t\t\t\\hline\n\t\t\n\t\t\t& \\multicolumn{1}{l||}{ESP.1} & 0.41 & 0.21 & \\textbf{0.00} & 2.07 & 2.84 & 1.15 & 3.97 & 3.87 & 4.62 \\\\ \n\t\t\t\\multirow{-2}{*}{ESP$_{K=6}$} & \\multicolumn{1}{l||}{ESP.3} & 0.41 & 0.94 & 0.36 & 2.07 & 2.92 & 1.42 & 3.97 & 5.46 & 4.93 \\\\ \n\t\t\t\\hline\\hline\n\t\t\n\t\t\t\\rowcolor[gray]{0.9}& \\multicolumn{1}{l||}{MSCME.a.1} & 0.15 & 0.01 & \\textbf{0.00} & 1.94 & 0.92 & 0.01 & 3.52 & 1.68 & 0.05 \\\\ \n\t\t\t\\rowcolor[gray]{0.9}& \\multicolumn{1}{l||}{MSCME.b.1} & 0.11 & \\textbf{0.00} & \\textbf{0.00} & 1.67 & 0.85 & \\textbf{0.00} & 3.28 & 1.57 & \\textbf{0.01} \\\\\n\t\t\t\\rowcolor[gray]{0.9}& \\multicolumn{1}{l||}{MSCME.b.3} & 0.11 & 0.01 & \\textbf{0.00} & 1.67 & 0.91 & \\textbf{0.00} & 3.28 & 1.62 & \\textbf{0.01} \\\\ \n\t\t\t\\rowcolor[gray]{0.9}&\\multicolumn{1}{l||}{RCME.a.1} & 0.18 & \\textbf{0.00} & \\textbf{0.00} & 2.74 & 0.67 & 0.01 & \\textbf{2.92} & 0.84 & \\textbf{0.01}\\\\\n\t\t\t\\rowcolor[gray]{0.9}& \\multicolumn{1}{l||}{RCME.b.1} & 0.17 & \\textbf{0.00} & \\textbf{0.00} & 2.81 & \\textbf{0.59} & 0.01 & 2.93 & \\textbf{0.78} & \\textbf{0.01}\\\\\n\t\t\t\\rowcolor[gray]{0.9}\\multirow{-6}{*}{CME (ours)}& \\multicolumn{1}{l||}{RCME.b.3} & 0.17 & \\textbf{0.00} & \\textbf{0.00} & 2.81 & 0.64 & 0.01 & 2.93 & 0.82 & 0.03\\\\\n\t\t\t\n\t\t\t\\hline\n\t\t\\end{tabular} \n\n\t\\end{center}\n\t\\vspace{-5pt}\n\t\\caption{Argoverse dataset planning evaluation. Note that the \\textbf{.1} and \\textbf{.3} variations using the same models\/samples. In \\textbf{.1} we selected the trajectory with highest probability\/lowest cost. In \\textbf{.3} we chose 3. Therefore, the minADE is the same for these variations. Variants of our method (gray) outperformed other algorithm in term of CR in all of the scenarios.}\n\t\\label{tab:Results-}\n\t\\vspace{-10pt}\n\\end{table*}\n\n\n\\begin{table*}\n\t\\footnotesize\n\t\\begin{center}\n\t\t\\begin{tabular}{cc||ccc||ccc||ccc}\n\t\t\t\\multicolumn{2}{c||}{Algorithms} & \\multicolumn{3}{c||}{$\\tau=1$ sec $T=1$ sec} & \\multicolumn{3}{c||}{$\\tau=2$ sec $T=2$ sec}& \\multicolumn{3}{c}{$\\tau=1$ sec $T=3$ sec}\\\\\n\t\t\t\\hline\t\t\t\n\t\t\tAlg. 
& Aux & minADE & CR(\\%) & RV(\\%) & minADE & CR(\\%) & RV(\\%) & minADE & CR(\\%) & RV(\\%)\\\\\n\t\t\t\\hline\n\t\t\t\\multicolumn{1}{l}{RCME.1}& $\\checkmark$ & 0.18 & 0.00 & 0.00 & 2.45 & 0.67 & 0.01 & 2.92 & 0.84 & 0.01\\\\\n\t\t\t\\multicolumn{1}{l}{RCME.1}& & 0.22 & 0.00 & 0.00 & 2.76 & 0.89 & 0.01 & 3.18 & 4.02 & 0.01\\\\\n\t\t\t\\multicolumn{1}{l}{RCME.3}&$\\checkmark$ & 0.18 & 0.00 & 0.00 & 2.45 & 0.93 & 0.01 & 2.92 & 1.00 & 0.01\\\\\n\t\t\t\\multicolumn{1}{l}{RCME.3}& & 0.22 & 0.11 & 0.06 & 2.76 & 2.68 & 0.03 & 3.18 & 7.32 & 0.12\\\\\n\t\t\t\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\vspace{-5pt}\n\t\\caption{CME with and without the auxiliary task}\n\t\\label{tab:abl-obj}\n\t\\vspace{-10pt}\n\\end{table*}\n\n\n\n\\begin{table*}\n\t\\footnotesize\n\t\\begin{center}\n\t\t\\begin{tabular}{cc||ccc||ccc||ccc}\n\t\t\t&& \\multicolumn{3}{c||}{$\\tau=1$ sec $T=1$ sec} & \\multicolumn{3}{c||}{$\\tau=2$ sec $T=2$ sec}& \\multicolumn{3}{c}{$\\tau=1$ sec $T=3$ sec}\\\\\n\t\t\t\\cline{3-11}\t\t\t\n\t\t\t\\multicolumn{2}{c||}{\\multirow{-2}{*}{CME Methods}} & minADE & CR(\\%) & RV(\\%) & minADE & CR(\\%) & RV(\\%) & minADE & CR(\\%) & RV(\\%)\\\\\n\t\t\t\\hline\n\t\t\t\\multicolumn{2}{l||}{RCME, with pred} & \\textbf{0.15} & \\textbf{0.01} & \\textbf{0.00} & \\textbf{1.94} & \\textbf{0.92} & 0.01 & \\textbf{3.52} & \\textbf{1.68} & 0.05 \\\\ \n\t\t\t\\multicolumn{2}{l||}{RCME, without pred}& 0.34 & 0.79 & \\textbf{0.00} & 3.13 & 7.18 & \\textbf{0.00} & 3.99 & 11.14 & \\textbf{0.00}\\\\\n\t\t\t\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\vspace{-5pt}\n\t\\caption{CME with and without the OGM prediction}\n\t\\label{tab:abl-pred}\n\t\\vspace{-10pt}\n\\end{table*}\n\n\n\\begin{table*}\n\t\\footnotesize\n\t\\begin{center}\n\t\t\\begin{tabular}{cc||ccc||ccc||ccc}\n\t\t\t&& \\multicolumn{3}{c||}{$\\tau=1$ sec $T=1$ sec} & \\multicolumn{3}{c||}{$\\tau=2$ sec $T=2$ sec}& \\multicolumn{3}{c}{$\\tau=1$ sec $T=3$ sec}\\\\\n\t\t\t\\cline{3-11}\t\t\t\n\t\t\t\\multicolumn{2}{c||}{\\multirow{-2}{*}{Algorithms}} & TP & TN & $S_{100}$ & TP & TN & $S_{100}$ & TP & TN & $S_{100}$\\\\\n\t\t\t\\hline\n\t\t\t\\multicolumn{2}{c||}{Diff. Learn} & 81.92 & 98.32 & 96.32 & 80.07 & \\textbf{99.08} & \\textbf{96.52} & 78.64 & \\textbf{99.26}& \\textbf{97.33} \\\\ \n\t\t\t\\multicolumn{2}{c||}{RCME}& \\textbf{82.13} & \\textbf{98.31} & \\textbf{97.48} & \\textbf{81.96} & 97.08 & 93.84 & \\textbf{81.38} &98.19& 95.39\\\\\n\t\t\t\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\vspace{-5pt}\n\t\\caption{OGM prediction with and without CME}\n\t\\label{tab:abl-cm}\n\t\\vspace{-10pt}\n\\end{table*}\n\n\nWe applied our approach to the Argoverse \\cite{Argoverse} dataset. The LiDAR point clouds are converted to $256\\times256$ BEV with the ground removal described in \\cite{miller2006mixture}. Moreover, to increase the ability of our model in handling occlusions, we applied a visibility mask as in \\cite{dequaire2018deep}. We also encoded the information from the map into 8 different channels.\n\nWe report performance on 3 different settings for the input sequence length $\\tau$ and the prediction horizon $T$. In all cases, the data is partitioned into 20 sequences of frames. Thus, the time gap between two consecutive frames for ${(\\tau=1, T=1), (\\tau=2, T=2), (\\tau=1, T=3)}$ are $0.1$, $0.2$ and $0.2$ respectively.\n\nWe first evaluate the effectiveness of our approach in trajectory planning using a variety of metrics and compare our approach to multiple baselines. 
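\n\nAs a concrete illustration of the input encoding used in these experiments, the sketch below rasterizes a single LiDAR sweep into a $256\\times256$ BEV occupancy grid. The grid extent, the resolution and the helper name are illustrative choices of ours, and the ground removal of \\cite{miller2006mixture} as well as the visibility mask of \\cite{dequaire2018deep} are assumed to be applied separately.\n\\begin{verbatim}\nimport numpy as np\n\ndef rasterize_bev(points, grid=256, extent=40.0):\n    # points: (N, 3) LiDAR returns in the ego frame, in meters.\n    # Returns a (grid, grid) array with 1 for occupied cells, 0 for free cells.\n    ogm = np.zeros((grid, grid), dtype=np.float32)\n    res = 2.0 * extent / grid  # meters per cell\n    keep = (np.abs(points[:, 0]) < extent) & (np.abs(points[:, 1]) < extent)\n    cols = ((points[keep, 0] + extent) / res).astype(int).clip(0, grid - 1)\n    rows = ((points[keep, 1] + extent) / res).astype(int).clip(0, grid - 1)\n    ogm[rows, cols] = 1.0\n    return ogm\n\n# The full network input stacks the tau most recent OGMs with the 8 map channels,\n# e.g. x = np.concatenate([ogm_history, map_channels], axis=0).\n\\end{verbatim}\n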
We quantitatively evaluate the performance of all these solutions in different planning horizons. In Section \\ref{sec:exp-2} we provide multiple ablation studies to show the effects of different modules and objective terms in the overall performance of the system.\n\n\\subsection{Cost Map Estimation} \\label{sec:exp-1}\n\nDirect evaluation of predicted CMs is not straightforward as there is no groundtruth for them. We thus evaluate their quality by using them with the planning approach described in Section \\ref{sec:planning} and compare planning quality under the following metrics.\n\n\\begin{itemize}\n\t\n\t\\item \\textbf{minADE:} We used the minimum average displacement error (minADE) $\\min{ \\frac{1}{T} \\sum_{\\tau}^{\\tau + T} {\\left\\lVert{\\hat{s} - s^*}\\right\\rVert}_2}$ to measure the minimum drift of the trajectories generated by each model from the groundtruth. This metric is especially suitable for baselines producing multiple trajectories as well as the proposed method because it does not penalize the trajectories that are valid, but far from the groundtruth. For algorithms that only produce a single trajectory, minADE reduces to ADE. Since our solution aims at capturing the underlying reasons for the expert's behavior rather than merely generating trajectories, just minADE with respect to expert's trajectories is inadequate. \n\t\n\t\\item \\textbf{Potential Collision Rate (CR):} Each selected trajectory is mapped to the future frames to check if it collides with any object in the scene. While the ego vehicle's behavior affects other cars' trajectories in the real world, for short horizons considered here we may ignore such interaction.\n\n\t\\item \\textbf{Road Violation (RV):} The selected trajectory is mapped to a drivable area to check for possible violations of traffic rules.\n\t\n\\end{itemize}\n\\noindent\n\nWe compare our solution with the following baselines: \n\n\\begin{itemize}\n\t\\item \\textbf{Behavior Cloning(BC):} We implemented a BC learner that receives a sequence of OGMs and the past trajectory for $\\tau$ timesteps and generates trajectories close to those of the human driver. For fair comparison we use the same OGM predictor in our model. Trajectories are generated with one of two architectures, where BC-MLP uses four CNN layers with [32, 64, 64, 128] filters followed by three mlp layers with [64, 32, 2T] units and BC-LSTM uses a CNN encoder with [16, 32] filters to encode map and predicted ($\\hat{O}$) or observed ($O^*$) OGMs and predict $S$ for T timesteps.\n\t\n\t\\item \\textbf{Rule-Based Cost Map (RuleCM):} Instead of predicting CMs we use hard rules to shape a CM. We assign high cost to non-drivable areas and the occupied cells at the present time. The same trajectory generator as in Section \\ref{sec:planning} is used and driving trajectory with lowest cost is selected. This baseline highlights the importance of the \\textit{predicted} cost for motion planning.\n\t\n\t\\item \\textbf{Estimating Social-forecast Probability (ESP):} We compare our results to ESP \\cite{rhinehart2019precog} for the single agent, using the code published at \\url{https:\/\/github.com\/nrhine1\/precog}. We did not do hyperparameter tuning, but we used both forward KL and symmetric cross entropy for the objective function and reported the best results. We sample 6 trajectories $(K=6)$ for evaluation. For CR and RV we chose top-1 and top-3 trajectories according to the model-assigned probability and reported the results for both. 
These variations are referred to as ESP.1 and ESP.3. Note that because we use the same samples from the same model to study if all of the generated trajectories are useful for planning, the minADE is the same for these variations. \n\t\n\t\\item \\textbf{Multiple Future Prediction (MFP):} MFP \\cite{tang2019multiple} is a multi-modal trajectory prediction solution. We use code from \\url{https:\/\/github.com\/apple\/ml-multiple-futures-prediction} with adaptations to make it work for a single agent. We acknowledge that this adaptation affects the performance of MFP as the other agents' trajectories are the key inputs to this algorithm. In a complete AD system, such data may come from perception modules that detect and track other agents. Since we are studying the performance of prediction in the absence of such modules, we choose to test MFP in a limit case. We used 3 modes for MFP. Similar to ESP, we use two variations of sampled trajectories, where MFP.1 and MFP.3 refer to top-1 and top-3 trajectory selections respectively.\n\t\n\\end{itemize}\n\nFor our solution, we study the performance of both the RCME and the MSCME architectures. In one setting (variation \\textbf{.a}) we use the trajectories generated according to Section \\ref{sec:planning}, and in another setting (variation \\textbf{.b}) we add the trajectories generated by the imitation network to the samples. Similar to ESP and MFP, we use top-1 and top-3 trajectories to evaluate the quality of the trajectories.\n\n\\subsection{Planning Results}\n\nAs shown in Table \\ref{tab:Results-}, all solutions have low collision rate (CR) for shorter horizons. As horizon gets longer, history shorter, and frequency lower, CR increases markedly for all the algorithms except for ours.\n\nBoth BC baselines have low ADE for shorter horizons $(T=1, 2)$. This is expected as they explicitly minimize the difference between predicted and expert's trajectories. But for the more challenging settings where they plan trajectories for 3 seconds based on 1 second of history, the generated trajectories have higher ADE, CR and RV. In contrast, even though our solution uses a trajectory sampler to propose trajectories it has low ADE, CR and RV in all settings. Adding the predicted trajectory by the imitation network to the samples helps with the performance in some scenarios.\n\nRuleCM has a low RV percentage as we manually assign high values to the non-drivable grids. However, its high CR shows that it cannot handle dynamic objects well. This highlights the importance of the predictive nature of our proposed solution. \n\nAs mentioned above, we did not tune the hyperparameters for MFP and ESP on Argoverse. \\cite{park2020diverse} reports better performance for these algorithms but we could not replicate those results. Multiple factors including the difference in hyperparameters, preprocessing of the data, prediction horizon and number of samples may have contributed to this performance gap. We also use the single-agent variant for both, which forces these algorithms to capture interactions using high dimensional inputs only. This can potentially lead to a decline in performance. The high CR of ESP suggests that its architecture may not be effective in capturing dynamics of the environment and interactions from the high dimensional features. 
Moreover, the increase in CR and RV for top-3 trajectories over top-1 trajectories, together with the higher CR and RV even when these algorithms are close to the proposed method in terms of minADE, shows that the multi-modality of these algorithms may not be directly suitable for planning. In other words, a small minADE is not an indication of the admissibility of all the samples. The authors of ESP used a similar architecture in \\cite{rhinehart2018r2p2} and \\cite{rhinehart2018deep} for planning; the authors of MFP also did a brief study on using MFP for planning \\cite{tang2019multiple}. Given the success of these algorithms in multi-modal trajectory prediction, our experiments suggest that in order to assess their potential in motion planning they should also be evaluated with planning metrics such as CR on real-world data.\n\n\\subsection{Ablation Study} \\label{sec:exp-2}\n\n\\subsubsection{Objective Function}\\label{sec:exp-2-obj}\n\nTo study the contribution of the proposed auxiliary task we trained two models, with and without it, and summarized the results from the RCME.a setting in Table \\ref{tab:abl-obj}. The results show that for the more challenging scenario ($\\tau=1$ sec, $T=3$ sec) CR is lower when the auxiliary objective is used. Intuitively, the auxiliary task does not affect RV as much because $\\mathcal{L}_p$ takes the static environment into account. But as the dynamics of the environment gets more complex, the role of the auxiliary objective becomes clearer. \n\nFor the settings with top-3 trajectories, we explicitly chose trajectories from different clusters so as to examine the ability of the system to reason about different scenarios. This leads to a larger effect of the auxiliary objective (RCME.3 rows in Table \\ref{tab:abl-obj}) even in the easier scenarios, suggesting that the auxiliary task contributes to better generalization.\n\n\\textbf{Discussion:} We tried to employ the planning objective introduced in \\cite{zeng2019end} to compare the results. However, the network failed to estimate CMs. We believe that, due to the highly sparse nature of that objective, the perception modules in that architecture are crucial to guide the training.\n\\subsubsection{Network Architecture}\nWe also conducted ablation studies on different modules in our architecture. \nFirst, we replace the OGM predictor in the MSCME architecture with a CNN encoder so that the model directly estimates the CM without the help of OGM predictions. We do this only for MSCME, because in RCME the CM estimation is embedded inside the OGM prediction system. The results are summarized in Table \\ref{tab:abl-pred}.\nThis change leads to a large performance gap in terms of CR, suggesting that simultaneous OGM prediction extracts better mid-level features suitable for reasoning about environment dynamics. \n\nWhile the quality of OGM prediction is not the focus of this paper, studying it offers more insight into the overall performance of the system. Thus, we compared the OGM predictions of \\textbf{RCME} with the \\textbf{Difference Learning Architecture} in \\cite{mohajerin2019multi}. The metrics we use are the percentages of True Positives (TP) and True Negatives (TN). We also multiply SSIM by 100 ($S_{100}$) to put it on the same scale as the other metrics. The results are summarized in Table \\ref{tab:abl-cm}.
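\n\nFor completeness, the sketch below shows one way to compute these cell-wise scores from a predicted occupancy map and its binary target; the $0.5$ threshold and the normalization of TP and TN by the number of occupied and free cells, respectively, are our own assumptions, and SSIM is delegated to an off-the-shelf implementation.\n\\begin{verbatim}\nimport numpy as np\nfrom skimage.metrics import structural_similarity as ssim\n\ndef ogm_scores(pred_prob, target, thr=0.5):\n    # pred_prob: predicted occupancy probabilities in [0, 1]; target: binary OGM.\n    pred = (pred_prob > thr).astype(np.uint8)\n    tp = 100.0 * ((pred == 1) & (target == 1)).sum() / max(target.sum(), 1)\n    tn = 100.0 * ((pred == 0) & (target == 0)).sum() / max((1 - target).sum(), 1)\n    s100 = 100.0 * ssim(pred_prob.astype(np.float64),\n                        target.astype(np.float64), data_range=1.0)\n    return tp, tn, s100\n\\end{verbatim}\n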
It is not surprising that RCME has higher TP rate compared to Difference Learning, because the additional CM estimation task brings more information to the system, leading to more accurate predictions.\n\\section{Introduction}\n\nConventional autonomous driving (AD) stacks consist of various modules \\cite{urmson2008boss}. A perception component is responsible for detecting objects in the scene and a prediction module for projecting their positions in the future. Based on their outputs a motion planner generates a desired trajectory according to a manually specified cost function \\cite{choset2005principles}, which is in turn executed by a controller. A key advantage of this approach is the interpretability of the final decision. For example, in case of accident, each component can be investigated individually. However, with different parts designed and tuned separately, each module is not aware of the errors made by the other parts. In many cases, there is no clear way for estimating the model uncertainty and propagating it to the system. In addition to massive amount of human-labelled data required to train the perception components, the manual design and tuning of the cost function for motion planning tends to limit the system's ability for dealing with complex driving scenarios .\n\nAs an alternative, several works \\cite{bojarski2016end,codevilla2018end,xu2017end,chi2017deep} proposed driving systems that use raw sensory input to directly produce control commands (i.e., acceleration and steering). This approach allows full backpropagation and eliminates the need for a cost function. Since a large quantity of data can be collected from cars equipped with appropriate sensors and directly used for training without human labelling, this approach is potentially highly scalable with data and compute. However, such a monolithic approach lacks internally interpretable components, offers little insight as to how system faults may arise, and is thus ill-suited for safety-critical real-world deployment.\n\n\n \\begin{figure}[!t]\n\t\\vspace{-.35cm}\n\t\\centering\n\t\\includegraphics[scale=0.5]{sections\/figs\/arch.pdf}\n\t\\vspace{-.3cm}\n\t\\caption{\\small {High Dimensional Cost Map Estimation.}\n\t}\n\t\\vspace{-.6cm}\n\t\\label{fig:arch}\n\\end{figure}\n\n\nIn this paper, we propose a new approach that allows meaningful interpretation and avoids manual data labeling and design of cost function. Our approach is centered around a novel architecture for learning-based space-time Cost Map Estimation (CME). \nThe proposed method is highly scalable as it can be trained in a fully self-supervised fashion. Moreover, the space-time cost map has both a natural interface with motion planner and an interpretable domain-specific semantics.\n\nSpecifically, our architecture encodes high dimensional Occupancy Grid Maps (OGM)s as well as other contextual information (e.g., drivable area and intersections) and simultaneously predicts the OGMs and estimate the cost map for multiple steps into the future. This leads to interpretable intermediate representations in the form of OGMs. A cost map (CM) is a grid map where the value of each cell represents the cost of driving into that cell location. By extending the predicted CM multiple steps into the future, we arrive at a sequence of space-time CMs. 
These CMs can then be used by a motion planner to rank possible future trajectories by integrating the cost over the cells these trajectories occupy.\n\nImportantly, while it is obvious that OGM prediction training can be made self-supervised using sequences of driving data (e.g., \\cite{mohajerin2019multi}) so long as occupancy estimation is accurate, labels for self-supervised training of CM estimation are much harder to synthesize. To solve this problem, we decompose the CME objective into two parts. The first one injects the prior knowledge about the environment where it is available (e.g., occupied cells are high cost). However, there is no explicit information about the cost of most of the cells. Hence, we propose using an auxiliary task to guide the training. Using auxiliary objectives to improve the performance of a model on a primary task has proven effective in different fields \\cite{liu2015deep,trinh2018learning,burda2018large}. Similarly, in this work we define an auxiliary imitation task that forces the model to predict the expert's intention and trajectory based on the estimated CMs. For this task, a data-driven set of intentions capturing different modes of driving is used. This objective term pushes the model to fill in the blanks and arrive at complete and systematically accurate predictions of the CM.\n\nThe main contributions of this paper are as follows:\n\\begin{itemize}\n \\vspace{-.10cm}\n \\item An architecture for estimating CMs simultaneously with OGM prediction from human driving data that is fully self-supervised and requires no extra data labeling,\n \\vspace{-.10cm}\n \\item A set of specific training objectives that combines environment constraints, the expert's behavior and map information, together with an auxiliary imitation task, leading to the estimation of space-time cost maps,\n \\vspace{-.10cm}\n \\item An empirical demonstration of the effectiveness of this design in the overall performance of an AD system through multiple experiments and generalization tests.\n\\end{itemize}\n\n\n\n\n\n\n\n\n\n\\section{Related Work}\n\nUnlike the monolithic end-to-end approach, our proposal replaces modules of a conventional AD stack to enable fully self-supervised learning of cost maps. Our main goal is balancing scalability and interpretability. From this perspective, we discuss how our work compares and contrasts with existing works.\n\n\\subsection{Cost Design and State Prediction}\n\nConventional motion planners typically optimize trajectories according to a predefined cost function \\cite{werling2010optimal,schlechtriemen2016wiggling,hu2018dynamic}. However, manually defining and tuning a cost is extremely hard for highly dynamic environments such as driving. This has led to trade-offs where AD solutions largely avoid difficult interactions. More recently, learning-based approaches have received significant attention in planning \\cite{paden2016survey}. Similarly, we aim at replacing the manual cost function with learning-based space-time CMs that predict into the future. However, it is important to note that our proposal is compatible with both classical and learning-based planning methods insofar as they can use the predicted CMs to rank candidate trajectories.\n\nPrediction is a crucial part of any AD stack. Learning-based prediction is increasingly popular in AD research. In \\cite{luo2018fast}, detection, tracking and motion forecasting are done using a single network.
\\cite{sadeghian2018car} uses a bird's eye view image of navigation and the motion history of agent to predict its future path. \\cite{tang2019multiple, rhinehart2019precog, rhinehart2018r2p2} combine high dimensional and low dimensional data to predict multi-modal trajectories. Both video prediction \\cite{xu2017end} and OGM prediction \\cite{hoermann2018dynamic, mohajerin2019multi} have been done in AD research. In contrast to such work, our approach aims to simultaneously predict states and estimate CMs. In many cases, there is no clear way to compute the uncertainty of the isolated prediction models and propagate it to the motion planner. Furthermore, although we evaluate the quality of estimated CMs with a specific trajectory sampler, in our design any set of trajectories can be ranked using the estimated CMs. Predicting trajectories directly makes both generalization and adaption to a new or temporary constraint harder.\n\n\\subsection{RL and IRL}\n\nAs a powerful framework for sequential decision making, reinforcement learning (RL) makes an attractive choice for AD research. \\cite{paxton2017combining} proposed a hierarchical RL scheme that divides tasks into high-level decisions and low-level control. \\cite{shalev2016long} introduced a two-phase approach for cruising and merging tasks in autonomous driving, where future states are first predicted from the current ones and then an RL planner uses these predictions to output acceleration. These proposals utilize low-dimensional states derived from perceptual processing of high dimensional sensory data. However, by implicitly conflating the processed low dimensional state estimations as the actual states, error and uncertainty do not get propagated through to improve the whole system. Moreover, similar to the case of manual design of the cost function, defining reward for RL solutions remains an open research challenge.\n\nInverse reinforcement learning (IRL) focuses on learning a reward function from the expert's behavior, with maximum entropy \\cite{ziebart2008maximum} being a popular method. However, standard IRL algorithms can not deal with high dimensional data and continuous space. Some works such as \\cite{rosbach2019driving} extended these algorithms to use high dimensional data. But linear estimation of reward function can adversely limit generalization of the system. Similar to our work, \\cite{wulfmeier2016watch} estimates a high dimensional cost map using a maximum entropy deep IRL framework. There are two main differences between this work and ours. First, the system is not modular which challenges the interpretability in case of failure. Second, in such IRL frameworks training the policy and estimating the reward is done together. Empirically training the new policy with the learned reward does not lead to similar performance. Hence, if the set of actions or the policy model needs to be changed, the learned reward may not lead to similar performance. \n\n\\subsection{Imitation Learning}\n\nImitation learning (IL) is a standard approach to learning from expert demonstration. It has the advantage of not requiring costly online exploration typically necessary for RL methods. \\cite{bojarski2016end,codevilla2018end} follow a monolithic, end-to-end approach where the network receives images and generates control commands. While this approach avoids costly data labeling, direct behavior cloning suffers from cascading errors when dealing with out-of-distribution inputs. 
To alleviate this issue, \\cite{kuefler2017imitating} adopted Generative Adversarial Imitation Learning (GAIL) \\cite{ho2016generative}. But the interpretability challenges remain.\n\nSince highly mature control solutions are widely used in AD systems, learning low-level control is often unnecessary and learning-based trajectory planners are more practical. \\cite{bansal2018chauffeurnet} uses a modular network to generate driving trajectories. They propose novel data augmentation approaches to generate scenarios, such as collisions, which most real-world datasets lack, and give the network a better opportunity to learn to handle such situations. \\cite{zeng2019end} also has a modular design where the perception module does 3D object detection and motion forecasting. Cost volume generation is done by another component of the overall network. They use the cost volume to choose the trajectory with the lowest cost. However, in contrast to our proposal, both of these works heavily rely on labeled data for the perception components.\n\\noindent\n\n\n\n\n\n\\section{Technical Approach}\n\nWe address the problem of predicting a high dimensional cost map by proposing a modular architecture that can be trained end-to-end in a self-supervised fashion. We then use this prediction to evaluate and score different trajectories. The model takes a sequence of LiDAR point clouds and other contextual information (e.g., the map) as input. It predicts the future OGMs representing road dynamics and simultaneously estimates the space-time cost of driving in each cell of the OGM. The proposed architecture has two components: (1) an OGM predictor and (2) a CM estimator. Note that this design adds interpretability because the predicted OGMs over the planning horizon are independently semantically meaningful.\n\n\\subsection{Input Encoding and Network Architecture}\n\nOn the input side, frames of LiDAR point clouds from the immediate past are first converted into a sequence of OGMs. These OGMs are then transformed to a reference frame attached to the vehicle's current position. Both prediction and cost estimation are done with respect to this coordinate system to avoid unnecessary complexity due to the motion of the ego vehicle. Moreover, similar to \\cite{bansal2018chauffeurnet}, we encode semantic information from the HD map, such as the drivable area and intersection structure, in separate channels. We then concatenate the map encodings with the OGMs to form the input to the network.\n\nPredicted OGMs are represented by a binary random variable $o_k(i, j) \\in \\{0, 1\\}$ where $o_k(i, j)$ is the occupancy state of the cell at the $i^{th}$ row and the $j^{th}$ column at the $k^{th}$ time step, with 1 for occupied and 0 for empty. $p_o$ is the probability of occupancy associated with each cell. The cost value at each cell, $c_k(i, j)$, is coded similarly, with 1 and 0 assigned to high- and low-cost cells respectively.\n\nWe study two alternative architectures. For the \\textit{Recurrent Cost Map Estimator} (RCME), we assume the CMs in each time step are conditionally dependent on the information in previous time steps. For the \\textit{Multi-Step Cost Map Estimator} (MSCME), we omit this assumption. In Section \\ref{sec:exp} we show that these two architectures have similar performance for shorter prediction horizons.
The recurrent architecture performs better for longer planning horizons.\n\n\\vspace{5pt}\n\\begin{figure}[h]\n\t\\raggedleft\n\t\\includegraphics[scale=0.36]{sections\/figs\/rnn_estimator.pdf}\n\t\\caption{Recurrent Cost Map Estimator}\n\t\\label{fig:rnncost}\n\\end{figure}\n\n\\subsubsection{Recurrent Cost Map Estimator}\n\nThe RCME incorporates the \\textit{difference learning} method in \\cite{mohajerin2019multi} for OGM prediction and extends it with our cost estimator module to simultaneously predict CMs (Figure \\ref{fig:rnncost}). Formally, the output of the network at time step $k$ can be represented as a two-channel tensor:\n\n\\vspace{1pt}\n\\begin{equation}\n\\resizebox{0.9\\hsize}{!}{$\n\\mathbf{B}^1_{k}=\\Big{[}p_o\\big{(}o_k(i,j)\\big{)} \\Big{]}=\n\\begin{bmatrix}\np_o\\big{(}o_k(1,1)\\big{)} & \\dots & p_o\\big{(}o_k(1,W)\\big{)} \\\\\n\\vdots & \\ddots & \\vdots \\\\\np_o\\big{(}o_k(H,1)\\big{)} & \\dots & p_o\\big{(}o_k(H,W)\\big{)} \\\\\n\\end{bmatrix}\n$}\n\\label{eq:ogm0}\n\\end{equation}\n\n\n\\vspace{5pt}\n\\begin{equation}\n\\resizebox{0.9\\hsize}{!}{$\n\\mathbf{B}^2_{k}=\\Big{[}p_c\\big{(}c_k(i,j)\\big{)} \\Big{]}=\n\\begin{bmatrix}\np_c\\big{(}c_k(1,1)\\big{)} & \\dots & p_c\\big{(}c_k(1,W)\\big{)} \\\\\n\\vdots & \\ddots & \\vdots \\\\\np_c\\big{(}c_k(H,1)\\big{)} & \\dots & p_c\\big{(}c_k(H,W)\\big{)} \\\\\n\\end{bmatrix}\n$}\n\\label{eq:ogm1}\n\\end{equation}\n\n\\noindent\nwhere $B^1_{k}$ and $B^2_{k}$ are the first and second output channels respectively. $o_k(i, j)$ is taken to be independent of the values of other cells at time step $k$, but conditioned on values of all cells in previous time steps. And the same assumption is made for $c_k(i,j)$:\n\\vspace{-1pt}\n\\begin{equation}\np_o\\big{(} o_k(i,j)|\\mathcal{O}_{k-1},\\mathcal{O}_{k-2},...\\big{)}\n\\label{eq:probs_1}\n\\end{equation}\n\\noindent\n\\begin{equation}\np_c\\big{(} c_k(i,j)|\\mathcal{C}_{k-1},\\mathcal{C}_{k-2},...\\big{)}\n\\label{eq:probs_2}\n\\end{equation}\n\\noindent\nwhere\n\\noindent\n\\begin{equation}\n\\mathcal{O}_k=\\big{\\lbrace}o_k(m,n)|m=1,...,H;n=1,...,W\\big{\\rbrace}\n\\end{equation}\n\\begin{equation}\n\\mathcal{C}_k=\\big{\\lbrace}c_k(m,n)|m=1,...,H;n=1,...,W\\big{\\rbrace}\n\\end{equation}\n\n\\noindent\nand m and n are indices ranging over the entire OGM. The conditional probabilities in Equation \\ref{eq:probs_1} and \\ref{eq:probs_2} may be captured using the recurrent architecture proposed. In practice, a short history suffices. The network observes OGMs for the past $\\tau$ time steps and predicts the OGMs for the next $T$ time steps while estimating a CM at every step.\n\nIn reality not every cell in the OGM changes between two time steps. The \\textbf{Difference Learning} module implicitly distinguishes between dynamic and static objects. By adding the features extracted by this module to the previous observed OGMs $B^{1*}_{k}$ or predicted OGMs $\\hat{B^1_{k}}$ and stacking them with the encoded map, the \\textbf{OGM Classifier} can be trained to effectively and efficiently predict if a cell is occupied or not. We did not use the same architecture for estimating CMs as they are not directly \\textit{observable} and imposing such a feedback loop can amplify error in CM estimation. Hence, the stacked OGM features are separately fed along with the encoded map to the \\textbf{Cost Estimator} module that consists of an encoder and a decoder. The encoder has \\{32, 64\\} $3\\times3$ convolution filters with stride 2. 
The decoder has two deconvolution layers with \\{64, 32\\} $3\\times3$ filters with stride 2 each deconvolution layer is followed by a convolution layer with the same size and stride 1. \n\n\\subsubsection{Multi-Step Cost Map Estimator}\n\nThe MSCME architecture is illustrated in Figure \\ref{fig:mscost}. Similar to RCME, the OGMs and the encoded maps are fed to the predictor network. The predicted and observed OGMs are then stacked with the encoded map and passed through an encoder and then a decoder to estimate a CM for $T$ time steps. To avoid computationally expensive 3D convolutions we concatenate time steps along the last dimension. In order to get similar performance as the previous architecture we used \\{32, 64, 128\\} filters in the encoder and \\{128, 64, 32, T\\} filters in the decoder where $T$ is the number of time steps we predict the CM for.\n\\vspace{5pt}\n\\begin{figure}[h]\n\t\\raggedleft\n\t \\includegraphics[scale=0.40]{sections\/figs\/mscost_estimator_all.pdf}\n\t \\caption{Multi-Step Cost Map Estimator}\n\t \\label{fig:mscost}\n\\end{figure}\n\n\n\\subsection{Training Loss}\nThe designed architecture is a multi-task network. The objective function is accordingly defined to direct the network to learn each task:\n\\noindent\n\\begin{align}\n \\mathcal{L}_{total} = w_1 \\mathcal{L}_{Pred} + w_2 \\mathcal{L}_{CME}\n\\end{align}\n\nwhere $w_1$ and $w_2$ are hyperparameters. We define each term in detail below.\n\n\\subsubsection{Prediction Loss}\n\nWe follow \\cite{mohajerin2019multi} to formulate OGM prediction as a classification problem where each cell can be occupied or not. Hence, the objective function includes a pixel-wise Cross-Entropy between the predicted OGMs, $\\hat{B^1}$, and the target OGMs, $B^{1*}$, multiplied by a visibility matrix, $\\mathcal{V}$, described in \\cite{dequaire2018deep} to handle occlusion. Due to unbalanced number of occupied and free cells, we normalize the loss by the ratio of occupied\/free cells, $\\eta$. Finally, to push the predicted OGMs toward the target OGMs we use Structural Similarity Index Metrics (SSIM) \\cite{wang2003multiscale}. The OGM prediction loss is then defined as:\n\n\\begin{align}\n \\resizebox{1\\hsize}{!}{$\n \\mathcal{L}_{Pred} = \\frac{\\eta}{WH} \\sum_{x} \\sum_{y} \\mathcal{V} \\odot \\mathcal{H}(\\hat{B^1}, B^{1*}) + \\gamma (1 - SSIM(\\hat{B^1}, B^{1^*}))\n $}\n\\end{align}\n\\noindent\nwhere $H$ and $W$ are the OGM dimensions, $\\odot$ denotes the element-wise product, $\\mathcal{H}(a, b)$ is the pixel-wise cross-entropy and $\\gamma$ is a hyperparameter.\n\\subsubsection{CM Estimation Loss:} Since there is no ground truth for the CM, defining an objective function which pushes the network to learn meaningful CM is challenging. Relying only on the expert's trajectory makes it difficult for the network to generalize. The expert's trajectory only occupies a few cells and it does not give information about the most of the surrounding area. To address these issues we define an objective function consisting of two terms: \n\n\\begin{align}\n\\mathcal{L}_{CME} = \\alpha \\mathcal{L}_{p} + \\beta \\mathcal{L}_{aux}\n\\end{align}\n\n\\noindent\nwhere $\\alpha$ and $\\beta$ are hyperparameters. \n\n$\\mathcal{L}_{p}$ is defined to inject the prior knowledge about cell cost such as the high cost associated with non-drivable areas. 
Specifically, $\\mathcal{L}_p$ is a classification loss comparing the generated CM and a target ${C}_{target}$, which at each time step is 0 for the cells occupied by the expert and 1 for non-drivable areas and the occupied cells in drivable areas. Since there is no information about the other cells, we do not want to push the network to assign particular values to them. Moreover, the cells belonging to the expert's trajectory (low-cost cells) are far fewer than the high-cost ones. In order to address both of these issues, we calculate the loss on a subset of cells selected by a mask $\\mathcal{M}$ with 0 and 1 elements. The total number of 1s is set to a predefined number, $N$. The elements of $\\mathcal{M}$ are 1 for the pixels occupied by the expert. The remaining 1s are sampled from the high-cost cells, i.e., cells occupied by objects or in the non-drivable area, with occupied high-cost cells being twice as likely to be selected. This ratio empirically speeds up training. The $\\mathcal{L}_p$ loss function is then:\n \n \\begin{align}\n \\mathcal{L}_p = \\frac{1}{WH} \\sum_{x} \\sum_{y} \\mathcal{M} \\odot \\mathcal{H}(B_{k}^2, C_{target})\n \\end{align}\n\n\\noindent\n\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[scale=1.1]{sections\/figs\/intentions.pdf}\n\t\\caption{\\small Different modes of driving derived from the data. Dashed lines show the mean of each cluster.}\n\t\\label{fig:intention}\n\\end{figure}\n\nWe added an \\textit{Imitation Network} to the architecture and defined an auxiliary task in the overall objective function, $\\mathcal{L}_{aux}$, in order to indirectly push the CMs to be a representation of the underlying reasons for the expert's behavior. For this purpose, a sequence of estimated CMs from time $\\tau$ to the prediction horizon $\\tau + T$ is fed to an encoder. These features are then utilized by an \\textit{intention prediction} head and a \\textit{regressor} head to predict the expert's trajectory.\n \nA predefined set of \"intentions\" is used to represent different semantic driving modes of the expert (e.g., changing lane, speeding up, slowing down), $\\mathcal{I} = \\{i^k\\}^K_{k=1}$ where $i^k = \\{s^k_1, ... s^k_T\\}$ defines a trajectory for $T$ timesteps. These intentions are derived by clustering the expert's trajectories in the dataset (Figure \\ref{fig:intention}). Specifically, we used the DBSCAN clustering algorithm with the Hausdorff distance to cluster trajectories. Given the CMs, for each intention $i^k$, the intention predictor head predicts a Bernoulli distribution $p(i^k|CM_{\\tau:T+\\tau})$ to determine whether the expert chose that driving mode or not. Hence, each trajectory can belong to multiple clusters at the same time. In this way, we do not penalize the network for choosing modes that are close to each other. One can also use soft labels in a cross-entropy setting, where the labels are the normalized distances to the clusters. However, our formulation empirically worked better for this architecture. The regressor head then outputs $K$ offsets, $s^o_k$, between the mean of each cluster $\\mu_k$ and the expert's trajectory $s^*$. We then use a weighted MSE to optimize the networks.
\n\n\\vspace{3pt}\n\\begin{align} \\label{eq:aux}\n\\mathcal{L}_{aux} = \\frac{1}{K}\\sum_{k}\\mathcal L_{cls}(p_k, p_k^*) + \\lambda \\sum_{k}\\omega_k MSE(\\mu_k + s^o_k , s^*)\n\\end{align}\n\\vspace{3pt}\n\n\\noindent\nwhere ${p}_k$ is the probability of an intention to be the expert's intention, $\\omega_k$ is the normalized distance of the groundtruth trajectory to each mode and $\\lambda$ is a hyperparameter.\n\n\\subsection{Motion Planning} \\label{sec:planning}\n\nTo evaluate the quality of the CMs, we use them for motion planning. We follow \\cite{zeng2019end} to use clothoids \\cite{shin1992path} as well as circular and straight lines to define the shape of candidate trajectories. The velocity profile of a candidate trajectory is determined by sampling acceleration in the range of $[-5, 5]$ $m\/{s^2}$ and velocity between 0 and the speed limit. Since computing the cost of each candidate trajectory using the estimated CMs is a cheap operation, our motion planning module is computationally very efficient.\n\nNote that the output of the regressor head for the imitation auxiliary task could in principle be used for trajectory planning. However, we opt for a simple sampling method that uses the CMs. This is partly to demonstrate the versatility of the CMs when working with motion planning methods and partly because the CM estimations are presumably much more reliable since they integrate by design broader concerns beyond imitating the expert. ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\subsection*{The model}\n\nWe study in this article a Large Deviation Principle (LDP) for the weighted empirical mean\n\\begin{equation*}\nL_n =\\frac{1}{n}\n\\sum_1^{n} \\mathbf{f}(x_i^n)\\cdot Z_i ,\n\\end{equation*}\nwhere $(Z_i)_{i\\in \\mathbb{N}}$ is a sequence of $\\mathbb{R}^d$-valued independent and\nidentically distributed (i.i.d) random variables satisfying:\n\\begin{equation}\\label{moment-exp}\n\\mathbb{E}\\, e^{\\alpha |Z_1|}<\\infty \\qquad \\textrm{for\\ some}\\quad \\alpha>0.\n\\end{equation}\nThe application $\\mathbf{f}:{\\mathcal X}\\rightarrow\n\\mathbb{R}^{m\\times d}$ is a $m\\times d$ matrix-valued continuous\nfunction, $({\\mathcal X},\\rho)$ being a locally compact metric space. The term\n$\\mathbf{f}(x)\\cdot Z$ denotes the product between matrix\n$\\mathbf{f}(x)$ and vector $Z$. The set $\\{x_i^n,1\\le i\\le n,\\ n\\ge\n1\\}$ is an ${\\mathcal X}$-valued sequence of deterministic elements\nsuch that the empirical measure\n$\\hat{R}_n\\stackrel{\\triangle}{=}\\frac{1}{n} \\sum_{i=1}^n\n\\delta_{x_i^n}$ satisfies:\n\\begin{equation}\\label{mun}\n\\hat{R}_n \\xrightarrow[n\\rightarrow\\infty]{\\mathrm{weakly}} R\\ ,\n\\end{equation}\nwhere $R$ is a probability measure with compact support ${\\mathcal\n Y}$. \\\\\nWe focus in this paper on cases where there are outliers,\nthat is where some of the $x_i^n$ remain far from the support (also called bulk) of\n$R$. Loosely speaking, one can think of an outlier as a sequence\n$(x_{i(n)}^n, n\\ge 1)$ satisfying:\n\\begin{equation}\\label{loose-outlier}\n\\liminf_{n\\rightarrow \\infty} \\rho(x_{i(n)}^n, {\\mathcal Y})>0\\ .\n\\end{equation}\nAt a large deviation level, such outliers may have a dramatic impact\non the shape of the rate function as demonstrated in the simple\nexample of {\\sc Figure} \\ref{dessin}. 
Although the model under study\nlooks very similar to the LDP studied in \\cite{Naj02}, the presence of\noutliers substantially modifies the resulting LDP and may naturally create\ninfinitely many non-exposed points (see the definition in \\cite{DemZei98} and also Remarks \\ref{non-expo} and \\ref{non-expo-rem}) for the rate function.\n\nThe purpose of this article is to provide clear assumptions (which\ncover situations where (\\ref{loose-outlier}) can occur) over the set\n$\\{\\mathbf{f}(x_i^n),\\ 1\\le i\\le n,\\ 1\\le n\\}$ and over $Z_i$ under\nwhich fairly general LDP results can be proved.\n\n\\begin{figure}\n\\begin{center}\n\\epsfig{figure=trueRF,height=5cm}\n\\epsfig{figure=twistedRF,height=5cm}\n\\caption{\\small The rate function of $\\frac 1n \\sum_{i=1}^n X_i^2$\n where the $X_i$'s are ${\\mathcal N}(0,1)$ Gaussian i.i.d. random variables \n (left); the rate function of $\\frac 1n \\sum_{i=1}^{n-1} X_i^2\n +\\frac{3}{n} X_n^2$ (right). Both rate functions coincide for $x\\le\n \\frac32$ but the right one is linear for $x>\\frac\n 32$.}\\label{dessin}\n\\end{center}\n\\end{figure} \n\n\\subsection*{Motivations and related work}\nSuch models are of particular interest in the field of statistical\nmechanics (spherical spin glasses in \\cite{BDG01}, spherical integrals\nin the finite rank case in \\cite{GuiMai05}, etc.) where one has often\nto establish a LDP for the empirical mean $L_n$ in the case where the\nrandom variable $Z_i$ satisfies condition (\\ref{moment-exp}). In\nparticular, spherical integrals are intimately connected to the study\nof Deformed Ensembles (see \\cite{Pec06} for instance for the\ndefinition) in Random Matrix Theory. In dimension one, $Z_i$ is\ntypically the square of a Gaussian random variable. The measure\n$\\frac{1}{n} \\sum_{i=1}^n \\delta_{x_i^n}$ is then a realization of the\nempirical measure of the eigenvalues associated to a given random\nmatrix model and there are important cases when some of the $x_i^n$'s\nstay far away from the support of $R$. Indeed, there has recently been\na strong interest in random matrix models (so-called spiked models)\nwhere some of the largest eigenvalues lie out of the bulk, that is\nwhere the set of limit points of $(x_i^n,\\ 1\\le i\\le n,\\ n\\ge 1)$ can\ndiffer from the support of $R$ (see Johnstone \\cite{Joh01}, Baik et\nal. \\cite{BBP05}, \\cite{BaiSil04pre}, P\\'ech\\'e \\cite{Pec06}). These\nspiked models are of particular interest for statistical applications\n\\cite{Joh01}.\n\nThe study of the LDP for weighted means was developed by Bercu et al.\n\\cite{BerGamRou97} for Gaussian functionals and considered in greater\ngenerality in Najim \\cite{Naj02}. In \\cite{Naj02}, the LDP is stated\nfor $L_n$ under condition (\\ref{moment-exp}) but in the case where\n$(x_i^n,\\ 1\\le i\\le n,\\ 1\\le n)$ is a subset of ${\\mathcal Y}$, the\nsupport of the limiting probability measure $R$. In particular, the\nframework of \\cite{Naj02} does not allow any of the $x_i^n$'s to lie\nfar from the bulk. LDPs involving outliers can be found in Bercu et\nal. \\cite{BerGamRou97},\nGuionnet and Ma\\\"{\\i}da \\cite{GuiMai05}.\nFor related work concerning quadratic forms of Gaussian processes, we\nshall also refer the reader to Bercu et al. \\cite{BerGamLav00},\nGamboa et al. 
\\cite{GamRouZan99}, Bryc and Dembo \\cite{BryDem97} and Zani \\cite{Zan99}.\n\n\\subsection*{Presentation of the results} The purpose of this article is to establish the LDP for the empirical mean $L_n$ under the moment assumption (\\ref{moment-exp}) and under assumptions which allow the presence of outliers (see (\\ref{loose-outlier})). Such an LDP will rely on the individual LDP for $\\frac{Z_1}n$. This is the content of the following assumption.\n\\begin{assump}\\label{LDP-Particle}\nThe $\\mathbb{R}^d$-valued random variable $Z_1$ satisfies the following exponential condition:\n$$\n\\mathbb{E}\\, e^{\\alpha |Z_1|}<\\infty \\qquad \\textrm{for\\ some}\\quad \\alpha>0,\n$$\nand \n$\\frac {Z_1}n$ satisfies the LDP with a good rate function denoted by $I$.\n\\end{assump}\nNote that if $\\frac{Z_1}n$ does not satisfy an LDP, one can construct counterexamples where $L_n$ does not fulfill an LDP (see for instance \\cite[Section 2.3]{Naj02}). Finally, two subcases of Assumption (A-\\ref{LDP-Particle}) lead to two distinct classes of results:\n\n\\subsubsection*{The case where $I$ is convex (Assumption (A-\\ref{convex-rf}), Section \\ref{subsection-assump})} This paper is mainly devoted to the study of this case. If $I$ is convex then the assumptions on the sets $C_n^{\\mathbf{f}}=\\{\\mathbf{f}(x_i^n),\\ 1\\le i\\le n, \\ 1\\le n\\}$ needed to state the LDP for $L_n$ are quite mild. Apart from a standard compactness assumption (Assumption (A-\\ref{Compacity}), see Section \\ref{subsection-assump}), the main assumption on $C_n^{\\mathbf{f}}$ (Assumption (A-\\ref{Limit-Points}), Section \\ref{subsection-assump}) bears only on the limit points of $C_n^{\\mathbf{f}}$ (in the sense of Painlev\\'e-Kuratowski convergence of sets) and on their role in the LDP. It turns out that (A-\\ref{Limit-Points}) is an intricate assumption relating the limiting behaviour of $C_n^{\\mathbf{f}}$ to some of its limit points involved in the definition of a certain convex domain. This convex domain plays a role in the definition of the rate function of the LDP. As demonstrated by the examples in Section \\ref{subsection-examples}, (A-\\ref{Limit-Points}) covers a wide variety of models with outliers in the convex case, at least those for which an LDP is to be expected.\n\nUnder Assumptions (A-\\ref{LDP-Particle})-(A-\\ref{Limit-Points}) and the more classical assumption (A-\\ref{hypo-EJP}) (convergence of $\\hat R_n$ to $R$), the empirical mean $L_n$ satisfies the LDP with a good convex rate function (Theorem \\ref{ldp-mem}). This rate function admits a fairly explicit representation (in terms of convex features) where the role of the outliers is quite transparent (Theorem \\ref{theo-rf} and examples in Section \\ref{section:examples}).\n\n \n\\subsubsection*{The case where $I$ is not convex} In this case, one can still prove the LDP but the assumptions on $C_n^{\\mathbf{f}}$ are much more stringent and the rate function is given by an abstract formula. Moreover, very little insight can be gained from the study of the general formula of the rate function.
It seems\nthat the study must be held on a case-by-case analysis.\n\n\n\\subsection*{Outline of the article}\nIn order to study the Large Deviations of $L_n$, we shall separate outliers from the bulk and split accordingly $L_n$ into two subsums:\n\\begin{eqnarray*}\nL_n&=&\n\\frac 1n \\sum_{\\{x_i^n\\ \\textrm{far from the bulk}\\}} \\mathbf{f}(x_i^n)\\cdot Z_i+\n\\frac 1n \\sum_{\\{x_i^n\\ \\textrm{near or in the bulk}\\}} \\mathbf{f}(x_i^n)\\cdot Z_i\\\\\n&\\stackrel{\\triangle}{=}& \\pi_n + \\tilde L_n.\n\\end{eqnarray*}\nThe idea is then to establish separately the LDP for each\nsubsum. \nThis line of proof has been developed in the one-dimensional\nsetting for Gaussian quadratic forms by Bercu et al.\n\\cite{BerGamRou97} and is extended to the multidimensional\nsetting in this article. \n\nThe paper is organized as follows. \nSections \\ref{section:partiel},\n\\ref{section:ldp} and \\ref{section:examples} are devoted to the study\nof the convex case. \n\nIn Section \\ref{section:partiel}, we study the Large Deviations for the following model:\n\\begin{eqnarray}\\label{model-sec2}\n\\pi_n =\\frac 1n \\sum_{x_i^n \\in C_n} \\mathbf{f}(x_i^n)\\cdot Z_i \\quad \\textrm{where}\n\\quad \\frac{\\mathrm{card}(C_n)}{n} \\xrightarrow[n\\rightarrow\\infty]{} 0.\n\\end{eqnarray}\nThe main assumptions related to the set $C_n^{\\mathbf{f}} =\\{ \\mathbf{f}(x_i^n);\\ x_i^n \\in C_n\\}$\nare stated and the LDP for $\\pi_n$ is established.\n\nIn Section \\ref{section:ldp}, the decomposition $L_n =\\pi_n +\\tilde\nL_n$ where $\\pi_n$ satisfies \\eqref{model-sec2} is precisely specified, \nthe LDP for $L_n$ is established and a\nrepresentation formula is given for the rate function.\nSection \\ref{section:examples} is devoted to examples of LDPs with\noutliers in the convex case. \n\nA general LDP stated with an abstract rate function is established in\nthe non-convex case in Section \\ref{section:nonconvex}.\nIn Section \\ref{section:eigen}, a\npartial study of the rate function is also carried out in the\nnon-convex case in the setting of a specific example.\n\nComments related to the link between the study of the spherical integral and \nthe LDP of $L_n$ are made in Sections \\ref{section:examples} (rank one\ncase) and \\ref{section:eigen} (higher rank).\n\n\\section{The LDP for the partial mean $\\pi_n$ in the convex case}\\label{section:partiel}\n\nLet $(C_n)_{n\\geq 1}$ be a finite subset of ${\\mathcal X}$. This section is devoted to the study of the LDP of \n$$\n\\pi_n = \\frac 1n \\sum_{x_i^n \\in C_n} \\mathbf{f}(x_i^n)\\cdot Z_i \\quad \n\\textrm{where}\\quad \\frac{\\mathrm{card}( C_n)}{n} \\xrightarrow[n\\rightarrow \\infty]{} 0,\n$$\nwith $\\mathrm{card}(C_n)$ standing for the cardinality of the set $C_n$. It will be proved \nin Section \\ref{decomposition} that $L_n$ can be decomposed as $\\pi_n+\\tilde L_n$ with $\\pi_n$\nas above.\n\\begin{rem} In the case where the random variable $Z_1$ satisfies\n\\begin{equation}\\label{every-exp-moment}\n\\mathbb{E} e^{\\alpha |Z_1|} <\\infty\\quad \\textrm{for all} \\quad \\alpha \\in \\mathbb{R}^+,\n\\end{equation}\nthe following limit holds true:\n$$\n\\limsup_{n\\rightarrow \\infty} \\frac 1n\\log \\mathbb{P}\\{ |\\pi_n|>\\delta \\}=-\\infty \\quad \\textrm{for all}\\quad \\delta>0.\n$$\nOtherwise stated $L_n$ and $\\tilde L_n$ are exponentially equivalent and $\\pi_n$ does not play any role \nat a large deviation level. 
Of course the situation is completely different if \\eqref{every-exp-moment}\ndoes not hold.\n\\end{rem}\nWe first introduce some notations as well as the concepts of inner limit, outer limit and\nPainlev\\'e-Kuratowski convergence for sets.\nWe then state the assumptions over the sets\n$C_n^{\\mathbf{f}}=\\{ \\mathbf{f}(x_i^n),\\\nx_i^n\\in C_n\\}$ and prove the LDP for $\\pi_n$.\n\n\n\\subsection{Notations}\nDenote by ${\\mathcal B}({\\mathcal Z})$ the Borel sigma-field of a given topological space ${\\mathcal Z}$ \n(usually $\\mathbb{R}^d$, $\\mathbb{R}^m$, $\\mathbb{R}^{m\\times d}$ or ${\\mathcal X}$). Denote by $|\\cdot|$ a norm on \nany finite-dimensional vector space ($\\mathbb{R}^d$, $\\mathbb{R}^m$ or $\\mathbb{R}^{m\\times d}$).\nIn the sequel, we use bold letters $\\mathbf{a},\\mathbf{b},\\mathbf{y}$, etc. to denote $m\\times d$ matrices. We denote by \n$\\langle \\cdot,\\cdot\\rangle$ the scalar product in any finite-dimensional space and by $\\cdot$\nthe product between vectors and matrices with compatible size. \nLet $A$ be a subset of $\\mathbb{R}^k$. We denote by $\\bar{A}$ its closure, by $\\mathrm{int}(A)$ its interior, \nby $\\Delta(\\cdot\\mid A)$ the convex indicator function of the set $A$ and by $\\Delta^*(\\cdot\\mid A)$ \nits convex conjugate (also called the support function of $A$), that is:\n\\begin{eqnarray*}\n\\Delta(\\theta \\mid A)&=&\\left\\{\n\\begin{array}{ll}\n0&\\textrm{if} \\ \\theta \\in A,\\\\\n\\infty&\\textrm{else}.\n\\end{array}\\right. ,\\\\\n\\Delta^*(y \\mid A)\n&=& \\sup_{ \\theta \\in \\mathbb{R}^k}\\{ \\langle y, \\theta \\rangle -\\Delta(\\theta\\mid A)\\}\n=\\sup_{\\theta\\in A} \\langle y, \\theta \\rangle, \n\\end{eqnarray*}\nwhere $y$ and $\\theta$ are in $\\mathbb{R}^k$. The following proposition whose proof is straightforward \nwill be of constant use in the sequel.\n\\begin{prop}\\label{support-function} \nLet $A$ be a subset of $\\mathbb{R}^k$, then \n$$\n\\Delta^*(\\cdot\\mid A) = \\Delta^*(\\cdot \\mid \\bar{A}).\n$$ \nIf moreover $A$ is convex with non-empty interior, then \n$$\n\\Delta^*(\\cdot\\mid \\mathrm{int}(A)) = \\Delta^*(\\cdot\\mid A) = \\Delta^*(\\cdot \\mid \\bar{A}).\n$$\n\\end{prop}\n\nLet $D_n$ be a sequence of subsets of $\\mathbb{R}^{m\\times d}$. We\ndefine its outer limit (denoted by $D_{\\infty,\\mathrm{out}}$) \nand its inner limit (denoted by $D_{\\infty,\\mathrm{in}}$) \nby \n\\begin{eqnarray*}\nD_{\\infty,\\mathrm{out}} &=& \\left\\{\n\\mathbf{x}\\in \\mathbb{R}^{m\\times d},\\ \\exists\\, \\phi: \\mathbb{N}\n\\rightarrow \\mathbb{N} \\ \\textrm{increasing,}\\ \n\\exists\\, \\mathbf{x}_{\\phi(n)} \\in D_{\\phi(n)},\\ \\mathbf{x}_{\\phi(n)}\\xrightarrow[n\\rightarrow \\infty]{} \\mathbf{x} \\right\\}\\\\ \nD_{\\infty,\\mathrm{in}} &=& \\left\\{\n\\mathbf{x}\\in \\mathbb{R}^{m\\times d},\\ \\exists\\, n_0,\\ \\forall\\, n\\ge n_0,\\exists\\, \\mathbf{x}_n\\in D_n,\\ \n\\mathbf{x}_{n}\\xrightarrow[n\\rightarrow \\infty]{} \\mathbf{x} \\right\\}\\\\\n\\end{eqnarray*}\nThe limit $D_{\\infty}$ of the sets $(D_n)$ exists if the outer limit and the inner limit are equal. 
\nSet convergence in this sense is known as Painlev\\'e-Kuratowski convergence and in this case, we will denote:\n$$\nD_n\\xrightarrow[n\\rightarrow\\infty]{\\textrm{pk}} D_{\\infty}.\n$$\nFor more details on Painlev\\'e-Kuratowski convergence of sets, see Rockafellar and Wets \\cite[Chapter 4]{RocWet98}.\n\n\n\\subsection{A preliminary analysis: Two simple examples}\\label{subsection-examples} Consider\n$$\nC^{\\mathbf{f}}_n=\\left\\{ \\mathbf{f}(x_i^n),\\ x_i^n \\in C_n\\right\\}\\quad \\textrm{where}\\quad\n\\frac{\\mathrm{card}(C_n)}{n} \\rightarrow 0.\n$$\nThe sets $C_{\\infty,\\mathrm{in}}^{\\mathbf{f}}$ and $C_{\\infty,\\mathrm{out}}^{\\mathbf{f}}$ are respectively the inner and outer limits of $(C_n^{\\mathbf{f}})$.\nIn the study of the forthcoming examples, we will focus on the links between the LDP for\n$\\pi_n$ and the sets $C_{\\infty,\\mathrm{in}}^{\\mathbf{f}}$ and $C_{\\infty,\\mathrm{out}}^{\\mathbf{f}}$. This section is aimed at introducing Assumption \n(A-\\ref{Limit-Points}) but can be skipped as no further notation is introduced.\n\n\\subsubsection{Example 1: A simple case where the LDP fails to hold for $\\pi_n$}\\label{example-1}\nLet $X$ be a standard Gaussian random variable and consider\n$\\pi_n=\\frac{2+(-1)^n}{n} X^2$. Direct computations yield the LDP for\n$\\pi_{2n}$ (resp. $\\pi_{2n+1}$) with good rate function\n$\\Delta^*_{\\mathrm{even}}$ (resp. $\\Delta^*_{\\mathrm{odd}}$) where\n$$\n\\Delta^*_{\\mathrm{even}}(z)=\\left\\{\n\\begin{array}{ll}\nz\/6 & \\mathrm{if}\\ z>0,\\\\\n\\infty & \\mathrm{else}.\n\\end{array}\\right. \n\\quad \\textrm{and} \\quad\n\\Delta^*_{\\mathrm{odd}}(z)=\\left\\{\n\\begin{array}{ll}\nz\/2 & \\mathrm{if}\\ z>0,\\\\\n\\infty & \\mathrm{else}.\n\\end{array}\\right. \n$$\nTherefore one cannot expect the LDP for $(\\pi_n,n\\in \\mathbb{N})$.\n\n\n\n\\subsubsection{Example 2: The LDP holds after modification of Example 1}\\label{example-2}\nLet $X$ and $Y$ be independent standard Gaussian random variables and consider \n$\\pi_n=\\frac{2+(-1)^n}{n} X^2 +\\frac 4n Y^2$. \nIn this case, $\\pi_{2n}$ and \n$\\pi_{2n+1}$ satisfy the LDP (by a direct analysis) with the same rate function \n$$\n\\Delta^*(z)=\\left\\{\n\\begin{array}{ll}\nz\/8 & \\mathrm{if}\\ z>0,\\\\\n\\infty & \\mathrm{else}.\n\\end{array}\\right.\n$$\nThis yields the LDP for the whole sequence $(\\pi_n,n\\in\\mathbb{N})$ with rate function $\\Delta^*$. \n\nDespite the erratic behaviour of $\\frac{2+(-1)^n}{n} X^2$ (as seen in\nthe previous example), the LDP holds due to presence of the term $\\frac 4n Y^2$.\n\n\\subsubsection{Comparison of the two examples}\nDenote by \n$$\n{\\mathcal D}_y=\\{\\lambda\\in \\mathbb{R},\\ \\log \\mathbb{E} e^{\\lambda y X^2} <\\infty\\}\n=\\left(-\\infty,(2y)^{-1}\\right)\n$$ \nwhere $X$ is a standard \nGaussian random variable.\n\n\nIn the case of Example 1, one can easily check that $C_{2n}^{\\mathbf{f}}= \\{3\\}$ and \n$C_{2n+1}^{\\mathbf{f}}= \\{1\\}$. Thus $C_{\\infty,\\mathrm{out}}^{\\mathbf{f}}=\\{1,3\\}$ while\n$C_{\\infty,\\mathrm{in}}^{\\mathbf{f}}=\\emptyset$. 
It is straightforward to check that the rate functions \ndriving the LDP of $\\pi_{2n}$ and $\\pi_{2n+1}$ can be expressed as:\n$$\n\\Delta^*_{\\mathrm{even}}(z)\\ =\\ \\sup_{\\lambda\\in {\\mathcal D}_3} \\lambda z\\qquad\\mathrm{and}\\qquad \n\\Delta^*_{\\mathrm{odd}}(z)\\ =\\ \\sup_{\\lambda\\in {\\mathcal D}_1} \\lambda z,\n$$ \nThe very reason for which the LDP does not hold in this case is that\n$$\n\\bigcap_{y\\in C_{\\infty,\\mathrm{out}}^{\\mathbf{f}}} {\\mathcal D}_y\\ \\neq\\ \n\\bigcap_{y\\in C_{\\infty,\\mathrm{in}}^{\\mathbf{f}}} {\\mathcal D}_y.\n$$\n\nIn the case of Example 2, $C_{2n}^{\\mathbf{f}}=\\{3,4\\}$ while \n$C_{2n+1}^{\\mathbf{f}}=\\{1,4\\}$. Therefore $C_{\\infty,\\mathrm{out}}^{\\mathbf{f}}=\\{1,3,4\\}$ while \n$C_{\\infty,\\mathrm{in}}^{\\mathbf{f}}=\\{4\\}$. Despite the fact that $C_{\\infty,\\mathrm{out}}^{\\mathbf{f}} \\neq C_{\\infty,\\mathrm{in}}^{\\mathbf{f}}$,\nthe LDP holds in this case with good rate function given by:\n\\begin{eqnarray*}\n\\Delta^*(z)&=& \\sup_{\\lambda\\in {\\mathcal D}_4} \\lambda z.\n\\end{eqnarray*} \nAs we shall see, the underlying reason for which the LDP holds is\n$$\n\\bigcap_{y\\in C_{\\infty,\\mathrm{out}}^{\\mathbf{f}}} {\\mathcal D}_y\\ =\\ \n\\bigcap_{y\\in C_{\\infty,\\mathrm{in}}^{\\mathbf{f}}} {\\mathcal D}_y\\ \\left(=\\ {\\mathcal D}_4\\right),\n$$\nand this will be a key-point in the statement of Assumption (A-\\ref{Limit-Points}).\n\nWe are now in position to state the assumptions and the main result. \n\\subsection{Assumptions and main results}\\label{subsection-assump}\nLet $C_n$ be a finite subset of ${\\mathcal X}$ and recall that\n$$\nC^{\\mathbf{f}}_n=\\left\\{ \\mathbf{f}(x_i^n),\\ x_i^n \\in C_n \\right\\}\\quad \\textrm{where}\\quad \\frac{\\mathrm{card}(C_n)}{n}\n\\xrightarrow[n\\rightarrow \\infty]{}0.\n$$\nLet $\\mathbf{y}$ be a\n$m\\times d$ matrix and denote by\n\\begin{equation}\\label{def-D}\n{\\mathcal D}_{\\mathbf{y}}=\\left\\{ \\lambda \\in \\mathbb{R}^m,\\ \\log \\mathbb{E}\\, e^{\\langle \\lambda, \\mathbf{y}\\cdot Z_1\\rangle} <\\infty \\right\\}.\n\\end{equation}\nWe can now state our assumptions. \n\nAssume that $Z_1$ is a $\\mathbb{R}^d$-valued random variable satisfying Assumption (A-\\ref{LDP-Particle})\nand recall that $I$ is the rate function associated to $\\frac {Z_1} n$.\n\n\\begin{assump}\\label{convex-rf}\nLet \n$ \\mathcal D_Z \\stackrel{\\triangle}{=} \\{\\theta \\in\n\\mathbb{R}^d,\\ \\log \\mathbb{E}\\, e^{\\langle \\theta, Z_1\\rangle} < \\infty\\},\n$\nthen \n$$\nI(z) = \\Delta^*(z \\mid \\mathcal D_Z).\n$$\nIn particular, $I$ is a convex rate function.\n\\end{assump}\n\n\\begin{assump}\\label{Compacity} \nLet $(D_n)_{n\\ge 1}$ be a sequence of non empty subsets of\n $\\mathbb{R}^{m\\times d}$. There exists a compact set $K\\subset \\mathbb{R}^{m\\times d}$ such that\n$D_n \\subset K$ for every $n\\ge 1$.\n\\end{assump}\n\\begin{rem}\nThis assumption implies in particular that \nthe outer limit $D_{\\infty,\\mathrm{out}}$ of $(D_n)_{n\\ge 1}$ is a nonempty compact set of $\\mathbb{R}^{m\\times d}$.\n\\end{rem}\n\n\\begin{assump}\\label{Limit-Points}\nLet $(D_n)_{n\\ge 1}$ be a sequence of subsets of\n $\\mathbb{R}^{m\\times d}$. Denote by $D_{\\infty,\\mathrm{in}}$ and $D_{\\infty,\\mathrm{out}}$ its inner\n and outer limits. Then:\n$$\n\\bigcap_{\\mathbf{y}\\in D_{\\infty,\\mathrm{in}}}{\\mathcal D}_{\\mathbf{y}}\n=\\bigcap_{\\mathbf{y}\\in D_{\\infty,\\mathrm{out}}}{\\mathcal D}_{\\mathbf{y}}\n$$\nwhere ${\\mathcal D}_{\\mathbf{y}}$ is defined by (\\ref{def-D}). 
\n\\end{assump}\n\n\n\\begin{rem}\nIf $(D_n)_{n\\ge 1}$ fulfills (A-\\ref{Compacity}) and\n(A-\\ref{Limit-Points}), then in particular, $D_{\\infty,\\mathrm{in}}$ is not empty.\n\\end{rem}\nWe can now state the main result of the section.\n\\begin{theo}\\label{ldp}\n Assume that $(Z_i)_{i\\in \\mathbb{N}}$ is a sequence of\n $\\mathbb{R}^d$-valued i.i.d. random variables. Assume moreover that\n (A-\\ref{LDP-Particle}) and (A-\\ref{convex-rf}) hold for $Z_1$.\n Assume that $({\\mathcal X},\\rho)$ is a metric space and let\n $C_n\\subset{\\mathcal X}$ be such that\n$$\n\\frac{\\mathrm{card}(C_n)}{n}\n\\xrightarrow[n\\rightarrow \\infty]{}0.\n$$\nDenote by $C_n^{\\mathbf{f}}=\\{ \\mathbf{f}(x_i^n),\\ x_i^n \\in\n C_n\\}$ where $\\mathbf{f}:{\\mathcal X} \\rightarrow \\mathbb{R}^{m\\times d}$ is continuous. \n Assume that (A-\\ref{Compacity}) and (A-\\ref{Limit-Points})\n hold for the sequence of sets $(C^{\\mathbf{f}}_n)_{n\\in\n \\mathbb{N}}$. Then the random variable\n$$\n\\pi_n =\\frac{1}{n} \\sum_{x_i^n \\in C_n} \\mathbf{f}(x_i^n)\\cdot Z_i\n$$\nsatisfies the LDP in $(\\mathbb{R}^m,{\\mathcal B}(\\mathbb{R}^m))$ with good rate function \n$$\n\\Delta^*(z\\mid {\\mathcal D})=\\sup_{} \\{\\langle \\lambda ,z \\rangle,\\ \\lambda \\in {\\mathcal D}\\}\n\\qquad\\textrm{where}\\qquad \n{\\mathcal D}=\\bigcap_{\\mathbf{y}\\in C_{\\infty,\\mathrm{in}}^{\\mathbf{f}}}{\\mathcal D}_{\\mathbf{y}}\n=\\bigcap_{\\mathbf{y}\\in C_{\\infty,\\mathrm{out}}^{\\mathbf{f}}}{\\mathcal D}_{\\mathbf{y}}.\n$$\n\\end{theo}\n\n\n\\begin{rem}[On Assumption (A-\\ref{Limit-Points})]\n A close look to the proof of Theorem \\ref{ldp} shows\n that the rate function that drives the lower bound of the LDP is the support function of \n $\\cap_{\\mathbf{y}\\in\n C_{\\infty,\\mathrm{in}}^{\\mathbf{f}}}{\\mathcal D}_{\\mathbf{y}}$ while the\n rate function that drives the upper bound is the support function of $ \\cap_{\\mathbf{y}\\in C_{\\infty,\\mathrm{out}}^{\\mathbf{f}}}{\\mathcal\n D}_{\\mathbf{y}}$. Both rate functions coincide when assuming (A-\\ref{Limit-Points}).\n(see also the examples in Section \\ref{subsection-examples}).\n\\end{rem}\n\n\n\n\n\\subsection{Proof of Theorem \\ref{ldp}}\n\nIn order to prove Theorem \\ref{ldp} , we follow the strategy developed in \\cite{Naj02}, essentially based on \nan exponential approximation technique. The next proposition is the counterpart of Lemma 5.1 in \\cite{Naj02}.\n\n\\begin{lemma}\n\\label{nw}\nLet $\\phi : \\mathbb{N} \\setminus\\{0\\} \\rightarrow \\mathbb{N} \\setminus \\{0\\}$ be such that $\\frac{\\phi(n)}{n} \\xrightarrow[n\\rightarrow \\infty]{}\n 0$. Let $(Z_i)$ be a sequence of $\\mathbb{R}^d$-valued random variables\n satisfying (A-\\ref{LDP-Particle}) and (A-\\ref{convex-rf}). Then \n$\\bar{Z}^{\\phi}_n\\stackrel{\\triangle}{=}\\frac 1n \\sum_{i=1}^{\\phi(n)} Z_i$ satisfies the LDP in $\\mathbb{R}^d$\nwith good rate function\n given by \n$$ I(y)=\\Delta^*(y\\mid {\\mathcal D}_Z)$$\nwhere ${\\mathcal D}_Z$ is defined in (A-\\ref{convex-rf}).\n\\end{lemma}\n\n\\begin{proof}\nDenote by $\\Lambda^{\\phi}_n$ the log-Laplace transform of $\\bar{Z}^{\\phi}_n$, \ni.e. $\\Lambda^{\\phi}_n(\\theta)=\\log \\mathbb{E}\\, e^{\\langle \\theta,\\bar{Z}^{\\phi}_n\\rangle}$. 
Then \n$$\n\\frac{1}{n} \\Lambda^{\\phi}_n(n\\theta)=\\frac{\\phi(n)}{n}\\log \\mathbb{E}\\,\\, e^{\\langle \\theta, Z_i\\rangle}\\xrightarrow[n\\rightarrow \\infty]{} \n\\Delta(\\theta\\mid{\\mathcal D}_Z).\n$$\nTherefore, the large deviation upper bound holds for $\\bar{Z}^{\\phi}_n$ with rate function $I$ \nby Theorem 2.3.6 (a) in \\cite{DemZei98}. \nTo prove the large deviation lower bound, it is sufficient to prove that\n$$\n-I(y) \\le \\liminf_{n\\rightarrow \\infty} \\frac{1}{n}\\log\n\\mathbb{P}\\left(\\bar{Z}^{\\phi}_n\\in B(y,\\varepsilon)\\right)\n$$\nwhere $B(y,\\varepsilon)=\\{y'\\in \\mathbb{R}^d,\\ |y'-y|<\\varepsilon\\}$.\nDefine\n$$\n\\tilde{Z}_n^{\\phi}=\n\\left\\{\n\\begin{array}{ll}\n\\frac 1n \\sum_{i=2}^{\\phi(n)} Z_i & \\textrm{if}\\ \\phi(n)\\ge2,\\\\\n0 & \\textrm{otherwise}.\n\\end{array}\n\\right. . \n$$\nThen $\n\\{Z_1\/n \\in B(y,\\varepsilon\/3)\\} \\cap \\{\\tilde{Z}^{\\phi}_{n}\\in B(0,\\varepsilon\/3)\\}\n\\subset \\{\\bar{Z}^{\\phi}_n\\in B(y,\\varepsilon)\\}\n$\nwhich yields \n\\begin{multline}\\label{eq:min}\n\\frac{1}{n}\\log\\mathbb{P} \\left(Z_1\/n \\in B(y,\\varepsilon\/3)\\right)\n+ \\frac{1}{n}\\log \\mathbb{P} \\left(\\tilde{Z}^{\\phi}_{n}\\in B(0,\\varepsilon\/3)\\right)\\\\\n\\le \\frac{1}{n}\\log \\mathbb{P}\\left(\\bar{Z}^{\\phi}_n\\in B(y,\\varepsilon)\\right).\n\\end{multline}\nExponential Markov inequality yields $\\lim_{n\\rightarrow \\infty}\n\\mathbb{P}\\{|\\tilde{Z}^{\\phi}_{n}|>\\varepsilon\/3\\}=0$ which readily\nimplies that $\\lim_{n\\rightarrow \\infty}\\mathbb{P} \\{\\tilde{Z}^{\\phi}_{n}\\in B(0,\\varepsilon\/3)\\}=1.$\nConsequently, taking the liminf in both sides of\n(\\ref{eq:min}) and using the lower bound for the single variable\n$\\frac{Z_1}{n}$ yields the desired lower bound. The proof is\ncompleted. \n\\end{proof}\nWe first consider Theorem \\ref{ldp} under an additional assumption.\n\\begin{lemma}\n\\label{ldp-pk}\nUnder the same assumptions as in Theorem \\ref{ldp} and if we assume in addition\nthat\n\\begin{equation}\\label{tight-assumption}\nC_n^{\\mathbf{f}}\n \\xrightarrow[n\\rightarrow\\infty]{\\mathrm{pk}}\n C^{\\mathbf{f}}_{\\infty},\n\\end{equation}\nthen $\\pi_n$ satisfies the LDP in $\\mathbb{R}^d$ with good rate function\n$\\Delta^*(\\,\\cdot \\mid \\mathcal D),$\nwhere ${\\mathcal D}=\\cap_{\\mathbf{y}\\in C_{\\infty}^{\\mathbf{f}}}{\\mathcal D}_{\\mathbf{y}}$.\n\\end{lemma} \n\n\\noindent Proof of Lemma \\ref{ldp-pk} is postponed to Appendix \\ref{proof-ldp-pk}.\\\\\n\n\\noindent We now relax the extra assumption (\\ref{tight-assumption}) and prove\nTheorem \\ref{ldp}. The scheme of the proof is the following. We first\nshow, using directly the result in Lemma \\ref{ldp-pk}, that the lower\nbound is driven by the support function of the set $\\bigcap_{\\mathbf y\n \\in C_{\\infty,\\mathrm{in}}^{\\mathbf f}} \\mathcal D_{\\mathbf y}$. We then obtain that the\nupper bound is driven by the support function of the set\n$\\bigcap_{\\mathbf y \\in C_{\\infty,\\mathrm{out}}^{\\mathbf f}} \\mathcal D_{\\mathbf y}$, by\nmajorizing the log-Laplace of $\\pi_n$. 
Under Assumption\n(A-\\ref{Limit-Points}), both bounds coincide and we get the full LDP.\n\n\\begin{proof}[Proof of Theorem \\ref{ldp}]\nTo get the lower bound, we split $C_n^{\\mathbf f}$ into two disjoint subsets:\n\\begin{equation}\\label{split}\nC_n^{\\mathbf{f}}={\\mathcal I}_n^{\\mathbf{f}} \\cup {\\mathcal O}_n^{\\mathbf{f}}\\qquad\n\\textrm{ where }\\qquad {\\mathcal I}_n^{\\mathbf{f}}\\xrightarrow[n\\rightarrow\\infty]{\\mathrm{pk}} C_{\\infty,\\mathrm{in}}^{\\mathbf{f}}\n\\end{equation}\nLet us sketch the construction of ${\\mathcal\n I}_n^{\\mathbf{f}}$. Let $B(z,\\frac 1m)$ be a ball centered in $z\\in C_{\\infty,\\mathrm{in}}^{\\mathbf{f}}$ with radius $\\frac 1m$. \nSince $C_{\\infty,\\mathrm{in}}^{\\mathbf{f}}$ is compact by (A-\\ref{Compacity}), there exist $\\left(z_{\\ell}\\right)_{1\\le \\ell\\le L_m}$ such that\n$$\nC_{\\infty,\\mathrm{in}}^{\\mathbf{f}} \\subset \\bigcup_{\\ell =1}^{L_m}B\\left(z_{\\ell},\\frac 1m\\right)\\quad \\text{ and }\\quad B\\left(z_{\\ell},\\frac 1m\\right)\\cap C_{\\infty,\\mathrm{in}}^{\\mathbf{f}}\\not= \\emptyset\\quad \\textrm{for}\\quad 1\\le \\ell\\le L_m. \n$$ \nThe mere definition of $C_{\\infty,\\mathrm{in}}^{\\mathbf{f}}$ yields that there exists $\\psi(m)$\nsuch that for all $\\ell,\\ 1\\le \\ell\\le L_m$:\n$$\n\\forall n\\ge \\psi(m),\\quad \\exists \\mathbf{f}(x_{i_{\\ell}}^n)\\in \nB\\left(z_{\\ell},\\frac 1m\\right)\n\\quad \\textrm{with}\\quad \\mathbf{f}(x_{i_{\\ell}}^n)\\in C_n^{\\mathbf{f}}.\n$$\nDenote by ${\\mathcal A}_{n,m}$ ($n\\ge \\psi(m)$) such a collection of\n$\\mathbf{f}(x_{i_{\\ell}}^n)$'s. Choose now similarly a collection of\nballs with radius $\\frac 1{m+1}$ and the related $\\psi(m+1)$ with\n$\\psi(m+1)>\\psi(m)$,\nand set\n$$\n{\\mathcal I}_n^{\\mathbf{f}}={\\mathcal A}_{n,m}\\quad \\textrm{if}\\quad\n\\psi(m)\\le n< \\psi(m+1).\n$$\nWith such a definition, it is straightforward to check that ${\\mathcal I}_n^{\\mathbf{f}}\\xrightarrow[]{\\mathrm{pk}} C_{\\infty,\\mathrm{in}}^{\\mathbf{f}}.$\nWe write \n\\begin{eqnarray*}\n\\pi_n &=&\\frac 1n \\sum_{x_i^n \\in \\mathbf{f}^{-1}({\\mathcal\n I}_n^{\\mathbf{f}})} \\mathbf{f}(x_i^n) \\cdot Z_i \n+ \\frac 1n \\sum_{x_i^n \\notin \\mathbf{f}^{-1}({\\mathcal\n I}_n^{\\mathbf{f}})} \\mathbf{f}(x_i^n) \\cdot Z_i\\ , \\\\\n&\\stackrel{\\triangle}{=}& \\pi_n^{\\mathcal\n I} + \\pi_n^{\\mathcal O}\\ .\n\\end{eqnarray*}\nThe lower bound can be established as in Lemma \\ref{nw}.\nLet us prove that:\n\\begin{equation}\\label{borneinf}\n-\\Delta^*(z\\mid \\cap_{\\mathbf y \\in C_{\\infty,\\mathrm{in}}^{\\mathbf f}} \\mathcal\n D_{\\mathbf y}) \\le \\liminf_{n\\rightarrow \\infty} \\frac 1n \\log\n \\mathbb{P} \\left(\\pi_n \\in B(z,\\varepsilon)\\right).\n\\end{equation}\nSince \n$$\n\\{ \\pi_n^{\\mathcal I}\\in B(z,\\varepsilon\/3) \\} \\cap \n\\{ \\pi_n^{\\mathcal O}\\in B(0,\\varepsilon\/3) \\}\\subset \\{ \\pi_n\\in B(z,\\varepsilon) \\},\n$$\none has \n\\begin{multline}\\label{minoration1}\n\\frac{1}{n}\\log\\mathbb{P} \\left( \\pi_n^{\\mathcal I} \\in B(z,\\varepsilon\/3)\\right)\n+ \\frac{1}{n}\\log \\mathbb{P} \\left(\\pi_n^{\\mathcal O}\\in B(0,\\varepsilon\/3)\\right)\\\\\n\\le \\frac{1}{n}\\log \\mathbb{P} \\left(\\pi_n \\in B(z,\\varepsilon)\\right).\n\\end{multline}\nExponential Markov inequality yields $\\lim_{n\\rightarrow \\infty} \\mathbb{P}(|\\pi_n^{\\mathcal O}|>\\varepsilon\/3)=0$.\nThis in turn implies that $\\lim_{n\\rightarrow \\infty}\\mathbb{P} \\left(\\pi_n^{\\mathcal O}\\in B(0,\\varepsilon\/3)\\right)=1$.\nSince $\\pi_n^{\\mathcal I}$ fulfills 
assumptions of Lemma \ref{ldp-pk}, the following lower bound holds:\n\begin{equation}\label{minoration2}\n -\Delta^*\left(z\mid \cap_{\mathbf y \in C_{\infty,\mathrm{in}}^{\mathbf f}} \mathcal D_{\mathbf y} \right) \le \frac{1}{n}\log\mathbb{P} \left( \pi_n^{\mathcal I} \in B(z,\varepsilon\/3)\right) \n\end{equation}\nConsequently, taking the liminf on both sides of (\ref{minoration1}) and using (\ref{minoration2})\nyields the desired lower bound. The proof of the lower bound is completed.\n\nLet us now prove the upper bound.\nDenote by $\Lambda_n(\lambda)$ the log-Laplace transform of $\pi_n$, i.e. $\Lambda_n(\lambda)=\log \mathbb{E}\, e^{\langle \lambda, \pi_n \rangle}$.\nIn order to prove the upper bound, we estimate the following limit:\n$$\n\frac 1n \Lambda_n(n \lambda)= \frac 1n \sum_{x_i^n\in C_n} \log \mathbb{E}\, e^{\langle \lambda , \mathbf{f}(x_i^n) \cdot Z_i\rangle}\n\qquad \textrm{where}\qquad \frac{\mathrm{card}(C_n)}n\xrightarrow[n\rightarrow \infty]{}0.\n$$\nWe shall prove that \n\begin{equation}\label{ub}\n\limsup_{n\rightarrow \infty} \frac 1n \Lambda_n(n \lambda) \le\n\Delta(\lambda\mid \mathrm{int}(\cap_{\mathbf y \in C_{\infty,\mathrm{out}}^{\mathbf f}} \mathcal D_{\mathbf y})).\n\end{equation}\nTheorem 4.5.3 in \cite{DemZei98} will then yield:\n\begin{eqnarray}\n\limsup_{n\rightarrow \infty} \frac 1n \log \mathbb P(\pi_n \in F) & \le & - \inf_{z\n \in F} \Delta^*(z\mid \mathrm{int}(\cap_{\mathbf y \in C_{\infty,\mathrm{out}}^{\mathbf f}} \mathcal D_{\mathbf y}))\nonumber \\\n& \stackrel{(a)}{=}& - \inf_{z\n \in F} \Delta^*(z\mid \cap_{\mathbf y \in C_{\infty,\mathrm{out}}^{\mathbf f}} \mathcal D_{\mathbf y}) \label{ldp-upperbound}\n\end{eqnarray}\nfor any closed set $F$. Equality $(a)$ follows from Proposition\n\ref{support-function} and the fact that\n$\mathrm{int}(\cap_{\mathbf y \in C_{\infty,\mathrm{out}}^{\mathbf f}} \mathcal\nD_{\mathbf y})$ is a non-empty convex set due to (A-\ref{LDP-Particle}).\n\nIn order to prove \eqref{ub}, consider $\lambda \in \mathbb{R}^m$ such that\n\begin{equation}\label{limsup-strict-pos} \n\limsup_{n\rightarrow \infty} \frac 1n \Lambda_n(n \lambda) >0.\n\end{equation}\nFrom \eqref{limsup-strict-pos}, we can successively:\n\begin{itemize}\n\item[-] extract a subsequence $n_{\alpha}$ from $n$ such that \n$$\n\lim_{n \rightarrow \infty} \frac 1{n_{\alpha}}\n\sum_{x_i^{n_{\alpha}}\in C_{n_{\alpha}}} \log \mathbb{E} e^{\langle\n \lambda, \mathbf{f}(x_i^{n_{\alpha}}) \cdot Z_i \rangle} >0;\n$$\n\item[-] extract a subsequence $n_{\beta}$ from $n_{\alpha}$ such that \n$$\n\lim_{n \rightarrow \infty} \mathbb{E}e^{\langle \lambda,\n \mathbf{f}(x_i^{n_{\beta}}) \cdot Z_i \rangle} =\infty,\n$$\n\item[-] extract a subsequence $n_{\gamma}$ from $n_{\beta}$ such that\n$$\n\mathbf{f}(x_i^{n_{\gamma}}) \xrightarrow[n \rightarrow \infty]{} \mathbf{y}_0.\n$$\nOne can notice in particular that $\mathbf{y}_0\in C_{\infty,\mathrm{out}}^{\mathbf{f}}$.\n\end{itemize}\nLet us now prove that \n\begin{equation}\label{lambda-frontier}\n\lambda\notin \mathrm{int}({\mathcal D}_{\mathbf{y}_0}).\n\end{equation}\nAssume that (\ref{lambda-frontier}) is not true.\nThen there exists $p>1$ such that $p\lambda \in {\mathcal D}_{\mathbf{y}_0}$. Let $\varepsilon>0$ \nbe arbitrarily small.
Then, if $n$ is large enough to ensure that \n$|\lambda| |\mathbf{f}(x_i^{n_{\gamma}}) -\mathbf{y}_0| \le \varepsilon\/q$ where $1\/p +1\/q=1$, one has \n\begin{eqnarray*}\n\mathbb{E}\,e^{\langle \lambda, \mathbf{f}(x_i^{n_{\gamma}}) \cdot Z\rangle }\n&=&\mathbb{E}\,e^{\langle \lambda, \mathbf{y}_0 \cdot Z\rangle } \ne^{\langle \lambda, (\mathbf{f}(x_i^{n_{\gamma}})-\mathbf{y}_0) \cdot Z\rangle }\\\n&\le& \left( \mathbb{E}\,e^{p \langle \lambda, \mathbf{y}_0 \cdot Z\rangle } \right)^{\frac 1p}\n\left( \mathbb{E}\,e^{\varepsilon |Z| } \right)^{\frac 1q}.\n\end{eqnarray*}\nThis contradicts the fact that \n$$\n\lim_{n\rightarrow \infty} \mathbb{E}e^{\langle \lambda, \mathbf{f}(x_i^{n_{\gamma}}) \cdot Z_i \rangle} =\infty.\n$$ \n \n\n\noindent Therefore \eqref{lambda-frontier} holds and yields that\n$\lambda\notin \mathrm{int}(\cap_{\mathbf{y}\in\n C_{\infty,\mathrm{out}}^{\mathbf{f}}}{\mathcal D}_{\mathbf{y}})$. From this, we\ndeduce that\n$$\n\limsup_{n\rightarrow \infty} \frac 1n \Lambda_n(n \lambda) >0 \quad \Rightarrow \quad \lambda\notin \n\mathrm{int}(\cap_{\mathbf{y}\in C_{\infty,\mathrm{out}}^{\mathbf{f}}}{\mathcal D}_{\mathbf{y}}).\n$$\nIn other words:\n$$\n\limsup_{n\rightarrow \infty} \frac 1n \Lambda_n(n \lambda) \le\n\Delta\left(\lambda \mid \mathrm{int}(\cap_{\mathbf{y}\in\n C_{\infty,\mathrm{out}}^{\mathbf{f}}}{\mathcal D}_{\mathbf{y}})\right).\n$$\nTherefore, \eqref{ub} is proved and so is \eqref{ldp-upperbound}. \n\nGathering the lower bound \eqref{borneinf}, the upper bound\n\eqref{ldp-upperbound} and Assumption (A-\ref{Limit-Points}) yields the\nfull LDP for $\pi_n$.\n\end{proof}\n\n\section{The LDP for the empirical mean and the rate function in the convex case}\label{section:ldp}\n\nOur goal is now to get the full LDP for $L_n$ (Theorem \ref{ldp-mem}\nbelow). As announced in the outline of the article, the first step is\nto split the $x_i^n$'s into two different subsets according to whether\nthey live near the support of the limiting measure or whether they are outliers.\n\n\subsection{The decomposition ${\bf L_n=\pi_n + \tilde L_n}$}\label{decomposition}\nRecall that $({\mathcal X},\rho)$ is a metric space.\n\begin{prop}\label{proper-decomposition} Let $A_n=\{x_i^n,\ 1\le i\le n\}$.\nAssume that \n$$\n\hat R_n =\frac 1n \sum_{i=1}^n \delta_{x_i^n} \xrightarrow[n\rightarrow\infty]{\mathrm{weakly}} R\n$$\nand denote by ${\mathcal Y}$ the support of $R$. Then there exist \nsubsets $B_n$ and $C_n=A_n \setminus B_n$ such that\n\begin{enumerate}\n\item $\frac{\mathrm{card}(B_n)}{n} \xrightarrow[n\rightarrow \infty]{} 1$,\n \label{prop1}\n\item $\frac{1}{\mathrm{card}(B_n)} \sum_{x_i^n \in B_n} \delta_{x_i^n}\n \xrightarrow[n\rightarrow \infty]{\mathrm{weakly}} R$, \label{prop2}\n\item $ \rho(B_n, {\mathcal Y}) \xrightarrow[n\rightarrow \infty]{} 0$\n where ${\mathcal Y}$ is the support of $R$. \label{prop3}\n\end{enumerate} \n\end{prop}\n\nWe will then set \n$$\n\tilde L_n =\frac 1n \sum_{x_i^n \in B_n} \mathbf{f}(x_i^n)\cdot Z_i \quad \n\textrm{and}\quad \pi_n =\frac 1n \sum_{x_i^n \in C_n} \mathbf{f}(x_i^n)\cdot Z_i.\n$$\nNote that since $\mathrm{card}(B_n)+\mathrm{card}(C_n)=n$, property\n\eqref{prop1} then yields that\n$\n\frac{\mathrm{card}(C_n)}{n} \rightarrow 0\n$\nas $n$ goes to infinity.\n\begin{proof}\n{\em Construction of $B_n$}.
Let $m\\ge 1$ be fixed and\ndenote by ${\\mathcal Y}_m$ the $\\frac 1m$-blowup of ${\\mathcal Y}$,\ni.e. ${\\mathcal Y}_m=\\{ x\\in \\mathcal X,\\ \\rho(x,{\\mathcal Y})<\\frac 1m \\}$\nwhere ${\\mathcal Y}$ is the support of $R$. Then $\\frac 1n \\sum_1^n\n1_{{\\mathcal Y}_m}(x_i^n) \\rightarrow 1$; in particular there exists\n$\\psi_m\\ge 1$ such that for all $n\\ge \\psi_m$:\n$$\n\\left| \\frac 1n \\sum_{i=1}^n 1_{{\\mathcal Y}_m} (x_i^n) -1\\right| <\\frac 1m.\n$$\nOne can then build recursively a sequence of integers $(\\psi_m)_{m\\in\n \\mathbb{N}}$ such that $\\psi_m<\\psi_{m+1}$ (so that\n$\\psi_m\\rightarrow \\infty$ as $m\\rightarrow \\infty$). Set\n$$\nB_n=\\{x_i^n \\in {\\mathcal Y}_m,\\ 1\\le i\\le n\\} \\quad \\textrm{for}\\quad \\psi_m\\le n<\\psi_{m+1}.\n$$ \nWe prove property (\\ref{prop1}) and leave the proofs of\nproperties (\\ref{prop2}) and (\\ref{prop3}) to the reader. \n\nLet $\\varepsilon>0$ be fixed and take $m$ such that $\\frac 1m < \\varepsilon$.\nFor such an $m$, take the corresponding $\\psi_m$ and let $n\\ge \\psi_m$. Then,\n$$\n\\left| \\frac{\\mathrm{card}(B_n)}{n} -1\\right| =\\left|\n \\frac{\\sum_{i=1}^n 1_{{\\mathcal Y}_m}(x_i^n)}{n} -1\\right| \\le \\frac\n1m <\\varepsilon.\n$$\nSince $\\varepsilon>0$ is arbitrary, property (\\ref{prop1}) is proved. \n\\end{proof}\n\n\\subsection{The LDP for the empirical mean $L_n$}\nIn order to get the full LDP for $L_n =\\tilde L_n +\\pi_n$, we need to\nprove the LDP for $\\tilde L_n$. We will mainly rely on the results in\n\\cite{Naj02}. The following assumption is needed:\n\\begin{assump}\\label{hypo-EJP}\nAssume that $({\\mathcal X}, \\rho)$ is a locally compact metric space.\nThe family $(x_i^n,1\\le i\\le n,n\\ge 1)\\subset {\\mathcal X}$ satisfies \n$$\n\\hat R_n = \\frac 1n \\sum_{i=1}^n \\delta_{x_i^n} \\xrightarrow[n\\rightarrow\\infty]{\\textrm{weakly}} R,\n$$\nwhere $R$ is a probability measure over $({\\mathcal X}, {\\mathcal\n B}({\\mathcal X}))$. Moreover, the support of $R$\ndenoted by ${\\mathcal Y}$ is a compact set and for every non-empty open set $U$ of ${\\mathcal Y}$ (for the \ninduced topology over ${\\mathcal Y}$), $R(U)>0$.\n\\end{assump}\n\\begin{rem} The LDP may fail to hold if the last part of Assumption\n (A-\\ref{hypo-EJP}), that is $R(U)>0$ for $U$ non-empty\n open set, is not fulfilled. Counterexamples, also closely related to Assumption\n (A-\\ref{LDP-Particle}), are developed in \\cite{Naj02}.\n\\end{rem}\n\nWe recall that we denote by $\\Lambda(\\theta)=\\log \\mathbb{E}\\, e^{\\langle \\theta,\n Z_1\\rangle }$ the log-Laplace transform of $Z_1$. We introduce the\nfollowing functional\n\\begin{equation} \\label{Gamma}\n\\Gamma(\\lambda)=\\int_{\\mathcal X} \\Lambda\\left(\\sum_{k=1}^m \\lambda_k f_k(x)\\right) R(dx),\n\\end{equation}\nwhere $\\lambda=(\\lambda_1,\\cdots,\\lambda_m)\\in \\mathbb{R}^m$ and $f_k$\ndenotes the k$^{\\textrm{th}}$ row of matrix $\\mathbf{f}$. Let\n$\\Gamma^*$ be the convex conjugate of $\\Gamma$:\n$$\n\\Gamma^*(z)=\\sup_{\\lambda\\in \\mathbb{R}^m}\\left\\{ \\langle \\lambda, z \\rangle- \\Gamma(\\lambda)\\right\\}. \n$$\n\nWe can now state the LDP.\n\n\\begin{theo}\\label{ldp-mem} Let $(Z_i)_{i\\in \\mathbb{N}}$ be a sequence of\n $\\mathbb{R}^d$-valued i.i.d. random variables where $Z_1$ satisfies\n (A-\\ref{LDP-Particle}) and (A-\\ref{convex-rf}). \n\n Consider a triangular array $(x_i^n, 1\\le i\\le n,n\\ge\n 1)\\subset {\\mathcal X}$ which fulfills (A-\\ref{hypo-EJP}). 
\n\n Denote by $C_n^{\\mathbf{f}}=\\{ \\mathbf{f}(x_i^n),\\ x_i^n \\in C_n\\}$\n where $C_n$ is a subset of $\\{x_i^n,\\ 1\\le i\\le n\\}$ given by\n Proposition \\ref{proper-decomposition} and $\\mathbf{f}:{\\mathcal X}\n \\rightarrow \\mathbb{R}^{m\\times d}$ is continuous. Assume that\n $C_n^{ \\mathbf{f}}$ satisfies (A-\\ref{Compacity}) and\n (A-\\ref{Limit-Points}). Then\n$$\nL_n=\\frac{1}{n}\n\\sum_1^{n} \\mathbf{f}(x_i^n)\\cdot Z_i\n$$\nsatisfies the LDP in $(\\mathbb{R}^m,{\\mathcal B}(\\mathbb{R}^m))$ with\ngood rate function\n$$\nI_{\\mathbf{f}}(z) = \\inf\\{ \\Gamma^*(z_1) + \\Delta^*(z_2\\mid {\\mathcal\n D}),\\ z_1 +z_2=z\\}\\ ,\n$$\nwhere the definition of ${\\mathcal D}$ follows from Theorem \\ref{ldp}.\n\\end{theo}\n\n\n\\begin{proof}\nRecall the decomposition $L_n=\\tilde L_n + \\pi_n$ where \n$$\n\\tilde L_n =\\frac 1n \\sum_{x_i^n\\in B_n} \\mathbf{f}(x_i^n)\\cdot Z_i \\quad \n\\textrm{and}\\quad \\pi_n =\\frac 1n \\sum_{x_i^n \\in C_n} \\mathbf{f}(x_i^n)\\cdot Z_i,\n$$\nwhere the sets $B_n$ and $C_n$ are defined in Section\n\\ref{decomposition}. Theorem \\ref{ldp} yields the LDP for $\\pi_n$ with\ngood rate function $\\Delta^*(\\cdot \\mid {\\mathcal D})$. It remains now\nto prove the LDP for $\\tilde L_n$. We will rely on Theorem 2.2 in\n\\cite{Naj02} and therefore slightly modify $\\tilde L_n$ so that it\nfulfills the assumptions of this theorem.\n\nIn fact, it is required in \\cite{Naj02} that all the points $x_i^n$ belong to ${\\mathcal Y}$, \nwhich might not be the case here. We build in the sequel a sequence $(\\tau(x_i^n))\\subset{\\mathcal Y}$ \nwhich approximates the sequence $(x_i^n,x_i^n\\in B_n)$.\nLet $x_i^n\\in B_n$ and set\n$$\n\\tau(x_i^n)=\n\\left\\{\n\\begin{array}{ll}\nx_i^n & \\textrm{if}\\ x_i^n\\in {\\mathcal Y},\\\\\n\\textrm{one of the}\\ \\textrm{argmin}\\{ \\rho(x,x_i^n),\\ x\\in {\\mathcal Y}\\} & \\textrm{else}.\n\\end{array}\n\\right.\n$$\nSuch a minimizer always exists and belongs to ${\\mathcal Y}$ since ${\\mathcal Y}$ \nis compact. \\\\\nSince $\\lim_n \\sup\\{ \\rho(x,{\\mathcal Y}),\\ x\\in B_n\\}=0$, one has $\\sup_{x_i^n\\in B_n} \n\\rho(x_i^n,\\tau(x_i^n))\\xrightarrow[n\\rightarrow \\infty]{}0$ and \n$$\n\\kappa_n(\\mathbf{f})\\stackrel{\\triangle}{=}\\sup_{x_i^n\\in B_n}\\{\n|\\mathbf{f}(x_i^n)-\\mathbf{f}(\\tau(x_i^n))|\\}\\xrightarrow[n\\rightarrow\\infty]{}\n0.\n$$\nIndeed, for $n$ large enough, $B_n$ lies in an $\\varepsilon$-blowup of\n$\\mathcal Y$, which is compact since ${\\mathcal X}$ is locally compact\nand $\\mathbf f$ is therefore uniformly continuous on this set.\n\nNow, if we define $\\bar L_n$ by\n$$ \\bar L_n \\stackrel{\\triangle}{=} \\frac 1n \\sum_{x_i^n\\in B_n}\n\\mathbf{f}(\\tau(x_i^n))\\cdot Z_i, $$\nthen $\\tilde{L}_n$ and $\\bar L_n$ are exponentially equivalent. Indeed,\n\\begin{eqnarray*}\n\\frac 1n \\log \\mathbb{P}\\left( |\\tilde{L}_n- \\bar L_n|>\\varepsilon\\right)\n&\\le& \\frac 1n \\log \\mathbb{P}\\left( \\frac 1n \\sum_{i=1}^{\\mathrm{card}(B_n)} |Z_i| >\\frac{\\varepsilon}{\\kappa_n(\\mathbf{f})} \\right)\\\\\n&\\le& -\\Lambda^*_{|Z|}\\left(\\frac{\\varepsilon}{\\kappa_n(\\mathbf{f})}\\right) \\xrightarrow[n\\rightarrow\\infty]{}-\\infty.\n\\end{eqnarray*}\nwhere $\\Lambda^*_{|Z|}$ stands for the convex conjugate of the\nlog-Laplace transform of $|Z|$. The measure\n$\\bar L_n$ satisfies all\nthe assumptions of Theorem 2.2 in \\cite{Naj02}. Therefore, the LDP holds for it with good rate function\n$\\Gamma^*$. 
Finally, the exponential equivalence yields the LDP for\n$\tilde L_n$ with the same rate function \n(see for instance \cite[Theorem 4.2.13]{DemZei98}).\n\nAs the two subsums are independent, \nthe contraction principle yields the LDP for \n$L_n$ with good rate function $I_{\mathbf{f}}$ given by:\n\begin{equation} \label{convo}\nI_{\mathbf{f}}(z) = \inf\{ \Gamma^*(z_1) + \Delta^*(z_2\mid {\mathcal D}),\ z_1 +z_2=z\}.\n\end{equation}\n\end{proof}\n\n\subsection{More insight on the rate function $I_{\mathbf{f}}$}\nIn the convex case, that is when Assumption (A-\ref{convex-rf}) holds, the rate function $I_{\mathbf{f}}$\ncan be expressed more explicitly.\nThis section is aimed at describing how to perform the inf-convolution \eqref{convo}.\n\nWe first introduce some definitions from convex analysis (see e.g. \cite{Roc70}). \nThe main result is stated in Theorem \ref{theo-rf}.\n\n\begin{Def}[Normal cone]\nLet $\mathcal C\subset \mathbb{R}^d$ be a convex set and let $a \in\n\mathcal C.$ The normal cone of $\mathcal C$ at $a$, denoted by $N_{\mathcal C}(a)$, \nis defined by:\n$$ \nN_{\mathcal C}(a)=\{ z\in \mathbb{R}^d;\ \langle z, x-a\rangle \le\n 0,\ \forall x \in {\mathcal C}\}.\n$$\n\end{Def}\n\begin{rem} In particular, if $z\in N_{\mathcal C}(a)$ then $\Delta^*(z\mid\n{\mathcal C}) =\langle z,a\rangle$.\n\end{rem}\n\n\begin{Def}[Relative interior]\nLet ${\mathcal C}\subset \mathbb{R}^d$ be a convex set. Its affine\nhull, denoted by $\mathrm{aff\,} {\mathcal C}$, is the smallest affine subset of\n$\mathbb{R}^d$\ncontaining ${\mathcal C}$. The relative interior of ${\mathcal C}$, denoted by $\mathrm{ri\,} {\mathcal C}$,\nis defined by:\n$$ \n\mathrm{ri\,} {\mathcal C} \stackrel{\triangle}{=} \{ x \in \mathrm{aff\, } {\mathcal C},\ \n\exists \varepsilon >0\ \textrm{such that}\ (x + \varepsilon B(0,1)) \cap \mathrm{aff\, } \mathcal\nC \subset {\mathcal C}\}.\n$$\n\end{Def} \n \n\begin{Def}[Subdifferential of a convex function]\nA vector $x^*$ is said to be a subgradient of a convex function $f$\nat a point $x$ if for any $z$,\n$$ f(z) \geq f(x) +\langle x^*, z-x \rangle.$$\nThe subdifferential $\partial f(x)$ of $f$ at $x$ is the set of all\nsubgradients\nof $f$ at $x$.\n\end{Def}\n\nWe can now state: \n\begin{theo}\label{theo-rf}\n Under the assumptions of Theorem \ref{ldp-mem}, the rate function $I_{\mathbf{f}}$ admits the following representation: \n\begin{equation} \label{repn1}\n I_{\mathbf{f}}(z)= \sup_{\lambda \in {\mathcal D}}\n(\langle \lambda, z\rangle - \Gamma(\lambda))\ ,\n\end{equation} \nwhere $\Gamma$ is given by \eqref{Gamma}. Furthermore, for any $ z\n\in \mathrm{ri\,} \mathrm{dom}\, I_{\mathbf{f}},$ we can decompose $z$\nas $z= z^* + z_{\mathbf{n}},$ where there exists $\lambda^* \in\n\mathrm{dom}\, \Gamma \cap \bar{ \mathcal D}$ such that:\n\begin{itemize}\n\item[(i)] $z^* \in \partial \Gamma (\lambda^*)$ and \n\item[(ii)] $z_{\mathbf{n}} \in N_{{ \bar{\mathcal D}}}(\lambda^*).$ \n\end{itemize}\nIn particular, for any such decomposition,\n$$ I_{\mathbf{f}}(z)= \Gamma^*(z^*) + \Delta^*(z_{\mathbf{n}} \mid {\mathcal D}).$$ \n\end{theo} \n\n\begin{rem}[Non-exposed points]\label{non-expo} Let $z\in \mathrm{ri}\,\mathrm{dom}\n I_{\mathbf{f}}$.
Consider the decomposition given by Theorem\n \\ref{theo-rf}, namely $z=z^* +z_{\\mathbf{n}}$, then:\n $$\n \\forall t\\in \\mathbb{R}^+,\\ I_{\\mathbf{f}}(z^* + tz_{\\mathbf{n}})=\n \\Gamma^*(z^*) + t\\langle z_{\\mathbf{n}}, \\lambda^*\\rangle\\quad\n \\textrm{where}\\ z^* \\in \\partial\\Gamma(\\lambda^*) \\textrm{ and }\n z_{\\mathbf{n}} \\in N_{\\mathcal D}(\\lambda^*).\n $$\n In particular if $z_{\\mathbf{n}} \\neq 0$, $I_{\\mathbf{f}}$ is affine\n in the direction $\\mathbb{R}^+\\ni t\\mapsto z^* + tz_{\\mathbf{n}}$\n and has thus infinitely many non-exposed points (see for instance\n the example developed in Section \\ref{section:examples}).\n\\end{rem}\n\n\n\\begin{proof}\nWe first prove \\eqref{repn1}. Theorem \\ref{ldp-mem} and Proposition\n\\ref{support-function} yield\n$$ \nI_{\\mathbf{f}}(z) = \\inf_{z= z_1+z_2}\\{\\Gamma^*(z_1) + \\Delta^*(z_2\\mid \n \\mathcal D)\\}.\n$$ \nAs $ I_{\\mathbf{f}}$, $ \\Gamma$ and $\\Delta(.\\mid \\bar{\n \\mathcal D})$ are convex, proper and lower semicontinuous,\nwe get from Theorem 16.4 in \\cite{Roc70} that\n\\begin{eqnarray*}\n I_{\\mathbf{f}}(z) &=& \\left[ \\Gamma + \\Delta (.\\mid \\bar{\n \\mathcal D})\\right]^*(z), \\\\\n &=& \\sup_{\\lambda\\in \\mathbb R^d} \\{\\langle \\lambda, z\\rangle\n -\\Gamma(\\lambda) - \\Delta (\\lambda \\mid \\bar{\n \\mathcal D})\\}, \\\\\n &= & \\sup_{\\lambda\\in \\bar{\\mathcal D}} \\{\\langle \\lambda, z\\rangle -\\Gamma(\\lambda)\\} \n \\quad = \\quad \\sup_{\\lambda\\in {\\mathcal D}} \\{\\langle \\lambda, z\\rangle -\\Gamma(\\lambda)\\},\n\\end{eqnarray*}\nand \\eqref{repn1} is proved. As $ I_{\\mathbf{f}}$ is convex, so is its\ndomain and we can consider its relative interior\n$\\mathrm{ri\\,}\\mathrm{dom\\,} I_{\\mathbf{f}}$. Let $z \\in \\mathrm{ri\\,}\\mathrm{dom\\,}\nI_{\\mathbf{f}}$, then $I_{\\mathbf{f}}(z) < +\\infty$ and define $F_z$\nby :\n$$ \nF_z(x) = \\Gamma^* (x)+ \\Delta^*(z-x\\mid \\bar{\\mathcal D}).\n$$\nThe properties of $\\Gamma^*$ and $\\Delta^*(.\\mid \\bar{\\mathcal D})$\nyield that $F_z$ is proper, convex and lower semicontinuous; its level\nsets are compact. In particular, the infimum of $F_z$ is attained over\n$\\mathbb R^d.$ Let $z^*$ be a point where this infimum is attained,\ni.e.\n$$\n\\inf_{x\\in \\mathbb{R}^d} F_z(x) = F_z(z^*).\n$$\nIn this case,\n$$\n0 \\in \\partial F_z(z^*).\n$$\nIn order to go further in the proof, we shall describe $\\partial\nF_z(z^*)$ in terms of $\\partial \\Gamma^*$ and $\\partial\n\\Delta^*(z-\\cdot \\mid \\bar{\\mathcal D})$. This is the purpose of the following proposition:\n\n\\begin{prop} \\label{subdif}\nIf $z \\in \\mathrm{ri\\, dom}\\, I_{\\mathbf{f}}$ , then for any $x$,\n$$ \\partial F_z(x) = \\partial \\Gamma^*(x) - \\partial \\Delta^*(z-x\\mid \\bar{\n \\mathcal D}).$$\n\\end{prop} \n \n\\begin{proof}[Proof of Proposition \\ref{subdif}]\nDefine $f_z$ to be the function given by $f_z(x) = \\Delta^*(z-x\\mid \\bar{\n \\mathcal D})$. Note in particular that $F_z(x)=\\Gamma^*(x) +f_z(x)$.\nSince $I_{\\mathbf{f}}(z) = \\inf_{z= z_1+z_2}\\{\\Gamma^*(z_1) + \\Delta^*(z_2\\mid \n \\mathcal D)\\},$ the sum of the epigraphs of $\\Gamma^*$ and $\\Delta^*$ are equal\nto the epigraph of $I_{\\mathbf{f}}$. 
This immediately implies that \n$$ \n\mathrm{dom\,} I_{\mathbf{f}} = \mathrm{dom\,} \Gamma^* + \mathrm{dom\,} \Delta^*(\cdot \mid \bar{\n \mathcal D}).\n$$\nThese sets being convex, Corollary 6.6.2 in \cite{Roc70} yields\n$$ \n\mathrm{ri\,}\mathrm{dom\,} I_{\mathbf{f}} = \mathrm{ri\,}\mathrm{dom\,} \Gamma^* +\n \mathrm{ri\,}\mathrm{dom\,} \Delta^*(\cdot \mid \bar{\n \mathcal D}).$$\nLet $z \in \mathrm{ri\,}\mathrm{dom\,} I_{\mathbf{f}}$, then there exists\n$y \in \mathrm{ri\,}\mathrm{dom\,} \Gamma^*$ such that $z - y \in \mathrm{ri\,}\mathrm{dom\,} \Delta^*(\cdot\mid \bar{\n \mathcal D})$. This is equivalent to the fact that $y\in \mathrm{ri\,}\mathrm{dom\,} f_z$ and therefore\n\begin{equation}\label{main-assump-rocka}\n\mathrm{ri\,}\mathrm{dom\,} \Gamma^* \cap \mathrm{ri\,}\mathrm{dom\,} f_z \neq \emptyset.\n\end{equation}\nTheorem 23.8 in \cite{Roc70}, whose main assumption is fulfilled \nby \eqref{main-assump-rocka}, then yields\n\begin{eqnarray*}\n \partial F_z(x) & = & \partial \Gamma^*(x) + \partial f_z(x)\\\n& = & \partial \Gamma^*(x) - \partial \Delta^*(z-x\mid \bar{\n \mathcal D})\n\end{eqnarray*}\nand Proposition \ref{subdif} is proved.\n\end{proof}\nLet us now go back to the proof of Theorem\n\ref{theo-rf}. By Proposition \ref{subdif},\n$$\n\partial F_z(z^*) = \partial \Gamma^*(z^*) - \partial \Delta^*(z-z^* \mid \bar{\n \mathcal D}).\n$$\nSince $0\in \partial F_z(z^*)$, there exists $\lambda^*\in \partial \Gamma^*(z^*)$ such that \n$\lambda^* \in \partial \Delta^*(z-z^* \mid \bar{\mathcal D})$. By applying Theorem 23.5 in \cite{Roc70}, one obtains\n$$\n\lambda^* \in \partial \Gamma^* (z^*) \quad \Leftrightarrow \quad z^* \in \partial \Gamma(\lambda^*)\n$$\nwhich in particular implies that $\lambda^* \in \mathrm{dom\,}\Gamma$. Moreover, \n\begin{eqnarray*}\n\lambda^* \in \partial \Delta^*(z-z^*\mid \bar{\mathcal D}) &\Leftrightarrow& z-z^* \in \partial \Delta(\lambda^*\mid \bar{\mathcal D})\\\n& \Leftrightarrow & z-z^* \in N_{\bar{\mathcal D}}(\lambda^*),\n\end{eqnarray*} \nwhich in particular implies that $\lambda^* \in \bar{\mathcal D}.$\\\nDenote by $z_{\mathbf{n}}= z-z^*$; then one obtains the decomposition stated in Theorem \ref{theo-rf}.
It remains to prove that:\n\n$$ \nI_{\mathbf{f}} (z) = \Gamma^*(z^*) + \Delta^*(z_\mathbf{n}\mid \bar{\n \mathcal D}).\n$$ \nWe have:\n\begin{eqnarray*}\nI_{\mathbf{f}}(z) &=& \sup_{\lambda\in \bar{\mathcal D}} \{\langle \lambda, z\rangle -\Gamma(\lambda)\}\\\n&\ge & \langle \lambda^*, z^*\rangle -\Gamma(\lambda^*) + \langle \lambda^*, z_{\mathbf{n}}\rangle \n\quad = \quad \Gamma^*(z^*) + \Delta^*(z_{\mathbf{n}} \mid {\mathcal D}) .\n\end{eqnarray*}\nOn the other hand, \n\begin{eqnarray*}\nI_{\mathbf{f}}(z) &=& \sup_{\lambda\in \bar{\mathcal D}} \{\langle \lambda, z\rangle -\Gamma(\lambda)\}\\\n&\le & \sup_{\lambda\in \bar{\mathcal D}} \{\langle \lambda, z^*\rangle -\Gamma(\lambda)\} \n+\sup_{\lambda\in \bar{\mathcal D}} \langle \lambda, z_{\mathbf{n}}\rangle \n\quad = \quad \Gamma^*(z^*) + \langle \lambda^*, z_{\mathbf{n}} \rangle,\n\end{eqnarray*}\nand Theorem \ref{theo-rf} is proved.\n\end{proof}\n\n\section{An example of LDP in the convex case}\label{section:examples}\nTo illustrate the range of Theorems \ref{ldp-mem} and \ref{theo-rf}, we\nstudy in detail the following model:\n\begin{equation} \label{def-Ln}\nL_n = \frac 1 n \sum_{i=1}^n \mathbf{f}(x_i^n) \cdot Z_i\quad \textrm{where}\quad \mathbf{f}(x)=\begin{pmatrix} \n1 & 0 \\\n0 & x\n\end{pmatrix}\quad \textrm{and} \quad Z_i= \begin{pmatrix} \nX_i^2 \\\nX_i^2 \n\end{pmatrix},\n\end{equation} \nthe sequence $(X_i)_{i\in \mathbb{N}}$ being a sequence of i.i.d. ${\mathcal\n N}(0,1)$ Gaussian random variables\nand $(x_i^n)_{n \in \mathbb N}$ being a sequence of real numbers satisfying\n$$ \n\hat R_n = \frac 1n \sum_{i=1}^n \delta_{x_i^n} \rightarrow R.\n$$\nWe assume moreover that the support ${\mathcal Y}$ of $R$ is given by\n${\mathcal Y}=[m,M]$ and that \n$$ \n\sup_{1\le i\le n} x_i^n \xrightarrow[n\rightarrow \infty]{} x_{\max} >M \qquad \textrm{and}\qquad \inf_{1\le i\le n} x_i^n\n\xrightarrow[n\rightarrow \infty]{} x_{\min} <m.\n$$\nIn this setting, the functional $\Gamma$ defined in \eqref{Gamma} reads\n\begin{equation}\label{Gamex}\n\Gamma(\xi,\xi')=-\frac 12 \int \log\left(1-2\xi -2x\xi'\right) R(dx),\n\end{equation}\nwith the convention that $\Gamma(\xi,\xi')=\infty$ as soon as $1-2\xi-2x\xi'\le 0$ for some $x\in[m,M]$. For $\alpha \notin [m,M]$, set\n$$\nH(\alpha)=\int \frac{R(dx)}{\alpha -x},\qquad H_{\rm min}=H(x_{\rm min}),\qquad H_{\rm max}=H(x_{\rm max}),\n$$\nand define $\alpha_{\min}= x_{\rm min} -\frac 1{H_{\rm min}}$ and $\alpha_{\max}= x_{\rm max} -\frac 1{H_{\rm max}}$. Since $x_{\rm min}<m$ and $x_{\rm max}>M$, $H_{\rm min}$ is a well-defined negative number while \n $H_{\rm max}$ is a well-defined positive number. In particular \n$x_{\rm min} < \alpha_{\min}$ and $\alpha_{\max} < x_{\max}$.
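\nAs a purely numerical sanity check of the quantities $H_{\rm min}$, $H_{\rm max}$, $\alpha_{\min}$ and $\alpha_{\max}$ (a sketch written for this note, not part of the proof), one may take the illustrative values used in the figure captions below, namely $m=-1$, $M=1$, $x_{\rm min}=-4$ and $x_{\rm max}=4$; one measure consistent with those captions is $R=\frac 12(\delta_{-1}+\delta_{1})$, for which $H_{\rm max}=-H_{\rm min}=4\/15$.\n\begin{verbatim}\nimport numpy as np\n\n# Illustrative check only.  The atoms and weights below encode the\n# (assumed) measure R = (delta_{-1} + delta_{1}) / 2 on [m, M] = [-1, 1],\n# with outliers converging to x_min = -4 and x_max = 4.\natoms   = np.array([-1.0, 1.0])\nweights = np.array([0.5, 0.5])\nx_min, x_max = -4.0, 4.0\n\ndef H(alpha):\n    # H(alpha) = int R(dx) / (alpha - x), defined for alpha outside [m, M]\n    return np.sum(weights / (alpha - atoms))\n\nH_min, H_max = H(x_min), H(x_max)\nalpha_min = x_min - 1.0 / H_min\nalpha_max = x_max - 1.0 / H_max\nprint(H_min, H_max)           # -4/15 and 4/15\nprint(alpha_min, alpha_max)   # -0.25 and 0.25\n\end{verbatim}\nThe printed values are consistent with the inequalities listed next.\n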
\nMoreover, the following inequalities hold true:\n$$\nm < \alpha_{\min} \le \int x\,R(dx)\quad \textrm{and}\quad \n\int x\, R(dx) \le \alpha_{\max} < M.\n$$\nIn particular, $\alpha_{\min}\le \alpha_{\max}$.\nIn order to describe the rate function related to the LDP of $L_n$, we introduce the following domains:\n\begin{eqnarray*}\n {\mathcal D}_{\infty} &=& \{(x,y)\in \mathbb{R}^2,\ x\le 0\quad \mathrm{or}\quad y \geq x_{\rm max} x\quad \mathrm{or}\n\quad y \leq x_{\min}x\} \\\n {\mathcal D}_{(I_{\mathbf{f}}=\Gamma^*)} &=& \{(x,y)\in\mathbb{R}^2,\ x> 0\quad \mathrm{and}\quad \n\alpha_{\min} x \le y\le \alpha_{\max} x \}\\\n {\mathcal D}_{\mathrm{linear}}^+ &=& \{(x,y)\in\mathbb{R}^2,\ x> 0\quad \mathrm{and}\quad \n\alpha_{\max} x < y< x_{\max} x \}\\\n {\mathcal D}_{\mathrm{linear}}^- &=& \{(x,y)\in\mathbb{R}^2,\ x> 0\quad \mathrm{and}\quad \nx_{\min} x < y< \alpha_{\min} x \}\n\end{eqnarray*}\nThese domains are represented in {\sc Figure \ref{domains}} (right).\nWe can now state the following result.\n\begin{prop}\label{rf-ex}\nThe empirical mean $L_n$ defined in \eqref{def-Ln} satisfies the LDP in\n$\mathbb R^2$ with good rate function $I_{\mathbf f}$ given by\n\begin{enumerate}\n\item If $ (x,y)\in {\mathcal D}_{\infty}$ then $I_{\mathbf f}(x,y) = +\infty$, \n\item If $ (x,y)\in {\mathcal D}_{(I_{\mathbf{f}}=\Gamma^*)}$ then $I_{\mathbf f}(x,y) = \Gamma^*(x,y)$, \n\item If $ (x,y)\in {\mathcal D}_{\mathrm{linear}}^+$ then \n\begin{multline*}\nI_{\mathbf f}(x,y) = \Gamma^*\left(H_{\rm\n max}(x_{\rm max}x-y), \alpha_{\rm max}\nH_{\rm max}(x_{\rm max}x-y) \right)\\ + \frac 1 2 \left((1- H_{\rm\n max}x_{\rm max})x +H_{\rm max}y\right),\n\end{multline*}\n\item If $ (x,y)\in {\mathcal D}_{\mathrm{linear}}^-$ then \n\begin{multline*}\nI_{\mathbf f}(x,y) = \Gamma^*\left(H_{\rm\n min}(x_{\rm min}x-y), \alpha_{\rm min}H_{\rm min}(x_{\rm min}x-y) \right)\\ \n+ \frac 1 2 \left((1- H_{\rm min}x_{\rm min})x +H_{\rm min}y\right).\n\end{multline*}\n\end{enumerate}\n\end{prop}\n\begin{rem}\label{non-expo-rem}\nLet $x_0>0$ be fixed and consider the ray:\n$$\ny^-(x)=x_{\min} x +(\alpha_{\min}-x_{\min}) x_0,\quad x\ge x_0.\n$$\nThen \n$$\nI_{\mathbf{f}}(x,y^-(x))=\Gamma^*(x_0,\alpha_{\min} x_0) +\frac 12 (x-x_0).\n$$\nIn particular, there are infinitely many non-exposed points for\n$I_{\mathbf{f}}$ along the ray $((x,y^-(x)); x\ge x_0)$. The same can\nbe shown along the ray\n$$\ny^+(x)=x_{\max} x +(\alpha_{\max} -x_{\max}) x_0;\ x\ge x_0.\n$$\n\end{rem}\n\n\begin{proof}[Proof of Proposition \ref{rf-ex}]\nThe LDP will be established as soon as the assumptions of Theorem \ref{ldp-mem} are fulfilled. \nIt is straightforward to check (A-\ref{LDP-Particle}) to\n(A-\ref{Compacity}) and (A-\ref{hypo-EJP}).
In order to check Assumption (A-\\ref{Limit-Points}),\nwe rely on the following lemma: \n\\begin{lemma}\\label{Dmax} For every $x\\in [x_{\\rm min}, x_{\\rm max}]$, one has:\n$$ \n\\mathcal D_{\\mathbf f (x_{\\rm min})} \\cap \\mathcal D_{\\mathbf f\n (x_{\\rm max})} \\subset \\mathcal D_{\\mathbf f (x)}.$$ \n\\end{lemma}\n\\begin{proof}[Proof of Lemma \\ref{Dmax}]\nLet $(\\xi, \\xi^\\prime) \\in \\mathcal D_{\\mathbf f (x_{\\rm min})} \\cap\n\\mathcal D_{\\mathbf f (x_{\\rm max})}.$\nThis implies that $(\\xi, x_{\\rm min}\\xi^\\prime) \\in \\mathcal D_{Z_1}$ and \n$(\\xi, x_{\\rm max}\\xi^\\prime) \\in \\mathcal D_{Z_1}.$\nEvery $x \\in [x_{\\rm min}, x_{\\rm max}]$ can be written as a convex\ncombination\nof $x_{\\rm min}$ and $ x_{\\rm max}:$\n$ x= a x_{\\rm min} + b x_{\\rm max},$ where $a+b=1$, $a,b$ being nonnegative.\nBy convexity of $\\mathcal D_{Z_1}$,\n$ (\\xi, x \\xi^\\prime) = \n a (\\xi, x_{\\rm min}\\xi^\\prime) + b (\\xi, x_{\\rm max}\\xi^\\prime)\\in\n \\mathcal D_{Z_1}.$\nTherefore $(\\xi, \\xi^\\prime) \\in \\mathcal D_{\\mathbf f(x)}.$ \n\\end{proof}\nWe can now check (A-\\ref{Limit-Points}). The mere definition of $x_{\\min}$ and $x_{\\max}$ \nimplies that both $x_{\\min}$ and $x_{\\max}$ belong to $C_{\\infty,\\mathrm{out}}^{\\mathbf{f}}$ and $C_{\\infty,\\mathrm{in}}^{\\mathbf{f}}$ and that\nboth $C_{\\infty,\\mathrm{out}}^{\\mathbf{f}}$ and $C_{\\infty,\\mathrm{in}}^{\\mathbf{f}}$ are included in $[x_{\\min},x_{\\max}]$.\nIn particular, the set $\\mathcal D$ is well defined and is given by:\n$$\n{\\mathcal D}= \\bigcap_{\\{x,\\ \\mathbf{f}(x) \\in\n C_{\\infty,\\mathrm{out}}^{\\mathbf{f}}\\}}{\\mathcal D}_{\\mathbf{f}(x)}\n\\stackrel{(a)}{=}\\mathcal D_{\\mathbf f (x_{\\rm min})} \\cap \\mathcal\nD_{\\mathbf f (x_{\\rm max})}\\stackrel{(b)}{=} \\bigcap_{\\{x,\\\n \\mathbf{f}(x) \\in C_{\\infty,\\mathrm{in}}^{\\mathbf{f}}\\}} {\\mathcal D}_{\\mathbf{f}(x)}\n$$\nwhere $(a)$ and $(b)$ follow from Lemma \\ref{Dmax}. An easy computation yields \n\\begin{equation} \\label{D}\n \\mathcal D = \\{(\\xi, \\xi^\\prime) \\in \\mathbb R^2;\\ \n1-2\\xi-2x_{\\rm min}\\xi^\\prime >0 \\textrm{ and } 1-2\\xi-2x_{\\rm max}\\xi^\\prime >0\\}.\n\\end{equation}\nThe LDP is therefore established by applying Theorem \n\\ref{ldp-mem} and the rate function is given by:\n$$ \nI_{\\mathbf f }(z) = \\inf_{z=z_1+z_2} \\{\\Gamma^*(z_1) +\n\\Delta^*(z_2| \\mathcal D)\\},\n$$\nwith $\\mathcal D$ as above and $\\Gamma$ as defined in \\eqref{Gamma}.\nFormula \\eqref{Gamex} yields: \n$$ \n\\textrm{dom}\\, \\Gamma = \\{(\\xi, \\xi^\\prime) \\in \\mathbb R^2;\\ \n1-2\\xi-2x\\xi^\\prime >0 \\textrm{ for all } x \\in [m,M]\\},\n$$\nand therefore\n\\begin{equation} \\label{domG}\n\\textrm{dom}\\, \\Gamma = \\{(\\xi, \\xi^\\prime) \\in \\mathbb R^2;\\ \n1-2\\xi-2m\\xi^\\prime >0 \\textrm{ and } 1-2\\xi-2M\\xi^\\prime >0\\}.\n\\end{equation}\n{\\sc Figure} \\ref{figex} shows $\\textrm{dom}\\, \\Gamma$ and $\\mathcal D $\nfor particular choices of the parameters.\\\\\n\n\n\\begin{figure}\n\\begin{center}\n\\epsfig{figure=domGamma,height=5cm}\n\\epsfig{figure=mathcalD,height=5cm}\n\\caption{\\small On this figure are represented $\\textrm{dom}\\,\n \\Gamma$ \nfor $m=-1$ and $M=1$ (left) and\n $\\mathcal D $\nfor $x_{\\rm min} =-4$ and $x_{\\rm max} =4$ (right).\nOn the picture of $\\mathcal D,$ we figured also \nsome of the normal cones to $\\bar{\\mathcal D}$, whose directions are represented by the arrows.}\\label{figex}\n\\end{center}\n\\end{figure} \n\nWe first prove Proposition \\ref{rf-ex}-(1). 
Proving this statement amounts to determining the domain of $I_{\mathbf f}.$ We use the fact that \n$$ \n\textrm{dom}\,I_{\mathbf f} = \textrm{dom}\, \Gamma^* + \n\textrm{dom}\, \Delta^*(\cdot \mid \mathcal D) \n$$\nand focus on the two domains of the right-hand side. One can check that\n\begin{eqnarray*}\n\textrm{dom}\, \Gamma^* &=& \{(x, y) \in \mathbb R^2;\ x>0 \textrm{ and } \nmx\leq y \leq Mx\},\\%\label{domGstar}\n\textrm{dom}\, \Delta^*(\cdot\mid\mathcal D) &=& \{(x, y) \in \mathbb R^2;\ \nx\geq 0 \textrm{ and } x_{\rm min} x \leq y\leq x_{\rm max}\nx\}.\n\end{eqnarray*}\nTherefore\n \begin{equation}\n\textrm{ dom}\, I_{\mathbf f} = \{(x, y) \in \mathbb R^2;\ \nx>0\quad \textrm{and}\quad \nx_{\rm min} x < y < x_{\rm max}x\}.\n\end{equation}\nNote in particular that in this case, $\mathrm{ri\,}\mathrm{dom}\, I_{\mathbf f} =\mathrm{dom}\, I_{\mathbf f}.$\\\n\nThe three domains $\textrm{dom}\, \Gamma^*,$ $\textrm{dom}\, \Delta^*(\cdot\mid\mathcal D)$\nand $\textrm{dom}\, I_{\mathbf f} $ are represented in {\sc Figure} \ref{domains}.\\ \n\n\begin{figure}\n\begin{center}\n\epsfig{figure=domGammastar,height=5cm}\n\epsfig{figure=domIf,height=5cm}\n\caption{\small The left picture represents \n$\textrm{dom}\, \Gamma^*$ (hatched cone) and $\textrm{dom}\, \Delta^*(\cdot\mid\n\mathcal D)$\n(delimited by the two half-lines $y=4x$ and $y=-4x$).\nThe right picture represents the four zones of $\mathbb{R}^2$ where \n$I_{\mathbf{f}}$ has a particular expression. Zone (1) (resp. (2), (3) and (4)) \nrepresents ${\mathcal D}_{\infty}$ (resp. ${\mathcal D}_{(I_{\mathbf{f}}=\Gamma^*)}$,\n${\mathcal D}_{\mathrm{linear}}^+$ and ${\mathcal D}_{\mathrm{linear}}^-$).\nWe kept the same values\nof the parameters as in {\sc Figure} \ref{figex} and chose a\nparticular $R$ for which\n$H_{\rm max} = - H_{\rm min} = 4\/15$.}\label{domains}\n\end{center}\n\end{figure} \n\nWe now prove Proposition \ref{rf-ex}-(2).\nTheorem \ref{theo-rf} yields:\n$$ \nI_{\mathbf f} (z) = \sup_{\lambda \in \bar{\mathcal D}}\{\langle\n\lambda, z \rangle - \Gamma(\lambda)\}.\n$$\nIf one considers $g_z(\lambda) = \langle \lambda, z \rangle - \Gamma(\lambda),$\none can check that for $z \in \textrm{dom}\, \Gamma^*,$ an element\n$\bar \lambda = (\bar \xi, \bar \xi^\prime)$ realizing the supremum\nof $g_z$ satisfies the condition\n$$ \n\alpha - \frac 1{H(\alpha)} = \frac y x, \quad \textrm{with } \alpha = \frac{1-\n 2\bar \xi}{2 \bar \xi^\prime}. \n$$\nTherefore $\bar \lambda \in \textrm{dom}\, \Gamma \cap \bar{\mathcal\n D}$ if and only if $\frac y x \in [\alpha_{\rm min}, \alpha_{\rm max}]$\nand in this case $I_{\mathbf f} (z)= \Gamma^*(z). $\\\n\nWe now turn to the proof of Proposition \ref{rf-ex}-(3).\nFrom Theorem \ref{theo-rf}, we just need to exhibit a decomposition\n$z= z^* + z_{\mathbf{n}},$ where $z^* \in \partial\Gamma(\lambda^*)$\nand $z_{\mathbf{n}} \in N_{\bar{\mathcal D}}(\lambda^*)$ for some $\n\lambda^* \in \textrm{dom} \Gamma \cap \bar{\mathcal D}$.
\nIn this case, the value of $I_{\\mathbf f} (z)$ is given by $I_{\\mathbf f} (z) = \\Gamma^*(z^*) \n+\\langle \\lambda^*, z_{\\mathbf{n}}\\rangle$.\nOne can check that $ \\textrm{dom}\\, \\Gamma \\cap \\bar{\\mathcal\n D}$ can be split into three subsets : the interior of $\\mathcal D$,\nand the two half-lines $\\{1-2\\xi-2x_{\\rm min} \\xi^\\prime=0, \\xi < 1\/2\\}$\nand $\\{1-2\\xi-2x_{\\rm max} \\xi^\\prime=0, \\xi < 1\/2\\}$. The normal cones\nto $ \\bar{\\mathcal D}$ are then easy to determine:\n\\begin{itemize}\n\\item[-] if $(\\xi, \\xi^\\prime) \\in \\textrm{int } \\mathcal D$, then\n$N_{\\bar{\\mathcal D}}(\\xi, \\xi^\\prime) = \\{(0,0)\\},$\n\\item[-] if $\\xi< 1\/2$ \nand $ 1-2\\xi-2x_{\\rm min}\\xi^\\prime =0 ,$ then\n$N_{\\bar{\\mathcal D}}(\\xi, \\xi^\\prime) = \\{t(1,x_{\\rm min}), t\\geq0\\},$\n \\item[-] if $\\xi < 1\/2$ \nand $ 1-2\\xi-2x_{\\rm max}\\xi^\\prime =0 ,$ then\n$N_{\\bar{\\mathcal D}}(\\xi, \\xi^\\prime) = \\{t(1,x_{\\rm max}), t\\geq0\\}.$\n\\end{itemize}\nThese normal cones are represented by the arrows on {\\sc Figure \\ref{figex}}(right).\n\nWe can now conclude the proof of the third point of the proposition.\nIf we choose \n\\begin{eqnarray*}\n\\lambda^* &=& \\left(\\frac 1 2 - \\frac{x_{\\rm\n min}}{y-x_{\\rm min }x}, \\frac 1{y-x_{\\rm min}x}\\right), \\\\\nz^* &=&\n(H_{\\rm min}(x_{\\rm min}x-y), (x_{\\rm min}H_{\\rm min}-1)(x_{\\rm\n min}x-y)),\\\\\nz_{\\mathbf{n}}&=&z-z^*,\n\\end{eqnarray*}\nit is easy to check that\nthis decomposition fulfills the required properties, i.e. $z^* \\in\n\\partial\\Gamma(\\lambda^*)$ and $z_{\\mathbf{n}} \\in N_{\\bar{\\mathcal\n D}}(\\lambda^*)$ for some $ \\lambda^* \\in \\textrm{dom} \\Gamma \\cap\n\\bar{\\mathcal D}$. Therefore,\n\\begin{eqnarray*}\n I_{\\mathbf f}(z) & = & \\Gamma^*(z^*) + \\langle \\lambda^*, z_n \\rangle \\\\\n & = & \\Gamma^*(z^*) + \\frac 1 2 (x + H_{\\rm min}(y-x_{\\rm min}x))\n\\end{eqnarray*}\nThe decomposition $z= z^* + z_{\\mathbf{n}}$ \ncan be seen on Figure \\ref{decompo}.\n\nThe proof of Proposition \\ref{rf-ex}-(4) is very similar and is left to the reader.\n\\end{proof}\n\n\\begin{figure}[htbp]\\label{decompo}\n\\begin{center}\n\\epsfig{figure=decompo,height=5cm}\n\\caption{\\small For a $z =(x,y)$ such that $x_{\\rm min} x< y< \\alpha_{\\rm\n min} x,$ \nwe decompose $z = z^* + z_n$ with $z^*$ such that \n$y^* = \\alpha_{\\rm min} x^*$ and $z_n = t(1, x_{\\rm min} ),$\nfor a $t>0$.}\n\\end{center}\n\\end{figure}\n\n\\subsection*{Remarks on the LDP and the spherical integral}\nWe conclude this section with remarks related to the prime motivation\nof this study, namely the study of the asymptotics of spherical\nintegrals. We recall from \\cite{GuiMai05} that the goal is to get the\nasymptotics of\n\\begin{equation}\\label{spherical-integral}\nI_n(A_n,B_n) = \\int e^{N\\ {\\rm Trace} (A_nUB_nU^*)} dm_n(U),\n\\end{equation}\nwhere $A_n$ and $B_n$ are two real diagonal matrices and $m_n$ is the\nHaar measure on the orthogonal group. Obtaining the asymptotic\nexpansion of such integrals has major applications in statistics for\ninstance. 
Indeed, the asymptotic expansion for the joint eigenvalue\ndensity of some deformed Wigner matrices can readily be deduced\nfrom the above integral.\n\nIn the case where $A_n$ is of rank one, with a unique nonzero eigenvalue\ndenoted by $\theta$ and where $B_n=\mathrm{diag}(x_i^n,\ 1\le i\le n)$ where \n$\frac 1n \sum \delta_{x_i^n}$ converges, the spherical integral can be written as \n\begin{equation}\n\label{IN}\nI_n(A_n,B_n) = \mathbb E\,\exp\left(n\theta \frac{\sum_{i=1}^n x_i^n\n X_i^2}{\sum_{i=1}^n X_i^2}\right),\n\end{equation}\nwhere $\mathbb E$ is the expectation under the standard\n$n$-dimensional Gaussian measure.\n\nA natural strategy to tackle the asymptotics of $I_n$ is then to establish the LDP for the empirical \nmean $L_n$ as studied in the previous example and to apply Varadhan's lemma\nto get the asymptotics of $I_n$ (see \cite[Theorem 6]{GuiMai05}).\n\n\nBesides the fact that we fully recover the LDP result of\n\cite{GuiMai05}, we believe that the representation of the rate\nfunction (Theorem \ref{theo-rf}) sheds new light on the role played by\nthe largest and smallest eigenvalues in the asymptotics of the rank-one\nspherical integral: The very reason comes from the fact that the individual rate\nfunction of the particle $\frac 1n {\tiny \left( \begin{array}{c} X_1^2\\ X_1^2 \end{array}\right)}$\nfulfills the convexity assumption (A-\ref{convex-rf}). This is in particular illustrated \nin Lemma \ref{Dmax}.\n\nIn the forthcoming section, we study the LDP in the non-convex case,\nthat is when (A-\ref{convex-rf}) is not fulfilled. This will lead to\npartial results in the study of the asymptotics of the spherical\nintegral beyond the rank-one case.\n\n\n\n\n\n\n\section{The LDP in the non-convex case}\label{section:nonconvex}\n\nThere are several models which fulfill Assumption\n(A-\ref{LDP-Particle}) with a non-convex rate function. Take for\ninstance the simple model $Z_1=(X_1^2, Y_1^2, X_1 Y_1)$ where $X_1$\nand $Y_1$ are independent standard Gaussian random variables. Denote\nby ${\mathcal C}=\{(x,y,z) \in \mathbb R^3,\ x\ge 0,\ y\ge 0,\ z=-\sqrt{xy} \textrm{ or }\nz= \sqrt{xy}\}$, then $\frac{Z_1}n $\nsatisfies the LDP with good rate function\n$$\nI(x,y,z)=\frac x2 +\frac y2 +\Delta \left((x,y,z) \mid {\mathcal C}\right)\quad \textrm{where}\quad \n\Delta(w\mid {\mathcal C})=\left\{\n\begin{array}{ll}\n0& \textrm{if}\ w\in {\mathcal C}\\\n\infty & \textrm{else}.\n\end{array}\n\right.,\n$$\nwhich is highly non-convex. We will see that this kind of model arises in the study of\nspherical integrals and may give rise to interesting phenomena. \n\nWe give in this section an assumption on the set $A_n=\{x_i^n\in\n{\mathcal X},\,1\le i\le n\}$ which ensures that the LDP holds for $L_n$.\nAlthough quite stringent, this assumption encompasses interesting\nmodels as we shall see. We then state the LDP.\n\nRecall that ${\mathcal Y}$ is the support of the limiting probability $R$.\n\begin{assump}\label{assumption-nonconvex}\nAssume that ${\mathcal X}\subset \mathbb{R}^p$ for a given integer $p$. \nDenote by $A_n=\{x_i^n\in {\mathcal X},\ 1\le i\le n\}$.
\nThen there exists an integer $T$ such that:\n$$\nA_n=\\tilde A_n \\cup \\bigcup_{\\ell=1}^T \\{x_{i_{\\ell}}^n\\}\n$$\nwhere $\\rho(\\tilde A_n,{\\mathcal Y})$ goes to zero as $n\\rightarrow \\infty$ \nwhile for $1 \\leq \\ell\\leq T$,\n$$\nx_{i_{\\ell}}^n \\xrightarrow[n\\rightarrow \\infty]{} x_{\\ell}^{\\infty} ,\n$$\nwhere the $x_{\\ell}^{\\infty}$'s do not belong to ${\\mathcal Y}$.\n\\end{assump}\n\n\\begin{rem} Assumption (A-\\ref{assumption-nonconvex}) implies that there \nexists a finite number of outliers $x_{i_{\\ell}}^n$ that remain outside \nthe support ${\\mathcal Y}$ and that converge pointwise to a limit\n$x_{\\ell}^\\infty$.\n\\end{rem}\n\n\\begin{theo}\\label{ldp-nonconvex}\nAssume that $(Z_i)_{i\\in \\mathbb{N}}$ is a sequence of\n $\\mathbb{R}^d$-valued i.i.d random variables where $Z_1$ satisfies\n (A-\\ref{LDP-Particle}). Assume that (A-\\ref{hypo-EJP}) and (A-\\ref{assumption-nonconvex}) hold \nfor the sequence $(x_i^n, 1\\le i\\le n,n\\ge 1)$. Then \n$$\nL_n=\\frac{1}{n}\n\\sum_1^{n} \\mathbf{f}(x_i^n)\\cdot Z_i\n$$\nsatisfies the LDP in $(\\mathbb{R}^m,{\\mathcal B}(\\mathbb{R}^m))$ with\ngood rate function\n\\begin{equation}\\label{rf-nonconvex}\nI_{\\mathbf{f}}(z) = \\inf\\left\\{ \\Gamma^*(z_0) + \\sum_{\\ell=1}^T I(y_{\\ell}) ;\\ \nz_0+\\sum_{\\ell=1}^T \\mathbf{f}(x_{\\ell}^\\infty)\\cdot y_{\\ell} = z\\right\\}.\n\\end{equation}\n\\end{theo}\n\n\\begin{proof}\nRecall that $A_n=\\tilde A_n \\cup \\bigcup_{\\ell =1}^T \\{x_{i_{\\ell}}^n\\}$ \nby (A-\\ref{assumption-nonconvex}) and write:\n$$\nL_n= \\frac 1n \\sum_{x_i^n\\in \\tilde A_n} \\mathbf{f}(x_i^n)\\cdot Z_i + \\frac 1n \n\\sum_{\\ell=1}^T \\mathbf{f}(x_{i_{\\ell}}^n)\\cdot Z_{i_{\\ell}},\n$$\nOne can prove the LDP for $\\frac 1n \\sum_{x_i^n\\in \\tilde A_n}\n\\mathbf{f}(x_i^n)\\cdot Z_i$ as in the proof of Theorem \\ref{ldp-mem}\n(which relies on an adaptation of Theorem 2.1 in \\cite{Naj02} and does\nnot involve the convexity of $I$). On the other hand,\n$\\sum_{\\ell=1}^T \\frac{\\mathbf{f}(x_{i_{\\ell}}^n)\\cdot Z_{i_{\\ell}}}{n}$ is\nexponentially equivalent to $\\sum_{\\ell=1}^T \\frac{\\mathbf{f}(x_{\\ell}^\\infty)\\cdot\n Z_{i_{\\ell}}}{n}$ which satisfies the LDP with good rate function \n$$\nJ(z)=\\inf\\left \\{ \\sum_{\\ell=1}^T I(y_{\\ell}),\\ \\sum_{\\ell=1}^T \\mathbf{f}(x_{\\ell}^\\infty)\\cdot y_{\\ell}=z\\right\\}.\n$$\nSince $\\frac 1n \\sum_{x_i^n\\in \\tilde A_n} \\mathbf{f}(x_i^n)\\cdot Z_i$ and $\\frac 1n \n\\sum_{\\ell=1}^T \\mathbf{f}(x_{i_{\\ell}}^n)\\cdot Z_{i_{\\ell}}$ are independent, the LDP holds\nwith good rate function $I_{\\mathbf{f}}$ given by \\eqref{rf-nonconvex}. 
Proof of Theorem \\ref{ldp-nonconvex}\nis completed.\n\\end{proof}\n\n\\section{An example of LDP in the non-convex case: Influence of the\n second largest eigenvalue}\\label{section:eigen}\n\n\\subsection{Presentation of the example}\nIn this section, we shall study a simple model which underlines the\ndifferences between the LDP in the convex case and the LDP in the non-convex one.\nConsider the set $A_n=\\{x_i^n,\\ 1\\le i\\le n\\}$ where $x_1^n=\\kappa_1$, $x_2^n=\\kappa_2$ and $x_i^n=1$ for $i\\ge 3$.\nAssume the following:\n$$\n1<\\kappa_2<\\kappa_1.\n$$\nOne can think of the $x_i^n$ as the eigenvalues of a $n\\times n$\nmatrix and one can check that \n$$\n\\frac 1n \\sum_{i=1}^n \\delta_{x_i^n} \\xrightarrow[n\\rightarrow\\infty]{} \\delta_1\n$$ \nwhile $\\kappa_1$ and $\\kappa_2$ are two outliers.\n\nIn the sequel, we study the influence of the second\nlargest eigenvalue $\\kappa_2$ over the rate function of a given LDP in a\nconvex and non-convex case. We prove that the second largest eigenvalue has no influence \non the rate function that drives the LDP in the convex case (Proposition \\ref{second-eigen-convex}) \nwhile this eigenvalue has an impact \non the LDP in the non-convex case (Proposition \\ref{second-eigen-nc}). We finally go back \nto spherical integrals and make some concluding remarks.\n\nDenote by $\\mathbf{f}$ the following matrix-valued function:\n$$\n\\mathbf{f}(x)=\\left({\\tiny \\begin{array}{ccc}\n1 & 0 & 0\\\\\n0 &1 &0 \\\\\nx &0 &0 \\\\\n0 &x &0 \\\\\n0 &0 &1\n\\end{array}}\\right)\n$$ \n\nLet us now introduce the random variables we will consider.\n\n\\subsection{The convex model} Consider a family of $\\mathbb{R}^3$-valued random variables \n$( Z_i)_{i\\ge 1}$ satisfying Assumptions (A-\\ref{LDP-Particle}) and (A-\\ref{convex-rf}).\nDenote by\n\\begin{eqnarray*}\nL_n( Z) &=& \\frac 1n \\sum_{i=1}^n \\mathbf{f}(x_i^n)\\cdot Z_i\\\\\n&=& \\frac 1n \\mathbf{f}(\\kappa_1)\\cdot Z_1 \n+\\frac 1n \\mathbf{f}(\\kappa_2)\\cdot Z_2 \n+\\frac 1n \\sum_{i=3}^n \\mathbf{f}(x_i^n)\\cdot Z_i\\\\\n&\\stackrel{\\triangle}{=}& \\pi_n^1( Z) + \\pi_n^2( Z) +\\tilde{L}_n( Z)\\\\\n\\textrm{and by}\\ \\bar L_n( Z)&\\stackrel{\\triangle}{=}& \\pi_n^1( Z)+\\tilde{L}_n( Z)\\\\\n\\end{eqnarray*}\n\nOne can apply Theorem \\ref{ldp-mem} to $L_n( Z)$ and $\\bar\nL_n( Z)$ which therefore satisfy LDPs with given rate functions that\nwe denote respectively by $I_{\n Z}$ and $\\bar I_{ Z}$.\n\n\\begin{prop}\\label{second-eigen-convex}\nThe rate functions $I_{ Z}$ and $\\bar I_{ Z}$ related to the LDPs of $L_n( Z)$ and \n$\\bar L_n( Z)$ are equal.\n\\end{prop}\n\\begin{rem} This proposition underlines the fact that the second largest eigenvalue \ndoes not have any influence on the rate function of the LDP.\n\\end{rem}\n\n\\begin{proof} Let \n$$\n Z_i= \\left(\n\\begin{array}{c}\nU_i\\\\\nV_i\\\\\nW_i\n\\end{array}\\right)\n\\qquad \\textrm{then}\\qquad \n\\mathbf{f}(x)\\cdot Z_i =\\left({\\tiny \n\\begin{array}{c}\nU_i\\\\\nV_i\\\\\nxU_i\\\\\nxV_i\\\\\nW_i\n\\end{array}}\\right).\n$$\nFor $\\lambda \\in \\mathbb R^5$, denote by\n\\begin{eqnarray*}\n\\Lambda(\\lambda) &=& \\ln \\mathbb{E} e^{\\langle \\lambda,\\mathbf{f}(1) \\cdot Z \\rangle},\\\\\n\\Lambda_i(\\lambda) &=& \\ln \\mathbb{E} e^{\\langle \\lambda, \\mathbf{f}(\\kappa_i)\\cdot Z\\rangle },\\quad i\\in \\{1,2\\}.\n\\end{eqnarray*}\nConsider also the associated domains:\n\\begin{eqnarray*}\n{\\mathcal D}_0&=&\\{ \\lambda \\in \\mathbb{R}^5;\\ \\Lambda(\\lambda)<\\infty\\},\\\\\n{\\mathcal D}_i&=&\\{ \\lambda \\in \\mathbb{R}^5;\\ 
\Lambda_i(\lambda)<\infty\},\quad i\in \{1,2\}.
\end{eqnarray*}
Note that
\begin{equation}\label{convex-domain}
\lambda = (\alpha,\beta,\gamma,\delta,\theta)\in {\mathcal D}_i \quad \Leftrightarrow\quad
\lambda_i = (\alpha,\beta,\kappa_i \gamma,\kappa_i \delta,\theta)\in {\mathcal D}_0, \quad i\in \{1,2\}.
\end{equation}
From Theorem \ref{ldp-mem}, we know that
$$
 I_{ Z}(z) = \sup_{\lambda\in {\mathcal D}_0\cap {\mathcal D}_1 \cap {\mathcal D}_2}
\{\langle \lambda,z\rangle -\Lambda(\lambda)\}\quad \textrm{and}\quad
 \bar{I}_{ Z}(z) = \sup_{\lambda\in {\mathcal D}_0\cap {\mathcal D}_1}
\{\langle \lambda,z\rangle -\Lambda(\lambda)\}.
$$
We now prove that
$\lambda\in {\mathcal D}_0\cap {\mathcal D}_1$ implies $\lambda\in {\mathcal D}_2$. Let $\lambda =(\alpha,\beta,\gamma,\delta,\theta)\in{\mathcal D}_0\cap {\mathcal D}_1$.
From \eqref{convex-domain},
$$
\lambda \in {\mathcal D}_1 \quad \Rightarrow \quad
\lambda_1=(\alpha,\beta,\kappa_1 \gamma,\kappa_1 \delta,\theta)\in
{\mathcal D}_0.
$$
Moreover, as $1< \kappa_2 < \kappa_1$, $\kappa_2$ is a convex combination of $1$ and $\kappa_1$: it can be written as
$\kappa_2 = a + b \kappa_1$, with $a,b$ non-negative and
$a+b=1$.
Due to the convexity of ${\mathcal D}_0$, we have that
$
a \lambda +b \lambda_1 \in {\mathcal D}_0.
$
On the other hand,
$$ a \lambda +b \lambda_1 = (\alpha, \beta, \kappa_2 \gamma, \kappa_2 \delta,
\theta),$$
so that $\lambda\in {\mathcal D}_2$ by \eqref{convex-domain}.
Therefore,
\begin{eqnarray*}
I_{ Z}(z) &=& \sup_{\lambda\in {\mathcal D}_0\cap {\mathcal D}_1 \cap {\mathcal D}_2}
\{\langle \lambda,z\rangle -\Lambda(\lambda)\}\\
&=&\sup_{\lambda\in {\mathcal D}_0\cap {\mathcal D}_1}
\{\langle \lambda,z\rangle -\Lambda(\lambda)\} \quad =\quad \bar{I}_{ Z}(z),
\end{eqnarray*}
and the proof of Proposition \ref{second-eigen-convex} is completed.
\end{proof}


\subsection{The non-convex model}\label{example-non-convex-sub}
Let $(X_i)_{i\ge 1}$ and $(Y_i)_{i\ge 1}$ be two independent families
of i.i.d. standard Gaussian random variables and consider the i.i.d.
$\mathbb{R}^3$-valued
random variables
\begin{eqnarray*}
{\check Z}_i=\left(
\begin{array}{c}
X_i^2\\
Y_i^2\\
X_i Y_i
\end{array}
\right).
\end{eqnarray*}
We shall study the LDP of
\begin{eqnarray*}
L_n({\check Z})&=&\frac 1n \sum_{i=1}^n \mathbf{f}(x_i^n)\cdot {\check Z}_i\\
&=&\frac 1n \left( {\tiny \begin{array}{c}
X_1^2\\
Y_1^2\\
\kappa_1 X_1^2\\
\kappa_1 Y_1^2\\
X_1 Y_1
\end{array}}\right) +
 \frac 1n
\left( {\tiny \begin{array}{c}
X_2^2\\
Y_2^2\\
\kappa_2 X_2^2\\
\kappa_2 Y_2^2\\
X_2 Y_2
\end{array}}\right) +
\frac 1n \sum_{i=3}^n
\left( {\tiny \begin{array}{c}
X_i^2\\
Y_i^2\\
X_i^2\\
Y_i^2\\
X_i Y_i
\end{array}}\right) \\
&\stackrel{\triangle}{=}& \pi^1_n({\check Z}) + \pi_n^2({\check Z}) +\tilde{L}_n({\check Z}).
\end{eqnarray*}
As above, we also introduce $\bar{L}_n({\check Z})=\pi_n^1({\check Z})+\tilde{L}_n({\check Z})$.

The non-convex model satisfies the assumptions of Theorem \ref{ldp-nonconvex}. Therefore, both
$L_n({\check Z})$ and $\bar{L}_n({\check Z})$ satisfy an LDP, with rate functions
that we denote by $I_{\check Z}$ and $\bar I_{\check Z}$, respectively.
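
As a purely illustrative sanity check of this setting (and not as part of the formal argument), the following short simulation builds the vectors $\mathbf{f}(x_i^n)\cdot {\check Z}_i$ and computes $L_n({\check Z})$ and $\bar{L}_n({\check Z})$. It is written in Python with NumPy; the sample size, the values of $\kappa_1$ and $\kappa_2$, and the helper names are arbitrary choices made for the example. By the law of large numbers, both empirical means concentrate around $\mathbb{E}[\mathbf{f}(1)\cdot {\check Z}]=(1,1,1,1,0)$.
\begin{verbatim}
# Illustrative simulation of L_n and \bar L_n for the non-convex model.
import numpy as np

def f(x):
    # The 5x3 matrix f(x) defined above.
    return np.array([[1, 0, 0],
                     [0, 1, 0],
                     [x, 0, 0],
                     [0, x, 0],
                     [0, 0, 1]], dtype=float)

def empirical_mean(n, kappa1, kappa2, rng, drop_second=False):
    xs = np.ones(n)
    xs[0], xs[1] = kappa1, kappa2              # x_1^n = kappa1, x_2^n = kappa2, x_i^n = 1 otherwise
    X = rng.standard_normal(n)
    Y = rng.standard_normal(n)
    Z = np.stack([X**2, Y**2, X * Y], axis=1)  # the vectors \check Z_i
    terms = np.array([f(x) @ z for x, z in zip(xs, Z)])
    if drop_second:
        terms[1] = 0.0                         # removes pi_n^2, yielding \bar L_n
    return terms.mean(axis=0)

rng = np.random.default_rng(0)
print(empirical_mean(10000, 3.0, 2.5, rng))                    # L_n, close to (1, 1, 1, 1, 0)
print(empirical_mean(10000, 3.0, 2.5, rng, drop_second=True))  # \bar L_n, same limit
\end{verbatim}
Note that each summand $\mathbf{f}(\kappa)\cdot {\check Z}_i$ satisfies $x'=\kappa x$, $y'=\kappa y$ and $r^2=xy$ exactly, which is the structure encoded by the non-convex sets ${\mathcal B}_\kappa$ appearing in the proof below.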

We shall prove the following:
\begin{prop}\label{second-eigen-nc}
 Assume that $\kappa_1 <2\kappa_2 -1$. Then the rate function $I_{\check Z}$ that drives the LDP for $L_n({\check Z})$ differs
 from the rate function $\bar I_{\check Z}$ that
 drives the LDP for $\bar{L}_n({\check Z})$.
\end{prop}

\begin{rem}
 Proposition \ref{second-eigen-nc} illustrates the influence
 of the second largest eigenvalue on the rate function of the LDP in
 the non-convex case. Note that the condition $\kappa_1 <2\kappa_2 -1$ is merely
 technical and makes the computations easier.
\end{rem}

\begin{proof} In order to prove Proposition \ref{second-eigen-nc},
we shall exhibit a point $z^\star$ such that
$$
I_{\check Z}(z^\star)<\infty \qquad \textrm{while}\qquad \bar{I}_{\check Z}(z^\star)=\infty.
$$
Denote by $z=(x,y,x',y',r)$ a generic point of $\mathbb{R}^5$ and by ${\mathcal A}$ the convex set
$$
{\mathcal A}=\{z\in \mathbb{R}^5 ;\ x>0,\, y>0,\, x'=x,\, y'=y,\, r^2\le xy \}.
$$
Cram\'er's theorem yields the LDP for $\tilde{L}_n({\check Z})$ with good rate function
$$
\Gamma^*(z) = \frac{x+y}2 -\frac 12 \log(xy -r^2) +\Delta(z\mid {\mathcal A}).
$$
Denote by ${\mathcal B}_{\kappa}$ the following non-convex set:
$$
{\mathcal B}_{\kappa}=\{z\in \mathbb{R}^5;\ x>0,\, y>0,\, x'=\kappa x,\, y'=\kappa y,\, |r|=\sqrt{xy}\}.
$$
One can prove that $\pi_n^1({\check Z})$ and $\pi_n^2({\check Z})$ satisfy LDPs with respective rate functions
$$
I_1(z)= \frac{x+y}2 +\Delta(z\mid {\mathcal B}_{\kappa_1}) \qquad \textrm{and}
\qquad I_2(z)= \frac{x+y}2 +\Delta(z\mid {\mathcal B}_{\kappa_2}).
$$
The contraction principle then yields
\begin{eqnarray*}
I_{\check Z}(z)&=&\inf_{z_0+z_1 +z_2=z}\{ \Gamma^*(z_0)+ I_1(z_1) + I_2(z_2)\},\\
\bar{I}_{\check Z}(z)&=&\inf_{z_0+z_1 =z}\{ \Gamma^*(z_0)+ I_1(z_1) \}.
\end{eqnarray*}
Let $z^\star=(1,1,\kappa_2,\kappa_2,0)$; we shall prove that
\begin{equation}\label{rf-different}
I_{\check Z}(z^\star)<\infty \qquad \textrm{while}\qquad \bar{I}_{\check Z}(z^\star)=\infty.
\end{equation}
This will complete the proof of Proposition \ref{second-eigen-nc}.

In the sequel, we use the notation $z_i=(x_i,y_i,x_i',y_i',r_i)$ with
$i\in \{0,1,2\}$. From the definition of $\bar{I}_{\check Z}$, one can
easily check that $\bar{I}_{\check Z}(z^\star)$ is finite if and only if the
following system
\begin{equation}\label{system}
\left\{
\begin{array}{l}
x_0 + x_1 =1\\
y_0 + y_1 =1\\
x_0 +\kappa_1 x_1 = \kappa_2\\
y_0 +\kappa_1 y_1 = \kappa_2\\
x_1 y_1 < x_0 y_0
\end{array}\right.
\end{equation}
has a solution such that $x_0>0$, $y_0>0$, $x_1>0$ and $y_1>0$. Subtracting the first two equations from the third and fourth ones gives
$(\kappa_1-1)x_1=(\kappa_1-1)y_1=\kappa_2-1$, so that such a solution must satisfy
\begin{equation}\label{system-condition}
x_0 = 1-x_1 =\frac{\kappa_1 -\kappa_2}{\kappa_1 -1}= y_0.
\end{equation}
On the other hand, since $x_1=1-x_0$ and $y_1=1-y_0=1-x_0$, the last condition of \eqref{system} implies that $(1-x_0)^2
< x_0^2$, that is, $x_0>\frac 12$. But, by \eqref{system-condition}, $x_0>\frac 12$ is equivalent to
$\kappa_1 >2\kappa_2 -1$, which contradicts our assumption (for instance, with $\kappa_1=3$ and $\kappa_2=5/2$, one gets $x_0=\frac 14<\frac 12$). Hence the system \eqref{system} has no admissible solution and
$$
\bar{I}_{\check Z}(z^\star)=\infty.
$$
We now prove that $I_{\check Z}(z^\star)<\infty$.
By the very definition of $I_{\check Z}$, we have $I_{\check Z}(z^\star)<\infty$
if and only if there exists a solution to the following system
\begin{equation}\label{system2}
\left\{
\begin{array}{l}
x_0 +x_1 +x_2 =1\\
y_0 +y_1 + y_2 =1\\
x_0 +\kappa_1 x_1 +\kappa_2 x_2 =\kappa_2\\
y_0 +\kappa_1 y_1 +\kappa_2 y_2 =\kappa_2\\
r_0^2 +\epsilon_1 x_1 y_1 +\epsilon_2 x_2 y_2 =0
\end{array}\right.
\end{equation}
satisfying $x_0>0$, $y_0>0$, $x_1>0$, $y_1>0$, $x_2>0$, $y_2>0$,
$\epsilon_{1,2}=\pm 1$ and $r_0^2\leq x_0y_0$.

One can easily check that this system admits the following solution:
\begin{eqnarray*}
x_0=y_0&=& \frac{\kappa_1 -\kappa_2}{\kappa_1 +\kappa_2 -2},\\
x_1=y_1 &=& \frac{\kappa_2 -1}{\kappa_1 +\kappa_2 -2}=x_2=y_2,\\
\epsilon_1 = - \epsilon_2 &=& -1 \qquad \textrm{and}\qquad r_0=0.
\end{eqnarray*}
Indeed, $x_0+x_1+x_2=\frac{(\kappa_1-\kappa_2)+2(\kappa_2-1)}{\kappa_1+\kappa_2-2}=1$,
$x_0+\kappa_1 x_1+\kappa_2 x_2=\frac{\kappa_2(\kappa_1+\kappa_2-2)}{\kappa_1+\kappa_2-2}=\kappa_2$
and $r_0^2+\epsilon_1 x_1y_1+\epsilon_2 x_2y_2=-x_1^2+x_2^2=0$, while the positivity
constraints hold since $1<\kappa_2<\kappa_1$.
Therefore, \eqref{rf-different} is proved.
\end{proof}

\subsection{Links with the spherical integral beyond the rank-one case}

When one wants to study the asymptotics of the spherical integral
in the case where the matrix $A_n$
in \eqref{spherical-integral} has finite rank larger than one,
one is led to study large deviations for empirical
means which do not fulfill the convexity assumption (A-\ref{convex-rf}). For example, in the rank-two case, the relevant
empirical mean is given by
$$
L_n^{(2)} = \frac 1 n \sum_{i=1}^n {\mathbf f}^{(2)}(x_i^n)\cdot Z_i,
\textrm{ with } Z_i = \left( \begin{array}{c}
X_i^2\\
Y_i^2 \\
 X_iY_i
\end{array}\right) \textrm{ and } \mathbf{f}^{(2)}(x)=\left({\tiny \begin{array}{ccc}
1 & 0 & 0\\
0 &1 &0 \\
x &0 &0 \\
0 &x &0 \\
0 &0 &1\\
0 & 0 & x
\end{array}}\right),
$$
and Theorem \ref{ldp-nonconvex} applies whenever
(A-\ref{assumption-nonconvex}) is fulfilled (an illustrative numerical sketch of this rank-two construction is given at the end of this subsection). An application of Varadhan's lemma then yields the convergence of the
spherical integrals in the rank-two case (and analogously for any
finite rank). The example studied in
Section \ref{example-non-convex-sub} suggests, although in
a very indirect way, that the asymptotics of the spherical integral in
this case should depend not only on the largest eigenvalue (as
proved in the rank-one case in \cite{GuiMai05}) but also on the second
largest eigenvalue, and possibly on further eigenvalues, their number being
related to the rank of $A_n$. Unfortunately, the rather intricate
formula for the rate function associated with the LDP in the non-convex
case gives little insight into how to relate the asymptotics of the
spherical integral to the largest eigenvalues beyond the rank-one
case.
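
To make the rank-two construction above concrete, here is a minimal numerical sketch in the same spirit as the one given in Section \ref{example-non-convex-sub}. It is again purely illustrative: the helper names and the parameter values are arbitrary, the eigenvalue profile is borrowed from Section \ref{section:eigen} ($x_1^n=\kappa_1$, $x_2^n=\kappa_2$, $x_i^n=1$ for $i\ge 3$), and the sketch only builds the random vector $L_n^{(2)}$; it says nothing about the associated rate function.
\begin{verbatim}
# Illustrative construction of the rank-two empirical mean L_n^{(2)}.
import numpy as np

def f2(x):
    # The 6x3 matrix f^{(2)}(x) defined above.
    return np.array([[1, 0, 0],
                     [0, 1, 0],
                     [x, 0, 0],
                     [0, x, 0],
                     [0, 0, 1],
                     [0, 0, x]], dtype=float)

def L_n_2(n, kappa1, kappa2, rng):
    xs = np.ones(n)
    xs[0], xs[1] = kappa1, kappa2
    X = rng.standard_normal(n)
    Y = rng.standard_normal(n)
    Z = np.stack([X**2, Y**2, X * Y], axis=1)  # Z_i = (X_i^2, Y_i^2, X_i Y_i)
    return np.mean([f2(x) @ z for x, z in zip(xs, Z)], axis=0)

rng = np.random.default_rng(1)
print(L_n_2(10000, 3.0, 2.5, rng))  # close to E[f2(1) Z] = (1, 1, 1, 1, 0, 0)
\end{verbatim}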