\newcommand{\sectiono}[1]{\section{#1}\setcounter{equation}{0}}
\renewcommand{\theequation}{\thesection.\arabic{equation}}

\begin{document}
{}~
{}~
\hfill\vbox{\hbox{hep-th/0502126}
}\break

\vskip .6cm
\begin{center}
{\Large \bf
Black Holes, Elementary Strings and Holomorphic Anomaly
}

\end{center}

\vskip .6cm
\medskip

\vspace*{4.0ex}

\centerline{\large \rm
Ashoke Sen}

\vspace*{4.0ex}


\centerline{\large \it Harish-Chandra Research Institute}


\centerline{\large \it Chhatnag Road, Jhusi,
Allahabad 211019, INDIA}

\centerline{E-mail: ashoke.sen@cern.ch,
sen@mri.ernet.in}

\vspace*{5.0ex}

\centerline{\bf Abstract} \bigskip

In a previous paper we proposed a specific route to relating the entropy
of two-charge black holes to the degeneracy of elementary string states
in $N=4$ supersymmetric heterotic string theory in four dimensions. For
toroidal compactification this proposal works correctly to all orders in
a power series expansion in inverse charges, provided we take into
account the corrections to the black hole entropy formula due to the
holomorphic anomaly. In this paper we demonstrate that similar agreement
holds also for other $N=4$ supersymmetric heterotic string
compactifications.

\vfill \eject

\baselineskip=18pt

\tableofcontents

\sectiono{Introduction} \label{sintro}

The attempt to relate the Bekenstein-Hawking entropy of a black hole
to the number of states of the black hole in some microscopic description
of the theory is quite old.
In string theory this takes a new direction, as the theory already has a
large number of massive states in the spectrum of the elementary string,
and hence one is tempted to speculate that for large mass we should be
able to relate the degeneracy of these states to the entropy of a black
hole with the same charge quantum
numbers\cite{thooft,9309145,9401070,9405117,9612146}. However, this
problem is complicated by large renormalization effects which make it
difficult to relate the quantum numbers of the black hole to those of the
elementary string states. This problem can be avoided by considering a
tower of BPS states\cite{rdabh0,rdabh1,rdabh2} where such renormalization
effects are absent. The entropy of the corresponding black hole solution
vanishes in the supergravity approximation; however, one finds that the
curvature associated with the solution becomes large near the horizon,
indicating a breakdown of the supergravity approximation. Although a
complete analysis of the problem has not been possible to date, a general
argument based on symmetries of the theory shows that the entropy of the
black hole, modified by $\alpha'$ corrections, has the right dependence
on all the parameters (charges as well as the asymptotic vacuum
expectation values of various fields) to agree with the logarithm of the
degeneracy of elementary string states\cite{9504147,9506200,9712150}.
However, the overall normalization constant is not determined by the
symmetry argument, and its computation requires inclusion of the
$\alpha'$ corrections to the tree-level heterotic string action to all
orders.

It was later realized that instead of elementary strings,
D-branes\cite{9602052} provide a much richer arena for the study of
black holes.
In particular, by\nconsidering a sufficiently complicated configuration of D-branes one can\nensure that the corresponding BPS black hole solution carrying the same \ncharge\nquantum numbers as the D-brane system has a finite area event\nhorizon where $\\alpha'$ and string loop corrections are small.\nComparison of the entropy of the black hole\nto the logarithm of the degeneracy of states of the D-brane configuration\n(which we shall call the statistical entropy)\nshows exact agreement\\cite{9601029} for large charges. This\nagreement has been verified for a variety of \nblack holes in different string\ntheories.\n\nInitial comparison between the black hole entropy and the statistical\nentropy was carried out in the limit of large charges. For a class of black\nholes in $N=2$\nsupersymmetric string compactification\\cite{9508072,9602111,9602136}\nref.\\cite{9711053} attempted to go beyond the large charge limit, and computed\ncorrections to the statistical entropy which are suppressed by the inverse\npower of the charges. The corresponding corrections to the black hole entropy\ncome from higher derivative terms in the effective\naction. By taking\ninto account a special class of higher derivative \nterms\\cite{9602060,9603191} which come from\nsupersymmetrization of the curvature squared terms in the\naction\\cite{rzwiebach,9610237}, \nrefs.\\cite{9812082,9904005,9906094,9910179,0007195,0009234,0012232}\ncomputed corrections\nto the black hole entropy and found precise agreement. One non-trivial\naspect of this calculation is that in order to reach this agreement\nwe need to also modify the Bekenstein-Hawking formula for the black\nhole entropy due to the presence of higher derivative\nterms\\cite{9307038,9312023,9403028,9502009}.\n\nRecently there has been renewed interest in the black hole solution\nrepresenting elementary string states. 
This followed the observation by Dabholkar\cite{0409148} that if we take
into account the special class of higher derivative terms which were used
in the analysis of
\cite{9812082,9904005,9906094,9910179,0007195,0009234,0012232}
and calculate their effect on the black hole solutions representing
elementary string states, we get a solution with a finite area event
horizon. The entropy of this black hole, calculated using the formul\ae\
given in \cite{9307038,9312023,9403028,9502009}, reproduces precisely the
leading term in the expression for the statistical entropy obtained by
taking the logarithm of the degeneracy of elementary string states. This
analysis was developed further in \cite{0410076,0411255,0411272,0501014}.
An alternative viewpoint on these black holes can be found in
\cite{0412133,0502050}.

One of the advantages of using elementary string states for comparison
with black hole entropy is that for this system the degeneracy of states,
and hence the statistical entropy, is known very precisely. Hence one can
try to push this comparison beyond the large charge approximation.
However, one problem that one encounters in this process is that even if
we know the degeneracies exactly, the definition of the statistical
entropy is somewhat ambiguous, since it depends on the particular
ensemble we use to define the entropy. As in the case of an ordinary
thermodynamic system, all definitions of entropy agree when the charge
(the analog of volume) is large, but finite `size' corrections to the
entropy differ between the entropies defined through different ensembles.
This is due to the fact that the agreement between different ensembles
({\it e.g.} the microcanonical, canonical and grand canonical ensembles)
is proved using a saddle point approximation which is valid only in the
`large volume' limit.
Thus the\nquestion that we need to address is: which definition of \nstatistical entropy should\nwe use for comparison with the black hole entropy? There is no {\\it\na priori} answer to this question, and one has to proceed by trial and\nerror to discover if there is some natural definition of\nstatistical entropy which agrees with the\nblack hole entropy beyond leading order. For a class of black holes\nin $N=2$ supersymmetric string compactification,\nRefs.\\cite{0405146,0412139} proposed\nsuch a definition based on a mixed ensemble where we sum over\nhalf of the charges (the `electric' charges) by introducing\nchemical potentials for these charges and keep the other half of\nthe charges fixed. By applying the same prescription to the black\nholes representing elementary string states in $N=4$ supersymmetric\ntheories, \\cite{0409148} was able to reproduce the black hole entropy\nto all orders in the inverse charges up to an additive\nlogarithmic piece which\nappears as a multiplicative factor in the partition function involving\npowers of the winding number charge\\cite{ism04}. One\ndisadvantage of this prescription is that it destroys manifest\nsymmetry between the momentum and winding charges of the string\nsince in defining the ensemble\nwe sum over all momentum states but keep fixed the winding\ncharge. As a result\nT-duality invariance of the statistical entropy defined\nthis way is not guaranteed.\n\nA related but somewhat different proposal for relating the degeneracy\nof elementary string states to black hole entropy,\nwhich maintains manifest T-duality invariance,\nwas proposed in \\cite{0411255}. This also requires summing over\ncharges but in a manner that preserves manifest T-duality. 
In particular the chemical potential couples to a T-duality invariant
combination of the charges.\footnote{We also sum over all angular
momentum states, which is equivalent to choosing an ensemble with a
chemical potential coupled to the angular momentum, and then extremizing
the corresponding free energy with respect to this chemical potential.
This sets the chemical potential to zero. This argument is due to
B.~Pioline, and I wish to thank A.~Dabholkar for discussion on this
point.}
It was shown in \cite{0411255} that up to terms which are
non-perturbative in the inverse charges, this definition of the
statistical entropy agrees with the black hole entropy including
logarithmic terms, provided we take into account the effect of the
holomorphic anomaly\cite{9302103,9309140} in the effective action. A
related duality invariant prescription for dealing with 1/4 BPS black
holes in N=4 supersymmetric heterotic string compactification was later
given in \cite{0412287}.

In order to put this proposal on a firm footing it is important that we
test it for other N=4 heterotic string compactifications. This is what we
attempt to do in this paper. In particular we focus on a class of
four-dimensional CHL models with $N=4$
supersymmetry\cite{CHL,CP,9507027,9507050,9508144,9508154} and compare
the statistical entropy computed using the prescription of \cite{0411255}
with the black hole entropy.
We again find that after taking into account corrections due to the
holomorphic anomaly, the black hole entropy and the statistical entropy
agree up to non-perturbative terms.\footnote{In the analysis of this
paper, as well as in the analysis of \cite{0411255}, an overall
charge-independent additive constant in the expression for the entropy
could not be fixed, due to our lack of precise knowledge of the effect of
the holomorphic anomaly terms on the black hole entropy. Thus we could
not compare this overall additive constant between the black hole entropy
and the statistical entropy.}

The rest of the paper is organised as follows. In section \ref{srev} we
review the proposal of \cite{0411255} for relating the black hole entropy
to the degeneracy of elementary string states. In section \ref{sstat} we
review CHL string compactifications, count the degeneracy of elementary
string states in these models, and compute the statistical entropy using
these results. In section \ref{sbh} we calculate the entropy of the black
holes of the CHL model carrying the same charge quantum numbers as the
elementary string states, and show that the result agrees with the
statistical entropy found in section \ref{sstat}. During this computation
we also determine the coefficient of the holomorphic anomaly term in
these CHL models. Section \ref{sdiss} contains a discussion of the
results and possible extensions to a more general class of models and/or
states. The two appendices are devoted to the analysis of the errors
involved in the various approximations used in this paper, and to
demonstrating that these corrections are all non-perturbative in the
inverse charges. Of the two appendices, appendix \ref{sa} analyses the
possible errors in the computation of the statistical entropy and
appendix \ref{sb} examines possible errors in the computation of the
black hole entropy.
In appendix \ref{sb} we also determine the S-duality invariant form of
the curvature squared terms in the CHL models.

I have been informed by A.~Dabholkar that for a general class of models,
ref.\cite{private} has successfully carried out the comparison between
the black hole entropy and the entropy defined through the ensemble
introduced in \cite{0405146}. After completing this paper we also learned
of ref.\cite{9708062}, where some of the computations of section
\ref{sbh} and appendix \ref{sb}, required for determining the form of the
curvature squared terms in the effective action, have been carried out.

\sectiono{Proposal for Relating Black Hole Entropy to the Degeneracy of
Elementary String States} \label{srev}

We shall be considering $N=4$ supersymmetric heterotic string theory in
four dimensions, with a compactification manifold of the form $K_5\times
S^1$. In this theory we consider a fundamental string wound $w$ times
along the circle $S^1$ and carrying $n$ units of momentum along the same
circle. Let $d(n,w)$ denote the degeneracy of elementary string states
satisfying the following properties:
\begin{itemize}
\item The state is invariant under half of the space-time supersymmetry
transformations.
\item The state carries gauge charges appropriate to an elementary
heterotic string carrying $w$ units of winding and $n$ units of momentum
along $S^1$. This means that if $x^4$ denotes the coordinate along $S^1$
and $x^\mu$ ($0\le\mu\le 3$) denote the coordinates of the non-compact
part of the space-time, then the state carries gauge charges proportional
to $n$ and $w$ associated with the gauge fields $G^{(10)}_{4\mu}$ and
$B^{(10)}_{4\mu}$ respectively, but does not carry any other gauge
charge.
Here $G^{(10)}_{MN}$ and $B^{(10)}_{MN}$ denote the\nten dimensional string metric and the anti-symmetric tensor field\nrespectively.\n\\end{itemize}\nWe shall see in section \\ref{sstat} that the\ndegeneracy of such states\nis a function of the combination $N\\equiv nw$. \nDenoting this\nby $d_{N}$, we define the partition function associated with these\nstates as:\n\\begin{equation} \\label{e2a}\ne^{{\\cal F}(\\mu)} = \\sum_{N} d_N \\, e^{-\\mu N}\\, .\n\\end{equation}\nGiven ${\\cal F}(\\mu)$, we define the statistical entropy $\\widetilde S_{stat}$ as the\nLegendre transform of ${\\cal F}(\\mu)$:\n\\begin{equation} \\label{e11}\n\\widetilde S_{stat}(N) = {\\cal F}(\\mu) + \\mu \\, N\\, ,\n\\end{equation}\nwhere $\\mu$ is given by the solution of the equation\n\\begin{equation} \\label{e12}\n{\\partial{\\cal F}\\over \\partial\\mu} + N =0\\, .\n\\end{equation}\nThe proposal of ref.\\cite{0411255} is to identify \n$\\widetilde S_{stat}(nw)$ with\nthe entropy of the black hole solution carrying same charge quantum numbers\n$(n,w)$:\n\\begin{equation} \\label{eprop}\n\\widetilde S_{stat}(nw) = S_{BH}(n,w)\\, .\n\\end{equation}\nThis is the relation we shall try to verify in this paper for different\nheterotic string compactifications.\n\nThe definition of statistical entropy given above is appropriate for a kind\nof grand canonical ensemble where we introduce a chemical potential conjugate\nto $nw$.\nA \nmore direct definition of the statistical entropy would be the one based on\nthe microcanonical ensemble:\n\\begin{equation} \\label{e12a}\nS_{stat}(N) = \\ln d_{N}\\, .\n\\end{equation}\nThe two definitions agree in the limit of large $N$ \nwhere we can evaluate\nthe sum in \\refb{e2a} by a saddle point method. 
In this case the leading\ncontribution to $e^{{\\cal F}(\\mu)}$ is given by $d_{N_0} e^{-\\mu N_0}$ where\n$N_0$ is the value of $N$ that maximizes the summand:\n\\begin{equation} \\label{esaddle}\n{\\cal F}(\\mu) \\simeq \\ln d_{N_0} - \\mu N_0, \\qquad {\\partial\\over \\partial N_0}\n\\ln d_{N_0}-\\mu = 0\\, .\n\\end{equation}\nThus in this approximation ${\\cal F}(\\mu)$ is the Legendre transform of\n$\\ln d_{N_0}=S_{stat}(N_0)$. Hence $\\widetilde S_{stat}(N)$, \ndefined as the Legendre\ntransform of ${\\cal F}(\\mu)$, will be equal to $S_{stat}(N)$.\nHowever the complete ${\\cal F}(\\mu)$ defined through \\refb{e2a} has\nadditional contribution besides that given in \\refb{esaddle},\nand as a result\n$S_{stat}$ and\n$\\widetilde S_{stat}$ differ in non-leading terms. In particular\nthe coefficient of\nthe $\\ln N$ terms in $S_{stat}$ and \n$\\widetilde S_{stat}$ differ. It is not\n{\\it a priori} clear which definition of \nstatistical entropy we should be comparing\nwith the entropy of the black hole solution \ncarrying the same quantum numbers.\nIt was shown in \\cite{0411255} that for \nheterotic string theory compactified on a\ntorus, $\\widetilde S_{stat}$ agrees with the black hole entropy\nup to exponentially suppressed contributions. \nWe shall see in section \\ref{sbh}\nthat such agreement between $\\widetilde S_{stat}$ and \n$S_{BH}$ continues to hold also\nfor CHL compactification\\cite{CHL,CP,9508144,9508154} \nof the heterotic string theory.\n\nNote that given $S_{stat}=\\ln {d_N}$ we can calculate $\\widetilde S_{stat}$\nusing eqs.\\refb{e2a}-\\refb{e12}. Conversely, given $\\widetilde S_{stat}$ we\ncan compute ${\\cal F}(\\mu)$ by taking its Legendre transform and then\ncompute $d_N$ by solving \\refb{e2a}.\nThis gives $S_{stat}$. Thus $S_{stat}$\nand $\\widetilde S_{stat}$ contain complete information about each other and the \ndegeneracies\n$d_N$. 
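The Legendre-transform prescription of \refb{e11}, \refb{e12} is easy to check numerically. The following Python sketch is purely illustrative: the schematic free energy ${\cal F}(\mu)=4\pi^2/\mu + b\ln(\mu/2\pi)$ and the constant $b$ are assumptions standing in for the exact small-$\mu$ expansion, not the full partition function. It solves \refb{e12} by Newton iteration and returns $\widetilde S_{stat}(N)$.

```python
import math

def stat_entropy(N, b=0.0):
    """Legendre transform S~_stat(N) = F(mu) + mu*N at the stationary point
    dF/dmu + N = 0, for the schematic (assumed, illustrative) free energy
        F(mu) = 4*pi^2/mu + b*log(mu/(2*pi)).
    """
    F = lambda mu: 4 * math.pi**2 / mu + b * math.log(mu / (2 * math.pi))
    mu = 2 * math.pi / math.sqrt(N)        # leading-order saddle point value
    for _ in range(60):                    # Newton iteration on g(mu) = dF/dmu + N
        g = -4 * math.pi**2 / mu**2 + b / mu + N
        gprime = 8 * math.pi**2 / mu**3 - b / mu**2
        mu -= g / gprime
    return F(mu) + mu * N
```

For $b=0$ the transform can be done in closed form: the stationary point is $\mu = 2\pi/\sqrt{N}$, each of the two terms equals $2\pi\sqrt{N}$, and $\widetilde S_{stat}=4\pi\sqrt{N}$; a non-zero $b$ shifts the saddle point and generates logarithmic corrections.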
This allows us to restate the proposal \\refb{eprop} in a slightly\ndifferent but equivalent form. Given $S_{BH}(n,w)$ (which turns out to be\na function of the combination $N=nw$) we define ${\\cal F}_{BH}(\\mu)$ by taking\nthe Legendre transform of $S_{BH}$ with respect to the variable $N$, and\nthen define $d^{BH}_N$ through an analog of eq.\\refb{e2a} with ${\\cal F}(\\mu)$ \nand \n$d_N$ replaced by ${\\cal F}_{BH}(\\mu)$ and $d_N^{BH}$ respectively. The\nproposal \\refb{eprop} then translates to the relation:\n\\begin{equation} \\label{enewp}\n{\\cal F}_{BH}(\\mu) = {\\cal F}(\\mu)\\, , \\qquad d^{BH}_N = d_N\\, .\n\\end{equation}\nAlthough we shall work with \\refb{eprop} for convenience, we should\nkeep in mind that verifying \\refb{eprop} amounts to verifying\n\\refb{enewp}.\n\n\\sectiono{Counting Degeneracy of BPS String States in CHL Models} \n\\label{sstat}\n\nIn this section we shall compute $d_N$ and hence ${\\cal F}(\\mu)$ for a\nclass of $N=4$ supersymmetric heterotic string compactification.\nFirst we shall illustrate the counting procedure in the context of a\nspecific CHL model\\cite{CP} \nand then generalize this to other models. The \nconstruction of the model\nbegins with\n$E_8\\times E_8$ heterotic string theory compactified on a six torus\n$T^4\\times \\widetilde\nS^1\\times S^1$. \nIn this theory the gauge fields in the Cartan subalgebra consist of\n22 gauge fields arising out of the left-moving U(1) currents of the\nworld-sheet theory, and six gauge fields arising out of the right-moving\nU(1) currents of the world-sheet theory. All the gauge fields associated\nwith the $E_8\\times E_8$ group arise out of the left-moving world-sheet\ncurrents.\nWe now mod out this theory by a $Z_2$ transformation\nthat involves a half shift along $\\widetilde S^1$ together with an exchange of\nthe two $E_8$ lattices\\cite{CP}. The resulting theory still has $N=4$\nsupersymmetry. 
In particular the 6 U(1) gauge fields associated with the right-moving
world-sheet currents are untouched by the $Z_2$ projection, and continue
to provide us with the graviphoton fields of $N=4$ supergravity. On the
other hand, only the diagonal sum of the two $E_8$ gauge groups survives
the projection. As a result the $E_8\times E_8$ component of the gauge
group is reduced to $E_8$, and the rank of the unbroken gauge group from
the left-moving sector of the world-sheet is reduced to $14$ from its
original value 22. Eight of these U(1) gauge fields come from the
surviving $E_8$ gauge group, and the other six come from appropriate
linear combinations of the metric and antisymmetric tensor field, with
one index lying along one of the six directions of the internal torus and
the other index lying along one of the non-compact directions.

We now consider an elementary string state wound $w$ times along $S^1$
and carrying $n$ units of momentum along the same $S^1$. The BPS
excitations of this string state come from restricting the right-moving
oscillators to have total level 1/2 (in the Neveu-Schwarz sector) or 0
(in the Ramond sector), but allowing arbitrary oscillator and momentum
excitations in the left-moving sector. We would like to count BPS states
with a given set of gauge charges, notably those carried by an elementary
string state with $w$ units of winding and $n$ units of momentum along
$S^1$. First let us do this calculation for heterotic string theory on a
torus\cite{rdabh0}.
In this case\nthe only possible \nexcitations are those created by left-moving oscillators, since any\nadditional momentum and \/ or winding will generate additional gauge\ncharges carried by the state.\nIf $N_L$ denotes \nthe total level of the left-moving oscillators then the level matching \ncondition gives $N_L=nw+1$, and hence the degeneracy of elementary string \nstates carrying quantum numbers $(n,w)$ is the number \nof ways we can get total oscillator level $N_L$ from the 24 left-moving \noscillators, multiplied by a factor of 16 that counts the degeneracy of\nthe ground state of the right-moving sector. \nWe shall call this number $d^{(0)}_{N_L}$.\nIt is \ngiven by the generating function\\cite{rdabh0}\n\\begin{equation} \\label{e4}\n\\sum_{N_L} d^{(0)}_{N_L} e^{-\\mu (N_L-1)} = \n16 \\left(\\eta(e^{-\\mu})\\right)^{-24}, \\qquad\n\\eta(q) = q^{1\/24} \n\\, \\prod_{n=1}^\\infty ( 1 - \nq^n)\\, .\n\\end{equation}\nFor the CHL string theory under consideration, the counting is a \nlittle more complicated. Since only the diagonal $E_8$ gauge group \nsurvives, we can \nsatisfy the condition for vanishing $E_8$ charge if we choose equal and \nopposite momentum vector $\\vec p$ and $-\\vec p$\nfrom the two $E_8$ lattices. We choose\nthe overall normalization of $\\vec p$ such that the $E_8$\nlattice is self-dual \nunder the inner product $(\\vec p,\\vec q)=\\vec p\\cdot \\vec q$. 
In this \nnormalization there is one lattice point per unit volume\nin the $\\vec p$ space, and the\ncontribution to the $\\bar L_0$ eigenvalue from the vector \n$\\vec s =(\\vec p, -\\vec p)$ is given by \n$\\vec p^2\/2+\\vec p^2\/2=\\vec p^2$.\nThe level matching condition now gives:\n\\begin{equation} \\label{e1}\nN_L + {\\vec p^2} = nw + 1\\, .\n\\end{equation}\nThus the degeneracy $d_{nw}$ for given $(n,w)$ is equal to the number of \nways we can satisfy \\refb{e1}, subject to the condition that the\nresulting state is even under the orbifold \ngroup:\n\\begin{equation} \\label{ednw}\nd_{nw} \\simeq {1\\over 2} \\sum_{N_L} \\sum_{\\vec p\\in \\Lambda_{E_8}} \nd^{(0)}_{N_L} \\, \\delta_{N_L+\\vec p^2 -nw, 1}\\, .\n\\end{equation}\nSince we only include\nstates which are even under the $Z_2$ transformation, we must symmetrize\nthe state under the exchange of the oscillators and momenta associated with\nthe two $E_8$ factors. As shown in appendix \\ref{sa}, up to exponentially \nsmall contribution\nthis introduces a multiplicative factor of 1\/2 in the counting of states\nwhich we have included in the right hand side of \\refb{ednw}.\nNote that the twisted sector states do not play any\nrole in this counting, since they carry half-integral winding number along\nthe circle $\\widetilde S^1$ and hence belong to a different charge sector. \nUsing \\refb{ednw} and \\refb{e4} the partition function $e^{{\\cal F}(\\mu)}$ \ndefined in \\refb{e2a} is now\ngiven by\n\\begin{equation} \\label{e3}\ne^{{\\cal F}(\\mu)} \\simeq {1\\over 2}\n\\sum_{N_L} d^{(0)}_{N_L} e^{-\\mu (N_L-1)} \\, \n\\sum_{\\vec p\\in \\Lambda_{E_8}} \ne^{-\\mu \\vec \np^2} = 8\\,\n\\left(\\eta(e^{-\\mu})\\right)^{-24} \\, \\sum_{\\vec p\\in \\Lambda_{E_8}} \ne^{-\\mu \\vec \np^2}\\, .\n\\end{equation}\n\nWe shall be interested in the behaviour of ${\\cal F}(\\mu)$ at small $\\mu$. 
In this\nlimit,\n\\begin{equation} \\label{e6}\n\\left(\\eta(e^{-\\mu})\\right)^{-24} \\simeq e^{{4\\pi^2\/\\mu}} \\, \\left(\n{\\mu\\over 2\\pi}\\right)^{12}\\, ,\n\\end{equation}\nand, using Poisson resummation formula,\n\\begin{equation} \\label{e7}\n\\sum_{\\vec p\\in \\Lambda_{E_8}} \ne^{-\\mu \\vec \np^2} = \\left({\\pi\\over \\mu}\\right)^4\\, \n\\sum_{\\vec q\\in \\Lambda_{E_8}} e^{-\\pi^2\\vec q^2\/\\mu}\n\\simeq \\left({\\pi\\over \\mu}\\right)^{4}\\, .\n\\end{equation}\nHere we have used the fact that the $E_8$ lattice is self-dual.\nThus we get, for small $\\mu$\n\\begin{equation} \\label{e8}\ne^{{\\cal F}(\\mu)} \\simeq {1\\over 2}\\, \n\\left({\\mu\\over 2\\pi}\\right)^{8} \n\\, e^{{4\\pi^2\/\\mu}} \\, ,\n\\end{equation}\nand hence\n\\begin{equation} \\label{e9}\n{\\cal F}(\\mu) \\simeq {4\\pi^2\\over \\mu} + 8 \\ln {\\mu\\over 2\\pi}\n+\\ln{1\\over 2}\\, .\n\\end{equation}\n\nBefore we turn to the more general case, let us try to estimate the error\nin \\refb{e9}. The first source of error appears in \\refb{e3} where we have \nrepresented the symmetry requirement of the states under the $Z_2$\norbifold group by a factor of ${1\\over 2}$ in\n$e^{\\cal F}$. A more careful analysis described in appendix \\ref{sa} shows that \nthe error in ${\\cal F}$ due to this\napproximation involves powers of $e^{-\\pi^2\/\\mu}$. The second source of\nerror is in the small $\\mu$ approximation of $\\eta(e^{-\\mu})$ used\nin \\refb{e6}. The fractional error in \nthis formula is of order $e^{-4\\pi^2\/\\mu}$. \nFinally the approximation used in \\refb{e7} \nalso introduces a fractional error\ninvolving powers of $e^{-\\pi^2\/\\mu}$. 
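Both asymptotic statements are easy to test numerically. The sketch below (Python; illustrative only, with a one-dimensional $Z$ lattice used as a self-dual stand-in for $\Lambda_{E_8}$) compares the product representation of $\eta$ with the small-$\mu$ form \refb{e6}, and the direct theta sum with its Poisson-resummed leading term; in both cases the discrepancy is non-perturbatively small in $\mu$, as claimed.

```python
import math

def eta24_inv(mu, nmax=2000):
    """(eta(e^{-mu}))^{-24} from the truncated product
    eta(q) = q^{1/24} * prod_{n>=1} (1 - q^n), with q = e^{-mu}."""
    q = math.exp(-mu)
    log_eta = -mu / 24 + sum(math.log1p(-q**n) for n in range(1, nmax + 1))
    return math.exp(-24 * log_eta)

def eta24_inv_small_mu(mu):
    """Small-mu asymptotic e^{4 pi^2/mu} (mu/(2 pi))^{12}; the fractional
    error is of order e^{-4 pi^2/mu}."""
    return math.exp(4 * math.pi**2 / mu) * (mu / (2 * math.pi))**12

def theta_Z(mu, smax=200):
    """Direct theta sum over the self-dual lattice Z (a one-dimensional
    stand-in for E_8): sum_{s in Z} e^{-mu s^2}.  Poisson resummation gives
    sqrt(pi/mu) up to corrections of order e^{-pi^2/mu}."""
    return 1 + 2 * sum(math.exp(-mu * s**2) for s in range(1, smax + 1))
```

For instance, at $\mu=2$ the exact and asymptotic values of $\eta^{-24}$ already agree to better than one part in $10^{5}$, consistent with an error of order $e^{-4\pi^2/\mu}$.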
Thus we conclude that the net error in eq.\refb{e9} for ${\cal F}(\mu)$
is non-perturbative in the small $\mu$ approximation.

The above analysis can be easily generalized to a class of other CHL
compactifications. We begin with heterotic string theory compactified on
$T^4\times \widetilde S^1 \times S^1$, and tune the moduli associated
with the $T^4$ compactification such that the twenty-four dimensional
Narain lattice\cite{narain,nsw} $\Lambda_{20,4}$ associated with
heterotic compactification on $T^4$ has a $Z_m$ symmetry. We now mod out
the theory by a $Z_m$ symmetry group generated by a shift $h$ of order
$m$ along $\widetilde S^1$, accompanied by the generator $g$ of the $Z_m$
automorphism group of $\Lambda_{20,4}$. In order that the final theory
has $N=4$ space-time supersymmetry, the $Z_m$ automorphism should act
trivially on the right-moving U(1) currents of the world-sheet. However,
it could have a non-trivial action on the left-moving world-sheet
currents, and as a result modding out by this symmetry projects out a
certain number (say $k$) of the U(1) gauge fields belonging to the Cartan
subalgebra of the gauge group. In the resulting quotient theory the rank
of the gauge group associated with the left-moving sector of the
world-sheet theory is reduced to $(22-k)$. If we now consider an
elementary string wound $w$ times along $S^1$ and carrying $n$ units of
momentum along $S^1$, then the computation of the degeneracy $d_N$
($N=nw$) and the partition function $e^{{\cal F}(\mu)}$ associated with
these states involves a sum over the oscillator levels $N_L$ as well as a
sum over the $k$-dimensional momentum lattice $\Lambda$ whose vectors do
not couple to any massless gauge field of the resulting theory.
\nThis gives\n\\begin{equation} \\label{ednnew}\nd_N \\simeq {1\\over m} \\, \\sum_{N_L} \\, \\sum_{\\vec s\\in \\Lambda}\nd_{N_L}^{(0)} \\, \\delta_{N_L+\\vec s^2\/2 -N, 1}\\, ,\n\\end{equation}\nand\n\\begin{equation} \\label{efnew}\ne^{{\\cal F}(\\mu)} \\simeq {1\\over m} \\, \\sum_{N_L} d_{N_L}^{(0)} \ne^{-\\mu (N_L-1)} \\, \\sum_{\\vec s\\in \\Lambda} \\, e^{-\\mu\\vec s^2\/2}\\, .\n\\end{equation}\nAs discussed in appendix \\ref{sa}, the factor\nof $1\/m$ approximately accounts for the fact that we need to count only \nthose states\nwhich are invariant under the orbifold group, and the error involved in \nthis approximation involves powers of $e^{-\\pi^2\/\\mu}$. The sum over $N_L$ \ncan \nbe performed using \\refb{e4}, whereas the sum over $\\vec s$ can be done\nusing Poisson resummation formula:\n\\begin{equation} \\label{epois}\n\\sum_{\\vec s\\in \\Lambda} e^{-\\mu \\vec s^2\/2} = {1\\over V}\n\\, \\left(\n{2\\pi\\over \\mu}\\right)^{k\/2} \n\\sum_{\\vec r \\in \\widetilde\\Lambda} \\, e^{-2\\pi^2 \\vec r^2\/\\mu} \\simeq\n{1\\over V}\\, \\left(\n{2\\pi\\over \\mu}\\right)^{k\/2}\\, ,\n\\end{equation}\nwhere $V$ denotes the volume of the unit cell in the lattice $\\Lambda$\nand $\\widetilde\\Lambda$ is the lattice dual to $\\Lambda$.\nThus\nthe final expression for\n${\\cal F}(\\mu)$ is given by\n\\begin{equation} \\label{e10}\n{\\cal F}(\\mu) \\simeq {4\\pi^2\\over \\mu} + {1\\over 2} \\, \n(24-k) \\ln {\\mu\\over 2\\pi} + \\ln {16\\over Vm}\\, .\n\\end{equation}\nThe errors in this equation come from errors in eqs.\\refb{e6}, \n\\refb{efnew}\nand \\refb{epois}. Each of these errors involves\npowers of $e^{-\\pi^2\/\\mu}$. Thus\nas in the first example, for small $\\mu$ the corrections to \\refb{e10}\ninvolve powers of $e^{-\\pi^2\/\\mu}$.\n \nGiven ${\\cal F}(\\mu)$, we define the statistical entropy $\\widetilde S_{stat}$ \nthrough \\refb{e11}, \\refb{e12}. 
This gives:\n\\begin{equation} \\label{e10a}\n\\widetilde S_{stat}(N) \\simeq \\mu \\, N +\n{4\\pi^2\\over \\mu} + {1\\over 2} \\, \n(24-k) \\ln {\\mu\\over 2\\pi} + \\ln {16\\over Vm}\\, ,\n\\end{equation}\nwhere $\\mu$ is the solution of the equation\n\\begin{equation} \\label{e10b}\n-{4\\pi^2\\over \\mu^2} +{24-k\\over 2\\mu} +N \\simeq 0 \\, ,\n\\end{equation}\nand $N=nw$.\nIn the limit of large $N$, the $\\mu$ obtained by solving \\refb{e10b} \nis\ngiven by\n\\begin{equation} \\label{e13}\n\\mu \\simeq {2\\pi \\over \\sqrt{N}} \n\\left(1+{\\cal O}\\left({1\\over \\sqrt N}\\right)\\right) \\, .\n\\end{equation}\nThus for large $N$, $\\mu$ is small. This justifies the small $\\mu$\napproximation used in arriving at \\refb{e10}.\nSince the error in ${\\cal F}(\\mu)$ involves powers of $e^{-\\pi^2\/\\mu}$, the\nerror in $\\widetilde S_{stat}$ computed from \\refb{e10a}, \\refb{e10b} will\ninvolve powers of $e^{-\\pi\\sqrt{N}}$.\n\nWe conclude this section by noting that $\\widetilde S_{stat}$ computed from\n\\refb{e10a}, \\refb{e10b} is of the form\n\\begin{equation} \\label{e14}\n\\widetilde S_{stat} = 4\\pi\\sqrt{N} - {24-k\\over 2} \\, \\ln \\sqrt{N} + {\\cal O}(1) \\, .\n\\end{equation}\nAlthough eq.\\refb{e14} gives more explicit expression for\n$\\widetilde S_{stat}$, this\nequation has corrections involving inverse powers of $\\sqrt N$. Thus\nthe comparison with the black hole entropy will be made with the formul\\ae\\\n\\refb{e10a}, \\refb{e10b} for $\\widetilde S_{stat}$ which\nare correct up to error terms involving\npowers of $e^{-\\pi\\sqrt N}$.\n\n\\sectiono{Analysis of Black Hole Entropy and Comparison with the \nStatistical Entropy} \\label{sbh}\n\nWe shall now turn to the analysis of the entropy $S_{BH}$\nof the BPS black hole\ncarrying the\nsame charge and mass as an elementary string state described above.\nThe entropy of such a black hole vanishes in the supergravity \napproximation\\cite{9504147}. 
However the curvature associated \nwith the string\nmetric becomes large near the horizon, showing that we must\ntake the higher derivative terms into account for computing\nthe entropy of such a black hole.\nIn contrast the string coupling near the horizon is small\nfor large\n$n$ and $w$ and hence to leading order we can ignore the string loop\ncorrections\\cite{9504147}. There is a general symmetry argument that shows \nthat at the tree level in heterotic string theory the modified entropy \nmust have the form $a\\sqrt{nw}$ for some numerical constant \n$a$\\cite{9504147,9712150,0411255}. However the value of the constant $a$ \nis not determined by this argument ($a$ could be zero for example). If \n$a=4\\pi$, the black hole entropy would agree with the leading term in the \nexpression \\refb{e14} for $\\widetilde S_{stat}$. Following \nthe formalism developed in\nrefs.\\cite{9812082,9904005,9906094,9910179,0007195,0009234}, \nref.\\cite{0409148} analyzed the effect of\na special class of higher\nderivative terms in the tree level effective action of heterotic \nstring theory which come from supersymmetric completion of the\nterm\\cite{9602060,9603191}\n\\begin{equation} \\label{ebh0}\n{1\\over 16\\pi} \n\\int d^4 x \\, \\sqrt{\\det g}\\, \\left(S\\, R^-_{\\mu\\nu\\rho\\sigma} \n\\, R^{-\\mu\\nu\\rho\\sigma} + \\bar S \\, R^+_{\\mu\\nu\\rho\\sigma} \n\\, R^{+\\mu\\nu\\rho\\sigma} \\right)\\, ,\n\\end{equation}\nwhere $g_{\\mu\\nu}$, $R^\\pm_{\\mu\\nu\\rho\\sigma}$ and $S$ \ndenote respectively \nthe canonical\nmetric, the self-dual and anti-self-dual components of the\nRiemann tensor and the complex scalar field whose real and\nimaginary parts are given by the exponential of the dilaton field \nand the axion field respectively. 
After taking into account the\nmodification of the equations of motion and supersymmetry transformation\nlaws due to these additional terms, the modified\nblack hole entropy is given by\nthe expression\cite{0409148,0410076,0411255,0411272}:\n\begin{equation} \label{eb1}\nS_{BH} = {\pi N \over S_0} + 4\pi \, S_0\, , \qquad \nN \equiv nw\, ,\n\end{equation}\nwhere $S_0$, defined as the value of the field $S$ at the horizon,\nis given by the solution of the equation\footnote{Note that\nthe left hand side of \refb{eb2} is equal to the derivative of the \nright hand side of \refb{eb1} with respect to $S_0$. This feature\nsurvives even after including the correction due to \nholomorphic anomaly\cite{0411255,0412287}.}\n\begin{equation} \label{eb2}\n-{\pi\, N\over S_0^2} + 4 \pi =0\, .\n\end{equation}\nThis gives $S_{BH}=4\pi\sqrt{N}$, which agrees with the leading term\nin the expression \refb{e14} for $\widetilde S_{stat}$\cite{0409148}.\n\nRef.\cite{0409148} checked this agreement for heterotic string \ncompactification on a torus. However, once this has been checked\nfor torus compactification, similar agreement for other heterotic\nstring compactifications is\nautomatic due to an argument in \cite{9712150}, where it was shown that at tree\nlevel in heterotic string theory the part of the effective action relevant\nfor computing the entropy of these states is identical in all heterotic string\ncompactifications with $N=4$ or $N=2$ supersymmetry. Thus the leading \ncontribution\nto $S_{BH}$ will be given by $4\pi\sqrt{nw}$ for all heterotic string\ncompactifications. This clearly agrees with the leading term in \nthe expression \refb{e14} for $\widetilde S_{stat}$.\n\nWe now turn to the non-leading corrections to the entropy. 
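Before doing so, we record the elementary algebra behind the result just quoted: eq.~\refb{eb2} fixes the horizon value of $S$, and substitution into \refb{eb1} gives

```latex
S_0 = \frac{\sqrt{N}}{2}\, , \qquad
S_{BH} = \frac{\pi N}{S_0} + 4\pi\, S_0
       = 2\pi\sqrt{N} + 2\pi\sqrt{N} = 4\pi\sqrt{N}\, .
```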
\nFor this we need to go beyond the tree level effective action of\nthe heterotic string theory.\nA special class of\nsuch corrections comes from a term in the action of the form:\n\begin{equation} \label{eb3}\n-{K\over 128\pi^2} \, \int d^4 x \, \sqrt{\det g}\, \ln (S+\bar S) \n\, R_{\mu\nu\rho\sigma} \, R^{\mu\nu\rho\sigma}\, ,\n\end{equation}\nthat arises from the so-called holomorphic anomaly\cite{9302103,9309140}.\nHere $K$ is a \nconstant that will be determined later. (For toroidal\ncompactification $K=24$\cite{9610237}.)\nIn order to carry out a systematic analysis of the effect of this term on the\nexpression for the black hole entropy, we need to \n\begin{itemize}\n\item supersymmetrize this term,\n\item study how these additional terms in the action\nmodify the expression for\nthe black hole entropy in terms of various fields near the horizon,\n\item study how the various field configurations near the horizon are\nmodified by these extra terms in the equations of motion, \n\item and finally evaluate the modified expression for \nthe black hole entropy for the\nmodified near horizon field configuration.\n\end{itemize}\nThis, however, has not so far been carried out explicitly. \nIn order to appreciate the reason for this difficulty, one needs to\nknow the difference in the origin of the terms \refb{ebh0} and \refb{eb3}.\nIn fact both terms originate from a term of the form:\footnote{In the\nconvention of ref.\cite{9906094} this corresponds to choosing\n$ F^{(1)} = -{i\pi\over 4} \, \left( g(S) - {K\over 64\pi^2} \ln(S+\bar S)\n\right) \, \Upsilon. 
$\nFor toroidal compactification $g(S) = -{3\\over 4\\pi^2} \\ln\\eta\n(e^{-2\\pi S})$ \nand $K=24$.}\n\\begin{equation} \\label{eb6}\n\\int d^4 x \\, \\sqrt{\\det g} \\left[\n\\phi(S, \\bar S) \\, R^-_{\\mu\\nu\\rho\\sigma} \nR^{-\\mu\\nu\\rho\\sigma} + c.c.\\right] \\, ,\n\\end{equation}\nwhere \n\\begin{equation} \\label{eb7a}\n\\phi(S,\\bar S)=g(S) -{K\\over 128\\pi^2} \\, \\ln(S+\\bar S)\n\\end{equation} \nis the sum of a piece $g(S)$ that is holomorphic\nin $S$ and a piece proportional to $\\ln(S+\\bar S)$\nthat is a function of both $S$ \nand $\\bar S$.\nFor large $S$,\n\\begin{equation} \\label{e7b}\ng(S) \\simeq {S\\over 16\\pi}\\, ,\n\\end{equation}\nso as to reproduce \\refb{ebh0}. Hence \n\\begin{equation} \\label{eb7}\n\\phi(S, \\bar S)\n\\simeq {1\\over 16\\pi} \\, \\left( S -{K\\over 8\\pi} \\ln(S+\\bar S)\n\\right)\\, .\n\\end{equation}\nA detailed analysis of the function $g(S)$ can be found in\nappendix \\ref{sb} where it has been shown that corrections to \\refb{e7b} \nare of order $e^{-2\\pi S}$. The contribution\n\\refb{ebh0} comes from the $g(S)\\simeq S\/16\\pi$ term in $\\phi(S,\\bar S)$.\nBeing holomorphic in $S$,\nthis part is easy to supersymmetrize, and was used in arriving at expressions\n\\refb{eb1}, \\refb{eb2} for $S_{BH}$. On the other hand \\refb{eb3} arises\nfrom the part of $\\phi(S,\\bar S)$ proportional to $\\ln(S+\\bar S)$ which cannot\nbe regarded as a holomorphic function. Supersymmetrization\nof this term has not been carried out completely.\nNevertheless,\nusing various consistency requirements, \\cite{9906094} guessed that \nsupersymmetric completion of the term \\refb{eb6} modifies\nequations \\refb{eb1} and \\refb{eb2} to\\footnote{The analysis of\n\\cite{9906094} was done in the context of toroidal compactification\nof heterotic string theory. 
We are using a generalization of\nthis result.}\n\\begin{equation} \\label{eb4aa}\nS_{BH} = {\\pi N \\over S_0} + \n64\\pi^2 \\, g(S_0) - {K\\over 2} \\, \\ln (2S_0)\\, ,\n\\end{equation}\nand\n\\begin{equation} \\label{eb5aa}\n-{\\pi\\, N\\over S_0^2} + 64 \\pi^2 g'(S_0) -{K\\over 2S_0} \\simeq 0\\, .\n\\end{equation}\nFor large $N$, $S_0$ computed from \\refb{eb5aa} is of order $\\sqrt{N}$\nand hence we can use the large $S_0$ approximation \\refb{e7b} for $g(S_0)$.\nThis gives\n\\begin{equation} \\label{eb4}\nS_{BH} \\simeq {\\pi N \\over S_0} + \n4\\pi \\, S_0 - {K\\over 2} \\, \\ln (2S_0)\\, ,\n\\end{equation}\nand\n\\begin{equation} \\label{eb5}\n-{\\pi\\, N\\over S_0^2} + 4 \\pi -{K\\over 2S_0} \\simeq 0\\, .\n\\end{equation}\n\n{}In order to complete the computation of $S_{BH}$ we need to calculate the\nconstant $K$.\\footnote{This calculation has been carried out earlier\nin \\cite{9708062} using direct analysis of one loop amplitudes in type II\nstring theory.} \nFortunately there is a simple expression for $K$ by virtue of\nthe fact that it appears as the coefficient of the non-holomorphic piece of\n$\\phi(S,\\bar S)$ and hence is directly related to the holomorphic anomaly\n$\\partial_S\\partial_{\\bar S} \\phi(S,\\bar S)$\\cite{9302103,9309140}. \nThis computation is carried out by mapping the heterotic string theory to\nthe dual type IIA description. As is well known, heterotic string theory\non $T^4$ is dual to type IIA string theory on \nK3\\cite{9410167,9501030,9503124,9504027,9504047}.\nUnder this duality the Narain lattice $\\Lambda_{20,4}$ of the heterotic \nstring theory\ngets mapped to the lattice of integer homology cycles of $K3$, and\nthe components of the\ngauge fields take value in the real cohomology group\nof $K3$\\cite{9503124}. 
Also the generator $g$ of the $Z_m$ symmetry\nof $\\Lambda_{20,4}$, that was used in section \\ref{sstat}\nfor the construction of the CHL model, gets mapped to an order $m$ symmetry\ngenerator $\\widetilde g$ of the conformal field theory describing type IIA string\ntheory on $K3$\\cite{9508154} with specific action \non the elements of the homology and the cohomology group of $K3$.\nSince $g$ preserves $(24-k)$ of the 24 directions of the Narain lattice\nassociated with heterotic string compactification on $T^4$,\n$\\widetilde g$ will preserve $(24-k)$ of the 24 basis vectors of the real\ncohomology group of $K3$. \nNow\ncompactifying both sides on $\\widetilde S^1\\times S^1$ we get a duality between\nheterotic string theory on \n$T^4\\times \\widetilde S^1\\times S^1$ and type IIA on $K3\\times \n\\widetilde S^1\\times S^1$. \nLet us denote by $h$ the generator of the\norder $m$ shift along $\\widetilde S^1$. Then the CHL model, obtained by\nmodding out heterotic string theory on $T^4\\times \\widetilde S^1\\times S^1$\nby the $Z_m$ symmetry group generated by \n$h\\cdot g$, is dual to\ntype IIA string theory on $K3\\times\n\\widetilde S^1\\times S^1$, modded out by the $Z_m$ group generated by\n$h\\cdot \\widetilde g$\\cite{9507027,9507050,9508144,9508154}.\nWe shall denote\nby ${\\cal C}$ the conformal field theory associated with the six compact directions\nof the type IIA string theory after taking this quotient. \n\nIt is well\nknown that the dilaton-axion field $S$ of the heterotic string theory\ngets mapped to the complexified Kahler modulus of the two torus \n$\\widetilde S^1\\times S^1$ on the type IIA side\\cite{9503124}. 
\nThus computation of\n$\partial_S\partial_{\bar S}\phi(S,\bar S)$ requires computing the derivative \nof $\phi$ with\nrespect to the Kahler modulus of $\widetilde S^1\times S^1$ and its complex conjugate\nin the type IIA description.\nThe detailed procedure for this computation can be found in \n\cite{9302103,9309140}; \nhere we just summarize\nthe relevant results of these papers which lead to the value of $K$.\nLet\nus denote by $\psi^4$, $\psi^5$ the right-handed world-sheet fermions,\nand by $\bar\psi^4$, $\bar \psi^5$ the left-handed world-sheet fermions \nassociated with the directions along the circles $S^1$ and $\widetilde S^1$ in the\ntype IIA theory. We define:\n\begin{equation} \label{ec1}\n\psi^\pm = {1\over \sqrt 2} (\psi^4 \pm i \psi^5), \qquad\n\bar\psi^\pm = {1\over \sqrt 2} (\bar\psi^4 \pm i \bar\psi^5)\, .\n\end{equation}\nIn the Ramond-Ramond (RR) sector\n$\psi^\pm$ as well as $\bar\psi^\pm$ have zero modes. We denote them by\n$\psi_0^\pm$ and $\bar\psi_0^\pm$ respectively. \nThey satisfy the usual anti-commutation relations\n\begin{equation} \label{ec1a}\n\{\psi_0^+, \psi_0^-\} = 1, \qquad \{\bar\psi_0^+, \bar\psi_0^-\} = 1\, ,\n\end{equation}\nwith all other anti-commutators being zero.\nIf we now define\n\begin{equation} \label{ec2}\nC = \psi_0^+ \bar \psi_0^-, \qquad \bar C = \psi_0^- \bar \psi_0^+\, ,\n\end{equation}\nthen in the subspace\nof RR sector ground states they represent\nthe action of the\noperators $\psi^+\bar \psi^-$ and $\psi^-\bar\psi^+$ which appear in the\nvertex operators of the Kahler class of $\widetilde S^1\times S^1$ and its complex \nconjugate -- the fields $S$, $\bar S$\nwith respect to which we want to take derivatives of\n$\phi(S,\bar S)$. 
\nIn terms of these operators the coefficient $K$ is \ngiven by\cite{9302103}\n\begin{equation} \label{ec3}\nK = -Tr_{RR} \left[ (-1)^{F_L+F_R} \, C \, \bar C \right]\, ,\n\end{equation}\nwhere the trace is to be\ntaken over the RR sector ground states (with $L_0=\bar \nL_0=0$) of the conformal field theory ${\cal C}$, and $F_L$ and $F_R$ denote\nthe world-sheet fermion numbers in the left and the right-moving sector of\nthis conformal field theory. In arriving at \refb{ec3} we have used\nthe fact that $Tr\left( (-1)^{F_L+F_R}\right)$ vanishes in the conformal\nfield theory ${\cal C}$, since the action of the fermion zero modes\n$\psi_0^\pm$ pairs states with equal and opposite $(-1)^{F_L+F_R}$ \neigenvalues.\n\nThe states of the conformal field theory ${\cal C}$ include both untwisted sector\nstates and twisted sector states. Of these, the twisted sector states \nnecessarily carry a fractional \nunit of winding along $\widetilde S^1$. Hence the twisted RR states\nalways have strictly positive $L_0$, $\bar L_0$ \neigenvalues and cannot contribute\nto \refb{ec3}. \nThe untwisted sector\nstates are states associated with the original CFT with target\nspace $K3\times \n\widetilde S^1\times S^1$ which are invariant under the $Z_m$ \nsymmetry generated by $h\cdot \widetilde g$. \nThese can be divided into two classes: those which are invariant\nseparately under $h$ and $\widetilde g$, and those which pick up \nequal and opposite non-trivial\nphases under the action of $h$ and $\widetilde g$.\nThe latter class, not being invariant under $h$,\ncarries non-zero momentum along $\widetilde S^1$, and hence\nthe RR sector states in this class have strictly positive \n$L_0$ and $\bar L_0$ eigenvalues.\nThus they cannot contribute to the trace in \refb{ec3}, and we are left with\nstates which are invariant separately under $h$ and $\widetilde g$. 
\nSince the \noperators $C$ and $\bar C$ appearing in \refb{ec3} act on the Hilbert space\nof the CFT with target space $\widetilde S^1\times S^1$, the contribution to $K$\nfrom these states may be factorized as\n\begin{equation} \label{ec4}\nK = -Tr_{RR}^{K3;inv}\left[(-1)^{F_L+F_R}\right] \, \nTr_{RR}^{\widetilde S^1\times S^1} \n\left[(-1)^{F_L+F_R} C \bar C\right] \, ,\n\end{equation}\nwhere in $Tr_{RR}^{K3;inv}$\nthe trace is now taken over the $\widetilde g$ invariant RR sector \nground states of the\nconformal field theory with target space $K3$, and in\n$Tr_{RR}^{\widetilde S^1\times S^1}$ the trace is to be taken over the\nRR sector ground states of the CFT with target space $\widetilde S^1\times S^1$.\nFor the latter CFT the requirement of vanishing $L_0$ and $\bar L_0$\nforces the states to carry vanishing momenta along $S^1$ and $\widetilde S^1$, and\nhence they are automatically invariant under $h$.\n\nThere is a one-to-one map between the vector space of\nRR sector ground states in the CFT\nassociated with K3 and the\nreal cohomology group of $K3$. Under this map $(F_L+F_R)$ gets mapped to\nthe degree of the cohomology element. Since K3 has non-trivial\ncohomology of even degree only,\n$Tr\left[(-1)^{F_L+F_R}\right]$ for K3 is equal to the dimension of the \ncohomology group of K3, which is 24. Here, however, we are interested in the\nRR sector ground states which are even under $\widetilde g$, and hence we should\ncount the dimension of the cohomology group of K3 which is invariant under \n$\widetilde g$. Since this number is equal to $(24-k)$, we have\n\begin{equation} \label{ec5}\nTr_{RR}^{K3;inv}\left[(-1)^{F_L+F_R}\right] = 24-k\, .\n\end{equation}\nIn order to calculate the second factor appearing in \refb{ec4}, we note that\nthe RR sector ground states \nassociated with $\widetilde S^1\times S^1$ consist of four\nstates. 
Defining the vacuum state $|0\\rangle$ to\nbe annihilated by $\\psi_0^-$ and\n$\\bar\\psi_0^-$, and have $F_L=F_R=-{1\\over 2}$, the states are\n\\begin{equation} \\label{ec6}\n|0\\rangle, \\quad \\psi_0^+|0\\rangle, \\quad \\bar\\psi_0^+|0\\rangle, \\quad \\psi_0^+\\bar\n\\psi_0^+|0\\rangle \\, , \n\\end{equation}\nwith $(F_L,F_R)$ values $\\left(-{1\\over 2}, -{1\\over 2}\\right)$, \n$\\left(-{1\\over 2}, {1\\over 2}\\right)$, $\\left({1\\over 2}, -{1\\over 2}\\right)$, \n$\\left({1\\over 2}, {1\\over 2}\\right)$ respectively. {}From the structure\nof $C$ and $\\bar C$ defined in \\refb{ec2} it is clear that only the\nstate $\\psi_0^+|0\\rangle$ will contribute to the trace appearing in\nthe second factor of \\refb{ec4}. Since $C\\bar C\\psi_0^+|0\\rangle=-\\psi_0^+|0\\rangle$,\nwe get\n\\begin{equation} \\label{ec7}\nTr_{RR}^{\\widetilde S^1\\times S^1} \n\\left[(-1)^{F_L+F_R} C \\bar C\\right] =-1\\, .\n\\end{equation}\nSubstituting \\refb{ec5} and \\refb{ec7} into \\refb{ec4} we get\n\\begin{equation} \\label{ec8}\nK = (24-k)\\, .\n\\end{equation}\n\nEqs.\\refb{eb4}, \\refb{eb5} now give\n\\begin{equation} \\label{eb4a}\nS_{BH} \\simeq {\\pi N \\over S_0} + 4\\pi \\, S_0 - {1\\over 2}\n\\, (24-k) \\, \n\\ln (2S_0)\\, ,\n\\end{equation}\nand\n\\begin{equation} \\label{eb5a}\n-{\\pi\\, N\\over S_0^2} + 4 \\pi -{24-k\\over 2S_0} \\simeq 0\\, .\n\\end{equation}\nThese agree with eqs.\\refb{e10a}, \\refb{e10b} under the identification\n\\begin{equation} \\label{ec9}\n\\widetilde S_{stat} = S_{BH}, \\qquad \\mu = {\\pi\\over S_0}\\, .\n\\end{equation}\nThus we see that in this approximation the entropy $S_{BH}$\nof the black hole\nagrees with the statistical entropy $\\widetilde S_{stat}$ calculated following\nthe procedure given in section \\ref{srev} up to an overall constant.\n\nEarlier we had estimated the error in $\\widetilde S_{stat}$ calculated from\n\\refb{e10a}, \\refb{e10b} to be\nnonperturbative in $1\/\\sqrt{N}$. 
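As a direct check of the identification \refb{ec9}, note that with $\mu=\pi/S_0$ eqs.~\refb{eb4a}, \refb{eb5a} reduce algebraically to \refb{e10a}, \refb{e10b} up to the additive constant $\ln(16/Vm)$. The short sketch below (an added numerical illustration, not part of the original analysis) makes this explicit, using the fact that \refb{eb5a} is quadratic in $S_0$:

```python
import math

def black_hole_side(N, k):
    """Solve eq. (eb5a), -pi N/S0^2 + 4 pi - (24-k)/(2 S0) = 0, which is
    quadratic in S0, and evaluate the entropy of eq. (eb4a)."""
    b = (24 - k) / (2 * math.pi)
    # positive root of 4 S0^2 - b S0 - N = 0
    S0 = (b + math.sqrt(b * b + 16 * N)) / 8
    S_BH = math.pi * N / S0 + 4 * math.pi * S0 \
           - ((24 - k) / 2.0) * math.log(2 * S0)
    return S0, S_BH

N, k = 10 ** 6, 10   # illustrative values
S0, S_BH = black_hole_side(N, k)
mu = math.pi / S0    # the identification (ec9)
# mu solves eq. (e10b): the residual vanishes up to rounding
res = -4 * math.pi ** 2 / mu ** 2 + (24 - k) / (2 * mu) + N
# eq. (e10a) without the constant ln(16/(V m)) reproduces S_BH
S_stat = mu * N + 4 * math.pi ** 2 / mu \
         + ((24 - k) / 2.0) * math.log(mu / (2 * math.pi))
```

The residual of \refb{e10b} vanishes to machine precision and the two entropies coincide once the constant is dropped, in accordance with the statement that the agreement holds up to an overall additive constant.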
We shall now try to carry out a\nsimilar estimate of the error involved in eqs.\\refb{eb4a}, \\refb{eb5a} so\nthat we can determine up to what level the agreement between $S_{BH}$ and\n$\\widetilde S_{stat}$ holds.\nFirst of all we should remember that there is an\nuncertainty involved in the\noriginal formulae \\refb{eb4aa}, \\refb{eb5aa} since they have not been derived\nfrom first principles. \nIn particular \\cite{9906094} used an argument based on duality symmetry to\nderive the effect of the holomorphic anomaly terms, and this does not\nfix the additive constant on the right hand side\nof \\refb{eb4aa}. Thus there is an ambiguity in the overall additive\nconstant in the\nexpression for $S_{BH}$, and hence we cannot hope to compare the additive\nconstants in $\\widetilde S_{stat}$ and $S_{BH}$.\nAssuming that the formul\\ae\\ \\refb{eb4aa}, \\refb{eb5aa} are correct up to this\nadditive constant, we see\nthat the error in the determination of $S_{BH}$ lies essentially in the\nerror in the determination of the function $g(S)$. As reviewed in \nappendix \\ref{sb}, \nwe can determine the form of the corrections to $g(S)$ by the requirement \nof S-duality invariance of the\ntheory,\\footnote{As will be discussed in appendix \\ref{sb},\nthe S-duality group for these models is usually\na subgroup of SL(2,Z), and \ndepends on the specific\nmodel we are analysing\\cite{9507050}.}\nand typically corrections\nto \\refb{e7b} involve powers of $e^{-2\\pi S}$. Since $S_0$ obtained by \nsolving eq.\\refb{eb5a} is of order\n$\\sqrt{N}$, we see that the corrections to $S_{BH}$ will involve powers of\n$e^{-\\pi\\sqrt{N}}$. 
Thus for large $N$\nthe agreement between $S_{BH}$ and $\\widetilde S_{stat}$ \nholds up to an undetermined additive\nconstant, and terms which are non-perturbative in $1\/\\sqrt{N}$.\nIn particular if we express $\\widetilde S_{stat}$ and $S_{BH}$ in power series\nexpansion in $1\/\\sqrt{N}$ by solving eqs.\\refb{e10a}, \\refb{e10b} \nand \\refb{eb4a}, \\refb{eb5a}\nrespectively, then the results agree to all orders in $1\/N$ including\nterms proportional to $\\ln N$. This in turn implies similar agreement\nbetween $\\ln d_N$ and $\\ln d_N^{BH}$ defined in section \\ref{srev}.\n\n\\sectiono{Discussion} \\label{sdiss}\n\nGiven the agreement between the statistical entropy $\\widetilde S_{stat}$ and \nthe black hole entropy $S_{BH}$ up to non-perturbative terms, one might \nwonder if this correspondence also holds after we include non-perturbative \nterms. Unfortunately however even for toroidally compactified heterotic \nstring theory $\\widetilde S_{stat}$ and $S_{BH}$ differ once we include\nnon-perturbative corrections\\cite{0411255,0412287}, and hence we\nexpect that such disagreement will also be present\nfor CHL compactifications. One could contemplate several reasons for\nthis discrepancy:\n\\begin{enumerate}\n\\item First of all we should remember that the formul\\ae\\\n\\refb{eb4aa},\n\\refb{eb5aa}\nfor the black\nhole entropy in the presence of non-holomorphic terms, as given in\n\\cite{9906094}, have not been derived from first principles. 
Thus\nthere could be further corrections to these formul\ae\ which could\nmodify the expression for $S_{BH}$.\n\item It could also be that the proposal \refb{e2a} - \refb{eprop}\nfor relating the black hole\nentropy to the degeneracy of elementary string states\nis not \ncomplete, and that the formul\ae\ need to be\nmodified once non-perturbative effects are taken into account.\n\item Besides the supersymmetric\ncompletion of the curvature squared terms, the effective field\ntheory contains an infinite number of other higher derivative terms\nwhich are in principle equally important, and\nat present there is no understanding as to why these terms do not\naffect the expression for the entropy. It could happen that while\nthese terms do not affect the perturbative corrections, their\ncontribution becomes significant at the non-perturbative level.\n\item Finally, there is always the possibility that the relation between\nthe black hole entropy and the statistical entropy exists only as a\npower series expansion in inverse powers of various charges. In this\ncase we do not expect any relation between the non-perturbative\nterms in the expressions for $S_{BH}$ and $\widetilde S_{stat}$. \n\end{enumerate}\n\nAt present we do not know which (if any) of these possibilities\nis correct. This issue clearly needs further investigation.\nWe note, however, that if the fourth possibility is correct, namely that the \nagreement between the black hole entropy and the statistical entropy holds \nonly as a power series expansion in inverse powers of the charges, then \nthe proposal \refb{eprop} relating the black hole and the statistical \nentropy can be extended to more general models and more general states. \nThe essential point is that in our analysis we have restricted ourselves to \ncompactifications of the form $K_5\times S^1$ and to states carrying \nmomentum and winding along $S^1$ in order to ensure that the degeneracy of \nthe states depends only on the \ncombination $N=nw$. 
If\nwe consider more general $N=4$ supersymmetric compactification \n({\\it e.g.} where the orbifold group is $Z_m\\times Z_{m'}$\nand acts on both circles instead of just \none circle\\cite{9508154}) and\/or more general \nstates \ncarrying arbitrary gauge charges $(\\vec P_L, \\vec P_R)$ associated\nwith gauge fields arising out of the left and the right sectors of the\nworld-sheet, then the role of the T-duality invariant combination $nw$ is \nplayed by\n$N\\equiv {1\\over 2} (\\vec P_R^2 - \n\\vec P_L^2)$. However in this case the degeneracy $d(\\vec P_L, \\vec P_R)$\nof such states could depend on $\\vec P_L$ and $\\vec P_R$ separately\ninstead of being a function of the combination $N$ only. To see how\nsuch dependence can arise, we can\nconsider the class of models described in section \\ref{sstat} and consider\na state that carries $\\tilde n$ units of momentum along the circle $\\widetilde S^1$ \nbesides the charge\nquantum numbers $n,w$ associated with the circle $S^1$. \nFor such states we still have $N=nw$. However\nin this case the\npart of the wave-function associated with $\\widetilde S^1$\npicks up a phase $e^{2\\pi i \\tilde n\/m}$ \nunder the $Z_m$ shift along $\\widetilde S^1$,\nand we must compensate for it by introducing a factor of \n$e^{2\\pi i \\tilde n l\/m}$\nmultiplying $g^l$ in the projection operator \\refb{epr1} in order to \nensure that the complete state is invariant under the $Z_m$ \ntransformation. This introduces\na specific dependence of the partition function and hence of\nthe degeneracy of states on $\\tilde n$. 
If in addition the state carries\nsome gauge charges associated with the lattice $\Lambda_{20,4}$, then\nthe sum over\nmomentum $\vec s$ in \refb{ednnew}, \refb{efnew} \nmight run over a shifted lattice,\nwhich is equivalent to replacing the $\exp(-\mu\vec s^2\/2)$ factor\nin \refb{efnew} by $\exp[-\mu (\vec s + \vec K)^2\/2]$ for some fixed vector\n$\vec K$ that depends on the component of\n$(\vec P_L, \vec P_R)$ along $\Lambda_{20,4}$. However, by following the\nanalysis given in section \ref{sstat} and\nappendix \ref{sa} one can see that the\ndependence on $(\vec P_L, \vec P_R)$ introduced by either of these effects\nis exponentially suppressed, and\nhence if we are interested in $\ln d(\vec P_L, \vec P_R)$ as a power series\nexpansion in inverse powers of charges, the result depends only on the\ncombination $N=(\vec P_R^2-\vec P_L^2)\/2$. A similar analysis can also\nbe carried out for the $Z_m\times Z_{m'}$ orbifold models.\nThis allows us to define\n$\widetilde S_{stat}(N)$ through eqs.\refb{e2a}-\refb{e12} within this approximation.\nOn the other hand, from the results of \cite{9906094} it \nfollows that the black hole entropy will also be a function of the \ncombination $N={1\over 2} (\vec P_R^2 -\n\vec P_L^2)$. 
An analysis similar to the one described in this paper\ncan then be used to show that\nthe correspondence \\refb{eprop} between the statistical \nentropy and \nthe black hole entropy continues to hold for these more general class\nof states and\/or models.\n\n\n\n\n\n{\\bf Acknowledgement}: I would like to thank P.~Aspinwall, A.~Dabholkar,\nR.~Gopakumar, D.~Jatkar\nand D.~Surya Ramana for valuable discussions.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{submission}\n\nWork on generating images with Generative Adversarial Networks (GANs) has shown promising results \\cite{GANS}.\nA generative net G and a discriminator D are trained simultaneously with conflicting objectives.\nG takes in a noise vector and outputs an image, while\nD takes in an image and outputs a prediction about whether the image is a sample from G.\nG is trained to maximize the probability that D makes a mistake, and D is trained to minimize that probability.\nBuilding on these ideas, one can generate good output samples using a cascade \\cite{LAPGAN} of convolutional neural networks.\nMore recently \\cite{DCGAN}, even better samples were created from a single generator network.\nHere, we consider the situation where we try to solve a semi-supervised classification task and learn a generative model simultaneously.\nFor instance, we may learn a generative model for MNIST images while we train an image classifier, which we'll call C.\nUsing generative models on semi-supervised learning tasks is not a new idea - Kingma et al. \\yrcite{SSVAE} expand work on variational generative techniques \n\\cite{VAES, VAES2} to do just that.\nHere, we attempt to do something similar with GANs.\nWe are not the first to use GANs for semi-supervised learning.\nThe CatGAN \\cite{CATGAN} modifies the objective function to\ntake into account mutual information between observed examples and their predicted\nclass distribution.\nIn Radford et al. 
\\yrcite{DCGAN}, the features learned by D are reused in classifiers.\n\n\nThe latter demonstrates the utility of the learned representations, but it has several undesirable properties.\nFirst, the fact that representations learned by D help improve C is not surprising - it seems\nreasonable that this should work. However, it also seems reasonable that learning a good C would help to improve the performance of D.\nFor instance, maybe images where the output of C has high entropy are more likely to come from G.\nIf we simply use the learned representations of D\nafter the fact to augment C, we don't take advantage of this.\nSecond, using the learned representations of D after the fact doesn't allow for training C and G simultaneously.\nWe'd like to be able to do this for efficiency reasons, but there is a more important motivation.\nIf improving D improves C, and improving C improves D (which we know improves G) then we may be able to\ntake advantage of a sort of feedback loop, in which all 3 components (G,C and D) iteratively make each other\nbetter.\n\nIn this paper, inspired by the above reasoning, we make the following contributions:\n\n\\begin{itemize}\n \\item First, we describe a novel extension to GANs that allows them to learn a generative model and a classifier simultaneously.\nWe call this extension the Semi-Supervised GAN, or SGAN.\n\\item Second, we show that SGAN improves classification performance on restricted data sets over a baseline classifier with no generative component.\n\\item Finally, we demonstrate that SGAN can significantly improve the quality of the generated samples and reduce\n training times for the generator.\n\\end{itemize}\n\n\\section{The SGAN Model}\n\nThe discriminator network D in a normal GAN outputs an estimated probability that the input image is\ndrawn from the data generating distribution. 
Traditionally this is implemented with a feed-forward network\nending in a single sigmoid unit, but it can also be implemented with a softmax output layer with one unit\nfor each of the classes [REAL, FAKE]. Once this modification is made, it's simple to\nsee that D could have N+1 output units corresponding to [CLASS-1, CLASS-2, \\dots CLASS-N, FAKE].\nIn this case, D can also act as C. We call this network D\/C.\n\nTraining an SGAN is similar to training a GAN.\nWe simply use higher granularity labels for the half of the minibatch that has been drawn from the data generating distribution.\nD\/C is trained to minimize the negative log likelihood with respect to the given labels and G is trained to maximize it, as shown in Algorithm \\ref{alg:example}.\nWe did not use the modified objective trick described in Section 3 of Goodfellow et al. \\yrcite{GANS}.\n\nNote: in concurrent work, \\cite{IMPROVEDTECHNIQUES} propose the same method for augmenting\nthe discriminator and perform a much more thorough experimental evaluation of the technique.\n\n\\begin{algorithm}[tb]\n \\caption{SGAN Training Algorithm}\n \\label{alg:example}\n\\begin{algorithmic}\n \\STATE {\\bfseries Input:} $I$: number of total iterations\n \\FOR{$i=1$ {\\bfseries to} $I$}\n \\STATE Draw $m$ noise samples $\\{z^{(1)}, \\dots, z^{(m)}\\}$ from noise prior $p_g(z)$.\n \\STATE Draw $m$ examples $\\{ (x^{(1)}, y^{(1)}) , \\dots, (x^{(m)}, y^{(m)}) \\}$ from data generating distribution $p_d(x)$.\n \\STATE Perform gradient descent on the parameters of D w.r.t. the NLL of D\/C's outputs on the combined minibatch of size $2m$.\n \\STATE Draw $m$ noise samples $\\{z^{(1)}, \\dots, z^{(m)}\\}$ from noise prior $p_g(z)$.\n \\STATE Perform gradient descent on the parameters of G w.r.t. 
the NLL of D\/C's outputs on the minibatch of size $m$.\n \\ENDFOR\n\\end{algorithmic}\n\\end{algorithm}\n\n\\section{Results}\n\nThe experiments in this paper were conducted with \\href{https:\/\/github.com\/DoctorTeeth\/supergan}{https:\/\/github.com\/DoctorTeeth\/supergan}, which borrows heavily from \\href{https:\/\/github.com\/carpedm20\/DCGAN-tensorflow}{https:\/\/github.com\/carpedm20\/DCGAN-tensorflow} and which contains more details about the experimental setup.\n\n\\subsection{Generative Results}\n\nWe ran experiments on the MNIST dataset \\cite{MNIST} to determine whether an SGAN would result in better generative samples\nthan a regular GAN.\nUsing an architecture similar to that in Radford et al. \\yrcite{DCGAN}, we\ntrained an SGAN both using the actual MNIST labels and with only the labels REAL and FAKE.\nNote that the second configuration is semantically identical to a normal GAN.\nFigure \\ref{icml-historical} contains examples of generative outputs from both GAN and SGAN.\nThe SGAN outputs are significantly more clear than the GAN outputs.\nThis seemed to hold true across different initializations and network architectures,\nbut it is hard to do a systematic evaluation of sample quality for varying hyperparameters.\n\n\\begin{figure}[ht]\n\\vskip 0.2in\n\\begin{minipage}[b]{0.5\\linewidth}\n\\centerline{\\includegraphics[width=\\columnwidth]{figures\/labels_1_840.png}}\n\\end{minipage\n\\begin{minipage}[b]{0.5\\linewidth}\n\\centerline{\\includegraphics[width=\\columnwidth]{figures\/no-labels_1_840.png}}\n\\end{minipage} \n\\caption{Output samples from SGAN and GAN after 2 MNIST epochs.\nSGAN is on the left and GAN is on the right.}\n\\label{icml-historical}\n\\vskip -0.2in\n\\end{figure} \n\n\\subsection{Classifier Results} \n \nWe also conducted experiments on MNIST to see whether the classifier component of the SGAN would\nperform better than an isolated classifier on restricted training sets.\nTo train the baseline, we train SGAN without ever 
updating G.\nSGAN outperforms the baseline in proportion to how much we shrink the training set,\nsuggesting that forcing D and C to share weights improves data-efficiency.\nTable \\ref{sample-table} includes detailed performance numbers.\nTo compute accuracy, we took the maximum of the outputs not corresponding to the FAKE label.\nFor each model, we did a random search on the learning rate and reported the best result.\n\n\\begin{table}[t]\n\\caption{Classifier Accuracy}\n\\label{sample-table}\n\\vskip 0.15in\n\\begin{center}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{lcc}\n\\hline\n\\abovespace\\belowspace\nExamples & CNN & SGAN\\\\\n\\hline\n\\abovespace\n1000 & 0.965 & 0.964\\\\\n100 & 0.895 & 0.928\\\\\n50 & 0.859 & 0.883\\\\\n25 & 0.750 & 0.802\\\\\n\\belowspace\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\end{center}\n\\vskip -0.1in\n\\end{table}\n\n\\section{Conclusion and Future Work} \nWe are excited to explore the following related ideas:\n\n\\begin{itemize}\n\\item Share some (but not all) of the weights between D and C, as in the dual autoencoder \\cite{TPUL}. This could allow some weights to be specialized to discrimination and some to classification.\n\\item Make GAN generate examples with class labels \\cite{CONDITIONAL}. Then ask D\/C to assign one of $2N$ labels [REAL-ZERO, FAKE-ZERO, \\dots ,REAL-NINE, FAKE-NINE].\n\\item Introduce a ladder network \\cite{LADDER} L in place of D\/C, then use samples from G as unlabeled data to train L with.\n\\end{itemize}\n\n\\section*{Acknowledgements} \n \nWe thank the authors of Tensorflow \\cite{TENSORFLOW}.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nA game models a scenario where multiple strategic controllers (or players) optimize their objective functionals, which depend not only on their own actions but also on the actions of the other controllers. 
In stochastic static games, players observe the realization of some random state of nature, possibly through separate noisy channels, and use such observations to independently determine their actions so that the expected values of their individual cost (or utility) functions are optimized. In a stochastic dynamic game, on the other hand, the players act at multiple time steps, based on observation or measurement of some dynamic process which itself is driven by past actions as well as random quantities, which could again be called random states of nature. What information each player acquires at each stage of the game determines what is called the {\\it information structure} of the underlying game. If all the players acquire the same information at each time step, then the dynamic game is said to be a game of \nsymmetric information. However, in many real scenarios, the players do not have access to the same information about the underlying state processes and other players' observations and past actions. Such games are known as games with asymmetric information. For example, several problems in economic interactions \\cite{harsanyi1967,cole2001,myerson1997}, attacks on cyber-physical systems \\cite{gupta2012}, auctions, cryptography, etc. can be modeled as games of asymmetric information among strategic players.\n\nGames with symmetric and\/or perfect information have been well studied in the literature; see, for example, \\cite{Shapley:1953,Sobel1971, fudenberg1991,Basarbook,Filarbook}. In these games, the players have the same beliefs on the states of the game, future observations and future expected costs or payoffs. However, in games with asymmetric information, the players need not have the same beliefs on the current state and future evolution of the game. 
General frameworks to compute or refine Nash equilibria in stochastic games of symmetric or perfect information have received attention from several researchers; see, for example, \\cite{maskin2001, myerson1997, fudenberg1991} among many others. However, by comparison, such general frameworks for games of asymmetric information are scant (for exceptions, see \\cite{Basar1975,Basaronestep,Basarmulti, basar1985}). This paper, in addition to its earlier finite-game version \\cite{nayyar2012a}, provides such a \nframework.\n\n\nIn our recent work \\cite{nayyar2012a}, we considered a general finite non-zero sum dynamic stochastic game of asymmetric information with stagewise additive cost functions. Under certain assumptions on the information structures of the players, we obtained a characterization of a particular class of Nash equilibria using a dynamic-programming-like approach. The key idea there was to use common information among the players to transform the original game of asymmetric information to a game of symmetric and perfect state information with expanded state and action spaces of the players, so that a more easily computable Markov perfect equilibrium of the latter can be used to obtain a Nash equilibrium for the former. The advantage of this technique is that instead of searching for an equilibrium in the (large) space of strategies (which grows with the number of stages), we only need to compute Nash equilibria in a succession of static games of complete information. This reduces the computational effort in \ncomputing a Nash equilibrium of the game. We refer to the Nash equilibria obtained with this approach as {\\it common information based Markov perfect equilibria}.\n\nIn this work, we extend the framework and results of \\cite{nayyar2012a} to infinite games, particularly those with linear state and observation equations and Gaussian random variables. 
For quadratic cost functions of the players satisfying certain assumptions, we show that a unique common information based Markov perfect equilibrium exists. The general framework developed in the paper can be applied to obtain Nash equilibria of broader classes of stochastic dynamic games with asymmetric information, satisfying two general assumptions delineated in the paper. \n\n\\subsection{Previous Work}\nIn the past, specific models of various classes of games have been studied, where different players acquire different information. Harsanyi, in his seminal paper \\cite{harsanyi1967}, studied one subclass of static games with finite state and action spaces of the players, and showed that under some technical conditions, Nash equilibrium exists in such games. Various authors \\cite{Behn68,Rhodes69,Willman69} have studied two-player zero-sum differential games with linear state dynamics and quadratic payoffs, where the players do not make the same measurements about the state. A zero sum differential game where one player's observation is nested in the other player's observation was considered in \\cite{Hojota}. A zero-sum differential game where one player makes a noisy observation of the state while the other one does not make any measurement was considered in \\cite{MintzBasar}.\n\nDiscrete-time non-zero sum LQG games with one step delayed sharing of observations were studied in \\cite{Basaronestep} and \\cite{Basarmulti}. A game with one-step delayed observation and action sharing among the players was considered in \\cite{Altman:2009}. 
A two-player finite game in which the players do not have access to each other's observations and control actions was considered in \\cite{hespanha2001}, where a necessary and sufficient condition for existence of a Nash equilibrium in terms of two coupled dynamic programs was obtained.\n\nObtaining equilibrium solutions for stochastic games when players make independent noisy observations of the state and do not share all of their information (or even when they have access to the same noisy observation as in \\cite{Bjota}) has remained a challenge for general classes of games. Identifying classes of games which would lead to tractable solutions or feasible solution methods is therefore an important goal in this area.\n\n\\subsection{Contributions of this Paper}\nThis paper is a sequel to our earlier finite game work in \\cite{nayyar2012a}, where we make similar assumptions on the information structures of the players. We study games in which the state, the players' actions and primitive random variables take values in finite-dimensional Euclidean spaces. The state evolution and observation equations are taken to be linear in their arguments and all primitive random variables are assumed to be mutually independent zero-mean Gaussian random variables. We assume that the players have a stagewise additive total cost function. \n\nWe assume that the information structures of the players satisfy two sufficient conditions. For any dynamic game satisfying these assumptions, we show that we can decompose it into several static games using a backward induction algorithm. If there exists a Nash equilibrium for each of those static games, then there exists a common information based Markov perfect equilibrium for the original dynamic game. Furthermore, we present an algorithm that computes the common information based Markov perfect equilibrium in such games, provided that it exists. 
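The backward-induction decomposition described above can be illustrated with a toy sketch. The following is not the prescription-space algorithm of this paper; it is a minimal finite analogue, with hypothetical cost matrices, that solves a static two-player game at each stage and propagates the equilibrium value-to-go backwards from the horizon (the value-to-go is taken state-independent purely for brevity):

```python
import itertools
import numpy as np

def pure_nash(c1, c2):
    """Return a pure Nash equilibrium (a1, a2) of a static bimatrix game
    with cost matrices c1, c2 (both players minimize), or None if none exists."""
    n1, n2 = c1.shape
    for a1, a2 in itertools.product(range(n1), range(n2)):
        if c1[a1, a2] <= c1[:, a2].min() and c2[a1, a2] <= c2[a1, :].min():
            return a1, a2
    return None

def backward_induction(stage_costs1, stage_costs2):
    """stage_costs*[t] is the (n1 x n2) cost matrix of stage t (illustrative).
    Working backwards, solve the static game at each stage after adding the
    equilibrium value-to-go of the later stages."""
    T = len(stage_costs1)
    v1 = v2 = 0.0
    policy = [None] * T
    for t in reversed(range(T)):
        eq = pure_nash(stage_costs1[t] + v1, stage_costs2[t] + v2)
        assert eq is not None, "no pure equilibrium at stage %d" % t
        policy[t] = eq
        v1 = stage_costs1[t][eq] + v1
        v2 = stage_costs2[t][eq] + v2
    return policy, (v1, v2)
```

In the setting of this paper the stage games are instead one-stage Bayesian games over prescription spaces, so this sketch only conveys the shape of the recursion, not the actual computation.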
For games in which the cost functions of the players are quadratic and satisfy certain assumptions, we show that the static game at each time step admits a unique Nash equilibrium in the class of all Borel measurable strategies of the players at that time step, thereby proving the existence of a unique common information based Markov perfect equilibrium. We also show, by example, that there may be other Nash equilibria of such games that cannot be computed using \nthe conceptual method developed in this paper.\n\nTo sum up, common information based Markov perfect equilibria constitute a subclass of Nash equilibria of such games, and this subclass can be thought of as a {\\it refinement of Nash equilibrium} for games with asymmetric information. However, we do not look into implementation issues of common information based Markov perfect equilibrium in this paper, and we leave this as a topic of further investigation.\n\n\n\\subsection{Notation}\nRandom variables are denoted by upper case letters and their realizations by the corresponding lower case letters. Random vectors are denoted by upper case bold letters and their realizations by lower case bold letters. Unless otherwise stated, the state, action and observations are assumed to be vector valued. \n\nLet $\\ALP X$ be a set. For a subset $\\SF X\\subset\\ALP X$, we let $\\SF X^\\complement$ denote the complement of the set $\\SF X$. We use $\\text{id}_{\\ALP X}$ to denote the identity map on the set $\\ALP X$. The transpose of a matrix $A$ is denoted by $A^\\transpose$. \n\n\n\nSubscripts are used as time indices and superscripts are used as player\/controller indices. Consider $a,b\\in\\mathbb{N}$. Let $\\VEC X_t$ be an element of a finite dimensional Euclidean space $\\ALP X_t$ for $a\\leq t\\leq b$. If $a\\leq b$, then we let $\\VEC X_{a:b}$ denote the set of vectors $\\{\\VEC X_a, \\VEC X_{a+1}, \\dots, \\VEC X_b\\}$. If $a>b$, then $\\VEC X_{a:b}$ is empty. 
On the other hand, we use $\\ALP X_{a:b}$ to denote the product space $\\prod_{t=a}^b\\ALP X_t$, which is a finite dimensional Euclidean space, with the understanding that $\\ALP X_{a:b}=\\emptyset$ if $a>b$. We use a similar convention for superscripts.\n\nWe use $\\mathds{P}\\{\\cdot\\}$ to denote the probability of an event and $\\mathds{E}[\\cdot]$ to denote the expectation of a random variable. For a collection of functions $\\boldsymbol{g}$, the notations $\\mathds{P}^{\\boldsymbol{g}}\\{\\cdot\\}$ and $\\mathds{E}^{\\boldsymbol{g}}[\\cdot]$ indicate that the probability\/expectation depends on the choice of functions in $\\boldsymbol{g}$. Similarly, for a probability measure $\\pi$, the notation $\\mathds{E}^{\\pi}[\\cdot]$ indicates that the expectation is with respect to the measure $\\pi$. The notation $\\ind{\\VEC x}$ denotes a Dirac measure at the point $\\VEC x$. For a set $\\ALP X$ and its subset $\\SF X$, $1_{\\SF X}:\\ALP X\\rightarrow\\{0,1\\}$ denotes the indicator function on the set $\\SF X$.\n\nLet $\\VEC X, \\VEC Y$ and $\\VEC Z$ be three random variables taking values, respectively, in the spaces $\\ALP X,\\:\\ALP Y$ and $\\ALP Z$. Then, $\\mathds{P}\\{\\SF X|\\VEC y,\\VEC z\\}$ denotes the probability of the event $\\SF X\\subset\\ALP X$ given the realizations $\\VEC y$ and $\\VEC z$ of the random variables $\\VEC Y$ and $\\VEC Z$. Similarly, $\\mathds{E}[\\cdot|\\VEC y]$ denotes the expected value of a real-valued function $(\\cdot)$ given the realization $\\VEC y$. We use $\\mathds{P}\\{d\\VEC x,d\\VEC y|\\VEC z\\}$ to denote the conditional probability measure over the space $\\ALP X\\times\\ALP Y$ given a realization $\\VEC z$ of another random variable $\\VEC Z$.\n\n\n\\subsection{Outline of the Paper}\nThe paper is organized as follows. In Section \\ref{sec:problem}, we formulate the two-player non-zero sum game problem with linear dynamics, linear observation equations, and asymmetric information among the players. 
We make two assumptions on the information structures of the controllers and an assumption on the admissible strategies of the controllers. We also discuss consequences of the assumptions we make on the information structures. In Section \\ref{sec:main}, we state the main result of the paper and develop a backward induction algorithm that computes the common information based Markov perfect equilibrium of the game formulated in Section \\ref{sec:problem}, provided that it exists. In Section \\ref{sec:lqggames}, we specialize the result of Section \\ref{sec:main} to LQG games, and show that under further assumptions on cost functions, a unique common information based Markov perfect equilibrium exists in the class of measurable strategies of the players.\nIn Section \\ref{sec:information}, we show through an example that there may be other Nash equilibria of a game with asymmetric information, and that using our algorithm, we compute only a subclass of all Nash equilibria. We discuss some implications of our assumptions in Section \\ref{sec:discussion}. Finally, we conclude our discussion in Section \\ref{sec:conclude} and identify several directions for future research. Proofs of most of the results in the paper are given in appendices.\n\n\\section{Problem Formulation}\\label{sec:problem}\nLet $\\VEC X_t$ be the state of a linear system which is controlled by two controllers (players)\\footnote{In the paper, we use the term ``controller'' instead of ``player'', because we introduce another set of players in the symmetric information game formulated in the next section.}. At each time step $t$, Controller $i$, $i=1,2$, observes the state through a noisy sensor; this observation is denoted by $\\VEC Y^i_t$. Controller $i$'s action at time step $t$ is denoted by $\\VEC U^i_t$. 
For each Controller $i\\in\\{1,2\\}$ at time $t\\in\\{1,\\ldots,T-1\\}$, the state, action and observation spaces are denoted by $\\ALP X_t$, $\\ALP U^i_t$ and $\\ALP Y^i_t$, respectively, and they are assumed to be finite dimensional Euclidean spaces. The dynamics and observation equations are given as\n\\beq{\\label{eq:lqgdynamics} \\VEC X_{t+1} &=& A_t\\VEC X_{t}+B^1_t\\VEC U^1_t+B^2_t\\VEC U^2_t+\\VEC W_t^0,\\\\\n\\label{eq:lqgobserv} \\VEC Y^i_t &=& H^i_t\\VEC X_t + \\VEC W^i_t, \\mbox{~~}i=1,2,}\nwhere $\\VEC W^i_t$ is a random variable taking values in a finite dimensional Euclidean space denoted by $\\ALP W^i_t$ for all $i\\in\\{0,1,2\\}$ and $t\\in\\{1,\\ldots,T-1\\}$, and $A_t,B^i_t,H^i_t,\\: i\\in\\{1,2\\},\\: t\\in\\{1,\\ldots,T-1\\}$ are matrices of appropriate dimensions. $\\VEC X_1,\\VEC W^{0:2}_{1:T-1}$ are primitive random variables, and they are assumed to be mutually independent and zero-mean Gaussian random vectors.\n\n\\subsection{Information Structures of the Controllers}\nThe information available to each controller at time step $t\\in\\{1,\\ldots,T-1\\}$ is a subset of all information generated in the past, that is, $\\{\\VEC Y^{1:2}_{1:t},\\VEC U^{1:2}_{1:t-1}\\}$. 
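As a quick sanity check of the linear dynamics and observation model above, one can simulate a sample trajectory. All matrices, dimensions, and noise covariances below are illustrative assumptions (not values from the paper), and the controls are left as zero placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
T  = 5                                     # horizon (illustrative)
A  = np.array([[1.0, 0.1], [0.0, 1.0]])    # A_t, taken time-invariant here
B1 = np.array([[0.0], [1.0]])              # B^1_t
B2 = np.array([[0.5], [0.0]])              # B^2_t
H1 = np.array([[1.0, 0.0]])                # H^1_t
H2 = np.array([[0.0, 1.0]])                # H^2_t

x = rng.multivariate_normal(np.zeros(2), np.eye(2))   # X_1: zero-mean Gaussian
states, obs1, obs2 = [x], [], []
for t in range(1, T):
    # Y^i_t = H^i_t X_t + W^i_t, with W^i_t standard Gaussian here
    obs1.append(H1 @ x + rng.normal(size=1))
    obs2.append(H2 @ x + rng.normal(size=1))
    u1, u2 = np.zeros(1), np.zeros(1)      # placeholder controls U^1_t, U^2_t
    # X_{t+1} = A_t X_t + B^1_t U^1_t + B^2_t U^2_t + W^0_t
    x = A @ x + B1 @ u1 + B2 @ u2 + rng.multivariate_normal(np.zeros(2), 0.1 * np.eye(2))
    states.append(x)
```

In the game itself the controls would of course come from the controllers' strategies rather than being fixed to zero; the sketch only exercises the state and observation equations.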
Let $\\ALP E^i_t$ and $\\ALP F^i_t$, respectively, be defined as\n\\beqq{\\ALP E^i_t:=\\{(j,s)\\in\\{1,2\\}\\times\\{1,\\ldots,T\\}:\\text{ Controller } i \\text{ at time } t \\text{ knows } \\VEC Y^j_s\\},\\\\\n\\ALP F^i_t:=\\{(j,s)\\in\\{1,2\\}\\times\\{1,\\ldots,T\\}:\\text{ Controller } i \\text{ at time } t \\text{ knows } \\VEC U^j_s\\}.}\nDefine $\\ALP I^i_t$, $\\ALP C_t$ and $\\ALP P^i_t$ for $i=1,2$ and $t\\in \\{1,\\ldots,T-1\\}$ as\n\\beqq{\\ALP I^i_t &=& \\prod_{(j,s)\\in\\ALP E^i_t} \\ALP Y^j_s\\times \\prod_{(j,s)\\in\\ALP F^i_t} \\ALP U^j_s,\\\\\n\\ALP C_t &=& \\prod_{(j,s)\\in\\ALP E^1_t\\cap\\ALP E^2_t} \\ALP Y^j_s\\times \\prod_{(j,s)\\in\\ALP F^1_t\\cap \\ALP F^2_t} \\ALP U^j_s,\\\\\n\\ALP P^i_t & = & \\prod_{(j,s)\\in\\ALP E^i_t\\setminus(\\ALP E^1_t\\cap\\ALP E^2_t)} \\ALP Y^j_s\\times \\prod_{(j,s)\\in\\ALP F^i_t\\setminus (\\ALP F^1_t\\cap \\ALP F^2_t)} \\ALP U^j_s.}\nNote that $\\ALP I^i_t$, $\\ALP C_t$ and $\\ALP P^i_t$ for $i=1,2$ and $t\\in \\{1,\\ldots,T-1\\}$ are finite dimensional Euclidean spaces. \n\nWe let $\\VEC I^i_t\\in\\ALP I^i_t$ denote the information available to Controller $i$ at time step $t\\in\\{1,\\ldots,T-1\\}$, which is a vector comprised of measurements and control actions that are observed by the controller. The common information of the controllers at a time step is defined as the vector of all random variables that are observed by both controllers at that time step. The private information of a controller at a time step is the vector of random variables that are not observed by the other controller. The common information is denoted by $\\VEC C_t\\in\\ALP C_t$, and the private information of Controller $i$ is denoted by $\\VEC P^i_t\\in\\ALP P^i_t$ at time step $t\\in\\{1,\\ldots,T\\}$.\n\nA dynamic game is said to be one of symmetric information if $\\VEC C_t=\\VEC I^1_t = \\VEC I^2_t$ at all time steps. 
We are interested in games where the controllers may have asymmetry in information, that is, $\\VEC I^1_t \\neq \\VEC I^2_t$. An extreme example of a game of asymmetric information is the case when $\\ALP P^1_t\\neq \\emptyset$ while $\\ALP P^2_t=\\emptyset$ for all time steps $t$. Another example of a game of asymmetric information is when the controllers recall their past information and share their observations after a delay of one time step, that is, $\\ALP C_t = \\ALP Y^{1:2}_{1:t-1}$ and $\\ALP P^i_t =\\ALP Y^i_t$ for all time steps $t$.\n\n\\subsection{Admissible Strategies of Controllers}\n\nAt every time step, Controller $i$ uses a {\\it control law} $g^i_t: \\ALP P^i_t\\times \\ALP C_t\\rightarrow \\ALP U^i_t$ to map its information to its action. We assume that the control law $g^i_t$ is a Borel measurable function, and denote the space of all such control laws by $\\ALP G^i_t$.\n\nA strategy of Controller $i$, which we define as the collection of its control laws over time, is denoted by $\\mathbf g^i = (g^i_1,\\ldots,g^i_{T-1})$ and the space of strategies of Controller $i$ is denoted by $\\ALP G^i_{1:T-1}$. 
The pair of strategies of both controllers, $(\\mathbf g^1,\\mathbf g^2)\\in\\ALP G^1_{1:T-1}\\times \\ALP G^2_{1:T-1}$, is called the strategy profile of the controllers.\n\nThe total cost to Controller $i$, as a function of the strategy profile of the controllers, is\n\\beqq{J^i(\\mathbf g^1, \\mathbf g^2) := \\mathds{E}\\Bigg[c^i_T(\\VEC x_T) + \\sum_{t=1}^{T-1}c^i_t(\\VEC x_t,\\VEC u^1_t,\\VEC u^2_t)\\Bigg],}\nwhere $c^i_t$ is a non-negative continuous function of its arguments for $i\\in\\{1,2\\}$ and $t\\in\\{1,\\ldots,T-1\\}$, and the expectation is taken with respect to the probability measure on the state and action processes induced by the choice of strategy profile $(\\mathbf g^1, \\mathbf g^2)$.\n\n\nA strategy profile $(\\mathbf g^1, \\mathbf g^2)$ is said to be a Nash equilibrium of the game if it satisfies the following two inequalities\n\\beqq{J^1(\\mathbf g^1, \\mathbf g^2) \\leq J^1(\\tilde{\\mathbf g}^1, \\mathbf g^2),\\quad\\text{ and }\\quad J^2(\\mathbf g^1, \\mathbf g^2) \\leq J^2(\\mathbf g^1, \\tilde{\\mathbf g}^2),} \nfor all admissible strategies $\\tilde{\\mathbf g}^1\\in\\ALP G^1_{1:T-1}$ and $\\tilde{\\mathbf g}^2\\in\\ALP G^2_{1:T-1}$. \n\nWe assume that the state evolution equations, observation equations, the noise statistics, cost functions of the controllers and the information structures of the controllers are part of common knowledge. The game thus defined is referred to as game {\\bf G1}.\n\n\n\\subsection{Assumption on Evolution of Information}\nAs noted above, each controller's information consists of common information and private information. We place the following condition on the evolution of common and private information of the controllers in game {\\bf G1}. 
\n \\begin{assumption}\\label{assm:lqginfoevolution}\nThe common and private information evolve over time as follows:\n\\begin{enumerate}\n\\item The common information increases with time, that is, $(\\ALP E^1_t\\cap \\ALP E^2_t) \\subset (\\ALP E^1_{t+1}\\cap \\ALP E^2_{t+1})$ and $(\\ALP F^1_t\\cap \\ALP F^2_t) \\subset (\\ALP F^1_{t+1}\\cap \\ALP F^2_{t+1})$ for all $t\\in\\{1,\\ldots,T-1\\}$. Let $\\ALP D_{1,t+1}:= (\\ALP E^1_{t+1}\\cap \\ALP E^2_{t+1})\\setminus(\\ALP E^1_{t}\\cap \\ALP E^2_{t})$ and $\\ALP D_{2,t+1}:= (\\ALP F^1_{t+1}\\cap \\ALP F^2_{t+1})\\setminus(\\ALP F^1_{t}\\cap \\ALP F^2_{t})$. Define\n\\beq{\\label{eqn:commoninfo}\\ALP Z_{t+1} = \\prod_{(j,s)\\in\\ALP D_{1,t+1}} \\ALP Y^j_s\\times \\prod_{(j,s)\\in\\ALP D_{2,t+1}} \\ALP U^j_s.}\nThen, $\\VEC Z_{t+1}\\in\\ALP Z_{t+1} $ denotes the increment in common information from time $t$ to $t+1$ and we have \n\\begin{equation}\n\\VEC Z_{t+1} = \\zeta_{t+1}([\\VEC P^{1\\transpose}_t, \\VEC P^{2\\transpose}_t, \\VEC U^{1\\transpose}_t, \\VEC U^{2\\transpose}_t, \\VEC Y^{1\\transpose}_{t+1}, \\VEC Y^{2\\transpose}_{t+1}]^\\transpose),\\label{eq:lqgcommoninfo}\n\\end{equation}\nwhere $\\zeta_{t+1}$ is an appropriate projection function.\n\\item The private information evolves according to the equation\n\\begin{equation}\n\\VEC P^i_{t+1} = \\xi^i_{t+1}([\\VEC P^{i\\transpose}_{t}, \\VEC U^{i\\transpose}_t, \\VEC Y^{i\\transpose}_{t+1}]^\\transpose). \\label{eq:lqgprivateinfo}\n\\end{equation}\nwhere $\\xi^i_{t+1}$ is an appropriate projection function.\n\\end{enumerate}\n\\end{assumption}\n\n\nWe now introduce a few notations in order to prove an important result. Let $\\ALP S_t := \\ALP X_t\\times\\ALP P^1_t\\times\\ALP P^2_t$ for $t\\in\\{1,\\ldots,T-1\\}$. Fix the strategy profile of the controllers as $(\\VEC g^1,\\VEC g^2)$. Let $\\Pi_t$ be the conditional measure on the space of state and private informations, $\\ALP S_t$, given the common information $\\VEC C_t$ at time step $t$. 
Thus,\n\\beqq{\\Pi_t(d\\VEC s_t) = \\mathds{P}^{g^{1:2}_{1:t-1}}\\left\\{d\\VEC s_t\\Big|\\VEC C_t\\right\\},}\nwhere the superscript denotes the fact that the probability measure depends on the choice of control laws. The conditional probability measure $\\Pi_t$ is a $\\VEC C_t$-measurable random variable, whose realization, denoted by $\\pi_t$, depends on the realization $\\VEC c_t$ of the common information. The following result is a consequence of Assumption \\ref{assm:lqginfoevolution}; to state it, we first introduce some additional notation.\n\nLet $\\Gamma^i_t$ be a random measurable function from $\\ALP P^i_t$ to $\\ALP U^i_t$ defined as $\\Gamma^i_t(\\cdot):=g^i_t(\\cdot,\\VEC C_t)$ for $i=1,2$ and $t\\in\\{1,\\ldots,T-1\\}$, where the realization of $\\Gamma^i_t$ is denoted by $\\gamma^i_t$ and depends on the realization of the random variable $\\VEC C_t$. Thus, $g^i_t(\\VEC P^i_t,\\VEC c_t) = \\gamma^i_t(\\VEC P^i_t)$. We then have the following result about the evolution of the conditional measure $\\Pi_t$.\n\\begin{lemma}\\label{lem:piupdate}\nFix the strategy profile $(\\VEC g^1,\\VEC g^2)\\in\\ALP G^1_{1:T-1}\\times\\ALP G^2_{1:T-1}$. If Assumption \\ref{assm:lqginfoevolution} holds, then \n\\beqq{\\Pi_{t+1} = F_t(\\Pi_t,\\Gamma^1_t,\\Gamma^2_t,\\VEC Z_{t+1}),}\nwhere $F_t$ is a fixed transformation which does not depend on the choice of control laws.\n\\end{lemma}\n\\begin{proof}\nSee Appendix \\ref{app:piupdate}.\n\\end{proof}\n\n\\subsection{Strategy Independence of Beliefs}\nA crucial assumption, which forms the basis of the analysis in this paper, is the following.\n\\begin{assumption}[Strategy Independence of Beliefs] \\label{assm:lqgseparation}\nAt any time $t$ and for any realization of common information $\\VEC c_t$, the conditional probability measure $\\pi_t$ on the state $\\VEC X_t$ and the private information $(\\VEC P^1_t, \\VEC P^2_t)$ given the common information does not depend on the choice of control laws. 
In particular, if $\\VEC z_{t+1}$ is a realization of the increment in the common information at time step $t+1$, then $\\pi_{t+1}$ evolves according to the equation\n\\beq{\\label{eqn:pit1}\\pi_{t+1} = F_t(\\pi_t, \\VEC z_{t+1}),}\nwhere $F_t$ is a fixed transformation that does not depend on the control laws.\n\\end{assumption}\n\nAssumption \\ref{assm:lqgseparation} allows us to define the conditional belief $\\pi_t$ without specifying the control laws used. Another important consequence of Assumption \\ref{assm:lqgseparation} is that these conditional beliefs on the state and private information admit Gaussian density functions. We make this precise in the following lemma.\n\n\\begin{lemma}\\label{lem:pitgaussian}\nFor any time step $t\\in \\{1,\\ldots,T\\}$ and any realization of common information $\\VEC c_t$, the common information based conditional measure $\\pi_t$ admits a Gaussian density function.\n\\end{lemma}\n\\begin{proof}\nSee Appendix \\ref{app:pitgaussian}.\n\\end{proof}\n\nWe henceforth refer to $\\pi_t$ as the {\\it common information based conditional belief}. In the next subsection, we prove a result on the evolution of the mean and variance of the common information based conditional beliefs.\n\n\\subsection{Evolution of Conditional Beliefs}\nSince the common information based conditional belief $\\pi_t$ admits a Gaussian density at any time step $t$, $\\pi_t$ is completely characterized by its mean ${\\VEC m}_t$ and the covariance matrix $\\Sigma_t$. The conditional covariance of a collection of jointly Gaussian random variables is data independent. 
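This data-independence is easy to check numerically: for jointly Gaussian variables, the conditional covariance of X given Y equals Sxx - Sxy Syy^{-1} Syx, which involves only the joint covariance blocks and not the realized value of Y, while the conditional mean is affine in that value (consistent with the affine evolution of the conditional mean derived below). A scalar sketch with illustrative numbers:

```python
import numpy as np

# Joint covariance of (X, Y); X and Y are scalar here for brevity (illustrative values)
S = np.array([[2.0, 0.8],
              [0.8, 1.0]])
Sxx, Sxy, Syy = S[0, 0], S[0, 1], S[1, 1]

def conditional_cov():
    # Cov(X | Y = y) = Sxx - Sxy Syy^{-1} Syx : no dependence on the realized y
    return Sxx - Sxy * Sxy / Syy

def conditional_mean(y, mx=0.0, my=0.0):
    # E[X | Y = y] = mx + Sxy Syy^{-1} (y - my) : affine in y
    return mx + (Sxy / Syy) * (y - my)
```

The same block formulas apply verbatim with matrix blocks in the vector case, which is why the covariance recursion of the belief can be precomputed offline while only the mean depends on the observed data.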
\nAssumption \\ref{assm:lqgseparation} allows us to derive the following result for game {\\bf G1}.\n\\begin{lemma}\\label{lemma:lqgmean}\nThe evolution of the conditional mean \\[\\VEC M_t := (\\VEC M^0_t, \\VEC M^1_t, \\VEC M^2_t) =(\\mathds{E}[\\VEC X_t| \\VEC C_t],\\mathds{E}[\\VEC P^1_t| \\VEC C_t],\\mathds{E}[\\VEC P^2_t| \\VEC C_t])\\] of the density function of common information based conditional belief is given as\n\\begin{equation}\n{\\VEC M}_{t+1} = F^1_t({\\VEC M}_t,\\VEC Z_{t+1}), \\label{eq:lqgevolution1}\n\\end{equation}\nwhere $F^1_t$ is a fixed affine transformation that does not depend on the strategies of the controllers. The evolution of conditional covariance matrix $\\Sigma_t$ is given as\n\\begin{equation}\n\\Sigma_{t+1} = F^2_t(\\Sigma_t), \\label{eq:lqgevolution2}\n\\end{equation}\nwhere $F^2_t$ is a fixed transformation that does not depend on the strategies of the controllers.\n\\end{lemma}\n\\begin{proof}\nSee Appendix \\ref{app:lqgmean}.\n\\end{proof}\n\n\n\nExamples of several classes of games that satisfy Assumptions \\ref{assm:lqginfoevolution} and \\ref{assm:lqgseparation} are given in \\cite{nayyar2012a}. For example, if each controller acquires the realizations of the observations and the actions of the other controller with zero or one-step delay, then the corresponding game satisfies Assumptions \\ref{assm:lqginfoevolution} and \\ref{assm:lqgseparation}.\n\n\n\\section{Main Results}\\label{sec:main}\nFollowing the approach introduced in \\cite{nayyar2012a}, we now construct a new game \\textbf{G2} with two virtual players, where at every time step $t\\in\\{1,\\ldots,T-1\\}$, each virtual player observes the common information $\\VEC C_t$, but not the private information of the controllers. Since the common information is nested (by Assumption \\ref{assm:lqginfoevolution}), game {\\bf G2} is a game of perfect recall. 
This game is intricately related to game {\\bf G1}, and we exploit the symmetric information structure of game {\\bf G2} to devise a computational scheme to compute a Nash equilibrium of game {\\bf G1}. The steps taken to devise the scheme are as follows:\n\\begin{enumerate}\n\\item We formulate game {\\bf G2} in the next three subsections. Further, we show that the common information based conditional mean $\\VEC M_t$ is a Markov state of game {\\bf G2} at time $t$.\n\\item In Subsection \\ref{sub:relation}, we show that any Nash equilibrium of game {\\bf G2} can be used to obtain a Nash equilibrium of game {\\bf G1}, and vice-versa.\n\\item We focus on Markov perfect equilibria of game {\\bf G2} and provide a backward induction characterization of such equilibria. An equilibrium of game {\\bf G1} obtained from a Markov perfect equilibrium of game {\\bf G2} is called a {\\it common information based Markov perfect equilibrium} of game {\\bf G1}.\n\\item We interpret the backward induction characterization of common information based Markov perfect equilibrium in terms of a sequence of one-stage Bayesian games.\n\\end{enumerate}\n\nWe now turn our attention to formulating game {\\bf G2}. At time step $t$ and for each realization $\\VEC c_t$ of the common information, virtual player $i$ selects a measurable function $\\gamma^i_t:\\ALP P^i_t\\rightarrow\\ALP U^i_t$. The action space of virtual player $i$ at time $t$ is denoted by $\\ALP A^i_t$, and it is defined as\n\\beq{\\label{eqn:alpAit}\\ALP A^i_t :=\\{\\gamma^i_t:\\ALP P^i_t\\rightarrow\\ALP U^i_t \\text{ such that }\\gamma^i_t \\text{ is a Borel measurable map}\\}.}\n\nWe refer to the actions taken by the virtual players as ``prescriptions'' for the following reason: After observing the common information $\\VEC c_t$ at time step $t$, virtual player $i\\in\\{1,2\\}$ computes the equilibrium prescription $\\gamma^i_t\\in\\ALP A^i_t$, and prescribes it to Controller $i$. 
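The prescription mechanism is naturally expressed as a higher-order function: the virtual player's control law maps a realization of the common information to a function of the private information. The linear form and the numbers below are purely illustrative assumptions, not the equilibrium prescriptions of the paper:

```python
import numpy as np

def virtual_player_law(c):
    """A stand-in for chi^i_t: maps a realization c of the common information
    C_t to a prescription gamma^i_t : P^i_t -> U^i_t.  The linear form of the
    returned prescription is hypothetical."""
    K = np.array([[0.5]])          # hypothetical gain on private information
    b = 0.1 * np.sum(c)            # hypothetical offset computed from c
    def prescription(p):
        return K @ p + b           # u^i_t = gamma^i_t(p^i_t)
    return prescription

# Virtual player i computes the prescription from the common information;
# Controller i then evaluates it on its realized private information.
gamma = virtual_player_law(np.array([1.0, 2.0]))   # c_t
u = gamma(np.array([2.0]))                          # p^i_t -> u^i_t
```

This mirrors the composition chi^i_t(c_t)(p^i_t) used throughout the paper: the outer call needs only common information, the inner call only private information.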
The controllers evaluate the prescriptions based on the realizations of their private information to compute their actions at that time step.\n\n\n\n\\subsection{Admissible Strategies of Virtual Players in Game {\\bf G2}}\nA map $\\chi^i_t:\\ALP C_t\\rightarrow \\ALP A^i_t$ denotes the control law of virtual player $i\\in\\{1,2\\}$ at time step $t\\in\\{1,\\ldots,T-1\\}$. The control law $\\chi^i_t$ maps the common information at time $t$ to a prescription, which itself maps the private information of Controller $i$ at time $t$ to the control action of Controller $i$. Thus, a choice of $\\chi^i_t$ induces a map from $\\ALP C_t\\times\\ALP P^i_t$ to $\\ALP U^i_t$, which we denote by $\\chi^i_t(\\cdot)(\\cdot)$. We say $\\chi^i_t$ is admissible if\n\\beqq{\\chi^i_t(\\cdot)(\\cdot) \\quad \\text{ is a Borel measurable function from } \\ALP C_t\\times \\ALP P^i_t \\text{ to } \\ALP U^i_t. }\nThe set of all such admissible control laws is denoted by $\\ALP H^i_t$, $i\\in\\{1,2\\}$ and $t\\in\\{1,\\ldots,T-1\\}$. The collection of control laws at all time steps of virtual player $i$ is called the strategy of that virtual player, and it is denoted by $\\chi^i :=\\{\\chi^i_1,\\ldots,\\chi^i_{T-1}\\}$. The space of all strategies of virtual player $i$, denoted by $\\ALP H^i_{1:T-1}$, is called the strategy space of that virtual player. A strategy tuple $(\\chi^1,\\chi^2)$ is called the strategy profile of the virtual players. \n\n\\begin{definition}\\label{def:var}\nFor $i\\in\\{1,2\\}$ and $t\\in\\{1,\\ldots,T-1\\}$, let $\\varrho^i_t:\\ALP H^i_t\\rightarrow\\ALP G^i_t$ be an operator that takes a function $\\chi^i_t:\\ALP C_t\\rightarrow \\ALP A^i_t$ as its input and returns a measurable function $g^i_t:\\ALP P^i_t\\times \\ALP C_t\\rightarrow\\ALP U^i_t$ as its output, that is $ g^i_t = \\varrho^i_t(\\chi^i_t)$, such that $g^i_t(\\VEC p^i_t,\\VEC c_t) := \\chi^i_t(\\VEC c_t)(\\VEC p^i_t)$ for all $\\VEC c_t\\in\\ALP C_t$ and $\\VEC p^i_t\\in\\ALP P^i_t$. 
For a collection of functions $\\chi^i :=\\{\\chi^i_1,\\ldots,\\chi^i_{T-1}\\}$, let $\\varrho^i(\\chi^i)$ be defined as the set $\\{\\varrho^i_1(\\chi^i_1),\\ldots,\\varrho^i_{T-1}(\\chi^i_{T-1})\\}$.\n\nSimilarly, we let $\\varsigma^i_t:\\ALP G^i_t\\rightarrow\\ALP H^i_t$ be the operator such that $\\varsigma^i_t\\circ \\varrho^i_t = \\text{id}_{\\ALP H^i_t}$ and $\\varrho^i_t\\circ \\varsigma^i_t = \\text{id}_{\\ALP G^i_t}$. Thus, for $g^i_t\\in\\ALP G^i_t$, if $\\chi^i_t=\\varsigma^i_t(g^i_t)$, then $\\chi^i_t(\\VEC c_t)(\\VEC p^i_t) := g^i_t(\\VEC p^i_t,\\VEC c_t)$ for all $\\VEC c_t\\in\\ALP C_t$ and $\\VEC p^i_t\\in\\ALP P^i_t$. Similar to the expression above, for a collection of functions $\\VEC g^i :=\\{g^i_1,\\ldots,g^i_{T-1}\\}$, let $\\varsigma^i(\\VEC g^i)$ be defined as the set $\\{\\varsigma^i_1(g^i_1),\\ldots,\\varsigma^i_{T-1}(g^i_{T-1})\\}$.{\\hfill $\\Box$}\n\\end{definition}\n\n\n\\subsection{Cost Functions for Virtual Players}\nThe cost functions of the virtual players are defined as follows: Fix a time step $t\\in\\{1,\\ldots,T-1\\}$ and a virtual player $i$. Let $\\pi$ denote a normal distribution on the space $\\ALP S_t = \\ALP X_t\\times\\ALP P^1_t\\times\\ALP P^2_t$ with mean $\\VEC m$ and variance $\\Sigma_t$, where $\\Sigma_t$ is given by the result in Lemma \\ref{lemma:lqgmean}, and let $(\\gamma^1,\\gamma^2)$ be a prescription pair chosen by the virtual players. Define the cost function $\\tilde{c}^i_t:\\ALP S_t\\times\\ALP A^1_t\\times\\ALP A^2_t\\rightarrow\\Re_+$ of virtual player $i$ at that time step $t\\in\\{1,\\ldots,T-1\\}$ to be \n\\beqq{\\tilde{c}^i_t(\\VEC m,\\gamma^1,\\gamma^2) = \\int_{\\ALP S_t} c^i_t(\\VEC x_t,\\gamma^1(\\VEC p^1_t),\\gamma^2(\\VEC p^2_t))\\pi(d\\VEC s_t),}\nwhere one can view $\\VEC x_t,\\VEC p^1_t$ and $\\VEC p^2_t$ as appropriate projections of the variable $\\VEC s_t$. We now define the cost function of the virtual players at the final time step. 
Let $\\pi$ be a Gaussian distribution with mean $\\VEC m$ and variance $\\Sigma_T$. The cost function of virtual player $i$ at the final time step is $\\tilde{c}^i_T(\\VEC m) = \\int_{\\ALP X_T} c^i_T(\\VEC x_T)\\pi(d\\VEC x_T)$. The total cost for virtual player $i$ is given by\n\\beqq{\\tilde J^i(\\chi^1,\\chi^2) = \\ex{\\tilde{c}^i_T(\\VEC M_T)+\\sum_{t=1}^{T-1}\\tilde{c}^i_t(\\VEC M_t,\\Gamma^1_t,\\Gamma^2_t)},} \nwhere the expectation is taken with respect to the probability measure induced on the mean of the common information based conditional belief by the choice of strategies $(\\chi^1,\\chi^2)$. The following lemma relates the expected costs of the virtual players in game {\\bf G2} to the expected costs of the controllers in game {\\bf G1} under corresponding strategy profiles.\n\n\\begin{lemma}\\label{lem:vpcost}\nLet $(\\VEC g^1,\\VEC g^2)\\in\\ALP G^1_{1:T-1}\\times\\ALP G^2_{1:T-1}$, and let $(\\chi^1,\\chi^2)$ be defined as $\\chi^i:=\\varsigma^i(\\VEC g^i), i=1,2$. Then, $J^i(\\VEC g^1,\\VEC g^2) = \\tilde J^i(\\chi^1,\\chi^2)$ for $i=1,2$.\n\nConversely, let $(\\chi^1,\\chi^2)\\in\\ALP H^1_{1:T-1}\\times\\ALP H^2_{1:T-1}$, and let $(\\VEC g^1,\\VEC g^2)$ be defined as $\\VEC g^i:=\\varrho^i(\\chi^i), i=1,2$. Then, $\\tilde J^i(\\chi^1,\\chi^2)=J^i(\\VEC g^1,\\VEC g^2)$ for $i=1,2$.\n\\end{lemma}\n\\begin{proof}\nSee Appendix \\ref{app:vpcost}.\n\\end{proof}\n\n\n\\subsection{A Markov State of Game {\\bf G2}}\nRecall from Lemma \\ref{lemma:lqgmean} that given a realization $\\VEC c_t$ of the common information, the common information based conditional belief $\\mathds{P}\\{d\\VEC s_t|\\VEC c_t\\}$ admits a Gaussian density function with mean $\\VEC m_t$ and variance $\\Sigma_t$. 
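As a side illustration (ours, not part of the paper's development), the affine Gaussian-conditioning identity that underlies Lemma \\ref{lemma:lqgmean} can be checked numerically. The helper function below is a hypothetical sketch; the use of the generalized (pseudo) inverse mirrors the paper's convention.

```python
import numpy as np

# Sketch (not from the paper): for a jointly Gaussian pair (X, Y), the
# conditional belief given Y = y is again Gaussian, with the affine mean
#     E[X | Y = y] = m_x + S_xy S_yy^{-1} (y - m_y).
# This is the identity that makes the conditional mean a finite-dimensional
# summary of the conditional belief.

def conditional_mean(m_x, m_y, S_xy, S_yy, y):
    """Mean of X given Y = y for a jointly Gaussian pair (X, Y)."""
    # pinv plays the role of the generalized (pseudo) inverse
    return m_x + S_xy @ np.linalg.pinv(S_yy) @ (y - m_y)

# Scalar check: for a standard bivariate normal with correlation rho,
# the closed form is E[X | Y = y] = rho * y.
rho = 0.8
m = conditional_mean(np.array([0.0]), np.array([0.0]),
                     np.array([[rho]]), np.array([[1.0]]),
                     np.array([2.0]))
print(m[0])  # rho * 2.0 = 1.6
```

Since the identity is exact for jointly Gaussian vectors, only the mean (with a strategy-independent covariance) needs to be tracked as the belief evolves.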
Our next result is that the mean $\\VEC M_t$ is a controlled Markov chain, and it is controlled by the actions taken (that is, the prescriptions chosen) by the virtual players.\n\n\\begin{lemma}\\label{lemma:lqgmarkov}\nThe process $\\{\\VEC M_t\\}_{t\\in\\{1,\\ldots,T\\}}$ is a controlled Markov process with the virtual players' prescriptions as the controlling actions. In particular, conditioned on the realization $\\VEC m_t$ of $\\VEC M_t$ and the prescriptions $(\\gamma^1_t, \\gamma^2_t)$ of the virtual players, the conditional mean $\\VEC M_{t+1}$ at the next time step is independent of the current common information, the past conditional means and past prescriptions. Equivalently, this fact is expressed as\n\\beqq{\\mathds{P}\\{\\VEC M_{t+1}\\in \\SF M_{t+1}|\\VEC c_t, \\VEC m_{1:t},\\gamma^{1:2}_{1:t}\\} = \\mathds{P}\\{\\VEC M_{t+1}\\in \\SF M_{t+1}|\\VEC m_{t},\\gamma^{1:2}_t\\}}\nfor all Borel sets $\\SF M_{t+1}\\subset \\ALP S_{t+1}$.\n\\end{lemma}\n\\begin{proof}\nSee Appendix \\ref{app:lqgmarkov}.\n\\end{proof}\n\nIt should be noted that the update equation of the Markov process $\\VEC M_t$ is induced by the state dynamics, observation equations, and information structure of the controllers of game {\\bf G1}. 
The main differences between the structures of the two games {\\bf G1} and {\\bf G2} are summarized in the table below.\\\\\n\n\\begin{center}\n\\renewcommand{\\arraystretch}{1.5}\n\\begin{tabular}{|l|c|c|}\n\\hline\nAt time step $t\\in\\{1,\\ldots,T-1\\}$ & Game {\\bf G1} & Game {\\bf G2}\\\\\\hline\nState of the game& $\\VEC X_t\\in\\ALP X_t$ & $\\VEC M_t\\in\\ALP S_t$\\\\\\hline\nAction of Player $i$ & $\\VEC U^i_t\\in\\ALP U^i_t$ & $\\gamma^i_t\\in\\ALP A^i_t$\\\\\\hline\nInformation of Player $i$ & $\\VEC I^i_t\\in\\ALP I^i_t$ & $\\VEC C_t\\in\\ALP C_t$\\\\\\hline\nCost function of Player $i$ & $c^i_t$ & $\\tilde c^i_t$\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\vspace{10pt}\n\nSince both virtual players observe the same information (the common information between the controllers) and the common information always increases by Assumption \\ref{assm:lqginfoevolution}, game {\\bf G2} between virtual players is a game of symmetric information with perfect recall. \n\n\\subsection{Relation between Games {\\bf G1} and {\\bf G2}}\\label{sub:relation}\nIn the next theorem, we show that any Nash equilibrium of game {\\bf G1} can be used to compute a Nash equilibrium for game {\\bf G2} and vice versa.\n\\begin{theorem}\\label{thm:lqgequiv}\nLet $(\\chi^{1\\star}, \\chi^{2\\star})$ be a Nash equilibrium strategy profile of game \\textbf{G2}. Then, the strategy profile $(\\mathbf g^{1\\star}, \\mathbf g^{2\\star})$ for game \\textbf{G1}, defined as $\\VEC g^{i\\star} =\\varrho^i(\\chi^{i\\star}), i=1,2$, forms a Nash equilibrium strategy profile of game \\textbf{G1}. 
Conversely, if $(\\mathbf g^{1\\star}, \\mathbf g^{2\\star})$ is a Nash equilibrium strategy profile of game {\\bf G1}, then the strategy profile $(\\chi^{1\\star}, \\chi^{2\\star})$, defined by $\\chi^{i\\star}:=\\varsigma^i(\\VEC g^{i\\star}), i=1,2,$ is a Nash equilibrium strategy profile for game {\\bf G2}.\n\\end{theorem}\n\\begin{proof}\nSee Appendix \\ref{app:lqgequiv}.\n\\end{proof}\n\nIn light of the theorem above, we want to compute a Nash equilibrium of game {\\bf G2}, and then project the solution back to the original game using the operators $\\varrho^1$ and $\\varrho^2$ introduced in Definition \\ref{def:var}.\n\nSince at any time step $t$ both virtual players observe the common information $\\VEC C_t$, the virtual players can compute the mean $\\VEC M_t$ of the common information based conditional belief. Since the mean $\\VEC M_t$ is a Markov state of game {\\bf G2} and both virtual players know its realization, game {\\bf G2} is a game of perfect state information. Also note that in game {\\bf G2}, the cost functions of the virtual players are stagewise-additive. A natural solution concept for computing a Nash equilibrium of a game of perfect information with stagewise-additive cost functions is Markov perfect equilibrium \\cite{fudenberg1991}. We define the Markov perfect equilibrium of {\\bf G2} in the next subsection and prove the main result of the section.\n\n\n\\subsection{Markov Perfect Equilibrium of Game {\\bf G2}}\nFix virtual player $i$'s control laws $\\chi^i_{1:T-1}$ such that the prescription at time step $t$ is only a function of $\\VEC M_t$, say $\\chi^i_t(\\VEC C_t) = \\psi^i_t(\\VEC M_t)$ for some function $\\psi^i_t:\\ALP S_t\\rightarrow\\ALP A^i_t$, such that $\\psi^i_t(\\cdot)(\\cdot)$ is a measurable function from $\\ALP S_t\\times\\ALP P^i_t$ to $\\ALP U^i_t$ at every time step $t\\in\\{1,\\ldots,T-1\\}$. 
Let us use $\\bar{\\ALP H}^i_t$ to denote the set of all such maps $\\psi^i_t$, and note that $\\bar{\\ALP H}^i_t\\subset\\ALP H^i_t$. Then, virtual player $j$'s, $j\\neq i$, (one-person) optimization problem is to minimize its stagewise additive cost functional that depends on $\\VEC M_t$ and on virtual player $i$'s fixed strategy. Thus, virtual player $j$ needs to solve a finite horizon Markov decision problem with state space $\\ALP S_t$ and action space $\\ALP A^{j}_t$ at time step $t$. This is made precise in our next result.\n\n\n\\begin{lemma}\\label{lemma:lqginfostatelemma}\nConsider game {\\bf G2} among virtual players. Assume that virtual player $i$ is using the strategy $\\{\\psi^i_1,\\ldots,\\psi^i_{T-1}\\}\\in\\bar{\\ALP H}^i_{1:T-1}$, that is, virtual player $i$ selects the prescriptions at time step $t$ only as a function of the mean $\\VEC M_t$ of the common information based conditional belief $\\Pi_t$: \n\\[ \\Gamma^i_t = \\psi^i_t(\\VEC M_t),\\qquad t\\in\\{1,\\ldots, T-1\\}.\\]\nThen, for the fixed strategy of virtual player $i$, virtual player $j$'s ($j\\neq i, j\\in\\{1,2\\}$) one-sided optimization problem is a finite horizon Markov decision problem with state $\\VEC M_t$, control action $\\gamma^j_t$, and cost as $\\tilde{c}^j_t(\\VEC M_t,\\gamma^j_t,\\psi^i_t(\\VEC M_t))$ at time step $t\\in\\{1,\\ldots,T-1\\}$ and terminal cost $\\tilde{c}^j_T(\\VEC M_T)$.\n\\end{lemma}\n\\begin{proof}\nSee Appendix \\ref{app:lqginfostatelemma}.\n\\end{proof}\n\nNote that game {\\bf G2} is a dynamic game of perfect information and perfect recall. A Markov strategy of a virtual player is defined as a collection of control laws of that virtual player at all time steps such that the control law at time step $t$ is a measurable map of common information based conditional mean (state of game {\\bf G2}) to its action space at that time step. 
Lemma \\ref{lemma:lqginfostatelemma} states that if one virtual player sticks to a Markov strategy, then the other virtual player's one-sided optimization problem is a finite horizon Markov decision problem. Under certain assumptions on the cost functions of the virtual players\\footnote{See, for example, \\cite[Section 3.3]{lerma1996} for a set of such assumptions.}, there exists a Markov strategy of the other virtual player that achieves the minimum in its Markov decision problem. Thus, there is no incentive for the other virtual player to search for optimal strategies outside the class of Markov strategies. This is an important \nobservation for game {\\bf G2}, because it allows us to define a refinement of Nash equilibrium for this game, called Markov perfect equilibrium \\cite{fudenberg1991}.\n\n\n\n\\begin{definition}\nA strategy profile $(\\psi^{1\\star}_{1:T-1},\\psi^{2\\star}_{1:T-1})\\in \\ALP H^1_{1:T-1}\\times\\ALP H^2_{1:T-1}$ is said to be a Markov perfect equilibrium \\cite{fudenberg1991} of game \\textbf{G2} if (i) the control laws of the virtual players at each time step $t$ are functions of the mean of the common information based conditional belief $\\VEC M_t$, that is, $\\psi^i_t\\in\\bar{\\ALP H}^i_t$, and (ii) for all time steps $t\\in\\{1,\\ldots,T-1\\}$, the strategy profile $(\\psi^{1\\star}_{t:T-1},\\psi^{2\\star}_{t:T-1})$ forms a Nash equilibrium for the sub-game of game {\\bf G2} starting at time step $t$.\n\\end{definition}\n\nIt should be noted that Markov perfect equilibrium is a refinement concept for Nash equilibria of games in which players make perfect state observations. In game {\\bf G2} among virtual players, a strategy profile that is {\\it not} a Markov perfect equilibrium either depends on the common information (and not just on the mean $\\VEC M_t$), or is not a Nash equilibrium of every sub-game in game {\\bf G2}, or both. We reemphasize this point later in Section \\ref{sec:information}. 
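To make the backward induction behind Markov perfect equilibrium concrete, here is a toy sketch (ours; a finite-state, finite-action analogue rather than the paper's Gaussian model with function-valued prescriptions). At each stage and state it searches for a pure-strategy Nash equilibrium of the one-stage game whose costs are the stage costs plus the continuation value functions; all names, dynamics, and cost choices are hypothetical.

```python
import itertools

T = 2                      # decision stages t = 0, 1
STATES = [0, 1]
ACTIONS = [0, 1]

def stage_cost(i, s, a1, a2):
    # hypothetical stage costs; the players' interests are partially aligned
    return (a1 - s) ** 2 + (a2 - s) ** 2 + (0.5 if i == 1 else 0.0) * a1 * a2

def next_state(s, a1, a2):
    return (s + a1 + a2) % 2   # deterministic toy dynamics

V = {T: {s: (0.0, 0.0) for s in STATES}}   # terminal value functions
policy = {}
for t in reversed(range(T)):
    V[t] = {}
    for s in STATES:
        def cost(i, a1, a2):
            # one-stage cost: stage cost plus continuation value
            return stage_cost(i, s, a1, a2) + V[t + 1][next_state(s, a1, a2)][i]
        # brute-force search for a pure-strategy Nash equilibrium of the stage game
        for a1, a2 in itertools.product(ACTIONS, ACTIONS):
            if all(cost(0, a1, a2) <= cost(0, b, a2) for b in ACTIONS) and \
               all(cost(1, a1, a2) <= cost(1, a1, b) for b in ACTIONS):
                policy[(t, s)] = (a1, a2)
                V[t][s] = (cost(0, a1, a2), cost(1, a1, a2))
                break

print(policy)
```

In game {\\bf G2} the state $\\VEC m$ is continuous and the actions are prescriptions, so this enumeration is only a conceptual analogue of the characterization developed next.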
\n\nGiven a Markov perfect equilibrium of \\textbf{G2}, we can construct a corresponding Nash equilibrium of game \\textbf{G1} using Theorem~\\ref{thm:lqgequiv}. We refer to the class of Nash equilibria of \\textbf{G1} that can be constructed from the Markov perfect equilibria of \\textbf{G2} as the \\emph{common information based Markov perfect equilibria} of game \\textbf{G1}.\n\\begin{definition}\nIf $(\\psi^{1\\star}_{1:T-1},\\psi^{2\\star}_{1:T-1})$ is a Markov perfect equilibrium of game \\textbf{G2}, then the strategy profile $(\\VEC g^{1\\star}, \\VEC g^{2\\star})$ of the form $\\VEC g^{i\\star} = \\varrho^i(\\psi^{i\\star}_{1:T-1}), i=1,2,$ is called \\emph{common information based Markov perfect equilibrium} of game \\textbf{G1}.{\\hfill$\\Box$}\n\\end{definition}\n\nA similar concept was introduced for finite games with asymmetric information in our earlier work \\cite{nayyar2012a}.\n\n\\subsection{Computation of Markov Perfect Equilibrium of Game {\\bf G2}}\\label{sub:MPEG2}\nIn this subsection, we characterize Markov perfect equilibrium of game \\textbf{G2} using value functions that depend on the mean of the common information based conditional belief. \n\n\n\\begin{theorem}\\label{thm:mpevirtual}\nConsider a strategy pair $(\\psi^{1\\star}_{1:T-1},\\psi^{2\\star}_{1:T-1})\\in\\bar{\\ALP H}^1_{1:T-1}\\times\\bar{\\ALP H}^2_{1:T-1}$. 
Define functions $V^i_t:\\ALP X_t\\times\\ALP P^1_{t}\\times\\ALP P^2_{t}\\rightarrow\\Re$, called expected value functions of Controller $i$ at time $t$, as follows:\n\\begin{enumerate}\n\\item For each possible realization $\\VEC m = (\\VEC m^0,\\VEC m^1,\\VEC m^2)$ of $\\VEC M_{T}$, define the value functions:\n\\beq{\\label{eqn:viT} V^i_{T}(\\VEC m) := \\tilde c^i_T(\\VEC m) = \\mathds{E}[c^i_T(\\VEC X_T)|\\VEC M_T = \\VEC m]\\qquad i\\in\\{1,2\\}.}\n\\item For $t=T-1,\\ldots,1$, and for each possible realization $\\VEC m$ of $\\VEC M_{t}$, define the value functions:\n\\beq{V^i_{t}(\\VEC m) &:=& \\min_{\\tilde\\gamma^i_t\\in\\ALP A^i_t} \\mathds{E}\\Big[ \\tilde c^i_t(\\VEC M_t,\\gamma^1_t,\\gamma^2_t) +V^i_{t+1}(F^1_t(\\VEC M_t, \\VEC Z_{t+1})) \\nonumber\\\\\n& & \\qquad\\qquad\\Big|\\VEC M_{t}=\\VEC m, \\gamma^i_t =\\tilde\\gamma^i_t , \\gamma^{-i}_t = \\psi^{-i\\star}_{t}(\\VEC m)\\Big]\\qquad i\\in\\{1,2\\},\\qquad\\label{eqn:vit}}\nassuming that the minimum exists in the equation above. Then, a necessary and sufficient condition for $(\\psi^{1\\star}_{1:T-1},\\psi^{2\\star}_{1:T-1})$ to be a Markov perfect equilibrium of \\textbf{G2} is that for every time step $t\\in\\{1,\\ldots,T-1\\}$, $i\\in\\{1,2\\}$ and for every realization $\\VEC m$ of $\\VEC M_t$,\n\\beq{\\label{eqn:psiitm}\\psi^{i\\star}_t(\\VEC m) \\in \\underset{\\tilde\\gamma^i_t\\in \\ALP A^i_t}{\\arg\\min} \\;\\;\\mathds{E}\\Big[ \\tilde c^i_t(\\VEC m,\\tilde\\gamma^i_t, \\psi^{-i\\star}_{t}(\\VEC m)) +V^i_{t+1}(F^1_t(\\VEC m, \\VEC Z_{t+1}))\\Big].}\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\nSee Appendix \\ref{app:mpevirtual}.\n\\end{proof}\n\nTo show that the sub-game admits a Nash equilibrium at time step $t$ requires a fixed-point argument, which means that the reaction curves of the virtual players intersect in the product of their strategy spaces $\\bar{\\ALP H}^1_t\\times\\bar{\\ALP H}^2_t$ \\cite{Basarbook}. 
In the next section, we show that if the cost functions of the players are quadratic functions of their arguments, then under certain conditions on the cost functions, the reaction curves of the virtual players intersect at a unique point. Thus, under those assumptions, a unique common information based Markov perfect equilibrium exists in LQG games with asymmetric information.\n\n\\begin{remark}\\label{rem:multiequilibria}\nAs stated earlier, Markov perfect equilibria form only a subclass of the Nash equilibria of game {\\bf G2}. Game {\\bf G2} (and equivalently, the corresponding game {\\bf G1}) may have several other Nash equilibria besides Markov perfect equilibria. An example of a two-player two-stage game of asymmetric information in which there is a continuum of Nash equilibria is presented in Section \\ref{sec:information}.{\\hfill $\\Box$}\n\\end{remark}\n\n\\subsection{One-stage Bayesian Games}\\label{sub:stagebayesian}\nAt any time step $t$, let $\\VEC m_t$ and $\\VEC x_t$, respectively, be realizations of the common information based conditional mean and the state. Let us rewrite the expected cost-to-go functions in \\eqref{eqn:vit} as\n\\beq{\\label{eqn:hatcit}\\bar{c}^i_t(\\VEC m_t;\\VEC x_t,\\VEC u^1_t,\\VEC u^2_t):=c^i_t(\\VEC x_t,\\VEC u^1_t,\\VEC u^2_t) +\\ex{V^i_{t+1}(F^1_t(\\VEC m_t, \\VEC Z_{t+1}))\\Big| \\VEC x_t,\\VEC u^1_t,\\VEC u^2_t}.}\nThese are the cost-to-go functions for the two controllers in game {\\bf G1} if both controllers stick to the common information based Markov perfect equilibrium for all time steps $s>t$. For a realization $\\VEC m_t$, Controller $i$ chooses a map $\\gamma^i_t:\\ALP P^i_t\\rightarrow\\ALP U^i_t$. We assume that the probability measure on $\\ALP X_t\\times\\ALP P^1_t\\times\\ALP P^2_t$ admits a Gaussian density function with mean $\\VEC m_t$ and variance $\\Sigma_t$. Note that this is precisely the setup of a Bayesian game. 
Therefore, we say that the game between Controllers $1$ and $2$, with cost functions $\\bar{c}^1_t(\\VEC m_t;\\cdot)$ and $\\bar{c}^2_t(\\VEC m_t;\\cdot)$, respectively, is the {\\it one-stage Bayesian game at time step $t$ with mean $\\VEC m_t$}. \n\n\n\\textbf{An algorithm to compute common information based Markov perfect equilibrium:}\n\nWe can now describe a backward induction process to find a common information based Markov perfect equilibrium of game \\textbf{G1} using a sequence of one-stage Bayesian games. We proceed as follows: \\\\\n\\underline{\\textbf{Algorithm 1:}}\n\\begin{enumerate}\n\\item At the terminal time $T-1$, for each realization $\\VEC m$ of the common information based conditional mean at time $T-1$, we define a one-stage Bayesian game $SG_{T-1}(\\VEC m)$ where \n\\begin{enumerate}\n\\item The probability distribution on $(\\VEC X_{T-1}, \\VEC P^1_{T-1}, \\VEC P^2_{T-1})$, denoted by $\\pi$, is a Gaussian distribution with mean $\\VEC m$ and covariance $\\Sigma_{T-1}$.\n\\item Agent\\footnote{Agent $i$ can be thought to be the same as Controller $i$. We use a different name here in order to maintain the distinction between games \\textbf{G1} and $SG_{T-1}(\\VEC m)$.} $i$ observes $\\VEC P^i_{T-1}$ and chooses action $\\VEC U^i_{T-1}$, $i=1,2$. 
\n\\item Agent $i$'s cost is $\\bar{c}^i_{T-1}(\\VEC m;\\VEC X_{T-1},\\VEC U^1_{T-1},\\VEC U^2_{T-1})$, $i=1,2$.\n\\end{enumerate}\nA Bayesian Nash equilibrium\\footnote{See \\cite{fudenberg1991, Osborne, Myerson_gametheory} for a definition of Bayesian Nash equilibrium.} of this game is a pair of strategies $(\\gamma^{1*},\\gamma^{2*})$, where $\\gamma^{i*}:\\ALP P^i_{T-1}\\rightarrow\\ALP U^i_{T-1}$ is a measurable function such that for any realization $\\VEC p^i\\in\\ALP P^i_{T-1}$, $\\gamma^{i*}(\\VEC p^i)$ is a solution of the minimization problem \n\\[ \\min_{\\VEC u^i} \\mathds{E}^{\\pi}[\\bar{c}^i_{T-1}(\\VEC m;\\VEC X_{T-1},\\VEC u^i, \\gamma^{j*}(\\VEC P^{j}_{T-1}))|\\VEC P^i_{T-1} = \\VEC p^i ], \\]\nwhere $j \\neq i$ and the superscript $\\pi$ denotes that the expectation is with respect to the distribution $\\pi$. If a Bayesian Nash equilibrium $(\\gamma^{1*},\\gamma^{2*})$ of $SG_{T-1}(\\VEC m)$ exists, denote the corresponding expected equilibrium costs as $V^i_{T-1}(\\VEC m), i=1,2$, and define $\\psi^i_{T-1}(\\VEC m) := \\gamma^{i*}$, $i=1,2$.\n\n\\item At time $t < T-1$, for each realization $\\VEC m$ of the common information based conditional mean at time $t$, we define the one-stage Bayesian game $SG_t(\\VEC m)$ where\n\\begin{enumerate}\n\\item The probability distribution on $(\\VEC X_t, \\VEC P^1_t, \\VEC P^2_t)$, denoted by $\\pi$, admits a Gaussian density function with mean $\\VEC m$ and covariance $\\Sigma_{t}$.\n\\item Agent $i$ observes $\\VEC P^i_t$ and chooses action $\\VEC U^i_t$, $i=1,2$. \n\\item Agent $i$'s cost is $\\bar{c}^i_t(\\VEC m;\\VEC X_t,\\VEC U^1_t,\\VEC U^2_t)$, $i=1,2$. 
\n\\end{enumerate}\nA Bayesian Nash equilibrium of this game is a pair of strategies $(\\gamma^{1*},\\gamma^{2*})$, where $\\gamma^{i*}:\\ALP P^i_{T-1}\\rightarrow\\ALP U^i_{T-1}$ is a measurable function such that for any realization $\\VEC p^i\\in\\ALP P^i_t$, $\\gamma^{i*}(\\VEC p^i)$ is a solution of the minimization problem \n\\[ \\min_{\\VEC u^i} \\mathds{E}^{\\pi}[\\bar{c}^i_t(\\VEC m;\\VEC X_t,\\VEC u^i, \\gamma^{j}(\\VEC P^{j}_t)) |\\VEC P^i_t = \\VEC p^i ], \\]\nwhere $j \\neq i, i,j=1,2,$ when control actions $\\VEC U^i_t =\\VEC u^i$ and $\\VEC U^j_t = \\gamma^{j}(\\VEC P^{j}_t)$ are used. The expectation is taken with respect to the Gaussian distribution with mean $\\VEC m$ and covariance $\\Sigma_t$. If a Bayesian Nash equilibrium $(\\gamma^{1*},\\gamma^{2*})$ of $SG_t(\\VEC m)$ exists, denote the corresponding expected equilibrium costs as $V^i_t(\\VEC m), i=1,2$ and define $\\psi^i_t(\\VEC m) := \\gamma^{i*}$, $i=1,2$.\n\\end{enumerate}\n\n\n\\begin{theorem}\\label{thm:backward_ind}\nThe strategies $\\boldsymbol \\psi^i = (\\psi^i_1,\\psi^i_2,\\ldots,\\psi^i_{T-1})$, $i=1,2,$ defined by the backward induction process described in Algorithm 1 form a Markov perfect equilibrium of game \\textbf{G2}. Consequently, strategies $\\VEC g^1$ and $ \\VEC g^2$ defined as\n\\[ g^i_t(\\cdot, \\VEC c_t) := \\psi^i_t(\\VEC m_t), \\quad i\\in\\{1,2\\},\\:t\\in\\{1,\\ldots,T-1\\} \\]\nform a common information based Markov perfect equilibrium of game \\textbf{G1}.\n\\end{theorem}\n\\begin{proof}\nTo prove the result, we just need to observe that the strategies defined by the backward induction procedure of Algorithm 1 satisfy the conditions of Theorem \\ref{thm:mpevirtual} and hence form a Markov perfect equilibrium of game \\textbf{G2}.\n\\end{proof}\n\n\nIn the next section, we consider LQG games, in which the cost functions of the controllers at any time step are quadratic in the state and actions of the controllers. 
Under certain sufficient conditions on the cost functions of the controllers, we prove that the one-stage Bayesian game at any time $t\\in\\{1,\\ldots,T-1\\}$ with any mean $\\VEC m_t\\in\\ALP S_t$ admits a unique Bayesian Nash equilibrium. We follow the steps of the algorithm above to prove that every LQG game with cost functions satisfying certain conditions admits a unique common information based Markov perfect equilibrium.\n\n\\section{Game with Quadratic Cost Functions}\\label{sec:lqggames}\nLet us now consider the special class of games where the stagewise cost functions $c^i_T$ and $c^i_t$ are quadratic functions of their arguments:\n\\beqq{c^i_T(\\VEC X_{T}) = \\VEC X_{T}^\\transpose R^i_{11} \\VEC X_{T}, & & c^i_t(\\VEC X_t,\\VEC U^1_t,\\VEC U^2_t) = \\bmat{\\VEC X_t^\\transpose , \\VEC U^{1\\transpose}_t, \\VEC U^{2\\transpose}_t} R^i \\bmat{\\VEC X_t \\\\ \\VEC U^1_t\\\\ \\VEC U^2_t} \\\\\n\\quad\\text{ where } R^i&:=&\\bmat{R^i_{11} & R^i_{12} & R^i_{13}\\\\R_{12}^{i\\transpose} & R^i_{22} & R^i_{23}\\\\R_{13}^{i\\transpose} & R_{23}^{i\\transpose} & R^i_{33}},}\nwith $R^i_{11}\\geq 0$ for $i\\in\\{1,2\\}$, $R^1_{22}>0$, and $R^2_{33}>0$, so that each controller's cost is strictly convex in its own action. We refer to Gaussian games in which the cost functions are of the form above as dynamic LQG games.\n\nBefore we analyze the dynamic LQG game, we first formulate and compute the Nash equilibrium of a static auxiliary (Bayesian) game in the next subsection. We use the result of this auxiliary game to compute the Nash equilibrium strategies of the dynamic game. One of the main results of this section is that any LQG game that satisfies a certain assumption on the cost functions in addition to Assumptions \\ref{assm:lqginfoevolution} and \\ref{assm:lqgseparation} admits a unique common information based Markov perfect equilibrium in the class of all Borel measurable strategy profiles of the controllers. 
We prove this in two steps:\n\\begin{enumerate}\n\\item The first step consists of computing a Bayesian Nash equilibrium of a particular two-player static game with asymmetric information. This is done in Subsection \\ref{sec:basargame}.\n\\item We then exploit the uniqueness of Nash equilibrium, the structure of the Nash equilibrium strategies of the controllers, and the expected equilibrium costs to the controllers to obtain the main result for LQG games in Subsection \\ref{sub:generallqg}. \n\\end{enumerate}\n\n\n\\subsection{An Auxiliary Game, {\\bf AG1}}\\label{sec:basargame}\n\nThe static Bayesian game is described as follows: $\\VSF X, \\VSF Y^1, \\VSF Y^2$ are jointly Gaussian random vectors such that $\\VSF Y^i = H^i \\VSF X$ for some matrix $H^i$ of appropriate dimensions. The mean and covariance of the three-tuple $\\VSF X, \\VSF Y^1, \\VSF Y^2$ are given by\n\\[ \\VSF m = \\bmat{ \\VSF m_{x} \\\\ \\VSF m_{y^1} \\\\\\VSF m_{y^2}} \\quad \\Sigma =\\bmat{\\Sigma_{xx} & \\Sigma_{xy^1} & \\Sigma_{xy^2} \\\\\\Sigma_{y^1x} & \\Sigma_{y^1y^1} & \\Sigma_{y^1y^2} \\\\\\Sigma_{y^2x} & \\Sigma_{y^2y^1} & \\Sigma_{y^2y^2} }, \\]\n\\beq{\\label{eqn:sigmaij}\\text{where}\\qquad \\Sigma_{y^iy^j} = \\Sigma_{y^iy^i}^{\\frac{1}{2}}\\Sigma_{y^jy^j}^{\\frac{1}{2}\\transpose},\\qquad \\Sigma_{y^ix} = \\Sigma_{y^iy^i}^{\\frac{1}{2}}\\Sigma_{xx}^{\\frac{1}{2}\\transpose} \\quad\\text{for } i,j=1,2.}\nSince $\\VSF X, \\VSF Y^1, \\VSF Y^2$ are jointly Gaussian random variables, the conditional expectations $\\mathds{E}[\\VSF X|\\VSF Y^i]$ and $\\mathds{E}[\\VSF Y^{-i}|\\VSF Y^i] $ are affine functions of $\\VSF Y^i$, given by\n\\beq{\\mathds{E}[\\VSF X|\\VSF Y^i] &=& \\VSF m_x + \\Sigma_{xy^i}\\Sigma_{y^iy^i}^{-1}(\\VSF Y^i-\\VSF m_{y^i}),\\\\\n\\mathds{E}[\\VSF Y^{-i}|\\VSF Y^i] &=& \\VSF m_{y^{-i}} + \\Sigma_{y^{-i}y^i}\\Sigma_{y^iy^i}^{-1}(\\VSF Y^i-\\VSF m_{y^i}),}\nwhere $\\Sigma_{y^iy^i}^{-1}$ is the generalized inverse (pseudo inverse) of $\\Sigma_{y^iy^i}$ 
\\cite{catlin1989} for $i=1,2$.\n\nThe cost functions are\n\\beq{c^1(\\VSF X,\\VSF U^1,\\VSF U^2) &=& \\bmat{\\VSF X^\\transpose,\\VSF U^{1\\transpose},\\VSF U^{2\\transpose}}C\\bmat{\\VSF X \\\\\\VSF U^1\\\\\\VSF U^2} + 2\\bmat{d_1, d_2, d_3}\\bmat{\\VSF X \\\\\\VSF U^1\\\\\\VSF U^2}+ r^1 \\label{eqn:j1aux},\\\\\n c^2(\\VSF X,\\VSF U^1,\\VSF U^2) &=& \\bmat{\\VSF X^\\transpose,\\VSF U^{1\\transpose},\\VSF U^{2\\transpose}}E\\bmat{\\VSF X \\\\\\VSF U^1\\\\\\VSF U^2} + 2\\bmat{f_1, f_2, f_3}\\bmat{\\VSF X \\\\\\VSF U^1\\\\\\VSF U^2} + r^2,\\label{eqn:j2aux}\\\\\n\\text{where}\\quad C &=& \\bmat{C_{11} &C_{12} &C_{13}\\\\C_{12}^{\\transpose} &C_{22} &C_{23}\\\\C_{13}^{\\transpose} &C_{23}^{\\transpose} & C_{33}} \\quad\\text{ and }\\quad E = \\bmat{E_{11} &E_{12} &E_{13}\\\\E_{12}^{\\transpose} &E_{22} &E_{23}\\\\E_{13}^{\\transpose} &E_{23}^{\\transpose} & E_{33}},\\nonumber}\nwith $C\\geq 0, E\\geq 0$, $C_{22}>0, E_{33}>0$; here $C_{ij},E_{ij}$ are matrices and $d_i,f_i$ are row vectors of appropriate dimensions, and $r^1,r^2$ are scalar constants. \n\nController $i$ observes $\\VSF Y^i, i=1,2$, and selects $\\VSF U^i$ according to a decision rule $g^i$, that is, $\\VSF U^i = g^i(\\VSF Y^i)$, where $g^i$ is a measurable function of $\\VSF Y^i$ satisfying $\\mathds{E}[g^{i\\transpose}(\\VSF Y^{i})g^{i}(\\VSF Y^{i})] < \\infty$. Let the space of all such measurable functions $g^i$ be denoted by $\\ALP A\\ALP G^i$. This game is referred to as game {\\bf AG1}$(\\VSF m,\\Sigma,c^1,c^2)$. We make the following assumption on the cost functions of the controllers in {\\bf AG1}$(\\VSF m,\\Sigma,c^1,c^2)$.\n\n\\begin{assumption}\\label{assm:eigcost}\nFor any square matrix $A$, let $\\bar{\\lambda}(A)$ denote the positive square root of the maximum eigenvalue of $A^{\\transpose}A$. For the matrix tuple $(C,E)$, define $K_1 = C_{22}^{-1}C_{23}E_{33}^{-1}E_{23}^{\\transpose}$ and $K_2 = E_{33}^{-1}E_{23}^{\\transpose}C_{22}^{-1}C_{23}$. 
Let $\\ALP K_i$ be the space of all matrices that are similar to $K_i,\\: i=1,2$, that is, $\\tilde K\\in\\ALP K_i$ implies that there exists a square invertible matrix $L$ of appropriate dimensions such that $\\tilde K = LK_iL^{-1}$. There exists an $i_0\\in\\{1,2\\}$ and a matrix $K\\in \\ALP K_{i_0}$ such that $\\bar{\\lambda}(K)<1$.{\\hfill$\\Box$}\n\\end{assumption}\n\nIn the next lemma, which builds on earlier results in \\cite{Basar1975,Basarmulti}, we show that if game {\\bf AG1}$(\\VSF m,\\Sigma,c^1,c^2)$ satisfies Assumption \\ref{assm:eigcost}, then it admits a Nash equilibrium in the space $\\ALP A\\ALP G^1\\times\\ALP A\\ALP G^2$ that is unique and affine in the information of the controllers.\n\n \n\\begin{lemma}\\label{lem:basargame}\nThe following statements hold:\n\\begin{enumerate} \n\\item For the 1-stage game \\textbf{AG1}, a pair of decision rules $g^{1\\star}, g^{2\\star}$ is a Nash equilibrium if and only if it satisfies the following two equations simultaneously:\n\\beqq{g^{1\\star}(\\VSF Y^1) &=& -C_{22}^{-1}d_2^{\\transpose} - C_{22}^{-1}C_{12}^\\transpose\\mathds{E}[\\VSF X|\\VSF Y^1] - C_{22}^{-1}C_{23}\\mathds{E}[g^{2\\star}(\\VSF Y^2)|\\VSF Y^1], \\nonumber \\\\\ng^{2\\star}(\\VSF Y^2) &=& -E_{33}^{-1}f_3^{\\transpose} - E_{33}^{-1}E_{13}^\\transpose\\mathds{E}[\\VSF X|\\VSF Y^2] - E_{33}^{-1}E_{23}^{\\transpose}\\mathds{E}[g^{1\\star}(\\VSF Y^1)|\\VSF Y^2].}\n\n\\item If the matrices $(C,E)$ in the cost functions of game {\\bf AG1}$(\\VSF m,\\Sigma,c^1,c^2)$ satisfy Assumption \\ref{assm:eigcost}, then the game has a unique Nash equilibrium in the class of all Borel measurable strategies $\\ALP A\\ALP G^1\\times\\ALP A\\ALP G^2$, given as\n\\beq{\\label{eqn:gistaraux} g^{i\\star}(\\VSF Y^i) = T^i(\\VSF Y^i-\\VSF m_{y^i}) + b^i,}\nwhere $b^1,b^2$ are \nsolutions of the following pair of equations\n\\beqq{b^1 &=& -C_{22}^{-1}[d_2^{\\transpose}+C_{12}^\\transpose\\VSF m_x+C_{23}b^2], \\\\\nb^2 &=& 
-E_{33}^{-1}[f_3^{\\transpose}+E_{13}^\\transpose\\VSF m_x +E_{23}^{\\transpose}b^1 ],}\nand are of the form\n\\beq{\\label{eqn:b1b2}b^1 &=& l^1+L^1 \\VSF m_x,\\qquad b^2 = l^2+L^2\\VSF m_x,}\nand $T^1, T^2$ are solutions of the following pair of equations\n\\beq{\\label{eqn:t1}T^1 &=& -C_{22}^{-1}[C_{12}^\\transpose\\Sigma_{xy^1}\\Sigma_{y^1y^1}^{-1}+ C_{23}T^2\\Sigma_{y^2y^1}\\Sigma_{y^1y^1}^{-1}], \\\\\n\\label{eqn:t2}T^2 &=& -E_{33}^{-1}[E_{13}^\\transpose\\Sigma_{xy^2}\\Sigma_{y^2y^2}^{-1} +E_{23}^{\\transpose}T^1\\Sigma_{y^1y^2} \\Sigma_{y^2y^2}^{-1}].}\nHere, $l^i $ and $L^i$ are independent of $\\VSF m$ for both $i=1,2$. \n\n\\item The expected costs to the controllers when they play according to the Nash equilibrium $(g^{1\\star},g^{2\\star})$ are\n\\beq{\\label{eqn:auxeqcost}\\mathds{E}[c^i(\\VSF X,g^{1\\star}(\\VSF Y^1),g^{2\\star}(\\VSF Y^2))] = \\VSF m^{\\transpose} \\Phi^i {\\VSF m} + \\Xi^{i}\\VSF m + \\Upsilon^i,}\nwhere $\\Phi^i$, $\\Xi^i$ and $\\Upsilon^i$ for $i=1,2$ are defined by\n\\beqq{&\\tilde{L} := \\bmat{0 &0 & 0\\\\ L^1 & -T^1 & 0\\\\ L^2 & 0 & -T^2}, \\quad \\tilde{T} := \\bmat{I &0 & 0\\\\ 0 & T^1 & 0\\\\0& 0 & T^2}, \\quad\n\\tilde{l} := \\bmat{0 \\\\ l^1 \\\\l^2},\\\\\n&\\Phi^1 = (\\tilde{T}+\\tilde{L})^{\\transpose}C(\\tilde{T}+\\tilde{L}), \\qquad \\Xi^1 = 2\\tilde{l}^{\\transpose}C(\\tilde{T}+\\tilde{L})+2[d_1,d_2, d_3](\\tilde{T}+\\tilde{L}),\\\\\n&\\Phi^2 = (\\tilde{T}+\\tilde{L})^{\\transpose}E(\\tilde{T}+\\tilde{L}), \\qquad \\Xi^2 = 2\\tilde{l}^{\\transpose}E(\\tilde{T}+\\tilde{L})+2[f_1,f_2, f_3](\\tilde{T}+\\tilde{L}),\\\\\n&\\Upsilon^1 = r^1 + 2[d_1,d_2, d_3]\\tilde{l} + \\textrm{trace}\\left(\\tilde{T}^{\\transpose}C\\tilde{T}\\Sigma\\right)+\\tilde{l}^{\\transpose}C\\tilde{l}, \\\\\n& \\Upsilon^2 = r^2 + 2[f_1,f_2, f_3]\\tilde{l} + \\textrm{trace}\\left(\\tilde{T}^{\\transpose}E\\tilde{T}\\Sigma\\right)+\\tilde{l}^{\\transpose}E\\tilde{l}.}\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nPart 1 of the lemma is proved by 
differentiating $\\ex{c^i(\\VSF X,\\VSF U^i, g^{-i}(\\VSF Y^{-i}))|\\VSF Y^i}$ with respect to $\\VSF U^i$ and setting it equal to zero. For the proof of Part 2 of the lemma, see Appendix \\ref{sec:auxiliary}. For proving Part 3 of the lemma, notice that if $\\VSF U^i = g^{i\\star}(\\VSF Y^i)$ for $i=1,2$, then\n\\beqq{\\bmat{\\VSF X \\\\\\VSF U^1\\\\\\VSF U^2} = \\tilde{T}\\bmat{\\VSF X \\\\\\VSF Y^1\\\\\\VSF Y^2}+\\tilde{L}\\VSF m+\\tilde{l}.}\nNow, substituting this in the expressions for $c^1$ and $c^2$ and taking the expectations, we get the expected costs of the controllers. This completes the proof of the lemma.\n\\end{proof}\n\n\\begin{remark}\\label{rem:taui}\nThe Nash equilibrium strategy of Player $i$ given in \\eqref{eqn:gistaraux} of the auxiliary game {\\bf AG1} can be rewritten as\n\\beqq{g^{i\\star}(\\VSF Y^i) = \\matrix{ccc}{l^i+L^i\\VSF m_x-T^i\\VSF m_{y^i} &|& T^i } \\matrix{c}{1 \\\\ \\VSF Y^i}.}\nIt should also be noted that the unique Nash equilibrium in the auxiliary game {\\bf AG1} exists in the class of all Borel measurable strategies of the controllers. {\\hfill$\\Box$}\n\\end{remark}\n\nIn the next subsection, we consider a class of dynamic LQG games satisfying certain assumptions, and show that each LQG game in that class admits a unique common information based Markov perfect equilibrium.\n\n\\subsection{Generalization to Dynamic LQG Games}\\label{sub:generallqg}\nIn this subsection, we consider LQG games that satisfy Assumptions \\ref{assm:lqginfoevolution} and \\ref{assm:lqgseparation}. In order to prove the main result of the section, we need the following lemma.\n\n\\begin{lemma}\\label{lem:bayesiangamet}\nConsider an LQG game {\\bf G1} that satisfies Assumptions \\ref{assm:lqginfoevolution} and \\ref{assm:lqgseparation}. Fix a time step $t\\in\\{1,\\ldots,T-2\\}$. 
If the expected value functions $V^i_{t+1}, i=1,2$ of the controllers at time $t+1$ are affine-quadratic functions of $\\VEC m_{t+1}$, then the one-stage Bayesian game at time step $t$ with any mean $\\VEC m_t\\in\\ALP S_t$ is an instance of auxiliary game {\\bf AG1}.\n\\end{lemma}\n\\begin{proof}\nConsider a time step $t\\in\\{1,\\ldots,T-1\\}$ and a realization $\\VEC c_t$ of the common information at time step $t$. Define\n\\beqq{\\VSF X = \\VEC S_t = \\bmat{\\VEC X_t \\\\ \\VEC P^1_t\\\\ \\VEC P^2_t},\\quad \\VSF Y^i = \\VEC P^i_t,\\quad \\VSF U^i = \\VEC U^i_t, \\qquad i=1,2.}\nThe probability measure on the state $\\VSF X$ is taken to be equal to the common information based conditional measure $\\pi_t(d\\VEC s_t) = \\pr{d\\VEC s_t|\\VEC c_t}$, which admits a Gaussian density function with mean $\\VEC m_t$ (dependent on $\\VEC c_t$) and variance $\\Sigma_t$, which are defined in Lemma \\ref{lemma:lqgmean}. The observation $\\VSF Y^i$ of auxiliary Controller $i$ is the private information $\\VEC P^i_t$.\n\nWe now prove that the one-stage Bayesian game defined above is an instance of the auxiliary game {\\bf AG1}. First, note that $\\VSF Y^i = H^i\\VSF X$ for some appropriate matrix $H^i$, $i\\in\\{1,2\\}$, which implies that the assumption on the covariance matrix of the auxiliary game given in \\eqref{eqn:sigmaij} is satisfied by the auxiliary game defined above. We just need to verify that the cost functions of the controllers of the one-stage Bayesian game are of the same form as \\eqref{eqn:j1aux} and \\eqref{eqn:j2aux}. \n\nAt any time step $t\\leq T-1$, let us assume that the expected value function of Controller $i$ is $V^i_{t+1}(\\VEC m_{t+1}) = \\VEC m^{\\transpose}_{t+1}\\Phi^i_{t+1}\\VEC m_{t+1}+\\Xi^i_{t+1}\\VEC m_{t+1}+\\Upsilon^i_{t+1}$ for some appropriate positive definite matrix $\\Phi^i_{t+1}$, matrix $\\Xi^i_{t+1}$ and a non-negative real number $\\Upsilon^i_{t+1}$. 
Recall from Lemma \\ref{lemma:lqgmean} that $\\VEC M_{t+1}:= F^1_t(\\VEC m_t,\\VEC Z_{t+1})$, where $F^1_t$ is an affine function of $\\VEC m_t$ and $\\VEC Z_{t+1}$. Thus, the cost-to-go for the Controller $i\\in\\{1,2\\}$ in the one-stage Bayesian game at time step $t< T$ with mean $\\VEC m_t$ is of the form\n\\beq{\\label{eqn:citlqg}\\check{\\check{c}}^i_{t}(\\VEC m_t;\\VEC S_t,\\VEC U^1_t,\\VEC U^2_t,\\VEC Z_{t+1}) &:=& c^i_t(\\VEC X_t,\\VEC U^1_t,\\VEC U^2_t)+(F^{1}_t(\\VEC m_t,\\VEC Z_{t+1}))^\\transpose \\Phi^i_{t+1} F^1_t(\\VEC m_t,\\VEC Z_{t+1})\\nonumber\\\\ & & +\\Xi^i_{t+1} F^{1}_t(\\VEC m_t,\\VEC Z_{t+1})+ \\Upsilon^i_{t+1}.\\qquad}\nNow, recall the definition of $\\VEC Z_{t+1}$ in Assumption \\ref{assm:lqginfoevolution}, and substitute for $\\VEC Y^1_{t+1}$ and $\\VEC Y^2_{t+1}$ in the expression for $\\VEC Z_{t+1}$ in terms of $\\VEC X_t$, $\\VEC U^1_t$, $\\VEC U^2_t$ and noises using \\eqref{eq:lqgdynamics} and \\eqref{eq:lqgobserv}. Thus, $\\VEC Z_{t+1}$ is an affine map of $\\VEC S_t$, $\\VEC U^1_t$, $\\VEC U^2_t$ and noises $\\VEC W^0_t,\\VEC W^1_{t+1}$ and $\\VEC W^2_{t+1}$. Also recall from Lemma \\ref{lemma:lqgmean} that $F^1_t$ is an affine map of its arguments. 
Define $\\bar{c}^i_t, i=1,2$ as\n\\beqq{\\bar{c}^i_{t}(\\VEC m_t;\\VEC S_t,\\VEC U^1_t,\\VEC U^2_t) = \\ex{\\check{\\check{c}}^i_{t}(\\VEC m_t;\\VEC S_t,\\VEC U^1_t,\\VEC U^2_t,\\VEC Z_{t+1})|\\VEC S_t,\\VEC U^1_t,\\VEC U^2_t}.}\nThus, the expressions for the cost functions $\\bar{c}^i_t$, $i=1,2$, are precisely of the forms\n\\beq{\\label{eqn:barc1t}\\bar{c}^1_{t}(\\VEC m_t;\\VEC S_t,\\VEC U^1_t,\\VEC U^2_t) &=& \\bmat{\\VEC S_t^\\transpose,\\VEC U_t^{1\\transpose},\\VEC U_t^{2\\transpose}}C_t\\bmat{\\VEC S_t \\\\\\VEC U_t^1\\\\\\VEC U_t^2} + 2\\VEC m_t^\\transpose D_t \\bmat{\\VEC S_t \\\\\\VEC U_t^1\\\\\\VEC U_t^2}+ r^1_t \\VEC m_t+\\tilde\\Upsilon^1_{t+1},\\qquad\\\\\n\\label{eqn:barc2t}\\bar{c}^2_{t}(\\VEC m_t;\\VEC S_t,\\VEC U^1_t,\\VEC U^2_t) &=& \\bmat{\\VEC S_t^\\transpose,\\VEC U_t^{1\\transpose},\\VEC U_t^{2\\transpose}}E_t\\bmat{\\VEC S_t \\\\\\VEC U_t^1\\\\\\VEC U_t^2} + 2\\VEC m_t^\\transpose F_t \\bmat{\\VEC S_t \\\\\\VEC U_t^1\\\\\\VEC U_t^2} + r^2_t \\VEC m_t +\\tilde\\Upsilon^2_{t+1},\\qquad}\nwhere $C_t,D_t,E_t,F_t, r^1_t,r^2_t,\\tilde{\\Upsilon}^1_{t+1}, \\tilde{\\Upsilon}^2_{t+1}$ are dependent on matrices $R^i$, $\\Phi^i_{t+1}$, $\\Xi^i_{t+1}$ for $i=1,2$, the linear map $F^1_t$, the variances of noises $\\VEC W^0_t,\\VEC W^1_{t+1}$ and $\\VEC W^2_{t+1}$, and the projection function $\\zeta_{t+1}$, where $\\zeta_{t+1}$ is defined in \\eqref{eq:lqgcommoninfo}. The cost functions of the controllers given above are of the same form as considered in \\eqref{eqn:j1aux} and \\eqref{eqn:j2aux}. Thus, the one-stage Bayesian game at time $t$ with mean $\\VEC m_t$ is the same as auxiliary game {\\bf AG1}$(\\VEC m_t,\\Sigma_t,\\bar{c}^1_t(\\VEC m_t;\\cdot),\\bar{c}^2_t(\\VEC m_t;\\cdot))$. 
This completes the proof of the lemma.\n\\end{proof}\n\n\\begin{definition}\\label{def:agtg1}\nFor any LQG game {\\bf G1}, the corresponding one-stage Bayesian game at time step $t$ with mean $\\VEC m_t$ and cost functions of the controllers given by \\eqref{eqn:barc1t} and \\eqref{eqn:barc2t} is referred to as ${\\bf AG}_t({\\bf G1};\\VEC m_t)$. {\\hfill$\\Box$}\n\\end{definition}\n\nLemma \\ref{lem:bayesiangamet} implies that the one-stage Bayesian game of the LQG game {\\bf G1} at time $t$ with mean $\\VEC m_t$ is an instance of an auxiliary game {\\bf AG1}$(\\VEC m_t,\\Sigma_t,\\bar{c}^1_{t}(\\VEC m_t;\\cdot),\\bar{c}^2_{t}(\\VEC m_t;\\cdot))$, where $\\bar{c}^i_{t}$ is defined in \\eqref{eqn:citlqg}. If the matrix tuple $(C_t,E_t)$, as defined in \\eqref{eqn:barc1t} and \\eqref{eqn:barc2t}, satisfies Assumption \\ref{assm:eigcost}, then for any realization $\\VEC m_t$, Lemma \\ref{lem:basargame} implies that there exists a unique Nash equilibrium of the one-stage Bayesian game at time step $t$, which is affine in the private information of the controllers. Furthermore, the expected equilibrium costs are affine-quadratic in the mean $\\VEC m_t$. This crucial observation about LQG games leads us to the next theorem, which is also the main result of this section. First, we need the following assumption on the cost functions of the one-stage Bayesian games of game {\\bf G1}. \n\n\\begin{assumption}\\label{assm:lqgcost} \nAt every time step $t\\in\\{1,\\ldots,T-1\\}$ of game {\\bf G1}, the matrix tuple $(C_t,E_t)$, as defined in \\eqref{eqn:barc1t} and \\eqref{eqn:barc2t} and obtained using the procedure outlined in Algorithm 1 in Subsection \\ref{sub:stagebayesian}, satisfies Assumption \\ref{assm:eigcost}. 
 {\\hfill$\\Box$}\n\\end{assumption}\n\nThe main result of this section is now captured by the following theorem.\n\n\\begin{theorem}\\label{thm:lqgg1}\nConsider an LQG game {\\bf G1} that satisfies Assumptions \\ref{assm:lqginfoevolution} and \\ref{assm:lqgseparation}. If Assumption \\ref{assm:lqgcost} also holds for game {\\bf G1}, then it admits a unique common information based Markov perfect equilibrium in the class of {\\it all Borel measurable strategies} of the controllers. Furthermore, the equilibrium strategy of Controller $i\\in\\{1,2\\}$ is affine in its information at all time steps.\n\\end{theorem}\n\\begin{proof}\nWe follow the procedure outlined in Algorithm 1 in Subsection \\ref{sub:stagebayesian}. At time step $T-1$, let $\\VEC m_{T-1}$ be a realization of the common information based conditional mean. Consider the one-stage Bayesian game ${\\bf AG}_{T-1}({\\bf G1};\\VEC m_{T-1})$ at time $T-1$. Since Assumption \\ref{assm:lqgcost} holds, we use the result of Lemma \\ref{lem:basargame} to conclude that a unique pair of Nash equilibrium policies of the controllers exists. Furthermore, the Nash equilibrium policies of the one-stage Bayesian game ${\\bf AG}_{T-1}({\\bf G1};\\VEC m_{T-1})$ are affine in the conditional mean and the private information of the controllers at that time step. Since the conditional mean $\\VEC m_{T-1}$ is affine in the common information of the controllers, we conclude that the Nash equilibrium policies of the controllers in the one-stage Bayesian game ${\\bf AG}_{T-1}({\\bf G1};\\VEC m_{T-1})$ are affine in the information of the controllers. 
\n\nWe continue this process for all possible means $\\VEC m_t\\in\\ALP S_t$ at all time steps $t\\in\\{T-1,T-2,\\ldots,1\\}$ to conclude that there is a unique common information based Markov perfect equilibrium for game {\\bf G1}.\n\\end{proof}\n\n\\begin{remark}\nIt should be noted that one can compute the Bayesian Nash equilibrium of game {\\bf AG1} simply by solving a set of linear equations. In the dynamic LQG game {\\bf G1} satisfying the sufficient conditions of Theorem \\ref{thm:lqgg1}, the controllers can just solve a set of linear equations at successive time steps to obtain the unique common information based Markov perfect equilibrium; thus, computing the equilibrium is inexpensive in the class of LQG games.{\\hfill $\\Box$}\n\\end{remark}\n\n\\subsection{LQG Games not satisfying Assumption \\ref{assm:lqgcost}}\\label{sub:not}\nIn this subsection, we show that even if an LQG game does not satisfy Assumption \\ref{assm:lqgcost}, it may still admit a common information based Markov perfect equilibrium under some mild conditions. However, we cannot claim uniqueness of that equilibrium like we did in Theorem \\ref{thm:lqgg1} in the previous subsection.\n\nConsider the auxiliary game {\\bf AG1} discussed above. We needed Assumption \\ref{assm:eigcost} in two places in the result of Lemma \\ref{lem:basargame} above - (i) to conclude the uniqueness of Nash equilibrium provided that it exists, and (ii) to show the existence of matrices $l^1,l^2, L^1, L^2,T^1$ and $T^2$ as defined in \\eqref{eqn:b1b2}, \\eqref{eqn:t1} and \\eqref{eqn:t2} above. However, if we drop this assumption and instead make a milder assumption, then we can obtain a result that is weaker than what we got above. First, we state the assumption we make to obtain the weaker result. 
\n\n\\begin{assumption}\\label{assm:eigcostb}\nIn game {\\bf AG1}, the either $(I - C_{22}^{-1}C_{23}E_{33}^{-1}E_{23}^{\\transpose})$ is invertible or $(I-E_{33}^{-1}E_{23}^{\\transpose}C_{22}^{-1}C_{23})$ is invertible, and there exists a unique solution to the coupled pair of equations \\eqref{eqn:t1} and \\eqref{eqn:t2}. {\\hfill$\\Box$}\n\\end{assumption}\n\nIf we make this assumption on the auxiliary game {\\bf AG1}, then we can conclude the following about the Nash equilibrium of the game.\n \n\\begin{lemma}\\label{lem:basargameb}\nIf the auxiliary game {\\bf AG1}$(\\VSF m,\\Sigma,c^1,c^2)$ satisfies Assumption \\ref{assm:eigcostb}, then the game admits a Nash equilibrium. The expressions of Nash equilibrium control laws and expected costs are the same as in Lemma \\ref{lem:basargame}. \n\\end{lemma}\n\\begin{proof}\nThe proof is analogous to the proof of Lemma \\ref{lem:basargame}.\n\\end{proof}\n\nThis brings us to the following result for LQG games that may not satisfy Assumption \\ref{assm:lqgcost}.\n\\begin{theorem}\\label{thm:lqgg2}\nConsider an LQG game {\\bf G1} that satisfies Assumptions \\ref{assm:lqginfoevolution} and \\ref{assm:lqgseparation}. For all time steps $t\\in\\{1,\\ldots,T-1\\}$ and realizations of the mean $\\VEC m_t\\in\\ALP S_t$, obtain the one-stage Bayesian game ${\\bf AG}_t({\\bf G1};\\VEC m_t)$ by following the steps of Algorithm 1 in Subsection \\ref{sub:stagebayesian}. If ${\\bf AG}_t({\\bf G1};\\VEC m_t)$ satisfies Assumption \\ref{assm:eigcostb} for all $t\\in\\{1,\\ldots,T-1\\}$ and $\\VEC m_t\\in\\ALP S_t$, then game {\\bf G1} admits a common information based Markov perfect equilibrium.\n\\end{theorem}\n\\begin{proof}\nThe proof follows from the same arguments as in the proof of Theorem \\ref{thm:lqgg1}. 
It should be noticed that for a fixed affine strategy of Controller 1, the one-person dynamic optimization problem for Controller 2 can be solved using dynamic programming (existence follows from Assumption \\ref{assm:eigcostb}) to obtain optimal strategies that are linear in its information. \n\\end{proof}\n\nNotice that we do not claim uniqueness of the common information based Markov perfect equilibrium for LQG games that do not satisfy Assumption \\ref{assm:lqgcost}.\n\\subsection{An Illustrative Example}\\label{sub:global}\nIn this section, we consider an example of the class of two-player non-zero sum games described above. There are three states in the game, out of which one is a global state that is observed by both controllers, and two states are local states of the controllers. The state evolution is given by\n\\beqq{\\VEC X^0_{t+1} &=& A \\VEC X^0_t+B^1\\VEC U^1_t+B^2\\VEC U^2_t+\\VEC W^{0}_t,\\\\\n\\VEC X^i_{t+1} &=& A^i \\VEC X^0_t+D^{i}_1\\VEC U^1_t+D^i_2 \\VEC U^2_t+\\VEC W^i_t,\\quad i=1,2,}\nwhere $\\VEC X^0_t$ is the global state and $\\VEC X^i_t,i=1,2$ are the local states for $t=1,\\ldots,T-1$. The noise processes $\\VEC W^i_t$ are assumed to be mutually independent mean-zero Gaussian random variables with variances $\\Lambda^i_t$ for $i=0,1,2$ and $t=1,\\ldots,T-1$. Player $i$'s information $\\VEC I^i_t$ lies in a Euclidean space, that is, $\\ALP I^i_t = \\ALP X^0_{1:t}\\times\\ALP X^i_{1:t}\\times\\ALP U^{1:2}_{1:t-1}$. The cost functions of the players are given by\n\\beqq{c^i_T(\\VEC x^0_T,\\VEC x^i_T) &=& \\VEC x^{0\\transpose}_T Q^i \\VEC x^0_T+\\VEC x^{i\\transpose}_T Q^i \\VEC x^i_T,\\\\\nc^i_t(\\VEC x^0_t,\\VEC x^i_t,\\VEC u^{1:2}_t) &=& \\VEC x^{0\\transpose}_t Q^i \\VEC x^0_t+\\VEC x^{i\\transpose}_t Q^i \\VEC x^i_t+\\VEC u^{1\\transpose}_{t}R^i\\VEC u^1_t+\\VEC u^{2\\transpose}_{t}S^i\\VEC u^2_t, \\quad i=1,2.}\nWe first verify in the following lemma that both assumptions on the information structure are satisfied by this game. 
Toward this end, recall that the space of common information of the players is $\\ALP C_t:=\\ALP X^0_{1:t}\\times\\ALP U^{1:2}_{1:t-1}$ and the space of private information of Player $i$ is $\\ALP P^i_t:=\\ALP X^i_{1:t}$.\n\\begin{lemma}\n\\label{lem:global}\nThe information structure of the controllers in the game defined above satisfies Assumptions \\ref{assm:lqginfoevolution} and \\ref{assm:lqgseparation}.\n\\end{lemma}\n\\begin{proof}\nSince the information structure is nested, Assumption \\ref{assm:lqginfoevolution} is automatically satisfied. Given the common information $\\VEC c_t$, it is easy to verify that the joint distribution of the private information of the players is Gaussian and independent of the past strategies of the players, since $\\VEC W^i_t$ are mutually independent Gaussian random variables for all time steps, and $i=1,2$. This implies that Assumption \\ref{assm:lqgseparation} is also satisfied.\n\\end{proof}\n\nWe are interested in computing a common information based Markov perfect equilibrium of this game. At a time step $t$, since the private states of both controllers are affected by the global state at time step $t-1$ but not by the private states at the previous time steps, the past realizations of private states of Controller $i$, $\\VEC x^i_1,\\ldots,\\VEC x^i_{t-1}$, do not affect the common information based Markov perfect equilibrium. Therefore, for ease of exposition, we make minor changes to the notation used in previous sections. Let us denote the mean of the random variable $\\VEC X^i_{t}$ given the common information $\\VEC C_t$ as $\\VEC M^i_{t}, i=1,2$. Note that the mean is given by $\\VEC M^i_{t} = A^i \\VEC X^0_{t-1}+D^{i}_1\\VEC U^1_{t-1}+D^i_2 \\VEC U^2_{t-1}$. The variance of $\\VEC X^i_t$ given the common information $\\VEC C_t$ is $\\Lambda^i_t$. 
Define $\\VEC M_t := [\\VEC X^{0\\transpose}_t,\\VEC M^{1\\transpose}_t,\\VEC M^{2\\transpose}_t]^\\transpose$ to be the conditional mean of the states \n$[\\VEC X^{0\\transpose}_t,\\VEC X^{1\\transpose}_t,\\VEC X^{2\\transpose}_t]^\\transpose$ given the common information $\\VEC C_t$. Also note that $\\VEC Z_{t+1}=[\\VEC U^{1\\transpose}_t, \\VEC U^{2\\transpose}_t,\\VEC X^{0\\transpose}_{t+1}]^\\transpose$. The evolution of $\\VEC M_t$ is given by\n\\beq{\\VEC M_{t+1} &=& \\matrix{c}{\\VEC X^0_{t+1}\\\\A^1 \\VEC X^0_t+D^{1}_1\\VEC U^1_t+D^1_2 \\VEC U^2_t\\\\ A^2 \\VEC X^0_t+D^{2}_1\\VEC U^1_t+D^2_2 \\VEC U^2_t}\\nonumber\\\\\n\\label{eqn:f1tglobal}& =: & F^1_t(\\VEC X^0_t,\\VEC U^1_t,\\VEC U^2_t,\\VEC X^0_{t+1}).}\nThe conditional covariance matrix is $\\Sigma_{t+1} := \\text{diag}\\{0,\\Lambda^1_{t+1},\\Lambda^2_{t+1}\\}$. Let us assume that this game satisfies Assumption \\ref{assm:lqgcost}. By Theorem \\ref{thm:lqgg1}, we conclude that this game admits a unique common information based Markov perfect equilibrium. Now, we compute the unique common information based Markov perfect equilibrium of this game.\n\n\\begin{enumerate}\n\\item At the terminal time $T$, for each realization $\\VEC m:=[\\VEC x^{0\\transpose}_T, \\VEC m^{1\\transpose},\\VEC m^{2\\transpose}]^\\transpose$ of $\\VEC M_{T}$, the one-stage Bayesian game $SG_{T}(\\VEC m)$ is defined as follows \n\\begin{enumerate}\n\\item The conditional probability distribution on $\\ALP X^0_{T}\\times \\ALP X^1_{T}\\times\\ALP X^2_{T}$ given the common information $\\VEC c_T$ is a Gaussian density with mean $\\VEC m$ and covariance $\\Sigma_{T}$.\n\\item Player $i$ observes $\\VEC X^0_{T}, \\VEC X^i_{T}$, $i=1,2$. No action is chosen.\n\\item Player $i$'s cost is $c^i_T(\\VEC x^0_T,\\VEC x^i_T)$. 
\n\\end{enumerate}\nThe expected costs as functions of beliefs are of the form $V^i_{T}(\\VEC m) =\\VEC x^{0\\transpose}_T Q^i \\VEC x^0_T +\\VEC m^{i\\transpose} Q^i\\VEC m^i +\\textrm{trace}(Q^i\\Lambda^i_T)$.\n\n\\item At time $t 1$, and $\\kappa \\coloneqq \\kappa(M)$, the regret of \\texttt{UCRL2} is bounded by\n\\begin{align*}\n &\\Delta(M, \\texttt{UCRL2}, s, T) \\\\\n &\\quad \\leq \\sqrt{\\frac{5}{8} T \\log\\left(\\frac{8T}{\\delta}\\right)} + \\sqrt{T} + \\kappa \\sqrt{\\frac{5}{2} T \\log\\left(\\frac{8T}{\\delta}\\right)} + \\kappa SA \\log_2\\left( \\frac{8T}{SA} \\right) \\\\\n &\\quad\\quad+ \\Bigg(\\kappa\\sqrt{14 S \\log\\left(\\frac{2AT}{\\delta}\\right)} + \\sqrt{14 \\log\\left(\\frac{2SAT}{\\delta}\\right)} + 2 \\Bigg) (\\sqrt{2} + 1) \\sqrt{SAT} \\\\\n &\\quad \\leq 34 \\max\\{1, \\kappa\\} S \\sqrt{AT \\log\\left(\\frac{T}{\\delta}\\right)}.\n\\end{align*}\n\\end{thm}\n\nAs a corollary, Theorem~\\ref{thm:regret-bound} implies that \\texttt{UCRL2} offers $O\\left( \\frac{\\kappa^2 S^2 A}{\\varepsilon^2} \\log \\frac{\\kappa S A}{\\delta \\varepsilon} \\right)$ sample complexity~\\cite{kakade2003sample}, by inverting the regret bound by demanding that the per-step regret is at most $\\varepsilon$ with probability of at least $1-\\delta$ \\citep[corollary 3]{jaksch2010near}.\nSimilarly, we have an updated logarithmic bound on the expected regret \\citep[theorem 4]{jaksch2010near}, $ \\Exp [ \\Delta(M, \\texttt{UCRL2}, s, T) ] = O(\\frac{\\kappa^2 S^2 A \\log T}{g}) $ where $g$ is the gap in average reward between the best policy and the second best policy.\n\n\\subsection{Informativeness of rewards}\n\\label{sec:informative}\nInformally, it is not hard to appreciate the challenge imposed by delayed feedback inherent in MDPs, as actions with high immediate rewards do not necessarily lead to a high \\emph{optimal} value. 
Are there different but ``equivalent'' reward functions that differ in their \\emph{informativeness} with the more informative ones being easier to reinforcement learn? Suppose we have two MDPs differing only in their rewards, $M_1 = (\\mathcal{S}, \\mathcal{A}, p, r_1)$ and $M_2 = (\\mathcal{S}, \\mathcal{A}, p, r_2)$, then they will have the same diameters $D(M_1) = D(M_2)$ and thus the same diameter-dependent regret bounds from previous works. With MEHC, however, we may get a more meaningful answer.\n\nFirstly, let us make precise a notion of equivalence. We say that $r_1$ and $r_2$ are \\emph{$\\Pi$-equivalent} if for any policy $\\pi : \\mathcal{S} \\rightarrow \\mathcal{A}$, its average rewards are the same under the two reward functions $\\rho(M_1, \\pi, s) = \\rho(M_2, \\pi, s)$. Formally, we will study the MEHC of a class of $\\Pi$-equivalent reward functions related via a potential.\n\n\\subsection{Potential-based reward shaping}\n\\label{sec:pbrs}\nOriginally introduced by \\citet{ng1999policy}, potential-based reward shaping (PBRS) takes a potential $\\varphi : \\mathcal{S} \\rightarrow \\mathbb{R}$ and defines shaped rewards\n\\begin{equation}\\label{eq:pbrs}\n r^\\varphi_t \\coloneqq r_t -\\varphi(s_{t}) + \\varphi(s_{t+1}).\n\\end{equation}\n\nWe can think of the stochastic process $(s_t, a_t, r^\\varphi_t)_{t\\geq 0}$ being generated from an MDP $M^\\varphi = (\\mathcal{S}, \\mathcal{A}, p, r^\\varphi)$ with reward function $r^\\varphi : \\mathcal{S} \\times \\mathcal{A} \\rightarrow \\mathcal{P}([0, r_\\text{max}])$\\footnote{One needs to ensure that $\\varphi$ respects the $[0, r_\\text{max}]$-boundedness of $M$.} whose mean rewards are\n$$ \\bar{r^\\varphi}(s, a) = \\bar{r}(s, a) -\\varphi(s) + \\Exp_{s' \\sim p(\\cdot|s,a)}\\left[ \\varphi(s') \\right]. $$\n\nIt is easy to check that $r^\\varphi$ and $r$ are indeed $\\Pi$-equivalent. 
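Before the telescoping argument that follows, the $\Pi$-equivalence of $r$ and $r^\varphi$ can be checked numerically: for a random finite MDP, a random potential, and a fixed policy, the stationary average reward is unchanged by shaping. A minimal sketch, assuming an ergodic policy chain (all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
S, A = 4, 3

# A random MDP: p[s, a] is the distribution p(.|s, a); r holds mean rewards.
p = rng.dirichlet(np.ones(S), size=(S, A))
r = rng.random((S, A))
phi = rng.standard_normal(S)  # an arbitrary potential

# Shaped mean rewards: r_phi(s,a) = r(s,a) - phi(s) + E_{s'~p(.|s,a)}[phi(s')]
r_phi = r - phi[:, None] + p @ phi

# A fixed deterministic policy and its induced Markov chain.
pi = rng.integers(A, size=S)
P_pi = p[np.arange(S), pi]
r_pi = r[np.arange(S), pi]
r_phi_pi = r_phi[np.arange(S), pi]

# Stationary distribution mu: left eigenvector of P_pi for eigenvalue 1.
w, V = np.linalg.eig(P_pi.T)
mu = np.real(V[:, np.argmax(np.real(w))])
mu = mu / mu.sum()

# Average rewards under the two reward functions; equal because
# mu @ (P_pi @ phi) = mu @ phi for a stationary mu.
rho, rho_phi = mu @ r_pi, mu @ r_phi_pi
```

The potential terms cancel in expectation under the stationary distribution, which is the finite-state analogue of the telescoping-sum argument below.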
For any policy $\\pi$,\n\\begin{align*}\n \\rho(M^\\varphi, \\pi, s) &= \\lim_{T \\rightarrow \\infty} \\frac{1}{T} \\Exp \\left[ R(M^\\varphi, \\pi, s, T) \\right] \\\\\n &= \\lim_{T \\rightarrow \\infty} \\frac{1}{T} \\Exp \\left[ \\sum_{t=0}^{T-1} r^\\varphi_t \\right] \\\\\n &= \\lim_{T \\rightarrow \\infty} \\frac{1}{T} \\Exp \\left[ \\sum_{t=0}^{T-1} r_t - \\varphi(s_t) + \\varphi(s_{t+1}) \\right] \\\\\n %\n &\\text{By telescoping sums of potential terms over consecutive $t$} \\\\\n %\n &= \\lim_{T \\rightarrow \\infty} \\frac{1}{T} \\Exp \\left[ - \\varphi(s_0) + \\varphi(s_T) + \\sum_{t=0}^{T-1} r_t \\right] \\\\\n &= \\lim_{T \\rightarrow \\infty} \\frac{1}{T} \\Big( - \\varphi(s) + \\Exp[\\varphi(s_T)] + \\Exp\\left[ R(M, \\pi, s, T) \\right] \\Big) \\\\\n %\n &\\text{The first two terms vanish in the limit} \\\\\n %\n &= \\lim_{T \\rightarrow \\infty} \\frac{1}{T} \\Exp\\left[ R(M, \\pi, s, T) \\right] \\\\\n &= \\rho(M, \\pi, s). \\numberthis \\label{eq:shaped-avg-reward}\n\\end{align*}\n\nTo get some intuition, it is instructive to consider a toy example (Figure~\\ref{fig:example}).\nSuppose $0 < \\beta < \\alpha$ and $\\epsilon \\in (0, 1)$, then the optimal average reward in this MDP is $1 - \\beta$, and the optimal stationary deterministic policy is $\\pi^*(s_1) \\coloneqq a_2$ and $\\pi^*(s_2) \\coloneqq a_1$, as staying in state $s_2$ yields the highest average reward.\nAs the expected number of steps needed to transition from state $s_1$ to $s_2$ and vice versa are both $\\nicefrac{1}{\\epsilon}$ via action $a_2$, we conclude that $\\kappa(M) = \\max\\{ \\alpha, \\nicefrac{\\alpha}{\\epsilon}, \\nicefrac{\\beta}{\\epsilon}, \\beta \\} = \\nicefrac{\\alpha}{\\epsilon}$.\nFurthermore, notice that taking action $a_2$ in either state transitions to the other state with probability of $\\epsilon$, however the immediate rewards are the same as taking the alternative action $a_1$ to stay in the current state---the immediate rewards are not 
\\emph{informative}.\nWe can differentiate the actions better by shaping with a potential of $\\varphi(s_1) \\coloneqq 0$ and $\\varphi(s_2) \\coloneqq \\nicefrac{(\\alpha - \\beta)}{2 \\epsilon}$.\nThe shaped mean rewards become, at $s_1$,\n$$ \\bar{r^\\varphi}(s_1, a_2) = 1 - \\alpha - \\varphi(s_1) + \\epsilon \\varphi(s_2) + (1 - \\epsilon) \\varphi(s_1) = 1 - \\nicefrac{(\\alpha + \\beta)}{2} > 1 - \\alpha = \\bar{r^\\varphi}(s_1, a_1) $$\nand at $s_2$,\n$$ \\bar{r^\\varphi}(s_2, a_2) = 1 - \\beta - \\varphi(s_2) + \\epsilon \\varphi(s_1) + (1 - \\epsilon) \\varphi(s_2) = 1 - \\nicefrac{(\\alpha + \\beta)}{2} < 1 - \\beta = \\bar{r^\\varphi}(s_2, a_1). $$\nThis encourages taking actions $a_2$ at state $s_1$ and discourages taking actions $a_1$ at state $s_2$ simultaneously. The maximum expected hitting cost becomes smaller\n\\begin{align*}\n \\kappa(M^\\varphi) &= \\max \\left\\{ \\alpha, \\beta, \\varphi(s_1) - \\varphi(s_2) + \\frac{\\alpha}{\\epsilon}, \\, \\varphi(s_2) - \\varphi(s_1) + \\frac{\\beta}{\\epsilon} \\right\\} \\\\\n &= \\max \\left\\{ \\alpha, \\beta, \\frac{\\alpha + \\beta}{2 \\epsilon}, \\, \\frac{\\alpha + \\beta}{2 \\epsilon} \\right\\} \\\\\n &= \\frac{\\alpha + \\beta}{2 \\epsilon} \\\\\n &< \\frac{\\alpha}{\\epsilon} = \\kappa(M).\n\\end{align*}\n\n\\begin{figure}[!tb]\n \\centering\n \\begin{tikzpicture}[->, >=stealth', shorten >=1pt, auto, semithick]\n \\tikzstyle{action} = [draw=black,fill=none]\n \n \\node[state] at (-2, 0) (s1) {$s_1$};\n \\node[state] at (2, 0) (s2) {$s_2$};\n\n \n \\node[action, left=of s1] (s1a1) {$a_1$};\n \\node[action, above right=of s1] (s1a2) {$a_2$};\n\n \\node[action, right=of s2] (s2a1) {$a_1$};\n \\node[action, below left=of s2] (s2a2) {$a_2$};\n\n \n \\path (s1a2) edge [bend left] node {$\\epsilon$} (s2)\n edge [bend left] node {$1-\\epsilon$} (s1);\n \\path (s2a2) edge [bend left] node {$\\epsilon$} (s1)\n edge [bend left] node {$1-\\epsilon$} (s2);\n \\path (s1a1) edge [bend left] node {$1$} 
(s1);\n \\path (s2a1) edge [bend left] node {$1$} (s2);\n\n \n \\path (s1) edge [-, dashed] node [text=red] {$1-\\alpha$} (s1a1);\n \\path (s1) edge [-, dashed] node [text=red] {$1-\\alpha$} (s1a2);\n \\path (s2) edge [-, dashed] node [text=red] {$1-\\beta$} (s2a1);\n \\path (s2) edge [-, dashed] node [text=red] {$1-\\beta$} (s2a2);\n \\end{tikzpicture}\n \\caption{Circular nodes represent states and square nodes represent actions. The solid edges are labeled by the transition probabilities and the dashed edges are labeled by the mean rewards. Furthermore, $r_\\text{max} = 1$. For concreteness, one can consider setting $\\alpha = 0.11, \\beta = 0.1, \\epsilon = 0.05$.}\n \\label{fig:example}\n\\end{figure}\n\nIn this example, MEHC is halved at best when $\\beta$ is made arbitrarily close to zero. Noting that the original MDP $M$ is equivalent to $M^\\varphi$ shaped with potential $-\\varphi$, i.e. $M = (M^\\varphi)^{-\\varphi}$ from (\\ref{eq:pbrs}), we see that MEHC can be almost doubled. It turns out that halving or doubling the MEHC is the most PBRS can do in a large class of MDPs.\n\n\\begin{thm}[MEHC under PBRS]\n\\label{thm:factor-two}\nGiven an MDP $M$ with finite maximum expected hitting cost $\\kappa(M) < \\infty$ and an unsaturated optimal average reward $\\rho^*(M) < r_\\text{max}$, the maximum expected hitting cost of any PBRS-parameterized MDP $M^\\varphi$ is bounded by a multiplicative factor of two:\n$$ \\frac{1}{2} \\kappa(M) \\leq \\kappa(M^\\varphi) \\leq 2 \\kappa(M). $$\n\\end{thm}\n\nThe key observation is that the expected total rewards along a loop remain unchanged by shaping, which originally motivated PBRS \\cite{ng1999policy}. To see this, consider a loop as a concatenation of two paths, one from $s$ to $s'$ and the other from $s'$ to $s$. Under the shaping of a potential $\\varphi$, the expected total rewards of the former are increased by $\\varphi(s') - \\varphi(s)$ and those of the latter are decreased by the same amount. 
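The toy example can also be verified numerically. Treating $r_\text{max} - \bar{r}$ as a per-step cost, the expected hitting cost of a fixed policy solves a small linear system, and shaping transforms the cost vector into $c + \varphi - P\varphi$ (costs move opposite to rewards). A sketch for the always-switch policy (action $a_2$ in both states), which attains the dominant hitting costs in this example; the helper name is ours, not the paper's:

```python
import numpy as np

def hitting_cost(P, c, start, target):
    # Expected total cost accumulated before first reaching `target`,
    # under a fixed policy with transition matrix P and per-step cost c:
    # solve (I - P restricted to non-target states) h = c.
    keep = [s for s in range(P.shape[0]) if s != target]
    h = np.linalg.solve(np.eye(len(keep)) - P[np.ix_(keep, keep)], c[keep])
    return dict(zip(keep, h))[start]

alpha, beta, eps, r_max = 0.11, 0.10, 0.05, 1.0

# Chain induced by playing a_2 (switch with probability eps) in both states.
P = np.array([[1 - eps, eps],
              [eps, 1 - eps]])
cost = np.array([r_max - (1 - alpha), r_max - (1 - beta)])  # [alpha, beta]
kappa = max(hitting_cost(P, cost, 0, 1), hitting_cost(P, cost, 1, 0))

# Shaping with phi(s1)=0, phi(s2)=(alpha-beta)/(2*eps) turns the per-step
# cost into cost + phi - P @ phi.
phi = np.array([0.0, (alpha - beta) / (2 * eps)])
shaped = cost + phi - P @ phi
kappa_phi = max(hitting_cost(P, shaped, 0, 1), hitting_cost(P, shaped, 1, 0))
```

With $\alpha = 0.11$, $\beta = 0.1$, $\epsilon = 0.05$ this gives $\kappa = \alpha/\epsilon$ and $\kappa_\varphi = (\alpha+\beta)/(2\epsilon)$, consistent with the closed-form computation above and with the factor-two bound.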
For more details, see Appendix~\\ref{sec:proof-thm-two}.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Review of van Kampen's microscopic theory}\\label{App1}\n\tWe here review the microscopic derivation of the Gaussian Langevin equation. \n\tLet us consider a system driven by a single stochastic environment as \n\t\\begin{equation}\n\t\t\\frac{d\\hv}{dt} = \\hFA,\n\t\\end{equation}\n\twhere $\\hFA$ is a stochastic force from the single environment. \n\tWe here make a critical assumption that $\\hFA$ is scaled by a positive small number $\\eps$ as \n\t\\begin{equation}\n\t\t\\hFA = \\eps \\hat{\\eta}_A(t;\\hv), \\label{spp:assmp:scale_SDE}\n\t\\end{equation}\n\twhere $\\hat{\\eta}_A$ is a Markovian jump force independent of $\\eps$. \n\tLet us write the jump rate of $\\hat{\\eta}_A(t;\\hv)$ as $\\overline{W}(v;\\mc{Y})$, \n\twhere $\\overline{W}(v;\\mc{Y})$ is the $\\eps$-independent transition probability per unit time on the condition $\\hat v(t)=v$ with the amplitude of the Poisson noise $\\mc{Y}$. \n\tWe assume that the master equation for the velocity $\\hat v$ is given by\n\t\\begin{equation}\n\t\t\\frac{\\partial P(v,t)}{\\partial t} = \\int_{-\\infty}^\\infty dy[P(v-y,t)W_\\eps(v-y;y)-P(v,t)W_\\eps(v,y)],\\label{spp:eq:master_eq}\n\t\\end{equation}\n\twhere $P(v,t)\\equiv P(\\hv(t)=v)$ is the probability distribution function (PDF) for the system's velocity and $W_\\eps(v;y)$ is the transition rate for $\\hv$ on the condition $\\hv(t)=v$ with velocity jump $y$. \n\tConsidering the relation~{(\\ref{spp:assmp:scale_SDE})}, $y$ and $\\mc{Y}$ are connected by $y=\\eps \\mc{Y}$. \n\tThen, the Jacobian relation holds as \n\t\\begin{equation}\n\t\tW_\\eps(v;y)dy = \\overline{W}(v;\\mc{Y})d\\mc{Y} \\Longleftrightarrow W_\\eps(v;y)=\\frac{1}{\\eps}\\overline{W}\\left(v;\\frac{y}{\\eps}\\right). 
\\label{spp:eq:scaling_kernel}\n\t\\end{equation}\n\tThe Kramers-Moyal expansion for the master equation is given by\n\t\\begin{equation}\n\t\t\\frac{\\partial P(v,t)}{\\partial t} = \\sum_{n=1}^\\infty \\frac{(-\\eps)^n}{n!}\\frac{\\partial^n}{\\partial v^n}[\\alpha_n(v)P(v,t)],\\label{spp:eq:Kramers_Moyal}\n\t\\end{equation}\n\twhere we have introduced the Kramers-Moyal coefficient\n\t\\begin{equation}\n\t\t\\alpha_n(v) \\equiv \\int_{-\\infty}^\\infty d\\mc{Y}\\overline{W}(v;\\mc{Y})\\mc{Y}^n. \n\t\\end{equation}\n\tWe here assume the stability condition for the noise around $\\hv=0$: \n\t\\begin{equation}\n\t\t\\alpha_1^{(0)} = 0, \\>\\>\\> \\alpha^{(1)}_1 = -\\bar{\\gamma}_A < 0, \\>\\>\\> \\alpha_2^{(0)}=2\\bar{\\gamma}_A\\bar{T}_A > 0,\n\t\\end{equation}\n\twhere $\\alpha_1(v)$ has a single zero point and we introduce the expansion $\\alpha_n(v)=\\sum_{k=0}^\\infty v^k\\alpha_n^{(k)}\/k!$. \n\tBy introducing the following scaled variables as $\\tau = \\eps t$, $V = v\/\\sqrt{\\eps}$, Eq.~{(\\ref{spp:eq:Kramers_Moyal})} is reduced to\n\t\\begin{align}\n\t\t&\\frac{\\partial P(V,\\tau)}{\\partial \\tau} \n\t\t\t\t\t\t\t\t\t\t\t\t= \\bigg[\\frac{\\partial }{\\partial V}\\bar{\\gamma}_AV - \\sum_{k=2}^\\infty \\frac{\\eps^{(k-1)\/2}}{k!}\\frac{\\partial }{\\partial V}\\alpha_1^{(k)}V^k\\notag\\\\\n\t\t\t\t\t\t\t\t\t\t\t\t&+\\sum_{n=2}^\\infty\\sum_{k=0}^\\infty \\frac{(-1)^n\\eps^{(n+k)\/2-1}}{n!k!}\\frac{\\partial^n}{\\partial V^n}V^k\\alpha_n^{(k)}\\bigg]P(V,\\tau)\\notag\\\\\n\t\t\t\t\t\t\t\t\t\t\t\t &= \\bar{\\gamma}_A\\left[\\frac{\\partial }{\\partial V}V + \\bar{T}_A\\frac{\\partial^2 }{\\partial V^2}\\right]P(V,\\tau) + O(\\eps^{1\/2}),\\label{spp:eq:derived_FP}\n\t\\end{align}\n\twhich is equivalent to the Gaussian Langevin equation~{(5)} in the main text.\n\t\n\tWe here note that the scaling~{(\\ref{spp:eq:scaling_kernel})} is essentially equivalent to that introduced by van Kampen~\\cite{vanKampen,vanKampenB,Gardiner}, \n\twhere $\\eps$ corresponds to 
the inverse of the system size $\\Omega_{\\rm sys}$ as $\\Omega_{\\rm sys}\\equiv 1\/\\eps$. \n\tTo see this point, let us transform the velocity variables as\n\t\\begin{equation}\n\t\ta' \\equiv \\frac{v}{\\eps}=\\Omega_{\\rm sys} v, \\>\\>\\> \\Delta a \\equiv \\frac{y}{\\eps} = \\Omega_{\\rm sys} y.\n\t\\end{equation}\n\tThen, the scaling assumption~{(\\ref{spp:eq:scaling_kernel})} is equivalent to \n\t\\begin{equation}\n\t\tW(a';\\Delta a) = \\Omega_{\\rm sys} \\overline{W}\\left(\\frac{a'}{\\Omega_{\\rm sys}}; \\Delta a\\right),\\label{spp:eq:scaling_kernel_Omega}\n\t\\end{equation}\n\twhere we have introduced the transition rate for the transformed variable $a'$ as $W(a';\\Delta a)\\equiv W_\\eps(v;y)$. \n\tThe scaling~{(\\ref{spp:eq:scaling_kernel_Omega})} is exactly the one introduced on page 277 of Ref.~\\cite{Gardiner}. \n\t\n\t\\subsection*{Note on the invalidity of the Langevin description for rare trajectories}\n\tWe here note that the Langevin equation is an effective description only of typical trajectories; it is not designed to capture rare ones. \n\tIn the above discussion, we have implicitly assumed that the scaled velocity $V$ is not much larger than the typical velocity $V^*\\equiv \\sqrt{\\overline{T}_A}$ (i.e., $|V|\\lesssim V^*$). \n\tIn fact, when $V$ is much larger than $V^*$, e.g., $V\/V^* = O(1\/\\eps)$, all of the terms on the right-hand side of Eq.~{(\\ref{spp:eq:derived_FP})} become relevant, which implies that the system size expansion is invalid for $|V|\\gg V^*$ \n\t(i.e., the tail of the distribution is in general not well described by the Langevin equation). \n\tThis is because the system size expansion is not a uniform asymptotic expansion in terms of the velocity. \n\tFortunately, however, the probability of such rare trajectories is extremely small and is irrelevant to averages of ordinary physical quantities. \n\tIn this sense, the Langevin description is effectively valid. 
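The reduction to the Gaussian Langevin equation derived above can be illustrated numerically. The following sketch is not part of the original analysis; it assumes a concrete kernel in which jumps $\mc{Y}$ are drawn from a normal distribution with mean $-cv$ and variance $\sigma^2$ at a constant collision rate, for which $\bar{\gamma}_A\propto c$ and $\bar{T}_A=\sigma^2\/(2c)$, so that the scaled velocity $V=v\/\sqrt{\eps}$ should be approximately Gaussian with variance $\bar{T}_A$:

```python
import numpy as np

# Illustrative kernel (an assumption, not from the text): at a constant
# collision rate, the velocity jumps by y = eps*Y with Y ~ Normal(-c*v, sigma^2).
# Then alpha_1(v) = -c*v (up to the rate) and alpha_2(0) = sigma^2, i.e.
# T_A = sigma^2/(2*c).  Since the total rate is constant, the stationary law
# of the event-indexed chain equals that of the continuous-time process.
rng = np.random.default_rng(0)
c, sigma, eps = 1.0, 1.0, 1e-2
T_A = sigma**2 / (2.0 * c)

n_events = 400_000
v_samples = np.empty(n_events)
v = 0.0
for k in range(n_events):
    v += eps * rng.normal(-c * v, sigma)   # one collision event
    v_samples[k] = v

V = v_samples[n_events // 10:] / np.sqrt(eps)   # discard transient, rescale
print(np.var(V))   # close to T_A = 0.5 for small eps
```

For larger $\eps$ the measured variance deviates from $\bar{T}_A$, consistent with the higher-order terms in the expansion above.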
\n\t\n\t\\subsection*{Note on the validity of van Kampen's theory for systems without microscopic reversibility}\n\tWe note that the system size expansion is valid even for genuine nonequilibrium systems without time-reversal symmetry, because it relies only on the central limit theorem. \n\tIndeed, the effectiveness of the Gaussian Langevin description is discussed for some granular systems in Refs.~\\cite{Brey,Brilliantov}, \n\twhere the starting points of the models are master equations without time-reversal symmetry. \n\tWe also note that these results do not contradict the time-reversal symmetry of the Langevin model, because the Langevin model is just an effective description for typical trajectories. \n\tEven if the system is well described by the Langevin equation for typical trajectories, time-reversal asymmetry can in general be observed for rare trajectories. \n\t\n\t\\subsection*{Note on the state-dependence of noise}\n\tWe here note that the fluctuation described by the master equation~{(\\ref{spp:eq:master_eq})} is state-dependent noise, which is not simple white noise. \n\tIn fact, the transition rate $W_\\eps(v;y)$ for the velocity jump $y$ depends on $v$, \n\twhich implies a strong correlation between the system and the environment. \n\tWe also note that state-dependent noise cannot in general be written as a single multiplicative noise, \n\tbecause, for a single multiplicative noise, the time series of the Poisson flights is independent of the state of the system. \n\t\n\t\t\n\\section{Relation to the independent kick model}\\label{App2}\n\tThe non-Gaussian Langevin equation reproduces the independent kick model in the strong friction limit. \n\tThe independent kick model was originally introduced to explain the behavior of the granular motor in the presence of the dry friction~{\\cite{Talbot,Gnoli1,Gnoli2,Gnoli3}}. 
\n\tAccording to the main text, the steady distribution of the non-Gaussian Langevin equation is represented in Fourier space as \n\t\\begin{equation}\n\t\t\\tl{P}_{\\rm SS}(s) = \\exp{\\left[\\int_0^s \\frac{ds'\\Phi(s')}{\\gamma s'}\\right]}, \\label{spp:g_Fdist}\n\t\\end{equation}\n\twhere the cumulant function is given by\n\t\\begin{equation}\n\t\t\\Phi(s)=-\\gamma \\mc{T}s^2+\\int_{-\\infty}^\\infty d\\mc{Y}\\overline{W}(0;\\mc{Y})(e^{i\\mc{Y}s}-1)\n\t\\end{equation}\n\twith the Poisson transition rate $\\overline{W}(0;\\mc{Y})$.\n\tLet us consider the case without the Gaussian part ($\\mc{T}=0$). \n\tIn the strong friction limit $\\gamma \\rightarrow \\infty$, Eq.~{(\\ref{spp:g_Fdist})} is reduced to\n\t\\begin{align}\n\t\t\\tl{P}_{\\rm SS}(s) \t&= 1 + \\int_0^s \\frac{ds'\\Phi(s')}{\\gamma s'} + O(\\gamma^{-2})\\notag\\\\\n\t\t\t\t\t\t\t\t\t\t\t&= 1 + \\int_{-\\infty}^\\infty d\\mc{Y}\\int_0^s ds' \\frac{\\overline{W}(0;\\mc{Y})}{\\gamma s'}(e^{is'\\mc{Y}}-1) + O(\\gamma^{-2}).\\label{spp:eq:independent}\n\t\\end{align}\n\t\n\t\n\tIn the independent kick model~{\\cite{Talbot,Gnoli1,Gnoli2,Gnoli3}}, on the other hand, the system is kicked by rare collisions and quickly relaxes to the rest state between successive kicks. \n\tThis implies the following scenario. The system is typically in the rest state $\\mc{V}=0$. \n\tHowever, an occasional collision at time $t=0$ with a Poisson flight distance $\\mc{Y}$ changes the state from $\\mc{V}(-0)=0$ to $\\mc{V}(+0)=\\mc{Y}$, \n\tand the system freely relaxes as $\\mc{V}(t)=\\mc{Y}e^{-\\gamma t}$. 
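This kick-and-relax picture connects to the strong-friction expansion~{(\ref{spp:eq:independent})} through the substitution $s'=s e^{-\gamma t}$, which maps the time average over one relaxation onto the $s'$-integral, i.e., $\int_0^\infty dt\,(e^{is\mc{Y}e^{-\gamma t}}-1)=\int_0^s \frac{ds'}{\gamma s'}(e^{is'\mc{Y}}-1)$. A quick numerical check of this identity (the parameter values below are arbitrary test choices):

```python
import numpy as np

def trap(f, a, b, n=200_000):
    """Composite trapezoidal rule for a complex-valued integrand."""
    x = np.linspace(a, b, n)
    y = f(x)
    return np.sum((y[1:] + y[:-1]) / 2.0) * (x[1] - x[0])

s, Y, gamma = 1.3, 2.0, 3.0    # arbitrary test values

# time average of e^{i s V(t)} - 1 along one relaxation V(t) = Y e^{-gamma t}
lhs = trap(lambda t: np.exp(1j * s * Y * np.exp(-gamma * t)) - 1.0, 0.0, 40.0)

# the s'-integral appearing in the strong-friction expansion
rhs = trap(lambda sp: (np.exp(1j * sp * Y) - 1.0) / sp, 1e-12, s) / gamma

print(abs(lhs - rhs))   # should be close to zero
```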
\n\tThis scenario leads to \n\t\\begin{equation}\n\t\t\\la h(\\hat{\\mc{V}})\\ra \\simeq \\int_{-\\infty}^\\infty d\\mc{Y}\\overline{W}(0;\\mc{Y})\\int_0^\\infty dt\\, h(\\mc{V}(t)),\\label{spp:eq:IKM_general}\n\t\\end{equation}\n\twhere $h(v)$ is an arbitrary function.\n\tSubstituting $h(\\mc{V})=e^{is\\mc{V}}-1$ into Eq.~{(\\ref{spp:eq:IKM_general})}, we obtain \n\t\\begin{align}\n\t\t\\tl{P}_{\\rm SS}(s) - 1 \t&\\simeq \\int_{-\\infty}^\\infty d\\mc{Y}\\overline{W}(0;\\mc{Y})\\int_0^\\infty dt\\left\\{\\exp{\\left[is\\mc{Y}e^{-\\gamma t}\\right]}-1\\right\\}\\notag\\\\\n\t\t\t\t\t\t\t\t&= \\int_{-\\infty}^\\infty d\\mc{Y}\\overline{W}(0;\\mc{Y})\\int_0^\\infty dt\\sum_{n=1}^\\infty \\frac{1}{n!}(is\\mc{Y})^ne^{-\\gamma nt}\\notag\\\\\n\t\t\t\t\t\t\t\t&= \\int_{-\\infty}^\\infty d\\mc{Y}\\overline{W}(0;\\mc{Y})\\sum_{n=1}^\\infty \\frac{(is\\mc{Y})^n}{n\\gamma n!}\\notag\\\\\n\t\t\t\t\t\t\t\t&= \\int_{-\\infty}^\\infty d\\mc{Y}\\int_0^s ds'\\frac{\\overline{W}(0;\\mc{Y})}{\\gamma s'}(e^{is'\\mc{Y}}-1),\n\t\\end{align}\n\twhich is equivalent to Eq.~{(\\ref{spp:eq:independent})}. \n\tThus, our theory is equivalent to the independent kick model in the strong friction limit.\n\n\\section{Granular motor under the viscous and dry frictions}\\label{App3}\n\t\\subsection{Setup}\n\t\tWe consider a granular motor under viscous friction. \n\t\tThe motor is a cuboid of height $h$, width $w$, and length $l$ immersed in the granular gas, as shown in Fig.~{\\ref{fig:gmotor}(a)}. \n\t\tThe cuboid rotates around the $z$-axis, and the rotational angle $\\hat \\theta$ fluctuates because of collisional impacts by surrounding granular particles. \n\t\tWe assume that Coulombic friction acts during the rotation around the axis. \n\t\tLet us first consider the collision rules (see Fig.~{\\ref{fig:gmotor}(b)}). 
\n\t\tWe assume that the motor and a particle collide at the position $\\vec r$.\n\t\tWe denote the motor's angular velocity and the particle's velocity by $\\omega$ and $\\vec v$, respectively. \n\t\tThe moment of inertia about the $z$-axis and the radius of inertia are respectively given by $I$ and $R_I\\equiv \\sqrt{I\/M}$. \n\t\tThe conservation of angular momentum and the definition of the restitution coefficient $e$ are respectively given by \n\t\t\\begin{equation}\n\t\t\tI\\omega \\vec e_z + m \\vec r\\times \\vec v = I\\omega' \\vec e_z + m \\vec r\\times \\vec v',\\>\\>\\>\\>\n\t\t\t-\\frac{(\\vec V' -\\vec v')\\cdot \\vec n}{(\\vec V-\\vec v)\\cdot \\vec n} = e,\\label{am_dr}\n\t\t\\end{equation}\n\t\twhere $\\omega'$, $\\vec V'$, and $\\vec v'$ are respectively the post-collisional angular velocity of the motor, velocity of the motor, and velocity of the particle, $\\vec n$ is the unit normal vector on the surface, and $\\vec e_z \\equiv (0,0,1)$.\n\t\tWe assume the non-slip condition for the collision: the velocity change of the particle is perpendicular to the surface as\n\t\t\\begin{equation}\n\t\t\t\\vec v' = \\vec v +\\beta \\vec n\\label{nsc}\n\t\t\\end{equation}\n\t\twith an appropriate coefficient $\\beta$. 
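These three conditions (angular-momentum conservation, restitution, and non-slip) fully determine the post-collisional state $(\omega',\vec v')$. Before solving them analytically, one can solve them numerically as a $2\times 2$ linear system in the unknowns $(\Delta\omega,\beta)$ and verify that the solution conserves angular momentum and dissipates kinetic energy for $e<1$; the sketch below uses arbitrary test values:

```python
import numpy as np

# Arbitrary test configuration (not from the text): a planar collision at
# contact point r with outward unit normal n.
M, m, I, e = 1.0, 0.05, 1.0, 0.8
omega = 0.7
r = np.array([0.5, 0.3, 0.0])
n = np.array([1.0, 0.0, 0.0])
v = np.array([-1.2, 0.4, 0.0])
ez = np.array([0.0, 0.0, 1.0])
t = np.cross(ez, n)

V = omega * np.cross(ez, r)              # surface velocity at r before the collision
# Linear system for x = (d_omega, beta), using V'.n = -omega'*(r.t) which
# follows from V' = omega' ez x r:
#   angular momentum:  I*d_omega + m*beta*(r x n)_z = 0
#   restitution:      -(omega+d_omega)*(r.t) - (v.n + beta) = -e*(V - v).n
A = np.array([[I, m * np.cross(r, n)[2]],
              [-np.dot(r, t), -1.0]])
b = np.array([0.0,
              -e * np.dot(V - v, n) + omega * np.dot(r, t) + np.dot(v, n)])
d_omega, beta = np.linalg.solve(A, b)

omega2 = omega + d_omega
v2 = v + beta * n                        # non-slip condition
V2 = omega2 * np.cross(ez, r)

L0 = I * omega * ez + m * np.cross(r, v)    # angular momentum before ...
L1 = I * omega2 * ez + m * np.cross(r, v2)  # ... and after the collision
E0 = 0.5 * I * omega**2 + 0.5 * m * np.dot(v, v)
E1 = 0.5 * I * omega2**2 + 0.5 * m * np.dot(v2, v2)
print(d_omega, beta)
```

The numerically obtained $(\Delta\omega,\beta)$ can then be compared against the closed-form solution given below.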
\n\t\tWe note the following relations: \n\t\t\\begin{equation}\n\t\t\t\\vec V = \\omega \\vec e_z \\times \\vec r,\\>\\>\\> \\vec V' = \\omega' \\vec e_z \\times \\vec r.\\label{r_v_av}\n\t\t\\end{equation}\n\t\tSolving Eqs.~{(\\ref{am_dr})}, {(\\ref{nsc})}, and~{(\\ref{r_v_av})}, we obtain \n\t\t\\begin{equation}\n\t\t\t\\Delta \\omega = (1+e)\\frac{\\Delta \\vec V\\cdot \\vec n}{R_I}\\frac{\\epsilon (\\vec r\\cdot \\vec t\/R_I)}{1+\\epsilon (\\vec r\\cdot \\vec t\/R_I)^2},\\>\\>\\>\n\t\t\t\\beta = \\frac{(1+e)\\Delta \\vec V\\cdot \\vec n}{1+\\epsilon (\\vec r\\cdot \\vec t\/R_I)^2},\\label{col_rule}\n\t\t\\end{equation}\n\t\twhere we introduced $\\Delta \\omega\\equiv \\omega'-\\omega$, $\\vec t\\equiv \\vec e_z \\times \\vec n$, $\\Delta \\vec V\\equiv \\vec V-\\vec v$, and $\\epsilon\\equiv m\/M$. \n\t\tBased on the collision rule~{(\\ref{col_rule})}, we model this setup as the Boltzmann-Lorentz equation~{\\cite{Talbot,Gnoli1,Gnoli2,Gnoli3,Brilliantov}}: \n\t\t\\begin{align}\n\t\t\t&\\frac{\\partial }{\\partial t}P(\\omega,t) = \\gamma\\left[\\frac{\\partial }{\\partial \\omega} \\omega +\\frac{T}{I}\\frac{\\partial^2 }{\\partial \\omega^2}\\right]P(\\omega,t) \\notag\\\\\n\t\t\t&\t+\\int \\!\\!dy\\left[P(\\omega\\!-\\!y,t)W_\\eps(\\omega\\!-\\!y;y)\\!-\\!P(\\omega,t)W_\\eps(\\omega;y)\\right],\\label{eq:Suppl:BLeq}\\\\\n\t\t\t&W_\\eps(\\omega ;y) \\!=\\!\\rho h\\!\\int\\! ds\\! \\int\\! 
d\\vec v f(\\vec v)\\Theta(\\Delta \\vec V\\cdot \\vec n)|\\Delta \\vec V\\cdot\\vec n|\\delta(y-\\Delta \\omega),\n\t\t\\end{align}\n\t\twhere $s$ is the coordinate along the cuboid, $f(\\vec v)$ is the granular distribution function, $\\gamma$ is the coefficient of the viscous friction, \n\t\t$\\vec n(s)$ is the normal unit vector to the surface at $s$, and we have introduced \n\t\t\\begin{align}\n\t\t\t&\\vec V(s) \\equiv \\omega\\vec e_z \\times \\vec r(s), \\>\\> g(s)\\equiv \\frac{\\vec r(s)\\cdot \\vec t(s)}{R_I}, \n\t\t\t\\>\\> \\vec t(s)\\equiv \\vec e_z\\times \\vec n(s), \\notag\\\\\n\t\t\t&\\Delta \\vec V(s) \\equiv \\vec V(s)-\\vec v,\n\t\t\t\\>\\> \\Delta \\omega \\equiv \\frac{\\Delta \\vec V\\cdot \\vec n}{R_I}\\frac{(1+e)\\epsilon g(s)}{1+\\epsilon g^2(s)}.\n\t\t\\end{align}\n\t\tAccording to the Kramers-Moyal expansion, we obtain the differential form of the master equation as\n\t\t\\begin{align}\n\t\t\t&\\frac{\\partial P(\\omega,t)}{\\partial t}= \\gamma\\left[\\frac{\\partial }{\\partial \\omega}\\omega + \\frac{T}{I}\\frac{\\partial^2 }{\\partial \\omega^2}\\right]P(\\omega,t)\\notag\\\\\n\t\t\t\t&+\\sum_{n=1}^\\infty \\frac{(-1)^n}{n!}\\frac{\\partial^n }{\\partial \\omega^n}K_n(\\omega)P(\\omega,t),\n\t\t\\end{align}\n\t\twhere we have introduced the Kramers-Moyal coefficients\n\t\t\\begin{align}\n\t\t\t&K_n(\\omega)\t\t= \\int ds \\int d\\vec v (\\Delta\\omega)^n \\rho h f(\\vec v)\\Theta(\\Delta \\vec V\\cdot \\vec n)|\\Delta \\vec V\\cdot \\vec n| \\notag\\\\\n\t\t\t\t\t\t\t&= \\rho h\\!\\!\\int ds\\! 
\\left[\\frac{\\epsilon (1+e)g(s)}{R_I(1+\\epsilon g^2(s))}\\right]^n\n\t\t\t\t\t\t\t\t\\!\\!\\int \\!\\!d\\vec v f(\\vec v)\\Theta(\\Delta \\vec V\\cdot \\vec n)(\\Delta \\vec V\\cdot \\vec n)^{n+1}.\n\t\t\\end{align}\n\t\n\t\t\n\t\\subsection{Small noise expansion}\n\t\tWe make the following four assumptions: \n\t\t(i) {\\it $\\epsilon$ is a small positive parameter,}\n\t\t(ii) {\\it $\\gamma$ is a small positive number independent of $\\epsilon$,} \n\t\t(iii) {\\it $T$ is scaled as $T=\\epsilon^2\\mc{T}$, where $\\mc{T}$ is independent of $\\epsilon$, } and\n\t\t(iv) {\\it $f(\\vec v)$ is isotropic as $f(\\vec v)=\\phi(|\\vec v|)$.}\n\t\tIntroducing the scaled variable $\\hat \\Omega\\equiv\\hat \\omega\/\\epsilon$, we obtain the scaled master equation as \n\t\t\\begin{align}\n\t\t\t\\frac{\\partial \\mc{P}(\\Omega,t)}{\\partial t}&= \\gamma \\left[\\frac{\\partial }{\\partial \\Omega}\\Omega+\\frac{\\mc{T}}{I}\\frac{\\partial^2 }{\\partial \\Omega^2}\\right]\\mc{P}(\\Omega,t) \\notag\\\\\n\t\t\t\t&+\\sum_{n=1}^\\infty \\frac{(-1)^n}{n!}\\frac{\\partial^n }{\\partial \\Omega^n}\\mathcal{K}_n(\\Omega)\\mc{P}(\\Omega,t), \\label{eq:Suppl:scaled_ME}\n\t\t\\end{align}\n\t\twith the scaled Kramers-Moyal coefficients \n\t\t\\begin{align}\n\t\t\t&\\mathcal{K}_n(\\Omega) = \\rho h\\int ds \\frac{(1+e)^ng^n(s)}{R_I^n(1+\\epsilon g^2(s))^n}\\times \\notag\\\\\n\t\t\t\\int d\\vec v &\\phi(|\\vec v|)\\Theta((\\epsilon \\vec {\\mathcal{V}}(s)-\\vec v)\\cdot \\vec n)[(\\epsilon\\vec {\\mathcal{V}}(s)-\\vec v)\\cdot \\vec n]^{n+1}, \n\t\t\\end{align}\n\t\twhere $\\vec{\\mathcal{V}}(s)=\\Omega\\vec e_z \\times \\vec r(s)$. 
\n\t\tIn the limit $\\epsilon\\rightarrow+0$, Eq.~{(\\ref{eq:Suppl:scaled_ME})} is reduced to \n\t\t\\begin{align}\n\t\t\t\\frac{\\partial \\mc{P}(\\Omega,t)}{\\partial t}&= \\gamma \\left[\\frac{\\partial }{\\partial \\Omega} \\Omega+\\frac{\\mc{T}}{I}\\frac{\\partial^2 }{\\partial \\Omega^2}\\right]\\mc{P}(\\Omega,t) \\notag\\\\\n\t\t\t\t&+\\sum_{n=1}^\\infty \\frac{(-1)^n}{n!}\\frac{\\partial^n }{\\partial \\Omega^n}\\mathcal{K}_n\\mc{P}(\\Omega,t), \n\t\t\\end{align}\n\t\t\\begin{equation}\n\t\t\t\\mathcal{K}_n = \\frac{\\rho h(1+e)^n}{R_I^n}\\int ds g^n(s)\n\t\t\t\t\t\t\t\t\\int d\\vec v \\phi(|\\vec v|)\\Theta(-\\vec v\\cdot \\vec n)(-\\vec v\\cdot \\vec n)^{n+1}. \n\t\t\\end{equation}\n\t\tHere we can calculate the integrals with the aid of\n\t\t\\begin{align}\n\t\t\t&\\int d\\vec v \\phi(|\\vec v|)\\Theta(-\\vec v\\cdot \\vec n)(-\\vec v\\cdot \\vec n)^{n+1}\\notag\\\\\n\t\t\t=& \\int_0^\\infty dvd\\theta d\\psi v^2\\sin{\\psi} \\phi(v)\\Theta(-v\\cos{\\psi})(-v\\cos{\\psi})^{n+1}\\notag\\\\\n\t\t\t=& 2\\pi\\int_0^\\infty dvv^{n+3}\\phi(v)\\int_{\\pi\/2}^{\\pi} (-\\cos{\\psi})^{n+1}\\sin{\\psi}d\\psi\\notag\\\\\n\t\t\t=& \\frac{2\\pi}{n+2}\\int_0^\\infty dvv^{n+3}\\phi(v),\n\t\t\\end{align}\n\t\tand\n\t\t\\begin{align}\n\t\t\t&\\int ds g^n(s) \n\t\t\t= \\frac{2}{R_I^{n}}\\int_{-l\/2}^{l\/2} ds' s'^n + \\frac{2}{R_I^{n}}\\int_{-w\/2}^{w\/2} ds' s'^n \\notag\\\\\n\t\t\t&= \t\\begin{cases}\n\t\t\t\t\t\\frac{4}{R_I^n(n+1)}\\left[\\left(\\frac{l}{2}\\right)^{n+1} \\!+\\! \\left(\\frac{w}{2}\\right)^{n+1}\\right] & ({\\rm for\\>\\>even\\>\\>}n) \\cr\n\t\t\t\t\t0 & ({\\rm for\\>\\>odd\\>\\>}n)\n\t\t\t\t\\end{cases},\n\t\t\\end{align}\n\t\twhere we have used the isotropic form $f(\\vec v)=\\phi(|\\vec v|)$, and \n\t\tdecomposed the position vector $\\vec r$ as $\\vec r=x\\vec n+s'\\vec t+z\\vec e_z$ with $x=\\pm l\/2$ or $x=\\pm w\/2$. \n\t\tWe thus have the relation $g=s'\/R_I$. 
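The angular reduction above can be cross-checked by a direct Monte Carlo evaluation of the three-dimensional integral. The sketch below assumes the exponential distribution $\phi(v)=e^{-v}\/8\pi$ (i.e., $v_0=1$), for which $\int_0^\infty dv\,v^{n+3}\phi(v)=(n+3)!\/(8\pi)$, so that the right-hand side equals $(n+3)!\/[4(n+2)]$:

```python
import numpy as np
from math import factorial

# Monte Carlo check of
#   int d^3v phi(|v|) Theta(-v.n) (-v.n)^(n+1)
#     = (2*pi/(n+2)) * int_0^inf v^(n+3) phi(v) dv
# for the exponential phi(v) = e^{-v}/(8*pi), an assumption for this test.
rng = np.random.default_rng(2)
N = 4_000_000
n = 2                                # check the n = 2 coefficient

rho = rng.exponential(1.0, N)        # |v|, sampled from the density e^{-rho}
mu = rng.uniform(-1.0, 1.0, N)       # cosine of the angle between v and n

# With these sampling densities (e^{-rho} and 1/2), the integral equals
#   (1/2) * E[ rho^(n+3) * Theta(-mu) * (-mu)^(n+1) ]
mc = 0.5 * np.mean(rho ** (n + 3) * np.where(mu < 0.0, (-mu) ** (n + 1), 0.0))
exact = factorial(n + 3) / (4 * (n + 2))    # = 7.5 for n = 2
print(mc, exact)
```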
\n\t\tThen, the cumulants $\\mathcal{K}_n$ are simplified as \n\t\t\\begin{equation}\n\t\t\t\\mc{K}_n \\!=\\!\t\\begin{cases}\n\t\t\t\t\t\t\t\\frac{4\\pi \\rho h(1+e)^n (l^{n+1} + w^{n+1})}{2^n R_I^{2n}(n+1)(n+2)}\\!\\!\\int_0^\\infty \\!\\!dvv^{n+3}\\phi(v) & \\!\\!({\\rm for\\>\\>even\\>\\>}n) \\cr\n\t\t\t\t\t\t\t0 & \\!\\!({\\rm for\\>\\>odd\\>\\>}n)\n\t\t\t\t\t\t\\end{cases},\\label{BL_cum}\n\t\t\\end{equation}\n\t\twhich implies that the cumulant function is given as \n\t\t\\begin{align}\n\t\t\t&\\Phi(s) +\\mc{T}\\!s^2 = \\sum_{n=1}^\\infty \\frac{(is)^n}{n!}\\mc{K}_n\\notag\\\\\n\t\t\t\t\t&= -\\!\\frac{16\\pi \\rho hR_I^4}{ls^2(1\\!+\\!e)^2}\\int_0^\\infty \\!\\!\\!\\!dv v\\phi(v) \\sum_{n=2}^\\infty \\frac{(-1)^n}{(2n)!}\\left[\\frac{s(1+e)lv}{2R_I^2}\\right]^{2n}\\notag\\\\\n\t\t\t\t\t &-\\frac{16\\pi \\rho hR_I^4}{ws^2(1\\!+\\!e)^2}\\int_0^\\infty\\!\\!\\!\\! dv v\\phi(v) \\sum_{n=2}^\\infty \\frac{(-1)^n}{(2n)!}\\left[\\frac{s(1+e)wv}{2R_I^2}\\right]^{2n}\\notag\\\\\n\t\t\t\t\t&= -\\!\\frac{8\\pi \\rho hR_I^4}{ls^2(1\\!+\\!e)^2}\\int_{-\\infty}^\\infty \\!\\!\\!\\!\\!\\!\\!dv |v|\\phi(v) \\!\\!\\left[e^{\\frac{is(1+e)lv}{2R^2_I}}\\!\\!\\!\\!\\!-1+\\frac{s^2(1\\!+\\!e)^2l^2v^2}{8R^4_I}\\right]\\notag\\\\\n\t\t\t\t\t &\\!-\\!\\frac{8\\pi \\rho hR_I^4}{ws^2(1\\!+\\!e)^2}\\int_{-\\infty}^\\infty \\!\\!\\!\\!\\!\\!\\!dv |v|\\phi(v) \\!\\!\\left[e^{\\frac{is(1+e)wv}{2R^2_I}}\\!\\!\\!\\!\\!-1+\\frac{s^2(1\\!+\\!e)^2w^2v^2}{8R^4_I}\\right].\\label{eq:Suppl:cumulant_general}\n\t\t\\end{align} \n\t\t\n\t\\subsection{The case of the exponential velocity distribution}\n\t\tLet us consider the case of $\\mc{T}=l=0$, and assume that the velocity distribution of the granular gas is given by the exponential form\n\t\t\\begin{equation}\n\t\t\tf(\\vec v) = \\frac{1}{8\\pi v_0^3}e^{-|\\vec v|\/v_0},\n\t\t\\end{equation}\n\t\twith the characteristic velocity $v_0$. 
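For this exponential distribution the $v$-integrals in Eq.~{(\ref{eq:Suppl:cumulant_general})} can be carried out in closed form, giving (for $\mc{T}=l=0$) the cumulant function stated in the next subsection. A numerical cross-check of that resummation by direct quadrature, using the arbitrary test values $\rho=h=w=v_0=R_I=1$ and $e=1\/2$:

```python
import numpy as np

# Test values (assumptions): rho = h = w = v0 = R_I = 1, e = 0.5.
e = 0.5
Og = (1.0 + e) / 2.0                       # Omega_g = w*v0*(1+e)/(2*R_I^2)

def phi(v):
    return np.exp(-v) / (8.0 * np.pi)      # exponential f with v0 = 1

def Phi_quad(s):
    """w-term of the general cumulant function by direct quadrature (l = 0)."""
    v = np.linspace(0.0, 80.0, 400_001)
    y = 2.0 * v * phi(v) * (np.cos(Og * s * v) - 1.0 + (Og * s * v) ** 2 / 2.0)
    integral = np.sum((y[1:] + y[:-1]) / 2.0) * (v[1] - v[0])
    return -(2.0 * np.pi / (s**2 * Og**2)) * integral

def Phi_closed(s):
    # resummed form: -rho*h*w*v0*Og^2*s^2*(5+3*Og^2*s^2)/(2*(1+Og^2*s^2)^2)
    x = (Og * s) ** 2
    return -x * (5.0 + 3.0 * x) / (2.0 * (1.0 + x) ** 2)

for s in (0.5, 1.0, 2.0):
    print(s, Phi_quad(s), Phi_closed(s))   # the two columns should agree
```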
\n\t\tThe non-Gaussian Langevin equation for the scaled angular velocity $\\hat \\Omega\\equiv\\omega\/\\epsilon$ is \n\t\t\\begin{equation}\n\t\t\t\\frac{d\\hat{\\Omega}}{dt} = -\\gamma \\hat{\\Omega} + \\hat {\\eta}_g.\n\t\t\\end{equation}\n\t\tHere, the cumulant function is given by \n\t\t\\begin{equation}\n\t\t\t\\Phi(s) = -\\frac{\\rho hwv_0\\Omega_g^2s^2 (5+3\\Omega_g^2s^2)}{2(1+\\Omega^2_gs^2)^2},\n\t\t\\end{equation}\n\t\twhere we have introduced $\\Omega_g\\equiv wv_0(1+e)\/2R_I^2$. \n\t\tThen, we obtain the velocity distribution (11) in the main text for the scaled angular velocity $\\tl{\\Omega}\\equiv \\Omega\/\\Omega_g$ as \n\t\t\\begin{equation}\n\t\t\t\\mc{P}_{\\rm SS}(\\tl{\\Omega}) = \\int_{-\\infty}^\\infty \\frac{ds}{2\\pi} \\frac{1}{(1+s^2)^{3v_0\/2\\tl{v}}}\\exp{\\left[-is\\tl{\\Omega} - \\frac{v_0s^2}{\\tl{v}(1+s^2)}\\right]},\n\t\t\\end{equation}\n\t\twhere $\\mc{P}_{\\rm SS}(\\tl{\\Omega})\\equiv \\mc{P}_{\\rm SS}(\\Omega)\\Omega_g$ and $\\tl{v}\\equiv 2\\gamma\/\\rho hw$. \n\t\t\\begin{figure*}\n\t\t\t\\includegraphics[width=175mm]{dist.eps}\n\t\t\t\\caption{\n\t\t\t\t\t\tNumerical data of the Monte Carlo simulation of Eq.~{(\\ref{eq:Suppl:BLeq})}. \n\t\t\t\t\t\t(a)~The steady distribution function of the rotor's angular velocity. \n\t\t\t\t\t\t(b)~The numerical Fourier transform of $P_{\\rm SS}(\\Omega)$. \n\t\t\t\t\t\t(c)~The numerical data of $g(s)$. Up to numerical fluctuations, $g(s)$ converges to zero in the limit $s\\rightarrow+\\infty$. \n\t\t\t\t\t\t(d)~The estimated granular velocity distribution obtained using Eq.~{(\\ref{spp:eq:estdist_num})}. 
\n\t\t\t\t\t\t Because of the singularity at $v=0$ in Eq.~{(\\ref{spp:eq:estdist_num})}, the accuracy of the data near $v=0.5$ is not good.\n\t\t\t\t\t}\n\t\t\t\\label{fig:Suppl:dist}\n\t\t\\end{figure*}\n\t\\subsection{Inverse estimation formula for the spherical distribution}\n\t\tWe derive the inverse formula for the granular velocity distribution in the case of $\\mc{T}=l=0$ and an arbitrary $\\phi(v)$. \n\t\tThe non-Gaussian Langevin equation is given by \n\t\t\\begin{equation}\n\t\t\t\\frac{d\\hat{\\Omega}}{dt} = -\\gamma\\hat \\Omega + \\hat \\eta_g, \n\t\t\\end{equation}\n\t\twhere the cumulant function of $\\hat{\\eta}_{g}$ is given by\n\t\t\\begin{equation}\n\t\t\t\\Phi(s) = -\\frac{2\\pi \\rho hw}{s^2F_g^2}\\int_{-\\infty}^\\infty dv |v|\\phi(v) \\left[e^{iF_gsv}-1+\\frac{F_g^2s^2v^2}{2}\\right],\n\t\t\\end{equation}\n\t\twith the typical collisional impact $F_g\\equiv w(1+e)\/2R_I^2$. \n\t\tFrom Eq.~{(\\ref{spp:g_Fdist})}, we obtain the following relation between the granular velocity distribution and the Fourier representation of the rotor's angular velocity distribution:\n\t\t\\begin{align}\n\t\t\t&-\\frac{2\\pi \\rho hw}{s^2F_g^2}\\int_{-\\infty}^\\infty dv |v|\\phi(v) \\left[e^{iF_gsv}-1+\\frac{F_g^2s^2v^2}{2}\\right] \\notag\\\\\n\t\t\t&= \\gamma s\\frac{d}{ds}\\log{\\tl{P}_{\\rm SS}(s)}.\\label{eq:Suppl:est1}\n\t\t\\end{align}\n\t\tThis formula can be transformed into the following form:\n\t\t\\begin{equation}\n\t\t\t\\phi(v) \\!=\\! \\frac{1}{\\pi |v|} \\!\\int_{0}^\\infty \\!\\!ds \\!\\left[a\\!-\\!\\frac{bs^2}{2}\\!-\\!cs^3\\frac{d}{ds}\\log{\\tl{P}_{\\rm SS}(s\/F_g)}\\right]\\cos{(sv)},\\label{eq:Suppl:est2}\n\t\t\\end{equation}\n\t\twhere we introduced $a\\equiv\\int_{-\\infty}^{\\infty}dv|v|\\phi(v)$, $b\\equiv\\int_{-\\infty}^{\\infty}dv|v|^3\\phi(v)$, and $c\\equiv \\gamma\/2\\pi\\rho hw$. \n\n\t\tLet us explain how to determine the coefficients $a$ and $b$. 
\n\t\tAccording to the Riemann-Lebesgue lemma~{\\cite{RLlemma}}, the following relation holds if $|v|\\phi(v)$ is an $L^1$-function: \n\t\t\\begin{equation}\n\t\t\t\\lim_{s\\rightarrow+\\infty}\\int_{-\\infty}^{\\infty} dv |v|\\phi(v)e^{isv} = 0,\n\t\t\\end{equation}\n\t\tor equivalently, \n\t\t\\begin{equation}\n\t\t\t\\lim_{s\\rightarrow\\infty}\\left[a-\\frac{bs^2}{2}-cs^3\\frac{d}{ds}\\log \\tl{P}_{\\rm SS}(s\/F_g)\\right] = 0. \\label{eq:Suppl:RLlemma}\n\t\t\\end{equation}\n\t\tEquation~{(\\ref{eq:Suppl:RLlemma})} is practically useful for determining the coefficients $a$ and $b$ from the experimental data for $\\tl{P}_{\\rm SS}(s)$.\n\t\t\n\t\\subsection{The numerical technique for the inverse estimation formula}\n\t\tHere we explain our numerical procedure for the inverse formula~{(\\ref{eq:Suppl:est2})}. \n\t\tWe obtain the steady distribution function of the rotor's angular velocity using the Monte Carlo simulation of Eq.~{(\\ref{eq:Suppl:BLeq})} with the following setup: $\\phi(v)=e^{-|v|}\/8\\pi$, $l=T=0$, $w=\\sqrt{12}$, $M=\\rho=h=e=I=1$, $m=0.01$, and $\\gamma=2$. \n\t\tThe numerical data are plotted in Fig.~{\\ref{fig:Suppl:dist}(a)}. \n\t\t\n\t\tTo obtain the Fourier transform $\\tl{P}_{\\rm SS}(s)$, we have used the numerical distribution $\\mc{P}_{\\rm SS}(\\Omega)$ for $0\\leq \\Omega\\leq 30$. \n\t\t$\\tl{P}_{\\rm SS}(s)$ is numerically plotted in Fig.~{\\ref{fig:Suppl:dist}(b)}. \n\t\tWe numerically estimate the coefficients $a$ and $b$ as $a=0.080121$ and $b=0.464$, respectively, \n\t\tand obtain the following function as shown in Fig.~{\\ref{fig:Suppl:dist}(c)}: \n\t\t\\begin{equation}\n\t\t\tg(s)\t=a - \\frac{bs^2}{2} - cs^3\\frac{d}{ds}\\log{\\tl{P}_{\\rm SS}(s\/F_g)}. \n\t\t\\end{equation}\n\t\tFigure~{\\ref{fig:Suppl:dist}(c)} implies the asymptotic form $g(s) \\simeq 0$ for $s\\rightarrow\\infty$. 
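As a consistency check of this procedure, the whole pipeline can also be validated end-to-end on synthetic input where everything is known analytically. The sketch below assumes the exponential distribution $\phi(v)=e^{-|v|}\/8\pi$ with the test choices $\rho=h=w=v_0=R_I=1$ and $e=1$ (so that $F_g=\Omega_g=1$), for which $a=1\/4\pi$ and $b=3\/2\pi$, and $d\log\tl{P}_{\rm SS}\/ds=\Phi(s)\/\gamma s$ follows from Eq.~{(\ref{spp:g_Fdist})}:

```python
import numpy as np

# Synthetic end-to-end test of the inverse formula; all parameters are
# test choices: phi(v) = e^{-|v|}/(8*pi), rho = h = w = v0 = R_I = 1, e = 1,
# so F_g = Omega_g = 1; gamma enters only through c = gamma/(2*pi*rho*h*w).
gamma = 2.0
c = gamma / (2.0 * np.pi)
a = 1.0 / (4.0 * np.pi)        # int |v| phi(v) dv for the test phi
b = 3.0 / (2.0 * np.pi)        # int |v|^3 phi(v) dv for the test phi

def Phi(s):
    # closed-form cumulant function of the exponential phi (Omega_g = 1)
    return -s**2 * (5.0 + 3.0 * s**2) / (2.0 * (1.0 + s**2) ** 2)

ds = 1e-3
s = np.arange(ds, 400.0, ds)
dlogP = Phi(s) / (gamma * s)               # d/ds log P_SS(s)
g = a - b * s**2 / 2.0 - c * s**3 * dlogP  # integrand of the inverse formula

v = 1.0
phi_rec = np.sum(g * np.cos(s * v)) * ds / (np.pi * abs(v))
phi_true = np.exp(-abs(v)) / (8.0 * np.pi)
print(phi_rec, phi_true)   # the recovered and true values should agree
```

For this synthetic input, $g(s)=(1-s^2)\/[4\pi(1+s^2)^2]$ analytically, which indeed vanishes as $s\rightarrow\infty$, consistent with Eq.~{(\ref{eq:Suppl:RLlemma})}.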
\n\t\tOn the basis of the numerical data of $g(s)$, \n\t\twe estimate the granular velocity distribution function $\\phi(v)$ as\n\t\t\\begin{equation}\n\t\t\t\\phi(v) = \\frac{1}{\\pi |v|}\\int_0^\\infty ds\\, g(s)\\cos{(sv)}.\\label{spp:eq:estdist_num}\n\t\t\\end{equation}\n\t\tWe plot the granular velocity distribution estimated from Eq.~{(\\ref{spp:eq:estdist_num})} in Fig.~{\\ref{fig:Suppl:dist}(d)}. \n\t\tWe note that Eq.~{(\\ref{spp:eq:estdist_num})} has a singularity at $v=0$, which explains why the numerical accuracy is poor around $v=0$. \n\t\n\t\n\t\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}