diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzecsa" "b/data_all_eng_slimpj/shuffled/split2/finalzzecsa" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzecsa" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{secIntro}\n\\setcounter{footnote}{0}\n\n\nPhysics in nonconcommutative spacetime has long been studied\n\\cite{Doplicher:1994tu,Lukierski:1991pn}\nsince Snyder introduced the notion of quantized spacetime \\cite{Snyder:1946qz}.\nAmong many proposed models, the most common commutation relation (called canonical)\nbetween coordinates is\n\\begin{equation}\n\\label{moyal_nc}\n[\\hat{x}^\\alpha,\\hat{x}^\\beta]=i \\theta^{\\alpha\\beta},\n\\end{equation}\nwhere $\\theta^{\\alpha\\beta}=-\\theta^{\\beta\\alpha}$ are constants.\nAfter this canonical noncommutativity was introduced\nin the string theory context \\cite{Connes:1997cr,Seiberg:1999vs},\nit became the mainly studied commutation relation\nfor physics in noncommutative spacetime.\n\nThis commutation relation resembles\n the fundamental commutation relation of quantum physics.\nInspired by Weyl quantization in quantum mechanics \\cite{Weyl:1927},\na theory on the canonical noncommutative spacetime\\footnote{\nIn this paper, we only deal with space-space noncommutativity, and time is a commuting\ncoordinate throughout the paper.\nThus we use the terms (noncommutative) space and (noncommutative) spacetime interchangeably.}\ncan be reinterpreted\nto another theory on the commutative spacetime\nin which a product of any two functions on the original noncommutative\nspacetime is replaced with a deformed ($\\star$) product of the functions on\nthe commutative spacetime, the Moyal product \\cite{Groenewold:1946kp}:\n\\begin{equation}\n\\label{moyalprd}\n(f\\star g)(x)\\equiv \\left.\\exp\\left[\\frac{i}{2}\\theta^{\\alpha\\beta}\\frac{\\partial}\n{\\partial x^{\\alpha} }\\frac{\\partial}{\\partial y^{\\beta} }\\right] f(x)g(y)\\right|_{x=y} .\n\\end{equation}\nMost of the analyses for noncommutative physics\nare performed\nby using the Moyal product on the commutative space\ninstead of being treated on noncommutative spaces directly.\n\nWhat if we use a different coordinate system instead of the canonical coordinate system given by\n\\eqref{moyal_nc}? We expect that the commutation relations for the two coordinate systems\n would not be exactly equivalent\nto each other. Would then the physics described in these two coordinates systems be the same?\nWe are used to take general covariance for granted.\n General covariance in ``a noncommutative space\"\\footnote{Here, we put the quotation mark\n since it is not clear at the moment whether we have to treat\n coordinate systems with different commutation relations of ``a given space\"\n as different noncommutative spaces.\n }\n would mean the\nequivalence among different coordinate systems.\nHowever, as we mentioned above different coordinate systems in ``a noncommutative space\"\ngenerally have different\ncommutation relations which are not exactly equivalent.\nTherefore we expect that coordinate transformations among different coordinate systems\nwould not yield the same physics contradicting the usual notion of general covariance.\nSeiberg \\cite{ns2005} has already pointed out that general covariance would be broken\nin theories with emergent spacetime among which model theories on noncommutative spaces are\nalso included.\nIn this paper, we focus on this issue:\ngeneral covariance on a noncommutative space vs. 
non-exact equivalence between noncommutative coordinate\nsystems.\nIn order to check this, we compare the solutions of\n$U(1,1)\\times U(1,1)$ noncommutative Chern-Simons theory\nin the rectangular and polar coordinates in 3-dimensional AdS noncommutative spacetime.\n\n\nGauge theory on the canonical noncommutative spacetime\nhas been well established using the Seiberg-Witten map \\cite{Seiberg:1999vs}.\nThe Seiberg-Witten map is the consistency requirement for\na noncommutative gauge transformation of a gauge\ntheory living on a noncommutative spacetime\n to be equivalent to a gauge transformation of an ordinary gauge theory\nliving on a commutative spacetime.\nUsing this equivalence of the Seiberg-Witten map, one can find the\ncorresponding noncommutative gauge fields in terms of\ngiven ordinary gauge fields. The corresponding noncommutative\ngauge transformation can be found likewise.\n\nFor the three dimensional gravity, it has been well known\nthat it is equivalent to a Yang-Mills theory with\nthe Chern-Simons(CS) action in three dimensional spacetime\n\\cite{Achucarro:1987vz,Witten:1988hc}.\nThus using the Seiberg-Witten map\nthe noncommutative extension of 3D gravity-CS equivalence was studied\nin \\cite{Grandi:2000av,Banados:2001xw,Cacciatori:2002gq}.\nBased on these works, Pinzul and Stern \\cite{Pinzul:2005ta}\nobtained noncommutative $AdS_3$ vacuum and conical solution\nusing the Seiberg-Witten map.\nRather recently, this method was applied to the rotating BTZ black hole case\\footnote{\nBefore this, the non-rotating BTZ black hole case had been investigated in \\cite{bdrs04}\nin a different set-up of geometrical framework.}\n in \\cite{Kim:2007nx}\nwith commutation relation of $[\\hat{r},\\hat{\\phi}]= i \\theta$.\n\n For the four dimensional gravity, there is no such known equivalence relation\nbetween gravity and gauge theory in four dimensional spacetime.\nHowever, using the Poincar\\'{e} gauge theory approach of Chamseddine \\cite{Chamseddine:2000si}\na noncommutative Schwarzschild black hole solution was first obtained in \\cite{ctz08}\nusing the Seiberg-Witten map.\nLikewise, the charged black hole solutions in 4D were obtained in \\cite{ms08,Chaichian:2007dr}.\n\nIn our previous work \\cite{EDY:20081}, we studied the rotating BTZ black\nhole in a noncommutative polar coordinates with the commutation relation\\footnote{\nThis is equivalent to\n$\n\\label{r2phi}\n[\\hat{r}^2,\\hat{\\phi}]=2i\\theta .\n$\n}\n\\begin{equation}\n\\label{ncr2}\n[\\hat{r},\\hat{\\phi}]= i \\theta \\hat{r}^{-1},\n\\end{equation}\nwhich is different from the one used in \\cite{Kim:2007nx} and is\nequivalent to the canonical relation \\eqref{moyal_nc} up to first order\nin $\\theta$.\nIn this paper, we study the rotating BTZ black hole case with the\ncanonical commutation relation $[x,y]= i \\theta$,\n and compare it with our previous result \\cite{EDY:20081}.\nThen we again obtain the conical solution on $AdS_3$ in the noncommutative polar coordinates\nwith the commutation relation \\eqref{ncr2} and\ncompare it with the one obtained in \\cite{Pinzul:2005ta}.\nThe results exhibit their dependence on a chosen coordinate system.\n\n\nThe paper is organised as follows.\nIn section \\ref{secDiff},\nwe consider some aspects related with the Seiberg-Witten map\nand then investigate the difference between the commutation relations\nin the polar and rectangular coordinates.\nIn section \\ref{secBTZ},\nwe obtain the noncommutative BTZ solution with the canonical\ncommutation relation of noncommutative 
rectangular coordinates, then\n compare it with the result in the noncommutative polar coordinates\n obtained in \\cite{EDY:20081}.\n In section 4, we get the conical solution of noncommutative\n $AdS_3$ in the noncommutative polar coordinates, and compare it with the\n previously obtained solution by Pinzul and Stern \\cite{Pinzul:2005ta}\n in which the canonical commutation relation of the rectangular coordinates\n was used.\nWe conclude with discussion in section 5.\n\n\\section{Different noncommutativity and Seiberg-Witten map}\n\\label{secDiff}\n\nHere, we begin with reviewing\nthe Seiberg-Witten map and study related aspects\nby treating the same map in ``a noncommutative spacetime\"\nwith different commutation relations.\nAfter that we show\nhow these noncommutativities are different\nin the two following perspectives,\ncoordinates as operators and the\nMoyal product as a deformed product from twist.\n\n\\subsection{ Seiberg-Witten map in different coordinates}\n\nThe Sieberg-Witten map matches ordinary gauge fields $\\mathcal{A}$\non a commutative spacetime with noncommutative gauge fields $\\hat{\\mathcal{A}}$\non a noncommutative spacetime such that an ordinary gauge transformation of $\\mathcal{A}$\nis equivalent to a noncommutative gauge transformation of $\\hat{\\mathcal{A}}$ \\cite{Seiberg:1999vs}:\n\\begin{eqnarray}\n\\label{SWe}\n\\hat{\\mathcal{A}}(g\\cdot \\mathcal{A}\\cdot g^{-1}-\\partial g\\cdot g^{-1})\n=\\hat g*\\hat{\\mathcal{A}}*\\hat g^{-1}-\\partial \\hat g* \\hat g^{-1},\n\\end{eqnarray}\nwhere $*$ denotes the Moyal product,\n$g,\\hat g$ are elements of gauge groups for the ordinary and noncommutative gauge theories, respectively.\nThe above equation can be solved to first order in $\\theta$ as follows.\n\\begin{eqnarray}\n\\label{Aswef}\n&&\\hat{\\mathcal{A}}_{\\gamma}(\\mathcal{A})\n\\equiv \\mathcal{A}_{\\gamma}+\\mathcal{A}'_{\\gamma}\n=\\mathcal{A}_{\\gamma}-\\frac{i}{4}\\theta^{\\alpha\\beta}\n\\{ \\mathcal{A}_{\\alpha},\\partial_{\\beta}\\mathcal{A}_{\\gamma}+\\mathcal{F}_{\\beta\\gamma}\n\\}, \\\\\n\\label{lswef}\n&&\\hat{\\lambda}(\\lambda,\\mathcal{A})\n\\equiv\\lambda+\\lambda'\n= \\lambda+\\frac{i}{4}\\theta^{\\alpha\\beta}\n\\{ \\partial_{\\alpha}\\lambda,\\mathcal{A}_{\\beta}\n\\},\n\\end{eqnarray}\nwhere $\\hat{\\lambda}$ and $\\lambda$ are noncommutative and\nordinary infinitesimal gauge transformation parameters.\nWe note that there are two important factors\nin the derivation of the solution \\eqref{Aswef} and \\eqref{lswef}.\nOne is knowing of the explicit form of the Moyal product up to first order in $\\theta$,\nand the other is the coordinate independence of noncommutativity parameter $\\theta$\nbeing used in the Moyal product.\nOne would no longer get the same form of solution for Eq. 
\\eqref{SWe}\nin the cases of coordinate dependant noncommutativity parameters.\n\n\nGenerally one obtains different solutions of the Seiberg-Witten equation\nfor different coordinate systems.\nTo see this let us consider a coordinate transformation $\\varphi$\nbetween two coordinate systems $\\{x^\\alpha\\}$ and $\\{z^a\\}$,\nsay,\n$\\varphi: x^\\alpha\\rightarrow z^a\\equiv z^a(x^\\mu)$.\nThen a Seiberg-Witten solution $\\hat{\\mathcal{A}}_c(z)$\nin the coordinate system $\\{z^a\\}$\ncan be rewritten in terms of $\\hat{\\mathcal{A}}_\\alpha(x)$,\nthe corresponding solution of the Seiberg-Witten equation\nin the coordinate system $\\{x^\\alpha\\}$:\n\\begin{eqnarray}\n\\label{diffSW}\n\\hat{\\mathcal{A}}_c(z) &=&\n \\mathcal{A}_{c}(z)\n -\\frac{i}{4}\\tilde\\theta^{ab}\n\\{ \\mathcal{A}_{a}(z),\\partial_{b}\\mathcal{A}_{c}+\\mathcal{F}_{bc}\n\\}\\nonumber\\\\\n&=&\n\\frac{\\partial x^\\gamma}{\\partial z^c}\\mathcal{A}_\\gamma(x)\n-\\frac{i}{4}\\tilde\\theta^{ab}\n\\left\\{ \\frac{\\partial x^\\alpha}{\\partial z^a}\\mathcal{A}_{\\alpha}(x),\n\\frac{\\partial}{\\partial z^b}\n\\left(\\frac{\\partial x^\\gamma}{\\partial z^c}\\mathcal{A}_{\\gamma}\\right)\n+\\frac{\\partial x^\\beta}{\\partial z^b} \\frac{\\partial x^\\gamma}{\\partial z^c}\\mathcal{F}_{\\beta\\gamma}\n\\right\\}\\nonumber\\\\\n&=&\\frac{\\partial x^\\gamma}{\\partial z^c}\n\\left(\\hat{\\mathcal{A}}_\\gamma(x)\n+\\frac{i}{4}\\theta^{\\alpha\\beta}\n\\{ \\mathcal{A}_{\\alpha},\\partial_{\\beta}\\mathcal{A}_{\\gamma}+\\mathcal{F}_{\\beta\\gamma}\n\\}\\right)\\nonumber\\\\\n&&\n-\\frac{i}{4}\\tilde\\theta^{ab} \\frac{\\partial x^\\alpha}{\\partial z^a}\n\\left\\{\\mathcal{A}_{\\alpha}(x),\n\\frac{\\partial^2 x^\\beta}{\\partial z^b\\partial z^c}\\mathcal{A}_{\\beta}+\n\\frac{\\partial x^\\beta}{\\partial z^b} \\frac{\\partial x^\\gamma}{\\partial z^c}\n\\left(\\partial_{\\beta}\\mathcal{A}_{\\gamma}+\\mathcal{F}_{\\beta\\gamma}\\right)\n\\right\\},\\nonumber\n\\end{eqnarray}\nwhere $\\tilde\\theta^{ab}$ denote noncommutativity parameters\nassumed in the coordinate system $\\{z^a\\}$.\nThis can be reexpressed to show the difference between the two Seiberg-Witten solutions\nin $\\{z^a\\}$ and $\\{x^\\alpha\\}$ coordinate systems,\n\\begin{eqnarray}\n\\label{diffSW2}\n\\hat{\\mathcal{A}}_c(z)-\\frac{\\partial x^\\gamma}{\\partial z^c}\\hat{\\mathcal{A}}_\\gamma(x)\n&=&\n-\\frac{i}{4}\\tilde\\theta^{ab}\n\\left(\\frac{\\partial x^\\alpha}{\\partial z^a}\\right)\n\\left(\\frac{\\partial^2 x^\\beta}{\\partial z^b\\partial z^c}\\right)\n\\{\\mathcal{A}_\\alpha,\\mathcal{A}_\\beta\\}\n\\nonumber\\\\\n&&\n+\\frac{i}{4}\n\\left(\\frac{\\partial x^\\gamma}{\\partial z^c}\\right)\n\\left(\n\\theta^{\\alpha\\beta}\n-\\frac{\\partial x^\\alpha}{\\partial z^a} \\frac{\\partial x^\\beta}{\\partial z^b}\\tilde\\theta^{ab}\n\\right)\n\\{ \\mathcal{A}_{\\alpha}(z),\\partial_{\\beta}\\mathcal{A}_{\\gamma}+\\mathcal{F}_{\\beta\\gamma}\\},\n\\end{eqnarray}\nup to first order in $\\theta$.\nThe first term on the right-hand side vanishes when the transformation $\\varphi$ is linear, i.e.,\n$\\frac{\\partial^2 x^\\beta}{\\partial z^b\\partial z^c}= 0$.\nWhen $\\frac{\\partial^2 x^\\beta}{\\partial z^b\\partial z^c}\\neq 0$,\nthe solution $\\hat{\\mathcal{A}}_\\mu(\\mathcal{A})|_z$\nin the coordinate system $\\{z^a\\}$\nis different from $\\hat{\\mathcal{A}}_\\mu(\\mathcal{A})|_x$\nobtained in the coordinate system $\\{x^\\mu\\}$.\nThe second term vanishes when the two noncommutativity parameters,\n$\\theta^{\\alpha\\beta}$ and 
$\\tilde{\\theta}^{ab}$, are related\nas if they are tensors\\footnote{\nThe noncommutativity parameter $\\tilde\\theta^{ab}$\nin the polar coordinate system we use in this paper\nand the canonical one $\\theta^{\\alpha\\beta}$\nsatisfy this relation up to first order in $\\theta$.\nIn fact, if the two Moyal products in the two coordinate systems are equal up to first order in $\\theta$,\nthen one can show that this condition holds always regardless of the ordering problem.\n},\n$\\theta^{\\alpha\\beta}\n=\\frac{\\partial x^\\alpha}{\\partial z^a} \\frac{\\partial x^\\beta}{\\partial z^b}\\tilde\\theta^{ab}$.\nAlthough the vanishing condition for the second term does not hold in general,\nour polar noncommutativity parameter $\\theta \/r$ in \\eqref{ncr2}\nand the canonical noncommutativity parameter $\\theta$ in \\eqref{nccan}\nsatisfy this condition. However, the transformation from the rectangular to the polar coordinates is not linear.\nThus, the first term does not vanish and as we shall see this difference will\nyield different results for the rectangular and polar coordinate systems.\n\n\n\\subsection{Coordinates as operators}\n\nIn the following two subsections,\nwe compare the aspects of noncommutativity in the polar and rectangular\ncoordinate systems especially in using the Seiberg-Witten map.\nSince we consider only space-space noncommutativity\nin three dimensional spacetime in this paper,\nit is sufficient to compare the two sets of coordinate operators\n $(\\hat{x},\\hat{y})$ and $(\\hat{r},\\hat{\\phi})$.\n\nIn the rectangular coordinate system,\nthe commutation relation is given in the canonical form:\n\\begin{equation}\n\\label{nccan}\n[\\hat{x},\\hat{y}]= i \\theta .\n\\end{equation}\nWhen the two sets of coordinate operators\nare related by the corresponding classical relation\nwhich is not linear,\nfor example ($x\\rightarrow r \\cos\\phi, ~y\\rightarrow r\\sin\\phi$),\nwe face the ordering ambiguity if we want to express one set of coordinates\nin terms of other set of coordinates.\nMoreover, for the maps between functions of the operators,\nlike a solution $\\hat{\\mathcal{A}}(x, y)$ of the Seiberg-Witten equation on commutative space\nwhich corresponds to $\\mathcal{A}(\\hat x, \\hat y)$ on noncommutative space,\nthe ambiguity becomes severe.\n\nIn \\cite{EDY:20081} it was shown that\nthe commutation relation between polar coordinates\nis equivalent to the above canonical commutation relation\nup to first order in $\\theta$.\nThe commutation relation chosen there was the relation \\eqref{ncr2} which is equivalent to\n\\begin{equation}\n\\label{ncrdouble}\n[\\hat{r}^2,\\hat{\\phi}]=2i\\theta .\n\\end{equation}\nTo see how the above commutation relation and\nthe canonical one \\eqref{nccan} is related,\nwe assume that the usual map $(x,y)\\rightarrow (r,\\phi)$\nbetween the rectangular and polar coordinates holds\nin this noncommutative space,\n\\begin{eqnarray}\n\\label{rec-pol}\n\\hat{x}=\\hat{r}\\cos\\hat{\\phi}, ~~\\hat{y}=\\hat{r}\\sin\\hat{\\phi}.\n\\end{eqnarray}\nUsing the commutation relation $[\\hat{\\phi},\\hat{r}^{-1}]=i\\theta \\hat{r}^{-3}$\ndeduced from \\eqref{ncrdouble} one gets:\n\\begin{eqnarray}\n\\label{r2xy}\n\\hat{x}^2+\\hat{y}^2 := \\hat{r} ( \\hat{r}-\\frac{1}{2!}[\\hat{\\phi},[\\hat{\\phi},\\hat{r}]]\n+\\cdots ) = \\hat{r}^2-\\frac{1}{2!}\\theta^2 \\hat{r}^{-2}\n+ \\cdots.\n\\end{eqnarray}\nThen one can readily check\nhow the two commutation relations\n\\eqref{nccan} and \\eqref{ncrdouble} are different:\nUsing the commutation relation 
\\eqref{nccan} we have\n\\begin{eqnarray}\n\\label{lapp}\n[\\hat{x}^2+\\hat{y}^2 ,\\hat{x}]\n=[\\hat{y}^2,\\hat{x}]=-2i\\theta \\hat{y}=-2i\\theta \\hat{r}\\sin\\hat{\\phi},\n\\end{eqnarray}\nand using \\eqref{r2xy} this can be rewritten as\n\\begin{eqnarray}\n\\label{rapp}\n[\\hat{r}^2 + \\mathcal{O}(\\theta^2),\\hat{x}] \\cong\n[\\hat{r}^2,\\hat{r}\\cos\\hat{\\phi}]\n=\\hat{r}[\\hat{r}^2,\\cos\\hat{\\phi}] = -2i \\theta \\hat{r}\\sin\\hat{\\phi},\n\\end{eqnarray}\nwhere the relation $[\\hat{r}^2,\\hat{\\phi}]=2i\\theta$ is applied.\nTherefore,\n\\eqref{nccan} and \\eqref{ncrdouble} are equivalent up to first order in $\\theta$\nand became different from the second order in $\\theta$.\n\nHere, we make a short remark about the commutation relation used in \\cite{Kim:2007nx}.\nThere a noncommutative BTZ solution was worked out in the polar coordinates\nwith the following commutation relation:\n\\begin{equation}\n\\label{ncr1}\n[\\hat{r},\\hat{\\phi}]= i \\theta .\n\\end{equation}\nIf we assume that the usual relationship \\eqref{rec-pol}\nbetween the rectangular and polar coordinate systems\nstill holds in the noncommutative case, then\nwe get the following relation by applying the commutation relation \\eqref{ncr1}:\n\\begin{eqnarray}\n[\\hat{x},\\hat{y}] = [ \\hat{r} \\cos{\\hat{\\phi}},\\hat{r} \\sin{\\hat{\\phi}} ]\n = i\\theta \\hat{r},\n\\end{eqnarray}\nwhich shows that the commutation relations \\eqref{nccan} and \\eqref{ncr1}\nare not equivalent even by the dimensional count.\n\n\n\n\\subsection{Twist perspective}\n\nHere, we prefer to use the commutation relation $[\\hat{r}^2,\\hat{\\phi}]=2i\\theta$\nin solving the Seiberg-Witten equation for calculational convenience,\nsince the two commutation relations \\eqref{ncr2} and \\eqref{ncrdouble} are\nexactly equivalent.\nThe reason for this preference can be easily understood\nif we view the Moyal product from the twist perspective.\n\nIt is known that the Moyal product \\eqref{moyalprd} can also be reproduced\nfrom the deformed $*$-product \\cite{Chaichian:2004za,Bu:2006ha}:\n\\begin{equation}\n\\label{fstarg}\n(f * g)(x)\\equiv\n\\cdot \\left[ \\mathcal{F}^{-1}_{*}(f(x)\\otimes g(x))\\right] ,\n\\end{equation}\nwhere\nthe multiplication $\\cdot$ is defined as $\\cdot[f(x)\\otimes g(x)] = f(x)g(x)$,\nand the twist element $\\mathcal{F}_*$ is represented with the\ngenerators of translation along the $x^\\alpha$ directions, $P_\\alpha$, as follows.\n\\begin{equation}\n \\mathcal{F_*} =\n e^{\\frac{i}{2}\\theta^{\\alpha\\beta}P_\\alpha\\otimes P_\\beta}\n~~ \\rightarrow~~\ne^{-\\frac{i}{2}\\theta^{\\alpha\\beta} \\frac{\\partial}{\\partial x^\\alpha}\n \\otimes\\frac{\\partial}{\\partial x^\\beta}}.\n \\label{Telement}\n\\end{equation}\nUsing \\eqref{fstarg} and \\eqref{Telement},\none can check that $f*g$ in \\eqref{fstarg} is indeed equivalent to\nthe Moyal product $f\\star g$ given in \\eqref{moyalprd}:\n\\begin{eqnarray}\n\\label{twistpd}\n(f* g)(x)&=&\\cdot \\left[\ne^{\\frac{i}{2}\\theta^{\\alpha\\beta} \\frac{\\partial}{\\partial x^\\alpha}\n \\otimes\\frac{\\partial}{\\partial x^\\beta}}\n (f(x)\\otimes g(x))\\right] \\nonumber\\\\\n &=&\\cdot \\left[ f(x)\\otimes g(x)\n +\\frac{i}{2}\\theta^{\\alpha\\beta} \\frac{\\partial f(x)}{\\partial x^\\alpha}\n \\otimes \\frac{\\partial g(x)}{\\partial x^\\beta}+\\cdots\n \\right] \\nonumber\\\\\n&=&f(x)\\cdot g(x)+\\frac{i}{2}\\theta^{\\alpha\\beta} \\frac{\\partial f(x)}{\\partial x^\\alpha}\n \\frac{\\partial g(x)}{\\partial x^\\beta}+\\cdots\n \\nonumber\\\\\n 
&=&\\left.\\exp\\left[\\frac{i}{2}\\theta^{\\alpha\\beta}\\frac{\\partial}\n{\\partial x^{\\alpha} }\\frac{\\partial}{\\partial y^{\\beta} }\\right] f(x)g(y)\\right|_{x=y}\n\\nonumber\\\\\n&\\equiv&(f\\star g)(x) .\n\\end{eqnarray}\nThus knowing the twist element in a given coordinate system helps one to\n identify the corresponding Moyal product.\n\nThe twist element which yields\nthe noncommutativity \\eqref{nccan} in the rectangular coordinates, or $[x,y]_*=i\\theta$,\nis given by\n\\begin{equation}\n \\mathcal{F_*} =\n \\exp\\left[-\\frac{i\\theta}{2} \\left(\\frac{\\partial}{\\partial x}\\otimes\\frac{\\partial}{\\partial y}\n -\\frac{\\partial}{\\partial y}\\otimes\\frac{\\partial}{\\partial x}\\right)\\right].\n \\label{xyelement}\n\\end{equation}\nOne can rewrite the above exponent\nup to first order in $\\theta$,\nas follows:\n\\begin{eqnarray}\\label{axialexponent}\n\\frac{\\partial}{\\partial x}\\otimes\\frac{\\partial}{\\partial y}\n -\\frac{\\partial}{\\partial y}\\otimes\\frac{\\partial}{\\partial x}\n & \\simeq & \\frac{\\partial}{\\partial r}\\otimes \\frac{1}{r}\\frac{\\partial}{\\partial \\phi}\n -\\frac{1}{r}\\frac{\\partial}{\\partial \\phi}\\otimes\\frac{\\partial}{\\partial r}.\n\\end{eqnarray}\nWe can also define the twist element $\\mathcal{F'_*}$\nin the polar coordinates\nwhich yields the commutation relation\n $[r,\\phi]_{*}=i \\theta \/ r$ as in \\eqref{ncr2}\n and is equivalent to $\\mathcal{F_*}$ up to first order in $\\theta$:\n\\begin{eqnarray}\n \\mathcal{F'_*} & := &\n \\exp\\left[-\\frac{i\\theta}{2}\\left( \\frac{1}{r}\\frac{\\partial}{\\partial r}\\otimes\n \\frac{\\partial}{\\partial \\phi}\n -\\frac{\\partial}{\\partial \\phi}\\otimes \\frac{1}{r}\\frac{\\partial}{\\partial r}\\right)\\right]\n \\label{relement} \\\\\n & \\simeq &\n \\exp\\left[-\\frac{i\\theta}{2}\\left(\\frac{\\partial}{\\partial r}\\otimes\\frac{1}{r}\n \\frac{\\partial}{\\partial \\phi}\n -\\frac{1}{r}\\frac{\\partial}{\\partial \\phi}\\otimes\\frac{\\partial}{\\partial r}\\right)\\right]\n \\simeq \\mathcal{F_*}.\n\\nonumber\n\\end{eqnarray}\n\n\nIf we write a twisted product corresponding to $\\mathcal{F'_*}$,\nit would look like the Moyal product \\eqref{moyalprd} except that $\\theta$ becomes coordinate dependant,\ni.e., $\\theta \\rightarrow \\theta\/r$.\nTo use the solution of the Seiberg-Witten equation\nwithout any modification,\none should carefully place the factor $1\/r$\nin front of the derivative $\\frac{\\partial}{\\partial r}$~\n in \\eqref{relement}\nwhen one expands the Moyal products\nin the Seiberg-Witten equation.\nHowever, if we rewrite $\\mathcal{F'_*}$ in terms of the derivative\n$\\frac{\\partial}{\\partial r^2}$,\nas a new twist element $\\mathcal{F''_*}$,\n\\begin{equation}\n \\mathcal{F''_*} =\n \\exp\\left[-i\\theta \\left(\\frac{\\partial}{\\partial r^2}\\otimes\\frac{\\partial}{\\partial \\phi}\n -\\frac{\\partial}{\\partial \\phi}\\otimes\\frac{\\partial}{\\partial r^2}\\right)\\right],\n \\label{r2element}\n\\end{equation}\nthis would allow us to use the Seiberg-Witten relation without any modification.\nThe new twist element $\\mathcal{F''_*}$ is equivalent to $\\mathcal{F'_*}$\nand yields the commutation relation\n$[r^2,\\phi]_{*} = r^2*\\phi-\\phi*r^2 = 2i\\theta$.\n\n\n\n\n\\section{BTZ black hole}\n\\label{secBTZ}\n\nHere and in the following section\nwe investigate the effect of non-exact equivalence in noncommutativity\nusing the two known commutative solutions in 3D,\nthe BTZ black hole solution \\cite{Banados:1992wn,Carlip:1994hq}\nand the 
conical solution on $AdS_3$ \\cite{Pinzul:2005ta},\nin two ways.\n\nOne way is like the following:\nTo apply the Seiberg-Witten map associated with\nthe noncommutativity in the rectangular coordinates\nwe first transform the commutative solution obtained in the polar coordinates\ninto the one in the rectangular coordinates.\nThen after getting noncommutative solutions by applying the Seiberg-Witten map\nwith the canonical commutation relation of the rectangular coordinates,\nwe rewrite them back into the polar coordinates.\nThe other way is to use the Seiberg-Witten map directly\nin the polar coordinates without rewriting the solution back and forth\nbetween the polar and the rectangular coordinate systems.\n\nThe action of the $(2+1)$ dimensional noncommutative $U(1,1)\\times U(1,1)$\n Chern-Simons theory with the negative\ncosmological constant $\\Lambda=-1\/l^2$ is given by up to boundary terms \\cite{Banados:2001xw,Cacciatori:2002gq},\n\\begin{eqnarray}\n\\label{action}\n&&\\hat{S}(\\mathcal{\\hat{A}}^{+},\\mathcal{\\hat{A}}^{-})=\n\\hat{S}_{+}(\\mathcal{\\hat{A}}^{+})-\\hat{S}_{-}(\\mathcal{\\hat{A}}^{-}), \\\\\n&& \\hat{S}_{\\pm}(\\mathcal{\\hat{A}}^{\\pm})=\n\\beta\\int \\rm Tr(\\mathcal{\\hat{A}}^{\\pm} \\stackrel{\\star}{\\wedge} d\\mathcal{\\hat{A}}^{\\pm}+\\frac{2}{3}\n\\mathcal{\\hat{A}}^{\\pm}\\stackrel{\\star}{\\wedge} \\mathcal{\\hat{A}}^{\\pm} \\stackrel{\\star}{\\wedge} \\mathcal{\\hat{A}}^{\\pm}),\\nonumber\n\\end{eqnarray}\nwhere $\\beta=l\/16\\pi G_{N}$ and $G_{N}$ is the three dimensional Newton constant.\nHere\n$\n\\mathcal{\\hat{A}^{\\pm}}=\\mathcal{\\hat{A}}^{A\\pm}\\tau_{A}\n=\\hat{A}^{a\\pm}\\tau_{a}+\\hat{B}^{\\pm}\\tau_{3},\n$\nwith $A=0,1,2,3$, $~ a={0,1,2},$ ~ $\\mathcal{\\hat{A}}^{a\\pm}=\\hat{A}^{a\\pm}$,\n $~ \\mathcal{\\hat{A}}^{3\\pm}=\\hat{B}^{\\pm}$,\n and the deformed wedge product $\\stackrel{\\star}{\\wedge}$ denotes that\n$\nA \\stackrel{\\star}{\\wedge} B \\equiv A_{\\mu} \\star B_{\\nu}~dx^{\\mu} \\wedge dx^{\\nu}.\n$\nThe noncommutative $SU(1,1) \\times SU(1,1)$ gauge fields $\\hat{A}$ are expressed in terms of the triad\n$\\hat{e}$ and the spin connection $\\hat{\\omega}$ as %\n$\\label{nc_cs3grav}\n \\hat{A}^{a\\pm}:=\\hat{\\omega}^{a}\\pm \\hat{e}^{a}\/{l}.\n$\nIn terms of $\\hat{e}$ and $\\hat{\\omega}$\nthe action becomes \\cite{Cacciatori:2002gq}\n\\begin{eqnarray}\n\\label{reaction}\n\\hat{S}\\!\\!&=&\\!\\!\\frac{1}{8\\pi G_{N}}\\int\\left(\\hat{e}^{a}\\stackrel{\\star}{\\wedge} \\hat{R}_{a}\n+\\frac{1}{6l^2}\\epsilon_{abc}\\hat{e}^{a}\\stackrel{\\star}{\\wedge}\\hat{e}^{b}\n\\stackrel{\\star}{\\wedge}\\hat{e}^{c}\\right)\n\\nonumber \\\\\n\\!\\!&-&\\!\\! 
\\frac{\\beta}{2}\n\\int\\left(\\hat{B}^{+}\\stackrel{\\star}{\\wedge} d\\hat{B}^{+}\n+\\frac{i}{3}\\hat{B}^{+}\\stackrel{\\star}{\\wedge}\\hat{B}^{+}\n\\stackrel{\\star}{\\wedge}\\hat{B}^{+}\\right)\n+\\frac{\\beta}{2}\n\\int\\left(\\hat{B}^{-}\\stackrel{\\star}{\\wedge} d\\hat{B}^{-}\n+\\frac{i}{3}\\hat{B}^{-}\\stackrel{\\star}{\\wedge} \\hat{B}^{-}\n\\stackrel{\\star}{\\wedge}\\hat{B}^{-}\\right)\n\\nonumber \\\\\n&+&\\frac{i\\beta}{2} \\int( \\hat{B}^{+}-\\hat{B}^{-})\\stackrel{\\star}{\\wedge}\n\\left(\\hat{\\omega}^{a}\\stackrel{\\star}{\\wedge} \\hat{\\omega}_{a}+\\frac{1}{l^2}\\hat{e}^{a}\n\\stackrel{\\star}{\\wedge}\\hat{e}_{a}\\right)\n\\nonumber \\\\\n&+&\\frac{i\\beta}{2l}\\int( \\hat{B}^{+}+\\hat{B}^{-})\\stackrel{\\star}{\\wedge}\n\\left(\\hat{\\omega}^{a}\\stackrel{\\star}{\\wedge} \\hat{e}_{a}+\\hat{e}^{a}\n\\stackrel{\\star}{\\wedge} \\hat{\\omega}_{a}\\right),\n\\end{eqnarray}\nup to surface terms, where $\\hat{R}^{a}=d\\hat{\\omega}^{a}\n+\\frac{1}{2}\\epsilon^{abc}\\hat{\\omega}_{b}\\stackrel{\\star}{\\wedge}\\hat{\\omega}_{c}$.\nThe equation of motion can be written as follows.\n\\begin{eqnarray}\n\\label{nccurtensor}\n\\hat{\\mathcal{F}}^{\\pm} \\equiv d\\hat{\\mathcal{A}}^{\\pm}+ \\hat{\\mathcal{A}}^{\\pm}\n\\stackrel{\\star}{\\wedge}\\hat{\\mathcal{A}}^{\\pm}=0.\n\\end{eqnarray}\nIn the commutative limit this becomes,\n\\begin{eqnarray}\n\\label{ccurtensor}\nF^{\\pm} \\equiv d A^{\\pm}+ A^{\\pm}\\wedge A^{\\pm}=0, ~~ d B^{\\pm}= 0,\n\\end{eqnarray}\nand the first one can be rewritten as\n\\begin{equation}\nR^{a} + \\frac{1}{2l^2}\\epsilon^{abc}e_{b}\\wedge e_{c}=0, ~~\nT^{a} \\equiv de^{a}+\\epsilon^{abc}\\omega_{b}\\wedge e_{c}= 0.\n\\end{equation}\nThe solution of the decoupled EOM for $SU(1,1)\\times SU(1,1)$ part\nwas obtained in \\cite{Carlip:1994hq}:\n\\begin{eqnarray}\n\\label{triad}\ne^{0}&=& m\\left(\\frac{r_{+}}{l}dt-r_{-}d\\phi\\right),~\ne^{1}=\\frac{l}{n}dm,~\ne^{2}=n\\left(r_{+}d\\phi-\\frac{r_{-}}{l}dt\\right), \\nonumber\n\\\\\n\\label{spinc}\n\\omega^{0}&=& -\\frac{m}{l}\\left(r_{+}d\\phi-\\frac{r_{-}}{l}\\right),~\n\\omega^{1}=0,~~~~~\n\\omega^{2}=-\\frac{n}{l}\\left( \\frac{r_{+}}{l}dt-r_{-}d\\phi\\right),\n\\end{eqnarray}\nwhere $m^2=(r^2-r_{+}^2)\/(r_{+}^2-r_{-}^2)$,~ $n^2=(r^2-r_{-}^2)\/(r_{+}^2-r_{-}^2)$,\nand $r_+,~ r_-$ are the outer and inner horizons respectively.\nThere it was also shown to be equivalent to the ordinary BTZ black hole solution \\cite{Banados:1992wn}:\n\\begin{equation}\nds^2=-N^2dt^2+N^{-2}dr^2+r^2(d\\phi+N^{\\phi}dt)^2,\n\\end{equation}\nwhere $N^2=(r^2-r_{+}^2)(r^2-r_{-}^2)\/l^2 r^2$ and $N^{\\phi}=-r_{+}r_{-}\/lr^2$.\n\n\n\\subsection{Rectangular coordinates}\n\nThe BTZ solution in the polar coordinates can be rewritten\n in the rectangular coordinates as follows:\n\\begin{eqnarray}\nds^2 &=& [-N^2+r^2(N^{\\phi})^2]dt^2-2yN^{\\phi}dt dx +2x N^{\\phi} dt dy\n\\nonumber \\\\\n&&+\\frac{2xy}{r^2}(N^{-2}-1)dxdy\n+\\frac{1}{r^2}(N^{-2}x^2+y^2)dx^2\n+\\frac{1}{r^2}(N^{-2}y^2+x^2)dy^2,\n\\end{eqnarray}\nwhere $r^2=x^2+y^2$, ~$r_+^2=\\frac{Ml^2}{2}\\left\\{\n1+\\left[1-\\left(\\frac{J}{Ml}\\right)^2\\right]^{1\/2}\n\\right\\}$, ~ $r_-=Jl\/2r_+$\n, ~$N^{\\phi}=-r_{+}r_{-}\/lr^2$,\nand $N^2=(r^2-r_{+}^2)(r^2-r_{-}^2)\/l^2 r^2$.\nAs in \\cite{EDY:20081},\n we consider two simple $U(1)$ fluxes $B_{\\mu}^{\\pm}=B d\\phi=B(xdy-ydx)\/r^2$ with constant $B$.\nThen, the commutative $U(1,1) \\times U(1,1)$ gauge fields $\\mathcal{A}^{\\pm}$ can be written as\n\\begin{eqnarray}\n\\label{cugrectang}\n\\mathcal{A}^{\\pm}_{\\mu}= 
\\mathcal{A}^{\\pm A}\\tau_{A}=\nA_{\\mu}^{a\\pm}\\tau_{a}\n+B_{\\mu}^{\\pm}\\tau_{3},\n\\end{eqnarray}\nwhere $A={0,1,2,3}, a={0,1,2},~\\mathcal{A}_{\\mu}^{a\\pm}=A_{\\mu}^{a\\pm}$,\n$\\mathcal{A}_{\\mu}^{3\\pm}=B_{\\mu}^{\\pm}$ and the gauge fields $A^{a\\pm}$ are given by\n\\begin{eqnarray}\n\\label{csurectang}\nA^{0\\pm} &=& \\pm \\frac{m(r_{+}\\pm r_{-})}{l^2}\\left[dt \\pm \\frac{l}{r^2}(ydx-xdy)\\right],\n\\nonumber \\\\\nA^{1\\pm}&=& \\pm \\frac{1}{\\sqrt{(r^2-r_{+}^2)(r^2-r_{-}^2)}}(xdx+ydy),\n\\nonumber \\\\\nA^{2\\pm} &=& -\\frac{n(r_{+}\\pm r_{-})}{l^2}\\left[dt \\pm \\frac{l}{r^2}(ydx-xdy)\\right].\n\\end{eqnarray}\n From the commutative $U(1,1) \\times U(1,1)$ gauge fields, we get $\\mathcal{A'}_{\\mu}^{\\pm}$\n(recall that $\\hat{\\mathcal{A}}^{\\pm}=\\mathcal{A}_{\\mu}^{\\pm}+\\mathcal{A'}_{\\mu}^{\\pm}$)\nvia the Seiberg-Witten map \\eqref{Aswef} :\n\\begin{eqnarray}\n\\mathcal{A'}_{t}^{\\pm}\n&=& \\frac{i \\theta B}{8l^2 \\sqrt{(r^2-r_{+}^2)(r^2-r_{-}^2)}}\n\\sqrt{\\frac{r_{+}\\pm r_{-}}{r_{+} \\mp r_{-}}}\n\\left(\n \\begin{array}{cc}\n \\mp \\sqrt{r^2-r_{-}^2} & -\\sqrt{r^2-r_{+}^2} \\\\\n \\sqrt{r^2-r_{+}^2} & \\pm \\sqrt{r^2-r_{-}^2} \\\\\n \\end{array}\n \\right),\n \\nonumber \\\\\n\\mathcal{A'}_{x}^{\\pm}\n&=& \\frac{i \\theta}{8l^2 r^4 (r^2-r_{+}^2)(r^2-r_{-}^2)}\n\\left(\n \\begin{array}{cc}\n -y(U^{\\pm}-V^{\\mp}) & \\pm Bl(yF^{\\pm}-ilr^2 x G) \\\\\n \\mp Bl(yF^{\\pm}+ilr^2 x G)& -y(U^{\\pm}+V^{\\mp}) \\\\\n \\end{array}\n \\right),\n \\nonumber \\\\\n\\mathcal{A'}_{y}^{\\pm}\n&=& \\frac{i \\theta}{8l^2 r^4 (r^2-r_{+}^2)(r^2-r_{-}^2)}\n\\left(\n \\begin{array}{cc}\n x(U^{\\pm}-V^{\\mp}) & \\mp Bl(x F^{\\pm}+ilr^2 y G) \\\\\n \\mp Bl(x F^{\\pm}-ilr^2 y G)& x(U^{\\pm}+V^{\\mp}) \\\\\n \\end{array}\n \\right),\n \\end{eqnarray}\nwhere\n\\begin{eqnarray}\nU^{\\pm}&=&(r^2-r_{+}^2)(r^2-r_{-}^2)[B^2l^2-(r_{+}\\pm r_{-})]^2-r^4l^2,\n\\nonumber \\\\\nV^{\\mp}&=& B l(r_{+}\\mp r_{-})(r^2-2r_{-}^2)\\sqrt{(r^2-r_{+}^2)(r^2-r_{-}^2)},\n\\nonumber \\\\\nF^{\\pm}&=& (r^2-r-{+}^2)(r^2-2r_{-}^2)(r_{+}\\pm r_{-})\\sqrt{\\frac{r^2-r_{-}^2}{r_{+}^2-r_{-}^2}},\n\\nonumber \\\\\nG &=& r^2\\sqrt{\\frac{r^2-r_{-}^2}{r^2-r_{+}^2}}\n-(r^2-2r_{-}^2)\\sqrt{\\frac{r^2-r_{+}^2}{r^2-r_{-}^2}}.\n\\end{eqnarray}\nUsing the relations between\nthe gauge fields and the triad and spin connection,\n ~ $\\hat e\/l=\\hat{\\mathcal{A}}^{+}+\\hat{\\mathcal{A}}^{-}$\nand $ \\hat \\omega=\\hat{\\mathcal{A}}^{+}-\\hat{\\mathcal{A}}^{-}$, we get the following\n up to first order in $\\theta$.\n\\begin{eqnarray}\n\\label{nctrirectang}\n\\hat{e}^{0} &=&\\frac{r_{+}[r^2-r_{+}^2-\\theta B\/4]}{l\\sqrt{(r^2-r_{+}^2)(r_{+}^2-r_{-}^2)}}dt\n+\\frac{r_{-}}{r^2}\\sqrt{\\frac{r^2-r_{+}^2}{r_{+}^2-r_{-}^2}}\n\\left[ 1+\\frac{\\theta B}{4r^2}\n\\left(\\frac{r^2-2r_{+}^2}{r^2-r_{+}^2}\\right)\\right](y dx-x dy),\n\\nonumber \\\\\n\\hat{e}^{1} &=& -\\frac{l(r^2+r_{-}^2)}{(r^2-r_{+}^2)(r^2-r_{-}^2)}\n\\left[ 1-\\frac{\\theta B}{4r^2}\n\\frac{r_{+}^4(r^2-2r_{-}^2)-r_{-}^4(r^2-2r_{+}^2)}{(r_{+}^2-r_{-}^2)\n(r^2+r_{-}^2)\\sqrt{(r^2-r_{+}^2)(r^2-r_{-}^2)}}\\right] (xdx+ydy),\n\\nonumber \\\\\n\\hat{e}^{2} &=& \\frac{r_{-}[r^2-r_{-}^2-\\theta B\/4]}{l\\sqrt{(r^2-r_{-}^2)(r_{+}^2-r_{-}^2)}}dt\n-\\frac{r_{+}}{r^2}\\sqrt{\\frac{r^2-r_{-}^2}{r_{+}^2-r_{-}^2}}\n\\left[ 1+\\frac{\\theta B}{4r^2}\n\\left(\\frac{r^2-2r_{-}^2}{r^2-r_{-}^2}\\right)\\right](y dx-x dy),\n\\\\\n\\label{ncspincrectang}\n\\hat{\\omega}^{0} &=&\\frac{r_{-}[r^2-r_{+}^2-\\theta 
B\/4]}{l^2\\sqrt{(r^2-r_{+}^2)(r_{+}^2-r_{-}^2)}}dt\n+\\frac{r_{+}}{lr^2}\\sqrt{\\frac{r^2-r_{+}^2}{r_{+}^2-r_{-}^2}}\n\\left[ 1+\\frac{\\theta B}{4r^2}\n\\left(\\frac{r^2-2r_{+}^2}{r^2-r_{+}^2}\\right)\\right](y dx-x dy),\n\\nonumber \\\\\n\\hat{\\omega}^{1} &=& 0,\n\\nonumber \\\\\n\\hat{\\omega}^{2} &=& -\\frac{r_{+}[r^2-r_{-}^2-\\theta B\/4]}{l^2\\sqrt{(r^2-r_{+}^2)(r_{+}^2-r_{-}^2)}}dt\n-\\frac{r_{-}}{lr^2}\\sqrt{\\frac{r^2-r_{-}^2}{r_{+}^2-r_{-}^2}}\n\\left[ 1+\\frac{\\theta B}{4r^2}\n\\left(\\frac{r^2-2r_{-}^2}{r^2-r_{-}^2}\\right)\\right](y dx-x dy). \\nonumber\n\\end{eqnarray}\n\nA noncommutative length element can be defined by\n\\begin{eqnarray}\n\\label{ncmetricrectang}\nd\\hat{s}^2=\\hat{g}_{\\mu\\nu}dx^{\\mu}dx^{\\nu} \\equiv \\eta_{ab}\\hat{e}_{\\mu}^{a}\\star \\hat{e}_{\\nu}^{b}dx^{\\mu}dx^{\\nu},\n\\end{eqnarray}\nwhere $\\star$ denotes the Moyal product.\nSince the length element $d\\hat{s}^2$ in (\\ref{ncmetricrectang}) has symmetric summation,\nwe end up with a real length element.\nThus we define a real noncommutative metric by\n $\\hat{G}_{\\mu\\nu} \\equiv (\\hat{g}_{\\mu\\nu}+\\hat{g}_{\\nu\\mu})\/2$\n as in \\cite{Pinzul:2005ta}.\nAfter transforming it back to the polar coordinates, the length element is given by\n\\begin{eqnarray}\nd\\hat{s}^2 &=& \\hat{G}_{\\mu\\nu}dx^{\\mu}dx^{\\nu}\n\\nonumber \\\\\n&=& -\\mathcal{F}^2 dt^2+\\mathcal{\\hat{N}}^{-2}dr^2\n+2r^2 N^{\\phi}\\left(1+\\frac{\\theta B}{2r^2}\\right)dt d\\phi\n+r^2\\left(1+\\frac{\\theta B }{2r^2}\\right)d\\phi^2,\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n\\mathcal{F}^2&=&\\frac{(r^2-r_{+}^2-r_{-}^2)}{l^2}-\\frac{\\theta B}{2l^2}=f^2,\n\\\\\n\\hat{\\mathcal{N}}^2&=&\\frac{1}{l^2 r^2}\\left[ (r^2-r_{+}^2)(r^2-r_{-}^2)\n-\\frac{\\theta B}{2r^2}\\left( r_{+}^2(r^2-r_{-}^2)+r_{-}^2(r^2-r_{+}^2)\n\\right)\\right].\n\\end{eqnarray}\n\nNow, we investigate the apparent and Killing horizons of the above solution by the following relations:\n\\begin{eqnarray}\n\\label{apparenth}\n\\hat{G}^{rr}=\\hat{G}_{rr}^{-1}=\\hat{\\mathcal{N}}^2=0,\n\\end{eqnarray}\nfor the apparent horizon (denoted as $\\hat{r}$), and\n\\begin{equation}\n \\hat{\\chi}^2=\n\\hat{G}_{tt}-\\hat{G}_{t\\phi}^2\/\\hat{G}_{\\phi\\phi}=0,\n\\end{equation}\nfor the Killing horizon (denoted as $\\tilde{r}$).\nThese two equations yield the apparent and Killing horizons\nup to first order in $\\theta$ at\n\\begin{eqnarray}\n\\label{apparenth}\n\\hat{r}_{\\pm}^{2}&=&r_{\\pm}^{2}+\\frac{\\theta B}{2}+\\mathcal{O}(\\theta^2),\\\\\n\\label{killingh}\n\\tilde{r}_{\\pm}^2&=&r_{\\pm}^2 + \\frac{\\theta B}{2}\n+\\mathcal{O}(\\theta^2).\n\\end{eqnarray}\n\nHere the apparent and Killing horizons coincide, and\nthe inner and outer horizons are shifted from the classical(commutative case) value\nby the same amount $\\theta B\/2$ due to noncommutative effect of flux.\nNote that this feature agrees with the result in the commutative(classical) case,\nin which the apparent and Killing horizons coincide for stationary black holes.\n\n\n\\subsection{Polar coordinates }\nHere, we recall the solution in the noncommutative polar coordinates\nobtained in \\cite{EDY:20081} for comparison.\n From the consideration in section \\ref{secDiff},\nthe Moyal ($\\star$) product from $[\\hat{R}, \\hat{\\phi}]= 2 i \\theta$\nis given by\n\\begin{eqnarray}\n\\label{starp}\n(f\\star g)(x)=\\left.\\exp\\left[i\\theta\\left(\\frac{\\partial}{\\partial R}\\frac{\\partial}{\\partial \\phi'}\n-\\frac{\\partial}{\\partial \\phi}\\frac{\\partial}{\\partial 
R'}\\right)\\right]f(x)g(x')\n\\right|_{x=x'},\n\\end{eqnarray}\n where $\\hat{R}\\equiv \\hat{r}^2$.\n The noncommutative solution $\\mathcal{\\hat{A}}^{\\pm}$ is given by\n\\begin{eqnarray}\n\\label{ncgauges}\n\\mathcal{\\hat{A}}^{\\pm}_{\\mu}=\\hat{A}_{\\mu}^{a\\pm}\\tau_{a}+\\hat{B}_{\\mu}^{\\pm}\\tau_{3}\n= \\left(A_{\\mu}^{a\\pm}-\\frac{\\theta}{2}B_{\\phi}^{\\pm}\\partial_{R}A_{\\mu}^{a\\pm}\\right)\\tau_{a}\n+B_{\\mu}^{\\pm}\\tau_{3}+\\mathcal{O}(\\theta^2),\n\\end{eqnarray}\nwhere we also considered two $U(1)$ fluxes $B_{\\mu}^{\\pm}=B d\\phi ~$ with constant $B$.\n\nThen from the Sieberg-Witten map\nwe obtain the noncommutative triad and spin connection as follows.\n\\begin{eqnarray}\n\\label{nctriad}\n\\hat{e}^{0}&=& \\left(m-\\frac{\\theta B}{2}m'\\right)\\left(\\frac{r_{+}}{l}dt-r_{-}d\\phi\\right)+\\mathcal{O}(\\theta^2),\n\\nonumber \\\\\n\\hat{e}^{1}&=& l \\left[\\frac{m'}{n}-\\frac{\\theta B}{2}\\left(\\frac{m'}{n}\\right)'\\right]dR+\\mathcal{O}(\\theta^2),\n\\nonumber \\\\\n\\hat{e}^{2}&=& \\left(n-\\frac{\\theta B}{2}n'\\right)\\left(r_{+}d\\phi-\\frac{r_{-}}{l}dt\\right)+\\mathcal{O}(\\theta^2),\n\\\\\n\\label{ncspinc}\n\\hat{\\omega}^{0}&=& -\\frac{1}{l}\\left(m-\\frac{\\theta B}{2}m'\\right) \\left(r_{+}d\\phi-\\frac{r_{-}}{l}\\right)+\\mathcal{O}(\\theta^2),\n\\nonumber \\\\\n\\hat{\\omega}^{1}&=&\\mathcal{O}(\\theta^2),\n\\nonumber \\\\\n\\hat{\\omega}^{2}&=&-\\frac{1}{l} \\left(n-\\frac{\\theta B}{2}n'\\right)\n\\left( \\frac{r_{+}}{l}dt-r_{-}d\\phi\\right)+\\mathcal{O}(\\theta^2), \\nonumber\n\\end{eqnarray}\nwhere ${}'$ denotes the differentiation with respect to $R=r^2$.\nIt should be noted that in the polar coordinates we get a real metric,\n$\\hat{e}_{\\mu} \\star \\hat{e}_{\\nu}=\\hat{e}_{\\mu}\\hat{e}_{\\nu}$.\nRewriting $R$ back to $r^2$, we get\n\\begin{eqnarray}\n\\label{ncmetric}\nd\\hat{s}^2=-f^2dt^2+\\hat{N}^{-2}dr^2+2r^2 N^{\\phi}dtd\\phi\n+\\left(r^2+\\frac{\\theta B}{2}\\right)d\\phi^2+\\mathcal{O}(\\theta^2),\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\nN^{\\phi}&=&-r_{+}r_{-}\/lr^2, \\\\\nf^2&=&\\frac{(r^2-r_{+}^2-r_{-}^2)}{l^2}-\\frac{\\theta B}{2l^2}, \\\\\n\\hat{N}^2&=&\\frac{1}{l^2 r^2}\\left[ (r^2-r_{+}^2)(r^2-r_{-}^2)\n-\\frac{\\theta B}{2}\\left(2r^2-r_{+}^2-r_{-}^2\\right)\\right].\n\\end{eqnarray}\nIn this solution, the apparent and Killing horizons denoted as $\n\\hat{r}$ and $\\tilde{r}$, respectively, are given by:\n\\begin{eqnarray}\n\\label{apparenth}\n\\hat{r}_{\\pm}^{2}&=&r_{\\pm}^{2}+\\frac{\\theta B}{2}+\\mathcal{O}(\\theta^2),\\\\\n\\label{killingh}\n\\tilde{r}_{\\pm}^2&=&r_{\\pm}^2 \\pm \\frac{\\theta B}{2}\n\\left(\\frac{r_{+}^2+r_{-}^2}{r_{+}^2-r_{-}^2}\\right)+\\mathcal{O}(\\theta^2).\n\\end{eqnarray}\nUnlike the rectangular case,\nthe apparent and the Killing horizons in this case do not coincide.\nNote that the outer horizons coincide only in the non-rotating limit in which\n the inner horizon of the commutative solution vanishes($r_{-}=0$).\n\n\n\n\\section{Conical solution on $AdS_3$}\n\\label{secCON}\n\nIn this section\nwe first reobtain the noncommutative conical solution\nin the rectangular coordinates and check it with the\npreviously obtained one in \\cite{Pinzul:2005ta}.\nThen, we repeat the analysis in the polar coordinates\nand compare the two results.\n\n\n\\subsection{Rectangular coordinates}\n\n\nWe begin with a nonsingular conical metric on $AdS_3$ in the polar coordinates\n$(t,r,\\phi)$ \\cite{Pinzul:2005ta},\n\\begin{eqnarray}\n\\label{ccads}\nds^2=H^{-2}\\left[-(2-H)^2(dt+Jd\\phi)^2+(1-M)^2 
r^2d\\phi^2+dr^2\\right],\n\\end{eqnarray}\nwhere $M$, $J$ are mass and angular momentum of the source\n respectively, and $H=(1-r^2\/4l^2)$.\nThe above metric can be transformed to the rectangular coordinates and\nthe corresponding triad and spin connection in the rectangular coordinates are\ngiven by\n\\begin{eqnarray}\n\\label{cadstriad}\ne^{0} &=& \\frac{2-H}{H}[ dt-\\frac{J}{r^2}(y dx-xdy)], \\nonumber \\\\\ne^{1} &=& \\frac{1}{H}\\left[ \\left(1-\\frac{M y^2}{r^2}\\right)dx+\\frac{Mxy}{r^2}dy\\right], \\nonumber \\\\\ne^{2} &=& \\frac{1}{H}\\left[ \\frac{Mxy}{r^2}dx+\\left(1-\\frac{Mx^2}{r^2}\\right)dy\\right], \\\\\n\\label{cadsspin}\n\\omega^{0} &=& \\frac{(2-M)H-2(1-M)}{H r^2}(xdy-ydx), \\nonumber \\\\\n\\omega^{1} &=& \\frac{y}{l^2 H}\\left[ dt-\\frac{J}{r^2}(ydx-xdy)\\right], \\nonumber \\\\\n\\omega^{2} &=& -\\frac{x}{l^2 H}\\left[dt-\\frac{J}{r^2}(ydx-xdy)\\right]. \\nonumber\n\\end{eqnarray}\n\nAs in the previous subsection we consider the same commutative $U(1,1) \\times U(1,1)$ gauge fields.\nAfter applying the Seiberg-Witten map we get\n$\\mathcal{A'}_{\\mu}^{\\pm}$ as follows.\n\\begin{eqnarray}\n\\mathcal{A'}_{t}^{\\pm}\n&=& \\frac{i \\theta}{8l^3 H^2}\n\\left(\n \\begin{array}{cc}\n \\mp(B+2) & -Bl(2-H)e^{-i\\phi}\/r \\\\\n Bl(2-H)e^{-i\\phi}\/r & \\pm(B-2) \\\\\n \\end{array}\n \\right),\n \\nonumber \\\\\n\\mathcal{A'}_{x}^{\\pm}\n&=& \\frac{i\\theta}{8l^2r^3H^2}\n\\left(\n \\begin{array}{cc}\n -ryu^{\\pm}_{B} & \\pm iB v^{\\pm} \\\\\n \\mp iB \\bar{v}^{\\pm} & -ryu^{\\pm}_{-B}\\\\\n \\end{array}\n \\right),\n \\nonumber \\\\\n\\mathcal{A'}_{y}^{\\pm}\n&=& \\frac{i\\theta}{8l^2r^3H^2}\n\\left(\n \\begin{array}{cc}\n rxu^{\\pm}_{B} & \\pm iB h^{\\pm} \\\\\n \\mp iB \\bar{h}^{\\pm} & -rxu^{\\pm}_{-B}\\\\\n \\end{array}\n \\right),\n \\end{eqnarray}\nwhere\n\\begin{eqnarray}\nu^{\\pm}_{B}&=&[(M+B)l\\pm J]^2-2(1-H)[\nJ^2\\pm 2(M+B+1)Jl+(B^2+2MB+M(M+2)l^2)] \\nonumber \\\\\n&& +(1-H)^2[(M-B-2)l \\pm J]^2, \\nonumber \\\\\nv^{\\pm} &=& lx(2-H)+iy(Ml-l\\pm J)(3H-2), \\nonumber \\\\\nh^{\\pm} &=& ly(2-H)-ix(Ml-l\\pm J)(3H-2).\n\\end{eqnarray}\nUsing the same relations between the gauge fields and the triad and spin connection given\n in the previous section,\nwe obtain the noncommutative triad and spin connection up to\nfirst order in $\\theta$ as follows.\n\\begin{eqnarray}\n\\label{nctrirectang}\n\\hat{e}^{0} &=& \\frac{2-H}{H}\\left[\\left(1-\\frac{\\theta B}{4l^2 H(2-H)}\\right)dt-\\frac{J}{r^2}\n\\left(1-\\frac{\\theta B}{2r^2}\\right)\n(y dx-xdy)\\right], \\nonumber \\\\\n\\hat{e}^{1} &=& \\frac{1}{r^2 H} \\left[Mxy -\\frac{\\theta B}{16l^2 H}\\left(\n3(M-1)y^2+\\frac{4(M-2)l^2y^2}{r^2}+x^2+4l^2\\right)\\right]dx \\nonumber \\\\\n&& + \\frac{1}{H}\\left[\n\\left(1-\\frac{Mx^2}{r^2}\\right)+\\frac{\\theta B xy}{16l^2r^4 H} \\left(\n(2-3M)r^2+4(M-2)l^2\\right)\\right]dy,\n\\nonumber \\\\\n\\hat{e}^{2} &=& \\frac{1}{H}\\left[\n\\left(1-\\frac{Mx^2}{r^2}\\right)+\\frac{\\theta B xy}{16l^2r^4 H} \\left(\n(2-3M)r^2+4(M-2)l^2\\right)\\right]dx \\nonumber \\\\\n&& +\\frac{1}{r^2 H} \\left[Mxy +\\frac{\\theta B}{16l^2 H}\\left(\n3(M-1)x^2-\\frac{4(M-2)l^2x^2}{r^2}-y^2+4l^2\\right)\\right]dy, \\\\\n\\label{ncadsspin}\n\\hat{\\omega}^{0} &=& \\frac{1}{H r^2}\n\\left[(2-M)H-2(1-M)-\\frac{\\theta B}{2r^2}[2(M-1)-(M+2)H]\\right](ydx-xdy), \\nonumber \\\\\n\\hat{\\omega}^{1} &=& \\frac{y}{l^2 H}\\left[1-\\frac{\\theta B (2-H)}{4r^2 H}\\right]dt\n-\\frac{Jy}{l^2 r^2 H}\\left[1-\\frac{\\theta B(2-3H)}{4r^2 H}\\right](ydx-xdy), \\nonumber \\\\\n\\hat{\\omega}^{2} &=& -\\frac{x}{l^2 
H}\\left[1-\\frac{\\theta B (2-H)}{4r^2 H}\\right]dt\n+\\frac{Jx}{l^2 r^2 H}\\left[1-\\frac{\\theta B(2-3H)}{4r^2 H}\\right](ydx-xdy). \\nonumber\n\\end{eqnarray}\n\nNow, the length element of this solution becomes\\footnote{\nOur conical solution differs from the result\nobtained in \\cite{Pinzul:2005ta} in one respect, in the use of\n gauge parameter: We use $\\hat g = \\hat g(g, A)_{B\\neq 0}$ with nonzero flux\n while in \\cite{Pinzul:2005ta}\n they used $\\hat g = \\hat g(g, A)_{B=0}$ with zero flux.}\n\\begin{eqnarray}\nd\\hat{s}^2\n&=&-\\left( \\frac{2-H}{H}\\right)^2 \\left[ 1-\\frac{\\theta B}{2l^2 H(2-H)}\n\\right] dt^2+ H^{-2} \\left[ 1-\\frac{\\theta B}{2r^2} \\left(\\frac{2-H}{H}\\right)\n\\right]dr^2 \\nonumber \\\\\n&& -2J\\left( \\frac{2-H}{H}\\right)^2 \\left[ 1+\\frac{\\theta B}{2r^2}\n\\left(1-\\frac{r^2}{l^2 H(2-H)}\\right)\\right]dtd\\phi \\nonumber \\\\\n&& +\\frac{1}{H^2} \\Bigg[\n[(M-1)^2r^2-J^2(2-H)^2]]\n\\nonumber \\\\\n&& -\\frac{\\theta B}{2r^2 H} [\n2J^2(H^3-6H^2+10H-4)-(M-1)^2r^2(3H-2)]\\Bigg]d\\phi^2.\n\\end{eqnarray}\nThe above solution is not a black hole solution.\nHowever, in order to compare the effect of noncommutativity in different coordinate\nsystems,\nwe again consider the same quantities used to evaluate the two horizons,\napparent and Killing horizons in the BTZ black hole case, now denoted as\n $\\hat r_{A}$ and $\\tilde r_{K}$.\n From the same determining relations,\n $\\hat{G}^{rr}=\\hat{G}_{rr}^{-1}=0$ and\n $ \\hat{\\chi}^2=\\hat{G}_{tt}-\\hat{G}_{t\\phi}^2\/\\hat{G}_{\\phi\\phi}=0$\n for $\\hat r_{A}$ and $\\tilde r_{K}$ respectively,\nwe get\n\\begin{eqnarray}\n\\label{apparenth}\n{\\hat{r}_{A}}^{2}&=&4l^2,\\\\\n\\label{killingh}\n \\tilde r_{K}^2&=&0,\n\\end{eqnarray}\nup to first order in $\\theta$.\nThe values obtained above coincide with the values in the commutative case.\nWe consider that\nthis matches with the feature appeared in the BTZ solution of the rectangular coordinates\ngiven in section 3.1.\nThere the apparent and Killing horizons coincide\nin the noncommutative case just as in the commutative case.\n\n\n\\subsection{Polar coordinates }\n\nNow we do the same analysis in the polar coordinates using $R \\equiv r^2$.\n The length element (\\ref{ccads}) can be written in the $(t,R,\\phi)$ coordinates as\n\\begin{eqnarray}\n\\label{ccadspol}\nds^2=H^{-2}\\left[-(2-H)^2(dt+Jd\\phi)^2+(1-M)^2 R d\\phi^2+\\frac{dR^2}{4 R}\\right].\n\\end{eqnarray}\nThen the triad and spin connection are given by\n\\begin{eqnarray}\n\\label{cadstriad}\ne^{0} &=& \\frac{2-H}{H}(dt+J d\\phi), \\nonumber \\\\\ne^{1} &=& \\frac{1}{H}\\left[ \\frac{\\cos\\phi}{2\\sqrt{R}}dR-(1-M)\\sqrt{R}\\sin\\phi d\\phi\\right]\n, \\nonumber \\\\\ne^{2} &=& \\frac{1}{H}\\left[ \\frac{\\sin\\phi}{2\\sqrt{R}}dR+(1-M)\\sqrt{R}\\cos\\phi d\\phi\\right] ,\n\\\\\n\\label{cadsspin}\n\\omega^{0} &=& \\frac{1}{H}[(2-M)H-2(1-M)]d\\phi, \\nonumber \\\\\n\\omega^{1} &=& \\frac{\\sqrt{R}\\sin\\phi}{l^2 H}(dt+J d\\phi), \\nonumber \\\\\n\\omega^{2} &=& -\\frac{\\sqrt{R}\\cos\\phi}{l^2 H}(dt+J d\\phi). 
\\nonumber\n\\end{eqnarray}\n\n\nWe consider the same $U(1)$ fluxes $B_{\\mu}^{\\pm}=B d\\phi ~$ with constant $B$.\nThen, the noncommutative solution\n($\\mathcal{\\hat{A}}^{\\pm}=\\mathcal{A}^{\\pm}_{\\mu}+\\mathcal{A'}_{\\mu}^{\\pm}$)\nis given by\n\\begin{eqnarray}\n\\mathcal{A'}_{t}^{\\pm} &=& \\mp\\frac{i \\theta}{8l^3H^2}\n\\left(\n \\begin{array}{cc}\n 2+B & \\pm Bl(2-H)e^{-i\\phi}\/\\sqrt{R} \\\\\n \\mp Bl(2-H)e^{i\\phi}\/\\sqrt{R} & 2-B \\\\\n \\end{array}\n\\right),\n\\nonumber \\\\\n\\mathcal{A'}_{R}^{\\pm} &=& \\pm\\frac{\\theta B(3H-2)}{16lH^2 R^{3\/2}}\n\\left(\n \\begin{array}{cc}\n 0 & e^{-i\\phi} \\\\\n e^{i\\phi} & 0 \\\\\n \\end{array}\n\\right),\n\\nonumber \\\\\n\\mathcal{A'}_{\\phi}^{\\pm} &=& \\frac{i \\theta(l-Ml\\mp J)}{8l^3H^2}\n\\left(\n \\begin{array}{cc}\n 2+B & \\pm Bl(2-H)e^{-i\\phi}\/\\sqrt{R} \\\\\n \\mp Bl(2-H)e^{i\\phi}\/\\sqrt{R} & 2-B \\\\\n \\end{array}\n\\right).\n\\end{eqnarray}\nThen using the same relations between the gauge fields and the triad and spin connection\ngiven in the previous section,\n the noncommutative triad and spin connection are given by\n\\begin{eqnarray}\n\\label{nctriad}\n\\hat{e}^{0}&=& \\frac{H(2-H)-\\theta B\/4l^2}{H^2}(dt+J d\\phi),\n\\nonumber \\\\\n\\hat{e}^{1}&=& \\frac{\\cos\\phi}{2\\sqrt{R}H}\\left[ 1+\\frac{\\theta B}{4R}\n\\left(\\frac{3H-2}{H}\\right)\\right]dR-\\frac{(1-M)\\sqrt{R}\\sin\\phi}{H}\\left[\n1-\\frac{\\theta B}{4R}\\left(\\frac{2-H}{H}\\right)\\right]d\\phi,\n\\nonumber \\\\\n\\hat{e}^{2}&=& \\frac{\\sin\\phi}{2\\sqrt{R}H}\\left[ 1+\\frac{\\theta B}{4R}\n\\left(\\frac{3H-2}{H}\\right)\\right]dR+\\frac{(1-M)\\sqrt{R}\\cos\\phi}{H}\\left[\n1-\\frac{\\theta B}{4R}\\left(\\frac{2-H}{H}\\right)\\right]d\\phi, \\nonumber\n\\\\\n\\label{ncspinc}\n\\hat{\\omega}^{0}&=& \\frac{1}{H}\\left[\n(2-M)H-2(1-M)+\\frac{\\theta B (1-M)}{4l^2 H}\\right]d\\phi,\n\\nonumber \\\\\n\\hat{\\omega}^{1}&=& \\frac{\\sqrt{R}\\sin\\phi}{l^2H}\\left[\n1-\\frac{\\theta B}{4R}\\left(\\frac{2-H}{H}\\right)\\right](dt+Jd\\phi),\n\\nonumber \\\\\n\\hat{\\omega}^{2}&=& -\\frac{\\sqrt{R}\\cos\\phi}{l^2H}\\left[\n1-\\frac{\\theta B}{4R}\\left(\\frac{2-H}{H}\\right)\\right](dt+Jd\\phi).\n\\end{eqnarray}\n\n The noncommutative length element defined in the same way as in the previous section\n is given by in terms of $r$ as follows.\n\\begin{eqnarray}\n\\label{ncadsmetric}\nd\\hat{s}^2\n&=& -\\hat{\\mathcal{F}}^2 dt^2+ \\hat{\\mathcal{N}}^{-2}dr^2-2J\\hat{\\mathcal{F}}^2dtd\\phi\n\\nonumber \\\\\n&&+\\frac{(1-M)^2r^2-J^2(2-H)^2}{H^2}\n\\left[ 1-\\frac{\\theta B}{2l^2}\\left(\\frac{2-H}{H}\\right)\\frac{(1-M)^2l^2-J^2}{(1-M)^2r^2-J^2(2-H)^2}\n\\right]d\\phi^2+\\mathcal{O}(\\theta^2), \\nonumber \\\\\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n\\hat{\\mathcal{F}}^2 &=& \\left(\\frac{2-H}{H}\\right)^2 \\left[1-\\frac{\\theta B}{2l^2}\\frac{1}{H(2-H)}\\right], \\nonumber \\\\\n\\hat{\\mathcal{N}}^2 &=& H^2 \\left[1-\\frac{\\theta B}{2r^2}\n\\left(\\frac{3H-2}{H}\\right)\\right].\n\\end{eqnarray}\n\nHere we again consider the same quantities $\\hat r_{A}$ and $\\tilde r_{K}$\ndefined in the previous subsection to investigate the effect of\nnoncommutativity in different coordinate systems.\nNow they are given by\n\\begin{eqnarray}\n\\hat r_{A}^{2}&=&4l^{2}+\\mathcal{O}(\\theta^2),\n\\\\\n\\tilde r_{K}^2&=&\\frac{\\theta B}{4}+\\mathcal{O}(\\theta^2).\n\\end{eqnarray}\nUnlike the rectangular case in the previous subsection in which\nboth $\\hat r_{A}$ and $\\tilde r_{K}$ coincide with the classical values,\nhere only $\\hat r_{A}$ coincides with 
the classical value $r_A=2l$.\nThe quantity $\\tilde r_{K}$, which would correspond to the Killing horizon of\na black hole, does not coincide with the classical value $r_K=0$.\nHowever, in the non-rotating limit ($J=0$), the solution for $\\tilde{r}_K$\ndoes not exist, and this feature agrees with that of the commutative case,\nin which the solution for $r_K$ does not exist either.\nThus we see that the same pattern holds in the polar coordinates as in the BTZ case, namely\nthat in the non-rotating limit the same feature appears in both the commutative and noncommutative cases.\n\n\n\n\\section{Discussion}\n\\label{disscuss}\n\n\nIn this paper, in order to investigate the non-exact equivalence\nbetween noncommutative coordinate systems,\nwe obtain\na noncommutative BTZ black hole solution\nin the canonical rectangular coordinates via the Seiberg-Witten map,\nand compare it with the previously obtained result\nin the noncommutative polar coordinates \\cite{EDY:20081}.\nWe repeat the same analysis for the conical solution\nin noncommutative $AdS_3$ using the same action\nto see whether there exists any\nsimilarity between the two cases.\n\n\n\nWhat we have learned can be illustrated as follows:\n\\vspace{0.1cm}\n\\begin{center}\n\\mbox{\\large \\xymatrix{ \\mathcal{A}(r,\\phi) \\ar[dd]_{[\\hat r,\\hat \\phi]=i\\tilde{\\theta}~~~}^{I}\n\\ar[rrrr]^{II}\n& & & &\\mathcal{B}(x,y) \\ar[dd]_{III}^{~~~[\\hat x,\\hat y]=i\\theta}\n\\\\\n\\\\\n\\hat{\\mathcal{A}}(r,\\phi) & & & &\\ar[llll]^{IV} \\hat{\\mathcal{B}}(x,y)&, } }\n\\end{center}\n\\vspace{0.1cm}\nwhere $\\mathcal{B}(x,y)\\equiv \\mathcal{A}[r(x,y),\\phi(x,y)]$,\nthe maps $II$, $IV$ are the coordinate transformations\n$(x,y)\\leftrightarrow (r,\\phi)$ in a commutative space,\nand the maps $I$, $III$ denote the corresponding Seiberg-Witten maps.\nFor a function $\\mathcal{A}(r,\\phi)$,\nfor example the BTZ black hole solution of Carlip {\\it et al.} in the polar coordinates \\cite{Carlip:1994hq},\nwe have two different routes for obtaining\nSeiberg-Witten solutions $\\hat{\\mathcal{A}}(\\mathcal{A})$,\nvia $I$ or via $II\\rightarrow III\\rightarrow IV$.\n From the observation of Eq. \\eqref{diffSW} in section 2,\nwe know that the two solutions obtained via the different routes\nwould be different,\ni.e. $\\hat{\\mathcal{A}}(r,\\phi) \\neq \\hat{\\mathcal{B}}[x(r,\\phi),y(r,\\phi)]$,\nsince the transformation\n$(x,y)\\leftrightarrow (r,\\phi)$ is not linear.\nThe results in sections 3 and 4 support this observation.\n\n\nAnother lesson comes from the following observations.\n1) In the rectangular coordinates, the feature that appears in\nthe solution of the commutative case remains intact in\nthe noncommutative case: In the BTZ case, both apparent and\nKilling horizons coincide. 
In the conical solution, the commutative\nand the noncommutative results are the same.\n2) In the polar coordinates, the feature that appears in the\ncommutative case is not maintained in the noncommutative case:\nIn the BTZ case, the apparent and Killing horizons do not coincide.\nIn the conical solution, the commutative\nand the noncommutative results do not agree.\nHowever, in the non-rotating limit the feature that appears in\nthe commutative case is maintained in the noncommutative case:\nIn the BTZ case, the apparent and Killing horizons do coincide.\nIn the conical solution case, the commutative\nand noncommutative results agree.\n\nThus we are left with the task of understanding the differing behaviors\nin the polar coordinates.\nOur understanding is as follows.\nIn the BTZ case, the Killing vector\nwhich determines the Killing horizon\ndepends on the translation generator along the $\\hat{\\phi}$ direction,\nwhile the apparent horizon is determined by the null vector given by the translation\ngenerator along the radial $\\hat{r}$ direction.\nHence in the rotating case the relation between the two horizons is affected by the noncommutativity\nbetween the two coordinates $(\\hat{r}, \\hat{\\phi})$, and will differ from the commutative case.\nThe two horizons will not coincide.\nIn the non-rotating case, the Killing vector does\nnot depend on the translation generator along the $\\hat{\\phi}$ direction,\nand thus no effect of the noncommutativity between $(\\hat{r}, \\hat{\\phi})$ enters, resulting in\nthe same relation as in the commutative case.\nIn the conical solution case, since we used the same defining relations for $\\hat{r}$ and $\\tilde{r}$\nas in the BTZ case,\nwe expect the same behavior.\n\nIn the rectangular coordinates, the above noncommutative effect does not enter,\nsince we apply the above operation (obtaining the solutions for $\\hat{r}$ and $\\tilde{r}$)\nto the result obtained by a commutative coordinate transformation after the\nSeiberg-Witten map, thus wiping out the noncommutative characteristics.\nNote that the result obtained in the rectangular coordinates for the BTZ case differs\nfrom the commutative result. However, the feature that the apparent and Killing horizons\ncoincide remains the same as in the commutative case.\nNamely, we simply obtained a geometry that differs from the commutative one\ndue to the noncommutative effect of the Seiberg-Witten map. 
However, the\nnoncommutative effect in getting the solution of $\\hat{r}$ and $ \\tilde{r}$\nwas lost.\n\nThus as it was pointed out in \\cite{ag06} that\nthe conventional sense of diffeomorphism is not invariant\nin noncommutative theory, we\nbetter use the same coordinate system throughout the process of\nsolution finding, matching the coordinate system such that\nthe operational meaning of noncommutativity can be kept.\nFor instance, the commutation relation $[\\hat x,\\hat y]=i\\theta$\nhas translational symmetry,\nwhile the commutation relation $[{\\hat r}^2,\\hat \\phi]=2i\\theta$\nhas rotational symmetry.\nSo if we use $[{\\hat r}^2,\\hat \\phi]=2i\\theta$ instead of\n$[\\hat x,\\hat y]=i\\theta$, this means that\nwe choose the rotational symmetry (translational symmetry along $\\phi$ direction)\nat the cost of the translational symmetry along the x and y directions.\nWe consider this as the underlying reason for the\ndifferences in the results obtained in the paper.\n\n\n\n\n\n\n\\section*{Acknowledgments}\nThis work was supported by the Korea Science and Engineering Foundation(KOSEF) grant\nfunded by the Korea government(MEST), R01-2008-000-21026-0(E. C.-Y. and D. L.),\nand by the Korea Research Foundation grant funded by\nthe Korea Government(MEST), KRF-2008-314-C00063(Y. L.).\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nA ubiquitous model in information-theoretic secrecy is the Shannon cipher system \\cite{Shannon1949} in which two nodes who share secret key want to communicate losslessly in the presence of an eavesdropper. As depicted in Figure~\\ref{fig:scs}, Node A views an i.i.d. source sequence $X^n$ and uses the shared secret key $K$ that is independent of the source to produce an encrypted message $M$. Node B uses the message and the key to produce $\\hat{X}^n$. An eavesdropper views the message and knows the scheme that Nodes A and B employ.\n\nAlso ubiquitous is the investigation of how to measure secrecy when there is not enough key to ensure perfect secrecy, i.e. when the key rate is less than the entropy of the information source. One potential solution, proposed by Yamamoto in \\cite{Yamamoto1997}, is to measure secrecy by the distortion that an eavesdropper incurs in attempting to reconstruct the source sequence. In accordance with the usual constructs in rate-distortion theory, this means that Nodes A and B want to maximize the following expression over all possible codes:\n\\begin{equation}\n\\min_{z^n(m)} \\mathbb{P}[d(X^n,z^n(M)) \\geq D].\n\\end{equation}\nAlthough this seems like a reasonable objective at first glance, it was shown in \\cite{Schieler2013} that simple codes employing negligible rates of secret key can force this probability to one, regardless of the distortion level $D$. The reason for this disconcerting result is that the accompanying secrecy guarantees can be fragile, as the following example elucidates. Let $X^n$ be i.i.d. $\\text{Bern}(1\/2)$ and suppose that there is just one bit of secret key, i.e. $K\\in\\{0,1\\}$. Encrypt by transmitting $X^n$ itself if $K=0$ and $X^n$ with all its bits flipped if $K=1$. In this scenario, any optimal reconstruction $Z^n$ that the eavesdropper produces has expected hamming distortion equal to $1\/2$, the highest expected distortion that the eavesdropper could possibly incur. Despite this, the eavesdropper actually knows quite a bit about $X^n$, namely that it is one of two sequences. 
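Both parts of this statement can be made explicit (assuming, as the distortion value $1\/2$ above suggests, the normalized Hamming distortion). Conditioned on the transmitted message $M=m$, the source sequence is either $m$ or its bitwise complement $\\bar{m}$, each with probability $1\/2$; hence, for any estimate $z^n$,\n\\begin{equation}\n\\mathbb{E}[d(X^n,z^n)|M=m]=\\frac{1}{2}d(m,z^n)+\\frac{1}{2}d(\\bar{m},z^n)=\\frac{1}{2},\n\\end{equation}\nsince $d(m,z^n)+d(\\bar{m},z^n)=1$ for every $z^n$, while the equivocation is only $H(X^n|M)=1$ bit. 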
Indeed, the guarantee of secrecy is rather fragile because if the eavesdropper learns just one bit of the source sequence, then the entire sequence is compromised. \n\nIn view of the previous example, one way to strengthen a distortion-based measure of secrecy is to design schemes around the assumption that the eavesdropper has access to some side information. In \\cite{Schieler2013}, this is accomplished by supposing that eavesdropper views the causal behavior of the system; in particular, the eavesdropper reconstructs $Z_i$ based on $X^{i-1}$ and the public message $M$.\n\n\\begin{figure}\n\\begin{center}\n\\begin{tikzpicture}\n [node distance=1cm,minimum width=1cm,minimum height =.75 cm]\n \\node[rectangle,minimum width=5mm] (source) {$X^n$};\n \\node[node] (alice) [right =7mm of source] {A};\n \\node[node] (bob) [right =3cm of alice] {B};\n \\node[coordinate] (dummy) at ($(alice.east)!0.5!(bob.west)$) {};\n \\node[rectangle,minimum width=5mm] (xhat) [right =7mm of bob] {$\\widehat{X}^n$};\n \\node[rectangle,minimum width=7mm] (key) [above =7mm of dummy] {$K\\in[2^{nR_0}]$};\n \\node[node] (eve) [below =3mm of bob] {Eve};\n \n \\draw [arw] (source) to (alice);\n \\draw [arw] (alice) to node[minimum height=6mm,inner sep=0pt,midway,above]{$M\\in[2^{nR}]$} (bob);\n \\draw [arw] (bob) to (xhat);\n \\draw [arw] (key) to [out=180,in=90] (alice);\n \\draw [arw] (key) to [out=0,in=90] (bob);\n \\draw [arw,rounded corners] (dummy) |- (eve);\n \\end{tikzpicture}\n \\caption{\\small The Shannon cipher system with secret key rate $R_0$ and communication rate $R$. In this paper, we measure secrecy by the minimum distortion in a list of reconstruction sequences $\\{Z^n(1),\\ldots,Z^n(2^{nR_{\\sf L}})\\}$ that the eavesdropper produces.}\n \\label{fig:scs}\n \\end{center}\n\n \\end{figure}\n\nIn this paper, we study another distortion-based approach to measuring secrecy in the Shannon cipher system. Instead of requiring a single reconstruction sequence $Z^n$, we suppose that the eavesdropper produces a list of $2^{nR_{\\sf L}}$ reconstructions $\\{Z^n(1),\\ldots,Z^n(2^{nR_{\\sf L}})\\}$ and consider the minimum distortion over the entire list. This is somewhat reminiscent of equivocation (i.e., the conditional entropy $H(X^n|M)$), which also purports to measure the uncertainty of the eavesdropper. However, an important difference in the measure we study is that the structure of the uncertainty is built directly into the definition. The eavesdropper's equivocation merely provides a lower bound on the size of the smallest list that contains the exact source sequence $X^n$. On the other hand, the optimal tradeoff between secret key rate, distortion, and list rate will give us a function $R_{\\sf L}(R_0,D)$ that precisely quantifies the size of the smallest list that an eavesdropper is able to produce that reliably contains a sequence of distortion $D$. \n\nQuantifying secrecy in terms of lists and distortion has been done previously in \\cite{Merhav1999} and \\cite{Haroutunian2010}, where the eavesdropper is modeled as a ``guessing wiretapper\" who produces a sequence of reconstructions. 
After each estimate, the eavesdropper receives feedback about whether or not the reconstruction was within a certain distortion level.\\footnote{In \\cite{Merhav1999}, the feedback concerns exact reconstructions, whereas \\cite{Haroutunian2010} allows a distortion parameter.} As soon as the distortion level is reached, the eavesdropper stops guessing; the moments of the number of guesses needed indicate the secrecy of the system. Our approach differs from these works in that there is no sequential guessing (no testing mechanism) and the list size is fixed.\n\\subsection*{Organization}\nThis paper considers the list-reconstruction measure of secrecy and establishes the information-theoretic characterization of the optimal tradeoffs among the secret key rate, list rate, and distortion at the eavesdropper. We divide the paper into two parts. First, we introduce and solve the problem when lossless communication is required between the legitimate parties (Sections~\\ref{sec:setup}--\\ref{sec:losslessachievability}). We then introduce the lossy communication setting and solve the corresponding problem (Sections~\\ref{sec:mainlossy}--\\ref{sec:lossyachievability}), reusing components from the preceding sections where possible. Although the lossy communication setting is a generalization of the lossless setting, there are several complications and subtleties that emerge that warrant the separation. For example, the converse proof is much more involved in the lossy setting.\n\nIn Section~\\ref{sec:setup}, we formally define the list-based measure of secrecy and the lossless communication setting in which it will be first be analyzed. We also give an equivalent reformulation of the setting in terms of a malicious helper for the eavesdropper; the resulting ``henchman problem\" becomes the default formulation for the remainder of the paper. Section~\\ref{sec:mainlossless} contains Theorem~\\ref{mainresult}, the characterization of the optimal tradeoffs in the lossless communication setting. The proof of Theorem~\\ref{mainresult} is presented in Section~\\ref{sec:losslessconverse} (converse) and Section~\\ref{sec:losslessachievability} (achievability). In Section~\\ref{sec:mainlossy}, we introduce the lossy communication version of the problem and characterize the optimal tradeoffs in Theorem~\\ref{mainresultlossy}. The converse and achievability proofs of Theorem~\\ref{mainresultlossy} are given in Sections~\\ref{sec:lossyconverse} and \\ref{sec:lossyachievability}, respectively.\n\nIn addition to being a treatment of a new measure of secrecy for the Shannon cipher system, this paper is an endorsement of the efficacy of a likelihood encoder for proving source coding results. As detailed in \\cite{Song2014}, a likelihood encoder is a particular stochastic encoder which, when combined with a random codebook, manages to avoid many of the tedious and technical components of achievability proofs in lossy compression problems. The primary conduit for the analysis of a likelihood encoder is the ``soft covering lemma\", which is expounded upon in \\cite{Cuff2013}. In our case, the technique allows us to extract an idealized subproblem from the crucial part of the achievability proof and consider it independently of the original problem. The subproblem concerns the lossy compression of a codeword drawn uniformly from a random codebook.\n\n\\section{Preliminaries}\n\\label{sec:setup}\n\\subsection{Notation}\nAll alphabets (e.g., $\\mathcal{X}$, $\\mathcal{Y}$, and $\\mathcal{Z}$) are finite. 
The set $\\{1,\\ldots,m\\}$ is sometimes denoted by $[m]$. Given a per-letter distortion measure $d(x,z)$, we abuse notation slightly by defining\n\\begin{equation}\nd(x^n,z^n) \\triangleq \\frac1n \\sum_{i=1}^n d(x_i,z_i).\n\\end{equation}\nWe also assume that for every $x\\in\\mathcal{X}$, there exists $z\\in\\mathcal{Z}$ such that $d(x,z)=0$. \n\nWe denote the empirical distribution (or type) of a sequence $x^n$ by $T_{x^n}$:\n\\begin{equation}\nT_{x^n}(x) = \\frac1n \\sum_{i=1}^n \\mathbf{1}\\{x = x_i\\}.\n\\end{equation}\n\n\n\\subsection{Total variation distance}\nThroughout the paper, we make frequent use of the total variation distance between two probability measures $P$ and $Q$ with common alphabet, defined by\n\\begin{equation}\n\\lVert P - Q \\rVert_{\\sf TV} \\triangleq \\sup_{A\\in\\mathcal{F}} |P(A) - Q(A)|. \n\\end{equation}\nThe following properties of total variation distance are quite useful.\n\\begin{property}\n\\label{tvproperties}\nTotal variation distance satisfies:\n\\begin{enumerate}[(a)]\n\\item If the support of $P$ and $Q$ is a countable set $\\mathcal{X}$, then\n\\begin{equation}\n\\lVert P - Q \\rVert_{\\sf TV} = \\frac12 \\sum_{x\\in\\mathcal{X}} |P(\\{x\\})-Q(\\{x\\})|.\n\\end{equation}\n\\item Let $\\varepsilon>0$ and let $f(x)$ be a function with bounded range of width $b>0$. Then\n\\begin{equation}\n\\label{tvcontinuous}\n\\lVert P-Q \\rVert_{\\sf TV} < \\varepsilon \\:\\Longrightarrow\\: \\big| \\mathbb{E}_Pf(X) - \\mathbb{E}_Qf(X) \\big | < \\varepsilon b,\n\\end{equation}\nwhere $\\mathbb{E}_{P}$ indicates that the expectation is taken with respect to the distribution $P$.\n\\item Let $P_{X}P_{Y|X}$ and $Q_XP_{Y|X}$ be two joint distributions with common channel $P_{Y|X}$. Then\n\\begin{equation}\n\\lVert P_XP_{Y|X} - Q_X P_{Y|X} \\rVert_{\\sf TV} = \\lVert P_X - Q_X \\rVert_{\\sf TV}.\n\\end{equation}\n\\item Let $P_X$ and $Q_X$ be marginal distributions of $P_{XY}$ and $Q_{XY}$. Then\n\\begin{equation}\n\\lVert P_X - Q_X \\rVert_{\\sf TV} \\leq \\lVert P_{XY} - Q_{XY} \\rVert_{\\sf TV}.\n\\end{equation}\n\\end{enumerate}\n\\end{property}\n\n\\subsection{Problem setup}\nAs shown in Figure~\\ref{fig:scs}, Node A observes a source sequence $X^n$ that is i.i.d. according to a distribution $P_X$. Nodes A and B share common randomness $K\\in[2^{nR_0}]$ that is uniformly distributed and independent of $X^n$. Node A sends a message $M$ to Node B over a noiseless channel at rate $R$. \n\\begin{defn}\nAn $(n,R,R_0)$ code consists of:\n\\begin{IEEEeqnarray}{sl}\nEncoder: & f:\\mathcal{X}^n\\times[2^{nR_0}]\\rightarrow [2^{nR}]\\\\\nDecoder: & g:[2^{nR}]\\times[2^{nR_0}]\\rightarrow \\mathcal{X}^n\n\\end{IEEEeqnarray}\nThe encoder and decoder can be stochastic (in which case they are denoted by $P_{M|X^n,K}$ and $P_{\\widehat{X}^n|M,K}$).\n\\end{defn}\n\nThe encrypted communication (the message $M$) is overheard perfectly by an eavesdropper who produces a list $\\mathcal{L}(M)\\subset \\mathcal{Z}^n$ and incurs the minimum distortion over the entire list:\n\\begin{equation}\n\\min _{z^n\\in \\mathcal{L}(M)} d(X^n,z^n).\n\\end{equation}\nUsing the secret key and the noiseless channel, Nodes A and B want to communicate losslessly while ensuring that the eavesdropper's optimal strategy suffers distortion above a given level with high probability. 
The generalization to lossy communication begins in Section~\\ref{sec:mainlossy}.\n\n\\begin{defn}\n\\label{listdefn}\nThe tuple $(R,R_0,R_{\\sf L},D)$ is achievable if there exists a sequence of $(n,R,R_0)$ codes such that the error probability $\\mathbb{P}[X^n \\neq \\widehat{X}^n]$ vanishes and, $\\forall \\varepsilon > 0$,\n\\begin{equation}\n\\min_{\\substack{\\mathcal{L}(m):|\\mathcal{L}|\\leq2^{nR_{\\sf L}}}} \\mathbb{P}\\Big[\\min_{z^n \\in \\mathcal{L}(M)} d(X^n,z^n) \\geq D-\\varepsilon\\Big] \\xrightarrow{n\\to\\infty} 1.\n\\end{equation}\n\\end{defn}\n Thus, we allow the eavesdropper to use any list-valued function $\\mathcal{L}:\\mathcal{M} \\rightarrow \\{\\mathcal{Z}^n\\}_1^{2^{nR_{\\sf L}}}$, provided the cardinality of the range satisfies {$|\\mathcal{L}|~\\leq~2^{nR_{\\sf L}}$}. Furthermore, we assume that the eavesdropper knows the $(n,R,R_0)$ code and the distribution $P_X$.\n \n \\subsection{The henchman problem}\n So far, the problem has been formulated in terms of an eavesdropper who produces a list of $2^{nR_{\\sf L}}$ reconstructions. It turns out that we can relate this formulation to one in which an eavesdropper reconstructs a single sequence; this is accomplished by supplying the eavesdropper with a rate-limited helper (a henchman). As depicted in Figure~\\ref{fig:henchman}, the eavesdropper receives $nR_{\\sf L}$ bits of side information from a henchman who has access to the source sequence $X^n$ and the public message $M$. Since the eavesdropper and henchman cooperate, this means that the eavesdropper effectively receives the best possible $nR_{\\sf L}$ bits of side information about the pair $(X^n,M)$ to assist in producing a single reconstruction sequence $Z^n$. \n\n\\begin{figure}\n\\begin{center}\n\\begin{tikzpicture}\n [node distance=1cm,minimum width=1cm,minimum height =.75 cm]\n \\node[rectangle,minimum width=5mm] (source) {$X^n$};\n \\node[node] (alice) [right =7mm of source] {A};\n \\node[node] (bob) [right =3cm of alice] {B};\n \\node[coordinate] (dummy) at ($(alice.east)!0.5!(bob.west)$) {};\n \\node[rectangle,minimum width=5mm] (xhat) [right =7mm of bob] {$\\widehat{X}^n$};\n \\node[rectangle,minimum width=7mm] (key) [above =7mm of dummy] {$K\\in[2^{nR_0}]$};\n \\node[node] (eve) [below =3mm of bob] {Eve};\n\n \\node[node,anchor=west] (hman) at (eve -| alice.west) {\\footnotesize Henchman};\n \\node[rectangle,minimum width=5mm] (hsource) [left=7mm of hman] {$(X^n,M)$};\n \\node[rectangle,minimum width=5mm] (z) [right=7mm of eve] {$Z^n$};\n\n \\draw [arw] (source) to (alice);\n \\draw [arw] (alice) to node[minimum height=6mm,inner sep=0pt,midway,above]{$M\\in[2^{nR}]$} (bob);\n \\draw [arw] (bob) to (xhat);\n \\draw [arw] (key) to [out=180,in=90] (alice);\n \\draw [arw] (key) to [out=0,in=90] (bob);\n \\draw [arw,rounded corners] (dummy) |- (eve.165);\n \n \\draw [arw] (hman.east |- eve.195) to node[minimum height=6mm,inner sep=0pt,midway,below]{$M_{\\sf H}\\in[2^{nR_{\\sf L}}]$} (eve.195);\n \\draw [arw] (hsource) to (hman);\n \\draw [arw] (eve) to (z);\n \\end{tikzpicture}\n \\caption{\\small The henchman problem. A rate-limited henchman has access to the source sequence and the public message. 
The eavesdropper produces a single reconstruction sequence $Z^n$ based on the public message and the side information from the henchman.}\n \\label{fig:henchman}\n \\end{center}\n\n \\end{figure}\n\n\\begin{defn}\n\\label{henchmandefn}\nThe tuple $(R,R_0,R_{\\sf L},D)$ is achievable in the henchman problem if there exists a sequence of $(n,R,R_0)$ codes such that the error probability $\\mathbb{P}[X^n \\neq \\widehat{X}^n]$ vanishes and, $\\forall \\varepsilon > 0$,\n\\begin{equation}\n\\label{mainobj}\n\\min_{\\substack{m_{\\sf H}(x^n,m),z^n(m,m_{\\sf H}):\\\\|\\mathcal{M}_{\\sf H}|\\leq2^{nR_{\\sf L}}}} \\mathbb{P}\\Big[d(X^n,z^n(M,M_{\\sf H})) \\geq D-\\varepsilon\\Big] \\xrightarrow{n\\to\\infty} 1.\n\\end{equation}\n\\end{defn}\nThus, we allow the eavesdropper and henchman to jointly design a code consisting of an encoder $m_{\\sf H}(x^n,m)$ and a decoder $z^n(m,m_{\\sf H})$, subject to the constraint $|\\mathcal{M}_{\\sf H}| \\leq 2^{nR_{\\sf L}}$. It can be shown that allowing a stochastic encoder or decoder does not decrease the eavesdropper's distortion. As in Definition~\\ref{listdefn}, we assume that the adversarial entities are aware of the scheme that Nodes A and B employ, although this is not explicitly indicated in \\eqref{mainobj}.\n\nWe now demonstrate the equivalence of the list reconstruction problem and the henchman problem.\n\n\\begin{prop}\nThe tuple $(R,R_0,R_{\\sf L},D)$ is achievable in the list reconstruction problem if and only if it is achievable in the henchman problem. In other words, Definitions~\\ref{listdefn} and~\\ref{henchmandefn} are equivalent.\n\\end{prop}\n\\begin{proof}\nIt is enough to show that the eavesdropper's scheme in the list reconstruction problem can be transformed to a scheme in the henchman problem that achieves the same (or less) distortion, and vice versa. \n\nLet $\\mathcal{L}(m)$ be the function that the eavesdropper uses to produce a list of reconstruction sequences. If the public message is $M$, the list $\\mathcal{L}(M)$ can act as a codebook in the henchman problem. Knowing $(X^n,M)$, the henchman can transmit the index of the sequence in $\\mathcal{L}(M)$ with the lowest distortion. Upon receiving the index and $M$, the eavesdropper reconstructs the corresponding sequence.\n\nConversely, suppose that the henchman and eavesdropper have devised an encoder $m_{\\sf H}(x^n,m)$ and a decoder $z^n(m,m_{\\sf H})$. Upon observing the public message, the eavesdropper has a list of codewords (one for each $m_{\\sf H}$) that can be used for the list reconstruction problem. More precisely, the eavesdropper forms the list\n\\begin{equation}\n\\mathcal{L}(M) = \\{z^n(M,m_{\\sf H})\\}_{m_{\\sf H} \\in [2^{nR_{\\sf L}}]}.\n\\end{equation}\nIn both cases, it is straightforward to verify that the transformation maintains (or decreases) the distortion. 
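As a sanity check, the toy sketch below (hypothetical parameters, not from the paper) carries out the first transformation numerically: the eavesdropper's list doubles as a henchman codebook, and the two formulations achieve the same distortion.

\begin{verbatim}
# Sketch (illustrative only): a list L(m) reused as a henchman codebook.
import numpy as np

rng = np.random.default_rng(1)
n, list_size = 6, 4
hamming = lambda x, z: np.mean(np.array(x) != np.array(z))

x = rng.integers(0, 2, n)                              # source sequence
L = [rng.integers(0, 2, n) for _ in range(list_size)]  # list for the observed m

# List formulation: minimum distortion over the list.
d_list = min(hamming(x, z) for z in L)

# Henchman formulation built from the same list: the henchman sends the
# index of the best entry, and the eavesdropper decodes that entry.
m_H = int(np.argmin([hamming(x, z) for z in L]))
d_henchman = hamming(x, L[m_H])

print(d_list == d_henchman)                            # True
\end{verbatim}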
To carry out the verification formally, it is enough to show that for any $(n,R,R_0)$ code, \n\\begin{equation}\n\\min_{\\substack{\\mathcal{L}(m):|\\mathcal{L}|\\leq2^{nR_{\\sf L}}}} \\mathbb{P}\\Big[\\min_{z^n \\in \\mathcal{L}(M)} d(X^n,z^n) \\geq D\\Big] = \\min_{\\substack{m_{\\sf H}(x^n,m),z^n(m,m_{\\sf H}):\\\\|\\mathcal{M}_{\\sf H}|\\leq2^{nR_{\\sf L}}}} \\mathbb{P}\\Big[d(X^n,z^n(M,M_{\\sf H})) \\geq D\\Big].\n\\end{equation}\nTo show $(\\geq)$, fix a list reconstruction function $\\mathcal{L}(m)$ and define a henchman encoder and eavesdropper decoder by\n\\begin{IEEEeqnarray}{rCl}\nm_{\\sf H}(x^n,m) &=& \\argmin_{j\\in[2^{nR_{\\sf L}}]} d(x^n,\\mathcal{L}(m,j))\\\\\nz^n(m,m_{\\sf H}) &=& \\mathcal{L}(m,m_{\\sf H}),\n\\end{IEEEeqnarray}\nwhere $\\mathcal{L}(m,j)$ denotes the $j$th element of the list $\\mathcal{L}(m)$. Then we have\n\\begin{IEEEeqnarray}{rCl}\n \\mathbb{P}\\Big[\\min_{z^n \\in \\mathcal{L}(M)} d(X^n,z^n) \\geq D\\Big] &=& \\mathbb{P}\\Big[d(X^n,z^n(M,M_{\\sf H})) \\geq D\\Big]\\\\\n &\\geq& \\min_{\\substack{m_{\\sf H}(x^n,m),z^n(m,m_{\\sf H}):\\\\|\\mathcal{M}_{\\sf H}|\\leq2^{nR_{\\sf L}}}} \\mathbb{P}\\Big[d(X^n,z^n(M,M_{\\sf H})) \\geq D\\Big].\n\\end{IEEEeqnarray}\nTo show $(\\leq)$, fix a henchman encoder $m_{\\sf H}(x^n,m)$ and eavesdropper decoder $z^n(m,m_{\\sf H})$ and define a list reconstruction function by\n\\begin{equation}\n\\mathcal{L}(m) = \\{z^n(m,m_{\\sf H})\\}_{m_{\\sf H}\\in[2^{nR_{\\sf L}}]}.\n\\end{equation}\nThen we have\n\\begin{IEEEeqnarray}{rCl}\n\\mathbb{P}\\Big[d(X^n,z^n(M,M_{\\sf H})) \\geq D\\Big] &\\geq& \\mathbb{P}\\Big[\\min_{z^n \\in \\mathcal{L}(M)} d(X^n,z^n) \\geq D\\Big]\\\\\n&\\geq& \\min_{\\substack{\\mathcal{L}(m):|\\mathcal{L}|\\leq2^{nR_{\\sf L}}}} \\mathbb{P}\\Big[\\min_{z^n \\in \\mathcal{L}(M)} d(X^n,z^n) \\geq D\\Big].\n\\end{IEEEeqnarray}\n\\end{proof}\n\n\n\\section{Main Result (lossless communication)}\n\\label{sec:mainlossless}\nWhen lossless communication is required between the legitimate parties, we have the following characterization of the tradeoff among the communication rate, secret key rate, list rate (or henchman rate), and eavesdropper's distortion.\n\\begin{thm}\n\\label{mainresult}\nGiven a source distribution $P_X$ and a distortion function $d(x,z)$, the closure of achievable tuples $(R,R_0,R_{\\sf L},D)$ is the set of tuples satisfying\n \\begin{equation}\n\n \\begin{IEEEeqnarraybox}[][c]{rCl}\n R &\\geq& H(X)\\\\\n D &\\leq& D(R_{\\sf L})\\cdot \\mathbf{1}\\{R_0>R_{\\sf L}\\},\n \\end{IEEEeqnarraybox}\n\\end{equation}\nwhere $D(\\cdot)$ is the point-to-point distortion-rate function:\n\\begin{equation}\nD(R) \\triangleq \\min_{P_{Z|X}:R\\geq I(X;Z)} \\mathbb{E}[d(X,Z)].\n\\end{equation}\n\\end{thm}\n\n\nPerhaps the most striking part of Theorem~\\ref{mainresult} is that the region is discontinuous. Fixing a rate of secret key $R_0$, observe that when the list rate $R_{\\sf L}$ is strictly less than $R_0$, the $(R_{\\sf L},D)$ tradeoff follows the point-to-point rate-distortion function. However, as soon as $R_{\\sf L}$ equals or exceeds the secret key rate, the eavesdropper's distortion drops to zero (the minimum distortion possible) because all possible decryptions can be enumerated in a list of size $2^{nR_0}$. 
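As a numerical illustration of this boundary (a sketch under assumed parameters, not taken from the paper), the snippet below evaluates $D(R_{\sf L})\cdot\mathbf{1}\{R_0>R_{\sf L}\}$ for a $\text{Bern}(1/2)$ source and Hamming distortion, for which $D(R)=h^{-1}(1-R)$ with $h$ the binary entropy function.

\begin{verbatim}
# Sketch (illustrative only): the boundary of Theorem 1 for Bern(1/2)
# and Hamming distortion, with D(R) obtained by bisection.
import math

def h(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def D(rate):
    """Distortion-rate function of a Bern(1/2) source, Hamming distortion."""
    if rate >= 1.0:
        return 0.0
    lo, hi = 0.0, 0.5            # h is increasing on [0, 1/2]
    for _ in range(60):
        mid = (lo + hi) / 2
        if h(mid) < 1.0 - rate:
            lo = mid
        else:
            hi = mid
    return lo

def eavesdropper_distortion(R0, RL):
    """D(R_L) * 1{R_0 > R_L}, the best forced distortion in Theorem 1."""
    return D(RL) if R0 > RL else 0.0

for RL in [0.0, 0.1, 0.3, 0.5]:
    print(RL, round(eavesdropper_distortion(R0=0.4, RL=RL), 4))
# follows D(R_L) for R_L < R_0 = 0.4, then drops to 0 once R_L >= R_0
\end{verbatim}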
Figure~\\ref{fig:region} illustrates Theorem~\\ref{mainresult} for a $\\text{Bern}(1\/2)$ source and hamming distortion; the communication rate is assumed to satisfy $R\\geq H(X)$ and has no effect on the $(R_0,R_{\\sf L},D)$ tradeoff.\n\nNote that setting $R_{\\sf L}=0$ in the region of Theorem~\\ref{mainresult} corresponds to requiring a single reconstruction (without a henchman), which was Yamamoto's original formulation of the problem in \\cite{Yamamoto1997}. In this case, we see that any positive rate of secret key results in distortion $D(0)$, the maximum expected distortion that can occur.\n\nIn the context of the list reconstruction formulation, Theorem~\\ref{mainresult} implies that when Nodes A and B act optimally and $R_{\\sf L} < R_0$, the eavesdropper's best strategy is to simply ignore the public message and list the codewords from a good point-to-point rate-distortion codebook. In particular, the public message is useless to the eavesdropper in this regime. However, when $R_{\\sf L} \\geq R_0$, the eavesdropper uses a different strategy and produces all possible decryptions of the public message. When we consider the lossy communication setting, we will see that a similar strategy switch occurs.\n\nWe now prove the achievability and converse portions of Theorem~\\ref{mainresult}. For the entirety of the proof, we use the henchman formulation instead of the list reconstruction one. The main idea in the proof of achievability concerns the problem of compressing codewords from a random codebook beyond the rate-distortion limit; the proof also relies on a likelihood encoder \\cite{Song2014} and the soft covering lemma {\\cite[Lemma IV.1]{Cuff2013}}. The converse is straightforward, as we now show.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics{henchman_region_new.pdf}\n\\caption{\\small The region in Theorem~\\ref{mainresult} for source distribution {$P_X~\\sim~\\text{Bern}(1\/2)$} and distortion measure $d(x,z)=\\mathbf{1}\\{x\\neq z\\}$.}\n\\label{fig:region}\n\\end{center}\n\\end{figure}\n\n\\section{Converse (Lossless communication)}\n\\label{sec:losslessconverse}\nThe constraint $R\\geq H(X)$ is a consequence of the lossless source coding theorem. The constraint on $D$ splits into two cases depending on the relation between $R_0$ and $R_{\\sf L}$. If $R_{\\sf L} \\geq R_0$, then any scheme that Nodes A and B use to achieve lossless compression can be exploited by the eavesdropper and the henchman. Since they can both enumerate the $2^{nR_0}$ possible decryptions of $M$, the henchman can simply send the index of the correct decryption, which results in zero distortion. On the other hand, if $R_{\\sf L} < R_0$ then the eavesdropper and the henchman can ignore $M$ altogether and simply use a point-to-point rate-distortion code to describe $X^n$ within distortion $D(R_{\\sf L})$. Therefore, regardless of the code that Alice and Bob use for lossless communication, the eavesdropper and the henchman can achieve distortion less than or equal to $D(R_{\\sf L})\\cdot \\mathbf{1}\\{R_0>R_{\\sf L}\\}$.\n\n\\section{Achievability (Lossless communication)}\n\\label{sec:losslessachievability}\nViewing the problem from the perspective of the adversarial entities, we see that the henchman observes the pair $(X^n,M)$ and encodes a message $M_{\\sf H}$, and the eavesdropper observes $(M,M_{\\sf H})$ and decodes $Z^n$; their goal is to minimize the distortion $d(X^n,Z^n)$. This describes the usual rate-distortion setting with additional information $M$ available at encoder and decoder. 
In other words, for a given $M=m$, the henchman observes a sequence $X^n$ drawn from a source distribution $P_{X^n|M=m}$ and describes the sequence to the eavesdropper using a rate-limited channel; the conditional distribution $P_{X^n|M=m}$ is the effective source distribution because both the henchman and the eavesdropper know the public message.\n\nObserving that $P_{X^n|M=m}$ is induced entirely by the actions of Node A, let us assume for the moment that Node A uses the following random binning scheme to encode the source sequence. First, randomly divide the set of typical $x^n$ sequences into bins of size $2^{nR_0}$. This binning is known to everyone, including the adversaries. To encode $X^n$, Node A transmits the message {$M=(M_p,M_s)$}, where $M_p$ is the bin containing $X^n$, and $M_s$ is the index within that bin, one-time padded with $K$. Note that the one-time pad renders $M_s$ statistically independent of $X^n$ and $M_s$. Thus, for this choice of encoder, the induced distribution $P_{X^n|M}$ corresponds to choosing a sequence roughly uniformly at random from bin $M_p$ (because of the asymptotic equipartition property). Furthermore, the asymptotic equipartition property and the randomness of the binning suggest that the $2^{nR_0}$ sequences in bin $M_p$ were approximately chosen i.i.d. according to $\\prod_{i=1}^n P_X(x_i)$. Therefore, very roughly speaking, the random binning scheme results in a distribution $P_{X^n|M=m}$ that corresponds to selecting a sequence uniformly from a random codebook whose codewords are generated independently and identically according to $\\prod_{i=1}^n P_{X}(x_i)$. If this is true, then the joint goal of the henchman and the eavesdropper becomes the following: lossy compression (at rate $R_{\\sf L}$) of a codeword drawn uniformly from a random codebook of size $2^{nR_0}$. We now delve into this subproblem, the conclusion of which is the following: if $R_{\\sf L} \\tau_n \\Big] = 0.\n\\end{equation}\n\\end{thm}\n\n\\begin{proof}\n\nWe first provide a brief, informal sketch of the proof idea. For an optimal $(n,\\mathcal{C}_{\\mathsf{x}},R)$ code, there are on average $2^{n(R_{\\sf C} - R)}$ codewords in $\\mathcal{C}_{\\mathsf{x}}$ that map to each of the $2^{nR}$ reconstruction sequences in $\\mathcal{Z}^n$. However, for a given reconstruction sequence $z^n$, there are only (on average) $2^{n(R_{\\sf C}-R(D))}$ sequences in $\\mathcal{C}_{\\mathsf{x}}$ within distortion $D$ of $z^n$, because the probability of an i.i.d sequence $X^n$ being within distortion $D$ of $z^n$ is roughly $2^{-nR(D)}$. Since $2^{n(R_{\\sf C}-R(D))}$ is much smaller than $2^{n(R_{\\sf C} - R)}$, the probability that $z^n$ yields distortion less than D is vanishingly small. In fact, this probability decays doubly exponentially, which means that the entire suite of $2^{nR}$ reconstruction sequences simultaneously yields distortion greater than $D$ with high probability. In other words, the optimal code gives rise to distortion greater than $D$ with high probability, which is what we want to show.\n\nThe first step is to restrict $X^n(J)$ to the $\\delta$-typical set $\\mathcal{T}_{\\delta}^n(X)$ by writing\n\\begin{equation}\n\\label{typical} \\mathbb{P}[d(X^n(J),Z^n)\\leq D] \\leq \\mathbb{P}[d(X^n(J),Z^n)\\leq D, \\mathcal{A}] + \\mathbb{P}[\\mathcal{A}^c],\n\\end{equation}\nwhere $\\mathcal{A}$ denotes the event $\\{X^n(J)\\in \\mathcal{T}_{\\delta}^n \\}$. 
The $\\delta$-typical set is defined according to the notion of strong typicality:\n\\begin{equation}\n\\mathcal{T}_{\\delta}^n(X) \\triangleq \\{x^n\\in\\mathcal{X}^n: \\lVert T_{x^n} - P_X \\rVert_{\\sf TV} < \\delta\\},\n\\end{equation}\nwhere $T_{x^n}$ denotes the empirical distribution (i.e., the type) of $x^n$.\nWe will choose an appropriate $\\delta$ later. Note that the second term in~\\eqref{typical} vanishes in the limit for any $\\delta>0$ since $X^n(J)$ is i.i.d. according to $P_X$.\n\n\\begin{figure}\n\\begin{center}\n\\begin{tikzpicture}\n [node distance=1cm,minimum width=1cm,minimum height =.75 cm]\n \\node[rectangle,minimum width=5mm] (source) {$X^n(J)$};\n \\node[node] (alice) [right =7mm of source] {Enc.};\n \\node[node] (bob) [right =2cm of alice] {Dec.};\n \\node[coordinate] (dummy) at ($(alice.east)!0.5!(bob.west)$) {};\n \\node[rectangle,minimum width=5mm] (xhat) [right =7mm of bob] {$Z^n$};\n \\node[rectangle,minimum width=7mm] (key) [above =7mm of dummy] {$\\mathcal{C}_{\\mathsf{x}}$};\n \n \\draw [arw] (source) to (alice);\n \\draw [arw] (alice) to node[minimum height=6mm,inner sep=0pt,midway,above]{$R$} (bob);\n \\draw [arw] (bob) to (xhat);\n \\draw [arw] (key) to [out=180,in=90] (alice);\n \\draw [arw] (key) to [out=0,in=90] (bob);\n \\end{tikzpicture}\n \\caption{\\small Lossy compression of a codeword drawn uniformly from a random codebook $\\mathcal{C}_{\\mathsf{x}}=\\{X^n(1),\\ldots,X^n(2^{nR_{\\sf C}})\\}$. Both the encoder and decoder know the codebook $\\mathcal{C}_{\\mathsf{x}}$, and the encoder must describe a randomly chosen codeword $X^n(J)$, where $J\\sim \\text{Unif}[2^{nR_{\\sf C}}]$.}\n \\label{fig:codebook}\n \\end{center}\n\n \\end{figure}\n\nAlthough we defined a $(n,\\mathcal{C}_{\\mathsf{x}},R)$ code as an encoder-decoder pair $(f,g)$, we will benefit from viewing a code as the combination of a codebook of $z^n$ sequences and an encoder that is optimal for that codebook. In other words, treat an $(n,\\mathcal{C}_{\\mathsf{x}},R)$ code as a codebook $c_{\\mathsf{z}}\\subseteq \\mathcal{Z}^n$ of size $2^{nR}$, together with an encoder that maps $x^n \\in \\mathcal{C}_{\\mathsf{x}}$ to the $z^n\\in c_{\\mathsf{z}}$ with the lowest distortion $d(x^n,z^n)$. 
This allows us to write\n\\begin{IEEEeqnarray}{l}\n\\nonumber \\max_{(n,\\mathcal{C}_{\\mathsf{x}},R)\\text{ codes}} \\mathbb{P}[d(X^n(J),Z^n)\\leq D,\\mathcal{A}] \\\\\n\\label{codeequiv}\\quad\\quad = \\max_{c_{\\mathsf{z}}(\\mathcal{C}_{\\mathsf{x}})} \\mathbb{P}\\Big[\\min_{z^n \\in {c_{\\mathsf{z}}(\\mathcal{C}_{\\mathsf{X}})} } d(X^n(J),z^n)\\leq D,\\mathcal{A}\\Big],\n\\end{IEEEeqnarray}\nwhere the notation ${c_{\\mathsf{z}}(\\mathcal{C}_{\\mathsf{x}})}$ emphasizes that $c_{\\mathsf{z}}$ is a function of the random codebook $\\mathcal{C}_{\\mathsf{x}}$; for simplicity, we suppress the $n$ and $R$ parameters of $c_{\\mathsf{z}}(\\mathcal{C}_{\\mathsf{x}})$.\n\nNow we apply a union bound to the right-hand side of \\eqref{codeequiv} and write\n\\begin{IEEEeqnarray}{rCl}\n\\mathbb{P}\\Big[\\min_{z^n \\in {c_{\\mathsf{z}}(\\mathcal{C}_{\\mathsf{x}})} } d(X^n(J),z^n)\\leq D,\\mathcal{A}\\Big]\n&\\stackrel{(a)}{\\leq}& \\sum_{z^n\\in c_{\\mathsf{z}}(\\mathcal{C}_{\\mathsf{x}})} \\mathbb{P}\\Big[ d(X^n(J),z^n)\\leq D,\\mathcal{A}\\Big]\\\\\n&\\leq& 2^{nR} \\max_{z^n\\in c_{\\mathsf{z}}(\\mathcal{C}_{\\mathsf{x}})} \\mathbb{P}\\Big[ d(X^n(J),z^n)\\leq D,\\mathcal{A}\\Big]\\\\\n&\\leq& 2^{nR} \\max_{z^n\\in \\mathcal{Z}^n} \\mathbb{P}\\Big[ d(X^n(J),z^n)\\leq D,\\mathcal{A}\\Big]\\\\\n\\label{rvdenote} &\\stackrel{(b)}{=}& 2^{-n(R_{\\sf C} - R)} \\max_{z^n\\in \\mathcal{Z}^n} \\sum_{j=1}^{2^{nR_{\\sf C}}} \\mathbf{1}\\{ d(X^n(j),z^n)\\leq D, X^n(j) \\in \\mathcal{T}_{\\delta}^n \\}, \n\\end{IEEEeqnarray}\nwhere step (a) is a union bound, and step (b) uses the fact that $X^n(J)$ is chosen uniformly from $\\mathcal{C}_{\\mathsf{x}}$. Notice that for a fixed $z^n$, the terms in the sum in \\eqref{rvdenote} are i.i.d. random variables (due to the nature of the random codebook construction), which we henceforth denote by $\\xi_{j,z^n}$:\n\\begin{equation}\n\\label{xij}\n\\xi_{j,z^n} \\triangleq \\mathbf{1}\\{ d(X^n(j),z^n)\\leq D, X^n(j) \\in \\mathcal{T}_{\\delta}^n \\},\\quad j=1,\\ldots,2^{nR_{\\sf C}}.\n\\end{equation}\n\n\nUsing the equality in \\eqref{codeequiv} and the bound in \\eqref{rvdenote}, we have\n\\begin{IEEEeqnarray}{rCl}\n\\IEEEeqnarraymulticol{3}{l}{\\nonumber\n\\mathbb{P}\\Big[ \\max_{(n,\\mathcal{C}_{\\mathsf{x}},R)\\text{ codes}} \\mathbb{P}[d(X^n(J),Z^n)\\leq D,\\mathcal{A}] > \\tau_n \\Big]\n}\\\\\n\\quad&\\leq& \\mathbb{P}\\Big[\\max_{z^n\\in \\mathcal{Z}^n} \\sum_{j=1}^{2^{nR_{\\sf C}}} \\xi_{j,z^n} > \\tau_n 2^{n(R_{\\sf C}-R)}\\Big] \\\\\n\\label{finalprob}&\\stackrel{(a)}{\\leq}& |\\mathcal{Z}|^n \\max_{z^n\\in\\mathcal{Z}^n} \\mathbb{P}\\Big[\\sum_{j=1}^{2^{nR_{\\sf C}}} \\xi_{j,z^n} > \\tau_n 2^{n(R_{\\sf C}-R)}\\Big],\n\\end{IEEEeqnarray}\nwhere (a) is a union bound. If we can show that the probability in \\eqref{finalprob} decays doubly exponentially fast with $n$, then the proof will be complete. To that end, we first use a standard application of the method of types \\cite{Csiszar1998} to establish a bound on the expected value of $\\xi_{j,z^n}$ in the following lemma. The proof is relegated to the appendix.\n\n\n\n\\begin{lemma}\n\\label{typebound}\nIf $X^n$ is i.i.d. 
according to $P_X$, then for any $z^n$,\n\\begin{equation}\n\\mathbb{P}[d(X^n,z^n)\\leq D, X^n \\in \\mathcal{T}_{\\delta}^n] \\leq 2^{-n(R(D)-o(1))},\n\\end{equation}\nwhere $R(D)$ is the point-to-point rate-distortion function for $P_X$, and $o(1)$ is a term that vanishes as $\\delta\\rightarrow 0$ and $n\\rightarrow \\infty$.\n\\end{lemma}\n\nFrom Lemma~\\ref{typebound}, we see that the expected value of $\\sum_{j=1}^{2^{nR_{\\sf C}}} \\xi_{j,z^n}$ is bounded above by approximately $2^{n(R_{\\sf C}-R(D))}$. Moreover, since a condition of the theorem being proved is that $R k\\Big] \\leq \\left( \\frac{e\\!\\cdot\\! m \\!\\cdot\\! p}{k} \\right )^k.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nThe proof follows some of the usual steps for establishing Chernoff bounds.\n\\begin{IEEEeqnarray}{rCl}\n\\label{chernoffcorstep}\\mathbb{P}\\Big[\\sum_{i=1}^m X_i > k\\Big] &\\leq& \\min_{\\lambda > 0} e^{- \\lambda k} \\prod_{i=1}^m \\mathbb{E}[e^{\\lambda X_i}] \\\\\n&=& \\min_{\\lambda > 0} e^{- \\lambda k} (p\\cdot e^{\\lambda} + 1 - p)^m \\\\\n&\\leq& \\min_{\\lambda > 0} e^{- \\lambda k} (p\\cdot e^{\\lambda} + 1)^m \\\\\n&\\leq& \\min_{\\lambda > 0} e^{- \\lambda k} e^{mpe^{\\lambda}}\n\\end{IEEEeqnarray}\nSubstituting the minimizer $\\lambda^* = \\ln(\\frac{k}{mp})$ gives the desired bound.\n\\end{proof}\n\n\n\nUsing the bound on $\\mathbb{E}[\\xi_{j,z^n}]$ from Lemma~\\ref{typebound}, we can apply Lemma~\\ref{chernoff} to the probability in \\eqref{finalprob} by identifying\n\\begin{IEEEeqnarray}{rCl}\nm &=& 2^{n R_{\\sf C}}\\\\\np &\\leq& 2^{-n(R(D)-o(1))}\\\\ \nk &=& \\tau_n 2^{n (R_{\\sf C}-R)}.\n\\end{IEEEeqnarray}\n This gives\n\\begin{equation}\n\\label{doubleexp}\n\\mathbb{P}\\Big[\\sum_{j=1}^{2^{nR_{\\sf C}}} \\xi_{j,z^n} > \\tau_n 2^{n(R_{\\sf C}-R)}\\Big] \\leq 2^{-n\\alpha2^{n\\beta}},\n\\end{equation}\nwhere\n\\begin{IEEEeqnarray}{rCl}\n\\alpha &=& R(D) - R -o(1)\\\\\n\\beta &=& R_{\\sf C}-R-o(1).\n\\end{IEEEeqnarray}\nFor small enough $\\delta$ and large enough $n$, both $\\alpha$ and $\\beta$ are positive and bounded away from zero, and \\eqref{doubleexp} vanishes doubly exponentially fast. Consequently, the expression in \\eqref{finalprob} vanishes, completing the proof of Theorem~\\ref{cbthm}.\n\\end{proof}\nOne can readily establish the following corollary to Theorem~\\ref{cbthm}. \n\\begin{cor}\n\\label{cbcor}\nIf $R < R_{\\sf C}$ and $R < R(D)$, then\n\\begin{equation}\n\\lim_{n\\to\\infty}\\mathbb{E}_{\\mathcal{C}_{\\mathsf{x}}}\\Big[ \\min_{(n,\\mathcal{C}_{\\mathsf{x}},R)\\text{ codes}} \\mathbb{P}[d(X^n(J),Y^n)\\geq D]\\Big] = 1.\n\\end{equation}\n\\end{cor}\n\n\n\nThe interlude is now complete, and we can return to the achievability proof of Theorem~\\ref{mainresult}.\n\n\\subsection{Likelihood encoder}\nEarlier, we asserted that a scheme similar to random binning might give rise to an induced distribution $P_{X^n|M=m}$ that could be approximated by drawing a codeword uniformly from a random codebook. Then we could apply Corollary~\\ref{cbcor} to our problem by identifying $(R_{\\sf C},R)$ with $(R_0,R_{\\sf L})$. Although it is possible that an encoder using random binning might yield this distribution, we turn instead to a likelihood encoder with a random codebook because it brings considerable clarity to the induced distributions involved.\n\nConsider a codebook $c=\\{x^n(m,k)\\}$ consisting of $2^{n(R+R_0)}$ sequences from $\\mathcal{X}^n$. 
The likelihood encoder of \\cite{Song2014} for lossless reconstruction and for this codebook is a stochastic encoder defined by\n\\begin{equation}\nP_{M|X^nK}(m|x^n,k) \\propto \\prod_{i=1}^n \\mathbf{1}\\{x_i = x_i(m,k)\\},\n\\end{equation}\nwhere $\\propto$ indicates that appropriate normalization is required.\\footnote{In the rare case that no codeword is equal to the source sequence, an arbitrary index can be chosen.} The merit of using a likelihood encoder with a random codebook is that the resulting system-induced joint distribution of $(X^n,M,K)$, namely $P_{X^nMK}=P_{X^n}P_{K}P_{M|X^nK}$, can be shown to be close to an idealized distribution $Q_{X^nMK}$ defined by\n\\begin{equation}\nQ_{X^nMK}(x^n,m,k) \\triangleq 2^{-n(R+R_0)}\\prod_{i=1}^n \\mathbf{1}\\{x_i = x_i(m,k)\\}.\n\\end{equation}\nMore precisely, one can use the soft covering lemma~{\\cite[Lemma IV.1]{Cuff2013}} to prove the following.\n\\begin{lemma}\n\\label{distrclose}\nLet $\\mathcal{C}=\\{X^n(m,k)\\},{(m,k)\\in[2^{nR}]\\times[2^{nR_0}]}$ be a random codebook with each codeword drawn independently according to $\\prod_{i=1}^n P_X$. If $R>H(X)$, then\n\\begin{equation}\n\\label{distrtv}\n\\lim_{n\\to\\infty} \\mathbb{E}_{\\mathcal{C}} \\big\\lVert P_{X^nMK} - Q_{X^nMK} \\big\\rVert_{\\sf TV} = 0,\n\\end{equation}\nwhere the expectation is with respect to the random codebook and $\\lVert \\cdot \\rVert_{\\sf TV}$ is total variation distance.\n\\end{lemma}\n\n\\begin{proof}\nFrom the definition of $P_{X^nMK}$ and $Q_{X^nMK}$ we have $P_{M|X^nK}=Q_{M|X^nK}$. Using this fact, we have\n\\begin{IEEEeqnarray}{rCl}\n\\mathbb{E}_{\\mathcal{C}} \\big\\lVert P_{X^nMK} - Q_{X^nMK} \\big\\rVert_{\\sf TV} &\\stackrel{(a)}{=}& \\mathbb{E}_{\\mathcal{C}} \\big\\lVert P_{X^nK} - Q_{X^nK} \\big\\rVert_{\\sf TV} \\\\\n&=& \\mathbb{E}_{\\mathcal{C}} \\big\\lVert P_{X^n}P_{K} - Q_{X^n|K}P_{K} \\big\\rVert_{\\sf TV}\\\\\n&=& 2^{-nR_0} \\sum_{k=1}^{2^{nR_0}} \\mathbb{E}_{\\mathcal{C}} \\big\\lVert P_{X^n} - Q_{X^n|K=k} \\big\\rVert_{\\sf TV},\n\\end{IEEEeqnarray}\nwhere (a) uses Property~\\ref{tvproperties}c. Since $R > H(X)$, the soft covering lemma implies that the summands vanish\\footnote{Furthermore, they vanish uniformly for all $k \\in [2^{nR_0}]$.}:\n\\begin{equation}\n\\lim_{n\\to\\infty} \\mathbb{E}_{\\mathcal{C}} \\big\\lVert P_{X^n} - Q_{X^n|K=k} \\big\\rVert_{\\sf TV} = 0.\n\\end{equation}\nWithout getting into the details of the soft covering lemma, it is worthwhile to briefly summarize the main idea. The lemma, which is expounded upon in \\cite{Cuff2013}, i The soft covering lemma applies to the current proof because $Q_{X^n|K=k}$ is the output distribution induced by a memoryless channel acting on a random codebook of size $2^{nR}$, and $P_{X^n}$ is an i.i.d. distribution. Since we are considering lossless communication in this section, the relevant channel is the noiseless identity channel, and the relevant rate condition is $R>H(X)$.\n\\end{proof}\n\nLemma~\\ref{distrclose} and the definition of total variation distance allow us to analyze the probability in~\\eqref{mainobj} as if $Q_{X^nMK}$ were the true system-induced joint distribution instead of $P_{X^nMK}$. 
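For intuition, the toy computation below (tiny, hypothetical parameters; not from the paper) constructs a random codebook, applies the likelihood encoder exactly as defined above for lossless reconstruction, and evaluates the total variation distance between the induced $P_{X^nMK}$ and the idealized $Q_{X^nMK}$ by direct enumeration.

\begin{verbatim}
# Sketch (illustrative only): exact total variation between the induced
# distribution P_{X^n M K} and the idealized Q_{X^n M K} for a tiny
# random codebook; source is i.i.d. Bern(1/2).
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, num_m, num_k = 3, 16, 2          # here R = log2(16)/3 > H(X) = 1
codebook = {(m, k): tuple(int(b) for b in rng.integers(0, 2, n))
            for m in range(num_m) for k in range(num_k)}

def P_M_given_xk(x, k):
    """Likelihood encoder: uniform over {m : codeword(m,k) = x};
    an arbitrary (uniform) choice if no codeword matches."""
    matches = [m for m in range(num_m) if codebook[(m, k)] == x]
    if matches:
        return {m: 1.0 / len(matches) for m in matches}
    return {m: 1.0 / num_m for m in range(num_m)}

tv = 0.0
p_x = 0.5 ** n
for x in itertools.product([0, 1], repeat=n):
    for k in range(num_k):
        enc = P_M_given_xk(x, k)
        for m in range(num_m):
            P = p_x * (1.0 / num_k) * enc.get(m, 0.0)
            Q = (1.0 / (num_m * num_k)) * (1.0 if codebook[(m, k)] == x else 0.0)
            tv += 0.5 * abs(P - Q)
print("total variation distance:", round(tv, 4))
\end{verbatim}

At such a small block length the distance is far from zero; Lemma~\ref{distrclose} guarantees that it vanishes as $n$ grows whenever $R>H(X)$, which is what licenses carrying out the analysis under $Q_{X^nMK}$ rather than $P_{X^nMK}$.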
This is important because $Q_{X^n|M=m}$ is, as desired, uniform over a random codebook of size $2^{nR_0}$:\n\\begin{equation}\n\\label{idealprop}\nQ_{X^n|M=m} = \\text{Unif}\\{X^n(m,1),\\ldots,X^n(m,2^{nR_0})\\}.\n\\end{equation}\nTo see the role of $Q_{X^n|M=m}$, first denote (for the sake of brevity) the event\n\\begin{equation}\n\\mathcal{E} = \\{ d(X^n, z^n(M,M_{\\sf H})) \\geq D(R_{\\sf L}) - \\varepsilon \\}.\n\\end{equation}\nOur objective is to show that when {$R_{\\sf L} < R_0$}, Nodes A and B can force the eavesdropper to incur distortion $D(R_{\\sf L})$, i.e., there exists a sequence of codes that ensures \\eqref{mainobj}.\n\nTaking the expectation of \\eqref{mainobj} with respect to a random codebook, we have\n\\begin{IEEEeqnarray}{rCl}\n\\IEEEeqnarraymulticol{3}{l}{\\nonumber\n\\mathbb{E}_{\\mathcal{C}} \\Big[\\min_{\\substack{m_{\\sf H}(x^n,m),z^n(m,m_{\\sf H}):\\\\|\\mathcal{M}_{\\sf H}|\\leq2^{nR_{\\sf L}}}} \\mathbb{P}_{P_{X^nM}}\n[\\mathcal{E}]\\Big]}\\\\\n\\qquad &\\stackrel{(a)}{=}& \\mathbb{E}_{\\mathcal{C}} \\Big[\\min_{\\substack{m_{\\sf H}(x^n,m),z^n(m,m_{\\sf H}):\\\\|\\mathcal{M}_{\\sf H}|\\leq2^{nR_{\\sf L}}}} \\mathbb{P}_{Q_{X^nM}} [\\mathcal{E}] \\Big] + o(1) \\\\\n&=& \\mathbb{E}_{\\mathcal{C}} \\Big[\\min_{\\substack{m_{\\sf H}(x^n,m),z^n(m,m_{\\sf H}):\\\\|\\mathcal{M}_{\\sf H}|\\leq2^{nR_{\\sf L}}}} \\mathbb{E} \\big[\\mathbb{P}_{Q_{X^nM}} [\\mathcal{E} | M]\\big] \\Big] + o(1)\\\\\n&\\stackrel{(b)}{=}& \\label{relatecor} \\mathbb{E}_M \\,\\mathbb{E}_{\\mathcal{C}} \\Big[\\min_{\\substack{m_{\\sf H}(x^n),z^n(m_{\\sf H}):\\\\|\\mathcal{M}_{\\sf H}|\\leq2^{nR_{\\sf L}}}} \\mathbb{P}_{Q_{X^n|M}} [\\mathcal{E} | M] \\Big] + o(1).\n\\end{IEEEeqnarray}\nIn step (a), we use Lemma~\\ref{distrclose} to change the underlying distribution from $P_{X^nM}$ to $Q_{X^nM}$ with vanishing penalty. Step (b) uses the fact that $M$ and $\\mathcal{C}$ are independent under $Q_{X^nM}$. These steps bring us to the problem considered in the recent interlude: we must show that the henchman and the eavesdropper cannot design a code that achieves distortion $D(R_{\\sf L})$ for the ``source\" $Q_{X^n|M=m}$. \n\nSuppose that we are in the regime {$R_{\\sf L} 0$,\n\\begin{enumerate}[1)]\n\\item Lossy communication:\n\\begin{equation}\n\\mathbb{P}\\Big[d_{\\sf B}(X^n,Y^n) \\leq D_{\\sf B}+\\varepsilon\\Big] \\xrightarrow{n\\to\\infty} 1.\n\\end{equation}\n\\item List secrecy:\n\\begin{equation}\n\\label{listsecrecylossy}\n\\min_{\\substack{\\mathcal{L}(m):|\\mathcal{L}|\\leq2^{nR_{\\sf L}}}} \\mathbb{P}\\Big[\\min_{z^n \\in \\mathcal{L}(M)} d_{\\sf E}(X^n,z^n) \\geq D_{\\sf E}-\\varepsilon\\Big] \\xrightarrow{n\\to\\infty} 1.\n\\end{equation}\n\\end{enumerate}\nNote that the decoder of a $(n,R,R_0)$ code for lossy communication is a (possibly stochastic) decoder $P_{Y^n | MK}$.\n\\end{defn}\n\nAs with the lossless version of problem, we can reformulate Definition~\\ref{listdefnlossy} in terms of a rate-limited henchman. 
The henchman formulation for the lossy communication setting is exactly described by Figure~\\ref{fig:henchman} (with $\\widehat{X}^n$ replaced by $Y^n$).\n\nThe optimal tradeoff between the various rates and distortions is the following.\n\\begin{thm}\n\\label{mainresultlossy}\nGiven a source distribution $P_X$ and distortion functions $d_{\\sf B}(x,y)$ and $d_{\\sf E}(x,z)$, the closure of achievable tuples $(R,R_0,R_{\\sf L},D_{\\sf B},D_{\\sf E})$ is the set of tuples satisfying\n \\begin{equation}\n\n \\begin{IEEEeqnarraybox}[][c]{rCl}\n R &\\geq& I(X;Y)\\\\\n D_{\\sf B} &\\geq& \\mathbb{E}\\, d_{\\sf B}(X,Y)\\\\\n D_{\\sf E} &\\leq& \\begin{cases}\nD(R_{\\sf L}) & \\text{if } R_{\\sf L} < R_0\\\\\n\\min\\{D(R_{\\sf L}), D(R_{\\sf L} - R_0,P_{XY})\\} & \\text{if }R_{\\sf L} \\geq R_0\n\\end{cases}\n \\end{IEEEeqnarraybox}\n\\end{equation}\nfor some $P_{XY}=P_{X}P_{Y|X}$, where $D(\\cdot,P_{XY})$ is the point-to-point distortion-rate function with side information channel $P_{Y|X}$ to the encoder and decoder:\n\\begin{equation}\nD(R,P_{XY}) \\triangleq \\min_{P_{Z|XY}:R \\geq I(X;Z|Y)} \\mathbb{E}\\,d_{\\sf E}(X,Z).\n\\end{equation}\n\\end{thm}\n \nWhen $R_{\\sf L} < R_0$, the eavesdropper's distortion is at least $D(R_{\\sf L})$, just as it was when we considered lossless communication. This should not be surprising in light of the previous section, since less information is being revealed to the eavesdropper (the communication rate between Nodes A and B is lower). As before, the henchman can simply use a point-to-point rate-distortion code to achieve $D(R_{\\sf L})$. \n\nThe more interesting regime is when $R_{\\sf L} \\geq R_0$, i.e., the list rate (equivalently, the henchman's rate) is greater or equal to the rate of secret key. In this case, Theorem~\\ref{mainresultlossy} says that a communication scheme can be designed such that the eavesdropper's distortion cannot be less than\n\\begin{equation}\n\\min\\{D(R_{\\sf L}), D_Y(R_{\\sf L} - R_0)\\}.\n\\end{equation}\nTo see why these are the relevant distortions, consider the following. As we just mentioned, the henchman and the eavesdropper can always ignore the message $M$ and use a point-to-point code to achieve $D(R_{\\sf L})$. Alternatively, when $R_{\\sf L} \\geq R_0$, the henchman can first use part of the rate $R_{\\sf L}$ to communication the secret key to the eavesdropper. Then, roughly speaking, the henchman and eavesdropper effectively share side information $Y^n$ (since they both know $M$ and $K$ perfectly and can mimic the decoder), and can use the remaining rate $R_{\\sf L}-R_0$ to achieve distortion $D(R_{\\sf L} - R_0,P_{XY})$. Thus, one implication of Theorem~\\ref{mainresultlossy} is that the henchman benefits from sending information about the secret key only if he describes it entirely; there is no benefit to communicating just part of the key to the eavesdropper.\n\n\\section{Converse (lossy communication)}\n\\label{sec:lossyconverse}\nWe now present the converse proof for Theorem~\\ref{mainresultlossy}. In the regime $R_0 > R_{\\sf L}$, the converse is the same as when we required lossless communication. Nodes A and B (the legitimate parties) cannot force distortion greater than $D(R)$ with high probability because the henchman and the eavesdropper can always ignore the public message $M$ and simply use a good rate-distortion code to achieve distortion $D(R)$ with high probability. 
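Before treating the remaining regime, a small numerical sketch may be helpful (assuming a $\text{Bern}(1/2)$ source, Hamming distortion $d_{\sf E}$, and a binary symmetric channel $P_{Y|X}$ with crossover probability $q$; these are illustrative choices, not from the paper). For this binary example the standard formulas $D(r)=h^{-1}(1-r)$ and $D(r,P_{XY})=h^{-1}(h(q)-r)$ apply, and the snippet evaluates the two distortion levels appearing in Theorem~\ref{mainresultlossy}.

\begin{verbatim}
# Sketch (illustrative only): the two adversarial distortion levels in
# Theorem 2 for a Bern(1/2) source, Hamming distortion, and Y = X through
# a BSC(q).  All parameter values below are assumptions for illustration.
import math

def h(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def h_inv(v, hi=0.5):
    """Inverse of the binary entropy function on [0, hi], by bisection."""
    v = max(0.0, v)
    lo = 0.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if h(mid) < v:
            lo = mid
        else:
            hi = mid
    return lo

q, R0 = 0.1, 0.3                 # crossover and secret-key rate (assumed)
for RL in [0.1, 0.3, 0.5, 0.8]:
    D_point = h_inv(1.0 - RL)                  # ignore M: describe X^n directly
    if RL < R0:
        D_E = D_point
    else:
        D_cond = h_inv(h(q) - (RL - R0), hi=q) # forward the key, then use Y^n
        D_E = min(D_point, D_cond)
    print(RL, round(D_E, 4))
\end{verbatim}

In this example the key-forwarding strategy dominates once $R_{\sf L}\geq R_0$ because $R_0 < I(X;Y)$; if instead $R_0 \geq I(X;Y)$, the point-to-point strategy remains the better of the two. With this picture in mind, we return to the converse for the regime $R_0 > R_{\sf L}$.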
Note that this converse is ``strong\" in the sense that the probability of eavesdropper distortion being greater than $D(R)$ is not just bounded away from unity, it is actually vanishing. To be explicit, observe that if $R_{\\sf L} < R_0$ and $D_E > D(R)$, then the expression in \\eqref{listsecrecylossy} vanishes for all $\\varepsilon < D_E - D(R)$. This follows from the achievability portion of point-to-point rate-distortion theory.\n\nWhen $R_{\\sf L} \\geq R_0$, the henchman's rate is high enough that he can communicate the secret key to the eavesdropper and still have leftover rate $R_{\\sf L} - R_0$. Since the henchman and the eavesdropper both know $M$ and $K$, they can mimic the decoder of Node B and produce side information $Y^n$. Notice that we have made two assumptions: the henchman knows the secret key, and the receiver uses a deterministic decoder. However, if the henchman were not able to determine the secret key exactly, then multiple keys would correspond to the same source sequence, which means that the decoder would effectively be stochastic. Thus, we are making just one assumption: that the decoder is deterministic. This assumption is valid because a stochastic decoder cannot be used to increase the eavesdropper's distortion. Indeed, if we consider the list formulation of the problem, we see that eavesdropper's performance is completely determined by $X^n$ and $M$ alone; the output of Node B does not play a role.\\footnote{Note that this would not be the case if we were considering distortion functions of the form $d_{\\sf E}(x,y,z)$ instead of $d_{\\sf E}(x,z)$.}\n\nSo far, we have that the henchman and the eavesdropper share side information $Y^n$ equal to the receiver's reconstruction. Ideally, we would like to claim that $(X^n,Y^n)$ are jointly i.i.d. according to some distribution $P_{XY}$ and use the achievability portion of rate-distortion theory with side information at the encoder and decoder. Unfortunately, we cannot even claim that with high probability $(X^n,Y^n)$ are jointly typical according to some $P_{XY}$ because that is only guaranteed when Nodes A and B are using a nearly optimal rate-distortion code (i.e., one that operates near the rate-distortion tradeoff boundary). Instead, we rely on a different property of $(X^n,Y^n)$ that will be given shortly in Lemma~\\ref{codetypes}.\n\n\n\n\n\n\nWe will describe the henchman and eavesdropper's scheme in terms of the joint type of $(X^n,Y^n)$; to do this, we require the following straightforward extension of the type-covering lemma \\cite[Lemma 9.1]{Csiszar2011} that accounts for side information (proof omitted). Regarding notation, $\\mathcal{T}_{X}^n$ denotes the set of sequences whose types coincide with a given distribution $P_X$, and $\\mathcal{T}(\\mathcal{X}^n)$ denotes the set of all joint types on sequences in $\\mathcal{X}^n$.\n\n\\begin{lemma}\n\\label{typecovering}\nLet $\\tau > 0$ and $r\\geq0$. Fix a joint type $P_{XY} \\in \\mathcal{T}(\\mathcal{X}^n\\times\\mathcal{Y}^n)$, and let $y^n \\in \\mathcal{T}_Y^n$. 
For $n \\geq n_0(\\tau)$, there exists a codebook $\\mathcal{C}(y^n, P_{XY}) \\subseteq \\mathcal{Z}^n$ such that\n\\begin{enumerate}[1)]\n\\item\n\\begin{equation}\n\\frac1n \\log |\\mathcal{C}(y^n, P_{XY})| \\leq r.\n\\end{equation}\n\\item For all $x^n$ such that $(x^n,y^n)\\in \\mathcal{T}_{XY}^n$,\n\\begin{equation}\n\\min_{z^n\\in\\mathcal{C}(y^n,P_{XY})} d(x^n,z^n) \\leq D(r, P_{XY}) + \\tau\n\\end{equation}\n\\end{enumerate}\n\\end{lemma}\n\nWe also require the following lemma from \\cite{Weissman2005}.\n\\begin{lemma}[{\\cite[Theorem 7]{Weissman2005}}]\n\\label{codetypes}\nConsider any sequence of rate-distortion codes with rate $\\leq R$. Then\n\\begin{equation}\n\\limsup_{n\\to\\infty} I(T_{X^nY^n}) \\leq R \\quad \\text{a.s.},\n\\end{equation}\nwhere $T_{X^nY^n}$ denotes the type of $(X^n,Y^n)$ and $I(\\cdot)$ is the mutual information.\n\\end{lemma}\n\n \n\nNow we can begin the converse proof for the regime $R_{\\sf L} \\geq R_0$. Consider an achievable tuple $(R,R_0,R_{\\sf L},D_{\\sf B},D_{\\sf E})$. By the same argument that was used in the regime $R_{\\sf L} < R_0$, we must have $D_E \\leq D(R_{\\sf L})$ because the henchman and the eavesdropper can always ignore the public message and use a good rate-distortion code to describe $X^n$. \n\nLet $\\varepsilon \\in (0,1\/2)$. Define a set $\\mathcal{A}_n$ of joint distributions on $\\mathcal{X}\\times\\mathcal{Y}$ by\n\\begin{equation}\n\\mathcal{A}_n \\triangleq \\left\\{\n \\begin{IEEEeqnarraybox}[][c]{ll}\n Q_{XY}:\\:\\: & I_Q(X;Y) \\leq R + \\varepsilon\\\\\n & \\mathbb{E}_Q\\, d_{\\sf B}(X,Y) \\leq D_{\\sf B} + \\varepsilon\\\\\n & \\lVert Q_X - P_X \\rVert_{\\sf TV} \\leq \\varepsilon\n \\end{IEEEeqnarraybox}\n\\right\\}.\n\\end{equation}\nWe first show that\n\\begin{equation}\n\\label{typepropertieslimit}\n\\lim_{n\\to\\infty} \\mathbb{P}[T_{X^nY^n} \\in A_n] = 1,\n\\end{equation}\nwhere $T_{X^nY^n}$ denotes the type of $(X^n,Y^n)$. 
This can be proved by combining the following three facts:\n\\begin{enumerate}[1)]\n\\item From Lemma~\\ref{codetypes}, we have\n\\begin{equation}\n\\lim_{n\\to\\infty} \\mathbb{P} [I(T_{X^n,Y^n}) \\leq R + \\varepsilon] = 1.\n\\end{equation}\n\\item From the definition of achievability and the equality $d_{\\sf B}(x^n,y^n) = \\mathbb{E}_{T_{x^ny^n}}\\,d_{\\sf B}(x,y)$, we have\n\\begin{equation}\n\\lim_{n\\to\\infty} \\mathbb{P} [\\mathbb{E}_{T_{X^nY^n}}\\,d_{\\sf B}(x,y) \\leq D + \\varepsilon] = 1.\n\\end{equation}\n\\item From the weak law of large numbers, we have\n\\begin{equation}\n\\lim_{n\\to\\infty} \\mathbb{P} [\\lVert T_{X^n} - P_X \\rVert_{\\sf TV} \\leq \\varepsilon] = 1.\n\\end{equation}\n\\end{enumerate}\n\nWith \\eqref{typepropertieslimit} in hand, choose $n$ large enough so that\n\\begin{enumerate}[1)]\n\\item The eavesdropper cannot reconstruct with low distortion (this is an assumption of achievability):\n\\begin{equation}\n\\label{contradict}\n\\max_{\\substack{m_{\\sf H}(x^n,m),z^n(m,m_{\\sf H}):\\\\|\\mathcal{M}_{\\sf H}|\\leq2^{nR_{\\sf L}}}} \\mathbb{P}\\Big[d_{\\sf E}(X^n,z^n(M,M_{\\sf H})) < D_{\\sf E}-\\varepsilon\\Big] < \\varepsilon.\n\\end{equation}\n\\item \n\\begin{equation}\n\\label{typeeps}\n\\mathbb{P}[T_{X^nY^n} \\in A_n]\\geq\\varepsilon\n\\end{equation}\n\\item Lemma~\\ref{typecovering} is satisfied with $\\tau = \\varepsilon$ and $r = R_{\\sf L} - R_0$.\n\\item The number of bits needed to express the joint type is negligible:\n\\begin{equation}\n\\label{jointtyperate}\n\\frac1n |\\mathcal{X}| |\\mathcal{Y}| \\log (n+1) \\leq \\varepsilon.\n\\end{equation}\n\\end{enumerate}\nTo compress $x^n$ using side information ${y}^n$, the henchman first describes the joint type of $(x^n,{y}^n)$, then transmits the index of $x^n$ in the codebook $\\mathcal{C}(y^n, T_{x^n,{y}^n})$ that is guaranteed by Lemma~\\ref{typecovering}. The description of the joint type only uses additional rate $\\varepsilon$ because the size of $\\mathcal{T}(\\mathcal{X}^n\\times\\mathcal{Y}^n)$ is bounded by $(n+1)^{|\\mathcal{X}||\\mathcal{Y}|}$ and \\eqref{jointtyperate} is satisfied. Therefore, for a given source sequence $x^n$ and side information sequence ${y}^n$, the henchman is able to send a message at rate $(R_{\\sf L} - R_0)+\\varepsilon$ such that the eavesdropper can produce $z^n$ with distortion\n\\begin{equation}\n\\label{typedistortion}\nd_{\\sf E}(x^n,z^n) \\leq D(R_{\\sf L} - R_0+\\varepsilon, T_{x^n{y}^n}) + \\varepsilon.\n\\end{equation}\nNow define\n\\begin{equation}\nQ^*_{XY} \\triangleq \\argmax_{Q\\in\\mathcal{A}_n} D(R_{\\sf L}-R_0+\\varepsilon,Q).\n\\end{equation}\nFrom \\eqref{typeeps}, we see that with probability at least $\\varepsilon$, the henchman and the eavesdropper can achieve distortion\n\\begin{IEEEeqnarray}{rCl}\nd_{\\sf E}(X^n,Z^n) &\\leq& D(R_{\\sf L} - R_0+\\varepsilon, T_{X^nY^n}) + \\varepsilon\\\\\n&\\leq& D(R_{\\sf L} - R_0+\\varepsilon, Q^*_{XY}) + \\varepsilon\n\\end{IEEEeqnarray}\nTherefore, in view of \\eqref{contradict}, we can bound $D_{\\sf E}$:\n\\begin{IEEEeqnarray}{rCl}\nD_{\\sf E} &\\stackrel{(a)}{\\leq}& D(R_{\\sf L} - R_0+\\varepsilon, Q^*_{XY})+2\\varepsilon \\\\\n&\\stackrel{(b)}{\\leq}& D(R_{\\sf L} - R_0+\\varepsilon, P_{X}Q^*_{Y|X})+2\\varepsilon+o(\\varepsilon).\n\\end{IEEEeqnarray}\nStep (a) follows from \\eqref{contradict}. 
Step (b) is due to $\\lVert Q^*_X - P_X\\rVert_{\\sf TV}<\\varepsilon$ and the fact that the rate-distortion function is continuous in $P_X$ with respect to total variation distance (e.g., see \\cite{Palaiyanur2008}). Because $Q^*_{XY}\\in\\mathcal{A}_n$, we can also bound $R$ and $D_{\\sf B}$. First, we have\n\\begin{IEEEeqnarray}{rCl}\nR &\\geq& I(Q^*_{XY}) - \\varepsilon\\\\\n&\\stackrel{(a)}{\\geq}& I(P_XQ^*_{Y|X}) - \\varepsilon - o(\\varepsilon),\n\\end{IEEEeqnarray}\nwhere (a) is due to the continuity of mutual information with respect to total variation distance. Next, we have\n\\begin{IEEEeqnarray}{rCl}\nD_{\\sf B} &\\geq& \\mathbb{E}_{Q^*_{XY}}\\,d_{\\sf B}(X,Y) -\\varepsilon \\\\\n&\\stackrel{(a)}{\\geq}& \\mathbb{E}_{P_XQ^*_{Y|X}}\\,d_{\\sf B}(X,Y) -\\varepsilon - o(\\varepsilon),\n\\end{IEEEeqnarray}\nwhere (a) uses Property~\\ref{tvproperties}c of total variation.\n\nAssimilating the bounds that we have established, we can conclude that any achievable tuple $(R,R_0,R_{\\sf L},D_{\\sf B},D_{\\sf E})$ lies in the region\n\\begin{equation}\n\\mathcal{S}_{\\varepsilon}\\triangleq \\bigcup_{P_{Y|X}}\\left\\{\n \\begin{IEEEeqnarraybox}[][c]{rCl}\n (R,R_0,R_{\\sf L},D_{\\sf B},D_{\\sf E}):\\:R &\\geq& I(X;Y)-o(\\varepsilon)\\\\\n D_{\\sf B} &\\geq& \\mathbb{E}\\, d_{\\sf B}(X,Y) - o(\\varepsilon)\\\\\n D_{\\sf E} &\\leq& \\min\\{D(R_{\\sf L}), D(R_{\\sf L} - R_0+\\varepsilon, P_{Y|X}) + o(\\varepsilon) \\}\n \\end{IEEEeqnarraybox}\n\\right\\}.\n\\end{equation}\nSince this holds for all $\\varepsilon >0$, we have\n\\begin{equation}\n\\label{intersectregions}\n(R,R_0,R_{\\sf L},D_{\\sf B},D_{\\sf E}) \\in \\bigcap_{\\varepsilon > 0} \\mathcal{S}_{\\varepsilon}.\n\\end{equation}\nThe region in \\eqref{intersectregions} is equal to the region in Theorem~\\ref{mainresultlossy} (subject to $R_{\\sf L} \\geq R_0$), completing the converse proof.\n\n\n\n\n\\section{Achievability (lossy communication)}\n\\label{sec:lossyachievability}\nIn this section, we prove the achievability portion of Theorem~\\ref{mainresultlossy}, the lossy communication counterpart to Theorem~\\ref{mainresult}. The skeleton of the proof is similar to the one presented in Section~\\ref{sec:losslessachievability}, but we will need some enhanced versions of some of the components. \n\nAs in the lossless setting, we can view the henchman and the eavesdropper as the sender and receiver in a rate-limited system with side information $M$ (i.e., the public message) available to both parties. The correlation between the side information and the source sequence $X^n$ will govern the performance; therefore, we are interested in $P_{X^n|M=m}$ since this is the effective source distribution after accounting for common side information. As before, the encoder at Node A determines $P_{X^n|M=m}$ entirely. In Section~\\ref{sec:losslessachievability}, we were motivated by the effect of random binning (which we later replaced with a likelihood encoder for ease of analysis). However, instead of simply randomly binning $X^n$ and using $K$ to hide the location within the bin, we now want to first perform lossy compression using a codebook of sequences from $\\mathcal{Y}^n$, followed by a random binning of the codebook. Roughly speaking, this process results in a distribution $P_{X^n|M=m}$ that corresponds to selecting a $y^n$ sequence uniformly from a random codebook of size $2^{nR_0}$, then passing that sequence through a memoryless channel $\\prod P_{X|Y}$. 
The justification for this assertion will become clear when we use a likelihood encoder later on; for now, we study the subproblem that just surfaced: lossy compression of a \\emph{noisy version} of a codeword drawn uniformly from a random codebook. \n\n\\subsection{Lossy compression of a noisy version of a codeword drawn uniformly from a random codebook}\n\nConsider a codebook $c_{\\mathsf{y}} = \\{y^n(1),\\ldots,y^n(2^{nR_{\\sf C}})\\}$ consisting of $2^{nR_{\\sf C}}$ sequences in $\\mathcal{Y}^n$. Select a codeword uniformly at random from $c_{\\mathsf{y}}$ and denote it by $y^n(J)$, where $J \\sim \\text{Unif}[2^{nR_{\\sf C}}]$. Pass $y^n(J)$ through a memoryless channel $\\prod P_{X|Y}$ to produce a sequence $X^n$. An encoder describes $X^n$ using a noiseless link of rate $R$, and a decoder estimates it with a reconstruction sequence $Z^n$ (incurring distortion $d(X^n,Z^n))$. Both the encoder and decoder know the codebook $c_{\\mathsf{y}}$, and together they constitute a $(n,c_{\\mathsf{y}},R)$ code. The setup is shown in Figure~\\ref{fig:codebooklossy} for a random codebook $\\mathcal{C}_{\\mathsf{y}}$.\n\nThe following theorem generalizes Theorem~\\ref{cbthm} (to recover that theorem, set $Y=X$).\n\\begin{thm}\n\\label{cblossythm}\nFix $P_{XY}$, $R$, $R_{\\sf C}$ and $D$. Let $\\mathcal{C}_{\\mathsf{y}}$ be a random codebook of $2^{nR_{\\sf C}}$ codewords, each drawn independently according to $\\prod_{i=1}^n P_Y(y_i)$. Let $\\tau_n$ be any sequence that converges to zero sub-exponentially fast (i.e., $\\tau_n=2^{-o(n)}$). If\n\\begin{equation}\n\\label{regime2}\nR < \\min\\{R(D), R_Y(D) + R_{\\sf C}\\},\n\\end{equation}\nthen with high probability it is impossible to achieve distortion $D$ in the sense that\n\\begin{equation}\n\\label{cbobjlossy}\n\\lim_{n\\to\\infty}\\mathbb{P}_{\\mathcal{C}_{\\mathsf{y}}}\\Big[ \\max_{(n,\\mathcal{C}_{\\mathsf{y}},R)\\text{ codes}} \\mathbb{P}[d(X^n,Z^n)\\leq D] > \\tau_n \\Big] = 0.\n\\end{equation}\nThe function $R_Y(D)$ is the rate-distortion function with side information:\n\\begin{equation}\nR_Y(D) = \\min_{P_{Z|XY}: \\mathbb{E}\\, d(X,Z) \\leq D} I(X;Z|Y).\n\\end{equation}\n\\end{thm}\nBefore diving into the proof, let us briefly justify why the regime in \\eqref{regime2} is the one of interest. First, observe that whenever $R \\geq R(D)$ is satisfied, distortion $D$ can be achieved by simply using a regular point-to-point rate distortion code to describe $X^n$. Second, whenever $R\\geq R_Y(D) + R_{\\sf C}$ holds, distortion $D$ can be achieved in roughly the following manner. The encoder first identifies a codeword in $\\mathcal{C}_{\\mathsf{y}}$ that is jointly typical with $X^n$ (according to $P_{XY}$) and sends the index of the codeword using rate $R_{\\sf C}$. The codeword is then treated as side information, which allows the encoder to describe $X^n$ using rate $R_Y(D)$. 
So we see that \\eqref{regime2} is actually necessary for \\eqref{cbobjlossy} to hold.\n\n\\begin{figure}\n\\begin{center}\n\\begin{tikzpicture}\n [node distance=1cm,minimum width=1cm,minimum height =.75 cm]\n \\node[rectangle,minimum width=5mm] (codeword) {$Y^n(J)$};\n \\node[node] (channel) [right =7mm of codeword] {$P_{X|Y}$};\n \n \\node[node] (alice) [right =15mm of channel] {Enc.};\n \\node[node] (bob) [right =1.5cm of alice] {Dec.};\n \\node[coordinate] (dummy) at ($(alice.east)!0.5!(bob.west)$) {};\n \\node[rectangle,minimum width=5mm] (xhat) [right =7mm of bob] {$Z^n$};\n \\node[rectangle,minimum width=7mm] (key) [above =7mm of dummy] {$\\mathcal{C}_{\\mathsf{y}}$};\n \n \\draw [arw] (codeword) to (channel);\n \n \\draw [arw] (channel) to node[minimum height=6mm,inner sep=0pt,midway,above]{$X^n$} (alice);\n \\draw [arw] (alice) to node[minimum height=6mm,inner sep=0pt,midway,above]{$R$} (bob);\n \\draw [arw] (bob) to (xhat);\n \\draw [arw] (key) to [out=180,in=90] (alice);\n \\draw [arw] (key) to [out=0,in=90] (bob);\n \\end{tikzpicture}\n \\caption{\\small Lossy compression of a noisy version of a codeword drawn uniformly from a random codebook $\\mathcal{C}_{\\mathsf{y}}=\\{Y^n(1),\\ldots,Y^n(2^{nR_{\\sf C}})\\}$. Both the encoder and decoder know the codebook $\\mathcal{C}_{\\mathsf{y}}$. The encoder describes $X^n$, the output of a memoryless channel $\\prod P_{X|Y}$ whose input is a randomly chosen codeword $Y^n(J)$, where $J\\sim \\text{Unif}[2^{nR_{\\sf C}}]$.}\n \\label{fig:codebooklossy}\n \\end{center}\n\n \\end{figure}\n\n\\begin{proof}\nWe follow the basic rubric of Section~\\ref{sec:losslessachievability}, making modifications where they are needed. \n\nFixing $P_{XY}$, we first restrict $(X^n,Y^n(J))$ to be jointly typical by writing\n\\begin{equation}\n\\label{typical2}\n\\mathbb{P}[d(X^n,Z^n)\\leq D] \\leq \\mathbb{P}[d(X^n,Z^n)\\leq D, \\mathcal{A}] + \\mathbb{P}[\\mathcal{A}^c],\n\\end{equation}\nwhere $\\mathcal{A}$ denotes the event $\\{(X^n,Y^n(J)) \\in \\mathcal{T}_{\\delta}^n(X,Y) \\}$. Note that the second term in~\\eqref{typical2} vanishes in the limit for any $\\delta>0$ since $(X^n,Y^n(J))$ is i.i.d. 
according to $P_{XY}$.\n\nContinuing exactly as in Section~\\ref{sec:losslessachievability}, we have\n\\begin{IEEEeqnarray}{rCl}\n\\max_{(n,\\mathcal{C}_{\\mathsf{y}},R)\\text{ codes}} \\mathbb{P}[d(X^n,Z^n)\\leq D,\\mathcal{A}]\n&\\leq& 2^{nR} \\max_{z^n\\in \\mathcal{Z}^n} \\mathbb{P}[ d(X^n,z^n)\\leq D,\\mathcal{A}]\\\\\n&\\stackrel{}{=}& 2^{nR} \\max_{z^n\\in \\mathcal{Z}^n} \\mathbb{E}_J\\,\\mathbb{P}[ d(X^n,z^n) \\leq D, \\mathcal{A} | Y^n(J)]\\\\\n&=& 2^{-n(R_{\\sf C} - R)} \\max_{z^n\\in \\mathcal{Z}^n} \\sum_{j=1}^{2^{nR_{\\sf C}}} \\mathbb{P}[ d(X^n,z^n) \\leq D, \\mathcal{A} | Y^n(j)]\n\\end{IEEEeqnarray}\nDenote the terms in the sum by $\\zeta_{j,z^n}$:\n\\begin{IEEEeqnarray}{rCl}\n\\label{zetaj}\n\\zeta_{j,z^n} &\\triangleq& \\mathbb{P}[ d(X^n,z^n) \\leq D, \\mathcal{A} | Y^n(j)]\\\\\n&=& \\sum_{x^n\\in\\mathcal{X}^n} \\prod_{i=1}^n P_{X|Y}(x_i | Y_i(j)) \\cdot \\mathbf{1}\\{d(x^n,z^n)\\leq D, (x^n,Y^n(j))\\in \\mathcal{T}_{\\delta}\\}\n\\end{IEEEeqnarray}\nContinuing in the manner of Section~\\ref{sec:losslessachievability} leads us to\n\\begin{equation}\n\\label{finalprob2}\n\\mathbb{P}\\Big[ \\max_{(n,\\mathcal{C}_{\\mathsf{y}},R)\\text{ codes}} \\mathbb{P}[d(X^n,Z^n)\\leq D,\\mathcal{A}] > \\tau_n \\Big]\n\\leq |\\mathcal{Z}|^n \\max_{z^n\\in\\mathcal{Z}^n} \\mathbb{P}\\Big[\\sum_{j=1}^{2^{nR_{\\sf C}}} \\zeta_{j,z^n} > \\tau_n 2^{n(R_{\\sf C}-R)}\\Big],\n\\end{equation}\n\nAs with the $\\xi_{j,z^n}$ defined in Section~\\ref{sec:losslessachievability} (Eq. \\eqref{xij}), the $\\zeta_{j,z^n}$ are i.i.d. due to the nature of the random codebook; however, they are no longer Bernoulli random variables. The following lemma, a straightforward generalization of Lemma~\\ref{typebound}, shows that $\\zeta_{j,z^n}$ is bounded above by $2^{-n(R_Y(D)-o(1))}$ with probability one. The proof is omitted.\n\n\\begin{lemma}\n\\label{typebound2}\nFix $P_{XY}$ and $y^n\\in\\mathcal{Y}^n$. If $X^n$ is distributed according to $\\prod_{i=1}^n P_{X|Y=y_i}$, then for any $z^n$,\n\\begin{equation}\n\\mathbb{P}\\big[d(X^n,z^n)\\leq D, (X^n,y^n) \\in \\mathcal{T}_{\\delta}^n \\,\\big|\\, Y^n = y^n\\big] \\leq 2^{-n(R_Y(D)-o(1))},\n\\end{equation}\nwhere $o(1)$ is a term that vanishes as $\\delta\\rightarrow 0$ and $n\\rightarrow \\infty$.\n\\end{lemma}\n\nAs mentioned, Lemma~\\ref{typebound2} implies\n\\begin{equation}\n\\label{zetasupport}\n\\zeta_{j,z^n} \\in [0,2^{-n(R_Y(D) - o(1))}].\n\\end{equation}\nIn addition to bounding the range of $\\zeta_{j,z^n}$ we can also bound its expected value. In fact, the bound is the same as for $\\xi_{j,z^n}$.\n\\begin{IEEEeqnarray}{rCl}\n\\mathbb{E}_{\\mathcal{C}_{\\mathsf{y}}} \\zeta_{j,z^n}\n&=& \\mathbb{E}_{\\mathcal{C}_{\\mathsf{y}}}\\, \\mathbb{P}[ d(X^n,z^n) \\leq D, \\mathcal{A} \\,|\\, Y^n(j)] \\\\\n&\\leq& \\mathbb{E}_{\\mathcal{C}_{\\mathsf{y}}}\\, \\mathbb{P}[ d(X^n,z^n) \\leq D, X^n \\in \\mathcal{T}_{\\delta} \\,|\\, Y^n(j)]\\\\\n&=& \\mathbb{P}[d(X^n,z^n) \\leq D, X^n \\in \\mathcal{T}_{\\delta} ] \\qquad (\\text{where }X^n \\sim \\prod P_X)\\\\\n\\label{zetamean}&\\stackrel{(a)}{\\leq}& 2^{-n (R(D) - o(1))},\n\\end{IEEEeqnarray}\nwhere (a) is due to Lemma~\\ref{typebound}.\n\nWe are now ready to apply a Chernoff bound to the probability in \\eqref{finalprob2}. First, we extend the Chernoff bound in Lemma~\\ref{chernoff} to random variables taking values on the interval $[0,a]$ instead of just binary random variables.\n\\begin{cor}\n\\label{chernoffcor2}\nIf $X^m$ is a sequence of i.i.d. 
random variables on the interval $[0,a]$ with $\\mathbb{E}[X_i] = p$, then\n\\begin{equation}\n\\mathbb{P}\\Big[\\sum_{i=1}^m X_i > k\\Big] \\leq \\left( \\frac{e\\!\\cdot\\! m \\!\\cdot\\! p}{k} \\right )^{k\/a}.\n\\end{equation}\n\\end{cor}\n\n\\begin{proof}\nWe start by proving the case $a=1$. To begin, we claim that if $X\\in[0,1]$ and $Y\\in\\{0,1\\}$ are random variables such that $\\mathbb{E}[X] = \\mathbb{E}[Y]$ and $f: [0,1] \\rightarrow \\mathbb{R}$ is convex, then\n\\begin{equation}\n\\mathbb{E}[f(X)] \\leq \\mathbb{E}[f(Y)].\n\\end{equation}\nTo see this, observe that for $x\\in[0,1]$,\n\\begin{equation}\nf(x) \\leq x f(1) + (1-x) f(0).\n\\end{equation}\nTaking expectations gives\n\\begin{IEEEeqnarray}{rCl}\n\\mathbb{E}[f(X)] &\\leq& \\mathbb{E}[X] f(1) + (1-\\mathbb{E}[X]) f(0) \\\\\n&=& \\mathbb{E}[Y] f(1) + (1-\\mathbb{E}[Y]) f(0) \\\\\n&=& \\mathbb{E}[f(Y)],\n\\end{IEEEeqnarray}\nverifying the claim. Now, since $f(x) = e^{\\lambda x}$ is convex, the inequality $\\mathbb{E}[e^{\\lambda Y}] \\leq \\mathbb{E}[e^{\\lambda X}]$ holds and can be applied to the proof of Lemma~\\ref{chernoff} at \\eqref{chernoffcorstep}.\n\nWith the case $a=1$ shown, we now consider any $a>0$. If we let $Y_i = \\frac{1}{a} X_i \\in [0,1]$, then the previous case applies and we have\n\\begin{IEEEeqnarray}{rCl}\n\\mathbb{P}\\Big[\\sum_{i=1}^m X_i > k\\Big] &\\leq& \\mathbb{P}\\Big[\\sum_{i=1}^m a Y_i > \\frac{k}{a}\\Big] \\\\\n&\\leq& \\Big ( \\frac{e\\!\\cdot\\! m \\!\\cdot\\! \\mathbb{E}[Y_1]}{k\/a} \\Big )^{k\/a} \\\\\n&=& \\Big ( \\frac{e\\!\\cdot\\! m \\!\\cdot\\! p}{k} \\Big )^{k\/a}.\n\\end{IEEEeqnarray}\n\\end{proof}\n\nUsing the support bound and the expected value bound in \\eqref{zetasupport} and \\eqref{zetamean}, we can apply Corollary~\\ref{chernoffcor2} to the probability in \\eqref{finalprob2} by identifying\n\\begin{IEEEeqnarray}{rCl}\nm &=& 2^{n R_{\\sf C}}\\\\\na &=& 2^{-n(R_Y(D) - o(1))}\\\\\np &\\leq& 2^{-n(R(D)-o(1))}\\\\ \nk &=& \\tau_n 2^{n (R_{\\sf C}-R)}.\n\\end{IEEEeqnarray}\nThis gives\n\\begin{equation}\n\\label{doubleexp2}\n\\mathbb{P}\\Big[\\sum_{j=1}^{2^{nR_{\\sf C}}} \\zeta_{j,z^n} > \\tau_n 2^{n(R_{\\sf C}-R)}\\Big] \\leq 2^{-n\\alpha2^{n\\beta}},\n\\end{equation}\nwhere\n\\begin{IEEEeqnarray}{rCl}\n\\alpha &=& R(D) - R -o(1)\\\\\n\\beta &=& R_{\\sf C}+R_Y(D)-R-o(1).\n\\end{IEEEeqnarray}\nFor small enough $\\delta$ and large enough $n$, both $\\alpha$ and $\\beta$ are positive and bounded away from zero, and \\eqref{doubleexp2} vanishes doubly exponentially fast. Consequently, the expression in \\eqref{finalprob2} vanishes, completing the proof of Theorem~\\ref{cblossythm}.\n\\end{proof}\n\nThe following corollary to Theorem~\\ref{cblossythm} is immediate, and, as in Section~\\ref{sec:losslessachievability}, will serve as the bridge between the subproblem we have been considering and the henchman problem. \n\n \\begin{cor}\n\\label{cbcorlossy}\nFix $P_{XY}$. If $R < \\min\\{R(D), R_Y(D) + R_{\\sf C}\\}$, then\n\\begin{equation}\n\\lim_{n\\to\\infty}\\mathbb{E}_{\\mathcal{C}_{\\mathsf{y}}}\\Big[ \\min_{(n,\\mathcal{C}_{\\mathsf{y}},R)\\text{ codes}} \\mathbb{P}[d(X^n,Z^n)\\geq D]\\Big] = 1.\n\\end{equation}\n\\end{cor}\n\n\\subsection{Likelihood encoder}\nReturning to the henchman problem, we follow the basic structure of Section~\\ref{sec:losslessachievability}. 
Fixing $P_{Y|X}$ (and thus a joint distribution $P_{XY}$), consider a codebook $c=\\{y^n(m,k)\\}$ of $2^{n(R+R_0)}$ sequences from $\\mathcal{Y}^n$ and define a likelihood encoder for this codebook by\n\\begin{equation}\nP_{M|X^nK}(m|x^n,k) \\propto \\prod_{i=1}^n P_{X|Y}(x_i | y_i(m,k)),\n\\end{equation}\nwhere $\\propto$ indicates that appropriate normalization is required. The distribution $P_{X^nMK}$ induced by using this encoder with a random codebook is intimately related to an idealized distribution $Q_{X^nMK}$ defined by\n\\begin{equation}\nQ_{X^nMK}(x^n,m,k) \\triangleq 2^{-n(R+R_0)} \\prod_{i=1}^n P_{X|Y}(x_i | y_i(m,k)).\n\\end{equation}\nIndeed, just as in Lemma~\\ref{distrclose}, one can use the soft covering lemma to show that if $R > I(X;Y)$, then\n\\begin{equation}\n\\label{distrtvlossy}\n\\lim_{n\\to\\infty} \\mathbb{E}_{\\mathcal{C}} \\big\\lVert P_{X^nMK} - Q_{X^nMK} \\big\\rVert_{\\sf TV} = 0,\n\\end{equation}\nwhere $\\mathcal{C}$ is a random codebook with each codeword drawn independently according to $\\prod_{i=1}^n P_Y$.\n\nInspecting $Q_{X^nMK}$ reveals that $Q_{X^n|M=m}$ is exactly the distribution that was addressed in the recent interlude. To see this, observe that $Q_{X^nK|M=m}$ is the joint distribution that arises from selecting a codeword uniformly from a codebook of size $2^{nR_0}$ and passing it through a memoryless channel $\\prod P_{X|Y}$. To be explicit,\n\\begin{equation}\n\\label{idealconditional}\nQ_{X^n|M}(x^n | m) = 2^{-nR_0} \\sum_{k=1}^{2^{nR_0}} \\prod_{i=1}^n P_{X|Y}(x_i | Y_i(m,k)).\n\\end{equation}\nProceeding with the analysis of the eavesdropper's distortion, first denote the event\n\\begin{equation}\n\\mathcal{E} = \\{ d(X^n, z^n(M,M_{\\sf H})) \\geq \\pi(R_{\\sf L},R_0,P_{Y|X}) - \\varepsilon \\},\n\\end{equation}\nwhere\n\\begin{equation}\n\\pi(R_{\\sf L},R_0,P_{Y|X}) \\triangleq \n\\begin{cases}\nD(R_{\\sf L}) & \\text{if } R_0 > R_{\\sf L}\\\\\n\\min\\{D(R_{\\sf L}), D_Y(R_{\\sf L} - R_0)\\} & \\text{if }R_0 \\leq R_{\\sf L}\n\\end{cases}\n\\end{equation}\nThe purpose of $\\pi(\\cdot)$ is to treat the cases $R_{\\sf L} < R_0$ and $R_{\\sf L} \\geq R_0$ concurrently.\n\nTaking the expectation of \\eqref{listsecrecylossy} with respect to a random codebook, we have\n\\begin{equation}\n\\label{relatecorlossy} \\mathbb{E}_{\\mathcal{C}} \\Big[\\min_{\\substack{m_{\\sf H}(x^n,m),z^n(m,m_{\\sf H}):\\\\|\\mathcal{M}_{\\sf H}|\\leq2^{nR_{\\sf L}}}} \\mathbb{P}_{P_{X^nM}}\n[\\mathcal{E}]\\Big] = \\mathbb{E}_M \\,\\mathbb{E}_{\\mathcal{C}} \\Big[\\min_{\\substack{m_{\\sf H}(x^n),z^n(m_{\\sf H}):\\\\|\\mathcal{M}_{\\sf H}|\\leq2^{nR_{\\sf L}}}} \\mathbb{P}_{Q_{X^n|M}} [\\mathcal{E} | M] \\Big] + o(1).\n\\end{equation}\nFrom \\eqref{idealconditional}, we see that the expression in \\eqref{relatecorlossy} is exactly what is addressed by Corollary~\\ref{cbcorlossy} after identifying $(R_0,R_{\\sf L})$ with $(R_{\\sf C}, R)$. 
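Two monotonicity facts are worth recording before we apply the corollary: since $D(\\cdot)$ and $D_Y(\\cdot)$ are the distortion-rate counterparts of $R(\\cdot)$ and $R_Y(\\cdot)$, for any distortion level $D$ we have\n\\begin{equation}\nD < D(R_{\\sf L}) \\;\\Longrightarrow\\; R_{\\sf L} < R(D), \\qquad D < D_Y(R_{\\sf L}-R_0) \\;\\Longrightarrow\\; R_{\\sf L}-R_0 < R_Y(D).\n\\end{equation}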
Note that we are invoking the corollary with $D=\\pi(R_{\\sf L},R_0,P_{Y|X}) - \\varepsilon$, which means that \n\\begin{equation}\nR_{\\sf L} < \\min\\{ R(D), R_Y(D) + R_0 \\}.\n\\end{equation} \nThus, we have\n\\begin{equation}\n\\lim_{n\\to\\infty}\\mathbb{E}_{\\mathcal{C}} \\Big[\\min_{\\substack{m_{\\sf H}(x^n),z^n(m_{\\sf H}):\\\\|\\mathcal{M}_{\\sf H}|\\leq2^{nR_{\\sf L}}}} \\mathbb{P}_{Q_{X^n|M=m}} [\\mathcal{E} | M=m] \\Big] = 1.\n\\end{equation}\nTherefore, we can conclude that there exists a codebook such that the associated likelihood encoder ensures~\\eqref{listsecrecylossy}, because \\eqref{listsecrecylossy} holds when averaged over random codebooks. \n\nWe now complete the proof of achievability by showing that the likelihood encoder can be used to achieve distortion $\\mathbb{E}\\, d_{\\sf B}(X,Y)$ at the legitimate receiver (this is also done in \\cite{Song2014}). To do this, Node B uses a deterministic decoder that simply produces the codeword indexed by $(m,k)$, i.e.,\n\\begin{equation}\nP_{Y^n|MK}(y^n|m,k) = \\mathbf{1}\\{ y^n = y^n(m,k) \\}.\n\\end{equation}\nDefining $Q_{X^nMKY^n} \\triangleq Q_{X^nMK}P_{Y^n|MK}$, we can write\n\\begin{IEEEeqnarray}{rCl}\n\\mathbb{E}_{\\mathcal{C}} \\big\\lVert P_{X^nY^n} - Q_{X^nY^n} \\big\\rVert_{\\sf TV}\n&\\stackrel{(a)}{\\leq}& \\mathbb{E}_{\\mathcal{C}} \\big\\lVert P_{X^nMK}P_{Y^n|MK} - Q_{X^nMK} P_{Y^n|MK} \\big\\rVert_{\\sf TV} \\\\\n&\\stackrel{(b)}{=}& \\mathbb{E}_{\\mathcal{C}} \\big\\lVert P_{X^nMK} - Q_{X^nMK} \\big\\rVert_{\\sf TV}\\\\\n\\label{lossyjointclose} &\\stackrel{(c)}{\\rightarrow}& 0,\n\\end{IEEEeqnarray}\nwhere (a) and (b) follow from Properties~\\ref{tvproperties}d and \\ref{tvproperties}c, and (c) is due to \\eqref{distrtvlossy}. Now notice that $\\mathbb{E}_{\\mathcal{C}} Q_{X^nY^n}$ is exactly the product distribution $\\prod_{i=1}^n P_{XY}$ (a fact which is straightforward to verify). Therefore, by \\eqref{lossyjointclose} and the weak law of large numbers, we have\n\\begin{IEEEeqnarray}{rCl}\n\\mathbb{E}_{\\mathcal{C}}\\,\\mathbb{P}\\Big[d_{\\sf B}(X^n,Y^n) > \\mathbb{E}\\,d_{\\sf B}(X,Y)+\\varepsilon\\Big]\n&=& \\mathbb{E}_{\\mathcal{C}}\\,\\mathbb{P}_{Q_{X^nY^n}} \\Big[d_{\\sf B}(X^n,Y^n) > \\mathbb{E}\\,d_{\\sf B}(X,Y)+\\varepsilon\\Big] + o(1)\\\\\n&=& o(1)\n\\end{IEEEeqnarray}\nThis completes the achievability portion of the proof of Theorem~\\ref{mainresultlossy}.\n\n\n\\appendices\n\\section{Proof of Lemma~\\ref{typebound}}\nWe first bound $\\mathbb{P}[d(X^n,z^n)\\leq D]$ and resolve the event $X^n\\in \\mathcal{T}_{\\delta}^n$ afterward. We use the V-shell notation from the method of types \\cite{Csiszar1998}: for a stochastic matrix $V_{Z|X}$, the set of $z^n$ sequences having conditional type $V$ is denoted by $\\mathcal{T}_{V}^n(x^n)$. 
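We also use the standard cardinality bound for V-shells (see \\cite{Csiszar1998}): for any $z^n$ with type $T_{z^n}$ and any conditional type $V_{X|Z}$,\n\\begin{equation}\n|\\mathcal{T}_{V}^n(z^n)| \\leq 2^{\\,n H(V|T_{z^n})}, \\qquad H(V|T_{z^n}) = \\sum_{z} T_{z^n}(z)\\, H(V(\\cdot|z)),\n\\end{equation}\nwhich is the exponent written as $H(T_{x^n}|T_{z^n})$ in step (c) below. 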
Note that all pairs $(x^n,z^n)$ satisfying $z^n\\in \\mathcal{T}_{V}^n(x^n)$ have the same joint type (denoted by $T_{x^nz^n}$).\n\nDiving in, we have\n\\begin{IEEEeqnarray}{rCl}\n\\mathbb{P}[d(X^n,z^n)\\leq D] &=& \\sum_{x^n\\in\\mathcal{X}^n} P_{X^n}(x^n) \\mathbf{1}\\{d(x^n,z^n)\\leq D\\} \\\\\n &\\stackrel{(a)}{=}& \\sum_{V_{X|Z}} \\sum_{x^n\\in \\mathcal{T}_{V}^n(z^n)} P_{X^n}(x^n) \\mathbf{1}\\{d(x^n,z^n)\\leq D\\}\\\\\n &\\stackrel{(b)}{=}& \\sum_{V_{X|Z}} \\sum_{x^n\\in \\mathcal{T}_{V}^n(z^n)} 2^{-n(D(T_{x^n} || P_X)+H(T_{x^n}))} \\mathbf{1}\\{\\mathbb{E}_{T_{x^nz^n}} d(X,Z)\\leq D\\}\\\\\n &\\stackrel{(c)}{\\leq} & \\sum_{\\substack{V_{X|Z}:\\\\ \\mathcal{T}_{V}^n(z^n)\\neq\\emptyset}} 2^{nH(T_{x^n} | T_{z^n})} 2^{-n(D(T_{x^n} || P_X)+H(T_{x^n}))} \\mathbf{1}\\{\\mathbb{E}_{T_{x^nz^n}} d(X,Z)\\leq D\\}\\\\\n &=& \\sum_{\\substack{V_{X|Z}:\\\\ \\mathcal{T}_{V}^n(z^n)\\neq\\emptyset}} 2^{-n(I(T_{x^nz^n}) + D(T_{x^n} || P_X))} \\mathbf{1}\\{\\mathbb{E}_{T_{x^nz^n}} d(X,Z)\\leq D\\}\\\\\n \\label{exponent1} &\\stackrel{(d)}{\\leq}& \\exp \\Big\\{ - n \\min_{V: \\mathbb{E}_{T_{x^nz^n}} d(X,Z)\\leq D} [I(T_{x^nz^n}) + D(T_{x^n} || P_X)] + O(\\log n) \\Big\\}.\n\\end{IEEEeqnarray}\nIn step (a), we partition the set $\\mathcal{X}^n$ according to the conditional type of $x^n$ given $z^n$. Step (b) follows by observing that the summands only depend on the joint type of $(x^n,z^n)$. Step (c) uses a bound on the size of $\\mathcal{T}_{V}(z^n)$, and step (d) follows from the fact that the number of conditional types is polynomial in $n$.\n\nWe can continue by lower bounding the first term in the (normalized) exponent of \\eqref{exponent1}:\n\\begin{IEEEeqnarray}{rCl}\n\\IEEEeqnarraymulticol{3}{l}{\\nonumber\n\\min_{V: \\mathbb{E}_{T_{x^nz^n}} d(X,Z)\\leq D} I(T_{x^nz^n}) + D(T_{x^n} || P_X) \n}\\\\\n\\qquad\\qquad &\\geq& \\min_{z^n}\\min_{V: \\mathbb{E}_{T_{x^nz^n}} d(X,Z)\\leq D} I(T_{x^nz^n}) + D(T_{x^n} || P_X) \\\\\n&=& \\min_{Q_{XZ}:\\mathbb{E}_{Q} d(X,Z) \\leq D} I_Q(X;Z) + D(Q_X||P_X) \\\\\n&=& \\min_{Q_X} \\min_{Q_{Z|X}: \\mathbb{E}_{Q} d(X,Z) \\leq D} I_Q(X;Z) + D(Q_X||P_X) \\\\\n&=& \\min_{Q_X}\\, [R(D,Q_X) + D(Q_X||P_X)],\n\\end{IEEEeqnarray}\nwhere $R(D,Q_X)$ denotes the rate-distortion function for a source $Q_X$.\n\nSo far, we have shown that the following holds for all $z^n$:\n\\begin{equation}\n\\mathbb{P}[d(X^n,z^n)\\leq D] \\leq \\exp \\{ -n \\cdot \\min_{Q_X}\\, [R(D,Q_X) + D(Q_X||P_X)] + O(\\log n) \\}.\n\\end{equation}\n\nHowever, this is not quite the bound we seek; a simple example will reveal that it is possible to have\n\\begin{equation}\n\\min_{Q_X}\\, [R(D,Q_X) + D(Q_X||P_X)] < R(D).\n\\end{equation}\nIndeed, consider $P_X \\sim \\text{Bern}(p),p\\in(D,1\/2)$ and $Q_X \\sim \\text{Bern}(q)$. After simplifying, we find that\n\\begin{equation}\nR(D,Q_X) + D(Q_X||P_X) = q\\log \\frac{1}{p} + (1-q) \\log \\frac{1}{1-p} - h(D).\n\\end{equation}\nMinimizing this expression over $q\\in[0,1]$ gives\n\\begin{IEEEeqnarray}{rCl}\n\\min_{Q_X}\\, [R(D,Q_X) + D(Q_X||P_X)] &=& \\min\\Big\\{\\log\\frac{1}{p},\\log\\frac{1}{1-p}\\Big\\} - h(D)\\\\\n &<& h(p) - h(D)\\\\\n &=& R(D).\n\\end{IEEEeqnarray}\nTo resolve this issue, we introduce the event $X^n\\in \\mathcal{T}_{\\delta}^n$ into the expression we want to bound. 
Modifying the steps above accordingly, we have\n\\begin{IEEEeqnarray}{rCl}\n-\\frac{1}{n}\\log \\mathbb{P}[d(X^n,z^n)\\leq D, X^n \\in \\mathcal{T}_{\\delta}^n] &\\geq& \\min_{Q_X: \\lVert Q_X - P_X \\rVert_{\\sf TV} < \\delta} R(D,Q_X) + D(Q_X||P_X) - O(\\tfrac{\\log n}{n})\\\\\n&\\stackrel{(a)}{\\geq}& \\min_{Q_X: \\lVert Q_X - P_X \\rVert_{\\sf TV} < \\delta} R(D,Q_X) - O(\\tfrac{\\log n}{n})\\\\\n&\\stackrel{(b)}{=}& R(D) - O(\\delta \\log \\tfrac{1}{\\delta}) - O(\\tfrac{\\log n}{n})\\\\\n&=& R(D) - o(1),\n\\end{IEEEeqnarray}\nwhere step (a) is due to the non-negativity of relative entropy and step (b) follows from the uniform continuity of the rate-distortion function with respect to total variation distance (e.g., \\cite{Palaiyanur2008}).\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Conclusions}\n\\label{sec:conclusions}\n\nIn this work we described an approach for improving uncertainty propagation performance for sampling-based uncertainty quantification methods based on propagating multiple samples together through scientific simulations. We argued this should lead to improved aggregate performance by enabling reuse of sample-independent data, improving memory access patterns by replacing sparse gather\/scatter instructions with packed\/coalesced loads\/stores, improving opportunities for fine-grained SIMD\/SIMT parallelism, and reducing latency costs associated with message passing. We described a C++ template and operator overloading approach for incorporating the ensemble propagation approach in general scientific simulation codes, and discussed how tools implementing this technique have been incorporated into a variety of libraries within Trilinos, including the Kokkos manycore performance portability library. Furthermore, we argued how the approach improves portability by providing uniform access to fine-grained vector parallelism independent of the simulation code's ability to exploit those hardware capabilities. Finally, we demonstrated performance and scalability improvements for the approach by applying it to the solution of a simple PDE. \n\nPractical application of these ideas for uncertainty quantification requires grouping samples generated by the uncertainty quantification algorithm into sets of ensembles of a size most appropriate for the architecture. In the experiments covered in this paper, we found an ensemble size of 32 generally works well for all architectures considered. Grouping samples appropriately is critical for the approach to be effective for real scientific and engineering problems, where the simulation process should be as similar as possible for all samples within an ensemble. For example, the spectra of matrices used in the solution of linear\/nonlinear systems should be similar to prevent growth in solver iterations. Furthermore the code-paths required for samples within an ensemble need to be similar to achieve effective speed-ups. Work exploring algorithmic grouping approaches is underway and will be discussed in a subsequent publication. 
However once samples have been grouped, each ensemble can be propagated independently using the traditional coarse-grained distributed memory approach.\n\n\\section{Ensemble Propagation}\n\\label{sec:ensemble}\n\\begin{figure}\n\t\\centering\n\t\\subfigure[\\label{fig:kron_sys}]{\n\t\t\\includegraphics[width=0.45\\textwidth]{ensemble_kron}\n\t}\n\t\\quad\n \t\\subfigure[\\label{fig:kron_sys_commuted}]{\n\t\t\\includegraphics[width=0.45\\textwidth]{ensemble_kron_commuted}\n\t}\n \t\\caption{\\subref{fig:kron_sys} Block diagonal structure of Kronecker product system~\\eqref{eq:kron_sys}. The number of blocks is determined by the ensemble size $s$, and each block has the sparsity structure for $\\partial f\/\\partial u$. \\subref{fig:kron_sys_commuted} Block structure of Kronecker product system~\\eqref{eq:kron_sys_commuted}. The outer structure is determined by $\\partial f\/\\partial u$ where each nonzero is replaced by an $s\\times s$ diagonal matrix.}\n \t\\label{fig:ensemble_matrix}\n\\end{figure}\nIn this section we formalize ensemble propagation and illustrate its effect on a ubiquitous computational kernel in the simulation of partial differential equations (PDEs), the sparse matrix-vector product. For simplicity and brevity, we only consider steady-state problems here as the extension to transient problems is straightforward. Consider a steady-state, finite-dimensional, nonlinear system\n\\begin{equation}\n\tf(u,y) = 0, \\; u\\in\\mathbb{R}^n, \\; y\\in\\mathbb{R}^m, \\; f:\\mathbb{R}^n\\times \\mathbb{R}^m\\rightarrow\\mathbb{R}^n.\n\\end{equation}\nFor the simulation of PDEs, we assume the equations have been spatially discretized by some suitable method (e.g., finite element, finite volume, finite difference), in which case $u$ would represent the nodal vector of unknowns, and $f$ the discretized PDE residual equations. Here $y$ is a set of problem inputs, and we are interested in sampling the solution $u$ for numerous values of $y$. Given some number $s$, consider computing $u$ for $s$ values of $y$: $y_1,\\dots,y_s$ (we assume $s$ is small and in what follows will be a small multiple of the natural vector width of the computer architecture). Formally this can be represented by the Kronecker product system:\n\\begin{equation} \\label{eq:kron_sys}\n\\begin{gathered}\n\tF(U,Y) = \\sum_{i=1}^s e_i \\otimes f(u_i,y_i) = 0, \\\\\n\tU = \\sum_{i=1}^s e_i \\otimes u_i, \\; Y = \\sum_{i=1}^s e_i \\otimes y_i, \\; \n\\end{gathered}\n\\end{equation}\nwhere $e_i\\in\\mathbb{R}^s$ is the $i$th column of the $s\\times s$ identity matrix. In this system, the solution vector $U$ is a block vector where all $n$ unknowns for each sample are ordered consecutively. Furthermore, the Jacobian matrix $\\partial F\/\\partial U = \\sum_{i=1}^s e_ie_i^T \\otimes \\partial f\/\\partial u_i$ has a block diagonal structure, an example of which is shown in \\Figref{fig:kron_sys}.\n\nThe choice of ordering for the unknowns in $U$ is arbitrary, and in particular the unknowns can be ordered so that all sample values are stored consecutively for each spatial degree of freedom in $u$. 
Formally, this amounts to commuting the terms in the Kronecker product system:\n\\begin{equation} \\label{eq:kron_sys_commuted}\n\\begin{gathered}\n\tF_c(U_c,Y_c) = \\sum_{i=1}^s f(u_i,y_i) \\otimes e_i = 0, \\\\\n\tU_c = \\sum_{i=1}^s u_i \\otimes e_i, \\;\\; Y_c = \\sum_{i=1}^s y_i \\otimes e_i.\n\\end{gathered}\n\\end{equation}\nThe Jacobian matrix $\\partial F_c\/\\partial U_c = \\sum_{i=1}^s \\partial f\/\\partial u_i \\otimes e_ie_i^T$ also has a block structure where each scalar nonzero in the original matrix $\\partial f\/\\partial u$ is replaced by an $s\\times s$ diagonal matrix, an example of which is shown in \\Figref{fig:kron_sys_commuted}.\n\n\\begin{figure}\n\\begin{lstlisting}\n\/\/ Matrix stored in compressed row storage for an arbitrary floating-point type T\ntemplate \nstruct CrsMatrix {\n int num_rows; \/\/ number of rows\n int num_entries; \/\/ number of nonzeros\n int *row_map; \/\/ starting index of each row\n int *col_entry; \/\/ column index for each nonzero\n T *values; \/\/ matrix values of type T\n};\n\n\/\/ CRS matrix-vector product z = A*x for arbitrary floating-point type T\ntemplate \nvoid crs_mat_vec(const CrsMatrix& A, const T *x, T *z) {\n for (int row=0; row\nvoid ensemble_crs_mat_vec(const CrsMatrix& A, const T *x, T *z) {\n for (int e=0; e < s; ++e) {\n for (int row=0; row\nvoid ensemble_crs_mat_vec(const CrsMatrix& A, const T *x, T *z) {\n for (int row=0; i\nclass Ensemble {\n T val[s];\npublic:\n Ensemble(const T& v) {\n for (int e=0; e\nEnsemble\noperator*(const Ensemble& a, const Ensemble& b) {\n Ensemble c;\n for (int e=0; e>})\n\\item What Kokkos needs for reductions and atomic updates\n\\item What Tpetra needs to tell MPI how to communicate ensemble values\n\\end{enumerate}\nThe first category of operations make \\texttt{Ensemble} objects act\nlike a C++ built-in floating-point type. They include:\n\\begin{itemize}\n\\item Copy constructors and assignment operators from ensemble values\n as well as scalar values (that is, for objects of type\n \\texttt{Ensemble}, values of any type \\texttt{U}\n convertible to \\texttt{T})\n\\item Arithmetic operations ($+$, $-$, $\\times$, $\/$) and\n arithmetic-assignment operators ($+\\!\\!=$, $-\\!\\!=$, $\\times\\!\\!=$,\n $\/\\!\\!=$) from ensemble and scalar values\n\\item Comparison operations $>$, $<$, $==$, $<=$, and $>=$ between\n ensembles and scalar values, which base the result on the first\n entry in the ensemble\n\\item Overloads of basic mathematical functions declared in {\\tt\n cmath}, such as \\texttt{sin()}, \\texttt{cos()},\n \\texttt{exp()}, as well as other common operations such as\n \\texttt{min()} and \\texttt{max()}\n\\end{itemize}\nThe third category of operations lets Kokkos produce ensemble values\nas the result of a parallel reduction or scan, and ensures that if\nthreads update the same ensemble value concurrently, those updates are\ncorrect (assuming that order does not matter). Kokkos needs overloads\nof the above operations for ensemble values declared\n\\texttt{volatile}, and an implementation of Kokkos' atomic updates\nfor ensemble values. \\Secref{sec:solvers} will discuss the fourth\ncategory of operations.\n\n\\texttt{Ensemble} meets the C++ requirements for a \\emph{plain\n old data} type. 
The template parameter \\texttt{s} fixes its size at\ncompile time; it does no dynamic memory allocation inside.\nFurthermore, it has default implementations of the default\nconstructor, copy constructor, assignment operator, and destructor.\nThis implies that arrays of ensemble values can be allocated with no\nmore initialization cost than built-in scalars. Ensemble objects can\nbe easily serialized for parallel communication and input \/ output,\nsince all such objects have the same size on all parallel processes.\n\nOur ensemble scalar type employs expression\ntemplates~\\cite{VelhuizenET} to avoid creation of temporaries and fuse\nloops within expressions, thus reducing overhead. Since the ensemble\nloop is always the lowest-level loop, it has a fixed trip count and no\niteration dependencies. This means that the compiler can easily\nauto-vectorize each arithmetic operation, including insertion of\npacked load\/store instructions. However, difficulties do arise when\nmapping GPU threads to ensemble components for GPU SIMT\nparallelization. We discuss our GPU optimizations in \n\\Secref{sec:kokkos}.\n\nSince the original CRS matrix-vector multiply routine in \\Figref{fig:crs} is already templated on the scalar type, instantiating this code on the ensemble scalar type, {\\tt crs\\_mat\\_vec< Ensemble > >()} results precisely in the ensemble matrix-vector multiply routine in \\Figref{fig:ensemble_crs}. \n\n\\subsection{Incoporating the Ensemble Type in Complex Codes}\n\\label{sec:ensemble_complex_codes}\nReplacing the floating-point scalar type with the ensemble types accomplishes both steps of replacing sample-dependent\nvariables with ensemble arrays and arithmetic operations with ensemble loops. We generally advocate the use of C++ templates\nto facilitate this type change, whereby the floating-point type is replaced by a general template parameter. This allows the\noriginal code to be recovered by instantiating the template code on the original floating-point type, and the ensemble code\nthrough the ensemble scalar type. Furthermore, other scalar types can be used as well, such as automatic differentiation\ntypes for computing derivatives. We refer to the process of using C++ templates and a variety of scalar types to implement\ndifferent forms of analysis as \\emph{template-based generic programming}~\\cite{Pawlowski:2012kc,Pawlowski:2012js}.\nThis has been shown to be effective for supporting analyses such as ensemble propagation in complex simulation codes.\n\nThe most challenging part of incorporating the ensemble scalar type in complex code bases is its conversion to code templated on the scalar type. Developers must analyze the code to determine which values depend (directly or indirectly) on the input data that will be sampled, and therefore should be converted to ensembles by replacing the types of those values with a template parameter. While this is admittedly tedious, it is generally straightforward to accomplish. Furthermore, the compiler helps in this process since the ensemble scalar type does not allow direct conversions of ensemble values to scalar values. This prevents accidentally breaking the chain of dependencies from input data to simulation outputs. It is possible to implement this conversion through a helper function, which takes the first entry in the ensemble to initialize the resulting scalar. 
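A minimal sketch of such a helper is shown below; the function name, the template parameter ordering, and the direct access to the component array {\\tt val} are illustrative assumptions rather than a prescribed interface.\n\\begin{lstlisting}\n\/\/ Illustrative sketch only: extract a representative scalar from an ensemble\n\/\/ by taking its first component (assumes the ensemble type exposes val).\ntemplate <int s, typename T>\nT ensemble_to_scalar(const Ensemble<s,T>& x) { return x.val[0]; }\n\n\/\/ Overload for code paths that are already scalar: the identity map.\ninline double ensemble_to_scalar(const double& x) { return x; }\n\\end{lstlisting}\nCalling such a helper at the boundary between templated and not-yet-templated code makes the loss of ensemble information explicit at the call site. 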
Therefore one can incrementally convert a code to use ensembles by manually converting ensembles to scalars whenever ensemble code calls code that has not yet been converted. Note that scalars can be automatically converted to ensemble values by the compiler, which implies it is possible to incorrectly replace code that does not depend on the input data with ensembles. This can only be discovered through programmer analysis and optimization of the code.\n\nOnce all necessary scalar values have been replaced by ensembles in the simulation code, the ensemble propagation occurs\nautomatically by ``evaluating'' the resulting ensemble code. This requires adding suitable initialization and finalization code to initialize ensemble values for input data and extracting ensemble values for simulation results (using various ensemble constructors and coefficient access routines). Sample independent data are not replaced by ensembles, and\ntherefore reuse happens naturally through the normal compiler optimization process and overloaded operators that take a\nmixture of scalars and ensembles as arguments. \n\n\n\\subsection{Kokkos Performance Portability}\n\\label{sec:kokkos}\n\nKokkos~\\cite{Kokkos:2012:SciProg,Kokkos:2014:JPDC} is a programming model and C++ library that enables applications and domain libraries to implement thread scalable algorithms that are performance portable across diverse manycore architectures such as multicore CPU, Intel Xeon Phi, and NVIDIA GPU. \nKokkos' design strategy is to define algorithms with parallel patterns (for-each, reduction, scan, task-dag) and their code bodies invoked within these patterns, and with multidimensional arrays of their ``scalar'' data types.\nPerformance portability is realized through the integrated mapping of patterns, code bodies, multidimensional arrays, and datum onto the underlying manycore architecture.\n\n\nThis mapping has three components.\nFirst, code is mapped onto the target architecture's best performing threading mechanism; \\textit{e.g.}, pthreads or OpenMP on CPUs and CUDA on NVIDIA GPUs.\nSecond, parallel execution is mapped with architecture-appropriate scheduling; \\textit{e.g.}, each CPU thread is given a contiguous range of the parallel iteration space while each GPU thread is given a thread-block-strided range of the parallel iteration space.\nThird, multidimensional arrays are given an architecture-appropriate layout; \\textit{e.g.}, on CPUs arrays have a row-major or ``array of structs'' layout and on GPUs arrays have a column-major or ``struct of arrays'' layout.\nWhile this polymorphic multidimensional array abstraction has conceptual similarities to Boost.MultiArray~\\cite{website:Boost:MultiArray}, Kokkos' abstractions for explicit dimensions and layout specializations provides greater opportunities for performance optimizations. 
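As a deliberately simplified illustration of how these pieces fit together, the sketch below shows a vector update written once against a generic scalar type and dispatched through a Kokkos parallel pattern; the kernel itself is illustrative and is not taken from any of the libraries discussed here.\n\\begin{lstlisting}\n#include <Kokkos_Core.hpp>\n\n\/\/ y = y + alpha*x, written once for an arbitrary scalar type T.\n\/\/ Instantiating T = double gives the usual kernel; instantiating\n\/\/ T = Ensemble<s,double> updates s samples per entry of y.\ntemplate <typename T>\nvoid axpy(const double alpha,\n          const Kokkos::View<const T*>& x,\n          const Kokkos::View<T*>& y) {\n  const int n = static_cast<int>(y.extent(0));\n  Kokkos::parallel_for(n, KOKKOS_LAMBDA(const int i) {\n    y(i) += alpha * x(i);\n  });\n}\n\\end{lstlisting}\nThe same source code is then mapped to pthreads\/OpenMP threads on CPUs or to CUDA threads on GPUs by the pattern implementation, which is precisely the property exploited next for ensemble scalar types. 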
\n\n\nWhen scalar types are replaced with ensemble types in a Kokkos multidimensional array, the layout is specialized so that operations on ensemble types may exploit the lowest level of hierarchical parallelism.\nHierarchical thread parallelism can be viewed as ``vector'' parallelism nested within thread parallelism.\nThe mechanism to which Kokkos maps ``vector'' parallelism is architecture dependent.\nOn CPUs this level is mapped to vector instructions, typically through the compiler's optimization algorithms.\nOn GPUs this level is mapped to threads within a GPU warp, and then Kokkos' thread abstraction is remapped to the entire warp.\nThus on the GPU architecture ensemble operations are, transparent to the user code, performed in parallel by a warp of threads.\n\n\nThe specialized layout integrates the ensemble dimension into the multidimensional array such that the ensemble's values remain contiguous in memory on any architecture.\nThis contiguity is necessary to obtain the best ``vector'' level parallel performance for computations on irregular data structures; such as sparse matrices and unstructured finite elements.\nThese data structures typically impede performance by requiring non-contiguous scalar values to be gathered into contiguous memory (vector registers), processed with vector instructions, and then results scattered back to non-contiguous scalar values.\nOn CPU architectures these gather\/scatter operations \\textit{might} be automatically generated by a compiler that recognizes the indirection patterns of irregular data structures.\nWhen the scalar type is an ensemble each indirect access that previously referenced a single scalar value instead references the ensemble types' contiguous set of values (recall \\Figref{fig:ensemble_class}).\nAs such, gather\/scatter operations are no longer needed for vector instructions, and compilers can more easily generate vectorized operations.\n\n\nOn NVIDIA GPU architectures, warp-level gather\/scatter operations are generated by hardware, removing the need for sophisticated indirection-pattern recognition by the compiler.\nHowever, memory accesses are still non-coalesced gather\/scatter operations thus reducing performance.\nWhen the scalar type is an ensemble, indirect access patterns lead to coalesced reads and writes of contiguous ensemble values, resulting in improved performance.\n\n\nIn summary, replacing scalar types with ensemble types in computations on irregular data structures enables improved utilization of hierarchical parallel hardware such as multicore CPUs with vector instructions and GPUs.\nTo realize this improvement\n(1) Kokkos multidimensional array layouts are specialized to insure ensemble values are contiguous in memory and\n(2) ensemble operations are mapped to the ``vector'' level of Kokkos' hierarchical thread-vector parallelism.\nOn CPU architectures, this mapping happens automatically through the normal compiler vectorization process when applied to the ensemble loops. 
For GPU architectures however, this mapping occurs by creating a strided subview within the ensemble dimension of the multidimensional array for each GPU thread within a warp.\nThese two mappings insure that CPU vector instructions or GPU warp operations are performed on contiguous memory.\n\n\n\n\\subsection{Linear Algebra and Iterative Solvers}\n\\label{sec:solvers}\n\nThe ensemble scalar type and Kokkos library described above have both\nbeen incorporated into the Tpetra linear algebra\npackage~\\cite{Baker:2012:SciProg,TpetraURL} within Trilinos. Tpetra\nimplements parallel linear algebra data structures, computational\nkernels, data distributions, and communication patterns. ``Parallel''\nincludes both MPI (the Message Passing Interface) for\ndistributed-memory parallelism, and Kokkos for shared-memory\nparallelism within an MPI process. Supported data structures include\nvectors, ``multivectors'' that represent groups of vectors with the\nsame parallel distribution, sparse graphs, sparse matrices, and\n``block'' sparse matrices (where each block is a small dense matrix).\nTpetra's computational kernels include vector arithmetic, sparse\nmatrix-vector products, sparse triangular solve, and sparse\nmatrix-matrix multiply. It lets users represent arbitrary\ndistributions of data over MPI processes, and communicate data between\nthose distributions.\n\nTpetra is templated on the ``scalar'' type, the type of each entry in\nthe matrix or vector. In theory, this lets Tpetra work with any type\nthat ``looks like'' one of the C++ built-in floating-point types.\nTpetra uses this flexibility to provide built-in support for both\nsingle- and double-precision real and complex floating-point values,\nas well as 128-bit real floating-point arithmetic if the compiler\nsupports it. In practice, in order to support an arbitrary scalar\ntype, Tpetra needs it to implement all of the operations described in\n\\Secref{sec:ensemble_type}. In particular, Tpetra needs to tell MPI\nhow to communicate scalars. This means that Tpetra either needs to\nknow the \\texttt{MPI\\_Datatype} corresponding to the scalar type, or\nhow to pack and unpack scalars into byte arrays. The scalar type\ntells Tpetra this by implementing a C++ traits class specialization.\nTpetra in turn provides a type-generic MPI interface, both for itself\nand for users.\n\nThe Belos package~\\cite{Bavier:2012:SciProg} in Trilinos builds upon Tpetra data structures to provide parallel iterative linear solvers such as CG and GMRES. It is also templated on the scalar type, allowing ensembles to be propagated through these linear solver algorithms. Iterative solver algorithms do not directly access matrix and vector entries. They only need to know the results of inner product and norm calculations, and only deal with matrices and vectors as abstractions. Furthermore, viewing the ensemble system as the Kronecker product system~\\eqref{eq:kron_sys_commuted}, inner products and norms of ensemble vectors should be scalars and not ensembles. That is, given two ensemble vectors $U_c = \\sum_{i=1}^s u_i \\otimes e_i$ and $V_c = \\sum_{i=1}^s v_i \\otimes e_i$,\n\\begin{equation}\n\tU_c^T V_c = \\sum_{ij=1}^s u_i^T v_j \\otimes e_i^Te_j = \\sum_{i=1}^s u_i^T v_i.\n\\end{equation}\nTpetra assists Belos by defining a traits class that defines the proper data type for the result of inner product and norm calculations, and exposing these types to solvers and application code as public typedefs. 
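A minimal sketch of such a traits class is given below; the names ({\\tt InnerProductTraits}, {\\tt dot\\_type}) are illustrative stand-ins rather than Tpetra's actual interface.\n\\begin{lstlisting}\n\/\/ Illustrative sketch: the result type of dot products and norms.\ntemplate <typename Scalar>\nstruct InnerProductTraits {\n  typedef Scalar dot_type;  \/\/ ordinary scalars: nothing changes\n};\n\ntemplate <int s, typename T>\nstruct InnerProductTraits< Ensemble<s,T> > {\n  typedef T dot_type;       \/\/ ensembles: dot products and norms collapse to T\n};\n\\end{lstlisting}\nWith such typedefs in place, the inner product of two ensemble vectors is assembled in two stages. 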
First\n\\begin{equation}\n\t\\sum_{i=1}^s u_i^T v_i \\otimes e_i\n\\end{equation}\nis computed, which naturally arises when propagating the ensemble scalar type through the inner product code. The final inner product value is then computed by adding together each ensemble component. Thus, Belos does not directly generate or access ensemble values; they only appear internally within the matrix and vector data structures. Belos only needs one implementation for Tpetra objects of all scalar types, with no significant abstraction overhead.\n\n\\subsection{Multigrid Preconditioners}\n\\label{sec:preconditioners}\n\n\\begin{algorithm}[t]\n\\centering\n\\begin{algorithmic}[0]\n \\State{$A^0 = A$}\n \\Function{VCycle}{$A^k$, $b$, $x$, $k$}\n \\State{\/\/ Solve $A^k$ x = b (k is current grid level)}\n \\State $ x = S^{k}_{1} (A^k, b, x)$\n \\If{$(k \\ne {\\bf N-1})$}\n \\State{$r^{k+1} = R^k (b - A^k x )$}\n \\State{$A^{k+1} = R^k A^k P^k$}\n \\State{$z = 0$}\n \\State{}\\Call{VCycle}{$A^{k+1}$, $r^{k+1}$, $z$, $k+1$}\n \\State{$ x = x + P^{k} z$}\n \\State{$ x = S^{k}_{2} (A^k, b, x )$}\n \\Else\n \\State{$x = S^{c}(A^{N-1}, x, b)$}\n \\EndIf\n \\EndFunction\n\\end{algorithmic}\n\\caption{V-cycle multigrid with $N$ levels to approximate solution $Ax=b$.}\n\\label{vcycle}\n\\end{algorithm}\nMultigrid is a provably optimal linear solution method in work per digit of accuracy for\nsystems arising from elliptic PDEs.\nMultigrid works by accelerating the solution of a linear system of interest, $A^0x^0=b^0$, through a\nsequence or {\\em hierarchy} of increasingly smaller linear systems $A^ix^i=b^i, i>0$. The purpose of\neach system or {\\em level} is to reduce particular ranges of errors in the $i=0$ problem. Any errors that are not\nquickly damped by a particular system should be handled by a coarser problem.\n\nThe main components of a multigrid solver are {\\em smoothers}, which are solvers that operate only on\nparticular levels, and transfer operators to migrate data between levels.\nThe transfer from level $i+1$ to $i$ is called a {\\em prolongator} and denoted $P^i$.\nThe transfer from level $i$ to $i+1$ is the {\\em restrictor} and denoted $R^i$.\nA typical schedule for visiting the levels, called a {\\em V-cycle}, is given in Algorithm~\\ref{vcycle}. In practice, this\nalgorithm is divided into two phases: the setup phase where all of the matrix data used at each level is generated (e.g.,\n$R^k$, $P^k$, $A^k$, $S^k_1$, $S^k_2$ and $S^c$), and the solve phase where given $b$ and the data from the setup phase, $x$ is computed.\n\nMultigrid algorithms such as this have been implemented in the MueLu package within Trilinos~\\cite{muelu_user}. MueLu's performance with traditional scalar types on very large\n core counts has been studied before~\\cite{LinLPP14,LinPPL}. This library builds upon the\ntemplated Tpetra data structures and algorithms described above, with all of the functionality used in setup (such as the\nmatrix-matrix multiply $R^k A^k P^k$) and application of the V-cycle in Algorithm~\\ref{vcycle} (such as the smoothers\n$S^{k}_1$ and $S^{k}_2$) templated on the scalar type. Furthermore, the V-cycle algorithm in MueLu is encapsulated within an operator $z = M x$ allowing it to serve as a preconditioner for the Krylov methods in Belos. In this work, the restriction and prolongation operators $R^k$ and $P^k$ are generated from the graph of $A^k$ at each level, without any thresholding or dropping. 
This implies that given matrices $A_1,\\dots,A_s$ corresponding to $s$ samples within an ensemble, and corresponding multigrid preconditioners $M_1,\\dots,M_s$ generated for each matrix {\\em individually}, then the corresponding ensemble preconditioner $M_c$ is equivalent to\n\\begin{equation}\n\tM_c = \\sum_{i=0}^s M_i \\otimes e_ie_i^T.\n\\end{equation}\nIn the experiments described below, we use order-2 Chebyshev polynomial smoothers ($S_1^k$, $S_2^k$) and continue to generate levels in the multigrid hierarchy until the number of matrix rows falls below 500. For the coarse-grid solve $S^c$, a simple sparse-direct solver that is built into Trilinos, called Basker, is used. Note that in the current version of the code, the preconditioner setup and coarse grid solver do not leverage Kokkos directly and thus are thread-serial (but are MPI-parallel). Furthermore, for the GPU architecture, the coarse-grid solve is executed on the host using recent Unified Virtual Memory (UVM) features to automatically transfer data between the host and GPU. All other aspects of the V-cycle solve phase are fully thread-parallel using Kokkos.\n\nA major concern in parallel multigrid is the relative cost of communication to\ncomputation for levels $i>0$. For a good-performing multigrid method, matrix\n$A^{i+1}$ can typically have 10--30 times fewer rows than $A^i$, with only a\nmodest increase in the number of nonzeros per row. Practically, this means\nthat the ratio of communication to computation can increase by an order of\nmagnitude per level.\nIn MueLu, this issue is addressed by moving data\nto a subset of processes for coarser levels. Once, the number of processes\nfor a coarser level is determined, a new binning of the matrix rows\n(weighted by the number of nonzeros per row) is found using the\nmulti-jagged algorithm~\\cite{mj_tpds} from the Zoltan2 library~\\cite{zoltan2}.\nEach bin is assigned to a process so that data movement is restricted in the\ncoarser level to fewer processes. \nThis improves scalability of the preconditioner, particularly with large numbers of processes. Propagating ensembles through the preconditioner further reduces communication costs by amortizing communication latency across the ensemble. \n\n\\subsection{Build times and library sizes}\n\\label{sec:ETI}\n\nSome developers worry that extensive use of C++ templates may increase\ncompilation times and library sizes. The issue is that plugging each\nscalar type into templated linear algebra and solver packages results\nin an entirely new version of the solver to build. The compiler sees\na matrix-vector multiply with \\texttt{double} as distinct code from a\nmatrix-vector multiply of \\texttt{Ensemble}, for example. More\nversions of code means longer compile times and larger libraries.\nThis is not particular to C++ templates; the same would occur when\nimplementing ensemble computations automatically using\nsource-to-source translation with Fortran or C, with manual\ntranslation, or with some other language's flavor of compile-time\npolymorphism.\n\nA second issue is particular to C++. Most C++ compilers by default\nmust re-build templated code from scratch in each source file that\nuses it. Furthermore, deeply nested ``stacks'' of solver code do not\nactually get compiled until an application source file uses them with\na specific scalar type. 
For example, MueLu is templated and depends\non many Trilinos packages that are also templated, so using MueLu\nmeans that the application must build code from many different\nTrilinos packages. In practice, this shows up as long application\nbuild times, or even running out of memory during compilation.\n\nTrilinos fixes this with its option to use what it calls\n\\emph{explicit template instantiation} (ETI). This ``pre-builds''\nheavyweight templated code so applications do not have to build it\nfrom scratch each time. ETI corresponds to the second option in\nSection 7.5 of the GCC Manual \\cite{gcc52manual}, where Trilinos\nmanually instantiates some of its templated classes. \nETI means that building\nTrilinos might take longer, but building the application takes less\ntime. Trilinos breaks up many of its instantiations of templated\nclasses and functions into separate source files for different\ntemplate parameters, which keeps down build times and memory\nrequirements for Trilinos itself. \n\n\n\\subsection{Ensemble Divergence}\n\\label{sec:divergence}\nThe most significant algorithmic issue arising from the embedded ensemble propagation approach described above is \\emph{ensemble divergence}. Depending on the values of two given samples, the code paths taken during evaluation of the simulation code at those two samples may be different. These code paths must some how be joined together when those samples are combined into a single ensemble. We now describe different approaches for accomplishing this depending on how and where the divergence occurs within the simulation code.\n\\begin{figure}\n\\begin{lstlisting}\n\/\/ ...\n\nScalar x = ...\n\nScalar y;\nif (x > 0) {\n Scalar z = x*x;\n y = x + z;\nelse\n y = x;\n \n\/\/ ...\n\\end{lstlisting}\n\\caption{Example of ensemble divergence due to conditional evaluation resulting in multiple code branches. Here {\\tt Scalar} is a general template parameter that could be {\\tt double} for single-point evaluation or {\\tt Ensemble} for ensemble evaluation.}\n\\label{fig:conditional}\n\\end{figure}\n\\begin{figure}[h]\n\\begin{lstlisting}\n\n\/\/ Base template definition of EnsembleTrait that is empty.\n\/\/ It must be specialized for each scalar type T\ntemplate struct EnsembleTrait {};\n\n\/\/ Specialization of EnsembleTrait for T = double\ntemplate <> struct EnsembleTrait {\n typedef double value_type;\n static const int ensemble_size = 1;\n static const double& coeff(const double& x, const int i) { return x; }\n static double& coeff( double& x, const int i) { return x; }\n};\n\n\/\/ Specialization of EnsembleTrait for T = Ensemble\ntemplate struct EnsembleTrait< Ensemble > {\n typedef T value_type;\n static const int ensemble_size = s;\n static const T& coeff(const Ensemble& x, const int i) { return x.val[i]; }\n static T& coeff( Ensemble& x, const int i) { return x.val[i]; }\n};\n\n\/\/ ...\n\ntypedef EnsembleTrait ET;\ntypedef typename ET::value_type ScalarValue;\nconst int s = ET::ensemble_size;\n\nScalar x = ...\n\nScalar y;\nfor (int i=0; i 0) {\n ScalarValue z = xi*xi;\n yi = xi + z;\n }\n else\n yi = xi;\n \n\/\/ ...\n}\n\\end{lstlisting}\n\\caption{Handling ensemble divergence through the {\\tt EnsembleTrait} type trait which enables loops over ensemble components in a type-generic fashion. 
}\n\\label{fig:ensemble_conditional}\n\\end{figure}\n\nWhen divergence occurs at low levels within the simulation code, for example during element residual or Jacobian evaluation of the PDE, a simple approach for handling it is to add a loop over ensemble components that evaluates each sample individually. An example of this is demonstrated in \\Figrefs{fig:conditional} and~\\ref{fig:ensemble_conditional}. In \\Figref{fig:conditional}, a code branch is chosen based on the value of {\\tt x}, whose type is determined by the template parameter {\\tt Scalar}. When {\\tt Scalar} is a basic floating-point type such as {\\tt double} for a single-point evaluation, everything is fine. However when {Scalar} is {\\tt Ensemble}, only one of the branches can be chosen even when the components of {\\tt x} would choose different branches when evaluated separately. This is remedied in \\Figref{fig:ensemble_conditional} by adding a type-generic loop around the conditional, evaluating the loop body separately for each sample within the ensemble. This recovers the single-point behavior. This is accomplished through the type trait {\\tt EnsembleTrait} displayed in \\Figref{fig:ensemble_conditional} which has a trivial implementation for built-in types such as {\\tt double} allowing the code to be instantiated for both {\\tt double} and {\\tt Ensemble}. Clearly, the use of an ensemble loop such as this eliminates the architectural benefits of ensemble propagation through the body of the loop, and therefore should only be applied to small portions of the code where the bodies of the conditional branches are small.\n\nDivergence may also occur at high levels within the simulation code.\nExamples include iterative linear and nonlinear solver algorithms that\nrequire a different number of solver iterations for each sample, and\nadaptive time stepping and meshing schemes that dynamically adjust the\ntemporal and spatial discretizations based on error estimates computed\nfor previous time steps or solutions. While adding an ensemble loop\naround these calculations is certainly feasible, it defeats the\noriginal intent of incorporating ensemble propagation.\n\nAlternately, recall that the use of an ensemble scalar type is merely\na vehicle for implementing the Kronecker product\nsystem~\\eqref{eq:kron_sys_commuted}. As we discussed in\n\\Secref{sec:solvers}, the norm and inner product calculations that\ndrive convergence decisions for iterative solver algorithms as well as\nadaptivity decisions for time-stepping and meshing do not produce\nensemble values. Instead, they produce traditional floating-point\nvalues, which are the results of norms and inner products over entire\nensemble vectors. This effectively \\emph{couples} all of the ensemble\nsystems together, resulting in a single convergence or adaptivity\ndecision for the entire ensemble system. Thus divergence is handled\nin these cases through proper definition of the types used for\nmagnitudes and inner products, with associated traits classes for\ncomputing these quantities in a type-generic fashion. The resulting\ncoupled linear solver algorithm is analogous to block Krylov subspace\nmethods~\\cite{oleary1980block}. First, it couples the linear systems\ntogether \\emph{algorithmically}, not just computationally. Second,\nour approach opens up opportunities for increasing spatial locality\nand reuse in computational kernels, just as block Krylov methods do.\n(See, for example, \\cite{baker2006efficient}). 
Just as with block\nKrylov methods, however, coupling the component systems comprising the\nensemble system means that the solves are no longer algorithmically\nequivalent to uncoupled solves. This means that the choice of the\nensemble size $s$, as well as which samples are grouped together\nwithin each ensemble, will affect the performance of the resulting\nsimulation algorithms.\nTherefore, the solution to managing high-level solver divergence across ensemble values is group samples together in ensembles that minimize this divergence.\n\nFor example, the convergence of iterative linear solvers (or its number of\niterations) for each sample depends on several factors. Among these,\nthe most important are the condition number of the matrix associated\nwith $\\partial f\/\\partial u$ and the spatial variation and magnitude\nof the sample-dependent parameters. When the ensemble system $F_c$ is\nformed, the solver's convergence is always poorer than the solver\napplied to each sample individually. This happens because the spectrum\nof the ensemble matrix is the union of the matrix spectra of the\nsamples that comprise it; this likely increases the condition number\nand hence the number of iterations. For this reason, it is convenient\nto have a grouping strategy that gathers samples requiring a similar\nnumber of iterations in the same ensemble. Since this information is\nnot known {\\it a priori}, quantities such as those mentioned above can\nbe used to predict which samples feature a similar number of\niterations. Preliminary studies show that the variation of the\nsample-dependent parameters over the computational domain may induce a\ngrouping very similar to the one based on the number of iterations.\nAlgorithmic ideas along these lines will be explored in a future\npaper. In the computational studies below however, we assume the\nmatrix spectra for all samples are similar so that the number of\niterations of the ensemble system is constant with regards to\ngrouping.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:introduction}\nIn practice, data often comes with complex relations between features which are not explicitly visible, and extracting this structural information has been a crucial, yet challenging, task in the field of Machine Learning. Recently, renewed interest in relational and structure learning has been largely driven by the development of new end-to-end Neural Network and Deep Learning frameworks \\cite{bordes2013translating,xu2018representation,van2018relational}, with multiple promising results reported. This renewed drive in relational structure inference using Neural Networks can be partially attributed to current efforts to overcome the limited generalization capabilities of Deep Learning \\cite{battaglia2018relational}. More importantly, learning the relational structure with Neural Network models has several inherent advantages: strong and efficient parameterization ability of Deep Learning can extract essential relational information and perform large-scale inference, which are considered difficult with other learning algorithms.\\par\n\nRecently, research in relational learning using Neural Networks has largely focused on sequential generation\/prediction of dynamical systems, while static data has been largely ignored \\cite{louizos2017causal,kipf2018neural,sanchez2018graph}. 
At their core, these algorithms use either one or a combination of Graph Neural Networks (GNNs) \\cite{scarselli2008graph,kipf2016semi,wu2019comprehensive} and Variational Autoencoders (VAEs) \\cite{kingma2013auto}. The former provide a convenient framework for relational operations through the use of graph convolutions \\cite{schlichtkrull2018modeling}, and the latter offer a powerful Bayesian inference method to learn the distribution of the latent graph structure of data. Inspired by these recently developed methods, we devised a Neural Network based algorithm for relational learning on graph data.\\par\n\nIn this paper, inspired by the recent advances in the field of GNNs and VAEs, we propose Self-Attention Graph Variational Autoencoder (SAG-VAE), a novel VAE framework that jointly learns data representation and latent structure in an end-to-end manner. SAG-VAE utilizes the gumbel-softmax reparameterization \\cite{jang2016categorical} to infer the graph adjacency matrix, and employs a novel Graph Convolutional Network (also proposed by this paper) as the generative network. During the generative process, a sampled adjacency matrix will serve as the graph edge information for the novel Graph Network, and a sampled data representation will be fed into the network to generate data. Based on this framework, SAG-VAE will be able to directly infer the posterior distributions of both the data representation and relational matrix based simply on gradient descent. \\par\n\nSeveral experiments are carried out with multiple data sets of different kinds to test the performances. We observe in the experiments that SAG-VAE can learn organized latent structures for homogeneous image data, and the interpretation can match the nature of the certain type of image. Also, for graph data with known connections, SAG-VAE can retrieve a significant portion of the connections based entirely on feature observations. Based on these performances, we argue that SAG-VAE can serve as a general relational structure learning method from data. Furthermore, since SAG-VAE is a general framework compatible with most Variational Autoencoders, it is straightforward to combine advanced VAEs with SAG-VAE to create more powerful models. \\par\n\nThe rest of the paper is arranged as follows: Section \\ref{sec:relatedwork} conducts a literature review regarding methods related to the paper; Section \\ref{sec:methodology} introduces the background and discuss the proposed SAG-VAE; Experimental results are shown in section \\ref{sec:experiment}, and the implications are discussed; And finally, section \\ref{sec:conclusion} gives a general conclusion of the paper. \\par\n\n\n\n\\section{Related Work}\n\\label{sec:relatedwork}\nEarly interest in learning latent feature relations and structures partly stems from questions over causality in different domains \\cite{granger1969investigating,kuipers1984causal}. Before the era of machine learning, methods in this field substantially relied on domain knowledge and statistical scores \\cite{brillinger1976identification,watts1998collective}, and most of them work only on small-scale problems. The recent advancements in machine learning prompted the development of large-scale and trainable models on learning feature relations \\cite{linderman2016bayesian,yang2018glomo}. However, most of the aforementioned methods are domain-specified and are usually not compatible with general-purpose data. Thus, these models are not quantitatively evaluated or compared in this paper. 
\\par\nFeature relational learning in Neural Networks can trace its history from sequential models. Recurrent Neural Network (RNN) and its variants like LSTM \\cite{hochreiter1997long} are the early examples of relational learning methods, although their aspect of `feature relation' has been overwhelmed by their success in sequential modeling. After the emerge of Deep Learning, researchers in the domain of Natural Language Processing first built `neural relational' models to exploit the relations between features \\cite{mikolov2013efficient}. Recently, a variety of notable methods on neural relational learning, such as AIR \\cite{eslami2016attend}, (N-)REM \\cite{greff2017neural,van2018relational} and JK network \\cite{xu2018representation}, have achieved state-of-the-art performances by adopting explicit modeling of certain relations. However, although the models discussed above are powerful, most of them assume a known relational structure given by the data or experts, which means they do \\emph{not} have the ability in \\emph{extracting} feature relations. \\par\n\nMore recently, the idea of leveraging graph neural networks to learn feature relations has grasped considerable interests \\cite{battaglia2018relational}. The graph neural network model was originally developed in the early 2000s \\cite{sperduti1997supervised,gori2005new,scarselli2008graph}, and it has been intensively improved by a series of research efforts \\cite{bruna2013spectral,defferrard2016convolutional,henaff2015deep}. And finally, \\cite{kipf2016semi} proposed the well-renowned Graph Convolutional Network (GCN) model which established the framework of modern graph neural networks. Graph neural networks are increasingly popular in the exploration and exploitation of feature relations \\cite{sanchez2018graph,schlichtkrull2018modeling, Wang2018NerveNetLS}, and there are several methods in this domain similar to the proposed SAG-VAE. For instance, \\cite{kipf2018neural} embeds a graph neural network into the framework of Variational Autoencoders to learn the latent structure for dynamic models, and \\cite{Grover2019GraphiteIG} designs an iterative refining algorithm to extract the graph structure. Furthermore, \\cite{velivckovic2017graph} proposed the Variational Graph Autoencoder (VGAE) that can reconstruct graph edges from feature observations and limited number of given edges. The VGAE model provides a strong baseline to evaluate graph edge retrieval.\nApart from the graph neural networks, this paper is also closely related to Variational Autoencoders (VAEs) \\cite{kingma2013auto} and the Gumbel-Softmax distribution \\cite{jang2016categorical}. Among the numerous variations of the VAEs, \\cite{kipf2016variational} devises an auto-encoding inference structure composed by a Graph Convolutional Network-based encoder and an inner product-based decoder, which can accomplish tasks similar to the SAG-VAE. Furthermore, \\cite{pan2018adversarially} elaborated on the idea to use VAEs to learn explicit graph structure. Gumbel-softmax was introduced by \\cite{jang2016categorical} to provide a `nearly-discrete' distribution compatible with reparametrization and backpropagation. Based on this technique, we can compute gradients for each edge, which is considered impossible with the categorical distribution.\n\n\n\\section{Method}\n\\label{sec:methodology}\n\\subsection{Background}\n\\subsubsection{Graph Convolution Networks}\nWe first introduce Graph Convolutional Networks following the framework of \\cite{kipf2016semi}. 
A graph is denoted as $\\boldsymbol{G} = (\\boldsymbol{V},\\boldsymbol{E})$, where $\\boldsymbol{V}$ is the set of vertices and $\\boldsymbol{E}$ is the set of edges. The vertices and their features are denoted by a $n\\times d$ matrix, where $n=|\\boldsymbol{V}|$ and $d$ is number of features. A graph adjacency matrix $\\boldsymbol{A}$ of size $n\\times n$ is adopted to indicate the edge connections, and $\\boldsymbol{\\hat{A}}=\\boldsymbol{A}+\\boldsymbol{I}$ is used to introduce relevance for each vertex itself. A feed-forward layer is characterized by the following equation:\n\\begin{equation}\n\\label{equ:graphaggre}\n\\begin{aligned}\n \\boldsymbol{H}^{(l+1)} &= f_{\\boldsymbol{W}}(\\boldsymbol{H}^{(l)},\\boldsymbol{A}) \\\\\n &= \\sigma(\\boldsymbol{\\hat{D}}^{-\\frac{1}{2}} \\boldsymbol{\\hat{A}} \\boldsymbol{\\hat{D}}^{-\\frac{1}{2}} \\boldsymbol{H}^{(l)} \\boldsymbol{W}^{(l)} )\\\\\n &=\\sigma(\\boldsymbol{\\tilde{A}}\\boldsymbol{H}^{(l)} \\boldsymbol{W}^{(l)})\n\\end{aligned}\n\\end{equation}\nwhere $\\boldsymbol{\\hat{D}}$ is the diagonal matrix with $\\boldsymbol{\\hat{D}}_{s,s}=\\sum_{t}\\boldsymbol{\\hat{A}}_{s,t}$, and $\\boldsymbol{\\tilde{A}}=\\boldsymbol{\\hat{D}}^{-\\frac{1}{2}}(\\boldsymbol{A}+\\boldsymbol{I})\\boldsymbol{\\hat{D}}^{-\\frac{1}{2}}$ is the normalized adjacency matrix.\n\\subsubsection{Variational Autoencoders}\nVariational Autoencoders (VAEs) have been witnessed to be one of the most efficient approaches to infer latent representations of the data \\cite{kingma2013auto}. Following the standard notation, we use $p(\\cdot)$ to denote the real distribution and $q(\\cdot)$ for the variational distribution. Therefore, the inference model $q(\\boldsymbol{Z}|\\boldsymbol{X})$ and the generative model $p(\\boldsymbol{X}|\\boldsymbol{Z})$ as:\n\\begin{equation}\n\\begin{aligned}\n q(\\boldsymbol{Z}|\\boldsymbol{X}) &= \\prod_{i=1}^{m} q_{\\phi}(\\boldsymbol{z}_i|\\boldsymbol{x}_{i})\\\\\n p(\\boldsymbol{X}|\\boldsymbol{Z}) &= \\prod_{i=1}^{m} p_{\\theta}(\\boldsymbol{x}_{i}|\\boldsymbol{z}_i)\n\\end{aligned}\n\\end{equation}\nWhere $m$ stands for the amount of data. And under the Gaussian prior used in the original paper \\cite{kingma2013auto}, the inference network will be:\n\\begin{equation}\nq_{\\phi}(\\boldsymbol{z}_i|\\boldsymbol{x}_{i}) = \\mathcal{N} (\\boldsymbol{z}_i| \\mu_{\\phi(\\boldsymbol{x}_{i})}, \\texttt{diag}(\\sigma^{2}_{\\phi(\\boldsymbol{x}_{i})}))\n\\end{equation}\nand the optimization objective was given as the format of Evidence Lower Bound (ELBO): \n\\begin{equation}\n\\begin{aligned}\n \\log p(\\boldsymbol{X}) \\geq & -\\mathcal{L}(\\theta, \\phi) \\\\\n = & \\mathbb{E}_{\\boldsymbol{Z} \\sim q_{\\phi}(\\boldsymbol{Z}|\\boldsymbol{X})}[\\log p_{\\theta}(\\boldsymbol{X}|\\boldsymbol{Z})] \\\\\n &\\hspace{1.0cm} -D_{KL}(q_{\\phi}(\\boldsymbol{Z}|\\boldsymbol{X})||p(\\boldsymbol{Z}))\n\\end{aligned}\n\\end{equation}\nFor conjugate priors like Gaussian, the KL-divergence can be computed analytically to avoid the noise in Monte-Carlo simulations.\n\\subsubsection{Gumbel-Softmax Distribution}\nTo introduce feature relations as a graph structure, the most straightforward approach is to represent each edge connection as a Bernoulli random variable. Alas, there is no properly-defined gradients for discrete distributions like Bernoulli. Hence, to train the model in a backpropagation fashion, we must have some alternatives to the truly discrete distribution. 
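Before turning to that alternative, it may help to make the propagation rule of equation \\ref{equ:graphaggre} concrete. The following is a minimal NumPy sketch of a single graph-convolution layer written for this exposition (illustrative only, not the implementation used later):
\\begin{verbatim}
import numpy as np

def gcn_layer(H, A, W, act=np.tanh):
    # One layer of eq. (equ:graphaggre): sigma(D^-1/2 (A + I) D^-1/2 H W)
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d = A_hat.sum(axis=1)                           # degrees of A_hat
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_tilde = D_inv_sqrt @ A_hat @ D_inv_sqrt       # normalized adjacency
    return act(A_tilde @ H @ W)

# Example shapes: H is (n, d) node features, A is a symmetric (n, n)
# 0/1 adjacency with zero diagonal, W is a (d, d_out) weight matrix.
\\end{verbatim}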
Thanks to recent advances in Bayesian Deep Learning, we are able to utilize Gumbel-Softmax distribution \\cite{jang2016categorical} to simulate Bernoulli\/categorical distributions. A simplex-valued random variable $\\boldsymbol{a}$ from a Gumbel-Softmax distribution is a $k$-length vector characterized by the follows:\n\\begin{equation}\n\\label{equ:gumbel-dist}\n\\boldsymbol{a}_{1:k} = \\big(\\frac{\\exp((\\log(\\alpha_{k})+G_{k})\/\\tau)}{\\sum_{k=1}^{K}\\exp((\\log(\\alpha_{k})+G_{k})\/\\tau)}\\big)_{1:k}\n\\end{equation}\nwhere $\\alpha_{k}$ is proportional to the Bernoulli\/categorical probability and $G_{k}$ is a noise from the Gumbel distribution. The subscript $1:k$ indicates a softmax vector, and $\\tau$ is the temperature variable that control the `sharpness' of the distribution. A higher $\\tau$ will make the distribution closer to a uniform one, and a lower $\\tau$ will lead to a more discrete-like distribution. \\par\nNotice that the above equation is not a density function: the density function for the Gumbel-Softmax distribution is complex, and we usually do not use it in practice. What is of our interest is that we can design neural networks to learn $\\log(\\alpha_{k})$ for each class (in the case of graph edge connection, the number of classes is 2 since we want to approximate Bernoulli), and although the output of the neural network is not necessarily valid distributions, we can apply the reparametrization trick and the transformation of equation \\ref{equ:gumbel-dist} to get simplex-valued vectors. In this way, the neural network to learn $\\log(\\alpha_{k})$ (encoding network) can be trained by backpropagation since the gradients of equation \\ref{equ:gumbel-dist} is well-defined.\n\\subsection{Inference of SAG-VAE}\nBased on the above strategy, we introduce another latent variable $\\boldsymbol{A}$, which represents the distribution of the adjacency matrix of the graph. We only consider undirected graph in this paper, so the distribution an be factorized into $p(\\boldsymbol{A})=\\prod_{s=1}^{n}\\prod_{t=s+1}^{n}p(\\boldsymbol{A}_{s,t})$. Following the doctrine of variational inference, we use the Gumbel-Softmax distribution to approximate the probability for each edge:\n\\begin{equation}\n\\label{equ:gumbelapprox}\n q_{\\phi}(\\boldsymbol{A}_{s,t}|\\boldsymbol{X}) = \\texttt{Gumbel-Softmax}(\\phi_{1}(\\boldsymbol{A}_{s,t}|\\boldsymbol{X}))\n\\end{equation}\n\\par\nNotice that in equation \\ref{equ:gumbelapprox} there is no index on $\\boldsymbol{X}$, which means the learned adjacency matrix is a shared structure (amortized inference) and should be averaging over the input. In practice, one can apply Gumbel-Softmax to each $\\phi_{1}(\\boldsymbol{A}_{s,s}|\\boldsymbol{X}_{i})$, and averaging over the probability:\n\\begin{equation}\n\\label{equ:gumbelapproxave}\n q_{\\phi}(\\boldsymbol{A}_{s,t}|\\boldsymbol{X}) = \\frac{1}{m}\\sum_{i=1}^{m}\\texttt{Gumbel-Softmax}(\\phi_{1}(\\boldsymbol{A}_{s,t}|\\boldsymbol{X}_{i}))\n\\end{equation}\nas this will make the estimation of the KL-divergence part more robust. We will discuss more on this issue further in the later paragraphs.\n\\begin{figure}[!h]\n\\centering\n\\includegraphics[width=0.7\\textwidth]{model_compare}\n\\caption{\\label{fig:modelstructure}The model structure and graphical model of SAG-VAE.}\n\\end{figure}\n\\par\nTaking back the original $\\boldsymbol{Z}$ variable, the joint posterior is $p(\\boldsymbol{Z},\\boldsymbol{A}|\\boldsymbol{X})$. 
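As an implementation note, equations \\ref{equ:gumbel-dist} and \\ref{equ:gumbelapproxave} can be realized as in the following minimal NumPy sketch (the function name and the per-edge logit layout are assumptions of this sketch, not part of the model definition):
\\begin{verbatim}
import numpy as np

def gumbel_softmax_edges(logits, tau=0.5, rng=np.random.default_rng(0)):
    # logits: (n, n, 2) array of log(alpha_k) for the classes (no-edge, edge),
    # e.g. produced by the encoding network.  Returns one relaxed, nearly
    # binary, symmetric adjacency sample with values in (0, 1).
    u = rng.uniform(1e-10, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))                         # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = np.exp(y - y.max(axis=-1, keepdims=True))
    y = y / y.sum(axis=-1, keepdims=True)           # softmax over the 2 classes
    a = np.triu(y[..., 1], 1)                       # undirected: upper triangle
    return a + a.T

# In the amortized setting of eq. (equ:gumbelapproxave), such samples would be
# averaged over the m data points before being used as the adjacency matrix.
\\end{verbatim}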
Figure \\ref{fig:modelstructure} illustrates the difference between the vanilla VAE and the SAG-VAE. Observing from the graphical model of SAG-VAE, since $\\boldsymbol{A}$ and $\\boldsymbol{Z}$ are considered not $d$-separated, they are not necessarily independent given $\\boldsymbol{X}$. Nevertheless, to simplify computation, we perform the conditional independence approximation on the variational distributions:\\\\\n\\begin{equation}\n\\label{equ:posteriorapprox}\np(\\boldsymbol{Z}, \\boldsymbol{A}|\\boldsymbol{X}) \\approx q_{\\phi_1}(\\boldsymbol{Z}|\\boldsymbol{X})q_{\\phi_2}(\\boldsymbol{A}|\\boldsymbol{X})\n\\end{equation}\nCrucially, equation \\ref{equ:posteriorapprox} allows the posterior distributions to be separated, and therefore avoids noisy and expensive Monte-Carlo simulation of the joint KL-divergence. With the similar derivation developed in \\cite{kingma2013auto}, one can get the new ELBO of our model:\\\\\n\\begin{equation}\n\\label{equ:inferenceTarget}\n\\begin{aligned}\n \\log p(\\boldsymbol{X}) &\\geq -\\mathcal{L}(\\theta, \\phi_1, \\phi_2)\\\\\n &= \\mathbb{E}_{\\boldsymbol{Z}\\sim q_{\\phi_1}(\\boldsymbol{Z}|\\boldsymbol{X}),\\boldsymbol{A}\\sim q_{\\phi_2}(\\boldsymbol{A}|\\boldsymbol{X})}[\\log p_{\\theta}(\\boldsymbol{X}|\\boldsymbol{Z},\\boldsymbol{A})]\n \\\\&\\hspace{1.0cm} -D_{KL}[q_{\\phi_1}(\\boldsymbol{Z}|\\boldsymbol{X})||p(\\boldsymbol{Z})]\n \\\\&\\hspace{1.0cm} -D_{KL}[q_{\\phi_2}(\\boldsymbol{A}|\\boldsymbol{X})||p(\\boldsymbol{A})]\n\\end{aligned}\n\\end{equation}\nThe posterior distribution of $\\boldsymbol{Z}$ is characterized by a learned Gaussian distribution, and the prior $p(\\boldsymbol{Z})$ is standard Gaussian. We omit more complicated priors developed recently since our focus is not on powerful data representation. The posterior distribution of $\\boldsymbol{A}$ is ccharacterized by the learned Gumbel-Softmax distribution, and the prior of $\\boldsymbol{A}$ is a Bernoulli distribution with one-hot, uniform or specified values. \\par\nFor SAG-VAE, we need the dimension of the hidden representation to be equal to the number of dimension (one can see the reason in section \\ref{subsec:sagn}). Therefore, we propose two types of implementations. The first one is to apply a set of hidden distributions for each data point, as it is usually applied in ordinary VAEs; and the second one is to learn a distribution for each dimension. Noticeably, the latter scheme will lead to high-quality reconstruction results, albeit with the cost that the model becomes more vulnerable to noise\/perturbations and sampling from the SAG-VAE becomes difficult. Nevertheless, the advantages of robustness and noise-resistance of SAG-VAE are more obvious with the second implementation. \\par\nAnother issue to notice is the computation of the KL-divergence term $D_{KL}[q_{\\phi_2}(\\boldsymbol{A}|\\boldsymbol{X})||p(\\boldsymbol{A})]$. Notice that for the SAG-VAE with data point-wise representation, with the implementation based on equation \\ref{equ:gumbelapproxave}, the KL divergence will become:\n$$\\frac{1}{m}\\sum_{i=1}^{m}\\sum_{j=1}^{n^{2}-n}D_{KL}(q_{\\phi_2}(\\boldsymbol{A}_{j=\\{s,t\\}}|\\boldsymbol{X}_{i})||p(\\boldsymbol{A}_{j=\\{s,t\\}}))$$\nThis function is not properly normalized as the summation depends on $n$ but there is no such parameter on the denominator. For the per-dimension version of SAG-VAE, although we do have an additional $\\frac{1}{n}$ factor, this KL-divergence term can still be way too dominating as the summation is of $O(n^{2})$ terms. 
Thus, inspired by the idea in \\cite{chou2019generated}, we use a $\\beta_{A}=\\frac{1}{n^{2}-n}$ to normalize the KL-divergence term ($\\beta_{A}D_{KL}[q_{\\phi_2}(\\boldsymbol{A}|\\boldsymbol{X})||p(\\boldsymbol{A})]$) and improve the performance.\n\\subsection{Self-attention Graph Generative Network}\n\\label{subsec:sagn}\nThe generative network of SAG-VAE is composed by a novel Self-attention Graph Neural Network model design in this paper. We can denote this in a short-handed notation:\n\\begin{equation}\n\\label{equ:gennetwork}\np_{\\theta}(\\boldsymbol{X}_{i}|\\boldsymbol{Z}_{i},\\boldsymbol{A}) = \\textbf{SA-GNN}_{\\theta}(\\boldsymbol{Z}_{i}, \\boldsymbol{A})\n\\end{equation}\nThe Self-attention Graph Network follows the framework of \\cite{kipf2016semi} for information aggregation. On the top of that, one significant difference is the introduction of the self-attention layer. The approach is similar to the mechanism in Self-attention GANs \\cite{zhang2018self}, but instead of performing global attention regardless of geometric information, the self-attention layer in our model is based on the neighbouring nodes. The reason for adopting such a paradigm in this model is that the node features and edge connections are learned instead of given. And if a global unconditional attention is performed, the errors on the initialization stage will be augmented. \\par\nSuppose feature $\\boldsymbol{H}^{(l)}$ is the output of the previous layer has the shape $[n \\times d^{(l)}]$, where $n$ and $d^{(l)}$ represent the number of dimensions (vertices) and graph features, respectively. Now for node $i$ and any other node $j \\in \\mathcal{N}_{i}$ (neighboring vertices), the relevance value $e_{i,j}$ is computed as follows:\n\\begin{equation}\n\\label{equ:attentionRelevance}\ne_{i,j} = (\\boldsymbol{H}^{(l)}_{i}\\boldsymbol{W_{l}})(\\boldsymbol{H}^{(l)}_{j}\\boldsymbol{W_{r}})^{T}\n\\end{equation}\nwhere $\\boldsymbol{W_{l}}$ and $\\boldsymbol{W_{r}}$ are the $[d^{(l)} \\times \\bar{d}]$ convolution matrices to transform the $d$-dimension features to $\\bar{d}$-dim attention features. Finally, having taken into consideration the graph edge connections as geometric information, we perform the softmax operation on the neighboring nodes of $i$ (including itself). Formally, the attention value will be computed as:\\\\\n\\begin{equation}\n\\label{equ:attentionSoftmax}\n\\alpha_{i,j} = \\frac{\\exp(e_{i,j})}{\\sum_{j\\in \\mathcal{N}_{i}\\cup\\{i\\}}\\exp(e_{i,j})}, \\forall j \\in \\mathcal{N}_{i}\\cup\\{i\\}\n\\end{equation}\nIn practice, the above operation can be done before normalization in parallel by multiplying the relevance information computed by equation \\ref{equ:attentionRelevance} with the adjacency matrix. This attention mechanism is similar to Graph Attention Network (GAT) \\cite{velivckovic2017graph}, with one main difference being that in GAT the relevance features are aggregated and multiplied by a learnable vector, while in SA-GNN the relevance features are directly processed by dot products. 
After computing $\\alpha_{i,j}$ for each pair and obtaining the matrix $\\boldsymbol{\\alpha}$, the attention result can be directly computed by matrix multiplication in the same manner of \\cite{zhang2018self}:\\\\\n\\begin{equation}\n\\label{equ:attentionmul}\n\\boldsymbol{\\bar{H}}^{(l)} = [\\boldsymbol{\\alpha}(\\boldsymbol{H}^{(l)}\\boldsymbol{W_{g}})]\\boldsymbol{W_{f}}\n\\end{equation}\nwhere $\\boldsymbol{W_{g}}$ and $\\boldsymbol{W_{f}}$ are the $[d^{(l)} \\times \\bar{d}]$ and $[\\bar{d} \\times d^{(l)}]$ transformation matrices, respectively. The main purpose of using the two matrix is to reduce computational cost. \\par\nTo introduce more flexibility, we considered incorporating edge weights into the attention mechanism. The weights can be computed by the encoding matrix with a share structure of $q_{\\phi_2}(\\boldsymbol{A})$ network. Formally, this can be expressed as:\\\\\n\\begin{equation}\n\\label{equ:attentionWeights}\n\\boldsymbol{V} = \\boldsymbol{\\hat{\\phi}_2}(\\boldsymbol{X})\n\\end{equation}\nwhere $\\boldsymbol{\\hat{\\phi}_2}(\\cdot)$ indicates a network shares the structure with $\\boldsymbol{\\phi_2}(\\cdot)$ except the last layer. Meanwhile, the main diagonal of $\\boldsymbol{V}$ will be set to 1. Therefore, equation \\ref{equ:attentionSoftmax} can be revised into:\\\\\n\\begin{equation}\n\\label{equ:attentionSoftmaxWeighted}\n\\alpha_{i,j} = \\frac{\\exp(e_{i,j})V_{i,j}}{\\sum_{j\\in \\mathcal{N}_{i}\\cup\\{i\\}}\\exp(e_{i,j})V_{i,j}}, \\forall j \\in \\mathcal{N}_{i}\\cup\\{i\\}\n\\end{equation}\nAnd in a similar idea to \\cite{zhang2018self}, the attention-based feature will be multiplied by a $\\lambda$ coeffcient originally set as 0 and added to the features updated by the rules in vanilla GCN:\\\\\n\\begin{equation}\n\\label{equ:attentionOutput}\n\\boldsymbol{H}^{(l+1)} = (\\lambda\\boldsymbol{\\bar{H}}^{(l)} + \\boldsymbol{\\tilde{A}} \\boldsymbol{H}^{(l)})\\boldsymbol{W}^{(l)}\n\\end{equation}\nwhere $\\boldsymbol{W}^{(l)}$ is the convolution weights of the $l$-th layer. Based on the above equation, the network will first focus on learning the graph geometry (edges), and then using the attention mechanism to improve the generation quality. \\par\nOne potential issue of training VAEs is the so-called `posterior collapse', i.e., the posterior distribution becomes irrelevant from data when the decoder (generative model) is powerful. Graph neural networks are powerful models, so to make sure the posterior distributions are properly trained, we introduced the idea in \\cite{dieng2018avoiding} to enforce correlation between the generated graph adjacency matrix and the output of each layer. Specifically, we use skip connection to interpolate the latent representation of data with the self-attention-processed information at each layer. This can be denoted as:\n\\begin{equation}\n\\boldsymbol{H}^{(l+1)} = \\sigma(\\lambda\\boldsymbol{\\bar{H}}^{(l)} + \\boldsymbol{\\tilde{A}} \\boldsymbol{H}^{(l)})\\boldsymbol{W}^{(l)} + \\boldsymbol{\\tilde{A}}\\boldsymbol{H}^{(1)}\\boldsymbol{\\hat{W}}^{(l)}\n\\end{equation}\nwhere $\\sigma(\\cdot)$ is the non-linear activation, $\\boldsymbol{H}^{(1)}$ is the latent representation of the data (directly from $\\boldsymbol{Z}$), and $\\boldsymbol{\\hat{W}}^{(l)}$ is the convolutional weight between the latent representation and the current layer. 
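Putting equations \\ref{equ:attentionRelevance}--\\ref{equ:attentionmul} and the skip connection together, a single SA-GNN layer can be sketched as follows (a minimal, dense, single-head NumPy illustration; the variable names and the dense neighbor masking are simplifications made for this sketch rather than our actual implementation):
\\begin{verbatim}
import numpy as np

def sa_gnn_layer(H, H1, A_tilde, A_mask, Wl, Wr, Wg, Wf, W, W_hat,
                 lam=0.0, act=np.tanh):
    # H: (n, d) features from the previous layer; H1: latent representation;
    # A_tilde: normalized adjacency; A_mask: 0/1 adjacency (no self-loops).
    E = (H @ Wl) @ (H @ Wr).T                       # relevance e_ij
    mask = A_mask + np.eye(A_mask.shape[0])         # attend over N_i and i itself
    E = np.where(mask > 0, E, -np.inf)              # restrict softmax to neighbors
    alpha = np.exp(E - E.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    H_bar = (alpha @ (H @ Wg)) @ Wf                 # self-attention features
    # lambda-blended attention + GCN aggregation, plus the skip connection
    return act(lam * H_bar + A_tilde @ H) @ W + A_tilde @ H1 @ W_hat
\\end{verbatim}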
For the last layer, we apply activation after amalgamating the information:\n\\begin{equation}\n\\label{equ:finalOutput}\n\\boldsymbol{H}^{(L)} = \\sigma \\big((\\lambda\\boldsymbol{\\bar{H}}^{(L-1)} + \\boldsymbol{\\tilde{A}} \\boldsymbol{H}^{(L-1)})\\boldsymbol{W}^{(L)} + \\boldsymbol{\\tilde{A}}\\boldsymbol{H}^{(1)}\\boldsymbol{\\hat{W}}^{(L)}\\big)\n\\end{equation}\nto preserve the properties produced by the chosen activation (e.g., Sigmoid produces outputs in $[0,1]$). Finally, it is important to note that in the VAE framework, the latent variable $\\boldsymbol{Z}$ does not naturally fit in the GCN framework where each node is treated as a feature vector. Thus, for the data point-wise distribution version of SAG-VAE, one needs to first transform the dimension into $n$ with a fully-connected layer, and then add one dimension to get a $[m \\times n \\times 1]$ tensor. In contrast, for the SAG-VAE with dimension-wise distributions, one can directly sample a $[m \\times n \\times d]$ tensor to operate on with the GCNs.\n\n\n\n\\section{Experiments}\n\\label{sec:experiment}\nIn this section, we demonstrate the performance of SAG-VAE on various tasks. Intuitively, by learning the graph-structured feature relations, SAG-VAE has two advantages over ordinary VAEs and their existing variations: interpretable relations and insights between features, and robustness against perturbations. To validate the correctness of the learned feature relations, one can apply SAG-VAE to the task of retrieving graph edges based on node feature observations. On the other hand, for the robustness of the SAG-VAE model, one can test the performance on tasks such as reconstruction with noise\/mask and sampling with perturbations. \\par\nFor most of the experiments, the SAG-VAE models are implemented with dimension-wise distributions. This setup is chosen because the advantages of SAG-VAE are more pronounced with it. The data point-wise distribution counterpart of the SAG-VAE is also straightforward to implement, although the parameters are more difficult to tune. \n\\subsection{Graph Connection Retrieval}\nWe apply two types of feature observations based on graph data. For the first type, the features are generated by a 2-layer Graph Neural Network (GCN in \\cite{kipf2016semi}) by propagating information between neighboring nodes; for the second type, we pick graph data with given feature observations and randomly drop out rows and add Gaussian noise to obtain a collection of noisy data (a short sketch of this corruption procedure is given below). Note that this task of retrieving graph edges from feature observations alone is considered an interesting problem in the area of machine learning. To facilitate the training process, for the SAG-VAE model used for graph connection retrieval, we apply an `informative' prior that adopts the edge density as the prior of the Bernoulli distribution. This is a realistic assumption, and this type of information is likely available for real-life problems. Thus, it does not affect the fairness of performance comparisons. \\par\nResults of experiments on the two types of graph data illustrate that SAG-VAE can correctly retrieve a significant portion of links (satisfactory recall) while avoiding overly redundant connections (satisfactory precision). For the first type of data, SAG-VAE can effectively generalize the reconstruction to an unseen pattern of positions. Also, by sampling from the hidden distributions, new patterns of positions can be observed. For the second type of data, SAG-VAE can outperform major existing methods. 
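For concreteness, the second type of corrupted feature observations described above can be generated with a procedure like the following (a minimal sketch; the dropout fraction and noise level are illustrative values, not the exact settings used in our experiments):
\\begin{verbatim}
import numpy as np

def noisy_copies(X, n_copies=100, drop_frac=0.1, noise_std=0.1, seed=0):
    # X: (n, d) node-feature matrix with given features.  Each copy randomly
    # zeroes out rows (dropout) and adds Gaussian noise.
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_copies):
        Xi = X.copy()
        drop = rng.random(X.shape[0]) < drop_frac
        Xi[drop] = 0.0
        Xi += rng.normal(0.0, noise_std, size=X.shape)
        out.append(Xi)
    return np.stack(out)
\\end{verbatim}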
In addition, the inference of hidden representation is a unique advantage comparing to existing methods. \\par\nTo show the performance advantages of SAG-VAE, the performances of SAG-VAE are compared with pairwise product and Variational Graph Autoencoder (VGAE) \\cite{kipf2016variational}. The number of models for comparison is in small scale since there is only limited number of methods capable of inferring links based entirely on feature observations. The most naive model (pairwise product) is to directly compute the dot product between any pair of vertices, and use Sigmoid to produce the probability for a link to exist. This simple method serves as the baseline in the experiments of \\cite{kipf2016variational}, although the features in the original experiments were calculated by DeepWalk \\cite{perozzi2014deepwalk}. More advanced baselines are based on VGAE, which use part of the graph to learn representation and generalize the generation to the overall graph. The direct comparison between VGAE and SAG-VAE is to remove all edge connections and feed the graph data to VGAE with only `self-loops' on each node. To further validate the superiority of SAG-VAE, we also demonstrate the performance of VGAE with 10\\% of edges preserved in the training input, and show that SAG-VAE can outperform VGAE even under this biased setup. \n\\subsubsection{Karate Synthetic Data}\nWe adopt Zachary's karate club graph to generate the first type of feature observations. In the implementation, each type of node (labeled in the original dataset) is parametrized by an individual Gaussian distribution, and 5 different weights are adopted to generate graphs with 5 patterns. During the training phase, only the first 4 types of graphs are provided to the network, and the final pattern is used to test if the trained SAG-VAE is able to generalize the prediction. \\par\nFigure \\ref{fig:KarateReconstruction} illustrates the reconstruction of 3 patterns of node positions based on the SAG-VAE with an individual Gaussian distribution on each dimension. From the figure, it can be observed that the SAG-VAE model can approximately correctly reconstruct the node positions, and while the patterns of links are not exactly the same as the original, the overall geometries are similar in terms of edge distributions. In addition, for the unseen pattern (the rightmost column), the model successfully infers the position and the key links of the graph. \n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{karate_reconstruction.jpg}\n\\caption{\\label{fig:KarateReconstruction}SAG-VAE reconstruction of the position and link information of Zachary's karate club data. \\textit{Top}: Ground Truth; \\textit{Middle}: Position Reconstruction; \\textit{Bottom}: Position and Link Reconstruction. Notice that the pattern of the right-most column is not seen by SAG-VAE during the training phase.}\n\\end{figure}\n\\begin{figure}[!h]\n\\centering\n\\includegraphics[width=0.7\\textwidth]{karate_representation.jpg}\n\\caption{\\label{fig:karateSampling}Karate position sampling from SAG-VAE with two different implementations}\n\\end{figure}\n\\par\nFigure \\ref{fig:karateSampling} shows the sampling results with both data point- and dimension-wise representation of SAG-VAE. From the figures, it can be observed that both versions of SAG-VAE can generate Karate data information in an organized manner. 
Sampling from the SAG-VAE with data-wise latent code can further restrict the patterns of the graph, while sampling from its dimension-wise counterpart appears to get a more organized distribution on the node level with different types of nodes better segmented. \\par\nTable \\ref{tab:karate} illustrates the comparison of performance between different methods on the Karate-generated data. From the table it can be observed that SAG-VAE with both data-wise and dimension-wise implementations can outperform methods of comparisons. It is noticeable that for this graph generation task, adding 10\\% ground-truth links does not help significantly improve the $F_{1}$ score of VGAE. In contrast, simply applying pairwise product will lead to a better performance in this case.\n\\begin{table}\n\\begin{center}\n{\\caption{Performance comparison between SAG-VAE and other methods on Karate-generated data.}\\label{tab:karate}}\n\\begin{tabular}{p{4cm}|ccc}\nMethod & Precision & Recall & $F_{1}$ score \\\\\\midrule[1pt]\nPairwise Product & 0.139 & 0.985 & 0.243 \\\\ \\hline\nVGAE (no input edge) & 0.142 & 0.524 & 0.223 \\\\ \\hline\nVGAE (10 \\% link) & 0.150 & 0.539 & 0.234 \\\\ \\hline\nSAG-VAE (data-wise) & \\textbf{0.616} & \\textbf{0.558} & \\textbf{0.586} \\\\ \\hline\nSAG-VAE (dimension-wise) & \\textbf{0.558} & \\textbf{0.611} & \\textbf{0.583} \\\\ \\bottomrule[1pt]\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\subsubsection{Graph Data with given Node Features}\nTable \\ref{tab:graphexperiment} illustrates the comparison of performance ($F_{1}$ scores) between different models on three benchmark graph data sets: Graph Protein \\cite{borgwardt2005protein,dobson2003distinguishing}, Coil-rag \\cite{riesen2008iam,nene1996columbia} and Enzymes \\cite{borgwardt2005protein,schomburg2004brenda}. All the 3 types of data come with rich node feature representation, and we obtain the training and testing data by selecting one sub-graph from the data and apply the second type of data generation (with random noise and row dropout). The extracted graph are of size 64, 6 and 18, respectively. Comparing to the Karate data used above, the graphs adopted here are significantly sparser \\par\nFrom Table \\ref{tab:graphexperiment}, it can be observed that SAG-VAE can outperform methods adopted for comparison, especially for the VGAE-based results. For VGAE, the performance is poor for all datasets and adding back 10\\% links does not help remedy the situation. On the other hand, simply applying pairwise product yields in quite competitive performances. One possible reason behind this observation is that since the node features are highly noisy, it is very difficult for the VAE architecture to learn meaningful embedding of the nodes; on the other hand, since the feature representations are originally rich, pairwise product can capture sufficient information, and therefore leads to an unexpected good performance. 
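For reference, the precision, recall and $F_{1}$ scores reported in Tables \\ref{tab:karate} and \\ref{tab:graphexperiment} can be computed from a predicted and a ground-truth adjacency matrix as in the following minimal sketch (undirected graphs, self-loops ignored; an illustrative evaluation routine rather than the exact script used):
\\begin{verbatim}
import numpy as np

def edge_retrieval_scores(A_pred, A_true):
    # A_pred, A_true: (n, n) 0/1 adjacency matrices for undirected graphs.
    iu = np.triu_indices(A_true.shape[0], k=1)      # unique vertex pairs
    p, t = A_pred[iu].astype(bool), A_true[iu].astype(bool)
    tp = np.sum(p & t)
    precision = tp / max(p.sum(), 1)
    recall = tp / max(t.sum(), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f1
\\end{verbatim}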
The curse of noisy feature is resolved by applying SAG-VAE: with the merits of the joint inference of data representation and feature relations, the model can overcome the problem of noise under the VAE framework and lead to overall superior performances.\n\\begin{table}[h]\n\\begin{center}\n{\\caption{Performance comparison ($F_{1}$ score only) between SAG-VAE and other methods on graph data with given node features.}\\label{tab:graphexperiment}}\n\\begin{tabular}{p{4cm}|ccc}\nMethod & Protein & Colirag & Enzymes \\\\\\midrule[1pt]\nPairwise Product & 0.367 & 0.714 & 0.410\\\\ \\hline\nVGAE (no input edge) & 0.276 & 0.620 & 0.315 \\\\ \\hline\nVGAE (10 \\% link) & 0.283 & 0.643 & 0.319\\\\ \\hline\nSAG-VAE (dimension-wise) & \\textbf{0.385} & \\textbf{0.800} & \\textbf{0.423} \\\\ \\bottomrule[1pt]\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\n\\subsection{Image Data: Robust Reconstruction and Sampling}\n\\label{sec:expimage}\nAs it is stated before, we expect SAG-VAE to have a more robust performance against perturbations because of the learned correlations between features can lead to a noise-resisting inductive bias. In this section, we test the robustness of SAG-VAE on two image datasets: MNIST and Fashion MNIST. The performances are evaluated based on 3 tasks: masked\/corrupted reconstruction, noisy reconstruction, and noisy sampling. Intuitively, for the reconstruction tasks, if the reconstructed images from SAG-VAE are of higher qualities than those from plain VAE, the robustness of SAG-VAE will be corroborated. Moreover, the noisy sampling task will directly perturb some of the hidden representations, and the inductive bias in SAG-VAE will be able to overcome it. Finally, the plots of the adjacency matrices will show how well the model learned the structured relationships between features. While we may not have any metric to measure it, we can observe if the learned relations are structured and if they are consistent with the characteristics of the images. \\par\nIn these experiments, we only implemented SAG-VAE with dimension-wise distributions. This type of model can produce reconstruction with higher qualities, but it is more vulnerable to perturbation. Therefore, testing with this type of implementation can better illustrate the advantages of SAG-VAE. A drawback of the dimension-wise distribution in sampling is that it makes the data representation harder to obtain, as there is no immediate low-dimension latent codes. Hence, to conduct the sampling process, we model the mean and variance of each pixel for data with different labels. We use Gaussian distribution:\n\\begin{align*}\n\\boldsymbol{\\mu} \\sim \\mathcal{N}(\\boldsymbol{\\mu}_{\\mu},\\boldsymbol{\\sigma}_{\\mu}) \\qquad \\boldsymbol{\\sigma} \\sim \\mathcal{N}(\\boldsymbol{\\mu}_{\\sigma},\\boldsymbol{\\sigma}_{\\sigma})\n\\end{align*}\nto approximately model the manifold and distributions of each dimensions. Notice that unlike graph data, for images, using dimension-wise distribution will bring high image variance. Therefore, it is not recommended to use this strategy in practice. We apply this paradigm here mainly for the purpose to illustrate the robustness of the SAG-VAE.\n\\subsubsection{Noisy and Masked Reconstruction}\nBoth MNIST and Fashion MNIST images are in the shape of $28\\times28$. For the Fashion MNIST, to better leverage a common structure, we remove the image classes that are not shirt-like or pans-like since their geometries are significantly different from the rest of the dataset. 
To artificially introduce adversarial perturbation on images, two types of noises are applied: uniform noise and block-masking (corruption). For uniform noise-based perturbation, 200 pixels (150 for Fashion MNIST) are randomly selected and replaced with a number generated from uniform distribution $U(0,1)$. For masked-based perturbation, a block of $6\\times6$ is added at random position on each image, thus a small portion of the digit or object in the image is unseen.\\par\nWe firstly test SAG-VAE on MNIST data with perturbations. 10 reconstructed images with corresponding perturbed and original images are randomly selected and presented in figure \\ref{fig:VAEs_denoise_mnist} and figure \\ref{fig:VAEs_mask_mnist}. On the same image, the performance of vanilla VAE is also illustrated. The vanilla VAE is implemented with fully connected layer for each dimension, which is equivalent to SAG-VAE with the adjacency matrix (links) to be zero for all but the main diagonal. \\par\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.75\\textwidth]{denoise_mnist_sagvae_vae.jpg}\n\\caption{\\label{fig:VAEs_denoise_mnist}Reconstruction comparison on noisy MNIST. \\textit{Top}: Noisy images; \\textit{2nd row}: VAE Reconstruction; \\textit{3rd row}: SAG-VAE Reconstruction; \\textit{Bottom}: Original images.}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.75\\textwidth]{mask_mnist_sagvae_vae.jpg}\n\\caption{\\label{fig:VAEs_mask_mnist}Reconstruction comparison on masked MNIST. \\textit{Top}: Masked images; \\textit{2nd row}: VAE Reconstruction; \\textit{3rd row}: SAG-VAE Reconstruction; \\textit{Bottom}: Original images.}\n\n\\end{figure}\n\nAs one can observe, images reconstructed by vanilla VAE falsely learned the patterns of noise and blocks, as there is no inductive bias against such situation. On the other hand, for both tasks, SAG-VAE outperforms VAE significantly in terms of reconstruction quality. For the noisy perturbation, one can merely observe visible noise from the reconstruction result of SAG-VAE. And for the masked perturbation, although the reconstruction quality is not as strong, it can still be observed that the edges of blocks are smoothed and mask sizes are reduced adequately. Notice that the performance of SAG-VAE on the task with uniform noise is close to denoising autoencoder \\cite{vincent2008extracting}, yet we \\emph{did not introduce any explicit denoising measure}. The de-noising characteristics is introduced almost entirely by the inductive bias from the learned feature relations. \\par\nWe further test the same tasks on Fashion MNIST, and the performances can be shown in figures \\ref{fig:VAEs_denoise_fmnist} and \\ref{fig:VAEs_mask_fmnist}. Again, we can observe from the figures that SAG-VAE significantly outperforms VAE when perturbation exists in the input data. It is noticeable that in Fashion MNIST reconstruction, SAG-VAE appears to be more resistant to block-masking, although the robustness against uniform noise is much more significant, similar to its performances on the MNIST dataset.\\par\nFigure \\ref{fig:fmnist_loss} shows the loss ($l_{2}$ distance) between reconstructed images and the original and the noise-corrupted images respectively for the SAG-VAE on the Fashion MNIST data. The legends are removed in the interest of the clarity of plotting. 
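The two perturbations applied above can be generated as in the following minimal sketch (the 200-pixel uniform noise and the $6\\times6$ block follow the text for MNIST; the value written into the masked block is an assumption of this sketch):
\\begin{verbatim}
import numpy as np

def perturb(images, n_noisy_pixels=200, block=6, seed=0):
    # images: (m, 28, 28) array with values in [0, 1].
    rng = np.random.default_rng(seed)
    noisy, masked = images.copy(), images.copy()
    m, h, w = images.shape
    for i in range(m):
        idx = rng.choice(h * w, size=n_noisy_pixels, replace=False)
        flat = noisy[i].reshape(-1)                 # view into noisy[i]
        flat[idx] = rng.uniform(0.0, 1.0, size=n_noisy_pixels)
        r, c = rng.integers(0, h - block), rng.integers(0, w - block)
        masked[i, r:r + block, c:c + block] = 1.0   # block-masking (corruption)
    return noisy, masked
\\end{verbatim}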
It can be observed that the gap between reconstructed and original images declines aligned with training loss, while the loss between reconstructed images and noise images declines ends up with landing at a plateau on a high level. This indicates that the robustness of SAG-VAE will defy itself from learning the perturbation as information. Limited by the space, we did not include the figure for the training losses of vanilla VAE. In our experiments, we observe that for vanilla VAE, the reconstruction loss between the noisy image will continue to decrease while the loss between the real image will increase, indicating that plain VAE falsely fits the perturbation as information. \\par\nFinally, figure \\ref{fig:ajmtx_mnist_fmnist} shows the learned feature relations as adjacency matrices for both MNIST and Fashion MNIST. It can be observed that while it is not very straightforward to interpret the reason for each connection to exist, the graph structure is properly organized, and it can be reasonably argues that the robustness against perturbation comes from this organized structure.\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.75\\textwidth]{fmnist_loss.jpg}\n\\caption{\\label{fig:fmnist_loss}Training Loss and Reconstruction Loss of Fashion MNIST.}\n\\end{figure}\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.7\\textwidth]{denoise_fmnist_sagvae_vae.jpg}\n\\caption{\\label{fig:VAEs_denoise_fmnist}Reconstruction comparison on noisy Fashion MNIST. \\textit{Top}: Noisy images; \\textit{2nd row}: VAE Reconstruction; \\textit{3rd row}: SAG-VAE Reconstruction; \\textit{Bottom}: Original images.}\n\\end{figure}\n\\begin{figure}[!h]\n\\centering\n\\includegraphics[width=0.7\\textwidth]{mask_fmnist_sagvae_vae.jpg}\n\\caption{\\label{fig:VAEs_mask_fmnist}Reconstruction comparison on masked Fashion MNIST. \\textit{Top}: Masked images; \\textit{2nd row}: VAE Reconstruction; \\textit{3rd row}: SAG-VAE Reconstruction; \\textit{Bottom}: Original images.}\n\\end{figure}\n\\begin{figure}[!h]\n\\centering\n\\includegraphics[width=0.7\\textwidth]{ajmtx_mnist_fmnist.jpg}\n\\caption{\\label{fig:ajmtx_mnist_fmnist}Adjacency Matrix Generated from MNIST and Fashion MNIST.}\n\\end{figure}\n\\subsubsection{Noisy Sampling}\n\\label{subsec:noisesample}\nFollowing the method discussed in section \\ref{sec:expimage}, we fit the latent distribution of different MNIST digits\/classes with the means and variances of each pixel. For each digit\/class, we only pick one image to avoid the high image variance from the dimension-wise modeling. After getting the $\\boldsymbol{\\mu}$ and $\\boldsymbol{\\sigma}$ for each digit, we sample $10$ hidden representation $\\boldsymbol{z}$ for each of the digits\/classes, and randomly replace $200$ dimensions with random noise before sending them to the decoder to generate images. Figure \\ref{fig:sampling} illustrates the performance comparison between the SAG-VAE and the vanilla VAE on the above task. \\par\nFrom the figure, it can be observed that although both methods can preserve the general manifold of each digit, the SAG-VAE model outperforms vanilla VAE in terms of avoiding `noise over digits', i.e., loss of the grain-like pattern. This can be explained by the graph convolution mechanism of the SAG-VAE, which can `fill' the noise-corrupted pixel through exchanging information with connected pixels. 
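A minimal sketch of this noisy-sampling procedure is given below (the \\texttt{decoder} argument is a placeholder for the trained generative network; the per-pixel Gaussian fit and the corruption of 200 dimensions follow the description in section \\ref{subsec:noisesample}):
\\begin{verbatim}
import numpy as np

def noisy_class_samples(class_images, decoder, n_samples=10, n_corrupt=200,
                        seed=0):
    # class_images: images of one digit/class; decoder: callable mapping a
    # dimension-wise latent code to an image.
    rng = np.random.default_rng(seed)
    X = class_images.reshape(len(class_images), -1)
    mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-6   # per-pixel Gaussian fit
    out = []
    for _ in range(n_samples):
        z = rng.normal(mu, sigma)                      # dimension-wise latent code
        idx = rng.choice(z.size, size=n_corrupt, replace=False)
        z[idx] = rng.uniform(0.0, 1.0, size=n_corrupt) # replace 200 dims with noise
        out.append(decoder(z))
    return np.stack(out)
\\end{verbatim}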
And with a higher quality of image coherence, we can argue that the SAG-VAE model is shown to be more robust against noise perturbations during sampling.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.7\\textwidth]{noisy_sampling.jpg}\n\\caption{\\label{fig:sampling}Noisy Sampling on MNIST images.}\n\\end{figure}\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nIn this paper, we propose Self-Attention Graph Variaional AutoEncoder (SAG-VAE) based on recent advances on Variational Autoencoders and Graph Neural Networks. This novel model can jointly infer data representations and relations between features, which provides strong explainable results for the input datasets. In addition, by introducing the learned relations as inductive biases, the model demonstrates strong robustness against perturbations. Besides, a novel Self-Attention Graph Neural Network (SA-GNN) is proposed in the paper. \\par\nTo conclude, this paper makes the following major contributions: firstly, it proposes a novel VAE-based framework which can jointly infer representations and feature relations in an end-to-end manner; secondly, it presents a novel Self-attention-based Graph Neural Network, which leverages the power of self-attention mechanism to improve the performance; and finally, it demonstrates advantageous performances on multiple experiments, which can be of great utility in practice. \\par\nIn the future, the authors intend to extend the model to more advanced posterior approximation techniques (e.g. IWAE) and more flexible priors (e.g. normalized flow). Testing the performances of the model on more complicated datasets is another direction.\n\n\\bibliographystyle{unsrt} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nUnderstanding the origin of the Cosmic X-ray Background (CXB or XRB)\nand cosmological evolution of X-ray extragalactic populations is one \nof the main goals of X-ray astronomy. In the soft X-ray band, the\n{\\it ROSAT}\\ satellite resolved 80\\% of the 0.5--2 keV CXB into\nindividual sources (Hasinger {\\it et al.}\\ 1998) and optical identification\nrevealed that the major population is type-I AGNs (Schmidt {\\it et al.}\\\n1998). Because of the technical difficulties, imaging sky surveys in\nthe hard X-ray band (above 2 keV), where the bulk of the CXB energy\narises, were not available until the launch of {\\it ASCA} . The sensitivity\nlimits achieved by previous mission such as {\\it HEAO1}\\ (Piccinotti {\\it et al.}\\\n1982) and {\\it Ginga}\\ (Kondo {\\it et al.}\\ 1991) are at most $\\sim 10^{-11}$\nerg s$^{-1}$ cm$^{-2}$\\ (2--10 keV), and the sources observed by them only account for\n3\\% of the CXB intensity in the 2--10 keV band. In particular, there\nis a big puzzle on the CXB origin, called the ``spectral paradox'':\nbright AGNs observed with {\\it HEAO1}, {\\it EXOSAT} and {\\it Ginga}\\ have spectra\nwith an average photon index of $\\Gamma$ = 1.7$-$1.9 (e.g., Williams\n{\\it et al.}\\ 1992), which is significantly softer than that of the CXB\nitself ($\\Gamma \\simeq $ 1.4; e.g., Gendreau {\\it et al.}\\ 1995). \nFurthermore, the broad band properties of sources at fluxes from\n$\\sim10^{-11}$ to $\\sim 10^{-13}$ erg s$^{-1}$ cm$^{-2}$\\ (2--10 keV) are somewhat\npuzzling according to previous studies. 
The extragalactic source counts\nin the soft band (0.3--3.5 keV) obtained by {\\it Einstein}\\ Extended Medium\nSensitivity Survey (EMSS; Gioia {\\it et al.}\\ 1990) is about 2--3 times\nsmaller than that in the hard band (2--10 keV) obtained by the {\\it Ginga}\\\nfluctuation analysis (Butcher\n{\\it et al.}\\ 1997) when we assume a power-law photon index of 1.7.\n\nThe {\\it ASCA}\\ satellite (Tanaka, Inoue, \\& Holt 1994), launched in 1993\nFebruary, was expected to change this situation. It is the first\nimaging satellite capable of study of the X-ray band above 2 keV with\na sensitivity up to several $ 10^{-14}$erg s$^{-1}$ cm$^{-2}$\\ (2--10 keV) and covers\nthe wide energy band from 0.5 to 10 keV, which allows us to directly\ncompare results of the energy bands below and above 2 keV with single\ndetectors, hence accompanied with much less uncertainties than\nprevious studies. By taking these advantages, several X-ray surveys\nhave been performed with {\\it ASCA}\\ to reveal the nature of hard X-ray\npopulations: the {\\it ASCA}\\ Large Sky Survey (LSS; Ueda {\\it et al.}\\ 1998), the {\\it ASCA}\\\nDeep Sky Survey (DSS; Ogasaka {\\it et al.}\\ 1998; Ishisaki {\\it et al.}\\ 1999 for the\nLockman Hole), the {\\it ASCA}\\ Medium-Sensitivity Survey (AMSS or the GIS\ncatalog project: Ueda {\\it et al.}\\ 1997, Takahashi {\\it et al.}\\ 1998, Ueda {\\it et al.}\\\n1999b; see also Cagnoni, Della Ceca, \\& Maccacaro 1998 and Della Ceca\n{\\it et al.}\\ 1999), a survey of {\\it ROSAT}\\ deep fields (Georgantopoulos {\\it et al.}\\\n1997; Boyle {\\it et al.}\\ 1998), and so on. The sensitivity limits and survey\narea are summarized in Table~1. In this paper, we present main results\nof the {\\it ASCA}\\ surveys, focusing on the LSS (\\S~2), the Lockman Hole\ndeep survey (\\S~3), and the AMSS (\\S~4). In \\S~5, we summarize these\nresults and discuss their implications for the origin of the CXB.\n\n\\begin{table}\n\\begin{small}\n\\begin{center}\n\\caption[]{Summary of {\\it ASCA}\\ Surveys}\n\\begin{tabular}{lll}\n\\hline\\hline\nSurvey Project& Area & Sensitivity (2--10 keV)\\\\\n & (deg$^2)$ & (erg s$^{-1}$ cm$^{-2}$ )\\\\\n\\hline\nLarge Sky Survey (LSS)& 7.0 & 1.5$\\times 10^{-13}$\\\\\nDeep Sky Survey (DSS)& 0.3 & 4$\\times 10^{-14}$ \\\\\nLockman Hole Deep Survey& 0.2 & 4$\\times 10^{-14}$\\\\\nSurvey of deep {\\it ROSAT}\\ fields& 1.0 & 5$\\times 10^{-14}$\\\\\n{\\it ASCA}\\ Medium-Sensitivity Survey (AMSS) & 110 & 7$\\times 10^{-14}$\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{small}\n\\end{table}\n\n\\section{The Large Sky Survey}\n\n\\subsection{X-ray Data}\n\nThe survey field of the {\\it ASCA}\\ Large Sky Survey (LSS; Ueda {\\it et al.} , 1998)\nis a continuous region near the north Galactic pole, centered at\n$RA$(2000) = 13$^{\\rm h}$14$^{\\rm m}$, $DEC$(2000) = 31$^\\circ 30'$. \nSeventy-six pointings have been made over several periods from Dec.\\\n1993 to Jul.\\ 1995. The total sky area observed with the GIS and SIS\namounts to 7.0 deg$^2$\\ and 5.4 deg$^2$\\ with the mean exposure time of 56\nksec (sum of GIS2 and GIS3) and 23 ksec (sum of SIS0 and SIS1),\nrespectively. From independent surveys in the total (0.7--7 keV), hard\n(2--10 keV), and soft (0.7--2 keV) bands, 107 sources are detected\nwith sensitivity limits of $6\\times 10^{-14}$, $1\\times 10^{-13}$, and\n$2\\times 10^{-14}$ erg s$^{-1}$ cm$^{-2}$\\ , respectively. 
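Comparisons between soft- and hard-band source counts, such as the EMSS versus {\\it Ginga}\\ comparison quoted in the Introduction, require converting fluxes between bands under an assumed spectral shape. A minimal sketch of that conversion for an unabsorbed power law (illustrative only; Galactic and intrinsic absorption are ignored) is:
\\begin{verbatim}
import numpy as np

def band_flux_ratio(e1_lo, e1_hi, e2_lo, e2_hi, gamma):
    # Ratio of energy fluxes F(band 1) / F(band 2) for an unabsorbed power law
    # with photon index gamma (photon spectrum proportional to E**-gamma).
    def band(e_lo, e_hi):
        if np.isclose(gamma, 2.0):
            return np.log(e_hi / e_lo)
        return (e_hi**(2.0 - gamma) - e_lo**(2.0 - gamma)) / (2.0 - gamma)
    return band(e1_lo, e1_hi) / band(e2_lo, e2_hi)

# e.g. band_flux_ratio(2.0, 10.0, 0.5, 2.0, 1.7) gives the 2--10 keV to
# 0.5--2 keV flux ratio (roughly 1.8) for a photon index of 1.7.
\\end{verbatim}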
The Log $N$ - Log $S$ relation s derived from the\nLSS are summarized in Ueda {\\it et al.}\\ (1999a) together with a complete\nX-ray source list. At these flux limits, 30($\\pm$3)\\% of the CXB in\nthe 0.7--7 keV band and 23($\\pm$3)\\% in the 2--10 keV band have been\nresolved into discrete sources. The 2--10 keV Log $N$ - Log $S$ relation\\ combined with \nthe AMSS result (\\S 4) is plotted in Figure~3. \n\nThe spectral properties of the LSS sources suggest that contribution\nof sources with hard energy spectra become significant at a flux of\n$\\sim 10^{-13}$ erg s$^{-1}$ cm$^{-2}$\\ (2--10 keV), which are different from the major\npopulation in the soft band. The average 2--10 keV photon index is\n1.49$\\pm$0.10 (1$\\sigma$ statistical error in the mean value) for 36\nsources detected in the 2--10 keV band with fluxes below\n$4\\times10^{-13}$ erg s$^{-1}$ cm$^{-2}$ , whereas it is 1.85$\\pm$0.22 for 64 sources\ndetected in the 0.7--2 keV band with fluxes below $3\\times10^{-13}$\nerg s$^{-1}$ cm$^{-2}$ . The average spectrum of 74 sources detected in the 0.7--7 keV\nband with fluxes below $2\\times10^{-13}$ shows a photon index of\n1.63$\\pm$0.07 in the 0.7--10 keV range: this index is consistent with\nthe comparison of source counts between the hard and the soft band.\n\nTo investigate the X-ray spectra of these hard sources, we made deep\nfollow-up observations with {\\it ASCA}\\ for the five hardest sources in the\nLSS, selected by the apparent 0.7--10 keV photon index from the source\nlist excluding very faint sources. The results are summarized in Ueda\n{\\it et al.}\\ (1999c); see also Sakano {\\it et al.}\\ (1998) and Akiyama {\\it et al.}\\ (1998)\nfor AX~J131501+3141, the hardest source in the LSS. Three sources in\nthis sample are optically identified as narrow-line AGNs and one is a\nweak broad-line AGN by Akiyama {\\it et al.}\\ (2000); one is not identified\nyet. We found that spectra of these sources are most likely subject to\nintrinsic absorption at the source redshift with column densities of\n$N_{\\rm H} = 10^{22} \\sim 10^{23}$ cm$^{-2}$.\n\n\n\\subsection{Optical Identification}\n\nAkiyama {\\it et al.}\\ (2000) summarize the results of optical identification\nfor a sub-sample of the LSS sources, consisting of 34 sources detected\nin the 2--7 keV band with the SIS. The major advantage\nof this sample compared with other {\\it ASCA}\\ surveys is good position\naccuracy; it is 0.6 arcmin in 90\\% radius from the {\\it ASCA}\\ data alone,\nthanks to superior positional resolution of the SIS. To improve the\nposition accuracy further, we made follow-up observations with {\\it\nROSAT} HRI over a part of the LSS field in Dec.\\ 1997. Optical\nspectroscopic observations were made using the University of Hawaii\n88$''$ telescope, the Calar Alto 3.5m telescope, and the Kitt Peak\nNational Observatories Mayall 4m and 2.1m telescopes.\n\nOut of the 34 sources, 30 are identified as AGNs, 2 are clusters of\ngalaxies, 1 is a Galactic star, and only 1 object remains\nunidentified. The identification as AGNs is based on existence of a\nbroad emission line or the line ratios of narrow emission lines\n([NII]6583$\\AA$\/H$\\alpha$ and\/or [OIII]5007$\\AA$\/H$\\beta$); see\nAkiyama {\\it et al.}\\ 2000 and references therein. Figure~1(a) shows the\ncorrelation between the redshift and the apparent photon index in the\n0.7--10 keV range, which is obtained from a spectral fit assuming no\nintrinsic absorption, for the identified objects. 
The 5 sources that\nhave an apparent photon index smaller than 1.0 are identified as 4\nnarrow-line AGNs and 1 weak broad-line AGN, all are located at\nredshift smaller than 0.5. On the other hand, X-ray spectra of the\nother AGNs are consistent with those of nearby type 1 Seyfert\ngalaxies. Four high redshift broad-line AGNs show somewhat apparently\nhard spectra with an apparent photon index of $1.3\\pm0.3$, although it\nmay be still marginal due to the limited statistics.\n\nTo avoid complexity in classifying the AGNs by the optical spectra, we\ndivide the identified AGNs into two using the X-ray data: the\n``absorbed'' AGNs which show intrinsic absorption with a column\ndensity of $N_{\\rm H} > 10^{22}$ cm$^{-2}$ and the ``less-absorbed''\nAGNs with $N_{\\rm H} < 10^{22}$. Correcting the {\\it flux} sensitivity\nfor different X-ray spectra, we found the contribution of the\nabsorbed AGNs is almost comparable to that of less-absorbed\nAGNs in the 2--10 keV source counts at a flux limit of\n$2\\times10^{-13}$ erg s$^{-1}$ cm$^{-2}$ . Figure~1(b) shows the correlation between\nthe redshift and the 2--10 keV luminosity of the identified AGNs. The\nredshift distribution of the 5 absorbed AGNs is concentrated at\n$z<0.5$, which contrasts to the presence of 15 less-absorbed AGNs at\n$z>0.5$. This suggests a deficiency of AGNs with column densities of\n$N_{\\rm H} = 10^{22-23}$ at $z$ = 0.5--2, or in the X-ray luminosity\nrange larger than $10^{44}$ erg s$^{-1}$ , or both. Note that if the 4\nbroad-line AGNs with hard spectra have intrinsic absorption instead of\nother hardening mechanism such as Compton reflection, it could\ncomplement this deficiency.\n\n\\bigskip\n\\bigskip\n\\begin{figure}[htp]\n\\centerline{\\psfig{file=f1a.eps, width=6cm}\\hspace{1cm}\\psfig{file=f1b.eps, width=6cm}}\n\\caption[]{(a) left: the correlation between the redshift and \nthe apparent 0.7--10 keV photon index for the identified objects in the \nLSS (Akiyama {\\it et al.}\\ 2000). The open circles, \ncrosses, and asterisk represent AGNs, clusters of\ngalaxies, and a Galactic star, respectively.\nThe dotted curve shows the expected apparent photon index in the\nobserved 0.7--10 keV band as a function of redshift, for a typical spectrum of \ntype-1 Seyfert galaxies with a Compton reflection\ncomponent.\n(b) right: The 2--10 keV luminosity versus redshift diagram\nfor the LSS AGNs (with large open circles, Akiyama {\\it et al.}\\ 2000), and for \nthe {\\it HEAO1} A2 AGNs (with small marks, Piccinotti {\\it et al.}\\ 1982). The\n``absorbed'' AGNs are plotted with dots. Lines indicate \ndetection limits of the LSS for a source with an photon index of 1.7\nwith no intrinsic absorption.\n}\n\\end{figure}\n\n\\section{The Lockman Hole Deep Survey}\n\nDeep surveys were performed with {\\it ASCA}\\ over several fields (Ogasaka\n{\\it et al.}\\ 1998), although optical identification is more difficult than\nthe LSS because of faint flux levels and source confusion problem. To\novercome this difficulty, we have been conducting a deep survey of the\nLockman Hole, where the {\\it ROSAT}\\ deep survey was performed (Hasinger\n{\\it et al.}\\ 1998). The advantage of selecting this field is that we already\nhave a complete soft X-ray source catalog down to a flux limit of\n$5.5\\times10^{-15}$ erg s$^{-1}$ cm$^{-2}$\\ (0.5--2 keV), most of which have been\noptically identified (Schmidt {\\it et al.}\\ 1998). In addition, we utilized an\nX-ray source list at even fainter flux limits (G.~Hasinger, private\ncommunication). 
Since the flux limits of the {\it ROSAT}\ surveys are extremely low, we can expect most of the {\it ASCA}\ sources to have {\it ROSAT}\ counterparts within a reasonable range of spectral hardness. Utilizing the positions of the {\it ROSAT}\ sources, we can determine the hard-band flux of individual sources, which would otherwise have been difficult to separate, down to a flux limit of $3\times10^{-14}$ erg s$^{-1}$ cm$^{-2}$\ (2--10 keV). Preliminary results are reported in Ishisaki {\it et al.}\ (1999).

Up to the present, we have made 3 pointings in the direction of the Lockman Hole with {\it ASCA}, on 1993 May 22--23, 1997 April 29--30, and 1998 November 27, for net exposures of 63 ksec (average of the 8 SIS chips), 64 ksec, and 62 ksec, respectively. The pointing positions were arranged so that the superposed image of the SIS fields of view (FOVs) covers the PSPC and HRI FOVs as much as possible. We used only the SIS data here, considering its superior positional resolution. The analysis was performed through two-dimensional maximum-likelihood fitting of the raw, superposed image in photon-count space in sky coordinates, with a model consisting of source peaks (point spread functions) and the background. As a first step, we put sources into the model at the positions of the {\it ROSAT}\ catalogs. Then, after checking the residual image of the fit, we added the remaining peaks that were missing from the {\it ROSAT}\ catalogs. In this way we determined the significance and flux of each source in three energy bands, 0.7--7 keV, 2--7 keV, and 0.7--2 keV, including new sources detected with {\it ASCA}. We corrected for the degradation of detection efficiency caused by radiation damage using the CXB intensity. Note that the {\it ASCA}\ sensitivity limits depend strongly on position because of the multiple pointings and the vignetting of the XRT.

We detected 27 sources altogether with significances higher than 3.5$\sigma$ in at least one of the three survey bands. Two sources were newly detected with {\it ASCA}. One is a variable source with a 0.7--7 keV photon index of about 1.7, which was very faint during the {\it ROSAT}\ observations. The other shows a very hard spectrum and is detected only in the 2--7 keV band. In the combined SIS FOVs, 43 of the 50 sources in the Schmidt {\it et al.}\ (1998) catalog are located. The identification of the {\it ASCA}\ sources using the {\it ROSAT}\ catalog is summarized in Table~2. Since the number of sources detected in the 2--7 keV band is limited by poor photon statistics, we here use the results for the 25 sources detected in the 0.7--7 keV band for comparison with the {\it ROSAT}\ survey. Four unidentified sources in the {\it ASCA}\ survey have {\it ROSAT}\ counterparts in the deeper X-ray source catalog (G.~Hasinger, private communication), and the remaining one is the variable source detected only with {\it ASCA}. We divided the AGNs identified by Schmidt {\it et al.}\ (1998) into two classes according to their optical spectra: (1) type-1 AGNs, corresponding to class a, b, or c, showing broad emission lines, and (2) type-2 AGNs, class d or e, showing only narrow emission lines. As seen from the table, 6 out of the 7 type-2 AGNs were detected, whereas only half of the 26 type-1 AGNs were detected with {\it ASCA}, which covers a much harder band than {\it ROSAT}.
This suggests that the contribution of type-2 AGNs is more dominant in higher energy bands than in the soft band at similar flux levels.

\begin{table}
\begin{small}
\caption[]{Summary of the optical identification of the {\it ASCA}\ Lockman Hole deep survey with the {\it ROSAT}\ catalog (Schmidt {\it et al.}\ 1998)}
\begin{center}
\begin{tabular}{lcc}
\hline\hline
Population & {\it ROSAT}\ & {\it ASCA}\ (0.7--7 keV) \\
\hline
Total & 43 & 25 \\
\hline
Type-1 AGN (a--c) & 26 & 13 \\
Type-2 AGN (d--e) & 7 & 6 \\
Groups/Galaxies & 3 & 0 \\
Star & 3 & 1 \\
Unidentified & 4 & 4+1 \\
\hline
\end{tabular}
\end{center}
\end{small}
\end{table}

\section{The {\it ASCA}\ Medium-Sensitivity Survey}

Because these surveys are limited in sky coverage, the sample size is not sufficient to obtain a self-consistent picture of the evolution of the sources over a wide flux range, from $\sim10^{-11}$ erg s$^{-1}$ cm$^{-2}$\ (2--10 keV), the sensitivity limit of {\it HEAO1}\ A2 (Piccinotti {\it et al.}\ 1982), down to $\sim 10^{-13}$ erg s$^{-1}$ cm$^{-2}$\ (2--10 keV), that of {\it ASCA}. To overcome these shortcomings, we have been working on a project called the ``{\it ASCA}\ Medium Sensitivity Survey (AMSS)'', or the GIS catalog project. In this project, we utilize GIS data from fields that have become publicly available to search for serendipitous sources. The large field of view and the low-background characteristics make the GIS ideal for this purpose.

The main results from the AMSS are reported in Ueda {\it et al.}\ (1999b); they were obtained from selected GIS fields at $|b|> 20^\circ$ observed from 1993 to 1996, covering a total sky area of 106 deg$^{2}$. The sample contains 714 serendipitous sources, of which 696, 323, and 438 sources are detected in the 0.7--7 keV (total), 2--10 keV (hard), and 0.7--2 keV (soft) bands, respectively. This is currently the largest X-ray sample covering the 0.7--10 keV band. Figure~2(a) shows the correlation between the 0.7--7 keV flux and the hardness ratio between the 2--10 keV and 0.7--2 keV count rates. We also plot, with crosses, the average hardness ratio in several flux ranges separated by the dashed curves. It is clearly seen that the average spectrum becomes harder with decreasing flux: the corresponding photon index (assuming a power law over the 0.7--10 keV band with no absorption) changes from 2.1 at a flux of $\sim 10^{-11}$ erg s$^{-1}$ cm$^{-2}$\ to 1.6 at $\sim 10^{-13}$ erg s$^{-1}$ cm$^{-2}$\ (0.7--7 keV). Similar hardening is also reported in the 2--10 keV range by Della Ceca {\it et al.}\ (1999) using 60 serendipitous sources. Figure~2(b) shows the integral Log $N$ - Log $S$ relations in the 0.7--7 keV survey band for the soft source sample, consisting of sources with an apparent 0.7--10 keV photon index larger than 1.7, and for the hard source sample, with an index smaller than 1.7. This demonstrates that the number of sources with hard energy spectra in the 0.7--10 keV range increases rapidly with decreasing flux compared with that of softer sources.
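The relation between the observed hardness and the quoted photon indices can be illustrated with a simple, instrument-independent estimate (a schematic guide only; the hardness ratios in Figure~2(a) are defined from GIS count rates and therefore also involve the energy-dependent effective area). For an unabsorbed power-law photon spectrum $N(E) \propto E^{-\Gamma}$, the energy flux in a band $[E_1, E_2]$ is proportional to $\int_{E_1}^{E_2} E^{1-\Gamma}\,dE$, so the ratio of the 2--10 keV to the 0.7--2 keV flux is
\[
\frac{F_{\rm 2-10}}{F_{\rm 0.7-2}} \;=\; \frac{10^{\,2-\Gamma} - 2^{\,2-\Gamma}}{2^{\,2-\Gamma} - 0.7^{\,2-\Gamma}} \qquad (\Gamma \neq 2),
\]
which rises from $\simeq$1.3 at $\Gamma = 2.1$, through $\simeq$1.8 at $\Gamma = 1.9$, to $\simeq$2.6 at $\Gamma = 1.6$. This shows how the observed hardening translates into the flattening of the average photon index quoted above; the same type of conversion is invoked in the Summary when comparing the source counts above and below 2 keV.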
\begin{figure}
\centerline{\psfig{file=f2a.eps, width=6cm}\hspace{0.5cm}\psfig{file=f2b.eps, width=6.5cm}}
\caption[]{(a) Left: The correlation between the 0.7--7 keV flux and the hardness ratio between the 2--10 keV and 0.7--2 keV count rates for sources detected in the 0.7--7 keV survey in the AMSS sample (Ueda {\it et al.}\ 1999b). The crosses show the average hardness ratios (with 1$\sigma$ errors in the mean value) in the flux bins separated by the dashed curves, along which the count rate, and hence the sensitivity limit, is constant. The dotted lines represent the hardness ratios corresponding to photon indices of 1.6, 1.9, and 2.2, assuming a power-law spectrum.
(b) Right: The integral Log $N$ - Log $S$ relations in the 0.7--7 keV survey band, derived from the AMSS sample. The medium-thickness curve represents the result for the hard source sample, consisting of sources with an apparent 0.7--10 keV photon index $\Gamma$ smaller than 1.7; the thin curve represents that for the soft source sample (with $\Gamma$ larger than 1.7); and the thick curve represents the sum. The 90\% statistical errors in the source counts are indicated by horizontal bars at several data points.
}
\end{figure}

\section{Summary}

The {\it ASCA}\ surveys have provided a clear, self-consistent picture of the statistical properties of the sources that constitute about 30\% of the CXB in the broad 0.7--10 keV energy band. Figure~3 summarizes the 2--10 keV Log $N$ - Log $S$ relation obtained from the {\it ASCA}\ surveys together with the results from previous missions. The direct source counts from the combined results of the LSS (Ueda {\it et al.}\ 1999a) and the AMSS (Ueda {\it et al.}\ 1999b; these contain the data used by Cagnoni, Della Ceca, \& Maccacaro 1998) give the tightest constraints so far over a wide flux range from $\sim 10^{-11}$ to $\sim 7\times10^{-14}$ erg s$^{-1}$ cm$^{-2}$ : $N(>S)$ = 16.8$\pm$7.2 (90\% statistical error), 11.43$\pm$2.4, 3.76$\pm$0.42, 1.08$\pm$0.17, and 0.33$\pm$0.09 deg$^{-2}$ at $S$ = $7.4\times10^{-14}$, $1.0\times10^{-13}$, $2.0\times10^{-13}$, $4.0\times10^{-13}$, and $1.0\times10^{-12}$ erg s$^{-1}$ cm$^{-2}$, respectively. The DSS gives direct source counts at the faintest flux, $3.8\times10^{-14}$ erg s$^{-1}$ cm$^{-2}$\ (Ogasaka {\it et al.}\ 1998), whereas the fluctuation analysis of deep SIS fields constrains the Log $N$ - Log $S$ relation at fluxes down to $1.5\times10^{-14}$ erg s$^{-1}$ cm$^{-2}$\ (Gendreau, Barcons, \& Fabian 1998). As seen from the figure, the {\it ASCA}\ direct source counts smoothly connect the two regions constrained by the {\it Ginga}\ and {\it ASCA}\ fluctuation analyses.

The AMSS/LSS results demonstrate that the average spectrum of X-ray sources becomes harder toward fainter fluxes: the apparent photon index in the 0.7--10 keV range changes from 2.1 at a flux of $\sim 10^{-11}$ to 1.6 at $\sim 10^{-13}$ erg s$^{-1}$ cm$^{-2}$\ (2--10 keV). This fact can be explained by the rapid emergence of a population with hard energy spectra, as is clearly indicated in Figure~2(b). The evolution of the broad-band properties of the sources solves the puzzle of the discrepancy between the source counts in the soft (EMSS) and the hard ({\it Ginga}\ and {\it HEAO1} ) bands.
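For reference, direct $N(>S)$ values of this kind are commonly obtained with the standard sky-coverage-weighted estimator, in which each detected source contributes the reciprocal of the solid angle over which it could have been detected; this is how position-dependent sensitivity limits (multiple pointings, vignetting) are folded into the counts. The following is a minimal, schematic sketch of that estimator; the function name, the toy sensitivity curve, and the simulated fluxes are illustrative assumptions only and do not reproduce the actual LSS/AMSS analysis.
\begin{verbatim}
import numpy as np

def integral_counts(fluxes, coverage_flux, coverage_area, s_grid):
    """Coverage-weighted integral source counts N(>S) in deg^-2.

    fluxes        : detected source fluxes (erg s^-1 cm^-2)
    coverage_flux : flux grid of the survey sky-coverage curve (increasing)
    coverage_area : solid angle (deg^2) over which a source of that flux
                    is detectable (includes vignetting, exposure, etc.)
    s_grid        : fluxes at which N(>S) is evaluated
    """
    # Solid angle available to each detected source, from the coverage curve.
    omega = np.interp(fluxes, coverage_flux, coverage_area)
    # Each source brighter than S contributes 1/Omega(S_i).
    return np.array([np.sum(1.0 / omega[fluxes >= s]) for s in s_grid])

# Toy example: a survey approaching ~100 deg^2 at bright fluxes.
rng = np.random.default_rng(0)
src_flux = 10.0 ** rng.uniform(-13.5, -11.5, size=300)
cov_flux = np.logspace(-14, -11, 50)
cov_area = 100.0 / (1.0 + (1e-13 / cov_flux) ** 2)
print(integral_counts(src_flux, cov_flux, cov_area,
                      s_grid=np.array([1e-13, 4e-13, 1e-12])))
\end{verbatim}
Constructed in this way separately above and below 2 keV, the counts can be compared directly, as follows.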
If we compare the {\it ASCA}\ Log $N$ - Log $S$ relations (including Galactic objects) above and below 2 keV, the hard-band source counts at $S\sim 10^{-13}$ erg s$^{-1}$ cm$^{-2}$\ (2--10 keV) match the soft-band counts when we assume a photon index of 1.6 for the flux conversion, whereas at brighter levels of $S = 4\times 10^{-13} \sim 10^{-12}$ erg s$^{-1}$ cm$^{-2}$\ (2--10 keV) we have to use a photon index of about 1.9 to make them match. The latter fact is consistent with the average 0.7--10 keV spectrum at the same flux levels, and can be connected to the ``soft'' spectrum of the fluctuations observed with {\it Ginga}, which shows a photon index of 1.8$\pm$0.1 in the 2--10 keV range (Butcher {\it et al.}\ 1997).

The optical identification revealed that the major population at fluxes of $10^{-13}$ erg s$^{-1}$ cm$^{-2}$\ consists of AGNs. The hard sources, which are most responsible for making the average spectrum hard, are X-ray absorbed sources. They are mostly identified as narrow-line (type-2) AGNs. The contribution of these type-2 AGNs is larger in the hard band than in the soft band at the same flux limit. Recent results of the 5--10 keV band survey by {\it BeppoSAX}\ confirm this tendency (Fiore {\it et al.}\ 1999). These results support the scenario that the CXB consists of unabsorbed and absorbed AGNs, with the contribution of the latter becoming more significant with decreasing flux and in harder energy bands.

We found, however, possible evidence that is not consistent with the ``unified scheme'' of AGNs (e.g., Awaki {\it et al.}\ 1991), on which many AGN synthesis models are based. The LSS results may imply a deficiency of X-ray luminous, absorbed AGNs with $N_{\rm H} = 10^{22-23}$ cm$^{-2}$ at $z$ = 0.5--2, or at X-ray luminosities above $10^{44}$ erg s$^{-1}$, although we cannot rule out the possibility, for example, that there are many luminous AGNs at $z>2$ with extremely heavy absorption of $N_{\rm H} > 10^{24}$ cm$^{-2}$. On the other hand, there is another indication that there could be a population of AGNs at high redshifts ($z>1$) that are optically identified as type-1 AGNs but have apparently hard X-ray spectra, although the origin of the hardness is not yet clear. Future surveys by {\it Chandra}\ and {\it XMM}, together with the optical identification of the AMSS sources, will reveal the luminosity, number, and spectral evolution of the extragalactic populations, including absorbed AGNs, which will eventually lead us to a full understanding of the origin of the CXB.

\begin{figure}[h]
\centerline{\psfig{file=f3.eps, width=13cm}}
\caption[]{Summary of the 2--10 keV Log $N$ - Log $S$ relation obtained by the {\it ASCA}\ surveys, compared with previous results. The steps are the combined results from the LSS (Ueda {\it et al.}\ 1998) and the AMSS (Ueda {\it et al.}\ 1999b). The faintest point at $4\times10^{-14}$ erg s$^{-1}$ cm$^{-2}$\ is derived from the DSS utilizing the SIS data (Ogasaka {\it et al.}\ 1998). The trumpet shape between the two dashed lines indicates the 1$\sigma$ error region from the fluctuation analysis of the {\it ASCA}\ SIS deep fields (Gendreau, Barcons, \& Fabian 1998). The contour at $10^{-13}\sim10^{-11}$ erg s$^{-1}$ cm$^{-2}$\ represents the constraints from the {\it Ginga}\ fluctuation analysis at the 90\% confidence level (Butcher {\it et al.}\ 1997).
The open circle at $8\times10^{-12}$ erg s$^{-1}$ cm$^{-2}$\ corresponds to the source count from the {\it Ginga}\ survey (Kondo {\it et al.}\ 1991), and the thick line above $3\times10^{-11}$ erg s$^{-1}$ cm$^{-2}$\ is the extragalactic Log $N$ - Log $S$ relation determined by {\it HEAO1}\ A2 (Piccinotti {\it et al.}\ 1982). All the horizontal bars represent 90\% statistical errors in the source counts.
}
\end{figure}

\begin{acknowledgements}

I thank all the collaborators of our {\it ASCA}\ survey projects, especially M.~Akiyama, G.~Hasinger, H.~Inoue, Y.~Ishisaki, I.~Lehmann, K.~Makishima, Y.~Ogasaka, T.~Ohashi, K.~Ohta, M.~Sakano, T.~Takahashi, T.~Tsuru, W.~Voges, T.~Yamada, and A.~Yamashita.

\end{acknowledgements}

\setlength{\parindent}{0cm}