\\section{Lie-Rinehart algebras} \\label{sec1}\n\nThroughout this paper $k$ will denote a field\nof characteristic $0$.\nIn this section we will briefly recall the definition\nof a Lie-Rinehart algebra and mention some basic examples.\nLie-Rinehart algebras were introduced and studied\nby Herz \\cite{Her}, Palais \\cite{Pal} and Rinehart \\cite{Rin}.\nThey form the algebraic counterpart of the more geometric\nnotion of Lie algebroid, which has become better known.\nIt was Huebschmann \\cite{Hue} who gave Lie-Rinehart algebras their name,\nand also emphasized their advantages over Lie algebroids\nin contexts where singularities arise.\nSee also \\cite{Hue2} for an excellent survey.\n\nLet $R$ be a unital {\\em commutative} algebra over $k$.\nA {\\em Lie-Rinehart algebra over} $R$ is a Lie algebra\n$L$ over $k$, equipped with a structure of a unital left $R$-module\nand a homomorphism of Lie algebras $\\rho\\!:L\\to\\mathord{\\mathrm{Der}}_{k}(R)$\ninto the Lie algebra of derivations on $R$,\nwhich is a map of left $R$-modules\nsatisfying the\nfollowing Leibniz rule\n$$ [X,rY]=r[X,Y]+\\rho(X)(r)Y $$\nfor any $X,Y\\in L$ and $r\\in R$. We shall write $\\rho(X)(r)=X(r)$.\n\nNote that in \\cite{Hue} and \\cite{Rin},\na Lie-Rinehart algebra over a fixed $R$ is\nreferred to as a $(k,R)$-Lie algebra.\n\n\n\\begin{ex} \\label{ex1.1} \\rm\n(1)\nIf $R=k$, then $\\mathord{\\mathrm{Der}}_{k}(R)=\\{0\\}$, and a Lie-Rinehart\nalgebra over $R$ is simply a Lie algebra over $k$.\n\n(2)\nFor arbitrary $R$, the Lie algebra $\\mathord{\\mathrm{Der}}_{k}(R)$ is\nitself a Lie-Rinehart algebra over $R$ if one takes $\\rho$\nto be the identity map.\n\n(3)\nLet $k=\\mathbb{R}$. 
If $\\pi\\!:E\\to M$ is a vector bundle\nover a smooth manifold $M$, then the structure\nof a Lie-Rinehart algebra on the $\\mathord{\\mathit{C}^{\\infty}}(M)$-module\n$\\Gamma(E)$ of sections of $E$ is the same as the structure\nof a Lie algebroid on $E$.\n\n(4)\nLet $L$ be a Lie-Rinehart algebra over $R$ and\n$\\tau\\!:R\\to S$ a homomorphism of unital commutative\n$k$-algebras. Then we can form a Lie-Rinehart\nalgebra $\\tau_{!}(L)$ over $S$ as the kernel\nof the map $\\varphi$,\n$$\n\\xymatrix{\n0 \\ar[r] & \\tau_{!}(L) \\ar[r] &\n(S\\mathbin{\\otimes}_{R}L)\\oplus \\mathord{\\mathrm{Der}}_{k}(S) \\ar[r]^-{\\varphi} &\n\\mathord{\\mathrm{Hom}}_{k}(R,S)\\;,\n}\n$$\ndefined by\n$$ \\varphi(s\\mathbin{\\otimes} X,D)(r)=s\\tau(X(r))-D(\\tau(r))\\;.$$\nThe bracket on $\\tau_{!}(L)$ is given by\n$$ [(s\\mathbin{\\otimes} X,D),(t\\mathbin{\\otimes} Y,E)]=\n (st\\mathbin{\\otimes} [X,Y] + D(t)\\mathbin{\\otimes} Y - E(s)\\mathbin{\\otimes} X , [D,E] )\\;,$$\nwhile the representation of $\\tau_{!}(L)$ on $S$\nis given by the projection.\nIn the special case where $\\tau$ is a localization\n$R\\to R_{\\mathfrak{p}}$ of $R$ at a prime ideal $\\mathfrak{p}$ of $R$,\none can check that $\\tau_{!}(L)$ is isomorphic\nto the localization $R_{\\mathfrak{p}}\\mathbin{\\otimes}_{R}L=L_{\\mathfrak{p}}$, with the bracket\n$$ [s^{-1}X,t^{-1}Y] =\n (st)^{-1}[X,Y] - s^{-1}t^{-2}X(t)Y + t^{-1}s^{-2}Y(s)X $$\nand representation $\\rho_{\\mathfrak{p}}\\!:L_{\\mathfrak{p}}\\to\\mathord{\\mathrm{Der}}_{k}(R_{\\mathfrak{p}})$ induced\nby $\\rho\\!:L\\to\\mathord{\\mathrm{Der}}_{k}(R)$ and the canonical map\n$R_{\\mathfrak{p}}\\mathbin{\\otimes}_{R}\\mathord{\\mathrm{Der}}_{k}(R)\\to\\mathord{\\mathrm{Der}}_{k}(R_{\\mathfrak{p}})$.\nThis agrees with the definition given in \\cite{Rin}.\n\\end{ex}\n\nThe Lie-Rinehart algebras over $R$ form a category\n$$ \\mathsf{LieRinAlg}_{R} \\;,$$\nwhere a morphism $\\phi\\!:L\\to L'$ is a homomorphism\nof Lie algebras over $k$ as well as a map\nof 
$R$-modules which intertwines the representations,\n$\\rho'\\mathbin{{\\scriptstyle \\circ }} \\phi=\\rho$.\nThe operation $\\tau_{!}$, induced by\na homomorphism $\\tau\\!:R\\to S$\nof unital commutative $k$-algebras,\nis a functor $\\mathsf{LieRinAlg}_{R}\\to\\mathsf{LieRinAlg}_{S}$.\nMoreover, for another homomorphism\n$\\sigma\\!:S\\to T$ of unital commutative\n$k$-algebras there is a canonical map\n$\\sigma_{!}(\\tau_{!}(L))\\to (\\sigma\\mathbin{{\\scriptstyle \\circ }}\\tau)_{!}(L)$.\nUsing these canonical maps, the\ncategories $\\mathsf{LieRinAlg}_{R}$,\nfor varying $k$-algebras $R$, can be\nassembled into one big (fibered) category\n$$ \\mathsf{LieRinAlg}\\;,$$\nin which a map $(R,L)\\to (S,K)$\nis a pair $(\\tau,\\phi)$, consisting of a homomorphism\nof unital $k$-algebras $\\tau\\!:R\\to S$ and a homomorphism\n$\\phi\\!:\\tau_{!}(L)\\to K$ of Lie-Rinehart algebras over $S$.\n\n\n\\section{The universal enveloping algebra} \\label{sec2}\n\nLet $L$ be a Lie-Rinehart algebra over $R$.\nThe left $R$-module $R\\oplus L$ has a natural\nLie algebra structure, given by\n$$[(r,X), (s,Y)]=(X(s)-Y(r),[X,Y]) $$\nfor any $r,s\\in R$ and $X,Y\\in L$.\nLet $U(R\\oplus L)$ be its universal enveloping algebra over $k$,\nobtained as the quotient of the tensor algebra\nof $R\\oplus L$ over $k$ with respect\nto the usual ideal.\nWrite $i\\!:R\\oplus L\\to U(R\\oplus L)$ for the canonical inclusion\nand $\\bar{U}(R\\oplus L)$ for\nthe subalgebra of $U(R\\oplus L)$ generated by\n$i(R\\oplus L)$.\nThe {\\em universal enveloping algebra} of the Lie-Rinehart algebra\n$L$ over $R$ (see \\cite{Rin})\nis the quotient algebra\n$$ \\mathord{\\mathscr{U}}(R,L)=\\bar{U}(R\\oplus L)\/I $$\nover $k$,\nwhere $I$ is the two-sided\nideal in $\\bar{U}(R\\oplus L)$\ngenerated by the elements\n$i(s,0)\\cdot i(r,X)-i(sr,sX)$,\nfor all $r,s\\in R$ and $X\\in L$. 
The natural map\n$\\iota_{R}\\!:R\\to\\mathord{\\mathscr{U}}(R,L)$, $r\\mapsto i(r,0)+I$,\nis a homomorphism of unital $k$-algebras, while\n$\\iota_{L}\\!:L\\to\\mathord{\\mathscr{U}}(R,L)$,\n$X\\mapsto i(0,X)+I$, is a homomorphism of Lie algebras.\nFurthermore, we have $\\iota_{R}(r)\\iota_{L}(X)=\\iota_{L}(rX)$\nand $[\\iota_{L}(X),\\iota_{R}(r)]=\\iota_{R}(X(r))$.\n\nThe universal enveloping algebra $\\mathord{\\mathscr{U}}(R,L)$ is characterized by the\nfollowing universal property:\nif $A$ is any unital $k$-algebra,\n$\\kappa_{R}\\!:R\\to A$ a homomorphism of unital $k$-algebras\nand $\\kappa_{L}\\!:L\\to A$ a homomorphism of Lie algebras\nsuch that $\\kappa_{R}(r)\\kappa_{L}(X)=\\kappa_{L}(rX)$\nand $[\\kappa_{L}(X),\\kappa_{R}(r)]=\\kappa_{R}(X(r))$\nfor any $r\\in R$ and $X\\in L$, then\nthere exists a unique homomorphism of\nunital algebras $f\\!:\\mathord{\\mathscr{U}}(R,L)\\to A$ such that\n$f\\mathbin{{\\scriptstyle \\circ }} \\iota_{R}=\\kappa_{R}$ and\n$f\\mathbin{{\\scriptstyle \\circ }} \\iota_{L}=\\kappa_{L}$.\n\nIn particular, the universal property of $\\mathord{\\mathscr{U}}(R,L)$\nimplies that there exists a unique\nrepresentation\n$$ \\varrho\\!:\\mathord{\\mathscr{U}}(R,L)\\to \\mathord{\\mathrm{End}}_{k}(R) $$\nsuch that $\\varrho\\mathbin{{\\scriptstyle \\circ }}\\iota_{L}=\\rho$ and\n$\\varrho\\mathbin{{\\scriptstyle \\circ }}\\iota_{R}$ is the canonical\nrepresentation given by the\nmultiplication in $R$. 
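The representation $\\varrho$ can be made concrete by letting $R$ be an algebra of smooth functions of one variable, so that derivations act as first order differential operators. The following sketch (our own illustration, using the sympy library; the helper names are ours and not notation from the text) verifies the commutation relation $[\\iota_{L}(X),\\iota_{R}(r)]=\\iota_{R}(X(r))$ stated above in this model:

```python
import sympy as sp

t = sp.symbols('t')
h = sp.Function('h')(t)   # a test function for the operators to act on

# rho(X) for the derivation X = f * d/dt, and rho(r) = multiplication by r
def X_op(f, u):
    return f * sp.diff(u, t)

def r_op(r, u):
    return r * u

f = t**2        # coefficient of the derivation X = t^2 d/dt
r = sp.sin(t)   # an element of R

# commutator [X, r] acting on h: X(r*h) - r*X(h)
comm = X_op(f, r_op(r, h)) - r_op(r, X_op(f, h))

# it equals multiplication by X(r) = f * dr/dt, i.e. [X, r] = X(r)
assert sp.simplify(comm - r_op(X_op(f, r), h)) == 0
print("commutation relation [X, r] = X(r) verified")
```

In this model the commutator of a derivation with a multiplication operator collapses to multiplication by the derivative, exactly as the relation $[\\iota_{L}(X),\\iota_{R}(r)]=\\iota_{R}(X(r))$ predicts.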
Since\nthe canonical\nrepresentation of $R$ is faithful, we see that\nthe map $\\iota_{R}$ is injective.\nWe shall therefore identify $\\iota_{R}(R)$ with $R$,\n$\\iota_{R}(r)=r$.\nWe shall often denote\n$\\iota_{L}(X)$ by $\\iota(X)$ or simply by $X$.\nIn this notation, the algebra $\\mathord{\\mathscr{U}}(R,L)$\nis generated by elements $X\\in L$ and $r\\in R$,\nwhile $r\\cdot X=rX$ and\n$[X,r]=X\\cdot r - r\\cdot X=X(r)$ in $\\mathord{\\mathscr{U}}(R,L)$.\nAs a $k$-linear space, $\\mathord{\\mathscr{U}}(R,L)$ is generated by $R$ and\nthe powers $\\iota(L)^{n}$, $n=1,2,\\ldots\\,$.\nThe algebra $\\mathord{\\mathscr{U}}(R,L)$ also has a natural filtration\n$$ R=\\mathord{\\mathscr{U}}_{(0)}(R,L)\\subset\\mathord{\\mathscr{U}}_{(1)}(R,L)\\subset\\mathord{\\mathscr{U}}_{(2)}(R,L)\\subset\\cdots\\;,$$\nwhere $\\mathord{\\mathscr{U}}_{(n)}(R,L)$ is spanned by $R$ and the\npowers $\\iota(L)^{m}$, for $m=1,2,\\ldots,n$.\nWe define the associated graded algebra as\n$$ \\mathord{\\mathrm{gr}}(\\mathord{\\mathscr{U}}(R,L))=\\bigoplus_{n=0}^{\\infty} \\mathord{\\mathscr{U}}_{(n)}(R,L)\/\\mathord{\\mathscr{U}}_{(n-1)}(R,L)\\;,$$\nwhere we take $\\mathord{\\mathscr{U}}_{(-1)}(R,L)=\\{0\\}$.\nIt is a commutative unital algebra over $R$.\n\n\\begin{ex} \\rm \\label{ex2.1}\n(1) Let $V$ be a left $R$-module. With zero bracket and representation,\n$V$ is a Lie-Rinehart algebra. 
The corresponding universal enveloping algebra\n$\\mathord{\\mathscr{U}}(R,V)$ is in this case the symmetric algebra $S_{R}(V)$\n(see the appendix below).\n\n(2) For any Lie algebra $L$ over $k$, the universal enveloping algebra\n$\\mathord{\\mathscr{U}}(k,L)$ is the classical\nuniversal enveloping algebra $U(L)$ of $L$.\n\n(3) If $G$ is a Lie groupoid over a smooth manifold $M$\nand $\\mathfrak{g}$ its Lie algebroid, then the algebra of right\ninvariant tangential differential operators on $G$ is\na concrete model for the universal enveloping algebra $\\mathord{\\mathscr{U}}(\\mathord{\\mathit{C}^{\\infty}}(M),\\Gamma(\\mathfrak{g}))$\n(see \\cite{NWX}).\n\\end{ex}\n\nAs for the classical universal enveloping algebra of a Lie algebra,\nthere is a Poincar\\'{e}-Birkhoff-Witt theorem for the\nuniversal enveloping algebra of a Lie-Rine\\-hart algebra \\cite{Rin}:\nif the Lie-Rinehart algebra $L$ is projective as a left $R$-module,\nthen the\nnatural map $\\theta\\!:S_{R}(L)\\to\\mathord{\\mathrm{gr}}(\\mathord{\\mathscr{U}}(R,L))$ is an isomorphism of\nalgebras. In particular, this implies that\n$\\iota_{L}\\!:L\\to\\mathord{\\mathscr{U}}(R,L)$ is in this case injective.\n\n\n\n\\section{Rinehart bialgebras} \\label{sec3}\n\n\nThe universal enveloping algebra of a Lie algebra\nis a Hopf algebra, as is the group ring of a discrete group.\nIn this section we will identify the algebraic structure\ncommon to the universal enveloping algebra of a Lie-Rinehart algebra\nand the convolution algebra of an \\'{e}tale groupoid. This\nstructure has occurred in the literature under various names;\nsee \\cite{Kap,Lu,Mal,Mrcun,Mrcun2,Tak,Xu}. 
We suggest the name\n{\\em Rinehart bialgebra}.\n\nLet $R$ be a unital commutative algebra over $k$ as before.\nAll modules considered will be unital left $R$-modules, and\n$\\mathbin{\\otimes}_{R}$ will always denote the tensor product of left\n$R$-modules.\nSuppose that $A$ is a unital $k$-algebra\nwhich {\\em extends} $R$, i.e.\\ such that\n$R$ is a unital subalgebra of $A$.\nIn particular, $A$ is an $R$-$R$-bimodule.\nThe left $R$-module $A\\mathbin{\\otimes}_{R}A$\n(tensor product of $A$, viewed as a left $R$-module, with itself)\nis also a right $R$-module in two ways.\nObserve that $A\\mathbin{\\otimes}_{R}A$ is not necessarily an algebra in a\nnatural way unless $R$ lies in the centre of $A$.\nFollowing \\cite{Kap},\nwe denote by $A\\bar{\\mathbin{\\otimes}}_{R}A$ the submodule of\n$A\\mathbin{\\otimes}_{R}A$ given by the kernel of the map $\\vartheta$,\n$$\n\\xymatrix{\n0 \\ar[r] & A\\bar{\\mathbin{\\otimes}}_{R}A \\ar[r] &\nA\\mathbin{\\otimes}_{R}A \\ar[r]^-{\\vartheta} & \\mathord{\\mathrm{Hom}}_{k}(R,A\\mathbin{\\otimes}_{R}A)\\;,\n}\n$$\ndefined by $\\vartheta(a\\mathbin{\\otimes} b)(r)=ar\\mathbin{\\otimes} b - a\\mathbin{\\otimes} br$.\nThe $R$-module $A\\bar{\\mathbin{\\otimes}}_{R}A$ has a natural structure of\na $k$-algebra.\nIf $R$ is in the centre of $A$, then\n$A\\bar{\\mathbin{\\otimes}}_{R}A=A\\mathbin{\\otimes}_{R}A$.\n\n\\begin{dfn} \\rm \\label{dfn3.1}\nA {\\em Rinehart bialgebra over} $R$\nis a unital $k$-algebra $A$ which extends $R$,\nwith a {\\em compatible} structure of a cocommutative\ncoalgebra in the category of left $R$-modules.\n\nIf we denote the comultiplication\nby $\\mathord{\\Delta}\\!:A\\to A\\mathbin{\\otimes}_{R} A$ and the counit\nby $\\mathord{\\epsilon}\\!:A\\to R$, the compatibility conditions\nare\n\\begin{enumerate}\n\\item [(i)] $\\mathord{\\Delta}(A)\\subset A\\bar{\\mathbin{\\otimes}}_{R}A$,\n\\item [(ii)] $\\mathord{\\epsilon}(1)=1$,\n\\item [(iii)] $\\mathord{\\Delta}(1)=1\\mathbin{\\otimes} 1$,\n\\item 
[(iv)] $\\mathord{\\epsilon}(ab)=\\mathord{\\epsilon}(a\\mathord{\\epsilon}(b))$, and\n\\item [(v)] $\\mathord{\\Delta}(ab)=\\mathord{\\Delta}(a)\\mathord{\\Delta}(b)$\n\\end{enumerate}\nfor any $a,b\\in A$.\n\\end{dfn}\n\nObserve that the condition (v) makes sense because\nof (i) and the fact that $A\\bar{\\mathbin{\\otimes}}_{R}A$ is a $k$-algebra.\nNote, however, that (iv) does not express that $\\mathord{\\epsilon}$ is\nan algebra map.\n\nThe Rinehart bialgebras over $R$ form a category\n$$ \\mathsf{RinBiAlg}_{R}\\;, $$\nin which a morphism $f\\!:A\\to B$ is a map\nwhich is at the same time a homomorphism of $k$-algebras\nwith unit and a homomorphism of coalgebras in the category\nof left $R$-modules.\n\n\\begin{ex} \\label{ex3.2} \\rm\n(1)\nIf a unital $k$-algebra $A$ extends $R$ such that $R$ lies in the centre\nof $A$, a Rinehart bialgebra structure on $A$\nis the same as an ordinary $R$-bialgebra structure on $A$.\n\n(2)\nLet $\\mathord{\\mathscr{U}}(R,L)$ be the universal enveloping algebra of\na Lie-Rinehart algebra $L$ over $R$ (Section \\ref{sec2}).\nThe universal property implies that there exists a unique\nhomomorphism of algebras\n$$ \\mathord{\\Delta}\\!:\\mathord{\\mathscr{U}}(R,L)\\to \\mathord{\\mathscr{U}}(R,L)\\bar{\\mathbin{\\otimes}}_{R}\\mathord{\\mathscr{U}}(R,L)\n \\subset \\mathord{\\mathscr{U}}(R,L)\\mathbin{\\otimes}_{R}\\mathord{\\mathscr{U}}(R,L) $$\nsuch that $\\mathord{\\Delta}(r)=1\\mathbin{\\otimes} r=r\\mathbin{\\otimes} 1$ and\n$\\mathord{\\Delta}(X)=1\\mathbin{\\otimes} X + X\\mathbin{\\otimes} 1$ for any $r\\in R$ and $X\\in L$.\nWith the counit $\\mathord{\\epsilon}\\!:\\mathord{\\mathscr{U}}(R,L)\\to R$ given by\n$$ \\mathord{\\epsilon}(u)=\\varrho(u)(1) \\;,$$\none can check that $\\mathord{\\mathscr{U}}(R,L)$ is a Rinehart bialgebra over $R$.\nFurthermore, a morphism of Lie-Rinehart algebras\n$\\phi\\!:L\\to L'$ induces, by the universal property,\na morphism of Rinehart 
bialgebras\n$\\mathord{\\mathscr{U}}(R,\\phi)\\!:\\mathord{\\mathscr{U}}(R,L)\\to\\mathord{\\mathscr{U}}(R,L')$, and this gives a functor\n$$ \\mathord{\\mathscr{U}}\\!:\\mathsf{LieRinAlg}_{R}\\to\\mathsf{RinBiAlg}_{R} \\;.$$\n\n(3)\nThe convolution algebra of\nsmooth functions with compact support on\nan \\'{e}tale Lie groupoid $G$\nover a compact manifold $M$\nis a Rinehart bialgebra over $\\mathord{\\mathit{C}^{\\infty}}(M)$\n(see \\cite{Mrcun,Mrcun2}).\n\\end{ex}\n\n\nLet $A$ be a Rinehart bialgebra over $R$. Then $A$ splits\nas an $R$-module as\n$$ A=R\\oplus\\bar{A}\\;,$$\nwhere $\\bar{A}=\\ker\\mathord{\\epsilon}$. The $R$-submodule $\\bar{A}$\nis a subalgebra of $A$ and carries a\ncocommutative coassociative coproduct\n$\\bar{\\mathord{\\Delta}}\\!:\\bar{A}\\to \\bar{A}\\mathbin{\\otimes}_{R}\\bar{A}$, defined by\n$$ \\bar{\\mathord{\\Delta}}(a)=\\mathord{\\Delta}(a)- a\\mathbin{\\otimes} 1 - 1\\mathbin{\\otimes} a\\;.$$\nThe bialgebra $A$ can be reconstructed from $\\bar{A}$, $\\bar{\\mathord{\\Delta}}$\nand the multiplication on $\\bar{A}$.\n\nThe $R$-module $\\bar{A}$ has a filtration\n$$ \\{0\\}=\\bar{A}_{0} \\subset \\bar{A}_{1} \\subset \\bar{A}_{2} \\subset \\cdots $$\nwith $\\bar{A}_{n}=\\ker\\bar{\\mathord{\\Delta}}^{(n)}$, where\n$\\bar{\\mathord{\\Delta}}^{(n)}$ denotes the iterated coproduct\n$\\bar{A}\\to \\bar{A}\\mathbin{\\otimes}_{R}\\cdots\\mathbin{\\otimes}_{R}\\bar{A}$ ($n+1$ copies).\nWe refer to this filtration as the {\\em primitive filtration}\nof $\\bar{A}$, and also write $A_{n}=R\\oplus\\bar{A}_{n}$ ($n\\geq 0$).\nThe\nsubmodule $\\bar{A}_{1}$ is called the submodule of {\\em primitive}\nelements, and also denoted by $\\mathord{\\mathscr{P}}(A)$.\nWe observe that $\\mathord{\\mathscr{P}}(A)$ is a Lie-Rinehart algebra over $R$. 
Its\nLie bracket is given by the commutator in $A$, and its representation\n$\\rho\\!:\\mathord{\\mathscr{P}}(A)\\to \\mathord{\\mathrm{Der}}_{k}(R)$ is given by\n$\\rho(a)(r)=\\mathord{\\epsilon}(a r)$.\nIndeed, note that\n\\begin{equation*}\n\\begin{split}\na r & = (\\mathord{\\epsilon}\\mathbin{\\otimes} 1)(\\mathord{\\Delta}(a r)) \\\\\n & = (\\mathord{\\epsilon}\\mathbin{\\otimes} 1)(\\mathord{\\Delta}(a) \\mathord{\\Delta}(r)) \\\\\n & = (\\mathord{\\epsilon}\\mathbin{\\otimes} 1)((a\\mathbin{\\otimes} 1 + 1\\mathbin{\\otimes} a) (r\\mathbin{\\otimes} 1)) \\\\\n & = (\\mathord{\\epsilon}\\mathbin{\\otimes} 1)(a r\\mathbin{\\otimes} 1 + r\\mathbin{\\otimes} a) \\\\\n & = \\mathord{\\epsilon}(a r)+r a\\;,\n\\end{split}\n\\end{equation*}\nand from this it follows easily that $\\rho$\nis an $R$-linear homomorphism of Lie algebras.\n\nWe call $A$ (or $\\bar{A}$) {\\em cocomplete} if\n$A=\\bigcup_{n=0}^{\\infty}A_{n}$\n(or $\\bar{A}=\\bigcup_{n=0}^{\\infty}\\bar{A}_{n}$).\nThe Rinehart bialgebra $A$ is {\\em graded projective}\nif each of the subquotients\n$A_{n+1}\/A_{n}=\\bar{A}_{n+1}\/\\bar{A}_{n}$\nis a projective $R$-module.\nWe will use these notions in\nseveral theorems stated below.\n\n\\begin{ex} \\label{ex3.3} \\rm\nIt follows from the Poincar\\'{e}-Birkhoff-Witt theorem\nthat the primitive filtration of the\nuniversal enveloping algebra $\\mathord{\\mathscr{U}}(R,L)$, associated to\na Lie-Rinehart algebra $L$ over $R$,\ncoincides with its natural filtration\nif $L$ is projective as a left $R$-module.\nFurthermore,\nthe universal enveloping algebra $\\mathord{\\mathscr{U}}(R,L)$ is in this case\ncocomplete and graded projective.\n\\end{ex}\n\n\\begin{rem} \\rm \\label{rem3.4}\nLet $A$ be a cocomplete graded projective\nRinehart bialgebra over $R$. 
In particular, this implies that\n$A$ and all $A_{n}$ are projective $R$-modules.\nWe write $\\mathord{\\mathrm{gr}}(\\bar{A})=\\bigoplus_{n=1}^{\\infty}\\mathord{\\mathrm{gr}}_{n}(\\bar{A})$,\nwhere $\\mathord{\\mathrm{gr}}_{n}(\\bar{A})=\\bar{A}_{n}\/\\bar{A}_{n-1}$.\nThere is a cocommutative coassociative comultiplication\n$\\bar{\\mathord{\\Delta}}^{\\mathrm{gr}}$ on $\\mathord{\\mathrm{gr}}(\\bar{A})$,\ninduced by $\\bar{\\mathord{\\Delta}}$, such that\n$$ \\bar{\\mathord{\\Delta}}^{\\mathrm{gr}}(\\mathord{\\mathrm{gr}}_{n}(\\bar{A}))\\subset\n \\bigoplus_{p+q=n}\\mathord{\\mathrm{gr}}_{p}(\\bar{A})\\mathbin{\\otimes} \\mathord{\\mathrm{gr}}_{q}(\\bar{A}) $$\nand $\\ker(\\bar{\\mathord{\\Delta}}^{\\mathrm{gr}})=\\mathord{\\mathrm{gr}}_{1}(\\bar{A})$.\nFurthermore, the non-counital coalgebra\n$\\mathord{\\mathrm{gr}}(\\bar{A})$ is cocomplete, i.e.\\\n$\\bigcup_{n=1}^{\\infty}\\ker((\\bar{\\mathord{\\Delta}}^{\\mathrm{gr}})^{(n)})=\\mathord{\\mathrm{gr}}(\\bar{A})$\n(see the appendix).\nNote that\nany morphism $f\\!:A\\to B$ of cocomplete graded projective\nRinehart bialgebras over $R$ induces a\nmorphism of non-counital coalgebras\n$\\mathord{\\mathrm{gr}}(\\bar{f})\\!:\\mathord{\\mathrm{gr}}(\\bar{A})\\to\\mathord{\\mathrm{gr}}(\\bar{B})$.\n\\end{rem}\n\n\n\n\\section{A Cartier-Milnor-Moore theorem} \\label{sec4}\n\n\nWe have already seen that the universal enveloping algebra construction\ndefines a functor\n$$ \\mathord{\\mathscr{U}}\\!:\\mathsf{LieRinAlg}_{R}\\to\\mathsf{RinBiAlg}_{R}\\;.$$\nIn the other direction, there is a functor\n$$ \\mathord{\\mathscr{P}}\\!:\\mathsf{RinBiAlg}_{R}\\to\\mathsf{LieRinAlg}_{R}\\;,$$\nwhich assigns to a Rinehart bialgebra $A$\nits Lie-Rinehart algebra $\\mathord{\\mathscr{P}}(A)$ of\nprimitive elements.\n\n\\begin{theo} \\label{theo4.1}\nThe functor $\\mathord{\\mathscr{U}}$ is left adjoint to $\\mathord{\\mathscr{P}}$.\nFurthermore, the functors $\\mathord{\\mathscr{U}}$ and $\\mathord{\\mathscr{P}}$ restrict\nto an 
equivalence between the full subcategory\nof Lie-Rinehart algebras over $R$\nwhich are projective as left $R$-modules\nand that\nof cocomplete graded projective Rinehart bialgebras\nover $R$.\n\\end{theo}\n\nThe second part of the theorem in particular implies\nthe following property of the counit of the adjunction,\nwhich is an analogue of the Cartier-Milnor-Moore\ntheorem for Hopf algebras:\n\n\\begin{cor} \\label{cor4.2}\nLet $A$ be a Rinehart bialgebra over $R$.\nIf $A$ is cocomplete and graded projective, then\nthere is a canonical isomorphism of\nRinehart bialgebras\n$\\mathord{\\mathscr{U}}(R,\\mathord{\\mathscr{P}}(A))\\to A$.\n\\end{cor}\n\n\\begin{proof}[Proof of Theorem \\ref{theo4.1}]\nFor a Lie-Rinehart algebra $L$ over $R$, the\ncanonical map $L\\to\\mathord{\\mathscr{U}}(R,L)$ clearly lands\nin the submodule of primitive elements, and this\ndefines the unit of the adjunction,\n$$ \\alpha_{L}\\!:L\\to\\mathord{\\mathscr{P}}(\\mathord{\\mathscr{U}}(R,L))\\;.$$\nFor a Rinehart bialgebra $A$ over $R$, the inclusion\n$\\mathord{\\mathscr{P}}(A)\\to A$ induces, by the universal property\nof the universal enveloping algebra, a canonical algebra map\n$$ \\beta_{A}\\!:\\mathord{\\mathscr{U}}(R,\\mathord{\\mathscr{P}}(A))\\to A\\;,$$\nwhich in fact is clearly a map of Rinehart\nbialgebras. This defines the counit of the adjunction.\n\nThe first part of the theorem states that these two maps\nsatisfy the triangular identities\n$$ \\mathord{\\mathscr{P}}(\\beta_{A})\\mathbin{{\\scriptstyle \\circ }}\\alpha_{\\mathord{\\mathscr{P}}(A)}=\\mathord{\\mathrm{id}}_{\\mathord{\\mathscr{P}}(A)} $$\nand\n$$ \\beta_{\\mathord{\\mathscr{U}}(R,L)}\\mathbin{{\\scriptstyle \\circ }}\\mathord{\\mathscr{U}}(R,\\alpha_{L})=\\mathord{\\mathrm{id}}_{\\mathord{\\mathscr{U}}(R,L)}\\;. 
$$\nThese both hold by the (uniqueness part of the) universal\nproperty of the universal enveloping algebra.\n\nIf $L$ is projective as a left $R$-module,\nthen the Poincar\\'{e}-Birkhoff-Witt theorem\nimplies that $\\mathord{\\mathscr{U}}(R,L)$ is graded projective and cocomplete\n(because the natural and primitive filtrations coincide),\nand that $\\alpha_{L}$ is an isomorphism.\nFor a graded projective cocomplete Rinehart bialgebra\n$A$ over $R$, the $R$-module $\\mathord{\\mathscr{P}}(A)$ is obviously\nprojective, and it remains to show that in this case the map\n$\\beta=\\beta_{A}$ is an isomorphism.\n\nIt suffices to prove that the map of reduced\nnon-counital coalgebras\n$$ \\bar{\\beta}\\!:\\bar{\\mathord{\\mathscr{U}}}(R,\\mathord{\\mathscr{P}}(A))\\to\\bar{A} $$\nis an isomorphism.\nFor this, in turn, it is enough to show that the induced map\nof non-counital coalgebras\n$$ \\mathord{\\mathrm{gr}}(\\bar{\\beta})\\!:\\mathord{\\mathrm{gr}}(\\bar{\\mathord{\\mathscr{U}}}(R,\\mathord{\\mathscr{P}}(A)))\\to\\mathord{\\mathrm{gr}}(\\bar{A}) $$\nis an isomorphism.\nThe projection\n$\\mathord{\\mathrm{gr}}(\\bar{A})\\to \\mathord{\\mathrm{gr}}_{1}(\\bar{A})=\\mathord{\\mathscr{P}}(A)$ induces a map\nof non-counital coalgebras $\\gamma\\!:\\mathord{\\mathrm{gr}}(\\bar{A})\\to \\bar{S}_{R}(\\mathord{\\mathscr{P}}(A))$, by\nthe universal property of $\\bar{S}_{R}(\\mathord{\\mathscr{P}}(A))$ (Proposition \\ref{propA.2}).\nNow consider the diagram\n$$\n\\xymatrix{\n\\mathord{\\mathrm{gr}}(\\bar{\\mathord{\\mathscr{U}}}(R,\\mathord{\\mathscr{P}}(A))) \\ar[r]^-{\\mathord{\\mathrm{gr}}(\\bar{\\beta})} \\ar[dr]_-{\\bar{\\theta}^{-1}} &\n\\mathord{\\mathrm{gr}}(\\bar{A}) \\ar[d]^{\\gamma} \\\\\n& \\bar{S}_{R}(\\mathord{\\mathscr{P}}(A))\n}\n$$\nwhere $\\bar{\\theta}^{-1}$ is the inverse of the Poincar\\'{e}-Birkhoff-Witt\nisomorphism $\\theta$ (Section \\ref{sec2}) restricted to\n$\\bar{S}_{R}(\\mathord{\\mathscr{P}}(A))$.\nAll maps in the diagram\nare maps of 
non-counital coalgebras.\nThus, to see that the diagram commutes, it suffices\n(again by the universal property of $\\bar{S}_{R}(\\mathord{\\mathscr{P}}(A))$ stated\nin Proposition \\ref{propA.2}) to show that\n$$ \\mathord{\\mathrm{pr}}_{1}\\mathbin{{\\scriptstyle \\circ }}\\gamma\\mathbin{{\\scriptstyle \\circ }}\\mathord{\\mathrm{gr}}(\\bar{\\beta})=\\mathord{\\mathrm{pr}}_{1}\\mathbin{{\\scriptstyle \\circ }}\\bar{\\theta}^{-1} $$\nfor the projection $\\mathord{\\mathrm{pr}}_{1}\\!:\\bar{S}_{R}(\\mathord{\\mathscr{P}}(A))\\to\\mathord{\\mathscr{P}}(A)$,\nwhich is clear from the explicit definitions.\nTo finish the proof,\nrecall from Remark \\ref{rem3.4}\nthat $\\mathord{\\mathrm{gr}}(\\bar{A})$ is cocomplete and $\\ker(\\bar{\\mathord{\\Delta}}^{\\mathrm{gr}})=\\mathord{\\mathscr{P}}(A)$,\nso that\nby Lemma \\ref{lemA.1} the map\n$\\gamma$ is injective, while by the commutativity\nof the diagram it is also surjective. Thus $\\gamma$ is\nan isomorphism, and hence so is $\\mathord{\\mathrm{gr}}(\\bar{\\beta})$.\n\\end{proof}\n\n\n\n\\section{Introduction}\nBlack holes provide a powerful laboratory for testing our ideas about space and time at both the theoretical and the observational frontier. A striking theoretical prediction based on quantum field theory in curved spacetime is that black holes are not entirely black: their event horizon emits black body radiation with a temperature inversely proportional to the mass of the black hole \\cite{Hawking:1975vcx,haw1,haw2,haw3}. Owing to this so-called Hawking effect, the black hole loses mass and eventually evaporates completely within a finite time-span. 
The robustness of this scenario has been corroborated in a number of different ways comprising perturbative computations in a fixed background spacetime \\cite{Hawking:1975vcx}, the detector approach \\cite{detector0,detector00,detector,detector1}, as well as by analogy to the Unruh effect \\cite{Unruh:1976db}.\n\nThe Hawking effect gives rise to a series of theoretical puzzles though. Firstly, one encounters the black hole information paradox reviewed in \\cite{chen}. The picture above suggests that the physical information about how the black hole was formed could permanently disappear, allowing many physical states to evolve into the same state. Basically, there are three viewpoints on this problem \\cite{stev1,dan}: 1) information is indeed lost after the black holes has evaporated completely, 2) evaporation stops and information is preserved inside a stable remnant, and 3) information may be returned outside via Hawking radiation. In particular, with regard to option 2) it is important to understand the final configuration emanating from the black hole evaporation process. \n\nThe second puzzle comes from the increase of the horizon temperature as the black hole becomes lighter and lighter. For instance, a black hole with a mass of the order of the Planck mass would have a temperature $T_h \\approx 10^{31}$K. Theoretically, it is then predicted that the final stage of the black hole evaporation process generates a so-called thunderbolt singularity. From the observational perspective, this feature leads to the prediction that a black hole reaching a mass range between $10^9 - 10^{13}$g should create powerful short-lived gamma-ray bursts with energy of a few hundred MeV \\cite{haw}. 
So far, all attempts to detect such high-energy bursts have failed and resulted only in upper bounds on the black hole evaporation rate in the vicinity of Earth \\cite{gamma,gamma1}.\n\nThirdly, a cold phase in a black hole's life gives an interesting perspective on dark matter \\cite{mc,rov,dm}. If the evaporation process is halted at some mass (potentially set by the Planck mass) one may end up with a stable configuration which, except for its gravitational interaction, has no or extremely small interaction with ordinary matter and hence fits perfectly into the definition of a Weakly Interacting Massive Particle (WIMP). This has led many authors to claim that in fact such tiny black holes could explain the mystery of dark matter in our universe; see, e.g., \\cite{mc,rov,dm} for selected references. In reference \\cite{revi} it is shown that no major constraint can be cast upon the properties of Planck-size remnants if they play the role of dark matter at a cosmological scale; nonetheless, the way these remnants can be produced and their stability could be potential weak spots of such scenarios \\cite{rov}. \n\nThese puzzles clearly ask for a better understanding of the black hole evaporation process beyond the quantum field theory in curved spacetime analysis. It is conceivable that the ultimate answer lies in the realm of a theory of quantum gravity. 
Since there are currently many different routes at various stages of development, we take a different angle on the problem: quite strikingly, many quantum gravity programs, including string theory\n\\cite{Becker:2006dvp,Zwiebach:2004tj,Palti:2019pca},\n loop quantum gravity and spin foams \\cite{Ashtekar:2021kfp,Perez:2012wv,Rovelli:2014ssa},\n asymptotically safe gravity \\cite{Niedermaier:2006wt,Reuter:2012id,Percacci:2017fkn,Eichhorn:2018yfc,Reuter:2019byg,Pereira:2019dbn,Reichert:2020mja,Pawlowski:2020qer},\nCausal Dynamical Triangulations \\cite{Ambjorn:2012jv,Loll:2019rdj},\nand Ho\\v{r}ava-Lifshitz gravity \\cite{Hor,Wang:2017brl,Barvinsky:2021ubv} predict a dynamical dimensional reduction of the theory's momentum space \\cite{Carlip:2017eud,Carlip:2019onx} (also see \\cite{tHooft:1993dmi} for an early account of this idea). Specifically, the spectral dimension $D_s$, probing the effective dimension experienced by a random walk, drops from $D_s = 4$ at macroscopic scales to $D_s \\approx 2$ at short distances. For instance, the analysis of geometries obtained from the Causal Dynamical Triangulations program reported a scale-dependent spectral dimension \\cite{Ambjorn:2005db},\n\\begin{equation}\nD_s(T) = a - \\frac{b}{c+T} \\, , \n\\ee\nwhere $a=4.02$, $b=119$, and $c=54$. A similar analysis within Euclidean Dynamical Triangulations \\cite{Laiho:2017htj} obtained $D_s(T) = 3.94 \\pm 0.16$ and $D_s(T) = 1.44 \\pm 0.19$ for the large and small distance values when extrapolating to the continuum and infinite volume limits. 
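For orientation, the two regimes of this interpolating fit are easily made explicit. The snippet below (a plain numerical illustration based on the quoted fit values; it is not part of the cited analyses) evaluates the formula at short and long diffusion times:

```python
# Fit of the scale-dependent spectral dimension from Causal Dynamical
# Triangulations: D_s(T) = a - b/(c + T), with T the diffusion time.
a, b, c = 4.02, 119.0, 54.0

def spectral_dimension(T):
    """Effective dimension seen by a random walker of diffusion time T."""
    return a - b / (c + T)

# long diffusion times probe macroscopic scales: D_s -> a = 4.02
print(spectral_dimension(1e6))   # close to 4.02
# short diffusion times probe the UV: D_s(0) = a - b/c
print(spectral_dimension(0.0))   # close to 1.82
```

The short-distance value $a - b\/c \\approx 1.8$ illustrates the drop from $D_s \\approx 4$ towards $D_s \\approx 2$ described above.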
Generalizing \\cite{Lauscher:2005qz}, the scale-dependent spectral dimension along a fixed asymptotically safe renormalization group trajectory has been computed in \\cite{Reuter:2011ah}, leading to a three-plateau structure with $D_s(T)=4$, $D_s(T) = 4\/3$, and $D_s(T)=2$ at large, intermediate, and short distances, respectively (also see \\cite{Reuter:2012xf} for a review).\n\n\n\nIn \\cite{Barcaroli:2015xda} the scale-dependence of the spectral dimension has been linked to the dimensionality of the theory's momentum space: a drop of the spectral dimension indicates that there are fewer degrees of freedom at high energies as compared to the expectation based on the dimension of spacetime observed at macroscopic scales. Thus the dynamical dimensional reduction provides a powerful mechanism for eliminating divergences occurring at high energies. Within the realm of multi-scale models \\cite{cal,Calcagni:2013vsa,Calcagni:2016azd,Calcagni:2015xcf}, the phenomenological consequences of this mechanism have been explored, e.g., in the context of quantum field theory \\cite{cal-standard}, cosmology \\cite{cal-cosmo,cal-cosmo2}, and also for the Unruh effect \\cite{Alkofer:2016utc}; see \\cite{Calcagni:2021ipd} for an up-to-date review.\\footnote{Along different lines, fractal aspects of black holes have been considered within the ``un-gravity program'' \\cite{grav1,spec1}.}\n\nIn \\cite{carlip} the authors suggested an intricate connection between dynamical dimensional reduction and the formation of cold remnants at the end of the black hole evaporation process based on a two-dimensional dilaton-gravity model. The goal of our work is to complement this analysis by implementing the effect of a drop in the spectral dimension in the thermodynamic properties of a four-dimensional Schwarzschild black hole. 
As our main result, we demonstrate that the mechanism of dynamical dimensional reduction removes the thunderbolt singularity appearing in the last stages of the black hole evaporation process. It does not lead to the formation of long-lived black hole remnants though. The latter requires additional ingredients, with a change in the topology of the black hole solution being the most probable one.\n\nThe rest of our work is organized as follows. Sect.\\ \\ref{sect.2} and Sect.\\ \\ref{sect.3} provide a brief introduction to black hole thermodynamics and the concept of generalized dimensions, respectively. Our analysis is presented in Sect.\\ \\ref{sect.4}, and we conclude with a brief discussion and outlook in Sect.\\ \\ref{sect.5}. \n\n\\section{Black hole thermodynamics in a nutshell} \n\\label{sect.2}\nWe start by reviewing the basics of black hole thermodynamics, referring to \\cite{Davies,Wald:1995yp,Raine:2005bs} for more detailed, pedagogical accounts. For simplicity, we consider spherically symmetric black holes described by the Schwarzschild solution. In natural units where $G=c=\\hbar=k_b=1$, the resulting line-element is\n\\begin{equation}\\label{eq:sssol}\nds^2 = \\left(1 - \\frac{2M}{r}\\right) dt^2 - \\left(1 - \\frac{2M}{r}\\right)^{-1} dr^2 - r^2 d\\Omega^2 \\, . \n\\ee \nHere $d\\Omega^2 = d\\theta^2 + \\sin^2\\theta d\\phi^2$ is the line-element on the unit two-sphere and $M$ is the mass of the black hole. The geometry \\eqref{eq:sssol} possesses an event horizon at\n\\begin{equation}\nr_h = 2M \\, . \n\\ee\nBased on \\eqref{eq:sssol} one readily deduces that the area of this horizon is\n\\begin{equation}\\label{eq:horizonarea}\nA_h = 4 \\pi r_h^2 = 16 \\pi M^2 \\, . \n\\ee\nAny object or photon crossing this horizon inevitably has to move inward, eventually ending at the curvature singularity at $r=0$. Classically, signals emitted at $r \\le r_h$ cannot reach an observer stationed at $r > r_h$. 
Hence the terminology ``black hole''.\n\nThe analysis within the framework of quantum field theory in curved spacetime \\cite{Hawking:1975vcx} shows, however, that the event horizon emits black body radiation (Hawking radiation) with a temperature proportional to the surface gravity at the horizon. For the Schwarzschild black hole \\eqref{eq:sssol}, this results in\n\\begin{equation}\\label{eq:Hawkingtemp}\nT_h = \\frac{1}{8\\pi M} \\, . \n\\ee\nThe resulting luminosity $L$ is then given by\n\\begin{equation}\\label{eq:Lgeneral}\nL = A_h \\, I \\, , \n\\ee\nwhere $I$ is the integrated black body spectrum. For bosonic fields such as the scalar field considered in this work\n\\begin{equation}\\label{eq:bbfactor}\nI = \\int \\frac{d^3p}{(2\\pi)^3} \\frac{E}{e^{E\/T_h} - 1} \\, . \n\\ee\nFor a massless particle $E = |\\vec{p}| = \\omega$ and one recovers the standard result \\cite{Raine:2005bs}:\\footnote{In general, the power contained in the Hawking radiation associated with a massless scalar field has the form $P = \\sum_l \\int_0^\\infty d\\omega P_l(\\omega)$ with the $l$th multipole contributing with $P_l(\\omega) = \\frac{A_h}{8\\pi^2} T_l(\\omega) \\omega^3 (e^{\\omega\/T_h}-1)^{-1}$. Our analysis focuses on the $l=0$ sector and neglects the gray-body corrections $T_l(\\omega)$. Since the latter encode the transmission probability of Hawking radiation reaching future infinity without being backscattered by the gravitational barrier surrounding the black hole, one expects that these will lead to an additional suppression of the massive contributions as compared to the massless ones. 
Based on eq.\\ \\eqref{eq:Lspec} one then expects that the inclusion of the $T_l(\\omega)$ will further inhibit the formation of remnants while leaving the leading order analysis unaffected.}\n\\begin{equation} \\label{Lumin}\n\t\\begin{split}\n\tL_{\\rm massless} \\,= & \\, \\frac{8 M^2}{\\pi} \\int_{0}^{\\infty} d\\omega \\, \\frac{\\omega^3}{e^{8\\, \\pi\\, M\\, \\omega}-1} \\\\\n\t= & \\, \\frac{1}{7680 \\pi M^2} \\, . \n\t\\end{split} \n\\end{equation}\nHere we have performed the integral over frequencies and expressed the result in terms of the black hole mass $M$ by substituting \\eqref{eq:Hawkingtemp}. $L_{\\rm massless}$ as a function of $M$ is then illustrated as the solid blue line in Fig.\\ \\ref{Hlumin}. Eq.\\ \\eqref{Lumin} exhibits the curious feature that black holes become more and more luminous the lighter they become. In particular $L$ diverges as $M \\rightarrow 0$. The presence of this so-called thunderbolt singularity suggests that the semi-classical analysis breaks down when describing the final stage of black hole evaporation \\cite{Hawking:1992ti,Piran:1993tq,Ashtekar:2010hx,Lowe:1993zw}.\n\\begin{figure}[h] \n\t\\includegraphics[scale=0.60]{lumin1} \\\\\n\t\\caption{\\label{Hlumin} Luminosity of a Schwarzschild black hole as a function of the mass $M$. The cases of a massless and a massive scalar field with $m^2 = 1$ are illustrated by the solid blue and orange lines, respectively.}\n\\end{figure} \n\nThe lifetime of a black hole with initial mass $M_0$ can then be obtained by integrating the mass-loss formula\n\\begin{equation}\\label{eq:massloss}\n\\frac{d M}{dt} = - L \\, . \n\\ee\nSubstituting \\eqref{Lumin} gives the black hole evaporation time\n\\begin{equation}\nt_{\\rm evap} = 2560 \\, \\pi \\, M_0^3 \\, . \n\\ee\nThus the emission of Hawking radiation renders the lifetime of the black hole finite.\n\nThe luminosity formula \\eqref{eq:Lgeneral} is readily generalized to the case of a scalar field with mass $m$. 
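The closed form quoted above follows from the textbook integral $\int_0^\infty d\omega\, \omega^3/(e^{a\omega}-1) = \pi^4/(15 a^4)$ with $a = 8\pi M$. As a quick numerical cross-check of the massless luminosity (a sketch in natural units; the integration cutoff, grid size, and function name are our own ad hoc choices, not part of the original analysis):

```python
import math

def lumin_massless(M, n=20001):
    """Numerically evaluate L = (8 M^2/pi) * Int_0^inf dw w^3/(exp(8 pi M w) - 1)."""
    a = 8*math.pi*M
    wmax = 40.0/a                # integrand is suppressed by ~e^{-40} beyond this
    h = wmax/(n - 1)
    total = 0.0
    for i in range(n):
        w = i*h
        f = 0.0 if w == 0.0 else w**3/math.expm1(a*w)   # integrand -> 0 as w -> 0
        weight = 1 if i in (0, n - 1) else (4 if i % 2 else 2)  # Simpson weights
        total += weight*f
    return (8.0*M*M/math.pi)*total*h/3.0

M = 1.0
L_num = lumin_massless(M)
L_exact = 1.0/(7680*math.pi*M**2)   # closed form quoted in the text
assert abs(L_num/L_exact - 1.0) < 1e-6
```

The quadrature reproduces the analytic $1/(7680\pi M^2)$ to high accuracy, and integrating $dM/dt = -L$ with this $L$ indeed gives $t_{\rm evap} = 2560\pi M_0^3$.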
In this case the energy appearing in \\eqref{eq:bbfactor} is replaced by the relativistic dispersion relation $E^2 = \\vec{p}^2 + m^2$. In general, $I$ does not admit a simple analytic expression. Nevertheless, it is instructive to study the following limits: if $m\/T_h \\ll 1$ the particle is relativistic and one essentially recovers the massless case. In the opposite limit $m\/T_h \\gg 1$ the particle is non-relativistic and the Bose-Einstein distribution in \\eqref{eq:bbfactor} may be approximated by the Boltzmann distribution. This shows that the contribution of a massive mode is actually exponentially suppressed at temperatures $T_h \\lesssim m$. The full expression for $L_{\\rm massive}$ as a function of $M$ is obtained by numerical integration. The result is the orange line shown in Fig.\\ \\ref{Hlumin}.\n \n\\section{The Spectral Dimension in a nutshell} \n\\label{sect.3}\nAn intuitive picture of quantum gravity is that spacetime at short distances will develop non-manifold like features. A first step towards characterizing the resulting structures is to generalize the notion of ``dimension'' borrowing concepts from fractal geometry \\cite{book-diffusion}. In this way one naturally distinguishes between the Hausdorff dimension (based on covering a set of points with balls of decreasing radius), the spectral dimension (measuring the dimension ``felt'' by a diffusing particle), and the walk dimension (related to the expectation value of the distance traveled by a random walk as a function of the diffusion time) of a Euclidean space. While all of these dimensions agree when working on manifolds, they characterize distinct properties of fractal spaces. In the present work, the key role is played by the spectral dimension $D_s$ which may be interpreted as the dimension of the theory's momentum space \\cite{Barcaroli:2015xda}. 
A decrease of the spectral dimension at high energy may then be interpreted as ``the theory possessing fewer degrees of freedom than its analogue defined on a background manifold''. \n\nFormally, the spectral dimension $d_s$ and its scale-dependent generalization $D_s(T)$ are introduced by studying the diffusion of a test particle on a $d$-dimensional Euclidean spacetime with metric $g_{\\mu\\nu}$ with respect to the fiducial diffusion time $T$. Denoting the Laplacian constructed from $g_{\\mu\\nu}$ by $\\Delta \\equiv - g^{\\mu\\nu}D_\\mu D_\\nu$, and introducing $F(\\Delta) \\equiv G(\\Delta)^{-1}$ with $G(\\Delta)$ being the position-space representation of the particle's propagator, the motion of the test particle is captured by the generalized heat equation\n\\begin{equation}\\label{diff}\n\t\\frac{\\partial}{\\partial T} K_{g}(\\xi,\\xi_0;T)= -F(\\Delta) K_{g}(\\xi,\\xi_0;T)\n\\end{equation} \nsubject to the boundary condition\n\\begin{equation}\nK_{g}(\\xi,\\xi_0;T)|_{T = 0} = \\delta^d(\\xi - \\xi_0) \\, . \n\\ee\nHere $K_{g}(\\xi,\\xi_0;T)$ is the heat kernel associated with $F(\\Delta)$. It describes the probability of the particle diffusing from the initial point $\\xi_0$ to $\\xi$ during the time interval $T$. In particular, one recovers the standard heat equation for $F(\\Delta) = \\Delta$. The return probability $P_g(T)$, the probability for the particle to return to its initial point after time $T$, is then defined by\n\\begin{equation}\\label{eq:return}\n P_g(T) \\equiv V^{-1} \\int d^d\\xi \\sqrt{g} \\, K_{g}(\\xi,\\xi;T) \\, . \n\\ee\nHere $V \\equiv \\int d^d\\xi \\sqrt{g}$ is the volume of the space. Based on \\eqref{eq:return}\nthe spectral dimension $d_s$ is then defined as\n\\begin{equation}\\label{def:specdim}\nd_s \\equiv -2 \\lim_{T \\rightarrow 0} \\, \\frac{d \\ln P_g(T)}{d \\ln T} \\, . \n\\ee\nFor the standard heat equation on a smooth manifold $d_s = d$ agrees with the topological dimension of the manifold. 
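This definition can be illustrated numerically. For standard diffusion on flat $\mathbb{R}^d$ the return probability reduces to a radial momentum integral, $P(T) = \Omega_{d-1}/(2\pi)^d \int_0^\infty dp\, p^{d-1} e^{-Tp^2} = (4\pi T)^{-d/2}$, and the logarithmic derivative returns $d_s = d$. A minimal sketch (grid sizes and the finite-difference step are ad hoc choices of ours):

```python
import math

def return_prob(T, d=4, n=20001):
    """P(T) = Omega_{d-1}/(2 pi)^d * Int_0^inf dp p^{d-1} exp(-T p^2)."""
    omega = 2*math.pi**(d/2)/math.gamma(d/2)   # surface area of the unit (d-1)-sphere
    pmax = math.sqrt(40.0/T)                   # e^{-40} suppression beyond this
    h = pmax/(n - 1)
    s = 0.0
    for i in range(n):
        p = i*h
        w = 1 if i in (0, n - 1) else (4 if i % 2 else 2)   # Simpson weights
        s += w*p**(d - 1)*math.exp(-T*p*p)
    return omega/(2*math.pi)**d*s*h/3.0

T = 1.0
# analytic result for d = 4: P(T) = (4 pi T)^(-2)
assert abs(return_prob(T)/(4*math.pi*T)**-2 - 1.0) < 1e-6

# spectral dimension from the log-derivative, d_s = -2 dlnP/dlnT
eps = 0.01
ds = -2*(math.log(return_prob(T*(1 + eps))) - math.log(return_prob(T*(1 - eps)))) \
     / (math.log(1 + eps) - math.log(1 - eps))
assert abs(ds - 4.0) < 1e-3
```

Since $P \propto T^{-d/2}$ exactly, the extracted $d_s$ equals the topological dimension up to quadrature noise.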
At this stage, it is convenient to generalize \\eqref{def:specdim}, allowing for a scale-dependent spectral dimension\n\\begin{equation}\\label{def:specdimT}\n\tD_s(T) \\equiv -2\\frac{d \\ln P_g(T)}{d \\ln T} \\, . \n\\end{equation}\n$D_s(T)$ takes into account the possibility that long random walks may experience a different spectral dimension than the infinitesimal ones entering the definition \\eqref{def:specdim}.\n\nOn a flat Euclidean space $\\mathbb{R}^d$ with metric $\\delta_{\\mu\\nu}$ the generalized heat equation \\eqref{diff} can be solved using Fourier techniques\n\\begin{equation}\\label{eq:diffkernel}\n\tK_{\\delta}(\\xi,\\xi_0;T)=\\int \\frac{d^dp}{(2\\pi)^d}e^{ip(\\xi-\\xi_0)} e^{-T F(p^2)} \\, . \n\\end{equation}\nThe resulting return probability is\n\\begin{equation}\\label{eq:retprob}\n\tP_\\delta(T)=\\int \\frac{d^dp}{(2\\pi)^d} e^{-T F(p^2)} \\, . \n\\end{equation}\nFor $F(p^2) = p^2$, the return probability evaluates to\n\\begin{equation}\nP_\\delta(T)= (4 \\pi T)^{-d\/2} \\, . \n\\ee\nSubstituting this result into \\eqref{def:specdimT} shows that $D_s(T) = d$ is independent of $T$ and agrees with the topological dimension $d$. \n\nThe computation is readily generalized to the case where the function $F(p^2)$ has a fixed scaling behavior $F(p^2) = p^{2+\\delta}$.\\footnote{Generically, any function $F(p^2)$ for which the integral \\eqref{eq:diffkernel} is not the Fourier transform of a Gaussian will result in diffusion kernels $K_g(\\xi,\\xi_0;T)$ which are not positive semi-definite. The occurrence of negative probabilities can be cured by going to fractional calculus \\cite{cal}. 
Since this is not relevant in the present analysis, we do not dwell on this technical feature at this point.}\n Rewriting the integral in \\eqref{eq:retprob} in terms of the dimensionless variable $x = p^{2+\\delta} T$, one readily finds that \\cite{Reuter:2011ah}\n\\begin{equation}\\label{eq:dsT}\n\tD_s(T)=\\frac{2d}{2+\\delta} \\, , \n\\end{equation}\nwhich is again independent of the diffusion time $T$.\n\nBased on \\eqref{eq:dsT} it is then straightforward to construct a simple multi-scale model which interpolates between $D_s(T) = 2$ at microscopic and $D_s(T) = 4$ at macroscopic scales \\cite{Alkofer:2016utc}. Starting from the momentum-space propagator\n\\begin{equation}\\label{mod}\n\tG(p^2)=\\frac{1}{p^2}-\\frac{1}{p^2+m^2} \\, , \n\\end{equation}\none obtains $F(p^2) = \\frac{1}{m^2} \\, p^2 (p^2 + m^2)$. Thus $F(p^2)$ interpolates between $F(p^2) \\propto p^2$ for $p^2 \\ll m^2$ and $F(p^2) \\propto p^4$ for $p^2 \\gg m^2$. Evaluating \\eqref{eq:dsT} in these scaling regimes suggests that one recovers the desired behavior of the spectral dimension at microscopic and macroscopic scales. The integrals determining $P_\\delta(T)$ can be performed analytically and expressed in terms of error functions. The resulting spectral dimension is shown in Fig.\\ \\ref{crossover}. This confirms that the model indeed interpolates between $D_s = 4$ for $T\/m \\gg 1$ and $D_s = 2$ for $T\/m \\ll 1$.\n\\begin{figure}[h] \n\t\\includegraphics[scale=0.60]{crossover2.pdf}\n\t\\caption{\\label{crossover} Illustration of the scale-dependent spectral dimension $D_s(T)$ obtained from the two-scale model \\eqref{mod} with $m^2 =1$. $D_s(T)$ interpolates smoothly between $D_s = 4$ for $T\/m \\gg 1$ and $D_s = 2$ for $T\/m \\ll 1$.}\n\\end{figure}\nEq.\\ \\eqref{mod} then defines a generic toy model realizing the dynamical dimensional reduction encountered in the full-fledged quantum gravity analysis. 
The latter may also fix the cross-over scale $m$ based on microscopic considerations.\n\n\\section{Black hole thermodynamics including a non-trivial spectral dimension} \n\\label{sect.4}\nAt this stage we are in a position to combine our discussions of the thermodynamic properties of black holes and dynamical dimensional reduction. Throughout this section we will assume that the radiation emitted by the event horizon remains thermal also for very light black holes; see \\cite{thermality,thermality1,thermality2} for a detailed analysis supporting this assumption. The goal of this section is then to go beyond the semi-classical analysis utilizing the concepts of generalized dimensions and dynamical dimensional reduction.\n\\subsection{Single-scale analysis}\n\\label{sect.4.1}\nFrom the perspective of generalized dimensions the luminosity formula \\eqref{eq:Lgeneral} contains two distinguished elements. Firstly, the black body factor $I$ contains an integral over the theory's momentum space. This suggests that the dimension appearing in this term is the spectral dimension\n\\begin{equation}\\label{eq:Ispec}\nI_{d_s} = \\int \\frac{d^{d_s-1}p}{(2\\pi)^{d_s-1}} \\frac{E}{e^{E\/T_h} - 1} \\, . \n\\ee\nSecondly, the horizon area is related to position space properties. This suggests that this term is sensitive to the Hausdorff dimension of the (quantum) spacetime. Owing to the lack of a concrete model which would allow us to describe such an effect for the event horizon, we refrain from including such an effect. If a concrete model becomes available, this feature may be included rather straightforwardly in the present setting by considering an effective dimension built from a linear combination of the spectral and Hausdorff dimensions.\n\nThe generalization \\eqref{eq:Ispec} then allows us to determine the threshold on the spectral dimension $d_s$ required for creating a long-lived black hole remnant from dynamical dimensional reduction in momentum space. 
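For a massless field the momentum-space integral above reduces to a radial Bose integral, $\int_0^\infty dp\, p^{d_s-1}/(e^{p/T_h}-1) = \Gamma(d_s)\zeta(d_s)T_h^{d_s}$, so that $I_{d_s} \propto T_h^{d_s} \propto M^{-d_s}$, convergent only for $d_s > 1$. This scaling can be verified numerically (a hedged sketch; the radial reduction, cutoff, and function name are our own choices):

```python
import math

def bose_integral(T, ds, n=40001):
    """Radial part of I_{d_s}: Int_0^inf dp p^{d_s-1}/(exp(p/T) - 1), for d_s > 1."""
    pmax = 40.0*T                   # integrand suppressed by ~e^{-40} beyond this
    h = pmax/(n - 1)
    s = 0.0
    for i in range(n):
        p = i*h
        w = 1 if i in (0, n - 1) else (4 if i % 2 else 2)   # Simpson weights
        # integrand -> T * p^{d_s - 2} -> 0 as p -> 0 (for d_s > 2)
        s += 0.0 if p == 0.0 else w*p**(ds - 1)/math.expm1(p/T)
    return s*h/3.0

# I ∝ T^{d_s}: extract the exponent from the ratio at two temperatures
for ds in (2.5, 3.0):
    r = bose_integral(2.0, ds)/bose_integral(1.0, ds)
    exponent = math.log(r)/math.log(2.0)
    assert abs(exponent - ds) < 1e-3
```

Since $T_h \propto 1/M$, the extracted exponent confirms $I_{d_s} \propto M^{-d_s}$, which is the scaling used in the single-scale analysis below.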
For this purpose we consider \\eqref{eq:Ispec} for a massless scalar field in a scaling regime where $d_s$ is constant but not necessarily identical to the topological dimension $d$ of the spacetime. Convergence of the integral requires $d_s > 1$. Assuming convergence, $I_{d_s} = \\tilde{c} M^{-d_s}$ where $\\tilde{c}$ is a numerical constant. In combination with the classical horizon area \\eqref{eq:horizonarea} the luminosity obtained from the spectral dimension is\n\\begin{equation}\nL_{d_s} = 16 \\pi \\tilde{c} \\, M^{2-d_s} \\, . \n\\ee\nThe mass-loss formula \\eqref{eq:massloss} then shows that the generation of a remnant for which $t_{\\rm evap}$ is infinite requires $d_s-2 \\le -1$ or, equivalently, $d_s \\le 1$. This is, however, in conflict with requiring convergence of $I_{d_s}$. Thus merely modifying the spectral dimension for the fields constituting the Hawking radiation is not sufficient to create a long-lived remnant. In particular, the black hole evaporation does not stop if $d_s$ drops below three. \n\\subsection{Dynamical dimensional reduction}\n\\label{sect.4.2}\nNotably, the luminosity \\eqref{eq:Lgeneral} \\emph{is linear} in the two-point correlation function of the corresponding fields. The Euclidean multi-scale model following from \\eqref{mod} then suggests the following generalization to the black hole context. First, the Euclidean flat-space propagators are analytically continued to Lorentzian signature using a standard Wick rotation. Subsequently, the principle of covariance is used to promote the derivatives appearing in the position space representation to covariant derivatives. 
In this way one naturally arrives at the conclusion that the luminosity $L_{D_s}$ of a black hole, in a situation where the scalar field modeling the Hawking radiation exhibits dynamical dimensional reduction, is given by the sum of a massless and a massive contribution weighted by a relative minus sign:\n\\begin{equation}\\label{eq:Lspec}\nL_{D_s}(M;m) = L_{\\rm massless}(M) - L_{\\rm massive}(M;m) \\, . \n\\ee\n\\begin{figure}[h] \n\t\\includegraphics[scale=0.60]{lumin2}\n\t\\caption{\\label{luminds} Luminosity $L_{D_s}$ of a Schwarzschild black hole with mass $M$ arising from the two-scale model \\eqref{mod} with $m^2 = 1$ (blue line). The inclusion of the massive mode triggering the dynamical dimensional reduction renders the luminosity finite as $M \\rightarrow 0$. The massless case is added as the dashed line for reference.}\n\\end{figure} \nFrom the general analysis in Sect.\\ \\ref{sect.2} we then conclude that the contribution of the second term is exponentially suppressed for $M \\gg m$. As a result, $L_{D_s}(M;m)$ agrees with the semi-classical analysis in this regime. Conversely, for $M \\lesssim m$ both terms contribute with equal magnitude. As a result $L_{D_s}$ remains finite as $M \\rightarrow 0$. The crossover between these two regimes together with the removal of the thunderbolt singularity is illustrated in Fig.\\ \\ref{luminds}, where $L_{D_s}$ has been evaluated numerically for $m^2=1$.\n\nThe taming of the luminosity for light black holes arises from the interplay of the massless and massive degrees of freedom which provides a Pauli-Villars-type regularization for $L_{\\rm massless}$. We stress that, from a quantum gravity perspective, this feature does not result from introducing an additional ghost field, but is a direct result of the reduced number of degrees of freedom exhibited by the theory at scales $|\\vec{p}| \\gtrsim m$. 
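The crossover just described can be reproduced with a short numerical sketch (our own discretization; the cutoffs, grid sizes, and helper names are ad hoc assumptions, not the original computation). It evaluates $L_{D_s} = L_{\rm massless} - L_{\rm massive}$ with $m = 1$; the high-temperature expansion of the Bose integrals suggests the finite $M \to 0$ limit $L_{D_s} \to m^2/(96\pi)$:

```python
import math

def L_massless(M):
    return 1.0/(7680.0*math.pi*M*M)

def L_Ds(M, m=1.0, n=40001):
    """L_{D_s} = L_massless - L_massive: integrate the *difference* of the two
    Bose-weighted integrands so that the large individual pieces cancel pointwise."""
    Th = 1.0/(8.0*math.pi*M)         # Hawking temperature
    pmax = 40.0*Th + 12.0*m          # both integrands are negligible beyond this
    h = pmax/(n - 1)

    def bose(num, x):                # num/(e^x - 1), with overflow guard
        return 0.0 if x > 700.0 else num/math.expm1(x)

    s = 0.0
    for i in range(1, n):            # the integrand vanishes at p = 0
        p = i*h
        w = 1 if i == n - 1 else (4 if i % 2 else 2)   # Simpson weights
        E = math.sqrt(p*p + m*m)
        s += w*(bose(p**3, p/Th) - bose(p*p*E, E/Th))
    return (8.0*M*M/math.pi)*s*h/3.0

# heavy black hole (M >> m): the massive mode is frozen out, massless result recovered
assert abs(L_Ds(5.0)/L_massless(5.0) - 1.0) < 0.01
# light black hole: the luminosity stays finite instead of diverging
v = L_Ds(1.0e-3)
assert 0.0 < v < L_massless(1.0e-3)
assert abs(v*96.0*math.pi - 1.0) < 0.1    # v ~ m^2/(96 pi) ~ 3.3e-3
```

The finite limit corresponds to $c\,m^2/b$ with $c = 1/(7680\pi)$ and $b = 1/80$, consistent with the fit quoted in the remainder of this section.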
\n\nThe result shown in Fig.\\ \\ref{luminds} together with a detailed numerical analysis reveals that $L_{D_s}(M;m)$ can be very well approximated by a simple interpolating function\n\\begin{equation}\\label{eq:Lspecana}\nL_{D_s}(M;m) = \\frac{c}{b \\, m^{-2} + M^2}\n\\ee\nwhere\n\\begin{equation}\nc = \\frac{1}{7680 \\pi} \\, , \\qquad b \\approx 0.0125 \\, . \n\\ee\nThe value for $b$ has been obtained by fitting the ansatz \\eqref{eq:Lspecana} to $L_{D_s}(M;m)$, obtained via numerical integration, at values $M \\ll 1$. \n\nThe analytic formula \\eqref{eq:Lspecana} again allows us to compute the Hawking evaporation time of the black hole analytically. Integrating eq.\\ \\eqref{eq:massloss} yields\n\\begin{equation}\\label{eq:tevapspec}\nt_{\\rm evap} = 2560 \\pi M_0^3 + \\frac{b}{c} \\frac{M_0}{m^2} \\, . \n\\ee\nThus the dynamical dimensional reduction leads to an increase of the black hole lifetime. Since the scale $m$ where the dynamical dimensional reduction sets in is expected to be the Planck scale, this is a rather tiny effect though. Evaluating the term linear in $M_0$ for $m = m_{\\rm Planck} = 2.18 \\times 10^{-5}$ g and $M_0 = 10^9$ g yields that the change in the lifetime of the black hole is given by $\\Delta t_{\\rm evap} = 7.46 \\times 10^{-28}$ s. Thus the structure of \\eqref{eq:tevapspec} indicates that a luminosity which remains constant as $M \\rightarrow 0$ does not lead to a long-lived remnant with a lifetime comparable to cosmic time-scales.\n\n\\section{Conclusions and Discussion} \n\\label{sect.5}\nMotivated by the observation that light black holes with a mass given by the Planck mass $M_{\\rm Pl} \\approx 10^{-5}$g may constitute valid dark matter candidates \\cite{mc,rov,dm,mur}, we used black hole thermodynamics to investigate the luminosity and lifetime of spherically symmetric black hole solutions. 
Our work stepped out of the perturbative framework of quantum field theory in curved spacetime by including the effect of a dynamical dimensional reduction of the theory's momentum space. As this is a feature shared by many approaches to quantum gravity \\cite{Carlip:2017eud,Carlip:2019onx}, it is intriguing to investigate whether this mechanism leads to the formation of long-lived black hole remnants. While we showed that the dynamical dimensional reduction mechanism generically removes the divergences in the black hole luminosity encountered in the last stage of the evaporation process, the results do not provide any evidence supporting the formation of long-lived remnants. \n\nAt this point the following remarks on the scope and limitations of our analysis are in order. Our work implemented the mechanism of dynamical dimensional reduction at the level of the degrees of freedom constituting the Hawking radiation. In doing so \\emph{we did not modify the topology of the background black hole solution}, which is given by the Schwarzschild solution. This entails that our spacetimes exhibit just one horizon, the event horizon, and no inner horizon. This feature is at variance with many proposals for quantum gravity inspired black hole metrics, such as the Hayward metric \\cite{Hayward}, the renormalization group improved black hole solutions constructed by Bonanno and Reuter \\cite{martin}, or the Planck stars inspired by Loop Quantum Gravity \\cite{loop-rem}. A direct consequence of our topology is that the black holes in our work do not have a critical mass where the two horizons coincide and the surface gravity (and hence the Hawking temperature) is zero. By disentangling the effects of dynamical dimensional reduction and the topology of spacetime, our analysis clearly reveals that it is the latter ingredient which is decisive for forming a light black hole remnant during the final stages of black hole evaporation. 
\n\nThis observation is also the key for reconciling our results with the ones reported by Carlip and Grumiller \\cite{carlip}. In this case the effect of dynamical dimensional reduction was essentially incorporated through generalizing the scaling law for the event horizon,\n\\begin{equation}\\label{eq:eh}\nA_h = 4 \\pi r_h^{d_h - 2} \\, , \n\\ee\nand subsequently identifying $d_h = d_s$. Within the single-scale analysis of Sect.\\ \\ref{sect.4.1} this modification with $d_h = 2$ or $d_h = 3$ would not lead to a stop of the Hawking evaporation process. A careful analysis of the dilaton model used in \\cite{carlip} shows, however, that the underlying black hole solutions must come with a second horizon: in this way one can approach a critical configuration if the generalized dimension of the dilaton model satisfies $D(X) = 3$. This picture then corroborates our conclusion that it is actually the topology of the (quantum) black hole and not the effect of a dynamical dimensional reduction that is crucial for forming light long-lived remnants.\n\nNaturally, it would be interesting to base our conclusion on a first-principle derivation from a full-fledged theory of quantum gravity. Such a derivation will require detailed knowledge about the momentum dependence of the theory's two-point functions. Notably, the form factor program for Asymptotic Safety \\cite{Knorr:2019atm} has recently made substantial progress along these lines \\cite{Knorr:2018kog,Bosma:2019aiu,Draper:2020bop,Platania:2020knd,Bonanno:2021squ,Knorr:2021niv}. Clearly, it would then be interesting to evaluate the black hole luminosity as an observable sensitive to a non-trivial momentum dependence in the propagators of the fields. We hope to come back to this point in the future.\n\n\n\\begin{acknowledgments}\n\tWe thank C.\\ Laporte for interesting discussions. 
The work by F.S.\\ is supported by the NWA-grant ``The Dutch Black Hole Consortium''.\n\\end{acknowledgments}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe semiconductor quantum dots (QDs) resulting from the quantum\nconfinement of heterostructures exhibit atom-like discrete\nelectron energy levels. High-efficiency single-QD devices operate\nwith low electrical and optical power outputs. These\nsingle-QD devices include single electron\ntransistors[\\onlinecite{Guo}-\\onlinecite{Kuba}], single photon\nsources[\\onlinecite{Michler}-\\onlinecite{Chang}], single photon\ndetectors[\\onlinecite{Gustavsson}] and single electron heat\nengines[\\onlinecite{Josefsson}]. Some applications of QD devices\nrequire both high efficiency and significant output power.\nTherefore, one needs QD solids that can retain the size tunable\nproperties of the QDs while exhibiting the band transport\ncharacteristic of bulk semiconductors.[\\onlinecite{Kagan}]\nAlthough much effort has been devoted to producing such QD solids,\nstudies of the thermoelectric properties of such 2D QD arrays have\nbeen lacking.[\\onlinecite{Harman},\\onlinecite{Talgorn}]\n\nDesigning a thermoelectric material with a high figure of merit\n($ZT$) and optimized power output is an ongoing\npursuit.[\\onlinecite{Kuo1}-\\onlinecite{Kuo2}] The dimensionless\nfigure of merit $ZT=S^2G_eT\/\\kappa$ depends on the Seebeck\ncoefficient ($S$), electrical conductance ($G_e$) and thermal\nconductance ($\\kappa$) of the material. Although 1D QD arrays have\nvery high $ZT$ values, there exist many limitations on the\nimplementation of thermoelectric devices.[\\onlinecite{Kagan}] 2D\nand 3D QD arrays are required for realistic applications. 
The\n$\\kappa$ of a 2D QD array is smaller than that of bulk\nmaterial.[\\onlinecite{ChenG}] This low-dimensional system has the\npotential to realize high $ZT$ values.[\\onlinecite{Chen}]\nTherefore, it is desirable to investigate the power factor\n($PF=S^2G_e$) of 2D QD arrays, which directly affects the\nelectrical power output. The enhancement of $G_e$ calls for a\nlarge number of electronic states (band-like). However, a large $S$\nvalue occurs for dilute electronic states (atom-like). Therefore,\nenhancing one of these physical quantities will unavoidably\nsuppress the other. This study theoretically investigates the\nthermoelectric properties of a finite 2D QD array coupled to\nelectrodes, as shown in Fig. 1. The electrons of the QD array are\ninjected from the electrodes.[\\onlinecite{Mahan}] We demonstrate\nthat a band-like $G_e$ and an atom-like $S$\ncan occur simultaneously when the miniband center of a 2D QD\narray remains a certain distance from the Fermi level of the\nelectrodes. These results may help improve the thermoelectric\nperformance of 2D materials such as $SnSe$ and\n$MoS_2$.[\\onlinecite{Zhao}-\\onlinecite{Fan}]\n\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[trim=2.5cm 0cm 2.5cm 0cm,clip,angle=-90,scale=0.3]{Fig1}\n\\caption{(a) Schematic diagram of a two-dimensional (2D) quantum\ndot (QD) array coupled to electrodes. $\\Gamma_{L}$ ($\\Gamma_R$)\ndenotes the tunneling rate of the electrons between the left\n(right) electrode and the leftmost (rightmost) QDs. (b) Energy diagram\nof a 2D QD array coupled to electrodes with different equilibrium\ntemperatures ($T_L$ and $T_R$). }\n\\end{figure}\n\n\n\n\\section{Formalism}\n\nTo model the thermoelectric properties of a 2D QD array connected\nto the electrodes, the Hamiltonian of the system shown in Fig. 
1\nis given by $H=H_0+H_{QD}$,[\\onlinecite{Haug}] where\n\\begin{small}\n\\begin{eqnarray}\nH_0& = &\\sum_{k,\\sigma} \\epsilon_k\na^{\\dagger}_{k,\\sigma}a_{k,\\sigma}+ \\sum_{k,\\sigma} \\epsilon_k\nb^{\\dagger}_{k,\\sigma}b_{k,\\sigma}\\\\ \\nonumber\n&+&\\sum_{\\ell}^{N_y}\\sum_{k,\\sigma}\nV^L_{k,\\ell,j}d^{\\dagger}_{\\ell,j,\\sigma}a_{k,\\sigma}\n+\\sum_{\\ell}^{N_y}\\sum_{k,\\sigma}V^R_{k,\\ell,j}d^{\\dagger}_{\\ell,j,\\sigma}b_{k,\\sigma}+H.c.\n\\end{eqnarray}\n\\end{small}\nThe first two terms of Eq.~(1) describe the free electron gas in\nthe left and right electrodes. $a^{\\dagger}_{k,\\sigma}$\n($b^{\\dagger}_{k,\\sigma}$) creates an electron of momentum $k$\nand spin $\\sigma$ with energy $\\epsilon_k$ in the left (right)\nelectrode. $V^L_{k,\\ell,j}$ ($V^R_{k,\\ell,j}$) describes the\ncoupling between the left (right) lead with its adjacent QD in the\n$\\ell$th row, which counts from 1 to $N_y$.\n\\begin{small}\n\\begin{eqnarray}\nH_{QD}&= &\\sum_{\\ell,j,\\sigma} E_{\\ell,j}\nd^{\\dagger}_{\\ell,j,\\sigma}d_{\\ell,j,\\sigma}\\\\ \\nonumber&+&\n\\sum_{\\sigma}\\sum_{\\ell 1,\\ell 2}^{N_y}\\sum_{j1,j2}^{N_x} t_{\\ell\n1,\\ell 2, j1, j2} d^{\\dagger}_{\\ell 1,j1,\\sigma} d_{\\ell\n2,j2,\\sigma}+H.c,\n\\end{eqnarray}\n\\end{small}\n\\begin{equation}\nt_{\\ell 1,\\ell 2,j1, j2}= \\{ \\begin{array}{ll} -t_{y} &\nif~j1=j2, |\\ell 1-\\ell 2|=1\\\\\n-t_{x} & if~\\ell 1=\\ell 2, |j1-j2|=1\n\\end{array},\n\\end{equation}\nwhere { $E_{\\ell,j}$} is the energy level of QD in the\n${\\ell}$-th row and $j$-th column. The spin-independent $t_{\\ell\n1, \\ell 2, j1, j2}$ describes the electron hopping strength, which\nis limited to the nearest neighboring sites. $d^{\\dagger}_{\\ell\n1,j1,\\sigma} (d_{\\ell 2,j2,\\sigma})$ creates (destroys) one\nelectron in the QD at the $\\ell$th row and $j$th column. If the\nwave functions of the electrons in each QD are localized, the\nelectron Coulomb interactions are strong. 
Their effects on\nelectron transport are significant in the scenario of weak hopping\nstrengths.[\\onlinecite{Kuo3}] On the other hand, the wave\nfunctions of the electrons are delocalized in the scenario of\nstrong hopping strengths and form minibands; hence their weak\nelectron Coulomb interactions can be ignored.\n\nTo study the transport properties of a 2D QD array junction\nconnected to electrodes, it is convenient to use the\nKeldysh-Green's function\ntechnique[\\onlinecite{Haug},\\onlinecite{Meir}]. The electron and heat\ncurrents leaving the electrodes can be expressed as\n\\begin{eqnarray}\nJ &=&\\frac{2e}{h}\\int {d\\varepsilon}~\nT_{LR}(\\varepsilon)[f_L(\\varepsilon)-f_R(\\varepsilon)],\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n& &Q_{e,L(R)}\\\\ &=&\\frac{\\pm 2}{h}\\int {d\\varepsilon}~\nT_{LR}(\\varepsilon)(\\varepsilon-\\mu_{L(R)})[f_L(\\varepsilon)-f_R(\\varepsilon)]\\nonumber\n\\end{eqnarray}\nwhere\n$f_{\\alpha}(\\varepsilon)=1\/\\{\\exp[(\\varepsilon-\\mu_{\\alpha})\/k_BT_{\\alpha}]+1\\}$\ndenotes the Fermi distribution function for the $\\alpha$-th\nelectrode, where $\\mu_\\alpha$ and $T_{\\alpha}$ are the chemical\npotential and the temperature of the $\\alpha$ electrode. $e$, $h$,\nand $k_B$ denote the electron charge, Planck's constant, and\nthe Boltzmann constant, in that order. $T_{LR}(\\varepsilon)$\ndenotes the transmission coefficient of a 2D QD array connected to\nelectrodes, which can be solved by the formula $\nT_{LR}(\\varepsilon)=4Tr[\\hat{\\Gamma}_{L}\\hat{G}^{r}_{D,A}(\\varepsilon)\\hat{\\Gamma}_{R}\\hat{G}^{a}_{D,A}(\\varepsilon)]$,\nwhere the matrices of tunneling rates ($\\hat{\\Gamma}_L$ and\n$\\hat{\\Gamma}_R$) and the Green's functions\n($\\hat{G}^{r}_{D,A}(\\varepsilon)$ and\n$\\hat{G}^{a}_{D,A}(\\varepsilon)$) can be constructed\nnumerically.[\\onlinecite{Kuo4}]\n\nThe electrical conductance ($G_e$), Seebeck coefficient ($S$) and\nelectron thermal conductance ($\\kappa_e$) can be evaluated by\nusing Eqs. 
(4) and (5) with a small applied bias $\\Delta\nV=(\\mu_L-\\mu_R)\/e$ and cross-junction temperature difference\n$\\Delta T=T_L-T_R$. We obtain the thermoelectric coefficients\n$G_e=e^2{\\cal L}_{0}$, $S=-{\\cal L}_{1}\/(eT{\\cal L}_{0})$ and\n$\\kappa_e=\\frac{1}{T}({\\cal L}_2-{\\cal L}^2_1\/{\\cal L}_0)$. ${\\cal\nL}_n$ is given by\n\\begin{equation}\n{\\cal L}_n=\\frac{2}{h}\\int d\\varepsilon~\nT_{LR}(\\varepsilon)(\\varepsilon-E_F)^n\\frac{\\partial\nf(\\varepsilon)}{\\partial E_F},\n\\end{equation}\nwhere $f(\\varepsilon)=1\/(e^{(\\varepsilon-E_F)\/k_BT}+1)$ is the\nFermi distribution function of the electrodes at equilibrium\ntemperature $T$.\n\n\\section{ Results and discussion}\nAccording to Eq. (6), the transmission coefficient plays a significant\nrole in electron transport between the electrodes. To illustrate\nthe electronic states of a finite 2D QD array, we have calculated\nand shown in Fig. 2 the transmission coefficient $T_{LR}(\\varepsilon)$\nas a function of $\\varepsilon$ for different tunneling rates\n($\\Gamma_{L(R),\\ell,j}(\\varepsilon)=2\\pi\\sum_{k}\n|V^{L(R)}_{k,\\ell,j}|^2\n\\delta(\\varepsilon-\\varepsilon_k)=\\Gamma_t$). A square lattice\nwith homogeneous electron hopping strengths $t_x=t_y=t_c=6\\Gamma_0$\nand site-independent QD energy levels $E_{\\ell,j}=E_0=E_F$ has been\nconsidered in the calculation of $T_{LR}(\\varepsilon)$. All\nphysical parameters are in units of $\\Gamma_0$. In Fig. 2(a),\n$T_{LR}(\\varepsilon)$ reveals the tunneling probability of the\nelectrons of the electrodes through the electronic states of the 2D QD\narray, whose energies are given by\n$\\varepsilon=E_0-2t_c(\\cos(\\frac{n_x\\pi}{N_x+1})+\n\\cos(\\frac{n_y\\pi}{N_y+1}))$, where $n_x=1,2,..N_x$ and\n$n_y=1,2,..N_y$. Because the QD array is connected to the\nelectrodes, these electronic states acquire inhomogeneous broadening.\nThey are also restricted to the range between $-4t_c$ and\n$4t_c$. 
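The transmission formula above can be put into code directly. The pure-Python sketch below is our own minimal implementation, not the code of Ref. [Kuo4]: it assumes wide-band (energy-independent) self-energies $\Sigma_{L/R} = -i\Gamma_t$ on the leftmost/rightmost columns, builds the nearest-neighbor Hamiltonian of Eqs. (2)-(3), and evaluates $T_{LR} = 4\,{\rm Tr}[\hat\Gamma_L \hat G^r \hat\Gamma_R \hat G^a]$:

```python
def inv(A):
    """Gauss-Jordan inverse of a complex matrix (partial pivoting)."""
    n = len(A)
    M = [row[:] + [1.0 + 0j if i == j else 0j for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        piv = M[c][c]
        M[c] = [x/piv for x in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c]
                M[r] = [x - f*y for x, y in zip(M[r], M[c])]
    return [row[n:] for row in M]

def transmission(eps, Nx, Ny, E0=0.0, tx=6.0, ty=6.0, Gt=6.0):
    """T_LR(eps) = 4 Tr[Gamma_L G^r Gamma_R G^a], energies in units of Gamma_0.
    Site (l, j) -> index l*Nx + j; wide-band self-energy -i*Gt on each contact column
    (a single column touching both leads receives both contributions)."""
    n = Nx*Ny
    A = [[0j]*n for _ in range(n)]           # A = eps - H - Sigma_L - Sigma_R
    for l in range(Ny):
        for j in range(Nx):
            s = l*Nx + j
            A[s][s] = eps - E0
            if j + 1 < Nx:                   # hopping -t_x along a row
                A[s][s + 1] = A[s + 1][s] = tx
            if l + 1 < Ny:                   # hopping -t_y along a column
                A[s][s + Nx] = A[s + Nx][s] = ty
    left = [l*Nx for l in range(Ny)]
    right = [l*Nx + Nx - 1 for l in range(Ny)]
    for s in left + right:
        A[s][s] += 1j*Gt
    G = inv(A)                               # retarded Green's function
    return sum(4*Gt*Gt*abs(G[a][b])**2 for a in left for b in right)

# single dot: T = 4 Gt^2/((eps-E0)^2 + (2 Gt)^2), unity at resonance
assert abs(transmission(0.0, 1, 1, tx=0.0, ty=0.0, Gt=1.0) - 1.0) < 1e-10
# two-site chain at Gt = t_x: perfect resonant transmission at eps = E0
assert abs(transmission(0.0, 2, 1, tx=1.0, ty=0.0, Gt=1.0) - 1.0) < 1e-10
```

The single-dot check reproduces the Lorentzian form quoted later in this section, and the two-site check illustrates the matching condition $\Gamma_t = t_c$ discussed below Fig. 2.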
We can tune the distribution of electronic states by\nchanging $N$, $t_c$ and $\\Gamma_t$.\n\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[angle=-90,scale=0.3]{Fig2}\n\\caption{Transmission coefficient $T_{LR}(\\varepsilon)$ as a\nfunction of $\\varepsilon$ for different $N$ values at\n$t_x=t_y=t_c=6\\Gamma_0$ and $E_0=E_F$. Diagrams (a) and (b)\nconsider tunneling rates $\\Gamma_L=\\Gamma_R=\\Gamma_t=1\\Gamma_0$\nand $\\Gamma_t=6\\Gamma_0$, respectively. To prevent the curves from\noverlapping each other, we shifted the curve of N = 8.}\n\\end{figure}\n\nFrom Eqs. (4) and (5), the maximum electron current and heat\ncurrent occur for the $T_{LR}(\\varepsilon)$ with the maximum area.\nThe authors of Ref. [\\onlinecite{Kuo4}] proved two results: the\nmaximum area of $T_{LR}(\\varepsilon)$ is reached under the\ncondition $\\Gamma_t=t_c$, and the maximum area increases with\nincreasing $N$, as seen in Fig. 2(b). Note that the 2D\ntight-binding electronic states show the Van Hove singularity in\nthe density of states (DOS) as $N \\rightarrow \\infty$ (the DOS\ndiverges at $E_0$). At zero temperature, the electrical\nconductance is given by the transmission coefficient,\n$G_e=\\frac{2e^2}{h}T_{LR}(E_F)$. We now clarify how the electronic\nstates influence the thermoelectric coefficients of a finite 2D QD\narray.\n\nFig. 3 shows the calculated $G_e$, $\\kappa_e$ and Lorenz number\n($L_0=\\kappa_e\/(TG_e)$) as functions of the QD energy level for\nvarious values of $\\Gamma_t$ at $k_BT=1\\Gamma_0$, $t_c=6\\Gamma_0$\nand $N=8$. As a result of the temperature effect ($\\frac{\\partial\nf(\\varepsilon)}{\\partial E_F}$), the electronic states shown in\nFig. 2(a) cannot be resolved in Fig. 3(a). It is not easy to\njudge from $G_e$ at finite temperature whether a finite 2D QD\narray is in the band-like or molecule-like situation, especially at high\ntemperatures. The curves of $\\kappa_e$ in Fig. 3(b) are similar to\nthose of $G_e$. 
According to the Wiedemann-Franz law, $L_0/(k^2_B/e^2)=\frac{\pi^2}{3}$ is a temperature-independent quantity. In Fig. 3(c), the $L_0$ curve corresponding to $\Gamma_t=6\Gamma_0$ is approximately $\pi^2/3$. For comparison, we also add the calculated $G_e$, $\kappa_e$ and $L_0$ for a 1D QD array with $N_x=100$ and $t_c=\Gamma_t=12\Gamma_0$ (the bandwidth of this 1D miniband is $48\Gamma_0$). As seen in Fig. 3(c), the 1D QD array yields a Lorenz number $L_0=\frac{k^2_B}{e^2}\frac{\pi^2}{3}$ for $-10\Gamma_0 \le \Delta \le 10\Gamma_0$. This can be regarded as a manifestation of band-like transport.

\begin{figure}[h]
\centering
\includegraphics[angle=-90,scale=0.3]{Fig3}
\caption{(a) Electrical conductance $G_e$, (b) electron heat conductance $\kappa_e$ and (c) Lorenz number ($L_0=\kappa_e/(TG_e)$) as functions of $\Delta=E_0-E_F$ for various $\Gamma_t$ values at $k_BT=1\Gamma_0$, $t_c=6\Gamma_0$ and $N=8$. The red curves correspond to a 1D QD array with $N_x=100$, $t_c=\Gamma_t=12\Gamma_0$ and $k_BT=1\Gamma_0$.}
\end{figure}

Furthermore, in Fig. 4 we have calculated $G_e$, $\kappa_e$ and $L_0$ for a 2D QD array with $N=8$ as functions of temperature for different $\Gamma_t$ values at $E_0=E_F$ and $t_c=6\Gamma_0$. The red curves are the results of the 1D QD array corresponding to those of Fig. 3. One-dimensional QD arrays have a temperature-independent $G_e$ and a linearly temperature-dependent $\kappa_e$. As a consequence, $L_0=\kappa_e/(TG_e)$ exhibits temperature-independent behavior. According to the results of Figs. 3 and 4, the thermoelectric properties of 2D QD arrays with $t_c=6\Gamma_0$, $\Gamma_t=6\Gamma_0$ and $N=8$ are very similar to those of 1D QD arrays with minibands.
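The Wiedemann-Franz value quoted above can be recovered directly from the transport integrals of Eq. (6). The following sketch (ours; units $h=k_B=e=1$, with a wide flat $T_{LR}$ standing in for a broad miniband) evaluates ${\cal L}_n$ by numerical quadrature and checks that $L_0=\kappa_e/(TG_e)$ approaches $\pi^2/3$:

```python
import numpy as np

def L_n(n, Tfun, T=1.0, EF=0.0, W=200.0, npts=400001):
    """Transport integral of Eq. (6) in units h = k_B = e = 1:
    L_n = 2 * int T_LR(e) (e-EF)^n (df/dE_F) de, with
    df/dE_F = -df/de = 1/(4*T*cosh^2((e-EF)/(2T)))."""
    e, de = np.linspace(EF - W, EF + W, npts, retstep=True)
    x = (e - EF) / T
    kernel = 1.0 / (4.0 * T * np.cosh(x / 2.0) ** 2)
    return 2.0 * np.sum(Tfun(e) * (e - EF) ** n * kernel) * de

flat = lambda e: np.ones_like(e)      # wide, flat transmission (broad miniband)
Ge = L_n(0, flat)                     # G_e = e^2 L_0
kappa_e = L_n(2, flat) - L_n(1, flat) ** 2 / L_n(0, flat)   # (L_2 - L_1^2/L_0)/T, T = 1
L0 = kappa_e / Ge                     # Lorenz number, expected near pi^2/3
```

For a structured or narrow $T_{LR}$, the same routine reproduces departures from the Wiedemann-Franz value.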
We deduce that finite 2D QD arrays have band-like characteristics when $t_c=\Gamma_t=6\Gamma_0$ and $N=8$.


\begin{figure}[h]
\centering
\includegraphics[angle=-90,scale=0.3]{Fig4}
\caption{(a) Electrical conductance, (b) electron heat conductance and (c) Lorenz number as functions of temperature for various $\Gamma_t$ values at $E_0=E_F$, $t_c=6\Gamma_0$ and $N=8$. The red curves correspond to those of Fig. 3 with $E_0=E_F$.}
\end{figure}


Because many thermoelectric devices operate at high temperatures, it is important to examine the power factor of 2D QD arrays in this regime. In Figs. 2-4 we have focused on electron transport in the resonant tunneling procedure (RTP), in which the Seebeck coefficient is very small. To obtain a large $PF$ value at high temperature, we considered electron transport in the thermionic-assisted tunneling procedure (TATP), where the band center ($E_0$) is far away from the Fermi level $E_F$ of the electrodes. In Fig. 5, we have calculated $G_e$, $S$, $PF$ and $L_0$ as functions of temperature for various $t_c$ values at $E_0-E_F=30\Gamma_0$. Because the maximum $T_{LR}(\varepsilon)$ area occurs at $\Gamma_t=t_c$, we adopted this condition in all subsequent calculations. As seen in Fig. 5(a), $G_e$ is vanishingly small at low temperature because the electronic states of the 2D QD array are kept a certain distance from $E_F$. The enhancement of $G_e$ with increasing temperature is a typical characteristic arising from the TATP. To understand the temperature behavior of $G_e$ at $\Gamma_t=t_c=1\Gamma_0$, we use the expression $G_{e,atom}=\frac{e^2}{h} \frac{\pi \Gamma_t}{2k_BT \cosh^2((E_0-E_F)/(2k_BT))}$, obtained when the transmission coefficient is approximated as $T_{LR}(\varepsilon)=4\Gamma^2_t/((\varepsilon-E_0)^2+(2\Gamma_t)^2)$ in Eq. (6).
In addition, $S_{atom}=-\Delta/T=-(E_0-E_F)/T$, which explains the behavior of $S$ at $t_c=1\Gamma_0$ and $k_BT \ge 2\Gamma_0$ in Fig. 5(b). Although $G_e$ is highly enhanced with increasing $t_c$, $S$ is not very sensitive to $t_c$ for $k_BT \ge 10 \Gamma_0$. This explains why the trend of the maximum $PF$ with $t_c$ shown in Fig. 5(c) is determined by $G_e$. In Fig. 5(d), the three $L_0$ curves violate the Wiedemann-Franz law. Note that $\Gamma_t=t_c=6\Gamma_0$ provides the band-like characteristic (see Fig. 3(c)).

\begin{figure}[h]
\centering
\includegraphics[angle=-90,scale=0.3]{Fig5}
\caption{(a) Electrical conductance, (b) Seebeck coefficient, (c) power factor ($PF=S^2G_e$) and (d) Lorenz number as functions of $T$ for various $t_c$ values at $E_0-E_F=30\Gamma_0$ and $N=8$. We have adopted $\Gamma_t=t_c$.}
\end{figure}

Fig. 6 shows the calculated $G_e$, $S$ and $PF$ as functions of $E_0$ for various $t_c$ values at $k_BT=25\Gamma_0$ to reveal the effect of the band center. As seen in Fig. 6(a), $G_e$ has a maximum value when the band center ($E_0$) is located at $E_F$. The Seebeck coefficients in Fig. 6(b) are zero at $E_0=E_F$. This is attributed to the symmetrical distribution of the electrons and holes on the electronic states of the 2D QD array. Here, the holes are defined as the empty states below $E_F$. For $k_BT=25\Gamma_0$, the Seebeck coefficients are well described by $S_{atom}=-\Delta/T$. We find that the maximum values of $PF$ occur near $\Delta=60\Gamma_0$, as indicated in Fig. 6(c). When approaching the atomic limit ($t_c \rightarrow 0$), one can prove that the optimum of $PF$ is given by $\Delta/k_BT=2.4$. The results of Fig. 6(c) imply that 2D QD arrays with minibands ($t_c=6\Gamma_0$) preserve the atomic thermoelectric properties when the band center is far away from the $E_F$ of the electrodes. In Fig. 2(b), $T_{LR}(\varepsilon)$ depends on $N$. Therefore, we add in Fig.
6 the curves with triangle marks for $N=7$ and $t_c=6\Gamma_0$. From the curves of $N=7$ and $N=8$, we see that the enhancement of $G_e$ resulting from the increase in electronic states does not suppress $S$. It is worth noting that such behavior does not exist for a single 1D QD array. We reinvestigate $PF$ as a function of $t_c$ for $N=7,8$ at $\Delta=60\Gamma_0$ and $k_BT=25\Gamma_0$ in Fig. 6(d). $PF$ is a linear function of $t_c$ for $t_c\le 6\Gamma_0$. Meanwhile, the maximum $PF$ is given by $t_c=\Delta/4$. The red curves represent the case where $t_y=0$ to clarify the geometry effects. When $t_c \le 6\Gamma_0$, the geometry effects can be ignored.


Because $T_{LR}(\varepsilon)$ lacks an analytical form, it is not easy to illustrate the complex behavior of $PF$ shown in Fig. 6(d). If we assume that the minibands have homogeneous electronic states and consider the square-form $T_{LR}(\varepsilon)$ given by
\begin{equation}
T_{LR}(\varepsilon)=\left\{ \begin{array}{ll}
N_y& \mbox {if $-2t_c\le \varepsilon-E_0 \le 2t_c,$}\\
0 &\mbox{otherwise,}\end{array} \right.
\end{equation}
the analytical forms of $G_e$ and $S$ can be derived as
\begin{equation}
G_e=\frac{2e^2N_y}{h}~(\tanh(y_1)-\tanh(y_2))
\end{equation}
and
\begin{equation}
S=\frac{2ek_BN_y}{h}~\frac{(S_1(y_1)-S_2(y_2))}{G_e},
\end{equation}
where $S_i(y_i)=y_i\tanh(y_i)-\ln(\cosh(y_i))$, $y_1=\frac{\Delta+2t_c}{2k_BT}$ and $y_2=\frac{\Delta-2t_c}{2k_BT}$. In Fig. 6(d), the blue curves are calculated by using Eqs. (8) and (9). Because Eq. (7) treats the system as $N_y$ 1D QD arrays with homogeneous electronic states, it is expected that the $PF$ given by Eqs. (8) and (9) is overestimated. However, Eqs. (8) and (9) provide a clear picture that the enhancement of $PF$ follows the enhancement of $G_e$. Meanwhile, $S\approx S_{atom}$ as long as $\frac{t_c}{k_BT} < 0.25$ and $\frac{\Delta}{k_BT} \ge 2.4$.
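Eqs. (8) and (9) are straightforward to evaluate numerically. In the sketch below (ours), the prefactors $2e^2N_y/h$ and $2ek_BN_y/h$ are set to 1, and the check confirms the near-linear growth of $G_e$ in $t_c$ for $t_c \ll k_BT$, consistent with the linear regime of $PF$ seen in Fig. 6(d):

```python
import numpy as np

def Ge_S_square_band(Delta, tc, kBT):
    """Analytic G_e and S of Eqs. (8)-(9) for the square-form T_LR of Eq. (7).
    The prefactors 2e^2 N_y/h and 2 e k_B N_y/h are dropped (natural units)."""
    y1 = (Delta + 2.0 * tc) / (2.0 * kBT)
    y2 = (Delta - 2.0 * tc) / (2.0 * kBT)
    S_of = lambda y: y * np.tanh(y) - np.log(np.cosh(y))  # S_i(y_i) of Eq. (9)
    Ge = np.tanh(y1) - np.tanh(y2)
    S = (S_of(y1) - S_of(y2)) / Ge
    return Ge, S

# Parameters of Fig. 6(d): Delta = 60, k_B T = 25 (in units of Gamma_0).
kBT, Delta = 25.0, 60.0
g1, s1 = Ge_S_square_band(Delta, 1.0, kBT)
g2, s2 = Ge_S_square_band(Delta, 2.0, kBT)  # doubling tc ~ doubles G_e here
```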
We deduce that $G_e$ in a band-like transport situation and $S$ in an atomic-like situation can coexist for finite 2D QD arrays with $t_c=6\Gamma_0$ and $\Delta=60\Gamma_0$ at $k_BT=25\Gamma_0$. If we set $\Gamma_0=1$~meV, then our analysis in Fig. 6 becomes a very useful guideline for thermoelectric devices operated at room temperature.

\begin{figure}[h]
\centering
\includegraphics[angle=-90,scale=0.3]{Fig6}
\caption{(a) Electrical conductance, (b) Seebeck coefficient and (c) power factor as functions of $E_0$ for different $t_c$ values at $N=8$ and $k_BT=25\Gamma_0$. The curves with black triangle marks correspond to the case of $N=7$ and $t_c=6\Gamma_0$. (d) $PF$ as a function of $t_c$ for $\Delta=60\Gamma_0$ and $k_BT=25\Gamma_0$. The red curves correspond to $t_y=0$. The blue curves are calculated using Eqs. (8) and (9).}
\end{figure}




\section{Conclusion}
We have theoretically investigated the thermoelectric properties of 2D QD arrays. In the RTP, the Lorenz number, with a value near $\pi^2/3$ and temperature-independent behavior, demonstrates that 2D QD arrays with $t_c=\Gamma_t=6\Gamma_0$ and $N=8$ indeed form minibands. When this miniband center is far away from the Fermi level of the electrodes, TATP dominates the electron transport between the electrodes and $L_0$ violates the Wiedemann-Franz law. In the TATP, $G_e$ is enhanced as the number of electronic states increases, whereas the $S$ values remain in an atomic-like situation. This remarkable property would lead to high-efficiency thermoelectric devices made of QDs with large electrical power output. This interesting phenomenon exists not only for 2D QD arrays with square lattices but also for those with triangular lattices, which will be reported elsewhere.


{\bf Acknowledgments}\\
This work was supported under Contract No.
MOST 107-2112-M-008-023MY2.
\mbox{}\\
E-mail address: mtkuo@ee.ncu.edu.tw\\

\setcounter{section}{0}

\renewcommand{\theequation}{\mbox{A.\arabic{equation}}}
\setcounter{equation}{0}

\mbox{}\\

{\bf Data Availability Statements}\\

The data that support the findings of this study are available within the article.


\section{Introduction}
In recent years, high-dimensional data have flooded almost all fields of science and technology. These types of data are being generated in massive quantities and share a common high-throughput mechanism. Generally speaking, a very large number of feature measurements, often in the millions, is gathered for a single subject. The goal of the analysis of these large datasets is to make useful inferences for each subject or a future subject. The tools for solving the problems in this area play particularly important roles, especially in this ``data science'' era.

One of the fundamental problems in large-scale inference \citep{large-infer} is classification. Consider a two-class classification problem, where we have $n$ labeled training samples $(X_i,Y_i)$, $i=1, \dots, n$. Here, the $X_i$'s are $p$-dimensional feature vectors and $Y_i\in\{0,1\}$ are the corresponding class labels. The goal is to estimate the label of a new observation $X$. A significant amount of work has been done in this field; see \citep{Anderson, friedman1989regularized, lachenbruch1979discriminant}.

In modern analytical approaches, the number of features is usually huge, but only a small portion of them are relevant to the classification decision, although these are not known in advance. Consequently, classification methods lose power because of the large amount of noise.
A common way to solve this problem is to perform dimension reduction so that the number of features is greatly reduced, and then use only the resultant features for classification; see \citep{FJY, PCS, Wasserman}.


Fisher's Linear Discriminant Analysis (LDA) in \cite{Fisher} utilizes a weighted average of the features of the test sample to make a prediction. In the traditional setting where $n \gg p$, when the two classes are assumed to share the same covariance structure, with common precision matrix $\Omega$, the optimal weight vector for LDA satisfies
\begin{equation} \label{Fisherw}
w \propto \Omega (\mu_1 - \mu_0),
\end{equation}
where the mean vectors $\mu_0$ and $\mu_1$, as well as $\Omega$, can be easily estimated, and therefore Fisher's LDA is approachable.

However, in the high-dimensional setting where $p \gg n$, LDA faces immediate problems:
\begin{itemize}
\item It is practically infeasible to estimate $\Omega$ due to the limited sample size $n$.
\item It is difficult to account for the information from different $\Omega_0$ and $\Omega_1$. The difference affects two areas: the covariance matrix term and the estimation of $w$. The classification criteria have to be improved to combine these two parts.
\item The analysis of the error rate is quite complicated even in the simplest case, where $\Omega_0= \Omega_1 = I_p$, as the signals are rare and weak. An extensive discussion may be found in \cite{DJ08, Jin-survey1, FJY, Jin-survey2, JKW}.

\end{itemize}

With recent developments in the estimation of the precision matrix in high dimensions \cite{PCS}, the first problem has been alleviated; that is, Partial Correlation Screening (PCS) is computationally efficient in estimating a row of $\Omega$, and it needs only a few rows of the empirical covariance matrix.
PCS has been demonstrated to be capable of estimating a large precision matrix ($p$ = 10K), and has been applied with evident success in classification.

For the second problem, one variant of LDA that incorporates the covariance-matrix information is Quadratic Discriminant Analysis (QDA), proposed in \cite{RegularizedQDA}, with extensions to high-dimensional data; see \cite{aoshima2019high, wu2019quadratic, RFQD}. When the signals are sparse, the original QDA method is modified accordingly with sparsity assumptions on the difference of the precision matrices and on $\mu_1 - \mu_0$; see \cite{fan2015innovated, jiang2018direct, li2015sparse}. However, to the best of our knowledge, the theoretical behavior of the error rate for high-dimensional QDA with rare and weak signals has not been thoroughly discussed.

Therefore, this paper primarily focuses on addressing the second and third problems. Starting from the classical QDA, we propose a new QDA with a feature-selection approach, where we estimate $\Omega_0$ and $\Omega_1$ with PCS and the mean vectors with sample means for the quadratic term (second-order information), and estimate the linear term (first-order information) with $d^\top X$, where $d$ is found by thresholding $\Omega_1 \mu_1 - \Omega_0 \mu_0$. Numerical analysis of a real data set suggests that our new approach should incorporate the second-order information, as it then performs better than the optimal high-dimensional LDA method.

For the third problem, we propose the rare and weak model on both the mean vector and the precision matrix, tying their rareness and weakness to the parameter $p$. Based on the model, we carefully quantify the possibility and impossibility regions for two-class classification by simultaneously exploiting both the difference in $\mu$ and the difference in $\Omega$.
In addition, we carefully analyze how the errors in QDA may affect the classification results, which has not been done previously in the rare and weak signal model.


We calibrate the effect of the quadratic terms on classification, in terms of
\begin{itemize}
\item the possibility and impossibility regions in the ideal case, where both $\mu_k$ and $\Sigma_k$ ($k=0,1$) are known;
\item the possibility and impossibility regions for the newly proposed QDA with feature selection, under alternative cases where $\mu_k$ and $\Sigma_k$ ($k=0,1$) are only partially known;
\item the successful classification region for the newly proposed QDA with feature selection when both $\mu_k$ and $\Omega_k$ are unknown.
\end{itemize}
To highlight our findings, we show that in the ideal case the QDA classification rule achieves a misclassification rate of 0 asymptotically ($p \rightarrow \infty$) in some parameter regions, and incurs a constant asymptotic error in others (Theorems \ref{thm:equal}-\ref{thm:idealunequal}). The results for the other alternative cases are subsequently analyzed in Theorems \ref{thm:unknownmu}-\ref{thm:unknownomega}.


\subsection{Quadratic Discriminant Analysis}
Consider the two-class classification problem mentioned above, where we observe $n$ training samples $(X_i,Y_i)$, $i=1, \dots, n$. Given $Y_i=k$, we assume the feature vector $X_i \in \mathcal{R}^p$ follows a multivariate normal distribution with mean $\mu_k$ and covariance matrix $\Omega_k^{-1}$. Let $X$ denote an independent test sample from the same population; then,
\begin{equation} \label{T0}
X|Y \sim (1 - Y) N(\mu_0, \Omega_0^{-1})+ Y N(\mu_1,\Omega_1^{-1}).
\end{equation}
We would like to classify $X$ as being from either $Y=0$ or $Y=1$.

For two-population classification problems, the QDA method is commonly used to exploit both the mean and covariance information; see \cite{mclachlan2004}.
Similarly, when $\mu_k$ and $\Omega_k$ are all known, $k = 0, 1$, we derive the likelihood functions of the two populations, and their ratio gives the classification criterion for the current problem. The classification rule is
\begin{equation} \label{CR2}
\begin{array}{l}
\displaystyle \hat{Y} = I \left\{X^\top\left (\Omega_0 -\Omega_1\right )X-2\bigl(\mu_0^\top\Omega_0 -\mu_1^\top\Omega_1 \bigr)X\right. \\
\displaystyle \left. \ \ \ + \bigl(\mu_0^\top\Omega_0\mu_0- \mu_1^\top\Omega_1\mu_1+\ln|\Omega_1|- \ln|\Omega_0|\bigr) \ > \ 0\right\} ,
\end{array}
\end{equation}
where $I(A)$ is the indicator function of event $A$. If $P(Y_i = 1) = q\neq 0.5$, an additional term $2\ln \frac{q}{1-q}$ on the right-hand side of the inequality will improve the accuracy. However, as this does not have a significant effect on the possibility and impossibility regions, our analysis of the case $q=0.5$ will suffice.

When $\mu_k$ and $\Omega_k$ are unknown, as in reality, we have to use estimated parameters instead. When $p \ll n$, the sample mean and covariance are accurate. However, in the high-dimensional setting with $p \gg n$ and the signals being rare and weak, the estimation becomes difficult.



\subsection{Rare and Weak Signal Model}
In the high-dimensional setting, the number of features $p \rightarrow \infty$ and the signals are rare and weak. We need a model to capture the rareness and weakness in both the mean part and the covariance part. Since both parts involve rareness and weakness, the general model is complicated. Here, we consider them separately and make some simplifications.

In \cite{FJY}, the case that $\mu_0 \neq \mu_1$ but $\Omega_0 = \Omega_1$ has been studied. We start with the model from there, and assume $\mu_0=-\mu$ and $\mu_1=\mu$. If not, we can simply apply a location shift of distance $\frac{1}{2}|\mu_1-\mu_0|$ to the left or the right.
We model the contrast mean vector $\mu$ as
\begin{equation}\label{Param1}
\mu_i \ \overset{i.i.d.}\sim \ (1-\epsilon)\mathcal M_0+\epsilon \mathcal M_\tau, \ \ \ \ i=1,\dots,p,
\end{equation}
where $\mathcal M_0$ and $\mathcal M_\tau$ are the point masses at 0 and $\tau>0$, respectively. Here, we assume the signals have the same signs and strengths. This is largely for simplicity, and, as discussed in \citep{FJY} and our remarks following Theorem \ref{thm:equal}, $\mathcal M_\tau$ can be replaced by a random variable with support $[-\tau,\tau]$; all the results discussed in this paper will still hold. The signals are rare and weak in the sense that, when the sample size $n$ approaches infinity, the strength and density of the signals converge to 0, i.e.,
\begin{equation}
\epsilon \ \rightarrow \ 0, \ \ \ \ \tau \ \rightarrow \ 0.
\end{equation}

To examine the effect of the quadratic term, we also model the difference between $\Omega_0$ and $\Omega_1$. Here we use the precision matrix instead of the covariance matrix because the estimation of the two matrices is totally different in the high-dimensional setting, and the former has a direct effect on the classification results. We are interested in the difference between $\Omega_0$ and $\Omega_1$, and so we assume $\Omega_0$ is known and focus on the parameterization of $\Omega_1 - \Omega_0$. Without loss of generality, we assume $\Omega_0=I$. Hence, we write $\Omega = \Omega_1$ and $\Sigma = \Sigma_1$ in the following context for notational simplicity.

We introduce a characterization of rareness and weakness for $\Omega$. Let $W(\nu)$ denote a $p$-dimensional Wigner matrix with parameter $\nu$.
Specifically, let $W(\nu)$ be symmetric with 0's on the diagonal and off-diagonal entries $w_{ij}$ following the distribution
\begin{equation}\label{Omega0}
w_{ji} = w_{ij} \ \overset{i.i.d.}\sim \ (1-\nu)\mathcal M_0+\frac{\nu}{2} \mathcal M_1 + \frac{\nu}{2} \mathcal M_{-1}, \ \ \ \ 1 \leq i < j \leq p,
\end{equation}
where $\mathcal M_1$ and $\mathcal M_{-1}$ are the point masses at 1 and $-1$, respectively. Then, we model $\Omega$ as
\begin{equation}\label{Omega1}
\Omega \ = \ \Sigma^{-1} \ = \ cI+\eta W
\end{equation}
for some constant $c=c(n, p)$ close to 1. Here, the parameters $1 - c$ and $\eta$ represent the signal weakness, and $\nu$ represents the signal rareness. We assume $\nu \rightarrow 0$, $\eta \rightarrow 0$ and $c \rightarrow 1$ when the sample size $n$ goes to infinity.

Now, with a realization of $W$, the model under consideration is the mixture model
\begin{equation}\label{M12}
X_i|(W,Y_i) \ \sim \ (1-Y_i) N(-\mu,I) + Y_i N \left(\mu,[cI+\eta W]^{-1}\right), \ \ i=1,\dots,n.
\end{equation}
We are interested in the possibility and impossibility regions under the current model and the successful and unsuccessful classification regions of the QDA classifier.

We tie all the parameters to $p$ through constant exponents, as follows, and explore how these parameters influence the classification results. For the mean vector, the parameters $\epsilon$ and $\tau$ depict the signal sparsity and weakness, respectively. We define them as
\begin{equation}\label{Param2}
\epsilon \ = \ p^{-\zeta}, \ \ \ \ \tau \ = \ g_p p^{-\theta}, \ \ \ \ 0<\zeta, \theta <1,
\end{equation}
where $g_p$ can be $1$ or a $\ln p$ term, which will be discussed in Section \ref{sec:main}.
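The calibration (\ref{Param1})-(\ref{Param2}) is easy to simulate. A small sketch (ours; $g_p=1$ and illustrative exponents) draws $\mu$ and confirms that the number of nonzero signals concentrates around $p\epsilon=p^{1-\zeta}$:

```python
import numpy as np

rng = np.random.default_rng(1)
p, zeta, theta = 100_000, 0.4, 0.3   # illustrative exponents (ours)
eps = p ** (-zeta)                   # signal sparsity  epsilon = p^{-zeta}
tau = p ** (-theta)                  # signal strength  tau = p^{-theta}, g_p = 1

# mu_i ~ (1 - eps) * point mass at 0 + eps * point mass at tau
mu = np.where(rng.random(p) < eps, tau, 0.0)
k = np.count_nonzero(mu)             # expected count: p * eps = p^{1 - zeta} = 1000
```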
\nFor the precision matrix $\\Omega$, we take the following parameterization in a similar manner to that of the mean part: \n\\begin{equation}\\label{Param}\n\\eta=f_p p^{-\\alpha}, \\ \\ \\ \\nu=p^{-\\beta}, \\ \\ \\ \\xi=1-c=p^{-\\gamma}, \\ \\ \\ \\ \\ 0<\\alpha, \\gamma<1, 0< \\beta <2,\n\\end{equation}\nwhere $f_p$ can be $1$ or a $\\ln p$ term. Details are provided in Section \\ref{sec:main}. \nHere, $\\alpha$, $\\beta$, $\\gamma$, $\\zeta$, and $\\theta$ are all constants. It is of interest to explore the performance diagram of the QDA classifier on the space formed by these parameters. \n\nTo guarantee that $\\Omega$ is a precision matrix, $\\eta W$ must be weak enough for $\\Omega = cI + \\eta W$ to be positive definite. According to Lemma \\ref{lemma1} in Section \\ref{sec:proof}, this requirement is satisfied with high probability under the condition\n\\begin{equation}\\label{con1}\n\\beta > 1 - 2\\alpha. \n\\end{equation}\nHence, we discuss only the classification possibility and impossibility regions under this condition. \n\nFinally, we tie the sample size $n$ to $p$ by\n\\begin{equation}\\label{Param3}\nn = p^{\\delta}, \\qquad 0 < \\delta < 1. \n\\end{equation}\nWhen $p \\rightarrow \\infty$, $n$ goes to infinity at a much slower rate. \n\n\n\n\n\n\\subsection{QDA with Feature Selection}\nFor the high-dimensional setting, we have to incorporate feature selection in the proposed QDA classifier. Consider a new observation $X$ from the mixture population (\\ref{M12}); i.e., $X \\sim N(-\\mu, I)$ when $Y = 0$ and $X \\sim N(\\mu, \\Omega^{-1})$ when $Y = 1$. Under the current model, the classification rule (\\ref{CR2}) is reduced to \n\\begin{equation}\\label{eqn:qda}\n\\hat{Y}(X; \\mu, W)= I \\{X^\\top(I - \\Omega) X + 2\\mu^\\top (I+\\Omega) X + \\mu^\\top(I - \\Omega) \\mu + \\ln|\\Omega| > 0\\}.\n\\end{equation}\nIn this rule, the unknown $\\mu$ and $\\Omega$ need to be estimated. 
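When $\mu$ and $\Omega$ are known, the rule (\ref{eqn:qda}) can be coded directly. A minimal sketch (ours; a tiny, well-separated toy example rather than the rare/weak regime) verifies that the two class centers are labeled correctly:

```python
import numpy as np

def qda_label(X, mu, Omega):
    """Oracle QDA rule: class 0 is N(-mu, I), class 1 is N(mu, inv(Omega)).
    Returns 1 iff the QDA score Q is positive."""
    I = np.eye(len(mu))
    _, logdet = np.linalg.slogdet(Omega)
    Q = (X @ (I - Omega) @ X
         + 2.0 * mu @ (I + Omega) @ X
         + mu @ (I - Omega) @ mu
         + logdet)
    return int(Q > 0)

# Toy parameters (ours): Omega of the form cI + eta*W with c = 1.2, eta = 0.1.
mu = np.array([3.0, 0.0])
Omega = np.array([[1.2, 0.1],
                  [0.1, 1.2]])

label_at_mu = qda_label(mu, mu, Omega)        # at the class-1 center
label_at_neg_mu = qda_label(-mu, mu, Omega)   # at the class-0 center
```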
We propose the following algorithm for QDA classification with feature selection:
\begin{itemize}
\item[] {\bf Algorithm 1}.
\item[0.] Find the corresponding precision matrix $\hat{\Omega}$ with a precision-matrix-estimation method. Calculate the average vectors of the two classes as $\hat{\mu}_0 = \frac{1}{n_0} \sum_{i: Y_i = 0} X_i$ and $\hat{\mu}_1 = \frac{1}{n_1} \sum_{i: Y_i = 1} X_i$, where $n_1 = \sum_{i =1}^n Y_i$ and $n_0 = n - n_1$.
\item[1.] Let $d_0=\hat{\mu}_0$, $d_1= \hat{\Omega} \hat{\mu}_1$ and $d = d_1 - d_0=(d(1),\dots, d(p))^\top$.
\item[2.] With a threshold $t$, let $d^{(t)}$ denote the indicator vector of feature selection, i.e., for $j=1,\dots,p$,
\begin{equation*}
d^{(t)}(j)= \left\{\begin{array}{ll}
1, & |d(j)| > t, \\
0, & |d(j)| \leq t.
\end{array}
\right.
\end{equation*}
Let $\hat \mu_d^{(t)}$ be the hard-thresholded mean difference vector, i.e., $\hat \mu_d^{(t)}=d\circ d^{(t)}$.
\item[3.] Let $C = \mu^\top(I - \Omega) \mu + \ln |\Omega|$. When $C$ is unknown, we use a grid search to find the $C$ with the best performance.
\item[4.] (Prediction step) For any fresh data vector $X$, calculate the QDA score
$$
Q=X^\top(I - \hat{\Omega}) X + 2( \hat\mu_d^{(t)})^\top X + C
$$
and estimate the label of $X$ as $\hat{Y} = I\{Q > 0\}$.
\end{itemize}

In this procedure, Step 0 can adopt any estimation method for the high-dimensional sparse precision matrix, such as the one in \cite{PCS}. The theoretical limit of the proposed method depends on the estimation method applied.

In Steps 1--2, we propose a feature-selection component for $d = \hat{\Omega}\hat{\mu}_1 - \hat{\mu}_0$. This step is quite standard in high-dimensional data analysis, and yet is different from the high-dimensional LDA approach in two aspects.
First, we consider $\hat{\Omega}\hat{\mu}_1 - \hat{\mu}_0$ instead of $\hat{\mu}_1 - \hat{\mu}_0$ directly, since the latter provides less information. Second, we apply the hard-thresholding method instead of the clipping method, as hard thresholding also provides an estimate of the quantity that enters the calculation of the term $X^\top\Omega X$.

\subsection{A Real Data Example}\label{sec:quickdata}
We use a quick example to demonstrate how this works on real data. We consider the rats dataset summarized in Table \ref{table:data}. The original rats dataset was collected in a study of gene expressions of live rats in response to different drugs and a toxicant; we use the cleaned version of \cite{liver}. This dataset consists of $181$ samples measured on the same set of $8491$ genes, with $61$ samples labeled by \cite{liver} as toxicants and the other $120$ as other drugs.

\begin{table}[ht]
\caption{A gene-expression microarray rats dataset.}
\begin{tabular}{|l|l||c|c|}
\hline
Data Name & Source & $n$ ($\#$ of subjects) & $p$ ($\#$ of genes) \\
\hline
Rats & Yousefi et al. (2010) & 181 & 8491 \\
\hline
\end{tabular}
\label{table:data}
\end{table}


This dataset has been carefully studied in \cite{PCS}, where the performance of two-class classification was compared among a sequence of popular classifiers, including SVM, Random Forest, and HCT-PCS. The HCT-PCS, which achieves optimal classification when it adapts LDA \citep{DJ08, FJY} in the rare and weak signal setting, was shown to have very promising classification results on this data.

That said, in HCT-PCS, all samples of the two classes are assumed to share the same precision matrix, leaving room for improvement. We now focus on how to use the difference in the precision matrices between the two classes to improve the classification of this data, and we compare the results with those from the LDA setting.
Here, we leave out all the implementation details, which will be introduced in Section \ref{sec:rats}, and only highlight our findings for the rats data:
\begin{itemize}
\item QDA further outperforms LDA in terms of both the best error rate and the average error, and produces better results than the other methods considered in \cite{PCS}, including SVM and Random Forest, suggesting that QDA gives a better separation by taking into account the second-order difference between the two classes.

\end{itemize}

As illustrated in Figure \ref{figure:errorrats-k30-intro} (below, left), the test errors of LDA are all above those of QDA at each of the 15 data splittings for the rats data, given that their precision matrices were estimated by PCS separately. In fact, this finding holds for both the best error rate and the average error of the two classifiers. Figure \ref{figure:errorrats-k30-intro} (below, right) demonstrates the surface of the test error of LDA and QDA obtained by varying the estimated covariance matrices. This zoomed-in plot shows the error rates of the two classifiers in great detail, and supports our finding that QDA does bring a necessary improvement over LDA when the precision matrices are appropriately estimated.
\n\\begin{figure}[htb!]\n \\centering\n \\subfigure[]{\\includegraphics[scale=0.35]{rats2.pdf}} \\quad\n\t \\subfigure[]{\\includegraphics[scale=0.4]{plot1_LQDA2_k30_new}}\n\t \\caption{Comparison of testing errors for the rats data: (a) error rates (y-axis) of LDA and QDA for 15 data splittings at a certain sparsity of the precision matrix; (b) Zoom-in errors for the rats data for varying choices of parameters in estimating the precision matrices for one splitting of 15 splittings.}\n \\label{figure:errorrats-k30-intro}\n\\end{figure}\n\n\n\\subsection{Main results} This paper considered several scenarios, from the simplest case, in which both the mean vector and precision matrix are known, to the most general case, in which both are unknown. The results for the unknown precision matrix (Theorem \\ref{thm:unknownomega}) make use of current advances in the area of precision-matrix estimation. To avoid discussing these advances (which are beyond the scope of this paper) and focus on the main results of our research, we now examine the case in which the mean vector is unknown while the precision matrix is known (i.e., Theorem \\ref{thm:unknownmu}). \n\nWhen $\\gamma <1\/2$, the signal in variances is so large that even for a uninformative mean vector the QDA classifier can achieve successful classification (Theorem \\ref{thm:equal}). Thus we consider the non-trivial case that $\\gamma\\geq 1\/2$ (i.e., (\\ref{eqn:gamma})). On the other hand, since the mean vector is unknown, its estimation accuracy depends on the sample size $n$. Let \n\\begin{equation}\\label{kappa}\n\\kappa=\\max\\{\\kappa_1, \\kappa_2\\},\n\\end{equation}\nwhere \n\\begin{equation}\\label{kappa12}\n\\kappa_1=2-2\\alpha-\\beta \\ \\ {\\rm and } \\ \\ \\kappa_2=1-2\\theta-\\zeta.\n\\end{equation}\nOur results show that $\\kappa$ is a key quantity in the phase transition whether the mean vector is known or unknown. 
Here, $\kappa_1$ and $\kappa_2$ are the synchronized indexes of signal weakness and sparsity in the covariance matrix difference and the mean difference, respectively. In detail, $p^{\kappa_1}\sim p^2\eta^2\nu$, which is the order of the squared $L_2$-norm of the eigenvalues of $\eta W$, while $p^{\kappa_2}\sim p\tau^2\epsilon$, which is the order of the squared $L_2$-norm of the mean vector $\mu$. Under model (\ref{M12}) with balanced samples ($q=1/2$) and the parameterizations (\ref{Param1}), (\ref{Omega1})--(\ref{Param3}), and (\ref{eqn:gamma}), we examine two cases:
\begin{itemize}
\item[(i)] When $\theta < \delta/2$, the signals in the mean difference of the two populations are so strong that the QDA classifier achieves the same classification results for the unknown mean vector as for the known mean vector (Theorem \ref{thm:idealunequal}). More specifically, the QDA misclassification rate converges to 0 when $\kappa > 0$, converges to positive values (between 0 and $1/2$) when $\kappa<0$, and converges either to 0 or to positive values on the boundary $\kappa=0$, depending on whether a $\ln p$ term is introduced in the signal strength. Thus the boundary $\kappa=0$ partitions the phase space into successful and unsuccessful classification regions of the QDA classifier, which are further proved to be the respective possibility and impossibility regions. In other words, the QDA classifier achieves the largest possible successful classification region.

\item[(ii)] When $\theta \geq \delta/2$, the signals in the mean difference of the two populations are weak, and so the unknown mean vector reduces the QDA successful classification region relative to the case of the known mean vector (Theorem \ref{thm:idealunequal}).
More specifically, the QDA misclassification rate converges to 0 when $\\kappa > (1-\\delta)\/2$, converges to $1\/2$ when $\\kappa<(1-\\delta)\/2$, and converges either to 0 or to positive values on the boundary $\\kappa=(1-\\delta)\/2$, depending on whether a $\\ln p$ term is introduced in the signal strength.\n\n\\end{itemize}\nIn summary, when the sample size is large enough relative to the signal strength in the mean difference ($\\theta < \\delta\/2$), the QDA classification result when the mean vector is unknown is the same as when it is known (`oracle' property) and QDA achieves the possibility region; otherwise, the QDA successful classification region is reduced roughly from $\\kappa>0$ to $\\kappa>(1-\\delta)\/2$. \n\nTo visualize the relationships among these parameters and the possibility\/impossibility and QDA successful\/unsuccessful classification regions, some of the results in Theorem \\ref{thm:unknownmu} are given in Figures \\ref{figure: phase} and \\ref{figure: parameter}. Figure \\ref{figure: phase} depicts those regions in terms of $\\kappa_1$ and $\\kappa_2$ values, with subfigure (a) for $\\theta<\\delta\/2$ and (b) for $\\theta\\geq \\delta\/2$. Note that $\\kappa_2>1-\\delta-\\zeta>-\\delta$ when $\\theta<\\delta\/2$, $\\kappa_2\\leq 1-\\delta-\\zeta<1-\\delta$ when $\\theta\\geq \\delta\/2$, and $\\kappa_1<1$ by (\\ref{con1}). In Figure \\ref{figure: phase}, we observe that, when the two cases $\\theta<\\delta\/2$ and $\\theta\\geq \\delta\/2$ are compared, although the QDA successful region is reduced from $\\kappa>0$ to $\\kappa>(1-\\delta)\/2$, the region does not simply shrink in size. Instead, part of the successful region is removed while another area is added when we switch from the case of $\\theta<\\delta\/2$ to the case of $\\theta\\geq \\delta\/2$. The added area in the case of $\\theta\\geq \\delta\/2$ is the green area over $\\kappa_2\\in [-2, -\\delta]$. 
This added area arises because $\\theta\\geq \\delta\/2$ imposes only an upper bound $1-\\delta$ on $\\kappa_2$ while the lower bound remains $-2$, whereas $\\theta<\\delta\/2$ imposes a lower bound $-\\delta$ on $\\kappa_2$. \n\n\\begin{figure}[htb!]\n \\centering\n \\subfigure[Strong signal region ($\\theta<\\delta\/2$)]{\\includegraphics[scale=0.35]{DeltaLarge.pdf}} \\hspace{0.2cm}\n \\subfigure[Weak signal region ($\\theta\\geq \\delta\/2$)]{\\includegraphics[scale=0.35]{DeltaSmall.pdf}} \n \\caption{The possibility\/impossibility regions and QDA successful\/unsuccessful classification regions derived in Theorem \\ref{thm:unknownmu} and defined in terms of $\\kappa_1=2-2\\alpha-\\beta$ and $\\kappa_2=1-2\\theta-\\zeta$ for fixed $\\delta$ and for the two cases: (a) strong signal region ($\\theta<\\delta\/2$) and (b) weak signal region ($\\theta\\geq \\delta\/2$).}\n \\label{figure: phase}\n\\end{figure}\n\nTo provide a sense of the relationship between the two parameters $\\alpha$ and $\\beta$ from the covariance matrix and, separately, the relationship between the two parameters $\\theta$ and $\\zeta$ from the mean vector, Figure \\ref{figure: parameter} displays, for fixed $\\delta$, the QDA successful and unsuccessful classification regions of one set of parameters when the other set of parameters is fixed. Subfigures (a) and (b) are for fixed $\\theta$ and $\\zeta$ (and thus $\\kappa_2$) values, while (c) and (d) are for fixed $\\alpha$ and $\\beta$ (and thus $\\kappa_1$) values. Note that we consider only the cases in which $\\kappa_2\\leq 0$ in (a) and $\\kappa_2\\leq (1-\\delta)\/2$ in (b), since otherwise the QDA successful region of $\\alpha$ and $\\beta$ would be the collection of all possible values. Similarly, we do not consider the case $\\kappa_1>(1-\\delta)\/2$, which would make the QDA successful region of $\\theta$ and $\\zeta$ the whole square. Note that (a) and (b) are under the constraint (\\ref{con1}). 
From (a) and (b) we can see that, when $\\theta$ increases from less than $\\delta\/2$ to greater than $\\delta\/2$, the QDA successful region of $\\alpha$ and $\\beta$ decreases. From (c) and (d) we observe that, when $\\kappa_1$ decreases (i.e., $2\\alpha+\\beta$ increases) from a positive value to a negative value, the QDA successful region of $\\theta$ and $\\zeta$ decreases.\n\n\\begin{figure}[htb!]\n \\centering\n \\subfigure[$\\theta<\\delta\/2$, $\\kappa_2\\leq 0$]{\\includegraphics[scale=0.4]{CovDeltaLarge.pdf}}\n \\hspace{0.2cm}\n \\subfigure[$\\theta\\geq \\delta\/2$, $\\kappa_2\\leq (1-\\delta)\/2$]{\\includegraphics[scale=0.4]{CovDeltaSmall.pdf}} \n \\subfigure[$0<\\kappa_1\\leq (1-\\delta)\/2$]{\\includegraphics[scale=0.4]{MeanKappa1Large.pdf}} \n \\hspace{0.7cm}\n \\subfigure[$\\kappa_1\\leq 0$]{\\includegraphics[scale=0.4]{MeanKappa1Small.pdf}} \n \\caption{The possibility\/impossibility regions and QDA successful\/unsuccessful classification regions derived in Theorem \\ref{thm:unknownmu} when $\\delta$ and a subset of the remaining parameters are fixed: (a) $\\delta$, $\\theta$ and $\\zeta$ are fixed, $\\theta<\\delta\/2$ and $1-2\\theta-\\zeta\\leq 0$; (b) $\\delta$, $\\theta$ and $\\zeta$ are fixed, $\\theta\\geq \\delta\/2$ and $1-2\\theta-\\zeta \\leq (1-\\delta)\/2$; (c) $\\delta$, $\\alpha$ and $\\beta$ are fixed, and $0<2-2\\alpha-\\beta \\leq (1-\\delta)\/2$; (d) $\\delta$, $\\alpha$ and $\\beta$ are fixed, and $2-2\\alpha-\\beta\\leq 0$.}\n \\label{figure: parameter}\n\\end{figure}\n\nThe methods employed and results obtained in this work are distinct from those in the existing literature on QDA methods for high-dimensional data with sparse signals. In \\cite{li2015sparse}, sparsity assumptions are made on the covariance matrices $\\Sigma_0$, $\\Sigma_1$ and $\\Sigma_0 - \\Sigma_1$, rather than on the precision matrices as in our work. In \\cite{wu2019quadratic} and \\cite{fan2015innovated}, sparsity assumptions are made on the precision matrices. 
However, \\cite{wu2019quadratic} requires a stronger signal, namely $\\max\\{\\kappa_1, \\kappa_2\\} \\geq 1$, while we only need $\\max\\{\\kappa_1, \\kappa_2\\} > 0$. In \\cite{fan2015innovated}, the sparsity condition requires that the number of rows of $\\Omega_i$ with non-zero off-diagonal entries be of order $o(\\min(n, p))$. In our work, we allow each entry to be independently symmetrically distributed, which is more general.\n\n\n\n\n\\subsection{Content and Notations}\nThe main results for the phase diagram of the QDA method and the one with feature selection are discussed in Section \\ref{sec:main}. In Section \\ref{sec:proof}, we show the proofs of the main theorems. Next, numerical results of the proposed methods and algorithms on real data are presented in Section \\ref{sec:rats}. In Section \\ref{sec:dis}, some concluding remarks and potential directions of future work are discussed. The details of the proofs are provided in the appendices. \n\nHere we list the notations used throughout the paper. Let the eigenvalues of $W$ be denoted by $\\lambda_1 \\geq \\lambda_2 \\geq \\cdots \\geq \\lambda_p$. For a matrix $M$, we use $\\|M\\|$ and $\\|M\\|_F$ to denote its spectral norm and Frobenius norm, respectively, and $Tr(M)$ to denote its trace, which equals the sum of the eigenvalues of $M$. We use $diag(c_1,\\dots, c_p)$ to denote a diagonal matrix with diagonal elements $c_1,\\dots, c_p$, and use $I(A)$ to denote an indicator function over event $A$. For two vectors or matrices $a$ and $b$ of the same dimension, $a\\circ b$ denotes the Hadamard (entrywise) product. \n\n\n\\section{Phase transition for QDA}\\label{sec:main}\nThe theoretical analysis consists of three parts. First, in Section \\ref{sec:ideal}, we identify the possibility and impossibility regions in the ideal case where the parameters $\\mu$ and $\\Omega$ are all known, and show that QDA achieves the largest successful classification region. 
Second, in Section \\ref{sec:mean}, we further characterize the QDA successful and unsuccessful classification regions when $\\mu$ has to be estimated but $\\Omega$ is known. Last, in Section \\ref{sec:unknownpre}, we analyze the QDA successful region when both $\\mu$ and $\\Omega$ are unknown. \n\n\n\\subsection{Ideal case}\\label{sec:ideal}\nIn the ideal case, we assume that $\\mu$ and $\\Omega$ are known. \nWe begin with the case $\\mu = 0$ to examine the effect of the quadratic terms on classification. In the general setting, this means that the mean vectors of the two classes are identical. Hence, the rare and weak signal model (\\ref{M12}) is reduced to the mixture model \n\\begin{equation}\\label{M1}\nX_i|(W,Y_i) \\ \\sim \\ (1-Y_i) N(0,I) + Y_i N \\left(0,[cI+\\eta W]^{-1}\\right).\n\\end{equation}\n\nFor any classifier $T$, we denote the estimated class label of a fresh data point $X$ as $\\hat Y^T(X)$. The misclassification rate of $T$ is defined as \n\\begin{equation}\\label{eqn:error}\nMR(T) = \\frac{1}{2}P_{\\epsilon, \\tau, \\eta, \\nu, \\xi}(\\hat Y^T(X) = 0|Y = 1) + \\frac{1}{2}P_{\\epsilon, \\tau, \\eta, \\nu, \\xi}(\\hat Y^T(X)= 1|Y = 0).\n\\end{equation}\nUnder the current model, we identify the regions where the QDA method succeeds and fails. The results are given in the following theorem. 
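Before stating the theorem, model (\\ref{M1}) and the error rate (\\ref{eqn:error}) can be illustrated with a small Monte Carlo sketch. The construction of $W$ (symmetric, with $\\pm 1$ off-diagonal entries each present with probability $\\nu$) and all numerical values below are hypothetical simplifications; with $\\mu=0$ the oracle QDA score reduces to $Q = -X^\\top(\\Omega-I)X + \\ln|\\Omega|$:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n_test = 100, 2000
eta, nu, xi = 0.05, 0.1, 0.5                   # hypothetical signal parameters

# Simplified stand-in for the random-matrix model: symmetric W with
# +/-1 off-diagonal entries, each present with probability nu.
upper = rng.binomial(1, nu, (p, p)) * rng.choice([-1.0, 1.0], (p, p))
W = np.triu(upper, 1)
W = W + W.T

Omega = (1.0 + xi) * np.eye(p) + eta * W       # class-1 precision matrix
L1 = np.linalg.cholesky(np.linalg.inv(Omega))  # to sample N(0, Omega^{-1})
sign, logdet = np.linalg.slogdet(Omega)

errors = 0
for _ in range(n_test):
    y = int(rng.integers(0, 2))
    x = rng.standard_normal(p) if y == 0 else L1 @ rng.standard_normal(p)
    # Oracle QDA score with mu = 0: Q = -x'(Omega - I)x + ln|Omega|
    q = -x @ (Omega - np.eye(p)) @ x + logdet
    errors += int(int(q > 0) != y)

mr = errors / n_test
print(f"estimated misclassification rate: {mr:.3f}")
```

With these (hypothetical) values the estimated rate is small; shrinking $\\xi$ and $\\eta$ pushes it toward $1\/2$.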
\n\nFor notational simplicity, we use $C_p$ to denote any constant term satisfying $\\liminf_{p\\to\\infty} C_p>0$ and use $L_p$ to denote any $\\ln p$ term such that $\\liminf_{p\\to\\infty} \\frac{L_p}{(\\ln p)^r}>0$ for some constant $r>0$.\n\n\n\n\\begin{theorem}\\label{thm:equal}\nConsider model (\\ref{M1}) with the parameterizations (\\ref{Omega1})--(\\ref{con1}).\n\\begin{itemize}\n\\item[(i)] If one of the following conditions is satisfied,\n\\begin{itemize}\n\\item [(1)] $\\gamma < 1\/2$, $f_p=C_p$;\n\\item [(2)] $\\gamma \\geq 1\/2$, $\\beta<2-2\\alpha$, $f_p=C_p$;\n\\item [(3)] $\\gamma \\geq 1\/2$, $\\beta=2-2\\alpha$, $f_p=L_p$;\n\\end{itemize}\nthe QDA classification rule (\\ref{eqn:qda}) under model (\\ref{M1}) has a misclassification rate (MR) that converges to 0 as $p\\to\\infty$.\n\n\\item[(ii)] If $\\gamma> 1\/2$ and $\\beta>2-2\\alpha$, then for any classifier $L$, the misclassification rate (\\ref{eqn:error}) of $L$ converges to $1\/2$ when $p\\to\\infty$.\n\\end{itemize}\n\\end{theorem}\n\n\nTheorem \\ref{thm:equal} depicts an exact phase diagram of the QDA method when $f_p$ is a $\\ln p$ term. When $\\gamma <1\/2$ or $\\beta \\leq 2 - 2\\alpha$, the QDA method achieves a misclassification rate of 0 asymptotically. Otherwise ($\\gamma>1\/2$ and $\\beta>2-2\\alpha$), all classifiers, including the QDA method, will fail. Note that, when $\\gamma < 1\/2$, the deviation in the diagonal elements of the precision matrices is large enough for successful classification. \nHence, the interesting case is when\n\\begin{equation}\\label{eqn:gamma}\n\\gamma > 1\/2,\n\\end{equation}\nand the following analysis is predicated on this assumption. On the boundary $\\gamma=1\/2$, the analysis of the possibility and impossibility regions is much more complicated: one needs to zoom in and may need to introduce more parameters. 
Given that we already have six parameters, we set aside this special case and consider only the case $\\gamma>1\/2$.\n\n\n\nTheorem \\ref{thm:equal} tells us that under (\\ref{eqn:gamma}), the QDA classifier succeeds in the region $\\beta\\leq 2-2\\alpha$ while all classifiers fail in the region $\\beta>2-2\\alpha$. This indicates that the possibility and impossibility regions are $\\beta\\leq 2-2\\alpha$ and $\\beta> 2-2\\alpha$, respectively, and that QDA succeeds in the whole possibility region and is thus optimal in this sense. \n\n\nIn the proof (see Appendix A in Supplementary Material), it may be observed that the result in Theorem \\ref{thm:equal} depends on $\\xi^2 = (c - 1)^2$ instead of $\\xi$ itself. Thus, if we set\n\\[\n\\Omega \\ = \\ I +diag(\\pm \\xi)+\\eta W, \n\\]\nTheorem \\ref{thm:equal} still holds. This is more natural, since we now allow either a $\\xi$ or a $-\\xi$ deviation from $1$ for any of the $p$ diagonal elements. Even more generally, $\\xi$ can be replaced by a random variable with support $[-\\xi,\\xi]$. For example, $\\Omega =I+diag(U_1,\\dots,U_p)+\\eta W$ with $U_i$ being i.i.d. uniform random variables over $[-\\xi,\\xi]$.\n\nWe next consider the case where the means are unequal but still known, i.e., $\\mu \\neq 0$. In such a case, the mixture model reverts to (\\ref{M12}). The regions of possibility and impossibility under (\\ref{eqn:gamma}) and the optimality of QDA are characterized in the following theorem.\n\n\n\\begin{theorem}\\label{thm:idealunequal}\nConsider model (\\ref{M12}) with the parameterizations (\\ref{Param1}), (\\ref{Omega1})--(\\ref{con1}), and (\\ref{eqn:gamma}). 
\n\\begin{itemize}\n\\item[(i)] If one of the following conditions is satisfied, \n\\begin{itemize}\n\\item [(1)] $\\beta<2-2\\alpha$, $f_p=C_p$;\n\\item [(2)] $\\beta=2-2\\alpha$, $f_p=L_p$; \n\\item [(3)] $0<\\zeta<1-2\\theta$, $g_p=C_p$;\n\\item [(4)] $\\zeta=1-2\\theta$, $g_p=L_p$;\n\\end{itemize}\nthe QDA classification rule (\\ref{eqn:qda}) has a misclassification rate that converges to 0 as $p\\to\\infty$. \n\\item[(ii)] If $\\beta>2-2\\alpha$ and $\\zeta>1-2\\theta$, then, for any classifier $L$, the misclassification rate (\\ref{eqn:error}) of $L$ converges to $1\/2$ when $p\\to\\infty$. \n\\end{itemize}\n\\end{theorem}\n\n\n{\\bf Remark 1}. Theorem \\ref{thm:idealunequal} demonstrates that under the assumptions stated therein, the possibility and impossibility regions are $\\{\\beta\\leq 2-2\\alpha\\}\\cup \\{\\zeta\\leq 1-2\\theta\\}$ and $\\{\\beta>2-2\\alpha,\\zeta>1-2\\theta\\}$ respectively. More importantly, Theorem \\ref{thm:idealunequal} shows that the QDA classifier achieves the possibility region and thus is optimal in this sense. \n \n\n{\\bf Remark 2}. Note that conditions (3) and (4) in Theorem \\ref{thm:idealunequal} on the parameters modelling the difference in mean vectors are weaker than the corresponding conditions in the phase diagram of LDA in \\cite{FJY}. This makes sense, as here we assume $\\mu$ is known when applying the QDA classification rule (\\ref{eqn:qda}), whereas \\cite{FJY} considered trained LDA classifiers with $\\mu$ unknown. \n\n{\\bf Remark 3}. From the results in Theorem \\ref{thm:idealunequal}, we observe that the possibility region for the mean part, $0<\\zeta<1-2\\theta$, and that for the precision matrix part, $1-2\\alpha<\\beta<2-2\\alpha$, are independent. This again could be explained by the fact that the classification rule (\\ref{eqn:qda}) assumes known $\\mu$ and $\\Omega$. 
When we consider data-trained classifiers, the two possibility regions will interact, and there will be a phase diagram over the two regions showing a balance between the two sets of sparsity\/weakness indices. \n\n\n{\\bf Remark 4}. In model (\\ref{M12}), we can also introduce sparsity and weakness in the diagonals of the precision matrix $\\Omega$. However, the resulting model would have six sparsity and weakness indices in total: 2 for the means, 2 for the diagonals, and 2 for the off-diagonals of the precision matrix. We can readily obtain the possibility region for this model, but it would be difficult to visualize and is thus omitted here. \n\n\n\n\n\\subsection{Unknown Mean Vector}\\label{sec:mean}\n\nIn this section, we discuss the case in which $\\mu$ is unknown but $\\Omega$ is known. Again, without loss of generality, we assume $\\Omega_0 = I$, and consider the model (\\ref{M12}). \nFurthermore, since the estimation accuracy is closely related to the sample size $n$, we employ the relationship $n = p^{\\delta}$ in (\\ref{Param3}).\nHence, the sample size is always of a smaller order than $p$. \n\nWhen $\\Omega$ is known but $\\mu$ is unknown, we modify Algorithm 1 to adapt it to the current setting, as follows:\n\\begin{itemize}\n\\item[] {\\bf Algorithm 2}.\n\\item[0.] Calculate the sample mean vectors of the two classes as $\\hat{\\mu}_0 = \\frac{1}{n_0} \\sum_{i: Y_i = 0} X_i$ and $\\hat{\\mu}_1 = \\frac{1}{n_1} \\sum_{i: Y_i = 1} X_i$, where $n_1 = \\sum_{i =1}^n Y_i$ and $n_0 = n - n_1$. \n\\item[1.] Let $d_0=\\hat{\\mu}_0$, $d_1= \\Omega \\hat{\\mu}_1$ and $d=d_1-d_0=(d(1),\\dots, d(p))^\\top$.\n\\item[2.] Define $t = \\frac{2\\ln p}{\\sqrt{n}} \\cdot 1\\{\\max_{1\\leq j \\leq p} |d(j)| > 2\\ln p\/\\sqrt{n}\\}$ and let $d^{(t)}$ denote the indicator vector of feature selection, i.e. for $j=1,\\dots,p$,\n\\begin{equation*}\nd^{(t)}(j)= \\left\\{\\begin{array}{ll}\n1, & |d(j)| > t, \\\\\n0, & |d(j)| \\leq t. 
\n\\end{array}\n\\right.\n\\end{equation*}\nLet $\\hat{\\mu}^{(t)}$ and $\\hat\\mu_d^{(t)}$ be the hard-thresholded versions of $\\hat\\mu_0$ and $d$ after feature selection, respectively, i.e. \n\\begin{equation*}\n\\hat{\\mu}^{(t)}=\\hat\\mu_0\\circ d^{(t)}, \\ \\ \\ \\hat\\mu_d^{(t)}=d\\circ d^{(t)}.\n\\end{equation*}\n\\item[3.] Define $C = (\\hat{\\mu}^{(t)})^\\top (I - \\Omega) \\hat{\\mu}^{(t)} +\\ln |\\Omega|$.\n\\item[4.] (Prediction step) For any fresh data vector $X$, calculate the QDA score\n$$ \nQ=X^\\top(I -\\Omega) X + 2(\\hat\\mu_d^{(t)})^\\top X + C\n$$\nand estimate the label of $X$ as $\\hat{Y}$ with $\\hat{Y} = I\\{Q > 0\\}$. \n\\end{itemize}\n\n\n\\begin{theorem}\\label{thm:unknownmu}\nUnder model (\\ref{M12}) and the parameterization (\\ref{Param1}), (\\ref{Omega1})--(\\ref{Param3}), and (\\ref{eqn:gamma}), \n\\begin{itemize}\n\\item [(i)] When $\\theta \\geq \\delta\/2$, i.e., the signals are weak, \n\\begin{itemize}\n\\item[(1)] If $\\max\\{2-2\\alpha-\\beta, 1-2\\theta-\\zeta\\} > (1-\\delta)\/2$, the QDA classification rule (\\ref{eqn:qda}) with feature selection in Algorithm 2 has a misclassification rate that converges to 0 as $p\\to\\infty$. 
\n\n\\item[(2)] If $\\max\\{2-2\\alpha-\\beta, 1-2\\theta-\\zeta\\}< (1-\\delta)\/2$, then $MR(QDA) \\geq \\Phi(-1\/2)\/16$ as $p \\rightarrow \\infty$.\n\n\\item[(3)] If $\\max\\{2-2\\alpha-\\beta, 1-2\\theta-\\zeta\\}=(1-\\delta)\/2$, the misclassification rate depends on $f_p$ or $g_p$ in the following ways:\n\\begin{itemize}\n\\item [(A)] If $2 - 2\\alpha - \\beta = (1 - \\delta)\/2$ and $f_p=L_p$, then $MR(QDA) \\rightarrow 0$;\n\\item [(B)] If $1-2\\theta-\\zeta = (1 - \\delta)\/2$ and $g_p=L_p$, then $MR(QDA) \\rightarrow 0$;\n\\item [(C)] If $2 - 2\\alpha - \\beta = (1 - \\delta)\/2$ and $f_p = C_p$, then $MR(QDA) \\geq c$ for a constant $c > 0$; \n\\item [(D)] If $1-2\\theta-\\zeta = (1 - \\delta)\/2$ and $g_p=C_p$, then $MR(QDA) \\geq c$ for a constant $c > 0$.\n\\end{itemize} \n\\end{itemize}\n\n\\item[(ii)] When $\\theta < \\delta\/2$, i.e., the signals are strong, the results in Theorem \\ref{thm:idealunequal} hold.\n\n\\end{itemize}\n\\end{theorem}\n\n\nWhen the mean must be estimated, the parameterization of the sample size $n$ also has an effect on the QDA successful classification region. Comparing the results in Theorem \\ref{thm:idealunequal} and Theorem \\ref{thm:unknownmu}, there are two differences. First, the signal strength parameter $\\theta$ becomes an important factor in characterizing the region. When $\\theta \\geq \\delta\/2$, the signals are weak and hard to identify, so $\\beta$ or $\\zeta$ must be small enough (so that the signals are dense) for the classification to be successful. On the other hand, when $\\theta < \\delta\/2$, the signals are so strong that they can afford to be relatively sparse.\nSecond, the QDA successful classification region is reduced by $\\frac{1 - \\delta}{2}$ for both parameters $\\beta$ and $\\zeta$ when the signals are weak, unlike when the signals are strong. The reduction of the successful classification region is a result of the estimation error for the mean vectors. 
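The mean-estimation and thresholding steps of Algorithm 2 can be sketched in a few lines of numpy. The toy data below (class 0 drawn from $N(0,I)$, class 1 from $N(\\mu, I)$, so that $\\Omega = I$) and all numerical values are hypothetical:

```python
import numpy as np

def qda_threshold(X, Y, Omega):
    """Steps 0-2 of Algorithm 2: class means, contrast vector d, and the
    hard-thresholded vectors mu^(t) and mu_d^(t)."""
    n, p = X.shape
    mu0_hat = X[Y == 0].mean(axis=0)
    mu1_hat = X[Y == 1].mean(axis=0)
    d = Omega @ mu1_hat - mu0_hat                # d = d1 - d0
    thr = 2 * np.log(p) / np.sqrt(n)
    t = thr if np.max(np.abs(d)) > thr else 0.0  # data-driven threshold
    d_t = (np.abs(d) > t).astype(float)          # selection indicator d^(t)
    return mu0_hat * d_t, d * d_t                # mu^(t), mu_d^(t)

# Hypothetical toy data with 5 strong mean-difference coordinates.
rng = np.random.default_rng(1)
p, n = 200, 80
mu = np.zeros(p)
mu[:5] = 2.5
Y = np.repeat([0, 1], n // 2)
X = rng.standard_normal((n, p)) + np.outer(Y, mu)
mu_t, mu_d_t = qda_threshold(X, Y, np.eye(p))
print("coordinates kept:", int((mu_d_t != 0).sum()))
```

The resulting $\\hat{\\mu}^{(t)}$ and $\\hat\\mu_d^{(t)}$ then enter the QDA score exactly as in steps 3--4.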
\n\n\n\n\n\n\n\\subsection{Unknown mean and unknown covariance}\\label{sec:unknownpre}\nIn this section, we discuss the case in which $\\mu_k$ are unknown, $\\Sigma_0 = I$, and $\\Sigma_1$ is also unknown. The estimation of the precision matrix in the high-dimensional case is still restricted to the sparse case for now. Hence, here we still assume $\\Sigma = I$ and $\\Omega - I = (c - 1)I + \\eta W$, so that both of them are sparse. Under model (\\ref{M12}), we modify Algorithm 1 to adopt the current setting, as follows: \n\\begin{itemize}\n\\item[] {\\bf Algorithm 3}.\n\\item[0.] Calculate the estimated mean vector for the two classes as $\\hat{\\mu}_0 = \\frac{1}{n_0} \\sum_{i=1}^{n_0} X_i$ and $\\hat{\\mu}_1 = \\frac{1}{n_1} \\sum_{i=n_0+1}^{n} X_i$, where $n_1 = \\sum_{i =1}^n Y_i$ and $n_0 = n - n_1$. Estimate the precision matrix $\\Omega$ with some method and let $\\hat{\\Omega}$ denote the estimation.\n\\item[1.] Let $d_0=\\hat{\\mu}_0$, $d_1= \\hat{\\Omega}\\hat{\\mu}_1$ and $d=d_1-d_0=(d(1),\\dots,d(p))^\\top$.\n\\item[2.] Define $t = \\frac{2\\ln p}{\\sqrt{n}} \\cdot 1\\{\\max_{1\\leq j \\leq p} |d(j)| > 2\\ln p\/\\sqrt{n}\\}$ and let $d^{(t)}$ denote the indicator vector of feature selection, i.e. for $j=1,\\dots,p$,\n\\begin{equation}\nd^{(t)}(j)= \\left\\{\\begin{array}{ll}\n1, & |d(j)| > t, \\\\\n0, & |d(j)| < t. \n\\end{array}\n\\right.\n\\end{equation*}\nLet $\\hat{\\mu}^{(t)}$ and $\\hat\\mu_d^{(t)}$ be the hard-thresholded vectors $\\hat\\mu_0$ and $d$ respectively with feature selection, i.e. \n\\begin{equation*}\n\\hat{\\mu}^{(t)}=\\hat\\mu_0\\circ d^{(t)}, \\ \\ \\ \\hat\\mu_d^{(t)}=d\\circ d^{(t)}.\n\\end{equation*}\n\\item[3.] Define $C = (\\hat{\\mu}^{(t)})^\\top (I - \\hat{\\Omega})\\hat{\\mu}^{(t)} +\\frac{1}{n_1}Tr(\\hat{\\Omega} - I) + \\ln |\\hat{\\Omega}|$.\n\\item[4.] 
(Prediction step) For any fresh data vector $X$, calculate the QDA score\n$$ \nQ=X^\\top(I -\\hat{\\Omega}) X + 2(\\hat{\\mu}_d^{(t)})^\\top X + C\n$$\nand estimate the label of $X$ as $\\hat Y$ with $\\hat Y=I\\{Q>0\\}$. \n\\end{itemize}\n\nIn Step 0, the estimation of the sparse precision matrix can be performed via any suitable approach. This has been discussed in numerous publications in the literature, such as \\cite{CLIME, FJY, glasso, PCS}. Here, our goal is to develop the QDA approach with a feature-selection step, instead of designing a new precision-matrix-estimation approach. In the following theorem, we consider a general precision-matrix-estimation approach, and let $\\Delta_{\\hat{\\Omega}}$ denote the estimation error. We give the region in which $MR(QDA) \\rightarrow 0$ in terms of $\\Delta_{\\hat{\\Omega}}$. \n\n\n\n\\begin{theorem}\\label{thm:unknownomega}\nConsider model (\\ref{M12}) and the parameterization (\\ref{Param1}), (\\ref{Omega1})--(\\ref{con1}), (\\ref{eqn:gamma}), and (\\ref{Param3}). Assume $1 < \\beta < 2$. For the employed precision-matrix-estimation approach, let $\\Delta_{\\hat\\Omega} = \\|\\Omega - \\hat{\\Omega}\\|$ be the spectral norm of the error. \nSuppose $\\Delta_{\\hat\\Omega} \\rightarrow 0$ when $p \\rightarrow \\infty$, and that it satisfies \n\\[\n\\Delta_{\\hat{\\Omega}} (p\\eta + p \\tau^2 \\epsilon + \\sqrt{p}\\ln p)+ p \\Delta_{\\hat{\\Omega}}^2 +p\/n \\ll p\\xi^2 + \\eta^2 p^2 \\nu + \\tau^2 p \\epsilon. \n\\]\nThen, the QDA classification rule (\\ref{eqn:qda}) with feature selection in Algorithm 3 has a misclassification rate that converges to 0 as $p\\to\\infty$. \n\\end{theorem}\n\nThe proof of Theorem \\ref{thm:unknownomega} can be found in Appendix \\ref{sec:proof4}. Here, we develop the general rule for QDA classification with feature selection. For the weak-signal case $\\theta \\geq \\delta\/2$, the condition can be relaxed by replacing $p\/n$ with $p\\xi\/n$. 
However, the term $p\/n$ is not the dominating term when we apply the PCS method or the CLIME method in Algorithm 3. Therefore, we do not differentiate between the two cases. \n\nCompared to Theorems \\ref{thm:idealunequal} and \\ref{thm:unknownmu}, a major difference here is that the condition is an inequality containing both the precision matrix parameters and the mean vector parameters. In the following two corollaries, we can see that the condition $\\zeta < \\alpha + \\delta\/2 - 2\\theta$ indicates an interaction among the precision matrix weakness parameter $\\alpha$, the mean vector sparsity and weakness parameters $\\zeta$ and $\\theta$, and the sample size parameter $\\delta$. In this situation, the dominating error term comes from $X^\\top(\\hat{\\Omega} - I)X$, which contains both $\\Omega$ and $\\mu$ (in $X$). \n \nWe apply the PCS approach in \\cite{PCS} and the CLIME approach in \\cite{CLIME} as the precision-matrix-estimation approach. The results can be found in the following corollaries. \n\\begin{cor}\\label{cor:PCS}\nAssume the conditions of Theorem \\ref{thm:unknownomega} hold, $\\alpha < \\delta\/2$, and PCS is employed for precision-matrix estimation. Consider the following conditions: \n\\begin{itemize}\n\\item [(i)] $\\beta < 1 - \\alpha + \\delta\/2$ and $f_p=C_p$;\n\\item [(ii)] $\\zeta < \\alpha + \\delta\/2 - 2\\theta$ and $g_p=C_p$.\n\\end{itemize} \nIf one of the above conditions is satisfied, then the QDA classification rule (\\ref{eqn:qda}) with feature selection in Algorithm 3 has a misclassification rate that converges to 0 as $p\\to\\infty$. \n\\end{cor}\n\n\n\\begin{cor}\\label{cor:CLIME}\nAssume the conditions of Theorem \\ref{thm:unknownomega} hold and CLIME is employed for precision-matrix estimation. 
Assume $\\alpha < \\delta\/2$, and consider the following conditions: \n\\begin{itemize}\n\\item [(i)] $\\beta < 1 - \\alpha + \\delta\/2$ and $f_p=C_p$;\n\\item [(ii)] $\\zeta < \\alpha + \\delta\/2 - 2\\theta$ and $g_p=C_p$.\n\\end{itemize} \nIf one of the above conditions is satisfied, then the QDA classification rule (\\ref{eqn:qda}) with feature selection in Algorithm 3 has a misclassification rate that converges to 0 as $p\\to\\infty$. \n\\end{cor}\n\n\n\n\n\n\n\\section{Proofs}\\label{sec:proof}\nIn this section, we present the proofs of Theorems \\ref{thm:idealunequal}--\\ref{thm:unknownmu}. Theorem \\ref{thm:equal} is a special case of Theorem \\ref{thm:idealunequal}, and its proof can be found in the Appendix. \n\n\nWe first write out the misclassification rate. Given $\\mu$ and $W$ (and thus $\\Omega$), the two types of misclassification rates are defined as \n\\begin{equation}\\label{eqn:p}\np_{0,\\mu,W} = P_{Y = 0}(Q>0|\\mu, W), \\quad \np_{1, \\mu, W} = P_{Y = 1}(Q < 0|\\mu, W).\n\\end{equation}\nThen, the population misclassification rate ($MR$) is \n\\begin{equation}\\label{eqn:mr}\nMR \\ = \\ \\frac{1}{2}E_{\\mu, W}[p_{0,\\mu,W}]+ \\frac{1}{2} E_{\\mu,W}[p_{1,\\mu,W}].\n\\end{equation}\nWe wish to find a boundary that divides the parameter space into two regions. In one region, $MR$ converges to 0. In the other region, $MR$ is always larger than some positive constant. \n\n\nThis section is structured as follows. In Section \\ref{sec:lemmas}, we present some mathematical results as preparation, with the proofs given in the Appendix. Based on these preparations, Theorem \\ref{thm:idealunequal} is then proved in Section \\ref{sec:thmideal}. In Section \\ref{sec:thmmuposs}, we proceed to discuss the proof of Theorem \\ref{thm:unknownmu}, where $\\mu$ is unknown and we have to analyze the difference term $Q(X, \\hat{\\mu}, W) - Q(X, \\mu, W)$. 
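To make (\\ref{eqn:p})--(\\ref{eqn:mr}) concrete, the two conditional error rates can be approximated by simulation for one fixed draw of $(\\mu, W)$. The sketch below uses the oracle score $Q(X,\\mu,W)$ from the proof below, with class 0 drawn from $N(-\\mu, I)$ and class 1 from $N(\\mu, \\Omega^{-1})$ (the pair for which $Q$ is the log-likelihood ratio); the construction of $W$ and all numerical values are hypothetical simplifications:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n_mc = 100, 4000
eta, nu, xi, tau, eps = 0.05, 0.1, 0.4, 0.6, 0.1  # hypothetical values

# One fixed draw of (mu, W), as in the conditional rates p_{0,mu,W}, p_{1,mu,W}.
upper = rng.binomial(1, nu, (p, p)) * rng.choice([-1.0, 1.0], (p, p))
W = np.triu(upper, 1)
W = W + W.T
mu = tau * rng.binomial(1, eps, p).astype(float)  # sparse mean vector
Omega = (1.0 + xi) * np.eye(p) + eta * W
L1 = np.linalg.cholesky(np.linalg.inv(Omega))     # to sample N(mu, Omega^{-1})
_, logdet = np.linalg.slogdet(Omega)
A = Omega - np.eye(p)
b = 2.0 * (Omega + np.eye(p)) @ mu
const = -mu @ A @ mu + logdet

def q_score(x):
    # Q = -x'(Omega-I)x + 2 mu'(Omega+I)x - mu'(Omega-I)mu + ln|Omega|
    return -x @ A @ x + b @ x + const

X0 = -mu + rng.standard_normal((n_mc, p))         # class 0: N(-mu, I)
X1 = mu + rng.standard_normal((n_mc, p)) @ L1.T   # class 1: N(mu, Omega^{-1})
p0 = np.mean([q_score(x) > 0 for x in X0])        # p_{0, mu, W}
p1 = np.mean([q_score(x) < 0 for x in X1])        # p_{1, mu, W}
mr = 0.5 * p0 + 0.5 * p1                          # conditional version of MR
print(f"p0 = {p0:.3f}, p1 = {p1:.3f}, MR = {mr:.3f}")
```

Averaging over repeated draws of $(\\mu, W)$ would approximate the unconditional rate (\\ref{eqn:mr}).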
\n\n\n\n\n\\subsection{Preparation}\\label{sec:lemmas}\nTo prove the theorems, we need the following propositions and lemmas for $\\mu$ and $W$. The propositions can be easily proved, and so we have omitted their proofs. The proofs of the lemmas can be found in the Appendix. \n\nFor a fixed $W$, let the eigenvalue decomposition of $W$ be \n \\[\n W=U\\Lambda U^\\top, \n\\] \n where $U^\\top U=U U^\\top=I$ and $\\Lambda=diag(\\lambda_1, \\lambda_2, \\dots, \\lambda_p)$. Here, $\\lambda_i = \\lambda_i(W)$ are the eigenvalues of $W$ such that $\\lambda_1 \\geq \\lambda_2 \\geq \\cdots \\geq \\lambda_p$. Let the spectral norm of $W$ be denoted by $\\|W\\|$. \n\nSince $\\Omega = cI + \\eta W$, $\\Omega=U(cI+\\eta\\Lambda)U^\\top$. Therefore, $\\Omega - I= \\xi I + \\eta W$, where the eigenvalues are defined as \n\\[\nm_i = \\xi+\\eta\\lambda_i, \\qquad 1 \\leq i \\leq p. \n\\]\nLet $\\tilde{\\mu}=U^\\top \\mu=(\\tilde{\\mu}_1, \\dots, \\tilde{\\mu}_p)^\\top$ and $\\tilde{X} = U^\\top X=(\\tilde{x}_1 \\dots, \\tilde{x}_p)^\\top$. Note that here we use the subscripts to denote the vector elements. \n\n\n\n\\begin{prop}\\label{prop1}\n$\\sum_{i=1}^p \\lambda_i = Tr(W) = 0$. \n\\end{prop}\n\\begin{prop}\\label{prop2}\n$\\sum_{i=1}^p \\lambda_i^2 = Tr(W^TW) \\sim 2Binomial(p(p-1)\/2, \\nu)$. 
\n\\end{prop}\n\n\n\\begin{lemma}\\label{lemma1}\nUnder models (\\ref{Omega1}) and (\\ref{Param}), when $p\\to\\infty$, with probability $1-o(1)$, \n\\begin{equation*}\n\\|W\\| \\leq b(p, \\beta) = \\left\\{\\begin{array}{ll}\n3\\sqrt{p\\nu} = 3p^{(1-\\beta)\/2}, & 0<\\beta < 1,\\\\\n2\\sqrt{\\frac{\\ln p}{\\ln\\ln p}}, & \\beta = 1,\\\\\n\\frac{2}{\\beta - 1}, & 1 < \\beta \\leq 2.\n\\end{array}\n\\right.\n\\end{equation*}\nLet $R_W$ be a function of $\\lambda_i$'s defined by $R_W=\\frac{\\frac{1}{p}\\sum_{i=1}^p |\\lambda_i |^3}{\\left(\\frac{1}{p}\\sum_{i=1}^p \\lambda_i^2\\right)^{3\/2}}$; \nthen, with probability $1 - o(1)$, \n\\begin{equation*}\nR_W \\leq r(p, \\beta) = \\left\\{\\begin{array}{ll}\n216, & 0<\\beta < 1,\\\\\n4\\sqrt{\\frac{\\ln p}{\\ln\\ln p}}, & \\beta=1, \\\\\n\\frac{4}{\\beta-1}p^{(\\beta-1)\/2}, & 1 < \\beta \\leq 2.\n\\end{array}\n\\right.\n\\end{equation*}\n\\end{lemma}\n\nNow, we consider $\\Omega = cI + \\eta W$. \nIn view of the result of $\\|W\\|$ and the fact that $m_i = \\xi + \\eta \\lambda_i$ under model (\\ref{Omega1}), when the condition (\\ref{con1}) holds, with probability $1 - o(1)$, \n\\begin{equation}\\label{eqn:smallm}\n\\max_{1 \\leq i \\leq p} |m_i| = \\|\\xi I + \\eta W\\| = \\xi + \\eta \\|W\\| \\rightarrow 0. \n\\end{equation}\nAccording to basic matrix theory, \n\\begin{equation}\\label{eqn:summ}\n\\sum_{i=1}^p m_i^2 = Tr((\\Omega - I)^{\\top}(\\Omega - I)) = p\\xi^2+\\eta^2 \\overset p{\\underset{i=1}\\sum} \\lambda_i^2, \n\\end{equation}\nwhere $\\sum_{i = 1}^p \\lambda_i^2 \\sim 2 Binomial(p(p-1)\/2, \\nu)$, according to Proposition \\ref{prop2}. \n\nFinally, we consider the signals in the mean part. 
For $\\mu$, we have \n\\begin{equation}\\label{eqn:normmu}\n\\mu^\\top \\mu = \\overset p{\\underset{i=1}\\sum} \\mu_i^2 \\ = \\ \\tau^2 \\mathcal B_p, \\quad \\mbox{where }\\mathcal B_p \\sim Binomial(p,\\epsilon).\n\\end{equation}\n\nAs we frequently use these terms in the following analysis, we summarize their magnitudes in the following lemma. \n\\begin{lemma}\\label{lemmaeb}\nConsider model (\\ref{M12}) with the parameterizations (\\ref{Param1}), (\\ref{Omega1})--(\\ref{con1}), and (\\ref{eqn:gamma}). With probability $1 - o(1)$, we have\n\\begin{equation}\\begin{array}{lll}\n \\overset p{\\underset{i=1}\\sum}\\lambda_i^2 = p^2 \\nu(1 + o(1)), \\\\\n \\overset p{\\underset{i=1}\\sum} m_i^2 = p\\xi^2+ \\eta^2 p^2 \\nu(1 + o(1)),\\quad \\max |m_i| = o(1), \\\\\n \\overset p{\\underset{i=1}\\sum} \\mu_i^2 \\ = \\ p\\tau^2 \\epsilon(1 + o(1)). \n \\end{array}\n\\end{equation}\n\\end{lemma}\nIn addition, there is a lemma about $\\tilde{\\mu} = U^{\\top} \\mu$, which is proved in Appendix \\ref{sec:lemma2}. \n\\begin{lemma}\\label{lemma2} \nAs $p\\to\\infty$, under models (\\ref{Param1}), (\\ref{Param2}), (\\ref{Param}), and (\\ref{con1}), \n\\begin{equation*}\nR_{\\mu} := \\frac{1}{\\sqrt p} \\cdot \\frac{\\frac{1}{p}\\sum_{i=1}^p |\\tilde{\\mu}_i|^3}{\\left(\\frac{1}{p}\\sum_{i=1}^p |\\tilde{\\mu}_i|^2\\right)^{3\/2}} \\ \\overset{\\mathcal P}\\longrightarrow \\ 0.\n\\end{equation*}\n\\end{lemma}\n\nFinally, we state a lemma about $\\Phi(x_p)$, where $\\Phi(\\cdot)$ is the cumulative distribution function (c.d.f.) of the standard normal distribution. 
\n\\begin{lemma}\\label{lemmaphi}\nConsider a random variable $x_p$ and $E[\\Phi(x_p)]$:\n\\begin{itemize}\n\\item If $x_p \\rightarrow -\\infty$ in probability as $p \\rightarrow \\infty$, i.e., $P(x_p > -M) \\rightarrow 0$ for any constant $M > 0$ when $p \\rightarrow \\infty$, then $E[\\Phi(x_p)] \\rightarrow 0$ when $p \\rightarrow \\infty$.\n\\item If $x_p \\rightarrow \\infty$ in probability as $p \\rightarrow \\infty$, i.e., $P(x_p < M) \\rightarrow 0$ for any constant $M > 0$ when $p \\rightarrow \\infty$, then $E[\\Phi(x_p)] \\rightarrow 1$ when $p \\rightarrow \\infty$. \n\\end{itemize}\n\\end{lemma}\n\n\\subsection{Proof of Theorem \\ref{thm:idealunequal}}\\label{sec:thmideal}\nThe proof has two parts. For the region of possibility, we want to show that $MR(QDA)$ converges to 0 in this region. For the region of impossibility, we want to show that the misclassification rate of any classifier $L$ converges to $1\/2$ if the parameters fall in this region. \n\nWe first work on the region of possibility, for which we need to analyze $MR(QDA)$. Recall that QDA estimates the label by \n\\[\nQ = Q(X, \\mu, W) = - X^\\top(\\Omega - I) X + 2 \\mu^{\\top} (\\Omega + I) X - \\mu^\\top (\\Omega - I)\\mu + \\ln|\\Omega|. \n\\]\nTherefore, we should analyze $Q$.\n\nAccording to (\\ref{eqn:mr}), we first find the misclassification rate when $\\mu$ and $W$ are given, and then take the expectation. This means that only $X$ is random, and the random part of $Q$ is \n\\begin{equation}\\label{eqn:S}\nS = - X^\\top(\\Omega - I) X + 2 \\mu^{\\top} (\\Omega + I) X. \n\\end{equation}\n$S$ can be viewed as a sum of squared normal random variables. In fact, we have the following lemma on the asymptotic distribution of $S$. \n\\begin{lemma}\\label{lemma:S}\nGiven $Y = i$, $i = 0, 1$, let $\\mu_S = E[S|Y = i]$ and $\\sigma^2_S = \\mathrm{Var}(S|Y = i)$. 
Define $Z = \\frac{S - \\mu_S}{\\sigma_S}$; then \n\\[\n\\sup_{x}|F_{Z|Y = i}(x) - \\Phi(x)| \\leq C(\\frac{1 + R_W}{\\sqrt{p}} + R_{\\mu}), \n\\]\nwhere $F_{Z|Y = i}(x) = P(Z \\leq x|Y = i)$ and $\\Phi(\\cdot)$ is the c.d.f. of the standard normal distribution. \n\\end{lemma}\nHere, $\\mu_S$ may differ when $Y = 0$ and $Y = 1$. Since $Y$ is given here and afterwards, we use $\\mu_S$ to denote this conditional expectation without confusion. For $\\sigma^2_S$, the situation is the same. \n\nAccording to Lemma \\ref{lemma1} and Lemma \\ref{lemma2}, $C(\\frac{1 + R_W}{\\sqrt{p}} + R_{\\mu}) = o(1)$ with probability $1 - o(1)$. Therefore, we only need to consider the event that $C(\\frac{1 + R_W}{\\sqrt{p}} + R_{\\mu}) = o(1)$, on which $Z|Y = i$ converges in distribution to the standard normal distribution. \nTherefore, we have that \n\\begin{equation}\\label{p0thm2}\n\\begin{array}{lll}\np_{0, \\mu, W} & = & P_{Y = 0}(Q > 0 |\\mu, W)\\\\\n& = & P_{Y = 0}(S - \\mu^\\top (\\Omega - I)\\mu + \\ln|\\Omega| > 0 |\\mu, W)\\\\\n& = & P_{Y = 0}(\\sigma_S Z + \\mu_S - \\mu^\\top (\\Omega - I)\\mu + \\ln|\\Omega| > 0 |\\mu, W)\\\\\n& = & 1 - \\Phi(\\frac{\\mu^\\top (\\Omega - I)\\mu - \\ln|\\Omega| - \\mu_S}{\\sigma_S}) + o(1)\\\\\n& = & \\Phi(-\\frac{\\mu^\\top (\\Omega - I)\\mu - \\ln|\\Omega| - \\mu_S}{\\sigma_S}) + o(1).\n\\end{array}\n\\end{equation}\nSimilarly, for $p_{1,\\mu, W}$, we have that \n\\begin{equation}\\label{p1thm2}\np_{1, \\mu, W} = P_{Y = 1}(Q < 0 |\\mu, W)= \\Phi(\\frac{\\mu^\\top (\\Omega - I)\\mu - \\ln|\\Omega| - \\mu_S}{\\sigma_S}) + o(1).\n\\end{equation}\nHence, we need to analyze $p_{0,\\mu,W}$ and $p_{1, \\mu, W}$, for which we examine the term \n\\begin{equation}\\label{eqn:T}\nT = \\frac{\\mu^\\top (\\Omega - I)\\mu - \\ln|\\Omega| - \\mu_S}{\\sigma_S}.\n\\end{equation}\n\nWe evaluate $T$ in the two cases. \n\\begin{itemize}\n\\item Case 0: $Y = 0$. We first compute $\\mu_S$ and $\\sigma_S$. 
According to Property 1 in Appendix \\ref{sec:gchi}, \n\\begin{equation}\\label{zmeanvar1}\n\\mu_S = Tr(I - \\Omega) - \\mu^\\top(3\\Omega + I) \\mu, \\, \\sigma^2_S = 2Tr((\\Omega - I)^2) + 16 \\mu^\\top \\Omega^2 \\mu.\n\\end{equation}\nSubstituting (\\ref{zmeanvar1}) into the formula for $T$ in (\\ref{eqn:T}) gives \n\\[\nT_0 := T|Y = 0 = \\frac{4\\mu^\\top \\Omega \\mu - \\ln|\\Omega| + Tr(\\Omega - I) }{\\sqrt{2Tr((\\Omega - I)^2) + 16 \\mu^\\top \\Omega^2 \\mu}}.\n\\]\nRecall that $\\Omega = U((1+\\xi)I + \\eta \\Lambda) U^\\top$ with eigenvalues $1 + m_i = 1 + \\xi + \\eta \\lambda_i$, and $\\tilde{\\mu} = U^\\top \\mu$. Substituting these terms into the above equation yields \n\\begin{equation}\\label{c0form}\nT_0 = \\frac{-\\overset p{\\underset {i=1}\\sum}\\ln\\bigl[\\frac{1+m_i}{e^{m_i+4(1+m_i)\\tilde{\\mu}_i^2}}\\bigr]}{\\sqrt{2\\sum_{i=1}^p \\left[m_i^2 +8(1+m_i)^2\\tilde{\\mu}_i^2 \\right] }}.\n\\end{equation}\nFrom Lemma \\ref{lemmaeb}, $\\max_{1 \\leq i \\leq p}|m_i| = o(1)$ with high probability. Hence, \n\\begin{equation}\\label{conbound}\n\\ln\\biggl[\\frac{1+m_i}{e^{m_i+4(1+m_i)\\tilde{\\mu}_i^2}}\\biggr] = -\\frac{1}{2}m_i^2(1 + o(1)) - 4(1+m_i)\\tilde{\\mu}_i^2.\n\\end{equation}\nPlugging (\\ref{conbound}) into (\\ref{c0form}), we have \n\\begin{equation}\\label{p1constant}\nT_0 = \\frac{1}{2\\sqrt 2} \\sqrt{\\sum\\nolimits_{i = 1}^p (m_i^2+8\\tilde{\\mu}_i^2)}(1+o(1)).\n\\end{equation}\n\n\n\n\\item Case 1: $Y = 1$. Similarly, we first derive the mean and variance of $S$, and then evaluate $T|Y = 1$. 
According to Property 2 in Appendix \\ref{sec:gchi},\n\\begin{equation}\\label{zmeanvar2}\n\\mu_S = Tr(\\Omega^{-1} - I) + \\mu^\\top(\\Omega + 3I) \\mu, \\, \\sigma^2_S = 2Tr((\\Omega^{-1} - I)^2) + 16 \\mu^\\top \\Omega^{-1} \\mu.\n\\end{equation}\nSubstituting (\\ref{zmeanvar2}) into the formula for $T$ in (\\ref{eqn:T}) gives \n\\[\nT_1 := T|Y = 1 = \\frac{-4\\mu^\\top \\mu - \\ln|\\Omega| + Tr(I - \\Omega^{-1}) }{\\sqrt{2Tr((\\Omega^{-1} - I)^2) + 16 \\mu^\\top \\Omega^{-1} \\mu}}.\n\\]\nSubstituting the terms $m_i$ and $\\tilde{\\mu}_i$ and combining the results with Lemma \\ref{lemmaeb}, we similarly have \n\\begin{equation}\\label{p2cons}\nT_{1} = \\frac{-\\overset p{\\underset {i=1}\\sum}\\ln\\left[\\frac{1+m_i}{e^{m_i\/(1+m_i)-4\\tilde{\\mu}_{i}^2}}\\right ] }{\\left[2\\overset p{\\underset{i=1}\\sum} \\left[\\frac{m_i^2}{(1+m_i)^2} + \\frac{8}{1+m_i}\\tilde{\\mu}_{i}^2 \\right] \\right]^{1\/2}} = -\\frac{1}{2\\sqrt{2}} \\sqrt{\\sum\\nolimits_{i = 1}^p (m_i^2+8\\tilde{\\mu}_i^2)}(1+o(1)).\n\\end{equation}\n\\end{itemize}\nSubstituting (\\ref{p1constant}) and (\\ref{p2cons}) into (\\ref{p0thm2}) and (\\ref{p1thm2}) via (\\ref{eqn:T}), we obtain \n\\begin{equation}\\label{eqn:thm2p01}\n\\begin{array}{lll}\np_{0,\\mu,W} = \\Phi(-T_0) + o(1) & = & \\Phi(-\\frac{1}{2\\sqrt 2} \\sqrt{\\sum\\nolimits_{i = 1}^p (m_i^2+8\\tilde{\\mu}_i^2)}(1+o(1))) + o(1), \\\\\np_{1,\\mu,W} = \\Phi(T_1) + o(1) & = & \\Phi(-\\frac{1}{2\\sqrt 2} \\sqrt{\\sum\\nolimits_{i = 1}^p (m_i^2+8\\tilde{\\mu}_i^2)}(1+o(1))) + o(1).\\\\\n\\end{array}\n\\end{equation}\nTherefore, the misclassification rate satisfies \n\\begin{equation}\\label{eqn:mr2}\nMR = \\frac{1}{2} E[p_{0,\\mu,W}] + \\frac{1}{2} E[p_{1,\\mu,W} ] = E[\\Phi(-\\frac{1}{2\\sqrt 2} \\sqrt{\\sum\\nolimits_{i = 1}^p (m_i^2+8\\tilde{\\mu}_i^2)}(1+o(1)))] + o(1). 
\n\\end{equation}\n\nAccording to Lemma \\ref{lemmaeb}, with probability $1 - o(1)$, $\\sum_{i=1}^p m_i^2 = p\\xi^2 + \\eta^2p^2\\nu(1 + o(1))$ and $\\sum_{i=1}^p \\tilde{\\mu}_i^2 = \\sum_{i=1}^p {\\mu}_i^2 = p\\tau^2\\epsilon(1 + o(1))$. Therefore, with probability $1 - o(1)$, \n\\[\n-\\frac{1}{2\\sqrt 2} \\sqrt{\\sum\\nolimits_{i = 1}^p (m_i^2+8\\tilde{\\mu}_i^2)}(1+o(1)) = \\bigl[-\\frac{\\sqrt{p\\xi^2+\\eta^2 p^2 \\nu +8\\tau^2p \\epsilon}}{2\\sqrt 2} \\bigr](1+o(1)).\n \\]\n\nNow we consider the term \n\\[\np\\xi^2+\\eta^2 p^2 \\nu +8\\tau^2p\\epsilon = p^{1 - 2\\gamma} + f_p p^{2-2\\alpha - \\beta} + g_p p^{1 - \\zeta - 2\\theta}. \n\\]\nRecall that $\\gamma > 1\/2$ so that $p^{1 - 2\\gamma} \\rightarrow 0$. Now recall the region of possibility: \n\\begin{itemize}\n\\item [(1)] $\\beta<2-2\\alpha$, $f_p=C_p$;\n\\item [(2)] $\\beta=2-2\\alpha$, $f_p=L_p$;\n\\item [(3)] $0<\\zeta<1-2\\theta$, $g_p=C_p$;\n\\item [(4)] $\\zeta=1-2\\theta$, $g_p=L_p$.\n\\end{itemize}\nWhen one of the above conditions holds, clearly $p\\xi^2+\\eta^2 p^2 \\nu +8\\tau^2p\\epsilon \\rightarrow \\infty$ when $p \\rightarrow \\infty$. According to (\\ref{eqn:mr2}) and Lemma \\ref{lemmaphi}, $MR \\rightarrow 0$. \n\n\\vspace{1em}\nNext we consider the impossibility. To show that the classification error goes to $1\/2$ for any classifier $L$ in this region, we introduce a new lemma. Let $f$ be the density function of $X \\sim N(-\\mu, I)$ and $g$ be the density function of $X \\sim N(\\mu, \\Omega^{-1})$. The Hellinger affinity between $f$ and $g$ is defined as $H(f, g) = \\int \\sqrt{f(x) g(x)} dx$. We have the following lemma. \n\\begin{lemma}\\label{lemma:H} \nFor any classifier $L = L(X|\\mu, \\Omega)$, \n\\[\n|MR(L) - 1\/2 | \\leq C(1 - H(f, g))^{1\/2}. \n\\]\n\\end{lemma}\n\nThis lemma is well known, and so we omit the proof. According to this lemma, to prove the impossibility, we only need to prove that $H(f, g) = 1 + o(1)$. \n\nNext, we check $H(f, g)$. 
According to the definition of the multivariate normal distribution,\n\\begin{equation*}\n\\begin{array}{lll}\nH(f, g) & = & \\int \\frac{|\\Omega|^{1\/4}}{(\\sqrt{2\\pi})^p} \\exp\\{-\\frac{1}{4}\\bigl[(x + \\mu)^\\top (x + \\mu) + (x - \\mu)^\\top \\Omega (x - \\mu)\\bigr]\\}dx\\\\\n& = & \\frac{|\\Omega|^{1\/4}}{(\\sqrt{2\\pi})^p} \\exp\\{-\\frac{1}{4}\\mu^{\\top} (I + \\Omega - (I - \\Omega) (I + \\Omega)^{-1} (I - \\Omega))\\mu\\} \\\\\n&& \\times \\int \\exp\\{-\\frac{1}{2}\\bigl[(x - (I + \\Omega)^{-1}(I - \\Omega) \\mu)^\\top \\frac{I + \\Omega}{2} (x - (I + \\Omega)^{-1}(I - \\Omega) \\mu) \\bigr]\\} dx\\\\\n& = & \\frac{|\\Omega|^{1\/4}}{|(\\Omega + I)\/2|^{1\/2}} \\exp\\{-\\frac{1}{4}\\mu^{\\top} (I + \\Omega - (I - \\Omega) (I + \\Omega)^{-1} (I - \\Omega))\\mu\\} \\\\\n& = & \\exp\\{-\\frac{1}{4} \\bigl[ \\sum_{i = 1}^p \\frac{4(1 + m_i)}{2 + m_i} \\tilde{\\mu}_i^2 + \\sum_{i = 1}^p \\ln \\frac{(1 + m_i\/2)^2}{1 + m_i} \\bigr]\\}. \n\\end{array}\n\\end{equation*}\nTo show that $H(f, g) = 1 + o(1)$, we only need to prove $R = \\sum_{i = 1}^p \\frac{4(1 + m_i)}{2 + m_i} \\tilde{\\mu}_i^2 + \\sum_{i = 1}^p \\ln \\frac{(1 + m_i\/2)^2}{1 + m_i} = o(1)$. Now, we have\n\\begin{equation*}\n\\begin{array}{lll}\nR & = & 2\\sum_{i = 1}^p \\tilde{\\mu}_i^2(1 + o(1)) + \\sum_{i = 1}^p \\ln (1 + \\frac{m_i^2}{4(1 + m_i)})\\\\\n& = & 2 \\|\\mu\\|^2(1+o(1)) + \\sum_{i = 1}^p \\frac{m_i^2}{4(1 + m_i)} (1+o(1))\\\\\n& = & 2 \\|\\mu\\|^2(1+o(1)) + \\frac{1}{4} \\|\\Omega - I\\|^2_F(1+o(1)). \n\\end{array}\n\\end{equation*}\nTherefore, when $\\|\\mu\\|^2 = o(1)$ and $\\|\\Omega - I\\|^2_F = o(1)$, the misclassification error from any classifier will be close to 1\/2. \n\nAccording to Lemma \\ref{lemmaeb}, with probability $1 - o(1)$, $\\|\\mu\\|^2 = p\\tau^2\\epsilon (1 + o(1))$ and $\\|\\Omega - I\\|^2_F = \\sum_{i = 1}^p m_i^2 = p\\xi^2 + \\eta^2p^2 \\nu (1 + o(1))$. In the region of impossibility, $p\\tau^2\\epsilon + p\\xi^2 + \\eta^2p^2 \\nu \\rightarrow 0$, and so $H(f, g) = 1 + o(1)$. 
As a result, $MR(L) \\rightarrow 1\/2$ for any classifier $L$. \n\nCombining the results of the regions of possibility and impossibility, Theorem \\ref{thm:idealunequal} is proved. \\qed\n\n\n\n\n\n\n\n\n\n\\subsection{Proof of Theorem \\ref{thm:unknownmu}: the weak signal region}\\label{sec:thmmuposs}\n\nIn this section, we focus on the algorithm for QDA with feature selection, when $\\mu$ is unknown. To estimate $\\mu$, we use $\\mu^* = -\\hat{\\mu}_0$ for the quadratic part and $d$ for the linear part:\n\\begin{equation}\\label{eqn:d}\nd \\ = \\ \\Omega \\hat{\\mu}_1 - \\hat{\\mu}_0 \\ \\sim \\ N\\bigl((I + \\Omega)\\mu, \\ \\frac{1}{n_0}I+\\frac{1}{n_1}\\Omega\\bigr).\n\\end{equation} \nFor a threshold $t$, we let $\\hat{d}_{j} = d_j \\cdot I(|d_j| \\geq t)$. When $\\max_{1 \\leq j \\leq p} |d_j| \\leq 2{\\ln p}\/\\sqrt{n}$, we take $t = 0$; otherwise we take $t = 2\\sqrt{\\ln p}\/\\sqrt{n}$. \nOur discussion therefore covers both the case $t = 0$ and the case $t \\neq 0$. \n\nIn Appendix \\ref{sec:fs}, it is shown that $t = 0$ happens with probability $1 - o(1)$ in the weak signal region, and $t \\neq 0$ happens with probability $1 - o(1)$ in the strong signal region. For the latter case, the signals can be exactly recovered with probability $1 - o(1)$. \n\n\n\n\n\n\\subsubsection{The region of successful classification}\\label{sec:weaksig}\nConsider the event $\\{t = 0\\}$. It happens with probability $1 - o(1)$, so we focus on this event only. 
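As a quick numerical illustration of the thresholding rule above, the following sketch builds $d$ from the two sample means and applies the data-driven choice of $t$. All dimensions and signal parameters here are hypothetical, and we take $\Omega = I$ for simplicity; this is not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n0, n1 = 400, 200, 200          # hypothetical dimensions
n = n0 + n1
eps, tau = 0.05, 1.0               # sparsity and signal strength (illustrative)

# Sparse mean vector; for simplicity take Omega = I
mu = tau * (rng.random(p) < eps).astype(float)
Omega = np.eye(p)

# Class 0: X ~ N(-mu, I); class 1: X ~ N(mu, Omega^{-1}) = N(mu, I) here
X0 = -mu + rng.standard_normal((n0, p))
X1 = mu + rng.standard_normal((n1, p))
mu0_hat, mu1_hat = X0.mean(axis=0), X1.mean(axis=0)

# d = Omega mu1_hat - mu0_hat, as in the definition of d above
d = Omega @ mu1_hat - mu0_hat

# Data-driven threshold: t = 0 unless some |d_j| exceeds 2 ln(p)/sqrt(n)
if np.max(np.abs(d)) <= 2 * np.log(p) / np.sqrt(n):
    t = 0.0
else:
    t = 2 * np.sqrt(np.log(p)) / np.sqrt(n)

d_thresh = d * (np.abs(d) >= t)    # hard thresholding of the linear term
```

With $t = 0$ the thresholding is vacuous (every entry of $d$ is kept), matching the weak-signal case analyzed next.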
\n\nWhen $t = 0$, the estimated label is $I\\{Q > 0\\}$ where \n\\[\nQ = Q(X, \\hat{\\mu}, W) = X^\\top(I -{\\Omega}) X + 2d^\\top X + \n \\hat{\\mu}_0^\\top (I - {\\Omega}) \\hat{\\mu}_0 + \\ln |{\\Omega}|.\n\\]\nAccording to the definition of $Z$, $\\mu_S$ and $\\sigma_S$ in Lemma \\ref{lemma:S}, we rewrite it as \n\\begin{equation}\\label{eqn:Qdecomp}\nQ(X, \\hat{\\mu}, W) =\n \\sigma_S Z + \\mu_S - \\mu^\\top (\\Omega - I)\\mu + \\ln|\\Omega| + \\Delta Q, \n\\end{equation}\nwhere \n\\begin{equation}\\label{Decomp1}\n\\Delta Q = 2(d^\\top -\\mu^\\top(I+\\Omega))X+\\left[\\hat{\\mu}_0^\\top (I - {\\Omega}) \\hat{\\mu}_0 -\\mu^\\top(I-\\Omega)\\mu\\right]. \n\\end{equation} \n\nAccording to Lemma \\ref{lemma:S}, with probability $1 - o(1)$, $Z$ uniformly converges to the standard normal distribution. Therefore, with the same definition of $T_i$ in (\\ref{eqn:T}), (\\ref{c0form}), and (\\ref{p2cons}), we have \n\\[\np_{i, \\mu, W} = \\Phi((-1)^i \\cdot (-T_i + \\frac{\\Delta Q}{\\sigma_S})) + o(1), \\quad i = 0, 1.\n\\]\nNote that the QDA successful region in Theorem \\ref{thm:unknownmu} is a subset of that in Theorem \\ref{thm:idealunequal}. According to Section \\ref{sec:thmideal}, $\\sigma_S = \\sqrt{p\\xi^2 + \\eta^2 p^2 \\nu + 8 \\tau^2 p \\epsilon}$, and $-T_0= T_1 = -C\\sqrt{p\\xi^2 + \\eta^2 p^2 \\nu + 8 \\tau^2 p \\epsilon}$. Further, we will show later that \n\\begin{equation}\\label{eqn:deltaq} \n\\Delta Q = o({p\\xi^2 + \\eta^2 p^2 \\nu + 8 \\tau^2 p \\epsilon}). 
\n\\end{equation}\nTherefore, noting that $p\\xi^2 + \\eta^2 p^2 \\nu + 8 \\tau^2 p \\epsilon \\rightarrow \\infty$ in the successful region, we have that \n\\[\np_{i, \\mu, W} = \\Phi( -C\\sqrt{p\\xi^2 + \\eta^2 p^2 \\nu + 8 \\tau^2 p \\epsilon}(1 + o(1))) \\rightarrow 0, \\qquad i = 0, 1.\n\\]\nThe misclassification rate is then \n\\begin{equation}\\label{eqn:mr3}\nMR(QDA) = \\frac{1}{2} E_{\\mu, W} p_{0, \\mu, W} + \\frac{1}{2} E_{\\mu, W} p_{1, \\mu, W} \\rightarrow 0.\n\\end{equation}\n\nNow, we only need to show (\\ref{eqn:deltaq}) holds in the successful region. We first introduce a lemma about $\\Delta Q$, whose proof can be found in the Appendix. \n\\begin{lemma}\\label{lemma:DeltaQ}\nUnder the model assumptions and the definition of $\\Delta Q$ in (\\ref{Decomp1}), we have\n\\begin{equation}\\label{eqn:lemmaDeltaQ}\n \\Delta Q \\lesssim \\sqrt{p^2 \\eta^2 \\nu \\ln p}\/n + \\sqrt{p\/n \\ln\\ln p}.\n\\end{equation}\n\\end{lemma}\n\nWe compare (\\ref{eqn:lemmaDeltaQ}) with (\\ref{eqn:deltaq}) in the successful region of Theorem \\ref{thm:unknownmu}, which is \n\\begin{equation}\\label{eqn:weakposs}\n\\left\\{\\begin{array}{ll}\n\\frac{1-\\delta}{2} < \\max \\{2 - 2\\alpha - \\beta, 1 - 2\\theta - \\zeta\\}, \\\\\n\\frac{1-\\delta}{2} = 2 - 2\\alpha - \\beta > 1 - 2\\theta - \\zeta, f_p = L_p, \\\\\n\\frac{1-\\delta}{2} = 1 - 2\\theta - \\zeta > 2 - 2\\alpha - \\beta, g_p = L_p. \\\\\n\\end{array}\n\\right.\n\\end{equation}\nNote that $p\\xi^2 + p^2 \\eta^2 \\nu + 8 p\\tau^2 \\epsilon = p^{1 - 2\\gamma} + f_p^2 p^{2 - 2\\alpha - \\beta} + g_p^2 p^{1 - 2\\theta - \\zeta} \\rightarrow \\infty$ in this region. \n\nThe first term satisfies $\\sqrt{p^2 \\eta^2 \\nu \\ln p}\/n \\ll \\sqrt{p^2 \\eta^2 \\nu}$. When $p^2 \\eta^2 \\nu \\rightarrow 0$, this term converges to 0, which is of lower order than $p\\xi^2 + p^2 \\eta^2 \\nu + 8 p\\tau^2 \\epsilon$. When $p^2 \\eta^2 \\nu \\rightarrow \\infty$, $\\sqrt{p^2 \\eta^2 \\nu \\ln p}\/n \\ll p^2 \\eta^2 \\nu$. 
Hence, it is always $o(p\\xi^2 + p^2 \\eta^2 \\nu + 8 p\\tau^2 \\epsilon )$ in this region. \nThe second term $\\sqrt{p\/n \\ln \\ln p} = p^{\\frac{1-\\delta}{2}} \\sqrt{\\ln \\ln p} \\ll L_p p^{\\max \\{2 - 2\\alpha - \\beta, 1 - 2\\theta - \\zeta\\}}$ according to the definition of $L_p$. \nCombining the analysis of these two terms, $\\Delta Q = o(p\\xi^2 + p^2 \\eta^2 \\nu + 8 p\\tau^2 \\epsilon)$. Hence, (\\ref{eqn:deltaq}) is proved.\n\nCombining the results, in the successful classification regions described in part (i) of Theorem \\ref{thm:unknownmu}, $MR(QDA) \\rightarrow 0$. \n\n\n\n\n\n\\subsubsection{The region of unsuccessful classification} \nAgain, we consider the event $\\{t = 0\\}$ only. \nAccording to the definition that $d = \\Omega \\hat{\\mu}_1 - \\hat{\\mu}_0$, we have \n\\begin{equation}\nQ = X^\\top (I - \\Omega) X + 2 X^{\\top} (\\Omega \\hat{\\mu}_1 - \\hat{\\mu}_0) + \\hat{\\mu}_0^\\top (I - \\Omega) \\hat{\\mu}_0 + \\ln |\\Omega|. \n\\end{equation}\nWe first discuss the case in which $X$ is given, and then account for the randomness of $X$. When $X$ is given, we have the following lemma, which is proved in the Appendix. \n\\begin{lemma}\\label{lemma:weakfail}\nWith probability at least $\\Phi(-1\/2)\/4$, we have \n\\begin{eqnarray}\\label{eqn:qimposs}\nQ \\geq S + \\sqrt{X^\\top \\Omega X \/n} + \\mu^{\\top}(I - \\Omega)\\mu+ \\ln|\\Omega|+\\frac{1}{n_0} Tr(I - \\Omega), \n\\end{eqnarray}\nwhere $S$ is defined in (\\ref{eqn:S}).\n\\end{lemma}\n\nNow we consider the randomness of $X$. We focus on the case that $X \\sim N(-\\mu, I)$; the case that $X \\sim N(\\mu, \\Omega^{-1})$ is very similar. \nAccording to Lemma \\ref{lemma:S}, $\\frac{S - E[S]}{\\sigma_S}$ converges to the standard normal distribution uniformly. Therefore, $P(S \\geq E[S]) = 1\/2 + o(1)$. According to (\\ref{zmeanvar1}), $E[S] = Tr(I - \\Omega) - \\mu^\\top(3\\Omega + I) \\mu$. 
Hence, we have that \n\\begin{equation}\\label{eqn:x1}\nP(S + \\mu^\\top (I - \\Omega)\\mu \\geq -4\\mu^\\top \\Omega \\mu+ Tr(I - \\Omega)) = 1\/2 + o(1). \n\\end{equation}\nThe other term involving $X$ is $\\sqrt{X^\\top \\Omega X\/n}$. According to the properties of the non-central chi-square distribution, $E[X^\\top \\Omega X\/n] = \\mu^\\top \\Omega \\mu\/n + Tr(\\Omega)\/n$ and $\\mathrm{Var}(X^\\top \\Omega X\/n) = 4\\mu^\\top \\Omega^2 \\mu\/n^2 + 2Tr(\\Omega^2)\/n^2 = p\/n^2 (1 + o(1))$. \nBy Chebyshev's inequality, \n\\begin{equation}\\label{eqn:x2}\nP(X^\\top \\Omega X\/n \\geq \\mu^\\top \\Omega \\mu\/n + Tr(\\Omega)\/n - 2\\sqrt{p\/n^2}) \\geq 3\/4. \n\\end{equation}\n\nCombining (\\ref{eqn:x1}) and (\\ref{eqn:x2}), the probability that both inequalities hold is no smaller than $1\/4$. Substituting them into (\\ref{eqn:qimposs}), with probability at least $c\/4$ we have \n\\begin{eqnarray}\\label{eqn:qresult}\nQ & \\geq & -4\\mu^\\top \\Omega \\mu+ Tr(I - \\Omega)+ \\sqrt{\\mu^\\top \\Omega \\mu\/n + Tr(\\Omega)\/n - 2\\sqrt{p\/n^2}}\\\\\n&& + \\ln|\\Omega|+\\frac{1}{n_0} Tr(I - \\Omega)\\nonumber\\\\\n& \\gtrsim & -4 g_p^2 \\tau^2 p \\epsilon + \\sqrt{p\/n} - f_p^2 \\eta^2p^2\\nu.\n\\end{eqnarray}\n\nNow we consider the region defined in Theorem \\ref{thm:unknownmu}. \n\\begin{itemize}\n\\item $\\max\\{2-2\\alpha-\\beta, 1-2\\theta-\\zeta\\}< (1-\\delta)\/2$;\n\\item or, $2 - 2\\alpha - \\beta = (1 - \\delta)\/2$ and $f_p = C_p$; \n\\item or, $1-2\\theta-\\zeta = (1 - \\delta)\/2$ and $g_p=C_p$.\n\\end{itemize} \nIn the first region, $\\tau^2 p \\epsilon + \\eta^2p^2\\nu \\ll \\sqrt{p\/n}$, so the misclassification rate when $X \\sim N(-\\mu, I)$ is $P(Q > 0) \\geq c\/4$ according to (\\ref{eqn:qresult}). \nIn the second and third regions, the constant in front of $\\sqrt{p\/n}$ can be adjusted at the cost of a smaller, but still constant, probability. Therefore, the misclassification rate is still no smaller than a constant. 
\nWhen $X \\sim N(\\mu, \\Omega^{-1})$, the same result can be obtained. \nHence, in this region, $MR(QDA) \\geq c$ where $c > 0$ is a constant. \\qed\n\n\n\n\\subsection{Proof of Theorem \\ref{thm:unknownmu}: the strong signal region}\\label{sec:thm3strong} \nWhen the signals are strong, with the threshold $t = \\sqrt{2\\log p\/n}$, all the signals $\\mu_i \\neq 0$ are exactly recovered with probability $1 - o(1)$. Hence, we only consider the event that $\\{t = \\sqrt{2\\log p\/n}\\}$ and all the signals are exactly recovered. \n\nFor the case that $t \\neq 0$, $\\hat{\\mu}^{(t)} = \\hat{\\mu}_0 \\circ d^{(t)}$ and $\\hat{\\mu}_d^{(t)} = d \\circ d^{(t)}$. \nFor simplicity, in this section, we use $\\hat{\\mu}_0$ and $d$ to denote $\\hat{\\mu}_0 \\circ d^{(t)}$ and $\\hat{\\mu}_d^{(t)}$, respectively. Note that since all the signals are exactly recovered, $\\hat{\\mu}_0$ has zeros on the non-signal entries and non-zeros on the signals, and the same for $d$. \n\n\nLet $k=\\|\\mu\\|_0$ denote the number of non-zeros in $\\mu$. \nWithout loss of generality, we permute $\\mu$ such that the first $k$ entries are the non-zeros and the rest are the zeros. We also permute $W$, $\\Omega$, and $X$ accordingly, and rewrite $W$ and $\\Omega$ as $2\\times 2 $ block matrices $W=\\left(^{W_{11} \\ W_{12}}_{W_{21} \\ W_{22}}\\right)$ and $\\Omega=\\left(^{\\Omega_{11} \\ \\Omega_{12}}_{\\Omega_{21} \\ \\Omega_{22}}\\right)$ respectively, where $W_{11}$ and $\\Omega_{11}$ are $k\\times k$ sub-matrices of $W$ and $\\Omega$, respectively. Let $X^{(k)}$, $d^{(k)}$, $\\mu^{(k)}$, and $\\hat{\\mu}_0^{(k)}$ denote, respectively, $X$, $d$, $\\mu$, and $\\hat{\\mu}_0$ restricted to the first $k$ entries, and let $X^{(p-k)}$ denote $X$ restricted to the last $(p-k)$ entries. Then $\\mu^{(k)}$ is a length $k$ vector with all elements equal to $\\tau$. 
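The permutation and block partition just described can be sketched in a few lines. The support locations and the toy $\Omega$ below are hypothetical; the names `Omega_11`, `Omega_12` mirror the block notation above.

```python
import numpy as np

rng = np.random.default_rng(1)
p, tau = 12, 0.8                      # toy dimensions (illustrative)
support = np.array([2, 5, 9])         # hypothetical recovered signal locations
k = len(support)                      # k = ||mu||_0

mu = np.zeros(p)
mu[support] = tau

# Permute so that the k non-zero entries of mu come first
perm = np.concatenate([support, np.setdiff1d(np.arange(p), support)])
mu_perm = mu[perm]

# Permute a (toy, symmetric) Omega accordingly and partition into 2x2 blocks
Omega = np.eye(p) + 0.01 * rng.standard_normal((p, p))
Omega = (Omega + Omega.T) / 2
Omega_perm = Omega[np.ix_(perm, perm)]
Omega_11 = Omega_perm[:k, :k]         # k x k signal block
Omega_12 = Omega_perm[:k, k:]

mu_k = mu_perm[:k]                    # all entries equal tau
```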
\n\n\n\nSimilarly to the derivation in Section \\ref{sec:weaksig}, we have \n\\begin{equation}\\label{eqn:deltaqstrong}\nQ(X, \\hat{\\mu}, W) =\n \\sigma_S Z + \\mu_S - \\mu^\\top (\\Omega - I)\\mu + \\ln|\\Omega| + \\Delta Q, \n\\end{equation}\nwhere \n\\begin{equation}\\label{Decomp2}\n \\Delta Q = \\displaystyle 2(d - (I + \\Omega)\\mu)^\\top X+\\left[\\hat{\\mu}_0^\\top(I-\\Omega)\\hat{\\mu}_0-\\mu^\\top(I-\\Omega)\\mu\\right] .\n\\end{equation}\nWe want to show that in the region of success, \n\\begin{equation}\\label{eqn:deltaqmag} \n\\Delta Q = o({p\\xi^2 + \\eta^2 p^2 \\nu + 8 \\tau^2 p \\epsilon}). \n\\end{equation}\nOnce this is shown, noting that $p\\xi^2 + \\eta^2 p^2 \\nu + 8 \\tau^2 p \\epsilon \\rightarrow \\infty$ in the region of possibility, we have $MR(QDA) \\rightarrow 0$. \n\nTo prove (\\ref{eqn:deltaqmag}), we introduce a lemma about $\\Delta Q$ for the strong signal case, whose proof can be found in the Appendix. \n\\begin{lemma}\\label{lemma:StrongDeltaQ}\nUnder the model assumptions and the definition of $\\Delta Q$ in (\\ref{Decomp2}), we have\n\\begin{equation}\\label{eqn:lemmaDeltaQ2}\n \\Delta Q \\leq c p\\epsilon \\xi\/n + O(p \\epsilon\\sqrt{\\eta^2 \\nu}\/n) + \\sqrt{p\\epsilon} \\tau \\ln p (1 + o(1)).\n\\end{equation}\n\\end{lemma}\nConsider the three terms in (\\ref{eqn:lemmaDeltaQ2}). The first term is $c n^{-1}p\\epsilon \\xi \\ll \\sqrt{p}\\epsilon\/n \\ll \\tau^2 \\sqrt{p}\\epsilon$, since $\\xi \\ll p^{-1\/2}$ and $1\/\\sqrt{n} \\ll \\tau $. The second term is $n^{-1}p \\epsilon\\sqrt{\\eta^2 \\nu} = \\frac{\\epsilon}{n}\\sqrt{\\eta^2 p^2 \\nu}$. Hence, it is of smaller order if $\\eta^2 p^2 \\nu \\rightarrow \\infty$. If $\\eta^2 p^2 \\nu \\rightarrow 0$, $p\\xi^2 + p\\tau^2 \\epsilon \\rightarrow \\infty$ in the region of possibility, so that $\\frac{\\epsilon}{n}\\sqrt{\\eta^2 p^2 \\nu}$ is still of smaller order. For the last term $\\sqrt{p\\epsilon}\\tau \\ln p = \\sqrt{p \\tau^2\\epsilon }\\ln p$, the analysis is the same. 
\n\nTherefore, in the region of possibility identified by part (ii) of Theorem \\ref{thm:unknownmu}, (\\ref{eqn:deltaqmag}) always holds, and hence $MR$ converges to 0. \\qed\n\n\\subsubsection{The region of impossibility}\nWhen the mean vector $\\mu$ is unknown, the region of impossibility is no smaller than that in the case where the mean vector $\\mu$ is known (\\cite{LeCam}). For the latter, the region of impossibility is depicted in Theorem \\ref{thm:idealunequal}. \n\nWhen the signals are strong, the region of impossibility is the same as that in Theorem \\ref{thm:idealunequal}. Hence, when the mean vector is unknown, any classifier $L$ has $MR(L) \\rightarrow 1\/2$ in this region. \\qed\n\n\n\n\\section{Real Data Analysis}\\label{sec:rats}\nIn this paper, we consider the rats dataset presented in \\cite{liver}. As we introduced in Section \\ref{sec:quickdata}, this data set records the gene expressions of live rats in response to different drugs and toxicants. There are 181 samples and 8491 genes, where 61 samples are labeled as toxicant and the other 120 are labeled as other drugs. \nWe compare QDA with LDA, as the latter has been shown to outperform classifiers such as SVM, RandomForest, GLasso, and FoBa. \nThe QDA with feature selection for the real data is discussed in Section \\ref{sec:real-proc} and the implementation details and results are in Section \\ref{sec:dataresults}. \n\n\n\\subsection{Procedure for the real data}\\label{sec:real-proc}\nHere, we present a QDA-based classification procedure for the real data, for which we have to estimate $\\Omega_0$, $\\Omega_1$, $\\mu_0$, and $\\mu_1$ separately. Further, we need to eliminate the effect of the feature variances. Hence, there is an additional scaling step in the following algorithm. \n\n\\begin{itemize}\n\\item[0.] 
\nEstimate the precision matrices $\\hat{\\Omega}_0$ and $\\hat{\\Omega}$. Find the sample mean vectors $\\hat{\\mu}_0$ and $\\hat{\\mu}_1$ for groups 0 and 1. \n\\item[1.] Estimate the difference $\\Omega_0-\\Omega_1$ by $\\Omega_ {\\rm diff}=\\hat\\Omega_0-\\hat\\Omega_1$. Set the diagonals of $\\Omega_{\\rm diff}$ to be zeros. \n\\item[2.] Let $d_0= \\hat{\\Omega}_0\\hat{\\mu}_0$ and $d_1= \\hat{\\Omega}\\hat{\\mu}_1$. Scale them by standard deviation and sample size, i.e., \n\\[\nd_0 = \\hat{\\Omega}_0\\hat{\\mu}_0\/s_0, \\quad d_1 = \\hat{\\Omega}\\hat{\\mu}_1\/s_1,\n\\]\nwhere $s_i$ is the standard deviation vector of the training data from class $i$, $i=0, 1$, and the division is element-wise. \n\n\\item[3.] Let $d = d_1 - d_0$. For a threshold $t$, define the vector $d^{(t)}$ where $d^{(t)}_j = d_j \\cdot I(|d_j| \\geq t)$. \n\n\\item[4.] For any fresh data vector $X$, compute the scaled test vector \n\\[\nx_j = [X_j - \\bar{\\mu}_j]\/s_j,\n\\]\nwhere $\\bar{\\mu}=\\frac{\\hat{\\mu}_1+\\hat{\\mu}_0}{2}$ and $s$ is the standard error of the pooled data\n$$\ns_j = \\sqrt{[(n_0-1) ((s_0)_j)^2+(n_1-1)((s_1)_j)^2] \/ (n_0+n_1-2)}.\n$$\n\\item[5.] Calculate the Z-score\n$$ \nZ=x^\\top \\hat{\\Omega}_{\\mbox{{\\footnotesize diff}}} x+2(d^{(t)})^\\top x+C\n$$\nand classify $X$ to be in class 0 if $Z < 0$ (or in class 1 if $Z > 0$). \n\\end{itemize}\n\nThere are two tuning parameters in the algorithm, $t$ in Step 3 and $C$ in Step 5. In the implementations, we use a grid search to find the optimal values. Details are given in Section \\ref{sec:dataresults}. \n\nIn Step 1, we set all the diagonals of $\\Omega_{\\rm diff}$ to be 0. The reason is that the sample size is limited. The training data has only 90 samples for class 0 and 46 samples for class 1, so the element-wise errors for $\\hat{\\Omega}_0$ and $\\hat{\\Omega}$ are $\\sim 1\/\\sqrt{50}$. 
This causes a comparatively large error on the diagonals of $\\hat{\\Omega}_0 - \\hat{\\Omega}$, and hence in the classification criterion. So for this data set, we set all the diagonals of $\\Omega_{\\rm diff}$ to be 0; this step is not necessary for large data sets. \n\n\\subsection{Implementation and Results}\\label{sec:dataresults}\nFollowing the setup of the data analysis in \\cite{PCS}, we apply 4-fold data splitting to the sample. \nFor each class, we randomly draw one fourth of the samples, and then combine them to form the test data while using the leftover as the training data. We perform the splitting 15 times independently and record the errors of QDA and LDA for each splitting. \nThe data (sample indices for the 15 splittings) are available at \\url{https:\/\/zhigang-yao.github.io\/research.html}. \n\nIn the real data analysis section, we focus on comparing QDA and LDA. The LDA is implemented within the setting of QDA, where in Step (3) of the algorithm in Section \\ref{sec:real-proc} we use clipping thresholding instead of hard thresholding, and in Step (5) we set $\\hat{\\Omega}_{\\mbox{{\\footnotesize diff}}}=\\mathbf{0}$ for LDA. \nThe clipping threshold is employed since it gives much more satisfactory results than hard thresholding for LDA; see \\cite{PCS} for details. For QDA, the two thresholding schemes give similar results. \nSince the calculation of $\\hat{d}$ involves the calculation of $\\hat{\\Omega}_0$ and $\\hat{\\Omega}$ and the thresholding, the LDA algorithm has exactly the same tuning parameters as QDA. \nThe procedure for determining these tuning parameters is the same for both algorithms, so the results are comparable.\n\n\nThere are two sets of parameters in the algorithm: one arises in the PCS estimation, and the other consists of $C$ and $t$. For PCS, there are four tuning parameters $(q_1, q_2, \\delta, L)$. 
Here we use the same set of tuning parameters for the estimation of both $\\hat{\\Omega}_0$ and $\\hat{\\Omega}$, since the two classes are from the same data set and the performance of PCS is not sensitive to the choice of these parameters (\\cite{PCS}). \nFollowing the setting in \\cite{PCS}, we set $(\\delta, L)=(.1, 30)$, and also try $(\\delta, L)=(.1, 50)$. The selection of $(q_1, q_2)$ is done with $C$ and $t$ by grid search. \n\nIn Step (3) and Step (5) of the algorithm, there are two tuning parameters $t$ and $C$. We set the ranges $[t_{\\mbox{{\\footnotesize min}}}, t_{\\mbox{{\\footnotesize max}}}]=[0, \\mbox{max}_{1\\leq j \\leq p} |d_j|]$ with an increment of .1 and $[C_{\\mbox{{\\footnotesize min}}}, C_{\\mbox{{\\footnotesize max}}}]=[-50, 50]$ with an increment of 1. For $(q_1, q_2)$, we consider $.1 \\leq q_k \\leq 1$, with an increment of .1, $k = 1, 2$. \nThe smallest error is obtained over a grid search of $t$, $C$, and $(q_1, q_2)$. \nTo be fair, this step is the same for both QDA and LDA. We compare the smallest error that LDA and QDA can achieve. 
\n\n\n\n\\begin{figure}[htb!]\n \\centering\n \\subfigure[ $(C_{\\mbox{{\\footnotesize min}}}, C_{\\mbox{{\\footnotesize max}}})=(-50, 50)$ ]{ \\includegraphics[width= 2.37 in]{rats2.pdf}} \\quad\n\t\t\\subfigure[$(C_{\\mbox{{\\footnotesize min}}}, C_{\\mbox{{\\footnotesize max}}})=(-100, 100)$]{\\includegraphics[width= 2.37 in]{rats1.pdf}}\n \\caption{Comparison of testing error rate (y-axis) of LDA and QDA for the rats data with $(\\delta_1, L_1)=(\\delta_2, L_2)=(.1, 30)$ and 15 data splittings.}\n \\label{figure:errorrats-k30}\n\\end{figure}\n\n\\begin{figure}[htb!]\n \\centering\n \\subfigure[]{ \\includegraphics[width= 2.37in]{plot1_LQDA2_k30_new}} \\quad\n\t\t\\subfigure[]{\\includegraphics[width= 2.37in]{plot1_LQDA3_k30_new}} \n\t\t\\subfigure[]{\\includegraphics[width= 2.37in]{plot1_LQDA4_k50_new}} \\quad\n\t\t\\subfigure[]{\\includegraphics[width= 2.37in]{plot1_LQDA5_k50_new}}\n\t\t\t \\caption{Zoom-in errors for the rats data for varying choices of $(q_1, q_2) \\in (.1,1) \\times (.1, 1)$ for one of the 15 splittings in Figure 4(a)-(b) and Figure 6(a)-(b).}\n \\label{figure:errorrats-zoomin}\n\\end{figure}\n\nBoth the LDA test error (the best error) and the QDA test error (the best error) over all 15 data splittings are reported in Figure \\ref{figure:errorrats-k30}.\nIn the left panel of Figure \\ref{figure:errorrats-k30}, we can see that the error rates of LDA are all above those of QDA at every data splitting. To better show the difference between them, we also plot the testing error rate in the right panel of Figure \\ref{figure:errorrats-k30} with the wider grid-search range $[C_{\\mbox{{\\footnotesize min}}}, C_{\\mbox{{\\footnotesize max}}}]=[-100, 100]$. \nWhen $L$ changes from 30 to 50, the results are summarized in Figure \\ref{figure:errorrats-k50}, which are similar. This comparison clearly demonstrates the expected superiority of QDA over LDA. 
\nThe results suggest that, for the rats data, QDA outperforms LDA in terms of both best error rate and average error. Combined with the results in \\cite{PCS}, where the authors showed that HCT-based LDA significantly outperforms all other HCT-based methods as well as SVM and RF, our findings suggest that QDA gives a better separation than LDA by taking the second-order difference between the two classes into account.\n\n\\begin{figure}[htb!]\n \\centering\n \\subfigure[$(C_{\\mbox{{\\footnotesize min}}}, C_{\\mbox{{\\footnotesize max}}})=(-50, 50)$]{\\includegraphics[width= 2.4 in]{rats2-k50.pdf}}\\quad\n\t\t\\subfigure[$(C_{\\mbox{{\\footnotesize min}}}, C_{\\mbox{{\\footnotesize max}}})=(-100, 100)$]{\\includegraphics[width= 2.4 in]{rats1-k50.pdf}}\n \\caption{Comparison of testing error rate (y-axis) of LDA and QDA for the rats data with $(\\delta_1, L_1)=(\\delta_2, L_2)=(.1, 50)$ and 15 data splittings.}\n \\label{figure:errorrats-k50}\n\\end{figure}\n\n\n\n\n\\section{Discussion}\\label{sec:dis}\nThis paper focuses on the classification problem associated with the use of QDA and feature selection for data of rare and weak signals. We derived the successful and unsuccessful classification regions, first for the case of a known mean vector and covariance matrix, then for the case of an unknown mean vector but known covariance matrix, and finally for the case in which both the mean vector and the covariance matrix are unknown. We also proved that these regions were actually the possibility and impossibility regions under the same modelling, which indicates that QDA achieves the optimal classification results in this manner. In addition, we developed computing and classification algorithms that incorporated feature selection for rare and weak data. With these algorithms, our real data analysis showed that QDA had much improved performance over LDA. 
\n\nOur theoretical results showed that the two sets of signal weakness and sparsity parameters, one set from the mean vector and the other set from the covariance matrix, influence the possibility\/impossibility regions or QDA successful\/unsuccessful regions almost independently (except for a $\\max$ operator over the two sets of parameters) when the covariance matrix is known. When both the mean vector and covariance matrix are unknown, the two sets of parameters interact with each other as indicated in Theorem \\ref{thm:unknownomega}. For the latter case, the analysis of the misclassification rate is very complicated and we only obtained partial results for this most general case; further study is therefore warranted. Also, for the precision matrix $\\Omega$ given in (\\ref{Omega1}), we can introduce sparsity and weakness in the diagonal elements of $I-\\Omega$, the difference in precision matrices, instead of using a constant $\\xi=1-c$ for all diagonal elements. \n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nEmergent scale invariance is a key to many of our current scientific and engineering challenges, including cell membranes~\\cite{machta2012critical}, turbulence~\\cite{canet2016fully}, fracture and plasticity~\\cite{shekhawat2013damage, chen2013scaling}, and also the more traditional continuous thermodynamic phase transitions. \nThe current formulation of the field has an elegant framework which can explain observables that scale as power laws times homogeneous functions. However, the literature on corrections to this result, including logarithms and exponentially diverging quantities, is much more scattered and does not have a similarly systematic framework. \n\nThe renormalization group (RG) is our tool for understanding emergent\nscale invariance. 
At root, despite challenges of implementation, the RG\ncoarse-grains and rescales the system to generate ordinary differential equations (ODEs) for model parameters as a\nfunction of the observed log length scale $\ell$. A fixed point of these\nflows represents a system which looks the same at different length scales;\nsystems near criticality flow near to this fixed point. In cases where the\nflow can be linearized around the fixed point, the RG\nimplies that observables near criticality are given by a power law times\na universal function of invariant combinations of variables; {\em e.g.}\nthe Ising model has magnetization $m \sim t^{\beta} \mathcal{M}(L t^\nu)$ where $L$ is the system size and $t = (T - T_c)\/T_c$ is the deviation of the temperature $T$ from the critical temperature $T_c$.\n\nSurprisingly often, this scenario of universal critical exponents\nand scaling functions is violated; free energies and correlation lengths\nscale with logarithms or exponentials, and the proper form of the \nuniversal scaling functions is often unknown. \nSpecifically,\ndeviations arise in the Ising model in $d = 1$~\cite{ising1925beitrag}, 2~\cite{salas2002exact}, \& 4~\cite{larkin1995phase},\nthe tricritical Ising model in $d = 3$~\cite{Wegner73}, the $d=2$ XY\nmodel~\cite{KosterlitzT73}, the surface critical behavior of\npolymers~\cite{Diehl87, Eisenriegler88}, van der Waals interactions in the 3-d spherical model~\cite{Dantchev06}, finite size scaling of the random field\nIsing model (RFIM) in $d = 6$~\cite{Ahrens10}, thermodynamic Casimir effects in\nslabs with free surfaces~\cite{Diehl12,Diehl14}, the $d = 2$,\n4-state Potts model~\cite{Salas97,Shchur09,Berche13}, percolation and the 6-d Potts model \cite{PhysRevE.68.036129}, and many other systems. \nEach such system has hitherto been treated as a special case. 
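For contrast with these violations, the standard hyperbolic scenario is easy to see concretely. The sketch below (with 2-d Ising-like exponents $\beta = 1/8$, $\nu = 1$ and an invented scaling function, purely for illustration, not a simulation of the Ising model) generates data obeying $m = t^{\beta}\mathcal{M}(L t^{\nu})$ and checks that points sharing the invariant combination $L t^{\nu}$ collapse onto a single curve once rescaled by $t^{-\beta}$:

```python
beta, nu = 0.125, 1.0        # 2-d Ising-like exponents, for illustration

def M(x):
    # an invented scaling function, NOT the true Ising one
    return x / (1.0 + x)

def magnetization(t, L):
    # synthetic "data" constructed to obey m = t^beta M(L t^nu)
    return t**beta * M(L * t**nu)

# three (t, L) pairs sharing the same invariant combination L t^nu = 4
pairs = [(0.01, 400.0), (0.04, 100.0), (0.1, 40.0)]
collapsed = [magnetization(t, L) * t**(-beta) for t, L in pairs]
# after rescaling, all points coincide at M(4) = 0.8
assert max(collapsed) - min(collapsed) < 1e-9
```

Plotting $m\, t^{-\beta}$ against $L t^{\nu}$ for many $(t, L)$ values would produce the familiar scaling collapse onto the graph of $\mathcal{M}$.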
\n\nHere we use the fact that the predictions of the RG can be written down as a set of differential equations in the abstract space of Hamiltonians. This allows us to apply a branch of dynamical systems theory, normal form theory~\\cite{murdock2006normal, PNFT1} to provide a unified description applicable to all of these systems. We arrange these systems into {\\em universality\nfamilies} of theories, each defined by its normal form. Each family has \n{\\em universal terms} (linear and nonlinear), whose values \ndetermine a system's universality\nclass within the family. Finally, each family's normal form predicts the\nnatural {\\em invariant scaling\ncombinations} governing universal scaling functions.\n\n\nThe perspective we present here is transformative: unifying, simplifying,\nand systematizing a previously technical subject and promising new developments\nin the field. Our best analogy is to the introduction of homotopy theory in the\n1970's~\\cite{toulouse1976principles,rogula1976large,Merm79,GoldbartK19} \nas a systematic method that unified the treatment of some of the many defect\nstructures studied in materials and field theories. Just as there have\nbeen several previous works that correctly identified the universal effects\nof nonlinear terms for phase transitions where an analytic RG \napproach is available~\\cite{Wegner72,Meinke05,sonoda,magradze,pelissetto2013renormalization, barma1984corrections, barma1985two, hasenbusch2008universal, Salas97}, the \nBurger's vector, winding number, and wrapping number of dislocations in \ncrystals, disclinations in liquid crystals, and Skyrmions in nuclei were \nunderstood individually before the mathematics of homotopy theory was seen\nas the natural mathematical framework. 
Just as homotopy theory facilitated the\nstudy of defects in more complex systems (metallic glasses, cosmic strings,\nquasicrystals), so our normal\nform methods are allowing the correct identification\nand characterization of the singularity in systems in experimental and \nnumerical explorations where analytic RG calculations do not yet\nexist~\\cite{Lorien17}. Finally, homotopy theory quickly uncovered\nthe fascinating entanglement and transformation properties of non-abelian\ndefects~\\cite{Merm79}, with early speculative applications in glass\nphysics~\\cite{Nelson83} and eventually \ninspiring the closely related nonabelian braiding being developed for \ntopological quantum computing. Similarly, we demonstrate here that our methods\nallow, for what appears to be the first time, the use of the correct,\nremarkably rich, invariant scaling variables in the universal scaling\nfunctions for systems where universal nonlinear RG terms are needed, and\nwe have discussed elsewhere~\\cite{raju2018reexamining} how our methods can\nbe powerful tools for systematically incorporating corrections to\nscaling near critical points even when universal nonlinear terms are not\nneeded. For example, in the future the normal-form change of variables \nwe introduce here could become an inner expansion matched to \nseries and virial expansions at extremes of the phase diagram; this would\nallow rapid and accurate convergent characterizations of materials systems\nclose to and far from criticality.\n\nOur machinery provides a\nstraightforward method to determine the complete form of the critical\nsingularity in these challenging cases. Our initial results are complex and\ninteresting; they pose challenges which we propose to address in future\nwork. The coordinate transformation to the normal form embodies analytic corrections to scaling, which allow us to address experimental\nsystems as they vary farther from the critical point. 
Finally, bifurcation theory\nis designed to analyze low-dimensional dynamical systems without detailed understanding of the underlying equations;\nour methods should improve scaling collapses in critical phenomena\nlike 2-d jamming~\cite{goodrich2014jamming}\nwhere there is numerical evidence for logarithms but no RG framework is available.\n\nWe begin by distinguishing our work from previous literature connecting the RG to normal form theory. The previous approach~\cite{deville2008analysis, ei2000renormalization, ziane2000certain} compared the application of RG-like methods and normal form theory to solving nonlinear differential equations using perturbation theory. The connection we are making is different. We are applying normal form theory to the RG flow equations. Hence, our approach is to apply normal form theory to make predictions about the general structure of the flows given the topology (nature and number of fixed points), rather than to apply it to the model that produces these flows. \n\nWe give an introduction to normal form theory in Section~\ref{sec:normalform}. We give a survey of the previous literature on nonlinear scaling in the RG in Section~\ref{sec:earlierwork}. We show how the application of normal form theory allows us to define universality families of fixed points in Section~\ref{sec:universality}. We present several worked-out examples starting with the 4-d Ising model in Section~\ref{sec:4dising} and the Random Field Ising model in Section~\ref{sec:randomfieldising}. We then work out the application of normal form theory to the Ising model in dimensions $1$, $2$ and $3$ in Sections~\ref{sec:3dising}--\ref{sec:2dising}.\n\n\n\section{Normal Form Theory}\label{sec:normalform}\n\nNormal form theory~\cite{PNFT1} is a technique to reduce differential equations to a `normal form', often the simplest possible form, by a change of coordinates. 
This is achieved by making near-identity coordinate transformations to get rid of as many terms as possible from the equation. It was developed initially by Poincar\'{e} to integrate nonlinear systems~\cite{poincare, chenciner2015poincare}. The physical behavior should be invariant under analytic changes of coordinates, and the length (or time) parameter should stay the same, \nwhich the mathematical literature addresses by perturbative polynomial changes\nof coordinates (attempting removal of $n$th order nonlinearities in the flow\nby using $n$th order or lower terms in the change of variables). To any \nfinite order\nthis gives an analytic change of coordinates, but it is not in general\nguaranteed to converge to an analytic transformation; we will thus\ncall it a polynomial change of coordinates. \n\nWe give a brief introduction to normal form theory here for completeness. A more detailed treatment can be found in Ref.~\cite{PNFT1}. Typically one starts with a set of differential equations of the form \n\begin{equation}\n \frac{d \bm{\theta}}{d \ell} = \bm{g}(\bm{\theta}, \epsilon) ,\n \end{equation}\n where $\epsilon$ is some parameter, $\bm{\theta} = \{\theta_i\}$ is the vector of state variables and the vector field $\bm{g}$ defines the flow. In the context of statistical mechanics and renormalization group flows, the $\theta_i$'s are parameters or coupling constants that enter into the free energy and $\epsilon$ is the difference in dimension from the lower or upper critical dimension. Let us first work with the case where $\epsilon$ does not enter into the equations. The first step is to find the fixed point of the equation and use translations to set the fixed point of each $\theta_i$ at 0. The next step is to linearize about the fixed point and reduce the linear part to the simplest possible form. In general, this is the Jordan canonical form, but it is often just the eigenbasis. 
Then, the equation is \n \begin{equation}\n \frac{d \bm{\theta}}{d \ell} = J \bm{\theta} + \bm{f}(\bm{\theta}) ,\n\end{equation}\nwhere $J$ is the linearized matrix of the flow and the remaining terms are in the vector field $\bm{f}(\bm{\theta}) \sim \mathcal{O}(\theta^2)$. Terms of order $k$ are defined to be made up of homogeneous polynomials of order $k$. So for $\bm{\theta} = (\theta_1, \theta_2, \theta_3)$, $\theta_1^2 \theta_2 \theta_3 \sim \mathcal{O}(\theta^4)$. We will denote terms of order $k$ by a lower index. So \n\begin{equation}\n \bm{f}(\bm{\theta}) = \sum_{k \geq 2} \bm{f}_k (\bm{\theta}) .\n\end{equation}\nNote that the index gives the order of the polynomial and does not enumerate the components of the vector field. Let the lowest non-zero term be at some order $k \geq 2$ (usually 2). Then we can write\n\begin{equation}\n \frac{d \bm{\theta}}{d \ell} = J \bm{\theta} + \bm{f}_k (\bm{\theta}) + \mathcal{O}(\theta^{k+1}) .\n \end{equation}\nThe idea is to try to remove higher order terms by making coordinate changes. To remove the term $\bm{f}_k$, we try to do a coordinate change of order $k$, \n\begin{equation}\n\label{changecoord}\n \bm{\theta} = \bm{\tilde \theta} + \bm{h}_k (\bm{\tilde \theta}) ,\n\end{equation}\nwhere $\bm{h}_k (\bm{\tilde \theta})$ is a polynomial in $\bm{\tilde \theta}$. This construction is similar to nonlinear scaling fields~\cite{cardy1996scaling, Wegner72}, which try to linearize the RG flow equations, with a subtle difference that we will remark on later. The higher order terms which we can remove by coordinate changes correspond to analytic corrections to scaling. 
Then, to find the equations in the new variables, we differentiate the change of coordinates:\n\begin{equation}\n \frac{d \bm{\theta}}{d \ell} = \frac{d \bm{\tilde \theta}}{d \ell} + (\mathcal{D} \bm{h}_k) \frac{d \bm{\tilde \theta}}{d\ell} .\n\end{equation}\nHere $\mathcal{D} \bm{h}_k$ is the matrix of partial derivatives of the vector field $\bm{h}_k$ with respect to the parameters $\bm{\tilde \theta}$. Now, substituting this into the flow equation gives\n\begin{equation}\n (1 + \mathcal{D} \bm{h}_k) \frac{d \bm{\tilde \theta}}{d \ell} = J (\bm{\tilde\n \theta} + \bm{h}_k) + \bm{f}(\bm{\tilde \theta} + \bm{h}_k(\bm{\tilde\n \theta})) + \mathcal{O}({\tilde \theta}^{k+1}), \n \end{equation}\nwhich upon simplification gives\n \begin{equation}\n \frac{d \bm{\tilde \theta}}{d \ell} = J \bm{\tilde \theta} - (\mathcal{D} \bm{h}_k) J \bm{\tilde \theta} + (\mathcal{D} J \bm{\tilde \theta}) \bm{h}_k + \bm{f}_k (\bm{\tilde \theta})+ \mathcal{O}({\tilde \theta}^{k+1}) .\n\end{equation}\nFor the last equation, notice that the matrix $J$ is the same as $\mathcal{D} J \bm{\tilde \theta}$ (i.e.\ the same as the matrix of partial derivatives with respect to the parameters $\bm{\tilde \theta}$ of the vector $J \bm{\tilde \theta}$). \nTwo of the terms can be written as the Lie bracket (a commutator for vector fields) defined as $[\bm{h}_k, J \bm{\tilde \theta}] = - ( (\mathcal{D} \bm{h}_k) J \bm{\tilde \theta} - (\mathcal{D} J \bm{\tilde \theta}) \bm{h}_k)$ to give the final equation \n\begin{equation}\n \frac{d \bm{\tilde \theta}}{d \ell} = J \bm{\tilde \theta} + [\bm{h}_k, J \bm{\tilde \theta}] + \bm{f}_k + \mathcal{O}({\tilde \theta}^{k+1}) .\n\end{equation}\nSo, if we want to remove the term $\bm{f}_k$, we need to solve the equation $[\bm{h}_k, J \bm{\tilde \theta}] = -\bm{f}_k$ for $\bm{h}_k$. 
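A one-variable sanity check of this machinery (an illustrative toy flow, not one of the RG systems discussed in this paper): for $d\theta/d\ell = \lambda\theta + c\theta^2$, the quadratic change of coordinates $\theta = \tilde\theta + a\tilde\theta^2$, with $a$ chosen to cancel the quadratic term, leaves a flow that is linear up to $\mathcal{O}(\tilde\theta^3)$. Numerically, the deviation of the transformed trajectory from pure exponential growth then shrinks as the cube of the initial condition, so halving it shrinks the deviation by roughly $2^3 = 8$:

```python
import math

lam, c, T = 1.0, 1.0, 1.0        # toy flow d(theta)/dl = lam*theta + c*theta^2

def residual(eps):
    """Deviation of the transformed trajectory from pure exponential growth."""
    a = c / lam                  # cancels the quadratic term in this 1-D case
    theta0 = eps + a * eps**2    # theta = th + a th^2 at l = 0
    # exact solution of theta' = theta + theta^2 (logistic-type flow)
    thetaT = theta0 * math.exp(T) / (1.0 - theta0 * (math.exp(T) - 1.0))
    # invert theta = th + a th^2, taking the branch near zero
    thT = (-1.0 + math.sqrt(1.0 + 4.0 * a * thetaT)) / (2.0 * a)
    return abs(thT - math.exp(T) * eps)

ratio = residual(1e-3) / residual(5e-4)
assert 6.0 < ratio < 10.0        # ~2^3: the quadratic term has been removed
```

Had the quadratic term not been removed, the deviation would shrink only as the square of the initial condition.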
It is important to note that whether this equation can be solved depends only on the linear part of the flow, given by the matrix $J$. That is, within the space of transformations that we are considering, the linear part of the equation completely determines how much the equation can be simplified and how many terms can be removed. This is not true if there are zero eigenvalues; one then has to use a broader space of transformations, which we will consider later.\n\nTo see when the equation can be solved, we first note that the space of homogeneous polynomials is a vector space with a basis constructed in the obvious way from the monomials $\theta_1^{\alpha_1}...\theta_n^{\alpha_n}$. Any term at order $k$ can be written as a sum of such terms for which $\sum_i \alpha_i = k$. The Lie bracket can be thought of as a linear operator on this space. Finding the set of possible solutions amounts to finding the range of this linear operator. Let us take the case where the linear part is diagonalizable, so that it just consists of the eigenvalues $\lambda_i$. Let us say for simplicity that the $j$th component of the vector $\bm{f}_k$ is $(f_k)^j = c {\tilde \theta}_1^{\alpha_1}...{\tilde \theta}_n^{\alpha_n}$ for some set of \{$\alpha_i$\}. Then, the $j$th component of the matrix equation reduces to\n\n\n\begin{equation}\n \lambda_j (h_k)^j - \left(\sum_i \lambda_i \alpha_i\right) (h_k)^j = -c {\tilde \theta}_1^{\alpha_1}...{\tilde \theta}_n^{\alpha_n} .\n\end{equation}\nThis can be solved easily by choosing $(h_k)^j = a {\tilde \theta}_1^{\alpha_1}...{\tilde \theta}_n^{\alpha_n}$ and\n\begin{equation}\n\label{normalformequation}\n a = \frac{c}{\sum_i \lambda_i \alpha_i - \lambda_j} .\n\end{equation}\nWhen all nonlinear terms can be removed by such a coordinate transformation, the usual case of power law scaling is obtained. The fixed point, in this case, is called hyperbolic. 
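A minimal helper can automate this bookkeeping. The sign convention of the removable coefficient depends on the direction of the coordinate change, but removability itself requires only that the denominator below be nonzero (the vanishing case is taken up next). The eigenvalues $(2, 1)$ used in the example are the free-energy and temperature eigenvalues of a 2-d Ising-like fixed point, purely as an illustration:

```python
def resonance_denominator(lambdas, j, alpha):
    """Denominator lambda_j - sum_i alpha_i*lambda_i for the monomial
    theta_1^a1 ... theta_n^an in the j-th flow equation.  The term can be
    removed by a polynomial coordinate change iff this is nonzero; when it
    vanishes the term must stay in the normal form."""
    return lambdas[j] - sum(lam * a for lam, a in zip(lambdas, alpha))

# eigenvalues (2, 1) for (f, t): a t^2 term in the f equation cannot be
# removed (2 = 2*1), while a hypothetical t^3 term could be:
assert resonance_denominator([2, 1], 0, (0, 2)) == 0
assert resonance_denominator([2, 1], 0, (0, 3)) != 0
```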
Alternatively, if we have a term with $\\lambda_j = \\sum_i \\lambda_i \\alpha_i$ (a linear combination of other eigenvalues $\\lambda_i$ with positive integer coefficients $\\alpha_i$), this term is called a resonance and cannot be removed from the equation for $d \\theta_j\/d \\ell$. This contributes to the singularity at the fixed point which is no longer given by power law combinations. \n\n\\subsection{Bifurcations}\n\nNotice a special case of these equations when for some $k$, a particular $\\lambda_k = 0$. In this case, it is possible to get an infinite number of resonances because the equation $\\lambda_i = \\lambda_i + \\alpha_k \\lambda_k$ is also true for all $\\alpha_k$ and $\\lambda_i$. This case, when one of the eigenvalues goes to 0 depending on some parameter $\\epsilon$ is called a \\textit{bifurcation}. If all linear eigenvalues $\\lambda_i$ of the flows are distinct and\nnon-zero, which terms can be removed using polynomial coordinate changes\ndepends only on these $\\lambda_i$. As we saw, this approach can be formulated\nelegantly as a linear algebra problem of the Lie bracket on the space of\nhomogeneous polynomials. For more\ngeneral cases---including bifurcations---`hypernormal form'~\\cite{murdock2004hypernormal, yu2007simplest,\nyu2002computation} theory develops a systematic but somewhat more\nbrute-force machinery to identify which terms can and cannot be removed\nperturbatively by polynomial changes of coordinates. Classic bifurcations include the pitchfork bifurcation, the transcritical bifurcation, the saddle node and the Hopf bifurcation~\\cite{GuckenheimerH13}. \n\nConfusingly, bifurcation theory separately has its own `normal form' of bifurcations. These normal forms are derived in a very different way using the implicit function theorem. The basic idea is to ask for the smallest number of terms in the equation which will preserve the qualitative behavior of the fixed points (e.g. 
exchange of stability of fixed points), and then map any other equation onto this simple equation using the implicit function theorem. This mapping allows for too broad a class of transformations to be useful for our purposes. An important feature of the flows that we want to preserve is their \textit{analyticity}; we therefore only consider polynomial changes of coordinates.\n\nAn explicit example is given by the 4-d Ising model. It is known that the magnetization scales as $M \sim t^{1\/2} (\log t)^{1\/3}$ with $\log \log$ corrections. The quartic coupling $u$ and the temperature $t$ have flow equations which traditional bifurcation theory would simplify to\n\begin{align}\n \frac{d u}{d \ell} &= - \bar{B} u^2 , \\\n \frac{d t}{d \ell} &= 2 t .\n\end{align}\nCalculating the magnetization with this set of flow equations leads to the wrong power of logarithmic corrections. By allowing too broad a class of coordinate transformations, bifurcation theory hides the true singularity in the non-analytic coordinate change. We will show that normal form theory instead predicts \n\begin{align}\n \frac{d u}{d \ell} &= -\bar{B} u^2 + \bar{D} u^3 , \\\n \frac{d t}{d \ell} &= 2 t - \bar{A} t u ,\n\end{align}\nwhich does predict the correct behavior. We will present the explicit solution of this equation in Section~\ref{sec:4dising}. Here, we just note that the traditional $\log$ and $\log \log$ terms follow from the solution's asymptotic behavior. To get these equations, we will remove higher order terms in $u$ by using a coordinate change that is lower in order (broadening the formalism we considered in Section~\ref{sec:normalform}). Using lower order terms to remove higher order terms is part of hypernormal form theory. For our purposes, the distinction is somewhat artificial, and here we simply use normal form theory to denote any procedure that uses only polynomial changes of coordinates to change terms in flow equations. 
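One can check the claimed transcritical normal form numerically. With $\bar{B}$ scaled to one, the flow $du/d\ell = -u^2 + D u^3$, together with $dL/d\ell = -L$ for the system size, exactly conserves the combination $L\, e^{1/u - D} (1/(Du) - 1)^{D}$ along the flow; a short RK4 integration confirms this ($D = 0.5$ below is an illustrative value, not the universal coefficient):

```python
import math

def flow(state, D):
    """Rescaled transcritical normal form du/dl = -u^2 + D u^3, together
    with dL/dl = -L (the system size shrinks under coarse-graining)."""
    u, L = state
    return (-u**2 + D * u**3, -L)

def rk4_step(state, D, h):
    f = lambda s: flow(s, D)
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + (h / 6.0) * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def invariant(state, D):
    """The invariant scaling combination L e^{1/u - D} (1/(D u) - 1)^D."""
    u, L = state
    return L * math.exp(1.0 / u - D) * (1.0 / (D * u) - 1.0) ** D

D = 0.5                    # illustrative value, not the universal coefficient
state = (0.1, 1.0)         # initial (u, L)
c0 = invariant(state, D)
for _ in range(5000):      # integrate the flow out to l = 5
    state = rk4_step(state, D, 0.001)
assert abs(invariant(state, D) / c0 - 1.0) < 1e-6
```

The conserved combination is what multiplies the universal scaling function, in place of the power-law combination $L t^{\nu}$ of the hyperbolic case.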
\n\n\nIn Sections~\\ref{sec:4dising} and~\\ref{sec:randomfieldising}, we will explicitly work out the case of a single variable undergoing a bifurcation for the 4d Ising model and the 2d Random Field Ising model and show how there are only a finite number of terms which cannot be changed or removed. It is worth mentioning here that there can be cases in which two variables simultaneously have 0 eigenvalues. The XY model~\\cite{kosterlitz1974critical} offers an example where this happens. The dATG transition in 6 dimensions has two variables that simultaneously go through a transcritical bifurcation~\\cite{charbonneau2017nontrivial, Yaida18}. Polynomial changes of coordinates in both variables can be used here too, but because there are generically more terms at higher order than at lower order (there are many more ways to combine two variables into a sixth order polynomial than there are to combine them into a third order polynomial), we usually do not have enough freedom to remove all terms. Therefore, simultaneous bifurcations in more than one variable often have an infinite number of terms in their flow equations that cannot be removed.\n\n\n\n\n\n\\section{Earlier work} \\label{sec:earlierwork}\n\nThe approach we take is inspired by Wegner's early work~\\cite{Wegner72,\nWegner73}, subsequent developments by Aharony and\nFisher~\\cite{aharony1980university, aharony1983nonlinear}, and by studies of Barma and Fisher on\nlogarithmic corrections to scaling~\\cite{barma1984corrections,\nbarma1985two}. The approach of Salas and Sokal on the 2-d Potts model~\\cite{Salas97}, and of Hasenbusch~\\cite{hasenbusch2008universal} et al. on the 2d Edwards-Anderson model \nis similar in spirit to ours.\n\nWegner~\\cite{Wegner72} first constructed nonlinear scaling fields which transform linearly under an arbitrary renormalization group. His construction is very similar to the coordinate changes we considered above for normal form theory. 
The one difference is that Wegner explicitly allows the new coordinates to depend on the coarse-graining length $\ell$. We will not allow this explicit dependence on $\ell$ in our change of coordinates, as it does not seem to offer any advantage over regular normal form theory.\n\n\n\n\n\nUltimately, the goal of using normal form theory to understand the differential equations that describe RG flow is to simplify and systematize scaling collapses. This requires a systematic way of dealing with corrections to scaling beyond the usual power laws. Three different types of corrections to scaling have appeared in the literature: logarithmic, singular, and analytic corrections to scaling. Logarithms in the scaling behavior typically occur at an upper critical dimension or in the presence of a resonance. Wegner and Riedel~\cite{Wegner73} considered the case of a zero eigenvalue, which occurs at the upper critical dimension of the Ising and tricritical Ising models. They derived the form of the scaling in terms of logarithmic corrections to scaling. However, they used perturbation theory to ignore higher order terms in the flow equations rather than only keeping those terms which cannot be removed by an application of normal form theory. Here, we will solve the full flow equations and see that the logarithmic corrections to scaling are better incorporated as part of the true singularity using normal form theory.\n\nAnalytic corrections to scaling were explored by Aharony and Fisher~\cite{aharony1983nonlinear}, who gave a physical interpretation of the nonlinear scaling fields (see below Eq.~(\ref{changecoord})) in terms of analytic corrections to scaling in the Ising model. Analytic corrections to scaling capture the difference between the physical variables $T$ and $H$ (which your thermometer or gaussmeter measures) and the symbols $\tilde{t}$ and $\tilde{h}$ in the theory of the magnet. 
The liquid-gas transition is in the Ising universality class, but a theory of the liquid-gas transition has to include analytic corrections to scaling to match the universal predictions of the Ising model. Moreover, such corrections are also needed to explain the non-universal behavior away from the fixed point. Analytic corrections to scaling will correspond to terms in the differential equations that can be removed by coordinate changes.\n\nThe singular corrections to scaling are also incorporated as part of the true singularity with the addition of irrelevant variables. Finally, the ability to change the renormalization scheme leads to what are called redundant variables. In related work~\cite{raju2018reexamining}, we argue that these variables can be seen as a gauge choice which contributes to the corrections to scaling. In forthcoming work~\cite{Clement18}, we will explore the consequences of this distinction between gauge corrections and genuine singular corrections to scaling further.\n\n\n\n\n\n\n\n\nFinally, Salas and Sokal~\cite{Salas97}, in the context of the 2-d Potts model, derive the normal form of the flow equations for a transcritical bifurcation. Similarly, Hasenbusch et al.~\cite{hasenbusch2008universal} derive the normal form for the 2-d Edwards-Anderson model, which is also a transcritical bifurcation. Neither of these works solves the full flow equations; both end up approximating the solution by logarithms. In the context of QCD, Sonoda~\cite{sonoda} derived the solution for the flow of a coupling which undergoes a transcritical bifurcation. \n\nDespite similar inclinations, none of these works makes the complete\nconnection to normal form theory. One advantage of our approach is precisely that it brings together this disparate literature into a unified framework. 
The\nanalysis presented here is general and applicable to all kinds of\nsituations, ranging from old problems like the nonequilibrium random field Ising\nmodel (NERFIM)~\\cite{PerkovicDS95}, to newer research problems like jamming~\\cite{goodrich2014jamming}. \n\n\n\\section{Universality Families} \\label{sec:universality}\n\n\\begin{table*}\n\\begin{center}\n\\resizebox{\\linewidth}{!}{\n \\begin{tabular}{ | l | c | c| c| }\n \\hline\n Universality family & Systems & Normal form & Invariant scaling combinations \\\\ \\hline\n \\makecell{\\raisebox{-0.4\\height}{\\includegraphics[width=.06\\textwidth]{hyperbolicbifn2.png}}} \n \n & \\makecell{\\textbf{3-d Ising Model $(t)$} \\\\ 3-d RFIM $(w)$} & $dt\/dl = (1\/\\nu) t$ &\n\n\n\t\t $L t^{\\nu}$ \\\\ \\hline\n \\makecell{\\raisebox{-0.4\\height}{\\includegraphics[width=.06\\textwidth]{pitchforkbifn2.png}}}\n & \\makecell{\\textbf{2-d RFIM $(w)$} \\\\ 6-d Potts model $(q)$} & $dw\/dl = w^3 + \\textcolor{blue}{B w^5}$ & \n\n\t\t$L e^{1\/(2 w^2)} \n\t\t(\\textcolor{darkGreen}{1\/w^2} \\textcolor{blue}{+ B})^{\\textcolor{blue}{-B\/2}}$\\\\ \\hline\n \\makecell{\\raisebox{-0.4\\height}{\\includegraphics[width=.09\\textwidth]{transcriticalbifn2.png}}}\n & \\makecell{ \\textbf{4-d Ising model $(u,t)$} \\\\ 2-d NERFIM $(-w,S)$ \\\\ 1-d Ising model $(-t,h)$ } & \\makecell{$du\/dl = -u^2 + \\textcolor{darkGreen}{D u^3}$ \\\\ $dt\/dl = 2 t \\textcolor{darkGreen}{- A t u}$} & \n\t\\makecell{$L e^{1\/u - D} \\textcolor{blue}{(1\/(D u) - 1)^{D}} = L y^{D}$ \\\\\n\t $t L^2\\textcolor{blue}{(W(y L^{1\/D})\/(1\/(D u) - 1))^{-A} }$}\n \\\\ \\hline\n \\makecell{Resonance} & \\textbf{2-d Ising model} \n & \\makecell{$df\/dl = 2 f \\textcolor{darkGreen}{- t^2} \\textcolor{darkGreen}{- L^{-2}}$ \\\\ $dt\/dl = t\\textcolor{darkGreen}{+AL^{-1}}$} & \n\t$tL \\textcolor{darkGreen}{+ A\\log L}$ \\\\ \\hline\n \\makecell{\\raisebox{-0.4\\height}{\\includegraphics[width=.1\\textwidth]{xybifn.png}}}\n & \\textbf{2-d XY model} &\n 
\\makecell{$dx\/dl=-y^2(1\\textcolor{darkGreen}{+xf(x^2)})$ \\\\\n $dy\/dl = -xy$} &\n \\makecell{$y^2 -2\\int_0^x s\/(1\\textcolor{darkGreen}{+sf(s^2)})\\,ds$\\\\ \n\t$=y^2-x^2\\textcolor{darkGreen}{-(2f(0)\/3)x^3+(f(0)^2\/2)x^4+\\mathcal\n O(x^5)} $}\\\\ \\hline\n \n \\end{tabular}}\n \\end{center}\n \\caption{Normal forms and universal invariant scaling combinations for \ntraditional and intrinsically nonlinear renormalization-group critical points.\nThe universal scaling of most critical points are power-law combinations\nof the control variables, derived from the linearized normal-form equations\nof hyperbolic RG fixed points. Many systems have well-studied\nlogarithmic corrections, exponentially diverging correlations, or other\nsingularities that we attribute to intrinsic nonlinearities in the \nRG flow equations. \nIn blue are new universal terms predicted by our analysis of the\ncorresponding dynamical system normal forms, which appear not to have\nbeen hitherto discussed in the literature.\nIn green are terms we explain which have been previously observed using other\nmethods~\\cite{Wegner72,Meinke05,sonoda,magradze,pelissetto2013renormalization}. The normal form equations are shown for the system in bold. Other systems in the same universality family have the same equations associated with different variables (shown in parenthesis). The invariant scaling combination for the transcritical family requires the Lambert $W$ function defined by the equation $W(x) \\exp(W(x)) = x$. Many of the results quoted in the table were obtained in disparate literatures (QCD, glasses, critical phenomena etc.) but are united in this common framework. 
Other families are possible; the flow equations for the replica symmetry breaking transition in disordered media have a simultaneous transcritical bifurcation and possibly also a Hopf bifurcation~\cite{Yaida18}.}\n\label{systemstable}\n\end{table*} \n\n\n\nTraditionally, the RG contains the concept of a universality class. The universality class is essentially determined by the critical exponents which explain the scaling behavior of a model, i.e.\ by the linearized RG eigenvalues. Normal form theory suggests another possible classification. Each fixed point can be classified by the bifurcation or resonance at which it sits. The simplest case, which is also the traditional one, is the hyperbolic universality family. In the hyperbolic case, it is possible to remove all nonlinear terms in the flow equations by changes of coordinates. Hence, the RG can be written as a linear flow to all orders in perturbation theory. Different values for the linear eigenvalues correspond to different universality classes. While traditionally this is a statement about the linearization of the RG, here it is a statement about the only terms in the flow equations that are \textit{universal} in the sense that they cannot be removed by a coordinate change. \n\nThe need for this generalization becomes clear when we examine cases which are not traditional. In Table~\ref{systemstable} we present common universality families \nand well-studied statistical mechanics systems governed by each.\nThe pitchfork bifurcation shows up\nin the 2-d Random Field Ising model; it has a cubic term in the equations\nfor $w$, the ratio of the disorder to the coupling~\cite{Bray85}. We\nhave derived that the correct equations require an additional $w^5$\nterm~\cite{Lorien17}, which was not included in previous work. The 2-d\nIsing model has a well-known logarithmic correction to the specific\nheat, which Wegner associated with a $t^2$ resonance term in the flow\nequation~\cite{Wegner72}. 
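The resonance row of the table can be checked directly: integrating $dt/d\ell = t + A L^{-1}$ together with $dL/d\ell = -L$ conserves $tL + A \log L$, which is why this combination, rather than $tL$ alone, is the invariant scaling variable. A quick RK4 check, with $A = 0.75$ as an illustrative value of the universal constant:

```python
import math

A = 0.75                        # illustrative value of the universal constant
t, L, h = 0.02, 100.0, 0.001    # initial temperature-like variable, size, step

def deriv(t, L):
    # resonance normal form of the 2-d Ising family: dt/dl = t + A/L,
    # with the system size flowing as dL/dl = -L under coarse-graining
    return (t + A / L, -L)

c0 = t * L + A * math.log(L)    # candidate invariant t L + A log L
for _ in range(3000):           # RK4 integration out to l = 3
    k1 = deriv(t, L)
    k2 = deriv(t + 0.5 * h * k1[0], L + 0.5 * h * k1[1])
    k3 = deriv(t + 0.5 * h * k2[0], L + 0.5 * h * k2[1])
    k4 = deriv(t + h * k3[0], L + h * k3[1])
    t += (h / 6.0) * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    L += (h / 6.0) * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
assert abs((t * L + A * math.log(L)) - c0) < 1e-8
```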
The 1-d and 4-d Ising models have transcritical\nbifurcations. The 1-d Ising case is somewhat special, and we will cover it later in Section~\ref{sec:1dising}. These cover all the important bifurcations with one variable~\footnote{We have not studied any example of a saddle-node bifurcation, which would require a transition from a critical point to no critical point.}.\n\n\nWhen more than one variable is undergoing a bifurcation, or if more than one variable has an inherently nonlinear flow, the analysis becomes considerably more complicated. This is evidenced in the 2-d XY model at the Kosterlitz--Thouless (KT) transition~\cite{kosterlitz1977d}. It has been shown that the simplest normal form\nof its flow equations (in the inverse-temperature-like variable\n$x\sim1\/T-1\/T_c$ and the fugacity $y$) has an infinite number of\nuniversal terms, which can be rearranged into an analytic\nfunction $f$~\cite{pelissetto2013renormalization} (Table~\ref{systemstable}).\nWe conjecture that the very similar transition observed in randomly grown\nnetworks~\cite{callaway2001randomly,dorogovtsev2001anomalous} is not\nin the KT universality class, but rather is in the same universality\nfamily. It is not to be expected that a percolation transition for\ninfinite-dimensional networks should flow to the same fixed point\nas a 2-d magnetic system, but it is entirely plausible that they share\nthe same normal form with a different universal function $f$.\n\nDifferent universality classes within the same universality family, such as those of the 4-d Ising model and the 2-d NERFIM, have different power laws and scaling functions. 
However, as shown in Table~\\ref{systemstable}, because they both have a transcritical bifurcation, the two classes have the same complicated invariant scaling combinations~\\footnote{A correlation length $y^{-D}$ from Table~\\ref{systemstable} defined in terms of the marginal variable in both cases diverges exponentially; in terms of the temperature the correlation length is a power law.}. This hidden connection is made apparent in the shared normal form, where the quartic coupling and temperature ($u,T$) in the first class are associated with the (negative of) disorder strength and avalanche size ($-w,S$) in the second~\\footnote{The minus sign on $w$ and $t$ for the 1-d Ising\nand the NERFIM is because $w$ and $t$ are \nmarginally relevant whereas $u$ is marginally irrelevant for 4-d Ising.}.\n\n\n\nIndeed, the normal form not only unites these universality classes, but allows a more precise handling of their singularity. It is usually stated that the magnetization $M \\sim t^{1\/2} (\\log\nt)^{1\/3}$, the specific heat $C \\sim (\\log t)^{1\/3}$ and the\nsusceptibility $\\chi \\sim (\\log t)^{1\/3}\/t$ with $\\log \\log$ corrections~\\cite{Wegner73}. We show in the supplementary material that\nthe true singularity of the magnetization\nat the critical point is $M \\sim t^{1\/2} W(x t^{-27\/25})^{1\/3}$,\nwhere $W$ is the Lambert-W function defined by $W(z) e^{W(z)} = z$, and\n$x(u)$ is a complicated but explicit function of the irrelevant variable $u$.\n(The traditional log\nand log-log terms follow from the asymptotic behaviors of $W(x)$ at large\nand small $x$. The universal power $27\/25$ becomes manifest in the \ncomplete singularity, but is disguised as a constant factor up to\nleading logs.) We now show how to apply normal form theory to specific examples. \n\n\n\n\\section{Application to specific systems}\nIn the sections below, we derive in detail the scaling form for the entries shown in Table~\\ref{systemstable}. 
Our archetypal example is the 4-d Ising model. For this, we derive the scaling forms and use them to perform scaling collapses of numerical simulations. We then discuss the scaling of the Random Field Ising model, the XY model and the Ising model in dimensions $1$, $3$ and $2$. \n\n\\subsection{4-d Ising} \\label{sec:4dising}\n\nThe study of critical points using the renormalization group was turned into a dynamical systems problem by Wilson~\\cite{wilson1974renormalization}. These RG calculations are done by first expressing the Ising model as a field theory with a quartic potential $u \\phi^4$. Then by coarse-graining in momentum space and rescaling, one obtains the flow equations\n\\begin{align}\n {d t}\/{d \\ell} =& 2 t - \\bar{A} t u + \\bar{C} t u^2 + \\bar{E} t u^3 \\nonumber \\\\ &+ \\bar{G} t u^4 + \\bar{I} t u^5 + \\bar{K} t u^6 ... , \\\\\n {d u}\/{d \\ell} =& \\epsilon u - \\bar{B} u^2 + \\bar{D} u^3 + \\bar{F} u^4 \\nonumber \\\\ &+ \\bar{H} u^5 + \\bar{J} u^6 + \\bar{L} u^7 ... , \\\\\n {d f}\/{d \\ell} =& (4 - \\epsilon) f + ...,\n \\label{wilsoneqs}\n\\end{align}\nwhere $t$ is the temperature, $f$ is the free energy and $u$ is the leading irrelevant variable. This is currently the highest order to which the flow equations are known. The coefficients take the values $\\bar{A} = 1$, $\\bar{B} = 3$, $\\bar{C} = 5\/6$, $\\bar{D} = 17\/3$, $\\bar{E} = -7\/2$, $\\bar{F} \\approx 32.54$, $\\bar{G} \\approx 19.96$, $\\bar{H} \\approx -271.6$, $\\bar{I} \\approx -150.8$, $\\bar{J} \\approx 2849$, $\\bar{K} \\approx 1355$, $\\bar{L} \\approx -34776$~\\cite{kompaniets2016renormalization, chetyrkin1983five}. The flow equation for $u$ in this case takes the form of a transcritical bifurcation with parameter $\\epsilon = 4 - d$ tuning the exchange of stability between the Gaussian ($u = 0$) and Wilson-Fisher ($u \\neq 0$) fixed points. \n\nConsider these equations for $\\epsilon = 0$, the point at which the flow undergoes a transcritical bifurcation. 
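As a quick numerical illustration (not part of the derivation), the truncated flows at $\\epsilon = 0$ can be integrated directly; the sketch below keeps only the lowest nonlinear coefficients quoted above, and the initial values are arbitrary. The marginally irrelevant variable decays only logarithmically slowly, $u(\\ell) \\sim 1\/(\\bar{B} \\ell)$:

```python
from scipy.integrate import solve_ivp

# Truncated Wilson flow equations at epsilon = 0 (sketch: only the
# lowest-order coefficients Abar, Bbar, Cbar, Dbar from the text are kept).
Abar, Bbar, Cbar, Dbar = 1.0, 3.0, 5.0/6.0, 17.0/3.0

def flow(ell, z):
    t, u = z
    dt = 2*t - Abar*t*u + Cbar*t*u**2
    du = -Bbar*u**2 + Dbar*u**3
    return [dt, du]

# arbitrary small starting values, for illustration only
sol = solve_ivp(flow, [0, 50], [1e-6, 0.2], rtol=1e-10, atol=1e-12)
t50, u50 = sol.y[:, -1]
# u decays only logarithmically slowly, u(ell) ~ 1/(Bbar*ell),
# while t grows like e^{2 ell} up to u-dependent corrections.
```

The slow decay of $u$ is what turns the naive power laws into the logarithmic corrections discussed below.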
To derive the normal form, one considers a change of variables of the form \n\\begin{align}\n t &= \\tilde t + a_1 \\tilde t \\tilde u + a_2 \\tilde t \\tilde u^2 + ... , \\\\\n u &= \\tilde u + b_1 \\tilde u^2 + b_2 \\tilde u^3 + b_3 \\tilde u^4 + ...\n\\end{align}\nThis gives the equations up to order $u^4$, \n\\begin{align}\n {d \\tilde t}\/{d \\ell} =& 2 \\tilde t - \\bar{A} \\tilde t \\tilde u + (-\\bar{A} b_1 + a_1 \\bar{B} + \\bar{C}) \\tilde t \\tilde u^2 + ... , \\\\\n {d \\tilde u}\/{d \\ell} =& - \\bar{B} \\tilde{u}^2 + \\bar{D} \\tilde{u}^3 \\nonumber \\\\ &+ (-b_1^2 \\bar{B} + b_2 \\bar{B} + b_1 \\bar{D} + \\bar{F}) \\tilde{u}^4 + ...\n\\end{align}\nNote that any term of the form $u^m t$ in the equations for $d t\/d \\ell$ and any term of the form $u^m$ in the equations for $d u\/d \\ell$ is a resonance. Hence, the coefficients $\\bar{A}$, $\\bar{B}$ and $\\bar{D}$ remain unchanged with this change of variables. However, the coefficients $\\bar{C}$ and $\\bar{F}$ are changed (though the change is independent of $a_2$ and $b_3$, because the terms they multiply are resonances) and in particular, can be set to 0 by an appropriate choice of coefficients. \n\nThis creates a general procedure for reducing this flow to its simplest possible form. First, all terms that are not resonances are removed in the usual way by solving Eq.~(\\ref{normalformequation}). Then, we perturbatively remove most of the resonances using the following procedure. First consider the $u$ flow. Suppose the lowest order term in the flow after the $u^3$ term is $u^n$, i.e.\n\n\\begin{equation}\n \\frac{d u}{d \\ell} = - \\bar{B} u^2 + \\bar{D} u^3 + \\bar{N_n} u^n + \\mathcal{O}(u^{n+1})\n\\end{equation}\nwith $n > 3$. Consider a change of variables of the form $u = \\tilde{u} + b_{n-2} \\tilde{u}^{n-1}$. 
Then\n\\begin{widetext}\n\\begin{align}\n (1 + (n-1) b_{n-2} \\tilde{u}^{n-2}) \\frac{d \\tilde u}{d \\ell} &= - \\bar{B} (\\tilde{u} + b_{n-2} \\tilde{u}^{n-1})^2 + \\bar{D} (\\tilde{u} + b_{n-2} \\tilde{u}^{n-1})^3 \\nonumber \\\\ &+ \\bar{N_n} (\\tilde{u} + b_{n-2} \\tilde{u}^{n-1})^n + \\mathcal{O}(\\tilde{u}^{n+1}) , \\\\\n \\frac{d \\tilde u}{d \\ell} &= \\frac{- \\bar{B} \\tilde{u}^2 + \\bar{D} \\tilde{u}^3 + \\bar{N_n} \\tilde{u}^n - 2 \\bar{B} b_{n-2} \\tilde{u}^n}{(1 + (n-1) b_{n-2} \\tilde{u}^{n-2}) } + \\mathcal{O}(\\tilde{u}^{n+1}) , \\\\\n &= - \\bar{B} \\tilde{u}^2 + \\bar{D} \\tilde{u}^3 + (\\bar{N_n} - 2 \\bar{B} b_{n-2} + (n-1) b_{n-2} \\bar{B}) \\tilde{u}^n + \\mathcal{O}(\\tilde{u}^{n+1}) .\n\\end{align}\n\\end{widetext}\nEvidently, the coefficient of the $\\tilde{u}^n$ term can be set to 0 with an appropriate choice $b_{n-2} = \\bar{N_n}\/(\\bar{B} (3 - n))$.\n\nSo all terms of the form $u^n$ with $n > 3$ can be removed by a change of coordinates. Incidentally, this derivation also shows why it is not possible to remove the $u^3$ term. Now consider the $t$ equation with\n\\begin{equation}\n \\frac{dt}{d \\ell} = 2 t - \\bar{A} t \\tilde u + M_n t \\tilde u^{n-1} + \\mathcal{O} (t \\tilde u^{n}) .\n\\end{equation}\nWe consider a change of coordinates\n\\begin{equation}\n t = \\tilde t + a_{n-2} \\tilde t \\tilde{u}^{n-2} .\n\\end{equation}\nIt is then straightforward to show\n\\begin{equation}\n \\frac{d \\tilde t}{d \\ell} = 2 \\tilde t - \\bar{A} \\tilde t \\tilde u + (M_n + \\bar{B} (n-2) a_{n-2}) \\tilde t \\tilde u^{n-1} + \\mathcal{O} (t \\tilde u^{n}) .\n\\end{equation}\nSo setting $a_{n-2} = - M_n\/(\\bar{B}(n-2))$ sets the coefficient of the $t u^{n-1}$ term with $n > 2$ to 0. \n\nAny term which is not of this form can be removed in the usual way by solving Eq.~(\\ref{normalformequation}). Finally, there is one more degree of freedom to use. 
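The cancellation just derived, $b_{n-2} = \\bar{N_n}\/(\\bar{B}(3-n))$, can be checked symbolically; a minimal sketch with sympy for the concrete case $n = 6$ (the symbol names here are ours):

```python
import sympy as sp

ut, B, D, Nn, b = sp.symbols('utilde Bbar Dbar N6 b')
n = 6  # remove the u^6 term as a concrete example

# change of variables u = utilde + b*utilde^(n-1)
u = ut + b*ut**(n - 1)
du = -B*u**2 + D*u**3 + Nn*u**n        # original flow du/dl
jac = sp.diff(u, ut)                   # du/dutilde
# dutilde/dl = (du/dl)/(du/dutilde), expanded to order utilde^n
dut = sp.series(du/jac, ut, 0, n + 1).removeO().expand()
bstar = sp.solve(dut.coeff(ut, n), b)[0]
# matches b_{n-2} = N_n/(B(3-n)) from the derivation above
assert sp.simplify(bstar - Nn/(B*(3 - n))) == 0
```

The same check goes through for any $n > 3$; for $n = 3$ the denominator vanishes, which is again why the $u^3$ resonance cannot be removed.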
We can rescale $u$ and $t$ to set some of the nonlinear coefficients to 1. This reflects the fact that the original coefficients $\\bar{A}$, $\\bar{D}$ depend on an arbitrary scale of $u$ and $t$ that we have chosen. Choosing the scale of $u$ so that the coefficient of $u^2$ becomes $-1$, and the scale of $t$ so that the coefficient of the $t^2$ resonance becomes $-1$, defines $D = \\bar{D}\/\\bar{B}^2$ and $A = \\bar{A}\/\\bar{B}$. Hence, by considering all such polynomial changes of coordinates, we can reduce this set of equations to their normal form \n\n\n\n\n\n\n\n\\begin{align}\n {d \\tilde t}\/{d \\ell} &= 2 \\tilde{t} - A \\tilde{u} \\tilde{t} , \\label{isinghnf} \\\\ \n {d \\tilde u}\/{d \\ell} &= - \\tilde{u}^2 + D \\tilde{u}^3, \\label{isinghnf2} \\\\\n {d \\tilde f}\/{d \\ell} &= 4 \\tilde f - {\\tilde t}^2 \\label{isinghnf3} .\n\\end{align} \n\n\nThe resultant equations then have two parameters $A$ and $D$ which\nare \\textit{universal}, in a way that is similar to the eigenvalues of the RG flows as in Table~\\ref{systemstable}. The normal form variables $\\tilde t$, $\\tilde u$, $\\tilde f$ are equal to the physical variables $t$, $u$ and $f$ to linear order (up to a rescaling). Corrections to these are analytic corrections to scaling. Hence, we will henceforth simply refer to the normal form variables as $t$, $u$ and $f$. It is important to note that we are making a particular choice for the analytic corrections to scaling by setting them equal to zero. It is possible to make a different choice for the higher order coefficients. In particular, the solution of the equation for $d u\/d \\ell$ diverges at finite $\\ell$ if $u$ starts at a large enough value, so a different choice for the higher order coefficients may be more useful. All of these choices will agree close to the critical point but will have different behavior away from the critical point. Later, we will consider a different choice for the higher order terms.\n\nThe 4-d Ising model has both a bifurcation and a resonance. 
The $u^2$, $u^3$ and $A u t$ terms come from the bifurcation and cannot be removed\nby an analytic change of coordinates. The $t^2$ term is a consequence of\nan integer resonance between the temperature and free energy eigenvalues,\n$\\lambda_t = 1\/\\nu = 2$, $\\lambda_f = d = 4$. \n\n\n\\begin{figure}\n \\includegraphics[width=.85\\linewidth]{magcollapsen2.pdf}\n\\caption{Scaling collapse for the magnetization using the scaling form given by the normal form Eqs.~(\\ref{isinghnf}--\\ref{isinghnf3}). Simulations are done on a 4-d lattice using a Wolff algorithm for lattice sizes ranging from $L = 4$ to $L = 32$. Here $M_\\mathrm{scaling}$ $=$ $L (W(y L^{1\/D})+1)^{1\/4}$ and $t_{\\mathrm{scaling}}$ $=$ $L^{-2} (W(y L^{1\/D})\/(1\/(D u_0) - 1))^{1\/3} (W(y L^{1\/D})+1)^{-1\/2}$. We find $u_0 = 0.4\\pm0.1$ for the 4-d nearest-neighbor hypercubic lattice. An estimate of the error is given by estimating $u_0$ with a different choice of normal form, which gives $u_0 = 0.5$.}\n\\label{collapses1}\n \\end{figure}\n\n\\begin{figure}\n \\includegraphics[width=.85\\linewidth]{susccollapsen2.pdf}\n \\caption{Scaling collapse for the susceptibility using the scaling form given by the normal form Eqs.~(\\ref{isinghnf}--\\ref{isinghnf3}). Simulations are done on a 4-d lattice using a Wolff algorithm for lattice sizes ranging from $L = 4$ to $L = 32$. Here $\\chi_\\mathrm{scaling}$ $=$ $L^2 (W(y L^{1\/D}) + 1)^{1\/2}$ and $t_{\\mathrm{scaling}}$ $=$ $L^{-2} (W(y L^{1\/D})\/(1\/(D u_0) - 1))^{1\/3} (W(y L^{1\/D})+1)^{-1\/2}$. We find $u_0 = 0.4$ for the 4-d nearest-neighbor hypercubic lattice.}\n\\label{collapses2}\n \\end{figure}\n\n\n\n\n\nBefore examining the full solution\nof Eqs.~(\\ref{isinghnf}--\\ref{isinghnf3}), we will first study the effect\nof each part of the RG flows. First, considering only the linear terms and\ncoarse-graining until $t(\\ell^*) = 1$, the free energy is given by \n$f \\sim t^2$. 
This is the mean-field result and also the traditional scaling\nform that RG results take in the absence of nonlinear terms in the flow\nequations. Second, we include the resonance between the temperature and\nfree energy eigenvalue, which leads to an irremovable $t^2$ term in\nthe flow equation for the free energy. This term cannot be removed by analytic\ncoordinate changes, and yields a $\\log$ correction to the specific heat.\nThird, the irrelevant variable $u$ undergoes a transcritical\nbifurcation. Results in the hyper-normal form theory literature, as well\nas some articles in the high-energy theory\nliterature~\\cite{sonoda,magradze} recognize that the simplest form that \nthe equation can be brought into is Eq.~\\ref{isinghnf2}. The solutions\nof Eqs.(~\\ref{isinghnf}~--~\\ref{isinghnf2}) are $u(\\ell) = 1 \/(D (1 + W(y e^{\\ell\/D})))$ and\n$t(\\ell) = t_0 e^{2 \\ell} (W(y e^{ \\ell\/D})\/(1\/(D u_0) - 1))^{-A}$ where $y[u_0]$ is again\na messy but explicit function: $y = (1\/(D u_0)-1) \\exp(1\/(D u_0)-1)$. We show how to derive this in the supplementary material. \nThe traditional log and log-log corrections are\nderived by expanding the $W$ function for large $\\ell$.\n\n\nLet us use this to derive the finite-size scaling form of the free energy. Early finite-size scaling work~\\cite{aktekin2001finite, lai1990finite, montvay1987numerical} attempted scaling collapses with logs; recent work does not attempt collapses at all~\\cite{lundow2009critical}. Finite-size scaling requires an equation for the magnetic field, $h$, given by $dh\/d\\ell = 3 h$. Explicit calculations show that the coefficient of the $h u$ term is zero (see below). The free energy is then a function of three scaling variables, $u(\\ell)$, $t(\\ell)$ and $h(\\ell)$. 
It is given by \n\\begin{align}\n f(t_0, u_0, h_0) &= e^{-4 \\ell} f(t(\\ell), u(\\ell), h(\\ell))\\nonumber\\\\\n &- W(y e^{\\ell\/D})^{-A} \\left( \\frac{W(y e^{\\ell\/D})^{-A}}{1 - A}-\\frac{1}{A}\\right).\n\\end{align}\n\n To get a finite-size scaling form, we coarse-grain until $\\ell = \\log L$, where $L$ is the system size. Note that $u(L)$ cannot just be ignored because it is a dangerous irrelevant variable. However, we can account for it by taking the combination $t(L)\/(u(L))^{1\/2}$ and $h(L)\/(u(L))^{1\/4}$ as our scaling variables~\\cite{binder1985finite}. The scaling form of the free energy then depends on $u_0$, which we do not have a way to change or set in the simulation. Instead, we treat $u_0$ as a fit parameter in the scaling form of the susceptibility: \n\n\\begin{equation}\n \\chi = L^2 \\left(W(y L^{\\frac{1}{D}}) + 1\\right)^{\\frac{1}{2}} \\Phi \\left( t_0 L^2 \\left(\\frac{W(y L^{1\/D})}{1\/(D u_0) - 1}\\right)^{-A}\\right).\n\\end{equation}\n\n\\noindent At the critical point $t = 0$, the function $\\Phi$ must be analytic for\nfinite $L$ (since non-analyticity requires an infinite system size).\n$\\Phi(0)$ is therefore a constant independent of $L$ and $u_0$ at $t =\n0$. Using this, $u_0$ may be estimated from $\\chi$ at different values\nof $L$ by fitting to its predicted dependence \n$\\chi \\propto L^2 (W(y[u_0] L^{1\/D}) + 1)^{1\/2}$ where \n$y[u_0]$ is defined above.\n\n\nFigures~\\ref{collapses1}--\\ref{collapses2} show the scaling collapse of the magnetization\nand susceptibility. The magnetization is collapsed using the best-fit value of $u_0=0.4$. Though our collapses are not significantly\nbetter than the traditional logarithmic forms, the\ncorrect form of the singularity will be more apparent at larger values\nof $u_0$. This is because the $\\log \\log$ term, which is the second term in the asymptotic expansion of the $W$ function, is very small compared to the $\\log$ except at large $u_0$ and small $L$. 
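For concreteness, the scaling variables used in these collapses can be evaluated directly; a minimal Python sketch, where $A = \\bar{A}\/\\bar{B} = 1\/3$ and $D = \\bar{D}\/\\bar{B}^2 = 17\/27$ follow from the perturbative coefficients quoted earlier, and $u_0 = 0.4$ is the fitted value:

```python
import numpy as np
from scipy.special import lambertw

A, D, u0 = 1/3, 17/27, 0.4
y = (1/(D*u0) - 1)*np.exp(1/(D*u0) - 1)   # y[u_0] as defined in the text

def chi_scaling(L):
    W = lambertw(y*L**(1/D)).real
    return L**2*(W + 1)**0.5

def t_scaling(L):
    W = lambertw(y*L**(1/D)).real
    return L**(-2.0)*(W/(1/(D*u0) - 1))**A*(W + 1)**(-0.5)

# plotting chi/chi_scaling(L) against t_0/t_scaling(L) should collapse
# the curves for different L onto a single scaling function Phi
```

Here `lambertw(...).real` takes the principal branch, which is the relevant one for $y > 0$.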
Changing the value of $u_0$ will require a model different from the nearest-neighbor hypercubic-lattice Ising model.\n\nSo far, we have been considering the effects of changing coordinates in the control variables on the predictions of the theory. Wegner~\\cite{wegner1974some} also considered changing coordinates in the degrees of freedom of the theory. These changes lead to `redundant' variables, the corrections from which can be removed by coordinate changes. We discuss them in separate work. Here we merely note that they can be used to explain some features of the scaling, like the fact that the coefficient of the $h u$ term is zero.\n\n\n\n\n\n\n\n\n\n\n\n\\subsubsection{Choice of normal form} \n\nThere are certain choices we have made in our application of normal form theory. One is to keep the flow parameter $\\ell$ unchanged. Some of the dynamical systems literature considers changing $\\ell$ to depend on other parameters. This would be unusual, since the coarse-graining length would depend on the physical parameters, but it does not seem to be disallowed.\nWe show in the supplementary material that this does not change the predictions for the 4-d Ising model.\n\nNormal form theory makes a particular choice for what to do with the coefficients that can be changed by coordinate changes: it sets them equal to zero. In general, however, it is not clear that this is the best choice. Consider the equation\n\\begin{equation}\n \\frac{d u}{d \\ell} = - u^2 + D u^3 \n \\label{ueqchange}\n\\end{equation}\nwhich, as we saw, has the solution $u(\\ell) = 1 \/(D (1 + W(y e^{\\ell\/D})))$. Here, $y = (1\/(D u_0)-1) \\exp(1\/(D u_0)-1)$. Note that $u_0 > 0$ is required for the stability of the free energy. If $u_0 < 1\/D$, then $y > 0$, and if $u_0 \\geq 1\/D$, $y \\leq 0$. Hence, the domain of attraction of the fixed point at $u = 0$ has a length $1\/D$. 
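These statements can be verified numerically; a small sketch (the values here are illustrative, with $D = \\bar{D}\/\\bar{B}^2 = 17\/27$ for the 4-d Ising model):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import lambertw

D, u0 = 17/27, 0.4                  # u0 < 1/D, inside the domain of attraction
y = (1/(D*u0) - 1)*np.exp(1/(D*u0) - 1)
assert y > 0                        # sign of y as stated above

def u_exact(ell):
    # closed-form solution u(ell) = 1/(D (1 + W(y e^{ell/D})))
    return 1/(D*(1 + lambertw(y*np.exp(ell/D)).real))

sol = solve_ivp(lambda ell, u: -u**2 + D*u**3, [0, 30], [u0],
                rtol=1e-11, atol=1e-14)
# direct integration of du/dl = -u^2 + D u^3 agrees with the closed form
```

Differentiating the closed form, using $W'(z) = W\/(z(1+W))$, reproduces the flow equation exactly, and $W(y)$ evaluates to $1\/(D u_0) - 1$ at $\\ell = 0$, so the initial condition is also satisfied.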
If we have a system where $u_0 > 1\/D$, then this will lead to $u(\\ell) \\rightarrow \\infty$ in a finite coarse-graining length. This is reflected in the branch cut of the $W$ function at $-1\/e$. In the context of high energy physics, some have tried to find deep meaning in this pole~\\cite{magradze}. \n\nHowever, for scaling purposes, we generally prefer a choice of coordinates for which there is no such unphysical behavior. One natural choice is to instead use the equation \n\\begin{equation}\n \\frac{d u}{d \\ell} = - \\frac{u^2}{1 + D u} . \n \\label{newnormalform}\n\\end{equation}\nFor small $u$, this has the same behavior as Eq.~(\\ref{ueqchange}). However, the behavior at large $u$ is now well behaved. The solution of this equation is \n\\begin{equation}\n u(\\ell) = \\frac{1}{D W\\left(e^{\\ell\/D} \\frac{1}{D u_0} e^{1\/(D u_0)}\\right)} ,\n\\end{equation}\nwhich is in fact somewhat simpler than the solution to Eq.~(\\ref{ueqchange}). Scaling collapses with this choice of normal form for the susceptibility are shown in the case of the Ising model in Figure~\\ref{collapses2ndmethod}. Better numerics are needed to tell if this choice of normal form is really useful. It turns out that this form has been implicitly used before in the Random Field Ising model, as we show explicitly in the next section.\n\n\\begin{figure}\n \\includegraphics[width=.85\\linewidth]{suscepcollapse2ndmethod.pdf}\n \\caption{Scaling collapse for the susceptibility using the scaling form given by a different choice of normal form derived from Eq.~(\\ref{newnormalform}). Simulations are done on a 4-d lattice using a Wolff algorithm for lattice sizes ranging from $L = 4$ to $L = 32$. Here $\\chi_\\mathrm{scaling}$ $=$ $L^2 (W(y L^{1\/D}))^{1\/2}$ and $t_{\\mathrm{scaling}}$ $=$ $L^{-2} (W(y L^{1\/D}))^{-1\/6}\/(1\/(D u_0))^{1\/3} \\exp(1\/(3 W(y L^{1\/D}))) $ with $y = 1\/(D u_0) e^{1\/(D u_0)}$. 
We find $u_0 = 0.5$ for the 4-d nearest-neighbor hypercubic lattice using this method.}\n\\label{collapses2ndmethod}\n \\end{figure}\n\n\n\\subsection{Random Field Ising model} \\label{sec:randomfieldising}\n\nFinding critical exponents for the Random Field Ising model has been a longstanding challenge in physics. Some initial results used supersymmetry to prove an equivalence of the Random Field Ising model in dimensions $d+2$ with the Ising model in dimensions $d$~\\cite{parisi1979random, de2006random}. It was later shown that the lower critical dimension of the Random Field Ising model is not $3$ (as would be expected from such a correspondence) but rather $2$~\\cite{imbrie1984lower}. The upper critical dimension is 6. Here, we will look at the scaling behavior of the Random Field Ising model at its \\textit{lower} critical dimension, $d = 2$. \n\nConsider a spin system with a random field,\n\\begin{equation}\n \\mathcal{H} = -\\sum_{\\langle i j \\rangle} J s_i s_j + \\sum_i h_i s_i ,\n\\end{equation}\nwhere $J$ is the nearest-neighbor coupling and $h_i$ is a random field chosen from a Gaussian distribution with width $r$. A phenomenological theory for the RG was formulated by Bray and Moore~\\cite{Bray85}. It turns out to be useful to define a quantity $w = r\/J$. Then, using heuristic arguments on the stability of domain walls, they derive\n\\begin{equation}\n \\frac{d w}{d \\ell} = -(\\epsilon\/2) w + A w^3 ,\n \\label{rfimwrongeq}\n\\end{equation}\nwhere $\\epsilon = d-2$ and $d$ is the dimension. Note that the flow equations have a symmetry under $w \\rightarrow -w$ because the physics is invariant under $r \\rightarrow -r$ about the critical point at $r = 0$. This is an example of a pitchfork bifurcation. Bray and Moore argue for this scaling form by looking at the scaling of $r$ and $J$ separately. The scaling of $J$ is given by looking at the energy of a domain wall of size $b^d$. The energy of the domain wall is proportional to $b^{d-1}$. 
By considering the cost of roughening the domain wall because of the presence of random fields, which goes as $r^2$, they are able to derive the next term in the equation for $J$, which is now\n\\begin{equation}\n \\frac{d J}{d \\ell} = (d - 1) J + D w^2 J + \\mathcal{O}(w^4) .\n \\label{nnequation}\n\\end{equation}\nFor the random field $r$, the energy of a region of size $b^d$ is proportional to $b^{d\/2}$. Any correction requires forming a domain of `wrong spins', which, being akin to a barrier crossing problem, is exponentially suppressed. Hence the equation for $r$ is given by \n\\begin{equation}\n\\frac{d r}{d \\ell} = \\frac{d}{2} r \n\\label{rnequation}\n\\end{equation}\nwith exponentially small corrections. These two equations together can be used to derive Eq.~(\\ref{rfimwrongeq}). Bray and Moore conjecture that Eq.~(\\ref{rnequation}) holds exactly to all orders in $w$ (up to exponential corrections). However, it is possible for Eq.~(\\ref{nnequation}) to have higher order terms in $w$, and thus Eq.~(\\ref{rfimwrongeq}) is only correct up to corrections of order $w^5$. Integrating Eq.~(\\ref{rfimwrongeq}) at $\\epsilon = 0$, we get $\\ell \\sim -1\/(2 A w^2) + 1\/(2 A w_0^2)$. This implies that the correlation length is \n\\begin{equation}\n \\xi \\sim e^{1\/(2 A w_0^2)} .\n\\end{equation}\nFor finite-size systems, the system size $L \\sim \\exp(1\/(2 A w_0^2))$. Meinke and Middleton~\\cite{Meinke05} showed that their finite-size data was much better fit by a function of the form $w_0^{-2 y} \\exp(C\/w_0^2)$ where $C$ is a constant they fit to ($C=\\Delta_0$ in their notation) and $y = 1.07$. We will show that this prediction is consistent with the results of normal form theory. \n\nAs we have already argued, there is no reason Eq.~(\\ref{rfimwrongeq}) is true to all orders in $w$. Indeed, the normal form prediction for the flow equations can be derived in a straightforward way. Consider adding a term $A_n w^n$ to Eq.~(\\ref{rfimwrongeq}) at $\\epsilon = 0$. 
This is a resonance, which the standard normal form procedure cannot usually remove. Suppose we make a change of coordinates $w = \\tilde{w} + a_n \\tilde{w}^{n -2}$. Then, to order $\\mathcal{O}(\\tilde{w}^n)$, we get\n\\begin{equation}\n \\frac{d \\tilde w}{d \\ell} = A \\tilde{w}^3 + (3 A a_n - A a_n (n -2) + A_n) \\tilde{w}^n + \\mathcal{O}(\\tilde{w}^{n+1}) .\n\\end{equation}\n\nWe can set the coefficient of $\\tilde{w}^n$ to $0$ if we use $a_n = A_n\/((n-5) A)$. This procedure fails for $n = 5$ but works for all $n > 5$.\\footnote{We note that we are assuming here that the coordinate transformations respect the symmetry of the problem $w \\rightarrow -w$. Otherwise, it is possible to remove the $\\tilde{w}^5$ term at the cost of introducing a $\\tilde{w}^4$ term.} Hence, the normal form of the equilibrium RFIM is given by \n\\begin{equation}\n \\frac{d \\tilde w}{d \\ell} = \\tilde{w}^3 - D \\tilde{w}^5 .\n\\end{equation}\nAs before, we have used the freedom to rescale $w$ to set the coefficient of the $\\tilde{w}^3$ term to 1. \n\nThe solution of this equation gives us an expression for the correlation length\n\\begin{equation}\n \\xi \\sim (1\/w^2 - D)^{D\/2} e^{1\/(2 w^2)} .\n\\end{equation}\nThis scaling form could explain the data in Meinke and Middleton with $D$ as a fit parameter. Notice that for this to work, $D$ must be positive. However, this solution has the strange property that the correlation length goes to $0$ for $w^2 = 1\/D$. If $w^2 > 1\/D$, $w^2(\\ell)$ decreases until it reaches $1\/D$. If $w^2 < 1\/D$, it increases until it reaches $1\/D$. As in the 4-d Ising model, it may be more useful to consider instead the flow equation\n\\begin{equation}\n\\frac{d \\tilde w}{d \\ell} = \\frac{\\tilde{w}^3}{1 + D \\tilde{w}^2} .\n\\end{equation}\nThis gives the scaling form \n\\begin{equation}\n \\xi \\sim e^{1\/(2 w^2)} (w^2)^{-D\/2} .\n\\end{equation}\nThis is exactly consistent with the scaling form Meinke and Middleton use to collapse their data. 
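The consistency of this scaling form with the flow equation can be checked numerically; a short sketch (the value of $D$ and the matching scale $w_c$ are illustrative choices of ours):

```python
import numpy as np
from scipy.integrate import solve_ivp

D = 2.14   # illustrative value, suggested by the Meinke-Middleton fit

def ell_star(w0, wc=1.0):
    # coarse-graining length for w to grow from w0 to wc
    # under dw/dl = w^3/(1 + D w^2); log(xi) up to the choice of wc
    hit = lambda ell, w: w[0] - wc
    hit.terminal, hit.direction = True, 1
    sol = solve_ivp(lambda ell, w: [w[0]**3/(1 + D*w[0]**2)], [0, 1e4], [w0],
                    events=hit, rtol=1e-11, atol=1e-13)
    return sol.t_events[0][0]

w0s = np.array([0.2, 0.25, 0.3])
# prediction log(xi) = 1/(2 w0^2) - (D/2) log(w0^2), up to a constant
pred = 1/(2*w0s**2) - (D/2)*np.log(w0s**2)
got = np.array([ell_star(w) for w in w0s])
# got - pred is the same constant for every w0
```

Integrating $(1 + D w^2)\/w^3$ by hand gives the same result, with the $w_0$-independent constant equal to $-1\/2$ for $w_c = 1$.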
Their data would predict the universal value $D = 2.14$~\\footnote{Note that they also have a fit parameter which sets the scale of the exponential. However, this parameter is not universal since it depends on the scale of $w$ unlike $D$.}. Any system in the same universality class should see a value of $D$ consistent with this value. However, different values of $D$ would correspond to different universality classes within the same family. We now turn to discussing the XY model before returning to the Ising model in dimensions 3, 2, and 1.\n\n\\subsection{2-d XY model}\n\nThe 2-d XY model is a remarkable system for several reasons. It was the site\nof a recently celebrated insight into the connection between ground-state\ntopology and phase transitions \\cite{kosterlitz2017nobel}. Thermodynamic\nquantities have essential singularities at its phase transition, not ordinary\npower laws, and their derivatives remain continuous to arbitrary order, making\nits phase transition infinite order \\cite{berezinskii1970destroying,\nkosterlitz1973ordering, kosterlitz1974critical}. This is related to the fact\nthat its RG flow equations are inherently nonlinear: they have no relevant and\ntwo marginal state variables, and the procedure laid out by\n\\eqref{normalformequation} for removing higher order terms from the flow\nequations contributes nothing to their simplification.\n\nThe XY model is usually posed as ferromagnetically interacting planar spins.\nIts partition function is exactly equivalent to the product of a trivial\nGaussian model---corresponding to spin wave degrees of freedom---with a\nneutral Coulomb gas---corresponding to the interaction of spin vortices\n\\cite{knops1977exact}. The latter component contains the interesting critical\nbehavior, which is characterized by these vortices going through an unbinding\ntransition. 
The flow equations for a Coulomb gas in dimension $d$ are given by\n\\begin{align}\n dK\/dl&=-K(\\tfrac14Ky^2+d - 2)+\\cdots\\\\\n dy\/dl&=-y(K-d)+\\cdots,\n\\end{align}\nwhere $K\\sim T^{-1}$ and $y$ is the fugacity of the vortices\n\\cite{kosterlitz1977d}, which for an XY model is a function of temperature and\ncannot be tuned independently but is a free parameter in other equivalent\nmodels, e.g., the Coulomb gas itself. For $d>2$ there is no phase transition\nin this system, and for $d<2$ a nontrivial unstable fixed point appears and\nthere is a phase transition in the hyperbolic universality family. It is worth\nnoting that these flow equations do not describe the XY model for any\ndimension besides $d=2$; 2 is the \\emph{upper} critical dimension of the\nCoulomb gas and these flow equations, while it is the \\emph{lower} critical\ndimension for the XY model. At $d=2$ the flow equations undergo a novel\nbifurcation: there appears a line of stable fixed points at $y=0$ for all\n$K>2$, terminating at $K=2$. This termination is the\nBerezinskii--Kosterlitz--Thouless (BKT) critical point. The flow equation near this point with $x=K-2$ is\n\\begin{align}\n \\label{eq:kt-truncated:1}\n dx\/dl&=-y^2+\\cdots\\\\\n \\label{eq:kt-truncated:2}\n dy\/dl&=-xy+\\cdots.\n\\end{align}\nThese flow equations are zero to linear order and have zero Jacobian at the fixed point.\n\nIn principle arbitrary higher-order terms in these equations exist, but there\nare several constraints on their form. There is a symmetry $y\\to-y$ in the\npartition function arising from the neutrality condition---$y$ enters the\npartition function in factors of $y^{-\\sum_rn_r^2}$ for $\\sum_rn_r=0$---which\nimplies that $dx\/dl$ be even in $y$ and $dy\/dl$ be odd. In addition, when the\nfugacity is zero the model is trivial and $x$ cannot flow, meaning that\n$dx\/dl$ must only have terms proportional to $y$. 
Having applied these\nconstraints, the simplest normal form has been proven by induction in\npolynomial order (Appendix A of \\cite{pelissetto2013renormalization}) to take\nthe form\n\\begin{align}\n \\label{eq:kt-normalform:1}\n d\\tilde x\/dl&=-\\tilde y^2-b_0\\tilde x\\tilde y^2-b_1\\tilde x^3\\tilde y^2+\\cdots\\\\\n \\label{eq:kt-normalform:2}\n &=-\\tilde y^2\\big(1+\\tilde xf(\\tilde x^2)\\big)\\\\\n \\label{eq:kt-normalform:3}\n d\\tilde y\/dl&=-\\tilde x\\tilde y.\n\\end{align}\nFor the BKT point in the sine--Gordon model, which is thought to display\nthe same universality as the XY model, it is known that $b_0=3\/2$\n\\cite{balog2000intrinsic, pelissetto2013renormalization}. An infinite number\nof coefficients remain, represented here in the form of the Taylor\ncoefficients of an analytic function $f(x^2)$. These numbers are universal in the\nsense that there is no redefinition of $\\tilde x$ and $\\tilde y$ such that the\nflow equations take on the form above and contain different coefficient\nvalues. Unlike those in the previous sections, this bifurcation does not have\na named classification as far as the authors know. \n\nA constant of the RG flow can be found by integrating these forms. 
First,\ndividing the equations \\eqref{eq:kt-normalform:3} by\n\\eqref{eq:kt-normalform:2} (and dropping the tildes), we find\n\\begin{equation}\n \\frac{dy}{dl}\\bigg\/\\frac{dx}{dl}=\\frac x{y(1+xf(x^2))},\n\\end{equation}\nwhich separates into\n\\begin{equation}\n y\\frac{dy}{dl}=\\frac x{1+xf(x^2)}\\frac{dx}{dl}.\n\\end{equation}\nIntegrating both sides and choosing $l_0$ such that $x(l_0)=0$, we find\n\\begin{align}\n \\frac12&\\big(y(l)^2-y(l_0)^2\\big)=\\int_{y(l_0)}^{y(l)}y\\,dy=\\int_{l_0}^ly\\frac{dy}{dl}dl\\\\\n &=\\int_{l_0}^l\\frac x{1+xf(x^2)}\\frac{dx}{dl}dl\n =\\int_0^{x(l)}\\frac x{1+xf(x^2)}dx.\n\\end{align}\nIt follows that \n\\begin{align}\n y(l_0)^2&=y(l)^2-2\\int_0^{x(l)}\\frac x{1+xf(x^2)}dx\\\\\n &=y(l)^2-x(l)^2+\\frac23b_0x(l)^3-\\frac12b_0^2x(l)^4\\\\\n &\\hspace{4em}+\\frac25(b_0^3+b_1)x(l)^5+O(x(l)^6)\n\\end{align}\nis a constant of the flow. The expansion of the integral can be taken to\narbitrary order with ordinary computer algebra software. The finite-size\nbehavior of the flow is rather complicated and does not yield closed-form\nresults; details can be found in \\cite{pelissetto2013renormalization}.\n\nThe XY model and other infinite-order transitions are usually characterized by\nthe anomalous exponent $\\sigma$ parametrizing the essential singularity in the\ncorrelation length,\n\\begin{equation}\n \\xi\\sim e^{at^{-\\sigma}},\n \\label{eq:kt-corr}\n\\end{equation}\nwhich for the BKT transition is $\\sigma=1\/2$ \\cite{kosterlitz1974critical}.\nConformal field theory predicts the presence of infinitely many models with\nthis anomalous exponent \\cite{ginsparg1988applied}.\nThe value of $\\sigma$ has been shown to be fixed by the quadratic-order truncation\nof the system's flow equation, independent of any higher-order terms\n\\cite{itoi1999renormalization}. There are six possible quadratic-order terms in flow equations with two variables. Of these, two can be removed by linear transformations of the two variables. 
Two more can be set to 1 by rescaling the variables. Hence, there are two parameters at quadratic order which determine the universality family that the system belongs to, and an infinite number of subsequent terms which determine the universality class. Giving a full classification of the possibilities is beyond the scope of this paper, but we give some examples below.


For instance, when the requirement of symmetry under $y\to-y$ is lifted, the
flow equations can no longer be brought to the form of Eqs.~\eqref{eq:kt-normalform:2}
and \eqref{eq:kt-normalform:3}; though the simplest form that results is not
yet known, it is certainly different from the symmetric case, a fact that can
be verified by simply trying to eliminate the nonsymmetric cubic terms. In such a
case the codimension of the bifurcation would likely be different, corresponding
to the fact that no \emph{a priori} reason exists for the vanishing of
the term linear in $y$ in $dx/dl$. Such linear terms would change the universality family. Among the infinite collection of BKT-like
conformal theories, and the many physical models identified as having a
BKT-like transition because their behavior resembles Eq.~\eqref{eq:kt-corr} (such as
percolation in grown networks \cite{callaway2001randomly,
dorogovtsev2001anomalous}), there may already be examples of models with
$\sigma=1/2$ that belong to another universality class. It could
also be the case that all BKT-like transitions are in fact members of the same
universality family and class.


Other universality classes and families definitely do exist, characterized by novel values
for $\sigma$.
The level-1 $\\mathrm{SU}(N)$ Wess--Zumino--Witten model has been\nfound to be characterized by $\\sigma=N\/(N+2)$ \\cite{itoi1997extended}.\nDislocated-mediated melting alone has produced a melange of anomalous\nexponents, with $\\sigma=1\/2$, $\\sigma=2\/5$, and $\\sigma=0.369\\,63\\ldots$\ndepending on precise specification of the model and the lattice geometry\n\\cite{nelson1979dislocation, young1979melting}. Topological transitions in\nsystems whose vortices are non-Abelian produce several series of $\\sigma$\nvalues dependent on particular symmetry \\cite{bulgadaev1999berezinskii}. Each\nvalue of $\\sigma$ indicates either a different universality family or merely a different class within the same family depending on how it affects the terms at quadratic order. A classification of possible\nbifurcations and corresponding simplest normal forms is in order for flow\nequations whose leading order is quadratic, and whose expansions are\nconstrained or not by various symmetries. This would be the first step in developing techniques for\ndistinguishing between universality classes and families of this type using\nexperimental or simulation data. \n\n\n\\subsection{3-d Ising model} \\label{sec:3dising}\n\nThere is a sense in which the Ising model is simplest in 3 dimensions because it is part of the hyperbolic universality family. It is also the first natural application of the $\\epsilon$ expansion. The transcritical bifurcation at 4 dimensions leads to an exchange of stabilities of the Gaussian fixed point and the Wilson-Fisher fixed point at a non-zero value of $u = u^*$. About this Wilson-Fisher fixed point, the flow equations of the 3-d Ising model are in the hyperbolic universality class with linear coefficients which define the Ising universality class. \n\n\\begin{figure}\n \\includegraphics[width=.85\\linewidth]{fixedpointsdimension.png}\n\\caption{Fixed points as a function of dimension in the Ising model. 
There is a transcritical bifurcation in both 4 and 1 dimensions, leading to $W$ functions and exponential correlation lengths respectively. The fixed point in 3-d is hyperbolic and the flow can be linearized. The fixed point in 2-d has a resonance which leads to a logarithmic specific heat. The challenge is to find a scaling form which interpolates between dimensions, giving the correct behavior in all of these dimensions.}
 \end{figure}

However, another approach is to consider the scaling form as a function of the dimension $\epsilon$ in a way that is well defined even at $\epsilon = 0$. Doing this naturally requires us to keep nonlinear terms in the equations, because we already know that the 4-d Ising model has nonlinear terms in its flow equations.

We want to write the flow equations about the 3-d fixed point but keep the nonlinear terms required for the scaling form to have the correct limiting behavior in 2-d and in 4-d. We can write the normal form of the flow equations as
 \begin{align}
 {d \tilde t}/{d \ell} &= \lambda_t \tilde{t} - A \tilde{u} \tilde{t} , \\
 {d \tilde u}/{d \ell} &= \lambda_u \tilde{u} - \tilde{u}^2 + D \tilde{u}^3, \\
 {d \tilde f}/{d \ell} &= d \tilde f - {\tilde t}^2 , \\
 {d \tilde h}/{d \ell} &= \lambda_h \tilde{h} .
\end{align}

We have included the nonlinear terms in $u$ required for the correct scaling behavior, as well as the resonance between the temperature and the free energy. As usual, we switch notation to $t$, $h$ and $u$ with the understanding that they differ from the normal form variables by analytic corrections.
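The scaling variable derived next follows from a partial-fraction decomposition of $d\tilde t/d\tilde u$. Writing the denominator as $D u (u - u_1)(u - u_2)$, so that $u_1 + u_2 = 1/D$ and $u_1 u_2 = \lambda_u/D$, a short SymPy sketch can confirm the decomposition whose residues become the exponents:

```python
import sympy as sp

u, lt, A, D, u1, u2 = sp.symbols('u lambda_t A D u1 u2')

# denominator of the flow ratio dt/du, factored over its nonzero roots:
# lambda_u*u - u**2 + D*u**3 = D*u*(u - u1)*(u - u2)
# when u1 + u2 = 1/D and u1*u2 = lambda_u/D
lhs = (lt - A*u) / (D*u*(u - u1)*(u - u2))

# partial fractions: the residues at u = 0, u1, u2 are the scaling exponents
rhs = (lt/(D*u1*u2)/u
       + (lt - A*u1)/(D*u1*(u1 - u2))/(u - u1)
       + (lt - A*u2)/(D*u2*(u2 - u1))/(u - u2))

print(sp.simplify(lhs - rhs))   # vanishes identically
```

Integrating each simple-pole term of $d\ln t/du$ then exponentiating gives the product of power laws quoted below.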
Let us look at the scaling variable formed with $t$ and $u$, which can be obtained by solving

\begin{equation}
 \frac{d \tilde t}{d \tilde u} = (\lambda_t \tilde{t} - A \tilde{u} \tilde{t})/(\lambda_u \tilde{u} - \tilde{u}^2 + D \tilde{u}^3) .
 \label{scalingequation3d}
\end{equation}
The solution of this equation gives the scaling variable
\begin{equation}
 t \, u^{-\frac{\lambda_t}{D u_1 u_2}} (u - u_1)^{-\frac{(\lambda_t - A u_1)}{D u_1 (u_1 - u_2)}} (u - u_2)^{-\frac{(\lambda_t - A u_2)}{D u_2 (u_2 - u_1)}} = \textrm{const}
 \label{scaling3d}
\end{equation}
where $u_1$ and $u_2$ are the two non-zero roots of the denominator on the r.h.s.\ of Eq.~\eqref{scalingequation3d}, which to first order in $\lambda_u$ are given by $u_1 = \lambda_u$ and $u_2 = 1/D - \lambda_u$. The form of the scaling variable is interesting: it is essentially a product of the linearized scaling variables at the three fixed points of the equation. Taking the limit $\epsilon \rightarrow 0$, we get
\begin{align}
 t e^{-2/u} u^{2 D - A} (1 - D u)^{A-2 D} = \textrm{const}
\end{align}
which is the right scaling variable in $4$-d. We have not yet been able to obtain an analytical form for the scaling variable involving $t$ and $h$, because the equation for $u(l)$ does not seem to have a closed-form solution here (unlike the 4-d case). Nevertheless, we are motivated by the attempt to create scaling variables which interpolate between dimensions and have the correct scaling behavior in every dimension from $4$ down to $1$. Once the full scaling variables are written down, a first test would be to see whether they do better at collapsing the numerical data in 3-d.


\subsection{1-d Ising model} \label{sec:1dising}

The 1-d Ising model is somewhat different because it is at the lower critical dimension and does not have a phase transition.
The 1-d Ising model has an exact solution which can be obtained by using transfer matrices. The partition function can be written as the trace of a transfer matrix $\mathcal{T}^N$ where $N$ is the number of spins in the system. The matrix is $\mathcal{T}_{i j} = e^{-\beta \mathcal{H}(s_i, s_j)}$. Coarse graining here can be done by a well-defined procedure: the coarse-grained transfer matrix is defined as $\tilde{\mathcal{T}} = \mathcal{T}^b$ where $b$ is the coarse-graining length scale. Defining $\ell = \log b$ and expanding for $b$ close to 1, we get a flow equation for the temperature $T$,
\begin{equation}
 \frac{d T}{d \ell} = - \frac{T^2}{2} \sinh \left(\frac{2}{T} \right) \log \left (\tanh \left(\frac{1}{T}\right) \right) .
 \label{1dfullflow}
\end{equation}
This is different from the flow equations we have considered so far because of the presence of non-analytic terms in the flow. The non-analytic factor $\sinh(2/T)\log(\tanh(1/T))$ multiplying $-T^2/2$ approaches $-1$ as $T \to 0$. So, this equation corresponds to a transcritical bifurcation
\begin{equation}
 \frac{d T}{d \ell} = \frac{T^2}{2} + ...
\end{equation}
where the additional terms are non-analytic at $T = 0$. This can be used to derive a correlation length $\xi \sim \exp(2/T)$. To interpret the flow further, consider the change of coordinates $\kappa = \exp(-2/T)$. In these variables, the flow is
\begin{equation}
 \frac{d \kappa}{d \ell} = \frac{\kappa^2 - 1}{2} \log \left( \frac{1-\kappa}{\kappa + 1} \right) .
\end{equation}
Evidently, the flow is analytic in this variable. Solving the full flow Eq.~\eqref{1dfullflow} gives $\xi \sim -1/(\log \tanh(1/T))$.

For non-zero $\epsilon$, this argument is usually extended in what is called a Migdal-Kadanoff procedure for doing RG~\cite{kadanoff1976notes, chaikin1995principles}. The flow equations are identical except for the presence of a $- \epsilon T$ term which serves as the bifurcation parameter.
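The change of variables $\kappa = e^{-2/T}$ can be checked numerically. A small SymPy sketch compares $d\kappa/d\ell$ obtained from Eq.~\eqref{1dfullflow} by the chain rule with the analytic form $\tfrac12(\kappa^2-1)\log\big((1-\kappa)/(1+\kappa)\big)$:

```python
import sympy as sp

T = sp.symbols('T', positive=True)

# full 1-d flow, Eq. (1dfullflow)
dT_dl = -T**2/2 * sp.sinh(2/T) * sp.log(sp.tanh(1/T))

# chain rule: kappa = exp(-2/T)  =>  dkappa/dl = (dkappa/dT) * (dT/dl)
kappa = sp.exp(-2/T)
dkappa_dl = sp.diff(kappa, T) * dT_dl

# proposed analytic form of the flow in the kappa variable
k = sp.symbols('kappa', positive=True)
analytic = (k**2 - 1)/2 * sp.log((1 - k)/(1 + k))

# compare numerically at a few temperatures
for Tval in [0.3, 0.7, 1.5, 3.0]:
    a = float(dkappa_dl.subs(T, Tval))
    b = float(analytic.subs(k, sp.exp(-2/sp.Float(Tval))))
    assert abs(a - b) < 1e-10
```

Expanding the logarithm, $\log\big((1-\kappa)/(1+\kappa)\big) = -2\kappa - 2\kappa^3/3 - \cdots$, makes the analyticity at $\kappa = 0$ explicit.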
The $1+\\epsilon$ expansion can be summed completely because the flow equation is known to all orders. It does not yield very accurate critical exponents though it gives the exact value of the critical temperature in 2-d (because it respects duality symmetry). Several people have improved the expansion~\\cite{martinelli1981systematical, bruce1981droplet}. \n\nThe presence of non-analytic terms in the flow equations complicates the application of normal form theory. We will come back to it when discussing Legendre transform of flow equations.\n\n\n\\subsection{2-d Ising model} \\label{sec:2dising}\nThe 2-d Ising model is a particularly nice example because it has an exact solution in the absence of a magnetic field. All predictions then can be compared to the exact solution. Surprisingly, despite the known exact solution, the scaling behavior of the 2-d Ising model is still not completely understood. A full discussion of the 2-d Ising model will be given in separate work~\\cite{Clement18}. Here, we give a brief summary of the issues involved.\n\nThe only variable required to describe the 2-d Ising model in the absence of a field is the temperature $t$. The linear eigenvalues of the free energy and the temperature are $2$ and $1$ respectively. The normal form of the flow equations can be written as \n\\begin{align}\n \\frac{d \\tilde f}{d \\ell} &= 2 \\tilde f - \\tilde t^2 , \\\\\n \\frac{d \\tilde t}{d \\ell} &= \\tilde t .\n\\end{align}\nWe have used the fact that the only term which cannot be removed by traditional normal form analysis is the resonance $t^2$. In fact, it cannot be removed by any analytic change of variables. We have also used the freedom to rescale $t$ to set the coefficient of the resonance equal to -1~\\footnote{The sign is set to match the exact solution of the square lattice nearest neighbor Ising model}. 
The solution to this can be written as $\tilde t = \tilde t_0 e^{\ell}$, with free energy

\begin{equation}
\label{2disingeq}
 \tilde f(\tilde t_0, \ell) = e^{-2 \ell} \tilde f(\tilde t_0 e^{\ell}) + \tilde t_0^2 \ell .
\end{equation}

Coarse graining until $\tilde t(\ell) = 1$, i.e., $\ell = -\log \tilde t_0$, we get
\begin{equation}
 \tilde f(\tilde t_0) = \tilde t_0^2 \tilde f(1) - \tilde t_0^2 \log \tilde t_0 .
\end{equation}

Now, the normal form variable $\tilde t_0$ is some analytic function of the physical variable $t_0$, linear to first order in $t_0$. Hence, we can write it as $\tilde t_0 = t_0 (1 + c(t_0))$ where $c$ is some analytic function. Then, we can expand
\begin{widetext}
\begin{align}
 \tilde f(t_0) &= t_0^2 (1 + c(t_0))^2 \tilde f(1) - t_0^2 (1 + c(t_0))^2 \log \big( t_0 (1 + c(t_0)) \big) , \\
 &= t_0^2 (1 + c(t_0))^2 \tilde f(1) - t_0^2 (1 + c(t_0))^2 \log (1 + c(t_0)) - t_0^2 (1 + c(t_0))^2 \log t_0 , \\
 &= a(t_0) + b(t_0) \log t_0
\end{align}
\end{widetext}
where both $a(t_0)$ and $b(t_0)$ are analytic functions of $t_0$. Meanwhile, any change of coordinates which adds an analytic function of $t_0$ to $\tilde f$ can be absorbed into the definition of $a(t_0)$. Hence, the most general form of the free energy of the 2-d Ising model is $f = a(t_0) + b(t_0) \log t_0$. Indeed, the exact solution of the 2-d Ising model can be written in this form~\cite{caselle2002irrelevant}.

While the basic solution of the 2-d Ising model is simple, some challenges still remain. The scaling form in the presence of other variables (like the magnetic field and other irrelevant variables), which has so far only been conjectured~\cite{aharony1983nonlinear, caselle2002irrelevant}, follows naturally from an application of normal form theory. It is given simply by including the other variables in the argument of the free energy in Eq.~\eqref{2disingeq} before coarse graining until $t(\ell) = 1$.
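The integration of the $\tilde f$ flow that leads to Eq.~\eqref{2disingeq} can be checked with computer algebra; note the sign of the resonance term that results from $d\tilde f/d\ell = 2\tilde f - \tilde t^2$:

```python
import sympy as sp

l, t0, f0 = sp.symbols('ell t0 f0')
f = sp.Function('f')

# df/dl = 2 f - t^2 along the trajectory t(l) = t0 * exp(l)
ode = sp.Eq(f(l).diff(l), 2*f(l) - (t0*sp.exp(l))**2)
sol = sp.dsolve(ode, f(l), ics={f(0): f0}).rhs

# flowed free energy: f(l) = e^{2l} * (f0 - t0^2 * l)
assert sp.simplify(sol - sp.exp(2*l)*(f0 - t0**2*l)) == 0

# equivalently f0 = e^{-2l} f(l) + t0^2 * l, the scaling relation
assert sp.simplify(sp.exp(-2*l)*sol + t0**2*l - f0) == 0
```

Setting $\ell = -\log t_0$ in the last relation reproduces the coarse-grained free energy with its $t_0^2 \log t_0$ resonance term.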
Irrelevant variables are the source of singular corrections to scaling. An interesting unresolved issue is the presence of higher powers of logarithms in the susceptibility which are not found in the free energy~\cite{orrick2001susceptibility, chan2011ising}. This is usually attributed to the presence of irrelevant variables. Here it is possible to show that the irrelevant variables derived from conformal field theory~\cite{caselle2002irrelevant} would in fact lead to higher powers of logarithms in the free energy, which are not observed. Hence, they cannot explain the higher powers of logarithms in the susceptibility. It is possible that there are other irrelevant variables in the 2-d square-lattice nearest-neighbor Ising model with a field which are not predicted by conformal field theory but can capture the higher powers of logarithms in the susceptibility, as they turn on with a field.

The logarithm due to the resonance in the 2-d Ising model is most apparent in the specific heat. It is easy to derive the flow equation for the inverse specific heat, which has the form
\begin{equation}
 \frac{d C^{-1}}{d \ell} = 2 C^{-2} ,
\end{equation}
and has a transcritical bifurcation in two dimensions. This raises a question: is it legitimate to talk about a bifurcation in two dimensions for the Ising model if it happens in the space of results rather than the space of control variables? Intriguingly, though perhaps unrelated, a bifurcation has been observed in 2 dimensions using methods of conformal bootstrap~\cite{golden2015no, el2014conformal}. In thermodynamics, a natural framework for interchanging results and control parameters is given by Legendre transforms. However, the flow equations for the Legendre-transformed coordinates generically have non-analyticities in them. We suspect that the variable $t$ (and $h$, etc.) is uniquely specified as the correct variable for RG.
It is possible that it is more natural to consider removing degrees of freedom in the canonical ensemble ($t$ and $f$) than in a microcanonical one ($E$ and $S$)~\footnote{In fact, there is an interesting connection here with information geometry. It is much more natural to talk about the Fisher information metric in the canonical ensemble. To be able to talk about the uncertainty in a thermodynamic quantity, one has to be able to exchange that quantity with the environment. Otherwise, calculating the Fisher information metric can give ill-defined answers. This is further motivation to consider that a particular thermodynamic ensemble may be more suitable for some purposes.}. A fuller discussion will be given in forthcoming work~\cite{Clement18}.


\section{Conclusion}

We have shown how normal form theory leads to a systematic procedure for handling the singularity in RG flows. The concept of universality families broadens the notion of a universality class, and we have elucidated it with several different examples. We have focused on getting a precise handle on the singularity at the critical point; however, normal form theory also gives an elegant way to fit corrections to scaling. Interestingly, even the scaling of the 2-d Ising model, which has an exact solution, has some unresolved mysteries which we are exploring. It is possible that interpolating between dimensions in a way that
captures the correct singularities can improve scaling collapses
in all dimensions. Finally, we are exploring the application of our methods
to systems like jamming in 2-d~\cite{goodrich2014jamming}, where logarithmic
corrections are observed but no renormalization-group theory is available.
In general, we expect this fruitful confluence of dynamical systems theory and
the renormalization group will not only clarify and illuminate previously
known technical calculations, but will also facilitate quantitative analysis
of experimental and theoretical systems farther from their critical points
and before the underlying field theory is well understood.

\section{Acknowledgements}

We thank Tom Lubensky, Andrea Liu, John Guckenheimer, Randall Kamien and Cameron Duncan for useful conversations. AR, CBC, LXH, JPK, DBL and JPS were supported by the National Science Foundation through Grant No. NSF DMR-1719490. LXH was supported by a fellowship from Cornell University. DZR was supported by the Bethe/KIC Fellowship and the National Science Foundation through Grant No. NSF DMR-1308089.

\bibliographystyle{unsrt}