|
{"text":"\\section*{Background}\n\nCraps is played by rolling a pair of dice repeatedly. For most bets, only the sum of the numbers appearing on the two dice matters, and this sum has distribution\n\\begin{equation}\\label{dice}\n\\pi_j:={6-|j-7|\\over36}, \\qquad j=2,3,\\ldots,12.\n\\end{equation}\nThe basic bet at craps is the \\textit{pass-line bet}, which is defined as follows. The first roll is the \\textit{come-out roll}. If 7 or 11 appears (a \\textit{natural}), the bettor wins. If 2, 3, or 12 appears (a \\textit{craps number}), the bettor loses. If a number belonging to \n$$\n\\mathscr{P}:=\\{4,5,6,8,9,10\\}\n$$\nappears, that number becomes the \\textit{point}. The dice continue to be rolled until the point is repeated (or \\textit{made}), in which case the bettor wins, or 7 appears, in which case the bettor loses. The latter event is called a \\textit{seven out}. A win pays even money. The first roll following a decision is a new come-out roll, beginning the process again.\n\nA shooter is permitted to roll the dice until he or she sevens out. The sequence of rolls by the shooter is called the \\textit{shooter's hand}. Notice that the shooter's hand can contain winning 7s and losing decisions prior to the seven out. The \\textit{length} of the shooter's hand (i.e., the number of rolls) is a random variable we will denote by $L$. Our concern here is with \n\\begin{equation}\\label{tail}\nt(n):=\\P(L\\ge n),\\qquad n\\ge1,\n\\end{equation}\nthe tail of the distribution of $L$. For example, $t(154)$ is the probability of achieving a hand at least as long as that of Ms.~DeMauro. As can be easily verified from \n(\\ref{recursion}), (\\ref{matrixformula}), or (\\ref{closed}) below, $t(154)\\approx0.178\\,882\\,426\\times10^{-9}$; \nto state it in the way preferred by the media, this amounts to one chance in 5.59 billion, approximately. The 1 in 3.5 billion figure came from a simulation that was not long enough. 
The 1 in 1.56 trillion figure came from $(1-\\pi_7)^{154}$, which is the right answer to the wrong question.\n\n\n\n\n\n\n\n\\section*{Two methods}\n\nWe know of two methods for evaluating the tail probabilities (\\ref{tail}).\nThe first is by recursion. As pointed out in \\cite{Ethier}, $t(1)=t(2)=1$ and \n\\begin{eqnarray}\\label{recursion}\n\\lefteqn{t(n)}\\nonumber\\\\\n&=&\\bigg(1-\\sum_{j\\in\\mathscr{P}}\\pi_j\\bigg)t(n-1)+\\sum_{j\\in\\mathscr{P}}\\pi_j(1-\\pi_j-\\pi_7)^{n-2}\\nonumber\\\\\n&&\\quad{}+\\sum_{j\\in\\mathscr{P}}\\pi_j\\sum_{l=2}^{n-1}(1-\\pi_j-\\pi_7)^{l-2} \\pi_j\n\\,t(n-l)\n\\end{eqnarray}\nfor each $n\\ge3$. Indeed, for the event that the shooter\nsevens out in no fewer than $n$ rolls to occur, consider the result of the initial\ncome-out roll. If a natural or a craps number occurs, then, beginning with the\nnext roll, the shooter must seven out in no fewer than $n-1$ rolls. If a\npoint number occurs, then there are two possibilities. Either the point is still\nunresolved after $n-2$ additional rolls, or it is made at roll $l$ for some\n$l\\in\\{2,3,\\ldots,n-1\\}$ and the shooter subsequently sevens out in no fewer than\n$n-l$ rolls. \n\nThe second method, first suggested, to the best of our knowledge, by Peter A. Griffin in 1987 (unpublished) and rediscovered several times since, is based on a Markov chain. The state space is \n\\begin{equation}\\label{S}\nS:=\\{{\\rm co},{\\rm p}4\\mbox{-}10,{\\rm p}5\\mbox{-}9,{\\rm p}6\\mbox{-}8,7{\\rm o}\\} \\equiv \\{1,2,3,4,5\\},\n\\end{equation}\nwhose five states represent the events that the shooter is coming out, has established the point 4 or 10, has established the point 5 or 9, has established the point 6 or 8, and has sevened out. 
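The recursion just described is straightforward to implement; a minimal sketch in Python (an illustration, not part of the original analysis), using exact rational arithmetic from the standard \texttt{fractions} module so that no roundoff enters:

```python
from fractions import Fraction

# Dice-total probabilities pi_j = (6 - |j - 7|)/36, j = 2, ..., 12.
PI = {j: Fraction(6 - abs(j - 7), 36) for j in range(2, 13)}
POINTS = [4, 5, 6, 8, 9, 10]

def t(n, _memo={1: Fraction(1), 2: Fraction(1)}):
    """Tail probability P(L >= n), computed by the recursion for n >= 3."""
    if n not in _memo:
        # Natural or craps number on the come-out roll: start over with n-1 rolls.
        s = (1 - sum(PI[j] for j in POINTS)) * t(n - 1)
        for j in POINTS:
            q = 1 - PI[j] - PI[7]          # neither point j nor 7 on one roll
            s += PI[j] * q ** (n - 2)      # point still unresolved after n-2 rolls
            # Point made at roll l, then seven out in no fewer than n-l rolls.
            s += PI[j] * sum(q ** (l - 2) * PI[j] * t(n - l) for l in range(2, n))
        _memo[n] = s
    return _memo[n]

print(float(t(154)))  # ~ 1.78882426e-10, i.e. about 1 chance in 5.59 billion
```

The printed value reproduces $t(154)\approx0.178\,882\,426\times10^{-9}$ quoted above.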
The one-step transition matrix, which can be inferred from (\\ref{dice}), is\n\\begin{equation}\\label{P}\n\\bm P:={1\\over36}\\left(\\begin{array}{ccccc}\n12&6&8&10&0\\\\\n3&27&0&0&6\\\\\n4&0&26&0&6\\\\\n5&0&0&25&6\\\\\n0&0&0&0&36\\end{array}\\right).\n\\end{equation}\nThe probability of sevening out in $n-1$ rolls or fewer is then just the probability that absorption in state 7o occurs by the $(n-1)$th step of the Markov chain, starting in state co. A marginal simplification results by considering the 4 by 4 principal submatrix $\\bm Q$ of (\\ref{P}) corresponding to the transient states.\nThus, we have\n\\begin{equation}\\label{matrixformula}\nt(n)=1-(\\bm P^{n-1})_{1,5} = \\sum_{j=1}^4(\\bm Q^{n-1})_{1,j}.\n\\end{equation}\nClearly, (\\ref{recursion}) is not a closed-form expression, and we do not regard (\\ref{matrixformula}) as being in closed form either. Is there a closed-form expression for $t(n)$?\n\n\n\n\n\\section*{Positivity of the eigenvalues}\n\nWe begin by showing that the eigenvalues of $\\bm Q$ are positive. The determinant of\n\\begin{eqnarray*}\n\\lefteqn{\\bm Q - z \\bm I}\\\\ &=& {1\\over36}\n\\left(\\begin{array}{cccc}\n12- 36z&6&8&10\\\\\n3&27- 36z&0&0\\\\\n4&0&26- 36z&0\\\\\n5&0&0&25- 36z\\\\\n\\end{array}\\right)\n\\end{eqnarray*}\nis unaltered by row operations. From the first row subtract $6\/(27-36z)$ times the second row, $8\/(26-36z)$ times the third row,\nand $10\/(25- 36z)$ times the fourth row, cancelling the entries 6\/36, 8\/36, and 10\/36 and making the (1,1) entry equal to 1\/36 times\n\\begin{equation}\\label{1,1-entry}\n12- 36z - 3\\, \\frac{6}{ 27-36z} - 4\\, \\frac{8}{26-36z} - 5\\, \\frac{10}{25- 36z}.\n\\end{equation}\nThe determinant of $\\bm Q - z \\bm I$, and therefore the characteristic polynomial $q(z)$ of \n$\\bm Q$, is then just the product of the diagonal entries in the transformed matrix,\nwhich is (\\ref{1,1-entry}) multiplied by $(27- 36z)(26- 36z)(25- 36z)\/(36)^4$. 
Thus,\n\\begin{eqnarray*}\nq(z)&=&[(12- 36z)(27- 36z)(26- 36z)(25- 36z)\\\\\n&&\\quad{}-18(26- 36z)(25- 36z)\\\\\n&&\\quad{}-32(27- 36z)(25- 36z)\\\\\n&&\\quad{}-50(27- 36z)(26- 36z)]\/(36)^4.\n\\end{eqnarray*}\nWe find that $q(1),q(27\/36),q(26\/36),q(25\/36),q(0)$ alternate signs and therefore the eigenvalues are positive and interlaced between the diagonal entries (ignoring the entry 12\/36). \nMore precisely, denoting the eigenvalues by $1>e_1>e_2>e_3>e_4>0$, we have\n$$\n1>e_1>{27\\over36}>e_2>{26\\over36}>e_3>{25\\over36}>e_4>0.\n$$\n\nThe matrix $\\bm Q$, which has the structure of an arrowhead matrix \\cite{HornJohnson}, is positive definite, although not symmetric.\nThis is easily seen by applying the same type of row operations to the symmetric part $\\bm A = \\frac{1}{2}(\\bm Q+\\bm Q^\\T)$ of $\\bm Q$ to show that the eigenvalues of $\\bm A$ interlace its diagonal elements (except 12\/36). But a symmetric matrix is positive definite if and only if all its eigenvalues are positive, and a non-symmetric matrix is positive definite if and only if its symmetric part is positive definite, confirming that $\\bm Q$ is positive definite.\n\n\n\n\n\\section*{A closed-form expression}\n\nThe eigenvalues of $\\bm Q$ are the four roots of the quartic equation $q(z)=0$ or\n$$\n23328z^4-58320z^3+51534z^2-18321z+1975=0,\n$$\nwhile $\\bm P$ has an additional eigenvalue, 1, the spectral radius. \nWe can use the quartic formula (or \\textit{Mathematica}) to find these roots. We notice that the complex number \n$$\n\\alpha:=\\zeta^{1\/3}+{9829\\over \\zeta^{1\/3}}, \n$$\nwhere\n$$\n\\zeta:=-710369+18i\\sqrt{1373296647},\n$$ \nappears three times in each root. Fortunately, $\\alpha$ is positive, as we see by writing $\\zeta$ in polar form, that is, $\\zeta=re^{i\\theta}$. 
We obtain\n$$\n\\alpha=2\\sqrt{9829}\\,\\cos\\bigg[{1\\over3}\\cos^{-1}\\bigg(-{710369\\over9829\\sqrt{9829}}\\bigg)\\bigg].\n$$\nThe four eigenvalues of $\\bm Q$ can be expressed as\n\\begin{eqnarray*}\ne_1&:=&e(1,1),\\\\\ne_2&:=&e(1,-1),\\\\\ne_3&:=&e(-1,1),\\\\\ne_4&:=&e(-1,-1),\n\\end{eqnarray*}\nwhere\n\\begin{eqnarray*}\ne(u,v)&:=&{5\\over8}+{u\\over72}\\sqrt{{349+\\alpha\\over3}}\\\\\n&&\\quad{}+{v\\over72}\\sqrt{{698-\\alpha\\over3}-2136u\\sqrt{3\\over349+\\alpha}}.\n\\end{eqnarray*}\n\nNext we need to find right eigenvectors corresponding to the five eigenvalues of $\\bm P$. Fortunately, these eigenvectors can be expressed in terms of the eigenvalues. Indeed,\nwith $\\bm r(x)$ defined to be the vector-valued function\n$$\n\\left(\\begin{array}{c}-5+(1\/5)x\\\\\n-175+(581\/15)x-(21\/10)x^2+(1\/30)x^3\\\\\n275\/2-(1199\/40)x+(8\/5)x^2-(1\/40)x^3\\\\\n1\\\\\n0\\end{array}\\right)\n$$\nwe find that right eigenvectors corresponding to eigenvalues $1,e_1,e_2,e_3,e_4$ are\n$$\n(1,1,1,1,1)^\\T,\\;\\bm r(36e_1),\\;\\bm r(36e_2),\\;\\bm r(36e_3),\\;\\bm r(36e_4),\n$$\nrespectively. 
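These closed-form eigenvalues are easily checked numerically; a sketch (NumPy assumed; an illustration, not part of the original derivation) comparing $e(u,v)$ with a direct numerical diagonalization of $\bm Q$:

```python
import numpy as np

# Transient-state submatrix Q of P (states co, p4-10, p5-9, p6-8).
Q = np.array([[12, 6, 8, 10],
              [3, 27, 0, 0],
              [4, 0, 26, 0],
              [5, 0, 0, 25]]) / 36

# alpha in its real (trigonometric) form.
alpha = 2 * np.sqrt(9829) * np.cos(np.arccos(-710369 / (9829 * np.sqrt(9829))) / 3)

def e(u, v):
    """Closed-form eigenvalue e(u, v), u, v in {+1, -1}."""
    return (5 / 8 + (u / 72) * np.sqrt((349 + alpha) / 3)
            + (v / 72) * np.sqrt((698 - alpha) / 3
                                 - 2136 * u * np.sqrt(3 / (349 + alpha))))

closed = sorted([e(1, 1), e(1, -1), e(-1, 1), e(-1, -1)], reverse=True)
numeric = sorted(np.linalg.eigvals(Q).real, reverse=True)
print(closed)
```

The two lists agree to machine precision, and the interlacing $1>e_1>27/36>e_2>26/36>e_3>25/36>e_4>0$ can be read off directly.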
Letting $\\bm R$ denote the matrix whose columns are these right eigenvectors and putting $\\bm L:=\\bm R^{-1}$, the rows of which are left eigenvectors, we know by (\\ref{matrixformula}) and the spectral representation that\n$$\nt(n)=1-\\{\\bm R\\,{\\rm diag}(1,e_1^{n-1},e_2^{n-1},e_3^{n-1},e_4^{n-1})\\bm L\\}_{1,5}.\n$$\n\nAfter much algebra (and with some help from \\textit{Mathematica}), we obtain\n\\begin{equation}\\label{closed}\nt(n)=c_1 e_1^{n-1}+c_2 e_2^{n-1}+c_3 e_3^{n-1}+c_4 e_4^{n-1},\n\\end{equation}\nwhere the coefficients are defined in terms of the eigenvalues and the function\n\\begin{eqnarray*}\nf(w,x,y,z)&:=&(-25+36w)[4835-5580(x+y+z)\\\\\n&&\\quad{}+6480(xy+xz+yz)-7776xyz]\\\\\n&&\\;\\qquad\/[38880(w-x)(w-y)(w-z)]\n\\end{eqnarray*}\nas follows:\n\\begin{eqnarray*}\nc_1&:=&f(e_1,e_2,e_3,e_4),\\\\ c_2&:=&f(e_2,e_3,e_4,e_1),\\\\ c_3&:=&f(e_3,e_4,e_1,e_2),\\\\ c_4&:=&f(e_4,e_1,e_2,e_3).\n\\end{eqnarray*}\nOf course, (\\ref{closed}) is our closed-form expression. \n\nIncidentally, the fact that\n$t(1)=t(2)=1$ implies that\n\\begin{equation}\\label{sum1}\nc_1+c_2+c_3+c_4=1\n\\end{equation}\nand\n$$\nc_1e_1+c_2e_2+c_3e_3+c_4e_4=1.\n$$\n\nIn a sequence of independent Bernoulli trials, each with success probability $p$, the number of trials $X$ needed to achieve the first success has the \\textit{geometric distribution} with parameter $p$, and\n$$\nP(X\\ge n)=(1-p)^{n-1},\\qquad n\\ge1.\n$$\nIt follows that the distribution of $L$ is \\textit{a linear combination of four geometric distributions}. It is not a convex combination: (\\ref{sum1}) holds but, as we will see,\n$$ \nc_1>0,\\quad c_2<0,\\quad c_3<0,\\quad c_4<0. 
\n$$\nIn particular, we have the inequality\n\\begin{equation}\\label{ineq}\nt(n)<c_1 e_1^{n-1},\\qquad n\\ge1,\n\\end{equation}\nas well as the asymptotic formula\n\\begin{equation}\\label{asymp}\nt(n)\\sim c_1 e_1^{n-1}\\quad {\\rm as\\ }n\\to\\infty.\n\\end{equation}\n\n\n\n\n\n\\section*{Numerical approximations}\n\nRounding to 18 decimal places, the nonunit eigenvalues of $\\bm P$ are\n\\begin{eqnarray*}\ne_1&\\approx&0.862\\,473\\,751\\,659\\,322\\,030,\\\\\ne_2&\\approx&0.741\\,708\\,271\\,459\\,795\\,977,\\\\\ne_3&\\approx&0.709\\,206\\,775\\,794\\,379\\,015,\\\\\ne_4&\\approx&0.186\\,611\\,201\\,086\\,502\\,979,\n\\end{eqnarray*}\nand the coefficients in (\\ref{closed}) are\n\\begin{eqnarray*}\nc_1&\\approx&\\phantom{-}1.211\\,844\\,812\\,464\\,518\\,572,\\\\\nc_2&\\approx&-0.006\\,375\\,542\\,263\\,784\\,777,\\\\\nc_3&\\approx&-0.004\\,042\\,671\\,248\\,651\\,503,\\\\\nc_4&\\approx&-0.201\\,426\\,598\\,952\\,082\\,292.\n\\end{eqnarray*}\nThese numbers will give very accurate results over a wide range of values of $n$. \n\nThe result (\\ref{asymp}) shows that the leading term in (\\ref{closed}) may be adequate for large $n$; it can be shown that\n$$\n1<c_1 e_1^{n-1}\/t(n)<1+10^{-m}\n$$\nfor $m=3$ if $n\\ge19$; for $m=6$ if $n\\ge59$; for $m=9$ if $n\\ge104$; and for $m=12$ if $n\\ge150$. \n\n\n\n\n\n\\section*{Crapless craps}\n\nIn crapless craps \\cite[p.~354]{ScarneRawson}, as the name suggests, there are no craps numbers and 7 is the only natural. Therefore, the set of possible point numbers is \n$$\n\\mathscr{P}_0:=\\{2,3,4,5,6,8,9,10,11,12\\}\n$$\nbut otherwise the rules of craps apply. 
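As at regular craps, the tail of the hand-length distribution can be computed from powers of the substochastic matrix restricted to the transient states. A sketch (NumPy assumed; the helper function and its layout are ours, with every transition probability inferred from (\ref{dice})):

```python
import numpy as np

pi = {j: (6 - abs(j - 7)) / 36 for j in range(2, 13)}

def transient_matrix(point_pairs):
    """Substochastic matrix on the transient states: co, then one state per point pair."""
    m = len(point_pairs) + 1
    Q = np.zeros((m, m))
    for i, (a, b) in enumerate(point_pairs, start=1):
        Q[0, i] = pi[a] + pi[b]          # come-out roll establishes this point
        Q[i, i] = 1 - pi[a] - pi[7]      # point still unresolved
        Q[i, 0] = pi[a]                  # point made: back to the come-out
    Q[0, 0] = 1 - Q[0].sum()             # rolls that leave the shooter coming out
    return Q

def tail(Q, n):
    """t(n) = P(hand length >= n) = sum of row 'co' of Q^(n-1)."""
    return np.linalg.matrix_power(Q, n - 1)[0].sum()

Q = transient_matrix([(4, 10), (5, 9), (6, 8)])                     # regular craps
Q0 = transient_matrix([(2, 12), (3, 11), (4, 10), (5, 9), (6, 8)])  # crapless craps
print(tail(Q, 154), tail(Q0, 154))
```

Summing a row of $\bm Q^{n-1}$ involves only positive terms, so there is no cancellation error; the two printed tails stand in a ratio of about 6, consistent with the comparison below.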
\n\nWith $L_0$ denoting the length of the shooter's hand, the analogues of (\\ref{S})--(\\ref{matrixformula}) are\n\\begin{eqnarray*}\nS_0&:=&\\{{\\rm co},{\\rm p}2\\mbox{-}12,{\\rm p}3\\mbox{-}11,{\\rm p}4\\mbox{-}10,{\\rm p}5\\mbox{-}9,{\\rm p}6\\mbox{-}8,7{\\rm o}\\}\\\\\n&\\;\\equiv&\\{1,2,3,4,5,6,7\\},\n\\end{eqnarray*}\n$$\n\\bm P_0:={1\\over36}\\left(\\begin{array}{ccccccc}\n6\\;&2&4&6&8&10&0\\\\\n1\\;&29&0&0&0&0&6\\\\\n2\\;&0&28&0&0&0&6\\\\\n3\\;&0&0&27&0&0&6\\\\\n4\\;&0&0&0&26&0&6\\\\\n5\\;&0&0&0&0&25&6\\\\\n0\\;&0&0&0&0&0&36\\end{array}\\right),\n$$\nand \n$$\nt_0(n):=\\P(L_0\\ge n)=1-(\\bm P_0^{n-1})_{1,7}.\n$$\n\nThere is an interesting distinction between this game and regular craps. The nonunit eigenvalues of $\\bm P_0$ are the roots of the sextic equation\n\\begin{eqnarray*}\n&&15116544z^6-59206464z^5+93137040z^4\\\\\n&&\\;{}-73915740z^3+30008394z^2-5305446z+172975\\\\\n&&\\qquad{}=0,\n\\end{eqnarray*}\nand the corresponding Galois group is, according to \\textit{Maple}, the symmetric group $S_6$. This means that our sextic is not solvable by radicals. Thus, it appears that there is no closed-form expression for $t_0(n)$.\n\nNevertheless, the analogue of (\\ref{closed}) holds (with six terms). All nonunit eigenvalues belong to $(0,1)$ and all coefficients except the leading one are negative. Thus, the analogues of (\\ref{ineq}) and (\\ref{asymp}) hold as well. Also, the distribution of $L_0$ is \\textit{a linear combination of six geometric distributions}. These results are left as exercises for the interested reader.\n\nFinally, $t_0(154)\\approx0.296\\,360\\,068\\times 10^{-10}$, which is to say that a hand of length 154 or more is only about one-sixth as likely as at regular craps (one chance in 33.7 billion, approximately).\n\n\\section*{Acknowledgment}\n\nWe thank Roger Horn for pointing out the interlacing property of the eigenvalues of $\\bm Q$.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} |
|
{"text":"\\section{Introduction}\n\nFor most modern calculations using density functional theory (DFT) \\cite{Kb99}, the accuracy of results depends only on approximations to the exchange-correlation functional, $E_{\\sss XC}[n]$. An exact expression for $E_{\\sss XC}[n]$ is given by the adiabatic connection formula \\cite{LP75, GL76}, in which $E_{\\sss XC}$ is expressed as an integral over a coupling constant $\\lambda$, which connects the reference (Kohn-Sham system, $\\lambda=0$) and the real physical interacting system (ground state, $\\lambda=1$), keeping the density $n({\\mathbf r})$ fixed. Study of the adiabatic connection integral has proven very useful for understanding approximate (hybrid) functionals \\cite{MCY06, PMTT08}, and is an ongoing area of research \\cite{BEP97, FTB00, MTB03}.\n\nAlmost all modern DFT calculations employ the Kohn-Sham (KS) system \\cite{KS65} as a reference. The KS system is defined as the unique fictitious system that has the same density as the real system, but consists of non-interacting electrons. The great practicality of KS DFT is due to the relative ease with which the non-interacting equations can be solved, with relatively crude approximations, giving KS DFT a useful balance between speed and accuracy.\n\nHowever, DFT calculations could also be based on another fictitious system which is known as the strictly-correlated (SC) system \\cite{SPL99}. The strictly-correlated system has the same density as the real system (as does the KS system), but the Hamiltonian consists of electron repulsion and external potential energy terms only. In recent years, the pioneering work of Seidl and others \\cite{S99, SPL99, SPK00, SPKb00, SGS07, S07, GSS08, GVS09} has led to substantial progress in solving this problem exactly and efficiently. 
The strictly-correlated electrons (SCE) ansatz \\cite{SGS07,S07,GSS08,GVS09} has been shown to yield the density and energy of this system, going beyond the earlier point-charge-plus-continuum (PC) model \\cite{SPK00,SPKb00}. They have achieved great success in calculating spherically symmetric systems with an arbitrary number of electrons \\cite{S07}.\n\nIn this article, we look to the future and assume that the strictly-correlated limit of \\emph{any} system can be calculated with less difficulty than the original interacting problem, and all subsequent work is developed from this reference. We derive a new version of the adiabatic connection formalism, which connects the strictly-correlated system (fully interacting) and the physical system. We also introduce a new coupling constant $\\mu$, and a ``decorrelation energy'' $E_{\\sss DC}$, the counterpart of $E_{\\sss XC}$ in KS DFT, which must be evaluated to extract the true ground-state energy from the calculation of the strictly-correlated system. We argue that, as long as the strictly-correlated system can be solved easily (just as in the KS case), one can develop another version of DFT based on this system, a version that is better suited to strongly localized electrons.\n\nThroughout this paper, we use atomic units ($e^2=\\hbar=m_e=1$), so that, unless otherwise stated, all energies are in hartrees and all lengths are in Bohr radii.\n\n\\section{Theory}\n\nIn this section, we introduce the alternative adiabatic connection formula and relate its quantities to more familiar ones. 
All results here are formally exact.\n\n\\subsection{Kohn-Sham Adiabatic Connection}\n\nIn KS DFT, the total energy of the interacting ground state is expressed as:\n\\begin{equation}\nE[n] = T_{\\sss S}[n] + {\\int d^3 r \\,} v_{\\rm ext}(\\mathbf{r}) \\, n(\\mathbf{r}) + U[n] + E_{\\sss XC}[n].\n\\label{eings}\n\\end{equation}\nIn this equation, $T_{\\sss S}$ is the non-interacting kinetic energy of the KS orbitals $\\{ \\varphi_i \\}$, which are eigenfunctions of the non-interacting KS equation, $v_{\\rm ext}({\\mathbf r})$ is the external potential (the nuclear attraction in the case of atoms and molecules), $U$ is the Hartree energy, defined as the ``classical'' Coulomb repulsion between two electron clouds, and $E_{\\sss XC}$ is the exchange-correlation energy \\cite{primer}. The adiabatic connection integral \\cite{LP75,GL76} then gives an exact expression for $E_{\\sss XC}$:\n\\begin{equation}\nE_{\\sss XC} [n] = \\int_0^1 d\\lambda \\, W_\\lambda [n],\n\\label{normalac}\n\\end{equation}\nwhere one can show that $W_\\lambda= \\langle \\Psi^\\lambda | \\hat{V}_{\\rm ee} | \\Psi^\\lambda \\rangle - U$, in which $\\Psi^{\\lambda}$ is the wavefunction that minimizes $\\hat{T}+\\lambda \\hat{V}_{\\rm ee}$ but has the same density as the real ground-state system \\cite{Y87}. At $\\lambda=0$, one recovers the KS system, and at $\\lambda=1$, one recovers the real interacting system. In this way, one connects the KS system with the real interacting system by changing $\\lambda$ from $0$ to $1$.\n\nA cartoon of $W_\\lambda$ versus $\\lambda$ is shown in the upper panel of Fig. \\ref{cartoons}. By definition, we have $W_0=E_{\\sss X}$ and the area under the curve is $E_{\\sss XC}$. 
We can also identify the kinetic correlation energy $T_{\\sss C}=E_{\\sss XC}-W_1$ in this graph.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=3.5in]{cart_n.pdf}\n\\includegraphics[width=3.5in]{cart_u.pdf}\n\\caption{Traditional (upper panel) and the new (lower panel) adiabatic connection curves.}\n\\label{cartoons}\n\\end{center}\n\\end{figure}\n\n\\subsection{Strictly-Correlated Reference System}\n\nThe KS wavefunction can be defined as the wavefunction that minimizes the kinetic energy alone, but whose density matches that of the interacting system. Analogously, the SC wavefunction is found by minimizing the electron-electron repulsion energy alone, subject to reproducing the exact density. In practice, there might be multiple degeneracies, so it is best defined in the limit as the kinetic energy goes to zero.\n\nThen, using the strictly-correlated (SC) system as the reference, the energy of the true interacting ground state is:\n\\begin{equation}\nE[n] = U_{\\rm sc}[n] + {\\int d^3 r \\,} v_{\\rm ext}(\\mathbf{r}) \\, n(\\mathbf{r}) + T_{\\sss S} [n] + E_{\\sss DC}[n],\n\\label{eingswsc}\n\\end{equation}\nwhere $U_{\\rm sc}=\\langle\\Psi^\\infty |\\hat{V}_{\\rm ee}|\\Psi^\\infty\\rangle$. In terms of KS DFT quantities, $U_{\\rm sc}=U+W_\\infty$ [see Eq. (\\ref{normalac})].\n\nJust as we separate the Hartree energy from the total energy in KS DFT [Eq. (\\ref{eings})], here in Eq. (\\ref{eingswsc}) we separate $T_{\\sss S}[n]$ from the total energy in SC DFT. There are a variety of algorithms that can be used to extract this quantity accurately for any given density, effectively by inverting the KS equations \\cite{PVW03}. We label the remainder the ``decorrelation energy'', $E_{\\sss DC}[n]$. The name reflects the fact that, if we consider the electrons in the reference system ``strictly correlated'', with energy $U_{\\rm sc}[n]$, then the electrons in the real system are \\emph{less} correlated than in the reference system. 
The physical meaning will become explicit shortly.\n\nSo far, we have defined our reference; next we deduce an exact expression for the newly defined ``decorrelation energy'' $E_{\\sss DC}[n]$ with the adiabatic connection formalism, just as one does for $E_{\\sss XC}[n]$ \\cite{SPL99, SPK00} in KS DFT [Eq. (\\ref{normalac})].\n\n\\subsection{Strictly-Correlated Adiabatic Connection Formula}\n\nWe denote by $\\Psi^{\\mu}$ the wavefunction minimizing $\\hat{H}^{\\mu} = \\mu^2 \\hat{T} + \\hat{V}_{\\rm ee} + \\hat{V}_{\\rm ext}^{\\mu}$ with density $n({\\mathbf r})$. For $\\mu=0$, we recover the strictly-correlated system, and for $\\mu=1$, we recover the real system. For each value of $\\mu$, there is a corresponding unique external potential yielding the correct density, $v_{\\rm ext}^{\\mu}({\\mathbf r})$. So we have:\n\\begin{equation}\nE^{\\mu}\n=\\langle \\Psi^\\mu|\\, \\mu^2 \\hat{T} + \\hat{V}_{\\rm ee} + \\hat{V}_{\\rm ext}^{\\mu} | \\Psi^{\\mu} \\rangle.\n\\label{minihmu}\n\\end{equation}\nUsing the Hellmann-Feynman theorem \\cite{LP85}, we have:\n\\begin{equation}\n\\frac{dE^{\\mu}}{d\\mu}=\\left< \\Psi^{\\mu} \\left| \\frac{d\\hat{H}^{\\mu}}{d\\mu} \\right| \\Psi^{\\mu} \\right> = \\left< \\Psi^{\\mu} \\left| 2\\mu \\hat{T} + \\frac{d\\hat{V}_{\\rm ext}^{\\mu}}{d\\mu} \\right| \\Psi^{\\mu} \\right>.\n\\label{hft}\n\\end{equation}\nIntegrating over $\\mu$ and cancelling the external-potential terms on both sides, we recognize that what remains on the left-hand side is just $T_{\\sss S}[n]+E_{\\sss DC}[n]$. Thus:\n\\begin{equation}\nE_{\\sss DC}[n] = \\int_0^1 d\\mu \\, 2\\mu \\left< \\Psi^{\\mu} \\left| \\hat{T} \\right| \\Psi^{\\mu} \\right> - T_{\\sss S}[n].\n\\label{ac}\n\\end{equation}\nThis is our adiabatic connection formula for strictly-correlated electrons. 
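In more detail (a short elaboration of the cancellation, using only the definitions above), since the density is held fixed along the connection, the external-potential term of the Hellmann-Feynman derivative integrates to a difference of potentials:
\begin{eqnarray*}
E^1-E^0 &=& \int_0^1 d\mu\, 2\mu \left< \Psi^{\mu} \left| \hat{T} \right| \Psi^{\mu} \right>\\
&&\quad{}+\int d^3r\, n({\mathbf r}) \left[ v_{\rm ext}^1({\mathbf r})-v_{\rm ext}^0({\mathbf r}) \right].
\end{eqnarray*}
Since $E^1=E[n]$ and $E^0=U_{\rm sc}[n]+\int d^3r\, v_{\rm ext}^0({\mathbf r})\, n({\mathbf r})$, the external-potential terms cancel, and comparison with Eq. (\ref{eingswsc}) identifies what remains as $T_{\sss S}[n]+E_{\sss DC}[n]$, which is Eq. (\ref{ac}).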
Finally, since $T_{\\sss C}[n] = T[n] - T_{\\sss S}[n]$, and $T_{\\sss S}[n]$ is independent of $\\mu$:\n\\begin{equation}\nE_{\\sss DC}[n] = \n\\int_0^1 d\\mu \\, K_\\mu[n],\n\\label{updnac}\n\\end{equation}\nwhere\n\\begin{equation}\nK_\\mu[n] = 2\\mu\\, T_{\\sss C}^\\mu[n].\n\\label{Kmudef}\n\\end{equation}\nThis is the SC doppelganger of Eq. (\\ref{normalac}). We plot a cartoon of the integrand $K_\\mu$ vs. $\\mu$ in the lower panel of Fig. \\ref{cartoons}, identifying the area below the curve as $E_{\\sss DC}$, and noting that $K_1=2T_{\\sss C}$.\n\n\\subsection{Relation to Kohn-Sham DFT}\n\nFrom a formal viewpoint, what we derived here is not new, but simply another way to describe the real interacting system. Thus we can relate all quantities defined here, such as $E_{\\sss DC}[n]$ and $K_{\\mu}[n]$, to quantities defined in traditional KS DFT. Since both Eq. (\\ref{eingswsc}) and Eq. (\\ref{eings}) are exact for the real system, using the expression for $U_{\\rm sc}[n]$ in KS language [see the discussion below Eq. (\\ref{eingswsc})] we find:\n\\begin{equation}\nE_{\\sss DC}[n] = E_{\\sss XC}[n] - W_\\infty[n].\n\\label{edcexc}\n\\end{equation}\nThus $E_{\\sss DC}[n]$ defined in our theory is just the difference between the usual exchange-correlation energy of the real system, $E_{\\sss XC}[n]$, and the potential contribution to the exchange-correlation energy of the strictly-correlated system, $W_\\infty[n]$.\n\nWe can also deduce an expression for $K_\\mu$ in terms of $W_\\lambda$. Since $\\Psi^{\\mu}$ minimizes $\\mu^2 \\hat{T}+\\hat{V}_{\\rm ee}$ while yielding $n({\\mathbf r})$, and $\\Psi^{\\lambda}$ minimizes $\\hat{T}+\\lambda \\hat{V}_{\\rm ee}$, we deduce that $\\Psi^{\\mu}=\\Psi^{\\lambda}$ for $\\lambda=1\/ \\mu^2$. 
Now, from the scaling properties of KS DFT \\cite{L91}, we know:\n\\begin{equation}\nT_{\\sss C}^{\\lambda}=E_{\\sss C}^{\\lambda}-\\lambda \\frac{dE_{\\sss C}^{\\lambda}}{d\\lambda}.\n\\label{inttc}\n\\end{equation}\nIf we write $E_{\\sss C}^{\\lambda}=T_{\\sss C}^{\\lambda}+U_{\\sss C}^{\\lambda}$, i.e., $U_{\\sss C}^\\lambda$ is the potential contribution to $E_{\\sss C}^\\lambda$, $U_{\\sss C}^\\lambda=\\lambda (W_\\lambda - E_{\\sss X})$, we have:\n\\begin{equation}\n\\frac{dT_{\\sss C}^{\\lambda}}{d\\lambda}=\\frac{U_{\\sss C}^{\\lambda}}{\\lambda}-\\frac{dU_{\\sss C}^{\\lambda}}{d\\lambda}.\n\\label{difftc}\n\\end{equation}\nIntegrating over $\\lambda$ from 0 to $1\/ \\mu^2$, and using the definition of $W_\\lambda$ in Eq. (\\ref{normalac}) and that $E_{\\sss X}^{\\lambda}=\\lambda E_{\\sss X}$ by scaling \\cite{L91}, we can express $T_{\\sss C}^{\\mu}[n]$ in terms of $W_\\lambda[n]$, finding:\n\\begin{equation}\nK_\\mu[n] = 2\\mu \\int_0^{1\/ \\mu^2} d\\lambda \\left( W_\\lambda[n] - W_{1\/ \\mu^2}[n] \\right).\n\\label{wl4kmu}\n\\end{equation}\nFrom this relation, we can generate the new adiabatic connection curve, as long as we know the integrand of the KS adiabatic connection, i.e., $W_\\lambda[n]$ for $\\lambda=1$ to $\\infty$.\n\n\\subsection{Exact conditions}\n\nMany of the well-established exact conditions on the correlation energy can be translated and applied to the decorrelation energy. In particular, the simple relations between scaling the density and altering the coupling constant all apply, i.e.,\n\\begin{equation}\nE_{\\sss C}^\\lambda[n]=\\lambda^2\\, E_{\\sss C}[n_{1\/\\lambda}],\n\\end{equation}\nwhere $n_\\gamma({\\mathbf r}) = \\gamma^3\\, n(\\gamma {\\mathbf r})$. 
Thus, in terms of scaling:\n\\begin{equation}\nK_\\mu [n] = \\frac{2}{\\mu^3} T_{\\sss C}[n_{\\mu^2}].\n\\end{equation}\nNote that, as $\\mu \\to \\infty$, $K_\\mu \\to 0$, while $K_{\\mu=0}=2W_{\\infty}'$, where $W_{\\infty}'$ is defined in the expansion of $W_\\lambda$ as $\\lambda \\to \\infty$ \\cite{SPK00}:\n\\begin{equation}\nW_\\infty'=\\lim_{\\lambda \\to \\infty} \\sqrt{\\lambda} \\left( W_\\lambda-W_\\infty \\right).\n\\label{watinfty}\n\\end{equation}\nThus the SC energy is found from solving the strictly-correlated system, while $K_{\\mu=0}$ is determined by the zero-point oscillations around that solution. Both are currently calculable for spherical systems \\cite{S07,GVS09}.\n\nThe most general property known about the correlation energy \\cite{L91} is that, under scaling toward lower densities, it must become more negative. In turn, this implies that $W_\\lambda$ is never positive. Using the definition of $T_{\\sss C}^\\mu$ and changing variable $\\lambda=1\/\\mu^2$ in Eq. (\\ref{difftc}), we find:\n\\begin{equation}\n\\frac{dT_{\\sss C}^\\mu}{d\\mu}=\\frac{2}{\\mu^5}\\frac{dW_\\lambda}{d\\lambda}<0,\n\\label{tcmul0}\n\\end{equation}\nand then, using $K_\\mu=2\\mu T_{\\sss C}^\\mu$ and the fact that $K_\\mu>0$, we find:\n\\begin{equation}\n\\frac{d}{d\\mu} \\ln K_\\mu <0.\n\\label{inequality}\n\\end{equation}\nAlso, because $T_{\\sss C}^\\mu>0$, we have $K_\\mu=2\\mu T_{\\sss C}^\\mu>0$, so $E_{\\sss DC}$, being an integral of $K_\\mu$, is always positive.\n\nBased on these properties of $K_\\mu$, a crude approximation to $K_\\mu$ is a simple exponential parametrization, using $K_0$ and the derivative of $\\ln K$ at $\\mu=0$ as inputs:\n\\begin{equation}\nK=K_0\\,e^{-\\gamma \\mu}, \\hspace{0.5in} \\gamma=-\\left. 
\\frac{d}{d\\mu} \\ln K \\right|_0.\n\\label{simpleunif}\n\\end{equation}\n\n\\section{Illustrations}\nIn this section, we illustrate the theory developed above on three systems, to show how $K_\\mu$ behaves in very different situations, and where the adiabatic connection formula might be most usefully approximated.\n\n\\subsection{Uniform Electron Gas}\n\nFor a uniform electron gas, we assume we know the correlation energy per particle, $\\epsilon_{\\sss C}$, accurately as a function of $r_{s}=(3\/4\\pi n)^{1\/3}$. In order to apply Eq. (\\ref{wl4kmu}) to calculate $K_\\mu[n]$, we use $\\epsilon _{\\sss C}^{\\lambda} (r_s) = \\lambda ^2 \\epsilon _{\\sss C} (\\lambda r_s)$ \\cite{L91}. Substituting into Eq. (\\ref{inttc}), changing variables $\\lambda=1\/ \\mu^2$, and using $K_\\mu=2 \\mu T_{\\sss C}^{\\mu}=N \\kappa_\\mu$, with $N$ the number of particles, we find:\n\\begin{equation}\n\\kappa_\\mu^{\\rm unif} = -\\frac{2}{\\mu^3} \\frac{d}{dr_s} \\left. \\left( r_s \\epsilon_{\\sss C}(r_s) \\right) \\right| _{r_s\/\\mu^2}.\n\\label{unigaskmu}\n\\end{equation}\nUsing Eq. (\\ref{edcexc}) and the definition of $W_\\infty$, we find:\n\\begin{equation}\n\\epsilon_{\\sss DC}^{\\rm unif}=\\epsilon_{\\sss C}+\\frac{d_0}{r_s},\n\\label{edcunif}\n\\end{equation}\nwhere $d_0=0.433521$ is the coefficient of the leading term in the large-$r_s$ (low-density) expansion \\cite{PW92}:\n\\begin{equation}\n\\epsilon _{\\sss C} (r_s) = - \\frac{d_0}{r_s} + \\frac{d_1}{r_s^{3\/2}} + \\frac{d_2}{r_s^2}+\\cdots,\n\\label{unigasec}\n\\end{equation}\nwhere $d_2=-3.66151$ from the data of Ref. \\cite{PW92}. Substituting this expansion into Eq. 
(\\ref{unigaskmu}), we find:\n\\begin{equation}\n\\kappa_\\mu^{\\rm unif} = \\frac{d_1}{r_s^{3\/2}}+ 2 \\mu \\frac{d_2}{r_s^2}+\\cdots \\hspace{0.5in} \\mbox{as $\\mu \\to 0$}.\n\\label{unigaskmuexpn}\n\\end{equation}\nThus $\\kappa_\\mu$ is expected to have a well-behaved expansion in powers of\n$\\mu$ for small $\\mu$.\n\nUsing Perdew and Wang's \\cite{PW92} parametrization of the correlation energy of the uniform gas, we plot $\\kappa_\\mu$ vs. $\\mu$ for $r_s=1$ in Fig. \\ref{unigasplot}, and find $\\epsilon_{\\sss DC}=0.374$ at $r_s=1$.\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=3.5in]{unigas.pdf}\n\\caption{Exact adiabatic connection curve $\\kappa_{\\mu}$ for uniform electron gas ($r_s=1$) and a simple exponential parametrization.}\n\\label{unigasplot}\n\\end{center}\n\\end{figure}\n\nUsing the exact curve for $r_s=1$ in the simple exponential parametrization [Eq. (\\ref{simpleunif})], we find $\\kappa_0=1.44073$ and $\\gamma=5.0826$. We plot the exponential parametrization in Fig. \\ref{unigasplot} and we can see that it decays much faster than the exact curve, producing a $\\epsilon_{\\sss DC}$ that is too small by about 25\\%, which means about 150\\% larger in $|\\epsilon_{\\sss C}|$ [see Eq. (\\ref{edcexc})].\n\nWe calculated $\\epsilon_{\\sss DC} \/ |\\epsilon_{\\sss C}|$ for different values of $r_s$ and plot the curve in Fig. \\ref{diffrs}. At small $r_s$, $\\epsilon_{\\sss DC} \\gg |\\epsilon_{\\sss C}|$, which suggests that the KS reference system is a better starting point, as a smaller contribution to the energy needs to be approximated. At large $r_s$, $|\\epsilon_{\\sss C}| > \\epsilon_{\\sss DC}$ so $\\epsilon_{\\sss DC}$ is a smaller quantity and may be better approximated. Under such circumstances, the strictly-correlated system might serve as a better reference. 
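The competition between $\epsilon_{\sss DC}$ and $|\epsilon_{\sss C}|$ is easy to examine numerically; a sketch in Python (the PW92 coefficients below are the standard unpolarized values from Ref. \cite{PW92}, quoted here only to make the example self-contained):

```python
import math

# Perdew-Wang 1992 parametrization of eps_c(r_s), unpolarized uniform gas.
A, a1 = 0.031091, 0.21370
b1, b2, b3, b4 = 7.5957, 3.5876, 1.6382, 0.49294

def eps_c(rs):
    """Correlation energy per particle (hartree) at density parameter r_s."""
    den = 2 * A * (b1 * math.sqrt(rs) + b2 * rs + b3 * rs**1.5 + b4 * rs**2)
    return -2 * A * (1 + a1 * rs) * math.log(1 + 1 / den)

D0 = 0.433521  # leading low-density coefficient d_0

def eps_dc(rs):
    # eps_DC = eps_c + d0 / r_s
    return eps_c(rs) + D0 / rs

for rs in (1.0, 16.0, 30.0):
    print(rs, eps_dc(rs), eps_dc(rs) / abs(eps_c(rs)))
```

At $r_s=1$ this gives $\epsilon_{\sss DC}\approx0.374$, and the ratio $\epsilon_{\sss DC}/|\epsilon_{\sss C}|$ drops below 1 between $r_s=15$ and $r_s=18$.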
For the uniform gas, the switch-over occurs at about $r_s=16$, which is at densities much lower than those relevant to typical processes of chemical and physical interest. However, as we show below, for systems with static correlation, this regime can occur much more easily.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=3.5in]{edcvec.pdf}\n\\caption{$\\epsilon_{\\sss DC} \/ |\\epsilon_{\\sss C}|$ for different $r_s$ for uniform electron gas.}\n\\label{diffrs}\n\\end{center}\n\\end{figure}\n\n\\subsection{Hooke's Atom}\nAs we pointed out, as long as we have an approximate $W_\\lambda[n]$ for $\\lambda$ between 1 and $\\infty$, we can substitute it into Eq. (\\ref{wl4kmu}) to get the new adiabatic connection formula for the decorrelation energy. Of course, most such formulas focus on the shape between $0$ and $1$, since only that section is needed for the regular correlation energy. But any such approximate formula can be equally applied to $K_\\mu$, yielding an approximation for the decorrelation energy. Peach et al. \\cite{PMTT08} analyze various parametrizations for $W_\\lambda$, and the same forms can be used as well to parametrize $K_\\mu$, based on the similar shape of $W_\\lambda$ and $K_\\mu$ curves. In general, application to $K_\\mu$ will yield a distinct approximation to the ground-state energy, with quite different properties.\n\nTo give just one example, one of the earliest sensible smooth parametrizations is the [1,1] Pade of Ernzerhof \\cite{E96}:\n\\begin{equation}\nW_\\lambda = a \\left( \\frac{1+b\\lambda}{1+c\\lambda} \\right).\n\\end{equation}\nOne can imagine applying it with inputs of e.g., $E_{\\sss X}$, $W'_0$ given by G\\\"{o}rling-Levy perturbation theory, and $W_\\infty$ from the SC limit. 
It yields a sensible approximation to $W_\\lambda$ in the 0 to 1 range, but because it was not designed with the strictly-correlated limit in mind, the formula itself is not immediately applicable to the decorrelation energy, since, e.g., $K_{\\mu=0}$ vanishes. However, if one is doing an SC calculation, it is much more sensible to make the same approximation directly for $K_\\mu$ instead, i.e.,\n\\begin{equation}\nK_\\mu = \\tilde{a} \\left( \\frac{1+\\tilde{b}\\mu}{1+\\tilde{c}\\mu} \\right),\n\\end{equation}\nwhose inputs could be $K_0$, $K'_0$, and, e.g., a GGA for $K_{\\mu=1}$. This is then a very different approximation from the same idea applied to the usual adiabatic connection formula.\n\nOn the other hand, there are several approximations designed to span the entire range of $\\lambda$, the most famous being the ISI (interaction strength interpolation) model \\cite{SPKb00} developed by Seidl et al. This model uses the values and the derivatives of $W_\\lambda$ at two limits, namely the high-density limit (KS system, $\\lambda=0$) and the low-density limit (strictly-correlated system, $\\lambda \\to \\infty$), to make an interpolation. Another approximation to $W_\\lambda$ was developed in our previous work \\cite{LB09}, which employs $W_0$, $W_\\infty$, and $W_0'$ as inputs. We compare the approximate $K_\\mu$'s obtained from the two models.\n\nHooke's atom is a two-electron system (with Coulomb repulsion) in a spherical harmonic well \\cite{T94}. Using the accurate values $W_0=-0.515, W_\\infty=-0.743, W_0'=-0.101$ \\cite{MTB03}, and $W_\\infty'=0.208$ \\cite{P09}, we find:\n\\begin{equation}\nK_\\mu^{\\rm ISI} = -0.947\\mu + 1.029 A\\mu -\\frac{0.336}{\\mu B} +0.270 \\mu \\ln B,\n\\label{kmuisi4ho}\n\\end{equation}\nwhere $A=\\sqrt{1+0.653\/ \\mu^2}$ and $B=A-0.263$. 
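As a numerical sanity check on Eq. (\\ref{kmuisi4ho}), evaluating it at very small $\\mu$ and at $\\mu=1$ recovers the ISI entries $K_0=0.416$ and $K_1\\approx 0.054$ of Table \\ref{hooketable} (a sketch; the $\\mu\\to0$ limit is taken numerically, and the three-digit constants above leave rounding slack in the last digit):

```python
import math

def k_isi(mu):
    """ISI curve for Hooke's atom, Eq. (kmuisi4ho), with A and B as in the text."""
    A = math.sqrt(1.0 + 0.653 / mu**2)
    B = A - 0.263
    return -0.947 * mu + 1.029 * A * mu - 0.336 / (mu * B) + 0.270 * mu * math.log(B)

print(round(k_isi(1e-8), 3))  # 0.416  (numerical K_0 limit)
print(round(k_isi(1.0), 4))   # ~0.0535, vs. 0.054 in the table
```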
With the same data substituted in $W^{\\rm simp}$ \\cite{LB09}, we find:\n\\begin{equation}\nK_\\mu^{\\rm simp} = -\\frac{0.228}{\\alpha^4 \\mu} (\\alpha^3-\\alpha^2+1)+1.287 \\mu (\\alpha-1),\n\\label{kmusimp4ho}\n\\end{equation}\nwhere $\\alpha=\\sqrt{1+0.354\/ \\mu^2}$.\nWe plot the two forms of $K_\\mu$ in Fig. \\ref{hookeplot}. The exact curve (down to $\\mu=0.5$) is taken from Ref. \\cite{MTB03}. We compare three quantities in Table \\ref{hooketable}. Although $K_\\mu^{\\rm ISI}$ contains a spurious $\\mu \\ln \\mu$ term as $\\mu \\to 0$ \\cite{S07, GVS09, LB09}, it nonetheless yields accurate results. The simple model, applied with the usual inputs, is less accurate pointwise, but integrates to an accurate value.\n\n\\begin{table}[h]\n\\caption{Comparison of several quantities for three approximations to $K_\\mu$. Note: ISI uses $K_0$ as an input. The exact values are taken from Ref. \\cite{MTB03}.}\n\\begin{center}\n\\begin{tabular}{c|c c c c}\n\\hline\\hline\n & exact & ISI & simp & exponential\\\\\n\\hline\n $K_0$ & 0.416 & 0.416 & 0.383 & 0.456\\\\\n $K_1$ & 0.058 & 0.054 & 0.059 & 0.058\\\\\n $E_{\\sss DC}$ & 0.189 & 0.191 & 0.190 & 0.193\\\\\n\\hline\\hline\n\\end{tabular}\n\\end{center}\n\\label{hooketable}\n\\end{table}\n\nWe can again try the simple exponential parametrization of Eq. (\\ref{simpleunif}) for $K_\\mu$, now for Hooke's atom. Because we do not know the exact value of $d\/d\\mu \\ln K_\\mu$ at $\\mu=0$, we instead perform an exponential fit by least squares, using the exact $K_\\mu$ values (between $\\mu=0.5$ and 1) taken from Ref. \\cite{MTB03}. We plot $K_\\mu$ vs. $\\mu$ in Fig. \\ref{hookeplot} and compare several quantities in Table \\ref{hooketable}.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=3.5in]{hooke.pdf}\n\\caption{Adiabatic connection curves for Hooke's atom. The exact curve (down to $\\mu=0.5$) is taken from Ref. 
\\cite{MTB03}.}\n\\label{hookeplot}\n\\end{center}\n\\end{figure}\n\n\n\\subsection{H$_2$ Bond Dissociation}\nBond dissociation of the H$_2$ molecule produces a well-known dilemma in computational chemistry \\cite{Koch, PSB95, B01, CMY08}. In the exact case, as the bond length $R \\to \\infty$, the hydrogen molecule should dissociate into two free hydrogen atoms, with the ground state always a spin-unpolarized singlet. However, spin-restricted methods, e.g., restricted Hartree-Fock or restricted Kohn-Sham DFT, give the correct spin multiplicity, i.e., the wavefunction is an eigenfunction of $\\hat{S}^2$, but produce an overestimated total energy, much higher than that of two free hydrogen atoms. Spin-unrestricted methods, e.g., unrestricted Hartree-Fock or unrestricted Kohn-Sham DFT, give a fairly good total energy, but the wavefunction is spin-contaminated, i.e., the deviation of $\\langle \\hat{S}^2 \\rangle$ from the exact value is significant. This is known as ``symmetry breaking'' in H$_2$ bond dissociation.\n\nFuchs et al. \\cite{FNGB05} argued that DFT within the RPA (random phase approximation) gives a correct picture of H$_2$ bond dissociation within the spin-restricted KS scheme. They also gave highly accurate adiabatic connection curves for ground-state H$_2$ at bond length $R=1.4$\\AA\\, and stretched H$_2$ at bond length $R = 5$\\AA. The curves were interpolated between $\\lambda=0$ and $1$, and are shown as the difference of the integrand, $\\Delta W_\\lambda$, between the stretched H$_2$ molecule and two free H atoms (Figs. 1 and 3 of Ref. \\cite{FNGB05}). \n\nFor $R=1.4$\\AA\\, and $R=5$\\AA, if we use an interpolation (see Ref. 63 of Ref. \\cite{FNGB05}) to estimate $\\Delta W_\\lambda$, we find the reasonable values $\\Delta W_\\infty=-7.00$ and $\\Delta W_\\infty=0.13$, respectively. Using Eq. (\\ref{edcexc}), we find $\\Delta E_{\\sss DC}=4.96$ and $\\Delta E_{\\sss DC}=0.69$, respectively. 
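The $\\Delta E_{\\sss DC}$ values just quoted follow from simple arithmetic on the Fuchs et al. data, consistent with reading Eq. (\\ref{edcexc}) as $E_{\\sss DC} = E_{\\sss XC} - W_\\infty$ (our inference from the numbers; the $\\Delta E_{\\sss XC}$ values of $-2.04$ and $0.82$ eV are taken from Ref. \\cite{FNGB05}):

```python
# Delta values (eV) for H2 relative to two free H atoms; E_XC from Fuchs et al.,
# W_inf from the interpolation described above
data = {1.4: (-2.04, -7.00), 5.0: (0.82, 0.13)}

for R, (e_xc, w_inf) in data.items():
    # reading Eq. (edcexc) as E_DC = E_XC - W_inf (and E_C = E_XC - E_X)
    print(R, round(e_xc - w_inf, 2))
# 1.4 -> 4.96 eV and 5.0 -> 0.69 eV, matching the values quoted above
```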
We compare the $\\Delta E_{\\sss DC}$ and $\\Delta E_{\\sss C}$ values in Table \\ref{stretchedtable}. The comparison provides a physical example where the strictly-correlated system is the better starting point for the calculation.\n\n\\begin{table}[h]\n\\caption{Comparison of several quantities for stretched H$_2$ at different bond lengths. The values for $\\Delta E_{\\sss X}$ and $\\Delta E_{\\sss XC}$ are taken from Ref. \\cite{FNGB05}. All values are in eV.}\n\\begin{center}\n\\begin{tabular}{c|c c c}\n\\hline\\hline\nbond length & $1.4$\\AA & $5$\\AA & $\\infty$ \\\\\n\\hline\n $\\Delta E_{\\sss X}$ & -0.98 & 5.85 & 8.5 \\\\\n $\\Delta E_{\\sss XC}$ & -2.04 & 0.82 & 0.0 \\\\\n $\\Delta W_\\infty$ & -7.00 & 0.13 & 0.0 \\\\\n $\\Delta E_{\\sss C}$ & -1.06 & -5.03 & -8.5 \\\\\n $\\Delta E_{\\sss DC}$ & 4.96 & 0.69 & 0.0 \\\\\n\\hline\\hline\n\\end{tabular}\n\\end{center}\n\\label{stretchedtable}\n\\end{table}\n\nWe can see that at the equilibrium bond length, $|\\Delta E_{\\sss C}|$ is much smaller than $\\Delta E_{\\sss DC}$, presumably making it easier to approximate the ground-state energy starting from the KS reference system. This is typical at equilibrium. However, for stretched bonds, $\\Delta E_{\\sss DC}$ is much smaller than $|\\Delta E_{\\sss C}|$, so $\\Delta E_{\\sss DC}$ may be more accurately approximated and the strictly-correlated system should be the better reference. Molecules with strong static correlation, such as Cr$_2$ and O$_3$, might fall somewhere in between.\n\n\\section{Conclusion}\n\nIn this paper, we constructed an adiabatic connection formalism for the strictly-correlated system. We found that this adiabatic connection formula and curve are well defined with respect to this new reference. Our formula, Eq. (\\ref{updnac}), connects the strictly-correlated system and the real system. 
We also defined the corresponding integrated quantity, the ``decorrelation energy'', and related it to the usual KS adiabatic connection. We illustrated how the decorrelation energy behaves, using the uniform electron gas, Hooke's atom, and stretched H$_2$ as examples.\n\nWe emphasize again that a real application of this theory is only possible when the reference, i.e., the strictly-correlated system, can be routinely calculated. At present, one can calculate quantities such as $U_{\\rm sc}$ exactly only for spherically symmetric systems \\cite{SGS07}. However, nonempirical approximations to $E_{\\sss XC}$ of Kohn-Sham theory can be employed to estimate $W_\\infty$ with useful accuracy \\cite{PTSS04}. The computation of this quantity may become much easier in the future \\cite{GSV09}. If so, then based on the properties discussed here, the strictly-correlated reference may be preferable in cases that are difficult for standard KS DFT calculations with standard approximations to $E_{\\sss XC}$. In fact, recent work \\cite{GSV09} independently shows progress using exactly the formalism discussed here and suggests approximations to $E_{\\sss DC}$. In any event, the advent of strictly-correlated calculations opens up a whole new alternative approach to DFT calculations of electronic structure, and only experience can show how and when this will prove more fruitful than the traditional (KS) scheme.\n\n\\section{Acknowledgement}\nWe thank John Perdew, Michael Seidl, and Paola Gori-Giorgi for kind discussions. This work is supported by the National Science Foundation under Grant No. CHE-0809859.
\\section{Introduction}\nUnsupervised nonlinear dimensionality reduction methods, which embed high-dimensional data into a low-dimensional space, have been extensively deployed in many real-world applications for data visualization. Data visualization is an important component of data exploration and data analytics, as it helps data analysts to develop intuitions and gain a deeper understanding of the mechanisms underlying data generation. Comprehensive surveys of dimensionality reduction and data visualization methods can be found in van der Maaten et al. (2009)~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{Maaten08dimensionalityreduction} and Burges (2010)~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{dimension-reduction-a-guided-tour-2}. Among these approaches, nonparametric neighbor embedding methods such as t-SNE~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{van2008visualizing} and Elastic Embedding~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{carreira2010elastic} are widely adopted. They generate low-dimensional latent representations by preserving the neighboring probabilities of high-dimensional data in a low-dimensional space, which involves pairwise data point comparisons and thus has quadratic computational complexity with respect to the size of a given data set. This prevents them from scaling to any dataset beyond several thousand points. Moreover, these methods are not designed to readily generate the embedding of out-of-sample data, which are prevalent in modern big data analytics. To generate out-of-sample embeddings given an existing sample embedding, computationally expensive numerical optimization or Nystr\\\"{o}m approximation is often performed, which is undesirable in practice~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{bengio2004out,vladymyrov2014linear,carreira2015fast}. 
\n\nParametric embedding methods, such as parametric t-SNE (pt-SNE)~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{Maaten09} employing a deep neural network (DNN), learn an explicit parametric mapping function from a high-dimensional data space to a low-dimensional embedding space, and thus can readily generate the embedding of out-of-sample data. The objective function of pt-SNE is the same as that of t-SNE, with quadratic computational complexity. Fortunately, owing to the explicit mapping function defined by the DNN, optimization methods such as stochastic gradient descent or conjugate gradient descent based on mini-batches can be deployed when pt-SNE is applied to large-scale datasets. \n\nHowever, on the one hand, the objective function of pt-SNE is a sum of a quadratic number of terms over pairwise data points, which requires mini-batches with fairly large batch sizes to achieve a reasonably good approximation to the original objective; on the other hand, optimizing the parameters of the DNN in pt-SNE is often best served by small batch sizes, to avoid getting stuck in a bad local minimum. These conflicting choices of batch sizes make the optimization of pt-SNE hard and render its performance sensitive to the chosen batch size. In addition, to approximate the loss function defined over all pairwise data points, pt-SNE independently computes the pairwise neighboring probabilities of high-dimensional data for each mini-batch, so it often produces dramatically different embeddings for different choices of the user-defined perplexity, which is coupled with the batch size. Finally, although the mapping function of pt-SNE parameterized by a DNN is powerful, it is very hard to learn and requires complicated procedures such as tuning the network architecture and many hyper-parameters. 
For data embedding and visualization purposes, most users are reluctant to go through these complicated procedures.\n\nTo address the aforementioned problems, in this paper, we present unsupervised parametric t-distributed stochastic exemplar-centered embedding. Instead of modeling pairwise neighboring probabilities, our strategy learns embedding parameters by comparing high-dimensional data only with precomputed representative high-dimensional exemplars, resulting in an objective function with linear computational and memory complexity with respect to the number of exemplars. The exemplars are identified by a small number of iterations of k-means updates, taking into account both local data density distributions and global clustering patterns of high-dimensional data. These nice properties make the parametric exemplar-centered embedding insensitive to batch size and scalable to large-scale datasets. All the exemplars are repeatedly included in each mini-batch, and the choice of the perplexity hyper-parameter only concerns the expected number of neighboring exemplars calculated globally, independent of batch sizes. Therefore, the perplexity is much easier for the user to choose and much more robust for producing good embedding performance. We further use noise contrastive samples to avoid comparing data points with all exemplars, which further reduces computational\/memory complexity and increases scalability. Although comparing training data points only with representative exemplars preserves the similarities between pairwise data points in each local neighborhood only indirectly, it is much better than randomly sampling the small mini-batches of pt-SNE, whose coverage is too small to capture all pairwise similarities on a large dataset.\n\nMoreover, we propose a shallow embedding network with high-order feature interactions for data visualization, which is much easier to tune but produces performance comparable to that of the deep neural network employed by pt-SNE. 
Experimental results on several benchmark datasets show that, our proposed parametric exemplar-centered embedding methods for data visualization significantly outperform pt-SNE in terms of robustness, visual effects, and quantitative evaluations. We call our proposed deep t-distributed stochastic exemplar-centered embedding method dt-SEE and high-order t-distributed exemplar-centered embedding method hot-SEE.\n\nOur contributions in this paper are summarized as follows: (1) We propose a scalable unsupervised parametric data embedding strategy with an objective function of significantly reduced computational complexity, avoiding pairwise training data comparisons in existing methods; (2) With the help of exemplars, our methods eliminate the instability and sensitivity issues caused by batch sizes and perplexities haunting other unsupervised embedding approaches including pt-SNE; (3) Our proposed approach hot-SEE learns a simple shallow high-order parametric embedding function, beating state-of-the-art unsupervised deep parametric embedding method pt-SNE on several benchmark datasets in terms of both qualitative and quantitative evaluations.\n\n\\section{Related Work}\\label{sec:related}\nDimensionality reduction and data visualization have been extensively studied in the last twenty years~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{Maaten08dimensionalityreduction,dimension-reduction-a-guided-tour-2}. SNE~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{SNE2002}, its variant t-SNE~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{van2008visualizing}, and Elastic Embedding~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{carreira2010elastic} are among the most successful approaches. 
To efficiently generate the embedding of out-of-sample data, SNE and t-SNE were, respectively, extended to take a parametric embedding form of a shallow neural network~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{MinThesis} and a deep neural network~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{Maaten09}. As is discussed in the introduction, the objective functions of neighbor embedding methods have $O(n^2)$ computational complexity for $n$ data points, which limits their applicability only to small datasets. Recently, with the growing importance of big data analytics, several research efforts have been devoted to enhancing the scalability of nonparametric neighbor embedding methods~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{van2013barnes,van2014accelerating,yang2013scalable,vladymyrov2014linear}. These methods mainly borrowed ideas from efficient approximations developed for N-body force calculations based on Barnes-Hut trees~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{van2013barnes} or fast multipole methods~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{greengard1987fast}. Iterative methods with auxiliary variables and second-order methods have been developed to optimize the objective functions of neighbor embedding approaches~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{vladymyrov2012partial,vladymyrov2014linear,carreira2015fast}. Particularly, the alternating optimization method with auxiliary variables was shown to achieve faster convergence than mini-batch based conjugate gradient method for optimizing the objective function of pt-SNE. All these scalability handling and optimization research efforts are orthogonal to our development in this paper, because all these methods are designed for the embedding approaches modeling the neighboring relationship between pairwise data points. 
Therefore, they still have the sensitivity and instability issues, and we can readily borrow these speedup methods to further accelerate our approaches modeling the relationship between data points and exemplars. \n\nOur proposed method hot-SEE learns a shallow parametric embedding function by considering high-order feature interactions. High-order feature interactions have been studied for learning Boltzmann Machines, autoencoders, structured outputs, feature selection, and biological sequence classification~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{DBLP:conf\/iccv\/Memisevic11,DBLP:conf\/aistats\/MinNCG14,MinPSB14,DBLP:conf\/cvpr\/RanzatoH10,DBLP:journals\/jmlr\/RanzatoKH10,DBLP:journals\/corr\/GuoZM15,Min2014kdd,MinBio2015,MinGS17}. To the best of our knowledge, our work here is the first successful one to model input high-order feature interactions for unsupervised data embedding and visualization.\n\nOur work in this paper is also related to a recent supervised data embedding method called en-HOPE~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{MinGS17}. Unlike en-HOPE, our proposed methods here are unsupervised and have a completely different objective function with different motivations.\n\n\\section{Methods}\\label{sec:method}\nIn this section, we introduce the objective of pt-SNE at first. Then we describe the parametric embedding functions of our methods based on a deep neural network as in pt-SNE and a shallow neural network with high-order feature interactions. 
Finally, we present our proposed parametric stochastic exemplar-centered embedding methods dt-SEE and hot-SEE with low computational cost.\n\n\\subsection{Parametric t-SNE using a Deep Neural Network and a Shallow High-order Neural Network}\\label{sec:sup}\nGiven a set of data points $\\mathcal{D} = \\{{\\mathbf x}^{(i)}: i = 1,\\ldots, n\\}$, where ${\\mathbf x}^{(i)} \\in {\\mathbb R}^H$ is an input feature vector,\npt-SNE learns a deep neural network as a nonlinear feature transformation from the high-dimensional input feature space to a low-dimensional latent embedding space \n$\\{f({\\mathbf x}^{(i)}): i = 1,\\ldots, n\\}$, where $f({\\mathbf x}^{(i)}) \\in {\\mathbb R}^h$, and $h < H$. For data visualization, we set $h = 2$.\n\npt-SNE assumes that $p_{j|i}$, the probability that data point $i$ chooses data point $j$ as its nearest neighbor in the high-dimensional space, follows a Gaussian distribution. The joint probabilities measuring the pairwise similarities between data points ${\\mathbf x}^{(i)}$ and ${\\mathbf x}^{(j)}$ are defined by symmetrizing the two conditional probabilities, $p_{j|i}$ and $p_{i|j}$, as follows,\n\\begin{eqnarray}\np_{j|i} & = & \\frac{\\exp(-||{\\mathbf x}^{(i)} - {\\mathbf x}^{(j)} ||^2\/2\\sigma_i^2)} {\\sum_{k \\neq i}\\exp(-||{\\mathbf x}^{(i)} - {\\mathbf x}^{(k)} ||^2\/2\\sigma_i^2)}, \\label{eqn:symmp} \\\\\np_{i|i} & = & 0, \\\\\np_{ij} & = & \\frac{p_{j|i} + p_{i|j}}{2n}, \n\\end{eqnarray}\n\nwhere the variance of the Gaussian distribution, $\\sigma_i$, is set such that the perplexity of the conditional distribution $P_i$ equals a user-specified perplexity $u$, which can be interpreted as the expected number of nearest neighbors of data point $i$. With the same $u$ set for all data points, the $\\sigma_i$'s tend to be smaller in regions of higher data density than in regions of lower data density. 
The optimal value of $\\sigma_i$ for each data point $i$ can be easily found by a simple binary search~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{SNE2002}. Although the user-specified perplexity $u$ makes the variance $\\sigma_i$ adaptive for each data point $i$, the embedding performance is still very sensitive to this hyperparameter, as will be discussed later. In the low-dimensional space, pt-SNE assumes that the neighboring probability between pairwise data points $i$ and $j$, $q_{ij}$, follows a heavy-tailed Student t-distribution. The Student t-distribution is able, on the one hand, to measure the similarities between pairwise low-dimensional points and, on the other hand, to allow dissimilar objects to be modeled far apart in the embedding space, avoiding the crowding problem. \n \\begin{eqnarray}\nq_{ij} & = & \\frac{(1 + ||f({\\mathbf x}^{(i)}) - f({\\mathbf x}^{(j)})||^2)^{-1}} {\\sum_{kl:k \\neq l} (1 + ||f({\\mathbf x}^{(k)}) - f({\\mathbf x}^{(l)})||^2)^{-1}}, \\label{eqn:symmq} \\\\\nq_{ii} & = & 0. \n\\end{eqnarray}\n\nTo learn the parameters of the deep embedding function ${\\mathbf f}(\\cdot)$, pt-SNE minimizes the following Kullback-Leibler divergence between the joint distributions $P$ and $Q$ using Conjugate Gradient descent,\n\\begin{equation}\\label{obj}\n\\small\n\\ell = KL (P || Q) = \\sum_{ij: i \\neq j} p_{ij}\\log \\frac{p_{ij}}{q_{ij}}.\n\\end{equation}\nThe above objective function has $O(n^2)$ terms defined over pairwise data points, which is computationally prohibitive and prevents pt-SNE from scaling to fairly big datasets. To overcome this scalability issue, heuristic mini-batch approximation is often used in practice. However, as will be shown in our experiments, pt-SNE is unstable and highly sensitive to the chosen batch size. 
This is due to the dilemma of approximating the quadratic cost function while optimizing the DNN through mini-batches: approaching the true objective requires large batch sizes, but finding a good local minimum benefits from small batch sizes.\n\nAlthough pt-SNE based on a deep neural network has a powerful nonlinear feature transformation, parameter learning is hard and requires complicated procedures such as tuning the network architecture and many hyperparameters. Most users who are only interested in data embedding and visualization are reluctant to go through these complicated procedures. Here we propose to use high-order feature interactions, which often capture structural knowledge of the input data, to learn a shallow parametric embedding model instead of a deep model. The shallow model is much easier to train and does not have many hyperparameters. In the following, the shallow high-order parametric embedding function is presented. We expand each input feature vector ${\\mathbf x}$ to have an additional component of $1$ for absorbing bias terms, that is, ${\\mathbf x}^{\\prime} = [{\\mathbf x}; 1]$, where ${\\mathbf x}^{\\prime} \\in {\\mathbb R}^{H+1}$. An $O$-order feature interaction is the product of $O$ features $\\{x_{i_1}\\times \\ldots \\times x_{i_t} \\times \\ldots \\times x_{i_O}\\}$, where $t \\in \\{1, \\ldots, O\\}$ and $i_t \\in \\{1, \\ldots, H\\}$. Ideally, we would use each $O$-order feature interaction as a coordinate and then learn a linear transformation to map all these high-order feature interactions to a low-dimensional embedding space. However, it is very expensive to enumerate all possible $O$-order feature interactions. For example, if $H = 1000$ and $O = 3$, we must deal with a $10^9$-dimensional vector of high-order features. 
We approximate a sigmoid-transformed high-order feature mapping ${\\mathbf y} = f({\\mathbf x})$ by constrained tensor factorization as follows, \n\\begin{equation}\n\\label{shopemap}\ny_s = \\sum_{k=1}^m V_{sk} \\sigma(\\sum_{f=1}^F W_{fk}({\\mathbf C_f}^T {\\mathbf x}^{\\prime})^O + b_k),\n\\end{equation}\nwhere $b_k$ is a bias term, ${\\mathbf C} \\in {\\mathbb R}^{(H+1) \\times F}$ is a factorization matrix, ${\\mathbf C}_f$ is the $f$-th column of ${\\mathbf C}$, ${\\mathbf W}\\in {\\mathbb R}^{F\\times m}$ and ${\\mathbf V}\\in {\\mathbb R}^{h\\times m}$ are projection matrices, $y_s$ is the $s$-th component of ${\\mathbf y}$, $F$ is the number of factors, $m$ is the number of high-order hidden units, and $\\sigma (x) = \\frac{1}{1 + e^{-x}}$. Because the last component of $\\mathbf{x}^\\prime$ is 1 for absorbing bias terms, the full polynomial expansion of $({\\mathbf C_f}^T {\\mathbf x}^\\prime)^O$ essentially captures all orders of input feature interactions up to order $O$. Empirically, we find that $O=2$ works best for all the datasets we have, and we set $O=2$ in all our experiments. The hyperparameters $F$ and $m$ are set by the user. Combining Equation~\\ref{obj}, Equation~\\ref{eqn:symmp}, and Equation~\\ref{eqn:symmq} with the feature transformation function in Equation~\\ref{shopemap} leads to a method we call high-order t-SNE (hot-SNE). As in pt-SNE, the objective function of hot-SNE involves comparing pairwise data points and thus has quadratic computational complexity with respect to the sample size. The parameters of hot-SNE are learned by Conjugate Gradient descent, as in pt-SNE. 
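The mapping above can be sketched in a few lines of NumPy (vectorized over $n$ inputs; array shapes follow the notation of Equation~\\ref{shopemap}, and the sizes in the demo are made up, not the paper's settings):

```python
import numpy as np

def high_order_embed(X, C, W, V, b, order=2):
    """Sigmoid-transformed high-order feature mapping, cf. Eq. (shopemap).

    X: (n, H) inputs; C: (H+1, F) factorization matrix; W: (F, m) and
    V: (h, m) projection matrices; b: (m,) biases.  Returns (n, h) embeddings.
    """
    Xp = np.hstack([X, np.ones((X.shape[0], 1))])  # absorb bias: x' = [x; 1]
    P = (Xp @ C) ** order                          # (C_f^T x')^O for all f, shape (n, F)
    Hid = 1.0 / (1.0 + np.exp(-(P @ W + b)))       # sigmoid high-order hidden units, (n, m)
    return Hid @ V.T                               # y_s = sum_k V_{sk} Hid_k, shape (n, h)

# shape check with toy sizes (the paper uses F=800, m=400, h=2)
rng = np.random.default_rng(0)
n, Hdim, F, m, h = 5, 10, 8, 4, 2
Y = high_order_embed(rng.normal(size=(n, Hdim)),
                     rng.normal(size=(Hdim + 1, F)) * 0.1,
                     rng.normal(size=(F, m)) * 0.1,
                     rng.normal(size=(h, m)) * 0.1,
                     np.zeros(m))
print(Y.shape)  # (5, 2)
```

With `order=2` and the appended constant component, the polynomial $({\\mathbf C}_f^T{\\mathbf x}^\\prime)^2$ indeed contains the linear and bias terms as well, matching the remark about capturing all interactions up to order $O$.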
\n\n\\subsection{Parametric t-Distributed Stochastic Exemplar-centered Embedding}\nTo address the instability, sensitivity, and scalability issues of pt-SNE, we present deep t-distributed stochastic exemplar-centered embedding (dt-SEE) and high-order t-distributed stochastic exemplar-centered embedding (hot-SEE), building upon the pt-SNE and hot-SNE parametric embedding methods described earlier. The resulting objective function has significantly reduced computational complexity with respect to the size of the training set compared to pt-SNE. The underlying intuition is that, instead of comparing pairwise training data points, we compare training data only with a small number of representative exemplars in the training set for the neighborhood probability computations. To this end, we simply precompute the exemplars by running a fixed number of iterations of k-means with scalable k-means++ seeding on the training set, which has at most linear computational complexity with respect to the size of the training set~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{bahmani2012scalable}.\n\nFormally, given the same dataset $\\mathcal{D}$ as introduced in Section~\\ref{sec:sup}, \nwe perform a fixed number of iterations of k-means updates on the training data to identify $z$ exemplars from the whole dataset, where $z$ is a user-specified free parameter and $z \\ll n$ (note that k-means often converges within a dozen iterations and shows linear computational cost in practice). Before performing the k-means updates, the exemplars are carefully seeded by scalable k-means++, which makes our methods robust under abnormal conditions, although our experiments show that random seeding works equally well. We denote these exemplars by $\\{{\\mathbf e}^{(j)}: j = 1, \\ldots, z\\}$. 
The high-dimensional neighboring probabilities are calculated through a Gaussian distribution,\n\\begin{eqnarray}\np_{j|i} & = & \\frac{\\exp(-||{\\mathbf x}^{(i)} - {\\mathbf e}^{(j)} ||^2\/2\\sigma_i^2)} {\\sum_k\\exp(-||{\\mathbf x}^{(i)} - {\\mathbf e}^{(k)} ||^2\/2\\sigma_i^2)}, \\label{eqn:symmp_exem} \\\\\np_{j|i} & \\leftarrow & \\frac{p_{j|i}}{n}, \n\\end{eqnarray}\nwhere $i = 1, \\ldots, n$, $j=1, \\ldots, z$, and the variance of the Gaussian distribution, $\\sigma_i$, is set such that the perplexity of the conditional distribution $P_i$ equals a user-specified perplexity $u$ that can be interpreted as the expected number of nearest exemplars, not neighboring data points, of data instance $i$. Since the high-dimensional exemplars capture both local data density distributions and global clustering patterns, different choices of perplexities over exemplars will not change the embedding much, resulting in much more robust visualization performance than that of other embedding methods that insist on modeling local pairwise neighboring probabilities.\n\nSimilarly, the low-dimensional neighboring probabilities are calculated through a t-distribution,\n\\begin{eqnarray}\nq_{j|i} & = & \\frac{(1 + d_{ij})^{-1}} {\\sum_{l=1}^n\\sum_{k=1}^z (1 + d_{lk})^{-1}}, \\label{eqn:q_exem}\\\\\nd_{ij} & = & ||f({\\mathbf x}^{(i)}) - f({\\mathbf e}^{(j)}) ||^2,\n\\end{eqnarray}\nwhere $f(\\cdot)$ denotes a deep neural network for dt-SEE or the high-order embedding function described in Equation~\\ref{shopemap} for hot-SEE. 
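A minimal NumPy sketch of the exemplar-centered probabilities of Eq. (\\ref{eqn:symmp_exem}), with $\\sigma_i$ found by the usual perplexity binary search (the search runs over $\\beta_i = 1\/(2\\sigma_i^2)$; the helper name, search bounds, and iteration cap are our own choices, not the authors' code):

```python
import numpy as np

def exemplar_probs(X, E, perplexity=5.0, tol=1e-5, max_iter=50):
    """High-dimensional probabilities p_{j|i} over exemplars, cf. Eq. (symmp_exem).

    beta_i = 1/(2 sigma_i^2) is binary-searched (geometrically) so that the
    entropy of P_i matches log2(perplexity), the expected number of
    neighboring exemplars; rows are finally divided by n as in the text.
    """
    d2 = ((X[:, None, :] - E[None, :, :]) ** 2).sum(-1)  # (n, z) squared distances
    P = np.empty_like(d2)
    target = np.log2(perplexity)
    for i in range(X.shape[0]):
        lo, hi = 1e-10, 1e10
        for _ in range(max_iter):
            beta = np.sqrt(lo * hi)
            p = np.exp(-(d2[i] - d2[i].min()) * beta)    # shift for stability
            p /= p.sum()
            entropy = -(p * np.log2(p + 1e-12)).sum()
            if abs(entropy - target) < tol:
                break
            if entropy > target:                         # too flat: sharpen kernel
                lo = beta
            else:
                hi = beta
        P[i] = p
    return P / X.shape[0]

rng = np.random.default_rng(1)
X, E = rng.normal(size=(20, 3)), rng.normal(size=(6, 3))
P = exemplar_probs(X, E)
print(P.shape, round(P.sum(), 6))  # (20, 6) 1.0
```

Each row costs $O(z)$ per search step, so the whole computation stays linear in the number of exemplars, in line with the complexity claims above.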
\n\nThen we minimize the following objective function to learn the embedding parameters ${\\mathbf \\Theta}$ of dt-SEE and hot-SEE while keeping the exemplars $\\{{\\mathbf e}^{(j)}\\}$ fixed,\n\\begin{eqnarray}\\label{exobj}\n&\\min_{{\\mathbf \\Theta}} \\ell({\\mathbf \\Theta}, \\{{\\mathbf e}^{(j)}\\}) = \\sum_{i=1}^{n}\\sum_{j=1}^{z} p_{j|i}\\log \\frac{p_{j|i}}{q_{j|i}},\n\\end{eqnarray}\nwhere $i$ indexes training data points, $j$ indexes exemplars, and, for hot-SEE, ${\\mathbf \\Theta}$ denotes the high-order embedding parameters $\\{\\{b_k\\}_{k=1}^m, \\mathbf{C}, \\mathbf{W}, \\mathbf{V}\\}$ in Equation~\\ref{shopemap}.\n\nNote that unlike the probability distribution in Equation~\\ref{eqn:symmq}, $q_{j|i}$ here is computed using only the pairwise distances between training data points and exemplars. This small modification has significant benefits. Because $z \\ll n$, the objective function in Equation~\\ref{exobj} has a significantly reduced computational cost compared to the quadratic complexity with respect to $n$ of Equation~\\ref{obj}, as the number of representative exemplars is often much smaller than $n$ for real-world large datasets in practice. \n\n\\subsection{Further Reduction of Computational and Memory Complexity by Noise Contrastive Estimation}\nWe can further reduce the computational and memory complexity of dt-SEE and hot-SEE using noise contrastive estimation (NCE). Instead of computing the neighboring probabilities between each data point $i$ and all $z$ exemplars, we compute only the probabilities between data point $i$ and its $z_e$ nearest exemplars for both $P$ and $Q$. 
For the high-dimensional probability distribution $P_i$, we simply set the probabilities between $i$ and all other exemplars to 0; for the low-dimensional probability distribution $Q_i$, we randomly sample $z_n$ non-neighboring exemplars outside of these $z_e$ neighboring exemplars, and use the sum of these $z_n$ non-neighboring probabilities multiplied by a constant $K_e$ and the $z_e$ neighboring probabilities to approximate the normalization terms involving data point $i$ in Equation~\\ref{eqn:q_exem}. Since this strategy based on noise contrastive estimation eliminates the need to compute neighboring probabilities between data points and all exemplars, it further reduces computational and memory complexity.\n\n\n\n\\section{Experiments}\\label{sec:experiment}\nIn this section, we evaluate the effectiveness of dt-SEE and hot-SEE by comparing them against the state-of-the-art unsupervised parametric embedding method pt-SNE on three datasets, \\textit{i.e.}, COIL100, MNIST, and Fashion. The COIL100 dataset~\\footnote{http:\/\/www1.cs.columbia.edu\/CAVE\/software\/softlib\/coil-100.php} contains 7200 images in 100 classes, with 3600 samples for training and 3600 for testing. The MNIST dataset~\\footnote{http:\/\/yann.lecun.com\/exdb\/mnist\/} consists of 60,000 training and 10,000 test gray-level 784-dimensional images. The Fashion dataset~\\footnote{https:\/\/github.com\/zalandoresearch\/fashion-mnist} has the same numbers of classes, training data points, and test data points as MNIST, but is designed to classify 10 fashion products, such as boots, coats, and bags; each class contains a set of pictures taken by professional photographers showing different aspects of the product, such as looks from the front, from the back, with a model, and in an outfit. 
\n\n\n\nTo keep the computational and tuning procedures for data visualization simple, none of these models was pre-trained using any unsupervised learning strategy, although hot-SNE, hot-SEE, dt-SEE, and pt-SNE could all be pre-trained by autoencoders or variants of Restricted Boltzmann Machines~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{MinMYBZ10,MinBio2015}. \n\nFor hot-SNE and hot-SEE, we set $F=800$ and $m=400$ for all the datasets used. For pt-SNE and dt-SEE, we set the deep neural network architecture to input dimensionality $H$-500-500-2000-2 for all datasets, following the architecture design in van der Maaten (2009)~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{Maaten09}. For hot-SEE and dt-SEE, when the exemplar size is smaller than 1000, we set the batch size to 100; otherwise, we set it to 1000. With the above architecture design, the shallow high-order neural network used in hot-SNE and hot-SEE is about $2.5$ times as fast as the deep neural network used in pt-SNE and dt-SEE for embedding the $10,000$ MNIST test data points.\n\nFor all the experiments, the predictive accuracies were obtained by the 1NN approach on top of the 2-dimensional representations generated by different methods. The error rate was calculated as the number of misclassified test data points divided by the total number of test data points. \n\n\n\n\n\\subsection{Performance Comparisons with Different Batch Sizes and Perplexities on COIL100 and MNIST}\nOur first experiment aims at examining the robustness of the testing methods with respect to the batch size and the perplexity used. 
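The 1NN evaluation protocol described above reduces to a nearest-neighbor lookup in the two-dimensional embedding space. A hedged NumPy sketch (our own illustration, not the authors' evaluation code):

```python
import numpy as np

def one_nn_error_rate(train_emb, train_labels, test_emb, test_labels):
    # Squared distances from every test point to every training point.
    d2 = ((test_emb[:, None, :] - train_emb[None, :, :]) ** 2).sum(-1)
    predicted = train_labels[d2.argmin(axis=1)]   # label of the nearest neighbor
    # Error rate: misclassified test points divided by total test points.
    return float(np.mean(predicted != test_labels))
```

For the 60,000-point MNIST training set the full distance matrix would be large; a chunked or tree-based lookup would be used in practice.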
Figures~\\ref{fig:batchsizeSensitivity} and~\\ref{fig:perplexitySensitivity} depict our results on the COIL100 and MNIST datasets when varying the batch size and perplexity, respectively, used by the testing methods.\n\n \\begin{figure}[H]\n \\subfloat[COIL100\\label{COIL100}]{%\n \\includegraphics[width=0.4738\\textwidth]{COIL100BatchSize.eps}\n }\n \\hfill\n \\subfloat[MNIST\\label{MNISTd}]{%\n \\includegraphics[width=0.4738\\textwidth]{MNISTBatchSize.eps}\n }\n \\centering\n \\caption{Batch size sensitivity test on COIL100 and MNIST}\n \\label{fig:batchsizeSensitivity}\n \\end{figure}\n \n\n\n\nFigure~\\ref{fig:batchsizeSensitivity} suggests that, for the COIL100 data, pt-SNE was very sensitive to the selection of the batch size; effort was needed to find the right batch size in order to obtain good performance. On the other hand, the use of different batch sizes had a very minor impact on the predictive performance of both the dt-SEE and hot-SEE strategies. Similarly, for the MNIST data, as shown in Figure~\\ref{fig:batchsizeSensitivity}, in order to obtain good predictive performance, pt-SNE needed a batch size that was neither too big nor too small. In contrast, the hot-SEE method was insensitive to batch sizes larger than 300. \n\n \\begin{figure}[H]\n \\subfloat[COIL100\\label{COIL100}]{%\n \\includegraphics[width=0.4738\\textwidth]{COIL100Perplexity.eps}\n }\n \\hfill\n \\subfloat[MNIST\\label{MNISTd}]{%\n \\includegraphics[width=0.4738\\textwidth]{MNISTPerplexity.eps}\n }\n \\centering\n \\caption{Perplexity sensitivity test on COIL100 and MNIST}\n \\label{fig:perplexitySensitivity}\n \\end{figure}\nBased on the results in Figure~\\ref{fig:batchsizeSensitivity}, we selected the best batch sizes for the COIL100 and MNIST data sets, namely 600 and 1000, respectively, but varied the values of the perplexities used. 
In Figure~\\ref{fig:perplexitySensitivity}, one can observe that the performance of pt-SNE and hot-SNE could change dramatically with the use of different perplexities, but that was not the case for either dt-SEE or hot-SEE. Similarly, for the MNIST data, as depicted in Figure~\\ref{fig:perplexitySensitivity}, in order to obtain good predictive performance, one would need to carefully tune the perplexity. In contrast, both the dt-SEE and hot-SEE methods were quite robust with respect to the selected perplexities. \n\n\n\n\n\n\nBecause the choices of batch size and perplexity are coupled in a complicated way in pt-SNE, as explained in the introduction, we ran additional experiments to show the advantages of dt-SEE and hot-SEE. When we set the perplexity to 10 and the batch size to 100, 300, 600, 1000, 2000, 3000, 5000, 10000, the test error rate of pt-SNE on MNIST is, respectively, $32.97\\%$, $22.1\\%$, $24.00\\%$, $16.30\\%$, $12.41\\%$, $12.28\\%$, $13.09\\%$, $16.43\\%$, which still varies a lot. In contrast, the error rates of dt-SEE and hot-SEE using 1000 exemplars are consistently below $10\\%$ with the same batch sizes ranging from 100 to 10000 and perplexities of 3 and 10, which again shows that the exemplar-centered embedding methods dt-SEE and hot-SEE are much more robust than pt-SNE.\n\n\\subsection{Experimental Results on the Fashion dataset}\nWe also further evaluated the predictive performance of the testing methods using the Fashion data set. We used batch sizes of 1000 and 2000, along with a perplexity of 3, in all the experiments since both pt-SNE and hot-SNE favored these settings, as suggested in Figures~\\ref{fig:batchsizeSensitivity} and~\\ref{fig:perplexitySensitivity}. 
The achieved error rates are shown in Table~\\ref{tab:vggknn}.\n\n\\begin{table}[H]\n\\centering\n \\begin{tabular}{|c|c|}\\hline\n Methods & Error Rates \\\\\n \\hline \npt-SNE (batchsize = 1000) & 32.48\\\\\n\\hline\npt-SNE (batchsize = 2000) & 32.04\\\\\n\\hline \nhot-SNE (batchsize = 1000) & 31.29\\\\\n\\hline\nhot-SNE (batchsize = 2000) & 31.82\\\\\n\\hline\ndt-SEE (batchsize = 1000) & 29.42\\\\\n\\hline\ndt-SEE (batchsize = 2000) & 28.30\\\\\n\\hline \nhot-SEE (batchsize = 1000) & 29.06\\\\\n\\hline\nhot-SEE (batchsize = 2000) & \\textbf{28.18}\\\\\n\\hline\n\\end{tabular}\n\\caption{Error rates (\\%) by 1NN on the 2-dimensional representations produced by different methods with perplexity = 3 on the Fashion dataset.}\\label{tab:vggknn}\n\\end{table}\n\nResults in Table~\\ref{tab:vggknn} further confirmed the superior performance of our methods. Both dt-SEE and hot-SEE significantly outperformed pt-SNE and hot-SNE. \n\n\n\n\\subsection{Two-dimensional Visualization of Embeddings}\nThis section provides the visual results of the embeddings formed by the pt-SNE and hot-SEE methods. \n\n The top and bottom subfigures in Figure~\\ref{fig:ministSmallBatchSize} depict the 2D embeddings of the MNIST data set created by pt-SNE and hot-SEE, with a batch size of 100 (perplexity = 3) and a perplexity of 10 (batch size = 1000), respectively. From these figures, one may conclude that hot-SEE was more stable than its competitor pt-SNE. 
\n\t\n\t\\begin{figure*}[h]\n \\subfloat[pt-SNE, batch size 100\\label{COIL100}]{%\n \\includegraphics[width=0.503738\\textwidth]{MNIST_PtSNEBatchsize100.eps}\n }\n \\subfloat[hot-SEE, batch size 100\\label{MNISTd}]{%\n \\includegraphics[width=0.503738\\textwidth]{MNIST_hotSEEExemsize100.eps}\n }\n \t\t \\hfill\n \\subfloat[pt-SNE, perplexity 10\\label{COIL100}]{%\n \\includegraphics[width=0.503738\\textwidth]{MNISTPtSNEperplexity10.eps}\n }\n \\subfloat[hot-SEE, perplexity 10\\label{MNISTd}]{%\n \\includegraphics[width=0.503738\\textwidth]{MNISThotSEEperplexity10.eps}\n}\n \\centering\n \\caption{Comparing pt-SNE to hot-SEE with a small batch size = 100 (perplexity = 3) or a reasonable perplexity = 10 (batch size = 1000) to illustrate pt-SNE's unstable visual performance.}\n \\label{fig:ministSmallBatchSize}\n \\end{figure*}\n\n\n \\begin{figure*}[h]\n \\subfloat[pt-SNE\\label{COIL100}]{%\n \\includegraphics[width=0.50139738\\textwidth]{MNISTPtsne.eps}\n }\n \\subfloat[hot-SNE\\label{MNISTd}]{%\n \\includegraphics[width=0.50139738\\textwidth]{MNISThotsne.eps}\n }\n\t\t \\hfill\n \\subfloat[dt-SEE\\label{COIL100}]{%\n \\includegraphics[width=0.50139738\\textwidth]{MNISTdtsee.eps}\n }\n \\subfloat[hot-SEE\\label{MNISTd}]{%\n \\includegraphics[width=0.50139738\\textwidth]{MNISThotsee.eps}\n }\n \\centering\n \\caption{MNIST embedding figures for pt-SNE, hot-SNE, dt-SEE, and hot-SEE}\n \\label{fig:mnistEmbedding}\n \\end{figure*}\n\n In Figure~\\ref{fig:mnistEmbedding}, we also provided the visual results of the MNIST embeddings created by pt-SNE, hot-SNE, dt-SEE, and hot-SEE, with a batch size of 2000. These results imply that dt-SEE and hot-SEE produced the best visualizations: compared to pt-SNE and hot-SNE, the data points in each cluster were close to each other, with large separations between different clusters. 
\n \n \n \\begin{figure}[H]\n \\subfloat[pt-SNE\\label{COIL100}]{%\n \\includegraphics[width=0.483738\\textwidth]{FashionPtsne.eps}\n }\n \\hfill\n \\subfloat[hot-SEE\\label{MNISTd}]{%\n \\includegraphics[width=0.483738\\textwidth]{Fashionhotsee.eps}\n }\n \\centering\n \\caption{Fashion embedding figures for pt-SNE and hot-SEE}\n \\label{fig:fashionEmbedding}\n \\end{figure}\nAlso, in Figure~\\ref{fig:fashionEmbedding}, we depicted our visual 2D embedding results on the Fashion data set. These figures further confirmed the better clustering quality of the hot-SEE method compared to the pt-SNE strategy.\n\n\\subsection{Noise Contrastive Estimation}\nIn this section, we evaluated the performance of the noise contrastive estimation (NCE) strategy applied to our method hot-SEE with perplexity 3 and 2000 exemplars. We set $z_e = z_n = 100$ and $K_e = 18$. Table~\\ref{tab:nce} shows the error rates (\\%) obtained by 1NN on the two-dimensional representations produced by hot-SEE with or without NCE, respectively, on the MNIST and Fashion datasets.\n\n\\begin{table}[H]\n \\centering\n \\begin{tabular}{|lc|lc|lc|lc|}\\hline\n\\multicolumn{4}{|c|}{MNIST}&\\multicolumn{4}{c|}{Fashion}\\\\ \\hline\n\\multicolumn{2}{|c|}{standard}&\\multicolumn{2}{c|}{w\/ NCE} &\\multicolumn{2}{c|}{standard}&\\multicolumn{2}{c|}{w\/ NCE} \\\\\n\\hline \n\\multicolumn{2}{|c|}{9.30}&\\multicolumn{2}{c|}{9.69} &\\multicolumn{2}{c|}{28.18}&\\multicolumn{2}{c|}{28.19}\\\\\n\\hline\n\\end{tabular}\n\\caption{Error rates (\\%) obtained by 1NN on the two-dimensional representations produced by hot-SEE (perplexity = 3 and 2000 exemplars) with or without further computational complexity reduction based on Noise Contrastive Estimation (NCE), respectively, on the MNIST and Fashion datasets.} \\label{tab:nce}\n\\end{table}\n\nResults in Table~\\ref{tab:nce} suggest that the NCE was able to further reduce the \n computational and memory complexity of our method without sacrificing the 
predictive performance. As shown in the table, the accuracy difference of the hot-SEE method with and without NCE was less than 0.4\\% for both the MNIST and Fashion data sets. \n\n\\begin{table*}[h]\n \\centering\n\n\t\\scalebox{0.95}{ \n \\begin{tabular}{|lc|lc|lc|lc|lc|lc|}\\hline\n\\multicolumn{4}{|c|}{COIL100}&\\multicolumn{4}{c|}{MNIST}&\\multicolumn{4}{c|}{Fashion}\\\\ \\hline\n\\multicolumn{2}{|c|}{careful seeding}&\\multicolumn{2}{c|}{random seeding} &\\multicolumn{2}{c|}{careful seeding}&\\multicolumn{2}{c|}{random seeding} &\\multicolumn{2}{c|}{careful seeding}&\\multicolumn{2}{c|}{random seeding} \\\\\n\\hline \n\\multicolumn{2}{|c|}{58.67}&\\multicolumn{2}{c|}{58.44} &\\multicolumn{2}{c|}{9.30}&\\multicolumn{2}{c|}{9.19} &\\multicolumn{2}{c|}{28.18}&\\multicolumn{2}{c|}{28.53} \\\\\n\\hline\n\\end{tabular}}\n \\caption{Error rates (\\%) obtained by 1NN on the 2-dimensional representations produced by hot-SEE (perplexity = 3) with careful seeding or random seeding on the COIL100 (with 600 exemplars), MNIST (with 2000 exemplars), and Fashion (with 2000 exemplars) datasets.}\n \\label{tab:seed}\n\\end{table*}\n \n\\subsection{Careful Exemplar Seeding vs. Random Initialization}\nWe also evaluated the performance of our methods with different exemplar initializations. We compared careful seeding based on scalable K-means++ with randomly initialized exemplars; the results are presented in Table~\\ref{tab:seed}. From Table~\\ref{tab:seed}, one can observe that our methods were insensitive to the exemplar seeding approach used. That is, very similar predictive performances (differing by less than 0.4\\%) were obtained by our methods on all three testing data sets, i.e., COIL100, MNIST, and Fashion. 
\n\n\\subsection{Comparing Evaluation Metrics of kNN ($k \\geq 1$) and Quality Score}\nWe believe that the evaluation metric based on the 1NN test error rate used in the previous experimental sections is more appropriate than the kNN test error rate with $k > 1$. The reason is that the 1NN performance exactly shows how accurately our exemplar-based embedding methods capture very local neighborhood information, which is more challenging for our proposed methods. Because exemplars are computed globally, it is much easier for dt-SEE and hot-SEE to achieve better performance based on kNN with $k > 1$. On the MNIST dataset, we show the best training and test error rates of kNN with $k \\geq 1$ using the two-dimensional embeddings generated by different methods in Table~\\ref{tab:knnerr}, which consistently shows that dt-SEE and hot-SEE significantly outperform pt-SNE and supports our claims above.\n\\begin{table*}[h]\n \\centering\n\n\t\\scalebox{0.95}{ \n \\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|}\n\\hline\n&\\multicolumn{10}{c|}{The Number of Nearest Neighbors $k$ in kNN}\\\\\n\\hline\nMethod&1&2&3&4&5&6&7&8&9&10\\\\\n\\hline\npt-sne\\_tr&12.49&12.49&9.26&8.84&8.45&8.30&8.18&8.12&8.08&8.08\\\\\npt-sne\\_te&12.55&12.55&9.79&9.48&9.12&8.95&8.83&8.72&8.72&8.69\\\\\n\\hline\nhot-see\\_tr&8.87&8.87&6.31&6.05&5.83&5.68&5.64&5.63&5.60&5.58\\\\\nhot-see\\_te&9.19&9.19&7.21&6.76&6.61&6.42&6.41&6.41&6.42&6.36\\\\\n\\hline\ndt-see\\_tr&\\textbf{7.19}&\\textbf{7.19}&\\textbf{5.09}&\\textbf{4.90}&\\textbf{4.72}&\\textbf{4.67}&\\textbf{4.62}&\\textbf{4.62}&\\textbf{4.56}&\\textbf{4.56}\\\\\ndt-see\\_te&\\textbf{8.80}&\\textbf{8.80}&\\textbf{6.69}&\\textbf{6.45}&\\textbf{6.25}&\\textbf{6.17}&\\textbf{6.02}&\\textbf{6.02}&\\textbf{5.94}&\\textbf{5.96}\\\\\n\\hline\n\\end{tabular}}\n \\caption{The training error rates (\\_tr) and test error rates (\\_te) of kNN with different $k$'s using the two-dimensional embeddings generated by different methods on MNIST.}\n 
\\label{tab:knnerr}\n\\end{table*}\n\nAnother evaluation metric based on the Quality Score was used by a recent method called kernel t-SNE (kt-SNE) \\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{ktsne}. The Quality Score metric computes the $k$ (neighborhood size) nearest neighbors of each data point, respectively, in the high-dimensional space and in the low-dimensional space, and then calculates, as the Quality Score, the percentage of the high-dimensional neighborhood preserved in the low-dimensional neighborhood, averaged over all test data points, for different neighborhood sizes $k$. In Table~\\ref{tab:qualityscores}, we compute the quality scores of different methods on the MNIST test data for preserving their neighborhood on the training data with neighborhood sizes ranging from 1 to 100. These results also show that hot-SEE and dt-SEE consistently outperform pt-SNE. \n\nWe find that kernel t-SNE is also capable of embedding out-of-sample data. To have an experimental setting on MNIST similar to that used in kernel t-SNE, we randomly chose 2000 data points from the original test set (size 10000) as a held-out test set, repeating this to get 10 different test sets of size 2000. The test error rates of our methods compared to kernel t-SNE are: kernel t-SNE, $14.2\\%$; Fisher kernel t-SNE, $13.7\\%$; hot-SEE, $9.11\\% \\pm 0.43\\%$; dt-SEE, $8.74\\% \\pm 0.37\\%$. 
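The Quality Score computation just described can be sketched in a few lines. The NumPy version below is our own hedged illustration (not the kt-SNE authors' code): for each test point it compares the indices of its $k$ nearest training neighbors in the high-dimensional space with those in the low-dimensional space.

```python
import numpy as np

def quality_score(test_high, train_high, test_low, train_low, k):
    """Average fraction of each test point's k nearest training neighbors
    in the high-dimensional space that are preserved among its k nearest
    training neighbors in the low-dimensional space."""
    def knn_idx(A, B):
        # Indices of the k nearest rows of B for each row of A.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.argsort(d2, axis=1)[:, :k]
    hi = knn_idx(test_high, train_high)
    lo = knn_idx(test_low, train_low)
    overlaps = [len(set(h) & set(l)) / k for h, l in zip(hi, lo)]
    return float(np.mean(overlaps))
```

A perfect neighborhood-preserving embedding would score 1.0; random embeddings score near k divided by the training-set size.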
Our methods hot-SEE and dt-SEE significantly outperform (Fisher) kernel t-SNE.\n\n\\begin{table*}[h]\n \\centering\n\n\t\\scalebox{0.95}{ \n \\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|}\n\\hline\n&\\multicolumn{11}{c|}{Neighborhood Size}\\\\\n\\hline\nMethod&1&10&20&30&40&50&60&70&80&90&100\\\\\n\\hline\npt-sne&0.55&4.01&6.68&8.76&10.56&12.17&13.62&14.93&16.06&17.19&18.23\\\\\nhot-see&1.12&5.25&8.22&10.53&12.48&14.19&15.69&17.04&18.27&19.41&20.44\\\\\ndt-see&\\textbf{1.14}&\\textbf{6.74}&\\textbf{10.68}&\\textbf{13.52}&\\textbf{15.78}&\\textbf{17.56}&\\textbf{19.03}&\\textbf{20.22}&\\textbf{21.31}&\\textbf{22.27}&\\textbf{23.17}\\\\\n\\hline\n\\end{tabular}}\n \\caption{Quality scores (\\%, the higher the better) for different embedding methods computed on the test set against the training set on MNIST.}\n \\label{tab:qualityscores}\n\\end{table*}\n\n\\section{Conclusion and Future Work}\n\\label{sec:discussion}\nIn this paper, we present unsupervised parametric t-distributed stochastic exemplar-centered data embedding and visualization approaches, leveraging a deep neural network or a shallow neural network with high-order feature interactions. \nOwing to the benefit of a small number of precomputed high-dimensional exemplars, our approaches avoid pairwise training data comparisons and have significantly reduced computational cost. In addition, the high-dimensional exemplars reflect local data density distributions and global clustering patterns. With these nice properties, the resulting embedding approaches solve the important problem of embedding performance being sensitive to hyper-parameters such as batch sizes and perplexities, which has long haunted other neighbor embedding methods. 
Experimental results on several benchmark datasets demonstrate that our proposed methods significantly outperform the state-of-the-art unsupervised deep parametric embedding method pt-SNE in terms of robustness, visual effects, and quantitative evaluations.\n\nIn the future, we plan to incorporate recent neighbor-embedding speedup developments based on efficient N-body force approximations into our exemplar-centered embedding framework. \n\n\\bibliographystyle{splncs04}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} |
|
{"text":"\\section{}\n\n\n\n\n\n\n\n\n\n\\begin{acknowledgments}\n\n\nWe would like to thank Rainer St\\\"{o}hr and Nan Zhao for discussions. This work is financially supported by ERC SQUTEC, EU-SIQS, SFB TR21, and DFG KO4999\/1-1. Ya Wang acknowledges support from the 100-Talent program of CAS.\\\\\n\n\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} |
|
{"text":"\\section{Introduction}\n\nThe successful fabrication of two-dimensional materials such as graphene and transition metal dichalcogenides has aroused intense interest among researchers owing to their intriguing electronic, mechanical, optical, and thermal properties.\\cite{Novoselov2005, Zhang2005, Radisavljevic2011, Wang2012} The gapless nature of graphene and the low carrier mobility of transition metal dichalcogenides, however, present limitations to their potential application in industry.\\cite{Liao2010, Schwierz2010, Wu2011, Mak2010, Radisavljevic2011} Very recently, another exciting two-dimensional material, few-layer black phosphorus, named phosphorene, has been successfully fabricated.\\cite{Li2014, Dai2014, Reich2014, Liu2014} The phosphorene-based field effect transistor exhibits a carrier mobility up to 1000 cm$^{2}$\/V$\\cdot$s and an on\/off ratio up to 10$^4\\sim10^5$.\\cite{Li2014,Liu2014,Koenig2014} \n\nSimilar to graphite, black phosphorus is a layered material held together by interlayer van der Waals (vdW) interactions. Inside a layer, each phosphorus atom bonds with its three nearest neighbors, sharing all three valence electrons in \\emph{sp}$^3$ hybridization in a puckered honeycomb structure.\\cite{Rodin2014} Black phosphorus has a direct band gap of 0.31$\\sim$0.35 eV.\\cite{Keyes1953, Maruyama1981, Akahama1983, Warschauer2004} The band gap of phosphorene has been found to depend on the film thickness. 
First-principles calculations demonstrated that the energy band gap decreases from 1.5$\\sim$2.0 eV for a monolayer to $\\sim$0.6 eV for five-layer phosphorene.\\cite{Qiao2014, Tran2014} It was also predicted that, under strain, few-layer phosphorene could go through a semiconductor-to-metal or direct-to-indirect band gap transition.\\cite{Peng2014, Rodin2014} Most recently, Liu \\emph{et al.} constructed an inverter using MoS$_2$ as an \\emph{n}-type transistor and phosphorene as a \\emph{p}-type transistor, and integrated the two on the same Si\/SiO$_2$ substrate.\\cite{Liu2014} They observed unintentional \\emph{p}-type conductivity with high hole mobility in few-layer phosphorene. Additionally, a number of experiments have also achieved intrinsic \\emph{p}-type phosphorene.\\cite{Yuan2014, Li2014, Liu2014, Deng2014, Das2014} \n\nThen a question arises: what is the origin of the reported intrinsic \\emph{p}-type conductivity in phosphorene? \nDefects and impurities are usually unavoidable in real materials and often dramatically change the electrical, optical and magnetic properties of three- \\cite{Tilley2008} and two-dimensional semiconductors.\\cite{Terrones2012, Tongay2013, Qiu2013, Zhu2014} Despite a large number of theoretical studies on the thickness dependence of the electronic structure of few-layer phosphorene, knowledge of the properties of native point defects in few-layer phosphorene is still missing. In the present work, we have investigated the formation energies and transition levels of both vacancies and self-interstitials in few-layer phosphorene by performing first-principles calculations using a hybrid density functional \\cite{Becke1993, Perdew1996a, Heyd2003} in combination with a semiempirical vdW correction approach developed by Grimme and co-workers,\\cite{Grimme2006} aiming to elucidate the origin of the unintentional \\emph{p}-type conductivity displayed by this novel material. 
Our calculations demonstrated that: (\\emph{i}) the host band gap, formation energies, and acceptor transition levels of both vacancies and self-interstitials all decrease with increasing film thickness of phosphorene; (\\emph{ii}) both native point defects are possible sources of the intrinsic \\emph{p}-type conductivity manifested in few-layer phosphorene; (\\emph{iii}) these native defects have low formation energies and thus could serve as compensating centers in \\emph{n}-type multilayer phosphorene. The remainder of this paper is organized as follows. In Sec. II, the methodology and computational details are described. Sec. III presents the calculations of formation energies and transition energies of native point defects in few-layer phosphorene, followed by electronic structure analyses. Finally, a short summary is given in Sec. IV.\n\n\n\\section{Methodology}\nOur total energy and electronic structure calculations were carried out using the VASP code,\\cite{Kresse1996, Kresse1996a} based on the hybrid density functional theory (DFT) proposed by Heyd, Scuseria, and Ernzerhof (HSE).\\cite{Heyd2003} Recent developments of hybrid DFT can yield band gaps in good agreement with measurements,\\cite{Paier2006, Marsman2008, Park2011} and thus provide a more reliable description of transition levels and formation energies of defects in semiconductors.\\cite{Alkauskas2011, Deak2010, Komsa2011, Freysoldt2014} We here employed a revised scheme, HSE06.\\cite{Krukau2006} The screening parameter was set to 0.2 {\\AA}$^{-1}$; the Hartree-Fock (HF) mixing parameter $\\alpha$ was tuned to produce a band gap similar to the one given by the GW0 approximation, \\cite{Hedin1965, Shishkin2006} which means that $\\alpha$\\% of HF exchange was mixed with (100-$\\alpha$)\\% of Perdew, Burke and Ernzerhof (PBE) exchange\\cite{Perdew1996} in the generalized gradient approximation (GGA) and adopted in the exchange functional. 
The core-valence interaction was described by the frozen-core projector augmented wave (PAW) method.\\cite{PAW, Kresse1999} The electronic wave functions were expanded in a plane-wave basis with a cutoff of 250 eV. Test calculations show that the calculated formation energies of the neutral and negatively charged P vacancy in monolayer phosphorene change by less than 0.1 eV if the energy cutoff is increased to 400 eV. Previous theoretical calculations have shown that the interlayer vdW interaction needs to be considered for a proper description of the geometrical properties of black phosphorus.\\cite{Appalakondaiah2012} We therefore incorporated the vdW interactions by employing the semiempirical correction scheme of Grimme's DFT-D2 method, which has been successful in describing the geometries of various layered materials.\\cite{Grimme2006, Bucko2010}\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.5]{bulk.pdf}\n\\caption{\\label{bulk}(Color online) Top (a) and side (b) views of the unit cell of black phosphorus.}\n\\end{figure}\n\nIn simulation, a thin film of black phosphorus can be easily obtained by simply truncating the bulk into a slab containing only a few atomic layers. The atomic structure of black phosphorus is presented in Fig. \\ref{bulk}, from which a layered structure is clearly seen. In each layer, the \\emph{sp}$^3$ hybridization between one P atom and its three neighbors leads to the tripod-like local structure along the \\emph{c} direction. In the slab model of few-layer phosphorene, periodic slabs were separated by a vacuum no thinner than 15 {\\AA}. For bulk black phosphorus, an 8\\texttimes{}6\\texttimes{}1 \\emph{k}-mesh including the $\\Gamma$-point, generated according to the Monkhorst-Pack scheme,\\cite{Monkhorst1976} was applied to the Brillouin-zone integrations. 
During geometry optimization, both the shapes and internal structural parameters of pristine unit cells were fully relaxed until the residual force on each atom was less than 0.01 eV\/{\\AA}. \n\nThe defective system containing a self-interstitial atom, P$_i$, or a vacancy, V$_\\text{P}$, was modeled by adding or removing a P atom to or from a 3\\texttimes{}2 supercell of few-layer phosphorene. These are the native point defects considered in the present work. In monolayer phosphorene, there are three interstitial sites; in a multi-layer film, both P$_i$ and V$_\\text{P}$ can reside either in the outer or inner layers. We label these positions as X$^{in}$ and X$^{out}$ (X=P$_i$ and V$_\\text{P}$), respectively. In Fig. \\ref{interstitial}, we show the six inequivalent interstitial sites in bilayer phosphorene. In view of the fact that the contribution of the vdW interaction to the stability of adsorbates on graphene, even in chemisorption cases, is non-negligible,\\cite{Wang2012b} we expected that the HSE06 plus DFT-D2 method should give a more accurate description of the local structure of interstitial defects in few-layer phosphorene. A $\\Gamma$-centered 2\\texttimes{}2\\texttimes{}1 Monkhorst-Pack \\emph{k}-mesh was adopted for the 3\\texttimes{}2\\texttimes{}1 supercells. The internal coordinates in the defective supercells were relaxed to reduce the residual force on each atom to less than 0.02 eV\/{\\AA}. Moreover, we allowed spin polarization for the defective systems. A more detailed discussion on the convergence of the total energies of defective systems with respect to vacuum thickness is given in the next section.\n\nAn accurate description of the band structure of phosphorene is a prerequisite for obtaining reliable predictions of defect properties, which greatly impact the electronic conductivity in phosphorene. 
Since there are no reported experimental data for the band gaps of few-layer phosphorene, we compare our HSE06 results for defect-free few-layer phosphorene with those obtained using highly accurate quasiparticle GW0 calculations.\\cite{Hedin1965, Shishkin2006} The GW0 approximation has been shown to provide very reliable descriptions of the electronic and dielectric properties of many semiconductors and insulators.\\cite{Fuchs2007, Shishkin2007} To achieve good convergence of the dielectric function in the GW0 calculations, we used a large number of energy bands, 80 times the total number of atoms involved. The converged eigenvalues and wavefunctions obtained from the HSE06 functional with 25\\% HF exact exchange (denoted as HSE06-25\\% hereafter) were chosen as the initial input for the GW0 calculations. \nNote that in the GW0 calculations only the quasiparticle energies were recalculated self-consistently in four iterations;\nthe wavefunctions were not updated but remained fixed at the HSE06-25\\% level. A grid of 200 frequency points was applied to the integration over frequencies along the imaginary and real axes. For visualization purposes, the GW0 bands were interpolated based on Wannier orbitals, as implemented in the WANNIER90 code.\\cite{Mostofi2008}\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.40]{interstitial.pdf}\n\\caption{\\label{interstitial}(Color online) Six inequivalent interstitial configurations in phosphorene bilayer. The point defects are colored differently.}\n\\end{figure}\n\nTo model a charged defect, a uniform background charge with opposite sign was added to keep the global charge neutrality of the whole system. 
The formation energy of a charged defect was defined as \\cite{Zhang1991}\n\n\\begin{equation}\\label{eq1}\n\\begin{split}\n\\Delta E^f_D(\\alpha,q)=E_{tot}(\\alpha,q)-E_{tot}(host,0)-n_{\\alpha}\\mu_{\\alpha} \\\\ \n+q(\\mu_{e}+\\epsilon_{v})+E_{corr}[q],\n\\end{split}\n\\end{equation}\n\nwhere $E_{tot}(\\alpha,q)$ and $E_{tot}(host,0)$ are the total energies of the supercells with and without defect $\\alpha$. \\emph{n}$_\\alpha$ is the number of atoms of species $\\alpha$ added to (\\emph{n}$_\\alpha$>0) or removed from (\\emph{n}$_\\alpha$<0) the perfect supercell to create defect $\\alpha$. $\\mu_{\\alpha}$ is the atomic chemical potential, equal to the total energy per atom in its elemental crystal. \\emph{q} is the charge state of the defect, $\\epsilon_{v}$ is the host valence band maximum (VBM) level, and $\\mu_{e}$ is the electron chemical potential in reference to the $\\epsilon_{v}$ level. Therefore, $\\mu_{e}$ can vary between zero and the band gap (\\emph{E}$_g$) of few-layer phosphorene. The final term accounts for both the alignment of the electrostatic potential between the bulk \nand defective (charged) supercells, as well as the finite-size effects resulting from the long-range Coulomb interaction of charged defects in a homogeneous neutralizing background. It can be evaluated by using the Freysoldt correction scheme with an average static dielectric constant.\\cite{Freysoldt2009} \n\nA 12\\texttimes{}8\\texttimes{}1 \\emph{k}-mesh with a Gaussian smearing of 0.01 eV was employed in the calculations of the static dielectric tensors $\\epsilon$ of pristine few-layer phosphorene. 
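The formation-energy expression in Eq. (1) above is simple bookkeeping once the total energies, chemical potentials, and correction term are available. The following Python snippet is only a schematic illustration of that arithmetic; every number in it is a placeholder, not a value from our calculations.

```python
def formation_energy(E_tot_defect, E_tot_host, n_alpha, mu_alpha,
                     q, mu_e, eps_v, E_corr):
    """Delta E^f_D(alpha, q): total-energy difference, minus the atoms
    exchanged with the elemental reservoir, plus the electron-reservoir
    term and the finite-size charge correction (all energies in eV)."""
    return (E_tot_defect - E_tot_host - n_alpha * mu_alpha
            + q * (mu_e + eps_v) + E_corr)

# Placeholder example for a singly negative vacancy (one atom removed,
# so n_alpha = -1); all numbers below are hypothetical.
Ef = formation_energy(E_tot_defect=-412.3, E_tot_host=-418.6,
                      n_alpha=-1, mu_alpha=-5.4,
                      q=-1, mu_e=0.5, eps_v=-4.2, E_corr=0.1)
```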
For the static dielectric tensors, the ion-clamped contribution was calculated from the response theory of insulators in finite electric field.\\cite{Souza2002} Since the ionic contributions depend only on the Born effective charges and the vibrational modes,\\cite{Paier2009} they were treated using the GGA-PBE approximation based on density-functional perturbation theory.\\cite{Wu2005} More details of this technique can be found in our previous work.\\cite{Wang2014} \nThe defect thermodynamic transition (ionization) energy level $\\epsilon_{\\alpha}$(\\emph{q}\/$\\emph{q}^{\\prime}$) is defined as the Fermi-level (\\emph{E}$_\\text{F}$) position at which the formation energies of the same defect in the charge states \\emph{q} and $q^{\\prime}$ are equal,\n\\begin{equation}\\label{eq3}\n\\epsilon_{\\alpha}(q\/q^{\\prime})=[\\Delta E^f_D(\\alpha,q)-\\Delta E^f_D(\\alpha,q^{\\prime})]\/(q^{\\prime}-q).\n\\end{equation}\nMore specifically, the defect is stable in the charge state \\emph{q} when \\emph{E}$_\\text{F}$ is below $\\epsilon_{\\alpha}(q\/q^{\\prime})$, while it is stable in the charge state $q^{\\prime}$ for \\emph{E}$_\\text{F}$ positions above $\\epsilon_{\\alpha}(q\/q^{\\prime})$.\n\n\\section{Results and discussion}\n\\subsection{Fundamental properties of pristine few-layer phosphorene}\nPrior to the investigation of the defective systems, we first calculated the atomic and electronic properties of pristine few-layer phosphorene. The calculated lattice parameters as a function of film thickness, yielded by the PBE, PBE+vdW, and HSE06-25\\%+vdW treatments of the density functional, are listed in Table \\ref{lattice}. \nWe find that the lattice parameter \\emph{b} increases by 0.07-0.15 {\\AA} from bulk to monolayer, while \\emph{a} and the interlayer distance $\\Delta$\\emph{d} are quite insensitive to the film thickness. 
Similar trends were also reported in a previous first-principles study by Qiao \\emph{et al.}\\cite{Qiao2014} For bulk black phosphorus, the measured lattice parameters are \\emph{a}=3.31 {\\AA}, \\emph{b}=4.38 {\\AA} and $\\Delta$\\emph{d}=5.24 {\\AA}.\\cite{Brown1965} We see that PBE overestimates both \\emph{b} (3.6\\%) and $\\Delta$\\emph{d} (5.5\\%); PBE+vdW and HSE06+vdW, on the other hand, are in much better agreement with experiment. So far, there are no experimental data for few-layer phosphorene systems, but we expect that the success of PBE+vdW and HSE06+vdW in the description of bulk black phosphorus is likely to extend to few-layer phosphorene. Therefore, we include the vdW correction in the following calculations unless otherwise stated. \n\n\n\\begin{table*}[htbp]\n\\centering\n\\begin{ruledtabular}\n\\caption{\\label{lattice} Lattice constants \\emph{a}, \\emph{b} and interlayer distance $\\Delta$\\emph{d} as a function of film thickness in few-layer phosphorene given by the PBE, PBE+vdW and HSE06-25\\%+vdW approaches respectively.}\n\\begin{tabular}{c|ccc|ccc|ccc|}\n&\\multicolumn{3}{c|}{PBE}\n&\\multicolumn{3}{c|}{PBE+vdW}\n&\\multicolumn{3}{c|}{HSE06-25\\%+vdW}\\\\\nSystems & \\emph{a} (\\AA) & \\emph{b} (\\AA) & $\\Delta$\\emph{d} (\\AA) & \\emph{a} (\\AA) & \\emph{b} (\\AA) & $\\Delta$\\emph{d} (\\AA) & \\emph{a} (\\AA)& \\emph{b} (\\AA) & $\\Delta$\\emph{d} (\\AA) \\\\\n\\hline\nmonolayer &3.30 & 4.61 & - & 3.32 & 4.56 & - & 3.30 & 4.50 & - \\\\\nbilayer & 3.31 & 4.58 & 5.57 & 3.32 & 4.50 & 5.21\t& 3.30 & 4.45 & 5.17 \\\\\ntrilayer & 3.31 & 4.58 & 5.58 & 3.32 & 4.47 & 5.22\t& 3.30 & 4.44 & 5.18 \\\\\nquadrilayer & 3.31 & 4.57& 5.59 & 3.32 & 4.46 & 5.23 & 3.30 & 4.44 & 5.19 \\\\\nbulk$^a$ &3.31 & 4.54 & 5.53 & 3.33 & 4.41 & 5.23 & 3.31 & 4.37 & 5.19 \\\\\n\\end{tabular}\n\\leftline{$^a$ Experimental lattice constants: \\emph{a}=3.31 {\\AA}, \\emph{b}=4.38 {\\AA} and $\\Delta$\\emph{d}=5.24 {\\AA} in reference 
\\onlinecite{Brown1965}.}\n\\end{ruledtabular}\n\\end{table*}\n\nThe standard HSE06 approach with 25\\% exact exchange is known to reproduce well the band gaps of small- to medium-gap systems, but not those of wide-gap materials.\\cite{Paier2006, Paier2006a, Marsman2008} Recently, Fuchs \\emph{et al.} have shown that the GW0 approach can describe very well (though slightly overestimate) the electronic structure of wide-gap materials, with a mean absolute relative error (MARE) of only 8.0\\% on the calculated band gaps of some representative traditional semiconductors.\\cite{Fuchs2007} We summarize the PBE, HSE06, and GW0 calculated band gaps of few-layer phosphorene and bulk black phosphorus in Table \\ref{bandgap}. For the bulk, GW0 gives a band gap of 0.65 eV, significantly higher than the experimental value of 0.31-0.35 eV.\\cite{Keyes1953, Maruyama1981, Akahama1983, Warschauer2004} The HSE06 result, 0.28 eV, is slightly lower than the experimental value. We therefore expect the GW0 and HSE06-25\\% approaches to give reasonable upper and lower bounds, respectively, for the band gap of few-layer phosphorene.\n\nThe most important message from Table \\ref{bandgap} is that all density functional forms predict the same trend: the energy band gap of phosphorene decreases with increasing film thickness. This phenomenon, we argue, is mainly due to the energy band broadening induced by the interlayer interaction. Additionally, the quantum confinement effect in low-dimensional materials is likely to contribute to this trend.\\cite{Kang2013} Since there are no experimental results concerning defective phosphorene and the GW0 approach can perform neither structural optimization nor total energy calculations, we chose to utilize somewhat larger Hartree-Fock mixing parameters $\\alpha_\\text{opt}$ for thin phosphorene, \\emph{i.e.}, 35\\% for monolayer and 30\\% for bilayer, in an attempt to rectify the probably underestimated band gaps. 
As for the quadrilayer phosphorene, we used a parameter of 25\\%, the same value as for the bulk. \n\n\n\\begin{table*}[htbp]\n\\centering\n\\begin{ruledtabular}\n\\caption{\\label{bandgap} The calculated band gap (\\emph{E}$_g$) of few-layer phosphorene as a function of film thickness using the PBE, HSE06 and GW0 methods respectively.}\n\\begin{tabular}{c|cccccc}\nSystems & PBE & HSE06-25\\% & HSE06-$\\alpha_\\text{opt}$ & GW0 & Previous work$^a$ & Exp. \\\\\n\\hline\nmonolayer &0.91 & 1.56 & 1.91$^b$ & 2.41 & 1.5-2.0 & - \\\\\nbilayer &0.45 & 1.04 & \t1.23$^c$ & 1.66 & 1.0-1.3 &- \\\\\ntrilayer &0.20 & 0.74 & \t0.98$^c$ & 1.20 & 0.7-1.1 &- \\\\\nquadrilayer & 0.16 & 0.71 & 0.71$^d$ & 1.08 &0.5-0.7 & -\\\\\nbulk &0.10 & 0.28 & 0.28$^d$ & 0.58 & $\\sim$0.3 & 0.31$\\sim$0.35$^e$ \\\\\n\\end{tabular}\n\\leftline{$^a$ References \\onlinecite{Rodin2014,Tran2014,Qiao2014}.}\n\\leftline{$^b$ HSE06-35\\%.}\n\\leftline{$^c$ HSE06-30\\%.}\n\\leftline{$^d$ HSE06-25\\%.}\n\\leftline{$^e$ References \\onlinecite{Keyes1953,Maruyama1981,Akahama1983,Warschauer2004}.}\n\\end{ruledtabular}\n\\end{table*}\n\nFigure \\ref{monolayer} displays the band structure of monolayer phosphorene calculated using HSE06 and GW0. Note that both the VBM and the conduction band minimum (CBM) are located at the $\\Gamma$ point, and hence the band gap is direct. This result is consistent with many previous theoretical studies.\\cite{Qiao2014, Tran2014, Liu2014, Peng2014, Rodin2014} However, there is some disagreement on this point. For example, Li \\emph{et al.} have argued that monolayer phosphorene possibly possesses an indirect\nband gap, because the band interactions near the $\\Gamma$ point are complicated, as viewed from $k\\cdot p$ perturbation theory.\\cite{Li2014a} The partial charge density analyses show that the VBM is derived from the bonding states between P atoms in different sublayers and the anti-bonding states between P atoms in the same sublayer. The opposite is true for the CBM. 
\n\nWe plot in Fig. \\ref{bilayer}(a) the band structure of bilayer phosphorene. Clearly, the band characteristics are similar to those of the monolayer, except that in the bilayer, energy level splitting occurs due to the interlayer interactions. The formation of bilayer phosphorene can be viewed as the result of two monolayers moving close to each other: the degenerate energy levels of the two monolayers become non-degenerate via the interlayer interactions. Overall, in both the monolayer and bilayer cases, HSE06 and GW0 yield similar band dispersions. A notable discrepancy occurs for valence states lying about 10 eV below the VBM: energy bands calculated using the HSE06 approach are pushed further downward compared to those obtained using the GW0 approach.\n\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.5]{monolayer.pdf}\n\\caption{\\label{monolayer}(Color online) (a) Energy band structure of phosphorene monolayer calculated using the HSE06 and GW0 methods, and side views of the charge densities of the (b) CBM and (c) VBM. The vacuum level is set to zero and the charge density isosurface levels are shown at 40\\% of their maximum values.}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.52]{bilayer.pdf}\n\\caption{\\label{bilayer}(Color online) (a) Energy band structure of phosphorene bilayer calculated using the HSE06 and GW0 methods, and side views of the charge densities of the (b) CBM and (c) VBM. The vacuum level is set to zero and the charge density isosurface levels are shown at 40\\% of their maximum values.}\n\\end{figure}\n \n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.48]{bandalign.pdf}\n\\caption{\\label{bandalign}(Color online) Band alignments for few-layer phosphorene. The vacuum level is taken as the zero energy reference.}\n\\end{figure}\n\nThe calculated band alignments for few-layer phosphorene using different approaches are shown in Fig. \\ref{bandalign}. 
Although differing in magnitude, all approaches produce similar trends: (\\emph{i}) with the increase in film thickness, the VBM and CBM of few-layer phosphorene move upward and downward respectively, as is the case in few-layer transition-metal dichalcogenides;\\cite{Kang2013} (\\emph{ii}) overall, the magnitude of the band offset on the valence band is more significant than that on the conduction band. This implies that the transition levels of acceptors depend more sensitively on film thickness than those of donors.\n\n\nTo evaluate the formation energy of charged defects via Eq. (\\ref{eq1}), we need to know the static dielectric tensor of few-layer phosphorene. With the periodic slab model, our calculated static dielectric tensor $\\varepsilon$ demonstrates a linear dependence on the inverse of the vacuum thickness (Fig. \\ref{epsilon}). Obviously, the \\emph{true} value of the static dielectric tensor is the one obtained in the limiting case of infinite vacuum. In effect, it can be extrapolated from the results for finite-size supercells with different vacuum thicknesses by a linear scaling scheme. We list in Table \\ref{dielectric} the calculated $\\varepsilon$ of few-layer phosphorene parallel to the \\emph{a} ($\\varepsilon^{a}$), \\emph{b} ($\\varepsilon^{b}$), and \\emph{c} ($\\varepsilon^{c}$) axes using HSE06. It is seen that the static dielectric tensor becomes larger for thicker phosphorene, due to the enhanced screening effect. Additionally, the decrease in the band gap with increasing film thickness also contributes to this trend. The ionic contributions to $\\varepsilon$, on the other hand, are found to be rather small ($\\leq$0.5). For the bulk system, our calculated $\\varepsilon$ differ noticeably from those of Ref. \\onlinecite{Asahina1984}, in which the frequency-dependent dielectric function calculations were performed using the local density approximation. 
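The extrapolation to the infinite-vacuum limit described above amounts to a linear fit of $\\varepsilon$ against the inverse vacuum thickness, with the intercept taken as the converged value. A minimal sketch with made-up data points (not values from this work):

```python
# Hypothetical slab dielectric constants computed at several vacuum
# thicknesses L (angstrom); eps is assumed linear in 1/L.
L_vac = [15.0, 20.0, 30.0, 40.0]
eps = [1.95, 1.80, 1.65, 1.58]

x = [1.0 / L for L in L_vac]

# Ordinary least-squares line eps = a*x + b; the intercept b is the
# extrapolated dielectric constant at infinite vacuum (x -> 0).
n = len(x)
xm = sum(x) / n
ym = sum(eps) / n
a = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, eps))
     / sum((xi - xm) ** 2 for xi in x))
b = ym - a * xm
print(f"extrapolated eps(infinite vacuum) = {b:.3f}")
```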
\nSince the defective few-layer phosphorene systems have been modeled with supercells containing a finite-size vacuum, the $\\varepsilon$ obtained from the corresponding pristine unit cells were adopted in calculating the formation energies of defects.\n\n\n\\begin{table}[htbp]\n\\centering\n\\begin{ruledtabular}\n\\caption{\\label{dielectric} Extrapolated static dielectric tensors $\\varepsilon$ of few-layer phosphorene and bulk black phosphorus along the \\emph{a}, \\emph{b} and \\emph{c} axes.}\n\\begin{tabular}{c|ccc}\nSystems & $\\varepsilon^{a}$ & $\\varepsilon^{b}$ & $\\varepsilon^{c}$ \\\\\n\\hline\nmonolayer & 1.12 & 1.15 & 1.01 \\\\\nbilayer & 1.72 & 1.93 & 1.03 \\\\\nquadrilayer & 2.79 & 3.02 & 1.05 \\\\\nbulk & 11.99 & 14.64 & 7.86 \\\\\nbulk (Ref. \\onlinecite{Asahina1984}) & 10.2 & 12.5 & 8.3 \\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.42]{dielectric.pdf}\n\\caption{\\label{epsilon}(Color online) Static dielectric tensors $\\epsilon$ as functions of the inverse of the vacuum thickness for (a) monolayer, (b) bilayer and (c) quadrilayer phosphorene respectively.}\n\\end{figure}\n\n\n\\subsection{Properties of native defects in few-layer phosphorene}\nSince the electrostatic screening effect of the vacuum slab along the \\emph{c} direction is small (the dielectric constant of vacuum is equal to 1), we take monolayer phosphorene as an example to check the total energy convergence of charged-defect systems with respect to the vacuum thickness. Test calculations show that a vacuum thickness of 12 {\\AA} ensures that the total energies of charge-neutral systems are well converged, to within 0.01 eV. This is not the case, however, for charged defects. 
Figure \\ref{converge}(a) reveals that the numerical errors in the calculated total energies of monolayer phosphorene containing one V$_\\text{P}^{out}$ or P$_i^{out}$ in the 1- charge state are about 0.01 eV when a vacuum of 40 {\\AA} is applied. However, for defects in the 1+ charge state, a vacuum of 40 {\\AA} is still far from sufficient [Fig. \\ref{converge}(b)]. Thus, the formation energies of positively and negatively charged native defects in few-layer phosphorene would be overestimated and underestimated, respectively, if a typical 12 {\\AA} vacuum were adopted without any corrections. These errors lead to unrealistically deep transition levels for both acceptors and donors.\n\n \n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.34]{converge.pdf}\n\\caption{\\label{converge}(Color online) Total energies of phosphorene monolayer containing a vacancy, V$_\\text{P}^{out}$, or a self-interstitial, P$_i^{out}$, in the charge state of (a) 1- or (b) 1+, as a function of the inverse of the vacuum thickness. The total energies of the slabs with a vacuum thickness of 20 {\\AA} are taken as zero.}\n\\end{figure}\n\nThe calculated formation energies of V$_\\text{P}$ and P$_i$ in monolayer phosphorene as a function of electron chemical potential are plotted in Fig. \\ref{monolayer_D}(a). The change of slope in the line for P$_i$ marks the Fermi-level position at which the thermodynamic transition between charge states takes place. We find that V$_\\text{P}$ is stable in the 1- charge state with respect to the neutral state for all values of \\emph{E}$_\\text{F}$ in the host band gap. 
This means that V$_\\text{P}$ behaves as a shallow acceptor and could be one of the sources of the \\emph{p}-type conductivity observed experimentally.\\cite{Liu2014} Because of its high formation energy (around 2.9 eV at \\emph{E}$_\\text{F}$=VBM), the negatively charged V$_\\text{P}$ has a low concentration in phosphorene monolayer under equilibrium growth conditions, and thus might not be an efficient \\emph{p}-type defect. Upon geometry optimization, the nearest neighbor of the 1- charged V$_\\text{P}$ on the top sublayer relaxes toward V$_\\text{P}$ and bonds to its four neighbors with two different bond lengths of 2.41 {\\AA} and 2.28 {\\AA} respectively [Fig. \\ref{monolayer_D}(b)]. It should be pointed out that the donor ionization levels of V$_\\text{P}$ or P$_i$ are unstable for all positions of \\emph{E}$_\\text{F}$ in the host band gaps of few-layer phosphorene, suggesting that both V$_\\text{P}$ and P$_i$ are expected to be acceptors rather than donors.\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.43]{monolayer_D.pdf}\n\\caption{\\label{monolayer_D}(Color online) (a) Formation energies of V$_\\text{P}$ and P$_i$ as a function of electron chemical potential $\\mu_e$ in monolayer phosphorene. (b) Local structures of V$_\\text{P}$ and P$_i$. The defect and its nearest neighbors are colored differently.}\n\\end{figure}\n\nA self-interstitial P atom, P$_i$, finds its stable position by bridging two host P atoms [see Fig. \\ref{interstitial}(a)]. The formation energy of P$_i$ is about 1.0 eV lower than that of V$_\\text{P}$ when \\emph{E}$_\\text{F}$ is near the VBM, suggesting that P$_i$ is the dominant native point defect under \\emph{p}-type conditions. The (0\/1-) acceptor level of P$_i$ is predicted to be 0.88 eV above the VBM, implying that P$_i$ is a deep acceptor. 
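Since Eq. (\\ref{eq1}) is linear in $\\mu_e$ with slope \\emph{q}, the stable charge state at a given Fermi level is simply the lowest-lying formation-energy line, and the crossing of two lines is the transition level of Eq. (\\ref{eq3}). A sketch using the monolayer P$_i$ numbers quoted in the text (neutral formation energy 1.86 eV, acceptor level 0.88 eV above the VBM):

```python
# Formation energies at mu_e = 0 (VBM), using the monolayer P_i values
# quoted in the text: dEf(q=0) = 1.86 eV, acceptor level (0/1-) at 0.88 eV,
# so dEf(q=-1) at the VBM is 1.86 + 0.88 eV.
dEf_at_vbm = {0: 1.86, -1: 1.86 + 0.88}

def dEf(q, mu_e):
    """Formation energy line of Eq. (1): linear in mu_e with slope q."""
    return dEf_at_vbm[q] + q * mu_e

# Transition level eps(0/-1) from Eq. (3): [dEf(0) - dEf(-1)] / (q' - q).
eps_0_m1 = (dEf_at_vbm[0] - dEf_at_vbm[-1]) / (-1 - 0)

# Stable charge state = the line with the lowest formation energy.
stable_q_low = min(dEf_at_vbm, key=lambda q: dEf(q, 0.5))   # E_F below eps(0/-1)
stable_q_high = min(dEf_at_vbm, key=lambda q: dEf(q, 1.2))  # E_F above eps(0/-1)
```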
On the other hand, when the \\emph{E}$_\\text{F}$ is close to the host CBM, both V$_\\text{P}$ and P$_i$ have much lower formation energies and are energetically stable in the 1- charge state. This means that they can serve as compensating centers in the \\emph{n}-type doped monolayer. In the 1- charge state, P$_i$ bonds to two host P atoms with identical bond lengths of 2.14 {\\AA}. In the neutral charge state, a small asymmetry appears in these two bonds (2.06 {\\AA} versus 2.20 {\\AA}), a local lattice distortion different from that around P$_i$ in the 1- charge state. \n\nIn Figure \\ref{multilayer_D}, we display the calculated formation energies of V$_\\text{P}$ and P$_i$ in bilayer (panel a) and quadrilayer phosphorene (panel b) as a function of electron chemical potential. Our calculations show that both P$_i^{out}$ and V$_\\text{P}^{out}$ are energetically more stable than P$_i^{in}$ and V$_\\text{P}^{in}$ in both films, regardless of the charge states. The acceptor transition levels of V$_\\text{P}^{out}$ and P$_i^{out}$ are -0.64 eV (not shown) and 0.19 eV with respect to the VBM, indicating that all possible native defects can contribute to the \\emph{p}-type conductivity in the bilayer. For quadrilayer phosphorene, both V$_\\text{P}^{out}$ and P$_i^{out}$ are stable in the 1- charge state for any \\emph{E}$_\\text{F}$ in the band gap. This trend is closely related to the upward shift of the band offset for the VBM (see Fig. \\ref{bandalign}). The calculated formation energies of all considered native defects decrease with the increase of film thickness. For example, the calculated formation energies of the neutral V$_\\text{P}^{out}$ and P$_i^{out}$ decrease from 2.88 and 1.86 eV in the monolayer, to 2.67 and 1.82 eV in the bilayer, and further to 2.18 and 1.73 eV in the quadrilayer, with the layer-dependent effect being more significant for V$_\\text{P}^{out}$ than for P$_i^{out}$. 
Therefore, the formation energies of these acceptor-type defects in \\emph{N}-layer phosphorene (\\emph{N}>4) could be low enough when the \\emph{E}$_\\text{F}$ is near the CBM. As a result, self-compensation would be unavoidable in \\emph{n}-type phosphorene. We expect that nonequilibrium growth techniques might be necessary to reduce the concentrations of native defects in the preparation of \\emph{n}-type phosphorene. \n\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.4]{multilayer_D.pdf}\n\\caption{\\label{multilayer_D}(Color online) Formation energies of V$_\\text{P}$ and P$_i$ in (a) bilayer and (b) quadrilayer phosphorene as a function of electron chemical potential.}\n\\end{figure}\nWe plot in Fig. \\ref{bilayer_struc} the local atomic arrangements around the negatively charged V$_\\text{P}^{out}$, V$_\\text{P}^{in}$, P$_i^{out}$ and P$_i^{in}$ in bilayer phosphorene. One can see that the relaxed local structure of V$_\\text{P}^{out}$ is very similar to the monolayer case (panel a). Unlike V$_\\text{P}^{out}$, the neighboring P atoms of the negatively charged V$_\\text{P}^{in}$ undergo no significant distortion from their ideal lattice positions (panel b). This in turn leads to long ($\\geq$3.1 {\\AA}) and weak bonds between the nearest neighbors of V$_\\text{P}^{in}$. The equilibrium local structure of the negatively charged P$_i^{out}$ in the bilayer is also similar to that in the monolayer. As for P$_i^{in}$, the upper layer pushes the negatively charged P$_i^{in}$ downward, resulting in two identical bond lengths (2.14 {\\AA}) between P$_i^{in}$ and its two nearest neighbors. 
Meanwhile, the nearest neighbors on the upper layer relax symmetrically away from P$_i^{in}$, as illustrated in panel (d).\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.34]{bilayer_struc.pdf}\n\\caption{\\label{bilayer_struc}(Color online) Local structures of the negatively charged (a) V$_\\text{P}^{out}$, (b) V$_\\text{P}^{in}$, (c) P$_i^{out}$ and (d) P$_i^{in}$ in bilayer. The point defects and their nearest neighbors are colored differently.}\n\\end{figure}\nTo gain deeper insight into the origin of the conductive characteristics of few-layer phosphorene, we display in Fig. \\ref{defectalign} the transition levels of native point defects with respect to the vacuum level. One can see that the transition levels of V$_\\text{P}$ and P$_i$ generally decrease with increasing film thickness. This means that the formation energies of negatively charged defects decrease more rapidly than those of the neutral ones when going from monolayer to quadrilayer, which shifts the acceptor transition levels of V$_\\text{P}$ and P$_i$ toward lower energies. Combined with the band offset effects for the host VBM, this shift is also responsible for the observed shallower acceptor levels of V$_\\text{P}$ and P$_i$ in thicker films.\n\nWe note that three different HF mixing parameters $\\alpha$ (35\\%, 30\\% and 25\\%) were adopted for monolayer, bilayer and quadrilayer phosphorene, respectively. We now take V$_\\text{P}^{out}$ and P$_i^{out}$ as examples to investigate the impact of $\\alpha$ on their stability and conductivity. We present in panel (a) of Fig. \\ref{alpha} a comparison of HSE06-25\\% and HSE06-35\\% with respect to the formation energies of V$_\\text{P}$ and P$_i$ as a function of electron chemical potential in monolayer phosphorene. 
A deviation of around 0.4 eV is observed for the formation energy of P$_i^{out}$, while the change in the transition levels of V$_\\text{P}^{out}$ and P$_i^{out}$, shown in panel (b), is within 0.1 eV when $\\alpha$ goes from 35\\% to 25\\%. This suggests that $\\alpha$ has an insignificant effect on the transition levels of V$_\\text{P}^{out}$ and P$_i^{out}$; the rigid shifts of the host VBM are primarily responsible for the shallower transition levels calculated using the HSE06-25\\% approach. Furthermore, one can conclude from the HSE06-25\\% results that P$_i$ still acts as a deep acceptor even if monolayer phosphorene has a band gap of 1.56 eV. \nWe expect this to hold true for thicker phosphorene. Similar results were also found for the native defects in GaInO$_3$.\\cite{Wang2015}\n\n \n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.5]{defectalign.pdf}\n\\caption{\\label{defectalign}(Color online) Transition levels of V$_\\text{P}$ and P$_i$ in few-layer phosphorene, referenced to the vacuum level.}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.4]{alpha.pdf}\n\\caption{\\label{alpha}(Color online) (a) Formation energies of V$_\\text{P}$ and P$_i$ as functions of electron chemical potential in monolayer phosphorene given by the HSE06-$\\alpha$ method. The solid and dashed lines represent the $\\alpha$=35\\% and $\\alpha$=25\\% results, respectively. The gray region represents the HSE06-25\\% band gap. (b) Transition energies referenced to the vacuum level.}\n\\end{figure}\n\n\\section{Summary}\nIn conclusion, we have investigated the structural and electronic properties of native point defects in few-layer phosphorene using first-principles calculations based on hybrid density functional theory including vdW corrections. Our calculations show that both vacancy and self-interstitial P defects exhibit acceptor-like behavior, and their formation energies and transition levels decrease with increasing film thickness. 
The same trend is also observed for the host band gap. These trends can be explained by the band offsets of few-layer phosphorene. Specifically, we find that the valence band maximum and the conduction band minimum systematically shift upward and downward, respectively, in reference to the vacuum level with increasing film thickness. As a result, both vacancies and self-interstitials become shallow acceptors in few-layer phosphorene and can account for the \\emph{p}-type conductivity observed in experiments. On the other hand, these native acceptors could have non-negligible concentrations and thus act as compensating centers in \\emph{n}-type phosphorene. \n \n\n\n\n\\begin{acknowledgments}\nWe thank Y. Kumagai for valuable discussions. V. Wang acknowledges the support of the Natural Science Foundation of Shaanxi Province, China (Grant No. 2013JQ1021). Y. Kawazoe thanks the Russian Megagrant Project No. 14.B25.31.0030 ``New energy technologies and energy carriers''\nfor supporting the present research. The calculations were performed on the HITACHI SR16000 supercomputer at the Institute for Materials Research of Tohoku University, Japan.\n\\end{acknowledgments}\n\n\n\\nocite{*}\n\\bibliographystyle{aipnum4-1}