\\section{Introduction and motivation}\\label{Intro}\n\n$\\;\\;\\;$\n\nIn general, the electromagnetic field\ntensor ${F}$, expressed by a $4\\times 4$ antisymmetric matrix, is used to describe\nthe electromagnetic field intensity. This description involves 6 parameters. An alternative way to describe an\nelectromagnetic field is by use of the 4-potential. In a chosen gauge, the 4-potential\ntransforms as a 4-vector. The electromagnetic field tensor is then recovered by differentiating the 4-potential.\nChoice of a gauge can reduce the degrees of freedom of this description from 4 to 3.\n\nIn 1904 E. T. Whittaker introduced \\cite{Whittaker} two scalar potential functions. Thus, he was able to reduce\nthe degrees of freedom of the electromagnetic field description to 2.\nHe showed that the electromagnetic field can be expressed in terms of the\nsecond derivatives of these functions. However, he was not able to establish the covariance of his scalar potentials.\nH. S. Ruse \\cite{Ruse} improved the result of Whittaker. In 2009 Y. Friedman and S. Gwertzman showed \\cite{FGW} that it is possible to combine these two scalar potential functions into one complex-valued function ${S}(x)$ on Minkowski space, which we call the \\textit{prepotential of the electromagnetic field}. Moreover, they showed that this prepotential is invariant under a certain spin half representation of the Lorentz group. Thus, this prepotential provides a covariant description of an electromagnetic field with minimal degrees of freedom.\n\nIn 1953, influenced by M. Born, who was looking for a connection between the wave-function description of elementary particles in quantum mechanics and the electromagnetic field they generate, H. S. Green and E. 
Wolf introduced \\cite{GreenWolf53} a complex scalar potential for an electromagnetic field. They described the similarity between the expressions for the energy and momentum densities obtained from their potential and those obtained from the wave function. They were unable, however, to find the connection between their potential and the Whittaker potentials.\n\nAn electromagnetic field is generated by a collection of moving charges. Thus, a description of an electromagnetic field can be obtained by integrating the fields of moving charges. For a moving charge and observer at\npoint $x$, the position of the charge at the retarded time relative to the observer is a null-vector in spacetime. We show that for each such vector, there is a complex dimensionless scalar which is invariant under a certain representation of the Lorentz group. The prepotential $S(x)$ is defined to be the logarithm of this scalar.\n\nThe Aharonov-Bohm effect indicates that there is a multiple-valued prepotential of the 4-potential of an electromagnetic field.\n In their paper \\cite{AharonovBohm59}, Aharonov and Bohm introduced a scalar function $S$ such that $\\nabla S=(e\\hbar\/c)A$, where $A$ is the 4-potential of the electromagnetic field.\nIt has been shown that if $\\psi_0$ is the solution of the Schr\\\"odinger equation in the absence of an electromagnetic field, then the function $\\psi=\\psi_0e^{-iS\/\\hbar}$ is the solution of the equation in the presence\nof the field, at least in a simply connected region in which the electromagnetic field vanishes.\nIn the magnetic Aharonov-Bohm experiment, however, the region outside the solenoid is not simply connected. This leads to a multi-valued $\\psi$ in this experiment. As was shown in \\cite{FOst}, our prepotential is of the same type, but is defined for any electromagnetic field. Our prepotential is also multi-valued.\n\nIn classical mechanics, the negative of the gradient of a scalar potential equals the force.\n This is true for forces which generate linear acceleration. 
Such forces are defined by a one-form\n(since their line integral gives the work). The derivative of a scalar function\nis also a one-form. Hence, the derivative of a potential can equal the negative of the\nforce. But in classical mechanics we also have rotating forces, which are described by\ntwo-forms. Such forces cannot be expressed as derivatives of a scalar potential. For example, the electromagnetic field is not conservative in general. It generates both linear and rotational acceleration, and the electromagnetic tensor is a two-form. Hence, it is natural to assume that a kind of second derivative of a scalar potential will define\nthe force tensor. Note that the usual differential of a gradient of a real-valued function is zero.\nTherefore, the prepotential must be complex-valued, and we will need to define a Lorentz invariant conjugation of the gradient of the prepotential in order to obtain the 4-potential of the field.\n\nAnother important property of a prepotential of an electromagnetic field is its \\textit{locality}. Note that the electromagnetic tensor $F_{\\mu\\nu}$ of a field of a moving charge depends on the position, velocity and acceleration, while the 4-potential $A_\\mu$ depends only on the position and velocity of the source. Our prepotential $S$ depends only on the position of the source.\n\nIn section 2, we obtain a Lorentz group representation based on the complex electromagnetic field tensor. We will show that the regular representation $\\pi$ of the Lorentz group can be decomposed as a product of two commuting representations $\\tilde{\\pi}$ and $\\tilde{\\pi}^*$. In section 3, we will introduce the prepotential of an electromagnetic field and show its geometric meaning. We will also show the covariance of this prepotential under the action of the representation $\\tilde{\\pi}$. In section 4, we will study the connection between the prepotential and the Faraday vector of a field.\nWe will find the gauge of the prepotential. 
Explicit solutions for the prepotential and field of a charge at rest and a charged infinite rod will be found. Using computer algebra, we will derive the field of an arbitrary moving charge from its prepotential. The Maxwell equations for the prepotential will be derived in section 5 and we will demonstrate their use for deriving the prepotential and the field of a current in a straight wire. In section 6, we will show that the representation $\\tilde{\\pi}$ coincides with the spin half representation of the Lorentz group on the spinors. We also show that the matrices occurring in the description of the connection of the prepotential to the field are a representation of the Dirac $\\alpha$-matrices.\n\n\n\\section{Lorentz group representation based on complex electromagnetic field tensor}\\label{secLor}\n\nIn flat Minkowski space $M$, the spacetime coordinates of an event are denoted by\n$x^{\\mu}\\;(\\mu=0,1,2,3)$, with $x^0=ct$. The Minkowski inner product is\n$x\\cdot y=\\eta_{\\mu\\nu}x^{\\mu}y^{\\nu}$, where\nthe Minkowski metric is $\\eta_{\\mu\\nu}=\\operatorname{diag}(1,-1,-1,-1)$.\nThe usual Lorentz group representation $\\pi$ can be associated with an electromagnetic field tensor $F_\\alpha^\\beta(\\mathbf{E},\\mathbf{B})$, as follows. We set\n\\begin{equation}\\label{EMMixedTensor}\nF=F_\\alpha^\\beta = \\left(\\begin{array}{cccc}\n0 & E_1 & E_2 & E_3 \\\\\nE_1 & 0 & cB_3 & -cB_2 \\\\\nE_2 & -cB_3 & 0 & cB_1 \\\\\nE_3 & cB_2 & -cB_1 & 0 \\\\\n\\end{array}\\right),\n\\end{equation}\n where $\\mathbf{E}$ denotes the electric field intensity\n and $\\mathbf{B}$ the magnetic field intensity.\n\nSince a magnetic field generates a rotation,\na generator of a rotation about the direction $\\mathbf{n}\\in \\mathbb{R}^3$ is defined by $F_\\alpha^\\beta(0,\\mathbf{B})$, with $c\\mathbf{B}=-\\mathbf{n}$. 
Thus, the representation of the rotation\nabout the direction $\\mathbf{n}$ with angle $\\omega$ is given by the operator $\\exp(F_\\alpha^\\beta(0,-\\mathbf{n})\\omega).$ For example, a\nrotation $\\mathfrak{R}^1$ about the $x^1$-axis (rotation in the $x^2x^3$ plane) is represented as\n\\begin{equation}\\label{RepPiR1}\n\\pi(\\mathfrak{R}^1)(\\omega)=\\Lambda_{23}(\\omega)=\\exp (\\left(\n \\begin{array}{cccc}\n 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & -1 \\\\\n 0& 0 & 1 & 0 \\\\\n \\end{array}\n \\right)\\omega)=\\left(\n \\begin{array}{cccc}\n 1 & 0 & 0 & 0 \\\\\n 0 & 1 & 0 & 0 \\\\\n 0 & 0 & \\cos\\omega & -\\sin\\omega \\\\\n 0 & 0 & \\sin\\omega & \\cos\\omega \\\\\n \\end{array}\n \\right).\n\\end{equation}\nSimilarly, since an electric field generates a boost, a generator of a boost in the direction $\\mathbf{n}$ can be identified with $F_\\alpha^\\beta(\\mathbf{n},0)$. Thus, the representation of the boost in the direction $\\mathbf{n}$ with rapidity $\\omega$ is given by the operator $\\exp(F_\\alpha^\\beta(\\mathbf{n},0 )\\omega).$ For example,\nthe boost $\\mathfrak{B}^1$ along the $x^1$-axis is represented as\n\\begin{equation}\\label{RepPiB1}\n\\pi(\\mathfrak{B}^1)(\\omega)=\\Lambda_{01}(\\omega)=\\exp (\\left(\n \\begin{array}{cccc}\n 0 & 1 & 0 & 0 \\\\\n 1 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 \\\\\n 0& 0 & 0 & 0 \\\\\n \\end{array}\n \\right)\\omega)=\\left(\n \\begin{array}{cccc}\n \\cosh\\omega & \\sinh\\omega & 0 & 0 \\\\\n \\sinh\\omega & \\cosh\\omega & 0 & 0 \\\\\n 0 & 0 & 1&0 \\\\\n 0 & 0 & 0&1 \\\\\n \\end{array}\n \\right).\n\\end{equation}\n\nOur complex prepotential $S(x)$ is a function $M \\rightarrow \\mathbb{C}$ on Minkowski space $M$.\nIts gradient at any spacetime point belongs to the complexified cotangent space at that\npoint, which we identify with $\\mathbb{C}^4$. 
Denote by $M_c$ the complex space $\\mathbb{C}^4$\nendowed with the bilinear complex-valued form $x\\cdot y=\\eta_{\\mu\\nu} x^\\mu y^\\nu$, which can\nbe considered as a complexification of Minkowski space. The bilinear form on $M_c$ is an extension of the Minkowski inner product.\n\nObviously, the representation $\\pi$ of the Lorentz group on $M$ can be extended linearly to\na representation on $M_c$. We claim that this representation can be decomposed as follows.\n\\begin{claim}\\label{pi-decomposition}\nThe representation $\\pi$ of the Lorentz group on $M_c$ can be decomposed into a product\n\\begin{equation}\\label{DecLorents}\n \\pi=\\tilde{\\pi}\\tilde{\\pi}^*\n\\end{equation}\nof two commuting representations $\\tilde{\\pi}$ and $\\tilde{\\pi}^*$ on $M_c$.\n\\end{claim}\nIn order to prove this claim, we will decompose the generator $F^\\alpha_\\beta $ of the representation $\\pi$ into a sum\n\\begin{equation}\\label{DecGenLorents}\n F^\\alpha_\\beta=\\frac{1}{2}\\mathcal{F}^\\alpha_\\beta +\\frac{1}{2}\\bar{\\mathcal{F}}^\\alpha_\\beta ,\n\\end{equation}\nwhere $\\mathcal{F}^\\alpha_\\beta$ is the complex electromagnetic tensor, defined below, and $\\bar{\\mathcal{F}}^\\alpha_\\beta$ is its complex conjugate.\n\nAn electromagnetic field can be defined by the \\textit{Faraday vector} $\\mathbf{F}=\\mathbf{E}+ic\\mathbf{B}$. Note that since $i$ is a pseudo-scalar and $\\mathbf{B}$\nis a pseudo-vector, the expression $i\\mathbf{B}$ is a vector which is independent of\nthe chosen orientation of the space. The Faraday vector is used\nto describe the Lorentz invariant field constants, see for example \\cite{Landau}.\nIn \\cite{FD}, Friedman and Danziger introduced a \\textit{complex electromagnetic tensor} $\\mathcal{F}^\\alpha_\\beta$ for the\ndescription of an electromagnetic field, similar to the one introduced by Silberstein in \\cite{Silberstein1} and\n\\cite{Silberstein2}. Complexified electromagnetic fields were also studied by A. 
Gersten \\cite{Gerst} and played an important role in obtaining explicit solutions in \\cite{FS} and \\cite{F04} for the motion of a charge in a constant electromagnetic field.\n\nLet $F^\\alpha_\\beta (\\mathbf{E},\\mathbf{B})$ be the usual electromagnetic tensor. The\ncomplex electromagnetic tensor $\\mathcal{F}^\\alpha_\\beta(\\mathbf{E},\\mathbf{B})$ is defined by $\\mathcal{F}^\\alpha_\\beta(\\mathbf{E},\\mathbf{B})=F^\\alpha_\\beta (\\mathbf{F},-i\\mathbf{F})$\nor\n\\begin{equation}\\label{complex tensor}\n\\mathcal{F}^\\alpha_\\beta=\\left(\n \\begin{array}{cccc}\n 0 & F_1 & F_2& F_3 \\\\\n F_1 & 0 & -iF_3 & iF_2 \\\\\n F_2& iF_3 & 0 & -iF_1 \\\\\n F_3& -iF_2 & iF_1 & 0 \\\\\n \\end{array}\n \\right)=\\sum_{j=1}^3 (\\rho^j)^\\alpha_\\beta F_j\\,,\n\\end{equation}\nwhere $(\\rho^j)^\\alpha_\\beta$ are the Majorana-Oppenheimer matrices (see \\cite{Dvoeglasov})\n \\[\n(\\rho^1)^\\alpha_\\beta=\\left(\n\\begin{array}{cccc}\n0 & 1 & 0 & 0 \\\\\n1 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & -i \\\\\n0 & 0 & i & 0 \\\\\n\\end{array}\\right),\\;\n(\\rho^2)^\\alpha_\\beta=\\left(\n\\begin{array}{cccc}\n0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & i \\\\\n1 & 0 & 0 & 0 \\\\\n0 & -i & 0 & 0 \\\\\n\\end{array}\\right),\\]\n\n\\begin{equation}\\label{rho3}\n (\\rho^3)^\\alpha_\\beta=\\left(\n\\begin{array}{cccc}\n0 & 0 & 0 & 1 \\\\\n0 & 0 & -i & 0 \\\\\n0 & i & 0 & 0 \\\\\n1 & 0 & 0 & 0 \\\\\n\\end{array}\\right).\n\\end{equation}\n\nNote that the electromagnetic tensor $F^\\alpha_\\beta$ is the real part of the complex tensor $\\mathcal{F}^\\alpha_\\beta$. Equation (\\ref{DecGenLorents}) holds now if we define the complex conjugate tensor to be $\\bar{\\mathcal{F}}^\\alpha_\\beta=\\sum_{j=1}^3 (\\bar{\\rho}^j)^\\alpha_\\beta \\bar{F}_j$.\n\nThe Majorana-Oppenheimer matrices can be derived from the connection of the 4-potential and the Faraday vector, as follows. 
It is known that if $A_\\mu$ is the 4-potential of the electromagnetic field, then $E_j=F_{0j}=\\partial_jA_0-\\partial_0A_j= A_{0,j}-A_{j,0}$, where $\\partial_\\mu=\\frac{\\partial}{\\partial x^\\mu}$, and $B^j=(\\nabla\\times\\mathbf{A})^j$, where $\\nabla\\times$ is the curl, applied to the vector potential $\\mathbf{A}$. Thus, the $j$-th component of the Faraday vector is connected to the potential as\n\\begin{equation}\\label{FaradayAnd4potent}\n F^j= ( {\\rho}^j)^{\\alpha\\beta}\\partial_\\alpha A_\\beta=( {\\rho}^j)^{\\alpha\\beta} A_{\\beta,\\alpha} \\,,\n\\end{equation}\nshowing that the matrices $\\rho^j$ connect the 4-potential to the Faraday vector of the electromagnetic field.\nUsing (\\ref{complex tensor}), we can define a differential operator which connects the 4-potential with the complex electromagnetic tensor $\\mathcal{F}^\\alpha_\\beta$:\n\\begin{equation}\\label{AtoCalF}\n \\mathcal{F}^\\alpha_\\beta=\\sum_{j=1}^3 (\\rho^j)^\\alpha_\\beta ( {\\rho}^j)^{\\nu\\mu}\\partial_\\nu A_\\mu=\\nabla\\times A\\,,\n\\end{equation}\nwhere the \\textit{curl} $\\nabla\\times$ on $M_c$ is defined as\n\\begin{equation}\\label{curlDef}\n \\nabla\\times=\\sum_{j=1}^3 (\\rho^j)^\\alpha_\\beta ( {\\rho}^j)^{\\nu\\mu}\\partial_\\nu\\,.\n\\end{equation}\n\nWe will now derive a Lorentz group representation on $M_c$ based on the Majorana-Oppenheimer matrices. In section 6 we will show that this representation can be identified with the known spin-half representation of the Lorentz group. 
To define this representation, it is enough to define the representation on the generators of boosts and rotations.\n\n\\begin{defn}\\label{pi-tildeDefn}\nDefine a representation $\\tilde{\\pi}$ on $M_c$ by defining the generators $\\tilde{B}^j$ of boosts $\\mathfrak{B}^j$ in the direction of $x^j$ to be $\\frac{1}{2}\\rho^j$ and the generators $\\tilde{R}^j$ of rotations $\\mathfrak{R}^j$ about the direction $x^j$ to be $-\\frac{i}{2}\\rho^j$.\n\\end{defn}\n\\begin{claim}\\label{pi-tildeRepr}\nThe representation $\\tilde{\\pi}$ is a Lorentz group representation on $M_c$.\n\\end{claim}\nDirect calculation shows that the operators $\\tilde{R}^j=-\\frac{i}{2}\\rho^j$ obey the same commutator relations as the generators of the rotation group $SO(3)$:\n\\[[\\tilde{R}^j,\\tilde{R}^k]=\\varepsilon^{jk}_l\\tilde{R}^l,\\]\nwhere $\\varepsilon^{jk}_l$ is the Levi-Civita symbol.\nMoreover, the matrices $\\tilde{B}^j=\\frac{1}{2}\\rho^j$ obey the same commutator relations as the generators of boosts in the Lorentz group:\n \\[ [\\tilde{B}^j,\\tilde{B}^k]=-\\varepsilon^{jk}_l\\tilde{R}^l,\\;\\;\\;\\;\n [\\tilde{R}^j,\\tilde{B}^k]=\\varepsilon^{jk}_l\\tilde{B}^l.\\]\n This proves Claim \\ref{pi-tildeRepr}.\n\nThe complex conjugate matrices $\\frac{i}{2}\\bar{\\rho}^j$ and $\\frac{1}{2}\\bar{\\rho}^j$ satisfy similar relations. 
This leads to a second Lorentz group representation on $M_c$.\n\\begin{defn}\\label{pi-tildeDefnStar}\nDefine a representation $\\tilde{\\pi}^*$ on $M_c$ by defining the generators of boosts $\\mathfrak{B}^j$ in the direction of $x^j$ to be $\\frac{1}{2}\\bar{\\rho}^j$ and the generators of rotations $\\mathfrak{R}^j$ about the direction $x^j$ to be $\\frac{i}{2}\\bar{\\rho}^j$.\n\\end{defn}\nMoreover, the two sets of operators $\\{\\rho^j\\}$ and $\\{\\bar{\\rho}^k\\}$ commute:\n\\begin{equation}\\label{CommutinfMO}\n [\\rho^j,\\bar{\\rho}^k]=0.\n\\end{equation}\nThus, the representations $\\tilde{\\pi}$ and $\\tilde{\\pi}^*$ commute.\n Since \\[\\exp F^\\alpha_\\beta =\\exp(\\frac{1}{2}\\mathcal{F}^\\alpha_\\beta +\\frac{1}{2}\\bar{\\mathcal{F}}^\\alpha_\\beta)=\\exp(\\frac{1}{2}\\mathcal{F}^\\alpha_\\beta) \\exp(\\frac{1}{2}\\bar{\\mathcal{F}}^\\alpha_\\beta),\\]\n the representation $\\pi$, associated with the tensor $F^\\alpha_\\beta$, can be decomposed as a product (\\ref{DecLorents}) of the representations $\\tilde{\\pi}$ and $\\tilde{\\pi}^*$. This proves Claim \\ref{pi-decomposition}.\n\n\nIn addition to the above commutation relations, the Majorana-Oppenheimer matrices also satisfy \\textit{anti-commutation relations}, which are very helpful for the calculation of the exponentials of these matrices. The anti-commutation relations are\n\\begin{equation}\\label{CAR}\n \\{\\rho^j ,\\rho^k\\}=\\frac{1}{2}(\\rho^j\\rho^k+\\rho^k \\rho^j)=\\delta^{jk}I\\,,\n\\end{equation}\nwhich implies that $(\\rho^j)^2=I$. 
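The relations (\\ref{CommutinfMO}) and (\\ref{CAR}) are easily checked by computer algebra. The following NumPy sketch (an illustration only, using the matrices $\\rho^j$ as displayed in (\\ref{complex tensor}) and (\\ref{rho3})) verifies both, together with $(\\rho^j)^2=I$:

```python
# Verify the commutation relation [rho^j, conj(rho^k)] = 0 and the
# anti-commutation relation {rho^j, rho^k} = delta^{jk} I (with the
# factor 1/2 included in the anti-commutator, as in the text).
import numpy as np

# the Majorana-Oppenheimer matrices rho^1, rho^2, rho^3
rho = [
    np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1j], [0, 0, 1j, 0]]),
    np.array([[0, 0, 1, 0], [0, 0, 0, 1j], [1, 0, 0, 0], [0, -1j, 0, 0]]),
    np.array([[0, 0, 0, 1], [0, 0, -1j, 0], [0, 1j, 0, 0], [1, 0, 0, 0]]),
]
I4 = np.eye(4)

for j in range(3):
    for k in range(3):
        anti = (rho[j] @ rho[k] + rho[k] @ rho[j]) / 2
        assert np.allclose(anti, (j == k) * I4)   # {rho^j, rho^k} = delta^{jk} I
        comm = rho[j] @ rho[k].conj() - rho[k].conj() @ rho[j]
        assert np.allclose(comm, 0)               # [rho^j, conj(rho^k)] = 0
    assert np.allclose(rho[j] @ rho[j], I4)       # (rho^j)^2 = I
print("Majorana-Oppenheimer relations verified")
```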
Using these relations and the power series expansion of the exponential function, the exponentials of these matrices are\n\\[\\exp (\\rho^j\\frac{\\omega}{2})= \\cosh \\frac{\\omega}{2} I +\\rho^j\\sinh \\frac{\\omega}{2}\\]\n\\[\\exp (\\rho^j\\frac{i\\omega}{2})= \\cos \\frac{\\omega}{2} I +i\\rho^j\\sin \\frac{\\omega}{2}\\,.\\]\n\n\nWe can now define explicitly the representation under $\\tilde{\\pi}$ of the rotations and the boosts of the Lorentz group. For example, the rotation $\\mathfrak{R}^1$ about the $x^1$-axis (a rotation in the $x^2x^3$ plane) by an angle $\\omega$ is represented by\n\\begin{equation}\\label{RotTildePi}\n \\tilde{\\pi}(\\mathfrak{R}^1)(\\omega)=\\Upsilon_{23}(\\omega)=\\exp (-i\\rho^1\\frac{\\omega}{2})= \\cos \\frac{\\omega}{2} I -i\\rho^1\\sin\\frac{\\omega}{2}\n\\end{equation}\n\\[=\\left(\n \\begin{array}{cccc}\n \\cos \\omega\/2 & -i\\sin \\omega\/2 & 0 & 0 \\\\\n -i\\sin \\omega\/2 & \\cos \\omega\/2 & 0 & 0 \\\\\n 0 & 0 & \\cos \\omega\/2 & -\\sin \\omega\/2 \\\\\n 0&0& \\sin \\omega\/2 & \\cos \\omega\/2\\\\\n \\end{array}\n \\right)\\,.\\]\nThe representation $\\tilde{\\pi}^*$ of this rotation is given by $\\bar{\\Upsilon}_{23}(\\omega)$, the complex conjugate of the above matrix. In the $x^2x^3$ plane, both $\\tilde{\\pi}(\\mathfrak{R}^1)$ and $\\tilde{\\pi}^*(\\mathfrak{R}^1)$ define a rotation by an angle $\\omega\/2$, and their product $\\Upsilon_{23}(\\omega)\\bar{\\Upsilon}_{23}(\\omega)=\\Lambda_{23}(\\omega)$, defined by (\\ref{RepPiR1}), is a rotation by an angle $\\omega$ in this plane. However, in the plane $x^0x^1$, the rotation $\\Lambda_{23}(\\omega)$ under $\\pi$ is the identity, while both $\\Upsilon_{23}(\\omega)$ and $\\bar{\\Upsilon}_{23}(\\omega)$ define a ``complex rotation\" in the plane $\\{x^0+ix^1:\\;\\;x^0,x^1\\in \\mathbb{R}\\}$ by angles $-\\omega\/2$ and $\\omega\/2$, respectively, indicating that there is a rotation around the $x^1$-axis. 
This observation needs further understanding.\n\n\nSimilarly, the boost in the $x^1$-direction with parameter $\\omega$ is\n\\begin{equation}\\label{Boosttildepi}\n \\tilde{\\pi}(\\mathfrak{B}^1)(\\omega)=\\Upsilon_{01}(\\omega)=\\exp (\\rho^1\\frac{\\omega}{2})= \\cosh \\frac{\\omega}{2} I +\\rho^1\\sinh \\frac{\\omega}{2}\\end{equation}\n\\[=\\left(\n \\begin{array}{cccc}\n \\cosh \\omega\/2 & \\sinh \\omega\/2 & 0 & 0 \\\\\n \\sinh \\omega\/2 & \\cosh \\omega\/2 & 0 & 0 \\\\\n 0 & 0 & \\cosh \\omega\/2 &-i \\sinh \\omega\/2 \\\\\n 0&0& i\\sinh \\omega\/2 & \\cosh \\omega\/2\\\\\n \\end{array}\n \\right).\\]\n The representation $\\tilde{\\pi}^*$ of this boost is given by $\\bar{\\Upsilon}_{01}(\\omega)$, the complex conjugate of the above matrix. In the $x^0x^1$ plane, both $\\tilde{\\pi}(\\mathfrak{B}^1)$ and $\\tilde{\\pi}^*(\\mathfrak{B}^1)$ define boosts in the $x^1$ direction with rapidity $\\omega\/2$, and their product $\\Upsilon_{01}(\\omega)\\bar{\\Upsilon}_{01}(\\omega)=\\Lambda_{01}(\\omega)$, defined by (\\ref{RepPiB1}), is a boost with rapidity $\\omega$ in the $x^1$ direction. However, in the $x^2x^3$ plane, the boost $\\Lambda_{01}(\\omega)$ under $\\pi$ is the identity, while both $\\Upsilon_{01}(\\omega)$ and $\\bar{\\Upsilon}_{01}(\\omega)$ define ``complex boosts\" in the plane $\\{x^2+ix^3:\\;\\;x^2,x^3\\in \\mathbb{R}\\}$, with rapidity $-\\omega\/2$ and $\\omega\/2$, respectively, and the product of these boosts is the identity.\n\nNote that the operators $\\Upsilon_{\\alpha\\beta}$ are isometries on the space $M_c$. The representation $\\tilde{\\pi}$, defined by the operators $\\Upsilon_{\\alpha\\beta}$, is a spin 1\/2 representation of the Lorentz group on $M_c$. Similarly, the representation $\\tilde{\\pi}^*$ is defined by the operators $\\bar{\\Upsilon}_{\\alpha\\beta}$, which are obtained by taking the complex conjugate of $\\Upsilon_{\\alpha\\beta}$. 
This is also a spin 1\/2 representation of the Lorentz group on $M_c$, and it commutes with the representation $\\tilde{\\pi}$. In section \\ref{SecSpinors}, we will show that the representation $\\tilde{\\pi}$ can be identified with the usual representation of the Lorentz group on the \\textit{spinors}.\n\nLet $F_\\alpha ^\\beta$ be the electromagnetic tensor of some electromagnetic field. This tensor is covariant under the representation $\\pi$ of the Lorentz group. What is the covariance of the complex tensor $\\mathcal{F}_\\alpha ^\\beta$ associated with $F_\\alpha ^\\beta$? We will show the following.\n\\begin{claim}\\label{covarftildeF}\nThe covariance of the electromagnetic tensor $F_\\alpha ^\\beta$ under the representation $\\pi$ is equivalent to the covariance of the corresponding tensor $\\mathcal{F}_\\alpha ^\\beta$ under the representation $\\tilde\\pi$.\n\\end{claim}\nLet $\\Lambda$ be any element of the representation $\\pi$ of the Lorentz group. By (\\ref{DecLorents}), there is an element $\\Upsilon$ of $\\tilde{\\pi}$ such that $\\Lambda=\\Upsilon\\bar\\Upsilon$. Since $\\mathcal{F}$ and $\\Upsilon$ lie in the span of $I$ and the matrices $\\rho^j$, they commute with any $\\bar{\\rho}^k$ and thus with $\\mathcal{\\bar F}$ and $\\bar\\Upsilon$. Thus,\n\\[ F' = \\Lambda F\\Lambda^{-1}= \\frac{1}{2} (\\Upsilon\\bar\\Upsilon\\mathcal{F}\\bar\\Upsilon^{-1}\\Upsilon^{-1} + \\Upsilon\\bar\\Upsilon\\mathcal{\\bar F}\\bar\\Upsilon^{-1}\\Upsilon^{-1})\\]\\[ = \\frac{1}{2} (\\Upsilon\\mathcal{F}\\Upsilon^{-1} + \\bar\\Upsilon\\mathcal{\\bar F}\\bar\\Upsilon^{-1}) = \\frac{1}{2} (\\mathcal{F'}+\\mathcal{\\bar F'}).\\]\nThis proves Claim \\ref{covarftildeF}.\n\n\n\n\\section{Definition of the prepotential}\n\nDenote by $P$ a point in Minkowski space $M$ at which we want to define\nthe prepotential. We will call $P$ the observer's point and denote its coordinates by $x=x^\\mu$. 
Denote by $\\check{x}(\\tau)=\\check{x}^\\mu (\\tau): \\mathbb{R}\\rightarrow M$ the worldline of the charge $q$ generating our electromagnetic field, as a function of its proper time. Let the point $Q=\\check{x}(\\tau(x))$ be the unique point of intersection of the past light cone at $P$ with the\nworldline $\\check{x}(\\tau)$ of the charge. The time on the worldline of the charge corresponding to this intersection is uniquely determined by the point $x$. It is called the \\textit{retarded time} and will be denoted by $\\tau(x)$. Note that radiation emitted at $Q$ at the retarded time will reach $P$ at time $t=x^0\/c$ corresponding to this point. Our prepotential $S(x)$ will depend on the relative position\n\\begin{equation}\\label{PositionDef}\n r(x)=x-\\check{x}(\\tau(x))\n\\end{equation}\nof the charge at retarded time $\\tau(x)$, see Figure \\ref{chargePotent}.\n\\begin{figure}[h!]\n \\centering\n\\scalebox{0.4}{\\includegraphics{pointChargePoten.pdf}}\n \\caption{The four-vectors associated with an observer and a moving charge.}\\label{chargePotent}\n\\end{figure}\n\n The relative position $r$ is a null vector. Hence,\n \\begin{equation}\\label{NullDecomp}\n (r^0+r^3)(r^0-r^3)=(r^1+ir^2)(r^1-ir^2).\n\\end{equation}\nIn this decomposition, we have split the four coordinates of $r$ into two groups: the first group contains the time component $r^0$ and the third spatial component $r^3$. 
The second group contains the first two spatial components $r^1$ and $r^2$.\n We can now define a dimensionless complex constant $\\zeta$, as follows.\n \\begin{defn}\\label{zetaDef}\n For any null-vector $r$ in $M_c$, we define a dimensionless complex scalar $\\zeta$ by\n \\begin{equation}\\label{zetaDefEq}\n \\zeta(r)=\\frac{r^0+r^3}{r^1-ir^2}=\\frac{r^1+ir^2}{r^0-r^3}.\n \\end{equation}\n For any $x\\in M$ and any world-line $\\check{x}(\\tau)$, we define $\\zeta (x)= \\zeta(r(x))$, where the relative position $r(x)$ is defined by (\\ref{PositionDef}).\n \\end{defn}\n\n The scalar $\\zeta(r)$ can be identified as a simple function of the stereographic projection of the direction of the vector part $\\mathbf{r}$ of $r$ from the celestial sphere to the Argand plane, as follows. Since $r$ is a null vector, it can be decomposed as $(|\\mathbf{r}|,\\mathbf{r})$ and is defined uniquely by its vector part $\\mathbf{r}$. Define $\\hat{\\mathbf{r}}=\\mathbf{r}\/|\\mathbf{r}|$, a unit vector in the direction of $\\mathbf{r}$, and a null vector $\\hat{r}=(1,\\hat{\\mathbf{r}})$. Since $\\zeta (r)$ is the same for any multiple of $r$, we have $\\zeta (r)=\\zeta (\\hat{r})$. Then, as shown in \\cite{PeR}, p.~11, $\\zeta (r)=\\zeta (\\hat{r})=e^{i\\varphi} \\cot\\frac{\\theta}{2}$, where $\\varphi,\\theta$ are the angles of standard spherical coordinates of $\\hat{r}$. This shows that $\\zeta (r)$ depends only on the direction from the observer to the charge at the retarded time.\n\nThere is no way to associate to every null vector $r$ a non-trivial scalar which is invariant under the usual representation $\\pi$. 
However, for the representation $\\tilde{\\pi}$ we have:\n\\begin{claim}\\label{zetaInvar}\nFor any null vector $r$, the complex constant $\\zeta(r)$, defined by (\\ref{zetaDefEq}), is invariant under the representation $\\tilde{\\pi}$ on $M_c$.\n\\end{claim}\nThe boost $\\Upsilon _{01}$ of $\\tilde{\\pi}$ in the $x^1$-direction is defined by (\\ref{Boosttildepi}). Applying this boost to the vector $r$ yields\n\\[\\Upsilon _{01} (r)=\\cosh(\\omega\/2)(r^0,r^1,r^2,r^3)+\\sinh(\\omega\/2)(r^1,r^0,-ir^3,ir^2).\\]\nUsing (\\ref{zetaDefEq}), we have\n\\[\\Upsilon _{01} (\\zeta)= \\zeta(\\Upsilon _{01} (r))= \\frac\n{\\cosh(\\omega\/2)(r^0+r^3)+\\sinh(\\omega\/2)(r^1+ir^2)}{\\cosh(\\omega\/2)(r^1-ir^2)+\\sinh(\\omega\/2)(r^0-r^3)}=\\]\n\\[\\frac{r^0+r^3}{r^1-ir^2}\\cdot\n\\frac{1+\\tanh(\\omega\/2)(r^1+ir^2)\/(r^0+r^3)}{1+\\tanh(\\omega\/2)(r^0-r^3)\/(r^1-ir^2)}\n=\\frac{r^0+r^3}{r^1-ir^2}\\cdot\n\\frac{1+\\tanh(\\omega\/2)\/\\bar{\\zeta}}{1+\\tanh(\\omega\/2)\/\\bar{\\zeta}}\n=\\zeta,\\]\nshowing that $\\zeta$ is invariant under the boost $\\Upsilon _{01}$. Similarly, one may show the\ninvariance of $\\zeta$ under any boost and rotation of the representation $\\tilde{\\pi}$. This proves Claim \\ref{zetaInvar}.\n\n\\begin{defn}\\label{SdefDef} We define the \\textit{prepotential} $S(x)$ at point $x$ of an electromagnetic field generated by a moving charge $q$ as\n\\begin{equation}\\label{Sdef}\n S(x)=\\frac{q}{2}\\ln \\zeta (x)=\\frac{q}{2}\\ln \\zeta(r (x)),\n\\end{equation}\nwhere $\\zeta (r)$ is defined by (\\ref{zetaDefEq}) and the relative position of the charge $r(x)$ is defined by (\\ref{PositionDef}).\n\\end{defn}\nFrom Claim \\ref{zetaInvar}, it follows that the prepotential is invariant under the representation $\\tilde{\\pi}$ on $M_c$.\nConsider an electromagnetic field generated by any charge distribution $\\sigma (x)$. 
The scalar prepotential for this field is defined by\n \\begin{equation}\\label{scalPotGenera}\n S(x) =\\frac{1}{2}\\int\\limits_{K^-(x)}\\ln \\zeta(r)\\,\\sigma(x+r)d^3r,\n\\end{equation}\nwhere $K^-(x)$ denotes the backward light-cone at $x$, and the integration is with respect to the 3D volume on this cone.\n\n\nThe geometric meaning of this prepotential can be understood if we introduce \\textit{relativistic bipolar coordinates} $(\\varrho^0,\\varrho^1,\\theta,\\varphi)$:\n\\[x^0=\\varrho^0\\cosh\\theta,\\;\\;x^1=\\varrho^1\\cos \\varphi,\\;\\;x^2=\\varrho^1\\sin \\varphi,\\;\\;x^3=\\varrho^0\\sinh \\theta.\\]\nThe angle $\\varphi$ in these coordinates is the same angle $\\varphi$ occurring in polar and spherical coordinates. However, the angle $\\theta$ differs from the angle $\\theta$ of spherical coordinates. These coordinates fit our model better, as they give a simple form for the prepotential. In these coordinates, the light-cone has the simple form $\\varrho^0=\\varrho^1=\\varrho$, a hyperplane of the coordinate space. For any null-vector $r$ on the light-cone, the invariant constant $\\zeta(r)$, defined by (\\ref{zetaDefEq}), in relativistic bipolar coordinates is\n\\[\\zeta(r)=\\frac{r^0+r^3}{r^1-ir^2}=\\frac{\\varrho e^{\\theta}}{\\varrho e^{-i\\varphi}}=e^{\\theta+i\\varphi}.\\]\nThe prepotential of a moving charge is\n\\[ S(x)=\\frac{q}{2}(\\theta+i\\varphi),\\] which is a multiple of the complex angle $\\theta+i\\varphi$ of the relative position $r(x)$ of the charge.\n\n\\section{ The prepotential and the Faraday vector of the field}\n\n\\subsection{Connection between the prepotential and the Faraday vector of a field}\n\nThe 4-potential will be defined as a conjugate of the gradient of the prepotential $S(x)$.\nSince the prepotential $S(x)$ is invariant under the representation $\\tilde{\\pi}$, we want our\nconjugation to commute with this representation. By Definition \\ref{pi-tildeDefn} of the representation $\\tilde{\\pi}$, the conjugation needs to commute with any operator $\\rho^j$. 
Using (\\ref{CommutinfMO}), we can choose any operator $\\bar{\\rho}^k$ to define the conjugation. Since in (\\ref{NullDecomp}), and in the definition of $\\zeta$, we paired the third spatial component with the time component, here too we choose the $\\tilde{\\pi}$ \\textit{covariant conjugation} $\\mathcal{C}$ on $M_c$ to be given by multiplication by $\\bar{\\rho}^3$:\n\\begin{equation}\\label{ConjOperDef}\n \\mathcal{C}=\n (\\bar{\\rho}^3)^\\alpha_\\beta=\\left(\n\\begin{array}{cccc}\n0 & 0 & 0 & 1 \\\\\n0 & 0 & i & 0 \\\\\n0 & -i & 0 & 0 \\\\\n1 & 0 & 0 & 0 \\\\\n\\end{array}\\right)\\,.\n\\end{equation}\n\nThe following diagram shows the derivation of the complex tensor $\\mathcal{F}$ of the field from its prepotential $S(x)$:\n \\[\\begin{CD} S @>\\nabla>> \\nabla S @>\\mathcal{C}>>A= \\mathcal{C}(\\nabla S) @>\\nabla\\times>> \\mathcal{F}.\n \\end{CD}\\]\nBy differentiating the prepotential and taking its conjugate, we obtain the 4-potential $A$. Then we use (\\ref{AtoCalF}) to get from $A$ to the complex tensor $\\mathcal{F}$ of the field. The components $F_j$ of the Faraday vector can be found by (\\ref{FaradayAnd4potent}) as\n\\begin{equation}\\label{FfromS}\n F_j={\\partial}_\\nu (\\rho^j)^{\\mu\\nu}(\\bar{\\rho}^3)_\\mu^\\lambda{\\partial}_\\lambda S\\,.\n\\end{equation}\n\\begin{claim}\\label{ClCovStoF}\nThe Faraday vector $F_j$ defined by (\\ref{FfromS}) is invariant under the representation $\\tilde{\\pi}$.\n\\end{claim}\nThis claim follows from the following facts. Claim \\ref{zetaInvar} implies that $S(x)$ is $\\tilde{\\pi}$ invariant. By the definition of the conjugation, $(\\bar{\\rho}^3)_\\mu^\\lambda$ is $\\tilde{\\pi}$ invariant. The remaining operators together form a $\\tilde{\\pi}$ invariant operator.\n\nThe last expression can be simplified if, for each $j=1,2,3$, we introduce a new tensor $(\\alpha^j)^{\\nu\\lambda}=(\\rho^j)^{\\mu\\nu}(\\bar{\\rho}^3)_\\mu^\\lambda$. 
Then we can rewrite (\\ref{FfromS}) as\n\\begin{equation}\\label{FfromSAlf}\n F_j=(\\alpha^j)^{\\nu\\lambda}{\\partial}_\\nu {\\partial}_\\lambda S\\,.\n\\end{equation}\nThe new tensors are\n\\[(\\alpha^1)^{\\nu\\lambda}=\\left(\n\\begin{array}{cccc}\n0 & 0 & -i & 0 \\\\\n0 & 0 & 0 & -1 \\\\\n-i & 0 & 0 & 0 \\\\\n0 & -1& 0 & 0 \\\\\n\\end{array}\\right),\\;\\;(\\alpha^2)^{\\nu\\lambda}=\\left(\n\\begin{array}{cccc}\n0 & i & 0 & 0 \\\\\ni & 0 & 0 & 0\\\\\n0 & 0 & 0 & -1 \\\\\n0 & 0& -1& 0 \\\\\n\\end{array}\\right)\\]\n\\[(\\alpha^3)^{\\nu\\lambda}=\\left(\n\\begin{array}{cccc}\n1 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 \\\\\n0 & 0 & 1 & 0 \\\\\n0 & 0& 0 & -1 \\\\\n\\end{array}\\right)\\,.\\]\nThus, (\\ref{FfromSAlf}) defines the following explicit formulas for the connection between the prepotential $S(x)$ and the Faraday vector.\n\\begin{equation}\n \\begin{array}{l}\n F_1=-2(S_{,13}+iS_{,02}) \\\\\n F_2=-2(S_{,23}-iS_{,01}) \\\\\n F_3= S_{,00}+S_{,11}+S_{,22}-S_{,33}\\\\\n \\end{array}\\label{Fas derS}\n\\end{equation}\n\nBased on these formulas, we obtain the following gauges for the prepotential.\n\\begin{claim}\\label{claimGauge}\nThe prepotential $S(x)$ is not determined uniquely by the electromagnetic field. Let $g(x)$ be a function on $M_c$ from the following list, or a combination of such functions:\n\\begin{enumerate}\n \\item $g(x)=g(x^1,x^2)$ is harmonic, that is, $g_{,11}+g_{,22}=0$,\n \\item $g(x)=g(x^0,x^3)$ satisfies the wave equation, that is, $g_{,00}-g_{,33}=0$,\n \\item $g(x)$ is of degree at most $1$ in the coordinates $x^\\mu$.\n\\end{enumerate}\nThen the transformation of the prepotential\n\\begin{equation}\\label{gagueTrans}\n S'(x)=S(x)+g(x)\n\\end{equation}\ndoes not affect the field. A function $g(x)$ of this type will be called the \\textit{prepotential gauge}.\n\\end{claim}\nTo prove the claim, it is enough to check that the field of the gauge $g(x)$ defined by (\\ref{Fas derS}) is zero. In case $(i)$, we have $g_{,0}=g_{,3}=0$. 
Since each term of the components $F_1,F_2$ of the field in\n(\\ref{Fas derS}) involves differentiation by $x^0$ or by $x^3$, these components are zero. The third component $F_3$ is zero since $g_{,00}=g_{,33}=0$ and $g_{,11}+g_{,22}=0$. Similarly, for case $(ii)$, we have $g_{,1}=g_{,2}=0$. Since each term of the components $F_1,F_2$ of the field in\n(\\ref{Fas derS}) involves differentiation by $x^1$ or by $x^2$, these components are zero. The third component $F_3$ is zero since $g_{,11}=g_{,22}=0$ and $g_{,00}-g_{,33}=0$. Since each component of the field involves second derivatives, it will vanish for any function $g(x)$ of degree at most $1$ in $x^\\mu$. This completes the proof of the claim.\n\n\nWe now demonstrate the use of formulas (\\ref{Fas derS}) and Claim \\ref{claimGauge} in calculating the field of a rest charge at the origin from its prepotential.\nConsider a rest charge $q$ at the origin. The worldline of this charge is $\\check{x}(\\tau)=(\\tau,0,0,0)$. Thus, the relative position of the charge is\n $r=(|x|,x^1,x^2,x^3)$, where $|x|=\\sqrt{(x^1)^2+(x^2)^2+(x^3)^2}$.\nFrom (\\ref{zetaDefEq}) and (\\ref{Sdef}),\n\\[S'(x)=\\frac{q}{2}\\ln\\frac{|x|+x^3}{x^1-ix^2}=\\frac{q}{2}\\ln(|x|+x^3)-\\frac{q}{2}\\ln(x^1-ix^2)\\,.\\]\nThe function $\\ln(x^1-ix^2)$ is harmonic, and, by Claim \\ref{claimGauge}$(i)$, is a prepotential gauge.\n\\begin{exam} For a rest charge $q$ at the origin, we define the prepotential to be\n\\begin{equation}\\label{ResTPrepot}\nS(x)=\\frac{q}{2}\\ln(|x|+x^3)\\,.\n\\end{equation}\n\\end{exam}\nNote that in this case the prepotential is well defined for all $x$ off the half-line $x^1=x^2=0$, $x^3\\le 0$, and could be chosen to be real valued.\n\\begin{claim}\\label{ClResPoint}\nThe prepotential $S(x)$ defined by (\\ref{ResTPrepot}) defines the electromagnetic field of the rest charge $q$ at the origin to be $\\mathbf{E}=\\frac{q\\,\\mathbf{r}}{|\\mathbf{r}|^3}$ and $\\mathbf{B}=0$. 
Moreover, $S(x)$ satisfies the wave equation $\\square S(x)=0$.\n\\end{claim}\nSince $S(x)$ is independent of $x^0$, we have $S_{,0}=0$. Direct calculation shows that $\\frac{\\partial}{\\partial x^j}|x|=\\frac{x^j}{|x|}$. Thus, we get\n\\[S_{,1}=\\frac{q}{2|x|}\\frac{x^1}{|x|+x^3},\\;\\;\\;S_{,2}=\\frac{q}{2|x|}\\frac{x^2}{|x|+x^3}\\,,\\]\nand \\[ S_{,3}=\\frac{q}{2}\\frac{1}{|x|+x^3}\\left(\\frac{x^3}{|x|}+1\\right)=\\frac{q}{2\\vert x\\vert}\\,.\\]\nThe 4-potential $A$ in this case is\n\\[A=\\mathcal{C}\\nabla S= \\frac{q}{2\\vert x\\vert}(1, -i\\frac{x^2}{|x|+x^3},i\\frac{x^1}{|x|+x^3},0)\\,.\\]\nThis potential behaves similarly to the usual 4-potential: when $|x|$ approaches infinity, it behaves like $\\frac{1}{|x|}$.\n\nBy use of (\\ref{Fas derS}), we get\n\\[ F_1=-2(S_{,13}+iS_{,02})=-2\\frac{\\partial}{\\partial x^1}S_{,3}=-2\\frac{\\partial}{\\partial x^1}\n\\frac{q}{2|x|}=\\frac{q x^1}{|x|^3}\\,,\\]\n\\[ F_2=-2(S_{,23}-iS_{,01})=-2\\frac{\\partial}{\\partial x^2}S_{,3}=-2\\frac{\\partial}{\\partial x^2}\n\\frac{q}{2|x|}=\\frac{q x^2}{|x|^3}\\,.\\]\n\nTo calculate $F_3$ using (\\ref{Fas derS}), we first need to calculate $S_{,\\mu\\mu}$.\n\\[S_{,11}=\\frac{\\partial}{\\partial x^1}\\frac{q}{2}\n\\frac{x^1}{|x|(|x|+x^3)}=\\frac{q}{2}\\frac{|x|^2+x^3|x|-2(x^1)^2-x^3(x^1)^2\/|x|}{|x|^2(|x|+x^3)^2},\\]\n\\[S_{,22}=\\frac{\\partial}{\\partial x^2}\\frac{q}{2}\n\\frac{x^2}{|x|(|x|+x^3)}=\\frac{q}{2}\\frac{|x|^2+x^3|x|-2(x^2)^2-x^3(x^2)^2\/|x|}{|x|^2(|x|+x^3)^2}.\\]\nThis implies that\n\\[S_{,11}+S_{,22}=\\frac{q x^3}{2|x|^3}\\,.\\]\nSince $S_{,00}=0$ and $S_{,33}=\\frac{-q x^3}{2|x|^3}$, equation (\\ref{Fas derS}) yields\n\\[F_3=\\frac{q x^3}{|x|^3}\\]\nand $\\square S(x)=S_{,00}-(S_{,11}+S_{,22})-S_{,33}=0$. This proves the claim.\n\nAs a second example, we now find the prepotential of an infinitely long charged rod. We start with a prepotential\nof a finite rod on the $x^3$ axis, from $x^3=-L$ to $x^3=L$, with charge density $\\sigma$. 
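Before treating the rod, the rest-charge computation above, and the gauge freedom of Claim \ref{claimGauge}, can be cross-checked numerically. The sketch below (step size and sample point arbitrary) differentiates the prepotential by central finite differences, compares the result with the Coulomb field $q x^j/|x|^3$, and verifies that the gauged prepotential $S'$ gives the same field:

```python
import cmath
import math

q = 1.0

def S(x1, x2, x3):
    """Prepotential S = (q/2) ln(|x| + x^3) of a rest charge at the origin."""
    r = math.sqrt(x1 * x1 + x2 * x2 + x3 * x3)
    return 0.5 * q * math.log(r + x3)

def S_gauged(x1, x2, x3):
    """S' = (q/2) ln((|x| + x^3)/(x^1 - i x^2)); differs from S by the
    harmonic gauge term -(q/2) ln(x^1 - i x^2)."""
    r = math.sqrt(x1 * x1 + x2 * x2 + x3 * x3)
    return 0.5 * q * cmath.log((r + x3) / (x1 - 1j * x2))

def d2(f, p, i, j, h=1e-4):
    """Central second derivative of f with respect to x^i, x^j (i, j in 1..3)."""
    def g(si, sj):
        y = list(p)
        y[i - 1] += si
        y[j - 1] += sj
        return f(*y)
    if i == j:
        return (g(h, 0.0) - 2 * g(0.0, 0.0) + g(-h, 0.0)) / h ** 2
    return (g(h, h) - g(h, -h) - g(-h, h) + g(-h, -h)) / (4 * h ** 2)

def field(f, p):
    """F_j from (Fas derS) for a static prepotential (x^0 derivatives vanish)."""
    return (-2 * d2(f, p, 1, 3), -2 * d2(f, p, 2, 3),
            d2(f, p, 1, 1) + d2(f, p, 2, 2) - d2(f, p, 3, 3))

p = (0.6, -0.9, 0.4)
r3 = (p[0] ** 2 + p[1] ** 2 + p[2] ** 2) ** 1.5
F = field(S, p)
Fg = field(S_gauged, p)
for j in range(3):
    assert abs(F[j] - q * p[j] / r3) < 1e-5   # Coulomb field q x^j / |x|^3
    assert abs(Fg[j] - F[j]) < 1e-5           # the gauge term does not affect F
```

The sample point is chosen away from the half-line $x^1=x^2=0$, $x^3\le 0$, where the logarithms are singular.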
Denote by $l$ the position of the charge on the rod, so that $-L\\le l\\le L$.\n\nExcess ($y > 0$) Fe complicates the situation. In fact, in the VCA there is no qualitative difference between Fe excess and deficiency, as the excess (deficient) charge is in both cases\nlocated around the Fe site. Experimentally, however, it is known that the excess Fe ions are located in the Te planes, and\nthis has qualitatively different effects, the most important being that the effective Fe magnetic moment is enhanced and not reduced,\ndue to the formation of local moments on the excess Fe in the Te planes. This cannot be taken into account by the VCA approach of treating the doping in our LDA calculations. Furthermore, it is important to point out that mean-field LDA DFT calculations cannot reproduce the renormalization of the $B_{1g}$ frequency and linewidth observed at low temperature in the Fe-rich systems, which are discussed in the next paragraph.\n\n\\subsubsection{Excess Fe-induced magnetic fluctuations}\nWe have seen in Sec.~\\ref{Exp_Fe_undoped} that in the Fe$_{1.09}$Te crystal, we are not able to observe clearly the effect of the magnetic transition on the phonons. A small softening has been seen (see Fig.~\\ref{Fig3b}-d), but no narrowing of the line shape.\nIn the Se-substituted system, as shown in Fig.~\\ref{Fig4}, excess iron, in addition to a decrease in $T_c$, induces large effects on both frequency and lineshape of the phonons. The strongest effect occurs close to 50\\% Se content, with a large softening and broadening of the $B_{1g}$ phonon below $T\\sim$35 K, well above $T_c$. 
To our knowledge, no phase transition has been reported in this temperature range for this doping level, but the occurrence of short-range magnetic fluctuations has been observed.~\\cite{Khasanov09}\nIn the undoped case with low excess iron concentration, it has been shown that a low energy gap in the spin-wave excitation spectrum opens when entering the magnetic state.~\\cite{Stock} As the excess iron concentration increases, this gap is filled with low-energy spin fluctuations.~\\cite{Stock}\n\nIn both doped and undoped cases, one effect of excess iron is, therefore, to induce low energy magnetic fluctuations in a temperature range in which we also observe a relative broadening of the $B_{1g}$ phonon, \\textit{i.~e.}, a decrease of its lifetime.\nThis reinforces the point we made at the end of Sec.~\\ref{disc}, indicating that the additional damping for the $B_{1g}$ mode may actually originate from its coupling to magnetic excitations.\n\n\\section{Conclusions}\nWe have carried out a systematic study of the lattice dynamics in the Fe$_{1+y}$Te$_{1-x}$Se$_x$ system, focusing in particular on the $c$-axis polarized Fe $B_{1g}$ mode. In parent compounds, unlike in other systems such as BaFe$_2$As$_2$ or LiFeAs, a nonconventional broadening of this mode is observed as temperature decreases, together with a clear signature of the SDW gap opening.\nAs Se is substituted for Te, the temperature dependence of this mode smoothly evolves toward a more regular situation, with the $B_{1g}$ phonon showing conventional anharmonic decay. 
A good agreement between the observed phonon frequencies and a first-principles calculation including the effects of magnetic ordering is found.\nThe temperature dependence of the phonon linewidth, as well as the effects induced by the Fe nonstoichiometry in these compounds, reveals a peculiar coupling of this mode to magnetic fluctuations in the Fe$_{1+y}$Te$_{1-x}$Se$_x$ system and cannot, to date, be satisfactorily reproduced within state-of-the-art computational approaches.\n\n\\section{Acknowledgement}\n\nWe thank A. Schulz for technical support and A.C. Walters for useful suggestions. This work has been supported by the European project SOPRANO (Grant No. PITN-GA-2008-214040), by the French National Research Agency (Grant No. ANR-09-Blanc-0211 SupraTetrafer), and by the UK Engineering and Physical Sciences Research Council (MJR, EP\/C511794).\n\n\\section{Review of Geometrically Non-Higgsable Seven-branes}\n\\label{sec:review}\n\nWe will study seven-branes using their geometric description in\nF-theory. There the axiodilaton $\\tau = C_0 + i\\, e^{-\\phi}$ of the type IIB\ntheory is considered to be the complex structure modulus of an\nauxiliary elliptic curve which is fibered over the compact extra-dimensional space\n$B$. Such a structure is determined by a Calabi-Yau fourfold $X$\nwhich is elliptically fibered $\\pi:X\\rightarrow B$. An elliptic fibration with section\nis birationally equivalent \\cite{Na88} to a Weierstrass model\n\\eqn{\ny^2 = x^3 + f\\, x + g\n}[]\nwhere $f$ and $g$ are sections of $K_B^{-4}$ and $K_B^{-6}$, respectively, with\n$K_B$ the anticanonical bundle on $B$. 
The fibers $\\pi^{-1}(p)$ are smooth\nelliptic curves for any point $p$ which is not in the discriminant\nlocus\n\\eqn{\n\\Delta = 4\\,f^3 + 27\\, g^2 = 0.\n}[]\nOn the other hand if $p$ is a generic point in the codimension one locus \n$\\Delta = 0$, then $\\pi^{-1}(p)$ is one of the singular fibers classified by\nKodaira \\cite{MR0187255,MR0205280,MR0228019}.\n\nSeven-branes are located along $\\Delta = 0$. Their precise nature\ndepends on the structure of $f$ and $g$ and therefore also $\\Delta$, which may be an irreducible\neffective divisor or comprised of components\n\\begin{equation}\n\\Delta = \\prod_i \\Delta_i.\n\\end{equation}\nTaking a loop around $\\Delta$ or any component $\\Delta_i=0$ induces an\n$SL(2,\\mathbb{Z})$ monodromy on the associated type IIB supergravity\ntheory. The action on $\\tau$ is\n\\begin{equation}\n\\tau \\mapsto \\frac{a\\tau + b}{c\\tau + d}, \\qquad \\qquad M = \\begin{pmatrix}a & b \\\\ c & d\\end{pmatrix}\\in SL(2,\\mathbb{Z}).\n\\end{equation}\nSeven-brane structure is determined by the Weierstrass model according\nto the order of vanishing of $f$, $g$, and $\\Delta$ along the\nseven-brane. Often in this paper some $\\Delta_i=z^N$, and therefore we\nwill denote the associated orders of vanishing as $ord_z(f,g,\\Delta)$\nas a three-tuple or in terms of the individual orders $ord_z(f)$,\n$ord_z(g)$, and $ord_z(\\Delta)$. From this data the singularity type,\n$SL(2,\\mathbb{Z})$ monodromy, and non-abelian symmetry algebra (up to outer\nmonodromy) can be determined; see Table \\ref{tab:fibs}. This is the\ngeometric symmetry group, henceforth symmetry group or gauge group, along the\nseven-brane in the absence of symmetry-breaking $G$-flux. The\nstructure of $\\Delta$ is determined by $f, g$ and there is a moduli space of such choices that corresponds to\nthe complex structure of $X$. 
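Concretely, $\Delta = 4f^3 + 27g^2$ is (up to sign and normalization) the discriminant of the cubic $x^3 + fx + g$, so $\Delta = 0$ is exactly the condition for the Weierstrass cubic to have a repeated root, i.e. for the fiber to degenerate. A minimal numerical illustration (the sample values of $f$ and $g$ are arbitrary):

```python
# (x - a)^2 (x + 2a) = x^3 - 3 a^2 x + 2 a^3 has a repeated root, so the
# corresponding Weierstrass fiber is singular and Delta must vanish.
a = 1.7
f, g = -3 * a ** 2, 2 * a ** 3
assert abs(4 * f ** 3 + 27 * g ** 2) < 1e-6

# Generic (f, g) give Delta != 0: the fiber over such a point is smooth.
f, g = 1.0, 1.0
assert 4 * f ** 3 + 27 * g ** 2 == 31.0
```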
Gauge sectors along seven-branes can be\nengineered by tuning $f$ and $g$ relative to their generic structures.\n\n\\begin{table}[thb]\n \\centering\n\\scalebox{.8}{\n \\begin{tabular}{|c c c c c c c c|}\n\\hline\nType &\n$ord_z(f)$ &\n$ord_z(g)$ &\n$ord_z(\\Delta)$ &\nsingularity & nonabelian symmetry algebra & monodromy & order \\\\ \\hline \\hline \n$I_0$&$\\geq $ 0 & $\\geq $ 0 & 0 & none & none & $\\begin{pmatrix} 1 & 0 \\\\ 0 & 1\\end{pmatrix}$ & $1$\\\\ \n$I_n$ &0 & 0 & $n \\geq 2$ & $A_{n-1}$ & ${\\mathfrak{su}}(n)$ or ${\\mathfrak{sp}}(\\lfloor\nn\/2\\rfloor)$& $\\begin{pmatrix} 1 & n \\\\ 0 & 1\\end{pmatrix}$ & $\\infty$\\\\\n$II$ & $\\geq 1$ & 1 & 2 & none & none & $\\begin{pmatrix} 1 & 1 \\\\ -1 & 0\\end{pmatrix}$ & $6$\\\\\n$III$ &1 & $\\geq 2$ &3 & $A_1$ & ${\\mathfrak{su}}(2)$ & $\\begin{pmatrix} 0 & 1 \\\\ -1 & 0\\end{pmatrix}$ & $4$\\\\\n$IV$ & $\\geq 2$ & 2 & 4 & $A_2$ & ${\\mathfrak{su}}(3)$ or ${\\mathfrak{su}}(2)$& $\\begin{pmatrix} 0 & 1 \\\\ -1 & -1\\end{pmatrix}$ & $3$\\\\\n$I_0^*$&\n$\\geq 2$ & $\\geq 3$ & $6$ &$D_{4}$ & ${\\mathfrak{so}}(8)$ or ${\\mathfrak{so}}(7)$ or ${\\mathfrak{g}}_2$ & $\\begin{pmatrix} -1 & 0 \\\\ 0 & -1\\end{pmatrix}$ &$2$\\\\\n$I_n^*$&\n2 & 3 & $n \\geq 7$ & $D_{n -2}$ & ${\\mathfrak{so}}(2n-4)$ or ${\\mathfrak{so}}(2n -5)$ & $\\begin{pmatrix} -1 & -n \\\\ 0 & -1\\end{pmatrix}$ & $\\infty$\\\\\n$IV^*$& $\\geq 3$ & 4 & 8 & $E_6$ & ${\\mathfrak{e}}_6$ or ${\\mathfrak{f}}_4$& $\\begin{pmatrix} -1 & -1 \\\\ 1 & 0\\end{pmatrix}$ & $3$\\\\\n$III^*$&3 & $\\geq 5$ & 9 & $E_7$ & ${\\mathfrak{e}}_7$ & $\\begin{pmatrix} 0 & -1 \\\\ 1 & 0\\end{pmatrix}$ & $4$\\\\\n$II^*$& $\\geq 4$ & 5 & 10 & $E_8$ & ${\\mathfrak{e}}_8$ & $\\begin{pmatrix} 0 & -1 \\\\ 1 & 1\\end{pmatrix}$ & $6$\\\\\n\\hline\nnon-min &$\\geq 4$ & $\\geq6$ & $\\geq12$ & \\multicolumn{4}{c|}{does not\nappear for supersymmetric vacua} \\\\\n\\hline\n \\end{tabular}}\n \\caption{The Kodaira fibers, along with their orders of vanishing in a Weierstrass model, 
singularity type, possible nonabelian\nsymmetry algebras, $SL(2,\\mathbb{Z})$ monodromy, and monodromy order.}\n \\label{tab:fibs}\n\\end{table}\n\n\n\\vspace{1cm} Let us now turn to geometrically non-Higgsable\nseven-branes. Physically, this means that there are no directions in the\nsupersymmetric moduli space that break the gauge group on the\nseven-branes by splitting them up. Mathematically, a geometrically\nnon-Higgsable seven-brane along $z=0$ exists when\n\\begin{equation}\n\\Delta = z^N \\, \\tilde \\Delta\n\\end{equation}\nfor any choice of $f$ and $g$, i.e. for a generic point in the complex\nstructure moduli space of $X$, henceforth $\\cs{X}$. For $N>2$ the seven-brane\ncarries a non-trivial gauge group $G$. It is often possible that by tuning $f,g$ to a\nsubvariety $L\\subset \\cs{X}$ the discriminant $\\Delta$ is proportional\nto $z^{M>N}$ and the gauge group along the seven-brane at $z=0$ is\nenhanced to $G'\\supset G$. There may be many such loci $L_i$ in $\\cs{X}$. The statement that a\nnon-Higgsable seven-brane exists for generic complex structure moduli is the statement\nthat it exists for any complex structure in $\\cs{X}\\setminus\n\\{\\bigcup_i L_i\\}$, which is the bulk of $\\cs{X}$ since each $L_i$\nhas non-trivial codimension. Often the discriminant is of the form\n\\begin{equation}\n\\Delta = \\tilde \\Delta \\,\\,\\prod_i z_i^{N_i}\n\\end{equation}\nfor generic complex structure, in which case there is a non-Higgsable\nseven-brane along each locus $z_i=0$. They may intersect, giving rise\nto product group gauge sectors with jointly charged matter that arise\nfrom clusters of intersecting seven-branes. These are referred to as\nnon-Higgsable clusters \\cite{Morrison:2012np,Morrison:2012js}. \n\nThe\npossible gauge groups that may arise along a non-Higgsable\nseven-brane are $E_8, E_7, E_6, F_4, SO(8), SO(7), G_2, SU(3),$ and\n$SU(2)$ and there are five two-factor products with jointly charged matter\nthat may arise. 
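As a cross-check on Table \ref{tab:fibs}, the orders of the listed monodromy matrices can be verified by direct multiplication in $SL(2,\mathbb{Z})$; a short sketch:

```python
# Verify the monodromy orders quoted in Table tab:fibs for the
# finite-order Kodaira fibers.
def mul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def order(M, cap=12):
    """Smallest k <= cap with M^k = identity, else None (infinite order)."""
    I, P = [[1, 0], [0, 1]], M
    for k in range(1, cap + 1):
        if P == I:
            return k
        P = mul(P, M)
    return None

monodromies = {
    "II":   [[1, 1], [-1, 0]],
    "III":  [[0, 1], [-1, 0]],
    "IV":   [[0, 1], [-1, -1]],
    "I0*":  [[-1, 0], [0, -1]],
    "IV*":  [[-1, -1], [1, 0]],
    "III*": [[0, -1], [1, 0]],
    "II*":  [[0, -1], [1, 1]],
}
expected = {"II": 6, "III": 4, "IV": 3, "I0*": 2, "IV*": 3, "III*": 4, "II*": 6}
orders = {name: order(M) for name, M in monodromies.items()}
assert orders == expected
```

The $I_n$ and $I_n^*$ matrices, by contrast, have infinite order, consistent with the last column of the table.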
In particular, note that $SU(5)$ and $SO(10)$, which arise\nfrom $I_5$ and $I_1^*$ fibers, may never be non-Higgsable; more generally,\nthis is true of seven-branes with fibers $I_n$ and $I_{n>0}^*$. This is easy\nto see in the $I_n$ case. Such a model has $ord_z(f,g,\\Delta)=(0,0,n)$,\nand $f\\mapsto (1+\\epsilon) f$ for $\\epsilon \\in \\mathbb{C}^*$ is a symmetry breaking complex\nstructure deformation that always exists, by virtue of the model existing in the first place.\nSimilar arguments exist for $I_{n>0}^*$ fibers.\n\n\nThe name ``non-Higgsable clusters'' is a suitable name in\nsix-dimensional compactifications of F-theory, since there the\nassociated six-dimensional gauge sectors do not have any symmetry\nbreaking flat directions in the supersymmetric moduli space, as\ndetermined by $\\cs{X}$ and also the low energy degrees of\nfreedom. However, in four dimensional compactifications there are\nother effects such as T-branes \\cite{Cecotti:2010bp} that may break\nthe gauge group, so that \\emph{geometrically non-Higgsable} is a more\naccurate name. Furthermore, if $\\Delta \\sim z^2$ for a generic $p\\in\n\\cs{X}$ then $G=\\emptyset$ even though there is a divisor $z=0$ in $B$\nthat is singular, and sometimes a codimension two locus $C$ may be\nsingular for generic moduli even if it is not contained in a\nnon-Higgsable seven-brane. Both have been referred to as\n``non-Higgsable structure'' \\cite{Halverson:2015jua} even though there\nis no associated gauge group. The general feature is the existence of\nsingular structure for generic complex structure moduli, and aside\nfrom these two caveats there is a gauge group on a seven-brane that\ncannot be spontaneously broken by a complex structure deformation.\n\nThough not named as such at the time, the first F-theory\ncompactifications with non-Higgsable seven-branes appeared in\n\\cite{Morrison:1996pp}. 
These examples have six non-compact dimensions\nand four compact dimensions $B_2$ with $B_2=\\mathbb{F}_n$, and there is a\nnon-Higgsable seven-brane on the $-n$ curve in $\\mathbb{F}_n$ for $n>2$. The\ncomplete set of non-Higgsable clusters and seven-branes that may arise\nin six-dimensional compactifications were classified in\n\\cite{Morrison:2012np} and the examples with toric $B_2$ were\nclassified in \\cite{Morrison:2012js}. In the latter, all but $16$ of\nthe $61,539$ examples exhibit non-Higgsable clusters or seven-branes,\nand the $16$ that do not are weak Fano varieties, i.e. varieties\nsatisfying $-K\\cdot C\\geq 0$ for any holomorphic curve $C$. In all cases in\nsix dimensions the reason for geometric non-Higgsability is immediately evident\nin the low energy gauge theory: either there is no matter or there is not enough\nmatter to allow for Higgsing consistent with supersymmetry, due to having a half hypermultiplet\nin a pseudoreal representation.\n\nIn examples with four non-compact dimensions the extra dimensions of\nspace are a complex threefold $B_3$ and there are additional\nnon-Higgsable clusters and structures that do not appear in six\ndimensions, including for example loops \\cite{Morrison:2014lca} and\nthe gauge group $SU(3)\\times SU(2)$\n\\cite{GrassiHalversonShanesonTaylor2014}. In the latter case the\nmatter content matches the non-abelian structure of the standard\nmodel. A classification \\cite{Halverson:2015jua} of $B_3$ that are\n$\\mathbb{P}^1$-bundles over certain toric surfaces has non-Higgsable clusters for\n$98.3\\%$ of the roughly $100,000$ examples with over $500$ examples with an $SU(3)\\times SU(2)$\nsector. A broader exploration of toric $B_3$ using Monte Carlo techniques \\cite{Taylor:2015ppa}\nhas non-Higgsable structure for all $B_3$ after an appropriate ``thermalization,'' and approximately $76\\%$\nof the examples have a non-Higgsable $SU(3)\\times SU(2)$ sector. 
Non-Higgsable clusters also appear in\nthe F-theory geometry with the largest number of currently known flux vacua \\cite{Taylor:2015xtz},\nwhere vacuum counts were estimated using techniques of Ashok, Denef, and Douglas \\cite{Ashok:2003gk,Denef2004}. \nIt is not clear whether cosmological evolution prefers the special vacua associated with a typical $B_3$, perhaps\ncharacterized by \\cite{Taylor2015}, or the typical vacua associated with a special base $B$ that gives the largest number\nof flux vacua, which may be $B_{max}$ of \\cite{Taylor:2015xtz}. Needless to say, this is a fascinating question moving forward.\n\nWhat is becoming clear is that non-Higgsable clusters and structure\nplay a very important role in the landscape of F-theory\ncompactifications. It has become common to say that non-Higgsable\nclusters are doubly generic. The first is a strong sense: for fixed\n$B$, having a non-Higgsable cluster means that there is a non-trivial seven-brane\nfor generic points in $\\cs{X}$. The second is in a weaker, but still compelling, sense:\nthere is growing evidence that generic extra dimensional spaces $B$ give rise to\nnon-Higgsable clusters or structure. One line of evidence is in the large datasets cited above.\nAnother is the argument of \\cite{Halverson:2015jua}: if there is a curve $C \\subset B$ with $-K\\cdot C<0$\nthen $-K$ contains $C$ and $C$ sits inside the discriminant locus, giving non-Higgsable structure on $C$.\nSuch $B$ are ones that are not weak Fano, and it is expected that a generic algebraic surface or threefold\nis of this type. In particular, there are only $105$ topologically distinct Fano threefolds.\n\n\n\\vspace{1cm}\nIn this work we will study the strongly coupled physics associated to fibers that\ncan give rise to geometrically non-Higgsable seven-branes. As such, the analyses of this\npaper include, but are not limited to, F-theory compactifications with non-Higgsable\nseven-branes. 
These fibers are\n\\begin{equation}\nII, III, IV, I_0^*, IV^*, III^*, II^*,\n\\end{equation}\nand any seven-brane with one of these has an associated $SL(2,\\mathbb{Z})$ monodromy\nmatrix $M$ that is of finite order, i.e. $M^k=1$ for some $k$. $M_{I_0^*}=-1$, which acts trivially on\n$\\tau$, indicating that this configuration is uncharged, in agreement with the fact that it consists of\n$4$ $D7$-branes on top of an $O7$-plane from the type IIB point of view. The rest act non-trivially on\n$\\tau$ but the theory comes back to itself after taking $k$ loops around the seven-brane; the seven-brane\nmonodromies are of finite order. Though our analyses apply more broadly,\nthey are of particular interest given the prevalence of non-Higgsable clusters in the landscape.\n\n\n\\section{Axiodilaton Profiles and Strong Coupling}\n\\label{sec:axiodilaton}\n\n\nThe primary difference between F-theory and the weakly coupled type\nIIB string is that the axiodilaton $\\tau = C_0 + i e^{-\\phi}$ varies\nover $B$ in F-theory, and therefore so does the string coupling $g_s =\ne^{\\langle \\phi \\rangle}$. The behavior of $\\tau$ near seven-branes affects gauge theories on seven-branes,\nas well as three-brane gauge theories or string scattering in the vicinity of seven-branes.\nIn his seminal works \\cite{MR0187255,MR0205280,MR0228019} Kodaira computed $\\tau$ locally near\nseven-branes in elliptic surfaces.\n\nIn this section we will study axiodilaton profiles via their relation to the Klein $j$-invariant\nof an elliptic curve for elliptic fibrations of arbitrary dimension.\nWe will normalize the $j$-invariant in a standard way by $J:= j \/1728$, and in the\ncase of a Weierstrass model we have\n\\begin{equation}\nJ = \\frac{4f^3}{\\Delta} \\qquad \\text{where} \\qquad \\Delta = 4f^3 + 27g^2.\n\\end{equation}\nIn this formulation the $J$-invariant depends on base coordinates\naccording to the sections $f$ and $g$ of the Weierstrass\nmodel. 
However, $J$ also depends on the ratio of periods of the\nelliptic curve $\\tau = \\frac{\\omega_1}{\\omega_2}$ where $\\tau$ is the\nvalue of the axiodilaton field at each point in $B$. Thus, if $z$ is a\nbase coordinate we compute $J=J(z)$ directly from the Weierstrass\nmodel, but this can also be thought of as $J = J(\\tau(z))$. By\ninverting the $J$-function, we will determine the axiodilaton profile\nand study it in the vicinity of geometrically non-Higgsable seven-branes.\nWe will also demonstrate that F-theory compactifications generically exhibit\nregions with $O(1)$ string coupling and recover classic results from the perturbative type IIB theory.\n\n\\begin{table}[thb]\n \\centering\n \\begin{tabular}{|ccc|} \\hline\n Fiber & $J$ & $J|_{z=0}$ \\\\ \\hline \\hline\n $II$ & $\\frac{z}{A+z}$& $0$ \\\\ \\hline\n $III$ & $\\frac{1}{1+Az}$& $1$ \\\\ \\hline\n $IV$ & $\\frac{z^2}{A+z^2}$& $0$ \\\\ \\hline\n $I_0^*$ & $\\frac{1}{1+A}$& $\\frac{1}{1+A}$ \\\\ \\hline\n $IV^*$ & $\\frac{z}{A+z}$& $0$ \\\\ \\hline\n $III^*$ & $\\frac{1}{1+Az}$& $1$ \\\\ \\hline\n $II^*$ & $\\frac{z^2}{A+z^2}$& $0$ \\\\ \\hline\n \\end{tabular}\n \\caption{The $J$-invariant for seven-branes associated to geometrically non-Higgsable clusters, expressed in a way that is particularly useful for a local analysis near the seven-brane. Here $f=z^n\\, F$ and $g=z^m\\, G$ with $A= 27G^2\/4F^3$.}\n \\label{tab:Jw}\n\\end{table}\n\n\nThough there are seven Kodaira fiber types that may give rise to geometrically\nnon-Higgsable seven-branes, $\\{II,III,IV,I_0^*,IV^*,III^*,II^*\\}$, some\nhave the same $J$-invariant and $\\tau$ in the vicinity\nof the brane. In each case the Weierstrass model takes the form\n\\begin{equation}\nf = z^n \\, F, \\qquad g=z^m\\, G, \\qquad \\Delta = z^{min(3n,2m)}\\, \\tilde \\Delta,\n\\end{equation}\nand the $J$-invariant takes a simple form. 
Near a generic region of the\nseven-brane on $z=0$, both $F$ and $G$\nare non-zero, and therefore so is $A \\equiv 27G^2 \/ 4F^3$. The possibilities\nare computed in Table \\ref{tab:Jw} and the redundancies are \\cite{MR0187255,MR0205280,MR0228019}\n\\begin{equation}\nJ_{II} = J_{IV^*}, \\qquad J_{III} = J_{III^*}, \\qquad J_{IV} = J_{II^*}.\n\\end{equation}\nThis result may seem at odds with the monodromy order for these\nKodaira fibers displayed in Table \\ref{tab:fibs}, since the type $II$\nand $II^*$ fibers have order $6$ whereas the type $IV$ and $IV^*$\nfibers have order $3$. The resolution is that, though the monodromy order\nassociated with type $II$ and $II^*$ fibers is $6$, $M_{II}^3 =\nM_{II^*}^3=-I$, where $I$ is the identity matrix, so that the type $II$,\n$II^*$, $IV$, and $IV^*$ fibers all induce an order $3$ action on $\\tau$.\n\n\n\nThere are some special values for $\\tau(J)$ that we will see arise in inverting\n$J$,\n\\begin{equation}\n\\tau(0)=e^{\\pi i\/3}, \\qquad \\tau(1) = i,\n\\end{equation}\nup to an $SL(2,\\mathbb{Z})$ transformation. These values correspond to $g_s = \\frac{2}{\\sqrt{3}}$\nand $g_s=1$, and it is important physically that these cannot be lowered\nby an $SL(2,\\mathbb{Z})$ transformation. 
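Both statements are easy to check numerically. The following sketch evaluates $g_s = 1/\operatorname{Im}\tau$ at the two special points and scans their images under $SL(2,\mathbb{Z})$ elements with small entries, consistent with the bound \eqref{eq:cantgoweak}:

```python
import cmath

def g_s(tau):
    """String coupling g_s = 1 / Im(tau), since Im(tau) = e^{-phi}."""
    return 1.0 / tau.imag

def act(M, tau):
    """Moebius action of M = (a, b, c, d) in SL(2, Z) on tau."""
    a, b, c, d = M
    return (a * tau + b) / (c * tau + d)

tau0, tau1 = cmath.exp(1j * cmath.pi / 3), 1j
assert abs(g_s(tau0) - 2 / 3 ** 0.5) < 1e-12
assert abs(g_s(tau1) - 1.0) < 1e-12

# Brute-force scan over SL(2, Z) elements with entries in [-3, 3]: no image
# of either special point has smaller string coupling.
R = range(-3, 4)
sl2z = [(a, b, c, d) for a in R for b in R for c in R for d in R
        if a * d - b * c == 1]
assert min(g_s(act(M, tau0)) for M in sl2z) >= 2 / 3 ** 0.5 - 1e-9
assert min(g_s(act(M, tau1)) for M in sl2z) >= 1.0 - 1e-9
```

The scan is of course only a spot check; the inequalities in \eqref{eq:cantgoweak} hold for all of $SL(2,\mathbb{Z})$.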
Mapping $\\tau\\mapsto \\tau':=\\frac{a\\tau + b}{c\\tau + d}$\nby an arbitrary $SL(2,\\mathbb{Z})$ transformation for $\\tau=e^{\\pi i\/3}$ and $\\tau =i$, respectively,\nwe have new string coupling constants\n\\begin{equation}\ng_s'=(c^2+cd+d^2)\\frac{2}{\\sqrt{3}}\\geq \\frac{2}{\\sqrt{3}}, \\qquad g_s'=(c^2+d^2)\\geq 1,\n\\label{eq:cantgoweak}\n\\end{equation}\nshowing that the string couplings\nwith these two values of $\\tau$ cannot be lowered by a global $SL(2,\\mathbb{Z})$ transformation.\n\nFor each case in Table \\ref{tab:Jw} we will invert $J$ to solve for\n$\\tau$.\n\n\\subsection{Inverting the $J$-function and Ramanujan's Alternative Bases}\n\nThere is a nineteenth century procedure for inverting the $J$-function that is due\nto Jacobi. Recall that the $j$-invariant satisfies\n\\begin{equation}\nj(q) = \\frac{1}{q} + 744 + 196884 \\,\\, q + \\dots \n\\end{equation}\nin terms of $q=e^{2\\pi i \\tau}$. Jacobi's result relates $j$ to $q$ via hypergeometric\nfunctions, which then allows for the computation of $\\tau$ by taking a logarithm.\nThe result is\n\\begin{equation}\n\\tau = i\\,\\,\\, \\frac{_2F_1(\\frac12,\\frac12;1;1-x)}{_2F_1(\\frac12,\\frac12;1;x)}, \\qquad J = \\frac{4\\, (1-x(1-x))^3}{27\\, x^2(1-x)^2}, \n\\label{eq:tauJJacobi}\n\\end{equation}\nin terms of the hypergeometric function $_2F_1(a,b,c;x)$. For $|x|<1$ it satisfies\n\\begin{equation}\n_2F_1(a,b;c;x) = \\sum_{n=0}^\\infty \\frac{(a)_n(b)_n}{(c)_n \\, n!} x^n,\n\\end{equation}\nwhere $(a)_n=a(a+1)(a+2)\\dots(a+n-1)$ for $n\\in \\mathbb{Z}^+$ is the\nPochhammer symbol. For a particular value of $J$, then, six values of\n$\\tau$ are obtained by solving the sextic in $x$, and these are\nrelated to one another by $SL(2,\\mathbb{Z})$ transformations.\n\nMuch progress was made in the theory of elliptic functions at the beginning of the\n$20^{\\text{th}}$ century by Ramanujan, who recorded his theorems in notebooks \\cite{MR0099904}\nthat were dense with results. 
In one, he claimed that there are similar inversion formulas\nwhere the base $q$ is not\n\\begin{equation}\nq = exp\\left({-\\pi \\frac{_2F_1(1\/2,1\/2,1,1-x)}{_2F_1(1\/2,1\/2,1,x)}}\\right)\n\\end{equation}\nas it was for Jacobi, but is instead one of\n\\begin{align}\nq &= exp\\left({-\\frac{2\\pi}{\\sqrt{3}} \\frac{_2F_1(1\/3,2\/3,1,1-x)}{_2F_1(1\/3,2\/3,1,x)}}\\right), \\nonumber \\vspace{2cm} \\\\ \nq &= exp\\left({-\\frac{2\\pi}{\\sqrt{2}} \\frac{_2F_1(1\/4,3\/4,1,1-x)}{_2F_1(1\/4,3\/4,1,x)}}\\right), \\nonumber \\\\\nq &= exp\\left({-2\\pi \\frac{_2F_1(1\/6,5\/6,1,1-x)}{_2F_1(1\/6,5\/6,1,x)}}\\right).\n\\label{eq:altbaseinit}\n\\end{align}\nThere has been significant progress \\cite{MR1311903,MR1117903,MR1071759,MR1010408,MR1237931,MR1243610,MR1825995,MR3107523,Cooper2009} in the study of Ramanujan's theories of elliptic functions\nto these alternative bases in recent years, including rigorous proofs of many of Ramanujan's results.\nPractically, these different theories give different ways to study $\\tau$.\n\nIn studying the relationship between $J$, $\\tau$, and Ramanujan's alternative theories, we will\nutilize the notation of Cooper \\cite{Cooper2009}. The alternative bases satisfy\n\\begin{equation}\nq_r := exp\\left(\\frac{-2\\pi}{\\sqrt{r}}\\frac{_2F_1(a_r, 1-a_r;1;1-x_r)}{_2F_1(a_r,1-a_r;1;x_r)}\\right)\n\\label{eq:altq}\n\\end{equation}\nwhere $a_1=\\frac16$, $a_2=\\frac14$, and $a_3=\\frac13$ for $r=1,2,3$ reproduce \\eqref{eq:altbaseinit},\nwhere the $J$ invariant satisfies\n\\begin{equation}\nJ = \\frac{1}{4\\,x_1(1-x_1)} = \\frac{(1+3x_2)^3}{27\\,x_2(1-x_2)^2}=\\frac{(1+8x_3)^3}{64\\,x_3(1-x_3)^3}.\n\\label{eq:JRam}\n\\end{equation}\nFor any value of $J$ one may then solve either Jacobi's sextic (\\ref{eq:tauJJacobi}) or the quadratic,\ncubic, or quartic in \\eqref{eq:JRam}. 
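As a numerical sanity check of Jacobi's inversion \eqref{eq:tauJJacobi}, one can compare $J$ computed from $x$ with the $q$-expansion $j(q)=1/q+744+196884\,q+\dots$ evaluated at $q=e^{2\pi i\tau(x)}$. A sketch (the sample value of $x$ is arbitrary; truncations are chosen for double precision):

```python
import cmath

def hyp2f1_half(x, terms=300):
    """Truncated series for 2F1(1/2, 1/2; 1; x), valid for |x| < 1."""
    s, t = 0.0, 1.0
    for n in range(terms):
        s += t
        t *= (0.5 + n) ** 2 / (1.0 + n) ** 2 * x   # ratio of successive terms
    return s

x = 0.3
tau = 1j * hyp2f1_half(1 - x) / hyp2f1_half(x)            # eq. (tauJJacobi)
J_x = 4 * (1 - x * (1 - x)) ** 3 / (27 * x ** 2 * (1 - x) ** 2)

# q-expansion of the j-invariant through order q^4.
q = cmath.exp(2j * cmath.pi * tau)
j_q = 1 / q + 744 + 196884 * q + 21493760 * q ** 2 \
      + 864299970 * q ** 3 + 20245856256 * q ** 4
assert abs(j_q / 1728 - J_x) < 1e-6 * J_x
```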
Other inversion methods also exist, but we will not use them.\n\n\\vspace{.5cm}\nWe utilize these methods to study elliptic fibrations, beginning with general statements\nand then proceeding to the study of examples near geometrically non-Higgsable seven-branes.\n\nConsider a Weierstrass model, where $J=4f^3\/\\Delta$. In a neighborhood of a seven-brane\non $z=0$ one can compute the $SL(2,\\mathbb{Z})$ monodromy $M$ of the seven-brane by taking a small\nloop around the seven-brane, which includes an action on $\\tau$. Recalling that\n\\begin{equation}\nM^k(\\tau) = \\tau\n\\end{equation} \nfor some $k$ for any geometrically non-Higgsable seven-brane, one would like to verify $k$\ndirectly by inverting the $J$-function. We will see that some of Ramanujan's theories give\nrise to $k$-element sets of $\\tau$ values that are permuted by the monodromy, where $k$ is the\norder of $x_i$ in \\eqref{eq:JRam}.\n\nSince each solution for $x_i$ determines a value of $\\tau$ directly\nvia $q$ in \\eqref{eq:altq}, let us solve for $x_i$ in terms of $J$.\nIn the quadratic case we have\n\\begin{equation}\nx_1 = \\frac{1\\pm \\sqrt{1-1\/J}}{2}\n\\label{eq:xinJquad}\n\\end{equation}\nwhich is degenerate for $J=1$ and is not well defined for $J=0$. Aside from $I_0^*$, these\nare $J$-invariants associated with geometrically non-Higgsable seven-branes. \n\nIn the cubic case we solve the equation\n\\begin{equation}\n27(J-1) \\, x_2^3-27(2J+1) \\, x_2^2+ 9(3J-1) \\, x_2-1 = 0\n\\end{equation}\nto obtain $x_2$. 
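The cubic can also be solved numerically. The sketch below finds its roots by Durand-Kerner iteration, using the cubic in the form $27(J-1)x_2^3-27(2J+1)x_2^2+9(3J-1)x_2-1=0$ obtained by clearing denominators in \eqref{eq:JRam}, and exhibits the cube-root scaling of the three roots about $x_2=-1/3$ for small $J$:

```python
import cmath

def prod(values):
    out = 1.0 + 0j
    for v in values:
        out *= v
    return out

def cubic_roots(c3, c2, c1, c0, iters=500):
    """Durand-Kerner iteration for the roots of c3 x^3 + c2 x^2 + c1 x + c0."""
    p = lambda x: ((c3 * x + c2) * x + c1) * x + c0
    roots = [(0.4 + 0.9j) ** k for k in range(3)]   # standard starting points
    for _ in range(iters):
        roots = [r - p(r) / (c3 * prod(r - s for s in roots if s is not r))
                 for r in roots]
    return roots

J = 1e-6
roots = cubic_roots(27 * (J - 1), -27 * (2 * J + 1), 9 * (3 * J - 1), -1.0)
scale = (2 * 2 ** (1 / 3) / 3) * J ** (1 / 3)   # leading displacement from -1/3
for r in roots:
    # each root sits a distance ~ scale from -1/3, with cube-root scaling in J
    assert abs(abs(r + 1 / 3) - scale) < 0.1 * scale
    # and each root reproduces J through J = (1+3x)^3 / (27 x (1-x)^2)
    assert abs((1 + 3 * r) ** 3 / (27 * r * (1 - r) ** 2) - J) < 1e-7
```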
\nRather than solving this cubic exactly, let us study the behavior of this\ncubic near $J=0$ and $J=1$, since this is the relevant structure near non-Higgsable seven-branes.\nExpanding around $J=0$ by taking $J=\\delta J_0 \\ll 1$, the three solutions for $x_2$ near $J=0$ are \n\\begin{equation}\nx_2 = -\\frac{1}{3}-\\frac{2\\sqrt[3]{2}\\,}{3} e^{2\\pi i n\/3} \\,\\delta J_0^{1\/3}+O(\\delta J_0^{2\/3}), \\qquad n\\in\\{0,1,2\\} \n\\label{eq:x2nearJis0}\n\\end{equation}\nand we see that the three roots are permuted by an order three monodromy upon taking a small\nloop around $J=0$. Near $J=1$ we take $J = 1+\\delta J_1$ with $\\delta J_1 \\ll 1$ and compute\n\\begin{equation}\nx_2 = \\frac{3}{\\delta J_1} + \\frac{16}{9} - \\frac{16\\, \\delta J_1}{81} + O(\\delta J_1^2), \\qquad\nx_2 = \\frac{1}{9} \\pm \\frac{8\\sqrt{\\delta J_1}}{27\\sqrt{3}} + \\frac{8 \\delta J_1}{81} + O(\\delta J_1^{3\/2}) \n\\label{eq:x2nearJis1}\n\\end{equation}\nfrom which we see that one of the solutions goes to infinity at $J=1$ (as is expected since the cubic\nreduces to a quadratic when $J=1$), whereas the other two are permuted by an order two monodromy around $J=1$.\n\nTherefore, we will study seven-brane theories with $J=1$ $(J=0)$ with Ramanujan's theory where\n$J$ is quadratic (cubic) in $x_1$ ($x_2$), solving for $\\tau$.\n\n\\subsection{Warmup: Reviewing Weakly Coupled Cases}\n\nBefore proceeding to the interesting seven-brane structures that may be non-Higgsable, all of which have\nfinite $J$-invariant, let us consider the seven-branes that may appear in the weakly coupled type IIB\ntheory, which have $J=\\infty$.\n\nLet us begin with the case of $n$ coincident $D7$-branes, which in F-theory language have a Kodaira\nfiber $I_n$. 
The Weierstrass model takes the form\n\\begin{equation}\nf = F, \\qquad g = G, \\qquad \\Delta = z^n \\tilde \\Delta,\n\\end{equation}\nwhere we have used our common notation of inserting $F$ in $f$ (and $G$ in $g$) even though $f,g\\sim z^0$\nin this case, and $z$ does not divide $\\tilde \\Delta$. Instead, $F$ and $G$ must be tuned to ensure the form\nof $\\Delta$. The $J$-invariant is\n\\begin{equation}\nJ(I_n) = \\frac{4F^3}{z^n\\tilde \\Delta}=:\\frac{C}{z^n},\n\\end{equation}\nand we can see that, indeed, $J=\\infty$ at $z=0$. Solving the theory where $J$ is a quadratic\nin $x_1$ gives\n\\begin{equation}\nx_1 = \\frac12 \\pm \\frac12 \\sqrt{1-\\frac{z^n}{C}}=: \\alpha_\\pm,\n\\end{equation}\nand then using equation (\\ref{eq:altq}) to compute $\\tau(\\alpha_-)$ and Taylor expanding we\nfind\n\\begin{equation}\n\\tau(\\alpha_-) = \\frac{n\\, \\log(z)}{2\\pi i} + \\cdots,\n\\end{equation}\nwhich induces a monodromy $\\tau \\mapsto \\tau+n$ upon encircling\n$z=0$. This is the expected monodromy of a stack of $n$ D7-branes.\nThe other solution $\\tau(\\alpha_+)$ is S-dual to\n$\\tau(\\alpha_-)$.\n\nNow consider $n\\geq 4$ D7-branes that are on top of an O7-plane, which in F-theory language corresponds to an $I_{n-4}^*$ fiber.\nIn this case the Weierstrass model is\n\\begin{equation}\nf = z^2 F, \\qquad g = z^3 G, \\qquad \\Delta = z^{2+n} \\tilde \\Delta,\n\\end{equation}\nwith $J$-invariant\n\\begin{equation}\nJ(I_{n-4}^*) = \\frac{4F^3}{z^{n-4}\\tilde \\Delta}=: \\frac{C}{z^{n-4}}.\n\\end{equation}\nThen again solving the theory where $J$ is a quadratic in $x_1$ we obtain\n(with similar $\\alpha_\\pm$)\n\\begin{equation}\n\\tau(\\alpha_-) = \\frac{(n-4)\\, \\log(z)}{2\\pi i} + \\cdots,\n\\end{equation}\nand there is a monodromy $\\tau \\mapsto \\tau + n-4$ upon encircling $z=0$.\nThis is the monodromy expected for $n$ D7-branes on top of an O7-plane, and \nfamously there is no monodromy in the case $n=4$, since the 4 D7-branes\ncancel the charge of the 
O7-plane.\n\n\n\n\\subsection{Axiodilaton Profiles Near Seven-Branes with $J=1$}\n\nFrom Tables \\ref{tab:fibs} and \\ref{tab:Jw} we see that the seven-branes with \n$J=1$ have fiber of Kodaira type $III$ and $III^*$, which carry gauge\nsymmetry $SU(2)$ and $E_7$, respectively. In both cases we have the same local\nstructure of the $J$-invariant\n\\begin{equation}\n J(III) = J(III^*) = \\frac{1}{1+Az}.\n\\end{equation}\nUsing equation (\\ref{eq:xinJquad}) we see \n\\begin{equation}\nx_1 = \\frac{1\\pm i \\sqrt{A z}}{2} =: \\alpha_\\pm,\n\\end{equation}\nwhich exhibits a $\\mathbb{Z}_2$ monodromy around $z=0$ that induces a monodromy on $\\tau$. \nUsing the relationship \\eqref{eq:altq} between $q=e^{2\\pi i \\tau}$ and $x_1$ we compute two\nvalues for $\\tau$\n\\begin{equation}\n\\tau_{\\pm} = i \\, \\frac{_2F_1(\\frac16,\\frac56,1,\\alpha_\\mp)}{_2F_1(\\frac16,\\frac56,1,\\alpha_\\pm)}.\n\\label{eq:tauexactJis1}\n\\end{equation}\nSince the $\\mathbb{Z}_2$ monodromy swaps $\\alpha_\\pm$, it also swaps $\\tau_\\pm$ and noting $\\tau_- = -1\/\\tau_+$\nwe see\n\\begin{equation}\n\\tau_\\pm \\mapsto \\tau_\\mp= - \\frac{1}{\\tau_\\pm}\n\\end{equation}\nunder the monodromy. This matches the behavior associated with the monodromy matrices\n\\begin{equation}\nM_{III}=\\begin{pmatrix} 0 & 1 \\\\ -1 & 0\\end{pmatrix}, \\qquad M_{III^*} = \\begin{pmatrix} 0 & -1 \\\\ 1 & 0\\end{pmatrix} \n\\end{equation}\nand we have seen the result by explicitly solving for the axiodilaton $\\tau$. Note that this monodromy\nis precisely an $S$-duality, which therefore also swaps the electrons and monopoles represented by $(p,q)$-strings\nattached to D3-brane probes.\n\nHow does the physics, in particular the string coupling, change upon moving away from the seven-brane? 
Expanding\nthe exact solution (\\ref{eq:tauexactJis1}) near $z=0$ we obtain\n\\begin{equation}\n\\tau_\\pm = i \\pm B\\sqrt{Az}-\\frac{i}{2} B^2 A z + O(z^{3\/2})\n\\end{equation}\nwhere \n\\begin{equation}\nB = \\frac{5\\, \\Gamma(\\frac{7}{12}) \\Gamma(\\frac{11}{12})}{36\\, \\Gamma(\\frac{13}{12}) \\Gamma(\\frac{17}{12})} \\simeq .2638\n\\end{equation}\nis a constant that depends on values of the Euler $\\Gamma$ function but $A = 27G^2\/4F^3$ depends on the location\nin the base. We see that $\\tau(z)$ satisfies $\\tau(0)=i$ and \n\\begin{equation}\ng_{s,\\pm} \\simeq \\frac{1}{1 \\pm B \\,Im(\\sqrt{Az}) - \\frac12 B^2 Re(Az)}. \n\\end{equation}\nClose to $z=0$ we have\n\\begin{equation}\ng_{s,\\pm} \\simeq \\frac{1}{1 \\pm B \\,Im(\\sqrt{Az})}, \n\\end{equation}\nand we see that the monodromy exchanges a more weakly coupled theory with a more strongly coupled theory, where the deviation from\n$g_s=1$ depends on the model-dependent factor $A$ and the separation $z$ from the brane. This was also implicit from $\\tau\\mapsto -1\/\\tau$.\n\nWe see directly that the string coupling $g_s\\simeq O(1)$ in the\nvicinity of the type $III$ and type $III^*$ seven-branes carrying\n$SU(2)$ and $E_7$ gauge symmetry, respectively, and that Ramanujan's\ntheory where $J$ is a quadratic in $x_1$ gives a set of $\\tau$ values\npermuted by the brane-sourced monodromy. From (\\ref{eq:cantgoweak}),\nan $SL(2,\\mathbb{Z})$ transformation cannot make the theory weakly coupled in\nthis region. \\vspace{.5cm}\n\nIn the\nprevious section we saw that this method is more natural than the\ntheory where $J$ is cubic in $x_2$ since the $SL(2,\\mathbb{Z})$ monodromy of\nthe type $III$ and $III^*$ Kodaira fibers is $\\mathbb{Z}_2$ rather than\n$\\mathbb{Z}_3$. 
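The constant $B$ and the leading expansion can be checked numerically. Writing $\tau(x)=i\,{}_2F_1(\frac16,\frac56,1,1-x)/{}_2F_1(\frac16,\frac56,1,x)$ and using $\alpha_\pm=\frac12\pm\frac{i}{2}\sqrt{Az}$, the expansion $\tau_\pm = i \pm B\sqrt{Az}+\dots$ is equivalent to $\tau(1/2)=i$ and $\tau'(1/2)=-2iB$; a pure-Python sketch (helper names are ours):

```python
import math

def hyp2f1(a, b, c, x, terms=600):
    """Truncated Gauss series for 2F1(a,b;c;x); adequate for |x| near 1/2."""
    s, t = 1.0 + 0j, 1.0 + 0j
    for n in range(terms):
        t *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
        s += t
    return s

def tau(x):
    """tau of the r=1 (a=1/6) theory: tau = i F(1-x)/F(x)."""
    F = lambda y: hyp2f1(1 / 6, 5 / 6, 1, y)
    return 1j * F(1 - x) / F(x)

# Gamma-function expression for B
B = 5 * math.gamma(7 / 12) * math.gamma(11 / 12) / (36 * math.gamma(13 / 12) * math.gamma(17 / 12))

# tau(1/2) = i, and the slope at x = 1/2 should equal -2iB
eps = 1e-5
slope = (tau(0.5 + eps) - tau(0.5 - eps)) / (2 * eps)
```

Comparing `slope` with `-2j * B` checks the hypergeometric expansion against the $\Gamma$-function expression independently.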
For completeness, though, inverting using the cubic theory gives\nthree solutions for $x_2$\n\\begin{align}\nx_2 = \\left\\{-\\frac{3}{A z}-\\frac{11}{9}+O\\left(z\\right),\\frac{1}{9}-\\frac{8 i \\sqrt{A^3} \\sqrt{z}}{27 \\sqrt{3}\n A}+O\\left(z\\right),\\frac{1}{9}+\\frac{8 i \\sqrt{A^3}\n \\sqrt{z}}{27 \\sqrt{3} A}+O\\left(z\\right)\\right\\},\n\\end{align}\nwhere we see that the first solution decouples near $z=0$ and one is left with the latter two\nsolutions, which we will call $\\beta_\\pm$. There is a $\\mathbb{Z}_2$ monodromy that exchanges $\\beta_\\pm$\nupon encircling $z=0$. Defining $\\tau_\\pm$ for the cubic theory as \n$\\tau_\\pm \\equiv \\tau(\\beta_\\pm)$ we have\n\\begin{equation}\n\\tau_\\pm = \\frac{i}{\\sqrt{2}} \\frac{_2F_1(\\frac14,\\frac34,1,1-\\beta_\\pm)}{_2F_1(\\frac14,\\frac34,1,\\beta_\\pm)},\n\\end{equation}\nand, though direct evaluation gives $\\tau_\\pm=i$ at $z=0$, the monodromy\n$\\tau \\mapsto -\\frac{1}{\\tau}$ is not immediate, instead requiring the use of some identities for the\nhypergeometric function for an exact expression.\nWe have verified via Taylor expansion that $\\tau_\\pm$ are swapped by a $\\mathbb{Z}_2$ monodromy, however.\nThus, the theory quadratic in $x_1$ seems better suited to study $III$ and $III^*$ fibers.\n\n\\subsection{Axiodilaton Profiles Near Seven-Branes with $J=0$}\n\nWe now turn to the study of seven-branes with Kodaira fibers satisfying $J=0$. \nFrom Tables \\ref{tab:fibs} and \\ref{tab:Jw} we see that the seven-branes with \n$J=0$ have general fiber of Kodaira type $II$, $II^*$, $IV$, and $IV^*$. The\nseven-branes of the first two types carry no geometric gauge symmetry and $E_8$,\nrespectively, whereas the latter two exhibit $SU(3)$ ($SU(2)$) and $E_6$ ($F_4$)\ngeometric gauge symmetry respectively, if the geometry does not (does) exhibit\nouter-monodromy that reduces the rank of the gauge group. 
Recall that\n\\begin{align}\n J(II) = J(IV^*) = \\frac{z}{A+z}, \\qquad J(II^*) = J(IV) = \\frac{z^2}{A+z^2},\n\\label{eq:localJforJis1}\n\\end{align}\nand that there is no discrepancy in this matching because, though the $SL(2,\\mathbb{Z})$ monodromies\nassociated with these fibers are either order $3$ or $6$, they are only order\n $3$ on $\\tau$.\n\nLet us utilize the theory where $J$ is a cubic in $x_2$ to study $\\tau$ near these\nseven-branes, beginning with the cases of a type $II$ and $IV^*$ fiber since these\nhave the same local structure of $J$-invariant. The expansions of the three solutions\nto the cubic in $x_2$ around $z=0$ are\n\\begin{equation}\nx_2 = -\\frac13 - \\frac{2(2A^2)^{1\/3}}{3A} e^{\\frac{2\\pi i n}{3}} \\, z^{1\/3} -\\frac{4}{3(2A^2)^{1\/3}} e^{\\frac{4\\pi i n}{3}} \\, z^{2\/3} - \\frac{1}{A} z + O(z^{4\/3}), \\qquad n\\in\\{0,1,2\\},\n\\end{equation}\nfrom which we see a $\\mathbb{Z}_3$ monodromy upon encircling the seven-brane\nat $z=0$. Letting $\\beta_n$ be the $x_2$ solution for each $n\\in\n\\{0,1,2\\}$, we have\n\\begin{equation}\n\\tau_n := \\tau(\\beta_n) = \\frac{i}{\\sqrt{2}} \\frac{_2F_1(\\frac14, \\frac34, 1,1-\\beta_n)}{_2F_1(\\frac14, \\frac34, 1,\\beta_n)}.\n\\end{equation}\nIf the $\\tau$ values are distinct then there is an order $3$ $SL(2,\\mathbb{Z})$ monodromy on $\\tau$, but determining that\nits precise action is $\\tau \\mapsto \\frac{\\tau-1}{\\tau}$\nwould require using some identities of hypergeometric functions, unlike in the case of the type $III$ and $III^*$ seven-branes\nwhere its action $\\tau\\mapsto -1\/\\tau$ was immediately clear. 
Instead we will verify the monodromy numerically via a power series in $z$.\nNumerically at leading order in $z$ and keeping four significant figures in $z^{1\/3}$, we have\n\\begin{align}\n\\tau_0 &\\simeq e^{\\frac{\\pi i}{3}} -\\frac{.3355i}{A^{2\/3}} \\, z^{1\/3} + O(z^{2\/3}) \\nonumber \\\\\n\\tau_1 &\\simeq e^{\\frac{\\pi i}{3}} +\\frac{.2906+.1678 i}{A^{2\/3}} \\, z^{1\/3} + O(z^{2\/3}) \\nonumber \\\\\n\\tau_2 &\\simeq e^{\\frac{\\pi i}{3}} -\\frac{.2906-.1678i }{A^{2\/3}} \\, z^{1\/3} + O(z^{2\/3}).\n\\label{eq:tau012forIIIVs}\n\\end{align}\nWe encircle $z=0$ by writing $z=re^{i\\theta}$ where $r\\in \\mathbb{R}$ is a\nsmall positive number and then varying $\\theta$. There is a choice of\ndirection: encircling by taking $\\theta$ from $0$ to $2\\pi$ we see\nthat $\\tau_0 \\mapsto \\tau_1$, $\\tau_1\\mapsto \\tau_2$,\n$\\tau_2\\mapsto\\tau_0$, whereas going in the other direction by taking\n$\\theta$ from $0$ to $-2\\pi$ gives the inverse action $\\tau_0 \\mapsto\n\\tau_2$, $\\tau_1\\mapsto \\tau_0$, $\\tau_2\\mapsto\\tau_1$. One can verify that\nthis latter action corresponds to the monodromy $\\tau \\mapsto \\frac{\\tau-1}{\\tau}$;\ni.e. $\\frac{\\tau_i - 1}{\\tau_i} = \\tau_{i-1}$ where $\\tau_{-1}:=\\tau_2$.\n\nNow consider the cases of seven-branes with a\ntype $II^*$ and type $IV$ fiber. From (\\ref{eq:localJforJis1}) we see that this differs \nfrom the analysis we just performed by the replacement $z\\mapsto z^2$, as can be verified\nby direct computation. 
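The order-three permutation of the $x_2$ roots upon encircling $z=0$ can also be checked directly, without evaluating the hypergeometric functions, by tracking the roots of the cleared form of \eqref{eq:JRam}, $(1+3x_2)^3 = 27 J\, x_2(1-x_2)^2$, along a small loop. A pure-Python sketch (Newton refinement; the values $A=1$, $|z|=0.01$ and the helper names are illustrative):

```python
import cmath

A, r, steps = 1.0, 0.01, 200          # illustrative values
J = lambda z: z / (A + z)             # local J-invariant of a type II / IV* fiber

def refine(J_val, seeds, iters=60):
    """Newton-refine roots of (1+3x)^3 - 27 J x (1-x)^2 = 0 from given seeds."""
    p  = lambda x: (1 + 3 * x)**3 - 27 * J_val * x * (1 - x)**2
    dp = lambda x: 9 * (1 + 3 * x)**2 - 27 * J_val * (1 - 4 * x + 3 * x**2)
    out = []
    for x in seeds:
        for _ in range(iters):
            x -= p(x) / dp(x)
        out.append(x)
    return out

# seeds from the leading small-z behaviour of the three solutions
w = cmath.exp(2j * cmath.pi / 3)
u = (r / A) ** (1 / 3)
c1 = 2 * (2 * A**2) ** (1 / 3) / (3 * A)
c2 = 4 / (3 * (2 * A**2) ** (1 / 3))
seeds = [-1 / 3 - c1 * w**n * u - c2 * w**(2 * n) * u**2 for n in range(3)]
start = refine(J(r), seeds)

# continue the roots along z = r e^{i theta}, theta: 0 -> 2 pi
roots = list(start)
for k in range(1, steps + 1):
    roots = refine(J(r * cmath.exp(2j * cmath.pi * k / steps)), roots)

# after one loop each root lands on a *different* member of the starting set
sigma = [min(range(3), key=lambda j: abs(roots[i] - start[j])) for i in range(3)]
```

A fixed-point-free permutation of three objects is necessarily a three-cycle, so checking that `sigma` has no fixed points confirms the order-three monodromy.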
The solutions to the cubic are\n\\begin{equation}\nx_2 = -\\frac13 - \\frac{2(2A^2)^{1\/3}}{3A} e^{\\frac{2\\pi i n}{3}} \\, z^{2\/3} -\\frac{4}{3(2A^2)^{1\/3}} e^{\\frac{4\\pi i n}{3}} \\, z^{4\/3} - \\frac{1}{A} z^2 + O(z^{8\/3}), \\qquad n\\in\\{0,1,2\\},\n\\end{equation}\nand, defining $\\beta_n$ to be these three solutions, the functional form of $\\tau$, and therefore $\\tau_n:=\\tau(\\beta_n)$, remains the same.\nNumerically at leading order in $z$, keeping four significant figures in $z^{2\/3}$\nwe have \n\\begin{align}\n\\tau_0 &\\simeq e^{\\frac{\\pi i}{3}} -\\frac{.3355i}{A^{2\/3}} \\, z^{2\/3} + O(z^{4\/3}) \\nonumber \\\\\n\\tau_1 &\\simeq e^{\\frac{\\pi i}{3}} +\\frac{.2906+.1678 i}{A^{2\/3}} \\, z^{2\/3} + O(z^{4\/3}) \\nonumber \\\\\n\\tau_2 &\\simeq e^{\\frac{\\pi i}{3}} -\\frac{.2906-.1678i }{A^{2\/3}} \\, z^{2\/3} + O(z^{4\/3}).\n\\label{eq:tau012forIIsIV}\n\\end{align}\nNote that the monodromy action has changed, though: upon taking $\\theta$ from $0$ to $2 \\pi$\nwe have $\\tau_0\\mapsto \\tau_2$, $\\tau_1\\mapsto \\tau_0$, $\\tau_2\\mapsto \\tau_1$. In the type $II$, $IV^*$\ncase this was the monodromy associated with taking $\\theta$ from $0$ to $-2\\pi$. We see explicitly\nthat the $SL(2,\\mathbb{Z})$ monodromy of $II\/IV^*$ fibers induces the inverse action on $\\tau$ compared to\n$II^*\/IV$ fibers. This is as expected since\n\\begin{equation}\nM_{II}=M_{II^*}^{-1}, \\qquad M_{IV}=M_{IV^*}^{-1}\n\\end{equation}\nand the fact\n\\begin{equation}\nM_{II}=-M_{IV^*}, 
\\qquad M_{II^*}=-M_{IV}\n\\end{equation}\nimplies that $II$ and $IV^*$ induce the same action on $\\tau$, and similarly for $II^*$ and $IV$.\n\nGiven these explicit solutions for $\\tau$ one can solve for $g_s$ as a function of the local coordinate\n$z$ near the seven-branes with type $II$, $IV^*$, $II^*$, or $IV$ fibers and determine its falloff\nfrom the central value $g_s=2\/\\sqrt{3}$ at $z=0$.\n\n\n\\subsection{The Genericity of Strongly Coupled Regions in F-theory and the Sen Limit}\n\nIn this section we would like to demonstrate the genericity of\nstrongly coupled regions in F-theory. This statement is immediately\nplausible since F-theory is a generalization of the weakly coupled\ntype IIB string with varying axiodilaton, but we would like to argue\nthat there are regions with $O(1)$ $g_s$ for nearly all of the moduli\nspace of F-theory using two concrete lines of evidence, one that utilizes\nnon-Higgsable clusters and one that does not.\n\nRecall from section \\ref{sec:review} that there is growing evidence and argumentation\nthat nearly all extra dimensional topologies $B$ give rise to geometrically non-Higgsable\nstructure, and it is typically the case that those have geometrically\nnon-Higgsable seven-branes. The latter have a Kodaira fiber in the set\n\\begin{equation}\\{II,III,IV,I_0^*,IV^*,III^*,II^*\\}\\end{equation} and\nfor all but\\footnote{It would also be interesting to understand\n whether a non-Higgsable seven-brane with $I_0^*$ fiber, which could\n have gauge group $SO(8), SO(7),$ or $G_2$, necessarily forbids a Sen\n limit; certainly if it is Higgsable, such a limit can exist.}\n$I_0^*$ the axiodilaton is strongly coupled in the vicinity of the\nseven-brane, as seen using the explicit solutions of section\n\\ref{sec:axiodilaton}. 
If the argumentation regarding the genericity of non-Higgsable\nclusters is correct and a typical compactification has a non-Higgsable\nseven-brane with fiber $II,III,IV,IV^*, III^*,$ or $II^*$, then there are\nregions with $O(1)$ $g_s$ for most of the moduli space of F-theory.\n\n\n\nHowever, there are regions with $O(1)$ $g_s$\nquite generally even in the absence of non-Higgsable clusters.\nConsider a completely general Weierstrass model, which has a\n$J$-invariant\n\\begin{equation}\nJ = \\frac{4f^3}{4f^3+27g^2}.\n\\end{equation}\nIn some cases $f$ and $g$ may be reducible (e.g. if there are\nnon-Higgsable clusters), but it need not be so and it will not affect\nthe following calculation. There are two special loci in this generic\ngeometry, $f=0$ and $g=0$, and on these loci\n\\begin{equation}\nJ|_{f=0} = 0, \\qquad J|_{g=0}=1.\n\\end{equation}\nSince we have seen its utility for studying loci with $J=1$, consider Ramanujan's theory in\nwhich $J$ is a quadratic in $x_1$. Then we have\n\\begin{equation}\nx_1 = \\frac12 \\pm \\frac{3i\\sqrt{3} g}{4f^{3\/2}}=: \\alpha_\\pm\n\\end{equation}\nand solving for $\\tau$ we have\n\\begin{equation}\n\\tau_{\\pm} = i \\, \\frac{_2F_1(\\frac16,\\frac56,1,\\alpha_\\mp)}{_2F_1(\\frac16,\\frac56,1,\\alpha_\\pm)},\n\\label{eq:tauonfis0}\n\\end{equation}\nwhich is the same form as for seven-branes with type $III$ or $III^*$ Kodaira fibers\nas in \\eqref{eq:tauexactJis1}, though $\\alpha_\\pm$ have different forms which critically\nchange the physics. 
For example, the change in $\\alpha_\\pm$ changes the local structure of $\\tau$ to\n\\begin{equation}\n\\tau_\\pm = i \\pm B \\frac{g}{f^{3\/2}}+O(g^2)\n\\end{equation} where here $B = \\frac{5\\, \\Gamma(\\frac{7}{12}) \\Gamma(\\frac{11}{12})}{8\\sqrt{3}\\, \\Gamma(\\frac{13}{12}) \\Gamma(\\frac{17}{12})} \\simeq .68542$, and we see (at leading order\\footnote{This is\n extended to all orders by the fact that the power series expansion for $_2F_1(a,b;c;x)$ holds for $|x|<1$, which occurs for sufficiently\n small $g$ away from $f=0$.}) that there is no monodromy associated with taking a loop around $g=0$ for $f\\ne0$.\nThis makes physical sense, because such a locus has no seven-branes, and therefore no source for $\\tau$-monodromy! \nNevertheless, the region $g=0$ is strongly coupled: at $g=0$, $\\alpha_+=\\alpha_-$, $\\tau=i$, and therefore\n$g_s=1$. Similar results can be obtained using the $x_2$ theory to solve for $\\tau$ near $f=0$,\nand in that case $x_2$ has three solutions that give $\\tau=e^{\\pi i\/3}$, and therefore $g_s=\\frac{2}{\\sqrt{3}}$ at $f=0$.\n\nThese facts about strong coupling are quite general, independent of\nthe existence or non-existence of any non-trivial seven-brane\nstructure (i.e. with fiber other than $I_1$). They hold even for\ncompletely smooth models, such as the generic Weierstrass model over\n$\\mathbb{P}^3$.\n\n\n\\vspace{.5cm}\nWhat happens to these strongly coupled regions in the Sen limit?\nRoughly, the Sen limit \\cite{Sen:1996vd,Sen:1997gv} is a weakly\ncoupled limit in moduli space where $J\\mapsto \\infty$ and therefore\n$\\tau \\mapsto i \\infty$, so $g_s\\mapsto 0$. This occurs because the\nWeierstrass model takes the form $f=C \\eta -3 h^2$, $g=h \\left(C \\eta -2 h^2\\right)$\nwith discriminant\n\\begin{equation}\n\\Delta = C^2 \\eta^2(4C\\eta - 9h^2)\n\\end{equation}\nwhere $\\eta$ and $h$ are sections and $C$ is a parameter. Sen's weak coupling limit is the $C\\mapsto 0$\nlimit. 
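The factorization of the discriminant in Sen's form quoted above is a polynomial identity, $4f^3+27g^2 = C^2\eta^2(4C\eta-9h^2)$ for $f=C\eta-3h^2$ and $g=h(C\eta-2h^2)$, and is easy to spot-check numerically (a sketch; the random real test values are ours):

```python
import random

# f, g in Sen's form; check 4 f^3 + 27 g^2 = C^2 eta^2 (4 C eta - 9 h^2)
random.seed(0)
for _ in range(100):
    C, eta, h = (random.uniform(-2, 2) for _ in range(3))
    f = C * eta - 3 * h**2
    g = h * (C * eta - 2 * h**2)
    lhs = 4 * f**3 + 27 * g**2
    rhs = C**2 * eta**2 * (4 * C * eta - 9 * h**2)
    assert abs(lhs - rhs) <= 1e-9 * (1 + abs(lhs) + abs(rhs))
```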
However, note that if $C$ is very small but non-zero\n\\begin{equation}\nJ = \\frac{4f^3}{C^2 \\eta^2(4C\\eta - 9h^2)} = \\frac{4 \\left(C \\eta -3 h^2\\right)^3}{C^2 \\eta ^2 \\left(4 C \\eta -9 h^2\\right)}\n\\end{equation}\nis made very large by $1\/C$, but not infinite. Therefore unless it is strictly true that $C=0$,\nthe loci $f=0$ and $g=0$ (i.e. $C\\eta=3h^2$ and $\\{h=0\\}\\cup\\{C\\eta=2h^2\\}$) have $J=0$ and $J=1$, and therefore\n$g_s=2\/\\sqrt{3}$ and $g_s=1$, respectively. In the limit of $C$ becoming very small but non-zero one expects the string coupling to become lower \nin the vicinity of $f=0$ and $g=0$, though $O(1)$ on the loci. \n\nLet us see this explicitly. First, solving the quadratic theory for $\\tau$ near $h=0$ (a component of $g=0$) we obtain\n\\begin{equation}\n\\tau_\\pm = i \\pm B\\, (C\\eta)^{-1\/2} \\, h+O(h^2)\n\\end{equation}\nwhere $B = \\frac{5\\, \\Gamma(\\frac{7}{12}) \\Gamma(\\frac{11}{12})}{8\\sqrt{3}\\, \\Gamma(\\frac{13}{12}) \\Gamma(\\frac{17}{12})} \\simeq .68542$.\nAs $C$ becomes small, the $g_s$ associated with one of these solutions for a small $h$ becomes weaker, but $g_s=1$\nat $h=0$ if $C$ is finite. Studying the other component of $g=0$ where $C\\eta=2h^2$ is more difficult because the locus\nitself moves as $C$ is taken to $0$. This component intersects a disc centered at $h=0$ at $h=\\pm \\sqrt{C\\eta\/2}$,\nand solving the quadratic theory near $h=\\sqrt{C\\eta\/2}$ we have\n\\begin{equation}\n\\tau_\\pm = i \\pm i\\,4\\sqrt{2}B\\,\\left(h-\\sqrt{\\frac{C\\eta}{2}}\\right)+O\\left(\\left[h-\\sqrt{\\frac{C\\eta}{2}}\\right]^2\\right).\n\\end{equation}\nIn the strict limit $C=0$ all of these loci collapse to $h=0$ and the theory is weakly\ncoupled, but any $C\\ne 0$ gives $g_s=1$ on the two\ncomponents of $g=0$. 
Similarly, the locus $f=0$ gives rise to $g_s=2\/\\sqrt{3}$ \npoints at the loci $h=\\pm \\sqrt{\\frac{C\\eta}{3}}$ in an $h$-disc, and this could\nbe studied explicitly using the theory where $J$ is cubic in $x_2$.\n\n\nWhat is going on physically for these components? We have three: one for $f=0$\nand two for $g=0$, and unless $g_s=0$ they are strongly coupled loci.\nExamining the discriminant we see that as $C$ becomes small two\nseven-branes approach the locus $h=0$ and collide in the limit. This\nis the $O7$-plane, and therefore for $C\\ne 0$ the region $h=0$ is the\nstrongly coupled region between the two seven-branes that become the\n$O7$ in the Sen limit. The $h$-disc is useful for studying $f=0$ and the\nother component of $g=0$. The picture is\n\\begin{equation} \n\\begin{tikzpicture}[scale=1]\n \\draw[xshift=7cm,thick,color=black] (60mm,0mm) circle (1mm);\n \\draw[xshift=7cm,thick,color=black] (-60mm,0mm) circle (1mm);\n \\fill[xshift=7cm,thick,color=blue] (0mm,0mm) circle (1mm);\n \\fill[xshift=7cm,thick,color=blue] (28.2843mm,0mm) circle (1mm);\n \\fill[xshift=7cm,thick,color=blue] (-28.2843mm,0mm) circle (1mm);\n \\fill[xshift=7cm,thick,color=red] (23.094mm,0mm) circle (1mm);\n \\fill[xshift=7cm,thick,color=red] (-23.094mm,0mm) circle (1mm);\n\\end{tikzpicture}\n\\label{eqn:sendisc}\n\\end{equation}\nwhere the hollow circles are the only places where $\\Delta$\nintersects the $h$-disc. These are the two $(p,q)$ seven-branes that\nbecome the $O7$ in the Sen limit, and in this limit the whole picture\ncollapses to the central blue dot at $h=0$. The red (blue) dots are\nwhere the $f=0$ ($g=0$) locus intersects the $h$-disc, and they have\n$g_s=2\/\\sqrt{3}$ and $g_s=1$, respectively. They are strongly coupled\nregions that are separated from the branes for finite $C$.\n\n\n\\vspace{.3cm}\n\nThe one caveat that we have not yet discussed involves configurations with\nconstant coupling. 
They were studied in the K3 case by Dasgupta and\nMukhi \\cite{Dasgupta:1996ij}, with the result (which generalizes beyond\nK3) that seven-branes with fiber $II$, $III$, $IV$, $I_0^*$, $IV^*$,\n$III^*$, and $II^*$ may give rise to constant coupling\nconfigurations. When this occurs, all but the $I_0^*$ case give\nrise to constant coupling configurations with $O(1)$ $g_s$. In the\n$I_0^*$ case there is a continuum of possible $g_s$ values that may\nbe constant across the base $B$. In such a case there may be multiple $I_0^*$ seven-branes\nand the Weierstrass model\ntakes the form\n\\begin{equation}\nf = F \\prod_i z_i^2, \\qquad g = G \\prod_i z_i^3,\n\\end{equation}\nwith $F$ and $G$ necessarily constants, rather than non-trivial sections of a line bundle. In such a case\nthey do not define vanishing loci in $B$ with $O(1)$ $g_s$. Instead, all of the factors of $z_i$ drop out\nof the $J$ invariant,\n\\begin{equation}\nJ = \\frac{4F^3}{4F^3+27G^2}\n\\end{equation}\nwhich is just a constant, not varying over the base. To our knowledge,\nthis is the only possible way to obtain an F-theory compactification\nwith non-zero $g_s\\ll 1$ which is weakly coupled everywhere in $B$.\n\nIn summary, aside from compactifications realizing this $I_0^*$ caveat\nor the strict $g_s=0$ limit, there is a region $f=0$ or $g=0$ (or a component thereof) in every\nF-theory compactification with $O(1)$ $g_s$, and it is not necessarily near\nany seven-brane. 
This may have interesting implications for moduli\nstabilization or cosmology in the landscape.\n\n\\section{Non-Perturbative $SU(3)$ and $SU(2)$ Theories}\n\\label{sec:su3su2}\n\n\\subsection{Comparison to D7-brane Theories}\n\nIn this section we would like to compare the non-perturbative\nrealizations of $SU(3)$ and $SU(2)$ theories from type $IV$ and $III$\nfibers to the $SU(3)$ and $SU(2)$ theories\\footnote{The type $IV$ and $I_3$ fibers\nrealize $SU(2)$ ($SU(3)$) theories in six and four-dimensional models if the \nfiber is non-split (split).} that may be realized by\nstacks of three and two $D7$-branes at weak string coupling. The\nlatter are realized by type $I_3$ and $I_2$ fibers, and in some cases\n(but not if the $IV$ or $III$ is non-Higgsable) they are related by\ndeformation.\n\n\nSuch deformations are slightly unusual. In many cases a deformation of\na geometry spontaneously breaks the theory on the seven-brane, but in\nthese cases the deformation leaves the gauge group\nintact\\footnote{This is always true for a $III$ to $I_2$ deformation,\n and is also true for a $IV$ to $I_3$ deformation provided the deformation preserves\n the split or non-split property related to non-simply laced groups.}. However, the deformation is\nnon-trivial, since the Kodaira fiber, the number of branes in the stack, the $SL(2,\\mathbb{Z})$ monodromy,\nand the axiodilaton profile all change due to the deformation.\n\nWe begin by considering the relationship between $SU(2)$ theories\nrealized on seven-branes with type $I_2$ and $III$ Kodaira fibers.\nWe expand $f$ and $g$ as\n\\begin{equation}\nf = f_0 + f_1 z \\qquad \\qquad g = g_0 + g_1 z + g_2 z^2,\n\\end{equation}\nwhere $f_0$, $g_0$, and $g_1$ do not depend on $z$ but $f_1$ and $g_2$\ncan contain both $z$-independent and $z$-dependent terms. 
To engineer\nan $I_2$ singularity we move to a sublocus in complex structure moduli where\n\\begin{equation}\nf_0 = -3a^2 \\qquad g_0 = 2a^3 \\qquad f_1 = b \\qquad g_1 = -a b,\n\\end{equation}\nin which case\n\\begin{equation}\n\\Delta = z^2 \\left(108 a^3 g_2-9 a^2 b^2\\right)+ z^3(4b^3-54abg_2)+O(z^4).\n\\end{equation}\nHere $a$ and $b$ are global sections of line bundles $a\\in\n\\Gamma(\\mathcal{O}(-2K))$ and $b \\in \\Gamma(\\mathcal{O}(-4K-Z))$ where $Z$ is the class of the divisor\n$z=0$. Specifying in moduli such that $a=0$, we see that $f_0$, $g_0$, and $g_1$ vanish\nidentically, and the resulting fibration has $(ord f, ord g, ord \\Delta) = (1,2,3)$; thus\nin the limit $a\\mapsto 0$ the $I_2$ fiber over $z=0$ becomes type $III$.\n\nIn a generic disc $D$ containing $z=0$ where $z$ is also the coordinate of\nthe disc, $a$ is a constant since $f_0$, $g_0$, and $g_1$ \ndo not depend on $z$. We can study the geometry\nof the elliptic surface over the disc that is naturally induced from\nrestriction of the elliptic fibration. From the point of view of the\nelliptic surface, then, the $a\\mapsto 0$ limit is just a limit in a\nconstant complex number $a$ (technically $a|_D$, but we abuse\nnotation). 
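The quoted expansion of $\Delta$ for the $I_2$ tuning can be checked with exact polynomial arithmetic in $z$: the $z^0$ and $z^1$ coefficients of $4f^3+27g^2$ should cancel, leaving $z^2(108a^3g_2-9a^2b^2)+z^3(4b^3-54abg_2)+O(z^4)$. A sketch (the helpers `pmul`, `padd` and the numeric values of $a,b,g_2$ are ours, with $b$ and $g_2$ taken $z$-independent for the check):

```python
def pmul(p, q):
    """Multiply polynomials in z given as coefficient lists (lowest degree first)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def padd(p, q):
    n = max(len(p), len(q))
    p = p + [0.0] * (n - len(p))
    q = q + [0.0] * (n - len(q))
    return [x + y for x, y in zip(p, q)]

a, b, g2 = 0.7, 1.3, -0.4            # illustrative numeric values
f = [-3 * a**2, b]                   # f = -3 a^2 + b z
g = [2 * a**3, -a * b, g2]           # g = 2 a^3 - a b z + g2 z^2
delta = padd([4 * c for c in pmul(pmul(f, f), f)],
             [27 * c for c in pmul(g, g)])
# delta[k] is the z^k coefficient of the discriminant 4 f^3 + 27 g^2
```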
Then $a=0$ realizes a type $III$ fiber along $z=0$, and a\nsmall deformation $a\\ne 0$ reduces it to an $I_2$ fiber; both\ngeometrically give rise to $SU(2)$ gauge theories on the\nseven-brane.\n\n We would also like to consider a second deformation\nparameter $\\epsilon$, set $b=g_2=1$ for simplicity, and consider a small\nenough disc that we can drop higher order terms in $z$; then the\ntwo-parameter family of elliptic fibrations over $D$ is given by\n\\begin{align}\nf = -3a^2 + z \\qquad \\qquad g = \\epsilon + 2 a^3 - a z + z^2 \\nonumber \\\\\n\\Delta = 108 a^3 z^2+108 a^3 \\epsilon -9 a^2 z^2-54 a z^3-54 a z \\epsilon +27 z^4+4 z^3+54 z^2\n \\epsilon +27 \\epsilon ^2.\n\\end{align}\nWe note the behavior of the discriminant in the relevant limits:\n\\begin{align}\n\\epsilon=0&: \\Delta = z^2 \\left(108 a^3-9 a^2-54 a z+27 z^2+4 z\\right)\\\\\n\\epsilon=a=0&: \\Delta = z^3 \\left(27 z+4\\right),\n\\end{align}\nwhere the $\\epsilon=0$ limit is the limit of $SU(2)$ gauge enhancement with a type\n$I_2$ fiber, and the $\\epsilon=a=0$ limit maintains the $SU(2)$ group but realizes\nthe theory instead with a type $III$ fiber.\n\n\\vspace{.5cm}\n\\noindent \\emph{Analysis of the Two Deformations}\n\nLet us study two different deformations of the type $III$\ntheory that uncover its differences from the $I_2$ theory, and also\nthe relationship between the two via deformation. The first\ndeformation is to take $\\epsilon \\ne 0$ but small; specifically,\n$\\epsilon = .001$. This is a small breaking of the type $III$ $SU(2)$\ntheory, where the deformation causes the three branes comprising the\ntype $III$ theory to split into three $(p,q)$ seven-branes in a smooth\ngeometry with no non-abelian gauge symmetry. 
For these parameters the\ngeometry appears in Figure \\ref{fig:a0}\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[scale=.75]{figures\/zIIII2azero.eps} \\qquad \\qquad\n\\includegraphics[scale=.75]{figures\/xIIII2azero.eps}\n\\end{center}\n\\caption{Two figures of the geometry deformed by $\\epsilon=.001$, $a=0$, with\n$(p,q)$ seven-branes on the left and ramification points of the elliptic curve on the right.}\n\\label{fig:a0}\n\\end{figure}\nwhere we have displayed the branes in the $z$-plane as red dots and we\nhave also displayed the $x$-plane above $z=0$, where the blue dots\nrepresent three ramification points of the torus described as a double\ncover (as is natural in a Weierstrass model). Any straight line\nbetween two of the blue dots determines a one-cycle in the elliptic\nfiber above $z=0$, and by following straight line paths from $z=0$ to\nthe seven-branes, two of the ramification points will collide,\ndetermining a vanishing cycle.\n\nObtaining a consistent picture of the W-boson degrees of freedom throughout\nthe moduli space requires that the geometry provides a mechanism that turns the\nmassive W-bosons of the slightly deformed type $III$ theory into the massive W-bosons \nof the slightly deformed type $I_2$ theory. From the weak coupling limit, we know that\nthe latter are represented by fundamental open strings. From the deformation of the\ntype $III$ singularity performed in \\cite{Grassi:2014sda} we know that the former are \nthree-pronged string junctions. Thus, for $\\epsilon\\ne 0$ small, the continuous\nincrease of $a$ from $0$ must turn string junctions into fundamental strings.\n\nFor the deformation of a type $III$ singularity the vanishing cycles were\nderived in \\cite{Grassi:2014sda}; they can be read off by taking\nstraight line paths from the origin. 
Beginning with the left-most brane\nand working clockwise about $z=0$, the ordered set of vanishing cycles are $Z_{III}=\\{\\pi_2,\\pi_1,\\pi_3\\}$\nwhere these cycles are defined as\n\\begin{equation} \n\\begin{tikzpicture}[scale=1]\n \\fill[xshift=7cm,thick] (180:10mm) circle (1mm);\n \\fill[xshift=7cm,thick] (180-120:10mm) circle (1mm);\n \\fill[xshift=7cm,thick] (180+120:10mm) circle (1mm);\n \\node at (9.2cm,1.3cm) {$x$};\n \\draw[xshift=7cm,thick,->] (180:10mm)+(30:1.3mm) -- +(30:16mm);\n \\draw[xshift=7cm,thick,->] (180-120:10mm)+(-90:1.3mm) -- +(-90:16mm);\n \\draw[xshift=7cm,thick,->] (180+120:10mm)+(150:1.3mm) -- +(150:16mm);\n \\node at (6.6cm,0.7cm) {$\\pi_1$};\n \\node at (8cm,0cm) {$\\pi_2$};\n \\node at (6.6cm,-0.7cm) {$\\pi_3$};\n \\draw[xshift=9cm,thick,yshift=1.0cm] (90:0mm) -- (90:4mm);\n \\draw[xshift=9cm,thick,yshift=1.0cm] (0:0mm) -- (0:4mm);\n \\end{tikzpicture}\n\\label{eqn:pixpiypiz}\n\\end{equation}\nand the massive W-bosons of the broken theory are three-pronged string\njunctions $J_\\pm=(\\pm 1,\\pm 1,\\pm 1)$ which, topologically, are two\nspheres in the elliptic surface due to having asymptotic charge zero \\cite{Grassi:2014ffa}.\nPictorially, they appear as\n\\begin{equation}\n \\begin{tikzpicture}\n \\begin{scope}[rotate=-30] \n \\fill[xshift=-20mm] (90:8mm) circle (1mm);\n \\fill[xshift=-20mm] (210:8mm) circle (1mm);\n \\fill[xshift=-20mm] (330:8mm) circle (1mm);\n \\draw[xshift=-20mm,thick] (0:0mm) -- (90:7mm);\n \\draw[xshift=-20mm,thick] (90:3.5mm) -- (75:3mm);\n \\draw[xshift=-20mm,thick] (90:3.5mm) -- (105:3mm);\n \\draw[xshift=-20mm,thick] (0:0mm) -- (210:7mm);\n \\draw[xshift=-20mm,thick] (210:3.5mm) -- (225:3mm);\n \\draw[xshift=-20mm,thick] (210:3.5mm) -- (195:3mm);\n \\draw[xshift=-20mm,thick] (0:0mm) -- (330:7mm); \n \\draw[xshift=-20mm,thick] (330:3.5mm) -- (345:3mm);\n \\draw[xshift=-20mm,thick] (330:3.5mm) -- (315:3mm);\n \\end{scope}\n \\begin{scope}[yshift=2cm,rotate=-30]\n \\fill[xshift=20mm] (90:8mm) circle 
(1mm);
  \fill[xshift=20mm] (210:8mm) circle (1mm);
  \fill[xshift=20mm] (330:8mm) circle (1mm);
  \draw[xshift=20mm,thick] (90:0mm) -- (90:7mm);
  \draw[xshift=20mm,thick] (90:3.5mm) -- (78:4.1mm);
  \draw[xshift=20mm,thick] (90:3.5mm) -- (102:4.1mm);
  \draw[xshift=20mm,thick] (0:0mm) -- (210:7mm);
  \draw[xshift=20mm,thick] (210:3.5mm) -- (222:4.1mm);
  \draw[xshift=20mm,thick] (210:3.5mm) -- (198:4.1mm);
  \draw[xshift=20mm,thick] (0:0mm) -- (330:7mm); 
  \draw[xshift=20mm,thick] (330:3.5mm) -- (342:4.1mm);
  \draw[xshift=20mm,thick] (330:3.5mm) -- (318:4.1mm);
  \end{scope}
  \end{tikzpicture}
\end{equation}
where the black dots represent seven-branes. See \cite{Grassi:2014sda} for more details of the intersection
theory of these particular junctions, as well as their reproduction of the
Dynkin diagram.

\vspace{.3cm} The first deformation should be thought of as a small
deformation of an $SU(2)$ seven-brane theory associated with a type $III$
singularity, that is, a Higgsing. We would now like to study a deformation that
corresponds to a large deformation from a type $III$ $SU(2)$ theory to a 
type $I_2$ $SU(2)$ theory (which does not Higgs the theory), and then a small
deformation that Higgses the $SU(2)$ theory on the seven-brane with an $I_2$
singular fiber. Heuristically, the third brane of the type $III$ singularity 
should be far away relative to the distance between the two branes of the deformed
type $I_2$ singularity, which in type IIB language are two D7-branes with a small
splitting.

We take $a=.3i$ and $\epsilon=.001$. The difference between
this deformation and the deformation of the previous paragraph is that
in this case one of the three seven-branes that was originally at $z=0$ is
much further away due to taking $a\ne 0$, and the question is whether
the $(p,q)$ labels of the three branes changed in the process.
The geometry
appears in Figure \ref{fig:an0}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=.75]{figures/zIIII2anonzero.eps} \qquad \qquad
\includegraphics[scale=.75]{figures/xIIII2anonzero.eps}
\end{center}
\caption{Two figures of the geometry deformed by $\epsilon=.001$, $a=.3i$, with
$(p,q)$ seven-branes on the left and ramification points of the elliptic curve on the right.}
\label{fig:an0}
\end{figure}
where the brane on the left is the one that has been moved further away
from the origin by turning on $a$. If one were to maintain this value of $a$
but tune $\epsilon\to 0$, the two branes close to the origin would collide
to give an $SU(2)$ theory with an $I_2$ singular fiber.
Indeed, with this deformation the two branes closer
to the origin both have vanishing cycle $\pi_1$ and the brane
displaced by the $a$ deformation has vanishing cycle $\pi_3$, so that
now the ordered set of vanishing cycles (beginning with the left-most brane and working
clockwise about $z=0$) is $Z=\{\pi_3,\pi_1,\pi_1\}$. The
W-bosons of the broken $I_2$ $SU(2)$ theory are the strings between the
$\pi_1$ branes, as expected since $I_2$ is the F-theory lift of two
D7-branes.

What has happened? The natural W-boson of the broken type $III$ $SU(2)$
theory (associated to the $a=0, \epsilon=.001$ deformation) is the
three-pronged string junction, but after tuning $a$ from $a=0$ to $a=.3i$
the natural W-boson of the broken type $I_2$ theory is the fundamental
string. The change between the two different descriptions of the W-boson
is a deformation-induced Hanany-Witten move; for related ideas in a different
geometry, see \cite{Cvetic:2010rq}.
This can actually be seen via
continuous deformation from $a=0$ to $a=.3i$, during which the leftmost (bottom) seven-brane
in Figure \ref{fig:a0} becomes the bottom (leftmost) seven-brane of Figure \ref{fig:an0}.
In the process the straight line path from $z=0$ to the (moving) leftmost seven-brane in Figure \ref{fig:a0}
is crossed via the movement of the bottom seven-brane in Figure \ref{fig:a0}, changing the $(p,q)$ labels
of the former. This matches the changes in the ordered sets of vanishing cycles.

Though we have explicitly seen the natural state change using a
deformation, let us check the possibilities using the usual algebraic
description of DeWolfe and Zwiebach \cite{DeWolfeZwiebach}, where the
$(p,q)$ seven-branes are arranged in a line with branch cuts pointing
downward. Technically, the branch cuts represent the mentioned
straight line paths from the origin, the latter being the point whose
associated fiber $E_0$ appears in the relative homology $H_2(X_d,E_0)$
that defines the string junctions. Here $X_d$ is the elliptic surface
over the disc.
The\nW-boson of the broken $SU(2)$ theory associated to a type $III$\nsingularity is\n\\begin{center}\n\\begin{tikzpicture}\n \\draw[thick,dotted] (0cm,-.25cm) -- (0cm,-1.75cm);\n \\draw (0cm,0cm) circle (.25cm);\n \\node at (0cm,0cm) {$\\pi_3$};\n \\draw[thick,dotted] (1cm,-.25cm) -- (1cm,-1.75cm);\n \\draw (1cm,0cm) circle (.25cm);\n \\node at (1cm,0cm) {$\\pi_2$};\n \\draw[thick,dotted] (-1cm,-.25cm) -- (-1cm,-1.75cm);\n \\draw (-1cm,0cm) circle (.25cm);\n \\node at (-1cm,0cm) {$\\pi_1$};\n \\draw[thick] (-1cm,.25cm) arc (135:45:1.4cm);\n \\draw[thick] (0cm,.25cm) -- (0cm,.65cm);\n \\draw[thick] (-.8cm,.5cm) -- (-.65cm,.5cm);\n \\draw[thick] (-.66cm,.36cm) -- (-.66cm,.5cm);\n \\draw[thick] (.8cm,.5cm) -- (.65cm,.5cm);\n \\draw[thick] (.66cm,.36cm) -- (.66cm,.5cm);\n \\draw[thick] (-.1cm,.4cm) -- (0cm,.5cm);\n \\draw[thick] (.1cm,.4cm) -- (0cm,.5cm);\n\\end{tikzpicture}\n\\end{center}\nwhere for concreteness we choose a basis such that $\\pi_1 = (1,0)$, $\\pi_2=(0,1)$ and\n$\\pi_3=(-1,-1)$ with the convention that we cross branch cuts by moving to the right. Then\nthe monodromy matrix associated to a $(p,q)$ seven-brane is \n\\begin{equation}\n \\begin{pmatrix}\n 1-pq && p^2 \\\\\n -q^2 && 1+pq\n \\end{pmatrix}\n \\label{eq:Mpq}\n\\end{equation}\nand one can check that the monodromy of these three branes reproduces the monodromy of the\ntype $III$ singularity, as they must after deformation. 
We see
\begin{equation}
M_{\pi_2}M_{\pi_3}M_{\pi_1}=M_{III}=
\begin{pmatrix}
  0 && 1 \\ -1 && 0
\end{pmatrix},
\end{equation}
which is the expected behavior.


In this description, the continuous motion of the seven-branes
described above amounts to the $\pi_2$ brane crossing the branch cut
of the $\pi_3$ brane, changing its vanishing cycle to
$M_{\pi_3}^{-1}\begin{pmatrix}0\\1\end{pmatrix}=\begin{pmatrix}-1\\0\end{pmatrix}$;
but this is just $-\pi_1$ and we are free to call it a
$\pi_1$ seven-brane instead if we reverse the sign of any junction
coming out of the brane. With this movement and relabeling, the above
junction becomes
\begin{center}
\begin{tikzpicture}
  \draw[thick,dotted] (0cm,-.25cm) -- (0cm,-1.75cm);
  \draw (0cm,0cm) circle (.25cm);
  \node at (0cm,0cm) {$\pi_1$};
  \draw[thick,dotted] (1cm,-.25cm) -- (1cm,-1.75cm);
  \draw (1cm,0cm) circle (.25cm);
  \node at (1cm,0cm) {$\pi_3$};
  \draw[thick,dotted] (-1cm,-.25cm) -- (-1cm,-1.75cm);
  \draw (-1cm,0cm) circle (.25cm);
  \node at (-1cm,0cm) {$\pi_1$};
  \draw[thick] (1cm,.25cm) -- (1cm,.76cm);
  \draw[thick] (.9cm,.4cm) -- (1cm,.5cm);
  \draw[thick] (1.1cm,.4cm) -- (1cm,.5cm);
  \draw[thick] (-.52cm,.61cm) -- (-.37cm,.59cm);
  \draw[thick] (-.39cm,.44cm) -- (-.37cm,.59cm);
  \draw[thick] (.65cm,-.5cm) -- (.56cm,-.58cm);
  \draw[thick] (.63cm,-.7cm) -- (.56cm,-.58cm);
  \draw[thick] (-1cm,.25cm) .. controls (0cm,1cm) and (2cm,1cm) .. (2cm,0cm) .. controls (2cm,-1cm) and (.2cm,-.75cm) .. (.15cm,-.2cm);
\end{tikzpicture}
\end{center}
and now the branes are in a position to do the relevant Hanany-Witten move. Prior to crossing the branch
cut from the right, the $(p,q)$ charge of the piece of string that ends on the middle brane (in this picture) is $\pi_1+\pi_3$, and after the branch cut it is $\pi_1$ due
to the monodromy action $M_{\pi_3}^{-1}$.
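This monodromy bookkeeping (the product of the three brane monodromies reproducing $M_{III}$, and the branch-cut action sending the cycle $(0,1)$ to $-\pi_1$) can be verified with a few lines of integer matrix algebra. A minimal self-contained sketch, in the basis $\pi_1=(1,0)$, $\pi_2=(0,1)$, $\pi_3=(-1,-1)$ chosen above:

```python
def M(p, q):
    # SL(2,Z) monodromy of a (p,q) seven-brane, as in the matrix above
    return ((1 - p*q, p*p), (-q*q, 1 + p*q))

def mul(A, B):
    # 2x2 integer matrix product
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def inv(A):
    # inverse of a 2x2 matrix with determinant one
    (a, b), (c, d) = A
    return ((d, -b), (-c, a))

def act(A, v):
    # action of a matrix on a (p,q) charge vector
    return (A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1])

pi1, pi2, pi3 = (1, 0), (0, 1), (-1, -1)

# The three brane monodromies compose to the type III monodromy M_III
assert mul(M(*pi2), mul(M(*pi3), M(*pi1))) == ((0, 1), (-1, 0))

# Crossing the pi3 branch cut sends the (0,1) cycle to (-1,0) = -pi1
assert act(inv(M(*pi3)), pi2) == (-1, 0)
```

After the relabeling $-\pi_1\to\pi_1$, which reverses the sign of junction prongs on that brane, this reproduces the ordered set $\{\pi_1,\pi_1,\pi_3\}$.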
If the monodromy action is replaced by a prong by pulling the string
through the brane (that is, performing a Hanany-Witten move), how many extra prongs $k$ are picked up on the right-most brane?
Charge conservation requires 
\begin{equation} \pi_1 + \pi_3 + k \,\pi_3 = \pi_1 \end{equation} 
and we see $k=-1$. That is, the Hanany-Witten move replaces the
portion of the string crossing the branch cut with a prong going into the $\pi_3$ brane. This cancels against the prong that is already there, leaving
\vspace{.3cm}
\begin{center}
\begin{tikzpicture}
  \draw[thick,dotted] (0cm,-.25cm) -- (0cm,-1.75cm);
  \draw (0cm,0cm) circle (.25cm);
  \node at (0cm,0cm) {$\pi_1$};
  \draw[thick,dotted] (1cm,-.25cm) -- (1cm,-1.75cm);
  \draw (1cm,0cm) circle (.25cm);
  \node at (1cm,0cm) {$\pi_3$};
  \draw[thick,dotted] (-1cm,-.25cm) -- (-1cm,-1.75cm);
  \draw (-1cm,0cm) circle (.25cm);
  \node at (-1cm,0cm) {$\pi_1$};
  \draw[thick] (-.75cm,0cm) -- (-.25cm,0cm);
  \draw[thick] (-.55cm,.1cm) -- (-.45cm,0cm);
  \draw[thick] (-.55cm,-.1cm) -- (-.45cm,0cm);
\end{tikzpicture}
\end{center}
which, since we have chosen $\pi_1=(1,0)$, is just a fundamental string.
If we had chosen $\pi_1$ to be some other $(p,q)$ value this would be a $(p,q)$
string, but there is always a choice of $SL(2,\mathbb{Z})$ frame that would turn
it back into a fundamental string.

In summary, we see using algebraic techniques that the natural W-boson
of the Higgsed type $III$ $SU(2)$ theory, which is a three-pronged
string junction, is related to the natural W-boson of the Higgsed
type $I_2$ $SU(2)$ theory.
The relationship is a brane rearrangement
together with a Hanany-Witten move, which we have seen explicitly
above via two deformations of the elliptic surface with a type $III$
singular fiber.

\vspace{.3cm}

The natural W-bosons of the type $I_3$
theory are (in an appropriate $SL(2,\mathbb{Z})$ duality frame) fundamental
strings and the natural W-bosons of the type $IV$ theory include
string junctions (see e.g. \cite{Grassi:2014sda} for an explicit analysis). Via deformation they must be related to one
another, and given the result we have just seen it is natural to expect a brane rearrangement
and a Hanany-Witten move. This expectation is correct, though we will not
explicitly show it since the techniques are so similar to the previous case.

We would like to present the deformation for the interested reader, though. Take 
\begin{equation}
 f=-3 a^4+a b\, z+c\, z^2+f_3\, z^3\qquad \qquad
 g=2 a^6-a^3 b\, z+ \left(\frac{b^2}{12}-a^2 c\right)\, z^2+g_3 z^3
\end{equation}
where $a$, $b$, and $c$ are global sections $a\in \mathcal{O}(-K)$, $b\in
\mathcal{O}(-3K-Z)$ and $c \in \mathcal{O}(-4K-2Z)$, with $Z$ the class of $z=0$.
Here $f_3$ and $g_3$ contain constant terms in $z$ as well as higher
order terms. The discriminant is
\begin{align}
  \Delta &= \frac{1}{16} z^3 \left(1728 a^8 f_3+1728 a^6 g_3-288 a^5 b c-8 a^3 b^3\right)\nonumber \\ & +\frac{1}{16} z^4 \left(-1152 a^5
   b f_3-144 a^4 c^2-864 a^3 b g_3+120 a^2 b^2 c+3 b^4\right)+O(z^5)
\end{align}
and we see that there is a gauge theory with $I_3$ fiber on the seven-brane at $z=0$.
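The stated vanishing order of the discriminant can be checked mechanically by expanding $\Delta = 4f^3+27g^2$ as a polynomial in $z$. A minimal stdlib sketch in exact rational arithmetic, evaluated at an arbitrary generic sample point for $(a,b,c,f_3,g_3)$ (so the vanishing is verified at that point rather than as an identity):

```python
from fractions import Fraction as F

def pmul(p, q):
    # product of polynomials in z, represented as coefficient lists
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def delta(a, b, c, f3, g3):
    # Delta = 4 f^3 + 27 g^2 for the deformation above
    f = [-3 * a**4, a * b, c, f3]
    g = [2 * a**6, -a**3 * b, b * b / 12 - a * a * c, g3]
    f3c, g2c = pmul(pmul(f, f), f), pmul(g, g)
    n = max(len(f3c), len(g2c))
    f3c += [F(0)] * (n - len(f3c))
    g2c += [F(0)] * (n - len(g2c))
    return [4 * x + 27 * y for x, y in zip(f3c, g2c)]

# arbitrary generic sample values for the sections
a, b, c, f3, g3 = F(2), F(3), F(5), F(7), F(11)

D = delta(a, b, c, f3, g3)
assert D[:3] == [0, 0, 0] and D[3] != 0       # order-3 vanishing: I_3 fiber

D0 = delta(F(0), b, c, f3, g3)
assert D0[:4] == [0, 0, 0, 0] and D0[4] != 0  # a = 0: order 4, type IV fiber
```

The second assertion anticipates the $a=0$ limit discussed next, where the vanishing order of $\Delta$ jumps from three to four.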
In the $a=0$ limit we see that
\begin{align}
f = z^2 \left(c+f_3 z\right) \qquad \qquad g=\frac{1}{12} z^2 \left(b^2+12 g_3 z\right) \nonumber \\
\Delta = \frac{1}{16} z^4 \left(3 b^4+72 b^2 g_3 z+64 c^3 z^2+192 c^2 f_3 z^3+192 c
 f_3^2 z^4+64 f_3^3 z^5+432 g_3^2 z^2\right)
\end{align}
which has a gauge theory with type $IV$ Kodaira fiber on the seven-brane at $z=0$. In these cases we have not
imposed the absence of outer monodromy (i.e. we have not imposed a split fiber), so they are $SU(2)$ gauge theories. If the form is further restricted so
that the absence of outer monodromy is imposed, it is an $SU(3)$ gauge theory.

\newsec{Extra Branes at Seven-Brane Intersections}
\label{sec:extra branes}

Two seven-branes that intersect along a codimension two locus $C\subset B$ are typically the only seven-branes that
intersect along $C$. However, a counterexample giving rise to $SU(3)\times SU(2)$ gauge symmetry 
was studied in \rcite{GrassiHalversonShanesonTaylor2014}, where an additional brane with an $I_1$ singular fiber also intersected
the curve of $SU(3)\times SU(2)$ intersection. Though one might have expected a
$IV$-$III$ collision, a $IV$-$III$-$I_1$ collision occurs automatically. The Weierstrass model is
 $f= z\, t^2\, F$ and $g=z^2 t^2\, G$ with
\eqn{\Delta = z^3 t^4\,(4t^2 F^3 + 27 z G^2 )=: z^3t^4\tilde \Delta,}
from which it can be seen that the brane along $\tilde \Delta = 0$ intersects $\{z=t=0\}=:C$.
Three
stacks of branes intersect $C$, and we call $\tilde \Delta=0$ the ``extra brane'' at $C$.

There is a natural guess regarding the associated physics:
where there is an extra brane there should be extra string (or string junction) states.
In the mentioned example it was found that such a collision gives
rise not only to the expected bifundamentals of $SU(3)\times SU(2)$ but also to
fundamental hypermultiplets of $SU(3)$ and $SU(2)$ \cite{GrassiHalversonShanesonTaylor2014}, which could come in
chiral multiplets if flux is turned on. Such configurations may be non-Higgsable;
for examples of such geometries, see \cite{GrassiHalversonShanesonTaylor2014,Halverson:2015jua,Taylor:2015ppa}.

\vspace{.3cm}

Under which circumstances is there necessarily an extra brane? We take $F_1$ and $F_2$ to be the
Kodaira singular fibers of seven-branes that collide along $C$, with 
\begin{equation}
 F_{1,2}\in\{II,III,IV,I_0^*,IV^*,III^*,II^*\}.\end{equation}
These are the possible fibers of non-Higgsable seven-branes.
Let
$J_1$ and $J_2$ be the $J$-invariants associated with $F_1$ and $F_2$. By direct calculation we find:
\begin{itemize}
	\item If either $F_1$ or $F_2$ is $I_0^*$ there is no additional brane.
	\item If neither $F_1$ nor $F_2$ is $I_0^*$ then there is an additional brane if and only if $J_1\ne J_2$.
	\item If there is an additional brane then precisely one of the fibers is type $III^*$ or $III$.
	\item If there is an additional brane and the Weierstrass
   model does not have $(f,g)$ vanishing to order $\geq (4,6)$
   along $C$, then the fiber types are either $(IV,III)$ or $(III,II)$.
\end{itemize}
To see this, let the seven-branes with fibers $F_1$
and $F_2$ be localized on the divisors $z=0$ and $t=0$ respectively. Then write 
\begin{equation}
f = z^a t^b \, F \qquad g = z^c t^d\, G
\end{equation}
where $a,b,c,d$ can be determined from Table \ref{tab:fibs} given knowledge of $F_1$
and $F_2$.
The discriminant takes the form
\begin{equation}
\Delta = z^{\min(3a,2c)}\, t^{\min(3b,2d)} \,\, \tilde \Delta
\end{equation}
and there is an extra brane along $C$ whenever $\tilde\Delta|_{z=t=0}=0$.
In the cases where there are no $(4,6)$ curves, the above conclusions can all be seen directly from Table
\ref{table:minimal models with extra brane}, where ``$NF$'' denotes that the $J$ invariant
of $I_0^*$ is not fixed.

\begin{table}
\centering
\begin{tabular}{ccccccc}
  $F_1$ & $F_2$ & $J_1$ & $J_2$ & Minimal on $C$& $\Delta$ & Additional Brane? \\ \hline \hline
$IV^*$ & $II$ & $0$ & $0$ & Yes & $(4 \tilde f^3\, t\, z + 27 \tilde g^2)\, t^2\, z^8$ & No \\
$I_0^*$ & $IV$ & $NF$ & $0$ & Yes & $(4 \tilde f^3\, t^2 + 27 \tilde g^2)\, t^4\, z^6$ & No \\
$I_0^*$ & $III$ & $NF$ & $1$ & Yes & $(4 \tilde f^3 + 27\tilde g^2 \, t ) \, t^3\, z^6$ & No \\
$I_0^*$ & $II$ & $NF$ & $0$ & Yes & $(4 \tilde f^3\, t + 27 \tilde g^2)\, t^2\, z^6$ & No \\
$IV$ & $IV$ & $0$ & $0$ & Yes & $(4 \tilde f^3\, t^2\, z^2 + 27 \tilde g^2)\, t^4\, z^4$ & No \\
$IV$ & $III$ & $0$ & $1$ & Yes & $(4 \tilde f^3\, z^2 + 27 \tilde g^2\, t)\, t^3\, z^4$ & Yes \\
$IV$ & $II$ & $0$ & $0$ & Yes & $(4 \tilde f^3\, t\, z^2 + 27 \tilde g^2)\, t^2\, z^4$ & No \\
$III$ & $III$ & $1$ & $1$ & Yes & $(4\tilde f^3 + 27\tilde g^2 \, t\, z)\, t^3\, z^3$ & No \\
$III$ & $II$ & $1$ & $0$ & Yes & $(4 \tilde f^3\, t + 27 \tilde g^2\, z)\, t^2\, z^3$ & Yes \\
$II$ & $II$ & $0$ & $0$ & Yes & $(4 \tilde f^3\, t\, z + 27 \tilde g^2)\, t^2\, z^2$ & No \\ \hline \hline
\end{tabular}
\caption{The possible intersecting elliptic seven-branes that may arise, according to their singular fibers $F_1$ and $F_2$, and some of their
properties.}
\label{table:minimal models with extra brane}
\end{table}



\subsection{Associated Matter Representations}


There are two possible fiber intersections that
force the existence
of an extra brane, $IV$-$III$ and $III$-$II$. Due to the effects of outer
monodromy, the $IV$-$III$ collision allows for the possibility of
intersecting non-abelian seven-branes with $SU(3)\times SU(2)$ or
$SU(2) \times SU(2)$ gauge symmetry, while intersecting seven-branes
with a fiber collision $III$-$II$ necessarily have $SU(2)$ gauge
symmetry. The Lie algebra representations of matter at the $IV$-$III$-$I_1$
$SU(3)\times SU(2)$ collision were determined in \rcite{GrassiHalversonShanesonTaylor2014}. They are
hypermultiplets of 
\begin{equation}
(3,2), (3,1), (1,2)
\end{equation} 
in the absence of flux, but can become the chiral non-abelian $SU(3)\times SU(2)$
representations of the standard model if chirality-inducing G-flux is turned on.

 
Let us determine the Lie algebra representations in the
case of the other collision, which is $III$-$II$-$I_1$, via
anomaly cancellation in six dimensions. 
Consider a six-dimensional F-theory compactification with $B_2=\mathbb{P}^2$
and a seven-brane with $SU(2)$ gauge symmetry from a type $III$ fiber on
$Z=\{z=0\}$, a divisor in the hyperplane class. Take also a seven-brane with
no gauge symmetry and a type $II$ singular fiber on $T=\{t=0\}$, also
in the hyperplane class. Such a Weierstrass model takes the form
\eqn{f_{12} = z \, t\, f_{10} \qquad \qquad g_{18} = z^2 \, t \,g_{15}
  \qquad \qquad \Delta = z^3 t^2\, (4f_{10}^3\, t+27g_{15}^2\,
  z)\equiv z^3 \, \tilde \Delta} and we would like to study matter at
the intersection $z = \tilde \Delta = 0$. These intersections are of
two types: the single point $z=t=0$ and the $10$ points $z=f_{10}=0$
with $t\ne 0$. The latter are all points where seven-branes with a
type $III$ and $I_1$ fiber collide; there are $2$ fundamentals of
$SU(2)$ for each such point \cite{Grassi:2000we,Grassi:2011hq}. Thus, the $10$ points contribute $20$
fundamental hypermultiplets.
Since anomaly cancellation for an $SU(2)$
seven-brane on the hyperplane in $\mathbb{P}^2$ requires $22$ hypermultiplets
(see e.g. section 2.5 of \cite{Johnson:2014xpa}), anomaly cancellation
requires that the $III$-$II$-$I_1$ intersection also contribute two
fundamental hypermultiplets.


\newsec{Argyres-Douglas Theories on D3-branes and BPS Dyons as Junctions}
\label{sec:D3 probes}

D3-branes in the vicinity of seven-branes realize non-trivial $\mathcal{N}=2$
quantum field theories on their worldvolume \cite{Banks:1996nj}, which
are broken to $\mathcal{N}=1$ theories by the background. The position of the
D3-brane relative to a seven-brane determines a point on the Coulomb
branch of the D3-brane theory, and the seven-brane determines a singular
point on the Coulomb branch at which additional particle states become
massless. This point is reached when the D3-brane collides with the
seven-brane, and the additional light states are string junctions that
stretch from the seven-brane to the D3-brane. 
Previous works \cite{Mikhailov:1998bx,DeWolfe:1998bi}
determined the string junction representations of BPS particles
for well-known $\mathcal{N}=2$ theories, including $N_f=0,1,2,3,4$
Seiberg-Witten theory \cite{Seiberg:1994rs,Seiberg:1994aj}.

In this section we will do the same for D3-branes probing the type
$II$, $III$, and $IV$ singularities. Recall that the latter two are
associated with the non-perturbative $SU(2)$ and $SU(3)$ theories on
seven-branes that were discussed in section \ref{sec:su3su2}. Relative
to weakly coupled realizations of $SU(2)$ and $SU(3)$ from
two D7-branes and three D7-branes, which have an $I_2$ and $I_3$ fiber in
F-theory, the type $III$ and $IV$ theories have an extra seven-brane. 
By studying local axiodilaton profiles we have seen that in the vicinity of the brane
the string coupling is $O(1)$ and that these seven-branes source a nilpotent
$SL(2,\mathbb{Z})$ monodromy.
Therefore, the worldvolume theories of nearby D3-branes necessarily
differ from the $N_f=2,3$ Seiberg-Witten theories realized on D3-branes near $I_2$ and $I_3$ seven-branes.

The worldvolume theory on a D3-brane when it has collided with a type $III$ or type $IV$ seven-brane is the
$N_f=2$ or $3$ Seiberg-Witten theory at its Argyres-Douglas point, respectively. 
We will see that the characteristic \cite{Argyres:1995jj} Argyres-Douglas (AD) phenomenon is
realized by string junctions. This phenomenon is the
existence of points in the moduli space where electrons and dyons
charged under the $U(1)$ of the $\mathcal{N}=2$ theory simultaneously become massless. In
F-theory this occurs when the D3-brane collides with a type $III$ or
type $IV$ seven-brane. The AD phenomenon was originally realized in
$G=SU(3)$ theories, which have genus two Seiberg-Witten curves, and thus one
might expect it not to exist in an elliptically fibered setup such as F-theory.
For specially tuned $G=SU(2)$ Seiberg-Witten theories Argyres-Douglas
points do exist \cite{Argyres:1995xn}, though; the tuning brings in an extra singularity
that turns a type $I_2$ ($I_3$) fiber into a type $III$ ($IV$) fiber. Somewhat ironically,
F-theory models with non-Higgsable seven-branes of type $III$ and $IV$ do not require any
such tuning; the topology of the base $B$ forces the local structure of the Seiberg-Witten
curve that would appear tuned from an $\mathcal{N}=2$ point of view. 


We will use the BPS junction criterion\footnote{See
  \cite{Mikhailov:1998bx} for another study of BPS junctions.} of
\cite{DeWolfe:1998bi} to determine the BPS states on three-branes near
the type $II$, $III$, and $IV$ seven-branes.
This constraint is\n\\begin{equation}\n\\label{eq:BPS}\n(J,J) \\geq -2 + gcd(a(J))\n\\end{equation}\nwhere $J$ is the string junction and $a(J)$ is its asymptotic charge,\nand it is derived from the requirement that in the M-theory picture\n$J$ is a holomorphic curve with boundary, with the boundary a\nnon-trivial one-cycle $a(J)$ in the smooth elliptic curve above the\nD3-brane. This one-cycle $a(J)$ is the asymptotic charge, and it\ndetermines the electric and magnetic charge of the associated junction\nunder the $U(1)$ of the D3-brane theory. Junctions are relative\nhomology cycles \\cite{Grassi:2014ffa}, i.e. two-cycles that may have\nboundary in a smooth elliptic curve above a particular point $p$. This\npoint is given physical meaning if $p$ is the location of a D3-brane.\n\nThroughout this section we will need a basis choice in order to represent the one-cycle $a(J)$ as a vector in\n$\\mathbb{Z}^2$. We choose\n\\begin{equation}\n\\pi_1 = \\begin{pmatrix}1\\\\ 0 \\end{pmatrix}, \\qquad \\pi_3 = \\begin{pmatrix}0 \\\\ 1\\end{pmatrix},\n\\end{equation}\nwhich determines $\\pi_2$ via $\\pi_1+\\pi_2+\\pi_3=0\\in H_1(E_p,\\mathbb{Z})$, where $E_p$ is the elliptic fiber above\nthe D3-brane in the F-theory picture, or alternatively the torus that defines the electric and magnetic\ncharges.\n\n\\subsection{The $N_f=1$ Argyres-Douglas Point and the Type $II$ Kodaira Fiber}\n\nLet us study the F-theory realization of BPS states on a D3-brane near a slightly deformed type $II$ Kodaira fiber. 
From \\cite{Grassi:2014sda},\nthe vanishing cycles are $Z=\\{\\pi_3,\\pi_1\\}$ and so the $I$-matrix that determines topological intersections is\n\\begin{equation}\nI = (\\cdot,\\cdot) = \\begin{pmatrix} -1 & -\\frac12 \\\\ -\\frac12 & -1\\end{pmatrix}.\n\\end{equation}\nDefining $J=(Q_1,Q_2)$ we have\n\\begin{equation}\n(J,J) = -Q_1Q_2-Q_1^2-Q_2^2 \\qquad \\text{with}\\qquad a(J) = \\begin{pmatrix} Q_2 \\\\ Q_1 \\end{pmatrix}.\n\\end{equation}\nThe string junctions satisfying the BPS particle condition \\eqref{eq:BPS} are\n\\begin{center}\n\\begin{tabular}{c|c}\n$a(J)$ & Junctions \\\\ \\hline\n$(1,0)$ & $(0,1)$ \\\\\n$(1,-1)$ & $(-1,1)$ \\\\\n$(0,1)$ & $(1,0)$\n\\end{tabular}\n\\end{center}\nThese BPS particles arising from string junctions also have BPS\nanti-particles via the action $J\\mapsto -J$, which preserves $(J,J)$\nand $gcd(a(J))$ and therefore the associated junctions $-J$ satisfy\n\\eqref{eq:BPS}. In the M-theory description the geometric object $J$\ncorresponds to a two-cycle, and the associated BPS particle arises\nfrom wrapping an M2-brane on that cycle; the anti-particle associated\nto $-J$ arises from wrapping an anti M2-brane. Note that there is no\njunction $J$ with $a(J)=0$ and $(J,J)=-2$; as explained in\n\\cite{Grassi:2014sda}, this demonstrates via deformation that the type\nII singularity does not carry a gauge algebra.\n\nOne might be tempted to think that the seven-brane associated to a type II singularity has little impact on the \nlow energy physics of an F-theory compactification, since it does not carry any gauge algebra. However, this\nis not true, and we would like to emphasize:\n\\begin{itemize}\n\\item Locally near the type II seven-brane (or precisely in the 8d theory) the worldvolume theory on the D3-brane is\n $N_f=1$ Seiberg-Witten theory near its Argyres-Douglas point. 
At that point BPS electrons, monopoles, and dyons 
  become massless.
\item Since $J=0$ for the type $II$ Kodaira fiber, $g_s$ is $O(1)$ in the vicinity of the seven-brane,
  with the $\tau$ profile solved via Ramanujan's alternative bases in section \ref{sec:axiodilaton}.
\item Even though both can split into two mutually non-local seven-branes, the seven-brane associated to a 
type II Kodaira fiber is not an orientifold. First, because their $SL(2,\mathbb{Z})$ monodromies are different, and second because the orientifold famously \emph{must} split due
to instanton effects in F-theory, which is not true of the type II seven-brane.
\end{itemize}


\subsection{The $N_f=2$ Argyres-Douglas Point and the Type $III$ Kodaira Fiber}

In this section we study the F-theory realization of BPS states on D3-branes near a slightly deformed type $III$ Kodaira fiber.
In the coincident limit this theory is the Argyres-Douglas theory obtained by tuning $N_f=2$ $G=SU(2)$
Seiberg-Witten theory to its Argyres-Douglas point. 
Utilizing an explicit deformation of \cite{Grassi:2014sda}, the vanishing cycles are $Z=\{\pi_2,\pi_1,\pi_3\}$ and the 
$I$-matrix that determines topological intersections is 
\begin{equation}
I = (\cdot,\cdot) = \begin{pmatrix} -1 & \frac12 & \frac12 \\ \frac12 & -1 & -\frac12 \\ \frac12 & -\frac12 & -1 \end{pmatrix}.
\end{equation}
Defining a string junction by $J=(Q_1,Q_2,Q_3)$ we have 
\begin{equation}
(J,J) = Q_1Q_2+Q_1Q_3-Q_2Q_3 - \sum_i Q_i^2 \qquad \text{with}\qquad a(J) = \begin{pmatrix}-Q_1+Q_2 \\ -Q_1+Q_3 \end{pmatrix}.
\end{equation}
These
BPS particles arising from string junctions also have BPS anti-particles via the action $J\mapsto -J$, which
preserves $(J,J)$ and $gcd(a(J))$, leaving (\ref{eq:BPS}) invariant.

Using the constraint \eqref{eq:BPS} the possible BPS string junctions
can be computed directly.
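For instance, a brute-force scan over small charge vectors suffices, since the pairing is negative definite. A minimal sketch using the $(J,J)$ formula and asymptotic charge map just defined:

```python
from itertools import product
from math import gcd

def self_int(Q):
    # (J,J) for the ordered vanishing cycles Z = {pi2, pi1, pi3}
    Q1, Q2, Q3 = Q
    return Q1*Q2 + Q1*Q3 - Q2*Q3 - (Q1*Q1 + Q2*Q2 + Q3*Q3)

def charge(Q):
    # asymptotic charge a(J) in the basis pi1 = (1,0), pi3 = (0,1)
    Q1, Q2, Q3 = Q
    return (-Q1 + Q2, -Q1 + Q3)

def is_bps(Q):
    # BPS criterion (J,J) >= -2 + gcd(a(J)), excluding the empty junction
    p, q = charge(Q)
    return any(Q) and self_int(Q) >= -2 + gcd(abs(p), abs(q))

# negative definiteness makes a small scan exhaustive
bps = [Q for Q in product(range(-2, 3), repeat=3) if is_bps(Q)]

assert len(bps) == 14  # J_+ and J_-, plus six (J,J) = -1 junctions and negatives
assert (1, 1, 1) in bps and (-1, 0, -1) in bps and (0, 1, -1) in bps
```

The scan returns the two root junctions together with the charged states and anti-states discussed next.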
The states of self-intersection $-2$ have\n$a(J)=0$ and were identified in \\cite{Grassi:2014sda}. They are $J_+ = (1,1,1)$ and $J_-=(-1,-1,-1)$,\nwhere the $\\pm$ denote a choice of positive and negative root for the associated\n$SU(2)$ algebra that is the gauge algebra on the seven-brane and the flavor\nalgebra of the three-brane theory. The rest of the junctions satisfying \\eqref{eq:BPS} have\n$(J,J)=-1$ and are given by\n\\begin{center}\n\\begin{tabular}{c|c}\n$a(J)$ & Junctions \\\\ \\hline\n$(0,1)$ & $(0,0,1)$,$(-1,-1,0)$ \\\\\n$(1,0)$ & $(0,1,0)$,$(-1,0,-1)$ \\\\\n$(1,1)$ & $(-1,0,0)$ \\\\\n$(1,-1)$ & $(0,1,-1)$\n\\end{tabular}\n\\end{center}\nwhere the sets of junctions in the first two lines are doublets of\n$SU(2)$ since they differ by $J_\\pm$. This spectrum matches the known\nBPS states in the maximal chamber of the deformed theory with masses\nturned on; see e.g. \\cite{Maruyoshi:2013fwa}.\n\nWe have seen the relationship between the ordered sets of\nvanishing cycles $Z=\\{\\pi_2,\\pi_1,\\pi_3\\}$ of \\cite{Grassi:2014sda}\nand\\footnote{The set of vanishing cycles $\\{A,A,C\\}$ should be compared to the set $\\{\\pi_1,\\pi_1,\\pi_3\\}$ of section \\ref{sec:su3su2}.} $Z=\\{A,A,C\\}$ of \\cite{DeWolfeZwiebach,DeWolfe:1998bi} that can be associated to the\ndeformed type III Kodaira fiber by explicit\ndeformation in section \\ref{sec:su3su2}. 
Let us study the BPS states with the
latter set for the sake of completeness, taking
\begin{equation}
A=\begin{pmatrix}1 \\ 0\end{pmatrix}, \qquad C = \begin{pmatrix}1 \\ 1\end{pmatrix},
\end{equation}
as in \cite{DeWolfe:1998zf}.
 The $I$-matrix is
\begin{equation}
I = (\cdot,\cdot) = \begin{pmatrix} -1 & 0 & \frac12 \\ 0 & -1 & \frac12 \\ \frac12 & \frac12 & -1 \end{pmatrix}
\end{equation}
and taking $J=(Q_1,Q_2,Q_3)$ we compute
\begin{equation}
(J,J) = Q_3(Q_1+Q_2) - \sum_i Q_i^2 \qquad \text{with}\qquad a(J) = \begin{pmatrix}Q_1+Q_2+Q_3\\ Q_3 \end{pmatrix}.
\end{equation}
There are junctions $J_\pm$ satisfying the BPS constraint \eqref{eq:BPS} with $a(J)=0$ and $(J,J)=-2$. They are
$J_+=(1,-1,0)$ and $J_-=(-1,1,0)$, and they are the $W_\pm$ bosons of the broken $SU(2)$ theory. The rest of the
junctions satisfying the BPS condition have $(J,J)=-1$ and satisfy
\begin{center}
\begin{tabular}{c|c}
$a(J)$ & Junctions \\ \hline
$(3,1)$ & $(1,1,1)$ \\
$(2,1)$ & $(1,0,1)$,$(0,1,1)$ \\
$(1,1)$ & $(0,0,1)$ \\
$(1,0)$ & $(1,0,0)$,$(0,1,0)$
\end{tabular}
\end{center}
We again see two $SU(2)$ doublets and two $SU(2)$ singlets before taking into account the $-J$ junctions.
These junctions and their electromagnetic charges 
must be related to those of $Z=\{\pi_2,\pi_1,\pi_3\}$ by a Hanany-Witten move, as we explicitly showed for the simple roots
$J_\pm$ in section \ref{sec:su3su2}.


\subsection{The $N_f=3$ Argyres-Douglas Point and the Type $IV$ Kodaira Fiber}

In this section we study the F-theory realization of BPS states on D3-branes near a slightly deformed type
IV Kodaira fiber.
In the coincident limit the worldvolume theory on the D3-brane is the
Argyres-Douglas theory obtained by tuning $N_f=3$ $G=SU(2)$ Seiberg-Witten theory to
its Argyres-Douglas point.

The ordered set of vanishing cycles derived in \cite{Grassi:2014sda} is $Z_{IV}=\{\pi_1,\pi_3,\pi_1,\pi_3\}$
and
the associated $I$-matrix that determines the topological intersections of junctions is
\begin{equation}
I = (\cdot,\cdot) = \begin{pmatrix} -1 & \frac12 & 0 & \frac12 \\ \frac12 & -1 & -\frac12 & 0 \\ 0 & -\frac12 & -1 & \frac12 
\\ \frac12 & 0 & \frac12 & -1 \end{pmatrix}.
\end{equation}
Writing the junction as a vector $J=(Q_1,Q_2,Q_3,Q_4)$ we have 
\begin{equation}
(J,J) = Q_1Q_2 + Q_1Q_4-Q_2Q_3+Q_3Q_4 - \sum_i Q_i^2 \qquad \text{with}\qquad a(J) = \begin{pmatrix}Q_1+Q_3 \\ Q_2+Q_4 \end{pmatrix}.
\end{equation}
These
BPS particles arising from string junctions also have BPS anti-particles via the action $J\mapsto -J$, which
preserves $(J,J)$ and $gcd(a(J))$ and therefore the associated junctions $-J$ satisfy \eqref{eq:BPS}. The latter arise from
wrapped anti M2-branes in the M-theory picture.

Let us derive the possible BPS string junctions using the constraint \eqref{eq:BPS}.
The junctions of self-intersection $-2$ have
$a(J)=0$ and were identified in \cite{Grassi:2014sda}. They are the roots of an $SU(3)$ algebra,
and we take $J_1=(1,0,-1,0)$
and $J_2=(0,1,0,-1)$ as the simple roots; then the full set of root junctions is simply $J_1$, $J_2$,
$J_1+J_2$ and their negatives.
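As a quick cross-check, the $SU(3)$ root system can be recovered by scanning small charge vectors against this pairing; a minimal sketch:

```python
from itertools import product

def self_int(Q):
    # (J,J) for Z_IV = {pi1, pi3, pi1, pi3}, matching the formula above
    Q1, Q2, Q3, Q4 = Q
    return Q1*Q2 + Q1*Q4 - Q2*Q3 + Q3*Q4 - sum(x * x for x in Q)

def charge(Q):
    # asymptotic charge a(J) = (Q1 + Q3, Q2 + Q4)
    Q1, Q2, Q3, Q4 = Q
    return (Q1 + Q3, Q2 + Q4)

# junctions with zero asymptotic charge and self-intersection -2
roots = [Q for Q in product(range(-2, 3), repeat=4)
         if charge(Q) == (0, 0) and self_int(Q) == -2]

assert len(roots) == 6  # the six roots of SU(3)
assert (1, 0, -1, 0) in roots and (0, 1, 0, -1) in roots
```

The scan finds exactly $\pm J_1$, $\pm J_2$, and $\pm(J_1+J_2)$, as stated.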
Solving the condition for BPS junctions \\eqref{eq:BPS} we find that the\npossible BPS junctions are\n\\begin{center}\n\\begin{tabular}{c|c}\n$a(J)$ & Junctions \\\\ \\hline\n$(-1,-1)$ & $(0,0,-1,-1)$ $(-1,0,0,-1)$ $(-1,-1,0,0)$ \\\\\n$(1,0)$ & $(1,0,0,0)$ $(0,0,1,0)$ $(0,-1,1,1)$ \\\\\n$(0,1)$ & $(1,1,-1,0)$ $(0,1,0,0)$ $(0,0,0,1)$ \\\\\n$(2,1)$ & $(1,0,1,1)$ \\\\\n$(-1,-2)$ & $(-1,-1,0,-1)$ \\\\\n$(-1,1)$ & $(0,1,-1,0)$ \n\\end{tabular}\n\\end{center}\nand we have chosen the ordering of the junctions in the first three\nrows to show that they are fundamentals of $SU(3)$. Namely, in each\nof the first three rows the second junction subtracted from the\nfirst is $J_1$ and the third junction subtracted from the second is $J_2$. The\nnegatives of these are anti-fundamental, completing the non-trivial\nflavor hypermultiplets.\n\n\n\\acknowledgments \\vspace{.25cm} I would like to thank Philip Argyres,\nShaun Cooper, Andreas Malmendier, Brent Nelson, Daniel Schultz,\nWashington Taylor, Yi-Nan Wang and Wenbin Yan for helpful discussions\nand correspondence. I am particularly grateful to Antonella Grassi and\nJulius Shaneson for discussions and comments on a draft, and to\nJ.L. Halverson for support and encouragement. This work is generously\nsupported by startup funding from Northeastern University and the\nNational Science Foundation under Grant No. PHY11-25915.\n\n\n\\begin{table}\n\\centering\n\\begin{tabular}{ccccccc}\n $F_1$ & $F_2$ & $J_1$ & $J_2$ & Minimal on $C$& $\\Delta$ & Additional Brane? 
\\\\ \\hline \\hline \n$II^*$ & $II^*$ & $0$ & $0$ & No & $(4 \\tilde f^3\\, t^2\\, z^2 + 27 \\tilde g^2)\\, t^{10}\\, z^{10}$ & No \\\\\n$II^*$ & $III^*$ & $0$ & $1$ & No & $(4 \\tilde f^3\\, z^2 + 27 \\tilde g^2\\, t)\\, t^9\\, z^{10}$ & Yes \\\\\n$II^*$ & $IV^*$ & $0$ & $0$ & No & $(4 \\tilde f^3\\, t\\, z^2 + 27 \\tilde g^2)\\, t^8\\, z^{10}$ & No \\\\\n$II^*$ & $I_0^*$ & $0$ & $NF$ & No & $(4 \\tilde f^3\\, z^2 + 27 \\tilde g^2)\\, t^6\\, z^{10}$ & No\\\\\n$II^*$ & $IV$ & $0$ & $0$ & No & $(4 \\tilde f^3\\, t^2\\, z^2 + 27 \\tilde g^2)\\, t^4\\, z^{10}$ & No \\\\\n$II^*$ & $III$ & $0$ & $1$ & No & $(4 \\tilde f^3\\, z^2 + 27 \\tilde g^2\\, t)\\, t^3\\, z^{10}$ & Yes \\\\\n$II^*$ & $II$ & $0$ & $0$ & No & $(4 \\tilde f^3\\, t\\, z^2 + 27 \\tilde g^2)\\, t^2\\, z^{10}$ & No \\\\\n$III^*$ & $III^*$ & $1$ & $1$ & No & $(4 \\tilde f^3 + 27\\tilde g^2 \\, t)\\, t^9\\, z^9$ & No \\\\\n$III^*$ & $IV^*$ & $1$ & $0$ & No & $(4 \\tilde f^3\\, t + 27 \\tilde g^2\\, z)\\, t^8\\, z^9$ & Yes \\\\\n$III^*$ & $I_0^*$ & $1$ & $NF$ & No & $(4 \\tilde f^3 + 27\\tilde g^2 \\, z)\\, t^6 \\,z^9$ & No \\\\\n$III^*$ & $IV$ & $1$ & $0$ & No & $(4 \\tilde f^3\\, t^2 + 27 \\tilde g^2\\, z)\\, t^4\\, z^9$ & Yes \\\\\n$III^*$ & $III$ & $1$ & $1$ & No & $(4\\tilde f^3 + 27\\tilde g^2 \\, t\\, z)\\, t^3\\, z^9$ & No \\\\\n$III^*$ & $II$ & $1$ & $0$ & No & $(4 \\tilde f^3\\, t + 27 \\tilde g^2\\, z)\\, t^2\\, z^9$ & Yes \\\\\n$IV^*$ & $IV^*$ & $0$ & $0$ & No & $(4 \\tilde f^3\\, t\\, z + 27 \\tilde g^2)\\, t^8\\, z^8$ & No \\\\\n$IV^*$ & $I_0^*$ & $0$ & $NF$ & No & $(4 \\tilde f^3\\, z + 27 \\tilde g^2)\\, t^8 z^8$ & No \\\\\n$IV^*$ & $IV$ & $0$ & $0$ & No & $(4 \\tilde f^3\\, t^2\\, z + 27 \\tilde g^2)\\, t^4\\, z^8$ & No \\\\\n$IV^*$ & $III$ & $0$ & $1$ & No & $(4\\tilde f^3\\, z + 27\\tilde g^2 \\, t) \\, t^3\\, z^8$ & Yes \\\\\n$IV^*$ & $II$ & $0$ & $0$ & Yes & $(4 \\tilde f^3\\, t\\, z + 27 \\tilde g^2)\\, t^2\\, z^8$ & No \\\\\n$I_0^*$ & $I_0^*$ & $NF$ & $NF$ & No & $(4 \\tilde f^3 + 27 
\\tilde g^2) \\, t^6\\, z^6$ & No \\\\\n$I_0^*$ & $IV$ & $NF$ & $0$ & Yes & $(4 \\tilde f^3\\, t^2 + 27 \\tilde g^2)\\, t^4\\, z^6$ & No \\\\\n$I_0^*$ & $III$ & $NF$ & $1$ & Yes & $(4 \\tilde f^3 + 27\\tilde g^2 \\, t ) \\, t^3\\, z^6$ & No \\\\\n$I_0^*$ & $II$ & $NF$ & $0$ & Yes & $(4 \\tilde f^3\\, t + 27 \\tilde g^2)\\, t^2\\, z^6$ & No \\\\\n$IV$ & $IV$ & $0$ & $0$ & Yes & $(4 \\tilde f^3\\, t^2\\, z^2 + 27 \\tilde g^2)\\, t^4\\, z^4$ & No \\\\\n$IV$ & $III$ & $0$ & $1$ & Yes & $(4 \\tilde f^3\\, z^2 + 27 \\tilde g^2\\, t)\\, t^3\\, z^4$ & Yes \\\\\n$IV$ & $II$ & $0$ & $0$ & Yes & $(4 \\tilde f^3\\, t\\, z^2 + 27 \\tilde g^2)\\, t^2\\, z^4$ & No \\\\\n$III$ & $III$ & $1$ & $1$ & Yes & $(4\\tilde f^3 + 27\\tilde g^2 \\, t\\, z)\\, t^3\\, z^3$ & No \\\\\n$III$ & $II$ & $1$ & $0$ & Yes & $(4 \\tilde f^3\\, t + 27 \\tilde g^2\\, z)\\, t^2\\, z^3$ & Yes \\\\\n$II$ & $II$ & $0$ & $0$ & Yes & $(4 \\tilde f^3\\, t\\, z + 27 \\tilde g^2)\\, t^2\\, z^2$ & No \\\\ \\hline\\hline\n\\end{tabular}\n\\caption{Properties of all intersections of Kodaira fibers with finite order monodromy, including\nthose that give rise to $(4,6)$ curves. There is an additional brane if and only if $J_1+J_2=1$, and therefore\nit may play an important role in local axiodilaton profiles.}\n\\end{table}\n\n\n\\section{Introduction}\n\nThe interpretation of chest radiographs is an essential task in the practice of a radiologist, enabling the early detection of thoracic diseases~\\cite{Rajpurkar2018,Wang2017}. To accelerate and improve the assessment of the continuously increasing number of radiographs, several deep learning solutions have been recently proposed for the automatic classification of radiographic findings~\\cite{Wang2017,Guendel2018,Yao2018}. 
Due to large variations in image quality or subjective definitions of disease appearance, there is large inter-rater variability, which leads to a high degree of label noise~\\cite{Rajpurkar2018}. Modeling this variability when designing an automatic system for assessing this type of data is essential; this aspect was not considered in previous work.\n\nUsing principles of information theory and subjective logic~\\cite{Josang2016} based on the Dempster-Shafer framework for modeling of evidence~\\cite{Dempster1968}, we present a method for training a system that generates both an image-level label and a classification uncertainty measure. We evaluate this system for classification of abnormalities on chest radiographs. The main contributions of this paper include:\n\\begin{enumerate}\n \\item describing a system for jointly learning classification probabilities and classification uncertainty in a parametric model;\n \\item proposing uncertainty-driven bootstrapping as a means to filter training samples with highest predictive uncertainty to improve robustness and accuracy;\n \\item comparing methods for generating stochastic classifications to model classification uncertainty; \n \\item presenting an application of this system to identify cases with uncertain classification, yielding more accurate classification on the remaining cases;\n \\item showing that the uncertainty measure can distinguish radiographs with correct and incorrect labels according to a multi-radiologist-consensus study.\n \n\\end{enumerate}\n\n\\section{Background and Motivation}\n\\subsection{Machine Learning for the Assessment of Chest Radiographs}\nThe open access to the ChestX-Ray8 dataset~\\cite{Wang2017} of chest radiographs has led to a series of recent publications that propose machine learning based systems for disease classification. 
With this dataset, Wang et al.~\\cite{Wang2017} also report a first performance baseline of a deep neural network at an average area under receiver operating characteristic curve (ROC-AUC) of 0.75. These results have been further improved by using multi-scale image analysis~\\cite{Yao2018}, or by actively focusing the attention of the network on the most relevant sub-regions of the lungs~\\cite{Guan2018}. State-of-the-art results on the official split of the ChestX-Ray8 dataset are reported in~\\cite{Guendel2018} (avg. ROC-AUC of 0.81), using a location-aware dense neural network. In light of these contributions, a recent study compares the performance of such an AI system and 9 practicing radiologists~\\cite{Rajpurkar2018}. While the study indicates that the system can surpass human performance, it also highlights the high variability among different expert radiologists for the reading of chest radiographs. The reported average specificity of the readers is very high (over 95\\%), with an average sensitivity of 50\\%$\\,\\pm$\\,8\\%. With such a large inter-rater variability, one may ask: How can real 'ground truth' data be obtained? Does the label noise affect the training? Current solutions do not consider this variability, which leads to models with overconfident predictions and limited generalization.\\medskip\\\\\n\\textbf{Principles of Uncertainty Estimation:}\\, One way to handle this challenge is to explicitly estimate the classification uncertainty from the data. Recent methods for uncertainty estimation in the context of deep learning rely on Bayesian estimation theory~\\cite{Molchanov2017} or ensembles~\\cite{Laks2017} and demonstrate increased robustness to out-of-distribution data. However, these approaches come with significant computational limitations; associated with the high complexity of sampling parameter spaces of deep models for Bayesian risk estimation; or associated with the challenge of managing ensembles of deep models. 
Sensoy et al.~\\cite{Sensoy2018} propose an efficient alternative based on the theory of subjective logic~\\cite{Josang2016}, training a deep neural network to estimate the sample uncertainty based on observed data.\n\n\\section{Proposed Method}\n\nFollowing the work of Sensoy et al.~\\cite{Sensoy2018} based on the Dempster-Shafer theory of evidence~\\cite{Dempster1968}, we apply principles of subjective logic~\\cite{Josang2016} to derive a binary classification model that can support the joint estimation of per-class probabilities ($\\hat{p}_+; \\hat{p}_-$) and predictive uncertainty $\\hat{u}$. In this context, a decisional framework is defined through the assignment of so called belief masses from evidence collected from observed data to individual attributes, e.g., membership to a class~\\cite{Dempster1968,Josang2016}. Let us denote $b^+$ and $b^-$ the belief values for the positive and negative class, respectively. The uncertainty mass $u$ is defined as: $u = 1 - b^+ - b^-$, where $b^+ = e^+\/E$ and $b^- = e^-\/E$ with $e^+; e^- \\ge 0$ denoting the per-class collected evidence and total evidence $E = e^+ + e^- + 2$. For binary classification, we propose to model the distribution of such evidence values using the beta distribution, defined by two parameters $\\alpha$ and $\\beta$ as: $f(x;\\alpha,\\beta) = \\frac{\\Gamma(\\alpha+\\beta)}{\\Gamma(\\alpha)\\Gamma(\\beta)}x^{\\alpha - 1}(1 - x)^{\\beta - 1}$, where $\\Gamma$ denotes the gamma function and $\\alpha, \\beta > 1$ with $\\alpha = e^+ + 1$ and $\\beta = e^- + 1$. In this context, the per-class probabilities can be derived as $p^+ = \\alpha\/E$ and $p^- = \\beta\/E$. Figure~\\ref{fig:beta} visualizes the beta distribution for different $\\alpha, \\beta$ values.\n\nA training dataset is provided: $\\mathcal{D} = \\{\\vec{I}_k,y_k\\}_{k=1}^{N}$, composed of $N$ pairs of images $\\vec{I}_k$ with class assignment $y_k\\in\\{0,1\\}$. 
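Before turning to the training loss, the evidence-to-opinion bookkeeping above is easy to make concrete. The following small sketch (the function name is ours, not from the paper) maps non-negative per-class evidence $(e^+, e^-)$ to the predicted probabilities and the uncertainty mass:

```python
def subjective_opinion(e_pos, e_neg):
    """Map non-negative per-class evidence to (p+, p-, u) following
    b+ = e+/E, b- = e-/E, u = 1 - b+ - b-, with E = e+ + e- + 2,
    alpha = e+ + 1, beta = e- + 1, p+ = alpha/E, p- = beta/E."""
    E = e_pos + e_neg + 2.0
    alpha, beta = e_pos + 1.0, e_neg + 1.0
    p_pos, p_neg = alpha / E, beta / E
    u = 2.0 / E  # equals 1 - b+ - b-
    return p_pos, p_neg, u

# No evidence at all: total uncertainty, uninformative probabilities.
print(subjective_opinion(0.0, 0.0))   # (0.5, 0.5, 1.0)
# Strong positive evidence: confident prediction, low uncertainty.
print(subjective_opinion(18.0, 0.0))  # (0.95, 0.05, 0.1)
```

With no evidence the opinion is maximally uncertain ($u=1$), and $u$ decays as $2/E$ as evidence accumulates.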
To estimate the per-class evidence values from the observed data, a deep neural network parametrized by $\\vec{\\theta}$ can be applied, with: $[e^+_k, e^-_k] = \\mathcal{R}(\\vec{I}_k;\\vec{\\theta})$, where $\\mathcal{R}$ denotes the network response function. Using maximum likelihood estimation, one can learn the network parameters $\\hat{\\vec{\\theta}}$ by optimizing the Bayes risk with a beta distributed prior:\n\\begin{equation}\n \\mathcal{L}^{data}_k = \\int \\|y_k - p_k\\|^2 \\frac{\\Gamma(\\alpha+\\beta)}{\\Gamma(\\alpha)\\Gamma(\\beta)}p_k^{\\alpha - 1}(1 - p_k)^{\\beta - 1} dp_k,\n\\label{eq:lossmse}\n\\end{equation}\nwhere $k\\in\\{1,\\ldots,N\\}$ denotes the index of the training example from dataset~$\\mathcal{D}$, $p_k$ the predicted probability on the training sample $k$, and $\\mathcal{L}^{data}_k$ defines the goodness of fit. Using linearity properties of the expectation, Eq.~\\ref{eq:lossmse} becomes:\n\\begin{equation}\n \\mathcal{L}^{data}_k = (y_k - \\hat{p}_k^{\\,+})^2 + (1 - y_k - \\hat{p}_k^{\\,-})^2 + \\frac{\\hat{p}_k^{\\,+}(1 - \\hat{p}_k^{\\,+}) + \\hat{p}_k^{\\,-}(1 - \\hat{p}_k^{\\,-})}{E_k + 1},\n\\end{equation}\nwhere $\\hat{p}_k^{\\,+}$ and $\\hat{p}_k^{\\,-}$ denote the network's probabilistic prediction. 
The first two terms measure the goodness of fit, and the last term encodes the variance of the prediction~\\cite{Sensoy2018}.\n\n\\begin{figure*}[t]\n\\centering\n\\subfloat[Confident negative]{\n\\includegraphics[height=2.8cm]{b1.pdf}\n}\n\\subfloat[Confident positive]{\n\\includegraphics[height=2.8cm]{b2.pdf}\n}\n\\subfloat[High uncertainty]{\n\\includegraphics[height=2.8cm]{b3.pdf}\n}\n\\qquad\n\\caption{Probability density function of the beta distribution: example parameters ($\\alpha, \\beta$) modeling confident and uncertain predictions.\\label{fig:beta}}\n\\end{figure*}\n\nTo ensure a high uncertainty value for data samples for which the gathered evidence is not conclusive for an accurate classification, an additional regularization term $\\mathcal{L}^{reg}_k$ is added to the loss. Using information theory, this term is defined as the relative entropy, i.e., the Kullback-Leibler divergence, between the beta distributed prior term and the beta distribution with total uncertainty ($\\alpha, \\beta = 1$). In this way, cost deviations from the total uncertainty state, i.e., $u = 1$, which do not contribute to the data fit are accounted for~\\cite{Sensoy2018}. With the additional term, the total cost becomes $\\mathcal{L} = \\sum_{k=1}^{N}\\mathcal{L}_k$ with:\n\\begin{equation}\n \\mathcal{L}_k = \\mathcal{L}^{data}_k + \\lambda\\,\\text{KL}\\left (f(\\hat{p}_k;\\tilde{\\alpha}_k,\\tilde{\\beta}_k)\\|f(\\hat{p}_k;\\langle 1,1\\rangle)\\right),\n\\end{equation}\nwhere $\\lambda\\in[0,1]$, $\\hat{p}_k = \\hat{p}_k^{\\,+}$, with $(\\tilde{\\alpha}_k, \\tilde{\\beta}_k)=(1, \\beta_k)$ for $y_k = 0$ and $(\\tilde{\\alpha}_k, \\tilde{\\beta}_k)=(\\alpha_k, 1)$ for $y_k = 1$. 
Removing additive constants and using properties of the logarithm function, one can simplify the regularization term to the following:\n\\begin{equation}\n \\mathcal{L}^{reg}_k = \\log\\frac{\\Gamma(\\tilde{\\alpha}_k+\\tilde{\\beta}_k)}{\\Gamma(\\tilde{\\alpha}_k)\\Gamma(\\tilde{\\beta_k})} + \\sum_{x\\in\\{\\tilde{\\alpha}_k, \\tilde{\\beta}_k\\}} (x - 1)\\left(\\psi(x) - \\psi(\\tilde{\\alpha}_k + \\tilde{\\beta}_k)\\right),\n\\end{equation}\nwhere $\\psi$ denotes the digamma function and $k\\in\\{1,\\ldots,N\\}$. Using stochastic gradient descent, the total loss $\\mathcal{L}$ is optimized on the training set~$\\mathcal{D}$.\\smallskip\n\n\\textbf{Sampling the Data Distribution:}\\, An important requirement to ensure training stability and to learn a robust estimation of evidence values is an adequate sampling of the data distribution. We empirically found dropout~\\cite{Srivastava2014} to be a simple and very effective strategy to address this problem. In practice, dropout emulates an ensemble model combination driven by the random deactivation of neurons. Alternatively, one may use an explicit ensemble of $M$ models $\\{\\vec{\\theta}_k\\}_{k=1}^{M}$, each trained independently. Following the principles of deep ensembles~\\cite{Laks2017}, the per-class evidence can be computed from the ensemble estimates $\\{e^{(k)}\\}_{k=1}^{M}$ via averaging. In our work, we found dropout to be as effective as deep ensembles.\\smallskip\n\n\\textbf{Uncertainty-driven Bootstrapping:}\\, Given the predictive uncertainty measure $\\hat{u}$, we propose a simple and effective algorithm for filtering the training set with the target of reducing label noise. A fraction of training samples with highest uncertainty are eliminated and the model is retrained on the remaining data. Instead of sample elimination, robust M-estimators may be applied, using a per-sample weight that is inversely proportional to the predicted uncertainty. 
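The elimination variant just described reduces to a few lines of code. The sketch below (array names ours, on toy data) keeps the samples with the lowest predictive uncertainty; retraining on the returned subset implements the bootstrapping step:

```python
import numpy as np

def bootstrap_filter(images, labels, uncertainties, drop_fraction=0.1):
    """Keep the (1 - drop_fraction) of training samples with the lowest
    predictive uncertainty; the model is then retrained on this subset."""
    n = len(labels)
    n_keep = n - int(drop_fraction * n)
    keep = np.argsort(uncertainties)[:n_keep]  # lowest uncertainty first
    return images[keep], labels[keep]

# Toy example: four samples, drop the 25% most uncertain (one sample).
X = np.arange(4).reshape(4, 1)
y = np.array([0, 1, 1, 0])
u = np.array([0.1, 0.9, 0.2, 0.4])
Xf, yf = bootstrap_filter(X, y, u, drop_fraction=0.25)
print(Xf.flatten())  # the sample with u = 0.9 is gone
```

The weighted (M-estimator) variant would instead keep all samples and scale each per-sample loss by a weight decreasing in the predicted uncertainty.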
The hypothesis is that by focusing the training on 'confident' labels, we increase the robustness of the classifier and improve its performance on unseen data.\n\n\\section{Experiments}\n\n\\textbf{Dataset and Setup:}\\, The evaluation is based on two datasets, the ChestX-Ray8~\\cite{Wang2017} and PLCO~\\cite{PLCO}. Both datasets provide a series of AP\/PA chest radiographs with binary labels on the presence of different radiological findings, e.g., granuloma, pleural effusion, or consolidation. The ChestX-Ray8 dataset contains 112,120 images from 30,805 patients, covering 14 findings extracted from radiological reports using natural language processing (NLP)~\\cite{Wang2017}. In contrast, the PLCO dataset was constructed as part of a screening trial, containing 185,421 images from 56,071 patients and covering 12 different abnormalities.\n\nFor performance comparison, we selected location-aware dense networks~\\cite{Guendel2018} as baseline. This method achieves state-of-the-art results on this problem, with a reported average ROC-AUC of \\textbf{0.81} (significantly higher than that of competing methods: 0.75~\\cite{Wang2017} and 0.77~\\cite{Yao2018}) on the official split of the ChestX-Ray8 dataset and a ROC-AUC of \\textbf{0.88} on the official split of the PLCO dataset. To evaluate our method, we identified testing subsets with higher confidence labels from multi-radiologist studies. For PLCO, we randomly selected 565 test images and had 2 board-certified expert radiologists read the images -- updating the labels to the majority vote of the 3 opinions (incl. the original label). For ChestX-Ray8, a subset of 689 test images was randomly selected and read by 4 board-certified radiologists. The final label was decided following a consensus discussion. For both datasets, the remaining data was split in 90\\% training and 10\\% validation. 
All images were down-sampled to $256\\times 256$ using bilinear interpolation.\\smallskip\n\n\\textbf{System Training:}\\, We constructed our learning model from the DenseNet-121 architecture~\\cite{Huang2017}. A dropout layer with a dropout rate of 0.5 was inserted after the last convolutional layer. We also investigated the benefits of using deep ensembles to improve the sampling ($M=5$ models trained on random subsets of 80\\% of the training data; we refer to this with the keyword \\textbf{[ens]}). A fully connected layer with ReLU activation units maps to the two outputs $\\alpha$ and $\\beta$. We used a systematic grid search to find the optimal configuration of training meta-parameters: learning rate ($10^{-4}$), regularization factor ($\\lambda=1$; decayed to $0.1$ and $0.001$ after 1\/3, respectively 2\/3 of the epochs), training epochs (around 12, using an early stop strategy with a patience of 3 epochs) and a batch size of 128. The low number of epochs is explained by the large size of the dataset.\\smallskip\n\n\\textbf{Uncertainty-driven Sample Rejection:}\\, Given a model trained for the assessment of an arbitrary finding, one can directly estimate the prediction uncertainty $\\hat{u} = 2\/(\\alpha + \\beta) \\in [0, 1]$. This is an orthogonal measure to the predicted probability, with increased values on out-of-distribution cases under the given model. One can use this measure for sample rejection, i.e., set a threshold $u_t$ and steer the system to not output its prediction on all cases with an expected uncertainty larger than $u_t$. Instead, these cases are labeled with the message \\textit{\"Do not know for sure; process case manually\"}. In practice this leads to a significant increase in accuracy compared to the state-of-the-art on the remaining cases, as reported in Table~\\ref{tab:results} and Figure~\\ref{fig:samplereject}. 
For example, for the identification of granuloma, a rejection rate of 25\\% leads to an increase of over 20\\% in the micro-average F1 score. On the same abnormality, a 50\\% rejection rate leads to an F1 score over 0.99 for the prediction of negative cases. We found no significant difference in average performance when using ensembles (see Figure~\\ref{fig:samplereject}).\\smallskip\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[height=3.75cm]{granuloma.pdf}\n\\includegraphics[height=3.75cm]{fibrosis.pdf}\n\\caption{Evolution of the F1-scores for the positive (+) and negative (--) classes relative to the sample rejection threshold, determined using the estimated uncertainty. We show the performance for granuloma and fibrosis based on the PLCO dataset~\\cite{PLCO}. The baseline (horizontal dashed lines) is determined using the method from~\\cite{Guendel2018} (working point at max. average of per-class F1 scores). The decision threshold for our method is fixed at 0.5. \\label{fig:samplereject}}\n\\end{figure}\n\\begin{table}[t]\n\\centering\n\\caption{Comparison between the reference method~\\cite{Guendel2018} and several versions of our method calibrated at sample rejection rates of 0\\%, 10\\%, 25\\% and 50\\% (based on the PLCO dataset~\\cite{PLCO}). 
Lesion refers to lesions of the bones or soft tissue.\\label{tab:results}}\n\\begin{tabular}{L{1.8cm} C{2.5cm} C{1.7cm} C{1.9cm} C{1.9cm} C{1.8cm}}\n&\\multicolumn{5}{c}{\\textbf{ROC-AUC}}\\\\\n\\cmidrule{2-6}\n\\textbf{Finding}&Guendel et al.~\\cite{Guendel2018}&\\textbf{Ours} [0\\%]&\\textbf{Ours} [10\\%]&\\textbf{Ours} [25\\%]&\\textbf{Ours} [50\\%]\\\\\n\\midrule\nGranuloma&0.83&0.85&0.87&\\textbf{0.90}&\\textbf{0.92}\\\\\nFibrosis&0.87&0.88&0.90&\\textbf{0.92}&\\textbf{0.94}\\\\\nScarring&0.82&0.81&0.84&\\textbf{0.89}&\\textbf{0.93}\\\\\nLesion&0.82&0.83&0.86&\\textbf{0.88}&\\textbf{0.90}\\\\\nCardiac Ab.&0.93&0.94&0.95&\\textbf{0.96}&\\textbf{0.97}\\\\\n\\midrule\\midrule\n\\textbf{Average}&0.85&0.86&0.89&\\textbf{0.91}&\\textbf{0.93}\\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[height=3cm]{human.pdf}\n\\includegraphics[height=3cm]{catch.pdf}\n\\caption{\\textbf{Left}: Predictive uncertainty distribution on 689 ChestX-Ray test images; a higher uncertainty is associated with cases of the critical set, which required a label correction according to the expert committee. \\textbf{Right}: Plot showing the capacity of the algorithm to eliminate cases from the critical set via sample rejection. Bars indicate the percentage of critical cases for each batch of 5\\% rejected cases.\\label{fig:corr}}\n\\end{figure}\n\n\\textbf{System versus Reader Uncertainty:}\\, To provide an insight into the meaning of the uncertainty measure and its correlation with the difficulty of cases, we evaluated our system on the detection of pleural effusion (excess accumulation of fluid in the pleural cavity) based on the ChestX-Ray8 dataset. In particular, we analyzed the test set of 689 cases that were relabeled using an expert committee of 4 experts. We defined a so-called \\textit{critical set}, which contains only cases for which the label (positive or negative) was changed after the expert reevaluation. 
According to the committee, this set contained not only easy examples for which the NLP algorithm probably failed to properly extract the correct labels from the radiographic report, but also difficult cases where either the image quality was limited or the evidence of effusion was very subtle. In Figure~\\ref{fig:corr} (left), we empirically demonstrate that the uncertainty estimates of our algorithm correlate with the committee's decision to change the label. Specifically, for unchanged cases, our algorithm displayed very low uncertainty estimates (average 0.16) at an average AUC of 0.976 (rejection rate of 0\\%). In contrast, on cases in the critical set, the algorithm showed higher uncertainties distributed between 0.1 and the maximum value of 1 (average 0.41). This empirically demonstrates the ability of the algorithm to recognize the cases where annotation errors occurred in the first place (through NLP or human reader error). In Figure~\\ref{fig:corr} (right) we show how cases of the critical set can be effectively filtered out using sample rejection. Qualitative examples are shown in Figure~\\ref{fig:examples}.\\smallskip\n\n\\textbf{Uncertainty-driven Bootstrapping:}\\, We also investigated the benefit of using bootstrapping based on the uncertainty measure on the example of pleural effusion (ChestX-Ray8). \nWe report performance as [\\textit{AUC}; \\textit{F1-score} (pos. class); \\textit{F1-score} (neg. class)]. After training our method, the baseline performance was measured at $[0.89; 0.60; 0.92]$ on testing. We then eliminated 5\\%, 10\\% and 15\\% of training samples with highest uncertainty, and retrained in each case on the remaining data. The metrics improved to $[0.90; 0.68; 0.92]_{5\\%}$, $[0.91; 0.67; 0.94]_{10\\%}$ and $[\\textbf{0.93}; \\textbf{0.69}; \\textbf{0.94}]_{15\\%}$ on the test set. This is a significant increase, demonstrating the potential of this strategy to improve the robustness of the model to label noise. 
We are currently focused on further exploring this method.\n\n\\begin{figure*}[t]\n\\centering\n\\subfloat[$\\hat{u},\\hat{p}=0.90,0.45$]{\n\\includegraphics[height=2.8cm]{ex1.pdf}\n\\label{subfig:1}\n}\n\\subfloat[$\\hat{u},\\hat{p}=0.93,0.48$]{\n\\includegraphics[height=2.8cm]{ex2.pdf}\n\\label{subfig:2}\n}\n\\subfloat[$\\hat{u},\\hat{p}=0.54,0.65$]{\n\\includegraphics[height=2.8cm]{ex3.pdf}\n\\label{subfig:3}\n}\n\\subfloat[$\\hat{u},\\hat{p}=0.11,0.05$]{\n\\includegraphics[height=2.8cm]{00002048_000_11negative.png}\n\\label{subfig:4}\n}\n\\caption{ChestX-Ray8 test images assessed for pleural effusion ($\\hat{u}$: est. uncertainty, $\\hat{p}$: output probability; with affected regions circled in red). Figures~\\ref{subfig:1}, \\ref{subfig:2} and \\ref{subfig:3} show positive cases of the critical set with high predictive uncertainty -- possibly explained by the atypical appearance of accumulated fluid in~\\ref{subfig:1}, and poor quality of image~\\ref{subfig:2}. Figure~\\ref{subfig:4} shows a high confidence case with no pleural effusion.\\label{fig:examples}}\n\\end{figure*}\n\n\\section{Conclusion}\nIn conclusion, this paper presents an effective method for the joint estimation of class probabilities and classification uncertainty in the context of chest radiograph assessment. Extensive experiments on two large datasets demonstrate a significant accuracy increase if sample rejection is performed based on the estimated uncertainty measure. In addition, we highlight the capacity of the system to distinguish radiographs with correct and incorrect labels according to a multi-radiologist-consensus user study, using the uncertainty measure only.\\smallskip\n\nThe authors thank the National Cancer Institute for access to NCI's data collected by the Prostate, Lung, Colorectal and Ovarian (PLCO) Cancer Screening Trial. 
The statements contained herein are solely those of the authors and do not represent or imply concurrence or endorsement by NCI.\n\n\\ifx1\\undefined\n\\textbf{Disclaimer} The concepts and information presented in this paper are based on research results that are not commercially available.\n\\fi\n\n\\bibliographystyle{splncs04}\n\\section{Introduction}\n\\label{secintro}\n\nThe goal of these lecture notes, as the title says, is to give a basic introduction to the theory of large deviations at three levels: theory, applications and simulations. The notes follow closely my recent review paper on large deviations and their applications in statistical mechanics \\cite{touchette2009}, but are, in a way, both narrower and wider in scope than this review paper.\n\nThey are narrower, in the sense that the mathematical notations have been cut down to a minimum in order for the theory to be understood by advanced undergraduate and graduate students in science and engineering, having a basic background in probability theory and stochastic processes (see, e.g., \\cite{grimmett2001}). The simplification of the mathematics amounts essentially to two things: i) to focus on random variables taking values in $\\mathbb{R}$ or $\\mathbb{R}^D$, and ii) to state all the results in terms of probability densities, and so to assume that probability densities always exist, if only in a weak sense. These simplifications are justified for most applications and are convenient for conveying the essential ideas of the theory in a clear way, without the hindrance of technical notations.\n\nThese notes also go beyond the review paper \\cite{touchette2009}, in that they cover subjects not contained in that paper, in particular the subject of numerical estimation or \\textit{simulation} of large deviation probabilities. 
This is an important subject that I intend to cover in more depth in the future.\n\nSections~\\ref{secnum1} and \\ref{secnum2} of these notes are a first and somewhat brief attempt in this direction. Far from covering all the methods that have been developed to simulate large deviations and rare events, they concentrate on the central idea of large deviation simulations, namely that of exponential change of measure, and they elaborate from there on certain simulation techniques that are easily applicable to sums of independent random variables, Markov chains, stochastic differential equations and continuous-time Markov processes in general.\n\nMany of these applications are covered in the exercises contained at the end of each section. The level of difficulty of these exercises is quite varied: some are there to practice the material presented, while others go beyond that material and may take hours, if not days or weeks, to solve completely. For convenience, I have rated them according to Knuth's logarithmic scale.\\footnote{00 = Immediate; 10 = Simple; 20 = Medium; 30 = Moderately hard; 40 = Term project; 50 = Research problem. See the superscript attached to each exercise.}\n\nIn closing this introduction, let me emphasize again that these notes are not meant to be complete in any way. For one thing, they lack the rigorous notation needed for handling large deviations in a precise mathematical way, and only give a hint of the vast subject that is large deviation and rare event simulation. On the simulation side, they also skip over the subject of error estimates, which would fill an entire section in itself. 
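To give a first taste of this central idea (as an illustration of ours, ahead of the formal development), here is how an exponential change of measure estimates the tail probability $P(S_n\geq a)$ for a sample mean of IID standard Gaussians: each variable is tilted to have mean $\theta=a$, and the indicator is reweighted by the likelihood ratio $\mathrm{e}^{-n\theta S_n+n\lambda(\theta)}$ with $\lambda(\theta)=\theta^2/2$:

```python
import math
import numpy as np

def tail_prob_is(n, a, num_samples=100_000, seed=0):
    """Importance-sampling estimate of P(S_n >= a) for S_n the mean of n
    IID N(0,1) variables, using the exponential tilting X_i ~ N(theta, 1)
    with theta = a, so that the tilted mean sits at the rare value a."""
    rng = np.random.default_rng(seed)
    theta = a
    # Under the tilted measure S_n ~ N(theta, 1/n), so sample it directly.
    s = rng.normal(loc=theta, scale=1.0 / math.sqrt(n), size=num_samples)
    # Likelihood ratio dP/dP_theta as a function of the sample mean:
    weights = np.exp(-n * theta * s + n * theta**2 / 2.0)
    return float(np.mean((s >= a) * weights))

n, a = 50, 0.8
exact = 0.5 * math.erfc(a * math.sqrt(n / 2.0))  # here S_n ~ N(0, 1/n) exactly
estimate = tail_prob_is(n, a)
print(estimate / exact)  # close to 1
```

A crude Monte Carlo estimate with the same number of samples would essentially never see the event, since the probability here is of order $10^{-8}$; the tilted estimator recovers it with a relative error of a few percent.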
In spite of this, I hope that the material covered will have the effect of enticing readers to learn more about large deviations and of conveying a sense of the wide applicability, depth and beauty of this subject, both at the theoretical and computational levels.\n\n\\section{Basic elements of large deviation theory}\n\\label{sectheory}\n\n\\subsection{Examples of large deviations}\n\nWe start our study of large deviation theory by considering a sum of real random variables (RV for short) having the form\n\\begin{equation}\nS_n=\\frac{1}{n}\\sum_{i=1}^n X_i.\n\\end{equation}\nSuch a sum is often referred to in mathematics or statistics as a \\textbf{sample mean}. We are interested in computing the \\textbf{probability density function} $p_{S_n}(s)$ (pdf for short) of $S_n$ in the simple case where the $n$ RVs are mutually \\textbf{independent and identically distributed} (IID for short).\\footnote{The pdf of $S_n$ will be denoted by $p_{S_n}(s)$ or more simply by $p(S_n)$ when no confusion arises.} This means that the joint pdf of $X_1,\\ldots,X_n$ factorizes as follows:\n\\begin{equation}\np(X_1,\\ldots,X_n)=\\prod_{i=1}^n p(X_i),\n\\end{equation}\nwith $p(X_i)$ a fixed pdf for each of the $X_i$'s.\\footnote{To avoid confusion, we should write $p_{X_i}(x)$, but in order not to overload the notation, we will simply write $p(X_i)$. 
The context should make clear what RV we are referring to.} \n\nWe compare next two cases for $p(X_i)$:\n\\begin{itemize}\n\\item \\textbf{Gaussian pdf:}\n\\begin{equation}\np(x)=\\frac{1}{\\sqrt{2\\pi\\sigma^2}} \\mathrm{e}^{-(x-\\mu)^2\/(2\\sigma^2)},\\quad x\\in\\mathbb{R},\n\\end{equation}\nwhere $\\mu=E[X]$ is the \\textbf{mean} of $X$ and $\\sigma^2=E[(X-\\mu)^2]$ its \\textbf{variance}.\\footnote{The symbol $E[X]$ denotes the expectation or expected value of $X$, which is often denoted by $\\langle X\\rangle$ in physics.}\n\n\\item \\textbf{Exponential pdf:}\n\\begin{equation}\np(x)=\\frac{1}{\\mu} \\mathrm{e}^{-x\/\\mu},\\quad x\\in [0,\\infty),\n\\label{eqexpd1}\n\\end{equation}\nwith mean $E[X]=\\mu>0$. \n\\end{itemize}\n\nWhat is the form of $p_{S_n}(s)$ for each pdf?\n\nTo find out, we write the pdf associated with the event $S_n=s$ by summing the pdf of all the values or \\textbf{realizations} $(x_1,\\ldots,x_n)\\in\\mathbb{R}^n$ of $X_1,\\ldots,X_n$ such that $S_n=s$.\\footnote{Whenever possible, random variables will be denoted by uppercase letters, while their values or realizations will be denoted by lowercase letters. This follows the convention used in probability theory. Thus, we will write $X=x$ to mean that the RV $X$ takes the value $x$.} In terms of Dirac's delta function $\\delta(x)$, this is written as\n\\begin{equation}\np_{S_n}(s)=\\int_\\mathbb{R} \\mathrm{d} x_1\\cdots \\int_\\mathbb{R} \\mathrm{d} x_n\\ \\delta\\big(\\textstyle\\sum_{i=1}^n x_i -ns\\big)\\ p(x_1,\\ldots,x_n).\n\\label{eqrep1}\n\\end{equation}\nFrom this expression, we can then obtain an explicit expression for $p_{S_n}(s)$ by using the method of generating functions (see Exercise~\\ref{excf1}) or by substituting the Fourier representation of $\\delta(x)$ above and by explicitly evaluating the $n+1$ resulting integrals. 
The result obtained for both the Gaussian and the exponential densities has the general form\n\\begin{equation}\np_{S_n}(s)\\approx \\mathrm{e}^{-nI(s)},\n\\label{eqldt1}\n\\end{equation}\nwhere \n\\begin{equation}\nI(s)=\\frac{(s-\\mu)^2}{2\\sigma^2},\\qquad s\\in\\mathbb{R}\n\\label{eqrfg1}\n\\end{equation}\nfor the Gaussian pdf, whereas\n\\begin{equation}\nI(s)=\\frac{s}{\\mu}-1-\\ln\\frac{s}{\\mu},\\qquad s\\geq 0\n\\label{eqldexp1}\n\\end{equation}\nfor the exponential pdf.\n\n\\begin{figure}[t]\n\\resizebox{\\textwidth}{!}{\\includegraphics{gaussianldt1.pdf}}\n\\caption{(Left) pdf $p_{S_n}(s)$ of the Gaussian sample mean $S_n$ for $\\mu=\\sigma=1$. (Right) $I_n(s)=-\\frac{1}{n}\\ln p_{S_n}(s)$ for different values of $n$, demonstrating a rapid convergence towards the rate function $I(s)$ (dashed line).}\n\\label{figgaussianld1}\n\\end{figure}\n\nWe will come to understand the meaning of the approximation sign ($\\approx$) more clearly in the next subsection. For now we just take it as meaning that the dominant behaviour of $p(S_n)$ as a function of $n$ is a decaying exponential in $n$. Other terms in $n$ that may appear in the exact expression of $p(S_n)$ are sub-exponential in $n$.\n\nThe picture of the behaviour of $p(S_n)$ that we obtain from the result of (\\ref{eqldt1}) is that $p(S_n)$ decays to $0$ exponentially fast with $n$ for all values $S_n=s$ for which the function $I(s)$, which controls the rate of decay of $p(S_n)$, is positive. But notice that $I(s)\\geq 0$ for both the Gaussian and exponential densities, and that $I(s)=0$ in both cases only for $s=\\mu=E[X_i]$. Therefore, since the pdf of $S_n$ is normalized, it must get more and more concentrated around the value $s=\\mu$ as $n\\rightarrow\\infty$ (see Fig.~\\ref{figgaussianld1}), which means that $p_{S_n}(s)\\rightarrow \\delta(s-\\mu)$ in this limit. We say in this case that $S_n$ converges in probability or in density to its mean.
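Since a sum of IID Gaussians is itself Gaussian, $S_n\sim\mathcal{N}(\mu,\sigma^2/n)$ exactly, and the convergence of $I_n(s)=-\frac{1}{n}\ln p_{S_n}(s)$ towards $I(s)$ can be checked directly. The following Python sketch (illustrative only, with $\mu=\sigma=1$ as in Fig.~\ref{figgaussianld1}) works in log-space to avoid numerical underflow at large $n$:

```python
import math

# Gaussian sample mean: S_n ~ N(mu, sigma^2/n) exactly, so the finite-n
# rate function I_n(s) = -(1/n) ln p_{S_n}(s) is available in closed form.
mu, sigma = 1.0, 1.0

def log_p_Sn(s, n):
    var = sigma**2 / n
    return -(s - mu)**2 / (2 * var) - 0.5 * math.log(2 * math.pi * var)

def I_n(s, n):
    return -log_p_Sn(s, n) / n

def I(s):
    return (s - mu)**2 / (2 * sigma**2)   # limiting rate function

for n in (10, 100, 1000, 10000):
    print(n, I_n(2.0, n))   # approaches I(2.0) = 0.5 as n grows
```

The slowly vanishing difference between $I_n$ and $I$ comes from the Gaussian normalization prefactor, an explicit example of a sub-exponential correction: it contributes a term of order $(\ln n)/n$ to $I_n(s)$.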
\n\nAs a variation of the Gaussian and exponential sample means, let us now consider a sum of \\textbf{discrete random variables} having a probability distribution $P(X_i)$ instead of continuous random variables with a pdf $p(X_i)$.\\footnote{Following the convention in probability theory, we use the lowercase $p$ for continuous probability densities and the uppercase $P$ for discrete probability distributions and probability assignments in general. Moreover, following the notation used before, we will denote the distribution of a discrete $S_n$ by $P_{S_n}(s)$ or simply $P(S_n)$.} To be specific, consider the case of IID \\textbf{Bernoulli RVs} $X_1,\\ldots,X_n$ taking values in the set $\\{0,1\\}$ with probability $P(X_i=0)=1-\\alpha$ and $P(X_i=1)=\\alpha$. What is the form of the probability distribution $P(S_n)$ of $S_n$ in this case?\n\nNotice that we are speaking of a probability distribution because $S_n$ is now a discrete variable taking values in the set $\\{0,1\/n,2\/n,\\ldots,(n-1)\/n,1\\}$. In the previous Gaussian and exponential examples, $S_n$ was a continuous variable characterized by its pdf $p(S_n)$. \n\nWith this in mind, we can obtain the exact expression of $P(S_n)$ using methods similar to those used to obtain $p(S_n)$ (see Exercise~\\ref{exbs1}). The result is different from that found for the Gaussian and exponential densities, but what is remarkable is that the distribution of the Bernoulli sample mean also contains a dominant exponential term having the form\n\\begin{equation}\nP_{S_n}(s)\\approx \\mathrm{e}^{-nI(s)},\n\\label{eqldbern1}\n\\end{equation} \nwhere $I(s)$ is now given by\n\\begin{equation}\nI(s)=s\\ln \\frac{s}{\\alpha} +(1-s)\\ln \\frac{1-s}{1-\\alpha},\\qquad s\\in [0,1].\n\\label{eqbern1}\n\\end{equation}\n\nThe behaviour of the exact expression of $P(S_n)$ as $n$ grows is shown in Fig.~\\ref{figbernld1} together with the plot of $I(s)$ as given by Eq.~(\\ref{eqbern1}).
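The exact binomial expression for $P_{S_n}(s)$ (derived in Exercise~\ref{exbs1}) is easy to evaluate numerically. The short Python sketch below (illustrative only, with $\alpha=0.4$) computes it in log-space with \texttt{lgamma} to avoid underflow, and compares $-\frac{1}{n}\ln P_{S_n}(s)$ with the rate function of Eq.~(\ref{eqbern1}):

```python
import math

# Exact Bernoulli sample-mean distribution (the binomial distribution),
# evaluated in log-space to avoid underflow at large n.
alpha = 0.4

def log_P_Sn(s, n):
    k = round(s * n)   # s must be a multiple of 1/n
    log_binom = (math.lgamma(n + 1) - math.lgamma(k + 1)
                 - math.lgamma(n - k + 1))
    return log_binom + k * math.log(alpha) + (n - k) * math.log(1 - alpha)

def I_n(s, n):
    return -log_P_Sn(s, n) / n

def I(s):
    # rate function of the Bernoulli sample mean
    return s * math.log(s / alpha) + (1 - s) * math.log((1 - s) / (1 - alpha))

for n in (10, 100, 1000, 10000):
    print(n, I_n(0.8, n))   # slowly approaches I(0.8)
```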
Notice how $P(S_n)$ concentrates around its mean $\\mu=E[X_i]=\\alpha$ as a result of the fact that $I(s)\\geq 0$ and that $s=\\mu$ is the only value of $S_n$ for which $I=0$. Notice also how the support of $P(S_n)$ becomes ``denser'' as $n\\rightarrow\\infty$, and compare this property with the fact that $I(s)$ is a continuous function despite $S_n$ being discrete. Should $I(s)$ not be defined for discrete values if $S_n$ is a discrete RV? We address this question next.\n\n\\begin{figure}[t]\n\\resizebox{\\textwidth}{!}{\\includegraphics{bernoullildt1.pdf}}\n\\caption{(Top left) Discrete probability distribution $P_{S_n}(s)$ of the Bernoulli sample mean for $\\alpha=0.4$ and different values of $n$. (Top right) Finite-$n$ rate function $I_n(s)=-\\frac{1}{n}\\ln P_{S_n}(s)$. The rate function $I(s)$ is the dashed line. (Bottom left) Coarse-grained pdf $p_{S_n}(s)$ for the Bernoulli sample mean. (Bottom right) $I_n(s)=-\\frac{1}{n}\\ln p_{S_n}(s)$ as obtained from the coarse-grained pdf.}\n\\label{figbernld1}\n\\end{figure}\n\n\\subsection{The large deviation principle}\n\nThe general exponential form $\\mathrm{e}^{-nI(s)}$ that we found for our three previous sample means (Gaussian, exponential and Bernoulli) is the founding result or property of large deviation theory, referred to as the large deviation principle.
The reason why a whole theory can be built on such a seemingly simple result is that it arises in the context of many stochastic processes, not just IID sample means, as we will come to see in the next sections, and as can be seen from other contributions to this volume; see, e.g., Engel's.\\footnote{The volume referred to is the volume of lecture notes produced for the summer school; see \\texttt{http:\/\/www.mcs.uni-oldenburg.de\/}}\n\nThe rigorous, mathematical definition of the large deviation principle involves many concepts of topology and measure theory that are too technical to be presented here (see Sec.~3.1 and Appendix B of \\cite{touchette2009}). For simplicity, we will say here that a random variable $S_n$ or its pdf $p(S_n)$ satisfies a \\textbf{large deviation principle} (LDP) if the following limit exists:\n\\begin{equation}\n\\lim_{n\\rightarrow\\infty} -\\frac{1}{n}\\ln p_{S_n}(s)=I(s)\n\\label{eqldlim1}\n\\end{equation}\nand gives rise to a function $I(s)$, called the \\textbf{rate function}, which is not everywhere zero.\n\nThe relation between this definition and the approximation notation used earlier should be clear: the fact that the behaviour of $p_{S_n}(s)$ is dominated for large $n$ by a decaying exponential means that the exact pdf of $S_n$ can be written as\n\\begin{equation}\np_{S_n}(s)=\\mathrm{e}^{-nI(s)+o(n)}\n\\end{equation}\nwhere $o(n)$ stands for any correction term that is sub-linear in $n$. Taking the large deviation limit of (\\ref{eqldlim1}) then yields\n\\begin{equation}\n\\lim_{n\\rightarrow\\infty} -\\frac{1}{n}\\ln p_{S_n}(s)=I(s)-\\lim_{n\\rightarrow\\infty} \\frac{o(n)}{n} = I(s),\n\\end{equation}\nsince $o(n)\/n\\rightarrow 0$. We see therefore that the large deviation limit of Eq.~(\\ref{eqldlim1}) is the limit needed to retain the dominant exponential term in $p(S_n)$ while discarding any other sub-exponential terms. 
For this reason, large deviation theory is often said to be concerned with estimates of probabilities on the logarithmic scale. \n\nThis point is illustrated in Fig.~\\ref{figbernld1}. There we see that the function \n\\begin{equation}\nI_n(s)=-\\frac{1}{n}\\ln p_{S_n}(s)\n\\end{equation} \nis not quite equal to the limiting rate function $I(s)$, because of terms of order $o(n)\/n=o(1)$; however, it does converge to $I(s)$ as $n$ gets larger. Plotting $p(S_n)$ on this scale therefore reveals the rate function in the limit of large $n$. This convergence will be encountered repeatedly in the sections on the numerical evaluation of large deviation probabilities.\n\nIt should be emphasized again that the definition of the LDP given above is a simplification of the rigorous definition used in mathematics, due to the mathematician Srinivasa Varadhan.\\footnote{Recipient of the 2007 Abel Prize for his ``fundamental contributions to probability theory and in particular for creating a unified theory of large deviations.''} The real, mathematical definition is expressed in terms of probability measures of certain sets rather than in terms of probability densities, and involves upper and lower bounds on these probabilities rather than a simple limit (see Sec.~3.1 and Appendix B of \\cite{touchette2009}). The mathematical definition also applies a priori to any RVs, not just continuous RVs with a pdf.\n\nIn these notes, we will simplify the mathematics by assuming that the random variables or stochastic processes that we study have a pdf. In fact, we will often assume that pdfs exist even for RVs that are not continuous but ``look'' continuous at some scale.\n\nTo illustrate this point, consider again the Bernoulli sample mean. We noticed that $S_n$ in this case is not a continuous RV, and so does not have a pdf.
However, we also noticed that the values that $S_n$ can take become dense in the interval $[0,1]$ as $n\\rightarrow\\infty$, which means that the support of the discrete probability distribution $P(S_n)$ becomes dense in this limit, as shown in Fig.~\\ref{figbernld1}. From a practical point of view, it therefore makes sense to treat $S_n$ as a continuous RV for large $n$ by interpolating $P(S_n)$ to a continuous function $p(S_n)$ representing the ``probability density'' of $S_n$.\n\nThis pdf is obtained in general simply by considering the probability $P(S_n\\in [s,s+\\Delta s])$ that $S_n$ takes a value in a tiny interval surrounding the value $s$, and by then dividing this probability by the ``size'' $\\Delta s$ of that interval:\n\\begin{equation}\np_{S_n}(s)=\\frac{P(S_n\\in [s,s+\\Delta s])}{\\Delta s}.\n\\end{equation}\nThe pdf obtained in this way is referred to as a \\textbf{smoothed}, \\textbf{coarse-grained} or \\textbf{weak density}.\\footnote{\\label{fnldp1}To be more precise, we should make clear that the coarse-grained pdf depends on the spacing $\\Delta s$ by writing, say, $p_{S_n,\\Delta s}(s)$. However, to keep the notation to a minimum, we will use the same lowercase $p$ to denote a coarse-grained pdf and a ``true'' continuous pdf. The context should make clear which of the two is used.} In the case of the Bernoulli sample mean, it is simply given by\n\\begin{equation}\np_{S_n}(s)=nP_{S_n}(s)\n\\end{equation}\nif the spacing $\\Delta s$ between the values of $S_n$ is chosen to be $1\/n$.\n\nThis process of replacing a discrete variable by a continuous one in some limit or at some scale is known in physics as the \\textbf{continuous} or \\textbf{continuum limit} or as the \\textbf{macroscopic limit}. 
In mathematics, this limit is expressed via the notion of weak convergence (see \\cite{dupuis1997}).\n\n\\subsection{The G\\\"artner-Ellis Theorem}\n\nOur goal in the next sections will be to study random variables and stochastic processes that satisfy an LDP and to find analytical as well as numerical ways to obtain the corresponding rate function. There are many ways whereby a random variable, say $S_n$, can be shown to satisfy an LDP:\n\\begin{itemize}\n\\item \\textbf{Direct method:} Find the expression of $p(S_n)$ and show that it has the form of the LDP;\n\\item \\textbf{Indirect method:} Calculate certain functions of $S_n$, such as generating functions, whose properties can be used to infer that $S_n$ satisfies an LDP;\n\\item \\textbf{Contraction method:} Relate $S_n$ to another random variable, say $A_n$, which is known to satisfy an LDP and derive from this an LDP for $S_n$.\n\\end{itemize}\n\nWe have used the first method when discussing the Gaussian, exponential and Bernoulli sample means. \n\nThe main result of large deviation theory that we will use in these notes to obtain LDPs along the indirect method is called the \\textbf{G\\\"artner-Ellis Theorem} (GE Theorem for short), and is based on the calculation of the following function:\n\\begin{equation}\n\\lambda(k)=\\lim_{n\\rightarrow\\infty}\\frac{1}{n}\\ln E[\\mathrm{e}^{nk S_n}],\n\\label{eqscgf1}\n\\end{equation}\nknown as the \\textbf{scaled cumulant generating function}\\footnote{The function $E[\\mathrm{e}^{kX}]$ for $k$ real is known as the \\textbf{generating function} of the RV $X$; $\\ln E[\\mathrm{e}^{kX}]$ is known as the \\textbf{log-generating function} or \\textbf{cumulant generating function}. The word ``scaled'' comes from the extra factor $1\/n$.} (SCGF for short). In this expression $E[\\cdot]$ denotes the expected value, $k$ is a real parameter, and $S_n$ is an arbitrary RV; it is not necessarily an IID sample mean or even a sample mean. 
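To get a feel for the limit defining $\lambda(k)$, the expectation $E[\mathrm{e}^{nkS_n}]$ can be evaluated exactly at finite $n$ in the Bernoulli case by summing over the binomial distribution of the $n$-fold sum. A minimal Python sketch (illustrative only, with $\alpha=0.4$ and $k=1$):

```python
import math

# (1/n) ln E[e^{n k S_n}] for the Bernoulli sample mean, computed by an
# explicit sum over the binomial distribution of the n-fold sum.
alpha, k = 0.4, 1.0

def scaled_cgf(n):
    E = sum(math.comb(n, j) * (alpha * math.exp(k))**j * (1 - alpha)**(n - j)
            for j in range(n + 1))
    return math.log(E) / n

for n in (10, 100, 500):
    print(n, scaled_cgf(n))
# All values coincide with ln(1 - alpha + alpha e^k): by the binomial
# theorem the expectation factorizes for an IID sum, so the limit is
# already attained at every finite n.
print(math.log(1 - alpha + alpha * math.exp(k)))
```

That the finite-$n$ value is independent of $n$ is special to IID sums; for the correlated processes studied later the limit is reached only as $n\rightarrow\infty$.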
\n\nThe point of the GE Theorem is to be able to calculate $\\lambda(k)$ without knowing $p(S_n)$. We will see later that this is possible. Given $\\lambda(k)$, the GE Theorem then says that, if $\\lambda(k)$ is differentiable,\\footnote{The statement of the GE Theorem given here is a simplified version of the full result, which is essentially the result proved by G\\\"artner \\cite{gartner1977}. See Sec.~3.3 of \\cite{touchette2009} for a more complete presentation of the GE Theorem and \\cite{ellis1984} for a rigorous account of it.} then \n\\begin{itemize}\n\\item $S_n$ satisfies an LDP, i.e.,\n\\begin{equation}\n\\lim_{n\\rightarrow\\infty}-\\frac{1}{n}\\ln p_{S_n}(s)=I(s);\n\\end{equation}\n\\item The rate function $I(s)$ is given by the \\textbf{Legendre-Fenchel transform} of $\\lambda(k)$:\n\\begin{equation}\nI(s)=\\sup_{k\\in\\mathbb{R}}\\{ks-\\lambda(k)\\},\n\\label{eqlf1}\n\\end{equation}\nwhere ``$\\sup$'' stands for the \\textbf{supremum}.\\footnote{In these notes, a $\\sup$ can be taken to mean the same as a $\\max$.}\n\\end{itemize}\n\nWe will come to calculate $\\lambda(k)$ and its Legendre-Fenchel transform for specific RVs in the next section. It will also be seen there that $\\lambda(k)$ does not always exist (this typically happens when the pdf of a random variable does not admit an LDP). Some exercises at the end of this section illustrate many useful properties of $\\lambda(k)$ when this function exists and is twice differentiable.\n\n\\subsection{Varadhan's Theorem}\n\nThe rigorous proof of the GE Theorem is too technical to be presented here.
However, there is a way to justify this result by deriving in a heuristic way another result known as Varadhan's Theorem.\n\nThe latter theorem is concerned with the evaluation of a functional\\footnote{A function of a function is called a \\textbf{functional}.} expectation of the form\n\\begin{equation}\nW_n[f]=E[\\mathrm{e}^{nf(S_n)}]=\\int_\\mathbb{R} p_{S_n}(s)\\, \\mathrm{e}^{nf(s)}\\, \\mathrm{d} s,\n\\end{equation} \nwhere $f$ is some function of $S_n$, which is taken to be a real RV for simplicity. Assuming that $S_n$ satisfies an LDP with rate function $I(s)$, we can write\n\\begin{equation}\nW_n[f]\\approx\\int_\\mathbb{R} \\mathrm{e}^{n[f(s)-I(s)]}\\, \\mathrm{d} s,\n\\end{equation}\nwith sub-exponential corrections in $n$. This integral has the form of a so-called \\textbf{Laplace integral}, which is known to be dominated for large $n$ by the maximum value of its integrand when that maximum is unique. Assuming this is the case, we can proceed to approximate the whole integral as\n\\begin{equation}\nW_n[f]\\approx \\mathrm{e}^{n \\sup_s [f(s)-I(s)]}.\n\\end{equation}\nSuch an approximation is referred to as a \\textbf{Laplace approximation}, the \\textbf{Laplace principle} or a \\textbf{saddle-point approximation} (see Chap.~6 of \\cite{bender1978}), and is justified in the context of large deviation theory because the corrections to this approximation are sub-exponential in $n$, as are those of the LDP. By defining the following functional:\n\\begin{equation}\n\\lambda[f]=\\lim_{n\\rightarrow\\infty}\\frac{1}{n}\\ln W_n[f]\n\\end{equation}\nusing a limit similar to the limit defining the LDP, we then obtain\n\\begin{equation}\n\\lambda[f]=\\sup_{s\\in\\mathbb{R}}\\{f(s)-I(s)\\}.\n\\label{eqvaradhan1}\n\\end{equation}\n\nThe result above is what is referred to as \\textbf{Varadhan's Theorem} \\cite{varadhan1966,touchette2009}.
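The Laplace approximation is easy to test numerically. The sketch below (illustrative only) takes $f(s)=ks$ and the Gaussian rate function $I(s)=(s-\mu)^2/(2\sigma^2)$, drops the sub-exponential prefactor of $p_{S_n}$, and compares the quadrature value of $\frac{1}{n}\ln W_n[f]$ with the supremum of $f(s)-I(s)$:

```python
import math

# Laplace-principle check: (1/n) ln of the integral of exp(n[k s - I(s)])
# should converge to sup_s {k s - I(s)} = k mu + k^2 sigma^2 / 2.
mu, sigma, k = 1.0, 1.0, 0.5

def I(s):
    return (s - mu)**2 / (2 * sigma**2)

def log_W_over_n(n, lo=-10.0, hi=10.0, m=100001):
    """(1/n) ln of int exp(n [k s - I(s)]) ds, by the trapezoidal rule."""
    h = (hi - lo) / (m - 1)
    total = sum((0.5 if i in (0, m - 1) else 1.0)
                * math.exp(n * (k * (lo + i * h) - I(lo + i * h)))
                for i in range(m))
    return math.log(total * h) / n

grid = [-10.0 + 0.001 * j for j in range(20001)]
sup_val = max(k * s - I(s) for s in grid)

for n in (10, 100, 1000):
    print(n, log_W_over_n(n))   # converges to sup_val
print(sup_val)                  # k mu + k^2 sigma^2 / 2 = 0.625
```

The gap between the two values decays like $(\ln n)/n$, coming again from the Gaussian prefactor of the Laplace integral.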
The contribution of Varadhan was to prove this result for a large class of RVs, which includes not just IID sample means but also random vectors and even random functions, and to rigorously handle all the heuristic approximations used above. As such, Varadhan's Theorem can be considered as a rigorous and general expression of the Laplace principle.\n\nTo connect Varadhan's Theorem with the GE Theorem, consider the special case $f(s)=ks$ with $k\\in\\mathbb{R}$.\\footnote{Varadhan's Theorem holds in its original form for bounded functions $f$, but another stronger version can be applied to unbounded functions and in particular to $f(x)=kx$; see Theorem 1.3.4 of \\cite{dupuis1997}.} Then Eq.~(\\ref{eqvaradhan1}) becomes\n\\begin{equation}\n\\lambda(k)=\\sup_{s\\in\\mathbb{R}}\\{ks-I(s)\\},\n\\end{equation}\nwhere $\\lambda(k)$ is the same function as the one defined in Eq.~(\\ref{eqscgf1}). Thus we see that if $S_n$ satisfies an LDP with rate function $I(s)$, then the SCGF $\\lambda(k)$ of $S_n$ is the Legendre-Fenchel transform of $I(s)$. This in a sense is the inverse of the GE Theorem, so an obvious question is, can we invert the equation above to obtain $I(s)$ in terms of $\\lambda(k)$? The answer provided by the GE Theorem is that a sufficient condition for $I(s)$ to be the Legendre-Fenchel transform of $\\lambda(k)$ is for the latter function to be differentiable.\\footnote{The differentiability condition, though sufficient, is not necessary: $I(s)$ in some cases can be the Legendre-Fenchel transform of $\\lambda(k)$ even when the latter is not differentiable; see Sec.~4.4 of \\cite{touchette2009}.}\n\nThis is the closest that we can get to the GE Theorem without proving it. It is important to note, to fully appreciate the importance of that theorem, that it is actually more than just an inversion of Varadhan's Theorem because the existence of $\\lambda(k)$ implies the existence of an LDP for $S_n$.
In our heuristic inversion of Varadhan's Theorem we assumed that an LDP exists.\n\n\\subsection{The contraction principle}\n\nWe mentioned before that LDPs can be derived by a contraction method. The basis of this method is the following: let $A_n$ be a random variable known to have an LDP with rate function $I_A(a)$, and consider another random variable $B_n$ which is a function of the first, $B_n=f(A_n)$. We want to know whether $B_n$ satisfies an LDP and, if so, to find its rate function.\n\nTo find the answer, write the pdf of $B_n$ in terms of the pdf of $A_n$:\n\\begin{equation}\np_{B_n}(b)=\\int_{\\{a: f(a)=b\\}} p_{A_n}(a)\\, \\mathrm{d} a\n\\end{equation}\nand use the LDP for $A_n$ to write\n\\begin{equation}\np_{B_n}(b)\\approx \\int_{\\{a: f(a)=b\\}} \\mathrm{e}^{-nI_A(a)}\\, \\mathrm{d} a.\n\\end{equation}\nThen apply the Laplace principle to approximate the integral above by its largest term, which corresponds to the minimum of $I_A(a)$ for $a$ such that $b=f(a)$. Therefore,\n\\begin{equation}\np_{B_n}(b)\\approx \\exp\\left(-n{\\inf_{\\{a: f(a)=b\\}} I_A(a)}\\right).\n\\end{equation}\nThis shows that $p(B_n)$ also satisfies an LDP with rate function $I_B(b)$ given by\n\\begin{equation}\nI_B(b)=\\inf_{\\{a:f(a)=b\\}} I_A(a).\n\\end{equation}\n\nThis formula is called the \\textbf{contraction principle} because $f$ can be many-to-one, i.e., there might be many $a$'s such that $b=f(a)$, in which case we are ``contracting'' information about the rate function of $A_n$ down to $B_n$. In physical terms, this formula is interpreted by saying that an improbable fluctuation\\footnote{In statistical physics, a deviation of a random variable away from its typical value is referred to as a \\textbf{fluctuation}.} of $B_n$ is brought about by the most probable of all improbable fluctuations of $A_n$.\n\nThe contraction principle has many applications in statistical physics.
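As a minimal illustration (a hypothetical example, not taken from the text), suppose $A_n$ has the Gaussian rate function $I_A(a)=(a-\mu)^2/2$ and let $B_n=A_n^2$. The constraint set $\{a:f(a)=b\}$ then contains just the two roots $\pm\sqrt{b}$, and the contraction formula reduces to a minimum over them:

```python
import math

# Contraction principle for B_n = A_n^2, with a Gaussian rate function
# I_A(a) = (a - mu)^2 / 2 assumed for A_n.
mu = 1.0

def I_A(a):
    return (a - mu)**2 / 2

def I_B(b):
    # f(a) = a^2 = b has the two roots a = +sqrt(b) and a = -sqrt(b)
    # (b >= 0); the contraction keeps the least improbable of the two.
    r = math.sqrt(b)
    return min(I_A(r), I_A(-r))

print(I_B(4.0))   # min(I_A(2), I_A(-2)) = min(0.5, 4.5) = 0.5
print(I_B(1.0))   # 0: b = 1 is the typical value of B_n, since a* = 1
```

A fluctuation $B_n=4$ is thus realized by the fluctuation $A_n=2$ rather than $A_n=-2$: the most probable of the improbable fluctuations of $A_n$ compatible with $b=4$.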
In particular, the maximum entropy and minimum free energy principles, which are used to find equilibrium states in the microcanonical and canonical ensembles, respectively, can be seen as deriving from the contraction principle (see Sec.~5 of \\cite{touchette2009}).\n \n\n\\subsection{From small to large deviations}\n\nAn LDP for a random variable, say $S_n$ again, gives us a lot of information about its pdf. First, we know that $p(S_n)$ concentrates on certain points corresponding to the zeros of the rate function $I(s)$. These points correspond to the most probable or \\textbf{typical values} of $S_n$ in the limit $n\\rightarrow\\infty$ and can be shown to be related mathematically to the \\textbf{Law of Large Numbers} (LLN for short). In fact, an LDP always implies some form of LLN (see Sec.~3.5.7 of \\cite{touchette2009}).\n\nOften it is not enough to know that $S_n$ converges in probability to some values; we may also want to determine the likelihood that $S_n$ takes a value away from, but close to, its typical value(s). Consider one such typical value $s^*$ and assume that $I(s)$ admits a Taylor series around $s^*$:\n\\begin{equation}\nI(s)=I(s^*)+I'(s^*) (s-s^*)+\\frac{I''(s^*)}{2} (s-s^*)^2+\\cdots.\n\\end{equation}\nSince $s^*$ must correspond to a zero of $I(s)$, the first two terms in this series vanish, and so we are left with the prediction that the \\textbf{small deviations} of $S_n$ around its typical value are Gaussian-distributed:\n\\begin{equation}\np_{S_n}(s)\\approx \\mathrm{e}^{- n I''(s^*) (s-s^*)^2\/2}.\n\\end{equation}\n\nIn this sense, large deviation theory contains the \\textbf{Central Limit Theorem} (see Sec.~3.5.8 of \\cite{touchette2009}).
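This can be made concrete with the exponential rate function found earlier, $I(s)=s/\mu-1-\ln(s/\mu)$, for which $I''(s)=1/s^2$ and hence $I''(s^*)=1/\mu^2$ at the typical value $s^*=\mu$. A short sketch (illustrative only) compares $I(s)$ with its quadratic approximation:

```python
import math

# Quadratic (small-deviation) expansion of the exponential rate function
# around its zero s* = mu, where I(s*) = I'(s*) = 0 and I''(s*) = 1/mu^2.
mu = 1.0

def I(s):
    return s / mu - 1 - math.log(s / mu)

def I_quad(s):
    return (s - mu)**2 / (2 * mu**2)

for s in (0.9, 1.0, 1.1, 1.5, 3.0):
    print(s, I(s), I_quad(s))
# Near s* the two agree (Gaussian small deviations); far from s* the
# quadratic term misses the asymmetry of the true rate function.
```

The quadratic term reproduces the Central Limit Theorem here, since $1/I''(s^*)=\mu^2$ is exactly the variance of the exponential distribution; the asymmetric tails of $I(s)$ are the large deviations that the quadratic approximation cannot capture.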
At the same time, large deviation theory can be seen as an extension of the Central Limit Theorem because it gives information not only about the small deviations of $S_n$, but also about its \\textbf{large deviations} far away from its typical value(s); hence the name of the theory.\\footnote{A large deviation is also called a large fluctuation or a \\textbf{rare and extreme event}.}\n\n\\subsection{Exercises}\n\\label{secex1}\n\n\\begin{exercise}\n\\item \\label{excf1}\\exdiff{10} (Generating functions) Let $A_n$ be a sum of $n$ IID RVs:\n\\begin{equation}\nA_n=\\sum_{i=1}^n X_i.\n\\end{equation} \nShow that the generating function of $A_n$, defined as\n\\begin{equation}\nW_{A_n}(k)=E[\\mathrm{e}^{k A_n}],\n\\end{equation}\nsatisfies the following factorization property:\n\\begin{equation}\nW_{A_n}(k)=\\prod_{i=1}^n W_{X_i}(k)=W_{X_i}(k)^n,\n\\end{equation}\n$W_{X_i}(k)$ being the generating function of $X_i$. \n\n\\item \\label{exgauss1}\\exdiff{12} (Gaussian sample mean) Find the expression of $W_X(k)$ for $X$ distributed according to a Gaussian pdf with mean $\\mu$ and variance $\\sigma^2$. From this result, find $W_{S_n}(k)$ following the previous exercise and $p(S_n)$ by inverse Laplace transform. \n\n\\item \\label{exexp1}\\exdiff{15} (Exponential sample mean) Repeat the previous exercise for the exponential pdf shown in Eq.~(\\ref{eqexpd1}). 
Obtain from your result the approximation shown in (\\ref{eqldexp1}).\n\n\\item \\label{exbs1}\\exdiff{10} (Bernoulli sample mean) Show that the probability distribution of the Bernoulli sample mean is the binomial distribution:\n\\begin{equation}\nP_{S_n}(s)=\\frac{n!}{(sn)![(1-s)n]!}\\ \\alpha^{sn}(1-\\alpha)^{(1-s)n}.\n\\end{equation}\nUse Stirling's approximation to put this result in the form of (\\ref{eqldbern1}) with $I(s)$ given by Eq.~(\\ref{eqbern1}).\n\n\\item \\label{exmulti1}\\exdiff{12} (Multinomial distribution) Repeat the previous exercise for IID RVs taking values in the set $\\{0,1,\\ldots, q-1\\}$ instead of $\\{0,1\\}$. Use Stirling's approximation to arrive at an exponential approximation similar to (\\ref{eqldbern1}). (See solution in \\cite{ellis1999}.)\n\n\\item \\label{exscgf1}\\exdiff{10} (SCGF at the origin) Let $\\lambda(k)$ be the SCGF of an IID sample mean $S_n$. Prove the following properties of $\\lambda(k)$ at $k=0$:\n\\begin{itemize}\n\\item~$\\lambda(0)=0$\n\\item~$\\lambda'(0)=E[S_n]=E[X_i]$\n\\item~$\\lambda''(0)=\\text{var}(X_i)$\n\\end{itemize}\nWhich of these properties remain valid if $S_n$ is not an IID sample mean, but is some general RV?\n\n\\item \\label{excon1}\\exdiff{12} (Convexity of SCGFs) Show that $\\lambda(k)$ is a convex function of $k$.\n\n\\item \\label{exlf1}\\exdiff{12} (Legendre transform) Show that, when $\\lambda(k)$ is everywhere differentiable and is strictly convex (i.e., has no affine or linear parts), the Legendre-Fenchel transform shown in Eq.~(\\ref{eqlf1}) reduces to the \\textbf{Legendre transform}:\n\\begin{equation}\nI(s)=k(s) s-\\lambda(k(s)),\n\\end{equation}\nwhere $k(s)$ is the unique root of $\\lambda'(k)=s$. Explain why the latter equation has a unique root.\n\n\\item \\label{excon2}\\exdiff{20} (Convexity of rate functions) Prove that rate functions obtained from the GE Theorem are strictly convex.
\n\n\\item \\label{exvar1}\\exdiff{12} (Varadhan's Theorem) Verify Varadhan's Theorem for the Gaussian and exponential sample means by explicitly calculating the Legendre-Fenchel transform of $I(s)$ obtained for these sample means. Compare your results with the expressions of $\\lambda(k)$ obtained from its definition.\n\n\\end{exercise}\n\n\\section{Applications of large deviation theory}\n\\label{secapp}\n\nWe study in this section examples of random variables and stochastic processes with interesting large deviation properties. We start by revisiting the simplest application of large deviation theory, namely, IID sample means, and then move to Markov processes, which are often used for modeling physical and man-made systems. More information about applications of large deviation theory in statistical physics can be found in Secs~5 and 6 of \\cite{touchette2009}, as well as in the contribution of Engel contained in this volume.\n\n\\subsection{IID sample means}\n\\label{secsanov}\n\nConsider again the sample mean\n\\begin{equation}\nS_n=\\frac{1}{n}\\sum_{i=1}^nX_i\n\\end{equation}\ninvolving $n$ real IID variables $X_1,\\ldots,X_n$ with common pdf $p(x)$. To determine whether $S_n$ satisfies an LDP and obtain its rate function, we can follow the GE Theorem and calculate the SCGF $\\lambda(k)$ defined in Eq.~(\\ref{eqscgf1}). Because of the IID nature of $S_n$, $\\lambda(k)$ takes a simple form:\n\\begin{equation}\n\\lambda(k)=\\ln E[\\mathrm{e}^{kX}].\n\\end{equation}\nIn this expression, $X$ is any of the $X_i$'s (remember they are IID). Thus to obtain an LDP for $S_n$, we only have to calculate the log-generating function of $X$, check that it is differentiable and, if so, calculate its Legendre-Fenchel transform (or in this case, its Legendre transform; see Exercise~\\ref{exlf1}). 
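For the exponential density of Eq.~(\ref{eqexpd1}) this program can be carried out explicitly: $E[\mathrm{e}^{kX}]=(1-k\mu)^{-1}$ for $k<1/\mu$ (the integral diverges otherwise), so $\lambda(k)=-\ln(1-k\mu)$. The sketch below (illustrative only) takes the Legendre-Fenchel transform on a grid of $k$ values and recovers the rate function of Eq.~(\ref{eqldexp1}):

```python
import math

# GE pipeline for the exponential sample mean: closed-form SCGF plus a
# brute-force Legendre-Fenchel transform on a grid of k values.
mu = 1.0

def scgf(k):
    # lambda(k) = ln E[e^{kX}] = -ln(1 - k mu), defined only for k < 1/mu
    return -math.log(1 - k * mu)

def I_numeric(s, kmin=-5.0, kmax=0.999, m=200000):
    # sup_k {k s - lambda(k)}, searched over k < 1/mu
    best = -math.inf
    for j in range(m + 1):
        k = kmin + (kmax - kmin) * j / m
        best = max(best, k * s - scgf(k))
    return best

for s in (0.5, 1.0, 2.0, 4.0):
    print(s, I_numeric(s), s / mu - 1 - math.log(s / mu))
```

The supremum is attained at $k=1/\mu-1/s$, which lies below $0$ for $s<\mu$ and in $(0,1/\mu)$ for $s>\mu$, so the restricted grid suffices; the divergence of $\lambda(k)$ at $k=1/\mu$ is an example of the existence issues mentioned earlier.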
Exercise~\\ref{exiid1} considers different distributions $p(x)$ for which this calculation can be carried out, including the Gaussian, exponential and Bernoulli distributions studied before.\n\nAs an extra exercise, let us attempt to find the rate function of a special sample mean defined by\n\\begin{equation}\nL_{n,j}=\\frac{1}{n}\\sum_{i=1}^n \\delta_{X_i,j}\n\\label{eqempvec1}\n\\end{equation}\nfor a sequence $X_1,\\ldots,X_n$ of $n$ discrete IID RVs with finite state space $\\mathcal{X}=\\{1,2,\\ldots,q\\}$. This sample mean is called the \\textbf{empirical distribution} of the $X_i$'s as it ``counts'' the number of times the value or symbol $j\\in\\mathcal{X}$ appears in a given realization of $X_1,\\ldots,X_n$. This number is normalized by the total number $n$ of RVs, so what we have is the \\textbf{empirical frequency} for the appearance of the symbol $j$ in realizations of $X_1,\\ldots,X_n$.\n\nThe values of $L_{n,j}$ for all $j\\in\\mathcal{X}$ can be put into a vector $\\mathbf{L}_n$ called the \\textbf{empirical vector}. For the Bernoulli sample mean, for example, $\\mathcal{X}=\\{0,1\\}$ and so $\\mathbf{L}_n$ is a two-dimensional vector $\\mathbf{L}_n=(L_{n,0},L_{n,1})$ containing the empirical frequency of $0$'s and $1$'s appearing in a realization $x_1,\\ldots,x_n$ of $X_1,\\ldots,X_n$:\n\\begin{equation}\nL_{n,0}=\\frac{\\#\\text{ 0's in } x_1,\\ldots,x_n}{n},\\quad L_{n,1}=\\frac{\\#\\text{ 1's in } x_1,\\ldots,x_n}{n}.\n\\end{equation}\n\nTo find the rate function associated with the random vector $\\mathbf{L}_n$, we apply the GE Theorem but adapt it to the case of random vectors by replacing $k$ in $\\lambda(k)$ by a vector $\\mathbf{k}$ having the same dimension as $\\mathbf{L}_n$. The calculation of $\\lambda(\\mathbf{k})$ is left as an exercise (see Exercise~\\ref{exsanov1}). The result is\n\\begin{equation}\n\\lambda(\\mathbf{k})=\\ln \\sum_{j\\in\\mathcal{X}} P_j\\, \\mathrm{e}^{k_j}\n\\label{eqsanov1}\n\\end{equation}\nwhere $P_j=P(X_i=j)$. 
It is easily checked that $\\lambda(\\mathbf{k})$ is differentiable in the vector sense, so we can use the GE Theorem to conclude that $\\mathbf{L}_n$ satisfies an LDP, which we write as\n\\begin{equation}\np(\\mathbf{L}_n={\\bm\\mu})\\approx \\mathrm{e}^{-n I({\\bm\\mu})},\n\\end{equation}\nwith a vector rate function $I({\\bm\\mu})$ given by the Legendre-Fenchel transform of $\\lambda(\\mathbf{k})$. The calculation of this transform is the subject of Exercise~\\ref{exsanov1}. The final result is\n\\begin{equation}\nI({\\bm\\mu})=\\sum_{j\\in\\mathcal{X}}\\mu_j\\ln\\frac{\\mu_j}{P_j}.\n\\label{eqsanov2}\n\\end{equation}\n\nThis rate function is called the \\textbf{relative entropy} or \\textbf{Kullback-Leibler divergence} \\cite{cover1991}. The full LDP for $\\mathbf{L}_n$ is referred to as \\textbf{Sanov's Theorem} (see \\cite{sanov1961} and Sec.~4.2 of \\cite{touchette2009}). \n\nIt can be checked that $I({\\bm\\mu})$ is a convex function of $\\bm\\mu$ and that it has a unique minimum and zero at ${\\bm\\mu}=\\mathbf{P}$, i.e., $\\mu_j=P_j$ for all $j\\in\\mathcal{X}$. As seen before, this property is an expression of the LLN, which says here that the empirical frequencies $L_{n,j}$ converge to the probabilities $P_j$ as $n\\rightarrow\\infty$. The LDP goes beyond this result by describing the fluctuations of $\\mathbf{L}_n$ around the typical value ${\\bm\\mu}=\\mathbf{P}$.\n\n\\subsection{Markov chains}\n\\label{subsecmc1}\n\nInstead of assuming that the sample mean $S_n$ arises from a sum of IID RVs $X_1,\\ldots,X_n$, consider the case where the $X_i$'s form a \\textbf{Markov chain}. 
This means that the joint pdf $p(X_1,\\ldots,X_n)$ has the form\n\\begin{equation}\np(X_1,\\ldots,X_n)=p(X_1)\\prod_{i=1}^{n-1} \\pi(X_{i+1}|X_i),\n\\label{eqmark1}\n\\end{equation}\nwhere $p(X_i)$ is some initial pdf for $X_1$ and $\\pi(X_{i+1}|X_i)$ is the \\textbf{transition probability density} that $X_{i+1}$ follows $X_i$ in the Markov sequence $X_1\\rightarrow X_2\\rightarrow\\cdots\\rightarrow X_n$ (see \\cite{grimmett2001} for background information on Markov chains). \n\nThe GE Theorem can still be applied in this case, but the expression of $\\lambda(k)$ is more complicated. Skipping the details of the calculation (see Sec.~4.3 of \\cite{touchette2009}), we arrive at the following result: provided that the Markov chain is homogeneous and ergodic (see \\cite{grimmett2001} for a definition of ergodicity), the SCGF of $S_n$ is given by\n\\begin{equation}\n\\lambda(k)=\\ln \\zeta(\\tilde\\Pi_k),\n\\label{eqscgfmc1}\n\\end{equation}\nwhere $\\zeta(\\tilde\\Pi_k)$ is the \\textbf{dominant eigenvalue} (i.e., with largest magnitude) of the matrix $\\tilde\\Pi_k$ whose elements $\\tilde\\pi_k(x,x')$ are given by $\\tilde\\pi_k(x,x')=\\pi(x'|x)\\mathrm{e}^{k x'}$. We call the matrix $\\tilde\\Pi_k$ the \\textbf{tilted matrix} associated with $S_n$. If the Markov chain is finite, it can be proved furthermore that $\\lambda(k)$ is analytic and so differentiable. From the GE Theorem, we then conclude that $S_n$ has an LDP with rate function\n\\begin{equation}\nI(s)=\\sup_{k\\in\\mathbb{R}}\\{ks-\\ln \\zeta(\\tilde\\Pi_k)\\}.\n\\end{equation}\nSome care must be taken if the Markov chain has an infinite number of states. In this case, $\\lambda(k)$ is not necessarily analytic and may not even exist (see, e.g., \\cite{harris2006,harris2007,rakos2008} for illustrations).\n\nAn application of the above result for a simple Bernoulli Markov chain is considered in Exercise~\\ref{exbmc1}. 
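The eigenvalue formula is easy to test on a two-state chain. The sketch below (illustrative only; the transition probabilities are chosen arbitrarily) builds the $2\times 2$ tilted matrix for $S_n=\frac{1}{n}\sum_i X_i$ with $X_i\in\{0,1\}$, computes its dominant eigenvalue in closed form, and checks that the resulting rate function vanishes at the stationary mean:

```python
import math

# SCGF of S_n for a two-state Markov chain on {0,1}, via the dominant
# eigenvalue of the 2x2 tilted matrix pi_k(x, x') = pi(x'|x) e^{k x'}.
p, q = 0.3, 0.2   # pi(1|0) = p, pi(0|1) = q, chosen arbitrarily

def scgf(k):
    a, b = 1 - p, p * math.exp(k)       # row x = 0 of the tilted matrix
    c, d = q, (1 - q) * math.exp(k)     # row x = 1
    tr, det = a + d, a * d - b * c
    zeta = (tr + math.sqrt(tr * tr - 4 * det)) / 2   # Perron root
    return math.log(zeta)

def rate(s, m=100000):
    # brute-force Legendre-Fenchel transform of the SCGF
    best = -math.inf
    for j in range(m + 1):
        k = -10.0 + 20.0 * j / m
        best = max(best, k * s - scgf(k))
    return best

s_star = p / (p + q)          # stationary mean of the X_i's
print(scgf(0.0))              # 0, as it must be: the untilted matrix is stochastic
print(s_star, rate(s_star))   # the rate function vanishes at s*
```

For a $2\times 2$ nonnegative matrix the discriminant $(a-d)^2+4bc$ is nonnegative, so the Perron root is obtained directly from the trace and determinant without any numerical eigenvalue solver.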
In Exercises~\\ref{excurr1} and \\ref{excurr2}, the case of random variables having the form\n\\begin{equation}\nQ_n=\\frac{1}{n}\\sum_{i=1}^{n-1} q(x_i,x_{i+1})\n\\label{eqcurr1}\n\\end{equation}\nis covered. This type of RV arises in statistical physics when studying currents in stochastic models of interacting particles (see \\cite{harris2007}). The tilted matrix $\\tilde\\Pi_k$ associated with $Q_n$ has the form $\\tilde\\pi_{k}(x,x')=\\pi(x'|x)\\mathrm{e}^{k q(x,x')}$.\n\n\n\\subsection{Continuous-time Markov processes}\n\nThe preceding subsection can be generalized to ergodic Markov processes evolving continuously in time by taking a continuous time limit of discrete-time ergodic Markov chains. \n\nTo illustrate this limit, consider an ergodic continuous-time process described by the state $X(t)$ for $0\\leq t \\leq T$. For an infinitesimal \\textbf{time-step} $\\Delta t$, we discretize this process in time into a sequence of $n+1$ states $X_0,X_1,\\ldots,X_n$ with $n=T\/\\Delta t$ and $X_i=X(i\\Delta t)$, $i=0,1,\\ldots,n$. The sequence $X_0,X_1,\\ldots,X_n$ is an ergodic Markov chain with infinitesimal transition matrix $\\Pi(\\Delta t)$ given by\n\\begin{equation}\n\\Pi(\\Delta t)=\\pi (x(t+\\Delta t)|x(t))=\\mathrm{e}^{G\\Delta t}=I+G \\Delta t +o(\\Delta t),\n\\end{equation}\nwhere $I$ is the identity matrix and $G$ the \\textbf{generator} of $X(t)$. With this discretization, it is now possible to study LDPs for processes involving $X(t)$ by studying these processes in discrete time at the level of the Markov chain $X_0,X_1,\\ldots,X_n$, and transfer these LDPs into continuous time by taking the limits $\\Delta t\\rightarrow 0$, $n\\rightarrow\\infty$. 
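As a quick numerical check of this expansion, one can verify that $\Pi(\Delta t)=\mathrm{e}^{G\Delta t}$ is a stochastic matrix and agrees with $I+G\Delta t$ up to terms of order $\Delta t^2$. The generator below is illustrative (it is the one appearing in Exercise~\ref{exrt1}), and the matrix exponential is computed with a truncated Taylor series:

```python
import numpy as np

alpha, beta = 1.0, 2.0                  # illustrative jump rates
G = np.array([[-alpha, beta],           # columns sum to zero, so exp(G dt)
              [alpha, -beta]])          # preserves probability

def expm(A, terms=40):
    """Matrix exponential by truncated Taylor series (fine for small matrices)."""
    out, term = np.eye(len(A)), np.eye(len(A))
    for j in range(1, terms):
        term = term @ A / j
        out += term
    return out

dt = 1e-3
Pi = expm(G * dt)                       # infinitesimal transition matrix Pi(dt)
err = np.max(np.abs(Pi - np.eye(2) - G * dt))

print(Pi.sum(axis=0))                   # columns of Pi(dt) sum to 1
print(err)                              # deviation from I + G dt is O(dt^2)
```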
\n\nAs a general application of this procedure, consider a so-called \\textbf{additive process} defined by \n\\begin{equation}\nS_T=\\frac{1}{T}\\int_0^T X(t)\\, \\mathrm{d} t.\n\\end{equation}\nThe discrete-time version of this RV is the sample mean\n\\begin{equation}\nS_n=\\frac{1}{n\\Delta t}\\sum_{i=0}^n X_i\\, \\Delta t=\\frac{1}{n}\\sum_{i=0}^n X_i.\n\\end{equation}\nFrom this association, we find that the SCGF of $S_T$, defined by the limit\n\\begin{equation}\n\\lambda(k)=\\lim_{T\\rightarrow\\infty}\\frac{1}{T}\\ln E[\\mathrm{e}^{Tk S_T}],\\qquad k\\in\\mathbb{R},\n\\label{eqscgfct1}\n\\end{equation}\nis given by\n\\begin{equation}\n\\lambda(k)=\\lim_{\\Delta t\\rightarrow 0}\\lim_{n\\rightarrow\\infty}\\frac{1}{n \\Delta t}\\ln E[\\mathrm{e}^{k \\Delta t\\sum_{i=0}^n X_i}]\n\\end{equation}\nat the level of the discrete-time Markov chain. \n\nAccording to the previous subsection, the limit above reduces to $\\ln\\zeta(\\tilde\\Pi_{k}(\\Delta t))$, where $\\tilde\\Pi_{k}(\\Delta t)$ is the matrix (or operator) corresponding to\n\\begin{eqnarray}\n\\tilde\\Pi_{k}(\\Delta t) &=& \\mathrm{e}^{k\\Delta t x'} \\Pi(\\Delta t)\\nonumber\\\\\n&=& \\big(1+kx'\\Delta t+o(\\Delta t)\\big)\\big(I+G\\Delta t+o(\\Delta t)\\big)\\nonumber\\\\\n&=& I+\\tilde G_k \\Delta t+o(\\Delta t),\n\\label{eqttm1}\n\\end{eqnarray}\nwhere\n\\begin{equation}\n\\tilde G_k=G+k x' \\delta_{x,x'}.\n\\end{equation} \nFor the continuous-time process $X(t)$, we must therefore have\n\\begin{equation}\n\\lambda(k)=\\zeta(\\tilde G_k),\n\\label{eqscgfmc2}\n\\end{equation}\nwhere $\\zeta(\\tilde G_k)$ is the dominant eigenvalue of the \\textbf{tilted generator} $\\tilde G_k$.\\footnote{As for Markov chains, this result holds for continuous, ergodic processes with finite state-space. 
For infinite-state or continuous-space processes, a similar result holds provided $\\tilde G_k$ has an isolated dominant eigenvalue.} Note that the reason why the logarithm does not appear in the expression of $\\lambda(k)$ for the continuous-time process\\footnote{Compare with Eq.~(\\ref{eqscgfmc1}).} is that $\\zeta$ is now the dominant eigenvalue of the generator, the infinitesimal transition matrix being itself the exponential of this generator.\n\nWith the knowledge of $\\lambda(k)$, we can obtain an LDP for $S_T$ using the GE Theorem: If the dominant eigenvalue is differentiable in $k$, then $S_T$ satisfies an LDP in the long-time limit, $T\\rightarrow\\infty$, which we write as $p_{S_T}(s)\\approx \\mathrm{e}^{-T I(s)}$, where $I(s)$ is the Legendre-Fenchel transform of $\\lambda(k)$. Some applications of this result are presented in Exercises~\\ref{exrt1} and \\ref{excurr2}. Note that in the case of current-type processes having the form of Eq.~(\\ref{eqcurr1}), the tilted generator is not simply given by $\\tilde G_k=G+kx'\\delta_{x,x'}$; see Exercise~\\ref{excurr2}.\n\n\\subsection{Path large deviations}\n\\label{subsecpathldp}\n\nWe complete our tour of mathematical applications of large deviation theory by studying a different large deviation limit, namely, the low-noise limit of the following \\textbf{stochastic differential equation} (SDE for short):\n\\begin{equation}\n\\dot x(t)=f(x(t))+\\sqrt{\\varepsilon}\\, \\xi(t),\\qquad x(0)=0,\n\\label{eqsde1}\n\\end{equation} \nwhich involves a \\textbf{force} $f(x)$ and a \\textbf{Gaussian white noise} $\\xi(t)$ with the properties $E[\\xi(t)]=0$ and $E[\\xi(t')\\xi(t)]=\\delta(t'-t)$; see Chap.~XVI of \\cite{kampen1992} for background information on SDEs.\n\nWe are interested in studying for this process the pdf of a given \\textbf{random path} $\\{x(t)\\}_{t=0}^T$ of duration $T$ in the limit where the \\textbf{noise power} $\\varepsilon$ vanishes. 
This abstract pdf can be defined heuristically using path integral methods (see Sec.~6.1 of \\cite{touchette2009}). In the following, we denote this pdf by the functional notation $p[x]$ as a shorthand for $p(\\{x(t)\\}_{t=0}^T)$.\n\nThe idea behind seeing the low-noise limit as a large deviation limit is that, as $\\varepsilon\\rightarrow 0$, the random path arising from the SDE above should converge in probability to the \\textbf{deterministic path} $x(t)$ solving the ordinary differential equation\n\\begin{equation}\n\\dot x(t)=f(x(t)),\\qquad x(0)=0.\n\\label{eqdet1}\n\\end{equation}\nThis convergence is an LLN-type result, and so in the spirit of large deviation theory we are interested in quantifying the likelihood that a random path $\\{x(t)\\}_{t=0}^T$ ventures away from the deterministic path in the limit $\\varepsilon\\rightarrow 0$. The functional LDP that characterizes these path fluctuations has the form\n\\begin{equation}\np[x]\\approx \\mathrm{e}^{-I[x]\/\\varepsilon},\n\\end{equation}\nwhere\n\\begin{equation}\nI[x]=\\frac{1}{2}\\int_0^T [\\dot x(t)-f(x(t))]^2\\, \\mathrm{d} t.\n\\label{eqldpf1}\n\\end{equation}\nSee \\cite{freidlin1984} and Sec.~6.1 of \\cite{touchette2009} for historical sources on this LDP.\n\nThe rate functional $I[x]$ is called the \\textbf{action}, \\textbf{Lagrangian} or \\textbf{entropy} of the path $\\{x(t)\\}_{t=0}^T$. The names ``action'' and ``Lagrangian'' come from an analogy with the action of quantum trajectories in the path integral approach of quantum mechanics (see Sec.~6.1 of \\cite{touchette2009}). There is also a close analogy between the low-noise limit of SDEs and the semi-classical or WKB approximation of quantum mechanics \\cite{touchette2009}. \n\nThe path LDP above can be generalized to higher-dimensional SDEs as well as SDEs involving state-dependent noise and correlated noises (see Sec.~6.1 of \\cite{touchette2009}). 
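The action functional is straightforward to evaluate on time-discretized paths. The sketch below uses an illustrative linear force $f(x)=-\gamma x$ and the factor $1/2$ of the standard Freidlin-Wentzell normalization; the discretized action vanishes on the deterministic path and is positive on a path that ventures away from it:

```python
import numpy as np

def action(x, f, dt):
    """Discretized path action I[x] = (1/2) sum_i (dx_i/dt - f(x_i))^2 dt,
    using forward differences for the velocity."""
    xdot = np.diff(x) / dt
    return 0.5 * np.sum((xdot - f(x[:-1]))**2) * dt

gamma = 1.0
f = lambda x: -gamma * x               # illustrative force
T, N = 2.0, 2000
t = np.linspace(0.0, T, N + 1)
dt = T / N

x_det = np.zeros(N + 1)                # deterministic path solving xdot = f(x), x(0) = 0
x_dev = 0.5 * t                        # a path that ventures away from it

print(action(x_det, f, dt))            # 0: no cost for the deterministic path
print(action(x_dev, f, dt))            # positive, close to the exact value 13/12
```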
In all cases, the minimum and zero of the rate functional is the trajectory of the deterministic system obtained in the zero-noise limit. This is verified for the 1D system considered above: $I[x]\\geq 0$ for all trajectories and $I[x]=0$ for the unique trajectory solving Eq.~(\\ref{eqdet1}).\n\nFunctional LDPs are the most refined LDPs that can be derived for SDEs as they characterize the probability of complete trajectories. Other ``coarser'' LDPs can be derived from these by contraction. For example, we might be interested in determining the pdf $p(x,T)$ of the state $x(T)$ reached after a time $T$. The contraction in this case is obvious: $p(x,T)$ must have the large deviation form $p(x,T)\\approx \\mathrm{e}^{-V(x,T)\/\\varepsilon}$ with\n\\begin{equation}\nV(x,T)=\\inf_{x(t):x(0)=0,x(T)=x} I[x].\n\\end{equation}\nThat is, the probability of reaching $x(T)=x$ from $x(0)=0$ is determined by the path connecting these two endpoints having the largest probability. We call this path the \\textbf{optimal path}, \\textbf{maximum likelihood path} or \\textbf{instanton}. Using variational calculus techniques familiar from classical mechanics, it can be proved that this path satisfies an Euler-Lagrange-type equation as well as a Hamilton-type equation (see Sec.~6.1 of \\cite{touchette2009}). Applications of these equations are covered in Exercises~\\ref{exoup1}--\\ref{expol1}.\n\nQuantities similar to the additive process $S_T$ considered in the previous subsection can also be defined for SDEs (see Exercises~\\ref{exrt1} and \\ref{excurr1}). An interesting aspect of these quantities is that their LDPs can involve the noise power $\\varepsilon$ if the low-noise limit is taken, as well as the integration time $T$, which arises because of the additive (in the sense of sample mean) nature of these quantities. In this case, the limit $T\\rightarrow\\infty$ must be taken before the low-noise limit $\\varepsilon\\rightarrow 0$. 
If $\\varepsilon\\rightarrow 0$ is taken first, the system considered is no longer random, which means that there can be no LDP in time.\n\n\\subsection{Physical applications}\n\nSome applications of the large deviation results seen so far are covered in the contribution of Engel to this volume. The following list gives an indication of these applications and some key references for learning more about them. A more complete presentation of the applications of large deviation theory in statistical physics can be found in Secs.~5 and 6 of~\\cite{touchette2009}.\n\\begin{itemize}\n\\item \\textbf{Equilibrium systems:} Equilibrium statistical mechanics, as embodied by the ensemble theory of Boltzmann and Gibbs, can be seen with hindsight as a large deviation theory of many-body systems at equilibrium. This becomes evident by realizing that the thermodynamic limit is a large deviation limit, that the entropy is the equivalent of a rate function and the free energy the equivalent of an SCGF. Moreover, the Legendre transform of thermodynamics connecting the entropy and free energy is nothing but the Legendre-Fenchel transform connecting the rate function and the SCGF in the GE Theorem and in Varadhan's Theorem. For a more complete explanation of these analogies, and historical sources on the development of large deviation theory in relation to equilibrium statistical mechanics, see the book by Ellis \\cite{ellis1985} and Sec.~3.7 of \\cite{touchette2009}. 
\n \n\\item \\textbf{Chaotic systems and multifractals:} The so-called \\textbf{thermodynamic formalism} of dynamical systems, developed by Ruelle \\cite{ruelle2004} and others, can also be re-interpreted with hindsight as an application of large deviation theory for the study of chaotic systems.\\footnote{There is a reference to this connection in a note of Ruelle's popular book, \\textit{Chance and Chaos} \\cite{ruelle1993} (see~Note 2 of Chap.~19).} There are two quantities in this theory playing the role of the SCGF, namely, the \\textbf{topological pressure} and the \\textbf{structure function}. The Legendre transform appearing in this theory is also an analogue of the one encountered in large deviation theory. References to these analogies can be found in Secs.~7.1 and 7.2 of \\cite{touchette2009}.\n\n\\item \\textbf{Nonequilibrium systems:} Large deviation theory is becoming the standard formalism used in studies of nonequilibrium systems modelled by SDEs and Markov processes in general. In fact, large deviation theory is currently experiencing a sort of revival in physics and mathematics as a result of its growing application in nonequilibrium statistical mechanics. Many LDPs have been derived in this context: LDPs for the current fluctuations or the occupation of interacting particle models, such as the exclusion process, the zero-range process (see \\cite{harris2005}) and their many variants \\cite{spohn1991}, as well as LDPs for work-related and entropy-production-related quantities for nonequilibrium systems modelled with SDEs \\cite{chetrite2008,chetrite2007a}. Good entry points into this vast field of research are \\cite{derrida2007} and \\cite{harris2007}. \n\n\\item \\textbf{Fluctuation relations:} Several LDPs have come to be studied in recent years under the name of fluctuation relations. To illustrate these results, consider the additive process $S_T$ considered earlier and assume that this process admits an LDP with rate function $I(s)$. 
In many cases, it is interesting not only to know that $S_T$ admits an LDP but to know how probable positive fluctuations of $S_T$ are compared to negative fluctuations. For this purpose, it is common to study the ratio\n\\begin{equation}\n\\frac{p_{S_T}(s)}{p_{S_T}(-s)},\n\\end{equation}\nwhich reduces to\n\\begin{equation}\n\\frac{p_{S_T}(s)}{p_{S_T}(-s)}\\approx \\mathrm{e}^{T[I(-s)-I(s)]}\n\\end{equation}\nif we assume an LDP for $S_T$. In many cases, the difference $I(-s)-I(s)$ is linear in $s$, and one then says that $S_T$ satisfies a \\textbf{conventional fluctuation relation}, whereas if it is nonlinear in $s$, then one says that $S_T$ satisfies an \\textbf{extended fluctuation relation}. Other types of fluctuation relations have come to be defined in addition to these; for more information, the reader is referred to Sec.~6.3 of \\cite{touchette2009}. A list of the many physical systems for which fluctuation relations have been derived or observed can also be found in this reference.\n\\end{itemize}\n\n\\subsection{Exercises}\n\\label{secex2}\n\n\\begin{exercise}\n\\item \\label{exiid1}\\exdiff{12} (IID sample means) Use the GE Theorem to find the rate function of the IID sample mean $S_n$ for the following probability distributions and densities:\n\\begin{itemize}\n\\item~Gaussian:\n\\begin{equation}\np(x)=\\frac{1}{\\sqrt{2\\pi \\sigma^2}} \\mathrm{e}^{-(x-\\mu)^2\/(2\\sigma^2)},\\qquad x\\in\\mathbb{R}.\n\\end{equation}\n\\item~Bernoulli: $X_i\\in\\{0,1\\}$, $P(X_i=0)=1-\\alpha$, $P(X_i=1)=\\alpha$.\n\\item~Exponential: \n\\begin{equation}\np(x)=\\frac{1}{\\mu} \\mathrm{e}^{-x\/\\mu},\\qquad x\\in [0,\\infty),\\quad \\mu>0.\n\\end{equation}\n\\item~Uniform:\n\\begin{equation}\np(x)=\\left\\{\n\\begin{array}{lll}\n\\frac{1}{a} & & x\\in[0,a]\\\\\n0 & & \\text{otherwise}.\n\\end{array}\n\\right.\n\\end{equation}\n\\item~Cauchy:\n\\begin{equation}\np(x)=\\frac{\\sigma}{\\pi(x^2+\\sigma^2)},\\qquad x\\in\\mathbb{R},\\quad 
\\sigma>0.\n\\end{equation}\n\\end{itemize}\n\n\\item \\label{exnc1}\\exdiff{15} (Nonconvex rate functions) Rate functions obtained from the GE Theorem are necessarily convex (strictly convex, in fact; see Exercise~\\ref{excon2}), but rate functions in general need not be convex. Consider, as an example, the process defined by\n\\begin{equation}\nS_n=Y+\\frac{1}{n}\\sum_{i=1}^n X_i,\n\\end{equation}\nwhere $Y=\\pm1$ with probability $\\frac{1}{2}$ and the $X_i$'s are Gaussian IID RVs. Find the rate function for $S_n$ assuming that $Y$ is independent of the $X_i$'s. Then find the corresponding SCGF. What is the relation between the Legendre-Fenchel transform of $\\lambda(k)$ and the rate function $I(s)$? How is the nonconvexity of $I(s)$ related to the differentiability of $\\lambda(k)$? (See Example~4.7 of \\cite{touchette2009} for the solution.)\n\n\\item \\label{exexp2}\\exdiff{15} (Exponential mixture) Repeat the previous exercise by replacing the Bernoulli $Y$ with $Y=Z\/n$, where $Z$ is an exponential RV with mean $1$. (See Example~4.8 of \\cite{touchette2009} for the solution.)\n\n\\item \\label{exprod1}\\exdiff{12} (Product process) Find the rate function of \n\\begin{equation}\nZ_n=\\left(\\prod_{i=1}^n X_i\\right)^{1\/n}\n\\end{equation}\nwhere $X_i\\in\\{1,2\\}$ with $p(X_i=1)=\\alpha$ and $p(X_i=2)=1-\\alpha$, $0<\\alpha<1$.\n\n\\item \\label{exsp1}\\exdiff{15} (Self-process) Consider a sequence of IID Gaussian RVs $X_1,\\ldots,X_n$. Find the rate function of the so-called \\textbf{self-process} defined by \n\\begin{equation}\nS_n=-\\frac{1}{n}\\ln p(X_1,\\ldots,X_n).\n\\end{equation}\n(See Sec.~4.5 of \\cite{touchette2009} for information about this process.) 
Repeat the problem for other choices of pdf for the RVs.\n\n\\item \\label{exild1}\\exdiff{20} (Iterated large deviations~\\cite{lowe1996}) Consider the sample mean\n\\begin{equation}\nS_{m,n}=\\frac{1}{m}\\sum_{i=1}^m X_{i}^{(n)}\n\\end{equation}\ninvolving $m$ IID copies of a random variable $X^{(n)}$. Show that if $X^{(n)}$ satisfies an LDP of the form\n\\begin{equation}\np_{X^{(n)}}(x)\\approx \\mathrm{e}^{-nI(x)},\n\\end{equation}\nthen $S_{m,n}$ satisfies an LDP having the form\n\\begin{equation}\np_{S_{m,n}}(s)\\approx \\mathrm{e}^{-mn I(s)}.\n\\end{equation}\n\n\\item \\label{exprw1}\\exdiff{40} (Persistent random walk) Use the result of the previous exercise to find the rate function of \n\\begin{equation}\nS_n=\\frac{1}{n}\\sum_{i=1}^n X_i,\n\\end{equation}\nwhere the $X_i$'s are independent Bernoulli RVs with non-identical distribution $P(X_i=0)=\\alpha^i$ and $P(X_i=1)=1-\\alpha^i$ with $0<\\alpha<1$.\n\n\\item \\label{exsanov1}\\exdiff{15} (Sanov's Theorem) Consider the empirical vector $\\mathbf{L}_n$ with components defined in Eq.~(\\ref{eqempvec1}). Derive the expression found in Eq.~(\\ref{eqsanov1}) for\n\\begin{equation}\n\\lambda(\\mathbf{k})=\\lim_{n\\rightarrow\\infty}\\frac{1}{n}\\ln E[\\mathrm{e}^{n\\, \\mathbf{k}\\cdot\\mathbf{L}_n}],\n\\end{equation} \nwhere $\\mathbf{k}\\cdot\\mathbf{L}_n$ denotes the scalar product of $\\mathbf{k}$ and $\\mathbf{L}_n$. Then obtain the rate function found in (\\ref{eqsanov2}) by calculating the Legendre transform of $\\lambda(\\mathbf{k})$. Check explicitly that the rate function is convex and has a single minimum and zero.\n\n\\item \\label{excp1}\\exdiff{20} (Contraction principle) Repeat the first exercise of this section by using the contraction principle. That is, use the rate function of the empirical vector $\\mathbf{L}_n$ to obtain the rate function of $S_n$. What is the mapping from $\\mathbf{L}_n$ to $S_n$? 
Is this mapping many-to-one or one-to-one?\n \n\\item \\label{exbmc1}\\exdiff{17} (Bernoulli Markov chain) Consider the Bernoulli sample mean \n\\begin{equation}\nS_n=\\frac{1}{n}\\sum_{i=1}^n X_i,\\qquad X_i\\in\\{0,1\\}.\n\\end{equation}\nFind the expression of $\\lambda(k)$ and $I(s)$ for this process assuming that the $X_i$'s form a Markov process with symmetric transition matrix \n\\begin{equation}\n\\pi (x'|x)=\\left\\{\n\\begin{array}{lll}\n1-\\alpha & & x'=x\\\\\n\\alpha & & x'\\neq x\n\\end{array}\n\\right.\n\\end{equation}\nwith $0\\leq \\alpha\\leq 1$. (See Example~4.4 of \\cite{touchette2009} for the solution.)\n\n\\item \\label{exrt1}\\exdiff{20} (Residence time) Consider a Markov process with state space $\\mathcal{X}=\\{0,1\\}$ and generator\n\\begin{equation}\nG=\\left(\n\\begin{array}{cc}\n-\\alpha & \\beta\\\\\n\\alpha & -\\beta\n\\end{array}\n\\right).\n\\end{equation}\nFind for this process the rate function of the random variable\n\\begin{equation}\nL_T=\\frac{1}{T} \\int_0^T \\delta_{X(t),0}\\, \\mathrm{d} t,\n\\end{equation} \nwhich represents the fraction of time the state $X(t)$ of the Markov process spends in the state $X=0$ over a period of time $T$.\n\n\\item \\label{excurr1}\\exdiff{20} (Current fluctuations in discrete time) Consider the Markov chain of Exercise~\\ref{exbmc1}. 
Find the rate function of the \\textbf{mean current} $Q_n$ defined as \n\\begin{equation}\nQ_n=\\frac{1}{n}\\sum_{i=1}^n f(x_{i+1},x_i),\n\\end{equation} \n\\begin{equation}\nf (x',x)=\\left\\{\n\\begin{array}{lll}\n0 & & x'=x\\\\\n1 & & x'\\neq x.\n\\end{array}\n\\right.\n\\end{equation}\n$Q_n$ represents the mean number of jumps between the states $0$ and $1$: at each transition of the Markov chain, the current is incremented by 1 whenever a jump between the states $0$ and $1$ occurs.\n\n\\item \\label{excurr2}\\exdiff{22} (Current fluctuations in continuous time) Repeat the previous exercise for the Markov process of Exercise~\\ref{exrt1} and the current $Q_T$ defined as\n\\begin{equation}\nQ_T=\\lim_{\\Delta t\\rightarrow 0}\\frac{1}{T}\\sum_{i=0}^{n-1} f(x_{i+1},x_i),\n\\end{equation}\nwhere $x_i=x(i\\Delta t)$ and $f(x',x)$ as above, i.e., $f(x_{i+1},x_i)=1-\\delta_{x_i,x_{i+1}}$. Show for this process that the tilted generator is\n\\begin{equation}\n\\tilde G_k=\\left(\n\\begin{array}{cc}\n-\\alpha & \\beta\\,\\mathrm{e}^k\\\\\n\\alpha\\, \\mathrm{e}^k & -\\beta\n\\end{array}\n\\right).\n\\end{equation}\n\n\\item \\label{exoup1}\\exdiff{17} (Ornstein-Uhlenbeck process) Consider the following linear SDE:\n\\begin{equation}\n\\dot x(t)=-\\gamma x(t)+\\sqrt{\\varepsilon}\\, \\xi(t),\n\\end{equation}\noften referred to as the \\textbf{Langevin equation} or \\textbf{Ornstein-Uhlenbeck process}. Find for this SDE the solution of the optimal path connecting the initial point $x(0)=0$ with the fluctuation $x(T)=x$. From the solution, find the pdf $p(x,T)$ as well as the stationary pdf obtained in the limit $T\\rightarrow\\infty$. 
Assume $\\varepsilon\\rightarrow 0$ throughout but then verify that the results obtained are valid for all $\\varepsilon>0$.\n\n\\item \\label{excons1}\\exdiff{20} (Conservative system) Show by large deviation techniques that the stationary pdf of the SDE \n\\begin{equation}\n\\dot \\mathbf{x} (t)=-\\nabla U(\\mathbf{x}(t))+\\sqrt{\\varepsilon}\\, \\xi(t)\n\\end{equation}\nadmits the LDP $p(\\mathbf{x})\\approx \\mathrm{e}^{-V(\\mathbf{x})\/\\varepsilon}$ with rate function $V(\\mathbf{x})=2U(\\mathbf{x})$. The rate function $V(\\mathbf{x})$ is called the \\textbf{quasi-potential}.\n\n\\item \\label{extrans1}\\exdiff{25} (Transversal force) Show that the LDP found in the previous exercise also holds for the SDE\n\\begin{equation}\n\\dot \\mathbf{x} (t)=-\\nabla U(\\mathbf{x}(t))+\\mathbf{A}(\\mathbf{x})+\\sqrt{\\varepsilon}\\, \\xi(t)\n\\end{equation}\nif $\\nabla U\\cdot \\mathbf{A}=0$. (See Sec.~4.3 of \\cite{freidlin1984} for the solution.)\n\n\\item \\label{expol1}\\exdiff{25} (Noisy Van der Pol oscillator) The following set of SDEs describes a noisy version of the well-known Van der Pol equation:\n\\begin{eqnarray}\n\\dot x&=&v\\nonumber\\\\\n\\dot v&=&-x+v(\\alpha-x^2-v^2)+\\sqrt{\\varepsilon}\\, \\xi.\n\\end{eqnarray}\nThe parameter $\\alpha\\in\\mathbb{R}$ controls a bifurcation of the zero-noise system: for $\\alpha\\leq 0$, the system with $\\xi=0$ has a single attracting point at the origin, whereas for $\\alpha>0$, it has a stable limit cycle centered on the origin. Show that the stationary distribution of the noisy oscillator has the large deviation form $p(x,v)\\approx \\mathrm{e}^{-U(x,v)\/\\varepsilon}$ as $\\varepsilon\\rightarrow 0$ with rate function \n\\begin{equation}\nU(x,v)=-\\alpha (x^2+v^2)+\\frac{1}{2}(x^2+v^2)^2.\n\\end{equation}\nFind a different set of SDEs that has the same rate function. 
(See \\cite{graham1989} for the solution.)\n\n\\item \\label{exbmp1}\\exdiff{30} (Dragged Brownian particle \\cite{zon2003a}) The stochastic dynamics of a tiny glass bead immersed in water and pulled with laser tweezers at constant velocity can be modelled in the overdamped limit by the following reduced Langevin equation:\n\\begin{equation}\n\\dot x=-(x(t)-\\nu t)+\\sqrt{\\varepsilon}\\, \\xi(t),\n\\end{equation}\nwhere $x(t)$ is the position of the glass bead, $\\nu$ the pulling velocity, $\\xi(t)$ a Gaussian white noise modeling the influence of the surrounding fluid, and $\\varepsilon$ the noise power related to the temperature of the fluid. Show that the \\textbf{mean work} done by the laser as it pulls the glass bead over a time $T$, which is defined as\n\\begin{equation}\nW_T=-\\frac{\\nu}{T}\\int_0^T (x(t)-\\nu t)\\, \\mathrm{d} t,\n\\end{equation}\nsatisfies an LDP in the limit $T\\rightarrow\\infty$. Derive this LDP by assuming first that $\\varepsilon\\rightarrow 0$, and then derive the LDP without this assumption. Finally, determine whether $W_T$ satisfies a conventional or extended fluctuation relation, and study the limit $\\nu\\rightarrow 0$. (See Example~6.9 of \\cite{touchette2009} for more references on this model.)\n\\end{exercise}\n\n\\section{Numerical estimation of large deviation probabilities}\n\\label{secnum1}\n\nThe previous sections might give the false impression that rate functions can be calculated explicitly for many stochastic processes. In fact, exact and explicit expressions of rate functions can be found only in a few simple cases. 
For most stochastic processes of scientific interest (e.g., noisy nonlinear dynamical systems, chemical reactions, queues, etc.), we have to rely on analytical approximations and numerical methods to evaluate rate functions.\n\nThe rest of these notes attempts to give an overview of some of these numerical methods and to illustrate them with the simple processes (IID sample means and Markov processes) treated before. The goal is to give a flavor of the general ideas behind large deviation simulations rather than a complete survey, so only a subset of the many methods that have come to be devised for numerically estimating rate functions are covered in what follows. Other methods not treated here, such as the transition path method discussed by Dellago in this volume, are mentioned at the end of the next section.\n\n\\subsection{Direct sampling}\n\nThe problem addressed in this section is to obtain a numerical estimate of the pdf $p_{S_n}(s)$ for a real random variable $S_n$ satisfying an LDP, and to extract from this an estimate of the rate function $I(s)$.\\footnote{The literature on large deviation simulation usually considers the estimation of $P(S_n\\in A)$ for some set $A$ rather than the estimation of the whole pdf of $S_n$.} To be general, we take $S_n$ to be a function of $n$ RVs $X_1,\\ldots,X_n$, which at this point are not necessarily IID. To simplify the notation, we will use the shorthand $\\bm\\om=X_1,\\ldots,X_n$. Thus we write $S_n$ as the function $S_n(\\bm\\om)$ and denote by $p(\\bm\\om)$ the joint pdf of the $X_i$'s.\\footnote{For simplicity, we consider the $X_i$'s to be real RVs. The case of discrete RVs follows with slight changes of notation.}\n\nNumerically, we cannot of course obtain $p_{S_n}(s)$ or $I(s)$ for all $s\\in\\mathbb{R}$, but only for a finite number of values $s$, which we take for simplicity to be equally spaced with a small step $\\Delta s$. 
Following our discussion of the LDP, we thus attempt to estimate the coarse-grained pdf\n\\begin{equation}\np_{S_n}(s)=\\frac{P(S_n\\in [s,s+\\Delta s])}{\\Delta s}=\\frac{P(S_n\\in\\Delta_s)}{\\Delta s},\n\\label{eqcgpdf1}\n\\end{equation} \nwhere $\\Delta_s$ denotes the small interval $[s,s+\\Delta s]$ anchored at the value $s$.\\footnote{Though not explicitly noted, the coarse-grained pdf depends on $\\Delta s$; see Footnote~\\ref{fnldp1}.} \n\nTo construct this estimate, we follow the \\textbf{statistical sampling} or \\textbf{Monte Carlo method}, which we break down into the following steps (see the contribution of Katzgraber in this volume for more details):\n\\begin{enumerate}\n\\item Generate a \\textbf{sample} $\\{\\bm\\om^{(j)}\\}_{j=1}^L$ of $L$ copies or realizations of the sequence $\\bm\\om$ from its pdf $p(\\bm\\om)$.\n\\item Obtain from this sample a sample $\\{s^{(j)}\\}_{j=1}^L$ of values or realizations for $S_n$:\n\\begin{equation}\ns^{(j)}=S_n(\\bm\\om^{(j)}),\\qquad j=1,\\ldots,L.\n\\end{equation}\n\\item Estimate $P(S_n\\in\\Delta_s)$ by calculating the sample mean\n\\begin{equation}\n\\hat P_L(\\Delta_s)=\\frac{1}{L}\\sum_{j=1}^L 1{\\hskip -2.9pt}\\hbox{I}_{\\Delta_s}(s^{(j)}),\n\\end{equation}\nwhere $1{\\hskip -2.9pt}\\hbox{I}_A(x)$ denotes the \\textbf{indicator function} for the set $A$, which is equal to $1$ if $x\\in A$ and $0$ otherwise. \n\n\\item Turn the \\textbf{estimator} $\\hat P_L(\\Delta_s)$ of the probability $P(S_n\\in\\Delta_s)$ into an estimator $\\hat p_L(s)$ of the probability density $p_{S_n}(s)$:\n\\begin{equation}\n\\hat p_L(s)=\\frac{\\hat P_L(\\Delta_s)}{\\Delta s}=\\frac{1}{L\\Delta s}\\sum_{j=1}^L 1{\\hskip -2.9pt}\\hbox{I}_{\\Delta_s}(s^{(j)}).\n\\end{equation}\n\\end{enumerate}\n\nThe result of these steps is illustrated in Fig.~\\ref{figdirsamp1} for the case of the IID Gaussian sample mean. 
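In code, these four steps amount to a few lines. The sketch below (illustrative values of $n$, $L$ and $\Delta s$) estimates $\hat p_L(s)$ for the IID Gaussian sample mean with $\mu=\sigma=1$ and converts it into an estimate of the rate function via $-\frac{1}{n}\ln\hat p_L(s)$, to be compared with the exact $I(s)=(s-\mu)^2/(2\sigma^2)$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Steps 1-4 for the IID Gaussian sample mean (mu = sigma = 1); n, L and the
# bin width ds are illustrative.
n, L, ds = 10, 100_000, 0.05
s_grid = np.arange(-1.0, 3.0, ds)                    # left edges of the bins

samples = rng.normal(1.0, 1.0, size=(L, n)).mean(axis=1)      # steps 1 and 2
counts, _ = np.histogram(samples, bins=np.append(s_grid, s_grid[-1] + ds))
p_hat = counts / (L * ds)                                     # steps 3 and 4

# Rate function estimate -(1/n) ln p_hat versus the exact I(s) = (s - 1)^2 / 2;
# empty bins (rare events never sampled) are marked as nan.
I_est = -np.log(np.where(p_hat > 0, p_hat, np.nan)) / n
I_exact = (s_grid - 1.0)**2 / 2
```

Plotting `I_est` against `I_exact` for increasing $L$ reproduces the behaviour shown in Fig.~\ref{figdirsamp1}: only the region around the typical value is sampled well.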
Note that $\\hat p_L(s)$ above is nothing but an empirical vector for $S_n$ (see Sec.~\\ref{secsanov}) or a \\textbf{density histogram} of the sample $\\{s^{(j)}\\}_{j=1}^L$.\n\n\\begin{figure}[t]\n\\resizebox{\\textwidth}{!}{\\includegraphics{directsamplinggauss1.pdf}}\n\\caption{(Left) Naive sampling of the Gaussian IID sample mean ($\\mu=\\sigma=1$) for $n=10$ and different sample sizes $L$. The dashed line is the exact rate function. As $L$ grows, a larger range of $I(s)$ is sampled. (Right) Naive sampling for a fixed sample size $L=10\\, 000$ and various values of $n$. As $n$ increases, $I_{n,L}(s)$ approaches the expected rate function but the sampling becomes inefficient as it becomes restricted to a narrow domain.}\n\\label{figdirsamp1}\n\\end{figure}\n\nThe reason for choosing $\\hat p_L(s)$ as our estimator of $p_{S_n}(s)$ is that it is an \\textbf{unbiased estimator} in the sense that\n\\begin{equation}\nE[\\hat p_L(s)]=p_{S_n}(s)\n\\end{equation}\nfor all $L$. Moreover, we know from the LLN that $\\hat p_L(s)$ converges in probability to its mean $p_{S_n}(s)$ as $L\\rightarrow\\infty$. Therefore, the larger our sample, the closer we should get to a valid estimation of $p_{S_n}(s)$. \n\nTo extract a rate function $I(s)$ from $\\hat p_L(s)$, we simply compute\n\\begin{equation}\nI_{n,L}(s)=-\\frac{1}{n}\\ln \\hat p_L(s)\n\\label{eqrfe1}\n\\end{equation}\nand repeat the whole process for larger and larger integer values of $n$ and $L$ until $I_{n,L}(s)$ converges to some desired level of accuracy.\\footnote{Note that $\\hat P_L(\\Delta_s)$ and $\\hat p_L(s)$ differ only by the factor $\\Delta s$. $I_{n,L}(s)$ can therefore be computed from either estimator with a difference $(\\ln \\Delta s)\/n$ that vanishes as $n\\rightarrow\\infty$.}\n\n\\subsection{Importance sampling}\n\nA basic rule of thumb in statistical sampling, suggested by the LLN, is that an event with probability $P$ will appear in a sample of size $L$ roughly $LP$ times. 
Thus to get at least one instance of that event in the sample, we must have $L>1\/P$ as an approximate lower bound for the size of our sample (see Exercise~\\ref{exerrest1} for a more precise derivation of this estimate).\n\nApplying this result to $p_{S_n}(s)$, we see that, if this pdf satisfies an LDP of the form $p_{S_n}(s)\\approx \\mathrm{e}^{-nI(s)}$, then we need to have $L>\\mathrm{e}^{n I(s)}$ to get at least one instance of the event $S_n\\in \\Delta_s$ in our sample. In other words, our sample must be exponentially large with $n$ in order to see any large deviations (see Fig.~\\ref{figdirsamp1} and Exercise~\\ref{exds1}).\n\nThis is a severe limitation of the sampling scheme outlined earlier, and we call it for this reason \\textbf{crude Monte Carlo} or \\textbf{naive sampling}. The way around this limitation is to use \\textbf{importance sampling} (IS for short), which works basically as follows (see the contribution of Katzgraber for more details):\n\\begin{enumerate}\n\\item Instead of sampling the $X_i$'s according to the joint pdf $p(\\bm\\om)$, sample them according to a new pdf $q (\\bm\\om)$;\\footnote{The pdf $q$ must have a support at least as large as that of $p$, i.e., $q(\\bm\\om)>0$ if $p(\\bm\\om)>0$, otherwise the ratio $R$ is ill-defined. In this case, we say that $q$ is \\textbf{relatively continuous} with respect to $p$ and write $q\\gg p$.}\n\\item Calculate instead of $\\hat p_L(s)$ the estimator\n\\begin{equation}\n\\hat q_L(s)=\\frac{1}{L\\Delta s}\\sum_{j=1}^L 1{\\hskip -2.9pt}\\hbox{I}_{\\Delta_s} \\big(S_n(\\bm\\om^{(j)})\\big)\\, R(\\bm\\om^{(j)}),\n\\label{eqqest1}\n\\end{equation}\nwhere \n\\begin{equation}\nR(\\bm\\om)=\\frac{p(\\bm\\om)}{q(\\bm\\om)}\n\\end{equation}\nis called the \\textbf{likelihood ratio}. 
Mathematically, $R$ also corresponds to the \\textbf{Radon-Nikodym derivative} of the measures associated with $p$ and $q$.\\footnote{The Radon-Nikodym derivative of two measures $\\mu$ and $\\nu$ such that $\\nu\\gg\\mu$ is defined as $d\\mu\/d\\nu$. If these measures have densities $p$ and $q$, respectively, then $d\\mu\/d\\nu=p\/q$.}\n\\end{enumerate}\n\nThe new estimator $\\hat q_L(s)$ is also an unbiased estimator of $p_{S_n}(s)$ because (see Exercise~\\ref{exuie1})\n\\begin{equation}\nE_q[\\hat q_L(s)]= E_p[\\hat p_L(s)].\n\\label{equb1}\n\\end{equation}\nHowever, there is a reason for choosing $\\hat q_L(s)$ as our new estimator: we might be able to come up with a suitable choice for the pdf $q$ such that $\\hat q_L(s)$ has a smaller variance than $\\hat p_L(s)$. This is in fact the goal of IS: select $q(\\bm\\om)$ so as to minimize the variance\n\\begin{equation}\n\\text{var}_q(\\hat q_L(s))=E_q[(\\hat q_L(s)-p_{S_n}(s))^2].\n\\end{equation}\nIf we can choose a $q$ such that $\\text{var}_q(\\hat q_L(s))<\\text{var}_p(\\hat p_L(s))$, then $\\hat q_L(s)$ will converge faster to $p_{S_n}(s)$ than $\\hat p_L(s)$ as we increase $L$.\n\nIt can be proved (see Exercise~\\ref{exoptq1}) that in the class of all pdfs that are relatively continuous with respect to $p$ there is a unique pdf $q^*$ that minimizes the variance above. This \\textbf{optimal} IS pdf has the form\n\\begin{equation}\nq^*(\\bm\\om) =1{\\hskip -2.9pt}\\hbox{I}_{\\Delta_s}(S_n(\\bm\\om))\\frac{p(\\bm\\om)}{p_{S_n}(s)} =p(\\bm\\om|S_n(\\bm\\om)=s)\n\\label{eqoippdf1}\n\\end{equation}\nand has the desired property that $\\text{var}_{q^*}(\\hat q_L(s))=0$. It does seem therefore that our sampling problem is solved, until one realizes that $q^*$ involves the unknown pdf that we want to estimate, namely, $p_{S_n}(s)$. Consequently, $q^*$ cannot be used in practice as a sampling pdf. 
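Before turning to practical choices of $q$, the variance-reduction mechanism can be made concrete with a short numerical sketch (Python with NumPy; the shifted-Gaussian proposal and all parameter values are illustrative assumptions, not part of the original text), comparing the naive estimator $\hat p_L(s)$ with the reweighted IS estimator $\hat q_L(s)$ for a Gaussian IID sample mean:

```python
import numpy as np

# Illustrative sketch: estimate p_{S_n}(s) for a Gaussian IID sample mean
# (mu = sigma = 1) at the rare value s = 3, with n = 20.  The proposal q
# shifts the mean of each X_i to s; all parameter values are hypothetical.
rng = np.random.default_rng(0)
mu, sigma, n, s, ds, L = 1.0, 1.0, 20, 3.0, 0.1, 20000

# Naive (crude Monte Carlo) estimator hat-p_L(s): sample under p itself.
Sn_p = rng.normal(mu, sigma, size=(L, n)).mean(axis=1)
p_naive = np.mean((Sn_p >= s) & (Sn_p < s + ds)) / ds

# IS estimator hat-q_L(s): sample under q and reweight by R = p/q.
X = rng.normal(s, sigma, size=(L, n))             # X_i drawn from q
Sn_q = X.mean(axis=1)
log_R = ((X - s)**2 - (X - mu)**2).sum(axis=1) / (2 * sigma**2)
p_is = np.mean(((Sn_q >= s) & (Sn_q < s + ds)) * np.exp(log_R)) / ds

I_est = -np.log(p_is) / n   # finite-n rate function estimate at s
# Exact rate function for comparison: I(s) = (s - mu)^2/(2 sigma^2) = 2 here.
```

With these numbers the naive estimator returns $0$ (no sample ever reaches $S_n\geq 3$), while the reweighted estimator is nonzero and $-\frac{1}{n}\ln \hat q_L(s)$ comes out close to $I(3)=2$, illustrating the gain a good $q$ can provide.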
Other pdfs must be considered.\n\nThe next subsections present a more practical sampling pdf, which can be proved to be optimal in the large deviation limit $n\\rightarrow\\infty$. We will not attempt to justify the form of this pdf nor will we prove its optimality. Rather, we will attempt to illustrate how it works in the simplest way possible by applying it to IID sample means and Markov processes. For a more complete and precise treatment of IS of large deviation probabilities, see \\cite{bucklew2004} and Chap.~VI of \\cite{asmussen2007}. For background information on IS, see Katzgraber (this volume) and Chap.~V of \\cite{asmussen2007}.\n\n\\subsection{Exponential change of measure}\n\nAn important class of IS pdfs used for estimating $p_{S_n}(s)$ is given by\n\\begin{equation}\np_k(\\bm\\om)=\\frac{\\mathrm{e}^{nk S_n(\\bm\\om)}}{W_n(k)}p(\\bm\\om),\n\\end{equation}\nwhere $k\\in\\mathbb{R}$ and $W_n(k)$ is a normalization factor given by\n\\begin{equation}\nW_n(k)=E_p[\\mathrm{e}^{nkS_n}]=\\int_{\\mathbb{R}^n}\\mathrm{e}^{nkS_n(\\bm\\om)}\\, p(\\bm\\om)\\, \\mathrm{d}\\bm\\om.\n\\end{equation}\nWe call this class or family of pdfs parameterized by $k$ the \\textbf{exponential family}. 
In large deviation theory, $p_k$ is also known as the \\textbf{tilted pdf} associated with $p$ or the \\textbf{exponential twisting} of $p$.\\footnote{In statistics and actuarial mathematics, $p_k$ is known as the \\textbf{associated law} or \\textbf{Esscher transform} of $p$.} The likelihood ratio associated with this change of pdf is\n\\begin{equation}\nR(\\bm\\om)=\\mathrm{e}^{-nkS_n(\\bm\\om)}\\, W_n(k),\n\\end{equation}\nwhich, for the purpose of estimating $I(s)$, can be approximated by\n\\begin{equation}\nR(\\bm\\om)\\approx \\mathrm{e}^{-n[k S_n(\\bm\\om)-\\lambda(k)]}\n\\end{equation}\ngiven the large deviation approximation $W_n(k)\\approx\\mathrm{e}^{n\\lambda(k)}$.\n\nIt is important to note that a single pdf of the exponential family with parameter $k$ cannot be used to efficiently sample the whole of the function $p_{S_n}(s)$ but only a particular point of that pdf. More precisely, if we want to sample $p_{S_n}(s)$ at the value $s$, we must choose $k$ such that\n\\begin{equation}\nE_{p_k}[\\hat q_L(s)]=p_{S_n}(s),\n\\label{eqecm1}\n\\end{equation}\nwhich is equivalent to solving $\\lambda'(k)=s$ for $k$, where\n\\begin{equation}\n\\lambda(k)=\\lim_{n\\rightarrow\\infty}\\frac{1}{n}\\ln E_p[\\mathrm{e}^{nk S_n}]\n\\label{eqscgfis1}\n\\end{equation}\nis the SCGF of $S_n$ defined with respect to $p$ (see Exercise~\\ref{exisscgf1}). Let us denote the solution of these equations by $k(s)$. For many processes $S_n$, it can be proved that the tilted pdf $p_{k(s)}$ corresponding to $k(s)$ is asymptotically optimal in $n$, in the sense that the variance of $\\hat q_L(s)$ under $p_{k(s)}$ goes asymptotically to $0$ as $n\\rightarrow\\infty$. When this happens, the convergence of $\\hat q_L(s)$ towards $p_{S_n}(s)$ is fast for increasing $L$ and requires a sub-exponential number of samples to achieve a given accuracy for $I_{n,L}(s)$. 
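As a hedged illustration of sampling one point of a rate function with the tilted pdf (Python with NumPy; the parameter values are chosen for the example, and the Bernoulli formulas follow from $W(k)=1-\alpha+\alpha\mathrm{e}^k$), consider a Bernoulli sample mean at a single rare value $s$:

```python
import numpy as np

# Sketch: tilted sampling of a Bernoulli(alpha = 1/2) IID sample mean at the
# single rare value s = 0.8, with n = 50.  For Bernoulli RVs,
# W(k) = 1 - alpha + alpha e^k, and solving lambda'(k) = s gives
# k(s) = ln[s(1 - alpha) / ((1 - s) alpha)].
rng = np.random.default_rng(1)
alpha, n, L, s = 0.5, 50, 100_000, 0.8

k = np.log(s * (1 - alpha) / ((1 - s) * alpha))
W = 1 - alpha + alpha * np.exp(k)              # W(k); W_n(k) = W**n

# Under p_k each X_i is Bernoulli with parameter alpha e^k / W(k) = s,
# so the formerly rare event S_n = s is now typical.
counts = rng.binomial(n, alpha * np.exp(k) / W, size=L)   # n S_n per sample

# Unbiased estimator of P(S_n = s): indicator times R = e^{-n k S_n} W(k)^n.
target = round(n * s)
p_hat = np.mean((counts == target) * np.exp(-k * counts) * W**n)

I_est = -np.log(p_hat) / n
I_exact = s * np.log(s / alpha) + (1 - s) * np.log((1 - s) / (1 - alpha))
```

For these numbers $I_{n,L}(s)$ comes out close to $-\frac{1}{n}\ln P(S_n=0.8)\approx 0.232$, which differs from the exact $I(0.8)\approx 0.193$ only through the sub-exponential prefactor correction of order $(\ln n)\/n$.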
To obtain the full rate function, we then simply need to repeat the sampling for other values of $s$, and so other values of $k$, and scale the whole process for larger values of $n$.\n\nIn practice, it is often easier to reverse the roles of $k$ and $s$ in the sampling. That is, instead of fixing $s$ and selecting $k=k(s)$ for the numerical estimation of $I(s)$, we can fix $k$ to obtain the rate function at $s(k)=\\lambda'(k)$. This way, we can build a parametric representation of $I(s)$ over a certain range of $s$ values by covering or ``scanning'' enough values of $k$.\n\nAt this point, there should be a suspicion that we will not achieve much with this \\textbf{exponential change of measure} method (ECM method for short) since the forms of $p_k$ and $\\hat q_L(s)$ presuppose the knowledge of $W_n(k)$ and in turn $\\lambda(k)$. We know from the GE Theorem that $\\lambda(k)$ is in many cases sufficient to obtain $I(s)$, so why take the trouble of sampling $\\hat q_L(s)$ to get $I(s)$?\\footnote{To make matters worse, the sampling of $\\hat q_L(s)$ does involve $I(s)$ directly; see Exercise~\\ref{exisi1}.}\n\nThe answer to this question is that $I(s)$ can be estimated from $p_{k}$ without knowing $W_n(k)$ using two insights:\n\\begin{itemize} \n\\item Estimate $I(s)$ indirectly by sampling an estimator of $\\lambda(k)$ or of $S_n$, which does not involve $W_n(k)$, instead of sampling the estimator $\\hat q_L(s)$ of $p_{S_n}(s)$;\n\\item Sample $\\bm\\om$ according to the a priori pdf $p(\\bm\\om)$ or use the Metropolis algorithm, also known as the Markov chain Monte Carlo algorithm, to sample $\\bm\\om$ according to $p_k(\\bm\\om)$ without knowing $W_n(k)$ (see Appendix~\\ref{appmh} and the contribution of Katzgraber in this volume).\n\\end{itemize}\nThese points will be explained in the next section (see also Exercise~\\ref{exmetis1}).\n\nFor the rest of this subsection, we will illustrate the ECM method for a simple Gaussian IID sample mean, leaving aside the 
issue of having to know $W_n(k)$. For this process \n\\begin{equation}\np_k(\\bm\\om)=p_k(x_1,\\ldots,x_n)=\\prod_{i=1}^n p_k(x_i),\n\\end{equation}\nwhere\n\\begin{equation}\np_k(x_i)=\\frac{\\mathrm{e}^{k x_i}\\, p(x_i)}{W(k)},\\qquad W(k)=E_p[\\mathrm{e}^{k X}],\n\\end{equation}\nwith $p(x_i)$ a Gaussian pdf with mean $\\mu$ and variance $\\sigma^2$. We know from Exercise~\\ref{exiid1} the expression of the generating function $W(k)$ for this pdf:\n\\begin{equation}\nW(k)=\\mathrm{e}^{k\\mu+\\frac{\\sigma^2}{2}k^2}.\n\\end{equation}\nThe explicit expression of $p_k(x_i)$ is therefore\n\\begin{equation}\np_k(x_i)=\\mathrm{e}^{k(x_i-\\mu)-\\frac{\\sigma^2}{2}k^2} \\frac{\\mathrm{e}^{-(x_i-\\mu)^2\/(2\\sigma^2)}}{\\sqrt{2\\pi \\sigma^2}}=\\frac{\\mathrm{e}^{-(x_i-\\mu-\\sigma^2 k)^2\/(2\\sigma^2)}}{\\sqrt{2\\pi\\sigma^2}}.\n\\end{equation}\n\nThe estimated rate function obtained by sampling the $X_i$'s according to this Gaussian pdf with mean $\\mu+\\sigma^2 k$ and variance $\\sigma^2$ is shown in Fig.~\\ref{figexpsamp1} for various values of $n$ and $L$. This figure should be compared with Fig.~\\ref{figdirsamp1} which illustrates the naive sampling method for the same sample mean. It is obvious from these two figures that the ECM method is more efficient than naive sampling.\n\n\\begin{figure}[t]\n\\resizebox{\\textwidth}{!}{\\includegraphics{expsamplinggauss1.pdf}}\n\\caption{IS for the Gaussian IID sample mean ($\\mu=\\sigma=1$) with the exponential change of measure.}\n\\label{figexpsamp1}\n\\end{figure}\n\nAs noted before, the rate function can be obtained in a parametric way by ``scanning'' $k$ instead of ``scanning'' $s$ and fixing $k$ according to $s$. If we were to determine $k(s)$ for a given $s$, we would find\n\\begin{equation}\n\\lambda'(k)=\\mu+\\sigma^2 k =s,\n\\end{equation}\nso that $k(s)=(s-\\mu)\/\\sigma^2$. 
Substituting this result in $p_k(x_i)$ then yields\n\\begin{equation}\np_{k(s)}(x_i)=\\frac{\\mathrm{e}^{-(x_i-s)^2\/(2\\sigma^2)}}{\\sqrt{2\\pi\\sigma^2}}.\n\\end{equation}\nThus to efficiently sample $p_{S_n}(s)$, we must sample the $X_i$'s according to a Gaussian pdf with mean $s$ instead of the a priori pdf with mean $\\mu$. The sampling is efficient in this case simply because $S_n$ concentrates in the sense of the LLN at the value $s$ instead of $\\mu$, which means that $s$ has become the typical value of $S_n$ under $p_{k(s)}$. \n\nThe idea of the ECM method is the same for processes other than IID sample means: the goal in general is to change the sampling pdf from $p$ to $p_k$ in such a way that an event that is a rare event under $p$ becomes a typical event under $p_k$.\\footnote{Note that this change is not always possible: for certain processes, there is no $k$ that makes certain events typical under $p_k$. For examples, see \\cite{asmussen2011}.} For more information on the ECM method and IS in general, see \\cite{bucklew2004} and Chaps.~V and VI of \\cite{asmussen2007}. For applications of these methods, see Exercise~\\ref{exis2}.\n\n\n\\subsection{Applications to Markov chains}\n\nThe application of the ECM method to Markov chains follows the IID case closely: we generate $L$ IID realizations $x_1^{(j)},\\ldots,x_n^{(j)}$, $j=1,\\ldots,L$, of the chain $\\bm\\om=X_1,\\ldots, X_n$ according to the tilted joint pdf $p_k(\\bm\\om)$, using either $p_{k}$ directly or a Metropolis-type algorithm. The value $k(s)$ that achieves the efficient sampling of $S_n$ at the value $s$ is determined in the same way as in the IID case by solving $\\lambda'(k)=s$, where $\\lambda(k)$ is now given by Eq.~(\\ref{eqscgfmc1}). In this case, $S_n=s$ becomes the typical event of $S_n$ in the limit $n\\rightarrow\\infty$. A simple application of this procedure is proposed in Exercise~\\ref{exmcis1}. 
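Since $p_k(\bm\om)$ for a Markov chain cannot, in general, be sampled by simply tilting the transition probabilities, a Metropolis-type algorithm acting on whole configurations $\bm\om$ is a natural route. The following sketch (Python with NumPy; a symmetric two-state chain with flip probability $\epsilon$ is used as an illustrative stand-in, and all parameter values are assumptions) samples $p_k(\bm\om)$ using only ratios of weights, so that $W_n(k)$ is never needed:

```python
import numpy as np

# Hedged sketch: Metropolis sampling of the tilted pdf p_k for a two-state
# (0/1) Markov chain with symmetric flip probability eps; uniform initial pdf.
rng = np.random.default_rng(2)
n, k, eps, sweeps = 30, 0.5, 0.2, 2000

def log_weight(x):
    # ln[e^{n k S_n(x)} p(x)] up to constants: n k S_n = k * sum_i x_i,
    # and each transition contributes ln(eps) or ln(1 - eps).
    trans = np.where(x[1:] != x[:-1], np.log(eps), np.log(1 - eps))
    return k * x.sum() + trans.sum()

x = rng.integers(0, 2, size=n)
lw = log_weight(x)
samples = []
for sweep in range(sweeps):
    for _ in range(n):                       # one sweep = n single-site flips
        i = rng.integers(n)
        y = x.copy(); y[i] ^= 1              # propose flipping site i
        lw_y = log_weight(y)
        if np.log(rng.random()) < lw_y - lw: # Metropolis acceptance step
            x, lw = y, lw_y
    samples.append(x.mean())

# Discard the first half as burn-in; the mean estimates E_{p_k}[S_n],
# which approaches lambda'(k) for large n.
s_typical = np.mean(samples[sweeps // 2:])
```

Note how the acceptance step uses only the difference of log-weights between two configurations, which is exactly why the unknown normalization $W_n(k)$ drops out.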
For further reading on the sampling of Markov chains, see \\cite{bucklew2004,sadowsky1989,sadowsky1990,bucklew1990b,sadowsky1996}.\n\nAn important difference to notice between the IID and Markov cases is that the ECM does not preserve the factorization structure of $p(\\bm\\om)$ for the latter case. In other words, $p_k(\\bm\\om)$ does not describe a Markov chain, though, as we noticed in the IID case, $p_k(\\bm\\om)$ is a product pdf when $p(\\bm\\om)$ is itself a product pdf. In the case of Markov chains, $p_k(\\bm\\om)$ does retain a product structure that looks like a Markov chain, and one may be tempted to define a tilted transition pdf as\n\\begin{equation}\n\\pi_k(x'|x)=\\frac{\\mathrm{e}^{kx'}\\pi(x'|x)}{W(k|x)},\\qquad W(k|x)=\\int_{\\mathbb{R}} \\mathrm{e}^{kx'} \\pi(x'|x)\\, \\mathrm{d} x'.\n\\end{equation}\nHowever, it is easy to see that the joint pdf of $\\bm\\om$ obtained from this transition pdf does not reproduce the tilted pdf $p_k(\\bm\\om)$.\n\n\n\\subsection{Applications to continuous-time Markov processes}\n\nThe generalization of the results of the previous subsection to continuous-time Markov processes follows from our discussion of the continuous-time limit of Markov chains (see Sec.~\\ref{subsecmc1}). In this case, we generate, according to the tilted pdf of the process, a sample of time-discretized trajectories. The choice of $\\Delta t$ and $n$ used in the discretization will of course influence the precision of the estimates of the pdf and rate function, in addition to the sample size $L$.\n\nSimilarly to Markov chains, the tilted pdf of a continuous-time Markov process does not in general describe a new ``tilted'' Markov process having a generator related to the tilted generator $\\tilde G_k$ defined earlier. 
In some cases, the tilting of a Markov process can be seen as generating a new Markov process, as will be discussed in the next subsection, but this is not true in general.\n\n\n\\subsection{Applications to stochastic differential equations}\n\nStochastic differential equations (SDEs) are covered by the continuous-time results of the previous subsection. However, for this type of stochastic process the ECM can be expressed more explicitly in terms of the path pdf $p[x]$ introduced in Sec.~\\ref{subsecpathldp}. The tilted version of this pdf, which is the functional analogue of $p_k(\\bm\\om)$, is written as\n\\begin{equation}\np_k[x]=\\frac{\\mathrm{e}^{TkS_T[x]}\\, p[x]}{W_T(k)},\n\\label{eqtppdf1}\n\\end{equation}\nwhere $S_T[x]$ is some functional of the trajectory $\\{x(t)\\}_{t=0}^T$ and \n\\begin{equation}\nW_T(k)=E_p[\\mathrm{e}^{TkS_T}].\n\\end{equation}\nThe likelihood ratio associated with this change of pdf is\n\\begin{equation}\nR[x]=\\frac{p[x]}{p_k[x]}=\\mathrm{e}^{-TkS_T[x]}\\, W_T(k)\\approx \\mathrm{e}^{-T[k S_T[x]-\\lambda(k)]}.\n\\end{equation}\n\nAs a simple application of these expressions, let us consider the SDE\n\\begin{equation}\n\\dot x(t)=\\xi(t),\n\\end{equation}\nwhere $\\xi(t)$ is a Gaussian white noise, and the following additive process:\n\\begin{equation}\nD_T[x]=\\frac{1}{T}\\int_0^T \\dot x(t)\\, \\mathrm{d} t,\n\\end{equation}\nwhich represents the average velocity or \\textbf{drift} of the SDE. What is the form of $p_k[x]$ in this case? Moreover, how can we use this tilted pdf to estimate the rate function $I(d)$ of $D_T$ in the long-time limit?\n\nAn answer to the first question is suggested by recalling the goal of the ECM. The LLN implies for the SDE above that $D_T\\rightarrow 0$ in probability as $T\\rightarrow\\infty$, which means that $D_T=0$ is the typical event of the ``natural'' dynamics of $x(t)$. 
The tilted dynamics realized by $p_{k(d)}[x]$ should change this typical event to the fluctuation $D_T=d$; that is, the typical event of $D_T$ under $p_{k(d)}[x]$ should be $D_T=d$ rather than $D_T=0$. One candidate SDE which leads to this behavior of $D_T$ is\n\\begin{equation}\n\\dot x(t)=d+\\xi(t),\n\\label{eqbsde1}\n\\end{equation} \nso the obvious guess is that $p_{k(d)}[x]$ is the path pdf of this SDE.\n\nLet us show that this is a correct guess. The form of $p_k[x]$ according to Eq.~(\\ref{eqtppdf1}) is\n\\begin{equation}\np_k[x]\\approx\\mathrm{e}^{-T I_k[x]},\\qquad I_k[x]=I[x]-k D_T[x]+\\lambda(k),\n\\end{equation}\nwhere\n\\begin{equation}\nI[x]=\\frac{1}{2T}\\int_0^T \\dot x^2\\, \\mathrm{d} t\n\\end{equation}\nis the action of our original SDE and\n\\begin{equation}\n\\lambda(k)=\\lim_{T\\rightarrow\\infty}\\frac{1}{T}\\ln E[\\mathrm{e}^{TkD_T}]\n\\end{equation}\nis the SCGF of $D_T$. The expectation entering in the definition of $\\lambda(k)$ can be calculated explicitly using path integrals with the result $\\lambda(k)=k^2\/2$. Therefore,\n\\begin{equation}\nI_k[x]=I[x]-k D_T[x]+\\frac{k^2}{2}.\n\\end{equation}\n\nFrom this result, we find $p_{k(d)}[x]$ by solving $\\lambda'(k(d))=d$, which in this case simply yields $k(d)=d$. Hence\n\\begin{equation}\nI_{k(d)}[x]=I[x]-d D_T[x]+\\frac{d^2}{2}=\\frac{1}{2 T}\\int_0^T (\\dot x-d)^2\\, \\mathrm{d} t,\n\\end{equation}\nwhich is exactly the action of the \\textbf{boosted SDE} of Eq.~(\\ref{eqbsde1}).\n\nSince we have $\\lambda(k)$, we can of course directly obtain the rate function $I(d)$ of $D_T$ by Legendre transform:\n\\begin{equation}\nI(d)=k(d) d-\\lambda(k(d))=\\frac{d^2}{2}.\n\\end{equation}\nThis shows that the fluctuations of $D_T$ are Gaussian, which is not surprising given that $D_T$ is up to a factor $1\/T$ a Brownian motion.\\footnote{Brownian motion is defined as the integral of Gaussian white noise. 
By writing $\\dot x=\\xi(t)$, we therefore define $x(t)$ to be a Brownian motion.} Note that the same result can be obtained following Exercise~\\ref{exisi1} by calculating the likelihood ratio $R[x]$ for a path $\\{x_d(t)\\}_{t=0}^T$ such that $D_T[x_d]=d$, i.e.,\n\\begin{equation}\nR[x_d]=\\frac{p[x_d]}{p_{k(d)}[x_d]}\\approx \\mathrm{e}^{-T I(d)}.\n\\end{equation}\n\nThese calculations give a representative illustration of the ECM method and the idea that the large deviations of an SDE can be obtained by replacing this SDE by a boosted or \\textbf{effective SDE} whose typical events are large deviations of the former SDE. Exercises~\\ref{exaecm1} and \\ref{exnecm1} show that effective SDEs need not correspond exactly to an ECM, but may in some cases reproduce such a change of measure in the large deviation limit $T\\rightarrow\\infty$. The important point in all cases is to be able to calculate the likelihood ratio between a given SDE and a boosted version of it in order to obtain the desired rate function in the limit $T\\rightarrow\\infty$.\\footnote{The limit $\\varepsilon\\rightarrow 0$ can also be considered.}\n\nFor the particular drift process $D_T$ studied above, the likelihood ratio happens to be equivalent to a mathematical result known as \\textbf{Girsanov's formula}, which is often used in financial mathematics; see Exercise~\\ref{exgir1}.\n\n\\subsection{Exercises}\n\\label{secex3}\n\n\\begin{exercise}\n\n\\item \\label{exbernrv2}\\exdiff{5} (Variance of Bernoulli RVs) Let $X$ be a Bernoulli RV with $P(X=1)=\\alpha$ and $P(X=0)=1-\\alpha$, $0\\leq\\alpha\\leq 1$. 
Find the variance of $X$ in terms of $\\alpha$.\n\n\\item \\label{exds1}\\exdiff{17} (Direct sampling of IID sample means) Generate on a computer a sample of size $L=100$ of $n$ IID Bernoulli RVs $X_1,\\ldots,X_n$ with bias $\\alpha=0.5$, and numerically estimate the rate function $I(s)$ associated with the sample mean $S_n$ of these RVs using the naive estimator $\\hat p_L(s)$ and the finite-size rate function $I_{n,L}(s)$ defined in Eq.~(\\ref{eqrfe1}). Repeat for $L=10^3,10^4, 10^5$, and $10^6$ and observe how $I_{n,L}(s)$ converges to the exact rate function given in Eq.~(\\ref{eqbern1}). Repeat for exponential RVs.\n\n\\item \\label{exuie1}\\exdiff{10} (Unbiased estimator) Prove that $\\hat q_L(s)$ is an unbiased estimator of $p_{S_n}(s)$, i.e., prove Eq.~(\\ref{equb1}).\n\n\\item \\label{exvarest1}\\exdiff{15} (Estimator variance) The estimator $\\hat p_L(s)$ is actually a sample mean of Bernoulli RVs. Based on this, calculate the variance of this estimator with respect to $p(\\bm\\om)$ using the result of Exercise~\\ref{exbernrv2} above. Then calculate the variance of the importance estimator $\\hat q_L(s)$ with respect to $q(\\bm\\om)$.\n\n\\item \\label{exerrest1}\\exdiff{15} (Relative error of estimators) The real measure of the quality of the estimator $\\hat p_L(s)$ is not so much its variance $\\text{var}(\\hat p_L(s))$ but its relative error $\\text{err}(\\hat p_L(s))$ defined by\n\\begin{equation}\n\\text{err}(\\hat p_L(s))=\\frac{\\sqrt{\\text{var}(\\hat p_L(s))}}{p_{S_n}(s)}.\n\\end{equation}\nUse the previous exercise to find an approximation of $\\text{err}(\\hat p_L(s))$ in terms of $L$ and $n$. From this result, show that the sample size $L$ needed to achieve a certain relative error grows exponentially with $n$. 
(Source: Sec.~V.I of \\cite{asmussen2007}.)\n\n\\item \\label{exoptq1}\\exdiff{17} (Optimal importance pdf) Show that the density $q^*$ defined in (\\ref{eqoippdf1}) is optimal in the sense that\n\\begin{equation}\n\\text{var}_q(\\hat q_L(s))\\geq\\text{var}_{q^*}(\\hat q_L(s))=0\n\\end{equation}\nfor all $q$ relatively continuous to $p$. (See Chap.~V of \\cite{asmussen2007} for help.)\n\n\\item \\label{exisscgf1}\\exdiff{15} (Exponential change of measure) Show that the value of $k$ solving Eq.~(\\ref{eqecm1}) is given by $\\lambda'(k)=s$, where $\\lambda(k)$ is the SCGF defined in Eq.~(\\ref{eqscgfis1}).\n\n\\item \\exdiff{40} (LDP for estimators) We noticed before that $\\hat p_L(s)$ is the empirical vector of the IID sample $\\{S_n^{(j)}\\}_{j=1}^L$. Based on this, we could study the LDP of $\\hat p_L(s)$ rather than just its mean and variance, as usually done in sampling theory. What is the form of the LDP associated with $\\hat p_L(s)$ with respect to $p(\\bm\\om)$? What is its rate function? What is the minimum and zero of that rate function? Answer the same questions for $\\hat q_L(s)$ with respect to $q(\\bm\\om)$. (Hint: Use Sanov's Theorem and the results of Exercise~\\ref{exild1}.)\n\n\\item \\label{exis2}\\exdiff{25} (Importance sampling of IID sample means) Implement the ECM method to numerically estimate the rate function associated with the IID sample means of Exercise~\\ref{exiid1}. Use a simple IID sampling based on the explicit expression of $p_k$ in each case (i.e., assume that $W(k)$ is known). 
Study the convergence of the results as a function of $n$ and $L$ as in Fig.~\\ref{figexpsamp1}.\n\n\\item \\label{exisi1}\\exdiff{15} (IS estimator) Show that $\\hat q_L(s)$ for $p_{k(s)}$ has the form\n\\begin{equation}\n\\hat q_L(s)\\approx\\frac{1}{L\\Delta s}\\sum_{j=1}^L 1{\\hskip -2.9pt}\\hbox{I}_{\\Delta_s}(s^{(j)})\\ \\mathrm{e}^{-nI(s^{(j)})}.\n\\end{equation}\nWhy is this estimator trivial for estimating $I(s)$?\n\n\\item \\label{exmetis1}\\exdiff{30} (Metropolis sampling of IID sample means) Repeat Exercise~\\ref{exis2}, but instead of using an IID sampling method, use the Metropolis algorithm to sample the $X_i$'s in $S_n$. (For information on the Metropolis algorithm, see Appendix~\\ref{appmh}.)\n\n\\item \\exdiff{15} (Tilted distribution from contraction) Let $X_1,\\ldots,X_n$ be a sequence of real IID RVs with common pdf $p(x)$, and denote by $S_n$ and $L_n(x)$ the sample mean and empirical functional, respectively, of these RVs. Show that the ``value'' of $L_n(x)$ solving the minimization problem associated with the contraction of the LDP of $L_n(x)$ down to the LDP of $S_n$ is given by\n\\begin{equation}\n\\mu_k(x)=\\frac{\\mathrm{e}^{kx}p(x)}{W(k)},\\qquad W(k)=E_p[\\mathrm{e}^{kX}],\n\\end{equation}\nwhere $k$ is such that $\\lambda'(k)=(\\ln W(k))'=s$. To be more precise, show that this pdf is the solution of the minimization problem\n\\begin{equation}\n\\inf_{\\mu:f(\\mu)=s} I(\\mu),\n\\end{equation}\nwhere\n\\begin{equation}\nI(\\mu)= \\int_\\mathbb{R}\\mathrm{d} x\\, \\mu(x)\\ln\\frac{\\mu(x)}{p(x)}\n\\end{equation}\nis the continuous analogue of the relative entropy defined in Eq.~(\\ref{eqsanov2}) and\n\\begin{equation}\nf(\\mu)=\\int_\\mathbb{R} x\\, \\mu(x)\\, \\mathrm{d} x\n\\end{equation}\nis the contraction function. Assume that $W(k)$ exists. 
Why is $\\mu_k$ the same as the tilted pdf used in IS?\n\n\\item \\exdiff{50} (Equivalence of ensembles) Comment on the meaning of the following statements: (i) The optimal pdf $q^*$ is to the microcanonical ensemble what the tilted pdf $p_k$ is to the canonical ensemble. (ii) Proving that $p_k$ achieves a zero variance for the estimator $\\hat q_L(s)$ in the limit $n\\rightarrow\\infty$ is the same as proving that the canonical ensemble becomes equivalent to the microcanonical ensemble in the thermodynamic limit.\n\n\\item \\label{exmcis1}\\exdiff{25} (Tilted Bernoulli Markov chain) Find the expression of the tilted joint pdf $p_k(\\bm\\om)$ for the Bernoulli Markov chain of Exercise~\\ref{exbmc1}. Then use the Metropolis algorithm (see Appendix~\\ref{appmh}) to generate a large-enough sample of realizations of $\\bm\\om$ in order to estimate $p_{S_n}(s)$ and $I(s)$. Study the convergence of the estimation of $I(s)$ in terms of $n$ and $L$ towards the expression of $I(s)$ found in Exercise~\\ref{exbmc1}. Note that for a starting configuration $x_1,\\ldots,x_n$ and a target configuration $x'_1,\\ldots,x'_n$, the acceptance ratio is\n\\begin{equation}\n\\frac{p_k(x_1',\\ldots,x_n')}{p_k(x_1,\\ldots,x_n)}=\\frac{\\mathrm{e}^{k x_1'} p(x_1')\\prod_{i=2}^n \\mathrm{e}^{k x_i'} \\pi(x_i'|x_{i-1}')}{\\mathrm{e}^{k x_1} p(x_1)\\prod_{i=2}^n \\mathrm{e}^{k x_i} \\pi(x_i|x_{i-1})},\n\\end{equation}\nwhere $p(x_1)$ is some initial pdf for the first state of the Markov chain. What is the form of $\\hat q_L(s)$ in this case?\n\n\\item \\label{exgir1}\\exdiff{20} (Girsanov's formula) Denote by $p[x]$ the path pdf associated with the SDE \n\\begin{equation}\n\\dot x(t)=\\sqrt{\\varepsilon}\\, \\xi(t),\n\\end{equation}\nwhere $\\xi(t)$ is Gaussian white noise with unit noise power. 
Moreover, let $q[x]$ be the path pdf of the boosted SDE\n\\begin{equation}\n\\dot x(t)=\\mu +\\sqrt{\\varepsilon}\\, \\xi(t).\n\\end{equation}\nShow that\n\\begin{equation}\nR[x]=\\frac{p[x]}{q[x]}=\\exp\\left(-\\frac{\\mu}{\\varepsilon} \\int_0^T \\dot x\\, \\mathrm{d} t+\\frac{\\mu^2}{2\\varepsilon} T\\right)=\\exp\\left(-\\frac{\\mu}{\\varepsilon} x(T) +\\frac{\\mu^2}{2\\varepsilon} T\\right).\n\\end{equation}\n\n\\item \\label{exaecm1}\\exdiff{27} (Effective SDE) Consider the additive process,\n\\begin{equation}\nS_T=\\frac{1}{T}\\int_0^T x(t)\\, \\mathrm{d} t,\n\\end{equation}\nwith $x(t)$ evolving according to the simple Langevin equation of Exercise~\\ref{exoup1}. Show that the tilted path pdf $p_{k(s)}[x]$ associated with this process is \\textit{asymptotically} equal to the path pdf of the following boosted SDE:\n\\begin{equation}\n\\dot x(t)=-\\gamma (x(t)-s)+\\xi(t).\n\\label{eqbsde2}\n\\end{equation} \nMore precisely, show that the action $I_{k(s)}[x]$ of $p_{k(s)}[x]$ and the action $J[x]$ of the boosted SDE differ by a boundary term which is of order $O(1\/T)$.\n\n\\item \\label{exnecm1}\\exdiff{22} (Non-exponential change of measure) Although the path pdf of the boosted SDE of Eq.~(\\ref{eqbsde2}) is not exactly the tilted path pdf $p_{k}[x]$ obtained from the ECM, the likelihood ratio $R[x]$ of these two pdfs does yield the rate function of $S_T$. Verify this by obtaining $R[x]$ exactly and by using the result of Exercise~\\ref{exisi1}.\n\n\\end{exercise} \n\n\\section{Other numerical methods for large deviations}\n\\label{secnum2}\n\nThe use of the ECM method appears to be limited, as mentioned before, by the fact that the tilted pdf $p_k(\\bm\\om)$ involves $W_n(k)$. 
We show in this section how to circumvent this problem by considering estimators of $W_n(k)$ and $\\lambda(k)$ instead of estimators of the pdf $p_{S_n}(s)$, and by sampling these new estimators according to the tilted pdf $p_k(\\bm\\om)$, but with the Metropolis algorithm in order not to rely on $W_n(k)$, or directly according to the a priori pdf $p(\\bm\\om)$. At the end of the section, we also briefly describe other important methods used in numerical simulations of large deviations. \n\n\\subsection{Sample mean method}\n\nWe noted in the previous section that the parameter $k$ entering in the ECM had to be chosen by solving $\\lambda'(k)=s$ in order for $s$ to be the typical event of $S_n$ under $p_k$. The reason for this is that\n\\begin{equation}\n\\lim_{n\\rightarrow\\infty }E_{p_k}[S_n]=\\lambda'(k).\n\\end{equation}\nBy the LLN we also have that $S_n\\rightarrow\\lambda'(k)$ in probability as $n\\rightarrow\\infty$ if $S_n$ is sampled according to $p_{k}$.\n\nThese results suggest using\n\\begin{equation}\ns_{L,n}(k)=\\frac{1}{L}\\sum_{j=1}^L S_n(\\bm\\om^{(j)})\n\\end{equation}\nas an estimator of $\\lambda'(k)$ by sampling $\\bm\\om$ according to $p_k(\\bm\\om)$, and then integrating this estimator with the boundary condition $\\lambda(0)=0$ to obtain an estimate of $\\lambda(k)$. 
In other words, we can take our estimator for $\\lambda(k)$ to be\n\\begin{equation}\n\\hat\\lambda_{L,n}(k)=\\int_0^k s_{L,n}(k')\\, \\mathrm{d} k'.\n\\end{equation}\nFrom this estimator, we then obtain a parametric estimation of $I(s)$ from the GE Theorem by Legendre-transforming $\\hat\\lambda_{L,n}(k)$:\n\\begin{equation}\nI_{L,n}(s)=k s-\\hat\\lambda_{L,n}(k),\n\\label{eqplf1}\n\\end{equation}\nwhere $s=s_{L,n}(k)$ or, alternatively,\n\\begin{equation}\nI_{L,n}(s)=k(s)\\, s-\\hat\\lambda_{L,n}(k(s)),\n\\label{eqplf2}\n\\end{equation}\nwhere $k(s)$ is the root of $\\hat\\lambda_{L,n}'(k)=s$.\\footnote{In practice, we have to check numerically (using finite-size analysis) that $\\hat\\lambda_{L,n}(k)$ does not develop non-differentiable points in $k$ as $n$ and $L$ are increased to $\\infty$. If non-differentiable points arise, then $I(s)$ is recovered from the GE Theorem only in the range of $\\lambda'$; see Sec.~4.4 of \\cite{touchette2009}.}\n\nThe implementation of these estimators with the Metropolis algorithm is done explicitly via the following steps: \n\\begin{enumerate}\n\\item For a given $k\\in\\mathbb{R}$, generate an IID sample $\\{\\bm\\om^{(j)}\\}_{j=1}^L$ of $L$ configurations $\\bm\\om=x_1,\\ldots,x_n$ distributed according to $p_{k}(\\bm\\om)$ using the Metropolis algorithm, which only requires the computation of the ratio\n\\begin{equation}\n\\frac{p_{k}(\\bm\\om)}{p_{k}(\\bm\\om')}=\\frac{\\mathrm{e}^{nkS_n(\\bm\\om)}\\, p(\\bm\\om)}{\\mathrm{e}^{nkS_n(\\bm\\om')}\\, p(\\bm\\om')}\n\\end{equation}\nfor any two configurations $\\bm\\om$ and $\\bm\\om'$; see Appendix~\\ref{appmh};\n\n\\item Compute the estimator $s_{L,n}(k)$ of $\\lambda'(k)$ for the generated sample;\n\\item Repeat the two previous steps for other values of $k$ to obtain a numerical approximation of the function $\\lambda'(k)$;\n\\item Numerically integrate the approximation of $\\lambda'(k)$ over the mesh of $k$ values starting from $k=0$. 
The result of the integration is the estimator $\\hat\\lambda_{L,n}(k)$ of $\\lambda(k)$;\n\\item Numerically compute the Legendre transform, Eq.~(\\ref{eqplf1}), of the estimator of $\\lambda(k)$ to obtain the estimator $I_{L,n}(s)$ of $I(s)$ at $s=s_{L,n}(k)$;\n\\item Repeat the previous steps for larger values of $n$ and $L$ until the results converge to some desired level of accuracy.\n\\end{enumerate}\n\n\\begin{figure}[t]\n\\centering\n\\resizebox{\\textwidth}{!}{\\includegraphics{gaussiansmlambda1.pdf}}\n\\resizebox{0.5\\textwidth}{!}{\\includegraphics{gaussiansmrate1.pdf}}\n\\caption{(Top left) Empirical sample mean $s_L(k)$ for the Gaussian IID sample mean ($\\mu=\\sigma=1$). The spacing of the $k$ values is $\\Delta k=0.5$. (Top right) SCGF $\\hat\\lambda_L(k)$ obtained by integrating $s_L(k)$. (Bottom) Estimate of $I(s)$ obtained from the Legendre transform of $\\hat\\lambda_L(k)$. The dashed line in each figure is the exact result. Note that the estimate of the rate function for $L=5$ is multi-valued because the corresponding $\\hat\\lambda_L(k)$ is nonconvex.}\n\\label{figgaussiansm1}\n\\end{figure}\n\n\nThe results of these steps are illustrated in Fig.~\\ref{figgaussiansm1} for our test case of the IID Gaussian sample mean for which $\\lambda'(k)=\\mu+\\sigma^2 k$. Exercise~\\ref{exsmm1} covers other sample means studied before (e.g., exponential, binary, etc.).\n\nNote that for IID sample means, one does not need to generate $L$ realizations of the whole sequence $\\bm\\om$, but only $L$ realizations of one summand $X_i$ according to the tilted marginal pdf $p_k(x)=\\mathrm{e}^{kx} p(x)\/W(k)$. In this case, $L$ takes the role of the large deviation parameter $n$. 
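As a hedged sketch of this shortcut (Python with NumPy; all parameter and mesh values are illustrative assumptions), the steps above reduce for the Gaussian test case to a few lines, with direct sampling from the tilted marginal replacing the Metropolis step:

```python
import numpy as np

# Sketch of the sample mean method for the Gaussian IID test case
# (mu = sigma = 1): sample a single summand from the tilted marginal
# p_k(x), which is a Gaussian with mean mu + sigma^2 k.
rng = np.random.default_rng(3)
mu, sigma, L, dk = 1.0, 1.0, 10_000, 0.1
ks = np.arange(0.0, 2.0 + dk / 2, dk)          # mesh of k values

# Steps 1-3: estimate lambda'(k) by the empirical mean under p_k.
s_L = np.array([rng.normal(mu + sigma**2 * k, sigma, L).mean() for k in ks])

# Step 4: integrate the estimate of lambda'(k) from lambda(0) = 0
# (trapezoidal rule over the k mesh).
lam = np.concatenate(([0.0], np.cumsum((s_L[1:] + s_L[:-1]) / 2) * dk))

# Step 5: parametric Legendre transform, I(s) = k s - lambda(k) at s = s_L(k).
I_est = ks * s_L - lam

# Exact result for comparison: I(s) = (s - mu)^2 / (2 sigma^2).
I_exact = (s_L - mu)**2 / (2 * sigma**2)
```

The estimate tracks the exact parabola $(s-\mu)^2\/(2\sigma^2)$ over the range of $s$ values covered by the $k$ mesh, in line with Fig.~\ref{figgaussiansm1}.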
For Markov chains, realizations of the whole sequence $\\bm\\om$ must be generated one after the other, e.g., using the Metropolis algorithm.\n\n\\subsection{Empirical generating functions}\n\nThe last method that we cover is based on the estimation of the generating function\n\\begin{equation}\nW_n(k)=E[\\mathrm{e}^{nk S_n}]=\\int_{\\mathbb{R}^n}p(\\bm\\om)\\, \\mathrm{e}^{nkS_n(\\bm\\om)}\\, \\mathrm{d}\\bm\\om.\n\\end{equation}\nConsider first the case where $S_n$ is an IID sample mean. Then $\\lambda(k)=\\ln W(k)$, where $W(k)=E[\\mathrm{e}^{kX_i}]$, as we know from Sec.~\\ref{secsanov}, and so the problem reduces to the estimation of the SCGF of the common pdf $p(X)$ of the $X_i$'s. An obvious estimator for this function is\n\\begin{equation}\n\\hat\\lambda_{L}(k)=\\ln \\frac{1}{L}\\sum_{j=1}^L \\mathrm{e}^{k X^{(j)}},\n\\end{equation} \nwhere $X^{(j)}$ are the IID samples drawn from $p(X)$. From this estimator, which we call the \\textbf{empirical SCGF}, we build a parametric estimator of the rate function $I(s)$ of $S_n$ using the Legendre transform of Eq.~(\\ref{eqplf1}) with\n\\begin{equation}\ns(k)=\\hat\\lambda_L'(k)=\\frac{\\sum_{j=1}^L X^{(j)}\\, \\mathrm{e}^{k X^{(j)}}}{\\sum_{j=1}^L \\mathrm{e}^{k X^{(j)}}}.\n\\end{equation}\n\n\\begin{figure}[t]\n\\centering\n\\resizebox{\\textwidth}{!}{\\includegraphics{gaussianegflambda1.pdf}}\n\\resizebox{0.5\\textwidth}{!}{\\includegraphics{gaussianegfrate1.pdf}}\n\\caption{(Top left) Empirical SCGF $\\hat\\lambda_L(k)$ for the Gaussian IID sample mean ($\\mu=\\sigma=1$). (Top right) $s(k)=\\hat\\lambda_L'(k)$. (Bottom) Estimate of $I(s)$ obtained from $\\hat\\lambda_L(k)$. The dashed line in each figure is the exact result.}\n\\label{figgaussianegf1}\n\\end{figure}\n\nFig.~\\ref{figgaussianegf1} shows the result of these estimators for the IID Gaussian sample mean. As can be seen, the convergence of $I_L(s)$ is fast in this case, but is limited to the central part of the rate function. 
This illustrates one limitation of the empirical SCGF method: for unbounded RVs, such as the Gaussian sample mean, $W(k)$ is correctly recovered for large $|k|$ only for large samples, i.e., large $L$, which implies that the tails of $I(s)$ are also recovered only for large $L$. For bounded RVs, such as the Bernoulli sample mean, this limitation does not arise; see \\cite{duffield1995,duffy2005} and Exercise~\\ref{exegf1}.\n\nThe application of the empirical SCGF method to sample means of ergodic Markov chains is relatively direct. In this case, although $W_n$ no longer factorizes into $W(k)^n$, it is possible to ``group'' the $X_i$'s into blocks of $b$ RVs as follows:\n\\begin{equation}\n\\underbrace{X_1+\\cdots +X_b}_{Y_1} + \\underbrace{X_{b+1}+\\cdots+X_{2b}}_{Y_2}+\\cdots+\\underbrace{X_{n-b+1}+\\cdots+X_n}_{Y_m}\n\\end{equation}\nwhere $m=n\/b$, so as to rewrite the sample mean $S_n$ as\n\\begin{equation}\nS_n=\\frac{1}{n}\\sum_{i=1}^n X_i=\\frac{1}{bm}\\sum_{i=1}^m Y_i.\n\\end{equation}\nIf $b$ is large enough, then the blocks $Y_i$ can be treated as being independent. Moreover, if the Markov chain is ergodic, then the $Y_i$'s should be identically distributed for $i$ large enough. As a result, $W_n(k)$ can be approximated for large $n$ and large $b$ (but $b\\ll n$) as\n\\begin{equation}\nW_n(k)=E[\\mathrm{e}^{k\\sum_{i=1}^m Y_i}]\\approx E[\\mathrm{e}^{k Y_i}]^m,\n\\end{equation} \nso that\n\\begin{equation}\n\\lambda_n(k)=\\frac{1}{n}\\ln W_n(k)\\approx \\frac{m}{n}\\ln E[\\mathrm{e}^{k Y_i}]=\\frac{1}{b}\\ln E[\\mathrm{e}^{k Y_i}].\n\\end{equation}\nWe are thus back to our original IID problem of estimating a generating function; the only difference is that we must perform the estimation at the level of the $Y_i$'s instead of the $X_i$'s. 
This means that our estimator for $\\lambda(k)$ is now\n\\begin{equation}\n\\hat\\lambda_{L,n}(k)=\\frac{1}{b}\\ln \\frac{1}{L}\\sum_{j=1}^L \\mathrm{e}^{k Y^{(j)}},\n\\end{equation}\nwhere $Y^{(j)}$, $j=1,\\ldots, L$ are IID samples of the block RVs. Following the IID case, we then obtain an estimate of $I(s)$ in the usual way using the Legendre transform of Eq.~(\\ref{eqplf1}) or (\\ref{eqplf2}).\n\nThe application of these results to Markov chains and SDEs is covered in Exercises~\\ref{exegf1} and \\ref{exegf2}. For more information on the empirical SCGF method, including a more detailed analysis of its convergence, see \\cite{duffield1995,duffy2005}.\n\n\\subsection{Other methods}\n\nWe briefly describe below a number of other important methods used for estimating large deviation probabilities, and give for each of them some pointer references for interested readers. The fundamental ideas and concepts related to sampling and ECM that we have treated before also arise in these other methods.\n\\begin{itemize}\n\\item \\textbf{Splitting methods:} Splitting or cloning methods are the most used sampling methods after Monte Carlo algorithms based on the ECM. The idea of splitting is to ``grow'' a sample or ``population'' $\\{\\bm\\om\\}$ of sequences or configurations in such a way that a given configuration $\\bm\\om$ appears in the population in proportion to its weight $p_k(\\bm\\om)$. This is done in some iterative fashion by ``cloning'' and ``killing'' some configurations. The result of this process is essentially an estimate for $W_n(k)$, which is then used to obtain $I(s)$ by Legendre transform. 
For mathematically-oriented introductions to splitting methods, see \\cite{lecuyer2007,lecuyer2009b,dean2009,morio2010}; for more physical introductions, see \\cite{giardina2006,lecomte2007a,tailleur2007,tailleur2009}.\n\n\\item \\textbf{Optimal path method:} We briefly mentioned in Sec.~\\ref{subsecpathldp} that rate functions related to SDEs can often be derived by contraction of the action functional $I[x]$. The optimization problem that results from this contraction is often not solvable analytically, but there are several techniques stemming from optimization theory, classical mechanics, and dynamic programming that can be used to solve it. A typical optimization problem in this context is to find the optimal path that a noisy system follows with highest probability to reach a certain state considered to be a rare event or fluctuation. This optimal path is also called a \\textbf{transition path} or a \\textbf{reactive trajectory}, and can be found analytically only in certain special systems, such as linear SDEs and conservative systems; see Exercises~\\ref{exop1}--\\ref{exop3}. For nonlinear systems, the Lagrangian or Hamiltonian equations governing the dynamics of these paths must be simulated directly; see \\cite{graham1989} and Sec.~6.1 of \\cite{touchette2009} for more details.\n\n\\item \\textbf{Transition path method:} The transition path method is essentially a Monte Carlo sampling method in the space of configurations $\\bm\\om$ (for discrete-time processes) or trajectories $\\{x(t)\\}_{t=0}^T$ (for continuous-time processes and SDEs), which aims at finding, as the name suggests, transition paths or optimal paths. The contribution of Dellago in this volume covers this method in detail; see also \\cite{dellago2003,dellago2006,dellago2009,metzner2006,vanden-eijnden2006b}. 
A variant of the transition path method is the so-called \\textbf{string method} \\cite{vanden-eijnden2002}, which evolves paths in conservative systems towards optimal paths connecting different local minima of the potential or landscape function of these systems.\n\n\\item \\textbf{Eigenvalue method:} We have noted that the SCGF of an ergodic continuous-time process is given by the dominant eigenvalue of its tilted generator. From this connection, one can attempt to numerically evaluate SCGFs using numerical approximation methods for linear functional operators, combined with numerical algorithms for finding eigenvalues. A recent application of this approach, based on the renormalization group, can be found in \\cite{gorissen2009,gorissen2011}.\n\\end{itemize}\n\n\n\n\\subsection{Exercises}\n\\label{secex4}\n\n\\begin{exercise}\n\\item \\label{exsmm1}\\exdiff{25} (Sample mean method) Implement the steps of the sample mean method to numerically estimate the rate functions of all the IID sample means studied in these notes, including the sample mean of the Bernoulli Markov chain.\n\n\\item \\exdiff{25}\\label{exegf1} (Empirical SCGF method) Repeat the previous exercise using the empirical SCGF method instead of the sample mean method. Explain why the former method converges quickly when $S_n$ is bounded, e.g., in the Bernoulli case. Analyze in detail how the method converges when $S_n$ is unbounded, e.g., for the exponential case. (Source: \\cite{duffield1995,duffy2005}.)\n\n\\item \\exdiff{30}\\label{exegf2} (Sampling of SDEs) Generalize the sample mean and empirical SCGF methods to SDEs. Study the additive process of Exercise~\\ref{exaecm1} as a test case. \n\n\\item \\exdiff{30}\\label{exop1} (Optimal paths for conservative systems) Consider the conservative system of Exercise~\\ref{excons1} and assume, without loss of generality, that the global minimum of the potential $U(\\mathbf{x})$ is at $\\mathbf{x}=0$. 
Show that the optimal path which brings the system from $\\mathbf{x}(0)=0$ to some fluctuation $\\mathbf{x}(T)=\\mathbf{x}\\neq 0$ after a time $T$ is the time-reversal of the solution of the noiseless dynamics $\\dot \\mathbf{x} =-\\nabla U(\\mathbf{x})$ under the boundary conditions $\\mathbf{x}(0)=\\mathbf{x}$ and $\\mathbf{x}(T)=0$. The latter path is often called the \\textbf{decay path}. \n\n\\item \\exdiff{25}\\label{exop2} (Optimal path for additive processes) Consider the additive process $S_T$ of Exercise~\\ref{exaecm1} under the Langevin dynamics of Exercise~\\ref{exoup1}. Show that the optimal path leading to the fluctuation $S_T=s$ is the constant path\n\\begin{equation}\nx(t)=s,\\quad t\\in [0,T].\n\\end{equation}\nExplain why this result is the same as the asymptotic solution of the boosted SDE of Eq.~(\\ref{eqbsde2}).\n\n\\item \\exdiff{25}\\label{exop3} (Optimal path for the dragged Brownian particle) Apply the previous exercise to Exercise~\\ref{exbmp1}. Discuss the physical interpretation of the results.\n\n\\end{exercise} \n\n\\section*{Acknowledgments}\n\nI would like to thank Rosemary J. Harris for suggesting many improvements to these notes, as well as Alexander Hartmann and Reinhard Leidl for the invitation to participate in the 2011 Oldenburg Summer School.\n