{"text":"\\section{INTRODUCTION}\n\nVarious instability mechanisms are studied to understand some of the main\nfeatures and processes in the astrophysical objects depending on their\nphysical properties. Convective or buoyancy instability arising as a result\nof stratification is among those instabilities that may operate under\ndifferent circumstances from the stellar interiors (e.g., Schwarzschild\n1958), accretion disks (Balbus 2000, 2001), and neutron stars (Chang \\&\nQuataert 2009) to the hot accretion flows (e.g., Narayan et al. 2000, 2002)\nand even galaxy clusters and intercluster medium (ICM) (e.g., Quataert 2008;\nSharma et al. 2009; Ren et al. 2009). Analogous instabilities also exist in\nthe neutral atmosphere of the Earth and ocean (Gossard \\& Hooke 1975;\nPedlosky 1982). Diversity of the astrophysical objects, in which convective\ninstabilities may have a significant role, leading to turbulence and\nanomalous energy and matter transport, is a good motivation to explore this\ninstability either through linear analytical analysis or by direct numerical\nsimulations from different physical point of views. Although the significant\nrole of convection in the transport of energy in stellar interiors is a\nwell-known physical process, theoretical efforts to understand convective\nenergy transport in the tenuous and hot plasmas such as ICM (Sarazin 1988)\nhave lead to some results over recent years.\n\nAccording to the standard Schwarzschild criterion, a thermally stratified\nfluid is convectively unstable when the entropy increases in the direction\nof gravity (Schwarzschild 1958). By taking into account the anisotropic heat\nflux in plasmas where the mean free path of ions and electrons is much\nlarger than their Larmor radius, one obtains additional instabilities for\nshort wave numbers with larger growth rates than that without thermal flux.\nThese instabilities have been shown to arise when the temperature increases\nin the direction of gravity at the absence of the background thermal flux\n(the magnetothermal instability (MTI)) (Balbus 2000, 2001) and when the\ntemperature decreases along gravity at the presence of the latter (the heat\nbuoyancy instability (HBI)) (Quataert 2008). Both MTI and HBI have been\nsimulated in 2D and 3D by many authors over recent years (e.g., Parrish et\nal. 2008; Parrish \\& Quataert 2008; Parrish et al. 2009). Following recent\nachievements in the convective theory, it has attracted attention of the\nauthors for analyzing its possible role in ICM after a long time\ndiscounting. Majority of the mass of a cluster of galaxies is in the dark\nmatter. However, around 1\/6 of its mass consists of a hot, magnetized, and\nlow density plasma known as ICM. The electron density is $n_{e}\\simeq 10^{-2}\n$ to $10^{-1}$ cm$^{-3}$ at the central parts of ICM. The electron\ntemperature $T_{e}$ is measured of the order of a few keV, though the ion\ntemperature $T_{i}$ has not yet been measured directly (e.g., Fabian et al.\n2006; Sanders et al. 2010). The magnetic field strength $B$ in ICM is\nestimated to be in the range 0.1-10 $\\mu $G depending on where the\nmeasurement is made (Carilli \\& Taylor 2002) which implies a dynamically\nweak magnetic field with $\\beta =8\\pi n_{e}T_{e}\/B^{2}\\approx 200-2000$.\nThus, ICM with the ion Larmor radius $10^{8-9}$ cm ($T_{i}\\sim T_{e}$) and\nthe mean free path $10^{22-23}$ cm is classified as a weakly collisional\nplasma (Carilli \\& Taylor 2002). 
In simulating ICM, it is important to\nconsider anisotropic viscosity as well because the Reynolds number is very\nlow (Lyutikov 2007, 2008). Another important physical agent is cosmic rays.\nRecent studies show that centrally concentrated cosmic rays have a\ndestabilizing effect on the convection in ICM (Chandran \\& Dennis 2006;\nRasera and Chandran 2008).\n\nTheoretical models applied for study of buoyancy instabilities are based on\nthe ideal magnetohydrodynamic (MHD) equations (Balbus 2000, 2001; Quataert\n2008, Chang \\& Quataert 2009; Ren et al. 2009). Using of these equations\npermits us comparatively easily to consider different problems. However, the\nideal MHD does not capture some important effects. One of the such effects\nis the nonzero longitudinal electric field perturbation along the\nbackground magnetic field. As we show here, the contribution of currents due\nto this small field to the dispersion relation can be of the same order of\nmagnitude as that due to other electric field components. Besides, the MHD\nequations do not take into account the very existence of various charged and\nneutral species with different masses and electric charges and their\ncollisions between each others and therefore can not be applied to\nmulticomponent systems. On the contrary, the plasma $\\mathbf{E}$-approach\ndeals with dynamical equations for each species. From Faraday's and Ampere's\nlaws one obtains equations for the electric field components. Such an\napproach allows us to follow the movement and changing of parameters of each\nspecies separately and obtain rigorous conditions of consideration and\nphysical consequences in specific cases. This approach permits us to include\nvarious species of ions and dust grains having different charges and masses.\nIn this way, streaming instabilities of rotating multicomponent objects\n(accretion disks, molecular clouds and so on) have been investigated by\nNekrasov (e.g., 2008, 2009 a, 2009 b), which have growth rates much larger\nthan that of the magnetorotational instability (Balbus 1991). In some cases,\nthe standard methods used in MHD leads to conclusions that are different\nfrom those obtained by the method using the electric field perturbations.\nOne of a such example is considered in Nekrasov (2009 c).\n\nIn this paper, we apply a multicomponent approach to study buoyancy\ninstabilities in magnetized electron-ion astrophysical plasmas with the\nbackground electron thermal flux. We include collisions between electrons\nand ions. However, we adopt here that cyclotron frequencies of species are\nmuch larger than their collision frequencies. Such conditions are typical\nfor ICM and galaxy clusters. In this case, as it is known, the heat flux is\nanisotropic and directed along the magnetic field lines (Braginskii 1965).\nWe consider a geometry in which gravity, stratification, and the background\nmagnetic field are all directed along one ($z$-) axis. In our approach, it\nis important to obtain exact expressions for species' velocities in an\ninhomogeneous medium. We give main equations and results. However, for those\nwho are not interested in the mathematical details, they can directly refer\nto Sections 7 and 8. The dispersion relation is obtained for cases, in which\nthe background heat flux is absent or present. This gives a possibility to\ncompare two cases. Solutions of the dispersion relation are discussed.\n\nThe paper is organized as follows. In Section 2, the fundamental equations\nare given. 
An equilibrium state is considered in Section 3. Perturbed ion\nvelocity, number density, and thermal pressure are obtained in Section 4. In\nSection 5, we consider the perturbed velocity and temperature for electrons.\nComponents of the dielectric permeability tensor are found in Section 6.\n\n\\bigskip\n\n\\section{BASIC EQUATIONS}\n\nWe start with the following equations for ions:\n\\begin{equation}\n\\frac{\\partial \\mathbf{v}_{i}}{\\partial t}\\mathbf{=-}\\frac{\\mathbf{\\nabla }%\np_{i}}{m_{i}n_{i}}+\\mathbf{g+}\\frac{q_{i}}{m_{i}}\\mathbf{E}+\\frac{q_{i}}{%\nm_{i}c}\\mathbf{v}_{i}\\times \\mathbf{B}-\\nu _{ie}\\left( \\mathbf{v}_{i}-%\n\\mathbf{v}_{e}\\right) ,\n\\end{equation}\nthe momentum equation,\n\\begin{equation}\n\\frac{\\partial n_{i}}{\\partial t}+\\mathbf{\\nabla }\\cdot n_{i}\\mathbf{v}%\n_{i}=0,\n\\end{equation}\nthe continuity equation, and\n\\begin{equation}\n\\frac{\\partial p_{i}}{\\partial t}+\\mathbf{v}_{i}\\cdot \\mathbf{\\nabla }%\np_{i}+\\gamma p_{i}\\mathbf{\\nabla }\\cdot \\mathbf{v}_{i}=0,\n\\end{equation}\nthe pressure equation. The corresponding equations for electrons are:\n\\begin{equation}\n\\mathbf{0=-}\\frac{\\mathbf{\\nabla }p_{e}}{n_{e}}+q_{e}\\mathbf{E}+\\frac{q_{e}}{%\nc}\\mathbf{v}_{e}\\times \\mathbf{B}-m_{e}\\nu _{ei}\\left( \\mathbf{v}_{e}-%\n\\mathbf{v}_{i}\\right) ,\n\\end{equation}%\n\\begin{equation}\n\\frac{\\partial n_{e}}{\\partial t}+\\mathbf{\\nabla }\\cdot n_{e}\\mathbf{v}%\n_{e}=0,\n\\end{equation}%\n\\begin{equation}\n\\frac{\\partial p_{e}}{\\partial t}+\\mathbf{v}_{e}\\cdot \\mathbf{\\nabla }%\np_{e}+\\gamma p_{e}\\mathbf{\\nabla }\\cdot \\mathbf{v}_{e}=\\lambda -\\left(\n\\gamma -1\\right) \\mathbf{\\nabla \\cdot q}_{e},\n\\end{equation}%\n\\begin{equation}\n\\frac{\\partial T_{e}}{\\partial t}+\\mathbf{v}_{e}\\cdot \\mathbf{\\nabla }%\nT_{e}+\\left( \\gamma -1\\right) T_{e}\\mathbf{\\nabla }\\cdot \\mathbf{v}_{e}=%\n\\frac{\\lambda }{n_{e}}-\\left( \\gamma -1\\right) \\frac{1}{n_{e}}\\mathbf{\\nabla\n\\cdot q}_{e},\n\\end{equation}\nthe temperature equation, where $\\mathbf{q}_{e}$ is the electron heat flux\n(Braginskii 1965). We neglect inertia of the electrons. In Equations\n(1)-(7), $q_{j}$ and $m_{j}$ are the charge and mass of species $j=i,e$, $%\n\\mathbf{v}_{j}$ is the hydrodynamic velocity, $n_{j}$ is the number density,\n$p_{j}=n_{j}T_{j}$ is the thermal pressure, $T_{j}$ is the temperature, $\\nu\n_{ie}$ ($\\nu _{ei}$) is the collision frequency of ions (electrons) with\nelectrons (ions), $\\mathbf{g}$ is gravity, $\\mathbf{E}$\\textbf{\\ }and $%\n\\mathbf{B}$ are the electric and magnetic fields, $c$ is the speed of light\nin vacuum, and $\\gamma $ is the adiabatic constant. We assume the electrons\nto be magnetized when their cyclotron frequency $\\omega\n_{ce}=q_{e}B\/m_{e}c\\gg \\nu _{ee}$, where $\\nu _{ee}$ is the the\nelectron-electron collision frequency. In this case, the electron thermal\nflux is mainly directed along the magnetic field,%\n\\begin{equation}\n\\mathbf{q}_{e}=-\\chi _{e}\\mathbf{b}\\left( \\mathbf{b\\cdot \\nabla }\\right)\nT_{e},\n\\end{equation}\nwhere $\\chi _{e}$ is the electron thermal conductivity coefficient and $%\n\\mathbf{b=B\/}B$ is the unit vector along the magnetic field (Braginskii\n1965). The term $\\lambda $ compensates the temperature change as a result of\nthe equilibrium heat flux. 
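To make the role of $\lambda $ explicit (this is just Equation (6) evaluated in the static equilibrium considered below, not an extra assumption), setting $\partial /\partial t=0$ and $\mathbf{v}_{e}=0$ in Equation (6) gives
\[
\lambda =\left( \gamma -1\right) \mathbf{\nabla \cdot q}_{e0}=-\left( \gamma -1\right) \mathbf{\nabla \cdot }\left[ \chi _{e0}\mathbf{b}_{0}\left( \mathbf{b}_{0}\mathbf{\cdot \nabla }\right) T_{e0}\right] ,
\]
so that the divergence of the background heat flux does not produce a secular change of the equilibrium temperature.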
We take only into account the electron thermal\nconductivity by equation (8), because the corresponding ion conductivity is\nconsiderably smaller (Braginskii 1965).\n\nElectromagnetic equations are Faraday's law\n\\begin{equation}\n\\mathbf{\\nabla \\times E=-}\\frac{1}{c}\\frac{\\partial \\mathbf{B}}{\\partial t}\n\\end{equation}\nand Ampere`s law\n\\begin{equation}\n\\mathbf{\\nabla \\times B=}\\frac{4\\pi }{c}\\mathbf{j,}\n\\end{equation}\nwhere $\\mathbf{j=}\\sum_{j}q_{j}n_{j}\\mathbf{v}_{j}.$ We consider the wave\nprocesses with typical time-scales much larger than the time the light\nspends to cover the wavelength of perturbations. In this case, one can\nneglect the displacement current in Equation (10) that results in\nquasineutrality both in electromagnetic and purely electrostatic\nperturbations. The magnetic field $\\mathbf{B}$ includes the background\nmagnetic field $\\mathbf{B}_{0}$, the magnetic field $\\mathbf{B}_{0cur}$ of\nthe background current (when it presents), and the perturbed magnetic field.\n\n\\bigskip\n\n\\section{EQUILIBRIUM STATE}\n\nAt first, we consider an equilibrium state. We assume that background\nvelocities are absent. In this paper, we study configuration, in which the\nbackground magnetic field, gravity, and stratification are directed along\nthe $z$-axis. Let, for definiteness, $\\mathbf{g}$ be $\\mathbf{g=-z}g$, where\n$g>0$ and $\\mathbf{z}$ is the unit vector along the $z$-direction. Then,\nEquations (1) and (4) give\n\\begin{equation}\ng_{i}=-\\frac{1}{m_{i}n_{i0}}\\frac{\\partial p_{i0}}{\\partial z}=g-\\frac{q_{i}%\n}{m_{i}}E_{0},\n\\end{equation}%\n\\begin{equation}\ng_{e}=-\\frac{1}{m_{i}n_{e0}}\\frac{\\partial p_{e0}}{\\partial z}=\\frac{q_{i}}{%\nm_{i}}E_{0},\n\\end{equation}\nwhere (and below) the index $0$ denotes equilibrium values. Here and below\nwe assume that $q_{i}=-q_{e}$. We see that equilibrium distributions of ions\nand electrons influence each other through the background electric field $%\nE_{0}$. In the case $n_{i0}=n_{e0}$ ( this equality is satisfied for the\ntwo-component plasma) and $T_{i0}=T_{e0}$, we obtain $g_{i}=g_{e}=$ $g\/2$.\nThus, we have $E_{0}=m_{i}g\/2q_{i}$. The presence of the third component,\nfor example, of the cold dust grains with the charge $q_{d}$ and mass $%\nm_{d}\\gg m_{i}$ results in other value of $E_{0}=m_{d}g\/q_{d}$. In this\ncase, the ions and electrons are in equilibrium under the action of the\nthermal pressure and equilibrium electric field, being $g_{i}\\simeq -g_{e}$.\n\n\\bigskip\n\n\\section{LINEAR\\ ION\\ PERTURBATIONS}\n\nLet us write Equations (1)-(3) for ions in the linear approximation,%\n\\begin{equation}\n\\frac{\\partial \\mathbf{v}_{i1}}{\\partial t}\\mathbf{=-}\\frac{\\mathbf{\\nabla }%\np_{i1}}{m_{i}n_{i0}}+\\frac{\\mathbf{\\nabla }p_{i0}}{m_{i}n_{i0}}\\frac{n_{i1}}{%\nn_{i0}}+\\mathbf{F}_{i1}+\\frac{q_{i}}{m_{i}c}\\mathbf{v}_{i1}\\times \\mathbf{B}%\n_{0},\n\\end{equation}%\n\\begin{equation}\n\\frac{\\partial n_{i1}}{\\partial t}+v_{i1z}\\frac{\\partial n_{i0}}{\\partial z}%\n+n_{i0}\\mathbf{\\nabla }\\cdot \\mathbf{v}_{i1}=0,\n\\end{equation}%\n\\begin{equation}\n\\frac{\\partial p_{i1}}{\\partial t}+v_{i1z}\\frac{\\partial p_{i0}}{\\partial z}%\n+\\gamma p_{i0}\\mathbf{\\nabla }\\cdot \\mathbf{v}_{i1}=0,\n\\end{equation}\nwhere\n\\begin{equation}\n\\mathbf{F}_{i1}=\\frac{q_{i}}{m_{i}}\\mathbf{E}_{1}-\\nu _{ie}\\left( \\mathbf{v}%\n_{i1}-\\mathbf{v}_{e1}\\right),\n\\end{equation}\nand the index $1$ denotes the perturbed variables. 
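For clarity, we also write out how the second term on the right-hand side of Equation (13) arises (a standard linearization step): expanding the pressure-gradient term of Equation (1) with $n_{i}=n_{i0}+n_{i1}$ and $p_{i}=p_{i0}+p_{i1}$ gives, to first order,
\[
-\frac{\mathbf{\nabla }p_{i}}{m_{i}n_{i}}\simeq -\frac{\mathbf{\nabla }p_{i0}}{m_{i}n_{i0}}-\frac{\mathbf{\nabla }p_{i1}}{m_{i}n_{i0}}+\frac{\mathbf{\nabla }p_{i0}}{m_{i}n_{i0}}\frac{n_{i1}}{n_{i0}},
\]
and the zeroth-order part cancels against $\mathbf{g}+\left( q_{i}/m_{i}\right) \mathbf{E}_{0}$ by the equilibrium condition (11), leaving Equation (13) with $\mathbf{F}_{i1}$ defined by Equation (16).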
Below, we solve these\nequations to find the perturbed velocity of ions in an inhomogeneous medium.\n\n\\subsection{Perturbed velocity of ions}\n\nApplying the operator $\\partial \/\\partial t$ to Equation (13) and using\nEquations (14) and (15), we obtain%\n\\begin{equation}\n\\frac{\\partial ^{2}\\mathbf{v}_{i1}}{\\partial t^{2}}\\mathbf{=-}g_{i}\\mathbf{%\n\\nabla }v_{i1z}+\\frac{1}{m_{i}n_{i0}}\\left[ \\left( \\gamma -1\\right) \\left(\n\\mathbf{\\nabla }p_{i0}\\right) +\\gamma p_{i0}\\mathbf{\\nabla }\\right] \\mathbf{%\n\\nabla }\\cdot \\mathbf{v}_{i1}+\\frac{\\partial \\mathbf{F}_{i1}}{\\partial t}+%\n\\frac{q_{i}}{m_{i}c}\\frac{\\partial \\mathbf{v}_{i1}}{\\partial t}\\times\n\\mathbf{B}_{0}.\n\\end{equation}\nWe can find solutions for the components of $\\mathbf{v}_{i1}$. For\nsimplicity, we assume that $\\partial \/\\partial x=0$, because a system is\nsymmetric in the transverse direction relative to the $z$-axis. The $x$%\n-component of Equation (17) has the form\n\\begin{equation}\n\\frac{\\partial v_{i1x}}{\\partial t}\\mathbf{=}F_{i1x}+\\omega _{ci}v_{i1y},\n\\end{equation}\nwhere $\\omega _{ci}=q_{i}B_{0}\/m_{i}c$ is the ion cyclotron frequency. For\nthe $y$-component of Equation (17), we obtain:%\n\\begin{equation}\n\\frac{\\partial ^{2}v_{i1y}}{\\partial t^{2}}\\mathbf{=-}g_{i}\\frac{\\partial\nv_{i1z}}{\\partial y}+c_{si}^{2}\\frac{\\partial }{\\partial y}\\mathbf{\\nabla }%\n\\cdot \\mathbf{v}_{i1}+\\frac{\\partial F_{i1y}}{\\partial t}-\\omega _{ci}\\frac{%\n\\partial v_{i1x}}{\\partial t}.\n\\end{equation}%\nHere, $c_{si}=\\left( \\gamma T_{i0}\/m_{i}\\right) ^{1\/2}$ is the ion sound\nvelocity.\n\nUsing Equation (18), a relation for $v_{i1y}$ is given from Equation (19) as\nfollows\n\\begin{equation}\n\\left( \\frac{\\partial ^{2}}{\\partial t^{2}}+\\omega _{ci}^{2}\\right)\nv_{i1y}-Q_{i1y}\\mathbf{=}\\frac{\\partial P_{i1}}{\\partial y}.\n\\end{equation}\nThen from Equation (18), we obtain%\n\\begin{equation}\n\\frac{\\partial }{\\omega _{ci}\\partial t}\\left[ \\left( \\frac{\\partial ^{2}}{%\n\\partial t^{2}}+\\omega _{ci}^{2}\\right) v_{i1x}-Q_{i1x}\\right] \\mathbf{=}%\n\\frac{\\partial P_{i1}}{\\partial y}.\n\\end{equation}\n\nHere, the following notations are introduced:%\n\\begin{equation}\nP_{i1}=\\mathbf{-}g_{i}v_{i1z}+c_{si}^{2}\\mathbf{\\nabla }\\cdot \\mathbf{v}%\n_{i1},\n\\end{equation}%\n\\begin{equation}\nQ_{i1x}=\\omega _{ci}F_{i1y}+\\frac{\\partial F_{i1x}}{\\partial t},\n\\end{equation}%\n\\begin{equation}\nQ_{i1y}=-\\omega _{ci}F_{i1x}+\\frac{\\partial F_{i1y}}{\\partial t}.\n\\end{equation}\n\nThe value $P_{i1}$ defines the pressure perturbation (Eq. [15]). We see from\nEquation (21) that when $\\partial \/\\partial t\\ll \\omega _{ci}$ the thermal\npressure effect on the velocity $v_{i1x}$ is much larger than that on $%\nv_{i1y}$. 
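This statement can be made quantitative by an order-of-magnitude estimate in the Fourier representation (anticipating the local approximation used below, with perturbations $\propto \exp (i\mathbf{kr}-i\omega t)$ and $\omega \ll \omega _{ci}$): the parts of Equations (20) and (21) driven by the pressure term $\partial P_{i1}/\partial y$ give
\[
v_{i1yk}^{\left( P\right) }\simeq i\frac{k_{y}}{\omega _{ci}^{2}}P_{i1k},\qquad v_{i1xk}^{\left( P\right) }\simeq -\frac{k_{y}}{\omega \omega _{ci}}P_{i1k},
\]
so that $\left\vert v_{i1xk}^{\left( P\right) }\right\vert /\left\vert v_{i1yk}^{\left( P\right) }\right\vert \sim \omega _{ci}/\omega \gg 1$, in accordance with the statement above.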
The $z$-component of Equation (17) takes the form\n\\begin{equation}\n\\frac{\\partial }{\\partial t}\\left( \\frac{\\partial v_{i1z}}{\\partial t}%\n-F_{i1z}\\right) \\mathbf{=-}g_{i}\\frac{\\partial v_{i1z}}{\\partial z}+\\left[\n\\left( 1-\\gamma \\right) g_{i}+c_{si}^{2}\\frac{\\partial }{\\partial z}\\right]\n\\mathbf{\\nabla }\\cdot \\mathbf{v}_{i1}.\n\\end{equation}\n\nLet us now find $\\mathbf{\\nabla }\\cdot \\mathbf{v}_{i1}$ through $v_{i1z}$.\nDifferentiating Equation (20) with respect to $y$ and using expression (22),\nwe obtain\n\\begin{equation}\nL_{1}\\mathbf{\\nabla }\\cdot \\mathbf{v}_{i1}\\mathbf{=}L_{2}v_{i1z}+\\frac{%\n\\partial Q_{i1y}}{\\partial y},\n\\end{equation}\nwhere the following operators are introduced:%\n\\begin{equation}\nL_{1}=\\frac{\\partial ^{2}}{\\partial t^{2}}+\\omega _{ci}^{2}\\mathbf{-}%\nc_{si}^{2}\\frac{\\partial ^{2}}{\\partial y^{2}},\n\\end{equation}%\n\\begin{equation}\nL_{2}=\\left( \\frac{\\partial ^{2}}{\\partial t^{2}}+\\omega _{ci}^{2}\\right)\n\\frac{\\partial }{\\partial z}-g_{i}\\frac{\\partial ^{2}}{\\partial y^{2}}.\n\\end{equation}\n\nWe can derive an equation for the longitudinal velocity $v_{i1z}$,\nsubstituting $\\mathbf{\\nabla }\\cdot \\mathbf{v}_{i1}$ found from Equation\n(26) in Equation (25),%\n\\begin{equation}\nL_{3}v_{i1z}\\mathbf{=}L_{1}\\frac{\\partial F_{i1z}}{\\partial t}+L_{4}\\frac{%\n\\partial Q_{i1y}}{\\partial y},\n\\end{equation}\nwhere operators $L_{3}$ and $L_{4}$ have the form\n\\begin{eqnarray}\nL_{3} &=&\\left( \\frac{\\partial ^{2}}{\\partial t^{2}}+\\omega _{ci}^{2}\\right)\n\\frac{\\partial ^{2}}{\\partial t^{2}}-c_{si}^{2}\\left( \\frac{\\partial ^{2}}{%\n\\partial y^{2}}+\\frac{\\partial ^{2}}{\\partial z^{2}}\\right) \\frac{\\partial\n^{2}}{\\partial t^{2}}-c_{si}^{2}\\omega _{ci}^{2}\\frac{\\partial ^{2}}{%\n\\partial z^{2}} \\\\\n&&+\\gamma g_{i}\\left( \\frac{\\partial ^{2}}{\\partial t^{2}}+\\omega\n_{ci}^{2}\\right) \\frac{\\partial }{\\partial z}+c_{si}^{2}\\frac{\\partial L_{1}%\n}{L_{1}\\partial z}L_{2}+\\left( 1-\\gamma \\right) g_{i}^{2}\\frac{\\partial ^{2}%\n}{\\partial y^{2}}, \\nonumber\n\\end{eqnarray}\n\n\\begin{equation}\nL_{4}=\\left( 1-\\gamma \\right) g_{i}+c_{si}^{2}\\left( \\frac{\\partial }{%\n\\partial z}-\\frac{\\partial L_{1}}{L_{1}\\partial z}\\right) .\n\\end{equation}\nFor obtaining expression (30), we have used expressions (27) and (28).\n\nIt is easy to see that at the absence of the background magnetic field and\nwithout taking into account electromagnetic perturbations (the right\nhand-side of Eq. [29]), the equation $L_{3}v_{i1z}=0$ describes the ion\nsound and internal gravity waves. In this case, a sum of the last two terms\non the right hand-side of expression (30) is equal to $-c_{si}^{2}\\omega\n_{bi}^{2}\\frac{\\partial ^{2}}{\\partial y^{2}}$, where $\\omega _{bi}$ is the\n(ion) Brunt-V\\\"{a}is\\\"{a}l\\\"{a} frequency equal to%\n\\begin{equation}\n\\omega _{bi}^{2}=\\frac{g_{i}}{c_{si}^{2}}\\left[ \\left( \\gamma -1\\right)\ng_{i}+\\frac{\\partial c_{si}^{2}}{\\partial z}\\right] .\n\\end{equation}\nHowever, we see the existence of the background magnetic field considerably\nmodifies the operator $L_{3}$. Note the right hand-side of Equation (29)\ndescribes a connection between ions and electrons through the electric field\n$\\mathbf{E}_{1}$ and collisions.\n\n\\subsection{Specific case for ions}\n\nSo far, we have not made any simplifications and all the equations and the\nexpressions are given in their general forms. 
Now, we consider further\nperturbations with a frequency much lower than the ion cyclotron frequency\nand the transverse wavelengths much larger than the ion Larmor radius. Such\nconditions are typical for the astrophysical plasmas. Besides, we\ninvestigate a part of the frequency spectrum in the region lower than the\nion sound frequency. Thus, we set%\n\\begin{equation}\n\\omega _{ci}^{2}\\gg \\frac{\\partial ^{2}}{\\partial t^{2}},c_{si}^{2}\\frac{%\n\\partial ^{2}}{\\partial y^{2}};c_{si}^{2}\\frac{\\partial ^{2}}{\\partial z^{2}}%\n\\gg \\frac{\\partial ^{2}}{\\partial t^{2}}.\n\\end{equation}%\nIn this case, operators (27), (28), (30), and (31) take the form%\n\\begin{eqnarray}\nL_{1} &\\simeq &\\omega _{ci}^{2},L_{2}\\simeq \\omega _{ci}^{2}\\frac{\\partial }{%\n\\partial z}, \\\\\nL_{3} &=&-\\omega _{ci}^{2}\\left[ \\left( c_{si}^{2}\\frac{\\partial }{\\partial z%\n}-\\gamma g_{i}\\right) \\frac{\\partial }{\\partial z}-\\frac{\\partial ^{2}}{%\n\\partial t^{2}}\\right] , \\nonumber \\\\\nL_{4} &=&\\left( 1-\\gamma \\right) g_{i}+c_{si}^{2}\\frac{\\partial }{\\partial z}%\n. \\nonumber\n\\end{eqnarray}%\nAlso, the operator $L_{3}$ can be written for a case in which\n\\begin{equation}\n\\omega _{ci}^{2}\\frac{\\partial ^{2}}{\\partial t^{2}}\\gg c_{si}^{2}\\frac{%\n\\partial c_{si}^{2}}{\\partial z}\\frac{\\partial ^{3}}{\\partial y^{2}\\partial z%\n}.\n\\end{equation}\nThe small corrections in operators $L_{3}$ and $L_{4}$ are needed to be kept\nbecause some main terms in expressions for ion and electron\nvelocities are equal each other (see below). Therefore, when calculating the\nelectric current these main terms will be canceled and small corrections to\nvelocities will only contribute to the current.\n\nFor the cases represented by inequalities (33) and (35) when the operators\nhave the form (34), the equations for $v_{i1z}$ and $\\mathbf{\\nabla }\\cdot\n\\mathbf{v}_{i1}$ become%\n\\begin{equation}\n\\left[ \\left( c_{si}^{2}\\frac{\\partial }{\\partial z}-\\gamma g_{i}\\right)\n\\frac{\\partial }{\\partial z}-\\frac{\\partial ^{2}}{\\partial t^{2}}\\right]\nv_{i1z}\\mathbf{=-}\\frac{\\partial F_{i1z}}{\\partial t}-\\left[ \\left( 1-\\gamma\n\\right) g_{i}+c_{si}^{2}\\frac{\\partial }{\\partial z}\\right] \\frac{\\partial\nQ_{i1y}}{\\omega _{ci}^{2}\\partial y},\n\\end{equation}%\n\\begin{equation}\n\\mathbf{\\nabla }\\cdot \\mathbf{v}_{i1}\\simeq \\frac{\\partial v_{i1z}}{\\partial\nz}+\\frac{\\partial Q_{i1y}}{\\omega _{ci}^{2}\\partial y}.\n\\end{equation}\n\n\\subsection{Ion velocity in the Fourier transform}\n\nCalculations show that some main terms in expressions for $v_{i1z}$ (when\ncalculating the current), $\\mathbf{\\nabla }\\cdot \\mathbf{v}_{i1}$ and $%\nP_{i1} $ are canceled. Therefore, the small terms proportional to\ninhomogeneity must be taken into account. To make this correctly, we can not\nmake the Fourier transformation in Equations (36) and (37) to find the\nperturbed ion pressure $P_{i1}$. However, firstly, we should apply the\noperator $\\partial \/\\partial z$ to this variable for using Equation (36). It\nis analogous to obtain the term $\\partial c_{s}^{2}\/\\partial z$ in\nexpression (32) for the Brunt-V\\\"{a}is\\\"{a}l\\\"{a} frequency. 
After that, we\ncan apply in a local approximation the Fourier transformation assuming the\nlinear perturbations to be proportional to $\\exp (i\\mathbf{kr-}i\\omega t)$.\nAs a result, we obtain for the Fourier-components $v_{i1zk}$, $\\mathbf{k}%\n\\cdot \\mathbf{v}_{i1k}$ and $P_{i1k}$, where $k=\\left( \\mathbf{k,}\\omega\n\\right) $, the following expressions:\n\\begin{equation}\nv_{i1zk}=-i\\frac{\\omega }{k_{z}^{2}c_{si}^{2}}\\left( 1-i\\frac{\\gamma g_{i}}{%\nk_{z}c_{si}^{2}}\\right) F_{i1zk}-\\frac{k_{y}}{k_{z}\\omega _{ci}^{2}}\\left(\n1-i\\frac{g_{i}}{k_{z}c_{si}^{2}}\\right) Q_{i1yk},\n\\end{equation}%\n\\begin{equation}\n\\mathbf{k}\\cdot \\mathbf{v}_{i1k}\\mathbf{=-}i\\frac{\\omega }{k_{z}c_{si}^{2}}%\n\\left( 1-i\\frac{\\gamma g_{i}}{k_{z}c_{si}^{2}}\\right) F_{i1zk}+i\\frac{k_{y}}{%\nk_{z}}\\frac{g_{i}}{c_{si}^{2}\\omega _{ci}^{2}}Q_{i1yk},\n\\end{equation}%\n\\begin{eqnarray}\nP_{i1k} &=&\\frac{\\omega }{k_{z}}F_{i1zk}-i\\frac{\\omega }{k_{z}^{2}c_{si}^{2}}%\n\\left[ \\left( \\gamma -1\\right) g_{i}+\\frac{\\partial c_{si}^{2}}{\\partial z}%\n\\right] F_{i1zk} \\\\\n&&+i\\frac{k_{y}g_{i}}{k_{z}^{2}c_{si}^{2}\\omega _{ci}^{2}}\\left[ \\left(\n\\gamma -1\\right) g_{i}+\\frac{\\partial c_{si}^{2}}{\\partial z}-\\omega ^{2}%\n\\frac{c_{si}^{2}}{g_{i}}\\right] Q_{i1yk}. \\nonumber\n\\end{eqnarray}\n\nIn expressions (38) and (39), we have omitted additional small terms at $%\nQ_{i1yk}$, which are needed for calculation of $P_{i1k}$. When calculating\nthe current along the $z$-axis, the main term $\\sim Q_{i1yk}$ in Equation\n(38) will be canceled. The contribution of the first term $\\sim F_{i1zk}$ to\nthis current has, as we shall show below, the same order of magnitude for\nthe buoyancy instabilities as that of the term $\\sim g_{i}Q_{i1yk}$. The\nsame relates to expressions (39) and (40). Thus, the longitudinal electric\nfield perturbations must be taken into account. However, in the ideal MHD,\nthis field is absent. We see from expressions (38) and (39) that $\\mathbf{%\n\\nabla }\\cdot \\mathbf{v}_{i1}\\sim (g_{i}\/c_{si}^{2})v_{i1z}$. This relation\nis the same as that for the internal gravity waves in the Earth's atmosphere\n(e.g., Nekrasov 1994). Using expression (40), we obtain velocities $v_{i1yk}$\nand $v_{i1xk}$ from Equations (20) and (21), correspondingly.\n\n\\subsection{Perturbed ion number density and pressure}\n\nIt is followed from above that $\\mathbf{\\nabla }\\cdot \\mathbf{v}_{i1}\\neq 0$%\n. Let us find the perturbed ion number density and pressure in the\nFourier-representation. From Equations (14), (38) and (39), we obtain%\n\\begin{equation}\n\\frac{n_{i1k}}{n_{i0}}=-i\\frac{1}{k_{z}c_{si}^{2}}F_{i1zk}-i\\frac{k_{y}}{%\nk_{z}c_{si}^{2}\\omega \\omega _{ci}^{2}}\\left[ \\left( \\gamma -1\\right) g_{i}+%\n\\frac{\\partial c_{si}^{2}}{\\partial z}\\right] Q_{i1yk}.\n\\end{equation}\nEquation (15) gives $\\partial p_{i1}\/\\partial t=-m_{i}n_{i0}P_{i1}$. Thus,\nwe obtain, using Equation (40),%\n\\begin{equation}\n\\frac{p_{i1k}}{p_{i0}}=-i\\frac{\\gamma }{k_{z}c_{si}^{2}}F_{i1zk}+\\frac{%\n\\gamma k_{y}g_{i}}{k_{z}^{2}c_{si}^{4}\\omega \\omega _{ci}^{2}}\\left[ \\left(\n\\gamma -1\\right) g_{i}+\\frac{\\partial c_{si}^{2}}{\\partial z}-\\omega ^{2}%\n\\frac{c_{si}^{2}}{g_{i}}\\right] Q_{i1yk}.\n\\end{equation}\n\nComparing Equations (41) and (42), we see that the relative perturbation of\nthe pressure due to the transverse electric force $Q_{i1yk}$ is much smaller\nthan the relative perturbation of the number density. 
However, these\nperturbations as a result of the action of the longitudinal electric force $%\nF_{i1zk}$ have the same order of magnitude. Thus, $p_{i1k}\/p_{i0}$ $\\sim\nn_{i1k}\/n_{i0}$. This result contradicts a supposition $p_{i1k}\/p_{i0}$ $\\ll\nn_{i1k}\/n_{i0}$ adopted in the MHD analysis of buoyancy instabilities\n(Balbus 2000, 2001; Quataert 2008) because the latter does not take into\naccount the longitudinal electric field perturbations. From the results\ngiven below, it is followed that, as we have already noted above, the both\nterms on the right hand-side of Equation (41) have the same order of\nmagnitude.\n\n\\bigskip\\\n\n\\section{LINEAR\\ ELECTRON\\ PERTURBATIONS}\n\nEquations for the electrons in the linear approximation are the following:%\n\\begin{equation}\n\\mathbf{0=-}\\frac{\\mathbf{\\nabla }p_{e1}}{n_{e0}}+\\frac{\\mathbf{\\nabla }%\np_{e0}}{n_{e0}}\\frac{n_{e1}}{n_{e0}}+\\mathbf{F}_{e1}+\\frac{q_{e}}{c}\\mathbf{v%\n}_{e1}\\times \\mathbf{B}_{0},\n\\end{equation}%\n\\begin{equation}\n\\frac{\\partial n_{e1}}{\\partial t}+v_{e1z}\\frac{\\partial n_{e0}}{\\partial z}%\n+n_{e0}\\mathbf{\\nabla }\\cdot \\mathbf{v}_{e1}=0,\n\\end{equation}%\n\\begin{equation}\n\\frac{\\partial p_{e1}}{\\partial t}+v_{e1z}\\frac{\\partial p_{e0}}{\\partial z}%\n+\\gamma p_{e0}\\mathbf{\\nabla }\\cdot \\mathbf{v}_{e1}=-\\left( \\gamma -1\\right)\n\\mathbf{\\nabla \\cdot q}_{e1},\n\\end{equation}%\n\\begin{equation}\n\\frac{\\partial T_{e1}}{\\partial t}+v_{e1z}\\frac{\\partial T_{e0}}{\\partial z}%\n+\\left( \\gamma -1\\right) T_{e0}\\mathbf{\\nabla }\\cdot \\mathbf{v}_{e1}=-\\left(\n\\gamma -1\\right) \\frac{1}{n_{e0}}\\mathbf{\\nabla \\cdot q}_{e1},\n\\end{equation}%\n\\begin{equation}\n\\mathbf{q}_{e1}=-\\mathbf{b}_{1}\\chi _{e0}\\frac{\\partial T_{e0}}{\\partial z}-%\n\\mathbf{b}_{0}\\chi _{e0}\\frac{\\partial T_{e1}}{\\partial z}-\\mathbf{b}%\n_{0}\\chi _{e1}\\frac{\\partial T_{e0}}{\\partial z},\n\\end{equation}%\n\\begin{equation}\n\\mathbf{F}_{e1}=q_{e}\\mathbf{E}_{1}-m_{e}\\nu _{ei}\\left( \\mathbf{v}_{e1}-%\n\\mathbf{v}_{i1}\\right) .\n\\end{equation}\nHere, $\\chi _{e1}=5\\chi _{e0}T_{e1}\/2T_{e0}$ (and $\\chi _{e}\\sim T_{e}^{5\/2}$%\n, see Spitzer (1962)) is the perturbation of the thermal flux conductivity\ncoefficient. The perturbation of the unit magnetic vector $\\mathbf{b}_{1}$\nis equal to $b_{1x,y}=B_{1x,y}\/B_{0}$ and $b_{1z}=0$. The thermal flux in\nequilibrium is $\\mathbf{q}_{e0}=-\\mathbf{b}_{0}\\chi _{e0}\\frac{\\partial\nT_{e0}}{\\partial z}$.\n\nWe have seen above at consideration of the ion perturbations that the terms $%\n\\sim 1\/H^{2}$, where $H$ is the typical scale height, are needed to be kept\n(see the last term in Equation (40)). Therefore, these terms are kept also\nfor the electrons.\n\n\\subsection{Equation for the electron temperature perturbation}\n\nLet us find equation for the electron temperature perturbation. 
The\nexpression $\\mathbf{\\nabla \\cdot q}_{e1}$, where $\\mathbf{q}_{e1}$ is\ndefined by (47), is given by%\n\\begin{equation}\n\\mathbf{\\nabla \\cdot q}_{e1}=\\frac{\\partial q_{e1y}}{\\partial y}+\\frac{%\n\\partial q_{e1z}}{\\partial z}=-\\chi _{e0}\\frac{\\partial T_{e0}}{\\partial z}%\n\\frac{1}{B_{0}}\\frac{\\partial B_{1y}}{\\partial y}-\\chi _{e0}\\frac{\\partial\n^{2}T_{e1}}{\\partial z^{2}}-2\\frac{\\partial \\chi _{e0}}{\\partial z}\\frac{%\n\\partial T_{e1}}{\\partial z}-\\frac{\\partial ^{2}\\chi _{e0}}{\\partial z^{2}}%\nT_{e1}.\n\\end{equation}\nSubstituting this expression into Equation (46), we obtain%\n\\begin{equation}\nD_{1}T_{e1}=-v_{e1z}\\frac{\\partial T_{e0}}{\\partial z}-\\left( \\gamma\n-1\\right) T_{e0}\\mathbf{\\nabla }\\cdot \\mathbf{v}_{e1}+\\left( \\gamma\n-1\\right) \\frac{\\chi _{e0}}{n_{e0}}\\frac{\\partial T_{e0}}{\\partial z}\\frac{%\n\\partial B_{1y}}{B_{0}\\partial y},\n\\end{equation}\nwhere the operator $D_{1}$ is defined as%\n\\begin{equation}\nD_{1}=\\left[ \\frac{\\partial }{\\partial t}-\\left( \\gamma -1\\right) \\frac{1}{%\nn_{e0}}\\left( \\chi _{e0}\\frac{\\partial ^{2}}{\\partial z^{2}}+2\\frac{\\partial\n\\chi _{e0}}{\\partial z}\\frac{\\partial }{\\partial z}+\\frac{\\partial ^{2}\\chi\n_{e0}}{\\partial z^{2}}\\right) \\right] .\n\\end{equation}\n\n\\subsection{Perturbed velocity and temperature of electrons}\n\nWe find now equations for components of the perturbed velocity of electrons.\nThe $x$-component of Equation (43) has a simple form, i.e.\n\\begin{equation}\nv_{e1y}=-\\frac{1}{m_{e}\\omega _{ce}}F_{e1x},\n\\end{equation}\nwhere $\\omega _{ce}=q_{e}B_{0}\/m_{e}c$. Applying the operator $\\partial\n\/\\partial t$ to the $y$-component of Equation (43) and using Equations (45)\nand (49), we obtain%\n\\begin{eqnarray}\n\\frac{\\partial }{\\partial t}\\left( v_{e1x}-\\frac{1}{m_{e}\\omega _{ce}}%\nF_{e1y}\\right) &\\mathbf{=}&\\mathbf{-}\\frac{1}{\\omega _{ci}}\\frac{\\partial\nP_{e1}}{\\partial y}-\\left( \\gamma -1\\right) \\frac{\\chi _{e0}}{m_{e}\\omega\n_{ce}n_{e0}}\\frac{\\partial T_{e0}}{\\partial z}\\frac{\\partial ^{2}B_{1y}}{%\nB_{0}\\partial y^{2}} \\\\\n&&+\\frac{1}{m_{e}\\omega _{ce}}\\left( D_{1}-\\frac{\\partial }{\\partial t}%\n\\right) \\frac{\\partial T_{e1}}{\\partial y}, \\nonumber\n\\end{eqnarray}\nwhere\n\\begin{equation}\nP_{e1}=-g_{e}v_{e1z}+c_{se}^{2}\\mathbf{\\nabla }\\cdot \\mathbf{v}_{e1}\n\\end{equation}\nand $c_{se}^{2}=\\gamma p_{e0}\/$ $m_{i}n_{e0}$. The variable $P_{e1}$ is\nanalogous to $P_{i1}$ (see Eq. [22]), which defines the ion pressure\nperturbation. But for electrons, their pressure perturbation is also\naffected by the thermal conductivity (see Eq. 
[45]).\n\nLet us express $\\mathbf{\\nabla }\\cdot \\mathbf{v}_{e1}$ through $v_{e1z}$,\nusing Equation (52),%\n\\begin{equation}\n\\mathbf{\\nabla }\\cdot \\mathbf{v}_{e1}=\\frac{\\partial v_{e1z}}{\\partial z}-%\n\\frac{1}{m_{e}\\omega _{ce}}\\frac{\\partial F_{e1x}}{\\partial y}.\n\\end{equation}\nThe $z$-component of Equation (43) takes the form\n\\begin{equation}\n0\\mathbf{=-}\\frac{1}{n_{e0}}\\frac{\\partial p_{e1}}{\\partial z}+\\frac{1}{%\nn_{e0}}\\frac{\\partial p_{e0}}{\\partial z}\\frac{n_{e1}}{n_{e0}}+F_{e1z}.\n\\end{equation}\n\nWe consider further perturbations with the dynamic frequency $\\partial\n\/\\partial t$ satisfying the following conditions:%\n\\begin{equation}\n\\frac{\\chi _{e0}}{n_{e0}}\\frac{\\partial ^{2}}{\\partial z^{2}}\\gg \\frac{%\n\\partial }{\\partial t}\\gg \\frac{1}{n_{e0}}\\frac{\\partial \\chi _{e0}}{%\n\\partial z}\\frac{\\partial }{\\partial z}.\n\\end{equation}\n\nIn this case, the terms proportional to $\\partial \\chi _{e0}\/\\partial z$ in\nthe temperature equation (50) (see [51]) are unimportant because the\nnecessary small corrections proportional to $\\partial \/\\partial t$ in this\nequation will be larger than that $\\sim \\partial \\chi _{e0}\/\\partial z$.\nThus, an inhomogeneity of the thermal flux conductivity coefficient and its\nperturbation can be neglected. We further apply the operator $\\partial\n\/\\partial t$ to Equation (56) and use Equations (44), (45), (49), and (55).\nAs a result, we obtain%\n\\begin{eqnarray}\n\\left( c_{se}^{2}\\frac{\\partial }{\\partial z}-\\gamma g_{e}\\right) \\frac{%\n\\partial v_{e1z}}{\\partial z} &\\mathbf{=}&\\mathbf{-}\\frac{\\partial F_{e1z}}{%\nm_{i}\\partial t}+\\left[ \\left( 1-\\gamma \\right) g_{e}+c_{se}^{2}\\frac{%\n\\partial }{\\partial z}\\right] \\frac{1}{m_{e}\\omega _{ce}}\\frac{\\partial\nF_{e1x}}{\\partial y} \\\\\n&&+\\left( \\gamma -1\\right) \\frac{\\chi _{e0}}{m_{i}n_{e0}}\\left( \\frac{%\n\\partial T_{e0}}{\\partial z}\\frac{1}{B_{0}}\\frac{\\partial ^{2}B_{1y}}{%\n\\partial y\\partial z}+\\frac{\\partial ^{3}T_{e1}}{\\partial z^{3}}\\right) .\n\\nonumber\n\\end{eqnarray}\n\nEquation for the temperature perturbation under conditions (57) has the form\n\n\\begin{eqnarray}\n\\left[ \\left( \\gamma -1\\right) \\frac{\\chi _{e0}}{n_{e0}}\\frac{\\partial ^{2}}{%\n\\partial z^{2}}-\\frac{\\partial }{\\partial t}\\right] T_{e1} &=&v_{e1z}\\frac{%\n\\partial T_{e0}}{\\partial z}+\\left( \\gamma -1\\right) T_{e0}\\left( \\frac{%\n\\partial v_{e1z}}{\\partial z}-\\frac{1}{m_{e}\\omega _{ce}}\\frac{\\partial\nF_{e1x}}{\\partial y}\\right) \\\\\n&&-\\left( \\gamma -1\\right) \\frac{\\chi _{e0}}{n_{e0}}\\frac{\\partial T_{e0}}{%\n\\partial z}\\frac{\\partial B_{1y}}{B_{0}\\partial y}, \\nonumber\n\\end{eqnarray}\nwhere we have used Equation (55). 
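Conditions (57) admit a simple physical reading if one uses the order-of-magnitude estimate $\chi _{e0}/n_{e0}\sim v_{Te}\lambda _{e}$, where $v_{Te}$ is the electron thermal velocity and $\lambda _{e}$ the electron mean free path (Braginskii 1965); we quote this estimate only for orientation. The first inequality then states that field-aligned electron conduction is fast compared with the dynamical frequency,
\[
\omega \ll k_{z}^{2}\frac{\chi _{e0}}{n_{e0}}\sim k_{z}^{2}v_{Te}\lambda _{e},
\]
while the second one requires only that $\omega $ exceed the much smaller rate $\sim k_{z}v_{Te}\lambda _{e}/H$ (for $k_{z}H\gg 1$) associated with the slow variation of $\chi _{e0}$ on the equilibrium scale height $H$.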
Substituting $T_{e1}$ in Equation (58) and\ncarrying out some transformations, we find equation for the longitudinal\nvelocity $v_{e1z}$%\n\\begin{eqnarray}\n\\frac{\\partial ^{3}v_{e1z}}{\\partial z^{3}} &=&\\mathbf{-}\\frac{\\partial\n^{2}F_{e1z}}{T_{e0}\\partial z\\partial t}\\mathbf{-}\\frac{n_{e0}}{\\chi _{e0}}%\n\\left( \\frac{\\partial }{\\partial z}\\right) ^{-1}\\frac{\\partial ^{2}F_{e1z}}{%\nT_{e0}\\partial t^{2}}+\\frac{1}{m_{e}\\omega _{ce}}\\frac{\\partial ^{3}F_{e1x}}{%\n\\partial y\\partial z^{2}} \\\\\n&&+\\frac{1}{c_{se}^{2}}\\left( \\gamma g_{e}+\\frac{\\partial c_{se}^{2}}{%\n\\partial z}\\right) \\frac{1}{m_{e}\\omega _{ce}}\\frac{\\partial ^{2}F_{e1x}}{%\n\\partial y\\partial z}-\\frac{\\partial T_{e0}}{T_{e0}\\partial z}\\frac{1}{B_{0}}%\n\\frac{\\partial ^{2}B_{1y}}{\\partial y\\partial t}. \\nonumber\n\\end{eqnarray}\nThe correction proportional to $\\partial F_{e1x}\/\\partial t$ is absent. The\nlast term on the right hand-side of Equation (60) is connected with the\nbackground electron thermal flux (Quataert 2008).\n\nFrom Equations (59) and (60), we can find equation for the temperature\nperturbation\n\\begin{eqnarray}\n\\left( \\gamma -1\\right) \\frac{\\chi _{e0}}{n_{e0}}\\frac{\\partial }{\\partial z}%\n\\left( \\frac{\\partial ^{2}T_{e1}}{\\partial z^{2}}+\\frac{\\partial T_{e0}}{%\n\\partial z}\\frac{\\partial B_{1y}}{B_{0}\\partial y}\\right) &=&\\frac{\\gamma\nT_{e0}}{c_{se}^{2}}\\left[ \\left( \\gamma -1\\right) g_{e}+\\frac{\\partial\nc_{se}^{2}}{\\partial z}\\right] \\frac{1}{m_{e}\\omega _{ce}}\\frac{\\partial\nF_{e1x}}{\\partial y} \\\\\n&&-\\left( \\gamma -1\\right) \\frac{\\partial F_{e1z}}{\\partial t}-\\gamma \\frac{%\nn_{e0}}{\\chi _{e0}}\\left( \\frac{\\partial }{\\partial z}\\right) ^{-2}\\frac{%\n\\partial ^{2}F_{e1z}}{\\partial t^{2}} \\nonumber \\\\\n&&-\\gamma \\frac{\\partial T_{e0}}{\\partial z}\\left( \\frac{\\partial }{\\partial\nz}\\right) ^{-1}\\frac{\\partial ^{2}B_{1y}}{B_{0}\\partial y\\partial t}.\n\\nonumber\n\\end{eqnarray}\nIt is followed from results obtained below that all terms on the right-hand\nside of Equation (61) (except the correction $\\sim \\partial\n^{2}F_{e1z}\/\\partial t^{2}$) have the same order of magnitude (see Section\n[4.3]). The left-hand side of this equation is larger (see conditions [57]).\nThus, the temperature perturbation in the zero order of magnitude can be\nfound by equaling the left part of Equation (61) to zero. However, the right\nside is necessary for finding the transverse velocity perturbation $v_{e1x}$%\n\nTo find the velocity $v_{e1x}$, we need to calculate the value $P_{e1}$ (see\nEqs. [53] and [54]). 
Performing calculations in the same way as that for\nions (see Section 4.3), we obtain%\n\\begin{eqnarray}\nc_{se}^{2}\\frac{\\partial ^{2}P_{e1}}{\\partial z^{2}} &=&\\left[ c_{se}^{2}%\n\\frac{\\partial }{\\partial z}+\\left( \\gamma -1\\right) g_{e}+\\frac{\\partial\nc_{se}^{2}}{\\partial z}\\right] \\left( \\mathbf{-}\\frac{\\partial F_{e1z}}{%\nm_{i}\\partial t}+\\frac{\\partial V_{e1}}{\\partial z}\\right) \\\\\n&&+g_{e}\\left[ \\left( \\gamma -1\\right) g_{e}+\\frac{\\partial c_{se}^{2}}{%\n\\partial z}\\right] \\frac{1}{m_{e}\\omega _{ce}}\\frac{\\partial F_{e1x}}{%\n\\partial y}, \\nonumber\n\\end{eqnarray}\nwhere we have introduced the notation connected with the thermal flux,%\n\\begin{equation}\nV_{e1}=\\left( \\gamma -1\\right) \\frac{\\chi _{e0}}{m_{i}n_{e0}}\\left( \\frac{%\n\\partial T_{e0}}{\\partial z}\\frac{1}{B_{0}}\\frac{\\partial B_{1y}}{\\partial y}%\n+\\frac{\\partial ^{2}T_{e1}}{\\partial z^{2}}\\right) .\n\\end{equation}\n\nEquation (62) can be re-written in the form, which is convenient for finding\nthe velocity $v_{e1x}$. Using Equation (61), we obtain%\n\\begin{eqnarray}\n\\frac{\\partial ^{2}}{\\partial z^{2}}\\left( P_{e1}-V_{e1}\\right) &=&-\\frac{%\n\\partial ^{2}F_{e1z}}{m_{i}\\partial z\\partial t}-\\frac{\\gamma }{c_{se}^{2}}%\n\\left[ \\left( \\gamma -1\\right) g_{e}+\\frac{\\partial c_{se}^{2}}{\\partial z}%\n\\right] \\frac{\\partial F_{e1z}}{m_{i}\\partial t} \\\\\n&&+\\frac{1}{c_{se}^{2}}\\left[ \\left( \\gamma -1\\right) g_{e}+\\frac{\\partial\nc_{se}^{2}}{\\partial z}\\right] \\left( \\gamma g_{e}+\\frac{\\partial c_{se}^{2}%\n}{\\partial z}\\right) \\frac{1}{m_{e}\\omega _{ce}}\\frac{\\partial F_{e1x}}{%\n\\partial y} \\nonumber \\\\\n&&-\\left[ \\left( \\gamma -1\\right) g_{e}+\\frac{\\partial c_{se}^{2}}{\\partial z%\n}\\right] \\frac{\\partial T_{e0}}{T_{e0}\\partial z}\\left( \\frac{\\partial }{%\n\\partial z}\\right) ^{-1}\\frac{\\partial ^{2}B_{1y}}{B_{0}\\partial y\\partial t}%\n. \\nonumber\n\\end{eqnarray}\n\nIt is easy to see that Equation (53) has the form%\n\\begin{equation}\n\\frac{\\partial }{\\partial t}\\left( v_{e1x}-\\frac{1}{m_{e}\\omega _{ce}}%\nF_{e1y}\\right) \\mathbf{=-}\\frac{1}{\\omega _{ci}}\\frac{\\partial }{\\partial y}%\n\\left( P_{e1}-V_{e1}\\right) .\n\\end{equation}\n\nThus, the main contribution of the flux described by Equation (63) does not\ninfluence on the electron dynamics. 
Applying to Equation (65) the operator $%\n\\partial ^{2}\/\\partial z^{2}$ and using Equation (64), we find an equation\nfor the velocity $v_{e1x}$.\n\n\\bigskip\n\n\\section{FOURIER CURRENT COMPONENTS}\n\n\\subsection{Fourier velocity components of ions and electrons}\n\nLet us give velocities of ions and electrons in the Fourier-representation.\nFrom Equations (20), (21), and (40), we have%\n\\begin{equation}\nv_{i1xk}\\mathbf{=}\\frac{1}{\\omega _{ci}^{2}}\\left( 1+\\frac{\\omega ^{2}}{%\n\\omega _{ci}^{2}}\\right) Q_{i1xk}+i\\frac{k_{y}^{2}}{k_{z}^{2}}\\frac{\\left(\n\\omega ^{2}-g_{i}a_{i}\\right) }{\\omega \\omega _{ci}^{3}}Q_{i1yk}\\mathbf{-}%\n\\frac{1}{\\omega _{ci}}\\frac{k_{y}}{k_{z}}\\left( 1-i\\frac{a_{i}}{k_{z}}%\n\\right) F_{i1zk},\n\\end{equation}%\n\\begin{equation}\nv_{i1yk}\\mathbf{=}\\frac{1}{\\omega _{ci}^{2}}\\left[ 1+\\frac{\\left(\nk^{2}\\omega ^{2}-k_{y}^{2}g_{i}a_{i}\\right) }{k_{z}^{2}\\omega _{ci}^{2}}%\n\\right] Q_{i1yk}+i\\frac{\\omega }{\\omega _{ci}^{2}}\\frac{k_{y}}{k_{z}}\\left(\n1-i\\frac{a_{i}}{k_{z}}\\right) F_{i1zk}.\n\\end{equation}\nHere and below, we have introduced notations%\n\\begin{equation}\na_{i,e}=\\frac{1}{c_{si,e}^{2}}\\left[ \\left( \\gamma -1\\right) g_{i,e}+\\frac{%\n\\partial c_{si,e}^{2}}{\\partial z}\\right] .\n\\end{equation}\nThe velocity $v_{i1zk}$ is given by Equation (38).\n\nFrom Equations (64) and (65), we find%\n\\begin{eqnarray}\nv_{e1xk} &\\mathbf{=}&\\mathbf{-}i\\frac{a_{e}c_{se}^{2}}{\\omega \\omega _{ci}}%\n\\frac{k_{y}^{2}}{k_{z}^{2}}\\left( b_{e}\\frac{1}{m_{e}\\omega _{ce}}%\nF_{e1xk}+\\omega \\frac{\\partial T_{e0}}{k_{z}T_{e0}\\partial z}\\frac{B_{1yk}}{%\nB_{0}}\\right) \\\\\n&&+\\frac{1}{m_{e}\\omega _{ce}}F_{e1yk}\\mathbf{-}\\frac{k_{y}}{k_{z}}\\left(\n1-i\\gamma \\frac{a_{e}}{k_{z}}\\right) \\frac{1}{m_{e}\\omega _{ce}}F_{e1zk},\n\\nonumber\n\\end{eqnarray}\nwhere the following notation is introduced:\n\\begin{equation}\nb_{e}=\\frac{1}{c_{se}^{2}}\\left( \\gamma g_{e}+\\frac{\\partial c_{se}^{2}}{%\n\\partial z}\\right) .\n\\end{equation}\nEquation (60) also gives us\n\\begin{eqnarray}\nv_{e1zk} &=&\\frac{k_{y}}{k_{z}}\\frac{1}{m_{e}\\omega _{ce}}F_{e1xk}-i\\frac{%\nk_{y}}{k_{z}^{2}}\\left( b_{e}\\frac{1}{m_{e}\\omega _{ce}}F_{e1xk}+\\omega\n\\frac{\\partial T_{e0}}{k_{z}T_{e0}\\partial z}\\frac{B_{1yk}}{B_{0}}\\right) \\\\\n&&-i\\frac{\\omega }{k_{z}^{2}T_{e0}}\\left( 1+i\\omega \\frac{n_{e0}}{\\chi\n_{e0}k_{z}^{2}}\\right) F_{e1zk}. \\nonumber\n\\end{eqnarray}\nThe velocity $v_{e1y}$ is defined by Equation (52).\n\n\\subsection{Fourier electron velocity components at the absence of heat flux}\n\nTo elucidate the role of the electron thermal flux, we also consider the\ndispersion relation when the flux is absent. 
Therefore, we give here the\ncorresponding electron velocity components:\n\\begin{equation}\nv_{e1xk}=-i\\frac{k_{y}^{2}g_{e}a_{e}}{k_{z}^{2}\\omega \\omega _{ci}}\\frac{1}{%\nm_{e}\\omega _{ce}}F_{e1xk}+\\frac{1}{m_{e}\\omega _{ce}}F_{e1yk}-\\frac{k_{y}}{%\nk_{z}}\\left( 1-i\\frac{a_{e}}{k_{z}}\\right) \\frac{1}{m_{e}\\omega _{ce}}%\nF_{e1zk},\n\\end{equation}%\n\\begin{equation}\nv_{e1zk}\\mathbf{=}\\frac{k_{y}}{k_{z}}\\left( 1-i\\frac{g_{e}}{k_{z}c_{se}^{2}}%\n\\right) \\frac{1}{m_{e}\\omega _{ce}}F_{e1xk}\\mathbf{-}i\\frac{\\omega }{%\nk_{z}^{2}c_{se}^{2}m_{i}}\\left( 1-i\\frac{\\gamma g_{e}}{k_{z}c_{se}^{2}}%\n\\right) F_{e1zk}.\n\\end{equation}\nComparing expressions (69) and (71) with these equations, we see that the\nthermal flux under conditions (57) essentially modifies the small terms in\nthe electron velocity.\n\n\\subsection{Fourier components of the current}\n\nWe find now the Fourier components of the linear current $\\mathbf{j}%\n_{1}=q_{i}n_{i0}\\mathbf{v}_{i1}+q_{e}n_{e0}\\mathbf{v}_{e1}$. It is\nconvenient to consider the value $4\\pi i\\mathbf{j}_{1}\/\\omega $. Using\nexpressions (38), (52), and (66)-(71), we obtain the following current\ncomponents:%\n\\begin{eqnarray}\n\\frac{4\\pi i}{\\omega }j_{1xk} &=&a_{xx}E_{1xk}+ia_{xy}E_{1yk}-a_{xz}E_{1zk}\n\\\\\n&&-b_{xx}\\left( v_{i1xk}-v_{e1xk}\\right) -ib_{xy}\\left(\nv_{i1yk}-v_{e1yk}\\right) +b_{xz}\\left( v_{i1zk}-v_{e1zk}\\right) , \\nonumber\n\\end{eqnarray}%\n\\ \\\n\\begin{eqnarray}\n\\frac{4\\pi i}{\\omega }j_{1yk} &=&-ia_{yx}E_{1xk}+a_{yy}E_{1yk}-a_{yz}E_{1zk}\n\\\\\n&&+ib_{yx}\\left( v_{i1xk}-v_{e1xk}\\right) -b_{yy}\\left(\nv_{i1yk}-v_{e1yk}\\right) +b_{yz}\\left( v_{i1zk}-v_{e1zk}\\right) , \\nonumber\n\\end{eqnarray}%\n\\begin{eqnarray}\n\\frac{4\\pi i}{\\omega }j_{1zk} &=&-a_{zx}E_{1xk}-a_{zy}E_{1yk}+a_{zz}E_{1z} \\\\\n&&+b_{zx}\\left( v_{i1x}-v_{e1x}\\right) +b_{zy}\\left( v_{i1y}-v_{e1y}\\right)\n-b_{zz}\\left( v_{i1z}-v_{e1z}\\right) . \\nonumber\n\\end{eqnarray}\n\nWhen obtaining expressions (74)-(76), we have used notations (16), (23),\n(24), and (48) and equalities $q_{e}=-q_{i}$, $n_{e0}=n_{i0}$, $m_{e}\\nu\n_{ei}=m_{i}\\nu _{ie}$. We have also substituted $B_{1yk}$ by $(k_{z}c\/\\omega\n)E_{1xk}$ (see below). 
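The substitution of $B_{1yk}$ follows directly from Faraday's law (9) in the Fourier representation; we write it out here to make the step explicit. With $\mathbf{k}=\left( 0,k_{y},k_{z}\right) $,
\[
\mathbf{B}_{1k}=\frac{c}{\omega }\mathbf{k\times E}_{1k},\qquad B_{1yk}=\frac{k_{z}c}{\omega }E_{1xk},
\]
so the heat-flux terms introduce no quantity beyond the electric field components.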
The following notations are introduced above:%\n\\begin{eqnarray}\na_{xx} &=&\\frac{\\omega _{pi}^{2}}{\\omega _{ci}^{2}}\\frac{k^{2}}{k_{z}^{2}}%\n\\left( 1-\\frac{k_{y}^{2}}{k^{2}}\\frac{g_{i}a_{i}+a_{e}b_{e}c_{se}^{2}}{%\n\\omega ^{2}}-\\frac{k_{y}^{2}}{k^{2}}\\frac{a_{e}c_{se}^{2}}{\\omega ^{2}}\\frac{%\n\\partial T_{e0}^{\\ast }}{T_{e0}\\partial z}\\right) , \\\\\na_{xy} &=&a_{yx}=\\frac{\\omega _{pi}^{2}\\omega }{\\omega _{ci}^{3}}\\frac{k^{2}%\n}{k_{z}^{2}}\\left( 1-\\frac{k_{y}^{2}}{k^{2}}\\frac{g_{i}a_{i}}{\\omega ^{2}}%\n\\right) ,a_{xz}=\\frac{\\omega _{pi}^{2}}{\\omega \\omega _{ci}}\\frac{k_{y}}{%\nk_{z}^{2}}\\left( a_{i}-\\gamma a_{e}\\right) , \\nonumber \\\\\na_{yy} &=&\\frac{\\omega _{pi}^{2}}{\\omega _{ci}^{2}},a_{yz}=a_{zy}=\\frac{%\n\\omega _{pi}^{2}}{\\omega _{ci}^{2}}\\frac{k_{y}}{k_{z}},a_{zx}=\\frac{\\omega\n_{pi}^{2}}{\\omega \\omega _{ci}}\\frac{k_{y}}{k_{z}^{2}}\\left( b_{e}-\\frac{%\ng_{i}}{c_{si}^{2}}+\\frac{\\partial T_{e0}^{\\ast }}{T_{e0}\\partial z}\\right) ,\n\\nonumber \\\\\na_{zz} &=&\\frac{\\omega _{pi}^{2}}{k_{z}^{2}}\\left( \\frac{\\gamma }{c_{se}^{2}}%\n+\\frac{1}{c_{si}^{2}}\\right) \\nonumber\n\\end{eqnarray}\nand%\n\\begin{eqnarray}\nb_{xx} &=&\\frac{\\omega _{pi}^{2}\\nu _{ie}}{\\omega _{ci}^{2}}\\frac{m_{i}}{%\nq_{i}}\\frac{k^{2}}{k_{z}^{2}}\\left( 1-\\frac{k_{y}^{2}}{k^{2}}\\frac{%\ng_{i}a_{i}+a_{e}c_{se}^{2}b_{e}}{\\omega ^{2}}\\right) , \\\\\nb_{zx} &=&\\frac{\\omega _{pi}^{2}}{\\omega \\omega _{ci}}\\frac{k_{y}}{k_{z}^{2}}%\n\\left( b_{e}-\\frac{g_{i}}{c_{si}^{2}}\\right) \\frac{m_{i}}{q_{i}}\\nu _{ie},\n\\nonumber \\\\\nb_{ij} &=&a_{ij}\\frac{m_{i}}{q_{i}}\\nu _{ie}. \\nonumber\n\\end{eqnarray}\nHere $\\omega _{pi}=\\left( 4\\pi n_{i0}q_{i}^{2}\/m_{i}\\right) ^{1\/2}$ is the\nplasma frequency and $k^{2}=k_{y}^{2}+k_{z}^{2}$. The terms proportional to $%\nT_{e0}^{\\ast }$ are connected with the background electron thermal flux.\n\nCalculations show that to obtain expressions for $a_{ij}$ without thermal\nflux, using electron velocities (72) and (73), we must change $b_{e}$ by $%\ng_{e}\/c_{se}^{2}$, put $T_{e0}^{\\ast }=0$, and take $\\gamma =1$ in terms $%\na_{xz}$ and $a_{zz}$.\n\n\\subsection{Simplification of collision contribution}\n\nFrom the formal point of view, an assumption that electrons are magnetized\nhas only been involved in neglecting the transverse electron thermal flux.\nIn other respects, a relationship between $\\omega _{ce}$ and $\\nu _{ei}$ or $%\n\\omega _{ci}$ and $\\nu _{ie}$ (that is the same) can be arbitrary in\nEquations (74)-(76). We further proceed by assuming that $\\omega \\ll \\omega\n_{ci}$. In this case, we can neglect the collisional terms proportional to $%\nb_{xy}$ and $b_{yx}$. However, the system of Equations (74)-(76) stays\nsufficiently complex to find $\\mathbf{j}_{1}$ through $\\mathbf{E}_{1}$.\nTherefore, we further consider the specific case in which the frequency $%\n\\omega $ and wave numbers satisfy the following conditions:%\n\\begin{equation}\n\\frac{\\omega _{ci}^{2}}{\\nu _{ie}^{2}}\\frac{k_{z}^{2}}{k^{2}}\\gg \\frac{%\n\\omega }{\\nu _{ie}}\\gg \\frac{1}{k_{z}^{2}H^{2}}\\frac{k_{y}^{2}c_{s}^{2}}{%\n\\omega _{ci}^{2}},\n\\end{equation}\nwhere%\n\\begin{equation}\nc_{s}^{2}=\\frac{c_{si}^{2}c_{se}^{2}}{\\gamma c_{si}^{2}+c_{se}^{2}}.\n\\end{equation}\n\nIt is clear that conditions (79) can easily be realized. 
In this case, the\ncurrent components are equal to%\n\\begin{eqnarray}\n\\frac{4\\pi i}{\\omega }j_{1xk} &=&\\varepsilon _{xx}E_{1xk}+i\\varepsilon\n_{xy}E_{1yk}-\\varepsilon _{xz}E_{1zk}, \\\\\n\\frac{4\\pi i}{\\omega }j_{1yk} &=&-i\\varepsilon _{yx}E_{1xk}+\\varepsilon\n_{yy}E_{1yk}-\\varepsilon _{yz}E_{1zk}, \\nonumber \\\\\n\\frac{4\\pi i}{\\omega }j_{1zk} &=&-\\varepsilon _{zx}E_{1xk}-\\varepsilon\n_{zy}E_{1yk}+\\varepsilon _{zz}E_{1z}. \\nonumber\n\\end{eqnarray}\n\nComponents of the dielectric permeability tensor $\\varepsilon _{ij}$ are the\nfollowing:\n\\begin{eqnarray}\n\\varepsilon _{xx} &=&a_{xx}+i\\frac{\\nu _{ie}}{\\omega _{ci}}\\frac{k_{y}}{%\nk_{z}^{2}}\\frac{\\left( a_{i}-\\gamma a_{e}\\right) }{\\left( 1-id_{z}\\right) }%\na_{zx},\\varepsilon _{xy}=a_{xy}+\\frac{\\nu _{ie}}{\\omega _{ci}}\\frac{k_{y}}{%\nk_{z}^{2}}\\frac{\\left( a_{i}-\\gamma a_{e}\\right) }{\\left( 1-id_{z}\\right) }%\na_{zy}, \\\\\n\\varepsilon _{xz} &=&\\frac{a_{xz}}{\\left( 1-id_{z}\\right) },\\varepsilon\n_{yx}=a_{yx}-\\frac{\\omega \\nu _{ie}}{\\omega _{ci}^{2}}\\frac{k_{y}}{k_{z}}%\n\\frac{a_{zx}}{\\left( 1-id_{z}\\right) },\\varepsilon _{yy}=a_{yy}, \\nonumber\n\\\\\n\\varepsilon _{yz} &=&\\frac{a_{yz}}{\\left( 1-id_{z}\\right) },\\varepsilon\n_{zx}=\\frac{a_{zx}}{\\left( 1-id_{z}\\right) },\\varepsilon _{zy}=\\frac{a_{zy}}{%\n\\left( 1-id_{z}\\right) },\\varepsilon _{zz}=\\frac{a_{zz}}{\\left(\n1-id_{z}\\right) }, \\nonumber\n\\end{eqnarray}\nwhere we have used notations (78)\n\\begin{equation}\nd_{z}=\\frac{\\omega \\nu _{ie}}{k_{z}^{2}c_{s}^{2}}.\n\\end{equation}\nParameter $d_{z}$ defines the collisionless, $d_{z}\\ll 1$, and collisional, $%\nd_{z}\\gg 1$ regimes. Below, we derive the dispersion relation.\n\n\\bigskip\n\n\\section{DISPERSION\\ RELATION}\n\nFrom Equations (9) and (10) in the Fourier-representation and using system\nof equations (81), we obtain the following equations for the electric field\ncomponents:\n\\begin{eqnarray}\n\\left( n^{2}-\\varepsilon _{xx}\\right) E_{1xk}-i\\varepsilon\n_{xy}E_{1yk}+\\varepsilon _{xz}E_{1zk} &=&0, \\\\\ni\\varepsilon _{yx}E_{1xk}+\\left( n_{z}^{2}-\\varepsilon _{yy}\\right)\nE_{1yk}+\\left( -n_{y}n_{z}+\\varepsilon _{yz}\\right) E_{1zk} &=&0, \\nonumber\n\\\\\n\\varepsilon _{zx}E_{1xk}+\\left( -n_{y}n_{z}+\\varepsilon _{zy}\\right)\nE_{1yk}+\\left( n_{y}^{2}-\\varepsilon _{zz}\\right) E_{1zk} &=&0, \\nonumber\n\\end{eqnarray}\nwhere $\\mathbf{n=k}c\/\\omega $. The dispersion relation can be found by\nsetting the determinant of the system (84) equal to zero. In our case, the\nterms proportional to $\\varepsilon _{xy}$ and $\\varepsilon _{yx}$ can be\nneglected. As a result, we have%\n\\[\n\\left( n^{2}-\\varepsilon _{xx}\\right) \\left[ n_{y}^{2}\\varepsilon\n_{yy}+\\left( n_{z}^{2}-\\varepsilon _{yy}\\right) \\varepsilon\n_{zz}-n_{y}n_{z}\\left( \\varepsilon _{yz}+\\varepsilon _{zy}\\right)\n+\\varepsilon _{yz}\\varepsilon _{zy}\\right]\n\\]\n\\begin{equation}\n+\\left( n_{z}^{2}-\\varepsilon _{yy}\\right) \\varepsilon _{xz}\\varepsilon\n_{zx}=0.\n\\end{equation}\n\nThe above dispersion relation can be studied for different cases. In\nsubsequent sections, we consider both the collisionless and collisional\ncases.\n\n\\subsection{ Collisionless case}\n\nWe assume now that the condition%\n\\begin{equation}\n\\frac{\\omega \\nu _{ie}}{k_{z}^{2}c_{s}^{2}} \\ll 1,\n\\end{equation}\nis satisfied. 
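Before proceeding, we note (purely as an illustrative rewriting) that condition (86) is the collisionless limit $d_{z}\ll 1$ of the parameter (83) and can be factorized as
\[
d_{z}=\frac{\omega \nu _{ie}}{k_{z}^{2}c_{s}^{2}}=\frac{\omega }{k_{z}c_{s}}\frac{\nu _{ie}}{k_{z}c_{s}},
\]
so that it certainly holds when both the wave frequency and the ion-electron collision frequency are small compared with the field-aligned sound rate $k_{z}c_{s}$.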
Then, using notations (77) and (82), the dispersion relation\n(85) becomes%\n\\begin{equation}\n\\left( \\omega ^{2}-k_{z}^{2}c_{A}^{2}\\right) \\left( \\omega\n^{2}-k_{z}^{2}c_{A}^{2}-\\Omega ^{2}\\frac{k_{y}^{2}}{k^{2}}\\right) =0,\n\\end{equation}\nwhere $c_{A}=B_{0}\/(4\\pi m_{i}n_{i0})^{1\/2}$ is the Alfv\\'{e}n velocity and\n\\begin{equation}\n\\Omega ^{2}=g_{i}a_{i}+c_{se}^{2}a_{e}b_{e}+c_{se}^{2}a_{e}\\frac{\\partial\nT_{e0}^{\\ast }}{T_{e0}\\partial z}+c_{s}^{2}\\left( a_{i}-\\gamma a_{e}\\right)\n\\left( b_{e}-\\frac{g_{i}}{c_{si}^{2}}+\\frac{\\partial T_{e0}^{\\ast }}{%\nT_{e0}\\partial z}\\right) .\n\\end{equation}\n\nFor obtaining Equation (87), we have used the condition $k_{y}^{2}c_{s}^{2}\/%\n\\omega _{ci}^{2} \\ll 1$. We see that there are two wave modes. The first\nwave mode, $\\omega ^{2}=k_{z}^{2}c_{A}^{2}$, is the Alfv\\'{e}n wave with a\npolarization of the electric field mainly along the $y$-axis (the wave\nvector $\\mathbf{k}$ is situated in the $y-z$ plane). This wave does not feel\nthe inhomogeneity of the medium. The second wave has a polarization of the\nmagnetosonic wave, i.e. its electric field is directed mainly along the $x$%\n-axis (see below). This wave is undergone by the action of the medium\ninhomogeneity effect. The corresponding dispersion relation is\n\\begin{equation}\n\\omega ^{2}=k_{z}^{2}c_{A}^{2}+\\Omega ^{2}\\frac{k_{y}^{2}}{k^{2}}.\n\\end{equation}\n\nThe expression (88) can be further simplified using equations (11), (12),\n(68), (70), and (80). As a result, we obtain%\n\\begin{equation}\n\\Omega ^{2}=\\frac{\\gamma }{\\gamma c_{si}^{2}+c_{se}^{2}}\\frac{1}{m_{i}^{2}}%\n\\left[ \\left( \\gamma -1\\right) m_{i}g+\\gamma \\frac{\\partial \\left(\nT_{i0}+T_{e0}\\right) }{\\partial z}\\right] \\left[ m_{i}g+\\frac{\\partial\n\\left( T_{e0}+T_{e0}^{\\ast }\\right) }{\\partial z}\\right] .\n\\end{equation}%\nWe have pointed out at the end of Section (6.3) what changes must be\ndone in expressions (77) and (78) to consider the case without heat flux.\nThis case follows from Equation (90), if we omit the term $\\partial \\left(\nT_{e0}+T_{e0}^{\\ast }\\right) \/\\partial z$ and put $\\gamma =1$\nin the first multiplier. Then $\\Omega ^{2}$ becomes \n\n\\begin{equation}\n\\Omega ^{2}=\\frac{g}{c_{si}^{2}+c_{se}^{2}}\\left[ \\left( \\gamma -1\\right) g+%\n\\frac{\\partial \\left( c_{si}^{2}+c_{se}^{2}\\right) }{\\partial z}\\right] .\n\\end{equation}\n\nThis is the Brunt-V\\\"{a}is\\\"{a}l\\\"{a} frequency. Comparing (90) and (91), we\nsee that the heat flux stabilizes the unstable stratification. The presence\nof the background heat flux does not play of principle role. If the\ntemperature decreases in the direction of gravity ($\\partial\nT_{i,e0}\/\\partial z>0$), a medium is stable. Solution (90) describes an\ninstability regime only when\n\\[\n\\frac{\\gamma -1}{2\\gamma }m_{i}g<-\\frac{\\partial T_{0}}{\\partial z}<\\frac{1}{%\n2}m_{i}g.\n\\]%\nwhere ($T_{i0}\\sim T_{e0}=T_{0}$). 
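As a numerical illustration (taking the window quoted above at face value and adopting the standard monatomic value $\gamma =5/3$), the bounds evaluate to
\[
0.2\,m_{i}g<-\frac{\partial T_{0}}{\partial z}<0.5\,m_{i}g,
\]
i.e. only temperature gradients within a factor of $2.5$ above the lower threshold are unstable, whereas the flux-free expression (91) with $T_{i0}\simeq T_{e0}=T_{0}$ gives the same lower threshold, $-\partial T_{0}/\partial z>\left[ \left( \gamma -1\right) /2\gamma \right] m_{i}g$, but no upper bound.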
We also note that $\\Omega ^{2}$ can be\nnegative if gradients of $T_{i0}$ and $T_{e0}$ have different signs.\n\nFor a comparison, we give here the corresponding dispersion relation\nby Quataert (2008)%\n\\[\n\\omega ^{2}\\simeq -g\\left( \\frac{d\\ln T_{0}}{dz}\\right) \\frac{k_{\\perp }^{2}%\n}{k^{2}},\n\\]\nwhich is discussed in Section 8.\n\n\\subsection{Collisional case}\n\nWe proceed with the collisional case when\n\\begin{equation}\n\\frac{\\omega \\nu _{ie}}{k_{z}^{2}c_{s}^{2}}\\gg 1.\n\\end{equation}\nIn this limiting case, we obtain again Equation (89).\n\n\\subsection{Polarization of perturbations}\n\nLet us neglect in the system of equations (84) the small contributions given\nby $\\varepsilon _{xy}$ and $\\varepsilon _{yx}$. Then, for example, in the\ncollisionless case, we obtain for the second wave $\\omega ^{2}\\neq\nk_{z}^{2}c_{A}^{2}$,%\n\\begin{eqnarray}\nE_{1yk} &=&\\frac{k_{y}}{k_{z}}E_{1zk}, \\\\\nE_{1zk} &=&\\frac{\\varepsilon _{zx}}{\\varepsilon _{zz}}E_{1xk}\\ll E_{1xk}.\n\\nonumber\n\\end{eqnarray}\n\nThus, the second wave has a polarization of the electric field mainly along\nthe $x$-axis. In spite of that the component $E_{1zk}\\ll E_{1xk}$, it is\nmultiplied by a large coefficient in the first equation of the system (84).\nAs a result, the contribution of this term is the same on the order of\nmagnitude as that of the first term.\n\nIn the collisional case, the component $E_{1zk}$ is also defined by Equation\n(93). However, its contribution to the first equation of the system (84) can\nbe neglected.\n\n\\bigskip\n\n\\section{DISCUSSION}\n\nDispersion relation (87) with $\\Omega ^{2}$ defined by Equations (88) or\n(90) considerably differs from that given in (Quataert 2008) for the case of\nour geometry. The reason goes back to the assumptions made in the MHD\nanalysis of buoyancy instabilities $p_{1}\/p_{0}\\ll \\rho _{1}\/\\rho _{0}$,\nwhere $p$ and $\\rho $ denote the pressure and mass density of fluid, and the\ncondition of incompressibility $\\mathbf{\\nabla \\cdot v}_{1}=0$, where $%\n\\mathbf{v}_{1}=\\mathbf{v}_{i1}$ is the perturbed fluid velocity. We now\nshortly show how one can obtain the result of Quataert (2008) in our\ngeometry, using these assumptions. We sum Equations (13) and (43) and use\nthe Ampere's law (10). The components of the equations become,\n\\begin{eqnarray}\n\\frac{\\partial v_{i1x}}{\\partial t} &\\mathbf{=}&\\frac{B_{0}}{4\\pi \\rho _{0}}%\n\\frac{\\partial B_{1x}}{\\partial z}, \\\\\n\\frac{\\partial v_{i1y}}{\\partial t} &\\mathbf{=}&\\mathbf{-}\\frac{\\partial\np_{1}}{\\rho _{0}\\partial y}-\\frac{B_{0}}{4\\pi \\rho _{0}}\\left( \\frac{%\n\\partial B_{1z}}{\\partial y}-\\frac{\\partial B_{1y}}{\\partial z}\\right) ,\n\\nonumber \\\\\n\\frac{\\partial v_{i1z}}{\\partial t} &\\mathbf{=}&\\mathbf{-}\\frac{\\partial\np_{1}}{\\rho _{0}\\partial y}-g\\frac{n_{e1}}{n_{0}}, \\nonumber\n\\end{eqnarray}\nwhere $n_{i0}=n_{e0}=n_{0}$, $\\rho _{0}=m_{i}n_{0}$, and we have used $%\nn_{i1}=n_{e1}$. The components of the ideal magnetic induction equation are\nthe following:%\n\\begin{eqnarray}\n\\frac{\\partial B_{1x}}{\\partial t} &\\mathbf{=}&B_{0}\\frac{\\partial v_{i1x}}{%\n\\partial z}\\mathbf{,} \\\\\n\\frac{\\partial B_{1y}}{\\partial t} &\\mathbf{=}&B_{0}\\frac{\\partial v_{i1y}}{%\n\\partial z}, \\nonumber \\\\\n\\frac{\\partial B_{1z}}{\\partial t} &\\mathbf{=}&-B_{0}\\frac{\\partial v_{i1y}}{%\n\\partial y}. 
\\nonumber\n\\end{eqnarray}\n\nWe note that the last equation (95) is a consequence of the second equation\n(95) and $\\mathbf{\\nabla \\cdot B}_{1}=0$.\n\nThe first equations of the systems of equations (94) and (95) describe the\nAlfv\\'{e}n waves, which are split from the other perturbations (see Eq.\n[87]). Applying operators $\\partial ^{3}\/\\partial z^{2}\\partial t$ and $%\n\\partial ^{3}\/\\partial y\\partial z\\partial t$ to the second and third\nequations of the systems of equations (94), correspondingly, using equation $%\n\\mathbf{\\nabla \\cdot v}_{i1}=0$ and the second equation (95), and\nsubtracting one equation from another, we obtain%\n\\begin{equation}\n\\left( \\frac{\\partial ^{2}}{\\partial y^{2}}+\\frac{\\partial ^{2}}{\\partial\nz^{2}}\\right) \\frac{\\partial ^{2}v_{i1y}}{\\partial t^{2}}\\mathbf{=}%\nc_{A}^{2}\\left( \\frac{\\partial ^{2}}{\\partial y^{2}}+\\frac{\\partial ^{2}}{%\n\\partial z^{2}}\\right) \\frac{\\partial ^{2}v_{i1y}}{\\partial z^{2}}+\\frac{g}{%\nn_{0}}\\frac{\\partial ^{3}n_{e1}}{\\partial y\\partial z\\partial t}.\n\\end{equation}\nAlso, from Equation (61), we have (see conditions [57]),\n\\begin{equation}\n\\frac{\\partial ^{2}T_{e1}}{\\partial z^{2}}=-\\frac{\\partial T_{e0}}{\\partial z%\n}\\frac{\\partial B_{1y}}{B_{0}\\partial y}.\n\\end{equation}\n\nTaking into account that $T_{e1}\/T_{e0}=-n_{e1}\/n_{e0}$, differentiating\nEquation (97) over $t$, using the second equation (95), and substituting the\nequation obtained in Equation (96), we find\n\\begin{equation}\n\\left( \\frac{\\partial ^{2}}{\\partial y^{2}}+\\frac{\\partial ^{2}}{\\partial\nz^{2}}\\right) \\frac{\\partial ^{2}v_{i1y}}{\\partial t^{2}}\\mathbf{=}%\nc_{A}^{2}\\left( \\frac{\\partial ^{2}}{\\partial y^{2}}+\\frac{\\partial ^{2}}{%\n\\partial z^{2}}\\right) \\frac{\\partial ^{2}v_{i1y}}{\\partial z^{2}}+g\\frac{%\n\\partial T_{e0}}{T_{e0}\\partial z}\\frac{\\partial ^{2}v_{i1y}}{\\partial y^{2}}%\n.\n\\end{equation}\nBy neglecting the contribution of the magnetic field, this equation\ncoincides in the Fourier-representation with Equation (22) in Quataert\n(2008).\n\nHowever, the presence of the longitudinal electric field perturbations $%\nE_{1z}$ results in that $p_{i,e1k}\/p_{i,e0}\\sim n_{i,e1k}\/n_{i,e0}$ (for\nions, see Section [4.4]). Besides, Equation (97) together with equation $%\n\\mathbf{\\nabla \\cdot v}_{i1}=0$ lead to the nonphysical equation\n\\[\n\\frac{\\partial T_{e1}}{\\partial t}=v_{e1z}\\frac{\\partial T_{e0}}{\\partial z}%\n.\n\\]\nThus, Equation (98) is incorrect.\n\nFrom the dispersion relation (85), we see the necessity of involving the\ncontribution of values $\\varepsilon _{xz}$, $\\varepsilon _{zx}$, and $%\n\\varepsilon _{zz}$ in the collisionless case (86) (values $%\n\\varepsilon _{xz}$ and $\\varepsilon _{zx}$ give the last term on the right\nhand-side of Eq. [88]). This means that contribution of currents $j_{1x}\\sim\nE_{1z}$ and $j_{1z}\\sim E_{1x},E_{1z}$ must be taken into account. However,\nthe role of the longitudinal electric field $E_{1z}$ in the MHD equations is\nnot clear. The same also relates to the collisional case (92). In\nthe current $j_{1xk}$, we must take into consideration the\ncontribution of the current $j_{1zk}$ as a result of collisions,\nwhich is proportional to $E_{1xk}$ (see Eqs. [74] and [76]).\n\nThus, the standard MHD equations with simplified assumptions are not\napplicable for the correct theory of buoyancy instabilities. 
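As an algebraic cross-check of the reduction above, the Fourier form of Equation (98) can be reproduced symbolically. The short sympy sketch below (the symbol names are ours) applies $\\partial /\\partial t\\rightarrow -i\\omega $, $\\partial /\\partial y\\rightarrow ik_{y}$, $\\partial /\\partial z\\rightarrow ik_{z}$ and solves for $\\omega ^{2}$; dropping the magnetic term then gives the relation quoted above from Quataert (2008).

\\begin{verbatim}
# Symbolic cross-check of the Fourier form of Eq. (98); symbol names are ours.
import sympy as sp

w2, ky, kz, cA, g, dlnT = sp.symbols('omega2 k_y k_z c_A g dlnT')

# Fourier substitution in Eq. (98), written directly for omega^2 (= omega2):
lhs = (-ky**2 - kz**2) * (-w2)
rhs = cA**2 * (-ky**2 - kz**2) * (-kz**2) + g * dlnT * (-ky**2)

sol = sp.solve(sp.Eq(lhs, rhs), w2)[0]
print(sp.simplify(sol))               # expect: c_A**2*k_z**2 - dlnT*g*k_y**2/(k_y**2 + k_z**2)
print(sp.simplify(sol.subs(cA, 0)))   # expect: -dlnT*g*k_y**2/(k_y**2 + k_z**2)
\\end{verbatim}

Here dlnT stands for $\\partial \\ln T_{e0}/\\partial z$. The check only confirms the Fourier image of Equation (98); it says nothing about the validity of the assumptions behind that equation, which is exactly what is criticized above.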
A correct theory of buoyancy instabilities can only be given by the\nmulticomponent approach used in this paper.\n\nThe results following from Equation (90) show that the thermal flux\nstabilizes the buoyancy instability. The instability is only possible in a\nnarrow range of the temperature gradient (see Section [7.1]). The presence\nof the background electron thermal flux (the term $\\sim T_{e0}^{\\ast }$)\ndoes not play an essential role. An instability is also possible if the\ntemperature gradients of the ions and electrons have opposite signs.\n\nThe contribution of collisions between electrons and ions depends on the\nparameter $d_{z}$ defined by Equation (83). In both limits (86) ($%\nd_{z}\\ll 1$) and (92) ($d_{z}\\gg 1$), the dispersion relation has the same\nform.\n\nWe would like to say a few words about the Schwarzschild criterion of the\nbuoyancy instability. It is generally accepted that this instability is\npossible if the entropy increases in the direction of gravity. From a\nformal point of view, this is correct if we take the Brunt-V\\\"{a}is\\\"{a}l\\\"{a}\nfrequency $N$ in the form (e.g., Balbus 2000),%\n\\[\nN^{2}=-\\frac{1}{\\gamma \\rho }\\frac{\\partial p}{\\partial z}\\frac{\\partial \\ln\np\\rho ^{-\\gamma }}{\\partial z}.\n\\]%\nHowever, this expression can easily be transformed into expression (32). Thus,\nwe see that for the instability to exist, the temperature must increase along\ngravity and exceed the threshold.\n\n\\bigskip\n\n\\section{CONCLUSION}\n\nIn this paper, we have investigated buoyancy instabilities in magnetized\nelectron-ion astrophysical plasmas with a background electron thermal\nflux, using the $\\mathbf{E}$-approach, in which the dynamical equations for the ions\nand the electrons are solved separately in terms of the electric field perturbations. We\nhave included the background electron heat flux and collisions between\nelectrons and ions. The important role of the longitudinal electric field\nperturbations, which are not captured by the MHD equations, has been shown.\nWe showed that the previous MHD result for the growth rate is incorrect in the\ngeometry considered in this paper, in which all background quantities are\ndirected along one axis. The reason has been shown to lie in the simplified\nassumptions made in the MHD analysis of the buoyancy instabilities.\n\nWe have assumed that the cyclotron frequencies of the species are much larger\nthan their collision frequencies, which is typical of ICM and galaxy clusters. The\ndispersion relation obtained shows that the anisotropic electron heat flux,\nincluding the background one, stabilizes the unstable stratification except\nin a narrow range of the temperature gradient. However, when the gradients of\nthe ion and electron temperatures have opposite signs, the medium becomes\nunstable.\n\nThe results obtained in this paper are applicable to magnetized, weakly\ncollisional, stratified objects and can be useful in the search for sources of\nturbulent transport of energy and matter. It has been suggested that the\nbuoyancy instability can act as a driving mechanism to generate turbulence\nin ICM and that this extra source of heating may help to resolve the cooling\nflow problem. However, all previous analytical or numerical studies are\nrestricted to the MHD approach. 
Our study shows that when the true\nmultifluid nature of the system with the electron heat flux is considered,\none cannot expect a buoyancy instability except for a very limited range of\nthe temperature gradient or when the temperature gradients of the electrons\nand ions have opposite signs, and both cases are rather unlikely. However,\nwhen the heat flux does not play a role, the system can be unstable\naccording to the Schwarzschild criterion. \n\nThe current linear analysis is for simplified initial conditions, in\nwhich the background magnetic field, temperature gradient, and gravity are\nalong the same direction. However, another configuration should also be\nexamined using the $E$-approach, in which the initial magnetic field is\nperpendicular to the direction of gravity. This will be done in a\nforthcoming paper.\n\n\\bigskip\n\n\\subsection{References}\n\nBalbus, S. A. 2000, ApJ, 534, 420\n\nBalbus, S. A. 2001, ApJ, 562, 909\n\nBalbus, S. A., \\& Hawley, J. F. 1991, ApJ, 376, 214\n\nBraginskii, S. I. 1965, Rev. Plasma Phys., 1, 205\n\nCarilli, C. L., \\& Taylor, G. B. 2002, ARA\\&A, 40, 319\n\nChandran, B. D., \\& Dennis, T. J. 2006, ApJ, 642, 140\n\nChang, P., \\& Quataert, E. 2009, arXiv:0909.3041 (submitted to MNRAS)\n\nFabian, A. C., Sanders, J. S., Taylor, G. B., Allen, S. W., Crawford, C. S.,\nJohnstone, R. M., \\& Iwasawa, K. 2006, MNRAS, 366, 417\n\nGossard, E. E., \\& Hooke, W. H. 1975, Waves in the Atmosphere (Amsterdam:\nElsevier Scientific Publishing Company)\n\nLyutikov, M. 2007, ApJL, 668, L1\n\nLyutikov, M. 2008, ApJL, 673, L115\n\nNarayan, R., Igumenshchev, I. V., \\& Abramowicz, M. A. 2000, ApJ, 539, 798\n\nNarayan, R., Quataert, E., Igumenshchev, I. V., \\& Abramowicz, M. A. 2002,\nApJ, 577, 295\n\nNekrasov, A. K. 1994, J. Atmos. Terr. Phys., 56, 931\n\nNekrasov, A. K. 2008, Phys. Plasmas, 15, 032907\n\nNekrasov, A. K. 2009a, Phys. Plasmas, 16, 032902\n\nNekrasov, A. K. 2009b, ApJ, 695, 46\n\nNekrasov, A. K. 2009c, ApJ, 704, 80\n\nParrish, I. J., \\& Quataert, E. 2008, ApJL, 677, L9\n\nParrish, I. J., Stone, J. M., \\& Lemaster, N. 2008, ApJ, 688, 905\n\nParrish, I. J., Quataert, E., \\& Sharma, P. 2009, ApJ, 703, 96\n\nPedlosky, J. 1982, Geophysical Fluid Dynamics (New York: Springer-Verlag)\n\nQuataert, E. 2008, ApJ, 673, 758\n\nRasera, Y., \\& Chandran, B. 2008, ApJ, 685, 105\n\nRen, H., Wu, Z., Cao, J., \\& Chu, P. K. 2009, Phys. Plasmas, 16, 102109\n\nSanders, J. S., Fabian, A. C., Frank, K. A., Peterson, J. R., \\& Russell,\nH. R. 2010, MNRAS, 402, 127\n\nSarazin, C. L. 1988, X-Ray Emission from Clusters of Galaxies (Cambridge:\nCambridge Univ. Press)\n\nSchwarzschild, M. 1958, Structure and Evolution of the Stars (New York:\nDover)\n\nSharma, P., Chandran, B. D. G., Quataert, E., \\& Parrish, I. J. 2009, ApJ,\n699, 348\n\nSpitzer, L., Jr. 1962, Physics of Fully Ionized Gases (2d ed.; New York:\nInterscience)\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\n\\label{Introduction}\n\\vspace*{-0.2cm}\nThe COMPASS experiment is a fixed target experiment at CERN. Its physics\nprogram is focused on the investigation of the internal structure of the \nnucleon using muon and hadron beams. The main experimental challenge is the\nneed to cope with high luminosity, resulting in high beam intensities and large\ntrigger rates. The setup of the COMPASS spectrometer is described in\n\\cite{Spectrometer}.\n\nCharged hadron identification is obtained using a large Ring Imaging\nCherenkov detector, the COMPASS RICH-1 \\cite{Rich-1}. The RICH detector uses\n$C_4F_{10}$ as radiator gas inside a $5\\times 6$~m$^2$ wide and $3$~meter deep\nvessel. The produced Cherenkov photons are reflected on a $20$~m$^2$ mirror\nwall onto two sets of photon detectors, an upper and a lower one. \n\nUntil 2004, the photon detectors used were $8$ multi-wire proportional chambers\n(MWPCs) with cesium iodide (CsI) photo-cathodes covering an active surface of\nabout $5.2$~m$^2$. Limitations to the RICH-1 performance at high rates\nwere related to the photon detector nature. Due to the presence of CsI\nphotocathodes, the MWPCs could not be operated at high gain, thus requiring a\nlong integration time of about $0.5$~$\\mu$s of the front-end electronics, a\nmodified version of the GASSIPLEX chip. \n\nThis limited the performance of the\nphoton detection in the central part of the detector in two ways: In the\nCOMPASS experimental environment a large flux of halo muons \naccounts for about 10\\% to 20\\% of the total beam flux. At high beam intensities\nof up to $10^8$ muons per second, the halo muons create a considerable\nbackground of Cherenkov photons. These photons create an uncorrelated\nbackground on the photon detectors, which reduces the particle identification\nefficiency and purity, especially for particles in the very forward direction.\nSince also the base-line restoration of the GASSIPLEX output takes about\n$3.5\\,\\mu$s, a large dead-time is created at high trigger rates. \n\n\\section{RICH-1 Upgrade} \n\\vspace*{-0.2cm} \nTherefore a new and fast photon detection system was developed and installed\nbetween Autumn 2004 and Spring 2006 in order to be able to distinguish by time\ninformation between\nphotons from physics events and background,\nand to be able to run at higher trigger rates of up to $100$~kHz. The upgrade of\nthe COMPASS RICH-1 is two-fold: In the central part of the photon detectors\n(1\/4 of the surface), the MWPCs have been replaced by $576$ Multi-Anode\nPhoto-Multipliers (MAPMTs) \\cite{Fulvio} with new fast readout electronics,\nwhich will be discussed in this contribution. In the outer part, the existing\nMWPCs have been equipped with a faster readout electronics based on the APV\npreamplifier with sampling ADCs \\cite{APV}.\n\n\\section{Fast Readout Electronics for the MAPMTs} \n\\vspace*{-0.2cm} The MAPMTs used for a fast\nphoton detection in the central part of the photon detectors are 16-channel\nmulti-anode photomultipliers H7600-03-M16 from Hamamatsu \\cite{Andy}. The\nreadout system \\cite{MAD} for the MAPMTs is based on the MAD4\npreamplifier-discriminator \\cite{MAD2} and the dead-time free F1-TDC\ncharacterized by a very good time resolution \\cite{F1}. The electronics system is\nmounted in a very compact setup as close as possible to the photomultipliers\n(Fig.~\\ref{fig1}). 
This minimizes the electrical noise and takes into account\nthe limited space in front of the RICH detector.\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{schill-fig1.eps}\n\\vspace*{-1.2cm}\n\\caption{Scheme of the read-out system.}\n\\label{fig1}\n\\end{figure}\n\n\n\n\\section{Analogue Front-end Electronics} \n\\vspace*{-0.2cm} The analogue front-end board amplifies\nthe signal from the photomultiplier, discriminates it and sends it as a\ndifferential signal to the digital board. Each front-end card is equipped with\n$2$ MAD4 chips with $4$ channels each. The MAD4 chip features a\ncharge-sensitive preamplifier with fixed gain ($3.35$~mV\/fC), a shaper and a\ndiscriminator with digitally adjustable threshold. To match the amplitude of\nthe MAPMT signal to the input stage of the MAD4 chip, a resistive voltage\ndivider attenuates the signal by a factor of $2.4$. The binning of the threshold\nsetting \nwas chosen to be $0.5$~fC\/digit. The measured noise level is $<7$~fC\n(Fig. \\ref{threshold}), while typical MAPMT signals have amplitudes between\n$100$ and $1000$~fC (Fig. \\ref{signal}) \\cite{Andy}. The signal peak at lower\namplitudes originates from photoelectrons which are\nmissing one amplification stage in the MAPMT. Since the signal fraction \nof this peak is significant, a threshold setting below this peak \nis essential for achieving high efficiency. A typical\nthreshold setting of about $40$~fC is chosen. For this threshold, the\nexcellent signal-to-noise ratio allows to obtain a very high efficiency,\npreserving a negligible level of cross-talk (Fig. \\ref{fig2}). \n\nThe MAD4 chip can operate at input rates up to $1$~MHz per\nchannel. An upgraded version of the MAD4 chip is under development, \ncapable of input rates up to $5$~MHz, the CMAD.\n\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{schill-fig2.eps}\n\\vspace*{-0.8cm}\n\\caption{Noise rate as a function of the threshold setting for one \nquadrant of the detector with $144$~MAPMTs.}\n\\label{threshold}\n\\end{figure}\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{schill-fig3.eps}\n\\vspace*{-0.8cm}\n\\caption{Amplitude spectrum of the MAPMT for single photons at\n$900$~V operating voltage, measured with an ADC.}\n\\label{signal}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[clip, bb=0 0 800 600, width=0.5\\textwidth]{schill-fig4.eps}\n\\vspace*{-0.8cm}\n\\caption{Cross talk level as a function of the threshold setting, obtained by\nilluminating in single photon mode a single pixel (black dot) with a focused \nlaser beam.}\n\\label{fig2}\n\\end{figure}\n\n\\section{Digital Front-end Electronics} \n\\vspace*{-0.2cm} The digital part of the new RICH-1\nfrontend electronics consists of the DREISAM front-end board, which is equipped\nwith eight F1-TDC chips to read out four MAPMTs (Fig. \\ref{Dreisam}). The\nboard was designed in a very compact way. The data are\ndigitized on the DREISAM board and are sent out via optical links to the\nHOT-CMC board, a small mezzanine card on the CATCH, the common readout-driver\nboard of the COMPASS experiment. The F1 chips on the digital boards have a\ndigitization bin-width of $108.3$~ps\nand can operate at input data rates of up to $10$~MHz per channel. The readout of\nthe data can be performed at trigger rates up to $100$~kHz. 
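To make the front-end numbers above concrete, the short Python sketch below converts a MAPMT charge into the amplitude seen by the MAD4 and expresses a threshold in DAC counts. The gain, attenuation factor, and threshold binning are the values quoted above; where exactly the 1:2.4 attenuation enters the threshold bookkeeping is an assumption of this example, not a statement about the actual hardware.

\\begin{verbatim}
# Illustrative front-end arithmetic; the point at which the 1:2.4 attenuation
# is applied is an assumption of this sketch, not of the actual system.
GAIN_MV_PER_FC   = 3.35   # MAD4 preamplifier gain (mV/fC), from the text
ATTENUATION      = 2.4    # resistive divider in front of the MAD4 input
THRESHOLD_BIN_FC = 0.5    # threshold DAC binning (fC/digit), from the text

def dac_counts(threshold_fc):
    """Convert a threshold expressed in fC at the MAD4 input into DAC digits."""
    return round(threshold_fc / THRESHOLD_BIN_FC)

threshold_at_input = 40.0 / ATTENUATION   # a 40 fC working point at the MAPMT
print("DAC setting:", dac_counts(threshold_at_input))

for q in (7.0, 100.0, 1000.0):            # noise level and typical signal range (fC)
    amplitude = GAIN_MV_PER_FC * q / ATTENUATION
    fires = q / ATTENUATION > threshold_at_input
    print(f"MAPMT charge {q:6.1f} fC -> {amplitude:7.1f} mV, fires: {fires}")
\\end{verbatim}

With these assumptions the noise level stays safely below the working point while single-photon signals fire the discriminator, which is the behavior described above.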
\n\\begin{figure}[b]\n\\includegraphics[width=0.5\\textwidth]{schill-fig5.eps}\n\\vspace*{-0.6cm}\n\\caption{Front side of the DREISAM card with 8 F1-TDC chips.}\n\\label{Dreisam}\n\\end{figure}\n\n\n\n\\section{System Performances}\n\\vspace*{-0.2cm}\nThe time resolution of the complete system consisting of MAPMT, MAD4 board and\nDREISAM board has been determined by illuminating the MAPMT photocathode with \noptical pulses of width less\nthan $50$~ps from a pulsed laser system. The laser intensity was attenuated by optical\nfilters to obtain single photon signals on the MAPMT. The time resolution of\nthe complete system was determined to be $\\sigma=320$~ps.\n \nThe upgraded photon detection system of the COMPASS RICH-1 has been stably \noperated during the beam-time in 2006 and 2007. In Fig. \\ref{time} the time\nspectrum of the detected Cherenkov photons is shown. The central peak of the physics\nsignal has a standard deviation of about $1$~ns. The background below the peak is created by\nuncorrelated Cherenkov photons mainly from muon-beam halo particles. \nThe observed width of the central peak is determined by the different\ngeometrical path length of the photons in a Cherenkov ring traveling from \nthe mirrors to the photon detection system. This has been confirmed by a Monte\nCarlo simulation of the detector setup. \nBy applying a suitable off\\-line time-cut of $\\pm 5$~ns around the signal peak,\nan excellent background suppression is achieved. Cherenkov rings from a\nphysics event are shown in Fig. \\ref{Rings}.\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{schill-fig6.eps}\n\\vspace*{-0.8cm}\n\\caption{Physics signal and background from 2006 data.}\n\\label{time}\n\\end{figure}\n\n\\section{Conclusions} \nA fast front-end electronics for the read-out of the\nMAPMTs of the COMPASS RICH-1 was designed and successfully installed in 2006.\nThe new electronics has an excellent time resolution for Cherenkov photons of\nless than $1$~ns and thus allows a very good background suppression of \n photons originating from halo muons. Its high hit capability allows a dead-time free\ndata taking at trigger rates of up to $100$~kHz. The upgraded detector and the new\nelectronics entirely fulfill the expected performances and have been operated successfully since the data taking period in 2006. With the upgraded\ndetector, the number of detected Cherenkov photons for saturated rings \nhas increased from $14$\nbefore to $56$ after the upgrade. The resolution of the\nCherenkov angle has improved from $0.6$ to $0.3$~mrad \\cite{Federica}.\n\\vspace*{-0.8cm}\n\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{schill-fig7.eps}\n\\vspace*{-0.8cm}\n\\caption{Online display of Cherenkov rings from a physics event.}\n\\label{Rings}\n\\end{figure}\n\n\n\n\\section*{Acknowledgments}\nWe acknowledge the support from CERN and the support by the BMBF (Germany) and\nthe European Community-research Infrastructure Activity under the FP6 program\n(Hadron Physics).\n\\vspace*{-0.8cm}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
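The effect of the $\\pm 5$~ns offline time cut described above can be illustrated with synthetic hit times; only the width of the cut is taken from the text, while the 1~ns signal width and the flat 100~ns background window are assumptions of this sketch.

\\begin{verbatim}
# Synthetic illustration of the +/-5 ns offline time cut; signal width and
# background window are assumptions of this sketch.
import numpy as np

rng = np.random.default_rng(0)
t_signal     = rng.normal(0.0, 1.0, 10_000)       # ns, photons from the ring peak
t_background = rng.uniform(-50.0, 50.0, 10_000)   # ns, uncorrelated halo photons

cut = 5.0                                         # ns, half-width of the time cut
print(f"signal kept:     {np.mean(np.abs(t_signal) < cut):.3f}")     # ~1.00
print(f"background kept: {np.mean(np.abs(t_background) < cut):.3f}") # ~0.10
\\end{verbatim}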
{"text":"\\section{RESULTS}\n\n\\subsection{Properties of the transfer network}\nThe transfer network shows strong seasonal, monthly, and weekly cycles of patient transfers. The topology of the network and the geography of patient transfers are closely related, with 90\\% of transfers between hospitals less than 200km apart. On average, over the 2-year period, a hospital sent patients to 13.55 $\\pm$ 0.15 (SE) hospitals and received patients from 13.55 $\\pm$ 0.25 hospitals. (Note that the two means necessarily coincide in a directed network because each edge has both an outgoing end and an incoming end.) The average number of patients transferred per edge in the 2-year period was 12.3 $\\pm$ 0.63 (SE). Although the degree distributions (in-degree and out-degree) have fat tails (more so the in-degree), comparisons of the average clustering coefficient and the average shortest path length to randomized versions of the network show that the network closely resembles a spatial network.\nIn particular, it is much more clustered than a random network and has a high average shortest path length. Finally, the network shows no significant assortativity by degree. A representation of the aggregated network is shown in Fig.~\\ref{Fig1}. (See the appendices for more details.)\n\n\n\n\\begin{figure}\n\\begin{center}\n \\includegraphics[width=8.6cm]{Fig1}\n \\caption{\\textbf{Network of hospital-to-hospital transfers of US Medicare patients.} The network consists of hospitals that are connected by daily transfers of patients, here aggregated over the two-year period. Edge color encodes the number of patients transferred through each connection.\\label{Fig1}}\n \\end{center}\n\\end{figure}\n\n\n\\subsection{Spread of C.~Diff.~infections}\nIn our data, over the two-year period, there were a total of 313,214 \\emph{C.~Diff. } infections in the 5,677 hospitals included in the study (after all exclusion criteria were applied). We plot the mean \\emph{C.~Diff. } incidence for each hospital and the mean \\emph{C.~Diff. } incidence for its network neighbors in Fig.~\\ref{Fig2}. We observe two distinct regimes, one for low \\emph{C.~Diff. } incidence and another for high \\emph{C.~Diff. } incidence. The incidence of the pathogen in a given hospital appears to be correlated with the incidence of the pathogen in its network neighbors as long as the incidence at the focal hospital is relatively low; this correlation appears to vanish for hospitals displaying higher \\emph{C.~Diff. } incidence. One possible explanation for this phenomenon is that, if there were only very few cases of \\emph{C.~Diff. } in the low incidence regime, the transfers of infected patients might go undetected, therefore inducing correlations among pathogen incidences across the network. Conversely, if pathogen incidence were high and local, such that hospital outbreaks are detected, patient transfers might be restructured to curb the further spread of the infection. We determine the boundary between the two regimes based on the strength of correlation in pathogen incidence and assign the value for the crossover between the two regimes (shown as the vertical line in Fig.~\\ref{Fig2}). For \\emph{C.~Diff. } incidence below this threshold, the Pearson correlation coefficient $R \\approx 0.47$ (95\\% CI: 0.44, 0.49) whereas above the threshold $R \\approx -0.01$ (95\\% CI: -0.08, 0.07), where the confidence intervals for the correlation coefficients where estimated using the Fisher $z$-transformation \\cite{cite25}. 
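The neighborhood correlation just described can be reproduced with a few lines of Python. The sketch below assumes the aggregated transfer network is available as a dictionary of neighbor lists and the per-hospital mean incidence as a dictionary keyed by hospital; these containers, and the toy values at the end, are hypothetical.

\\begin{verbatim}
# Minimal sketch of the neighborhood-incidence correlation with a Fisher-z
# confidence interval; the input containers and toy values are hypothetical.
import numpy as np
from scipy import stats

def neighbor_correlation(neighbors, incidence):
    """Pearson r between each hospital's incidence and its neighbors' mean."""
    x, y = [], []
    for h, nbrs in neighbors.items():
        vals = [incidence[n] for n in nbrs if n in incidence]
        if h in incidence and vals:
            x.append(incidence[h])
            y.append(np.mean(vals))
    r, _ = stats.pearsonr(x, y)
    z, se = np.arctanh(r), 1.0 / np.sqrt(len(x) - 3)          # Fisher z-transform
    return r, (np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se))

nbrs = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
inc  = {"A": 0.010, "B": 0.020, "C": 0.012, "D": 0.018}
print(neighbor_correlation(nbrs, inc))
\\end{verbatim}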
This finding on the correlation of \\emph{C.~Diff. } incidence across hospitals that are neighbors in the transfer network supports the use of the transfer network as a substrate for the spread of nosocomial infections.\n\n\n\\begin{figure}\n\\begin{center}\n \\includegraphics[width=8.6cm]{Fig2}\n \\caption{\\textbf{Correlation between C.diff. incidence and transfer network structure.} The horizontal axis represents the mean \\emph{C.~Diff. } incidence at the focal hospital over time and the vertical axis is the mean \\emph{C.~Diff. } incidence in the network neighborhood of that hospital (the mean taken first over time and then over all network neighbors). We exclude hospitals with fewer than 100 patients from subsequent correlation analyses, leading to exclusion of 7.5\\% (428) of all hospitals. The Pearson correlation coefficients are 0.47 and -0.01 for the low and high incidence regimes, respectively, which are separated by the vertical line.\\label{Fig2}}\n \\end{center}\n\\end{figure}\n\n\n\n\\subsection{Monitoring the system for hypothetical outbreaks}\nWe investigated the optimal selection and placement of network sensors for early detection of epidemics. We used three different strategies for selecting the sensor nodes based on their properties in the static network, choosing them based on their in-degree rank, out-degree rank, or choosing them at random. Nodes with a high in-degree are expected to be efficient at funneling in pathogens from their network environment, whereas nodes with a high out-degree are expected to rapidly funnel out their pathogens.\n\nWe implemented two versions of each strategy. In the \\emph{static implementation}, the set of sensor hospitals was fixed in time, whereas in the \\emph{dynamic implementation} different hospitals function as sensors at different times (see Methods). In Fig.~\\ref{Fig3}, we show the results for the efficacy and the fraction of detected cases for the three strategies for the static implementation. The in-degree strategy achieves the highest efficacy with the lowest number of sensors and at most uses only 108 hospitals (1.9\\% of all hospitals) as sensors. The out-degree strategy is second best and it uses 167 hospitals (2.9\\%) as sensors. Both degree-based approaches outperform the random strategy that uses 332 hospitals (5.9\\%). In terms of the fraction of detected cases, the three strategies perform similarly: 78\\% for in-degree, 81\\% for out-degree, and 84\\% for the random strategy.\n\n\n\\begin{figure}\n\\begin{center}\n \\includegraphics[width=8.6cm]{Fig3}\n \\caption{\\textbf{Finding the optimal sensor set for the static implementation of the surveillance system.} Efficacy (a) and fraction of detected cases (b) on the static network as a function of the fraction of hospitals acting as sensors. The different curves represent different strategies for sensor selection: random selection (black), selection proportional to in-degree (red), and selection proportional to out-degree (blue). \\label{Fig3}}\n \\end{center}\n\\end{figure}\n\n\n\nIn Fig.~\\ref{Fig4}, we show the efficacy and fraction of detected cases for the three strategies for the dynamic implementation as a function of the number of sensors and the \\emph{activation time} $T$, the length of the time period that the hospital will be incorporated in the sensor set upon admitting a \\emph{C.~Diff. } patient. 
Except for very low activation times of the order of a few days, the measures of efficacy and fraction of detected cases are almost unaffected by this parameter. As can be seen in Fig.~\\ref{Fig5}, the optimal sensor set of a strategy stabilizes after $T=5$ days. These results corroborate the finding that choosing sensors based on in-degree is the best overall strategy, followed by out-degree, and then the random strategy. All of the strategies result in similar sizes for the most efficient sensor sets as in the static case. In terms of the fraction of detected cases, all three strategies perform similarly, each covering about 80\\% of the cases. We find that the average time a sensor spends in the active state increases as a function of the activation time $T$. Therefore, an optimal approach is to choose the smallest activation time $T$ that does not deteriorate performance of the sensor system in terms of the fraction of detected cases. For an activation time $T=5$, the average fraction of time sensors spend in the active state is $0.51$ for in-degree based selection, $0.47$ for out-degree based selection, and $0.46$ for the random strategy.\n\n\n\\begin{figure}\n\\begin{center}\n \\includegraphics[width=8.6cm]{Fig4}\n \\caption{\\textbf{Finding the optimal sensor set for the dynamic implementation of the surveillance system.} Heatmaps showing the efficacy (left column) and fraction of detected cases (right column) on the temporal transfer network. Results are shown as a function of the fraction of hospitals acting as sensors (horizontal axes) and the activity time that they implement (vertical axes). The rows of panels correspond to choosing the sensors randomly (top row), proportional to out-degree (middle row) and proportional to in-degree (bottom row).\\label{Fig4}}\n \\end{center}\n\\end{figure}\n\n\n\n\\begin{figure}\n\\begin{center}\n \\includegraphics[width=8.6cm]{Fig5}\n \\caption{\\textbf{Efficacy of temporal sensor sets.} \\textbf{a)} Fraction of sensors for the most efficient sensor set from the temporal network for sensors chosen at random (black), proportional to in-degree (red), and proportional to out-degree (blue). We have smoothened the efficacy curves by averaging the results using a window of 5 sensors. \\textbf{b)} Fraction of detected cases for the most efficient sensor set. \\textbf{c)} Average fraction of time that a sensor stays in the active state (same color code as on the left).\\label{Fig5}}\n \\end{center}\n\\end{figure}\n\n\n\nIn Fig.~\\ref{Fig6} an instance for the optimal sensor set derived from each strategy in the static implementation is plotted in a map. Sensor hospitals are plotted in red, while their first neighbors in blue and the rest in grey. Their size encodes the number of \\emph{C.~Diff. } cases they host in the full study period. We visually see that the number of blue and red hospitals are more or less similar for all strategies, while the number of sensor hospitals (in red) decreases from the random (Fig.~\\ref{Fig6}a), to the out-degree (Fig.~\\ref{Fig6}b), to the in-degree strategy (Fig.~\\ref{Fig6}c).\n\n\n\n\n\\begin{figure}\n\\begin{center}\n \\includegraphics[width=8.6cm]{Fig6}\n \\caption{\\textbf{Spatial locations of optimal sensor sets} in the static transfer network based on in-degree (a), out-degree (b), random (c). 
In red are the sensor hospitals, in blue their first neighbors and in grey those uncovered by the sensor set.\label{Fig6}}\n \\end{center}\n\\end{figure}\n\n\n\n\\section{CONCLUSIONS}\n\nWe studied a network defined by the transfer of 12.5M Medicare patients across 5,667 US hospitals over a 2-year period. We found the network to be strongly geographically embedded, with 90\\% of all transfers spanning a distance less than 200km. We found that the transfer network could plausibly be used as a substrate for the spread of pathogens: we observed a positive correlation for \\emph{C.~Diff. } incidence between hospitals and their network neighbors, identifying two qualitatively distinct regimes corresponding to low and high \\emph{C.~Diff. } incidence. Finally, we showed that selecting hospitals as sensors based on their in-degree in the static network was able to detect a large fraction of infections. Furthermore, an activation time of just 5 to 7 days using the dynamic sensor implementation is sufficient to achieve this surveillance with just 2\\% of the hospitals acting as sensors. These results support our conceptual model that the structure of the nationwide hospital patient transfer network is important for the spread of health-care associated infections, likely well beyond the illustrative case of C. diff considered here. In particular, our work highlights the need to monitor the network of transfers, not just individual hospitals, in order to monitor infectious outbreaks. \n\nIt is possible that other sorts of pathogens might need a different number of sensor hospitals, a different set of sensor hospitals, or different surveillance windows. Nevertheless, it is clear that the health of the entire hospital system, from the perspective of nosocomial infections or other outbreaks, could be monitored by leveraging the network structure of patient transfers. \n\nOur study has several limitations. First, the data we used to map the hospital networks are from 2006 and 2007. However, given that hospital transfer patterns are strongly embedded in the geography of the country, as we also demonstrated here, we do not expect the age of the data to affect our results substantially. Second, we cannot assess the extent to which unobserved policies or commercial constraints might have affected the flow of patients from one hospital to another; however, these policies merely affected patient transfers, which are, in any case, observable in the current and similar future data. Third, our analyses and models assume that patient transfers are the only mechanism responsible for the spread of infections. There are, of course, other vectors or means that might result in hospitals being infected, such as the movement of physicians, nurses, and other health care staff between hospitals. Finally, in this analysis, we did not make use of the fine-scale temporal information available in transfer data; future work could evaluate how bursts of infected patients, perhaps on particular days of the week, might contribute to an epidemic.\n\nUnderstanding the structure and dynamics of the hospital transfer network for the spread of real infections has a number of important implications. Empirical data could be used, either periodically or perhaps even in real time, to map networks of patient movement in the US health care system, and this network could then be used to monitor the spread of nosocomial and other infections in the network. In our estimation, such a system could detect 80\\% of \\emph{C.~Diff. 
} cases using just 2\\% of hospitals as network sensors. Our methods suggest practicable strategies for identifying which hospitals should serve a surveillance function for the whole system and, in the dynamic implementation, how long the sensors should retain a higher level of alertness after each index case. These tools would be useful not only for public health interventions in the case of natural epidemics, but also in the case of deliberate ones, such as those due to a possible bioterror attack. In conclusion, the actual structure and flow pattern of patients across US hospitals confers certain specific vulnerabilities and defenses, regardless of the biology of the pathogen per se, placing theoretical bounds on any effective containment strategy directed at a contagious pathogen.\n\n\\section{MATERIALS AND METHODS}\n\n\\subsection{Study data}\nWe study hospital-to-hospital transfer patterns of the entire population of US Medicare patients over a two-year period. Medicare provides almost universal coverage to all Americans aged 65 and older, about 15\\% of the US population \\cite{cite17}; and about 37\\% of all hospital admissions in 2003 were for Medicare patients \\cite{cite18}. We used a 100\\% sample of the Medicare Provider Analysis and Review (MedPAR) files for calendar years 2006 and 2007. The MedPAR files contain diagnosis, procedure, and billing information on all inpatient and skilled nursing facility (SNF) stays. Our study cohort consisted of Medicare patients aged 65 or older with a hospital stay at an acute medical or surgical hospital with an active record in the American Hospital Association (AHA) 2005 database \\cite{cite19}. Before applying these exclusion criteria, we identified 26.4 million stays of 12.5 million patients in 6,278 different hospitals. After the exclusions, our final cohort consisted of 21.0 million inpatient stays of 10.4 million patients in 5,667 different hospitals. \n\n\\subsection{Hospital-to-hospital transfers}\nAccording to our definition, a hospital-to-hospital transfer occurs whenever a patient is discharged from one hospital and admitted to another hospital on the same calendar day. Note that a minority of transfers as defined here may not correspond to actual formal transfers of patients. For example, a patient could be discharged from hospital A and then be re-admitted to hospital B on the same day for a reason that is unrelated to her stay at hospital A. From an epidemiological point of view, however, these are essentially equivalent to formal patient transfers. (Our results change little if we relax the definition of hospital transfers to allow the re-admission to take place the day following the day of discharge. See the appendices.) Using this definition of transfer, we identified 936,101 transfer events taking place between 76,003 pairs of hospitals. \n\n\\subsection{Constructing the transfer network}\nWe consider a network representation of the patient transfers across hospitals. In this framework, hospitals are represented as nodes and a transfer of a total of $x$ patients on day $d$ from hospital $i$ to hospital $j$ is represented as a directed edge from node $i$ to node $j$ with weight $x$ on day $d$. The longitudinal sequence of patient transfers forms a directed, weighted, temporal network. 
We consider a static representation of the network that retains no temporal information of patient transfers by aggregating the data for the two-year period, where the weight of the edge from node $i$ to node $j$ is the mean daily number of patient transfers through that edge, i.e., the total number of transfers from hospital $i$ to hospital $j$ during the study period divided by the number of days in the period (730). \n\n\\subsection{C.~Diff.~incidence on the transfer network}\nThe transfer of infected patients from one hospital to another can result in pathogen transmission between them. Given that the MedPAR files contain diagnosis codes for each patient, we investigated the incidence of \\emph{Clostridium difficile} (\\emph{C. diff.}) infections and its correlation with properties of the transfer network. \\emph{C.~Diff. } is an anaerobic, gram-positive, spore-forming bacteria that occurs frequently in health care settings. It is found in over 20\\% of patients who have been hospitalized for more than one week. The disease is spread by ingestion of \\emph{C.~Diff. } spores, which are very hardy and can persist on environmental surfaces for months without proper hygiene \\cite{cite20}. \\emph{C.~Diff. } associated infections kill an estimated 14,000 people a year in the US as a result of institutional infections \\cite{cite21}. We ascertained incident cases of \\emph{C.~Diff. } infection by identifying any hospital admissions with ICD-9 diagnostic code 008.45. The sensitivity and specificity of using ICD-9 codes to identify \\emph{C.~Diff. } infections have been reported by multiple groups to be adequate for identifying overall \\emph{C.~Diff. } burden for epidemiological purposes \\cite{cite22,cite23,cite24}. Given the relative \\emph{C.~Diff. } incidence at each hospital, defined as the fraction of patients with that particular diagnosis over the study period, we plot the average relative \\emph{C.~Diff. } incidence in the neighborhood of each hospital against its own \\emph{C.~Diff. } incidence in Figure ~\\ref{Fig2}. We quantify the correlation using the Pearson linear correlation coefficient.\n \n\\subsection{Sensor placement on the hospital network}\nIt might be possible to make use of the properties of the hospital-hospital transfer network to set up a real-time surveillance system for infections, such as a new strain of antibiotic-resistant \\emph{C.~Diff. } For this application, it is unlikely that exhaustive data would be available for all hospitals all the time, and this limitation calls for a parsimonious approach where only a subset of hospitals needs to be monitored at any given time. We call these monitored hospitals ``network sensors'' in the sense that they could be used to sense incipient epidemics. We consider three different prescriptions for sensor placement: (1) choose sensor hospitals in proportion to their in-degree rank in the static network; (2) choose sensor hospitals in proportion to their out-degree rank in the static network; and (3) choose sensor hospitals uniformly at random from the set of all hospitals. In our simulations, we assume that a monitored hospital is able to detect every infected patient who is present either in the hospital itself or in any of its network neighbors to which it is connected via patient transfers. 
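Under this detection assumption, the fraction of cases captured by a candidate sensor set follows directly from the adjacency structure. The sketch below makes the bookkeeping explicit; the adjacency dictionary and case counts are hypothetical placeholders rather than the MedPAR-derived data.

\\begin{verbatim}
# Sketch of the detection assumption: a sensor hospital "sees" the cases in
# itself and in all hospitals it exchanges patients with.  Inputs are
# hypothetical placeholders.
def detected_fraction(sensors, neighbors, cases):
    covered = set(sensors)
    for s in sensors:
        covered.update(neighbors.get(s, ()))
    total = sum(cases.values())
    return sum(cases.get(h, 0) for h in covered) / total if total else 0.0

neighbors = {"A": {"B", "C"}, "B": {"A"}, "C": {"A", "D"}, "D": {"C"}}
cases = {"A": 10, "B": 3, "C": 5, "D": 2}
print(detected_fraction({"A"}, neighbors, cases))   # A covers A, B, C -> 0.9
\\end{verbatim}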
While this assumption is made primarily for methodological reasons and may not hold in practice, the relative performance (the ordering) of the three prescriptions for selecting sensors remains unaffected if the assumption were relaxed. To learn about the potential of the hospital sensor framework to detect epidemics, we investigate its best-case performance by determining the optimal sensor set for the observed data (see appendices). We expect that its performance would be somewhat reduced for an independent test data set (data not used as part of the training of the method).\n\n\\subsection{Determining the optimal sensor set}\nWe define the relative efficacy of the sensor $E_N$ set as\n\\begin{equation}\n E_N=\\frac{D_N}{ND_1}-\\frac{M-D_N}{M}\n\\end{equation}\nwhere $N$ is the number of sensors in the sensor set, $D_N$ the number of infected patients detected by a sensor set of $N$ sensors, and $M$ is the total number of \\emph{C.~Diff. } cases in the network. While adding sensors to the system always improves its overall performance, any sensor set exhibits diminishing marginal returns in the sense that the per-sensor increment in performance declines with each added sensor. The first term in the definition corresponds to the number of detected cases normalized by the number of cases that would be detected if all sensors were as efficacious as the first sensor in the sensor set. The second term is a penalty term that corresponds to the fraction of undetected cases. High relative efficacy is therefore a combination of selecting a set of sensors that are as close as possible to the efficaciousness of the first sensor in the set and having these sensors miss as small a proportion of cases as possible. Note that the two terms in the definition of the relative efficacy could be assigned different weights; however, here, we opted for the simplest approach and only ensured that the two contributions are measured on the same scale.\n\n\\subsection{Static and dynamic implementation of network sensors}\nWe implement the sensor framework in two different ways. In the static implementation, the sensors are always active, whereas, in the dynamic implementation, the sensors are either passive or active. When a sensor is passive, it can only detect infections in the hospital itself. Whenever an infection is detected, the sensor either transitions from the passive state to the active state for a period of $T$ days or, if already in the active state, remains in that state for another $T$ days. In addition to the efficacy of the sensor sets, for both implementations, we keep track of the fraction of \\emph{C.~Diff. } cases that are detected in order to assess the performance of the sensor system.\n\n\\noindent \\textbf{Static implementation} Since we know the number of \\emph{C.~Diff. } cases in each hospital at any given time, we simply count the number of cases in the sensor hospitals and their network neighbors. We average the results by generating 10,000 independent realizations of sensor sets for each of the three different prescriptions of choosing sensors (in-degree, out-degree, random). The optimal sensor set for each strategy is the one with maximum efficacy.\n\n\\noindent \\textbf{Dynamic implementation} We monitor the admission times of \\emph{C.~Diff. } patients at each hospital, and whenever such a patient is admitted, we incorporate the hospital in the sensor set for $T$ days following the admission, a time period we call the activation time. 
Once added to the sensor set, the hospital can detect the \\emph{C.~Diff. } cases present in the hospital itself and its network neighbors for a total of $T$ days. The efficacy of the sensor system therefore depends on the value of $T$, and we compute the efficacy of the sensors for $T$ from 0 to 100 days (shown from 0 to 30 days in Fig.~\\ref{Fig5}). For each combination of parameter values, the number of sensors and the activation time, and for each strategy of prescribing sensors, we perform 1,000 independent realizations of the sensor selection process. We also track the average time each sensor stays in the active state. An optimal sensor set is one that has maximal efficacy for activation time $T$, minimizes the average time the sensors stay active, and maximizes the fraction of detected cases.\n\n\n\\begin{acknowledgments}\nWe thank Laurie Meneades for the expert assistance required to build the dataset. JFG and JPO are joint first authors of this article. \n\\end{acknowledgments}\n\n\n\\renewcommand{\\thefigure}{A\\arabic{figure}}\n\\setcounter{figure}{0}\n\\newpage\n\n\\section{APPENDICES}\n\n\\subsection{Transfer network}\n\nWe characterize the temporal nature of hospital usage by showing the time series of the number of transfers in Figure \\ref{FigS1} (a). A clear seasonal oscillation is visible, and at a finer temporal scale, a weekly periodic cycle is also observable, where Saturdays and Sundays are the least active days of the week and Mondays the most active and also the most variable. In Figure \\ref{FigS1} (b) there are periodic oscillations in many of the quantities of interest, such as the number of patients staying overnight at hospitals, number of admissions, discharges, and transfers.\n\n\\begin{figure}[H]\n\\begin{center}\n \\includegraphics[width=8.6cm]{FigS1}\n \\caption{\\textbf{ Hospital Transfer Network HTN.} \\textbf{a)} Total number of transfers in the system as a function of time for the two years of data. We can see seasonal and weekly oscillations. \\textbf{b)} Median, 5- and 95-percentile for several quantities of interest for the different days of the week.\\label{FigS1}}\n \\end{center}\n\\end{figure}\n\n\nWe then examine the structural connectivity and geographic characteristics of the static transfer network (see Fig. \\ref{FigS2}). In terms of network topology, the in-degree distribution has a broader tail than the out-degree distribution. The network has an average (local) clustering coefficient of 0.51. This coefficient measures the probability that any two hospitals connected to an index hospital are in turn connected to each other, forming a closed triad (a cycle of three nodes and three edges). A random graph with the same number of nodes and edges yields an average local clustering coefficient of 0.0057$\\pm$ 0.0001 (SE), which is substantially lower than the observed value, a finding that likely reflects the network's geographic embeddedness. The average shortest path length of the network is 4.69. To put this number in perspective, we performed network randomizations using a slight variant of the directed configuration model that preserves both in-degree and out-degree distributions \\cite{citeS1}. This approach gave rise to an average shortest path length of 3.6 $\\pm$ 0.4 (SE). The observed network is therefore a somewhat \"larger world\" than what would be expected by chance, but this is almost certainly driven by the underlying geography and the objective of keeping transfers as short as possible. 
In fact, about 90\\% of the transfers are to hospitals less than 200km away. \n\n\n\\begin{figure}[H]\n\\begin{center}\n \\includegraphics[width=8.6cm]{FigS2}\n \\caption{\\textbf{ Topological and geographical characteristics of the transfer network.} \\textbf{a)} Distributions for in- (red open triangles) and out-degree (black solid circles). \\textbf{b)} Distribution for the number of transfers per connection $\\omega$. \\textbf{c)} Transfer distance distribution. \\label{FigS2}}\n \\end{center}\n\\end{figure}\n\nDegree assortativity is the concept that nodes with many connections tend to be connected to other nodes with many connections \\cite{citeS2,citeS3}. When the static network is taken as undirected, we can use the assortativity coefficient to measure the extent to which the degrees of hospitals in each pair of connected hospitals are similar. We obtain a slightly negative value of -0.06, but similar values of -0.005 $\\pm$ 0.001 (SE) also arise from randomizations of the network using the algorithm discussed above. Consequently, there is no statistically significant assortativity in the network over and above what would be expected by chance given the network's degree distributions.\n\n\n\\subsection{Robustness of the transfer extraction}\n\nSince the patient transfers are not explicit in the data but instead need to be inferred from the data, we investigated the robustness of some of the results to our definition of what constitutes a hospital transfer. Instead of requiring readmission on the day of discharge, we relaxed this definition by allowing the readmission to take place also on the day after discharge. A visual examination of Fig. \\ref{FigS3} shows that the edges induced by the same-day rule (red edges) and the additional edges that result using the relaxed rule (blue edges). This relaxation leads to 67472 additional transfers (7.2\\% increase). There are 11827 new edges that appear on the transfer network (15.6\\% increase), with an average transfer load of 1.2 with a standard deviation of 0.7. For the connections that appear under both rules, the difference in transfer loads averages to 0.7 transfers with a standard deviation of 1.9. The distribution of edge weights for both cases are shown in the upper left panel of Fig. \\ref{FigS4}, and the two distributions appear visually very similar to one another. The weight distribution of the additional edges, as well as the distribution of weight differences for the common edges in both cases can be seen in the upper right panel of Fig. \\ref{FigS4}. The range of this distribution is much more constrained than that of the actual weight distributions. The number of transfers increases, but the patterns remain essentially the same both temporally and topologically. For the temporal patterns, see the lower panels of Fig. \\ref{FigS4}. Note also that both measures of transfers are strictly speaking wrong, as the first one based on the one-day rule is really a lower bound on the number of transfers and the second one (based on the relaxed rule) is an upper bound. Given the similarity of these findings across the two rules, in the following we work with the lower bound (same day discharge and readmission).\n\n\n\\begin{figure}[H]\n\\begin{center}\n \\includegraphics[width=8.6cm]{FigS3}\n \\caption{\\textbf{ Comparison of the transfer network based on the 1-day and 2-day rules.} The network is constructed by aggretating transfer data over the full two-year period. 
Red edges correspond to the connections induced by the 1-day rule and the blue edges correspond to the additional edges that appear when considering the 2-day rule.\\label{FigS3}}\n \\end{center}\n\\end{figure}\n\n\\begin{figure}[H]\n\\begin{center}\n \\includegraphics[width=8.6cm]{FigS4}\n \\caption{\\textbf{ Comparison of transfer window of one and two days.} \\textbf{a)} Distributions of the number of transfers per connection \u03c9 in black for one day transfers (1-day rule) and in red for one or two day transfers (2-day rule). \\textbf{b)} Distribution of the number of transfers per connection for the edges that appear when using the 2-day rule. Two-day transfers (orange diamonds) and of the difference in the number of transfers for the connections that are shared by the two rules (green triangles). \\textbf{c)} Temporal evolution of the total number of transfers for one day and two day transfers. The insets show a four-week and a one-week window, showing the periodicities in the data. \\textbf{d)} Median, 5- and 95 percentiles for the transfers aggregated by day of the week. Again a comparison of one day and two day transfers demonstrates that they are qualitatively very similar.\\label{FigS4}}\n \\end{center}\n\\end{figure}\n\n\n\n\\subsection{Optimal sensor set}\n\nWe determine the best sensor set we could have possibly chosen given the observed data. In order to do this, we use greedy algorithms \\cite{citeS4} as checking all possible combinations of hospitals to use as sensors grows exponentially in the number of hospitals and is therefore not feasible for any but the smallest hospital transfer networks. For a fast algorithm that is not guaranteed to give the optimal answer (as is true with any heuristic algorithm), we choose the sensors sequentially. We first compute the number of cases each hospital would detect and we choose the one that will detect the highest number of cases. We then re-compute how many new cases would be covered by each subsequent hospital if added to the existing sensor set. This continues until we find the sensor set that covers all cases. As mentioned above, this procedure does not guarantee that we will choose the optimal sensor set given a number of sensors N, but it is however very efficient and yields an effective sensor set not far from the optimal one. In order to check that our solution is sufficiently close to the actual best solution, we used simulated annealing \\cite{citeS5}. The simulated annealing procedure is suitable for optimization problems of large scale, especially ones where a desired global extremum is hidden among many, poorer, local extrema. There is an objective function to be minimized, in our case the coverage of cases to be maximized, but the space over which that function is defined is not simply the N-dimensional space of N continuously variable parameters. Rather, it is a discrete, but very large, configuration space with the number of elements factorially large, so that they cannot be explored exhaustively. This result is in agreement with the result of the fast sequential algorithm.\n\nIn Fig. \\ref{FigS5} we show the results of finding the sensor set that maximizes the number of detected cases in the training dataset for the static network case. This method is data-based and tries to maximize the number of detected cases without the use of any strategy of choosing sensors other than the optimization procedure. In this case we find that for a very small number of 26 (0.46\\%) sensors, we can detect 88\\% of the cases. 
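The greedy construction described above is straightforward to express in code. The sketch below (hypothetical inputs, same conventions as the earlier snippets) repeatedly picks the hospital whose neighborhood adds the most not-yet-covered cases and then evaluates the relative efficacy $E_{N}$ of Equation (1) for every prefix of the resulting sensor list.

\\begin{verbatim}
# Greedy max-coverage choice of sensors plus the relative efficacy E_N of
# Eq. (1); adjacency and case counts are hypothetical placeholders.
def greedy_sensors(neighbors, cases):
    remaining = dict(cases)                  # cases not yet covered
    order, gains = [], []
    while any(remaining.values()):
        def gain(h):
            zone = {h} | set(neighbors.get(h, ()))
            return sum(remaining.get(x, 0) for x in zone)
        best = max(neighbors, key=gain)
        if gain(best) == 0:
            break
        order.append(best)
        gains.append(gain(best))
        for x in {best} | set(neighbors.get(best, ())):
            remaining[x] = 0
    return order, gains

def efficacy(gains, total_cases):
    """Relative efficacy E_N for each prefix of the greedy sensor list."""
    out, d, d1 = [], 0, gains[0]
    for n, c in enumerate(gains, start=1):
        d += c
        out.append(d / (n * d1) - (total_cases - d) / total_cases)
    return out

neighbors = {"A": {"B", "C"}, "B": {"A"}, "C": {"A", "D"}, "D": {"C"}, "E": set()}
cases = {"A": 10, "B": 3, "C": 5, "D": 2, "E": 4}
order, gains = greedy_sensors(neighbors, cases)
print(order, [round(e, 3) for e in efficacy(gains, sum(cases.values()))])
\\end{verbatim}

As in the definition above, the first term rewards sensor sets whose members stay close to the per-sensor yield of the first sensor chosen, while the second term penalizes undetected cases; the value of $N$ that maximizes $E_{N}$ plays the role of the optimal sensor set size.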
This very high detection performance is, however, likely a consequence of over-fitting the model to the observed (training) data. Using this set of hospitals as sensors for a new dataset on patient transfers would likely result in lower (and more variable) performance of the sensor system.\n\n\n\\begin{figure}[H]\n\\begin{center}\n \\includegraphics[width=8.6cm]{FigS5}\n \\caption{\\textbf{ Finding the optimal number of sensors for the best sensor selection (static network).} \\textbf{a)} shows the efficacy and \\textbf{b)} the fraction of detected cases, both as a function of the fraction of hospitals used as sensors. There is a peak at a very low fraction of sensors, but this point corresponds to no more than 30\\% of detected cases. The second peak located at around 0.005 (using 0.5\\% of hospitals as sensors) is able to detect over 80\\% of the cases. \\label{FigS5}}\n \\end{center}\n\\end{figure}\n\nIn Fig. \\ref{FigS6} we can see the results of performing the same analysis for the dynamic implementation. Now the hospitals that are sensors act as sensors only for a period of $T$ days after admitting a patient with a C.diff infection. The greedy method for choosing sensors works as in the static case, but now taking into account the temporal restrictions on the cases that the sensor system is able to detect. The result is similar to the results of the other methods when moving from the static to the dynamic case. The results are different for small values of the activation time, below one week, but remain basically unchanged as the activation time is raised.\n\n\n\\begin{figure}[H]\n\\begin{center}\n \\includegraphics[width=8.6cm]{FigS6}\n \\caption{\\textbf{ Finding the optimal number of sensors for the best sensor selection (dynamic network).} Efficacy (a) and the fraction of detected cases (b) as a function of the fraction of sensors and the activation time $T$. \\label{FigS6}}\n \\end{center}\n\\end{figure}\n\nFinally, in Fig. \\ref{FigS7} we can see the sensor set that results from the optimization for the aggregated case.\n\n\n\n\\begin{figure}[H]\n\\begin{center}\n \\includegraphics[width=8.6cm]{FigS7}\n \\caption{\\textbf{ Spatial positioning of the optimal sensor set.} Red dots represent the sensor hospitals and blue dots are (nearest) neighbors of sensor hospitals. The size of each dot represents the mean C. diff incidence taken over the 2-year period at the hospital.\\label{FigS7}}\n \\end{center}\n\\end{figure}\n\n\n\n\n\\subsection{Robustness of sensor set performance}\n\n\nThe performance of statistical methods is generally quantified using some error metric, and most fitting procedures attempt to minimize this error in the process of finding suitable values for model parameters. It is often possible to reduce this training error by increasing model complexity, but generally the goal of modeling is to have the model perform well on a test data set, ideally an independent data set that the model was not trained on. Good performance on a test data set, quantified by a low test error, generally leads to better overall model performance and avoids the problem of over-fitting, which refers to the model adapting to the training data \"too well\" at the expense of poor generalizability to different realizations of data from the same data generating mechanism.\n\nIn analogy with this approach to statistical learning, we performed a series of analyses to investigate the performance of sensor sets derived from one set of data and tested on another. 
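A compact way to organize this train/test exercise is sketched below; the record format (day index, hospital, case count), the coverage helper, and the toy values are hypothetical, and only the idea of fitting the sensor set on the first window and scoring it on later windows follows the text.

\\begin{verbatim}
# Sketch of the windowed train/test evaluation; record format, helper and
# toy values are hypothetical.
from collections import defaultdict

def split_windows(case_records, width_days):
    """Group (day, hospital, n_cases) records into consecutive windows."""
    windows = defaultdict(lambda: defaultdict(int))
    for day, hospital, n in case_records:
        windows[day // width_days][hospital] += n
    return [dict(windows[w]) for w in sorted(windows)]

def fraction_detected(sensors, neighbors, cases):
    covered = set(sensors)
    for s in sensors:
        covered |= set(neighbors.get(s, ()))
    total = sum(cases.values())
    return sum(c for h, c in cases.items() if h in covered) / total if total else 0.0

records   = [(0, "A", 4), (10, "B", 2), (40, "C", 6), (70, "D", 3)]
neighbors = {"A": {"B", "C"}, "B": {"A"}, "C": {"A", "D"}, "D": {"C"}}
train, *tests = split_windows(records, width_days=30)
sensors = {"A"}          # e.g. chosen by in-degree rank on the training window
print([fraction_detected(sensors, neighbors, w) for w in tests])   # [1.0, 0.0]
\\end{verbatim}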
The objective of the analysis is twofold. First, it enables us to ascertain the validity of our methods when applied to test data, i.e., data not used to select the set of sensors. Second, given that there are likely temporal correlations in the data, it enables us to study the performance of sensor sets on data that are temporally far removed from the training data.\n\nHere we divided our data into disjoint (non-overlapping) windows of width L, where we used values of 1 month, 2 months, 4 months, 6 months, and a year for L. For any given window width, we take the first window to be our training data and use all subsequent windows as different realizations of test data. We used the training data for generating the sensor sets (based on in-degree, out-degree, and the greedy algorithm; we exclude considerations of the random strategy here because there is no real distinction between testing and training) and evaluated the relative efficacy and the percentage of cases detected separately for each test data window.\n\nAlthough intuitively it seems that the sensor sets would perform worse the greater the temporal separation between the training window and test window, we found that our methods were robust against this separation. Little variation is observed as the validation window gets more and more separated temporally from the training window that was used to construct the sensor sets (see Figs.~\\ref{FigS8}--\\ref{FigS10}). This is counterintuitive especially for the sensor set obtained using the greedy algorithm because in principle we are over-fitting our model to the data and consequently this should result in more variability. Nevertheless, temporal correlations in the dynamics of the system make it well behaved in this sense. An important lesson here is that it is possible to determine efficient sensor sets even using outdated data. \n\n\n\\begin{figure}\n\\begin{center}\n \\includegraphics[width=8.6cm]{FigS8}\n \\caption{\\textbf{ Out-degree strategy: training vs. test data.} Due to temporal correlations in the data, the sensor sets derived from the first slice of data perform comparably to their performance on the training set when applied to the remaining slices of data as test data. In all the plots, the results for the training set are shown as black solid lines while the red dashed lines refer to the sensor set applied to the test data sets. From left to right and top to bottom, the different plots refer to window widths of 1 (a), 2 (b), 4 (c), 6 (d), and 12 (e) months.\\label{FigS8}}\n \\end{center}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n \\includegraphics[width=8.6cm]{FigS9}\n \\caption{\\textbf{ In-degree strategy: training vs. test data.} The panels are arranged as above. Due to the temporal correlation of the data the sensor sets derived from the first slice of data perform comparably to their performance on the training set when applied to the remaining slices of data as test data.\\label{FigS9}}\n \\end{center}\n\\end{figure}\n\n\n\\begin{figure}\n\\begin{center}\n \\includegraphics[width=8.6cm]{FigS10}\n \\caption{\\textbf{ Greedy strategy: training vs. test data.} The panels are arranged as above. Due to the temporal correlation of the data the sensor sets derived from the first slice of data perform comparably to their performance on the training set when applied to the remaining slices of data as test data. 
Nevertheless when compared to the other strategies this is slightly more variable when compared training and test data results.\\label{FigS10}}\n \\end{center}\n\\end{figure}\n\n\n\\subsection{Effect of the length of the observation period on the sensor set evaluation}\n\nThe validation set approach also enables us to evaluate how the construction of a sensor set is affected by the width of the window used in its construction. From the results in Fig.~\\ref{FigS11} it is clear that the wider the window, the smaller the number of sensors needed in order for the sensor set to be optimal. The out-degree strategy is less robust with respect to this metric, and the plots demonstrate a large difference between the curves between 2 and 4 months. The difference is less pronounced between the other curves.\n\n\n\n\\begin{figure}\n\\begin{center}\n \\includegraphics[width=8.6cm]{FigS11}\n \\caption{\\textbf{ Effect of observation period on the construction of the sensor set.} Efficacy and fraction of detected cases for different lengths of the observation period. \\textbf{a)} random strategy, \\textbf{b)} out-degree strategy, \\textbf{c)} in-degree strategy. \\textbf{d)} Greedy strategy, the ``best\" sensor set. The different colors correspond to different window widths: 1 month (black), 2 months (red), 4 months (blue), 6 months (purple) and 12 months (orange).\n \\label{FigS11}}\n \\end{center}\n\\end{figure}\n\n\n\\subsection{List of hospitals included in the sensor sets}\n\nIn this section we list the first 26 hospitals included in the in-degree and out-degree strategies, as well as those that arise from the greedy optimization approach.\n\n\\vspace{0.2cm}\n\\textbf{In-degree strategy for the 2-year aggregated network:}\n\\vspace{0.2cm}\n\\begin{enumerate}\n \\item Saint Marys Hospital, 1216 Second Street SW, Rochester, MN, kin=346, kout=103\n \\item Cleveland Clinic Foundation, 9500 Euclid Avenue, Cleveland, OH, kin=286, kout=145\n \\item New York-Presbyterian Hospital, 525 East 68th Street, Manhattan, NY, kin=214, kout=118\n \\item Mount Sinai Hospital, One Gustave L Levy Place, Manhattan, NY, kin=169, kout=80\n \\item St Luke's Episcopal Hospital, 6720 Bertner Avenue, Houston, TX, kin=163, kout=91\n \\item Barnes-Jewish Hospital, 1 Barnes-Jewish Hosp Plaza, St. 
Louis, MO, kin=162, kout=78\n \\item Massachusetts General Hospital, 55 Fruit Street, Boston, MA, kin=159, kout=88\n \\item Emory University Hospital, 1364 Clifton Road NE, Atlanta, GA, kin=151, kout=71\n \\item Methodist Hospital, 6565 Fannin Street, Houston, TX, kin=151, kout=72\n \\item University of Alabama Hospital, 619 South 19th Street, Birmingham, AL, kin=147, kout=79\n \\item Johns Hopkins Hospital, 600 North Wolfe Street, Baltimore, MD, kin=146, kout=74\n \\item UPMC Presbyterian, 200 Lothrop Street, Pittsburgh, PA, kin=146, kout=89\n \\item Brigham and Women's Hospital, 75 Francis Street, Boston, MA, kin=142, kout=91\n \\item Northwestern Memorial Hospital, 251 East Huron Street, Chicago, IL, kin=141, kout=75\n \\item Hospital of the Univ of PA, 3400 Spruce Street, Philadelphia, PA, kin=139, kout=81\n \\item Clarian Health Partners, I-65 at 21st Street, Indianapolis, IN, kin=136, kout=81\n \\item New York Univ Medical Center, 550 First Avenue, Manhattan, NY, kin=135, kout=46\n \\item Kessler Institute for Rehab, 1199 Pleasant Valley Way, Newark, NJ, kin=133, kout=51\n \\item Mem Sloan-Kettering Cancer Ctr, 1275 York Avenue, Manhattan, NY, kin=133, kout=52\n \\item Duke University Hospital, Erwin Road, Durham, NC, kin=132, kout=67\n \\item Rochester Methodist Hospital, 201 West Center Street, Rochester, MN, kin=131, kout=27\n \\item Vanderbilt Univ Medical Center, 1211 22nd Avenue South, Nashville, TN, kin=131, kout=77\n \\item Baylor Univ Medical Center, 3500 Gaston Avenue, Dallas, TX, kin=131, kout=72\n \\item Abbott Northwestern Hospital, 800 East 28th Street, Minneapolis, MN, kin=126, kout=24\n \\item Thomas Jefferson Univ Hospital, 111 South 11th Street, Philadelphia, PA, kin=124, kout=75\n \\item Lenox Hill Hospital, 100 East 77th Street, Manhattan, NY, kin=123, kout=63\n\\end{enumerate}\n\n\\vspace{0.2cm}\n\\textbf{Out-degree strategy for the 2-year aggregated network:}\n\\vspace{0.05cm}\n\\begin{enumerate}\n \\item Cleveland Clinic Foundation, 9500 Euclid Avenue, Cleveland, OH, kout=145, kin=286\n \\item New York-Presbyterian Hospital, 525 East 68th Street, Manhattan, NY, kout=118, kin=214\n \\item Saint Marys Hospital, 1216 Second Street SW, Rochester, MN, kout=103, kin=346\n \\item Brigham and Women's Hospital, 75 Francis Street, Boston, MA, kout=91, kin=142\n \\item St Luke's Episcopal Hospital, 6720 Bertner Avenue, Houston, TX, kout=91, kin=163\n \\item UPMC Presbyterian, 200 Lothrop Street, Pittsburgh, PA, kout=89, kin=146\n \\item Univ of TX M D Anderson Ctr, 1515 Holcombe Boulevard, Houston, TX, kout=89, kin=114\n \\item Massachusetts General Hospital, 55 Fruit Street, Boston, MA, kout=88, kin=159\n \\item UCSF Medical Center, 500 Parnassus Avenue, San Francisco, CA, kout=81, kin=107\n \\item Clarian Health Partners, I-65 at 21st Street, Indianapolis, IN, kout=81, kin=136\n \\item Hospital of the Univ of PA, 3400 Spruce Street, Philadelphia, PA, kout=81, kin=139\n \\item Mount Sinai Hospital, One Gustave L Levy Place, Manhattan, NY, kout=80, kin=169\n \\item University of Alabama Hospital, 619 South 19th Street, Birmingham, AL, kout=79, kin=147\n \\item Atlanticare Regional Med Ctr, 1925 Pacific Avenue, Camden, NJ, kout=79, kin=13\n \\item Barnes-Jewish Hospital, 1 Barnes-Jewish Hosp Plaza, St. 
Louis, MO, kout=78, kin=162\n \\item Vanderbilt Univ Medical Center, 1211 22nd Avenue South, Nashville, TN, kout=77, kin=131\n \\item Florida Hospital, 601 East Rollins Street, Orlando, FL, kout=76, kin=57\n \\item Shands at the Univ of Florida, 1600 SW Archer Road, Gainesville, FL, kout=75, kin=106\n \\item Northwestern Memorial Hospital, 251 East Huron Street, Chicago, IL, kout=75, kin=141\n \\item Thomas Jefferson Univ Hospital, 111 South 11th Street, Philadelphia, PA, kout=75, kin=124\n \\item Johns Hopkins Hospital, 600 North Wolfe Street, Baltimore, MD, kout=74, kin=146\n \\item Baylor Univ Medical Center, 3500 Gaston Avenue, Dallas, TX, kout=72, kin=131\n \\item Methodist Hospital, 6565 Fannin Street, Houston, TX, kout=72, kin=151\n \\item Naples Community Hospital, 350 Seventh Street North, Fort Myers, FL, kout=71, kin=27\n \\item Emory University Hospital, 1364 Clifton Road NE, Atlanta, GA, kout=71, kin=151\n \\item Memorial Hermann Hospital, 6411 Fannin, Houston, TX, kout=71, kin=114\n\\end{enumerate}\n\n\\vspace{0.5cm}\n\n\\vspace{0.2cm}\n\\textbf{Greedy algorithm:}\n\\vspace{0.25cm}\n\\begin{enumerate}\n \\item Cleveland Clinic Foundation, 9500 Euclid Avenue, Cleveland, OH, kin=286, kout=145\n \\item New York-Presbyterian Hospital, 525 East 68th Street, Manhattan, NY, kin=214, kout=118\n \\item Saint Marys Hospital, 1216 Second Street SW, Rochester, MN, kin=346, kout=103\n \\item Johns Hopkins Hospital, 600 North Wolfe Street, Baltimore, MD, kin=146, kout=74\n \\item Massachusetts General Hospital, 55 Fruit Street, Boston, MA, kin=159, kout=88\n \\item Univ of TX M D Anderson Ctr, 1515 Holcombe Boulevard, Houston, TX, kin=114, kout=89\n \\item Barnes-Jewish Hospital, 1 Barnes-Jewish Hosp Plaza, St. Louis, MO, kin=162, kout=78\n \\item Shands at the Univ of Florida, 1600 SW Archer Road, Gainesville, FL, kin=106, kout=75\n \\item UCLA Medical Center, 10833 Le Conte Avenue, Los Angeles, CA, kin=116, kout=54\n \\item Northwestern Memorial Hospital, 251 East Huron Street, Chicago, IL, kin=141, kout=75\n \\item Hospital of the Univ of PA, 3400 Spruce Street, Philadelphia, PA, kin=139, kout=81\n \\item Duke University Hospital, Erwin Road, Durham, NC, kin=132, kout=67\n \\item Baylor Univ Medical Center, 3500 Gaston Avenue, Dallas, TX, kin=131, kout=72\n \\item Emory University Hospital, 1364 Clifton Road NE, Atlanta, GA, kin=151, kout=71\n \\item UCSF Medical Center, 500 Parnassus Avenue, San Francisco, CA, kin=107, kout=81\n \\item St Joseph's Hosp \\& Med Center, 350 West Thomas Road, Phoenix, AZ, kin=58, kout=43\n \\item Clarian Health Partners, I-65 at 21st Street, Indianapolis, IN, kin=136, kout=81\n \\item Univ of Michigan Hospitals, 1500 East Medical Center Drive, Ann Arbor, MI, kin=113, kout=53\n \\item UPMC Presbyterian, 200 Lothrop Street, Pittsburgh, PA, kin=146, kout=89\n \\item Vanderbilt Univ Medical Center, 1211 22nd Avenue South, Nashville, TN, kin=131, kout=77\n \\item Univ of Washington Medical Ctr, 1959 NE Pacific St, Box 356151, Seattle, WA, kin=74, kout=31\n \\item University of Kansas Hospital, 3901 Rainbow Boulevard, Kansas City, MO, kin=95, kout=44\n \\item Jackson Memorial Hospital, 1611 NW 12th Avenue, Miami, FL, kin=65, kout=51\n \\item OU Medical Center, 1200 Everett Drive, Oklahoma City, OK, kin=69, kout=43\n \\item University of Alabama Hospital, 619 South 19th Street, Birmingham, AL, kin=147, kout=79\n \\item University of Virginia Med Ctr, Jefferson Park Avenue, Charlottesville, VA, kin=78, 
kout=48\n\\end{enumerate}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section*{Methods}\n\nSupercell calculations for the SSCHA~\\cite{PhysRevLett.111.177002,PhysRevB.89.064302}\nand linear response calculations~\\cite{RevModPhys.73.515}\nwere performed within DFT and the generalized gradient \napproximation functional~\\cite{PhysRevLett.77.3865} as implemented in the\n{\\sc Quantum ESPRESSO}~\\cite{0953-8984-21-39-395502} code.\nWe used ultrasoft pseudopotentials~\\cite{PhysRevB.41.7892},\na plane-wave cutoff energy of 60 Ry for the kinetic energy and 600 Ry\nfor the charge density. The charge density and dynamical matrices\nwere calculated using a 32$^3$ Monkhorst-Pack shifted\nelectron-momentum grid for the unit cell calculations. This mesh\nwas adjusted accordingly in the supercell calculations. \nThe electron-phonon coupling was calculated by using electron and phonon momentum grids\ncomposed of up to $42\\times42\\times42$ randomly displaced points in the Brillouin zone.\nThe isotropic Migdal-Eliashberg equations were solved using 512 Matsubara frequencies\nand $\\mu^*=0.16$.\n\nThe SSCHA calculations were performed using a\n3$\\times$3$\\times$3 supercell for both H$_3$S and D$_3$S in the $Im\\bar3m$ phase, yielding\ndynamical matrices on a commensurate 3$\\times$3$\\times$3 $q$-point grid. \nThe difference between the harmonic and anharmonic\ndynamical matrices in the 3$\\times$3$\\times$3 phonon momentum grid was\ninterpolated to a 6$\\times$6$\\times$6 grid. Adding the harmonic matrices to the\nresult, the anharmonic dynamical matrices were obtained on a $6\\times6\\times6$\ngrid. These dynamical matrices were used for the anharmonic\nelectron-phonon coupling calculation.\nThe SSCHA calculations\nfor $Q=1$ were performed with a 2$\\times$2$\\times$2\nsupercell. For consistency, the vibrational energies presented in Fig. \\ref{energ}\nwere also calculated using a 2$\\times$2$\\times$2 supercell. The\nelectron-phonon calculations for $Q=1$ were, however, performed with \nthe SSCHA dynamical matrices interpolated to a 6$\\times$6$\\times$6 grid\nfrom the 2$\\times$2$\\times$2 mesh. \n\nThe $E_{\\mathrm{vib}}(Q)$ curves in Fig. \\ref{energ} \nwere obtained as follows. $E_{\\mathrm{vib}}$ was calculated for $Q=0$ and $Q=1$ with the\nSSCHA. With the SSCHA calculation at $Q=1$, we extracted \n$\\frac{{\\rm d} E_{\\mathrm{vib}}}{{\\rm d} Q} (Q=1)$ with no further\ncomputational effort. Considering that the derivative of the curve at $Q=0$\nvanishes by symmetry, we can get straightforwardly a \npotential of the form $E_{\\mathrm{vib}}(Q) = A + BQ^2 + CQ^4$.\nThe $E_{\\mathrm{vib}} \\mathrm{fit}$ curves\npresented in Fig. \\ref{energ} were obtained in this way. \nThe extra point obtained \nat $Q=0.5$ for H$_3$S at $V = 97.85 a_0^3$ (see Fig. \\ref{energ}(a))\nconfirmed the validity of the fitting procedure.\nThe $E_{{\\rm BO}} (Q)$ BOES energies were calculated for many\n$Q$ points yielding an accurate fitting curve.\nFig. \\ref{e_c} was obtained using a polynomial interpolation\nof the BOES in the volume range shown and adding the \n$E_{\\mathrm{vib}}^{R3m} - E_{\\mathrm{vib}}^{Im\\bar3m} (x)$\ncurves that are practically independent of volume. 
\n\n\n\n\\section*{Acknowledgements}\n\nWe acknowledge financial support from the Spanish Ministry of Economy\nand Competitiveness (FIS2013-48286-C2-2-P), French Agence Nationale\nde la Recherche (Grant No.~ANR-13-IS10-0003-01), EPSRC (UK) (Grant\nNo.~EP\/J017639\/1), Cambridge Commonwealth Trust, National Natural\nScience Foundation of China (Grants No.~11204111, 11404148, and\n11274136), and the 2012 Changjiang Scholars Program of China. Computer\nfacilities were provided by the PRACE project AESFT and the Donostia\nInternational Physics Center (DIPC).\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\nIt was suggested by Chuang and Yamamoto \\cite{chuang} that the cross-Kerr effect (or cross-phase modulatation) between two optical fields could be used for so-called ``dual-rail'' quantum logic with photonic qubits, provided sufficiently large nonlinearities could be generated. The goal is to be able to perform a transformation like the following (where $0$ and $1$ refer to the number of photons in the two interacting modes):\n\\begin{equation}\n\\frac{1}{\\sqrt 2}\\bigl(\\ket 0 + \\ket 1\\bigr)\\ket 1 \\to \\frac{1}{\\sqrt 2}\\bigl(\\ket 0 + e^{-i\\phi}\\ket 1\\bigr)\\ket 1\n\\label{wanted}\n\\end{equation}\nFor cross-phase modulation by a $\\chi^{(3)}$ medium, the phase shift $\\phi = \\kappa n_1 n_2$, where $n_1$ and $n_2$ are the number of photons in the respective modes, and $\\kappa$ is a constant. If $\\kappa$ can be made as large as $\\pi$, then Eq.~(\\ref{wanted}) shows that we have a conditional logical operation which, in the basis of states $(\\ket 0 \\pm \\ket 1)\/\\sqrt 2$, is equivalent to the CNOT gate.\n\nAs already pointed out in \\cite{nielsen}, however, the very large Kerr nonlinearites needed to achieve this sort of phase modulation are, in ordinary materials, always associated with large absorption losses. Over the years, a number of methods have been suggested to overcome this difficulty, mostly revolving around the use of electromagnetically-induced transparency, or EIT, to eliminate the linear absorption and, at the same time, increase the nonlinear dispersion of the medium \\cite{munro,ottaviani,sanders}. Nevertheless, in 2006, J. H. Shapiro published a study \\cite{shapiro} showing that the causal, non-instantaneous behavior of any $\\chi^{(3)}$ nonlinearity would always prevent the transformation (\\ref{wanted}) from happening with high fidelity for large $\\phi$. Central to Shapiro's study was a multimode treatment of the two propagating one-photon wavepackets, something that most previous studies had not considered. \n\nShapiro's original argument was very general, and in a later publication \\cite{shapiro2} it was stated that it was not immediately apparent whether it applied to the EIT schemes. One such scheme that had, in fact, received a multimode quantum-field treatment was that of Lukin and Imamo\\u glu \\cite{lukin} (see also the discussion in \\cite{rmp}), based on the so-called ``giant Kerr effect,'' originally proposed by Schmidt and Imamo\\u glu \\cite{ima} (which is, in one form or another, at the heart of all the EIT proposals). The conclusions of \\cite{lukin,rmp} are, however, somewhat ambiguous, because they suggest that very large phase shifts are possible, but at the cost of large changes to the modal structure of the pulses.\n\nThe goal of the present paper is to show that with an appropriate, idealized, but local Hamiltonian, that reproduces the Heisenberg-picture evolution equations of refs. \\cite{lukin,rmp}, large phase shifts in the Schr\\\" odinger picture (as in Eq.~(\\ref{wanted})) are, in fact, impossible, if the traveling fields are described by localized, single-photon pulses. This is in agreement with Shapiro's prediction for the ``fast nonlinearity'' case. In fact, the analysis presented here clearly shows that a large phase shift is only possible in the limit in which the pulse's spectral width approaches the bandwidth of the nonlinear medium. 
In that case, however, it is also shown here, for the specific atomic configuration considered \\cite{ima} in the derivation of the giant Kerr effect, that EIT becomes ineffective and the losses due to absorption in the medium approach unity. \n\nThe paper is organized as follows. The instantaneous response limit, with an idealized Hamiltonian, is considered in Section 2. The actual time response of an EIT medium is considered in Section 3, where a relationship between absorption losses and the pulse's spectral width is derived. Section 4 contains some further discussion and conclusions.\n\n\\section{Instantaneous response limit}\n\\subsection{Hamiltonian, locality, and Heisenberg picture results}\nSuppose that, somehow, one has managed to produce a medium that leads, to a sufficiently good approximation, to a field evolution described by the following Hamiltonian\n\\begin{align}\nH = &\\sum_{n=-n_{max}}^{n_{max}} \\hbar \\frac{2\\pi n c}{L} a^\\dagger_n a_n + \\sum_{m=-n_{max}}^{n_{max}} \\hbar \\frac{2\\pi m c}{L} b^\\dagger_m b_m \\cr\n&+ \\hbar\\epsilon \\int_{z_0}^{z_0+l}E_a^{(-)} E_a^{(+)} E_b^{(-)} E_b^{(+)} \\, dz\n\\label{ham}\n\\end{align}\nHere $E_a$ and $E_b$ are Schr\\\" odinger-picture field operators given by \n\\begin{align}\nE_a^{(+)} &= \\left(\\frac{\\hbar\\omega_0}{\\epsilon_0 A L}\\right)^{1\/2}\\sum_{n=-n_{max}}^{n_{max}} a_n e^{2 \\pi i n z\/L} \\cr\nE_b^{(+)} &= \\left(\\frac{\\hbar\\omega_0}{\\epsilon_0 A L}\\right)^{1\/2}\\sum_{m=-n_{max}}^{n_{max}} b_m e^{2 \\pi i m z\/L}\n\\end{align}\nwith $E_a^{(-)}, E_b^{(-)}$ their Hermitian conjugates. The quantization volume has cross-sectional area $A$ and length $L$. The total number of modes to be added, for each field, is $M=2 n_{max} + 1$, and it is determined by $L$ and the bandwidth of the optical medium, ${\\Delta\\omega}_\\text{medium} = 2 \\pi c M\/L$. The number of initially-occupied field modes may be much less than $M$, as will be discussed below. The following Section discusses how the ``giant Kerr effect'' \\cite{ima} may lead to the kind of interaction Hamiltonian that appears in (\\ref{ham}) under appropriate conditions.\n\nAn important property of the Hamiltonian (\\ref{ham}) is that it is \\emph{local}. Suppose that the field state is described by a pair of wavepackets, which for simplicity we will take to be identical:\n\\begin{equation}\n\\ket{\\psi_0} = \\sum_{n=-n_{max}}^{n_{max}} c_n \\ket{1_n}_a \\otimes \\sum_{n=-n_{max}}^{n_{max}} c_n \\ket{1_n}_b\n\\end{equation}\nHere $\\ket{1_n}_a$ is a state with one photon in the $n$-th ``a'' mode, and zero photons in all the other modes, and similarly $\\ket{1_n}_b$. The field intensity in such a state is given by\n\\begin{equation}\nI_a(z) = \\av{E_a^{(-)} E_a^{(+)}} = \\frac{\\hbar\\omega_0}{\\epsilon_0 A L} \\left|\\sum_n c_n e^{2 \\pi i n z\/L}\\right|^2\n\\label{iaz}\n\\end{equation}\nand the expectation value of the energy is\n\\begin{equation}\n\\av{H} = \\frac{4\\pi\\hbar c}{L} \\sum n |c_n|^2 + \\hbar\\epsilon \\int_{z_0}^{z_0+l} I_a(z) I_b(z) \\, dz\n\\label{avh}\n\\end{equation}\nFor pulses with a symmetric spectrum, such as those which will be considered here, the first term in (\\ref{avh}) vanishes. The second term, on the other hand, which represents the interaction energy between the field and the material medium, vanishes if (and only if) the pulses are not inside the medium. 
This is a physically reasonable requirement for any Hamiltonian that one might want to use to describe the interaction of a finite pulse with a localized medium.\n\nIf the Heisenberg equation of motion is used with the Hamiltonian (\\ref{ham}), one gets the following propagation equations for the field operators $E_a^{(+)}(t,z)$ and $E_b^{(-)}(t,z)$, in the Heisenberg picture:\n\\begin{align}\n\\left(\\frac{\\partial}{\\partial t} + c \\frac{\\partial}{\\partial z}\\right)E_a^{(+)} &= i \\kappa E_b^{(-)} E_b^{(+)} E_a^{(+)} \\\\\n\\left(\\frac{\\partial}{\\partial t} + c \\frac{\\partial}{\\partial z}\\right)E_b^{(+)} &= i \\kappa E_a^{(-)} E_a^{(+)} E_b^{(+)}\n\\label{heis}\n\\end{align}\nwhere $\\kappa = \\epsilon(\\hbar\\omega_0\/\\epsilon_0 A)$, and the right-hand side is as shown for $z_0\\le z\\le z_0 + l$, and $0$ elsewhere. This follows from the assumption that the bandwidth of the medium is large enough to justify the approximation\n\\begin{equation}\n\\sum_{n=-n_{max}}^{n_{max}} e^{2\\pi in(z-z^\\prime)\/L} \\simeq = L\\delta(z-z^\\prime)\n\\end{equation}\nStrictly speaking, this requires also that the bandwidth of the pulses be smaller than the bandwidth of the interaction, an assumption to which we will return shortly.\n\nEqs.~(\\ref{heis}) are exactly the same as obtained by Lukin and Imamo\\u glu in \\cite{lukin}, minus all the extra complications arising from the different group velocities in the medium and the wavepacket compression. These complications will simply be ignored here in order to concentrate on the basic difficulties caused by locality and the multimode nature of the field. As explained in \\cite{lukin}, the Eqs.~(\\ref{heis}) can be self-consistently solved by integrating along the characteristics to get, at any point $z$ within the medium\n\\begin{align}\nE^{(+)}_{a,b}(t,z) = &E^{(+)}_{a,b}(t^\\prime,z_0) \\cr\n&\\times\\exp\\left[i\\frac{\\kappa}{c} (z-z_0) E^{(-)}_{b,a}(t^\\prime,z_0) E^{(+)}_{b,a}(t^\\prime,z_0) \\right]\\cr\n\\label{heissol}\n\\end{align}\nwith the local time $t^\\prime \\equiv t -(z-z_0)\/c$. Eq.~(\\ref{heissol}) can be verified by direct substitution in (\\ref{heis}), noting that it implies the equality $E^{(-)}_{a,b}(t,z)E^{(+)}_{a,b}(t,z) = E^{(-)}_{a,b}(t^\\prime,z_0)E^{(+)}_{a,b}(t^\\prime,z_0)$. For $z>z_0+l$, one can use free propagation backwards, and the fact that the field only undergoes multiplication by a unitary operator, to rewrite the result in terms of the $t=0$ operators:\n\\begin{align}\nE^{(+)}_{a,b}(t,z) = &E^{(+)}_{a,b}(0,z-ct) \\cr\n&\\times\\exp\\left[i\\frac{\\kappa l}{c} E^{(-)}_{b,a}(0,z-ct) E^{(+)}_{b,a}(0,z-ct) \\right]\\cr\n\\label{heissol2}\n\\end{align}\n\nAt first sight, Eq.~(\\ref{heissol2}) might seem to be exactly what we want, since it suggests that each of the two fields acquires a phase that is proportional to the intensity of the other one. The fact that the actual phase shift apparently depends on the local intensity, at different points in the wavepacket, may be slightly worrisome, but Eq.~(\\ref{heissol2}) at least suggests that nothing should prevent one from making the phase at, say, the center of the wavepacket, as large as one might want to. 
The situation looks quite different, however, in the Schr\\\" odinger picture, to which we turn next.\n\n\\subsection{Time evolution in the Schr\\\" odinger picture}\n\nIn the Schr\\\" odinger picture we write the state of the system as the double sum\n\\begin{equation}\n\\ket{\\psi(t)} = \\sum_n \\sum_m c_{nm}(t) e^{-2 \\pi i(n+m)c t\/L} \\ket{1_n}_a\\ket{1_m}_b\n\\end{equation}\nwhere the coefficient $c_{nm}(0)$ (two indices) equals the product $c_n(0)c_m(0)$ (single index) at $t=0$. The equation of motion for $c_{nm}$ is\n\\begin{align}\n\\dot c_{nm} = &-i\\epsilon \\left(\\frac{\\hbar\\omega_0}{\\epsilon_0 A L} \\right)^2\\sum_{n^\\prime m^\\prime}c_{n^\\prime m^\\prime} \\cr\n&\\times\\int_{z_0}^{z_0+l} e^{-2\\pi i(n^\\prime + m^\\prime - n - m)(ct-z)\/L}\\, dz\n\\label{dotcnm}\n\\end{align}\nThis can be integrated analytically under some approximations that are equivalent to the ones used in the previous section. To begin with, introduce a new set of indices, $\\mu$ and $\\nu$, that stand for the sum and difference, respectively, of $n$ and $m$. Then $c_{\\nu\\mu} = c_{nm}$ with $n=(\\mu+\\nu)\/2$ and $m=(\\mu-\\nu)\/2$, and we have\n\\begin{equation}\n\\dot c_{\\nu\\mu} = -i\\eta \\sum_{\\mu^\\prime}\\left(\\sum_{\\nu^\\prime }c_{\\nu^\\prime \\mu^\\prime} \\right)\\int_{z_0}^{z_0+l} e^{-2\\pi i(\\mu^\\prime - \\mu)(ct-z)\/L}\\, dz\n\\label{dotcnumu}\n\\end{equation}\nwhere, for convenience, the parameter $\\eta = \\epsilon(\\hbar\\omega_0\/\\epsilon_0 A L)^2$ has been defined. One can next introduce a new set of coefficients, $v_\\mu(t)$, defined by\n\\begin{equation}\nv_\\mu = \\sum_{\\nu=|\\mu|-2 n_{max}}^{2 n_{max}-|\\mu|} c_{\\nu\\mu}\n\\label{defv}\n\\end{equation}\nNote that in (\\ref{defv}), as in all other sums over $\\nu$ for constant $\\mu$, the index $\\nu$ increases in steps of 2, so there are $2n_{max} -|\\mu|+1$ terms in the sum (and $\\mu$ ranges from $-2n_{max}$ to $2 n_{max}$). The $v_\\mu$ obey the equation of motion (easily derived from (\\ref{dotcnumu}))\n\\begin{align}\n\\dot v_\\mu = &-i\\eta \\left(2n_{max}- |\\mu|+1 \\right) \\sum_{\\mu^\\prime=-2 n_{max}}^{2 n_{max}} v_{\\mu^\\prime} \\cr\n&\\times\\int_{z_0}^{z_0+l} e^{-2\\pi i(\\mu^\\prime - \\mu)(ct-z)\/L}\\, dz\n\\label{dotvmu}\n\\end{align}\nEquation (\\ref{dotvmu}) can be integrated by introducing an envelope function $f(t,z)$, defined as\n\\begin{equation}\nf(t,z) = \\sum_\\mu v_\\mu(t) e^{-2\\pi i\\mu \\omega(ct-z)\/L}\n\\label{deff}\n\\end{equation}\nwhich satisfies\n\\begin{widetext}\n\\begin{align}\n\\left(\\frac{\\partial}{\\partial t} + c \\frac{\\partial}{\\partial z}\\right)f &= \\sum_\\mu \\dot v_\\mu e^{-2\\pi i\\mu \\omega(ct-z)\/L} \\cr\n&= - i\\eta \\sum_{\\mu^\\prime} v_{\\mu^\\prime} \\int_{z_0}^{z_0+l} e^{-2\\pi i\\mu^\\prime \\omega(ct-z^\\prime)\/L} \\left[\\sum_\\mu (2n_{max}-|\\mu|+1)e^{2 \\pi i\\mu (z-z^\\prime)\/L}\\right] dz^\\prime\n\\label{partialsf}\n\\end{align}\n\\end{widetext}\nIt is easy to see that, for large enough $n_{max}$, the expression in square brackets in (\\ref{partialsf}) converges to $(2 n_{max} + 1)L\\delta(z-z^\\prime)$, which means that we have\n\\begin{equation}\n\\left(\\frac{\\partial}{\\partial t} + c \\frac{\\partial}{\\partial z}\\right)f = -i\\eta M L f(t,z), \\qquad z_0 < z < z_0 + l\n\\label{partialsf2}\n\\end{equation}\nwhere, again, $M=2n_{max} + 1$ is the total number of modes, and the right-hand side of (\\ref{partialsf2}) vanishes outside the medium. 
Integrating along the characteristics, as in the previous section, one finds, for $z$ inside the medium,\n\\begin{align}\nf(t,z) &= e^{-i\\eta ML (z-z_0)\/c}f(0,z-ct) \\cr\n&= e^{-i\\eta M L (z-z_0)\/c} \\sum_\\mu v_\\mu(0) e^{-2\\pi i\\mu (c t-z)\/L}\n\\end{align}\nwhich can now be substituted into (\\ref{dotcnumu}) (via the definitions (\\ref{defv}) and (\\ref{deff})), to yield\n\\begin{align}\n\\dot c_{\\nu\\mu} = &-i\\eta \\sum_{\\mu^\\prime} v_{\\mu^\\prime}(0) \\cr\n&\\times\\int_{z_0}^{z_0+l} e^{-2 \\pi i(\\mu^\\prime-\\mu) (ct-z)\/L} e^{-i\\eta ML (z-z_0)\/c} \\, dz \\cr\n\\label{dotcnumu2}\n\\end{align}\nThe expression on the right-hand side of (\\ref{dotcnumu2}) can now be directly integrated. For the purpose of comparing the state of the field after the interaction to the state before the interaction, it is convenient to concentrate on the value of the coefficients $c_{\\nu\\mu}$ at the time $T\\equiv L\/c$ (i.e., the quantization time), at which point, in the absence of interaction, the pulse should return to its initial state, since the traveling-wave formalism we are using is equivalent to periodic boundary conditions. In that case, the integration of (\\ref{dotcnumu2}) over time from $t=0$ to $t=T$ selects only the $\\mu^\\prime = \\mu$ term in the sum, and we have\n\\begin{equation}\nc_{\\mu\\nu}(T) = c_{\\mu\\nu}(0) + \\frac{1}{M}\\left(e^{-i\\eta MLl\/c}-1 \\right)\\, v_{\\mu}(0)\n\\label{cmunut}\n\\end{equation}\n\n\\subsection{Fidelities, and numerical results} \nEquation (\\ref{cmunut}) can be used to calculate the overlap between the initial state and the state at the time $T$, from which, in turn, a number of other useful results can be derived. Defining, for simplicity, \n\\begin{equation}\n\\Phi = \\frac{\\eta MLl}{c} = \\frac{\\kappa l}{c} M \\frac{\\hbar\\omega_0}{\\epsilon_0 AL}\n\\end{equation}\n(where the last expression uses $\\kappa$ as defined in the previous section, for comparison with the Heisenberg-picture result, Eq.~(\\ref{heissol2})), we have\n\\begin{equation}\n\\av{\\psi(0)|\\psi(T)} = 1 + \\frac{1}{M} \\left(e^{-i\\Phi}-1\\right) \\sum_\\mu |v_\\mu(0)|^2 \\equiv \\sqrt{{\\cal F}_0}\\,e^{-i\\phi}\n\\label{proj}\n\\end{equation}\nRecall that the original goal (Eq.~(\\ref{wanted})) was to leave the original two-photon state invariant except for a phase shift. The fidelity ${\\cal F}_0$ is a measure of the success of this operation. Note that if $\\sum_\\mu|v_\\mu(0)|^2\/M =1$, we have ${\\cal F}_0 = 1$ and $\\phi=\\Phi$. It is important, therefore, to calculate this quantity. Note that the expression (\\ref{iaz}) for the single-wavepacket intensity can be rewritten as\n\\begin{equation}\nI_a(z) = \\av{E_a^{(-)}E_a^{(+)}} = \\frac{\\hbar\\omega_0}{\\epsilon_0 A L} \\sum_{n,m} c_n^\\ast(0) c_m(0) e^{-2\\pi i(n-m)z\/L} \n\\end{equation}\nfrom which it follows that\n\\begin{equation}\nI_a^2(z) = \\left(\\frac{\\hbar\\omega_0}{\\epsilon_0 A L}\\right)^2 \\sum_{\\mu,\\mu^\\prime} v_\\mu^\\ast v_{\\mu^\\prime} e^{-2 \\pi i(\\mu-\\mu^\\prime)z\/L}\n\\end{equation}\nand therefore\n\\begin{equation}\n\\sum_\\mu |v_\\mu(0)|^2 = \\frac 1 L \\left(\\frac{\\epsilon_0 A L}{\\hbar\\omega_0}\\right)^2\\,\\int_0^L I_a^2(z)\\, dz = \\frac{L \\int_0^L I_a^2(z)\\, dz}{\\left[\\int_0^L I_a(z)\\, dz \\right]^2}\n\\label{dasum}\n\\end{equation}\nThis can be related to the pulse bandwidth as follows. First, note that the assumption of a localized pulse means that it is legitimate to extend all the integral arguments in (\\ref{dasum}) from minus infinity to infinity. 
Then, introduce the spatial Fourier transform $P(k)$ of the function $I_a(z)$, so that\n\\begin{equation}\nI_a(z) = \\frac{1}{\\sqrt{2\\pi}} \\int_{-\\infty}^\\infty P(k) e^{ikz} dk\n\\end{equation}\nThen we have\n\\begin{equation}\n\\sum_\\mu |v_\\mu(0)|^2 = \\frac{L}{2\\pi} \\frac{\\int_{-\\infty}^{\\infty} |P(k)|^2 dk}{|P(0)|^2}\n\\end{equation}\nNow suppose that the function $P(k)$ is peaked at $k=0$ (as it should be if we have correctly separated the slowly-varying part of the pulse form its carrier frequency), and that it is negligible outside of an interval of width $\\Delta k$. Then $|P(k)|^2\/|P(0)|^2 \\le 1$ for all $k$, and therefore\n\\begin{equation}\n\\sum_\\mu |v_\\mu(0)|^2 \\le \\frac{L}{2\\pi} \\Delta k\n\\label{dasum2}\n\\end{equation}\nwhere the equality holds only for a ``square'' $P(k)$ (constant in the interval $\\Delta k$, and zero outside of it); but this corresponds to a spatially non-localized pulse, whose intensity decays only as $1\/z$. We can then assume that (\\ref{dasum2}) is always a strict inequality. Returning to our original formulation in terms of $M$ discrete modes spaced, in frequency, by $2\\pi c\/L$, we conclude that\n\\begin{equation}\nr\\equiv \\frac 1 M \\sum_\\mu |v_\\mu(0)|^2 < \\frac{\\Delta\\omega_\\text{pulse}}{\\Delta\\omega_\\text{medium}}\n\\label{rdef}\n\\end{equation}\nwhich should always be less than 1 (note that $\\Delta\\omega_\\text{pulse}$ has been defined through the effective support of $P(k)$, and will typically be larger than the conventional ``standard deviation'' of $\\omega$ for the wavepackets considered). In terms of the quantity $r$, we can express the fidelity ${\\cal F}_0$ and the phase $\\phi$ as\n\\begin{equation}\n{\\cal F}_0 = 1 - 4 \\sin^2\\left(\\frac \\Phi 2\\right) \\, r(1-r)\n\\label{deff0}\n\\end{equation}\nand\n\\begin{equation}\n\\phi = \\tan^{-1}\\left[\\frac{r \\sin\\Phi}{1-r + r\\cos\\Phi}\\right]\n\\label{defphi}\n\\end{equation}\nNote that we can always make the distortion of the original wavepacket negligible by letting $r\\to 0$, but this is at the expense of an extremely small phase shift. The opposite case, $r\\to 1$, also leads to a high fidelity, this time with potentially large phase shifts, but it is forbidden by locality. By this it must be understood that it is simply not possible to make $r$ arbitrarily close to 1 with a localized pulse. For instance, suppose that the intensity $I_a$ is proportional to a Gaussian $e^{-(z-z_1)^2\/\\sigma^2}$. The right-hand side of (\\ref{dasum}) then evaluates to $1\/(\\sigma\\sqrt{2\\pi})$, and we have\n\\begin{equation}\nr = \\frac{L\/\\sigma}{M} \\frac{1}{\\sqrt{2\\pi}} < 0.4 \\qquad \\text{(Gaussian)}\n\\label{rgaussian}\n\\end{equation}\nsince, in order to describe the pulse adequately, the number of modes $M$ must be at least of the order of $L\/\\sigma$ (in other words, using fewer modes results in a non-localized pulse). 
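As an illustrative numerical check of the above (a minimal sketch with the normalization $\\sum_n |c_n|^2 = 1$, not the code used for the calculations reported below), the sum in Eq.~(\\ref{rdef}) can be evaluated directly for a discretized Gaussian spectrum; the widths $\\sigma/L = 0.059$ and $0.078$, with $M = 17$ modes, are chosen purely for illustration.
\\begin{verbatim}
import numpy as np

# Sketch: discrete evaluation of r = (1/M) sum_mu |v_mu(0)|^2 for a
# Gaussian single-photon spectrum c_n.
def r_parameter(sigma_over_L, n_max=8):
    n = np.arange(-n_max, n_max + 1)                      # M = 2*n_max + 1 modes
    c = np.exp(-0.5 * (2 * np.pi * n * sigma_over_L)**2)  # Gaussian envelope spectrum
    c /= np.linalg.norm(c)                                # one photon: sum |c_n|^2 = 1
    v = np.convolve(c, c)                                 # v_mu = sum_{n+m=mu} c_n c_m
    return np.sum(np.abs(v)**2) / n.size

for s in (0.059, 0.078):                                  # sigma/L, with M = 17
    print(s, r_parameter(s), 1 / (s * 17 * np.sqrt(2 * np.pi)))
\\end{verbatim}
Both values come out close to the continuum estimate of Eq.~(\\ref{rgaussian}) (about 0.4 and 0.3, respectively), and can then be inserted into Eqs.~(\\ref{deff0}) and (\\ref{defphi}) to obtain the corresponding fidelity and phase.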
Similarly, for a hyperbolic secant $I_a(z) \\sim \\text{sech}(z\/\\sigma)$, we find\n\\begin{equation}\nr = \\frac{L\/\\sigma}{M} \\frac{2}{\\pi^2} < 0.2 \\qquad \\text{(hyperbolic secant)}\n\\end{equation}\nSince order of magnitude arguments are always uncertain regarding such things as factors of two, however, and the temptation to look for an ``optimal pulse shape'' is strong, it is important to keep in mind the absolute bound (\\ref{rdef}), and also the considerations to follow in the next section, which will show, for the specific example of a ``giant Kerr effect'' medium, how things deteriorate when one tries to fit the pulse's spectrum too tightly in the medium's transparency window. \n\nParametric plots of ${\\cal F}_0$ versus $\\phi$ for various values of $r<0.5$ are shown in Figure 1. Note that, when $r=1\/2$, $\\phi = \\Phi\/2$ and ${\\cal F}_0 = \\cos^2(\\Phi\/2)$. \n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=3.2in]{newfig1.eps}\n\\end{center}\n\\caption[example]\n { \\label{fig:fig1}\nParametric plot of the fidelity ${\\cal F}_0$ versus the phase shift $\\phi$ for $r=0.4, 0.3, 0.2$, as $\\Phi$ is varied from $0$ to $2\\pi$. Dots: result of numerical calculations with a Gaussian pulse, with 17 modes and $\\sigma\/L = 0.059$ (corresponding to $r=0.4$, outermost dots), and $\\sigma\/L = 0.078$ (corresponding to $r=0.3$, innermost dots); the numerical calculations cover only the range $0\\le\\Phi\\le \\pi$.}\n\\end{figure}\nThe figure also shows the results of numerical calculations (based on the direct integration of Eq.~(\\ref{dotcnm})) for $M=17$ modes and two Gaussian pulses with $I_a \\sim e^{-(z-z_1)^2\/\\sigma^2}$, $z_1 = L\/4$, and $\\sigma = 0.059 L$ (corresponding to $r=0.4$) and $\\sigma=0.078 L$ (corresponding to $r=0.3$). The nonlinear medium was taken to start at $z_0=L\/2$ and had extension $l=L\/2$. The agreement with the analytical approximation is better for the broader pulse because the replacement of the term in square brackets in (\\ref{partialsf}) by a delta function is a better approximation in that case. The calculations show that the theory may underestimate somewhat the achievable phase $\\phi$, but large values of $\\phi$ still correspond to very small fidelities.\n\n\\subsection{Discussion}\n\nThe results in the previous subsection indicate that the parameter $r$ that determines the maximum achievable phase shift decreases, as the ratio of the pulse's spectral width to the medium's bandwidth. \n\nOne may wonder why, once enough modes have been included in the calculation to describe the wavepacket properly, the addition of more, empty, modes should have a degrading effect on the performance of the system. The answer lies in spontaneous emission. Considering the action of the Hamiltonian (\\ref{ham}) on an initial state with only one photon in each pulse, the two sets of annihilation operators, $E^{(+)}_a$ and $E^{(+)}_b$, will first produce a uniform vacuum state, and then the creation operators $E^{(-)}_a$ and $E^{(-)}_b$ may replace the $a$ and $b$ photons into any of the available temporal modes, regardless of where they may have been originally. 
There are some momentum and energy conservation constraints, enforced by integrals over position and time respectively, but as (for instance) Eq.~(\\ref{dotcnumu}) shows, an initially unoccupied pair of modes with an arbitrary $\\nu$ (difference between ``a'' and ``b'' temporal frequency) can be created out of any of the preexisting pairs of modes with the same $\\mu$ (sum of the ``a'' and ``b'' frequencies), without incurring any energy or momentum penalty. This is also apparent from Eq.~(\\ref{cmunut}).\n\nThis problem is particularly acute for single-photon wavepackets. For pulses containing an appreciable number of photons, $\\bar n$, the action of the annihilation operators results in modes that are still highly populated, and so stimulated emission will take place preferentially in those modes. In other words, one expects this problem to decrease as $1\/\\bar n$ (the ratio of spontaneous to stimulated emission), as the number of photons in the pulses increases.\n\nTo get the single-photon case to work, one might contemplate modifying the Hamiltonian, so that, for instance, instead of the negative-frequency field operators $E^{(-)}_a$ and $E^{(-)}_b$ one would have a weighted sum of creation operators, more closely matching the spectrum of the incoming pulse (of course, by Hermiticity, the positive-frequency field operators $E^{(+)}_a$ and $E^{(+)}_b$ would also have to be modified). This amounts to introducing some of the effects of dispersion in the medium, but it cannot be done arbitrarily, since there are physical rules (such as the Kramers-Kronig relations) that govern these things. In particular, strong dispersion is typically associated with absorption. As will be shown in the next section, even the extremely weak residual absorption present in the giant (EIT-enhanced) Kerr effect is enough to prevent one from taking the limit $r\\to 1$ in the results presented above.\n\nIt may be worth considering for a moment the extreme case of a ``toy'' Hamiltonian that would work with any pulse shape. This could be achieved by replacing the interaction part of (\\ref{ham}) by\n\\begin{equation}\nH_I = \\hbar l \\epsilon \\left(\\sum_n a^\\dagger_n a_n\\right)\\left(\\sum_m b^\\dagger_m b_m\\right)\n\\label{hnonlocal}\n\\end{equation}\nUnlike (\\ref{ham}), this Hamiltonian does not create any photons in initially unoccupied modes; yet, it is unphysical, because it is completely nonlocal: the pulse will be interacting with the medium wherever it might happen to be. This highlights the importance of doing multimode quantized-field calculations properly. Formally, either one of the Hamiltonians (\\ref{ham}) or (\\ref{hnonlocal}) could be considered as a possible generalization of the single-mode Kerr Hamiltonian $a^\\dagger a b^\\dagger b$, but they yield very different predictions in the multimode case, and only one of them is (approximately) physical. It is to the terms neglected in this approximation that we turn in the next Section.\n\n\n\\section{Giant Kerr effect and medium bandwidth}\n\nIn the previous section it was shown that in order to achieve a relatively large phase shift one should try to make the pulse's bandwidth as close to that of the medium as possible. However, when one does that, the medium's absorption is not negligible anymore.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=3in]{fig2.eps}\n\\end{center}\n\\caption[example]\n { \\label{fig:fig2}\nLevel scheme for the giant Kerr effect. 
$E_a$ and $E_b$ are weak (in this paper, single-photon) fields; $\\Omega_c$ is the EIT ``coupling field.''}\n\\end{figure}\n\nA way to approximately realize the Hamiltonian (\\ref{ham}) is by making use of the ``giant Kerr effect'' introduced in \\cite{ima}. The scheme (illustrated in Figure 2) makes use of electromagnetically-induced transparency, or EIT, to open a ``transparency window'' in the medium for the field $E_a$ (the field $E_b$ does not need it, since the level 2 will typically be unoccupied, and the detuning $\\Delta_b$ will be taken to be large), as well as to enhance the dispersion of the medium and with it the Kerr nonlinearity. As shown in \\cite{ima}, if the optical Bloch equations for the atom are solved in steady-state, under the assumption that absorption is small, one obtains for the atomic dipole amplitudes in the two transitions the result\n\\begin{align}\np_a &= i\\frac{4 d_{13}^2 d_{24}^2 }{\\hbar^4\\Omega_c^2(\\gamma_4\/2 +i \\Delta_b)}\\,|E_b|^2 E_a^\\ast \\label{e18b} \n\\\\\np_b &= i\\frac{4 d_{13}^2 d_{24}^2 }{\\hbar^4\\Omega_c^2(\\gamma_4\/2 +i \\Delta_b)}\\,|E_a|^2 E_b^\\ast \\label{e18a} \\end{align}\nwhere $d_{13}$ and $d_{24}$ are the dipole matrix elements for the two transitions. Multiplying each dipole amplitude by the corresponding field, and adding the contributions of all the atoms by integrating over the spatial extent of the medium, then yields an interaction energy of the form (\\ref{ham}), with \n\\begin{equation}\n\\epsilon = \\frac{4 d_{13}^2 d_{24}^2}{\\hbar^4\\Delta_b\\Omega_c^2} \\rho A\n\\end{equation}\nwhere $\\rho$ is the volume density of atoms in the medium and $A$ the cross-sectional area of the beam, and the assumption $\\Delta_b \\gg \\gamma_4$ has been made. As discussed in, e.g., \\cite{rmp}, this steady-state approximation, appropriate for a continuous-wave field, neglects a number of important dispersive effects that result in a slowing down and broadening of the $E_a$ pulse. These complications were discussed by Harris and Hau in \\cite{harrishau}, and possible ways around them were suggested by Lukin and Imamo\\u glu in \\cite{lukin}. Here the propagation effects will be ignored, in order to concentrate, instead, on the consequences of the \\emph{temporal} variation of the pulse at the location of each atom.\n\nFor an optically dense medium, the transparency window is a Gaussian of width\n\\begin{equation}\n\\Delta\\omega_\\text{trans} = \\frac{\\Omega_c^2}{\\sqrt{\\Gamma_{31}\\gamma_{31}}}\\, \\frac{1}{\\sqrt{\\rho\\sigma_a l}}\n\\label{deltaomegatrans}\n\\end{equation}\n(see, e.g., \\cite{lukinscully}) where $\\sigma_a = 3\\lambda^2\/2\\pi$ is the on-resonance absorption cross-section of the atom in the $1\\to 3$ transition; $\\Gamma_{31}$ is the spontaneous emission decay rate from level 3 to level 1, and $\\gamma_{31} \\ge \\Gamma_{31}$ is the total decay rate of the $(1,3)$ coherences, including dephasing and decay to other levels (such as 2). The residual absorption inside this window has been discussed using a purely semiclassical treatment in \\cite{myoptcomm}; here, for single-photon pulses, it will be estimated (and with it, implicitly, the width of the window itself) as follows. 
Considering only a one-photon wavepacket in field $a$, and ignoring field $b$ altogether for simplicity, an initial state $\\sum_n c_n(0)\\ket{1_n}_a\\ket 1$ (where the second ket refers to the atomic state) can evolve into a superposition\n\\begin{equation}\n\\ket{\\psi(t)} = \\sum_n c_n(t) e^{-in\\omega t}\\ket{1_n}_a\\ket 1 + C_2(t)\\ket{0}_a\\ket 2 + C_3(t)\\ket{0}_a\\ket 3\n\\end{equation}\n(here and in what follows, $\\omega \\equiv 2\\pi c\/L$), under the non-Hermitian Hamiltonian\n\\begin{align}\nH_{appr} = &\\sum_n\\hbar n\\omega a^\\dagger a - d_{13} \\left(E^{(+)}_a \\ket 3\\bra 1 + E^{(-)}_a \\ket 1\\bra 3 \\right) \\cr\n&+ \\frac{\\hbar\\Omega_c}{2} \\bigl(\\ket 2\\bra 3 + \\ket 3\\bra 2\\bigr) -i\\hbar \\frac{\\gamma_{31}}{2} \\ket 3\\bra 3 \n\\end{align}\nFor simplicity (in order to use the quasi-pure state approach) we shall neglect the dephasing contribution to $\\gamma_{31}$ and the feedback, through spontaneous emission, from state $\\ket 3$ to state $\\ket 1$. We still allow for $\\Gamma_{31} \\ne \\gamma_{31}$, and note the relation between the atomic dipole moment matrix element $d_{13}$ and $\\Gamma_{31}$:\n\\begin{equation}\n\\Gamma_{31} = \\frac{\\omega_0^3 d_{13}^2}{3 \\pi \\epsilon_0 \\hbar c^3}\n\\label{G31}\n\\end{equation}\nThe equations of motion for the coefficients $c_n, C_3$ and $C_2$ are\n\\begin{subequations}\n\\begin{align}\n\\dot c_n &= i g_{13} C_3 e^{in\\omega t} \\label{dotcn}\\\\\n\\dot C_3 &= -\\frac{\\gamma_{31}}{2}C_3 + i g_{13}\\sum_n c_n e^{-in\\omega t} -i \\frac{\\Omega_c}{2} C_2 \\label{dotc3}\\\\\n\\dot C_2 &= -i \\frac{\\Omega_c}{2} C_3 \\label{dotc2}\n\\end{align}\n\\label{ceqs}\n\\end{subequations}\nwhere $g_{13}=(d_{13}\/\\hbar)(\\hbar\\omega_0\/\\epsilon_0 AL)^{1\/2}$. Assuming that all the coefficients vary sufficiently slowly, the equation (\\ref{dotc3}) for $C_3$ can be adiabatically integrated with the result\n\\begin{equation}\nC_3(t) \\simeq i g_{13} \\sum_n \\frac{c_n(t)e^{-in\\omega t}}{\\gamma_{31}\/2-in\\omega} -i\\frac{\\Omega_c}{\\gamma_{31}} C_2(t)\n\\label{c3ta}\n\\end{equation}\nWhen this is substituted into the equation (\\ref{dotc2}) for $C_2$, the decay rate $\\Omega_c^2\/2\\gamma_{31}$ appears multiplying $C_2$ itself. 
Since this rate is, presumably, much greater than the transparency bandwidth (\\ref{deltaomegatrans}), it is consistent to assume that all the relevant modes are slower than it, and to perform a further adiabatic integration, with the result\n\\begin{equation}\nC_2(t)\\simeq \\frac{\\Omega_c g_{13}}{2}\\sum_n\\frac{c_n(t)e^{-in\\omega t}}{(\\gamma_{31}\/2-in\\omega)(\\Omega_c^2\/2\\gamma_{31}-in\\omega)}\n\\end{equation}\nFinally, this can be substituted again in (\\ref{c3ta}), and the lowest-order nonvanishing contribution in $n\\omega$ kept, to yield the occupation probability amplitude for level $3$ in the presence of the pulse:\n\\begin{equation}\n|C_3|^2 \\simeq \\frac{16 g_{13}^2}{\\Omega_c^4} \\left|\\sum_n n\\omega c_n e^{-in\\omega t}\\right|^2\n\\end{equation}\n(note that this is consistent with the semiclassical treatment of \\cite{myoptcomm}, which yielded an occupation probability of level 3 proportional to the square of the time-derivative of the field amplitude envelope).\n\nWe assume that irreversible processes, represented by the rate $\\gamma_{31}$, take the system out of the state 3 and destroy the coherence, and we can then estimate a single-atom ``loss'' probability by\n\\begin{align}\n\\int_0^{T} \\gamma_{31} |C_3|^2 dt &\\simeq \\frac{16\\gamma_{31} g_{13}^2}{\\Omega_c^4} \\,\\frac L c \\sum_n (n\\omega)^2|c_n(0)|^2 \\cr\n&= \\frac{8\\gamma_{31}\\Gamma_{31}}{\\Omega_c^4}\\,\\frac{\\sigma_a}{A}\\,(\\delta\\omega_\\text{pulse})^2\n\\end{align}\nwhere $T=L\/c$ is the ``quantization time,'' Eq.~(\\ref{G31}) has been used, and $\\delta\\omega_\\text{pulse}$ (the standard deviation of $n\\omega$ for the pulse) has been defined in a natural way. Multiplying this by the total number of atoms, $\\rho A l$, with which the pulse interacts, we obtain the total loss probability,\n\\begin{equation}\nP_\\text{loss} = 8 \\left(\\frac{\\delta\\omega_\\text{pulse}}{\\Delta\\omega_\\text{trans}}\\right)^2 \\gtrsim r^2,\n\\label{ploss}\n\\end{equation}\nassuming that the relevant ``medium bandwidth'' to be used in the calculations in the previous section (in particular, in Eq.~(\\ref{rdef})) is of the order of $\\Delta\\omega_\\text{trans}$, and also that the ``effective frequency support'' of the pulse, $\\Delta\\omega_\\text{pulse}$, is of the order of a few standard deviations. \n\nEq.~(\\ref{ploss}) shows that, in order to prevent the loss of coherence through spontaneous emission out of the level 3, the parameter $r$ needs to be kept very small, in which case, as shown in the previous section, the phase shift in the Schr\\\" odinger picture is necessarily very small as well. It may be tempting to try to look for an ``optimal'' pulse shape that, for instance, maximizes ${\\cal F}_0$ and $\\phi$ (Eqs.~(\\ref{deff0}), (\\ref{defphi})) while minimizing Eq.~(\\ref{ploss}), but that would be missing the point. The basic meaning of Eq.~(\\ref{ploss}) is actually that the unitary evolution under the Hamiltonian (\\ref{ham}), assumed in the previous Section, simply does not hold unless the pulse's frequency spectrum is well within the medium's (EIT) transparency bandwidth, in which case $r$, and the maximum phase shift $\\phi$, are necessarily small.\n\nAs an example, suppose one has a Gaussian pulse of the form $I_a \\sim e^{-(z-z_1)^2\/\\sigma^2}$, in which case $(\\delta\\omega_\\text{pulse})^2 = c^2\/2\\sigma^2$, and $r$ is given by Eq.~(\\ref{rgaussian}). 
Then Eq.~(\\ref{ploss}) becomes $P_\\text{loss} = 2 r^2\/\\pi$, and to have $P_\\text{loss}$ smaller than, say, $0.1$, we require $r \\le 0.4$. If we also want $1-{\\cal F}_0 \\simeq 0.1$, Eq.~(\\ref{deff0}) shows that $\\Phi$ cannot exceed $0.66$, and then, by Eq.~(\\ref{defphi}), we have $\\phi \\le 0.26$. However, this is such a small phase shift that the overlap of the initial state in Eq.~(\\ref{wanted}) with the target state is already $\\cos^2(\\phi\/2) = 0.98$. This means that one has a bigger ``success probability'' if one simply \\emph{does nothing at all} to the initial state.\n\n\\section{Conclusions}\n\nThe results presented here are fully in agreement with the analysis of Shapiro and co-workers \\cite{shapiro,shapiro2}. In particular, in the ``fast nonlinearity'' regime, the achievable phase shift is very small for as long as the instantaneous response approximation is justified. This corresponds to being allowed to neglect the higher-order (in $n\\omega$) terms in the adiabatic expansion in Section 3, which are responsible for the breakdown of unitary evolution as the pulse's bandwidth approaches the EIT transparency bandwidth. Note that in the formalism used in Section 3 this loss of unitarity is ultimately due to the disappearance of a photon from the system (the photon is absorbed, to bring the atom to level $\\ket 3$, and then spontaneously emitted into some other mode); if this was to be described using field operators, restricted to only the two sets of modes ``a'' and ``b'', one would have to throw in a Langevin noise term to preserve the commutation relations. This would connect to Shapiro's explanation of the reduced fidelity in this regime in terms of phase noise. (The connection is, essentially, the fluctuation-dissipation theorem.)\n\nA somewhat surprising result from the analysis in Section 2 is the realization that the seemingly arbitrarily large phase obtained in the Heisenberg picture does not necessarily translate into a large phase in the Schr\\\" odinger picture. This highlights an important feature of the multimode calculations. For a single mode it is certainly the case that any phase factor acquired by the Heisenberg-picture operators $a(t)$, $b(t)$, will also appear multiplying the single-photon state $\\ket{11}$ in the Schr\\\" odinger picture. In the multimode case one cannot count on such a correspondence. This is already apparent from the fact that, in the Heisenberg picture, the magnitude of the phase depends on the local pulse intensity (as in Eq.~(\\ref{heissol2})), whereas the Schr\\\" odinger picture treatment determines a single value for the phase shift, simply by projecting the final field state onto the initial one, as in Eq.~(\\ref{proj}).\n\nIt seems legitimate to say that spontaneous emission is ultimately responsible by the impossibility to get large phase shifts; in the ``slow regime,'' as just described, by removing a photon from the system, and in the ``fast regime'' by populating all the initially empty pairs of temporal modes (with the same $\\mu$) with equal probability. This is consistent with many previous results that indicate that in order to carry a nontrivial quantum logical operation (i.e., one that can change a state into an orthogonal one) with an error probability $P_e$ one needs of the order of $1\/P_e$ photons, since $1\/\\bar n$ is precisely the ratio of ``spontaneous emission noise'' to ``signal,'' when one has $\\bar n$ control photons \\cite{jgb}. 
In other words, single-photon quantum optical gates are bound to have failure probabilities of the order of unity. This is plainly the case in schemes such as ``linear optics quantum computing'' \\cite{kok}, which, however, have the advantage, over the schemes considered here, of not modifying the shape of the pulses when they succeed.\n\nIn view of all the evidence gathered thus far, and the possible pitfalls of incomplete analyses, it seems reasonable to suggest that any future proposals of ``single-photon Kerr nonlinearities'' for quantum logic should at least include detailed studies involving: (1) clearly local, and physically realizable Hamiltonians; (2) localized wavepackets, described by quantized multimode fields, where at least enough modes are included in numerical calculations to cover the whole nonlinear medium's bandwidth; (3) conventional fidelities computed in the Schr\\\" odinger picture; and (4) a realistic estimate of any residual losses or decoherence mechanisms. As the results presented here indicate, however, there seems to be no reason to believe that such an analysis could violate the conclusions of Shapiro's analysis \\cite{shapiro}.\n\nThis research has been supported by the National Science Foundation.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
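As an appended numerical cross-check of the example worked out in Section 3 of the preceding paper (a sketch only; the thresholds of 0.1 for $P_\\text{loss}$ and for $1-{\\cal F}_0$ are the ones chosen in that example), the quoted chain of values can be reproduced as follows.
\\begin{verbatim}
import numpy as np

# Gaussian pulse: P_loss = 2 r^2 / pi <= 0.1  ->  r
r = np.sqrt(0.1 * np.pi / 2)                                 # ~ 0.40
# 1 - F0 = 4 sin^2(Phi/2) r (1 - r) = 0.1  ->  Phi   (Eq. (deff0))
Phi = 2 * np.arcsin(np.sqrt(0.1 / (4 * r * (1 - r))))        # ~ 0.66
# phase shift from Eq. (defphi)
phi = np.arctan2(r * np.sin(Phi), 1 - r + r * np.cos(Phi))   # ~ 0.26
print(r, Phi, phi, np.cos(phi / 2)**2)                       # overlap ~ 0.98
\\end{verbatim}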