\section{INTRODUCTION}

Ultracold Fermi gases \cite{giorgini_rmp2008}, with atom-atom interactions tunable through Feshbach resonances \cite{chin_rmp2010}, have been an ideal platform for the study of the crossover from weak-coupling Bardeen-Cooper-Schrieffer (BCS) pairing to a Bose-Einstein condensate (BEC) of bound pairs \cite{regal_prl2004,zwierlein_prl2004,bartenstein_prl2004}.
Recently, synthetic gauge fields \cite{lin_nat2009} and spin-orbit (SO) coupling \cite{lin_nat2011, ji_nphys2014, chunlei_pra2013, pengjun_prl2012, cheuk_prl2012} were realized in experiments,
opening up a completely new avenue of research in superfluid Fermi gases.
The interplay of Zeeman fields and SO coupling leads to many novel phenomena at both zero
\cite{gong_prl2011,han_pra2012,seo_pra2012,jiang_pra2011,zengqiang_prl2011,zhoupra2013}
and finite temperatures \cite{lindong_njp2013,hu_njp2013},
from mixed singlet-triplet pairing to topological phase transitions
\cite{gong_prl2011,xiaji_pra2012,wu_prl2013,zheng_pra2013,chunlei_natc2013,weizhang_natc2013}.

Much of the interesting physics of SO-coupled Fermi gases can be captured by mean-field theory. However, mean-field theory may fail under many circumstances.
For example, mean-field theory, which does not include the effect of noncondensed pairs,
fails to describe accurately the phase transition from a superfluid to a normal gas, particularly for strongly interacting systems.
Several beyond-mean-field theoretical methods have been proposed \cite{nozieres1985,melo_prl1993,ohashi_prl2002,machida_pra2006,xiaji_pra2005}.
Here we present a theoretical investigation within the framework of the T-matrix scheme to address the superfluid properties of a Rashba SO-coupled Fermi gas over the entire BCS-BEC crossover regime \cite{qijin_prl1998,maly1999,loktev_pr2001,perali_prb2007,qijin_pr2005,kinnunen,huihu_prl2010,qijin,kinast,stajic,bauer_prl2014,haoguo_arxiv2013,lianyi_pra2013,zhangjing}.
In the absence of the Zeeman field, it was shown that SO coupling enhances superfluid pairing \cite{renyuan_prl2012,zengqiang_prl2011,lianyi_prl2012,lianyi_pra2013}. At the mean-field level, it is known that
the presence of both SO coupling and a perpendicular (out-of-plane) Zeeman field gives rise to effective $p$-wave pairing \cite{tewari_prl2007,chuanwei_prl2012}.
Meanwhile, introducing an in-plane Zeeman component creates an anisotropic Fermi surface which favors finite-momentum pairing,
giving rise to a Fulde-Ferrell superfluid \cite{wu_prl2013,zheng_pra2013,chunlei_natc2013,weizhang_natc2013,ff1,ff4}.
These previous studies motivated the present work, in which we investigate the thermodynamic properties of an SO-coupled Fermi gas subject to a Zeeman field.
We will focus our calculation on two important quantities that are measurable in experiment: the superfluid-to-normal transition temperature and the isothermal compressibility.

We organize the paper as follows.
In Sec.~\ref{sec_model_hamiltonian}, we introduce the Rashba SO-coupled model
and briefly describe the T-matrix scheme used in the calculation.
In Sec.~\ref{sec_zero-temperature_properties},
we briefly review the zero-temperature properties of the system.
The superfluid transition temperature $T_c$ in the BEC-BCS crossover is investigated in Sec.~\ref{sec_superfluid_transition_temperature}, with emphasis on its
dependence on the Zeeman field. We present the numerical results for the compressibility in Sec.~\ref{sec_isothermal_compressibility} before providing a summary in Sec.~\ref{summary}. We show that both the superfluid transition temperature and the compressibility depend in distinct ways on the out-of-plane and the in-plane Zeeman fields.
In the Appendix we present technical details of the T-matrix formalism.

\section{Model}
\label{sec_model_hamiltonian}

We consider a three-dimensional two-component degenerate Fermi gas with Rashba SO coupling together with effective Zeeman fields.
This system can be described by the following Hamiltonian:
\begin{eqnarray}
\mathcal{H} &=& \int d{\bf r}\,
\psi^\dag ( \mathcal{H}_0 + \mathcal{H}_\mathrm{so} + h_z\sigma_z +h_x\sigma_x) \psi({\bf r}) \nonumber\\
&& + U \int d{\bf r}\, \psi^\dag_\uparrow({\bf r}) \psi^\dag_\downarrow({\bf r})
\psi_\downarrow({\bf r}) \psi_\uparrow({\bf r})~,
\end{eqnarray}
where $\psi({\bf r})=(\psi_\uparrow \,,\psi_\downarrow)^T$ represents the fermionic field operator and $\mathcal{H}_0 = -\nabla^2/(2m)-\mu$ represents the kinetic energy,
with $\mu$ being the chemical potential.
The Rashba SO-coupling term takes the form
$\mathcal{H}_\mathrm{so} = \alpha ( k_y\sigma_x - k_x\sigma_y )$ in the $xy$-plane, with the parameter $\alpha$ characterizing the strength of the SO coupling.
We consider both an out-of-plane Zeeman field $h_z$ and an in-plane Zeeman field $h_x$.
The quantity $U$ represents the bare two-body interaction constant, which in the calculation is replaced by the $s$-wave scattering length $a_s$ through the standard regularization scheme:
$1/U = m/(4\pi\hbar^2a_s)-\sum_k m/k^2$.

In the mean-field BCS theory, we introduce an order parameter
$\Delta=U\sum_{\bm{k}}\langle \psi_{\bm{Q}+\bm{k},\uparrow}\psi_{\bm{Q}-\bm{k},\downarrow}\rangle$ to characterize the superfluid. Here $\bm{Q}$ represents the center-of-mass momentum of the pairs. A finite ${\bm{Q}}$ arises from the presence of the in-plane Zeeman field \cite{wu_prl2013,zheng_pra2013,chunlei_natc2013,ff1,lindong_njp2013,hu_njp2013,ff4}.
At finite temperature in a T-matrix scheme, the order parameter $\Delta(T)$ can be divided into two parts:
$\Delta^2 = \Delta_\mathrm{sc}^2 + \Delta_\mathrm{pg}^2$ \cite{lianyi_pra2013}.
Here $\\Delta_{\\rm sc}$ is the superfluid gap arising from the condensed pairs and vanishes above the superfluid transition temperature $T_c$.\nThe pseudo-gap $\\Delta^2_\\mathrm{pg}\\sim\\langle\\Delta^2(T)\\rangle - \\langle\\Delta(T)\\rangle^2$ describes the thermodynamic fluctuation of non-condensed pairs \\cite{qijin_prl1998,kosztin_prb2000}.\nBelow the superfluid transition temperature $T_c$, the thermodynamic quantities are determined\nby the Thouless criterion \\cite{thouless_1960} within the T-matrix scheme instead of the BCS formalism in the mean-field model.\nThis is because at finite temperature thermodynamic fluctuation plays a critical role with a tendency towards destroying the pairing condensation.\nThe Thouless criterion for finite pairing momentum $\\bm{Q}$ takes the form:\n\\begin{equation}\nU^{-1}+\\chi(0,\\bm{Q})=0 ~,\n\\end{equation}\nwhere $\\chi(0,\\bm{Q})$ is the spin symmetrized pair susceptibility.\nMore technical details are given in the Appendix.\n\nThe T-matrix scheme adopted here was first developed in the context of high-$T_c$ cuprates \\cite{qijin_prl1998,maly1999,loktev_pr2001,perali_prb2007}. It was later applied to study the BEC-BCS crossover phenomenon in ultracold atomic Fermi gases \\cite{qijin_pr2005}. The T-matrix theory was found to be reasonably successful in providing theoretical support for the measured radio-frequency (rf) spectrum \\cite{kinnunen,huihu_prl2010,qijin}, density profiles \\cite{stajic}, and thermodynamic properties such as heat capacity \\cite{kinast}, pressure \\cite{bauer_prl2014}, and isothermal compressibility \\cite{haoguo_arxiv2013} of ultracold Fermi gases. More recently, this theory was also applied to Fermi gases with spin-orbit coupling \\cite{lianyi_pra2013, zhangjing}. In Ref.~\\cite{zhangjing}, it was found that the theoretical calculation qualitatively agrees with the measured rf spectrum of a spin-orbit-coupled Fermi gas on the BEC side of the resonance. 
Given these past successes, we are confident that the T-matrix scheme indeed represents a vast improvement over the simple mean-field theory and should be at least qualitatively valid over the whole BEC-BCS crossover regime. \n\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.35\\textwidth]{fig1.eps}\n\\caption{(Color online) Thermodynamic quantities at zero temperature,\n(a) order parameter, and (b) pairing momentum, as functions of interaction strength characterized by $1\/a_s K_F$ for different $\\theta_h$. Here the SO coupling strength is\n$\\alpha K_F=2.0E_F$, and the effective Zeeman field strength is $h=0.5E_F$. $\\theta_h$ is the angle between the effective Zeeman field and the $z$-axis, such that $h_z = h \\cos \\theta_h$ and $h_x = h \\sin \\theta_h$. Therefore $\\theta_h=0$ represents a pure out-of-plane Zeeman field, while $\\theta_h=\\pi\/2$ represents a pure in-plane Zeeman field. $K_F$ and $E_F$ are the Fermi wave number and Fermi energy of the ideal Fermi gas, respectively.}\n\\label{crossover_para}\n\\end{figure}\n\n\n\\section{Zero-temperature properties}\n\\label{sec_zero-temperature_properties}\n\nWe first briefly review the main features of the system at zero temperature. Note that the pseudo-gap tends to zero in the low-temperature limit as $\\Delta_\\mathrm{pg}\\sim T^{3\/4}$ [see Eq.~(\\ref{delta_pg})] and vanishes at $T=0$. Hence, the T-matrix scheme we adopted here reduces to the mean-field theory at zero temperature. The Rashba SO coupling and the Zeeman field have opposite effects on the magnitude of the gap parameter: the former tends to enhance the gap, while the latter tends to reduce the gap. Fig.~\\ref{crossover_para}(a) displays the gap parameter as a function of interaction strength for several different Zeeman fields. The in-plane Zeeman field is known to break the symmetry of the band structure and results in Cooper pairs with finite momentum, a signature of the Fulde-Ferrell superfluid. 
In Fig.~\ref{crossover_para}(b), we show how the magnitude of the momentum of the Cooper pairs $Q=|\bm{Q}|$ ($\bm{Q}$ is along the $y$-axis \cite{zheng_pra2013}) changes in the BEC-BCS crossover. $Q$ decreases quickly on the BEC side of the resonance, as the two-body $s$-wave interaction, which favors zero-momentum pairing, dominates in that regime.

\begin{figure}[htbp]
\centering
\includegraphics[width=0.35\textwidth]{fig2.eps}
\caption{(Color online) Superfluid gap at $T=0$ as a function of the Zeeman field strength for (a) $1/a_sK_F= -0.5$, (b) 0, and (c) 0.5. The solid, dashed, and dot-dashed lines correspond to $\theta_h= 0$, $\pi/4$, and $\pi/2$, respectively. The SO-coupling strength is
$\alpha K_F=2.0E_F$. The vertical arrows indicate the critical Zeeman field strength at which the system becomes gapless.}
\label{deltah}
\end{figure}

In Fig.~\ref{deltah}, we show how the zero-temperature superfluid gap varies as the Zeeman field strength changes. For a weak Zeeman field, the gap is insensitive to the orientation of the field. In general, $\Delta$ decreases as $h=\sqrt{h_z^2+h_x^2}$ increases, due to the pair-breaking effect of the Zeeman field. However, once $h$ exceeds a certain threshold value, the decrease of the gap becomes sensitive to the orientation of the field: the larger the in-plane Zeeman field component, the steeper the decrease of the gap. As we show below, this threshold value corresponds to the field strength at which the quasi-particle excitation gap vanishes.

Another important effect of the Zeeman field is that it closes the bulk quasi-particle excitation gap at a critical value. Fig.~\ref{phase}(a) shows a phase diagram in which we plot the critical Zeeman field at which the system changes from gapped to gapless. The region below each line represents the gapped phase.
As the interaction strength varies from the BCS side to the BEC side, the order parameter increases [see Fig.~\ref{crossover_para}(a)], and correspondingly the critical field strength increases and the gapped region enlarges. In the absence of the in-plane Zeeman field (i.e., when $h_x=0$), the system becomes gapless due to the presence of discrete Fermi points \cite{gong_prl2011} located along the $k_z$-axis in momentum space at $k_z=\pm\sqrt{2m(\mu\pm\sqrt{h^2-\Delta^2})}$. These Fermi points are topological defects. Hence the transition from the gapped to the gapless region in this case also represents a transition from a topologically trivial to a topologically nontrivial quantum phase. For finite $h_x$, by contrast, the gapless region features a nodal surface in momentum space on which the single-particle excitation gap vanishes \cite{lindong_njp2013,zhoupra2013,yong_prl2014}. An example of such a nodal surface is plotted in Fig.~\ref{phase}(b). The vertical arrows in Fig.~\ref{deltah} indicate the critical Zeeman field strength at which the system becomes gapless. One can see that the sharp drop of the superfluid gap is correlated with the appearance of the nodal surface induced by a large in-plane Zeeman field. As we shall show below, the presence of the nodal surface also has dramatic effects on the thermodynamic properties of the system.

\begin{figure}[htbp]
\centering
\includegraphics[width=0.32\textwidth]{fig3.eps}
\caption{(Color online) (a) The critical magnetic field that separates the gapped region and the gapless region. The region below the lines is gapped. Here the SO-coupling strength is
$\alpha K_F=2.0E_F$. (b) The nodal surface in the gapless region for $h_z=0.0$, $h_x=1.0E_F$, $\alpha K_F=2.0E_F$, and $1/a_sK_F=0.0$.
The range of the plot is from $-2 K_F$ to $2K_F$ along each direction.}
\label{phase}
\end{figure}


\begin{figure}[htbp]
\centering
\includegraphics[width=0.48\textwidth]{fig4.eps}
\caption{(Color online) (a) The total gap $\Delta$, pseudo-gap $\Delta_{\rm pg}$, and superfluid gap $\Delta_{\rm sc}$ as functions of the temperature.
(b) The corresponding pairing momentum $Q$ as a function of the temperature.
The parameters used are
$1/a_sK_F=0.0$, $\alpha K_F=2.0E_F$, $h_z=0.0$, and $h_x=0.5E_F$. In this example, the green vertical dashed line indicates the critical temperature $T_c=0.210E_F$.}
\label{delta}
\end{figure}

\section{Superfluid Transition Temperature}
\label{sec_superfluid_transition_temperature}

We now turn our attention to the finite-temperature properties of the system. The first quantity we address is the superfluid transition temperature $T_c$, which is identified as the lowest temperature at which the superfluid gap $\Delta_{\rm sc}$ vanishes.
Fig.~\ref{delta}(a) shows an example of how the gap varies as the temperature increases. As the temperature increases, the superfluid gap $\Delta_{\rm sc}$ decreases monotonically and vanishes at $T_c$.
In contrast, the pseudo-gap $\Delta_{\rm pg}$ is a monotonically increasing function, $\Delta_{\rm pg}\sim T^{3/4}$, below $T_c$ and decreases above $T_c$.
On the other hand, Fig.~\ref{delta}(b) shows that the center-of-mass momentum $Q$ of the Cooper pairs is quite insensitive to the temperature for $T<T_c$ and remains finite for $T>T_c$.
This indicates that the in-plane Zeeman field not only induces a Fulde-Ferrell superfluid below $T_c$, but also induces an exotic normal state above $T_c$ featuring a finite-momentum pseudo-gap \cite{ff4}.

\begin{figure}[htbp]
\centering
\includegraphics[width=0.36\textwidth]{fig5.eps}
\caption{(Color online) Critical temperature $T_c$ in the BEC-BCS crossover for (a) different spin-orbit couplings (the values of the SO-coupling strength $\alpha$, in units of $E_F/K_F$, are shown in the legends) with no Zeeman field;
and for (b) different ($h$, $\theta_h$) with $\alpha K_F=2.0E_F$. $h$ is in units of $E_F$.}
\label{crossover}
\end{figure}

In Fig.~\ref{crossover}, we show the superfluid transition temperature $T_c$ in the BEC-BCS crossover. In Fig.~\ref{crossover}(a), we plot $T_c$ as a function of the interaction strength for several different values of the SO-coupling strength without the Zeeman field. For comparison, the mean-field result is also included. For weak interaction, where $1/(a_sK_F)$ is large and negative, the mean-field result agrees with the T-matrix result. As the interaction increases towards the BEC limit, the mean-field theory predicts an unphysically large critical temperature, a clear indication of its breakdown. In the BEC limit, where the two-body attractive interaction is dominant, the system behaves like a condensate of weakly interacting, tightly bound bosonic dimers with effective mass $2m$, regardless of the SO-coupling strength \cite{jiang_pra2011,zengqiang_prl2011,hu_prl2011}. The BEC transition temperature for this system is $0.218E_F$ \cite{melo_prl1993,ohashi_prl2002,machida_pra2006}.

\begin{figure}[htbp]
\centering
\includegraphics[width=0.35\textwidth]{fig6.eps}
\caption{(Color online) Critical temperature $T_c$ as a function of the Zeeman field strength for (a) $1/a_sK_F= -0.5$, (b) 0, and (c) 0.5.
The solid, dashed, and dot-dashed lines correspond to $\theta_h= 0$, $\pi/4$, and $\pi/2$, respectively. The SO-coupling strength is
$\alpha K_F=2.0E_F$. The vertical arrows indicate the value of the Zeeman field strength at which the system becomes gapless.}
\label{htc}
\end{figure}

In Fig.~\ref{crossover}(b), we examine how the Zeeman field affects $T_c$. In the BEC limit, we have again $T_c=0.218 E_F$, insensitive to either the SO-coupling strength or the Zeeman field strength \cite{jiang_pra2011}. On the BCS side, however, the Zeeman field tends to decrease $T_c$. This effect is much more pronounced for the in-plane Zeeman field than for the out-of-plane Zeeman field, indicating that the finite-momentum Fulde-Ferrell pairing is less robust than the zero-momentum Cooper pairing.

To show the effect of the Zeeman field on the transition temperature more clearly, we plot in Fig.~\ref{htc} $T_c$ as a function of the Zeeman field strength at different orientations. Across the BEC-BCS crossover, $T_c$ is not very sensitive to the out-of-plane Zeeman field over a large range of field strengths. By contrast, when the in-plane Zeeman field is present, $T_c$ starts to drop rather sharply near the critical Zeeman field strength where the system becomes gapless. This steep downturn of $T_c$ is particularly pronounced on the BEC side of the resonance. We attribute this steep drop of $T_c$, and the corresponding steep drop of the zero-temperature superfluid gap (see Fig.~\ref{deltah}), to the enhanced fluctuation resulting from the emergence of the nodal surface induced by a large in-plane Zeeman field.


\section{Isothermal Compressibility}
\label{sec_isothermal_compressibility}

The isothermal compressibility, defined as $\kappa = \frac{1}{n} \left. \frac{\partial n}{\partial P} \right|_{T}$, measures the change in the density $n$ in response to a change of the pressure $P$.
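To make the definition of $\kappa$ concrete, the short sketch below evaluates it by a central finite difference for a toy equation of state. The classical ideal gas used here (with $k_B=1$) is purely illustrative and is not the interacting Fermi gas of this paper, whose $\kappa$ requires the full thermodynamic-potential derivatives given later in this section; for $n=P/T$ one has $\kappa=1/P$ exactly.

```python
def isothermal_compressibility(n_of_P, P, T, dP=1e-6):
    """kappa = (1/n) * (dn/dP) at fixed T, via a central finite difference."""
    n = n_of_P(P, T)
    dn_dP = (n_of_P(P + dP, T) - n_of_P(P - dP, T)) / (2 * dP)
    return dn_dP / n

# Toy equation of state: classical ideal gas, n = P / T (units with k_B = 1).
ideal_gas_n = lambda P, T: P / T

P, T = 2.0, 1.0
kappa = isothermal_compressibility(ideal_gas_n, P, T)
print(kappa)  # for the ideal gas, kappa = 1/P = 0.5
```

The same finite-difference wrapper can be pointed at any numerically evaluated density $n(P,T)$, which is how one would check a tabulated equation of state against the analytic derivative.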
A recent experiment measured the compressibility of a Fermi gas across the superfluid phase transition \cite{ku_sci2012}, and the result was found to be in reasonable agreement with the T-matrix theory \cite{haoguo_arxiv2013}. In the superfluid regime, it is found that $\kappa$ increases with temperature, i.e., the gas becomes more compressible as the temperature increases. This can be understood intuitively: a lower temperature yields a larger superfluid gap, and hence a gas that is harder to compress. Here we want to examine how SO coupling affects the behavior of $\kappa$.

\begin{figure}[htbp]
\centering
\includegraphics[width=0.48\textwidth]{fig7.eps}
\caption{(Color online) Compressibility in the superfluid regime as a function of temperature for different Zeeman field strengths and scattering lengths. The scattering lengths are: $1/K_Fa_s = -0.5$ for the upper panels (a1) and (b1), $1/K_Fa_s = 0$ for the middle panels (a2) and (b2), and $1/K_Fa_s = 0.5$ for the lower panels (a3) and (b3). For the left panels (a1), (a2), and (a3), $h_x=0$; for the right panels (b1), (b2), and (b3), $h_z=0$. The SO-coupling strength is
$\alpha K_F=2.0E_F$. The compressibility is normalized by $\kappa_0=\frac{3}{2}\frac{1}{N E_F}$, the compressibility of an ideal Fermi gas, and the temperature is normalized to the superfluid transition temperature $T_c$ for the given set of parameters. In the legends, we also indicate whether the quasiparticle excitations of the corresponding system are gapped or gapless. In (a2), we show $\kappa$ in the absence of SO coupling and Zeeman fields (the dotted line) by taking $\alpha=h=0$. In this limit, our calculation recovers the result reported in Ref.~\cite{haoguo_arxiv2013}.}
\label{compressibility}
\end{figure}

For our system, $\kappa$ can be expressed as \cite{seo_arxiv2011, seo_pra2013, haoguo_arxiv2013}
\begin{eqnarray}
\kappa &=& \dfrac{1}{N^2}\Big[
\Big(\dfrac{\partial N}{\partial \mu}\Big)_{T,\Delta,Q}
+ \Big(\dfrac{\partial N}{\partial \Delta}\Big)_{T,\mu,Q} \Big(\dfrac{\partial \Delta}{\partial \mu}\Big)_{T,Q} \notag\\
&& + \Big(\dfrac{\partial N}{\partial Q}\Big)_{T,\mu,\Delta} \Big(\dfrac{\partial Q}{\partial \mu}\Big)_{T,\Delta}
\Big] ~,
\end{eqnarray}
where
\[
\Big(\dfrac{\partial \Delta}{\partial \mu}\Big)_{T,Q} =
\Big(\dfrac{\partial N}{\partial \Delta}\Big)_{T,\mu,Q} \Big/ \Big(\dfrac{\partial^2 \Omega}{\partial \Delta^2}\Big)_{T,\mu,Q} ~,
\]
\[
\Big(\dfrac{\partial Q}{\partial \mu}\Big)_{T,\Delta} =
\Big(\dfrac{\partial N}{\partial Q}\Big)_{T,\mu,\Delta} \Big/ \Big(\dfrac{\partial^2 \Omega}{\partial Q^2}\Big)_{T,\mu,\Delta} ~.
\]
In Fig.~\ref{compressibility}, we show the compressibility $\kappa$ as a function of temperature $T$ in the superfluid regime on both sides of the Feshbach resonance. In the left panels, we set the in-plane Zeeman field $h_x=0$. Across the Feshbach resonance, we see that, regardless of the strength of the out-of-plane Zeeman field $h_z$, $\kappa$ increases smoothly as the temperature increases from 0 to $T_c$, just as in a superfluid Fermi gas without SO coupling \cite{ku_sci2012, haoguo_arxiv2013}. By contrast, the presence of the in-plane Zeeman field gives rise to some surprising effects. In the right panels of Fig.~\ref{compressibility}, we set $h_z=0$. One can see that for small $h_x$, such that the system is gapped, $\kappa$ is a monotonically increasing function of $T$. However, once $h_x$ exceeds the critical value such that the system becomes gapless, $\kappa$ becomes a non-monotonic function of temperature.
In certain regimes, $\kappa$ even decreases as $T$ increases.

\begin{figure}[htbp]
\centering
\includegraphics[width=0.32\textwidth]{fig8.eps}
\caption{(Color online) Compressibility at $T=0$ as a function of the Zeeman field strength. The dashed line corresponds to the in-plane Zeeman field ($\theta_h=\pi/2$), and the solid line corresponds to the out-of-plane Zeeman field ($\theta_h=0$). The dashed and solid arrows indicate the critical field strength at which the system becomes gapless in the presence of an in-plane Zeeman field and an out-of-plane Zeeman field, respectively. The SO-coupling strength is
$\alpha K_F=2.0E_F$, and the scattering length is set at $1/K_Fa_s=0$.}
\label{comph}
\end{figure}

The drastically different effects of the in-plane and the out-of-plane Zeeman fields on the compressibility can also be seen in Fig.~\ref{comph}, where we plot $\kappa$ in the unitarity limit at zero temperature as a function of the Zeeman field strength. For the out-of-plane Zeeman field, $\kappa$ decreases smoothly as the field strength increases. In particular, it does not exhibit any distinctive feature when the system changes from gapped to gapless. This is consistent with the fact that the gapped-to-gapless transition in the presence of a pure out-of-plane Zeeman field represents a topological phase transition, which does not leave a trace in thermodynamic quantities. On the other hand, when the Zeeman field is in plane, with its strength increasing from zero, $\kappa$ first decreases and then rises sharply near the critical field where the system becomes gapless. Hence the in-plane-Zeeman-field-induced gapless superfluid, featuring a Fermi nodal surface, is highly compressible.

The nodal surface resulting from the in-plane Zeeman field is not unique to Rashba SO coupling.
For example, in the experimentally realized equal-weight Rashba-Dresselhaus coupling, a large in-plane Zeeman field can also induce a nodal surface \cite{nodal1,nodal2}. We have checked that for that system the Zeeman-field dependence of the compressibility exhibits very similar behavior.
Using the standard form of the fluctuation-dissipation
theorem for a balanced system, the isothermal compressibility can be rewritten as
$\kappa= \frac{V}{T}\big( \frac{\langle\hat{N}^2\rangle-N^2}{N^2} \big)$ \cite{seo_arxiv2011}, i.e., $\kappa$ is directly proportional to the number fluctuation of the system. The increase in $\kappa$ can therefore be interpreted as a consequence of the enhanced number fluctuation induced by the nodal surface.


\section{SUMMARY}
\label{summary}

In summary,
we have investigated the effect of pairing fluctuations
on the thermodynamic properties of a Rashba SO-coupled superfluid Fermi gas by adopting a T-matrix scheme. We focus on the effect of the Zeeman field. In particular, the in-plane Zeeman component leads to finite-momentum Cooper pairing and, when its magnitude becomes sufficiently large, it induces a nodal Fermi surface on which the quasi-particle excitation gap vanishes. The presence of the nodal surface has dramatic effects on the superfluid properties: it greatly suppresses the superfluid transition temperature and increases the isothermal compressibility. Both phenomena can be attributed to the enhanced fluctuation due to the presence of the nodal surface. In stark contrast, when only the out-of-plane Zeeman field is present, both $T_c$ and $\kappa$ exhibit smooth behavior as the Zeeman field strength is increased, even when the system becomes gapless at large Zeeman field strength.
The key difference is that a large out-of-plane Zeeman field, unlike its in-plane counterpart, does not give rise to a nodal surface, but only to discrete Fermi points along the $k_z$-axis.
These Fermi points are topological defects, and their appearance does not manifest itself in any thermodynamic quantity. In addition,
we also find an unconventional pseudo-gap state above the superfluid transition temperature, in which the non-condensed pairs possess nonzero center-of-mass momentum.
We attribute it to the anisotropic Fermi surface induced by the in-plane Zeeman field, which is independent of temperature.


\section*{ACKNOWLEDGEMENTS}
Z.Z., X.Z., and G.G. are supported by the National Natural
Science Foundation of China (Grants No. 11074244
and No. 11274295) and the National 973 Fundamental Research Program (2011cba00200).
H.P. acknowledges support from the NSF and
the Welch Foundation (Grant No. C-1669).


\section{Counting of branched covers}

Let us consider a connected compact surface without boundary $\Omega$ and a branched covering $f:\Sigma\rightarrow\Omega$
by a connected or non-connected surface $\Sigma$. We consider a covering $f$ of degree $d$. This means that the
preimage $f^{-1}(z)$ consists of $d$ points for every $z\in\Omega$ except for a finite number of points. These points are called
\textit{critical values of $f$}.

Consider the preimage $f^{-1}(z)=\{p_1,\dots,p_{\ell}\}$ of $z\in\Omega$. Denote by $\delta_i$ the degree of $f$ at $p_i$. This
means that in a neighborhood of $p_i$ the function $f$ is homeomorphic to $x\mapsto x^{\delta_i}$.
The set
$\Delta=(\delta_1,\dots,\delta_{\ell})$
is a partition of $d$, which is called the \textit{topological type of $z$}.

For a partition $\Delta$ of a number $d=|\Delta|$, denote by $\ell(\Delta)$ the number of its non-vanishing parts ($|\Delta|$ and
$\ell(\Delta)$ are called the weight and the length of $\Delta$, respectively). We denote a partition and its Young diagram by
the same letter. Denote by $(\delta_1,\dots,\delta_{\ell})$ the Young diagram with rows of length $\delta_1,\dots,\delta_{\ell}$
and the corresponding partition of $d=\sum \delta_i$.

Fix now points $z_1,\dots,z_{\textsc{f}}$ and partitions $\Delta^{(1)},\dots,\Delta^{(\textsc{f})}$ of $d$. Denote by
\[\widetilde{C}_{\Omega (z_1\dots,z_{\textsc{f}})} (d;\Delta^{(1)},\dots,\Delta^{(\textsc{f})})\]
the set of all branched coverings $f:\Sigma\rightarrow\Omega$ with critical points $z_1,\dots,z_{\textsc{f}}$ of topological
types $\Delta^{(1)},\dots,\Delta^{(\textsc{f})}$.

Coverings $f_1:\Sigma_1\rightarrow\Omega$ and $f_2:\Sigma_2\rightarrow\Omega$ are called isomorphic if there exists a
homeomorphism $\varphi:\Sigma_1\rightarrow\Sigma_2$ such that $f_1=f_2\varphi$. Denote by $\texttt{Aut}(f)$ the group of
automorphisms of the covering $f$. Isomorphic coverings have isomorphic groups of automorphisms, of order $|\texttt{Aut}(f)|$.

Consider now the set $C_{\Omega (z_1\dots,z_{\textsc{f}})} (d;\Delta^{(1)},\dots,\Delta^{(\textsc{f})})$ of isomorphism classes
in $\widetilde{C}_{\Omega (z_1\dots,z_{\textsc{f}})} (d;\Delta^{(1)},\dots,\Delta^{(\textsc{f})})$. This is a finite set.
The sum
\begin{equation}\label{Hurwitz-number-geom-definition}
H^{\textsc{e},\textsc{f}}(d;\Delta^{(1)},\dots,\Delta^{(\textsc{f})})=
\sum\limits_{f\in C_{\Omega (z_1\dots,z_{\textsc{f}})}(d;\Delta^{(1)},\dots,
\Delta^{(\textsc{f})})}\frac{1} {|\texttt{Aut}(f)|}\quad,
\end{equation}
does not depend on the location of the points $z_1,\dots,z_{\textsc{f}}$ and is called a \textit{Hurwitz number}.
Here $\textsc{f}$ denotes the number of branch points, and $\textsc{e}$ is the Euler characteristic of the base surface.

When it does not cause confusion, we admit `trivial' profiles $(1^d)$ among $\Delta^{(1)},\dots,\Delta^{(\textsc{f})}$ in
(\ref{Hurwitz-number-geom-definition}),
keeping the notation $H^{\textsc{e},\textsc{f}}$, even though the number of critical points is then less than $\textsc{f}$.

If we count only connected covers $\Sigma$, we get the \textit{connected} Hurwitz numbers
$H_{\rm con}^{\textsc{e},\textsc{f}}(d;\Delta^{(1)},\dots,\Delta^{(\textsc{f})})$.

\vspace{1ex}

The Hurwitz numbers arise in different fields of mathematics, from algebraic geometry to integrable systems. They are well
studied for orientable $\Omega$. In this case the Hurwitz number coincides with the weighted number of holomorphic branched
coverings of a Riemann surface $\Omega$ by other Riemann surfaces, having critical points $z_1,\dots,z_\textsc{f}\in\Omega$ of
the topological types $\Delta^{(1)},\dots,\Delta^{(\textsc{f})}$, respectively.
The well known isomorphism between Riemann\nsurfaces and complex algebraic curves gives the interpretation of the Hurwitz numbers as the numbers of morphisms of\ncomplex algebraic curves.\n\nSimilarly, the Hurwitz number for a non-orientable surface $\\Omega$ coincides with the weighted number of the dianalytic\nbranched coverings of the Klein surface without boundary by another Klein surface and coincides with the weighted number\nof morphisms of real algebraic curves without real points \\cite{AG,N90,N2004}. An extension of the theory to all Klein surfaces\nand all real algebraic curves leads to Hurwitz numbers for surfaces\nwith boundaries may be found in \\cite{AN,N}.\n\n\nRiemann-Hurwitz formula related the Euler characteristic of the base surface $\\textsc{e}$ and the Euler characteristic of\nthe $d$-fold cover $\\textsc{e}'$ as follows:\n\\begin{equation}\\label{RH}\n \\textsc{e}'= d\\textsc{e}+\\sum_{i=1}^{\\textsc{f}}\\left(\\ell(\\Delta^{(i)})-d\\right)=0\n\\end{equation}\nwhere the sum ranges over all branch points $z_i\\,,i=1,2,\\dots$ with ramification profiles given by partitions $\\Delta^i\\,,i=1,2,\\dots$\nrespectively, and $\\ell(\\Delta^{(i)})$ denotes the length of the partition $\\Delta^{(i)}$ which is equal to the number of\nthe preimages $f^{-1}(z_i)$ of the point $z_i$.\n\n\\vspace{1ex}\n{\\bf Example 1}.\nLet $f:\\Sigma\\rightarrow\\mathbb{CP}^1$ be a covering without critical points.\nThen, each $d$-fold cover is a union of $d$ Riemann spheres: $\\mathbb{CP}^1 \\coprod \\cdots \\coprod \\mathbb{CP}^1$, then\n$\\deg f =d!$ and $H^{2,0}(d)=\\frac{1}{d!}$\n\n\n\\vspace{1ex}\n{\\bf Example 2}.\nLet $f:\\Sigma\\rightarrow\\mathbb{CP}^1$ be a $d$-fold covering with two critical points with the profiles \n$\\Delta^{(1)}=\\Delta^{(2)}=(d)$.\n(One may think of $f=x^d$). Then $H^{2,2}(d;(d),(d))=\\frac 1d$. 
Let us note that $\\Sigma$ is connected in this case \n(therefore $H^{2,2}(d;(d),(d))=H_{\\rm con}^{2,2}(d;(d),(d)) $)\nand its Euler\ncharacteristic $\\textsc{e}'=2$.\n\n\n\\vspace{1ex}\n{\\bf Example 3}.\n The generating\nfunction for the Hurwitz numbers $H^{2,2}(d;(d),(d))$ from the previous Example may be written as\n$$\nF(h^{-1}\\mathbf{p}^{(1)},h^{-1}\\mathbf{p}^{(2)}):=\\, h^{-2}\\sum_{d>0}\\, H_{\\rm con}^{2,2}(d;(d),(d)) p_d^{(1)}p_d^{(2)}=\nh^{-2}\\sum_{d>0} \\frac 1d p_d^{(1)}p_d^{(2)}\n$$ \nHere $\\mathbf{p}^{(i)}=(p_1^{(i)},p_2^{(i)},\\dots),\\,i=1,2$ are two sets of formal parameters. The powers of the auxiliary\nparameter $\\frac 1h$ count the Euler characteristic of the cover $\\textsc{e}'$, which is 2 in our example.\nThen, thanks to the known general statement about the link between generating functions of \"connected\" and \"disconnected\"\nHurwitz numbers (see for instance \\cite{ZL}), one can write down the generating function for the Hurwitz numbers for covers with\ntwo critical points,\n$H^{2,2}(d;\\Delta^{(1)},\\Delta^{(2)})$, as follows:\n\\[\n\\tau(h^{-1}\\mathbf{p}^{(1)},h^{-1}\\mathbf{p}^{(2)})=e^{F(h^{-1}\\mathbf{p}^{(1)},h^{-1}\\mathbf{p}^{(2)}) } =\n\\]\n\\begin{equation}\\label{E=2,F=2Hurwitz}\ne^{h^{-2}\\sum_{d>0} \\frac 1d p_d^{(1)}p_d^{(2)}}\\,\n= \\,\n\\sum_{d\\ge 0} \\sum_{\\Delta^{(1)},\\Delta^{(2)}}\n H^{2,2}(d;\\Delta^{(1)},\\Delta^{(2)}) \\,h^{-\\textsc{e}'} \\mathbf{p}^{(1)}_{\\Delta^{(1)}}\\mathbf{p}^{(2)}_{\\Delta^{(2)}}\n\\end{equation}\nwhere $\\mathbf{p}^{(i)}_{\\Delta^{(i)}}:=p^{(i)}_{\\delta^{(i)}_1}p^{(i)}_{\\delta^{(i)}_2}p^{(i)}_{\\delta^{(i)}_3}\\cdots$, $i=1,2$\nand where $\\textsc{e}'=\\ell(\\Delta^{(1)}) + \\ell(\\Delta^{(2)})$ in agreement with (\\ref{RH}) where we put $\\textsc{f}=2$.\nFrom (\\ref{E=2,F=2Hurwitz}) it follows that the profiles of both critical points\nmust coincide; otherwise the Hurwitz number vanishes. 
Let us denote this common profile by $\\Delta$, \nwith $|\\Delta|=d$; from the last equality we get\n$$\nH^{2,2}(d;\\Delta,\\Delta) = \\frac {1}{z_{\\Delta}}\n$$\nHere\n\\begin{equation}\nz_\\Delta\\,=\\,\\prod_{i=1}^\\infty \\,i^{m_i}\\,m_i!\n\\end{equation}\nwhere $m_i$ denotes the number of parts equal to $i$ of the partition $\\Delta$ (the partition $\\Delta$ is then often\ndenoted by $(1^{m_1}2^{m_2}\\cdots)$).\n\n\n\\vspace{1ex}\n{\\bf Example 4}.\nLet $f:\\Sigma\\rightarrow\\mathbb{RP}^2$ be a covering without critical points.\nIf $\\Sigma$ is connected, then $\\Sigma=\\mathbb{RP}^2$,\n$\\deg f=1$\\quad or $\\Sigma=S^2$, $\\deg f=2$. Next, if $d=3$, then\n$\\Sigma=\\mathbb{RP}^2\\coprod\\mathbb{RP}^2\\coprod\\mathbb{RP}^2$ or $\\Sigma=\\mathbb{RP}^2\\coprod S^2$.\nThus $H^{1,0}(3)=\\frac{1}{3!}+\\frac{1}{2!}=\\frac{2}{3}$.\n\n\n\n\n\\vspace{1ex}\n{\\bf Example 5}.\nLet $f:\\Sigma\\rightarrow\\mathbb{RP}^2$ be a covering with a single critical point with profile $\\Delta$, and $\\Sigma$ \nis connected.\n Note that due to (\\ref{RH}) the Euler\ncharacteristic of $\\Sigma$ is $\\textsc{e}'=\\ell(\\Delta)$. \n(One may think of $f=z^d$ defined in the unit disc where we identify $z$ and $-z$ if $|z|=1$).\nWhen we cover the Riemann sphere by the Riemann sphere via $z\\to z^m$, we get\ntwo critical points with the same profiles. If, however, we cover $\\mathbb{RP}^2$ by the Riemann sphere, then we have the \ncomposition of the\nmapping $z\\to z^{m}$ on the\nRiemann sphere and the factorization by the antipodal involution $z\\to - \\frac{1}{\\bar z}$. 
Thus we have the ramification \nprofile $(m,m)$\nat the single critical point $0$ of $\\mathbb{RP}^2$.\nThe automorphism group is the dihedral group of order $2m$, which consists of rotations by $\\frac{2\\pi }{m}$ and \nthe antipodal involution\n$z\\to -\\frac{1}{\\bar z}$.\nThus we get that \n$$\nH_{\\rm con}^{1,1}\\left(2m;(m,m)\\right)=\\frac{1}{2m}\n$$\nFrom (\\ref{RH}) we see that $\\textsc{e}'=\\ell(\\Delta)=2$ in this case.\nNow let us cover $\\mathbb{RP}^2$ by $\\mathbb{RP}^2$ via $z\\to z^d$. From (\\ref{RH}) we see that $\\ell(\\Delta)=1$.\nFor even $d$ we have the critical point\n$0$, and in addition each point of the unit\ncircle $|z|=1$ is critical (a folding), whereas from the beginning we have restricted our consideration to isolated critical points only.\nFor odd $d=2m-1$ there is\na single critical point $0$, and the automorphism group consists of rotations by the angle $\\frac{2\\pi}{2m-1}$. Thus in this case\n$$\nH_{\\rm con}^{1,1}\\left(2m-1;(2m-1)\\right)=\\frac{1}{2m-1}\n$$\n\n\\vspace{1ex}\n{\\bf Example 6}.\nThe generating series of the connected Hurwitz numbers with a single critical point from the previous Example is\n\\[\nF(h^{-1}\\mathbf{p})=\n \\frac{1}{h^2}\\sum_{m>0} p_m^2 H_{\\rm con}^{1,1}\\left(2m;(m,m)\\right) +\n \\frac{1}{h} \\sum_{m>0} p_{2m-1} H_{\\rm con}^{1,1}\\left(2m-1;(2m-1)\\right)\n\\]\nwhere $H_{{\\rm con}}^{1,1}$ describes $d$-fold covering either by the Riemann\nsphere ($d=2m$) or by the projective plane ($d=2m-1$). \nWe get the generating function for Hurwitz numbers with a single critical point\n\\[\n\\tau(h^{-1}\\mathbf{p})=e^{F(h^{-1}\\mathbf{p} ) } =\n\\]\n\\begin{equation}\\label{single-branch-point'}\ne^{\\frac {1}{h^2}\\sum_{m>0} \\frac {1}{2m}p_m^2 +\\frac 1h\\sum_{m {\\rm odd}} \\frac 1m p_m }=\n\\sum_{d>0} \n\\sum_{\\Delta\\atop |\\Delta|=d} h^{-\\ell(\\Delta)} \\mathbf{p}_\\Delta\nH^{1,a}(d;\\Delta)\n\\end{equation}\nwhere $a=0$ if $\\Delta=(1^d)$, and $a=1$ otherwise. 
Then $H^{1,1}(d;\\Delta)$ is the Hurwitz number \ndescribing $d$-fold coverings of $\\mathbb{RP}^2$ with a single\nbranch point of type $\\Delta=(d_1,\\dots,d_l),\\,|\\Delta|=d$ by a (not necessarily connected) Klein surface of\nEuler characteristic $\\textsc{e}'=\\ell(\\Delta)$. For instance, for $d=3$, $\\textsc{e}'=1$ we get\n$H^{1,1}(3;\\Delta)=\\frac 13\\delta_{\\Delta,(3)}$.\nFor unbranched coverings (that is for $a=0$, $\\textsc{e}'=d$) we get formula (\\ref{unbranched}).\n\n\\paragraph{Tau functions.}\n\nLet us note that the expression presented in (\\ref{E=2,F=2Hurwitz}), namely,\n\\begin{equation}\\label{2KP-tau-vac-Schur}\n \\tau^{\\rm 2KP}_1(h^{-1}\\mathbf{p}^{(1)},h^{-1}\\mathbf{p}^{(2)}) = \ne^{h^{-2}\\sum_{d>0} \\frac 1d p_d^{(1)}p_d^{(2)}}\n\\end{equation}\ncoincides with the simplest two-component KP tau function\nwith two sets of higher times $h^{-1}\\mathbf{p}^{(i)},\\, i=1,2$, while (\\ref{single-branch-point'}) may be recognized\nas the simplest non-trivial tau function of the BKP hierarchy of Kac and van de Leur \\cite{KvdLbispec} \n\\begin{equation}\n \\tau^{\\rm BKP}_1(h^{-1}\\mathbf{p}) =\ne^{\\frac {1}{h^2}\\sum_{m>0} \\frac {1}{2m}p_m^2 +\\frac 1h\\sum_{m {\\rm odd}} \\frac 1m p_m }\n\\end{equation}\nwritten down in\n\\cite{OST-I}. In (\\ref{E=2,F=2Hurwitz}) and in (\\ref{single-branch-point'}) the higher times are rescaled as\n$p_m\\to h^{-1}p_m,\\,m>0$, as is common in the study of integrable dispersionless equations where only\nthe top power of the 'Planck constant' $h$ is taken into account. 
For instance, see \\cite{NZ} where the counting\nof coverings of the Riemann sphere by Riemann spheres was related to the so-called Laplacian growth problem \\cite{MWZ},\n\\cite{Z}.\nAbout the quasiclassical limit of the DKP hierarchy see \\cite{ATZ}.\nThe rescaling is also common for tau functions used in two-dimensional gravity where the powers of $h^{-\\textsc{e}}$ \ngroup contributions of surfaces of Euler characteristic $\\textsc{e}$ to the 2D gravity partition function \\cite{BrezinKazakov}.\nIn the context of the links between Hurwitz numbers and integrable hierarchies the rescaling $\\mathbf{p} \\to h^{-1}\\mathbf{p}$ was\nconsidered in \\cite{Harnad-overview-2015} and in \\cite{NO-LMP}. In our case the role of $h$ is played by $N^{-1}$, \nwhere $N$ is the size of the matrices in matrix integrals.\n\nWith the help of these tau functions we shall construct integrals over matrices. To do this we express the variables\n$\\mathbf{p}^{(i)},\\, i=1,2$ and $\\mathbf{p}$ as traces of powers of a matrix we are interested in. We write $\\mathbf{p}(X)=\\left(p_1(X),p_2(X),\\dots \\right)$,\nwhere\n\\begin{equation}\\label{p_m(X)}\np_m(X) = \\mathrm {tr} X^m = \\sum_{i=1}^N x_i^m\n\\end{equation}\nand where $x_1,\\dots,x_N$ are the eigenvalues of $X$. 
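As a quick numerical sanity check of the notation $p_m(X)=\mathrm{tr}\,X^m$ (a sketch of ours, not part of the original text), one can verify the classical Cauchy identity $\exp\big(\sum_{m\ge 1}\frac{1}{m}p_m(X)p_m(Y)\big)=\prod_{i,j}(1-x_iy_j)^{-1}$, which underlies the expansion of the two-component KP tau function in power sums used below. The eigenvalue lists are arbitrary sample values chosen small enough for convergence.

```python
import math

# hypothetical sample eigenvalues (all |x_i * y_j| < 1 so the series converges)
xs = [0.3, -0.2, 0.15]
ys = [0.1, 0.25, -0.3]

def p(eigs, m):
    # power sum p_m(X) = tr X^m = sum_i x_i^m
    return sum(e**m for e in eigs)

# left-hand side: exp(sum_m p_m(X) p_m(Y) / m), truncated at m = 199;
# the tail is geometrically small since max |x_i y_j| < 0.1
lhs = math.exp(sum(p(xs, m) * p(ys, m) / m for m in range(1, 200)))

# right-hand side: the Cauchy product over pairs of eigenvalues
rhs = 1.0
for x in xs:
    for y in ys:
        rhs /= (1.0 - x * y)

print(abs(lhs - rhs))  # agreement to near machine precision
```

The same expansion, grouped by partitions, is exactly the sum $\sum_\Delta z_\Delta^{-1}{\bf P}_\Delta(X){\bf P}_\Delta(Y)$ appearing in the text.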
\n\n\nIn this case we use non-bold capital letters for the matrix argument, and our tau functions\nare tau functions of the matrix argument:\n\\begin{equation}\\label{tau_1}\n\\tau_1^{\\rm 2KP}(X,\\mathbf{p}):=\\tau_1^{\\rm 2KP}(\\mathbf{p}(X),\\mathbf{p})=\n\\sum_\\lambda s_\\lambda(X)s_\\lambda(\\mathbf{p}) =e^{\\mathrm {tr} V(X,\\mathbf{p})}=\n\\prod_{i=1}^N e^{\\sum_{m=1}^\\infty \\frac{1}{m}x_i^mp_m}\n\\end{equation}\nwhere $x_i$ are the eigenvalues of $X$,\nwhere $\\mathbf{p}=(p_1,p_2,\\dots)$ is a semi-infinite set of parameters,\nand\n\\begin{equation}\\label{tau_1^B}\n \\tau_1^{\\rm BKP}(X): =\\tau_1^{\\rm BKP}(\\mathbf{p}(X)) =\n \\sum_{\\lambda\\atop \\ell(\\lambda)\\le N} s_\\lambda(X) =\\prod_{i=1}^N (1-x_i)^{-1}\\prod_{i<j\\le N}(1-x_ix_j)^{-1}\n\\end{equation}\nwhere $\\ell(\\lambda)$ is the length of a partition $\\lambda=(\\lambda_1,\\dots,\\lambda_\\ell),\\,\\lambda_\\ell >0$.\n\nFor further purposes we need the following spectral invariants\nof a matrix $X$:\n\\begin{equation}\\label{spectral-invariant}\n{\\bf P}_\\Delta(X):=\\prod_{i=1}^\\ell p_{\\delta_i}(X)\n\\end{equation}\nwhere $\\Delta=(\\delta_1,\\dots, \\delta_\\ell)$ is a partition and each $p_{\\delta_i}$ is defined by (\\ref{p_m(X)}).\n\n\nIn our notation one can write\n\\begin{equation}\\label{tau_1-XY}\n\\tau_1^{\\rm 2KP}(X,Y)=\\tau_1^{\\rm 2KP}(\\mathbf{p}(X),\\mathbf{p}(Y))=\n\\sum_{\\Delta}\\frac{1}{z_\\Delta} {\\bf P}_\\Delta(X){\\bf P}_\\Delta(Y)\n\\end{equation}\n\n\\paragraph{Combinatorial approach.} The study of the homomorphisms between the fundamental group of the base Riemann surface \nof genus $g$ (whose Euler characteristic is $\\textsc{e}=2-2g$)\nwith \n$\\textsc{f}$ marked points and the symmetric group, in the context of counting the non-equivalent $d$-fold coverings with \ngiven profiles \n$\\Delta^{i},\\,i=1,\\dots,\\textsc{f}$, results in the following equation (for the details see, for instance, Appendix A written by Zagier for the \nRussian edition of \\cite{ZL} or the works \\cite{M1}, 
\\cite{GARETH.A.JONES})\n\\begin{equation}\\label{Hom-pi-S_d-Riemann}\n\\prod_{j=1}^g a_jb_ja_j^{-1}b_j^{-1}X_1\\cdots X_{\\textsc{f}} =1\n\\end{equation}\nwhere $a_j,b_j,X_i\\in S_d$ and where each $X_i$ belongs to the cycle class $C_{\\Delta^i}$. Then the Hurwitz number \n$H^{2-2g,\\textsc{f}}(d;\\Delta^1,\\dots,\\Delta^\\textsc{f})$ is equal to the number of solutions of equation (\\ref{Hom-pi-S_d-Riemann})\ndivided by the order of the symmetric group $S_d$ (to exclude the equivalent solutions obtained by the conjugation of all factors in\n(\\ref{Hom-pi-S_d-Riemann}) by elements of the group; in the geometrical approach each conjugation means a re-enumeration of the $d$ sheets\nof the cover).\n\nFor instance, Example 3 considered above counts non-equivalent solutions of the equation $X_1X_2=1$ with given cycle classes \n$C_{\\Delta^1}$ and $C_{\\Delta^2}$. Solutions of this equation consist of all elements of the class $C_{\\Delta^1}$ paired with their inverses, so \n$\\Delta^2=\\Delta^1=:\\Delta$. The number of elements of any class $C_\\Delta$ (the cardinality $|C_\\Delta|$) divided by $d!$ \nis $\\frac{1}{z_\\Delta}$, as we got in Example 3.\n\nFor Klein surfaces (see \\cite{M2},\\cite{GARETH.A.JONES}) instead of (\\ref{Hom-pi-S_d-Riemann}) we get \n\\begin{equation}\\label{Hom-pi-S_d-Klein}\n\\prod_{j=1}^g R_j^2 X_1\\cdots X_{\\textsc{f}} =1\n\\end{equation}\nwhere $R_j,X_i\\in S_d$ and where each $X_i$ belongs to the cycle class $C_{\\Delta^i}$. In (\\ref{Hom-pi-S_d-Klein}),\n$g$ is the so-called genus of the non-orientable surface, which is related to its Euler characteristic $\\textsc{e}$ as \n$\\textsc{e}=2-g$. For the projective plane ($\\textsc{e}=1$) we have $g=1$, for the Klein bottle ($\\textsc{e}=0$) $g=2$.\n\nConsider unbranched covers of the torus (equation (\\ref{Hom-pi-S_d-Riemann})), of the projective plane and of the Klein bottle\n(equation (\\ref{Hom-pi-S_d-Klein})). 
In this case we put each $X_i=1$ in (\\ref{Hom-pi-S_d-Riemann}) and (\\ref{Hom-pi-S_d-Klein}).\nHere we present three pictures, for the torus ($\\textsc{e}=0$), the real projective plane ($\\textsc{e}=1$) and the Klein bottle \n($\\textsc{e}=0$), which may be obtained by the identification of a square's edges. We get $aba^{-1}b^{-1}=1$ for the torus,\n$abab=1$ for the projective plane and $abab^{-1}=1$ for the Klein bottle.\n\n\nConsider unbranched coverings ($\\textsc{f}=0$).\nFor the real projective plane we have $g=1$ in (\\ref{Hom-pi-S_d-Klein}) and only one generator $R_1=ab$. If we treat the projective plane as the unit disk\nwith identified opposite points of the border $|z|=1$, then $R$ is related to the path from $z$ to $-z$.\nFor the Klein bottle ($g=2$ in (\\ref{Hom-pi-S_d-Klein})) there are $R_1=ab$ and $R_2=b^{-1}$.\n\nTo avoid confusion, in what follows we will use the notion of genus and the notation $g$ only for Riemann surfaces, while\nthe Euler characteristic $\\textsc{e}$ will be used both for orientable and non-orientable surfaces.\n\n \n \n\\section{Random matrices. Complex Ginibre ensemble.}\nOn this subject there is an extensive literature; for instance, see \\cite{Ak1}, \\cite{Ak2}, \\cite{AkStrahov}, \n\\cite{S1}, \\cite{S2}.\n\nWe will consider integrals over complex matrices $Z_1,\\dots,Z_n$ where the measure is defined as\n\\begin{equation}\nd\\Omega(Z_1,\\dots,Z_n)= \\prod_{\\alpha=1}^n d\\mu(Z_\\alpha)=c \n\\prod_{\\alpha=1}^n\\prod_{i,j=1}^N d\\Re (Z_\\alpha)_{ij}d\\Im (Z_\\alpha)_{ij}e^{-|(Z_\\alpha)_{ij}|^2}\n\\end{equation}\nwhere the integration range is $\\mathbb{C}^{N^2}\\times \\cdots \\times\\mathbb{C}^{N^2}$ and where $c$ is the normalization \nconstant defined via $\\int d \\Omega(Z_1,\\dots,Z_n)=1$. \n\n\nWe treat this measure as a probability measure. The related ensemble is called the ensemble of $n$ independent \ncomplex Ginibre ensembles. 
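A minimal numerical sketch of this measure (ours, not from the original text): with the weight $e^{-|(Z_\alpha)_{ij}|^2}$ each entry is a standard complex Gaussian with $\mathbb{E}|Z_{ij}|^2=1$, so for a single Ginibre matrix one has exactly $\mathbb{E}\,\mathrm{tr}(ZZ^\dagger)=N^2$ and, by Wick's theorem, $\mathbb{E}\,\mathrm{tr}(AZBZ^\dagger)=\mathrm{tr}A\,\mathrm{tr}B$ for fixed $A,B$. A Monte Carlo check (the matrices $A$, $B$ and all sample sizes are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, samples = 3, 20000

# deterministic test matrices A, B (arbitrary sample choices)
A = np.diag([1.0, 2.0, 3.0])   # tr A = 6
B = np.diag([0.5, 1.0, 1.5])   # tr B = 3

acc_tr, acc_AB = 0.0, 0.0
for _ in range(samples):
    # complex Gaussian entries with E|Z_ij|^2 = 1, matching exp(-|Z_ij|^2)
    Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    acc_tr += np.trace(Z @ Z.conj().T).real
    acc_AB += np.trace(A @ Z @ B @ Z.conj().T).real

mean_tr = acc_tr / samples   # should approach N^2 = 9
mean_AB = acc_AB / samples   # should approach tr(A) tr(B) = 18
print(mean_tr, mean_AB)
```

The second moment is the $\lambda=(1)$ shadow of the Schur-function integrals used in the proofs below, since $s_{(1)}(X)=\mathrm{tr}\,X$.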
\nThe expectation of a quantity\n$f$ which depends on the entries of the matrices $Z_1,\\dots,Z_n$ is defined by\n$$\n\\mathbb{E}(f)=\\int f(Z_1,\\dots,Z_n) d\\Omega(Z_1,\\dots,Z_n).\n$$\n\nLet us introduce\nthe following products\n\\begin{eqnarray}\\label{Z}\nX&:=&(Z_1 C_1) \\cdots (Z_n C_n)\\\\ \n\\label{tildeZ^*}\nY_{t}&:=& Z^\\dag_n Z^\\dag_{n-1} \\cdots Z^\\dag_{t+1} Z^\\dag_1Z^\\dag_2 \\cdots Z^\\dag_{t} ,\\qquad 0< t < n\n\\end{eqnarray}\nwhere $Z_\\alpha^\\dag$ is the Hermitian conjugate of $Z_\\alpha$. \nWe are interested in correlation functions of spectral invariants of the matrices $X$ and $Y_t$.\n\nWe denote by $x_1,\\dots,x_N$ and by $y_1,\\dots,y_N$ the eigenvalues of the matrices $X$ and $Y_t$, respectively.\nGiven partitions $\\lambda=(\\lambda_1,\\dots,\\lambda_l)$ and $\\mu=(\\mu_1,\\dots,\\mu_k)$ with $l,k\\le N$, let us introduce the following \nspectral invariants\n\\begin{equation}\n{\\bf P}_\\lambda (X)=p_{\\lambda_1}(X)\\cdots p_{\\lambda_l} (X),\\qquad\n{\\bf P}_\\mu(Y_t)=p_{\\mu_1}(Y_t)\\cdots p_{\\mu_k}(Y_t)\n\\end{equation}\nwhere each $p_m(X)$ is defined via (\\ref{p_m(X)}).\n\nFor a given partition $\\lambda$, such that $d:=|\\lambda|\\le N$, let us consider the spectral invariant \n${\\bf P}_\\lambda$ of the matrix $XY_{t}$ \n(see (\\ref{spectral-invariant})). We have \n\n\n\\begin{Theorem}\\label{Theorem1}\n Let $X$ and $Y_t$ be defined by (\\ref{Z})-(\\ref{tildeZ^*}).\n Denote $\\textsc{e}=2-2g$. \n \n(A) Let $n > t=2g \\ge 0$. \nThen\n \\[\n \\mathbb{E}\\left({\\bf P}_\\lambda (X Y_{2g})\\right)=\n \\]\n \\begin{equation}\n z_\\lambda \\sum_{\\Delta^1,\\dots,\\Delta^{n-2g+1}\\atop |\\lambda|=|\\Delta^{j}| =d,\\, j\\le n-2g+1}\nH^{2-2g,n+2-2g}(d;\\lambda,\\Delta^{1},\\dots,\\Delta^{n-2g+1})P_{\\Delta^{n-2g+1}}(C' C'')\n\\prod_{i=1}^{n-2g} P_{\\Delta^i}(C_{2g+i})\n \\end{equation}\n where\n \\begin{equation}\\label{C'C''2g}\n C' = C_1C_3 \\cdots C_{2g-1} ,\n\\qquad\nC''=C_2C_4\\cdots C_{2g}\n\\end{equation} \n\n\n(B) Let $n > t=2g+1 \\ge 1$. 
\nThen\n \\[\n \\mathbb{E}\\left({\\bf P}_\\lambda (X Y_{2g+1})\\right)=\n \\] \n \\begin{equation}\n z_\\lambda \\sum_{\\Delta^1,\\dots,\\Delta^{n-2g+1}\\atop |\\lambda|= |\\Delta^{j}|=d,\\,j\\le n-2g+1}\nH^{2-2g,n+2-2g}(d;\\lambda,\\Delta^{1},\\dots,\\Delta^{n-2g+1})P_{\\Delta^{n-2g}}(C')P_{\\Delta^{n-2g+1}}(C'')\n\\prod_{i=1}^{n-2g-1} P_{\\Delta^i}(C_{2g+1+i})\n \\end{equation}\n where\n \\begin{equation}\\label{C'C''2g+1}\nC'= C_1C_3 \\cdots C_{2g+1} ,\n\\qquad\nC''=C_2C_4\\cdots C_{2g}\n\\end{equation} \n \n\\end{Theorem}\n\n\n\\begin{Corollary}\nLet $|\\lambda|=d\\le N$ as before, and let each $C_i=I_N$ ($N\\times N$ unity matrix).\nThen\n \\begin{equation}\n \\frac{1}{z_\\lambda}\\mathbb{E}\\left({\\bf P}_\\lambda (X Y_{2g})\\right)=\n \\frac{1}{z_\\lambda}\\mathbb{E}\\left({\\bf P}_\\lambda (X Y_{2g+1})\\right)=\n N^{nd-\\ell(\\lambda)} \\sum_{\\textsc{e}'} N^{\\textsc{e}'}\n S^{\\textsc{e}'}_{\\textsc{e}}(\\lambda)\n \\end{equation}\n where $\\textsc{e}=2-2g$ and where\n \\begin{equation}\n S^{\\textsc{e}'}_{\\textsc{e}}(\\lambda) :=\n\\sum_{\\Delta^1,\\dots,\\Delta^{n+\\textsc{e}-1}\\atop \n\\sum_{i=1}^{n+\\textsc{e}-1}\\ell(\\Delta^{i}) =L}\nH^{\\textsc{e},n+\\textsc{e}}(d;\\lambda,\\Delta^{1},\\dots,\\Delta^{n+\\textsc{e}-1}),\\quad L=-\\ell(\\lambda)+ nd +\\textsc{e}' \n \\end{equation}\nis the sum of Hurwitz numbers counting all $d$-fold coverings with the following properties: \n\n(i) the Euler characteristic of the base surface is $\\textsc{e}$ \n\n(ii) the Euler characteristic of the cover is $\\textsc{e}'$ \n\n(iii) there are at most $\\textsc{f}=n+\\textsc{e}$ critical points\n\n\n\\end{Corollary}\n\nThe item (ii) in the Corollary follows from the equality ${\\bf P}_\\Delta(I_N)=N^{\\ell(\\Delta)}$ \n(see (\\ref{spectral-invariant}) and (\\ref{p_m(X)})) and\nfrom the Riemann-Hurwitz relation which relates Euler characteristics of a base and a cover via branch points\nprofile's lengths (see 
(\\ref{RH})):\n$$\n\\sum_{i=1}^{n+\\textsc{e}-1}\\ell(\\Delta^{i}) =-\\ell(\\lambda) +(\\textsc{f}- \\textsc{e})d +\\textsc{e}'\n$$ \nIn our case $\\textsc{f}- \\textsc{e}=n$.\n\n\n\n\\begin{Theorem} \\label{Theorem2}\nLet $X$ and $Y_t$ be defined by (\\ref{Z})-(\\ref{tildeZ^*}).\n \n (A) If $|\\lambda|\\neq |\\mu|$ then\n $\\mathbb{E}\\left({\\bf P}_\\lambda (X) {\\bf P}_\\mu(Y_t)\\right)=0$.\n\n \n (B) Let $|\\lambda| = |\\mu| =d$ and $n-1 > t=2g+1 \\ge 1$.\n \n\nThen\n \\[\n \\mathbb{E}\\left({\\bf P}_\\lambda (X) {\\bf P}_\\mu(Y_{2g+1})\\right)=\n \\]\n \\begin{equation}\n z_\\lambda z_\\mu \n \\sum_{\\Delta^1,\\dots,\\Delta^{n-2g}\\atop |\\lambda|= |\\Delta^{j}|=d,\\,j\\le n-2g}\nH^{2-2g,n+2-2g}(d;\\lambda,\\mu,\\Delta^{1},\\dots,\\Delta^{n-2g})P_{\\Delta^{n-2g-1}}(C')P_{\\Delta^{n-2g}}(C_{n}C'')\n\\prod_{i=1}^{n-2g-2} P_{\\Delta^i}(C_{2g+1+i}) \n \\end{equation} \n where $C'$ and $C''$ are given by (\\ref{C'C''2g+1}).\n \n (C) \n Let $|\\lambda| = |\\mu| = d$ and $n > t=2g \\ge 0$. \n\nThen\n \\[\n \\mathbb{E}\\left({\\bf P}_\\lambda (X) {\\bf P}_\\mu(Y_{2g})\\right)=\n \\]\n \\begin{equation}\n z_\\lambda z_\\mu \n \\sum_{\\Delta^1,\\dots,\\Delta^{n-2g}\\atop |\\lambda|=|\\Delta^{j}| =d,\\, j\\le n-2g}\nH^{2-2g,n+2-2g}(d;\\lambda,\\mu,\\Delta^{1},\\dots,\\Delta^{n-2g})P_{\\Delta^{n-2g}}(C'C_n C'')\n\\prod_{i=1}^{n-2g-1} P_{\\Delta^i}(C_{2g+i})\n \\end{equation}\n where $C'$ and $C''$ are given by (\\ref{C'C''2g}).\n \n \n\\end{Theorem}\n\n\n\\begin{Corollary}\n Let $|\\lambda|=d\\le N$ as before, and let each $C_i=I_N$.\nThen\n \\begin{equation}\n \\frac{1}{z_\\lambda z_\\mu}\\mathbb{E}\\left({\\bf P}_\\lambda (X){\\bf P}_\\lambda(Y_{2g})\\right)=\n \\frac{1}{z_\\lambda z_\\mu}\\mathbb{E}\\left({\\bf P}_\\lambda (X){\\bf P}_\\lambda(Y_{2g+1})\\right)\n =\n \\frac{1}{z_\\lambda}\\mathbb{E}\\left({\\bf P}_\\lambda (X Y_{2g})\\right)=\n \\frac{1}{z_\\lambda}\\mathbb{E}\\left({\\bf P}_\\lambda (X Y_{2g+1})\\right)\n 
\\end{equation}\n\n\n\n\\end{Corollary}\n\n\n\n\\begin{Theorem} \n\\label{Theorem3}\nLet $X$ and $Y_t$ be defined by (\\ref{Z})-(\\ref{tildeZ^*}).\n \n \n (A) Let $n-1 > t=2g+1 \\ge 1$.\n \n\nThen\n \\[\n \\mathbb{E}\\left({\\bf P}_\\lambda (X) \\tau^{\\rm BKP}_1(Y_{2g+1})\\right)=\n \\]\n \\begin{equation}\n z_\\lambda \n \\sum_{\\Delta^1,\\dots,\\Delta^{n-2g}\\atop |\\lambda|= |\\Delta^{j}|=d,\\,j\\le n-2g}\nH^{1-2g,n+1-2g}(d;\\lambda,\\Delta^{1},\\dots,\\Delta^{n-2g})P_{\\Delta^{n-2g-1}}(C')P_{\\Delta^{n-2g}}(C_{n}C'')\n\\prod_{i=1}^{n-2g-2} P_{\\Delta^i}(C_{2g+1+i}) \n \\end{equation} \n where $C'$ and $C''$ are given by (\\ref{C'C''2g+1}).\n \n (B) \n Let $n > t=2g \\ge 0$ . \n\nThen\n \\[\n \\mathbb{E}\\left({\\bf P}_\\lambda (X) \\tau^{\\rm BKP}_1(Y_{2g})\\right)=\n \\]\n \\begin{equation}\n z_\\lambda \n \\sum_{\\Delta^1,\\dots,\\Delta^{n-2g}\\atop |\\lambda|=|\\Delta^{j}| =d,\\, j\\le n-2g}\nH^{1-2g,n+1-2g}(d;\\lambda,\\Delta^{1},\\dots,\\Delta^{n-2g})P_{\\Delta^{n-2g}}(C'C_n C'')\n\\prod_{i=1}^{n-2g-1} P_{\\Delta^i}(C_{2g+i})\n \\end{equation}\n where $C'$ and $C''$ are given by (\\ref{C'C''2g}).\n \n \n\\end{Theorem}\n\nSketch of the proof.\n \nWe use the character Frobenius-type formula of Mednykh-Pozdnyakova-Jones \\cite{M2},\\cite{GARETH.A.JONES}:\n\\begin{equation}\\label{Hurwitz-number}\nH^{\\textsc{e},k}(d;\\Delta^1,\\dots,\\Delta^{k})=\\sum_{\\lambda\\atop |\\lambda|=d}\n\\left(\\frac{{\\rm dim}\\lambda}{d!} \\right)^{\\textsc{e}}\\varphi_\\lambda(\\Delta^1)\\cdots \n\\varphi_\\lambda(\\Delta^k)\n\\end{equation}\nwhere \n${\\rm dim}\\lambda$ is the dimension of the irreducible representation of $S_d$, and\n\\begin{equation}\n\\label{varphi}\n\\varphi_\\lambda(\\Delta^{(i)}) := |C_{\\Delta^{(i)}}|\\,\\,\\frac{\\chi_\\lambda(\\Delta^{(i)})}{{\\rm dim}\\lambda} ,\n\\quad {\\rm dim}\\lambda:=\\chi_\\lambda\\left((1^d)\\right)\n\\end{equation}\nHere $\\chi_\\lambda(\\Delta)$ is the character of the symmetric group $S_d$ evaluated at a cycle type 
$\\Delta$,\nand $\\chi_\\lambda$ ranges over the irreducible complex characters of $S_d$ (they are\nlabeled by partitions $\\lambda=(\\lambda_1,\\dots,\\lambda_{\\ell})$ of a given weight $d=|\\lambda|$). It \nis supposed that $d=|\\lambda|=|\\Delta^{1}|=\\cdots =|\\Delta^{k}|$.\n$|C_\\Delta |$ is the cardinality of the cycle\nclass $C_\\Delta$ in $S_d$.\n\nThen we use the characteristic map relation\n\\cite{Mac}:\n\\begin{equation}\\label{Schur-char-map}\ns_\\lambda(\\mathbf{p})=\\frac{{\\rm dim}\\lambda}{d!}\\left(p_1^d+\\sum_{\\Delta\\atop |\\Delta|=d } \\varphi_\\lambda(\\Delta)\\mathbf{p}_{\\Delta}\\right)\n\\end{equation}\nwhere $\\mathbf{p}_\\Delta=p_{\\Delta_1}\\cdots p_{\\Delta_{\\ell}}$ and where $\\Delta=(\\Delta_1,\\dots,\\Delta_\\ell)$ is a partition whose weight\ncoincides with the weight of $\\lambda$: $|\\lambda|=|\\Delta|$. Here \n\\begin{equation}\n{\\rm dim}\\lambda =d!s_\\lambda(\\mathbf{p}_\\infty),\\qquad \\mathbf{p}_\\infty = (1,0,0,\\dots)\n\\end{equation}\nis the dimension of the irreducible representation of the symmetric group $S_d$. It is understood that \n$\\varphi_\\lambda(\\Delta)=0$ if $|\\Delta|\\neq |\\lambda|$.\n\nThen we know how to evaluate matrix integrals of Schur functions via the following Lemma,\nused in \\cite{O-2004-New} and \\cite{NO-2014}, \\cite{NO-LMP}\n(for instance see \\cite{Mac} for the derivation). \n\\begin{Lemma} \\label{useful-relations}\nLet $A$ and $B$ be normal matrices (i.e. matrices diagonalizable by unitary transformations). Below $p_{\\infty}=(1,0,0,\\dots)$. Then
\n\\begin{equation}\\label{sAZBZ^+'}\n\\int_{\\mathbb{C}^{n^2}} s_\\lambda(AZBZ^+)e^{-\\textrm{Tr}\nZZ^+}\\prod_{i,j=1}^n d^2Z=\n\\frac{s_\\lambda(A)s_\\lambda(B)}{s_\\lambda(p_{\\infty})}\n\\end{equation}\nand\n\\begin{equation}\\label{sAZZ^+B'}\n\\int_{\\mathbb{C}^{n^2}} s_\\mu(AZ)s_\\lambda(Z^+B) e^{-\\textrm{Tr}\nZZ^+}\\prod_{i,j=1}^nd^2Z= \\frac{s_\\lambda(AB)}{s_\\lambda(p_{\\infty})}\\delta_{\\mu,\\lambda}\\,.\n\\end{equation}\n\\end{Lemma}\nTo prove Theorem \\ref{Theorem1} we equate the expectation $\\mathbb{E}(\\tau^{\\rm 2KP}_1(XY_t,\\mathbf{p}))$ evaluated using this Lemma\nand (\\ref{2KP-tau-vac-Schur}) to the same expectation evaluated using (\\ref{tau_1}).\nTo prove Theorem \\ref{Theorem2} we equate $\\mathbb{E}(\\tau^{\\rm 2KP}_1(X,\\mathbf{p})\\tau^{\\rm 2KP}_1(Y_t,\\mathbf{p}'))$ in a similar way.\nTo prove Theorem \\ref{Theorem3} we treat $\\mathbb{E}(\\tau^{\\rm 2KP}_1(X,\\mathbf{p})\\tau^{\\rm BKP}_1(Y_t))$ in the same way, taking into account\nalso (\\ref{tau_1^B}).\n\n\n\n\n\\section*{Acknowledgements}\n\nThe work has been funded by the RAS Program ``Fundamental problems of nonlinear mechanics'' and by the Russian Academic Excellence Project '5-100'.\nI thank A. Odziyevich and the University of Bialystok for the warm hospitality which made\nit possible to complete this work. I am grateful to S. Natanzon, A. Odziyevich, J. Harnad, A. Mironov (ITEP) and\nto van de Leur for various remarks concerning the questions connected with this work. Special gratitude\nto E. Strakhov for drawing my attention to the works on quantum chaos devoted to products \nof random matrices and for fruitful discussions.\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\n\nThe Gross-Pitaevskii (GP) equation was independently derived by L.P.~Pitaevskii \\cite{Pitaevskii61} and E.P.~Gross \\cite{Gross61} in 1961. It describes a superfluid gas of weakly interacting bosons at zero temperature. 
The solution of the equation is a complex function $\\Psi=|\\Psi| \\exp i \\varphi$, whose modulus squared represents the particle density, $n=|\\Psi|^2$, and the gradient of the phase gives the local velocity of the fluid, ${\\bf v}= (\\hbar\/m)\\nabla \\varphi$, where $m$ is the particle mass. In the derivation by L.P. Pitaevskii, the GP equation emerges as a generalization of Bogoliubov's theory \\cite{Bogolyubov47} to a spatially inhomogeneous superfluid \\cite{Ginzburg58}. A quantized vortex can exist as a stationary solution of the GP equation where all particles circulate with the same angular momentum $\\hbar$ around a line where the density vanishes; the solution has the form $\\sqrt{n(r)} \\exp i \\varphi$, where now $\\varphi$ is the angle around the vortex axis and $r$ is the distance from the axis in cylindrical coordinates. The density $n(r)$ is a smooth function which increases from $0$ to a constant asymptotic value $n_0$ over a length scale characterized by $\\xi$, known as the {\\it healing length}, determined by $n_0$ and the strength of the interaction. \n\nQuantized vortices have been extensively studied in superfluid $^4$He \\cite{Donnelly91}, which is a strongly correlated liquid. The core of the vortex in $^4$He is only qualitatively captured by the GP equation and more refined theories are needed to account for the atom-atom interactions and many-body effects \\cite{Dalfovo92,Ortiz95,Vitiello96,Giorgini96,Galli14}. A direct comparison between theory and experiment for the structure of the vortex core is not available, and is likely unrealistic, the main reason being that the core size in $^4$He is expected to be of the same order as the atom size. The only way to observe such a vortex thus consists of looking at its effects on the motion of impurities that may be attached to it. 
Electrons \\cite{Yarmchuk79,Yarmchuk82,Maris09,Maris11}, solid hydrogen particles \\cite{Bewley06,Bewley08,Paoletti10,Paoletti11,Fonda14}, and $^4$He$_2^*$ excimer molecules \\cite{Zmeev13} have been used for this purpose. These impurities act as tracers for the position of vortex filaments in order to infer their motion on a macroscopic scale, but the fine structure of the core remains inaccessible. Furthermore, impurities may themselves affect the dynamics of the vortex filaments \\cite{Barenghi09}. \n\nIn dilute ultracold atomic gases the situation is more favorable. On the one hand, the GP theory furnishes a very accurate description of the system in regimes of temperature and diluteness that are attainable in typical experiments with trapped Bose-Einstein condensates (BECs) \\cite{Dalfovo99,PSbook16}. On the other hand, beginning with a series of seminal experiments \n\\cite{Matthews99,Madison00,Anderson2000,Anderson2001,Haljan01,AboShaeer01,Hodby02},\nquantized vortices are routinely produced and observed with different techniques (see \\cite{Fetter09} for a review). \n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.9\\textwidth]{vortex-core-fig1.png}\n\\caption{Experimental absorption images of a condensate with $7 \\times 10^6$ atoms after $120$~ms of free expansion. The small blue ellipse at the center of (a) represents the shape of the trapped condensate before the expansion, which is an elongated ellipsoid with the long axis in the $x$-direction. The expansion is faster in the transverse direction, so that the aspect ratio is inverted and the atomic distribution acquires a pancake shape. (a) Column density along a transverse direction. The faint vertical stripe is a signature of the presence of a vortex, and its shape is an interference pattern originating from the anisotropic velocity field around the vortex and the velocity field of the expansion. The field of view is $1.3 \\times 3$~mm. (b) Column density along the axial direction. 
The vortex is almost invisible. The field of view is $3 \\times 3$~mm. (c) Residual column density. From the previous image we subtract spurious interference fringes, due to imperfections in the optical imaging, and the background density, using a Thomas-Fermi fit (see text). The result is an image of the residual column density which neatly reveals a vortex filament. (d)-(g) Other examples of vortex filaments shown by the residual column density for different condensates with one or more vortices. Note that even though the {\\it in-situ} condensate is always isotropic in the $y$-$z$ plane it becomes slightly elliptic after a long expansion due to a residual curvature of the magnetic field used to levitate the condensate against gravity.}\n\\label{fig:expt-images}\n\\end{figure*}\n\nDespite such an abundance of work, it may sound surprising that no detailed quantitative comparison between theory and experiment for the\nstructure of the vortex core in three-dimensional (3D) condensates has yet been performed. A reason is that the healing length $\\xi$ in typical trapped BECs, though much larger than in liquid $^4$He, is still smaller than the optical resolution, which is limited by the wavelength of the laser beams used for imaging. Another reason is that, when illuminating the atomic cloud with light, the result is the optical density, which is determined by an integral of the density along the imaging axis (column density); thus, a vortex filament has a strong contrast only if it is rectilinear and aligned along the imaging axis. One can overcome the first limitation by switching off the confining potential, letting the condensate freely expand. The vortex core expands as well, at least as fast as the condensate radius \\cite{Lundh98,Dalfovo00,Hosten03,Teles13a}, so that it can become visible after a reasonable expansion time. Concerning vortex alignment, one can strongly confine a BEC along one spatial direction, squeezing it to within a width of several $\\xi$. 
In such a geometry, vortices orient themselves along the short direction, thus behaving as point-like topological defects in a quasi-2D system rather than filaments in a 3D fluid (a recent discussion about the structure of the vortex core in expanding quasi-2D condensates can be found in \\cite{Sang13}). Conversely, if the condensate width is significantly larger than $\\xi$ in all directions, the vortex filaments can easily bend \\cite{Aftalion01,Garcia01,Aftalion03,Modugno03}, with a consequent reduction of their visibility in the column density. Bent vortex filaments have indeed been observed in \\cite{Raman01,Rosenbusch02,Bretin03,Donadello14}.\nBending and optical resolution particularly limit the quality of comparisons between theory and experiment for the structure of the vortex core (see Fig.~14.10 in \\cite{PSbook16}). \n\nIn this work, we show that 3D vortex filaments can be optically observed with enough accuracy to permit a direct comparison with the predictions of the GP theory. In our experiment, we produce large condensates of sodium atoms in an elongated axially symmetric harmonic trap and we image each condensate, in both the axial and a transverse direction, after free expansion. When a vortex filament is present, it produces a visible modification of the column density distribution of the atoms. \nWe use numerical GP simulations, as well as scaling laws which are valid for the expansion of large condensates, to make direct comparisons with our experimental observations and find good agreement. \n\n\\section{Experiment}\n\nWe produce ultracold samples of sodium atoms in the internal state $|3S_{1\/2},F=1,m_{\\mathrm{F}}=-1\\rangle$ in a cigar-shaped harmonic magnetic trap with trap frequencies $\\omega_x\/2 \\pi=9.3$~Hz and $\\omega_\\perp\/2 \\pi=93$~Hz. The thermal gas is cooled via forced evaporative cooling and pure BECs of typically around $10^7$ atoms are finally obtained with negligible thermal component. 
The evaporation ramp in the vicinity of the BEC phase transition is performed at different rates: slow quenches eventually produce condensates which are almost in their ground state, while faster quenches lead to the formation of quantized vortices in the condensate as a result of the Kibble-Zurek mechanism \\cite{Lamporesi13,Donadello16}. The quench rate can be chosen in such a way as to obtain condensates with one vortex on average. \n\nThe trapped condensate has a radial width on the order of $30 \\ \\mu$m and an axial width that is $10$ times larger. The healing length in the center of the condensate is about $0.2\\ \\mu$m, smaller than the optical resolution. It is also about two orders of magnitude smaller than the radial width of the condensate, which means that, as far as the density distribution is concerned, a vortex is a thin filament living in a 3D superfluid background with smoothly varying density, and the local properties of the vortex core are hence almost unaffected by boundary conditions. However, boundaries are still important for the superfluid velocity field. In fact, the ellipsoidal shape of the condensate causes a preferential alignment of the vortex filament along a (randomly chosen) radial direction so as to minimize its energy. Moreover, this geometry makes the flow around the vortex line anisotropic, meaning that on the larger scale of the entire condensate a vortex behaves as an almost planar localized object. For this reason, such vortices in elongated condensates are also known as solitonic vortices \\cite{Donadello14,Brand02,Komineas03,Ku14}. For our purposes, such localization is an advantage since it significantly reduces the bending of the vortex filaments, while at the same time keeping their local core structure three dimensional. 
\n\nObservations are performed by releasing the atoms from the trap and taking simultaneous absorption images of the full atomic distribution along the radial and axial directions after a sufficiently long expansion in free space, so that the vortex core becomes larger than the imaging resolution \\cite{Donadello14,Donadello16}. The presence of a levitating magnetic field gradient makes it possible to achieve long expansion times preventing the BEC from falling. Typical images are shown in Fig.~\\ref{fig:expt-images}. In the radial direction (panel {\\it a}), the vortex is seen as a dark stripe. This soliton-like character is due to the interference of the two halves (ends) of the elongated condensate which, on the large length scale of the entire condensate, have approximately a $\\pi$ phase difference \\cite{Donadello14,Ku14,Tylutki15}.\nIf a vortex filament is parallel to the imaging direction, the dark stripe exhibits a central dip, corresponding to the vortex core seen along its axis, and a twist due to the anisotropic quantized circulation. The $2\\pi$ phase winding around the vortex core was also detected in the same setup \\cite{Donadello14} by means of an interferometric technique based on a sequence of Bragg pulses. In the axial direction (panel {\\it b}), the soliton-like character is integrated out and the vortex filament is only a faint (and almost invisible) perturbation in the column density. However, by subtracting the background represented by a condensate without any vortex, the filament clearly emerges in the residual density distribution (panel {\\it c}). In the following we show how this signal can be used to extract quantitative information on the vortex structure after expansion, and how this is related to the shape of the vortex core in the condensate {\\it in-situ}, before the expansion. 
\n\n\\section{Theory}\n\nThe GP equation for the macroscopic wave function $\\Psi({\\bf r},t)$ for a BEC of weakly interacting bosons of mass $m$ at zero temperature is \\cite{Pitaevskii61,Gross61,Dalfovo99,PSbook16}\n\\begin{equation}\ni \\hbar \\frac{\\partial \\Psi}{\\partial t} = \\left( -\\frac{\\hbar^2 \\nabla^2}{2m} + V_{\\rm ext} + g |\\Psi|^2 \\right) \\Psi ,\n\\end{equation}\nwhere $V_{\\rm ext}$ is the external potential and $t$ is time. The quantity $g$ is a coupling constant characterizing the interaction between the atoms, which is positive for our condensates. The stationary version of the GP equation is obtained by choosing $\\Psi({\\bf r},t)= \\psi({\\bf r}) \\exp (-i\\mu t\/\\hbar)$, so that\n\\begin{equation}\n\\left( -\\frac{\\hbar^2 \\nabla^2}{2m} + V_{\\rm ext} + g |\\psi|^2 \\right) \\psi = \\mu \\psi ,\n\\label{eq:stationaryGP}\n\\end{equation}\nwhere $\\mu$ is the chemical potential and $n=|\\psi|^2$ is the density. In our case, we use the stationary GP equation to describe the condensate confined by the axially symmetric harmonic potential $V_{\\rm ext}=(m\/2)[\\omega_x^2 x^2+\\omega_\\perp^2 (y^2+z^2)]$, with the aspect ratio $\\lambda=\\omega_x\/\\omega_\\perp=0.1$, as in the experiment. Then we simulate the expansion by using this solution as the $t=0$ starting condition for the solution of the time-dependent GP equation with $V_{\\rm ext}=0$. We simulate condensates with and without a vortex. In the former case, the vortex is rectilinear, passing through the center and aligned along the $z$-axis. The need to accurately describe the dynamics of the system on both the scale of the healing length $\\xi$ and the scale of the width of the entire expanding condensate poses severe computational constraints. 
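A stationary solution of the GP equation can be obtained numerically, for instance by imaginary-time propagation with a split-step Fourier method. The sketch below is a generic, self-contained 1D toy version of such a solver (arbitrary units $\hbar=m=\omega=1$ and an invented coupling $g$; it is not the 3D production code used for the simulations in this paper), and compares the resulting chemical potential with the corresponding 1D Thomas-Fermi estimate:

```python
import numpy as np

# Minimal 1D imaginary-time split-step solver for the stationary GP
# equation (toy sketch in units hbar = m = omega = 1; the parameters
# are arbitrary and much smaller than the paper's 3D simulations).
N, L = 512, 20.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)
g = 50.0                       # 1D coupling constant (toy value)
V = 0.5*x**2                   # harmonic trap
dt = 2e-3                      # imaginary-time step

psi = np.exp(-x**2).astype(complex)            # initial guess
psi /= np.sqrt((np.abs(psi)**2).sum()*dx)

for _ in range(5000):
    psi *= np.exp(-0.5*dt*(V + g*np.abs(psi)**2))            # half potential step
    psi = np.fft.ifft(np.exp(-0.5*dt*k**2)*np.fft.fft(psi))  # kinetic step
    psi *= np.exp(-0.5*dt*(V + g*np.abs(psi)**2))            # half potential step
    psi /= np.sqrt((np.abs(psi)**2).sum()*dx)                # renormalize

# Chemical potential of the converged state, mu = <psi| H_GP |psi>
kin = np.fft.ifft(0.5*k**2*np.fft.fft(psi))
mu = (np.conj(psi)*(kin + (V + g*np.abs(psi)**2)*psi)).real.sum()*dx
# 1D Thomas-Fermi estimate from normalizing n_TF = (mu - V)/g:
mu_TF = (0.75*g/np.sqrt(2))**(2.0/3.0)
print(mu, mu_TF)   # should be close for large g
```

Here $\mu_{\rm TF}=(3g/(4\sqrt{2}))^{2/3}$ follows from requiring $\int n_{\rm TF}\,dx=1$; the numerically converged $\mu$ approaches this value as $g$ grows, illustrating on a small scale why the full 3D problem, which must also resolve the healing length, is computationally demanding.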
With this in mind, we are only able to perform simulations up to values of the chemical potential on the order of $10\\hbar \\omega_\\perp$, which are smaller than the experimental values, ranging from about $15$ to $30\\hbar \\omega_\\perp$. Experiments can also be performed for smaller values of $N$, and hence smaller $\\mu$, but fluctuations in the density distribution become relatively larger with decreasing $N$, and the signal-to-noise ratio for the visibility of vortices in axial imaging becomes too small. The comparison between theory and experiments hence requires an extrapolation of the GP results to larger $\\mu$ and this is possible thanks to scaling laws which are valid for large condensates. \n\nIf $\\mu$ is significantly larger than both $\\hbar \\omega_\\perp$ and $\\hbar \\omega_x$, then the ground state of the condensate, i.e., the lowest energy stationary solution of the GP equation, is well approximated by the Thomas-Fermi (TF) approximation, which corresponds to neglecting the first term in the parenthesis of Eq.~(\\ref{eq:stationaryGP}), so that the density becomes \\cite{Dalfovo99,PSbook16}\n\\begin{equation}\nn_{\\rm TF}(x,y,z) = \\frac{1}{g} \\left[\\mu -\\frac{1}{2}m \\omega_x^2 x^2 -\\frac{1}{2}m \\omega_\\perp^2 (y^2+ z^2) \\right] \n\\label{eq:TF}\n\\end{equation}\nwithin the central region where $n_{\\rm TF}$ is positive, and is $0$ elsewhere.\nWe can then define the boundary TF radii $R_x = (2\\mu\/m\\omega_x^2)^{1\/2}$ and $R_\\perp = (2\\mu\/m\\omega_\\perp^2)^{1\/2}$, the central density $n_0=\\mu\/g$, and the rescaled coordinates $\\tilde{x}=x\/R_x$, $\\tilde{y}=y\/R_\\perp$ and $\\tilde{z}=z\/R_\\perp$, and rewrite the density in the form \n\\begin{equation}\nn_{\\rm TF}(\\tilde{x},\\tilde{y},\\tilde{z}) = n_0 ( 1 - \\tilde{x}^2 - \\tilde{y}^2 - \\tilde{z}^2 ) \\, .\n\\label{eq:rescaledTF}\n\\end{equation}\nThis inverted parabola is a very good approximation for the density profiles of our condensates except in a narrow region 
near the condensate boundaries \\cite{Dalfovo96b}. \n\nIn the regime where the TF approximation is valid, the free expansion is governed by simple scaling laws \\cite{Castin96,Kagan96,Dalfovo97}. In particular, one can prove that the condensate preserves its shape with a rescaling of the TF radii in time according to $R_x(t)=b_x(t)R_x(0)$ and $R_\\perp(t)=b_\\perp(t)R_\\perp(0)$, where the scaling parameters $b_x$ and $b_\\perp$ are solutions of the coupled differential equations\n$\\ddot{b}_\\perp - \\omega_{\\perp}^2\/(b_xb_\\perp^3) = 0 $ and $\\ddot{b}_x - \\omega_{x}^2\/(b_x^2b_\\perp^2) = 0$,\nwith initial conditions $b_x=b_\\perp=1$ and $\\dot{b}_x=\\dot{b}_\\perp=0$ at $t=0$.\nBy using the aspect ratio $\\lambda$ and introducing the dimensionless time $\\tau=\\omega_{\\perp}t$, one can rewrite the same equations as \n\\begin{equation}\n\\frac{d^2 b_\\perp}{d \\tau^2} - \\frac{1}{b_xb_\\perp^3} = 0 \\ \\ \\ , \\ \\ \\ \n\\frac{d^2 b_x}{d \\tau^2} - \\frac{\\lambda^2}{b_x^2b_\\perp^2} = 0 \\, . \n\\label{eq:b}\n\\end{equation}\nAnalytic solutions exist in the limit $\\lambda \\ll 1$, that is, for a very elongated ellipsoid, for which one finds \\cite{Castin96}\n\\begin{eqnarray}\nb_\\perp (\\tau) & = & \\sqrt{1+\\tau^2} \\nonumber \\\\\nb_x (\\tau) & = & 1 + \\lambda^2 [ \\tau \\ {\\rm arctan} \\tau - \\ln \\sqrt{1+\\tau^2} \\ ] \\, .\n\\label{eq:banalytic} \n\\end{eqnarray}\nThe correction proportional to $\\lambda^2$ becomes vanishingly small in the limit of the infinite cylinder, where the condensate is known to follow a scaling behavior that preserves its radial shape, even in regimes where the TF approximation does not apply \\cite{Pitaevskii97}.\n\n\\begin{figure}[]\n\\includegraphics[width=0.9\\linewidth]{vortex-core-fig2.pdf}\n\\caption{Residual column density (\\ref{eq:nres}) calculated for a GP simulation of an expanding condensate with $\\mu=9.7 \\hbar \\omega_\\perp$ and with a vortex aligned along $z$, passing through the origin. 
Curves are plotted for different values of the expansion time, $\\tau=\\omega_\\perp t$, and are normalized to the value $n^{\\rm TF}_{\\rm col} (0,\\tau)$, which is the maximum of the fitted TF column density at the same time. The coordinate $\\tilde{y}=y\/R_\\perp$ is the distance from the vortex axis in units of the transverse TF radius obtained from the same fit. The spatial range is limited to half the TF radius in order to highlight the print of the vortex in the column density; the effects of the condensate boundaries are almost negligible in this range. }\n\\label{fig:restheo}\n\\end{figure}\n\n\\begin{figure}[]\n\\includegraphics[width=0.9\\linewidth]{vortex-core-fig3.pdf}\n\\caption{Time evolution of the depth (top) and width (bottom) of the depletion produced by a vortex in the residual column density of expanding condensates with different chemical potentials $\\mu$.\nDepth and width are defined as the amplitude and the width $\\sigma$ of a Gaussian fit, respectively.\nAs in Fig.~\\ref{fig:restheo}, these parameters are normalized by the central TF column density and the transverse TF radius.\nNote that to be consistent with our experiments, for the purpose of improving the fit quality, prior to fitting we average $\\delta n (\\tilde{y},\\tilde{z},\\tau)\/n^{\\rm TF}_{\\rm col} (0,\\tilde{z},\\tau)$ over different $z$ values within the interval [$-R_{\\perp}\/3,R_{\\perp}\/3$]. At very early times, $\\tau\\lesssim 3$, the dip in the residual is too small for the fit to quantitatively represent the vortex's characteristics. 
The dashed line is the prediction (\\ref{eq:maxresidualvortex}) of the empty core model.}\n\\label{fig:depthwidth}\n\\end{figure}\n\nThe TF density profile (\\ref{eq:rescaledTF}) is not only an accurate fitting function of the GP density distribution during the free expansion of an elongated condensate with $\\mu \\sim 10 \\hbar \\omega_{\\perp}$, but the TF radii extracted from the fit also agree with the scaling solutions of (\\ref{eq:b}), as well as with the analytic expressions (\\ref{eq:banalytic}), the discrepancy being less than $2$\\% in all our simulations, even for long expansion times. The agreement is expected to be even better for larger values of $\\mu$. This justifies the use of a TF fit to extract the residual density both in the experiments and in the GP simulations. The fit also provides the values of the TF radii and $n_0$ at any given time $t$, which can be used to rescale the coordinates and the density. \n\n For comparison with experiments, the key quantity is the column density, that is, the integral of the density along the imaging axis. Let us consider a cut of the density in the $z=0$ plane and define $n_{\\rm col}(\\tilde{y},t) = \\int d\\tilde{x} \\ n(\\tilde{x},\\tilde{y},0,t)$, where the integral is restricted to the region where the density is positive. Using the analytic TF density, one finds \n\\begin{equation}\nn^{\\rm TF}_{\\rm col}(\\tilde{y},t)=n^{\\rm TF}_{\\rm col} (0,t)(1 - \\tilde{y}^2)^{3\/2} \n\\label{eq:ncolaxial}\n\\end{equation}\nand we can finally define the residual column density as\n\\begin{equation}\n\\delta n (\\tilde{y},t) = n_{\\rm col} (\\tilde{y},t) - n^{\\rm TF}_{\\rm col}(\\tilde{y},t) \\, .\n\\label{eq:nres}\n\\end{equation}\nAn example is shown in Fig.~\\ref{fig:restheo}, where we plot $\\delta n$ obtained in the GP simulation of the expansion for a condensate with $\\mu= 9.7 \\hbar \\omega_\\perp$. 
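The $(1-\tilde{y}^2)^{3/2}$ form of the TF column density follows from integrating the inverted parabola along $\tilde{x}$ over the region where the density is positive. A quick numerical sanity check of this integral (our own sketch):

```python
import numpy as np

# Numerical check of the TF column density: integrating the rescaled
# profile n = 1 - x^2 - y^2 (z = 0 cut) along x, over the region where
# n > 0, gives (4/3)*(1 - y^2)^{3/2}, i.e. the (1 - y^2)^{3/2} law
# up to a constant prefactor.
x = np.linspace(-1.0, 1.0, 100001)
dx = x[1] - x[0]

def col_density(y):
    n = np.clip(1.0 - x**2 - y**2, 0.0, None)   # TF density, zero outside
    return n.sum()*dx                            # Riemann sum along x

for y in (0.0, 0.3, 0.6, 0.9):
    exact = (4.0/3.0)*(1.0 - y**2)**1.5          # analytic integral
    print(y, col_density(y), exact)
```

Normalizing by the central value reproduces exactly the $(1-\tilde{y}^2)^{3/2}$ profile used as the fitting function throughout this work.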
The figure shows that, as expected, a vortex produces a (column) density depletion whose depth is very small, i.e., only a few percent of the central column density of the condensate. It also shows that the depth increases in time during the expansion, while the width seems to remain almost constant. In Fig.~\\ref{fig:depthwidth} we show results for the depth and the width obtained in simulations of condensates with different chemical potentials, plotted as a function of the expansion time.\n\nThese results can be qualitatively understood by using a simplified model where the GP vortex core in the initial condensate is modelled by an empty cylinder of radius $r_v = c \\xi_0$, where $c$ is a number of order $1$ and $\\xi_0$ is the healing length of a uniform condensate with density $n_0$, which is given by $\\xi_0= \\hbar\/\\sqrt{2mgn_0} = \\hbar\/\\sqrt{2m\\mu}$. The rescaled radius is $\\tilde{r}_v=r_v\/R_\\perp= c \\xi_0\/R_\\perp= c \\hbar\\omega_\\perp\/2\\mu$. Then, let us assume that the initial expansion of the condensate is dominated by the mean-field interaction in the following sense: a segment of vortex filament near the center of the condensate expands as if it were in a uniform condensate, preserving its shape, but adiabatically following the time variation of the density of the medium around it. Hence, the vortex radius grows because the density decreases and the healing length is inversely proportional to $\\sqrt{n_0}$. Meanwhile, the transverse and axial TF radii $R_\\perp$ and $R_x$ grow, but with different scaling laws; such a difference is precisely the origin of the increased visibility of the vortex. The empty-cylinder model allows us to calculate the column density, analytically taking into account all of these effects. 
In particular, using the scaling law (\\ref{eq:banalytic}) and neglecting the $\\lambda^2$ term, one can easily prove that $\\tilde{r}_v$ is constant during the expansion, while the residual column density takes the form \n\\begin{equation}\n\\delta n (\\tilde{y},\\tau) = - \\frac{3 \\lambda \\tilde{r}_v}{2} n^{\\rm TF}_{\\rm col} (0;\\tau) \\sqrt{1+\\tau^2} \n(1 - \\tilde{y}^2) \\left( 1 - \\frac{\\tilde{y}^2}{\\tilde{r}_v^2} \\right)^{\\frac{1}{2}} \\ ,\n\\end{equation}\nand the normalized depth can be written as \n\\begin{equation}\n\\frac{|\\delta n (0,\\tau)|}{n^{\\rm TF}_{\\rm col} (0,\\tau)} = \\frac{3}{2} \\lambda \\tilde{r}_v \\sqrt{1+\\tau^2} \\, .\n\\label{eq:maxresidualvortex}\n\\end{equation}\nThe dashed line in Fig.~\\ref{fig:depthwidth} corresponds to this prediction when $c=1.6$ and $\\mu= 9.7 \\hbar \\omega_\\perp$. With the same parameters, the rescaled width of the empty cylinder is $\\tilde{r}_v \\sim 0.08$, which is in qualitative agreement with the data in the bottom panel of the same figure. However, the assumption of adiabaticity is expected to be valid only at short times, when the density of the expanding condensate remains sufficiently large. As the expansion proceeds, the mean-field interactions lose their strength and the velocity field gradually assumes the characteristics of a ballistic expansion \\cite{Lundh98,Dalfovo00}.\nThe crossover from mean-field to ballistic expansion is smooth and,\nfor reference, we note that a spherically trapped condensate is expected to decouple at around $\\tau_{\\rm dec} \\sim \\sqrt{2\\mu\/\\hbar\\omega}$ \\cite{Lundh98}, which, for $\\mu = 9.7\\hbar\\omega$, would correspond to $\\tau_{\\rm dec} \\sim 4$ in Fig.~\\ref{fig:depthwidth}.\nThe full GP simulations show that the width remains approximately constant throughout the simulation, while the depth significantly deviates from the $\\sqrt{1+\\tau^2}$ law and saturates to a constant value deep in the ballistic regime. 
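The numbers quoted above follow from simple arithmetic in the empty-cylinder model; the sketch below (units $\hbar=\omega_\perp=1$, with the values $c=1.6$, $\mu=9.7$, $\lambda=0.1$ used in the figure) reproduces the core radius $\tilde{r}_v\sim 0.08$, the depth law, and the decoupling time $\tau_{\rm dec}\sim 4$:

```python
import numpy as np

# Arithmetic of the empty-cylinder core model (hbar = omega_perp = 1),
# with the parameter values quoted in the text.
c, mu, lam = 1.6, 9.7, 0.1
r_v = c/(2.0*mu)            # rescaled core radius r_v = c*hbar*omega_perp/(2*mu)

def depth(tau):
    """Normalized depth |dn(0,tau)|/n_col^TF(0,tau) = (3/2)*lam*r_v*sqrt(1+tau^2)."""
    return 1.5*lam*r_v*np.sqrt(1.0 + tau**2)

tau_dec = np.sqrt(2.0*mu)   # decoupling time ~ sqrt(2*mu/(hbar*omega)), spherical trap
print(r_v)                  # ~0.08, as quoted
print(tau_dec)              # ~4, as quoted
print(depth(0.0), depth(tau_dec))
```

The $\sqrt{1+\tau^2}$ growth of the depth holds only while the expansion is mean-field dominated, consistent with the saturation seen in the full GP results beyond $\tau\sim\tau_{\rm dec}$.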
\n\n\\section{Experiment \\lowercase{vs.} Theory}\n\nIn this section, we compare the results of the experiments with the predictions of the GP theory for the overall shape, width and depth of the vortex in the residual column density. \n\nThe depth and the width after a given expansion time $t$ are shown in Fig.~\\ref{fig:mu_scaling} as a function of $1\/\\mu$. The two quantities are extracted from Gaussian fits, and normalized by the central TF column density and the transverse TF radius as in Fig.~\\ref{fig:depthwidth}. In the case of experimental data, we first select condensates exhibiting a rectilinear vortex filament near their center, at an axial distance smaller than $R_\\perp\/3$. We then fit the column density with the analytic TF profile, but excluding points lying within a few healing lengths of the filament. From the fit we obtain the chemical potential and the TF radii of the ``background\" condensate and, by subtracting this background from the column density, we get the residual $\\delta n (\\tilde{y})$, where $\\tilde{y}$ is taken to be orthogonal to the filament. In order to increase the signal-to-noise ratio we average the normalized depth $\\delta n(\\tilde{y})\/n_{\\rm col}^{\\rm TF}(0)$ over different $z$ values within the interval [$-R_{\\perp}\/3,R_{\\perp}\/3$]. \nMoreover, if a vortex line is displaced from the center by a distance $\\tilde \\rho = \\sqrt{\\tilde x ^ 2 +\\tilde y ^2 + \\tilde z ^2}$, its core structure is that of a vortex in a background condensate whose density is equal to $(1 - \\tilde \\rho^2)$ times the central density; we thus assign to the vortex a value of $\\mu$ corrected by the same factor.\nFinally, for long expansion times the residual external field makes the condensate slightly elliptic in the radial plane. For this reason, we use both $R_y$ and $R_z$ as independent TF radii and then we define $R_\\perp=\\sqrt{R_yR_z}$. 
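This fitting chain (TF background fit with the region around the vortex excluded, background subtraction, Gaussian fit of the residual) can be sketched on noiseless synthetic data as follows; the profile and dip parameters below are invented for illustration, and this is not the actual analysis code:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the analysis chain on a synthetic 1D cut: a TF column-density
# profile with a Gaussian vortex dip of known depth and width.
y = np.linspace(-0.95, 0.95, 401)
true_depth, true_width = 0.05, 0.08           # illustrative values only
signal = (1 - y**2)**1.5 - true_depth*np.exp(-y**2/(2*true_width**2))

def tf_profile(y, A, R):
    """TF column-density model, amplitude A and radius R."""
    u = np.clip(1.0 - (y/R)**2, 0.0, None)
    return A*u**1.5

# Step 1: fit the TF background, excluding points near the vortex.
mask = np.abs(y) > 3*true_width
(A, R), _ = curve_fit(tf_profile, y[mask], signal[mask], p0=(1.0, 1.0))

# Step 2: subtract the background to obtain the residual.
residual = signal - tf_profile(y, A, R)

# Step 3: fit the residual with a (negative) Gaussian.
def gauss(y, a, s):
    return -a*np.exp(-y**2/(2*s**2))

(depth, width), _ = curve_fit(gauss, y, residual, p0=(0.03, 0.1))
print(depth, width)   # should recover approximately (0.05, 0.08)
```

On clean data the chain recovers the injected depth and width; in the real analysis the same steps are applied to noisy images, with the residual additionally averaged along $z$ as described above.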
The same fitting procedure is applied to the GP density distributions, for which the condensate radius is always axially symmetric and the vortex is centered by construction. The experimental points correspond to four independent sets of data, where the cooling, evaporation, and imaging procedures are optimized for condensates with different atom numbers: red and orange points correspond to the largest condensates in our laboratory ($\\mu \\sim 30 \\hbar\\omega_\\perp$, $t=150$~ms and $120$~ms), blue points are the smallest condensates in which vortices are still observable ($\\mu \\sim 15 \\hbar\\omega_\\perp$, $t=100$~ms), while green points represent an old data set \\cite{Bisset17} for intermediate condensates ($\\mu \\sim 20 \\hbar\\omega_\\perp$, $t=120$~ms). Error bars account for statistical noise in the residual column density and for the uncertainties in the fit. \n\nThe GP results clearly show that the rescaled width $\\sigma\/R_\\perp$ scales linearly with $1\/\\mu$. This is consistent with the fact that, in the elongated geometry of our condensates, the rescaled width remains almost constant during the expansion.\nAnother way to understand this is to note that the in-trap width is proportional to $\\xi_0 \/R_\\perp$, and hence to $1\/\\mu$, and this scaling survives after long expansion times, even deep within the ballistic regime where length ratios become frozen.\nThe dashed line is a linear fit to the GP points, including the limiting case of an infinite condensate at $1\/\\mu=0$. Figure~\\ref{fig:mu_scaling} shows that the experimental data are in good agreement with the GP predictions, especially for the largest condensates, where the vortex signal-to-noise ratio is the largest. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=1\\linewidth]{vortex-core-fig4.pdf}\n\\caption{Depth (top) and width (bottom) of the depletion produced by a vortex in the residual column density for condensates of different $\\mu$. 
The black $+$ symbols are obtained from GP simulations for an expansion time $\\tau=\\omega_\\perp t= 70$, corresponding to $120$~ms; the point at $1\/\\mu=0$ is the limit of an infinitely large condensate, where both quantities must vanish. The dashed line in the bottom panel is the linear law $\\sigma\/R_\\perp \\sim \\xi_0 \/R_\\perp \\propto 1\/\\mu$ predicted by GP theory in the TF scaling regime. Points with error bars are the experimental data. The expansion time is $t=150$~ms (red), $t=120$~ms (green and orange) and $t=100$~ms (blue); varying $t$ in this range would change the vertical position of the experimental data by a negligible amount of the order of $1\\%$. The depth and width are calculated from Gaussian fits to both GP and experimental distributions of the residual column density by using the same procedure. }\n\\label{fig:mu_scaling}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=1\\linewidth]{vortex-core-fig5.pdf}\n\\caption{Residual column density after $150$~ms of free expansion for a condensate with $2 \\times 10^7$ atoms and $\\mu=33 \\hbar \\omega_\\perp$, containing a vortex. The inset shows the full residual column density in the $y$-$z$ plane. The quantity $\\delta n (\\tilde{y},\\tilde{z})\/n^{\\rm TF}_{\\rm col} (0,\\tilde{z})$ is averaged in the direction $z$ within the rectangular box and the resulting values (blue points) are plotted in the main panel as a function of the rescaled coordinate $\\tilde{y}=y\/R_\\perp$, with $\\tilde{y}=0$ at the vortex position. The solid line is the same quantity, obtained with the same fitting procedure applied to the GP residual column density of a condensate with $\\mu=9.7 \\hbar \\omega_\\perp$, after linearly rescaling its width according to the dashed line of Fig.~\\ref{fig:mu_scaling}, and reducing its depth to match the experimental value. 
}\n\\label{fig:profile}\n\\end{figure}\n\nFor the case of vortex depth, the GP theory does not provide any simple scaling law to compare with the experimental results considered here. The reason is that, as discussed in the previous section, the visibility of the vortex in the residual column density exhibits a nontrivial dependence on the expansion time, associated with the crossover from the mean-field dominated early stages of expansion to the later ballistic expansion dynamics. Eventually, for large $t$, the normalized depth saturates at a value weakly dependent on $\\mu$ (see Fig.~\\ref{fig:depthwidth}). The experimental points lie in a range fully compatible with a smooth interpolation from the GP results down to the infinite condensate limit, in the sense that any reasonable interpolating function would clearly pass through most of the experimental points, within the experimental uncertainties. \n\nIn Fig.~\\ref{fig:profile}, we show an example of vortex profile in a condensate with $2 \\times 10^7$ atoms and chemical potential $\\mu_{\\rm expt}=33 \\hbar \\omega_\\perp$, after an expansion time $t=150$~ms. The full residual column density $\\delta n (\\tilde{y},\\tilde{z})$ is plotted in the inset. The quantity $\\delta n (\\tilde{y},\\tilde{z})\/n^{\\rm TF}_{\\rm col} (0,\\tilde{z})$ is averaged in the $z$ direction within the rectangular box, and the resulting $\\delta n (\\tilde{y})\/n^{\\rm TF}_{\\rm col} (0)$ is shown in the main panel of the figure as a function of $\\tilde{y}$. In order to compare the experimental data with GP theory we proceed as follows. We first check that the shape of the vortex core in the residual column density of GP simulations with different values of $\\mu$ is the same up to a rescaling of the width and the depth as in Fig.~\\ref{fig:mu_scaling}, except for small fluctuations in the tails, which are expected to become negligible for large $\\mu$. 
This implies that the GP profile of $\\delta n (\\tilde{y})\/n^{\\rm TF}_{\\rm col} (0)$ for the experimental chemical potential $\\mu_{\\rm expt}=33 \\hbar \\omega_\\perp$ should be the same as for the GP simulation for $\\mu_{\\rm GP}=9.7 \\hbar \\omega_\\perp$, after rescaling the width linearly with $\\mu$ (dashed line in Fig.~\\ref{fig:mu_scaling}). The solid line in Fig.~\\ref{fig:profile} is the resulting GP profile, where we fixed the depth to the experimental value.\nThere is good agreement between theory and experiment for the overall shape, including quantitative agreement for the width. The depth has good qualitative agreement if one considers that the experimental value lies within a range between the GP results for smaller $\\mu$ and the trivial limit for $\\mu \\to \\infty$, in a way that is compatible with any reasonable smooth interpolation as already shown in the top panel of Fig.~\\ref{fig:mu_scaling}.\n \nIt is worth noticing that the optical resolution in our experiments is not limiting the comparison with theory. To check this, we convolve the GP profile with a Gaussian having a width in the range $\\sigma_{\\rm res} \\sim 2 - 3\\ \\mu$m, corresponding to our optical resolution, and we find that the effects on the points in Figs.~\\ref{fig:mu_scaling} and \\ref{fig:profile} are negligible (note that the vortex core in Fig.~\\ref{fig:profile} has a width $\\sigma \\sim 30\\ \\mu {\\rm m} \\gg \\sigma_{\\rm res}$). The fluctuations in the experimental data, which contribute to the error bars in Fig.~\\ref{fig:mu_scaling}, are dominated by photon shot-noise in the absorption images and by systematic spurious optical fringes which are not completely filtered out. \n\nFinally, we note that thermal atoms are not visible in our samples, which means that the temperature of the condensates is significantly smaller than the critical temperature for Bose-Einstein condensation. 
Nevertheless, a certain number of thermal atoms is still expected to be present in the trapped condensate, and some of them can be confined within the vortex core \\cite{Coddington04}. These atoms should not be present in the vortex core after the expansion, since their kinetic energy is sufficient to separate them from the expanding condensate, leaving an empty vortex core.\nIn any case, our observations suggest that the effect of thermal atoms on the {\\it in situ} vortex core is limited.\nIn fact, the good agreement that we find with GP theory (valid at zero temperature) is an indication that, if thermal atoms are present, their effects on the shape, width and depth of the vortex are negligible within the uncertainties of our experiments. \n\n\\section{Conclusion}\n\nIn summary, we have shown that quantized vortex filaments can be observed by optical means in 3D Bose-Einstein condensates of weakly interacting ultracold atoms, at a level of accuracy which is enough to allow for a direct comparison with the predictions of the Gross-Pitaevskii theory for the width, depth, and overall shape of the vortex core. We found good agreement between theory and experiment. We have performed experiments with large condensates of sodium atoms and compared the results to those obtained in numerical simulations. In order to make the vortex visible we let the condensate expand for a long time. The expansion dynamics were included in the numerical simulations. We have shown that Thomas-Fermi scaling laws, valid for large elongated condensates, can be efficiently used to relate the observed features after expansion to the structure of the vortex core in the initially trapped condensate. \\\\\n\n\\bigskip\n\n{\\bf Acknowledgments:}\nWe dedicate this paper to Lev P. 
Pitaevskii in celebration of his 85th birthday.\nNo words can express our gratitude for the times spent working alongside him and, of course, for his pioneering contributions to physics itself.\nThis work is supported by Provincia Autonoma di Trento and by QuantERA ERA-NET cofund project NAQUAS. \\\\\n\n\n\\section{Introduction}\nKeeping information secure has become a major concern with the advancement in technology. In this work, the information theory aspect of security is analyzed, as entropies are used to measure security. The system also incorporates some traditional ideas surrounding cryptography, namely Shannon's cipher system and adversarial attackers in the form of eavesdroppers. In cryptographic systems, there is usually a message in plaintext that needs to be sent to a receiver. In order to secure it, the plaintext is encrypted so as to prevent eavesdroppers from reading its contents. This ciphertext is then transmitted to the receiver. Shannon's cipher system (mentioned by Yamamoto\\cite{shannon1_yamamoto}) incorporates this idea. The definition of Shannon's cipher system has been discussed by Hanawal and Sundaresan~\\cite{hanawal_shannon}. In Yamamoto's~\\cite{shannon1_yamamoto} development on this model, a correlated source approach is introduced. This gives an interesting view of the problem, and is depicted in Figure~\\ref{fig:yamamoto_shannoncipher}. Correlated source coding incorporates the lossless compression of two or more correlated data streams. Correlated sources have the ability to decrease the bandwidth required to transmit and receive messages because a syndrome (compressed form of the original message) is sent across the communication links instead of the original message. A compressed message has more information per bit, and therefore has a higher entropy because the transmitted information is more unpredictable. 
The unpredictability of the compressed message is also beneficial in terms of securing the information. \n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics [scale = 0.7]{yamamoto_shannoncipher.pdf}\n\\caption{Yamamoto's development of the Shannon Cipher System}\n\\label{fig:yamamoto_shannoncipher}\n\\end{figure}\n\nThe source sends the information of the correlated sources $X$ and $Y$ along the main transmission channel. A key, $W_k$, is produced and used by the encoder when producing the ciphertext. The wiretapper has access to the transmitted codeword, $W$. The decoded codewords are represented by $\\widehat{X}$ and $\\widehat{Y}$. In Yamamoto's scheme the security level was also investigated and found to be $\\frac{1}{K} H(X^K,Y^K|W)$ (i.e. the joint entropy of $X$ and $Y$ given $W$, where $K$ is the length of $X$ and $Y$) when $X$ and $Y$ have equal importance, which is in accordance with traditional Shannon systems where the security is measured by the equivocation. When one source is more important than the other, the security level is measured by the pair of individual uncertainties $(\\frac{1}{K} H(X^K|W), \\frac{1}{K} H(Y^K|W))$. \n\nIn practical communication systems, links are prone to eavesdropping, and as such this work incorporates wiretapped channels, i.e. channels where an eavesdropper is present.\n\nSpecific kinds of wiretapped channels have been developed. The mathematical model for this Wiretap Channel is given by Rouayheb \\textit{et al.}~\\cite{ref12_rouayheb_soljanin}, and can be explained as follows: the channel between a transmitter and receiver is error-free and can transmit $n$ symbols $Y=(y_1,\\ldots,y_n)$, of which $\\mu$ bits can be observed by the eavesdropper, and the maximum secure rate can be shown to equal $n-\\mu$ bits. 
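The $n-\mu$ secure rate can be illustrated with the smallest possible instance, $n=2$ transmitted symbols of which the eavesdropper observes $\mu=1$. The coset-style toy code below is our own illustration (not taken from the cited works) and achieves exactly $n-\mu=1$ secure bit:

```python
# Toy coset code for a wiretapped channel with n = 2 transmitted bits,
# of which the eavesdropper observes mu = 1 (our own illustration):
# a secret bit s is sent as (r, r XOR s) with r a uniformly random bit.
def encode(s, r):
    """Encode secret bit s using a uniformly random bit r."""
    return (r, r ^ s)

def decode(y):
    """The legitimate receiver sees both symbols and recovers s."""
    return y[0] ^ y[1]

# Reliability: decoding always recovers the secret bit.
for s in (0, 1):
    for r in (0, 1):
        assert decode(encode(s, r)) == s

# Secrecy: whichever single symbol the eavesdropper observes, its
# distribution over the random key r is uniform and independent of s,
# so the observation leaks zero bits about s.
for pos in (0, 1):
    for s in (0, 1):
        assert sorted(encode(s, r)[pos] for r in (0, 1)) == [0, 1]

print("secure rate = n - mu =", 2 - 1, "bit per use")
```

The randomness $r$ plays the role of the coset label: the eavesdropper's single observed symbol is statistically independent of $s$, while the receiver, seeing all $n$ symbols, decodes perfectly.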
The security aspect of wiretap networks has been examined in various ways by Cheng \\textit{et al.} \\cite{ref21_cheng_yeung}, and Cai and Yeung \\cite{ref11_cai_yeung}, emphasising that securing these types of channels is an important concern. \n\nVillard and Piantanida \\cite{pablo_secure_multiterminal} also look at correlated sources and wiretap networks: a source sends information to the receiver, and an eavesdropper has access to information correlated to the source, which is used as side information. There is a second encoder that sends a compressed version of its own correlated observation of the source privately to the receiver. Here, the authors show that the use of correlation decreases the required communication rate and increases secrecy. Villard \\textit{et al.} \\cite{pablo_secure_transmission_receivers} explore this side-information concept further, investigating security when side information is available at the receiver and the eavesdropper. Side information is generally used to assist the decoder in determining the transmitted message. An earlier work involving side information is that by Yang \\textit{et al.}~\\cite{feedback_yang}. The concept can be considered to be generalised in that the side information could represent a source. The case where one source is more important than the other is an interesting problem, and Hayashi and Yamamoto~\\cite{Hayashi_coding} consider it in another scheme, where only $X$ is secure against wiretappers and $Y$ must be transmitted to a legitimate receiver. They develop a security criterion based on the number of correct guesses a wiretapper needs to retrieve a message. In an extension of the Shannon cipher system, Yamamoto \\cite{coding_yamamoto} investigated the secret sharing communication system. \n\nIn this work, we generalise a model for correlated sources across a channel with an eavesdropper, and the security aspect is explored by quantifying the information leakage and reducing the key lengths when incorporating Shannon's cipher system. 
\n\nThis paper initially describes a two correlated source model across wiretapped links, detailed in Section II. In Section III, the information leakage for this model is investigated and the corresponding bounds are proven; the information leakage is quantified as the equivocation subtracted from the total obtained uncertainty. In Section IV the two correlated source model is considered under Shannon's cipher system. The notation contained in the tables will be clarified in the following sections. The proofs for the Shannon cipher system aspect are detailed in Section V. Section VI details the extension of the two correlated source model, in which multiple correlated sources in a network scenario are investigated. There are two subsections here; one quantifies the information leakage for the Slepian-Wolf scenario, and the other incorporates Shannon's cipher system, where key lengths are minimized and a masking method to save on keys is presented. Section VII explains how the models detailed in this paper generalise Yamamoto's~\\cite{shannon1_yamamoto} model, and further offers comparisons to other models. The future work for this research is detailed in Section VIII and the paper is concluded in Section IX.\n\n\n\n\n\n\\section{Model}\n\nThe independent, identically distributed (i.i.d.) sources $X$ and $Y$ are mutually correlated random variables, depicted in Figure~\\ref{fig:new_model}. The alphabet sets for sources $X$ and $Y$ are represented by $\\mathcal{X}$ and $\\mathcal{Y}$ respectively. Assume that ($X^K$, $Y^K$) are encoded into two syndromes, $T_X = (V_X, V_{CX})$ and $T_Y = (V_Y, V_{CY})$, where $T_X$ and $T_Y$ are the syndromes of $X$ and $Y$ respectively.
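As a concrete illustration of syndrome encoding in the spirit of Slepian-Wolf binning, the following sketch uses the parity-check matrix of the (7,4) Hamming code: when two binary words differ in at most one position, the decoder recovers one word from the other plus a 3-bit syndrome instead of all 7 bits. The matrix and helper names are our own hypothetical example, not part of the model above:

```python
# Parity-check matrix of the (7,4) Hamming code; column j (1-based)
# is the binary expansion of j, so the syndrome of a weight-1 error
# directly names the error position.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(v):
    return tuple(sum(h * b for h, b in zip(row, v)) % 2 for row in H)

def sw_decode(y, s_x):
    """Recover x from side information y and the 3-bit syndrome of x,
    assuming x and y differ in at most one position."""
    diff = tuple(a ^ b for a, b in zip(syndrome(y), s_x))  # = H(x XOR y)
    if diff == (0, 0, 0):
        return list(y)
    pos = diff[0] + 2 * diff[1] + 4 * diff[2] - 1  # 0-based error index
    e = [0] * 7
    e[pos] = 1
    return [a ^ b for a, b in zip(y, e)]

# 3 transmitted bits (close to H(X|Y)) suffice instead of all 7.
x = [1, 0, 1, 1, 0, 0, 1]
for pos in range(-1, 7):
    y = list(x)
    if pos >= 0:
        y[pos] ^= 1  # side information with at most one flipped bit
    assert sw_decode(y, syndrome(x)) == x
```

The syndrome plays the role of $T_X$ here: it is useless on its own, and only combines with the side information at the single decoder.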
The Venn diagram in Figure \\ref{fig:new_venn2} illustrates this idea: $V_X$ and $V_Y$ represent the private information of sources $X$ and $Y$ respectively, while $V_{CX}$ and $V_{CY}$ represent the common information between $X^K$ and $Y^K$ generated by $X^K$ and $Y^K$ respectively. \n\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics [scale = 0.7]{new_model.pdf}\n\\caption{Correlated source coding for two sources}\n\\label{fig:new_model}\n\\end{figure}\n\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics [scale = 0.7]{new_venn2.pdf}\n\\caption{The relation between private and common information}\n\\label{fig:new_venn2}\n\\end{figure}\n\nThe correlated sources $X$ and $Y$ transmit messages (in the form of syndromes) to the receiver along wiretapped links. The decoder determines $X$ and $Y$ only after receiving all of $T_X$ and $T_Y$. The common information between the sources is transmitted through the portions $V_{CX}$ and $V_{CY}$. In order to decode a transmitted message, a source's private information and both common information portions are necessary. This aids in security, as it is not possible to determine, for example, $X$ by wiretapping all the contents transmitted along $X$'s channel only. This is different from Yamamoto's~\\cite{shannon1_yamamoto} model, as here the common information consists of two portions. The aim is to keep the system as secure as possible, and the following sections show how this is achieved by the new model. \n\nWe assume that the function $F$ is a one-to-one process with high probability, which means that $X^K$ and $Y^K$ can be retrieved from $T_X$ and $T_Y$ with minimal error. Furthermore, it reaches the Slepian-Wolf bound, $H(T_X, T_Y)=H(X^K,Y^K)$. Here, we note that the lengths of $T_X$ and $T_Y$ are not fixed, as they depend on the encoding process and the nature of the Slepian-Wolf codes.
The process is therefore not ideally one-to-one and reversible; this is another difference between our model and Yamamoto's~\\cite{shannon1_yamamoto} model.\n\nThe code described in this section satisfies the following inequalities for $\\delta > 0$ and sufficiently large $K$.\n\n\\begin{eqnarray}\nPr \\{X^K \\neq G(V_X, V_{CX}, V_{CY})\\} \\le \\delta\n\\label{x_prob}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nPr \\{Y^K \\neq G(V_Y, V_{CX}, V_{CY})\\} \\le \\delta\n\\label{y_prob}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nH(V_X, V_{CX}, V_{CY})\\le H(X^K) + \\delta \n\\label{x_entropy}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nH(V_Y, V_{CX}, V_{CY})\\le H(Y^K) + \\delta \n\\label{y_entropy}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nH(V_X, V_Y, V_{CX}, V_{CY})\\le H(X^K,Y^K) + \\delta \n\\label{xy_entropy}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nH(X^K|V_X, V_Y) \\geq H(V_{CX}) + H(V_{CY}) - \\delta \n\\label{H_inequality1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nH(X^K|V_{CX}, V_{CY}) \\geq H(V_X) + H(V_{CY}) - \\delta \n\\label{H_inequality2}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nH(X^K|V_{CX}, V_{CY}, V_Y) \\geq H(V_X) + H(V_{CY}) - \\delta \n\\label{H_inequality3}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nH(V_{CX}) + H(V_X) - \\delta \\le H(X^K|V_{CY}, V_{Y}) \\nonumber\n\\\\ \\le H(X^K) - H(V_{CY}) + \\delta\n\\label{H_inequality4}\n\\end{eqnarray}\n\\\\\nwhere $G$ is a function to define the decoding process at the receiver. It can intuitively be seen from \\eqref{x_prob} and \\eqref{y_prob} that $X$ and $Y$ are recovered from the corresponding private information and the common information produced by $X^K$ and $Y^K$. Equations \\eqref{x_entropy}, \\eqref{y_entropy} and \\eqref{xy_entropy} show that the private information and common information produced by each source should contain no redundancy. \nIt is also seen from \\eqref{H_inequality2} and \\eqref{H_inequality3} that $V_Y$ is independent of $X^K$ asymptotically.
Here, $V_X$, $V_Y$, $V_{CX}$ and $V_{CY}$ are disjoint, which ensures that there is no redundant information sent to the decoder. \n\nTo recover $X$ the following components are necessary: $V_X$, $V_{CX}$ and $V_{CY}$. This comes from the property that $X^K$ cannot be derived from $V_X$ and $V_{CX}$ only and part of the common information between $X^K$ and $Y^K$ is produced by $Y^K$.\n\nYamamoto~\\cite{shannon1_yamamoto} proved that a common information between $X^K$ and $Y^K$ is represented by the mutual information $I(X;Y)$. Yamamoto~\\cite{shannon1_yamamoto} also defined two kinds of common information. The first common information is defined as the rate of the attainable minimum core $V_C$ (i.e. $V_{CX}, V_{CY}$ in this model) by removing each private information, which is independent of the other information, from ($X^K$, $Y^K$) as much as possible. The second common information is defined as the rate of the attainable maximum core $V_C$ such that if we lose $V_C$ then the uncertainty of $X$ and $Y$ becomes $H(V_C)$. Here, we consider the common information that $V_{CX}$ and $V_{CY}$ represent.\n\nWe begin demonstrating the relationship between the common information portions by constructing the prototype code ($W_X$, $W_Y$, $W_{CX}$, $W_{CY}$) as per Lemma 1. 
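The identity that Theorem 1 below rests on, $H(X,Y) = H(X|Y) + H(Y|X) + I(X;Y)$, with $I(X;Y)$ playing the role of the common-information rate and $H(X|Y)$, $H(Y|X)$ the private rates, can be checked numerically. The joint pmf here is a hypothetical example, not data from the model:

```python
from math import log2

# Hypothetical joint pmf p(x, y) for two correlated binary sources.
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def entropy(pmf):
    return -sum(q * log2(q) for q in pmf.values() if q > 0)

# Marginal distributions of X and Y.
px = {x: p[(x, 0)] + p[(x, 1)] for x in (0, 1)}
py = {y: p[(0, y)] + p[(1, y)] for y in (0, 1)}

Hxy = entropy(p)
Hx, Hy = entropy(px), entropy(py)
Hx_given_y = Hxy - Hy        # private rate of X
Hy_given_x = Hxy - Hx        # private rate of Y
Ixy = Hx + Hy - Hxy          # common-information rate

# H(X,Y) splits into the two private parts plus the common part.
assert abs(Hx_given_y + Hy_given_x + Ixy - Hxy) < 1e-12
assert Ixy > 0               # the sources really are correlated
```

In the model, the common part $I(X;Y)$ is further split between $V_{CX}$ and $V_{CY}$, which is what Theorem 1 quantifies.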
\n\n\\textit{Lemma 1: For any $\\epsilon_0 \\geq 0$ and sufficiently large $K$, there exists a code $W_X = F_X(X^K)$, $W_Y = F_Y(Y^K)$, $W_{CX} = F_{CX}(X^K)$, $W_{CY} = F_{CY}(Y^K)$, $(\\widehat{X}^K,\\widehat{Y}^K) = G(W_X, W_Y, W_{CX}, W_{CY})$, where $W_X \\in I_{M_X}$, $W_Y \\in I_{M_Y}$, $W_{CX} \\in I_{M_{CX}}$, $W_{CY} \\in I_{M_{CY}}$ for $I_{M_{\\alpha}}$, which is defined as $\\{0, 1, \\ldots, M_{\\alpha} - 1\\}$, that satisfies},\n\n\\begin{eqnarray}\nPr\\{\\widehat{X}^K, \\widehat{Y}^K \\neq X^K, Y^K\\} \\le \\epsilon_0\n\\label{lemma1_1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nH(X|Y) - \\epsilon_0 \\le \\frac{1}{K} H(W_X) \\le \\frac{1}{K} \\log M_X \\le H(X|Y) + \\epsilon_0\n\\label{lemma1_2}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nH(Y|X) - \\epsilon_0 \\le \\frac{1}{K} H(W_Y) \\le \\frac{1}{K} \\log M_Y \\le H(Y|X) + \\epsilon_0\n\\label{lemma1_3}\n\\end{eqnarray}\n\n\n\\begin{eqnarray}\n& & I(X;Y) - \\epsilon_0 \\le \\frac{1}{K} (H(W_{CX}) + H(W_{CY})) \\nonumber \\\\\n & \\le & \\frac{1}{K} (\\log M_{CX} + \\log M_{CY}) \\le I(X;Y) + \\epsilon_0\n\\label{lemma1_4}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_Y) \\geq H(X) - \\epsilon_0\n\\label{lemma1_5}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_X) \\geq H(Y) - \\epsilon_0\n\\label{lemma1_6}\n\\end{eqnarray}\n\nWe can see that \\eqref{lemma1_2} - \\eqref{lemma1_4} mean\n\\begin{eqnarray}\n&& H(X,Y) - 3\\epsilon_0 \\le \\frac{1}{K} (H(W_X) + H(W_Y) + H(W_{CX}) \\nonumber \\\\\n& + & H(W_{CY})) \\nonumber \\\\\n& \\le & H(X,Y) + 3\\epsilon_0\n\\label{lemma1_7}\n\\end{eqnarray}\n\nHence from \\eqref{lemma1_1}, \\eqref{lemma1_7} and the ordinary source coding theorem, ($W_X$, $W_Y$, $W_{CX}$, $W_{CY}$) have no redundancy for sufficiently small $\\epsilon_0 \\geq 0$. It can also be seen that $W_X$ and $W_Y$ are independent of $Y^K$ and $X^K$ respectively.
\n\n\\begin{proof}[Proof of Lemma 1]\n\nAs shown by Slepian and Wolf, and noted by Yamamoto~\\cite{shannon1_yamamoto}, there exist $M_X$ codes for the $P_{Y|X}(y|x)$ DMC (discrete memoryless channel) and $M_Y$ codes for the $P_{X|Y}(x|y)$ DMC. The codeword sets exist as $C^X_i$ and $C^Y_j$, where $C^X_i$ is a subset of the typical sequences of $X^K$ and $C^Y_j$ is a subset of the typical sequences of $Y^K$.\nThe encoding functions are similar, but we have created one decoding function as there is one decoder at the receiver:\n\n\\begin{eqnarray}\nf_{Xi}:I_{M_{CX}} \\rightarrow C^X_i\n\\label{lemma1proof_1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nf_{Yj}:I_{M_{CY}} \\rightarrow C^Y_j\n\\label{lemma1proof_1b}\n\\end{eqnarray}\n\n\\begin{eqnarray}\ng: X^K, Y^K \\rightarrow I_{M_{CX}} \\times I_{M_{CY}}\n\\label{lemma1proof_2}\n\\end{eqnarray}\n\nThe relations for $M_X$, $M_Y$ and the common information remain the same as in Yamamoto's work and will therefore not be proven here. \n\nIn this scheme, we consider the average $(V_{CX}, V_{X},V_{CY}, V_{Y})$ transmitted over many codewords from $X$ and $Y$. Thus, at any time either $V_{CX}$ or $V_{CY}$ is transmitted. Over time, the split between which common information portion is transmitted is determined and the protocol is prearranged accordingly. Therefore all the common information is transmitted as either $l$ or $m$, and as such Yamamoto's encoding and decoding method may be used. \n\nAs per Yamamoto's method, the code exists, and $W_X$ and $W_Y$ are independent of $Y$ and $X$ respectively, as shown by Yamamoto~\\cite{shannon1_yamamoto}. \n\n\n\\end{proof}\n\nThe common information is important in this model, as the sum of $V_{CX}$ and $V_{CY}$ represents a common information between the sources.
The following theorem holds for this common information:\n\\\\\n\\textit{Theorem 1:}\n\\begin{eqnarray}\n\\frac{1}{K} [H (V_{CX}) + H (V_{CY})] = I (X;Y) \n\\label{theorem1}\n\\end{eqnarray}\n\nwhere $V_{CX}$ is the common portion between $X$ and $Y$ produced by $X^K$ and $V_{CY}$ is the common portion between $X$ and $Y$ produced by $Y^K$. It is noted that \\eqref{theorem1} holds asymptotically, and does not hold with equality when $K$ is finite; here we consider the approximation when $K$ is infinitely large.\nThe private portions for $X^K$ and $Y^K$ are represented as $V_X$ and $V_Y$ respectively. As explained in Yamamoto's~\\cite{shannon1_yamamoto} Theorem 1, two types of common information exist (the first is represented by $I(X;Y)$ and the second by $\\text{min} (H(X^K), H(Y^K))$). We will develop part of this idea to show that the sum of the common information portions produced by $X^K$ and $Y^K$ in this new model is represented by the mutual information between the sources. \n\n\\begin{proof}[Proof of Theorem 1]\nThe first part is to prove that $\\frac{1}{K}[H(V_{CX}) + H(V_{CY})] \\geq I(X;Y)$, and is done as follows. \nWe weaken the conditions \\eqref{x_prob} and \\eqref{y_prob} to\n\\begin{eqnarray}\n\\text{Pr }\\{X^K,Y^K \\neq G_{XY} (V_X, V_Y, V_{CX}, V_{CY})\\} \\le \\delta_1\n\\label{weakenedxy_prob}\n\\end{eqnarray}\n\nFor any ($V_X$,$V_Y$, $V_{CX}$, $V_{CY}$) $\\in C(3\\epsilon_0)$ (which can be seen from \\eqref{lemma1_7}), we have from \\eqref{weakenedxy_prob} and the ordinary source coding theorem that\n\n\\begin{eqnarray}\nH(X,Y) - \\delta_1 &\\le & \\frac{1}{K} H(V_X, V_Y, V_{CX}, V_{CY}) \\nonumber \\\\\n& \\le & \\frac{1}{K} [H(V_X) + H(V_Y) + H (V_{CX}) \\nonumber \\\\\n& + & H(V_{CY})]\n\\label{theorem1proof_1}\n\\end{eqnarray}\n\nwhere $\\delta_1 \\rightarrow 0$ as $\\delta \\rightarrow 0$.
From Lemma 1,\n\\begin{eqnarray}\n\\frac{1}{K} H(V_Y|X^K) \\geq \\frac{1}{K} H(V_Y) - \\delta\n\\label{theorem1proof_2}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(V_X|Y^K) \\geq \\frac{1}{K} H(V_X) - \\delta\n\\label{theorem1proof_3}\n\\end{eqnarray}\n\nFrom \\eqref{theorem1proof_1} - \\eqref{theorem1proof_3},\n\\begin{eqnarray}\n\\frac{1}{K} [H(V_{CX}) + H(V_{CY})] &\\ge & H(X,Y) - \\frac{1}{K} H(V_X) \\nonumber \\\\\n& - & \\frac{1}{K} H(V_Y) - \\delta_1 \\nonumber \\\\\n& \\geq & H(X,Y) - \\frac{1}{K} H(V_X|Y^K) \\nonumber \\\\\n& - & \\frac{1}{K} H(V_Y|X^K) - \\delta_1 - 2\\delta\n\\label{theorem1proof_4}\n\\end{eqnarray}\n\nOn the other hand, we can see that\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K, V_Y) \\le H(X,Y) + \\delta\n\\label{theorem1proof_5}\n\\end{eqnarray}\n\nThis implies that \n\\begin{eqnarray}\n\\frac{1}{K} H(V_Y|X^K) \\le H(Y|X) + \\delta \n\\label{theorem1proof_6}\n\\end{eqnarray}\nand \n\\begin{eqnarray}\n\\frac{1}{K} H(V_X|Y^K) \\le H(X|Y) + \\delta \n\\label{theorem1proof_7}\n\\end{eqnarray}\n\nFrom \\eqref{theorem1proof_4}, \\eqref{theorem1proof_6} and \\eqref{theorem1proof_7} we get\n\\begin{eqnarray}\n\\frac{1}{K} [H(V_{CX}) + H(V_{CY})] & \\ge & H(X,Y) - H(X|Y) - H(Y|X) \\nonumber \\\\\n& - & \\delta_1 - 4\\delta \\nonumber \\\\\n& = & I(X;Y) - \\delta_1 - 4\\delta\n\\label{theorem1proof_8}\n\\end{eqnarray}\n\nIt is possible to see from \\eqref{lemma1_4} that $\\frac{1}{K}[H(V_{CX}) + H(V_{CY})] \\le I(X;Y)$. From this result, \\eqref{lemma1proof_2} and \\eqref{theorem1proof_8}, and as $\\delta_1 \\rightarrow 0$ and $\\delta \\rightarrow 0$, it can be seen that\n\\begin{eqnarray}\n\\frac{1}{K} [H(V_{CX}) + H(V_{CY})] = I(X;Y)\n\\end{eqnarray}\n\\end{proof}\n \nThis model can cater for a scenario where a particular source, say $X$, needs to be more secure than $Y$ (possibly because of eavesdropping on the $X$ channel). In such a case, the $\\frac{1}{K} H(V_{CX})$ term in \\eqref{theorem1proof_8} needs to be as high as possible.
When this uncertainty is increased, the security of $X$ is increased. Another security measure that this model incorporates is that $X$ cannot be determined by wiretapping only $X$'s link. \n \n\n\\section{Information Leakage}\nIn order to determine the security of the system, a measure of the amount of information leaked has been developed. This is a new notation and quantification, which emphasizes the novelty of this work. The obtained information and total uncertainty are used to determine the leaked information. Information leakage is denoted by $L_{\\mathcal{Q}}^\\mathcal{P}$. Here $\\mathcal{P}$ indicates the source(s) for which information leakage is being quantified, $\\mathcal{P} = \\{S_1, \\ldots, S_n\\}$ where $n$ is the number of sources (in this case, $n = 2$). Further, $\\mathcal{Q}$ indicates the syndrome portion that has been wiretapped, $\\mathcal{Q} = \\{V_1, \\ldots, V_m\\}$ where $m$ is the number of codewords (in this case, $m = 4$).\n\nThe information leakage bounds are as follows:\n\\begin{eqnarray}\nL_{V_X,V_Y}^{X^K} \\le H(X^K) - H(V_{CX}) - H(V_{CY}) + \\delta\n\\label{L_inequality1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nL_{V_{CX},V_{CY}}^{X^K} \\le H(X^K) - H(V_X) - H(V_{CY}) + \\delta\n\\label{L_inequality2}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nL_{V_{CX},V_{CY},V_Y}^{X^K} \\le H(X^K) - H(V_X) - H(V_{CY}) + \\delta\n\\label{L_inequality3}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n&& H(V_{CY}) - \\delta \\le L_{V_{Y},V_{CY}}^{X^K} \\nonumber\n\\\\ & \\le & H(X^K) - H(V_{CX}) - H(V_X) + \\delta \n\\label{L_inequality4}\n\\end{eqnarray}\n \nHere, $V_Y$ is private information of source $Y^K$ and is independent of $X^K$, and therefore does not leak any information about $X^K$, as shown in \\eqref{L_inequality2} and \\eqref{L_inequality3}.
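This leakage measure, total uncertainty minus the remaining equivocation, equals the mutual information between the source and the wiretapped portion. A small numerical sketch on a hypothetical joint distribution of a source symbol $X$ and a wiretapped portion $Q$ (the pmf is illustrative only):

```python
from math import log2

# Hypothetical joint pmf over (X, Q), where Q is the wiretapped portion.
p = {(0, 0): 0.35, (0, 1): 0.15, (1, 0): 0.15, (1, 1): 0.35}

def entropy(pmf):
    return -sum(q * log2(q) for q in pmf.values() if q > 0)

px = {x: sum(v for (xx, _), v in p.items() if xx == x) for x in (0, 1)}
pq = {q: sum(v for (_, qq), v in p.items() if qq == q) for q in (0, 1)}

# Leakage = H(X) - H(X|Q), with H(X|Q) = H(X,Q) - H(Q).
leakage = entropy(px) - (entropy(p) - entropy(pq))
# The same quantity written as mutual information I(X;Q).
mutual = entropy(px) + entropy(pq) - entropy(p)

assert abs(leakage - mutual) < 1e-12
assert 0 <= leakage <= entropy(px)  # leakage is bounded by H(X)
```

The bounds above play the same role at blocklength $K$: they cap how far $H(X^K \mid \mathcal{Q})$ can fall below $H(X^K)$.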
Equation \\eqref{L_inequality4} gives an indication of the minimum and maximum amount of leaked information for the interesting case where a syndrome has been wiretapped and its information leakage about the alternate source is quantified. The outstanding common information component is the maximum information that can be leaked. For this case, the common information portions $V_{CX}$ and $V_{CY}$ can thus be given added protection to reduce the amount of information leaked. The bounds developed in \\eqref{L_inequality1} - \\eqref{L_inequality4} are proven in the next section.\n\nThe proofs for the above-mentioned information leakage inequalities are now detailed. First, the inequalities in \\eqref{H_inequality1} - \\eqref{H_inequality4} will be proven, so as to prove that the information leakage equations hold. \\\\\n\\begin{proof}[Lemma 2]\nThe code ($V_X$, $V_{CX}$, $V_{CY}$, $V_Y$) defined in Section II, describing the model, together with \\eqref{x_prob} - \\eqref{xy_entropy}, satisfies \\eqref{H_inequality1} - \\eqref{H_inequality4}.
Then the information leakage bounds are given by \\eqref{L_inequality1} - \\eqref{L_inequality4}.\n\\\\\\\\ \\textit{Proof for \\eqref{H_inequality1}}:\n\\begin{eqnarray}\n& & \\frac{1}{K} H(X^K|V_X,V_Y) \\nonumber\\\\\n& = & \\frac{1}{K} [H(X^K,V_X,V_Y) - H(V_X,V_Y)] \\nonumber\\\\\n& = & \\frac{1}{K} [H(X^K,V_Y) - H(V_X,V_Y)] \\label{lemma2_ref1}\\\\\n& = & \\frac{1}{K} [H(X^K|V_Y) + I(X^K;V_Y) + H(V_Y|X^K)] \\nonumber\\\\\n& & - \\frac{1}{K} [H(V_X|V_Y) + I(V_X;V_Y) + H(V_Y|V_X)] \\nonumber\\\\\n& = & \\frac{1}{K} [H(X^K|V_Y) + H(V_Y|X^K) - H(V_X|V_Y) \\nonumber\\\\\n& & - H(V_Y|V_X)] \\nonumber\\\\\n& = & \\frac{1}{K} [H(X^K) + H(V_Y) - H(V_X) - H(V_Y)] \\label{lemma2_ref2}\\\\ \n& = & \\frac{1}{K} [H(X^K) - H(V_X)] \\nonumber\\\\\n& \\geq & \\frac{1}{K} [H(V_X) + H(V_{CX}) + H(V_{CY}) - H(V_X)] - \\delta \\nonumber\\\\\n& = & \\frac{1}{K} [H(V_{CX}) + H(V_{CY})] -\\delta\n\\label{lemma2_part1}\n\\end{eqnarray}\n\nwhere \\eqref{lemma2_ref1} holds because $V_X$ is a function of $X^K$ and \\eqref{lemma2_ref2} holds because $X$ is independent of $V_Y$ asymptotically and $V_X$ is independent of $V_Y$ asymptotically.\n\nFor the proofs of \\eqref{H_inequality2} and \\eqref{H_inequality3}, the following simplification for $H(X|V_{CY})$ is used:\n\\begin{eqnarray}\nH(X^K|V_{CY}) & = & H(X^K,V_{CY}) - H(V_{CY}) \\nonumber \\\\\n& = & H(X^K) + H(V_{CY}) - I(X; V_{CY}) - H(V_{CY}) \\nonumber \\\\\n& = & H(X^K) + H(V_{CY}) - H(V_{CY}) - H(V_{CY}) \\nonumber \\\\\n& + & {\\delta}_1 \\label{new_55} \\\\\n& = & H(X^K) - H(V_{CY}) + {\\delta}_1\n\\label{simpli}\n\\end{eqnarray}\n\nwhere the approximation $I(X; V_{CY}) \\approx H(V_{CY})$ used in \\eqref{new_55} can be seen intuitively from the Venn diagram in Figure \\ref{fig:new_venn2}.
Since it is an approximation, a term ${\\delta}_1$, smaller than $\\delta$, has been added in the proofs below to cater for the tolerance. \n\\\\\\\\ \\textit{Proof for \\eqref{H_inequality2}}:\n\\begin{eqnarray}\n& & \\frac{1}{K} H(X^K|V_{CX},V_{CY}) \\nonumber\\\\\n& = & \\frac{1}{K} [H(X^K,V_{CX},V_{CY}) - H(V_{CX},V_{CY})] \\nonumber\\\\\n& = & \\frac{1}{K} [H(X^K,V_{CY}) - H(V_{CX},V_{CY})] \\label{lemma2_ref3}\\\\\n& = & \\frac{1}{K} [H(X^K) - H(V_{CY}) + I(X;V_{CY}) + H(V_{CY}|X^K)] \\nonumber\\\\\n& & - \\frac{1}{K} [H(V_{CX}|V_{CY}) + I(V_{CX};V_{CY}) + H(V_{CY}|V_{CX})] \\nonumber \\\\\n& + & \\delta_1 \\nonumber\\\\\n& = & \\frac{1}{K} [H(X^K) - H(V_{CY}) + H(V_{CY})- H(V_{CX}) - H(V_{CY})] \\nonumber \\\\\n& + & \\delta_1 \\label{lemma2_ref4} \\\\\n& = & \\frac{1}{K} [H(X^K) - H(V_{CY}) - H(V_{CX})] + \\delta_1 \\nonumber\\\\\n& \\geq & \\frac{1}{K} [H(V_X) + H(V_{CX}) + H(V_{CY}) - H(V_{CY}) - H(V_{CX})] -\\delta \\nonumber\\\\\n& = & \\frac{1}{K} H(V_X) + \\delta_1 -\\delta\n\\label{lemma2_part2}\n\\end{eqnarray}\n\nwhere \\eqref{lemma2_ref3} holds because $V_{CX}$ is a function of $X^K$ and \\eqref{lemma2_ref4} holds because $X$ is independent of $V_{CY}$ asymptotically and $V_{CX}$ is independent of $V_{CY}$ asymptotically.
\n\nThe proof for $H(X^K|V_{CX},V_{CY},V_Y)$ is similar to that for $H(X^K|V_{CX},V_{CY})$, because $V_Y$ is independent of $X^K$.\n\\\\\\\\ \\textit{Proof for \\eqref{H_inequality3}}:\n\\begin{eqnarray}\n& & \\frac{1}{K} H(X^K|V_{CX},V_{CY},V_Y) \\nonumber\\\\\n& = & \\frac{1}{K} H(X^K|V_{CX},V_{CY})\n\\label{lemma2_ref5}\\\\\n& = & \\frac{1}{K} [H(X^K,V_{CX},V_{CY}) - H(V_{CX},V_{CY})] \\nonumber\\\\\n& = & \\frac{1}{K} [H(X^K,V_{CY}) - H(V_{CX},V_{CY})] \\label{lemma2_ref6}\\\\\n& = & \\frac{1}{K} [H(X^K) - H(V_{CY}) + I(X;V_{CY}) + H(V_{CY}|X^K)] \\nonumber\\\\\n& & - \\frac{1}{K} [H(V_{CX}|V_{CY}) + I(V_{CX};V_{CY}) + H(V_{CY}|V_{CX})] \\nonumber \\\\\n& + & \\delta_1 \\nonumber\\\\\n& = & \\frac{1}{K} [H(X^K) - H(V_{CY}) + H(V_{CY})- H(V_{CX}) \\nonumber\\\\\n& - & H(V_{CY})] + \\delta_1 \\label{lemma2_ref7}\\\\\n& = & \\frac{1}{K} [H(X^K) - H(V_{CY}) - H(V_{CX})] + \\delta_1 \\nonumber\\\\\n& \\geq & \\frac{1}{K} [H(V_X) + H(V_{CX}) + H(V_{CY}) - H(V_{CY}) \\nonumber\\\\\n& - & H(V_{CX})] - \\delta + \\delta_1 \\nonumber\\\\\n& = & \\frac{1}{K} H(V_X) -\\delta + \\delta_1\n\\label{lemma2_part3}\n\\end{eqnarray}\n\nwhere \\eqref{lemma2_ref5} holds because $V_Y$ and $X^K$ are independent, \\eqref{lemma2_ref6} holds because $V_{CX}$ is a function of $X^K$ and \\eqref{lemma2_ref7} holds because $X^K$ is independent of $V_{CY}$ asymptotically and $V_{CX}$ is independent of $V_{CY}$ asymptotically.
\n\nFor the proof of \\eqref{H_inequality4}, we look at the following probabilities:\n\\begin{eqnarray}\n\\text{Pr} \\{V_X,V_{CX} \\neq G(T_X)\\} \\le \\delta\n\\label{lemma2_eqn1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\text{Pr} \\{V_Y,V_{CY} \\neq G(T_Y)\\} \\le \\delta\n\\label{lemma2_eqn2}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n& & \\frac{1}{K} H(X^K|T_Y) \\nonumber\\\\\n& \\le & \\frac{1}{K} H(X^K|V_{CY},V_{Y}) + \\delta \\label{lemma2_ref8}\\\\\n& = & \\frac{1}{K} [H(X^K, V_{CY},V_{Y}) - H(V_{CY},V_{Y})] + \\delta \\nonumber\\\\\n& = & \\frac{1}{K} [H(X^K, V_{Y}) - H(V_{CY},V_{Y})] + \\delta \\label{lemma2_ref9}\\\\\n& = & \\frac{1}{K} [H(X^K|V_{Y}) + I(X^K;V_{Y}) + H(V_{Y}|X^K)] \\nonumber\\\\\n& & - \\frac{1}{K} [H(V_{CY}|V_{Y}) + I(V_{CY};V_{Y}) + H(V_{Y}|V_{CY})] + \\delta \\nonumber\\\\\n& = & \\frac{1}{K} [H(X^K) + H(V_{Y})- H(V_{CY}) - H(V_{Y})]+ \\delta \\label{lemma2_ref10}\\\\\n& = & \\frac{1}{K} [H(X^K) - H(V_{CY})] + \\delta\n\\label{lemma2_part4.1}\n\\end{eqnarray}\n\nwhere \\eqref{lemma2_ref8} holds from \\eqref{lemma2_eqn2}, \\eqref{lemma2_ref9} holds because $V_{CY}$ and $V_Y$ are asymptotically independent.
Furthermore, \\eqref{lemma2_ref10} holds because $V_{CY}$ and $V_{Y}$ are asymptotically independent and $X^K$ and $V_{Y}$ are asymptotically independent.\n\nFollowing a similar proof to those done above in this section, another bound for $H(X^K|V_{CY},V_Y)$ can be found as follows:\n\\begin{eqnarray}\n& & \\frac{1}{K} H(X^K|V_{CY},V_Y)\t\t\t\t\t\t\t\t\t\t\t\\nonumber\\\\\n& = & \\frac{1}{K} [H(X^K,V_{CY},V_{Y}) - H(V_{CY},V_{Y})]\t\t\t\t\t\\nonumber\\\\\n& = & \\frac{1}{K} [H(X^K,V_{Y}) - H(V_{CY},V_{Y})] \t\t\t\t\t\t\\label{lemma2_ref11}\\\\\n& = & \\frac{1}{K} [H(X^K|V_{Y}) + I(X^K;V_{Y}) + H(V_{Y}|X)] \t\t\t\t\t\\nonumber\\\\\n& & - \\frac{1}{K} [H(V_{CY}|V_{Y}) + I(V_{CY};V_{Y}) + H(V_{Y}|V_{CY})] \t\t\\nonumber\\\\\t\n& = & \\frac{1}{K} [H(X^K) + H(V_{Y})- H(V_{CY}) - H(V_{Y})]\t\t\t\t\\label{lemma2_ref12}\\\\\n& = & \\frac{1}{K} [H(X^K) - H(V_{CY})]\t\\nonumber\\\\\n& \\geq & \\frac{1}{K} [H(V_X) + H(V_{CX}) + H(V_{CY}) - H(V_{CY})]\t- \\delta \\nonumber\\\\\n& = & \\frac{1}{K} [H(V_X) + H(V_{CX})]\t- \\delta\n\\label{lemma2_part4.2}\n\\end{eqnarray}\n\nwhere \\eqref{lemma2_ref11} and \\eqref{lemma2_ref12} hold for the same reason as \\eqref{lemma2_ref9} and \\eqref{lemma2_ref10} respectively. 
\n\nSince we consider the information leakage as the equivocation subtracted from the total uncertainty, the following hold for the four cases considered in this section:\n\n\\begin{eqnarray}\nL_{V_X,V_Y}^{X^K} & = & H(X^K) - H(X^K|V_X,V_Y) \\nonumber\\\\ \n& \\le & H(X^K) - H(V_{CX}) - H(V_{CY}) + \\delta\n\\label{Lemma2_proof_inequality1}\n\\end{eqnarray}\nwhich proves \\eqref{L_inequality1}.\n\n\\begin{eqnarray}\nL_{V_{CX},V_{CY}}^{X^K} & = & H(X^K) - H(X^K|V_{CX},V_{CY}) \\nonumber\n\\\\ & \\le & H(X^K) - H(V_X) + \\delta\n\\label{Lemma2_proof_inequality2}\n\\end{eqnarray}\nwhich proves \\eqref{L_inequality2}.\n\n\\begin{eqnarray}\nL_{V_{CX},V_{CY},V_Y}^{X^K} & = & H(X^K) - H(X^K|V_{CX},V_{CY},V_Y) \\nonumber\n\\\\ & \\le & H(X^K) - H(V_X) + \\delta\n\\label{Lemma2_proof_inequality3}\n\\end{eqnarray}\nwhich proves \\eqref{L_inequality3}.\n\nThe two bounds for $H(X^K|V_{CY},V_Y)$ are given by \\eqref{lemma2_part4.1} and \\eqref{lemma2_part4.2}. \nFrom \\eqref{lemma2_part4.1}:\n\n\\begin{eqnarray}\nL_{V_{Y},V_{CY}}^{X^K} & \\geq & H(X^K) - [H(X^K) - H(V_{CY}) + \\delta] \\nonumber \\\\\n& = & H(V_{CY}) - \\delta\n\\label{Lemma2_proof_inequality4.1}\n\\end{eqnarray}\n\nand from \\eqref{lemma2_part4.2}:\n\\begin{eqnarray}\nL_{V_{Y},V_{CY}}^{X^K} & \\le & H(X^K) - \\left (H(V_X) + H(V_{CX}) - \\delta \\right) \\nonumber \\\\\n& = & H(X^K) - H(V_X) - H(V_{CX}) + \\delta \n\\label{Lemma2_proof_inequality4.2}\n\\end{eqnarray}\n\n\nCombining these results from \\eqref{Lemma2_proof_inequality4.1} and \\eqref{Lemma2_proof_inequality4.2} gives \\eqref{L_inequality4}.\n\\end{proof}\n\n\\section{Shannon's Cipher System}\n\nHere, we discuss Shannon's cipher system for two correlated sources (depicted in Figure \\ref{fig:shannon_cipher_2sources}). The two source outputs are i.i.d. random variables $X$ and $Y$, taking on values in the finite sets $\\mathcal{X}$ and $\\mathcal{Y}$.
Both the transmitter and receiver have access to the key, a random variable, independent of $X^K$ and $Y^K$ and taking values in $I_{M_k} = \\{0, 1, 2, \\ldots ,M_{k} - 1\\}$. The encoders for $X^K$ and $Y^K$ compute the ciphertexts $X'$ and $Y'$, which are the result of specific encryption functions applied to the plaintexts from $X$ and $Y$ respectively. The encryption functions are invertible; thus, knowing $X'$ and the key, $X^K$ can be retrieved. \n\nThe mutual information between the plaintext and ciphertext should be small so that the wiretapper cannot gain much information about the plaintext. For perfect secrecy, this mutual information should be zero, in which case the length of the key must be at least the length of the plaintext.\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics [scale = 0.7]{shannon_2sources.pdf}\n\\caption{Shannon cipher system for two correlated sources}\n\\label{fig:shannon_cipher_2sources}\n\\end{figure}\n\nThe encoder functions for $X$ and $Y$ ($E_X$ and $E_Y$ respectively) are given as:\n\n\\begin{eqnarray}\nE_X : \\mathcal{X}^K \\times I_{M_{kX}} & \\rightarrow & I_{M_X'} = \\{0, 1, \\ldots, M_X' - 1\\} \\nonumber \n\\\\ && \\times \\; I_{M_{CX}'} = \\{0, 1, \\ldots, M_{CX}' - 1\\}\n\\label{xencoder_fcn}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nE_Y : \\mathcal{Y}^K \\times I_{M_{kY}} & \\rightarrow & I_{M_Y'} = \\{0, 1, \\ldots, M_Y' - 1\\} \\nonumber \n\\\\ && \\times \\; I_{M_{CY}'} = \\{0, 1, \\ldots, M_{CY}' - 1\\}\n\\label{yencoder_fcn}\n\\end{eqnarray}\n\nThe decoder is defined as:\n\n\\begin{eqnarray}\nD_{XY} : (I_{M'_X}, I_{M'_Y}, I_{M'_{CX}},I_{M'_{CY}}) & \\times & I_{M_{kX}}, I_{M_{kY}} \\nonumber \\\\\n& \\rightarrow & \\mathcal{X}^K \\times \\mathcal{Y}^K\n\\end{eqnarray}\n\nThe encoder and decoder mappings are below:\n\\begin{eqnarray}\nW_1 = F_{E_X} (X^K, W_{kX})\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_2 = F_{E_Y} (Y^K, W_{kY})\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\widehat{X}^K = F_{D_X} (W_1, W_2, 
W_{kX})\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\widehat{Y}^K = F_{D_Y} (W_1, W_2, W_{kY})\n\\end{eqnarray}\n\nor \n\n\\begin{eqnarray}\n(\\widehat{X}^K, \\widehat{Y}^K) = F_{D_{XY}} (W_1, W_2, W_{kX}, W_{kY})\n\\end{eqnarray}\n\n\nThe following conditions should be satisfied for cases 1 - 5:\n\n\\begin{eqnarray}\n\\frac{1}{K}\\log M_X \\le R_X +\\epsilon\n\\label{cond1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K}\\log M_Y \\le R_Y +\\epsilon\n\\label{cond2}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K}\\log M_{kX} \\le R_{kX} +\\epsilon\n\\label{cond3}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K}\\log M_{kY} \\le R_{kY} +\\epsilon\n\\label{cond4}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\text {Pr} \\{\\widehat{X}^K \\neq X^K\\} \\le \\epsilon\n\\label{cond5}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\text{Pr} \\{ \\widehat{Y}^K \\neq Y^K\\} \\le \\epsilon\n\\label{cond6}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_1) \\le h_X + \\epsilon\n\\label{cond7}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_2) \\le h_Y + \\epsilon\n\\label{cond8}\n\\end{eqnarray}\n\n\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K,Y^K|W_1) \\le h_{XY} + \\epsilon\n\\label{cond8.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K,Y^K|W_2) \\le h_{XY} + \\epsilon\n\\label{cond9}\n\\end{eqnarray}\n\nwhere $R_X$ is the rate of source $X$'s channel and $R_Y$ is the rate of source $Y$'s channel. Here, $R_{kX}$ is the rate of the key channel at $X^K$ and $R_{kY}$ is the rate of the key channel at $Y^K$. The security levels, which are measured by the total and individual uncertainties, are $h_{XY}$ and $(h_X, h_Y)$ respectively.
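The statement that perfect secrecy requires a key as long as the plaintext can be seen in a one-time-pad sketch over an index set $I_{M_k}$. This is a generic illustration of the principle, not the paper's encoder $F_{E_X}$:

```python
def encrypt(x, k, m=4):
    """Modular one-time pad over I_M = {0, ..., m-1}."""
    return (x + k) % m

def decrypt(c, k, m=4):
    return (c - k) % m

m = 4
for x in range(m):
    cipher_counts = [0] * m
    for k in range(m):  # key uniform over I_M, same size as plaintext
        c = encrypt(x, k, m)
        assert decrypt(c, k, m) == x  # invertible given the key
        cipher_counts[c] += 1
    # Every ciphertext value is equally likely for every plaintext,
    # so the ciphertext carries zero information: I(X; X') = 0.
    assert cipher_counts == [1] * m
```

Shortening the key below the plaintext length breaks the uniformity above, which is why the key-rate conditions \eqref{cond3} and \eqref{cond4} lower-bound $R_{kX}$ and $R_{kY}$ by the required security levels.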
\n\\\\\\\\\nThe cases 1 - 5 are:\n\\\\ \\textit{Case 1:} When $T_X$ and $T_Y$ are leaked and both $X^K$ and $Y^K$ need to be kept secret.\n\\\\ \\textit{Case 2:} When $T_X$ and $T_Y$ are leaked and $X^K$ needs to be kept secret.\n\\\\ \\textit{Case 3:} When $T_X$ is leaked and both $X^K$ and $Y^K$ need to be kept secret.\n\\\\ \\textit{Case 4:} When $T_X$ is leaked and $Y^K$ needs to be kept secret.\n\\\\ \\textit{Case 5:} When $T_X$ is leaked and $X^K$ needs to be kept secret.\n\\\\ where $T_X$ is the syndrome produced by $X$, containing $V_{CX}$ and $V_X$, and $T_Y$ is the syndrome produced by $Y$, containing $V_{CY}$ and $V_Y$.\n\\\\\\\\\nThe admissible rate region for each case is defined as follows:\n\\\\ \\textit{Definition 1a:} ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{XY}$) is admissible for case 1 if there exists a code ($F_{E_{X}}$, $F_{D_{XY}}$) and ($F_{E_{Y}}$, $F_{D_{XY}}$) such that \\eqref{cond1} - \\eqref{cond6} and \\eqref{cond9} hold for any $\\epsilon \\rightarrow 0$ and sufficiently large $K$.\n\\\\ \\textit{Definition 1b:} ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{X}$) is admissible for case 2 if there exists a code ($F_{E_{X}}$, $F_{D_{XY}}$) such that \\eqref{cond1} - \\eqref{cond7} hold for any $\\epsilon \\rightarrow 0$ and sufficiently large $K$.\n\\\\ \\textit{Definition 1c:} ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{X}$, $h_{Y}$) is admissible for case 3 if there exists a code ($F_{E_{X}}$, $F_{D_{XY}}$) and ($F_{E_{Y}}$, $F_{D_{XY}}$) such that \\eqref{cond1} - \\eqref{cond6} and \\eqref{cond8}, \\eqref{cond9} hold for any $\\epsilon \\rightarrow 0$ and sufficiently large $K$.\n\\\\ \\textit{Definition 1d:} ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{Y}$) is admissible for case 4 if there exists a code ($F_{E_{X}}$, $F_{D_{XY}}$) such that \\eqref{cond1} - \\eqref{cond6} and \\eqref{cond8} hold for any $\\epsilon \\rightarrow 0$ and sufficiently large $K$.\n\\\\ \\textit{Definition 1e:} ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{X}$) is admissible for 
case 5 if there exists a code ($F_{E_{X}}$, $F_{D_{XY}}$) such that \\eqref{cond1} - \\eqref{cond6} and \\eqref{cond7} hold for any $\\epsilon > 0$ and sufficiently large $K$.\n\\ \\textit{Definition 2:} The admissible rate regions $\\mathcal{R}_1$ - $\\mathcal{R}_5$ are defined as:\n\n\\begin{eqnarray}\n\\mathcal{R}_1(h_{XY}) = \\{(R_X, R_Y, R_{kX}, R_{kY}):\t\t\t\\nonumber\n\\\\(R_X, R_Y, R_{kX}, R_{kY}, h_{XY} ) \\text{ is admissible for case 1} \\}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\mathcal{R}_2(h_{X}) = \\{(R_X, R_Y, R_{kX}, R_{kY}):\t\t\t\\nonumber\n\\\\ (R_X, R_Y, R_{kX}, R_{kY}, h_{X} ) \\text{ is admissible for case 2} \\}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\mathcal{R}_3(h_X, h_Y) = \\{(R_X, R_Y, R_{kX}, R_{kY}):\t\t\t\\nonumber\n\\\\ (R_X, R_Y, R_{kX}, R_{kY}, h_{X}, h_{Y} ) \\text{ is admissible for case 3} \\}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\mathcal{R}_4(h_{Y}) = \\{(R_X, R_Y, R_{kX}, R_{kY}):\t\t\t\\nonumber\n\\\\(R_X, R_Y, R_{kX}, R_{kY}, h_{Y} ) \\text{ is admissible for case 4} \\}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\mathcal{R}_5(h_{X}) = \\{(R_X, R_Y, R_{kX}, R_{kY}):\t\t\t\\nonumber\n\\\\(R_X, R_Y, R_{kX}, R_{kY}, h_{X} ) \\text{ is admissible for case 5} \\}\n\\end{eqnarray}\n\nThe following theorems characterize these regions:\n\n\\textit{Theorem 2:} For $0 \\le h_{XY} \\le H(X,Y)$,\n\\begin{eqnarray}\n&& \\mathcal{R}_1(h_{XY}) = \\{(R_X, R_Y, R_{kX},R_{kY}): \t\t\\nonumber\n\\\\ && R_X \\geq H(X|Y), \t\t\t\t\\nonumber\n\\\\ && R_Y \\geq H(Y|X),\t\t\\nonumber\n\\\\ && R_X + R_Y\t\\geq H(X,Y),\t\t\t\t\t\t\\nonumber\n\\\\ && R_{kX} \\geq h_{XY} \\text{ and } R_{kY} \\geq h_{XY} \\}\t\t\t\n\\label{theorem2}\n\\end{eqnarray}\n\n\\textit{Theorem 3:} For $0 \\le h_{X} \\le H(X)$,\n\\begin{eqnarray}\n&& \\mathcal{R}_2(h_{X}) = \\{(R_X, R_Y, R_{kX},R_{kY}): \t\t\\nonumber\n\\\\ && R_X \\geq H(X|Y), \t\t\t\t\\nonumber\n\\\\ && R_Y \\geq H(Y|X),\t\t\\nonumber\n\\\\ && R_X + R_Y\t\\geq 
H(X,Y)\t\t\t\t\t\t\\nonumber\n\\\\ && R_{kX} \\geq h_X \\text{ and } R_{kY} \\geq h_Y \\}\t\t\t\n\\label{theorem3}\n\\end{eqnarray}\n\n\\textit{Theorem 4:} For $0 \\le h_{X} \\le H(X)$ and $0 \\le h_{Y} \\le H(Y)$,\n\\begin{eqnarray}\n&& \\mathcal{R}_3(h_{X}, h_{Y}) = \\{(R_X, R_Y, R_{kX},R_{kY}): \t\t\\nonumber\n\\\\ && R_X \\geq H(X|Y), \t\t\t\t\\nonumber\n\\\\ && R_Y \\geq H(Y|X),\t\t\\nonumber\n\\\\ && R_X + R_Y\t\\geq H(X,Y),\t\t\t\t\t\t\\nonumber\n\\\\ && R_{kX} \\geq h_{X} \\text{ and } R_{kY} \\geq h_{Y} \\}\n\\label{theorem4}\n\\end{eqnarray}\n\n\\textit{Theorem 5:} For $0 \\le h_{X} \\le H(X)$,\n\\begin{eqnarray}\n&& \\mathcal{R}_5(h_{X}) = \\{(R_X, R_Y, R_{kX},R_{kY}): \t\t\\nonumber\n\\\\ && R_X \\geq H(X|Y), \t\t\t\t\\nonumber\n\\\\ && R_Y \\geq H(Y|X),\t\t\\nonumber\n\\\\ && R_X + R_Y\t\\geq H(X,Y),\t\t\t\t\t\t\\nonumber\n\\\\ && R_{kX} \\geq h_{X} \\text{ and } R_{kY} \\geq 0 \\}\n\\label{theorem5}\n\\end{eqnarray}\n\nWhen $h_X = 0$, the region of case 3 depicted in \\eqref{theorem4} reduces to that of case 4. Hence, Corollary 1 follows:\n\\ \\textit{Corollary 1:} $\\mathcal{R}_4(h_{Y}) = \\mathcal{R}_3(0, h_Y)$\n\nThe security levels $h_{XY}$ and $(h_X, h_Y)$, measured by the total and individual uncertainties respectively, indicate how much remains unknown about the sources. As the uncertainty increases, less information is known to an eavesdropper and the level of security is higher. \n\n\n\\section{Proof of Theorems 2 - 5}\nThis section initially proves the direct parts of Theorems 2 - 5 and thereafter the converse parts.\n\n\\subsection{Direct parts}\nAll the channel rates in the theorems above are in accordance with the Slepian-Wolf theorem, hence there is no need to prove them. \nWe construct a code based on the prototype code ($W_X, W_Y, W_{CX}, W_{CY}$) in Lemma 1. 
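The Slepian-Wolf-type bounds $R_X \geq H(X|Y)$, $R_Y \geq H(Y|X)$ and $R_X + R_Y \geq H(X,Y)$ shared by Theorems 2 - 5 can be checked numerically for a toy joint distribution. A sketch; the joint pmf below is ours, purely for illustration:

```python
import math

def entropy(p):
    """Entropy in bits of a distribution given as a list of probabilities."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Hypothetical joint pmf p(x, y) on {0,1} x {0,1}, for illustration only.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

H_XY = entropy(list(joint.values()))
pX = [joint[(0, 0)] + joint[(0, 1)], joint[(1, 0)] + joint[(1, 1)]]
pY = [joint[(0, 0)] + joint[(1, 0)], joint[(0, 1)] + joint[(1, 1)]]
H_X_given_Y = H_XY - entropy(pY)   # H(X|Y) = H(X,Y) - H(Y)
H_Y_given_X = H_XY - entropy(pX)   # H(Y|X) = H(X,Y) - H(X)

def sw_ok(R_X, R_Y):
    """Slepian-Wolf-type channel-rate bounds common to Theorems 2 - 5."""
    return (R_X >= H_X_given_Y and R_Y >= H_Y_given_X
            and R_X + R_Y >= H_XY)
```

With this pmf, $H(X,Y) \approx 1.72$ bits and $H(X|Y) = H(Y|X) \approx 0.72$ bits, so the symmetric rate pair $(1, 1)$ lies inside the region while $(0.3, 0.3)$ does not.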
In order to include a key in the prototype code, $W_X$ is divided into two parts as per the method used by Yamamoto \\cite{shannon1_yamamoto}:\n\\begin{eqnarray}\nW_{X1} = W_X \\text{ mod } M_{X1} \\in I_{M_{X1}} = \\{0, 1, 2, \\ldots, M_{X1} - 1\\}\n\\label{theorems2-4_eq_1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{X2} = \\frac{W_X - W_{X1}}{M_{X1}} \\in I_{M_{X2}} = \\{0, 1, 2, \\ldots, M_{X2} - 1\\}\n\\label{theorems2-4_eq_2}\n\\end{eqnarray}\n\nwhere $M_{X1}$ is a given integer and $M_{X2}$ is the ceiling of $M_X\/M_{X1}$. For simplicity, $M_X\/M_{X1}$ is treated as an integer, because the difference between the ceiling and the actual value can be ignored when $K$ is sufficiently large. In the same way, $W_Y$ is divided:\n\n\\begin{eqnarray}\nW_{Y1} = W_Y \\text{ mod } M_{Y1} \\in I_{M_{Y1}} = \\{0, 1, 2, \\ldots, M_{Y1} - 1\\}\n\\label{theorems2-4_eq_3}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{Y2} = \\frac{W_Y - W_{Y1}}{M_{Y1}} \\in I_{M_{Y2}} = \\{0, 1, 2, \\ldots, M_{Y2} - 1\\}\n\\label{theorems2-4_eq_4}\n\\end{eqnarray}\n\nThe common information components $W_{CX}$ and $W_{CY}$ are not divided further. 
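The split of $W_X$ in \eqref{theorems2-4_eq_1} and \eqref{theorems2-4_eq_2} is just the mod\/quotient decomposition of an index and is trivially invertible; a minimal sketch (function names are ours):

```python
def split_index(W_X, M_X1):
    """Split W_X into (W_X1, W_X2) per (theorems2-4_eq_1)-(theorems2-4_eq_2)."""
    W_X1 = W_X % M_X1            # W_X1 = W_X mod M_X1
    W_X2 = (W_X - W_X1) // M_X1  # W_X2 = (W_X - W_X1) / M_X1
    return W_X1, W_X2

def merge_index(W_X1, W_X2, M_X1):
    """Reconstruction: W_X = W_X2 * M_X1 + W_X1."""
    return W_X2 * M_X1 + W_X1
```

For example, with $M_{X1} = 8$ the index $37$ splits into $(5, 4)$, and merging recovers $37$.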
\nIt can be shown that, when some of the codewords are wiretapped, the uncertainties of $X^K$ and $Y^K$ are bounded as follows:\n\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_{X2},W_Y) \\geq I(X;Y) + \\frac{1}{K} \\log M_{X1} - \\epsilon_{0}^{'}\n\\label{theorems2-4_ineq_1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_{X},W_{Y2}) \\geq I(X;Y) + \\frac{1}{K} \\log M_{Y1} - \\epsilon_{0}^{'}\n\\label{theorems2-4_ineq_2}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_{X},W_{Y2}) \\geq I(X;Y) - \\epsilon_{0}^{'}\n\\label{theorems2-4_ineq_3}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_{X},W_Y, W_{CY}) \\geq \\frac{1}{K} \\log M_{CX} - \\epsilon_{0}^{'}\n\\label{theorems2-4_ineq_4}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_{X},W_Y, W_{CY}) \\geq \\frac{1}{K} \\log M_{CX} - \\epsilon_{0}^{'}\n\\label{theorems2-4_ineq_5}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_Y, W_{CY}) \\geq H(X|Y) + \\frac{1}{K} \\log M_{CX} - \\epsilon_{0}^{'}\n\\label{theorems2-4_ineq_6}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_Y, W_{CY}) \\geq \\frac{1}{K} \\log M_{CX} - \\epsilon_{0}^{'}\n\\label{theorems2-4_ineq_7}\n\\end{eqnarray}\n\nwhere $\\epsilon_{0}^{'} \\rightarrow 0$ as $\\epsilon_{0} \\rightarrow 0$.\nThe proofs for \\eqref{theorems2-4_ineq_1} - \\eqref{theorems2-4_ineq_7} are the same as Yamamoto's proof of Lemma A1 \\cite{shannon1_yamamoto}, with $W_{CX}$, $W_{CY}$, $M_{CX}$ and $M_{CY}$ here playing the roles of Yamamoto's $W_{C1}$, $W_{C2}$, $M_{C1}$ and $M_{C2}$ respectively. 
In addition, the following inequalities are considered here:\n\\begin{eqnarray}\n \\frac{1}{K} H(Y^K|W_X, W_{CX}, W_{CY}, W_{Y2}) & \\geq & \\frac{1}{K} \\log M_{Y1} \\nonumber\n \\\\ & - & \\epsilon_{0}^{'}\n\\label{theorems2-4_ineq_8}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n \\frac{1}{K} H(Y^K|W_X, W_{CX}, W_{CY}) & \\geq & \\frac{1}{K} \\log M_{Y1} \\nonumber\n\\\\ & + & \\frac{1}{K} \\log M_{Y2} - \\epsilon_{0}^{'}\n\\label{theorems2-4_ineq_9}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_{X2}, W_{CY}) & \\geq & \\frac{1}{K} \\log M_{X1} \t\\nonumber\n\\\\ & + & \\frac{1}{K} \\log M_{CX} - \\epsilon_{0}^{'}\n\\label{theorems2-4_ineq_10}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_{X2}, W_{CY}) & \\geq & \\frac{1}{K} \\log M_{Y1} \t\\nonumber\n\\\\ & + & \\frac{1}{K} \\log M_{Y2} + \\frac{1}{K} \\log M_{CX} \t\t\t\\nonumber\n\\\\ & - & \\epsilon_{0}^{'}\n\\label{theorems2-4_ineq_11}\n\\end{eqnarray}\n\nThe inequalities \\eqref{theorems2-4_ineq_8} and \\eqref{theorems2-4_ineq_9} can be proved in the same way as Yamamoto's Lemma A2 \\cite{shannon1_yamamoto}, and \\eqref{theorems2-4_ineq_10} and \\eqref{theorems2-4_ineq_11} in the same way as his Lemma A1. \n\nFor each proof we consider cases where a key already exists for either $V_{CX}$ or $V_{CY}$, and the encrypted common information portion is then used to mask the other portions (the remaining common information portion and the private information portions). Two cases are considered for each: first, when the entropy of the common information portion is greater than that of the portion to be masked, and second, when it is smaller. In the latter case, a smaller additional key is needed to cover the portion entirely. 
This has the effect of reducing the required key length, which is explained in greater detail in Section VII.\n\n\\begin{proof}[Proof of Theorem 2]\nSuppose that ($R_X$, $R_Y$, $R_{KX}$, $R_{KY}$) $\\in$ \n$\\mathcal{R}_1$ for $h_{XY} \\le H(X,Y)$. Without loss of generality, we assume that $h_X \\le h_Y$. Then, from \\eqref{theorem2} \n\\begin{eqnarray}\n&& R_X \\geq H(X^K|Y^K) \t\t\t\t\\nonumber\n\\\\&& R_Y \\geq H(Y^K|X^K) \t\t\t\t\\nonumber\n\\\\&& R_X + R_Y \\geq H(X^K, Y^K)\n\\label{theorem2_proof_1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nR_{kX} \\geq h_{XY}, R_{kY} \\geq h_{XY} \n\\label{theorem2_proof_2}\n\\end{eqnarray}\n\nAssume that a key exists for $V_{CY}$. For the first case, consider the following: $H(V_{CY}) \\geq H(V_X)$, $H(V_{CY}) \\geq H(V_Y)$ and $H(V_{CY}) \\geq H(V_{CX})$, and let\n\n\\begin{eqnarray}\nM_{CY} = 2^{K h_{XY}}\n\\label{theorem2_proof_6}\n\\end{eqnarray}\n\nThe codewords $W_X$ and $W_Y$ and their keys $W_{kX}$ and $W_{kY}$ are now defined:\n\n\\begin{eqnarray}\nW_X = (W_{X1} \\oplus W_{kCY}, W_{X2} \\oplus W_{kCY}, W_{CX} \\oplus W_{kCY})\n\\label{theorem2_proof_7}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_Y = (W_{Y1} \\oplus W_{kCY}, W_{Y2} \\oplus W_{kCY}, W_{CY})\n\\label{theorem2_proof_8}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kX} = (W_{kCY})\n\\label{theorem2_proof_9}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kY} = (W_{kCY})\n\\label{theorem2_proof_10}\n\\end{eqnarray}\n\nwhere $W_\\alpha \\in I_{M_\\alpha} = \\{0, 1, \\ldots, M_\\alpha - 1\\}$. 
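The masking in \eqref{theorem2_proof_7} and \eqref{theorem2_proof_8} is a one-time-pad style encryption of index values. A minimal sketch, reading $\oplus$ as addition modulo $M_\alpha$ (bitwise XOR works equally well when $M_\alpha$ is a power of two; the sizes below are illustrative):

```python
import secrets

def mask(w, k, M):
    """Encrypt index w with key index k, both in {0, ..., M-1}."""
    return (w + k) % M

def unmask(c, k, M):
    """Invert the masking given the same key."""
    return (c - k) % M

M = 1 << 16
W_kCY = secrets.randbelow(M)       # key index, shared over the key channel
W_X1 = 12345
cipher = mask(W_X1, W_kCY, M)
assert unmask(cipher, W_kCY, M) == W_X1
```

When $W_{kCY}$ is uniform and independent of the masked index, the ciphertext is itself uniform, which is what makes the masked components useless to the wiretapper.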
The wiretapper will not know $W_{X1}$, $W_{X2}$ and $W_{CX}$ from $W_X$, nor $W_{Y1}$, $W_{Y2}$ and $W_{CY}$ from $W_Y$, as these are protected by the key $W_{kCY}$.\n\nIn this case, $R_X$, $R_Y$, $R_{kX}$ and $R_{kY}$ satisfy, from \\eqref{lemma1_2} - \\eqref{lemma1_4} and \\eqref{theorem2_proof_1} - \\eqref{theorem2_proof_6}, that\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_X + \\frac{1}{K} \\log M_Y & = & \\frac{1}{K} (\\log M_{X1} + \\log M_{X2} \\nonumber\n\\\\ & + & \\log M_{CX}) + \\frac{1}{K} (\\log M_{Y1} \\nonumber\n\\\\ & + & \\log M_{Y2} + \\log M_{CY}) \\nonumber \n\\\\ & \\le & H(X|Y) + H(Y|X) \t\t\t\t\\nonumber\n\\\\ & + & I(X;Y) + \\epsilon_0 \\nonumber\n\\\\ & = & H(X,Y) \t\t\\nonumber\n\\\\ & \\le & R_X + R_Y\n\\label{theorem2_proof_11}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kX} & = & \\frac{1}{K} \\log M_{CY}\t\\nonumber\n\\\\ & = & h_{XY}\t\t\t\\label{num3}\n\\\\ & \\le & R_{kX}\n\\label{theorem2_proof_13}\n\\end{eqnarray}\n\nwhere \\eqref{num3} comes from \\eqref{theorem2_proof_6}.\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kY} & = & \\frac{1}{K} \\log M_{CY}\t\\nonumber\n\\\\ & = & h_{XY}\t\t\t\\label{num4}\n\\\\ & \\le & R_{kY}\n\\label{theorem2_proof_14}\n\\end{eqnarray}\nwhere \\eqref{num4} comes from \\eqref{theorem2_proof_6}.\n\nThe resulting security levels are:\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_X, W_Y) & = & \\frac{1}{K} H(X^K|W_{X1} \\oplus W_{kCY}, \\nonumber\n\\\\ && W_{X2} \\oplus W_{kCY}, W_{CX} \\oplus W_{kCY},\t\t\t\\nonumber\n\\\\ && W_{Y1} \\oplus W_{kCY}, W_{Y2} \\oplus W_{kCY},\t\\nonumber\n\\\\ && W_{CY}) \t\t\t\t\\nonumber\n\\\\ & = & \\frac{1}{K} H(X^K)\t\\label{num5}\n\\\\ & \\ge & h_X - \\epsilon^{'}_0\n\\label{theorem2_proof_16}\n\\end{eqnarray}\n\nwhere \\eqref{num5} holds because $W_{X1}$, $W_{X2}$, $W_{CX}$, $W_{Y1}$, $W_{Y2}$ are covered by the key $W_{kCY}$, and $W_{CY}$ is covered by an existing random number key. 
Equations \\eqref{lemma1_1} - \\eqref{lemma1_7} imply that $W_{X1}$, $W_{X2}$, $W_{Y1}$ and $W_{Y2}$ have almost no redundancy and that they are mutually independent.\n\nSimilarly, \n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_X, W_Y) \\geq h_Y - \\epsilon^{'}_0\n\\label{theorem2_proof_17}\n\\end{eqnarray}\n\nTherefore ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{XY}$) is admissible from \\eqref{theorem2_proof_11} - \\eqref{theorem2_proof_17}.\n\nNext, the case where $H(V_{CY}) < H(V_X)$, $H(V_{CY}) < H(V_Y)$ and $H(V_{CY}) < H(V_{CX})$ is considered. Here, shorter keys are used in addition to the key provided by $W_{CY}$ in order to make up the key lengths required by the individual portions. For example, the key $W_{k1}$ comprises $W_{kCY}$ and a short key $W_1$, which together provide the length of $W_{X1}$.\nThe codewords $W_X$ and $W_Y$ and their keys $W_{kX}$ and $W_{kY}$ are now defined:\n\n\\begin{eqnarray}\nW_X = (W_{X1} \\oplus W_{k1}, W_{X2} \\oplus W_{k2}, W_{CX} \\oplus W_{k3})\n\\label{theorem2_proof_7.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_Y = (W_{Y1} \\oplus W_{k4}, W_{Y2} \\oplus W_{k5}, W_{CY})\n\\label{theorem2_proof_8.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kX} = (W_{k1}, W_{k2}, W_{k3})\n\\label{theorem2_proof_10.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kY} = (W_{k4}, W_{k5})\n\\label{theorem2_proof_10.1.2}\n\\end{eqnarray}\n\nwhere $W_\\alpha \\in I_{M_\\alpha} = \\{0, 1, \\ldots, M_\\alpha - 1\\}$. 
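A composite key such as $W_{k1} = (W_{kCY}, W_1)$ is simply the pairing of the common-information key with a short independent key, so that the pair ranges over the full index set of the portion to be masked. A sketch under the assumption that the target size is a multiple of $M_{kCY}$ (names are ours):

```python
import secrets

def composite_key(W_kCY, M_kCY, M_target):
    """Pad the key W_kCY (uniform on {0,...,M_kCY-1}) with a short
    independent key so the pair indexes {0, ..., M_target - 1}."""
    assert M_target % M_kCY == 0
    M_short = M_target // M_kCY
    W_short = secrets.randbelow(M_short)      # the short supplementary key
    # Combined index; uniform on the full range when both parts are uniform.
    return W_short * M_kCY + W_kCY
```

Since both parts are uniform and independent, the combined index is uniform on the full range, so the composite key masks the longer portion just as a single full-length key would.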
The wiretapper will not know $W_{X1}$, $W_{X2}$ and $W_{CX}$ from $W_X$, nor $W_{Y1}$, $W_{Y2}$ and $W_{CY}$ from $W_Y$, as these are protected by the keys $W_{k1}, \\ldots, W_{k5}$.\n\nIn this case, $R_X$, $R_Y$, $R_{kX}$ and $R_{kY}$ satisfy that\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_X + \\frac{1}{K} \\log M_Y & = & \\frac{1}{K} (\\log M_{X1} + \\log M_{X2} \\nonumber\n\\\\ & + & \\log M_{CX}) + \\frac{1}{K} (\\log M_{Y1} \\nonumber\n\\\\ & + & \\log M_{Y2} + \\log M_{CY}) \\nonumber \n\\\\ & \\le & H(X|Y) + H(Y|X) \t\t\t\t\\nonumber\n\\\\ & + & I(X;Y) + \\epsilon_0 \\nonumber\n\\\\ & = & H(X,Y) \t\t\\nonumber\n\\\\ & \\le & R_X + R_Y\n\\label{theorem2_proof_11.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kX} & = & \\frac{1}{K} [\\log M_{k1} + \\log M_{k2} + \\log M_{k3}]\t\\nonumber\n\\\\ & = & \\frac{1}{K} [\\log M_{kCY} + \\log M_{1} \\nonumber\n\\\\ & + & \\log M_{kCY} + \\log M_{2} \\nonumber\n\\\\ & + & \\log M_{kCY} + \\log M_{3}] \\nonumber\n\\\\ & = & \\frac{1}{K} [3 \\log M_{kCY} + \\log M_{1} \\nonumber\n\\\\ & + & \\log M_{2} + \\log M_{3}] \\nonumber\n\\\\ & \\geq & 3 h_{XY} - \\epsilon_0 \\label{num333.1}\n\\\\ & \\geq & h_{XY}\n\\label{theorem2_proof_13.1.1}\n\\end{eqnarray}\n\nwhere \\eqref{num333.1} results from \\eqref{theorem2_proof_6}.\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kY} & = & \\frac{1}{K} [\\log M_{k4} + \\log M_{k5}]\t\\nonumber\n\\\\ & = & \\frac{1}{K} [\\log M_{kCY} + \\log M_{4} \\nonumber\n\\\\ & + & \\log M_{kCY} + \\log M_{5}] \\nonumber\n\\\\ & = & \\frac{1}{K} [2 \\log M_{kCY} + \\log M_{4} + \\log M_{5}] \\nonumber\n\\\\ & \\geq & 2 h_{XY} - \\epsilon_0 \\label{num333}\n\\\\ & \\geq & h_{XY}\n\\label{theorem2_proof_14.1.1}\n\\end{eqnarray}\n\n\nwhere \\eqref{num333} results from \\eqref{theorem2_proof_6}.\n\nThe resulting security levels are:\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_X, W_Y) & = & \\frac{1}{K} H(X^K|W_{X1} \\oplus W_{k1}, W_{X2} \\oplus W_{k2}, \\nonumber\n\\\\ && W_{CX} \\oplus W_{k3},\t\t\t\\nonumber\n\\\\ && W_{Y1} \\oplus W_{k4}, 
W_{Y2} \\oplus W_{k5},\t\\nonumber\n\\\\ && W_{CY}) \t\t\t\t\\nonumber\n\\\\ & = & H(X^K)\t\\label{num5.1.1}\n\\\\ & \\ge & h_X - \\epsilon^{'}_0\n\\label{theorem2_proof_16.1.1}\n\\end{eqnarray}\n\nwhere \\eqref{num5} holds because $W_{X1}$, $W_{X2}$, $W_{CX}$, $W_{Y1}$, $W_{Y2}$ are covered by key $W_{CY}$ and some shorter length key and $W_{CY}$ is covered by an existing random number key. \n\nSimilarly, \n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_X, W_Y) \\geq h_Y - \\epsilon^{'}_0\n\\label{theorem2_proof_17.1.1}\n\\end{eqnarray}\n\nTherefore ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{XY}$, $h_{XY}$) is admissible from \\eqref{theorem2_proof_11.1.1} - \\eqref{theorem2_proof_17.1.1}.\n\n\\end{proof}\n\nTheorem 3 - 5 are proven in the same way with varying codewords and keys. The proofs follow:\n\n\\begin{proof}[Theorem 3 proof]\n\nThe consideration for the security levels is that $h_Y \\geq h_X$ because $Y$ contains the key the is used for masking.\nSuppose that ($R_X$, $R_Y$, $R_{KX}$, $R_{KY}$) $\\in$ \n$\\mathcal{R}_2$. From \\eqref{theorem3} \n\\begin{eqnarray}\n&& R_X \\geq H(X^K|Y^K) \t\t\t\t\\nonumber\n\\\\&& R_Y \\geq H(Y^K|X^K) \t\t\t\t\\nonumber\n\\\\&& R_X + R_Y \\geq H(X^K, Y^K)\n\\label{theorem3_proof_1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nR_{kX} \\geq h_{X}, R_{kY} \\geq h_{Y} \n\\label{theorem3_proof_2}\n\\end{eqnarray}\n\nAssuming a key exists for $V_{CY}$. 
For the first case, consider the following: $H(V_{CY}) \\geq H(V_X)$, $H(V_{CY}) \\geq H(V_Y)$ and $H(V_{CY}) \\geq H(V_{CX})$, and let\n\n\\begin{eqnarray}\nM_{CY} = 2^{K h_{Y}}\n\\label{theorem3_proof_6}\n\\end{eqnarray}\n\nThe codewords $W_X$ and $W_Y$ and their keys $W_{kX}$ and $W_{kY}$ are now defined:\n\n\\begin{eqnarray}\nW_X = (W_{X1} \\oplus W_{kCY}, W_{X2} \\oplus W_{kCY}, W_{CX} \\oplus W_{kCY})\n\\label{theorem3_proof_7}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_Y = (W_{Y1} \\oplus W_{kCY}, W_{Y2} \\oplus W_{kCY}, W_{CY})\n\\label{theorem3_proof_8}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kX} = (W_{kCY})\n\\label{theorem3_proof_9}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kY} = (W_{kCY})\n\\label{theorem3_proof_10}\n\\end{eqnarray}\n\nwhere $W_\\alpha \\in I_{M_\\alpha} = \\{0, 1, \\ldots, M_\\alpha - 1\\}$. The wiretapper will not know $W_{X1}$, $W_{X2}$ and $W_{CX}$ from $W_X$, nor $W_{Y1}$, $W_{Y2}$ and $W_{CY}$ from $W_Y$, as these are protected by the key $W_{kCY}$.\n\nIn this case, $R_X$, $R_Y$, $R_{kX}$ and $R_{kY}$ satisfy, from \\eqref{lemma1_2} - \\eqref{lemma1_4} and \\eqref{theorem3_proof_1} - \\eqref{theorem3_proof_6}, that\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_X + \\frac{1}{K} \\log M_Y & = & \\frac{1}{K} (\\log M_{X1} + \\log M_{X2} \\nonumber\n\\\\ & + & \\log M_{CX}) + \\frac{1}{K} (\\log M_{Y1} \\nonumber\n\\\\ & + & \\log M_{Y2} + \\log M_{CY}) \\nonumber \n\\\\ & \\le & H(X|Y) + H(Y|X) \t\t\t\t\\nonumber\n\\\\ & + & I(X;Y) + \\epsilon_0 \\nonumber\n\\\\ & = & H(X,Y) \t\t\\nonumber\n\\\\ & \\le & R_X + R_Y\n\\label{theorem3_proof_11}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kX} & = & \\frac{1}{K} \\log M_{CY}\t\\nonumber\n\\\\ & = & h_{Y}\t\t\t\\label{num3.2}\n\\\\ & \\geq & h_X - \\epsilon_0 \\label{num3.3}\n\\\\ & \\le & R_{kX}\n\\label{theorem3_proof_13}\n\\end{eqnarray}\n\nwhere \\eqref{num3.2} comes from \\eqref{theorem3_proof_6} and \\eqref{num3.3} comes from the consideration stated at the beginning of this proof.\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kY} & = & \\frac{1}{K} \\log 
M_{CY}\t\\nonumber\n\\\\ & = & h_{Y}\t\t\t\\label{num4.1}\n\\\\ & \\le & R_{kY}\n\\label{theorem3_proof_14}\n\\end{eqnarray}\nwhere \\eqref{num4.1} comes from \\eqref{theorem3_proof_6}.\n\nThe resulting security levels are:\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_X, W_Y) & = & \\frac{1}{K} H(X^K|W_{X1} \\oplus W_{kCY}, \\nonumber\n\\\\ && W_{X2} \\oplus W_{kCY}, W_{CX} \\oplus W_{kCY},\t\t\t\\nonumber\n\\\\ && W_{Y1} \\oplus W_{kCY}, W_{Y2} \\oplus W_{kCY},\t\\nonumber\n\\\\ && W_{CY}) \t\t\t\t\\nonumber\n\\\\ & = & \\frac{1}{K} H(X^K)\t\\label{num5.1}\n\\\\ & \\ge & h_X - \\epsilon^{'}_0\n\\label{theorem3_proof_16}\n\\end{eqnarray}\n\nwhere \\eqref{num5.1} holds because $W_{X1}$, $W_{X2}$, $W_{CX}$, $W_{Y1}$, $W_{Y2}$ are covered by the key $W_{kCY}$, and $W_{CY}$ is covered by an existing random number key. Equations \\eqref{lemma1_1} - \\eqref{lemma1_7} imply that $W_{X1}$, $W_{X2}$, $W_{Y1}$ and $W_{Y2}$ have almost no redundancy and that they are mutually independent.\n\nSimilarly, \n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_X, W_Y) \\geq h_Y - \\epsilon^{'}_0\n\\label{theorem3_proof_17}\n\\end{eqnarray}\n\nTherefore ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{X}$) is admissible from \\eqref{theorem3_proof_11} - \\eqref{theorem3_proof_17}.\n\nNext, the case where $H(V_{CY}) < H(V_X)$, $H(V_{CY}) < H(V_Y)$ and $H(V_{CY}) < H(V_{CX})$ is considered. Here, shorter keys are used in addition to the key provided by $W_{CY}$ in order to make up the key lengths required by the individual portions. 
For example, the key $W_{k1}$ comprises $W_{kCY}$ and a short key $W_1$, which together provide the length of $W_{X1}$.\nThe codewords $W_X$ and $W_Y$ and their keys $W_{kX}$ and $W_{kY}$ are now defined:\n\n\\begin{eqnarray}\nW_X = (W_{X1} \\oplus W_{k1}, W_{X2} \\oplus W_{k2}, W_{CX} \\oplus W_{k3})\n\\label{theorem3_proof_7.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_Y = (W_{Y1} \\oplus W_{k4}, W_{Y2} \\oplus W_{k5}, W_{CY})\n\\label{theorem3_proof_8.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kX} = (W_{k1}, W_{k2}, W_{k3})\n\\label{theorem3_proof_10.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kY} = (W_{k4}, W_{k5})\n\\label{theorem3_proof_10.1.2}\n\\end{eqnarray}\n\nwhere $W_\\alpha \\in I_{M_\\alpha} = \\{0, 1, \\ldots, M_\\alpha - 1\\}$. The wiretapper will not know $W_{X1}$, $W_{X2}$ and $W_{CX}$ from $W_X$, nor $W_{Y1}$, $W_{Y2}$ and $W_{CY}$ from $W_Y$, as these are protected by the keys $W_{k1}, \\ldots, W_{k5}$.\n\nIn this case, $R_X$, $R_Y$, $R_{kX}$ and $R_{kY}$ satisfy that\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_X + \\frac{1}{K} \\log M_Y & = & \\frac{1}{K} (\\log M_{X1} + \\log M_{X2} \\nonumber\n\\\\ & + & \\log M_{CX}) + \\frac{1}{K} (\\log M_{Y1} \\nonumber\n\\\\ & + & \\log M_{Y2} + \\log M_{CY}) \\nonumber \n\\\\ & \\le & H(X|Y) + H(Y|X) \t\t\t\t\\nonumber\n\\\\ & + & I(X;Y) + \\epsilon_0 \\nonumber\n\\\\ & = & H(X,Y) \t\t\\nonumber\n\\\\ & \\le & R_X + R_Y\n\\label{theorem3_proof_11.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kX} & = & \\frac{1}{K} [\\log M_{k1} + \\log M_{k2} + \\log M_{k3}]\t\\nonumber\n\\\\ & = & \\frac{1}{K} [\\log M_{kCY} + \\log M_{1} +\t\\log M_{kCY} \\nonumber\n\\\\ & + & \\log M_{2} + \\log M_{kCY} + \\log M_{3}] \\nonumber\n\\\\ & = & \\frac{1}{K} [3 \\log M_{kCY} + \\log M_{1} + \\log M_{2} + \\log M_{3}] \\nonumber\n\\\\ & \\geq & 3 h_{Y} - \\epsilon_0 \\label{num334.1}\n\\\\ & \\geq & 3 h_{X} - \\epsilon_0 \\nonumber\n\\\\ & \\geq & h_{X}\n\\label{theorem3_proof_13.1.1}\n\\end{eqnarray}\n\nwhere \\eqref{num334.1} results from 
\\eqref{theorem3_proof_6} and the result is from the consideration at the beginning of this proof.\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kY} & = & \\frac{1}{K} [\\log M_{k4} + \\log M_{k5}]\t\\nonumber\n\\\\ & = & \\frac{1}{K} [\\log M_{kCY} + \\log M_{4} +\t\\log M_{kCY} + \\log M_{5}] \\nonumber\n\\\\ & = & \\frac{1}{K} [2 \\log M_{kCY} + \\log M_{4} + \\log M_{5}] \\nonumber\n\\\\ & \\geq & 2 h_{Y} - \\epsilon_0 \\label{num333.5}\n\\\\ & \\geq & h_{Y}\n\\label{theorem3_proof_14.1.1}\n\\end{eqnarray}\n\n\nwhere \\eqref{num333.5} results from \\eqref{theorem3_proof_6}.\n\nThe resulting security levels are:\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_X, W_Y) & = & \\frac{1}{K} H(X^K|W_{X1} \\oplus W_{k1}, \\nonumber \n\\\\ && W_{X2} \\oplus W_{k2}, W_{CX} \\oplus W_{k3},\t\t\t\\nonumber\n\\\\ && W_{Y1} \\oplus W_{k4}, W_{Y2} \\oplus W_{k5},\t\\nonumber\n\\\\ && W_{CY}) \t\t\t\t\\nonumber\n\\\\ & = & \\frac{1}{K} H(X^K)\t\\label{num5.1.1.1}\n\\\\ & \\ge & h_X - \\epsilon^{'}_0\n\\label{theorem3_proof_16.1.1}\n\\end{eqnarray}\n\nwhere \\eqref{num5.1.1.1} holds because $W_{X1}$, $W_{X2}$, $W_{CX}$, $W_{Y1}$, $W_{Y2}$ are covered by the key $W_{kCY}$ together with shorter keys, and $W_{CY}$ is covered by an existing random number key. \n\nSimilarly, \n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_X, W_Y) \\geq h_Y - \\epsilon^{'}_0\n\\label{theorem3_proof_17.1.1}\n\\end{eqnarray}\n\nTherefore ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{X}$) is admissible from \\eqref{theorem3_proof_11.1.1} - \\eqref{theorem3_proof_17.1.1}.\n\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem 4]\n\n\nAgain, the consideration for the security levels is that $h_Y \\geq h_X$ because $Y$ contains the key that is used for masking.\nSuppose that ($R_X$, $R_Y$, $R_{KX}$, $R_{KY}$) $\\in$ \n$\\mathcal{R}_3$. 
From \\eqref{theorem4} \n\\begin{eqnarray}\n&& R_X \\geq H(X^K|Y^K) \t\t\t\t\\nonumber\n\\\\&& R_Y \\geq H(Y^K|X^K) \t\t\t\t\\nonumber\n\\\\&& R_X + R_Y \\geq H(X^K, Y^K)\n\\label{theorem4_proof_1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nR_{kX} \\geq h_{X}, R_{kY} \\geq h_{Y} \n\\label{theorem4_proof_2}\n\\end{eqnarray}\n\nAssume that a key exists for $V_{CY}$. For the first case, consider the following: $H(V_{CY}) \\geq H(V_X)$, $H(V_{CY}) \\geq H(V_Y)$ and $H(V_{CY}) \\geq H(V_{CX})$, and let\n \n\\begin{eqnarray}\nM_{CY} = 2^{K h_{Y}}\n\\label{theorem4_proof_6}\n\\end{eqnarray}\n\nIn the same way as Theorems 2 and 3, the codewords $W_X$ and $W_Y$ and their keys $W_{kX}$ and $W_{kY}$ are now defined:\n\n\\begin{eqnarray}\nW_X = (W_{X1} \\oplus W_{kCY}, W_{X2} \\oplus W_{kCY}, W_{CX} \\oplus W_{kCY})\n\\label{theorem4_proof_7}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_Y = (W_{Y1} \\oplus W_{kCY}, W_{Y2} \\oplus W_{kCY}, W_{CY})\n\\label{theorem4_proof_8}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kX} = (W_{kCY})\n\\label{theorem4_proof_9}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kY} = (W_{kCY})\n\\label{theorem4_proof_10}\n\\end{eqnarray}\n\nwhere $W_\\alpha \\in I_{M_\\alpha} = \\{0, 1, \\ldots, M_\\alpha - 1\\}$. 
The wiretapper will not know $W_{X1}$, $W_{X2}$ and $W_{CX}$ from $W_X$, nor $W_{Y1}$, $W_{Y2}$ and $W_{CY}$ from $W_Y$, as these are protected by the key $W_{kCY}$.\n\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_X + \\frac{1}{K} \\log M_Y & = & \\frac{1}{K} (\\log M_{X1} + \\log M_{X2} \\nonumber\n\\\\ & + & \\log M_{CX}) + \\frac{1}{K} (\\log M_{Y1} \\nonumber\n\\\\ & + & \\log M_{Y2} + \\log M_{CY}) \\nonumber \n\\\\ & \\le & H(X|Y) + H(Y|X) \t\t\t\t\\nonumber\n\\\\ & + & I(X;Y) + \\epsilon_0 \\nonumber\n\\\\ & = & H(X,Y) \t\t\\nonumber\n\\\\ & \\le & R_X + R_Y\n\\label{theorem4_proof_11}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kX} & = & \\frac{1}{K} \\log M_{CY}\t\\nonumber\n\\\\ & = & h_{Y}\t\t\t\\label{num4.2}\n\\\\ & \\geq & h_X - \\epsilon_0 \\label{num4.3}\n\\\\ & \\le & R_{kX}\n\\label{theorem4_proof_13}\n\\end{eqnarray}\n\nwhere \\eqref{num4.2} comes from \\eqref{theorem4_proof_6} and \\eqref{num4.3} comes from the consideration stated at the beginning of this proof.\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kY} & = & \\frac{1}{K} \\log M_{CY}\t\\nonumber\n\\\\ & = & h_{Y}\t\t\t\\label{num5.2}\n\\\\ & \\le & R_{kY}\n\\label{theorem4_proof_14}\n\\end{eqnarray}\nwhere \\eqref{num5.2} comes from \\eqref{theorem4_proof_6}.\n\nThe resulting security levels are:\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_X, W_Y) & = & \\frac{1}{K} H(X^K|W_{X1} \\oplus W_{kCY}, \\nonumber\n\\\\ && W_{X2} \\oplus W_{kCY}, W_{CX} \\oplus W_{kCY},\t\t\t\\nonumber\n\\\\ && W_{Y1} \\oplus W_{kCY}, W_{Y2} \\oplus W_{kCY},\t\\nonumber\n\\\\ && W_{CY}) \t\t\t\t\\nonumber\n\\\\ & = & \\frac{1}{K} H(X^K)\t\\label{num6.1}\n\\\\ & \\ge & h_X - \\epsilon^{'}_0\n\\label{theorem4_proof_16}\n\\end{eqnarray}\n\nwhere \\eqref{num6.1} holds because $W_{X1}$, $W_{X2}$, $W_{CX}$, $W_{Y1}$, $W_{Y2}$ are covered by the key $W_{kCY}$, and $W_{CY}$ is covered by an existing random number key. 
Equations \\eqref{lemma1_1} - \\eqref{lemma1_7} imply that $W_{X1}$, $W_{X2}$, $W_{Y1}$ and $W_{Y2}$ have almost no redundancy and that they are mutually independent.\n\nSimilarly, \n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_X, W_Y) \\geq h_Y - \\epsilon^{'}_0\n\\label{theorem4_proof_17}\n\\end{eqnarray}\n\nTherefore ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{X}$, $h_{Y}$) is admissible from \\eqref{theorem4_proof_11} - \\eqref{theorem4_proof_17}.\n\nNext, the case where $H(V_{CY}) < H(V_X)$, $H(V_{CY}) < H(V_Y)$ and $H(V_{CY}) < H(V_{CX})$ is considered. Here, shorter keys are used in addition to the key provided by $W_{CY}$ in order to make up the key lengths required by the individual portions. For example, the key $W_{k1}$ comprises $W_{kCY}$ and a short key $W_1$, which together provide the length of $W_{X1}$.\nThe codewords $W_X$ and $W_Y$ and their keys $W_{kX}$ and $W_{kY}$ are now defined:\n\n\\begin{eqnarray}\nW_X = (W_{X1} \\oplus W_{k1}, W_{X2} \\oplus W_{k2}, W_{CX} \\oplus W_{k3})\n\\label{theorem4_proof_7.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_Y = (W_{Y1} \\oplus W_{k4}, W_{Y2} \\oplus W_{k5}, W_{CY})\n\\label{theorem4_proof_8.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kX} = (W_{k1}, W_{k2}, W_{k3})\n\\label{theorem4_proof_10.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kY} = (W_{k4}, W_{k5})\n\\label{theorem4_proof_10.1.2}\n\\end{eqnarray}\n\nwhere $W_\\alpha \\in I_{M_\\alpha} = \\{0, 1, \\ldots, M_\\alpha - 1\\}$. 
The wiretapper will not know $W_{X1}$, $W_{X2}$ and $W_{CX}$ from $W_X$, nor $W_{Y1}$, $W_{Y2}$ and $W_{CY}$ from $W_Y$, as these are protected by the keys $W_{k1}, \\ldots, W_{k5}$.\n\nIn this case, $R_X$, $R_Y$, $R_{kX}$ and $R_{kY}$ satisfy that\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_X + \\frac{1}{K} \\log M_Y & = & \\frac{1}{K} (\\log M_{X1} \\nonumber\n\\\\ & + & \\log M_{X2} + \\log M_{CX}) \\nonumber\n\\\\ & + & \\frac{1}{K} (\\log M_{Y1} + \\log M_{Y2} + \\log M_{CY}) \\nonumber\n\\\\ & \\le & H(X|Y) + H(Y|X) + I(X;Y) + \\epsilon_0\t\t\t\t\\nonumber\n\\\\ & = & H(X,Y) \t\t\\nonumber\n\\\\ & \\le & R_X + R_Y\n\\label{theorem4_proof_11.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kX} & = & \\frac{1}{K} [\\log M_{k1} + \\log M_{k2} + \\log M_{k3}]\t\\nonumber\n\\\\ & = & \\frac{1}{K} [\\log M_{kCY} + \\log M_{1} +\t\\log M_{kCY} \\nonumber\n\\\\ & + & \\log M_{2} + \\log M_{kCY} + \\log M_{3}] \\nonumber\n\\\\ & = & \\frac{1}{K} [3 \\log M_{kCY} + \\log M_{1} + \\log M_{2} + \\log M_{3}] \\nonumber\n\\\\ & \\geq & 3 h_{Y} - \\epsilon_0 \\label{num335.1}\n\\\\ & \\geq & 3 h_{X} - \\epsilon_0 \\nonumber\n\\\\ & \\geq & h_{X}\n\\label{theorem4_proof_13.1.1}\n\\end{eqnarray}\n\nwhere \\eqref{num335.1} results from \\eqref{theorem4_proof_6} and the result is from the consideration at the beginning of this proof.\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kY} & = & \\frac{1}{K} [\\log M_{k4} + \\log M_{k5}]\t\\nonumber\n\\\\ & = & \\frac{1}{K} [\\log M_{kCY} + \\log M_{4} +\t\\log M_{kCY} + \\log M_{5}] \\nonumber\n\\\\ & = & \\frac{1}{K} [2 \\log M_{kCY} + \\log M_{4} + \\log M_{5}] \\nonumber\n\\\\ & \\geq & 2 h_{Y} - \\epsilon_0 \\label{num336.5}\n\\\\ & \\geq & h_{Y}\n\\label{theorem4_proof_14.1.1}\n\\end{eqnarray}\n\n\nwhere \\eqref{num336.5} results from \\eqref{theorem4_proof_6}.\n\nThe resulting security levels are:\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_X, W_Y) & = & \\frac{1}{K} H(X^K|W_{X1} \\oplus W_{k1}, \\nonumber\n\\\\ && W_{X2} \\oplus W_{k2}, W_{CX} \\oplus 
W_{k3},\t\t\t\\nonumber\n\\\\ && W_{Y1} \\oplus W_{k4}, W_{Y2} \\oplus W_{k5},\t\\nonumber\n\\\\ && W_{CY}) \t\t\t\t\\nonumber\n\\\\ & = & \\frac{1}{K} H(X^K)\t\\label{num6.1.1.1}\n\\\\ & \\ge & h_X - \\epsilon^{'}_0\n\\label{theorem4_proof_16.1.1}\n\\end{eqnarray}\n\nwhere \\eqref{num6.1.1.1} holds because $W_{X1}$, $W_{X2}$, $W_{CX}$, $W_{Y1}$, $W_{Y2}$ are covered by the key $W_{kCY}$ together with shorter keys, and $W_{CY}$ is covered by an existing random number key. \n\nSimilarly, \n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_X, W_Y) \\geq h_Y - \\epsilon^{'}_0\n\\label{theorem4_proof_17.1.1}\n\\end{eqnarray}\n\nTherefore ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{X}$, $h_{Y}$) is admissible from \\eqref{theorem4_proof_11.1.1} - \\eqref{theorem4_proof_17.1.1}.\n\nThe region indicated for $\\mathcal{R}_4$ is derived from this region for $\\mathcal{R}_3$ when $h_X = 0$.\n\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem 5]\nAs before, $V_{CY}$ may be used as a key; however, here we use $V_{CX}$ as the key to show some variation. \n\nNow the consideration for the security levels is that $h_X \\geq h_Y$ because $X$ contains the key that is used for masking.\nSuppose that ($R_X$, $R_Y$, $R_{KX}$, $R_{KY}$) $\\in$ \n$\\mathcal{R}_5$. From \\eqref{theorem5} \n\\begin{eqnarray}\n&& R_X \\geq H(X^K|Y^K) \t\t\t\t\\nonumber\n\\\\&& R_Y \\geq H(Y^K|X^K) \t\t\t\t\\nonumber\n\\\\&& R_X + R_Y \\geq H(X^K, Y^K)\n\\label{theorem5_proof_1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nR_{kX} \\geq h_{X}, R_{kY} \\geq h_{Y} \n\\label{theorem5_proof_2}\n\\end{eqnarray}\n\nAssume that a key exists for $V_{CX}$. 
For the first case, consider the following: $H(V_{CX}) \\geq H(V_X)$, $H(V_{CX}) \\geq H(V_Y)$ and $H(V_{CX}) \\geq H(V_{CY})$.\n \n\\begin{eqnarray}\nM_{CX} = 2^{K h_{X}}\n\\label{theorem5_proof_6}\n\\end{eqnarray}\n\nThe codewords $W_X$ and $W_Y$ and their keys $W_{kX}$ and $W_{kY}$ are now defined:\n\n\\begin{eqnarray}\nW_X = (W_{X1} \\oplus W_{kCX}, W_{X2} \\oplus W_{kCX}, W_{CX})\n\\label{theorem5_proof_7}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_Y = (W_{Y1} \\oplus W_{kCX}, W_{Y2} \\oplus W_{kCX}, W_{CY} \\oplus W_{kCX})\n\\label{theorem5_proof_8}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kX} = (W_{kCX})\n\\label{theorem5_proof_10}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kY} = (W_{kCX})\n\\label{theorem5_proof_10.2}\n\\end{eqnarray}\n\nwhere $W_\\alpha \\in I_{M_\\alpha} = \\{0, 1, \\ldots, M_\\alpha - 1\\}$. The wiretapper will not know $W_{X1}$, $W_{X2}$ and $W_{CX}$ from $W_X$ and $W_{Y1}$, $W_{Y2}$ and $W_{CY}$ from $W_Y$ as these are protected by the key $W_{kCX}$, and $W_{kCX}$ is protected by a random number key.\n\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_X + \\frac{1}{K} \\log M_Y & = & \\frac{1}{K} (\\log M_{X1} + \\log M_{X2} \\nonumber\n\\\\ & + & \\log M_{CX}) + \\frac{1}{K} (\\log M_{Y1} \\nonumber\n\\\\ & + & \\log M_{Y2} + \\log M_{CY}) \\nonumber \n\\\\ & \\le & H(X|Y) + H(Y|X) \t\t\t\t\\nonumber\n\\\\ & + & I(X;Y) + \\epsilon_0 \\nonumber\n\\\\ & = & H(X,Y) \t\t\\nonumber\n\\\\ & \\le & R_X + R_Y\n\\label{theorem5_proof_11}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kX} & = & \\frac{1}{K} \\log M_{CX}\t\\nonumber\n\\\\ & \\geq & h_X - \\epsilon_0 \\label{num6.31}\n\\\\ & \\le & R_{kX}\n\\label{theorem5_proof_13}\n\\end{eqnarray}\n\nwhere \\eqref{num6.31} comes from \\eqref{theorem5_proof_6}.\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kY} & = & \\frac{1}{K} \\log M_{CX}\t\\nonumber\n\\\\ & = & h_{X}\t\t\t\\label{num6.32}\n\\\\ & \\geq & h_Y \\label{num6.33}\n\\\\ & \\le & R_{kY}\n\\label{theorem5_proof_14}\n\\end{eqnarray}\nwhere \\eqref{num6.32} comes from \\eqref{theorem5_proof_6} and \\eqref{num6.33} comes 
from the consideration stated at the beginning of this proof.\n\nThe resulting security levels are:\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_X, W_Y) & = & \\frac{1}{K} H(X|W_{X1} \\oplus W_{kCX}, \\nonumber\n\\\\ && W_{X2} \\oplus W_{kCX}, W_{CX}, W_{CY} \\oplus W_{kCX},\t\t\t\\nonumber\n\\\\ && W_{Y1} \\oplus W_{kCX}, W_{Y2} \\oplus W_{kCX}) \t\t\t\t\\nonumber\n\\\\ & = & H(X^K)\t\\label{num16.1}\n\\\\ & \\ge & h_X - \\epsilon^{'}_0\n\\label{theorem5_proof_16}\n\\end{eqnarray}\n\nwhere \\eqref{num16.1} holds because $W_{X1}$, $W_{X2}$, $W_{CY}$, $W_{Y1}$, $W_{Y2}$ are covered by the key $W_{kCX}$, and $W_{kCX}$ is covered by an existing random number key. Equations \\eqref{lemma1_1} - \\eqref{lemma1_7} imply that $W_{X1}$, $W_{X2}$, $W_{Y1}$ and $W_{Y2}$ have almost no redundancy and they are mutually independent.\n\nSimilarly, \n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_X, W_Y) \\geq h_Y - \\epsilon^{'}_0\n\\label{theorem5_proof_17}\n\\end{eqnarray}\n\nTherefore ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{X}$, $h_{Y}$) is admissible from \\eqref{theorem5_proof_11} - \\eqref{theorem5_proof_17}.\n\nNext, the case where $H(V_{CX}) < H(V_X)$, $H(V_{CX}) < H(V_Y)$ and $H(V_{CX}) < H(V_{CY})$ is considered. Here, shorter keys are used in addition to the key provided by $W_{CX}$ in order to make up the key lengths required by the individual portions. 
For example, the key $W_{k1}$ comprises $W_{kCX}$ and a short key $W_1$, which together have the length of $W_{X1}$.\nThe codewords $W_X$ and $W_Y$ and their keys $W_{kX}$ and $W_{kY}$ are now defined:\n\n\\begin{eqnarray}\nW_X = (W_{X1} \\oplus W_{k1}, W_{X2} \\oplus W_{k2}, W_{CX})\n\\label{theorem5_proof_7.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_Y = (W_{CY} \\oplus W_{k3}, W_{Y1} \\oplus W_{k4}, W_{Y2} \\oplus W_{k5})\n\\label{theorem5_proof_8.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kX} = (W_{k1}, W_{k2}, W_{k3})\n\\label{theorem5_proof_10.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kY} = (W_{k4}, W_{k5})\n\\label{theorem5_proof_10.1.2}\n\\end{eqnarray}\n\nwhere $W_\\alpha \\in I_{M_\\alpha} = \\{0, 1, \\ldots, M_\\alpha - 1\\}$. The wiretapper will not know $W_{X1}$, $W_{X2}$ and $W_{CX}$ from $W_X$ and $W_{Y1}$, $W_{Y2}$ and $W_{CY}$ from $W_Y$ as these are protected by the key $W_{kCX}$ and some shorter keys.\n\nIn this case, $R_X$, $R_Y$, $R_{kX}$ and $R_{kY}$ satisfy\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_X + \\frac{1}{K} \\log M_Y & = & \\frac{1}{K} (\\log M_{X1} \\nonumber\n\\\\ & + & \\log M_{X2} + \\log M_{CX}) \\nonumber\n\\\\ & + & \\frac{1}{K} (\\log M_{Y1} + \\log M_{Y2} \\nonumber\n\\\\ & + & \\log M_{CY}) \\nonumber\n\\\\ & \\le & H(X|Y) + H(Y|X) + I(X;Y) \\nonumber\n\\\\ & + & \\epsilon_0\t\t\t\t\\nonumber\n\\\\ & = & H(X,Y) \t\t\\nonumber\n\\\\ & \\le & R_X + R_Y\n\\label{theorem5_proof_11.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kX} & = & \\frac{1}{K} [\\log M_{k1} + \\log M_{k2} + \\log M_{k3}]\t\\nonumber\n\\\\ & = & \\log M_{kCX} + \\log M_{1} + \\log M_{kCX} \\nonumber\n\\\\ & + & \\log M_{2} + \\log M_{kCX} + \\log M_{3} \\nonumber\n\\\\ & = & 3 \\log M_{kCX} + \\log M_{1} + \\log M_{2} + \\log M_{3} \\nonumber\n\\\\ & \\geq & 3 h_{X} - \\epsilon_0 \\label{num3335.1}\n\\\\ & \\geq & h_{X}\n\\label{theorem5_proof_13.1.1.1}\n\\end{eqnarray}\n\nwhere \\eqref{num3335.1} results from 
\\eqref{theorem5_proof_6}.\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kY} & = & \\frac{1}{K} [\\log M_{k4} + \\log M_{k5}]\t\\nonumber\n\\\\ & = & \\log M_{kCX} + \\log M_{4} + \\log M_{kCX} \\nonumber\n\\\\ & + & \\log M_{5} \\nonumber\n\\\\ & = & 2 \\log M_{kCX} + \\log M_{4} + \\log M_{5} \\nonumber\n\\\\ & \\geq & 2 h_{X} - \\epsilon_0 \\label{num33335.1}\n\\\\ & \\geq & 2 h_{Y} - \\epsilon_0 \\label{num3336.5}\n\\\\ & \\geq & h_{Y}\n\\label{theorem5_proof_14.1.1}\n\\end{eqnarray}\n\nwhere \\eqref{num33335.1} results from \\eqref{theorem5_proof_6} and \\eqref{num3336.5} results from the consideration at the beginning of this proof.\n\n\nThe resulting security levels are:\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_X, W_Y) & = & \\frac{1}{K} H(X|W_{X1} \\oplus W_{k1}, W_{X2} \\oplus W_{k2}, \\nonumber\n\\\\ && W_{CX}, W_{CY} \\oplus W_{k3},\t\t\t\\nonumber\n\\\\ && W_{Y1} \\oplus W_{k4}, W_{Y2} \\oplus W_{k5}) \t\t\t\t\\nonumber\n\\\\ & = & H(X^K)\t\\label{num6.1.1.1.1}\n\\\\ & \\ge & h_X - \\epsilon^{'}_0\n\\label{theorem5_proof_16.1.1.1}\n\\end{eqnarray}\n\nwhere \\eqref{num6.1.1.1.1} holds because $W_{X1}$, $W_{X2}$, $W_{CY}$, $W_{Y1}$, $W_{Y2}$ are covered by the key $W_{kCX}$ and some shorter keys, and $W_{kCX}$ is covered by an existing random number key. \n\nSimilarly, \n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_X, W_Y) \\geq h_Y - \\epsilon^{'}_0\n\\label{theorem5_proof_17.1.1}\n\\end{eqnarray}\n\nTherefore ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{X}$, $h_{Y}$) is admissible from \\eqref{theorem5_proof_11.1.1} - \\eqref{theorem5_proof_17.1.1}.\n\n\\end{proof}\n\n\n\\subsection{Converse parts}\nFrom the Slepian-Wolf theorem we know that the channel rates must satisfy $R_X \\geq H(X|Y)$, $R_Y \\geq H(Y|X)$ and $R_X + R_Y \\geq H(X,Y)$ to achieve a low error probability when decoding.\nHence, the key rates are considered in this subsection. 
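The security levels established in the achievability proofs above rest on the one-time-pad property of the XOR masking: when the key is uniform and independent of the masked portion, the masked codeword reveals nothing about it. The following sketch checks this property exhaustively on a small alphabet; the alphabet size and names are assumptions for illustration, not the paper's construction.

```python
from collections import Counter

# One-time-pad property behind the security levels: for a uniform key W_k
# on I_M = {0, ..., M-1} with M a power of two, the masked word w XOR k
# is uniformly distributed for every fixed private word w.
M = 8  # illustrative alphabet size (power of two, so XOR is a group operation)

for w in range(M):
    masked_counts = Counter(w ^ k for k in range(M))
    # every masked value occurs exactly once, i.e. the distribution of the
    # masked word is uniform and does not depend on w
    assert len(masked_counts) == M
    assert all(c == 1 for c in masked_counts.values())
```

Because the masked word's distribution does not depend on the private word, conditioning on it leaves the entropy of the private portion unchanged, which is what equalities such as \\eqref{num6.1.1.1.1} assert.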
\n\\\\\n\\textit{Converse part of Theorem 2:}\n\\begin{eqnarray}\nR_{kX} & \\geq & \\frac{1}{K} \\log M_{kX} - \\epsilon\t\t\\nonumber\n\\\\ & \\geq & \\frac{1}{K} H(W_{kX}) - \\epsilon\t\t\t\\nonumber\n\\\\ & \\geq & \\frac{1}{K} H(W_{kX}|W_1) - \\epsilon\t\t\t\\nonumber\n\\\\ & = & \\frac{1}{K} [H(W_{kX}) - I(W_{kX}; W_1)] - \\epsilon\t\t\\nonumber\n\\\\ & = & \\frac{1}{K} [H(W_{kX}|X, Y, W_1) + I(W_{kX}; W_1) \t\t\\nonumber\n\\\\ & + & I(W_{kX}; X, Y|W_1) - I(W_{kX}; W_1)] - \\epsilon\t\t\\nonumber\n\\\\ & \\geq & \\frac{1}{K} I(X, Y; W_{kX}|W_1) - \\epsilon\t\t\\nonumber\n\\\\ & = & \\frac{1}{K} [H(X, Y|W_1) - H(X,Y|W_1, W_{kX})] - \\epsilon\t\t\\nonumber\n\\\\ & \\geq & h_{XY} - \\frac{1}{K} H(X,Y|W_1, W_{kX}) - \\epsilon \\label{conv_1}\n\\\\ & = & h_{XY} - H(V_{CY}) - \\epsilon \t\t\\nonumber\n\\\\ & = & h_{XY} - \\epsilon\n\\end{eqnarray}\n\nwhere \\eqref{conv_1} results from equation \\eqref{cond8.1}. Here, we consider the extremes of $H(V_{CY})$ in order to determine the limit for $R_{kX}$. When this quantity is at its minimum, the maximum bound of $h_{XY}$ is achieved.\n\n\\begin{eqnarray}\nR_{kY} & \\geq & \\frac{1}{K} \\log M_{kY} - \\epsilon\t\t\\nonumber\n\\\\ & \\geq & \\frac{1}{K} H(W_{kY}) - \\epsilon\t\t\t\\nonumber\n\\\\ & \\geq & \\frac{1}{K} H(W_{kY}|W_2) - \\epsilon\t\t\\nonumber\n\\\\ & = & \\frac{1}{K} [H(W_{kY}) - I(W_{kY}; W_2)] - \\epsilon\t\t\\nonumber\n\\\\ & = & \\frac{1}{K} [H(W_{kY}|X, Y, W_2) + I(W_{kY}; W_2) \t\t\\nonumber\n\\\\ & + & I(W_{kY}; X, Y|W_2) - I(W_{kY}; W_2)] - \\epsilon\t\t\t\\nonumber\n\\\\ & \\geq & \\frac{1}{K} I(X, Y; W_{kY}|W_2) - \\epsilon\t\t\t\t\\nonumber\n\\\\ & = & \\frac{1}{K} [H(X, Y|W_2) - H(X,Y|W_2, W_{kY})] - \\epsilon\t\t\\nonumber\n\\\\ & \\geq & h_{XY} - \\frac{1}{K} H(X,Y|W_2, W_{kY}) - \\epsilon \\label{conv_2}\n\\\\ & = & h_{XY} - H(V_{CX}) - \\epsilon \t\\nonumber\n\\\\ & = & h_{XY} - \\epsilon\n\\end{eqnarray}\n\n\nwhere \\eqref{conv_2} results from equation \\eqref{cond9}. 
Here, we consider the extremes of $H(V_{CX})$ in order to determine the limit for $R_{kY}$. When this quantity is at its minimum, the maximum bound of $h_{XY}$ is achieved.\n\n\n\\textit{Converse part of Theorem 3:}\n\\begin{eqnarray}\nR_{kX} & \\geq & \\frac{1}{K} \\log M_{kX} - \\epsilon\t\t\\nonumber\n\\\\ & \\geq & \\frac{1}{K} H(W_{kX}) - \\epsilon\t\t\\nonumber\n\\\\ & \\geq & \\frac{1}{K} H(W_{kX}|W_1) - \\epsilon\t\t\\nonumber\n\\\\ & = & \\frac{1}{K} [H(W_{kX}) - I(W_{kX}; W_1)] - \\epsilon\t\t\\nonumber\n\\\\ & = & \\frac{1}{K} [H(W_{kX}|X, W_1) + I(W_{kX}; W_1) \t\t\\nonumber\n\\\\ & + & I(X; W_{kX}|W_1) - I(W_{kX}; W_1)] - \\epsilon\t\t\\nonumber\n\\\\ & \\geq & \\frac{1}{K} I(X; W_{kX}|W_1) - \\epsilon\t\t\\nonumber\n\\\\ & = & \\frac{1}{K} [H(X|W_1) - H(X|W_1, W_{kX})] - \\epsilon\t\\nonumber\n\\\\ & \\geq & h_{X} - H(V_{CY}) - \\epsilon \\label{conv_3}\n\\\\ & = & h_{X} - \\epsilon\n\\end{eqnarray}\n\n\nwhere \\eqref{conv_3} results from \\eqref{cond7}. Here, we consider the extremes of $H(V_{CY})$ in order to determine the limit for $R_{kX}$. When this quantity is at its minimum, the maximum bound of $h_{X}$ is achieved.\n\n\\begin{eqnarray}\nR_{kY} & \\geq & \\frac{1}{K} \\log M_{kY} - \\epsilon\t\\nonumber\n\\\\ & \\geq & \\frac{1}{K} H(W_{kY}) - \\epsilon\t\t\\nonumber\n\\\\ & \\geq & \\frac{1}{K} H(W_{kY}|W_2) - \\epsilon\t\t\\nonumber\n\\\\ & = & \\frac{1}{K} [H(W_{kY}) - I(W_{kY}; W_2)] - \\epsilon\t\t\\nonumber\n\\\\ & = & \\frac{1}{K} [H(W_{kY}|Y, W_2) + I(W_{kY}; W_2) \t\t\\nonumber\n\\\\ & + & I(Y; W_{kY}|W_2) - I(W_{kY}; W_2)] - \\epsilon\t\t\\nonumber\n\\\\ & \\geq & \\frac{1}{K} I(Y; W_{kY}|W_2) - \\epsilon\t\t\t\t\\nonumber\n\\\\ & = & \\frac{1}{K} [H(Y|W_2) - H(Y|W_2, W_{kY})] - \\epsilon\t\t\\nonumber\n\\\\ & \\geq & h_{Y} - H(V_{CX}) - \\epsilon \\label{conv_4}\n\\\\ & = & h_{Y} - \\epsilon\n\\end{eqnarray}\n\nwhere \\eqref{conv_4} results from \\eqref{cond8}. 
Here, we consider the extremes of $H(V_{CX})$ in order to determine the limit for $R_{kY}$. When this quantity is at its minimum, the maximum bound of $h_{Y}$ is achieved.\n\nSince Theorems 4 and 5 also have key rates of $h_X$ and $h_Y$ for $X$ and $Y$ respectively, the same methods can be used to prove their converse parts. \n\n\n\\section{Scheme for multiple sources}\n\nThe two correlated source model presented in Section II is now generalised to multiple correlated sources transmitting syndromes across multiple wiretapped links. This new approach represents a network scenario where there are many sources and one receiver. We consider the information leakage for this model with Slepian-Wolf coding and thereafter consider its Shannon's cipher system representation.\n\n\\subsection{Information leakage using Slepian-Wolf coding}\n\nHere, Figure~\\ref{fig:extended} gives a pictorial view of the new extended model for multiple correlated sources.\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics [scale = 0.7]{extended.pdf}\n\\caption{Extended generalised model}\n\\label{fig:extended}\n\\end{figure}\n\n\nConsider a situation where there are many sources, which form the set ${\\bf S}$:\n \n\\begin{eqnarray}\n{\\bf S} = \\{S_{1}, S_{2}, \\ldots, S_{n}\\} \\nonumber\n\\end{eqnarray}\nwhere $S_i$ represents the $i$th source ($i = 1,\\ldots, n$) and there are $n$ sources in total. Each source may be correlated with other sources, and all sources are drawn from a binary alphabet. There is one receiver that is responsible for performing decoding. The syndrome for a source $S_i$ is represented by $T_{S_i}$, which is part of the same alphabet as the sources. \n\nThe entropy of a source is given by a combination of a specific conditional entropy and mutual information. 
In order to present the entropy we first define the following sets:\n\n\n\n\\begin{itemize}\n\\item [-] The set ${\\bf S}$, which contains all sources: ${\\bf S} = \\{S_1, S_2,\\ldots, S_n\\}$. \n\\item [-] The set ${\\bf S}_t$, which contains $t$ unique elements from ${\\bf S}$, where ${\\bf S}_t \\subseteq {\\bf S}$, $S_i \\in {\\bf S}_t$, ${\\bf S}_t \\cup {\\bf S}_t^c = {\\bf S}$ and $|{\\bf S}_t| = t$. \n\\end{itemize}\n\n\nHere, $H(S_i)$ is obtained as follows:\n\n\\begin{eqnarray}\nH(S_i) = H(S_i|{\\bf S}_{\\backslash S_i}) + \\displaystyle\\sum_{t=2}^{n} (-1)^{t-1} \\displaystyle\\sum_{\\text{all possible ${\\bf S}_t$}}^{} I({\\bf S}_t|{\\bf S}_t^c)\n\\label{entropy}\n\\end{eqnarray}\n\n\nHere, $n$ is the number of sources, $H(S_i|{\\bf S}_{\\backslash S_i})$ denotes the conditional entropy of the source $S_i$ given all sources in ${\\bf S}$ other than $S_i$, and $I({\\bf S}_t|{\\bf S}_t^c)$ denotes the mutual information between all sources in the subset ${\\bf S}_t$ given the complement of ${\\bf S}_t$.\nIn the same way as for two sources, the generalised probabilities and entropies can be developed. It is then possible to decode the source message for source $S_i$ by receiving all components related to $S_i$. This gives rise to the following inequality for $H(S_i)$ in terms of the sources:\n\\begin{eqnarray}\nH(S_i|{ {\\bf S}_{\\backslash S_i}}) & + & \\displaystyle\\sum_{t=2}^{n} (-1)^{t-1} \\displaystyle\\sum_{\\text{all possible ${\\bf S}_t$}}^{} I({\\bf S}_t|{\\bf S}_t^c) \\nonumber\n\\\\ & \\le & H(S_i) + \\delta\n\\label{3source_2}\n\\end{eqnarray}\n\nIn this type of model, information from multiple links needs to be gathered in order to determine the transmitted information for one source. Here, the common information between sources is represented by the $I({\\bf S}_t|{\\bf S}_t^c)$ term. The portions of common information sent by each source can be determined upfront; in our case the allocation is arbitrary. 
For example, in a three source model where $X$, $Y$ and $Z$ are the correlated sources, the common information shared between $X$ and the other sources is represented by $I(X;Y|Z)$ and $I(X;Z|Y)$. Each common information portion is divided such that the sources having access to it are able to produce a portion of it themselves. The common information $I(X;Y|Z)$ is divided into $V_{CX1}$ and $V_{CY1}$, where the former is the common information between $X$ and $Y$ produced by $X$, and the latter is the common information between $X$ and $Y$ produced by $Y$. Similarly, $I(X;Z|Y)$ consists of two common information portions, $V_{CX2}$ and $V_{CZ1}$, produced by $X$ and $Z$ respectively. \n\nAs with the previous model for two correlated sources, since wiretapping is possible there is a need to develop the information leakage for the model. The information leakages for this multiple source model are indicated in \\eqref{remark1} and \\eqref{remark2}. \n\n\\textit{Remark 1:} The leaked information for a source $S_i$ given the transmitted codewords $T_{S_i}$, is given by:\n\\begin{eqnarray}\nL^{S_i}_{T_{S_i}} = I(S_i ; T_{S_i})\n\\label{remark1}\n\\end{eqnarray}\n\nSince we use the notion that the information leakage is the conditional entropy of the source given the transmitted information subtracted from the source's uncertainty (i.e. $H(S_i) - H(S_i| T_{S_i})$), the proof for \\eqref{remark1} is trivial. Here, we note that the common information is the minimum amount of information leaked. Each source is responsible for transmitting its own private information and there is a possibility that this private information may also be leaked. The maximum leakage for this case is thus the uncertainty of the source itself, $H(S_i)$.\n\nWe also consider the information leakage for a source $S_i$ when another source $S_j$ ($j \\neq i$) has transmitted information. 
This gives rise to Remark 2.\n\\\\\n\\textit{Remark 2:} The leaked information for a source $S_i$ given the transmitted codewords $T_{S_j}$, where $i \\neq j$ is:\n\\begin{eqnarray}\nL^{S_i}_{T_{S_j}} & = & H(S_i) - H(S_i|T_{S_j})\t\\nonumber\n\\\\ & = & H(S_i) - [H(S_i) - I(S_i; T_{S_j})]\t\\nonumber\n\\\\ & = & I(S_i ; T_{S_j})\n\\label{remark2}\n\\end{eqnarray}\n\nThe information leakage for a source is determined based on the information transmitted from any other channel using the common information between them. The private information is not considered, as it is transmitted by each source itself and can therefore not be obtained from an alternate channel. Remark 2 therefore gives an indication of the maximum amount of information leaked for source $S_i$ with knowledge of the syndrome $T_{S_j}$. \n\nThese remarks show that the common information can be used to quantify the leaked information. The common information provides information for more than one source and is therefore susceptible to leaking information about more than one source should it be compromised. This subsection gives an indication of the information leakage for the new generalised multiple correlated sources model when a source's syndrome and other syndromes are wiretapped.\n\n\\subsection{Information leakage for Shannon's cipher system}\n\nThis subsection details a novel masking method to minimize the key length and thereafter builds this multiple correlated source model on Shannon's cipher system. \n\nThe new masking method involves masking the conditional entropy portion with a mutual information portion. By masking, certain information is hidden and it becomes more difficult to obtain the information that has been masked. Masking can typically be done using random numbers; however, we eliminate the need for random number keys and instead use common information to mask with. 
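As a small numerical illustration of Remark 2, suppose the wiretapped syndrome $T_{S_j}$ reveals $S_j$ completely (an extreme case); the leakage about $S_i$ is then $I(S_i;S_j)$. The joint distribution below is hypothetical and chosen only for illustration:

```python
from math import log2

# Hypothetical joint pmf of two correlated binary sources S_i and S_j.
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def entropy(pmf):
    return -sum(q * log2(q) for q in pmf.values() if q > 0)

# Marginal distributions of S_i and S_j.
p_i = {a: sum(q for (x, _), q in p.items() if x == a) for a in (0, 1)}
p_j = {b: sum(q for (_, y), q in p.items() if y == b) for b in (0, 1)}

# Leakage when T_{S_j} reveals S_j itself:
# I(S_i; S_j) = H(S_i) + H(S_j) - H(S_i, S_j)
leak = entropy(p_i) + entropy(p_j) - entropy(p)
print(f"leakage = {leak:.4f} bits")  # about 0.2781 bits
```

A real syndrome $T_{S_j}$ is a many-to-one function of $S_j$'s sequence, so its leakage $I(S_i;T_{S_j})$ is at most the value computed here, while the common information sets the floor noted in Remark 1.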
\n\nWe make the following assumptions:\n\\begin{itemize}\n\\item\nThe capacity of each link cannot be exhausted using this method.\n\\item\nA common information portion is used to mask certain private information and can be used to mask multiple times. Further, private information that needs to be masked always exists in this method.\n\\end{itemize}\nThe allocation of common information for transmission is done on an arbitrary basis. The objective of this subsection is to minimize the key lengths while achieving perfect secrecy. \n\nThe private information for source $S_i$ is given by $H(S_i|{ {\\bf S}_{\\backslash S_i}})$ according to \\eqref{entropy}, which is called $W_{S_i}$, and the common information associated with source $S_i$ is given by $W_{CS_i}$. First, choose a common information portion with which to mask. Then we take a part of $W_{S_i}$, i.e. ${W_{S_i}}^{'}$, that has entropy equal to $H(W_{CS_i})$, and mask as follows: \n\\begin{eqnarray}\nW_{S_i}^{'} \\oplus W_{CS_i}\n\\label{masking}\n\\end{eqnarray}\n\nWhen the two sequences are XORed, the result is a single sequence that may look different from both originals. We then transmit the masked portion instead of the $W_{S_i}^{'}$ portion when transmitting $W_{S_i}$, thus providing added security. \nAn interesting consequence is that a specific mutual information portion may mask conditional entropy portions in many different ways. For example, when considering three sources as before, it is possible to mask the private information for $X$, $Y$ and $Z$ with the common portion $I(X;Y;Z)$. If $Y$ is secure then this common information can be transmitted along $Y$'s channel, ensuring the information is kept secure. The ability to mask using the common information is a unique and interesting feature of this new model for multiple correlated sources. The underlying principle is that the secure link should transmit more common information after transmitting its private information. 
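The masking step in \\eqref{masking} is an XOR (one-time-pad style) operation. A minimal sketch with byte strings standing in for the codeword portions follows; all names and values are illustrative assumptions, not the paper's exact construction:

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# W'_{S_i}: the part of the private portion chosen to match the
# entropy (here, length) of the common-information portion W_{CS_i}.
W_private = b"\x5a\x33\xc4"
W_common = secrets.token_bytes(3)   # W_{CS_i}, assumed uniform

masked = xor(W_private, W_common)   # transmitted in place of W'_{S_i}

# A legitimate decoder holding W_{CS_i} recovers the private portion:
recovered = xor(masked, W_common)
assert recovered == W_private
```

When $W_{CS_i}$ is uniform and unknown to the wiretapper, the masked sequence carries no information about $W_{S_i}^{'}$, so the common information plays the role of the key without any additional random number.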
\n\nWe find that the lower bound for the channel rate when the masking approach is used is given by:\n\\begin{eqnarray}\nR_i^M \\geq H(S_1, \\ldots, S_n) - \\displaystyle\\sum_{t=2}^{n} \\displaystyle\\sum_{\\text{all possible ${\\bf S}_t$}} (t -1) I({\\bf S}_t|{\\bf S}_t^c)\n\\end{eqnarray}\nwhere $R_i^M$ is the $i$th channel rate when masking is used. \n\nThe method works in theory but raises a practical concern: security may be compromised when common information is sent across non-secure links. We see that if the $W_{CS_i}$ component used for masking has been compromised then the private portion it has masked will also be compromised. A method to overcome this involves using two common information parts for masking. Equation \\eqref{masking} representing the masking would become:\n\\begin{eqnarray}\nW_{S_i}^{'} \\oplus W_{CS_i} \\oplus W_{CS_j}\n\\label{masking2}\n\\end{eqnarray}\n\nwhere $i \\neq j$ and both $W_{CS_i}$ and $W_{CS_j}$ are common information associated with source $S_i$. This way, if only $W_{CS_j}$ is compromised then $W_{S_i}$ is not compromised, as it is still protected by $W_{CS_i}$. Here, combinations of common information are used to increase the security. The advantage of \\eqref{masking2} is that keys may be reused because common information may be shared by more than one source. Further, the method will not result in an increase in key length. \n\n\nThe Shannon's cipher system for this multiple source model is now presented in order to determine the rate regions for perfect secrecy. The multiple sources each have their own encoder and there is a universal decoder. 
Each source has an encoder represented by:\n\\begin{eqnarray}\nE_i : \\mathcal{S} \\times I_{M_{ki}} & \\rightarrow & I_{W_{S_i}} \\times I_{W_{CS_i}}, \\quad I_{W_{S_i}} = \\{0, 1, \\ldots, W_{S_i} - 1\\}, \\nonumber \n\\\\ && I_{W_{CS_i}} = \\{0, 1, \\ldots, W_{CS_i} - 1\\}\n\\label{ms_xencoder_fcn}\n\\end{eqnarray}\nwhere $I_{W_{S_i}}$ is the alphabet representing the private portion for source $S_i$ and $I_{W_{CS_i}}$ is the alphabet representing the common information for source $S_i$.\nThe decoder at the receiver is defined as:\n\\begin{eqnarray}\nD : (I_{W_{S_i}}, I_{W_{CS_i}}) & \\times & I_{Mk} \\rightarrow \\mathcal{S}\n\\end{eqnarray}\n\nThe encoder and decoder mappings are below:\n\\begin{eqnarray}\nW_i = F_{E_i} (S_i, W_{ki})\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\widehat{S_i} = F_{D_i} (W_i, W_{ki}, W_{\\{{kp}\\}})\n\\end{eqnarray}\n\nwhere $p = 1, \\ldots, n$, $p \\neq i$ and $W_{\\{{kp}\\}}$ represents the set of common information required to find $S_i$, and $\\widehat{S_i}$ is the decoded output. \n\nThe following conditions should be satisfied for the general cases:\n\n\\begin{eqnarray}\n\\frac{1}{K}\\log W_{S_i} \\le R_i +\\epsilon\n\\label{ms_cond1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K}\\log M_{ki} \\le R_{ki} +\\epsilon\n\\label{ms_cond2}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\text {Pr} \\{\\widehat{S_i} \\neq S_i\\} \\le \\epsilon\n\\label{ms_cond3}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(S_i|W_i) \\le h_i - \\epsilon\n\\label{ms_cond7}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(S_j|W_i) \\le h_j - \\epsilon\n\\label{ms_cond8}\n\\end{eqnarray}\n\nwhere $R_i$ is the rate of source $S_i$'s channel and $R_{k_{i}}$ is the key rate of $S_i$. The security levels for source $S_i$ and any other source $S_j$ are measured by the uncertainties $h_{i}$ and $h_j$ respectively. 
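To make the mappings above concrete, the toy sketch below plays the role of $F_{E_i}$ and $F_{D}$ for a single source: the encoder sends only a syndrome, and the decoder resolves it against correlated side information. The parity-check matrix, sequences and correlation are invented for illustration and the keys are omitted:

```python
import itertools

# Assumed parity-check matrix of a length-6 binary code (illustrative only).
H_m = ((1, 0, 0, 1, 1, 0),
       (0, 1, 0, 1, 0, 1),
       (0, 0, 1, 0, 1, 1))

def encode(s):
    """Syndrome T_{S_i} = H_m * s over GF(2); shorter than s itself."""
    return tuple(sum(h * x for h, x in zip(row, s)) % 2 for row in H_m)

def decode(T, side_info):
    """Pick the sequence with syndrome T closest to the side information."""
    best = None
    for cand in itertools.product((0, 1), repeat=6):
        if encode(cand) == T:
            d = sum(c != y for c, y in zip(cand, side_info))
            if best is None or d < best[0]:
                best = (d, cand)
    return best[1]

s = (1, 0, 1, 1, 0, 0)   # source sequence from S_i
y = (1, 0, 1, 1, 0, 1)   # correlated side information (one bit differs)
T = encode(s)            # only 3 bits are transmitted instead of 6
assert decode(T, y) == s
```

This is why, in the model above, components from the other links are needed to determine one source's sequence: the syndrome alone does not identify $S_i$.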
\n\\\\\\\\\nThe general cases considered are:\n\\\\ \\textit{Case 1:} When $T_{S_i}$ is leaked and $S_i$ needs to be kept secret.\n\\\\ \\textit{Case 2:} When $T_{S_i}$ is leaked and $S_i$ and\/or $S_j$ needs to be kept secret.\n\\\\\\\\\nThe admissible rate region for each case is defined as follows:\n\\\\ \\textit{Definition 1a:} ($R_i$, $R_{ki}$, $h_{i}$) is admissible for case 1 if there exists a code ($F_{E_{i}}$, $F_{D}$) such that \\eqref{ms_cond1} - \\eqref{ms_cond7} hold for any $\\epsilon \\rightarrow 0$ and sufficiently large $K$.\n\\\\ \\textit{Definition 1b:} ($R_i$, $R_{ki}$, $R_j$, $R_{kj}$, $h_{j}$) is admissible for case 2 if there exists a code ($F_{E_{i}}$, $F_{D}$) such that \\eqref{ms_cond1} - \\eqref{ms_cond3} and \\eqref{ms_cond8} hold for any $\\epsilon \\rightarrow 0$ and sufficiently large $K$.\n\\\\ \\textit{Definition 2:} The admissible rate regions are defined as:\n\n\\begin{eqnarray}\n\\mathcal{R}(h_{i}) = \\{(R_i, R_{ki}):\t\t\t\\nonumber\n\\\\(R_i, R_{ki}, h_{i} ) \\text{ is admissible for case 1} \\}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\mathcal{R}(h_{i}, h_{j}) = \\{(R_i, R_{ki}, R_j, R_{kj}):\t\t\t\\nonumber\n\\\\(R_i, R_{ki}, R_j, R_{kj}, h_{j} ) \\text{ is admissible for case 2} \\}\n\\end{eqnarray}\n\nThe theorems developed for these regions follow:\n\n\\textit{Theorem 6:} For $0 \\le h_{i} \\le I(S_i;S_n|S_n^c)$,\n\\begin{eqnarray}\n&& \\mathcal{R}_1(h_{i}) = \\{(R_i, R_{ki}): \t\t\\nonumber\n\\\\ && R_i \\geq H(S_i), \t\t\t\t\\nonumber\n\\\\ && R_{k_i} \\geq I({\\bf S}_t|{\\bf S}_t^c) \\}\t\t\t\n\\label{theorem6}\n\\end{eqnarray}\n\n\\textit{Theorem 7:} For $0 \\le h_{j} \\le H(S_i, S_j)$,\n\\begin{eqnarray}\n&& \\mathcal{R}_2(h_{i}, h_{j}) = \\{(R_i, R_{ki}, R_j, R_{kj}): \t\t\\nonumber\n\\\\ && R_i \\geq H(S_i, S_j), R_j \\geq H(S_i, S_j), \t\t\t\t\t\t\t\\nonumber\n\\\\ && R_{ki} \\geq I(S_i;S_j) \\text{ and } R_{kj} \\geq I(S_i;S_j) \\}\t\t\t\n\\label{theorem7}\n\\end{eqnarray}\n\nThe proofs for these theorems follow. 
The source information components are first identified. Assume the private portions of sources $i$ and $j$ are given by $W_{Pi}$ and $W_{Pj}$ respectively. \n\n\\begin{proof}[Theorem 6 proof]\nHere, $R_i \\geq H(S_i)$, $R_{ki} \\geq I({\\bf S}_t|{\\bf S}_t^c)$. For the case where $h_i > I({\\bf S}_t|{\\bf S}_t^c)$, the definitions for $W_{CS_i}$, $W_i$ and $W_{ki}$ follow:\n\n\\begin{eqnarray}\nW_{CS_i} = 2^{K I({\\bf S}_t|{\\bf S}_t^c)}\n\\label{theorem6_proof_41}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_i = (W_{Pi}, W_{kCi})\n\\label{theorem6_proof_43}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{ki} = W_{Ci}\n\\label{theorem6_proof_44}\n\\end{eqnarray}\n\nThe keys and uncertainties are calculated as follows:\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{i} & = & \\frac{1}{K} (\\log W_{S_i} + \\log W_{CS_i}) \\nonumber\n\\\\ & \\le & H(S_i|{\\bf S}_{\\backslash {S_i}}) + \\frac{1}{K} \\log W_{CS_i} + \\epsilon_0 \t\\nonumber\n\\\\ & = & H(S_i|{\\bf S}_{\\backslash {S_i}}) + I({\\bf S}_t|{\\bf S}_t^c) + \\epsilon_0 \\nonumber \n\\\\ & = & H(S_i) + \\epsilon_0\t\t\t\\nonumber\n\\\\ & \\le & R_i + \\epsilon_0\n\\label{theorem6_proof_45.0}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n&& \\frac{1}{K} \\log M_{ki} \t\t\\nonumber\n\\\\ & = & \\frac{1}{K} \\log W_{CS_i}\t\\nonumber\n\\\\ & = & I({\\bf S}_t|{\\bf S}_t^c) \\nonumber\n\\\\ & \\le & R_{ki} + \\epsilon_0\n\\label{theorem6_proof_45.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n&& \\frac{1}{K} H(S_i|W_{Pi}, W_{Ci}) \t\t\\nonumber\n\\\\ & \\geq & \\frac{1}{K} H(S_i) - \\epsilon_0^{'} \t\t\\nonumber\n\\\\ & = & h_i - \\epsilon_0^{'}\t\t\n\\label{theorem6_proof_47}\n\\end{eqnarray}\n\nFrom \\eqref{theorem6_proof_45.0} - \\eqref{theorem6_proof_47}, ($R_i$, $R_{ki}$, $h_i$) is admissible for $h_i > I({\\bf S}_t|{\\bf S}_t^c)$. 
We now consider the case where $h_i \\le I({\\bf S}_t|{\\bf S}_t^c)$ and define $W_{CS_i}$, $W_i$ and $W_{ki}$ as follows:\n\n\n\\begin{eqnarray}\nW_{CS_i} = 2^{K I({\\bf S}_t|{\\bf S}_t^c)}\n\\label{theorem6_proof_49}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{i} = (W_{Pi}, W_{kCi})\n\\label{theorem6_proof_53}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{ki} = W_{Ci}\n\\label{theorem6_proof_54}\n\\end{eqnarray}\n\nThe keys and uncertainties are calculated as follows:\n\\begin{eqnarray}\n&& \\frac{1}{K} \\log M_{ki} \t\t\\nonumber\n\\\\ & = & \\frac{1}{K} \\log W_{CS_i}\t\\nonumber\n\\\\ & = & I({\\bf S}_t|{\\bf S}_t^c) \\nonumber\n\\\\ & \\le & R_{ki} + \\epsilon_0\n\\label{theorem6_proof_55}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n&& \\frac{1}{K} H(S_i|W_{Pi}, W_{Ci}) \t\t\\nonumber\n\\\\ & \\geq & \\frac{1}{K} H(S_i|W_{Ci}) + I({\\bf S}_t|{\\bf S}_t^c) - \\epsilon_0^{'} \t\t\\nonumber\n\\\\ & = & H(S_i) - \\epsilon_0^{'} \\nonumber\n\\\\ & = & h_i - \\epsilon_0^{'}\n\\label{theorem6_proof_57}\n\\end{eqnarray}\n\nFrom \\eqref{theorem6_proof_55} - \\eqref{theorem6_proof_57} it is seen that ($R_i$, $R_{ki}$, $h_i$) is admissible for $h_i \\le I({\\bf S}_t|{\\bf S}_t^c)$.\n\\end{proof}\n\nTheorem 7 is proven in a similar manner. \n\n\\begin{proof}[Theorem 7 proof]\nHere, $R_i \\geq H(S_i, S_j)$, $R_j \\geq H(S_i, S_j)$, $R_{ki} \\geq I(S_i;S_j)$ and $R_{kj} \\geq I(S_i;S_j)$. 
For the case where $h_j \\le H(S_i;S_j)$, the definitions for $W_{CS_i}$, $M_{Cj}$ $W_i$, $W_{ki}$, $W_j$ and $W_{kj}$ follow:\n\n\\begin{eqnarray}\nW_{CS_i} = 2^{K I({\\bf S}_t|{\\bf S}_t^c)}\n\\label{theorem7_proof_40.9}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nM_{Cj} = 2^{K I({\\bf S}_t|{\\bf S}_t^c)}\n\\label{theorem7_proof_40.10}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_i = (W_{Pi}, W_{kCi})\n\\label{theorem7_proof_41}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{ki} = W_{Ci}\n\\label{theorem7_proof_42}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_j = (W_{Pj}, W_{kCj})\n\\label{theorem7_proof_43}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kj} = W_{Cj}\n\\label{theorem7_proof_44}\n\\end{eqnarray}\n\nThe keys and uncertainties are calculated as follows:\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{i} & = & \\frac{1}{K} (\\log W_{S_i} + \\log W_{CS_i}) \\nonumber\n\\\\ & \\le & \\frac{1}{K} H(S_i|{\\bf S}_{\\backslash S_i}) + \\frac{1}{K} W_{CS_i} + \\epsilon_0 \t\\nonumber\n\\\\ & = & \\frac{1}{K} H(S_i|{\\bf S}_{\\backslash S_i}) + I({\\bf S}_t|{\\bf S}_t^c) + \\epsilon_0 \t\n\\\\ & = & \\frac{1}{K} H(S_i) + \\epsilon_0\t\t\t\\nonumber\n\\\\ & \\le & R_i + \\epsilon_0\n\\label{theorem7_proof_45.0}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{j} & = & \\frac{1}{K} (\\log M_{Pj} + \\log M_{Cj}) \\nonumber\n\\\\ & \\le & \\frac{1}{K} H(S_j|{\\bf S}_{\\backslash S_j}) + \\frac{1}{K} M_{Cj} + \\epsilon_0 \t\\nonumber\n\\\\ & = & \\frac{1}{K} H(S_j|{\\bf S}_{\\backslash S_j}) + I(S_j;{\\bf S}_t|{\\bf S}_t^c) + \\epsilon_0 \t\n\\\\ & = & \\frac{1}{K} H(S_j) + \\epsilon_0\t\t\t\\nonumber\n\\\\ & \\le & R_j + \\epsilon_0\n\\label{theorem7_proof_45.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n&& \\frac{1}{K} \\log M_{ki} \t\t\\nonumber\n\\\\ & = & \\frac{1}{K} \\log W_{CS_i}\t\\nonumber\n\\\\ & = & I({\\bf S}_t|{\\bf S}_t^c)\n\\\\ & \\le & R_{ki} + \\epsilon_0\n\\label{theorem7_proof_46}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n&& \\frac{1}{K} \\log M_{kj} 
\t\t\\nonumber\n\\\\ & = & \\frac{1}{K} \\log M_{Cj}\t\\nonumber\n\\\\ & = & I(S_i; S_j) + \\epsilon_0\n\\\\ & \\le & R_{ki} + \\epsilon_0\n\\label{theorem7_proof_46.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n&& \\frac{1}{K} H(S_j|W_{Pi}, W_{Ci}) \t\t\\nonumber\n\\\\ & \\geq & H(S_j) - H(S_i) - \\epsilon_0^{'} \t\t\\nonumber\n\\\\ & = & H(S_i, S_j) - H(S_i) - \\epsilon_0^{'} \\nonumber\n\\\\ & \\geq & h_j - H(S_i)\n\\\\ & = & h_j - h_i - \\epsilon_0^{'}\t\t\n\\label{theorem7_proof_47}\n\\end{eqnarray}\n\nFrom \\eqref{theorem7_proof_45.0} - \\eqref{theorem7_proof_47}, ($R_i$, $R_{ki}$, $R_j$, $R_{kj}$, $h_j$) is admissible for $h_j \\le H(S_i, S_j)$. We now consider the case where $h_j > H(S_i,S_j)$, and define $W_{CS_i}$, $W_i$ and $W_{ki}$ as follows:\n\n\\begin{eqnarray}\nW_{CS_i} = 2^{K I({\\bf S}_t|{\\bf S}_t^c)}\n\\label{theorem7_proof_40.91}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nM_{Cj} = 2^{K I({\\bf S}_t|{\\bf S}_t^c)}\n\\label{theorem7_proof_40.11}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_i = (W_{Pi}, W_{kCi})\n\\label{theorem7_proof_48}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{ki} = W_{Ci}\n\\label{theorem7_proof_49}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_j = (W_{Pj}, W_{kCj})\n\\label{theorem7_proof_50}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kj} = W_{Cj}\n\\label{theorem7_proof_51}\n\\end{eqnarray}\n\nThe keys and uncertainties are calculated as follows:\n\n\\begin{eqnarray}\n&& \\frac{1}{K} \\log M_{ki} \t\t\\nonumber\n\\\\ & = & \\frac{1}{K} \\log W_{CS_i}\t\\nonumber\n\\\\ & = & I({\\bf S}_t|{\\bf S}_t^c)\n\\\\ & \\le & R_{ki} + \\epsilon_0\n\\label{theorem7_proof_52}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n&& \\frac{1}{K} \\log M_{kj} \t\t\\nonumber\n\\\\ & \\le & I(S_i; S_j) + \\epsilon_0\n\\\\ & \\le & R_{ki} + \\epsilon_0\n\\label{theorem7_proof_53}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n&& \\frac{1}{K} H(S_j|W_{Pi}, W_{Ci}) \t\t\\nonumber\n\\\\ & \\le & H(S_j) - H(S_i) + \\epsilon_0^{'} \t\t\\nonumber\n\\\\ & = & H(S_i, S_j) - H(S_i) + 
\\epsilon_0^{'} \\nonumber\n\\\\ & \\le & h_j - H(S_i)\n\\\\ & = & h_j - h_i - \\epsilon_0^{'}\t\t\n\\label{theorem7_proof_54}\n\\end{eqnarray}\n\nFrom \\eqref{theorem7_proof_52} - \\eqref{theorem7_proof_54} it is seen that ($R_i$, $R_{ki}$, $R_j$, $R_{kj}$, $h_j$) is admissible for $h_j > H(S_i,S_j)$.\n\\end{proof}\n\nThese theorems demonstrate the rates required for perfect secrecy. The goal of the Shannon cipher system aspect was to reduce the key lengths. The masking method explained in this section is able to use common information as keys and therefore minimize the key rates for the general cases. \n\nThe information leakage described in the Slepian-Wolf aspect indicates the common information that should be given added protection to ensure that even less information will be leaked. \nThe new extended model presented here also incorporates a multiple correlated sources approach using Shannon's cipher system, which is more practical than considering only two sources. \n\n\n\\section{Comparison to other models}\nThe two correlated sources model across a channel with an eavesdropper is a more generalised version of Yamamoto's~\\cite{shannon1_yamamoto} model. If we were to combine the links into one link, we would have the same situation as in Yamamoto's model~\\cite{shannon1_yamamoto}. From Section VI it is evident that the model can be implemented for multiple sources with Shannon's cipher system. Due to the unique scenario incorporating multiple sources and multiple links, the new model is more secure, as private information and common information from other links are required for decoding.\n\nFurther, information at the sources may be more secure in the new model because if one source is compromised then only that source's information is known. In Yamamoto's~\\cite{shannon1_yamamoto} method both sources' information is contained at one station, and when that station is compromised, information about both sources is known. 
The information transmitted along the channels (i.e. the syndromes) does not have a fixed length, in contrast to Yamamoto's~\\cite{shannon1_yamamoto} method. Here, the syndrome length may vary depending on the encoding procedure and the nature of the Slepian-Wolf codes, which is another feature of this generalised model. \n\nThe generalised model also has the advantage that varying amounts of the common information $V_{CX}$ and $V_{CY}$ (in the case of two sources) may be transmitted depending on the security of the transmission link and\/or sources. For example, for two correlated sources, if $Y$'s channel is not secure we can specify that more of the common information is transmitted from $X$. In this way we are able to make better use of each transmission link's security. In Yamamoto's~\\cite{shannon1_yamamoto} method the common information was transmitted as one portion, $V_{C}$.\n\nIn this model, information from more than one link is required in order to determine the information for one source. This gives rise to added security, as even if one link is wiretapped it is not possible to determine the contents of a particular source. This is attributed to the fact that this model has separate common information portions, which is different from Yamamoto's model. \n\nAnother major feature is that private information can be hidden using common information. Here, common information produced by a source may be used to mask its private codeword, thus saving on key length. The key allocation is specified by the general rules presented in Section VI. The multiple correlated sources model presents a combination masking scheme in which more than one piece of common information is used to protect a private codeword, which is a practical approach. This is an added feature developed in order to protect the system. This approach has not been considered in the other models mentioned in this section. 
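The masking idea described above, in which common information is reused as key material so that a shorter dedicated key suffices, can be sketched with a toy one-time-pad example. The snippet below is purely illustrative; the byte lengths and function names are assumptions for the sketch, not part of the model.

```python
import secrets

def xor_mask(data: bytes, pad: bytes) -> bytes:
    # One-time-pad style masking; applying it twice recovers the input.
    return bytes(d ^ p for d, p in zip(data, pad))

def mask_private(private: bytes, common: bytes, key: bytes) -> bytes:
    # Key material = common information reused as key, plus a short dedicated
    # key covering only the remaining len(private) - len(common) bytes.
    pad = common + key
    assert len(pad) == len(private), "pad must cover the private codeword"
    return xor_mask(private, pad)

private = secrets.token_bytes(8)  # private codeword (illustrative length)
common = secrets.token_bytes(5)   # common information shared with the decoder
key = secrets.token_bytes(3)      # dedicated key: 3 fresh bytes instead of 8

cipher = mask_private(private, common, key)
recovered = mask_private(cipher, common, key)  # decoder holds common + key
assert recovered == private
```

A wiretapper without the common information faces a full-length pad, while the legitimate decoder, which already shares the common information, needs only the short dedicated key; this mirrors the key-rate saving of the masking method.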
\n \nThe work by Yang \\textit{et al.}~\\cite{feedback_yang} uses the concept of side information to assist the decoder in determining the transmitted message. The side information may be viewed as a source, and that work relates to this one when the side information is treated as correlated information. Similar work with side information that incorporates wiretappers, by Villard and Piantanida~\\cite{pablo_secure_multiterminal} and Villard \\textit{et al.}~\\cite{pablo_secure_transmission_receivers}, may be generalised in the sense that side information can be considered to be a source; however, the new model is distinguishable in that the syndromes, which are independent of one another, are transmitted across an error-free channel. Further, to the author's knowledge, Shannon's cipher system has not been incorporated into the models of Villard and Piantanida~\\cite{pablo_secure_multiterminal} and Villard \\textit{et al.}~\\cite{pablo_secure_transmission_receivers}. \n\n\n\n\\section{Future work}\nThere is considerable room to extend this work. It would be interesting to consider the case where the channel capacity is constrained (under the assumptions in Section VI the channel capacity is sufficient at all times). In the new model, a channel is either protected by keys or not; a more realistic scenario, in which the channels have varying security levels, is an avenue for future work. Another aspect for expansion is to investigate the allocation of common information as keys so as to minimize the additional keys required when links have varying security levels and limited capacity. \n\n\n\n\\section{Conclusion}\nThe information leakage for two correlated sources across a channel with an eavesdropper was initially considered. Knowing which components contribute most to information leakage aids in keeping the system secure, as these components can be given added protection. 
The information leakage for the two correlated source model was quantified and proven. Shannon's cipher system was also incorporated for this model, and channel and key rates to achieve perfect secrecy have been provided. The two correlated sources model has been extended for the network scenario where we consider multiple sources transmitting information across multiple links. The information leakage for this extended model is detailed. The channel and key rates are also considered for the multiple correlated source model when Shannon's cipher system is implemented. A masking method is further presented to minimize key lengths, and a combination masking method is presented to address its practical shortcoming.\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe $n$-vector model, introduced by Stanley in 1968 \\cite{S68}, is described by the Hamiltonian $${\\mathcal H}(d,n) = -J\\sum_{\\langle i,j \\rangle} {\\bf s_i }\\cdot{\\bf s_j},$$\nwhere $d$ denotes the dimensionality of the lattice, and ${\\bf s_i}$\nis an $n$-dimensional vector of magnitude $\\sqrt{n}$. When $n=1$ this Hamiltonian\ndescribes the Ising model; when $n=2$ it describes the classical\nXY model; and in the limit $n \\to 0,$ one recovers the self-avoiding walk (SAW) model, as first pointed out by de Gennes \\cite{dG72}. The $n$-vector model has been shown to be equivalent to a loop model with a weight $n$ assigned to each closed loop \\cite{DMNS81} and weight $x$ to each edge of the loop. The two-dimensional O($n$) model on a honeycomb lattice, which is the focus of this paper, is a particular case of this equivalence. The partition function of the loop model can be written as\n\\begin{equation}\nZ(x)=\\sum_{G}x^{l(G)}n^{c(G)},\n\\end{equation}\nwhere $G$ is a configuration of loops, $l(G)$ is the number of loop segments and $c(G)$ is the number of closed loops. 
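As a minimal illustration of this partition function, consider a single hexagonal face (a toy domain chosen only for this example). The admissible configurations are the edge subsets in which every vertex has degree $0$ or $2$; here these are just the empty configuration and the single loop of six segments around the face, giving $Z(x) = 1 + n x^6$. The brute-force sketch below verifies this; the helper names are assumptions for illustration.

```python
from itertools import combinations

# Edges of a single hexagonal face: vertices 0..5 joined in a cycle.
edges = [(i, (i + 1) % 6) for i in range(6)]

def loop_configs(edges, nverts=6):
    """Return (l(G), c(G)) for every edge subset G in which each vertex
    has degree 0 or 2, i.e. G is a disjoint union of closed loops."""
    configs = []
    for r in range(len(edges) + 1):
        for sub in combinations(edges, r):
            deg = [0] * nverts
            for u, v in sub:
                deg[u] += 1
                deg[v] += 1
            if all(d in (0, 2) for d in deg):
                # Each component of a 2-regular subgraph is one closed loop;
                # count components with a simple union-find.
                parent = list(range(nverts))
                def find(a):
                    while parent[a] != a:
                        parent[a] = parent[parent[a]]
                        a = parent[a]
                    return a
                for u, v in sub:
                    parent[find(u)] = find(v)
                loops = len({find(u) for u, _ in sub})
                configs.append((len(sub), loops))
    return configs

# On one hexagon only the empty configuration (x^0 n^0) and the single
# loop (x^6 n^1) survive, i.e. Z(x) = 1 + n x^6.
assert sorted(loop_configs(edges)) == [(0, 0), (6, 1)]
```

The same enumeration applies to larger patches, with the configuration count growing as the number of even-degree edge subsets.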
The parameter $x$ is defined by the high-temperature expansion of the $O(n)$ model partition function and is related to the coupling $J$, the temperature $T$ and Boltzmann's constant $k$ by \n\\begin{equation}\n{\\rm e}^{ \\frac{J}{kT} {\\bf s_i }\\cdot{\\bf s_j} } \\approx 1+ x{\\bf s_i }\\cdot{\\bf s_j}.\n\\end{equation}\n\nIn 1982 Nienhuis \\cite{N82} showed that, for $n \\in [-2,2],$ the model on the honeycomb lattice could be mapped onto a solid-on-solid model, from which he was able to derive the critical points and critical exponents, subject to some plausible assumptions. These results agreed with the known exponents and critical point for the Ising model, and predicted exact values for those models corresponding to other values of the spin dimensionality $n.$ In particular, for $n=0$ the critical point for the honeycomb lattice SAW model was predicted to be $x_{\\rm c}=1\/\\sqrt{2+\\sqrt{2}},$ a result finally proved 28 years later by Duminil-Copin and Smirnov \\cite{DC-S10}. \n\nThe proof of Duminil-Copin and Smirnov involves the use of a non-local \\emph{parafermionic observable} $F(z)$, where $z$ is the (complex) coordinate of the plane. This observable can be thought of as a complex function with the ``parafermionic'' property $F({\\rm e}^{2\\pi{\\rm i}} z) = {\\rm e}^{-2\\pi{\\rm i} \\sigma} F(z)$, where the real-valued parameter $\\sigma$ is called the \\emph{spin}. For special values of $\\sigma$, this observable satisfies a discrete analogue of (one half of) the Cauchy-Riemann equations. This \\emph{discrete holomorphic} or \\emph{preholomorphic} property allowed Smirnov and Duminil-Copin to derive an important identity for self-avoiding walks on the honeycomb lattice and, consequently, the Nienhuis prediction for $x_{\\rm c}$.\n\nSmirnov~\\cite{Smirnov10} has also derived such an identity for the general honeycomb $O(n)$ model with $n \\in [-2,2]$. 
This identity provides an alternative way of predicting the value of the critical point $x_{\\rm c}(n)=1\/\\sqrt{2+\\sqrt{2-n}}$ as conjectured by Nienhuis for values of $n$ other than $n=0$.\n\nThis paper contains two new results. We first present an off-critical deformation of the discrete Cauchy-Riemann equations, by relaxing the preholomorphicity condition, which allows us to consider critical exponents near criticality. Indeed, this deformation gives rise to an identity between bulk and boundary generating functions, and we utilize this identity in Section~\\ref{ssec:wedge} to determine, based on some assumptions, the asymptotic form of the winding angle distribution function for SAWs on the half-plane and in a wedge in terms of boundary critical exponents. It is important to note that up to this point the only assumptions we make are the existence of the critical exponents and the value of the critical point. We will not rely on Coulomb gas techniques or conformal invariance. We find perfect agreement with the conjectured winding angle distribution function on the cylinder predicted by Duplantier and Saleur \\cite{DS88} in terms of bulk critical exponents. Finally, we conjecture the values of the wedge critical exponents as a function of the wedge angle for $n\\in [-2, 2)$.\n\n\\section{Off-critical identity for the honeycomb O($n$) model}\n\\subsection{Smirnov's observable on the honeycomb lattice}\nWe briefly review an important result of Smirnov for self-avoiding walks on the honeycomb lattice. \n\nFirstly, a \\emph{mid-edge} is defined to be a point equidistant from two adjacent vertices on a lattice. A \\emph{domain} $\\Omega$ is a simply connected collection of mid-edges on the half-plane honeycomb lattice. The set of vertices of the half-plane honeycomb lattice is denoted $V(\\Omega)$. Those mid-edges of $\\Omega$ which are adjacent to only one vertex in $V(\\Omega)$ form $\\partial\\Omega$. 
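The closed form for $x_{\rm c}(n)$ quoted above, its dense-branch counterpart $1/\sqrt{2-\sqrt{2-n}}$, and the associated values of the spin $\sigma$ can be sanity-checked numerically. The sketch below is illustrative only; it anticipates the parameterisation $n=2\cos\phi$, $\lambda={\rm e}^{-{\rm i}\sigma\pi/3}$ and $j={\rm e}^{2{\rm i}\pi/3}$ used in the lemmas of this section.

```python
import cmath
import math

j = cmath.exp(2j * math.pi / 3)  # direction factor between adjacent mid-edges

for k in range(1, 10):
    phi = k * math.pi / 10       # parameterise n = 2 cos(phi), phi in (0, pi)
    n = 2 * math.cos(phi)
    for sign in (+1, -1):        # +1: dilute branch, -1: dense branch
        sigma = (math.pi + sign * 3 * phi) / (4 * math.pi)
        lam = cmath.exp(-1j * sigma * math.pi / 3)  # weight per left turn
        xc_inv = 2 * math.cos((math.pi - sign * phi) / 4)
        # trigonometric form of 1/x_c matches the nested-radical closed form
        assert abs(xc_inv - math.sqrt(2 + sign * math.sqrt(2 - n))) < 1e-12
        # relation fixing sigma: n + lam^4 conj(j) + conj(lam)^4 j = 0
        assert abs(n + lam**4 * j.conjugate() + lam.conjugate()**4 * j) < 1e-12
        # relation fixing x_c: 1 + x_c lam conj(j) + x_c conj(lam) j = 0
        xc = 1 / xc_inv
        assert abs(1 + xc * lam * j.conjugate() + xc * lam.conjugate() * j) < 1e-12
```

For $n=0$ ($\phi=\pi/2$) the dilute branch reproduces the proven SAW value $x_{\rm c}=1/\sqrt{2+\sqrt{2}}$.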
\n\\begin{figure}\n\\centering\n\\begin{picture}(150,180)\n\\put(0,180){\\includegraphics[scale=0.5, angle=270]{exampleF}}\n\\put(-10,85){$a$}\n\\put(15,85){$v_a$}\n\\put(48,122){$z$}\n\\put(48,143){$v$}\n\\end{picture}\n\\caption{A configuration $\\gamma$ on a finite domain. Point $a$ is a boundary mid-edge, point $z$ is another mid-edge, and $v_a$ and $v$ are corresponding vertices. The contribution of $\\gamma$ to $F(z)$ is ${\\rm e}^{-2{\\rm i}\\sigma\\pi\/3}x^{30} n$.\n} \\label{fig:exampleF}\n\\end{figure}\nLet $\\gamma$ be a configuration on a domain $\\Omega$ comprising a single self-avoiding walk and a number (possibly zero) of closed loops. We denote by $\\ell(\\gamma)$ the number of vertices occupied by $\\gamma$ and $c(\\gamma)$ the number of closed loops. Furthermore let $W(\\gamma)$ be the winding angle of the self-avoiding walk component. Define the following observable.\n\\begin{definition}[Preholomorphic observable]\n\\label{def:Fdef}\n\\mbox{}\\newline\n\\begin{itemize}\n\\item For $a\\in\\partial\\Omega, z\\in\\Omega$, set\n\\begin{equation}\nF(\\Omega,z;x,n,\\sigma):=F(z) = \\sum_{\\gamma: a\\rightarrow z} {\\rm e}^{-{\\rm i} \\sigma W(\\gamma)} x^{\\ell(\\gamma)} n^{c(\\gamma)},\n\\label{eq:Fmidedge}\n\\end{equation}\nwhere the sum is over all configurations $\\gamma$ for which the SAW component runs from the mid-edge $a$ to a mid-edge $z$ (we say that $\\gamma$ ends at $z$).\n\\item Let $F(p;v)$ only include configurations where there is a walk terminating at the mid-edge $p$ adjacent to the vertex $v$ and the other two mid-edges adjacent to $v$ are not occupied by a loop segment. 
For $v_a, v\\in V(\\Omega)$ and $p, q$ and $r$ mid-edges adjacent to $v$, set\n\\begin{align}\n\\overline{F}(V(\\Omega),v;x,n,\\sigma):=\\overline{F}(v) = (p-v)F(p;v)+(q-v)F(q;v)+(r-v)F(r;v).\n\\label{eq:Fvertex}\n\\end{align}\nSince this is a function involving walks that terminate at mid-edges adjacent to the vertex $v$, we consider this as a function defined at the vertices of the lattice.\n\\end{itemize}\nSee Fig.~\\ref{fig:exampleF} for an example.\n\\end{definition}\nSmirnov \\cite{Smirnov10} proves the following: \n\\begin{lemma}[Smirnov]\n\\label{lem:Slemma4}\nFor $n\\in[-2,2]$, set $n=2\\cos\\phi$ with $\\phi\\in[0,\\pi]$. Then for\n\\begin{align}\n\\sigma &= \\frac{\\pi-3\\phi}{4\\pi},\\qquad x^{-1}=x_{\\rm c}^{-1}=2\\cos\\left(\\frac{\\pi+\\phi}{4}\\right) = \\sqrt{2-\\sqrt{2-n}},\\qquad\\text{or}\n\\label{eq:Slemma_dense}\\\\\n\\sigma &= \\frac{\\pi+3\\phi}{4\\pi},\\qquad x^{-1}=x_{\\rm c}^{-1}=2\\cos\\left(\\frac{\\pi-\\phi}{4}\\right) = \\sqrt{2+\\sqrt{2-n}},\n\\label{eq:Slemma_dilute}\n\\end{align}\nthe observable $F$ satisfies the following relation for every vertex $v\\in V(\\Omega)$:\n\\begin{equation} \\label{eqn:localidentity}\n(p-v)F(p) + (q-v)F(q) + (r-v)F(r)=0,\n\\end{equation}\nwhere $p,q,r$ are the mid-edges adjacent to $v$.\n\\end{lemma}\n\nThe first equation in Lemma~\\ref{lem:Slemma4} corresponds to the larger of the two critical values of the step weight $x$ and thus describes the so-called dense regime as configurations with many loops are favoured. The second equation corresponds to the line of critical points separating the dense and dilute phases \\cite{N82}. Eqn. 
(\\ref{eqn:localidentity}) can be interpreted as the vanishing of a discrete contour integral, hence the name preholomorphic observable for $F(z)$.\n\\begin{figure}\n\\centering\n\\begin{picture}(250,180)\n\\put(0,180){\\includegraphics[scale=1.1, angle=270]{identity_groups}}\n\\put(48,8){$p$}\n\\put(65, 20){$r$}\n\\put(58, 0){$q$}\n\\end{picture}\n\\caption{The two types of configurations which end at mid-edges $p,q,r$ adjacent to vertex $v$. The first type, on the left, involves configurations which visit all three mid-edges. On the right are those configurations where the self-avoiding walk visits at most two mid-edges.} \n\\label{fig:identity_groups}\n\\end{figure}\n\\begin{proof}\nConsider a vertex $v$ adjacent to a mid-edge $p$. We refer to the two other adjacent mid-edges as $q$ and $r$; they are labelled as shown in Fig.~\\ref{fig:identity_groups}. For a self-avoiding walk entering the vertex $v$ from the mid-edge $p$ and terminating at either $p$, $q$ or $r$ there are two disjoint sets of configurations to consider, each corresponding to a different external connectivity of the remaining mid-edges $q$ and $r$. These are also shown in Fig.~\\ref{fig:identity_groups}. Since the two sets of configurations are disjoint, we can consider the identity (\\ref{eqn:localidentity}) for each case separately.\nIn the following, we define $\\lambda={\\rm e}^{-{\\rm i}\\sigma\\pi\/3}$ (the weight accrued by a walk for each left turn) and $j={\\rm e}^{2{\\rm i}\\pi\/3}$ (the value of $p-v$ when mid-edge $p$ is to the north-west of its adjacent vertex $v$).\n\\begin{enumerate}\n\\item\nIn the first case, we consider all configurations where mid-edges $p$ and $q$ are connected. There are three ways for this to occur: two with the self-avoiding walk visiting all three mid-edges, and one with a closed loop running through $v$. 
Furthermore, we define $F_L(p;v)$ to be the contribution to $F(p)$ involving only configurations where the walk ends at the point $p$, adjacent to the vertex $v$, and where the two other mid-edges adjacent to $v$ are occupied by a closed loop. Requiring \\eqref{eqn:localidentity} to hold then implies\n\\begin{equation}\n(p-v)F_L(p;v)+(q-v)\\frac{1}{n}\\bar{\\lambda}^4F_L(p;v)+(r-v)\\frac{1}{n}\\lambda^4F_L(p;v)=0.\n\\label{eqn:localidentity2}\n\\end{equation}\nThe factor of $\\frac{1}{n}$ arises from the absence of a closed loop, and the complex phase factors arise from the additional winding: the loop makes an additional four left turns to arrive at $q$ and four right turns to arrive at $r$. Substituting $q-v=j(p-v)$ and $r-v=\\bar{j}(p-v)$ into \\eqref{eqn:localidentity2} we find\n\\begin{equation}\n\\frac{1}{n} (p-v)F_L(p;v)(n+\\bar{\\lambda}^4 j+ \\lambda^4\\bar{j} )=0. \\nonumber\n\\end{equation}\nSince the choice of $v$ and $p$ was arbitrary this implies\n\\begin{equation}\nn+\\lambda^4 \\bar{j} +\\bar{\\lambda}^{4}j=0. \\nonumber\n\\end{equation}\nFinally, using the parameterisation of $n$ in terms of $\\phi$ and solving for $\\sigma$ we obtain\n\\begin{align}\n\\sigma &= \\frac{\\pi-3\\phi}{4\\pi} \\qquad \\text{for } \\lambda^4 \\bar{j} = -{\\rm e}^{{\\rm i}\\phi},\n\\label{eq:sigmadense}\\\\\n\\sigma &= \\frac{\\pi+3\\phi}{4\\pi} \\qquad \\text{for } \\lambda^4 \\bar{j} = -{\\rm e}^{-{\\rm i}\\phi}.\n\\label{eq:sigmadilute}\n\\end{align}\n\\item\nIn the second case only one or two mid-edges are occupied in the configuration and mid-edges $q$ and $r$ are not connected.\nRecalling the definition of $F(p; v)$ in Definition~\\ref{def:Fdef} and Eqn. 
\\eqref{eqn:localidentity} we have\n\\begin{equation}\n(p-v)F(p; v)+(q-v)x\\bar{\\lambda} F(p; v)+(r-v)x\\lambda F(p; v)=0,\n\\end{equation}\nwhich simplifies to\n\\begin{equation}\nF(p; v)(1+x\\bar{\\lambda}j+x\\lambda\\bar{j})=0.\n\\end{equation}\nAgain, since this equation holds for arbitrary $v$, at the critical point we obtain\n\\begin{equation}\n1+x_{\\rm c}\\lambda\\bar{j}+x_{\\rm c}\\bar{\\lambda}j=0,\n\\end{equation}\nwhich leads to\n\\begin{equation}\nx_{\\rm c}^{-1}=2\\cos\\left(\\frac\\pi3(\\sigma-1)\\right).\n\\end{equation}\n\\end{enumerate}\nThe two possible values of $\\sigma$ give rise to the corresponding two values for $x_{\\rm c}$.\n\\end{proof}\n\n\\subsection{Off-critical deformation}\nFirst we evaluate the discrete divergence of the second set of configurations in Fig.~\\ref{fig:identity_groups} for general $x$ below the critical value. This gives\n\\begin{lemma}[Massive preholomorphicity identity]\n\\label{lem:massiveCR}\nFor a given vertex $v$ with mid-edges $p$, $q$ and $r,$ and $x$ below the critical value $x_{\\rm c}$, the parafermionic observable $F(z)$ satisfies\n\\begin{align}\n\\label{eqn:massiveCR}\n(p-v)F(p)+(q-v)F(q)+(r-v)F(r)=(1-\\frac{x}{x_{\\rm c}}) \\overline{F}(v),\n\\end{align}\nwhere $F(z)$ and $\\overline{F}(v)$ are defined in Definition \\ref{def:Fdef}.\n\\end{lemma}\nWe use the term \\emph{massive preholomorphicity} as \\eqref{eqn:massiveCR} is of a similar form to that described in \\cite{MS09} and \\cite{Smirnov10}.\n\\begin{proof}\nAs in Lemma~\\ref{lem:Slemma4}, the proof splits into two parts. The first part, concerning cancellations of contributions coming from walks depicted in the left-hand side of Fig.~\\ref{fig:identity_groups}, is completely analogous, and as before fixes the value of $\\sigma$. The difference is that now we relax the requirement that the contribution to the discrete contour integral from the second set of configurations (shown on the right in Fig.~\\ref{fig:identity_groups}) vanishes. 
Consequently, $x$ is no longer fixed to the critical value.\n\nConsider a vertex $v$ with mid-edges labelled $p$, $q$ and $r$ in a counter-clockwise fashion. There are three disjoint sets of configurations, depending on which of the three mid-edges $p$, $q$ or $r$ the walk enters from. These are shown in Fig.~\\ref{fig:pqrvertex}. Recall that we denote by $F(p;v)$ the contributions to $F(p)$ that only include configurations where there is a walk terminating at the mid-edge $p$ adjacent to the vertex $v$ and where the two other mid-edges adjacent to $v$ are unoccupied. The contribution to the left-hand side of \\eqref{eqn:massiveCR} from walks entering the vertex from $p$ is the sum of three terms:\n\\begin{eqnarray}\n\\label{eqn:venterp}\n&&(p-v)F(p;v)+(q-v)x\\,{\\rm e}^{\\pi\\sigma{\\rm i}\/3}F(p;v)+(r-v)x\\,{\\rm e}^{-\\pi\\sigma{\\rm i}\/3}F(p;v).\n\\end{eqnarray}\nThe first term is simply from walks that enter and terminate at $p$. The second term is from those walks that enter from $p$, make a right turn and terminate at $q$. The final term is from walks that enter at $p$ and make a left turn to terminate at $r$. The last two terms acquire an additional weight $x$ from the extra step and a phase factor from the turn. We can simplify \\eqref{eqn:venterp} to obtain\n\\begin{eqnarray*}\n&=& (p-v)F(p;v)+(p-v)xj\\bar{\\lambda}F(p;v)+(p-v)x\\lambda \\bar{j}F(p;v) \\\\\n&=& (p-v)F(p;v)(1+xj\\bar{\\lambda}+x\\bar{j}\\lambda)\\\\\n&=& (p-v)F(p;v)(1-\\frac{x}{x_{\\rm c}}),\n\\end{eqnarray*}\nwhere in the first line we have used that \n\\[\nq-v=j(p-v),\\qquad r-v=\\bar{j}(p-v),\n\\]\nand in the final line that $j\\bar{\\lambda}+\\bar{j}\\lambda = -x_{\\rm c}^{-1}$, which follows from the definition of $x_{\\rm c}$ in Lemma~\\ref{lem:Slemma4}. For walks entering from mid-edges $q$ and $r$ similar calculations give contributions\n\\[\n(q-v)F(q;v)(1-\\frac{x}{x_{\\rm c}}) \\text{ and }\n(r-v)F(r;v)(1-\\frac{x}{x_{\\rm c}}).\n\\]\nAdding the three contributions together and using Definition~\\ref{def:Fdef} gives the right-hand side of Eqn. \\eqref{eqn:massiveCR}. 
\n\\end{proof}\nUsing the above lemma we can now derive the following relationship between generating functions.\n\\begin{lemma}[Off-critical generating function identity]\n\\label{lem:offcriticalDCS}\n\\begin{align}\n\\sum_{\\gamma : a \\to z\\in\\partial \\Omega\\backslash\\{a\\}} {\\rm e}^{{\\rm i} \\tilde{\\sigma} W(\\gamma)} x^{|\\gamma|}n^{c(\\gamma)} + (1- x\/x_{\\rm c})\\sum_{\\gamma : a \\to z\\in\\Omega\\backslash \\partial \\Omega} {\\rm e}^{{\\rm i} \\tilde{\\sigma} W(\\gamma)} x^{|\\gamma|}n^{c(\\gamma)} = C_{\\Omega}(x),\n\\end{align}\nwhere\n\\begin{equation}\nC_{\\Omega}(x) = \\sum_{\\gamma : a \\to a} x^{|\\gamma|}n^{c(\\gamma)},\n\\end{equation}\nis the usual generating function of the honeycomb lattice O($n$) model, i.e. that of closed-loop configurations without the SAW component, and $\\tilde{\\sigma} = 1-\\sigma$.\n\\end{lemma}\n\\begin{figure}\n\\centering\n\\begin{picture}(400,100)\n\\put(85,0){\\includegraphics[scale=0.75, angle=0]{pqrvertex2.pdf}}\n\\put(122,58){$p$}\n\\put(207,58){$p$}\n\\put(290,57){$p$}\n\\put(110,38){$q$}\n\\put(194,38){$q$}\n\\put(276,38){$q$}\n\\put(135,37){$r$}\n\\put(219,37){$r$}\n\\put(303,37){$r$}\n\\end{picture}\n\\caption{The three possible ways for a walk to enter a given vertex via each of the three mid-edges, $p$, $q$ and $r$. The discrete divergence is evaluated for all three cases in order to derive the off-critical, or `massive' preholomorphicity condition.}\n\\label{fig:pqrvertex}\n\\end{figure}\n\\begin{proof}\nWe begin by summing Eqn. \\eqref{eqn:massiveCR} over all the vertices of the domain $\\Omega$. The contribution to the left-hand side of \\eqref{eqn:massiveCR} from those mid-edges that are in the bulk cancels, since each bulk mid-edge is summed over twice but with opposite signs. 
This leaves only the boundary mid-edges contributing to the left-hand side, which can be written as\n\\begin{equation}\n\\sum_{\\gamma : a \\to z \\in\\partial \\Omega} {\\rm e}^{ -{\\rm i} \\sigma W(\\gamma)} x^{|\\gamma|} n^{c(\\gamma)}{\\rm e}^{{\\rm i}\\phi(\\gamma)}, \\nonumber\n\\end{equation}\nwhere ${\\rm e}^{{\\rm i}\\phi(\\gamma)}$ is the complex number that describes the direction from the boundary vertex to the boundary mid-edge. It is easy to check that this equals ${\\rm e}^{{\\rm i} W(\\gamma)}$ for all walks terminating on boundary mid-edges other than the starting mid-edge $a$, and is $-1$ if the walk terminates at $a$ (which we call a \\emph{zero-length walk}). Using $\\tilde{\\sigma}=1-\\sigma$ we then have\n\\begin{equation}\n\\label{eqn:lhssum}\n\\sum_{\\gamma : a \\to z\\in\\partial \\Omega\\backslash\\{a\\}} {\\rm e}^{{\\rm i} \\tilde{\\sigma} W(\\gamma)} x^{|\\gamma|} n^{c(\\gamma)}-\\sum_{\\gamma : a \\to a} x^{|\\gamma|} n^{c(\\gamma)}.\n\\end{equation}\nThe first term arises from all configurations where the walk terminates on a boundary mid-edge other than the starting mid-edge $a$. The second is from all configurations with a zero-length walk, that is, one that terminates at $a$. Note that we define the winding angle of a zero-length walk to be $0$.\n\n \nAs for the right-hand side of Eqn. \\eqref{eqn:massiveCR}, using Definition \\ref{def:Fdef} this can be written as\n\\begin{equation}\n\\label{eqn:rhssum}\n\\left (1-\\frac{x}{x_{\\rm c}} \\right ) \\sum_{\\gamma : a \\to z \\in\\Omega\\backslash \\partial \\Omega} \\left [ F(z;v_1(z)) (z-v_1(z))+ F(z;v_2(z)) (z-v_2(z)) \\right ].\n\\end{equation}\nThis is because for a given end point $z$, a walk can be heading towards one of two possible vertices, which we call $v_1$ and $v_2$, the labelling being unimportant. This is illustrated in Fig.~\\ref{fig:2vertex}. 
Equating \\eqref{eqn:lhssum} and \\eqref{eqn:rhssum} we have\n\\begin{align}\n&\\sum_{\\gamma : a \\to z\\in \\partial \\Omega\\backslash \\{a\\}}{\\rm e}^{ {\\rm i} \\tilde{\\sigma} W(\\gamma)} x^{|\\gamma|}n^{c(\\gamma)}-\\sum_{\\gamma : a \\to a}x^{|\\gamma|}n^{c(\\gamma)} \\nonumber\\\\\n= &\\left (1-\\frac{x}{x_{\\rm c}} \\right ) \\sum_{\\gamma : a \\to z \\in\\Omega\\backslash \\partial \\Omega} \\left ( F(z;v_1) (z-v_1(z))+ F(z;v_2) (z-v_2(z)) \\right ).\n\\label{eqn:identity1}\n\\end{align}\nUsing $\\sigma=1-\\tilde{\\sigma}$ and the definition of $F(z;v)$ the summation on the right-hand side becomes \n\\[\n{\\rm e}^{ {\\rm i} \\phi } \\left ( \\sum_{\\gamma : a \\to z \\to v_1} x^{|\\gamma|}n^{c(\\gamma)}{\\rm e}^{ {\\rm i} \\tilde{\\sigma} W(\\gamma)}{\\rm e}^{-{\\rm i} W(\\gamma)} - \\sum_{\\gamma : a \\to z \\to v_2} x^{|\\gamma|}n^{c(\\gamma)}{\\rm e}^{{\\rm i} \\tilde{\\sigma} W(\\gamma)} {\\rm e}^{-{\\rm i} W(\\gamma)} \\right ), \n\\]\nwhere ${\\rm e}^{{\\rm i}\\phi}$ is the unit vector from $v_1$ to $z$, which is the negative of the unit vector from $v_2$ to $z$.\n\\begin{figure}\n\\centering\n\\begin{picture}(250,75)\n\\put(50,0){\\includegraphics[scale=0.75, angle=0]{2vertex2.pdf}}\n\\put(37,0){$v_1$}\n\\put(90,35){$v_2$}\n\\put(66, 22){$z$}\n\\put(147,0){$v_1$}\n\\put(200,35){$v_2$}\n\\put(180, 22){$z$}\n\\put(70,0){$e^{i\\phi}$}\n\\end{picture}\n\\caption{A walk terminating at the mid-edge $z$. The mid-edge lies between two vertices $v_1$ and $v_2$, and the unit vector from $v_1$ to $z$ is given by ${\\rm e}^{{\\rm i} \\phi}$. The labelling of the vertices is arbitrary.} \n\\label{fig:2vertex}\n\\end{figure}\n\nA walk that terminates at $z$ and moves in the direction of vertex $v_2$ has winding $W(\\gamma_2)=2\\pi m' + \\phi$, while a walk heading in the direction of the vertex $v_1$ and terminating at $z$ has winding $W(\\gamma_1)=(2m + 1)\\pi + \\phi$ for some $m, m' \\in\\mathbb{Z}$. 
In each case the angle $\\phi$ from the unit vector is cancelled by the $\\phi$ appearing in the winding angle term ${\\rm e}^{-{\\rm i} W(\\gamma)}$, and this leaves\n\\begin{equation}\n\\label{exp:rhsidentity1}\n-\\sum_{\\gamma : a \\to z \\in \\Omega\\backslash \\partial \\Omega} x^{|\\gamma|}n^{c(\\gamma)}{\\rm e}^{ {\\rm i} \\tilde{\\sigma} W(\\gamma)}. \n\\end{equation}\n\nThe left-hand side \\eqref{eqn:lhssum} is a sum over walks to the boundary and zero-length walks; the zero-length-walk term is precisely $C_{\\Omega}(x)$. Substituting expression (\\ref{exp:rhsidentity1}) into Eqn. (\\ref{eqn:identity1}) and rearranging we obtain \n\\begin{equation} \\label{eqn:keyidentity}\n\\sum_{\\gamma : a \\to z\\in \\partial \\Omega\\backslash \\{a \\}} {\\rm e}^{{\\rm i}\\tilde{\\sigma} W(\\gamma)} x^{|\\gamma|}n^{c(\\gamma)} + (1- x\/x_{\\rm c})\\sum_{\\gamma : a \\to z\\in \\Omega\\backslash \\partial \\Omega} {\\rm e}^{{\\rm i}\\tilde{\\sigma} W(\\gamma)} x^{|\\gamma|}n^{c(\\gamma)} = C_{\\Omega}(x).\n\\end{equation}\n\\end{proof}\n\\section{Winding angle}\n\\subsection{Generating function definitions}\nLet us now restrict to a particular trapezoidal domain $\\Omega=S_{T,L}$ of width $T$ and left-height $2L$, see Fig.~\\ref{fig:S_boundary}.\nThe winding angle distribution function can be calculated directly from the off-critical generating function identity (\\ref{eqn:keyidentity}). We remind the reader that $\\gamma$ describes a walk along with a configuration of loops. For convenience we use the terms \\emph{generating function of walks} and \\emph{configuration of walks}, but it should be understood that these include configurations of closed loops as well. 
We define the following generating function\n\\[\nG_{\\theta,\\Omega}(x)=\\sum_{\\substack{\\gamma : a \\to z\\in\\Omega\\backslash \\partial \\Omega\\\\W(\\gamma)=\\theta}}x^{|\\gamma|}n^{c(\\gamma)}.\n\\]\n$G_{\\theta,\\Omega}(x)$ contains only those contributions to $G_\\Omega(x) = \\sum_\\theta G_{\\theta,\\Omega}(x)$ where the walk has winding angle $\\theta$. We also define\n\\[\nH_{\\Omega}(x)=\\sum_{\\gamma : a \\to z\\in\\partial \\Omega\\backslash \\{a \\} }{\\rm e}^{{\\rm i}\\tilde{\\sigma} W(\\gamma)}x^{|\\gamma|}n^{c(\\gamma)},\n\\]\nwhich is the generating function describing walks that terminate on the boundary of the domain, and thus have a winding angle associated to that boundary.\n\\begin{figure}[t]\n\\begin{center}\n\\begin{picture}(140,190)\n\\put(0,180){\\includegraphics[height=120pt, angle=270]{honeycomb}}\n\\put(15,77){$\\alpha$}\n\\put(123,76){$\\beta$}\n\\put(55,143){$\\varepsilon$}\n\\put(55,10){$\\bar{\\varepsilon}$}\n\\put(15,89){$a$}\n\\put(-11,77){$2L$}\n\\put(69,182){$T$}\n\\end{picture}\n\\end{center}\n\\caption{Finite patch $S_{5,1}$ of the hexagonal lattice. The SAW component of a loop configuration starts on the central mid-edge of the left boundary (shown as $a$).}\n\\label{fig:S_boundary}\n\\end{figure}\nUsing this notation (\\ref{eqn:keyidentity}) becomes\n\\[\nH_{\\Omega}(x) + (1- x\/x_c)\\sum_{\\theta} {\\rm e}^{{\\rm i}\\tilde{\\sigma} \\theta}G_{\\theta,\\Omega}(x) = C_{\\Omega}(x).\n\\]\nNow let $H_{\\Omega}^{*}(x)$ and $G_{\\theta,\\Omega}^{*}(x)$ be $H_{\\Omega}(x)\/C_{\\Omega}(x)$ and $G_{\\theta,\\Omega}(x)\/C_{\\Omega}(x)$ respectively. For $x