diff --git "a/data_all_eng_slimpj/shuffled/split2/finalznnb" "b/data_all_eng_slimpj/shuffled/split2/finalznnb" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalznnb" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nRecent observational studies of nearby star-forming regions with the {\\em Herschel Space Observatory} have convincingly shown that stars are born in self-gravitating filaments \n \\citep[e.g., ][]{Andre+2010,Arzoumanian+2011}. \nIn addition, the resultant mass function of star-forming dense cores are now explained by the mass distribution along filaments \\citep{Inutsuka2001,Andre+2014}. \nThis simplifies the question of the initial conditions of star formation, but poses the question of how such filamentary molecular clouds are created in the interstellar medium (ISM) prior to the star formation process. \nRecent high-resolution magneto-hydrodynamical simulations of two-fluid dynamics with cooling\/heating and thermal conduction by \\citet{InoueInutsuka2008,InoueInutsuka2009} have shown that the formation of molecular clouds requires multiple episodes of supersonic compression \\citep[see also][]{Heitsch+2009}. \n\\citet{InoueInutsuka2012} further investigated the formation of molecular clouds in the magnetized ISM \nand revealed the formation of a magnetized molecular cloud by the accretion of HI clouds created through thermal instability. \nSince the mean density of the initial multi-phase HI medium is an order of magnitude larger than the typical warm neutral medium (WNM) density, this formation timescale is shorter than that of molecular cloud formation solely through the accumulation of diffuse WNM \n\\citep[see, e.g.,][for the cases of WNM flows]{KoyamaInutsuka2002,Hennebelle+2008,HeitschHartmann2008,Banerjee+2009,Vazquez-Semadeni+2011}. \nThe resulting timescale of molecular cloud formation of $\\gtrsim$10 Myrs is consistent with the evolutionary timescale of molecular clouds in the LMC \\citep{Kawamura+2009}.\n\n\nWe have done numerical simulations of additional compression of already-formed but low-mass molecular clouds, and found interesting features associated with realistic evolution.\nFigure 1 shows a snapshot of the face-on view of the layer created by compressing a non-uniform molecular cloud with a shock wave propagating at 10 km\/s. The direction of the shock compression is perpendicular to the layer. The magnetic field lines are mainly in the dense layer of compressed gas. \nThe strength of the initial magnetic field prior to the shock compression is $20\\mu$Gauss and that of the dense region created after compression is about $200\\mu$Gauss on average. \nMany dense filaments are created with axes perpendicular to the mean magnetic field lines. \nWe can also see many faint filamentary structures that mimic ``striations'' observed in the Taurus Dark Cloud and are almost parallel to the mean magnetic field lines \\citep[][]{Goldsmith+2008}. \nIn our simulations, these faint filaments appear to be feeding gas onto dense filaments (similar to what is observed for local clouds by \\citet[e.g.,][]{Sugitani+2011,Palmeirim+2013,Kirk+2013}). \nOnce the line-mass of a dense filament exceeds the critical value ($2C_{\\rm s}^2\/G$), star formation is expected to start \\citep{InutsukaMiyama1992,InutsukaMiyama1997,Andre+2010}. 
\nThis threshold of line-mass for star formation is equivalent to the threshold of the column density of molecular gas $116M_\\sun {\\rm pc}^{-2}$ \\citep[][]{Lada+2010}, if the widths of filaments are all close to 0.1pc \\citep[][]{Arzoumanian+2011,Andre+2014}. \n\nAlthough further analysis is required for quantitative comparison between the results of simulation and observed structures, Figure 1 clearly shows that the structures created by multiple shock wave passages do match the characteristic structures observed in filamentary molecular clouds. \nThis motivates us to describe a basic scenario of molecular cloud formation.\nThe present paper is focused on the implications of this identification of the mechanism of molecular cloud formation.\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\hsize]{inutsuka_fig1.eps}\n\\caption{Face-on column density view of a shock-compressed dense layer of molecular clouds. \nWe set up low-mass molecular clouds by the compression of two-phase HI clouds. \nThis snapshot shows the result of an additional compression of \nlow-mass\nmolecular clouds by a shock wave propagating at 10 km\/s. \nThe magnetic field lines are mainly in a dense sheet of a compressed gas. \nThe color scale for column density (in cm$^{-2}$) is shown on top. \nThe mean magnetic field is in the plane of the layer and its direction is shown by white bars.\nNote the formation of dense magnetized filaments whose axes are almost perpendicular to the mean magnetic field. \nFainter ``striation''-like filaments can also be seen, that are almost perpendicular to the dense filaments. \n\\label{fig1}}\n\\end{figure}\n\n\\section{A Scenario of Cloud Formation Driven by Expanding Bubbles}\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\hsize]{inutsuka_fig2.eps}\n\\caption{\nA schematic picture of sequential formation of molecular clouds by multiple compressions by overlapping dense shells driven by expanding bubbles. \nThe thick red circles correspond to magnetized dense multi-phase ISM where cold turbulent HI clouds are embedded in WNM. \nMolecular clouds can be formed only in the limited regions where the compressional direction is almost parallel to the local mean magnetic field lines, or in regions experiencing an excessive number of compressions. \nAn additional compression of a molecular cloud tends to create multiple filamentary molecular clouds. \nOnce the line-mass of a filament exceeds the critical value, even in a less massive molecular cloud, star formation starts. \nIn general, star formation in a cloud accelerates with the growth in total mass of the cloud. \nGiant molecular clouds collide with one another at limited frequency. \nThis produces very unstable molecular gas and may trigger very active star formation. \n\\label{fig2}}\n\\end{figure}\n\nHI observations of our Galaxy reveal many shell-like structures near the galactic plane \\citep[e.g.,][]{HartmannBurton1997,Taylor+2003}. \nWe identify repeated interactions of expanding shock waves as a basic mechanism of molecular cloud formation, and we depict the overall scenario of cloud formation in our Galaxy as a schematic picture in Figure \\ref{fig2}. \nIn this picture, red circles correspond to the remnants of shock waves due to old and slow supernova remnants or expanding HII regions. \nCold HI clouds embedded in WNM are almost ubiquitously found in the shells of these remnants \n\\citep[e.g.,][]{HartmannBurton1997,Taylor+2003,2007ApJ...664..363H}. 
\nMolecular clouds are expected to be formed in limited regions where the mean magnetic field is parallel to the direction of shock wave propagation, or in regions where an excessive number of shock wave sweepings are experienced. \nTherefore, molecular clouds can be found only in limited regions in shells. \nNote that the typical timescale of each shock wave is of order 1Myr, but the formation of molecular clouds requires many Myrs. \nSome bubbles become invisible as a supernova remnant or an HII region many million years after their birth. \nTherefore, this schematic picture corresponds to a ``very long exposure snapshot'' of the real structure of the ISM. \nEach molecular cloud may have random velocity depending on the location in the most recent bubble that interacts with the cloud. \nInterestingly, this multi-generation picture of the evolution of molecular clouds seems to agree with the observational findings of \\citet{Dawson+2011a,Dawson+2011b,Dawson+2015}, who investigated the transition of atomic gas to molecular gas in the wall of Galactic supershells.\nIn the case of LMC, \\citet{Dawson+2013} concluded that only $12\\sim25$\\% of the molecular mass can be apparently attributed to the formation due to presently visible shell activity. \nThis may not be inconsistent with our scenario since Dawson et al (2013) only considered HI supergiant shells, whereas molecular clouds in our model can form at the interface of much smaller bubbles and shells (which observationally are more difficult to identify and characterize in the HI data) and the timescale for cloud forming shells to become invisible is much shorter than the growth timescale of molecular mass.\n\nA typical velocity of the shock wave due to an expanding ionization-dissociation front is 10km\/s, as shown by Hosokawa \\& Inutsuka (2006a), since it is essentially determined by the sound speed of ionized gas ($\\sim 10^4$K). \nIwasaki et al. (2011b) have shown that if a molecular cloud is swept-up by shock wave of ~10km\/s, it moves with a velocity slightly less than the shock speed. \nThus, the mean velocity of each molecular cloud should be somewhat smaller than that of the most recent shock wave. \nWhen the shock velocity of a supernova remnant is much higher than 10km\/s, the resultant interaction would result in the destruction of molecular clouds. \nTherefore, the cloud-to-cloud velocity dispersion of molecular clouds should be similar to 10km\/s.\nAccording to this acquisition mechanism of random velocity, the velocity of a cloud is not expected to depend strongly on its mass. \nIn other words, random velocities of molecular clouds of different masses are not expected to be in equipartition of energy ($M \\delta v^2\/2=$const.). \nObservations by Stark \\& Lee (2005) have shown that the random velocities of low-mass molecular clouds ($< 2 \\times 10^5 M\\sun$) only vary by a few, with no dependence on cloud mass. \nThese observations are therefore more consistent with our picture than a model in which molecular clouds acquire their relative velocities via mutual gravitational interaction.\n\n\nIn limited circumstances, created molecular clouds collide with one another. \nThis produces highly gravitationally unstable molecular gas and may trigger very active star formation \\citep[e.g.,][]{Fukui+2014}. \nInoue \\& Fukui (2013) have done magnetohydrodynamical simulations of a cloud-cloud collision and argue that it may lead to active formation of massive stars \\citep[see also][]{Vaidya+2013}. 
\nThis mode of massive star formation is not, however, a prerequisite of our model.\n\n\n\\subsection{Formation Timescale of Molecular Clouds}\nLet's first model the growth of molecular clouds. \n\\cite{InoueInutsuka2012} have shown that we need multiple episodes of compression of HI clouds to create molecular clouds. \nAccording to the standard picture of supernova-regulated ISM dynamics (e.g., McKee \\& Ostriker 1977), the typical timescale between consecutive compressions by supernova remnants is about 1Myr. \nThe total creation rate of expanding bubbles is larger than the occurrence rate of supernova explosions, since the former can also be created by radiation from massive stars less massive than supernova progenitors. \nTherefore, the actual timescale of compressions in ISM, $T_{\\rm exp}$, should be somewhat smaller than 1Myr if it is averaged over the Galactic thin disk. \nObviously the compression timescale is smaller in the spiral arms and larger in inter-arm regions since star formation activity is concentrated in the spiral arms.\nThus, we have to consider the time evolution of cloud mass for much longer than 1 Myr. \n\nLet us estimate the typical timescale of molecular cloud growth. \n\\cite{InoueInutsuka2012} have shown that the angle between the converging flow direction and the average direction of the magnetic field should be less than a certain angle for molecular cloud formation. \nAlthough Inoue \\& Inutsuka (2009) shows that this critical angle depends on the flow speed, we adopt a critical angle of 15 degrees (=0.26 radian) in the following discussion for simplicity. \nThis value is not so different from the angle ($\\sim 20$ degrees) for possible compression in the simpler one-dimensional model by \\cite{HennebellePerault2000}. \nFor simplicity we assume that magnetic field is uniform in the region we consider and the direction of compression is isotropic. \nThe solid angle spanned by the possible directions of compression resulting in the formation of a molecular cloud is $0.26^2 \\pi$. \nThe anti-parallel directions are also possible. \nTherefore, the probability, $p$, of successfully forming a molecular cloud in a single compression can be estimated by the ratio of solid angle over which compressions lead to molecular cloud formation to the solid angle of the whole sphere, i.e., $p=2 \\cdot 0.26^2 \\pi\/(4\\pi)=0.034$.\nNote that Figure 1 is not the snapshot just after the birth of molecular clouds, but the result of one additional compression of the molecular clouds in which the direction of compression is perpendicular to the mean direction of the magnetic field lines.\nWe also emphasize that since the formation of a GMC requires many episodes of compression, our model does not predict a strong correlation between the present-day magnetic field direction and the orientation of the GMC.\n\nAfter each compression a cloud may slightly expand because of the reduced pressure of the ambient medium, which may result in the loss of diffuse components of cloud mass. \nObservationally the average column densities of molecular clouds do not seem to change very much and always appear to correspond to a visual extinction of several.\nThis means that the mass of a cloud is proportional to its cross-section. 
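Before turning to the resulting growth-rate model, the solid-angle estimate made above is simple enough to be checked numerically. The short Python sketch below is an illustrative cross-check only (not part of the original analysis); it evaluates the success probability $p$ for the critical angles of 15 and 20 degrees quoted above, together with the implied average number of compressions per successful molecular-cloud-forming event.
\\begin{verbatim}
import numpy as np

# Favorable compression directions form two antipodal cones of half-angle
# theta_c around the mean field, so p = 2*(pi*theta_c^2)/(4*pi) for small theta_c.
for theta_deg in (15.0, 20.0):
    theta_c = np.deg2rad(theta_deg)
    p = 2.0 * np.pi * theta_c**2 / (4.0 * np.pi)
    print(theta_deg, 'deg: p =', round(p, 3),
          ' -> about', int(round(1.0 / p)), 'compressions per success')
# 15 deg gives p ~ 0.034, i.e. roughly 30 compressions are needed on average,
# consistent with the estimate used in the text.
\\end{verbatim}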
\nSince the compressional formation of molecular material is expected to be proportional to the cross-section of the pre-existing cloud, we can model the rate of increase of molecular cloud mass as \n\\begin{equation}\n \\frac{dM}{dt} = \\frac{M}{T_{\\rm f}} , \\label{eq:Eqformation}\n\\end{equation}\nwhere $T_{\\rm f}$ denotes the formation timescale. \nThis equation shows that the resultant mass of each molecular cloud grows exponentially with a long timescale $T_{\\rm f}$ if we average in time over a few Myr.\nIf self-gravity increases the accumulation rate of mass into the molecular cloud, the right-hand side of Equation (1) may have a stronger dependence on mass. \nFor example, the so-called ``gravitational focusing factor'' increases the cross section of coalescence by a factor proportional to the square of mass in the large mass limit. \nThis will produce a significantly steeper slope of the cloud mass function (see Section 4). \nA linear dependence on mass in our formulation implicitly assumes that self-gravity of the whole molecular cloud does not significantly affect the cloud growth.\n\nBased on our investigation of molecular cloud formation described above,\nwe estimate the formation timescale as follows:\n\\begin{equation}\n T_{\\rm f} = \\frac{1}{p} \\cdot T_{\\rm exp}. \\label{eq:Tformation}\n\\end{equation}\nThe average value in spiral arm regions would be $T_{\\rm f} \\sim 10$ Myr, but can be a factor of a few longer in the inter-arm regions. \nIn reality, many repeated compressions with large angles between the flow direction and mean magnetic field lines gradually increase the dense parts of clouds, and hence contribute to the formation of molecular clouds over a long timescale. \nThis may mean that the actual value of $T_{\\rm f}$ is somewhat smaller than the estimate of Equation (2). \n\nFukui et al. (2009) have shown that clouds with masses of a few $\\times 10^5M_\\sun$ gain mass at a rate of 0.05 $M_\\sun$\/yr over a timescale of 10Myr. \nThis means that the mass of a cloud in their sample doubles in $\\sim 10$Myr, which is consistent with our choice of $T_{\\rm f}=10$Myr. \nNote, however, that Fukui et al. (2009) argued that the atomic gas accretion is driven by the self-gravity of a GMC, which is not included in the present modelling where we assume that gas accretion is essentially driven by the interaction with expanding bubbles. \nIf the gravitational force is significant for the HI accretion onto a GMC, it possibly enhances the growth rate of the molecular cloud (i.e., smaller $T_{\\rm f}$). \nFurther quantitative studies of the effect of self-gravity on the accretion of gas onto a GMC remain to be done. \nIn the present paper we neglect this effect and do not distinguish self-gravitationally bound and pressure-confined clouds, for simplicity.\n\nIn regions where the number density of molecular clouds is very large, cloud-cloud collision may contribute to the increase of cloud mass, and hence, may also affect the mass function of molecular clouds. \nThe detailed modelling of cloud-cloud collision will be given in our forthcoming paper. \nHere we ignore the contribution of cloud-cloud collision to the change of mass function and simply use the constant value of $T_{\\rm f}$.\n\n\n\\section{Quenching of Star Formation in Molecular Clouds}\nNext we consider the destruction of molecular clouds to determine how the star formation is quenched. \nDale et al. 
(2012, 2013) have done extensive three-dimensional simulations of star cluster formation with ionization or stellar wind feedback and shown that the effects of photo-ionization and stellar winds are limited in quenching the star formation in massive molecular clouds \\citep[see also][]{Walch+2012}. \n\\citet{Diaz-Miller+1998} calculated the steady-state structures of HII regions and pointed out that the photodissociation of hydrogen molecules due to FUV photons is much more important than photoionization due to UV photons for the destruction of molecular clouds. \n\\cite{2005ApJ...623..917H,2006ApJ...646..240H,2006ApJ...648L.131H,2007ApJ...664..363H} actually included photodissociation in the detailed radiation hydrodynamical calculations of an expanding HII region in a non-magnetized ISM (by resolving photodissociative line radiation), and found the limited effect of ionizing radiation and essentially confirmed the critical importance of FUV radiation for the ambient molecular cloud. \n\n\\subsection{Expanding HII Regions in Magnetized ISM}\nIn the case of non-magnetized molecular gas of density $10^2{\\rm cm}^{-3}$ around a massive star larger than $\\sim 20 M_\\sun$, a large amount of gas ($\\sim 3\\times 10^4 M_\\sun$) is photodissociated and re-processed into molecular material in the dense shell around the expanding HII region within 5 Myrs \\citep{2006ApJ...648L.131H}. \nAccording to the series of papers by \\citet{InoueInutsuka2008,InoueInutsuka2009}, however, \nthe inclusion of the magnetic field is expected to reduce the density of the swept-up shell substantially. \nTherefore the magnetic field should significantly affect the actual structure of the compressed shell and the subsequent star formation process \\citep[c.f., 3D simulation by][]{Arthur+2011}. \n\nTo quantitatively analyze the consequence, we have done numerical magnetohydrodynamics simulations of an expanding bubble due to UV and FUV photons from the massive star. \nThe details of the method are the same as described in \n\\cite{2006ApJ...646..240H,2006ApJ...648L.131H} except for the inclusion of the magnetic field. \nSince the calculation assumes spherical symmetry, we include only the $13\\mu$Gauss magnetic field that is transverse to the radial direction as a simplification. \nThe magnetic pressure due to the transverse field is accounted for in the Riemann solver as in \\cite{Sano+1999} \\citep[see][]{SuzukiInutsuka2006,IwasakiInutsuka2011}. \n\nThe upper panel of Figure \\ref{fig3} shows the resultant masses of ionized gas in the HII region and atomic gas in the photodissociation region transformed from cold molecular gas around an expanding HII region at the termination time as a function of the mass of a central star ($M_*$). \nAlso plotted is the warm molecular gas in and outside the compressed shell around the HII region. \nThe temperature of the warm molecular gas exceeds 50K. \nIts column density is smaller than $10^{21} {\\rm cm}^{-2}$, and hence, dust shielding for CO molecules is not effective and all the CO molecules are photo-dissociated.\nThis warm molecular gas without CO (so-called CO-dark H$_2$) is not expected to be gravitationally bound unless the mass of the parental molecular cloud is exceptionally large. \nTherefore the subsequent star formation in this warm molecular gas is not expected. \nThe uppermost black solid line denotes the total mass ($M_{\\rm g}(M_*)$) of these non-star-forming gases. 
\n\nThe lower panel of Figure 3 shows gas mass in the upper panel multiplied by $M_*^{-1.3}$ (blue dashed curve), $M_*^{-1.5}$ (black solid curve), and $M_*^{-1.7}$ (red dotted curve). \nThe areas under these curves are proportional to the mass affected by massive stars whose mass distribution follows \n$dn_*\/d(log M_*) \\propto M_*^{1.3}, M_*^{1.5}$, and $M_*^{1.7}$, respectively. \nWe can see that the shape of the curve does not vary much, and stars with mass $20\\sim30 M_\\sun$ always dominate the disruption of the molecular cloud. \n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\hsize]{inutsuka_fig3.eps}\n\\caption{\nUpper Panel: \nMasses in various phases transformed from cold molecular gas around an expanding HII region at the termination time as a function of the mass of a central star. \nThe red dot-dashed line, the blue dotted line, and purple dashed line correspond to \nionized hydrogen in the HII region, \nneutral hydrogen in the photodissociation region, \nand warm molecular hydrogen gas without CO, respectively. \nThe uppermost black solid line denotes the total mass of these non-star-forming gases. \nLower Panel: \nThe IMF-weighted mass of non-star-forming gas transformed from molecular gas by a massive star of mass $M_*$. \nThe area under the curve is proportional to the mass generated by massive stars whose mass function follows $dn\/d(\\log M_*) \\propto M_*^{-\\beta +1}$. \nThe peak of the curve determines the inverse of star formation efficiency $\\epsilon_{\\rm SF}$ (see explanation below Equation \\ref{eq:SFE}). \n\\label{fig3}\n}\n\\end{figure}\n\n\nOur calculations include ionization, photodissociation, and magnetohydrodynamical shock waves, but are restricted to a spherical geometry. \nTherefore we should investigate the dispersal process in more realistic three dimensional simulations. \nHowever, the inclusion of photo-dissociation requires the numerical calculation of FUV line transfer and hence remains extremely difficult in multi-dimensional simulations.\n\n\\subsection{Star Formation Efficiency}\nHereafter we assume the power law exponent of mass in the initial mass function of stars for large mass ($M_* > 8 M_\\sun$) is $-\\beta$ and $2<\\beta<3$ \n($ dn_*\/dM_* \\propto M_*^{-\\beta} $). \nNow we calculate the total mass of non-star forming gas disrupted by new born stars in a cloud. \nOne might think that the total mass of non-star-forming mass in the stellar system can be calculated by \n$\n M_{\\rm g,total} = \\int_0^\\infty M_{\\rm g}(M_*) (dn_*\/dM_*) dM_*\n$.\nHowever, this estimation is meaningful only in the case the number of massive stars in a cloud are very large, \n$\n \\int_{20 M_\\sun}^\\infty (dn_*\/dM_*) dM_* \\gg 1. \n$ \nIn reality, the number of massive stars in a molecular cloud of intermediate mass is quite small, and even a single massive star can destruct the whole parental molecular cloud. \nThus, to analyze the quenching of star formation in molecular clouds, it is more appropriate to determine the most likely mass of the star that is responsible for the destruction of molecular clouds. 
\n\nSince we assume the large mass side of the stellar initial mass function can be approximated by the power law of the exponent $-\\beta$, we can express the mass function in logarithmic mass of massive stars created in a cloud as \n\\begin{equation}\n \\frac{dn_*}{d\\log M_*} = M_* \\frac{dn_*}{dM_*} \n = N_* \\left( \\frac{M_*}{M_\\sun} \\right)^{-\\beta+1}\n ~~{\\rm for}~M_* > 8 M_\\sun\n \\label{eq:IMF} \n\\end{equation}\nNote that the pre-factor $N_*$ is defined for the mass distribution of stars in the individual cloud we are analyzing. \nFor convenience, we define the effective minimum mass ($M_{\\rm *m}$) of a star in the hypothetical power law mass function by the following formula for the total mass in the cloud, \n\\begin{eqnarray}\n M_{\\rm *,total} \n& = & \n \\int_{0} ^{\\infty} M_* \\frac{dn_*}{dM_*} dM_* \\nonumber \\\\ \n& \\equiv & \n \\int_{M_{\\rm *m}}^{\\infty} \n N_* \\left( \\frac{M_*}{M_\\sun} \\right)^{-\\beta+1} dM\n = \\left( \\frac{N_*}{\\beta-2} \\right)\n \\left( \\frac{M_\\sun}{M_{\\rm *m}}\\right)^{\\beta-2} .\n \\label{eq:Matotal}\n\\end{eqnarray}\n\nSuppose that a single massive star more massive than $M_{\\rm *d}$ is created in the molecular cloud. \nThis condition can be expressed as \n\\begin{equation}\n 1 =\\int_{M_{\\rm *d}}^{\\infty} \\frac{dn_*}{dM_*} dM_* = \n \\left( \\frac{N_*}{\\beta-1} \\right)\n \\left( \\frac{M_\\sun}{M_{\\rm *d}} \\right)^{\\beta-1} \n ~~{\\rm for}~M_* > 8 M_\\sun .\n \\label{eq:MdNa}\n\\end{equation}\nThis equation relates $M_{\\rm *d}$ and $N_*$. \nWe can express the total mass of stars in the cloud as a function of $M_{\\rm *d}$ by eliminating $N_*$ in equations (\\ref{eq:Matotal}) and (\\ref{eq:MdNa}), \n\\begin{equation}\n M_{\\rm *,total} =\n \\left( \\frac{\\beta-1 }{\\beta-2 } \\right)\n \\left( \\frac{M_\\sun }{M_{\\rm *m}} \\right)^{\\beta-2} \n \\left( \\frac{M_{\\rm *d}}{M_\\sun } \\right)^{\\beta-1} \n ~~{\\rm for}~M_* > 8 M_\\sun .\n \\label{eq:MtMd}\n\\end{equation}\nThus, $M_{\\rm *,total} \\propto M_{\\rm *d}^{\\beta-1}$ for $M_* > 8 M_\\sun$.\nNow we suppose that a molecular cloud of mass $M_{\\rm cl}$ is eventually destroyed by UV and FUV photons from a star of mass $M_{\\rm *d}$ born in the cloud, and hence, star formation in the cloud is quenched. \nThe condition for this to occur can be written as \n$\n M_{\\rm cl} = M_{\\rm g}\n$\n and \n$\n \\epsilon_{\\rm SF} M_{\\rm cl} = M_{\\rm *,total}, \n$\nwhere $\\epsilon_{\\rm SF}$ is the star formation efficiency (the ratio of the total mass of stars to the mass of the parental cloud).\n\nIf $\\epsilon_{\\rm SF}$ is smaller than the value that would satisfy the above condition, the cloud destruction is not sufficient and star formation continues using the remaining cold molecular material in the cloud, which in turn increases $\\epsilon_{\\rm SF}$. 
\nThus, we expect that the actual evolution of a molecular cloud finally satisfies the above condition when the star formation is eventually quenched.\nThis means that the star formation efficiency should be given by \n\\begin{equation}\n \\epsilon_{\\rm SF} = \\frac{M_{\\rm *,total}}{M_{\\rm g}(M_{\\rm *d})} \n= \\left( \\frac{\\beta-1 }{\\beta-2 } \\right)\n \\left( \\frac{M_\\sun }{M_{\\rm *m}} \\right)^{\\beta-2}\n \\left( \\frac{M_{\\rm *d}}{M_\\sun } \\right)^{\\beta-1}\n \\left( \\frac{M_{\\rm g} }{M_\\sun } \\right)^{-1}.\n \\label{eq:SFE} \n\\end{equation}\nThe preceding argument suggests that the value of the star formation efficiency should take the minimum value of the right hand side of this equation, \ni.e., the maximum value of $M_{\\rm g} M_{\\rm *}^{-\\beta+1}$ where $M_{\\rm g}$ is a function of $M_*$. \nFigure 3 shows that the maximum value of $M_{\\rm g} M_{\\rm *}^{-\\beta+1}$ is attained at $M_* \\sim 30M_\\sun$ where $M_{\\rm g}$ is about $10^5 M_\\sun$. \nTherefore we can conclude that once a massive star of $M_{\\rm *d} = 30 \\pm 10 M_\\sun$ is created, the star formation is eventually quenched in a cloud of mass $M_{\\rm g}(M_{\\rm *d}) \\sim 10^5 M_\\sun$. \nThis corresponds to $\\epsilon_{\\rm SF} \\sim 10^{-2}$, if we adopt $\\beta=2.5$ and $M_{\\rm *m} = 0.1M_\\sun$.\nThe dependence of $\\epsilon_{\\rm SF}$ on $M_{\\rm *m}$ is quite weak ($\\beta-2 \\sim 0.5$) as shown in Equation (7). \nIt is also not sensitive to $\\beta$ in the limited range ($2.3 < \\beta < 2.7$) as shown in Figure 3. \nThus, the authors think that this value of $\\epsilon_{\\rm SF} \\sim 10^{-2}$ is robust in typical star-forming regions in our Galaxy. \nThis argument may explain the reason for the low star formation efficiency in molecular clouds observationally found many decades ago \\citep[e.g.,][]{ZuckermanEvans1974}. \n\nNote that the sharp increase of $M_{\\rm g} M_*^{-\\beta+1}$ is due to the sharp increase of the UV\/FUV luminosity at $M_* \\sim 20 M_\\sun$. \nTherefore a star much smaller than $20M_\\sun$ is not expected to be the main disrupter of the molecular cloud. \nFor example, the upper panel of Figure 3 shows that a $10M_\\sun$ star can quench $\\sim 10^3 M_\\sun$ of the surrounding molecular material. \nHowever, a $10^3 M_\\sun$ molecular cloud is not likely to produce a $10M_\\sun$ star unless $\\epsilon_{\\rm SF} \\sim 1$ as can be seen in Equation (\\ref{eq:MtMd}), and hence, the destruction of a $10^3 M_\\sun$ cloud by a $10M_\\sun$ star is not expected, in general. \n\n\nIf the initial mass function does not depend on the parent cloud mass as we assume here, the star formation efficiency is not expected to depend on mass for a cloud larger than $\\sim 10^5 M_\\sun$. \nThis can be understood as follows. \nThe number of UV\/FUV photons is proportional to the number of massive stars, which increases with the mass of the cloud. \nHowever, the required number of photons also increases with the mass of the cloud. \nFor example, a $10^6 M_\\sun$ cloud will produce 10 stars with mass $> 30M_\\sun$ \nif $\\epsilon_{\\rm SF} = 10^{-2}$, $\\beta=2.5$, and $M_{\\rm *m} = 0.1M_\\sun$.\nThen, these 10 massive stars will destroy $10 \\times 10^5 M_\\sun$ of molecular gas. \nThus, star formation in the whole molecular cloud is quenched when $\\epsilon_{\\rm SF} = 10^{-2}$. \nTherefore we can conclude that the star formation efficiency does not depend on the mass of the cloud if the shape of the initial mass function does not depend on the mass of the cloud. 
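These order-of-magnitude statements are easy to reproduce directly from Equation (7). The short Python sketch below is an illustrative cross-check only; it evaluates the star formation efficiency for the fiducial values quoted above and the expected number of stars above $30M_\\sun$ in a $10^6 M_\\sun$ cloud, working throughout in units of $M_\\sun$.
\\begin{verbatim}
# Star formation efficiency of Equation (7); all masses in solar units.
def sfe(beta, M_m, M_d, M_g):
    return (beta - 1.0) / (beta - 2.0) * M_m**(2.0 - beta) * M_d**(beta - 1.0) / M_g

eps_sf = sfe(2.5, 0.1, 30.0, 1.0e5)
print('eps_SF =', round(eps_sf, 4))            # ~0.016, i.e. of order 10^-2

# Weak sensitivity to beta in the quoted range 2.3 < beta < 2.7:
for beta in (2.3, 2.5, 2.7):
    print('beta =', beta, ': eps_SF =', round(sfe(beta, 0.1, 30.0, 1.0e5), 4))

# Number of stars above 30 Msun formed in a 10^6 Msun cloud with this
# efficiency, using the normalization N_* of Equations (3)-(5):
beta, M_m = 2.5, 0.1
M_stars = eps_sf * 1.0e6                       # total stellar mass formed
N_star = (beta - 2.0) * M_m**(beta - 2.0) * M_stars
print('stars above 30 Msun:', round(N_star / (beta - 1.0) * 30.0**(1.0 - beta), 1))
# ~10 such stars, each able to disrupt ~10^5 Msun, as argued above.
\\end{verbatim}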
\n\nNow we can estimate the timescale for the destruction of a molecular cloud. \nOur calculation of the expanding ionization\/dissociation front in the magnetized molecular cloud shows that a $\\sim 10^5 M_\\sun$ molecular cloud can be destroyed within 4 Myrs.\nThe actual destruction timescale, $T_{\\rm d}$, should be the sum of the timescale of formation of a massive star and expansion timescale of the HII region, i.e., $T_{\\rm d} \\approx T_* + 4$Myr, where $T_*$ denotes the timescale for a massive star to form once the cloud is created. \nAfter one cycle of molecular cloud destruction over a timescale $T_{\\rm d}$, only a fraction, $\\epsilon_{\\rm SF}$, of the molecular gas is transformed into stars. \nTherefore, the timescale to completely transform a molecular cloud to stars is $T_{\\rm d}\/\\epsilon_{\\rm SF} \\sim 1.4$Gyr for $T_* \\sim 10$Myrs and $T_{\\rm d} \\sim 14$Myrs. \nThis may explain the so-called ``depletion timescale'' of molecular clouds that manifests observationally in the Schmidt-Kennicutt Law \\citep[e.g.,][] {Bigiel+2011,Lada+2012,KennicuttEvans2012}. \n\n\\section{Mass Function of Molecular Clouds}\nIn order to describe the time evolution of the mass function of molecular clouds, $n_{\\rm cl}(M)=dN_{\\rm cl}\/dM$, over a timescale much longer than 1 Myr,\n we adopt coarse-graining of short-timescale growth and destruction of clouds, and describe the continuity equation of molecular clouds in mass space as \n\\begin{equation}\n \\PD{n_{\\rm cl}}{t} + \\PD{}{M} \\left( n_{\\rm cl} \\frac{dM}{dt} \\right)\n = - \\frac{n_{\\rm cl}}{T_{\\rm d}} , \n\\end{equation}\nwhere \n$n_{\\rm cl}(dM\/dt)$ denotes the flux of mass function in mass space, \n$dM\/dt$ describes the growth rate of the molecular cloud as given in Equation (1). \nThe sink term on the right hand side of this equation corresponds to the destruction rate of molecular clouds in the sense of ensemble average. \nIf the dynamical effects such as shear and tidal stresses contribute to the cloud destruction \\citep[e.g.,][]{Koda+2009,DobbsPringle2013}, we should modify $T_{\\rm d}$ in this equation. \nSince the left hand side of this equation should be regarded as the ensemble average, the term $1\/T_{\\rm d}$ represents the sum of the destruction rate of all the possible processes. \nHere we simply assume that the resultant $T_{\\rm d}$ is not very different from our estimate of destruction due to radiation feedback from massive stars. \n\nAccording to the series of our work on the formation of molecular clouds, the molecular cloud as a whole is not necessarily created as a self-gravitationally bound object. \nTherefore, our modelling of the mass function of molecular clouds is not restricted to the self-gravitationally bound clouds.\nHowever, our modelling is not intended to describe the spatially extended diffuse molecular clouds much larger than the typical size of the bubbles ($\\lesssim100$pc).\n\nA steady state solution of the above equation is \n\\begin{equation}\n n_{\\rm cl}(M) = \\frac{N_0}{M_{\\sun}} \\left( \\frac{M}{M_{\\sun}} \\right)^{-\\alpha}, \n\\end{equation}\nwhere $N_0$ is a constant and \n\\begin{equation}\n \\alpha = 1 + \\frac{T_{\\rm f}}{T_{\\rm d}} . 
\\label{eq:alpha}\n\\end{equation}\nFor conditions typical of spiral arm regions in our Galaxy, we expect $T_* \\sim T_{\\rm f}$ and thus $T_{\\rm f} \\la T_{\\rm d}$, which corresponds to $1 < \\alpha \\lesssim 2$.\nFor example, $T_{\\rm f}=T_*=10$Myrs corresponds to $\\alpha \\approx 1.7$, which agrees well with observations \\citep{Solomon+1987,Kramer+1998,Heyer+2001,RomanDuval+2010}. \n\nHowever, in a quiescent region away from spiral arms or in the outer disk, in which there is a very limited amount of dense material, $T_{\\rm f}$ is expected to be larger at least by a factor of a few than in spiral arms. \nIn contrast, $T_{\\rm d}$ is not necessarily expected to be large even in such an environment, since the meaning of $T_{\\rm d}$ is the average timescale of cloud destruction that occurs after the cloud is created, and thus, it does not necessarily depend on the growth timescale of the cloud. \nTherefore, we expect that $T_{\\rm d}$ can be smaller than $T_{\\rm f}$ in such an environment, which may produce $\\alpha = 1+ T_{\\rm f}\/T_{\\rm d} > 2$. \nThis tendency is actually observed in the Milky Way outer disk, the LMC, M33, and M51 \\citep{Rosolowsky2005,Wong+2011,Gratier+2012,Hughes+2010,Colombo+2014}. \n\n\n\n\nThe total number of molecular clouds is calculated as \n\\begin{eqnarray}\n N_{\\rm total} &=& \\int_{M_1}^{M_2} n(M) dM \n = \\frac{N_0}{\\alpha-1} \n \\left[ \n \\left( \\frac{M_{\\sun}}{M_1} \\right)^{\\alpha-1}\n - \\left( \\frac{M_{\\sun}}{M_2} \\right)^{\\alpha-1}\n \\right] \n \\nonumber\\\\ \n &\\sim& \\frac{N_0}{\\alpha-1} \n \\left( \\frac{M_{\\sun}}{M_1} \\right)^{\\alpha-1}, \n\\end{eqnarray}\nwhere we used $M_2 \\gg M_1$ in the final estimate. \nThe total number of clouds is essentially determined by the lower limit of the mass of the cloud. \nLikewise, the total mass of the molecular clouds is \n\\begin{eqnarray}\n M_{\\rm total} &=& \\int_{M_1}^{M_2} M n(M) dM \n = \\frac{N_0 M_{\\sun}}{2-\\alpha} \n \\left[ \n \\left( \\frac{M_2}{M_{\\sun}} \\right)^{2-\\alpha}\n - \\left( \\frac{M_1}{M_{\\sun}} \\right)^{2-\\alpha}\n \\right]\n \\nonumber\\\\ \n &\\sim& \\frac{N_0 M_{\\sun}}{2-\\alpha} \n \\left( \\frac{M_2}{M_{\\sun}} \\right)^{2-\\alpha}, \n\\end{eqnarray}\nwhere we used $M_2 \\gg M_1$ and $\\alpha < 2$ in the final estimate. \nThus, the total mass of molecular clouds is essentially determined by the upper limit of the mass of the cloud. \nLet us assume $M_{\\rm total} \\sim 10^9 M_{\\sun}$ in the Galaxy; then our simple choice of $M_1=10^2 M_{\\sun}$, $M_2=10^6 M_{\\sun}$, and $\\alpha=1.5$ corresponds to $N_{\\rm total}\\sim10^5$ and the average mass of molecular clouds is $M_{\\rm ave} \\equiv M_{\\rm total}\/N_{\\rm total} \\sim 10^4 M_{\\sun}$. \nNote that these numbers depend on our choice of $M_1$. \n\n\n\\section{Summary}\nIn general, dense molecular clouds cannot be created in shock waves propagating in magnetized WNM without cold HI clouds. \nIn this paper we identify repeated interactions of shock waves in the dense ISM as a basic mechanism for creating filamentary molecular clouds, which are ubiquitously observed in the nearby ISM \\citep{Andre+2014}. \nThis suggests an expanding-bubble-dominated picture of the formation of molecular clouds in our Galaxy, which enables us to envision an overall picture of the statistics of molecular clouds and resultant star formation. 
\nTogether with the findings of our previous work, our conclusions are summarized as follows: \n\\begin{enumerate}\n\\item \nTurbulent cold HI clouds embedded in WNM can be readily created in the expanding shells of HII regions or in the very late phase of supernova remnants. \nIn contrast, the formation of molecular clouds in a magnetized ISM needs many compression events. \nOnce low-mass molecular clouds are formed, an additional compression creates many filamentary molecular clouds. \nOn average, successive compressions are separated by a timescale of order 1Myr in our Galaxy.\nThe timescale of cloud formation is a few times 10Myrs. \n\\item \nSince the galactic thin disk is occupied by many bubbles, molecular clouds are formed in the overlapping regions of (old and new) bubbles. \nHowever, since the average lifetime of each bubble is shorter than the timescale of cloud formation, it is difficult to observationally identify the multiple bubbles that created the molecular clouds. \n\\item \nThe velocity dispersion of molecular clouds should originate in the expansion velocities of bubbles. \nThis is estimated to be $\\lesssim$10km\/s and should not strongly depend on the mass of the molecular cloud. \n\\item \nTo describe the growth of molecular cloud mass we can temporally smooth out the evolution over timescales larger than $\\sim$ 1Myr. \nThe resultant mass (smoothed over time) of each molecular cloud is an almost exponentially increasing function of time. \n\n\\item \nThe destruction of a molecular cloud is mainly due to UV\/FUV radiation from massive stars more massive than $20M_\\sun$. \nThe probability of cloud destruction is not a sensitive function of the mass of molecular clouds. \nIf the shape of the initial mass function does not vary much with the mass of parent molecular clouds, cloud destruction by $30 \\pm 10 M_\\sun$ stars results in a star formation efficiency of order 1\\%. \nThis property explains the observed constancy of the gas depletion timescale ($1 \\sim 2$ Gyr) of giant molecular clouds in the solar neighborhood and possibly in some external galaxies where the normalizations for the Schmidt-Kennicutt Law obtained by high-density tracers are shown to be similar.\n\n\\item \nThe steady state of the evolution of the cloud mass function corresponds to a power law with exponent $-n$ in the range $1 < n \\lesssim 2$ in the spiral arm regions of our Galaxy. \nHowever, a larger value of the exponent, such as $n > 2$, is possible in the inter-arm regions. \n\\end{enumerate}\n\nNote that the first and third conclusions have been partly shown in our previous investigations \\citep{InoueInutsuka2009}. \nIn addition, we can suggest the following implications from these conclusions: \n\\begin{enumerate}\n\\setcounter{enumi}{6}\n\\item \nStar formation starts, even in small molecular clouds, once the line-mass of an internal self-gravitating filament exceeds the critical value \\citep{Andre+2010}. \nOur analysis suggests that the mass of an individual molecular cloud increases roughly exponentially over $\\sim 10$ Myrs.\nAccording to the formation mechanism driven by repeated compressions, we expect that the total mass in filaments of sufficiently high line-mass increases with the number of compressional events. \nThis means that the mass of star-forming dense gas increases with the mass of the molecular cloud and the star formation should accelerate over many million years. 
\nThis conjecture may provide a clue in understanding the star formation histories found by \\citet{PallaStahler2010} in seven individual molecular clouds such as Taurus-Auriga and $\\rho$ Ophiuchi. \n\\item \nMolecular clouds may collide over a timescale of a few times 10 Myrs, depending on the relative locations in adjacent (almost invisible) bubbles. \nSuch a molecular cloud collision may result in active star formation in a giant molecular cloud \\citep[e.g.,][]{Fukui+2014}. \n\\end{enumerate}\n\nThese implications should be investigated in more detail by numerical simulations. \nOur radiation magnetohydrodynamics simulations of an expanding bubble due to UV and FUV photons from the massive star show that most of the material in molecular clouds becomes warm molecular gas without CO molecules. \nAlthough we have to investigate the fate of the CO-dark gas in more detail, we expect that the total mass can be very large and may account for the dark gas indicated by various observations \\cite[e.g.,][]{Grenier+2005}. \n\nThere are many reports that the Kennicutt-Schmidt correlation varies with some properties of galaxies \n\\citep[e.g., ][]{Saintonge+2011,Saintonge+2012,Meidt+2013,Davis+2014}. \nIn addition, a simple relation does not fit the center of our Galaxy \\citep[e.g.,][]{Longmore+2013}. \nThe reasons for these deviations remain to be studied.\n\n\n\n\\begin{acknowledgements}\nSI thanks Hiroshi Kobayashi and Jennifer M. Stone for useful discussions and comments. \nSI is supported by Grant-in-Aid for Scientific Research (23244027, 23103005).\n\\end{acknowledgements}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe discrete memoryless interference channel (DM-IC) is the canonical model for studying the effect of interference in wireless systems. The capacity of this channel is known only in some special cases, e.g., the class of deterministic ICs \\cite{Gam82,Cho07}, strong interference conditions \\cite{Sat81,Cos87,Chu07}, degraded conditions \\cite{Ben79,Liu08}, and a class of semideterministic ICs \\cite{Cho09}. Characterizing the capacity region in the general case has been one of the long-standing open problems in information theory. The best known achievable rate region is the so-called Han-Kobayashi region, which can be achieved by schemes based on the concepts of rate-splitting and superposition coding \\cite{Han81,Cho06}. Rate-splitting refers to the technique of splitting the message at a transmitter into a common and a private part, where the common part is decoded at all the receivers and the private part is decoded only at the intended receiver. The two parts of the message are then combined into a single signal using superposition coding, first introduced in \\cite{Cov72} in the context of the broadcast channel.\nIn all the special cases where the capacity is known, the Han-Kobayashi region equals the capacity region. However, it has been very recently shown that this inner bound is not tight in general \\cite{Nai15}.\n\nThe first result we present in this paper is to show that the Han-Kobayashi region can be achieved by a multicoding scheme. This scheme does not involve any explicit rate-splitting. Instead, the codebook at each encoder is generated as a multicodebook, i.e., there are multiple codewords corresponding to each message. 
The auxiliary random variable in this scheme does not explicitly carry a part of the message; rather, it implicitly carries \\emph{some} part of the message, and it is not required to specify which part.\\footnote{A similar idea, combined with block-Markov operation, has been recently used in \\cite{Lim14} to develop an achievability scheme called distributed-decode-forward for broadcast traffic on relay networks.} In this sense, its role is different from that in the Han-Kobayashi scheme \\cite{Han81,Cho06}, and is reminiscent of the encoding for state-dependent channels in \\cite{Gel80}, and the alternative proof of Marton's achievable rate region for the broadcast channel given in \\cite{Gam81}. A key advantage of\nthe multicoding nature of the new scheme is that it can be easily extended to obtain simple achievability schemes for setups in which the canonical interference channel model is augmented to incorporate additional node capabilities such as cognition and state-dependence, while extending the original Han-Kobayashi scheme to such setups can quickly become highly involved. We demonstrate this by constructing schemes for settings which augment the canonical interference channel model in different ways.\n \n\nThe first setting we consider is when the interference channel is state-dependent and the state information is available non-causally to one of the transmitters (cognitive transmitter). For simplicity, we focus on the case when the cross-link between the non-cognitive transmitter and its undesired receiver is weak enough to be ignored, giving rise to the so-called $Z$-interference channel topology. We know that for a point-to-point state-dependent channel with non-causal state information at the encoder, the optimal achievability scheme due to Gelfand and Pinsker uses multicoding at the encoder. Hence, for state-dependent interference channels with noncausal state information at the encoders too, we would like to use the idea of multicoding. Since the new achievability scheme that we present for the canonical interference channel already involves multicoding, it requires almost no change to be applicable to the state-dependent setting. Apart from being simple, the scheme is also provably optimal for the case of the deterministic $Z$-interference channel.\n\nWe then specialize our capacity characterization for the state-dependent deterministic $Z$-interference channel to the case where the channels are governed by the linear deterministic model of \\cite{Ave11}. In the recent literature, this model has proven extremely useful for approximating the capacity of wireless networks and developing insights for the design of optimal communication strategies. We consider a linear deterministic Z-interference channel, in which the state of the channel denotes whether the interference link is present or not. When the transmitters are base-stations and the receivers are end-users, this can model the scenario where one of the transmitters is cognitive, for example, it can be a central controller that knows when the other Tx-Rx pair will be scheduled to communicate on the same frequency band. When the two Tx-Rx pairs are scheduled to communicate on the same frequency band, this gives an interference channel; when they communicate on different frequency bands each pair gets a clean channel free of interference. Moreover, the cognitive transmitter can know the schedule ahead of time, i.e., the times at which its transmission will be interfering with the second Tx-Rx pair. 
For this special case, we identify auxiliary random variables and provide an explicit expression for the capacity region. This explicit capacity characterization allows us to identify interesting properties of the optimal strategy. In particular, with single bit level for the linear deterministic channels (which would imply low to moderate SNR for the corresponding Gaussian channels), the sum rate is maximized when the interfering transmitter remains silent (transmits $0$'s) at times when it interferes with the second transmission. It then treats these symbols as stuck to $0$ and performs Gelfand-Pinsker coding. The second transmitter observes a clean channel at all times and communicates at the maximal rate of $1$ bit per channel use. This capacity characterization also reveals that when all nodes are provided with the state information the sum-capacity cannot be further improved. Thus, for this\nchannel, the sum-capacity when all nodes have state information is the same as that when only the interfering encoder\nhas state information. \n\nMotivated by wireless applications, there has been significant recent interest in state-dependent interference channels (ICs), where the state information is known only to some of the transmitters. Given the inherent difficulty of the problem, many special cases have been considered \\cite{Zha13,Goo13,Dua13a,Dua13b}, for which different coding schemes have been proposed. However, exact capacity characterizations have proven difficult. Another line of related work has been the study of cognitive state-dependent ICs \\cite{Rin11,Som08,Dua12,Kaz13}. Here, the term ``cognitive'' is usually used to mean that the cognitive transmitters know not only the state of the channel but also messages of other transmitters. Note that this assumption is significantly stronger than assuming state information at the transmitter as we do here.\n\nThe second setting we consider is when one of the transmitters has the capability to overhear the signal transmitted by the other transmitter, which can be used to induce cooperation between the two transmitters. This is different from having orthogonal communication links (or conferencing) between the encoders, as studied in \\cite{Wan11b}. Instead, overhearing exploits the natural broadcasting nature of the wireless medium to establish cooperation without requiring any dedicated resources. A variety of different models have been used to capture overhearing \\cite{Pra11,Yan11,Car12}, and are known by different names such as cribbing, source cooperation, generalized feedback, cognition etc. We use \"partial cribbing\" to model the overhearing, in which some deterministic function of the signal transmitted by the non-cognitive transmitter is available at the cognitive transmitter in a strictly causal fashion. Again, for simplicity, we focus on the case of the $Z$-interference channel, where the cross-link between the non-cognitive transmitter and its undesired receiver is weak enough to be ignored. For this setting, we develop a simple achievability scheme by combining our multicoding-based scheme with block-Markov coding and show that it is optimal for deterministic configurations.\n\nFinally, to further illustrate the point that simple schemes can be obtained for augmented scenarios, we describe two extensions which introduce even more complexity in the model. 
In the first extension, a third message is introduced in the state-dependent Z-interference channel, which is to be communicated from the interfering transmitter to the interfered receiver. The second extension combines the state-dependent Z-IC and the Z-IC with unidirectional partial cribbing. In both extensions, we are able to obtain simple optimal schemes by naturally extending the multicoding-based achievability schemes.\n\n\\subsection*{Organization}\nWe describe the models considered in this paper formally in Section~\\ref{sec:model}.\nThe alternate achievability scheme that achieves the Han-Kobayashi region is presented in Sections~\\ref{sec:outline}. Section~\\ref{sec:state} describes the results concerning the state-dependent setup and section~\\ref{sec:cribbing} describes the results concerning the cribbing setup. The two extensions are described in Section~\\ref{sec:extensions} and we end the paper with a short discussion in Section~\\ref{sec:conclude}.\n\n\\section{Model}\\label{sec:model}\n\nCapital letters, small letters and capital calligraphic letters denote random variables, realizations and alphabets respectively. The tuple $(x(1),x(2),\\dots ,x(n))$ and the set $\\{a,{a+1},\\dots ,b\\}$ are denoted by $x^n$ and $[a:b]$ respectively, and $\\mc{T}_{\\epsilon}^{(n)}$ stands for the $\\epsilon$-strongly typical set of length-$n$ sequences.\n\nWe now describe the channel models considered in this paper.\n\n\\subsection{Canonical Interference Channel}\nThe two-user discrete memoryless interference channel $p_{Y_1,Y_2|X_1,X_2}(y_1,y_2|x_1,x_2)$ is depicted in Fig.~\\ref{fig:model}. Each sender $j\\in\\{1,2\\}$ wishes to communicate a message $M_j$ to the corresponding receiver.\n\nA $(n,2^{nR_1},2^{nR_2},\\epsilon)$ code for the above channel consists of the encoding and decoding functions:\n\\begin{IEEEeqnarray*}{rCl}\nf_{j,i} & : & [1:2^{nR_j}] \\rightarrow \\mc{X}_j, \\quad j\\in\\{1,2\\}, 1\\leq i\\leq n,\\\\ \ng_j & : & \\mc{Y}_j^n \\rightarrow [1:2^{nR_j}],\\quad j\\in\\{1,2\\},\n\\end{IEEEeqnarray*}\nsuch that \n$$\\text{Pr}\\left\\{g(Y_j^n)\\neq M_j\\right\\} \\leq \\epsilon,\\quad j\\in\\{1,2\\},$$\nwhere $M_1$ and $M_2$ are assumed to be distributed uniformly in $[1:2^{nR_1}]$ and $[1:2^{nR_2}]$ respectively. A rate pair $(R_1,R_2)$ is said to be \\emph{achievable} if for every $\\epsilon > 0,$ there exists a $(n,2^{nR_1},2^{nR_2},\\epsilon)$ code for sufficiently large $n$. The capacity region is defined to be the closure of the achievable rate region.\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[scale=1.5]{model.pdf}\n\\caption{Two-User Discrete Memoryless Interference Channel (DM-IC)}\n\\label{fig:model}\n\\end{figure}\n\n\n\\subsection{State-Dependent Z-Interference Channel}\\label{subsec:model_state}\n\nThe discrete memoryless Z-interference channel $p(y_1|x_1,s)p(y_2|x_1,x_2,s)$ with discrete memoryless state $p(s)$ is depicted in Fig.~\\ref{fig:model_gen}. The states are assumed to be known noncausally at encoder 1. Each sender $j\\in\\{1,2\\}$ wishes to communicate a message $M_j$ at rate $R_j$ to the corresponding receiver. 
For this setting, a $(n,2^{nR_1},2^{nR_2},\\epsilon)$ code consists of the encoding and decoding functions:\n\\begin{IEEEeqnarray*}{rCl}\nf_{1,i}& : &[1:2^{nR_1}]\\times \\mc{S}^n \\rightarrow \\mc{X}_1, \\quad 1\\leq i\\leq n,\\\\ \nf_{2,i}& :& [1:2^{nR_2}] \\rightarrow \\mc{X}_2, \\quad 1\\leq i\\leq n,\\\\ \ng_j &: & \\mc{Y}_j^n \\rightarrow [1:2^{nR_j}],\\quad j\\in\\{1,2\\},\n\\end{IEEEeqnarray*}\nsuch that \n$$\\text{Pr}\\left\\{g(Y_j^n)\\neq M_j\\right\\} \\leq \\epsilon,\\quad j\\in\\{1,2\\}.$$ The probability of error, achievable rate pairs $(R_1,R_2)$ and the capacity region are defined in a similar manner as before.\n\n\\begin{figure}[!h]\n\\centering\n\\includegraphics[scale=1.5]{block_diag_gen.pdf}\n\\caption{The State-Dependent Z-Interference Channel (S-D Z-IC)}\n\\label{fig:model_gen}\\vspace{0mm}\n\\end{figure}\n\nThe deterministic S-D Z-IC is depicted in Fig.~\\ref{fig:model_state_det}. The channel output $Y_1$ is a deterministic function $y_1(X_1,S)$ of the channel input $X_1$ and the state $S$. At receiver 2, the channel output $Y_2$ is a deterministic function $y_2(X_2,T_1)$ of the channel input $X_2$ and the interference $T_1$, which is assumed to be a deterministic function $t_1(X_1,S)$.\nWe also assume that if $x_2$ is given, $y_2(x_2,t_1)$ is an injective function of $t_1$, i.e. there exists some function $g$ such that $t_1=g(y_2,x_2).$ \n\n\\begin{figure}[!h]\n\\centering\n\\includegraphics[scale=1.5]{block_diag_2.pdf}\n\\caption{The Injective Deterministic S-D Z-IC}\n\\label{fig:model_state_det}\n\\end{figure}\n\nWe consider a special case of the injective deterministic S-D Z-IC in detail, which is the modulo-additive S-D Z-IC, depicted in Fig.~\\ref{fig:model_modulo}. All channel inputs and outputs come from a finite alphabet $\\mc{X}=\\{0,1,\\dots ,|\\mc{X}|-1\\}$. The channel has two states. In state $S=0$, there is no interference while in state $S=1$, the cross-link is present. When the cross-link is present, the output at receiver~2 is the modulo-$\\mc{X}$ sum of $X_2$ and $X_1$. For all other cases, the output is equal to the input. We can describe this formally as: \n\\begin{equation*}\n\\begin{split}\nY_1 & = X_1,\\\\\nY_2 & = X_2 \\oplus (S\\cdot X_1).\n\\end{split}\n\\end{equation*}\nAssume that the state $S$ is i.i.d. Ber$(\\lambda)$. A generalization of this model that incorporates multiple levels is also considered subsequently.\n\n\\begin{figure}[!h]\n\\centering\n\\includegraphics[scale=1.5]{modulo.pdf}\n\\caption{The Modulo-Additive S-D Z-IC. All channel inputs and outputs take values in the same finite alphabet $\\mc{X}$. The state $S$ is Ber$(\\lambda).$}\n\\label{fig:model_modulo}\n\\end{figure}\n\n\\subsection{Z-Interference Channel with Partial Cribbing}\\label{subsec:model_crib}\nThe discrete memoryless deterministic Z-interference channel is depicted in Fig.~\\ref{fig:model_crib}. The channel output $Y_1$ is a deterministic function $y_1(X_1)$ of the channel input $X_1$. At receiver 2, the channel output $Y_2$ is a deterministic function $y_2(X_2,T_1)$ of the channel input $X_2$ and the interference $T_1$, which is assumed to be a deterministic function $t_1(X_1)$. We also assume that if $x_2$ is given, $y_2(x_2,t_1)$ is an injective function of $t_1$, i.e. there exists some function $g$ such that $t_1=g(y_2,x_2).$ Each sender $j\\in\\{1,2\\}$ wishes to communicate a message $M_j$ at rate $R_j$ to the corresponding receiver. 
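As a quick sanity check of the injectivity assumption made above, the following small Python sketch verifies that $t_1$ can be recovered from $(y_2,x_2)$ for every fixed $x_2$. The alphabet size and the modulo-additive choice of the deterministic functions, $t_1(x_1)=x_1$ and $y_2(x_2,t_1)=x_2\\oplus t_1$ (mirroring the modulo-additive channel of the previous subsection), are assumptions made only for this illustration.
\\begin{verbatim}
import itertools

A = range(4)                      # common alphabet {0,1,2,3} (illustrative size)

def t1(x1):                       # interference function t_1(X_1)
    return x1

def y2(x2, t):                    # channel output at receiver 2
    return (x2 + t) % len(A)

def g(y, x2):                     # candidate inverse: recover t_1 from (y_2, x_2)
    return (y - x2) % len(A)

# y2(x2, .) is injective in t_1 for every fixed x2  <=>  g inverts it exactly.
assert all(g(y2(x2, t1(x1)), x2) == t1(x1)
           for x1, x2 in itertools.product(A, A))
print('t1 is recoverable from (y2, x2): injectivity condition holds')
\\end{verbatim}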
\n\nWe assume that encoder 1 can overhear the signal from transmitter 2 \\emph{strictly causally}, which is modeled as partial cribbing with a delay \\cite{Asn13}. The partial cribbing signal, which is a function of $X_2$ is denoted by $Z_2$. So $X_{1i}$ is a function of $(M_1,Z_2^{i-1})$ and $X_{2i}$ is a function of $M_2$.\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[scale=1.5]{block_diag.pdf}\n\\caption{Injective Deterministic Z-Interference Channel with Unidirectional Partial Cribbing}\n\\label{fig:model_crib}\n\\end{figure}\n\nA $(n,2^{nR_1},2^{nR_2},\\epsilon)$ code for this setting consists of \n\\begin{IEEEeqnarray*}{rCl}\nf_{1,i} & : & [1:2^{nR_1}]\\times \\mc{Z}_2^{i-1} \\rightarrow \\mc{X}_1, \\quad 1\\leq i\\leq n,\\\\ \nf_{2,i} & : & [1:2^{nR_2}] \\rightarrow \\mc{X}_2, \\quad 1\\leq i\\leq n,\\\\ \ng_j & : & \\mc{Y}_j^n \\rightarrow [1:2^{nR_j}],\\quad j\\in\\{1,2\\},\n\\end{IEEEeqnarray*}\nsuch that \n$$\\text{Pr}\\left\\{g(Y_j^n)\\neq M_j\\right\\} \\leq \\epsilon,\\quad j\\in\\{1,2\\}.$$ The probability of error, achievable rate pairs $(R_1,R_2)$ and the capacity region are defined in a similar manner as before.\n\n\\section{Canonical Interference Channel}\\label{sec:outline}\n\n\\subsection{Preliminaries}\\label{sec:prelim}\nThe currently best known achievable rate region for the 2-user DM-IC was provided by Han and Kobayashi in \\cite{Han81}, using a scheme based on rate-splitting and superposition coding. An alternative achievable rate region that included the Han-Kobayashi rate region was proposed in \\cite{Cho06}, using another scheme that used rate-splitting and superposition coding. Using the terminology introduced in \\cite{Wan13}, the encoding in \\cite{Han81} can be described as employing \\emph{homogeneous} superposition coding, while that in \\cite{Cho06} can be described as employing \\emph{heterogeneous} superposition coding. It was then proved in \\cite{Cho08} that the two regions are, in fact, equivalent and given by the following compact representation (see also \\cite{Kra06,Kob07}).\n\n\\begin{thm}[Han-Kobayashi Region]\\label{thm:HK}\nA rate pair $(R_1,R_2)$ is achievable for the DM-IC $p(y_1,y_2|x_1,x_2)$ if\n\\begin{equation}\\label{eq:achreg_prelim}\n\\begin{split}\nR_1 & < I(X_1;Y_1|U_2,Q),\\\\\nR_2 & < I(X_2;Y_2|U_1,Q),\\\\\nR_1 + R_2 & < I(X_1;Y_1|U_1,U_2,Q) +I(X_2,U_1;Y_2|Q) ,\\\\\nR_1 + R_2 & < I(X_1,U_2;Y_1|U_1,Q) + I(X_2,U_1;Y_2|U_2,Q),\\\\\nR_1 + R_2 & < I(X_1,U_2;Y_1|Q) + I(X_2;Y_2|U_1,U_2,Q),\\\\\n2R_1 + R_2 & < I(X_1;Y_1|U_1,U_2,Q) + I(X_2,U_1;Y_2|U_2,Q) + I(X_1,U_2;Y_1|Q),\\\\\nR_1 + 2R_2 & < I(X_2;Y_2|U_1,U_2,Q) + I(X_1,U_2;Y_1|U_1,Q) + I(X_2,U_1;Y_2|Q),\n\\end{split}\n\\end{equation}\nfor some pmf $p(q)p(u_1,x_1|q)p(u_2,x_2|q),$ where ${|\\mc{U}_1|\\leq |\\mc{X}_1|+4}$, ${|\\mc{U}_2|\\leq |\\mc{X}_2|+4}$ and ${|\\mc{Q}|\\leq 4.}$\n\\end{thm}\n\n\\subsection{Outline of the new achievability scheme}\nWe first describe the alternative achievability scheme informally and discuss the similarities and differences with the existing achievability schemes. The later subsections describe and analyze the scheme formally.\n\nEncoder $j$, where $j\\in\\{1,2\\}$ prepares two codebooks: \n\\begin{itemize}\n\\item A transmission multicodebook\\footnote{The term ``multicodebook'' refers to the fact that there are multiple codewords corresponding to each message.}, which is a set of codewords $\\{x_j^n(\\cdot,\\cdot)\\}$ formed using the transmission random variable $X_j$. 
This set is partitioned into a number of bins (or subcodebooks), where the bin-index corresponds to the message, \n\\item A coordination codebook which is a set of codewords $\\{u_j^n(\\cdot)\\}$ formed using the auxiliary random variable $U_j$.\n\\end{itemize}\nGiven a message, one codeword $x_j^n$ from the corresponding bin in the transmission multicodebook is chosen so that it is jointly typical with some sequence $u_j^n$ in the coordination codebook. The codeword $x_j^n$ so chosen forms the transmission sequence.\n\nAt a decoder, the desired message is decoded by using joint typicality decoding, which uses the coordination codebook and the transmission multicodebook of the corresponding encoder and the coordination codebook of the other encoder. Thus, a receiver makes use of the interference via its knowledge of the coordination codebook at the interfering transmitter.\n\nFrom the above description, it can be seen that the coordination codebook does not carry any message. Its purpose is to ensure that the transmission sequence from a given bin is well-chosen, i.e. it is beneficial to the intended receiver and also the unintended receiver. To the best of our knowledge, this is the first time an auxiliary random variable (which is not the time-sharing random variable) appears in one of the best known achievability schemes without being explicitly associated with any message. \n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Achievability scheme}\\label{subsec:achHK}\nChoose a pmf $p(u_1,x_1)p(u_2,x_2)$ and $0<\\epsilon'<\\epsilon$.\n\\subsubsection*{Codebook Generation}\n\\begin{itemize}\n\\item Encoder 1 generates a coordination codebook consisting of $2^{nR_{1c}}$ codewords\\footnote{Though there is no notion of a common message or a private message in this achievability scheme, we use the subscripts $c$ and $p$ to convey if the corresponding random variables are used for decoding at all destinations or only the desired destination respectively.} $u_1^n(l_{1c}),\\;l_{1c}\\in[1:2^{nR_{1c}}]$ i.i.d. according to $\\prod_{i=1}^np(u_{1i})$. It also generates a transmission multicodebook consisting of $2^{n(R_1+R_{1p})}$ codewords $x_1^n(m_1,l_{1p}),\\;m_1\\in[1:2^{nR_1}],\\; l_{1p}\\in[1:2^{nR_{1p}}]$ i.i.d. according to $\\prod_{i=1}^np(x_{1i})$.\n\\item Similarly, encoder 2 generates a coordination codebook consisting of $2^{nR_{2c}}$ codewords $u_2^n(l_{2c}),\\;l_{2c}\\in[1:2^{nR_{2c}}]$ i.i.d. according to $\\prod_{i=1}^np(u_{2i})$. It also generates a transmission multicodebook consisting of $2^{n(R_2+R_{2p})}$ codewords $x_2^n(m_2,l_{2p}),\\;m_2\\in[1:2^{nR_2}],\\; l_{2p}\\in[1:2^{nR_{2p}}]$ i.i.d. according to $\\prod_{i=1}^np(x_{2i})$.\n\\end{itemize}\n\n\\subsubsection*{Encoding}\n\\begin{itemize}\n\\item To transmit message $m_1$, encoder 1 finds a pair $(l_{1c},l_{1p})$ such that $$(u_1^n(l_{1c}),x_1^n(m_1,l_{1p}))\\in\\mc{T}^{(n)}_{\\epsilon'}$$ and transmits $x_1^n(m_1,l_{1p})$. If it cannot find such a pair, it transmits $x_1^n(m_1,1)$.\n\\item Similarly, to transmit message $m_2$, encoder 2 finds a pair $(l_{2c},l_{2p})$ such that $$(u_2^n(l_{2c}),x_2^n(m_2,l_{2p}))\\in\\mc{T}^{(n)}_{\\epsilon'}$$ and transmits $x_2^n(m_2,l_{2p})$. If it cannot find such a pair, it transmits $x_2^n(m_2,1)$.\n\\end{itemize}\n\nThe codebook generation and encoding process are illustrated in Fig.~\\ref{fig:encoding}.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=1.5]{codebooks.pdf}\n\\vspace{1mm}\n\\caption{Codebook Generation and Encoding at Encoder~1. 
The independently generated $x_1^n$ sequences, lined up vertically in the figure, are binned into $2^{nR_1}$ bins. The independently generated coordination sequences $u_1^n$ are lined up horizontally. To transmit message $m_1$, a jointly typical pair $(x_1^n,u_1^n)$ is sought where $x_1^n$ falls into the $m_1$-th bin, and then $x_1^n$ is transmitted.}\n\\label{fig:encoding}\n\\end{figure}\n\n\\subsubsection*{Decoding}\n\\begin{itemize}\n\\item Decoder 1 finds the unique $\\hat{m}_1$ such that $$(u_1^n(l_{1c}),x_1^n(\\hat{m}_1,l_{1p}),u_2^n(l_{2c}),y_1^n)\\in\\mc{T}^{(n)}_{\\epsilon}$$ for some $(l_{1c},l_{1p},l_{2c})$. If none or more than one such $\\hat{m}_1$ are found, then decoder 1 declares error.\n\\item Decoder 2 finds the unique $\\hat{m}_2$ such that $$(u_2^n(l_{2c}),x_2^n(\\hat{m}_2,l_{2p}),u_1^n(l_{1c}),y_2^n)\\in\\mc{T}^{(n)}_{\\epsilon}$$ for some $(l_{2c},l_{2p},l_{1c})$. If none or more than one such $\\hat{m}_2$ are found, then decoder 2 declares error.\n\\end{itemize}\n\n\\subsubsection*{Discussion}\nBefore providing the formal analysis of the probability of error to show that the coding scheme described above achieves the Han-Kobayashi region, we discuss the connection between the new scheme and the scheme from \\cite{Cho06} which motivates the equivalence of their rate regions.\n\nConsider the set of codewords used at encoder 1. While this set resembles a multicodebook, it can be reduced to a standard codebook (one codeword per message) by stripping away the codewords in each bin that are not jointly typical with any of the $u_1^n$ sequences, and therefore are never used by the transmitters. In other words, after we generate the multicodebook in Fig.~\\ref{fig:encoding}, we can form a smaller codebook by only keeping one codeword per message which is jointly typical with one of the $u_1^n$ sequences (i.e., those codewords highlighted in Fig.~\\ref{fig:encoding}). Note that this reduced codebook indeed has a superposition structure. Each of the $ 2^{nR_1} $ remaining codewords $x_1^n$ is jointly typical with one of the $2^{nR_{1c}}$ $u_1^n$ codewords, and when $n$ is large there will be exactly $ 2^{n(R_1-R_{1c})} $ $x_1^n$ sequences that are typical with each $u_1^n$ sequence, i.e., these $ 2^{n(R_1-R_{1c})} $ $x_1^n$ sequences will look as if they were generated i.i.d. from $p(x_1|u_1)$. Therefore, the $u_1^n$ sequences can be indeed thought as the cloud centers in this superposition codebook and $x_1^n$'s as the satellite codewords. Therefore, our multicodebook construction can be viewed as an equivalent way to generate a superposition codebook as in \\cite{Cho08}. This reveals that both the codebook structure and the decoding in our scheme are similar to that in the Han-Kobayashi scheme and therefore the two achievable rate regions are, not surprisingly, equal.\n\nHowever, note that for broadcast channels, combining Marton coding (which employs multicoding) \\cite{Gam81} with Gelfand-Pinsker coding (which also employs multicoding) is more straightforward than combining superposition coding with Gelfand-Pinsker coding. The former has been shown to be optimal in some cases \\cite{Lap13}. Since our codebook construction for the interference channel also has the flavor of multicoding, extending this construction to setups where multicoding is required is also quite straightforward. As mentioned in the introduction, we exploit this to develop simple achievability schemes for more general setups described in later sections. 
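As a toy illustration of this construction, the following Python sketch (with an arbitrarily chosen pmf $p(u_1,x_1)$, a small blocklength, toy rates and a crude empirical-type test standing in for joint typicality) builds the transmission multicodebook and the coordination codebook at encoder~1 and carries out the encoding search described above.
\\begin{verbatim}
import itertools
import numpy as np

rng = np.random.default_rng(1)

# joint pmf p(u1, x1) on {0,1} x {0,1} (arbitrary illustrative choice)
p_u1x1 = np.array([[0.4, 0.1],
                   [0.1, 0.4]])
p_u1, p_x1 = p_u1x1.sum(axis=1), p_u1x1.sum(axis=0)

n = 40                             # toy blocklength
n_bins, n_l1p, n_l1c = 16, 64, 64  # 2^{n R1}, 2^{n R1p}, 2^{n R1c} (toy values)
eps = 0.1                          # tolerance of the empirical-type test

# transmission multicodebook x1^n(m1, l1p), generated i.i.d. ~ p(x1)
Cx = rng.choice(2, size=(n_bins, n_l1p, n), p=p_x1)
# coordination codebook u1^n(l1c), generated i.i.d. ~ p(u1)
Cu = rng.choice(2, size=(n_l1c, n), p=p_u1)

def jointly_typical(u, x):
    """Crude stand-in for (u, x) in T_eps: empirical type close to p(u1, x1)."""
    return all(abs(np.mean((u == a) & (x == b)) - p_u1x1[a, b]) <= eps
               for a, b in itertools.product(range(2), range(2)))

def encode(m1):
    """Pick a codeword from bin m1 jointly typical with some u1^n sequence."""
    for l1c, l1p in itertools.product(range(n_l1c), range(n_l1p)):
        if jointly_typical(Cu[l1c], Cx[m1, l1p]):
            return l1c, l1p
    return 0, 0                    # fallback codeword, as in the scheme

l1c, l1p = encode(m1=0)
print("bin 0: transmit x1^n(0, %d), coordinated with u1^n(%d)" % (l1p, l1c))
\\end{verbatim}
In this toy example $R_{1p}+R_{1c}\\approx 0.3$ bits per symbol exceeds $I(U_1;X_1)\\approx 0.28$ bits, which is exactly the covering condition that makes the search succeed with high probability in the error analysis below.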
\n\n\n\\subsubsection*{Probability of Error}\nDue to the symmetry of the code, the average probability of error $\\msf{P}(\\mc{E})$ is equal to $\\msf{P}(\\mc{E}|M_1,M_2)$, so we can assume $(M_1,M_2) = (1,1)$ and analyze $\\msf{P}(\\mc{E}|1,1)$. Let $(L_{1c},L_{1p},L_{2c},L_{2p})$ denote the indices chosen during encoding by encoder 1 and encoder 2. \n\nWe now define events that cover the event of error in decoding message $m_1$:\n\\begin{IEEEeqnarray*}{rCl}\n\\mc{E}_1 & \\triangleq & \\{(U_1^n(l_{1c}),X_1^n(1,l_{1p}))\\notin\\mc{T}^{(n)}_{\\epsilon'} \\;\\text{ for all } l_{1c}, l_{1p}\\}, \\\\\n\\mc{E}_2 & \\triangleq & \\{(U_1^n(L_{1c}),X_1^n(1,L_{1p}),U_2^n(L_{2c}),Y_1^n)\\notin\\mc{T}^{(n)}_{\\epsilon}\\}, \\\\\n\\mc{E}_3 & \\triangleq & \\{(U_1^n(L_{1c}),X_1^n(m_1,l_{1p}),U_2^n(L_{2c}),Y_1^n)\\in\\mc{T}^{(n)}_{\\epsilon}\\text{ for some }m_1\\neq 1, \\text{ for some } l_{1p}\\}, \\\\\n\\mc{E}_4 & \\triangleq & \\{(U_1^n(L_{1c}),X_1^n(m_1,l_{1p}),U_2^n(l_{2c}),Y_1^n)\\in\\mc{T}^{(n)}_{\\epsilon} \\text{ for some }m_1\\neq 1, \\text{ for some } l_{1p},l_{2c}\\} ,\\\\\n\\mc{E}_5 & \\triangleq & \\{(U_1^n(l_{1c}),X_1^n(m_1,l_{1p}),U_2^n(L_{2c}),Y_1^n)\\in\\mc{T}^{(n)}_{\\epsilon} \\text{ for some }m_1\\neq 1, \\text{ for some } l_{1p},l_{1c}\\} ,\\\\\n\\mc{E}_6 & \\triangleq & \\{(U_1^n(l_{1c}),X_1^n(m_1,l_{1p}),U_2^n(l_{2c}),Y_1^n)\\in\\mc{T}^{(n)}_{\\epsilon} \\text{ for some }m_1\\neq 1, \\text{ for some } l_{1c},l_{1p},l_{2c}\\}.\n\\end{IEEEeqnarray*}\n\nConsider also the event $\\mc{E}'_1$, analogous to $\\mc{E}_1$, which is defined as follows.\n\\begin{IEEEeqnarray*}{rCl}\n\\mc{E}'_1 & \\triangleq & \\{(U_2^n(l_{2c}),X_2^n(1,l_{2p}))\\notin\\mc{T}^{(n)}_{\\epsilon'} \\;\\text{ for all } l_{2c}, l_{2p}\\}. \\label{eq:E'1}\n\\end{IEEEeqnarray*}\n\nSince an error for $m_1$ occurs only if at least one of the above events occur, we use the union bound to get the following upper bound on the average probability of error in decoding $m_1$:\n$$ \\msf{P}(\\mc{E}_1) + \\msf{P}(\\mc{E}'_1)+ \\msf{P}(\\mc{E}_2\\cap\\mc{E}_1^c\\cap\\mc{E}'^{c}_1) + \\msf{P}(\\mc{E}_3) + \\msf{P}(\\mc{E}_4) + \\msf{P}(\\mc{E}_5) + \\msf{P}(\\mc{E}_6).$$\n\nBy the mutual covering lemma \\cite[Chap. 8]{Gam12}, $\\msf{P}(\\mc{E}_1)\\rightarrow 0$ as $n\\rightarrow\\infty$ if \n\\begin{IEEEeqnarray}{rCl}\nR_{1p} + R_{1c} & > & I(U_1;X_1) + \\delta(\\epsilon'),\\label{eq:ach1}\n\\end{IEEEeqnarray}\nwhere $\\delta(\\epsilon')\\rightarrow 0$ as $\\epsilon'\\rightarrow 0.$\n\nSimilarly, we get that \n$\\msf{P}(\\mc{E}'_1)\\rightarrow 0$ as $n\\rightarrow\\infty$ if \n\\begin{IEEEeqnarray}{rCl}\nR_{2p} + R_{2c} & > & I(U_2;X_2) + \\delta(\\epsilon').\\label{eq:ach1'}\n\\end{IEEEeqnarray}\n\nBy the conditional typicality lemma, $\\msf{P}(\\mc{E}_2\\cap\\mc{E}_1^c\\cap\\mc{E}'^{c}_1)$ tends to zero as $n\\rightarrow\\infty$.\n\nFor $\\msf{P}(\\mc{E}_3)\\rightarrow 0$, we can use the packing lemma from \\cite[Ch. 
3]{Gam12} to get the condition\n\\begin{equation}\\label{eq:ach2}\nR_1 + R_{1p} < I(X_1;U_1,U_2,Y_1) - \\delta(\\epsilon),\n\\end{equation}where $\\delta(\\epsilon)\\rightarrow 0$ as $\\epsilon\\rightarrow 0.$\n\nFor $\\msf{P}(\\mc{E}_4)\\rightarrow 0$, we can again use the packing lemma to get the condition\n\\begin{equation}\\label{eq:ach3}\nR_1 + R_{1p} + R_{2c} < I(X_1,U_2;U_1,Y_1)- \\delta(\\epsilon).\n\\end{equation}\n\nFor $\\msf{P}(\\mc{E}_5)\\rightarrow 0$, we apply the multivariate packing lemma from the Appendix as shown in \\eqref{eq:multipack_2} to get the condition\n\\begin{IEEEeqnarray}{LCl}\nR_1 + R_{1p} + R_{1c} < I(U_1;X_1) + I(U_1,X_1;U_2,Y_1) - \\delta(\\epsilon).\\label{eq:ach4}\n\\end{IEEEeqnarray} \n\nFinally, for $\\msf{P}(\\mc{E}_6)\\rightarrow 0$ as $n\\rightarrow\\infty$, another application of the multivariate packing lemma as shown in \\eqref{eq:multipack_3} gives the condition\n\\begin{IEEEeqnarray}{rCl}\nR_1 + R_{1p} + R_{1c} + R_{2c} & < & I(U_1;X_1) + I(U_2;Y_1) + I(U_1,X_1;U_2,Y_1)-\\delta(\\epsilon).\\label{eq:ach5}\n\\end{IEEEeqnarray}\n\nA similar analysis leads to the following additional conditions for the probability of error in decoding $m_2$ to vanish as $n\\rightarrow\\infty$.\n\\begin{IEEEeqnarray}{rCl}\nR_2 + R_{2p} & < & I(X_2;U_2,U_1,Y_2) - \\delta(\\epsilon),\\label{eq:ach7}\\\\\nR_2 + R_{2p} + R_{1c} & < & I(X_2,U_1;U_2,Y_2)- \\delta(\\epsilon),\\label{eq:ach8}\\\\\nR_2 + R_{2p} + R_{2c} & < & I(U_2;X_2) + I(U_2,X_2;U_1,Y_2)- \\delta(\\epsilon),\\label{eq:ach9}\\\\\nR_2 + R_{2p} + R_{2c} + R_{1c} & < & I(U_2;X_2) + I(U_1;Y_2) + I(U_2,X_2;U_1,Y_2)-\\delta(\\epsilon).\\label{eq:ach10}\n\\end{IEEEeqnarray}\n\nHence the probability of error vanishes as $n\\rightarrow\\infty$ if the conditions \\eqref{eq:ach1}-\\eqref{eq:ach10} are satisfied.\nFor the sake of brevity, let us first denote the RHS of the conditions \\eqref{eq:ach1}-\\eqref{eq:ach10} by $a,b,c,d,e,f,g,h,i,j$ respectively (ignoring the $\\delta(\\epsilon')$ and $\\delta(\\epsilon)$ terms). \n\nWe then note the following relations among these terms which can be proved using the chain rule of mutual information, the Markov chains $U_1-X_1-(U_2,X_2,Y_1,Y_2)$ and $U_2-X_2-(U_1,X_1,Y_1,Y_2)$ and the independence of $(U_1,X_1)$ and $(U_2,X_2)$\n\\begin{equation}\\label{eq:relFM}\n\\begin{gathered}\ne-a \\leq \\min\\{c,d\\},\\\\\nf -a \\leq d \\leq f,\\\\\nc \\leq e \\leq f,\\\\\ni-b \\leq \\min\\{g,h\\},\\\\\nj-b \\leq h \\leq j,\\\\\ng \\leq i \\leq j.\n\\end{gathered}\n\\end{equation}\n\nWe now employ Fourier-Motzkin elimination on the conditions \\eqref{eq:ach1}-\\eqref{eq:ach10} and $R_{1c},R_{1p},R_{2c},R_{2p}\\geq 0$ to eliminate $R_{1c},R_{1p},R_{2c},R_{2p}$. The set of relations \\eqref{eq:relFM} can be used to simplify this task by recognizing redundant constraints. At the end, we get the following achievable region:\n\\begin{equation}\\label{eq:achreg1}\n\\begin{split}\nR_1 & < e-a,\\\\\nR_2 & < i-b,\\\\\nR_1 + R_2 & < c + j-a-b,\\\\\nR_1 + R_2 & < d + h-a-b,\\\\\nR_1 + R_2 & < f + g-a-b,\\\\\n2R_1 + R_2 & < c+h+f-2a-b,\\\\\nR_1+2R_2 & < d +g+j-a-2b.\n\\end{split}\n\\end{equation}\n\nUsing the same facts as those used to prove \\eqref{eq:relFM}, we can show that the above region is the same as the Han-Kobayashi region. 
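The relations in \\eqref{eq:relFM} can also be verified numerically. The following Python sketch (binary alphabets and a randomly drawn pmf and channel; all choices are arbitrary and only for illustration) samples a distribution of the product form $p(u_1,x_1)p(u_2,x_2)$ together with a channel $p(y_1,y_2|x_1,x_2)$, evaluates the quantities $a,\\dots,j$ defined above, and checks the inequalities.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def rand_pmf(shape):
    p = rng.random(shape)
    return p / p.sum()

# random p(u1,x1), p(u2,x2) and a random channel p(y1,y2|x1,x2), all binary
p_u1x1 = rand_pmf((2, 2))
p_u2x2 = rand_pmf((2, 2))
chan = rng.random((2, 2, 2, 2))
chan /= chan.sum(axis=(2, 3), keepdims=True)       # normalize over (y1, y2)

# joint pmf over (U1, X1, U2, X2, Y1, Y2), axes 0..5
P = np.einsum('ab,cd,bdef->abcdef', p_u1x1, p_u2x2, chan)
AX = dict(U1=0, X1=1, U2=2, X2=3, Y1=4, Y2=5)

def H(*names):
    keep = tuple(sorted(AX[v] for v in names))
    marg = P.sum(axis=tuple(k for k in range(6) if k not in keep)).ravel()
    marg = marg[marg > 0]
    return -(marg * np.log2(marg)).sum()

def I(A, B):                                       # mutual information I(A;B)
    return H(*A) + H(*B) - H(*(A + B))

a = I(('U1',), ('X1',))
b = I(('U2',), ('X2',))
c = I(('X1',), ('U1', 'U2', 'Y1'))
d = I(('X1', 'U2'), ('U1', 'Y1'))
e = a + I(('U1', 'X1'), ('U2', 'Y1'))
f = a + I(('U2',), ('Y1',)) + I(('U1', 'X1'), ('U2', 'Y1'))
g = I(('X2',), ('U1', 'U2', 'Y2'))
h = I(('X2', 'U1'), ('U2', 'Y2'))
i = b + I(('U2', 'X2'), ('U1', 'Y2'))
j = b + I(('U1',), ('Y2',)) + I(('U2', 'X2'), ('U1', 'Y2'))

tol = 1e-9
assert e - a <= min(c, d) + tol
assert f - a <= d + tol and d <= f + tol
assert c <= e + tol and e <= f + tol
assert i - b <= min(g, h) + tol
assert j - b <= h + tol and h <= j + tol
assert g <= i + tol and i <= j + tol
print("relations in (relFM) hold for this random instance")
\\end{verbatim}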
For the sake of completeness, we show this explicitly.\n\\begin{itemize}\n\\item Consider the upper bound on $R_1$:\n\\begin{IEEEeqnarray}{rCl}\ne-a & = & I(U_1,X_1;U_2,Y_1)\\nonumber\\\\\n& \\stackrel{(a)}{=} & I(X_1;U_2,Y_1)\\nonumber\\\\\n& \\stackrel{(b)}{=} & I(X_1;Y_1|U_2),\\label{eq:achreg2}\n\\end{IEEEeqnarray}where step $(a)$ follows since $U_1-X_1-(U_2,Y_1)$ is a Markov chain, and step $(b)$ follows since $X_1$ is independent of $U_2$.\n\\item Similarly, \\begin{equation}\\label{eq:achreg3}i-b= I(X_2;Y_2|U_1).\\end{equation}\n\\item Consider the first upper bound on the sum-rate ${c+j-a-b}$:\n\\begin{IEEEeqnarray}{lCl}\nc+j-a-b \\nonumber\\\\\n = I(X_1;U_1,U_2,Y_1) + I(U_2;X_2) + I(U_1;Y_2) \\nonumber\\\\\n \\quad\\quad +\\> I(U_2,X_2;U_1,Y_2)- I(U_2;X_2)-I(U_1;X_1)\\nonumber\\\\\n \\stackrel{(a)}{=} I(X_1;U_2,Y_1|U_1) + I(U_1;Y_2) + I(U_2,X_2;U_1,Y_2)\\nonumber\\\\\n \\stackrel{(b)}{=} I(X_1;U_2,Y_1|U_1) + I(U_1;Y_2) + I(X_2;U_1,Y_2)\\nonumber\\\\\n \\stackrel{(c)}{=} I(X_1;U_2,Y_1|U_1) + I(U_1;Y_2) + I(X_2;Y_2|U_1)\\nonumber\\\\\n \\stackrel{(d)}{=} I(X_1;Y_1|U_1,U_2) + I(X_2,U_1;Y_2),\\label{eq:achreg4}\n\\end{IEEEeqnarray}where step $(a)$ follows by the chain rule of mutual information, step $(b)$ follows by the Markov chain $U_2-X_2-(U_1,Y_2)$, step $(c)$ follows since $U_1$ and $X_2$ are independent and step $(d)$ follows by the independence of $U_2$ and $(U_1,X_1)$. \n\\item By similar steps, $f+g-a-b =$ \\begin{equation}\\label{eq:achreg5}I(X_1,U_2;Y_1) + I(X_2;Y_2|U_1,U_2).\\end{equation}\n\\item The remaining upper-bound on the sum-rate $d+h-a-b$ can be simplified as follows: \\begin{IEEEeqnarray}{lCl}\nd+h-a-b \\nonumber\\\\\n= I(X_1,U_2;U_1,Y_1) + I(X_2,U_1;U_2,Y_2) \\nonumber\\\\\n\\quad -\\> I(U_1;X_1) - I(U_2;X_2)\\nonumber\\\\\n = I(X_1,U_2;Y_1|U_1) + I(X_2,U_1;Y_2|U_2),\\label{eq:achreg6}\n\\end{IEEEeqnarray}which follows by the chain rule of mutual information and the independence of $(U_1,X_1)$ and $(U_2,X_2)$.\n\\item The upper bound on $2R_1+R_2$ can be simplified as follows:\n\\begin{IEEEeqnarray}{lCl}\nc+h+f-2a-b \\nonumber\\\\\n= I(X_1;U_1,U_2,Y_1) + I(X_2,U_1;U_2,Y_2) + I(U_1,X_1) \\nonumber\\\\\n\\quad +\\> I(U_2;Y_1) + I(U_1,X_1;U_2,Y_1) - 2I(U_1;X_1) - I(U_2;X_2)\\nonumber\\\\\n \\stackrel{(a)}{=} I(X_1;U_2,Y_1|U_1) + I(X_2,U_1;Y_2|U_2) + I(U_2;Y_1) + I(U_1,X_1;U_2,Y_1)\\nonumber\\\\\n \\stackrel{(b)}{=} I(X_1;U_2,Y_1|U_1) + I(X_2,U_1;Y_2|U_2) + I(U_2;Y_1) + I(X_1;Y_1|U_2)\\nonumber\\\\\n \\stackrel{(c)}{=} I(X_1;Y_1|U_1,U_2) + I(X_2,U_1;Y_2|U_2) + I(X_1,U_2;Y_1),\\label{eq:achreg7}\n\\end{IEEEeqnarray}where step $(a)$ holds by the chain rule of mutual information and the independence of $U_1$ and $(U_2,X_2)$, step $(b)$ follows by $U_1-X_1-(U_2,Y_1)$ and the independence of $X_1$ and $U_2$, and step $(c)$ follows by the chain rule of mutual information and the independence of $U_2$ and $(U_1,X_1)$.\n\\item Finally, $d+g+j-a-2b$ can be similarly shown to be equal to \\begin{equation}\\label{eq:achreg8} I(X_2;Y_2|U_1,U_2) + I(X_1,U_2;Y_1|U_1) + I(X_2,U_1;Y_2).\\end{equation}\n\\end{itemize}\n\nFrom \\eqref{eq:achreg1}-\\eqref{eq:achreg8} and including a time-sharing random variable $Q$, we get that the following region is achievable:\n\\begin{equation}\\label{eq:achreg}\n\\begin{split}\nR_1 & < I(X_1;Y_1|U_2,Q),\\\\\nR_2 & < I(X_2;Y_2|U_1,Q),\\\\\nR_1 + R_2 & < I(X_1;Y_1|U_1,U_2,Q) +I(X_2,U_1;Y_2|Q) ,\\\\\nR_1 + R_2 & < I(X_1,U_2;Y_1|U_1,Q) + I(X_2,U_1;Y_2|U_2,Q),\\\\\nR_1 + R_2 & < I(X_1,U_2;Y_1|Q) + I(X_2;Y_2|U_1,U_2,Q),\\\\\n2R_1 + R_2 & < 
I(X_1;Y_1|U_1,U_2,Q) + I(X_2,U_1;Y_2|U_2,Q) + I(X_1,U_2;Y_1|Q),\\\\\nR_1 + 2R_2 & < I(X_2;Y_2|U_1,U_2,Q) + I(X_1,U_2;Y_1|U_1,Q) + I(X_2,U_1;Y_2|Q),\n\\end{split}\n\\end{equation}\nfor pmf $p(q)p(u_1,x_1|q)p(u_2,x_2|q).$ This region is identical to the region in \\eqref{eq:achreg_prelim}.\\hfill\\IEEEQED\n\n\n\n\\section{State-dependent Interference channels}\\label{sec:state}\nIn this section, we focus on the particular setup of the state-dependent Z-interference channel (S-D Z-IC) with noncausal state information at the interfering transmitter, as depicted in Fig.~\\ref{fig:model_gen}. We provide a simple achievability scheme for this setup, that is obtained from the alternative achievability scheme for the general interference channel. This scheme is shown to be optimal for the deterministic case. The auxiliary random variable used for encoding at the interfering transmitter now implicitly captures some part of the message as well as some part of the state sequence realization. \nThe achievability scheme can also be viewed as a generalization of the schemes presented in \\cite{Cad09} and \\cite{Dua13b}.\n\n\n\nAfter characterizing the capacity region of the deterministic S-D Z-IC, we investigate a special case in detail: the modulo-additive S-D Z-IC. The modulo-additive channel is motivated by the linear deterministic model which has gained popularity over the recent years for studying wireless networks \\cite{Ave11}. For this case (which can be thought of as a linear deterministic model with only one \\emph{bit level}), we obtain an explicit description of the capacity region and furthermore, show that the capacity region is also achieved by the standard Gelfand-Pinsker coding over the first link and treating interference as noise over the second link. Following this, the modulo-additive S-D Z-IC with multiple levels is considered and some discussion is provided about the capacity region and the performance of simple achievability schemes.\n\nTo summarize, this section contains the following contributions:\n\\begin{itemize}\n\\item An achievable rate region for the S-D Z-IC,\n\\item Capacity region of the injective deterministic S-D Z-IC,\n\\item Modulo-additive S-D Z-IC: optimality of treating interference-as-noise and other properties.\n\\end{itemize}\n\n\n\n\n\n\\subsection{Results for the State-Dependent Channel}\\label{subsec:main_res_state}\n\nThe following theorem provides an inner bound to the capacity region of the S-D Z-IC in Fig.~\\ref{fig:model_gen}.\n\\begin{thm}\\label{thm:gen_ach}\nA rate pair $(R_1,R_2)$ is achievable for the channel in Fig.~\\ref{fig:model_gen} if\n\\begin{equation}\\label{eq:gen_ach}\n\\begin{split}\nR_1 & < I(U;Y_1|Q)-I(U;S|Q),\\\\\nR_2 & < I(X_2;Y_2|V,Q),\\\\\nR_2 & < I(V,X_2;Y_2|Q) - I(V;S|Q),\\\\\nR_1 + R_2 & < I(U;Y_1|Q)+ I(V,X_2;Y_2|Q)\\\\\n & \\quad\\quad -I(U;S|Q)- I(U,S;V|Q),\n\\end{split}\n\\end{equation}for some pmf $p(q)p(u,v|s,q)p(x_1|u,v,s,q)p(x_2|q).$\n\\end{thm}\n\n\nFor the injective deterministic S-D Z-IC, we can identify natural choices for the auxiliary random variables in Theorem~\\ref{thm:gen_ach} that, in fact, yield the capacity region. 
This result is stated in the following theorem.\n\n\\begin{thm}\\label{thm:cap}\nThe capacity region of the injective deterministic S-D Z-IC in Fig.~\\ref{fig:model_state_det} is the set of rate pairs $(R_1,R_2)$ that satisfy\n\\begin{equation}\\label{eq:cap}\n\\begin{split}\nR_1 & \\leq H(Y_1|S,Q),\\\\\nR_2 & \\leq H(Y_2|T_1,Q),\\\\\nR_2 & \\leq H(Y_2|Q) - I(T_1;S|Q),\\\\\nR_1 + R_2 & \\leq H(Y_1|T_1,S,Q)+H(Y_2|Q)-I(T_1;S|Q),\n\\end{split}\n\\end{equation}\nfor some pmf $p(q)p(x_1|s,q)p(x_2|q),$ where $|\\mc{Q}|\\leq 4$.\n\\end{thm}\n\n\n\n\\begin{remark} Note that the capacity region remains unchanged even if the first receiver is provided with the state information. The proof of this theorem is presented in subsection~\\ref{subsec:proof_thm_cap}.\n\\end{remark}\n\n\n\n\\subsection{Proof of Theorem~\\ref{thm:gen_ach}}\\label{subsec:proofthm1}\nFix $p(u,v|s)p(x_1|u,v,s)p(x_2)$ and choose $0<\\epsilon'<\\epsilon$. \n\n\\subsubsection*{Codebook Generation}\n\\begin{itemize}\n\\item Encoder 2 generates $2^{nR_2}$ codewords $x_2^n(m_2), m_2\\in[1:2^{nR_2}]$ i.i.d. according to $p(x_2)$.\n\\item Encoder 1 generates $2^{n(R_1+R_1')}$ codewords $u^n(m_1,l_1)$ i.i.d. according to $p(u)$, where $m_1\\in[1:2^{nR_1}]$ and $l_1\\in[1:2^{nR_1'}]$. Encoder 1 also generates $2^{nR_2'}$ codewords $v^n(l_2), l_2\\in[1:2^{nR_2'}]$ i.i.d. according to $p(v)$.\n\\end{itemize}\n\n\\subsubsection*{Encoding}\n\\begin{itemize}\n\\item To transmit message $m_2$, encoder~2 transmits $x_2^n(m_2)$.\n\\item Assume that the message to be transmitted by encoder~1 is $m_1$. After observing $s^n$, it finds a pair $(l_1,l_2)$ such that $(u^n(m_1,l_1),v^n(l_2),s^n)\\in\\mc{T}^{(n)}_{\\epsilon'}$. Then it transmits $x_1^n$, which is generated i.i.d. according to $p(x_1|u,v,s)$.\n\\end{itemize}\n\n\\subsubsection*{Decoding}\n\\begin{itemize}\n\\item Decoder 1 finds a unique $\\hat{m}_1$ such that $(u^n(\\hat{m}_1,l_1),y_1^n)\\in\\mc{T}^{(n)}_{\\epsilon}$ for some $l_1$.\n\\item Decoder 2 finds a unique $\\hat{m}_2$ such that $(x_2^n(\\hat{m}_2),v^n(l_2),y_2^n)\\in\\mc{T}_\\epsilon^{(n)}$ for some $l_2$.\n\\end{itemize}\n\n\\subsubsection*{Probability of Error}\nDue to the symmetry of the code, the average probability of error $\\msf{P}(\\mc{E})$ is equal to $\\msf{P}(\\mc{E}|M_1,M_2)$, so we can assume $(M_1,M_2) = (1,1)$ and analyze $\\msf{P}(\\mc{E}|1,1)$. Let $(L_1,L_2)$ denote the pair of indices chosen by encoder 1 such that $(U^n(1,L_1),V^n(L_2),S^n)\\in\\mc{T}^n_{\\epsilon'}$. 
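To illustrate the covering step that produces the pair $(L_1,L_2)$, the following toy Python sketch (with an arbitrary choice of $p(u,v|s)$, a small blocklength and an empirical-type test in place of $\\mc{T}^{(n)}_{\\epsilon'}$) generates the two codebooks at encoder~1 and performs the same joint search over $(l_1,l_2)$.
\\begin{verbatim}
import itertools
import numpy as np

rng = np.random.default_rng(3)

n = 12                   # toy blocklength
lam, q = 0.5, 0.9        # P(S=1); P(U=s|S=s) = P(V=s|S=s), U,V cond. indep. given S
n_l1, n_l2 = 128, 128    # 2^{n R1'}, 2^{n R2'} (toy values)
eps = 0.2                # tolerance of the empirical-type test

s = (rng.random(n) < lam).astype(int)     # state sequence seen by encoder 1
U = rng.integers(0, 2, size=(n_l1, n))    # u^n(1, l1) i.i.d. ~ p(u) = Ber(1/2)
V = rng.integers(0, 2, size=(n_l2, n))    # v^n(l2)    i.i.d. ~ p(v) = Ber(1/2)

# target joint pmf p(u, v, s) = p(s) p(u|s) p(v|s)
p = np.zeros((2, 2, 2))
for u, v, sv in itertools.product(range(2), repeat=3):
    p[u, v, sv] = ((lam if sv else 1 - lam)
                   * (q if u == sv else 1 - q)
                   * (q if v == sv else 1 - q))

def typical(un, vn):
    return all(abs(np.mean((un == a) & (vn == b) & (s == c)) - p[a, b, c]) <= eps
               for a, b, c in itertools.product(range(2), repeat=3))

pair = next(((l1, l2) for l1, l2 in itertools.product(range(n_l1), range(n_l2))
             if typical(U[l1], V[l2])), None)
print("covering step found (L1, L2) =", pair)
\\end{verbatim}
The covering conditions derived next quantify when such a search succeeds with high probability.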
\n\nWe now define events that cover the error event:\n\\begin{IEEEeqnarray*}{rCl}\n\\mc{E}_1 & \\triangleq & \\{(U^n(1,l_1),V^n(l_2),S^n)\\notin\\mc{T}^{(n)}_{\\epsilon'} \\text{ for all } l_1, l_2\\}, \\label{eq:E1}\\\\\n\\mc{E}_2 & \\triangleq & \\{(U^n(1,L_1),Y_1^n)\\notin\\mc{T}^{(n)}_{\\epsilon}\\}, \\label{eq:E2}\\\\\n\\mc{E}_3 & \\triangleq & \\{(U^n(m_1,l_1),Y_1^n)\\in\\mc{T}^{(n)}_{\\epsilon} \\text{ for some }m_1\\neq 1, l_1\\}, \\label{eq:E3}\\\\\n\\mc{E}_4 & \\triangleq & \\{(X_2^n(1),V^n(L_2),Y_2^n)\\notin\\mc{T}^{(n)}_{\\epsilon}\\} \\label{eq:E4},\\\\\n\\mc{E}_5 & \\triangleq & \\{(X_2^n(m_2),V^n(l_2),Y_2^n)\\in\\mc{T}^{(n)}_{\\epsilon} \\text{ for some }m_2\\neq 1, l_2\\} \\label{eq:E5}.\n\\end{IEEEeqnarray*}\n\nSince an error occurs only if at least one of the above events occur, we have the following upper bound on the average probability of error:\n$$\\msf{P}(\\mc{E}) \\leq \\msf{P}(\\mc{E}_1) + \\msf{P}(\\mc{E}_2\\cap\\mc{E}_1^c) + \\msf{P}(\\mc{E}_3) + \\msf{P}(\\mc{E}_4\\cap\\mc{E}_1^c) + \\msf{P}(\\mc{E}_5).$$\n\nSimilar to the proof of the mutual covering lemma \\cite[Ch. 8]{Gam12}, we can show that $\\msf{P}(\\mc{E}_1)\\rightarrow 0$ as $n\\rightarrow\\infty$ if \n\\begin{IEEEeqnarray}{rCl}\nR_1' & > & I(U;S) + \\delta(\\epsilon'),\\label{eq:ach1_state}\\\\\nR_2' & > & I(V;S) + \\delta(\\epsilon'),\\label{eq:ach2_state}\\\\\nR_1' + R_2' & > & I(U;S) + I(U,S;V) + \\delta(\\epsilon')\\label{eq:ach3_state},\n\\end{IEEEeqnarray}\nwhere $\\delta(\\epsilon')\\rightarrow 0$ as $\\epsilon'\\rightarrow 0.$\n\nBy the conditional typicality lemma \\cite[Ch. 2]{Gam12}, $\\msf{P}(\\mc{E}_2\\cap\\mc{E}_1^c)$ and $\\msf{P}(\\mc{E}_4\\cap\\mc{E}_1^c)$ both tend to zero as $n\\rightarrow\\infty$.\n\nBy the packing lemma \\cite[Ch. 3]{Gam12}, for $\\msf{P}(\\mc{E}_3)\\rightarrow 0$, we require\n\\begin{equation}\\label{eq:ach4_state} R_1+R_1' < I(U;Y_1) - \\delta(\\epsilon),\\end{equation}\nand for $\\msf{P}(\\mc{E}_5)\\rightarrow 0$, we require\n\\begin{IEEEeqnarray}{rCl}\nR_2 & < & I(X_2;Y_2|V) - \\delta(\\epsilon),\\label{eq:ach5_state}\\\\\nR_2 + R_2' & < & I(V,X_2;Y_2) - \\delta(\\epsilon),\\label{eq:ach6_state}\n\\end{IEEEeqnarray}\nwhere $\\delta(\\epsilon)\\rightarrow 0$ as $\\epsilon\\rightarrow 0.$ Hence, $\\msf{P}(\\mc{E})\\rightarrow 0$ if \\eqref{eq:ach1_state}, \\eqref{eq:ach2_state}, \\eqref{eq:ach3_state}, \\eqref{eq:ach4_state}, \\eqref{eq:ach5_state}, \\eqref{eq:ach6_state} are satisfied. \nAllowing coded-time sharing with a time-sharing random variable $Q$ and eliminating $R_1',R_2'$ via Fourier-Motzkin elimination, we obtain the region \\eqref{eq:gen_ach}.\\hfill\\IEEEQED\n\n\n\\subsection{Proof of Theorem~\\ref{thm:cap}}\\label{subsec:proof_thm_cap}\nAchievability follows from Theorem~\\ref{thm:gen_ach} by choosing $U=Y_1$ and $V=T_1$. These choices are valid since encoder 1 knows $(M_1,S^n)$, which determines $T_1^n$ and $Y_1^n$. We now prove the converse.\n\nGiven a sequence of codes that achieves reliable communication (i.e. 
$P_e^{(n)}\\rightarrow 0$ as $n\\rightarrow\\infty$) at rates $(R_1,R_2)$, we have, by Fano's inequality:\n\\begin{IEEEeqnarray*}{c}\nH(M_1|Y_1^n) \\leq n\\epsilon_n,\\\\\nH(M_2|Y_2^n) \\leq n\\epsilon_n,\n\\end{IEEEeqnarray*}\nwhere $\\epsilon_n\\rightarrow 0$ as $n\\rightarrow\\infty.$\n\nUsing these, we can establish an upper bound on $R_1$ as follows,\n\\begin{IEEEeqnarray}{rCl}\nnR_1 & = & H(M_1) \\nonumber\\\\\n& = & H(M_1|S^n) \\nonumber\\\\\n& \\leq & I(M_1;Y_1^n|S^n) + n\\epsilon_n \\nonumber\\\\\n& \\leq & H(Y_1^n|S^n) + n\\epsilon_n \\nonumber\\\\\n& \\leq & \\sum_{i=1}^n H(Y_{1i}|S_i) + n\\epsilon_n. \\nonumber\n\\end{IEEEeqnarray}\n\nA simple upper bound on $R_2$ is established in the following:\n\\begin{IEEEeqnarray}{rCl}\nnR_2 & = & H(M_2)\\nonumber\\\\\n& = & H(M_2|T_1^n)\\nonumber\\\\\n& \\leq & I(M_2;Y_2^n|T_1^n) + n\\epsilon_n\\nonumber\\\\\n& \\leq & H(Y_2^n|T_1^n) + n\\epsilon_n\\nonumber\\\\\n& \\leq & \\sum_{i=1}^n H(Y_{2i}|T_{1i}) + n\\epsilon_n.\\nonumber\n\\end{IEEEeqnarray}\n\n\nFor the second upper bound on $R_2$, consider the following:\n\\begin{IEEEeqnarray}{rCl}\nnR_2 & = & H(M_2)\\nonumber\\\\\n& = & H(M_2) + H(Y_2^n|M_2) - H(Y_2^n|M_2) \\nonumber\\\\\n& = & H(Y_2^n) + H(M_2|Y_2^n) - H(Y_2^n|M_2) \\nonumber\\\\\n& \\leq & \\sum_{i=1}^nH(Y_{2i}) + n\\epsilon_n - H(Y_2^n|M_2)\\nonumber\\\\\n& \\stackrel{(a)}{=} & \\sum_{i=1}^nH(Y_{2i}) + n\\epsilon_n - H(T_1^n|M_2)\\nonumber\\\\\n& \\stackrel{(b)}{=} & \\sum_{i=1}^nH(Y_{2i}) + n\\epsilon_n - H(T_1^n)\\nonumber\\\\\n& \\leq & \\sum_{i=1}^nH(Y_{2i}) + n\\epsilon_n - I(T_1^n;S^n)\\nonumber\\\\\n& = & \\sum_{i=1}^nH(Y_{2i}) + n\\epsilon_n - H(S^n) + H(T_1^n|S^n)\\nonumber\\\\\n& \\leq & \\sum_{i=1}^nH(Y_{2i}) + n\\epsilon_n - H(S^n) + \\sum_{i=1}^nH(T_{1i}|S_i)\\nonumber\\\\\n& \\stackrel{(c)}{=} & \\sum_{i=1}^nH(Y_{2i}) + n\\epsilon_n - \\sum_{i=1}^nH(S_i) + \\sum_{i=1}^nH(T_{1i}|S_i)\\nonumber\\\\\n& = & \\sum_{i=1}^nH(Y_{2i}) + n\\epsilon_n - \\sum_{i=1}^nI(T_{1i};S_i)\\nonumber\\\\\n\\end{IEEEeqnarray}\nwhere step $(a)$ follows by the injectivity property, step $(b)$ follows because $T_1^n$ is independent of $M_2$, and step $(c)$ follows because $S^n$ is an i.i.d. sequence.\n\n\nWe now establish an upper bound on the sum-rate\n\n \\begin{IEEEeqnarray}{rL}\nn(R_1+R_2) & = H(M_1|S^n) + H(M_2)\\nonumber\\\\\n& \\leq I(M_1;T_1^n,Y_1^n|S^n) + n\\epsilon_n + H(Y_2^n) + H(M_2|Y_2^n) - H(Y_2^n|M_2)\\nonumber\\\\\n& \\leq I(M_1;T_1^n,Y_1^n|S^n) + n\\epsilon_n + H(Y_2^n) + n\\epsilon_n - H(Y_2^n|M_2)\\nonumber\\\\\n& \\stackrel{(a)}{\\leq} H(T_1^n,Y_1^n|S^n) + H(Y_2^n) - H(T_1^n|M_2)+ 2n\\epsilon_n\\nonumber\\\\\n& \\stackrel{(b)}{=} H(T_1^n,Y_1^n|S^n) + H(Y_2^n) - H(T_1^n)+ 2n\\epsilon_n\\nonumber\\\\\n& = H(Y_1^n|S^n,T_1^n) + H(Y_2^n) - I(T_1^n;S^n)+ 2n\\epsilon_n\\nonumber\\\\\n& \\stackrel{(c)}{\\leq} \\sum_{i=1}^n H(Y_{1i}|S_i,T_{1i}) + \\sum_{i=1}^n H(Y_{2i}) - \\sum_{i=1}^nI(T_{1i};S_i)+ 2n\\epsilon_n\\nonumber\n\\end{IEEEeqnarray}\nwhere as before, steps $(a)$, $(b)$ and $(c)$ follow because of injectivity property, independence of $T_1^n$ and $M_2$, and i.i.d. state respectively.\n\n\n\nFrom the four bounds established in this section, we can complete the converse by introducing an independent time-sharing random variable $Q$ uniformly distributed on $[1:n]$ and defining $X_1$, $T_1$, $S$, $X_2$, $Y_1$, $Y_2$ to be $X_{1Q}$, $T_{1Q}$, $S_{Q}$, $X_{2Q}$, $Y_{1Q}$, $Y_{2Q}$ respectively. 
\\hfill\\IEEEQED\n\n\n\\subsection{Example: Modulo-Additive State-Dependent Z-Interference Channel}\\label{subsec:modulo}\n\n\\begin{thm}\\label{thm:modulo}\nThe capacity region of the modulo-additive S-D Z-IC in Fig.~\\ref{fig:model_modulo} is given by the convex closure of the rate pairs $(R_1,R_2)$ satisfying\n\\begin{equation}\\label{eq:cap_modulo}\n\\begin{split}\nR_1 & < (1-\\lambda)\\log |\\mc{X}| + \\lambda H(\\bm{p}),\\\\\nR_2 & < \\log |\\mc{X}| - H\\left(\\lambda \\bm{p} + (1-\\lambda)\\bm{\\delta}_0\\right),\n\\end{split}\n\\end{equation}\nfor some $\\bm{p}\\in\\mc{P}_{\\mc{X}}$, where $\\mc{P}_{\\mc{X}}$ denotes the probability simplex corresponding to $\\mc{X}$, $H(\\bm{p})$ stands for the entropy of the pmf $\\bm{p}$ and $\\bm{\\delta}_0$ denotes the pmf that has unit mass at $0$.\n\\end{thm}\n\nThe capacity region when $\\mc{X}=\\{0,1\\}$ and $S$ is i.i.d. Ber$\\left(\\frac{1}{2}\\right)$ is shown in Figure~\\ref{fig:modulo}.\n\n\n\n\\subsection*{Proof of Theorem~\\ref{thm:modulo}}\n\n\n\\begin{figure}[!t]\n\\centering\n\\input{modulo.tikz}\n\\caption{Capacity Region with $\\mc{X}=\\{0,1\\}$ and $S$ i.i.d. Ber$\\left(\\frac{1}{2}\\right).$ The dotted line shows the capacity region when all nodes have state information. Note that the maximal sum-rate of $1.5$ bits\/channel use is achievable with state information only at the interfering Tx.}\n\\label{fig:modulo}\n\\end{figure}\n\nConsider the capacity region stated in Theorem~\\ref{thm:cap}. Let $\\bm{p}_{1,0}$, $\\bm{p}_{1,1}$ and $\\bm{p}_{2}$, all in $\\mc{P}_{\\mc{X}}$, be used to denote the pmf's ${p(x_1|s=0,q)}$, $p(x_1|s=1,q)$ and $p(x_2|q)$ respectively. Evaluating each of the constraints in \\eqref{eq:cap} gives us the following expression for the capacity region:\n\\begin{equation}\\label{eq:cap_modulo_1}\n\\begin{split}\nR_1 & < (1-\\lambda)H(\\bm{p}_{1,0}) + \\lambda H(\\bm{p}_{1,1}),\\\\\nR_2 & < H(\\bm{p}_{2}),\\\\ \nR_2 & < H\\left((1-\\lambda)\\bm{p}_{2} + \\lambda\\widetilde{\\bm{p}}\\right) + \\lambda H(\\bm{p}_{1,1}) \\\\\n&\\quad\\quad - H\\left(\\lambda \\bm{p}_{1,1} + (1-\\lambda)\\bm{\\delta}_0\\right),\\\\\nR_1 + R_2 & < (1-\\lambda)H(\\bm{p}_{1,0})+H\\left((1-\\lambda)\\bm{p}_{2} + \\lambda\\widetilde{\\bm{p}}\\right)\\\\\n&\\quad\\quad + \\lambda H(\\bm{p}_{1,1}) - H\\left(\\lambda \\bm{p}_{1,1} + (1-\\lambda)\\bm{\\delta}_0\\right),\n\\end{split}\n\\end{equation}\nwhere $\\widetilde{\\bm{p}}\\in\\mc{P}_{\\mc{X}}$ is a pmf that is defined as \n$$\\widetilde{\\bm{p}}(k) = \\sum_{i=0}^{|\\mc{X}|-1} \\bm{p}_{1,1}(i)\\bm{p}_{2}(k-i),\\quad 0\\leq k\\leq |\\mc{X}|-1,$$ and $k-i$ should be understood to be $(k-i)\\text{ mod } |\\mc{X}|$.\n\nFirstly, we note that $\\bm{p}_{1,0}$ should be chosen as the pmf of the uniform distribution to maximize $H(\\bm{p}_{1,0})$, thus maximizing the RHS of the constraints in \\eqref{eq:cap_modulo_1}. Similarly, $\\bm{p}_2$ should also be chosen to be the pmf of the uniform distribution. 
Then, we can also remove the first constraint on $R_2$, since it is rendered redundant by the other constraint on $R_2$.\nThus, the capacity region is given by the convex closure of $(R_1,R_2)$ satisfying\n\\begin{equation}\\label{eq:cap_modulo_3}\n\\begin{split}\nR_1 & < (1-\\lambda)\\log(|\\mc{X}|) + \\lambda H(\\bm{p}_{1,1}),\\\\\nR_2 & < \\log(|\\mc{X}|) + \\lambda H(\\bm{p}_{1,1}) - H\\left(\\lambda \\bm{p}_{1,1} + (1-\\lambda)\\bm{\\delta}_0\\right),\\\\\nR_1 + R_2 & < (2-\\lambda)\\log(|\\mc{X}|)+ \\lambda H(\\bm{p}_{1,1})\\\\\n&\\quad\\quad\\quad\\quad - H\\left(\\lambda \\bm{p}_{1,1} + (1-\\lambda)\\bm{\\delta}_0\\right),\n\\end{split}\n\\end{equation} for $\\bm{p}_{1,1}\\in\\mc{P}_{\\mc{X}}.$\n\nFor any $\\bm{p}$, the region in \\eqref{eq:cap_modulo} is contained in the region in \\eqref{eq:cap_modulo_3} for $\\bm{p}_{1,1}=\\bm{p}$. Hence, the convex closure of \\eqref{eq:cap_modulo} is contained in the convex closure of \\eqref{eq:cap_modulo_3}.\n\nHowever, also note that the region in \\eqref{eq:cap_modulo_3} for any $\\bm{p}_{1,1}$ is contained in the convex hull of two regions, one obtained by setting $\\bm{p} = \\bm{p}_{1,1}$ in \\eqref{eq:cap_modulo} and the other obtained by setting $\\bm{p}=\\bm{\\delta}_0$ in \\eqref{eq:cap_modulo}. Hence, the convex closure of \\eqref{eq:cap_modulo_3} is also contained in the convex closure of \\eqref{eq:cap_modulo}. This concludes the proof of Theorem~\\ref{thm:modulo}.\\hfill\\IEEEQED \n\n\n\\begin{remark}\nThe optimal sum-rate $(2-\\lambda)\\log |\\mc{X}|$ is achieved by choosing $\\bm{p}=\\bm{\\delta}_0$. This corresponds to setting the transmitted symbols of the first transmitter to $0$ when $S=1$ so that it does not interfere with the second transmission. The first transmitter then treats these symbols as stuck to $0$ and performs Gelfand-Pinsker coding. The second transmitter transmits at rate $\\log(|\\mc{X}|)$ bits\/channel use. It can be easily verified that this is also the optimal sum-rate when all nodes are provided with the state~information. Thus, for this channel, the sum-capacity when all nodes have state information is the same as that when only encoder~1 has state information.\n\\end{remark}\n\n\\begin{remark}\nFinally, we note that there is also another way to achieve the capacity region of the modulo additive S-D Z-IC. For this, first recall that to get the capacity region expression in Theorem~\\ref{thm:cap}, we set the auxiliary random variables $U$ and $V$ in the expression in Theorem~\\ref{thm:gen_ach} to $Y_1$ and $T_1$ respectively. Another choice, which corresponds to standard Gelfand-Pinsker coding for the first transmitter-receiver pair and treating interference as noise at the second receiver is to choose $V=\\phi$ in Theorem~\\ref{thm:gen_ach}. This gives us the following achievable region:\n\\begin{equation}\\label{eq:int_noise}\n\\begin{split}\nR_1 & < I(U;Y_1|Q)-I(U;S|Q),\\\\\nR_2 & < I(X_2;Y_2|Q),\n\\end{split}\n\\end{equation} for some pmf $p(q)p(u|s,q)p(x_1|u,s,q)p(x_2|q)$. We can now see that for the modulo-additive S-D Z-IC, the capacity region is also achieved by making the following choices in the above region: $p(u|s=0)$ to be the uniform pmf over $\\mc{X}$, $p(u|s=1)$ to be $\\bm{p}$, $p(x_1|u,s)$ to be $\\bm{\\delta}_u$ (i.e. $X_1=U$) and $p(x_2)$ to be the uniform pmf over $\\mc{X}$. 
Thus, the capacity region of the modulo-additive S-D Z-IC can also be achieved by treating interference as noise at the second receiver.\n\\end{remark}\n\n\\subsection{Multiple-level modulo-additive S-D Z-IC}\\label{subsec:multiplelevel}\n\nThe linear deterministic model introduced in \\cite{Ave11} consists of multiple \\emph{bit levels} that roughly correspond to bits communicated at different power levels. The modulo-additive S-D Z-IC that we looked at in the previous subsection is a special case in which the number of levels is one. Extending the model to have multiple bit levels raises some interesting questions, which we consider in this subsection. \n\nMore specifically, consider the model depicted in Fig.~\\ref{fig:model_modulo_multiple}, which can be thought of as three copies of the model in Fig.~\\ref{fig:model_modulo} that are, however, coupled through the common state affecting them. For simplicity, we restrict attention to the case when the alphabet on each level, denoted by $\\mc{X}$, is the binary alphabet, i.e. $\\{0,1\\}$, and the state is Ber$(0.5).$ Let $L$ denote the number of bit levels.\n\n\\begin{figure}[!th]\n\\centering\n\\includegraphics[scale=1.5]{modulo_multiple_levels.pdf}\n\\caption{The Modulo-Additive S-D Z-IC with multiple bit levels.}\n\\label{fig:model_modulo_multiple}\n\\end{figure}\n\n\\begin{figure}[!th]\n\\centering\n\\begin{subfigure}\n\\centering\n \\resizebox{.5\\linewidth}{!}{\\input{twolevel_binary_SD_ZIC.tikz}}\n\\caption{Comparison of the different rate regions for the 2-level binary modulo-additive S-D Z-IC}\n\\label{fig:modulo_multiple_2}\n\\end{subfigure}\n\\vspace{4mm}\n\\begin{subfigure}\n\\centering\n\\resizebox{.5\\linewidth}{!}{\\input{threelevel_binary_SD_ZIC.tikz}}\n\\caption{Comparison of the different rate regions for the 3-level binary modulo-additive S-D Z-IC}\n\\label{fig:modulo_multiple_3}\n\\end{subfigure}\n\\end{figure}\n\nThis model also falls under the injective deterministic setup for which we have completely characterized the capacity region. Hence, the capacity region can be easily computed, as we do in the following. This evaluation also allows us to immediately compare the capacity region with the rates achieved by some straightforward achievability schemes. In particular, consider the following two simple achievability schemes:\n\\begin{itemize}\n\\item ``Separation'': The simplest strategy one can employ is to consider each level separately and communicate over it independently of the other levels. This gives us that all rate pairs $(R_1,R_2)$ satisfying\n\\begin{equation}\\label{eq:ach_separation}\n\\begin{split}\nR_1 & < \\frac{L}{2} + \\sum_{i=1}^{L}\\frac{1}{2} H(\\bm{p}_i),\\\\\nR_2 & < L - \\sum_{i=1}^{L}H\\left( \\frac{1}{2}\\bm{p}_i + \\frac{1}{2}\\bm{\\delta}_0\\right),\n\\end{split}\n\\end{equation}\nfor some $\\bm{p}_1,\\bm{p}_2,\\dots,\\bm{p}_L\\in\\mc{P}_{\\mc{X}}$ are achievable.\n\\item ``Communicate state'': Alternatively, by noticing that strictly better rates could have been achieved if decoder~2 also had access to the state information, we can reserve one level to communicate the state from encoder~1 to decoder~2. This is done by ensuring that encoder~1 transmits a 1 on this reserved level whenever the state is 1, and encoder~2 constantly transmits a 0 on this level. The nodes communicate on the remaining levels keeping in mind that now decoder~2 also has state information.
Note that while no communication can happen between encoder~2 and decoder~2 on the reserved level, encoder~1 can still communicate with decoder~1 at rate 0.5 on this level by treating it as a channel with stuck bits (bit equals 1 whenever state equals 1). This strategy provides us the following achievable region:\n\\begin{equation}\\label{eq:ach_state}\n\\begin{split}\nR_1 & < \\frac{L}{2} + \\frac{1}{2} H(\\bm{p}),\\\\\nR_2 & < L-1 - \\frac{1}{2}H\\left( \\bm{p}\\right),\n\\end{split}\n\\end{equation}\nfor some $\\bm{p}\\in\\mc{P}_{\\mc{X}^{L-1}}$.\n\\end{itemize}\n\nWe can expect that the suboptimality of reserving one level for communicating the state should become relatively small as the number of levels increases i.e. at high SNR. This is corroborated by the numerical analysis, shown in Figs.~\\ref{fig:modulo_multiple_2} and \\ref{fig:modulo_multiple_3}, in which we can see that there is a marked improvement in the rates achieved by this scheme relative to the capacity region as we increase the number of levels from 2 to 3. Indeed, since all the levels are affected by the same state, the entropy of the state becomes small compared to the communication rates as the SNR increases, so it is not a big overhead to explicitly communicate the state to decoder~2 at high SNR. However, at low SNR, the figures show that the overhead incurred is quite high due to which this approach is significantly suboptimal, while the simple scheme of treating the levels separately results in achieving very close to the entire capacity region.\n\n\n\n\n\n\\section{Interference Channels with Partial Cribbing}\\label{sec:cribbing}\n\n\nIn this section, we focus on deterministic Z-interference channels when the interfering transmitter can overhear the signal transmitted by the other transmitter after it passes through some channel. This channel is also modeled as a deterministic channel, dubbed as \\emph{partial cribbing} in \\cite{Asn13}. Deterministic models, in particular linear deterministic models \\cite{Ave11}, have gained popularity due to the observation that they are simpler to analyze and are provably close in performance to Gaussian models. \n\nThere have been quite a few very sophisticated achievability schemes designed for interference channels with causal cribbing encoders, however optimality of the achievable rate regions has not been addressed. In the most general interference channel model with causal cribbing \\cite{Yan11}, each encoder needs to split its message into four parts: a common part to be sent cooperatively, a common part to be sent non-cooperatively, a private part to be sent cooperatively and a private part to be sent non-cooperatively. Further, because of the causal nature of cribbing, achievability schemes usually involve block-Markov coding, so that each encoder also needs to consider the cooperative messages of both encoders from the previous block. Motivated by the alternative achievability scheme we have presented earlier for the general interference channel, we present a simple optimal achievability scheme that minimizes the rate-splitting that is required. Specifically, while encoder~2 only splits its message into a cooperative and non-cooperative private part, encoder~1 does not perform any rate-splitting at all. 
By focusing on the specific configuration of the Z-interference channel, we are able to prove the optimality of an achievability scheme that is simpler than the highly involved achievability schemes for the general case that are currently known.\n \n\n\n\n\n\n\n\n\\subsection{Result for Partial Cribbing}\\label{subsec:main_res_crib}\n\\begin{thm}\\label{thm:part_crib}\nThe capacity region of the injective deterministic Z-interference channel with unidirectional partial cribbing, depicted in Fig.~\\ref{fig:model_crib}, is given by the convex closure of $(R_1,R_2)$ satisfying \\begin{equation}\\label{eq:cap_part_crib}\n\\begin{split}\nR_1 & \\leq H(Y_1|W),\\\\\nR_2 & \\leq \\min\\Big(H(Y_2), H(Y_2,Z_2|T_1,W)\\Big),\\\\\nR_1 + R_2 & \\leq H(Y_1|T_1,W)+\\min\\Big(H(Y_2), H(Y_2,Z_2|W)\\Big),\n\\end{split}\n\\end{equation}\nfor $p(w)p(x_1|w)p(x_2|w),$ where $W$ is an auxiliary random variable whose cardinality can be bounded as\n$|\\mathcal{W}|\\leq |\\mathcal{Y}_2|+3.$\n\\end{thm}\n\nThe proof of this theorem is presented below.\n\n\\subsection{Proof of Theorem~\\ref{thm:part_crib}}\\label{subsec:proof_crib}\n\\emph{Achievability}\\\\\nChoose a pmf $p(w)p(u_d,u_c,x_1|w)p(x_2,z_2|w)$ and ${0<\\epsilon'<\\epsilon}$, where for the sake of generality, we use the auxiliary random variables $U_d$ and $U_c$. In the injective deterministic case at hand, they can be set to $Y_1$ and $T_1$ respectively.\n\\subsubsection*{Codebook Generation}\nThe communication time is divided into $B$ blocks, each containing $n$ channel uses, and an independent random code is generated for each block $b\\in[1:B]$. Whenever it is clear from the context, we suppress the dependence of codewords on $b$ to keep the notation simple. The messages in block $B$ are fixed apriori, so a total of $B-1$ messages are communicated over the $B$ blocks. The resulting rate loss can be made as negligible as desired by choosing a sufficiently large $B$.\n\nWe split $R_2$ as $R_2'+R_2''$, which corresponds to the split of message~2 into two parts, one that will be sent cooperatively by both transmitters to receiver~2 and the other non-cooperatively only by transmitter~2 to receiver~2. For each block $b$, let $m_{2,b}'\\in[1:2^{nR_2'}]$ and $m_{2,b}''\\in[1:2^{nR_2''}]$. For each block $b\\in [1:B]$, we generate $2^{nR_2'}$ sequences $w^n$ i.i.d. according to $p(w)$.\n\\begin{itemize}\n\\item For each $w^n$ in block $b$, we generate $2^{nR_{2}'}$ sequences $\\left\\{z_2^n(w^n,m'_{2,b})\\right\\}$ i.i.d. according to $p(z_2|w)$. Then for each $(w^n,z_2^n)$, we generate $2^{nR_2''}$ sequences $\\left\\{x_2^n(w^n,z_2^n,m''_{2,b})\\right\\}$ i.i.d. according to $p(x_2|z_2,w)$.\n\\item For each $w^n$ in block $b$, we generate $2^{nR_c}$ sequences $\\left\\{u_c^n(w^n,l_c)\\right\\}$ i.i.d. according to $p(u_c|w)$, where $l_c\\in[1:2^{nR_c}]$. We also generate $2^{n(R_1+R_d)}$ sequences $\\left\\{u_d^n(m_{1,b},l_d)\\right\\}$ i.i.d. according to $p(u_{d})$, where $m_{1,b}\\in[1:2^{nR_1}]$ and $l_d\\in[1:2^{nR_d}]$. \\footnote{Note that the $u_d^n$ sequences are generated independently of the $w^n$ sequences.}\n\\end{itemize}\n\n\\subsubsection*{Encoding}\nLet us assume for now that as a result of the cribbing, encoder 1 knows $m'_{2,b-1}$ at the end of block ${b-1}$. 
Then in block $b$, both encoders can encode $m'_{2,b-1}$ using $w^n(m'_{2,b-1})$ where $w^n$ is from the code for block $b$.\n\\begin{itemize}\n\\item To transmit message $m_{1,b}$, encoder 1 finds a pair $(l_{d},l_{c})$ such that $$(w^n(m'_{2,b-1}),u_c^n(w^n,l_c),u_d^n(m_{1,b},l_{d}))\\in\\mc{T}^{(n)}_{\\epsilon'}.$$ It transmits $x_1^n$ that is generated i.i.d. according to $p(x_1|w,u_d,u_c)$.\n\\item To transmit message $m_{2,b}=(m'_{2,b},m''_{2,b})$, encoder 2 encodes $m'_{2,b}$ as $z_2^n(w^n,m'_{2,b})$ and then transmits $x_2^n(w^n,z_2^n,m''_{2,b})$.\n\\end{itemize}\nWe fix apriori the messages in block $B$ to be $m_{1,B}=1$, $m'_{2,B}=1$ and $m''_{2,B}=1$. Also, to avoid mentioning edge cases explicitly, whenever $m_{1,0}$, $m'_{2,0}$ or $m''_{2,0}$ appear, we assume that all are fixed to 1.\n\n\\subsubsection*{Decoding}\n\\begin{itemize}\n\\item \\emph{Encoder~1:} At the end of block $b$, assuming it has already decoded $m'_{2,b-1}$ at the end of block $b-1$, encoder 1 decodes $m'_{2,b}$ by finding the unique $\\hat{m}'_{2,b}$ such that the sequence $z_2^n$ it has observed via cribbing is equal to $z_2^n(w^n,\\hat{m}'_{2,b})$.\n\\item \\emph{Decoder~1:} In each block $b$, decoder 1 finds the unique $\\hat{m}_{1,b}$ such that $(u_d^n(m_{1,b},l_{d}),y_1^n)\\in\\mc{T}^{(n)}_{\\epsilon}$ for some $l_{d}$.\n\\item \\emph{Decoder~2:} Decoder 2 performs backward decoding as follows: \n\\begin{itemize}\n\\item In block $B$, decoder 2 finds a unique $m'_{2,B-1}$ such that the condition \\eqref{eq:jtd_dec2_B} is satisfied for some $l_c.$\n\\begin{equation}\\label{eq:jtd_dec2_B}\n(w^n(\\hat{m}'_{2,B-1}),z_2^n(w^n,1),x_2^n(w^n,z_2^n,1),u_c^n(w^n,l_c),y_2^n)\\in\\mc{T}^{(n)}_{\\epsilon}\n\\end{equation}\n\\item In block $b$, assuming $m'_{2,b}$ has been decoded correctly, it finds the unique $(\\hat{m}'_{2,b-1}, \\hat{m}''_{2,b})$ such that the condition \\eqref{eq:jtd_dec2} is satisfied for some $l_{c}$.\n\\begin{equation}\\label{eq:jtd_dec2}\n(w^n(\\hat{m}'_{2,b-1}),z_2^n(w^n,m'_{2,b}),x_2^n(w^n,z_2^n,\\hat{m}''_{2,b}),u_c^n(w^n,l_c),y_2^n)\\in\\mc{T}^{(n)}_{\\epsilon}\n\\end{equation}\n\\end{itemize}\n\\end{itemize}\n\n\n\n\n\\subsubsection*{Probability of Error}\n\nTo get a vanishing probability of error, we can impose the conditions described in the following list.\n\\begin{itemize}\n\\item\nSimilar to the proof of the mutual covering lemma \\cite[Ch. 
8]{Gam12}, we can show that the following conditions are sufficient for the success of encoding at the first transmitter:\n\\begin{IEEEeqnarray}{rCl}\nR_d & > & I(U_d;W) +\\delta(\\epsilon'),\\label{eq:dec1}\\\\\nR_d + R_c & > & I(U_d;U_c,W)+\\delta(\\epsilon')\\label{eq:dec2}.\n\\end{IEEEeqnarray}\n\\item For the decoding at encoder 1 to succeed:\n\\begin{equation}\nR'_2 < H(Z_2|W)-\\delta(\\epsilon).\\label{eq:dec3}\n\\end{equation}\n\\item For decoding at decoder 1 to succeed:\n\\begin{equation}\nR_1 + R_d < I(U_d;Y_1)-\\delta(\\epsilon).\\label{eq:dec4}\n\\end{equation}\n\\item For the backward decoding at decoder 2 to succeed, it is sufficient that the following conditions are satisfied:\n\\begin{IEEEeqnarray}{rCl}\nR''_2 & < & I(X_2;Y_2|W,U_c,Z_2)-\\delta(\\epsilon),\\label{eq:dec5}\\\\\nR''_2+ R_c & < & I(U_c,X_2;Y_2|W,Z_2)-\\delta(\\epsilon),\\label{eq:dec6}\\\\\nR'_2 + R''_2 + R_c & < & I(W,U_c,X_2;Y_2)-\\delta(\\epsilon).\\label{eq:dec7}\n\\end{IEEEeqnarray}\n\\end{itemize}\n\nNoting that $R'_2+R''_2=R_2$, eliminating $(R_d,R_c,R'_2,R''_2)$ from \\eqref{eq:dec1}-\\eqref{eq:dec7} via Fourier-Motzkin elimination, and substituting $U_d=Y_1$ and $U_c=T_1$, we get the achievable region in \\eqref{eq:cap_part_crib} with the following additional bound on $R_1$:\n$$R_1< H(Y_1|W,T_1) + H(Y_2|W,Z_2).$$\nTo conclude the proof of achievability, we show that this bound is rendered redundant by $R_1d, 0\\leq m,n \\leq N$. From our numerical search (Fig.~\\ref{fig2}), we found that $d=1$ Hamiltonians generate only trivial error correcting codes. Therefore we will consider the $d=2$ case, which seems to be the minimal distance required for QEC exceeding break-even. More explicitly, a $d=2$ Hamiltonian satisfies\n\\begin{equation}\\label{eq:locality_constraints}\n \\begin{pmatrix}\n 0 & \\tilde{H}_{03} & \\tilde{H}_{04} & ... & \\tilde{H}_{0N} \\\\\n \\tilde{H}_{30} & 0 & \\tilde{H}_{14} & ... & \\tilde{H}_{1N} \\\\\n \\tilde{H}_{40} & \\tilde{H}_{41} & 0 & ... & \\tilde{H}_{2N} \\\\\n \\vdots & & & & \\tilde{H}_{N-3,N} \\\\\n \\tilde{H}_{N0} & \\tilde{H}_{N1} & ... & \\tilde{H}_{N,N-3} & 0\n \\end{pmatrix} =\\mathbf{0}.\n\\end{equation}\nThe goal here is to find $\\tilde{H}$, in other words we will solve for the coefficients $\\beta_{ij}$ as well as the logical states while satisfying these locality constraints.\n\n\\subsection{Example of the $\\sqrt{3}$ code}\nWe demonstrate how to solve this problem with a concrete example. Numerically the $\\sqrt{3}$ code we found with the $d=2$ Hamiltonian had logical states of the form\n\\begin{equation}\n \\begin{split}\n \\ket{\\psi_0} &= a_0 \\ket{0} + a_3 \\ket{3} \\\\\n \\ket{\\psi_1} &= a_1 \\ket{1} + a_4 \\ket{4} + a_6 \\ket{6} .\n \\end{split} \n\\end{equation}\nSince the two logical states don't share any Fock basis, we can always make all coefficients $a_0,a_3,a_1,a_4,a_6$ real by doing the basis transformation $\\ket{n} \\rightarrow e^{i\\theta_n} \\ket{n}$. The error states are\n\\begin{equation}\n \\begin{split}\n \\ket{\\psi_2} &= \\ket{2} \\propto \\hat a\\ket{\\psi_0} \\\\\n \\ket{\\psi_3} &= \\mathcal{N}_1 ( a_1 \\ket{0} + 2 a_4 \\ket{3} + \\sqrt{6} a_6 \\ket{5} ) \\propto \\hat a\\ket{\\psi_1} .\n \\end{split} \n\\end{equation}\nNotice that if Eq.~(\\ref{eq:locality_constraints}) does have a solution, the solution always exists no matter how we choose the basis for the orthogonal subspace $\\mathcal{H}_2$. 
In other words, we could always represent the new basis as linear combinations of the old basis and that, together with the old solution $\\beta_{ij}$, gives the new solution $\\beta_{ij}'$. Therefore, we have complete freedom here to select the basis $\\{ \\ket{\\psi_4},\\ket{\\psi_5},\\ket{\\psi_6} \\}$ for $\\mathcal{H}_2$ and for convenience of further analysis we make the following choice (notation $\\psi_i(n) = \\inp{n}{\\psi_i}$):\n\\begin{equation}\n \\begin{split}\n \\ket{\\psi_4} & = \\psi_4(1) \\ket{1} + \\psi_4(4) \\ket{4} + \\psi_4(6) \\ket{6} \\\\\n \\ket{\\psi_5} & = \\psi_5(1) \\ket{1} + \\psi_5(4) \\ket{4} + \\psi_5(6) \\ket{6} \\\\\n \\ket{\\psi_6} & = \\psi_6(0) \\ket{0} + \\psi_6(3) \\ket{3} + \\psi_6(5) \\ket{5} .\n \\end{split} \n\\end{equation}\nWe can make all $\\psi_i(n)$ real here, which leads to all $\\beta_{ij}$ also being real.\nWith this basis choice, many constraints in Eq.~(\\ref{eq:locality_constraints}) can be easily satisfied either automatically or by setting certain $\\beta_{ij}=0$. More specifically, for any $|m-n|>2$ such that $\\mele{m}{(\\ket{\\psi_2}\\bra{\\psi_0} + \\ket{\\psi_3}\\bra{\\psi_1})}{n} = 0$, there are two different cases:\n\\begin{enumerate}\n \\item $\\mele{m}{(\\ket{\\psi_i}\\bra{\\psi_j})}{n}=0, \\forall i,j$: in this case $\\tilde{H}_{mn}=0$ is already satisfied.\n \\item there exist $i,j$ such that $\\mele{m}{(\\ket{\\psi_i}\\bra{\\psi_j})}{n}\\neq 0$: in this case we just set $\\beta_{ij}=0$.\n\\end{enumerate}\nTherefore the only non-trivial constraints from Eq.~(\\ref{eq:locality_constraints}) are those with $\\mele{m}{(\\ket{\\psi_2}\\bra{\\psi_0} + \\ket{\\psi_3}\\bra{\\psi_1})}{n} \\neq 0$, which are $\\tilde{H}_{04},\\tilde{H}_{06},\\tilde{H}_{36},\\tilde{H}_{51}$. It is easy to see that the only terms in Eq.~(\\ref{eq:QEC_full}) that will contribute to these matrix elements are $\\ket{\\psi_6}\\bra{\\psi_4}$ and $\\ket{\\psi_6}\\bra{\\psi_5}$. With this analysis, the ansatz Hamiltonian in Eq.~(\\ref{eq:QEC_full}) can be greatly simplified to the following:\n\\begin{equation}\\label{eq:H1}\n \\tilde{H} = \\ket{\\psi_2}\\bra{\\psi_0} + \\ket{\\psi_3}\\bra{\\psi_1} + \\beta_1 \\ket{\\psi_6} \\bra{\\psi_4} + \\beta_2 \\ket{\\psi_6} \\bra{\\psi_5} ,\n\\end{equation}\nwhere the two free parameters $\\beta_1$ and $\\beta_2$ satisfy the set of linear equations\n\\begin{subequations}\n \\begin{align}\n \\tilde{H}_{04} &= \\psi_3(0) \\psi_1(4) + \\beta_1 \\psi_6(0) \\psi_4(4) + \\beta_2 \\psi_6(0) \\psi_5(4) = 0 \\label{eq:a} \\\\\n \\tilde{H}_{06} &= \\psi_3(0) \\psi_1(6) + \\beta_1 \\psi_6(0) \\psi_4(6) + \\beta_2 \\psi_6(0) \\psi_5(6) = 0 \\label{eq:b} \\\\\n \\tilde{H}_{36} &= \\psi_3(3) \\psi_1(6) + \\beta_1 \\psi_6(3) \\psi_4(6) + \\beta_2 \\psi_6(3) \\psi_5(6) = 0 \\label{eq:c} \\\\\n \\tilde{H}_{51} &= \\psi_3(5) \\psi_1(1) + \\beta_1 \\psi_6(5) \\psi_4(1) + \\beta_2 \\psi_6(5) \\psi_5(1) = 0 \\label{eq:d} .\n \\end{align}\n\\end{subequations}\nThe crucial observation here is that the number of equations (four) is larger than the number of parameters (two), which means that the coefficients of this overdetermined system must be linearly dependent for a solution to exist. Since these coefficients are essentially functions of $\\ket{\\psi_0}$ and $\\ket{\\psi_1}$, this eventually provides the extra constraints for determining the logical states. Here there should be $4-2=2$ constraints in total.\n\nBelow we show in detail how to obtain the two constraints and eventually the two logical states.
Comparing Eq.~(\\ref{eq:b}) and Eq.~(\\ref{eq:c}), it's easy to see that the first constraint is\n\\begin{equation}\\label{eq:constraint1}\n \\frac{\\psi_3(0)}{\\psi_3(3)} = \\frac{\\psi_6(0)}{\\psi_6(3)} .\n\\end{equation}\nTo get the second constraint, let us multiply Eq.~(\\ref{eq:a}) with $\\psi_1(4)$, multiply Eq.~(\\ref{eq:b}) with $\\psi_1(6)$, and then add them together:\n\\begin{equation}\n \\begin{split}\n & \\psi_3(0) ( [\\psi_1(4)]^2 + [\\psi_1(6)]^2 ) \\\\\n & +\\beta_1 \\psi_6(0) [ \\psi_4(4) \\psi_1(4) + \\psi_4(6) \\psi_1(6) ] \\\\\n & + \\beta_2 \\psi_6(0) [ \\psi_5(4) \\psi_1(4) + \\psi_5(6) \\psi_1(6) ] = 0 .\n \\end{split}\n\\end{equation}\nUsing the fact that $\\ket{\\psi_1}$ is normalized and orthogonal to both $\\ket{\\psi_4}$ and $\\ket{\\psi_5}$, we have\n\\begin{equation}\n \\begin{split}\n & \\psi_3(0) ( 1 - [\\psi_1(1)]^2) + \\beta_1 \\psi_6(0) [ -\\psi_4(1) \\psi_1(1) ] \\\\\n & + \\beta_2 \\psi_6(0) [ -\\psi_5(1) \\psi_1(1) ] = 0 \\\\\n \\Rightarrow & - \\psi_3(0) \\frac{1 - [\\psi_1(1)]^2}{\\psi_1(1)} + \\beta_1 \\psi_6(0) \\psi_4(1) \\\\\n & + \\beta_2 \\psi_6(0) \\psi_5(1) = 0 .\n \\end{split}\n\\end{equation}\nCompare this with Eq.~(\\ref{eq:d}), we immediately obtain the second constraint:\n\\begin{equation}\\label{eq:constraint2}\n \\begin{split}\n & - \\psi_3(0) \\frac{1 - [\\psi_1(1)]^2}{\\psi_1(1) \\psi_6(0)} = \\frac{\\psi_3(5) \\psi_1(1)}{\\psi_6(5)} \\\\\n \\Rightarrow & \\psi_3(0) \\psi_6(5) (1 - [\\psi_1(1)]^2) + \\psi_3(5) \\psi_6(0) [\\psi_1(1)]^2 = 0 .\n \\end{split}\n\\end{equation}\nLet us explicitly list all the relevant states here\n\\begin{equation}\n \\begin{split}\n \\ket{\\psi_0} &= a_0 \\ket{0} + a_3 \\ket{3} \\\\\n \\ket{\\psi_1} &= a_1 \\ket{1} + a_4 \\ket{4} + a_6 \\ket{6} \\\\\n \\ket{\\psi_3} &= \\mathcal{N}_1 ( a_1 \\ket{0} + 2 a_4 \\ket{3} + \\sqrt{6} a_6 \\ket{5} ) \\\\\n \\ket{\\psi_6} &= \\mathcal{N}_2 ( a_1 \\ket{0} + 2 a_4 \\ket{3} + \\beta \\ket{5} ) ,\n \\end{split} \n\\end{equation}\nwhere we have applied Eq.~(\\ref{eq:constraint1}) for $\\ket{\\psi_6}$ and $\\beta$ is another parameter. 
Combining the QEC criteria and Eq.~(\\ref{eq:constraint2}), we have\n\\begin{equation}\n \\begin{split}\n & a_0^2 + a_3^2 = 1 \\\\\n & a_1^2 + a_4^2 + a_6^2 = 1 \\\\\n & a_0 a_1 + 2 a_3 a_4 = 0 \\\\\n & 3 a_3^2 = a_1^2 + 4 a_4^2 + 6 a_6^2 \\\\\n & a_1^2 + 4 a_4^2 + \\sqrt{6}\\beta a_6 = 0 \\\\\n & \\beta (1 - a_1^2) + \\sqrt{6} a_6 a_1^2 = 0 .\n \\end{split}\n\\end{equation}\nWe have 6 equations and 6 parameters in total, and the solution is (there is some freedom to choose the signs which again is just a trivial basis transformation)\n\\begin{equation}\n \\begin{split}\n & a_0 = \\sqrt{1-\\frac{1}{\\sqrt{3}}}, a_3 = \\frac{1}{\\sqrt[4]{3}}, a_1 = \\sqrt{\\frac{2(6-\\sqrt{3})}{\\sqrt{3}+9}} \\\\\n & a_4 = -\\sqrt{\\frac{(\\sqrt{3}-1)(6-\\sqrt{3})}{2(\\sqrt{3}+9)}}, a_6 = \\sqrt{\\frac{3-\\sqrt{3}}{2(\\sqrt{3}+9)}} .\n \\end{split}\n\\end{equation}\nTherefore the logical states of the $\\sqrt{3}$ code are\n\\begin{equation}\\label{eq:logical_states}\n \\begin{split}\n \\ket{\\psi_0} =& \\sqrt{1-\\frac{1}{\\sqrt{3}}} \\ket{0} + \\frac{1}{\\sqrt[4]{3}} \\ket{3} \\\\\n \\ket{\\psi_1} =& \\sqrt{\\frac{2(6-\\sqrt{3})}{\\sqrt{3}+9}} \\ket{1} - \\sqrt{\\frac{(\\sqrt{3}-1)(6-\\sqrt{3})}{2(\\sqrt{3}+9)}} \\ket{4} \\\\\n & + \\sqrt{\\frac{3-\\sqrt{3}}{2(\\sqrt{3}+9)}} \\ket{6} .\n \\end{split}\n\\end{equation}\nNotice that the average number of photons in the codewords is $3|a_3|^2 = \\sqrt{3}$.\n\nNow we could complete all basis states and the Hamiltonian Eq.~(\\ref{eq:H1}). The basis of $\\mathcal{H}_2$:\n\\begin{equation}\n \\begin{split}\n \\ket{\\psi_6} &= \\mathcal{N}_2 ( a_1 \\ket{0} + 2 a_4 \\ket{3} + \\beta \\ket{5} ) , \\quad \\beta = -\\frac{a_1^2+4a_4^2}{\\sqrt{6} a_6} \\\\\n \\ket{\\psi_4} &= \\mathcal{N}_3 ( a_4 \\ket{1} - a_1 \\ket{4} ) \\\\\n \\ket{\\psi_5} &= \\mathcal{N}_4 ( a_1 \\ket{1} + a_4 \\ket{4} + \\beta' \\ket{6} ) , \\quad \\beta' = -\\frac{a_1^2+a_4^2}{a_6} \n \\end{split} \n\\end{equation}\nand the Hamiltonian parameters:\n\\begin{equation}\n \\beta_2 = -\\frac{\\mathcal{N}_1 a_6}{\\mathcal{N}_2 \\mathcal{N}_4 \\beta'} \\qquad \\beta_1 = \\frac{\\mathcal{N}_1 a_4 (1-a_6\/\\beta')}{\\mathcal{N}_2 \\mathcal{N}_3 a_1} .\n\\end{equation}\n\nThere are some extra complexities in constructing the AQEC Hamiltonian and we actually need to keep more terms from the summation in Eq.~(\\ref{eq:QEC_full}) rather than just the $\\beta_1$ and $\\beta_2$ terms in Eq.~(\\ref{eq:H1}). To understand why this is required, let us study a simpler problem of stabilizing $\\ket{\\psi} = \\frac{1}{\\sqrt{2}}(\\ket{0}+\\ket{2})$ under photon loss error.\nEven though the Hamiltonian $\\hat{H} = (\\ket{0,e}+\\ket{2,e})\\bra{1,g} + \\text{h.c.}$ corrects the error after a single photon loss, it doesn't actually lead to state stabilization.\nThe reason is that when no photon loss happens, the state evolves within the subspace $\\{\\ket{0}, \\ket{2}\\}$ under the non-Hermitian Hamiltonian $\\hat{H}'=-i\\kappa \\hat{a}^\\dagger \\hat{a}\/2$ and eventually becomes $\\ket{0}$. This is because non-detection of a photon still provides us information about the state causing us to update it in a way that skews towards a lower number of photons. State-stabilization must undo this effect.\nWe protect $\\ket{\\psi}$ against $\\hat{H}'$ by engineering a large detuning for $\\ket{\\psi}$ within the subspace $\\{\\ket{0}, \\ket{2}\\}$. For example, adding extra terms such as $\\Omega \\ket{\\psi} \\bra{\\psi}$ or $\\Omega \\ket{0}\\bra{2}+\\text{h.c.}$ to $\\hat{H}$ will stabilize $\\ket{\\psi}$. 
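The role of such a detuning term can be seen in a minimal numerical sketch (our own illustration in Python; $\\kappa$, $\\Omega$ and the evolution time are arbitrary choices): it propagates the conditional no-jump evolution under the non-Hermitian Hamiltonian $\\hat{H}-i\\kappa \\hat{a}^\\dagger \\hat{a}/2$ and compares the fidelity with $\\ket{\\psi} = (\\ket{0}+\\ket{2})/\\sqrt{2}$ with and without the term $\\Omega(\\ket{0}\\bra{2}+\\text{h.c.})$.
\\begin{verbatim}
import numpy as np
from scipy.linalg import expm

kappa, Omega, T = 1.0, 20.0, 1.0     # arbitrary illustrative values (kappa*T = 1)

n_op = np.diag([0.0, 1.0, 2.0])                 # a^dagger a, truncated at |2>
psi = np.zeros(3, complex)
psi[0] = psi[2] = 1 / np.sqrt(2)                # |psi> = (|0> + |2>)/sqrt(2)

detune = np.zeros((3, 3), complex)              # Omega (|0><2| + |2><0|)
detune[0, 2] = detune[2, 0] = Omega

def no_jump_fidelity(H):
    """Fidelity with |psi> after conditional (no photon detected) evolution."""
    Heff = H - 0.5j * kappa * n_op              # non-Hermitian effective Hamiltonian
    phi = expm(-1j * Heff * T) @ psi
    phi /= np.linalg.norm(phi)                  # renormalize the conditional state
    return abs(np.vdot(psi, phi)) ** 2

print("no detuning  :", round(no_jump_fidelity(np.zeros((3, 3), complex)), 4))  # ~0.82
print("with detuning:", round(no_jump_fidelity(detune), 4))                     # ~1.00
\\end{verbatim}
Without the detuning the conditional state drifts towards $\\ket{0}$, while for $\\Omega \\gg \\kappa$ it stays close to $\\ket{\\psi}$, which is the stabilization effect described above (the full AQEC Hamiltonian must of course also handle the photon-jump events, as discussed below).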
This new interaction can be seen as rapidly repopulating the $\\ket{2}$ component of the wavevector as it decays through non-detection of photons.\n\nSimilarly, Eq.~(\\ref{eq:H1}) only protects the logical states against single photon loss error, but not the non-unitary dynamics under $\\hat{H}'$.\nFortunately, keeping a few extra terms from the summation in Eq.~(\\ref{eq:QEC_full}) is sufficient to generate the large detuning without changing the Hamiltonian distance as well as the above derivation.\nThe choices are not unique and one option is to add $\\ket{\\psi_4} \\bra{\\psi_2}$ as well as $(a_6 \\ket{4} - a_4 \\ket{6}) \\bra{5}$ in $\\tilde{H}$, which produces similar results compared to the discovered code in Fig.~\\ref{fig2}(c)i.\nOn the other hand, all these complications in constructing a proper AQEC Hamiltonian are automatically taken care of by \\texttt{AutoQEC} through numerical optimization of the average fidelity.\n\n\n\n\\section{Minimizing the emission bandwidth $\\mathcal{B}$}\nHere we prove the claim in the main text that for a harmonic oscillator coupled to a three-level qubit with Hamiltonian $\\hat{H}_{ab}$ in Eq.~(\\ref{eq:H_ab}), the emission bandwidth $\\mathcal{B}$ is minimized when $g_1^2\/\\Delta_1 \\approx g_2^2\/\\Delta_2$.\n\n\\begin{proof}\nThe Hamiltonian can be written in the subspace of $\\{\\ket{n+2,g},\\ket{n+1,e},\\ket{n,f}\\}$ as a matrix\n\\begin{equation}\n \\begin{pmatrix}\n 0 & \\sqrt{n+2} g_1 & 0 \\\\\n \\sqrt{n+2} g_1 & \\Delta_1 & \\sqrt{n+1} g_2 \\\\\n 0 & \\sqrt{n+1} g_2 & \\Delta_2\n \\end{pmatrix} \n\\end{equation}\nand the eigenvalues satisfy\n\\begin{equation}\n \\begin{split}\n & \\lambda^3 - (\\Delta_1+\\Delta_2) \\lambda^2 + \\left[\\Delta_1 \\Delta_2 - (n+2) g_1^2 - (n+1) g_2^2 \\right] \\lambda \\\\\n & + (n+2) g_1^2 \\Delta_2 = 0 .\n \\end{split}\n\\end{equation}\nIn the dispersive regime $\\Delta_{1,2} \\gg g_{1,2}$, the eigenvalues can be expanded perturbatively as\n\\begin{equation}\n \\begin{split}\n \\lambda =& \\lambda_0 + \\lambda_1 + \\lambda_2 + \\mathcal{O}\\left(\\frac{g^3}{\\Delta^3}\\right) g \\\\\n \\lambda_1 =& \\mathcal{O}\\left(\\frac{g}{\\Delta}\\right) g, \\quad \\lambda_2 = \\mathcal{O}\\left(\\frac{g^2}{\\Delta^2}\\right) g .\n \\end{split}\n\\end{equation}\nFor dressed eigenstates $\\ket{\\widetilde{n+2,g}}$, $\\lambda_0 = 0$ and\n\\begin{equation}\n \\begin{split}\n & \\left[\\Delta_1 \\Delta_2 - (n+2) g_1^2 - (n+1) g_2^2 \\right] \\lambda_1 + (n+2) g_1^2 \\Delta_2 = 0 \\\\ \n \\Rightarrow & \\quad \\lambda_1 = -\\frac{(n+2) g_1^2}{\\Delta_1}\n \\end{split}\n\\end{equation}\nwhich agrees with the dispersive coupling Hamiltonian and no level nonlinearity shows up at this order. To the next order,\n\\begin{equation}\n \\begin{split}\n & \\left[\\Delta_1 \\Delta_2 - (n+2) g_1^2 - (n+1) g_2^2 \\right] (\\lambda_1 + \\lambda_2) \\\\\n & - (\\Delta_1+\\Delta_2) \\lambda_1^2 + (n+2) g_1^2 \\Delta_2 = 0\n \\end{split}\n\\end{equation}\nwhich gives\n\\begin{equation}\n \\lambda_2 = \\frac{(n+2)g_1^2}{\\Delta_1^2} \\left[ (n+2) \\frac{g_1^2}{\\Delta_1} - (n+1) \\frac{g_2^2}{\\Delta_2} \\right] .\n\\end{equation}\nNotice that in general $\\lambda_2$ will induce nonlinearity for the dressed states $\\ket{\\widetilde{n,g}}$ since it depends on $n^2$. 
However, when $g_1^2\/\\Delta_1 = g_2^2\/\\Delta_2$ the dependence on $n^2$ is completely removed which means the nonlinearity and therefore also the emission bandwidth $\\mathcal{B}$ is eliminated at this order.\n\\end{proof}\n\n\\subsection{Qubit choice for $\\hat{b}$}\nThe relevant dispersive coupling to the $e$ levels is\n\\begin{equation}\n \\chi_e = \\frac{2g_1^2}{\\Delta_1} - \\frac{g_2^2}{\\Delta_2 - \\Delta_1}\n\\end{equation}\nand at the minimal nonlinearity point, we have\n\\begin{equation}\n \\chi_e = \\frac{g_1^2}{\\Delta_1} \\frac{r-2}{r-1}\n\\end{equation}\nwhere $r=g_2^2\/g_1^2=\\Delta_2\/\\Delta_1$.\nIdeally, $\\chi_e$ should be as large as possible at this minimal nonlinearity point, such that we can selectively drive certain level transitions without introducing large $\\mathcal{B}$.\nFor a transmon qubit $r \\approx 2 \\Rightarrow \\chi_e \\approx 0$ and therefore cannot be used as qubit $\\hat{b}$.\nFortunately, other qubit designs could provide much more flexibility in engineering the coupling ratio $r$ and $r \\approx 1$ is favorable in terms of larger $\\chi_e$.\n\nIn this work, we choose a fluxonium type of Hamiltonian\n\\begin{equation}\n \\hat{H} = 4 E_C \\hat{n}^2 - E_J \\cos (\\hat{\\phi} - \\phi_{\\text{ext}}) + \\frac{1}{2} E_L \\hat{\\phi}^2\n\\end{equation}\nfor qubit $\\hat{b}$. With realistic parameters $\\phi_{\\text{ext}}=0$, $E_C\/2\\pi=0.95~\\text{GHz}$, $E_J\/2\\pi=4.75~\\text{GHz}$ and $E_L\/2\\pi=0.65~\\text{GHz}$, the coupling ratio is $r = g_2^2\/g_1^2 = |\\mele{f}{\\hat{n}}{\\hat{e}}|^2\/|\\mele{e}{\\hat{n}}{\\hat{g}}|^2 \\approx 1.2$ with $\\omega_{ge}\/2\\pi \\approx 5.43~\\text{GHz}$ and $\\omega_{ef}\/2\\pi \\approx 3.87~\\text{GHz}$.\n\n\n\\section{Full circuit design}\nIn this section, we provide details for the full circuit simulation in Fig.~\\ref{fig4}(e).\nThe AQEC Hamiltonian Eq.~(\\ref{eq:AQEC_H}) can be implemented with a more physical Hamiltonian\n\\begin{equation}\\label{eq:H_physical}\n \\hat{H} = \\hat{H}_{ab} + \\left( f_1(t)\\hat{a}^\\dagger + f_2(t) \\hat{a} + f_3(t) \\hat{a}^2 \\right) \\hat{b}^\\dagger + f_4(t) \\hat{b}^\\dagger \\hat{c} + \\text{h.c.} \n\\end{equation}\nwhere\n\\begin{equation}\n \\begin{split}\n f_1(t) =& \\sum_n \\frac{\\alpha^{(1)}_n e^{-i(E_{n,e} - E_{n-1,g})t}}{\\mele{\\widetilde{n,e}}{\\hat{a}^\\dagger \\hat{b}^\\dagger}{\\widetilde{n-1,g}}} \\\\\n f_2(t) =& \\sum_n \\frac{\\alpha^{(2)}_n e^{-i(E_{n,e} - E_{n+1,g})t}}{\\mele{\\widetilde{n,e}}{\\hat{a} \\hat{b}^\\dagger}{\\widetilde{n+1,g}}} \\\\\n f_3(t) =& \\sum_n \\frac{\\alpha^{(3)}_n e^{-i(E_{n,e} - E_{n+2,g})t}}{\\mele{\\widetilde{n,e}}{\\hat{a}^2 \\hat{b}^\\dagger}{\\widetilde{n+2,g}}} \\\\\n f_4(t) =& \\Omega \\sum_n \\frac{e^{-i(E_{n,e} - E_{n,g})t}}{\\mele{\\widetilde{n,e}}{\\hat{b}^\\dagger}{\\widetilde{n,g}}} \n \\end{split}\n\\end{equation}\nand $\\ket{\\widetilde{n,g(e)}}$ are the dressed eigenstates of $\\hat{H}_{ab}$ with energies $E_{n,g(e)}$.\nNotice that the dressed states $\\ket{\\widetilde{n,g}}$ replace the bare Fock states $\\ket{n,g}$ in our definition of the logical basis in Eq.~(\\ref{eq:logical_states}) and the matrix elements such as $\\mele{\\widetilde{n,e}}{\\hat{a}^\\dagger \\hat{b}^\\dagger}{\\widetilde{n-1,g}}$ will be close but not equal to $\\sqrt{n}$.\nThe drivings $f_{1\\sim 4}(t)$ engineer couplings between dressed states rather than the bare Fock states.\n\nThe relevant dissipators are $\\{ \\sqrt{\\kappa} \\hat{a}, \\sqrt{\\kappa_q} \\hat{c} \\}$, but to be more realistic we also include an extra dissipator 
$\\sqrt{\\kappa} \\hat{b}$ in the simulation.\nThe coupling strength $\\Omega$ between $\\hat{b}$ and $\\hat{c}$ is chosen such that the effective decay rate for $\\hat{b}$ after adiabatically eliminating $\\hat{c}$~\\cite{Reiter2012a} is still $4\\Omega^2\/\\kappa_q=2\\pi \\times 20~\\text{MHz}$, same as the value used in the numerical optimization. We set the decay rate of $\\hat{c}$ as $\\kappa_q\/2\\pi=100~\\text{MHz}$.\n\nThe Hamiltonian Eq.~(\\ref{eq:H_physical}) can be furthermore implemented with a circuit model\n\\begin{equation}\\label{eq:H_circuit}\n \\begin{split}\n \\hat{H} =& \\omega_a \\hat{a}^\\dagger \\hat{a} + \\omega_{ge} \\ket{e} \\bra{e} + (\\omega_{ge} + \\omega_{ef}) \\ket{f} \\bra{f}+ \\omega_c \\hat{c}^\\dagger \\hat{c} \\\\\n &+ g_1 (\\hat{a}^\\dagger \\ket{g}\\bra{e} + \\hat{a}\\ket{e}\\bra{g}) + g_2 (\\hat{a}\\ket{f}\\bra{e} + \\hat{a}^\\dagger \\ket{e}\\bra{f}) \\\\\n &+ \\varepsilon_1 (t) \\left[ g_{ab}^{(1)} \\cos \\left( \\varphi_a (\\hat{a}+\\hat{a}^\\dagger) + \\varphi_b (\\hat{b}+\\hat{b}^\\dagger) \\right) \\right. \\\\\n &\\qquad \\qquad \\left. + g_{bc}^{(1)} \\cos \\left( \\varphi_b (\\hat{b}+\\hat{b}^\\dagger) + \\varphi_c (\\hat{c}+\\hat{c}^\\dagger) \\right) \\right] \\\\\n &+ \\varepsilon_2 (t) \\left[ g_{ab}^{(2)} \\sin \\left( \\varphi_a (\\hat{a}+\\hat{a}^\\dagger) + \\varphi_b (\\hat{b}+\\hat{b}^\\dagger) \\right) \\right. \\\\\n &\\qquad \\qquad \\left. + g_{bc}^{(2)} \\sin \\left( \\varphi_b (\\hat{b}+\\hat{b}^\\dagger) + \\varphi_c (\\hat{c}+\\hat{c}^\\dagger) \\right) \\right] ,\n \\end{split}\n\\end{equation}\nwhere the drivings are given by\n\\begin{equation}\n \\begin{split}\n \\varepsilon_1 (t) =& - 2 \\text{Re} \\left\\{ \\frac{1}{\\varphi_a \\varphi_b g_{ab}^{(1)}} \\left[ e^{-2i\\omega_a t} f_1(t) + f_2(t) \\right] \\right. \\\\\n &\\left. + \\frac{1}{\\varphi_b \\varphi_c g_{bc}^{(1)}} e^{i(\\omega_c- \\omega_a) t} f_4(t) \\right\\} \\\\\n \\varepsilon_2 (t) =& - 2 \\text{Re} \\left\\{ \\frac{2}{\\varphi_a^2 \\varphi_b g_{ab}^{(2)}} e^{i \\omega_a t} f_3(t) \\right\\} ,\n \\end{split}\n\\end{equation}\nwhich are generated by the two independent flux pump through the larger and smaller loops~\\cite{Kapit2016a} in Fig.~\\ref{fig4}(c).\nAfter Taylor expanding the $\\cos$ and $\\sin$ interaction and dropping fast rotating terms in the rotating frame, we could show the equivalence of Eq.~(\\ref{eq:H_circuit}) to the Hamiltonian Eq.~(\\ref{eq:H_physical}) with $\\Delta_1 = \\omega_{ge}-\\omega_a$ and $\\Delta_2 = \\Delta_1 + \\omega_{ef}-\\omega_a$.\nTo ensure the validity of rotating wave approximation, we place the frequencies at $\\omega_a\/2\\pi=3.5~\\text{GHz}$ and $\\omega_c\/2\\pi=2.5~\\text{GHz}$ with qubit $\\hat{b}$ frequencies from the previous section. We also choose $\\varphi_a=\\varphi_b=\\varphi_c=0.1$ such that higher order terms in the $\\cos$ and $\\sin$ expansions can be safely dropped.\nAll AQEC Hamiltonian parameters as well as the logical basis states are directly imported from the \\texttt{AutoQEC} optimization result instead of using the analytical results in Appendix~\\ref{appendix:b}.\nWe use QuTiP~\\cite{Johansson2012,Johansson2013} for the full circuit simulation.\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.4\\textwidth]{figures\/SI_figure_1.pdf}\n \\caption{(a-b) Wigner functions and photon number distributions for the discovered encoding in Fig.~\\ref{fig2}(c)ii. (c) Single state fidelity $F_{\\theta\\phi}$ at $t=10\\mu$s for the whole Bloch sphere. 
The white dashed line indicates the break-even fidelity. (d) $F_{\\theta\\phi}$ on the Bloch sphere for the $\\sqrt{3}$ code. For all Wigner function plots throughout this paper, the horizontal axis label is $x = \\ave{\\hat{a} + \\hat{a}^\\dagger}\/\\sqrt{2}$ and the vertical axis label is $p = i\\ave{\\hat{a}^\\dagger - \\hat{a}}\/\\sqrt{2}$.}\n \\label{fig_SI_1}\n\\end{figure}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.4\\textwidth]{figures\/SI_figure_2.pdf}\n \\caption{(a-b) Wigner functions and photon number distributions for another variant of the $\\sqrt{3}$ code discovered with a $d=2$ Hamiltonian.}\n \\label{fig_SI_2}\n\\end{figure}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.48\\textwidth]{figures\/SI_figure_3.pdf}\n \\caption{(a) Learning curve for results in Fig.~\\ref{fig_SI_2}. (b) Wigner functions of $\\hat{\\rho}_{\\text{code}}$ at different iterations during training, which shows a relatively good convergence after a few thousand iterations.}\n \\label{fig_SI_3}\n\\end{figure}\n\n\n\\section{Additional comments on the optimization results}\n\\subsection{Exceeding break-even with partial protection}\nWe further investigate the result in Fig.~\\ref{fig2}(c)ii, which represents a class of optimization results that perform better than break-even fidelity but worse than full QEC codes.\nFig.~\\ref{fig_SI_1}(a) shows the Wigner functions for the code subspace as well as both logical states, and Fig.~\\ref{fig_SI_1}(b) shows the photon number distribution for the logical states, where $\\ket{\\psi_0}$ is supported on $\\{ \\ket{1}, \\ket{2}, \\ket{3} \\}$ and only occupies low photon number states, while $\\ket{\\psi_1}$ is supported on $\\{ \\ket{5}, \\ket{6}, \\ket{7} \\}$ and only occupies high photon number states.\n\nTo understand how the logical subspace is preserved under the AQEC Hamiltonian, we plot the single state fidelity $F_{\\theta\\phi}$ over the Bloch sphere (Fig.~\\ref{fig_SI_1}(c)) at $t=10~\\mu$s. 
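Throughout, $F_{\\theta\\phi}$ denotes the fidelity of a single encoded state labeled by Bloch-sphere angles. As a reminder (we assume the conventional parametrization here; the precise definition is the one given in the main text), one may take\n\\begin{equation}\n \\ket{\\psi_{\\theta\\phi}} = \\cos\\tfrac{\\theta}{2} \\ket{\\psi_0} + e^{i\\phi} \\sin\\tfrac{\\theta}{2} \\ket{\\psi_1} , \\qquad F_{\\theta\\phi}(t) = \\mele{\\psi_{\\theta\\phi}}{\\hat{\\rho}_{\\theta\\phi}(t)}{\\psi_{\\theta\\phi}} ,\n\\end{equation}\nwhere $\\hat{\\rho}_{\\theta\\phi}(t)$ is the density matrix obtained by evolving $\\ket{\\psi_{\\theta\\phi}}$ under the AQEC dynamics.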
The logical state $\\ket{\\psi_0}$ is strongly stabilized by the AQEC Hamiltonian with a fidelity 0.985 and $\\ket{\\psi_1}$ is preserved with a lower fidelity 0.598.\nSome of their superposition states ($\\theta,\\phi$ in-between the white dashed line) have fidelities below break-even, but the average fidelity over the whole Bloch sphere still exceeds break-even (Fig.~\\ref{fig2}(e) red dashed line) due to the partial protection in the logical subspace.\nIn comparison, we also plot the single state fidelity for the $\\sqrt{3}$ code (Fig.~\\ref{fig2}(c)i) in Fig.~\\ref{fig_SI_1}(d) which shows a relatively uniform protection for any logical states.\n\nWe could study a simplified example to demonstrate that a partially protected logical subspace exceeds break-even.\nStabilizing Fock states $\\ket{0}$ and $\\ket{2}$ under photon loss error can be implemented with a distance 1 Hamiltonian $\\hat{H}=\\ket{2,e} \\bra{1,g}+\\ket{1,g} \\bra{2,e}$.\nAt long time, both logical states are stabilized with single state fidelities $F_{\\theta=0}(t) \\approx F_{\\theta=\\pi}(t) \\approx 1$ but any coherent superposition state becomes a complete mixture of $\\{\\ket{0}\\bra{0}, \\ket{2}\\bra{2}\\}$.\nThis leads to an average fidelity of $\\frac{2}{3}$ which is better than the break-even fidelity $\\frac{1}{2}$.\nIntuitively, stabilizing both $\\ket{\\psi_0}$ and $\\ket{\\psi_1}$ preserves strictly more information compared to collapsing the whole Bloch sphere to $\\ket{\\psi_0}=\\ket{0}$.\n\n\\subsection{A different $\\sqrt{3}$ code}\nBesides the $\\sqrt{3}$ code explained in the main text, \\texttt{AutoQEC} also discovered another variant of $\\sqrt{3}$ code (Fig.~\\ref{fig_SI_2}(a)) protected by a distance 2 Hamiltonian.\nThe main difference is that $\\ket{\\psi_1} \\in \\{ \\ket{1}, \\ket{4}, \\ket{7} \\}$ instead of $\\{ \\ket{1}, \\ket{4}, \\ket{6} \\}$ (Fig.~\\ref{fig_SI_2}(b)). Following the same procedures as in Appendix~\\ref{appendix:b}, this new code can also be analytically derived as ($F \\approx 99.8\\%$ compared to the numerical results in Fig.~\\ref{fig_SI_2}(a))\n\\begin{equation}\n \\begin{split}\n \\ket{\\psi_0} =& \\sqrt{1-\\frac{1}{\\sqrt{3}}} \\ket{0} + \\frac{1}{\\sqrt[4]{3}} \\ket{3} \\\\\n \\ket{\\psi_1} =& \\sqrt{\\frac{4(7-\\sqrt{3})}{3(7 + \\sqrt{3})}} \\ket{1} - \\sqrt{\\frac{(\\sqrt{3}-1)(7-\\sqrt{3})}{3(7 + \\sqrt{3})}} \\ket{4} \\\\\n & + \\sqrt{\\frac{3-\\sqrt{3}}{3(7 + \\sqrt{3})}} \\ket{7} .\n \\end{split}\n\\end{equation}\n\n\n\\section{Training details}\nWe use Adam optimizer~\\cite{Kingma2015} for the gradient based learning with a learning rate about 0.001. Usually after a few hundred iterations, we can tell whether the training is stuck at a bad local minimum below break-even or not and make a decision on early stops. The training often achieves good convergence after a few thousand iterations and we could lower the learning rate to about 0.0003 for the final learning stage.\n\nWe choose a Fock state cutoff of 20 for the bosonic mode with a total Hilbert space dimension of 40. 
At the beginning of each \\texttt{AutoQEC} run, the real and imaginary parts of the logical states $\\ket{\\psi_0}$ and $\\ket{\\psi_1}$ as length 40 complex vectors are randomly initialized.\nDuring the optimization, in general $\\ket{\\psi_0}$ and $\\ket{\\psi_1}$ will not be perfectly orthogonal to each other after an Adam update step, and therefore we maintain their orthogonality by manually setting $\\ket{\\psi_1} \\rightarrow \\ket{\\psi_1} - \\frac{\\inp{\\psi_0}{\\psi_1}}{\\inp{\\psi_0}{\\psi_0}} \\ket{\\psi_0}$ after each iteration.\n\nFigure~\\ref{fig_SI_3} shows the learning curve for results in Fig.~\\ref{fig_SI_2} discovered with a $d=2$ Hamiltonian. Similar learning curves occur frequently across many runs of \\texttt{AutoQEC}.\nRegarding computational cost, each iteration takes about 12 seconds on 3 CPUs (Intel Xeon CPU E5-2609 v4 @ 1.70GHz) for training with distance 2 Hamiltonians.\n\\texttt{AutoQEC} runs on 3 CPUs because $\\hat{\\rho}_{00}(t),\\hat{\\rho}_{11}(t),\\hat{\\rho}_{10}(t)$ in the definition of $\\bar{F}(t)$ can be evaluated in parallel with three independent master equation time evolutions.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe goal for the construction that I describe in this note was to lift the stability result of \\cite{vineyards} to the setting in which the simplicial filtrations are not necessarily defined on the same set. The ideas in this note were first presented at ATCMS in July 2012. A partial discussion appears in \\cite{facundo2012banff}. \n\nSection \\ref{sec:geodesic} proves that this construction defines a geodesic metric on the collection of finite filtered spaces. There I give an improvement of the stability of persistence which uses these geodesics.\n\n\n\n\\section{Simplicial Homology}\nGiven a simplicial complex $L$ and simplices $\\sigma,\\tau\\in L$, we write $\\sigma\\subseteq \\tau$ whenever $\\sigma$ is a face of $\\tau$. For each integer $\\ell\\geq 0$ we denote by $L^{(\\ell)}$ the $\\ell$-skeleton of $L$.\n\nRecall that given two finite simplicial complexes $L$ and $S$, a \\emph{simplicial map} between them arises from any map $f:L^{(0)}\\rightarrow S^{(0)}$ with the property that whenever $p_0,p_1,\\ldots,p_k$ span a simplex in $L$, then $f(p_0),f(p_1),\\ldots,f(p_k)$ span a simplex of $S$. One does not require that the vertices $f(p_0),f(p_1),\\ldots,f(p_k)$ be all distinct. Given a map $f:L^{(0)}\\rightarrow S^{(0)}$ between the vertex sets of the finite simplicial complexes $L$ and $S$, we let $\\overline{f}:L\\rightarrow S$ denote the induced \\emph{simplicial map}.\n\n\n\n\n\n\nWe will make use of the following theorem in the sequel.\n\\begin{theorem}[Quillen's Theorem A in the simplicial category, \\cite{quillen}] Let $\\zeta:S\\rightarrow L$ be a simplicial map between two finite complexes. Suppose that the preimage of each closed simplex of $L$ is contractible. Then $\\zeta$ is a homotopy equivalence.\n\\end{theorem}\n\n\n\n\n \n\\begin{corollary}\\label{coro:eq-pd}\nLet $L$ be a finite simplicial complex and $\\varphi:Z\\rightarrow L^{(0)}$ be any surjective map with finite domain $Z$. Let $S:=\\{\\tau\\subseteq Z|\\,\\varphi(\\tau)\\in L\\}$. 
Then $S$ is a simplicial complex and the induced simplicial map\n$\\overline{\\varphi}:S\\rightarrow L$ is an homotopy equivalence.\n\\end{corollary}\n\n\\begin{proof}\nNote that $S = \\bigcup_{\\sigma\\in L}\\{\\tau\\subseteq Z|\\,\\varphi(\\tau)=\\sigma\\}$ so it is clear that $S$ is a simplicial complex with vertex set $Z$. That the preimage of each $\\sigma\\in L$ is contractible is trivially true since those preimages are exactly the simplices in $S$. The conclusion follows directly from Quillen's Theorem A.\n\\end{proof}\n\nIn this paper we consider homology with coefficients in a field $\\mathbb{F}$ so that given a simplicial complex $L$, then for each $k\\in\\mathbb{N}$, $H_k(L,\\mathbb{F})$ is a vector space. To simplify notation, we drop the argument $\\mathbb{F}$ from the list and only write $H_k(L)$ for the homology of $L$ with coefficients in $\\mathbb{F}$.\n\n\\section{Filtrations and Persistent Homology}\n\nLet $\\mathcal{F}$ denote the set of all finite \\emph{filtered spaces}: that is pairs $\\mathbf{X}=(X,F_X)$ where $X$ is a finite set and $F_X:\\mathrm{pow}(X)\\rightarrow \\mathbb{R}$ is a monotone function. Any such function is called a \\emph{filtration} over $X$. Monotonicity in this context refers to the condition that $F_X(\\sigma)\\geq F_X(\\tau)$ whenever $\\sigma \\supseteq \\tau.$ Given a finite set $X$, by $\\mathcal{F}(X)$ we denote the set of all possible filtrations $F_X:\\pow{X}\\rightarrow \\mathbb{R}$ on $X$. Given a filtered space $\\mathbf{X}=(X,F_X)\\in\\mathcal{F}$, for each $\\varepsilon\\in \\mathbb{R}$ define the simplicial complex \n$$L_\\varepsilon(\\mathbf{X}):=\\big\\{\\sigma\\subseteq X|\\,F_X(\\sigma)\\leq \\varepsilon\\big\\}.$$\nOne then considers the nested family of simplicial complexes\n\n\n$$L(\\mathbf{X}):=\\big\\{L_{\\varepsilon}(\\mathbf{X})\\subset L_{\\varepsilon'}(\\mathbf{X})\\}_{\\varepsilon\\leq \\varepsilon'}$$\n\n\nwhere each $L_{\\varepsilon}(\\mathbf{X})$ is, by construction, finite. At the level of homology, for each $k\\in \\mathbb{N}$ the above inclusions give rise to a system of vector spaces and linear maps\n\n$$\\mathbb{V}_k(\\mathbf{X}):=\\big\\{V_{\\varepsilon}(\\mathbf{X})\\stackrel{v_{\\varepsilon,\\varepsilon'}}{\\longrightarrow} V_{{\\varepsilon'}}(\\mathbf{X})\\big\\}_{\\varepsilon\\leq \\varepsilon'},$$\n\n\n\nwhich is called a \\emph{persistence vector space.} Note that each $V_{\\varepsilon}(\\mathbf{X})$ is finite dimensional.\n\n\nPersistence vector spaces admit a \\emph{classification up to isomorphism} in terms of collections of intervals so that to the persistence vector space $\\mathbb{V}$ one assigns a multiset of intervals $I(\\mathbb{V})$ \\cite{zz}. These collections of intervals are sometimes referred to as \\emph{barcodes} or also \\emph{persistence diagrams}, depending on the graphical representation that is adopted \\cite{comptopo-herbert}. We denote by $\\mathcal{D}$ the collection of all finite persistence diagrams. An element $D\\in\\mathcal{D}$ is a \\emph{finite} multiset of points $$D= \\{(b_\\alpha,d_\\alpha),\\,0\\leq b_\\alpha\\leq d_\\alpha,\\,\\alpha\\in A\\}$$ for some (finite) index set $A$. Given $k\\in\\mathbb{N}$, to any filtered set $\\mathbf{X}\\in\\mathcal{F}$ one can attach a persistence diagram via \n$$\\mathbf{X}\\longmapsto L(\\mathbf{X}) \\longmapsto \\mathbb{V}_k(\\mathbf{X}) \\longmapsto I\\big(\\mathbb{V}_k(\\mathbf{X})\\big).$$\nWe denote by $\\dk{k}:\\mathcal{F}\\rightarrow \\mathcal{D}$ the resulting composite map. 
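To make these definitions concrete, the following short script (a toy illustration only; the three-point filtered space and its values are made up for this example) enumerates the complexes $L_{\\varepsilon}(\\mathbf{X})$ at the critical values of a small filtration; feeding the resulting nested family to any standard persistent homology package then produces the diagrams $\\dk{k}(\\mathbf{X})$.\n\\begin{verbatim}\n# Toy filtered space X = {a, b, c}: a monotone filtration F_X\n# on the nonempty subsets of X (values are arbitrary choices).\nF = {('a',): 0.0, ('b',): 0.0, ('c',): 0.5,\n     ('a', 'b'): 1.0, ('a', 'c'): 1.5, ('b', 'c'): 1.5,\n     ('a', 'b', 'c'): 2.0}\n\ndef L_eps(F, eps):\n    # L_eps(X) = { sigma subset of X : F_X(sigma) <= eps }\n    return sorted(s for s, v in F.items() if v <= eps)\n\nfor eps in sorted(set(F.values())):\n    print(eps, L_eps(F, eps))\n\\end{verbatim}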
Given $\\mathbf{X}=(X,F_X)$, we will sometimes write $\\dk{k}{(F_X)}$ to denote $\\dk{k}{(\\mathbf{X})}$.\n\n\n\\section{Stability of filtrations}\nThe \\emph{bottleneck distance} is a useful notion of distance between persistence diagrams and we recall its definition next. We will follow the presentation in \\cite{carlsson_2014}. Let $\\Delta\\subset \\mathbb{R}^2_+$ consist of those points which sit above the diagonal: $\\Delta:=\\{(x,y)|\\,x\\leq y\\}.$ \n\n\n\nDefine the \\emph{persistence} of a point $P=(x_P,y_P)\\in\\Delta$ by $\\pers{P}:=y_P-x_P$. \n\n Let $D_1=\\{P_\\alpha\\}_{\\alpha\\in A_1}$ and $D_2=\\{Q_\\alpha\\}_{\\alpha\\in A_2}$ be two persistence diagrams indexed over the finite index sets $A_1$ and $A_2$, respectively. For subsets $B_i\\subseteq A_i$ with $|B_1|=|B_2|$ and any bijection $\\varphi:B_1\\rightarrow B_2$, define\n$$J(\\varphi):=\\max\\left(\\max_{\\beta\\in B_1}\\|P_\\beta-Q_{\\varphi(\\beta)}\\|_\\infty,\\max_{\\alpha\\in A_1\\backslash B_1}\\frac{1}{2}\\pers{P_\\alpha},\\max_{\\alpha\\in A_2\\backslash B_2}\\frac{1}{2}\\pers{Q_\\alpha}\\right).$$\nFinally, one defines the bottleneck distance between $D_1$ and $D_2$ by $$d_{\\mathcal{D}}(D_1,D_2):=\\min_{(B_1,B_2,\\varphi)} J(\\varphi),$$\nwhere $(B_1,B_2,\\varphi)$ ranges over all $B_1\\subset A_1$, $B_2\\subset A_2$, and bijections $\\varphi:B_1\\rightarrow B_2$.\n\n\nOne of the standard results about the stability of persistent homology invariants, which is formulated in terms of the bottleneck distance, is the proposition below which we state in a weaker form that will suffice for our presentation:\\footnote{In \\cite{vineyards} the authors do not assume that the underlying simplicial complex is the full powerset.}\n \\begin{theorem}[\\cite{vineyards}]\\label{theo:stab-vineyards}\n For all finite sets $X$ and filtrations $F,G:\\mathrm{pow}(X)\\rightarrow \\mathbb{R}$, \n $$d_{\\mathcal{D}}(\\dk{k}(F),\\dk{k}(G))\\leq \\max_{\\sigma\\in\\mathrm{pow}(X)}|F(\\sigma)-G(\\sigma)|,$$\n for all $k\\in\\mathbb{N}.$\n \\end{theorem}\n\nThe proof of this theorem offered in \\cite{vineyards} is purely combinatorial and elementary. This result requires that the two filtrations be given on the \\emph{same} set. This restriction will be lifted using the ideas that follow.\n\n\n\\subsection{Filtrations defined over different sets}\n A \\emph{parametrization} of a finite set $X$ is any finite set $Z$ and a surjective map $\\varphi_X:Z\\rightarrow X$.\nConsider a filtered space $\\mathbf{X} = (X,F_X)\\in\\mathcal{F}$ and a parametrization $\\varphi_X:Z\\rightarrow X$ of $X$. By $\\varphi_X^\\ast F_X$ we denote the \\emph{pullback filtration} induced by $F_X$ and the map $\\varphi_X$ on $Z$. This filtration is given by $\\tau\\mapsto F_X(\\varphi_X(\\tau))$ for all $\\tau\\in\\mathrm{pow}(Z).$ \n\n\n\nA useful corollary of the persistent homology isomorphism theorem \\cite[pp. 139]{comptopo-herbert} and Corollary \\ref{coro:eq-pd} is that the persistence diagrams of the original filtration and the pullback filtration are identical.\n\\begin{corollary}\\label{coro:same}\nLet $\\mathbf{X}=(X,F_X)\\in \\mathcal{F}$ and $\\varphi:Z\\twoheadrightarrow X$ a parametrization of $X$. 
Then, for all $k\\in\\mathbb{N}$, $\\dk{k}(\\varphi^\\ast F_X)=\\dk{k}(F_X).$\n\\end{corollary}\n\n\n\\subsubsection{Common parametrizations of two spaces: tripods.}\nNow, given $\\mathbf{X} = (X,F_X)$ and $\\mathbf{Y}=(Y,F_Y)$ in $\\mathcal{F}$, the main idea in comparing filtrations defined on different spaces is to consider parametrizations \n$\\varphi_X:Z\\twoheadrightarrow X$ and $\\varphi_Y:Z\\twoheadrightarrow Y$ of $X$ and $Y$ from a \\emph{common} parameter space $Z$, i.e. \\emph{tripods}:\n\n$$\\xymatrix{ & Z \\ar@{->>}[dl]_{{\\varphi_X}} \\ar@{->>}[dr]^{\\varphi_Y} & \\\\ X\t& & Y}$$\n\nand compare the pullback filtrations $\\varphi_X^*F_X$ and $\\varphi_Y^*F_Y$ on $Z$. Formally, define\n\n\n\n\n\n\n \\begin{multline}\n d_{\\mathcal{F}}\\big(\\mathbf{X},\\mathbf{Y}\\big):=\\\\\\inf\\left\\{\\max_{\\tau\\in\\mathrm{pow}(Z)}\\big|\\varphi^\\ast_X F_X(\\tau)-\\varphi^\\ast_Y F_Y(\\tau)\\big|;\\,\\varphi_X:Z\\twoheadrightarrow X,\\, \\varphi_Y:Z\\twoheadrightarrow Y\\,\\,\\mbox{parametrizations}\\right\\}.\n \\end{multline}\n\n\n\n\\begin{remark} \\label{rem:dist-F-simple} Notice that in case $X=\\{\\ast\\}$ and $F_{\\{\\ast\\}}(\\ast) = c\\in\\mathbb{R}$, then $d_{\\mathcal{F}}(X,Y)=\\max_{\\sigma\\subset Y}\\big|F_Y(\\sigma)-c\\big|,$ for any filtered space $Y$. If $c=0$, $Y=\\{y_1,y_2\\}$ with $F_Y(y_1)=F_Y(y_2)=0$ and $F_{Y}(\\{y_1,y_2\\})=1$. Then, $d_{\\mathcal{F}}(X,Y)=1.$\n\nHowever, still with $c=0$ and $Y=\\{y_1,y_2\\}$, but $F_Y(y_1)=F_Y(y_2)=F_{Y}(\\{y_1,y_2\\})=0$, one has $d_\\mathcal{F}(X,Y)=0$. This means that $d_{\\mathcal{F}}$ is at best a pseudometric on filtered spaces.\n\\end{remark}\n\n\n\\begin{proposition}\n$d_{\\mathcal{F}}$ is a pseudometric on $\\mathcal{F}$.\n\\end{proposition}\n\\begin{proof}\nSymmetry and non-negativity are clear. We need to prove the triangle inequality. Let $\\mathbf{X} = (X,F_X)$, $\\mathbf{Y} = (Y,F_Y)$, and $\\mathbf{W} = (W,F_W)$ in $\\mathcal{F}$ be non-empty and $\\eta_1,\\eta_2>0$ be s.t. \n$$d_{\\mathcal{F}}(\\mathbf{X},\\mathbf{Y})<\\eta_1\\,\\,\\mbox{and}\\,\\,d_{\\mathcal{F}}(\\mathbf{Y},\\mathbf{W})<\\eta_2.$$ \nChoose, $\\psi_X:Z_1\\twoheadrightarrow X$, $\\psi_Y:Z_1\\twoheadrightarrow Y$, $\\zeta_Y:Z_2\\twoheadrightarrow Y$, and $\\zeta_W:Z_2\\twoheadrightarrow W$ surjective such that \n$$\\|F_X\\circ\\psi_X-F_Y\\circ\\psi_Y\\|_{\\ell^\\infty(\\pow{Z_1})}<\\eta_1$$\nand\n$$\\|F_Y\\circ\\zeta_Y-F_W\\circ\\zeta_W\\|_{\\ell^\\infty(\\pow{Z_2})}<\\eta_2.$$\nLet $Z\\subseteq Z_1\\times Z_2$ be defined by $Z:=\\{(z_1,z_2)\\in Z_1\\times Z_2|\\psi_Y(z_1)=\\zeta_Y(z_2)\\}$ and consider the following (pullback) diagram:\n\n$$\\xymatrix{& & Z \\ar[dl]_{{\\pi_1}} \\ar[dr]^{\\pi_2} & & \\\\ & Z_1\\ar[dl]_{\\psi_X} \\ar[dr]^{\\psi_Y}& & Z_2\\ar[dl]_{\\zeta_Y}\\ar[dr]^{\\zeta_W} &\\\\ X & & Y & & W}.$$\n\nClearly, since $\\psi_Y$ and $\\zeta_Y$ are surjective, $Z$ is non-empty.\nNow, consider the following three maps with domain $Z$: $\\phi_X := \\psi_X\\circ\\pi_1$, $\\phi_Y := \\psi_Y\\circ\\pi_1=\\zeta_Y\\circ\\pi_2$, and $\\phi_W:=\\zeta_W\\circ\\pi_2$. These three maps are surjective and therefore constitute parametrizations of $X$, $Y$, and $W$, respectively. 
Then, since $\\pi_i:Z\\rightarrow Z_i$, $i=1,2$, are surjective and $\\psi_Y\\circ {\\pi_1} = \\zeta_Y\\circ\\pi_2$, we have\n\\begin{align*}\nd_{\\mathcal{F}}(\\mathbf{X},\\mathbf{W})&\\leq \\|F_X\\circ\\phi_X-F_W\\circ\\phi_W\\|_{\\ell^\\infty(\\pow{Z})}\\\\\n&\\leq \\|F_X\\circ\\phi_X - F_Y\\circ\\phi_Y\\|_{\\ell^\\infty(\\pow{Z})}+\\|F_Y\\circ\\phi_Y - F_W\\circ\\phi_W\\|_{\\ell^\\infty(\\pow{Z})}\\\\\n&= \\|F_X\\circ\\psi_X - F_Y\\circ\\psi_Y\\|_{\\ell^\\infty(\\pow{Z_1})}+\\|F_Y\\circ\\zeta_Y - F_W\\circ\\zeta_W\\|_{\\ell^\\infty(\\pow{Z_2})}\\\\\n&\\leq \\eta_1+\\eta_2.\n\\end{align*}\nThe conclusion follows by letting $\\eta_1\\searrow d_{\\mathcal{F}}(X,Y)$ and $\\eta_2\\searrow d_{\\mathcal{F}}(Y,W)$.\n\\end{proof}\n\nWe now obtain a lifted version of Theorem \\ref{theo:stab-vineyards}.\n \\begin{theorem} \\label{theo:stab-pullback}\n For all finite filtered spaces $\\mathbf{X} =(X,F_X)$ and $\\mathbf{Y} = (Y,F_Y)$, and all $k\\in\\mathbb{N}$ one has:\n $$d_{\\mathcal{D}}(\\dk{k}(\\mathbf{X}),\\dk{k}(\\mathbf{Y}))\\leq d_{\\mathcal{F}}(\\mathbf{X},\\mathbf{Y}).$$\n \\end{theorem}\n\n\n\n \\begin{proof}[Proof of Theorem \\ref{theo:stab-pullback}]\n Assume $\\varepsilon>0$ is such that $d_{\\mathcal{F}}(F_X,F_Y)<\\varepsilon.$ Then, let $\\varphi_X:Z\\rightarrow X$ and $\\varphi_Y:Z\\rightarrow Y$ be surjective maps from the finite set $Z$ onto $X$ and $Y$, respectively, such that $|\\varphi_X^\\ast F_X(\\tau) - \\varphi_Y^\\ast F_Y(\\tau)|<\\varepsilon$ for all $\\tau\\in\\mathrm{pow}(Z)$. Then, by Theorem \\ref{theo:stab-vineyards}, \n $$d_{\\mathcal{D}}(\\dk{k}(\\varphi_X^\\ast F_X),\\dk{k}(\\varphi_Y^\\ast F_Y))<\\varepsilon$$\n for all $k\\in\\mathbb{N}.$ Now apply Corollary \\ref{coro:same} and conclude by letting $\\varepsilon$ approach $d_{\\mathcal{F}}(\\mathbf{X},\\mathbf{Y})$.\n \\end{proof}\n\n\\begin{remark}\\label{rem:non-tight}\nConsider the case of $\\mathbf{Y}$ being the one-point filtered space $\\{\\ast\\}$ such that $F_{\\{\\ast\\}}(\\{\\ast\\})=0$, and $\\mathbf{X}$ such that $X=\\{x_1,x_2\\}$, and $F_X(\\{x_1\\})=F_X(\\{x_2\\})=0$, $F_X(\\{x_1,x_2\\})=1$. In this case $d_{\\mathcal{F}}(\\mathbf{X},\\mathbf{Y})=1$. However, notice that for $k=0$, $\\mathrm{D}_0(\\mathbf{X}) = \\{[0,\\infty),[0,1)\\}$\n and $\\mathrm{D}_0(\\mathbf{Y}) = \\{[0,\\infty)\\}$. Additionally, for all $k\\geq 1$ one has $\\mathrm{D}_k(\\mathbf{X}) = \\mathrm{D}_k(\\mathbf{Y}) = \\emptyset$. This means that the lower bound provided by Theorem \\ref{theo:stab-pullback} is equal to $\\frac{1}{2}<1=d_{\\mathcal{F}}(\\mathbf{X},\\mathbf{Y})$.\n\\end{remark}\n\n\\section{Filtrations arising from metric spaces: Rips and \\v{C}ech}\nRecall \\cite{burago-book} that for two compact metric spaces $(X,d_X)$ and $(Y,d_Y)$, a correspondence between them is any subset $R$ of $X\\times Y$ such that the natural projections $\\pi_X:X\\times Y\\rightarrow X$ and $\\pi_Y:X\\times Y\\rightarrow Y$ satisfy $\\pi_X(R)=X$ and $\\pi_Y(R)=Y$. 
The distortion of any such correspondence is given by \n$$\\mathrm{dis}(R):=\\sup_{(x,y),(x',y')\\in R}\\big|d_X(x,x')-d_Y(y,y')\\big|.$$\nThen, the Gromov-Hausdorff distance between $(X,d_X)$ and $(Y,d_Y)$ is defined as \n$$\\dgro{X}{Y} := \\frac{1}{2}\\inf_{R}\\mathrm{dis}(R),$$\nwhere the infimum is taken over all correspondences $R$ between $X$ and $Y$.\n\n\\subsection{The Rips filtration}\nRecall the definition of the \\emph{Rips filtration} of a finite metric space $(X,d_X)$: for $\\sigma\\in \\pow{X}$,\n$$F^{\\mathrm{R}}_X(\\sigma)=\\diamms{\\sigma}{X}:=\\max_{x,x'\\in \\sigma}d_X(x,x').$$\n\nThe following theorem was first proved in \\cite{dgw-topo-pers}. A different proof (also applicable to compact metric spaces) relying on the interleaving distance and multivalued maps was given in \\cite{chazal-geom}. Yet another proof avoiding multivalued maps is given in \\cite{dowker-ph}.\n\n\\begin{theorem}\\label{theo:stab-dD-R}\nFor all finite metric spaces $X$ and $Y$, and all $k\\in\\mathbb{N}$,\n$$d_{\\mathcal{D}}\\big(\\dk{k}(F^{\\mathrm{R}}_X),\\dk{k}(F^{\\mathrm{R}}_Y)\\big)\\leq 2\\,\\dgro{X}{Y}.$$\n\\end{theorem}\n\nA different proof of Theorem \\ref{theo:stab-dD-R} can be obtained by combining Theorem \\ref{theo:stab-pullback} and Proposition \\ref{prop:stab-R} below.\n\\begin{proposition}\\label{prop:stab-R}\nFor all finite metric spaces $X$ and $Y$, \n$$d_{\\mathcal{F}}\\big(F^{\\mathrm{R}}_X,F^{\\mathrm{R}}_Y\\big)\\leq 2\\,\\dgro{X}{Y}.$$\n\\end{proposition}\n\\begin{proof}[Proof of Proposition \\ref{prop:stab-R}]\nLet $X$ and $Y$ be s.t. $\\dgro{X}{Y}<\\eta$, and let $R\\subset X\\times Y$ be a correspondence with $|d_X(x,x')-d_Y(y,y')|\\leq 2\\eta$ for all $(x,y),(x',y')\\in R$. Consider the parametrizations given by $Z=R$, $\\varphi_X=\\pi_1:Z\\rightarrow X$ and $\\varphi_Y=\\pi_2:Z\\rightarrow Y$; then \n\\beq{eq:param}\n|d_X(\\varphi_X(t),\\varphi_X(t'))-d_Y(\\varphi_Y(t),\\varphi_Y(t'))|\\leq 2\\eta\n\\end{equation}\nfor all $t,t'\\in Z$. Pick any $\\tau\\in \\pow{Z}$ and notice that\n\n$$\\varphi^*_XF_X^{\\mathrm{R}}(\\tau) = F_X^{\\mathrm{R}}(\\varphi_X(\\tau)) = \\max_{t,t'\\in \\tau}d_X(\\varphi_X(t),\\varphi_X(t')).$$\n\nNow, similarly, write \n\\begin{multline}\\varphi^*_YF_Y^{\\mathrm{R}}(\\tau) = \\max_{t,t'\\in \\tau}d_Y(\\varphi_Y(t),\\varphi_Y(t'))\\leq \\max_{t,t'\\in\\tau}d_X(\\varphi_X(t),\\varphi_X(t'))+2\\eta = \\varphi^*_XF_X^{\\mathrm{R}}(\\tau) + 2\\eta,\n\\end{multline}\n\nwhere the last inequality follows from \\refeq{eq:param}. 
The proof follows by interchanging the roles of $X$ and $Y$.\n\\end{proof}\n\n\n\\subsection{The \\v{C}ech filtration}\nAnother interesting and frequently used filtration is the \\emph{\\v{C}ech filtration}: for each $\\sigma\\in\\pow{X}$, \n$$F^\\mathrm{C}_X(\\sigma) := \\mathbf{rad}_X(\\sigma)=\\min_{p\\in X}\\max_{x\\in \\sigma}d_X(x,p).$$ That is, the filtration value of each simplex corresponds to its \\emph{circumradius}.\n\\begin{proposition}\\label{prop:stab-C}\nFor all finite metric spaces $X$ and $Y$, \n$$d_{\\mathcal{F}}\\big(F^{\\mathrm{C}}_X,F^{\\mathrm{C}}_Y\\big)\\leq 2\\,\\dgro{X}{Y}.$$\n\\end{proposition}\n\nAgain, as a corollary of Theorem \\ref{theo:stab-pullback} and Proposition \\ref{prop:stab-C} we have the following \n\\begin{theorem}\\label{theo:stab-dD-C}\nFor all finite metric spaces $X$ and $Y$, and all $k\\in\\mathbb{N}$,\n$$d_{\\mathcal{D}}\\big(\\dk{k}(F^{\\mathrm{C}}_X),\\dk{k}(F^{\\mathrm{C}}_Y)\\big)\\leq 2\\,\\dgro{X}{Y}.$$\n\\end{theorem}\nA proof of this theorem via the interleaving distance and multi-valued maps has appeared in \\cite{chazal-geom}.\\footnote{The version in \\cite{chazal-geom} applies to compact metric spaces.} Another proof avoiding multivalued maps is given in \\cite{dowker-ph}.\n\n\\begin{proof}[Proof of Proposition \\ref{prop:stab-C}]\n\nThe proof is similar to that of Proposition \\ref{prop:stab-R}. Pick any $\\tau\\in \\pow{Z}$; then\n\n$$\\varphi^*_XF_X^{\\mathrm{C}}(\\tau) = F_X^{\\mathrm{C}}(\\varphi_X(\\tau)) = \\min_{p\\in X}\\max_{t\\in \\tau}d_X(p,\\varphi_X(t))=\\max_{t\\in \\tau}d_X(p_\\tau,\\varphi_X(t))$$\nfor some $p_\\tau\\in X$. Let $t_\\tau\\in Z$ be s.t. $\\varphi_X(t_\\tau)=p_\\tau$, and from the above obtain\n$$\\varphi^*_XF_X^{\\mathrm{C}}(\\tau) = \\max_{t\\in \\tau}d_X(\\varphi_X(t_\\tau),\\varphi_X(t)).$$\n\nNow, similarly, write \n\\begin{multline}\\varphi^*_YF_Y^{\\mathrm{C}}(\\tau) = \\min_{q\\in Y}\\max_{t\\in \\tau}d_Y(q,\\varphi_Y(t))\\leq \\max_{t\\in\\tau}d_Y(\\varphi_Y(t_\\tau),\\varphi_Y(t))\\leq \\max_{t\\in\\tau}d_X(\\varphi_X(t_\\tau),\\varphi_X(t))+2\\eta\\\\ = \\varphi^*_XF_X^{\\mathrm{C}}(\\tau) + 2\\eta,\n\\end{multline}\n\nwhere the last inequality follows from \\refeq{eq:param}. The proof follows by interchanging the roles of $X$ and $Y$.\n\\end{proof}\n\n\n\\section{$d_{\\mathcal{F}}$ is geodesic}\\label{sec:geodesic}\nIn this section we construct geodesics between any pair $\\mathbf{X}$ and $\\mathbf{Y}$ of filtered spaces and obtain a strengthening of Theorem \\ref{theo:stab-pullback}.\n\n\n\\subsection{Geodesics}\nGiven $\\mathbf{X}$ and $\\mathbf{Y}$ in $\\mathcal{F}$, consider $\\mathcal{T}^\\mathrm{opt}(\\mathbf{X},\\mathbf{Y})$, the set of all minimizing tripods: that is, all tripods $(Z,\\varphi_X,\\varphi_Y)\\in\\mathcal{T}(\\mathbf{X},\\mathbf{Y})$ (the collection of all tripods between $X$ and $Y$) for which $\\|\\varphi_X^\\ast F_X-\\varphi_Y^*F_Y\\|_{\\ell^\\infty(\\pow{Z})} = d_{\\mathcal{F}}(\\mathbf{X},\\mathbf{Y}).$\n\nFor each minimizing tripod $T=(Z,\\varphi_X,\\varphi_Y)\\in\\mathcal{T}^\\mathrm{opt}(\\mathbf{X},\\mathbf{Y})$ consider the curve $$\\mbox{$\\gamma_T:[0,1]\\rightarrow \\mathcal{F}$ defined by $t\\mapsto \\mathbf{Z_t}:=(Z,F_t)$}$$ where \n $$F_t:=(1-t)\\cdot \\varphi_X^*F_X+t\\cdot \\varphi_Y^* F_Y.$$\n\n\n\\begin{theorem}\nFor each $T\\in\\mathcal{T}^\\mathrm{opt}(\\mathbf{X},\\mathbf{Y})$ the curve $\\gamma_T$ is a geodesic between $\\mathbf{X}$ and $\\mathbf{Y}$. Namely, for all $s,t\\in[0,1]$ one has:\n$$d_{\\mathcal{F}}(\\gamma_T(s),\\gamma_T(t))=|s-t|\\cdot d_{\\mathcal{F}}(\\mathbf{X},\\mathbf{Y}).$$\n\\end{theorem}\n\\begin{proof}\nLet $\\eta = d_{\\mathcal{F}}(\\mathbf{X},\\mathbf{Y})$. 
We check that $$(\\ast )\\,\\,\\,\\,d_{\\mathcal{F}}(\\gamma_T(s),\\gamma_T(t))\\leq |s-t|\\cdot \\eta$$ and notice that this is enough. Indeed, if the inequality in $(\\ast)$ were strict for some $s<t$, the triangle inequality would give $$\\eta=d_{\\mathcal{F}}(\\mathbf{X},\\mathbf{Y})\\leq d_{\\mathcal{F}}(\\gamma_T(0),\\gamma_T(s))+d_{\\mathcal{F}}(\\gamma_T(s),\\gamma_T(t))+d_{\\mathcal{F}}(\\gamma_T(t),\\gamma_T(1))<\\big(s+(t-s)+(1-t)\\big)\\cdot\\eta=\\eta,$$ a contradiction; hence equality must hold for all $s,t\\in[0,1]$. To verify $(\\ast)$, consider the tripod $(Z,\\mathrm{id}_Z,\\mathrm{id}_Z)$ between $\\gamma_T(s)$ and $\\gamma_T(t)$ and note that $F_s-F_t=(t-s)\\,(\\varphi_X^\\ast F_X-\\varphi_Y^\\ast F_Y)$, so that $d_{\\mathcal{F}}(\\gamma_T(s),\\gamma_T(t))\\leq \\|F_s-F_t\\|_{\\ell^\\infty(\\pow{Z})}=|s-t|\\cdot \\eta$.\n\\end{proof}\n\n\\begin{equation} \\nu_0 = \\left\\{ \\begin{array}{ccc}\n\t1 & , & \\theta > 0 \\\\ 0 & , & \\theta < 0\n\\end{array}\\right. \\end{equation}\nSimilarly,\n\\begin{equation} \\nu_1 = \\int_{-\\pi}^{\\pi} \\frac{dk}{2\\pi} \\frac{1}{1 + i e^{ik} \/z} = \\left\\{ \\begin{array}{ccc}\n\t0 & , & \\theta > 0 \\\\ 1 & , & \\theta < 0\n\\end{array}\\right. \\end{equation}\nThus, we have two distinct topological phases with $\\nu_0 = 1$, $\\nu_1 = 0$ ($\\nu_0 = 0$, $\\nu_1 = 1$) for $\\theta > 0$ ($\\theta <0$).\n\nIn position space, the action of $W$ can be deduced directly from \\eqref{eq10a} and \\eqref{eq13}. We obtain\n\\begin{eqnarray}\\label{eq8} \\Psi_0 (x) &\\to& \\frac{\\cos\\theta}{2} \\left[ \\Psi_0 (x-1) - \\Psi_0 (x+1) \\right] \\nonumber\\\\\n&& - \\frac{1 + \\sin\\theta}{2} \\Psi_1 (x-1) - \\frac{1 - \\sin\\theta}{2} \\Psi_1 (x+1)\\nonumber\\\\\n\\Psi_1 (x) &\\to& \\frac{\\cos\\theta}{2} \\left[ \\Psi_1 (x-1) - \\Psi_1 (x+1) \\right] \\nonumber\\\\\n&& - \\frac{1 - \\sin\\theta}{2} \\Psi_0 (x-1) - \\frac{1 + \\sin\\theta}{2} \\Psi_0 (x+1)\n\\nonumber\\\\\n\\end{eqnarray}\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[scale=1.5]{QW-trapped.png}\n\t\\caption{Trapped state at the boundary of two topological phases of a discrete simple-step walk.}\\label{fig:1}\n\\end{figure}\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[scale=0.4]{QW.png}\n\t\\caption{State being reflected at the boundary of two topological phases of a discrete simple-step walk.}\\label{fig:2}\n\\end{figure}\n\nIn Fig.~\\ref{fig:1}, we see that a discrete-time simple-step topological walk gives rise to bound states at $x=0$ and $x=-1$, which are chosen to be the boundary of two distinct topological phases. The initial state was centered around $x=0$ with a small spread $\\Delta x$, and $\\Psi_0(x) = \\Psi_1(x)$. As the system evolves in time, the parts of the state near the boundary diffuse ballistically away from the boundary, but the part at the boundary remains protected. In Fig.~\\ref{fig:2}, we choose the initial state around $x=50$, i.e., entirely within a single topological phase. In this case, we see that the quantum walk diffuses in both directions ballistically. The part of the walk that reaches the boundary of the other phase is reflected back and continues to diffuse away from the boundary ballistically without entering the region of the other topological phase.\n\n\\subsection{Continuous-time limit}\n\nTo go over to the continuous-time limit, we set $\\theta = \\frac{\\pi}{2} - \\epsilon$, and consider a scaling limit in which $\\epsilon \\to 0$ and $n\\to\\infty$, so that the product $n\\epsilon $ remains finite. In this limit, $\\omega_+ \\to \\pi$, and $\\omega_- \\to 0$.\nNotice that $W^2 = \\mathbb{I} + \\mathcal{O} (\\epsilon)$, which is not the case for $W$. 
We will therefore consider the limit of an \\emph{even} number of steps.\n\nSetting\n\\begin{equation}\\epsilon = \\gamma\\Delta t \\ , \\ \\ t = 2n\\Delta t \\end{equation}\nand applying \\eqref{eq8} twice, we obtain\n\\begin{eqnarray}\\label{eq8a} &&\\Psi_0 (x, n+2) - \\Psi_0 (x,n) \\nonumber\\\\\n&& = \\gamma \\Delta t \\left[ \\Psi_1 (x,n) - \\Psi_1 (x-2,n) \\right] \\nonumber\\\\\n&& \\Psi_1 (x, n+2) - \\Psi_1 (x,n) \\nonumber\\\\\n&&= -\\gamma \\Delta t \\left[ \\Psi_0 (x,n) - \\Psi_0 (x+2,n) \\right]\n\\end{eqnarray}\nfrom which we deduce the continuous-time quantum walk in the limit $\\Delta t\\to 0$,\n\\begin{eqnarray}\\label{eq8b} \\frac{\\partial\\Psi_0 (x, t)}{\\partial t} &=& \\gamma \\left[\n\\Psi_1 (x,t) - \\Psi_1 (x-2,t) \\right] \\nonumber\\\\\n\\frac{\\partial\\Psi_1 (x, t)}{\\partial t} &=& \\gamma \\left[\n- \\Psi_0 (x,t) + \\Psi_0 (x+2,t) \\right]\n\\end{eqnarray}\nDefining\n\\begin{equation}\\label{eq27} \\Phi_\\pm (x) = \\pm \\Psi_0 (x) + \\Psi_1 (x-1 ) \\end{equation}\nwe obtain the decoupled equations\n\\begin{equation}\\label{eq37} \\frac{\\partial\\Phi_\\pm (x, t)}{\\partial t} = \\pm \\gamma \\left[\n\\Phi_\\pm (x+1,t) - \\Phi_\\pm (x-1,t) \\right]\n\\end{equation}\nrelated to each other by time reversal.\n\nWorking similarly in the other phase (with $\\nu_0 = 0$), we obtain in the continuous-time limit,\n\\begin{eqnarray}\\label{eq8bb} \\frac{\\partial\\Psi_0 (x, t)}{\\partial t} &=& \\gamma \\left[\n\\Psi_1 (x,t) - \\Psi_1 (x+2,t) \\right] \\nonumber\\\\\n\\frac{\\partial\\Psi_1 (x, t)}{\\partial t} &=& \\gamma \\left[\n- \\Psi_0 (x,t) + \\Psi_0 (x-2,t) \\right]\n\\end{eqnarray}\nIt is easy to see that these are equivalent to the decoupled Eqs.\\ \\eqref{eq37} under the definition $\\Phi_\\pm (x) = \\mp \\Psi_0 (x) + \\Psi_1 (x+1 )$ (\\emph{cf.}\\ with Eq.\\ \\eqref{eq27}).\n\nThe above results are no longer valid if the coin parameters are spatially dependent. In particular, we are interested in the case in which there are two regions in space which have different topological numbers. This can be achieved by having $\\theta > 0$ and $\\theta < 0$ in the respective regions.\nThen the above results are valid in the bulk of each region, but not along their boundaries.\n\nFor the continuous-time limit, we need to consider the limit of $4n$ steps as $n\\to\\infty$. This is because $W^2 \\ne \\mathbb{I} + \\mathcal{O} (\\epsilon)$, due to an obstruction at the boundary of the two regions, but we still have $W^4 = \\mathbb{I} + \\mathcal{O} (\\epsilon)$.\n\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[scale=1.5]{CTQW-trapped.png}\n\t\\caption{Trapped state at the boundary of two topological phases of a continuous simple-step walk.}\\label{fig:3}\n\\end{figure}\n\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[scale=0.4]{CTQW.png}\n\t\\caption{State being reflected at the boundary of two topological phases of a continuous simple-step walk.}\\label{fig:4}\n\\end{figure}\nIn the continuous-time limit, away from the boundary the results match those for a single topological phase; a minimal numerical illustration of this bulk behavior is sketched below. 
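The following short script (not part of the original analysis; lattice size, $\\gamma$, time step, and the Gaussian initial profile are arbitrary choices) integrates Eq.~\\eqref{eq37} for $\\Phi_+$ with a fourth-order Runge--Kutta step and prints the mean position of $|\\Phi_+|^2$, which drifts linearly in time at speed $\\approx 2\\gamma$, i.e., ballistically.\n\\begin{verbatim}\nimport numpy as np\n\n# Bulk check of Eq. (37): dPhi\/dt = gamma * (Phi(x+1) - Phi(x-1)),\n# on a periodic lattice (illustrative parameters only).\nL, gamma, dt, steps = 400, 1.0, 0.01, 2000\nx = np.arange(L)\nphi = np.exp(-((x - L \/ 2.0) ** 2) \/ 50.0)   # real Gaussian profile\n\ndef rhs(p):\n    # p(x+1) -> np.roll(p, -1), p(x-1) -> np.roll(p, 1)\n    return gamma * (np.roll(p, -1) - np.roll(p, 1))\n\nfor n in range(steps):\n    k1 = rhs(phi)\n    k2 = rhs(phi + 0.5 * dt * k1)\n    k3 = rhs(phi + 0.5 * dt * k2)\n    k4 = rhs(phi + dt * k3)\n    phi = phi + (dt \/ 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)\n    if (n + 1) % 500 == 0:\n        prob = phi ** 2 \/ np.sum(phi ** 2)\n        print((n + 1) * dt, np.sum(x * prob))  # center of mass drifts ballistically\n\\end{verbatim}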
More precisely, for $x\\ge 2$, we recover Eq.\\ \\eqref{eq8bb} whereas for $x\\le -3$, we recover Eq.\\ \\eqref{eq8b}.\nNear the boundary (for $-3 < x < 2$), we obtain\n\\begin{eqnarray} \\frac{\\partial\\Psi_0(1)}{\\partial t}\n&=& \\gamma \\Psi_1 (1) \\nonumber\n\\\\ \\frac{\\partial\\Psi_1 (1)}{\\partial t} &=& - \\gamma \\left[ \\Psi_0 (1) - \\Psi_0 (3) \\right] \\nonumber\n\\\\ \\frac{\\partial\\Psi_0(0)}{\\partial t}\n&=& 0 \\nonumber\\\\\n\\frac{\\partial\\Psi_1 (0)}{\\partial t} &=& \\gamma \\Psi_0 (2) \\nonumber\n\\\\\n\\frac{\\partial\\Psi_0 (-1)}{\\partial t} &=& 0\n\\\\\n\\frac{\\partial\\Psi_1(-1)}{\\partial t}\n&=& \\gamma \\Psi_0 (-3) \\nonumber \\nonumber\\\\\n\\frac{\\partial\\Psi_0 (-2)}{\\partial t} &=& \\gamma \\Psi_1 (-2) \\nonumber\\\\\n\\frac{\\partial\\Psi_1(-2)}{\\partial t}\n&=& -\\gamma \\left[ \\Psi_0 (-2) - \\Psi_0 (-4) \\right]\n\\end{eqnarray}\nNotice that $\\Psi_0 (0)$ and $\\Psi_1 (-1)$ decouple, so if initially $\\Psi_0(0) =1$, then the walk is trapped at $x=0$, and similarly for $\\Psi_1(-1)$. Moreover, if a walk starts entirely within one of the topologically distinct regions (or at the boundary, $x=0$), it remains in it. This matches the asymptotic behavior observed in the discrete-time case (cf. Fig.~\\ref{fig:1}).\n\nBy defining\n$\\Phi_\\pm$ as in \\eqref{eq27}, we recover the equations of motion for decoupled walks\n\\eqref{eq37} away from the boundary (for $|x|\\ge 2$). In the continuous-time limit of the simple-step quantum walk, we see the same behavior as we did for the discrete quantum walk. We have the same two bound states near the boundary at $x = 0,1$, respectively. This is shown in Fig.~\\ref{fig:3}. Away from the boundary, the system diffuses ballistically, and is reflected at the boundary, as shown in Fig.~\\ref{fig:4}.\n\n\\section{Split-step Walk}\n\\label{sec:2}\n\nAs with the previous section we rederive the discrete-time case here for completeness. 
The original results can be found in \\cite{Kit,Kitagawa2010,Obuse2015}; our new results in the continuous-time limit follow thereafter.\n\n\\subsection{Discrete-time walk}\nFor the split-step walk, we flip two coins, $T(\\theta_1)$ and $T(\\theta_2)$, where\n$T(\\theta)$ is defined in \\eqref{eqTtheta}.\nA step of the walker is represented by\n\\begin{equation} U = S_-T(\\theta_2)S_+ T(\\theta_1) \\end{equation}\nIt is convenient to define the repeated block\n\\begin{equation}\\label{eq10as} U' = e^{-i \\frac{\\theta_1}{2} Y} Z S_- e^{-i \\theta_2 Y} Z S_+ e^{-i\\frac{\\theta_1}{2} Y} \\end{equation}\nThe advantage of working with $U'$ instead of $U$ is that $U'$ is of the form\n\\begin{equation}\\label{eq10s} U' = -F X F^{-1} X \\end{equation}\nwhere\n\\begin{equation}\\label{eqFsplit} F = e^{-i\\frac{\\theta_1}{2} Y} ZS_- e^{-i\\frac{\\theta_2}{2} Y}\n\\end{equation}\nAfter switching to a frame in which $X$ is diagonal, we arrive at $W$ given by \\eqref{eq13}, and $W_c$ acting on a coin (Eq.\\ \\eqref{eq15}) as\n\\begin{equation} W_c\n= \\begin{pmatrix}\n\t\\beta_0 & \\beta_1 \\\\ -\\beta_1^\\ast & \\beta_0\n\\end{pmatrix} \\end{equation}\nwhere\n\\begin{eqnarray} \\beta_0 &=& \\cos k \\cos\\theta_1\\cos\\theta_2 + \\sin\\theta_1\\sin\\theta_2 \\nonumber\\\\\n\\beta_1 &=& -(i\\sin k + \\cos k \\sin\\theta_1)\\cos\\theta_2 + \\cos\\theta_1\\sin\\theta_2\\ \\ \\\n\\end{eqnarray}\nThe eigenvalues are\n\\begin{equation} e^{-i\\omega_\\pm} = \\beta_0 \\mp i\\sqrt{1-\\beta_0^2} \\end{equation}\nSimilarly, for $G_c$ defined by \\eqref{eq13} and \\eqref{eq15} with $F$ given by\n\\eqref{eqFsplit}, we obtain\n\\begin{equation} G_c\n= e^{-ik\/2}\\begin{pmatrix}\n\t\\gamma_0 & \\gamma_1 \\\\ \\gamma_1^\\ast & -\\gamma_0^\\ast\n\\end{pmatrix} \\end{equation}\nwhere\n\\begin{eqnarray} \\gamma_0 &=& \\cos \\frac{k}{2} \\cos\\frac{\\theta_-}{2} + i \\sin \\frac{k}{2} \\sin\\frac{\\theta_+}{2} \\nonumber\\\\\n\\gamma_1 &=& \\cos \\frac{k}{2} \\sin\\frac{\\theta_-}{2} - i \\sin \\frac{k}{2} \\cos\\frac{\\theta_+}{2}\\ \\ \\\n\\end{eqnarray}\nand we defined $\\theta_\\pm = \\theta_1 \\pm \\theta_2$. (Note that unitarity of $G_c$ requires $|\\gamma_0|^2+|\\gamma_1|^2=1$, which fixes the $\\sin\\frac{\\theta_+}{2}$ factor in $\\gamma_0$; this is also consistent with the expressions for $z_0$ and $z_1$ below.)\n\nFor the two topological invariants, we obtain, respectively,\n\\begin{equation} \\nu_0 = \\int_{-\\pi}^{\\pi} \\frac{dk}{2\\pi}\\frac{1}{1-z_0 e^{ik} } \\ , \\ \\ z_0 = \\frac{\\cos\\frac{\\theta_+}{2} + \\sin\\frac{\\theta_-}{2}}{\\cos\\frac{\\theta_+}{2} - \\sin\\frac{\\theta_-}{2}} \\end{equation}\nand\n\\begin{equation} \\nu_1 = \\int_{-\\pi}^{\\pi} \\frac{dk}{2\\pi}\\frac{1}{1+ z_1 e^{ik} } \\ , \\ \\ z_1 = \\frac{\\cos\\frac{\\theta_-}{2} - \\sin\\frac{\\theta_+}{2}}{\\cos\\frac{\\theta_-}{2} + \\sin\\frac{\\theta_+}{2}}\n\\end{equation}\nIt is easy to see that\n\\begin{equation} \\nu_\\alpha = \\left\\{ \\begin{array}{ccc}\n\t1 & , & |z_\\alpha| < 1 \\\\ 0 & , & |z_\\alpha|>1\n\\end{array}\\right. 
\\ , \\ \\\n\\alpha = 0,1~.\n\\end{equation}\nThus we obtain four different topological phases,\n\\begin{align}\n\tI (\\nu_0,\\nu_1) &= (1, 1)\\\\\n\tII (\\nu_0,\\nu_1) &= (0, 0)\\\\\n\tIII (\\nu_0,\\nu_1) &= (0,1)\\\\\n\tIV (\\nu_0,\\nu_1) &= (1,0)\n\\end{align}\nIn position space, the action of $W$ yields\n\\begin{widetext}\n\\begin{eqnarray}\\label{eq8split} \\Psi_0 (x) &\\to& \\frac{\\cos\\theta_1\\cos\\theta_2}{2} \\left[ \\Psi_0 (x-1) + \\Psi_0 (x+1) \\right] + \\sin\\theta_1\\sin\\theta_2 \\Psi_0 (x)\\nonumber\\\\\n&& - \\frac{1 + \\sin\\theta_1}{2}\\cos\\theta_2 \\Psi_1 (x-1) + \\frac{1 - \\sin\\theta_1}{2}\\cos\\theta_2 \\Psi_1 (x+1) + \\cos\\theta_1 \\sin\\theta_2 \\Psi_1 (x)\\nonumber\\\\\n\\Psi_1 (x) &\\to& \\frac{\\cos\\theta_1\\cos\\theta_2}{2} \\left[ \\Psi_1 (x-1) + \\Psi_1 (x+1) \\right] + \\sin\\theta_1\\sin\\theta_2 \\Psi_1 (x) \\nonumber\\\\\n&& - \\frac{1 - \\sin\\theta_1}{2}\\cos\\theta_2 \\Psi_0 (x-1) + \\frac{1 + \\sin\\theta_1}{2}\\cos\\theta_2 \\Psi_0 (x+1) - \\cos\\theta_1 \\sin\\theta_2 \\Psi_0 (x)\n\\end{eqnarray}\n\\end{widetext}\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[scale=1.5]{SQW_type1-trapped.png}\n\t\\caption{Trapped state at the boundary of topological phases \\emph{I} and \\emph{II} of a discrete split-step walk.}\\label{fig:5}\n\\end{figure}\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[scale=0.35]{SQW1.png}\n\t\\caption{State being reflected at the boundary of topological phases \\emph{I} and \\emph{II} of a discrete split-step walk.}\\label{fig:7}\n\\end{figure}\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[scale=1.5]{SQW_type2-trapped.png}\n\t\\caption{Trapped state at the boundary of topological phases \\emph{I} and \\emph{III} of a discrete split-step walk.}\\label{fig:6}\n\\end{figure}\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[scale=0.35]{SQW2.png}\n\t\\caption{State being reflected at the boundary of topological phases \\emph{I} and \\emph{III} of a discrete split-step walk.}\\label{fig:8}\n\\end{figure}\n\nFigures \\ref{fig:5} and \\ref{fig:7} show the behavior of a discrete split-step quantum walk with topological phases \\emph{III} for $x\\ge 0$ and \\emph{IV} for $x<0$. In Fig.~\\ref{fig:5}, we see the same two bound states near the boundary of the two phases as in the case of the simple-step walk. This is expected, because the boundary between phases \\emph{III} and \\emph{IV} is equivalent to the boundary between two topologically distinct phases of a simple-step quantum walk. This behavior has been demonstrated experimentally by Kitagawa \\emph{et al.}~\\cite{Kitagawa2012}. Figure \\ref{fig:7} shows the behavior of the split-step quantum walk away from the boundary. In particular, a system whose initial state is entirely within a single topological phase will never leave the region of that phase. When such a state comes in contact with the boundary between two phases, it is reflected back.\n\nFigures \\ref{fig:6} and \\ref{fig:8} show the behavior of a discrete split-step quantum walk with topological phases \\emph{I} for $x\\ge 0$ and \\emph{III} for $x<0$. Unlike in the previous case, there is a single bound state at the boundary between phases \\emph{I} and \\emph{III}, at $x=-1$. This has also been observed experimentally~\\cite{Kitagawa2012}. 
Away from the boundary, we obtain the same qualitative behavior as in the previous case, including reflection of the state at the boundary.\n\n\\subsection{Continuous-time limit}\n\nThe continuous-time limit is obtained as $\\theta_1 \\to\\pm \\frac{\\pi}{2}$, $\\theta_2\\to 0$, or $\\theta_2 \\to\\pm \\frac{\\pi}{2}$, $\\theta_1\\to 0$. These limits correspond to the four distinct topological phases listed above:\n\\begin{align}\n\tI (\\theta_1,\\theta_2) &= (0, \\frac{\\pi}{2})\\\\\n\tII (\\theta_1,\\theta_2) &= (0, -\\frac{\\pi}{2})\\\\\n\tIII (\\theta_1,\\theta_2) &= (\\frac{\\pi}{2},0)\\\\\n\tIV (\\theta_1,\\theta_2) &= (-\\frac{\\pi}{2},0)\n\\end{align}\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[scale=1.5]{CTSQW_type1-trapped.png}\n\t\\caption{Trapped state at the boundary of topological phases \\emph{I} and \\emph{II} of a continuous split-step walk.}\\label{fig:9}\n\\end{figure}\n\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[scale=0.35]{CTSQW1.png}\n\t\\caption{State being reflected at the boundary of topological phases \\emph{I} and \\emph{II} of a continuous split-step walk.}\\label{fig:10}\n\\end{figure}\nIn phase $I$, we set $\\theta_1 = \\epsilon_1$, $\\theta_2 = \\frac{\\pi}{2} - \\epsilon_2$, and consider the scaling limit in which $\\epsilon_{1,2}\\to 0$, $n\\to\\infty$, so that the products $n\\epsilon_{1,2}$ remain finite. In this limit, $\\omega_\\pm \\to \\pm \\frac{\\pi}{2}$. We have $W^2 = -\\mathbb{I} + \\mathcal{O} (\\epsilon_{1,2})$. We will therefore multiply the wavefunction by a phase $i$, and consider the limit of an even number of steps. Setting\n\\begin{equation} \\epsilon_{1,2} = \\gamma_{1,2} \\Delta t \\ , \\ \\ t = 2n\\Delta t \\end{equation}\nwe obtain from \\eqref{eq8split} the equations of motion in the limit $\\Delta t \\to 0$,\n\\begin{eqnarray}\\label{eqomI} \\frac{\\partial\\Psi_0 (x)}{\\partial t} &=& -2\\gamma_1 \\Psi_1 (x) - \\gamma_2 \\left[ \\Psi_1 (x-1) + \\Psi_1 (x+1) \\right] \\nonumber\\\\\n\\frac{\\partial\\Psi_1(x)}{\\partial t} &=& 2\\gamma_1 \\Psi_0 (x) + \\gamma_2 \\left[ \\Psi_0 (x-1) + \\Psi_0 (x+1) \\right]\\ \\ \\ \\\n\\end{eqnarray}\nWorking similarly, we obtain the same equations of motion \\eqref{eqomI} in the continuous-time limit in phase $II$.\n\nDefining\n\\begin{equation}\\label{eq47} \\Phi_\\pm (x) = \\pm i \\Psi_0 (x) + \\Psi_1 (x-1) \\end{equation}\nwe obtain the decoupled equations of motion,\n\\begin{equation}\\label{eq48} \\frac{\\partial\\Phi_\\pm (x)}{\\partial t} = \\pm 2i\\gamma_2 \\Phi_\\pm (x) \\pm i \\gamma_1 \\left[ \\Phi_\\pm (x-1) + \\Phi_\\pm (x+1) \\right] \\end{equation}\nIn phase $III$, we obtain the equations of motion\n\\begin{eqnarray}\\label{eqomIII} \\frac{\\partial\\Psi_0 (x)}{\\partial t} &=& \\gamma_1 \\left[ \\Psi_1 (x) +\\Psi_1 (x-2) \\right] +2\\gamma_2 \\Psi_1 (x-1)\\nonumber\\\\\n\\frac{\\partial\\Psi_1(x)}{\\partial t} &=& - \\gamma_1 \\left[ \\Psi_0 (x) +\\Psi_0 (x+2) \\right] - 2\\gamma_2 \\Psi_0 (x+1) \\ \\ \\ \\ \\\n\\end{eqnarray}\nand in phase $IV$,\n\\begin{eqnarray}\\label{eqomIV} \\frac{\\partial\\Psi_0 (x)}{\\partial t} &=& \\gamma_1 \\left[ \\Psi_1 (x) +\\Psi_1 (x+2) \\right] +2\\gamma_2 \\Psi_1 (x+1)\\nonumber\\\\\n\\frac{\\partial\\Psi_1(x)}{\\partial t} &=& - \\gamma_1 \\left[ \\Psi_0 (x) +\\Psi_0 (x-2) \\right] - 2\\gamma_2 \\Psi_0 (x-1) \\ \\ \\ \\ \\\n\\end{eqnarray}\nThey can be put into the decoupled form \\eqref{eq48}, if we define $\\Phi_\\pm$ as in \\eqref{eq47} in phase $III$, and $\\Phi_\\pm (x) = \\pm i \\Psi_0 (x) + \\Psi_1 
(x+1)$ in phase $IV$.\n\nThere are six different boundaries, but only two are qualitatively different. We proceed to consider a representative from each type.\n\nFor a system in phase $III$ for $x\\ge 0$, and phase $IV$ for $x<0$,\nworking as before, we obtain for $x\\le -3$ the equations of motion \\eqref{eqomIII}, and for $x\\ge 1$, the equations of motion \\eqref{eqomIV}. Near the boundary of the two phases, we have\n\t\\begin{eqnarray}\\label{eq53}\n\t\\frac{\\partial\\Psi_0 (-2)}{\\partial t} &=& \\gamma_1 \\Psi _1(-2) + 2\\gamma_2\\Psi _1(-1)\\nonumber\\\\\n\t\\frac{\\partial\\Psi_1 (-2)}{\\partial t} &=& -\\gamma_1\\left( \\Psi _0(-2) + \\Psi_0(-4) \\right) - 2\\gamma_2\\Psi _0(-3)\\nonumber\\\\\n\t\t\\frac{\\partial\\Psi_0 (-1)}{\\partial t} &=& 0\\nonumber\\\\\n\t\t\\frac{\\partial\\Psi_1 (-1)}{\\partial t} &=& -\\gamma_1\\Psi_0(-3) - 2\\gamma_2\\Psi_0(-2)\\nonumber\\\\\n\t\t\\frac{\\partial\\Psi_0 (0)}{\\partial t} &=& 0 \\nonumber\\\\\n\t\t\\frac{\\partial\\Psi_1 (0)}{\\partial t} &=& -\\gamma_1\\Psi _0(2) - 2\\gamma_2\\Psi_0(1)\\nonumber\\\\\n\t\t\\frac{\\partial\\Psi_0(1)}{\\partial t} &=& \\gamma_1\\Psi_1(1) + 2\\gamma_2\\Psi_1(0) \\nonumber\\\\\n\t\t\\frac{\\partial\\Psi_1(1)}{\\partial t} &=& -\\gamma_1\\left( \\Psi_0(1) + \\Psi_0(3) \\right) - 2\\gamma_2\\Psi_0(2)\n\t\\end{eqnarray}\n\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[scale=1.5]{CTSQW_type2-trapped.png}\n\t\\caption{Trapped state at the boundary of topological phases \\emph{I} and \\emph{III} of a continuous split-step walk.}\\label{fig:11}\n\\end{figure}\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[scale=0.35]{CTSQW2.png}\n\t\\caption{State being reflected at the boundary of topological phases \\emph{I} and \\emph{III} of a continuous split-step walk.}\\label{fig:12}\n\\end{figure}\n\n\nSimilarly, for a system in phase $I$ for $x\\ge 0$, and phase $III$ for $x<0$,\nwe obtain for $x\\ge 1$, the equations of motion \\eqref{eqomI}, and for $x\\le -3$, the equations of motion \\eqref{eqomIII}.\nNear the boundary, we have\n\\begin{eqnarray}\\label{eq54}\n\\frac{\\partial\\Psi_0 (-2)}{\\partial t} &=& \\gamma_1\\left[\\Psi _1(-4)+\\Psi _1(-2)\\right] + 2\\gamma_2\\Psi _1(-3) \\nonumber\\\\\n\\frac{\\partial\\Psi_1 (-2)}{\\partial t} &=& -\\gamma_1\\Psi _0(-2)-2\\gamma_2 \\Psi _0(-1)\\nonumber\\\\\n\\frac{\\partial\\Psi_0 (-1)}{\\partial t} &=& \\gamma_1\\Psi _1(-3) +2\\gamma_2 \\Psi _1(-2)\\nonumber\\\\\n\\frac{\\partial\\Psi_1 (-1)}{\\partial t} &=& 0 \\nonumber\\\\\n\\frac{\\partial\\Psi_0 (0)}{\\partial t} &=& - \\gamma_2\\Psi _1(1)\\nonumber\\\\\n\\frac{\\partial\\Psi_1 (0)}{\\partial t} &=& \\gamma _2\\Psi _0(1)\\nonumber\\\\\n\\end{eqnarray}\nFigures \\ref{fig:9}, \\ref{fig:10}, \\ref{fig:11}, and \\ref{fig:12} depict the continuous-time limit of the discrete quantum walks shown in Figs. \\ref{fig:5}, \\ref{fig:6}, \\ref{fig:7}, and \\ref{fig:8}, respectively. As expected, the observed behavior matches the asymptotic behavior at and away from the boundary in the discrete case. It should be noted that in the continuous-time limit, the bound states can be found analytically from \\eqref{eq53} for the boundary between \\emph{III} and \\emph{IV}, and \\eqref{eq54} for the boundary between phases \\emph{I} and \\emph{III}. 
\begin{widetext}

\begin{figure}[htp]
	\centering
	\includegraphics[scale=1.0]{stability.png}
	\caption{Bound states near the boundary between two phases. From left to right, the distributions of $|\Psi_0(x)|^2$, $|\Psi_1(x)|^2$, and $|\Psi_0(x)|^2 + |\Psi_1(x)|^2$ after a short time $t=25$. In each plot, the value $R$ is the ratio of the walk parameters $\gamma_1$ and $\gamma_2$. (a) At the boundary between phases \emph{I} and \emph{II}, with the initial state centered at $\Psi_0(-1)$ and $\Psi_0(0)$. (b) At the boundary between phases \emph{I} and \emph{III}, with the initial state centered at $\Psi_1(-1)$.}\label{fig:13}
\end{figure}

\end{widetext}

As discussed above, the split-step quantum walk with a boundary between the topological phases \emph{III} and \emph{IV} gives rise to two topologically protected bound states at the boundary. In Fig.~\ref{fig:13}, we show that these bound states are robust against small changes in the quantum walk parameters.

\begin{figure}[htp]
	\centering
	\includegraphics[scale=0.75]{ballistic.png}
	\caption{Ballistic diffusion of continuous walks without a boundary. (a) In phase \emph{III} with no boundary, the quantum walk diffuses ballistically. This diffusion can occur to the right or to the left depending on the initial state. Here the initial state was centered at $\Psi_0(0)$ and $\Psi_1(0)$ with $\Psi_1(0) = \pm \Psi_0(0)$. (b) In phase \emph{I} with no boundary, the quantum walk diffuses ballistically with the same behavior regardless of the initial state.}\label{fig:14}
\end{figure}

Figure \ref{fig:14} shows that in the case of a single topological phase, the state diffuses ballistically away from its initial position. In topological phase \emph{III}, one can choose the initial state in such a way that the diffusion occurs in a single direction only. Due to the linearity of the system, it follows that the initial state can also be chosen so that the diffusion occurs in both directions. However, in phase \emph{I}, the diffusion invariably occurs ballistically in both directions, regardless of the choice of initial state.
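As a brief worked example, the ballistic character of the spreading can be read off directly from the decoupled equation \eqref{eq48}. Substituting a plane wave $\Phi_\pm(x,t) = e^{i(kx - \omega_\pm(k) t)}$ into \eqref{eq48} gives the dispersion relation
\begin{equation}
\omega_\pm(k) = \mp\left( 2\gamma_2 + 2\gamma_1 \cos k \right) ,
\end{equation}
so a wave packet built around momentum $k$ moves with group velocity $v_\pm(k) = \partial\omega_\pm/\partial k = \pm 2\gamma_1 \sin k$. The group velocity is bounded in magnitude by $2\gamma_1$, so the wavefunction spreads linearly in time with a maximal front speed of $2\gamma_1$ (in units of the lattice spacing), which is consistent with the ballistic behavior seen in Fig.~\ref{fig:14}.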
D.\\ C.\\ and G.\\ S.\\ thank the Army Research Laboratory, where most of this work was performed, for its hospitality and financial support.}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}