\section{Introduction}
The radiative properties of an atom are a function of the available electromagnetic vacuum modes and are therefore altered by the presence of an optical resonator. Correspondingly, spontaneous emission of an atom can be enhanced or suppressed by placing it inside a cavity, depending on the detuning between the cavity resonance and the atomic transition, as shown in \cite{Kleppner1}. For the last 30 years cavity quantum electrodynamics has been attracting attention as an ideal framework to explore the foundations of quantum mechanics, but also for its wide-ranging applications in metrology (e.g.\ \cite{Kasevich2006}), quantum computing (e.g.\ \cite{Zoller1}), quantum cryptography and the development of novel cavity-assisted laser cooling mechanisms \cite{Domokos2003, Vuletic2001}.\\
In the far-off-resonance case, the interaction of an atom, or more generally a polarizable particle, with the optical resonator can be understood classically as a position-dependent refractive index changing the optical path length of the cavity and therefore its scattering and emission properties. The interaction strength between atom and cavity depends on the dipole moment $\mu$ of the scatterer compared to the overall mode volume $V_{mode}$ of the resonator, and the coupling constant is $g=\mu\sqrt{\omega_A/\left(2 \hbar \epsilon_0 V_{mode}\right)}$, where $\omega_A$ is the frequency of the atomic transition.\\
Quantum mechanically, the single-atom single-mode interaction is best described by the Jaynes-Cummings Hamiltonian \cite{Jaynes63}. In the new dressed basis the excited state is split into two components due to the vacuum fluctuations of the electromagnetic field. The so-called vacuum Rabi splitting is proportional to the atom-cavity coupling constant and is observable when it is larger than the cavity linewidth $\nu_C$ and the atomic transition linewidth $\Gamma$. The cavity-atom interaction can also be characterized by the so-called cooperativity parameter, i.e.\ the ratio of the photon scattering/emission rate into the cavity mode, $\Gamma_C$, to the free-space scattering/emission rate, $\Gamma_{FS}$. For a single-atom single-mode interaction, the cooperativity parameter is given by
	\[C=\frac{\Gamma_C}{\Gamma_{FS}}=\frac{g^2}{2 \kappa \gamma}~. \]
Here $\kappa=\nu_C/2$ is half the cavity loss rate and $\gamma=\Gamma/2$ is half the excited state decay rate.
If $g\gg\kappa+\gamma$ and therefore $C\gg1$, the coherent cavity-atom excitation exchange is much faster than the dissipative decay into free space or the decay of the cavity mode. Therefore an emitted photon stays inside the cavity, eventually gets reabsorbed by the atom, re-emitted into the cavity and so on. Finally, the photon will leave the system either into free space or through one of the mirrors. This regime is called the strong coupling regime.
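As a quick numerical illustration of the figure of merit just defined, the short Python sketch below evaluates the single-atom cooperativity $C=g^2/(2\kappa\gamma)$. The parameter values are the ones quoted for our system later in the paper; the snippet itself is purely illustrative and not part of the analysis.
\begin{verbatim}
def cooperativity(g, kappa, gamma):
    """Single-atom cooperativity C = g^2 / (2*kappa*gamma).
    All rates must share the same units, e.g. 2*pi x MHz."""
    return g**2 / (2 * kappa * gamma)

# Values quoted for this experiment, in units of 2*pi x MHz
g, kappa, gamma = 0.12, 0.8, 2.6
C = cooperativity(g, kappa, gamma)
print(f"C = {C:.4f}")                 # ~0.0035: one atom is weakly coupled
print(f"N for C_coll = 1: {1/C:.0f}") # ~290 atoms at the full coupling g
\end{verbatim}
With the time-averaged effective coupling $g_{eff}=g/\sqrt{2}$ introduced further below, the atom number needed for unit collective cooperativity roughly doubles, consistent with the value of about 600 found later in the paper.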
Depending on the specific application, strong coupling may be a requirement. For example, in the context of cavity cooling it was pointed out in Ref. \cite{Prospects1} that strong coupling is not essential for cavity-assisted laser cooling of two-level atoms, but it is important for cavity cooling of multi-level atoms and molecules. There are two main ways to reach the strong coupling regime. One way is to reduce the mode volume of the cavity and improve the mirror coatings to produce a very large coupling constant $g$ and a small cavity loss rate $\kappa$, as done for example in Refs. \cite{Rempe1, Kimble1, Brennecke2007}.
A different approach, which is also the one followed in this work, involves increasing the number of particles inside the cavity mode and studying the collective strong coupling regime. For $N$ atoms in the cavity mode the coupling scales as $g_{coll}=\sqrt{N} g$ \cite{TavisCummings1968}, the vacuum Rabi splitting scales as \cite{Transmission1}
\[
\Omega=2\sqrt{g_{coll}^2-\left(\frac{\gamma-\kappa}{2}\right)^2}
\]
and the collective cooperativity becomes
\[
C_{coll}=\frac{N g^2}{2\kappa \gamma}~.
\]
The collective coupling of multiple atoms can increase the interaction dramatically. It has been observed in several experiments \cite{Raizen1989,Kasevich2006,Klinner2006,Brennecke2007}. In most experiments (except e.g.\ \cite{Vuletic2003,BadCavity1,BadCavity2}) the coupling constant $g$ for the single atom is at least comparable to $\kappa$. In contrast, our aim is to investigate the coherent interaction between multiple cold atoms and a cavity mode in a lossy optical resonator, where the single atom-cavity interaction is governed by the incoherent atomic decay rate into free space. It is only the large number of atoms that makes our system strongly coupled.\\

In this work we describe our experimental apparatus and present experimental results for the normal mode splitting of the cavity-atom system, a signature of the strong coupling regime. Collective cooperativity parameters in excess of $C_{coll}=180$ are measured in our experiments.\\
A cooperativity parameter much larger than unity will allow us to study collective phenomena in atom-cavity systems, from cavity-assisted cooling of large samples of atoms \cite{Vuletic2003}, a model system for the cooling of molecules, to superradiance into the cavity mode and collective self-organisation of atoms.

\section{Experimental setup}

In the experiment we load a magneto-optical trap (MOT) of cesium atoms from the background gas directly inside the mode of a nearly confocal optical resonator. The cavity and the MOT are depicted in figure \ref{fig:CavityPump}.\\
\begin{figure}[h!]
	\centering
	\includegraphics[width=0.25\textwidth]{Figure1_new2.eps}
	\caption{\label{fig:CavityPump} A schematic of the cavity arrangement and the MOT. The third pair of MOT beams is orthogonal to the image plane.}
\end{figure}
The FWHM linewidth of the cavity's TEM$_{00}$ mode, $\nu_C$, was measured to be $2\pi\times\left(1.60\pm0.05\right)$ MHz. For a free spectral range (FSR) of $2\pi\times\left(1249.5\pm0.4\right)$ MHz this leads to a finesse of $F=\left(780\pm15\right)$.\\ With a cavity length of about 12 cm, the single-mode single-atom coupling constant $g$ is very small: smaller than the cavity half loss rate $\kappa$ and much smaller than the atomic excited state half decay rate $\gamma$. The cavity-atom interaction at the single-atom level is completely governed by the decay into free space, and the influence of the resonator can be treated as a perturbation.
This regime is the so-called ``bad cavity'' limit. The relevant parameters for our system are $\left(g,\kappa, \gamma\right)=2\pi\times\left(0.12,0.8,2.6\right)$ MHz.\\
To increase the coupling as well as the overall mode volume, the cavity in the experiment was chosen to be confocal (the length of the resonator equals the radius of curvature of the mirrors). In this ideal case the higher-order eigenmodes of the cavity are degenerate in frequency with the TEM$_{00}$ mode, so that the coupling to each mode should contribute to the overall interaction.\\

We notice that the overlap of many transverse modes with slightly different frequencies leads to an empty-cavity linewidth larger than the TEM$_{00}$ value of $2\pi\times\left(1.60\pm0.05\right)$ MHz. The degeneracy is lifted due to aberrations on the mirrors and a small deviation from the confocality condition. To measure the TEM$_{00}$ linewidth, the contributions of the higher-order transverse modes were spatially filtered out by a small aperture. For the observation of the vacuum Rabi splitting the filter was then removed. The transmission then contains higher-order transverse mode components and appears broader.\\
\\
Two important parameters need to be controlled separately to study the behaviour of the cloud in the mode with frequency $\omega_C$ illuminated by a pump beam with frequency $\omega_P$. First, the detuning between the cavity pump laser and the atomic transition, $\Delta_A=\omega_P - \omega_A$, needs to be adjustable and stable on long time scales to enable averaging over several experimental runs. This is achieved with a home-built extended cavity diode laser (ECDL) offset-locked to the repumper laser of the MOT. The offset lock \cite{Weitz2004} provides atomic transition stability at an adjustable detuning between $-2\pi\times3$ GHz and $+2\pi\times2$ GHz from the Cs $D_2$ $F=4 \rightarrow F'=5$ transition. The other important experimental parameter is the detuning between the cavity pump laser and the cavity resonance, $\Delta_C=\omega_P - \omega_C$. Since there is a cavity resonance every free spectral range, the maximum detuning is $\Delta_C=\pm\,\mathrm{FSR}/2= \pm\,2\pi \times 624.75$ MHz. The cavity resonance stabilization scheme is described in the next section.
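The cavity numbers quoted above can be cross-checked directly from their definitions; the following sketch (illustrative only) recomputes the finesse from the measured linewidth and free spectral range and verifies the bad-cavity ordering $g<\kappa<\gamma$.
\begin{verbatim}
# Consistency check of the quoted cavity numbers (all in MHz;
# the common factors of 2*pi cancel in the ratios).
nu_c = 1.60     # FWHM linewidth of the TEM00 mode
fsr = 1249.5    # free spectral range

print(f"finesse = {fsr / nu_c:.0f}")     # ~781, matching F = 780 +/- 15
print(f"Delta_C = +/- {fsr / 2} MHz")    # +/- 624.75 MHz

g, kappa, gamma = 0.12, nu_c / 2, 2.6    # kappa = nu_c / 2 = 0.8
assert g < kappa < gamma                 # bad-cavity ordering
\end{verbatim}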
\subsection{Cavity stabilization}

During each experiment it was crucial that the cavity kept a stable, constant resonance frequency $\omega_C$, both with respect to the atomic transition frequency $\omega_A$ and with respect to the probe laser frequency $\omega_P$. We established this by actively stabilizing the cavity to an atomic rubidium resonance. In order to be able to change the cavity resonance frequency, the locking point needed to be offset by an adjustable radio frequency between 500 and 1500 MHz provided by a radio-frequency generator. The stabilization scheme can be seen in figure \ref{fig:CavityStabi}.
\begin{figure}[h]
	\centering
		\includegraphics[width=0.9\textwidth]{Figure2.eps}
	\caption{\label{fig:CavityStabi} The schematic of the experimental setup to control the cavity frequency. The beat signal of the two ECDLs is mixed down with a function generator from R\&S (SMA02A). The error signal to stabilize the cavity piezo is generated with an error-signal-generating circuit (EGC) following the design of Ref. \cite{Weitz2004}.}
\end{figure}

Because of their widespread availability we use a 780 nm extended cavity diode laser (ECDL) \cite{ECDL1995}, stabilized via FM spectroscopy to the rubidium $F=2\rightarrow F'=2,3$ crossover line, as an atomic reference. Another ECDL lasing at a similar wavelength is injected into the cavity on a high-order TEM mode (a so-called v-mode). This is possible in nearly confocal cavities and has the benefit of very small mode overlap with the MOT; no influence of this laser on the cloud could be detected. It is stabilized to the mode using a Pound-Drever-Hall (PDH) lock \cite{PDH2001} and is constantly kept on resonance with it.\\ This laser effectively measures the length of the cavity very precisely: when the cavity length changes, the cavity resonance $\omega_C$ moves and the laser follows. To reference the cavity length to an atomic transition, this laser is then offset-locked via a side-of-filter technique \cite{Weitz2004} to the 780 nm reference laser. The error signal is fed back to the piezo of the cavity, which can change the length of the cavity by about $17\,\mu$m, compensating mostly for slow temperature drifts and low-frequency acoustic noise. The offset lock enables us to change the frequency difference of the two ECDLs so that we can position a cavity resonance at arbitrary detunings from both the atomic cesium transition and the cavity probe laser. The lock is stable over a whole day, with drifts well below one cavity linewidth. The overall stabilization bandwidth is limited by the mechanical resonance of the science cavity spacer to about 800 Hz.\\

\section{Strong coupling in the bad cavity limit}
	One experimental cycle involved loading the MOT for a variable length of time (between 100 and 3500 ms), turning off the MOT cooling beams and the magnetic field, followed by pumping the cavity with a weak probe beam. After a short off time, in which the cavity probe laser or the cavity could be moved in frequency, the cycle started again. The atom number interacting with the mode was controlled by choosing different MOT loading times.
	Three different experimental procedures were implemented to observe the normal mode splitting of the cavity-atom system. Two methods involved observing the transmission of the cavity as a function of the probe laser frequency. The third method relied on collecting the light scattered into free space with a CCD camera during the probe. The observation of the Rabi splitting in the MOT fluorescence confirmed the results, but the data presented in this publication originate solely from transmission measurements.\\

 \begin{figure}
\begin{center}
\subfigure[]{
\resizebox*{6.5cm}{!}{\includegraphics{Figure3a_3.eps}}}
\subfigure[]{
\resizebox*{6.5cm}{!}{\includegraphics{Figure3b_3}}}
\caption{\label{fig2_a} a) Cavity transmission measurements with a weak incident $\sigma_+$ probe beam for different atom numbers in the mode. The cavity is positioned at the Cs $D_2$ $F=4 \rightarrow F'=5$ transition. The red curves are fits with the cavity transmission function from \cite{Transmission1} with free parameters $\kappa$ and $g_{coll}$. The scatter of the data can be explained by shot-to-shot atom number fluctuations as well as probe and cavity resonance frequency drifts. A conservative estimate for the atom number fluctuations is 6\%, and $2\pi\times0.3$ MHz for the maximum drift of the cavity resonance frequency and probe beam frequency during the whole experiment.
b) The collective coupling constant displays the typical $\sqrt{N}$ behaviour, indicating the strong collective coupling regime, with an effective coupling parameter $g_{eff}=g/\sqrt{2}$. The linear fit to the data was used to calibrate the atom number in the mode.}
\label{sample-figure1}
\end{center}
\end{figure}
The first set of transmission measurements used a constant probe laser frequency for each point. The MOT was loaded for a certain time to a specific density, then switched off, and a weak cavity probe beam was turned on with a mechanical shutter about $250\,\mu$s later. The transmitted peak power for most experiments was measured to be below $\left(2.0\pm0.2\right)$ nW. According to Ref. \cite{transmission2}, this corresponds to an intracavity photon number much smaller than the critical photon number that would cause bistable behaviour, and it is therefore suitable for observing a stable Rabi splitting. The shutter was left open for two milliseconds and we averaged the transmission signal on the photodiode over 1 ms. After that, the cavity probe laser was moved in frequency by $2\pi\times2$ MHz and the cycle started again. The scan of the probe beam begins at a detuning of $\Delta_A=-2\pi\times 50$ MHz and ends at $\Delta_A=2\pi\times 50$ MHz, so that 51 points were taken for each curve. The cavity resonance was tuned to the frequency of the atomic transition, so that $\Delta_A-\Delta_C=\omega_C-\omega_A=0$. For a rough alignment the empty-cavity transmission was compared to a Doppler-free cesium spectroscopy signal while scanning. Then the cavity was locked and a first transmission measurement taken. If there was an amplitude difference between the two peaks, indicating a detuning between the atomic and cavity resonance frequencies, the cavity was moved with the offset lock until the transmission peaks were similar in strength.\\
Figure \ref{fig2_a} shows data taken for the cavity positioned at the Cs $D_2$ $F=4\rightarrow F'=5$ transition. With respect to the empty cavity resonance, the doublet splits with increasing number of atoms according to the $\sqrt{N}$ behaviour. The acquired data were fitted with the theoretical curve for the cavity transmission in the linear regime (see e.g.\ Ref. \cite{Transmission1}) with the collective coupling $g_{coll}$ and the cavity half decay rate $\kappa$ as free parameters. The collective coupling is then displayed as a function of the square root of the atom number in figure \ref{fig2_a} b. The fit result for the cavity half loss rate is $\kappa_{fit}=2\pi\times\left(5.8\pm0.2\right)$ MHz for all the data, about 7 times larger than the measured $\kappa$ of the TEM$_{00}$ mode. We attribute this to the multimode structure of the nearly confocal resonator: higher-order TEM modes have a larger mode volume and therefore a smaller coupling constant $g$. This makes the normal mode splitting of the higher-order modes smaller, which appears as a broadening of $\kappa$.
To calibrate the atom number in the mode, we first plotted the collective coupling constant against the square root of the observed MOT fluorescence. The MOT fluorescence was measured with a CCD camera for the same experimental parameters and each loading time, but without probing the cavity mode. The shot-to-shot fluctuations are estimated to be below 6\%.
The result was fitted with a line through the origin, and the horizontal axis of figure \ref{fig2_a} b was rescaled such that the gradient corresponded to the effective coupling constant $g_{eff}= g/\sqrt{2}=2\pi \times 85$ kHz. Since the probe is too weak to trap the atoms, they see an averaged coupling during their ballistic motion in the 1 ms probe time, which is zero at the nodes and $g$ at the antinodes of the cavity field. This results in the smaller coupling constant $g_{eff}$, according to Ref. \cite{Leslie2004}. The largest observed collective coupling of $g_{coll}=2\pi\times \left(27.8\pm0.4\right)$ MHz corresponded to $\left(1.33\pm0.08\right)\times 10^5$ atoms effectively coupled to the cavity mode and a maximum collective cooperativity of $C_{coll}=186\pm5$. The rescaling factor relating the atom number in the MOT to the atom number coupled to the mode was 1.47. It is larger than 1 because not all atoms in the MOT are in the center of the mode.

The collective behaviour of the vacuum Rabi splitting of the cavity-atom system is clearly demonstrated. The coupling to the resonator dominates the environmental dissipation processes, indicating the collective strong coupling regime for atom numbers in the mode larger than about 600, corresponding to $C_{coll}=1$.\\
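The quoted collective figures can be cross-checked from the definitions given in the introduction; the following illustrative sketch reproduces both the maximum cooperativity and the atom number at which $C_{coll}=1$.
\begin{verbatim}
# Cross-check of the collective parameters (units of 2*pi x MHz).
kappa, gamma = 0.8, 2.6
g_eff = 0.085                        # effective coupling g / sqrt(2)
g_coll = 27.8                        # largest observed collective coupling

C_coll = g_coll**2 / (2 * kappa * gamma)
print(f"C_coll = {C_coll:.0f}")      # ~186, matching C_coll = 186 +/- 5

N_unity = 2 * kappa * gamma / g_eff**2
print(f"N(C_coll = 1) = {N_unity:.0f}")   # ~580, i.e. about 600 atoms
\end{verbatim}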
\begin{figure}
\begin{center}
\subfigure[]{
\resizebox*{6.5cm}{!}{\includegraphics{Figure4a.eps}}}
\subfigure[]{
\resizebox*{6.5cm}{!}{\includegraphics{Figure4b_1.eps}}}
\caption{\label{fig2} a) The transmission of $\sigma_+$-polarized light through the cavity as a function of the probe laser frequency for an atom number $N=\left(15\pm22\right)\times10^3$. The different traces (twenty averages each) are taken for different cavity resonance--atomic transition detunings. The error in the detuning is estimated to be below $2\pi\times0.3$ MHz. The spacing between the traces is about $2\pi\times4.6$ MHz (except for the middle three, where the frequency difference was $2\pi\times3.2$ MHz), starting from the top at $-2\pi\times21.5$ MHz. The red line is a double-Lorentzian fit to those curves where two peaks were observable. b) The center frequencies of the two peaks plotted against the cavity resonance--atomic transition detuning. For an empty cavity the mode position would follow the dashed line. Coupling atoms to the cavity splits the resonance and an avoided crossing can be observed.}
\end{center}
\end{figure}
The second set of measurements was taken in a slightly different way. Instead of pumping the mode for a certain time at a fixed frequency after the release from the MOT, we scanned the probe laser over the atomic transition at a scan speed of about 40 MHz/ms. Even though the ballistic expansion of the cloud during the first 2 ms is negligible, the two peaks are probed one after the other. For high intensities of the probe light, this causes the second peak to become asymmetric in height and position, even if the empty cavity resonance coincides with the atomic transition. For small intensities this was not observable.
The transmission traces were recorded with an oscilloscope and averaged over 20 shots; they can be seen in figure \ref{fig2} a. After acquiring the waveform, the cavity offset lock was used to move the cavity resonance with respect to the atomic transition. In this way the avoided crossing induced by the collective cavity-atom coupling could be observed. Figure \ref{fig2} b shows how, for large cavity-transition detunings, the coupling is small; the eigenfrequency of the cavity mode with atoms therefore does not deviate from the case without atoms (indicated by the dashed line). Approaching the atomic resonance causes the coupling to increase and the mode to split.\\

\begin{figure}
\begin{center}
\subfigure[]{
\resizebox*{6.5cm}{!}{\includegraphics{Figure5a_2.eps}}}
\subfigure[]{
\resizebox*{6.5cm}{!}{\includegraphics{Figure5b_2.eps}}}
\caption{\label{fig3} a) The transmission of a weak $\pi$-polarized probe beam through the cavity for each hyperfine transition, in comparison with the empty cavity. The atom number in the cavity is $N=\left(110\pm7\right)\times 10^3$. The stronger the dipole moment of the transition, the larger the splitting. The red line is the theoretical curve for the transmission according to Ref. \cite{Transmission1}. b) The observed collective coupling constant as a function of the relative transition strength as given in Ref. \cite{Steck}.}
\end{center}
\end{figure}

The last set of measurements involved observing the splitting for the other hyperfine transitions of cesium, as seen in figure \ref{fig3} a. The cavity was positioned according to the procedure used for the $F=4 \rightarrow F'=5$ transition. The MOT loading time and all other parameters are the same for each transmission measurement, but the observed splitting is smaller. Our data show that the splitting is proportional to the relative hyperfine transition strength, consistent with the fact that the coupling constant $g$ is proportional to the relative transition strength.\\
\section{Conclusion}
In conclusion, we have demonstrated the strong coupling of an ensemble of up to $\left(1.33\pm0.08\right) \times 10^5$ atoms with a lossy, nearly confocal cavity, leading to a collective cooperativity parameter of up to $C_{coll}=186\pm5$. In contrast to other observations of the avoided crossing and of the behaviour of the vacuum Rabi splitting (e.g.\ \cite{Raizen1989, Kasevich2006}), the single-atom single-mode coupling $g$ is very small compared to $\kappa$ and to the relevant excited state half decay rate $\gamma$. It is only the large number of intracavity scatterers that enables us to reach the strong coupling regime, which is a prerequisite for studying certain cavity cooling mechanisms. This system is also suitable for the study of other collective effects such as superradiance or collective self-organization in a lossy cavity.\\

\section{Acknowledgement} This work is supported by the EPSRC grant EP/H049231/1. We would like to thank Jon Goldwin for stimulating discussions.\\

\bigskip

\bibliographystyle{tMOP}


\section{Introduction}
Improper priors and finitely additive probabilities (FAPs) are the two main alternatives to the standard Bayesian paradigm using proper priors, i.e.\ countably additive probabilities in the Kolmogorov axiomatics. Both alternatives are involved in paradoxical phenomena such as non-conglomerability and the marginalization paradox. As a heuristic argument, some authors, such as \citet{stone1982} or \citet[][p.218]{kada1986}, consider sequences of proper priors, for example sequences of uniform priors, and derive, under two different topologies, two kinds of limits: FAP limits and improper limits.
The choice between FAP distributions and improper distributions has been widely debated in the Bayesian literature \citep[see][p.15]{hart1983}.

The aim of FAP limits is to preserve a total mass equal to 1, while sacrificing countable additivity. This point of view has been mainly defended by \citet{defi1972}. On the other hand, improper distributions aim to preserve countable additivity, while sacrificing a total mass equal to 1. Improper distributions appear naturally in the framework of conditional probability; see \citet{reny1955} and, more recently, \citet{taralind2010,taralind2016} and \citet{lindtara2018}. Conditional probability spaces are also related to projective spaces of measures \citep{reny1970}, which have a natural quotient space topology and a natural convergence mode, named $q$-vague convergence by \citet{bidr2016}.

In Bayesian inference some paradoxes, such as non-conglomerability \citep{ston1976, stone1982} or the marginalization paradox \citep{dastzi1973}, occur with improper or diffuse FAP priors \citep{dubi1975}, but not with proper priors. This led some authors to include the likelihood in the definition of a convergence mode for the priors, for instance by considering the convergence of the posterior distribution w.r.t.\ an entropy criterion \citep{akai1980} or an integrated version of this criterion \citep{bebesu2009} when the posterior is proper. Bayesian inference with improper posteriors is justified by \citet{tatuli2018} from a theoretical point of view. \citet{bobidr2018} consider the convergence of proper posteriors to an improper posterior for the Bayesian estimation of abundance by removal sampling. \citet*{tulari2018} propose to adapt MCMC methods to the estimation of improper posteriors.

In this paper, we only consider convergence of distributions, irrespective of any statistical model. The implications of our results for Bayesian inference, and especially of the construction of the specific sequences of distributions used in the proof of Theorem \ref{thm.sameqvnotFAP}, are left for future work. In Section \ref{section.defcv}, we define several concepts of convergence in the setting of improper and FAP distributions. In Section \ref{section.uniform} we revisit the notion of a uniform distribution on the integers. In Section \ref{section.examples}, we illustrate the fundamental difference between convergence to an improper prior and to a FAP.

\section{Convergence of probability sequences}
\label{section.defcv}
We denote by $\mathcal C_b$ the set of continuous real-valued bounded functions and by $\mathcal C_K$ the set of continuous real-valued functions with compact support. For a $\sigma$-finite measure $\pi$, we write $\pi(f)=\int f(\theta) \;d\pi(\theta)$.
Consider a sequence of proper priors $\{\pi_n\}_{n\in \mathbb N}$. The usual convergence mode of $\{\pi_n\}_{n\in \mathbb N}$ to a proper prior $\pi$ is narrow convergence, also called weak convergence or convergence in law, defined by:
\begin{equation}
\label{eq.cvnarrow}
\pi_n \xrightarrow[n\rightarrow +\infty]{narrowly} \pi\iff \pi_n(f) \xrightarrow[n\rightarrow +\infty]{}\pi(f)\quad \forall f\in\mathcal C_b
\end{equation}

When it exists, the narrow limit of $\{\pi_n\}_n$ is necessarily unique.
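A simple numerical illustration (in Python, for $\Theta=\mathbb N$ with the discrete topology) of how narrow convergence can fail: for the uniform distribution on $\{0,\ldots,n\}$, the mass of every compact (i.e.\ finite) set vanishes while the total mass stays equal to 1, so no narrow limit exists. The sketch below is purely illustrative.
\begin{verbatim}
import numpy as np

def pi_n(f, n):
    """pi_n(f) for pi_n uniform on {0,...,n}."""
    k = np.arange(n + 1)
    return f(k).mean()

f_one = lambda k: np.ones_like(k, dtype=float)   # f = 1, bounded
f_cpt = lambda k: (k <= 10).astype(float)        # indicator of {0,...,10}

for n in [10, 100, 1000, 10000]:
    print(n, pi_n(f_one, n), round(pi_n(f_cpt, n), 4))
# pi_n(1) = 1 for every n, but pi_n(f) -> 0 for each compactly
# supported f: the mass escapes to infinity.
\end{verbatim}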
In this section, we consider two alternative convergence modes for the case where there is no narrow limit, and especially where the total mass tends to concentrate around the boundary of the domain, more precisely when $\lim_n \pi_n(f)=0$ for all $f$ in $\mathcal C_K$. The idea is to consider a proper prior as a special case of a FAP, or as a special case of a Radon measure, and, for each case, to define a convergence mode in a formalized way. In both cases the limit is not unique in general but, as a requirement, must coincide with the narrow limit when (\ref{eq.cvnarrow}) holds.

~

In the following, we assume that $\Theta$ is a metric, locally compact, second countable Hausdorff space. This is the case, for example, for the usual finite-dimensional topological vector spaces or for denumerable sets with the discrete topology. In the latter case, any function is continuous and a compact set is a finite set.

\subsection{Convergence to an improper distribution}
To extend the notion of the narrow limit, we consider here proper distributions within the projective space of positive Radon measures, as follows. We denote by $\mathcal R$ the set of non-null Radon measures, that is, regular countably additive measures with finite mass on each compact set. Note that, in the discrete case, any $\sigma$-finite measure is a Radon measure.

We define an improper distribution as an unbounded Radon measure, as they appear in parametric Bayesian statistics \citep[see, e.g.][]{jeff1983}.
The \textit{projective space} $\overline{\mathcal R}$ associated to $\mathcal R$ is the quotient space for the equivalence relation $\sim$ defined by $\pi_1\sim\pi_2$ iff $\pi_2=\alpha\, \pi_1$ for some positive scalar factor $\alpha$. To a Radon measure $\pi$ can be associated a unique equivalence class $\overline\pi=\{\pi'=\alpha\,\pi\,;\,\alpha>0\}$. A projective space is therefore a space where objects are defined up to a positive scalar factor. It is natural in Bayesian statistics to consider such a projective space, since two equivalent priors give the same posterior. The projective space $\overline{\mathcal R}$ is also naturally linked with conditional probability spaces \citep{reny1955}. All the results presented below on the convergence mode w.r.t.\ the projective space $\overline{\mathcal R}$ can be found in \citet{bidr2016}. The usual topology on $\mathcal R$ is the vague topology, defined by
\begin{equation}
\label{eq.cvvague}
\pi_n \xrightarrow[n\rightarrow +\infty]{vaguely} \pi\iff \pi_n(f) \xrightarrow[n\rightarrow +\infty]{}\pi(f)\quad \forall f\in\mathcal C_K~.
\end{equation}

From the related quotient topology, we can derive a convergence mode, called $q$-vague convergence: a sequence $\{\pi_n\}_n$ in $\mathcal R$ converges $q$-vaguely to a (non-null) improper distribution $\pi$ in $\mathcal R$ if $\overline \pi_n$ converges to $\overline\pi$ w.r.t.\ the quotient topology, where $\overline \pi_n=\{\alpha\pi_n;\alpha>0\}$ is the equivalence class associated to $\pi_n$, and similarly for $\overline\pi$. The limit $\overline \pi$ is unique, whereas $\pi$ is unique only up to a positive scalar factor. It is not always tractable to check a convergence in the quotient space.
However, there is an equivalent definition in the initial space $\mathcal R$: $\{\pi_n\}_n$ converges $q$-vaguely to $\pi$ if there exist some scalar factors $a_n$ such that $\{a_n\,\pi_n\}_n$ converges vaguely to $\pi$:
 \begin{equation}
 \label{eq.cvqvague2}
 \pi_n \xrightarrow[n\rightarrow +\infty]{q-vaguely} \pi\iff a_n\pi_n \xrightarrow[n\rightarrow +\infty]{vaguely}\pi\quad \textrm{for some } a_1,a_2,...>0
 \end{equation}

 The $q$-vague convergence can be considered as an extension of narrow convergence in the sense that, if $\{\pi_n\}_n$ and $\pi$ are proper distributions and $\{\pi_n\}_n$ converges narrowly to $\pi$, then $\{\pi_n\}_n$ converges $q$-vaguely to $\pi$. Note that the converse holds if and only if $\{\pi_n\}_n$ is tight \citep[see][Proposition 2.8]{bidr2016}.

 When a sequence $\{\pi_n\}_n$ of proper distributions converges $q$-vaguely to an improper distribution, then $\lim_n\pi_n(K)=0$ for any compact $K$ \citep[Prop. 2.11]{bidr2016}. The following lemma gives an apparently stronger, but in fact equivalent, result. It will be useful to establish the main result and to construct examples in Section \ref{section.exampleFAPvsqvague}.

 \begin{lemma}\label{lemma.suitecpctslow}
 	Let $\{\pi_n\}_n$ be a sequence of proper distributions such that $\lim_n\pi_n(K)=0$ for any compact $K$. Then there exists a non-decreasing sequence of compact sets $K_n$ such that $\cup_n K_n=\Theta$ and $\lim_n\pi_n(K_n)=0$. Moreover, $K_n$ may be chosen such that, for any compact $K$, there exists an integer $N$ such that $K\subset K_N$.
 \end{lemma}
 \begin{proof}
 	Let $\widetilde K_m$, $m\geq 1$, be an increasing sequence of compact sets with $\cup_m \widetilde K_m=\Theta$. For each $m$, $\lim_n\pi_n(\widetilde K_m)=0$, so there exists an integer $N_m$ such that $N_m>N_{m-1}$ and $\pi_n(\widetilde K_m)\leq 1/m$ for $n>N_m$. Consider now such a sequence of integers $N_m$, $m\geq 1$. For any $n$ there exists a unique integer $m$ such that $N_m\leq n< N_{m+1}$; for such $n$, set $K_n=\widetilde K_m$. Then $\pi_n(K_n)\leq 1/m$ for $n>N_m$, so that $\lim_n\pi_n(K_n)=0$, and any compact $K$ is contained in $\widetilde K_m=K_{N_m}$ for $m$ large enough.
 \end{proof}

\subsection{Convergence to a FAP distribution}
\label{section.FAPlimit}
In the approach of \citet{stone1982}, a FAP $\pi$ is a limit of $\{\pi_n\}_n$ when it is a cluster point of the sequence, in the following sense: for any finite family $f_1,\ldots,f_p$ of bounded functions and for any $\varepsilon > 0$, there exists an infinite number of $n$ such that $\left|\pi_n(f_i)-\pi(f_i)\right|\leq \varepsilon$, $i=1,...,p$. Note that, since the corresponding topology is not in general first-countable, there does not necessarily exist a subsequence $\{\pi_{n_k}\}_k$ that converges to $\pi$. We can only say that, for any $f\in\mathcal F_b$, there exists a subsequence $\{\pi_{n_k}(f)\}_k$ such that $\pi_{n_k}(f)$ converges to $\pi(f)$.

Therefore, the set of FAP limits of $\{\pi_n\}_n$ obtained by Stone's approach is included in the set of FAP limits obtained by using the Hahn-Banach theorem as above, and (\ref{eq.cvFAP}) or (\ref{eq.cvFAPdiscret}) still hold, but they are not sufficient conditions. The converse inclusion is false in general. It is easy to see that the closed convex hull of the set of FAP limits defined by Stone is included in the set of FAP limits defined in this paper. We conjecture that, conversely, the set of FAP limits defined by (\ref{eq.cvFAP}) is the closed convex hull of the set of limits defined by Stone.
Consider for example $\pi_{2n}=\delta_0$ and $\pi_{2n+1}=\delta_1$. There are only two FAP limits, $\delta_0$ and $\delta_1$, with Stone's construction, whereas any $\pi=\alpha \delta_0+(1-\alpha) \delta_1$, $0\leq\alpha\leq 1$, is a FAP limit with our construction. In Section \ref{section.limdeltan}, we illustrate the difference between the two convergence modes with another example.
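Returning to the $q$-vague side, the following sketch illustrates (\ref{eq.cvqvague2}) numerically: rescaling the uniform distributions $\pi_n$ on $\{0,\ldots,n\}$ by $a_n=n+1$ yields vague convergence to the counting measure, i.e.\ to the improper uniform distribution on $\mathbb N$. The code is illustrative only.
\begin{verbatim}
import numpy as np

def pi_n(f, n):
    """pi_n(f) for pi_n uniform on {0,...,n}."""
    k = np.arange(n + 1)
    return f(k).mean()

# f with compact (finite) support: the indicator of {3,...,7}
f = lambda k: ((k >= 3) & (k <= 7)).astype(float)

for n in [10, 100, 1000]:
    print(n, (n + 1) * pi_n(f, n))
# (n+1) * pi_n(f) = 5 = counting measure of {3,...,7}: the rescaled
# sequence converges vaguely to the improper uniform distribution.
\end{verbatim}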
~

Even if the two definitions of FAP limits are not equivalent, the main results, especially Theorem \ref{thm.sameqvnotFAP}, Corollary \ref{corol.main}, Proposition \ref{prop.PoissonlimitFAP} and Lemmas \ref{lemma.FAPlimitcombconvexe} and \ref{lemma.FAPKNslow}, hold for both of them. In the following, we consider only the first definition of FAP limits.

~

\subsection{FAP limits vs q-vague convergence}
\label{section.FAPvsqvague}
The fact that a sequence of proper distributions may have both improper and FAP limits may suggest a connection between the two notions, as proposed heuristically by many authors. The following results show that this is not the case. Roughly speaking, they show that any FAP which is a FAP limit of some proper distribution sequence can be connected to any improper prior by this means.

\begin{theorem}
	\label{thm.sameqvnotFAP}
Let $\{\pi_n\}_n$ be a sequence of proper distributions such that $\lim_{n}\pi_n(K)=0 $ for any compact set $K$. Then, for any improper distribution $\pi$, a sequence $\{\widetilde\pi_n\}_n$ can be constructed which converges q-vaguely to $\pi$ and which has the same set of FAP limits as $\{\pi_n\}_n$.
\end{theorem}

\begin{proof}
For any FAP or any proper or improper distribution $\mu$, we define the distribution $(\mathds{1}_A\, \mu)$ by $(\mathds{1}_A\,\mu)(f)=\mu(\mathds 1_A \,f)$.	From Lemma \ref{lemma.suitecpctslow}, an exhaustive increasing sequence $K_n$ of compact sets can be constructed such that $\lim_n \pi_n(K_n)=0$.
	Put $\gamma_n=\pi_n(K_n)$ and define the sequence of proper distributions $\widetilde \pi_n=\gamma_n \frac{1}{\pi(K_n)} \mathds 1_{K_n}\pi$ $+$ $ (1-\gamma_n) \frac{1}{\pi_n (K_n^c)}\mathds{1}_{K_n^c}\,\pi_n $, with $K^c$ the complement of $K$. By Lemmas \ref{lemma.FAPKNslow} and \ref{lemma.FAPlimitcombconvexe} in Appendix A, $\widetilde\pi_n$ has the same FAP limits as $\{\pi_n\}_n$. By Lemma \ref{lemma.cvqvtronque}, $\widetilde\pi_n$ converges q-vaguely to $\pi$.
\end{proof}

\begin{corollary}
	\label{corol.main}
	Let $\{\pi_n\}_n$ be a sequence of proper distributions that converges q-vaguely to an improper distribution $\pi^{(1)}$. Then, for any other improper distribution $ \pi^{(2)}$, a sequence $\{ \widetilde\pi_n\}_n$ can be constructed that converges q-vaguely to $\pi^{(2)}$ and that has the same FAP limits as $\{\pi_n\}_n$.
\end{corollary}
 The only link that can be established between improper q-vague limits and FAP limits of the same proper distribution sequence is that the FAP limits are necessarily \textit{diffuse}, i.e.\ they assign a probability 0 to any compact set.
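To make the construction in the proof concrete, the sketch below instantiates it on $\Theta=\mathbb N$ with $\pi_n$ the Poisson distribution with mean $n$, $K_n=\{0,\ldots,\lfloor n/2\rfloor\}$ and the improper uniform target $\pi(k)=1$; these choices anticipate the example of Section \ref{section.exampleFAPvsqvague}. The code is a purely illustrative sketch and assumes \texttt{scipy} is available.
\begin{verbatim}
import numpy as np
from scipy.stats import poisson

def tilde_pi(A, n):
    """Mass given to the finite set A by the mixture tilde_pi_n, with
    pi_n = Poisson(n), K_n = {0,...,n//2}, target pi = counting measure."""
    A = np.asarray(A)
    gamma_n = poisson.cdf(n // 2, n)          # gamma_n = pi_n(K_n) -> 0
    in_Kn = (gamma_n / (n // 2 + 1)) * np.sum(A <= n // 2)
    out_Kn = poisson.pmf(A[A > n // 2], n).sum()
    return in_Kn + out_Kn

# Restricted to a fixed finite window and renormalized, tilde_pi_n is
# uniform (the shape of the target), while its total mass there vanishes:
for n in [50, 500, 5000]:
    m = np.array([tilde_pi([k], n) for k in range(5)])
    print(n, m.sum(), np.round(m / m.sum(), 3))  # -> [0.2 0.2 0.2 0.2 0.2]
\end{verbatim}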
\section{Uniform distribution on the integers}
 \label{section.uniform}
 In this section, we compare different notions of uniform distributions on the set of integers $\mathbb N$, using several considerations such as limits of proper uniform distributions.

 We illustrate the fact that FAP uniform distributions are not well-defined objects \citep[][pp.122,224]{defi1972}. Contrary to uniform improper distributions, FAP limits of uniform distributions on an exhaustive sequence of compact sets are highly dependent on the choice of that sequence.

 \subsection{Uniform improper distribution}
There are several equivalent ways to define a uniform improper prior on the integers. These definitions lead to a unique distribution, up to a scalar factor. The uniform distribution can be defined directly as a flat distribution, i.e.\ $\pi(k)\propto 1$ for any integer $k$. It is also the unique (up to a scalar factor) measure that is shift-invariant, i.e.\ such that $\pi(k+A)=\pi(A)$ for any integer $k$ and any set of integers $A$. The uniform distribution is also the q-vague limit of the sequence of uniform proper distributions on $K_n=\{0,1,...,n\}$. More generally, and equivalently, the uniform distribution is the q-vague limit of any sequence of proper uniform priors on an exhaustive increasing sequence $\{K_n\}_n$ of finite subsets of integers.

\subsection{Finitely additive uniform distribution}
The notion of a uniform finitely additive probability is more complex. Contrary to the improper case, there is no explicit definition, since $\pi(\{k\})=0$ for any integer $k$. We present here several non-equivalent approaches to defining a uniform FAP. The first two can be found in \cite{kadaohagan1995} and \citet{schikada2007}.

\subsubsection{Shift invariant (SI) uniform distribution}

As in the improper case, a uniform FAP distribution $\pi$ can be defined as any shift-invariant FAP, i.e.\ a FAP satisfying $\pi (A) = \pi (A+k)$ for any subset of integers $A$ and any integer $k$. Such a distribution will be called SI-uniform. In that case, one necessarily has $\pi(k_1+k_2\times \mathbb N)=k_2^{-1}$ for any $(k_1,k_2) \in \mathbb{N}\times\mathbb{N}^*$. \cite{kadaohagan1995} also investigate the properties of FAPs satisfying only $\pi(k_1+k_2\times \mathbb N)=k_2^{-1}$, where the sets $k_1+k_2\times \mathbb N$ are called \emph{residue classes}.

\subsubsection{Limiting relative frequency (LRF) uniform distributions}

\cite{kadaohagan1995} consider a stronger condition to define uniformity. For a subset $A$, define its limiting relative frequency LRF$(A)$ by $$\textrm{LRF}(A) = \lim_{N\to\infty} \frac{\#\{k\leq N \; \text{s.t.}\; k\in A\}}{N+1}, $$
when this limit exists. A FAP $\pi$ on $\mathbb N$ is said to be {\em LRF-uniform} if $\pi (A) = p$ whenever $\textrm{LRF}(A)=p$.

 Let $\pi_n$ be the uniform proper distribution on $K_n=\{0,1,...,n\}$; then $\textrm{LRF}(A) =$ $ \lim_{n\to\infty} \pi_n(A)$. Therefore a FAP $\pi$ is LRF-uniform if and only if it is a FAP limit of $\{\pi_n\}_n$. It is worth noting that, unlike in the improper case, the FAP limits are highly dependent on the choice of the increasing exhaustive sequence of finite sets $K_n$: changing the sequence $\{K_n\}_n$ changes the notion of uniformity. For example, if $\widetilde\pi_n$ is the uniform distribution on $K_n=\{2k\,;\; 0\leq k\leq n^2\} \cup \{2k+1\,;\;0\leq k\leq n\}$, then $\lim_n \widetilde \pi_n(2\mathbb N)= 1$, whereas $\lim_n\pi_n(2 \mathbb N)=1/2$.
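The dependence on the exhausting sequence is easy to check numerically; the following illustrative sketch compares the mass given to the even numbers $2\mathbb N$ by the two sequences of uniform distributions just discussed.
\begin{verbatim}
def mass_even_standard(n):
    """pi_n(2N) for pi_n uniform on {0,...,n}."""
    return (n // 2 + 1) / (n + 1)

def mass_even_tilted(n):
    """tilde_pi_n(2N) for the uniform distribution on
    K_n = {2k : k <= n^2} U {2k+1 : k <= n}."""
    evens, odds = n**2 + 1, n + 1
    return evens / (evens + odds)

for n in [10, 100, 1000]:
    print(n, round(mass_even_standard(n), 3), round(mass_even_tilted(n), 3))
# pi_n(2N) -> 1/2 while tilde_pi_n(2N) -> 1: the two notions of
# "uniform" FAP limit disagree.
\end{verbatim}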
\subsubsection{Bernoulli scheme (BS) uniform distribution}

We propose here another notion of uniformity that does not depend on the choice of a particular increasing sequence $K_n$, as LRF uniformity does. Consider a Bernoulli scheme, that is, a sequence $\{X_k\}_{k\in\mathbb N}$ of i.i.d. Bernoulli-distributed random variables with mean $p\in[0;1]$. Define the random set $A(X)=\{k \in \mathbb N \text{ s.t. } X_k =1 \}$. A FAP $\pi$ is said to be {\em BS-uniform} if, for any $p\in[0;1]$, $\pi(A(X)) = p$ almost surely. Note that, by the strong law of large numbers, LRF$(A(X))=p$ almost surely.

\begin{proposition}
Let $\{K_n\}$ be an increasing sequence of finite subsets of $\mathbb{N}$, with $\cup_{n\in\mathbb N} K_n $ infinite. Then any FAP limit of the sequence $\pi_n$ of uniform distributions on $K_n$ is BS-uniform.
\end{proposition}
When $\cup_{n\in\mathbb N} K_n =\mathbb N$, this proposition shows that any FAP limit of uniform distributions is BS-uniform. In particular, an LRF-uniform FAP is also BS-uniform. However, if, for example, $K_n$ is the set of even numbers less than or equal to $n$, then any FAP limit of the sequence of uniform distributions on $K_n$ will be BS-uniform, although it is intuitively certainly not uniform on $\mathbb N$ but on $2\mathbb N$. Therefore, BS uniformity looks much more like a necessary condition for a FAP to be uniform than like a complete definition.

\section{Comparison of convergence modes on examples}
\label{section.examples}

We consider here some examples that illustrate the difference between convergence of proper distributions to an improper distribution and to a FAP.

\subsection{FAP limits on ${\mathbb N}$}
\label{section.limdeltan}
For a sequence $\{\pi_n\}_n$ of proper distributions on $\mathbb N$, it is known that a q-vague limit does not necessarily exist but that, if it exists, it is unique in the projective space of Radon measures, i.e.\ unique up to a scalar factor. In contrast, we have seen that a FAP limit always exists but is not necessarily unique.

We illustrate the non-uniqueness of FAP limits with an extreme case. Consider the sequence of proper distributions $\pi_n = \delta_n$, where $\delta _n$ is the Dirac measure at $n$. This sequence has no q-vague limit, since $\pi_n(\{k\})=0$ for $n>k$. Moreover, for any subset $A$ of ${\mathbb N}$ such that $A$ and $A^c$ are both infinite,
$$\displaystyle
0=\liminf_{n\to\infty} \pi_n(A) \leq \pi (A) \leq \limsup_{n\to\infty} \pi_n(A) =1,
$$
whereas, for any finite set $A$, $\lim_n\pi_n(A)=0=\pi(A)$, and for any cofinite set $A$, $\lim_n\pi_n(A)=1=\pi(A)$. Therefore, from (\ref{eq.cvFAPdiscret}), the set of FAP limits of $\pi_n$ is the set of all diffuse FAPs on $\mathbb N$. This shows that all the diffuse FAPs are connected through the same sequence $\pi_n$.

Let us examine the set of FAP limits of $\pi_n=\delta_n$ obtained with Stone's definition of FAP convergence (see Section \ref{section.FAPlimit}). For any subset $A$, there exists a subsequence $\{\pi_{n_k}\}_k$ such that $\pi_{n_k}(A)$ converges to $\pi(A)$, so $\pi(A)\in\{0,1\}$. Therefore the FAP limits of $\pi_n$ in Stone's sense are all \textit{remote} FAPs, that is, diffuse FAPs $\pi$ such that $\pi(A)\in\{0,1\}$, as defined by \citet[][p.92]{dubi1975}. This also proves the existence of remote FAPs. Note that a remote distribution is neither BS-uniform nor SI, and therefore cannot be LRF-uniform.
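A short numerical look at this example (illustrative only): along $A=2\mathbb N$ the masses $\delta_n(A)$ oscillate between 0 and 1, which is what leaves room for FAP limits assigning any value in $[0,1]$ to $A$, while finite sets are forced to mass 0.
\begin{verbatim}
# pi_n = delta_n on N: evaluate delta_n(A) = 1_A(n) along two sets.
even = lambda n: n % 2 == 0          # A = 2N (A and A^c infinite)
finite = lambda n: n <= 10           # A = {0,...,10} (finite)

print([int(even(n)) for n in range(12)])    # 1,0,1,0,...: liminf 0, limsup 1
print([int(finite(n)) for n in range(12)])  # eventually 0: pi(A) = 0
\end{verbatim}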
\subsection{Convergence of sequences of Poisson distributions}
\label{section.poisson}

We consider the sequence $\{\pi_n\}_n$ of Poisson distributions with mean $n$. For any finite set $K$, we have $\lim_n\pi_n(K)=0$. However, this sequence of proper priors does not converge $q$-vaguely to any improper distribution \citep[][\S 5.2]{bidr2016}. As a remark, let $\widetilde\pi_n$ be the shifted measures on $\mathbb{Z}$ defined by $\widetilde\pi_n( B) = \pi_n(B+n)$, where $\pi_n$ is seen as a measure on the set of all integers $\mathbb{Z}$ with $\pi_n(k)=0$ for $k<0$. Then, using the approximation of the Poisson distribution by a normal distribution, it can be shown that the sequence $\widetilde\pi_n$ converges $q$-vaguely to the improper uniform measure on $\mathbb{Z}$.

We now consider the FAP limits of the sequence $\{\pi_n\}_n$. The next result shows that the limits have some of the properties of uniformity described in Section \ref{section.uniform}, but not all of them. The proof is given in Appendix B.

\begin{proposition}
	\label{prop.PoissonlimitFAP}
	 Any FAP limit $\pi$ of the sequence $\{\pi_n\}_n$ of Poisson distributions with mean $n$ is shift-invariant and BS-uniform, but not necessarily LRF-uniform.
\end{proposition}

Therefore, the FAP limits of the Poisson distribution sequence provide examples of SI- and BS-uniform distributions that are not LRF-uniform. \citet{kaji2014} give another example of SI but not LRF-uniform FAPs, using paths of random walks. Even though they consider FAPs on a subset of the bounded functions, their construction can be extended to $\mathcal F_b$ using the Hahn-Banach theorem, similarly to Section \ref{section.FAPlimit}.

 \subsection{FAP vs q-vague convergence of uniform proper distributions}
 \label{section.exampleFAPvsqvague}

To illustrate the fact that any FAP limit can be related to any improper distribution, consider again the sequence $\{\pi_n\}_n$ of Poisson distributions with mean $n$ and any given improper distribution $\pi^0$ on the integers. Since $\lim_n\pi_n(K)=0$ for any finite set, the proof of Lemma \ref{lemma.suitecpctslow} shows how to construct an exhaustive sequence of finite sets $K_n$ such that $\lim_n \pi_n(K_n)=0$. For example, choose $K_n = \{k\in \mathbb N, \; k \leq n/2 \}$ and define the sequence of proper distributions $\widetilde \pi_n$ by
\begin{equation}
\label{eq.seqpoisexample}
\widetilde\pi_n(A)=\pi_n(K_n) \frac{\pi^0(A\cap K_n)}{\pi^0(K_n)} + (1-\pi_n (K_n)) \frac{\pi_n(A\cap K_n^c)}{\pi_n(K_n^c)}
\end{equation}
for any set $A$. From Theorem \ref{thm.sameqvnotFAP}, $\widetilde{\pi}_n$ converges $q$-vaguely to $\pi^0$ and has the same FAP limits as the Poisson distribution sequence.

As another example, consider the sequence $\{\pi_n\}_n$ of uniform distributions on $\{0,1,...,n\}$ and choose $K_n = \{k\in \mathbb N, \; k \leq \sqrt n \}$. We have $\lim_n \pi_n(K_n)=0$. Therefore, for any improper distribution $\pi^0$ on the set of integers, the sequence constructed as in (\ref{eq.seqpoisexample}) has the same FAP limits as the sequence of uniform distributions $\{\pi_n\}_n$ and converges q-vaguely to $\pi^0$. This shows again the difficulty of connecting improper and FAP uniform distributions through limits of proper distributions.
\subsection{Convergence of beta distributions}
In this section, we consider the limit of the sequence of beta distributions $\pi_{a_n,b_n}$ $=$ $\text{Beta}(a_n,b_n)$, defined on $\Theta=]0,1[$, when $a_n$ and $b_n $ go to 0. We will see that the FAP limits depend on the way $a_n$ and $b_n$ go to $0$, which is not the case for the q-vague improper limit. This illustrates the difference between FAP limits and q-vague limits of proper distribution sequences.

The density of the beta distribution Beta$(a,b)$ is given by
$$
\pi_{a,b} (x) =\frac{1}{\beta(a, b)} \, x^{a-1} (1-x)^{b-1} \text{ for } \; x \in ]0;1[,
$$
 where $\beta(a,b)$ is the beta function.

From \citet{bidr2016}, the unique (up to a scalar factor) q-vague limit of $\text{Beta} (a_n,b_n)$ when $a_n$ and $b_n $ go to 0 is the Haldane improper distribution: $$\pi_H(x)=\frac{1}{x(1-x)} \text{ for } \; x \in ]0;1[.$$

The q-vague limit gives no information on the relative concentration of the mass around $0$ and $1$: for $0N$.
	Therefore, for $n>N$, $\widetilde a_n\widetilde \pi_n(f)=a_n\pi_n(f)$. The result and its reciprocal follow.
\end{proof}

\section*{Appendix B}

 We prove here Proposition \ref{prop.PoissonlimitFAP} of Section \ref{section.poisson}.

 In order to show that $\pi$ is SI, we consider the $\pi_n$ as distributions on $\mathbb Z$, extending them by $0$ on the negative integers. Define $\pi^{(k)}_n$ by $\pi^{(k)}_n (A) = \pi_n (A+k)$, for any subset of $\mathbb Z$. It is known that $\|\pi^{(k)}_n - \pi_n \|_{TV} \leq \frac{k}{\sqrt{2\pi n}}$, where $\|\cdot\|_{TV}$ is the total variation norm. Therefore, for any subset $A$ of $\mathbb N$, $\lim_{n\to\infty} |\pi_n(A+k) - \pi_n(A) | =0$. Letting $n$ go to infinity, we deduce that, for any FAP limit $ \pi$ of $\pi_n$ and any integer $k$, $ \pi (A+k) = \pi (A)$.

 The fact that $\pi$ is uniform in the BS sense comes from an easy adaptation of the Hoeffding inequality to this context. Let $(X_k)_{k \in \mathbb N}$ be a Bernoulli scheme of parameter $p$, and denote by $\mathbb P$ the associated probability. The Hoeffding inequality gives that, for any $n$:
 $$
 \mathbb P
 \left\{
 \bigg|
 \sum_{k=0}^{\infty} e^{-n}\frac{n^k}{k!} (X_k(\omega) -p)
 \bigg|
 \geq
 t
 \right\}
 \; \leq \;
 2 e^{-2 c \sqrt{2\pi n}\, t^2},
 $$
 for some positive constant $c$. The expected conclusion is then obtained thanks to the Borel-Cantelli lemma.

 The fact that some of the FAP limits $\pi$ are not LRF-uniform is a direct consequence of the following lemma.

 \begin{lemma}
 	For any $0\leq p,p'\leq 1$, there exist a set $A$ and some FAP limits ${\pi}$ of $\{{\pi_n}\}_n$ such that LRF$(A)=p$ and ${\pi}(A)=p'$.
 \end{lemma}
 \begin{proof}
 	First note that, for any set $A'$, LRF$(A')=p$ if and only if $\sharp \{ k \leq n, \, k\in A'\} = pn + o(n)$. Therefore, for any set $A$ with LRF$(A)=p$ and for any set $B$ such that $\sharp \{k\leq n, \, k\in B\} = o(n)$, one has both LRF$(A\cup B) = p$ and LRF$(A\setminus B ) = p$. Take now for $B$ the following set:
 	$$
 	B = \bigcup_{k\in\mathbb N} \big\{ u\in\mathbb N \; :\; 4^k -2^k k \leq u \leq 4^k+2^k k \big\}.
 	$$
 	For that $B$, one has
 	$$
 	\limsup_{n\to\infty} \frac{\sharp\{k\leq n, \, k \in B\}}{n+1} =
 	\lim_{k \to\infty} \frac{\sum_{i=0}^{k} 2^{i+1} i }{4^k + 2^k k}
 	\; \leq \; \lim_{k \to\infty} \frac{(k+1)2^{k+2}}{4^k } = 0,
 	$$
 	and thus LRF$(B) =0$.
However, $\pi_{4^k} (B)$ converges to $1$. Indeed, if $U_k$ is a random variable with law $\pi_{4^k}$, one has
 	$$
 	\pi_{4^k}
 	\big(
 	\big\{ u\in\mathbb N \; :\; 4^k -2^k k \leq u \leq 4^k+2^k k \big\}
 	\big)
 	=
 	\mathbb P
 	\bigg( \frac{U_k -4^k}{\sqrt{4^k}} \in
 	\big[ - k \, ; \, k
 	\big]
 	\bigg).
 	$$
 	The right-hand side above converges to $1$ thanks to the central limit theorem. Hence LRF$(A\cup B) = $ LRF$(A\setminus B) = p$, while $\pi_{4^k}(A\cup B)$ converges to $1$ and $\pi_{4^k}(A\setminus B)$ converges to $0$. Now, for any $p'\in[0;1]$, choose two numbers $a0$, even when $W=\phi$, we can still create uncertainty at the eavesdropper. This is possible since Helen can choose not to transmit any information on the insecure link and transmit only a coded description of $Y^{n}$ by using the secure link at the rate $R_{sec}$, which plays the role of a correlated key. Furthermore, being correlated with the source $X^{n}$, the coded description of $Y^{n}$ also permits Alice to lower her transmission rate when compared to the case of using an uncorrelated secret key, where Alice transmits at a rate $H(X)$.

\section{Summary of Main Results}
In Section $4$, we present the rate-equivocation region for the case of a one-sided helper. We show that Slepian-Wolf type coding alone at Alice is optimal for this case. We present the rate-equivocation region for the case of a two-sided helper in Section $5$. For the case of a two-sided helper, Alice uses a conditional rate-distortion code to create an auxiliary $U$ from the source $X$ and the coded output $V$ received from Helen. This code construction is reminiscent of lossy source coding with a two-sided helper \cite{KaspiBerger:1982},\cite{Vasudevan:ISIT07}, \cite{Helper:ITW09}. In the case of lossy source coding, a conditional rate-distortion code is used where $U$ is selected to satisfy the distortion criterion. In contrast, the purpose of $U$ in secure lossless source coding is to confuse the eavesdropper. From this result, we demonstrate the insufficiency of Slepian-Wolf type coding at Alice by explicitly utilizing the side information at Alice. This observation is further highlighted in Section $6$, where we compare the rate-equivocation regions of the two-sided helper and one-sided helper cases for a pair of binary symmetric sources. For this example, we show that, for all $R_{y}>0$, the information leakage to the eavesdropper with a two-sided helper is strictly less than in the case of a one-sided helper. We finally generalize the result for the two-sided helper to the case where there are both secure and insecure rate-limited links from the two-sided helper and additional side information $W$ and $Z$ at Bob and Eve, respectively. The presence of secure and insecure rate-limited links from Helen can be viewed as a source-coding analogue of a degraded broadcast channel from Helen to (Alice, Bob) and Eve. We characterize the rate-equivocation region for this model when the sources satisfy the Markov condition $Y\rightarrow X \rightarrow (W,Z)$.

\section{One-Sided Helper}
\subsection{System model}
We consider the following source coding problem. Alice observes an $n$-length source sequence $X^{n}$, which is intended to be transmitted losslessly to Bob. The coded output of Alice can be observed by the malicious user Eve.
Moreover, Helen observes a correlated source $Y^{n}$, and there exists a noiseless rate-limited channel from Helen to Bob. We assume that the link from Helen to Bob is a secure link, so that the coded output of Helen is not observed by Eve (see Figure $1$). The sources $(X^{n},Y^{n})$ are generated i.i.d. according to $p(x,y)$, where $p(x,y)$ is defined over the finite product alphabet $\mathcal{X}\times \mathcal{Y}$. The aim of Alice is to create maximum uncertainty at Eve regarding the source $X^{n}$ while losslessly transmitting the source to Bob.

An $(n, 2^{nR_{x}},2^{nR_{y}})$ code for this model consists of an encoding function at Alice, $f_{x}:\mathcal{X}^{n}\rightarrow \{1,\ldots,2^{nR_{x}}\}$, an encoding function at Helen, $f_{y}:\mathcal{Y}^{n}\rightarrow \{1,\ldots,2^{nR_{y}}\}$, and a decoding function at Bob, $g: \{1,\ldots,2^{nR_{x}}\} \times \{1,\ldots,2^{nR_{y}}\} \rightarrow \mathcal{X}^{n}$. The uncertainty about the source $X^{n}$ at Eve is measured by $H(X^{n}|f_{x}(X^{n}))/n$. The probability of error in the reconstruction of $X^{n}$ at Bob is defined as $P_{e}^{n}=\mbox{Pr}(g(f_{x}(X^{n}),f_{y}(Y^{n}))\neq X^{n})$. A triple $(R_{x},R_{y},\Delta)$ is achievable if, for any $\epsilon>0$, there exists an $(n, 2^{nR_{x}}, 2^{nR_{y}})$ code such that $P_{e}^{n}\leq \epsilon$ and $H(X^{n}|f_{x}(X^{n}))/n \geq \Delta$. We denote the set of all achievable $(R_{x},R_{y},\Delta)$ rate triples as $\mathcal{R}_{1-sided}$.

\subsection{Result}
The main result is given in the following theorem.
\begin{Theo}\label{Theorem1} The set of achievable rate triples $\mathcal{R}_{1-sided}$ for secure source coding with a one-sided helper is given as
\begin{align}
\mathcal{R}_{1-sided}=\Big\{(R_{x},R_{y},\Delta): R_{x}&\geq H(X|V)\\
R_{y}&\geq I(Y;V) \\
\Delta&\leq I(X;V)\Big\}
\end{align}
where the joint distribution of the involved random variables factors as
\begin{align}
p(x,y,v)&=p(x,y)p(v|y)
\end{align}
and it suffices to consider such distributions for which $|\mathcal{V}|\leq |\mathcal{Y}|+2$.
\end{Theo}
The proof of Theorem $1$ is given in the Appendix.

We note that inner and outer bounds for the source coding model considered in this section can be obtained from the results presented in \cite{Deniz:ITW08}, although these bounds do not match in general. These bounds match when Bob has complete uncoded side information $Y^{n}$, i.e., when $R_{y}\geq H(Y)$.

The achievability scheme which yields the rate region described in Theorem $1$ is summarized as follows:
\begin{enumerate}
\item Helen uses a rate-distortion code to describe the correlated source $Y$ to Bob.
\item Alice performs Slepian-Wolf binning of the source $X$ with respect to the coded side information at Bob.
\end{enumerate}
Therefore, our result shows that the achievable scheme of Ahlswede and K\"{o}rner \cite{AhlswedeKorner:1975} and Wyner \cite{Wyner:1975} is optimal in the presence of an eavesdropper. Moreover, on dropping the security constraint, Theorem $1$ yields the result of \cite{AhlswedeKorner:1975},\cite{Wyner:1975}.
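To get a feel for the region in Theorem $1$, the sketch below evaluates it for a doubly symmetric binary pair, $Y=X\oplus\mathrm{Bern}(p)$ with $X\sim\mathrm{Bern}(1/2)$, using the test channel $V=Y\oplus\mathrm{Bern}(d)$. This choice of $V$ is illustrative and not claimed to be optimal here; the binary symmetric example is treated in Section $6$.
\begin{verbatim}
import math

def h(q):
    """Binary entropy in bits."""
    return 0.0 if q in (0.0, 1.0) else -q*math.log2(q) - (1-q)*math.log2(1-q)

def conv(a, b):
    """Binary convolution: a*b = a(1-b) + b(1-a)."""
    return a*(1-b) + b*(1-a)

# Doubly symmetric binary pair Y = X + Bern(p) with illustrative test
# channel V = Y + Bern(d) (mod 2); then X -> V is a BSC(conv(p, d)).
p = 0.1
for d in [0.0, 0.1, 0.3, 0.5]:
    R_y = 1 - h(d)                 # I(Y;V)
    R_x = h(conv(p, d))            # H(X|V)
    Delta = 1 - h(conv(p, d))      # I(X;V), the achievable equivocation
    print(f"d={d}: R_y={R_y:.3f}  R_x={R_x:.3f}  Delta={Delta:.3f}")
\end{verbatim}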
In this model, Alice also has access to the coded\noutput of Helen besides the source sequence $X^{n}$ (see Figure $2$).\nAn $(n, 2^{nR_{x}},2^{nR_{y}})$ code for this model consists of an\nencoding function at Alice, $f_{x}:X^{n} \\times \\{1,\\ldots,2^{nR_{y}}\\} \\rightarrow \\{1,\\ldots,2^{nR_{x}}\\}$,\nan encoding function at Helen, $f_{y}:Y^{n}\\rightarrow \\{1,\\ldots,2^{nR_{y}}\\}$, and a decoding function at Bob,\n$g: \\{1,\\ldots,2^{nR_{x}}\\} \\times \\{1,\\ldots,2^{nR_{y}}\\} \\rightarrow X^{n}$.\nThe uncertainty about the source $X^{n}$ at Eve is measured by\n$H(X^{n}|f_{x}(X^{n}))\/n$. The probability of error in the reconstruction of\n$X^{n}$ at Bob is defined as\n$P_{e}^{n}=\\mbox{Pr}(g(f_{x}(X^{n},f_{y}(Y^{n})),f_{y}(Y^{n}))\\neq X^{n})$.\nA triple $(R_{x},R_{y},\\Delta)$ is achievable if for any $\\epsilon>0$,\nthere exists a $(n, 2^{nR_{x}}, 2^{nR_{y}})$ code such that $P_{e}^{n}\\leq\n\\epsilon$ and $H(X^{n}|f_{x}(X^{n}))\/n \\geq \\Delta$. We denote\nthe set of all achievable $(R_{x},R_{y},\\Delta)$ rate triples as $\\mathcal{R}_{2-sided}$.\n\\subsection{Result}\nThe main result is given in the following theorem.\n\\begin{Theo}\\label{Theorem2} The set of achievable rate triples $\\mathcal{R}_{2-sided}$ for\nsecure source coding with two-sided helper is given as\n\\begin{align}\n\\mathcal{R}_{2-sided}=\\Big\\{(R_{x},R_{y},\\Delta): R_{x}&\\geq H(X|V)\\\\\n\\vspace{-0.1in}R_{y}&\\geq I(Y;V)\\\\\n\\Delta&\\leq \\min(I(X;V|U),R_{y})\\Big\\}\\label{Theo2}\n\\end{align}\nwhere the joint distribution of the involved random variables is as\nfollows,\n\\begin{align}\np(x,y,v,u)&=p(x,y)p(v|y)p(u|x,v)\n\\end{align}\nand it suffices to consider distributions such that $|\\mathcal{V}|\\leq\n|\\mathcal{Y}|+2$ and $|\\mathcal{U}|\\leq |\\mathcal{X}||\\mathcal{Y}|+2|\\mathcal{X}|$.\n\\end{Theo}\nThe proof of Theorem $2$ is given in the Appendix. We remark here that\nthe proof of converse for Theorem $2$ is closely related to the proof of the converse of the rate-distortion function with a two-sided\nhelper \\cite{KaspiBerger:1982},\\cite{Vasudevan:ISIT07},\\cite{Helper:ITW09}.\n\n\nThe achievability scheme which yields the rate region described in\nTheorem $2$ is summarized as follows:\n\\begin{enumerate}\n\\item Helen uses a rate-distortion\ncode to describe the correlated source $Y$ to both Bob and\nAlice through a coded output $V$.\n\\item Using the coded output $V$ and\nthe source $X$, Alice jointly quantizes $(X,V)$ to an auxiliary random\nvariable $U$. She subsequently performs Wyner-Ziv coding, i.e., bins\nthe $U$ sequences at the rate $I(X;U|V)$ such that Bob can decode $U$ by using\nthe side information $V$ from Helen.\n\\item Alice also bins the source $X$\nat a rate $H(X|U,V)$ so that having access to $(U,V)$, Bob can\ncorrectly decode the source $X$. The total rate used by\nAlice is $I(X;U|V)+H(X|U,V)=H(X|V)$.\n\\end{enumerate}\nTherefore, the main difference between the achievability schemes for Theorems $1$\nand $2$ is at the encoding at Alice and decoding at Bob. Also note\nthat selecting a constant $U$ in Theorem $2$ corresponds to\nSlepian-Wolf type coding at\nAlice which resulted in an equivocation of $I(X;V)$\nin Theorem $1$. 
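The comparison just made is easy to check numerically: for any fixed finite-alphabet choice of $p(x,y)$, $p(v|y)$ and $p(u|x,v)$, the quantities appearing in Theorems $1$ and $2$ are simple functions of the joint pmf. A minimal Python sketch (the specific distributions below are our own illustrative assumptions, not taken from the paper):

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a pmf given as an array."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Illustrative (assumed) binary joint source p(x,y) and test channels.
p_xy = np.array([[0.45, 0.05],
                 [0.05, 0.45]])                  # p(x,y)
p_v_given_y = np.array([[0.9, 0.1],
                        [0.1, 0.9]])             # p(v|y), rows indexed by y
rng = np.random.default_rng(0)
p_u_given_xv = rng.dirichlet(np.ones(2), size=(2, 2))  # p(u|x,v), assumed

# Build p(x,y,v,u) = p(x,y) p(v|y) p(u|x,v).
p = np.einsum('xy,yv,xvu->xyvu', p_xy, p_v_given_y, p_u_given_xv)

def H(keep):
    """Entropy of the marginal of p over the axes named in `keep`."""
    drop = tuple(i for i, a in enumerate('xyvu') if a not in keep)
    return entropy(p.sum(axis=drop).ravel())

print('R_x >= H(X|V)   =', H('xv') - H('v'))
print('R_y >= I(Y;V)   =', H('y') + H('v') - H('yv'))
print('Thm1: I(X;V)    =', H('x') + H('v') - H('xv'))
print('Thm2: I(X;V|U)  =', H('xu') + H('vu') - H('xvu') - H('u'))
```

For a constant $U$ the last quantity collapses to $I(X;V)$, recovering the one-sided equivocation, which is exactly the comparison made above.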
We will show in the next section through an example that the uncertainty about\nthe source at Eve for the case of a two-sided helper can be strictly\nlarger than in the case of a one-sided helper, and that selecting $U$ as a\nconstant is clearly suboptimal.\n\nBesides reflecting the fact that the uncertainty at Eve\ncan be strictly larger than in the case of a one-sided helper, Theorem $2$\nhas another interesting interpretation. If Alice and Helen can use\nsufficiently large rates to securely transmit the source $X^{n}$ to Bob, then the helper\ncan simply transmit a secret key of entropy $H(X)$ to both Alice\nand Bob. Alice can then use this secret key to\nlosslessly transmit the source to Bob in perfect secrecy by using a\none-time pad \\cite{Shannon:1949}. In other\nwords, when $R_{x}$ and $R_{y}$ are larger than $H(X)$, one can\nimmediately obtain this result from Theorem $2$ by selecting $V$\nto be independent of $(X,Y)$ and uniformly distributed on\n$\\{1,\\ldots,|\\mathcal{X}|\\}$. Finally, selecting $U=X\\oplus V$, we\nobserve that $\\min{(R_{y},I(X;V|U))}=H(X)$, yielding perfect\nsecrecy. We note that, here, $V$ plays the role of a secret key.\n\nNow consider the model where the side information $Y^{n}$ is\nof the form $Y^{n}=X^{n}\\oplus B^{n}$, where\n$|\\mathcal{B}|=|\\mathcal{X}|$, and $B^{n}$ is independent of $X^{n}$.\nMoreover, assume that the side information $Y^{n}$ is available to both Alice and\nBob in an uncoded manner. For this model, it follows from \\cite{Grokop:ISIT05}\nthat, to maximize the uncertainty at the eavesdropper, Alice cannot\ndo any better than describing the error sequence $B^{n}$ to Bob. \nNote that our two-sided helper model differs from this model in two\naspects: first, in our case, the common side information available to\nAlice and Bob is coded and rate-limited; second, the sources in our\nmodel do not have to be in modulo-additive form.\n\n\nOur encoding scheme at Alice for the case of a two-sided helper \nconsists of two steps: (a) using the\ncoded side information $V$ and the source $X$, Alice creates $U$ and\ntransmits the bin index of $U$ at a rate $I(X;U|V)$ so that Bob can estimate $U$ using\n$V$, and (b) Alice bins the source $X$ at a rate $H(X|U,V)$ and transmits\nthe bin index of the source $X$. Note that if, for a pair of sources $(X,Y)$, \nthe optimal $V$ is of the form $V=X\\oplus B$, where $|\\mathcal{B}|=|\\mathcal{X}|$ and $B$\nis independent of $X$, then it suffices to choose $U=B$, so that \n$I(X;V|U)=H(X)$ and $H(X|U,V)=0$. In other words, for such sources, step (b) in\nour achievability scheme is not necessary, which is similar to the\nachievability for the case of modulo-additive sources in \\cite{Grokop:ISIT05}.\nOn the other hand, for a general pair of sources $(X,Y)$, the optimal\n$V$ may not be of the form $V=X\\oplus B$ and the optimal $U$ may not\nalways satisfy $H(X|U,V)=0$. Therefore, we need step (b) in\nour achievability scheme. This differentiates our achievable scheme\nfrom that of \\cite{Grokop:ISIT05}, which holds for modulo-additive\nsources with uncoded side information.\n\nAlso note that if $Y^{n}$ is not of the form $X^{n}\\oplus B^{n}$, and if $R_{y}\n\\geq H(X)$, then Helen can transmit a secret key which will enable\nperfectly secure transmission of $X^{n}$ by using a one-time pad \\cite{Shannon:1949}. 
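As a quick sanity check of this one-time-pad construction (our own illustration, for a uniform binary source; all choices below are assumptions made for the check):

```python
import numpy as np
from itertools import product

def Hm(p):
    """Shannon entropy (bits) of a pmf array."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# V uniform on {0,1} and independent of X; U = X xor V (one-time pad).
p_x = np.array([0.5, 0.5])                  # assumed uniform binary source
p_xvu = np.zeros((2, 2, 2))
for x, v in product(range(2), repeat=2):
    p_xvu[x, v, x ^ v] = p_x[x] * 0.5       # p(x) p(v) 1{u = x xor v}

# I(X;V|U) = H(X,U) + H(V,U) - H(X,V,U) - H(U)
I_X_V_given_U = (Hm(p_xvu.sum(1).ravel()) + Hm(p_xvu.sum(0).ravel())
                 - Hm(p_xvu.ravel()) - Hm(p_xvu.sum(axis=(0, 1))))
print(I_X_V_given_U)                        # prints 1.0 = H(X) bits
```

The printed value equals $H(X)=1$ bit, confirming that a rate-$H(X)$ secret key from Helen suffices for perfect secrecy in Theorem $2$.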
\nThis phenomenon does not always occur when the side information $Y^{n}$ \nis available to both Alice and Bob in an uncoded fashion \\cite{Grokop:ISIT05}.\n\n\\section{An Example: Binary Symmetric Sources}\nBefore proceeding to further generalizations of Theorems $1$ and $2$, we\nexplicitly evaluate Theorems $1$ and $2$ for a pair of binary sources.\n\nLet $X$ and $Y$ be binary sources with $X\\sim \\mbox{Ber}(1\/2), Y\\sim \\mbox{Ber}(1\/2)$ and\n$X=Y\\oplus E$, where $E\\sim \\mbox{Ber}(\\delta)$. For this pair of\nsources, the region described in Theorem $1$ can be completely characterized as,\n\\begin{align}\n\\mathcal{R}_{1-sided}(R_{y})=\\big\\{(R_{x},\\Delta):\\hspace{0.05in}R_{x}&\\geq h(\\delta*h^{-1}(1-R_{y}))\\nonumber\\\\\n\\Delta&\\leq 1-h(\\delta*h^{-1}(1-R_{y}))\\big\\}\\label{regn1}\n\\end{align}\nand the region in Theorem $2$ can be completely characterized as,\n\\begin{align}\n\\mathcal{R}_{2-sided}(R_{y})=\\big\\{(R_{x},\\Delta):\\hspace{0.05in}R_{x}&\\geq h(\\delta*h^{-1}(1-R_{y}))\\nonumber\\\\\n\\Delta&\\leq \\min(R_{y},1)\\big\\}\n\\end{align}\nwhere $h(\\cdot)$ is the binary entropy function, and $a*b=a(1-b)+b(1-a)$.\n\nWe start with the derivation of (\\ref{regn1}). Without loss of\ngenerality, we assume that $R_{y}\\leq H(Y)$. Achievability follows by\nselecting $V=Y\\oplus N$, where $N\\sim \\mbox{Ber}(\\alpha)$ with\n\\begin{align}\n\\alpha&=h^{-1}(1-R_{y})\n\\end{align}\nSubstituting, we obtain\n\\begin{align}\nH(X|V)&= h(\\delta*h^{-1}(1-R_{y}))\\\\\nI(X;V)&= 1-h(\\delta*h^{-1}(1-R_{y}))\n\\end{align}\nwhich completes the achievability. The converse follows by a simple application of Mrs. Gerber's lemma \\cite{Gerber:1973} as\nfollows. Let $R_{y}\\in (0,1)$ be given. We have\n\\begin{align}\nR_{y}&\\geq I(Y;V)\\\\\n&= H(Y)-H(Y|V)\\\\\n&= 1 - H(Y|V)\n\\end{align}\nwhich implies $H(Y|V)\\geq 1-R_{y}$. Mrs. Gerber's lemma states that for\n$X=Y\\oplus E$, with $E\\sim \\mbox{Ber}(\\delta)$, if $H(Y|V)\\geq \\beta$, then $H(X|V)\\geq\nh(\\delta*h^{-1}(\\beta))$. We therefore have,\n\\begin{align}\nR_{x}&\\geq H(X|V)\\\\\n&\\geq h(\\delta*h^{-1}(1-R_{y}))\n\\end{align}\nand\n\\begin{align}\n\\Delta&\\leq I(X;V)\\\\\n&= H(X)-H(X|V)\\\\\n&= 1- H(X|V)\\\\\n&\\leq 1- h(\\delta*h^{-1}(1-R_{y}))\n\\end{align}\nThis completes the converse.\n\nFor the case of the two-sided helper, we compute the equivocation $\\Delta$ as\nfollows. We choose $V$ as $V=Y \\oplus N$ where $N\\sim \\mbox{Ber}(\\alpha)$ as\nin the case of the one-sided helper. We choose $U$ as,\n\\begin{align}\nU&=X\\oplus V\n\\end{align}\nWe next compute the term $I(X;V|U)$ as follows,\n\\begin{align}\nI(X;V|U)&=I(X;V|X \\oplus V)\\\\\n&=H(X,X\\oplus V)-H(X\\oplus V)\\\\\n&=H(X)+H(V|X)-H(X\\oplus V)\\\\\n&=H(X)+H(Y\\oplus N|X)-H(X\\oplus Y \\oplus N)\\\\\n&=H(X)+H(X\\oplus E \\oplus N|X)-H(X\\oplus X\\oplus E \\oplus N)\\\\\n&=H(X)+H(E \\oplus N)-H(E \\oplus N)\\\\\n&=H(X)\\\\\n&=1\n\\end{align}\nwhere we have used that $E\\oplus N$ is independent of $X$ and that $X\\oplus X=0$, and therefore\n\\begin{align}\n\\min(R_{y},I(X;V|U))&=\\min(R_{y},1)\n\\end{align}\nFor the converse part, we also have that\n\\begin{align}\n\\Delta&\\leq \\min(R_{y},I(X;V|U))\\\\\n&\\leq \\min(R_{y},H(X))\\\\\n&= \\min(R_{y},1)\n\\end{align}\n\nThe rate from Alice, $R_{x}$, and\nthe equivocation $\\Delta$ for the cases of the one-sided and two-sided helper are shown in\nFigure $4$ for the case when $\\delta=0.05$. For the one-sided helper, we can observe a trade-off\nin the amount of information Alice needs to send versus the\nuncertainty at Eve. For small values of $R_{y}$, Alice needs\nto send more information, thereby leaking more\ninformation to Eve. 
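The closed-form regions above are straightforward to evaluate; the following short script (our own, with an arbitrary grid over $R_{y}$) reproduces the curves compared in Figure $4$:

```python
import numpy as np

def h(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def h_inv(y):
    """Inverse of h on [0, 1/2], computed by bisection."""
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if h(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def star(a, b):
    """Binary convolution a*b = a(1-b) + b(1-a)."""
    return a * (1 - b) + b * (1 - a)

delta = 0.05
for R_y in np.linspace(0.1, 1.0, 10):
    alpha = h_inv(1.0 - R_y)              # test-channel noise level
    R_x = h(star(delta, alpha))           # rate of Alice (both regions)
    eq_one_sided = 1.0 - R_x              # Delta <= 1 - h(delta * alpha)
    eq_two_sided = min(R_y, 1.0)          # Delta <= min(R_y, H(X))
    print(f"R_y={R_y:.2f}  R_x={R_x:.3f}  "
          f"one-sided={eq_one_sided:.3f}  two-sided={eq_two_sided:.3f}")
```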
The amount of information\nleaked is exactly the information sent by Alice.\nOn the other hand, for the case of two-sided helper, the\nuncertainty at the eavesdropper is always strictly larger than the\nuncertainty in the one-sided case. Also note that for this pair of\nsources, perfect secrecy is possible for the case of two-sided\nhelper when $R_{y}\\geq H(Y)$ which is not possible for the case of\none-sided helper.\n\n\\begin{figure}[t]\n \\centerline{\\epsfig{figure=comparison.eps,width=11cm}}\\vspace{-0.1in}\n \\caption{The rate-equivocation region for a pair of binary\n symmetric sources.}\\label{fig3}\n\\end{figure}\n\n\\section{Secure and Insecure Links from Two-Sided Helper }\n\\subsection{System model}\nIn this section, we consider a generalization of the model considered\nin Section $5$. We consider the case when there are two links from\nHelen (see Figure $3$). One link of rate $R_{sec}$ is secure, i.e., the output of this link is\navailable to only Alice and Bob. The second link of rate $R_{ins}$ is public\nand the output of this link is available to Alice, Bob and\nEve. The sources $(X^{n},Y^{n},W^{n},Z^{n})$ are generated\ni.i.d. according to $p(x,y,w,z)=p(x,y)p(w,z|x)$ where $p(x,y,w,z)$ is defined over the finite\nproduct alphabet $\\mathcal{X}\\times \\mathcal{Y} \\times \\mathcal{W}\n\\times \\mathcal{Z}$.\n\nA $(n, 2^{nR_{x}}, 2^{nR_{sec}}, 2^{nR_{ins}})$ code for this model consists of an\nencoding function at Helen, $f_{y}:Y^{n}\\rightarrow\nJ_{sec}\\times J_{ins}$, where $|J_{sec}|\\leq 2^{nR_{sec}}$,\n$|J_{ins}|\\leq 2^{nR_{ins}}$, an\nencoding function at Alice, $f_{x}:X^{n}\\times\nJ_{sec}\\times J_{ins}\\rightarrow \\{1,\\ldots,2^{nR_{x}}\\}$,\nand a decoding function at Bob, $g: \\{1,\\ldots,2^{nR_{x}}\\} \\times\nJ_{sec}\\times J_{ins} \\times W^{n} \\rightarrow X^{n}$.\nThe uncertainty about the source $X^{n}$ at Eve is measured by\n$H(X^{n}|f_{x}(X^{n},J_{sec}, J_{ins}),J_{ins},Z^{n})\/n$. The\nprobability of error in the reconstruction of\n$X^{n}$ at Bob is defined as\n$P_{e}^{n}=\\mbox{Pr}(g(f_{x}(X^{n},J_{sec}, J_{ins}),J_{sec}, J_{ins},W^{n})\\neq X^{n})$.\nA quadruple $(R_{x}, R_{sec}, R_{ins}, \\Delta)$ is achievable if for any $\\epsilon>0$,\nthere exists a $(n, 2^{nR_{x}}, 2^{nR_{sec}}, 2^{nR_{ins}})$ code such that $P_{e}^{n}\\leq\n\\epsilon$ and $H(X^{n}|f_{x}(X^{n},J_{sec},J_{ins}),J_{ins}, Z^{n})\/n\n\\geq \\Delta$. 
We denote the set of all achievable\n$(R_{x},R_{sec},R_{ins},\\Delta)$ rate quadruples as $\\mathcal{R}_{sec-ins}^{W,Z}$.\n\\subsection{ Result}\nThe main result is given in the following theorem.\n\\begin{Theo}\\label{Theorem3} The set of achievable rate quadruples\n $\\mathcal{R}_{sec-ins}^{W, Z}$ for\nsecure source coding with secure and insecure links from a two-sided helper, additional side\ninformation $W$ at Bob and $Z$ at Eve is given as\n\\begin{align}\n\\mathcal{R}_{sec-ins}^{W,Z}=\\Big\\{(R_{x},R_{sec}, R_{ins}, \\Delta): R_{x}&\\geq H(X|V_{1},V_{2},W)\\\\\nR_{sec}&\\geq I(Y;V_{1}|W)\\\\\nR_{ins}&\\geq I(Y;V_{2}|W,V_{1})\\\\\n\\Delta&\\leq \\min(R_{sec},I(X;V_{1}|U,V_{2},W))\\nonumber\\\\\n&\\hspace{0.18in}+I(X;W|U,V_{2})-I(X;Z|U,V_{2})\\Big\\}\n\\end{align}\nwhere the joint distribution of the involved random variables is as\nfollows,\n\\begin{align}\np(x,y,w,z,v_{1},v_{2},u)&=p(x,y)p(w,z|x)p(v_{1},v_{2}|y)p(u|x,v_{1},v_{2})\n\\end{align}\nand it suffices to consider such distributions for which\n$|\\mathcal{V}_{1}|\\leq |\\mathcal{Y}|+3$, $|\\mathcal{V}_{2}|\\leq\n|\\mathcal{Y}|+4$ and $|\\mathcal{U}|\\leq |\\mathcal{X}||\\mathcal{Y}|^{2}+7|\\mathcal{X}||\\mathcal{Y}|+12|\\mathcal{X}|+2$.\n\\end{Theo}\nThe proof of Theorem $3$ is given in the Appendix.\n\nThe achievability scheme which yields the rate region described in\nTheorem $3$ is summarized as follows:\n\\begin{enumerate}\n\\item Helen first uses the secure\nlink to describe the source $Y$ to both Alice and Bob at a rate\n$I(Y;V_{1}|W)$, where $W$ plays the role of side\ninformation. Subsequently, the insecure link is used to provide\nanother description of the correlated source $Y$ at a rate\n$I(Y;V_{2}|W,V_{1})$, where $(W,V_{1})$ plays the role of side\ninformation. Therefore, the key idea is to first use the secure link\nto build common randomness at Alice and Bob and subsequently use this\ncommon randomness to send information over the insecure link at a lower rate.\n\\vspace{-0.1in}\n\\item Having access to the coded outputs\n$(V_{1},V_{2})$ from Helen and\nthe source $X$, Alice jointly quantizes $(X,V_{1},V_{2})$ to an auxiliary random\nvariable $U$. She subsequently performs Wyner-Ziv coding, i.e., bins\nthe $U$ sequences at the rate $I(X;U|V_{1},V_{2},W)$ such that Bob can decode $U$ by using\n$W$ and the coded outputs $(V_{1},V_{2})$ from the helper.\n\\item She also bins the source $X$\nat a rate $H(X|U,V_{1},V_{2},W)$ so that having access to $(U,V_{1},V_{2},W)$, Bob can\ncorrectly decode the source $X$. 
The total rate used by\nAlice is $I(X;U|V_{1},V_{2},W)+H(X|U,V_{1},V_{2},W)=H(X|V_{1},V_{2},W)$.\n\\end{enumerate}\n\nWe note here that using Theorem $3$, we can recover Theorem $2$ by\nsetting $R_{ins}=0$, and selecting $W=Z=V_{2}=\\phi$.\n\nOn setting $R_{sec}=0$, the resulting model is similar to the one\nconsidered in \\cite{CsiszarNarayan} although the aim in\n\\cite{CsiszarNarayan} is to generate a secret key to be shared by\nAlice and Bob, while we are interested in the secure\ntransmission of the source $X$.\n\nIf $R_{sec}=R_{ins}=0$, then we recover the result of\n\\cite{VinodKannan:ITW07} as a special case by setting\n$V_{1}=V_{2}=\\phi$.\nTherefore, Theorem $3$ can also be viewed as a\ngeneralization of the result of \\cite{VinodKannan:ITW07} where no rate\nconstraint is imposed on the transmission of Alice and the goal is\nto maximize the uncertainty at Eve while losslessly transmitting the\nsource to Bob.\n\n\n\\section{Conclusions}\nIn this paper, we considered several secure source coding problems. We\nfirst provided the characterization of the rate-equivocation region\nfor a secure source coding problem with coded side information at\nthe legitimate user. We next generalized this result for two different\n models with increasing complexity. We characterized the\n rate-equivocation region for the case of two-sided helper. \nThe value of two-sided coded side information is emphasized by\ncomparing the respective equivocations for a pair of binary sources.\nIt is shown for this example that Slepian-Wolf type coding alone is\ninsufficient and using our achievable scheme, one attains strictly\nlarger uncertainty at the eavesdropper than the case of one-sided helper.\nWe next considered the case when there are both secure and insecure\nrate-limited links from the helper and characterized the\nrate-equivocation region.\n\n\n\n\\section{Appendix}\n\\subsection{Proof of Theorem \\ref{Theorem1}}\n\n\\subsubsection{Achievability}\nFix the distribution $p(x,y,v)=p(x,y)p(v|y)$.\n\\begin{enumerate}\n\\item Codebook generation at Helen:\nFrom the conditional probability distribution $p(v|y)$ compute $p(v)=\\sum_{y}p(y)p(v|y)$.\nGenerate $2^{nI(V;Y)}$ codewords $v(l)$ independently\naccording to $\\prod_{i=1}^{n}p(v_{i})$, where $l=1,\\ldots,2^{nI(V;Y)}$.\n\\item Codebook generation at Alice:\nRandomly bin the $x^{n}$ sequences into\n$2^{nH(X|V)}$ bins and index these bins as $m=1,\\ldots,M$, where $M=2^{nH(X|V)}$.\n\\item Encoding at Helen:\nOn observing the sequence $y^{n}$, Helen tries to find a\nsequence $v(l)$ such that $(v(l),y^{n})$ are jointly\ntypical. From rate-distortion theory, we know that there exists one\nsuch sequence as long as $R_{y}\\geq I(V;Y)$. Helen sends the index $l$ of\nthe sequence $v(l)$.\n\\item Encoding at Alice:\nOn observing the sequence $x^{n}$, Alice finds the bin index $m_{X}$\nin which the sequence $x^{n}$ falls and transmits the bin index $m_{X}$.\n\\item Decoding at Bob:\nOn receiving $l$ and the bin index $m_{X}$, Bob tries to find\na unique $x^{n}$ sequence in bin $m_{X}$ such that $(v(l),x^{n})$\nare jointly typical. 
This is possible since the number of $x^{n}$\nsequences in each bin is roughly $2^{nH(X)}\/2^{nH(X|V)}$ which is\n$2^{nI(X;V)}$ and the existence of a unique $x^{n}$ such that $(v(l),x^{n})$\nare jointly typical is guaranteed by the Markov lemma \\cite{Cover:book}.\n\\item Equivocation:\n\\begin{align}\nH(X^{n}|m_{X})&=H(X^{n},m_{X})-H(m_{X})\\\\\n&=H(X^{n})+H(m_{X}|X^{n})-H(m_{X})\\\\\n&= H(X^{n})-H(m_{X})\\\\\n&\\geq H(X^{n})-\\mbox{log}(M)\\\\\n&=H(X^{n})-nH(X|V)\\\\\n&= n I(X;V)\n\\end{align}\nTherefore,\n\\begin{align}\n\\Delta&\\leq I(X;V)\n\\end{align}\nis achievable. This completes the achievability part.\n\\end{enumerate}\n\n\\subsubsection{Converse}\nLet the output of the helper be $J_{y}$, and the\noutput of Alice be $J_{x}$, i.e.,\n\\begin{align}\nJ_{y}&=f_{y}(Y^{n})\\\\\nJ_{x}&=f_{x}(X^{n})\n\\end{align}\nFirst note that, for noiseless reconstruction of the sequence $X^{n}$ at the legitimate decoder, we have by Fano's inequality\n\\begin{align}\nH(X^{n}|J_{x},J_{y})&\\leq n\\epsilon_{n}\\label{fano}\n\\end{align}\n\nWe start by obtaining a lower bound on $R_{x}$, the rate of Alice, as follows\n\\begin{align}\nnR_{x}&\\geq H(J_{x})\\\\\n&\\geq H(J_{x}|J_{y})\\\\\n&= H(X^{n},J_{x}|J_{y})-H(X^{n}|J_{x},J_{y})\\\\\n&\\geq H(X^{n},J_{x}|J_{y})-n\\epsilon_{n}\\label{e1}\\\\\n&\\geq H(X^{n}|J_{y})-n\\epsilon_{n}\\\\\n&=\\sum_{i=1}^{n}H(X_{i}|X^{i-1},J_{y})-n\\epsilon_{n}\\\\\n&=\\sum_{i=1}^{n}H(X_{i}|V_{i})-n\\epsilon_{n}\\label{e2}\\\\\n&= nH(X_{Q}|V_{Q},Q)-n\\epsilon_{n}\\\\\n&=nH(X|V)-n\\epsilon_{n}\\label{etemp2}\n\\end{align}\nwhere (\\ref{e1}) follows by (\\ref{fano}). In (\\ref{e2}), we have defined\n\\begin{align}\nV_{i}&=(J_{y},X^{i-1})\n\\end{align}\nIn (\\ref{etemp2}), we have defined,\n\\begin{align}\nX=X_{Q}, \\quad V=(Q,V_{Q})\n\\end{align}\nwhere $Q$ is uniformly distributed on $\\{1,\\ldots,n\\}$ and is independent of all\nother random variables.\n\nNext, we obtain a lower bound on $R_{y}$, the rate of the helper,\n\\begin{align}\nnR_{y}&\\geq H(J_{y})\\\\\n&\\geq I(J_{y};Y^{n})\\\\\n&= \\sum_{i=1}^{n}I(J_{y},Y^{i-1};Y_{i})\\\\\n&= \\sum_{i=1}^{n}I(J_{y},Y^{i-1},X^{i-1};Y_{i})\\label{e3}\\\\\n&\\geq \\sum_{i=1}^{n}I(J_{y},X^{i-1};Y_{i})\\\\\n&= \\sum_{i=1}^{n}I(V_{i};Y_{i})\n\\end{align}\n\\begin{align}\n&= nI(V_{Q};Y_{Q}|Q)\\\\\n&=nI(V;Y)\\label{etemp3}\n\\end{align}\nwhere (\\ref{e3}) follows from the Markov chain\n\\begin{align}\nX^{i-1}\\rightarrow (J_{y},Y^{i-1}) \\rightarrow Y_{i}\n\\end{align}\nand in (\\ref{etemp3}), we have defined $Y=Y_{Q}$.\n\nWe now have the main step, i.e., an upper bound on the equivocation\nrate of the eavesdropper,\n\\begin{align}\nH(X^{n}|J_{x})&=\nH(X^{n},J_{y}|J_{x})-H(J_{y}|X^{n},J_{x})\\\\\n&=\nH(J_{y}|J_{x})-H(J_{y}|X^{n},J_{x})+H(X^{n}|J_{x},J_{y})\\\\\n&=\nH(J_{y}|J_{x})-H(J_{y}|X^{n})+H(X^{n}|J_{x},J_{y})\\label{e4}\\\\\n&\\leq\nH(J_{y})-H(J_{y}|X^{n})+H(X^{n}|J_{x},J_{y})\\\\\n&\\leq I(J_{y};X^{n}) + n\\epsilon_{n}\\label{e5}\\\\\n&= \\sum_{i=1}^{n}I(J_{y};X_{i}|X^{i-1}) + n\\epsilon_{n}\\\\\n&= \\sum_{i=1}^{n}I(J_{y},X^{i-1};X_{i}) +\nn\\epsilon_{n}\\\\\n&=\\sum_{i=1}^{n}I(X_{i};V_{i})+\nn\\epsilon_{n}\\\\\n&= nI(X_{Q};V_{Q}|Q)+n\\epsilon_{n}\\\\\n&= nI(X;V)+n\\epsilon_{n}\n\\end{align}\nwhere (\\ref{e4}) follows from the Markov chain\n\\begin{align}\nJ_{x}\\rightarrow X^{n} \\rightarrow J_{y}\n\\end{align}\nand (\\ref{e5}) follows from (\\ref{fano}). 
This implies\n\\begin{align}\n\\Delta&\\leq I(X;V)\n\\end{align}\n\nAlso note that the following is a Markov chain,\n\\begin{align}\nV\\rightarrow Y \\rightarrow X\n\\end{align}\nTherefore, the joint distribution of the involved random variables is\n\\begin{align}\np(x,y,v)&=p(x,y)p(v|y)\n\\end{align}\nFrom the support lemma \\cite{Csiszar:book}, it can be shown that it\nsuffices to consider such joint distributions for which\n$|\\mathcal{V}|\\leq |\\mathcal{Y}|+2$.\n\n\n\\subsection{Proof of Theorem \\ref{Theorem2}}\n\\subsubsection{Achievability}\nFix the distribution $p(x,y,v,u)=p(x,y)p(v|y)p(u|x,v)$.\n\\begin{enumerate}\n\\item Codebook generation at Helen:\nFrom the conditional probability distribution $p(v|y)$ compute $p(v)=\\sum_{y}p(y)p(v|y)$.\nGenerate $2^{nI(V;Y)}$ codewords $v(l)$ independently\naccording to $\\prod_{i=1}^{n}p(v_{i})$, where $l=1,\\ldots,2^{nI(V;Y)}$.\n\\item Codebook generation at Alice:\nFrom the distribution $p(u|x,v)$, compute $p(u)$.\nGenerate $2^{nI(X,V;U)}$ sequences $u(s)$ independently\naccording to $\\prod_{i=1}^{n}p(u_{i})$, where\n$s=1,\\ldots,2^{nI(X,V;U)}$. Next, bin these sequences uniformly into\n$2^{nI(X;U|V)}$ bins.\n\nAlso, randomly bin the $x^{n}$ sequences into\n$2^{nH(X|U,V)}$ bins and index these bins as $m=1,\\ldots,2^{nH(X|U,V)}$.\n\\item Encoding at Helen:\nOn observing the sequence $y^{n}$, Helen tries to find a\nsequence $v(l)$ such that $(v(l),y^{n})$ are jointly\ntypical. From rate-distortion theory, we know that there exists one\nsuch sequence. Helen sends the index $l$ of\nthe sequence $v(l)$.\n\\item Encoding at Alice:\nThe key difference from the one-sided helper case is in the encoding at\nAlice. On observing the sequence $x^{n}$, Alice first finds the bin index $m_{X}$\nin which the sequence $x^{n}$ falls. Alice also has the sequence $v(l)$\nreceived from Helen. Alice next finds a sequence $u$ such\nthat $(u,v(l),x^{n})$ are jointly typical. Let the\nbin index of this resulting $u$ sequence be $s_{U}$.\n\nAlice transmits the pair $(s_{U},m_{X})$ which is received by Bob and\nEve. The total rate used by Alice is $ I(X;U|V)+H(X|U,V)=H(X|V)$.\n\n\\item Decoding at Bob: On receiving the pair $(s_{U},m_{X})$ from\nAlice and the index $l$ from Helen, Bob first searches the\nbin $s_{U}$ for a sequence $\\hat{u}$ such that $(\\hat{u},v(l))$\nare jointly typical. This is possible since the number of $u$\nsequences in each auxiliary bin is approximately\n$2^{nI(X,V;U)}\/2^{nI(X;U|V)}$ which is $2^{nI(U;V)}$ and therefore\nwith high probability, Bob will be able to obtain the correct $u$\nsequence.\n\nUsing the estimate $\\hat{u}$ and $v(l)$, Bob\nsearches for a unique $x^{n}$ sequence in the\nbin $m_{X}$ such that $(\\hat{u},v(l),x^{n})$ are jointly\ntypical. 
This is possible since the number of $x^{n}$\nsequences in each bin is approximately $2^{nH(X)}\/2^{nH(X|U,V)}$ which is\n$2^{nI(U,V;X)}$.\n\\item Equivocation:\n\\begin{align}\nH(X^{n}|s_{U},m_{X})&=H(X^{n},m_{X},s_{U})-H(m_{X},s_{U})\\\\\n&=H(X^{n})+H(m_{X},s_{U}|X^{n})-H(m_{X},s_{U})\\\\\n&=H(X^{n})+H(s_{U}|X^{n})-H(m_{X},s_{U})\\label{equi1}\\\\\n&\\geq H(X^{n})+H(s_{U}|X^{n})-H(s_{U})-H(m_{X})\\label{equi2}\\\\\n&= H(X^{n})-H(m_{X})-I(s_{U};X^{n})\\\\\n&\\geq H(X^{n})-nH(X|U,V)-I(s_{U};X^{n})\\label{equi3}\\\\\n&= nI(X;U,V)-I(s_{U};X^{n})\\\\\n&\\geq nI(X;U,V)-I(U^{n};X^{n})\\label{equi4}\\\\\n&\\geq nI(X;U,V)-nI(U;X)\\label{equi5}\\\\\n&= nI(X;V|U) \\\\\n&\\geq n\\min(I(X;V|U),R_{y}) \\label{equi6}\n\\end{align}\nwhere (\\ref{equi1}) follows from the fact that $m_{X}$ is the bin\nindex of the sequence $X^{n}$, (\\ref{equi2}) follows from the fact\nthat conditioning reduces entropy, (\\ref{equi3}) follows from the fact\nthat $H(m_{X})\\leq \\mbox{log}(2^{nH(X|U,V)})$, (\\ref{equi4}) follows\nfrom the fact that $s_{U}$ is the bin index of the sequence $U^{n}$,\ni.e., $s_{U}\\rightarrow U^{n}\\rightarrow X^{n}$ forms a Markov chain\nand subsequently using the data processing inequality. Finally,\n(\\ref{equi6}) follows from the fact that $\\min(I(X;V|U),R_{y})\\leq I(X;V|U)$.\nWe therefore have\n\\begin{align}\n\\Delta&\\leq \\min(I(X;V|U),R_{y})\n\\end{align}\nis achievable. This completes the achievability part for the case of two-sided helper.\n\\end{enumerate}\n\n\\subsubsection{Converse}\nThe only difference in the converse part for the case of two-sided helper\nis when deriving an upper bound on the equivocation\nrate of the eavesdropper:\n\\begin{align}\nH(X^{n}|J_{x})&=\nH(X^{n},J_{y}|J_{x})-H(J_{y}|X^{n},J_{x})\\\\\n&=\nH(J_{y}|J_{x})-H(J_{y}|X^{n},J_{x})+H(X^{n}|J_{x},J_{y})\\\\\n&\\leq\nI(X^{n};J_{y}|J_{x})+n\\epsilon_{n}\\\\\n&=\n\\sum_{i=1}^{n}I(X_{i};J_{y}|J_{x},X^{i-1})+n\\epsilon_{n}\\\\\n&=\n\\sum_{i=1}^{n}I(X_{i};J_{y},X^{i-1}|J_{x},X^{i-1})+n\\epsilon_{n}\\\\\n&=n I(X;V|U) + n \\epsilon_{n}\n\\end{align}\nwhere we have defined\n\\begin{align}\nV_{i}&=(J_{y},X^{i-1})\\label{defV}\\\\\nU_{i}&=(J_{x},X^{i-1})\\label{defU}\n\\end{align}\nand\n\\begin{align}\nX=X_{Q},\\quad Y=Y_{Q}, \\quad V=(Q,V_{Q}), \\quad U=(Q,U_{Q})\n\\end{align}\nWe also have,\n\\begin{align}\nH(X^{n}|J_{x})&=\nH(X^{n},J_{y}|J_{x})-H(J_{y}|X^{n},J_{x})\\\\\n&=\nH(J_{y}|J_{x})-H(J_{y}|X^{n},J_{x})+H(X^{n}|J_{x},J_{y})\\\\\n&\\leq\nH(J_{y}|J_{x})+n\\epsilon_{n}\\\\\n&\\leq\nH(J_{y})+n\\epsilon_{n}\\\\\n&\\leq n R_{y} + n \\epsilon_{n}\n\\end{align}\nThis implies\n\\begin{align}\n\\Delta&\\leq \\min(I(X;V|U),R_{y})\n\\end{align}\n\nThe joint distribution of the involved random variables is as\nfollows,\n\\begin{align}\np^{out}(x,y,v,u)&=p(x,y)p(v|y)p^{out}(u|x,v,y)\\label{outerr}\n\\end{align}\nNote that in the achievability proof of Theorem $2$, joint distributions of the\nfollowing form are permitted,\n\\begin{align}\np^{ach}(x,y,v,u)&=p(x,y)p(v|y)p^{ach}(u|x,v)\\label{innerr}\n\\end{align}\ni.e, we have the Markov chain, $Y\\rightarrow (X,V) \\rightarrow U$.\nWith the definition of $V$ and $U$ as in (\\ref{defV})-(\\ref{defU}), these random variables do not satisfy this\nMarkov chain. 
This implies that what we have shown so far is the following,\n\\begin{align}\n\\mathcal{R}_{2-sided}\\subseteq \\mathcal{R}_{out}\n\\end{align}\nwhere\n\\begin{align}\n\\mathcal{R}_{out}=\n\\Big\\{(R_{x},R_{y},\\Delta): R_{x}&\\geq H(X|V)\\\\\n\\vspace{-0.1in}R_{y}&\\geq I(Y;V)\\\\\n\\Delta&\\leq \\min(I(X;V|U),R_{y})\\Big\\}\n\\end{align}\nwhere the joint distribution of the involved random variables is as\ngiven in (\\ref{outerr}).\n\nHowever, we observe that the term $I(X;V|U)$ depends only on the\nmarginal $p^{out}(u|x,v)$. Similarly, the terms $H(X|V)$ and $I(X;V)$\ndepend only on the marginal $p(x,v)$. We use these observations\nto show that the region $\\mathcal{R}_{out}$\nis the same when it is evaluated using the joint distributions of the\nform given in (\\ref{innerr}). This is clear by noting that once we are given a distribution of the\nform given in (\\ref{outerr}), we can construct a new distribution of the\nform given in (\\ref{innerr}) with the same rate expressions.\nConsider any distribution $p^{out}(x,y,v,u)$ of the form given in (\\ref{outerr}).\nUsing $p^{out}(x,y,v,u)$, compute the marginal $p^{out}(u|x,v)$ as,\n\\begin{align}\np^{out}(u|x,v)&=\\frac{\\sum_{y}p^{out}(x,y,u,v)}{p(x,v)}\n\\end{align}\nWe now construct a distribution $p^{ach}(x,y,v,u)\\in \\mathcal{P}_{ach}$ as,\n\\begin{align}\np^{ach}(x,y,v,u)&=p(x,y)p(v|y)p^{out}(u|x,v)\n\\end{align}\nsuch that the terms $I(X;V|U)$, $H(X|V)$ and $I(X;V)$ are the same whether they\nare evaluated according to $p^{out}(x,y,v,u)$ or according to $p^{ach}(x,y,v,u)$.\nTherefore, we conclude that it suffices to consider input\ndistributions satisfying the Markov chain $Y\\rightarrow\n(X,V)\\rightarrow U$ when evaluating $\\mathcal{R}_{out}$ and hence $\\mathcal{R}_{out}=\\mathcal{R}_{ach}$.\nThis completes the converse part. We remark here that this observation\nwas useful in obtaining the converse for the rate-distortion function with common coded\nside information \\cite{KaspiBerger:1982},\\cite{Vasudevan:ISIT07},\\cite{Helper:ITW09}.\n\n\n\\subsection{Proof of Theorem \\ref{Theorem3}}\n\\subsubsection{Achievability}\nFix the distribution $p(x,y,w,z,v_{1},v_{2},u)=p(x,y)p(w,z|x)p(v_{1},v_{2}|y)p(u|x,v_{1},v_{2})$.\n\\begin{enumerate}\n\\item Codebook generation at Helen: From the conditional\nprobability distribution $p(v_{1},v_{2}|y)$ compute\n$p(v_{1})=\\sum_{(y,v_{2})}p(y)p(v_{1},v_{2}|y)$ and\n$p(v_{2})=\\sum_{(y,v_{1})}p(y)p(v_{1},v_{2}|y)$. Generate\n$2^{nI(V_{1};Y)}$ sequences $v_{1}(l)$ independently according to\n$\\prod_{i=1}^{n}p(v_{1i})$, where $l=1,\\ldots,2^{nI(V_{1};Y)}$.\nBin these sequences uniformly and independently in\n$2^{n(I(V_{1};Y)-I(V_{1};W))}$ bins. Denote the bin index of the\nsequence $v_{1}(l)$ as $b_{V_{1}}(v_{1}(l))$.\n\nNext generate $2^{nI(V_{2};Y,V_{1})}$ sequences $v_{2}(j)$\nindependently according to $\\prod_{i=1}^{n}p(v_{2i})$, where\n$j=1,\\ldots,2^{nI(V_{2};Y,V_{1})}$. Bin these sequences uniformly\nand independently in \\break\n$2^{n(I(V_{2};Y,V_{1})-I(V_{2};W,V_{1}))}$ bins. Denote the bin\nindex of the sequence $v_{2}(j)$ as $b_{V_{2}}(v_{2}(j))$.\n\n\\item Codebook generation at Alice:\nFrom the distribution $p(u|x,v_{1},v_{2})$, compute $p(u)$.\nGenerate $2^{nI(X,V_{1},V_{2};U)}$ sequences $u(s)$ independently\naccording to $\\prod_{i=1}^{n}p(u_{i})$, where\n$s=1,\\ldots,2^{nI(X,V_{1},V_{2};U)}$. 
Next, bin these sequences uniformly into\n$2^{n(I(X,V_{1},V_{2};U)-I(W,V_{1},V_{2};U))}$ bins.\n\nAlso, randomly bin the $x^{n}$ sequences into\n$2^{nH(X|U,V_{1},V_{2},W)}$ bins and index these bins as\n$m=1,\\ldots,2^{nH(X|U,V_{1},V_{2},W)}$.\n\n\\item Encoding at Helen:\nOn observing the sequence $y^{n}$, Helen tries to find a\nsequence $v_{1}(l)$ such that $(v_{1}(l),y^{n})$ are jointly\ntypical. From rate-distortion theory, we know that there exists one\nsuch sequence. Helen sends the bin index $b_{V_{1}}(v_{1}(l))$ of\nthe sequence $v_{1}(l)$ on the secure link which is received by Alice and Bob.\n\nHelen also finds a sequence $v_{2}(j)$ such that $(v_{1}(l),v_{2}(j),y^{n})$ are jointly\ntypical. From rate-distortion theory, we know that there exists one\nsuch sequence. Helen sends the bin index $b_{V_{2}}(v_{2}(j))$ of\nthe sequence $v_{2}(j)$ on the insecure link which is received by Alice, Bob and Eve.\n\n\\item Encoding at Alice:\nOn observing the sequence $x^{n}$, Alice first finds the bin index $m_{X}$\nin which the sequence $x^{n}$ falls. Alice also receives the bin indices\n$b_{V_{1}}(v_{1}(l))$ and $b_{V_{2}}(v_{2}(j))$ from Helen. She first looks for a unique $\\hat{v}_{1}(l)$\nin the bin $b_{V_{1}}(v_{1}(l))$ such that $(x^{n}, \\hat{v}_{1}(l))$\nare jointly typical. Alice can estimate the correct sequence\n$v_{1}(l)$ as long as the number of sequences in each bin is\nless than $2^{nI(X;V_{1})}$. Note from the codebook generation step at\nHelen that the number of\n$v_{1}$ sequences in each bin is approximately\n$2^{nI(V_{1};Y)}\/2^{n(I(V_{1};Y)-I(V_{1};W))}=2^{nI(V_{1};W)}$. Since $V_{1}\\rightarrow X\n\\rightarrow W$ forms a Markov chain, we have $I(V_{1};W)\\leq I(V_{1};X)$ and therefore the number of\n$v_{1}$ sequences in each bin is less than $2^{nI(X;V_{1})}$ and\nconsequently Alice can estimate the correct sequence $v_{1}(l)$.\n\nUsing $x^{n}$ and $\\hat{v}_{1}(l)$, Alice looks for a unique $\\hat{v}_{2}(j)$\nin the bin $b_{V_{2}}(v_{2}(j))$ such that $(x^{n}, \\hat{v}_{1}(l), \\hat{v}_{2}(j))$\nare jointly typical. Alice can estimate the correct sequence\n$v_{2}(j)$ as long as the number of sequences in each bin is\nless than $2^{nI(X,V_{1};V_{2})}$. The number of $v_{2}(j)$\nsequences in each bin is $2^{nI(W,V_{1};V_{2})}$. From the Markov chain\n$W\\rightarrow X\\rightarrow (V_{1},V_{2})$, we have $I(W,V_{1};V_{2})\\leq I(X,V_{1};V_{2})$\nand therefore Alice can correctly estimate the sequence $v_{2}(j)$.\n\nShe next finds a sequence $u$ such that $(u,\\hat{v}_{1}(l),\\hat{v}_{2}(j),x^{n})$ are jointly typical. Let the\nbin index of this resulting $u$ sequence be $s_{U}$.\n\nAlice transmits the pair $(s_{U},m_{X})$ which is received by Bob and\nEve. The total rate used by Alice is $ I(X;U|V_{1},V_{2},W)+H(X|U,V_{1},V_{2},W)=H(X|V_{1},V_{2},W)$.\n\n\\item Decoding at Bob:\nOn receiving the pair $(s_{U},m_{X})$ from Alice and the bin indices\n$b_{V_{1}}(v_{1}(l))$ and $b_{V_{2}}(v_{2}(j))$ from\nHelen, Bob looks for a unique $\\hat{v}_{1}(l)$\nin the bin $b_{V_{1}}(v_{1}(l))$ such that $(w^{n}, \\hat{v}_{1}(l))$\nare jointly typical. He can estimate the correct sequence $v_{1}(l)$ with\nhigh probability since the number of $v_{1}$ sequences in each\nbin is approximately $2^{nI(V_{1};W)}$. Using the estimate\n$\\hat{v}_{1}(l)$ and $w^{n}$, he looks for a unique $\\hat{v}_{2}(j)$\nin the bin $b_{V_{2}}(v_{2}(j))$ such that $(w^{n}, \\hat{v}_{1}(l), \\hat{v}_{2}(j))$\nare jointly typical. 
With high probability, the correct sequence\n$v_{2}(j)$ can be decoded by Bob since the number of\n$v_{2}$ sequences in each bin is approximately $2^{nI(V_{2};W|V_{1})}$.\n\nHe next searches the bin $s_{U}$ for a sequence $\\hat{u}$ such\nthat $(\\hat{u},\\hat{v}_{1}(l),\\hat{v}_{2}(j),w^{n})$ are jointly\ntypical. Since the number of $u$ sequences in each auxiliary bin\nis approximately\n$2^{nI(X,V_{1},V_{2};U)}\/2^{n(I(X,V_{1},V_{2};U)-I(W,V_{1},V_{2};U))}$\nwhich is $2^{nI(W,V_{1},V_{2};U)}$, with high probability, Bob\nwill be able to obtain the correct $u$ sequence.\n\nUsing the estimates $\\hat{u}$, $\\hat{v}_{1}(l)$, and\n$\\hat{v}_{2}(j)$, Bob\nsearches for a unique $x^{n}$ sequence in the\nbin $m_{X}$ such that $(\\hat{u}, \\hat{v}_{1}(l),\n\\hat{v}_{2}(j), w^{n},x^{n})$ are jointly\ntypical. This is possible since the number of $x^{n}$\nsequences in each bin is approximately\n$2^{nH(X)}\/2^{nH(X|U,V_{1},V_{2},W)}$, i.e.,\n$2^{nI(U,V_{1},V_{2},W;X)}$. Therefore, Bob can correctly decode the\n$x^{n}$ sequence with high probability.\n\n\\item{Equivocation}: \n\\begin{align}\nH(X^{n}|s_{U},m_{X},b_{V_{2}},Z^{n})&=H(X^{n},m_{X},s_{U},b_{V_{2}}|Z^{n})-H(m_{X},s_{U},b_{V_{2}}|Z^{n})\\\\\n&=H(X^{n}|Z^{n})+H(m_{X},s_{U},b_{V_{2}}|X^{n},Z^{n})-H(m_{X},s_{U},b_{V_{2}}|Z^{n})\\\\\n&=H(X^{n}|Z^{n})+H(s_{U},b_{V_{2}}|X^{n},Z^{n})-H(m_{X},s_{U},b_{V_{2}}|Z^{n})\\label{e12L}\\\\\n&\\geq H(X^{n}|Z^{n})+H(s_{U},b_{V_{2}}|X^{n},Z^{n})-H(m_{X}|Z^{n})\\nonumber\\\\&\\hspace{0.17in}-H(s_{U},b_{V_{2}}|Z^{n})\\label{e12Ltem1}\\\\\n&\\geq\nH(X^{n}|Z^{n})+H(s_{U},b_{V_{2}}|X^{n},Z^{n})-H(m_{X})\\nonumber\\\\&\\hspace{0.17in}-H(s_{U},b_{V_{2}}|Z^{n})\\label{e12Ltem2}\\\\\n&\\geq\nH(X^{n}|Z^{n})+H(s_{U},b_{V_{2}}|X^{n},Z^{n})-nH(X|U,V_{1},V_{2},W)\\nonumber\\\\&\\hspace{0.17in}-H(s_{U},b_{V_{2}}|Z^{n})\\label{e22L}\\\\\n&=\nH(X^{n}|Z^{n})-I(s_{U},b_{V_{2}};X^{n}|Z^{n})-nH(X|U,V_{1},V_{2},W)\\\\\n&\\geq\nH(X^{n}|Z^{n})-I(U^{n},V_{2}^{n};X^{n}|Z^{n})-nH(X|U,V_{1},V_{2},W)\\label{e32L}\\\\\n&\\geq\nH(X^{n}|Z^{n})-nI(U,V_{2};X|Z)-nH(X|U,V_{1},V_{2},W)\\\\\n&=\nnH(X|Z)-nI(U,V_{2};X|Z)-nH(X|U,V_{1},V_{2},W)\n\\end{align}\n\\begin{align}\n&=\nn(H(X|U,V_{2},Z)-H(X|U,V_{1},V_{2},W))\\\\\n&=\nn(I(X;V_{1}|U,V_{2},W)+I(X;W|U,V_{2})-I(X;Z|U,V_{2}))\\\\\n&\\geq\nn(\\min(R_{sec},I(X;V_{1}|U,V_{2},W))+I(X;W|U,V_{2})\\nonumber\\\\&\\hspace{0.34in}-I(X;Z|U,V_{2}))\n\\end{align}\nwhere (\\ref{e12L}) follows from the fact that $m_{X}$ is the bin index\nof the sequence $X^{n}$, (\\ref{e12Ltem1}) and (\\ref{e12Ltem2}) follow\nfrom the fact that conditioning reduces entropy, (\\ref{e22L}) follows from the fact that\n$H(m_{X})\\leq \\mbox{log}(2^{nH(X|U,V_{1},V_{2},W)})$, (\\ref{e32L}) follows from\nthe fact that $s_{U}$ is the bin index of the sequence $U^{n}$ and $b_{V_{2}}$\n is the bin index of the sequence $V_{2}^{n}$.\nThis completes the achievability part.\n\\end{enumerate}\n\n\\subsubsection{Converse}\nLet the coded outputs of the helper be denoted as $(J_{sec},J_{ins})$, where\n$J_{sec}$ denotes the coded output of the secure link, $J_{ins}$ denotes the\ncoded output of the insecure link, and the\noutput of Alice be denoted as $J_{x}$, i.e.,\n\\begin{align}\n(J_{sec},J_{ins})=f_{y}(Y^{n})\\qquad \\mbox{and} \\qquad J_{x}=f_{x}(X^{n},J_{sec},J_{ins})\n\\end{align}\nFirst note that, for noiseless reconstruction of the sequence $X^{n}$ at Bob, we have by Fano's inequality\n\\begin{align}\nH(X^{n}|J_{x},J_{sec},J_{ins},W^{n})&\\leq n\\epsilon_{n}\\label{fano2L}\n\\end{align}\n\nWe start by obtaining a lower bound on $R_{x}$, 
the rate of\nAlice, as follows,\n\\begin{align}\nnR_{x}&\\geq H(J_{x})\\\\\n&\\geq H(J_{x}|J_{sec},J_{ins},W^{n})\\\\\n&= H(X^{n},J_{x}|J_{sec},J_{ins},W^{n})-H(X^{n}|J_{x},J_{sec},J_{ins},W^{n})\\\\\n&\\geq H(X^{n},J_{x}|J_{sec},J_{ins},W^{n})-n\\epsilon_{n}\\label{eX1}\\\\\n&\\geq H(X^{n}|J_{sec},J_{ins},W^{n})-n\\epsilon_{n}\\\\\n&=\\sum_{i=1}^{n}H(X_{i}|X^{i-1},J_{sec},J_{ins},W^{n})-n\\epsilon_{n}\\\\\n&=\\sum_{i=1}^{n}H(X_{i}|J_{sec},J_{ins},X^{i-1},W_{i+1}^{n},W_{i})-n\\epsilon_{n}\\label{eXX1}\\\\\n&\\geq \\sum_{i=1}^{n}H(X_{i}|J_{sec},J_{ins},Y^{i-1},X^{i-1},W_{i+1}^{n},W_{i})-n\\epsilon_{n}\n\\end{align}\n\\begin{align}\n&=\\sum_{i=1}^{n}H(X_{i}|V_{1i},V_{2i},W_{i})-n\\epsilon_{n}\\label{eX2}\\\\\n&=nH(X|V_{1},V_{2},W)-n\\epsilon_{n}\n\\end{align}\nwhere (\\ref{eX1}) follows by (\\ref{fano2L}) and (\\ref{eXX1}) follows\nfrom the following Markov chain,\n\\begin{align}\nW^{i-1}\\rightarrow (J_{sec},J_{ins},X^{i-1},W_{i+1}^{n},W_{i})\\rightarrow X_{i}\n\\end{align}\nIn (\\ref{eX2}), we have defined\n\\begin{align}\nV_{1i}&=(J_{sec},Y^{i-1},X^{i-1},W_{i+1}^{n})\\\\\nV_{2i}&=J_{ins}\n\\end{align}\n\nWe next obtain a lower bound on $R_{sec}$, the rate of the secure link,\n\\begin{align}\nnR_{sec}&\\geq H(J_{sec})\\\\\n&\\geq H(J_{sec}|W^{n})\\\\\n&\\geq I(J_{sec};Y^{n}|W^{n})\\\\\n&=\n\\sum_{i=1}^{n}I(J_{sec},Y^{i-1},W^{i-1},W_{i+1}^{n};Y_{i}|W_{i})\\\\\n&= \\sum_{i=1}^{n}I(J_{sec},Y^{i-1},W_{i+1}^{n};Y_{i}|W_{i})\\label{2L1a}\\\\\n&= \\sum_{i=1}^{n}I(J_{sec},Y^{i-1},X^{i-1},W_{i+1}^{n};Y_{i}|W_{i})\\label{2L1b}\\\\\n&= \\sum_{i=1}^{n}I(V_{1i};Y_{i}|W_{i})\\\\\n&=nI(V_{1};Y|W)\n\\end{align}\nwhere (\\ref{2L1a}) and (\\ref{2L1b}) follow from the Markov chain\n\\begin{align}\n(X^{i-1},W^{i-1})\\rightarrow (J_{sec},Y^{i-1},W_{i+1}^{n}) \\rightarrow Y_{i}\n\\end{align}\n\nNext, we provide a lower bound on $R_{ins}$, the rate of the insecure\nlink,\n\\begin{align}\nnR_{ins}&\\geq H(J_{ins})\\\\\n&\\geq H(J_{ins}|J_{sec},W^{n})\\\\\n&\\geq H(J_{ins}|J_{sec},W^{n})-H(J_{ins}|J_{sec},Y^{n})\\\\\n&= I(J_{ins};Y^{n}|J_{sec})-I(J_{ins};W^{n}|J_{sec})\\\\\n&= \\sum_{i=1}^{n}I(J_{ins};Y_{i}|J_{sec},Y^{i-1},W_{i+1}^{n})-I(J_{ins};W_{i}|J_{sec},Y^{i-1},W_{i+1}^{n})\\label{2L2}\\\\\n&= \\sum_{i=1}^{n}I(J_{ins};Y_{i}|J_{sec},Y^{i-1},X^{i-1}W_{i+1}^{n})-I(J_{ins};W_{i}|J_{sec},Y^{i-1},X^{i-1},W_{i+1}^{n})\\label{2L3}\\\\\n&= \\sum_{i=1}^{n}I(V_{2i};Y_{i}|V_{1i})-I(V_{2i};W_{i}|V_{1i})\\\\\n&= \\sum_{i=1}^{n}I(V_{2i};Y_{i}|W_{i},V_{1i})\\\\\n&=nI(V_{2};Y|W,V_{1})\n\\end{align}\nwhere (\\ref{2L2}) follows from the Csiszar's sum lemma\n\\cite{Csiszar:book}, and (\\ref{2L3}) follows from the following Markov\nchain,\n\\begin{align}\nX^{i-1}\\rightarrow (J_{sec},Y^{i-1},W_{i+1}^{n}) \\rightarrow (J_{ins},Y_{i},W_{i})\n\\end{align}\n\nWe now have the main step, i.e., an upper bound on the equivocation\nrate of the 
eavesdropper,\n\\begin{align}\nH(X^{n}|J_{x},J_{ins},Z^{n})&=\nH(X^{n},Z^{n}|J_{x},J_{ins})-H(Z^{n}|J_{x},J_{ins})\\\\\n&=H(X^{n},W^{n},J_{sec},Z^{n}|J_{x},J_{ins})-H(Z^{n}|J_{x},J_{ins})\\nonumber\\\\&\\hspace{0.17in}-H(W^{n},J_{sec}|X^{n},Z^{n},J_{x},J_{ins})\\\\\n&=H(W^{n},J_{sec}|J_{x},J_{ins})+H(X^{n}|W^{n},J_{x},J_{sec},J_{ins})+H(Z^{n}|X^{n},W^{n})\\nonumber\\\\&\\hspace{0.17in}-H(Z^{n}|J_{x},J_{ins})-H(W^{n}|X^{n},Z^{n})-H(J_{sec}|X^{n},J_{x},J_{ins},W^{n})\\label{2L4}\\\\\n&=(H(W^{n}|J_{x},J_{ins})-H(Z^{n}|J_{x},J_{ins}))-(H(W^{n}|X^{n},Z^{n})\\nonumber\\\\&\\hspace{0.17in}\n-H(Z^{n}|X^{n},W^{n}))+I(J_{sec};X^{n}|J_{x},J_{ins},W^{n})\\nonumber\\\\&\\hspace{0.17in}+H(X^{n}|W^{n},J_{x},J_{sec},J_{ins})\\\\\n&\\leq(H(W^{n}|J_{x},J_{ins})-H(Z^{n}|J_{x},J_{ins}))-(H(W^{n}|X^{n},Z^{n})\\nonumber\\\\&\\hspace{0.17in}-H(Z^{n}|X^{n},W^{n}))\n+I(J_{sec};X^{n}|J_{x},J_{ins},W^{n})+n\\epsilon_{n}\\label{2L5}\n\\end{align}\nwhere (\\ref{2L4}) follows from the Markov chain $Y^{n}\\rightarrow\nX^{n}\\rightarrow (W^{n},Z^{n})$ and (\\ref{2L5}) follows by (\\ref{fano2L}).\nNow, using Csiszar's sum lemma \\cite{Csiszar:book}, we have\n\\begin{align}\nH(W^{n}|J_{x},J_{ins})-H(Z^{n}|J_{x},J_{ins})&=\\sum_{i=1}^{n}H(W_{i}|Z^{i-1},W_{i+1}^{n},J_{x},J_{ins})\\nonumber\\\\&\\hspace{0.17in}-\\sum_{i=1}^{n}H(Z_{i}|Z^{i-1},W_{i+1}^{n},J_{x},J_{ins})\\label{2L6}\n\\end{align}\nand we also have,\n\\begin{align}\nH(W^{n}|X^{n},Z^{n})-H(Z^{n}|X^{n},W^{n})&=H(W^{n}|X^{n})-H(Z^{n}|X^{n})\\\\\n&=\\sum_{i=1}^{n}H(W_{i}|X_{i})-\\sum_{i=1}^{n}H(Z_{i}|X_{i})\\label{2L7a}\\\\\n&=\\sum_{i=1}^{n}H(W_{i}|X_{i},Z^{i-1},W_{i+1}^{n},J_{x},J_{ins})\\nonumber\\\\&\\hspace{0.17in}-\\sum_{i=1}^{n}H(Z_{i}|X_{i},Z^{i-1},W_{i+1}^{n},J_{x},J_{ins})\\label{2L7b}\n\\end{align}\nwhere (\\ref{2L7a}) follows from the memorylessness of the sources and\n(\\ref{2L7b}) follows from the Markov chain\n$Y^{n}\\rightarrow X^{n}\\rightarrow (W^{n},Z^{n})$.\nNow, defining,\n\\begin{align}\nU_{i}&=(J_{x},Z^{i-1},W_{i+1}^{n})\n\\end{align}\nand \n\\begin{align}\nX=X_{Q},\\quad Y=Y_{Q},\\quad Z=Z_{Q}, \\quad W=W_{Q}\\\\\nV_{1}=(Q,V_{1Q}),\\quad V_{2}=(Q,V_{2Q}), \\quad U=(Q,U_{Q})\n\\end{align}\nwe have from (\\ref{2L6}) and (\\ref{2L7b}),\n\\begin{align}\nH(W^{n}|J_{x},J_{ins})-H(Z^{n}|J_{x},J_{ins})&=n(H(W|U,V_{2})-H(Z|U,V_{2}))\\label{2L8}\\\\\nH(W^{n}|X^{n},Z^{n})-H(Z^{n}|X^{n},W^{n})&=n(H(W|X,U,V_{2})-H(Z|X,U,V_{2}))\\label{2L9}\n\\end{align}\nNow consider,\n\\begin{align}\nI(J_{sec};X^{n}|J_{x},J_{ins},W^{n})&=\\sum_{i=1}^{n}I(J_{sec};X_{i}|J_{x},J_{ins},W^{n},X^{i-1})\\\\\n&=\\sum_{i=1}^{n}H(X_{i}|J_{x},J_{ins},W^{n},X^{i-1})-\\sum_{i=1}^{n}H(X_{i}|J_{x},J_{ins},J_{sec},W^{n},X^{i-1})\\\\\n&=\\sum_{i=1}^{n}H(X_{i}|J_{x},J_{ins},W_{i},X^{i-1},W_{i+1}^{n})\\nonumber\\\\&\\hspace{0.17in}-\\sum_{i=1}^{n}H(X_{i}|J_{x},J_{ins},J_{sec},W_{i},X^{i-1},W_{i+1}^{n})\\\\\n&=\\sum_{i=1}^{n}H(X_{i}|J_{x},J_{ins},W_{i},X^{i-1},Z^{i-1},W_{i+1}^{n})\\nonumber\\\\&\\hspace{0.17in}-\\sum_{i=1}^{n}H(X_{i}|J_{x},J_{ins},J_{sec},W_{i},X^{i-1},Z^{i-1},W_{i+1}^{n})\\\\\n&\\leq\n\\sum_{i=1}^{n}H(X_{i}|J_{x},J_{ins},W_{i},Z^{i-1},W_{i+1}^{n})\\nonumber\\\\&\\hspace{0.17in}-\\sum_{i=1}^{n}H(X_{i}|J_{x},J_{ins},J_{sec},W_{i},X^{i-1},Z^{i-1},W_{i+1}^{n})\\\\\n&\\leq 
\\sum_{i=1}^{n}H(X_{i}|J_{x},J_{ins},W_{i},Z^{i-1},W_{i+1}^{n})\\nonumber\\\\&\\hspace{0.17in}-\\sum_{i=1}^{n}H(X_{i}|J_{x},J_{ins},J_{sec},W_{i},X^{i-1},Y^{i-1},Z^{i-1},W_{i+1}^{n})\\\\\n&=\\sum_{i=1}^{n}H(X_{i}|U_{i},W_{i},V_{2i})-\\sum_{i=1}^{n}H(X_{i}|U_{i},W_{i},V_{2i},V_{1i})\\\\\n&=\\sum_{i=1}^{n}I(X_{i};V_{1i}|W_{i},U_{i},V_{2i})\\\\\n&=nI(X;V_{1}|W,U,V_{2})\n\\end{align}\nWe also have\n\\begin{align}\nI(J_{sec};X^{n}|J_{x},J_{ins},W^{n})&\\leq H(J_{sec})\\\\\n&\\leq nR_{sec}\n\\end{align}\nTherefore, we have\n\\begin{align}\nI(J_{sec};X^{n}|J_{x},J_{ins},W^{n})&\\leq n\\min(R_{sec},I(X;V_{1}|W,U,V_{2}))\\label{2L10}\n\\end{align}\nFinally, on substituting (\\ref{2L8}), (\\ref{2L9}) and (\\ref{2L10}) in (\\ref{2L5}), we arrive\nat\n\\begin{align}\\hspace{-0.1in}\nH(X^{n}|J_{x},J_{ins},Z^{n}) &\\leq n(\\min(R_{sec},I(X;V_{1}|W,U,V_{2}))+I(X;W|U,V_{2})-I(X;Z|U,V_{2})+\\epsilon_{n})\n\\end{align}\nThis implies\n\\begin{align}\n\\Delta&\\leq \\min(R_{sec},I(X;V_{1}|W,U,V_{2}))+I(X;W|U,V_{2})-I(X;Z|U,V_{2})\n\\end{align}\n\nAlso note that the following is a Markov chain,\n\\begin{align}\n(V_{1},V_{2})\\rightarrow Y \\rightarrow X \\rightarrow (W,Z)\n\\end{align}\nTherefore, the joint distribution of the involved random variables is\n\\begin{align}\np^{out}(x,y,w,z,v_{1},v_{2},u)&=p(x,y)p(w,z|x)p(v_{1},v_{2}|y)p(u|x,v_{1},v_{2},y)\n\\end{align}\nOn the other hand, the joint distribution of the involved random\nvariables in the achievability proof of Theorem $3$ is in the following form,\n\\begin{align}\np^{ach}(x,y,w,z,v_{1},v_{2},u)&=p(x,y)p(w,z|x)p(v_{1},v_{2}|y)p(u|x,v_{1},v_{2})\\label{innner2s}\n\\end{align}\ni.e., they satisfy the Markov chain $Y\\rightarrow\n(X,V_{1},V_{2})\\rightarrow U$. Now using the observation that\n$I(X;W|U,V_{1},V_{2})$ depends on the marginal\n$p(x,w,u,v_{1},v_{2})$ and $I(X;Z|U,V_{1},V_{2})$ depends on the\nmarginal $p(x,z,u,v_{1},v_{2})$ and using similar arguments used\nin the converse proof of Theorem $2$, it can be shown that it\nsuffices to consider distributions of the form given in\n(\\ref{innner2s}) when evaluating our outer bound. This completes\nthe proof of the converse part.\n\\bibliographystyle{unsrt}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Basic notation and information structure}\\label{sec-nota}\n\\section*{Problem formulation and solution}\nConsider a Markov chain $\\{\\eta(k),\\,\\,k=0,1,\\ldots \\}$ \nwith distribution and transition probabilities given respectively by \n\\begin{equation}\\nonumber\n\\begin{aligned}\ns_{ij}(k)&=\\text{Pr}(\\eta(k+1)=j|\\eta(k)=i), \n\\\\ \n\\upsilon_i(k)&=\\text{Pr}(\\eta(k)=i),\\;\\;i,j\\in\\mathbb{N}, k\\geq 0,\n\\end{aligned} \n\\end{equation}\nwhere $\\mathbb{N}=\\{1,\\ldots,N\\}$ is the state space. \nFor Markov chains and basic probability rules employed in this note, the reader is referred to \n\\cite{Cinlar75,Papoulis}.\nConsider the process $\\{\\theta(k), k=0,1,\\ldots,\\ell\\}$, defined as follows, based on the Markov chain in a finite time window,\n$$\\theta(k)=\\eta(\\ell-k), \\;\\; \\,k=0,1,\\ldots,\\ell.$$ \nAssume that a cluster observation is available \nin the form $\\{\\theta(0)\\in \\mathbb{C}_0\\}$, $\\mathbb{C}_0\\subseteq \\mathbb{N}$, as well as a cluster observation with anticipation \n$\\{\\theta(\\ell)\\in \\mathbb{C}_\\ell$\\}, \n$\\mathbb{C}_\\ell\\subseteq\\mathbb{N}$.\n$E_{\\text{o}}\\in\\mathcal{F}$ stands for the associated $\\sigma$-algebra. 
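Before developing closed-form expressions, note that every conditional quantity induced by $E_{\text{o}}$ can be approximated by direct Monte Carlo simulation, which provides a convenient check on the formulas derived below. A minimal sketch for a homogeneous chain (all numerical parameters here are illustrative assumptions, not from the note):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) parameters: a homogeneous 3-state chain.
N, ell = 3, 5
S = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5]])      # S[i, j] = Pr(eta(k+1)=j | eta(k)=i)
ups0 = np.array([0.5, 0.3, 0.2])     # distribution of eta(0)
C_ell, C_0 = {0, 1}, {1, 2}          # observed clusters (0-based states)

# Estimate pi_i(k) = Pr(theta(k)=i | eta(0) in C_ell, eta(ell) in C_0)
# by simulating the chain and keeping only runs consistent with E_o.
counts = np.zeros((ell + 1, N))
kept = 0
for _ in range(100_000):
    x = [rng.choice(N, p=ups0)]
    for _ in range(ell):
        x.append(rng.choice(N, p=S[x[-1]]))
    if x[0] in C_ell and x[-1] in C_0:
        kept += 1
        for k in range(ell + 1):
            counts[k, x[ell - k]] += 1   # theta(k) = eta(ell - k)
pi_hat = counts / kept
print(pi_hat)   # row k approximates the conditional law of theta(k)
```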
\n\nThe adopted process $\\theta$ and information structure may appear in real-world problems, including problems involving a first-in last-out queue. For example, suppose the Markov chain characterises the presence of certain irregularities in $\\ell$ items arriving at a store, and that the first item in goes through an inspection of these irregularities, so that we know the state of this item (when it enters the queue); following the queue rule, and counting the items when leaving the queue, we have information ``with anticipation'' on the state of the $\\ell$-th item. Note, via this example, that observation with anticipation does not necessarily require prediction of future events.\n\n\nIn this note we seek formulas \nfor the conditional transition probabilities and distributions \ndefined in \\eqref{eq-prob-theta}, based on the Markov chain parameters $\\upsilon_i$ and \\textcolor{black}{$s_{ij}(k)$} and the cluster observations. We shall need some additional notation.\nFor each $i,j\\in\\mathbb{N}$ and $k=0,1,\\ldots,\\ell-1$, we define\n\\begin{equation}\\label{eq-prob-theta}\n\\begin{aligned}\np_{ij}(k)&=\\text{Pr}(\\theta(k)=j\\,|\\,\\theta(k+1)=i\\,,\\,E_{\\text{o}}),\n\\\\\n\\pi_i(k)&=\\text{Pr}(\\theta(k)=i\\,|\\,E_{\\text{o}}), \n\\;\\;i,j\\in\\mathbb{N}, 0\\leq k\\leq \\ell.\n\\end{aligned}\n\\end{equation}\nWe write $\\mathbf{P}(k)$ to represent a matrix of dimension $N$ by $N$, whose components are $p_{ij}(k)$; the law of total probability then yields the Chapman-Kolmogorov-type recursion \n\\begin{equation}\\label{eq-chapman}\n\\pi(k)=\\mathbf{P}(k)^{T}\\pi(k+1).\n\\end{equation}\n\n\\noindent We denote, for $i\\in\\mathbb{N}$, and $k=1,\\ldots,\\ell-1$, \n$${e}= \\sum_{r\\in\\mathbb{C}_\\ell}\\sum_{j\\in\\mathbb{C}_{0}}[\\mathbf{S}^{\\ell}]_{rj}\\upsilon_r(\\textcolor{black}{0}),\\quad {g}_i(k)=\\sum_{r\\in\\mathbb{C}_{0}}[\\mathbf{S}^{k+1}]_{ir}.$$\n\n\n\\noindent\\textbf{Lemma 1.} \n\\textit{The following is valid for $1\\leq k\\leq \\ell-2$ and $i,j\\in\\mathbb{N}$; if \\textcolor{black}{$i$ is such that $\\upsilon_i(\\ell-k-1)>0$}, then\n\\begin{equation}\\label{eq-prob_lema3}\n p_{ij}(k)=\\begin{cases}\n (g_i(k))^{-1}\\sum_{r\\in\\mathbb{C}_0}[\\mathbf{S}^{k}]_{jr}s_{ij}(k),&\\, g_i(k)\\neq 0,\\\\\n \\quad\\quad\\quad \\text{arbitrary},&\\, {otherwise};\n \\end{cases}\n\\end{equation}\n\\textcolor{black}{if $\\upsilon_i(\\ell-k-1)=0$, then $p_{ij}(k)$ is arbitrary.} \n\\eqref{eq-prob_lema3} is also valid for $k=\\ell-1$, $i\\in\\mathbb{C}_{\\ell}$ and $j\\in\\mathbb{N}$, as well as for $k=0$, $i\\in\\mathbb{N}$ and $j\\in\\mathbb{C}_{0}$. %\nThe remaining cases regarding $p_{ij}(k)$ are: \n$p_{ij}(\\ell-1)$ \\textcolor{black}{is arbitrary} for $i\\notin\\mathbb{C}_{\\ell}$ and $j\\in\\mathbb{N}$; \n\\textcolor{black}{if $i\\in\\mathbb{C}_\\ell$ and there is \nno Markov state $\\eta(\\ell)\\in\\mathbb{C}_{0}$ that can be reached from $i$ \n(in the sense that $\\text{Pr}(\\eta(\\ell)\\in\\mathbb{C}_{0}\\,|\\,\\eta(0)=i)=0$), then $p_{ij}(\\ell-1)$ is arbitrary;} \n$p_{ij}(0)$ is arbitrary for $i\\in\\mathbb{N}$ and $j\\notin\\mathbb{C}_{0}$. Regarding $\\pi_{i}(\\ell)$, for $i\\in\\mathbb{C}_{\\ell}$, \n\\begin{equation}\\label{eq-pi-terminal}\n \\pi_{i}(\\ell)=\\begin{cases}\n e^{-1}\\displaystyle\\sum_{j\\in\\mathbb{C}_{0}} [\\mathbf{S}^{\\ell}]_{ij}\\upsilon_i(0),&\\, e\\neq 0,\\\\\n \\quad\\quad\\quad \\text{arbitrary},&\\, {otherwise}.\n \\end{cases}\n\\end{equation}\nFor $i\\notin\\mathbb{C}_{\\ell}$, $\\pi_{i}(\\ell)=0$. 
\nFinally, $\\pi(k)$, $1\\leq k\\leq \\ell-1$, is given by \\eqref{eq-chapman} and the formulas above.\n}\n\n\\textbf{Proof:} For $i\\in\\mathbb{N}$, by definition, we have \n\\begin{equation}\\nonumber\n\\begin{aligned}\n \\pi_i(\\ell)&=\\text{Pr}(\\theta(\\ell)=i\\,|\\,\\eta(0)\\in\\mathbb{C}_\\ell,\\,\\eta(\\ell)\\in\\mathbb{C}_0)\\\\\n &=\\text{Pr}(\\eta(0)=i\\,|\\,\\eta(0)\\in\\mathbb{C}_\\ell,\\,\\eta(\\ell)\\in\\mathbb{C}_0). \n \\end{aligned}\n\\end{equation}\nOf course, \n$\\pi_i(\\ell)=0$ whenever $i\\notin\\mathbb{C}_\\ell$, as it is given that \n$\\theta(\\ell)=\\eta(0)$ is in $\\mathbb{C}_\\ell$. If $i\\in\\mathbb{C}_\\ell$, and assuming that \n$\\text{Pr}\\big(\\eta(0)\\in\\mathbb{C}_\\ell\\,,\\,\\eta(\\ell)\\in\\mathbb{C}_0\\big)>0$, we may write \n\\begin{equation*}\n\\begin{aligned}\n\\pi_i(\\ell)&=\\text{Pr}\\big(\\eta(0)=i\\,|\\,\\eta(0)\\in\\mathbb{C}_\\ell\\,,\\,\\eta(\\ell)\\in\\mathbb{C}_0\\big)\\\\\n &=\\frac{\\text{Pr}\\big(\\eta(\\ell)\\in\\mathbb{C}_0\\,\\,,\\,\\,\\eta(0)=i\\big)}{\\text{Pr}\\big(\\eta(\\ell)\\in\\mathbb{C}_0\\,\\,,\\,\\,\\eta(0)\\in\\mathbb{C}_\\ell\\big)}\\\\\n &=\\frac{\\displaystyle\\sum_{j\\in\\mathbb{C}_{0}}\\text{Pr}\\big(\\eta(\\ell)=j\\,,\\,\\eta(0)=i\\big)}{\\displaystyle\\sum_{r\\in\\mathbb{C}_{\\ell}}\\sum_{j\\in\\mathbb{C}_0}\\text{Pr}\\big(\\eta(\\ell)=j\\,,\\,\\eta(0)=r\\big)}.\n\\end{aligned}\n\\end{equation*}\nNote that\n\\begin{equation}\\nonumber\n\\begin{aligned}\n\\text{Pr}(\\eta(\\ell)=j,\\,\\eta(0)=r)&=\\text{Pr}(\\eta(\\ell)=j|\\eta(0)=r)\\\\\n&\\cdot\\text{Pr}(\\eta(0)=r)=[\\mathbf{S}^{\\ell}]_{rj}\\upsilon_r(0), \n\\end{aligned}\n\\end{equation}\nand substituting in the above\n$$\\begin{aligned}\n\\pi_i(\\ell)&=\\displaystyle\\sum_{j\\in\\mathbb{C}_{0}}[\\mathbf{S}^{\\ell}]_{i,j}\\upsilon_i(0)\\left(\\displaystyle\\sum_{r\\in\\mathbb{C}_\\ell}\\sum_{j\\in\\mathbb{C}_{0}}[\\mathbf{S}^{\\ell}]_{r,j}\\upsilon_r(0)\\right)^{-1}\n\\\\& \n=e^{-1}\\displaystyle\\sum_{j\\in\\mathbb{C}_{0}} [\\mathbf{S}^{\\ell}]_{ij}\\upsilon_i(0).\n\\end{aligned}$$\nIf $i\\in\\mathbb{C}_\\ell$ and \n$\\text{Pr}\\big(\\eta(0)\\in\\mathbb{C}_\\ell\\,,\\,\\eta(\\ell)\\in\\mathbb{C}_0\\big)=0$ \n(in which case the inverse in the above equation does \nnot exist), then $\\pi_i(\\ell)$ \nis conditioned on an event of probability zero, and\ntherefore $\\pi_i(\\ell)$ is arbitrary,\nthus completing the demonstration of \\eqref{eq-pi-terminal}.\nNow we turn our attention to $\\mathbf{P}$.\n\\begin{equation}\\nonumber\n\\begin{aligned}\np_{ij}(\\ell-1)&=\\text{Pr}(\\theta(\\ell-1)=j|\\theta(\\ell)=i,\\eta(0)\\in\\mathbb{C}_\\ell,\\eta(\\ell)\\in\\mathbb{C}_0)\\\\\n&=\\text{Pr}(\\eta(1)=j|\\eta(0)=i,\\eta(0)\\in\\mathbb{C}_\\ell,\\eta(\\ell)\\in\\mathbb{C}_0).\n\\end{aligned}\n\\end{equation}\nIt is clear that, if $i\\notin\\mathbb{C}_\\ell$, then $p_{ij}(\\ell-1)$ \nis conditioned on an empty set, therefore an event of probability zero, making \n$p_{ij}(\\ell-1)$ arbitrary, for any $j\\in\\mathbb{N}$. 
\nIf $i\\in\\mathbb{C}_\\ell$ and the event $\\{\\eta(\\ell)\\in\\mathbb{C}_{0}\\,|\\,\\eta(0)=i\\}$ \nis not of probability zero, then using the total probability law we write\n\\begin{equation}\\nonumber\n\\begin{aligned}\np_{ij}(\\ell-1)\n&=\\frac{\\text{Pr}(\\eta(\\ell)\\in\\mathbb{C}_{0}\\,|\\,\\eta(1)=j,\\eta(0)=i)\\cdot s_{ij}(k)} {\\text{Pr}(\\eta(\\ell)\\in\\mathbb{C}_{0}\\,|\\,\\eta(0)=i)}\\\\\n&=\\frac{\\text{Pr}(\\eta(\\ell)\\in\\mathbb{C}_{0}\\,|\\,\\eta(1)=j)\\cdot s_{ij}(k)}\n {\\text{Pr}(\\eta(\\ell)\\in\\mathbb{C}_{0}\\,|\\,\\eta(0)=i)}\\\\\n&=\\displaystyle\\sum_{r\\in\\mathbb{C}_{0}}[\\mathbf{S}^{\\ell-1}]_{jr} s_{ij}(k) \\left(\\displaystyle\\sum_{r\\in\\mathbb{C}_{0}}[\\mathbf{S}^{\\ell}]_{ir}\\right)^{-1}, j\\in\\mathbb{N},\n\\end{aligned}\n\\end{equation}\nwhere the second equality is due to the Markov property. \nIf $i\\in\\mathbb{C}_\\ell$ and the event $\\{\\eta(\\ell)\\in\\mathbb{C}_{0}\\,|\\,\\eta(0)=i\\}$ \nis an empty set, then $p_{ij}(\\ell-1)$ is conditional on an event of zero probability, hence it is arbitrary. \nRegarding $p_{ij}(k)$ with $1\\leq k\\leq \\ell-2$ and $i,j\\in\\mathbb{N}$, \ndenoting \n$A = \\{\\eta(\\ell-k-1)=i\\}$ to shorten the display of \nthe next equation, \nassuming $\\text{Pr}(\\eta(\\ell)\\in\\mathbb{C}_{0},A)>0$ and using the Markov property, we write \n\\begin{equation}\\nonumber\n\\begin{aligned}\n& p_{ij}(k)=\\text{Pr}(\\eta(\\ell-k)=j|A,\\eta(0)\\in\\mathbb{C}_\\ell,\\eta(\\ell)\\in\\mathbb{C}_0)\\\\\n& \\;\\;=\\text{Pr}(\\eta(\\ell-k)=j|A,\\eta(\\ell)\\in\\mathbb{C}_{0})\\\\\n& \\;\\;=\\frac{\\text{Pr}(\\eta(\\ell-k)=j,\\eta(\\ell)\\in\\mathbb{C}_{0},A)}{\\text{Pr}(\\eta(\\ell)\\in\\mathbb{C}_{0},A)}\\\\\n& \\;\\;=\\frac{\\text{Pr}(\\eta(\\ell)\\in\\mathbb{C}_{0}|\\eta(\\ell-k)=j,A)\\text{Pr}(\\eta(\\ell-k)=j|A)P(A)} {\\text{Pr}(\\eta(\\ell)\\in\\mathbb{C}_{0}|A)P(A)}\\\\\n& \\;\\;=\\frac{\\text{Pr}(\\eta(\\ell)\\in\\mathbb{C}_{0}|\\eta(\\ell-k)=j)\\text{Pr}(\\eta(\\ell-k)=j|A)} {\\text{Pr}(\\eta(\\ell)\\in\\mathbb{C}_{0}|A)},\n\\end{aligned}\n\\end{equation}\nyielding\n$$\\begin{aligned}\np_{ij}(k)&=\\sum_{r\\in\\mathbb{C}_{0}}[\\mathbf{S}^{k}]_{jr} s_{ij}(k) \\left(\\displaystyle\\sum_{r\\in\\mathbb{C}_{0}}[\\mathbf{S}^{k+1}]_{ir}\\right)^{-1}\n\\\\&\n=(g_i(k))^{-1}\\sum_{r\\in\\mathbb{C}_0}[\\mathbf{S}^{k}]_{jr}s_{ij}(k).\n\\end{aligned}\n$$\n\\textcolor{black}{Note that the requirement $\\text{Pr}(\\eta(\\ell)\\in\\mathbb{C}_{0},A)>0$ is equivalent to saying: (i) considering the Markov chain $\\{\\eta_k, k\\geq 0\\}$, \nthe set $\\mathbb{C}_{0}$ is reachable from state $i$ in $k+1$ steps \n(that is, $g_i(k)>0$; otherwise $g_i(k)=0$), and (ii) $\\text{Pr}(\\eta(\\ell-k-1)=i)=\\upsilon_i(\\ell-k-1)>0$. 
If any of (i) or (ii) is false, or both are false, then $p_{ij}(k)$ is conditional on an event of probability zero, \nhence it is \\textcolor{black}{arbitrary}.}\nIt only remains to find the formula for $p_{ij}(0)$.\n\\begin{equation}\\nonumber\n \\begin{aligned}\np_{ij}(0)&=\\text{Pr}(\\theta(0)=j|\\theta(1)=i,\\eta(0)\\in\\mathbb{C}_\\ell,\\eta(\\ell)\\in\\mathbb{C}_0)\\\\\n&=\\text{Pr}(\\eta(\\ell)=j|\\eta(\\ell-1)=i,\\eta(0)\\in\\mathbb{C}_\\ell,\\eta(\\ell)\\in\\mathbb{C}_0).\n \\end{aligned}\n\\end{equation}\nIf $j\\notin\\mathbb{C}_{0}$ \nthen $p_{ij}(0)=0$ because it is given \nthat $\\eta(\\ell)\\in\\mathbb{C}_{0}$; otherwise, we have\n\\begin{equation}\\nonumber\n\\begin{aligned}\np_{ij}(0)&=\\text{Pr}\\big(\\eta(\\ell)=j\\,|\\,\\eta(\\ell-1)=i\\,,\\,\\eta(\\ell)\\in\\mathbb{C}_0\\big)\\\\\n\\end{aligned}\n\\end{equation}\nso that, when the conditional event is not of probability zero, \n\\begin{equation}\\nonumber\n\\begin{aligned}\np_{ij}(0)&=\\frac{\\text{Pr}\\big(\\eta(\\ell)=j,\\eta(\\ell)\\in\\mathbb{C}_0\\,|\\,\\eta(\\ell-1)=i\\big)}{\\text{Pr}\\big(\\eta(\\ell)\\in\\mathbb{C}_0\\,|\\,\\eta(\\ell-1)=i\\big)}\\\\\n&=\\frac{\\text{Pr}\\big(\\eta(\\ell)=j\\,|\\,\\eta(\\ell-1)=i\\big)}{\\text{Pr}\\big(\\eta(\\ell)\\in\\mathbb{C}_0\\,|\\,\\eta(\\ell-1)=i\\big)}\\\\\n&=s_{ij}(k) \\left(\\displaystyle\\sum_{r\\in\\mathbb{C}_{0}}s_{ir}\\right)^{-1},\\forall\\,i\\in\\mathbb{N}\\,\\,\\text{and}\\,\\,j\\in\\mathbb{C}_{0}, \n\\end{aligned}\n\\end{equation}\nand, when the conditional event $\\{\\eta(\\ell-1)=i,\\,\\eta(\\ell)\\in\\mathbb{C}_0\\}$ is of probability zero \n(in which case the inverse in the above equation does not exist), then $p_{ij}(0)$ is \\textcolor{black}{arbitrary}.\n \nFinally, $\\pi(k)$, $1\\leq k\\leq \\ell-1$, is given by \\eqref{eq-chapman} and the formulas above.\n\\hfill Q.E.D.\n\n\\medskip\n{\\it Remark 1.} All arbitrary values in Lemma 1 can be set to zero. This is a suitable choice \nin some cases, as in \\cite{Daniel-duality,PachasSubm}, where a Riccati-like equation is computed for every $i,k$ such that $\\pi_i(k)>0$, so that choosing $\\pi_i(k)=0$ avoids unnecessary computations. \n\n\\medskip\n{\\it Remark 2.} In view of \\eqref{eq-prob_lema3},\nthe transition probabilities of the process $\\theta$ depend on $k$ even if the Markov chain is time-homogeneous. \nFor $\\mathbf{P}$ to be independent of $k$, it would be necessary that the Markov chain is time-homogeneous \\emph{and} there is no observation of $\\theta(0)$, that is, $\\mathbb{C}_0=\\mathbb{N}$; in this case, \n\\eqref{eq-prob_lema3} reduces to $p_{ij}=s_{ij}$.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\n\n\\section{Introduction}\\label{sec:introduction} Visual odometry\n(VO)~\\cite{nister2004visual}, commonly referred to as ego-motion\nestimation, is a fundamental capability that enables robots to reliably\nnavigate their immediate environment. With the widespread adoption of\ncameras in various robotics applications, there has been an evolution\nin visual odometry algorithms with a wide set of variants including\nmonocular VO~\\cite{nister2004visual,konolige2010large}, stereo\nVO~\\cite{howard2008real,kitt2010visual} and even non-overlapping\n\\textit{n}-camera VO~\\cite{hee2013motion,kneip2013using}. 
Furthermore, each of these algorithms has been custom-tailored for specific camera optics (pinhole, fisheye, catadioptric) and for the range of motions observed by these cameras mounted on various platforms~\cite{scaramuzza20111}.

\begin{figure}[t]
  \includegraphics[width=\columnwidth]{graphics/mdn-vae.pdf}
  \caption{\textbf{Visual Ego-motion Learning Architecture: } We
    propose a visual ego-motion learning architecture that maps optical
    flow vectors (derived from feature tracking in an image sequence) to
    an ego-motion density estimate via a Mixture Density Network (MDN). By
    modeling the architecture as a Conditional Variational Autoencoder
    (C-VAE), our model is able to provide introspective reasoning and
    prediction for scene-flow conditioned on the ego-motion estimate and
    input feature location.}
  \label{fig:egomotion-architecture}
  \vspace{-5mm}
\end{figure}

With increasing levels of model specification for each domain, we expect these algorithms to perform well within their target domain while generalizing poorly across varied optics and camera configurations. Moreover, the strong dependence of these algorithms on their model specification limits the ability to actively monitor and optimize their intrinsic and extrinsic model parameters in an online fashion. In addition to these concerns, autonomous systems today use several sensors with varied intrinsic and extrinsic properties that make system characterization tedious. Furthermore, these algorithms and their parameters are fine-tuned on specific datasets while providing few guarantees on their generalization performance on new data.

To this end, we propose a fully trainable architecture for visual odometry estimation in generic cameras with varied camera optics (\textit{pinhole}, \textit{fisheye} and \textit{catadioptric} lenses). In this work, we take a geometric approach by posing the regression task of ego-motion as a density estimation problem. By tracking salient features in the image induced by the ego-motion (via Kanade-Lucas-Tomasi/KLT feature tracking), we learn the mapping from these tracked flow features to a probability density over the range of likely ego-motions. We make the following contributions:
\begin{itemize}
\item \textbf{A fully trainable ego-motion estimator}: We introduce a fully-differentiable density estimation model for visual ego-motion estimation that robustly captures the inherent ambiguity and uncertainty in relative camera pose estimation (see Figure~\ref{fig:egomotion-architecture}).
\item \textbf{Ego-motion for generic camera optics}: Without imposing any constraints on the type of camera optics, we propose an approach that is able to recover ego-motions for a variety of camera models including \textit{pinhole}, \textit{fisheye} and \textit{catadioptric} lenses.
\item \textbf{Bootstrapped ego-motion training and refinement}: We propose a bootstrapping mechanism for autonomous systems whereby a robot self-supervises the ego-motion regression task. By fusing information from other sensor sources including GPS and INS (Inertial Navigation Systems), these indirectly inferred trajectory estimates serve as ground-truth target poses/outputs for the aforementioned regression task.
Any newly introduced camera sensor can then leverage this information to learn to provide visual ego-motion estimates without relying on an externally provided ground truth source.
\item \textbf{Introspective reasoning via scene-flow predictions}: We develop a generative model for optical flow prediction that can be utilized to perform outlier rejection and scene-flow reasoning.
\end{itemize}
Through experiments, we provide a thorough analysis of ego-motion recovery from a variety of camera models including pinhole, fisheye and catadioptric cameras. We expect our general-purpose approach to be robust and easily tunable for accuracy during operation. We illustrate the robustness and generality of our approach and provide our findings in Section~\ref{sec:experiments}.

\section{Related Work}\label{sec:related-work} Recovering relative camera poses from a set of images is a well-studied problem in the context of Structure-from-Motion (SfM)~\cite{triggs1999bundle,hartley2003multiple}. SfM is usually treated as a non-linear optimization problem, where the camera poses (extrinsics), camera model parameters (intrinsics), and the 3D scene structure are jointly optimized via non-linear least-squares~\cite{triggs1999bundle}.

\textbf{Unconstrained VO}: Visual odometry, unlike incremental Structure-from-Motion, focuses only on determining the 3D camera pose from sequential images or video imagery observed by a monocular camera. Most of the early work in VO was done primarily to determine vehicle ego-motion~\cite{moravec1980obstacle,matthies1989dynamic,olson2000robust} in 6-DOF, notably for the Mars planetary rovers. Over the years several variants of the VO algorithm were proposed, leading up to the work of Nister et al.~\cite{nister2004visual}, where the authors proposed the first real-time and scalable VO algorithm. In their work, they developed a 5-point minimal solver coupled with a RANSAC-based outlier rejection scheme~\cite{fischler1981random} that is still extensively used today. Other researchers~\cite{corke2004omnidirectional} have extended this work to various camera types including catadioptric and fisheye lenses.

\textbf{Constrained VO}: While the classical VO objective does not impose any constraints regarding the underlying motion manifold or camera model, it nevertheless contains several failure modes that make it especially difficult to ensure robust operation under arbitrary scene and lighting conditions. As a result, imposing ego-motion constraints has been shown to considerably improve accuracy, robustness, and run-time performance. One particularly popular strategy for VO estimation in vehicles is to enforce planar homographies when matching features on the ground plane~\cite{liang2002visual,ke2003transforming}, thereby being able to robustly recover both relative orientation and absolute scale. For example, Scaramuzza et al.~\cite{scaramuzza20111,scaramuzza2009real} introduced a novel 1-point solver by imposing the vehicle's non-holonomic motion constraints, thereby speeding up VO estimation to rates of up to 400~Hz.

\textbf{Data-driven VO}: While several model-based methods have been developed specifically for the VO problem, a few have attempted to solve it with a data-driven approach. Typical approaches have leveraged dimensionality-reduction techniques, learning a reduced-dimensional subspace of the optical flow vectors induced by the ego-motion~\cite{roberts2009learning}.
In~\\cite{ciarfuglia2014evaluation},\nCiarfuglia et al. employ Support Vector Regression (SVR) to recover\nvehicle egomotion (3-DOF). The authors further build upon their\nprevious result by swapping out the SVR module with an end-to-end\ntrainable convolutional neural network~\\cite{costante2016exploring}\nwhile showing improvements in the overall performance on\nthe KITTI odometry benchmark~\\cite{Geiger2012CVPR}. Recently, Clarke\net al.~\\cite{wen2016vinet} introduced a visual-inertial odometry\nsolution that takes advantage of a neural-network architecture to learn\na mapping from raw inertial measurements and sequential imagery to\n6-DOF pose estimates. By posing visual-inertial odometry (VIO) as a\nsequence-to-sequence learning problem, they developed a neural network\narchitecture that combined convolutional neural networks with Long\nShort-Term Units (LSTMs) to fuse the independent sensor measurements\ninto a reliable 6-DOF pose estimate for ego-motion. Our work closely\nrelates to these data-driven approaches that have recently been\ndeveloped. We provide a qualitative comparison of how our approach is\npositioned within the visual ego-motion estimation landscape in\nTable~\\ref{table:vo-landscape}. \n\n\n\\begin{table}[h]\n \\centering\n \\scriptsize\n \\rowcolors{2}{gray!25}{white}\n {\\renewcommand{\\arraystretch}{1}\n {\\setlength{\\tabcolsep}{0.2mm}\n \\begin{tabular}{lM{1cm}M{1cm}M{1cm}M{1.25cm}}\n \\toprule\n \\centering\n \n \\textbf{Method Type} & \\textbf{Varied Optics} & \\textbf{Model Free} & \\textbf{Robust} \n & \\textbf{Self Supervised} \\\\ \\midrule\n \\textit{Traditional VO~\\cite{scaramuzza2011visual}}\n & $\\text{\\ding{55}}$ & $\\text{\\ding{55}}$ & $\\text{\\ding{51}}$ & $\\text{\\ding{55}}$ \\\\ \n \\textit{End-to-end VO~\\cite{costante2016exploring,wen2016vinet}}\n & $\\text{\\ding{55}}$ & $\\text{\\ding{51}}$ & $\\text{\\ding{51}}$ & $\\text{\\ding{55}}$ \\\\ \n \n \\textit{This work}\n & $\\text{\\ding{51}}$ & $\\text{\\ding{51}}$ & $\\text{\\ding{51}}$ & $\\text{\\ding{51}}$ \\\\\n \\bottomrule\n \\end{tabular}}}\n\\caption{\\textbf{Visual odometry landscape}: A qualitative comparison of how our approach is\n positioned amongst existing solutions to ego-motion\n estimation.}\n\\label{table:vo-landscape}\\vspace{-5mm}\n\\end{table}\n\n\n\n\n\n\\section{Ego-motion regression}\\label{sec:procedure}\n\nAs with most ego-motion estimation solutions, it is imperative to\ndetermine the minimal parameterization of the underlying motion\nmanifold. In certain restricted scene structures or motion manifolds,\nseveral variants of ego-motion estimation are\nproposed~\\cite{scaramuzza20111,liang2002visual,ke2003transforming,scaramuzza2009real}.\nHowever, we consider the case of modeling cameras with varied optics\nand hence are interested in determining the full range of ego-motion,\noften restricted, that induces the pixel-level optical flow. This\nallows the freedom to model various unconstrained and partially\nconstrained motions that typically affect the overall robustness of\nexisting ego-motion algorithms. While model-based approaches have\nshown tremendous progress in accuracy, robustness, and run-time\nperformance, a few recent data-driven approaches have been shown to\nproduce equally compelling\nresults~\\cite{costante2016exploring,wen2016vinet,konda2015learning}. 
An adaptive and trainable solution for relative pose estimation or ego-motion can be especially advantageous for several reasons: (i) a general-purpose, end-to-end trainable model architecture applies to a variety of camera optics including pinhole, fisheye, and catadioptric lenses; (ii) it permits simultaneous and continuous optimization over both ego-motion estimation and the implicitly modeled camera parameters (intrinsics and extrinsics); and (iii) it is amenable to joint reasoning over resource-aware computation and accuracy within the same architecture. We envision that such an approach is especially beneficial in the context of bootstrapped (or weakly-supervised) learning in robots, where the supervision in ego-motion estimation for a particular camera can be obtained from the fusion of measurements from other robot sensors (GPS, wheel encoders etc.).

Our approach is motivated by previous minimally parameterized models~\cite{scaramuzza20111,scaramuzza2009real} that are able to recover ego-motion from a \textit{single tracked feature}. We find this representation especially appealing due to the simplicity and flexibility of \textit{pixel-level} computation. Despite the reduced complexity of the input space for the mapping problem, recovering the full 6-DOF ego-motion is ill-posed due to the inherently under-constrained system. However, it has been previously shown that under non-holonomic vehicle motion, camera ego-motion may be fully recovered up to a sufficient degree of accuracy using a single point~\cite{scaramuzza20111,scaramuzza2009real}.

We now focus on the specifics of the ego-motion regression objective. Due to the under-constrained nature of the prescribed regression problem, the pose estimation is modeled as a density estimation problem over the range of possible ego-motions\footnote{\scriptsize Although the parametrization is maintained as $SE(3)$, it is important to realize that the nature of most autonomous car datasets involves a lower-dimensional ($SE(2)$) motion manifold}, conditioned on the input flow features. It is important to note that the output of the proposed model is a density estimate $p(\hat{\mathbf{z}}_{t-1,t}\vert\mathbf{x}_{t-1,t})$ for every feature tracked between subsequent frames.

\subsection{Density estimation for ego-motion}\label{subsec:density-egomotion}
In typical associative mapping problems, the joint probability density $p(\mathbf{x},\mathbf{z})$ is decomposed into the product of two terms: (i) $p(\mathbf{z} \vert \mathbf{x})$, the conditional density of the target pose $\mathbf{z} \in SE(3)$ conditioned on the input feature correspondence $\mathbf{x} = (\bm{x}, \Delta\bm{x})$ obtained from sparse optical flow (KLT)~\cite{birchfield2007klt}; and (ii) $p(\mathbf{x})$, the unconditional density of the input data $\mathbf{x}$. While we are particularly interested in the first term, $p(\mathbf{z}\vert\mathbf{x})$, which predicts the range of possible values for $\mathbf{z}$ given new values of $\mathbf{x}$, the density $p(\mathbf{x}) = \int p(\mathbf{x},\mathbf{z})\, d\mathbf{z}$ provides a measure of how well the prediction is captured by the trained model.

The critical component in estimating the ego-motion belief is the ability to accurately predict the conditional probability distribution $p(\mathbf{z}\vert\mathbf{x})$ of the pose estimates induced by the given input feature $\bm{x}$ and flow $\Delta\bm{x}$.
Due to its powerful and rich modeling capabilities, we use a \textit{Mixture Density Network} (MDN)~\cite{bishop1994mixture} to parametrize the conditional density estimate. MDNs are a class of end-to-end trainable (fully-differentiable) density estimation techniques that leverage conventional neural networks to regress the parameters of a generative model such as a finite Gaussian Mixture Model (GMM). The powerful representational capacity of neural networks, coupled with the rich probabilistic modeling that GMMs admit, allows us to model the multi-valued or multi-modal beliefs that typically arise in inverse problems such as visual ego-motion.

For each of the $F$ input flow features $\mathbf{x}_i$ extracted via KLT, the conditional probability density of the target pose data $\mathbf{z}_i$ (Eqn~\ref{eq:cdf}) is represented as a convex combination of $K$ Gaussian components,
\begin{align}\label{eq:cdf}
  p(\mathbf{z}_i \mid \mathbf{x}_i) = \sum_{k=1}^{K} \pi_k(\mathbf{x}_i) \mathcal{N}(\mathbf{z}_i \mid \mu_k(\mathbf{x}_i), \sigma_{k}^2(\mathbf{x}_i))
\end{align}
where $\pi_k(\mathbf{x})$ is the mixing coefficient of the $k$-th component, as in a typical GMM. The Gaussian kernels are parameterized by their mean vector $\mu_k(\mathbf{x})$ and diagonal covariance $\sigma_{k}(\mathbf{x})$. It is important to note that the parameters $\pi_k(\mathbf{x})$, $\mu_k(\mathbf{x})$, and $\sigma_k(\mathbf{x})$ are general and continuous functions of $\mathbf{x}$. This allows us to model these parameters as the outputs ($a^{\pi}$, $a^{\mu}$, $a^{\sigma}$) of a conventional neural network which takes $\mathbf{x}$ as its input. Following~\cite{bishop1994mixture}, the outputs of the neural network are constrained as follows: (i) the mixing coefficients must sum to 1, i.e. $\sum_{k=1}^{K}\pi_k(\mathbf{x}) = 1$ with $0 \leq \pi_k(\mathbf{x}) \leq 1$, which is accomplished via the \textit{softmax} activation in Eqn~\ref{eq:mdn-pi}; (ii) the variances $\sigma_k(\mathbf{x})$ are kept strictly positive via the \textit{exponential} activation (Eqn~\ref{eq:mdn-sigma}).
\begin{align}
  &\pi_k(\mathbf{x}) = \frac{\exp(a_k^\pi)} { \sum_{l=1}^{K} \exp(a_l^\pi) }\label{eq:mdn-pi} \\
  &\sigma_k(\mathbf{x}) = \exp(a_k^\sigma), \hspace{4mm}
  \mu_{k}(\mathbf{x}) = a_{k}^\mu \label{eq:mdn-sigma} \\
  \mathcal{L_{MDN}} = -
  &\sum_{n=1}^{F} \ln \Bigg\{ \sum_{k=1}^{K} \pi_k(\mathbf{x}_n) \mathcal{N}(\mathbf{z}_n \mid
  \mu_k(\mathbf{x}_n), \sigma_{k}^2(\mathbf{x}_n)) \Bigg\} \label{eq:nll}
\end{align}
The proposed model is learned end-to-end by maximizing the data log-likelihood, or alternatively minimizing the negative log-likelihood (denoted as $\mathcal{L_{MDN}}$ in Eqn~\ref{eq:nll}), given the $F$ input feature tracks ($\mathbf{x}_1\dots\mathbf{x}_F$) and the expected ego-motion estimate $\mathbf{z}$. The resulting ego-motion density estimates $p(\mathbf{z}_i\vert\mathbf{x}_i)$ obtained from the individual flow vectors $\mathbf{x}_i$ are then fused by taking the product of their densities. However, to maintain tractability of density products, only the mean and covariance corresponding to the largest mixture coefficient (i.e. the most likely mixture mode) of each feature are considered for subsequent trajectory optimization (see Eqn~\ref{eq:density-products}).
\begin{align}\label{eq:density-products}
  p(\mathbf{z} \vert \mathbf{x}) \simeq \prod_{i=1}^{F} \max_{k} \Big\{\pi_k(\mathbf{x}_i)
  \mathcal{N}(\mathbf{z} \mid \mu_k(\mathbf{x}_i), \sigma_{k}^2(\mathbf{x}_i)) \Big\}
\end{align}
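To make the output-layer constraints and losses in Eqns~\ref{eq:mdn-pi}--\ref{eq:density-products} concrete, the following minimal sketch (NumPy; shapes and names are illustrative and not our actual Keras/Tensorflow implementation) computes the mixture parameters, the per-feature negative log-likelihood, and the product-of-Gaussians fusion of the selected modes:
\begin{verbatim}
import numpy as np

def mdn_params(a_pi, a_mu, a_sigma):
    # a_pi: (K,), a_mu and a_sigma: (K, 6) network outputs
    pi = np.exp(a_pi - a_pi.max())
    pi /= pi.sum()                    # softmax (Eqn 2)
    return pi, a_mu, np.exp(a_sigma)  # exp keeps sigma > 0 (Eqn 3)

def mdn_nll(z, a_pi, a_mu, a_sigma):
    # negative log-likelihood of one target z (6,) under the mixture
    pi, mu, sigma = mdn_params(a_pi, a_mu, a_sigma)
    quad = ((z - mu) ** 2 / sigma ** 2).sum(axis=1)
    logn = -0.5 * quad - np.log(sigma).sum(axis=1) \
           - 0.5 * z.size * np.log(2.0 * np.pi)
    return -np.log((pi * np.exp(logn)).sum() + 1e-12)

def fuse_modes(mus, sigmas):
    # product of F diagonal Gaussians (Eqn 5), one mode per feature;
    # mus, sigmas: (F, 6) arrays of the most likely modes
    prec = 1.0 / sigmas ** 2
    var = 1.0 / prec.sum(axis=0)
    return var * (prec * mus).sum(axis=0), np.sqrt(var)
\end{verbatim}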
\begin{figure}[!t]
  \centering
  \includegraphics[width=\columnwidth]{graphics/losses-graphic.pdf}
  \caption{\textbf{Windowed trajectory optimization}: An illustration
    of the losses introduced for training frame-to-frame ego-motion
    (\textit{local}) and windowed ego-motion (\textit{global}) by
    compounding the poses determined from each of the individual
    frame-to-frame measurements.}
  \label{fig:losses-illustration}
  \vspace{-6mm}
\end{figure}

\subsection{Trajectory optimization}\label{sec:trajectory-optimization} While minimizing the MDN loss ($\mathcal{L}_{MDN}$) as described above provides a reasonable regressor for ego-motion estimation, optimizing frame-to-frame measurements alone does not ensure long-term consistency in the ego-motion trajectories obtained by integrating these regressed estimates. As one expects, the integrated trajectories are sensitive to even negligible biases in the ego-motion regressor.

\textbf{Two-stage optimization}: To circumvent the aforementioned issue, we introduce a second optimization stage that jointly minimizes the \textit{local} objective ($\mathcal{L}_{MDN}$) with a \textit{global} objective that minimizes the error incurred between the overall trajectory and the trajectory obtained by integrating the regressed pose estimates from the \textit{local} optimization. This gives the \textit{global} optimization stage a warm start, with an almost correct initial guess for the network parameters.

As seen in Eqn~\ref{eq:losses}, $\mathcal{L}_{TRAJ}$ pertains to the overall trajectory error incurred by integrating the individual regressed estimates over a batched window (we typically consider 200 to 1000 frames). This allows us to fine-tune the regressor to predict valid estimates that integrate towards accurate long-term ego-motion trajectories. After the first stage alone, the model is able to roughly learn the curved trajectory path; however, it is not able to make accurate predictions when integrated over longer time-windows (due to the lack of the \textit{global} objective loss term in Stage 1). Figure~\ref{fig:losses-illustration} provides a high-level overview of the input-output relationships of the training procedure, including the various network losses incorporated in the ego-motion encoder/regressor. For illustrative purposes, we refer the reader to Figure~\ref{fig:two-stage-illustration}, where we validate this two-stage approach on a simulated dataset~\cite{Zhang2016ICRA}.

In Eqn~\ref{eq:losses}, $\hat{\mathbf{z}}_{t-1,t}$ is the frame-to-frame ego-motion estimate and the regression target/output of the MDN function $F$, where $F:\mathbf{x} \mapsto \Big(\mu({\mathbf{x}}_{t-1,t}), \sigma(\mathbf{x}_{t-1,t}), \pi(\mathbf{x}_{t-1,t})\Big)$.
$\\hat{\\mathbf{z}}_{1,t}$ is the overall trajectory\npredicted by integrating the individually regressed frame-to-frame\nego-motion estimates and is defined by $\\hat{\\mathbf{z}}_{1,t} = \\hat{\\mathbf{z}}_{1,2} \\oplus \\hat{\\mathbf{z}}_{2,3} \\oplus\n\\dots \\oplus \\hat{\\mathbf{z}}_{t-1,t}$.\n\\begin{equation}\n \\begin{aligned}\n \\mathcal{L_{\\text{ENC}}} =\n \\underbrace{\\sum_{t} \\mathcal{L}^t_{{\\scriptsize MDN}}\\Big( F(\\mathbf{x}),\n \\mathbf{z}_{{\\scriptsize t-1,t}}\\Big)}_{\\text{MDN Loss}} + \n \\underbrace{\\sum_{t} \\mathcal{L}^t_{{\\scriptsize TRAJ}}(\\mathbf{z}_{1,t} \\ominus\n \\hat{\\mathbf{z}}_{1,t})}_{\\text{Overall Trajectory Loss}}\n \\end{aligned}\n \\label{eq:losses} \n\\end{equation}\n\\vspace{-5mm} \n\n\n\\begin{figure}[!t]\n \\centering\n \\centering \n {\\renewcommand{\\arraystretch}{0.4}\n {\\setlength{\\tabcolsep}{0.2mm}\n \\begin{tabular}{cccc}\n \n \n \n \n \\includegraphics[width=0.25\\columnwidth]{graphics\/learning\/rpg\/map_02.pdf} \n &\\includegraphics[width=0.25\\columnwidth]{graphics\/learning\/rpg\/map_04.pdf}\n &\\includegraphics[width=0.25\\columnwidth]{graphics\/learning\/rpg\/map_08.pdf}\n &\\includegraphics[width=0.25\\columnwidth]{graphics\/learning\/rpg\/map_18.pdf}\\\\\n {\\scriptsize \\textbf{Stage 1}} & {\\scriptsize \\textbf{Stage 2}}\n &{\\scriptsize \\textbf{Stage 2}} & {\\scriptsize \\textbf{Stage 2}}\\\\\n {\\scriptsize (Final)} & {\\scriptsize (Epoch 4)}\n &{\\scriptsize (Epoch 8)} & {\\scriptsize (Epoch 18)}\n \\end{tabular}}}\n \\caption{\\textbf{Two-stage Optimization}: An illustration of the\n two-stage optimization procedure. The \\textit{first} column shows\n the final solution after the first stage. Despite the\n minimization, the integrated trajectory is clearly biased and\n poorly matches the expected result. The \\textit{second},\n \\textit{third} and \\textit{fourth} column shows the gradual improvement of the\n second stage (global\n minimization) and matches the expected ground truth trajectory\n better (i.e. estimates the regressor biases better).}\n \\label{fig:two-stage-illustration}\n \\vspace{-5mm}\n\\end{figure}\n\n\n\n\n\n\n\n\n\\subsection{Bootstrapped learning for ego-motion\nestimation}\\label{sec:proc-bootstrapped} Typical robot navigation\nsystems consider the fusion of visual odometry estimates with other\nmodalities including estimates derived from wheel encoders, IMUs, GPS\netc. Considering odometry estimates (for e.g. from wheel encoders)\nas-is, the uncertainties in open-loop chains grow in\nan unbounded manner. Furthermore, relative pose estimation may also be\ninherently biased due to calibration errors that eventually contribute\nto the overall error incurred. GPS, despite being noise-ridden,\nprovides an absolute sensor reference measurement that is especially\ncomplementary to the open-loop odometry chain maintained with odometry\nestimates. 
\subsection{Bootstrapped learning for ego-motion estimation}\label{sec:proc-bootstrapped} Typical robot navigation systems consider the fusion of visual odometry estimates with other modalities, including estimates derived from wheel encoders, IMUs, GPS etc. When odometry estimates (e.g. from wheel encoders) are considered as-is, the uncertainties in the open-loop chain grow in an unbounded manner. Furthermore, relative pose estimation may also be inherently biased due to calibration errors that eventually contribute to the overall error incurred. GPS, despite being noise-ridden, provides an absolute sensor reference measurement that is especially complementary to the open-loop chain maintained with odometry estimates. The probabilistic fusion of these two relatively uncorrelated measurement modalities allows us to recover a sufficiently accurate trajectory estimate that can be directly used as ground truth data $\mathbf{z}$ (in Figure~\ref{fig:egomotion-regression-illustration}) for our supervised regression problem.

\begin{figure}[!b]
  \centering
  \includegraphics[width=\columnwidth]{graphics/egomotion-regression-graphic.pdf}
  \caption{\textbf{Bootstrapped Ego-motion Regression}:
    Illustration of the bootstrap mechanism whereby a robot
    self-supervises the proposed ego-motion regression task in a new
    camera sensor by fusing information from other sensor sources
    such as GPS and INS.}
  \label{fig:egomotion-regression-illustration}
  \vspace{-4mm}
\end{figure}

\begin{figure}[!b]
  \centering
  \includegraphics[width=\columnwidth]{graphics/egomotion-deployment-graphic.pdf}
  \caption{\textbf{Learned Ego-motion Deployment}:
    During model deployment, the learned visual-egomotion model
    provides valuable relative pose constraints to augment the standard
    navigation-based sensor fusion (GPS/INS and wheel encoder odometry
    fusion).}
  \label{fig:egomotion-deployment-illustration}
  \vspace{-2mm}
\end{figure}

The indirect recovery of training data from the fusion of other sensor modalities in robots falls within the \textit{self-supervised or bootstrapped} learning paradigm. We envision this capability to be especially beneficial in the context of life-long learning in future autonomous systems. Using the fused and optimized pose estimates $\mathbf{z}$ (recovered from GPS and odometry estimates), we are able to recover the required input-output relationships for training visual ego-motion for a completely new sensor (as illustrated in Figure~\ref{fig:egomotion-regression-illustration}). Figure~\ref{fig:egomotion-deployment-illustration} illustrates the realization of the learned model in a typical autonomous system, where it is treated as an additional sensor source. Through the experiments in Section~\ref{sec:bootstrap-exp}, we illustrate this concept with the recovery of ego-motion in a robot car equipped with a GPS/INS unit and a single camera.

\subsection{Introspective Reasoning for Scene-Flow Prediction} Scene flow is a fundamental capability that provides directly measurable quantities for ego-motion analysis. The flow observed by sensors mounted on vehicles is a function of the inherent scene depth, the relative ego-motion undergone by the vehicle, and the intrinsic and extrinsic properties of the camera used to capture it. As with any measured quantity, one needs to deal with sensor-level noise propagated through the model in order to provide robust estimates. While the input flow features are an indication of ego-motion, some of the features may be corrupted by a lack of (or ambiguous) visual texture, or by flow induced by the dynamics of objects other than the ego-motion itself. Evidently, we observe that the dominant flow is generally induced by the ego-motion itself, and it is this flow that we intend to fully recover via a conditional variational auto-encoder (C-VAE). By inverting the regression problem, we develop a generative model able to predict the most-likely flow $\hat{\Delta x}$ induced given an ego-motion estimate $\mathbf{z}$ and feature location $x$. We propose a scene-flow-specific autoencoder that encodes the implicit ego-motion observed by the sensor, while jointly reasoning over the latent depth of each of the individual tracked features.

\begin{equation}
  \begin{aligned}
  \mathcal{L_{\text{CVAE}}} =& \mathbb{E}{\big [} \log p_{\theta}(\Delta x
  | \mathbf{z},x) {\big]}\\
  &- D_{KL}\big[q_{\phi}(\mathbf{z} |
  x,\Delta x) ||
  p_{\theta}(\mathbf{z}|x)\big]
  \end{aligned}
  \label{eq:cvae}
\end{equation}
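A minimal sketch of this objective follows (NumPy, single feature; purely for illustration we use a caller-supplied decoder and substitute a standard-normal prior for the conditional prior $p_{\theta}(\mathbf{z}|x)$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def cvae_loss(x, dx, enc_mu, enc_logvar, decode):
    # encoder q(z|x, dx) -> Gaussian; decoder p(dx|z, x) -> flow
    eps = rng.standard_normal(enc_mu.shape)
    z = enc_mu + np.exp(0.5 * enc_logvar) * eps      # reparameterization
    log_p = -0.5 * np.sum((dx - decode(z, x)) ** 2)  # Gaussian log-lik.
    kl = 0.5 * np.sum(np.exp(enc_logvar)
                      + enc_mu ** 2 - 1.0 - enc_logvar)
    return -(log_p - kl)  # negative of the objective above
\end{verbatim}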
Through the proposed denoising autoencoder model, we are also able to attain an introspection mechanism for the presence of outliers. We incorporate this additional module via an auxiliary loss as specified in Eqn~\ref{eq:cvae}. An illustration of these flow predictions is shown in Figure~\ref{fig:evaluation-flow-prediction}.

\section{Discussion}\label{sec:discussion} The initial results in bootstrapped learning for visual ego-motion have motivated new directions towards life-long learning in autonomous robots. While our visual ego-motion model architecture is shown to be sufficiently powerful to recover ego-motions for non-linear camera optics such as fisheye and catadioptric lenses, we continue to investigate further improvements to match existing state-of-the-art models for these lens types. Our current model does not yet capture lens distortion effects; this is a direction we intend to pursue. Another consideration is the resource-constrained setting, where the optimization objective incorporates an additional regularization term on the number of parameters used and the computational load consumed. We hope for this resource-aware capability to transfer to real-world, resource-limited robots and to have a significant impact on the adaptability of robots for long-term autonomy.

\section{Conclusion}\label{sec:conclusion} While many visual ego-motion algorithm variants have been proposed in the past decade, we envision that a fully end-to-end trainable algorithm for generic camera ego-motion estimation will have far-reaching implications in several domains, especially autonomous systems. Furthermore, we expect our method to seamlessly operate under resource-constrained situations in the near future by leveraging existing solutions in model reduction and dynamic model architecture tuning. With the availability of multiple sensors on these autonomous systems, we also foresee our approach to bootstrapped task learning (here, visual ego-motion) potentially enabling robots to learn from experience, and to use the models learned from these experiences to encode redundancy and fault-tolerance within the same framework.

\section{Experiments}\label{sec:experiments} In this section, we provide detailed experiments on the performance, robustness and flexibility of our proposed approach on various datasets. Our approach differentiates itself from existing solutions on various fronts, as shown in Table~\ref{table:vo-landscape}.
We evaluate the performance of our proposed approach on various publicly-available datasets including the KITTI dataset~\cite{Geiger2012CVPR}, the Multi-FOV synthetic dataset~\cite{Zhang2016ICRA} (pinhole, fisheye, and catadioptric lenses), an omnidirectional-camera dataset~\cite{schonbein2014omnidirectional}, and the Oxford RobotCar 1000km dataset~\cite{maddern20161}.

Navigation solutions in autonomous systems today typically fuse various modalities including GPS, odometry from wheel encoders, and INS to provide robust trajectory estimates over extended periods of operation. We provide a similar solution by leveraging the learned ego-motion capability described in this work and fusing it with intermittent GPS updates\footnote{\scriptsize For evaluation purposes only, the absolute ground truth locations were added as weak priors on datasets without GPS measurements} (Section~\ref{sec:performance}). While maintaining similar performance capabilities (Table~\ref{tab:trajectory-errors}), we re-emphasize the benefits of our approach over existing solutions:
\begin{itemize}
\item \textbf{Versatile}: With a fully trainable model, our approach is able to simultaneously reason over both ego-motion and the implicitly modeled camera parameters (\textit{intrinsics} and \textit{extrinsics}). Furthermore, online calibration and parameter tuning are implicitly encoded within the same learning framework.
\item \textbf{Model-free}: Without imposing any constraints on the type of camera optics, our approach is able to recover ego-motions for a variety of camera models including \textit{pinhole}, \textit{fisheye} and \textit{catadioptric} lenses (Section~\ref{sec:model-free}).
\item \textbf{Bootstrapped training and refinement}: We illustrate a bootstrapped learning example whereby a robot self-supervises the proposed ego-motion regression task by fusing information from other sensor sources including GPS and INS (Section~\ref{sec:bootstrap-exp}).
\item \textbf{Introspective reasoning for scene-flow prediction}: Via the C-VAE generative model, we are able to reason/introspect over the predicted flow vectors in the image given an ego-motion estimate. This provides an obvious advantage in \textit{robust} outlier detection and in identifying dynamic objects whose flow vectors need to be disambiguated from the ego-motion scene flow (Figure~\ref{fig:evaluation-flow-prediction}).
\end{itemize}

\input{tex/fig-qualitative-results}

\subsection{Evaluating ego-motion performance with sensor fusion}
\label{sec:performance} In this section, we evaluate our approach against a few state-of-the-art algorithms for monocular visual odometry~\cite{kitt2010visual}. On the KITTI dataset~\cite{Geiger2012CVPR}, the pre-trained estimator is used to robustly and accurately predict ego-motion from KLT features tracked over the dataset image sequence. The frame-to-frame ego-motion estimates are integrated for each session to recover the full trajectory estimate and simultaneously fused with intermittent GPS updates (incorporated every 150 frames). In Figure~\ref{fig:egomotion-fusion}, we show the qualitative performance in the overall trajectory obtained with our method. The entire pose-optimized trajectory is compared against the ground truth trajectory. The translational errors are computed for each of the ground truth and prediction pose pairs, and their median value is reported in Table~\ref{tab:trajectory-errors} for a variety of datasets with varied camera optics.
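For reference, the reported error metric can be sketched as follows (our notation; trajectories are given as $N\times 3$ arrays of paired, aligned positions):
\begin{verbatim}
import numpy as np

def median_trans_error(gt_xyz, pred_xyz):
    # median Euclidean distance between paired poses
    return float(np.median(
        np.linalg.norm(gt_xyz - pred_xyz, axis=1)))
\end{verbatim}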
\subsection{Varied camera optics}\label{sec:model-free} Most existing implementations of VO estimation are restricted to a single class of camera optics, and a general-purpose VO estimator for varied camera optics is rarely attempted. Our approach, on the other hand, has shown the ability to provide accurate VO (with intermittent GPS updates) while simultaneously being applicable to a varied range of camera models. In Figure~\ref{fig:egomotion-evaluation-varied-optics}, we show the trajectories recovered with intermittent GPS updates for all three camera models and verify their accuracy against ground truth. In our experiments, we found that while our proposed solution was sufficiently powerful to model different camera optics, it was significantly better at modeling pinhole lenses as compared to fisheye and catadioptric cameras (see Table~\ref{tab:trajectory-errors}). In future work, we would like to investigate further extensions that improve the accuracy for both fisheye and catadioptric lenses.

\input{tex/tab-trajectory-prediction-performance}

\begin{figure}[h]
  \centering
  {\renewcommand{\arraystretch}{0.1}
  {\setlength{\tabcolsep}{0.1mm}
  \begin{tabular}{ccc}
  \includegraphics[width=0.33\columnwidth]{graphics/trajectory-prediction/rpg/rpg_urban_pinhole_map_500_update.pdf}&
  \includegraphics[width=0.33\columnwidth]{graphics/slam/rpg-fisheye-vo-slam/rpg_urban_fisheye_map_500_update.pdf}&
  \includegraphics[width=0.33\columnwidth]{graphics/slam/rpg-cata-vo-slam/rpg_urban_cata_map_500_update.pdf}\\
  \scriptsize \textbf{Pinhole} & \scriptsize \textbf{Fisheye} & \scriptsize \textbf{Catadioptric}
  \end{tabular}}}
  \caption{\textbf{Varied camera optics:} An illustration of the
    performance of our general-purpose approach for varied camera optics
    (pinhole, fisheye, and catadioptric lenses) on the
    Multi-FOV synthetic dataset~\cite{Zhang2016ICRA}. Without any prior
    knowledge of the camera optics, or the mounting configuration
    (extrinsics), we are able to robustly and accurately recover the full
    trajectory of the vehicle (with intermittent GPS updates every 500
    frames).}
  \label{fig:egomotion-evaluation-varied-optics}
  \vspace{-6mm}
\end{figure}

\subsection{Self-supervised Visual Ego-motion Learning in Robots}
\label{sec:bootstrap-exp} We envision the capability of robots to self-supervise tasks such as visual ego-motion estimation to be especially beneficial in the context of life-long learning and autonomy. We experiment and validate this concept through a concrete example using the 1000km Oxford RobotCar dataset~\cite{maddern20161}. We train the task of visual ego-motion on a new camera sensor by leveraging the fused GPS and INS information collected on the robot car as ground truth trajectories (6-DOF), and by extracting feature trajectories (via KLT) from image sequences obtained from the new camera sensor. The timestamps from the cameras are synchronized with respect to the timestamps of the fused GPS and INS information in order to obtain a one-to-one mapping for training purposes (see the sketch following this section). We train on the \texttt{stereo\_centre} \textit{(pinhole)} camera dataset and present our results in Table~\ref{tab:trajectory-errors}. As seen in Figure~\ref{fig:egomotion-fusion}, we are able to achieve considerably accurate long-term state estimates by fusing our proposed visual ego-motion estimates with even sparser GPS updates (every 2-3 seconds, instead of 50Hz GPS/INS readings). This allows the robot to reduce its reliance on GPS/INS alone for robust, long-term trajectory estimation.
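The timestamp association used to build these training pairs can be sketched as follows (NumPy; the names and the tolerance value are our assumptions, and \texttt{ins\_ts} is assumed sorted):
\begin{verbatim}
import numpy as np

def associate(cam_ts, ins_ts, max_dt=0.05):
    # nearest-neighbor match of camera timestamps to GPS/INS
    idx = np.clip(np.searchsorted(ins_ts, cam_ts),
                  1, len(ins_ts) - 1)
    prev = (cam_ts - ins_ts[idx - 1]) < (ins_ts[idx] - cam_ts)
    idx = np.where(prev, idx - 1, idx)
    return idx, np.abs(ins_ts[idx] - cam_ts) < max_dt
\end{verbatim}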
\begin{figure}[!t]
  \centering
  {\renewcommand{\arraystretch}{0.4}
  {\setlength{\tabcolsep}{0.4mm}
  \begin{tabular}{cccc}
  \rotatebox[x=-6mm]{90}{\scriptsize \textbf{Image}}&
  \includegraphics[width=0.29\columnwidth,frame={\fboxrule}]{graphics/flow-prediction/pinhole-img0250_0.png}&
  \includegraphics[width=0.29\columnwidth,frame={\fboxrule}]{graphics/flow-prediction/fisheye-img0250_0.png}&
  \includegraphics[width=0.29\columnwidth,frame={\fboxrule}]{graphics/flow-prediction/cata-img0250_0.png}\\
  \rotatebox[x=-5.5mm]{90}{\scriptsize \textbf{Forward}}&
  \includegraphics[width=0.3\columnwidth]{graphics/flow-prediction/pinhole-forward-flow.pdf}&
  \includegraphics[width=0.3\columnwidth]{graphics/flow-prediction/fisheye-forward-flow.pdf}&
  \includegraphics[width=0.3\columnwidth]{graphics/flow-prediction/cata-forward-flow.pdf}\\
  & \scriptsize \textbf{(a) Pinhole} & \scriptsize \textbf{(b) Fisheye} & \scriptsize \textbf{(c) Catadioptric}\\
  \end{tabular}}}
  \caption{\textbf{Introspective reasoning for scene-flow prediction}:
    Illustrated above are the dominant flow vectors corresponding to
    scene-flow given the corresponding ego-motion. While this module is
    not currently used in the ego-motion estimation, we expect it to be
    critical for outlier rejection. \textbf{Row 1}: Sample image from the
    camera; \textbf{Row 2}: Flow induced by forward motion.}
  \label{fig:evaluation-flow-prediction}
  \vspace{-6mm}
\end{figure}

\subsection{Implementation Details}\label{sec:implementation}
In this section we describe the details of our proposed model, the training methodology, and the parameters used. The input $\mathbf{x} = (\bm{x}, \Delta\bm{x})$ to the density-based ego-motion estimator consists of feature tracks extracted via Kanade-Lucas-Tomasi (KLT) feature tracking over the raw camera image sequences. The input feature positions and flow vectors are normalized to be in the range $[-1,1]$ using the dimensions of the input image. We evaluate sparse LK (Lucas-Kanade) optical flow over 7 pyramidal scales with a scale factor of $\sqrt{2}$. As the features are extracted, the corresponding robot pose (available either via GPS or via GPS/INS/wheel-odometry sensor fusion) is synchronized and recorded in $SE(3)$ for training purposes. The input KLT features and the corresponding relative pose estimates used for training are parameterized as $\mathbf{z} = (\mathbf{t},\mathbf{r}) \in \mathbb{R}^6$, with a Euclidean translation vector $\mathbf{t} \in \mathbb{R}^3$ and an Euler rotation vector $\mathbf{r} \in \mathbb{R}^3$.
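This feature-tracking front-end can be approximated with OpenCV as follows (a sketch: the pyramid depth and normalization follow the text, but OpenCV's LK pyramid uses a fixed scale factor of 2 rather than $\sqrt{2}$, and the remaining parameter values and names are our assumptions):
\begin{verbatim}
import cv2
import numpy as np

def klt_features(img0, img1, max_feats=50):
    # track sparse KLT features between two grayscale frames
    p0 = cv2.goodFeaturesToTrack(img0, maxCorners=max_feats,
                                 qualityLevel=0.01, minDistance=8)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(img0, img1, p0,
                                             None, maxLevel=7)
    p0 = p0[status == 1]
    p1 = p1[status == 1]
    h, w = img0.shape[:2]
    scale = np.array([w, h], dtype=np.float32)
    x = 2.0 * p0 / scale - 1.0    # positions in [-1, 1]
    dx = 2.0 * (p1 - p0) / scale  # normalized flow vectors
    return x, dx
\end{verbatim}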
\textbf{Network and training:} The proposed architecture consists of a set of fully-connected stacked layers (with 1024, 128 and 32 units) followed by a Mixture Density Network with 32 hidden units and $K=5$ mixture components. Each of the initial fully-connected layers is followed by a \textit{tanh} activation and a dropout layer with a dropout rate of 0.1. The final output layer of the MDN ($a^{\pi}$, $a^{\mu}$, $a^{\sigma}$) consists of $(O + 2) \times K$ outputs, where $O$ is the number of states estimated.

The network is trained (in Stage 1) with loss weights of 10, 0.1, and 1 corresponding to the losses $\mathcal{L}_{MDN}, \mathcal{L}_{TRAJ}, \mathcal{L}_{CVAE}$ described in the previous sections. The training data is provided in batches of 100 frame-to-frame subsequent image pairs, each consisting of approximately 50 feature matches randomly sampled via KLT. The learning rate is set to $1\mathrm{e}{-3}$ with Adam as the optimizer. On the synthetic Multi-FOV dataset and the KITTI dataset, training most models took roughly an hour and a half (3000 epochs), independent of the KLT feature extraction step.

\textbf{Two-stage optimization}: We found the one-shot joint optimization of the \textit{local} ego-motion estimation and the \textit{global} trajectory optimization to have considerably low convergence rates during training. One possible explanation is the high sensitivity of the loss weight parameters used for tuning the local and global losses into a single objective. As previously noted in Section~\ref{sec:trajectory-optimization}, we separate the training into two stages, thereby alleviating the aforementioned issues and maintaining fast convergence rates in Stage 1. Furthermore, we note that the second stage requires only a few tens of iterations to reach sufficiently accurate ego-motion trajectories. In order to optimize over a larger time-window in Stage 2, we set the batch size to 1000 frame-to-frame image matches, again randomly sampled from the training set as before. Due to the large integration window and memory limitations, we train this stage purely on the CPU, for only 100 epochs, with each epoch taking roughly 30~s. Additionally, in Stage 2, the loss weight for $\mathcal{L}_{TRAJ}$ is increased to 100 in order to converge faster to the \textit{global} trajectory. The remaining loss weights are left unchanged.

\textbf{Trajectory fusion}: We use GTSAM\footnote{\scriptsize\url{http://collab.cc.gatech.edu/borg/gtsam}} to construct the underlying factor graph for pose-graph optimization. Odometry constraints obtained from the frame-to-frame ego-motion are incorporated as 6-DOF constraints parameterized in $SE(3)$ with $1\times10^{-3}$~rad rotational noise and $5\times10^{-2}$~m translational noise. As with typical autonomous navigation solutions, we expect measurement updates in the form of GPS (absolute reference updates) in order to correct for the long-term drift incurred in open-loop odometry chains. We incorporate absolute prior updates only every 150 frames, with a weak translation prior of $0.01$~m. The constraints are incrementally added and solved using iSAM2~\cite{kaess2012isam2} as the measurements are streamed in, with updates performed every 10 frames.

While the proposed MDN is parametrized in Euler angles, the \textit{trajectory integration module} parameterizes the rotation vectors as quaternions for robust and unambiguous long-term trajectory estimation. All the rigid body transformations are implemented directly in Tensorflow for pure-GPU training support.
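Putting the architecture details above together, a minimal Keras sketch of the regressor trunk follows (the layer sizes follow the text; the input dimensionality and variable names are our assumptions, and the custom MDN loss of Eqn~\ref{eq:nll} is attached separately):
\begin{verbatim}
from tensorflow import keras

O, K = 6, 5  # states, mixture components
model = keras.Sequential([
    keras.layers.Dense(1024, activation="tanh",
                       input_shape=(4,)),  # (x, y, dx, dy)
    keras.layers.Dropout(0.1),
    keras.layers.Dense(128, activation="tanh"),
    keras.layers.Dropout(0.1),
    keras.layers.Dense(32, activation="tanh"),
    keras.layers.Dropout(0.1),
    keras.layers.Dense(32, activation="tanh"),  # MDN hidden layer
    keras.layers.Dense((O + 2) * K),  # a_mu, a_sigma, a_pi (linear)
])
\end{verbatim}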
\textbf{Run-time performance}: We are particularly interested in the run-time / test-time performance of our approach on CPU architectures for resource-constrained settings. Independent of the KLT feature-tracking run-time, we are able to recover ego-motion estimates in roughly 3~ms on a consumer-grade Intel(R) Core(TM) i7-3920XM CPU @ 2.90GHz.

\textbf{Source code and pre-trained weights}: We implemented the MDN-based ego-motion estimator with Keras and Tensorflow, and trained our models using a combination of CPUs and GPUs (NVIDIA Titan X). All the models were trained on a server-grade Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz and tested on the consumer-grade machine mentioned above to emulate potential real-world use-cases. The source code and pre-trained models will be made available shortly\footnote{\scriptsize See~\url{http://people.csail.mit.edu/spillai/learning-egomotion} and~\url{https://github.com/spillai/learning-egomotion}}.
","meta":{"redpajama_set_name":"RedPajamaArXiv"}}