{"text":"\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Conclusions} \\label{SEC:Conclusion}\nIn this paper we have presented an approach that estimates frequency differences between the mixing oscillator in the HF transmitter, that moves the baseband signal to the carrier frequency, and the oscillator in the receiver that converts the radio signal back to baseband. It is based on tracking the pitch and its harmonics in the received baseband speech signal in an open range search method. Experiments on real data from \\gls{HF} transmissions show promising performance, both in terms of precision and computational complexity.\n\n\n\\balance\n\\renewcommand*{\\bibfont}{\\small}\n\\bibliographystyle{IEEEtran}\n\n\\section{Pitch tracking experiments} \\label{SEC:PitchEx}\n\\begin{figure}[b]\n\t\\centering\n\t\\input{images\/wizzard.tex}\n\t\\caption{Cumulative density function (CDF) of pitch estimation error per frame on PTDB-TUG database \\cite{PTDB-TUG2011}.}\n\t\\label{fig:wizzard}\n\\end{figure}\nTo evaluate the proposed approach w.r.t.\\ its capabilities of tracking the human pitch we used the PTDB-TUG database from \\cite{PTDB-TUG2011} and compared our approach to the YAAPT algorithm implementation \\cite{Zahorian08}. The results are given in Fig.~\\ref{fig:wizzard}. The Rake-PC pitch tracker achieves in more than 85\\% of all pitch containing speech segments a higher precision than the YAAPT algorithm. However, in the remaining 15\\% of the segments the performance stays way below YAAPT. This difference can be attributed to the sophisticated post-processing of YAAPT (multi pitch candidate selection process, non-linearities to restore missing pitches, application of temporal restrictions) which is missing in Rake-PC. If a Kalman filter is applied to the pitch trajectory estimated by Rake-PC, the performances difference can be compensated for to a great degree.\n\nFig.~\\ref{fig:wizzard} also shows the results of an oracle experiment, where it was allowed to multiply the pitch tracking results by a factor of $2$ or $0.5$ to compensate for mistakenly selecting a harmonic or subharmonic as the pitch frequency. Both algorithms benefit from the oracle, with Rake-PC achieving higher gains and even outperforming YAAPT. From this control experiment it can be concluded that the majority of the large errors in pitch estimation are caused by a wrong classification of pitch harmonics and sub-harmonics to be the pitch. This is a typical error to be handled by post-processing. But this misinterpretation has no impact on the task of carrier frequency difference estimation, because we are only interested in the sum of the PSD values at pitch and pitch harmonics. \n\n\n\\section{Experiments on HAM radio data} \\label{SEC:Experiments}\n\n\\begin{figure}[t]\t\t\n\t\\input{images\/cmp_FFT2.tex}\t\n\t\\caption{Difference between estimated and ground truth carrier frequency difference for three FFT sizes. Length of speech activity was ${\\geq}\\SI{10}{s}$. No crosstalkers present.} \t\\label{fig:cmp_FFT}\t\n\\end{figure}\n\n\n\\input{sections\/database}\n\n\\subsection{Carrier Frequency Difference Estimation}\nIn Fig.~\\ref{fig:cmp_FFT} the error between the estimated difference $\\widehat{f}_{D}$ and the ground truth difference is depicted. For this experiment the length of the speech segments was between $\\SI{10}{s}$ and $\\SI{27}{s}$. 
The system works reliably with errors below $\\pm \\SI{5}{Hz}$, an error that is not perceivable by humans \\cite{Clark2013}.\n\nFor shorter speech segments the error increases, as can be seen in Fig.~\\ref{fig:ErrorClassAffiliation}, where we grouped the estimation errors into five classes. In that figure we compared our approach to the harmonic\/spectral envelope approach of \\cite{Ganapathy2013}, which we implemented ourselves since no open-source implementation was available.\nIt can be observed that the proposed approach achieves lower estimation errors, both on short and longer speech utterances.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\resizebox{\\columnwidth}{!}{\n\t\t\\input{images\/ErrorClassAffiliationCMP2.tex}}\n\t\\caption{Dependency of error class affiliation on speech segment length. Implementation of the harmonic\/spectral envelope approach following \\cite{Ganapathy2013}. FFT size set to $4096$.}\n\t\\label{fig:ErrorClassAffiliation}\n\\end{figure}\n\n\\begin{figure}[b]\n\t\\centering\n\t\\input{images\/crosstalker.tex}\n\t\\caption{Accumulated logarithmic energy in case of two parallel speakers. The crosstalker is visible as a secondary maximum and its (sub-)harmonics.}\n\t\\label{fig:multipleparallelspeakerpsdtrack}\t\n\\end{figure}\n\n\\subsection{Parallel speakers and harmonic errors}\nA small number of our recordings include the special case of a concurrent speaker at a higher frequency. This is a challenging, yet likely, scenario to be encountered in practice.\nFig.~\\ref{fig:multipleparallelspeakerpsdtrack} depicts the accumulated logarithmic energy $\\widehat{\\Gamma}(f_D)$, Eq.~\\eqref{EQ:accumulatedLogEnergy}, versus the candidate frequency difference $f_D$. Both the speaker at $\\SI{100}{Hz}$ and the interfering crosstalker at $\\SI{1098}{Hz}$ are visible as maxima in the accumulated log-energy. Two secondary maxima are also visible next to the crosstalker, which we attribute to harmonics\/subharmonics and a possible non-linearity in the transmission system. Hence, extending Rake-PC towards concurrent speaker tracking and identification seems possible, similar to multi-speaker tracking in diarization \\cite{Hogg2019} or localization tasks \\cite{Gerlach2014}.\n\n\n\n\\subsection{Processing time}\nIn Fig.~\\ref{fig:realtime} the real-time factors of the proposed approach, our implementation of \\cite{Ganapathy2013} and the reference implementation of the YAAPT algorithm from \\cite{Zahorian08} are given. The implementation following Fig.~\\ref{Fig:rakesimplified} (``Rake-PC (Single Core)'') improves the real-time factor significantly compared to the direct implementation (``Rake (Single Core)''). The overall processing time can be further reduced by a straightforward parallel implementation (``Rake-PC (Multi Core)''), as sketched below. For an \\gls{FFT} size of 2048, Rake-PC has a real-time factor similar to that of \\cite{Ganapathy2013} and \\cite{Zahorian08}.\n\n\\begin{figure}[htb]\n\t\\centering\n\t\\input{images\/rtf.tex}\n\t\\caption{Real-time factors for different approaches and \\gls{FFT} sizes, including single- and multi-core implementations for a block shift of \\SI{20}{ms}. (AMD Ryzen 5 3600, 6-Core, 32GB RAM, Matlab)}\n\t\\label{fig:realtime}\t\n\\end{figure}
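\n\nTo illustrate the parallel variant mentioned above, the following minimal sketch distributes the evaluation of the candidate frequency differences over several worker processes. This is our own illustration, not the Matlab implementation used for the measurements; the helper \\texttt{accumulated\\_log\\_energy()} is hypothetical and stands for the computation of $\\widehat{\\Gamma}(f_D)$ for one candidate $f_D$.\n\\begin{verbatim}\nfrom multiprocessing import Pool\nimport numpy as np\n\ndef score_shift(args):\n    # accumulated log-energy for one candidate shift f_D;\n    # accumulated_log_energy() is a hypothetical helper\n    log_psd_frames, f_d = args\n    return accumulated_log_energy(log_psd_frames, f_d)\n\ndef rake_parallel(log_psd_frames, shifts, n_workers=6):\n    # embarrassingly parallel over the candidate set\n    with Pool(n_workers) as pool:\n        jobs = [(log_psd_frames, f_d) for f_d in shifts]\n        gammas = pool.map(score_shift, jobs)\n    return shifts[int(np.argmax(gammas))]\n\\end{verbatim}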
\n\n\\section{Deep Neural Network}\\label{SEC:DNN}\nThe proposed approach can be further improved by replacing the two maximum operations of \\eqref{EQ:Max} and \\eqref{EQ:Argmax} with a \\gls{DNN}-based detector working directly on the features $\\bm{\\Gamma}(t,\\bm{f}_{P},f_{S})$.\n\nWe expect the \\gls{DNN} to learn the temporal dependencies ...\n\n\n\n\n\n\\section{Introduction}\nThe scenario at hand envisions a radio station listening on a fixed, pre-selected frequency, searching for \\gls{SSB}-modulated \\gls{HF} signals. \nIf the receiver selects a different carrier frequency than the transmitter, the demodulated signal contains a frequency-shifted version of the original speech signal, where the shift equals the carrier frequency difference \\cite{Suzuki1994}. This has a detrimental effect on the intelligibility of the transmitted speech signal \\cite{Baskent2007}. Fig.~\\ref{fig:spec} shows the spectrogram of a demodulated signal from a station operating at a carrier frequency difference of $\\SI{500}{Hz}$ compared to the transmitter. \n\nTo improve intelligibility, the carrier frequency difference should be estimated and the signal shifted in frequency to remove the difference. This contribution is concerned with the first task, the determination of the carrier frequency difference from the demodulated speech signal. The second task, the compensation, is rather straightforward and will not be considered here.\n\nA carrier frequency difference can be estimated by investigating the statistical properties of speech, e.g., the modulation symmetry \\cite{Clark2013} or the spectral envelope \\cite{Ganapathy2013}. \nThe contribution \\cite{Clark2013} utilizes a third-order modulation spectral analysis that, however, limits the analyzable spectrum to one-fourth of its total width, and \\cite{Ganapathy2013} proposes a fundamental harmonic frequency detector that requires relatively long speech segments for reliable estimation in noisy conditions. \nTraining-based frequency offset estimation has been proposed in \\cite{Xing2017}, where GMM-SVMs, i-Vectors and deep neural networks are employed. The main disadvantage here is the requirement of a sufficiently large and representative data set for training, as \\gls{HF} transmissions include a variety of distortions.\n\n\\begin{figure}[b]\n\t\\centering\n\t\\input{images\/spec.tex}\n\t\\caption{Spectrogram of a signal transmitted over HF and demodulated with \\SI{500}{Hz} carrier frequency difference. Marked in black are the carrier frequency difference (dashed) and the pitch traces including two harmonics.}\n\t\\label{fig:spec}\n\\end{figure}\n\nIn this paper we follow the idea of \\cite{Suzuki1994, Ganapathy2013, Xing2017}: by detecting the typical pitch structures in the spectrogram, a possible carrier frequency difference becomes apparent. \nFundamental frequency estimation, or pitch tracking, has been a research topic for years with applications in signal enhancement and speaker identification tasks. 
Various approaches are known from the literature, e.g., RAPT \\cite{Talkin2005ARA}, STRAIGHT \\cite{Kawahara02}, YIN \\cite{YIN02} and YAAPT \\cite{Zahorian08}.\nSince most approaches are based on correlation techniques, be it in the time \\cite{YIN02} or frequency domain \\cite{Kawahara02} or even in both domains \\cite{Zahorian08}, comparative studies show only small differences between the algorithms in terms of precision \\cite{8081482}, as they all depend on similar features. Detecting candidates for periodic signals within the physical range of the vocal cords' oscillation frequencies is usually the first step, which is followed by a post-processing for candidate refinement and subsequent smoothing \\cite{Talkin2005ARA, Zahorian08}. Besides time- and frequency-domain approaches, cepstral-domain estimators have also been proposed \\cite{Gerkman2010}. \nClearly, all methods suffer at low \\gls{SNR} \\cite{8081482}, and robustness to distortions is an important aspect. Here, new approaches based on \\glspl{DNN} have reported promising results \\cite{Han2014}.\n\nHowever, pitch tracking with the purpose of frequency difference estimation poses different constraints compared to the pure pitch tracking task, since in our scenario the pitch and its harmonics are shifted by an arbitrary frequency, requiring an open range search over all possible shifts. \n\nThe contributions of this paper are two-fold. First, we introduce and discuss our new approach to carrier frequency difference estimation named ``Rake''. It is based on accumulated log-energy values and \nenables estimation on significantly shorter segments of speech compared to existing approaches, e.g., \\cite{Ganapathy2013}. Second, an efficient implementation in the power cepstrum domain is proposed to reduce the computational demands of the approach. Finally, in the experiments we evaluate the proposed algorithm on real \\gls{SSB} \\gls{HF} recordings and also compare it to a state-of-the-art pitch tracking algorithm and a frequency difference estimation algorithm.\n\nThe paper is organized as follows: In Sec.~\\ref{SEC:Rake} our features for carrier frequency difference estimation are derived, followed by Sec.~\\ref{SEC:Implementation}, where we discuss details of the implementation in the power cepstrum domain. In Sec.~\\ref{SEC:PitchEx} and Sec.~\\ref{SEC:Experiments} the experimental results are discussed. The paper ends by drawing some conclusions in Sec.~\\ref{SEC:Conclusion}.\n\n\n\\section{Implementation} \\label{SEC:Implementation}\nThe computationally expensive evaluation of the terms in \\eqref{EQ:GammaSum} can be interpreted as a correlation of the logarithmic \\gls{PSD} values $\\log\\{|X(t,f)|^2\\}$ with a filter function\n\\begin{align} \\label{EQ:FilterBank}\n\th(f,f_P) = \\sum_{\\tau=1}^{\\tau_{\\text{max}}} \\sum_{\\nu=-W}^{W} \\omega(\\tau,\\nu) \\cdot \\gamma(f-\\tau \\cdot f_P(t)-\\nu),\n\\end{align}\nalong the frequency axis, where $\\gamma(.)$ denotes the unit impulse. This interpretation is similar to the harmonic sieves proposed in \\cite{Gerlach2014}. The correlation, which has to be carried out for all $f_P(t) \\in [f_{P,\\min}, f_{P,\\max}]$, can be efficiently computed by applying an \\gls{FFT}, i.e., by moving to the power cepstral domain, and using the overlap-save method.
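\n\nThe following minimal sketch illustrates this correlation for a single frame. It is our own illustration, not the original Matlab implementation, and for brevity it uses plain zero-padded transforms instead of block-wise overlap-save processing; the kernel matrix is assumed to hold one comb filter $h(f,f_P)$ from \\eqref{EQ:FilterBank} per pitch candidate.\n\\begin{verbatim}\nimport numpy as np\n\ndef correlate_logpsd(log_psd, kernels):\n    # log_psd: (F,) log-PSD values of one frame\n    # kernels: (P, F) one comb filter per pitch candidate f_P\n    # returns: (P, F) correlation values, i.e. Gamma for all\n    #          candidate shifts f_D (given by the bin lag)\n    F = log_psd.shape[-1]\n    n = 2 * F                    # zero-pad: no circular wrap\n    X = np.fft.rfft(log_psd, n)  # power cepstral domain\n    H = np.fft.rfft(kernels, n)\n    return np.fft.irfft(X * np.conj(H), n)[..., :F]\n\\end{verbatim}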
\n\nFig.~\\ref{Fig:rakesimplified} shows a block diagram of the overall algorithm. This implementation is denoted as ``Rake-PC'' (Rake Power Cepstrum) in the following.\n\\begin{figure}[htb]\n\t\\centering \\footnotesize\n\t\\def.95\\columnwidth{.95\\columnwidth}\n\n\t\\import{images\/}{rake2.pdf_tex}\n\t\\caption{Rake-PC: Block diagram showing power cepstral domain correlation, carrier frequency difference and pitch trace estimation.}\n\t\\label{Fig:rakesimplified}\n\\end{figure}\nThe upper part of the figure illustrates the realization, in the PSD domain, of the correlation of the log-PSD with the set of filters, where each filter corresponds to an assumed value of $f_P$. The resulting $\\Gamma(t,f_{P}(t),f_D)$ depends on the time frame $t$, the assumed pitch frequency $f_P(t)$, and the carrier frequency shift $f_D$. Next, the optimal value of the pitch is determined according to Eq.~\\eqref{EQ:Max}, followed by a summation along the time axis, Eq.~\\eqref{EQ:accumulatedLogEnergy}. The resolution of the resulting $\\widehat{\\Gamma}(f_D)$ is limited by the \\gls{STFT} size, as the shift is given by its bin index. This can be overcome by interpolating the accumulated log-energy terms, e.g., by spline interpolation. The subsequent argmax operation from \\eqref{EQ:Argmax} yields the final estimate $\\hat{f}_D$, whose resolution is no longer limited by the FFT size. \n\nNote that the first maximum operation, as stated in \\eqref{EQ:Max}, is carried out independently for each time frame $t$; this is a clear shortcoming of the method, as it does not account for the inertia of the vocal cords, which results in smooth pitch trajectories. Introducing a-priori knowledge to account for the lowpass characteristics of the pitch trajectory, e.g., by using a simple first order Markov chain as proposed in \\cite{Gerkman2010}, would improve the pitch tracking precision, albeit at the cost of a significantly increased computational complexity.\n\nThe values of $f_D$ are quantized by the FFT resolution and the maximum search from \\eqref{EQ:Argmax} is restricted to the frequency bins that belong to the frequency range \\SI{0}{Hz} to \\SI{3500}{Hz} for a signal sampled at $\\SI{8}{kHz}$.\nThe upper limit is motivated by the fact that a speech signal requires approximately a $\\SI{500}{Hz}$ bandwidth to be intelligible and that the regarded harmonics have to fit in the considered frequency range. Signals with negative offsets $f_D$ are not considered because they are characterized by a significant loss of the lower frequencies and remain unintelligible without signal reconstruction approaches.\n\n\n\nThe availability of pitch trace estimates $f'_P(t,f_D)$, $t=0,\\ldots , T-1$, offers the opportunity to discard maxima in $\\widehat{\\Gamma}(f_D)$ that are caused by narrow-band digital transmissions instead of speech. As human speech is characterized by a time-variant pitch, while digital transmissions operate with a fixed frequency, the two can be discerned by the variance of $f'_P(t,f_D)$. If it falls below a threshold, the detected signal is probably not speech and is consequently discarded, as sketched below.
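\n\nA minimal sketch of this plausibility check (our own illustration; the threshold value is hypothetical and would have to be tuned on data):\n\\begin{verbatim}\nimport numpy as np\n\ndef is_speech(pitch_trace_hz, var_threshold=100.0):\n    # human pitch varies over time, whereas narrow-band\n    # digital transmissions keep an almost fixed frequency\n    return np.var(pitch_trace_hz) > var_threshold\n\\end{verbatim}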
\n\n\\section{Rake Approach} \\label{SEC:Rake}\nWe are given a demodulated \\gls{SSB} \\gls{HF} signal, which we assume has already been pre-processed by a speech activity detection unit, e.g., by the DNN-based approach from \\cite{Heitkaemper2020}, such that only segments with active speakers are considered in the following.\nNote that these segments consist of voiced and unvoiced speech, as well as short pauses. The pitch and its harmonics are clearly visible in a spectrogram if the \\gls{STOI} value \\cite{Taal2011STOI} is above $0.5$. However, many \\gls{SSB} transmissions have much worse \\gls{STOI} values and the pitch contours are occluded by noise or even completely erased, requiring noise-robust algorithms and approaches (see \\cite{Heitkaemper2020a} for example signals).\n\nIn the following we propose to estimate the carrier frequency difference by a method that is based on locating the pitch and its harmonics in the noisy speech spectrogram. To this end it uses a filterbank with adjustable and time-varying center frequencies, which correspond to the fundamental frequency and its harmonics. One can view the filtering operation as a rake that is pulled in time direction through the logarithmic \\gls{PSD} values $\\log\\{|X(t,f)|^2\\}$ of the signal's \\gls{STFT} $X(t,f)$, where $t$ denotes the frame index and $f$ the frequency bin index, to collect the energy at the pitch frequency and its harmonics. The relevant frequency bin indices at the $t$-th frame for a hypothetical carrier frequency difference $f_D$, pitch $f_P(t)$ and the corresponding pitch harmonics are given by $f_D + \\tau \\cdot f_{P}(t)$, where $\\tau \\in [1, \\tau_{\\text{max}}]$. To account for the limited frequency resolution of the \\gls{STFT} analysis, not only the frequency bin itself but also a small range around it is considered by introducing the frequency deviation parameter $\\nu$. Thus, the logarithmic \\gls{PSD} values of the pitch including the harmonics are given by \n\\begin{align} \\label{EQ:PSDTerms}\n&\\Psi_{\\nu}^\\tau(t,{f}_{P}(t),f_D) = \\log\\{|X(t,f_D + \\tau \\cdot f_{P}(t)+\\nu)|^2\\}.\n\\end{align}\nThese values are weighted by factors $\\omega(\\tau,\\nu)$, which depend on the harmonic index $\\tau$ and the distance $\\nu$ to the filter center, and are summed by\n\\begin{align} \\label{EQ:GammaSum}\n& \\Gamma(t,f_{P}(t),f_D) = \\sum_{\\tau=1}^{\\tau_{\\text{max}}} \\sum_{\\nu=-W}^{+W} \\omega(\\tau,\\nu) \\cdot \\Psi_{\\nu}^\\tau(t,f_{P}(t),f_D).\n\\end{align}\n\nFor each frequency difference $f_D$ a different sequence of pitch values $\\bm{f}_P = [f_{P}(0),\\ldots, f_{P}(T-1)]$ is optimal in the sense that the summation of $\\Gamma(t,f_{P}(t),f_D)$ along $t$ reaches a maximum. Stated differently, the maximization of \\eqref{EQ:GammaSum} will yield an estimate of $f_D$. This is achieved by the following three steps. First, for each time instance $t$, the maximum across the possible pitch hypotheses $f_{P}(t) \\in [f_{P,\\min},f_{P,\\max}]$ is computed:\n\\begin{align} \\label{EQ:Max}\nf'_{P}(t, f_D) = \\underset{f_{P}(t)}{\\argmax} \\left\\{ \\Gamma(t,f_{P}(t),f_D)\\right\\} \\\\ \n\\Gamma'(t,f_D) = \\Gamma(t,f'_{P}(t, f_D),f_D).\n\\end{align}\n$\\Gamma'(t,f_D)$ is the maximum log-energy value for a given demodulation shift hypothesis $f_D$, and the corresponding pitch hypothesis is $f'_{P}(t, f_D)$. Here, $f_{P,\\min}$ and $f_{P,\\max}$ denote the frequency bins corresponding to the assumed minimum (\\SI{50}{Hz}) and maximum (\\SI{400}{Hz}) pitch frequencies.
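\n\nFor illustration, the following sketch evaluates \\eqref{EQ:PSDTerms}--\\eqref{EQ:Max} directly for a single frame and a single candidate shift. This is our own example, not the reference implementation; variable names and the weight lookup are ours, and all frequencies are given as \\gls{STFT} bin indices.\n\\begin{verbatim}\nimport numpy as np\n\ndef rake_frame(log_psd, f_d, pitch_bins, w, tau_max, W):\n    # log_psd: (F,) log-PSD values of the current frame\n    # f_d:     candidate carrier frequency difference [bins]\n    # w:       weights omega(tau, nu), e.g. a dict\n    best_gamma, best_fp = -np.inf, None\n    F = len(log_psd)\n    for f_p in pitch_bins:                 # pitch hypotheses\n        gamma = 0.0\n        for tau in range(1, tau_max + 1):  # pitch + harmonics\n            for nu in range(-W, W + 1):    # +-W bins tolerance\n                k = f_d + tau * f_p + nu\n                if 0 <= k < F:\n                    gamma += w[tau, nu] * log_psd[k]\n        if gamma > best_gamma:             # maximum over f_P\n            best_gamma, best_fp = gamma, f_p\n    return best_fp, best_gamma\n\\end{verbatim}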
\n\nNext, a summation over time results in the accumulated log-energy per carrier frequency difference hypothesis $f_D$: \n\\begin{align} \\label{EQ:accumulatedLogEnergy}\n& \\widehat{\\Gamma}(f_D) = \\sum_{t=0}^{T-1} \\Gamma'\\left(t,f_D \\right).\n\\end{align}\nAssuming a single speaker scenario, the maximum of $\\widehat{\\Gamma}(f_D)$ is selected as the most likely hypothesis for the demodulation shift $\\widehat{f}_{D}$: \n\\begin{align} \\label{EQ:Argmax}\n\\widehat{f}_{D} = \\underset{f_{D}\\in \\Omega_{f_D}}{\\argmax} \\left\\{ \\widehat{\\Gamma}(f_D)\\right\\} \n\\end{align}\nwhere $\\Omega_{f_D}$ denotes the set of candidate frequency differences,\nand the corresponding pitch hypothesis sequence is given by\n\\begin{align}\n\\label{EQ:PitchHyp}\n\\widehat{\\bm{f}}_{P} = [f'_{P}(0, \\widehat{f}_{D}),\\ldots, f'_{P}(T-1, \\widehat{f}_{D})].\n\\end{align}\nAs reported in several publications, e.g., in \\cite{Zahorian08}, and also observed in our own experimental recordings, some audio segments have only a very weak or even no pitch at all, although the harmonics are clearly observable. To account for this observation, the weight of the pitch is set to only half the weight of the first harmonic, i.e., $\\omega(1,\\nu) = \\omega(2,\\nu)\/2$. Furthermore, all higher harmonics are weighted with $\\omega(\\tau,\\nu) \\propto \\frac{1}{\\tau}$, whereby all filters are designed in a triangular shape.\n\nThe summation in \\eqref{EQ:GammaSum} gives a similar pitch detection feature as the \\gls{SHC} used in the YAAPT algorithm \\cite{Zahorian08}; however, \\eqref{EQ:GammaSum} is defined as a sum of logarithmic \\gls{PSD} values, whereas the \\gls{SHC} is a sum over products of magnitude spectral values. The log-spectral domain formulation reduces the dynamic range and improves the numerical stability. \n\nSince \\eqref{EQ:GammaSum} extends the pitch tracking problem towards an open range search by introducing the unknown parameter $f_D$, the computational complexity of the problem is increased by a factor proportional to the size $|\\Omega_{f_D}|$ of the set of candidate values for the frequency difference. Hence, reducing the computational complexity becomes an important task, which in our case is handled by interpreting \\eqref{EQ:GammaSum} in the cepstral domain, as shown in the next section.\n\n\n\\subsection{Ham Radio Data}\\label{SEC:dataset}\nWe have set up a transmission system between our amateur radio station in Paderborn and several other distant ham radio stations across Europe, transmitting utterances from the LibriSpeech corpus \\cite{libri20}. Kiwi-\\gls{SDR} devices \\cite{kiwi20} at the distant stations were utilized to demodulate the received \\gls{SSB} \\gls{HF} signals and send the recorded audio signals back to our servers via a websocket connection. Audio markers were added to the signal to allow for an automated time alignment between the transmitted and received signals, easing the annotation and segmentation of the data \\cite{Heitkaemper2020a}.\n\nFor the transmissions a beacon with the callsign DB0UPB was used, which was supervised by a human to avoid interference with other ham radio stations. The \\gls{HF} signals are \\gls{SSB} modulated using the \\gls{LSB} with a bandwidth of $\\SI{2.7}{kHz}$ at carrier frequencies of $\\SI{7.06}{MHz}-\\SI{7.063}{MHz}$ and $\\SI{3.6}{MHz}-\\SI{3.62}{MHz}$. 
To simulate a carrier frequency difference, the modulation and demodulation frequencies of the transmitter and the receiver were selected to differ by values from the set $f_D \\in \\{0, 100, 300, 500, 1000\\}$~Hz. \nAlthough the original speech samples have a sampling rate of \\SI{16}{kHz}, and the Kiwi-\\gls{SDR} samples the data at \\SI{12.001}{kHz}, the finally emitted data is band-limited to $\\SI{2.7}{kHz}$ (\\gls{ITU} regulations), which introduces a loss of the upper frequencies in the case of \\gls{LSB} transmission, depending on the carrier frequency difference $f_D$. \nThe data set has a total size of 23:31 hours, of which 3:28 hours contain speech activity.\n\n\n\\section{Introduction}\n\nThe journal \\textit{Monthly Notices of the Royal Astronomical Society} (MNRAS) encourages authors to prepare their papers using \\LaTeX.\nThe style file \\verb'mnras.cls' can be used to approximate the final appearance of the journal, and provides numerous features to simplify the preparation of papers.\nThis document, \\verb'mnras_guide.tex', provides guidance on using that style file and the features it enables.\n\nThis is not a general guide on how to use \\LaTeX, of which many excellent examples already exist.\nWe particularly recommend \\textit{Wikibooks \\LaTeX}\\footnote{\\url{https:\/\/en.wikibooks.org\/wiki\/LaTeX}}, a collaborative online textbook which is of use to both beginners and experts.\nAlternatively there are several other online resources, and most academic libraries also hold suitable beginner's guides.\n\nFor guidance on the contents of papers, journal style, and how to submit a paper, see the MNRAS Instructions to Authors\\footnote{\\label{foot:itas}\\url{http:\/\/www.oxfordjournals.org\/our_journals\/mnras\/for_authors\/}}.\nOnly technical issues with the \\LaTeX\\ class are considered here.\n\n\n\\section{Obtaining and installing the MNRAS package}\nSome \\LaTeX\\ distributions come with the MNRAS package by default.\nIf yours does not, you can either install it using your distribution's package manager, or download it from the Comprehensive \\TeX\\ Archive Network\\footnote{\\url{http:\/\/www.ctan.org\/tex-archive\/macros\/latex\/contrib\/mnras}} (CTAN).\n\nThe files can either be installed permanently by placing them in the appropriate directory (consult the documentation for your \\LaTeX\\ distribution), or used temporarily by placing them in the working directory for your paper.\n\nTo use the MNRAS package, simply specify \\verb'mnras' as the document class at the start of a \\verb'.tex' file:\n\n\\begin{verbatim}\n\\documentclass{mnras}\n\\end{verbatim}\nThen compile \\LaTeX\\ (and if necessary \\bibtex) in the usual way.\n\n\\section{Preparing and submitting a paper}\nWe recommend that you start with a copy of the \\texttt{mnras\\_template.tex} file.\nRename the file, update the information on the title page, and then work on the text of your paper.\nGuidelines for content, style etc. are given in the instructions to authors on the journal's website$^{\\ref{foot:itas}}$.\nNote that this document does not follow all the aspects of MNRAS journal style (e.g. 
it has a table of contents).\n\nIf a paper is accepted, it is professionally typeset and copyedited by the publishers.\nIt is therefore likely that minor changes to presentation will occur.\nFor this reason, we ask authors to ignore minor details such as slightly long lines, extra blank spaces, or misplaced figures, because these details will be dealt with during the production process.\n\nPapers must be submitted electronically via the online submission system; paper submissions are not permitted.\nFor full guidance on how to submit a paper, see the instructions to authors.\n\n\\section{Class options}\n\\label{sec:options}\nThere are several options which can be added to the document class line like this:\n\n\\begin{verbatim}\n\\documentclass[option1,option2]{mnras}\n\\end{verbatim}\nThe available options are:\n\\begin{itemize}\n\\item \\verb'letters' -- used for papers in the journal's Letters section.\n\\item \\verb'onecolumn' -- single column, instead of the default two columns. This should be used {\\it only} if necessary for the display of numerous very long equations.\n\\item \\verb'doublespacing' -- text has double line spacing. Please don't submit papers in this format.\n\\item \\verb'referee' -- \\textit{(deprecated)} single column, double spaced, larger text, bigger margins. Please don't submit papers in this format.\n\\item \\verb'galley' -- \\textit{(deprecated)} no running headers, no attempt to align the bottom of columns.\n\\item \\verb'landscape' -- \\textit{(deprecated)} sets the whole document on landscape paper.\n\\item \\verb\"usenatbib\" -- \\textit{(all papers should use this)} this uses Patrick Daly's \\verb\"natbib.sty\" package for citations.\n\\item \\verb\"usegraphicx\" -- \\textit{(most papers will need this)} includes the \\verb'graphicx' package, for inclusion of figures and images.\n\\item \\verb'useAMS' -- adds support for upright Greek characters \\verb'\\upi', \\verb'\\umu' and \\verb'\\upartial' ($\\upi$, $\\umu$ and $\\upartial$). Only these three are included, if you require other symbols you will need to include the \\verb'amsmath' or \\verb'amsymb' packages (see section~\\ref{sec:packages}).\n\\item \\verb\"usedcolumn\" -- includes the package \\verb\"dcolumn\", which includes two new types of column alignment for use in tables.\n\\end{itemize}\n\nSome of these options are deprecated and retained for backwards compatibility only.\nOthers are used in almost all papers, but again are retained as options to ensure that papers written decades ago will continue to compile without problems.\nIf you want to include any other packages, see section~\\ref{sec:packages}.\n\n\\section{Title page}\n\nIf you are using \\texttt{mnras\\_template.tex} the necessary code for generating the title page, headers and footers is already present.\nSimply edit the title, author list, institutions, abstract and keywords as described below.\n\n\\subsection{Title}\nThere are two forms of the title: the full version used on the first page, and a short version which is used in the header of other odd-numbered pages (the `running head').\nEnter them with \\verb'\\title[]{}' like this:\n\\begin{verbatim}\n\\title[Running head]{Full title of the paper}\n\\end{verbatim}\nThe full title can be multiple lines (use \\verb'\\\\' to start a new line) and may be as long as necessary, although we encourage authors to use concise titles. 
The running head must be $\\le~45$ characters on a single line.\n\nSee appendix~\\ref{sec:advanced} for more complicated examples.\n\n\\subsection{Authors and institutions}\n\nLike the title, there are two forms of author list: the full version which appears on the title page, and a short form which appears in the header of the even-numbered pages. Enter them using the \\verb'\\author[]{}' command.\n\nIf the author list is more than one line long, start a new line using \\verb'\\newauthor'. Use \\verb'\\\\' to start the institution list. Affiliations for each author should be indicated with a superscript number, and correspond to the list of institutions below the author list.\n\nFor example, if I were to write a paper with two coauthors at another institution, one of whom also works at a third location:\n\\begin{verbatim}\n\\author[K. T. Smith et al.]{\nKeith T. Smith,$^{1}$\nA. N. Other,$^{2}$\nand Third Author$^{2,3}$\n\\\\\n$^{1}$Affiliation 1\\\\\n$^{2}$Affiliation 2\\\\\n$^{3}$Affiliation 3}\n\\end{verbatim}\nAffiliations should be in the format `Department, Institution, Street Address, City and Postal Code, Country'.\n\nEmail addresses can be inserted with the \\verb'\\thanks{}' command which adds a title page footnote.\nIf you want to list more than one email, put them all in the same \\verb'\\thanks' and use \\verb'\\footnotemark[]' to refer to the same footnote multiple times.\nPresent addresses (if different to those where the work was performed) can also be added with a \\verb'\\thanks' command.\n\n\\subsection{Abstract and keywords}\n\nThe abstract is entered in an \\verb'abstract' environment:\n\\begin{verbatim}\n\\begin{abstract}\nThe abstract of the paper.\n\\end{abstract}\n\\end{verbatim}\n\\noindent Note that there is a word limit on the length of abstracts.\nFor the current word limit, see the journal instructions to authors$^{\\ref{foot:itas}}$.\n\nImmediately following the abstract, a set of keywords is entered in a \\verb'keywords' environment:\n\\begin{verbatim}\n\\begin{keywords}\nkeyword 1 -- keyword 2 -- keyword 3\n\\end{keywords}\n\\end{verbatim}\n\\noindent There is a list of permitted keywords, which is agreed between all the major astronomy journals and revised every few years.\nDo \\emph{not} make up new keywords!\nFor the current list of allowed keywords, see the journal's instructions to authors$^{\\ref{foot:itas}}$.\n\n\\section{Sections and lists}\n\nSections and lists are generally the same as in the standard \\LaTeX\\ classes.\n\n\\subsection{Sections}\n\\label{sec:sections}\nSections are entered in the usual way, using \\verb'\\section{}' and its variants. It is possible to nest up to four section levels:\n\\begin{verbatim}\n\\section{Main section}\n \\subsection{Subsection}\n \\subsubsection{Subsubsection}\n \\paragraph{Lowest level section}\n\\end{verbatim}\n\\noindent The other \\LaTeX\\ sectioning commands \\verb'\\part', \\verb'\\chapter' and \\verb'\\subparagraph{}' are deprecated and should not be used.\n\nSome sections are not numbered as part of journal style (e.g. 
the Acknowledgements).\nTo insert an unnumbered section use the `starred' version of the command: \\verb'\\section*{}'.\n\nSee appendix~\\ref{sec:advanced} for more complicated examples.\n\n\\subsection{Lists}\n\nTwo forms of lists can be used in MNRAS -- numbered and unnumbered.\n\nFor a numbered list, use the \\verb'enumerate' environment:\n\\begin{verbatim}\n\\begin{enumerate}\n \\item First item\n \\item Second item\n \\item etc.\n\\end{enumerate}\n\\end{verbatim}\n\\noindent which produces\n\\begin{enumerate}\n \\item First item\n \\item Second item\n \\item etc.\n\\end{enumerate}\nNote that the list uses lowercase Roman numerals, rather than the \\LaTeX\\ default Arabic numerals.\n\nFor an unnumbered list, use the \\verb'description' environment without the optional argument:\n\\begin{verbatim}\n\\begin{description}\n \\item First item\n \\item Second item\n \\item etc.\n\\end{description}\n\\end{verbatim}\n\\noindent which produces\n\\begin{description}\n \\item First item\n \\item Second item\n \\item etc.\n\\end{description}\n\nBulleted lists using the \\verb'itemize' environment should not be used in MNRAS; it is retained for backwards compatibility only.\n\n\\section{Mathematics and symbols}\n\nThe MNRAS class mostly adopts standard \\LaTeX\\ handling of mathematics, which is briefly summarised here.\nSee also section~\\ref{sec:packages} for packages that support more advanced mathematics.\n\nMathematics can be inserted into the running text using the syntax \\verb'$1+1=2$', which produces $1+1=2$.\nUse this only for short expressions or when referring to mathematical quantities; equations should be entered as described below.\n\n\\subsection{Equations}\nEquations should be entered using the \\verb'equation' environment, which automatically numbers them:\n\n\\begin{verbatim}\n\\begin{equation}\n a^2=b^2+c^2\n\\end{equation}\n\\end{verbatim}\n\\noindent which produces\n\\begin{equation}\n a^2=b^2+c^2\n\\end{equation}\n\nBy default, the equations are numbered sequentially throughout the whole paper. If a paper has a large number of equations, it may be better to number them by section (2.1, 2.2 etc.). To do this, add the command \\verb'\\numberwithin{equation}{section}' to the preamble.\n\nIt is also possible to produce un-numbered equations by using the \\LaTeX\\ built-in \\verb'\\['\\textellipsis\\verb'\\]' and \\verb'$$'\\textellipsis\\verb'$$' commands; however MNRAS requires that all equations are numbered, so these commands should be avoided.\n\n\\subsection{Special symbols}\n\n\n\\begin{table}\n \\caption{Additional commands for special symbols commonly used in astronomy. 
These can be used anywhere.}\n \\label{tab:anysymbols}\n \\begin{tabular}{lll}\n \\hline\n Command & Output & Meaning\\\\\n \\hline\n \\verb'\\sun' & \\sun & Sun, solar\\\\[2pt]\n \\verb'\\earth' & \\earth & Earth, terrestrial\\\\[2pt]\n \\verb'\\micron' & \\micron & microns\\\\[2pt]\n \\verb'\\degr' & \\degr & degrees\\\\[2pt]\n \\verb'\\arcmin' & \\arcmin & arcminutes\\\\[2pt]\n \\verb'\\arcsec' & \\arcsec & arcseconds\\\\[2pt]\n \\verb'\\fdg' & \\fdg & fraction of a degree\\\\[2pt]\n \\verb'\\farcm' & \\farcm & fraction of an arcminute\\\\[2pt]\n \\verb'\\farcs' & \\farcs & fraction of an arcsecond\\\\[2pt]\n \\verb'\\fd' & \\fd & fraction of a day\\\\[2pt]\n \\verb'\\fh' & \\fh & fraction of an hour\\\\[2pt]\n \\verb'\\fm' & \\fm & fraction of a minute\\\\[2pt]\n \\verb'\\fs' & \\fs & fraction of a second\\\\[2pt]\n \\verb'\\fp' & \\fp & fraction of a period\\\\[2pt]\n \\verb'\\diameter' & \\diameter & diameter\\\\[2pt]\n \\verb'\\sq' & \\sq & square, Q.E.D.\\\\[2pt]\n \\hline\n \\end{tabular}\n\\end{table}\n\n\\begin{table}\n \\caption{Additional commands for mathematical symbols. These can only be used in maths mode.}\n \\label{tab:mathssymbols}\n \\begin{tabular}{lll}\n \\hline\n Command & Output & Meaning\\\\\n \\hline\n \\verb'\\upi' & $\\upi$ & upright pi\\\\[2pt]\n \\verb'\\umu' & $\\umu$ & upright mu\\\\[2pt]\n \\verb'\\upartial' & $\\upartial$ & upright partial derivative\\\\[2pt]\n \\verb'\\lid' & $\\lid$ & less than or equal to\\\\[2pt]\n \\verb'\\gid' & $\\gid$ & greater than or equal to\\\\[2pt]\n \\verb'\\la' & $\\la$ & less than of order\\\\[2pt]\n \\verb'\\ga' & $\\ga$ & greater than of order\\\\[2pt]\n \\verb'\\loa' & $\\loa$ & less than approximately\\\\[2pt]\n \\verb'\\goa' & $\\goa$ & greater than approximately\\\\[2pt]\n \\verb'\\cor' & $\\cor$ & corresponds to\\\\[2pt]\n \\verb'\\sol' & $\\sol$ & similar to or less than\\\\[2pt]\n \\verb'\\sog' & $\\sog$ & similar to or greater than\\\\[2pt]\n \\verb'\\lse' & $\\lse$ & less than or homotopic to \\\\[2pt]\n \\verb'\\gse' & $\\gse$ & greater than or homotopic to\\\\[2pt]\n \\verb'\\getsto' & $\\getsto$ & from over to\\\\[2pt]\n \\verb'\\grole' & $\\grole$ & greater over less\\\\[2pt]\n \\verb'\\leogr' & $\\leogr$ & less over greater\\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\nSome additional symbols of common use in astronomy have been added in the MNRAS class. These are shown in tables~\\ref{tab:anysymbols}--\\ref{tab:mathssymbols}. The command names are -- as far as possible -- the same as those used in other major astronomy journals.\n\nMany other mathematical symbols are also available, either built into \\LaTeX\\ or via additional packages. If you want to insert a specific symbol but don't know the \\LaTeX\\ command, we recommend using the Detexify website\\footnote{\\url{http:\/\/detexify.kirelabs.org}}.\n\nSometimes font or coding limitations mean a symbol may not get smaller when used in sub- or superscripts, and will therefore be displayed at the wrong size. There is no need to worry about this as it will be corrected by the typesetter during production.\n\nTo produce bold symbols in mathematics, use \\verb'\\bmath' for simple variables, and the \\verb'bm' package for more complex symbols (see section~\\ref{sec:packages}). Vectors are set in bold italic, using \\verb'\\mathbfit{}'.\n\nFor matrices, use \\verb'\\mathbfss{}' to produce a bold sans-serif font e.g. \\mathbfss{H}; this works even outside maths mode, but not all symbols are available (e.g. Greek). 
For $\\nabla$ (del, used in gradients, divergence etc.) use \\verb'$\\nabla$'.\n\n\\subsection{Ions}\n\nA new \\verb'\\ion{}{}' command has been added to the class file, for the correct typesetting of ionisation states.\nFor example, to typeset singly ionised calcium use \\verb'\\ion{Ca}{ii}', which produces \\ion{Ca}{ii}.\n\n\\section{Figures and tables}\n\\label{sec:fig_table}\nFigures and tables (collectively called `floats') are mostly the same as built into \\LaTeX.\n\n\\subsection{Basic examples}\n\\begin{figure}\n \\includegraphics[width=\\columnwidth]{example}\n \\caption{An example figure.}\n \\label{fig:example}\n\\end{figure}\nFigures are inserted in the usual way using a \\verb'figure' environment and \\verb'\\includegraphics'. The example Figure~\\ref{fig:example} was generated using the code:\n\\begin{verbatim}\n\\begin{figure}\n \\includegraphics[width=\\columnwidth]{example}\n \\caption{An example figure.}\n \\label{fig:example}\n\\end{figure}\n\\end{verbatim}\n\n\\begin{table}\n \\caption{An example table.}\n \\label{tab:example}\n \\begin{tabular}{lcc}\n \\hline\n Star & Mass & Luminosity\\\\\n & $M_{\\sun}$ & $L_{\\sun}$\\\\\n \\hline\n Sun & 1.00 & 1.00\\\\\n $\\alpha$~Cen~A & 1.10 & 1.52\\\\\n $\\epsilon$~Eri & 0.82 & 0.34\\\\\n \\hline\n \\end{tabular}\n\\end{table}\nThe example Table~\\ref{tab:example} was generated using the code:\n\\begin{verbatim}\n\\begin{table}\n \\caption{An example table.}\n \\label{tab:example}\n \\begin{tabular}{lcc}\n \\hline\n Star & Mass & Luminosity\\\\\n & $M_{\\sun}$ & $L_{\\sun}$\\\\\n \\hline\n Sun & 1.00 & 1.00\\\\\n $\\alpha$~Cen~A & 1.10 & 1.52\\\\\n $\\epsilon$~Eri & 0.82 & 0.34\\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\\end{verbatim}\n\n\\subsection{Captions and placement}\nCaptions go \\emph{above} tables but \\emph{below} figures, as in the examples above.\n\nThe \\LaTeX\\ float placement commands \\verb'[htbp]' are intentionally disabled.\nLayout of figures and tables will be adjusted by the publisher during the production process, so authors should not concern themselves with placement to avoid disappointment and wasted effort.\nSimply place the \\LaTeX\\ code close to where the figure or table is first mentioned in the text and leave exact placement to the publishers.\n\nBy default a figure or table will occupy one column of the page.\nTo produce a wider version which covers both columns, use the \\verb'figure*' or \\verb'table*' environment.\n\nIf a figure or table is too long to fit on a single page it can be split it into several parts.\nCreate an additional figure or table which uses \\verb'\\contcaption{}' instead of \\verb'\\caption{}'.\nThis will automatically correct the numbering and add `\\emph{continued}' at the start of the caption.\n\\begin{table}\n \\contcaption{A table continued from the previous one.}\n \\label{tab:continued}\n \\begin{tabular}{lcc}\n \\hline\n Star & Mass & Luminosity\\\\\n & $M_{\\sun}$ & $L_{\\sun}$\\\\\n \\hline\n $\\tau$~Cet & 0.78 & 0.52\\\\\n $\\delta$~Pav & 0.99 & 1.22\\\\\n $\\sigma$~Dra & 0.87 & 0.43\\\\\n \\hline\n \\end{tabular}\n\\end{table}\nTable~\\ref{tab:continued} was generated using the code:\n\n\\begin{verbatim}\n\\begin{table}\n \\contcaption{A table continued from the previous one.}\n \\label{tab:continued}\n \\begin{tabular}{lcc}\n \\hline\n Star & Mass & Luminosity\\\\\n & $M_{\\sun}$ & $L_{\\sun}$\\\\\n \\hline\n $\\tau$~Cet & 0.78 & 0.52\\\\\n $\\delta$~Pav & 0.99 & 1.22\\\\\n $\\sigma$~Dra & 0.87 & 0.43\\\\\n \\hline\n 
\\end{tabular}\n\\end{table}\n\\end{verbatim}\n\nTo produce a landscape figure or table, use the \\verb'pdflscape' package and the \\verb'landscape' environment.\nThe landscape Table~\\ref{tab:landscape} was produced using the code:\n\\begin{verbatim}\n\\begin{landscape}\n \\begin{table}\n \\caption{An example landscape table.}\n \\label{tab:landscape}\n \\begin{tabular}{cccccccccc}\n \\hline\n Header & Header & ...\\\\\n Unit & Unit & ...\\\\\n \\hline\n Data & Data & ...\\\\\n Data & Data & ...\\\\\n ...\\\\\n \\hline\n \\end{tabular}\n \\end{table}\n\\end{landscape}\n\\end{verbatim}\nUnfortunately this method will force a page break before the table appears.\nMore complicated solutions are possible, but authors shouldn't worry about this.\n\n\\begin{landscape}\n \\begin{table}\n \\caption{An example landscape table.}\n \\label{tab:landscape}\n \\begin{tabular}{cccccccccc}\n \\hline\n Header & Header & Header & Header & Header & Header & Header & Header & Header & Header\\\\\n Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit \\\\\n \\hline\n Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\\\\n Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\\\\n Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\\\\n Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\\\\n Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\\\\n Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\\\\n Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\\\\n Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\\\\n \\hline\n \\end{tabular}\n \\end{table}\n\\end{landscape}\n\n\\section{References and citations}\n\n\\subsection{Cross-referencing}\n\nThe usual \\LaTeX\\ commands \\verb'\\label{}' and \\verb'\\ref{}' can be used for cross-referencing within the same paper.\nWe recommend that you use these whenever relevant, rather than writing out the section or figure numbers explicitly.\nThis ensures that cross-references are updated whenever the numbering changes (e.g. during revision) and provides clickable links (if available in your compiler).\n\nIt is best to give each section, figure and table a logical label.\nFor example, Table~\\ref{tab:mathssymbols} has the label \\verb'tab:mathssymbols', whilst section~\\ref{sec:packages} has the label \\verb'sec:packages'.\nAdd the label \\emph{after} the section or caption command, as in the examples in sections~\\ref{sec:sections} and \\ref{sec:fig_table}.\nEnter the cross-reference with a non-breaking space between the type of object and the number, like this: \\verb'see Figure~\\ref{fig:example}'.\n\nThe \\verb'\\autoref{}' command can be used to automatically fill out the type of object, saving on typing.\nIt also causes the link to cover the whole phrase rather than just the number, but for that reason is only suitable for single cross-references rather than ranges.\nFor example, \\verb'\\autoref{tab:journal_abbr}' produces \\autoref{tab:journal_abbr}.\n\n\\subsection{Citations}\n\\label{sec:cite}\n\nMNRAS uses the Harvard -- author (year) -- citation style, e.g. 
\\citet{author2013}.\nThis is implemented in \\LaTeX\\ via the \\verb'natbib' package, which in turn is included via the \\verb'usenatbib' package option (see section~\\ref{sec:options}), which should be used in all papers.\n\nEach entry in the reference list has a `key' (see section~\\ref{sec:ref_list}) which is used to generate citations.\nThere are two basic \\verb'natbib' commands:\n\\begin{description}\n \\item \\verb'\\citet{key}' produces an in-text citation: \\citet{author2013}\n \\item \\verb'\\citep{key}' produces a bracketed (parenthetical) citation: \\citep{author2013}\n\\end{description}\nCitations will include clickable links to the relevant entry in the reference list, if supported by your \\LaTeX\\ compiler.\n\n\\defcitealias{smith2014}{Paper~I}\n\\begin{table*}\n \\caption{Common citation commands, provided by the \\texttt{natbib} package.}\n \\label{tab:natbib}\n \\begin{tabular}{lll}\n \\hline\n Command & Output & Note\\\\\n \\hline\n \\verb'\\citet{key}' & \\citet{smith2014} & \\\\\n \\verb'\\citep{key}' & \\citep{smith2014} & \\\\\n \\verb'\\citep{key,key2}' & \\citep{smith2014,jones2015} & Multiple papers\\\\\n \\verb'\\citet[table 4]{key}' & \\citet[table 4]{smith2014} & \\\\\n \\verb'\\citep[see][figure 7]{key}' & \\citep[see][figure 7]{smith2014} & \\\\\n \\verb'\\citealt{key}' & \\citealt{smith2014} & For use with manual brackets\\\\\n \\verb'\\citeauthor{key}' & \\citeauthor{smith2014} & If already cited in close proximity\\\\\n \\verb'\\defcitealias{key}{Paper~I}' & & Define an alias (doesn't work in floats)\\\\\n \\verb'\\citetalias{key}' & \\citetalias{smith2014} & \\\\\n \\verb'\\citepalias{key}' & \\citepalias{smith2014} & \\\\\n \\hline\n \\end{tabular}\n\\end{table*}\n\nThere are a number of other \\verb'natbib' commands which can be used for more complicated citations.\nThe most commonly used ones are listed in Table~\\ref{tab:natbib}.\nFor full guidance on their use, consult the \\verb'natbib' documentation\\footnote{\\url{http:\/\/www.ctan.org\/pkg\/natbib}}.\n\nIf a reference has several authors, \\verb'natbib' will automatically use `et al.' if there are more than two authors. However, if a paper has exactly three authors, MNRAS style is to list all three on the first citation and use `et al.' thereafter. If you are using \\bibtex\\ (see section~\\ref{sec:ref_list}) then this is handled automatically. If not, the \\verb'\\citet*{}' and \\verb'\\citep*{}' commands can be used at the first citation to include all of the authors.\n\n\\subsection{The list of references}\n\\label{sec:ref_list}\n\nIt is possible to enter references manually using the usual \\LaTeX\\ commands, but we strongly encourage authors to use \\bibtex\\ instead.\n\\bibtex\\ ensures that the reference list is updated automatically as references are added or removed from the paper, puts them in the correct format, saves on typing, and the same reference file can be used for many different papers -- saving time hunting down reference details.\nAn MNRAS \\bibtex\\ style file, \\verb'mnras.bst', is distributed as part of this package.\nThe rest of this section will assume you are using \\bibtex.\n\nReferences are entered into a separate \\verb'.bib' file in standard \\bibtex\\ formatting.\nThis can be done manually, or there are several software packages which make editing the \\verb'.bib' file much easier.\nWe particularly recommend \\textsc{JabRef}\\footnote{\\url{http:\/\/jabref.sourceforge.net\/}}, which works on all major operating systems.\n\\bibtex\\ entries can be obtained from the NASA Astrophysics Data System\\footnote{\\label{foot:ads}\\url{http:\/\/adsabs.harvard.edu}} (ADS) by clicking on `Bibtex entry for this abstract' on any entry.\nSimply copy this into your \\verb'.bib' file or into the `BibTeX source' tab in \\textsc{JabRef}.
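\nFor illustration, a complete (and entirely hypothetical) journal-article entry for the key \\verb'smith2014' used in the examples above might look like this:\n\\begin{verbatim}\n@ARTICLE{smith2014,\n  author  = {Smith, K. T. and Jones, A. N.},\n  title   = {An example paper},\n  journal = {MNRAS},\n  year    = 2014,\n  volume  = 431,\n  pages   = {1234-1245},\n}\n\\end{verbatim}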
\n\nEach entry in the \\verb'.bib' file must specify a unique `key' to identify the paper, the format of which is up to the author.\nSimply cite it in the usual way, as described in section~\\ref{sec:cite}, using the specified key.\nCompile the paper as usual, but add an extra step to run the \\texttt{bibtex} command.\nConsult the documentation for your compiler or latex distribution.\n\nCorrect formatting of the reference list will be handled by \\bibtex\\ in almost all cases, provided that the correct information was entered into the \\verb'.bib' file.\nNote that ADS entries are not always correct, particularly for older papers and conference proceedings, so may need to be edited.\nIf in doubt, or if you are producing the reference list manually, see the MNRAS instructions to authors$^{\\ref{foot:itas}}$ for the current guidelines on how to format the list of references.\n\n\\section{Appendices and online material}\n\nTo start an appendix, simply place the \\verb'\\appendix' command before the next section heading.\n\n\n\\section{Introduction}\n\nExoplanet surveys and demographic studies have revealed a previously unknown class of planets. These close-in, super-earth\/sub-neptunes have radii in the range 1-4~R$_\\oplus$ \\citep[e.g.][]{Bourucki2011,Thompson2018} and masses $\\lesssim 20~$M$_\\oplus$ \\citep[e.g.][]{Mayor2011,Wu2013,Marcy2014,Weiss2014,Hadden2014,JontofHutter2016}. With orbital periods of less than a few hundred days, these planets have been shown to be incredibly common, with most sun-like and later-type stars hosting at least one, if not many \\citep[e.g.][]{Fressin2013,Silburt2015,Mulders2018,Zink2019,Hsu2019}. \n\nCombined mass and radius measurements for individual planets revealed a vast spread in densities, from as high as $\\sim 10$~g~cm$^{-3}$ to as low as $\\sim 0.1~$g~cm$^{-3}$. The former is consistent with rocky bodies with Earth-like (iron\/silicate) compositions \\citep[e.g.][]{Dressing2015,Dorn2019} and the latter with solid cores surrounded by larger H\/He atmospheres \\citep[e.g.][]{JontofHutter2016}. 
At intermediate densities a plethora of compositions is possible, including some combination of iron, silicates, water and H\/He \\citep[e.g.][]{Rogers2010,Zeng2019}. \n\nH\/He envelopes on low-mass planets close to their host stars are vulnerable to mass-loss \\citep[e.g.][]{Lammer2003,Baraffe2005,MurrayClay2009,owen12}. Evolutionary models of solid cores surrounded by H\/He envelopes suggested that mass-loss would sculpt the planet population into two classes: those which completely lose their H\/He envelope, leaving behind a ``stripped core'', and those planets that retain about 1~\\% by mass in their H\/He envelope \\citep[e.g.][]{Owen2013,Lopez2013,Jin2014}. Early indications of such a dichotomy were found in planetary density measurements \\citep[e.g.][]{Weiss2014,Rogers2015}. However, it was not until accurately measured stellar properties allowed precise planetary radii to be determined that a bi-modal radius distribution was revealed \\citep[e.g.][]{Fulton2017,Fulton2018,VanEylen2018}. \n\nIncorporating these evolutionary clues into the compositional determination suggests most planets larger than about 1.8~R$_\\oplus$ consist of an Earth-like core surrounded by a H\/He envelope, where this envelope contains a few percent of the planet's mass \\citep{Wolfgang2015}. Further, mass-loss models suggest that the vast majority of those planets that do not possess H\/He envelopes today formed with one which they then lost \\citep[e.g.][]{Owen2017,Wu2019,Rogers2020}. \n\nThe majority of exoplanet host stars are billions of years old. Recent work by \\citet{Berger2020} has shown that the median age of the {\\it Kepler} planet host stars is $\\sim 4$~Gyr and only $\\sim 3$\\%\nof the {\\it Kepler} host stars are $<1$~Gyr old. Therefore, most of our knowledge about the demographics of close-in exoplanets is restricted to old stars. The fact that many of these planets possess H\/He envelopes necessitates that they formed before gas-disc dispersal. These gas-discs have lifetimes of $1-10$~Myr \\citep[e.g.][]{haisch01,Mamajek2009}. { Thus, most exoplanet host-stars are significantly older than the planetary formation timescale ($\\lesssim 10$~Myr). }\n\nCompositional uncertainty and lack of knowledge of the exoplanet demographics have led to considerable discussion and debate about how these planets form. In many cases the size of the H\/He envelopes implies they must have been accreted from a protoplanetary disc. The idea of accretion of a H\/He envelope by a solid core fits within the general framework of core-accretion \\citep[e.g.][]{Rafikov2006,Ikoma2006}. In the core-accretion model, once the planet is massive enough that its Bondi radius resides outside its physical radius (crudely at a few tenths of a lunar mass, e.g. \\citealt{Massol2016}), it can gravitationally bind nebula gas. As the planet's solid mass continues to grow it can bind ever larger quantities of gas; { eventually, at core masses of around $\\sim$0.5~M$_\\oplus$, the H\/He envelope's cooling time becomes comparable to the disc's lifetime \\citep[e.g.][]{Lee2015}.} Thus, beyond solid core masses of $\\sim$ 0.5~M$_\\oplus$\\footnote{Obviously this precise value is uncertain and depends on details of the opacity, e.g. \\citealt{Venturini2016,LeeConnors2020}.} accretion of a H\/He envelope is dependent on how fast the gas currently in the planet's envelope evolves thermally: heat the envelope gas via solid accretion, and gas flows from the envelope to the disc; let it cool and contract, and high-entropy nebula gas flows into the envelope.
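\n\nTo make the Bondi-radius criterion concrete, the following sketch (our own illustration, with indicative parameter choices: an isothermal nebula at $300$~K, a mean molecular weight of $2.35$ and a constant rocky density of $3300$~kg~m$^{-3}$) compares the Bondi radius $GM\/c_s^2$ to the physical radius of the core:\n\\begin{verbatim}\nimport numpy as np\n\nG, k_B, m_H = 6.674e-11, 1.381e-23, 1.673e-27  # SI\nmu, T = 2.35, 300.0    # nebula gas, indicative temperature\nrho = 3300.0           # indicative rocky density [kg/m^3]\nM_moon = 7.35e22       # lunar mass [kg]\n\ndef bondi_radius(M):\n    cs2 = k_B * T / (mu * m_H)   # isothermal sound speed^2\n    return G * M / cs2\n\ndef core_radius(M):\n    return (3.0 * M / (4.0 * np.pi * rho)) ** (1.0 / 3.0)\n\n# gas can be bound roughly where this ratio exceeds unity\nfor f in [0.1, 0.3, 1.0]:   # core mass in lunar masses\n    M = f * M_moon\n    print(f, bondi_radius(M) / core_radius(M))\n\\end{verbatim}\nWith these parameters the ratio passes unity at a few tenths of a lunar mass, consistent with the crude estimate quoted above; the exact threshold depends on the assumed disc temperature.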
\n\nThe expected thermal envelope structure typically consists of a convective zone surrounding the core, where heat generated by solid accretion or gravitational contraction of the atmosphere is rapidly convected outwards. { Eventually, as the temperature and density drop, a radiative zone forms}, which connects the lower-entropy interior to the higher-entropy disc \\citep[e.g.][]{Rafikov2006}. Since opacity typically increases with pressure and temperature, it is the radiative-convective boundary that controls the thermal evolution of the envelope, and hence the rate of accretion. This makes gas accretion fairly insensitive to the nebula's properties (e.g. density and temperature). \n\nDespite the basic success of the core-accretion model in explaining that low-mass planets can acquire a H\/He envelope, it has failed to quantitatively explain the properties of observed sub-Neptunes \\citep[e.g.][]{Lee2014,Ogihara2020,Alessi2020}. In fact, the issue is not that they have accreted H\/He atmospheres; it is that they have accreted only a few percent by mass, even after mass-loss processes like photoevaporation \\citep{Jankovic2019,Ogihara2020} and\/or core-powered mass-loss \\citep[e.g.][]{Ginzburg2018} have been taken into account. Specifically, \\citet{Rogers2020} has recently shown standard core-accretion models over-predict the H\/He envelope mass by a factor of $\\sim$5 at the end of disc dispersal.\n\nThese problems have led to several proposed solutions: either slowing the accretion of H\/He \\citep[e.g.][]{Ormel2015,Lee2016,Ginzburg2017,Chen2020}, typically by increasing the entropy of the interior, or including extra, rapid mass-loss, most notably {\\it during} protoplanetary disc dispersal, decreasing the entropy of the interior \\citep[e.g.][]{OW2016,Ginzburg2016,Fossati2017,Kubyshkina2018_young} (see the discussion for a detailed description of these models). More fundamentally, these proposed solutions all modify the thermodynamic evolution of the H\/He envelope. Without any intervention one would expect the planet to have an initial cooling timescale (or Kelvin-Helmholtz contraction timescale) comparable to the time over which it has been forming, i.e. a few million years. Thus, in the first solution, where accretion is slowed by increasing the entropy of the interior, the envelope's cooling time becomes shorter (probably less than a few million years). In the second solution, where rapid mass-loss during disc dispersal decreases the { planet's radius and entropy}, planets end up with much longer cooling times, closer to $\\sim 100$~Myr \\citep{OW2016}. This mechanism is known as ``boil-off''. \n\nAs planets age they cool; by the time they reach a Gyr old they have gone through many { initial} cooling times and have completely forgotten their initial thermodynamic properties. Therefore, our population of old planets is generally unable to tell us about the thermodynamic properties of forming or recently formed planets. \n\nWith the advent of wider searches for transiting exoplanets (e.g. K2 \\citealt{K2Mission}, TESS \\citealt{TESSmission} \\& NGTS \\citealt{NGTS}) discovering young ($\\lesssim 100$~Myr) close-in planets has become possible. 
Notable recent examples are the four planets in the V1298 Tau system at an age of $\sim$24~Myr \citep{David2019}, a $\sim 45$~Myr old planet around DS Tuc A \citep{Newton2019}, a $\sim 23$~Myr old planet around AU Mic \citep{Plavchan2020}, as well as other recent results from the {\sc thyme} project \citep[e.g.][]{THYMEII}.

In this work, we show how the combination of a mass and radius measurement for young planets can be used to place a constraint on their initial entropy. In Section~2 we demonstrate, using simple models, how young planets with measured masses and radii can be used to constrain their initial entropies. In Section~3 we use numerical planetary evolution models to explore this further, investigating real systems in Section~4. In Section~5 we discuss the implications of our work and we summarise in Section~6.

\section{A sketch of the idea}
Present-day sub-Neptunes with voluminous H/He atmospheres cool over time, contracting under the release of heat left over from formation (both in their atmospheres and solid cores). Evolutionary tracks \citep[e.g.][]{Baraffe2005,Lopez2014} predict these planets were substantially larger when younger, even when mass-loss is not factored in. In Figure~\ref{fig:basic_evol} we show tracks from planetary evolution models for planets that evolve into typical sub-Neptunes after billions of years of evolution. These planets have radii in the range 4-15~R$_\oplus$ at ages $\lesssim 100$~Myr.

\begin{figure}
 \centering
 \includegraphics[width=\columnwidth]{simple_evolve_2.pdf}
 \caption{The radius (top) and total mass (bottom) evolution of low-mass, close-in exoplanets with H/He atmospheres. The solid and dashed lines show planets whose envelopes have initial Kelvin-Helmholtz contraction timescales of 500 and 5~Myr respectively. The thick and thin lines show planets that have initial envelope fractions ($M_{\rm env}/M_{\rm c}$) of 0.1 and 0.04 respectively. The evolution calculations are computed using {\sc mesa} as described in Section~\ref{sec:mesa} and include thermal evolution and photoevaporation. The planet contains a core with a mass of $\sim 5$~M$_\oplus$ and is located at 0.1~AU around a sun-like star.}
 \label{fig:basic_evol}
\end{figure}

In fact the upper envelope of the radius evolution for present-day sub-Neptunes is expected to exceed $\gtrsim 1~$R$_{\rm Jup}$ at young ages. This is demonstrated in Figure~\ref{fig:basic_evol}, indicating that at the youngest ages these planets could be conflated with giant planets, as they have radii similar to present-day hot Jupiters. In fact, young hot Jupiters are expected to have radii in excess of $\sim$13~R$_\oplus$ \citep{Fortney2010}. This indicates that even those young planets that have measured radii $\sim 10$~R$_\oplus$ (e.g. V1298 Tau b, \citealt{David2019}, and HIP 67522b, \citealt{THYMEII}) are likely to be ``proto-sub-Neptunes'' rather than young, Jupiter-mass planets.

Unlike stars, planets have no method of preventing gravitational collapse (other than Coulomb or degeneracy pressure); therefore, the observed size of a planet depends on the thermodynamic state of its interior, or how much entropy it currently possesses. As the planet's envelope contracts, the interior entropy falls.
Eventually, the interior will either be supported by degeneracy pressure at high masses or, at low masses, by Coulomb pressure \citep{Seager2007}.

It is well known that with measurements of the mass and radius of a single-composition self-gravitating sphere (e.g. a pure H/He mixture), the internal thermodynamic state, and hence the interior structure, can be determined. However, both observational \citep{WL12,JontofHutter2016,Fulton2017,VanEylen2018,Benneke2019,Benneke2019b} and theoretical \citep{Owen2017,Wu2019,Rogers2020} evidence suggests present-day sub-Neptunes are not of a single composition, but are most likely composed of a solid core surrounded by a H/He envelope\footnote{Although alternative ideas do exist, e.g. \citet{Zeng2019}.}. As we show below, this makes the envelope mass and its internal thermodynamic state degenerate. Specifically, a planet with a known mass and radius can have more mass in its core and less in its envelope if the envelope is hotter (and therefore has a larger scale height), or vice-versa, as demonstrated by works where extra heating mechanisms are included \citep[e.g.][]{Pu2017,Millholland2020}. This is even before the degeneracy between the composition of the core and the envelope mass is taken into account.

\subsection{Degeneracy between internal entropy and envelope mass fraction}

The degeneracy between entropy and envelope mass fraction is trivial to identify. Given a planet composed of a core of known density surrounded by a H/He envelope, there are three parameters which specify its structure: core mass, envelope mass and internal entropy. Thus, with only measurements of a planet's mass and radius it is not possible to constrain these three structure parameters, and thus we cannot determine its internal entropy.

This can be seen explicitly if we build a simple model for the planet's internal structure. This simple model is based on the assumption that self-gravity can be neglected in the envelope\footnote{It can also be seen by considering ``loaded'' polytropes \citep{Huntley1975}, which include atmosphere self-gravity and smoothly tend to the mono-composition models as the envelope mass exceeds the core mass. However, such analysis is not necessary to elucidate the basic point.} \citep[e.g.][]{Rafikov2006,Piso2014,Lee2015,Ginzburg2016}. As derived by \citet{Owen2017} \& \citet{OCE2020}, the envelope mass, $M_{\rm env}$, surrounding a core of mass $M_c$ and radius $R_c$, with an equation-of-state relating pressure, $P$, to density, $\rho$, via $P\propto \rho^\gamma$, is given approximately by:
\begin{equation}
 M_{\rm env}\approx 4\pi R_{\rm rcb}^3\rho_{\rm rcb}\left(\nabla_{\rm ab}\frac{GM_c}{c_s^2 R_{\rm rcb}}\right)^{1/(\gamma-1)} I_2\left(R_c/R_{\rm rcb},\gamma\right) \label{eqn:Menv}
\end{equation}
where $R_{\rm rcb}$ is the radius of the radiative-convective boundary, $\nabla_{\rm ab}$ is the adiabatic gradient, $c_s$ the isothermal sound-speed at the radiative-convective boundary and $I_n$ is a dimensionless integral of the form:
\begin{equation}
 I_n\left(R_c/R_{\rm rcb},\gamma\right)=\int_{R_c/R_{\rm rcb}}^1 x^n\left(x^{-1}-1\right)^{1/(\gamma-1)}{\rm d}x
\end{equation}
Due to the extreme irradiation level most close-in planets receive, the radiative-convective boundary occurs at optical depths much higher than the photosphere.
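To make the dimensionless integral concrete, $I_n$ is straightforward to evaluate numerically. The minimal Python sketch below (the function name is ours and purely illustrative) computes $I_1$, $I_2$ and the ratio $I_2^2/I_1$, which appears in the analysis below, for representative values of $R_c/R_{\rm rcb}$ at $\gamma=5/3$:

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def I_n(n, rc_over_rrcb, gamma):
    # Dimensionless structure integral I_n(Rc/Rrcb, gamma).
    f = lambda x: x**n * (1.0/x - 1.0)**(1.0/(gamma - 1.0))
    value, _ = quad(f, rc_over_rrcb, 1.0)
    return value

for ratio in (0.1, 0.3, 0.5):
    I1 = I_n(1, ratio, 5.0/3.0)
    I2 = I_n(2, ratio, 5.0/3.0)
    print(ratio, I1, I2, I2**2 / I1)
\end{verbatim}

For $\gamma=5/3$ both integrals remain finite as the core shrinks relative to the radiative-convective boundary, approaching the constant limits discussed below.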
The radiative-convective boundary sets the point at which the energy released by gravitational contraction in the interior is no longer transported by convection, but rather by radiation. Hence the luminosity of the planet (and therefore its internal entropy) is related directly to the density at the radiative-convective boundary. Since luminosity and entropy are rather non-intuitive quantities and do not facilitate easy comparison across planet mass, age or formation models, we follow \citet{Owen2017} and choose the atmosphere's Kelvin-Helmholtz contraction timescale $\tau_{\rm KH}$ (or cooling timescale) as our quantity to describe the entropy and thermodynamic state of the planetary interior. We choose this parameterisation of entropy as this cooling timescale can be directly compared to quantities like the protoplanetary disc lifetime. This allows us to write the luminosity as:
\begin{equation}
 L\approx\frac{1}{\tau_{\rm KH}}\frac{GM_cM_{\rm env}}{R_{\rm rcb}}\frac{I_1}{I_2}
\end{equation}

For clarity, we restrict ourselves to a constant opacity envelope, with opacity $\kappa$ (while this is clearly incorrect, nothing we demonstrate below is invalidated by this). Thus, solving for the density at the radiative-convective boundary \citep[e.g.][]{Owen2017} we find:
\begin{equation}
 \rho_{\rm rcb}\approx\left(\frac{\mu}{k_b}\right)\left[\left(\frac{I_2}{I_1}\right)\frac{64\pi\sigma T^3 R_{\rm rcb} \tau_{\rm KH}}{3\kappa M_{\rm env}}\right] \label{eqn:rho_rcb}
\end{equation}
with $\mu$ the mean molecular weight, $k_b$ Boltzmann's constant, $\sigma$ the Stefan-Boltzmann constant and $T$ the equilibrium temperature of the planet. Finally, noting that the radiative layer is approximately isothermal (and therefore well described by an exponential density profile) and thus thin, we simply take $R_{\rm rcb}\approx R_p$. Combining Equations~\ref{eqn:Menv} and \ref{eqn:rho_rcb} we can derive a mass-radius relationship for our planets of the form:

\begin{equation}
 R_p^{(4\gamma - 5)/(\gamma -1)}M_p^{1/(\gamma -1)}\propto M_{\rm env}^2 \tau_{\rm KH}^{-1}\frac{I_1}{I_2^2} \label{eqn:mass_radius}
\end{equation}
where we have dropped unimportant variables. Equation~\ref{eqn:mass_radius} clearly demonstrates that even with a measurement of a planet's mass and radius, the Kelvin-Helmholtz contraction timescale cannot be determined. Now, in the limit $M_{\rm env}\rightarrow M_p$ this degeneracy disappears\footnote{However, one needs to include self-gravity in the analysis.}. For old planets this degeneracy is bypassed by the reasonable assumption that the planet has cooled sufficiently that its initial thermodynamic state has been forgotten and $\tau_{\rm KH}$ has simply tended towards the age of the planet ($T_{\rm age}$). However, for young planets such statements cannot be made; all we can safely say is $\tau_{\rm KH}\gtrsim T_{\rm age}$. Thus, at young ages, measurements of a planet's mass and radius are not sufficient to determine either its internal composition (fraction of mass in the atmosphere compared to the core) or its internal thermodynamic state.

The dependence on the core composition is encapsulated in the dimensionless integrals $I_1$ and $I_2$. In the limit $R_p/R_c \gg 1$, relevant for young planets, both integrals tend to a constant for $\gamma > 3/2$, and $I_2$ tends to a constant for $\gamma > 4/3$.
Inspection of our numerical models (Section~\ref{sec:mesa}) indicates that planetary interiors span the full range of possible limits, with $\gamma < 4/3$ close to the planetary cores when the interiors are high-entropy and $\gamma > 3/2$ closer to the planetary surface for low-entropy interiors. In order to assess whether core composition will affect our analysis, we calculate how the ratio $I_2^2/I_1$ varies with core composition at different values of $\gamma$; we do this for a 7~R$_\oplus$, 5~M$_\oplus$ planet. For an extreme variation in core composition, from 1/3 ice, 2/3 rock to 1/3 iron, 2/3 rock, we find a variation of 4\% for $\gamma=5/3$ and a factor of two for $\gamma=1.25$ (the lowest value found at any point in the numerical models). These variations are much smaller than the order-of-magnitude changes in the Kelvin-Helmholtz timescale we are investigating, especially when considering that detailed fits to the exoplanet data suggest the spread in core composition is narrow \citep[e.g.][]{Dorn2019,Rogers2020}. Specifically, for the spread in core composition inferred by \citet{Rogers2020}, the ratio $I_2^2/I_1$ varies by a maximum of 15\% for $\gamma=1.25$ for a 7~R$_\oplus$, 5~M$_\oplus$ planet. Thus, we consider our results to be robust to uncertainties in the core composition, which are certainly smaller than variations arising from the observational uncertainties on age, mass and radius\footnote{In fact, retaining the dimensionless integrals and an arbitrary choice of $\gamma$, we find that the constraint on $\tau_{\rm KH}$ in Equation~\ref{eqn:critera2} scales linearly with $I_2^2/I_1$, compared to much higher powers of mass, radius and age, indicating that for typical 10-20\% errors the variation of the dimensionless integrals with core composition will have a small effect on our constraint on the Kelvin-Helmholtz timescale.}. Given we have already adopted a constant opacity, we choose to adopt a constant value of $\gamma=5/3$ and ignore the variation of $I_2$ and $I_1$ for simplicity in the rest of the section, while noting that no choice of a single value of $\gamma$ is fully justifiable. We emphasise that this section is purely an illustrative demonstration of the method and these choices do not affect the general idea. In our numerical models in Section~\ref{sec:mesa} the appropriate equation-of-state and opacities are used.

\subsection{Leveraging mass-loss}

Fortunately, for close-in planets we do have a way of constraining the mass in a planet's envelope. The high irradiation levels experienced by young planets cause them to lose envelope mass over time \citep[e.g.][]{Baraffe2005,Lopez2013,Owen2013}. Specifically, given a planet with a known mass and radius, there is a minimum envelope mass it could have retained given its age. Make the envelope less massive and it could not have survived mass-loss until its current age. Therefore, the envelope mass-loss timescale, $t_{\dot{m}}$, must satisfy:
\begin{equation}
t_{\dot{m}}\equiv\frac{M_{\rm env}}{\dot{M}}\gtrsim T_{\rm age}\label{eqn:tmdot1}
\end{equation}
Since the mass-loss rate $\dot{M}$ depends only on planet mass and radius (for externally driven loss processes), or on planet mass, radius and Kelvin-Helmholtz contraction timescale (for internally driven loss processes such as core-powered mass-loss; \citealt{Ginzburg2018,Gupta2019}), the inequality in Equation~\ref{eqn:tmdot1} can be used to place a lower bound on the Kelvin-Helmholtz timescale of the planet.
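As a rough numerical illustration of Equation~\ref{eqn:tmdot1}, the sketch below evaluates the mass-loss timescale for an energy-limited photoevaporation rate, $\dot{M} \sim \eta F_{\rm XUV}\pi R_p^3/(G M_p)$, for a planet resembling the representative case used later in this section. The efficiency $\eta = 0.1$, the few-percent envelope mass fraction and the saturated XUV luminosity are assumed, illustrative values:

\begin{verbatim}
import numpy as np

G, L_sun = 6.674e-11, 3.828e26            # SI units
M_earth, R_earth = 5.972e24, 6.371e6
AU, Myr = 1.496e11, 3.156e13

L_xuv = 10**(-3.5) * L_sun                # saturated XUV luminosity
a = 0.1 * AU
F_xuv = L_xuv / (4.0 * np.pi * a**2)      # flux at the planet

M_p, R_p = 5.0 * M_earth, 7.0 * R_earth
eta = 0.1                                 # assumed heating efficiency
Mdot = eta * F_xuv * np.pi * R_p**3 / (G * M_p)   # kg/s

M_env = 0.05 * M_p                        # assumed few-percent envelope
print("t_mdot ~ %.0f Myr" % (M_env / Mdot / Myr))
\end{verbatim}

For these assumed numbers the mass-loss timescale is $\sim 80$~Myr, comparable to the $\sim 50$~Myr age of the representative planet considered below; such planets therefore sit close to the regime where Equation~\ref{eqn:tmdot1} becomes constraining.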
Working within the framework where the loss is driven by photoevaporation, we show that with a planet radius alone one can place a minimum value on the planet mass for it to be consistent with the simplest (``vanilla'') picture of planetary formation via core-accretion. Further, with a measured mass and radius, we demonstrate how a lower bound on the Kelvin-Helmholtz timescale can be found.
\subsection{A minimum mass for the ``vanilla'' scenario}
\label{sec:min_mass}
The most na\"ive expectation is that planet formation is smooth and continuous and that disc dispersal gently releases the planet into the circumstellar environment, wherein mass-loss can proceed. In this scenario, with no violent processes, the planet's Kelvin-Helmholtz contraction timescale should roughly track its age. Therefore, at young ages we can follow what is done at old ages, where we accept that a planet is several cooling times old, and set $\tau_{\rm KH}\sim T_{\rm age}$. Combining this ansatz with Equation~\ref{eqn:mass_radius} and inequality \ref{eqn:tmdot1} we find:

\begin{equation}
 M_p^{3/4}\gtrsim A\, \dot{M}T_{\rm age}^{1/2}R_p^{-5/4}
\end{equation}
where the term $A$ incorporates all the terms we have dropped (e.g. temperature, opacity, mean molecular weight and fundamental constants). Setting the mass-loss rate to:
\begin{equation}
 \dot{M}\propto \frac{F_{\rm HE}\pi R_p^3}{G M_p}\propto R_p^3/M_p
\end{equation}
as found in the case of energy-limited photoevaporation (with $F_{\rm HE}$ the high-energy flux received by the planet), we find:
\begin{equation}
M_p \gtrsim A'\, R_p T_{\rm age}^{2/7} \label{eqn:criterion}
\end{equation}
with $A'\equiv A^{4/7}$. Put simply, for a young planet with a measured radius from a transit survey there is a minimum mass for it to be consistent with the na\"ivest expectation of planet formation. Put another way, if a planet is consistent with the criterion in Equation~\ref{eqn:criterion}, then only limited constraints can be placed on its formation entropy and history. Noting that in the scenario above, where $T\propto a^{-1/2}$ (with $a$ the orbital separation) and $\dot{M}\propto a^{-2}$, we find $A' \propto a^{-13/7}$, validating the expectation that the higher irradiation levels closer to the star lead to higher mass-loss and therefore higher required planet masses. Finally (in the case of photoevaporation), the rapid drop in XUV flux when the star spins down \citep[e.g.][]{Tu2015} means $T_{\rm age}$ cannot be set arbitrarily long, but is rather constrained to be the saturation time of the XUV output of the star; hence the typically quoted values of $\sim 100$~Myr for sun-like stars \citep[e.g.][]{Jackson2012}.

While the above style of calculation is unlikely to provide interesting analysis for real planets, it could be useful for selecting which planets to target with radial velocity, transit-timing variation (TTV) or spectroscopic follow-up.

\subsection{Constraining entropy of formation}
\label{sec:min_tkh}
Doing away with the ansatz that $\tau_{\rm KH}\sim T_{\rm age}$, we can now generalise to the possibility that at young ages $\tau_{\rm KH}\gtrsim T_{\rm age}$. We can then follow a similar argument to that in the preceding section, and show that with a measurement of a planet's mass and radius one can place a lower bound on the planet's Kelvin-Helmholtz contraction timescale.
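As an aside before generalising, the scaling content of Equation~\ref{eqn:criterion} can be made concrete. Since the prefactor $A'$ absorbs all the dropped terms, absolute minimum masses require the fuller calculation of Section~2.5; relative minimum masses, however, follow from $M_{\rm min}\propto R_p T_{\rm age}^{2/7} a^{-13/7}$. A minimal sketch, in which the reference planet and its assumed marginal minimum mass are purely illustrative:

\begin{verbatim}
def min_mass_scaled(Rp, Tage, a, ref=(7.0, 50.0, 0.1, 10.0)):
    # Rescale the minimum mass of Equation (9).
    # Rp [Earth radii], Tage [Myr], a [AU]; `ref` is an assumed
    # reference planet (Rp, Tage, a, M_min [Earth masses]) taken
    # to be marginal in the full calculation.
    Rp0, T0, a0, M0 = ref
    return (M0 * (Rp / Rp0) * (Tage / T0)**(2.0/7.0)
            * (a / a0)**(-13.0/7.0))

print(min_mass_scaled(7.0, 50.0, 0.1))    # reference case
print(min_mass_scaled(7.0, 50.0, 0.05))   # ~3.6x higher when moved inward
\end{verbatim}

The steep dependence on separation, and the weak dependence on age while the XUV output remains saturated, are the points to take away; the full calculation should be preferred for absolute numbers.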
Again combining Equation~\ref{eqn:mass_radius} for the mass-radius relationship with the mass-loss criterion in Equation~\ref{eqn:tmdot1}, we find:
\begin{equation}
 \tau_{\rm KH} \gtrsim B\, R_p^{7/2} M_p^{-7/2} T_{\rm age}^2 \label{eqn:critera2}
\end{equation}
where, like the $A$ factor above, $B$ encapsulates all the terms and fundamental constants we have dropped from our analysis. The dependence of the inequality in Equation~\ref{eqn:critera2} is easy to understand. Larger and less massive planets which are older have experienced more mass-loss. The higher total mass-loss requires a higher atmosphere mass to resist it, necessitating a lower entropy interior to give a planet with the same total mass and radius (Equation~\ref{eqn:mass_radius}). Again, for the case where $T\propto a^{-1/2}$ and $\dot{M}\propto a^{-2}$, we find $B \propto a^{-13/4}$, indicating that it is those planets that are closest to their host stars (and experience more vigorous mass-loss) that are the most constraining. Now clearly, if one finds a constraint on the Kelvin-Helmholtz timescale that is shorter than the planet's age, one has not learnt anything other than that the planet is consistent with the ``vanilla'' scenario for core-accretion and satisfies the constraint in Equation~\ref{eqn:criterion}.

With a sample of young planets with ages less than a few 100~Myr and measured masses and radii, it is possible to constrain their current Kelvin-Helmholtz contraction timescales and hence gain insights into their formation entropies and the physical processes that led to their formation and early evolution. On the flip-side, if {\it all} young planets appear to be consistent with $\tau_{\rm KH}\sim T_{\rm age}$ at young ages, we can also make inferences about their formation pathways.

We caution that in the previous sections we deliberately chose an incorrect opacity-law (a constant opacity) and a simple mass-loss model, in order that the powers in the previous expressions did not become large integer ratios; as such, they should not be used for any quantitative analysis. Switching to more realistic opacity and mass-loss laws does not change the conclusions identified in Sections~\ref{sec:min_mass} and \ref{sec:min_tkh}.

\subsection{A slightly more sophisticated demonstration}

Before we switch to using full numerical solutions of planetary evolution, we can get a sense of the range of interesting planet properties by using the semi-analytic planet structure model developed by \citet{Owen2017}, where all choices (opacity-law, mass-loss model, etc.) follow those in \citet{OCE2020}. In all cases we assume an Earth-like core composition with a 1/3 iron to 2/3 rock mass-ratio, which is consistent with the current exoplanet demographics (but, as mentioned above, such a choice does not strongly affect our results).
\subsubsection{Minimum masses}
In Figure~\ref{fig:simple_rad} we show the minimum mass required for the ``vanilla'' scenario where $\tau_{\rm KH}\sim T_{\rm age}$ at all ages.
\begin{figure}
 \centering
 \includegraphics[width=\columnwidth]{Min_mass_simple.pdf}
 \caption{The minimum mass required to be consistent with a scenario where $\tau_{\rm KH}\sim T_{\rm age}$ at all ages. This figure uses the more sophisticated calculation described in Section 2.5, rather than the simple inequality given in Equation~9. The top panel shows a planet with a separation from a sun-like star of 0.05~AU and the bottom panel 0.1~AU.
The sun-like star is assumed to have a saturated XUV flux of $10^{-3.5}~$L$_\odot$ at all ages. The white regions on the left of the plot, labelled ``no solution (evaporation valley)'', are regions of parameter space where a planet's H/He envelope would have undergone runaway loss, and are a manifestation of the evaporation valley. The dotted lines show planetary radius evolution curves (with no mass-loss) that begin at 10~R$_\oplus$ at 10~Myr and track $\tau_{\rm KH}=T_{\rm age}$.}
 \label{fig:simple_rad}
\end{figure}
The dotted lines on these figures show the radius evolution of planets (not undergoing mass-loss) which begin at 10~R$_\oplus$ at 10~Myr. These evolutionary curves do not cross many minimum-mass contours, indicating that there is no strong age preference in the range of 10 to 100~Myr for selecting planets for this kind of analysis (although we will investigate this more precisely in Section~\ref{sec:mesa}). This is fairly easy to understand: as the planet cools and contracts, the absorbed XUV flux falls, reducing the mass-loss rate. However, the total time over which mass-loss must be resisted increases. These two competing effects approximately balance, resulting in a minimum mass that does not change strongly with age. Once the XUV flux is no longer saturated and rapidly falls with time, the mass-loss rate drops precipitously and the minimum mass will also drop rapidly with age.

The difference between the two panels in Figure~\ref{fig:simple_rad} indicates, as expected from the previous analysis, that closer-in planets require higher masses. For those young planets discovered to date with radii in the range 5-10~R$_\oplus$, minimum masses in the range of 5-15~M$_\oplus$ are required.
\subsubsection{Constraining the initial Kelvin-Helmholtz timescale}

While the minimum masses provide a useful guide, they do not provide much insight into planetary formation. Here, we elaborate on the much more interesting case of young planets with well measured masses and radii.

In Figure~\ref{fig:tkh_simple} we show how the mass-radius plane is partitioned into regions of parameter space that are consistent with $\tau_{\rm KH}\sim T_{\rm age}$ and those requiring $\tau_{\rm KH}\gtrsim T_{\rm age}$. This analysis is performed for a planet located at 0.1~AU around an XUV saturated, 50~Myr old sun-like star.
\begin{figure}
 \centering
 \includegraphics[width=\columnwidth]{Constrain_tkh_simple.pdf}
 \caption{The planet mass-radius plane separated into those planets consistent with $\tau_{\rm KH}\sim T_{\rm age}$ and those which require longer initial Kelvin-Helmholtz timescales. As in Figure~2, this figure uses the more sophisticated calculation described in Section 2.5, rather than the simple inequality given in Equation~10. The diagram is shown for a planet located at 0.1~AU around an XUV saturated (10$^{-3.5}$~L$_\odot$), 50~Myr old sun-like star. Planets that sit in the white region would require a long Kelvin-Helmholtz contraction timescale at formation. The point shows a representative young planet with a measured radius of 7~R$_\oplus$ and mass of 5~M$_\oplus$, shown with indicative 10\% error-bars. The solid and dotted lines show curves of constant $\tau_{\rm KH}$, with values of 382~Myr and 50~Myr respectively.
Thus, this representative planet would require a current (and hence formation) contraction timescale of $\gtrsim T_{\rm age}$.}
 \label{fig:tkh_simple}
\end{figure}
Placing a representative young planet with a measured radius of 7~R$_\oplus$ and mass of 5~M$_\oplus$ on this diagram indicates it would require a longer Kelvin-Helmholtz contraction timescale (and hence lower entropy) than predicted by simple formation scenarios. Given that theoretical ideas such as ``boil-off'' predict initial Kelvin-Helmholtz contraction timescales of $\sim$100~Myr \citep{OW2016}, this figure indicates they could be detected with radii and masses measured to the $\sim 10$\% precision level.

It is important to emphasise (as demonstrated in Equation~\ref{eqn:critera2}) that this type of analysis only provides a bound on the Kelvin-Helmholtz contraction timescale, where the equality holds when a planet is on the limit of stability due to mass-loss. Therefore, finding a planet in the region consistent with $\tau_{\rm KH}\sim T_{\rm age}$ does not imply that it does not have a longer contraction timescale; rather, it could just be very stable to envelope mass-loss.

\section{Numerical planetary evolution models}
\label{sec:mesa}
In the previous section we used analytic tools to illustrate the basic physics; however, these models can only be pushed so far. For robust and quantitative results, full numerical models are a must. This is for several reasons. Most importantly, many of the transiting planets discovered to date are large and thus may contain quite significant envelope mass-fractions ($\gtrsim 10\%$). While the self-gravity of such an envelope is small, it is not negligible, and it is not included in the previous analytic model. Additionally, in the previous section we assumed an ideal equation-of-state with a constant ratio of specific heats and a power-law opacity model. While these choices are acceptable for understanding demographic properties, these assumptions induce unnecessary errors in the analysis of individual systems. Finally, by characterising the full evolutionary history (rather than the instantaneous state) we are able to leverage even more power. This is because not all planetary structures that are consistent with a planet's current state are consistent with its evolutionary history once mass-loss is taken into account. The last point is demonstrated by the fact that \citet{Owen2016} were able to provide an (albeit weak) constraint on the entropy of formation for the old planet {\it Kepler-}36c. Specifically, in the previous section we only asked whether $\tau_{\rm KH}\gtrsim T_{\rm age}$. In this section, by including the full evolutionary history, we are able to compare to the planet's {\it initial} Kelvin-Helmholtz timescale, which we define to be the envelope's Kelvin-Helmholtz timescale at the end of disc dispersal. This comparison is more powerful, as it allows us to explore initial Kelvin-Helmholtz timescales which are shorter than the planet's current age.

Therefore, to overcome the above shortcomings we solve for the full planetary evolution using {\sc mesa} \citep{Paxton2011,Paxton2013,Paxton2015}. The {\sc mesa} models are identical to those used in \citet{Owen2016} and \citet{OwenLai2018}, and include the impact of stellar irradiation (which tracks the \citealt{Baraffe1998} stellar evolution models) and photoevaporation using the \citet{Owen2012} mass-loss rates.
\subsection{Example planet}
Here we return to our example planet from Figure~\ref{fig:tkh_simple}: a planet located at 0.1~AU around a 50~Myr old sun-like star. Nominally, we consider this planet to have a measured radius of 7~R$_\oplus$ and a measured mass of 5~M$_\oplus$, but we will investigate how changes to the mass, as well as the measurement precision, affect constraints on the planet's initial Kelvin-Helmholtz timescale.

\subsubsection{Time undergoing photoevaporation}

One of the big uncertainties at young ages is how long the planet has been exposed to XUV irradiation, and hence photoevaporating. While embedded in the protoplanetary disc it is protected from XUV photons. Thus, the age of the star only provides an upper bound on the time the planet has spent photoevaporating. Since disc lifetimes vary between $\sim 1$ and $\sim 10$~Myr, at young ages this is not a trivial uncertainty. We include this uncertainty in our analysis by deriving the probability distribution for the time a planet has spent photoevaporating after disc dispersal, $T_p$. We then marginalise over this probability when determining our lower bound on the planet's initial Kelvin-Helmholtz timescale.

We take the star to have a Gaussian age uncertainty with mean $t_*$ and error $\sigma_*$. We further assume that after a time $t_d$ the disc fraction decays exponentially with the form
\begin{equation}
 \propto \exp\left(-\frac{T_d-t_d}{\sigma_d}\right)
\end{equation}
where $T_d$ is the disc's lifetime and $\sigma_d$ is the decay time for the disc fraction. Such a phenomenological form describes the evolution of the protoplanetary disc fraction \citep[e.g.][]{Mamajek2009}. Now, given that a star's actual age ($T_*$) is the sum of the unknown disc lifetime and the unknown time the planet has been undergoing photoevaporation ($T_p$), we know $T_*=T_p+T_d$. Therefore the probability distribution for $T_p$ can be written as:
\begin{eqnarray}
 P(T_p) &=& \frac{1}{2\sigma_d}\exp\left[\frac{\sigma_*^2+2\sigma_d\left(T_p+t_d-t_*\right)}{2\sigma_d^2}\right]\nonumber \\&\times&\left\{1-{\rm erf}\left[\frac{\sigma_*^2+\sigma_d\left(T_p+t_d-t_*\right)}{\sqrt{2}\sigma_*\sigma_d}\right]\right\}
\end{eqnarray}

In this work we set $t_d=1$~Myr and $\sigma_d=3$~Myr, as this reproduces the fact that all (single) stars host discs at an age of 1~Myr, but by 10~Myr the vast majority of stars have dispersed their discs. Accordingly, we adopt an initial Kelvin-Helmholtz timescale of 10~Myr as the upper limit that can be reached in standard core-accretion theory.

\subsubsection{Results}

\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Panel_plot.pdf}
\caption{The top left, top right and bottom left panels show joint probability distributions for the initial Kelvin-Helmholtz timescale, core mass and initial envelope mass fraction for a 50$\pm5$~Myr old planet with a radius of $7\pm0.7$~R$_\oplus$ and mass of $5\pm 0.5$~M$_\oplus$. The bottom right panel shows the marginalised probability distribution for the initial Kelvin-Helmholtz timescale, with the point indicating the 99\% lower-limit at a value of 168~Myr.} \label{fig:big_panel}
\end{figure*}

In Figure~\ref{fig:big_panel} we show joint probability distributions for the initial Kelvin-Helmholtz timescale, initial envelope mass fraction and core mass, as well as the marginalised probability distribution for the initial Kelvin-Helmholtz timescale.
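For reference, the age-marginalisation distribution $P(T_p)$ derived above is simple to evaluate. The minimal sketch below implements it with the adopted $t_d=1$~Myr and $\sigma_d=3$~Myr and an assumed age measurement of $t_*=50\pm5$~Myr (the function name is ours and illustrative only):

\begin{verbatim}
import numpy as np
from scipy.special import erf
from scipy.integrate import quad

def p_tp(Tp, t_star=50.0, sig_star=5.0, t_d=1.0, sig_d=3.0):
    # Density (per Myr) of the time Tp spent photoevaporating
    # after disc dispersal; all times in Myr.
    u = Tp + t_d - t_star
    pref = np.exp((sig_star**2 + 2.0*sig_d*u)
                  / (2.0*sig_d**2)) / (2.0*sig_d)
    tail = 1.0 - erf((sig_star**2 + sig_d*u)
                     / (np.sqrt(2.0)*sig_star*sig_d))
    return pref * tail

# Sanity check: the density integrates to ~1 over Tp >= 0.
print(quad(p_tp, 0.0, 200.0)[0])
\end{verbatim}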
This analysis has been performed assuming $10\%$ Gaussian errors on stellar age, radius and mass. Similar to our analysis in the earlier section, for our 7~R$_\oplus$, 5~M$_\oplus$, 50~Myr old planet we find that it would require an initial Kelvin-Helmholtz timescale significantly longer than would be predicted by standard core-accretion theory. In this example, we would place a 99\% lower limit on the initial Kelvin-Helmholtz timescale of $\sim 170$~Myr. The joint probability distributions are also correlated, as expected from our earlier analysis: lower-mass planets require longer initial Kelvin-Helmholtz timescales and higher initial envelope mass fractions.

We explore the role of planet mass in Figure~\ref{fig:vary_mass}, where we consider measured planet masses between 4 and 8 Earth masses (again for our 7~R$_\oplus$, 50~Myr old planet with 10\% measurement uncertainties). We note that very few 4~M$_\oplus$ models are consistent with the measured radius, as most have initial envelope mass fractions $\sim 1$, making them extremely vulnerable to photoevaporation \citep{Owen2019}.
\begin{figure}
 \centering
 \includegraphics[width=\columnwidth]{Vary_mass.pdf}
 \caption{Marginalised probability distributions for the initial Kelvin-Helmholtz timescale for a 50$\pm5$~Myr old planet with a radius of $7\pm0.7$~R$_\oplus$. Different lines show different planet masses (with 10\% uncertainty). For this particular planet a measured mass of $\lesssim 7$~M$_\oplus$ would be required to claim evidence of boil-off.}
 \label{fig:vary_mass}
\end{figure}
As expected, as the planet mass increases, the bound on the initial Kelvin-Helmholtz timescale decreases (as the higher-mass core is able to hold onto a less massive, and thus higher entropy, envelope).

While $\lesssim 10\%$ measurement uncertainties on planet radius and stellar age\footnote{Much of the uncertainty in the time a planet has spent photoevaporating is dominated by the uncertainty in the disc dispersal timescale at ages $\lesssim 100$~Myr.} have been achieved for known young planets, stellar activity may mean obtaining radial-velocity mass measurements at $\sim 10$\% precision is difficult. Therefore, in Figure~\ref{fig:mass-error} we show how sensitive our constraints on the initial Kelvin-Helmholtz timescale are to mass uncertainties in the range of 5-25\%. As one would naturally expect, increasing the uncertainty means higher-mass planets become consistent with the measured mass, allowing shorter initial Kelvin-Helmholtz timescales. However, even with a tentative $\sim 25\%$ mass detection, for this example we would still be able to place a useful constraint on the initial Kelvin-Helmholtz timescale.

\begin{figure}
 \centering
 \includegraphics[width=\columnwidth]{Vary_Error.pdf}
 \caption{Marginalised probability distributions for the initial Kelvin-Helmholtz timescale for a 50$\pm5$~Myr old planet with a radius of $7\pm0.7$~R$_\oplus$ and mass of $5$~M$_\oplus$. Different lines show different uncertainties on the measured planet mass.}
 \label{fig:mass-error}
\end{figure}

This gives us confidence that measured masses can place useful constraints on the entropy of formation of young transiting planets, even if those mass measurements are tentative.

\subsection{What age is best?}

One question that remains is: what is the best system age at which to do this experiment?
Obviously, younger planets allow one to constrain shorter and shorter initial Kelvin-Helmholtz timescales, as they have had less chance to cool. Yet, at young ages there are two confounding effects. First, photoevaporation may not have had enough time to significantly control the planet's evolution. Second, at very young ages, the time the planet has spent photoevaporating after disc dispersal is not dominated by the uncertainty in the age of the system, but rather by the unknown disc lifetime. For example, a 10~Myr old planet could have spent anywhere between 0 and $\sim 9$~Myr photoevaporating. However, wait too long and the planet will have cooled sufficiently that knowledge of its initial thermodynamic state will have been lost, especially at ages $\gtrsim 100$~Myr when photoevaporation no longer dominates.

Thus, we expect there to be an optimum range of stellar ages at which this experiment is most stringent. In order to assess this we take the evolution of a planet with a 4.375~M$_\oplus$ core, with an initial envelope mass fraction and Kelvin-Helmholtz timescale of $0.3$ and $500$~Myr respectively. This model roughly corresponds to our 5~M$_\oplus$, 7~R$_\oplus$, 50~Myr old planet studied earlier. We then use our method to constrain its initial Kelvin-Helmholtz timescale as a function of age, assuming 10\% errors on planet mass, radius and stellar age. The minimum initial Kelvin-Helmholtz timescale (at the 99\% confidence level) is shown as a function of age for this exercise in Figure~\ref{fig:best_age}.
\begin{figure}
 \centering
 \includegraphics[width=\columnwidth]{Best_age.pdf}
 \caption{The constraint on the minimum inferred initial Kelvin-Helmholtz timescale as a function of stellar age.}
 \label{fig:best_age}
\end{figure}
The most constraining system ages lie around $\sim 30$~Myr, where even with errors and a fairly robust confidence level we recover the actual initial Kelvin-Helmholtz timescale to within a factor of 3. Ages in the range $\sim 20-60$~Myr have a constraint that varies by less than $50\%$ of the absolute maximum. We can clearly see that after 100~Myr the constraint on the initial Kelvin-Helmholtz timescale becomes uninformative. Therefore, for real systems our method should provide meaningful constraints on the initial Kelvin-Helmholtz timescale, and hence the formation entropy, for planets around stars with ages in the range 20-60~Myr.

\section{Application to real planets}
Having shown that by using the photoevaporation model it is possible to constrain a young planet's entropy of formation, we turn our attention to detected young planets, and consider how their inferred entropy of formation varies as a function of possible measured mass. Out of the handful of known young planets, we choose to focus here on DS Tuc Ab and V1298 Tau c, as these are the most strongly irradiated, and therefore most likely to result in strong constraints on their initial Kelvin-Helmholtz contraction timescales.

\subsection{DS Tuc Ab}

DS Tuc Ab \citep{Benatti2019,Newton2019} is a $5.70\pm0.17$~R$_\oplus$ planet\footnote{We choose to use the stellar and planetary parameters from \citet{Newton2019}.} discovered around a $45\pm4$~Myr old, 1.01~M$_\odot$ star, orbiting with a period of 8.1~days.
Using exactly the same formalism as applied in Section~\ref{sec:mesa}, we consider the constraints on the entropy of formation and initial Kelvin-Helmholtz timescale as a function of planet mass. We find that a measured mass $\lesssim 4.5$~M$_\oplus$ would be inconsistent with the current properties of DS Tuc Ab. In Figure~\ref{fig:limit_DS} we show how the inferred lower limit on the initial Kelvin-Helmholtz timescale varies with both the measured planet mass and the measurement uncertainty. Our results indicate that a measured mass $\lesssim 7.5$~M$_\oplus$ with an uncertainty of 10\% (or $\lesssim$ 6.5~M$_\oplus$ with a 20\% uncertainty) would require a longer than na\"ively expected initial Kelvin-Helmholtz timescale and hence a ``boil-off'' phase. A mass of 7.5~M$_\oplus$ corresponds to a radial velocity semi-amplitude of $\sim 2.4$~m~s$^{-1}$, eminently detectable with current instrumentation, stellar noise notwithstanding.

\begin{figure}
 \centering
 \includegraphics[width=\columnwidth]{DS_TUCAb_result.pdf}
 \caption{The minimum initial Kelvin-Helmholtz timescale (at the 99\% confidence limit) for DS Tuc Ab shown as a function of measured planet mass for a 10\% and 20\% measured mass uncertainty. A measured mass of $\lesssim 7-8$~M$_\oplus$ would require a ``boil-off'' phase to explain.}
 \label{fig:limit_DS}
\end{figure}

\subsection{V1298 Tau c}

The 23$\pm4$~Myr old, 1.1~M$_\odot$ V1298 Tau system contains four large transiting young planets \citep{David2019}. All planets are between 5-11~R$_\oplus$ in radius and orbit close to the star with periods $\lesssim 100$~days, indicating that the system is likely to be a precursor to the archetypal {\it Kepler} multi-planet systems. Given that our analysis in Section~\ref{sec:min_tkh} indicated that planets much closer to their star will provide the most stringent limits (due to more vigorous photoevaporation), we select planet c to investigate here. V1298 Tau c is a $5.59\pm0.34$~R$_\oplus$ planet with an orbital period of 8.2~days. Since V1298 Tau is a multi-planet system, dynamical arguments have already constrained the sum of planet c's and d's masses to be $7^{+21}_{-5}$~M$_\oplus$. As for DS Tuc Ab above, we calculate the minimum initial Kelvin-Helmholtz timescale, taken to be the 99\% lower limit, as a function of measured planet mass (with both 10\% and 20\% measurement uncertainties), which is shown in Figure~\ref{fig:V1298_result}.

\begin{figure}
 \centering
 \includegraphics[width=\columnwidth]{V1298_result.pdf}
 \caption{The minimum initial Kelvin-Helmholtz timescale (at the 99\% confidence limit) for V1298 Tau c shown as a function of measured planet mass for a 10\% and 20\% measured mass uncertainty. A measured mass of $\lesssim 6-7$~M$_\oplus$ would require a ``boil-off'' phase to explain.}
 \label{fig:V1298_result}
\end{figure}
We note that measured planet masses $\lesssim 4$~M$_\oplus$ are inconsistent with V1298 Tau c's current properties. A mass measurement of $\lesssim 6.5$~M$_\oplus$ with a 10\% uncertainty (or $\lesssim$5.5~M$_\oplus$ with a 20\% uncertainty) would require a boil-off phase to explain. This corresponds to an RV semi-amplitude of $\sim 2$~m~s$^{-1}$, again within the realm of possibility for radial velocity characterisation (stellar noise notwithstanding). Further, since V1298 Tau is a multi-planet system, this permits the possibility of mass constraints through transit timing variations.
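The quoted semi-amplitudes follow from the standard radial-velocity formula, $K = (2\pi G/P)^{1/3}\,M_p \sin i\, M_*^{-2/3}$, valid for $M_p \ll M_*$ and a circular orbit. A quick numerical check (assuming $\sin i \approx 1$, appropriate for transiting planets):

\begin{verbatim}
import numpy as np

G, M_sun, M_earth, day = 6.674e-11, 1.989e30, 5.972e24, 86400.0

def rv_semi_amplitude(P_days, Mp_earth, Mstar_sun):
    # Circular orbit, Mp << M*, sin(i) ~ 1 for a transiting planet.
    P = P_days * day
    return ((2.0 * np.pi * G / P)**(1.0/3.0)
            * Mp_earth * M_earth / (Mstar_sun * M_sun)**(2.0/3.0))

print(rv_semi_amplitude(8.1, 7.5, 1.01))   # DS Tuc Ab:   ~2.4 m/s
print(rv_semi_amplitude(8.2, 6.5, 1.10))   # V1298 Tau c: ~1.9 m/s
\end{verbatim}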
\section{Discussion}

Typical sub-Neptune and super-Earth planets are expected to be much larger at early ages. In this work we have shown that with a combined mass and radius measurement of a proto-sub-Neptune/super-Earth ($\lesssim 100$~Myr old), a lower bound can be placed on its initial Kelvin-Helmholtz timescale. This lower bound provides valuable insight into the accretion and early evolution of its H/He envelope. The bound is essentially found by asking how low-mass a H/He envelope can survive on the planet, given that it has been undergoing photoevaporation. For a fixed planet radius, a higher entropy envelope contains less mass and is therefore more vulnerable to mass-loss, whereas lower entropy envelopes must be more massive and are thus able to resist mass-loss for longer.

One might expect that the younger the planet, the tighter the constraint that can be obtained. While this is generally true, at the youngest ages the fact that protoplanetary disc lifetimes vary means one cannot be certain how long a planet has been undergoing mass-loss. Thus, we find that the optimum ages for this experiment are around 20-60~Myr.

Further, more accurate measurements obviously result in tighter constraints. In this work, we showed that measurement precisions in the range of 10-20\% on radius, mass and age are required to perform this analysis. Current (and previous) transit surveys have already reached and exceeded this requirement for known young planets (e.g. the planets in the V1298 Tau system have radius uncertainties in the range 6-7\%). Further, the already published young planets have age uncertainties at the 10\% level.

What is difficult to ascertain is whether mass measurements at $\lesssim 20\%$ precision are achievable. As discussed in Section~4, the problem is not RV precision. Rather, it is intrinsic stellar variability, particularly due to spots \citep[e.g.][]{Huerta2008}, which are more prevalent on younger stars. Recent work using Gaussian processes has shown that it is possible to model intrinsic stellar variability and obtain mass measurements for planets \citep{Haywood2014,Grunblatt2015}. Using this technique, \citet{Barragan2019} recently obtained an RV mass measurement for K2-100b, whose host is a moderately young star showing significant intrinsic RV variability. Alternatively, if the planets happen to reside in multi-transiting systems, then TTVs could be used. While we acknowledge the difficulty in obtaining the mass measurements we require, we advocate that the scientific value of constraints on planets' initial entropies is important enough to motivate the effort.

Since we are using photoevaporation to constrain the entropy of formation, our results are sensitive to the accuracy of theoretical photoevaporation calculations. In this work we use the mass-loss rates of \citet{Owen2012}, which are consistent with the location and slope of the ``evaporation valley'' \citep{VanEylen2018}\footnote{As is the core-powered mass-loss model \citep{Gupta2019,Gupta2019b}.}, and are generally in good agreement with observed outflows \citep{Owen2012}. Only more theoretical and observational work calibrating photoevaporation models can assess the impact that changing the mass-loss rates may have on entropy constraints.
\n\n\n\\subsection{Links to planet formation theory}\n\nSince the discovery of sub-neptunes and super-earths there has been much work on their origin \\citep[e.g.][]{Ikoma2006,Ikoma2012,Lee2014,Venturini2015,Venturini2016,Ginzburg2018}. It is clear that the only way to explain (at least some of) their current densities is to have a have a planetary core (made of some mixture of rock, iron and ices) surrounded by a H\/He envelope which contains $\\sim 1-10$\\% of the planet's total mass \\citep[e.g.][]{JontofHutter2016}. \n\nSuch a planetary composition would naturally arise through the core-accretion mechanism, whereby the growing solid core accretes a H\/He envelope over the disc's lifetime. In this standard picture, the accreting planetary envelope smoothly connects to the disc, but remains in quasi-hydrostatic and thermal equilibrium. As the envelope cools, it contracts and slowly accretes. This process happens on the envelope's Kelvin-Helmholtz timescale, which without any strong internal heating sources, quickly equilibrates to roughly the envelope's age. If disc dispersal allows the envelope to remain in quasi-hydrostatic and thermal equilibrium, then a planet's ``initial Kelvin-Helmholtz timescale'' (which we define as the Kelvin-Helmholtz timescale after disc dispersal) is essentially the time it has spent forming, which is bounded by the protoplanetary disc lifetime (e.g. $\\lesssim 10$~Myr). \n\nWhile the basic picture appears to fit, there is growing evidence that the standard core accretion model significantly over-predicts the amount of H\/He a core of a given mass should accrete \\citep{Jankovic2019,Ogihara2020,Alessi2020,Rogers2020}. In some cases the problem is so acute that it's not clear why certain planets did not become giant planets \\citep[e.g.][]{Lee2014,Lee2019}. Several solutions have been proposed to solve this problem. \\cite{Chen2020} suggested enhanced opacity from dust could slow the atmosphere's accretion. Using numerical simulations, \\citet{Ormel2015} and \\citet{Fung2015} suggested that the envelope was not in quasi-hydrostatic equilibrium with the disc, but rather high-entropy disc material continually flowed into the envelope, preventing it from cooling. \n\n\\cite{Lee2016} hypothesised instead these planets do not spend the entire disc lifetime accreting from the nebula, but rather formed rapidly (over a timescale of $10^5-10^6$~years) in the final ``transition'' disc stage of the protoplanetary disc. The much lower gas surface densities and the shorter lifetime of the transition disc phase gave rise to smaller accreted atmospheres. The above modifications to the ``vanilla'' core accretion theory model will typically result in higher entropy envelopes and therefore initial Kelvin-Helmholtz contraction timescales, significantly shorter than the standard value of a few Myr.\n\nAn alternative solution to the over accretion problem\\footnote{Although it does not prohibit the modifications to core-accretion theory described above.} is the introduction of additional mass-loss. While it does not seem energetically feasible to increase the rates of either photoevaporation or core-powered mass-loss (as they are already fairly efficient), the assumption that the envelope maintains some sort of dynamical equilibrium as the disc disperses seems unlikely. Protoplanetary discs are observed to live and evolve slowly over their 1-10~Myr lifetimes. 
However, the dispersal process is rapid, with a timescale of $\sim 10^5$~years \citep[e.g.][]{kenyon95,ercolano11,koepferl13,owenreview2016}.

As argued by \citet{OW2016} and \citet{Ginzburg2016}, this means accreted H/He envelopes cannot maintain dynamical and thermal balance with the gas in the dispersing disc. As such, the envelopes become over-pressurised and expand hydrodynamically into the forming circumstellar vacuum. This ``boil-off'' process results in mass-loss (in extreme cases up to 90\% of the initial envelope is lost), but also, importantly, cooling of the interior. This is because the bottleneck for cooling (the radiative-convective boundary) is replaced by an advective-convective boundary, and thermal energy is removed from the interior quickly by advection and mass-loss. Using simulations, \citet{OW2016} found that after this boil-off process the remaining envelopes had their entropies reduced: their Kelvin-Helmholtz contraction timescales at the end of disc dispersal were around $\sim 100$~Myr.

Thus, any constraints on the initial Kelvin-Helmholtz contraction timescales of proto-sub-Neptunes/super-Earths will be invaluable for constraining and testing our models for their origins.

\section{Summary}

The formation of sub-Neptunes and super-Earths is uncertain, and many formation models have been proposed to explain their origin. These formation models are essentially unconstrained by the old, evolved exoplanet population, which has a typical age of several Gyr. However, the various planet formation models predict vastly different entropies at the end of protoplanetary disc dispersal. Characterising the entropies at the end of disc dispersal in terms of initial Kelvin-Helmholtz contraction timescales, these predictions range from $\lesssim 1$~Myr to $\gtrsim 100$~Myr.

A young proto-sub-Neptune/super-Earth with a measured mass, radius and age can be used to place a lower bound on its initial Kelvin-Helmholtz contraction timescale. This requires the planet to be close enough to its host star that photoevaporation has had an impact on its evolution. The constraint is obtained by asking how low-mass a H/He envelope can survive on the planet given the mass-loss it has experienced. For a fixed planet radius, a higher entropy envelope contains less mass and is therefore more vulnerable to mass-loss, whereas lower entropy envelopes must be more massive and are thus able to resist mass-loss for longer.

We have shown that planets around host stars with ages of 20-60~Myr are the optimum targets for this kind of analysis. Applying our proposed method to the detected young planets DS Tuc Ab and V1298 Tau c, we show that planet mass measurements (with $\lesssim 20\%$ precision) in the range 7-10~M$_\oplus$ would be consistent with our standard picture of core-accretion. Mass measurements $\lesssim 7$~M$_\oplus$ would favour a ``boil-off'' process, where a planet loses mass and its interior cools significantly during disc dispersal.

While precise mass measurements of low-mass planets orbiting young stars are likely to be challenging, the insights into planet formation that could be obtained warrant the effort.

\section*{Acknowledgements}
JEO is supported by a Royal Society University Research Fellowship and a 2019 ERC starting grant (PEVAP).

\section*{Data Availability}
The code used to create the planet structure models in Section~2.5 is freely available at: \url{https://github.com/jo276/EvapMass}.
The custom {\sc mesa} code used to calculate the planet evolution models in Sections~3 and 4 is freely available at: \url{https://github.com/jo276/MESAplanet}. The remaining data underlying this article will be shared on reasonable request to the corresponding author.

\bibliographystyle{mnras}

\section{Introduction}
\label{sec:introduction}

Modern-day edge devices, with their data acquisition and storage abilities, have pushed the need for distributed computing beyond the realms of data centers. Devices such as mobile phones, sensor systems in vehicles, wearable technology and smart homes, within their limited storage and processing capabilities, can constantly collect data and perform simple computations. However, due to data privacy concerns and limitations on network bandwidth and power, it becomes impractical to transmit all the collected data to a centralized server and conduct centralized training.

The nascent field of federated learning \cite{konevcny2015federated,konevcny2016federated,brendan2017aguera,mohri2019agnostic,li2020federated} tries to address these concerns. As described in \cite{konevcny2016federated}, federated learning is a machine learning setting where the goal is to train a high-quality centralized model with training data distributed over a large number of clients. Unlike data centers, the clients collect data samples independently but in a non-i.i.d.\ fashion. The clients may be highly unbalanced, i.e., the number of samples per client may vary significantly. The clients may also have hardware-related constraints. Although the number of clients could be quite large, each client is typically a simple device which has access to a very small number of data samples and can only conduct very basic computations due to limitations on its processing and power capabilities. Furthermore, since battery power is at a premium, the communication between the client and the centralized server acts as a major bottleneck. Due to these constraints, it is common to encounter straggling and faulty clients in the federated learning setting.

In this work, we study the problem of exact support recovery of sparse linear regression in federated learning. \cite{wainwright2009info} provided an information-theoretic lower bound for sparse linear regression, showing that, in a completely centralized setting where all the data resides in a single server, $\mathcal{O}(s \log d)$ samples are necessary for exact support recovery of a $d$-dimensional parameter vector with $s$ non-zero entries. In our setting, none of the clients has access to the necessary number of data samples required for exact support recovery or possesses the computational capabilities to run complex algorithms. Furthermore, we only allow for one-shot communication between the clients and the centralized server, i.e., clients can send information to the centralized server only once. We propose a novel yet simple algorithm for this setting and show that local clients can collaboratively recover the exact support of the sparse linear regression model with provable theoretical guarantees.

\paragraph{Related work.}

Despite federated learning being a new research area, there has been a lot of interest in the field. On the experimental side, \cite{konevcny2015federated} were the first to formally define federated learning and proposed an algorithm with encouraging experimental results.
\cite{konevcny2016federated} proposed strategies to improve the communication efficiency of federated learning. \cite{brendan2017aguera} proposed a communication-efficient algorithm for deep networks. Similarly, \cite{yurochkin2019bayesian} developed a novel framework for federated learning with neural networks, and \cite{wang2020federated} proposed a federated learning algorithm using matched averaging for neural networks. \cite{bhagoji2019analyzing} empirically analyzed adversarial attacks on federated learning settings. They specifically studied the threat of model poisoning, where the adversary controls a small number of malicious clients (usually 1) with the aim of causing the global model to misclassify. \cite{li2020fair} studied fair resource allocation in federated learning. On the theoretical side, \cite{he2018cola} proposed a new decentralized training algorithm with guarantees on convergence rates for linear classification and regression models. \cite{smith2017cocoa} presented a communication-efficient decentralized framework which covers general non-strongly-convex regularizers, including problems like lasso, with convergence rate guarantees. They also describe a possible extension of their method to one-shot communication schemes. \cite{smith2017federated} proposed a multi-task learning based approach for federated learning with convergence rate guarantees which is tolerant to client failure and can handle clients that lag in sending information to the centralized server (also known as straggling clients). \cite{mohri2019agnostic} proposed a client-distribution-agnostic framework for federated learning and provided Rademacher-based generalization bounds for their approach.

\paragraph{Our Contribution.}
All the works mentioned above are interesting in their own domains; however, our contribution is mostly theoretical. The existing theoretical work provides guarantees on convergence rates (which guarantee a small mean squared error on the training set given enough iterations) or generalization bounds (which guarantee a small mean squared error on the test set given enough samples). However, the final solution may not match the true parameter vector exactly. In this work, we provide provable theoretical guarantees for exact recovery of the support of the true sparse parameter vector of linear regression in federated learning. Support recovery, i.e., correctly detecting the zero and non-zero entries of the parameter vector, is arguably a challenging task. In particular, we show that for a $d$-dimensional $s$-sparse parameter vector, $\mathcal{O}(\log d)$ clients and $\mathcal{O}(s^2 \log s)$ data samples per client are sufficient to recover the exact support. If the predictor variables are mutually independent, then we can achieve exact support recovery with only $\mathcal{O}(s)$ data samples per client. Notice that in this case the aggregate sample complexity is $\mathcal{O}(s\log d)$, which matches the optimal sample complexity of the centralized setting \cite{wainwright2009info,wainwright2009sharp}. We propose a simple yet effective method for exact support recovery and prove that the method is \emph{correct} and efficient in terms of \emph{time} and \emph{sample complexity}. Our method has the following key properties:
\begin{itemize}
\item \textbf{Simplicity: } We do not solve any optimization problem at the client level.
All the computations are simple, which lets our method run on devices with low computational power.\n\t\t\\item \\textbf{One-shot communication and privacy: } Our method is communication efficient. We only need one round of communication, of at most $d$ bits, from each client to the centralized server. As the communication is kept to a minimum, very little information about the client is passed to the centralized server. \n\t\t\\item \\textbf{Fault tolerance and aversion to model poisoning and straggling: } Our method is naturally robust to client node failures and resilient to rogue and straggling clients.\n\t\\end{itemize} \n\t\n\t\n\t\n\t\\section{Preliminaries}\n\t\\label{sec:preliminaries}\n\tIn this section, we collect the notation which we use throughout this paper. We also formally define the support recovery problem for sparse linear regression in federated learning.\n\t\n\t\\subsection{Notation and Problem Setup}\n\t\\label{subsec:notation and problem setup}\n\t\n\tLet $w^* \\in \\mathbb{R}^d$ be a $d$-dimensional parameter with sparsity $s$, i.e., only $s$ out of $d$ entries of $w^*$ are non-zero. We use $\\seq{r}$ as a shorthand notation to denote the set $\\{1,2,\\cdots,r\\}$. Let $S^*$ be the true support set, i.e., $S^* = \\{ r | w^*_r \\ne 0, r \\in \\seq{d} \\}$. We denote the corresponding complementary non-support set as $S^*_c = \\{ r | w^*_r = 0, r \\in \\seq{d} \\}$. Assume there are $g$ clients, each with $n_i$ independent samples, for $i \\in \\seq{g}$. Note that the data distribution across the $g$ clients need not be identical. Each client $i \\in \\seq{g}$ holds data samples of the form $(X_i, y_i)$, where $X_i \\in \\mathbb{R}^d$ are the predictor variables and $y_i \\in \\mathbb{R}$ is the response variable. The data generation process for each client $i \\in \\seq{g}$ is as follows:\n\t\\begin{align}\n\t\\label{eq:generative model}\n\t\\begin{split}\n\ty_i = X_i^\\intercal w^* + e_i \\; ,\n\t\\end{split}\n\t\\end{align} \n\twhere $e_i$ is a zero mean sub-Gaussian additive noise with variance proxy $\\eta_i^2$, where $\\eta_i > 0$. Note that all the clients share the same parameter vector $w^*$. The $j$-th entry of $X_i$ is denoted by $X_{ij}, \\forall i\\in \\seq{g}, j \\in \\seq{d}$. Each entry $X_{ij}$ of $X_i$ is a zero mean sub-Gaussian random variable with variance proxy $\\rho_i^2$, where $\\rho_i > 0$. We denote the covariance matrix of $X_i$ by $\\Sigma^i \\in \\mathbb{R}^{d \\times d}$ with diagonal entries $\\Sigma^i_{jj} \\equiv {\\sigma^i_{jj}}^2, \\forall j \\in \\seq{d}$ and non-diagonal entries $\\Sigma^i_{jk} \\equiv \\sigma^i_{jk}, \\forall j,k \\in \\seq{d}, j \\ne k$. If the predictor variables are mutually independent then $\\sigma^i_{jk} = 0, \\forall i \\in \\seq{g}, j,k \\in \\seq{d}, j \\ne k$. The $t$-th sample of the $i$-th client is denoted by $(X_i^t, y_i^t), \\forall i \\in \\seq{g}, t \\in \\seq{n_i}$. We note that $X_i^t \\in \\mathbb{R}^d$ and $y_i^t \\in \\mathbb{R}$ and denote the $j$-th entry of $X_i^t$ by $X_{ij}^t$. Notice that the data distributions for $(X_i, y_i)$ can vary a lot across the clients by varying $\\rho_i$ and $\\eta_i$, as well as the specific sub-Gaussian probability distribution. The class of sub-Gaussian variates includes, for instance, Gaussian variables, any bounded random variable (e.g., Bernoulli, multinomial, uniform), any random variable with strictly log-concave density, and any finite mixture of sub-Gaussian variables. Similarly, data samples can be distributed unevenly across the clients by varying $n_i$. 
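\n\tTo make the data model concrete, the following Python sketch (illustrative only; the distributional choices, parameter values and function names are ours, not part of the method) simulates the generative process of equation \\eqref{eq:generative model} for a single client, instantiating the sub-Gaussian predictors and noise as Gaussians:\n\\begin{verbatim}
import numpy as np

def simulate_client(w_star, n_i, rho_i=1.0, eta_i=0.5, seed=0):
    # Draw n_i i.i.d. samples from y = X^T w* + e, with Gaussian
    # predictors (variance proxy rho_i^2) and Gaussian noise
    # (variance proxy eta_i^2); any sub-Gaussian choice is admissible.
    rng = np.random.default_rng(seed)
    d = w_star.shape[0]
    X = rng.normal(0.0, rho_i, size=(n_i, d))
    y = X @ w_star + rng.normal(0.0, eta_i, size=n_i)
    return X, y

# A client with d = 1000 predictors, sparsity s = 3, only 30 samples.
w_star = np.zeros(1000)
w_star[:3] = 1.0
X, y = simulate_client(w_star, n_i=30)
\\end{verbatim}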
In subsequent sections, we use $\\mathbb{P}(A)$ to denote the probability of the event $A$ and $\\mathbb{E}(A)$ to denote the expectation of the random variable $A$.\n\t\n\t\\subsection{Problem Statement}\n\t\\label{subsec: problem statement}\n\t\n\tFor our problem, we assume that $n_i < \\mathcal{O}(s \\log d), \\forall i \\in \\seq{g}$. Otherwise, the support can be trivially recovered by using compressed sensing methods at a client with $n_i = \\mathcal{O}(s \\log d)$, which is the order of the number of samples necessary and sufficient for exact support recovery in the linear regression setup \\cite{wainwright2009info,wainwright2009sharp}. Furthermore, we assume that each of our clients can only do very simple computations and can only do one-shot communication with the centralized server, i.e., each client can only send at most $d$ bits to the centralized server. Considering the above requirements, we are interested in answering the following question:\n\t\\begin{problem}[Exact Support Recovery]\n\t\t\\label{prob:exact support recovery}\n\t\tGiven that each client contains $n_i < \\mathcal{O}(s\\log d)$ data samples generated through the process described in equation \\eqref{eq:generative model}, is it possible to efficiently recover the true support of the $s$-sparse shared parameter vector $w^* \\in \\mathbb{R}^d$, with provable theoretical guarantees, by collecting $d$ bits of information from every client only once?\n\t\\end{problem}\n\tEfficiency in exact recovery means that the sample complexity per client should be strictly less than $\\mathcal{O}(s\\log d)$, and that our algorithm should have polynomial time complexity and should also be easy to implement. \n\t\n\t\\section{Our Method}\n\t\\label{sec:methodology}\n\t\n\tIn this section, we present a simple algorithm to solve Problem \\ref{prob:exact support recovery}. Our main idea is that even though the estimate at each client may be incorrect, this information can still be aggregated in a careful manner to compute the true support. \n\t\n\t\\subsection{Client Level Computations}\n\t\\label{subsec:client level computations}\n\t\n\tEach client tries to estimate the support of $w^*$ using the $n_i$ independent samples available to it. As mentioned previously, $n_i$ samples, $\\forall i \\in \\seq{g}$, are not sufficient to compute the correct support of $w^*$ using any possible method \\cite{wainwright2009info}. Let $\\hat{w}_i \\in \\mathbb{R}^d$ be the estimate of $w^*$ computed by client $i$, and let $S_i$ be the support of $\\hat{w}_i$. Each client communicates the computed support (at most $d$ bits) to a centralized server, which then computes the final support for $w^*$. The centralized server receives $S_i$ from each client and computes the final support $S = f(S_1, S_2,\\cdots,S_g)$. Each client $i, \\forall i \\in \\seq{g}$ computes $\\hat{w}_i$ in the following way:\n\t\\begin{align}\n\t\\label{eq:what}\n\t\\begin{split}\n\t\\forall i \\in \\seq{g}, j \\in \\seq{d},\\quad \\hat{w}_{ij} = \\frac{1}{\\hat{\\sigma}_{ij}} \\text{sign}(\\hat{\\alpha}_{ij}) \\max (0, |\\hat{\\alpha}_{ij}| - \\lambda) \\; ,\n\t\\end{split}\n\t\\end{align} \n\twhere $\\hat{w}_{ij}$ is the $j$-th entry of $\\hat{w}_i$ and $\\lambda > 0$ is a regularization parameter. We present the exact procedure to compute a feasible $\\lambda$ in later sections. 
We also define $\\hat{\\sigma}_{ij}$ and $\\hat{\\alpha}_{ij}$ as follows: \n\t\\begin{align}\n\t\\label{eq:sigma_alpha}\n\t\\begin{split}\n\t\\hat{\\sigma}_{ij} \\triangleq \\frac{1}{n_i} \\sum_{t=1}^{n_i} (X_{ij}^t)^2,\\quad \\hat{\\alpha}_{ij} \\triangleq \\frac{1}{n_i} \\sum_{t=1}^{n_i} y_i^t X_{ij}^t\n\t\\end{split}\n\t\\end{align}\n\tNote that these are simple calculations and can be done in $\\mathcal{O}(d n_i)$ running time at each client. If $n_i$ can be kept small (which we will show later), this can be done even by a device with low computational ability. The choice of this exact form of $\\hat{w}_{ij}$ in equation \\eqref{eq:what} is not arbitrary. To get the intuition behind our choice, consider the following $\\ell_1$-regularized (sparse) linear regression problem at each client:\n\t\\begin{align}\n\t\\label{eq:serverlasso}\n\t\\begin{split}\n\t(\\forall i\\in\\seq{g}), \\quad \\hat{w}_i = \\arg\\min_w \\frac{1}{n_i}\\sum_{t=1}^{n_i} (w^\\intercal X_i^t - y_i^t)^2 + \\lambda \\| w \\|_1 \\; ,\n\t\\end{split}\n\t\\end{align}\n\twhere $\\| \\cdot \\|_1$ denotes the $\\ell_1$ norm of a vector. The construction of $\\hat{w}_i$ in equation \\eqref{eq:what} is the exact solution to the optimization problem \\eqref{eq:serverlasso} if the predictor variables, i.e., the entries of $X_{i}$, are assumed to be uncorrelated. Notice how the solution provided in \\eqref{eq:what} avoids any computation (or estimation) of the covariance matrix, which, in any case, could not be estimated reliably when each client has access to only a few samples. Each client $i$ sends the support $S_i = \\{ j | \\hat{w}_{ij} \\ne 0, j \\in \\seq{d} \\}$ of $\\hat{w}_i$ to the centralized server. Note that even in the worst case scenario, each client only sends $d$ bits to the centralized server.\n\t\n\t\\subsection{Information Aggregation and Constructing the Final Support}\n\t\\label{subsec:constructing final support S}\n\t\n\tWe aggregate the supports $S_i, \\forall i \\in \\seq{g}$ from all the clients and construct the final support. Before we get to the construction of the final support, we define a random variable $R_{ij}, \\forall i \\in \\seq{g}, j \\in \\seq{d}$, which takes value $1$ if $j \\in S_i$ and $0$ otherwise.\n\n\tThus, the random variable $R_{ij}$ indicates whether entry $j$ is in the support $S_i$ of client $i$. Using the random variables $R_{ij}$, we construct the final support $S$ by computing the median of $R_{ij}$ across $i \\in \\seq{g}$. If the median is $1$ then we conclude that $j$ is in the support; otherwise we conclude that $j$ is not in the support. More formally, we define a random variable $R_j \\triangleq \\frac{1}{g}\\sum_{i=1}^g R_{ij}$ and if $R_j \\geq \\frac{1}{2}$, then we conclude that $j \\in S$. Otherwise, if $R_j < \\frac{1}{2}$, then we conclude that $j \\notin S$. 
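\n\tAs an illustration, a minimal Python sketch of these two computations follows (function names and the row-wise sample layout are our own conventions; we also assume $\\hat{\\sigma}_{ij} > 0$, which holds almost surely for continuous predictors). Each client soft-thresholds its per-coordinate statistics from equations \\eqref{eq:what} and \\eqref{eq:sigma_alpha}, and the centralized server applies the entrywise majority rule:\n\\begin{verbatim}
import numpy as np

def client_support_bits(X, y, lam):
    # sigma_hat_j = mean of (X_tj)^2, alpha_hat_j = mean of y_t * X_tj.
    sigma_hat = np.mean(X ** 2, axis=0)
    alpha_hat = np.mean(y[:, None] * X, axis=0)
    # Soft-thresholding step; w_hat_j = 0 unless |alpha_hat_j| > lam.
    w_hat = np.sign(alpha_hat) * np.maximum(0.0, np.abs(alpha_hat) - lam)
    w_hat = w_hat / sigma_hat
    return (w_hat != 0).astype(int)   # the d bits R_i sent to the server

def server_support(R):
    # R is the g x d matrix of client bits; entry j is kept whenever
    # R_j = (1/g) sum_i R_ij is at least 1/2.
    return set(np.where(R.mean(axis=0) >= 0.5)[0])
\\end{verbatim}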
The above procedure can be compactly written as the following algorithms, running at the clients and at the centralized server:\n\t\n\t\\begin{algorithm}[H]\n\t\t\\label{algo:getExactSupport}\n\t\t\\begin{minipage}{0.5\\textwidth}\n\t\t\t\\begin{algorithm}[H]\n\t\t\t\t\\SetKwInOut{Input}{Input}\n\t\t\t\t\\SetKwInOut{Output}{Output}\n\t\t\t\t\\tcp{Runs in client $i, \\forall i \\in \\seq{g}$} \n\t\t\t\t\\Input{Data samples $(X_i^t, y_i^t), \\forall t \\in \\seq{n_i}$ }\n\t\t\t\t\\Output{Estimated support for shared parameter $w^*$}\n\t\t\t\t$R_i \\leftarrow \\{0\\}^d$ \\;\n\t\t\t\t\\For{each $j \\in \\seq{d}$}{\n\t\t\t\t\tCompute $\\hat{w}_{ij}$ using equations \\eqref{eq:what} and \\eqref{eq:sigma_alpha} \\;\n\t\t\t\t\t\\If{$\\hat{w}_{ij} \\ne 0$}{ \n\t\t\t\t\t\t$R_{ij} \\leftarrow 1 $ \\;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tSend $R_i$ to the centralized server \\;\n\t\t\t\\end{algorithm}\n\t\t\\end{minipage}%\n\t\n\t\t\\begin{minipage}{0.5\\textwidth}\n\t\t\t\\vspace{-1.8\\baselineskip}\n\t\t\t\\begin{algorithm}[H]\n\t\t\t\t\\SetKwInOut{Input}{Input}\n\t\t\t\t\\SetKwInOut{Output}{Output}\n\t\t\t\t\\tcp{Runs in centralized server}\n\t\t\t\t\\Input{$R_i, \\forall i \\in \\seq{g}$}\n\t\t\t\t\\Output{Estimated support $S$ for shared parameter $w^*$}\n\t\t\t\t$S \\leftarrow \\{\\}$ \\;\n\t\t\t\t\\For{each $j \\in \\seq{d}$}{ \n\t\t\t\t\tCompute $R_j = \\frac{1}{g} \\sum_{i=1}^g R_{ij}$ \\;\n\t\t\t\t\t\\If{$R_j \\geq \\frac{1}{2}$}{\n\t\t\t\t\t\t$S \\leftarrow S \\cup \\{j\\}$ \\;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\\end{algorithm}\n\t\t\\end{minipage}\n\t\t\\caption{getExactSupport}\n\t\\end{algorithm}\n\n\t\n\t\\section{Main Results and Analysis}\n\t\\label{sec:analysis}\n\t\n\tIn this section, we present and analyze our theoretical results. We present our results in two different settings. In the first setting, we assume that the predictor variables are mutually independent. We tackle the more general case of correlated predictors in the second setting.\n\t\n\t\\subsection{Mutually Independent Predictors}\n\t\\label{subsec:mutually independent predictors}\n\t\n\tIn this setting, the predictor variables are mutually independent of each other in all the clients, i.e., $\\forall i \\in \\seq{g}$, $\\mathbb{E}(X_{ij} X_{ik}) = 0, \\forall j, k \\in \\seq{d}, j \\ne k$. In this setting, we state the following result:\n\t\\begin{theorem}[Mutually Independent Predictors]\n\t\t\\label{thm:mutually independent predictors}\n\t\tFor the federated support learning for linear regression as described in Section \\ref{sec:preliminaries}, where the predictor variables are mutually independent of each other, if for some $0 < \\delta < 1$, each of the $g = \\mathcal{O}(\\log d)$ clients has $n_i = \\mathcal{O}(\\frac{1}{\\delta^2})$ data samples and if for each $i \\in \\seq{g}$ and $j \\in S^*$,\n\t\t\\begin{align*}\n\t\t\\begin{split}\n\t\t8 \\delta \\rho_i^2 \\sqrt{\\sum_{k\\in S^*} {w_k^*}^2} + 8 |\\eta_i\\rho_i|\\delta < \\lambda < |w_j^* {\\sigma_{jj}^i}^2| - 8 |w_j^*| \\rho_i^2 \\delta - 8 \\rho_i^2 \\sqrt{\\sum_{k\\in S^*, k \\ne j} {w_k^*}^2} \\delta - 8 |\\eta_i\\rho_i| \\delta \\;,\n\t\t\\end{split}\n\t\t\\end{align*} \n\t\tthen Algorithm \\ref{algo:getExactSupport} recovers the exact support for the shared parameter vector $w^*$. 
\n\t\\end{theorem} \n\tBy taking $\\delta = \\mathcal{O}(\\frac{1}{\\sqrt{s}})$, we get the following corollary:\n\t\\begin{corollary}\n\t\t\\label{cor:mutually independent predictors}\n\t\tFor the federated support learning for linear regression as described in Section \\ref{sec:preliminaries}, where the predictor variables are mutually independent of each other, if for some constant $0 < K < \\sqrt{s}$, each of the $g = \\mathcal{O}(\\log d)$ clients has $n_i = \\mathcal{O}(s)$ data samples and if for each $i \\in \\seq{g}$ and $j \\in S^*$,\n\t\t\\begin{align*}\n\t\t\\begin{split}\n\t\t&8 K \\rho_i^2 \\sqrt{\\frac{\\sum_{k\\in S^*} {w_k^*}^2}{s}} + 8 K \\frac{|\\eta_i\\rho_i|}{\\sqrt{s}} < \\lambda < |w_j^* {\\sigma_{jj}^i}^2| - 8 K \\frac{|w_j^*| \\rho_i^2}{\\sqrt{s}} - 8 K \\rho_i^2 \\sqrt{\\frac{\\sum_{k\\in S^*, k \\ne j} {w_k^*}^2}{s}} - \\\\\n\t\t& 8 K \\frac{|\\eta_i\\rho_i|}{\\sqrt{s}} \\;,\n\t\t\\end{split}\n\t\t\\end{align*} \n\t\tthen Algorithm \\ref{algo:getExactSupport} recovers the exact support for the shared parameter vector $w^*$.\n\t\\end{corollary}\n\tThe choice of such a value of $\\delta$ is to subdue the growth of the $\\sqrt{\\sum_{k\\in S^*} {w_k^*}^2}$ term, which grows approximately as $\\mathcal{O}(\\sqrt{s})$. Later on, we will empirically show that such a choice leads to a feasible range for $\\lambda$. Also observe that the overall sample complexity of our algorithm is $\\mathcal{O}(s \\log d)$, which matches the optimal sample complexity for sparse linear regression \\cite{wainwright2009info,wainwright2009sharp}; i.e., even if we had access to all the samples in a centralized server, we could not have a better sample complexity guarantee for support recovery. \n\t\n\t\\subsubsection{Proof of Theorem \\ref{thm:mutually independent predictors}}\n\t\\label{subsubsec: proof of theorem mutually independent predictors}\n\t\n\t\\begin{proof}\n\t\t\\label{proof:theorem mutually independent predictors}\n\t\tRecall that $R_j = \\frac{1}{g} \\sum_{i=1}^g R_{ij}$, where $R_{ij}$ is defined in Section \\ref{subsec:constructing final support S}. We prove that, with high probability, $R_j \\geq \\frac{1}{2}, \\forall j \\in S^*$ and $R_j < \\frac{1}{2}, \\forall j \\in S^*_c$. We provide the proof in two parts. First, we deal with entries $j$ which are in the support of $w^*$, i.e., $j \\in S^*$, and then we deal with $j \\in S^*_c$.\n\t\t\n\t\t\\paragraph{For entries $j$ in support $S^*$.}\n\t\t\n\t\tWe begin our proof by first stating the following lemma.\n\t\t\\begin{lemma}\n\t\t\t\\label{lem:support mcdiarmid}\n\t\t\tFor $j \\in S^*$, let $\\mathbb{E}(R_j) > \\frac{1}{2}$; then $R_j \\geq \\frac{1}{2}$ with probability at least $1 - 2 \\exp( - 2g (-\\frac{1}{2} + \\mathbb{E}(R_j))^2)$.\n\t\t\\end{lemma}\n\t\t\n\t\tNext we show that for $j \\in S^*$, $\\mathbb{E}(R_j)$ is indeed greater than $\\frac{1}{2}$. To that end, we state the following lemma.\n\t\t\\begin{lemma}\n\t\t\t\\label{lem:support bound on E(Rj)}\n\t\t\tFor $j \\in S^*$ and some $0 < \\delta \\leq 1$, if the predictors are mutually independent of each other and $ 0 < \\lambda < |w_j^* {\\sigma_{jj}^i}^2| - 8 |w_j^*| \\rho_i^2 \\delta - 8 \\rho_i^2 \\sqrt{\\sum_{k\\in S^*, k \\ne j} {w_k^*}^2} \\delta - 8 |\\eta_i\\rho_i| \\delta $,\n\t\t\tthen $ \\mathbb{E}(R_j) \\geq 1 - \\frac{6}{g} \\sum_{i=1}^g \\exp(-n_i\\delta^2) $. 
Furthermore, for $n_i = \\mathcal{O}(\\frac{1}{\\delta^2})$, we have $\\mathbb{E}(R_j) > \\frac{1}{2}$.\n\t\t\\end{lemma}\n\t\t\n\t\t\\paragraph{For entries $j$ in non-support $S^*_c$.}\n\t\tSimilar to the entries in the support, we begin this part by stating the following result for entries in the non-support.\n\t\t\\begin{lemma}\n\t\t\t\\label{lem:nonsupport mcdiarmid}\n\t\t\tFor $j \\in S^*_c$, let $\\mathbb{E}(R_j) < \\frac{1}{2}$; then $R_j \\leq \\frac{1}{2}$ with probability at least $1 - 2 \\exp( - 2g (\\frac{1}{2} - \\mathbb{E}(R_j))^2)$.\n\t\t\\end{lemma}\n\t\tIt remains to show that for $j \\in S^*_c$, $\\mathbb{E}(R_j)$ is smaller than $\\frac{1}{2}$. In particular, we use the result of the following lemma.\n\t\t\\begin{lemma}\n\t\t\t\\label{lem:nonsupport bound E(Rj)}\n\t\t\tFor $j \\in S^*_c$ and $0 < \\delta \\leq 1$, if the predictors are mutually independent of each other and if $ \\lambda > 8 \\delta \\rho_i^2 \\sqrt{\\sum_{k\\in S^*} {w_k^*}^2} + 8 |\\eta_i\\rho_i|\\delta $,\n\t\t\tthen $\\mathbb{E}(R_j) \\leq \\frac{4}{g} \\sum_{i=1}^g \\exp(-n_i \\delta^2)$. Furthermore, for $n_i = \\mathcal{O}(\\frac{1}{\\delta^2})$, we have $\\mathbb{E}(R_j) < \\frac{1}{2}$.\n\t\t\\end{lemma} \n\t\tThe results of Lemmas \\ref{lem:support bound on E(Rj)} and \\ref{lem:nonsupport bound E(Rj)} ensure that the conditions of Lemmas \\ref{lem:support mcdiarmid} and \\ref{lem:nonsupport mcdiarmid} hold. We would like these results to hold across all $j \\in \\seq{d}$, which requires a union bound across all the $d$ predictors. Thus, having $g = \\mathcal{O}(\\log d)$ ensures that our results hold for all entries in the support and non-support with high probability. \n\t\\end{proof}\n\t\n\t\\subsection{Correlated predictors}\n\t\\label{subsec:correlated predictors}\n\t\n\tNow that we have dealt with mutually independent predictors, we focus on correlated predictors in this section. As described previously, the covariance matrix of $X_i$ is denoted by $\\Sigma^i \\in \\mathbb{R}^{d \\times d}$ with diagonal entries $\\Sigma^i_{jj} \\equiv {\\sigma^i_{jj}}^2, \\forall j \\in \\seq{d}$ and non-diagonal entries $\\Sigma^i_{jk} \\equiv \\sigma^i_{jk}, \\forall j,k \\in \\seq{d}, j \\ne k$. Some of the results from the previous subsection can be used in this setting as well; however, the correlation between predictors affects some of the results. Below, we state the main results for this setting before proving them formally. 
\n\t\\begin{theorem}[Correlated Predictors]\n\t\t\\label{thm:correlated predictors}\n\t\tFor the federated support learning for linear regression as described in Section \\ref{sec:preliminaries}, if for some $0 < \\delta < \\frac{1}{\\sqrt{2}}$, each of the $g = \\mathcal{O}(\\log d)$ clients has $n_i = \\mathcal{O}(\\frac{1}{\\delta^2} \\log s)$ data samples and if for each $i \\in \\seq{g}$,\n\t\t\\begin{align*}\n\t\t\\begin{split}\n\t\t&(\\forall j \\in S^*_c) |\\sum_{k\\in S^*} w^*_k \\sigma^i_{jk}| + \\sum_{k\\in S^*} 8 \\sqrt{2} |w_k^*| (1 + 4 \\max_j \\frac{\\rho_i^2}{{\\sigma^i_{jj}}^2}) \\max_j {\\sigma^i_{jj}}^2 \\delta + 8 |\\eta_i \\rho_i| \\delta < \\lambda \\\\\n\t\t&< (\\forall j \\in S^*) |(w^*_j{\\sigma^i_{jj}}^2 + \\sum_{k\\in S^*, k\\ne j} w^*_k \\sigma^i_{jk})| - 8 |w_j^*| \\rho_i^2 \\delta - \\sum_{k\\in S^*, k\\ne j} 8 \\sqrt{2} |w_k^*| (1 + 4 \\max_j \\frac{\\rho_i^2}{{\\sigma^i_{jj}}^2}) \\\\\n\t\t&\\max_j {\\sigma^i_{jj}}^2 \\delta - 8 |\\eta_i \\rho_i| \\delta \\;,\n\t\t\\end{split}\n\t\t\\end{align*} \n\t\tthen Algorithm \\ref{algo:getExactSupport} recovers the exact support for the shared parameter vector $w^*$. \n\t\\end{theorem} \n\tBy taking $\\delta = \\mathcal{O}(\\frac{1}{s})$, we get the following corollary:\n\t\\begin{corollary}\n\t\t\\label{cor:correlated predictors}\n\t\tFor the federated support learning for linear regression as described in Section \\ref{sec:preliminaries}, if for some constant $0 < K < \\frac{s}{\\sqrt{2}}$, each of the $g = \\mathcal{O}(\\log d)$ clients has $n_i = \\mathcal{O}(s^2 \\log s)$ data samples and if for each $i \\in \\seq{g}$,\n\t\t\\begin{align*}\n\t\t\\begin{split}\n\t\t&(\\forall j \\in S^*_c) |\\sum_{k\\in S^*} w^*_k \\sigma^i_{jk}| + \\sum_{k\\in S^*} 8 \\sqrt{2} |w_k^*| (1 + 4 \\max_j \\frac{\\rho_i^2}{{\\sigma^i_{jj}}^2}) \\max_j {\\sigma^i_{jj}}^2 \\frac{K}{s} + 8 |\\eta_i \\rho_i| \\frac{K}{s} < \\lambda \\\\\n\t\t&< (\\forall j \\in S^*) |(w^*_j{\\sigma^i_{jj}}^2 + \\sum_{k\\in S^*, k\\ne j} w^*_k \\sigma^i_{jk})| - 8 |w_j^*| \\rho_i^2 \\frac{K}{s} - \\sum_{k\\in S^*, k\\ne j} 8 \\sqrt{2} |w_k^*| (1 + 4 \\max_j \\frac{\\rho_i^2}{{\\sigma^i_{jj}}^2}) \\\\\n\t\t&\\max_j {\\sigma^i_{jj}}^2 \\frac{K}{s} - 8 |\\eta_i \\rho_i| \\frac{K}{s} \\;,\n\t\t\\end{split}\n\t\t\\end{align*} \n\t\tthen Algorithm \\ref{algo:getExactSupport} recovers the exact support for the shared parameter vector $w^*$.\n\t\\end{corollary}\n\t\n\tAs with the previous case, the choice of such a value of $\\delta$ is to subdue the growth of terms which grow as $\\mathcal{O}(s)$. In our experiments, this leads to a feasible range for $\\lambda$. In this case, the overall sample complexity of our algorithm is $\\mathcal{O}(s^2 \\log s \\log d)$, which differs only by a factor of $s\\log s$ from the optimal sample complexity for support recovery in sparse linear regression in the centralized setting, where all the data resides in a single server \\cite{wainwright2009info,wainwright2009sharp}.\n\t\n\t\\subsubsection{Proof of Theorem \\ref{thm:correlated predictors}}\n\t\\label{subsubsec:proof of theorem correlared predictors}\n\t\n\t\\begin{proof}\n\t\t\\label{proof:theorem correlared predictors}\n\t\tRecall that $R_j = \\frac{1}{g} \\sum_{i=1}^g R_{ij}$, where $R_{ij}$ is defined in Section \\ref{subsec:constructing final support S}. We will again prove that, with high probability, $R_j \\geq \\frac{1}{2}, \\forall j \\in S^*$ and $R_j < \\frac{1}{2}, \\forall j \\in S^*_c$. 
Some of the results from the previous Section \\ref{proof:theorem mutually independent predictors} follow without any changes; we provide new results for the remaining parts. As before, we first deal with entries $j$ which are in the support of $w^*$, i.e., $j \\in S^*$, and then we deal with $j \\in S^*_c$. \n\t\t\n\t\t\\paragraph{For entries $j$ in support $S^*$.}\n\t\t\n\t\tWe observe that Lemma \\ref{lem:support mcdiarmid} holds even in this case. Thus, we start our proof by stating the following lemma.\n\t\t\\begin{lemma}\n\t\t\t\\label{lem:support bound on E(Rj) correlated}\n\t\t\tFor $j \\in S^*$ and some $0 < \\delta \\leq \\frac{1}{\\sqrt{2}}$, if $\\forall j \\in S^*$,\n\t\t\t$ 0 < \\lambda < |(w^*_j{\\sigma^i_{jj}}^2 + \\sum_{k\\in S^*, k\\ne j} w^*_k \\sigma^i_{jk})| - 8 |w_j^*| \\rho_i^2 \\delta - \\sum_{k\\in S^*, k\\ne j} 8 \\sqrt{2} |w_k^*| (1 + 4 \\max_j \\frac{\\rho_i^2}{{\\sigma^i_{jj}}^2}) \\max_j {\\sigma^i_{jj}}^2 \\delta - 8 |\\eta_i \\rho_i| \\delta $,\n\t\t\tthen $ \\mathbb{E}(R_j) \\geq 1 - \\frac{4s}{g} \\sum_{i=1}^g \\exp(-n_i\\delta^2 ) $. Furthermore, for $n_i = \\mathcal{O}(\\frac{1}{\\delta^2}\\log s)$, we have $\\mathbb{E}(R_j) > \\frac{1}{2}$.\n\t\t\\end{lemma}\n\t\t\n\t\t\\paragraph{For entries $j$ in non-support $S^*_c$.}\n\t\tAgain, Lemma \\ref{lem:nonsupport mcdiarmid} carries over directly. Thus, we present the following lemma to show that for the entries in the non-support, $\\mathbb{E}(R_j) < \\frac{1}{2}$.\n\t\t\\begin{lemma}\n\t\t\t\\label{lem:nonsupport bound E(Rj) correlated}\n\t\t\tFor $j \\in S^*_c$ and some $0 < \\delta \\leq \\frac{1}{\\sqrt{2}}$, if\n\t\t\t$ \\lambda > |\\sum_{k\\in S^*} w^*_k \\sigma^i_{jk}| + \\sum_{k\\in S^*} 8 \\sqrt{2} |w_k^*| (1 + 4 \\max_j \\frac{\\rho_i^2}{{\\sigma^i_{jj}}^2}) \\max_j {\\sigma^i_{jj}}^2 \\delta + 8 |\\eta_i \\rho_i| \\delta $, \n\t\t\tthen $\\mathbb{E}(R_j) \\leq \\frac{4s + 2}{g} \\sum_{i=1}^g \\exp(-n_i \\delta^2)$. Furthermore, for $n_i = \\mathcal{O}(\\frac{1}{\\delta^2} \\log s)$, we have $\\mathbb{E}(R_j) < \\frac{1}{2}$.\n\t\t\\end{lemma} \n\t\tThe results of Lemmas \\ref{lem:support bound on E(Rj) correlated} and \\ref{lem:nonsupport bound E(Rj) correlated} ensure that the conditions of Lemmas \\ref{lem:support mcdiarmid} and \\ref{lem:nonsupport mcdiarmid} hold. Since we would like these results to hold across all $j \\in \\seq{d}$, we need a union bound across all the $d$ predictors. Thus, having $g = \\mathcal{O}(\\log d)$ ensures that our results hold for all entries in the support and non-support with high probability. \t\n\t\\end{proof}\n\t\n\t\\subsection{Time Complexity}\n\t\\label{sub:time complexity}\n\t\n\tEach client performs $\\mathcal{O}(dn_i)$ basic calculations. Thus, from the results of Corollaries \\ref{cor:mutually independent predictors} and \\ref{cor:correlated predictors}, the time complexity at each client is $\\mathcal{O}(sd)$ for mutually independent predictors and $\\mathcal{O}(s^2 d \\log s)$ for correlated predictors. The centralized server gathers the $d$ bits of information from the $g$ clients in $\\mathcal{O}(dg)$ time. \n\t\n\t\\section{Discussion on Robustness}\n\t\\label{sec:discussion on robustness}\n\t\n\tSince our method only relies on the correct calculation of the median, it is naturally robust to the failure of a few clients. To simulate the effect of model poisoning \\cite{bhagoji2019analyzing} and stragglers, we consider that a fraction $0 < \\beta < \\frac{1}{2}$ of the clients have gone rogue (or are straggling) and transmit wrong information to the centralized server. 
For the worst case scenario, we assume that they report the complement of the support, i.e., they always send the bit ``$1$'' for entries in the non-support and the bit ``$0$'' for entries in the support. To accommodate this change in the case of correlated predictors, we slightly change the statements of Lemmas \\ref{lem:support bound on E(Rj) correlated} and \\ref{lem:nonsupport bound E(Rj) correlated}. Now we have $ (\\forall j \\in S^*), \\quad \\mathbb{E}(R_j) \\geq (1 - \\beta) - \\frac{4s}{g} \\sum_{i=1}^{(1-\\beta)g} \\exp(-n_i \\delta^2) $ and $ (\\forall j \\in S^*_c), \\quad \\mathbb{E}(R_j) \\leq \\frac{4s + 2}{g} \\sum_{i=1}^{(1 - \\beta)g} \\exp(-n_i \\delta^2) + \\beta $.\n\tIt is easy to see that, as long as we have $n_i > \\frac{1}{\\delta^2} \\log(\\frac{(8s + 4)(1 - \\beta)}{1 - 2 \\beta })$ data samples per client, we still have $\\mathbb{E}(R_j) > \\frac{1}{2}, \\forall j \\in S^*$ and $\\mathbb{E}(R_j) < \\frac{1}{2}, \\forall j \\in S^*_c$, and all our results still hold. A similar analysis can be conducted for the case of mutually independent predictors, where our results hold as long as we have $n_i > \\frac{1}{\\delta^2} \\log (\\frac{12(1 - \\beta)}{1 - 2 \\beta})$ data samples per client. \n\t\n\t\\section{Experimental Results}\n\t\\label{sec:experimental results}\n\t\n\t\\begin{figure}[!ht]\n\t\t\\centering\n\t\t\\begin{subfigure}{.45\\textwidth}\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=\\linewidth]{exact_support_rec_num_samples}\n\t\t\t\\caption{Exact support recovery against number of samples per client}\n\t\t\t\\label{fig:recnumsample}\n\t\t\\end{subfigure}%\n\t\t\\begin{subfigure}{.45\\textwidth}\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=\\linewidth]{exact_support_rec_num_clients}\t\n\t\t\t\\caption{Exact support recovery against number of clients}\n\t\t\t\\label{fig:recnumclient}\n\t\t\\end{subfigure}\n\t\t\\caption{Phase transition curves. Left: Exact support recovery averaged across $30$ runs against varying number of samples per client for $d = 500, 1000$, and $2000$, $s = 3$, $g = \\mathcal{O}(\\log d)$ clients. Right: Exact support recovery averaged across $30$ runs against varying number of clients for $s = 10, 20, 40$, and $50$, $d = 1000$, $n = \\max(30, \\mathcal{O}(s^2 \\log s))$ samples per client.}\n\t\t\\label{fig:recovery}\n\t\\end{figure} \n\t\n\t\n\tIn this section, we validate our theoretical results by conducting computational experiments. We provide the results for the experiments where the predictors are correlated. Data in each client is generated by following the generative process described in equation \\eqref{eq:generative model}. Note that the predictors and the error terms in different clients follow different sub-Gaussian distributions. To make the setup more general, we keep the correlation between two entries in the support different from the correlation between an entry in the support and an entry in the non-support, and these further vary across clients. The regularization parameter $\\lambda$ is chosen such that the condition in Corollary \\ref{cor:correlated predictors} is satisfied for every client and for every entry in the support and non-support. All the results reported here are averaged over 30 independent runs. 
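\n\tFor concreteness, the per-client data generation with correlated predictors could be sketched as follows (the correlation values, the Gaussian instantiation and the helper's name are illustrative assumptions, not the exact settings of our runs; small correlations keep the covariance matrix positive definite):\n\\begin{verbatim}
import numpy as np

def correlated_client(w_star, support, n_i, r_ss=0.3, r_sn=0.1,
                      eta_i=0.5, rng=None):
    # Unit-variance predictors; correlation r_ss between two support
    # entries and r_sn between a support and a non-support entry.
    rng = rng or np.random.default_rng()
    d = w_star.shape[0]
    Sigma = np.eye(d)
    sup = set(support)
    for j in sup:
        for k in range(d):
            if k != j:
                Sigma[j, k] = Sigma[k, j] = r_ss if k in sup else r_sn
    X = rng.multivariate_normal(np.zeros(d), Sigma, size=n_i)
    y = X @ w_star + rng.normal(0.0, eta_i, size=n_i)
    return X, y
\\end{verbatim}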
We conduct two separate experiments to verify that $n_i = \\mathcal{O}(s^2\\log s)$ independent samples per client and a total of $g = \\mathcal{O}(\\log d)$ clients are sufficient to recover the true support.\n\t\n\t\\paragraph{Exact support recovery against number of samples per client.} \n\tThis experiment was conducted for a varying number of predictors ($d = 500, 1000$ and $2000$). For each of them, we fixed the number of clients to be $g = 2 \\log d$. The sparsity $s$ was kept fixed at $3$. The number of samples per client $n_i$ was varied with a control parameter $C$ as $10^C s^2 \\log s$. The performance of our method is measured by assigning the value $1$ to exact recovery and $0$ otherwise. We can see in Figure \\ref{fig:recnumsample} that, initially, recovery remains at $0$ and then there is a sharp jump after which recovery becomes $1$. Notice how all three curves align perfectly. This validates the result of our theorem and shows that, given $g = \\mathcal{O}(\\log d)$ clients, $n_i = \\mathcal{O}(s^2 \\log s)$ samples per client are sufficient to recover the true support. \n\t\n\t\\paragraph{Exact support recovery against number of clients.}\n\tThe second experiment was conducted for a varying number of non-zero entries ($s=10, 20, 40$ and $50$) in the support of $w^*$. The experiments were run for a setup with $d=1000$ predictors. We fixed the number of samples per client ($n_i$) to be $\\max(30, \\mathcal{O}(s^2 \\log s))$. This ensures that a minimum of $30$ samples is available to each client, in line with our previous experiment, where exact recovery is achieved around $30$ samples per client. The number of clients $g$ was varied with a control parameter $C$ as $10^C \\log d$. As in the previous experiment, performance is measured by assigning the value $1$ to exact recovery and $0$ otherwise. We can again see in Figure \\ref{fig:recnumclient} that, initially, recovery remains at $0$ and then goes to $1$ as we increase the number of clients. We also notice that all four curves align nicely. This validates that, given $n_i = \\mathcal{O}(s^2 \\log s)$ independent samples per client, $g = \\mathcal{O}(\\log d)$ clients are sufficient to recover the true support. \n\t%\n \n\t\n\t\\section{Concluding Remarks}\n\t\\label{sec:conclusion}\n\n\tIn this paper, we propose a simple and easy-to-implement method for learning the exact support of the parameter vector of a linear regression problem in a federated learning setup. We provide theoretical guarantees for the correctness of our method. We also show that our method has polynomial sample and time complexity. Furthermore, our method is robust to client failures, model poisoning and straggling clients. As a future direction, it would be interesting to analyze whether the bound on the sample complexity in the case of correlated predictors matches the corresponding information-theoretic lower bounds.\n\t\n\t\n\n\t\n\n\\section{Introduction}\n\nThis paper starts by introducing a mathematical framework for our solution. It then describes a procedure to build a generic identifier of a factor's activations. Finally, it presents the results of the procedure for two particular statistical models of the measure grid's activity. 
This paper has been influenced by modern machine learning techniques, reviewed in \\cite{PCML-1}, especially algorithms that perform automated feature engineering, such as neural networks and deep learning \\cite{PCML-2}, as well as tree learning techniques \\cite{PCML-3} and their improvements \\cite{PCML-4} and \\cite{PCML-5}. Finally, modern signal processing techniques, which I was taught at the Ecole Polytechnique F\\'ed\\'erale de Lausanne, reviewed in \\cite{SP-1}, and recent work on statistics in large dimensions, to which I was introduced during my stay at the Laboratoire d'Informatique Gaspard Monge, reviewed in \\cite{TAO-1}, have been decisive in the conception of this paper. In order to make the core subject of this report more consistent, we introduce the following notations:\n\n \\begin{figure}[h]\n \\centering\n \\includegraphics[scale=0.25]{figures\/notations.png}\n \\caption{Measure grid model}\n \\label{fig:model_overview}\n\\end{figure}\n\n\n\\begin{itemize}\n\\item $n$ the size of the measure grid. \n\\item $\\mathcal{G}$ the measure grid, composed of bits, $\\mathcal{G} = \\lbrace b_1, \\ldots b_n \\rbrace$.\n\\item $S(\\mathcal{G})$ the set of all subsets of $\\mathcal{G}$, $\\vert S(\\mathcal{G})\\vert = 2^{n}$.\n\\item $S(\\mathcal{G}, l)$ the set of all subsets of $\\mathcal{G}$ of size $l$.\n\\item $K$ the number of latent factors.\n\\item $\\mathcal{F}$ the set of latent factors, $\\mathcal{F} = \\lbrace f_1, \\ldots f_K \\rbrace$.\n\\item $S(\\mathcal{F})$ the set of all subsets of $\\mathcal{F}$, $\\vert S(\\mathcal{F})\\vert = 2^{K}$.\n\\item $S(\\mathcal{F}, l)$ the set of all subsets of $\\mathcal{F}$ of size $l$.\n\\item $\\mathcal{G}(f)$ the set of bits activated by factor $f$, $\\mathcal{G}(f) \\in S(\\mathcal{G})$ and $f \\in \\mathcal{F}$. \n\\item $\\mathcal{G}^{-1}(b)$ the set of factors that activate the grid's bit $b$, $\\mathcal{G}^{-1}(b) \\in S(\\mathcal{F})$ and $b \\in \\mathcal{G}$.\n\\item $F(2)$ the field with elements $\\{0, 1\\}$, equipped with the logical XOR and the logical AND respectively as the addition and the multiplication.\n\\end{itemize}\n\n \n\\section{Definitions And Properties}\n\n\\subsection{Statistical definitions}\n\nIn this section we provide a formalism for the statistical description of factors' activity and their signature on the measure grid.\n\n\\subsubsection*{Activation of factors}\n\nEach factor takes a value in $\\{ 0, 1\\}$ at each instant of time. A factor with value 1 at some instant is active; otherwise it has value 0. At this stage of the paper we assume no particular statistical model for factors. Nevertheless, if we consider the set of all possible combinations of active and inactive factors ($F(2)^{K}$), we assume that there is a well-defined distribution $d'$ such that \n\n\\begin{align*}\nd' &: F(2)^{K} \\rightarrow [0,1]\\\\\nd'_x &= \\mathbb{P}\\left(f_1 = x_1, \\ldots, f_K = x_K \\right)\n\\end{align*}\n\nThe statistical signature of a factor on the measure grid describes how the factor is linked to the measure grid's bits. At this stage we simply assume that there is a well-defined probability measure so that for any $I \\in S(\\mathcal{G})$\n\n\\begin{align*}\n\\mathbb{P}(\\mathcal{G}(f) = I) &\\in [0, 1]\\\\\n\\sum_{I \\in S(\\mathcal{G})} \\mathbb{P}(\\mathcal{G}(f) = I) &=1\n\\end{align*}\n\nLatent factors' activations and signatures on the measure grid induce activations of the measure grid's bits. 
We refer to this distribution over all possible combinations of activations of bits as $d$, and define it as\n\n\\begin{align*}\nd &: F(2)^{n} \\rightarrow [0,1]\\\\\nd_x &= \\mathbb{P}\\left(b_1 = x_1, \\ldots, b_n = x_n \\right)\n\\end{align*}\n\nFinally, we can also model the connection between factors and a measure grid's bit as a signature of the grid's bit on the factor space. That is, for $I \\in S(\\mathcal{F})$, there is a well-defined probability measure so that \n\n\\begin{align*}\n\\mathbb{P}(\\mathcal{G}^{-1}(b) = I) &\\in [0, 1]\\\\\n\\sum_{I \\in S(\\mathcal{F})} \\mathbb{P}(\\mathcal{G}^{-1}(b) = I) &=1\n\\end{align*}\n\n\n\\subsubsection*{Characteristic polynomial}\n\nThe activity of factors and grid's bits may be modeled using sets of multivariate polynomials whose domain and image are respectively $F(2)^{K}$ (or $F(2)^{n}$) and $F(2)$. The sets of polynomials associated with sets $I \\in S(\\mathcal{F})$ and $I' \\in S(\\mathcal{G})$ are denoted respectively $\\lbrace \\mathcal{P}_{I, l} \\rbrace_{l \\in \\mathbb{N}}$ and $\\lbrace \\mathcal{P}_{I', l} \\rbrace_{l \\in \\mathbb{N}}$. They represent a segmentation of the states of the factors in $I$ and of the measure grid's bits in $I'$, respectively.\n\n\n\\begin{align*}\n\\mathcal{P}_{I, l}&: F(2)^{K} \\rightarrow F(2)\\\\\n\\mathcal{P}_{I, l} [\\textbf{x}] &= \\begin{cases} \\sum_{\\pi \\in S(I, l)} x_{\\pi_1} \\cdot \\ldots \\cdot x_{\\pi_l}, & \\text{if }\\ l \\in \\lbrace 1, \\ldots, \\vert I \\vert \\rbrace \\\\ 0, & \\text{otherwise} \\end{cases}\n\\end{align*}\n\nand \n\n\\begin{align*}\n\\mathcal{P}_{I', l}&: F(2)^{n} \\rightarrow F(2)\\\\\n\\mathcal{P}_{I', l} [\\textbf{x}'] &= \\begin{cases} \\sum_{\\pi \\in S(I', l)} x'_{\\pi_1} \\cdot \\ldots \\cdot x'_{\\pi_l}, & \\text{if }\\ l \\in \\lbrace 1, \\ldots, \\vert I' \\vert \\rbrace \\\\ 0, & \\text{otherwise} \\end{cases}\n\\end{align*}\n\n\nWhere $S(I, l)$ and $S(I', l)$ are the sets of all subsets of size $l$ of respectively $I$ and $I'$, $\\textbf{x} = [x_{f_1}, \\ldots, x_{f_K}]$ and $\\textbf{x}' = [x_{b_1}, \\ldots, x_{b_n}]$. Furthermore we define the characteristic polynomials of sets $I \\in S(\\mathcal{F})$ and $I' \\in S(\\mathcal{G})$ at levels $l_0 \\in \\{0, \\ldots, \\vert I \\vert \\}$ and $l'_0 \\in \\{0, \\ldots, \\vert I' \\vert \\}$ as \n\n\\vspace{10px}\\noindent\\begin{minipage}{.5\\linewidth}\n\\begin{align*}\n\\mathcal{P}^{l_0}_{I} &: F(2)^{K} \\rightarrow F(2)\\\\\n\\mathcal{P}^{l_0}_{I} &= \\sum_{l = l_0}^{\\vert I \\vert} \\mathcal{P}_{I, l}\n\\end{align*}\n\\end{minipage}%\n\\noindent\\begin{minipage}{.5\\linewidth}\n\\begin{align*}\n\\mathcal{P}^{l'_0}_{I'} &: F(2)^{n} \\rightarrow F(2)\\\\\n\\mathcal{P}^{l'_0}_{I'} &= \\sum_{l = l'_0}^{\\vert I' \\vert} \\mathcal{P}_{I', l}\n\\end{align*}\n\\end{minipage}\\vspace{15px}\n\nSo far, the addition has been set to be the logical XOR in the definition of the field $F(2)$. However, in the rest of this report, we will use the symbols $+$ and $\\sum$ to represent a logical OR in $F(2)$. This notation considerably shortens the writing of complex polynomials. Denoting by $\\oplus$ the logical XOR and by $\\bar{x}$ the negation of $x$, one has\n\n\\begin{equation*}\nx \\oplus y = (x \\cdot \\bar{y}) + (\\bar{x} \\cdot y) \n\\end{equation*} \n\n\\subsubsection*{Operators on polynomials}\n\nIn order to qualify a set of factors and grid's bits, we define some basic operators. 
First, let $I_{\\mathcal{G}}$ be a subset of $S(\\mathcal{G})$; we denote by $F_2(I_{\\mathcal{G}}, \\mathcal{G})$ the operator that transforms $I_{\\mathcal{G}}$ into a set of $\\vert I_{\\mathcal{G}} \\vert$ vectors in $F(2)^{\\vert \\mathcal{G} \\vert}$. \n\n\\begin{equation*}\nF_2(I_{\\mathcal{G}}, \\mathcal{G}) : S(\\mathcal{G}) \\rightarrow F(2)^{\\vert \\mathcal{G} \\vert \\times \\vert I_{\\mathcal{G}} \\vert} \n\\end{equation*}\n\nFor each vector $X \\in F_2(\\{I\\}, \\mathcal{G})$ such that $I\\in I_{\\mathcal{G}}$, an entry takes value $1$ if the associated index belongs to $I$, and 0 otherwise. This operator is convenient for evaluating characteristic polynomials. As an example, let $(I, I') \\in S(\\mathcal{G})^{2}$ and $l_0 \\in \\{ 1, \\ldots, \\vert I \\vert \\}$; then \n\n\\begin{align*}\n\\sum_{x \\in F_2(\\lbrace I' \\rbrace, \\mathcal{G})} \\mathcal{P}_I^{l_0} \\left[ x \\right] &= \\begin{cases} 1, & \\text{if }\\ \\vert I'\\cap I \\vert \\geq l_0 \\\\ 0, & \\text{otherwise} \\end{cases}\n\\end{align*}\n\nFurthermore, given the distribution $d$ over the measure grid's bits' activations, we define the norm of a characteristic polynomial $\\mathcal{P}_I^{l_0}$ with respect to $d$ as\n\n\\begin{align*}\n\\Vert . \\Vert_{d} &: \\mathcal{P}_{F_2(S(\\mathcal{G}), \\mathcal{G})} \\rightarrow \\left[ 0, 1\\right] \\\\\n\\Vert \\mathcal{P}_I^{l_0} \\Vert_{d} &= \\sum_{x \\in F_2(S(\\mathcal{G}), \\mathcal{G})} \\mathcal{P}_I^{l_0}\\left[ x \\right] \\times d_x\n\\end{align*}\n\n\nWhere $\\mathcal{P}_{F_2(S(\\mathcal{G}), \\mathcal{G})}$ denotes the space of all polynomials with domain $F_2(S(\\mathcal{G}), \\mathcal{G})$ and $\\times$ is the usual multiplication in $\\mathbb{R}$. Finally, keeping the previous notations, let $\\lbrace \\mathcal{P}_{I_{i}}^{l_i} \\rbrace_{i=1, ..., k}$ be a set of characteristic polynomials for some integer $k \\geq 2$; we define the product operator with respect to $d$ as \n\n\\begin{align*}\n\\langle ., \\ldots, . \\rangle_{d} &: \\mathcal{P}_{F_2(S(\\mathcal{G}), \\mathcal{G})}^{k} \\rightarrow \\left[ 0, 1\\right] \\\\\n\\langle \\mathcal{P}^{l_1}_{I_1}, \\ldots, \\mathcal{P}^{l_k}_{I_k} \\rangle_{d} &= \\sum_{x \\in F_2(S(\\mathcal{G}), \\mathcal{G})} \\left( \\mathcal{P}^{l_1}_{I_1} \\left[ x \\right] \\cdot \\ldots \\cdot \\mathcal{P}^{l_k}_{I_k} \\left[ x \\right] \\right) \\times d_x\n\\end{align*}\n\nWhere $\\cdot$ denotes the usual multiplication in $F(2)$ and $\\times$ is the usual multiplication in $\\mathbb{R}$. Finally, each operator specified above can also be defined in the factor space, using the characteristic polynomials in the factor space and the distribution $d'$ over factors' activations.\n\n\n\\subsubsection*{Stochastic processes induced by factors' activations}\n\nFactors' activations are observed as strictly stationary stochastic processes. That is, to a couple $(I, l) \\in S(\\mathcal{F}) \\times \\{1, \\ldots, \\vert I \\vert\\}$, we associate a stochastic process $x_I^l[t]$ defined as \n\n\n\\begin{equation*}\nx_I^l[t] = \\begin{cases} 1, & \\text{with probability }\\ \\Vert \\mathcal{P}_I^l \\Vert_{d'}\\\\ 0, & \\text{Otherwise} \\end{cases}\n\\end{equation*}\n\nwith $\\{ x_I^l[t] \\}_{t\\in \\mathbb{N}}$ pairwise independent. Factors' signatures and their activations lead to bits' activations, which are also observed as strictly stationary stochastic processes. 
Again, to a couple $(I, l) \\in S(\\mathcal{G}) \\times \\{1, \\ldots, \\vert I \\vert\\}$, we associate a stochastic process $x_I^l[t]$ defined as \n\n\n\\begin{equation*}\nx_I^l[t] = \\begin{cases} 1, & \\text{with probability }\\ \\Vert \\mathcal{P}_I^l \\Vert_{d} \\\\ 0, & \\text{Otherwise} \\end{cases}\n\\end{equation*}\n\nwith $\\{ x_I^l[t] \\}_{t\\in \\mathbb{N}}$ pairwise independent.\n\n\\subsection{Firing Graph}\nThe firing graph is the main data structure used in our solution. In this section we propose a definition of it, as well as basic tools to support its analysis.\n \n\\subsubsection*{Graph specification}\nThe algorithm presented in this report uses a particular data structure that we refer to as a firing graph, denoted $G(V, D_w)$. \n\n\\begin{itemize}\n\\item $V$ is the set of vertices $V = \\lbrace v_1, \\ldots, v_{\\vert V \\vert} \\rbrace$\n\\item $D_w$ is the weighted directed link matrix, $D_w \\in \\mathbb{N}^{\\vert V \\vert \\times \\vert V \\vert}$, where $\\left[ D_w \\right]_{i, j} = w$ indicates an edge of weight $w$ from vertex $v_i$ to vertex $v_j$ if $w > 0$ \n\\end{itemize}\n\n$G$ is a directed weighted graph whose vertices are organized in layers. A vertex $v$ of some layer $i \\in \\mathbb{N}$ must have at least one incoming edge from a vertex of layer $i-1$. It may also have incoming edges from any vertices of layer $k \\in \\mathbb{N}, k < i$. Such a set of vertices will be referred to as the input domain of $v$. Vertices of layer $0$ have empty input domains; they correspond to bits of the measure grid $\\mathcal{G}$. Each vertex stores the tuple $(I, l_0)$\n\n\\begin{itemize}\n\\item $I$ the set of vertices at the tail of an incoming edge of the vertex, referred to as the input set\n\\item $l_0$ the firing rate's lower bound of the vertex, referred to as the level, $l_0 \\in \\lbrace 1, \\ldots, \\vert I \\vert \\rbrace$ \n\\end{itemize}\n\n \\begin{figure}[H]\n \\centering\n \\includegraphics[scale=0.25]{figures\/firing_graph.png}\n \\caption{Firing graph}\n \\label{fig:firing_graph}\n\\end{figure}\n\n\\subsubsection*{Graph Polynomials}\n\nAs for the bits of the measure grid and the factors, a vertex $v(I, l_0)$ of a firing graph is associated with the set of polynomials $\\lbrace \\mathcal{P}_{I, l} \\rbrace_{l \\in \\{l_0, \\ldots, \\vert I \\vert \\ \\}}$. Each polynomial is a segment of the characteristic polynomial $\\mathcal{P}_v$, which describes the activation of $v$ at instant $t$ given its input domain's activations at instant $t-1$. If we denote by $n$, $I$ and $l_0$ respectively the size of the input domain of $v$, the set of vertices that have a link toward $v$ and the level of $v$, then \n\n\\vspace{10px}\\noindent\\begin{minipage}{.5\\linewidth}\n\\begin{align*}\n\\mathcal{P}_{v, l}&: F(2)^{n} \\rightarrow F(2)\\\\\n\\mathcal{P}_{v, l}[\\textbf{x}] &= \\begin{cases} \\sum_{\\pi \\in S(I, l)} x_{\\pi_1} \\cdot x_{\\pi_2} \\cdot \\ldots \\cdot x_{\\pi_l}, & \\text{if }\\ l \\in \\lbrace l_0, \\ldots, \\vert I \\vert \\rbrace \\\\ 0, & \\text{otherwise} \\end{cases}\n\\end{align*}\n\\end{minipage}%\n\\noindent\\begin{minipage}{.5\\linewidth}\n\\begin{align*}\n\\mathcal{P}_v &: F(2)^{n} \\rightarrow F(2)\\\\\n\\mathcal{P}_v[\\textbf{x}] &= \\sum_{l = l_0}^{\\vert I \\vert} \\mathcal{P}_{v, l}[\\textbf{x}]\n\\end{align*}\n\\end{minipage}\\vspace{15px}\n\nWhere $S(I, l)$ is the set of all subsets of size $l$ of elements of $I$ and $\\textbf{x} \\in F_2(\\{ I \\}, D_v)$, where $D_v$ is the input domain of $v$. Furthermore, all the operators on polynomials defined previously remain applicable. 
Let $v, v_1, \\ldots, v_k$ be some vertices of the firing graph with the same input domain and $d$ a distribution over the activations of their input domain's vertices. Then the norm and the product with respect to the distribution $d$ are defined as\n\n\n\\vspace{10px}\\begin{minipage}{.3\\linewidth}\n\\begin{align*}\n\\Vert \\mathcal{P}_v \\Vert_{d} = \\sum_{x \\in F_2(S(\\mathcal{G}), \\mathcal{G})} \\mathcal{P}_v\\left[ x \\right] \\times d_x\n\\end{align*}\n\\end{minipage}%\n\\noindent\\begin{minipage}{.7\\linewidth}\n\\begin{align*}\n\\langle \\mathcal{P}_{v_1}, \\ldots, \\mathcal{P}_{v_k} \\rangle_{d} = \\sum_{x \\in F_2(S(\\mathcal{G}), \\mathcal{G})} \\left( \\mathcal{P}_{v_1} \\left[ x \\right] \\cdot \\ldots \\cdot \\mathcal{P}_{v_k} \\left[ x \\right] \\right) \\times d_x\n\\end{align*}\n\\end{minipage}\\vspace{15px}\n\n\nFinally, activations of vertices are observed as stochastic processes. Given a vertex $v(I, l)$ we define\n\n\\begin{equation*}\nx_v[t] = \\begin{cases} 1, & \\text{with probability }\\ \\Vert \\mathcal{P}_v \\Vert_{d}\\\\ 0, & \\text{Otherwise} \\end{cases}\n\\end{equation*}\n\nthe stochastic process that takes value 1 if vertex $v$ activates at instant $t$ and 0 otherwise. If the measure grid's bits compose layer 0 of the firing graph then, from the definition of the bits' stochastic processes and the linearity of state propagation, $x_v[t]$ is strictly stationary.\n\n\n\\subsubsection*{Connection to grid's bits}\n\nThe firing graph is a convenient data structure to measure the activity of complex groups of measure grid's bits. When the firing graph's layer 0 is composed of measure grid's bits, the characteristic polynomial of each vertex can be represented as a characteristic polynomial over the measure grid's space, without consideration of time and delays. Let $G$ be such a firing graph; then for any vertex of layer 1, $v(I, \\vert I \\vert)$, the characteristic polynomial of $v$ is equal to the characteristic polynomial of the set of bits $I \\subset \\mathcal{G}$ with level $\\vert I \\vert$.\n\n\\begin{align*}\n\\mathcal{P}_{v} &= \\mathcal{P}_{I, \\vert I \\vert}\\\\\nx_v[t] &= x_{I}^{\\vert I \\vert}[t - 1] \n\\end{align*}\n\nFurthermore, if we set the level of $v$ to 1, its characteristic polynomial becomes the logical OR-sum of the characteristic polynomials of the bits of $I$\n\n\\begin{align*}\n\\mathcal{P}_{v} &= \\sum_{b \\in I} \\mathcal{P}_{\\{ b \\}, 1}\\\\\nx_v[t] &= \\begin{cases} 1, & \\text{if }\\ \\sum_{b \\in I} x_{\\{ b \\}}^1[t - 1] > 0\\\\ 0, & \\text{Otherwise} \\end{cases}\n\\end{align*}\n\n\nBesides, one can design more complex arrangements of vertices that make it possible to model the activations of multiple sets of measure grid's bits. Let $G$ be a firing graph with its layer 0 composed of $\\mathcal{G}$, let $u(I, \\vert I \\vert)$ and $v(I', 1)$, such that $I \\cap I' = \\emptyset$, be vertices of layer 1 and $w(\\{ u, v \\}, 2)$ a vertex of layer 2. Then one can see that the characteristic polynomial of $w$ verifies\n\n\\begin{align*}\n\\mathcal{P}_{w} &= \\sum_{b \\in I'} \\mathcal{P}_{I \\cup \\{ b \\}, \\vert I \\vert + 1}\\\\\nx_w[t] &= \\begin{cases} 1, & \\text{if }\\ \\sum_{b \\in I'} x_{I \\cup \\{ b \\}}^{\\vert I \\vert +1}[t - 2] > 0\\\\ 0, & \\text{Otherwise} \\end{cases}\n\\end{align*}\n\n\n\\subsection{Evaluation of measure grid's bits}\n\nA perfect indicator of the activation of a given factor $f$ can be used to evaluate whether any given set of bits is part of $f$'s signature on the measure grid. 
\n\n\\subsubsection*{Factor's signature}\n\n\nOne way to describe the activity of a factor $f$ on the measure grid is to associate it to a polynome in the measure grid's space\n\n\\begin{align*}\n\\mathcal{P}_{\\mathcal{G}(f)} &: F(2)^n \\rightarrow F(2)\\\\\n \\mathcal{P}_{\\mathcal{G}(f)} &= \\mathcal{P}_{\\mathcal{G}(f), \\vert \\mathcal{G}(f) \\vert}\n\\end{align*}\n\n$\\mathcal{P}_{\\mathcal{G}(f)}$ is refered as the polynomial signature of $f$ on $\\mathcal{G}$. Anytime $f$ is active then its polynomial signature takes value 1. Yet under particular modelling of factor's links to measure grid, the polynomial signature of $f$ can take value 1 while $f$ is not active. More formally let $f \\in \\mathcal{F}$, $\\forall I \\in S(\\mathcal{F})$ such that $x \\in F_2(\\{I\\}, \\mathcal{F})$ and $x' \\in F_2(\\{\\cup_{f \\in I} \\mathcal{G}(f) \\}, \\mathcal{G})$\n\\begin{center}\n$\\mathcal{P}_f[x] = 1 \\Rightarrow \\mathcal{P}_{\\mathcal{G}(f)}[x'] = 1$\n\\end{center}\n\nFurthermore if $!\\exists J \\in S(\\mathcal{F} \\setminus \\{ f \\})$ such that $\\mathcal{G}(f) \\subset \\bigcup_{f' \\in J} \\mathcal{G}(f')$ then\n\n\\begin{center}\n$\\mathcal{P}_f[x] = 1 \\Leftrightarrow \\mathcal{P}_{\\mathcal{G}(f)}[x'] = 1$\n\\end{center}\n\n\\subsubsection*{basic metrics}\n\nLet $I \\in S(\\mathcal{G})$, $l \\in \\{ 1, \\ldots, \\vert I \\vert \\}$, $f \\in \\mathcal{F}$ and $e$ the event \"factor $f$ is active\". Then we define the recall coefficient of couple $(I, l)$ with respect to $f$ as\n\n\\begin{equation*}\n\\mu_{I, l, f} = \\langle \\mathcal{P}^{l}_{I}, \\mathcal{P}_{\\mathcal{G}(f)} \\rangle_{d \\vert e} + \\langle \\mathcal{P}^{l}_{I}, \\bar{\\mathcal{P}}_{\\mathcal{G}(f)} \\rangle_{d \\vert e}\n\\end{equation*}\n\nWhere $d \\vert e$ is the distribution over bit's activations given event $e$ and $\\bar{\\mathcal{P}}_{\\mathcal{G}(f)}$ is the complement of $\\mathcal{P}_{\\mathcal{G}(f)}$ in $F(2)$. Furthermore we define the precision coefficient of couple $(I, l)$ with respect to $f$ as\n\n\\begin{equation*}\n\\nu_{I, l, f} = \\langle \\mathcal{P}^{l}_{I}, \\mathcal{P}_{\\mathcal{G}(f)} \\rangle_{d \\vert \\bar{e}} + \\langle \\mathcal{P}^{l}_{I}, \\bar{\\mathcal{P}}_{\\mathcal{G}(f)} \\rangle_{d \\vert \\bar{e}}\n\\end{equation*}\n\nWhere $d \\vert \\bar{e}$ is the distribution over bit's activations given not event $e$. Finally we define the purity coefficient of couple $(I, l)$ with respect to $f$ as \n\n\\begin{equation*}\n\\omega_{I, l, f} = \\frac{\\nu_{I, l, f}}{\\mu_{I, l, f}}\n\\end{equation*}\n\n\nThe lower $\\omega_{I, l, f}$ is, the purer is the couple ($I$, $l$) with respect to $f$. The recall, precision and purity coefficient can be defined for any vertex $v$ of a firing graph where vertices of layer 0 are composed by measure grid's bit and are denoted respectively $\\mu_{v, f}$, $\\nu_{v, f}$ and $\\omega_{v, f}$. The latter are computed by using the representation of $\\mathcal{P}_v$ as a characteristic polynomial in the measure grid's space.\n\n\\subsubsection*{advanced metrics}\n\n \nLet $I \\in S(\\mathcal{G})$, $l \\in \\{ 1, \\ldots, \\vert I \\vert \\}$, $f \\in \\mathcal{F}$ and $e$ the event \"factor $f$ is active\". 
We define the precision of the couple $(I, l)$ with respect to factor $f$ as\n\n\\begin{equation*}\n\\phi_{I, l, f} = \\frac{\\Vert \\mathcal{P}^{l}_I \\Vert_{d, e}}{\\Vert \\mathcal{P}^{l}_I \\Vert_{d}}\n\\end{equation*}\n\nWe also define the recall of the couple $(I, l)$ with respect to factor $f$ as\n\n\\begin{equation*}\n\\psi_{I, l, f} = \\frac{\\Vert \\mathcal{P}^{l}_I \\Vert_{d, e}}{\\Vert \\mathcal{P}_{\\mathcal{G}(f)} \\Vert_{d, e}}\n\\end{equation*}\n\nWhere $d, e$ denotes the distribution over the combinations of activations of measure grid's bits that intersect with the event $e$. The precision and the recall are defined for any vertex $v$ of a firing graph whose layer 0 is composed of measure grid's bits, and are denoted respectively $\\phi_{v, f}$ and $\\psi_{v, f}$. Again, the latter are computed by using the representation of $\\mathcal{P}_v$ as a characteristic polynomial over the measure grid's space.\n\n\\subsubsection*{Advanced stochastic processes induced by a vertex}\n\nGiven a firing graph with its layer 0 composed of measure grid's bits, we have seen that the propagation of activations induces a stochastic process at each vertex. Here we introduce some more complex stochastic processes at each vertex of $G$. Given a vertex $v$ at layer $k \\geq 0$, its characteristic polynomial $\\mathcal{P}_{v}$, a factor $f \\in \\mathcal{F}$ and $e$ the event ``factor $f$ is active'', we define the score process of $v$ with respect to factor $f$ as\n\n\\begin{equation*}\ns_{v,f}\\left[N, T, p, q \\right] = N + \\sum_{t=1}^{T} s_{v, p, q, t, f}\n\\end{equation*}\n\nWhere $(N, T, p, q) \\in \\mathbb{N}^4$ and $\\lbrace s_{v, p, q, t, f} \\rbrace_{t \\in \\mathbb{N}}$ is a set of i.i.d.\\ random variables. $s_{v, p, q, t, f}$ takes value $q$ if the event $e$ was true at instant $t - k$ and value $-p$ if it was false, given that $v$ activates at instant $t$. That is, $\\forall$ $t < k $, $s_{v, p, q,t, f} = 0$ and $\\forall$ $t \\geq k$\n\n\\begin{equation*}\ns_{v, p,q, t, f} = \\begin{cases} q, & \\text{with probability } q_s \\\\ -p, & \\text{with probability } 1 - q_s \\end{cases}\n\\end{equation*}\n\nWhere $q_s = \\frac{q_r}{q_r + q_p}$ with $q_r = \\Vert \\mathcal{P}_v \\Vert_{d,e} $ and $q_p = \\Vert \\mathcal{P}_v \\Vert_{d} - q_r$, and $d, e$ is the distribution over the measure grid's activations that intersect with the event $e$.\n\n\\subsection{Properties}\n\nThis paragraph intends to provide useful properties for the analysis of the algorithm. The proofs of all properties can be found in Appendix A at the end of this paper.\n\n\\subsubsection*{Polynomial decomposition}\n\n\\underline{Partition}\\\\\n\nLet $v_1(I, l_0)$, $v_2(J, 0)$ and $v_3(K, 0)$ be three vertices at layer 1 of some firing graph, with the same input domain $\\mathcal{G}$. 
If $I = J \\cup K$ and $J \\cap K = \\emptyset$, then, $\\forall x \\in F_2(S(\\mathcal{G}), \\mathcal{G})$\n\n\\begin{equation}\n\\label{prop:partition-1}\n\\mathcal{P}_I^{l_0}\\left[ x \\right] = \\sum_{l=l_0}^{\\vert I \\vert} \\sum_{j=0}^{\\vert J \\vert} \\mathcal{P}_{J, j}\\left[ x \\right] \\cdot \\mathcal{P}_{K, l - j}\\left[ x \\right] \n\\end{equation} \n\nIn particular, for $b \\in I$\n\n\\begin{equation}\n\\label{prop:partition-2}\n\\mathcal{P}_{I, l}\\left[ x \\right] = \\mathcal{P}_{I\\setminus \\{ b\\}, l}\\left[ x \\right] \\cdot \\mathcal{P}_{\\{ b\\}, 0}\\left[ x \\right] + \\mathcal{P}_{I\\setminus \\{ b\\}, l-1}\\left[ x \\right] \\cdot \\mathcal{P}_{\\{ b\\}, 1}\\left[ x \\right]\n\\end{equation} \n\n\\underline{Decomposition}\\\\\n\nLet $G$ be a firing graph with layer 0 composed of $\\mathcal{G}$. Let $u(I, l_u)$ and $v(I', l_v)$, such that $I \\cap I' = \\emptyset$, be vertices of layer 1 and $w(\\{ u, v \\}, 2)$ a vertex of layer 2. Let $K \\in \\cup_{l \\in \\{l_v, \\ldots, \\vert I' \\vert \\}} S(I', l)$, $x \\in F_2(S(\\mathcal{G}), \\mathcal{G})$ and $x' = \\begin{bmatrix} \\mathcal{P}_{u}[x] &\\mathcal{P}_{v}[x] \\end{bmatrix}$; then \n\n\n\\begin{equation}\n\\label{prop:2layer-1}\n\\mathcal{P}_{K, \\vert K \\vert}\\left[ x \\right] \\cdot \\mathcal{P}_{\\{u, v\\}, 2}\\left[ x' \\right] = \\sum_{l=l_u}^{\\vert I \\vert} \\sum_{J \\in S(I, l)} \\mathcal{P}_{J \\cup K, l + \\vert K \\vert }\\left[ x \\right]\n\\end{equation}\n\nIn particular, if $l_u = \\vert I \\vert$ and $l_v = 1$, then for any vertex of layer 0, $b \\in I'$\n\n\\begin{equation}\n\\label{prop:2layer-2}\n \\mathcal{P}_{b, 1}\\left[ x \\right] \\cdot \\mathcal{P}_{\\{u, v\\}, 2} \\left[ x' \\right]= \\mathcal{P}_{I \\cup \\{b\\}, \\vert I \\vert + 1}\\left[ x \\right]\n\\end{equation}\n\n\n\\subsubsection*{Metrics}\n\nThroughout this section, we consider $G$ to be a firing graph with layer 0 composed of the measure grid's bits $\\mathcal{G}$, and we let $f \\in \\mathcal{F}$ denote some target factor that is linked to some bits of the measure grid. The distributions over the activations of the measure grid's bits and of the latent factors will be respectively denoted $d$ and $d'$, and $e$ is the event ``factor $f$ is active''. 
Furthermore, we use $v$ to denote some vertex of $G$ whose characteristic polynomial respects $\mathcal{P}_v = \mathcal{P}_{I}^{l}$ with $(I, l) \in S(\mathcal{G}) \times \{1, \ldots, \vert I \vert\}$.\\

\underline{Precision of vertex}\\

The precision of $v$ with respect to $f$ is

\begin{equation}
\label{prop:precision1}
\phi_{v, f} = \frac{\Vert \mathcal{P}_{f} \Vert_{d'}}{\Vert \mathcal{P}_{f} \Vert_{d'} + (1 - \Vert \mathcal{P}_{f} \Vert_{d'}) \times \omega_{I, l, f}}
\end{equation}

Furthermore, if $\mu_{v, f} = 1$ we have

\begin{equation}
\label{prop:precision2}
\phi_{v, f} \leq \frac{\Vert \mathcal{P}_{f} \Vert_{d'}}{\Vert \mathcal{P}_{f} \Vert_{d'} + (1 - \Vert \mathcal{P}_{f} \Vert_{d'}) \times \omega_{\mathcal{G}(f),\vert \mathcal{G}(f) \vert, f}} 
\end{equation}

\underline{Recall of vertex}\\

The recall of $v$ with respect to $f$ is

\begin{equation}
\label{prop:recall1}
\psi_{v, f} = \mu_{I, l, f}
\end{equation}

Furthermore, 

\begin{equation}
\label{prop:recall2}
0 \leq \psi_{v, f} \leq 1
\end{equation}

where the right equality is reached whenever $v$ is connected to a set of measure grid bits $I \subset \mathcal{G}$, with level $l_0 = \vert I \vert$, such that $I \subset \mathcal{G}(f)$. \\

\underline{Vertex's score process}\\

If $s_{v, f}[N, T, p, q]$ denotes the score process of $v$ with respect to $f$, with $(N, T, p, q) \in \mathbb{N}^4$, then

\begin{equation}
\label{prop:score_mean}
\mathbb{E} \left[ s_{v, f}[N, T, p, q] \right] = N + T \times (\phi_{I, l, f} \times (p + q) - p)
\end{equation}

Furthermore,

\begin{equation}
\label{prop:score_var}
\mathrm{Var} \left[ s_{v, p, q, t, f} \right] = (q + p)^{2} \times \phi_{I, l, f} \times (1 - \phi_{I, l, f})
\end{equation}

\section{Identification of Latent Factor}
In this section, we present a procedure to identify a latent factor's activation. The procedure consists of two steps:

\begin{itemize}
\item Sampling: Sample the measure grid and build a firing graph.
\item Draining: Drain the firing graph to exclude vertices with high purity coefficients.
\end{itemize}

Both processes will be described and the efficiency of the draining algorithm quantified.

\subsection{Sampling}

Sampling the measure grid consists in following a procedure to select some of its bits. This procedure is usually designed to fulfil a specific quantitative objective as efficiently as possible. First, we assume that we have access to a deterministic, exact indicator of $f$'s activations, with $f\in \mathcal{F}$. Then, the objective of sampling is to maximize the probability that we sample a bit whose purity coefficient with respect to $f$ is lower than or equal to some positive constant $\omega$. That is, if we denote by $s$ the random variable of the outcome of a single sampling, the objective is to maximize

\begin{center}
$\mathbb{P}( \omega_{\{s\}, 1, f} \leq \omega)$
\end{center} 

Again, if we already have a set $I \subset \mathcal{G}$ of bits, the objective of sampling is to maximize the probability of selecting a bit $b$ for which the purity of $I \cup \{b \}$ at level $\vert I \vert + 1$ is lower than a given positive constant $\omega$.
That is, if we denote by $s$ the random variable of the outcome of a single sampling, the objective is to maximize

\begin{center}
$\mathbb{P}( \omega_{I \cup \{ s\}, \vert I \vert + 1, f} \leq \omega)$
\end{center} 

We propose a very intuitive sampling method based on the indicator of activation of the target factor $f$. Given parameters $p_{\mathcal{S}} \in [0, 1]$ and $S_p$, respectively the probability of picking a bit and a set of pre-selected measure grid bits, the sampling procedure writes

\begin{algorithm}[H]
\caption{Sampling}
\textbf{Input:} $p_{\mathcal{S}}$, $S_{p}$\\
\textbf{Output:} $S$
\begin{algorithmic}
\State $S \gets \{ \}$, $x_f \gets nextFactorState()$, $X_{\mathcal{G}} \gets nextGridState()$
\While{$ S \textit{ is empty}$}
 \If {$x_f = 1$ and $\forall b \in S_p \textit{ } X_{\mathcal{G}}[b] = 1$}
 \ForAll{$b \in \mathcal{G}\setminus (S \cup S_p)$}
 \If {$X_{\mathcal{G}}[b] = 1$}
 \State $S \gets S \cup \{ b \} \textit{ with probability } p_{\mathcal{S}}$
 \EndIf	
 \EndFor	
 \EndIf
 \State $x_f \gets nextFactorState()$
 \State $X_{\mathcal{G}} \gets nextGridState()$
\EndWhile
\end{algorithmic}
\end{algorithm}

where $x_f$ and $X_{\mathcal{G}}$ are respectively a scalar that takes value 1 when factor $f$ is active, 0 otherwise, and a mapping with measure grid bits as keys and their states (0 or 1) as values. The second aim of the sampling procedure is to build a firing graph. The construction of the firing graph requires setting a parameter $N \in \mathbb{N}$ that corresponds to the initial weight of the edges that will be drained. In addition, we set a mask matrix $G_{mask} \in \{0, 1\}^{\vert V \vert}$ that controls which vertices are allowed to have their outgoing edges updated during draining. We consider two kinds of firing graphs.

\begin{figure}[H]
\centering
\includegraphics[scale=0.35]{figures/firing_graph_drainer_1.png}
\caption{Single sampled firing graph}
\label{fig:single_sampled_fg}
\end{figure}

In figure~\ref{fig:single_sampled_fg}, sampled bits $\{b_1, \ldots, b_{n_s}\}$, with $n_s = \vert S \vert$, are used as vertices of layer 0 of a firing graph $G$. Then vertex $v(\{b_1, \ldots, b_{n_s}\}, 1)$ is added at layer 1 of $G$. Furthermore, we set $G_{mask}$ so as to allow only layer 0's outgoing edges to be updated through draining.

\begin{figure}[H]
\centering
\includegraphics[scale=0.35]{figures/firing_graph_drainer_2.png}
\caption{Joint sampled firing graph}
\label{fig:joint_sampled_fg}
\end{figure}

In figure~\ref{fig:joint_sampled_fg}, sampled bits $\{b_1, \ldots, b_{n_s}\}$, with $n_s = \vert S \vert$, and pre-selected bits $\{b_1^{*}, \ldots, b_k^{*} \}$, for some $k \in \mathbb{N}^{*}$, compose layer 0 of the firing graph $G$. Then, vertices $v(\{b_1, \ldots, b_{n_s}\}, 1)$ and $u(\{b_1^{*}, \ldots, b_{k}^{*}\}, k)$ are added at layer 1 of $G$ and vertex $w(\{u, v\}, 2)$ at layer 2 of $G$. Finally, we set $G_{mask}$ so that only $b_1, \ldots, b_{n_s}$'s outgoing edges are allowed to be updated through draining.

\subsection{Draining}

Draining the firing graph consists in iterating a forward propagation of bit activations and a backward propagation of the feedback generated by factor activations through the firing graph. The feedback is meant to increment or decrement the weights of unmasked vertices' outgoing edges.
Given that an edge with a null or negative weight vanishes, at the end of the routine the connections of the graph differentiate vertices by purity. To ease understanding of the algorithm, we split the vertices of the firing graph into input and core vertices, which are respectively vertices of layer 0 and vertices of layers $> 0$. Furthermore, we introduce a new type of vertex that can only have incoming edges from core vertices. We refer to those vertices as outputs. 

\begin{figure}[H]
\centering
\includegraphics[scale=0.30]{figures/firing_graph_drainer.png}
\caption{Draining diagram}
\label{fig:draining_diagram} 
\end{figure}

We use $n_i$, $n_c$ and $n_o$ to refer to the number of respectively input, core and output vertices. Furthermore, we define $I_w \in \mathbb{N}^{n_i \times n_c}$, $C_w \in \mathbb{N}^{n_c \times n_c}$ and $O_w \in \mathbb{N}^{n_c \times n_o}$ as the weighted direct link matrices respectively from input toward core vertices, core toward core vertices and core toward output vertices. We will use $A = A_w > 0$, $A \in \{I, C, O \}$ to denote the corresponding unweighted direct link matrices. Finally, in order to represent more conveniently the stochastic processes induced by measure grid activations, we define the following stochastic vectors

\begin{itemize}
\item $x_i^{(t)} \in \{0, 1\}^{1 \times n_i}$ the vector of activations of input vertices at instant $t$
\item $x_c^{(t)} \in \{0, 1\}^{1 \times n_c}$ the vector of activations of core vertices at instant $t$
\item $x_o^{(t)} \in \{0, 1\}^{1 \times n_o}$ the vector of activations of output vertices at instant $t$

\end{itemize}

The propagation of activations through the firing graph can be represented with two equations:\\

\vspace{20px}\noindent\begin{minipage}{.5\linewidth}
\begin{center}
\underline{Forward transmitting (FT)}
\end{center}
\begin{align*}
\tilde{x}_c^{(t)} &= x_i^{(t-1)} \cdot I + x_c^{(t-1)} \cdot C \\
\tilde{x}_o^{(t)} &= x_c^{(t-1)} \cdot O
\end{align*} 
\end{minipage}%
\noindent\begin{minipage}{.5\linewidth}
\begin{center}
\underline{Forward processing (FP)}
\end{center}
\begin{align*}
[x_c^{(t)}]_i &= \begin{cases} 1, & \text{if }\ [\tilde{x}_c^{(t)}]_i \geq l_i \\ 0, & \text{otherwise} \end{cases}\\
[x_o^{(t)}]_j &= \begin{cases} 1, & \text{if }\ [\tilde{x}_o^{(t)}]_j \geq 1 \\ 0, & \text{otherwise} \end{cases} 
\end{align*}
\end{minipage}\vspace{20px}

where $\cdot$ is the usual matrix multiplication, $(i, j) \in \{1, \ldots, n_c\} \times \{1, \ldots, n_o\}$ and $l_i$ is the level of the $i^{th}$ core vertex. An output vertex of the firing graph is fed with the activation of a targeted factor, delayed in time by the number of layers minus 1. That is, for single and joint sampled firing graphs, the decay is respectively set to 1 and 2. Factor activations generate a feedback at the output that is back-propagated through the firing graph. Supposing that we set the factor's decay to $d \geq 1$, the feedback is defined as 

\begin{equation*}
x_{b,o}^{(t)} = x_o^{(t)} \circ \left( (p + q) \times x_f^{(t-d)} - p \right)
\end{equation*}

where $\circ$ denotes the Hadamard product, $x_f^{(t-d)}$ is the vector of states of factors at instant $t-d$ and $p, q$ are pre-defined positive integers.
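To make the forward pass concrete, here is a minimal sketch in Python of the FT/FP equations and of the feedback computation. It is our own illustration, not the paper's implementation: it uses dense numpy arrays for readability, whereas an actual implementation would rather rely on scipy sparse matrices; the toy dimensions at the end are ours.

\begin{verbatim}
import numpy as np

def forward_pass(x_i, x_c, I, C, O, levels):
    """One forward step: forward transmitting (FT) then forward processing (FP)."""
    # FT: propagate activations along the unweighted direct link matrices
    xt_c = x_i @ I + x_c @ C     # pre-activations of core vertices
    xt_o = x_c @ O               # pre-activations of output vertices
    # FP: a core vertex fires when it receives at least `level` signals,
    # an output vertex when it receives at least one
    return (xt_c >= levels).astype(int), (xt_o >= 1).astype(int)

def output_feedback(x_o, x_f_delayed, p, q):
    """Feedback at the outputs: +q if the factor was active d steps ago,
    -p otherwise, and 0 wherever the output did not fire."""
    return x_o * ((p + q) * x_f_delayed - p)

# toy single sampled graph: 3 sampled bits, one core vertex v(S, 1), one output
I = np.array([[1], [1], [1]]); C = np.zeros((1, 1), dtype=int); O = np.array([[1]])
x_c, x_o = forward_pass(np.array([0, 1, 0]), np.zeros(1, dtype=int),
                        I, C, O, levels=np.array([1]))
\end{verbatim}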
A correct backpropagation of $x_{b,o}^{(t)}$ up to the input vertices is made possible by using the time and space coherence of the firing graph's forward states. We denote by $V_i$, $i\in \mathbb{N}$, the set of vertices that have a path, composed of $i$ vertices, toward an output vertex. Let $G$ be a firing graph with $k \in \mathbb{N}^{*}$ layers augmented with a layer of output vertices, and let $V_o$ be the set of output vertices. $\forall (v, o) \in V_0 \times V_o$, $v$ is eligible to $o$'s feedback at instant $t$ if and only if 

\begin{itemize}
\item $v$ was active at instant $t-1$
\item $v$ has an edge toward $o$
\end{itemize} 

The same principle can be used to backpropagate the feedback from vertices of $V_0$ towards vertices of $V_1$ and so on. Generally speaking, the backpropagation from vertices of $V_{i-1}$ towards vertices of $V_{i}$ respects: $\forall (v, v') \in V_{i} \times V_{i-1}$, $v$ is eligible to the feedback of $v'$ at instant $t$ if and only if 

\begin{itemize}
\item $v$ was active at instant $t - (2 \times i + 1)$
\item $v$ has an edge toward $v'$
\end{itemize} 

Finally, we can encode the backpropagation equations as

\vspace{20px}\noindent\begin{minipage}{.5\linewidth}
\begin{center}
\underline{Backward transmitting (BT)}
\end{center}
\begin{align*}
\tilde{X}_{b, c}^{(t)} &= (O \cdot X_{b, o}^{(t-1)} + C \cdot X_{b, c}^{(t-1)}) \circ X_{m, c}^{(t)T}\\
X_{b,i}^{(t)} &= (I \cdot X_{b, c}^{(t-1)}) \circ X_{m, i}^{(t)T}
\end{align*} 
\end{minipage}%
\noindent\begin{minipage}{.5\linewidth}
\begin{center}
\underline{Backward processing (BP)}
\end{center}
\begin{align*}
X_{b, o}^{(t)} &= \begin{bmatrix} \textbf{0}_{n_o \times 1} & x_{b, o}^{(t)} & \textbf{0}_{n_o \times (d_{max} - 2)} \end{bmatrix} \\
X_{b, c}^{(t)} &= \begin{bmatrix} \textbf{0}_{n_c \times 2} & [\tilde{X}_{b, c}^{(t)}]_{(:n_c, :(d_{max}-2))} \end{bmatrix} 
\end{align*}
\end{minipage}

\begin{center}
\underline{Structure updates (SU)}
\begin{align*}
O_w &= O_w + O \circ (X_{b, o}^{(t-1)} \cdot X_{m, c}^{(t)})^{T}\\
C_w &= C_w + C \circ (X_{b, c}^{(t-1)} \cdot X_{m, c}^{(t)})^{T}\\
I_w &= I_w + I \circ (X_{b, c}^{(t-1)} \cdot X_{m, i}^{(t)})^{T}
\end{align*}
\end{center}\vspace{20px}

where $X_{m,c}^{(t)} = \begin{bmatrix} x_c^{(t)} & \ldots & x_c^{(t- d_{max})}\end{bmatrix}^{T}$ and $X_{m, i}^{(t)} =\begin{bmatrix} x_i^{(t)} & \ldots & x_i^{(t- d_{max})}\end{bmatrix}^{T}$, $X_{b, c}^{(t)} \in \{0, q, - p \}^{n_c \times d_{max}}$ for $t \in \mathbb{N}^{*}$ and $X_{b, c}^{(0)} = \textbf{0}_{n_c \times d_{max}}$. Furthermore, $d_{max} \geq (l -1) \times 2 + 1$, where $l$ is the number of layers of the firing graph. Finally, we provide a parameter $T \in \mathbb{N}$ to the draining algorithm. It controls the targeted number of feedbacks that an edge should receive before its updates are disabled. Maintaining update permissions for each edge requires an operation similar to the structure updates. The draining algorithm iterates forward and backward passes until either $G$ is composed of two distinct connected components, no structure update is enabled, or the maximum number of iterations $T_{max} \in \mathbb{N}$ has been reached.
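Before the full draining pseudo-code below, here is a minimal sketch in Python of one backward step, under our own simplifying conventions: dense numpy arrays, feedback buffers of shape (vertices, $d_{max}$), memorized forward states stored as ($d_{max}$, vertices); the time shift of the backward processing step and the bookkeeping of update permissions are omitted for brevity.

\begin{verbatim}
import numpy as np

def backward_step(O, C, I, O_w, C_w, I_w, X_b_o, X_b_c, X_m_c, X_m_i):
    """One backward step: structure update (SU) then backward transmitting (BT).

    Shape convention (ours): X_b_o is (n_o, d_max), X_b_c is (n_c, d_max),
    X_m_c is (d_max, n_c), X_m_i is (d_max, n_i)."""
    # SU: increment/decrement edge weights; the product with the memorized
    # forward states enforces time and space coherence, the Hadamard product
    # with the unweighted link matrices restricts updates to existing edges
    O_w += O * (X_b_o @ X_m_c).T
    C_w += C * (X_b_c @ X_m_c).T
    I_w += I * (X_b_c @ X_m_i).T
    # BT: push the feedback one layer down, masked by past activations
    X_b_c_new = (O @ X_b_o + C @ X_b_c) * X_m_c.T
    X_b_i = (I @ X_b_c) * X_m_i.T
    return O_w, C_w, I_w, X_b_c_new, X_b_i
\end{verbatim}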
\begin{algorithm}[H]
\caption{Draining}
\textbf{Input:} $G$, $T$, $T_{max}$, $p$, $q$, $decay$\\
\textbf{Output:} $G$ drained
\begin{algorithmic}
\State $i \gets 0$\Comment{Initialisation}
\State $X_{b,c}$, $x_{b, o}, x_i, x_c, X_{m, c}, X_{m, i}$ $\gets$ InitSignals()
\While{$i < T_{max}$}\Comment{Core loop}
 \State $x_i \gets nextGridState()$
 \State $x_c, x_o \gets \textit{FT}(G, x_i, x_c)$\Comment{Forward pass}
 \State $X_{m, c}, X_{m, i}, x_c, x_o \gets \textit{FP}(x_c, x_o)$
 \If {$i \geq decay$}
 \State $x_f \gets nextFactorState()$
 \State $X_{b,c}, X_{b, o} \gets \textit{BP}(X_{b, c}, X_{b, o}, x_f, p, q)$\Comment{Backward pass}
 \State $G' \gets \textit{SU}(T, G, X_{b, c}, X_{b, o}, X_{m, c}, X_{m, i})$
 \State $X_{b, c}, X_{b, i} \gets \textit{BT}(G, X_{b, c}, X_{b, o}, X_{m, c}, X_{m, i})$
 \State $G \gets G'$
 \EndIf	
 \If {$G.cc == 2$ or $\textit{not } G_{mask}.any()$}\Comment{Stop conditions}
 \State $break$
 \EndIf	
 \State $i \gets i + 1$ 
\EndWhile
\end{algorithmic}
\end{algorithm}

Clearly, the complexity of the algorithm is dominated by the backward transmitting and structure update operations. A standard worst-case analysis of those operations gives $\mathcal{O}(n^{4} \times d_{max}^2)$, where $n$ is the total number of vertices in the firing graph. Yet this analysis relies on standard time complexities for dense matrix operations, and takes into account neither the sparsity of the signals and direct link matrices nor the distribution of input vertex activations. In practice, we have found that the forward and backward propagation of bit and factor activations is time consuming, especially when both $N$ and $T$ are large. Thus, to reduce running time, batch\_size successive bit and factor states are propagated forward and backward with an efficient vectorization of the equations. The resulting decrease in running time is impressive and worth the increase in the space complexity of the algorithm. Note that this trick may require dynamically changing batch\_size so that the threshold on the number of updates at each edge is respected.

\subsection{Analysis of the algorithm}

\begin{theorem}
\label{th_stopping}
Given a set of sampled bits $S$, a set of pre-selected bits $I =\{b_{1}^{*}, \ldots, b_{i}^{*}\}$, a target factor $f$ and $G$, the firing graph built by the sampling algorithm, a 5-tuple $(\omega, N, T, p, q)$ exists such that the probability of the event E: ``no input vertex of $G$ has outgoing edges at the end of the draining'' is upper bounded. More specifically

\begin{equation*}
\mathbb{P} \left( E \right) \leq \sum_{j = 0}^{\vert S \vert} p_{-}^j \times \mathbb{P}_{\mathcal{S}} \left( \vert \lbrace s \in S \mid \omega_{I \cup \{s\}, \vert I \vert + 1, f } < \omega \rbrace \vert = j \right) 
\end{equation*}

where $p_{-} = \mathbb{P} \left( s_{v, f}[N, T, p,q] < 0 \vert \omega_{I \cup \{ s \}, i + 1, f} < \omega \right)$ and $v(I \cup \{ s \}, i+1)$, for any $s \in S$, is a vertex of layer 1 of a 2-layer firing graph $G$.
Furthermore

\begin{equation*}
\mathbb{P} \left( s_{v, f}[N, T, p, q]< 0 \vert \omega_{I \cup \{ s \}, i + 1, f} < \omega \right) \leq C \times \max \left( \exp\left(- T \times \left(\frac{\delta_{f} c}{\sigma}\right)^2 \right), \exp\left(- T \times \delta_{f}c \right) \right)
\end{equation*}

where $\delta_{f}$, $C$ and $c$ are positive constants that depend on $\omega$, $i$ and $f$, and $\mathrm{Var}[s_{v, p, q, t, f}] = \sigma^2$. 

\end{theorem}

\textbf{Proof.} As a reminder, throughout this proof, $d$ and $d'$ refer respectively to the distribution over bit activations and the distribution over factor activations. Given the arrangement of the vertices of graph $G$ and the forward equations of the draining algorithm, the activation of any vertex $b \in S$ that is propagated toward an output vertex is modelled by the following characteristic polynomial

\begin{equation*}
\mathcal{P}_{\{b\}, 1} \cdot \mathcal{P}_{\{u, v\}, 2}
\end{equation*}

with $v(S, 1)$ and $u(\{b_1^{*}, \ldots, b_i^{*}\}, i)$. Thus, using (\ref{prop:2layer-2}), the activity of $b$ that is propagated to the output vertex is the same as the activity of a vertex $v(\{b_1^{*}, \ldots, b_i^{*}, b\}, i+1)$ at layer 1 of a firing graph $G'$ whose layer 0 is composed of $b_1^{*}, \ldots, b_i^{*}$ and $b$. Furthermore, given the time and space consistency of the backpropagation of the feedback from the output vertex, the weight of the outgoing edge of $b$, at the convergence of the draining algorithm, is either $0$ or equal to the score process of vertex $v$ in $G'$ with respect to $f$, $s_{v, f}[N, T, p, q]$. Then, the first inequality is obtained by developing

\begin{align*}
\mathbb{P} \left( E \right)&= \sum_{j=0}^{\vert S \vert} p_{-}^j \times p_{+}^{\vert S \vert -j} \times\mathbb{P}_{\mathcal{S}} \left( \vert \lbrace s \in S \mid \omega_{I \cup \{s\}, i + 1, f } < \omega \rbrace \vert = j \right) \\
&\leq \sum_{j=0}^{\vert S \vert} p_{-}^j \times \mathbb{P}_{\mathcal{S}} \left( \vert \lbrace s \in S \mid \omega_{I \cup \{s\}, i + 1, f } < \omega \rbrace \vert = j \right)
\end{align*} 

where 
\begin{itemize}
\item $I =\{b_{1}^{*}, \ldots, b_{i}^{*}\}$
\item $p_{-} = \mathbb{P} \left( s_{v, f}[N, T, p, q] < 0 \vert \omega_{I\cup\{s \}, i + 1, f} < \omega \right)$
\item $p_{+} = \mathbb{P} \left( s_{v, f}[N, T, p, q] < 0 \vert \omega_{I\cup\{s \}, i + 1, f} \geq \omega \right)$
\end{itemize}

Then, we choose the value of the positive real $\omega$ such that a measure grid bit $b^{+}$ verifies

\begin{align*}
b^{+} = &\argmin_{b \in \mathcal{G}} \vert \omega - \omega_{I \cup \{ b \}, i +1, f} \vert \\
&\textit{ such that } \omega - \omega_{I \cup \{ b \}, i +1, f} > 0 
\end{align*}

and we define the vertex $v^{+}(I \cup \{b^{+}\}, i+1)$ and $\delta^{+} = \vert \omega - \omega_{v^{+}, f} \vert$.
If vertex $v$ is such that $\omega_{v,f} < \omega$, then using (\ref{prop:precision1}) one gets

\begin{equation*}
\underbrace{\frac{\Vert \mathcal{P}_{f} \Vert_{d'}}{\Vert \mathcal{P}_{f} \Vert_{d'} + (\omega - \delta) \times (1 - \Vert \mathcal{P}_{f} \Vert_{d'})}}_{\phi_{v, f}} \geq \underbrace{\frac{\Vert \mathcal{P}_{f} \Vert_{d'}}{\Vert \mathcal{P}_{f} \Vert_{d'} + (\omega - \delta^{+}) \times (1 - \Vert \mathcal{P}_{f} \Vert_{d'})}}_{\phi_{v^{+}, f}} > \underbrace{\frac{\Vert \mathcal{P}_{f} \Vert_{d'}}{\Vert \mathcal{P}_{f} \Vert_{d'} + \omega \times (1 - \Vert \mathcal{P}_{f} \Vert_{d'})}}_{\phi}
\end{equation*}

for some real $\delta \geq \delta^{+} > 0$. Then, we choose the 4-tuple $(N, T, p, q)$ as follows:

\begin{align*}
(p, q) &\in \mathbb{N}^{2} \textit{ such that } \phi \times (p + q) - p < 0\\
N &= -T \times (\phi \times (p+q) - p)\\
T &\in \mathbb{N} \textit{ such that } N \textit{ is large enough}
\end{align*}

Thus, given $\omega_{v,f} < \omega$, one can write

\begin{align*}
\mathbb{P} \left( s_{v,f}[N, T, p, q] < 0 \right) &= \mathbb{P} \left( N + \sum_{t=1}^{T} s_{v, p, q, t, f} < 0 \right) \\
&= \mathbb{P}\left( \sum_{t=1}^{T} s_{v, p, q, t, f} - T \times \mathbb{E}\left[s_{v, p, q, 1, f}\right] < -N - T \times \mathbb{E}\left[s_{v, p, q, 1, f}\right]\right)
\end{align*}

Furthermore, from the definitions of $\phi$ and $\phi_{v, f}$ we have

\begin{equation*}
\phi_{v, f} =\phi + \underbrace{\delta \times \phi \times \phi_{v, f} \times \frac{1 - \Vert \mathcal{P}_{f} \Vert_{d'}}{\Vert \mathcal{P}_{f} \Vert_{d'}}}_{\delta_{v, f}}
\end{equation*}

Yet, using equation (\ref{prop:score_mean}), one has

\begin{align*}
\mathbb{E}\left[s_{v, p, q, 1, f}\right] &= \phi_{v, f} \times (p + q) - p\\
&= (\phi + \delta_{v, f}) \times (p + q) - p\\
\end{align*}

Using $N = - T \times \left(\phi \times (p + q) - p \right)$ and the definition of $\phi$, one has

\begin{equation*}
-N - T \times \mathbb{E}\left[s_{v, p, q, 1, f}\right] = -T \times (p + q) \times \delta_{v, f} 
\end{equation*}

Thus

\begin{align*}
\mathbb{P} \left( s_{v,f}[N, T, p, q] < 0 \right) &= \mathbb{P}\left( \sum_{t=1}^{T} s_{v, p, q, t, f} - \mathbb{E}\left[s_{v, p, q, t, f}\right] < -T \times (p + q) \times \delta_{v, f} \right)\\
&\leq \mathbb{P}\left(\vert \sum_{t=1}^{T} s_{v, p, q,t, f} - \mathbb{E}\left[s_{v, p, q, t, f}\right] \vert > T \times (p + q) \times \delta_{v, f} \right)\\
&\leq \mathbb{P}\left(\vert \sum_{t=1}^{T} s_{v, p, q, t, f} - \mathbb{E}\left[s_{v, p, q, t, f}\right] \vert > T \times \delta_{f} \right)
\end{align*}

with $\delta_{f} = (p + q) \times \delta^{+} \times \phi^{2} \times \frac{1 - \Vert \mathcal{P}_{f} \Vert_{d'}}{\Vert \mathcal{P}_{f} \Vert_{d'}}$. \\

At this point, notice that $\lbrace s_{v, p, q, t, f} \rbrace_{t=1, \ldots, T}$ is a sequence of i.i.d.\ random variables with mean $\mu$ and variance $\sigma^2$ that verifies $\vert s_{v, p, q, t, f} \vert \leq \max(p, q)$. Thus one can apply the Chernoff inequality as formulated in \cite{TAO-1}.
In particular, taking $\lambda = \sigma^{-1}\delta_{f}\sqrt{T}$ we obtain

\begin{align*}
\mathbb{P}\left(\vert \sum_{t=1}^{T} s_{v, p, q, t, f} - \mathbb{E}\left[s_{v, p, q, t, f}\right] \vert > T \delta_{f} \right) &= \mathbb{P}\left(\vert \sum_{t=1}^{T} s_{v, p, q, t, f} - \mathbb{E}\left[s_{v, p, q, t, f}\right] \vert > \lambda\sigma \sqrt{T} \right)\\
&\leq C \times \max \left( \exp\left(- T \times \left(\frac{\delta_{f} c}{\sigma}\right)^2 \right), \exp\left(- T \times \delta_{f}c \right) \right)
\end{align*}

with $C, c$ some positive constants and $\mathrm{Var}[s_{v, p, q, t, f}] = \sigma^2$ for $t \in \{1, \ldots, T \}$. Q.E.D.

\begin{center}
\rule[0pt]{100pt}{1pt} 
\end{center}

\begin{theorem}
\label{th_precision}
Given a set of sampled bits $S$, a set of pre-selected bits $I =\{b_{1}^{*}, \ldots, b_{i}^{*}\}$, a target factor $f$ and $G$, the firing graph built by the sampling algorithm, a 5-tuple $(\omega, N, T, p, q)$ exists such that for each input vertex $v$ of $G$ from which the output is reachable, we have 

\begin{equation}
\mathbb{P} \left( \omega_{v, f} > \omega \right) \leq C \times \max \left( \exp\left(- T \times \left(\frac{\delta_{f} c}{\sigma}\right)^2 \right), \exp\left(- T \times \delta_{f}c \right) \right)
\end{equation}

where $v(I \cup \{ s \}, i+1)$, for any $s \in S$, is a vertex of layer 1 of a 2-layer firing graph $G$, $\delta_{f}$, $C$ and $c$ are positive constants that depend on $\omega$ and $i$, and $\mathrm{Var}[s_{v, p, q, t, f}] = \sigma^2$. 
\end{theorem}

\textbf{Proof.} 
As in the proof of the previous theorem, using the arrangement of the vertices of $G$, the property (\ref{prop:2layer-2}) and the forward and backward equations of the draining algorithm, one can show that the weight of the outgoing edge of any vertex $b \in S$ of $G$ is either equal to 0 or to the score process $s_{v, f}[N, T, p, q]$, where $v(\{b_1^{*}, \ldots, b_i^{*}, b\}, i+1)$ is a vertex at layer 1 of a firing graph $G'$ whose layer 0 is composed of $b_1^{*}, \ldots, b_i^{*}$ and $b$. Furthermore, if sampled bit $b$ still has outgoing edges after draining, then

\begin{equation*}
\mathbb{P} \left( \omega_{v, f} > \omega \right) = \mathbb{P}_{\mathcal{S}} \left( \omega_{I \cup \{b\}, i + 1, f } > \omega \right) \times \mathbb{P} \left( s_{v, f}[N, T, p, q] > 0 \vert \omega_{I \cup \{ b \}, i + 1, f} > \omega \right)
\end{equation*}

Then, we choose the value of the positive real $\omega$ such that a bit $b^{-}$ verifies

\begin{align*}
b^{-} = &\argmin_{b \in \mathcal{G}} \vert \omega - \omega_{I \cup \{ b \}, i+1, f} \vert \\
&\textit{ such that } \omega - \omega_{I \cup \{ b \}, i+1, f} < 0 
\end{align*}

and we define the vertex $v^{-}(I \cup \{b^{-}\}, i+1)$ and $\delta^{-} = \vert \omega - \omega_{v^{-}, f} \vert$.
If $v$ is such that $\omega_{v, f} > \omega$, then using (\ref{prop:precision1}) we have 

\begin{equation*}
\underbrace{\frac{\Vert \mathcal{P}_{f} \Vert_{d'}}{\Vert \mathcal{P}_{f} \Vert_{d'} + (\omega + \delta) \times (1-\Vert \mathcal{P}_{f} \Vert_{d'})}}_{\phi_{v, f}} \leq \underbrace{\frac{\Vert \mathcal{P}_{f} \Vert_{d'}}{\Vert \mathcal{P}_{f} \Vert_{d'} + (\omega + \delta^{-}) \times (1 - \Vert \mathcal{P}_{f} \Vert_{d'})}}_{\phi_{v^{-}, f}} < \underbrace{\frac{\Vert \mathcal{P}_{f} \Vert_{d'}}{\Vert \mathcal{P}_{f} \Vert_{d'} + \omega \times (1 - \Vert \mathcal{P}_{f} \Vert_{d'})}}_{\phi}
\end{equation*}

for some $\delta \geq \delta^{-} > 0$. Defining the 4-tuple $(N, T, p, q)$ as

\begin{align*}
(p, q) &\in \mathbb{N}^{2} \textit{ such that } \phi \times (p + q) - p < 0\\
N &= -T \times (\phi \times (p+q) - p)\\
T &\in \mathbb{N} \textit{ such that } N \textit{ is large enough}
\end{align*}

and reproducing the same development as in the proof of the previous theorem, one can derive a form to which the Chernoff inequality readily applies.

\begin{equation*}
\mathbb{P} \left( s_{v, f}[N, T, p, q] > 0 \vert \omega_{v, f} > \omega \right) \leq \mathbb{P}\left(\vert \sum_{t=1}^{T} s_{v, p, q, t, f} - \mathbb{E}\left[s_{v, p, q, t, f}\right] \vert > T \times \delta_{f} \right)
\end{equation*}

with $\delta_{f} = (p + q) \times \delta^{-} \times \phi^{2} \times \frac{1 - \Vert \mathcal{P}_{f} \Vert_{d'}}{\Vert \mathcal{P}_{f} \Vert_{d'}}$. Then, using the Chernoff inequality as written in \cite{TAO-1} with $\lambda = \sigma^{-1} \delta_{f} \sqrt{T}$, we obtain

\begin{align*}
\mathbb{P}\left(\vert \sum_{t=1}^{T} s_{v, p, q, t, f} - \mathbb{E}\left[s_{v, p, q, t, f}\right] \vert > T \times \delta_{f} \right) &= \mathbb{P}\left(\vert \sum_{t=1}^{T} s_{v, p, q, t, f} - \mathbb{E}\left[s_{v, p, q, t, f}\right] \vert > \lambda \sigma \sqrt{T} \right) \\
&\leq C \times \max \left( \exp\left(- T \times \left(\frac{\delta_{f} c}{\sigma}\right)^2 \right), \exp\left(- T \times \delta_{f}c \right) \right)
\end{align*}

with $C, c$ some positive constants and $\mathrm{Var}[s_{v, p, q, t, f}] = \sigma^2$. Q.E.D.

\begin{center}
\rule[0pt]{100pt}{1pt} 
\end{center}

\subsection{Limit of the generic case}

The combination of both theorems shows that the association of sampling and draining, with the right choice of the 5-tuple $(\omega, N, T, p, q)$, gives a convenient tool to select measure grid bits with a purity coefficient lower than a target $\omega$. Furthermore, when $T \rightarrow +\infty$, the correct selection is almost certain, which highlights the trade-off between efficiency and complexity of the algorithm embedded in the choice of $\omega$ and $T$, on which $N$, $p$ and $q$ depend.
This generic procedure and its analysis deliver a strong framework that eases the derivation of more specific results under particular models of latent factor activations and measure grid signatures.
Nevertheless, it leaves two fundamental points unresolved: 

\begin{itemize}
\item no possibility to further quantify the effectiveness of the sampling strategy 
\item no specific procedure or heuristic to choose the positive real value $\omega$
\end{itemize}

In the rest of this paper, we present two particular cases of factor and measure grid modelling that enable a better quantification of the sampling strategy and stronger heuristics for the choice of $\omega$. 

\section{Case of signal plus noise}

This particular case is designed to be easy to analyze. We first define the statistical modelling of factor and bit activations. Then, we quantify the sampling strategy and justify a choice for the 5-tuple $(\omega, N, T, p, q)$. Finally, we present simulations and discuss the results obtained with this special case.

\subsection{Statistical modelling}

In this particular case, we assume that the target factor $f$ is linked to $\vert \mathcal{G}(f)\vert = k$ measure grid bits and activates with probability $p_f$. We also assume that the bits of the measure grid are identically and independently subject to a noisy activation with probability $p_N$. We may see noisy activations as the result of $n$ noisy latent factors, each linked to exactly 1 bit of the measure grid, that is $K=n + 1$. Under this model, the probability for a bit $b\in \mathcal{G}$ to activate is defined as 

\begin{equation*}
\mathbb{P} \left( \textit{"b active"} \right) = \begin{cases} p_{f} + p_N \times (1-p_{f}), & \text{if }\ b \in \mathcal{G}(f) \\ p_N, & \text{otherwise} \end{cases}
\end{equation*}

As a consequence, for any $I \in S(\mathcal{G})$ such that $\vert I \cap \mathcal{G}(f) \vert = i$ and $j = \vert I \vert -i$, if we set $x \in F_2 (\{I\} , \mathcal{G})$, the distribution over measure grid bit activations is defined as

\begin{equation*}
d_{x} = \begin{cases} p_N^{i+j} \times (1-p_N)^{n - i - j} \times (1- p_{f}) + p_N^{j} \times (1-p_N)^{n - k - j} \times p_{f}, & \text{if }\ i=k \\ p_N^{i+j} \times (1-p_N)^{n - i - j} \times (1- p_{f}), & \text{otherwise} \end{cases}
\end{equation*}

In the rest of the section, we will always refer to this distribution as $d$.

\subsection{Evaluation of bits}

Let $G$ be a firing graph with a layer 0 composed of measure grid bits. Then, the precision of a vertex $v(I, \vert I \vert)$ of layer 1 of $G$, with respect to $f$, depends only on $\vert I \vert$ and $\vert I \cap \mathcal{G}(f) \vert$. Indeed, if $\vert I \cap \mathcal{G}(f) \vert = i$ then

\begin{equation*}
\phi_{v,f} = \frac{p_{f}}{p_{f} + (1 - p_{f})\times p_N^{i}}
\end{equation*}

By identification of terms using (\ref{prop:precision1}) we have $\omega_{v, f} = p_N^{i}$, and using the previously defined distribution one finds that $\mu_{v, f} = p_N^{\vert I \vert - i}$ and $\nu_{v, f} = p_N^{\vert I \vert}$.
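As a sanity check on these closed forms, here is a minimal simulation sketch in Python (our own illustrative code, not the implementation of Appendix B): it draws activations from the signal plus noise model and estimates the precision of a vertex $v(I, \vert I \vert)$ empirically, to be compared with $p_f / (p_f + (1 - p_f) \times p_N^{i})$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, k, p_f, p_N, T = 1000, 50, 0.3, 0.3, 200_000

G_f = rng.choice(n, size=k, replace=False)    # bits linked to factor f
x_f = rng.random(T) < p_f                     # activations of f
X = rng.random((T, n)) < p_N                  # i.i.d. noisy activations
X[:, G_f] |= x_f[:, None]                     # factor bits fire whenever f fires

# vertex v(I, |I|): fires when all bits of I are active; here I mixes
# 3 bits linked to f with 1 purely noisy bit, so i = |I inter G(f)| = 3
linked = set(G_f.tolist())
noisy_bit = next(b for b in range(n) if b not in linked)
I = list(G_f[:3]) + [noisy_bit]
v_fires = X[:, I].all(axis=1)

phi_emp = (v_fires & x_f).sum() / v_fires.sum()   # empirical precision
phi_th = p_f / (p_f + (1 - p_f) * p_N ** 3)       # closed form above
print(phi_emp, phi_th)                            # both close to 0.94
\end{verbatim}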
Besides, given a set of bits $I$ such that $I \subset \mathcal{G}(f)$, if $b \in \mathcal{G}(f) \setminus I$, the precision of vertex $v(I \cup \{ b \}, \vert I \vert +1)$ with respect to $f$ is 

\begin{equation*}
\phi_{v, f} = \frac{p_{f}}{p_{f} + (1 - p_{f})\times p_N^{\vert I \vert + 1}}
\end{equation*}

whereas if $b \notin \mathcal{G}(f)$

\begin{equation*}
\phi_{v, f} = \frac{p_{f}}{p_{f} + (1 - p_{f})\times p_N^{\vert I \vert}}
\end{equation*}

\subsection{Sampling Strategy}

In this particular case, we follow the generic sampling procedure $\mathcal{S}$ with parameter $p_{\mathcal{S}}$. Thus, using the previously defined statistical distribution of bit activations, if we denote by $S$ the set of bits sampled using $\mathcal{S}$, the distribution of the cardinality of $S$ is 

\begin{align*}
\mathbb{P}\left(\vert S \vert = s\right) = \begin{cases} \begin{pmatrix} n - k \\ s - k \end{pmatrix} \times (p_N \times p_{\mathcal{S}})^{s - k } \times (1 - p_N \times p_{\mathcal{S}})^{n-s} , & \text{if }\ s \geq k \\ 0, & \text{otherwise} \end{cases}
\end{align*}

Thus its expected size is $\mathbb{E}\left[ \vert S \vert \right] = k + (n - k) \times p_N \times p_{\mathcal{S}} $. Furthermore, if $I= \{b_1^{*}, \ldots, b_i^{*} \}\in S(\mathcal{G})$ is some set of pre-selected bits and $S$ is a set of bits sampled using $\mathcal{S}$, a positive real $\omega_i$ exists such that

\begin{align*}
\mathbb{P}_{\mathcal{S}} \left( \vert \lbrace s \in S \mid \omega_{I \cup \{s\}, \vert I \vert + 1, f } < \omega_i \rbrace \vert = j \right) &= \mathbb{P}_{\mathcal{S}} \left( \vert \lbrace s \in S \mid s\in \mathcal{G}(f) \rbrace \vert = j \right)\\
&= \begin{pmatrix} \vert \mathcal{G}(f) \vert -i \\ j \end{pmatrix} \times p_{\mathcal{S}}^{j} \times (1 - p_{\mathcal{S}})^{\vert \mathcal{G}(f) \vert -i - j}
\end{align*}

\subsection{Identification of factors}

First, in the case of a single sampled firing graph, one can see that bit purity coefficients take only two values with respect to $f$

\begin{align*}
\omega_{\{b\}, 1, f} = \begin{cases} p_N , & \text{if }\ b \in \mathcal{G}(f) \\ 1, & \text{otherwise} \end{cases}
\end{align*}

Thus, choosing 

\begin{equation*}
\omega_0 = \frac{(1 + p_N)}{2}
\end{equation*}

maximizes the purity margin defined as
 
\begin{align*}
\delta_0 &= \frac{(\omega_0 - \omega_{\{ b \}, 1, f}) + (\omega_{\{ b' \}, 1, f} - \omega_0)}{2} = \frac{(1-p_N)}{2}
\end{align*}

where $b \in \mathcal{G}(f)$ and $b' \notin \mathcal{G}(f)$. In the case of a joint sampled firing graph in which a set $I = \{b_1^{*}, \ldots, b_i^{*}\}$ of $i \in \mathbb{N}^{*}$ pre-selected bits verifies $\forall b \in I$, $b \in \mathcal{G}(f)$, the remaining bits' purity coefficients with respect to $f$ can again take two values

\begin{align*}
\omega_{I \cup \{b\}, i+1, f} = \begin{cases} p_N^{i+1} , & \text{if }\ b \in \mathcal{G}(f) \\ p_N^{i}, & \text{otherwise} \end{cases}
\end{align*}

Thus, choosing

\begin{equation*}
\omega_i = \frac{(1 + p_N) \times p_N^{i}}{2}
\end{equation*}

maximizes the purity margin defined as
 
\begin{align*}
\delta_i &= \frac{(\omega_i - \omega_{I \cup \{ b \}, i+1, f}) + (\omega_{I \cup \{ b' \}, i+1, f} - \omega_i)}{2} = \frac{(1-p_N) \times p_N^{i}}{2}
\end{align*}

where $b \in \mathcal{G}(f)$ and $b' \notin \mathcal{G}(f)$.
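Anticipating the parameter choice formalized in the next paragraph, a small Python helper (ours) makes these quantities concrete: it computes $\omega_i$, $\delta_i$ and searches the smallest pair $(p, q)$ satisfying the two sign conditions on the expected score drift.

\begin{verbatim}
from itertools import count

def spn_drain_params(p_N, p_f, i, T=500):
    """omega_i, delta_i and a minimal (p, q, N) for the signal plus noise
    case (our helper; the conditions on (p, q) are formalized just below)."""
    omega = (1 + p_N) * p_N ** i / 2                      # target purity
    delta = (1 - p_N) * p_N ** i / 2                      # purity margin
    phi = p_f / (p_f + omega * (1 - p_f))                 # precision at omega
    phi_pure = p_f / (p_f + (omega - delta) * (1 - p_f))  # precision, pure bits
    # smallest p + q with phi*(p+q) - p <= 0 < phi_pure*(p+q) - p
    for s in count(2):
        for q in range(1, s):
            p = s - q
            if phi * s - p <= 0 < phi_pure * s - p:
                N = int(-T * (phi * s - p))               # initial edge weight
                return omega, delta, p, q, N
\end{verbatim}

For $i=0$ and $p_f = 0.3$, this search returns $(p, q) = (1, 1)$, $(2, 3)$, $(3, 5)$ and $(5, 11)$ for $p_N = 0.3$, $0.5$, $0.7$ and $0.9$ respectively, matching the values used in figure~\ref{fig:sim_spn_1}.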
Finally, we define the 4-tuple $(N, T, p, q)$ as 

\begin{align*}
(p, q) &\in \mathbb{N}^{2} \textit{ such that } \phi_i \times (p + q) - p \leq 0 \textit{ and } \phi_i' \times (p + q) - p > 0 \\
N &= -T \times (\phi_i \times (p+q) - p)\\
T &\in \mathbb{N} \textit{ such that } N \textit{ is large enough}
\end{align*}

where $\phi_i = \frac{p_{f}}{p_{f} + \omega_i \times (1 - p_f)}$ and $\phi_i' = \frac{p_{f}}{p_{f} + (\omega_i - \delta_i) \times (1 - p_f)}$.

\subsection{Simulation}

The signal plus noise model is implemented in Python and mainly uses the standard numpy and scipy modules to generate random signals that fit its probabilistic model. More details about the implementation can be found in Appendix B. We generate $n=1000$ bits that randomly activate with probability $p_N$ and we randomly choose $\vert \mathcal{G}(f)\vert = 50$ bits that are linked to a latent factor that activates with probability $p_f=0.3$. Finally, we build the single sampled firing graph using $p_{\mathcal{S}} = 1$.

\begin{figure}[H]
\subfloat[$p_N = 0.3$, $(p,q)=(1,1)$]{\includegraphics[scale=0.165]{figures/signalplusnoise_111.png}} 
\subfloat[$p_N = 0.5$, $(p,q)=(2,3)$]{\includegraphics[scale=0.165]{figures/signalplusnoise_112.png}}\\
\subfloat[$p_N = 0.7$, $(p,q)=(3, 5)$]{\includegraphics[scale=0.165]{figures/signalplusnoise_121.png}}
\subfloat[$p_N = 0.9$, $(p,q)=(5,11)$]{\includegraphics[scale=0.165]{figures/signalplusnoise_122.png}} 
\caption{Observation of the score process for different SNR models, $T=500$}
\label{fig:sim_spn_1}
\end{figure}

Each subplot of figure~\ref{fig:sim_spn_1} shows the weights of the outgoing edges of sampled bits. Blue lines show the weights of edges outgoing from sampled bits $b \in \mathcal{G}(f)$ and red lines correspond to the weights of edges outgoing from sampled bits $b \notin \mathcal{G}(f)$. Finally, the black horizontal line represents the theoretical mean value of $s_{v, f}[N, T, p, q]$ for a vertex with characteristic polynomial $\mathcal{P}_v = \mathcal{P}_{\{b\}, 1}$, with $b \in \mathcal{G}(f)$. As theory suggests, we can see two distinct phenomena: blue lines converge around the theoretical mean for processes of bits linked to the target factor, while red lines converge to 0. However, the higher $p_N$ is, the less noticeable the distinction between the processes becomes. This is explained by the fact that the higher $p_N$ is, the closer the precision of bits linked to the target factor $f$ is to the precision of noisy bits. Furthermore, the latter observation induces a high value of $p+q$, which results in a more volatile score process. For the second simulation we use $n=1000$, $\vert \mathcal{G}(f)\vert = 50$, $p_f=0.3$, $T=200$ and $p_{\mathcal{S}}= 0.5$. At the end of the draining, we choose all the input vertices of the firing graph that still have an outgoing edge and use their combined activation as an estimator of the target factor's activation.
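A minimal sketch of this estimator in Python, under our reading of ``combined activation'' as the AND-combination of the surviving bits (i.e., the estimator fires when every bit that kept an outgoing edge fires):

\begin{verbatim}
import numpy as np

def naive_estimator_scores(X, x_f, surviving_bits):
    """Precision and recall of the AND-combination of the input bits that
    kept an outgoing edge after draining (our reading of the estimator)."""
    x_hat = X[:, surviving_bits].all(axis=1)   # estimated activations of f
    tp = np.sum(x_hat & x_f)                   # true positives
    precision = tp / max(x_hat.sum(), 1)
    recall = tp / max(x_f.sum(), 1)
    return precision, recall
\end{verbatim}

Under this reading, each surviving impure bit multiplies the recall by roughly $p_N$ while the precision stays close to 1, which would be consistent with the degradation reported in Table~\ref{tab:sim_spn_1} below.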
We then measure their precision and recall over $100$ repetitions for each noise level

\begin{table}[H]
\begin{tabular}{|l|l|l|l|l|l|}
\hline
$p_N$ & Mean $\phi$ & Standard deviation $\phi$ & Mean $\psi$ & Standard deviation $\psi$ & Number of fails \\ \hline
0.3 & 1.0 & 0.00 & 0.87 & 0.30 & 0 \\ \hline
0.5 & 1.0 & 0.03 & 0.66 & 0.42 & 0 \\ \hline
0.7 & 0.97 & 0.13 & 0.43 & 0.46 & 4 \\ \hline
0.9 & 0.75 & 0.14 & 0.13 & 0.25 & 19 \\ \hline
\end{tabular}
\caption{Evaluation of the naive estimator of factor activation, $T=200$, $100$ repetitions}
\label{tab:sim_spn_1}
\end{table}

Table~\ref{tab:sim_spn_1} shows quality indicators of the estimator for different noise levels. The first two columns give respectively the mean and standard deviation of the precision of the estimator. The two following columns are respectively the mean and the standard deviation of its recall. Finally, the last column is the number of experiments that ended without any input vertex having a path towards the output, so that the construction of an estimator is not possible. Again, we see that the quality of the estimator drops as the theoretical precisions of noisy bits and factor bits get closer to each other. Yet it reveals that this naive estimator, for a reasonable SNR, is still efficient at predicting the activation of the target latent factor. Finally, we simulate the signal plus noise model in the setting of a joint sampled firing graph. We use a measure grid of $n=1000$ bits from which we randomly choose $\vert \mathcal{G}(f) \vert = 50$ bits linked to a target factor $f$ that activates with probability $p_f = 0.3$, and we set $p_N=0.6$. Finally, we build the joint sampled firing graph by randomly pre-selecting 5 bits linked to the factor and running the sampling algorithm described previously with $p_{\mathcal{S}} = 1$.

\begin{figure}[H]
\centering
\includegraphics[scale=0.30]{figures/signalplusnoise_2.png}
\caption{Observation of the score process in a joint sampled firing graph with $T=500$ and $(p, q) = (7, 1)$}
\label{fig:sim_spn_2}
\end{figure}

In this case, we obtain $N=7$ and $\omega_5 \simeq 0.062$ when following the procedure described in the previous section. As for the first experiment, blue lines show the weights of edges outgoing from sampled bits $b \in \mathcal{G}(f)$ and red lines correspond to the weights of edges outgoing from sampled bits $b \notin \mathcal{G}(f)$. The black horizontal line represents the theoretical mean value of $s_{v, f}[N, T, p, q]$, where $v$ has characteristic polynomial $\mathcal{P}_v = \mathcal{P}_{\{b_1, \ldots, b_5, b\}, 6}$, with $\{b_1, \ldots, b_5\}$ the set of pre-selected bits and $b \in \mathcal{G}(f)$. The simulation validates the expectation from theory, and the high value of $p+q$ explains the high volatility of the score processes.

\section{Case of sparse measure grid}

This particular case is more complex than the previous one. We first define the statistical signature of factor and bit activations. Then we quantify the sampling strategy and justify a choice for the 5-tuple $(\omega, N, T, p, q)$. Finally, we present simulations and discuss the results obtained with this particular case.

\subsection{Statistical modelling}

\subsubsection*{Latent factor activation}

We assume that each of the $K$ latent factors activates independently with probability $p_f$.
As a consequence, for any $I \in S(\mathcal{F})$, if we define $x$ such that $x \in F_2 (\{I\}, \mathcal{F})$, we can define the distribution of factor activations as 

\begin{equation*}
d'_x = \begin{pmatrix} K \\ \vert I \vert \end{pmatrix} \times p_f^{\vert I \vert} \times (1-p_f)^{K-\vert I \vert} 
\end{equation*}

\subsubsection*{Measure grid activation}

We assume two major properties of the activations of measure grid bits.

\begin{itemize}
\item For each factor $f \in \mathcal{F}$, each bit $b \in \mathcal{G}$ has equal probability $p_g$ of belonging to $\mathcal{G}(f)$.
\item For each factor $f \in \mathcal{F}$ and each couple $(b_1, b_2) \in \mathcal{G}^2$, the events ``$b_1 \in \mathcal{G}(f)$'' and ``$b_2 \in \mathcal{G}(f)$'' are independent.
\end{itemize} 

As a consequence, the probability for a bit $b$ to activate, given that every factor of some set $\{f_1, \ldots, f_k\} \subset \mathcal{F}$ is active, writes

\begin{align*}
\mathbb{P}\left(\textit{b active } \vert f_1, \ldots, f_k \textit{ active}\right) &= \sum_{i=1}^{k} \begin{pmatrix} k \\ i \end{pmatrix} \times p^i_g \times (1 - p_g)^{k - i}\\
 &= 1 - (1 - p_g)^{ k}
\end{align*}

The above quantity depends only on the number of active latent factors. Thus, for any $I \in S(\mathcal{G})$, if we define $x \in F_2 (\{I\} , \mathcal{G})$, we can define the distribution of bit activations as 

\begin{equation*}
d_x = \sum_{k = 1}^{K} \left[ \begin{pmatrix} K \\ k \end{pmatrix} \times p_f^{k} \times (1-p_f)^{K-k}\right] \times \left[ p^i_{g \vert k} \times (1 - p_{g\vert k})^{n - i} \right]
\end{equation*}

with $i = \vert I \vert$ and $p_{g \vert k} = \mathbb{P}\left(\textit{b active } \vert f_1, \ldots, f_k \textit{ active}\right)$.

\subsection{Evaluation of bits}

Let $G$ be a firing graph whose layer 0 is composed of measure grid bits. Given a target factor $f$, the precision with respect to $f$ of a vertex $v(\{ b \}, 1)$ of layer 1 of $G$ depends on whether $b \in \mathcal{G}(f)$ and on $\vert \mathcal{G}^{-1}(b) \vert$. Indeed, if $b \in \mathcal{G}(f)$ and $\vert \mathcal{G}^{-1}(b) \vert = l$, $l \in \{1, \ldots, K\}$, it is said to have a purity rank of $l$ and its precision with respect to $f$ writes

\begin{equation*}
\phi_{v, f} = \frac{p_f}{p_f + (1 - p_f)\times \omega_{l}}
\end{equation*}

where $\omega_l = 1 - (1-p_f)^{l-1}$. If $b' \notin \mathcal{G}(f)$, the precision of $v'(\{ b' \}, 1)$ with respect to $f$ writes

\begin{equation*}
\phi_{v', f} = p_f
\end{equation*}

Furthermore, if we have a vertex $v(I, \vert I \vert)$ such that $\forall b \in I$, $b \in \mathcal{G}(f)$ and $\min_{b \in I} \vert \mathcal{G}^{-1}(b) \vert = l$, then the precision of $v$ with respect to $f$ verifies

\begin{equation*}
\phi_{v, f} \leq \frac{p_f}{p_f + \omega^{-}_l \times (1 - p_f)}
\end{equation*}

with 

\begin{equation*}
\omega^{-}_l = \sum_{k=K-l-1}^{K} \begin{pmatrix} K \\ k \end{pmatrix} p_f^{k} \times (1-p_f)^{K - k}
\end{equation*}

the minimum purity coefficient one can obtain with bits that verify $b \in \mathcal{G}(f)$ and $\vert \mathcal{G}^{-1}(b) \vert = l$.
That is, the case of a vertex $v(I, \vert I \vert)$ with $I$ composed of all $\begin{pmatrix} K \\ l \end{pmatrix}$ possible such bits.

\subsection{Sampling Strategy}

We follow the generic sampling procedure $\mathcal{S}$ with parameter $p_{\mathcal{S}}$. Although it is not hard to derive key quantities such as $\mathbb{E}\left[ \vert S \vert \right]$ or the probabilities of sampling bits linked to a target factor $f$ under this modelling, the generic formulas are not elegant and are of little interest for this simulation.
 
\subsection{Identification of factors}

First, in the case of a single sampled firing graph, for any grid bit linked to factor $f$, there are only $K$ different possible purity coefficients. Thus we may set $\omega$ to $\omega_l$, using a reasonably small $l$, to differentiate samples of lower purity rank from samples of greater purity rank. In the case of a joint sampled graph, where a set $I=\{b_1^{*}, \ldots b_i^{*}\}$ was pre-selected, the choice of $\omega$ is not trivial and is hard to derive efficiently and generically. Letting $\omega_{I, \vert I \vert, f}$ be the purity coefficient of the pre-selected set of bits, we set $\omega = \omega_{I, \vert I \vert, f} - \delta$, where $\delta \in \mathbb{R}$ should be chosen with caution. Finally, we choose the 5-tuple $(\omega, N, T, p, q)$ as 

\begin{align*}
(p, q) &\in \mathbb{N}^{2} \textit{ such that } \phi \times (p + q) - p < 0 \\
N &= -T \times (\phi \times (p+q) - p)\\
T &\in \mathbb{N} \textit{ such that } N \textit{ is large enough}
\end{align*}

where $\phi = \frac{p_f}{p_f + \omega \times (1-p_f)}$. 

\subsection{Simulation}

The sparse measure grid model is implemented in Python, using the standard numpy and scipy modules to generate random signals that fit its probabilistic model. In our case, we generate $n=1000$ bits with $K=10$ latent factors that activate with probability $p_f = 0.3$, and we link measure grid bits to factors independently with probability $p_g=0.3$. We build the single sampled firing graph by running the sampling algorithm described previously, using $p_{\mathcal{S}} = 1$. Finally, we set $\omega= \omega_{10}$, the highest purity coefficient for bits linked to the target factor $f$.

\begin{figure}[H]
\centering
\includegraphics[scale=0.33]{figures/sparse_1.png}
\caption{Observation of the score process in a single sampled firing graph with $T=1000$ and $(p, q) = (1, 1)$}
\label{fig:sim_si_1}
\end{figure}

We clearly see a rapid differentiation of the score processes according to their purity rank. We can also observe that, at the end of draining, the higher the purity coefficient, the closer the weights of the corresponding edges. Finally, the behaviour of the score processes validates the efficiency of the draining algorithm at blindly ranking bits of the measure grid, in an attempt to identify latent factors. The second experiment with the sparse measure grid model aims to give intuition on the choice of the $\delta$ used for draining a joint sampled firing graph. As for the previous simulation, we generate $n=1000$ bits with $K=10$ latent factors that activate with probability $p_f = 0.3$, and we link measure grid bits independently with probability $p_g=0.3$. Then we randomly choose $i=5$ bits, denoted by $I=\{b_1^{*}, \ldots, b_i^{*}\}$, with purity rank $4$ with respect to the target factor $f$.
Finally, we sample and build the joint sampled firing graph using $p_{\mathcal{S}} = 1$ and the set of pre-selected bits $I$. The procedure described in the previous section to choose the target purity coefficient consists in estimating the purity coefficient $\hat{\omega}_{I, \vert I \vert, f}$ and setting $\delta$ so that $\omega = \hat{\omega}_{I, \vert I \vert, f} - \delta$.\\

\begin{figure}[H]
\subfloat[$\delta = 0.$]{\includegraphics[scale=0.165]{figures/sparse_211.png}} 
\subfloat[$\delta = 10^{-2}$]{\includegraphics[scale=0.165]{figures/sparse_212.png}}\\
\subfloat[$\delta = 5 \times 10^{-2}$]{\includegraphics[scale=0.165]{figures/sparse_221.png}}
\subfloat[$\delta = 10^{-1}$]{\includegraphics[scale=0.165]{figures/sparse_222.png}} 
\caption{Observation of sampled bits' score processes in a joint sampled firing graph, $i=5$, $T=500$ and $(p, q) = (1, 1)$}
\label{fig:sim_si_2}
\end{figure}

In each simulation, $\hat{\omega}_{I, \vert I \vert, f}$ has been estimated using $1000$ samples and we use $T=500$. Furthermore, each subfigure corresponds to a different value of $\delta$, which induces a different value of $\omega$, set as $\omega = \hat{\omega}_{I, \vert I \vert, f} - \delta$. As for the first experiment, the different colored lines in each subfigure show the weights of edges outgoing from sampled bits with different purity ranks. As expected, we can see that the higher $\delta$ is, the more discriminative the draining procedure is. If $\delta$ is set to 0, then every sampled bit remains connected in the firing graph after draining, which is not of great interest. Yet, if $\delta$ is set too high, we may end up with two connected components, which is not desirable either. Thus, the experiment confirms the difficulties that we may face in choosing the right value for $\delta$.

\section{Discussion}

This paper has presented an algorithm that consists in a generic optimisation of a firing graph, in an attempt to solve the abstract task of identifying latent factors' activations. Furthermore, it has provided theoretical guarantees on the effectiveness of the procedure. The iterative optimisation method, associated with the diversity and flexibility of the architecture of a firing graph, opens doors to further applications, notably in the field of inverse problems and in the very active field of machine learning. Indeed, in supervised classification, we are given a dataset composed of features, which may be numerical or categorical descriptions of samples, and targets, which specify the class of samples. If we assume that the activation of a target is a combination of latent factors' activations, and if we operate the minimum transformation of features so that they take the form of a measure grid, a light layer of procedures could turn our solution into a supervised classifier. The specificity of such a learner would give it an interesting position in the supervised learning landscape. Indeed, its iterative optimisation and flexible architecture could make it an adaptive learner that scales to large datasets with minimal processing of raw data, in the manner of a neural network. Yet, unlike a neural network, the algorithm handles categorical or sparse feature spaces very efficiently. Furthermore, compared to the most advanced tree-based classifiers, its flexible architecture is more suitable for learning updates and for on-the-fly evaluation or addition of new features.
Finally, given the attention granted to the field of machine learning nowadays, both in the scientific community and in civil society, it would be common sense to orient this piece of research toward this field. 

\newpage
\begin{center}
\LARGE \textbf{Appendices}
\end{center}

\section*{Abstract}{\small

Based on $uvby\beta$ photometry we study the structure of several Galactic star-forming fields. Lac OB1 is a compact association at 520$\pm20$ pc spatially correlated with a region of intense H{\sc ii} emission in Sh2-126. Lod\'{e}n 112 is a compact OB group at 1630$\pm82$ pc, probably connected to an extended feature of OB stars located toward the Carina tangent. The field toward Car OB1 is complex and likely contains apparent concentrations representing parts of long segments of the Carina arm projected along the line of sight. Within the classical Mon OB2 association we separate a relatively compact group at 1.26 kpc that is spatially correlated with the Monoceros Loop SN remnant.

\normalsize}
\end{minipage}
\section{Introduction \label{intro}}
The Galactic OB associations offer a unique opportunity to study the influence of massive stars on the interstellar matter. A reconstruction of the star-formation history of many Galactic fields should be possible once the spatial distribution of the young stars is reliably determined. Despite the extensive efforts to improve and unify the distances to the young stellar groups in the Milky Way (MW), discrepancies still remain in the published studies for a large number of fields. Many of the present distance estimates are based to a large extent on preliminary distance calibrations, broad-band photometry, or absolute magnitudes ($M_V$) obtained via spectral and luminosity types (MK classification). On the other hand, the $uvby\beta$ photometric system provides $M_V$ and colour excess determinations for early-type stars in excellent agreement with the $Hipparcos$ parallaxes \cite{kaltcheva98}.
\begin{figure}
\center
\includegraphics[scale=0.27]{Kaltcheva_N_Fig1a.pdf} ~
\includegraphics[scale=0.27]{Kaltcheva_N_Fig1b.pdf} 
\caption{\label{fig1} Comparisons of $uvby\beta$ $M_V$ (left) and MK-based $M_V$ (right) to the $Hipparcos$-based $M_V$.}
\end{figure}

Figure~\ref{fig1} presents the comparison of $uvby\beta$-based $M_V$ and MK-based $M_V$ to the $Hipparcos$-based $M_V$, pointing to a possible over-estimation of stellar distances when relying on MK-based determinations. Since the distances to the Galactic OB associations are based mainly on individual stellar distances and rarely on main-sequence fitting, applying $uvby\beta$ photometry should lead to a significant improvement for many OB groups in the MW.

Our derivation of $uvby\beta$ photometric distances utilizes the intrinsic colour calibrations of Crawford \cite{crawford78} and Kilkenny \& Whittet \cite{kilkenny85} and the luminosity calibration of Balona \& Shobbrook \cite{balona84}, and takes into account possible stellar emission and mis-classification \cite{kaltcheva00}. The expected errors for one star are about 12~\% for luminosity classes III-IV and about 18-20~\% for luminosity classes I and II. Photometric $uvby\beta$ distances derived in this way provide the same impression of the star-forming field's structure as the improved $Hipparcos$ parallaxes (cf. \cite{kaltcheva07}).
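To illustrate how a photometric distance follows from these quantities, here is a small Python helper (our own illustration, not the authors' code): it applies the standard relations $d = 10^{(V - M_V - A_V + 5)/5}$ and $A_V = R \times E(b-y)$, where the adopted ratio $R = 4.3$ is a commonly used value and is an assumption here, not a number taken from this paper.

\begin{verbatim}
def photometric_distance_pc(V, M_V, E_by, R=4.3):
    """Distance in parsecs from uvbybeta photometry.

    V    -- apparent visual magnitude
    M_V  -- absolute magnitude from the luminosity calibration
    E_by -- colour excess E(b-y)
    R    -- assumed total-to-selective extinction ratio A_V / E(b-y)
    """
    A_V = R * E_by                         # interstellar extinction in V
    return 10 ** ((V - M_V - A_V + 5) / 5)

# a star with V = 10.0, M_V = -1.5 and E(b-y) = 0.5 lands at ~740 pc
print(photometric_distance_pc(10.0, -1.5, 0.5))
\end{verbatim}

As a consistency check, the true distance modulus DM $= 11.06$ quoted in the next section corresponds to $10^{(11.06+5)/5} \approx 1630$ pc, the distance adopted there.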
\n\nIn this contribution we present improved distance estimates for four Galactic\nOB associations.\n\n\n\\section{Lac OB1}\n\nLac OB1 (Lac~OB1b, \\cite{blaauw58}) is a nearby notable clustering of\nearly-type stars near 10 Lacertae that initially gained attention because of\nthe expanding motion of its members. Based on the derived $uvby\\beta$\nphotometric distances used in conjunction with the photometric diagrams we\nidentify Lac OB1 as a compact group of 12 low-reddened main-sequence stars\nlocated at a distance 520$\\pm20$ pc in the direction $l=96.4^\\circ, b=-16.6^\\circ$ (see \\cite{kaltcheva09} for details). The available radial velocity and\nproper motion measurements support the impression that this is a real\ngroup. For these 12 stars, the recalculated $Hipparcos$ parallaxes\n\\cite{vanleeuwen07} are in excellent agreement with the photometric\n$uvby\\beta$ parallaxes. The photometric distance of the O9V star 10~Lac (HD\n214680) is estimated to be 715$^{+107}_{-92}$ pc. Although this estimate is\nconsiderably larger than the one based on $Hipparcos$, the agreement for this\nstars is better with the recomputed $Hipparcos$ parallax \\cite{vanleeuwen07}\nwhich yields a distance of $529^{+70}_{-50}$ pc in comparison to the original\n$Hipparcos$ estimate of $325^{+82}_{-55}$ pc.\n\nFigure~\\ref{fig2} presents the distribution of H{\\sc ii} intensity in units of\nRayleighs and brightness temperature distribution of H{\\sc i} at velocity\nchannel $-15.5$ km s$^{-1}$ toward Lac OB1, with Lac OB1 stars\nsuperimposed. Here and on all further figures the H{\\sc ii} data are taken from \\cite{finkbeiner03} via the $SkyView$ interface \\cite{McGlynn98}, and the H{\\sc i} data are taken from Leiden\/Argentine\/Bonn (LAB) Survey of Galactic HI \\cite{kalberla05}.\nA correlation of the stars' location with the regions of\nintense H{\\sc ii} emission in Sh2-126 \\cite{sharpless59} is noticeable. On the\nother side, the distribution of neutral hydrogen shows a deficiency (most obvious in the selected velocity channel), also\ncorrelating with the location of the stars (see also \\cite{cappadenicolau90}).\n\n\\begin{figure}\n\\center\n\\includegraphics[scale=0.5]{Kaltcheva_N_Fig2a.pdf} ~\n\\includegraphics[scale=0.5]{Kaltcheva_N_Fig2b.pdf} \n\\caption{\\label{fig2} The stars of Lac OB1 overploted on the distribution of\n the H{\\sc ii} emission in Sh2-126 (left) and H{\\sc i} (right). 10 Lac is shown with filled symbol.}\n\\end{figure}\n\n\\section{The field of Lod\\'{e}n 112}\n\nLod\\'{e}n 112 is identified as a poor, but compact cluster candidate. Based on $uvby\\beta$ photometry we obtained true DM = 11.06$\\pm$0.12(s.e.) and average color excess $E(b-y)$ = 0.5$\\pm$0.03(s.e.) \\cite{kaltcheva11}. This corresponds to a distance of 1630$\\pm$82 pc, which is\nsignificantly smaller than the presently adopted 2500 pc (WEBDA). In our\n$uvby\\beta$ sample there are several other early B stars located at that exact\ndistance. The photometric distances and available proper motions allowed us to\nidentify a group of about 10 early B stars that could represent a new OB\nassociation at coordinates $282^\\circ\\!