diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzcppv" "b/data_all_eng_slimpj/shuffled/split2/finalzzcppv" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzcppv" @@ -0,0 +1,5 @@ +{"text":"\\section{INTRODUCTION}\n\nMore and more complex substructures have been discovered in the Milky Way by recent digital sky surveys(\\citet{ant12}; \\citet{kle08}; \\citet{zhao09}). It is well-known that the vicinal velocity field is clumpy and most of the observed overdensities are made of spatially unbound groups of stars, called moving groups. \\citet{egg58} have defined and investigated many moving groups, supposing that moving groups are from dissolving open clusters. Later, many theoretical models suggest that the overdensities of stars in some regions of the Galactic velocity UV-plane may be a result of global dynamical mechanisms related to the nonaxisymmetry of the Galaxy \\citep{fam05}, namely the presence of the bar \\citep{kal91,deh00,fux01}, and\/or spiral arms \\citep{sim04,qui05}. Since the late 90s of last century, the bar has been believed to be short and fast rotating for long time. This was in very good agreement with the explanation of the Hercules moving group as being due to the bar's outer Lindblad resonance \\citep{deh00}. However, recent photometric studies of the Galactic center have shown that the bar could be longer, reopening the debate on the bar's pattern speed and the origin of the Hercules moving group \\citep{mon17,per17}. Nowadays, the origins of these moving groups are explained by different theories or hypotheses, such as cluster disruption, dynamical effects and accretion events. \\citet{fre02} put forward the chemical tagging technique to reassemble the ancient stellar forming aggregates in the Galactic disk. Since then it has become popular to use detailed chemical abundances from high resolution spectroscopy to disentangle the mechanism that has formed a certain stream. For example, \\citet{ben07} found a wide spread in the distributions of age and chemical abundances of the stars in the Hercules stream, and concluded that this group is compatible with being a dynamical feature. According to the homogeneity of the HR 1614 group in age and abundance, \\citet{sil07} concluded that it is the remnant of a dispersed star-forming event.\n\nIn the past, it is hard to determine the stellar members of moving groups due to the lack of parallaxes information, which will become available with Gaia survey. Combined with spectroscopic survey, like LAMOST, we can determine accurate velocity coordinates of stars in the solar neighbourhood. The $\\gamma$ Leo (Leonis) moving group hasn't been closely analysed before. The $\\gamma$ Leonis group were defined by \\citet{egg59a,egg59b} by convergent point method. Its existence has been confirmed by \\citet{sku97}. \\citet{ant12}, using RAVE data, reidentified two peaks of the $\\gamma$ Leo moving group in UV plane by wavelet transform which are confirmed by \\citet{lia17} using Gaia-TGAS (\\citet{pru16,bro16}) cross-matched with LAMOST DR3 \\citep{cui12,zhao06,zhao12}.\n\n\nThe objective of this paper is to trace the origin of the $\\gamma$ Leo moving group by chemical tagging. Section 2 describes our sample and observational information about this sample. In Section 3, we discuss stellar parameters, chemical abundance and error analysis. The main results and discussions are given in Section 4. 
In the final section, we present the conclusions of our work and expectations for the future.\n\n\n\n\\section{SAMPLE SELECTION and OBSERVATION}\n\n\\subsection*{Membership Criteria}\nMembership of a moving group is based on the stars' velocities. The velocity components \\textit{UVW} are defined in a right-handed local standard reference coordinate system, whose axes point to the directions of the Galactic centre, Galactic rotation and the North Galactic Pole, respectively. Velocities were corrected to the local standard of rest, where the Sun's \\textit{UVW} velocities are (7.01, 10.13, 4.95) km s$^{-1}$ \\citep{hua15}. Parallaxes and proper motions were taken from Gaia DR1 (\\citet{bro16, pru16}) and radial velocities were taken from the LAMOST catalogue \\citep{zhao12,cui12}. With the correlation coefficients provided by Gaia, the uncertainties of the velocity components have been calculated using the full covariance matrix. We have excluded stars with relative parallax uncertainty larger than 30\\% from the sample.\n\n\\citet{lia17} used a wavelet transform technique to search for overdensities in the velocity distribution. Following \\citet{ant12}, peak 7 and peak 10 were identified as the $\\gamma$ Leo moving group in the local sample of \\citet{lia17}. The wavelet transform was applied to the \\textit{UVW} coordinates to get the peaks and the size of the $\\gamma$ Leo overdensity. Then we took the peak (60.8, 3.3, 2.9) km s$^{-1}$ as the center and 9.3 km s$^{-1}$ as the radius of the $\\gamma$ Leo moving group. We adopted all objects within this radius as candidate stars, taking into account that typical velocity errors are about 5 km s$^{-1}$ and that the internal velocity dispersion of a moving group is more than 2 km s$^{-1}$ \\citep{shk12}. In total, 77 candidate stars within 300 pc were selected, of which 18 stars were observed with the Subaru Telescope. Table 1 lists the identifiers, equatorial coordinates, parallaxes, proper motion components and SIMBAD identifiers of the 15 single stars (the other 3 stars are spectroscopic binaries).\n\n\\subsection*{Observation}\nHigh-resolution spectra of the 18 candidate members of the $\\gamma$ Leo moving group were obtained on August 3-6, 2017 with Subaru\/HDS (High Dispersion Spectrograph; \\citealt*{nog02}). The spectral resolution is 36,000, with wavelength coverage from approximately 4000 {\\AA} to 7000 {\\AA}. The data were reduced with the IRAF echelle package, including bias correction, flat fielding, scattered light subtraction, extraction of spectra, and wavelength calibration using Th arc lines. Cosmic-ray hits were removed by the method described in \\citet{aok05}. The code HDSV \\citep{zhao07} was used to estimate heliocentric radial velocities and normalize the spectra. The signal-to-noise ratio (\\textit{S\/N}) of the spectra varies from star to star (listed in Table~\\ref{tbl-2}); the mean value is 68.2 per pixel at 5500 {\\AA}. Among the 18 observed stars, three were found to be double-lined spectroscopic binaries and they were excluded from the sample. The remaining 15 stars have been analyzed. A solar spectrum (moonlight spectrum) observed with the NAOC-Xinglong 2.16 m Telescope was acquired for correcting the zero point of the elemental abundances.\n\n\n\\section{ABUNDANCE ANALYSIS}\n\n\n\\subsection*{Equivalent width measurements and Model Atmospheres} \\label{bozomath}\nThe elemental abundances were determined based on equivalent widths (EWs) measured line by line with Gaussian fits. 
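As a brief illustration of this procedure (with $F_{\\lambda}$ the observed flux, $F_{c}$ the continuum level, and $d$ and $\\sigma$ the fitted Gaussian depth and width), the EW of a line follows from the standard relation\n$$\nEW = \\int \\Big(1-\\frac{F_{\\lambda}}{F_{c}}\\Big)\\,{\\rm d}\\lambda = \\sqrt{2\\pi}\\,d\\,\\sigma.\n$$\n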
The atomic data of all the lines we used are taken from \\citet{kon17}. To estimate the accuracy of the EW measurements, we compared our EWs of the moon spectrum with those of \\citet{ben14}. The linear least squares fit for the two sets of data is $EW_{this work}=1.0225(\\pm0.0054)EW_{Bensby}+1.055(\\pm0.351)$ m\\AA, and the standard deviation is about 2.3 m\\AA. Figure 1 shows a comparison of the two EW sets and the linear fit. The model atmospheres were interpolated from the LTE Kurucz model atmospheres \\citep{kur93} and the theoretical EWs for individual lines were calculated using the ABONTEST8 code supplied by Dr. P. Magain.\n\\begin{figure}\n\\epsscale{.90}\n\\plotone{ew.eps}\n\\caption{Comparison of solar equivalent widths between the values of \\citet{ben14} and those measured in this work (x and y axes, respectively). The dashed line is a linear fit to the data points.\\label{Figure 1}}\n\\end{figure}\n\n\\subsection*{Stellar Parameters} \\label{bozomath}\nStellar parameters are estimated by two methods, a photometric one and a spectroscopic one. For the photometric method, effective temperatures (\\teff) were derived from the photometric colour index \\textit{V $-$ K} according to the empirical calibration relations given by \\citet{alo96}. The apparent magnitudes are adopted from the SIMBAD Astronomical Database \\citep{sbd00}. The color excesses E(B $-$ V) of most stars are obtained from the Galactic Dust Reddening and Extinction website\\footnote{http:\/\/irsa.ipac.caltech.edu\/applications\/DUST\/}\\citep{sch11}. However, for J0219+5623, we suspect that the value 0.43 obtained by this method is an overestimate and instead take 0.03 from the 3D Dust Mapping website\\footnote{http:\/\/argonaut.skymaps.info\/}\\citep{gre15} as E(B $-$ V) to estimate this star's effective temperature. Surface gravity ($\\log g$) is calculated from the basic relation between bolometric magnitude, effective temperature, stellar mass and surface gravity:\n$$\n\\log\\frac{g}{g_{\\bigodot}} = \\log\\frac{M}{M_{\\bigodot}}+4\\log\\frac{T_{eff}}{T_{eff,\\bigodot}}+0.4(M_{bol}-M_{bol,\\bigodot})\n$$\nwhere\n$$\nM_{bol} = V_{mag}+BC+5\\log\\pi+5.\n$$\nThe parallax $\\pi$ (mas) is taken from Gaia-TGAS \\citep{bro16, pru16}. The bolometric corrections are calculated using the relation given by \\citet{alo95}. Most of our stars are turn-off stars, while for the two sub-giant stars (J1735+2650 and J0159+2636), we used the formulae provided by \\citet{alo99} to calculate the effective temperature. Stellar mass is estimated by interpolation of the evolutionary tracks of \\citet{yi03} for \\teff and $M_{bol}$.\n\n\nFor the spectroscopic method, the effective temperature is determined by enforcing excitation equilibrium, requiring the slope of lower excitation potential vs $\\log \\epsilon$(Fe \\uppercase\\expandafter{\\romannumeral1}) to be close to zero. Surface gravity ($\\log g$) is determined from the ionization equilibrium method, which forces $\\log \\epsilon$(Fe \\uppercase\\expandafter{\\romannumeral1}) to equal $\\log \\epsilon$(Fe \\uppercase\\expandafter{\\romannumeral2}). Micro-turbulence is determined by requiring that the abundances of the Fe \\uppercase\\expandafter{\\romannumeral1} lines show no trend with EWs. We iterate the fitting with a 3$\\sigma$ rejection of the deviant Fe lines after the first determination of the stellar parameters. The parameters from this spectroscopic method are adopted as our final parameters to calculate the abundances. In Figure 2, we plot the stars' positions in the HR diagram. 
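For reference, the luminosities follow from the bolometric magnitudes above via the standard relation\n$$\n\\log\\frac{L}{L_{\\bigodot}} = -0.4(M_{bol}-M_{bol,\\bigodot}).\n$$\n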
The abscissa is the spectroscopic effective temperature and the ordinate is the luminosity. The dotted line is a Y$^2$ isochrone with age = 1.2 Gyr from \\citet{yi01}.\n\n\\begin{figure}\n\\epsscale{.60}\n\\plotone{LTeff.eps}\n\\caption{HR diagram of our sample.\\label{Figure 2}}\n\\end{figure}\n\n\n\nThe resulting stellar parameters are listed in Table~\\ref{tbl-2}. Figure 3 shows comparisons of \\teff and $\\log g$ from the two methods. The average and standard deviation of $\\Delta$\\teff are 66 K and 270 K, respectively. The systematic effect between these methods may be mainly caused by uncertainties of the extinction. The average and standard deviation of $\\Delta\\log g$ are 0.007 and 0.069, respectively. We suppose that the systematic deviation for $\\log g$ can be ignored. The uncertainties of $\\log g$ are mainly caused by uncertainties of the stellar effective temperature and uncertainties of the mass from parallax errors and extinction.\n\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\begin{minipage}[c]{0.4\\textwidth}\n\\centering\n\\includegraphics[scale=0.35]{teff.eps}\n\\end{minipage}\n\\begin{minipage}[c]{0.4\\textwidth}\n\\centering\n\\includegraphics[scale=0.35]{logg.eps}\n\\end{minipage}\n\\caption{Photometric \\teff vs spectroscopic \\teff and photometric $\\log g$ vs spectroscopic $\\log g$. The dashed line indicates unit slope. \\label{Figure 3}}\n\\end{center}\n\\end{figure}\n\n\nWe adopted the photospheric solar abundances of \\citet{asp09} to calculate the [X\/H] values. To estimate the offset of the derived abundances with respect to their results, we used the moon spectrum to derive the solar parameters. The derived effective temperature, surface gravity, micro-turbulent velocity and metallicity are \\teff$ = $5779 K, $\\log g = $4.35, $\\xi = $0.85 km s$^{-1}$ and $\\log \\epsilon(Fe) = $7.61, respectively. Thus, when calculating the metallicity [Fe\/H] of the other stars, we subtracted an extra 0.11 dex as a systematic correction. The final stellar abundances [X\/H] are presented in Table~\\ref{tbl-3} and the abundance distributions are plotted in Figures 4 and 5. Moreover, we overplotted the abundances of field stars from \\citet{ben14} and \\citet{ven04} as comparison stars.\n\n\\subsection*{Error Analysis} \\label{bozomath}\n\nTo estimate the abundance uncertainties due to errors associated with the EW measurements and stellar parameters, we analyzed the sensitivity of the abundances to changes of each quantity separately, with the others kept unchanged. Table~\\ref{tbl-4} lists the abundance differences induced by changing the equivalent widths by $\\Delta EW = 2.3$ m\\AA, the effective temperature by $\\Delta T_{\\textrm{eff}}=100$ K, the surface gravity by $\\Delta\\log g = 0.12$, the iron abundance by $\\Delta$[Fe\/H]$=0.11$ dex and the micro-turbulent velocity by $\\Delta \\xi = $0.1 km s$^{-1}$, respectively. We took $\\Delta EW = 2.3$ m\\AA{} because the standard deviation between our measured EWs of the moon spectrum and those of \\citet{ben14} is about 2.3 m\\AA. The typical errors in the stellar parameters $\\Delta T_{\\textrm{eff}}$, $\\Delta\\log g$ and $\\Delta \\xi$ are estimated based on our spectroscopic derivation, namely according to the slope changes. We took $\\Delta$[Fe\/H]$=0.11$ dex because the maximum random error of [Fe\/H] in the measurements is about 0.11 dex. We did not consider NLTE effects, which may cause larger scatter and overestimated abundances. Finally, we adopt the square root of the quadratic sum of the errors of all factors as the total error $\\sigma_{total}$. 
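Explicitly, writing $\\sigma_{X}$ for the abundance change induced by varying quantity $X$ by the amount listed above, the adopted total error is\n$$\n\\sigma_{total}=\\sqrt{\\sigma_{EW}^{2}+\\sigma_{T_{\\textrm{eff}}}^{2}+\\sigma_{\\log g}^{2}+\\sigma_{\\rm [Fe\/H]}^{2}+\\sigma_{\\xi}^{2}}.\n$$\n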
As Table~\\ref{tbl-4} shows, apart from the total error of Na abundance of a star (J0158+3955) that reaches 0.19 dex, the largest uncertainty is 0.14 dex. The titanium is more sensitive to changes of parameters than other elements, and the largest abundance error appears in the $\\Delta$[Ti] column for most stars. These uncertainties do not change the result of abundance distribution.\n\n\n\\section{RESULT and DISCUSSION}\n\nFigure 4 shows the abundance ratios of these elements for our sample and comparison stars. The metallicity of the 15 $\\gamma$ Leo moving group member stars ranges from -0.67 to 0.35. The mean value and standard deviation are respectively 0.03 and 0.24. The large dispersion demonstrates they are not from a chemically homogeneous origin.\n\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\begin{minipage}[c]{0.45\\textwidth}\n\\centering\n\\includegraphics[scale=0.6]{Mg.eps}\n\\end{minipage}\n\\begin{minipage}[c]{0.45\\textwidth}\n\\centering\n\\includegraphics[scale=0.6]{Si.eps}\n\\end{minipage}\n\\end{center}\n\\vspace{0pt}\n\\begin{center}\n\\begin{minipage}[c]{0.45\\textwidth}\n\\centering\n\\includegraphics[scale=0.6]{Ca.eps}\n\\end{minipage}\n\\begin{minipage}[c]{0.45\\textwidth}\n\\centering\n\\includegraphics[scale=0.6]{Ti.eps}\n\\end{minipage}\n\\caption{[X\/Fe] vs [Fe\/H] for $\\alpha$-elements(Mg, Si, Ca, Ti). Blue filled circles are member stars; Green pluses are comparison stars from \\citet{ben14}; Red triangles are comparison stars from \\citet{ven04}. \\label{Figure 4}}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\begin{minipage}[c]{0.45\\textwidth}\n\\centering\n\\includegraphics[scale=0.6]{Na.eps}\n\\end{minipage}\n\\begin{minipage}[c]{0.45\\textwidth}\n\\centering\n\\includegraphics[scale=0.6]{Al.eps}\n\\end{minipage}\n\\end{center}\n\\vspace{0pt}\n\\begin{center}\n\\begin{minipage}[c]{0.45\\textwidth}\n\\centering\n\\includegraphics[scale=0.6]{Cr.eps}\n\\end{minipage}\n\\begin{minipage}[c]\n{0.45\\textwidth}\n\\centering\n\\includegraphics[scale=0.6]{Ni.eps}\n\\end{minipage}\n\\end{center}\n\\vspace{0pt}\n\\begin{center}\n\\begin{minipage}[c]{0.45\\textwidth}\n\\centering\n\\includegraphics[scale=0.6]{Y.eps}\n\\end{minipage}\n\\begin{minipage}[c]{0.45\\textwidth}\n\\centering.\n\\includegraphics[scale=0.6]{Ba.eps}\n\\end{minipage}\n\\caption{[X\/Fe] vs [Fe\/H] for Fe-peak, odd-Z and s-process elements (Al, Ba, Cr, Ni, Na, Y). Blue filled circles are member stars; Green pluses are comparison stars from \\citet{ben14}; Red triangles are comparison stars from \\citet{ven04}. \\label{Figure 5}}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}\n\\epsscale{.70}\n\\plotone{afe.eps}\n\\caption{Comparison of [$\\alpha$\/Fe] vs [Fe\/H] between $\\gamma$ Leo moving group with other moving groups. The red filled circles are member stars of $\\gamma$ Leo moving group. Black squares are member stars of Hercules moving group from \\citet{ram16}. Diamond are member stars of the Sirius moving group from \\citet{tab17}. Triangles are member stars of the Hyades moving group from \\citet{sil11}. Pluses and crosses are member stars of the AF06 and Arcturus moving group, respectively, from \\citet{ram12}. 
See text for details.\\label{Figure 6}}\n\\end{figure}\n\nThe $\\alpha$-elements (Mg, Si, Ca, Ti) are mainly produced by Type \\uppercase\\expandafter{\\romannumeral2} supernova (SN \\uppercase\\expandafter{\\romannumeral2}) nucleosynthesis, while iron is produced in both SN \\uppercase\\expandafter{\\romannumeral2} and SN \\uppercase\\expandafter{\\romannumeral1}a events. The [$\\alpha$\/Fe] ratio is a key chemical signature, because it reflects well the star formation history of the stellar system. As found for the comparison stars in the solar neighbourhood, the abundances of all four $\\alpha$-elements of the 15 member stars show a decreasing trend with increasing metallicity at lower metallicities and reach a plateau at higher metallicity \\citep{edv93,che00}.\n\nThe mean $\\alpha$-element abundance [$\\alpha$\/Fe]$=$([Mg\/Fe]+[Si\/Fe]+[Ca\/Fe]+[Ti\/Fe])\/4 ranges from -0.045 to 0.114. The mean and standard deviation of the member stars' $\\alpha$-element abundances are 0.031 and 0.062, respectively. In Figure 6, we compare the [$\\alpha$\/Fe] vs [Fe\/H] distribution of the $\\gamma$ Leo moving group with those of other known moving groups. The metallicity of the $\\gamma$ Leo moving group spans a relatively large range. The dispersion of the member stars' $\\alpha$-element abundances is also large, but it is close to those of the Hyades and AF06 moving groups. At lower metallicity, the [$\\alpha$\/Fe] distribution of the $\\gamma$ Leo moving group is similar to that of the AF06 moving group, while at higher metallicity, the [$\\alpha$\/Fe] distribution of the $\\gamma$ Leo moving group is close to that of the Hercules moving group.\n\nNa and Al are odd-Z elements and are thought to be produced in SN \\uppercase\\expandafter{\\romannumeral2} and SN \\uppercase\\expandafter{\\romannumeral1}b\/c events \\citep{nom84}. Though it is not quite clear for Al, the Na distribution of the comparison stars clearly shows an upturn \\citep{edv93,shi04}. Stars of the $\\gamma$ Leo moving group follow this trend, although the sample size of this study is still limited. The Al lines of the star J2154+1418 are so weak that they are not included in our analysis. The iron-peak elements (Cr, Ni) are believed to have the same patterns as iron. The scatters of [Cr\/Fe] and [Ni\/Fe] of the member stars are as small as those of the comparison stars. Y and Ba are light and heavy neutron-capture elements, respectively. The abundances of these elements in the member stars have relatively large errors, and the comparison stars are distributed over a wider range. According to these chemical abundances, we suggest that the stars of the $\\gamma$ Leo moving group were born in situ.\n\n\n\n\\section{CONCLUSION}\nWe have observed 18 candidate members of the $\\gamma$ Leo moving group selected by the \\textit{UVW} criteria from the LAMOST survey. Three stars are spectroscopic binaries and were excluded from the sample. For the remaining fifteen stars, a detailed abundance analysis was carried out. The abundance patterns of the member stars show no evident difference from those of the comparison stars. The large dispersion of metallicity in the member stars suggests that the $\\gamma$ Leo moving group does not have a chemically homogeneous origin. We suppose that the $\\gamma$ Leo moving group originated from dynamical effects, perhaps related to the effect of the spiral arms. For example, Figure 18 of \\citet{ant11} shows that it is possible that spiral arms can generate a structure in this velocity region. However, small variations of the simulation parameters can produce very different velocity structures. 
In the future, we will perform dynamical simulations to better understand the origin of the $\\gamma$ Leo moving group.\n\n\n\nGaia's high-precision astrometric data bring great convenience to the study of moving groups in the solar neighbourhood. Chemical abundances from high resolution spectra play an important role in disentangling the degeneracy among the many possible causes of the local velocity structures.\n\n\n\n\\acknowledgments\nWe thank Bharat Kumar Yerra, Li Haining, Tan Kefeng and Liu Yujuan for their constructive suggestions and discussion. This work is supported by the Astronomical Big Data Joint Research Center, co-founded by the National Astronomical Observatories, Chinese Academy of Sciences and the Alibaba Cloud. This study is supported by the National Natural Science Foundation of China under grants No. 11390371, 11233004, U1431106, 11573035, 11625313, 11603033, the National Key Basic Research Program of China (973 program) 2014CB845701 and 2014CB845703, and the JSPS - CAS Joint Research Program. WA is partially supported by JSPS KAKENHI Grant Number 16H02168.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe Thompson group $F$ was introduced by Richard Thompson in the 1960s and many of its unusual, interesting properties \\cite{CFP96,CF11} have been deeply studied over the past decades, in particular due to the still open conjecture of its nonamenability. Recently, Vaughan Jones provided a new approach to the construction of (unitary) representations of the Thompson group $F$ which is motivated by the link between subfactor theory and conformal field theory (see \\cite{Jo17,Jo18a,Jo18b,BJ19a,BJ19b,AJ21}). Independently, another approach to the representation theory of the Thompson group $F$ is motivated by recent progress in the study of distributional invariance principles and symmetries in noncommutative probability (see \\cite{Ko10,EGK17} and \\cite[Introduction]{KKW20}). More precisely, a close relation between certain representations of the Thompson monoid $F^+$ and unilateral noncommutative stationary Markov processes is established in \\cite{KKW20}. The goal of the present paper is to demonstrate that this connection appropriately extends to one between representations of the Thompson group $F$ and bilateral stationary noncommutative Markov processes (in the sense of K\\\"ummerer \\cite{Ku85}). \n\nOur first main result is Theorem \\ref{theorem:markov-filtration-1}, which concerns the construction of a local Markov filtration and a bilateral stationary Markov process from a given representation of the Thompson group $F$. Going beyond the framework of Markovianity, this construction is further deepened in Theorem \\ref{theorem:markov-filtration-2} and Corollary \\ref{corollary:triangulararray}, to obtain rich triangular arrays of commuting squares. A main result in the converse direction is Theorem \\ref{theorem:TensorMarkovF}, where we provide a canonical construction of a representation of the Thompson group $F$ from a given bilateral stationary noncommutative Markov process in tensor dilation form. Finally, we apply this canonical construction to bilateral stationary Markov processes in classical probability. We establish in Theorem \\ref{theorem:F-gen-compression} that, for a given Markov transition operator, there exists a representation of the Thompson group $F$ such that this Markov transition operator is the compression of a represented generator of the Thompson group $F$. 
\n\n\nLet us outline the content of this paper. Section\n\\ref{section:Preliminaries} starts with providing\ndefinitions, notation and some background results on the\nThompson group $F$ (see Subsection\n\\ref{subsection:basics-on-F}). The basics of noncommutative\nprobability spaces and Markov maps are given in Subsection \\ref{subsection:Markov-maps}.\nWe review in Subsection \\ref{subsection:Markovianity} the notion of\ncommuting squares from subfactor theory, as it is underlying the present concept of Markovianity in noncommutative probability. Furthermore we provide the notion of a local Markov filtration which allows to define Markovianity \non the level of von Neumann subalgebras without any reference to noncommutative random variables. Finally we review some results on noncommutative stationary processes in Subsection\n\\ref{subsection:Noncommutative Stationary Processes}. Here we will meet bilateral noncommutative stationary Markov processes and Markov dilations in the sense of K\\\"ummerer\n\\cite{Ku85} as well as bilateral noncommutative stationary Bernoulli shifts.\n\nWe investigate in Section \\ref{section:Mark-From-Rep} how representations of the Thompson group\n$F$ in the automorphisms of noncommutative probability spaces yield bilateral noncommutative stationary Markov processes. Subsection\n\\ref{subsection:generating} introduces the generating property of representations of $F$ in\nDefinition \\ref{definition:generating}. This property ensures that the fixed point algebras of the\nrepresented generators of $F$ form a tower which generates the noncommutative probability\nspace, see Proposition \\ref{proposition:generating-property}. This tower of fixed\npoint algebras equips the noncommutative probability space with a filtration which, using actions\nof the represented generators, can be further upgraded to become a local Markov filtration.\nSubsection \\ref{subsection:Markov-F} considers certain noncommutative stationary processes which are adapted to this local Markov filtration.\n\nThe closing Section \\ref{section:Reps-of-F-from-Mark} shows that representations of $F$ can be obtained from an important class of bilateral stationary noncommutative Markov processes. To be more precise, in Subsection \\ref{subsection:Example} we provide elementary constructions of the Thompson group $F$ in the automorphisms of a tensor product von Neumann algebra. This extends the representation of the Thompson monoid $F^+$ obtained in \\cite{KKW20} and also provides examples of bilateral noncommutative Markov and Bernoulli shifts. We show in Subsection \\ref{subsection:constr-rep-F} that Markov processes in tensor dilation form give rise to representations of $F$. Finally, in Subsection \\ref{subsection:constr-classical} we use a result of K\\\"ummerer to show that, given a bilateral stationary Markov process\nin the classical case, we can obtain representations of $F$ such that the associated transition operator is the compression of a represented generator of $F$.\n\n\\section{Preliminaries} \\label{section:Preliminaries}\n\\subsection{\\texorpdfstring{The Thompson group $F$}{}}\n\\label{subsection:basics-on-F}\nThe Thompson group $F$, originally introduced by Richard Thompson in 1965 as a certain group of piece-wise linear homeomorphisms on the interval $[0,1]$, is known to have the infinite\npresentation\n\\begin{equation*}\nF:=\\langle g_0,g_1,g_2,\\ldots \\mid g_{k}g_{\\ell}=g_{\\ell+1}g_{k} \n\\text{ for } 0\\leq k<\\ell <\\infty \\rangle. 
\n\\end{equation*}\nWe note that we work throughout with \ngenerators $g_k$ which correspond to the\ninverses of the generators usually used in the literature (e.g.~\\cite{Be04}). \nLet $e \\in F$ denote the neutral element. \nAs it is well-known, $F$ is finitely generated with $F = \n\\langle g_0, g_1 \\rangle$. Furthermore, \nas shown for example in \\cite[Theorem 1.3.7]{Be04}, an element $e \\neq g \\in F$ has the unique normal form\n\\begin{align} \\label{eq:F-normal-form}\ng = g_0^{-b_0}\\cdots g_k^{-b_k} \\, g_k^{a_k}\\cdots g_0^{a_0} \n\\end{align}\nwhere $a_0, \\ldots , a_k , b_0, \\ldots , b_k \\in \\mathbb{N}_0$, $k \\geq 0$ and \n \\begin{enumerate}\n \\item exactly one of $a_k$ and $b_k$ is non-zero,\n \\item if $a_i\\neq 0$ and $b_i\\neq 0$, then $a_{i+1}\\neq 0$\nor $b_{i+1}\\neq 0$.\n \\end{enumerate}\nAs the defining relations of this presentation of $F$ involve no inverse generators, one can associate\nto it the monoid \n\\begin{equation}\\label{eq:F+}\nF^{+}=\\langle g_0,g_1,g_2,\\ldots \\mid g_{k}g_{\\ell}=g_{\\ell+1}g_{k} \n\\text{ for } 0\\leq k<\\ell <\\infty \\rangle^+, \n\\end{equation} \nreferred to as the \\emph{Thompson monoid $F^+$}. We remark that, alternatively, the\ngenerators of this monoid can be obtained as morphisms (in the inductive limit) of the category\nof finite binary forests, see for example \\cite{Be04,Jo18a}. \n\\begin{Definition} \\normalfont \\label{definition:mn-shift}\nLet $m,n \\in \\mathbb{N}_0$ with $m \\le n$ be fixed. The \\emph{$(m,n)$-partial shift} $\\operatorname{sh}_{m,n}$ is\nthe group homomorphism on $F$ defined by \n\\[\n\\operatorname{sh}_{m,n}(g_k) = \\begin{cases}\n g_m &\\text{if $k=0$}\\\\\n g_{n +k} &\\text{if $k \\ge 1$}.\n \\end{cases}\n\\]\n\\end{Definition}\nWe remark that the map $\\operatorname{sh}_{m,n}$ preserves all defining relations of $F$ and is thus\nwell-defined as a group homomorphism.\n\\begin{Lemma} \\normalfont \\label{lemma:m-n-shift}\nThe group homomorphisms $\\operatorname{sh}_{m,n}$ on $F$ are injective for all $m,n \\in \\mathbb{N}_0$. \n\\end{Lemma} \\normalfont\n\\begin{proof}\nIt suffices to show that $\\operatorname{sh}_{m,n}(g) = e$ implies $g=e$. \nLet $g\\in F$ have the (unique) normal form as stated in \\eqref{eq:F-normal-form}. Thus, by the definition of the partial shifts, \n\\[\n\\operatorname{sh}_{m,n}(g) = g_m^{-b_0}\\cdots g_{n+k}^{-b_k} \\, g_{n+k}^{a_k}\\cdots g_m^{a_0}. \n\\]\n Thus $\\operatorname{sh}_{m,n}(g) = e$ if and only if \n $g_{n+k}^{a_k}\\cdots \\, g_m^{a_0} = g_{n+k}^{b_k} \\cdots \\, g_m^{b_0}$.\nSince the elements on both sides of the last equation are in normal form, its uniqueness implies $a_i =b_i$ for all $i$. But this entails $g=e$.\n\\end{proof}\n\n\\subsection{Noncommutative probability spaces and Markov maps}\n\\label{subsection:Markov-maps}\nThroughout, a \\emph{noncommutative probability space} $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ consists of a \nvon Neumann algebra $\\tilde{\\mathcal{M}}$ and a faithful normal state $\\tilde{\\psi}$ on $\\tilde{\\mathcal{M}}$. 
The identity \nof $\\tilde{\\mathcal{M}}$ will be denoted by $\\mathbbm{1}_{\\tilde{\\mathcal{M}}}$, or simply by $\\mathbbm{1}$ when the context is clear.\nThroughout, $\\bigvee_{i\\in I}\\tilde{\\mathcal{M}}_i$ denotes the von Neumann algebra generated by the \nfamily of von Neumann algebras $\\{\\tilde{\\mathcal{M}}_i\\}_{i\\in I} \\subset \\tilde{\\mathcal{M}}$ for $I\\subset \\mathbb{Z}$.\nIf $\\tilde{\\mathcal{M}}$ is abelian and acts on a separable Hilbert space, then $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ is\nisomorphic to $\\big(L^\\infty(\\Omega, \\Sigma, \\mu), \\int_{\\Omega} \\cdot \\,\\, d\\mu\\big)$\nfor some standard probability space $(\\Omega, \\Sigma, \\mu)$.\n\\begin{Definition} \\normalfont \\label{definition:endomorphism}\nAn \\emph{endomorphism} $\\tilde{\\alpha}$ of a probability space $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ is a $*$- homomorphism on $\\tilde{\\mathcal{M}}$ \nsatisfying the following additional properties:\n\\begin{enumerate}\n \\item $\\tilde{\\alpha}(\\mathbbm{1}_{\\tilde{\\mathcal{M}}})=\\mathbbm{1}_{\\tilde{\\mathcal{M}}}$ (unitality);\n \\item $\\tilde{\\psi}\\circ \\tilde{\\alpha}=\\tilde{\\psi}$ (stationarity);\n \\item $\\tilde{\\alpha}$ and the modular automorphism group $\\sigma_t^{\\tilde{\\psi}}$ commute \n for all $t\\in \\mathbb{R}$ (modularity).\n\\end{enumerate}\nThe set of endomorphisms of $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ is denoted by $\\operatorname{End}(\\tilde{\\mathcal{M}},\\tilde{\\psi})$. We note that an\nendomorphism of $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ is automatically injective. In this paper, we will chiefly work with the automorphisms of $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ denoted by $\\operatorname{Aut}(\\tilde{\\mathcal{M}},\\tilde{\\psi})$.\n\n\\end{Definition}\n\\begin{Definition}\\label{definition:MarkovMap}\\normalfont\nLet $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ and $(\\mathcal{N},\\varphi)$ be two noncommutative probability spaces. A linear map \n$T \\colon \\tilde{\\mathcal{M}} \\to \\mathcal{N}$ is called a \\emph{$(\\tilde{\\psi},\\varphi)$-Markov map} if the following \nconditions are satisfied:\n\\begin{enumerate}\n\\item \\label{item:mm-i}\n$T$ is completely positive; \n\\item \\label{item:mm-ii}\n$T$ is unital; \n\\item \\label{item:mm-iii}\n$\\varphi \\circ T = \\tilde{\\psi}$;\n\\item \\label{item:mm-iv}\n$T \\circ \\sigma_t^{\\tilde{\\psi}} = \\sigma_t^{\\varphi} \\circ T$, for all $t \\in \\mathbb{R}$.\n\\end{enumerate}\n\\end{Definition}\nHere $\\sigma_{}^{\\tilde{\\psi}}$ and $\\sigma_{}^{\\varphi}$ denote the modular automorphism groups of \n$(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ and $(\\mathcal{N},\\varphi)$, respectively. If $(\\tilde{\\mathcal{M}},\\tilde{\\psi}) = (\\mathcal{N},\\varphi)$, we say that \n$T$ is a $\\tilde{\\psi}$-\\emph{Markov map on $\\tilde{\\mathcal{M}}$}. Conditions \\eqref{item:mm-i} to \\eqref{item:mm-iii} \nimply that a Markov map is automatically normal. The condition \\eqref{item:mm-iv} is equivalent \nto the condition that a unique Markov map $T^* \\colon (\\mathcal{N},\\varphi) \\to (\\tilde{\\mathcal{M}},\\tilde{\\psi})$ exists such\nthat\n\\[\n\\tilde{\\psi}\\big(T^*(y)x\\big) = \\varphi\\big(y\\, T(x)\\big) \\qquad (x \\in \\tilde{\\mathcal{M}}, y \\in \\mathcal{N}). \n\\]\nThe Markov map $T^*$ is called the \\emph{adjoint} of $T$ and $T$ is called \\emph{self-adjoint} if\n$T=T^*$. 
We note that condition \\eqref{item:mm-iv} is automatically satisfied whenever $\\tilde{\\psi}$ and\n$\\varphi$ are tracial, in particular for abelian von Neumann algebras $\\tilde{\\mathcal{M}}$ and $\\mathcal{N}$. \n\nWe recall for the convenience of the reader the definition of conditional expectations in the\npresent framework of noncommutative probability spaces.\n\\begin{Definition} \\normalfont\nLet $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ be a noncommutative probability space, and $\\mathcal{N}$ be a von Neumann subalgebra of\n$\\tilde{\\mathcal{M}}$. A linear map $E: \\tilde{\\mathcal{M}}\\to \\mathcal{N}$ is called a \\emph{conditional expectation} if it satisfies\nthe following conditions:\n\\begin{enumerate}\n \\item $E(x)=x$ for all $x\\in \\mathcal{N}$;\n \\item $\\|E(x)\\|\\leq \\|x\\|$ for all $x\\in \\tilde{\\mathcal{M}}$;\n \\item $\\tilde{\\psi}\\circ E=\\tilde{\\psi}$.\n\\end{enumerate}\n\\end{Definition}\nSuch a conditional expectation exists if and only if $\\mathcal{N}$ is globally invariant under the\nmodular automorphism group of $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ (see \\cite{Ta72}, \\cite{Ta79} and \\cite{Ta03}). The\nvon Neumann subalgebra $\\mathcal{N}$ is called $\\tilde{\\psi}$-conditioned if this condition is satisfied. Note\nthat such a conditional expectation is automatically normal and uniquely determined by $\\tilde{\\psi}$. In\nparticular, a conditional expectation is a Markov map and satisfies the module property\n$E(axb)=aE(x)b$ for $a,b\\in \\mathcal{N}$ and $x\\in \\tilde{\\mathcal{M}}$.\n\n\\subsection{Noncommutative independence and Markovianity} \n\\label{subsection:Markovianity}\n\nWe recall some equivalent properties as they serve to define commuting squares in subfactor\ntheory (see for example \\cite{GHJ89,JS97,Po89}) and as they are familiar from conditional independence\nin classical probability. \n\\begin{Proposition}\\label{proposition:cs}\nLet $\\tilde{\\mathcal{M}}_0, \\tilde{\\mathcal{M}}_1, \\tilde{\\mathcal{M}}_2$ be $\\tilde{\\psi}$-conditioned von Neumann subalgebras of the probability space\n$(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ such that $\\tilde{\\mathcal{M}}_0 \\subset (\\tilde{\\mathcal{M}}_1 \\cap \\tilde{\\mathcal{M}}_2)$. Then the following are equivalent:\n\\begin{enumerate}\n\\item \\label{item:cs-i}\n$E_{\\tilde{\\mathcal{M}}_0}(xy) = E_{\\tilde{\\mathcal{M}}_0}(x) E_{\\tilde{\\mathcal{M}}_0}(y)$ for all $x \\in \\tilde{\\mathcal{M}}_1$ and $y\\in \\tilde{\\mathcal{M}}_2$; \n\\item \\label{item:cs-ii}\n$E_{\\tilde{\\mathcal{M}}_1} E_{\\tilde{\\mathcal{M}}_2} = E_{\\tilde{\\mathcal{M}}_0}$;\n\\item \\label{item:cs-iii}\n$E_{\\tilde{\\mathcal{M}}_1}(\\tilde{\\mathcal{M}}_2) = \\tilde{\\mathcal{M}}_0$;\n\\item \\label{item:cs-iv}\n$E_{\\tilde{\\mathcal{M}}_1} E_{\\tilde{\\mathcal{M}}_2} = E_{\\tilde{\\mathcal{M}}_2} E_{\\tilde{\\mathcal{M}}_1}$ and $\\tilde{\\mathcal{M}}_1\\cap \\tilde{\\mathcal{M}}_2 = \\tilde{\\mathcal{M}}_0$. \n\\end{enumerate}\nIn particular, it holds that $\\tilde{\\mathcal{M}}_0 = \\tilde{\\mathcal{M}}_1 \\cap \\tilde{\\mathcal{M}}_2$ if one and thus all of these four\nassertions are satisfied. \n\\end{Proposition}\n\\begin{proof}\nThe tracial case for $\\tilde{\\psi}$ is proved in \\cite[Prop.~4.2.1.]{GHJ89}. The non-tracial case follows\nfrom this, after some minor modifications of the arguments therein. 
\n\\end{proof}\n\\begin{Definition}\\normalfont\nThe inclusions \n\\[\n\\begin{matrix}\n\\tilde{\\mathcal{M}}_2 &\\subset &\\tilde{\\mathcal{M}}\\\\\n\\cup & & \\cup \\\\\n\\tilde{\\mathcal{M}}_0 & \\subset & \\tilde{\\mathcal{M}}_1\n\\end{matrix}\n\\]\nas given in Proposition \\ref{proposition:cs} are said to form a \\emph{commuting square (of von\nNeumann algebras)} if one (and thus all) of the equivalent conditions \\eqref{item:cs-i} to\n\\eqref{item:cs-iv} are satisfied in Proposition \\ref{proposition:cs}.\n\\end{Definition}\n\\begin{Notation} \\normalfont \\label{notation:indexsets}\nWe write $I < J$ for two subsets $I, J \\subset \\mathbb{Z}$ if $i < j$ for all $i \\in I$ and $j \\in \nJ$. The cardinality of $I$ is denoted by $|I|$. For $N \\in \\mathbb{Z}$, we denote by $I + N$ the\nshifted set $\\{i + N \\mid i \\in I\\}$. Finally, $\\mathcal{I}(\\mathbb{Z})$ denotes the set of all `intervals' of \n$\\mathbb{Z}$, i.e.~sets of the form $[m,n] := \\{m, m+1, \\ldots, n\\}$, $[m,\\infty) := \\{m, m+1,\n\\ldots\\}$ or $(-\\infty, m] := \\{\\ldots, m-1,m\\}$ for $-\\infty \\leq m \\le n < \\infty$.\n\\end{Notation}\n\nWe next address the basic notions of Markovianity in noncommutative probability. Commonly,\nMarkovianity is understood as a property of random variables relative to a filtration of the\nunderlying probability space. Our investigations from the viewpoint of distributional invariance\nprinciples reveal that the phenomenon of `Markovianity' emerges without reference to any\nstochastic process already on the level of a family of von Neumann subalgebras, indexed by the\npartially ordered set of all `intervals' $\\mathcal{I}(\\mathbb{Z})$. As commonly the index set of a filtration\nis understood to be totally ordered \\cite{Ve17}, we refer to such partially indexed families as `local \nfiltrations'. \n\\begin{Definition}\\normalfont\nA family of $\\tilde{\\psi}$-conditioned von Neumann subalgebras $\\tilde{\\mathcal{M}}_\\bullet \\equiv \\{\\tilde{\\mathcal{M}}_I\\}_{I \\in\n\\mathcal{I}(\\mathbb{Z})}$ of the probability space $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ is called a \\emph{local filtration (of\n$(\\tilde{\\mathcal{M}},\\tilde{\\psi}))$} if \n \\begin{align*}\n I \\subset J \\quad \\Longrightarrow \\quad \\tilde{\\mathcal{M}}_I \\subset \\tilde{\\mathcal{M}}_J.&&&& \\qquad \\text{(Isotony)}\n\\end{align*}\n\\end{Definition}\nThe isotony property ensures that inclusions are valid as they are assumed for commuting squares.\nTo be more precise, it holds that\n\\[\n\\begin{matrix}\n\\tilde{\\mathcal{M}}_{I} &\\subset &\\tilde{\\mathcal{M}}\\\\\n\\cup & & \\cup \\\\\n\\tilde{\\mathcal{M}}_{K} & \\subset & \\tilde{\\mathcal{M}}_{J}\n\\end{matrix}\n\\]\nfor $I, J, K \\in \\mathcal{I}(\\mathbb{Z})$ with $K \\subset (I \\cap J)$. Finally, let $\\mathcal{N}_\\bullet \\equiv\n\\{\\mathcal{N}_I\\}_{I \\in \\mathcal{I}(\\mathbb{Z})}$ be another local filtration of $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$. Then $\\mathcal{N}_\\bullet$ \nis said to be \\emph{coarser} than $\\tilde{\\mathcal{M}}_\\bullet$ if $\\mathcal{N}_I \\subset \\tilde{\\mathcal{M}}_I$ for all \n$I \\in \\mathcal{I}(\\mathbb{Z})$ and we denote this by $\\mathcal{N}_{\\bullet}\\prec \\tilde{\\mathcal{M}}_{\\bullet}$. 
Occasionally we \nwill address $\\mathcal{N}_{\\bullet}$ also as a \\emph{local subfiltration} of $\\tilde{\\mathcal{M}}_{\\bullet}$.\n\\begin{Definition}\\normalfont \\label{definition:markov-filtration}\nLet $\\tilde{\\mathcal{M}}_\\bullet \\equiv \\{\\tilde{\\mathcal{M}}_I\\}_{I \\in \\mathcal{I}(\\mathbb{Z}) }$ be a local filtration of $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$.\n$\\tilde{\\mathcal{M}}_{\\bullet}$ is said to be \\emph{Markovian} if\n the\ninclusions \n\\begin{eqnarray*}\n\\begin{matrix}\n\\tilde{\\mathcal{M}}_{(-\\infty,n]} &\\subset &\\tilde{\\mathcal{M}}\\\\\n\\cup & & \\cup \\\\\n\\tilde{\\mathcal{M}}_{[n,n]} & \\subset & \\tilde{\\mathcal{M}}_{[n,\\infty)}\n\\end{matrix}\n\\end{eqnarray*}\nform a commuting square for each $n \\in \\mathbb{Z}$. \n\\end{Definition}\nCast as commuting squares, Markovianity of the local filtration $\\tilde{\\mathcal{M}}_\\bullet$ has many equivalent\nformulations, see Proposition \\ref{proposition:cs}. In particular, it holds that\n\\begin{align}\n&&&& E_{\\tilde{\\mathcal{M}}_{(-\\infty,n]}} E_{\\tilde{\\mathcal{M}}_{[n,\\infty)}} & = E_{\\tilde{\\mathcal{M}}_{[n,n]}}&& \\text{for all $n \\in \\mathbb{Z}$.} \n&&&& \\tag{M'} \\label{eq:filt-markov-II}\n\\end{align}\nHere $E_{\\tilde{\\mathcal{M}}_I}$ denotes the $\\tilde{\\psi}$-preserving normal conditional expectation from $\\tilde{\\mathcal{M}}$ onto\n$\\tilde{\\mathcal{M}}_I$. \n\n\\subsection{Noncommutative stationary processes and dilations}\n\\label{subsection:Noncommutative Stationary Processes}\nWe introduce bilateral noncommutative stationary processes, as they underly the\napproach to distributional invariance principles in \\cite{Ko10,GK09}. Furthermore we present\n dilations of Markov maps using K\\\"ummerer's approach to\nnoncommutative stationary Markov processes \\cite{Ku85}. The existence of such dilations is\nactually equivalent to the factoralizability of Markov maps (see \\cite{AD06} and \\cite{HM11}).\n\\begin{Definition}\\normalfont \\label{definition:process-sequence}\nA \\emph{bilateral stationary process} $(\\tilde{\\mathcal{M}},\\tilde{\\psi}, \\tilde{\\alpha}, \\mathcal{A}_0)$ consists of a probability\nspace $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$, a $\\tilde{\\psi}$-conditioned subalgebra $\\mathcal{A}_0 \\subset \\tilde{\\mathcal{M}}$, and an automorphism\n$\\tilde{\\alpha}\\in \\operatorname{Aut}(\\tilde{\\mathcal{M}},\\tilde{\\psi})$. 
The sequence\n\\[\n(\\iota_n)_{n \\in \\mathbb{Z}}\\colon (\\mathcal{A}_0, \\tilde{\\psi}_0) \\to (\\tilde{\\mathcal{M}},\\tilde{\\psi}), \n\\qquad \\iota_{n} := \\tilde{\\alpha}^n|_{\\mathcal{A}_0}=\\tilde{\\alpha}^n\\iota_0,\n\\]\nis called the \\emph{sequence of random variables associated to} $(\\tilde{\\mathcal{M}},\\tilde{\\psi}, \\tilde{\\alpha}, \\mathcal{A}_0)$. Here $\\tilde{\\psi}_0$ denotes the restriction of $\\tilde{\\psi}$ from $\\tilde{\\mathcal{M}}$ to $\\mathcal{A}_0$ and $\\iota_0$ denotes the inclusion map of $\\mathcal{A}_0$ in $\\tilde{\\mathcal{M}}$.\n\nThe stationary process $(\\tilde{\\mathcal{M}},\\tilde{\\psi}, \\tilde{\\alpha}, \\mathcal{A}_0)$ is called \\emph{minimal} if\n\\[ \\bigvee_{i \\in \\mathbb{Z}} \\tilde{\\alpha}^i \\iota_0(\\mathcal{A}_0) = \\tilde{\\mathcal{M}}.\\]\n\\end{Definition}\n\n\\begin{Definition}\\normalfont \\label{definition:ncms}\nThe (not necessarily minimal) stationary process $\\big( \\tilde{\\mathcal{M}},\\tilde{\\psi}, \\tilde{\\alpha}, \\mathcal{A}_0\\big)$ is called a \\emph{bilateral noncommutative stationary Markov process} if its canonical local filtration\n\\[\n\\{\\mathcal{A}_I:= \\bigvee_{i \\in I} \\tilde{\\alpha}^i \\iota_0(\\mathcal{A}_0)\\}_{I \\in \\mathcal{I}(\\mathbb{Z})}\n\\]\nis Markovian. If this process is minimal, then the endomorphism $\\tilde{\\alpha}$ is also called a \\emph{Markov shift} with generator $\\mathcal{A}_0$. \n\nThe associated $\\tilde{\\psi}_0$-Markov map $T=\\iota_0^*\\tilde{\\alpha} \\iota_0$, where $\\iota_0$ is the inclusion map of $\\mathcal{A}_0$ in $\\tilde{\\mathcal{M}}$ and $\\tilde{\\psi}_0$ the restriction of $\\tilde{\\psi}$ to $\\mathcal{A}_0$, is often called the \\emph{transition operator} of the given Markov process.\n\\end{Definition}\n\nThe next lemma gives a simplified condition to check that a bilateral stationary process is a Markov process.\n\n\\begin{Lemma} \\label{lemma:Mark-Suff}\nLet $\\big( \\tilde{\\mathcal{M}},\\tilde{\\psi}, \\tilde{\\alpha}, \\mathcal{A}_0\\big)$ be a bilateral stationary process with canonical local filtration $\\{\\mathcal{A}_I:= \\bigvee_{i \\in I} \\tilde{\\alpha}^i \\iota_0(\\mathcal{A}_0)\\}_{I \\in \\mathcal{I}(\\mathbb{Z})}$. For $I \\in \\mathcal{I}(\\mathbb{Z})$, let $P_I$ denote the $\\tilde{\\psi}$-preserving normal conditional expectation from $\\tilde{\\mathcal{M}}$ onto $\\mathcal{A}_I$, and suppose that\n\\[\nP_{(-\\infty, 0]} P_{[0,\\infty)} = P_{[0,0]}.\n\\]\nThen $\\{\\mathcal{A}_I\\}_{I \\in \\mathcal{I}(\\mathbb{Z})}$ is a local Markov filtration and $\\big( \\tilde{\\mathcal{M}},\\tilde{\\psi}, \\tilde{\\alpha}, \\mathcal{A}_0\\big)$ is a bilateral stationary Markov process.\n\\end{Lemma}\n\n\\begin{proof}\nFor all $k \\in \\mathbb{Z}$ and $I \\in \\mathcal{I}(\\mathbb{Z})$, we have $\\tilde{\\alpha}^{k} P_{I} = P_{I+k} \\tilde{\\alpha}^{k}$ (see \\cite[Remark 2.1.4]{Ku85}). Hence, for each $n \\in \\mathbb{Z}$,\n\\begin{align*}\nP_{(-\\infty,0]}P_{[0,\\infty)} = P_{[0,0]}\n\\quad\n\\Longleftrightarrow \n\\quad\n\\tilde{\\alpha}^n P_{(-\\infty,0]}P_{[0,\\infty)}\n\\tilde{\\alpha}^{-n}\n = \\tilde{\\alpha}^n P_{[0,0]} \\tilde{\\alpha}^{-n}\n\\quad\n\\Longleftrightarrow\n\\quad\nP_{(-\\infty,n]}P_{[n,\\infty)} = P_{[n,n]},\n\\end{align*}\nwhich is the required Markovianity for the local filtration $\\{\\mathcal{A}_{I}\\}_{I\\in \\mathcal{I}(\\mathbb{Z})}$.\n\\end{proof}\n\n\n\n\\begin{Definition}[\\cite{Ku85}]\\normalfont \\label{definition:dilation}\nLet $(\\mathcal{A},\\varphi)$ be a probability space. 
A $\\varphi$-Markov map $T$ on $\\mathcal{A}$ is said to admit\na \\emph{(bilateral state-preserving) dilation} if there exists a probability space $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$,\nan automorphism $\\tilde{\\alpha}\\in \\operatorname{Aut}(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ and a $(\\varphi,\\tilde{\\psi})$-Markov map $\\iota_0:\\mathcal{A}\\to \\tilde{\\mathcal{M}}$\nsuch that, for all $n \\in \\mathbb{N}_0$, \n\\begin{eqnarray*}\nT^n=\\iota_0^*\\tilde{\\alpha}^n \\iota_0.\n\\end{eqnarray*}\nSuch a dilation of $T$ is denoted by the quadruple $(\\tilde{\\mathcal{M}},\\tilde{\\psi},\\tilde{\\alpha},\\iota_0)$ and is said to be\n\\emph{minimal} if $\\tilde{\\mathcal{M}}=\\bigvee_{n\\in \\mathbb{Z}} \\tilde{\\alpha}^{n}\\iota_0(\\mathcal{A})$. $(\\tilde{\\mathcal{M}},\\tilde{\\psi},\\tilde{\\alpha},\\iota_0)$ is called a \\emph{dilation of first order} if the equality $T= \\iota_0^* \\tilde{\\alpha} \\iota_0$ alone holds.\n\\end{Definition}\nActually it follows from the case $n=0$ that the $(\\varphi,\\tilde{\\psi})$-Markov map $\\iota_0$ is a\nrandom variable from $(\\mathcal{A},\\varphi)$ to $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ such that $\\iota_0\\iota_0^*$ is the\n$\\tilde{\\psi}$-preserving conditional expectation from $\\tilde{\\mathcal{M}}$ onto $\\iota_0(A)$. \n\\begin{Definition}[\\cite{Ku85}]\\normalfont \\label{definition:markovdilation}\nThe dilation $(\\tilde{\\mathcal{M}},\\tilde{\\psi},\\tilde{\\alpha},\\iota_0)$ of the $\\varphi$-Markov map $T$ on $\\mathcal{A}$ (as introduced\nin Definition \\ref{definition:dilation}) is said to be a \\emph{(bilateral state-preserving) Markov\ndilation} if the local filtration $\\big\\{\\mathcal{A}_I := \\bigvee_{n \\in I} \\tilde{\\alpha}^n\\iota_0(\\mathcal{A})\\big\\}_{I \\in \n\\mathcal{I}(\\mathbb{Z})}$ is Markovian. \n\\end{Definition}\n\\begin{Remark} \\normalfont\nA dilation of a $\\varphi$-Markov map $T$ on $\\mathcal{A}$ may not be a Markov dilation. \nThis is discussed in \\cite[Section 3]{KuSchr83} where it is shown that Varilly has\nconstructed a dilation in \\cite{Va81} which is not a Markov dilation. We are grateful \nto B.~K\\\"ummerer for bringing this to our attention \\cite{Ku21}. \n\\end{Remark}\n\n\\begin{Definition}\\cite[Definition 4.1.3]{Ku85}\\normalfont \\label{definition:tensordilation}\nLet $(\\mathcal{A}, \\varphi)$ be a probability space and $T$ be a $\\phi$-Markov map on $\\mathcal{A}$. A dilation of first order $(\\tilde{\\mathcal{M}}, \\tilde{\\psi}, \\tilde{\\alpha}, \\iota_0)$ of $T$ is called a \\emph{tensor dilation} if the conditional expectation $\\iota_0^* \\iota_0 : \\tilde{\\mathcal{M}} \\to \\iota_0(\\mathcal{A})$ is of tensor type, that is, there exists a von Neumann subalgebra $\\mathcal{C}$ of $\\tilde{\\mathcal{M}}$ with faithful normal state $\\chi$ such that $\\tilde{\\mathcal{M}} = \\iota_0(\\mathcal{A}) \\otimes \\mathcal{C}$ and $(\\iota_0^* \\iota_0)(\\iota(a) \\otimes x) = \\chi(x) a$ for all $a \\in \\mathcal{A}, x \\in \\mathcal{C}$.\n\\end{Definition}\nLet us next relate the above bilateral notions of dilations and stationary processes. It is\nimmediate that a dilation $\\big(\\tilde{\\mathcal{M}},\\tilde{\\psi},\\tilde{\\alpha},\\iota_0\\big)$ of the $\\varphi$-Markov map $T$\non $\\mathcal{A}$ gives rise to the stationary process $\\big(\\tilde{\\mathcal{M}},\\tilde{\\psi}, \\tilde{\\alpha}, \\iota_0(\\mathcal{A})\\big)$.\nFurthermore this stationary process is Markovian if and only if the dilation is a Markov\ndilation, as evident from the definitions. 
Conversely, a stationary Markov process\nyields a dilation (and thus a Markov dilation) as it was shown by K\\\"ummerer, stated below for the convenience of the reader.\n \\begin{Proposition}\\label{proposition:dilation}\\cite[Proposition 2.2.7]{Ku85}\nLet $(\\tilde{\\mathcal{M}},\\tilde{\\psi}, \\tilde{\\alpha},\\mathcal{A}_0)$ be a bilateral stationary Markov process and $T=\\iota_0^*\\tilde{\\alpha} \\iota_0$ be\nthe corresponding transition operator where $\\iota_0$ is the inclusion map of $\\mathcal{A}_0$ into $\\tilde{\\mathcal{M}}$.\nThen $(\\tilde{\\mathcal{M}},\\tilde{\\psi},\\tilde{\\alpha},\\iota_0)$ is a dilation of $T$. In other words, the following diagram\ncommutes for all $n\\in \\mathbb{N}_0$:\n\t\\[\n\t\\begin{tikzcd}\n\t(\\mathcal{A}_0, \\tilde{\\psi}_0) \\arrow[r, \"T^n\"] \\arrow[d, \"\\iota_0\"]\n\t& (\\mathcal{A}_0, \\tilde{\\psi}_0) \\arrow[d, leftarrow, \"\\iota_0^*\"] \\\\\n\t(\\tilde{\\mathcal{M}},\\tilde{\\psi}) \\arrow[r, \"\\tilde{\\alpha}^n\"]\n\t& (\\tilde{\\mathcal{M}},\\tilde{\\psi}) \n\t\\end{tikzcd}.\n\t\\]\nHere $\\tilde{\\psi}_0$ denotes the restriction of $\\tilde{\\psi}$ to $\\mathcal{A}_0$.\n\\end{Proposition}\n\n\nWe close this subsection by providing a noncommutative notion of operator-valued Bernoulli\nshifts. The definition of such shifts stems from\ninvestigations of K\\\"ummerer on the structure of noncommutative Markov processes in \\cite{Ku85},\nand such shifts can also be seen to emerge from the noncommutative extended de Finetti theorem in\n\\cite{Ko10}. \n\\begin{Definition}\\normalfont \\label{definition:ncbs}\nThe minimal stationary process $\\big( \\tilde{\\mathcal{M}},\\tilde{\\psi}, \\tilde{\\beta}, \\mathcal{B}_0)$ \nwith canonical local filtration $\\{\\mathcal{B}_I = \\bigvee_{i \\in I} \\tilde{\\beta}_0^i(\\mathcal{B}_0)\\}_{I \\in \\mathcal{I}(\\mathbb{Z})}$\nis called a \\emph{bilateral\nnoncommutative Bernoulli shift} with \\emph{generator} $\\mathcal{B}_0$ if $\\tilde{\\mathcal{M}}^{\\tilde{\\beta}}\n\\subset \\mathcal{B}_0$ and \n\\[\n\t\\begin{matrix}\n\t\\mathcal{B}_{I} &\\subset &\\tilde{\\mathcal{M}}\\\\\n\t\\cup & & \\cup \\\\\n\t\\tilde{\\mathcal{M}}^{\\tilde{\\beta}} & \\subset & \\mathcal{B}_{J} \n\t\\end{matrix}\n\\]\nforms a commuting square for any $I, J \\in \\mathcal{I}(\\mathbb{Z})$ with $I \\cap J = \\emptyset$.\n\n\n\\end{Definition}\nIt is easy to see that a noncommutative Bernoulli shift $( \\tilde{\\mathcal{M}},\\tilde{\\psi}, \\tilde{\\beta}, \\mathcal{B}_0)$ is a minimal\nstationary Markov process where the corresponding transition operator $\\iota_0^*\\tilde{\\beta} \\iota_0$ \nis a conditional expectation (onto $\\tilde{\\mathcal{M}}^{\\tilde{\\beta}}$, the fixed point algebra of $\\tilde{\\beta}$). \nHere $\\iota_0$ denotes the inclusion map of $\\mathcal{B}_0$ into $\\tilde{\\mathcal{M}}$.\n\\section{Markovianity from Representations of $F$} \\label{section:Mark-From-Rep}\nWe show that bilateral stationary Markov processes can be obtained from representations of the Thompson group $F$ in the automorphisms of a noncommutative probability space. Most of the results in this section follow closely those of \\cite[Section 4]{KKW20}, suitably adapted to the bilateral case.\n\nLet us fix some notation, as it will be used throughout this section. We assume that the\nprobability space $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ is equipped with the representation $\\tilde{\\rho} \\colon F \\to\n\\operatorname{Aut}(\\tilde{\\mathcal{M}},\\tilde{\\psi})$. 
For brevity of notion, especially in proofs, the represented generators of $F$\nare also denoted by\n\\[\n\\tilde{\\alpha}_n := \\tilde{\\rho}(g_n) \\in \\operatorname{Aut}(\\tilde{\\mathcal{M}},\\tilde{\\psi}), \n\\]\nwith fixed point algebras given by $\\tilde{\\mathcal{M}}^{\\tilde{\\alpha}_n} := \\{x \\in \\tilde{\\mathcal{M}} \\mid \\tilde{\\alpha}_n (x) = x\\}$, \nfor $0 \\le n < \\infty$. Of course, $\\tilde{\\mathcal{M}}^{\\tilde{\\alpha}_n} = \\tilde{\\mathcal{M}}^{\\tilde{\\alpha}_n^{-1}}$. Furthermore the intersections of fixed point algebras\n\\[\n\\tilde{\\mathcal{M}}_n := \\bigcap_{k \\ge n +1} \\tilde{\\mathcal{M}}^{\\tilde{\\alpha}_k}\n\\] \ngive the tower of von Neumann subalgebras \n\\[\n\\tilde{\\mathcal{M}}^{\\tilde{\\rho}(F)} \\subset \\tilde{\\mathcal{M}}_0 \\subset \\tilde{\\mathcal{M}}_1 \\subset \\tilde{\\mathcal{M}}_2 \\subset \\ldots \n\\subset \\tilde{\\mathcal{M}}_\\infty := \\bigvee_{n \\ge 0} \\tilde{\\mathcal{M}}_n \\subset \\tilde{\\mathcal{M}}. \n\\]\nFrom the viewpoint of noncommutative probability theory, this tower provides a filtration of the \nnoncommutative probability space $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$. The canonical local filtration of \na stationary process $(\\tilde{\\mathcal{M}}, \\tilde{\\psi}, \\tilde{\\alpha}_0, \\mathcal{A}_0)$ will be seen to be a local subfiltration of a local\nMarkov filtration whenever the $\\tilde{\\psi}$-conditioned von Neumann subalgebra $\\mathcal{A}_0$ is\nwell-localized, to be more precise: contained in the intersection of fixed point algebras\n$\\tilde{\\mathcal{M}}_0$. It is worthwhile to emphasize that, depending on the choice of the generator $\\mathcal{A}_0$,\nthe canonical local filtration of this stationary process may not be Markovian. Subsection\n\\ref{subsection:Markov-F} investigates in detail conditions under which the canonical local filtration\nof a stationary process $(\\tilde{\\mathcal{M}}, \\tilde{\\psi}, \\tilde{\\alpha}_0, \\mathcal{A}_0)$ is Markovian. \n\\subsection{Representations with a generating property}\n\\label{subsection:generating}\nAn immediate consequence of the relations between generators of the Thompson group $F$ is \nthe adaptedness of the endomorphism $\\tilde{\\alpha}_0$ to the tower of (intersected) fixed point\nalgebras:\n\\[\n\\tilde{\\alpha}_0(\\tilde{\\mathcal{M}}_{n}) \\subset \\tilde{\\mathcal{M}}_{n+1} \\qquad \\text{for all $n \\in \\mathbb{N}_0$}.\n\\]\nTo see this, note that if $x \\in \\tilde{\\mathcal{M}}_n$ and $k \\geq n+2$, then $\\tilde{\\alpha}_k \\tilde{\\alpha}_0 (x) = \\tilde{\\alpha}_0 \\tilde{\\alpha}_{k-1} (x) = \\tilde{\\alpha}_0 x$. On the other hand, if $x \\in \\tilde{\\mathcal{M}}_n$ and $k \\geq n$, then $\\tilde{\\alpha}_{k} \\tilde{\\alpha}_0^{-1} (x) = \\tilde{\\alpha}_0^{-1} \\tilde{\\alpha}_{k+1} (x) =\\tilde{\\alpha}_0^{-1}(x)$. This gives that $\\tilde{\\alpha}_0^{-1}(\\tilde{\\mathcal{M}}_n) \\subset \\tilde{\\mathcal{M}}_{n-1}$ for $n \\geq 1$. Hence, actually $\\tilde{\\alpha}_0(\\tilde{\\mathcal{M}}_n)= \\tilde{\\mathcal{M}}_{n+1}$ for all $n \\in \\mathbb{N}_0$. We also note that $\\tilde{\\alpha}_0^{-1}(\\tilde{\\mathcal{M}}_0) \\subset \\tilde{\\mathcal{M}}_0$. 
\n\nThus, generalizing terminology from classical probability, the random variables\n\\begin{alignat*}{2}\n\\iota_0 &:= \\operatorname{Id}|_{\\tilde{\\mathcal{M}}_0} &\\colon \\tilde{\\mathcal{M}}_0 \\to \\tilde{\\mathcal{M}}_0 \\subset \\tilde{\\mathcal{M}}\\\\\n\\iota_1 &:= \\tilde{\\alpha}_0|_{\\tilde{\\mathcal{M}}_0} &\\colon \\tilde{\\mathcal{M}}_0 \\to \\tilde{\\mathcal{M}}_1 \\subset \\tilde{\\mathcal{M}}\\\\\n\\iota_2 &:= \\tilde{\\alpha}^2_0|_{\\tilde{\\mathcal{M}}_0} &\\colon \\tilde{\\mathcal{M}}_0 \\to \\tilde{\\mathcal{M}}_2 \\subset \\tilde{\\mathcal{M}}\\\\\n & \\qquad \\vdots\\\\\n\\iota_n &:= \\tilde{\\alpha}^n_0|_{\\tilde{\\mathcal{M}}_0} &\\colon \\tilde{\\mathcal{M}}_0 \\to \\tilde{\\mathcal{M}}_n \\subset \\tilde{\\mathcal{M}}\n\\end{alignat*}\nare adapted to the filtration $\\tilde{\\mathcal{M}}_0 \\subset \\tilde{\\mathcal{M}}_1 \\subset \\tilde{\\mathcal{M}}_2 \\subset \\ldots$ and $\\tilde{\\alpha}_0$\nis the time evolution of the stationary process $(\\tilde{\\mathcal{M}},\\tilde{\\psi}, \\tilde{\\alpha}_0, \\tilde{\\mathcal{M}}_0)$. An immediate question is whether a representation of the \nThompson group $F$ restricts to the von Neumann subalgebra $\\tilde{\\mathcal{M}}_\\infty$. \n\\begin{Definition}\\label{definition:generating} \\normalfont\nThe representation $\\tilde{\\rho} \\colon F \\to \\operatorname{Aut}(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ is said to have the \\emph{generating\nproperty} if $\\tilde{\\mathcal{M}}_\\infty = \\tilde{\\mathcal{M}}$. \n\\end{Definition}\nAs shown in Proposition \\ref{proposition:generating-property} below, this generating property\nentails that each intersected fixed point algebra $\\tilde{\\mathcal{M}}_n = \\bigcap_{k \\ge n+1} \\tilde{\\mathcal{M}}^{\\tilde{\\alpha}_k}$ \nequals the single fixed point algebra $\\tilde{\\mathcal{M}}^{\\tilde{\\alpha}_{n+1}}$. Thus the generating property \ntremendously simplifies the form of the tower $\\tilde{\\mathcal{M}}_0 \\subset \\tilde{\\mathcal{M}}_1 \\subset \\ldots$, and our next \nresult shows that this can always be achieved by restriction. \n\\begin{Proposition}\\label{proposition:generatingrestriction}\nThe representation $\\tilde{\\rho}:F\\to \\operatorname{Aut}(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ restricts to the generating representation \n$\\tilde{\\rho}_{\\operatorname{gen}}:F\\to \\operatorname{Aut}(\\tilde{\\mathcal{M}}_{\\infty},\\tilde{\\psi}_\\infty)$ such that $\\tilde{\\alpha}_n (\\tilde{\\mathcal{M}}_\\infty) \\subset\n\\tilde{\\mathcal{M}}_\\infty$ and $E_{\\tilde{\\mathcal{M}}_\\infty} E_{\\tilde{\\mathcal{M}}^{\\tilde{\\alpha}_n}} = E_{\\tilde{\\mathcal{M}}^{\\tilde{\\alpha}_n}} E_{\\tilde{\\mathcal{M}}_\\infty}$ for all\n$n \\in \\mathbb{N}_0$. Here $\\tilde{\\psi}_\\infty$ denotes the restriction of the state $\\tilde{\\psi}$ to $\\tilde{\\mathcal{M}}_\\infty$.\n$E_{\\tilde{\\mathcal{M}}^{\\tilde{\\alpha}_n}}$ and $E_{\\tilde{\\mathcal{M}}_{\\infty}}$ denote the unique $\\tilde{\\psi}$-preserving normal\nconditional expectations onto $\\tilde{\\mathcal{M}}^{\\tilde{\\alpha}_n}$ and $\\tilde{\\mathcal{M}}_{\\infty}$ respectively.\n\\end{Proposition}\n\\begin{proof}\nWe show that $\\tilde{\\alpha}_i (\\tilde{\\mathcal{M}}_n ) \\subset \\tilde{\\mathcal{M}}_{n+1}$ for all $i,n \\ge 0$. Let $x \\in \\tilde{\\mathcal{M}}_n$. If \n$i \\ge n+1$ then $\\tilde{\\alpha}_i (x) = x$ is immediate from the definition of $\\tilde{\\mathcal{M}}_n$. 
If $i < n+1$\nthen, using the relations for the generators of the Thompson group, \n$\\tilde{\\alpha}_i(x)= \\tilde{\\alpha}_i\\tilde{\\alpha}_{k+1} (x) = \\tilde{\\alpha}_{k+2}\\tilde{\\alpha}_i(x)$\nfor any $k \\ge n$, thus $\\tilde{\\alpha}_i(x) \\in \\tilde{\\mathcal{M}}_{n+1}$. Consequently $\\tilde{\\alpha}_i$ maps \n$\\bigcup_{n \\ge 0} \\tilde{\\mathcal{M}}_n$ into itself for any $i \\in \\mathbb{N}_0$.\nIt is also easily verified that $\\tilde{\\alpha}_i^{-1}(\\tilde{\\mathcal{M}}_n) \\subset \\tilde{\\mathcal{M}}_n$ for all $i$ and $n \\ge 0$. \nNow a standard approximation\nargument shows that $\\tilde{\\mathcal{M}}_\\infty$ is invariant under $\\tilde{\\alpha}_i$ and $\\tilde{\\alpha}_i^{-1}$ for any $i \\in \\mathbb{N}_0$. \nConsequently the representation $\\tilde{\\rho}$ restricts to $\\tilde{\\mathcal{M}}_\\infty$ and, of course, this restriction\n$\\tilde{\\rho}_{\\operatorname{gen}}$ has the generating property. \n\nSince $\\tilde{\\mathcal{M}}_\\infty$ is globally invariant under the modular automorphism group of $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$,\nthere exists the (unique) $\\tilde{\\psi}$-preserving normal conditional expectation $E_{\\tilde{\\mathcal{M}}_\\infty}$ from\n$\\tilde{\\mathcal{M}}$ onto $\\tilde{\\mathcal{M}}_\\infty$. In particular, $\\tilde{\\rho}_{\\operatorname{gen}}(g_n) = \\tilde{\\alpha}_n|_{\\tilde{\\mathcal{M}}_\\infty}$ commutes\nwith the modular automorphism group of $(\\tilde{\\mathcal{M}}_\\infty, \\tilde{\\psi}_\\infty)$ which ensures\n$\\tilde{\\rho}_{\\operatorname{gen}}(g_n)\\in \\operatorname{Aut}(\\tilde{\\mathcal{M}}_{\\infty},\\tilde{\\psi}_\\infty)$. Finally that $E_{\\tilde{\\mathcal{M}}_\\infty}$ and\n$E_{\\tilde{\\mathcal{M}}^{\\tilde{\\alpha}_n}}$ commute is concluded from \n\\[\nE_{\\tilde{\\mathcal{M}}_\\infty} \\tilde{\\alpha}_n E_{\\tilde{\\mathcal{M}}_\\infty} = \\tilde{\\alpha}_n E_{\\tilde{\\mathcal{M}}_\\infty},\n\\]\nwhich implies $ E_{\\tilde{\\mathcal{M}}^{\\tilde{\\alpha}_n}} E_{\\tilde{\\mathcal{M}}_\\infty} = E_{\\tilde{\\mathcal{M}}_\\infty}E_{\\tilde{\\mathcal{M}}^{\\tilde{\\alpha}_n}}$ by\nroutine arguments, and an application of the mean ergodic theorem (see for example \n\\cite[Theorem 8.3]{Ko10}), \n\\[\n E_{\\tilde{\\mathcal{M}}^{\\tilde{\\alpha}_n}} = \\lim_{N \\to \\infty} \\frac{1}{N} \\sum_{i=1}^{N} \\tilde{\\alpha}_n^i, \n\\] \nwhere the limit is taken in the pointwise strong operator topology. \n\\end{proof}\n\\begin{Lemma}\\label{lemma:generating}\nWith the notations as above, $\\tilde{\\mathcal{M}}_k=\\tilde{\\mathcal{M}}^{\\tilde{\\alpha}_{k+1}}\\cap \\tilde{\\mathcal{M}}_{\\infty}$ for all $k\\in \\mathbb{N}_0$.\n\\end{Lemma}\n\\begin{proof}\nFor the sake of brevity of notation, let $Q_n=E_{\\tilde{\\mathcal{M}}^{\\tilde{\\alpha}_n}}$ denote the $\\tilde{\\psi}$-preserving\nnormal conditional expectation from $\\tilde{\\mathcal{M}}$ onto $\\tilde{\\mathcal{M}}^{\\tilde{\\alpha}_n}$. By the definition of $\\tilde{\\mathcal{M}}_k$ \nand $\\tilde{\\mathcal{M}}_\\infty$, it is clear that $\\tilde{\\mathcal{M}}_k\\subset\\tilde{\\mathcal{M}}^{\\tilde{\\alpha}_{k+1}}\\cap \\tilde{\\mathcal{M}}_{\\infty}$. In order to\nshow the reverse inclusion, it suffices to show that $Q_n Q_k |_{\\tilde{\\mathcal{M}}_\\infty}=Q_k |_{\\tilde{\\mathcal{M}}_\\infty}$ for $ \n0\\leq k0$ are verified on finite elementary tensors by a straightforward computation. 
Similar arguments as used in the proof of \nProposition \\ref{proposition:rep-Fbeta} ensure that the maps $g_n \\mapsto \\tilde{\\rho}_M(g_n) := \\tilde{\\alpha}_n $ extend multiplicatively \nto a representation $\\tilde{\\rho}_M \\colon F \\to \\operatorname{Aut}(\\tilde{\\mathcal{M}},\\tilde{\\psi})$. Its generating property is again immediate from the \nminimality of the stationary process by Proposition \\ref{proposition:minimality-generating}. Finally, the Markovianity of the\nbilateral stationary process $(\\tilde{\\mathcal{M}}, \\tilde{\\psi}, \\tilde{\\alpha}_0, \\tilde{\\mathcal{M}}^{\\tilde{\\alpha}_1})$ follows from Corollary\n\\ref{corollary:markov-filtration-MN}.\n\\end{proof}\nGiven the stationary Markov process $(\\tilde{\\mathcal{M}}, \\tilde{\\psi}, \\tilde{\\alpha}_0, \\tilde{\\mathcal{M}}^{\\tilde{\\alpha}_1})$ (from Proposition \\ref{proposition:rep-Falpha}), \na restriction of the generating algebra $\\tilde{\\mathcal{M}}^{\\tilde{\\alpha}_1}$ to a von Neumann subalgebra $\\mathcal{A}_0$ provides a candidate \nfor another stationary Markov process. Viewing the Markov shift $\\tilde{\\alpha}_0$ as a `perturbation' of the Bernoulli\nshift $\\tilde{\\beta}_0$, the subalgebra $\\mathcal{A}_0 = \\tilde{\\mathcal{M}}^{\\tilde{\\beta}_0}$ is an interesting choice. \n\\begin{Proposition} \\label{proposition:MarkovTwoReps2}\nThe quadruple $(\\tilde{\\mathcal{M}}, \\tilde{\\psi}, \\tilde{\\alpha}_0, \\tilde{\\mathcal{M}}^{\\tilde{\\beta}_0})$ is a bilateral stationary Markov process.\n\\end{Proposition} \n\n\\begin{proof} \nWe recall from \\eqref{eq:f-p-a-0} that \n\\begin{align*}\n \\tilde{\\mathcal{M}}^{\\tilde{\\beta}_0} &= \\mathcal{A} \\otimes \\mathbbm{1}_\\mathcal{C}^{\\otimes_{\\mathbb{N}_0}} \n \\otimes \\mathbbm{1}_\\mathcal{C}^{\\otimes_{\\mathbb{N}_0}} \n \\otimes \\mathbbm{1}_\\mathcal{C}^{\\otimes_{\\mathbb{N}_0}} \n \\otimes \\cdots. \n\\end{align*}\nLet $P_I$ denote the $\\tilde{\\psi}$-preserving normal conditional expectation from $\\tilde{\\mathcal{M}}$ onto \n$ \\mathcal{A}_{I} := \\bigvee_{i \\in I} \\tilde{\\alpha}_0^i(\\tilde{\\mathcal{M}}^{\\tilde{\\beta}_0})$ for an interval $I \\subset \\mathbb{Z}$. \nBy Lemma \\ref{lemma:Mark-Suff}, it suffices to verify the Markov property\n\\[\nP_{(-\\infty,0]}P_{[0,\\infty)} = P_{[0,0]}.\n\\]\nFor this purpose we use the von Neumann subalgebra \n\\[\n\\mathcal{D}_0 :=\n\\begin{array}{ccccccccc}\n&& \\vdots & & \\vdots & & \\vdots & & \\\\\n && \\otimes & & \\otimes & & \\otimes & & \\\\\n && \\mathbbm{1}_\\mathcal{C} & & \\mathbbm{1}_\\mathcal{C} & & \\mathbbm{1}_\\mathcal{C} & & \\cdots \\\\\n &&\\otimes & & \\otimes & & \\otimes & & \\\\\n\\mathcal{A} & \\otimes & \\mathcal{C} & \\otimes & \\mathbbm{1}_\\mathcal{C} & \\otimes & \\mathbbm{1}_\\mathcal{C} & \\otimes & \\cdots \\\\\n\\end{array}\n\\]\nand the tensor shift $\\tilde{\\beta}_0$ to generate the `past algebra' \n$\\mathcal{D}_{<} := \\bigvee_{i < 0} \\tilde{\\beta}_0^i(\\mathcal{D}_0)$ and\nthe `future algebra' $\\mathcal{D}_{\\ge } := \\bigvee_{i\\ge 0} \\tilde{\\beta}_0^i(\\mathcal{D}_0)$. \nOne has the inclusions\n\\[\n\\mathcal{A}_{(-\\infty,0]} \\subset \\mathcal{D}_{<}, \\qquad\n\\mathcal{A}_{[0,\\infty)} \\subset \\mathcal{D}_{\\ge}, \\qquad\n\\mathcal{D}_{<} \\cap \\mathcal{D}_{\\ge} = \\tilde{\\mathcal{M}}^{\\tilde{\\beta}_0}.\n\\]\nHere we used for \nthe first inclusion that $\\tilde{\\alpha}_0 = \\gamma_0 \\circ \\tilde{\\beta}_0$ and thus \n$\\tilde{\\alpha}_0^{-1} = \\tilde{\\beta}_0^{-1} \\circ \\gamma_0^{-1}$. 
The second inclusion is\nimmediate from the definitions of the von Neumann algebras. Finally, the claimed \nintersection property is readily deduced from the underlying tensor product structure. \nLet $E_{\\mathcal{D}_{<}}$ and $E_{\\mathcal{D}_{\\ge}}$ denote the $\\tilde{\\psi}$-preserving normal conditional expectations from $\\tilde{\\mathcal{M}}$ onto\n$ \\mathcal{D}_{<} $ and $ \\mathcal{D}_{\\ge}$, respectively. We observe that $E_{\\mathcal{D}_{<}} E_{\\mathcal{D}_{\\ge}} = P_{[0,0]}$ is immediately\ndeduced from the tensor product structure of the probability space $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$. But this allows us to compute\n\\begin{align*}\n P_{(-\\infty,0]}P_{[0,\\infty)} \n = P_{(-\\infty,0]} E_{\\mathcal{D}_{<}} E_{\\mathcal{D}_{\\ge}} P_{[0,\\infty)} \n = P_{(-\\infty,0]} P_{[0,0]} P_{[0,\\infty)}\n = P_{[0,0]}. \n\\end{align*}\n\\end{proof}\nWe remark that the bilateral stationary Markov process $(\\tilde{\\mathcal{M}}, \\tilde{\\psi}, \\tilde{\\alpha}_0, \\tilde{\\mathcal{M}}^{\\tilde{\\beta}_0})$ is not minimal. \n\n\n\\subsection{Constructions of Representations of $F$ from stationary Markov processes} \\label{subsection:constr-rep-F}\nThe following theorem uses the tensor product construction of the present section to show that bilateral stationary Markov processes of tensor product type give rise to representations of $F$.\n\\begin{Theorem}\\label{theorem:TensorMarkovF}\nLet $(\\mathcal{A} \\otimes \\mathcal{C}, \\varphi \\otimes \\chi, \\gamma, \\mathcal{A} \\otimes \\mathbbm{1}_\\mathcal{C})$ be a bilateral \nstationary Markov process.\nThen there exists a probability space $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$, generating representations \n$\\tilde{\\rho}_B, \\tilde{\\rho}_M \\colon F \\to \\operatorname{Aut}(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ and an embedding \n$\\kappa \\colon (\\mathcal{A} \\otimes \\mathcal{C}, \\varphi \\otimes \\chi) \\to (\\tilde{\\mathcal{M}},\\tilde{\\psi})$ such that\n\\begin{enumerate}\n \\item $\\kappa (\\mathcal{A} \\otimes \\mathbbm{1}) = \\tilde{\\mathcal{M}}^{\\tilde{\\rho}_B(g_0)}$,\n \\item $\\gamma^n|_{\\mathcal{A}\\otimes \\mathbbm{1}_{\\mathcal{C}}} = \\kappa^* \\tilde{\\rho}_M(g_0^n)\\kappa|_{\\mathcal{A} \\otimes \\mathbbm{1}_\\mathcal{C}}$ \n\tfor all $n \\in \\mathbb{N}_0$. \n\\end{enumerate}\n\t\n\\end{Theorem}\n\\begin{proof} \nWe take \n\\[\n\t(\\tilde{\\mathcal{M}}, \\tilde{\\psi}) \n\t:= \\big(\\mathcal{A} \\otimes \\mathcal{C}^{\\otimes_{\\mathbb{N}_0^2}} , \\varphi \\otimes \\chi^{\\otimes_{\\mathbb{N}_0^2}}\\big)\n\\]\nand construct two representations of the Thompson group $F$ as obtained in Propositions \\ref{proposition:rep-Fbeta} and \\ref{proposition:rep-Falpha}. That is, we define the representation $\\tilde{\\rho}_B \\colon F \\to\n\\operatorname{Aut}(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ as $\\tilde{\\rho}_B(g_n):=\\tilde{\\beta}_n$ and the representation $\\tilde{\\rho}_M \\colon F \\to\n\\operatorname{Aut}(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ as $\\tilde{\\rho}_M(g_n):=\\tilde{\\alpha}_n$ with $\\tilde{\\alpha}_0 = \\gamma_0 \\circ \\tilde{\\beta}_0$ and $\\tilde{\\alpha}_n = \\tilde{\\beta}_n$ for $n \\ge 1$. We remind that $\\gamma_0$ is the natural extension of $\\gamma$ to an automorphism on $(\\tilde{\\mathcal{M}}, \\tilde{\\psi})$.\nLet $\\kappa$ be the natural\nembedding of $(\\mathcal{A} \\otimes \\mathcal{C}, \\varphi \\otimes \\chi)$ into $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$. 
By Proposition \\ref{proposition:MarkovTwoReps2}, $(\\tilde{\\mathcal{M}},\\tilde{\\psi},\\tilde{\\alpha}_0,\\tilde{\\mathcal{M}}^{\\tilde{\\beta}_0})$ is a\nnoncommutative stationary Markov process, and $\\kappa(\\mathcal{A}\\otimes \\mathbbm{1}_{\\mathcal{C}})=\\tilde{\\mathcal{M}}^{\\tilde{\\beta}_0}$.\n\nWe are given that $(\\mathcal{A} \\otimes \\mathcal{C}, \\varphi \\otimes \\chi, \\gamma, \\mathcal{A} \\otimes \\mathbbm{1}_\\mathcal{C})$ is a\nstationary Markov process, hence by Proposition \\ref{proposition:dilation}, we get \n\\begin{equation}\\label{eq:Markv1} \n (\\gamma|_{\\mathcal{A}\\otimes \\mathbbm{1}_{\\mathcal{C}}})^n = \\gamma^n|_{\\mathcal{A}\\otimes \\mathbbm{1}_{\\mathcal{C}}}, \n\\end{equation}\nfor $n\\in \\mathbb{N}_0$. It is easy to check that, for all $a\\in \\mathcal{A}$, \n\\[\n\\kappa^* \\tilde{\\alpha}_0\\kappa(a\\otimes \\mathbbm{1}_{\\mathcal{C}})= \\kappa^* \\tilde{\\alpha}_0 \\left( a \\otimes \\mathbbm{1}_{\\mathcal{C}}^{\\otimes_{\\mathbb{N}_0}} \\otimes \\mathbbm{1}_{\\mathcal{C}}^{\\otimes_{\\mathbb{N}_0}} \\cdots \\right)\n= \\kappa^*\\left(\n\\begin{array}{ccccccccc}\n& & \\vdots& & && & &\\\\\n& & \\otimes &&& & & &\\\\\n& & \\mathbbm{1}_{\\mathcal{C}} &&& & & &\\\\\n& & \\otimes &&& & & &\\\\\n& & \\mathbbm{1}_{\\mathcal{C}} &&& & & &\\\\\n& & \\otimes &&& & & &\\\\\n\\gamma (a & \\otimes & \\mathbbm{1}_{\\mathcal{C}}) \n&\\otimes &\\mathbbm{1}_{\\mathcal{C}}^{\\otimes_{\\mathbb{N}_0}} & \\otimes & \\mathbbm{1}_{\\mathcal{C}}^{\\otimes_{\\mathbb{N}_0}} &\\cdots& \\end{array} \\right)\n= \\gamma (a \\otimes \\mathbbm{1}_{\\mathcal{C}}).\n\\] \nHence, $\\kappa^* \\tilde{\\alpha}_0 \\kappa|_{\\mathcal{A}\\otimes \\mathbbm{1}_\\mathcal{C}}=\\gamma|_{\\mathcal{A}\\otimes \\mathbbm{1}_{\\mathcal{C}}}$. As\n$\\tilde{\\mathcal{M}}^{\\tilde{\\beta}_0}=\\kappa(\\mathcal{A}\\otimes \\mathbbm{1}_\\mathcal{C})$, the stationary Markov process \n$(\\tilde{\\mathcal{M}},\\tilde{\\psi},\\tilde{\\alpha}_0,\\tilde{\\mathcal{M}}^{\\tilde{\\beta}_0})$ has the transition operator \n$\\gamma|_{\\mathcal{A}\\otimes \\mathbbm{1}_{\\mathcal{C}}}$. We once again appeal to Proposition \\ref{proposition:dilation} to get\n\\begin{equation}\\label{eq:Markv2} \n(\\gamma|_{\\mathcal{A}\\otimes \\mathbbm{1}_{\\mathcal{C}}})^n=\\kappa^* \\tilde{\\alpha}_0^n \\kappa|_{\\mathcal{A}\\otimes \\mathbbm{1}_\\mathcal{C}.}\n\\end{equation}\nCombining \\eqref{eq:Markv1} and \\eqref{eq:Markv2} completes the proof of the theorem.\n\\end{proof}\n\n\n\\subsection{The Classical Case} \\label{subsection:constr-classical}\n\nWe state a result of K\\\"ummerer that provides a tensor dilation of any Markov map on a commutative von Neumann algebra. This will allow us to obtain a representation of $F$ as in Theorem \\ref{theorem:TensorMarkovF}.\n \n\\begin{Notation} \\normalfont\nThe (non)commutative probability space $(\\mathcal{L}, \\operatorname{tr}_\\lambda)$\nis given by the Lebesgue space of essentially bounded functions $\\mathcal{L} := L^\\infty([0,1],\\lambda)$ \nand $\\operatorname{tr}_\\lambda := \\int_{[0,1]} \\cdot\\, d\\lambda$ as the faithful normal state on $\\mathcal{L}$. \nHere $\\lambda$ denotes the Lebesgue measure on the unit interval $[0,1] \\subset \\mathbb{R}$. \n\\end{Notation}\n\\begin{Theorem}[{\\cite[4.4.2]{Ku86}}] \\label{theorem:kuemmerer-twosided}\nLet $R$ be a $\\varphi$-Markov map on $\\mathcal{A}$, where $\\mathcal{A}$ is a commutative von \nNeumann algebra with separable predual. 
Then there exists \n$\\gamma \\in \\operatorname{Aut}(\\mathcal{A} \\otimes \\mathcal{L}, \\varphi \\otimes \\operatorname{tr}_\\lambda)$ \nsuch that $(\\mathcal{A}\\otimes \\mathcal{L}, \\varphi\\otimes \\operatorname{tr}_{\\lambda}, \\gamma, \\iota_0)$ is a Markov (tensor)\ndilation of $R$. That is, $(\\mathcal{A}\\otimes \\mathcal{L}, \\varphi\\otimes \\operatorname{tr}_{\\lambda}, \\gamma, \\mathcal{A}\\otimes\n\\mathbbm{1}_{\\mathcal{L}})$ is a stationary Markov process, and for all $n \\in \\mathbb{N}_0$, \n\\[\nR^n = \\iota_0^* \\, \\gamma^n \\iota_0,\n\\]\nwhere $\\iota_0 \\colon (\\mathcal{A},\\varphi) \\to (\\mathcal{A} \\otimes \\mathcal{L}, \\varphi \\otimes \\operatorname{tr}_\\lambda)$\ndenotes the canonical embedding $\\iota_0(a) = a \\otimes \\mathbbm{1}_\\mathcal{L}$ such\nthat $E_0 := \\iota_0 \\circ \\iota_0^* $ is the $\\varphi \\otimes \\operatorname{tr}_\\lambda$-preserving \nnormal conditional expectation from $\\mathcal{A} \\otimes \\mathcal{L}$ onto $\\mathcal{A} \\otimes \\mathbbm{1}_{\\mathcal{L}}$.\n\\end{Theorem}\n\n\n\n\\begin{Theorem}\\label{theorem:F-gen-compression}\nLet $(\\mathcal{A},\\varphi)$ be a probability space where $\\mathcal{A}$ is commutative with separable predual,\nand let $R$ be a $\\varphi$-Markov map on $\\mathcal{A}$. There exists a probability space $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$,\ngenerating representations $\\tilde{\\rho}_B, \\tilde{\\rho}_M \\colon F \\to \\operatorname{Aut}(\\tilde{\\mathcal{M}},\\tilde{\\psi})$, and an embedding \n$\\iota \\colon (\\mathcal{A}, \\varphi) \\to (\\tilde{\\mathcal{M}},\\tilde{\\psi})$ such that\n\\begin{enumerate}\n\\item \n$\\iota(\\mathcal{A}) = \\tilde{\\mathcal{M}}^{\\tilde{\\rho}_B(g_0)}$,\n\\item\n$R^n = \\iota^* \\tilde{\\rho}_M(g_0^n)\\iota $ \nfor all $n \\in \\mathbb{N}_0$. \t\n\\end{enumerate}\t\n\\end{Theorem}\n\\begin{proof}\nBy Theorem \\ref{theorem:kuemmerer-twosided}, there exists $\\gamma\\in \\operatorname{Aut}(\\mathcal{A}\\otimes\n\\mathcal{L},\\varphi\\otimes \\operatorname{tr}_{\\lambda})$ such that $(\\mathcal{A}\\otimes \\mathcal{L}, \\varphi\\otimes\n\\operatorname{tr}_{\\lambda}, \\gamma, \\mathcal{A}\\otimes \\mathbbm{1}_{\\mathcal{L}})$ is a stationary Markov process. By Theorem \\ref{theorem:TensorMarkovF},\nthere exists a probability space $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$, generating representations \n$\\tilde{\\rho}_B, \\tilde{\\rho}_M \\colon F \\to \\operatorname{Aut}(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ and an embedding \n$\\kappa \\colon (\\mathcal{A} \\otimes \\mathcal{L}, \\varphi \\otimes \\chi) \\to (\\tilde{\\mathcal{M}},\\tilde{\\psi})$ such that\n$\\kappa(\\mathcal{A} \\otimes \\mathbbm{1}_{\\mathcal{L}}) = \\tilde{\\mathcal{M}}^{\\tilde{\\rho}_B(g_0)}$ and $\\gamma^n |_{\\mathcal{A} \\otimes \\mathbbm{1}_{\\mathcal{L}}} = \\kappa^* \\tilde{\\rho}_M(g_0^n) \\kappa |_{\\mathcal{A} \\otimes \\mathbbm{1}_\\mathcal{L}}$. The proof is completed by taking $\\iota : = \\kappa \\circ \\iota_0$, where $\\iota_0 \\colon (\\mathcal{A},\\varphi) \\to (\\mathcal{A} \\otimes \\mathbbm{1}_{\\mathcal{L}}, \\varphi \\otimes \\operatorname{tr}_\\lambda)$\ndenotes the canonical embedding $\\iota_0(a) = a \\otimes \\mathbbm{1}_\\mathcal{L}$.\n\\end{proof}\n\n\\section*{Acknowledgements}\n\\label{section:acknowledgements}\nThe second author was partially supported by a Government of\nIreland Postdoctoral Fellowship (Project ID: GOIPD\/2018\/498). \nBoth authors acknowledge several helpful discussions\nwith B.~V.~Rajarama Bhat in an early stage of this project. 
\nAlso the first author would like to thank Persi Diaconis, Gwion Evans, Rolf Gohm, Burkhard K\\\"ummerer \nand Hans Maassen for several fruitful discussions on Markovianity.\nBoth authors thank the organizers of the conference\n\\emph{Non-commutative algebra, Probability and Analysis in Action}\nheld at Greifswald in September 2021 in honour of Michael\nSch\\\"urmann.\n\\label{section:bibliography}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\nRecursive algorithms are often the most efficient technique \nfor calculating gauge theory amplitudes, as ideally information \nis maximally recycled \\cite{Britto:2004ap,Berends:1987me,Parke:1986gb}. \nIn recent years recursive techniques have become a major component for \nevent simulation at the LHC, for tree-level generation \nof multi-jet events and part of the vast improvement in \nNLO calculations at higher multiplicity \n\\cite{Gleisberg:2008fv,Badger:2010nx,\nBern:1994cg,Berger:2008sj}. The \nirreducible complexity of full-matrix \nelements limit computations of final-state partons to a fairly modest number \n(typically $n \\le 10$ at LO, $n \\le 5$ at NLO), which \nin the hard and widely separated regime \nmeets essential experimental demand \\cite{Berger:2010zx,Ita:2011wn,Aad:2013ysa}. \nHowever, for the logarithmically enhanced sector of soft and collinear radiation, \ngenerating high multiplicity is crucial and in practice proceeds through parton shower \nMonteCarlo~\\cite{pythia,herwig,sherpa}. \n\nThis paper introduces a simple technique for recursively \nextracting logarithmic coefficients of $n$-jet \nrates. We emphasize that these coefficients are a mere \nskeleton of the complete (even tree-level matrix element) calculation, but \nour goal here is to explore the high multiplicity regime. \nWe find simple implementations in both the \nexclusive-$k_t$ (here on, \\emph{Durham}) \nand generalized inclusive-$k_t$ (here on, \\emph{Generalized-$k_t$}) jet algorithms \nstarting from their respective generating functionals \n\\cite{fastjet,Catani:1991hj,Leder:1996py,Brown:1991hx,Webber:2010vz,Gerwick:2012fw}. \nThe rates we calculate correspond to expanding in powers of $(\\alpha_s \/\\pi) L^2$, where \nin the Durham algorithm $L$ is the logarithm of a dimensionless resolution scale \n$y_{cut}$, while in the Generalized-$k_t$ algorithm $L^2$ contains separate \nenergy and angular logarithms which depends on a minumum energy scale \n$E_R$ and jet radius $R$. It is known that the resolved coefficients obtained \nin this way are present in the LO matrix element calculation \\cite{Brown:1991hx}, while the \nunresolved ones start at the NLO. As we will see, since our formula allows the efficient \ncomputation of an exclusive $n$-gluon rate to arbitrarily \nhigh order in $(\\alpha_s \/\\pi)L^2$ ({\\sl i.e.} \\, including additional \nunresolved gluons), for all practice purposes these rates can \nbe thought of as resummed containing the same level of \nformal accuracy as a standard parton shower\\footnote{\nour implementation is for the double-leading-logarithms \nonly, but the extension to the relevant next-to-double-logarithms \nalso including the $g\\to q\\bar{q}$ splitting is discussed at a later stage.}. 
It \nis important to bear in mind however that the rate coefficients do \nnot \\emph{a priori} contain \nany notion of kinematics or recoil as in the parton shower.\n\nThere are several potential applications for our work, all generally \nfollowing from the ability to compute analytic expressions in a shower-like \napproximation. To illustrate the improvement \nwith an example, let us outline how the calculation proceeds directly \nfrom the generating functional for the exclusive rates in \n$e^+ e^- \\to \\bar{q}q + 20 \\; \\text{gluons}$ \nin the Durham algorithm. Starting from the generating \nfunctional\n\\begin{alignat}{5}\n\\Phi_{g\/q}(u,Q^2) &= u \\; \\exp \n\\left[ \\int_{Q_0^2}^{Q^2} dt \\; \\Gamma_{g\/q}(Q^2,t) \n \\left( \\Phi_g(u,t) -1 \\right) \\right] \n\\label{eq:gf_evolution}\n\\end{alignat}\nwe obtain the resummed rate differentiating $(\\Phi_q)^2$ 22 times \nwith respect to the variable $u$ at the point $u=0$. Thus we define \nthe exclusive jet fractions \n \\begin{alignat}{5}\n f_{n} = \\left. \\frac{1}{n!} \\frac{d^n}{du^n} \\Phi_q^2 \\right|_{u=0} .\n\\label{eq:def_gf}\n\\end{alignat}\nThe resulting resummed $20$-gluon expression we \nmercifully do not include, but note that it is a linear combination \nof 39,289,183 possible splitting histories\\footnote{The number of splitting \nhistories contributing to an $n$ gluon final state is the recursive number of \ninteger sub-partitions of the integer partitions of $n$.}. Obtaining a \nnumerical answer requires either a numerical evaluation of each of the \n19-dimensional integrals (38 dimensional for the generalized \nclass of $k_t$ algorithms) or for the fixed order coefficient, expanding \nthe Sudakov form factors to the appropriate order and evaluating \nthe still 19-dimensional integral analytically. In practice this procedure could \nbe optimized so that, for example, partial results for \nthe multi-dimensional integrals are recycled, \nbut it should be clear that the manipulations are extremely unwieldy. \n\nExpressing the expanded rates as the resolved and unresolved \ncoefficients \n\\begin{equation}\nP_n = \\text{Res}_n \\; + \\; \\text{URes}_n\n\\end{equation}\nwhere $\\text{Res}_n \\sim \\alpha_s^n $ and $\\text{URes}_n$ starts \nat ${\\mathcal{O}}(\\alpha_s^{n+1})$, our method allows the computation of \n$\\text{Res}_{20}$ in a matter of seconds. Once $\\text{Res}_{20}$ is \nknown, it is straight-forward to ``bootstrap\" the unresolved \ncomponents for the lower multiplcities using \nsimple identical boson (Poisson) statistics. Doing this to \nsufficiently high order, one recovers the resummed rates\\footnote{\nWe note here very explicitly that the physics in our recursive \nprescription is identical to the coherent branching formalism. In fact, we prove \nthe consistency of our method directly from the generating functional. What is \nspecial is that a simplified recursive formula allows us to study gluonic coefficients \nfor arbitrary multiplicities, in practice an order of magnitude larger than using \nconventional techniques. \n}. \n\nThe reason we are able to construct a simple recursive formula \ncomes down to a well known fact about the exponentiation of \nleading singularities in gauge theory amplitudes, namely that it is \ndetermined by the maximally non-abelian contribution \n\\cite{Frenkel:1984pz,Gatheral:1983cz} (for more recent results \nalong these lines see {\\sl e.g.} \\, Refs.~\\cite{DelDuca:2011ae,Gardi:2013ita}). 
\nFor our prescription, which determines the coefficients of the \nleading soft-collinear singularities in the $L\\to \\infty$ sense, the only required \nphysics input is the (coherent branching formalism analogous) maximally \nsecondary coefficient, corresponding to a string of gluons each \nemitting exactly once. This is diagrammatically encapsulated in the \nfirst moment of the generating functional \\eq{eq:gf_evolution}, and these \ncontributions are also order by order guaranteed to exponentiate. Knowing \nonly this contribution, the remainder \nof our recursive formula determines the entire leading coefficient \nusing bosonic statistics. We hope that our proof of the \nrecursive algorithm makes this point clear.\n\n\nThis paper is arranged as follows. In \\sec{sec:Rec} we introduce \nthe details of our recursive prescription for the resolved component. \nFor the sake of presentation we prove the individual steps \nonly at the end of the section. We outline the \nmethod for pure Yang-Mills in the Durham algorithm. \nAt the stated level of accuracy it is simple to \ngeneralize to arbitrary numbers of initial quarks or gluons. \nWe include the prescription for the unresolved component \nin \\sec{sec:URes}. In \\sec{sub_ex} we provide an example \nstep in the recursion for $4$-gluon emission from a $q\\bar{q}$ dipole.\nIn \\sec{sec:genkt} we summarize the small \nmodifications necessary for the \ninclusive-$k_t$ algorithm. In \\sec{sec:proof} we provide proofs for the \nindividual steps of the recursion directly from the generating \nfunctionals. We study the \ngluonic coefficients at high multiplicity in \\sec{sec:app} and discuss some \npossible applications for our computational tool. In the appendix \nwe provide the resummed 6-gluon $f_6$ contribution used to validate our algorithm.\n\n\\medskip\n\n\\section{Recursive prescription}\n\\label{Recsas}\n\\subsection{Resolved Component}\n\\label{sec:Rec}\n\nWe consider here pure Yang-Mills (YM) in the Durham \nalgorithm and start by decomposing the $n$-gluon final \nstate in terms of its \\emph{splitting history}. We differentiate these from \nFeynman diagrams by distinguishing between the emitter and \nemitted parton at each $1\\to2$ splitting. We call each splitting \ninvolving an initial parton \\emph{primary}, and any non-primary \nsplitting is termed \\emph{secondary}.\nFor fixed $n$ we write the resolved component of the \ncorresponding $n$-gluon rate from a single initiator as \n\\begin{equation}\n\\text{Res}_n \\;\\; = \\;\\; \\sum_k \\sum_{i=0}^{n-1} c^{(n)}_{ik} \n\\left(a_s C_A L^{2} \\right)^n \n\\label{defers}\n\\end{equation}\nwhere $a_s = \\alpha_S\/\\pi$, $L = \\log(1\/y_{\\text{cut}})$ and \n$c^{(n)}_{ik} > 0$. The index $i$ counts the number of secondary \nemission in a particular splitting history. \nThe sum on $k$ is over all diagrams of the same order in $i$, which \nis left implicit for the moment. Our definition ensures that every term in \n\\eq{defers} is in one-to-one correspondence with a specific \nsplitting history. However, the recursive formula for the \nresolved coefficients does not depend on the index $k$, so \nwe drop it for the time being. \n\nWe claim that given a specific subset of coefficients from multiplicities $n$ and smaller, we \ncan write a general expression for $c^{(n+1)}_{i}$, and using \\eq{defers}, \ncompute $\\text{Res}_{n+1}$. The necessary ingredients \nfor $c^{(n+1)}_{i}$ are:\n\\begin{itemize}\n\\item All of the $c^{(l)}_{l-1}$ with $l < n+1$. 
These \nare all of the previous coefficients highest order in secondary \nemissions, or in other \nwords containing precisely one primary splitting from the initial \nhard line. \n\\item All of the $c^{(n)}_{l-1}$ with $l -1< n$. These \nare the coefficients with at least two primary splittings only \nfrom the $n$-th coefficient. \n\\item Integer partitions of $n+1$.\\end{itemize}\nUsing these ingredients we find a simple formula for the rate coefficients. \nTo illustrate this procedure we first go through the steps in the recursive \nprescription. We provide a detailed example of one step in the recursion \nfor 4-gluon emission in Section. \\ref{sub_ex}.\n\nThe first step is to divide the coefficients into two categories\n\\begin{equation}\nc^{(n)}_{i} = c^{(n)}_{k} + c^{(n)}_{n-1}\n\\end{equation}\nThe index $i \\in (0,n-1)$, $k \\in (0,n-2)$.\nThe $c^{(n)}_{k}$ coefficients are the contributions with at least 2 primary splittings. \nEach gluonic structure is already present in the lower multiplcity coefficients, \nand can therefore be constructed by multiplying such coefficients and taking into \naccount symmetry factors (this is intuitive, although it is proven in \nSec. \\ref{sec:proof} more explicitly).\nIn contrast, the $c^{(n)}_{n-1}$ coefficients are maximally secondary with respect \nto the hard initial line. These satisfy a relatively simple recursion relation for \npromoting coefficients higher up on the emission tree\n\\begin{equation}\nc^{(n+1)}_{n} \\; =\\; \\; \\sum_{j=0}^{n-1} c^{(n)}_{j} d^{(n)} \n\\qquad \\qquad d^{(n)} = \\dfrac{(2n)!}{(2n+2)!} \\; .\n\\label{easlif}\n\\end{equation}\nDiagrammatically the $n-1$ term in \\eq{easlif} corresponds to the relation in \nFig. \\ref{ill3}.\n\\begin{figure}[t]\n\\includegraphics[width=1.0\\textwidth]{fig_1_f.pdf}\n\\caption{Illustration of the first term in \\eq{easlif}. The solid line always represents \nthe initial partons in the process, which for our current example is a single gluon.}\n\\label{ill3}\n\\end{figure}\nThe grey blob indicates that this gluon is allowed to emit an arbitrary \nnumber of times, and each emission itself may split \\emph{et cetera}. \nThe solid line will always indicate an arbitrary number and type of initial \npartons, which for this specific example we take as a single gluon.\nThe other terms in \\eq{easlif} sums over the $c^{(n)}_j$ terms not maximally \nsecondary and not representable in the relation above.\nWe see that the two step process promotes diagrams with at least two \nprimary emissions to ones on the RHS and finally to the LHS of Fig. \\ref{ill3}. \nThe origin of the specific form of \\eq{easlif} is that the prescription for \npromoting primary to secondary emission essentially involves reweighting\nby the first moment of the generating functional, which for the Durham \nalgorithm is $\\Phi'_{u=1} \\sim \\sum_{n=0}^{\\infty} (aC_A L^2)^{2n}\/(2n)!$ \\cite{Ellis:1991qj}. \nDiagrammatically, this is identical to the sum of maximally secondary \nsplitting histories.\n\nThe final step in our recursion is to generate the $c^{(n+1)}_k$ with $k < n$ \ncoefficients. It is easy to see that a recursion based solely on $c^{(n)}$ \ncoefficients is bound to fail, as the integer partition of $n$ arising at each multiplicity \nis not easily defined recursively. Instead, we compute $c^{(n+1)}_k$ by enumerating \nthe various partitions of gluons and weighting by the appropriate irreducible \nstructures $c^{(k)}_{k-1}$. 
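For instance, grouping three resolved gluons into at least two primary branches, the relevant partitions are $(2,1)$ and $(1,1,1)$, weighted by $c^{(2)}_{1}\\, c^{(1)}_{0}$ and $\\frac{1}{3!}\\big(c^{(1)}_{0}\\big)^{3}$ respectively; the factor $1\/3!$ anticipates the phase space factor for identical structures introduced below.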
Note that only the values of $c^{(k)}_{k-1}$ need \nto be stored from previous multiplicities. \nComputing $n+1$ coefficients we only require $n$ of such numbers \nmaking this step computationally manageable\\footnote{Looping over the \nvarious partitions still constitutes the most computationally intensive part \nof our algorithm.}. An additional ingredient is that \n$m$ identical structures carries a phase space factor $1\/m!$. \n\nA complete representation for this contribution is\n\\begin{equation}\nc^{(n+1)}_{k} = \\sum_{p(n)} \\frac{1}{S} \\left[ \\prod_{\\sigma_i=\\{ \\sigma_1,\\cdots\\, \\sigma_{r}\\} } \\;\nc^{(\\sigma_i)}_{\\sigma_i-1}\\right].\n\\label{c1rec}\n\\end{equation}\nwhere the sum is over integer partitions $p(n)$ of $n$ of length \n$r \\ge 2$. The product is over the individual elements of each \npartition. For example, for $n = 4$ there are 4 partitions in the \nsum $\\left\\{\\sigma_1,\\sigma_2,\\sigma_3,\\sigma_4\\right\\} = \n\\{(3,1),(2,2), (2,1,1),(1,1,1,1)\\}$. \nHere $S$ is the overall symmetry number taken as the \nproduct of identical structure phase space factors, {\\sl e.g.} \\, the \ncontribution from the (2,2) term is \n$(1\/2!)(c^{(2)}_{1})^2$.\nWe can summarize the entire recursive algorithm for the resolved \ncoefficients and the main result of this paper \n\\begin{equation}\n\\sum_{i=0}^{n} c^{(n+1)}_{i} = \\sum_{p(n)} \\frac{1}{S} \\left[ \\prod_{\\sigma_i=\\{ \\sigma_1,\\cdots\\, \\sigma_{r}\\} } \\; \nc^{(\\sigma_i)}_{\\sigma_i-1}\\right]\n\\;\\; + \\;\\;\\sum_{j=0}^{n-1} c^{(n)}_{j} d^{(n)} \n\\label{final}\n\\end{equation}\nAn example recursion for an individual diagram is given in Fig. \\ref{exred}. Note \nthat as soon as a diagram ends up in the furthest right $c^{(n+2)}_{n+1}$ class it \nremains there indefinitely. It is simple to check that our formula exhausts all \npossible splitting histories to a given multiplicity. We confirm the validity \nof \\eq{final} by comparing with a direct computation from the generating \nfunctional with up to $5$ final state gluons \\cite{Gerwick:2012hq}.\n\n\\begin{figure}[t]\n\\includegraphics[width=1.0\\textwidth]{fig_2_f.pdf}\n\\caption{Example recursion for an individual diagram. As soon as \nthe diagram ends up on the right-hand side it is repeated in the \nrecursion according to the $d$ term in \\eq{easlif}.}\n\\label{exred}\n\\end{figure}\n\n\n\\bigskip\n\n\\subsection{Unresolved Component}\n\\label{sec:URes}\n\nGiven the set of resolved coefficients up to multiplicity $n$, it is \nrelatively straight-forward to determine the unresolved coefficients \nfor lower multiplicitities also up to order $(a_s L^2)^n$. To describe these \ncoefficients we extend our notation slightly so that $c^{(l)}_{i} \\to c^{(l,n)}_{i}$ \nwhere $l$ ranges between $0, 1,\\cdots n$ and indicates the \nmultiplicity. The resolved coefficients are then $c^{(n,n)}_{i}$ \nand the unresolved are the rest. Now it should be clear that \nthe unresolved coefficients \ncome from expanding the Sudakovs beyond leading order. \nTherefore, we expect the unresolved coefficients to be related to an expanded \nexponential and most importantly, to be determined from the resolved \ncomponents at the same order.\n\nFor the simplest case of the all primary contributions we find\n\\begin{equation}\nc^{(l,n)}_{0} = (-1)^{n-l} \\frac{1}{(n-l)!} \\frac{1}{l!} \\;.\n\\label{simp_res}\n\\end{equation}\nNote that at every order the individual coefficients correctly \nsatisfy $\\sum_{l=0}^{n} c^{(l,n)}_{0} = 0$. 
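To make the bookkeeping above concrete, the short \\texttt{python} sketch below iterates the resolved recursion \\eq{final} for a single initiating gluon, storing only the summed coefficients $\\sum_j c^{(n)}_{j}$ (no per-diagram information), and also evaluates the all-primary unresolved coefficients \\eq{simp_res} together with the order-by-order check $\\sum_{l=0}^{n} c^{(l,n)}_{0} = 0$ just mentioned. It is an illustration rather than a reference implementation: the partition sum is read as running over partitions of the multiplicity being computed with at least two parts, as in the four-gluon example above, and the one-gluon normalisation $c^{(1)}_{0}$ is left as a free input to be fixed against the generating functional.
\\begin{verbatim}
from fractions import Fraction
from math import factorial

def d(n):
    # d^(n) = (2n)!/(2n+2)! = 1/((2n+1)(2n+2))
    return Fraction(1, (2*n + 1)*(2*n + 2))

def partitions(m, max_part=None):
    # integer partitions of m as non-increasing tuples
    if max_part is None:
        max_part = m
    if m == 0:
        yield ()
        return
    for first in range(min(m, max_part), 0, -1):
        for rest in partitions(m - first, first):
            yield (first,) + rest

def resolved(nmax, c1=Fraction(1, 2)):
    # returns total[n] = sum_j c^(n)_j in units of (a_s C_A L^2)^n;
    # c1 = c^(1)_0 is an input normalisation (assumption of this sketch)
    total, maxsec = {1: c1}, {1: c1}
    for n in range(1, nmax):
        # promotion step: exactly one primary splitting off the hard line
        maxsec[n + 1] = d(n) * total[n]
        # at least two primary splittings: partitions of n+1 of length >= 2
        multi = Fraction(0)
        for part in partitions(n + 1):
            if len(part) < 2:
                continue
            term = Fraction(1)
            for s in part:
                term *= maxsec[s]
            for p in set(part):   # 1/m! per group of m identical structures
                term /= factorial(part.count(p))
            multi += term
        total[n + 1] = multi + maxsec[n + 1]
    return total

def unresolved_primary(l, n):
    # all-primary coefficient c^(l,n)_0
    return Fraction((-1)**(n - l), factorial(n - l) * factorial(l))

if __name__ == "__main__":
    print(resolved(20)[20])      # resolved 20-gluon coefficient
    for n in range(1, 8):        # exclusive rates sum to zero order by order
        assert sum(unresolved_primary(l, n) for l in range(n + 1)) == 0
\\end{verbatim}
Exact rational arithmetic is used so that individual coefficients can be compared term by term with an expansion of the generating functional.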
This fact holds on \na diagram by diagram basis to all multiplicities for the exclusive rates. \nIn order to extend also to the secondary terms, the complication is that we \nneed to distinguish diagrams beyond what we have so far for the resolved \ncomponent. The additional necessary ingredient is the number of \nrepeated identical emissions in a given splitting history. \n\nIn order to proceed, let us note that due to our recursion relation \nthe resolved component $c^{(n,n)}_{j}$ of each splitting history is known, \nand can be decomposed in terms of numerical coefficients times \npowers of $c^{(1,1)}_{0}$, $c^{(2,2)}_{0}$ and $c^{(2,2)}_{1}$. \nThese are our starting conditions, which we will refer to as the \\emph{primordial \ncoefficients}. Let us denote the powers of each as $a$, $b$ and $c$ \nrespectively. Now we define \n\\begin{equation}\np = a + 2b + c\n\\label{nuaasp}\n\\end{equation}\nand claim that for the unresolved components $c^{(l,n)}_{j}$ of this \nparticular diagram, that $l \\in (n, n-1, \\cdots , n-p)$ with coefficients \ngiven by \n\\begin{equation}\nc^{(l,n)}_{j} = (-1)^{n-l} \\frac{1}{(n-l)!}\\frac{ p! } {(p-(n-l))!} c^{(n,n)}_{j} \\, .\n\\label{fin_un_r}\n\\end{equation}\nFor $l1$).\n\n\\medskip\n\n\nGiven a tree $\\bf t\\in T^{\\infty}$, for $a\\in {\\mbb N}$, define $r_a{\\bf t}=\\{\\nu\\in{\\bf\nt}:|\\nu|\\leq a\\}$. Then $r_a{\\bf t}$ is a finite tree whose\ncontour function is denoted by $\\{C_a({\\bf t}, s):s\\geq0\\}$. For $k\\geq0$, we denote by $Y_k({\\bf t})$ the number of individuals in generation $k$: $$Y_k({\\bf t})=\\#\\{\\nu\\in{\\bf t}: |\\nu|=k\\}, \\quad k\\geq 0.$$\n\n\nGiven a probability measure $p=\\{p_k: k\\geq0\\}$ with\n$\\sum_{k\\geq0}kp_k >1.$ Let ${\\cal G}^p$ be a super-critical Galton-Watson tree with offspring distribution $p$.\n Then ${\\P}(\\#{\\cal G}^p<\\infty)=f(p)$, where $f(p)$ is the minimal solution of the following equation of $s$:\n$$\ng^p(s):=\\sum_{k\\geq0}s^kp_k=s, \\quad 0\\leq s\\leq1.\n$$\nLet $q=\\{q_k: k\\geq0\\}$ be another probability distribution such that\n$$q_k=f(p)^{k-1}p_k, \\text{ for } k\\geq1,\\text{ and } q_0=1-\\sum_{k\\geq1}q_k.$$\nThen $\\sum_{k\\geq0}kq_k<1$.\nLet ${\\cal G}^q$ be a subcritical GW tree\nwith offspring distribution $q$. Note that $\\l(Y_k({\\cal G}^q), k\\geq0\\r)$ is a Galton-Watson process starting from a single ancestor with offspring distribution $q$. We first present a simple lemma.\n\n\\begin{lemma}\\label{Lem:GWGir} Let $F$ be any nonnegative measurable function on $\\bf T$. Then for ${\\bf t}\\in {\\bf T}$,\n\\beqlb\\label{GWGirb}\n{\\mP}\\l[{\\cal G}^p={\\bf t}\\r]=f(p){\\mP}\\left[{\\cal\nG}^q={\\bf t}\\right]\n\\eeqlb\nand for any $a\\in \\mbb N$,\n\\beqlb\\label{GWGira}\n{\\mE}\\l[F(r_a{\\cal G}^p)\\r]={\\mE}\\left[f(p)^{1-Y_{a}({\\cal G}^q)}F(r_a{\\cal\nG}^q)\\right].\n\\eeqlb\n\\end{lemma}\n\\proof (\\ref{GWGirb}) is just (4.8) in \\cite{adh:pgwttvmp}. The proof of (\\ref{GWGira}) is straightforward. 
Set $\\text{gen}(a, {\\bf t})=\\{\\nu\\in{\\bf t}: |\\nu|=a\\}$.\nBy (\\ref{ForGuT}), for $\\bf t\\in T$, we have\n$$\\P(r_a{\\cal G}^p=r_a{\\mathbf t})=\\prod_{\\nu\\in r_a{\\mathbf t}\\setminus{\\cal L}(r_a{\\mathbf t})}p_{k_{\\nu}{\\mathbf t}}\\cdot \\prod_{\\nu\\in{\\cal L}(r_a{\\mathbf t})\\setminus \\text{gen}(a, {\\bf t})}p_0 $$\nand then\n$$\n\\P(r_a{\\cal G}^p=r_a{\\mathbf t})=f(p)^{-\\left(\\sum_{\\nu\\in r_a{\\mathbf t}\\setminus{\\cal\nL}(r_a{\\mathbf t})}(k_{\\nu}{\\mathbf\nt}-1)\\right)}\\left(\\frac{p_0}{q_0}\\right)^{\\#{\\cal L}(r_a{\\mathbf\nt})-Y_{a}({\\bf t})}\\P(r_a{\\cal G}^q=r_a{\\mathbf t}).\n$$\nWe also have\n\\begin{equation}\\label{eq:pzero}\nq_0=1-\\sum_{k=1}^{\\infty}f(p)^{k-1}p_k=1+p_0\/f(p)-g^p(f(p))\/f(p)=p_0\/f(p).\n\\end{equation}\nThen (\\ref{GWGira}) follows from the fact that given a tree\n$\\mathbf{t}\\in \\mathbf{T}$,\n$$\\#\\mathcal{L}({\\mathbf t})=1+\\sum_{\\nu\\in {\\mathbf t}\\setminus{\\cal\nL}({\\mathbf t})}(k_{\\nu}{\\mathbf t}-1).$$\n \\qed\n\n\\begin{remark}\\label{Rem: dis}It is easy to see that $(f(p)^{-Y_{n}({\\cal G}^q)}, n\\geq0)$ is a martingale with respect to ${\\cal F}_n=\\sigma(r_n {\\cal G}^q).$ In fact, by the branching property, we have for all $0\\leq m\\leq n$,\n\\beqlb\\label{mart}\n{\\mbb E}\\l[f(p)^{-Y_{n}({\\cal G}^q)}\\bigg{|}r_m{\\cal G}^q\\r]=\\l[{\\mbb E}\\l[f(p)^{-Y_{n-m}({\\cal G}^q)}\\r]\\r]^{Y_{m}({\\cal G}^q)}=f(p)^{-Y_{m}({\\cal G}^q)}.\n\\eeqlb\n\\end{remark}\nSince contour functions code finite trees in $\\mathbf{T}$, we immediately get the following result.\n\\begin{corollary}\\label{cor:con}\nFor any nonnegative measurable function $F$ on $ C({\\mbb R}^+, {\\mbb R}^+)$ and $a\\in\\mbb N$,\n\\beqlb\\label{cor:cona}{\\mE}\\l[F\\l( C_{a}({\\cal G}^p,\\cdot)\\r)\\r]={\\mE}\\left[f(p)^{1-Y_{a}({\\cal G}^q)}F\\l( C_{a}({\\cal G}^q,\\cdot)\\r)\\right].\n\\eeqlb\n\\end{corollary}\n\n\nLemma \\ref{Lem:GWGir} could be regarded as a discrete counterpart of the martingale transformation for L\\'evy trees in Section 4 of \\cite{ad:ctvmp}; see also (\\ref{Gircon}) below in this paper. To see this, we need to introduce continuous state branching processes and L\\'evy trees.\n\n\n\\subsection{Continuous State Branching Processes}\\label{Secbm}\nLet $\\alpha\\in \\mbb R$, $\\beta\\ge 0$ and $\\pi$ be a $\\sigma$-finite measure\non $(0,+\\infty)$ such that $\\int_{(0,+\\infty)}(1\\wedge\nr^2)\\pi(dr)<+\\infty$.\nThe branching mechanism $\\psi$ with characteristics $(\\alpha,\\beta,\n\\pi)$ is\ndefined by:\n\\begin{equation}\n \\label{eq:psi}\n\\psi(\\lambda)=\\alpha\\lambda+\\beta\\lambda^2\n+\\int_{(0,+\\infty)}\\left(e^{-\\lambda\n r}-1+\\lambda r1_{\\{r<1\\}}\\right)\\pi(dr).\n\\end{equation}\nA c\\`ad-l\\`ag $\\mbb R^+$-valued Markov process $Y^{\\psi,x}=(Y_t^{\\psi, x}, t\\geq0)$ started at $x\\geq0$ is called $\\psi$-continuous state branching process ($\\psi$-CSBP in short) if its transition kernels satisfy\n$$\nE[e^{-\\lambda Y^{\\psi,x}_t}]=e^{-xu_t(\\lambda)},\\quad t\\geq0,\\, \\lambda>0,\n$$\nwhere $u_t(\\lambda)$ is the unique nonnegative solution of\n$$\n\\frac{\\partial u_t(\\lz)}{\\partial t}=-\\psi(u_t(\\lz)),\\quad u_0(\\lz)=\\lz.\n$$\n\n\n\\noindent $\\psi$ and $Y^{\\psi,x}$ are said to be sub-critical (resp. critical, super-critical) if $\\psi'(0+)\\in(0,+\\infty)$ (resp. $\\psi'(0+)=0, \\psi'(0+)\\in[-\\infty,0)$). 
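A standard example to keep in mind, recalled here only as an illustration: for the quadratic branching mechanism $\\psi(\\lambda)=\\beta\\lambda^2$ with $\\beta>0$, i.e. $\\alpha=0$ and $\\pi=0$ in (\\ref{eq:psi}), the equation for $u_t(\\lambda)$ is solved explicitly by
$$
u_t(\\lambda)=\\frac{\\lambda}{1+\\beta\\lambda t},\\qquad t\\geq0,\\,\\lambda>0.
$$
This is the branching mechanism of the Feller branching diffusion; it is critical since $\\psi'(0+)=0$, and it satisfies the assumptions (H1) and (H2) imposed below because $\\int_1^{+\\infty}\\frac{d\\lambda}{\\beta\\lambda^2}<+\\infty$ while $\\int_{(0,\\varepsilon]}\\frac{d\\lambda}{\\beta\\lambda^2}=+\\infty$.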
We say that $\\psi$ and $Y^{\\psi,x}$ are (sub)critical if they are critical or sub-critical.\n\n\\bigskip\n\n\\noindent In the sequel of this paper, we will assume the following assumptions on $\\psi$ are in force:\n\\begin{enumerate}\n\n\n\\item[(H1)] The Grey condition holds:\n\\begin{equation}\n\\int^{+\\infty}_1\\frac{d\\lambda}{\\psi(\\lambda)}<+\\infty.\n\\end{equation}\nThe Grey condition is equivalent to the a.s. finiteness of the\nextinction time of the corresponding CSBP. This assumption is used to\nensure that the corresponding height process is continuous.\n\n\n\\item[(H2)] The branching mechanism $\\psi$ is conservative: for all\n $\\varepsilon>0$,\n\\[\n\\int_{(0, \\varepsilon]} \\frac{d\\lambda}{|\\psi(\\lambda)|}=+\\infty.\n\\]\nThe conservative assumption is equivalent to the finiteness of the\ncorresponding CSBP at all time.\n\n\\end{enumerate}\nLet us remark that (H1) implies $\\beta>0$ or $\\int_{(0,1)} r \\pi(dr)=+\\infty $. And if $\\psi$ is (sub)critical, then we must have $\\alpha-\\int_{(1,{+\\infty})}r\\pi(dr)\\in[0,+\\infty).$ We end this subsection by collecting some results from \\cite{ad:ctvmp}.\n\n\\bigskip\n\n\\noindent Let $(X_t,t\\geq0)$ denote the canonical process of ${\\cal D}:=D({\\mbb R}^+, \\mbb R)$. Let $P_x^{\\psi}$ be the probability measure on $D({\\mbb R}^+, \\mbb R)$ such that $P_x^{\\psi}(X_0=x)=1$ and $(X_t,t\\geq0)$ is a $\\psi$-CSBP under $P_x^{\\psi}$.\n\n\\begin{lemma}\\label{AD2.2}(Lemma 2.4 in \\cite{ad:ctvmp})\nAssume that $\\psi$ is supercritical satisfying (H1) and (H2).\nThen\n\\begin{enumerate}\n\n\\item[(i)] $P_x^{\\psi}$-a.s. $X_{\\infty}=\\lim_{t\\rar\\infty}X_t$ exists, $X_{\\infty}\\in\\{0,\\infty\\}$ and\n $$\n P_x^{\\psi}(X_{\\infty}=0)=e^{-\\gamma x},\n $$\nwhere $\\gamma$ is the largest root of $\\psi(\\lz)=0$.\n\\item[(ii)] For any nonnegative random variable measurable w.r.t. $\\sigma(X_t, t\\geq0)$, we have\n $$\n E_x^{\\psi}[W|X_{\\infty}=0]=E_x^{\\psi_{\\gamma}}[W],\n $$\nwhere $\\psi_{\\gamma}(\\cdot)=\\psi(\\cdot+\\gamma).$\n\n\\end{enumerate}\n\\end{lemma}\n\\subsection{Height processes}\\label{Secbpheight}\n\n\nTo code the genealogy of the $\\psi$-CSBP, Le Gall and Le Jan \\cite{[LeLa98]} introduced the so-called height process, which is a functional of the L\\'{e}vy process with Laplace exponent $\\psi$; see also Duquesne and Le Gall \\cite{[DL02]}.\n\nAssume that $\\psi$ is (sub)critical satisfying (H1). Let ${\\mbb P}^{\\psi}$ be a probability measure on $\\cal D$ such that under ${\\mbb P}^{\\psi}$, $X=(X_t,t\\geq0)$ is a L\\'{e}vy process with\nnonnegative jumps and with Laplace exponent $\\psi$:\n $$\n {\\mbb E}^{\\psi}\\Big[e^{-\\lz X_t}\\Big]=e^{t\\psi(\\lz)},\\quad t\\geq0,\\,\\lz\\geq0.\n $$\n\n The so-called continuous-time \\textit{height process} denoted by $H$ with sample path in $C(\\mbb R^+,\\mbb R^+)$ is defined for every $t\\geq0$ by:\n $$\n H_t=\\liminf_{\\ez\\rightarrow0}\\frac{1}{\\ez}\\int_0^t1_{\\{X_s0$, define $$T_x=\\inf\\{t\\geq0: I_t\\leq -x\\},$$\n where $I_t=\\inf_{0\\leq r\\leq t}X_r$. By Theorem 1.4.1 of \\cite{[DL02]}, the process\n$(L^a_{T_x},a\\geq0)$ under $\\mP^{\\psi}$ is distributed as a $\\psi$-CSBP started at $x$.\n\nLet ${{\\cal C}:=C({\\mbb R}^+, {\\mbb R}^+)}$ be the space of nonnegative continuous functions on ${\\mbb R}^+$ equipped with the supmum norm. Denote by $(e_t,t\\geq0)$ the canonical process of ${\\cal C}$. Denote by $\\mP_x^{\\psi}$ the law of $(H_{t\\wedge T_x}, t\\geq0)$ under $\\mP^{\\psi}$. Then $\\mP_x^{\\psi}$ is a probability distribution on $\\cal C$. 
Set $Z_a=L_{T_x}^a$ under $\\mP_x^{\\psi}$, i.e.,\n$$\n\\lim_{\\ez\\rightarrow0}\\sup_{a\\geq\\ez}\\mE_x^{\\psi}\\left[\\sup_{s\\leq\nt}\\left|\\ez^{-1}\\int_0^s 1_{\\{a-\\ez0$, define\n$$\n\\Gamma_{f,a}(x)=\\int_0^x1_{\\{f(t)\\leq a\\}}dt,\\quad \\Pi_{f,a}(x)=\\inf\\{r\\geq0: \\Gamma_{f,a}(r)>x\\},\\quad x\\geq0,\n$$\nwhere we make the convention that $\\inf\\emptyset=+\\infty.$ Then we define\n$$\n\\pi_a(f)(x)=f(\\Pi_{f,a}(x)),\\quad f\\in {C({\\mbb R}^+, {\\mbb R}^+)}, \\, x\\geq0.\n$$\nNote that $\\pi_a\\circ\\pi_b=\\pi_a$ for $0\\leq a\\leq b.$ Let $\\psi$ be a super-critical branching\nmechanism satisfying (H2).\nDenote by $q^*$ the unique (positive) root of $\\psi'(q)=0$. Then the\nbranching mechanism $\\psi_q(\\cdot)=\\psi(\\cdot+q)-\\psi(q)$ is critical for\n$q=q^*$ and sub-critical for $q>q^*$. We also have $\\gamma>q^*$. Because super-critical branching processes may have infinite mass,\nin \\cite{ad:ctvmp} it was cut at a given level to construct the corresponding\ngenealogical continuum random tree. Define\n$$\nM_a^{\\psi_q, -q}=\\exp\\left\\{-qZ_0+qZ_a+\\psi(q)\\int_0^aZ_sds\\right\\},\\quad a\\geq0.\n$$\nDefine a filtration ${\\cal H}_a=\\sigma(\\pi_a(e))\\vee {\\cal N}$, where $\\cal N$ is the class of ${\\mbb P}_x^{\\psi_q}$ negligible sets. By (\\ref{local}), we have $M^{\\psi_q,-q}$ is ${\\cal H}$-adapted.\n\\begin{theorem}(Theorem 2.2 in \\cite{ad:ctvmp}) For each $q\\geq q^*$, $M^{\\psi_q,-q}$ is an ${\\cal H}$-martingale under ${\\mbb P}_x^{\\psi_q}.$\n\\end{theorem}\n\\proof See Theorem 2.2 and arguments in Section 4 in \\cite{ad:ctvmp}. \\qed\n\n\nDefine the distribution $\\mP_x^{\\psi,a}$\nof the $\\psi$-CRT cut at level $a$ with initial mass $x$, as\nthe distribution of $\\pi_a(e)$ under $M_a^{\\psi_q,-q}d\\mP_x^{\\psi_q}$:\nfor any\nnon-negative measurable function $F$ on $C({\\mbb R}^+, {\\mbb R}^+)$,\n \\beqlb\\label{supercriticalP}\n \\mE_x^{\\psi,a}[F(e)]&=&\\mE_x^{\\psi_q}\\Big[M_a^{\\psi_q,-q}F(\\pi_a(e))\\Big],\n\n\n\n\n \\eeqlb\nwhich do not depend on the choice of $q\\geq q^*$; see Lemma 4.1 of\n\\cite{ad:ctvmp}. Taking $q=\\gamma$ in (\\ref{supercriticalP}), we see\n\\beqlb\\label{Gircon} \\mE_x^{\\psi,a}[F(e)]=\\mE_x^{\\psi_{\\gamma}}\\Big[e^{-\\gamma x+\\gamma Z_a}F(\\pi_a(e))\\Big]\\eeqlb\nand $(e^{-\\gamma x+\\gamma Z_a}, a\\geq0)$ under ${\\mP}_x^{\\psi_{\\gamma}}$ is an ${\\cal H}$-martingale with mean 1.\n\\begin{remark}\\label{lem: defsuper} $\\mP_x^{\\psi,a}$ gives the law of super-critical L\\'evy trees truncated at height $a$. Then the law of the whole tree could be defined as a projective limit. To be more precise,\n let $\\cal W$ be the set of $C({\\mbb R}^+, {\\mbb R}^+)$-valued functions endowed with the $\\sigma$-field generated by the coordinate maps. Let $(w^a,a\\geq0)$ be the canonical process on $\\cal W$. Proposition 4.2 in \\cite{ad:ctvmp} proved that there exists a probability measure $\\bar{\\mP}_x^{\\psi}$\non $\\W$ such that for every\n$a\\geq0$, the distribution of $w^a$ under $\\bar{\\mP}_x^{\\psi}$\n is $\\mP_x^{\\psi,a}$\n\n and for $0\\leq a\\leq b$\n$$\n\\pi_a(w^b)=\\pi_a\\quad \\bar{\\mP}_x^{\\psi}-a.s\n$$\n\\end{remark}\n\n\\begin{remark}\nThe above definitions of $ \\mE_x^{\\psi,a}$ and $\\bar{\\mP}_x^{\\psi}$\nare also valid for (sub)critical branching mechanisms.\n\\end{remark}\n\n\n\n\n\n\n\\section{From Galton-Watson forests to L\\'evy forests}\\label{Secmain}\nComparing (\\ref{cor:cona}) with (\\ref{Gircon}), one can see that the l.h.s. are similar. 
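Let us also record the elementary meaning of the maps introduced above, since they are used repeatedly in what follows: $\\Gamma_{f,a}(x)$ is the time spent by the path $f$ at or below level $a$ before time $x$, and $\\pi_a(f)$ is the path obtained from $f$ by excising the excursions above level $a$ and concatenating the remaining pieces; in particular $\\pi_a(f)=f$ whenever $f\\leq a$. This is the picture behind cutting the tree coded by $f$ at height $a$.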
The super-critical trees (discrete or continuum) truncated at height $a$ are connected to sub-critical trees, via a martingale transformation. Motivated by Duquesne and Le Gall's work \\cite{[DL02]}, which studied the scaling limit of (sub)critical trees, one may hope that the laws of suitably rescaled super-critical Galton-Watson trees truncated at height $a$ could converge to the law defined in (\\ref{Gircon}). Our main result, Theorem \\ref{Main}, will show it is true.\n\n\nFor each integer $n\\geq1$ and real number $x>0$,\n\\begin{itemize}\n\\item\n Let $[x]$ denote the integer part of $x$ and let $\\lceil x\\rceil$ denote the minimal integer which is larger than $x$.\n\n\\item\n Let\n$p^{(n)}=\\{p^{(n)}_k: k=0,1,2,\\ldots\\}$ be a probability\nmeasure on $\\mbb N$.\n\n\\item Let\n${\\cal G}^{p^{(n)}}_1, {\\cal G}^{p^{(n)}}_2, \\ldots, {\\cal G}^{p^{(n)}}_{[nx]}$ be independent Galton-Watson trees with the same offspring distribution $p^{(n)}.$\n\n\\item Define $Y_k^{p^{(n)},x}=\\sum_{i=1}^{[nx]}Y_k({\\cal G}_{i}^{p^{(n)}})$. Then $Y^{p^{(n)},x}=(Y_k^{p^{(n)},x},k=0,1,\\ldots)$ is a Galton-Watson process with offspring distribution $p^{(n)}$ starting from $[nx]$.\n\n\\item For $a\\in \\mbb N$, define the contour function of trees cut at level $a$, $C_a^{p^{(n)},x}=(C_a^{p^{(n)},x}(t), t\\geq0)$, by concatenating the contour functions $(C(r_a{\\cal G}_{1}^{p^{(n)}}, t), t\\in[0,2\\#r_a{\\cal G}_{1}^{p^{(n)}}]),\n \\ldots, (C(r_a{\\cal G}_{[nx]}^{p^{(n)}}, t), t\\in[0,2\\#r_a{\\cal G}_{[nx]}^{p^{(n)}}])$ and setting $C_a^{p^{(n)},x}(t)=0$ for $t\\geq 2\\sum_{i=1}^{[nx]}\\#r_a{\\cal G}_{i}^{p^{(n)}}$.\n\n\\item For $a\\in{\\mbb R}^+$, define $C_a^{p^{(n)},x}=\\pi_a(C_{\\lceil a\\rceil}^{p^{(n)},x})$.\n\n\n\\item If $\\sum_{k\\geq0}kp_k^{(n)}\\leq 1$, then we define the contour function $C^{p^{(n)},x}=(C^{p^{(n)},x}(t), t\\geq0)$ by concatenating the contour functions $(C({\\cal G}_{1}^{p^{(n)}}, t), t\\in[0,2\\#{\\cal G}_{1}^{p^{(n)}}]), \\ldots, (C({\\cal G}_{[nx]}^{p^{(n)}}, t), t\\in[0,2\\#{\\cal G}_{[nx]}^{p^{(n)}}])$ and setting $C^{p^{(n)},x}(t)=0$ for $t\\geq 2\\sum_{i=1}^{[nx]}\\#{\\cal G}_{i}^{p^{(n)}}$.\n \\end{itemize}\n\n\n\n\\noindent\nLet $(\\gamma_n, n=1,2,\\ldots)$ be a nondecreasing sequence of positive numbers converging to $\\infty$. Define\n$$\nG^{(n)}(\\lambda)=n\\gamma_n[g^{p^{(n)}}(e^{-\\lambda\/n})-e^{-\\lambda\/n}],$$ where $g^{p^{(n)}}$ is the generating function of $p^{(n)}$,\nand define a probability measure on $[0,\\infty)$ by\n$$\\mu^{(n)}\\l(\\frac{k-1}{n}\\r)=p_k^{(n)},\\quad k\\geq0.$$\nWe then present the following statements. By $\\overset{(d)}{\\rightarrow}$ we mean convergence in distribution.\n\\begin{enumerate}\n\\item[(A1)] $G^{(n)}(\\lambda)\\rar\\psi(\\lambda)$ as $n\\rar\\infty$ uniformly on any bounded interval.\n\n\\item[(A2)]\n\\beqlb\\label{lem:lib}\n\\left(\\frac{1}{n}Y^{p^{(n)},x}_{[\\gamma_n\nt]},\\; t\\geq0\\right)\\overset{(d)}{\\longrightarrow}(Y_t^{\\psi,x},\\; t\\geq0),\\quad \\text{as }n\\rar\\infty,\n\\eeqlb\nin $D(\\mbb R^+,\\mbb R^+)$.\n\n\\item[(A3)] There exists a probability measure $\\mu$ on $(-\\infty, +\\infty)$ such that\n$\\l(\\mu^{(n)}\\r)^{*[n\\gamma_n]}\\rar \\mu$ as $n\\rar\\infty$, where $\\int e^{-\\lambda x}\\mu(dx)=e^{\\psi(\\lambda)}.$\n\\end{enumerate}\nThe following lemma is a variant of Theorem 3.4 in \\cite{[Gr74]}.\n\\begin{lemma}\\label{lem:li}\nLet $\\psi$ be a branching mechanism satisfying (H1) and (H2). 
Then (A1), (A2) and (A3) are equivalent.\n\\end{lemma}\n\\begin{remark}\n(A3) is just the condition (i) in Theorem 3.4 of \\cite{[Gr74]}. Under our assumption on $\\psi$, we do not need condition (b) there. (A3) is also equivalent to the convergence of random walks to $\\psi$-L\\'evy processes; see Theorem 2.1.1 of \\cite{[DL02]} for (sub)critical case.\n\\end{remark}\n\n\\proof We shall show that (A2)$\\Leftrightarrow$(A3) and (A3)$\\Leftrightarrow$(A1).\n\ni): If (A2) holds, then $\\psi$ is conservative implies that ${\\mbb P}(Y_t<\\infty)=1$ for all $t\\geq0$. Then Theorem 3.3 in \\cite{[Gr74]} gives (A2)$\\Rightarrow$(A3). Meanwhile, Theorem 3.1 in \\cite{[Gr74]} implies (A3)$\\Rightarrow$(A2).\n\nii): We first show (A3)$\\Rightarrow$(A1). Denote by $L^{(n)}(\\lambda)$ the Laplace transform of $\\l(\\mu^{(n)}\\r)^{*[n\\gamma_n]}$. Then Theorem 2.1 in \\cite{[Gr74]}, together with (A3), gives that for every real number $d>0$\n\\beqlb\\label{lap}\n\\log L^{(n)}(\\lz)=[n\\gamma_n] \\log \\l(\\frac{e^{\\lz\/n}}{n\\gamma_n}G^{(n)}(\\lz)+1\\r)\\rar {\\psi(\\lz)},\n\\text{ as } n\\rar\\infty,\n\\eeqlb uniformly in $\\lz\\in[0,d],$\nwhich implies that for any $\\ez>0$, all $n>n(d,\\ez)$ and $\\lz\\in[0,d]$,\n$$\nn\\gamma_n\\l(e^{\\frac{\\psi(\\lz)-\\ez}{[n\\gamma_n]}}-1\\r)\n<\ne^{\\lz\/n}G^{(n)}(\\lz)<\nn\\gamma_n\\l(e^{\\frac{\\psi(\\lz)+\\ez}{[n\\gamma_n]}}-1\\r).\n$$\nThen by $|e^x-1-x|<{e^{|x|}}|x|^2\/2$,\n$$\n-\\frac{2(\\psi(\\lz)-\\ez)^2 e^{\\frac{|\\psi(\\lz)-\\ez|}{[n\\gamma_n]}}}{n\\gamma_n}-2\\ez\n<\ne^{\\lz\/n}G^{(n)}(\\lz)-\\psi(\\lz)<\n\\frac{(\\psi(\\lz)+\\ez)^2 e^{\\frac{|\\psi(\\lz)+\\ez|}{n\\gamma_n}}}{n\\gamma_n}+\\ez.$$\nNote that $\\psi$ is locally bounded. Thus as $n\\rar\\infty$,\n$G^{(n)}(\\lz)\\rar \\psi(\\lz),$\nuniformly on any bounded interval, which is just (A1).\nSimilarly, one can deduce that if $(A1)$ holds, then $ L^{(n)}(\\lz)\\rar e^{\\psi(\\lz)}$ as $n\\rar\\infty$, which implies (A3). \\qed\n\n\n\n\nNow, we are ready to present our main theorem. Define $\\mathcal{E}^{p^{(n)},x}=\\inf\\{k\\geq0: Y^{p^{(n)},x}_k=0\\}$ and $\\mathcal{E}^{\\psi,x}=\\inf\\{t\\geq0: Y_t^{\\psi,x}=0\\}$ with the convention that $\\inf\\emptyset=+\\infty$. Denote by $g_{k}^{p^{(n)}}$ the $k$-th iterate of $g^{p^{(n)}}$.\n\\begin{theorem}\\label{Main}\nLet $\\psi$ be a branching mechanism satisfying (H1) and (H2).\n Assume that (A1) or (A2) holds. Suppose in addition that for every $\\dz>0$,\n\\beqlb\\label{Main1}\n\\liminf_{n\\rar\\infty}g_{[\\dz\\gamma_n]}^{p^{(n)}}(0)^n>0.\n\\eeqlb\nThen for $x>0$,\n\\beqlb\\label{conexeb}\n\\frac{1}{\\gamma_n}{\\cal E}^{p^{(n)},x}\\overset{(d)}{\\rar}\\mathcal{E}^{\\psi,x}\n\\text { on }[0,+\\infty]\\eeqlb\n and for any bounded continuous function $F$ on $C({\\mbb R}^+, {\\mbb R}^+)$ and every $a\\geq0$,\n\\beqlb\\label{Main2}\n\\lim_{n\\rar\\infty}{\\mE}\n\\l[F\\l(\\pi_a\\l(\\gamma_n^{-1}C^{p^{(n)},x}({2n\\gamma_n\\cdot})\\r)\\r) \\r]=\\mE^{\\psi,a}_x\\l[F\\l(e\\r)\\r].\\eeqlb\n\\end{theorem}\n\nBefore proving the theorem, we would like to give some remarks.\n\\begin{remark}\n(\\ref{Main1}) is essential to (\\ref{Main2}); see the comments following Theorem 2.3.1 in \\cite{[DL02]}. In fact under our assumptions $(H1), (H2)$ and $(A1)$, (\\ref{Main1}) is equivalent to (\\ref{conexeb}). 
To see that (\\ref{conexeb}) implies (\\ref{Main1}), note that\n $$g_{[\\dz\\gamma_n]}^{p^{(n)}}(0)^{[nx]}=\\mP\\l[Y^{p^{(n)},x}_{[\\dz \\gamma_n]}=0\\r]=\\mP[{\\cal E}^{p^{(n)},x}\/{\\gamma_n}< \\dz]$$\nwhich, together with (\\ref{conexeb}), gives\n$$\n\\liminf_{n\\rar\\infty}g_{[\\dz\\gamma_n]}^{p^{(n)}}(0)^{[nx]}= \\liminf_{n\\rar\\infty}\n\\mP[{\\cal E}^{p^{(n)},x}\/{\\gamma_n}<\\dz]\\geq \\mP[{\\cal E}^{\\psi,x}<\\dz]>0,\n$$\nwhere the last inequality follows from our assumption $(H1)$; see Chapter 10 in \\cite{[Ky06]} for details.\n\\end{remark}\n\\begin{remark}\\label{remDW}\nSome related work on the convergence of discrete Galton-Watson trees has been done in \\cite{[DL02]} and \\cite{[DW12]}. In \\cite{[DL02]}, only the (sub)critical case was considered; see Theorem \\ref{lem:dl} below. In Theorem 4.15 of \\cite{[DW12]}, a similar result was obtained using a quite different formalism. The assumptions there are the same as ours in Theorem \\ref{Main}, but the convergence there holds for locally compact rooted real trees in the sense of the pointed Gromov-Hausdorff distance, which is a weaker notion of convergence. Thus Theorem \\ref{Main} implies that the super-critical L\\'evy trees constructed in \\cite{ad:ctvmp} coincide with those studied in \\cite{[DW12]}; see also \\cite{[ADH12]} and \\cite{[DW07]}.\n\\end{remark}\n\n\nWe then present a variant of Theorem 2.3.1 and Corollary 2.5.1 in \\cite{[DL02]}, which is essential to our proof of Theorem \\ref{Main}.\n\n\\begin{theorem}\\label{lem:dl} (Theorem 2.3.1 and Corollary 2.5.1 of \\cite{[DL02]})\nLet $\\psi$ be a (sub)critical branching mechanism satisfying $(H1)$.\n Assume that (A1) or (A2) holds. Suppose in addition that for every $\\dz>0$,\n\\beqlb\\label{lem:dlb}\n\\liminf_{n\\rar\\infty}g_{[\\dz\\gamma_n]}^{p^{(n)}}(0)^n>0.\n\\eeqlb\nThen \\beqlb\\label{lem:dla}\n\\frac{1}{\\gamma_n}{\\cal E}^{p^{(n)},x}\\overset{(d)}{\\rar}\\mathcal{E}^{\\psi,x}\n\\text { on }[0,+\\infty)\\eeqlb\n and for any bounded continuous function $F$ on $C({\\mbb R}^+, {\\mbb R}^+)\\times D({\\mbb R}^+, {\\mbb R}^+)$,\n\\beqlb\\label{coropia}\n\\lim_{n\\rar\\infty}{\\mE}\n\\l[F\\l(\\pi_a\\l(\\gamma_n^{-1}C^{p^{(n)},x}({2n\\gamma_n\\cdot})\\r), \\l(\\frac{1}{n}Y^{p^{(n)}, x}_{[\\gamma_n\na]}\\r)_{a\\geq0}\\r) \\r]=\\mE^{\\psi}_x\\l[F\\l(\\pi_a(e), (Z_a)_{a\\geq0}\\r)\\r].\n\\eeqlb\n\\end{theorem}\n\\proof The comments following Theorem 2.3.1 in \\cite{[DL02]} give (\\ref{lem:dla}). By Corollary 2.5.1 in \\cite{[DL02]}, we have\n \\beqlb\\label{lem:dlc}\n\\lim_{n\\rar\\infty}{\\mE}\n\\l[F\\l(\\l(\\gamma_n^{-1}C^{p^{(n)},x}({2n\\gamma_n\\cdot})\\r), \\l(\\frac{1}{n}Y^{p^{(n)},x}_{[\\gamma_n\na]}\\r)_{a\\geq0}\\r) \\r]=\\mE^{\\psi}_x\\l[F\\l(e, (Z_a)_{a\\geq0}\\r)\\r].\n\\eeqlb\n On the other hand, let ${\\cal C}_{a}$ be the set of discontinuities of $\\pi_a$. 
(\\ref{localtime}) yields\n\\beqlb\\label{coropib}\n\\Gamma_{e,a}(x)=\\int_0^x1_{\\{e_t\\leq a\\}}dt=\\int_{\\mbb R^+}1_{\\{s\\leq a\\}}L_x^sds=\\int_0^x1_{\\{e_t< a\\}}dt,\\quad x\\geq0,\n\\eeqlb\nwhich implies that almost surely the level set $\\{t: e_t=a\\}$ has zero Lebesgue measure and hence $e\\notin{\\cal C}_a$ almost surely. Then (\\ref{coropia}) follows from (\\ref{lem:dlc}) and the continuous mapping theorem. \\qed\n\nThe next lemma describes the asymptotic behaviour of the extinction probability $f(p^{(n)})$.\n\\begin{lemma}\\label{lem:exe}\nLet $\\psi$ be a super-critical branching mechanism satisfying (H1) and (H2), and let $\\gamma$ be the largest root of $\\psi(\\lambda)=0$. Assume that (A1) holds and that for every $\\dz>0$,\n\\beqlb\\label{lem:exe1}\n\\liminf_{n\\rar\\infty}g_{[\\dz\\gamma_n]}^{p^{(n)}}(0)^n>0.\n\\eeqlb\nThen as $n\\rar\\infty$,\n\\beqlb\\label{conexea}\nf(p^{(n)})^{[nx]}\\rar e^{-\\gamma x},\\quad x>0.\n\\eeqlb\n\\end{lemma}\n\\proof\nRecall that $f(p^{(n)})$ denotes the minimal solution of $g^{p^{(n)}}(s)=s.$\nFor each $n\\geq1$, define\n$$\nq^{(n)}_k=f({p^{(n)}})^{k-1}p^{(n)}_k,\\quad k\\geq1\\quad \\text{ and }\\quad\nq^{(n)}_0=1-\\sum_{k\\geq1}q^{(n)}_k.\n$$\nThen $q^{(n)}=\\{q_k^{(n)}: k\\geq0\\}$ is a probability distribution with generating function given by\n\\beqlb\\label{gq}\ng^{q^{(n)}}(s)=g^{p^{(n)}}\\l(sf({p^{(n)}})\\r)\/f({p^{(n)}}),\\quad 0\\leq s\\leq 1.\n\\eeqlb\n\n\\noindent Thus $ g^{q^{(n)}}(0)=g^{p^{(n)}}(0)\/f({p^{(n)}})$ and by induction we further have\n\\beqlb\\label{extinclim}\ng_{k+1}^{q^{(n)}}(0)=g^{q^{(n)}}\\l(g_{k}^{q^{(n)}}(0)\\r)\n=g^{p^{(n)}}\\l(g_{k}^{q^{(n)}}(0)f({p^{(n)}})\\r)\/f({p^{(n)}})\n=g_{k+1}^{p^{(n)}}(0)\/f({p^{(n)}}),\\quad k\\geq1.\n\\eeqlb\nWith (\\ref{lem:exe1}), we see that for any $\\delta>0$,\n\\beqlb\\label{sub2}\n1\\geq\\liminf_{n\\rar\\infty}g_{[\\dz \\gamma_n]}^{q^{(n)}}(0)^n\n=\\liminf_{n\\rar\\infty}g_{[\\dz \\gamma_n]}^{p^{(n)}}(0)^n\/f({p^{(n)}})^n\\geq\\liminf_{n\\rar\\infty}g_{[\\dz \\gamma_n]}^{p^{(n)}}(0)^n>0.\n\\eeqlb\nThen we also have $e^{-\\gamma_0}:=\\liminf_{n\\rar\\infty}f({p^{(n)}})^n>0.$\nSince $f({p^{(n)}})\\leq1$, we may write $f({p^{(n)}})=e^{-a_n\/n}$ for some $a_n\\geq0$. We further have $\\limsup_{n\\rar\\infty}a_n=\\gamma_0.$ We shall show that $\\gamma_0=\\gamma$ and $\\{a_n:n\\geq1\\}$ is a convergent sequence. To this end, let $\\{a_{n_k}:k\\geq1\\}$ be a convergent subsequence of $\\{a_n:n\\geq1\\}$ with $\\lim_{k\\rar\\infty}a_{n_k}=:\\tilde{\\gamma}\\leq \\gamma_0.$ Then by (A1),\n$$\n0=n_k\\gamma_{n_k}[{g^{p^{(n_k)}}(e^{-a_{n_k}\/n_k})-e^{-a_{n_k}\/{n_k}}}]\\rar \\psi(\\tilde{\\gamma}),\\quad \\text{as}\\quad k\\rar\\infty.\n$$\nThus $\\psi(\\tilde{\\gamma})=0.$ On the other hand, note that $\\psi$ is a convex function with $\\psi(0)=0$ and $\\gamma$ is the largest root of $\\psi(\\lz)=0$. Then we have $\\psi(\\lz_1)<0$ and $\\psi(\\lz_2)>0$ for $0<\\lz_1<\\gamma<\\lz_2$. If $\\tilde{\\gamma}\\neq\\gamma$, then $\\tilde{\\gamma}=0$. In this case, we may find a sequence $\\{b_{n_k}: k\\geq1\\}$ with $b_{n_k}>a_{n_k}$ for all $k\\geq 1$ such that $b_{n_k}\\rar\\gamma$ and for $k$ sufficiently large\n$$\n{g^{p^{(n_k)}}(e^{-b_{n_k}\/n_k})-e^{-b_{n_k}\/{n_k}}}=0.\n$$\nThis contradicts the fact that $f(p^{(n)})=e^{-a_n\/n}$ is the minimal solution of $g^{p^{(n)}}(s)=s.$ Thus $\\tilde{\\gamma}=\\gamma$, which implies that $\\lim_{n\\rar\\infty}a_n=\\gamma$ and $\\lim_{n\\rar\\infty}f(p^{(n)})^{[nx]}=e^{-\\gamma x}$ for any $x>0.$ \\qed\n\n\n\n\\bigskip\n\nWe are now in a position to prove Theorem \\ref{Main}.\n\n\\bigskip\n\n{\\bf Proof of Theorem \\ref{Main}:} With Theorem \\ref{lem:dl} in hand, we only need to prove the result when $\\psi$ is super-critical. 
The proof will be divided into three steps.\n\n\\textit{First step:} One can deduce from (A1) and (\\ref{conexea}) that\n\\beqlb\\label{sub1}\n&&n\\gamma_n[g^{q^{(n)}}\\l(e^{-\\lambda\/n}\\r)-e^{-\\lambda\/n}]\\cr\n&&\\qquad=n\\gamma_n\\l\n[g^{p^{(n)}}\\l(e^{-\\lambda\/n}f({p^{(n)}})\\r)\n-e^{-\\lambda\/n}f({p^{(n)}})\\r]\/f({p^{(n)}})\\cr\n&&\\qquad\\rightarrow\\psi(\\lambda+\\gamma),\n\\quad\n\\text{as }n\\rightarrow\\infty,\n\\eeqlb\nuniformly on any bounded interval. Then Lemma \\ref{lem:li} and Theorem \\ref{lem:dl}, together with (\\ref{sub2}) and (\\ref{sub1}), imply that\n\\beqlb\\label{conexesub}\n\\frac{1}{\\gamma_n}{\\cal E}^{q^{(n)},x}\\overset{(d)}{\\rar}\\mathcal{E}^{\\psi_{\\gamma},x}\n\\text { on }[0,+\\infty)\\eeqlb\nand for any bounded continuous function $F$ on $C({\\mbb R}^+, {\\mbb R}^+)\\times D({\\mbb R}^+, {\\mbb R})$,\n\\beqlb\\label{consub}\n\\lim_{n\\rar\\infty}{\\mE}\\l[F\\l(\\pi_a\\l((\\gamma_n^{-1}C^{q^{(n)},x}({2n\\gamma_n\\cdot})\\r), \\l(\\frac{1}{n}Y^{q^{(n)},x}_{[\\gamma_n\na]}\\r)_{a\\geq0}\\r) \\r]=\\mE^{\\psi_{\\gamma}}_x\\l[F(\\pi_a(e), (Z_a)_{a\\geq0})\\r].\n\\eeqlb\n\n\\textit{Second step:} We shall prove (\\ref{conexeb}). Note that\n$$\n\\{{\\cal E}^{p^{(n)},x}<\\infty\\}=\\{{\\cal G}_i^{p^{(n)}}, i=1,\\ldots,[nx] \\text{ are finite trees }\\}.\n$$\nThen by Corollary \\ref{cor:con}, for $f\\in C({\\mbb R}^+,{\\mbb R}^+)$,\n$$\n{\\mE}\\l[f\\l({\\cal E}^{p^{(n)},x}\/{\\gamma_n}\\r)1_{\\{{\\cal E}^{p^{(n)},x}<\\infty\\}}\\r]=f(p^{(n)})^{[nx]}{\\mE}\\l[f\\l({\\cal E}^{q^{(n)},x}\/{\\gamma_n}\\r)\\r]\n$$\nwhich, by (\\ref{conexea}), (\\ref{conexesub}) and Lemma \\ref{AD2.2}, converges to $e^{-\\gamma x}{\\mE}\\l[f\\l(\\mathcal{E}^{\\psi_{\\gamma},x}\\r)\\r]\n={\\mE}\\l[f\\l(\\mathcal{E}^{\\psi,x}\\r)1_{\\{{\\cal E}^{\\psi,x}<\\infty\\}}\\r]$, as $n\\rar\\infty$.\nWe also have that\n$$\n\\mP[{\\cal E}^{p^{(n)},x}=\\infty]=1-f(p^{(n)})^{[nx]}\\rar1-e^{-\\gamma x}=\\mP[{\\cal E}^{\\psi,x}=\\infty],\\quad\\text{as }n\\rar\\infty,\n$$\nwhich gives (\\ref{conexeb}).\n\n\\textit{Third step:} We shall prove (\\ref{Main2}). By Corollary \\ref{cor:con}, for any nonnegative measurable function $F$ on $C({\\mbb R}^+, {\\mbb R}^+)$ and $a\\geq0$,\n\\beqlb\\label{defsuper}\n{\\mE}\\l[F(C_{\\lc a\\rc}^{p^{(n)},x}(\\cdot))\\r]={\\mE}\\left[f(p^{(n)})^{[nx]-Y_{\\lc a\\rc}^{q^{(n)},x}}\nF(C_{\\lc a\\rc}^{q^{(n)},x}(\\cdot))\\right].\n\\eeqlb\nNote that\n$$\nC_a^{q^{(n)},x}=\\pi_aC_{\\lc a\\rc}^{q^{(n)},x}\\text{ and }\\pi_a(\\gamma_n^{-1}C^{q^{(n)},x})=\\gamma_n^{-1}C_{\\gamma_n a}^{q^{(n)},x}.\n$$\nThen by (\\ref{defsuper}) we have for $a\\in {\\mbb R}^+$\n\\beqlb\\label{defsuper1}\n{\\mE}\\l[F(C_{a}^{p^{(n)},x}(\\cdot))\\r]={\\mE}\\left[f(p^{(n)})^{[nx]-Y_{\\lc a\\rc}^{q^{(n)},x}}\nF(C_{a}^{q^{(n)},x}(\\cdot))\\right]\n\\eeqlb\nand\n\\beqnn{\\mE}\\l[F\\l(\\gamma_n^{-1}\nC_{\\gamma_na}^{p^{(n)},x}({2n\\gamma_n\\cdot})\\r) \\r]\n={\\mE}\\l[f(p^{(n)})^{[nx]-Y_{\\lc \\gamma_na\\rc}^{q^{(n)},x}}F\\l(\\pi_a\\l(\\gamma_n^{-1}\nC^{q^{(n)},x}({2n\\gamma_n\\cdot})\\r)\\r) \\r].\n\\eeqnn\nWe shall show that $\\{f(p^{(n)})^{[nx]-Y_{\\lc \\gamma_na\\rc}^{q^{(n)},x}}, n\\geq1\\}$ is uniformly integrable. Write $Y_a^{n}=Y_{\\lc \\gamma_na\\rc}^{q^{(n)},x}\/n$ for simplicity.\nFirst, note that ${\\mE}\\l[f(p^{(n)})^{[nx]-nY_a^{n}}\\r]=1$. 
Then with (\\ref{conexea}) and (\\ref{consub}) in hand, by the bounded convergence theorem, we have\n$$\n \\lim_{l\\rar\\infty}\\lim_{n\\rar\\infty}{\\mE}\\l[f(p^{(n)})^{[nx]-n(l\\wedge Y_a^{n})}\\r]\n=\\lim_{l\\rar\\infty}{\\mE}_x^{\\psi_{\\gamma}}\\l[e^{-\\gamma x+\\gamma(l\\wedge Z_a)}\\r]={\\mE}_x^{\\psi_{\\gamma}}\\l[e^{-\\gamma x+\\gamma Z_a}\\r]=1.\n$$\nNote that both ${\\mE}_x^{\\psi_{\\gamma}}\\l[e^{-\\gamma x+\\gamma(l\\wedge Z_a)}\\r]$ and ${\\mE}\\l[f(p^{(n)})^{[nx]-n(l\\wedge Y_a^{n})}\\r]$ are increasing in $l$. Thus for every $\\ez>0$, there exist $l_0$ and $n_0$ such that for all $l>l_0$ and $n>n_0$,\n$$1-\\ez\/2<{\\mE}\\l[f(p^{(n)})^{[nx]-n(l\\wedge Y_a^{n})}\\r]\\leq 1.$$\nMeanwhile, since\n$$\n \\lim_{l\\rar\\infty}{\\mE}\\l[f(p^{(n)})^{[nx]-n(l\\wedge Y_a^{n})}\\r]={\\mE}\\l[f(p^{(n)})^{[nx]-nY_a^{n}}\\r]=1,\n$$\nthere exists $l_1>0$ such that for all $n\\geq1$,\n$$1-\\ez\/2<{\\mE}\\l[f(p^{(n)})^{[nx]-n(l_1\\wedge Y_a^{n})}\\r]\\leq {\\mE}\\l[f(p^{(n)})^{[nx]-nY_a^{n}}\\r]=1.$$\nThen for all $n\\geq1$,\n$${\\mE}\\l[f(p^{(n)})^{[nx]-nY_a^{n}}1_{\\{Y_a^{n}>l_1\\}}\\r]-\n{\\mE}\\l[f(p^{(n)})^{[nx]-nl_1}1_{\\{Y_a^{n}>l_1\\}}\\r]<\\ez\/2.$$\nDefine $C_0=\\sup_{n\\geq 1}f(p^{(n)})^{[nx]-nl_1}<\\infty.$ Then for any set $A\\in {\\cal F}$ with ${\\mP}(A)<\\frac{\\ez}{2C_0},$\n\\beqnn\n{\\mE}\\l[f(p^{(n)})^{[nx]-nY_a^{n}}1_A\\r]<{\\mE}\\l[f(p^{(n)})^{[nx]-n(l_1\\wedge Y_a^{n})}1_A\\r]+\\ez\/2<\\ez.\n\\eeqnn\nThus $\\{f(p^{(n)})^{[nx]-Y_{\\lc \\gamma_na\\rc}^{q^{(n)},x}}, n\\geq1\\}$ is uniformly integrable; see Lemma 4.10 in \\cite{[Ka02]}. Using the Skorohod representation theorem and (\\ref{consub}), one can deduce that\n\\beqnn\n&&\\lim_{n\\rar\\infty}{\\mE}\\l[F\\l(\\gamma_n^{-1}\nC_{\\gamma_na}^{p^{(n)},x}({2n\\gamma_n\\cdot})\\r) \\r]\n\\cr&&\n\\quad=\\lim_{n\\rar\\infty}{\\mE}\\l[f(p^{(n)})^{[nx]-Y_{\\lc \\gamma_na\\rc}^{q^{(n)},x}}F\\l(\\pi_a\\l(\\gamma_n^{-1}\nC^{q^{(n)},x}({2n\\gamma_n\\cdot})\\r)\\r) \\r]\n\\cr&&\n\\quad=\\mE^{\\psi_{\\gamma}}_x\\l[e^{-\\gamma x+\\gamma Z_a}F(\\pi_a e) \\r].\n\\eeqnn\nwhich is just the right hand side of (\\ref{Main2}).\nWe have completed the proof.\n\\qed\n\n\n\n\\begin{remark}Write $C^n_t=\\gamma_n^{-1}C^{p^{(n)},x}({2n\\gamma_nt})$ for simplicity and recall that $(w^a,a\\geq0)$ denotes the canonical process on $\\cal W$.\nSuppose that the assumptions of Theorem \\ref{Main} are satisfied. Then one can construct a sequence of probability measures $\\bar{\\mP}_x^{p^{n}}$ on $\\W$ such that for every\n$a\\geq0$, the distribution of $w^a$ under $\\bar{\\mP}_x^{p^{n}}$ is the same as $\\pi_a(C^n)$ and for $0\\leq\na\\leq b$,\\;\n$$\n\\pi_a(w^b)=\\pi_a\\quad \\bar{\\mP}_x^{p^{(n)}}-a.s.\n$$\nWe then have \\beqlb\\label{conversuper}\\bar{\\mP}_x^{p^{n}}\\rar \\bar{\\mP}_x^{\\psi}\\text{ as }n\\rar\\infty.\\eeqlb\n\\end{remark}\n\n\\begin{remark}\nIn \\cite{ad:ctvmp}, an excursion measure (`distribution' of a single tree) was also defined. However, we could not find an easy proof of convergence of trees under such excursion measure.\n\\end{remark}\n\n{\\bf Acknowledgement} Both authors would like to give their sincere thanks to J.-F. Delmas and M. Winkel for their valuable comments and suggestions on an earlier version of this paper. H. He is supported by SRFDP (20110003120003), Ministry of Education (985 project) and NSFC (11071021, 11126037, 11201030).\nN. 
Luan is supported by UIBE (11QD17) and NSFC (11201068).\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{The number of conditional images} To analyze the impact of the number of conditional images, we train our F2GAN with $K_1$ conditional images based on seen categories, and generate new images for unseen categories with $K_2$ conditional images. By default, we set $K=K_1=K_2=3$ in our experiments. We evaluate the quality of generated images using different $K_1$ and $K_2$ in low-data (\\emph{i.e.}, $10$-sample) classification (see Section $4.4$ in the main paper). Taking the EMNIST dataset as an example, we report the results in Table~\\ref{tab:number_effect} by varying $K_1$ and $K_2$ over $\\{3, 5, 7, 9\\}$. From Table~\\ref{tab:number_effect}, we can observe that our F2GAN achieves satisfactory performance when $K_2=K_1$. The performance generally increases as $K$ increases (except from 3 to 5), but the performance gain is not very significant. Then, we observe how the performance varies with fixed $K_1$.\nGiven a fixed $K_1$, the performance drops when $K_2<K_1$, and it drops sharply when $K_2>K_1$, especially when $K_2$ is much larger than $K_1$ (\\emph{e.g.}, $K_1=3$ and $K_2=9$). One possible explanation is that when we train our F2GAN with $K_1$ conditional images, it is not adept at fusing the information of more conditional images ($K_2>K_1$) in the testing phase.\n\n\\setlength{\\tabcolsep}{8pt}\n\\begin{table}[t]\n \\caption{Accuracy(\\%) of low-data (10-sample) classification augmented by our F2GAN with different $ \\textbf{K}_1$ and $ \\textbf{K}_2$ on the EMNIST dataset.} \n \\centering\n \\fontsize{8}{8}\\selectfont\n \\begin{tabular}{lrrrr}\n \\hline\n ~ & $K_1=3$ & $K_1=5$&$K_1=7$ & $K_1=9$ \\cr\n \n \\hline\n $K_2=3$&97.01 & 96.86 & 95.82 & 94.56 \\cr\n \\hline\n $K_2=5$& 95.24 & 96.98 & 96.08 & 95.52 \\cr\n \\hline\n $K_2=7$&93.76 & 95.13 & 97.23 & 96.86 \\cr\n \\hline\n $K_2=9$&90.17 & 92.74 & 94.38& 97.86 \\cr\n \\hline \n \\end{tabular}\n \\vspace{0.1mm}\n \\label{tab:number_effect}\n\\end{table}\n\n\n\n\n\\section{More Generation Results}\nWe show more example images generated by our F2GAN ($K=3$) on the Flowers and Animals Faces datasets in Figure~\\ref{fig:flowers} and Figure~\\ref{fig:animals} respectively. We additionally conduct experiments on the FIGR-8~\\cite{clouatre2019figr} dataset, which is not used in our main paper. The generated images on the FIGR-8 dataset are shown in Figure~\\ref{fig:figr-8}. On all three datasets, our F2GAN can generally generate diverse and plausible images based on a few conditional images. However, for some complex categories with very large intra-class variance, the generated images are not very satisfactory. For example, in the $4$-th row of Figure~\\ref{fig:animals}, the mouths of some dog faces look a little unnatural. We conjecture that in these hard cases, our fusion generator may have difficulty in fusing the high-level features of conditional images or seeking relevant details from conditional images.\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[scale=0.25]{.\/figures\/flower_generated.jpg}\n\\end{center}\n\\caption{Images generated by our F2GAN($ \\textbf{K=3}$) on Flowers dataset. The conditional images are in the left three columns.} \n\\label{fig:flowers} \n\\end{figure*}\n\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[scale=0.25]{.\/figures\/animals_generated.jpg}\n\\end{center}\n\\caption{Images generated by our F2GAN($ \\textbf{K=3}$) on Animals Faces dataset. 
The conditional images are in the left three columns.} \n\\label{fig:animals} \n\\end{figure*}\n\n\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[scale=0.5]{.\/figures\/figr8.png}\n\\end{center}\n\\caption{Images generated by our F2GAN($ \\textbf{K=3}$) on FIGR-8 dataset ~\\cite{clouatre2019figr}. The conditional images are in the left three columns.} \n\\label{fig:figr-8} \n\\end{figure*}\n\n\n\\section{More Interpolation results}\nAs in Section 4.3 in the main paper, We show more interpolation results of our F2GAN in Figure~\\ref{fig:interpolation}. \nGiven two images from the same unseen category, we perform linear interpolation based on these two conditional images. In detail, for interpolation coefficients $\\bm{a}=[a^1, a^2]$, we start from $[0.9,0.1]$, and then gradually decrease (\\emph{resp.}, increase) $a^1$ (\\emph{resp.}, $a^2$) to $0.1$ (\\emph{resp.}, $0.9$) with step size $0.1$.\nIt can be seen that our F2GAN is able to produce diverse and realistic images with rich details between two conditional images, even when two conditional images are quite different. \n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[scale=0.6]{.\/figures\/interpolation_sup.png}\n\\end{center}\n\\caption{Linear interpolation results of our F2GAN on Flowers dataset.}\n\\label{fig:interpolation} \n\\end{figure*}\n\n\n\n\n\n\\section{Comparison with Few-shot Image Translation}\nFew-shot image translation methods like FUNIT~\\cite{liu2019few} mainly borrow category-invariant content information from seen categories to generate new images for unseen categories in the testing phase. \nTechnically, FUNIT disentangles the category-relevant factors~(\\emph{i.e.}, class code) and category-irrelevant factors~(\\emph{i.e.}, content code) of images. Next, we refer to the images from seen (\\emph{resp.}, unseen) categories as seen (\\emph{resp.}, unseen) images.\nBy replacing the content code of an unseen image with those of seen images, FUNIT can generate more images for this unseen category.\nHowever, in this way, few-shot image translation can only introduce category-irrelevant diversity, but fails to introduce enough category-relevant diversity for category-specific properties. \n\nTo confirm this point, we conduct few-shot classification experiments (see Section $4.5$ in the main paper) to evaluate the quality of generated images. Based on the released model of FUNIT~\\cite{liu2019few} trained on Animal Faces~\\cite{deng2009imagenet}, we use class codes of unseen images and content codes of seen images to generate $512$ new images for each unseen category. Then, we use the generated images to help few-shot classification (see Section $4.5$ in the main paper), which is referred to as ``FUNIT-1\" in Table~\\ref{tab:performance_fewshot_classifier}. Besides, we also exchange content codes within the images from the same unseen category to generate new images for each unseen category, but the number of new images generated in this way is quite limited. \nSpecifically, in $N$-way $C$-shot setting, we can only generate $(C-1) \\times C$ new images for each unseen category. We refer to this setting as ``FUNIT-2\" in Table~\\ref{tab:performance_fewshot_classifier}. \n\nFrom Table~\\ref{tab:performance_fewshot_classifier}, it can be seen that ``FUNIT-1\" is better than ``FUNIT-2\", because ``FUNIT-1\" leverages a large amount of extra seen images when generating new unseen images. 
However, ``FUNIT-1\" is inferior to some state-of-the-art few-shot classification methods as well as our F2GAN, because FUNIT cannot introduce adequate category-relevant diversity as analyzed above.\n\n\n\\setlength{\\tabcolsep}{2pt}\n\\begin{table}[t]\n \\caption{Accuracy(\\%) of different methods on Animals Faces in few-shot classification setting.} \n \\centering\n \n \\begin{tabular}{lrr}\n \n \n \\hline\n Method & 5-way 5-shot &10-way 5-shot\\cr\n \n \\hline\n MatchingNets~\\cite{vinyals2016matching} &59.12 &50.12 \\cr\n\n MAML~\\cite{finn2017model} & 60.03 &49.89 \\cr\n\n RelationNets~\\cite{sung2018learning} &67.51 & 58.12 \\cr\n\n MTL~\\cite{sun2019meta} &79.85 &70.91 \\cr\n\n DN4~\\cite{li2019revisiting} &81.13 &71.34 \\cr\n \n MatchingNet-LFT~\\cite{Hungfewshot} &80.95 &71.62 \\cr\n \n \n MatchingGAN~\\cite{hong2020matchinggan} & 80.36 & 70.89\\cr\n \n FUNIT-1 & 78.02 &69.12 \\cr\n FUNIT-2 &75.29 &67.87 \\cr\n F2GAN & $\\textbf{82.69}$ & $\\textbf{73.19}$ \\cr\n \n \\bottomrule[0.8pt]\n \\end{tabular}\n \\label{tab:performance_fewshot_classifier}\n\\end{table}\n\n\\section{Details of Network Architecture}\n\\textbf{Generator} In our fusion generator, there are in total $11$ residual blocks ($5$ encoder blocks, $5$ decoder blocks, and $1$ intermediate block), in which each encoder (\\emph{resp.},decoder) block contains $3$ convolutional layers with leaky ReLU and batch normalization followed by one downsampling (\\emph{resp.}, upsampling) layer, while intermediate block contains $3$ convolutional layers with leaky ReLU and batch normalization. The architecture of our generator is summarized in Table~\\ref{tab:generator}. \n\n\n\\setlength{\\tabcolsep}{8pt}\n\\begin{table}[t]\n \\caption{The network architecture of our fusion generator. BN denotes batch normalization.} \n \\centering\n \n \\begin{tabular}{ccccc}\n \\hline\n Layer & Resample & Norm & Output Shape \\cr\n \n \\hline\n Image $\\bm{x}$ & - & - & 128*128*3 \\cr\n \\hline\n Conv $1 \\times 1$ & - & - & 128*128*32 \\cr\n \\hline\n Residual Block & AvgPool & BN & 64*64*64 \\cr\n \\hline\n Residual Block & AvgPool & BN & 32*32*64 \\cr\n \\hline\n Residual Block & AvgPool & BN & 16*16*96 \\cr\n \\hline\n Residual Block & AvgPool & BN & 8*8*96 \\cr\n \\hline\n Residual Block & AvgPool & BN & 4*4*128 \\cr\n \\hline\n Residual Block & - & BN & 4*4*128 \\cr\n \\hline\n Residual Block & Upsample & BN & 8*8*96 \\cr\n \\hline\n Residual Block & Upsample & BN & 16*16*96 \\cr\n \\hline\n Residual Block & Upsample & BN & 32*32*64 \\cr\n \\hline\n Residual Block & Upsample & BN & 64*64*64 \\cr\n \\hline\n Residual Block & Upsample & BN & 128*128*64 \\cr\n \\hline\n Conv $1 \\times 1$ & - & - & 128*128*3 \\cr\n \\hline \n \\end{tabular}\n \\label{tab:generator}\n\\end{table}\n\n\\noindent\\textbf{Discriminator} Our discriminator is analogous to that in~\\cite{liu2019few}, which consists of one convolutional layer followed by five groups of residual blocks. Each group of residual blocks is as follows: ResBlk-$k$ $\\rightarrow$ ResBlk-$k$ $\\rightarrow$ AvePool$2$x$2$, where ResBlk-$k$ is a ReLU first residual block~\\cite{mescheder2018training} with the number of channels $k$ set as $64$, $128$, $256$, $512$, $1024$ in five residual blocks. We use one fully connected (fc) layer with $1$ output following global average pooling layer to obtain the discriminator score. The architecture of our discriminator is summarized in Table ~\\ref{tab:discriminator}. 
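\n\nFor concreteness, a minimal PyTorch-style sketch of this backbone is given below. It is an illustrative re-implementation of our own rather than the released code: module and variable names are ours, and the residual block only schematically follows the ReLU-first design of~\\cite{mescheder2018training}.\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass PreActResBlock(nn.Module):\n    # ReLU-first residual block: activations precede the convolutions\n    def __init__(self, c_in, c_out):\n        super().__init__()\n        self.conv1 = nn.Conv2d(c_in, c_out, 3, padding=1)\n        self.conv2 = nn.Conv2d(c_out, c_out, 3, padding=1)\n        self.skip = nn.Conv2d(c_in, c_out, 1) if c_in != c_out else nn.Identity()\n\n    def forward(self, x):\n        h = self.conv1(torch.relu(x))\n        h = self.conv2(torch.relu(h))\n        return h + self.skip(x)\n\nclass DiscriminatorBackbone(nn.Module):\n    # 1x1 input conv, five groups of (ResBlk-k -> ResBlk-k -> AvgPool 2x2),\n    # then global average pooling and a single-output fc layer\n    def __init__(self, channels=(64, 128, 256, 512, 1024)):\n        super().__init__()\n        layers, c_in = [nn.Conv2d(3, 32, 1)], 32\n        for c in channels:\n            layers += [PreActResBlock(c_in, c), PreActResBlock(c, c), nn.AvgPool2d(2)]\n            c_in = c\n        self.features = nn.Sequential(*layers)\n        self.fc = nn.Linear(c_in, 1)\n\n    def forward(self, x):          # x: (B, 3, 128, 128)\n        h = self.features(x)       # (B, 1024, 4, 4)\n        h = h.mean(dim=(2, 3))     # global average pooling\n        return self.fc(h)          # discriminator score\n\\end{verbatim}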
\n\nThe classifier shares the feature extractor with the discriminator and only replaces the last fc layer with another fc layer with $C^s$ outputs with $C^s$ being the number of seen categories. The mode seeking loss and the interpolation regression loss also use the feature extractor from the discriminator. Specifically, we remove the last fc layer from discriminator to extract the features of generated images, based on which the mode seeking loss and the interpolation regression loss are calculated. \n\n\n\n\n\n\n\\setlength{\\tabcolsep}{8pt}\n\\begin{table}[t]\n \\caption{The network architecture of our fusion discriminator.} \n \\centering\n \n \\begin{tabular}{ccccc}\n \\hline\n Layer & Resample & Norm & Output Shape \\cr\n \n \\hline\n Image $\\bm{x}$ & - & - & 128*128*3 \\cr\n \\hline\n Conv $1 \\times 1$ & - & - & 128*128*32 \\cr\n \\hline\n Residual Blocks & AvgPool & - & 64*64*64 \\cr\n \\hline\n Residual Blocks & AvgPool & - & 32*32*128 \\cr\n \\hline\n Residual Blocks & AvgPool & - & 16*16*256 \\cr\n \\hline\n Residual Blocks & AvgPool & - & 8*8*512 \\cr\n \\hline\n Residual Blocks & AvgPool & - & 4*4*1024 \\cr\n \\hline\n Global & GlobalAvgPool & - & 1*1*1024 \\cr\n \\hline\n FC & - & - & 1*1*1 \\cr\n \\hline\n \\end{tabular}\n \\label{tab:discriminator}\n\\end{table}\n\\section{Conclusion}\nIn this paper, we have proposed a novel few-shot generation method F2GAN to fuse high-level features of conditional images and fill in the detailed information borrowed from conditional images. Technically, we have developed a non-local attentional fusion module and an interpolation regression loss. We have conducted extensive generation and classification experiments on five datasets to demonstrated the effectiveness of our method.\n\n\\section{Related Work}\n\\label{sec:related}\n\n\n\n\\noindent\\textbf{Data Augmentation:}\nData augmentation~\\cite{Krizhevsky2012ImageNet} targets at augmenting the training set with new samples. Traditional data augmentation techniques (\\emph{e.g.}, crop, rotation, color jittering) can only produce limited diversity. Some more advanced augmentation techniques~\\cite{zhang2017mixup,yun2019cutmix} are proposed, but they fail to produce realistic images. \nIn contrast, deep generative models can exploit the distribution of training data to generate more diverse and realistic samples for feature augmentation~\\cite{schwartz2018delta,mm1} and image augmentation~\\cite{antoniou2017data}. Our method belongs to image augmentation and can produce more images to augment the training set.\n\n\\noindent\\textbf{Generative Adversarial Network:}\nGenerative Adversarial Network (GAN)~\\cite{goodfellow2014generative,xu2019learning} is a powerful generative model based on adversarial learning. In the early stage, unconditional GANs~\\cite{miyato2018spectral} generated images with random vectors by learning the distribution of training images. Then, GANs conditioned on a single image~\\cite{miyato2018cgans,antoniou2017data} were proposed to transform the conditional image to a target image. Recently, a few conditional GANs attempted to accomplish more challenging tasks conditioned on more than one image, such as few-shot image translation~\\cite{liu2019few} and few-shot image generation~\\cite{clouatre2019figr,bartunov2018few}.\nIn this paper, we focus on few-shot image generation, which will be detailed next.\n\n\\noindent\\textbf{Few-shot Image Generation}\nFew-shot generation is a challenging problem which can generate new images with a few conditional images. 
Early few-shot image generation works are limited to certain application scenario. For example, Bayesian learning and reasoning were applied in ~\\cite{lake2011one,rezende2016one} to learn simple concepts like pen stroke and combine the concepts hierarchically to generate new images. \nMore recently, FIGR~\\cite{clouatre2019figr} was proposed to combine adversarial learning with optimization-based few-shot learning method Reptile~\\cite{nichol2018first} to generate new images. Similar to FIGR~\\cite{clouatre2019figr}, DAWSON~\\cite{liang2020dawson} applied meta-learning MAML algorithms~\\cite{finn2017model} to GAN-based generative models to achieve domain adaptation between seen categories and unseen categories. Metric-based few-shot learning method Matching Network~\\cite{vinyals2016matching} was combined with Variational Auto-Encoder~\\cite{Pu2016Variational} in GMN~\\cite{bartunov2018few} to generate new images without finetuning in the test phase. MatchingGAN~\\cite{hong2020matchinggan} attempted to use learned metric to generate images based on a single or a few conditional images. In this work, we propose a new solution for few-shot image generation, which can generate more diverse and realistic images.\n\n\\noindent\\textbf{Attention Mechanism:}\nAttention module aims to localize the regions of interest. Abundant attention mechanisms like spatial attention~\\cite{xu2016ask}, channel attention~\\cite{chen2017sca}, and full attention~\\cite{wang2018mancs} have been developed. Here, we discuss two works most related to our method. The method in~\\cite{lathuiliere2019attention} employs local attention mechanism to select relevant information from multi-source human images for human image generation, but it fails to capture long-range relevance. Inspired by non-local attention~\\cite{zhang2019self,wang2018non}, we develop a novel non-local attentional fusion (NAF) module for few-shot image generation.\n\n\n\\section{Introduction}\nDeep generative models\n, mainly including Variational Auto-Encoder (VAE) based methods~\\cite{vae} and Generative Adversarial Network (GAN) based methods~\\cite{goodfellow2014generative}, draw extensive attention from the artificial intelligence community. Despite the advances achieved in current GAN-based methods~\\cite{cyclegan,stargan1, stargan2,stylegan1, stylegan2,mm2,DoveNet2020,GAIN2019}, the remaining bottlenecks in deep generative models are the necessity of amounts of training data and the difficulties with fast adaptation to a new category~\\cite{clouatre2019figr,bartunov2018few,liang2020dawson}, especially for those newly emerging categories or long-tail categories. Therefore, it is necessary to consider how to generate images for a new category with only a few images. This task is referred to as few-shot image generation~\\cite{clouatre2019figr,hong2020matchinggan}, which can benefit a wide range of downstream category-aware tasks like few-shot classification~\\cite{vinyals2016matching,sung2018learning}. \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.32]{.\/figures\/attention_motivation.png}\n\\end{center}\n\\caption{Illustration of fusing three conditional images $\\textbf{x}_1, \\textbf{x}_2, \\textbf{x}_3$ with interpolation coefficients $[ \\textbf{0.2}, \\textbf{0.3}, \\textbf{0.5}]$ in our proposed F2GAN. 
The high-level features of conditional images are fused with interpolation coefficients and the details (\\emph{e.g.}, color dots representing query locations) of the generated image are filled by using relevant low-level features (\\emph{e.g.}, color boxes corresponding to query locations) from conditional images. Best viewed in color.}\n\\label{fig:attention_explain} \n\\end{figure}\n\n\nIn the few-shot image generation task, the model is trained on seen categories with sufficient labeled training images. Then, given only a few training images from a new unseen category, the learnt model is expected to produce more diverse and realistic images for this unseen category. In some previous few-shot image generation methods~\\cite{vinyals2016matching,hong2020matchinggan}, the model is trained on seen categories in an episode-based manner~\\cite{vinyals2016matching}, in which a small number (\\emph{e.g.}, 1, 3, 5) of images from one seen category are provided in each training episode~\\cite{vinyals2016matching} to generate new images. The input images used in each training episode are called conditional images. After training, the learnt model can generate new images by using a few conditional images from each unseen category. \n\n\n\n\nTo the best of our knowledge, there are quite few works on few-shot image generation. Among them, DAGAN~\\cite{antoniou2017data} is a special case, \\emph{i.e.}, one-shot image generation, which injects random noise into the generator to produce a slightly different image from the same category. However, this method is conditioned on only one image and fails to fuse the information of multiple images from the same category. More recent few-shot image generation methods can be divided into optimization-based methods and metric-based methods. Particularly, optimization-based FIGR~\\cite{clouatre2019figr} (\\emph{resp.}, DAWSON~\\cite{liang2020dawson}) adopted a similar idea to Reptile~\\cite{nichol2018first} (\\emph{resp.}, MAML~\\cite{finn2017model}), by initializing a generator with images from seen categories and fine-tuning the trained model with images from each unseen category. \nMetric-based method GMN~\\cite{bartunov2018few} (\\emph{resp.}, MatchingGAN~\\cite{hong2020matchinggan}) is inspired by matching network~\\cite{vinyals2016matching} and combines matching procedure with VAE (\\emph{resp.}, GAN). However, FIGR, DAWSON, and GMN can hardly produce sharp and realistic images. MatchingGAN performs better, but has difficulty in fusing complex natural images.\n\n\n\n\n\nIn this paper, we follow the idea in \\cite{hong2020matchinggan} by fusing conditional images, and propose a novel fusing-and-filling GAN (F2GAN) to enhance the fusion ability. The high-level idea is fusing the high-level features of conditional images and filling in the details of generated image with relevant low-level features of conditional images, which is depicted in Figure~\\ref{fig:attention_explain}. In detail, our method contains a fusion generator and a fusion discriminator as shown in Figure~\\ref{fig:framework}. Our generator is built upon U-Net structure with skip connections~\\cite{ronneberger2015u} between the encoder and the decoder. A well-known fact is that in a CNN encoder, shallow blocks encode low-level information at high spatial resolution while deep blocks encode high-level information at low spatial resolution. 
We interpolate the high-level bottleneck features (the feature vector between encoder and decoder) of multiple conditional images with random interpolation coefficients. Then, the fused high-level feature is upsampled through the decoder to produce a new image. In each upsampling stage, we borrow missing details from the skip-connected shallow encoder block by using our Non-local Attentional Fusion (NAF) module. Precisely, NAF module searches the outputs from shallow encoder blocks of conditional images in a global range, to attend the information of interest for each location in the generated image.\n\nIn the fusion discriminator, we employ typical adversarial loss and classification loss to enforce the generated images to be close to real images and from the same category of conditional images. To ensure the diversity of generated images, we additionally employ a mode seeking loss and an interpolation regression loss, both of which are related to interpolation coefficients. Specifically, we use a variant of mode seeking loss~\\cite{mao2019mode} to prevent the images generated based on different interpolation coefficients from collapsing to a few modes. Moreover, we propose a novel interpolation regression loss by regressing the interpolation coefficients based on the features of conditional images and generated image, which means that each generated image can recognize its corresponding interpolation coefficients. In the training phase, we train our F2GAN based on the images from seen categories. In the testing phase, conditioned on a few images from each unseen category, we can randomly sample interpolation coefficients to generate diverse images for this unseen category.\n\nOur contributions can be summarized as follows: 1) we design a new few-shot image generation method F2GAN, by fusing high-level features and filling in low-level details; 2) Technically, we propose a novel non-local attentional fusion module in the generator and a novel interpolation regression loss in the discriminator; 3) Comprehensive experiments on five real datasets demonstrate the effectiveness of our proposed method.\n\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[scale=0.25]{.\/figures\/framework.png}\n\\end{center}\n\\caption{The framework of our method which consists of a fusion generator and a fusion discriminator. $\\tilde{ \\textbf{x}}$ is generated based on the random interpolation coefficients $ \\textbf{a}$ and $ \\textbf{K}$ conditional images $\\{ \\textbf{x}_k|_{k=1}^K\\}$. Due to space limitation, we only draw three encoder blocks and two decoder blocks. Best viewed in color.}\n\\label{fig:framework} \n\\end{figure*}\n\n\\section{Our Method}\nGiven a few conditional images $\\mathcal{X}_S=\\{\\bm{x}_k|_{k=1}^K\\}$ from the same category ($K$ is the number of conditional images) and random interpolation coefficients $\\bm{a}=[a^1,\\ldots,a^K]$, our model targets at generating a new image from the same category. We fuse the high-level bottleneck features of conditional images $\\{\\bm{x}_k|_{k=1}^K\\}$ with interpolation coefficients $\\bm{a}$, and fill in the low-level details specified by Non-local Attentional Fusion (NAF) module during upsampling to generate a new image $\\tilde{\\bm{x}}$. \n\nWe split all categories into seen categories $\\mathcal{C}^{s}$ and unseen categories $\\mathcal{C}^{u}$, where $\\mathcal{C}^{s} \\cap \\mathcal{C}^{u}=\\emptyset$. 
In the training phase, our model is trained with images from seen categories $\\mathcal{C}^{s}$ to learn a mapping, which translates a few conditional images $\\mathcal{X}_S$ of a seen category to a new image belonging to the same category. In the testing phase, a few conditional images from an unseen category in $\\mathcal{C}^{u}$ together with random interpolation coefficients $\\bm{a}$ are fed into the trained model to generate new diverse images for this unseen category. As illustrated in Figure~\\ref{fig:framework}, our model consists of a fusion generator and a fusion discriminator, which will be detailed next.\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.36]{.\/figures\/attention.png}\n\\end{center}\n\\caption{The architecture of our Non-local Attentional Fusion (NAF) module. $\\bm{\\xi}_r^k$ is the feature of $\\textbf{x}_k$ from the $r$-th encoder block, $\\bm{\\phi}_r$ is the output from the $r$-th decoder block, and $\\bm{\\eta}_r$ is the output of NAF.}\n\n\\label{fig:attention} \n\\end{figure}\n\n\\subsection{Fusion Generator}\nOur fusion generator $G$ adopts an encoder-decoder structure~\\cite{antoniou2017data} which is a combination of U-Net~\\cite{ronneberger2015u} and ResNet~\\cite{he2016deep}. Specifically, there are in total $11$ residual blocks ($5$ encoder blocks, $5$ decoder blocks, and $1$ intermediate block), in which each encoder (\\emph{resp.}, decoder) block contains $3$ convolutional layers with leaky ReLU and batch normalization followed by one downsampling (\\emph{resp.}, upsampling) layer, while the intermediate block contains $3$ convolutional layers with leaky ReLU and batch normalization. The detailed architecture can be found in Supplementary. The encoder (\\emph{resp.}, decoder) blocks progressively decrease (\\emph{resp.}, increase) the spatial resolution. For ease of description, the encoder (\\emph{resp.}, decoder) blocks from shallow to deep are indexed from $4$ (\\emph{resp.}, $1$) to $0$ (\\emph{resp.}, $5$). We use $\\bm{{\\psi}}^k$ to denote the bottleneck feature of $\\bm{x}_k$ from the intermediate block. Besides, we add $3$ skip connections between the encoder and the decoder. For $r=1,2,3$, the $r$-th skip connection directs the output from the $r$-th encoder block to the output from the $r$-th decoder block.\nThen, we use $\\bm{{\\xi}}_r^k \\in \\mathcal{R}^{W_r \\times H_r \\times C_r}$ to denote the output feature of conditional image $\\bm{x}_k$ from the $r$-th encoder block, and $\\bm{{\\phi}}_r \\in \\mathcal{R}^{W_r \\times H_r \\times C'_r}$ to denote the output feature from the $r$-th decoder block, where $C_r$ and $C'_r$ are the number of channels in the $r$-th encoder and decoder respectively. \n\nTo fuse the bottleneck features of conditional images $\\mathcal{X}_S$, we randomly sample interpolation coefficients $\\bm{a}=[a^1,\\ldots,a^K]$, which satisfy $a^k\\geq 0$ and $\\sum_{k=1}^K a^k=1$, leading to the fused bottleneck feature $\\bm{{\\eta}}_0 =\\sum_{k=1}^{K} a^{k} \\bm{{\\psi}}^k$.\nSince the spatial size of bottleneck feature is very small (\\emph{e.g.}, $4\\times 4$), the spatial misalignment issue can be ignored and high-level semantic information of conditional images is fused. \nThen, the fused bottleneck feature is upsampled through decoder blocks. During upsampling in each decoder block, lots of details are missing and need to be filled in.\nWe borrow the low-level detailed information from the output features of its skip-connected encoder block. 
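\nA minimal sketch of this fusion step, in PyTorch-style pseudocode of our own (the uniform-then-normalize sampling of $\\bm{a}$ is just one simple choice, and tensor names are illustrative), is:\n\\begin{verbatim}\nimport torch\n\ndef sample_coefficients(K):\n    # draw a = [a^1, ..., a^K] with a^k >= 0 and sum_k a^k = 1\n    a = torch.rand(K)\n    return a \/ a.sum()\n\ndef fuse_bottleneck(psi, a):\n    # psi: (K, C, 4, 4) bottleneck features of the K conditional images\n    # a:   (K,)        interpolation coefficients\n    return (a.view(-1, 1, 1, 1) * psi).sum(dim=0)   # eta_0 = sum_k a^k psi^k\n\\end{verbatim}\n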
Furthermore, we insert a Non-local Attentional Fusion (NAF) module into the skip connection to attend relevant detailed information, as shown in Figure~\\ref{fig:framework}. For the $r$-th skip connection, NAF module takes $a^k\\bm{\\xi}_r^k$ and $\\bm{\\phi}_r$ as input and outputs $\\bm{{\\eta}}_r = \\textnormal{NAF}(\\{a^k\\bm{\\xi}_r^k|_{k=1}^K\\}, \\bm{\\phi}_r)$.\nThen, $\\bm{{\\eta}}_r$ concatenated with $\\bm{\\phi}_r$, that is, $\\hat{\\bm{{\\phi}}}_{r} = [\\bm{{\\eta}}_r, \\bm{{\\phi}}_r]$, is taken as the input to the $(r+1)$-th decoder block.\n\nOur attention-enhanced fusion strategy is a little similar to~\\cite{lathuiliere2019attention}. However, for each spatial location on $\\bm{\\phi}_r$, the attention module in~\\cite{lathuiliere2019attention} only attends exactly the same location on $\\bm{\\xi}_r^k$, which will hinder attending relevant information if the conditional images are not strictly aligned. For example, for category ``dog face\", the dog eyes may appear at different locations in different conditional images which have different face poses. Inspired by non-local attention~\\cite{zhang2019self,wang2018non}, for each spatial location on $\\bm{\\phi}_r$, we search relevant information in a global range on $\\bm{\\xi}_r^k$. Specifically, our proposed Non-local Attentional Fusion (NAF) module calculates an attention map $\\bm{A}_r^k$ based on $\\bm{{\\phi}}_r$ and $a^k\\bm{{\\xi}}_r^k$, in which each entry ${A}_r^k(i,j)$ represents the attention score between the $i$-th location on $\\bm{{\\phi}}_r$ and the $j$-th location on $a^k\\bm{{\\xi}}_r^k$. Therefore, the design philosophy and technical details of our NAF module are considerably different from those in~\\cite{lathuiliere2019attention}. \n\nThe architecture of NAF module is shown in Figure~\\ref{fig:attention}. First, $a^k\\bm{{\\xi}}_r^k$ and $\\bm{\\phi}_r$ are projected to a common space by $f(\\cdot)$ and $g(\\cdot)$ respectively, where $f(\\cdot)$ and $g(\\cdot)$ are $1 \\times 1 \\times \\frac{C_r}{8}$ convolutional layer with spectral normalization~\\cite{miyato2018spectral}. For ease of calculation, we reshape $f(a^k\\bm{{\\xi}}_r^k) \\in \\mathcal{R}^{W_r \\times H_r \\times \\frac{C_r}{8}} $ (\\emph{resp.}, $g(\\bm{{\\phi}}_r) \\in \\mathcal{R}^{W_r \\times H_r \\times \\frac{C_r}{8}}$) into $\\bar{f}(a^k\\bm{{\\xi}}_r^k) \\in \\mathcal{R}^{N_r \\times \\frac{C_r}{8}}$ (\\emph{resp.}, $\\bar{g}(\\bm{{\\phi}}_r) \\in \\mathcal{R}^{N_r \\times \\frac{C_r}{8}}$), in which $N_r=W_r \\times H_r$. Then, we can calculate the attention map between $\\bm{{\\phi}}_r$ and $a^k\\bm{{\\xi}}_r^k$:\n\\begin{equation}\\label{eqn:attention_map}\n\\begin{aligned}\n\\bm{A}_r^k = softmax\\left(\\bar{g}(\\bm{{\\phi}}_r)\\bar{f}(a^k\\bm{{\\xi}}_r^k)^{T} \\right).\n\\end{aligned}\n\\end{equation}\nWith obtained attention map $\\bm{A}_r^k$, we attend information from $a^k\\bm{{\\xi}}_r^k$ and achieve the attended feature map $\\bm{{\\eta}}_r$:\n\\begin{equation}\n\\begin{aligned}\n\\bm{{\\eta}}_r = \\sum_{k=1}^{K} v\\left(\\bm{A}_r^k \\bar{h}(a^k\\bm{{\\xi}}_r^k)\\right),\n\\end{aligned}\n\\end{equation}\nwhere $\\bar{h}(\\cdot)$ means $1 \\times 1 \\times \\frac{C_r}{8} $ convolutional layer followed by reshaping to $\\mathcal{R}^{N_r \\times \\frac{C_r}{8}}$, similar to $\\bar{f}(\\cdot)$ and $\\bar{g}(\\cdot)$ in (\\ref{eqn:attention_map}). 
$v(\\cdot)$ reshapes the feature map back to $\\mathcal{R}^{W_r \\times H_r \\times \\frac{C_r}{8}}$ and then performs $1 \\times 1 \\times \\frac{C_r}{8} $ convolution.\n \n\nAs the shallow (\\emph{resp.}, deep) encoder block contains the low-level (\\emph{resp.}, high-level) information, our generated images can fuse multi-level information of conditional images coherently. Finally, the generated image can be represented by $\\tilde{\\bm{x}} = G(\\bm{a}, \\mathcal{X}_S)$.\n\nFollowing \\cite{hong2020matchinggan}, we adopt a weighted reconstruction loss to constrain the generated image:\n\\begin{equation} \\label{eqn:loss_reconstruction}\n\\begin{aligned}\n\\mathcal{L}_1 = \\sum_{k=1}^{K} a^k || \\bm{x}_k - \\tilde{\\bm{x}}||_1.\n\\end{aligned}\n\\end{equation}\nIntuitively, the generated image should bear more resemblance to the conditional image with larger interpolation coefficient.\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[scale=0.3]{.\/figures\/combo.jpg}\n\\end{center}\n\\caption{Images generated by DAGAN, MatchingGAN, and our F2GAN ($ \\textbf{K=3}$) on five datasets (from top to bottom: Omniglot, EMNIST, VGGFace, Flowers, and Animals Faces). The conditional images are in the left three columns.}\n\\label{fig:visualization} \n\\end{figure*}\n\n\\subsection{Fusion Discriminator}\nThe network structure of our discriminator is analogous to that in~\\cite{liu2019few}, which consists of one convolutional layer followed by five residual blocks~\\cite{mescheder2018training}. The detailed architecture can be found in Supplementary. Differently, we use one fully connected (fc) layer with $1$ output following average pooling layer to obtain the discriminator score. We treat $K$ conditional images $\\{\\bm{x}_k|_{k=1}^K\\}$ as real images and the generated image $\\tilde{\\bm{x}}$ as fake image. In detail, the average score $\\mathrm{D}(\\bm{x})$ for $K$ conditional images and the score $\\mathrm{D}(\\tilde{\\bm{x}})$ for generated image $\\tilde{\\bm{x}}$ are calculated for adversarial learning. To stabilize training process, we use hinge adversarial loss in~\\cite{miyato2018cgans}. To be exact, the goal of discriminator $\\mathrm{D}$ is minimizing $\\mathcal{L}_D$ while the goal of generator is minimizing $\\mathcal{L}_{GD}$:\n\\begin{eqnarray}\n\\!\\!\\!\\!\\!\\!\\!\\!&&\\mathcal{L}_D = \\mathbb{E}_{\\tilde{\\bm{x}}} [\\max (0,1+\\mathrm{D}(\\tilde{\\bm{x}})] + \\mathbb{E}_{\\bm{x}_k} [\\max (0,1-\\mathrm{D}({\\bm{x}}))], \\nonumber\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!&&\\mathcal{L}_{GD} = - \\mathbb{E}_{\\tilde{\\bm{x}}} [\\mathrm{D}(\\tilde{\\bm{x}})]\n\\end{eqnarray}\n\nAnalogous to ACGAN~\\cite{odena2017conditional}, we apply a classifier with cross-entropy loss to classify the real images and the generated images into the corresponding seen categories.\nSpecifically, the last fc layer of the discriminator $D$ is replaced by another fc layer with the output dimension being the number of seen categories:\n\\begin{equation}\n\\begin{aligned}\\label{eqn:loss_classification}\n\\mathcal{L}_{c} = -\\log p(c(\\bm{x})|\\bm{x}),\n\\end{aligned}\n\\end{equation}\nwhere $c(\\bm{x})$ is the ground-truth category of $\\bm{x}$. We minimize $\\mathcal{L}^{D}_{c} = -\\sum_{k=1}^K\\log p(c(\\bm{x}_k)|\\bm{x}_k)$ for $K$ conditional images $\\{\\bm{x}_k|_{k=1}^K\\}$ when training the discriminator. 
We update the generator by minimizing $\\mathcal{L}^{G}_{c}=-\\log p(c(\\tilde{\\bm{x}})|\\tilde{\\bm{x}})$, since we expect the generated image $\\tilde{\\bm{x}}$ to be classified as the same category of conditional images.\n\n\nBy varying interpolation coefficients $\\bm{a}$, we expect to generate diverse images, but one common problem for GAN is mode collapse~\\cite{mao2019mode}, which means that the generated images may collapse into a few modes.\nIn our fusion generator, when sampling two different interpolation coefficients $\\bm{a}_1$ and $\\bm{a}_2$, the generated images $G(\\bm{a}_1,\\mathcal{X}_S)$ and $G(\\bm{a}_2,\\mathcal{X}_S)$ are likely to collapse into the same mode. To guarantee the diversity of generated images, we use two strategies to mitigate mode collapse, one is a variant of mode seeking loss~\\cite{mao2019mode} to seek for more modes, the other is establishing bijection between the generated image $\\tilde{\\bm{x}}$ and its corresponding interpolation coefficient $\\bm{a}$. The mode seeking loss in~\\cite{mao2019mode} was originally used to produce diverse images when using different latent codes. Here, we slightly twist the mode seeking loss to produce diverse images when using different interpolation coefficients. Specifically,\nwe remove the last fc layer of $D$ and use the remaining feature extractor $\\hat{D}$ to extract the features of generated images with different interpolation coefficients. Then, we maximize the ratio of the distance between $\\hat{D}(G(\\bm{a}_1,\\mathcal{X}_S))$ and $\\hat{D}(G(\\bm{a}_2,\\mathcal{X}_S))$ over the distance between $\\bm{a}_1$ and $\\bm{a}_2$, yielding the following mode seeking loss:\n\\begin{equation} \\label{eqn:loss_mode_seeking}\n\\begin{aligned}\n\\mathcal{L}_{m} = \\frac {|| \\hat{D}(G(\\bm{a}_1,\\mathcal{X}_S)) - \\hat{D}(G(\\bm{a}_2,\\mathcal{X}_S))||_1} {|| \\bm{a}_1 - \\bm{a}_2||_1}.\n\\end{aligned}\n\\end{equation}\n\nTo further ensure the diversity of generated images, the bijection between the generated image $\\tilde{\\bm{x}}$ and its corresponding interpolation coefficient $\\bm{a}$ is established by a novel interpolation regression loss, which regresses the interpolation coefficient $\\bm{a}$ based on the features of conditional images $\\hat{D}(\\bm{x}_k)$ and generated image $\\hat{D}(\\tilde{\\bm{x}})$. Note that the feature extractor $\\hat{D}$ is the same as in (\\ref{eqn:loss_mode_seeking}). 
Specifically, we apply a fully-connected (fc) layer $E$ to the concatenated feature $[\\hat{D}(\\bm{x}_k),\\hat{D}(\\tilde{\\bm{x}})]$, and obtain the similarity score $s_k$ between $\\bm{x}_k$ and $\\tilde{\\bm{x}}$: $s_k = E([\\hat{D}(\\bm{x}_k), \\hat{D}(\\tilde{\\bm{x}})])$.\nThen, we apply softmax layer to $\\bm{s}=[s_1,\\ldots,s_K]$ to obtain the predicted interpolation coefficients $\\tilde{\\bm{a}} = softmax(\\bm{s})$, which are enforced to match the ground-truth $\\bm{a}$:\n\\begin{equation} \\label{eqn:loss_interpolation}\n\\begin{aligned}\n\\mathcal{L}_{a} = ||\\tilde{\\bm{a}} - \\bm{a} ||_2.\n\\end{aligned}\n\\end{equation}\nBy recognizing the interpolation coefficient based on the generated image and conditional images, we actually establish a bijection between the generated image and interpolation coefficient, which discourages two different interpolation coefficients from generating the same image.\n\n\\subsection{Optimization}\nThe overall loss function to be minimized is as follows, \n\\begin{equation}\n\\begin{aligned}\n\\mathcal{L} = \\mathcal{L}_D + \\mathcal{L}_{GD}+ \\lambda_1 \\mathcal{L}_{1} + \\mathcal{L}_{c} - \\lambda_m \\mathcal{L}_{m} + \\lambda_a \\mathcal{L}_{a},\\label{optimization}\n\\end{aligned}\n\\end{equation}\nin which $\\lambda_1$, $\\lambda_m$, and $\\lambda_a$ are trade-off parameters. In the framework of adversarial learning, fusion generator and fusion discriminator are optimized by related loss terms in an alternating manner. In particular, the fusion discriminator is optimized by minimizing $\\mathcal{L}_D$ and $\\mathcal{L}^{D}_c$, while the fusion generator is optimized by minimizing $\\mathcal{L}_{GD}$, $\\mathcal{L}_{1}$, $\\mathcal{L}^{G}_{c}$, $-\\mathcal{L}_{m}$, and $\\mathcal{L}_{a}$, in which $\\mathcal{L}^{D}_c$ and $\\mathcal{L}^{G}_{c}$ are defined below (\\ref{eqn:loss_classification}). 
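\n\nTo make the alternating optimization concrete, the following PyTorch-style pseudocode sketches one training iteration. It is a schematic of our own rather than the released implementation: the interfaces \\texttt{D.score}, \\texttt{D.classify} and \\texttt{D.features} (the feature extractor $\\hat{D}$), the fc layer \\texttt{E} of the interpolation regression head, and the generator call $G(\\bm{a},\\mathcal{X}_S)$ are assumed, and data loading and optimizer settings are omitted.\n\\begin{verbatim}\nimport torch\nfrom torch.nn.functional import cross_entropy\n\nlambda_1, lambda_m, lambda_a = 1.0, 0.01, 1.0     # trade-off parameters\n\ndef sample_a(K):                                  # a^k >= 0 and sum_k a^k = 1\n    a = torch.rand(K)\n    return a \/ a.sum()\n\ndef predict_a(D, E, x_cond, x_fake):\n    # interpolation regression head: s_k = E([D_hat(x_k), D_hat(x_fake)]), softmax over k\n    f_fake = D.features(x_fake)\n    s = [E(torch.cat([D.features(x_cond[k:k + 1]), f_fake], dim=1))\n         for k in range(x_cond.size(0))]\n    return torch.softmax(torch.cat(s, dim=0).view(-1), dim=0)\n\ndef train_step(G, D, E, x_cond, labels, opt_D, opt_G):\n    # x_cond: (K, 3, H, W) conditional images of one seen category; labels: (K,) class indices\n    K = x_cond.size(0)\n    a1, a2 = sample_a(K), sample_a(K)\n    x1, x2 = G(a1, x_cond), G(a2, x_cond)         # generated images, each (1, 3, H, W)\n\n    # discriminator step: hinge loss plus classification of the real conditional images\n    opt_D.zero_grad()\n    loss_D = (torch.relu(1.0 - D.score(x_cond)).mean()\n              + torch.relu(1.0 + D.score(x1.detach())).mean()\n              + cross_entropy(D.classify(x_cond), labels))\n    loss_D.backward()\n    opt_D.step()\n\n    # generator step: adversarial, weighted reconstruction, classification,\n    # mode seeking (to be maximized, hence subtracted) and interpolation regression\n    opt_G.zero_grad()\n    loss_GD = -D.score(x1).mean()\n    loss_rec = sum(a1[k] * (x_cond[k] - x1[0]).abs().mean() for k in range(K))\n    loss_cls = cross_entropy(D.classify(x1), labels[:1])\n    loss_ms = (D.features(x1) - D.features(x2)).abs().mean() \/ (a1 - a2).abs().mean()\n    loss_ir = ((predict_a(D, E, x_cond, x1) - a1) ** 2).sum().sqrt()\n    loss_G = (loss_GD + lambda_1 * loss_rec + loss_cls\n              - lambda_m * loss_ms + lambda_a * loss_ir)\n    loss_G.backward()\n    opt_G.step()\n\\end{verbatim}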
\n\n\n\n\\setlength{\\tabcolsep}{4pt}\n\\begin{table*}[t]\n \\caption{FID ($\\downarrow$), IS ($\\uparrow$) and LPIPS ($\\uparrow$) of images generated by different methods for unseen categories on three datasets.} \n \\centering\n \n \\begin{tabular}{lrrrrrrrrr}\n \n \\toprule[0.8pt]\n \\multirow{2}{*}{Method}&\n \\multicolumn{3}{c}{VGGFace} & \\multicolumn{3}{c}{Flowers} &\\multicolumn{3}{c}{Animals Faces}\\cr\n &FID ($\\downarrow$) & IS ($\\uparrow$) & LPIPS($\\uparrow$) & FID ($\\downarrow$) & IS ($\\uparrow$) & LPIPS ($\\uparrow$) &FID ($\\downarrow$) & IS ($\\uparrow$) & LPIPS ($\\uparrow$) \\cr\n \\cmidrule(r){2-4} \\cmidrule(r){5-7} \\cmidrule(r){8-10}\n FIGR~\\cite{clouatre2019figr} &139.83 &2.98 &0.0834 & 190.12&1.38 &0.0634 &211.54 &1.55 &0.0756\\cr\n \n GMN~\\cite{bartunov2018few}&136.21 &2.14 &0.0902 &200.11 &1.42 &0.0743 &220.45 &1.71 &0.0868 \\cr\n DAWSON~\\cite{liang2020dawson} &137.82 &2.56 & 0.0769 & 188.96& 1.25 &0.0583 & 208.68 &1.51 &0.0642 \\cr\n \n DAGAN~\\cite{antoniou2017data}& 128.34 & 4.12 & 0.0913& 151.21&2.18 &0.0812 &155.29 &3.32 &0.0892\\cr\n MatchingGAN~\\cite{hong2020matchinggan}& 118.62 & 6.16 & 0.1695& 143.35& 4.36&0.1627 & 148.52& 5.08& 0.1514\\cr\n F2GAN &$\\textbf{109.16}$ &$\\textbf{8.85}$ & $\\textbf{0.2125}$ &$\\textbf{120.48}$ &$\\textbf{6.58}$ &$\\textbf{0.2172}$ &$\\textbf{117.74}$ &$\\textbf{7.66}$ &$\\textbf{0.1831}$\\cr\n \n \\bottomrule[0.8pt]\n \n \\end{tabular}\n \\vspace{0.1mm}\n \\label{tab:performance_metric}\n\\end{table*}\n\n\n\n\\section{Experiments}\n\\label{sec:experiments}\n\n\\setlength{\\tabcolsep}{4pt}\n\\begin{table*}[t]\n \\caption{Accuracy(\\%) of different methods on three datasets in low-data setting.} \n \\centering\n \n \\begin{tabular}{lrrrrrrrrr}\n \n \\toprule[0.8pt]\n \\multirow{2}{*}{Method}&\n \\multicolumn{3}{c}{Omniglot } & \\multicolumn{3}{c}{EMNIST} &\\multicolumn{3}{c}{VGGFace}\\cr\n &5-sample & 10-sample & 15-sample & 5-sample & 10-sample & 15-sample &5-sample & 10-sample &15-sample \\cr\n \\cmidrule(r){2-4} \\cmidrule(r){5-7} \\cmidrule(r){8-10}\n Standard &66.22 & 81.87 &83.31 & 83.64 & 88.64 & 91.14 & 8.82 & 20.29 & 39.12\\cr\n Traditional &67.32 &82.28 & 83.95 & 84.62 & 89.63 & 92.07 & 9.12 &22.83 & 41.63 \\cr\n \n FIGR~\\cite{clouatre2019figr} & 69.23 & 83.12 & 84.89 & 85.91 & 90.08 & 92.18 & 6.12 & 18.84& 32.13 \\cr\n GMN~\\cite{bartunov2018few} & 67.74 & 84.19 & 85.12 & 84.12 & 91.21 & 92.09 & 5.23 & 15.61 &35.48\\cr\n DAWSON~\\cite{liang2020dawson} &68.56 &82.02 & 84.01 & 83.63 & 90.72 & 91.83 & 5.27 &16.92 &30.61 \\cr\n DAGAN~\\cite{antoniou2017data} & 88.81 &89.32 & 95.38 &87.45 & 94.18& 95.58 &19.23 &35.12 & 44.36\\cr\n MatchingGAN~\\cite{hong2020matchinggan} &89.03 &90.92 & 96.29 & 91.75 & 95.91 &96.29 &21.12 &40.95 & 50.12\\cr\n F2GAN &$\\textbf{91.93}$ &$\\textbf{92.48}$ & $\\textbf{97.12}$& $\\textbf{93.18}$& $\\textbf{97.01}$ &$\\textbf{97.82}$ & $\\textbf{24.76}$&$\\textbf{43.21}$ & $\\textbf{53.42}$\\cr\n \n \n \\bottomrule[0.8pt]\n \n \\end{tabular}\n \\vspace{0.1mm}\n \\label{tab:performance_vallia_classifier}\n\\end{table*}\n\n\n\\setlength{\\tabcolsep}{2pt}\n\\begin{table*}[t]\n \\caption{Accuracy(\\%) of different methods on three datasets in few-shot classification setting.} \n \\centering\n \n \\begin{tabular}{lrrrrrr}\n \n \\toprule[0.8pt]\n \\multirow{2}{*}{Method}&\\multicolumn{2}{c}{VGGFace}&\\multicolumn{2}{c}{Flowers}&\\multicolumn{2}{c}{Animals Faces}\n \\cr & 5-way 5-shot &10-way 5-shot & 5-way 5-shot &10-way 5-shot & 5-way 5-shot &10-way 5-shot\\cr\n \\cmidrule(r){2-3} 
\\cmidrule(r){4-5} \\cmidrule(r){6-7}\n MatchingNets~\\cite{vinyals2016matching} & 60.01 &48.67 & 67.98&56.12 &59.12 &50.12 \\cr\n\n MAML~\\cite{finn2017model} & 61.09&47.89 & 68.12&58.01 & 60.03 &49.89 \\cr\n\n RelationNets~\\cite{sung2018learning}& 62.89 & 54.12 &69.83&61.03 &67.51 & 58.12 \\cr\n\n MTL~\\cite{sun2019meta}&77.82 &68.95 &82.35 &74.24 &79.85 &70.91 \\cr\n\n DN4~\\cite{li2019revisiting}&78.13 &70.02 &83.62 &73.96 &81.13 &71.34 \\cr\n \n MatchingNet-LFT~\\cite{Hungfewshot} &77.64 &69.92 & 83.19 &74.32 &80.95 &71.62 \\cr\n \n \n MatchingGAN~\\cite{hong2020matchinggan} & 78.72 & 70.94 &82.76 & 74.09 & 80.36 & 70.89\\cr\n\n F2GAN&$\\textbf{79.85}$ &$\\textbf{72.31}$ &$\\textbf{84.92}$ &$\\textbf{75.02}$ &$\\textbf{82.69}$ &$\\textbf{73.19}$ \\cr\n \n \\bottomrule[0.8pt]\n \\end{tabular}\n \\vspace{0.1mm}\n \\label{tab:performance_fewshot_classifier}\n\\end{table*}\n\n\n\n\n\\subsection{Datasets and Implementation Details}\nWe conduct experiments on five real datasets including Omniglot \\cite{Brenden2015One}, EMNIST~\\cite{cohen2017emnist}, VGGFace~\\cite{cao2018vggface2}, Flowers~\\cite{nilsback2008automated}, and Animal Faces~\\cite{deng2009imagenet}. For VGGFace (\\emph{resp.}, Omniglot, EMNIST), following MatchingGAN \\cite{hong2020matchinggan}, we randomly select $1802$ (\\emph{resp.}, $1200$, $28$) categories from total $2395$ (\\emph{resp.}, $1623$, $48$) categories as training seen categories and select $497$ (\\emph{resp.}, $212$, $10$) categories from remaining categories as unseen testing categories. For Animal face and flower datasets, we use the seen\/unseen split provided in~\\cite{liu2019few}. In Animal Faces, $117574$ animal faces from $149$ carnivorous animal categories are selected from ImageNet~\\cite{deng2009imagenet}. All animal categories are split into $119$ seen categories for training and $30$ unseen categories for testing. For Flowers dataset with $8189$ images distributed in $102$ categories, there are $85$ training seen categories and $17$ testing unseen categories.\n\nWe set $\\lambda_1=1$, $\\lambda_m = 0.01$, and $\\lambda_a = 1$ in (\\ref{optimization}). We set the number of conditional images $K=3$ by balancing the benefit against the cost, because larger $K$ only brings slight improvement (see Supplementary). We use Adam optimizer with learning rate 0.0001 and train our model for $200$ epochs.\n\n\n\\subsection{Quantitative Evaluation of Generated Images} \\label{sec:visualization}\nWe evaluate the quality of images generated by different methods on three datasets based on commonly used Inception Scores (IS)~\\cite{xu2018empirical}, Fr\u00e9chet Inception Distance (FID)~\\cite{heusel2017gans}, and Learned Perceptual Image Patch Similarity (LPIPS)~\\cite{zhang2018unreasonable}. The IS is positively correlated with visual quality of generated images. We fine-tune the ImageNet-pretrained Inception-V3 model~\\cite{szegedy2016rethinking} with unseen categories to calculate the IS for generated images. The FID is designed for measuring similarities between two sets of images. We remove the last average pooling layer of the ImageNet-pretrained Inception-V3 model as the feature extractor. Based on the extracted features, we compute Fr\u00e9chet Inception Distance between the generated images and the real images from the unseen categories. The LPIPS can be used to measure the average feature distance among the generated images. 
For our method, we train our model on the seen categories. Then, we use randomly sampled interpolation coefficients and $K=3$ conditional images from each unseen category to generate a new image for this unseen category. We can generate sufficient images for each unseen category by repeating the above procedure. Similarly, GMN~\cite{bartunov2018few}, FIGR~\cite{clouatre2019figr}, and MatchingGAN~\cite{hong2020matchinggan} are trained in the $1$-way $3$-shot setting on the seen categories, and the trained models are used to generate images for the unseen categories. Unlike the above methods, DAGAN~\cite{antoniou2017data} is conditioned on a single image, so we use one conditional image at a time to generate sufficient images for the unseen categories.

For each unseen category, we use each method to generate $128$ images based on $30$ sampled real images, and calculate FID, IS, and LPIPS on the generated images. The results of the different methods are reported in Table~\ref{tab:performance_metric}, from which we observe that our method achieves the highest IS, the lowest FID, and the highest LPIPS, demonstrating that our model generates more diverse and realistic images than the baseline methods.

We show some example images generated by our method on five datasets, including simple concept datasets and relatively complex natural datasets, in Figure~\ref{fig:visualization}. For comparison, we also show the images generated by DAGAN and MatchingGAN, which are competitive baselines as demonstrated in Table~\ref{tab:performance_metric}. On the concept datasets Omniglot and EMNIST, we can see that the images generated by DAGAN stay close to the inputs with limited diversity, while MatchingGAN and F2GAN can both fuse features from the conditional images to generate diverse images for simple concepts. On the natural datasets VGGFace, Flowers, and Animal Faces, we observe that MatchingGAN can generate plausible images on the VGGFace dataset because the face images are well aligned. However, the images generated by MatchingGAN are of low quality on the Flowers and Animal Faces datasets. In contrast, the images generated by our method are more diverse and realistic than those of DAGAN and MatchingGAN, because the information from multiple conditional images is fused more coherently in our method. In the Supplementary, we also visualize our generated results on the FIGR-8 dataset released and used in FIGR~\cite{clouatre2019figr}, as well as more visualization results on the Flowers and Animal Faces datasets.

\subsection{Visualization of Linear Interpolation}
To evaluate whether the space of generated images is densely populated, we perform linear interpolation based on two conditional images $\bm{x}_1$ and $\bm{x}_2$ for ease of visualization. In detail, for interpolation coefficients $\bm{a}=[a^1, a^2]$, we start from $[0.9, 0.1]$, and then gradually decrease (\emph{resp.}, increase) $a^1$ (\emph{resp.}, $a^2$) to $0.1$ (\emph{resp.}, $0.9$) with step size $0.1$. Because MatchingGAN also fuses conditional images with interpolation coefficients, we report the results of both MatchingGAN and our F2GAN in Figure~\ref{fig:interpolation}. Compared with MatchingGAN, our F2GAN produces more diverse images with smoother transitions between the two conditional images. More results can be found in the Supplementary.
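For reference, the coefficient sweep can be written in a few lines. The sketch below is minimal and illustrative only; the \texttt{generator} callable is a placeholder for the trained fusion generator rather than its actual interface.
\begin{verbatim}
import numpy as np

def interpolation_sweep(generator, x1, x2, steps=9):
    # Walk the coefficient pair from [0.9, 0.1] to [0.1, 0.9]
    # in steps of 0.1 and generate one image per pair.
    outputs = []
    for a1 in np.linspace(0.9, 0.1, steps):
        coeffs = np.array([a1, 1.0 - a1], dtype=np.float32)
        outputs.append(generator([x1, x2], coeffs))
    return outputs
\end{verbatim}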
\subsection{Low-data Classification}
\label{sec:vallia}
To further evaluate the quality of the generated images, we use them to help downstream classification tasks, in the low-data setting in this section and in the few-shot setting in Section~\ref{sec:few-shot}. For low-data classification on unseen categories, following MatchingGAN~\cite{hong2020matchinggan}, we randomly select a few (\emph{e.g.}, $5$, $10$, $15$) training images per unseen category, while the remaining images in each unseen category are test images. Note that the training and testing phases of the classification task are different from the training and testing phases of our F2GAN. We initialize the ResNet$18$~\cite{he2016deep} backbone using the images of the seen categories, and then train the classifier using the training images of the unseen categories. Finally, the trained classifier is used to predict the test images of the unseen categories. This setting is referred to as ``Standard'' in Table~\ref{tab:performance_vallia_classifier}.

Then, we use the generated images to augment the training set of the unseen categories. For each few-shot generation method, we generate $512$ images for each unseen category based on the training set of the unseen categories. We then train the ResNet$18$ classifier on the augmented training set (including both the original training set and the generated images) and apply the trained classifier to the test set of the unseen categories. We also use traditional augmentation techniques (\emph{e.g.}, crop, rotation, color jittering) to augment the training set and report the results as ``Traditional'' in Table~\ref{tab:performance_vallia_classifier}.

The results of the different methods are listed in Table~\ref{tab:performance_vallia_classifier}. On the Omniglot and EMNIST datasets, all methods outperform ``Standard'' and ``Traditional'', which demonstrates the benefit of deep augmentation methods. On the VGGFace dataset, our F2GAN, MatchingGAN~\cite{hong2020matchinggan}, and DAGAN~\cite{antoniou2017data} outperform ``Standard'', while the other methods underperform ``Standard''. One possible explanation is that the images generated by GMN and FIGR on VGGFace are of low quality, which harms the classifier. It can also be seen that our proposed F2GAN achieves a significant improvement over the baseline methods, which corroborates the high quality of our generated images.

\begin{figure}
\begin{center}
\includegraphics[scale=0.3]{./figures/interpolation_comparison.png}
\end{center}
\caption{Linear interpolation results of MatchingGAN (top row) and our F2GAN (bottom row) based on two conditional images $\textbf{x}_1$ and $\textbf{x}_2$ on the Flowers dataset.}
\label{fig:interpolation}
\end{figure}

\subsection{Few-shot Classification}
\label{sec:few-shot}

We follow the $N$-way $C$-shot setting in few-shot classification~\cite{vinyals2016matching,sung2018learning} by creating evaluation episodes and calculating the average accuracy over multiple evaluation episodes. In each evaluation episode, $N$ categories are randomly selected from the unseen categories. Then, $C$ images from each of the $N$ categories are randomly selected as the training set, while the remaining images are used as the test set. We use a ResNet$18$~\cite{he2016deep} pretrained on the seen categories as the feature extractor and train a linear classifier for the selected $N$ unseen categories. Besides the $N\times C$ training images, our fusion generator produces $512$ additional images for each of the $N$ categories to augment the training set.
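To make the evaluation protocol explicit, the following sketch assembles one episode and augments its training set with generated images; the helper name and the \texttt{generate\_fn} hook are our own placeholders and not part of any released code.
\begin{verbatim}
import random

def build_episode(cat_to_images, n_way=5, c_shot=5,
                  n_generated=512, generate_fn=None):
    # cat_to_images: unseen-category id -> list of images;
    # generate_fn(support, k): stand-in for the trained fusion generator.
    categories = random.sample(sorted(cat_to_images), n_way)
    train_set, test_set = [], []
    for label, cat in enumerate(categories):
        images = list(cat_to_images[cat])
        random.shuffle(images)
        support, query = images[:c_shot], images[c_shot:]
        train_set += [(img, label) for img in support]
        test_set += [(img, label) for img in query]
        if generate_fn is not None:
            train_set += [(img, label)
                          for img in generate_fn(support, n_generated)]
    return train_set, test_set
\end{verbatim}
A linear classifier on frozen ResNet$18$ features is then trained on the resulting training set and evaluated on the test set, and the accuracy is averaged over episodes.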
Besides $N\\times C$ training images, our fusion generator produces $512$ additional images for each of $N$ categories to augment the training set. \n\nWe compare our method with existing few-shot classification methods, including representative methods MatchingNets~\\cite{vinyals2016matching}, RelationNets~\\cite{sung2018learning}, MAML~\\cite{finn2017model} as well as state-of-the-art methods MTL~\\cite{sun2019meta}, DN4~\\cite{li2019revisiting}, MatchingNet-LFT~\\cite{Hungfewshot}. Note that no augmented images are added to the training set of $N$ unseen categories for these baseline methods. Instead, we strictly follow their original training procedure, in which the images from seen categories are used to train those few-shot classifiers. Among the baselines, MAML \\cite{finn2017model} and MTL~\\cite{sun2019meta} need to\nfurther fine-tune the trained classifier based on the training set of $N$ unseen categories in each evaluation episode.\n\nWe also compare our method with competitive few-shot generation baseline MatchingGAN~\\cite{hong2020matchinggan}. For MatchingGAN, We use the same setting as our F2GAN and generate augmented images for unseen categories. Besides, we compare our F2GAN with FUNIT~\\cite{liu2019few} in Supplementary.\n\nBy taking $5$-way\/$10$-way $5$-shot as examples, we report the averaged accuracy over $10$ episodes on three datasets in Table~\\ref{tab:performance_fewshot_classifier}.\nOur method achieves the best results in both settings on all datasets, which shows the benefit of using augmented images produced by our fusion generator. \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.47]{.\/figures\/query_attention.png}\n\\end{center}\n\\caption{The visualization of learned attention maps from NAF module. In each row, $\\tilde{ \\textbf{x}}$ is generated based on three conditional images $ \\textbf{x}_1$, $ \\textbf{x}_2$, and $ \\textbf{x}_3$.\nFor each color dot (query location) in $\\tilde{ \\textbf{x}}$, we draw a color arrow with the same color to summarize the bright region (the most-attended region corresponding to the query location) in $ \\textbf{x}_k$. Best viewed in color.}\n\\label{fig:attention_map} \n\\end{figure}\n\n\\subsection{Ablation Studies}\n\n\n\n\\textbf{The number of conditional images} To analyze the impact of the number of conditional images, we train F2GAN with $K_1$ conditional images based on seen categories, and generate new images for unseen categories with $K_2$ conditional images. Due to space limitation, we leave the details to Supplementary.\n\n\n \n \n\n\\noindent\\textbf{Loss terms: }In our method, we employ weighted reconstruction loss $\\mathcal{L}_{1}$ (\\ref{eqn:loss_reconstruction}), mode seeking loss $\\mathcal{L}_{m}$ (\\ref{eqn:loss_mode_seeking}), and interpolation regression loss $\\mathcal{L}_{a}$ (\\ref{eqn:loss_interpolation}). To investigate the impact of $\\mathcal{L}_{1}$, $\\mathcal{L}_{m}$, and $\\mathcal{L}_{a}$, we conduct experiment on Flowers dataset by removing each loss term from the final objective (\\ref{optimization}) separately. The quality of generated images is evaluated from two perspectives. On one hand, IS, FID, and LPIPS of generated images are computed as in Section~\\ref{sec:visualization}. On the other hand, we report the accuracy of few-shot ($10$-way $5$-shot) classification augmented with generated images as in Section~\\ref{sec:few-shot}. 
\noindent\textbf{Attention module: }In our fusion generator, a Non-local Attentional Fusion (NAF) module is designed to borrow low-level information from the encoder. To corroborate the effectiveness of our design, we remove the NAF module and directly connect the fused encoder features to the outputs of the corresponding decoder blocks via skip connections, which is referred to as ``w/o NAF'' in Table~\ref{tab:network_design}. Besides, we replace our NAF module with the local attention used in~\cite{lathuiliere2019attention} to compare the two attention mechanisms, which is referred to as ``local NAF'' in Table~\ref{tab:network_design}. The results show that both ``local NAF'' and our NAF achieve better results than ``w/o NAF'', which proves the necessity of an attention-enhanced fusion strategy. We also observe that our NAF module improves the realism and diversity of the generated images, as indicated by the lower FID, higher IS, and higher LPIPS.

Moreover, we visualize the attention maps in Figure~\ref{fig:attention_map}. The first column exhibits the images generated based on three conditional images. For each generated image, we choose three representative query locations, which borrow low-level details from the three conditional images respectively. For the conditional image $\textbf{x}_k$, we obtain the $H_1\times W_1$ attention map from the corresponding row of $\bm{A}_1^k$ in (\ref{eqn:attention_map}). For each color query point, we draw an arrow of the same color to indicate the most-attended regions (bright regions) in the corresponding conditional image. In the first row, we can see that the red (\emph{resp.}, green, blue) query location in the generated flower $\tilde{\textbf{x}}$ borrows some color and shape details from $\textbf{x}_1$ (\emph{resp.}, $\textbf{x}_2$, $\textbf{x}_3$). Similarly, in the second row, the red (\emph{resp.}, green, blue) query location in the generated dog face $\tilde{\textbf{x}}$ borrows some visual details of the forehead (\emph{resp.}, tongue, cheek) from $\textbf{x}_1$ (\emph{resp.}, $\textbf{x}_2$, $\textbf{x}_3$). A minimal code sketch of this map-extraction step is given at the end of this section.

\setlength{\tabcolsep}{6pt}
\begin{table}[t]
  \caption{Ablation studies of our loss terms and attention module on the Flowers dataset.}
  \centering
  \begin{tabular}{lrrrr}
    \hline
    Setting & Accuracy (\%) & FID ($\downarrow$) & IS ($\uparrow$) & LPIPS ($\uparrow$) \cr
    \hline
    w/o $\mathcal{L}_{1}$ & 74.89 & 122.68 & 6.39 & 0.2114 \cr
    w/o $\mathcal{L}_{m}$ & 73.92 & 125.26 & 4.92 & 0.1691 \cr
    w/o $\mathcal{L}_{a}$ & 72.42 & 122.12 & 4.18 & 0.1463 \cr
    \hline
    w/o NAF & 72.62 & 137.81 & 5.11 & 0.1825 \cr
    local NAF & 73.98 & 134.45 & 5.92 & 0.2052 \cr
    \hline
    F2GAN & $\textbf{75.02}$ & $\textbf{120.48}$ & $\textbf{6.58}$ & $\textbf{0.2172}$ \cr
    \hline
  \end{tabular}
  \vspace{0.1mm}
  \label{tab:network_design}
\end{table}
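The map-extraction step referenced above can be sketched as follows. This is a minimal illustration under the assumption that $\bm{A}_1^k$ is stored as a query-by-key matrix over $H_1\times W_1$ spatial locations; the function name is our own placeholder.
\begin{verbatim}
import numpy as np

def attention_heatmap(A_k, query_index, h1, w1, image_hw):
    # A_k: (num_queries, h1 * w1) attention weights for conditional
    # image x_k; one row per query location in the generated image.
    attn = A_k[query_index].reshape(h1, w1)
    attn = (attn - attn.min()) / (attn.max() - attn.min() + 1e-8)
    # Nearest-neighbour upsampling to the image resolution for overlay.
    rows = np.linspace(0, h1 - 1, image_hw[0]).astype(int)
    cols = np.linspace(0, w1 - 1, image_hw[1]).astype(int)
    return attn[np.ix_(rows, cols)]
\end{verbatim}
The normalized map is then overlaid on the corresponding conditional image so that its brightest region marks where the chosen query location attends.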