diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzizdq" "b/data_all_eng_slimpj/shuffled/split2/finalzzizdq" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzizdq" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\tIn recent years, the hope of cheap, solution-based solar cells with efficiencies above 20\\% has fueled the scientific research in halide perovskites (HaPs).\n \\cite{David1_1, David1_2, David1_3, David1_4, David1_5, David1_6, David1_7} \n HaPs exhibit many interesting optoelectronic properties that make them ideal candidates for future semiconductor based applications, yet some of their structural and dynamic properties remain of central scientific interest since they also present major stumbling blocks in the development of stable devices. Fundamentally, HaPs can crystallize in the typical perovskite structure $ABX_3$, where the $B$-site is a heavy metallic element such as lead, and halide ions ($X$) form octahedra centered around the $B$-site. \n \\cite{David2}\n In hybrid organic-inorganic HaPs, the voids between the inorganic scaffolds are occupied by the organic $A$-site (in this work methylammonium, MA). As is typical for perovskites, HaPs undergo phase transitions with changes in temperature. The prototypical variant $\\mathrm{MAPbI_3}${} has an orthorhombic symmetry up to 162~K, where it changes to a tetragonal structure. The second phase transition occurs at 327~K, where it switches to a cubic crystal structure.\n \\cite{Poglitsch87, Stoumpos13}\n Importantly, HaPs are compared to other well established semiconductors\n \\cite{David4}\n and perovskite materials very soft mechanically,\n \\cite{Feng14, David5_2, Rakita15, David5_4, David5_5, David5_6}\n which is interesting fundamentally and could become problematic when they are exposed to the operating conditions of commercially used solar cells. Furthermore, experimental and theoretical studies have also shown that the structure and binding in HaPs can be modified already by applying relatively low pressure. \n \\cite{Capitani16, David6_2, David6_3}\nTherefore, it is interesting to study the interrelation of binding and structural modifications in HaPs, which can be induced by a change in temperature or applied pressure, and may have important consequences for their optoelectronic properties.\n \\newline\n Such insight can be generated from first-principles based computations. However, from a theoretical point of view there are a number of physical effects and properties, all of which could play a role regarding the structure of and binding in HaPs. First, hybrid HaPs consist of highly polarizable atoms such as iodine and lead and contain hydrogen atoms bound to electronegative species (e.g. nitrogen); hence, from simple chemical arguments dispersive interactions, such as van-der-Waals (vdW) and hydrogen bonding, can be expected to play an important role in HaPs. 
{Several computational studies have shown that dispersive interactions are crucial in static calculations of the lattice constants and mechanical properties of HaPs.} \n \\cite{Feng14, David7_2, Egger14, David7_4, David7_5,Faghihnasiri17, David5_5, Motta16, Egger18}\n However, how the treatment of dispersive interactions should best be implemented in density functional theory (DFT), the most widely used method for HaPs, is still an open question.\\cite{Kronik14, Hermann17} {In particular, a recent study has concluded that correcting for dispersive interactions in calculations of HaPs does not improve their description.}\n \\cite{Bokdam17}\nSecond, Kohn-Sham DFT functionals, such as the frequently used generalized gradient approximation (GGA), cannot accurately describe the band gap of semiconductors even in principle,\n \\cite{David8_1, David8_2}\n a shortcoming that can be mitigated by applying computationally more costly hybrid functionals in the generalized Kohn-Sham framework.\n \\cite{David9,Zhang11}\nFor classical inorganic semiconductors, screened hybrid functionals were shown to also improve the description of lattice constants compared to GGA-based approaches.\n\t\\cite{David10_1, David10_2, David10_3}\n The question that naturally arises then is whether a similar improvement is found when using a hybrid functional for calculating the structure of HaPs, {as has been suggested recently.}\\cite{Bokdam17}\n Third, it is well established that spin-orbit coupling (SOC) due to the presence of the lead atom in $\\mathrm{MAPbI_3}${} leads to strong modifications in the electronic structure, i.e., it lifts the degeneracy of the conduction and valence band and lowers the band gap.\n \\cite{David11,Whalley17}\n Lastly, since HaPs are mechanically soft, they exhibit large structural dynamical effects, such as massive ionic displacements and molecular rotation around room temperature,\n \\cite{David12_1, Egger16}\n which is the relevant scenario for device applications. Therefore, it is important to investigate the consequences of these unusual structural dynamical effects on the pertinent structural properties of HaPs. {To the best of our knowledge, the impact of finite temperature on the lattice constants and mechanical properties, and the role of dispersive interactions in calculating them, has not been addressed by means of DFT-based molecular dynamics (MD).}\n \\newline\n In this study, we first investigate how the choice of vdW correction and DFT functional affects the results of calculations for structure and binding in the prototype $\\mathrm{MAPbI_3}$. To this end, we compare data obtained in static DFT calculations for a primitive cubic unit cell to experimental ones, using various computational approaches. The most promising choice of method is then used to explore how different MA orientations impact structural properties in static and {finite-temperature MD calculations} of $\\mathrm{MAPbI_3}$. \n \n \n\\section{Methods and Computational Setup}\n\n\tA satisfactory treatment of dispersion forces within conventional approximate Kohn-Sham DFT functionals requires dispersive correction schemes. In our calculations, we considered the Tkatchenko-Scheffler method with regular Hirshfeld partitioning (TS)\n \\cite{Tkatchenko09}\n and with an iterative Hirshfeld partitioning (HI)\n \\cite{Bucko14, Bucko13}\nas well as the many-body dispersion (MBD) method.\n\t\\cite{David15, Ambrosetti14, Bucko16}\nThe TS and HI methods include pairwise dispersive interactions between atoms in the crystal. 
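\n \n As a concrete illustration of this pairwise form, the following minimal Python sketch evaluates a TS-type dispersion energy, $E_{\\mathrm{disp}} = -\\sum_{i<j} f_{\\mathrm{damp}}(R_{ij}) C_{6,ij}\/R_{ij}^6$, with a Fermi-type damping function. All inputs (coordinates, effective $C_6$ coefficients, vdW radii, and damping parameters) are placeholders; in the actual schemes these quantities are derived from the (regular or iterative) Hirshfeld partitioning of the DFT density, and the sketch is illustrative rather than part of our computational workflow.\n \\begin{verbatim}\nimport numpy as np\n\ndef ts_dispersion_energy(pos, C6, R0, sR=0.94, d=20.0):\n    # Pairwise TS-type dispersion correction (illustrative sketch).\n    # pos: (n, 3) positions; C6, R0: (n, n) effective pair parameters.\n    E = 0.0\n    n = len(pos)\n    for i in range(n):\n        for j in range(i + 1, n):\n            Rij = np.linalg.norm(pos[i] - pos[j])\n            # Fermi damping switches the correction off at short range\n            f = 1.0 \/ (1.0 + np.exp(-d * (Rij \/ (sR * R0[i, j]) - 1.0)))\n            E -= f * C6[i, j] \/ Rij**6\n    return E\n \\end{verbatim}\n In a correction of this form, the TS and HI variants differ essentially only in how the effective $C_6$ coefficients and radii are obtained from the Hirshfeld charges.\n \n 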
The HI scheme expands the Hirshfeld partitioning with an iterative process that results in a more realistic allocation of charges for strongly ionic systems.\n\\cite{Bucko14}\nTo include the screening of dispersive interactions that occurs in a many-body system, we apply the MBD method based on the random phase expression of the correlation energy, as developed by Tkatchenko et al.\n\t\\cite{David15}\n\\\\ \n For the comparison of results obtained by using a GGA functional and a hybrid functional, we apply two very common variants in the context of solid-state calculations, namely the PBE\n \\cite{Perdew96}\n and HSE functionals.\n \\cite{Heyd03, David18_2}\n In the HSE functional, the short-range exchange energy is calculated as a mix of PBE and exact exchange,\n \\cite{Heyd03, David18_2}\n \n which was often found to improve the description of electronic-structure and structural properties of semiconductors. \n \\cite{David20_1, David10_1, David10_2, David10_3}\n SOC describes the interaction of the spin angular momentum with the orbital momentum, and it plays a crucial role in systems with heavy elements such as lead. In our calculations, it was described {fully self-consistently} within the framework of non-collinear magnetism,\n \\cite{David21}\n in conjunction with the various DFT methods applied here. \n \n In order to obtain structural and mechanical parameters, we use an equation of state that accurately describes the macroscopic properties of the crystal. One of the most frequently used is the Birch-Murnaghan equation of state (BMEOS).\n \\cite{Birch,Murnaghan,Fu83}\n It describes the free energy of a solid as a function of the volume of the unit cell, $E(V)$, under isothermal conditions as:\n \\begin{eqnarray}\n \tE(V) = E_0 + \\frac{B_0V}{B^\\prime_0} \\left( \\frac{(V_0\/V)^{B^\\prime_0}}{B^\\prime_0 - 1} +1 \\right) - \\frac{V_0B_0}{B^\\prime_0 -1}.\\label{BMEOS}\n \\end{eqnarray}\n $E_0$ is the free energy at the equilibrium volume, $V_0$, and $B_0$ and $B^\\prime_0$ are the bulk modulus and its pressure derivative, respectively. In order to obtain these parameters, we first calculate the free energy of the static system at eight equidistant volumes between 90\\% and 111\\% of the experimental value\n \\cite{Stoumpos13}\n \n and fit Eq.~\\ref{BMEOS} to these eight values with the method of least squares (tests with more data points did not show any significant improvement). Note that while $E_0$ and $B^\\prime_0$ are required to fit Eq.~\\ref{BMEOS}, they are not relevant for our discussion.\n \n \\begin{figure}\n\t\\includegraphics[width=\\linewidth]{combined}\n \\caption{\\label{combined} Schematic structural representations of the configurations of $\\mathrm{MAPbI_3}${} that were used in the calculations: a) Primitive unit cell (red solid line) and a 2x2x2 supercell (black dashed line) used in static calculations; note that we have considered various orientations of MA, see text for details. b) Visualization of the structural changes in a molecular dynamics (MD) calculation of a 2x2x2 supercell, which was obtained as an overlay of five structures separated by 200~fs along the 20~ps DFT-MD trajectory. Shown are carbon (brown), nitrogen (light blue), hydrogen (white), iodine (violet), and lead (gray) atoms. 
Atoms belonging to more than one computational cell are displayed for visual clarity.}\n\\end{figure}\n \n In our calculations, we first considered the cubic primitive unit cell of $\\mathrm{MAPbI_3}${} as has been reported experimentally, see Fig.~\\ref{combined}a.\n \\cite{Stoumpos13}\n To be able to also investigate the effect of MA orientation, we used a supercell that consists of eight unit cells stacked together in a 2x2x2 pattern (see Fig.~\\ref{combined}a), with the MA ions orientated in parallel or anti-parallel to each other. {In this way, we can test how the assumption that MA molecules are oriented perfectly in parallel throughout the material affects the computed structural and binding properties of $\\mathrm{MAPbI_3}${}.} Note that in these calculations, the angle between the MA molecules was fixed during the atomic relaxation. To fit Eq.~\\ref{BMEOS} with the data obtained in the MD calculations, the average free energy along a trajectory of 20~ps was calculated for each volume point, which was used, together with the standard deviation as the uncertainty, in the fit of Eq. \\ref{BMEOS}. We note that since the fit errors of $B^\\prime_0$ were quite large in the case of the MD calculations, in fitting Eq.~\\ref{BMEOS} we set $B^\\prime_0$ to the value obtained in the static calculation of the unit cell.\n \n \n\t\\begin{table*}\n \t\\caption{\\label{static_unitcell} Volume of the primitive unit cell, $V_0$, and bulk modulus, $B_0$, obtained by fitting the DFT-calculated data with Eq.~\\ref{BMEOS}, using various methods applied to the primitive unit cell of $\\mathrm{MAPbI_3}$.}\n \t\\begin{ruledtabular}\n \t\t\\begin{tabular}{lccccccccccc}\n\t\t\t\t& PBE & PBE+TS & PBE+MBD & PBE+HI & HSE & HSE+TS & HSE+MBD & HSE+HI & PBE+SOC & PBE+TS+SOC & Experiment \\\\\n \\hline\n $V_0$ [\\AA$^3$] \\footnote{Errors are between 0.1 and 0.4~\\AA$^3$} & 272.9 & 256.2 & 257.1 & 262.0 & 266.6 & 252.9 & 252.1 & 258.1 & 274.4 & 256.3 & 247-253 \\cite{Stoumpos13,Baikie13,Feng14}\\\\\n $B_0$ [GPa] \\footnote{Errors are between 0.2 and 0.4~GPa}& 10.6 & 15.7 & 13.6 & 14.3 & 11.1 & 16.4 & 14.7 & 14.2 & 9.6 & 15.0 & 12-16 \\cite{Ferreira18,Rakita15}\\\\\n \t\\end{tabular}\n \t\\end{ruledtabular}\n \\end{table*}\n \n The DFT calculations were performed with a plane-wave basis (cutoff energy: 400 eV) and the projector-augmented wave (PAW) method,\n \\cite{David24}\n as implemented in the VASP code.\n \\cite{David25}\n For unit cell calculations, a 6x6x6 grid of k-points centered around the $\\Gamma$-point was used; for static and dynamic calculations that considered the supercell, the grid was reduced to a sampling of 3x3x3. For each static calculation, the ionic coordinates were relaxed, using the PBE functional and the respective dispersive correction scheme, with the Gadget tool in internal coordinates,\n \\cite{David26}\n before calculating the energies to fit Eq.~\\ref{BMEOS}. In the calculations applying the HSE functional or SOC, the respective PBE-based geometry was used to reduce computational costs. For all static calculations, the geometry was considered relaxed when the forces acting on the atoms were below 10~meV per \\AA. In the canonical (NVT) MD simulations, a $\\mathrm{MAPbI_3}${} 2x2x2 supercell with randomly orientated MA was used as a starting point for the structural geometry, {in line with the recommendations provided by Lahnsteiner et al.} \\cite{Lahnsteiner16}\n It was then equilibrated for 5~ps and computed for 20~ps in time steps of 1~fs at a temperature of 400~K. 
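\n \n To make the fitting step concrete, the following minimal Python sketch fits Eq.~\\ref{BMEOS} to a set of energy-volume points by least squares. The numerical values are synthetic placeholders rather than data from our calculations; for the MD data, the per-volume standard deviations can additionally be supplied as uncertainties (via the sigma argument of curve_fit), and $B^\\prime_0$ can be held fixed as described above.\n \\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import curve_fit\n\ndef eos(V, E0, V0, B0, Bp):\n    # the equation of state given in the text (BMEOS); with E in eV and\n    # V in Angstrom^3, the fitted B0 is obtained in eV\/Angstrom^3\n    return (E0 + B0 * V \/ Bp * ((V0 \/ V)**Bp \/ (Bp - 1.0) + 1.0)\n            - V0 * B0 \/ (Bp - 1.0))\n\n# eight equidistant volumes around a placeholder reference volume\nV = np.linspace(0.90, 1.11, 8) * 250.0\nE = eos(V, -45.0, 252.0, 0.094, 4.5)            # synthetic energies\nE += np.random.normal(scale=1e-3, size=V.size)  # mimic numerical noise\n\npopt, pcov = curve_fit(eos, V, E, p0=(E.min(), V.mean(), 0.1, 4.0))\nE0_fit, V0_fit, B0_fit, Bp_fit = popt\nprint(V0_fit, B0_fit * 160.2177)  # B0 converted to GPa\n \\end{verbatim}\n \n 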
{This MD protocol is sufficient to compute the impact of the dynamic nuclear effects on the structure and binding despite the limited supercell size} \\cite{Lahnsteiner16}{ and trajectory length. The latter was checked explicitly by adding another 10~ps to the simulation, which resulted in only insignificant changes of the calculated observables.} Schematic representations of the structures were generated with VESTA. \\cite{vesta}\n \n \n \n\\section{Results}\n\n \nFig.~\\ref{vdw_comp} shows the energy change due to volume change, i.e., $\\Delta E(V) = E(V)- E_0$, for the static calculations of the $\\mathrm{MAPbI_3}${} unit cell, calculated by using the PBE and HSE functionals augmented by different dispersion corrections. The optimized volume ($V_0$) and bulk modulus ($B_0$), obtained from the fit, are listed in Table~\\ref{static_unitcell}. Considering the PBE data, it can readily be seen that using dispersive corrections has a large impact on the result. Furthermore, it was found that the results from the TS and MBD methods are very similar and provide a $V_0$ of 256.2~\\AA$^3$ (PBE+TS) and 257.1~\\AA$^3$ (PBE+MBD), which are close to the experimental range of 247-253~\\AA$^3$.\nIn contrast, the bare PBE result (272.9~\\AA$^3$) is far from the experimental range, in agreement with literature findings,\n \\cite{Feng14, David7_2, Egger14, David7_4, David7_5,Faghihnasiri17}\nand the PBE+HI result (262.0~\\AA$^3$) lies roughly in between those of bare PBE and PBE+TS\/PBE+MBD. The calculated $B_0$ data show similar trends, i.e., the experimentally reported values (12-16~GPa)\nagree well with the PBE+TS, PBE+MBD, and PBE+HI results, but deviate more from the bare PBE value. From these data, one can see that the TS dispersive correction scheme with a regular Hirshfeld partitioning and the MBD approach perform best in reproducing structural and binding properties measured in experiments. {It should be noted that the experimental data were recorded at elevated temperatures of at least $T\\approx330$~K, while the calculations were performed at 0~K and do not address thermal expansion, an effect we discuss below based on our results obtained from finite-temperature MD calculations.}\n\n Fig.~\\ref{vdw_comp} also shows the results for regular and TS-corrected calculations using the HSE functional (see Table~\\ref{static_unitcell} for the full dataset using the other correction schemes). While there is some visible difference between the PBE and HSE calculations without dispersive corrections, the bare HSE value for $V_0$ is still quite far from the experimental result, in stark contrast to findings reported for conventional inorganic semiconductors.\n\\cite{David10_1, David10_2, David10_3} \n Considering the dispersion-corrected HSE data, it is found that these compare to experiment almost as well as their PBE counterparts. Importantly, the optimized $V_0$ values using PBE+TS, PBE+MBD, HSE+TS, and HSE+MBD all lie within $5$~\\AA$^3$ of one another, which is smaller than the range of experimentally reported values. Hence, we find that the improvement due to using the HSE functional is minor compared to using dispersive corrections, which is thus found to be the essential computational ingredient for obtaining accurate structural and binding properties of $\\mathrm{MAPbI_3}$. From these findings, we argue that using PBE+TS provides a reasonable choice for calculating structural and binding properties of HaPs such as $\\mathrm{MAPbI_3}$. 
For completeness, we note that calculations including SOC showed very little difference from those without, as can be seen in Table~\\ref{static_unitcell}.\n \n \n \\begin{figure}\n \t\\includegraphics[width=1.0\\linewidth]{unit_cell_comp}\n \\caption{\\label{vdw_comp} Energy change as a function of unit-cell volume, $\\Delta E(V) = E(V)- E_0$, for the static calculations of the primitive $\\mathrm{MAPbI_3}${} unit cell, using the PBE and HSE functionals augmented by different dispersion correction schemes. Also shown are fits to Eq.~\\ref{BMEOS} and the range of experimental results for the volumes (green-shaded area). It is noted that a larger slope in the fit corresponds to a larger value of $B_0$.}\n \\end{figure}\n \n It is well-known that the electronic interaction between MA and the inorganic ions in $\\mathrm{MAPbI_3}${} is minor, since the electronic states of MA are energetically not close to the band edges. Nevertheless, electrostatic and dispersive interactions\nbetween the inorganic ions and MA could still be important for the structure and binding of $\\mathrm{MAPbI_3}$. In order to test this, we varied the MA orientation in a 2x2x2 supercell, considering the extreme cases of either perfectly parallel or antiparallel MA molecules. {It is noted that the former scenario is equivalent to considering a primitive cubic unit cell of $\\mathrm{MAPbI_3}${} containing one MA unit.} When calculating the structural parameters of the 2x2x2 supercell with different MA orientations, we limit the calculations to the bare PBE and the PBE+TS approach, since neither using HSE nor including SOC has shown a significant improvement that would justify the increase in computational cost {(see above and Table}~\\ref{static_unitcell}). Furthermore, using the TS and MBD dispersive correction schemes provided essentially equal results, but the computational cost of the MBD method increases more rapidly with the number of atoms than that of the TS scheme.\n \n The first relevant observation in the data shown in Table~\\ref{static_supercell} is that the results of the supercell calculations considering the parallel MA orientation are, within the fitting error, identical to the results obtained with the calculations for the primitive unit cell, as expected. However, a noticeable difference occurs in both the structure (see Fig.~\\ref{supercell_MA}) and the optimized parameters (see Table~\\ref{static_supercell}) when the MA orientation is changed to the other extreme of perfectly antiparallel MA molecules. In the case of the parallel MA orientation, at all volumes the octahedra retain cubic symmetry, whereas in the antiparallel orientation they tilt strongly (see Fig.~\\ref{supercell_MA}). Indeed, this effect is important, since for the fit of the supercell data calculated with bare PBE we find that $V_0\/8$ changes from 272.7~\\AA$^3$, for the parallel orientation, to 265.8~\\AA$^3$, for the antiparallel one. The same trend is confirmed in the PBE+TS calculations, although the differences are smaller. Hence, the interaction between the inorganic cage and the MA molecules is important for the structure and binding in $\\mathrm{MAPbI_3}$. 
Since our DFT calculations show that the antiparallel orientation is preferred in~$\\mathrm{MAPbI_3}$, i.e., the free energy is consistently lower for the antiparallel MA orientation at all eight volumes, this requires further investigation.\n \n \\begin{table}\n \t\\caption{\\label{static_supercell}Cell volume, $V_0$, and bulk modulus, $B_0$, obtained by fitting the DFT-calculated data with Eq.~\\ref{BMEOS}, using PBE and PBE+TS applied to a 2x2x2 supercell of $\\mathrm{MAPbI_3}${} with differently orientated MA molecules. Note that in order to improve the comparison, we here report $V_0\/8$.}\n \\begin{ruledtabular}\n \\begin{tabular}{lcc}\n \t& parallel MA & anti-parallel MA \\\\\n \\hline \n & \\multicolumn{2}{c}{\\textbf{PBE}} \\\\\n $V_0$ [\\AA$^3$] \\footnotemark[1]& 272.9 & 267.5 \\\\ \n $B_0$ [GPa] \\footnotemark[2]& 10.8 & 11.2 \\\\\n \n & \\multicolumn{2}{c}{\\textbf{PBE+TS}} \\\\\n $V_0$ [\\AA$^3$] \\footnotemark[1] & 256.3 & 253.8 \\\\ \n $B_0$ [GPa] \\footnotemark[2]& 15.7 & 14.7 \\\\\n \\end{tabular}\n \\footnotetext[1]{Errors are between 0.1 and 0.3~\\AA$^3$}\n \\footnotetext[2]{Errors are 0.4~GPa}\n \\end{ruledtabular}\n \\end{table}\n \n \n \\begin{figure}\n \t\\includegraphics[width=.49\\linewidth]{CONTCAR_parallel}\n \\includegraphics[width=.49\\linewidth]{CONTCAR_antiparallel}\n \\caption{\\label{supercell_MA}Schematic representations of the supercells with optimized atomic geometries obtained with PBE+TS at a volume of $256.6$~\\AA$^3$, for the case of parallel (a) and antiparallel MA orientation (b). The x-z plane is shown, and atoms belonging to more than one computational cell are displayed for visual clarity.}\n \\end{figure} \n \n \n \n \\begin{table}\n \t\\caption{\\label{MD_results}\nCell volume, $V_0$, and bulk modulus, $B_0$, obtained by fitting the DFT-calculated data with Eq.~\\ref{BMEOS}, using PBE and PBE+TS applied to a 2x2x2 supercell of $\\mathrm{MAPbI_3}${} computed along an MD trajectory of 20~ps. Note that in order to improve the comparison, we here report $V_0\/8$.}\n \t\\begin{ruledtabular}\n \t\\begin{tabular}{lcc}\n \t& PBE & PBE+TS \\\\\n \\hline\n \t$V_0$ [\\AA$^3$] \\footnotemark[1] & 267.3 & 248.1 \\\\ \n \t$B_0$ [GPa] \\footnotemark[2] & 8.8 & 13.0 \\\\\n \\end{tabular}\n \\footnotetext[1]{Errors are 0.1~\\AA$^3$}\n \\footnotetext[2]{Errors are between 0.6 and 0.7~GPa}\n \\end{ruledtabular}\n\t\\end{table}\n \n \n \\begin{figure}\n \t\\includegraphics[width=\\linewidth]{Figure_md_pbe}\n \\includegraphics[width=\\linewidth]{Figure_md_ts}\n \\caption{\\label{md_bmeos}\n Energy change as a function of unit-cell volume, $\\Delta E(V) = E(V)- E_0$, calculated by using MD-DFT with PBE (top) and PBE+TS (bottom) along a trajectory of 20~ps, shown as yellow dots, where the symbol size is given by the number of occurrences in the MD run. The blue cross denotes the average $\\Delta E(V)$, and the black dashed line is the fit according to Eq.~\\ref{BMEOS}. Note that in order to improve the comparison, we here report $\\Delta E\/8$ as a function of $V\/8$.}\n \\end{figure}\n \n\n Hence, motivated by the finding that MA orientation impacts structure and binding in $\\mathrm{MAPbI_3}$, we performed fully unconstrained NVT MD calculations at 400~K. {This is relevant, since the MA unit was found to be only weakly bound in the cubic phase,} \\cite{Chen15, Jingrui18}\n {and hence undergoes rotational and translational motion at elevated temperatures, which could impact the energetics in the material dynamically. 
Furthermore,} the MD calculations allow for testing whether the effect of dispersive corrections seen for the static structures is still visible at elevated temperatures, {a comparison that has not been attempted previously but is} important for investigating dynamical effects in the structural interaction of $\\mathrm{MAPbI_3}$. Fig.~\\ref{md_bmeos} shows the entire distribution as well as the mean value of the PBE- and PBE+TS-calculated change in free energy of $\\mathrm{MAPbI_3}$, $\\Delta E$, determined for a 20~ps MD simulation, again fitted by Eq.~\\ref{BMEOS}. The most apparent finding from Fig.~\\ref{md_bmeos} is yet again the sizable difference between the PBE and PBE+TS curves, as also quantified by the $V_0$ parameters provided in Table~\\ref{MD_results}. Furthermore, a slight decrease of $V_0$ is found in the MD calculations at 400~K compared to the static 0~K calculation, contrary to the expected thermal expansion. In regard to $B_0$, while in the case of PBE it is similar to the 0~K calculation of the primitive unit cell, for PBE+TS it is slightly lower than what was obtained in the static calculations. \n \n In order to better understand the origin of these findings, the average atomic positions along the {PBE+TS} MD trajectories were calculated. Fig.~\\ref{md_poscars} shows that depending on the volume of the supercell, two different types of structures emerged. For the three smallest volumes considered in the MD, the octahedra were found to be tilted and the C-N bond of MA is still clearly visible in the average structure: successive MA molecules are oriented in parallel to each other along one direction and orthogonal to each other in the other two directions, aligning\n with the long axis of the rhombus created by the tilted octahedra. For the five larger considered volumes, on the other hand, the octahedra on average form a nearly perfect cubic symmetry, and the average carbon and nitrogen atoms are almost conjoined. The latter could either mean that there is absolutely no preferred direction of the MA molecules, i.e., each molecule {rotates such that it is entirely disordered over the course of the 20~ps trajectory}, or that there are preferred directions which are equally occupied thermally.\n \n \\begin{figure}\n \t\\includegraphics[width=.49\\linewidth]{POSCAR_md_090}\n \\includegraphics[width=.49\\linewidth]{POSCAR_md_102}\n \\caption{\\label{md_poscars}Schematic representation of the time-averaged atomic positions along the MD trajectory, calculated with PBE+TS, for $V=231.0$~\\AA$^3$ (a) and $V=256.6$~\\AA$^3$ (b). Note that hydrogen atoms were omitted for clarity.}\n \\end{figure} \n \n\\FloatBarrier \n\\section{Discussion}\n\nThe results from the static unit cell calculations confirmed previous findings that showed the importance of including dispersive interactions in DFT calculations {for the structure and binding in $\\mathrm{MAPbI_3}${}}.\\cite{Feng14, David7_2, Egger14, David7_4, David7_5,Faghihnasiri17} {Here, we went beyond these studies by testing a range of dispersive-correction schemes}: PBE calculations corrected by the TS scheme with the regular Hirshfeld partitioning and the MBD scheme showed good agreement with experimental data. The TS method with iterative Hirshfeld partitioning performed slightly worse than the regular TS and MBD schemes. 
{While the iterative Hirshfeld partitioning has indeed been shown to improve the description of ionic materials,}\\cite{Bucko14, Bucko13} {$\\mathrm{MAPbI_3}${} is a more complex case, since it is a hybrid organic-inorganic system that contains covalent bonds (in the MA molecule) as well as partially covalent-ionic bonds (in the inorganic framework)}. One perhaps surprising result is that the MBD method did not improve the results significantly compared to the TS method, even though one would expect that the iodine atoms, which account for most of the vdW-interactions,\\cite{Egger14} are screened by the surrounding dielectric environment. {We considered a comparison of the $C_6$ parameters calculated from PBE+TS and PBE+MBD, which determine the dispersive correction in either case,}\\cite{Kronik14,Hermann17}{ to further understand this result. We find that the changes are minor, on the order of $1~\\%$ or smaller, and furthermore do not depend on the volume of the unit cell.} This implies that the screening is barely affected by the changes in distance in the calculations at different volumes, and thereby only leads to a constant energy shift. {Furthermore, we considered the higher-order contributions to the dispersive energy calculated with PBE+MBD, to find that indeed the second-order term is dominating. This could imply that the dispersive interactions in $\\mathrm{MAPbI_3}${} are such that higher-order interactions are indeed negligible compared to the pairwise contribution, and also that the polarizability is largely isotropic (see ref. }\\cite{Hermann17}\n{for further discussion).}\n \nFurthermore, we found that using the hybrid functional HSE, which improves the description of the electronic structure, did not result in improvements for the optimized unit-cell volume and bulk modulus, once it was combined with dispersive corrections. Indeed, the results from the PBE+TS, PBE+MBD, HSE+TS, and HSE+MBD calculations are all within experimentally reported range for these two important structural parameters. Since {our data, presented in Table~}\\ref{static_unitcell}{, also confirm the effect of SOC for the structure and binding to be minor, and since the PBE+TS approach was shown to be very accurate for structural and mechanical properties of multiple HaP crystals,} \\cite{Egger14, David7_5, David5_5,David7_4, Motta16, Egger18}\nwe conclude that using PBE+TS for investigating the structure and binding in static and dynamical calculations of HaPs is a reasonable choice.\n \n{In this context, it is worth noting that} \\citealt{Bokdam17} suggested the choice of DFT functional to have a bigger impact on the structural properties of $\\mathrm{MAPbI_3}${} than the addition of dispersive corrections. While this is certainly true for electronic properties, and while our calculations showed some minor differences between the PBE and HSE results, {we have shown that} this clearly cannot diminish the important role of dispersive interactions in $\\mathrm{MAPbI_3}$. {We further note that the different conclusions of our work and} \\citealt{Bokdam17} {could be related to the different points of reference used in either case: while we have chosen to consider experimental data on the lattice constants and bulk modulus,} \\citealt{Bokdam17} {considered energies calculated in the random-phase approximation (RPA) as a reference. 
Indeed, RPA energies can be very accurate for solids and are broadly applicable, but it is worth noting that ``standard'' RPA suffers from known deficiencies, such as potentially incorrect descriptions of short-range correlation.} \\cite{RPA_range,Hermann17}\n \n \\begin{figure}\n \t\\includegraphics[width=\\linewidth]{green_iodines}\n \\caption{\\label{antiparallel_mixed} Schematic representation of the interaction between the ammonium-end of MA and the surrounding iodine atoms; the relevant atoms are highlighted in green, and the z-x plane (left) and y-x plane (right) are shown. The structure is taken from the PBE+TS calculation of $V= 256.6$~\\AA$^3$ as visualized in Fig.~\\ref{supercell_MA}.}\n \\end{figure}\n \nUsing the PBE+TS approach allowed for studying more complicated static and dynamic structural phenomena: First, we investigated the impact of MA orientation on the structural parameters of $\\mathrm{MAPbI_3}$, studying the extreme cases of either perfectly parallel or antiparallel MA molecules contained in a supercell. We found that only for the parallel orientation of MA do the octahedra retain cubic symmetry, while for antiparallel MA molecules they strongly tilt. Note that the lattice vectors were still constrained to the shape of the primitive unit cell, i.e., the volume change corresponds to a hydrostatic pressure in the system. Since the relative energy gain\/cost due to these structural distortions depends on the unit-cell volume, this effect modified the obtained volume and bulk modulus. Hence, the interaction between MA and the inorganic atoms is important, especially because the calculations showed that the antiparallel orientations are actually energetically preferred.\n\nThe origin of these distortions is related to the interactions between the MA molecules and iodine atoms, since the partially positive ammonium end of MA interacts more strongly with the partially negative iodide ion than the methyl end does. In Fig.~\\ref{antiparallel_mixed} we illustrate that the nitrogen atoms of MA interact mainly with five of the neighboring iodines, which results in a distortion of the octahedra such that the distances between the nitrogen and iodine atoms are maintained between 3.64 and 3.81~\\AA. These interactions seem to be the main driving force behind the MA-induced distortions of the octahedra in the antiparallel case, which are energetically favorable for the system. Due to symmetry, these interactions are cancelled in the case of parallel MA orientation, and our geometry relaxations did not automatically adapt to the more favorable antiparallel scenario. This finding is also quite interesting in view of the fact that the experimentally-determined lattice symmetry of~$\\mathrm{MAPbI_3}${} above 327~K is almost perfectly cubic,\\cite{Poglitsch87,Stoumpos13} despite the fact that different MA orientations can induce lower-energy structures.\n\nThe implications of these findings can be fully understood by means of fully-unconstrained MD calculations at 400~K, since in these the MA molecules, as well as all other atoms, are allowed to move freely. The first important finding from these data was that inclusion of dispersive corrections is equally important for obtaining reasonable structural parameters in MD calculations. Second, the results obtained from the MD simulations are quite surprising at first sight, since the unit-cell volume was found to be smaller than the one obtained from the 0~K static calculation. 
To understand this finding, consider that the time scales associated with MA motion are much shorter than those corresponding to the octahedral distortions of the heavy Pb-I cage; this fast motion is included in the MD but absent in the static calculations. Our analysis further showed that the average nuclear positions of MA are such that the carbon and nitrogen atoms are essentially conjoined at this temperature, meaning that MA is disordered, which is well known.\\cite{Poglitsch87} Hence, the inorganic atoms essentially respond to a time-averaged volume corresponding to the moving MA molecule. Due to the MA disorder, this volume is effectively smaller at 400~K than for a fixed pattern of MA orientations, allowing for shorter I-Pb-I bonds and hence smaller crystal volumes. Last but not least, this rationale also explains the finding from the MD data showing that the time-averaged octahedral symmetry is almost perfectly cubic at 400~K, in agreement with experiment: MA disorder implies that all possible octahedral distortions induced by MA-iodine interactions are statistically equally likely. Therefore, the cubic perovskite structure is maintained as the most energetically favorable average crystal structure, exhibiting all the electronic and optical properties that render $\\mathrm{MAPbI_3}${} so favorable for device applications.\n\nFinally, the data contained in Fig.~\\ref{md_poscars} showed that at smaller volumes the MD-averaged nuclear positions exhibit octahedral distortions together with a preferred orientational order of MA. This makes sense, considering that smaller unit-cell volumes correspond to smaller voids between the octahedra, which increases the organic-inorganic interactions,\\cite{David7_5} and partially hinders free MA motion.\nSuch tilting of the octahedra into an orthorhombic-like structure was also observed experimentally in pressure experiments\\cite{Capitani16} for $\\mathrm{MAPbI_3}$, and the variation in the preferred direction for MA at different volumes was also discussed in a recent study reporting MD simulations.\\cite{Lahnsteiner18} Therefore, in agreement with previous findings, our study shows that even relatively mild changes in external pressure can result in large changes of the structure and binding in $\\mathrm{MAPbI_3}$.\n\n\\section{Conclusion}\n\n In summary, we investigated the impact of various levels of DFT-related approximations on calculations of the structural and binding properties of the prototypical HaP material $\\mathrm{MAPbI_3}$. Our tests considered the effects of including different dispersive correction schemes, applying a hybrid functional, including SOC, and also addressed the role of dynamic effects in MD calculations. The data confirmed previous theoretical work showing that dispersive corrections are important for accurate calculations of $\\mathrm{MAPbI_3}$, and also highlighted that applying a computationally much more expensive hybrid functional improves the description of structural and mechanical properties by only a small amount. From this, we conclude that the use of a semilocal functional, augmented by pairwise dispersive interactions, is a suitable choice when computing more complicated static as well as structural dynamical phenomena in HaPs. Applying this methodology to DFT-based MD calculations of $\\mathrm{MAPbI_3}$, we analyzed the dynamic effect of molecular motion and its interplay with the structure of and binding in $\\mathrm{MAPbI_3}$. 
From this analysis, we could rationalize microscopically the simultaneous occurrence of a preferred cubic octahedral symmetry and MA disorder.\n\t\n\\section{Acknowledgements}\n\nFunding provided by the Alexander von Humboldt Foundation in the framework of the Sofja Kovalevskaja Award endowed by the German Federal Ministry of Education and Research is acknowledged. The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS at J\u00fclich Supercomputing Centre (JSC).\n\n\\section{References}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nFictitious Play (FP), introduced in \\cite{Brown51}, is one of the oldest and best-known game theoretic learning algorithms. FP has been shown to be an effective algorithm for distributed learning of Nash equilibria in various classes of games including two-player zero-sum games \\cite{robinson1951iterative}, generic $2\\times m$ games \\cite{berger2005fictitious}, supermodular games\n\\cite{milgrom1990rationalizability,berger2008learning}, one-against-all games \\cite{sela1999fictitious}, and potential games \\cite{Mond96,benaim2005stochastic}. However, the manner in which players \\emph{learn} in FP is often unsatisfactory, especially in the context of distributed control.\n\nIn FP, players learn equilibrium strategies in the sense that the time-averaged empirical distribution of players' actions converges to the set of Nash equilibria ---a form of learning known as \\emph{convergence in empirical distribution}. This notion of learning tends to be problematic when the limit set of a learning algorithm contains mixed-strategy equilibria. In particular, convergence of the time-averaged empirical distribution to a mixed-strategy equilibrium does not imply any form of convergence in players' period-by-period strategies or actions. In practice, players' period-by-period strategies tend to move in progressively longer and longer cycles around an equilibrium set---the time-averaged empirical distribution is driven to equilibrium, but the period-by-period strategies never approach the equilibrium set themselves.\n\nIn the context of repeated-play algorithms, we refer to convergence of the empirical distribution (or some function thereof) to an equilibrium set as weak convergence, and we refer to any form of learning involving weak convergence as weak learning. We refer to the convergence of players' period-by-period strategies to an equilibrium set as strong convergence, and we refer to any form of learning involving strong convergence as strong learning. Intuitively speaking, weak learning means that players learn an equilibrium strategy in some abstract sense (i.e., convergence in empirical distribution) but may never actually implement the strategy they are learning. In strong learning, not only do players \\emph{learn} an equilibrium strategy, but they also implement it.\n\nFP is proven to achieve learning only in the weak sense, and thus no guarantees can be made regarding the convergence nor optimality of players period-by-period strategies. For example, Jordan \\cite{jordan1993} presents a continuum of games for which FP achieves weak learning, yet in all but a countable subset of games, the period-by-period strategies produced by FP never approach the game's unique equilibrium. 
As another example, Young \\cite{young2004strategic} presents a $2\\times 2$ game in which FP achieves weak learning, but the period-by-period actions produced by FP achieve the lowest possible utility in every stage of the repeated play (see also Section \\ref{sec_weak_convergence}).\n\nOur first main contribution is the presentation of a simple variant of FP that converges strongly to equilibrium. In our strongly convergent variant of FP, players gradually and independently transition from using the FP best response rule to determine the next-iteration action, to sampling the next-iteration action from their current empirical distribution, treated as a probability mass function. We show that, for any game in which FP can be shown to converge weakly to equilibrium (and for which a certain robustness assumption holds---see \\textbf{A.\\ref{a_robustness}}), our variant of FP will converge strongly to equilibrium.\n\nOne advantage of this approach is that it is readily applicable to more general FP-type learning algorithms. Our second (and more general) main contribution is a method for taking a weakly convergent FP-type learning algorithm and constructing from it a strongly convergent variant.\nWe study a general class of FP-type algorithms and show that, so long as an algorithm achieves weak learning in a sufficiently robust sense (see \\textbf{A.\\ref{a_robustness}}), a strongly convergent variant of the algorithm can be constructed. As an example of how the general result may be applied, we consider three weakly convergent FP-type algorithms---classical FP, Generalized Weakened FP \\cite{leslie2006generalised}, and Empirical Centroid FP \\cite{swenson2012ECFP,Swenson-MFP-Asilomar-2012}---and construct the strongly convergent variant of each.\n\n\n\n\\subsection{Related Work}\nAn overview of the topic of learning in games can be found in \\cite{fudenberg1998theory,young2004strategic}. Various problems associated with learning mixed-strategy equilibria in best-response-type learning algorithms (including FP-type algorithms) are discussed in \\cite{jordan1993}. In particular, the issue of weak convergence is considered, along with a discussion of some of the underlying mechanics that lead to weak convergence.\n\nMany learning algorithms are designed to ensure that their limit points are pure-strategy equilibria \\cite{marden06,marden-payoff,chasparis2010aspiration,pradelski2012learning,marden-shamma-08}. Ensuring convergence to a pure strategy is a natural way of ensuring strong learning, since weak learning can generally only occur when the limit set contains mixed strategies.\n\nIn contrast, this paper studies a method of ensuring strong convergence when the limit set of the algorithm contains mixed strategies. The ability to (strongly) learn mixed equilibria is important for many reasons, the foremost being that, in finite games, the set of Nash equilibria (NE) is only guaranteed to be non-empty if mixed equilibria are considered. Mixed strategies play an important role when the learned strategy needs to be robust to uncertainty in opponent behavior or game structure, or secure against the actions of malicious players \\cite{rass2014numerical,voorneveld1999pareto,alpcan2010network,sela1999fictitious,dabcevic2014fictitious}. 
With regard to FP in particular, it was recently shown in \\cite{candogan2013dynamics} that, for the class of near-potential games, the limit set of the FP dynamics (weakly speaking) is a neighborhood of a mixed equilibrium.\n\nRegret-testing algorithms \\cite{foster2003regret,germano2007global} achieve strong convergence to mixed-strategy equilibria in generic finite games. However, such algorithms operate on fundamentally different principles from FP-type algorithms---players implement a form of exhaustive search to coordinate on a NE strategy. Such algorithms tend to have slow convergence rates, especially when the number of players or available actions is large.\n\nStochastic FP (SFP)---introduced in \\cite{Fud92}---was proposed as a learning mechanism that could (i) mitigate the problem of weak convergence to mixed equilibria in FP and (ii) provide a reasonable explanation for why real-world players might learn mixed-strategy equilibria. In SFP, the issue of weak convergence is addressed by smoothing each player's best response correspondence with the addition of small random shocks or perturbations. The stable points of SFP are not Nash equilibria, but rather Nash distributions. The set of Nash distributions converges to the set of Nash equilibria as the size of the perturbations goes to zero \\cite{Fud92}. SFP has been shown to obtain strong convergence to the set of Nash distributions in various classes of games \\cite{hofbauer2002global,benaim2005stochastic,fudenberg1998theory}. Moreover, if the perturbations are permitted to gradually decay throughout the course of the repeated play, then SFP converges to the set of NE \\cite{leslie2006generalised}.\n\nIn contrast to SFP, the present work does not consider the descriptive agenda of providing an explanation for why real-world learners might act according to a given behavior rule. Furthermore, we present a simple and intuitive procedure for modifying a variety of weakly convergent learning algorithms in order to obtain a strongly convergent variant. From a technical perspective, the current work differs from SFP in that the best response correspondence is not directly smoothed in any way.\n\nThe work \\cite{leslie2006generalised} by Leslie et al. studies a useful generalization of FP termed Generalized Weakened FP (GWFP). Among other contributions, the paper demonstrates that the convergence of FP is not affected by asymptotically decaying perturbations to players' best response sets. This result provides a cornerstone for our proofs by ensuring that FP (and GWFP) meet the critical robustness assumption \\textbf{A.\\ref{a_robustness}}. We study a strongly convergent variant of GWFP in Section \\ref{sec_apps2}. Furthermore, \\cite{leslie2006generalised} also presents a payoff-based, actor-critic learning algorithm based on GWFP that achieves strong learning. Our work differs from this in that we provide a general method for constructing a strongly convergent algorithm from a weakly convergent one in a setting where instantaneous payoff information may or may not be available.\n\nOur preliminary results on strong convergence in FP are found in \\cite{swenson2014strong}. The present work expands on \\cite{swenson2014strong} by considering algorithms beyond classical FP and establishing more general conditions under which convergence can be attained (in particular, see \\textbf{A.\\ref{rho_a1}}--\\textbf{A.\\ref{rho_a3}}). 
Furthermore, \\cite{swenson2014strong} contains a gap in reasoning in the proof of Lemma 2 which the present paper fills in.\n\nThe remainder of the paper is organized as follows. Section \\ref{sec_prelims} sets up notation to be used in the subsequent development. Section \\ref{sec_FP} introduces classical FP and discusses the problem of weak convergence in classical FP. Section \\ref{sec_strong_fp} presents the strongly convergent variant of classical FP and states the strong convergence theorem for classical FP. Section \\ref{sec_general_setup} presents the general notion of an FP-type algorithm, then presents the strongly convergent variant of an FP-type algorithm, states the general strong convergence result in the context of an FP-type algorithm, and presents the proof of the result. In Section \\ref{sec_apps}, the general result is applied to prove strong convergence in classical FP, Generalized Weakened FP, and Empirical Centroid FP. Section \\ref{sec_conclusion} concludes the paper.\n\n\\section{Preliminaries}\n\\label{sec_prelims}\n\\subsection{Setup and Notation}\nA game in normal form is represented by the triple $\\Gamma := (N,(Y_i,u_i)_{i\\in N})$, where $N = \\{1,\\ldots,n\\}$ denotes the set of players, $Y_i$ denotes the finite set of actions available to player $i$, and $u_i:\\prod_{i\\in N}Y_i \\rightarrow \\mathbb{R}$ denotes the utility function of player $i$. Denote by $Y:= \\prod_{i\\in N} Y_i$ the joint action space.\n\nIn order to guarantee the existence of Nash equilibria it is necessary to consider the mixed extension of $\\Gamma$ in which players are permitted to play probabilistic strategies. Let $m_i := |Y_i|$ be the cardinality of the action space of player $i$, and let $\\Delta_i := \\{p\\in \\mathbb{R}^{m_i}:\\sum_{k=1}^{m_i}p(k) = 1,~p(k)\\geq 0 ~\\forall k\\}$ denote the set of mixed strategies available to player $i$---note that a mixed strategy is probability distribution over the action space of player $i$. Denote by $\\Delta^n := \\prod_{i\\in N} \\Delta_i$, the set of joint mixed strategies.\n\nIn this context, we often wish to retain the notion of playing a deterministic action. For this purpose, let $A_i := \\{e_1,\\ldots,e_{m_i}\\}$ denote the set of ``pure strategies'' of player $i$, where $e_j$ is the $j$-th cannonical vector containing a $1$ at position $j$ and zeros otherwise.\n\nThe mixed utility function of player $i$ is given by $U_i(p) := \\sum_{y \\in Y} u_i(y) p_1(y)\\ldots p_n(y)$, where $U_i:\\Delta^n \\rightarrow \\mathbb{R}$. When convenient we sometimes write $U_i(p)$ as $U_i(p_i,p_{-i})$, where $p_i$ denotes the mixed strategy of player $i$ and $p_{-i}$ denotes the mixed strategies of all other players.\nThe set of Nash equilibria is given by $NE := \\{p\\in \\Delta^n: U_i(p_i,p_{-i}) \\geq U_i( p_i',p_{-i}), ~\\forall p_i' \\in \\Delta_i,~\\forall i\\in N\\}$.\nLet\n\\vskip-15pt\n\\begin{equation}\nBR_i^{\\epsilon}(p_{-i}) := \\{a_i \\in A_i: U(a_i,p_{-i}) \\geq \\max_{\\alpha_i \\in A_i} U(\\alpha_i,p_{-i})-\\epsilon\\}\n\\label{BR_epsilon_set}\n\\end{equation}\n\\vskip-5pt\n\\noindent\nbe the $i$-th players set of $\\epsilon$-best responses to a strategy profile $p_{-i}$ adopted by the other players. 
Note that in the definition \\eqref{BR_epsilon_set} we only consider pure-strategy $\\epsilon$-best responses.\nDenote by $v_i(p_{-i}) := \\max_{p_i\\in \\Delta_i} U_i(p_i,p_{-i}),$ the value obtained by playing a best response.\n\nThroughout, we assume there exists a probability space $(\\Omega,\\mathcal{F},\\mathbb{P})$ rich enough to carry out the construction of the various random variables required in this paper. For a random object $X$ defined on a measurable space $(\\Omega,\\mathcal{F})$, let $\\sigma(X)$ denote the $\\sigma$-algebra generated by $X$ \\cite{williams_book}. As a matter of convention, all equalities and inequalities involving random objects are to be interpreted almost surely (a.s.) with respect to the underlying probability measure, unless otherwise stated.\n\n\\subsection{Repeated Play}\nSuppose players repeatedly face off in the game $\\Gamma$. Denote by $t\\in \\{1,2,\\ldots\\}$ a round of the repeated play. Let $\\{a_i(t)\\}_{t\\geq 1}$ denote the sequence of actions taken by player $i$, where $a_i(t) \\in A_i$, and let $\\{a(t)\\}_{t\\geq 1}$, $a(t) = (a_1(t),\\ldots,a_n(t))$ denote the sequence of joint actions.\n\nLet $\\{\\mathcal{F}_t\\}_{t\\geq 1}$ be a filtration (an increasing sequence of $\\sigma$-algebras) that contains the information available to players in round $t$ of the repeated play.\nFor $t\\geq 1$ and $\\alpha_i \\in A_i$, let $g_i(\\alpha_i,~t)\\in\\mathbb{R}$ be an $\\mathcal{F}_{t-1}$-measurable random variable with $g_i(\\alpha_i,~t) := \\mathbb{P}(a_i(t) = \\alpha_i\\vert \\mathcal{F}_{t-1})$, and let $g_i(t)\\in \\Delta_i$ be the vector with components $g_i(t) := (g_i(\\alpha_1,~t),\\ldots,g_i(\\alpha_{m_i},~t))$, where $m_i$ is the cardinality of $A_i$.\nWe say $g_i(t)$ is the mixed strategy used by player $i$ in round $t$, and we say $\\{g_i(t)\\}_{t\\geq 1}$ is the sequence of period-by-period (mixed) strategies used by player $i$. The sequence of joint period-by-period strategies is given by $\\{g(t)\\}_{t\\geq 1}$, $g(t) := (g_1(t),\\ldots,g_n(t))$.\n\nDenote by $q_i(t) \\in \\Delta_i$, the empirical distribution of player $i$. The precise manner in which the empirical distribution\\footnote{The term \\emph{empirical distribution} is often used to refer explicitly to the time-averaged histogram of the action choices of some player $i$; i.e., $q_i(t) = \\frac{1}{t}\\sum_{s=1}^t a_i(s)$. Here, we allow for a broader definition that will permit interesting and useful algorithmic generalizations.} is formed will depend on the algorithm at hand. In general, $q_i(t)$ is formed as a function of the action history $\\{a_i(s)\\}_{s=1}^t$ and serves as a compact representation of the action history of player $i$ up to and including round $t$. The joint empirical distribution is given by $q(t) := (q_1(t),\\ldots,q_n(t))$.\n\nUnless otherwise stated, $d(\\cdot,~\\cdot)$ denotes the standard Euclidean distance. For $m\\geq 1$, $p\\in \\mathbb{R}^m$, and $S\\subset\\mathbb{R}^m$, define the distance from $p$ to $S$ by $d(p,~S) := \\inf\\{d(p,~p'):~p'\\in S\\}$. We say a repeated-play learning process converges \\emph{weakly} to equilibrium if for some map $f:\\Delta^n\\rightarrow\\Delta^n$ there holds $d(f(q(t)),~NE)\\rightarrow 0$ as $t\\rightarrow \\infty$. In most cases in this paper, $f$ will simply be the identity function. 
We say a repeated-play learning process converges \\emph{strongly}\\footnote{The notion of strong convergence presented in this paper is comparable to the notions of ``convergence in intended behavior'' presented in \\cite{Fud92} and ``convergence in strategic intentions'' given in \\cite{young2004strategic}.} to equilibrium if $d(g(t),~NE)\\rightarrow 0$ as $t\\rightarrow \\infty$. Note that weak learning implies that players \\emph{learn} an equilibrium strategy, but may never actually begin to implement the strategy that is being learned. On the other hand, in strong learning players both \\emph{learn} an equilibrium strategy and implement the strategy that is being learned (see Section \\ref{sec_weak_convergence} for more details).\n\n\\section{Fictitious Play}\n\\label{sec_FP}\n\\subsection{Fictitious Play}\n\\label{sec_FP_subsection}\nLet\n\\vskip-20pt\n\\begin{equation}\n\\label{q_FP}\nq_i(t) := \\frac{1}{t}\\sum_{s=1}^t a_i(s),\n\\end{equation}\n\\vskip-5pt\n\\noindent be the normalized histogram\\footnote{Recall that the actions $a_i(t)\\in A_i$ are Dirac distributions in the mixed-strategy space $\\Delta_i$.} of the actions of player $i$.\n\nFP may be intuitively understood as follows. Players repeatedly face off in a stage game $\\Gamma$. In any given stage of the game, players choose a next-stage action by assuming (perhaps incorrectly) that opponents are using stationary and independent strategies. Thus, in FP, players use the marginal empirical distributions of the opponents' past play, $q_{-i}(t)$, as a prediction of the opponents' behavior in the upcoming round and choose a next-round strategy which is a best response against this prediction.\n\nA sequence of actions $\\{a(t)\\}_{t\\geq 1}$ such that\\footnote{In all variants of FP discussed in this paper, the initial action $a_i(1)$ may be chosen arbitrarily for all $i$.\\label{footnote_initial_cond}}\n\\vskip-15pt\n\\begin{equation}\na_i(t+1) \\in BR_i(q_{-i}(t)),~\\forall i,\n\\label{FP_BR}\n\\end{equation}\n\\vskip-5pt\n\\noindent for all $t\\geq 1$, where $BR_i(\\cdot) := BR_i^{0}(\\cdot)$ denotes the set of exact (pure-strategy) best responses in \\eqref{BR_epsilon_set}, is referred to as a \\emph{fictitious play process}. FP has been studied extensively to determine the classes of games for which it can be said to converge (weakly) to the set of Nash equilibria.\nAmong other results, it has been shown that FP leads to weak learning in two-player zero-sum games \\cite{robinson1951iterative}, potential games \\cite{Mond96}, and generic $2\\times m$ games \\cite{berger2005fictitious}. We summarize these results in the following theorem.\n\\begin{theorem}\nLet $\\Gamma = (N,(Y_i,u_i)_{i\\in N})$ be a two-player zero-sum game, potential game, or generic $2\\times m$ game, and let $\\{a(t)\\}_{t\\geq 1}$ be a fictitious play process on $\\Gamma$. Then $d(q(t),~NE)\\rightarrow 0$ as $t\\rightarrow \\infty$.\n\\label{theorem_classical_fp}\n\\end{theorem}\n\n\\subsection{Weak Convergence in Fictitious Play}\n\\label{sec_weak_convergence}\nThe following example (see \\cite{young2004strategic}, p. 78), while fairly simple, clearly illustrates the phenomenon of weak convergence in FP, and demonstrates why weak convergence can be a deeply unsatisfactory notion of learning.\n\n\n\\begin{wrapfigure}{l}{0.35\\textwidth}\n \\begin{center}\n \\includegraphics[width=0.28\\textwidth]{figures\/miscoord_game.eps}\n \\end{center}\n \\caption{A two-player asymmetric coordination game.}\\label{fig:miscoord_game}\n\\end{wrapfigure}\n\nConsider the two-player asymmetric coordination game shown in Figure \\ref{fig:miscoord_game}. 
The game has three Nash equilibria: both players play A, both players play B, and an asymmetric mixed-strategy Nash equilibrium. The game is a potential game \\cite{Mond96} (in fact, an identical interests game \\cite{Mond01}) and hence falls within the purview of Theorem \\ref{theorem_classical_fp}---regardless of the initial conditions, players engaged in an FP process will learn an equilibrium in the weak sense that $d(q(t),~NE)\\rightarrow 0$ as $t\\rightarrow\\infty$.\n\nSuppose that the players are engaged in an FP process on this game, and in the first round they miscoordinate their actions (e.g., one chooses A, and the other chooses B). Young \\cite{young2004strategic} shows the somewhat counterintuitive result that the FP dynamics will in fact lead players to miscoordinate their action choices in every subsequent round of the learning process. Thus, despite the fact that $\\lim_{t\\rightarrow\\infty} d(q(t),~NE) = 0$, the players' realized action choices are extremely suboptimal---yielding the lowest possible utility in each round of play. Intuitively speaking, this phenomenon occurs when players' actions cycle in such a way as to drive the time-averaged empirical distribution to a mixed-strategy Nash equilibrium, yet players' period-by-period strategies never constitute (nor even approach) a Nash equilibrium themselves.\n\nIt may be said that in weak learning players ``learn'' a NE strategy in some abstract sense, but never actually implement the strategy they are learning. In strong learning, players not only learn a NE strategy, but they also physically implement the strategy that is being learned.\n\nThe following section presents a simple modification of FP that achieves strong learning; i.e., players' period-by-period strategies converge to equilibrium in addition to convergence of the empirical distributions.\n\n\n\n\n\\section{Strong Convergence in Classical Fictitious Play}\n\\label{sec_strong_fp}\n\nConsider a variant of FP in which the action for player $i$ at time $t$ is chosen by drawing a random sample from the mixed strategy (i.e., probability distribution) $g_i(t)$, where\n\\begin{equation}\ng_i(t) \\in BR_i(q_{-i}(t-1))\\rho_i(t) + q_i(t-1)(1-\\rho_i(t)),\n\\label{strong_FP_informal}\n\\end{equation}\n$\\rho_i(t) \\in [0,1]$, and $\\lim_{t\\rightarrow\\infty} \\rho_i(t) = 0$. Intuitively, this is similar to the classical FP process \\eqref{FP_BR}, but rather than playing a deliberate best response each round, players gradually transition toward drawing their stage $t$ action as a random sample from their own empirical distribution, $q_i(t)$.\n\nThe idea is that players will play a best response sufficiently often so that, per FP, the empirical distribution $q(t)$ will be driven toward equilibrium, as in Theorem \\ref{theorem_classical_fp}. Then, since $\\rho_i(t) \\rightarrow 0$ as $t\\rightarrow \\infty$, the mixed strategy $g_i(t)$ tends towards $q_i(t)$, which is itself tending towards equilibrium. Informally, \\eqref{strong_FP_informal} captures the main idea of strongly convergent FP.
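To make the informal rule \\eqref{strong_FP_informal} concrete, the following minimal sketch (in Python with NumPy) computes a pure best response and the resulting period-$t$ mixed strategy for a single player. The function names and the lowest-index tie-breaking are our own illustrative choices and are not part of the formal construction given below.\n\\begin{verbatim}\nimport numpy as np\n\ndef best_response(U_i, q_opp):\n    # Pure-strategy best response to an opponent distribution: a\n    # Dirac distribution on a maximizer of the expected payoff\n    # U_i @ q_opp (ties broken by lowest index).\n    b = np.zeros(U_i.shape[0])\n    b[np.argmax(U_i @ q_opp)] = 1.0\n    return b\n\ndef informal_strong_fp_strategy(U_i, q_i, q_opp, rho_t):\n    # Mixing rule: weight rho_t on a deliberate best response and\n    # weight (1 - rho_t) on the player's own empirical distribution.\n    return rho_t * best_response(U_i, q_opp) + (1.0 - rho_t) * q_i\n\\end{verbatim}\nNote that setting $\\rho_i(t) = 1$ for all $t$ reduces the rule to the classical best response \\eqref{FP_BR}.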
A formal presentation of the algorithm is given below.\n\n\n\\subsection{Strongly Convergent Variant of Classical FP}\n\\label{sec_strong_FP_construction}\nConsider a variant of FP in which the action for player $i$ at time $t$ is chosen according to the following randomized rule:\n\\vskip-20pt\n\\begin{equation}\na_i(t) \\sim g'_i(t) :=\n\\begin{cases}\nb_i(t-1), & \\mbox{ if } X_i(t) = 1,\\\\\nq_i(t-1), & \\mbox{otherwise},\n\\end{cases}\n\\label{action_rule0}\n\\end{equation}\nwhere\n$b_i(t-1) \\in BR_i(q_{-i}(t-1)),$ the notation\n$a_i(t) \\sim g'_i(t)$ indicates that the action $a_i(t)$ is drawn as a random sample\\footnote{The action $a_i(t) \\in A_i$ is technically a Dirac distribution over the finite action space $Y_i$ (see Section \\ref{sec_prelims}), and the mixed strategy $g_i'(t)$ is a probability distribution over $Y_i$. More precisely, the notation $a_i(t) \\sim g_i'(t)$ means that an action $y_i(t)$ is drawn as a random sample from $g_i'(t)$ with $a_i(t) := \\delta_{y_i(t)}(y_i)$, where $\\delta_{y_i(t)}(y_i) = 1$ if $y_i = y_i(t)$ and $\\delta_{y_i(t)}(y_i) = 0$ otherwise.} from the probability mass function $g'_i(t)$, $X_i(t) \\in \\{0,1\\}$ is a random variable, and $q_i(t)$ is the player's empirical distribution as defined in \\eqref{qt_update2} below.\nLet $\\mathcal{F}_t := \\sigma(\\{a(s),X_1(s),\\ldots,X_n(s),b_1(s),\\ldots,b_n(s)\\}_{s\\leq t}),$\nand note that $g'_{i}(t)$ is $\\mathcal{F}_{t}$-measurable.\nLet\n\\vskip-20pt\n\\begin{equation}\n\\rho_i(t) := \\mathbb{P}(X_i(t) = 1\\vert~\\mathcal{F}_{t-1}),\n\\end{equation}\n\\vskip-5pt\n\\noindent and note that $\\rho_i(t)$ is $\\mathcal{F}_{t-1}$-measurable.\nIntuitively speaking, $\\rho_i(t)$ represents the probability that player $i$ deliberately chooses to play a best response strategy in round $t$ given the history of play up through the previous round.\nWe make the following assumptions regarding each player's probability of deliberately choosing a best response:\n\\begin{assumption}\n$\\lim\\limits_{t\\rightarrow\\infty} \\rho_i(t) = 0, ~\\forall i\\in N$, a.s.,\n\\label{rho_a1}\n\\end{assumption}\n\\begin{assumption}\n$\\sum\\limits_{t\\geq 1} \\rho_i(t) = \\infty, ~\\forall i\\in N$, a.s.,\n\\label{rho_a2}\n\\end{assumption}\n\\begin{assumption}\n$\\lim\\limits_{t\\rightarrow\\infty} \\frac{\\sum_{k=1}^t\\rho_i(k)}{\\sum_{k=1}^t\\rho_j(k)} = 1, ~\\forall i,j \\in N,$ a.s.\n\\label{rho_a3}\n\\end{assumption}\n\nThe first assumption ensures that players eventually transition towards playing their next-stage action as a sample from their empirical distribution rather than playing a deliberate best response. The second assumption ensures that, for each player, a deliberate best response is played infinitely often. The third assumption ensures that the number of deliberate best responses taken by each player remains relatively in sync.\\footnote{Note that since $\\rho_i(t)$ is only required to be $\\mathcal{F}_{t-1}$-measurable, this parameter is in fact adaptively tunable.
This is a feature of practical interest since it allows players to adjust their deliberate best response rates on the fly---possibly adapting to the (initially unknown) deliberate best response rates of others and to underlying process dynamics---in order to satisfy \\textbf{A.\\ref{rho_a1}}--\\textbf{A.\\ref{rho_a3}}.}\nIn practice, players may choose their deliberate best responses completely asynchronously; for example, setting $\\rho_i(t) = 1\/t^{r},~\\forall i$, with $r\\in (0,1]$, results in (purely) independent sampling of deliberate best response rounds and secures \\textbf{A.\\ref{rho_a1}}--\\textbf{A.\\ref{rho_a3}}.\n\n\nLet\n\\vskip-25pt\n\\begin{equation}\n\\ell_i(t) := \\sum\\limits_{k=1}^{t} X_i(k)\n\\label{ell_def}\n\\end{equation}\n\\vskip-5pt\n\\noindent count the number of times player $i$ has deliberately played a best response up to and including round $t$. Note that $\\ell_i(t)$ is $\\mathcal{F}_t$-measurable. The empirical distribution $q_i(t)$ is defined recursively as\\footnote{To initialize the process, let the action $a_i(1)$ be chosen arbitrarily, let $q_i(1) = a_i(1)$, and let $X_i(1) = 1$ for all $i$.}\n\\vskip-15pt\n\\begin{equation}\nq_i(t+1) = q_i(t) + \\frac{1}{\\ell_i(t+1)}\\left(a_i(t+1) - q_i(t)\\right)X_i(t+1).\n\\label{qt_update2}\n\\end{equation}\n\\vskip-5pt\n\\noindent Intuitively speaking, the empirical distribution \\eqref{qt_update2} is updated only over rounds when a deliberate best response was played. Note that $q_i(t)$ is $\\mathcal{F}_t$-measurable.\\footnote{Note that \\eqref{action_rule0} implicitly assumes that players have knowledge of the empirical distributions of opponents when computing a best response. This may be accomplished by assuming that players' actions are accompanied by a ``tag'' indicating whether or not the played action was a deliberate best response. Alternatively, the information regarding $q_i(t)$ may be tracked by the individual player $i$ and disseminated by a gossip-type algorithm \\cite{swenson2012ECFP} or implicitly disseminated through a payoff-based scheme.}\n\nFinally, let\n\\vskip-20pt\n\\begin{equation}\ng_i(t) := b_i(t-1)\\rho_i(t) + q_i(t-1)(1-\\rho_i(t)),\n\\label{g_def}\n\\end{equation}\n\\vskip-5pt\n\\noindent and note that $g_i(t)$ is $\\mathcal{F}_{t-1}$-measurable.\\footnote{To see this, note first that $q_j(t-1)$, $j\\in N$, and $\\rho_i(t)$ have been shown to be $\\mathcal{F}_{t-1}$-measurable. Furthermore, this implies that $BR_i(q_{-i}(t-1))$ is $\\mathcal{F}_{t-1}$-measurable. Lastly, by construction, $b_i(t-1) \\in BR_i(q_{-i}(t-1))$ is $\\mathcal{F}_{t-1}$-measurable.} More importantly, note that for every $\\alpha_i \\in A_i$,\n$g_i(\\alpha_i,t) = \\mathbb{P}(a_i(t) = \\alpha_i\\vert~\\mathcal{F}_{t-1}),$\nand thus $g_i(t)$ represents the mixed strategy (conditioned on past play) used by player $i$ in round $t$.
The joint mixed strategy used in round $t$ is given by $g(t) := (g_1(t),\\ldots,g_n(t))$.\n\nWe refer to a process where, for each player $i$, $a_i(t)$ is updated according to \\eqref{action_rule0}, $q_i(t)$ is updated according to \\eqref{qt_update2}, and $g_i(t)$ is updated according to \\eqref{g_def} as the strongly convergent variant of (classical) FP (for reasons that will become clear shortly).\n\n\\subsection{Strong Convergence in Classical FP: Main Result}\nThe following result states that in the strongly convergent variant of FP, players' period-by-period mixed strategies converge to the set of Nash equilibria---i.e., strong learning is achieved.\n\\begin{cor}\nLet $\\Gamma$ be a two-player zero-sum game, potential game, or generic $2\\times m$ game. Assume \\textbf{A.\\ref{rho_a1}}--\\textbf{A.\\ref{rho_a3}} hold. Then the strongly convergent variant of FP achieves strong learning in the sense that $\\lim_{t\\rightarrow\\infty} d(g(t),~NE) = 0$ almost surely.\n\\label{theorem_main_result}\n\\end{cor}\n\nIn order to prove the above result, we first study a more general notion of fictitious play and then prove the result as a corollary of the general theorem (see Theorem \\ref{theorem_general_result}). Taking this general approach allows our strong convergence results to be applied to other FP-type algorithms, e.g., Generalized Weakened FP (Section \\ref{sec_apps2}) and Empirical Centroid FP (Section \\ref{sec_apps3}). The proof of Corollary \\ref{theorem_main_result} is given in Section \\ref{sec_apps1}.\n\n\\subsection{Simulation Example}\nIn order to demonstrate the learning properties of strongly convergent FP, we simulated classical FP and strongly convergent FP in a simple two-player matching pennies game with utility functions as shown in Figure \\ref{fig:payoff_matrix}. The game has a unique (symmetric) mixed-strategy equilibrium in which both players choose either action with probability $1\/2$. Figure \\ref{fig:FP_strategies} shows the period-by-period strategies generated by classical FP. Players' strategies are always pure and progress in continuously lengthening cycles. While the time-averaged empirical distribution is being driven to equilibrium, the period-by-period strategies clearly are not.\n\nFigure \\ref{fig:strong_FP_strategies} shows the period-by-period strategies generated by strongly convergent FP with $\\rho(t) = t^{-.35}$. Players' period-by-period strategies are converging to the unique Nash equilibrium of the game.\n\nFigure \\ref{fig:received_utility} shows the utility received by the realized joint action $a(t)$ in each round of repeated play for both learning algorithms. The received payoffs in classical FP cycle around the value of the game, while the received payoffs in strongly convergent FP converge to the value of the game.\n\nOne possible tradeoff in strongly convergent FP is that less frequent deliberate best response actions and less frequent updating of the empirical distribution (see \\eqref{qt_update2}) may lead to a slow-down in convergence rate.
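The simulation may be reproduced, up to incidental choices (payoff normalization, random seed, and horizon), with a short script. The following sketch (in Python with NumPy) implements the strongly convergent variant \\eqref{action_rule0}, \\eqref{qt_update2}, \\eqref{g_def} on matching pennies with independent deliberate best-response sampling and $\\rho_i(t) = t^{-.35}$.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\nU = [np.array([[1.0, -1.0], [-1.0, 1.0]]),   # player 1 wins on a match\n     np.array([[-1.0, 1.0], [1.0, -1.0]])]   # player 2 wins on a mismatch\n\ndef best_response(U_i, q_opp):\n    b = np.zeros(2)\n    b[np.argmax(U_i @ q_opp)] = 1.0\n    return b\n\nT = 20000\nq = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # q_i(1) = a_i(1), arbitrary\nell = [1, 1]                                      # X_i(1) = 1 by convention\nfor t in range(2, T + 1):\n    rho = t ** (-0.35)                            # rho_i(t) = t^(-.35)\n    b = [best_response(U[i], q[1 - i]) for i in range(2)]    # b_i(t-1)\n    g = [rho * b[i] + (1.0 - rho) * q[i] for i in range(2)]  # g_i(t)\n    for i in range(2):\n        if rng.random() < rho:                    # X_i(t) = 1: deliberate BR round\n            ell[i] += 1\n            q[i] = q[i] + (b[i] - q[i]) / ell[i]  # update only on BR rounds\n        # otherwise a_i(t) is a draw from q_i(t-1); q_i is left unchanged\n\nprint(g[0], g[1])  # period-T mixed strategies; both approach (1/2, 1/2)\n\\end{verbatim}\nNote that the empirical distributions are updated only on deliberate best-response rounds, so the effective step size is $1\/\\ell_i(t)$ rather than $1\/t$; this is the source of the slow-down discussed above.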
The empirical distribution processes for player $1$ in each algorithm are shown in Figure \\ref{fig:empirical_dist} with $\\rho(t) = t^{-.35}$.\n{\\small\n\\begin{figure}\n \\centering\n \\begin{subfigure}[b]{0.27\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/match_pennies_utility_matrix.eps}\n \\caption{}\n \\label{fig:payoff_matrix}\n \\end{subfigure}\n ~\n \n \\begin{subfigure}[b]{0.3\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/fp_strategies.eps}\n \\caption{}\n \\label{fig:FP_strategies}\n \\end{subfigure}\n ~\n \n \\begin{subfigure}[b]{0.3\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/strong_fp_strategies.eps}\n \\caption{}\n \\label{fig:strong_FP_strategies}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.3\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/match_pennies_utilities.eps}\n \\caption{}\n \\label{fig:received_utility}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.3\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/emp_distributions.eps}\n \\caption{}\n \\label{fig:empirical_dist}\n \\end{subfigure}\n \\caption{\\small \\ref{fig:payoff_matrix}: Matching pennies payoff matrix; \\ref{fig:FP_strategies}: the probability of each player playing heads in round $t$ using the classical FP algorithm; \\ref{fig:strong_FP_strategies}: the probability of each player playing heads in round $t$ using the strongly convergent FP algorithm; \\ref{fig:received_utility}: the received utility in round $t$ given the realized action $a(t)$; \\ref{fig:empirical_dist}: the empirical distribution process of the action H (heads) for player $1$ in both FP and strongly convergent FP.}\\label{fig:simulation}\n\\end{figure}\n}\n\\section{General Setup}\n\\label{sec_general_setup}\nIn this section we study strong learning in FP-type algorithms---a class of algorithms that generalizes FP and includes many learning processes based on best-response dynamics.\\footnote{The class of FP-type algorithms proposed here is similar in spirit to the class of best-response-based algorithms considered in \\cite{jordan1993}.} In Section \\ref{sec_FP_type}, we define the notion of an FP-type algorithm. In Section \\ref{sec_FP_type_examples} we present some examples of FP-type algorithms. In Section \\ref{sec_FP_type_strong} we define the strongly convergent variant of an FP-type algorithm. In Section \\ref{sec_FP_type_main_result} we provide the general strong convergence result for an FP-type algorithm (see Theorem \\ref{theorem_general_result}), and in Sections \\ref{sec_additional_defs}--\\ref{sec_proof_general_result} we prove the general result.\n\n\\subsection{FP-Type Algorithm}\n\\label{sec_FP_type}\nAn FP-type algorithm generalizes classical FP in the following ways: (i) the notion of a player's empirical distribution is generalized, (ii) players are permitted to use a function of the empirical distribution (rather than use the empirical distribution itself) as a predictor of the next-round strategy of opponents, (iii) convergence to equilibrium may occur in terms of a function of the empirical distribution (rather than convergence to equilibrium of the empirical distribution itself), and (iv) limit sets other than the set of NE are permitted.\n\nWe define an FP-type algorithm as follows. Let players be engaged in repeated play of a stage game $\\Gamma$.
Let $a_i(t)$ represent the action of player $i$ in round $t\\in\\{1,~2,\\ldots\\}$, and let\n$H_i(t) := \\{a_i(s)\\}_{s=1}^t$\nrepresent the action history of player $i$ up to and including round $t$.\n\nIn classical FP, for each player $i$, the normalized histogram of the player's action choices \\eqref{q_FP} is used as a compact representation of the player's action history. In the general formulation of an FP-type algorithm, we still suppose that players track a compact representation of the action history, but we allow the compact representation to take on a fairly general form,\\footnote{In most literature, the notion of an \\emph{empirical distribution} refers strictly to the time-averaged empirical histogram of a player's action choices, as in classical FP \\eqref{q_FP}. However, as discussed in Section \\ref{sec_prelims}, we use the term empirical distribution more generally to refer to an arbitrarily formed (see \\textbf{A.\\ref{a_general_q}}) distribution that a player uses to track information regarding opponents' empirical action histories. This abuse of terminology allows us to more naturally extend concepts to the general FP-type setting.} as stated in the following assumption:\n\\begin{assumption}\n\\label{a_general_q}\nThe empirical distribution of player $i$ is of the form $q_i(t) := f^q_i(H_i(t),~t)$, where $f^q_i(\\cdot,~t):\\prod_{s=1}^t A_i \\rightarrow \\Delta_{i}$.\n\\end{assumption}\nWe make the following assumption regarding the sequence of functions $\\{f^q_i(\\cdot,~t)\\}_{t\\geq 1}$ used to form the empirical distribution sequence of player $i$:\n\\begin{assumption}\n\\label{a_step_size_bound}\nFor any history sequence $\\{H_i(t)\\}_{t\\geq 1}$ for player $i$, there holds $\\lim_{t\\rightarrow\\infty} \\|f^q_i(H_i(t+1),~t+1) - f^q_i(H_i(t),~t)\\| = 0$.\n\\end{assumption}\n\nIn particular, this implies that---regardless of the action history---there holds\\\\ $\\lim_{t\\rightarrow \\infty} \\|q_i(t+1) - q_i(t)\\| = 0$ for each player $i$. This fairly mild assumption captures the essential characteristics required for our asymptotic analysis, and may be seen as a generalization of classical FP where exact averaging of actions over time yields $\\|q_i(t+1) - q_i(t)\\| \\leq \\frac{M_i}{t+1}$ (see Section \\ref{sec_FP_example}). Together, assumptions \\textbf{A.\\ref{a_general_q}}--\\textbf{A.\\ref{a_step_size_bound}} allow us to consider a variety of FP-inspired algorithms, including those with general step sizes \\cite{leslie2006generalised} and those with more intricate history-dependent rules such as derivative action \\cite{Arslan04}.\n\nIn an FP-type algorithm, players form a prediction of the future behavior of opponents as a function of the current empirical distribution. Let $p_i(t)$ be player $i$'s prediction of opponent strategies for the upcoming round $(t+1)$.
We assume the following:\n\\begin{assumption}\nPlayer $i$'s prediction $p_i(t)$ of opponent behavior is of the form $p_i(t) = f_i^p(q(t))$, where $f_i^p:\\Delta^n \\rightarrow \\Delta_{-i}$ is a Lipschitz continuous, time-invariant function.\n\\label{a_prediction}\n\\end{assumption}\n\nWe say a sequence of actions $\\{a(t)\\}_{t\\geq 1}$ is an FP-type process if for all $i\\in N$ and all $t\\geq 1$,\n$a_i(t+1) \\in BR_i^{\\epsilon_t}(p_i(t)),$\nwhere $BR^{\\epsilon_t}_i(\\cdot)$ is the $\\epsilon_t$-best response set (recall \\eqref{BR_epsilon_set}), and $\\{\\epsilon_t\\}_{t\\geq 1}$ is a sequence satisfying $\\lim_{t\\rightarrow\\infty} \\epsilon_t = 0$.\n\nIn many variants of FP, including classical FP, learning occurs in the sense that $d(q(t),~NE)\\rightarrow 0$. We generalize this notion of learning by allowing for limit sets other than the set of NE and allowing for convergence in terms of a function of $q(t)$ rather than permitting convergence only in terms of $q(t)$ itself.\n\nLet $E$ be some target equilibrium set (not necessarily the set of NE). An FP-type process is said to learn elements of $E$ if for each $i$ there exists a function $f^\\xi_i$ satisfying:\n\\begin{assumption}\nThe function $f^\\xi_i:\\Delta^n\\rightarrow\\Delta_i$ is Lipschitz continuous and time-invariant,\n\\label{a_xi}\n\\end{assumption}\nand such that, for $\\xi_i(t) := f^\\xi_i(q(t))$ and $\\xi(t) := (\\xi_1(t),\\ldots,\\xi_n(t))$ there holds\n$\\lim_{t\\rightarrow \\infty} d(\\xi(t),~E) = 0.$\nWe refer to $\\xi(t)$ as the asymptotic learning distribution, and $f^\\xi_i$ as the convergence map of player $i$.\n\nIn general, we will denote an instance of an FP-type learning algorithm by $\\Psi = (\\{f^q_i(\\cdot,~t)\\}_{t\\geq 1},f_i^p,f^\\xi_i)_{i\\in N}$.\nIn order to construct a strongly convergent variant of $\\Psi$ we will require that $\\Psi$ obtain weak convergence in a sufficiently robust sense, as stated in the following assumption.\n\n\\begin{assumption}\nFor the stage game $\\Gamma$ and equilibrium set $E$, the FP-type algorithm $\\Psi$ is such that, for any sequence $(\\epsilon_t)_{t\\geq 1}$ satisfying $\\lim_{t\\rightarrow\\infty} \\epsilon_t = 0$, weak convergence obtains in the sense that $\\lim_{t\\rightarrow \\infty} d(\\xi(t),~E)= 0$.\n\\label{a_robustness}\n\\end{assumption}\n\nThe above assumption ensures that the FP-type algorithm is robust to asymptotically decaying perturbations in a player's best response set. When studying the strongly convergent variant of $\\Psi$ in the following section, the assumption \\textbf{A.\\ref{a_robustness}} will serve to ensure that convergence of the process is not disrupted by minor asynchronies in the number of deliberate best responses taken by each player (i.e., minor disparities in \\eqref{ell_def}).\n\n\\subsection{Examples}\n\\label{sec_FP_type_examples}\n\\subsubsection{Classical Fictitious Play}\n\\label{sec_FP_example}\nClassical FP (Section \\ref{sec_FP_subsection}) fits the template of an FP-type algorithm with $q_i(t) = \\frac{1}{t}\\sum_{s=1}^t a_i(s)$. Note that $q_i(t)$ may be written in recursive form as: $q_i(t+1) = q_i(t) + 1\/(t+1)\\left(a_i(t+1) - q_i(t) \\right)$. Thus, $\\|q_i(t+1) - q_i(t)\\| \\leq \\frac{M_i}{t+1}$, where $M_i := \\sup_{p_i',p_i''\\in \\Delta_i} \\|p_i' - p_i''\\|$, and \\textbf{A.\\ref{a_step_size_bound}} is satisfied. The prediction map $f_i^p$ is given by the identity function, and the convergence map $f^\\xi_i$ is also given by the identity function.
The target equilibrium set is given by $E := NE$, the set of Nash equilibria.\n\n\\subsubsection{Generalized Weakened Fictitious Play}\n\\label{sec_GWFP_intro}\nLeslie et al. \\cite{leslie2006generalised} study a useful generalization of FP, termed Generalized Weakened FP (GWFP), in which players are permitted to choose a suboptimal best response each round, so long as the degree of suboptimality decays asymptotically to zero, and in which step-size sequences other than $\\{1\/t\\}_{t\\geq 1}$ are permitted.\n\nFormally, for $p_{-i} \\in \\Delta_{-i}$ and $\\epsilon\\geq 0$, let\\footnote{The set $\\bar{BR^{\\epsilon}_i}(p_{-i})$ defined below differs from the set $BR_i^\\epsilon(p_{-i})$ defined in the preliminaries in that $\\bar{BR^{\\epsilon}_i}(p_{-i})$ includes all \\emph{mixed} strategy best responses, whereas $BR_i^\\epsilon(p_{-i})$ contains only the pure strategy best responses. The set $\\bar{BR^{\\epsilon}_i}(p_{-i})$ is used here in order to precisely define a GWFP process as given in \\cite{leslie2006generalised}, but the remainder of the paper focuses on the set $BR_i^\\epsilon(p_{-i})$.}\n$\\bar{BR^{\\epsilon}_i}(p_{-i}) := \\{p_i\\in \\Delta_i: U_i(p_i,p_{-i}) \\geq \\max_{\\alpha_i \\in A_i}U_i(\\alpha_i,p_{-i})-\\epsilon\\},$\nand for $p\\in \\Delta^n$, let $\\bar{BR^{\\epsilon}}(p) := (\\bar{BR^{\\epsilon}_1}(p_{-1}),\\ldots,\\bar{BR^{\\epsilon}_n}(p_{-n}))$. A sequence $\\{q(t)\\}_{t\\geq 1}$ is said to be a GWFP process if\n$q(t+1) \\in (1-\\gamma(t+1))q(t) + \\gamma(t+1)(\\bar{BR^{\\epsilon_t}}(q(t)) + M_{t+1})$\nwith $\\gamma(t) \\rightarrow 0$ and $\\epsilon_t \\rightarrow 0$ as $t\\rightarrow \\infty$, $\\sum_{t\\geq 1} \\gamma(t) = \\infty$, and $\\{M_t\\}_{t\\geq 1}$ is a deterministic (or stochastic) perturbation sequence satisfying\n$\\lim\\limits_{t\\rightarrow\\infty}\\sup_{k}\\{\\|\\sum_{i=t}^{k-1} \\gamma(i+1) M_{i+1} \\|:\\sum_{i=t}^{k-1}\\gamma(i+1) \\leq T\\} = 0$ for all $T>0$ (a.s.).\n\nWe consider a special case of GWFP in which $M_t = 0, ~\\forall t$ and the $\\epsilon$-best response set is restricted to the set of pure strategy $\\epsilon$-best responses. That is, we consider the subset of GWFP processes such that\n$a_i(t+1) \\in BR_i^{\\epsilon_t}(q_{-i}(t)),~\\forall i,$\nand\n\\begin{equation}\n\\label{q_gwfp_def}\nq(t+1) = q(t) + \\gamma(t+1)\\left(a(t+1) - q(t) \\right),\n\\end{equation}\nwith $\\epsilon_t \\rightarrow 0$,\nand in a slight variation of terminology we refer to the sequence of actions $\\{a(t)\\}_{t\\geq 1}$ satisfying the above as a GWFP process.\n\nIn the terminology of Section \\ref{sec_FP_type}, GWFP fits the template of an FP-type algorithm with the empirical distribution $q_i(t)$ defined recursively as in \\eqref{q_gwfp_def} (where it is assumed that $\\lim_{t\\rightarrow\\infty} \\gamma(t) = 0$), the prediction map $f_i^p$ and the convergence map $f^{\\xi}_i$ given by the identity function for all $i$, and the target equilibrium set given by $E := NE$---the set of Nash equilibria.\n\n\\subsubsection{Empirical Centroid Fictitious Play---Learning Consensus Equilibria}\n\\label{sec_ECFP_intro}\nEmpirical Centroid FP (ECFP) was conceived as a variant of FP suited to implementation in large-scale games \\cite{swenson2012ECFP,Swenson-MFP-Asilomar-2012}. In ECFP, rather than tracking the empirical distribution of each individual opponent (as in FP), players track and respond to only the centroid of the empirical distributions.
In order to ensure the process is well defined, the following assumption is made:\n\\begin{assumption}\n\\label{a_ident_strat}\nAll players use the same strategy space.\n\\end{assumption}\nUnder this assumption, let the empirical distribution be defined by\n\\vskip-20pt\n\\begin{equation}\n\\label{q_ecfp_def}\nq_i(t) := \\frac{1}{t}\\sum_{s=1}^t a_i(s),\n\\end{equation}\n\\vskip-5pt\n\\noindent and let the empirical centroid distribution be defined by\n$\\bar q(t) := \\frac{1}{n}\\sum_{i\\in N} q_i(t).$\nWe say a sequence of actions $\\{a(t)\\}_{t\\geq 1}$ is an ECFP process if for all $i$ and all $t\\geq 1$,\n\\begin{equation}\na_i(t+1) \\in BR_i^{\\epsilon_t}(\\bar q_{-i}(t)),\n\\label{ecfp_process}\n\\end{equation}\nwhere $\\bar q_{-i}(t) = (\\bar q(t),\\ldots,\\bar q(t)) \\in \\prod_{j\\not= i} \\Delta_j$ is the $(n-1)$-tuple containing $(n-1)$ repeated copies of $\\bar q(t)$, and $\\{\\epsilon_t\\}_{t\\geq 1}$ is a sequence satisfying $\\lim_{t\\rightarrow\\infty} \\epsilon_t = 0$.\n\nIn ECFP, players learn elements of the set of consensus Nash equilibria\\footnote{We assume here that the set of consensus Nash equilibria is non-empty. When revisiting ECFP in Section \\ref{sec_apps3}, we provide an assumption on the utility structure that ensures that the set is indeed non-empty.}, defined by\n$C:= \\{p = (p_1,~\\ldots~,p_n)\\in NE:~ p_1 = p_2 = \\ldots = p_n\\},$\nthe subset of Nash equilibria in which all players use identical strategies (see \\cite{swenson2012ECFP} for more details).\nDefine $\\bar q^n(t) := (\\bar q(t),\\ldots,\\bar q(t)) \\in \\Delta^n$ to be the $n$-tuple containing repeated copies of $\\bar q(t)$; learning in ECFP takes place in the sense that $\\lim_{t\\rightarrow\\infty} d(\\bar q^n(t),~C) = 0$.\n\nIn the terminology of Section \\ref{sec_FP_type}, ECFP fits the template of an FP-type algorithm with the empirical distribution given by \\eqref{q_ecfp_def}, the prediction map $f_i^p$ given by\n$f_i^p(q(t)) := \\left(\\frac{1}{n}\\sum_{j\\in N}q_j(t), \\ldots, \\frac{1}{n}\\sum_{j\\in N}q_j(t)\\right),~ \\forall i,$\nwhere the right-hand side is an $(n-1)$-tuple containing repeated copies of $\\bar q(t)$,\nand the convergence map given by\n$f^\\xi_i(q(t)) := \\frac{1}{n}\\sum_{j=1}^n q_j(t),~\\forall i.$\nThe target equilibrium set is given by $E:= C$, the set of consensus Nash equilibria.\n\n\\subsubsection{Empirical Centroid Fictitious Play---Learning Mean-Centric Equilibria}\n\\label{sec_ecfp_mce_intro}\nIn this section we consider a slight modification of the ECFP algorithm presented in Section \\ref{sec_ECFP_intro} that enables players to learn elements of an alternate (non-Nash) equilibrium set.\n\nLet an ECFP action process be defined as in \\eqref{ecfp_process}. Define the set of mean-centric equilibria by\n$MCE := \\{p \\in \\Delta^n: ~U_i(p_i,~\\bar p_{-i}) \\geq U_i(p_i',~\\bar p_{-i})~\\forall p_i' \\in \\Delta_i,~\\forall i\\},$\nwhere $\\bar p := \\frac{1}{n}\\sum_{j\\in N} p_j$ and $\\bar p_{-i}$ is the $(n-1)$-tuple containing repeated copies of $\\bar p$ (analogous to $\\bar q_{-i}(t)$ above).\nThe set of MCE is neither a superset nor a subset of the NE---rather, it is a set of natural equilibrium points tailored to the ECFP dynamics \\cite{swenson2013MCE}. The set of consensus Nash equilibria $C$ (see Section \\ref{sec_ECFP_intro}), however, is contained in the set of MCE.\n\nIn ECFP, players learn elements of MCE in the sense that $\\lim_{t\\rightarrow\\infty}d(q(t),~MCE)= 0$.
In the terminology of Section \\ref{sec_FP_type}, this fits the template of an FP-type algorithm with $q_i(t)$ given by \\eqref{q_ecfp_def}, $f_i^p$ defined in the same way as in Section \\ref{sec_ECFP_intro}, the convergence map $f^\\xi_i$ given by the identity for all $i$, and the target equilibrium set given by $E := MCE$.\n\nNote that the only difference between the ECFP algorithm discussed in Section \\ref{sec_ECFP_intro} and the ECFP algorithm discussed here is the choice of target equilibrium set $E$ and convergence maps $f^\\xi_i$.\n\n\\subsection{Strongly Convergent Variant of an FP-type Algorithm}\n\\label{sec_FP_type_strong}\nIn this section we construct the strongly convergent variant of an FP-type learning algorithm. The construction here is a generalization of that of Section \\ref{sec_strong_FP_construction} where we constructed the strongly convergent variant of classical FP.\n\nLet $\\Psi = (\\{f^q_i(\\cdot,~t)\\}_{t\\geq 1},~f_i^p,~f^\\xi_i)_{i\\in N}$ be an FP-type learning algorithm.\nFor each $i\\in N$, let $\\{X_i(t)\\}_{t\\geq 1}$ be a sequence of random variables with $X_i(t) \\in \\{0,1\\}$. Analogous to Section \\ref{sec_strong_fp}, $X_i(t)=1$ will serve to indicate that player $i$ took a deliberate best response in round $t$. Let\n\\vskip-20pt\n\\begin{equation}\n\\label{ell_def2}\n\\ell_i(t) := \\sum_{s=1}^t X_i(s)\n\\end{equation}\n\\vskip-5pt\n\\noindent count the number of deliberate best responses taken by player $i$ through $t$.\n\nIn Section \\ref{sec_strong_FP_construction} the empirical distribution of player $i$, \\eqref{qt_update2}, is a time average taken only over rounds when player $i$ took a deliberate best response. In order to generalize this notion to an FP-type algorithm, define the term\n\\begin{equation}\n\\label{tau_def}\n\\tau_i(s) := \\inf\\{t:\\ell_i(t)=s\\}.\n\\end{equation}\nFor $s\\geq 1$, $\\tau_i(s)$ indicates the round when player $i$ took their $s$-th deliberate best response,\\footnote{Note that by \\eqref{tau_exist}, $\\tau_i(s)$ is finite valued a.s. for any $s\\in \\{1,~2,\\ldots\\}$.}\nand the sequence $\\{\\tau_i(s)\\}_{s\\geq 1}$ gives the subsequence of rounds when player $i$ took a deliberate best response. For $t\\in \\{1,~2,\\ldots\\}$ let\n$\\bar H_i(t) := \\{a_i(\\tau_i(s)):~\\tau_i(s) \\leq t\\}$\ndenote the action history of player $i$. Note that $\\bar H_i(t)$ records only the history of actions that were taken as deliberate best responses.\nLet the empirical distribution of player $i$ at time $t$ be formed as\n\\vskip-18pt\n\\begin{equation}\n\\label{qt_general_strong_update}\nq_i(t) := f_i^q(\\bar H_i(t),~\\ell_i(t)).\n\\end{equation}\n\\vskip-5pt\n\\noindent Let the asymptotic learning distribution (see \\textbf{A.\\ref{a_xi}} and subsequent discussion) be given by $\\xi_i(t) := f_i^\\xi(q(t))$ and $\\xi(t) := (\\xi_1(t),\\ldots,\\xi_n(t))$.\n\nLet the action for player $i$ in round $t\\geq 2$ be chosen according to the randomized rule\\footnote{To initialize the process, let the action $a_i(1)$ be chosen arbitrarily, let $X_i(1) = 1$, and let $\\bar H_i(1) = \\{a_i(1)\\}$ for all $i$.}\n\\begin{equation}\na_i(t) \\sim g'_i(t) :=\n\\begin{cases}\nb_i(t-1), & \\mbox{ if } X_i(t) = 1,\\\\\n\\xi_i(t-1), & \\mbox{otherwise},\n\\end{cases}\n\\label{action_rule1}\n\\end{equation}\nwhere $p_i(t-1) = f_i^p(q(t-1))$, and $b_i(t-1) \\in BR^{\\eta_t}_i(p_i(t-1)),$\nand assume:\\footnote{Note that this assumption subsumes the more typical assumption that $\\eta_t = 0,~\\forall t$.
By making this more general assumption we are able to handle interesting scenarios that may arise in a practical implementation of the algorithm; e.g., players have some asymptotically decaying error in their knowledge of their utility function or knowledge of opponents' empirical distributions.}\n\\begin{assumption}\n\\label{a_eta}\nThe sequence $(\\eta_t)_{t\\geq 1}$ associated with $b_i(t-1)$ of \\eqref{action_rule1} is such that $\\lim\\limits_{t\\rightarrow\\infty} \\eta_t = 0$.\n\\end{assumption}\nLet $\\mathcal{F}_t := \\sigma(\\{a(s),X_1(s),\\ldots,X_n(s),b_1(s),\\ldots,b_n(s)\\}_{s\\leq t}).$\nLet the probability that player $i$ chooses a deliberate best response in round $t$ conditioned on past events be given by\n$\\rho_i(t) := \\mathbb{P}(X_i(t) = 1\\vert \\mathcal{F}_{t-1}),$\nand assume \\textbf{A.\\ref{rho_a1}}--\\textbf{A.\\ref{rho_a3}} hold.\nNote that $q_i(t)$, $p_i(t)$, $\\xi_i(t)$, and $g_i'(t)$ are $\\mathcal{F}_t$-measurable and that by definition, $\\rho_i(t)$ is $\\mathcal{F}_{t-1}$-measurable.\n\nFinally, let\n\\vskip-15pt\n\\begin{equation}\n\\label{g_def_general}\ng_i(t) := b_i(t-1)\\rho_i(t) + \\xi_i(t-1)(1-\\rho_i(t)).\n\\end{equation}\n\\vskip-5pt\n\\noindent Note that $g_i(t)$ is $\\mathcal{F}_{t-1}$-measurable and that $g_i(\\alpha_i,t) = \\mathbb{P}(a_i(t) = \\alpha_i\\vert \\mathcal{F}_{t-1})$; that is, $g_i(t)$ represents the mixed strategy in use by player $i$ in round $t$ (compare with \\eqref{g_def}). Let $g(t) := (g_1(t),\\ldots,g_n(t))$ denote the joint mixed strategy in use at time $t$.\n\nWe refer to a process where, for each player $i$, $q_i(t)$ is updated according to \\eqref{qt_general_strong_update}, $a_i(t)$ is updated according to \\eqref{action_rule1}, and $g_i(t)$ is updated according to \\eqref{g_def_general} as the strongly convergent variant of $\\Psi$ (for reasons that will become clear shortly---see Theorem \\ref{theorem_general_result}). In Section \\ref{sec_apps} we will demonstrate applications of this construction in the context of the previous examples.\n\n\\subsection{General Result}\n\\label{sec_FP_type_main_result}\nThe following theorem provides the general result from which the strong convergence of various FP-type algorithms can be derived.\n\\begin{theorem}\nLet $\\Gamma$ be a finite normal form game, let $E$ be an equilibrium set, and let $\\Psi$ be an FP-type algorithm satisfying \\textbf{A.\\ref{a_general_q}}--\\textbf{A.\\ref{a_robustness}}. If the strongly convergent variant of $\\Psi$ satisfies \\textbf{A.\\ref{rho_a1}}--\\textbf{A.\\ref{rho_a3}} and \\textbf{A.\\ref{a_eta}}, then it achieves strong learning in the sense that $\\lim_{t\\rightarrow\\infty} d(g(t),~E) = 0$, almost surely.\n\\label{theorem_general_result}\n\\end{theorem}\n\nWe emphasize that in the above result players' period-by-period mixed strategies $g(t)$ are converging to equilibrium. In general,\nwhen seeking to construct the strongly convergent variant of some FP-type algorithm $\\Psi$, the most challenging aspect of applying Theorem \\ref{theorem_general_result} is the verification that $\\Psi$ satisfies \\textbf{A.\\ref{a_robustness}}. The remaining assumptions \\textbf{A.\\ref{a_general_q}}--\\textbf{A.\\ref{a_xi}} are generally fairly trivial to verify.
Assumptions \\textbf{A.\\ref{rho_a1}}--\\textbf{A.\\ref{rho_a3}} and \\textbf{A.\\ref{a_eta}} pertain to the manner in which the strongly convergent variant of $\\Psi$ is constructed and are not related to intrinsic properties of $\\Psi$ itself.\n\n\\subsection{Some Additional Definitions}\n\\label{sec_additional_defs}\nIn order to prove Theorem \\ref{theorem_general_result} we will study the behavior of an underlying FP-type process that is embedded in the action, history, and empirical distribution processes produced by the strongly convergent variant of $\\Psi$. In particular, for $i\\in N$ and $s\\in \\{1,2,\\ldots\\}$, let $\\tau_i(s)$ be defined as in \\eqref{tau_def}, and define the following terms:\n$\\tilde a_i(s) := a_i(\\tau_i(s)),~\n \\tilde a(s) := (\\tilde a_1(s),\\ldots,\\tilde a_n(s)),~\n \\tilde H_i(s) := \\bar H_i(\\tau_i(s)),~\n \\tilde q_i(s) := q_i(\\tau_i(s)),~\n \\tilde q(s) := (\\tilde q_1(s),\\ldots,\\tilde q_n(s)),~\n \\tilde p_i(s) := f^p_i(\\tilde q(s)),~\n \\tilde \\xi(s) := (f^\\xi_1(\\tilde q(s)),\\ldots,f^\\xi_n(\\tilde q(s))).$\nThe aforementioned terms (marked with a tilde) correspond to the embedded FP-type process that we will study in the proof of Theorem \\ref{theorem_general_result}. In particular, for each player $i$, the sequence $\\{\\tau_i(s)\\}_{s\\geq 1}$ denotes the subsequence of rounds when the player chose to play a deliberate best response. The sequence $\\{\\tilde a_i(s)\\}_{s\\geq 1}$ is the action sequence occurring along the subsequence of rounds when player $i$ chose to play a deliberate best response. The sequence $\\{\\tilde H_i(s)\\}_{s\\geq 1}$ corresponds to the action history of player $i$ along the same subsequence. The sequence $\\{\\tilde q_i(s)\\}_{s\\geq 1}$ corresponds to the empirical distribution of player $i$ along the same subsequence; in particular, note that by Lemma \\ref{q_tilde_lemma} (see appendix), $\\{\\tilde q_i(s)\\}_{s\\geq 1}$ fits the format prescribed by \\textbf{A.\\ref{a_general_q}} for the embedded FP-type process: $\\tilde q_i(s) = f_i^q(\\tilde H_i(s),~s).$\nFinally, the term $\\tilde \\xi(s)$ is the asymptotic learning distribution associated with the embedded FP-type process.\n\nIn studying the embedded FP-type process, it will be important to characterize the terms to which players are best responding. With this in mind, note that per \\eqref{action_rule1}, the action at time $\\tau_i(s+1)$ (in the strongly convergent variant of $\\Psi$) is chosen as $a_i(\\tau_i(s+1)) \\in BR_i^{\\eta_{\\tau_i(s+1)}}(p_i(\\tau_i(s+1)-1))$. In order to translate this to the embedded FP-type process, define the following terms:\n$\\hat q^i_j(s) := q_j(\\tau_i(s+1)-1),~\n\\hat q^i(s) := (q_1(\\tau_i(s+1)-1),\\ldots,q_n(\\tau_i(s+1)-1)),~\n\\hat p_i(s) := f_i^p(\\hat q^i(s)).$\nBy construction, the $(s+1)$-th action of player $i$ in the embedded FP-type process is chosen as\n\\begin{equation}\n\\label{embedded_BR}\n\\tilde a_i(s+1) \\in BR_i^{\\eta_{\\tau_i(s+1)}}(\\hat p_i(s)).\n\\end{equation}\nIn the embedded FP-type process, the term $\\tilde q_j(s)$ may be thought of as the `true' empirical distribution of player $j$. The term $\\hat q_j^i(s)$ may be thought of as the estimate which player $i$ maintains of $\\tilde q_j(s)$, and the term $\\hat q^i(s)$ (note the superscript) may be thought of as player $i$'s estimate of the joint empirical distribution $\\tilde q(s)$ at the time of player $i$'s $(s+1)$-th best response.
Finally, the term $\\hat p_i(s)$ may be thought of as player $i$'s prediction of opponents' next-stage strategies given $\\hat q^i(s)$; in particular, note that---in the embedded FP-type process---player $i$ chooses their stage $(s+1)$ action \\eqref{embedded_BR} as an asymptotic best response to $\\hat p_i(s)$.\n\n\n\\subsection{Some Useful Properties}\n\\label{sec_useful_props}\nLet\n\\begin{equation}\n\\Omega' := \\{\\omega: \\lim_{t\\rightarrow\\infty} \\frac{\\ell_i(t)}{\\sum_{k=1}^t \\rho_i(k)} = 1,~ \\forall i \\}.\n\\end{equation}\nBy Lemma \\ref{IR3} (see appendix), there holds $\\mathbb{P}(\\Omega')=1$. In proving Theorem \\ref{theorem_general_result} we will restrict attention to (sample path) realizations in $\\Omega'$.\n\nNote that under assumption \\textbf{A.\\ref{rho_a2}}, there holds $\\{\\omega: \\lim_{t\\rightarrow\\infty}\\ell_i(t) = \\infty,~ \\forall i\\}\\supset \\Omega'.$ By the equivalence $\\{\\omega: \\lim_{t\\rightarrow\\infty}\\ell_i(t) = \\infty,~ \\forall i\\}=\\{\\omega:X_i(t)=1 \\mbox{ infinitely often } \\forall i\\}$, there holds $\\{\\omega:X_i(t)=1 \\mbox{ infinitely often } \\forall i\\}\\supset \\Omega'.$\nTherefore, by the definitions of $\\ell_i$ and $\\tau_i$, there holds for any realization in $\\Omega'$, $\\lim_{t\\rightarrow\\infty} \\ell_i(t) = \\infty$, and\n\\vskip-15pt\n\\begin{align}\n\\label{tau_exist}\n&\\tau_i(s) <\\infty, ~\\forall s\\in \\mathbb{N},\\\\\n\\label{tau_lim}\n&\\lim\\limits_{s\\rightarrow\\infty} \\tau_i(s) = \\infty.\n\\end{align}\n\\vskip-5pt\n\nThese properties will be useful in the proof of Theorem \\ref{theorem_general_result}. In particular, the proof will frequently make reference to $\\tilde q_i(s)$, or $\\tilde a_i(s)$ for arbitrary $s\\in \\mathbb{N}$---the property \\eqref{tau_exist} ensures that such terms are well defined for any $\\omega \\in \\Omega'$.\n\nNote also that for any realization in $\\Omega'$, for $i\\in N$ and $s\\in \\{1,2,\\ldots\\}$,\n\\vskip-15pt\n\\begin{equation}\n\\label{ell_tau_eq}\n\\ell_i(\\tau_i(s)) = s,\n\\end{equation}\n\\vskip-15pt\n\\noindent and for $i\\in N$ and $t\\in \\{1,2,\\ldots\\}$\n\\vskip-15pt\n\\begin{equation}\n\\label{X_i_implication}\nX_i(t) = 1 \\implies \\tau_i(\\ell_i(t)) = t.\n\\end{equation}\n\\vskip-5pt\n\\noindent Furthermore, note that $X_i(t) = 0$ implies that $\\ell_i(t) = \\ell_i(t-1)$ and $\\bar H_i(t) = \\bar H_i(t-1)$, and in particular,\n\\vskip-20pt\n\\begin{align}\n\\label{q_step_equality}\nX_i(t) = 0 \\implies q_i(t) = q_i(t-1).\n\\end{align}\n\\vskip-10pt\n\\noindent These facts are readily verified by conferring with the definitions of $\\tau_i$, $\\ell_i$, and $X_i$.\n\n\\subsection{Proof of Theorem \\ref{theorem_general_result}}\n\\label{sec_proof_general_result}\n\\begin{proof}\nSince $\\mathbb{P}(\\Omega') = 1$ it is sufficient to show that the desired result holds for any $\\omega \\in \\Omega'$. Henceforth, we restrict attention to realizations $\\omega \\in \\Omega'$, and for ease of notation suppress the term $\\omega$ when referring to random variables.\n\nAs a first step, we wish to show that $\\lim_{s\\rightarrow\\infty} d(\\tilde \\xi(s),~E) = 0$. We accomplish this by showing that there exists a sequence $\\{\\epsilon_s\\}_{s\\geq 1}$ such that $\\lim_{s\\rightarrow\\infty}\\epsilon_s = 0$ and $\\tilde a_i(s+1) \\in BR_i^{\\epsilon_s}(\\tilde p_i(s))$.
By assumption \\textbf{A.\\ref{a_robustness}}, it will then follow that $\\lim_{s\\rightarrow\\infty} d(\\tilde \\xi(s),~E) = 0$.\n\nTo that end, note that by Lemma \\ref{lemma_BR_limit} (see appendix),\n$\\lim\\limits_{s\\rightarrow\\infty} |U_i(a_i(\\tau_i(s+1)),p_i(\\tau_i(s+1)-1)) - v_i(p_i(\\tau_i(s+1)-1))| = 0,~\\forall i,$\nor equivalently by the definitions of $\\tilde a(s)$ and $\\hat p_i(s)$ (see Section \\ref{sec_additional_defs}),\n\\vskip-20pt\n\\begin{align}\n\\lim\\limits_{s\\rightarrow\\infty} |U_i(\\tilde a_i(s+1),\\hat p_i(s)) - v_i(\\hat p_i(s))| = 0,~\\forall i.\n\\label{thrm1_eq1}\n\\end{align}\n\\vskip-5pt\n\\noindent By Lemma \\ref{IR0} (see appendix), $\\lim_{s\\rightarrow\\infty} \\|\\hat q^i(s) - \\tilde q(s)\\| = 0$. By \\textbf{A.\\ref{a_prediction}}, it follows that $\\lim_{s\\rightarrow\\infty} \\|\\hat p_i(s) - \\tilde p_i(s)\\| = 0$, which by the Lipschitz continuity of $U_i(\\cdot)$ implies that\n$\\lim_{s\\rightarrow\\infty} | U_i(\\alpha_i,\\hat p_i(s)) - U_i(\\alpha_i,\\tilde p_i(s))| = 0,~ \\forall \\alpha_i \\in A_i, \\forall i,$\nand\n$\\lim_{s\\rightarrow\\infty} |v_i(\\hat p_i(s)) - v_i(\\tilde p_i(s))| = 0, \\forall i.$\nReturning to \\eqref{thrm1_eq1} we see that\n$\\lim\\limits_{s\\rightarrow\\infty} |U_i(\\tilde a_i(s+1),\\tilde p_i(s)) - v_i(\\tilde p_i(s))| = 0, ~\\forall i,$\ni.e., there exists a sequence $\\{\\epsilon_s\\}_{s\\geq 1}$ such that $\\epsilon_s \\rightarrow 0$ and $\\tilde a_i(s+1) \\in BR_i^{\\epsilon_s}(\\tilde p_i(s))$. It follows by \\textbf{A.\\ref{a_robustness}} that\n\\vskip-20pt\n\\begin{equation}\n\\lim\\limits_{s\\rightarrow\\infty} d(\\tilde \\xi(s),~E) = 0.\n\\label{theorem_main_result_eq3}\n\\end{equation}\n\\vskip-5pt\n\nWe now proceed to show that $\\lim_{t\\rightarrow\\infty} d(\\xi(t),~E) =0$. Let $\\varepsilon > 0$ be given. By Lemma \\ref{IR1} (see appendix) and assumption \\textbf{A.\\ref{a_xi}}, for each $i\\in N$, there exists a random time $S_i > 0$ such that $\\forall s\\geq S_i$, $\\|\\xi(\\tau_i(s)) - \\tilde\\xi(s)\\| < \\frac{\\varepsilon}{2}$. Let $S^{'} = \\max_i\\{S_i\\}$. By \\eqref{theorem_main_result_eq3} there exists a random time $S^{''}$ such that $\\forall s\\geq S^{''}$, $d(\\tilde \\xi(s), ~E) < \\frac{\\varepsilon}{2}$. Let $S=\\max\\{S^{'},S^{''}\\}$. Then\n\\vskip-15pt\n\\begin{equation}\nd(\\xi(\\tau_i(s)),~E) < \\varepsilon, ~\\forall i,~ \\forall s\\geq S.\n\\label{thrm1_eq6}\n\\end{equation}\n\\vskip-5pt\n\nLet $T = \\max_{i}\\{\\tau_i(S)\\}$. Note that for some $i$, $\\xi(T) = \\xi(\\tau_i(S))$, and therefore by \\eqref{thrm1_eq6},\n\\vskip-15pt\n\\begin{equation}\nd(\\xi(T),~E) < \\varepsilon.\n\\label{thrm1_eq4}\n\\end{equation}\n\\vskip-5pt\nAlso note that for any $t_0>T$, it holds that $\\ell_i(t_0) \\geq S$ (since $\\ell_i(\\tau_i(S)) = S$, and $\\ell_i(t)$ is non-decreasing in $t$), and moreover\n\\vskip-15pt\n\\begin{align}\nX_i(t_0) = 1 \\mbox{ for some } i & ~\\implies ~q(t_0) = q(\\tau_i(\\ell_i(t_0))) \\implies \\xi(t_0) = \\xi(\\tau_i(\\ell_i(t_0))),\\\\\nX_i(t_0) = 0 \\mbox{ for all $i$ } & ~\\implies ~q(t_0) = q(t_0-1) \\implies \\xi(t_0) = \\xi(t_0-1),\n\\label{thrm1_eq5}\n\\end{align}\n\\vskip-5pt\nwhere the first implication holds with $\\ell_i(t_0) \\geq S$. In the above, the first line follows from \\eqref{X_i_implication}, and the second line follows from \\eqref{q_step_equality}.\nConsider $t\\geq T$.
If for some $i$, $X_i(t) = 1$, then by \\eqref{thrm1_eq5} and \\eqref{thrm1_eq6},\n$d(\\xi(t),~E) = d(\\xi(\\tau_i(\\ell_i(t))),~E) < \\varepsilon.$\nOtherwise, if $X_i(t) = 0 ~\\forall i$, then $\\xi(t) = \\xi(t-1)$.\n\nIterate this argument $m$ times until either (i) $X_i(t-m) = 1$ for some $i$, or (ii) $t-m = T$. In the case of (i),\n$d(\\xi(t),~E) = d(\\xi(t-m),~E) = d(\\xi(\\tau_i(\\ell_i(t-m))),~E) < \\varepsilon,$\nwhere the inequality again follows from \\eqref{thrm1_eq6} and the fact that $t-m>T \\implies \\ell_i(t-m)\\geq S$.\nIn the case of (ii),\n$d(\\xi(t),~E) = d(\\xi(T),~E) < \\varepsilon,$\nwhere the inequality follows from \\eqref{thrm1_eq4}. Since $\\varepsilon >0$ was chosen arbitrarily, it follows that\n$\\lim\\limits_{t\\rightarrow\\infty} d(\\xi(t),~E)=0.$\n\nFinally, we show that $\\lim_{t\\rightarrow\\infty} d(g(t),~E)=0$. Note that by \\eqref{g_def_general}, $\\|g_i(t) - \\xi_i(t-1)\\| \\leq M_i\\rho_i(t), ~\\forall i,$ where $M_i:=\\sup_{p',p'' \\in \\Delta_i} \\|p' - p''\\|$ is a constant. Invoking assumption \\textbf{A.\\ref{rho_a1}} gives $\\lim\\limits_{t\\rightarrow\\infty}\\|g_i(t) - \\xi_i(t-1)\\|=0, ~\\forall i.$\nCombining this with the fact that $\\lim\\limits_{t\\rightarrow\\infty} d(\\xi(t),~E)=0$ yields the desired result, $\\lim_{t\\rightarrow\\infty}d(g(t),~E) =0$.\n\\end{proof}\n\n\n\\section{Applications of the General Result}\n\\label{sec_apps}\nIn this section we consider three different FP-type algorithms and study the strongly convergent variant of each. In each case, we prove strong convergence by showing that the FP-type algorithm fits the template of Theorem \\ref{theorem_general_result}. Generally, the only non-trivial aspect of applying Theorem \\ref{theorem_general_result} will be to show that \\textbf{A.\\ref{a_robustness}} is satisfied.\n\nIn Section \\ref{sec_apps1} we consider classical FP. The fact that classical FP satisfies \\textbf{A.\\ref{a_robustness}} was shown by Leslie et al. \\cite{leslie2006generalised}. In Section \\ref{sec_apps2} we consider GWFP---a generalization of FP proposed in \\cite{leslie2006generalised}. Again, the crucial fact that GWFP satisfies \\textbf{A.\\ref{a_robustness}} was established in \\cite{leslie2006generalised}. In Section \\ref{sec_apps3} we consider a variant of FP termed ECFP. That ECFP satisfies \\textbf{A.\\ref{a_robustness}} was shown in \\cite{swenson2015weakECFP}. We emphasize that each of these algorithms is known to achieve weak learning in the sense that $d(\\xi(t),~E) \\rightarrow 0$ as $t\\rightarrow \\infty$. Our contribution is to construct a variant where players also achieve learning in the strong sense that period-by-period mixed strategies converge to equilibrium.\n\n\\subsection{Strong Convergence in Classical FP}\n\\label{sec_apps1}\nWe now prove Corollary \\ref{theorem_main_result} using the general convergence result of Theorem \\ref{theorem_general_result}.\n\n\\begin{proof}\nClassical FP fits the template of an FP-type algorithm with the empirical distribution given by $q_i(t) = \\frac{1}{t}\\sum_{s=1}^ta_i(s)$, the functions $f_i^p$ and $f_i^\\xi$ given by the identity function for each $i$, and the best response perturbation given by $\\epsilon_t = 0,~\\forall t$.
To show that the strongly convergent variant of classical FP attains strong learning, it suffices to show that the assumptions of Theorem \\ref{theorem_general_result} are met.\n\nTo that end, note that \\textbf{A.\\ref{rho_a1}}--\\textbf{A.\\ref{rho_a3}} are satisfied by assumption, and \\textbf{A.\\ref{a_eta}} is trivially satisfied (with $\\eta_t = 0,~\\forall t$). Furthermore, the empirical distribution sequence satisfies $\\lim_{t\\rightarrow\\infty}\\|q_i(t) - q_i(t-1)\\| = 0$ (see Section \\ref{sec_FP_example}), and hence \\textbf{A.\\ref{a_step_size_bound}} is satisfied. The functions $f_i^p$ and $f_i^\\xi$ (each being the identity function) satisfy \\textbf{A.\\ref{a_prediction}}--\\textbf{A.\\ref{a_xi}}.\nTherefore, it is sufficient to show that \\textbf{A.\\ref{a_robustness}} is satisfied. But for zero-sum games, potential games, and generic $2\\times m$ games, this holds by \\cite{leslie2006generalised}, Corollary 5.\n\\end{proof}\n\n\\subsection{Strong Convergence in Generalized Weakened FP}\n\\label{sec_apps2}\n\nGWFP was introduced in Section \\ref{sec_GWFP_intro}, where it was shown to fit the template of an FP-type algorithm.\n\nSince, by definition, a GWFP process allows players to choose an $\\epsilon_t$-suboptimal best response with $\\epsilon_t \\rightarrow 0$, the following result (\\cite{leslie2006generalised}, Corollary 5) guarantees a GWFP process satisfies \\textbf{A.\\ref{a_robustness}} in the noted classes of games.\n\\begin{theorem}\nAny generalized weakened fictitious play process will converge to the set of Nash equilibria in two-player zero-sum games, potential games, and generic $2\\times m$ games.\n\\label{thrm_wfp}\n\\end{theorem}\n\nTo clarify the precise meaning of the convergence stated above as it relates to the present work, we emphasize that Theorem \\ref{thrm_wfp} implies that $\\lim_{t\\rightarrow\\infty} d(q(t),~NE)=0$; i.e., the process converges weakly to equilibrium.\n\nLet the strongly convergent variant of GWFP be constructed using the approach laid out in Section \\ref{sec_FP_type_strong}. The following corollary to Theorem \\ref{theorem_general_result} states that the strongly convergent variant of a GWFP process will achieve strong learning.\\footnote{It should be noted that classical FP may be seen as an instance of GWFP, and thus Corollary \\ref{theorem_main_result} may in fact be deduced as a corollary to Corollary \\ref{cor_gwfp}. However, for clarity and continuity of presentation, the results regarding classical FP have been presented separately.}\n\\begin{cor}\n\\label{cor_gwfp}\nLet $\\Gamma$ be a two-player zero-sum game, potential game, or generic $2\\times m$ game. Let $\\Psi$ be an instance of GWFP. If the strongly convergent variant of $\\Psi$ satisfies \\textbf{A.\\ref{rho_a1}}--\\textbf{A.\\ref{rho_a3}} and \\textbf{A.\\ref{a_eta}}, then it achieves strong learning in the sense that $\\lim_{t\\rightarrow\\infty} d(g(t),~NE) = 0$.\n\\end{cor}\n\\begin{proof}\nIt is sufficient to show that the conditions of Theorem \\ref{theorem_general_result} are met. Note that \\textbf{A.\\ref{rho_a1}}--\\textbf{A.\\ref{rho_a3}} and \\textbf{A.\\ref{a_eta}} hold by assumption. Furthermore, by definition, any GWFP process satisfies $\\lim_{t\\rightarrow\\infty} \\gamma(t)=0$, and hence satisfies \\textbf{A.\\ref{a_step_size_bound}}. The functions $f_i^p$ and $f_i^\\xi$ are given by the identity function for each $i$, and hence \\textbf{A.\\ref{a_prediction}} and \\textbf{A.\\ref{a_xi}} hold.
Thus, it suffices to show that \\textbf{A.\\ref{a_robustness}} holds for the specified class of games---but this follows from Theorem \\ref{thrm_wfp}.\n\\end{proof}\n\n\n\\subsection{Strong Convergence in Empirical Centroid FP}\n\\label{sec_apps3}\nECFP was introduced in Sections \\ref{sec_ECFP_intro} and \\ref{sec_ecfp_mce_intro}.\nIn order to study the asymptotic behavior of ECFP (in either of the formats introduced in Sections \\ref{sec_ECFP_intro} and \\ref{sec_ecfp_mce_intro}) we make the following assumption regarding the structure of players' utility functions:\n\\begin{assumption}\nThe players' utility functions are identical and permutation invariant. That is, for any $i,j\\in N$, $u_i(y) = u_j(y)$, and $u([y']_i,[y'']_j,y_{-(i,j)}) = u([y'']_i,[y']_j,y_{-(i,j)}),$\nwhere, for any player $k\\in N$, the notation $[y']_k$ indicates the action $y'\\in Y_k$ being played by player $k$, and $y_{-(i,j)}$ denotes the set of actions being played by all players other than $i$ and $j$.\n\\label{a_perm_inv}\n\\end{assumption}\n\nWe note that, under this assumption, the sets $C$ and $MCE$ are nonempty \\cite{swenson2012ECFP,swenson2013MCE}.\nThe following theorem (\\cite{swenson2015weakECFP}, Theorem 1) specifies the manner in which players engaged in an ECFP process (weakly) learn elements of the sets $C$ and $MCE$.\n\\begin{theorem}\nLet $\\{a(t)\\}_{t\\geq 1}$ be an ECFP process. \\\\\nAssume $\\Gamma$ is such that \\textbf{A.\\ref{a_ident_strat}} and \\textbf{A.\\ref{a_perm_inv}} hold. Then players learn equilibrium strategies in the sense that\n(i) $\\lim_{t\\rightarrow \\infty} d(\\bar q^n(t),~C) = 0$, and (ii) $\\lim_{t\\rightarrow \\infty} d(q(t),~MCE) = 0$.\n\\label{theorem_ecfp_weak}\n\\end{theorem}\n\nNote that case (i) above corresponds to ECFP with the convergence map $f_i^\\xi$ as given in Section \\ref{sec_ECFP_intro}, and case (ii) corresponds to the convergence map $f_i^\\xi$ given by the identity function (as in Section \\ref{sec_ecfp_mce_intro}). Since, by definition, an ECFP process \\eqref{ecfp_process} allows players to choose actions from the $\\epsilon_t$-suboptimal best response set with $\\epsilon_t \\rightarrow 0$, Theorem \\ref{theorem_ecfp_weak} ensures that ECFP satisfies \\textbf{A.\\ref{a_robustness}}.\n\nLet $\\Psi$ be an instance of ECFP as presented in either Section \\ref{sec_ECFP_intro} or Section \\ref{sec_ecfp_mce_intro}, and let the strongly convergent variant of $\\Psi$ be constructed using the approach laid out in Section \\ref{sec_FP_type_strong}.\nThe following corollary to Theorem \\ref{theorem_general_result} states that players engaged in the strongly convergent variant of an ECFP process learn elements of $C$ and $MCE$ in the strong sense that players' period-by-period strategies converge to equilibrium.\n\\begin{cor}\n(i) Let $\\Psi$ be an instance of ECFP with $f^{\\xi}_i(q) = \\frac{1}{n}\\sum_j q_j,~\\forall i$, and assume $\\Gamma$ is such that \\textbf{A.\\ref{a_ident_strat}} and \\textbf{A.\\ref{a_perm_inv}} hold. If the strongly convergent variant of $\\Psi$ satisfies \\textbf{A.\\ref{rho_a1}}--\\textbf{A.\\ref{rho_a3}} and \\textbf{A.\\ref{a_eta}}, then it achieves strong learning in the sense that $\\lim_{t\\rightarrow \\infty} d(g(t),~C)=0$.\\\\\n(ii) Let $\\Psi$ be an instance of ECFP with $f^{\\xi}_i(q)$ given by the identity function for all $i$ and assume $\\Gamma$ is such that \\textbf{A.\\ref{a_ident_strat}} and \\textbf{A.\\ref{a_perm_inv}} hold.
If the strongly convergent variant of $\\Psi$ satisfies \\textbf{A.\\ref{rho_a1}}--\\textbf{A.\\ref{rho_a3}} and \\textbf{A.\\ref{a_eta}}, then it achieves strong learning in the sense that $\\lim_{t\\rightarrow\\infty} d(g(t),~MCE)=0$.\n\\end{cor}\n\\begin{proof}\nCases (i) and (ii) differ only in terms of the function $f^{\\xi}_i$ and target equilibrium set $E$. However, in both cases the function $f^{\\xi}_i$ satisfies \\textbf{A.\\ref{a_xi}}. It suffices to show the remaining conditions of Theorem \\ref{theorem_general_result} are satisfied. Henceforth, we treat cases (i) and (ii) together.\n\nNote that \\textbf{A.\\ref{rho_a1}}--\\textbf{A.\\ref{rho_a3}} and \\textbf{A.\\ref{a_eta}} hold by assumption. The empirical distribution sequence satisfies $\\|q_i(t) - q_i(t-1)\\| \\leq \\frac{M_i}{t} \\rightarrow 0 \\mbox{ as } t\\rightarrow \\infty$, where $M_i := \\sup_{p',p'' \\in \\Delta_i} \\|p' - p''\\|$, and hence \\textbf{A.\\ref{a_step_size_bound}} is satisfied. Note that the prediction map $f_i^p$ (the $(n-1)$-tuple of repeated copies of $\\frac{1}{n}\\sum_j q_j$) satisfies \\textbf{A.\\ref{a_prediction}}. Finally, Theorem \\ref{theorem_ecfp_weak} shows that \\textbf{A.\\ref{a_robustness}} is satisfied.\n\\end{proof}\n\n\n\\section{Conclusions}\n\\label{sec_conclusion}\nAn algorithm is said to achieve weak learning if players learn an equilibrium strategy in an abstract sense (see Section \\ref{sec_prelims}), but period-by-period strategies do not necessarily converge to equilibrium. An algorithm is said to achieve strong learning if (additionally) players' period-by-period strategies converge to equilibrium. Weak learning may be thought of as a form of learning where players \\emph{learn} a strategy in some abstract sense, but never begin to implement the strategy they are learning. On the other hand, in strong learning, not only do players \\emph{learn} a strategy, but they also physically implement the learned strategy through the course of the learning process.\n\nFictitious Play (FP) and its variants are known to exhibit weak learning but not necessarily strong learning. An approach was presented for taking a general FP-type algorithm that achieves weak learning and constructing from it a strongly convergent variant of the algorithm. General convergence results were proved and used to construct a strongly convergent variant of several example FP-type processes.\n\nIn order to apply the convergence results proved in this paper, it is necessary to ensure a candidate algorithm meets \\textbf{A.\\ref{a_robustness}} (the other necessary assumptions are relatively trivial to verify). An interesting future research direction might be to investigate other FP-type algorithms (e.g., \\cite{Arslan04,Shamma03}) and verify whether they meet the assumptions sufficient for construction of a strongly convergent variant.\n\n\\section*{Appendix}\n{\\small\n\\subsection{Some Useful Inequalities}\n\\label{sec_IR}\nWe consider some useful inequalities related to the strongly convergent variant of an FP-type algorithm. We restrict attention to realizations $\\omega \\in \\Omega'$.
Let $\\{q_i(t)\\}_{t\\geq 1}$ be given by \\eqref{qt_general_strong_update}.\nBy \\textbf{A.\\ref{a_step_size_bound}} there exists a sequence $\\gamma(t)$ such that\n$\\lim\\limits_{t\\rightarrow\\infty} \\gamma(t) = 0,$\nand for each $i\\in N$,\n\\vskip-5pt\n\\begin{equation}\n\\|q_i(t+1) - q_i(t)\\| \\leq M_i\\gamma(\\ell_i(t)),\n\\label{qt_bound}\n\\end{equation}\nwhere $M_i:= \\sup_{q',q'' \\in \\Delta_i}\\|q' - q''\\|$.\nSimilarly, for any integer $s>0$, there holds\n\\begin{equation}\n\\|\\tilde q(s+1) - \\tilde q(s)\\| \\leq M\\gamma(s),\n\\label{qs_bound1}\n\\end{equation}\nwhere $M:= \\sup_{q',q'' \\in \\Delta^n}\\|q' - q''\\|$.\nMore generally, for any integers $s_1,s_2>0$, if \\textbf{A.\\ref{a_step_size_bound}} holds then,\n\\vskip-20pt\n\\begin{align}\n\\|\\tilde q(s_1) - \\tilde q(s_2)\\| & \\leq M\\sum\\limits_{s=\\min\\{s_1,s_2\\}}^{\\max\\{s_1,s_2\\}-1} \\gamma(s)\\leq |s_1 - s_2|B,\n\\label{qs_bound2}\n\\end{align}\n\\vskip-5pt\n\\noindent where $0$, and vertical, $|V\\>$, polarizations) into spatially\ndistinct counter-propagating light beams. The $H$ component leaves\nthe interferometer unchanged. But the $V$ component is rotated in\nthe wave plate, which corresponds to probabilistic damping into\nthe $H$ component. Then, at the exit from the interferometer, this\ncomponent is probabilistically transmitted or reflected from the\nbeam splitter. So it is cast into two orthogonal spatial modes\ncorresponding to the reservoir states with and without excitation.\n\nThe action of the ADC can be represented by an interaction\nHamiltonian~\\cite{Nielsen}: $H\\sim a b^\\dagger + a^\\dagger b$,\nwhere $a$ ($a^\\dagger$) and $b$ ($b^\\dagger$) are annihilation\n(creation) operators of the system and environment oscillators,\nrespectively. In more general models of damping, a single\noscillator $b$ of the reservoir is replaced by a finite or\ninfinite collection of oscillators $\\{b_n\\}$ coupled to the system\noscillator with different strengths (see, e.g.,\nRefs.~\\cite{Louisell,Leibfried03}). For the example of quantum\nstates of motion of ions trapped in a radio-frequency (Paul) trap,\nthe amplitude damping can be modeled by coupling an ion to the\nmotional amplitude reservoir described by the above\nmultioscillator Hamiltonian~\\cite{Leibfried03}. The\nhigh-temperature reservoir can be simulated by applying (on\nthe trap electrodes) a random uniform electric field with spectral\namplitude at the ion motional\nfrequency~\\cite{Myatt00,Turchette00}. The zero-temperature\nreservoir can be simulated by laser cooling combined with\nspontaneous Raman scattering~\\cite{Poyatos}.\n\n\\subsection{Phase-damping channel}\n\nThe PDC is a prototype model of dephasing or pure decoherence,\ni.e., loss of coherence of a two-level state without any loss of\nsystem's energy. The PDC is described by the map\n\\begin{equation}\n{\\cal E}_{\\text{PDC}}(\\rho )=s\\rho +p\\left( \\rho _{00}|0\\rangle\n\\langle 0|+\\rho _{11}|1\\rangle \\langle 1|\\right) \\label{PDC},\n\\end{equation}\nand obviously the three Kraus operators are given by\n\\begin{equation}\nE_{0}=\\sqrt{s}\\,\\openone,\\; E_{1}=\\sqrt{p}\\,|0\\rangle \\langle\n0|,\\; E_{2}=\\sqrt{p}\\,|1\\rangle \\langle 1|, \\label{kraus2}\n\\end{equation}\nwhere $\\openone$ is the identity operator.
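For illustration, the map (\\ref{PDC}) can be checked numerically. The following minimal Python sketch (with an illustrative value of $p$) applies the Kraus operators (\\ref{kraus2}) to the state $|+\\rangle\\langle+|$ and shows that the diagonal elements are unchanged while the off-diagonal elements are suppressed by the factor $s=1-p$:\n\\begin{verbatim}\nimport numpy as np\n\np = 0.3\ns = 1 - p\nE = [np.sqrt(s)*np.eye(2),\n     np.sqrt(p)*np.diag([1.0, 0.0]),   # sqrt(p)|0><0|\n     np.sqrt(p)*np.diag([0.0, 1.0])]   # sqrt(p)|1><1|\nrho = 0.5*np.array([[1.0, 1.0],\n                    [1.0, 1.0]])       # the state |+><+|\nrho_out = sum(K @ rho @ K.conj().T for K in E)\nprint(rho_out)   # diagonal: 0.5, 0.5; off-diagonal: 0.5*(1-p)\n\\end{verbatim}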
For the PDC, there is\nno energy change, and loss of coherence occurs with probability\n$p.$ As a result of the action of the PDC, the Bloch sphere is\ncompressed by a factor $(1-p)$ in the $xy$ plane.\n\nIn analogy to the ADC, the PDC can be considered as an interaction\nbetween two oscillators (modes) representing system and\nenvironment as described by the interaction Hamiltonian: $H\\sim\na^\\dagger a(b^\\dagger + b)$~\\cite{Nielsen}. In more general\nphase-damping models, a single environmental mode $b$ is usually\nreplaced by an infinite collection of modes $b_n$ coupled, with\nvarious strengths, to mode $a$.\n\nIt is evident that the action of the PDC is nondissipative. It\nmeans that, in the standard computational basis $|0\\>$ and $|1\\>$,\nthe diagonal elements of the density matrix $\\rho$ remain\nunchanged, while the off-diagonal elements are suppressed.\nMoreover, the qubit states $|0\\>$ and $|1\\>$ are also unchanged\nunder the action of the PDC, although any superposition of them\n(i.e., any point in the Bloch sphere, except the poles) becomes\nentangled with the environment.\n\nThe PDC can be interpreted as elastic scattering between a\n(two-level) system and a reservoir. It is also a model of coupling\na system with a noisy environment via a quantum nondemolition\n(QND) interaction. Note that spin squeezing of atomic ensembles\ncan be generated via QND\nmeasurements~\\cite{Kuzmich99,Takahashi99,Kuzmich00,Julsgaard01,Schleier,Appel,Takano}.\nSo modeling the spin-squeezing decoherence via the PDC can be\nrelevant in this context.\n\nThe PDC is also a suitable model to describe $T_2$ relaxation in\nspin resonance. This is in contrast to modeling $T_1$ relaxation via\nthe ADC.\n\nA circuit modeling the PDC can be realized as a simplified version\nof the circuit for the ADC, discussed in the previous subsection,\nobtained by removing the CNOT gate~\\cite{Nielsen}. Then, the angle\n$\\theta$ in the controlled rotation gate $R_y(\\theta)$ is related\nto the probability $p$ in Eq.~(\\ref{PDC}).\n\nThe sudden vanishing of entanglement under the PDC was first\nexperimentally observed in Ref.~\\cite{Almeida}. This optical\nimplementation of the PDC was based on the same system as the\nabove-mentioned Sagnac interferometer for the ADC but with an\nadditional half-wave plate at a $\\pi\/4$ angle in one of the\noutgoing modes.\n\nSome specific kinds of PDCs can be realized in a more\nstraightforward manner. For example, in experiments with trapped\nions, the motional PDC can be implemented just by modulating the\ntrap frequency, which changes the phase of the harmonic motion of\nions~\\cite{Myatt00,Turchette00} (for a review see\nRef.~\\cite{Leibfried03} and references therein).\n\n\\subsection{Depolarizing channel}\n\nThe definition of the DPC is given via the map\n\\begin{align}\n{\\cal E}_{\\text{DPC}}(\\rho )& =\\sum_{k=0}^{3}E_{k}\\rho\nE_{k}^{\\dagger } \\notag\n\\\\\n& =(1-p^{\\prime })\\rho +\\frac{p^{\\prime }}{3}(\\sigma _{x}\\rho\n\\sigma _{x}+\\sigma _{y}\\rho \\sigma _{y}+\\sigma _{z}\\rho \\sigma\n_{z}), \\label{DPC}\n\\end{align}\nwhere\n\\begin{eqnarray}\nE_{0} &=&\\sqrt{1-p^{\\prime }}\\openone,\n\\quad E_{1}=\\sqrt{\\frac{p^{\\prime }}{3}}\\sigma _{x}, \\notag \\\\\nE_{2} &=&\\sqrt{\\frac{p^{\\prime }}{3}}\\sigma _{y},\\quad \\quad\nE_{3}=\\sqrt{\\frac{p^{\\prime }}{3}}\\sigma _{z}, \\label{kraus3}\n\\end{eqnarray}\nare the Kraus operators.
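As a quick numerical check of this map (a minimal sketch with an arbitrarily chosen test state and an illustrative value of $p^{\\prime}$), one can verify directly the simplified form ${\\cal E}_{\\text{DPC}}(\\rho )=s\\rho +p\\,\\openone\/2$ with $p=4p^{\\prime }\/3$, which is derived just below:\n\\begin{verbatim}\nimport numpy as np\n\nsx = np.array([[0, 1], [1, 0]], dtype=complex)\nsy = np.array([[0, -1j], [1j, 0]])\nsz = np.diag([1.0+0j, -1.0])\nI2 = np.eye(2, dtype=complex)\n\nrho = 0.5*(I2 + 0.3*sx + 0.4*sz)   # an arbitrary valid qubit state\npp = 0.3                            # p' in Eq. (kraus3)\nout = (1-pp)*rho + (pp/3)*(sx@rho@sx + sy@rho@sy + sz@rho@sz)\np = 4*pp/3\nprint(np.allclose(out, (1-p)*rho + p*I2/2))   # True\n\\end{verbatim}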
By using the following identity\n\\begin{equation*}\n\\sigma _{x}\\rho \\,\\sigma _{x}+\\sigma _{y}\\rho \\,\\sigma _{y}+\\sigma\n_{z}\\rho \\,\\sigma_{z}+\\rho =2\\openone,\n\\end{equation*}%\nwe obtain\n\\begin{equation}\n{\\cal E}_{\\text{DPC}}(\\rho )=s\\rho +p\\frac{\\openone}{2},\n\\end{equation}\nwhere $p ={4p^{\\prime }}\/{3}$. We see that for the DPC, the spin\nis unchanged with probability $s=1-p$ or it is depolarized to the\nmaximally mixed state $\\openone\/2$ with probability $p.$ It is\nseen that due to the action of the DPC, the radius of the Bloch\nsphere is reduced by a factor $s$, but its shape remains\nunchanged.\n\nFormally, the action of the DPC on a qubit in an unknown state\n$\\rho$ can be implemented in a three-qubit circuit composed of two\nCNOT gates with two auxiliary qubits initially in mixed states\n\\begin{eqnarray}\n \\rho_1=\\openone\/2,\\quad \\rho_2=(1-p)|00\\rangle\\langle\n00|+p|11\\rangle\\langle 11|, \\label{N1}\n\\end{eqnarray}\nwhich model the environment. Qubit $\\rho_2$ controls the other\nqubits via the CNOT gates~\\cite{Nielsen}.\n\nThe DPC map can also be implemented by applying each of the Pauli\noperators $[\\openone,\\sigma_x,\\sigma_y,\\sigma_z]$ at random with\nthe same probability. Using this approach, optical DPCs have been\nrealized experimentally both in free space~\\cite{Ricci04} and in\nfibers~\\cite{Karpinski08}, where qubits are associated with\npolarization states of single photons. In Ref.~\\cite{Ricci04}, the\nDPC was implemented by using a pair of equal electro-optical\nPockels cells. One of them was performing a $\\sigma_x$ gate and\nthe other a $\\sigma_y$ gate. The simultaneous action of both\n$\\sigma_x$ and $\\sigma_y$ corresponds to a $\\sigma_z$ gate. The\ncells were driven (with a mutual delay of $\\tau\/2$) by a\ncontinuous-wave periodic square-wave electric field with a\nvariable pulse duration $\\tau$, so the total depolarizing process\nlasted $2\\tau$ for each period.\n\nAnalogous procedures can be implemented in other systems,\nincluding collective spin states of atomic ensembles. The coherent\nmanipulation of atomic spin states by applying off-resonantly\ncoherent pulses of light is a basic operation used in many\napplications~\\cite{Julsgaard04}. We must admit that the standard\nmethods enable rotations on the Bloch sphere only for classical\nspin states (i.e., coherent spin states). Nevertheless,\nrecently~\\cite{Takano} an experimental method has been developed\nto rotate also spin-squeezed states.\n\nIt is worth noting that in experimental realizations of\ndecoherence channels (e.g., in ion-trap\nsystems~\\cite{Hannemann09}), sufficient resources for complete\nquantum tomography are provided even for imperfect preparation of\ninput states and imperfect measurements of the output states from\nthe channels.\n\n\\section{Spin-squeezing definitions and concurrence}\n\nNow, we discuss several parameters of spin squeezing and give\nseveral relations among them. To compare spin squeezing with\npairwise entanglement, we also give the definition of concurrence.\nWe notice that most previous investigations on ESD of concurrence\nwere only carried out for two-particle systems rather than for a\ntwo-particle subsystem embedded in a larger system.
For the initial state, the spin-squeezing parameters and the concurrence are also\ngiven below.\n\n\\subsection{Spin-squeezing parameters and their relations}\n\\subsubsection{Definitions of spin squeezing}\n\nThere are several spin-squeezing parameters, but we list only\nthree typical and related ones as\nfollows~\\cite{KU,Wineland,Sorensen,Toth}:\n\\begin{eqnarray}\n\\xi _{1}^{2} &=&\\frac{4(\\Delta J_{\\vec{n}_\\perp })_{\\min }^{2}}{N},~~ \\label{x1} \\\\\n\\xi _{2}^{2} &=&\\frac{N^{2}}{4\\langle \\vec{J}\\rangle ^{2}}\\xi _{1}^{2},~~\n\\label{x2} \\\\\n\\xi _{3}^{2} &=&\\frac{\\lambda _{\\min }}{\\langle \\vec{J}^{2}\\rangle\n-\\frac{N}{2}}. \\label{x3}\n\\end{eqnarray}\nHere, the minimization in the first equation is over all\ndirections denoted by $\\vec{n}_\\perp,$ perpendicular to the mean\nspin direction $\\langle \\vec{J}\\rangle \/|\\langle \\vec{J}\\rangle |$;\n$\\lambda _{\\min }$ is the minimum eigenvalue of the\nmatrix~\\cite{Toth}\n\\begin{equation}\n\\Gamma =(N-1)\\gamma +\\mathbf{C}, \\label{gamma}\n\\end{equation}\nwhere\n\\begin{equation}\n\\gamma _{kl}={C}_{kl}-\\langle J_{k}\\rangle \\langle J_{l}\\rangle\n\\;\\;\\text{for}\\;\\; k,l\\in \\{x,y,z\\}=\\{1,2,3\\}, \\label{comatrix}\n\\end{equation}\nis the covariance matrix and ${\\bf C}=[C_{kl}]$ with\n\\begin{equation}\n{C}_{kl}=\\frac{1}{2}\\langle J_{l}J_{k}+J_{k}J_{l}\\rangle\n\\label{cmatrix}\n\\end{equation}\nis the global correlation matrix. The parameters $\\xi _{1}^{2},\n\\xi _{2}^{2},$ and $\\xi _{3}^{2}$ were defined by Kitagawa and\nUeda \\cite{KU}, Wineland {\\em et al.}~\\cite{Wineland}, and\nT\\'{o}th {\\it et al.}~\\cite{Toth}, respectively. If $\\xi\n_{2}^{2}<1$ $(\\xi _{3}^{2}<1),$ spin squeezing occurs, and we can\nsafely say that the multipartite state is\nentangled~\\cite{Sorensen,Toth}. Although squeezing according to the\nparameter $\\xi_1^2$ does not by itself imply entanglement, it is\nindeed closely related to quantum entanglement~\\cite{WangSanders}.\n\n\\subsubsection{Squeezing parameters for states with parity}\n\nWe know from Sec.~II.A that the initial state has an even parity\nand that the mean spin direction is along the $z$ direction.\nDuring the transmission through all the three decoherence channels\ndiscussed here, the mean spin direction does not change. For\nstates with a well-defined parity (even or odd), the\nspin-squeezing parameter $\\xi _{1}^{2}$ was found to be\n\\cite{WangSanders}\n\\begin{equation}\n\\xi _{1}^{2}=\\frac{2}{N}\\left( \\langle J_{x}^{2}+J_{y}^{2}\\rangle\n-|\\langle J_{-}^{2}\\rangle |\\right). \\label{xixi1}\n\\end{equation}\nThen, the parameter $\\xi _{2}^{2}$ given by Eq.~(\\ref{x2}) becomes\n\\begin{equation}\n\\xi _{2}^{2}=\\frac{N^{2}\\xi _{1}^{2}}{4\\langle J_{z}\\rangle\n^{2}}=\\frac{N\\left( \\langle J_{x}^{2}+J_{y}^{2}\\rangle -|\\langle\nJ_{-}^{2}\\rangle |\\right) }{2\\langle J_{z}\\rangle ^{2}}.\n\\end{equation}\nFor the third squeezing parameter (see Appendix A for the\nderivation), we have\n\\begin{equation}\n\\xi _{3}^{2}=\\frac{\\min \\left\\{ \\xi _{1}^{2},\\varsigma\n^{2}\\right\\} }{{4}{N^{-2}}\\langle \\vec{J}^{2}\\rangle\n-{2}{N^{-1}}}, \\label{xixixi}\n\\end{equation}\nwhere\n\\begin{equation}\n\\varsigma ^{2}=\\frac{4}{N^{2}}\\left[ N(\\Delta J_{z})^{2}+\\langle\nJ_{z}\\rangle ^{2}\\right] .
\\label{zzz}\n\\end{equation}\nNote that the first parameter $\\xi _{1}^{2}$ becomes a key\ningredient for the latter two squeezing parameters ($\\xi_2^2$ and\n$\\xi_3^2$).\n\n\\subsubsection{Spin-squeezing parameters in terms of local expectations}\n\nFor later applications, we now express the squeezing parameters in\nterms of local expectations and correlations, and also examine the\nmeaning of $\\varsigma ^{2}$, which becomes clear by substituting\nEqs.~(\\ref{qqq}) and (\\ref{square4}) into Eq.~(\\ref{zzz}),\n\\begin{eqnarray}\n\\varsigma ^{2} &=&1+\\mathcal{C}_{zz} \\notag \\\\\n&=&1+(N-1)\\left( \\langle \\sigma _{1z}\\sigma _{2z}\\rangle -\\langle \\sigma\n_{1z}\\rangle \\langle \\sigma _{2z}\\rangle \\right) . \\label{zzzz}\n\\end{eqnarray}\nThus, the parameter $\\varsigma ^{2}$ is simply related to the\ncorrelation $\\mathcal{C}_{zz}$ along the $z$ direction. A negative\ncorrelation ${\\cal C}_{zz}<0$ is equivalent to $\\varsigma ^{2}<1.$\nIt is already known that the spin-squeezing parameter $\\xi\n_{1}^{2}$ can be written as \\cite{Kitagawa}\n\\begin{equation}\\label{perp}\n\\xi _{1}^{2}=1+(N-1)\\mathcal{C}_{\\vec{n}_{\\perp }\\vec{n}_{\\perp\n}},\n\\end{equation}\nwhere $\\mathcal{C}_{\\vec{n}_{\\perp }\\vec{n}_{\\perp }}$ is the\ncorrelation function in the direction perpendicular to the mean\nspin direction. So, the spin squeezing $\\xi_1^2<1$ is equivalent\nto the negative pairwise correlations $\\mathcal{C}_{\\vec{n}_{\\perp\n}\\vec{n}_{\\perp }}<0$~\\cite{Kitagawa}.\n\nThus, from the above analysis, spin squeezing and negative\ncorrelations are closely connected to each other. The parameter\n$\\varsigma ^{2}<1$ indicates that spin squeezing occurs along the\n$z$ direction, and $\\xi_1^{2}<1$ implies spin squeezing along the\ndirection perpendicular to the mean spin direction. Furthermore,\nfrom Eq.~(\\ref{xixixi}), a competition between the transverse and\nlongitudinal correlations is evident.\n\nBy substituting Eqs.~(\\ref{s6}) and (\\ref{square2}) into\nEq.~(\\ref{xixi1}), one can obtain the expression of $\\xi _{1}^{2}$\nin terms of the local correlations $\\langle \\sigma _{1+}\\sigma\n_{2-}\\rangle $ and $\\langle \\sigma _{1-}\\sigma _{2-}\\rangle $ as\nfollows:\n\\begin{eqnarray}\n\\xi _{1}^{2} &=&1+(N-1)\\langle \\sigma _{1+}\\sigma _{2-}+\\sigma _{1-}\\sigma\n_{2+}\\rangle \\notag \\\\\n&&-2(N-1)|\\langle \\sigma _{1-}\\sigma _{2-}\\rangle | \\notag \\\\\n&=&1+2(N-1)\\left( \\langle \\sigma _{1+}\\sigma _{2-}\\rangle -|\\langle\n\\sigma _{1-}\\sigma _{2-}\\rangle |\\right) . \\label{xixixi1}\n\\end{eqnarray}\nThe second equality in Eq.~(\\ref{xixixi1}) results from the\nexchange symmetry. From Eqs.~(\\ref{qqq}), (\\ref{square3}), and\n(\\ref{zzzz}), one finds\n\\begin{eqnarray}\n\\xi _{2}^{2} &=&\\frac{\\xi _{1}^{2}}{\\langle \\sigma _{1z}\\rangle ^{2}},~\n\\label{xixixi2} \\\\\n\\xi _{3}^{2} &=&\\frac{\\min \\left\\{ \\xi\n_{1}^{2},1+\\mathcal{C}_{zz}\\right\\} }{(1-N^{-1})\\langle\n\\vec{\\sigma}_{1}\\cdot \\vec{\\sigma}_{2}\\rangle +{N^{-1}}}.\n\\label{third}\n\\end{eqnarray}\nThus, we have reexpressed the squeezing parameters in terms of\nlocal correlations and expectations.\n\n\\subsubsection{New spin-squeezing parameters}\n\nIn order to characterize spin squeezing more conveniently, we\ndefine the following squeezing parameters:\n\\begin{equation}\n\\zeta _{k}^{2}=\\max (0,1-\\xi _{k}^{2}), \\; k\\in \\{1,2,3\\}.\n\\label{zeta}\n\\end{equation}\nThis definition is similar to the expression of the concurrence\ngiven below.
Spin squeezing appears when $\\zeta _{k}^{2}>0$, and\nthere is no squeezing when $\\zeta _{k}^{2}$ vanishes. Thus, the\ndefinition of the first parameter $\\zeta_{1}^{2}$ has a clear\nmeaning, namely, it is the \\emph{strength} of the negative\ncorrelations as seen from Eq.~(\\ref{perp}). The larger is\n$\\zeta_1^2$, the larger is the strength of the negative\ncorrelation, and the larger is the squeezing. More explicitly,\nfor the initial state, we have $\\xi _{1}^{2}=1-(N-1)C_{0}$\n\\cite{WangSanders}, so $\\zeta _{1}^{2}$ is just the rescaled\nconcurrence $\\zeta_1^2=C_{r}(0)=(N-1)C_{0}$~\\cite{Vidal}.\n\nHere, we give a few comments on the spin-squeezing parameter $\\xi\n_{2}^{2}$, which represents a competition between $\\xi _{1}^{2}$\nand $\\langle \\sigma _{1z}\\rangle ^{2}$: the state is squeezed\naccording to the definition of $\\xi _{2}^{2}$ if $\\xi\n_{1}^{2}<\\langle \\sigma _{1z}\\rangle ^{2}$. We further note\nthat~\\cite{CPL}\n\\begin{equation}\n\\langle \\sigma _{1z}\\rangle ^{2}=1-2E_{L},\n\\end{equation}\nwhere $E_{L}$ is the linear entropy of one spin, which can be used\nto quantify the entanglement of pure states~\\cite{Horodecki}. So,\nthere is a competition between the strength of negative\ncorrelations and the linear entropy $2E_L$ in the parameter $\\xi\n_{2}^{2},$ and $\\zeta _{1}^{2}>2E_{L}$ implies the appearance of\nsqueezing.\n\n\n\\subsection{Concurrence for pairwise entanglement}\n\nIt has been found that the concurrence is closely related to spin\nsqueezing~\\cite{WangSanders}. Here, we consider its behavior under\nvarious decoherence channels. The concurrence quantifying the\nentanglement of a pair of spin-1\/2 particles can be calculated from the\nreduced density matrix. It is defined as~\\cite{Conc}\n\\begin{equation}\n{C}=\\max(0,\\lambda _1-\\lambda _2-\\lambda _3-\\lambda _4),\n\\label{Cdef}\n\\end{equation}\nwhere the quantities $\\lambda _i$ are the square roots of the\neigenvalues, in descending order, of the matrix product\n\\begin{equation}\n\\varrho _{12}=\\rho _{12}(\\sigma _{1y}\\otimes \\sigma _{2y})\\rho\n_{12}^{*}(\\sigma _{1y}\\otimes \\sigma _{2y}). \\label{varrho}\n\\end{equation}\nIn Eq.~(\\ref{varrho}), $\\rho _{12}^{*}$ denotes the complex conjugate\nof $\\rho _{12}$.\n\nThe two-spin reduced density matrix for a parity state with the\nexchange symmetry can be written in a block-diagonal\nform~\\cite{WangMolmer}\n\\begin{equation}\n\\rho _{12}=\\left(\n\\begin{array}{cc}\nv_{+} & u^{\\ast } \\\\\nu & v_{-}%\n\\end{array}%\n\\right) \\oplus \\left(\n\\begin{array}{cc}\nw & y \\\\\ny & w%\n\\end{array}%\n\\right) , \\label{re}\n\\end{equation}\nin the basis \\{$|00\\rangle ,|11\\rangle ,|01\\rangle ,|10\\rangle $\\}, where\n\\begin{eqnarray}\nv_{\\pm } &=&\\frac{1}{4}\\left( 1\\pm 2\\langle \\sigma _{1z}\\rangle +\\langle\n\\sigma _{1z}\\sigma _{2z}\\rangle \\right) , \\label{r1} \\\\\nw &=&\\frac{1}{4}\\left( 1-\\langle \\sigma _{1z}\\sigma _{2z}\\rangle \\right) ,\n\\label{r3} \\\\\nu &=&\\langle \\sigma _{1+}\\sigma _{2+}\\rangle , \\label{rrr} \\\\\ny &=&\\langle \\sigma _{1+}\\sigma _{2-}\\rangle .
\\label{rr}\n\\end{eqnarray}\nThe concurrence is then given by \\cite{Wootters2}\n\\begin{equation}\nC=\\max \\left\\{ 0,2\\left( |u|-w\\right) ,2(y-\\sqrt{v_{+}v_{-}})\\right\\} .\n\\label{conc}\n\\end{equation}\nFrom the above expressions of the spin-squeezing parameters and\nconcurrence, we notice that if we know the expectation $\\langle\n\\sigma _{1z}\\rangle$, and the correlations $\\langle \\sigma\n_{1+}\\sigma _{2-}\\rangle ,$ $\\langle \\sigma _{1-}\\sigma\n_{2-}\\rangle$, and $\\langle \\sigma _{1z}\\sigma _{2z}\\rangle,$ all\nthe squeezing parameters and the concurrence can be determined. Below,\nwe will give explicit analytical expressions for them subject to\nthree decoherence channels.\n\n\\subsection{Initial-state squeezing and concurrence}\n\nWe will now investigate the initial spin squeezing and pairwise\nentanglement by using our results for the spin-squeezing\nparameters and concurrence obtained in the last subsections. We\nfind that the third squeezing parameter $\\xi_3^2$ is equal to the\nfirst one $\\xi_1^2$. The squeezing parameter $\\xi _{1}^{2}$ is\ngiven by (see Appendix B):\n\\begin{align}\n\\xi _{1}^{2}(0)&=1-C_{r}(0) \\notag \\\\\n& =1-(N-1)C_{0} \\notag \\\\\n& =1-2(N-1)(|u_{0}|-y_{0}), \\label{ccc1}\n\\end{align}\nwhere\n\\begin{align}\nC_{0}=&\\frac{1}{4}\\Big\\{ \\big[ (1-\\cos ^{N-2}\\theta_0 )^{2}+16\\sin\n^{2}{(\\theta_0\/2)} \\cos ^{2N-4}{(\\theta_0\/2)}\\big] ^{\\frac{1}2}\n\\notag \\\\\n& -1+\\cos ^{N-2}\\theta_0 \\Big\\}\n\\end{align}\nis the concurrence \\cite{WangSanders}.\n\nThe parameter $\\xi _{2}^{2}(0)$ is easily obtained, as we know\nboth $\\xi _{1}^{2}(0)$ and $\\langle \\sigma _{1z}\\rangle_0^{2}$\nfrom Eq.~(\\ref{sigmaz}). For this state, following from\nEq.~(\\ref{square3}), $\\langle \\vec{\\sigma}_{1}\\cdot\n\\vec{\\sigma}_{2}\\rangle_0 =1,$ and thus the third parameter given\nby Eq.~(\\ref{third}) becomes\n\\begin{align}\n\\xi _{3}^{2}(0)&=\\min [\\xi _{1}^{2}(0),\\varsigma ^{2}(0)]\\notag\\\\\n&=\\min [ 1-C_{r}(0),1+\\mathcal{C}_{zz}(0)] , \\label{thirdd}\n\\end{align}\nwhere the correlation function is\n\\begin{equation}\n\\mathcal{C}_{zz}(0)=\\frac{1}{2}\\left( 1+\\cos ^{N-2}\\theta_0\n\\right) -\\cos ^{2N-2}{(\\theta_0\/2)}\\geq 0. \\label{c5}\n\\end{equation}\nThe proof of the above inequality is given in Appendix C.\n\nAs the correlation function $\\mathcal{C}_{zz}(0)$ and the\nconcurrence $C_{r}(0)$ are always $\\ge 0$, Eq.~(\\ref{thirdd})\nreduces to\n\\begin{equation}\n\\xi _{3}^{2}(0)=\\xi _{1}^{2}(0)=1-C_{r}(0).\n\\end{equation}\nSo, for the initial state, the spin-squeezing parameters $\\xi\n_{3}^{2}(0)$ and $\\xi _{1}^{2}(0)$ are equal; equivalently, we\ncan write $\\zeta _{1}^{2}(0)=\\zeta _{3}^{2}(0)=C_{r}(0)$ according\nto the definition of the parameter $\\zeta _{k}^{2}$ given by\nEq.~(\\ref{zeta}). A summary of the results of this section is\ngiven in Table I.\n\n\\begin{table*}[tbp]\n\\caption{Spin-squeezing parameters $\\xi_1^2$~\\cite{KU},\n$\\xi_2^2$~\\cite{Wineland}, $\\xi_3^2$~\\cite{Toth}, and the concurrence\n$C$~\\cite{Conc} for arbitrary states (first two columns) and for states\nwith parity (third column). The squeezing parameters are also\nexpressed in terms of local expectations (fourth column) and in\nterms of the initial rescaled concurrence $C_r(0)$ for the initial\nstate (last column).
Also, $C_0$ is the initial concurrence, and\nother parameters are defined in the text.}\n\\begin{center}\n\\begin{tabular}{c||c|c|c|c}\n\\hline\\hline Squeezing parameters& Definitions & States with\nparity & In terms of local expectations & Initial state\n\\\\ \\hline\\hline\n\\parbox{2 cm} {$\\xi_{1}^{2}$} &\n\\parbox{3.7 cm} {\\vspace{2mm} $\\displaystyle\\frac{4(\\Delta J_{\\vec{n}_\\perp })_{\\min\n}^{2}}{N}$ \\vspace{2mm}} &\n\\parbox{4 cm}{$\\displaystyle\\frac{2}{N}\\left( \\langle J_{x}^{2}+J_{y}^{2}\\rangle\n-|\\langle J_{-}^{2}\\rangle |\\right)$}&\n\\parbox{4 cm}{$1+2(N-1)(y -|u|)$}&\n\\parbox{1.5 cm}{$1-C_r(0)$}\n\n\\\\ \\hline\n\\parbox{2 cm} {$\\xi_{2}^{2}$} &\n\\parbox{3.7cm} {\\vspace{2mm} $\\displaystyle\\frac{N^{2}}{4\\langle\n\\vec{J}\\rangle ^{2}}\\xi _{1}^{2}$ \\vspace{2mm}} &\n\\parbox{4 cm} {$\\displaystyle\\frac{N^{2}\\xi _{1}^{2}}{4\\langle J_{z}\\rangle ^{2}}$} &\n\\parbox{4 cm}{$\\displaystyle\\frac{\\xi _{1}^{2}}{\\langle \\sigma _{1z}\\rangle ^{2}}$}&\n\\parbox{1.5 cm}{$\\displaystyle\\frac{1-C_r(0)}{\\langle\\sigma_{1z}\\rangle_0^2}$}\n\\\\ \\hline\n\n\\parbox{2 cm} { $\\xi_{3}^{2}$} &\n\\parbox{3.9cm} {\\vspace{2mm} \\ $\\displaystyle\n\\frac{\\lambda _{\\min }}{\\langle \\vec{J}^{2}\\rangle\n-\\displaystyle\\frac{N}{2}} $ \\vspace{2mm} } &\n\\parbox{4.2 cm} { $\\displaystyle\\frac{\\min\n\\left\\{ \\xi _{1}^{2},\\varsigma ^{2}\\right\\} }{{4}{N^{-2}}\\langle\n\\vec{J}^{2}\\rangle -{2}{N^{-1}}}$}&\n\\parbox{4 cm}{$\\displaystyle\\frac{\\min \\left\\{ \\xi _{1}^{2},1+\\mathcal{C}_{zz}\\right\\} }{(1-N^{-1})\\langle \\vec{\\sigma}_{1}\\cdot \\vec{\\sigma}_{2}\\rangle +{N^{-1}}} $}&\n\\parbox{1.5 cm}{$1-C_r(0)$}\n\\\\ \\hline\n\n\\parbox{2.5 cm} {\\vspace{0.3cm}Concurrence $C$\\vspace{0.3cm}} &\n\\parbox{3.7 cm} {$\\max(0,\\lambda_1-\\lambda_2-\\lambda_3-\\lambda_4)$} &\n\\parbox{4.0 cm} {$2\\max (0,|u|-w,y-\\sqrt{v_{+}v_{-}})$}&\n\\parbox{4.0 cm}{ $2\\max (0,|u|-w,y-\\sqrt{v_{+}v_{-}})$} &\n\\parbox{1.5 cm}{$C_0$}\n\\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\n\\section{Spin squeezing under decoherence}\n\nNow we begin to study spin squeezing under three different\ndecoherence channels. From the previous analysis, all the\nspin-squeezing parameters and the concurrence are determined by\ncertain correlation functions and expectations. So, if we know their\nevolution under decoherence, the evolution of the squeezing\nparameters and the pairwise entanglement can be calculated.\n\n\\subsection{Heisenberg approach}\n\nWe now use the Heisenberg picture to calculate the correlation\nfunctions and the relevant expectations. A decoherence channel\nwith Kraus operators $K_{\\mu }$ is defined via the map\n\\begin{equation}\n{\\cal E}(\\rho )=\\sum_{\\mu }K_{\\mu }\\rho K_{\\mu\n}^{\\dagger }.\n\\end{equation}\nThen, an expectation value of an operator $A$ can be calculated\nas $\\langle A\\rangle =\\text{Tr}\\left[ A{\\cal E}(\\rho )\\right] .$\nAlternatively, we can define the following map,\n\\begin{equation}\n{\\cal E}^{\\dagger }(\\rho )=\\sum_{\\mu }K_{\\mu }^{\\dagger }\\rho\nK_{\\mu }.\n\\end{equation}\nIt is easy to check that%\n\\begin{equation}\\label{four}\n\\langle A\\rangle =\\text{Tr}\\left[ A{\\cal E} (\\rho) \\right]\n=\\text{Tr}\\left[{\\cal E}^{\\dagger }(A)\\rho \\right] .\n\\end{equation}\nSo, one can calculate the expectation value via the above equation\n(\\ref{four}).
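For illustration, Eq.~(\\ref{four}) can be verified numerically. Since the ADC Kraus operators of Eq.~(\\ref{kraus1}) are not repeated here, the following minimal Python sketch assumes the decay-towards-$|1\\rangle$ form $K_{0}=\\sqrt{s}\\,|0\\rangle \\langle 0|+|1\\rangle \\langle 1|$ and $K_{1}=\\sqrt{p}\\,|1\\rangle \\langle 0|$, which reproduces the evolution $\\langle \\sigma _{1z}\\rangle =s\\langle \\sigma _{1z}\\rangle _{0}-p$ listed in Table II; the test state is arbitrary:\n\\begin{verbatim}\nimport numpy as np\n\np = 0.35\ns = 1 - p\nK0 = np.diag([np.sqrt(s), 1.0]).astype(complex)\nK1 = np.sqrt(p)*np.array([[0, 0], [1, 0]], dtype=complex)  # |1><0|\nKs = [K0, K1]\n\nrho = np.array([[0.6, 0.15-0.05j],\n                [0.15+0.05j, 0.4]])        # a valid qubit state\nA = np.diag([1.0+0j, -1.0])                # sigma_z\n\nlhs = np.trace(A @ sum(K @ rho @ K.conj().T for K in Ks))\nrhs = np.trace(sum(K.conj().T @ A @ K for K in Ks) @ rho)\nprint(np.allclose(lhs, rhs))               # True: Eq. (four)\nprint(np.isclose(lhs, s*0.2 - p))          # <sigma_z> = s<sigma_z>_0 - p\n\\end{verbatim}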
This relation is the analog of the standard Heisenberg\npicture.\n\n\\subsection{Amplitude-damping channel}\n\n\\begin{figure}[tbp]\n\\includegraphics[width=9cm,clip]{fig1.eps}\n\\caption{(Color online) Spin-squeezing parameters\n$\\protect\\zeta_2^2$ (red curve with squares), $\\protect\\zeta_3^2$\n(top green curve with circles), and the concurrence $C_r$ (blue\nsolid curve) versus the decoherence strength $p=1-\\exp(-\\gamma t)$\nfor the amplitude-damping channel, where $\\gamma$ is the damping\nrate. Here, $\\theta_0$ is the initial twist angle given by\nEq.~(\\ref{angle}). In all figures, we consider an ensemble of\n$N=12$ spins. Note that for a small initial twist angle $\\theta_0$\n(e.g., $\\theta_0=0.1\\pi$), the two squeezing parameters and the\nconcurrence all coincide. For larger values of $\\theta_0$, the\nparameters $\\zeta_2^2$, $\\zeta_3^2$, and $C$ become quite\ndifferent and all vanish for sufficiently large values of the\ndecoherence strength.}\n\\end{figure}\n\n\\subsubsection{Squeezing parameters}\nBased on the above approach and the Kraus operators for the ADC\ngiven by Eq.~(\\ref{kraus1}), we now find the evolutions of the\nfollowing expectations under decoherence (see Appendix D for\ndetails)\n\\begin{subequations}\n\\begin{align}\n\\langle \\sigma _{1z}\\rangle =& \\; s\\langle \\sigma _{1z}\\rangle _{0}-p, \\\\\n\\langle \\sigma _{1-}\\sigma _{2-}\\rangle =&\\; s\\langle \\sigma\n_{1-}\\sigma\n_{2-}\\rangle_0 , \\label{c2} \\\\\n\\langle \\sigma _{1+}\\sigma _{2-}\\rangle =&\\; s\\langle \\sigma\n_{1+}\\sigma\n_{2-}\\rangle_0 , \\label{c3} \\\\\n\\langle \\sigma _{1z}\\sigma _{2z}\\rangle =&\\; s^{2}\\langle \\sigma\n_{1z}\\sigma _{2z}\\rangle _{0}-2sp\\langle \\sigma _{1z}\\rangle\n_{0}+p^{2}. \\label{c44}\n\\end{align}\n\\end{subequations}\nTo determine the squeezing parameters and the concurrence, it is\nconvenient to know the correlation function $\\mathcal{C}_{zz}$ and\nthe expectation $\\langle \\vec{\\sigma}_{1}\\cdot\n\\vec{\\sigma}_{2}\\rangle ,$ which can be determined from the above\nexpectations as follows:\n\\begin{align}\n\\langle \\vec{\\sigma}_{1}\\cdot \\vec{\\sigma}_{2}\\rangle =&1-s\\, p\\, x_{0}, \\\\\n\\mathcal{C}_{zz}=&s^{2}\\left( \\langle \\sigma _{1z}\\sigma _{2z}\\rangle\n_{0}-\\langle \\sigma _{1z}\\rangle _{0}\\langle \\sigma _{2z}\\rangle _{0}\\right)\n\\notag \\label{c4} \\\\\n=&s^{2}\\mathcal{C}_{zz}(0),\n\\end{align}\nwhere%\n\\begin{equation}\nx_{0}=1+2\\langle \\sigma _{1z}\\rangle _{0}+\\langle \\sigma _{1z}\\sigma\n_{2z}\\rangle _{0}.\n\\end{equation}\nSubstituting the relevant expectation values and the correlation\nfunction into Eqs.~(\\ref{xixixi1}), (\\ref{xixixi2}), and\n(\\ref{third}) leads to the explicit expressions of the\nspin-squeezing parameters\n\\begin{eqnarray}\n\\xi _{1}^{2} &=&1-sC_{r}(0), \\label{xi1} \\\\\n\\xi _{2}^{2} &=&\\frac{\\xi _{1}^{2}}{\\left( s\\langle \\sigma\n_{1z}\\rangle\n_{0}-p\\right) ^{2}}, \\label{xix2} \\\\\n\\xi _{3}^{2} &=&\\frac{\\min \\left\\{\\xi _{1}^{2},1+s^{2}\\mathcal{C}%\n_{zz}(0)\\right\\} }{1+({N}^{-1}-1)s\\, p\\, x_{0}}. \\label{xix3}\n\\end{eqnarray}\nAs the correlation function $\\mathcal{C}_{zz}(0)\\geq 0$, given by\nEq.~(\\ref{c5})$,$ the third parameter can be simplified as\n\\begin{equation}\n\\xi _{3}^{2}=\\frac{1-sC_{r}(0)}{1+({N}^{-1}-1)s\\, p\\, x_{0}}.\n\\end{equation}\n\nInitially, the state is spin squeezed, i.e., $\\xi _{1}^{2}(0)<1$\nor $C_r(0)>0$. From Eq.~(\\ref{xi1}), one can find that $\\xi\n_{1}^{2}<1$, except in the asymptotic limit of $p=1$.
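The closed forms above can be evaluated directly. A short numerical sketch (assuming the initial-state expectations derived in Appendix B and the concurrence expression (\\ref{cccc}) obtained in the next subsubsection; the function name is illustrative) reproduces the qualitative behavior shown in Fig.~1:\n\\begin{verbatim}\nimport numpy as np\n\ndef initial_moments(N, theta):    # Appendix B expressions\n    th = theta/2\n    sz0 = -np.cos(th)**(N-1)                     # <sigma_1z>_0\n    zz0 = 0.5*(1 + np.cos(theta)**(N-2))         # <sigma_1z sigma_2z>_0\n    y0 = (1 - np.cos(theta)**(N-2))/8            # <sigma_1+ sigma_2->_0\n    u0 = (-(1 - np.cos(theta)**(N-2))/8\n          - 0.5j*np.sin(th)*np.cos(th)**(N-2))   # <sigma_1- sigma_2->_0\n    return sz0, zz0, 2*(N-1)*(abs(u0) - y0)      # ..., C_r(0)\n\nN, theta0 = 12, 0.4*np.pi\nsz0, zz0, Cr0 = initial_moments(N, theta0)\nx0 = 1 + 2*sz0 + zz0\nfor p in (0.1, 0.3, 0.6):\n    s = 1 - p\n    xi1 = 1 - s*Cr0                              # Eq. (xi1)\n    xi2 = xi1/(s*sz0 - p)**2                     # Eq. (xix2)\n    xi3 = xi1/(1 + (1/N - 1)*s*p*x0)             # simplified xi_3^2\n    Cr = max(0.0, s*Cr0 - 0.5*(N-1)*s*p*x0)      # Eq. (cccc)\n    print(p, xi1, xi2, xi3, Cr)\n\\end{verbatim}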
As we will\nsee below, for the PDC and DPC, $$\\xi _{1}^{2}=1-s^{2}C_{r}(0).$$\nThus, we conclude that according to $\\xi _{1}^{2}$, the initially\nspin-squeezed state is always squeezed for $p\\neq 1$, irrespective\nof both the decoherence strength and decoherence models. In other\nwords, there exists no SSSD if we quantify spin squeezing by the\nfirst parameter $\\xi_1^2$.\n\n\\subsubsection{Concurrence}\n\nIn the expression (\\ref{conc}) of the concurrence, there are three\nterms inside the max function. The expression can be simplified to\n(see Appendix E for details):\n\\begin{equation}\nC_{r}=2(N-1)\\max (0,|u|-w). \\label{sim}\n\\end{equation}\nBy using Eqs.~(\\ref{r3}), (\\ref{c2}), and (\\ref{c44}), one finds\n\\begin{align}\n&2(|u|-w) \\\\\n&\\hspace{5mm}=2s|u_{0}|+\\frac{s}{2}\\left[ s-2+s\\langle \\sigma\n_{1z}\\sigma _{2z}\\rangle\n_{0}-2p\\langle \\sigma _{1z}\\rangle _{0}\\right] \\notag\\\\\n&\\hspace{5mm}=sC_{0}-\\frac{s\\,p\\,x_{0}}{2}.\n\\end{align}\nSo, we obtain the evolution of the rescaled concurrence as\n\\begin{equation}\nC_{r}=\\max \\left[ 0,sC_{r}(0)-2^{-1}{(N-1)s\\,p}\\,x_{0}\\right],\n\\label{cccc}\n\\end{equation}\nwhich depends on the initial concurrence, the expectation\n$\\langle\\sigma_{1z}\\rangle_0$, and the correlation\n$\\langle\\sigma_{1z}\\sigma_{2z}\\rangle_0$.\n\n\\subsubsection{Numerical results}\n\nThe numerical results for the squeezing parameters and the concurrence\nare shown in Fig.~1 for different initial values of the twist\nangle $\\theta_0$, defined in Eq.~(\\ref{angle}). For the smaller\nvalue of $\\theta_0$, e.g., $\\theta_0=\\pi\/10$, we see that there is\nno ESD or SSSD. Both the spin squeezing and the pairwise\nentanglement are completely robust against decoherence.\nIntuitively, one expects that the larger the squeezing, the longer\nthe vanishing time. Here, in contrast, no matter how small the\nsqueezing parameters and the concurrence are, they vanish only in\nthe asymptotic limit. This results from the complex\ncorrelations in the initial state and the special characteristics\nof the ADC.\n\nFor larger values of $\\theta_0$, as the decoherence strength $p$\nincreases, the spin squeezing decreases until it suddenly\nvanishes, so the phenomenon of SSSD occurs. There exists a\ncritical value $p_{c},$ after which there is no spin squeezing.\nThe vanishing time of $\\xi _{3}^{2}$ is always larger than those\nof $\\xi _{2}^{2}$ and the concurrence. We note that, depending on\nthe initial state, the concurrence can vanish before or after $\\xi\n_{2}^{2}$. This means that in our model, the parameter $\\xi\n_{3}^{2}<1$ implies the existence of pairwise entanglement, while\n$\\xi _{2}^{2}$ does not.\n\n\n\n\\subsubsection{Decoherence strength $p_c$ corresponding to the SSSD}\n\nFrom Eqs.~(\\ref{xix2}), (\\ref{xix3}), and (\\ref{cccc}), the\ncritical value $p_{c}$ can be analytically obtained as\n\\begin{eqnarray}\np_{c}^{(k)} &=&\\frac{x_kC_{r}(0)}{\\left( N-1\\right) x_{0}}, \\quad (k=1,3) \\label{eq1} \\\\\np_{c}^{(2)} &=&\\frac{\\langle \\sigma_{1z}\\rangle _{0}^{2}+C_{r}(0)-1}{%\n1+2\\langle \\sigma_{1z}\\rangle _{0}+\\langle \\sigma _{1z}\\rangle\n_{0}^{2}},\n\\end{eqnarray}\nwhere $x_1=2$ for the concurrence and $x_3=N$ for the squeezing\nparameter $\\zeta_{3}^{2}$. The critical value $p_{c}^{(2)}$ refers\nto the second squeezing parameter $\\zeta_{2}^{2}$. Here, $p_c$ is\nrelated to the vanishing time $t_v$ via $p_c=1-\\exp(-\\gamma t_v)$.\n\nIn Fig.~2, we plot the critical values $p_c$ of the decoherence\nstrength versus $\\theta_0$.
The initial-state squeezing parameter\n$\\zeta_{1}^{2}$ is also plotted for comparison. For a range of\nsmall values of $\\theta_0$, the entanglement and squeezing are\nrobust to decoherence. The curves for the concurrence and the parameter\n$\\zeta _{2}^{2}$ intersect. However, we do not see intersections\nbetween $\\zeta _{3}^{2}$ and $\\zeta _{2}^{2}$ or between\n$\\zeta_3^2$ and the concurrence. We also see that for the same\ndegree of squeezing, the vanishing times are quite different,\nwhich implies that, apart from the spin-squeezing correlations,\nother types of correlations exist. For large enough initial twist\nangles $\\pi\\le \\theta_0\\le 2\\pi$, the behavior of the squeezing\nparameter $\\zeta_1^2$ is similar to that of the critical values\n$p_{c}^{(1)}$ and $p_{c}^{(3)}$.\n\n\\begin{figure}[tbp]\n\\includegraphics[width=9cm,clip]{fig2.eps}\n\\caption{(Color online) Critical values of the decoherence\nstrength $p_{c}^{(1)}$ (blue solid curve), $p_{c}^{(2)}$ (red\ncurve with squares), $p_{c}^{(3)}$ (green curve with circles), and\nthe squeezing parameter $\\protect\\zeta_1^2$ (black dashed curve)\nversus the initial twist angle $\\protect\\theta_0$ given by\nEq.~(\\ref{angle}) for the amplitude-damping channel, ADC. Here,\n$p_c$ is related to the vanishing time $t_v$ via\n$p_c=1-\\exp(-\\gamma t_v)$. At the vanishing times, SSSD occurs. The\ncritical values $p_{c}^{(1)}$, $p_{c}^{(2)}$, and $p_{c}^{(3)}$\ncorrespond to the concurrence, the squeezing parameter $\\zeta_2^2$,\nand $\\zeta_3^2$, respectively.}\n\\end{figure}\n\n\\subsection{Phase-damping channel}\n\n\\subsubsection{Squeezing parameters and concurrence}\nNow, we study the spin squeezing and pairwise entanglement under\nthe PDC. For this channel, the expectation values $\\langle \\sigma\n_{z}^{\\otimes n}\\rangle $ are unchanged and the two correlations\n$\\langle \\sigma _{1-}\\sigma _{2-}\\rangle $ and $\\langle \\sigma\n_{1+}\\sigma _{2-}\\rangle $ evolve as (see Appendix D for\ndetails)%\n\\begin{eqnarray}\n\\langle \\sigma _{1-}\\sigma _{2-}\\rangle &=& s^{2}\\langle \\sigma\n_{1-}\\sigma _{2-}\\rangle _{0}, \\notag \\\\ \\langle \\sigma _{1+}\\sigma\n_{2-}\\rangle &=& s^{2}\\langle \\sigma _{1+}\\sigma _{2-}\\rangle _{0}.\n\\label{evolve}\n\\end{eqnarray}\nFrom the above equations and the fact $\\langle \\vec{\\sigma}_{1}\\cdot \\vec{%\n\\sigma}_{2}\\rangle _{0}=1$, one finds\n\\begin{align}\n\\langle \\vec{\\sigma}_{1}\\cdot \\vec{\\sigma}_{2}\\rangle\n&=s^{2}\\langle \\sigma _{1x}\\sigma _{2x}+\\sigma _{1y}\\sigma\n_{2y}\\rangle _{0}+\\langle \\sigma\n_{1z}\\sigma _{2z}\\rangle _{0} \\notag \\\\\n &=s^{2}(1-\\langle \\sigma _{1z}\\sigma _{2z}\\rangle _{0})+\\langle \\sigma\n_{1z}\\sigma _{2z}\\rangle _{0}, \\label{sigmasigma} \\\\\n\\mathcal{C}_{zz}(p)&=\\mathcal{C}_{zz}(0).
\\label{sigmasigma2}\n\\end{align}\nTherefore, from the above properties, we obtain the evolution of the\nsqueezing parameters,\n\\begin{eqnarray}\n\\xi _{1}^{2} &=&1-s^{2}C_{r}(0), \\label{eee}\\\\\n\\xi _{2}^{2} &=&\\frac{\\xi _{1}^{2}}{\\langle \\sigma _{1z}\\rangle _{0}^{2}}%\n,~ \\label{ee2}\n\\end{eqnarray}\nand the third parameter becomes\n\\begin{eqnarray}\n\\xi _{3}^{2} &=&\\frac{N\\min \\left[\\xi _{1}^{2},1+\\mathcal{C}%\n_{zz}(0)\\right] }{(N-{1})[ s^{2}+(1-s^{2})\\langle \\sigma_{1z}\\sigma _{2z}\\rangle_{0}] +1} \\label{ee3} \\\\\n&=&\\frac{N\\xi _{1}^{2}}{(N-{1})[s^{2}+(1-s^{2})\\langle \\sigma\n_{1z}\\sigma _{2z}\\rangle_{0}] +{1}},\n\\end{eqnarray}\nwhere we have used Eqs.~(\\ref{sigmasigma}) and\n(\\ref{sigmasigma2}), and the property $\\mathcal{C}_{zz}(0)\\geq 0.$\n\nFrom Eq.~(\\ref{evolve}) and the simplified form of the concurrence\ngiven by Eq.~(\\ref{sim}), the concurrence is found to be\n\\begin{eqnarray}\nC_{r} &=&\\max \\Big\\{0,2(N-1) \\notag \\\\\n&&\\times \\left[ s^{2}|u_{0}|-{4}^{-1}(1-\\langle \\sigma _{1z}\\sigma\n_{2z}\\rangle _{0})\\right] \\Big\\} \\notag \\\\\n&=&\\max \\left[0,s^{2}C_{r}(0)+\\frac{a_{0}(s^{2}-1)}{2}\\right],\n\\label{ee4}\n\\end{eqnarray}\nwhere\n\\begin{align}\na_{0}=\\left( N-1\\right) (1-\\langle \\sigma _{1z}\\sigma _{2z}\\rangle\n_{0}).\n\\end{align}\nThus, we have obtained the full time evolution of the spin-squeezing\nparameters and the concurrence. To study the phenomenon of SSSD,\nwe examine below the vanishing times.\n\n\\subsubsection{Decoherence strength $p_c$ corresponding to the SSSD}\n\nThe critical decoherence strengths $p_c$ can be obtained from Eqs.~(\\ref{ee2}), (\\ref{ee3}), and (%\n\\ref{ee4}) as follows:\n\\begin{eqnarray}\np_{c}^{(k)} &=&1-\\left[ \\frac{a_{0}}{x_kC_{r}(0)+a_{0}}\\right]\n^{\\frac{1}{2}},\n\\label{eq2} \\\\\np_{c}^{(2)} &=&1-\\left[ \\frac{1-\\langle \\sigma _{1z}\\rangle _{0}^{2}}{C_{r}(0)%\n}\\right]^{\\frac{1}{2}},\n\\end{eqnarray}\nwhere $k=1,3$ and $x_1=2, x_3=N$.\n\\begin{figure}[tbp]\n\\includegraphics[width=9cm,clip]{fig3.eps}\n\\caption{(Color online) Same as in Fig.~2 but for the\nphase-damping channel, PDC, instead of the ADC.}\n\\end{figure}\n\nIn Fig.~3, we plot the critical decoherence strengths $p_c$ versus the twist\nangle $\\theta_0$ of the initial state for the PDC. For this\ndecoherence channel, the critical values $p_c$ first decrease\nuntil they reach zero. Also, they are symmetric with respect to\n$\\theta_0 =\\pi$, in contrast to the ADC. There are also\nintersections between the concurrence and the parameter $\\zeta\n_{2}^{2},$ and the critical value $p_{c}^{(3)}$ is always larger\nthan $p_{c}^{(1)}$ and $p_{c}^{(2)}.$\n\n\\begin{table*}[tbp]\n\\caption{Analytical results for the time evolutions of all\nrelevant expectations, correlations, spin-squeezing parameters,\nand concurrence, as well as the critical values $p_c$ of the\ndecoherence strength $p$. This is done for the three decoherence\nchannels considered in this work.
For the concurrence $C$, we give\nthe expression for $C_r'$, which is related to the rescaled\nconcurrence $C_r$ via $C_r=\\max(0,C_r')$.}\n\\begin{center}\n\\begin{tabular}{c||c|c|c}\n\\hline\\hline & Amplitude-damping channel & Phase-damping channel\n& Depolarizing channel\n\\\\\n& (ADC) & (PDC) & (DPC)\n\\\\ \\hline\\hline\n\\parbox{1.5 cm} {\\vspace{0.3cm} $\\langle\\sigma_{1z}\\rangle$\\vspace{0.2cm} } &\n\\parbox{4 cm} {$s\\langle\\sigma_{1z}\\rangle_0-p$} &\n\\parbox{4 cm}{$\\langle\\sigma_{1z}\\rangle_0$}&\n\\parbox{4 cm}{$s\\langle\\sigma_{1z}\\rangle_0$}\n\\\\ \\hline\n\n\\parbox{1.5 cm} {\\vspace{0.3cm}$\\langle\\sigma_{1z}\\sigma_{2z}\\rangle$\\vspace{0.3cm}} &\n\\parbox{4.3cm} {$s^2\\langle\\sigma_{1z}\\sigma_{2z}\\rangle_0-2sp\\langle\\sigma_{1z}\\rangle_0+p^2$} &\n\\parbox{4 cm} {$\\langle\\sigma_{1z}\\sigma_{2z}\\rangle_0$}&\n\\parbox{4 cm}{$s^2\\langle\\sigma_{1z}\\sigma_{2z}\\rangle_0$}\n\\\\ \\hline\n\n\\parbox{1.5 cm} {\\vspace{0.3cm}$\\langle\\sigma_{1+}\\sigma_{2-}\\rangle$\\vspace{0.3cm}} &\n\\parbox{4cm} {$s\\langle\\sigma_{1+}\\sigma_{2-}\\rangle_0$} &\n\\parbox{4 cm} {$s^2\\langle\\sigma_{1+}\\sigma_{2-}\\rangle_0$}&\n\\parbox{4 cm}{$s^2\\langle\\sigma_{1+}\\sigma_{2-}\\rangle_0$}\n\\\\ \\hline\n\n\n\\parbox{1.5 cm} {\\vspace{0.3cm}$\\langle\\sigma_{1-}\\sigma_{2-}\\rangle$\\vspace{0.3cm}} &\n\\parbox{4cm} {$s\\langle\\sigma_{1-}\\sigma_{2-}\\rangle_0$} &\n\\parbox{4 cm} {$s^2\\langle\\sigma_{1-}\\sigma_{2-}\\rangle_0$}&\n\\parbox{4 cm}{$s^2\\langle\\sigma_{1-}\\sigma_{2-}\\rangle_0$}\n\\\\ \\hline\n\n\\parbox{1.5 cm} {\\vspace{0.3cm}$\\langle\\vec{\\sigma}_{1}\\cdot\\vec{\\sigma}_{2}\\rangle$\\vspace{0.3cm}} &\n\\parbox{4cm} {$1-s\\,p\\,x_0$} &\n\\parbox{4 cm} {$s^2(1-\\langle\\sigma_{1z}\\sigma_{2z}\\rangle_0)+\\langle\\sigma_{1z}\\sigma_{2z}\\rangle_0$}&\n\\parbox{4 cm}{$s^2$}\n\\\\ \\hline\n\n\\parbox{1.5 cm} {\\vspace{0.3cm}${\\cal C}_{zz}$\\vspace{0.3cm}} &\n\\parbox{4 cm} {$s^2{\\cal C}_{zz}(0)$} &\n\\parbox{4 cm} {${\\cal C}_{zz}(0)$}&\n\\parbox{4 cm}{$s^2{\\cal C}_{zz}(0)$}\n\\\\ \\hline\n\n\n\n\\parbox{1.5 cm} {\\vspace{0.3cm}$\\xi_1^2$\\vspace{0.3cm}} &\n\\parbox{4cm} {$1-sC_r(0)$} &\n\\parbox{4 cm} {$1-s^2C_r(0)$}&\n\\parbox{4 cm}{$1-s^2C_r(0)$}\n\\\\ \\hline\n\n\\parbox{1.5 cm} {\\vspace{0.3cm}$\\xi_2^2$\\vspace{0.3cm}} &\n\\parbox{4cm} {$\\displaystyle\\frac{1-sC_r(0)}{(s\\langle\\sigma_{1z}\\rangle_0-p)^2}$} &\n\\parbox{4 cm} {\\vspace{0.15cm}$\\displaystyle\\frac{1-s^2C_r(0)}{\\langle\\sigma_{1z}\\rangle_0^2}$\\vspace{0.15cm}}&\n\\parbox{4 cm}{$\\displaystyle\\frac{1-s^2C_r(0)}{s^2\\langle\\sigma_{1z}\\rangle_0^2}$}\n\\\\ \\hline\n\n\n\\parbox{1.5 cm} {\\vspace{0.3cm}$\\xi_3^2$\\vspace{0.3cm}} &\n\\parbox{4cm} {$\\displaystyle\\frac{1-sC_r(0)}{1+(N^{-1}-1)s\\,p\\,x_0}$} &\n\\parbox{6 cm} {\\vspace{0.15cm}$\\displaystyle\\frac{1-s^2C_r(0)}{(1-N^{-1})[s^2+(1-s^2)\\langle\\sigma_{1z}\\sigma_{2z}\\rangle_0]+N^{-1}}$\\vspace{0.2cm}}&\n\\parbox{4 cm}{$\\displaystyle\\frac{1-s^2C_r(0)}{(1-N^{-1})s^2+N^{-1}}$}\n\\\\ \\hline\n\n\n\\parbox{1.5 cm} {\\vspace{0.3cm}$C_r'$\\vspace{0.3cm}} &\n\\parbox{4cm} {$sC_{r}(0)-(N-1)s\\,p\\,x_{0}\/2$} &\n\\parbox{4 cm} {$s^{2}C_{r}(0)+{a_{0}(s^{2}-1)}\/{2}$}&\n\\parbox{4.3 cm}{$s^{2}C_{r}(0)+(N-1)(s^{2}-1)\/2$}\n\\\\ \\hline\n\n\\parbox{1.5 cm} {\\vspace{0.3cm}$p_{c}^{(1)}$\\vspace{0.3cm}} &\n\\parbox{4cm} {$\\displaystyle\\frac{2C_{r}(0)}{\\left( N-1\\right) x_{0}}$} &\n\\parbox{4 cm} {$\\displaystyle 1-\\left(\n\\frac{a_{0}}{2C_{r}(0)+a_{0}}\\right)\n^{\\frac{1}{2}}$}&\n\\parbox{4 cm}{$\\displaystyle 1-\\left( 
\\frac{N-1}{2 C_{r}(0)+N-1}\\right)\n^{\\frac{1}2}$}\n\\\\ \\hline\n\n\n\\parbox{1.5 cm} {\\vspace{0.3cm}$p_{c}^{(2)}$\\vspace{0.3cm}} &\n\\parbox{4cm} {$\\displaystyle\\frac{\\langle \\sigma_{1z}\\rangle _{0}^{2}+C_{r}(0)-1}{%\n1+2\\langle \\sigma_{1z}\\rangle _{0}+\\langle \\sigma _{1z}\\rangle\n_{0}^{2}}$} &\n\\parbox{4 cm} {$\\displaystyle 1-\\left( \\frac{1-\\langle \\sigma _{1z}\\rangle _{0}^{2}}{C_{r}(0)%\n}\\right) ^{\\frac{1}2}$}&\n\\parbox{4 cm}{$\\displaystyle 1-\\left( \\frac{1}{C_{r}(0)+\\langle \\sigma _{1z}\\rangle _{0}^{2}%\n}\\right) ^{\\frac{1}2}$}\n\\\\ \\hline\n\n\\parbox{1.5 cm} {\\vspace{0.3cm}$p_{c}^{(3)}\\vspace{0.3cm}$} &\n\\parbox{4cm} {$\\displaystyle\\frac{NC_{r}(0)}{\\left( N-1\\right) x_{0}}$} &\n\\parbox{4 cm} {$\\displaystyle 1-\\left(\n\\frac{a_{0}}{NC_{r}(0)+a_{0}}\\right)\n^{\\frac{1}{2}}$}&\n\\parbox{4 cm}{$\\displaystyle 1-\\left( \\frac{N-1}{N C_{r}(0)+N-1}\\right)\n^{\\frac12}$}\n\\\\ \\hline\n\n\\end{tabular}%\n\\end{center}\n\\end{table*}\n\n\\subsection{Depolarizing channel}\n\n\\begin{figure}[tbp]\n\\includegraphics[width=9cm,clip]{fig4.eps}\n\\caption{(Color online) Same as in Fig.~2 but for the depolarizing\nchannel, DPC, instead of the ADC.}\n\\end{figure}\n\n\\subsubsection{Squeezing parameters and concurrence}\n\nThe decoherence of the squeezing parameter defined by S\\o rensen\n{\\it et al.}~\\cite{Sorensen} has been studied in\nRef.~\\cite{SimonKempe} for the DPC. It is intimately related to\nthe second squeezing parameter $\\xi_2^2$. For the DPC, the\nevolutions of the correlations $\\langle \\sigma _{1-}\\sigma _{2-}\\rangle\n$ and $\\langle \\sigma _{1+}\\sigma _{2-}\\rangle $ are the same as\nthose for the PDC, given by Eq.~(\\ref{evolve}), while the expectations\n$\\langle \\sigma _{1z}\\rangle $ and $\\langle \\sigma _{1z}\\sigma\n_{2z}\\rangle $ change as (see Appendix D)\n\\begin{eqnarray}\n\\langle \\sigma _{1z}\\rangle &=&s\\langle \\sigma _{1z}\\rangle _{0}, \\\\\n\\langle \\sigma _{1z}\\sigma _{2z}\\rangle &=&s^{2}\\langle \\sigma\n_{1z}\\sigma _{2z}\\rangle_{0}. \\label{cccccc}\n\\end{eqnarray}\nFrom these equations, we further have\n\\begin{align}\n& \\langle \\vec{\\sigma}_{1}\\cdot \\vec{\\sigma}_{2}\\rangle =s^{2}\\langle \\vec{%\n\\sigma}_{1}\\cdot \\vec{\\sigma}_{2}\\rangle _{0}=s^{2}, \\\\\n& \\mathcal{C}_{zz}=s^{2}\\left( \\langle \\sigma _{1z}\\sigma\n_{2z}\\rangle _{0}-\\langle \\sigma _{1z}\\rangle _{0}\\langle \\sigma\n_{2z}\\rangle _{0}\\right) =s^{2}\\mathcal{C}_{zz}(0).\n\\end{align}\n The squeezing parameter $\\xi_1^2$ is\ngiven by Eq.~(\\ref{eee}), and the other two squeezing parameters\nare obtained as\n\\begin{eqnarray}\n\\xi _{2}^{2} &=&\\frac{\\xi _{1}^{2} }{s^{2}\\langle \\sigma\n_{1z}\\rangle\n_{0}^{2}},~ \\label{k1} \\\\\n\\xi _{3}^{2} &=&\\frac{N\\min \\left\\{\\xi_{1}^{2} ,1+s^{2}\\mathcal{C}%\n_{zz}(0)\\right\\} }{(N-{1})s^{2}+{1}} \\notag \\\\\n&=&\\frac{N\\xi _{1}^{2} }{(N-{1})s^{2}+{1}}.
\\label{k2}\n\\end{eqnarray}\nBy making use of Eqs.~(\\ref{evolve}) and (\\ref{cccccc}) and\nstarting from the simplified form of the concurrence (\\ref{sim}),\nwe obtain\n\\begin{eqnarray}\nC_{r} &=&\\max\n\\left\\{0,2(N-1)\\left[s^{2}|u_{0}|-\\textstyle{\\frac14}(1-s^{2}\\langle\n\\sigma\n_{1z}\\sigma _{2z}\\rangle _{0})\\right]\\right\\} \\notag \\\\\n&=&\\max \\left[ 0,s^{2}C_{r}(0)+{2}^{-1}(N-1)(s^{2}-1)\\right] .\n\\label{cb}\n\\end{eqnarray}\nWe observe that the concurrence depends only on its own initial\nvalue, and not on the other initial expectations.\n\n\n\\subsubsection{Decoherence strength $p_c$ corresponding to the SSSD}\n\nFrom Eqs.~(\\ref{cb}), (\\ref{k1}), and (\\ref{k2}), the critical\nvalues are analytically calculated as\n\\begin{eqnarray}\np_{c}^{(k)} &=&1-\\left[ \\frac{N-1}{x_k C_{r}(0)+N-1}\\right]\n^{\\frac 12}, \\label{eq3}\n\\\\\np_{c}^{(2)} &=&1-\\left[ \\frac{1}{C_{r}(0)+\\langle \\sigma _{1z}\\rangle _{0}^{2}%\n}\\right]^{\\frac 12},\n\\end{eqnarray}\nwhere $k=1,3$ and $x_1=2, x_3=N$.\n\nIn Fig.~4, we plot the critical values $p_c$ versus the initial\ntwist angle $\\theta_0$ for the DPC. For the DPC, the values of $p_c$ first\nincrease until they reach their maxima and then decrease to zero.\nAlso, they are symmetric with respect to $\\theta_0=\\pi$, the same as\nfor the PDC. There are also intersections between the\nconcurrence and the parameter $\\zeta _{2}^{2}.$ Qualitatively, the\nbehaviors of $p_{c}^{(1)}$ and $p_{c}^{(3)}$ are the same as that\nof the squeezing parameter $\\zeta _{1}^{2}$. This implies that the\nlarger the squeezing, the larger is the critical value $p_c$.\n\nThe common features of these three decoherence channels are: (i)\nthe critical value $p_{c}^{(3)}$ is always larger than or equal to the\nother two, namely, the spin-squeezing correlations according to\n$\\xi _{3}^{2}$ are more robust; (ii) there always exist two\nintersections between the concurrence and the parameter $\\zeta\n_{2}^{2},$ for $\\theta_0$ from 0 to $2\\pi $, irrespective of the\ndecoherence channel; (iii) when there is no squeezing (central\narea of Figs.~2, 3, and 4), all vanishing times are zero. Table II\nconveniently lists all the analytical results obtained in this\nsection.\n\n\\section{Conclusions and remarks}\n\nTo summarize, for a spin ensemble in a typical spin-squeezed\ninitial state under three different decoherence channels, we have\nstudied spin squeezing with three different parameters in\ncomparison with the pairwise entanglement quantified by the\nconcurrence. When the subsystems of the correlated system decay\nasymptotically in time, the spin-squeezing parameter $\\zeta\n_{1}^{2}$ also decays asymptotically in time for all three types\nof decoherence. However, for the other two squeezing parameters\n$\\zeta_2^2$ and $\\zeta_3^2$, we find the appearance of\nspin-squeezing sudden death and entanglement sudden death. The\nglobal behaviors of the correlated state are markedly different\nfrom the local ones. The spin-squeezing parameter $\\zeta _{2}^{2}$\ncan vanish before, simultaneously with, or after the concurrence, while\nthe squeezing parameter $\\zeta _{3}^{2}$ is always the last to\nvanish. This means that this parameter is more robust to\ndecoherence, and it can detect more entanglement than~$\\xi_2^2$.\n\nOur analytical approach for the vanishing times can be applied to\nany initial quantum correlated states, not restricted to the\npresent one-axis twisted state.
Moreover, for more complicated\nchannels, such as the amplitude-damping channel at finite\ntemperatures~\\cite{Aolita} or the channel discussed in\nRef.~\\cite{Lidar}, the method developed in this article can be\nreadily applied to study spin squeezing under these decoherence\nchannels.\n\nOur investigations show the widespread occurrence of sudden-death\nphenomena in many-body quantum correlations. Since there exist\ndifferent vanishing times for different squeezing parameters, spin\nsqueezing offers a possible way to detect the total spin\ncorrelations and their quantum fluctuations on distinguishable\ntime scales. The discovery of different lifetimes for various\nspin-squeezing parameters means that, in some time regions, one\ntype of quantum correlation still exists when other quantum\ncorrelations have suddenly vanished. However, to determine which kind of\ncorrelations will vanish, one possible approach is to further\ninvoke irreducible multiparty correlations~\\cite{Zhou}, where the\nmultipartite correlations are classified into a series of\nirreducible $k$-party ones. If we could obtain the time evolution\nof such irreducible multipartite correlations in various\ndecoherence channels, we could classify the lifetimes for the\nspin-squeezing sudden death of various multipartite correlations\norder by order.\n\n\\begin{acknowledgments}\nWe gratefully acknowledge partial support from the National\nSecurity Agency, Laboratory of Physical Sciences, Army Research\nOffice, National Science Foundation under Grant No. 0726909, and\nJSPS-RFBR 06-02-91200. X. Wang acknowledges support from the\nNational Natural Science Foundation of China under No. 10874151,\nthe National Fundamental Research Programs of China under Grant\nNo. 2006CB921205, and the Program for New Century Excellent\nTalents in University (NCET). A.~M. acknowledges support from the\nPolish Ministry of Science and Higher Education under Grant No. N\nN202 261938.\n\\end{acknowledgments}\n\n\\begin{appendix}\n\n\\section{Spin-squeezing parameter $\\protect\\xi _{3}^{2}$ for states with\nparity symmetry}\n\nHere, we calculate the spin-squeezing parameter $\\xi_3^2$ for\ncollective states with either even or odd parity symmetry. For\nsuch states, we immediately have\n\\begin{equation}\n\\langle J_{x}\\rangle =\\langle J_{y}\\rangle =\\langle\nJ_{x}J_{z}\\rangle =\\langle J_{y}J_{z}\\rangle =0,\n\\end{equation}\nas these operators change the parity of the state. Then, the mean\nspin direction is along the $z$ direction and the correlation\nmatrix given by Eq.~(\\ref{cmatrix}) is simplified to\n\\begin{equation}\n\\mathbf{C}=\\left(\n\\begin{array}{ccc}\n\\langle J_{x}^{2}\\rangle & C_{xy} & 0 \\\\\nC_{xy} & \\langle J_{y}^{2}\\rangle & 0 \\\\\n0 & 0 & \\langle J_{z}^{2}\\rangle%\n\\end{array}%\n\\right),\n\\end{equation}\nwhere $C_{xy}=\\langle \\lbrack J_{x},J_{y}]_{+}\\rangle\/2$.
From the\ncorrelation matrix $\\mathbf{C}$ and the definition of the covariance\nmatrix $\\gamma $ given by Eq.~(\\ref{comatrix}), one finds\n\\begin{equation}\n\\Gamma =\\left(\n\\begin{array}{ccc}\nN\\langle J_{x}^{2}\\rangle & {N}C_{xy} & 0 \\\\\n{N}C_{xy} & N\\langle\nJ_{y}^{2}\\rangle & 0 \\\\\n0 & 0 & N(\\Delta J_{z})^{2}+\\langle J_{z}^{2}\\rangle%\n\\end{array}%\n\\right).\n\\end{equation}\nThis matrix has a block-diagonal form, and the eigenvalues of the\n$2\\times 2$ block are obtained as\n\\begin{equation}\n\\lambda _{\\pm }=\\frac{N}{2}\\left( \\langle J_{x}^{2}+J_{y}^{2}\\rangle\n\\pm |\\langle J_{-}^{2}\\rangle |\\right) .\n\\end{equation}\nIn deriving the above equation, we have used the relation%\n\\begin{equation}\nJ_{-}^{2}=J_{x}^{2}-J_{y}^{2}-i[J_{x},J_{y}]_{+}.\n\\end{equation}\nTherefore, the smallest eigenvalue $\\lambda _{\\min }$ of $\\Gamma $\nis obtained as\n\\begin{equation}\n\\lambda _{\\min }=\\min \\left(\\lambda _{-},N(\\Delta\nJ_{z})^{2}+\\langle J_{z}^{2}\\rangle \\right), \\label{xixi2}\n\\end{equation}\nwhere $\\lambda _{-}$ differs from the squeezing parameter\n$\\xi_1^2$ given by Eq.~(\\ref{xixi1}) by only a multiplicative\nconstant, as seen by comparing Eqs.~(\\ref{xixi1}) and\n(\\ref{xixi2}). From Eqs.~(\\ref{xixi2}) and (\\ref{x3}), one finds\nthat the squeezing parameter $\\xi_3^2$ is given by\nEq.~(\\ref{xixixi}).\n\n\\section{Spin-squeezing parameters for the one-axis twisted state}\n\nHere, we will use the Heisenberg picture to derive the relevant\nexpectations and spin-squeezing parameters for the initial\nstate~\\cite{Molmer2,WangMolmer2}. To determine the spin-squeezing\nparameter $\\xi _{1}^{2}$ given by Eq.~(\\ref{xixixi1}), one needs\nto know the expectation $\\langle \\sigma_{1z}\\rangle_0$ and the\ncorrelations $\\langle \\sigma _{1+}\\sigma _{2-}\\rangle_0$ and\n$\\langle \\sigma _{1-}\\sigma _{2-}\\rangle_0$. We first consider the\nexpectation $\\langle \\sigma_{1z}\\rangle_0$. For simplicity, we\nomit the subscript $0$ in the following formulas.\n\n\\subsection{Expectation $\\langle\\sigma_{1z}\\rangle$}\n\nThe evolution operator can be written as\n\\begin{equation}\nU=\\exp({-i\\chi tJ_{x}^{2}})=\\exp\\left({-i\\theta\n\\sum_{k>l}j_{kx}j_{lx}}\\right),\n\\end{equation}\nup to a trivial phase, where $\\theta=2\\chi t$ is given by\nEq.~(\\ref{angle}).
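This factorization is easy to verify numerically. A minimal Python sketch (for $N=3$ qubits, with $j_{kx}$ built as tensor products; SciPy's matrix exponential is assumed to be available) confirms that the two forms agree up to the trivial phase $e^{-i\\chi t N\/4}$:\n\\begin{verbatim}\nimport numpy as np\nfrom functools import reduce\nfrom scipy.linalg import expm\n\nsx2 = np.array([[0, 1], [1, 0]], dtype=complex)/2   # j_x for one spin\nI2 = np.eye(2, dtype=complex)\n\ndef jkx(k, N):   # j_{kx} acting on site k of an N-qubit register\n    return reduce(np.kron, [sx2 if n == k else I2 for n in range(N)])\n\nN, chi_t = 3, 0.7\nJx = sum(jkx(k, N) for k in range(N))\nU1 = expm(-1j*chi_t*(Jx @ Jx))\ntheta = 2*chi_t\nU2 = expm(-1j*theta*sum(jkx(k, N) @ jkx(l, N)\n                        for k in range(N) for l in range(k)))\n# agreement up to the trivial phase exp(-i*chi_t*N/4)\nprint(np.allclose(U1, np.exp(-1j*chi_t*N/4)*U2))    # True\n\\end{verbatim}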
From this form, the evolution of $j_{1z}$ can\nbe obtained as\n\\begin{eqnarray}\nU^{\\dagger }j_{1z}U =j_{1z}\\cos [ \\theta j_x^{(2)}] +j_{1y}\\sin [\n\\theta j_x^{(2)}],\n\\end{eqnarray}\nwhere\n\\begin{equation}\nj_{x}^{(k)}=\\sum_{l=k}^N j_{lx}.\n\\end{equation}\nTherefore, the expectation is\n\\begin{equation} \\langle j_{1z}\\rangle\n=-{2}^{-1}\\langle {\\bf 1'}| \\cos [ \\theta j_x^{(2)}] |{\\bf\n1'}\\rangle \\label{jz1}\n\\end{equation}\nsince $\\langle 1|j_{1y}|1\\rangle =0.$ Here, $|{\\bf 1'}\\rangle\n=|1\\rangle _{2}\\otimes ...\\otimes |1\\rangle _{N}.$ So, one can\nfind the following form for the expectation values\n\\begin{eqnarray}\n\\langle {\\bf 1}|\\cos \\left[ \\theta J_{x}\\right] |{\\bf 1}\\rangle\n&=&\\left( \\langle {\\bf 1}|e^{i\\theta J_{x}}|{\\bf 1}\\rangle\n+ {\\rm c.c.}\\right) \/2 \\notag \\\\\n&=&\\left( \\Pi _{k=1}^{N}\\langle 1|e^{i\\theta j_{kx}}|1\\rangle + {\\rm c.c.} \\right) \/2 \\notag \\\\\n&=&\\cos ^{N}({\\theta'}), \\label{jz2}\n\\end{eqnarray}\nwhere $\\theta'=\\theta\/2$ and $|{\\bf 1}\\rangle=|1\\rangle^{\\otimes\nN}$.\n\nBy using Eqs.~(\\ref{jz1}) and (\\ref{jz2}), one gets\n\\begin{equation}\\label{sigmaz}\n\\langle \\sigma _{z}\\rangle =-\\cos ^{N-1}\\left( {\\theta' }\\right).\n\\end{equation}\n\n\n\\subsection{Correlation $\\langle \\protect\\sigma _{1+}%\n\\protect\\sigma _{2-}\\rangle $}\n\nSince the operator $\\sigma _{1x}\\sigma _{2x}$ commutes with the\nunitary operator $U,$ we easily obtain\n\\begin{equation}\n\\langle \\sigma _{1x}\\sigma _{2x}\\rangle =0. \\label{xx}\n\\end{equation}\nWe now compute the correlation $\\langle \\sigma _{1z}\\sigma\n_{2z}\\rangle .$ Using the unitary operator $U$, we find\n\\begin{eqnarray*}\n&&\\hspace*{-7mm} U^{\\dagger }j_{1z}j_{2z}U\\nonumber\\\\\n&=&\\left[j_{1z}\\cos ( \\theta\nj_x^{(2)})+j_{1y}\\sin ( \\theta j_x^{(2)}) \\right] \\\\\n&&\\times \\left[j_{2z}\\cos [ \\theta (j_{1x}+j_{x}^{(3)})]\n+j_{2y}\\sin [ \\theta (j_{1x}+j_{x}^{(3)})] \\right] \\\\\n&=&\\left[j_{1z}\\cos (\\theta j_{2x})\\cos(\\theta\nj_{x}^{(3)})-j_{1z}\\sin\n(\\theta j_{2x})\\sin (\\theta j_{x}^{(3)})\\right. \\\\\n&&\\left. +j_{1y}\\sin (\\theta j_{2x})\\cos (\\theta\nj_{x}^{(3)})+j_{1y}\\cos (\\theta j_{2x})\\sin (\\theta j_{x}^{(3)})\n\\right] \\\\\n&&\\times \\left[j_{2z}\\cos (\\theta j_{1x})\\cos (\\theta j_{x}^{(3)})\n-j_{2z}\\sin (\\theta j_{1x})\\sin (\\theta j_{x}^{(3)})\\right.\n\\\\\n&&\\left.
+j_{2y}\\sin (\\theta j_{1x})\\cos (\\theta j_{x}^{(3)})\n+j_{2y}\\cos (\\theta j_{1x})\\sin (\\theta j_{x}^{(3)}) \\right].\n\\end{eqnarray*}%\nAlthough there are 16 terms after expanding the above equation,\nonly 4 terms survive when calculating $\\langle j_{1z}j_{2z}\\rangle$.\nWe then have\n\\begin{eqnarray}\\label{eef}\n\\langle j_{1z}j_{2z}\\rangle\n&=&\\langle {\\bf 1}|j_{1z}j_{2z}\\cos ^{2}(\\theta\/2)\\cos ^{2}(\\theta j_{x}^{(3)})\n\\notag \\\\\n&&-j_{1z}j_{2x}j_{2y}\\sin (\\theta )\\sin ^{2}(\\theta j_{x}^{(3)})\n\\notag \\\\\n&&+4j_{1y}j_{1x}j_{2x}j_{2y}\\sin ^{2}(\\theta \/2)\\cos ^{2}(\\theta j_{x}^{(3)})\n\\notag \\\\\n&&-j_{1y}j_{1x}j_{2z}\\sin (\\theta )\\sin ^{2}(\\theta j_{x}^{(3)}) |{\\bf 1}\\rangle\n\\notag \\\\\n&=&{4}^{-1}\\langle {\\bf 1}'|\\cos ^{2}(\\theta j_{x}^{(3)}) |{\\bf 1}'\\rangle\n\\notag \\\\\n&=&{8}^{-1}\\langle {\\bf 1}'|\\left[ 1+\\cos (2\\theta j_{x}^{(3)}\n) \\right]|{\\bf 1}'\\rangle\n\\notag \\\\\n&=&{8}^{-1}\\left[ 1+\\cos ^{N-2}(\\theta )\\right],\n\\end{eqnarray}\nwhere $|{\\bf 1}'\\rangle=|1\\rangle_3\\otimes...\\otimes|1\\rangle_N$.\nThe second equality in Eq.~(\\ref{eef}) is due to the property\n$j_{x}j_{y}=-j_{y}j_{x}={ij_z}\/{2}$, and the last equality follows from\nEq.~(\\ref{jz2}). Finally, from the above equation, one finds\n\\begin{equation}\n\\langle \\sigma _{1z}\\sigma _{2z}\\rangle ={2}^{-1}\\left( 1+\\cos\n^{N-2}\\theta \\right). \\label{zz}\n\\end{equation}\nDue to the relation $\\langle \\sigma _{1x}\\sigma _{2x}+\\sigma\n_{1y}\\sigma _{2y}+\\sigma _{1z}\\sigma _{2z}\\rangle =1$ for the\ninitial state, the correlation $\\langle \\sigma _{1y}\\sigma\n_{2y}\\rangle $ is obtained from Eqs.~(\\ref{xx}) and (\\ref{zz}) as\n\\begin{equation}\n\\langle \\sigma _{1y}\\sigma _{2y}\\rangle ={2}^{-1}\\left( 1-\\cos\n^{N-2}\\theta \\right) . \\label{yy}\n\\end{equation}\nSubstituting Eqs.~(\\ref{xx}) and (\\ref{yy}) into the following\nrelation\n\\begin{equation*}\n\\sigma _{1x}\\sigma _{2x}+\\sigma _{1y}\\sigma _{2y}=2\\left( \\sigma\n_{1+}\\sigma _{2-}+\\sigma _{1-}\\sigma _{2+}\\right)\n\\end{equation*}\nleads to one element of the two-spin reduced density matrix,\n\\begin{equation}\\label{y0}\ny_{0}=\\langle \\sigma _{1+}\\sigma _{2-}\\rangle ={8}^{-1}\\left(\n1-\\cos ^{N-2}\\theta \\right) ,\n\\end{equation}\nwhere the relation $\\langle \\sigma _{1+}\\sigma _{2-}\\rangle\n=\\langle \\sigma _{1-}\\sigma _{2+}\\rangle $ is used due to the\nexchange symmetry.\n\n\\subsection{Correlation $\\langle \\protect\\sigma _{1-}\\protect\\sigma %\n_{2-}\\rangle $}\n\nTo calculate the correlation $\\langle \\sigma _{1-}\\sigma\n_{2-}\\rangle ,$ in view of the following relations%\n\\begin{eqnarray}\n\\sigma _{1x}\\sigma _{2x}-\\sigma _{1y}\\sigma _{2y} &=&2\\left( \\sigma\n_{1+}\\sigma _{2+}+\\sigma _{1-}\\sigma _{2-}\\right) , \\label{sigma1} \\\\\ni\\left( \\sigma _{1x}\\sigma _{2y}+\\sigma _{1y}\\sigma _{2x}\\right)\n&=&2\\left( \\sigma _{1+}\\sigma _{2+}-\\sigma _{1-}\\sigma\n_{2-}\\right) ,\\quad \\label{sigma2}\n\\end{eqnarray}\nwe need to know the expectation $\\langle j_{1x}j_{2y}\\rangle .$ The\nevolution of $j_{1x}j_{2y}$ is given by\n\\begin{eqnarray*}\nU^{\\dagger }j_{1x}j_{2y}U &=&j_{1x}\\left\\{j_{2y}\\cos \\left[ \\theta\n(j_{1x}+j_{x}^{(3)})\\right] \\right. 
\\\\\n&&\\left.\\quad~~ -j_{2z}\\sin \\left[ \\theta\n(j_{1x}+j_{x}^{(3)})\\right] \\right\\},\n\\end{eqnarray*}%\nand the expectation is obtained as\n\\begin{eqnarray*}\n\\langle j_{1x}j_{2y}\\rangle &=&{2}^{-1}\\langle {\\bf 1'}|j_{1x}\\sin\n\\left[ \\theta\n(j_{1x}+j_{x}^{(3)})\\right] |{\\bf 1'}\\rangle \\\\\n&=&{(4i)}^{-1}\\langle {\\bf 1'}|j_{1x}e^{i\\theta j_{1x}}\\Pi\n_{k=3}^{N}e^{i\\theta j_{kx}} \\\\\n&&-j_{1x}e^{-i\\theta j_{1x}}\\Pi _{k=3}^{N}e^{-i\\theta j_{kx}}|{\\bf\n1'}\\rangle \\\\\n&=&{(4i)}^{-1}{\\cos ^{N-2}\\left( {\\theta'}{}\\right) }\\langle\n1|j_{1x}e^{i\\theta j_{1x}}-j_{1x}e^{-i\\theta j_{1x}}|1\\rangle \\\\\n&=&{2}^{-1}{\\cos ^{N-2}\\left( {\\theta'}\\right) }\\langle 1|\nj_{1x}\\sin\n(\\theta j_{1x})|1\\rangle \\\\\n&=&{4}^{-1}{\\sin \\left({\\theta'}{}\\right) \\cos ^{N-2}\\left(\n\\theta'\\right) }.\n\\end{eqnarray*}%\nHere, $|{\\bf 1'}\\rangle=|1\\rangle _{1}\\otimes |1\\rangle\n_{3}\\otimes ...\\otimes |1\\rangle _{N}$, where $|1\\rangle_2$ is\nabsent. Moreover, $\\langle j_{1y}j_{2x}\\rangle =\\langle\nj_{1x}j_{2y}\\rangle $ due to the exchange symmetry, and thus,\n\\begin{equation*}\n\\langle j_{1x}j_{2y}+j_{1y}j_{2x}\\rangle =2^{-1}{\\sin \\left({\\theta'}{}%\n\\right) \\cos ^{N-2}\\left({\\theta'}{}\\right) }.\n\\end{equation*}\nFor the initial state (\\ref{initial}), we obtain the following\nexpectations \\cite{KU,WangMolmer}\n\\begin{equation}\\label{b12}\n\\langle \\sigma _{1x}\\sigma _{2y}+\\sigma _{1y}\\sigma _{2x}\\rangle\n=2\\sin \\left( {\\theta'}{}\\right) \\cos ^{N-2}\\left( {\\theta\n'}{}\\right).\n\\end{equation}\nThe combination of Eqs.~(\\ref{xx}), (\\ref{yy}), (\\ref{sigma1}), (\\ref{sigma2}%\n), and (\\ref{b12}) leads to the correlation\n\\begin{eqnarray}\\label{u0}\nu_{0} &=&\\langle \\sigma _{1-}\\sigma _{2-}\\rangle =-{8}^{-1}\\left(\n1-\\cos\n^{N-2}\\theta \\right) \\notag \\\\\n&&-{i}{2}^{-1}\\sin \\left({\\theta'}{}\\right) \\cos ^{N-2}\\left({%\n\\theta' }{}\\right). \\label{cr}\n\\end{eqnarray}\nSubstituting Eqs.~(\\ref{y0}) and (\\ref{u0}) into Eq.~(\\ref{xixixi1})\nleads to the expression of the squeezing parameter $\\xi_1^2$ given\nby Eq.~(\\ref{ccc1}).\n\n\\section{Proof of ${\\cal C}_{zz}(0)\\ge 0 $}\n\nTo prove this, we will not use the specific dependence on the\ninitial twist angle $\\theta$ given by Eq.~(\\ref{c5}), but only\nthe positivity of the reduced density matrix (\\ref{re}). We\nfirst notice an identity\n\\begin{equation*}\n\\mathcal{C}_{zz}=4(v_{+}v_{-}-w^{2}),\n\\end{equation*}\nwhich results from Eqs.~(\\ref{r1}) and (\\ref{r3}). This is a key\nstep. There also exists another identity\n\\begin{equation}\n\\label{e1} w_0=y_0\n\\end{equation}\nas $\\langle \\vec{\\sigma}_{1}\\cdot \\vec{\\sigma}_{2}\\rangle _{0}=1.$\nFrom the positivity of the reduced density matrix (\\ref{re}), one\nhas\n\\begin{equation*}\nv_{0+}v_{0-}\\geq |u_0|^{2}\\geq y_0^{2}=w_0^{2},\n\\end{equation*}\nwhere the second inequality follows from Eq.~(\\ref{r3}) and the\nlast equality results from Eq.~(\\ref{e1}). 
This completes the\nproof.\n\n\\section{Derivation of the evolution of the correlations and expectations under decoherence}\n\nFor an arbitrary matrix\n\\begin{equation*}\nA=\\left(\n\\begin{array}{cc}\na & b \\\\\nc & d%\n\\end{array}%\n\\right) ,\n\\end{equation*}\nfrom the Kraus operators (\\ref{kraus1}) for the ADC, it is\nstraightforward to find\n\\begin{eqnarray*}\n{\\cal E} (A) &=&\\left(\n\\begin{array}{cc}\nsa & \\sqrt{s}b \\\\\n\\sqrt{s}c & d+pa%\n\\end{array}%\n\\right) , \\\\\n{\\cal E}^{\\dagger }(A) &=&\\left(\n\\begin{array}{cc}\nsa+pd & \\sqrt{s}b \\\\\n\\sqrt{s}c & d%\n\\end{array}%\n\\right) .\n\\end{eqnarray*}%\nThe above equations imply that\n\\begin{eqnarray*}\n{\\cal E} ^{\\dagger }(\\sigma _{\\mu}) &=&\\sqrt{s}\\sigma _{\\mu} \\; \\text{for}\\; \\mu=x,y, \\\\\n{\\cal E} ^{\\dagger }(\\sigma _{z}) &=&s \\sigma _{z}-p.\n\\end{eqnarray*}%\nAs we consider independent and identical decoherence channels\nacting separately on each spin, the evolution of the correlations and expectations in Eqs.~(%\n\\ref{c2}), (\\ref{c3}), and (\\ref{c44}) is obtained directly from\nthe above equations.\n\nFrom the Kraus operators (\\ref{kraus2}), the evolution of the\nmatrix $A$ under the PDC is obtained as\n\\begin{equation*}\n{\\cal E} (A)={\\cal E} ^{\\dagger }(A)=\\left(\n\\begin{array}{cc}\na & sb \\\\\nsc & d\n\\end{array}\n\\right) ,\n\\end{equation*}\nfrom which one finds\n\\begin{eqnarray*}\n{\\cal E} ^{\\dagger }(\\sigma _{\\mu}) &=&s \\sigma _{\\mu} \\quad \\text{for} \\; \\mu=x,y, \\\\\n{\\cal E} ^{\\dagger }(\\sigma _{z}) &=&\\sigma _{z}.\n\\end{eqnarray*}%\nSo the expectations $\\langle \\sigma _{z}^{\\otimes n}\\rangle $ are\nunchanged and Eq.~(\\ref{evolve}) is obtained.\n\nFrom the Kraus operators (\\ref{kraus3}) of the DPC, the evolution\nof the matrix $A$ is given by\n\\begin{eqnarray*}\n{\\cal E} (A) &=&{\\cal E} ^{\\dagger }(A) \\\\\n&=&\\left(\n\\begin{array}{cc}\nas +\\frac{p}{2}(a+d) & sb \\\\\nsc & ds +\\frac{p}{2}(a+d)%\n\\end{array}%\n\\right),\n\\end{eqnarray*}%\nfrom which one finds%\n\\begin{eqnarray*}\n{\\cal E} ^{\\dagger }(\\sigma _{\\alpha}) =s \\sigma _{\\alpha}\\quad\n\\text{for}\\; \\alpha\\in\\{x,y,z\\}.\n\\end{eqnarray*}%\nThen, Eq.~(\\ref{cccccc}) is obtained.\n\n\\section{Simplified form of the concurrence}\n\nFor our three kinds of decoherence channels, the concurrence\n(\\ref{conc}) can be simplified and is given by\n\\begin{eqnarray}\nC &=&\\max \\left\\{ 0,2\\left( |u|-w\\right)\n,2(y-\\sqrt{v_{+}v_{-}})\\right\\}\n\\notag\\\\\n&=&\\max \\left\\{ 0,2\\left( |u|-w\\right) \\right\\} . \\label{E1}\n\\end{eqnarray}\nIf one can prove\n\\begin{eqnarray}\n|u|-y &\\geq &0, \\\\\nw-\\sqrt{v_{+}v_{-}} &\\leq &0,\n\\end{eqnarray}\nthen we obtain the simplified form shown in Eq.~(\\ref{E1}). The\nlast inequality can be replaced by\n\\begin{equation}\\label{c22}\nw^{2}-v_{+}v_{-}\\leq 0,\n\\end{equation}\nas $w$ and $v_{+}v_{-}$ are real.\n\nWe first consider the ADC. From Eqs.~(\\ref{c2}), (\\ref{c3}), and (\\ref{c4}%\n), one obtains\n\\begin{eqnarray}\n|u|-y &=&s(|u_{0}|-y_{0})\\geq 0, \\\\\nw^{2}-v_{+}v_{-} &=&-\\frac{1}{4}\\mathcal{C}_{zz}=-\\frac{s^{2}}{4}\\mathcal{C}%\n_{zz}(0)\\leq 0,\n\\end{eqnarray}\nwhere the inequalities result from Eqs.~(\\ref{ccc1}) and\n(\\ref{c5}), respectively. 
So, the inequality (\\ref{c22}) follows.\n\nFor the PDC, from Eq.~(\\ref{evolve}) and the fact that\n$\\langle\\sigma_z^{\\otimes n}\\rangle$ is unchanged under\ndecoherence, the concurrence can also be simplified due to the\nfollowing properties:\n\\begin{eqnarray*}\n|u|-y &=&s^{2}(|u_{0}|-y_{0})\\geq 0, \\\\\nw^{2}-v_{+}v_{-} &=&-\\frac{1}{4}\\mathcal{C}_{zz}(0)\\leq 0.\n\\end{eqnarray*}\nFor the DPC, from Eqs.~(\\ref{evolve}) and (\\ref{cccccc}), one has\n\\begin{eqnarray}\n|u|-y &=&s^{2}(|u_{0}|-y_{0})\\geq 0, \\\\\nw^{2}-v_{+}v_{-} &=&-\\frac{s^{2}}{4}\\mathcal{C}_{zz}(0)\\leq 0.\n\\end{eqnarray}\nSo, again, the concurrence can be simplified to the form shown in\nEq.~(\\ref{E1}). This completes the proof.\n\n\\end{appendix}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\\label{sec_intro}\nOver the last decades nonlocal models have attracted much attention owing to their potentially promising applications in various disciplines of science and engineering, such as the peridynamic (PD) theory of continuum mechanics and the modeling of nonlocal diffusion processes, see \\cite{bobaru2010the,du2012analysis,silling2000reformulation,weckner2005the,zhou2010mathematical}. Peridynamics, originally introduced in \\cite{silling2000reformulation}, is a nonlocal formulation of elastodynamics which can more easily incorporate discontinuities such as cracks and damage, and it has been extended beyond its original formulation to micropolar models, nanofiber networks, and so on \\cite{bobaru2007influence,foster2010viscoplasticity,gerstle2007micropolar,weckner2009green}. While most nonlocal models are formulated on bounded domains with volume constraints, there are indeed applications in which the simulation in an infinite medium may be useful, such as wave or crack propagation in the whole space.\n\nIn this paper, we consider constructing perfectly matched layers (PMLs) to numerically solve the following nonlocal wave equation\n\\begin{align}\n& (\\partial_t^2+\\mathcal{L}) q(x,t) = f(x,t), \\quad x\\in\\mathbb{R},\\ t>0,\\label{eq:nonlocalwave}\\\\\n& q(x,0) = \\psi_0(x), \\quad \\partial_t q(x,0) = \\psi_1(x),\\quad x\\in\\mathbb{R}, \\label{eq:nonlocalwavecon}\n\\end{align}\nwhere $q(x,t)$ represents the displacement field, $\\psi_0(x)$ and $\\psi_1(x)$ are the initial values, and $f(x,t)$ is the body force. The nonlocal operator $\\mathcal{L}$ acting on $q$ is defined by\n\\begin{align} \n\t\\mathcal{L} q(x,t) = \\int_{\\mathbb{R}} \\big( q(x,t)-q(y,t) \\big) \\gamma\\Big(y-x,\\frac{x+y}{2}\\Big) \\mathrm{d}y,\t\\label{eq:nonlocalOperator}\n\\end{align}\nwhere the nonnegative kernel function $\\gamma(\\alpha,\\beta)$ satisfies\n\\begin{align}\n\\gamma(-\\alpha,\\beta)=\\gamma(\\alpha,\\beta),\\ \\forall \\alpha,\\beta\\in\\mathbb{R},\\quad \\mathrm{and}\\quad \\gamma(\\alpha,\\beta)=0,\\ \\mbox{if}\\ |\\alpha|>\\delta>0. \\label{eq:symkernel}\n\\end{align}\nWe assume the initial values $\\psi_k(x)\\ (k=0,1)$ and the source $f(x,t)$ are compactly supported in a bounded domain $\\Omega_f$ for all $t$.\n\n\nThe aim of this paper is to develop an efficient numerical scheme to compute the solution of problem \\eqref{eq:nonlocalwave}-\\eqref{eq:nonlocalwavecon} on the whole real axis. We face two difficulties: \n\\begin{itemize}\n\\item The definition domain is unbounded. This requires us to construct artificial\/absorbing boundary conditions (ABCs), which artificially bound the computational domain without changing the solution of a PDE or nonlocal model. 
Here we consider perfectly matched layers (PMLs) of nonlocal models to overcome the unboundedness of the spatial domain;\n\\item The kernel in the proposed nonlocal PML equation is complex-valued and depends on the time $t$. As a result, the modified nonlocal operator in the PML equation is given by a convolution in time, which differs from the original nonlocal operator \\eqref{eq:nonlocalOperator}. In addition, the simulations are implemented in multi-scale media. These require us to develop an asymptotically compatible (AC) scheme which should be consistent with both its local limiting model (i.e., taking $\\delta \\to 0$) and the nonlocal model itself (i.e., taking $\\delta =\\mathcal{O}(1)$). \n\\end{itemize}\n\n\nTo overcome the first difficulty of the unboundedness of the definition domain, accurate ABCs are a successful approach, as they absorb any waves impinging on the artificial boundaries\/layers. Great progress has been made on the construction of ABCs for various nonlocal models, see \\cite{DuHanZhangZheng,DuZhangZheng,ZYZD16,ZhengHuDuZhang,zheng2020stability}. In this paper, we apply the perfectly matched layer (PML) to confine the computation to a bounded domain of physical interest.\nThe PML has two important properties: (i) waves in the PML regions decay exponentially; and (ii) the waves returning after one round trip through the absorbing layer are very small even when they reflect off the truncated boundary. These properties make it useful for simulating wave propagation in various media and fields, e.g., \\cite{ber94,becache2004perfectly,berenger1996three,bermudez2007an,cw,cl03,collino1998the,turkel1998absorbing}. While the PML has been well developed for local problems, there are few works on PMLs for nonlocal problems \\cite{antoine2019towards,DuZhangNonlocal1,DuZhangNonlocal2,wildman2011,wildman2012a,wang2013matching,ji2020artificial}. The main reason is that, due to the nonlocal horizon, the design of PMLs for nonlocal models poses challenges not faced in the PDE setting. For example, when constructing local PMLs, one replaces derivatives with respect to real coordinates by the corresponding complex derivatives. However, this process cannot be applied to the nonlocal operator, which is in integral form. \n\nIn this paper, we provide a way of constructing an efficient PML for the nonlocal wave problem \\eqref{eq:nonlocalwave}--\\eqref{eq:nonlocalwavecon}. To do so, we first reformulate the wave equation into a nonlocal Helmholtz equation by using the Laplace transform. The Laplace transform introduces a complex variable $s$. After that, we apply the PML modifications, recently developed in \\cite{DuZhangNonlocal1,DuZhangNonlocal2} for nonlocal Helmholtz equations, to derive PMLs for the resulting nonlocal Helmholtz equation with $s$. In this situation, the kernel is still analytically continued into the complex coordinates and, consequently, the modified equation has a complex-valued kernel depending on the complex variable $s$. Finally, we transform the modified nonlocal equation back into its time-domain form by the inverse Laplace transform. As a result, we obtain the nonlocal wave equation with PML modifications.\n\nIn terms of the discretization of the nonlocal PML equation, AC schemes, a concept developed in \\cite{TianDu,TianDu2}, are needed to discretize the nonlocal operator \\cite{Du2016handbook}. 
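\n\nTo fix ideas, a minimal (non-AC) Riemann-sum sketch of the action of the nonlocal operator $\\mathcal{L}$ on a grid function is given below; the function name \\texttt{gamma} and the calling convention are illustrative assumptions, and such a plain quadrature is in general not asymptotically compatible, which is precisely why AC weights are employed in this paper:\n\\begin{verbatim}\nimport numpy as np\n\ndef apply_nonlocal_operator(q, x, h, gamma, delta):\n    # L q(x_n) = int (q(x_n) - q(y)) * gamma(y - x_n, (x_n + y)\/2) dy,\n    # approximated by a Riemann sum over the horizon |y - x_n| <= delta.\n    Lq = np.zeros_like(q)\n    r = int(round(delta \/ h))   # number of grid points inside the horizon\n    for n in range(len(x)):\n        for k in range(max(0, n - r), min(len(x), n + r + 1)):\n            Lq[n] += (q[n] - q[k]) * gamma(x[k] - x[n], 0.5*(x[n] + x[k])) * h\n    return Lq\n\\end{verbatim}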
\nIn this paper, the kernel is taken such that the following heterogeneous diffusion coefficient is well defined:\n\\begin{equation}\\label{sm}\n0< \\sigma(x)=\\frac{1}{2}\\int_\\mathbb{R} s^2\\gamma(s,x)ds < \\infty,\n\\end{equation}\nwhich implies that the nonlocal model is posed in multi-scale media. Under the assumption \\eqref{sm}, the nonlocal operator \\eqref{eq:nonlocalOperator} converges\nto a local operator \\cite{DuZhangZheng} of the form\n\\begin{equation} \\label{LO}\n\\lim_{\\delta\\to 0^+} \\mathcal{L} q(x)=-\\partial_x\\left[\\sigma(x)\\partial_xq(x)\\right].\n\\end{equation}\nAs $\\delta\\rightarrow 0$, the solution of problem \\eqref{eq:nonlocalwave} will converge to the solution of the local wave equation\n\\begin{align}\n \\partial_t^2 q(x,t)-\\partial_x\\left[\\sigma(x)\\partial_x \\right] q(x,t)= f(x,t), \\quad x\\in\\mathbb{R},\\ t>0. \n\\end{align}\n\nThe AC scheme can ensure that numerical solutions of nonlocal models converge to the correct local limiting solution as both the mesh size $h$ and the nonlocal horizon $\\delta$ tend to zero. One can refer to \\cite{du2019asymptotically,tian2017a,TianDu,tian2014asymptotically} for more details of AC schemes. Noting that our nonlocal PML problem involves a new complex-valued kernel arising from the inverse Laplace transform, we extend the ideas given in \\cite{DuZhangZheng} to discretize the one-dimensional nonlocal operator with general complex-valued, time-dependent kernels and complex-valued functions. For practical multi-scale simulations, we apply Talbot's contour \\cite{weideman2006optimizing} to the inverse Laplace transform and obtain an approximation consisting of several sub-kernels. For each sub-kernel, we employ an AC scheme, developed for complex functions in \\cite{DuZhangNonlocal1,DuZhangNonlocal2}, to discretize the resulting nonlocal operator. After that, we introduce some new auxiliary functions and reformulate the semi-discrete problem into a second-order ODE system, which is finally solved by a Verlet-type scheme.\n\nThe rest of this paper is organized as follows. In section~\\ref{sec:NPML}, we design the nonlocal PMLs and obtain a truncated nonlocal PML problem on a bounded domain. In section~\\ref{sec_dis}, we first spatially discretize the truncated nonlocal PML problem into an ODE system in the variable $t$ and then solve it by a Verlet-type scheme. In section~\\ref{sec_num}, we introduce the basic setting of parameters for the discretization, and present numerical examples to verify the efficiency of the nonlocal PMLs and the convergence order of our numerical scheme. \n\n\n\\section{Nonlocal Perfectly Matched Layers} \\label{sec:NPML}\n\nWe now consider the construction of nonlocal PMLs by using the complex-coordinate approach. The complex-coordinate approach is essentially based on analytic continuation of the wave equation into complex spatial coordinates where the fields are exponentially decaying. 
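As a simple illustration of this mechanism (not part of the derivation below), consider the local limit in a homogeneous exterior medium with constant wave speed $c$: in the Laplace domain an outgoing solution behaves like $e^{-sx\/c}$, and under the complex stretching $x\\to\\tilde x$ defined below it becomes\n\\begin{equation*}\ne^{-s\\tilde x\/c}=e^{-sx\/c}\\,e^{-\\frac{z}{c}\\int_0^x\\sigma(\\eta)\\,\\mathrm{d}\\eta},\n\\end{equation*}\nwhere $\\sigma$ here denotes the absorption function of the stretching; the extra factor decays exponentially in the layer, uniformly in $s$, provided $\\Re\\{z\\}>0$. 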
To do so, we assume the initial data and the kernel satisfy the following properties:\n\\begin{itemize}\n\\item[A1:] $\\psi_1$ and $f$ are compactly supported in a finite interval $\\mathcal{D}=(x_l,x_r)$, and $\\psi_0$ is compactly supported in $(x_l+\\delta,x_r-\\delta)$; \n\\item[A2:] $\\gamma$ is compactly supported over a strip $[-\\delta,\\delta]\\times\\mathbb{R}$ with $\\delta\\leq x_r-x_l$;\n\\item[A3:] $\\gamma$ is homogeneous in both $[x_r,+\\infty)$ and $(-\\infty,x_l]$, namely,\n\\begin{align}\n\\gamma(\\alpha,\\beta) =& \\gamma_L(\\alpha),\\quad \\beta\\in (-\\infty, x_l+\\delta\/2],\\\\\n\\gamma(\\alpha,\\beta) =& \\gamma_R(\\alpha),\\quad \\beta\\in (x_r-\\delta\/2,+\\infty).\n\\end{align}\n\\end{itemize}\nIn the sequel we take $\\gamma_L=\\gamma_R=\\gamma_\\infty$ for brevity. \n\nPerforming the Laplace transform on \\eqref{eq:nonlocalwave}, we have \n\\begin{align}\ns^2\\hat q(x,s) +\\mathcal{L} \\hat q(x,s) = \\hat f(x,s)+s\\psi_0(x)+\\psi_1(x), \\quad x\\in\\mathbb{R}, \\label{eq:Hel}\n\\end{align}\nwhere $\\hat{q}(x,s) = \\mathscr{L}(q(x,t);s)$ with $\\mathscr{L}$ representing the Laplace transform in time with $\\Re \\{s\\} >0$.\n\nNoting that the nonlocal operator $\\mathcal{L}$ is self-adjoint, we can rewrite \\eqref{eq:Hel} into the weak form of \n\\begin{align*}\n\\int_{\\mathbb{R}} s^2\\hat q(x,s) v(x)\\,\\mathrm{d}x -& \\frac12\\int_{\\mathbb{R}}\\int_{\\mathbb{R}} \\big[\\hat q(x,s) - \\hat q(y,s) \\big]\\big[ v(x)-v(y) \\big]\\\\\n& \\gamma\\Big(y-x,\\frac{x+y}{2}\\Big)\\,\\mathrm{d}x\\,\\mathrm{d}y = \\int_\\mathbb{R} \\big(\\hat f(x,s)+s\\psi_0(x)+\\psi_1(x)\\big)v(x)\\mathrm dx,\\quad \\forall v\\in C_0^\\infty(\\mathbb{R}).\n\\end{align*}\nThe PML modifications can be viewed as a complex coordinate stretching of the original problem by constructing an analytic continuation to the complex plane \\cite{DuZhangNonlocal1,DuZhangNonlocal2}. In this paper, we take \n\\begin{align}\n\t\\tilde x:=\\int_0^x \\alpha(\\eta,s)\\mathrm{d}\\eta= \\int_0^x \\Big(1 + \\frac{z}{s}\\sigma(\\eta)\\Big) \\mathrm{d}\\eta,\\qquad \\tilde y:=\\int_0^y \\alpha(\\eta,s)\\mathrm{d}\\eta= \\int_0^y \\Big(1 + \\frac{z}{s}\\sigma(\\eta)\\Big) \\mathrm{d}\\eta, \\label{eq:complexStreching}\n\\end{align}\nwhere the absorption function $\\sigma(\\eta)\\leq1$ is positive in $\\mathbb{R}\\setminus\\mathcal{D}$ and is zero in $\\mathcal{D}$. The PML coefficient $z$ is a real or complex constant, such as $z= 10$ or $ z = 10+\\i$. By replacing\n\\begin{equation*}\nx\\to\\tilde x(x,s),\\quad y\\to \\tilde y(y,s),\\quad\n\\,\\mathrm{d}x\\to\\frac{\\partial \\tilde x}{\\partial x}\\,\\mathrm{d}x=\\alpha(x,s)\\,\\mathrm{d}x,\\quad\n\\,\\mathrm{d}y\\to\\frac{\\partial \\tilde y}{\\partial y}\\,\\mathrm{d}y=\\alpha(y,s)\\,\\mathrm{d}y,\n\\end{equation*}\nwe can transform Eq. 
\\eqref{eq:Hel} into the following nonlocal equation with PML modifications\n\\begin{align*}\n\\int_{\\mathbb{R}} s^2\\hat q(\\tilde x,s) v(\\tilde x)\\,\\mathrm{d}x &- \\frac12\\int_{\\mathbb{R}}\\int_{\\mathbb{R}} \\big[\\hat q(\\tilde x,s) - \\hat q(\\tilde y,s) \\big]\\big[ v(\\tilde x)-v(\\tilde y) \\big]\\gamma\\Big(\\tilde y-\\tilde x,\\frac{\\tilde x+\\tilde y}{2}\\Big)\\alpha(x,s)\\alpha(y,s)\\,\\mathrm{d}x\\,\\mathrm{d}y\\\\\n& = \\int_\\mathbb{R}\\big(\\hat f(\\tilde x,s)+s\\psi_0(\\tilde x)+\\psi_1(\\tilde x)\\big) v(\\tilde x)\\alpha(x,s)\\mathrm dx,\\quad \\forall v\\in C_0^\\infty(\\mathbb{R}),\n\\end{align*}\nwhich implies that \n\\begin{align}\ns^2\\alpha(x,s)\\hat q(\\tilde x,s) + \\int_{\\mathbb{R}} \\big[\\hat q(\\tilde x,s) - \\hat q(\\tilde y,s) \\big] \\gamma\\Big(\\tilde y-\\tilde x,\\frac{\\tilde x+\\tilde y}{2}\\Big)\\alpha(x,s)\\alpha(y,s)\\mathrm dy \\notag\\\\\n= \\hat f(x,s)+s\\psi_0(x)+\\psi_1(x). \\label{eq_HelPML}\n\\end{align}\nNoting that to derive the right hand side of the above equation, we have used the facts that $\\tilde x=x,\\alpha(x,s)=1$ for $x\\in\\mathcal{D}$, the initial data $\\psi_k(k=0,1)$ and the source function $f$ are compactly supported in $\\mathcal{D}$. Thus, we continue the equation~\\eqref{eq:Hel} into \\eqref{eq_HelPML} in complex coordinates. One can see that the solutions $\\hat q(\\tilde x,s)$ will not change in the interior domain $\\mathcal{D}$ and exponentially decay in the absorbing region $\\sigma(x)>0$ by choosing an appropriate PML coefficient $z$. \n\nWe now perform the inverse Laplace transform to turn the equation back into the time-domain form. To do so, we set \n\\begin{align}\n\\tilde q(x,t) = \\mathscr{L}_s^{-1}[\\hat q(\\tilde x,s)],\\qquad \\tilde \\gamma(x,y,t)=\\mathscr{L}_s^{-1}\\Big[\\frac{1}{s}\\gamma\\Big(\\tilde y-\\tilde x,\\frac{\\tilde x+\\tilde y}{2}\\Big)\\alpha(x,s)\\alpha(y,s)\\Big]. \\label{eq_invkernel}\n\\end{align}\nSince $\\tilde x=x$ for $x\\in\\mathcal{D}$, we have $\\tilde q(x,t)=q(x,t)$ for $x\\in\\mathcal{D}$ and all time $t$, which implies that $\\tilde q(x,0)=q(x,0)$ and $\\partial_t\\tilde q(x,t)|_{t=0}=\\partial_t q(x,t)|_{t=0}$ for $x\\in \\mathcal{D}$. Therefore, we can naturally assume that $\\tilde q(x,0)=\\psi_0(x)$ and $\\partial_t\\tilde q(x,0)=\\psi_1(x)$. 
Then, we have the following inverse Laplace transforms \n\\begin{align}\n\\mathscr{L}_s^{-1}[ s^2\\alpha(x,s)\\hat q(\\tilde x,s)-s\\psi_0(x)-\\psi_1(x) ] &= \\mathscr{L}_s^{-1}[ (s^2+zs\\sigma(x))\\hat q(\\tilde x,s) -s\\psi_0(x)-\\psi_1(x)]\\notag\\\\\n& = \\partial_t^2 \\tilde q(x,t) + z\\sigma(x) \\partial_t \\tilde q(x,t), \\label{eq_wavePMLp1}\n\\end{align}\nand\n\\begin{align}\n& \\mathscr{L}_s^{-1}\\Big[\\big[\\hat q(\\tilde x,s) - \\hat q(\\tilde y,s) \\big]\\gamma\\Big(\\tilde y-\\tilde x,\\frac{\\tilde x+\\tilde y}{2}\\Big)\\alpha(x,s)\\alpha(y,s)\\Big] \\notag\\\\\n=& \\mathscr{L}_s^{-1}\\Big[\\big[\\big(s\\hat q(\\tilde x,s)-q(x,0)\\big) -\\big(s \\hat q(\\tilde y,s)-q(y,0)\\big) \\big] \\cdot \\frac{1}{s} \\gamma\\Big(\\tilde y-\\tilde x,\\frac{\\tilde x+\\tilde y}{2}\\Big)\\alpha(x,s)\\alpha(y,s)\\Big] \\notag\\\\\n& + \\mathscr{L}_s^{-1}\\Big[\\big[q(x,0)-q(y,0) \\big] \\cdot \\frac{1}{s} \\gamma\\Big(\\tilde y-\\tilde x,\\frac{\\tilde x+\\tilde y}{2}\\Big) \\alpha(x,s)\\alpha(y,s)\\Big] \\notag\\\\\n=& \\big[ \\partial_t \\tilde q(x,t)- \\partial_t \\tilde q(y,t) \\big] \\ast \\tilde \\gamma(x,y,t) +\\big[q(x,0)-q(y,0) \\big] \\tilde \\gamma(x,y,t),\\label{eq_wavePMLp2}\n\\end{align}\nwhere $*$ indicates the convolution of two functions in time. Combining \\eqref{eq_wavePMLp1} and \\eqref{eq_wavePMLp2} with \\eqref{eq_HelPML} yields the nonlocal wave equation with PML modifications as\n\\begin{align}\n\\big(\\partial_t^2 +z\\sigma(x)\\partial_t \\big) \\tilde q(x,t) + &\\int_\\mathbb{R} \\big[ \\partial_t \\tilde q(x,t)- \\partial_t \\tilde q(y,t) \\big] \\ast \\tilde \\gamma(x,y,t) \\mathrm dy \\notag\\\\\n&= f(x,t) - \\int_\\mathbb{R}\\big[\\psi_0(x)-\\psi_0(y) \\big] \\tilde \\gamma(x,y,t)\\mathrm{d}y,\\quad x\\in\\mathbb{R}.\n\\end{align}\n\nNoting $ \\tilde x =x, \\tilde y = y \\; (\\forall x,y\\in\\mathcal{D})$ and $\\mathrm{supp}\\;\\psi_0(x)\\subset(x_l+\\delta,x_r-\\delta)$ (see A1), we have $\\tilde \\gamma(x,y,t)=\\gamma\\Big( y- x,\\frac{ x+ y}{2}\\Big)$, which implies that for all $x$,\n\\begin{align*}\n\\int_\\mathbb{R}\\big[\\psi_0(x)-\\psi_0(y) \\big] \\tilde \\gamma(x,y,t)\\mathrm{d}y = \\int_{\\mathcal{D}}\\big[\\psi_0(x)-\\psi_0(y) \\big] \\gamma\\Big( y- x,\\frac{ x+ y}{2}\\Big)\\mathrm dy.\n\\end{align*}\n\nWe finally have the nonlocal PML wave equations\n\\begin{align} \\label{PMLw}\n\\big(\\partial_t^2 +z\\sigma(x)\\partial_t \\big) \\tilde q(x,t) + \\mathcal{L}_{pml} \\partial_t \\tilde q(x,t) = f(x,t) - \\int_{\\mathcal{D}}\\big[\\psi_0(x)-\\psi_0(y) \\big] \\gamma\\Big( y- x,\\frac{ x+ y}{2}\\Big)\\mathrm dy,\n\\end{align}\nwhere the nonlocal operator $\\mathcal{L}_{pml}$ for the PML is given by\n\\begin{align}\n \\mathcal{L}_{pml} \\partial_t \\tilde q(x,t)=\\int_\\mathbb{R} \\big[ \\partial_t \\tilde q(x,t)- \\partial_t \\tilde q(y,t) \\big] \\ast \\tilde \\gamma(x,y,t) \\mathrm dy.\n\\end{align}\nWe point out that the nonlocal PML operator $\\mathcal{L}_{pml}$ involves a convolution in time, which differs from the original nonlocal operator $\\mathcal{L}$.\n\nNoting the PML equation \\eqref{PMLw} is still defined on the whole space, we need to truncate the computational region at some sufficiently large $x$ by putting homogeneous Dirichlet boundary conditions. 
To do so, we define the PML layer $\\mathcal{D}_p=(x_l-d_p,x_l]\\cup[x_r,x_r+d_p)$ with $d_p$ the thickness of the absorbing layer, and define the boundary layer $\\mathcal{D}_b$ of width $\\delta$ which surrounds $\\mathcal{D}\\cup\\mathcal{D}_p$.\n\nThus, we derive the following truncated nonlocal wave problem with PML modifications:\n\\begin{align}\n&\\big(\\partial_t^2 +z\\sigma(x)\\partial_t \\big) \\hat{\\tilde q}(x,t) + \\mathcal{L}_{pml} \\partial_t \\hat{\\tilde q}(x,t)\\notag\\\\\n&\\qquad\\qquad\\qquad= f(x,t) - \\int_{\\mathcal{D}}\\big[\\psi_0(x)-\\psi_0(y) \\big] \\gamma\\Big( y- x,\\frac{ x+ y}{2}\\Big)\\mathrm dy,\\quad x\\in\\mathcal{D}\\cup\\mathcal{D}_p, \\label{eq_truPML1}\\\\\n& \\hat{\\tilde q}(x,0) = \\psi_0(x),\\quad \\partial_t \\hat{\\tilde q}(x,0) = \\psi_1(x),\\quad x\\in\\mathcal{D}_b\\cup\\mathcal{D}_p\\cup\\mathcal{D},\\label{eq_truPML2}\\\\\n& \\hat{\\tilde q}(x,t)=0,\\quad x\\in \\mathcal{D}_b,\\ 0<t\\leq T.\\label{eq_truPML3}\n\\end{align}\n\n\\section{Discretization of the truncated PML problem}\\label{sec_dis}\n\nWe now discretize the truncated problem \\eqref{eq_truPML1}--\\eqref{eq_truPML3} in space. Let $\\{x_n\\}$ be a uniform grid covering $\\mathcal{D}\\cup\\mathcal{D}_p\\cup\\mathcal{D}_b$, and let $\\mathcal{I}$ and $\\mathcal{I}_p$ denote the index sets of the grid points located in $\\mathcal{D}$ and $\\mathcal{D}_p$, respectively, with $\\mathcal{I}\\cup\\mathcal{I}_p=\\{1,2,\\cdots,N\\}$. Applying Talbot's contour to the inverse Laplace transform \\eqref{eq_invkernel}, we approximate the time-dependent kernel by a sum of $m$ sub-kernels of the form $\\tilde\\gamma_j(x,y)e^{\\xi_jt}$, and we discretize the nonlocal operator associated with each sub-kernel by the AC scheme, which yields the quadrature weights $\\tilde a_{n,k}^j$. This leads to the semi-discrete problem\n\\begin{align}\n&\\big(\\partial_t^2 +z\\sigma(x_n)\\partial_t \\big)\\hat{\\tilde q}_n(t) + \\sum_{j=1}^m\\sum_{k}\\tilde a_{n,k}^j \\big[\\partial_t \\hat{\\tilde q}_k \\ast e^{\\xi_jt}\\big](t) \\notag\\\\\n&\\qquad = f(x_n,t)-\\int_{\\mathcal{D}}\\big[\\psi_0(x_n)-\\psi_0(y) \\big] \\gamma\\Big( y- x_n,\\frac{ x_n+ y}{2}\\Big)\\mathrm dy,\\quad 1\\leq n\\leq N,\\ 0< t\\leq T,\\label{eq_sdp1}\\\\\n& \\hat{\\tilde q}_n(0) = \\psi_0(x_n),\\quad \\partial_t \\hat{\\tilde q}_n(0) = \\psi_1(x_n),\\quad 1\\leq n\\leq N,\\label{eq_sdp2}\\\\\n& \\hat{\\tilde q}_n(t)=0,\\quad n<1\\ \\mathrm{or}\\ n>N,\\ 0< t\\leq T,\\label{eq_sdp3}\n\\end{align}\nwhere $\\hat{\\tilde q}_n(t)\\approx\\hat{\\tilde q}(x_n,t)$.\n\nIn the remainder, we consider the convolutions of the functions $\\partial_t \\hat{\\tilde q}_k(t)$ and $e^{\\xi_jt}$ over the range $[0,t]$ by introducing the auxiliary functions\n\\begin{align}\np_{k,j}(t) = \\partial_t \\hat{\\tilde q}_k(t) \\ast e^{\\xi_jt},\\quad k\\in\\mathcal{I}_p\\cup\\mathcal{I},\\ j=1,\\cdots,m.\n\\end{align}\nSince $\\partial_t\\big[g\\ast e^{\\xi_jt}\\big](t)=g(t)+\\xi_j\\big[g\\ast e^{\\xi_jt}\\big](t)$ for any continuous $g$, the functions $p_{k,j}$ satisfy the following ODEs\n\\begin{align}\n\\partial_t p_{k,j}(t) = \\xi_j p_{k,j}(t) + \\partial_t \\hat{\\tilde q}_k(t) \\label{eq_pjode}\n\\end{align}\nwith the initial conditions $p_{k,j}(0) = 0$.\nThen we reformulate the semi-discrete problem~\\eqref{eq_sdp1}--\\eqref{eq_sdp3} into the following ODEs\n\\begin{align}\n&\\big(\\partial_t^2 +z\\sigma(x_n)\\partial_t \\big)\\hat{\\tilde q}_n(t) + \\sum_{j=1}^m\\sum_{k}\\tilde a_{n,k}^j p_{k,j}(t) \\notag\\\\\n&\\qquad = f(x_n,t)-\\int_{\\mathcal{D}}\\big[\\psi_0(x_n)-\\psi_0(y) \\big] \\gamma\\Big( y- x_n,\\frac{ x_n+ y}{2}\\Big)\\mathrm dy,\\quad 1\\leq n\\leq N,\\ 0< t\\leq T,\\label{eq_disPML1}\\\\\n& \\hat{\\tilde q}_n(0) = \\psi_0(x_n),\\quad \\partial_t \\hat{\\tilde q}_n(0) = \\psi_1(x_n),\\quad 1\\leq n\\leq N,\\label{eq_disPML2}\\\\\n& \\partial_t p_{n,j}(t) = \\xi_j p_{n,j}(t) + \\partial_t \\hat{\\tilde q}_n(t),\\quad p_{n,j}(0)=0,\\quad 1\\leq n\\leq N,\\ j=1,\\cdots,m,\\label{eq_disPML3}\\\\\n& \\hat{\\tilde q}_n(t)=0,\\quad n<1\\ \\mathrm{or}\\ n>N,\\ 0< t\\leq T.\\label{eq_disPML4}\n\\end{align}\n\n\n\\subsection{The Verlet-type ODE solver}\n\nWe here introduce the Verlet-type algorithm to numerically solve the ODE system \\eqref{eq_disPML1}--\\eqref{eq_disPML4}. Denote by $D_\\sigma$ the $N\\times N$ diagonal matrix with entries $\\sigma(x_1),\\sigma(x_2),\\cdots,\\sigma(x_N)$, and by $\\tilde A_j\\ (j=1,2,\\cdots,m)$ the $N\\times N$ matrices with entries $\\tilde a_{n,k}^j\\ (n,k=1,2,\\cdots,N)$. 
The ODE system \\eqref{eq_disPML1}--\\eqref{eq_disPML4} can be rewritten into the following form\n\\begin{align}\n\\mathbf{w}(t)-\\mathbf{q}'(t) & = 0,\\\\ \n\\mathbf{w}'(t) + zD_\\sigma \\mathbf{w}(t) + \\sum_{j=1}^m \\tilde A_j \\mathbf{p}_j(t) &= \\mathbf{f}(t),\\\\\n\\mathbf{p}_j'(t) - \\xi_j \\mathbf{p}_j(t) - \\mathbf{w}(t) &= 0,\\quad j=1,2,\\cdots,m,\n\\end{align}\nwhere $\\mathbf{q}=(\\hat{\\tilde q}_1,\\hat{\\tilde q}_2,\\cdots,\\hat{\\tilde q}_N)^T$, $\\mathbf{p}_j=(p_{1,j},p_{2,j},\\cdots,p_{N,j})^T$ and\n\\begin{align*}\n\\mathbf{f} = \\begin{pmatrix}\nf(x_1,t)-\\int_{\\mathcal{D}}\\big[\\psi_0(x_1)-\\psi_0(y) \\big] \\gamma\\Big( y- x_1,\\frac{ x_1+ y}{2}\\Big)\\mathrm dy\\\\\nf(x_2,t)-\\int_{\\mathcal{D}}\\big[\\psi_0(x_2)-\\psi_0(y) \\big] \\gamma\\Big( y- x_2,\\frac{ x_2+ y}{2}\\Big)\\mathrm dy\\\\\n\\vdots\\\\\nf(x_N,t)-\\int_{\\mathcal{D}}\\big[\\psi_0(x_N)-\\psi_0(y) \\big] \\gamma\\Big( y- x_N,\\frac{ x_N+ y}{2}\\Big)\\mathrm dy\n\\end{pmatrix}.\n\\end{align*}\nLet $\\tau$ be the temporal stepsize and $t_k=k\\tau$ be the $k$-th time point. Denote by $\n\\mathbf{w}^{k+1\/2} \\approx \\mathbf{w}(t_{k+1\/2}),\\; \\mathbf{q}^k \\approx \\mathbf{q}(t_k),\\; \\mathbf{p}_j^k \\approx \\mathbf{p}_j(t_k)\\; (k=0,1,\\cdots).$ \nLet $\\mathbf{f}^j=\\mathbf{f}(t_j), \\mathbf{\\Psi}_0 = (\\psi_0(x_1),\\psi_0(x_2),\\cdots,\\psi_0(x_N))^T$ and $\\mathbf{\\Psi}_1 = (\\psi_1(x_1),\\psi_1(x_2),\\cdots,\\psi_1(x_N))^T$.\nThe initial values can be written as\n\\begin{align} \\label{1s}\n\\mathbf{q}^0 &= \\mathbf{\\Psi}_0,\\\\\n\\mathbf{p}_j^0 &=0,\\quad j=1,2,\\cdots,m,\\\\\n\\mathbf{w}^{1\/2} &= \\mathbf{\\Psi}_1 + \\frac{\\tau}{2}\\Big[\\mathbf{f}^0-\\sum_{j=1}^m\\tilde A_j \\mathbf{p}_j^0-zD_\\sigma\\mathbf{\\Psi}_1\\Big].\n\\end{align}\nFor $k\\geq0$, we apply the following second-order central difference scheme to calculate $\\mathbf{q}^{k+1}$ and $\\mathbf{p}_j^{k+1}$:\n\\begin{align}\n\\mathbf{w}^{k+\\frac12} - \\frac{\\mathbf{q}^{k+1}-\\mathbf{q}^{k}}{\\tau} & =0,\\\\\n\\frac{\\mathbf{p}_j^{k+1}-\\mathbf{p}_j^{k}}{\\tau} - \\xi_j\\frac{\\mathbf{p}_j^{k+1}+\\mathbf{p}_j^{k}}{2} - \\mathbf{w}^{k+\\frac12} &=0.\n\\end{align}\nAfter that, we update $\\mathbf{w}^{k+3\/2}$ by using the above $\\mathbf{q}^{k+1}$ and $\\mathbf{p}_j^{k+1}$ via the second-order scheme: \n\\begin{align}\n& \\frac{\\mathbf{w}^{k+\\frac32}-\\mathbf{w}^{k+\\frac12}}{\\tau} + zD_\\sigma \\frac{\\mathbf{w}^{k+\\frac32}+\\mathbf{w}^{k+\\frac12}}{2} + \\sum_{j=1}^m \\tilde A_j \\mathbf{p}_j^{k+1}= \\mathbf{f}^{k+1}. \\label{3s}\n\\end{align}\n\n\\section{Numerical examples}\\label{sec_num}\nIn this section, three examples are provided to verify the effectiveness of our PML strategy and the convergence and asymptotic compatibility of the scheme \\eqref{1s}--\\eqref{3s}. 
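\n\nFor reference, the complete time-stepping loop \\eqref{1s}--\\eqref{3s} used in the experiments can be summarized by the following sketch; the function name and the way the data are passed in are illustrative assumptions, while the update formulas follow the scheme above:\n\\begin{verbatim}\nimport numpy as np\n\ndef verlet_pml(A, xi, sigma, z, f, Psi0, Psi1, tau, K):\n    # A: list of m complex (N,N) matrices (the matrices A_j)\n    # xi: length-m array of Talbot exponents xi_j\n    # sigma: length-N array, the diagonal of D_sigma\n    # f: callable t -> length-N vector f(t) (including the psi_0 term)\n    N, m = len(Psi0), len(xi)\n    q = np.asarray(Psi0, dtype=complex).copy()\n    p = [np.zeros(N, dtype=complex) for _ in range(m)]\n    # start-up value w^{1\/2}\n    w = Psi1 + 0.5*tau*(f(0.0) - sum(Aj @ pj for Aj, pj in zip(A, p))\n                        - z*sigma*Psi1)\n    for k in range(K):\n        q = q + tau*w                    # q^{k+1} from w^{k+1\/2}\n        for j in range(m):               # trapezoidal update of p_j^{k+1}\n            p[j] = ((1 + 0.5*tau*xi[j])*p[j] + tau*w) \/ (1 - 0.5*tau*xi[j])\n        rhs = f((k + 1)*tau) - sum(Aj @ pj for Aj, pj in zip(A, p))\n        # implicit midpoint step for the damped w-equation\n        w = ((1 - 0.5*tau*z*sigma)*w + tau*rhs) \/ (1 + 0.5*tau*z*sigma)\n    return q\n\\end{verbatim}\nHere one time step costs $m$ matrix-vector products, and the routine returns the approximation of $(\\hat{\\tilde q}_n(K\\tau))_{n=1}^N$.\n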
Define the $L^2$-error at $t=t_k$ by \n\\begin{align}\ne_h &= \\sqrt{\\frac{1}{|\\mathcal{I}|}\\sum_{n\\in\\mathcal{I}} |\\hat{\\tilde q}^k(x_n)-q(x_n,t_k)|^2 },\n\\end{align}\nand the error to study the AC property, i.e., the so-called ``$\\delta$-convergence'' in \\cite{silling2000reformulation,DuZhangZheng,bobaru2009convergence}, by \n\\begin{align}\ne_\\delta &= \\sqrt{\\frac{1}{|{\\mathcal{I}\\cup\\mathcal{I}_p}|}\\sum_{n\\in{\\mathcal{I}\\cup\\mathcal{I}_p}} |\\hat{\\tilde q}^k(x_n)-u(x_n,t_k)|^2 },\n\\end{align}\nwhere $u(x,t)$ is the corresponding local PML solution.\n\nIn the simulations, we choose the interior domain $\\mathcal{D}$ such that $x_l=-l,\\ x_r=l$ for some constant $l$ and set the PML absorbing function as the piecewise linear function\n\\begin{align}\n\\sigma(\\eta) = \n\\begin{cases}\n0, & -l<\\eta<l,\\\\\n(|\\eta|-l)\/d_p, & l\\leq|\\eta|<l+d_p.\n\\end{cases}\n\\end{align}\n\n\\noindent\\textbf{We then consider Talbot's contour parameters $\\mu$ and $\\nu$ for the Gaussian kernel~\\eqref{eq_gaukernel}.} The analytic continuation of the kernel $\\gamma(y-x,\\frac{x+y}{2}) = \\frac{4}{\\delta^3}\\sqrt{\\frac{10^3}{\\pi}} e^{-10\\frac{(x-y)^2}{\\delta^2}}$ is given by\n\\begin{align}\n\\gamma(\\tilde y-\\tilde x,\\frac{\\tilde x+\\tilde y}{2}) = \\frac{4}{\\delta^3}\\sqrt{\\frac{10^3}{\\pi}} e^{-10\\frac{(\\tilde x-\\tilde y)^2}{\\delta^2}}.\n\\end{align}\nNote that $\\Omega_\\mathcal{K}$ is the whole complex plane for any given $z\\in\\mathbb{C}$ and $x,y\\in\\mathbb{R}$. However, to ensure stability, we have to choose Talbot's contour parameters $\\mu$ and $\\nu$ such that $\\Re[(\\tilde x-\\tilde y)^2] \\geq 0$. Let $\\zeta=\\zeta_1+\\i \\zeta_2=\\frac{z}{\\xi_j}$. We have\n\\begin{align*}\n\\Re\\Big[(\\tilde x-\\tilde y)^2\\Big] =& \\Re\\Big[\\Big( (x+\\zeta_1\\int_0^x\\sigma(t)\\mathrm dt+\\i\\zeta_2\\int_0^x\\sigma(t)\\mathrm dt) - (y+\\zeta_1\\int_0^y\\sigma(t)\\mathrm dt+\\i\\zeta_2\\int_0^y\\sigma(t)\\mathrm dt) \\Big)^2\\Big] \\\\\n=& \\Big( (x+\\zeta_1\\int_0^x\\sigma(t)\\mathrm dt)- (y+\\zeta_1\\int_0^y\\sigma(t)\\mathrm dt)\\Big)^2 -\\Big (\\zeta_2\\int_0^x\\sigma(t)\\mathrm dt-\\zeta_2\\int_0^y\\sigma(t)\\mathrm dt\\Big)^2\\\\\n=&(x-y)^2\\big[ (1+\\zeta_1 g)^2 - \\zeta_2^2 g^2\\big]\\\\\n=&(x-y)^2\\big[ \\big(1+(\\zeta_1-\\zeta_2)g\\big) \\big(1+(\\zeta_1+\\zeta_2)g\\big)\\big],\n\\end{align*}\nwhere $g$ is defined in \\eqref{Ge} with $0\\leq g\\leq1$ as $0\\leq\\sigma\\leq 1$. \n\nTo ensure $\\Re[(\\tilde x-\\tilde y)^2] \\geq 0$, we need $\\big(1+(\\zeta_1-\\zeta_2)g\\big) \\big(1+(\\zeta_1+\\zeta_2)g\\big)\\geq0$ for any $g\\in[0,1]$, which holds provided that $\\zeta_1-\\zeta_2\\geq -1$ and $\\zeta_1+\\zeta_2\\geq-1$. Therefore, we may simply choose Talbot's contour parameters $\\mu$ and $\\nu$ such that\n\\begin{align}\n\\sqrt{2}|z|\\leq |\\xi_j|,\\quad \\forall\\ j=1,2,\\cdots,m.\n\\end{align}\nIndeed, this guarantees $|\\zeta_1\\pm\\zeta_2|\\leq\\sqrt{2}\\,|\\zeta|=\\sqrt{2}\\,|z|\/|\\xi_j|\\leq 1$.\n\n\\bibliographystyle{siam}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\n\n\\section{Related Work}\n\\label{sec:related}\nIn this section, we review the recent studies that are most relevant to our work, including data-driven storytelling, automatic data visualization, and natural language generation.\n\n\\subsection{Data-Driven Storytelling} \nData-driven storytelling is a rapidly developing research direction that focuses on techniques for enhancing data understanding, information expression, and communication. Narrative visualization is one promising approach frequently used for data-driven storytelling~\\cite{tong2018storytelling}. 
Recently, the visualization community has extensively investigated storytelling and narrative visualization techniques~\\cite{segel2010narrative, tong2018storytelling}. According to Segel and Heer~\\cite{segel2010narrative}, narrative visualization can be largely classified into seven genres, including magazine style, annotated chart, partitioned poster, flow chart, comic strip, slideshow, and video. Evidence shows that an effective composition and visual narrative of a story can guide readers through the data and improve comprehension and memory~\\cite{borkin2015beyond, hullman2013deeper}. \nTo compose effective data-driven stories, Hullman~{\\it et~al.}\\xspace~\\cite{hullman2013deeper} identified several key design actions in the sequential story creation process, including context definition, facts selection, modality selection, and order selection.\nLee~{\\it et~al.}\\xspace also decomposed the creation process of a data-driven story into three major steps: insights finding, scripting, and communicating to the audience~\\cite{lee2015more}. These valuable study results guide the designs of many data story creation systems, including Calliope\\xspace, which is introduced in this paper.\n\nUsers commonly experience difficulty in creating a data-driven story due to technical barriers, which motivates the design and development of various authoring tools. General tools such as Ellipsis~\\cite{satyanarayan2014authoring} allow a user to directly integrate visualizations into an illustrative story. Recent studies focus on designing tools to generate a specific type of visual narrative. For example, ChartAccent~\\cite{ren2017chartaccent} and InfoNice~\\cite{wang2018infonice} are respectively designed for creating annotated charts and infographics. Narvis~\\cite{wang2018narvis} is introduced to extract the combination of visual elements of a visualization and organize them as a slideshow to help with the narrative interpretation of a visualization design. DataClips~\\cite{amini2016authoring} is designed to help users generate data videos. Various authoring tools are also specifically designed to create narrative visualizations for certain types of data. For example, Timeline Storyteller~\\cite{brehmer2019timeline} is a visual storytelling tool designed specifically for time-oriented data. Several visualization tools are also introduced to bridge the gap between visual analysis and storytelling~\\cite{chen2018supporting, gratzl2016visual}, but they still target expert users.\n\nThe above tools assume that the story content is manually created, resulting in inefficiency. By contrast, Calliope\\xspace supports automatic story generation and flexible story editing functions, which ensure the quality, lower the barrier, and improve the efficiency of visual narration.\n\n\\subsection{Automatic Data Visualization}\nStudies on automatic visualization have experienced three stages, including visualization chart recommendation, automatic data mapping generation, and auto-insights. To recommend a chart given input data, early studies employed a rule-based method, which checks the data types to make a suggestion~\\cite{mackinlay2007show,gotz2010harvest}. A recent study~\\cite{hu2019vizml} trained a classification model based on a collection of ``data feature - chart type\" pairs extracted from a visualization design corpus. As a result, given data features, a proper chart type can be selected. 
In Calliope\\xspace, we select a chart for each data fact based on its fact type and data fields.\n\nTo visualize data in a chart, one must determine the detailed data mapping strategy. To this end, various techniques are introduced. Draco~\\cite{moritz2018formalizing} uses an optimization model to find the best data mapping strategy under a set of constraints formulated by several common visual design guidelines. DeepEye~\\cite{luo2018deepeye} enumerates all possible data mapping strategies and uses a decision tree to select the good ones. Data2Vis~\\cite{dibia2019data2vis} ``translates\" the data into a visual encoding scheme based on a sequence-to-sequence deep model. Text-to-Viz~\\cite{cui2019text} employs natural language processing techniques to identify and parse data entities, such as numbers, portions, and data scopes from an input text, and convert them into a statistic diagram. Shi~{\\it et~al.}\\xspace\\cite{Shi2019TaskOrientedOS} explored the chart design space via a reinforcement learning model and generated a sequence of data mapping approaches regarding an analytical task. Calliope\\xspace encodes different data fact fields in a chart following a rule-based method.\n\nRecent studies focused on extracting data patterns and representing them in charts to reveal data insights, i.e., auto-insights~\\cite{tang2017extracting,ding2019quickinsights}. The extracted insights can be quantitatively measured based on their statistical significance~\\cite{ding2019quickinsights}. Visual analysis techniques were also developed to support auto-insights. For example, SeeDB~\\cite{vartak2015seedb} finds and illustrates the most interesting trend in the data by exploring various data mapping strategies. Foresight~\\cite{demiralp2017foresight} extracts and visualizes insights from multidimensional data using rule-based methods. DataShot~\\cite{wang2019datashot} randomly visualizes a set of automatically generated data facts as a factsheet. \\rv{Inspired by DataShot, Calliope\\xspace also borrowed the auto-insights techniques to generate data facts, but made a step further by organizing the facts in a logical order to generate a meaningful data story.}\n\n\\subsection{Natural Language Generation}\nRecently, studies on natural language generation (NLG) demonstrate the capability of producing descriptive text from various types of data~\\cite{mishra2019storytelling,turner2009generating,galanis2007generating}. Many methods use templates to generate sentences~\\cite{swanson2012say,li2013story}, and techniques that automatically enrich a template were developed~\\cite{dou2018data2text,ye2020variational}. Recent studies leveraged deep learning models to generate textual content from scratch~\\cite{fan2018hierarchical,fan2019strategies}. Several methods create an intermediate structure based on a recurrent neural network~\\cite{martin2018event,xu2018skeleton,yao2019plan}, and others use the auto-encoder architecture to generate diverse sentences from a latent space~\\cite{liu2019transformer,li2019learning}. Among various techniques, those aim to generate text content based on structured data are the most relevant to our work~\\cite{mahapatra2016statistical,belz2008automatic,jain2018mixed}. For example, commercial software, such as PowerBI\\footnote{\\url{https:\/\/powerbi.microsoft.com}} and Quill\\footnote{\\url{https:\/\/narrativescience.com}} describe important data facts based on a set of templates to help interpret the data and the corresponding visualization. 
A number of visual auto-insights systems, such as DataSite~\\cite{cui2019datasite}, DataShot~\\cite{wang2019datashot}, and Voder~\\cite{srinivasan2018augmenting}, use the template-based NLG to generate captions for visualization charts. In Calliope\\xspace, we also employ the template-based method to generate captions for each chart, but to ensure the readability and avoid ambiguity, we define a syntax for each fact type that regulates the generation results.\n\\section{Design of the Calliope\\xspace System}\n\\label{sec:system}\nThis section introduces the design of the Calliope\\xspace system. We first introduce the formal definition of a data story, then survey a collection of data videos to help us understand how a story is generated by human designers. After that, we summarize the design requirements and introduce the architecture design of the system. \n\n\\vspace{-0.5em}\n\\input{tables\/1-factvischarts}\n\\setlength{\\floatsep}{5pt}\n\\input{tables\/2-narrativelogic}\n\\setlength{\\textfloatsep}{5pt}\n\n\\subsection{Data Story}\nA data story is a set of story pieces that are meaningfully connected to support the author's communication goal~\\cite{lee2015more}. A story piece is a fact backed up by data, and it is usually visualized by succinct but expressive charts, accompanied with annotations (labels, pointers, text, etc.) and narrations to express the message and avoid ambiguity. We design Calliope\\xspace to automatically generate visual data stories by following this definition. Formally, a data story $\\mathcal{S}$ consists of a sequence of data facts that are connected by coherence relations (denoted as $r_i \\in \\mathcal{R}$) $\\{f_1,r_1, \\cdots, f_{n-1}, r_{n-1}, f_n\\}$ with each fact $f_i \\in F$. We will use these notations throughout the paper. \n\n\\subsection{Preliminary Survey}\nBefore designing the system, it is necessary to understand how a data story is created by a human designer. To this end, as data video is a frequently used narrative visualization form, we collected a set of 602 data videos from YouTube and Vimeo by searching keywords, such as ``animated infographic'', ``data video'', and ``motion infographic''. A total of 230 high-quality videos were selected and manually segmented into 4186 story pieces. The fact and chart types of 2583 data-related story pieces and the coherence relations used for connecting two succeeding pieces were labeled for analysis. \\rv{Here, we borrowed the definition of fact types introduced in DataShot~\\cite{wang2019datashot} and coherence relations introduced in~\\cite{wellner2006classification, wolf2005representing} to label our data}. 
\n\n\\subsection{System Design}\nOur goal is to design a system that can automatically generate high-quality initial data stories directly from an input spreadsheet and support flexible story editing functions to lower the technical barriers of creating a data story. To achieve this goal, a number of key requirements need to be fulfilled:\n\n\n\\begin{enumerate\n\\itemsep -1mm\n\\item[{\\bf R1}] {\\bf Generating ``successful\" stories.} The most important thing for the system is to ensure the quality of story generation. Among various factors that contribute to a successful narrative artifact, the key is understandability~\\cite{riedl2010narrative}, which is usually determined by the narrative logic, and the meaningful and believable content~\\cite{lee2015more}. Therefore, the system should be intelligent enough to automatically generate meaningful stories logically with correct, i.e., believable, data backups.\n\n\\item[{\\bf R2}] {\\bf Efficient story generation.} The system should be able to efficiently generate a data story within a reasonable period of time that is affordable to the users. Therefore, the generation time should be controllable to grantee the efficiency of the system while keeping the quality of the story.\n\n\\item[{\\bf R3}] {\\bf Expressive story representation.} As suggested in~\\cite{lee2015more}, the generated visual data story should be expressively represented in both visual and textual forms to precisely express the message and avoid ambiguity. Here, simple but intuitive charts~\\cite{wang2019datashot}, as well as precise and meaningful narratives~\\cite{lee2015more} should be guaranteed to reduce users' learning efforts.\n\n\\item[{\\bf R4}] {\\bf Easy story editing.} The system should provide flexible interactions to support comprehensive editing of the generated storyline, text narration, visual representation, and the corresponding data facts, so that a user can further refine and adjust an automatically generated story based on their own requirements.\n\n\\item[{\\bf R5}] {\\bf Easy communication and sharing.} The visual and textual representations of a data story should be probably aligned and adaptive laid out to fit into different devices such as a laptop, tablets, and smartphones to facilitate an easy story exploration, communication, and sharing.\n\\end{enumerate}\n\nTo fulfill these requirements, the design of Calliope\\xspace system consists of two modules (Fig.~\\ref{fig:system}): (1) the story generation engine and (2) the story editor. The story generation engine is designed based on a logic-oriented Monte Carlo tree search process, in which a story is gradually generated fact by fact while searching through the data space defined by an input spreadsheet. The whole search process is guided by narrative logic and a reward function that measures the importance of facts to ensure the quality of the generated story (\\textbf{R1}). In addition, the time spent on each searching step is configurable, which guarantees the generation efficiency (\\textbf{R2}). The generated story is visualized in the story editor as a series of captioned visualization charts (\\textbf{R3}), whose data facts, caption, chart type, and logic orders can be revised according to user preferences (\\textbf{R4}). 
The final visual data story can be represented in three modes to fit different devices (\\textbf{R5}).\n\n\\setlength{\\textfloatsep}{20pt}\n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/pipeline.png}\n \\vspace{-2em}\n \\caption{Calliope\\xspace system consists of two modules : the story generation engine and the story editor.} \\label{fig:system}\n \\vspace{-1.5em}\n\\end{figure}\n\\setlength{\\textfloatsep}{20pt}\n\\subsection{Story Generation Algorithm}\nIn Calliope\\xspace, we introduce an intelligent algorithm that generates data facts from an input spreadsheet and threads them logically to create a meaningful data story. However designing such an algorithm is challenging. The story design space, formulated by a collection of data facts generated from the input spreadsheet, could be extremely large due to the huge number of possible combinations of the fact fields even based on a small dataset. The algorithm cannot generate all facts first and then pick up those important ones to build the story as users cannot afford a long waiting time. We addressed this issue by introducing an efficient logic-oriented searching algorithm based on the Monte Carlo tree search (MCTS)~\\cite{browne2012survey,silver2016mastering}. The algorithm can efficiently explore a large space that contains a huge number of states via a searching process organized by a tree structure and guided by a reward function towards a logic direction.\n\n\\begin{figure}[!b]\n \\centering\n \\vspace{-0.5em}\n \\includegraphics[width=\\linewidth]{figures\/algorithm.png}\n \\vspace{-1em}\n \\caption{An iteration of the logic-oriented Monte Carlo tree search consists of four steps, including (a) selection, (b) expansion, (c) simulation, and (d) back-propagation.} \n \\label{fig:algorithm}\n \\vspace{-0.5em}\n\\end{figure}\n\n\\paragraph{\\bf Algorithm Overview} In general, the algorithm explores the design space by dynamically constructing a searching tree $\\mathcal{T}$. As shown in Fig.~\\ref{fig:algorithm}(a), each node in the tree is a data fact $f_i$ and each directed edge indicates a logic relation $r_i$. A data story $\\mathcal{S} = \\{f_1, r_1, \\cdots,f_{n-1}, r_{n-1}, f_n\\}$ is thus represented by a path starting from the root. A reward function is designed to estimate the quality of each path in the tree. The reward scores are marked on the last node in paths. A node shared by multiple paths is weighted by the maximum reward. These scores are used to guide the exploration of the design space.\n\n\nThe tree $\\mathcal{T}$ is gradually generated through a searching process as described in Algorithm~\\ref{alg:generation}. In particular, the algorithm takes a spreadsheet $\\mathcal{D}$, i.e., a data table, and a goal $\\mathcal{G}$, such as generating a story with a desired information quantity or length as the inputs and automatically generates a story $\\mathcal{S}$ that fulfills the goal. \\rv{Initially, it randomly generates a set of facts in types that are frequently used as the starting point in a data story (Fig.~\\ref{fig:statistic}(b)). These facts usually reveal general and common data patterns which may already known by the audience as the background of the story. Among these facts, the most important one, denoted as $f_0$, is used as the root of $\\mathcal{T}$. 
In the next, the algorithm generates a story by iteratively searching more informative and significant data facts to elaborate the story via four major steps: \\textit{\\textbf{selection}}\\xspace, \\textit{\\textbf{expansion}}\\xspace, \\textit{\\textbf{simulation}}\\xspace, and \\textit{\\textbf{back-propagation}}\\xspace.}\nThe first step finds a node $f_i$ with the largest reward in $\\mathcal{T}$, from which the next searching step will be performed (\\textit{line 3}, Fig.~\\ref{fig:algorithm}(a)). \nThe second step searches the design space by creating a set of data facts (denoted as $F_i$), that is logically relevant to $f_i$ (\\textit{line 4}, Fig.~\\ref{fig:algorithm}(b)).\nThe third step finds the best searching direction $f_i \\rightarrow f^*, f^* \\in F_i$ with the largest reward $\\Delta^*$ through a simulation process (\\textit{lines 5 - 11}, Fig.~\\ref{fig:algorithm}(c)). This process simulates the cases in which each $f \\in F_i$ is expanded in a simulation tree $\\mathcal{T}_s$ rooted at $f_i$ to help the algorithm explore the space a few steps further, so that the different searching directions can be estimated in advance. The simulation runs within a time limit to ensure the efficiency of the algorithm (\\textbf{R2}).\nIn the last step, the tree $\\mathcal{T}$ is updated via a back-propagation process, in which the weights of the relevant nodes are updated based on $\\Delta^*$ and $f^*$ is formally added into $\\mathcal{T}$ as a child of $f_i$ (\\textit{line 11}, Fig.~\\ref{fig:algorithm}(d)). Finally the path with the highest reward in $\\mathcal{T}$, $\\mathcal{P}^*$, is identified as the best story generated at the current iteration (\\textit{line 12}). The algorithm stops when the goal $\\mathcal{G}$ is fulfilled.\n\n\n\\setlength{\\textfloatsep}{0pt}\n\\begin{algorithm}[tb]\n\\label{alg:generation}\n\\SetAlgoLined\n\\SetKwInOut{Input}{Input}\n\\SetKwInOut{Output}{Output}\n\\Input{$\\mathcal{D}, \\mathcal{G}$}\n\\Output{$\\mathcal{S} = \\{f_1,r_1 \\cdots, f_{n-1}, r_{n-1}, f_n\\}$}\n$f_0$ $\\leftarrow$ Initialize($\\mathcal{D}$);\n$\\mathcal{T}$ $\\leftarrow$ \\{$f_0$\\};\n$\\mathcal{S}$ $\\leftarrow$ \\{\\}\\;\n\\While{$\\mathcal{G}$ is not fulfilled}{\n \\tcc{1.\\textit{\\textbf{selection}}\\xspace}\n $f_i \\leftarrow select(\\mathcal{T})$\\; \n \n \\tcc{2.\\textit{\\textbf{expansion}}\\xspace}\n $F_i$ $\\leftarrow$ Expand($f_i$)\\;\n \n \\tcc{3.\\textit{\\textbf{simulation}}\\xspace}\n $\\mathcal{T}_s \\leftarrow \\{f_i\\}$;\n $F \\leftarrow F_i$;\n $f_p \\leftarrow f_i$\\;\n \\While{within time limitation}{\n \n \\tcp{Calculate the reward of each node in $F$ in context of $\\mathcal{T}_s$ and find the node $f \\in F$ with the highest reward $\\Delta$. The design space will be explored in direction of $f_p \\rightarrow f$ in the simulation process.}\n $f, \\Delta \\leftarrow$ Reward($\\mathcal{T}_s, F$)\\;\n \n \\tcp{Add $f$ in the simulation tree $\\mathcal{T}_s$ as a child of $f_p$ and update reward of the relevant nodes in $\\mathcal{T}_s$ based on $\\Delta$. 
After that, find the node $f^* \\in F_i$ with the highest reward in $\\mathcal{T}_s$, where $f_i \\rightarrow f^*$ determines the best searching direction found so far.}\n $f^*,\\Delta^* \\leftarrow$ BackPropagation($\\mathcal{T}_s, f, \\Delta$)\\;\n \n \\tcp{Select the next node $f_p$ and expand it in the simulation tree for a further exploration}\n $f_p \\leftarrow select(\\mathcal{T}_s)$;\n $F$ $\\leftarrow$ Expand($f_p$)\\;\n }\n \\tcc{4.\\textit{\\textbf{back-propagation}}\\xspace}\n BackPropagation($\\mathcal{T},f^*, \\Delta^*$)\\;\n \n $\\mathcal{S} \\leftarrow \\mathcal{P}^* = \\{f_1, r_1, \\cdots, f_{n-1}, r_{n-1}, f_n\\}$\\;\n}\n\\Return $\\mathcal{S}$\\;\n\\caption{Logic-Oriented Monte Carlo Tree Search}\n\\end{algorithm}\n\n\\paragraph{\\bf Logic-Oriented Node Expansion} Expanding a selected node $f_i$ in the search tree $\\mathcal{T}$ to elaborate the story design space is a critical step in the aforementioned searching algorithm. The expansion should generate a set of nodes that is logically relevant to $f_i$ to gradually generate a meaningful data story through the searching process. To this end, we investigated how a set of commonly used coherence relations (denoted as $\\mathcal{R}$)~\\cite{wellner2006classification, wolf2005representing} was used in data stories during our preliminary survey. As a result, Table~\\ref{tab:logic} summarizes the likelihood, $P(r_i|f_i)$, of each relation $r_i$ occurring after a fact $f_i$ regarding to their fact types, which guides the node expansion process. In particular, during the expansion, we create a set of data facts regarding each coherence relation. The proportion of the newly generated facts is given by $P(r_i|f_i)$, and each new fact, $f_{i+1}$, is generated by the following rules:\n\\begin{itemize}[leftmargin=10pt,topsep=2pt]\n\\itemsep -.5mm\n \\item {\\textit{\\textbf{Similarity}}} indicates two succeeding facts are logically parallel to each other. Therefore, $f_{i+1}$ can be generated by a variety of methods, such as modifying the measure \/ breakdown \/ focus field without changing the subspace. \n \\item {\\textit{\\textbf{Temporal}}} relation communicates the ordering in time of events or states. In this case, we generate $f_{i+1}$ by changing the value of the temporal filter in $f_{i}$'s subspace to a succeeding time.\n \\item {\\textit{\\textbf{Contrast}}} indicates a contradiction between two facts. For simplicity, we only check the contradictions in two types of facts, i.e., trend and association. $f_{i+1}$ is generated by modifying the subspace of $f_i$ to form a data contradiction in measures. For example, the sales trends of a product increases, but that of another product decreases. The sales number of a product is positively associated with its price, but the association is negative in case of another product. In these examples, the subspace is determined by different products.\n \\item {\\textit{\\textbf{Cause-Effect}}} indicates the later event is caused by the former one. \\rv{In multidimensional data, a causal relation can be determined between dimensions based on the data distribution}. In this way, $f_{i+1}$ can be generated by changing the measure field $m_i$ of $f_i$ to another numerical field in the spreadsheet that is most likely caused by $m_i$ in accordance with causal analysis~\\cite{schaechtle2013multi}.\n \\item {\\textit{\\textbf{Elaboration}}} indicates a relation in which a latter fact $f_{i+1}$ adds more details to the previous one $f_i$. 
\n\n\\paragraph{\\bf Reward Function} We propose a reward function that estimates the quality of each generated story $\\mathcal{S}$ via three criteria, i.e., diversity $D(\\mathcal{S})$, logicality $L(\\mathcal{S})$, and integrity (i.e., data coverage) $C(\\mathcal{S})$, based on the story's information entropy $H(\\mathcal{S})$:\n\\begin{equation}\nreward(\\mathcal{S}) = \\left(\\gamma_1 \\cdot D(\\mathcal{S}) + \\gamma_2 \\cdot L(\\mathcal{S}) + \\gamma_3 \\cdot C(\\mathcal{S})\\right) \\cdot H(\\mathcal{S})\n\\label{eq:reward}\n\\end{equation}\nwhere $\\gamma_i \\in [0,1], \\sum_i \\gamma_i = 1$ are weighting parameters given by users to balance the different criteria. All the criteria are normalized to $[0,1]$. $H(\\mathcal{S})$ is the story's information entropy, which indicates the expected self-information of the data facts in the story; it is used as the basis of the reward and is formally defined as follows:\n\\begin{equation}\n H(\\mathcal{S})= \\sum_{i=1}^{n} { P(f_i) \\cdot I_s(f_i) } = -\\sum_{i=1}^{n} P(f_i) \\cdot S(f_i) \\cdot \\log _{2}\\left(P(f_i)\\right)\n\\end{equation}\nwhere $I_s(f_i)$ is the fact's importance score defined in Formula (\\ref{eq:importance}). Each story estimation criterion is defined as follows (a code sketch of the full reward computation follows the list):\n\n\\begin{itemize}[leftmargin= 10pt]\n\\itemsep -.5mm\n\\item \\textbf{\\textit{Diversity}} estimates the variance of the fact types in $\\mathcal{S}$. Rich fact types make a story vivid and attractive. Diversity is given by two terms as follows:\n\\begin{equation}\nD(\\mathcal{S}) = \\frac{n}{\\min(|\\mathcal{S}|,10)} \\cdot \\frac{-\\sum_{i=1}^{n} {p}_{i} \\cdot \\ln \\left({p}_{i}\\right)}{\\ln (n)} \n\\end{equation}\nwhere $n$ indicates the total number of fact types used in $\\mathcal{S}$, whose maximum value is 10; $p_i$ is the proportion of the $i$-th fact type in the story. When $D(\\mathcal{S})$ is maximized, the first term encourages the use of more fact types in a story, and the second term, a normalized Shannon diversity index~\\cite{ramezani2012note}, ensures that the different fact types are evenly used in the story.\n\n\\item \\textbf{\\textit{Logicality}} estimates the logical coherence of a story. A higher logicality score indicates that the story is more coherent and easier to follow. Logicality is defined as the average likelihood of each coherence relation $r_i$ occurring after each fact $f_i$ in the story: \n\n\\begin{equation}\nL(\\mathcal{S}) = \\frac{1}{n-1}\\sum_{r_i,f_i \\in \\mathcal{S}}P\\left(r_{i} | f_{i}\\right)\n\\end{equation}\n\n\\item \\textbf{\\textit{Integrity}} is the data coverage rate of $\\mathcal{S}$. A larger integrity indicates that the story more comprehensively represents the input data. Integrity is defined as:\n\\begin{equation}\nC(\\mathcal{S}) = \\frac{count(\\bigcup\\limits_{i=0}^{n-1} f_{i})}{\\mathcal{N}}\n\\end{equation}\nwhere the numerator is the total number of data items in the spreadsheet that are used in the story, and $\\mathcal{N}$ is the total number of data items.\n\\end{itemize}
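\n\nThe following Python sketch transcribes the reward definition above (Formula (\\ref{eq:reward})) and its three criteria directly. It is a simplified reading, assuming each fact object exposes its occurrence probability \\texttt{p}, significance \\texttt{sig}, fact \\texttt{type}, and covered data \\texttt{items}; these attribute names are hypothetical.\n\n\\begin{verbatim}\nimport math\n\ndef entropy(facts):\n    # H(S): self-information weighted by significance\n    return sum(f.p * f.sig * -math.log2(f.p) for f in facts)\n\ndef diversity(facts):\n    types = {f.type for f in facts}\n    n = len(types)\n    if n <= 1:\n        return 0.0\n    props = [sum(1 for f in facts if f.type == t) \/ len(facts)\n             for t in types]\n    shannon = -sum(p * math.log(p) for p in props) \/ math.log(n)\n    return n \/ min(len(facts), 10) * shannon\n\ndef logicality(transition_probs):\n    # average P(r_i | f_i) over the n-1 transitions of the story\n    return sum(transition_probs) \/ len(transition_probs)\n\ndef integrity(facts, n_total):\n    covered = set().union(*(f.items for f in facts))\n    return len(covered) \/ n_total\n\ndef reward(facts, transition_probs, n_total, g=(0.4, 0.3, 0.3)):\n    # user-weighted criteria scaled by the story entropy\n    return (g[0] * diversity(facts)\n            + g[1] * logicality(transition_probs)\n            + g[2] * integrity(facts, n_total)) * entropy(facts)\n\\end{verbatim}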
\\section{Story Generation Engine}\n\\label{sec:engine}\nIn this section, we first formally define a data fact and its importance measurement. We then describe the details of the proposed automatic story generation algorithm.\n\n\\input{tables\/3-datafact}\n\n\\subsection{Data Facts}\n\\label{sec:fact}\nData facts are the elementary building blocks of a data story. Each of these facts represents a piece of information extracted from the data. \\rv{We first give a formal definition of data facts by simplifying the concepts introduced in~\\cite{wang2019datashot,chen2009toward} to guarantee clear semantics, and then introduce a novel method to estimate the importance of a given data fact.}\n\n\\paragraph{\\bf Definition} A data fact is designed to measure a collection of data items in a subspace of an input dataset based on a measurable data field. The data items can be further divided into groups via a breakdown method. Formally, a fact, $f_i \\in F$, is defined by a 5-tuple:\n\\[\n\\begin{aligned}\n f_i &= \\{type, subspace, breakdown, measure, focus\\}\\\\\n &= \\{t_i, s_i, b_i, m_i, x_i\\}\n\\end{aligned}\n\\]\n\\rv{where \\textit{\\textbf{type}}\\xspace (denoted as $t_i$) indicates the type of information described by the fact. As summarized in Table~\\ref{tab:fact}, Calliope\\xspace includes 10 fact types;}\n\\textit{\\textbf{subspace}}\\xspace (denoted as $s_i$) describes the data scope of the fact, which is defined by a set of data filters in the following form: \n\\[\n\\{\\{\\mathcal{F}_1 = \\mathcal{V}_1\\},\\cdots, \\{\\mathcal{F}_k = \\mathcal{V}_k\\}\\}\n\\]\nwhere $\\mathcal{F}_i$ and $\\mathcal{V}_i$ respectively indicate a data field and its corresponding value selected to filter the data. By default, the subspace is the entire dataset. \\textit{\\textbf{breakdown}}\\xspace (denoted as $b_i$) is a set of temporal or categorical data fields based on which the data items in the subspace are further divided into groups; \\textit{\\textbf{measure}}\\xspace (denoted as $m_i$) is a numerical data field based on which we can retrieve a data value or calculate a derived value, such as count, sum, average, minimum, or maximum, by aggregating the subspace or each data group; \\textit{\\textbf{focus}}\\xspace (denoted as $x_i$) indicates a set of specific data items in the subspace that require attention. Beyond the above five fields, certain facts may also have a \\textit{\\textbf{derived value}} (denoted as $V_d$), such as a textual summary of the trend (i.e., ``increasing\" or ``decreasing\"), the specific difference value between the two cases described by a difference fact, or the correlation coefficient computed for an association fact, as shown in Table~\\ref{tab:fact}. These values help with a more insightful description of the fact.\n\n\\rv{When compared to the concepts introduced in~\\cite{wang2019datashot,chen2009toward}, the above definition simplifies and restricts the fact fields to ensure a clear semantic expression of the data that avoids redundancy, overlaps, and ambiguity.
Specifically, when compared to \\cite{wang2019datashot}, we removed the fact fields that are irrelevant to the fact semantics and treated ``aggregation\" as an operation on ``measures\" instead of as a fact type to avoid duplicated fact definitions. In addition, as summarized in Table~\\ref{tab:fact}, we added constraints on each fact field to ensure clear semantics. For example, the facts of the ``distribution\" and ``trend\" types are both designed to capture the data patterns given by the measures of different data groups in the subspace. The two fact types are differentiated by their ways of breaking down a subspace: the subspace in a ``trend\" fact must be divided by a temporal field, whereas the subspace in a ``distribution\" fact can only be divided by a categorical field. Thus, each fact can be described by a syntax that is used for generating a textual description of the fact.}\n\nTo understand the above concepts, let's consider the following examples. Given a dataset about the COVID-19 virus outbreak in China, the data fact, {\\it \\{``distribution\", \\{\\{Country =``China\"\\}\\}, \\{Province\\}, \\{sum(Infections)\\}, \\{Province=``Hubei\"\\}\\}}, describes ``the distribution of the \\underline{total number of} \\underline{infections} over all \\underline{provinces} when \\underline{the country is China} (subspace) and \\underline{Hubei} needs to pay attention\" according to the syntax of the distribution fact. Similarly, the data fact, {\\it \\{``trend\", \\{\\{Province =``Hubei\"\\}\\}, \\{Date\\}, \\{sum(Infections)\\}, \\{Date=``2020-1-24\"\\}\\}}, indicates ``the changing trend of the \\underline{total number of} \\underline{infections} over different \\underline{dates} when \\underline{province is Hubei} and the values of \\underline{2020-1-24} need to pay attention\".\n\n\n\n\n\n\\paragraph{\\bf Importance Score} We estimate the importance of a data fact $f_i = \\{t_i,s_i,b_i,m_i,x_i\\} \\in F$ based on its \\textit{self-information} (denoted as $I(f_i)$) weighted by its \\textit{pattern significance} (i.e., $S(f_i)\\in [0,1]$) as follows:\n\\begin{equation}\n I_s(f_i) = S(f_i) \\cdot I(f_i)\n\\label{eq:importance}\n\\end{equation}\n\nIn particular, $I(f_i) \\in [0, \\infty)$ is defined based on information theory and can be measured in bits using the following formula:\n\\begin{equation}\nI(f_i) = -\\log_2(P(f_i))\n\\label{eq:info}\n\\end{equation}\n\\rv{where $P(f_i)$ indicates the occurrence probability of the fact given the input data. A data fact with a lower occurrence probability in the data space has a higher self-information value, as it reveals uncommon patterns, which are usually more meaningful and interesting.} $P(f_i)$ is formally determined by the occurrence probability of the fact's \\textit{\\textbf{subspace}}\\xspace $s_i$, the probability of selecting $x_i$ as the \\textit{\\textbf{focus}}\\xspace in $s_i$, and the probabilities of choosing $m_i$ ($P(m_i|t_i)$) and $b_i$ ($P(b_i|t_i)$) to measure and break down $s_i$ given a fact type $t_i$: \n\\begin{equation}\nP(f_i) = P(m_i|t_i) \\cdot P(b_i|t_i) \\cdot P(s_i) \\cdot P(x_i | s_i)\n\\label{eq:pfi}\n\\end{equation}\nwhere $P(m_i|t_i)$ and $P(b_i|t_i)$ are defined according to the data type constraints of $m_i$ and $b_i$ as summarized in Table~\\ref{tab:fact}. For example, when the fact type is ``Value\", $P(m_i|Value)$ is $1\/N$, where $N$ is the total number of numerical fields in the data.
Similarly, $P(b_i|Difference)$ is $1\/(C + T)$, where $C$ and $T$ are the total numbers of categorical and temporal fields in the data. Moreover, in Formula (\\ref{eq:pfi}), $P(x_i | s_i)$ is defined as the proportion of the focused data items in the subspace, i.e., $P(x_i | s_i) = count(x_i) \/ count(s_i)$, under the assumption that each data item in the subspace is equally likely to be selected as a focus. In our design, all the data items in $s_i$ are focused by default, i.e., $P(x_i | s_i) = 1$ when the \\textit{\\textbf{focus}}\\xspace field is unspecified. To calculate $P(s_i)$, we first assume that $s_i$ consists of $k$ data filters, i.e., $\\{\\{\\mathcal{F}_1 = \\mathcal{V}_1\\},\\cdots, \\{\\mathcal{F}_k = \\mathcal{V}_k\\}\\}$, and that there are $m$ independent data fields that can be used for formulating a subspace. In this way, $P(s_i)$ is defined as follows:\n\\begin{equation}\nP(s_i) = \\frac{1}{\\sum_{l=0}^{m} C(m, l)}\\cdot\\prod_{j=1}^{k}P(\\mathcal{F}_j = \\mathcal{V}_j)\n\\label{eq:p_si}\n\\end{equation}\nwhere the first term indicates the probability of choosing the fields $\\mathcal{F}_1,\\cdots, \\mathcal{F}_k$ from the input data to formulate the subspace $s_i$. $C(m, l)$ is the number of $l$-combinations over a set of $m$ possible data fields, and $\\sum_{l=0}^{m} C(m, l)$ counts all possible cases for formulating a subspace. In this way, the first term in Formula (\\ref{eq:p_si}) indicates that the method we use for formulating the current subspace $s_i$ is just one of all possible cases. The second term in Formula (\\ref{eq:p_si}) indicates the probability of using the corresponding values $\\mathcal{V}_1,\\cdots, \\mathcal{V}_k$ on the selected fields to filter the data. This probability is directly given by the product of the proportions of the data that satisfy each filter condition, i.e., $\\{\\mathcal{F}_j = \\mathcal{V}_j\\}$.\n\nIn Formula (\\ref{eq:importance}), $S(f_i) \\in [0, 1]$ estimates the significance of the data patterns described by the fact $f_i$, which is calculated based on auto-insight techniques~\\cite{ding2019quickinsights, tang2017extracting, wang2019datashot}. The detailed methods are described in the supplemental material. \\rv{It is worth mentioning that a significant pattern does not necessarily have a high self-information value; only the combination of both measurements, as shown in Formula (\\ref{eq:importance}), guarantees a comprehensive estimation. Under this definition, the importance of a fact is determined only by its data content and is independent of how frequently a type of fact is used in data stories (Fig.~\\ref{fig:statistic}(a)).}
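\n\nTo illustrate the probability model above, the following Python sketch computes $P(s_i)$ per Formula (\\ref{eq:p_si}) and the resulting self-information per Formula (\\ref{eq:info}). It is a minimal sketch under the stated assumptions; the function names and the toy table are hypothetical.\n\n\\begin{verbatim}\nimport math\nfrom math import comb\n\ndef subspace_probability(filters, data, m):\n    # P(s_i): the field-choice term times the proportion of rows matching\n    # each filter. Note that the sum of C(m, l) for l = 0..m equals 2**m.\n    p = 1.0 \/ sum(comb(m, l) for l in range(m + 1))\n    for field, value in filters.items():\n        p *= sum(1 for row in data if row[field] == value) \/ len(data)\n    return p\n\ndef self_information(p_measure, p_breakdown, p_subspace, p_focus):\n    # I(f) = -log2 P(f), with P(f) = P(m|t) * P(b|t) * P(s) * P(x|s)\n    return -math.log2(p_measure * p_breakdown * p_subspace * p_focus)\n\n# A one-filter subspace over a 3-field table of 10 rows, 4 of which\n# satisfy the filter: P(s) = (1\/8) * 0.4 = 0.05.\nrows = [{'Province': 'Hubei'}] * 4 + [{'Province': 'Other'}] * 6\nprint(subspace_probability({'Province': 'Hubei'}, rows, m=3))\n\\end{verbatim}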
\n\n\\input{sections\/4-2-algorithm}\n\\section{Data Story Editor}\n\\label{sec:editor}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/UI.png}\n \\vspace{-1.5em}\n \\caption{The story editor of the Calliope\\xspace system consists of three major views: (1) the storyline view for story configuration, generation, and storyline editing, (2) the fact view for fact editing, and (3) the story visualization view for visual data story preview and sharing \\rv{(\\textit{\\textbf{factsheet}}\\xspace mode)}.\n } \n \\label{fig:ui}\n \\vspace{-1em}\n\\end{figure}\n\\setlength{\\textfloatsep}{20pt}\n\n\nIn this section, we introduce the design of the story editor and the methods used for visualizing a data story.\n\n\\subsection{User Interface}\nThe story editor, as shown in Fig.~\\ref{fig:ui}, consists of three major views: the storyline view (Fig.~\\ref{fig:ui}-1), the fact view (Fig.~\\ref{fig:ui}-2), and the story visualization view (Fig.~\\ref{fig:ui}-3). In the storyline view, a user can upload a spreadsheet, set the story generation goal, and adjust the reward function in a group of configuration panels (Fig.~\\ref{fig:ui}-1(a)). The generated data facts are shown in a row (Fig.~\\ref{fig:ui}-1(b)), in which a user can remove a fact or change the generated narrative order based on his\/her own preferences. Each fact is visualized by a chart and captioned by a generated text description (\\textbf{R3}). When a fact is selected, its data details, including each of its fields, importance scores, and visual and textual representations, are shown in the fact view (\\textbf{R4}). \\rv{The generated data story can be visualized in the story visualization view through three visualization modes: (1) \\textit{\\textbf{storyline}}\\xspace mode (Fig.~\\ref{fig:teaser}), (2) \\textit{\\textbf{swiper}}\\xspace mode (Fig.~\\ref{fig:casestudy}(a)), and (3) \\textit{\\textbf{factsheet}}\\xspace mode (Fig.~\\ref{fig:casestudy}(b)). These modes are respectively designed for representing the story on laptops\/tablets, smartphones, and printouts to facilitate flexible story communication and sharing (\\textbf{R5}). A user can easily switch between the different modes in the story visualization view via a drop-down menu.}\n\n\n\\subsection{Visualizing a Data Story}\nA data story generated by the engine is visualized as a sequence of charts through two steps: \\textit{showing a data fact} and \\textit{showing a story}. The first step maps a data fact to a chart, while the second step organizes a sequence of charts in an expressive layout as a story.\n\n\\paragraph{\\bf Showing a Data Fact} Benefiting from the simple and clear definition of each fact type introduced in Section~\\ref{sec:fact}, Calliope\\xspace is able to directly convert a data fact into a captioned chart that incorporates both the visual and textual representations. Specifically, the caption is generated based on the syntax defined in Table~\\ref{tab:fact}, and the fact is automatically visualized by following a rule-based approach. In particular, the system first selects the most frequently used chart for the fact type according to Table~\\ref{tab:charts}. After that, it selects a subset of data from the input spreadsheet according to the filters given by the \\textit{\\textbf{subspace}}\\xspace field and then maps the \\textit{\\textbf{breakdown}}\\xspace field(s) to the categorical channel(s) and the \\textit{\\textbf{measure}}\\xspace field(s) to the numerical channel(s) of the chart. Finally, the data values indicated by the \\textit{\\textbf{focus}}\\xspace field are highlighted in the chart.
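\n\nA minimal sketch of this rule-based mapping is given below. The default chart table is a hypothetical stand-in for Table~\\ref{tab:charts}, and the returned dictionary is a simplified declarative specification rather than Calliope\\xspace's actual chart format.\n\n\\begin{verbatim}\nDEFAULT_CHART = {'trend': 'line', 'distribution': 'bar',\n                 'proportion': 'pie', 'difference': 'bar',\n                 'association': 'scatter', 'rank': 'bar'}\n\ndef fact_to_chart_spec(fact, data):\n    # 1. filter the rows by the subspace; 2. pick the default chart for\n    # the fact type; 3. map breakdown and measure to visual channels;\n    # 4. record the focused items for highlighting.\n    rows = [r for r in data\n            if all(r.get(k) == v for k, v in fact.subspace.items())]\n    return {'mark': DEFAULT_CHART[fact.type],\n            'x': fact.breakdown,\n            'y': fact.measure,\n            'highlight': fact.focus,\n            'data': rows}\n\\end{verbatim}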
\nFig.~\\ref{fig:ui}(2) illustrates an example of showing a difference fact in a captioned bar chart.\n\n\n\\begin{figure*}[htb]\n \\centering\n \\includegraphics[width=0.86\\linewidth]{figures\/case23.png}\n \\vspace{-0.5em}\n \\caption{Two data story examples generated by Calliope\\xspace: (a) a story about car sales records around the economic crisis in 2008, shown in the \\textit{\\textbf{swiper}}\\xspace mode, and (b) a story about startup failures after the tide of ``new economics\" in China, shown in the \\textit{\\textbf{factsheet}}\\xspace mode.} \n \\label{fig:casestudy}\n \\vspace{-1em}\n\\end{figure*}\n\n\\paragraph{\\bf Showing a Story} The story visualization view provides a variety of visualization modes to represent the generated data story for different application scenarios. In particular, a summary is first provided to give a textual briefing of the story, helping users obtain the data insights at a glance. It shows the data coverage rate, the total number of data facts in the story, and the generated textual narrative of the story. The \\textit{\\textbf{storyline}}\\xspace and \\textit{\\textbf{swiper}}\\xspace visualization modes are respectively designed to facilitate an efficient story exploration on tablets and smartphones. \\rv{In particular, the captioned charts showing the data facts in the story are horizontally aligned in a row to represent the narrative order in the \\textit{\\textbf{storyline}}\\xspace mode} and are shown one at a time in the \\textit{\\textbf{swiper}}\\xspace mode. A user can swipe the touch screen to explore the story through these representations. Finally, the \\textit{\\textbf{factsheet}}\\xspace mode is designed to show the story in the form of a poster that can easily be printed out.\n\n\n\\section{Evaluation}\nWe evaluated the Calliope\\xspace system via (1) three examples that showcase the quality of the generated stories, (2) two controlled experiments that estimate the usefulness and quality of the generated logic, and (3) domain expert interviews that estimate the usability of the system.\n\n\\subsection{Example Stories}\nWe demonstrate three visual data stories generated by Calliope\\xspace based on three real-world datasets, as shown in Fig.~\\ref{fig:teaser} and Fig.~\\ref{fig:casestudy} and described as follows:\n\nFig.~\\ref{fig:teaser} shows a story generated based on a \\textit{\\textbf{COVID-19}}\\xspace dataset (903 rows, 5 columns). The data record the numbers of daily infections, deaths, and healings of COVID-19 in China from March 1st to March 21st. The generated story illustrates that the daily mortality in China decreased in March ({\\it Fact 1}), and the largest number, 42, occurred on March 2nd ({\\it Fact 2}). Hubei was the most affected province ({\\it Fact 3}). The total deaths in Hubei accounted for 97.4\\% of those in China ({\\it Fact 4}), which was 423 ({\\it Fact 5}). A large number of patients recovered in March ({\\it Fact 6}), showing the improving situation in China.\n\nFig.~\\ref{fig:casestudy}(a) shows a story generated based on a \\textit{\\textbf{Car Sales}}\\xspace dataset (275 rows, 4 columns), which includes the sales records of different automobile brands around the financial crisis in 2008. The story shows that in 2007-2011, 21,921,768 cars were sold in total ({\\it Fact 1}). The top three sellers were Ford, Toyota, and Honda ({\\it Fact 2}). The difference was huge when comparing the best and worst sellers ({\\it Fact 3}).
Specifically, SUV was the best-selling model ({\\it Fact 4}), which sold 6,764,065 more than MPV ({\\it Fact 5}). Generally, the sales records decreased during the financial crisis ({\\it Fact 6}).\n\nFig.~\\ref{fig:casestudy}(b) presents a story generated based on a \\textit{\\textbf{Startup Failures}}\\xspace dataset. The data (1234 rows, 6 columns) record a set of companies that closed during or after the rising tide of ``new economics\" in China from 2010 to 2019. Each startup company is described by six attributes, including its closing year, location, industry, funded status, survival time, and the main cause of failure. The story shows that numerous companies were closed in recent years ({\\it Fact 1}), and most of them were located in Eastern China ({\\it Fact 2}). The most dangerous fields were e-commerce, social media, and local business ({\\it Fact 3}). Most companies closed in these fields were still in the early stages before the series A+ round ({\\it Fact 4}), and some even closed without receiving any investment ({\\it Fact 5}). Regarding the reasons, ``no business model\" and ``no market need\" were the most frequently occurring problems in these startup companies ({\\it Fact 6}).\n\n\n\\subsection{Evaluation of the Generated Logic}\nWe verified the usefulness and the quality of the generated logic in a story via two controlled experiments.\n\n\\paragraph{\\bf Experiment I: Usefulness}\nWe first estimated whether the logic generated by Calliope\\xspace helps with the understanding of a data story. To this end, we compared our generation results with the factsheets generated by DataShot~\\cite{wang2019datashot}, in which a set of selected data facts is randomly organized. \n\n{\\underline{\\textit{Data.}}}\n\\rv{We collected the same datasets illustrated in Fig. 4 (C, D) of the DataShot paper, i.e., CarSales and SummerOlympics. Based on these two datasets, we automatically generated two factsheets using Calliope and directly picked two cases from the DataShot paper as the baseline. To ensure a fairer study setting, we made sure that the stories generated by the different systems contained similar data facts, and we also revised the design of all factsheets to share the same style.}\n\n{\\underline{\\textit{Procedure and Tasks.}}}\n\\rv{We recruited 16 college students (12 females) aged 22 - 26 years old (M=23.94, SD=1.43) as participants. Each participant was presented with two factsheets on different topics from our data, one generated by Calliope and one by DataShot.\nWe counterbalanced the presentation order of the factsheets for a fair comparison.\nThe participants were asked to read the factsheets carefully and compare them on five specific aspects, including logicality, comprehension, memorability, engagement, and dissemination. The experiment lasted approximately 40 minutes per participant.}\n\n{\\underline{\\textit{Results.}}}\n\\rv{In terms of \\textit{Logicality} and \\textit{Memorability}, Calliope received more positive feedback than DataShot. One participant commented that ``\\textit{I can smoothly follow it's (Calliope) logic from whole to part, as it first introduces the overall information about Olympic golds and then zooms in on specific sports and countries.}\" Another participant said, ``\\textit{it's much easier to remember the story generated by Calliope, as the annual car sales in different brands is presented step by step in a proper order}\". \nRegarding \\textit{Comprehension}, \\textit{Engagement}, and \\textit{Dissemination}, Calliope performs comparably to DataShot. One participant said, ``\\textit{I enjoy the simple and beautiful visualization of both factsheets and would love to share them on social media if the data is relevant.}\"} \n\n\n\n\n\n\\paragraph{\\bf Experiment II: Quality} \nThe second experiment was designed to evaluate the quality of the generated logic. To this end, we objectively estimated the consistency between the logical orders given by users and those generated by Calliope\\xspace based on the same set of data facts.\n\n\\underline{\\textit{Procedure and Tasks.}} We first shuffled the order of the data facts in a visual data story generated by Calliope\\xspace and then asked a group of users to restore the logical order by reading the chart and description of each fact. Finally, we checked the consistency between the human-generated orders and those produced by Calliope\\xspace based on Kendall's $\\tau_b$ correlation~\\cite{kendall1945treatment}. \n\\rv{This measure estimates the consistency of the element orders between two sequences; its value lies in $[-1, 1]$, with $-1$ indicating a completely reversed order and $1$ indicating identical orders}. To ensure a fair and comprehensive comparison, we generated 12 data stories based on the aforementioned three datasets, four stories per dataset. Each story contained six data facts whose order was shuffled for the experiment. \n\nA group of 20 participants (17 female) aged 22-30 years old ($M=26, SD=2.63$) was recruited for Experiment II. All of the participants reported that they had fundamental knowledge about data visualization or experience in data-oriented storytelling. The experiment started with a brief introduction about the data, and the participants were asked to reorder the data facts of all 12 shuffled stories via an interactive user interface. \nWe also encouraged the participants to fully explore the data and try their best to understand the data insights represented by each data fact.\n\n{\\underline{\\textit{Results.}}}\n\\rv{We calculated the average Kendall's $\\tau_b$ value on each dataset, and the results showed that the logical orders generated by Calliope\\xspace were consistent with those given by our participants (\\textit{\\textbf{Car Sales}}\\xspace: $\\mu=0.487, \\sigma=0.29$; \\textit{\\textbf{COVID-19}}\\xspace: $\\mu=0.648, \\sigma=0.327$; \\textit{\\textbf{Startup Failures}}\\xspace: $\\mu=0.63, \\sigma=0.295$). \nWe also used the human-generated sequences as the ground truth and calculated the Kendall's $\\tau_b$ values between random orders and the human-generated orders as a baseline. The resulting baseline is 0.015, which indicates that random orders are essentially uncorrelated with the human orders (the value is close to 0). \nBy comparing the $\\tau_b$ values, we found that the logical orders generated by Calliope are much more consistent with the human orders than the baseline.}
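\n\nFor reference, the consistency measure used above can be reproduced with a few lines of Python. This is a minimal sketch assuming each fact order is encoded as a rank vector; \\texttt{scipy.stats.kendalltau} computes the $\\tau_b$ variant (which accounts for ties) by default.\n\n\\begin{verbatim}\nfrom scipy.stats import kendalltau\n\nsystem_order = [1, 2, 3, 4, 5, 6]  # narrative order produced by the system\nhuman_order  = [1, 3, 2, 4, 5, 6]  # order restored by a participant\n\ntau, p = kendalltau(system_order, human_order)\nprint(round(tau, 3))  # 1: identical orders, -1: completely reversed\n\\end{verbatim}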
\\begin{figure}[b]\n \\centering\n \\includegraphics[width=0.9\\linewidth]{figures\/interview.png}\n \\vspace{-0.5em}\n \\caption{The ratings of the generated story, visualization design, and system on different criteria, based on a 5-point Likert scale given by 10 expert users, where 5 is the best and 1 is the worst.}\n \\label{fig:questionare}\n \\vspace{-0.5em}\n\\end{figure}\n\n\\subsection{Expert Interview}\nTo further evaluate the usability of the Calliope\\xspace system, we performed a series of interviews with three groups of expert users from different areas. The first group included four data journalists (denoted as J1-J4) from different news media in China. They had over 3.5 years of working experience on average, were very familiar with data-oriented storytelling, and had the technical skills for creating a visual data story. The second group comprised three data analysts (denoted as D1-D3) from an international IT consulting company. Their major job was to analyze customers' data and write analysis reports. BI tools such as Tableau and PowerBI were frequently used in their daily work. The third group consisted of three senior visualization researchers (denoted as V1-V3), all of whom had experience publishing papers in major visualization conferences such as IEEE VIS and EuroVis. \n\n\\paragraph{\\bf Procedure} The interviews were performed via an online meeting system. Each interview started with a 10-minute introduction of the system. After that, the experts were encouraged to use the online Calliope\\xspace system on their own. After fully exploring the functions of the system, the experts were asked to generate a data story based on one of the three datasets introduced in Experiment II. To arouse their interest, we let the journalists explore the \\textit{\\textbf{COVID-19}}\\xspace dataset, as it is a recent and important news topic. We let the data analysts use the \\textit{\\textbf{Car Sales}}\\xspace data given its similarity to the business data analyzed in their work. We let the visualization researchers explore the \\textit{\\textbf{Startup Failures}}\\xspace data, as it is the most complex one. They were also encouraged to edit the generated story and share their findings with us. The experts were asked to fill in a questionnaire, followed by an interview, after they had fully explored the functionalities of the Calliope\\xspace system. The whole process lasted about one hour, and the interviews were recorded for later analysis.\n\n\\paragraph{\\bf Results} \nFig.~\\ref{fig:questionare} shows the questionnaire results: Calliope\\xspace was rated relatively high in terms of the generated story, visualization, and system.\nA large number of positive comments were recorded during the interviews. Due to the page limit, we summarize the interview results below.\n\n\\textit{\\underline{Data Story.}} \nAll the experts agreed that the generated data story was able to express useful data insights. The visualization researcher V2 mentioned that ``\\textit{the data facts in the story are quite clear and are well-organized}\". J1 commented that ``\\textit{the story starts from an overview, followed by a series of data details, ..., It's the way we frequently adopted when writing a news story. It's amazing that now it can be automatically generated}\". J3 also noted that \\textit{``the logical order [of the story] can help readers get into the points\"}. D1 observed that the ordered data facts can help with the efficient exploration of the data, \\textit{``[Automatically] showing the appropriate dimensions and measures in a sequence of visualizations can definitely guide the data exploration, ..., it's helpful when you have no idea about how to get started\"}.\n\n\n\\textit{\\underline{Visualization.}}\nIn terms of visualization, all the experts were satisfied with the design of the three visualization modes in Calliope\\xspace, \\textit{``it's a very nice and thoughtful design\"} (J1-J4). In particular, they felt that the \\textit{\\textbf{storyline}}\\xspace mode provides a good overview (J1-J3, D1, D3, V1, V2).
All the experts believed that the \\textit{\\textbf{swiper}}\\xspace mode was neat and helpful when viewing the story on a smartphone, while the \\textit{\\textbf{factsheet}}\\xspace mode supports easy printing of a story. \nAll the journalists felt that editing was an important function, \\textit{``it (editing function) allows us to create a high-quality story quickly based on the generation results\"} (J1). They also felt that generating stories by interactively changing the reward was \\textit{``interesting\"} and \\textit{``inspiring\"}. J2 suggested that \\textit{``we usually write stories from different perspectives, and it can facilitate my ideation process\"}.\n\n\\textit{\\underline{System.}}\nMost experts (J1-J3, D1, D3, V2, V3) mentioned that the system is useful for users who are not skilled at data analysis or visualization. The data journalists J1-J4 especially appreciated the efficient story generation and editing functions of Calliope\\xspace. J4 said, \\textit{``with this tool I can quickly create a story by first generating a draft and then revising it accordingly\"}. The data analysts felt the system is powerful in helping them efficiently preview unknown data, \\textit{``with this tool, I can quickly find where to start when getting a [new] spreadsheet\"} (D1). The visualization researchers believed that our system lowers the barriers to creating a data story. V3 said, ``when compared with other data story authoring tools, this system is much more smart as it requires limited knowledge about data analysis and visualization design\". \n\n\\section{\\bf Limitations and Future Work}\n\\rv{Despite the above positive results from the evaluation, we would also like to summarize and discuss several limitations that were found during the design and implementation process or mentioned by our expert users during the interviews. \nWe hope that highlighting these limitations will point out several potential future research directions and inspire follow-up studies.\n}\n\n\\rv{\\underline{\\textit{Supporting a Better Textual Narrative.}} During the interviews, several data journalists (J1, J2, J4) felt the generated captions were too rigid to be used directly, especially in data news. More diverse and insightful descriptions were desired. Moreover, the current results also contained some grammar errors, which need to be addressed (J1, J2, D1, V2, V3). However, all of them acknowledged that the current results, although unsatisfactory, were still useful for a rapid preview and briefing of the input data. In the future, it is necessary to leverage more advanced techniques from the field of natural language processing to generate higher-quality textual narratives in the data story.}\n\n\\underline{\\textit{Understanding Data Semantics.}} After using Calliope\\xspace, although impressed, some experts (J1-J3, V2) expected a more intelligent tool that can even understand the semantics of the data to better generate the story content and logic. \\rv{We acknowledge that this is a key limitation of the current system: understanding the underlying semantics of the data is critical for generating a meaningful and insightful data story. This is a promising research direction that is worth further exploration. To address this issue, one could leverage or develop more advanced AI techniques, or introduce a sophisticated interactive feedback mechanism that keeps users in the generation loop and leverages human intelligence to steer the data quality~\\cite{liu2018steering} and guide the underlying generation algorithm\/model to better understand the data semantics.}\n\n\\underline{\\textit{Enriching Visualization.}} Several experts (J3, D1, D2) would like to have a slides mode and a dashboard mode to support more application cases. J1 and V3 also pointed out that some of the currently generated visual encodings are notably simple, and a chart should encode more information at a time. For example, when showing a line chart, the size of the points could also be used to encode data, and a stacked bar chart could be used to show an additional categorical field in the data. \\rv{In addition, Calliope\\xspace cannot deal with hierarchical or relational datasets, which are also desired functions (V1, V2). Providing more advanced visual representations for a story is also valuable future work.}\n\n\\underline{\\textit{Performance Issues.}}\nThe current system design and implementation have some performance bottlenecks that are worth future study. \\rv{In particular, the calculation of data fact significance, i.e., $S(f_i)$ in Formula (\\ref{eq:importance}), consists of statistical computations, which are usually slow and thus limit the number of facts that can be explored in each search iteration, affecting the generation quality.}\n\\section{Conclusion}\n\nWe have presented Calliope\\xspace, a novel system designed for automatically generating visual data stories from a spreadsheet. The system incorporates a novel logic-oriented Monte Carlo tree search algorithm that creates a data story by gradually generating a sequence of data facts in a logical order while exploring the data space. The importance of a fact is measured by its information quantity and its statistical significance. Each fact in the story is visualized in a chart with an automatically generated caption. A story editor is introduced in the system to facilitate the easy and efficient editing of a generated story. The proposed technique was evaluated via three example cases, two controlled experiments, and a series of interviews with 10 expert users from different domains. The evaluation showed the power of the Calliope\\xspace system and revealed several limitations of the current system, which will be addressed in the future.\n\n\n\\section{Fact Significance}\n\nFact significance is calculated based on prior auto-insight techniques \\cite{tang2017extracting, ding2019quickinsights, wang2019datashot}. \nAs defined in these references, the significance measure reveals the uncommonness of an observed insight in the result set \\cite{tang2017extracting}.\nA meaningful fact should exhibit significant differences against a baseline that reflects the common situations formed by the majority of non-insights \\cite{ding2019quickinsights}.\nIn the following, we state the calculation method for each fact type using the same formulations as these references. \n\n\n\\paragraph{\\bf Value} DataShot states that ``Some data facts (e.g., Value, Aggregation) only derive values from the data. We simply assign them to zero.\"\\cite{wang2019datashot}.
In our implementation, we use the probability of the fact to calculate the significance score.\n\n\\paragraph{\\bf Difference}\nA significant difference between the two \\textit{focus} items corresponds to a high score for Difference. \nThe significance is equal to 1 when the difference is at its maximum.\n\n\\paragraph{\\bf Proportion}\nWe follow the definition of \\underline{Proportion} in DataShot\\cite{wang2019datashot} and \\underline{Attribution} in QuickInsights\\cite{ding2019quickinsights}.\nDataShot states that ``A high proportion corresponds to a high score for Proportion\".\nQuickInsights states that ``Attribution shows the fact that the leading value dominates (accounting for >= 50\\% of) the group\".\nThus, we directly use the proportion as the significance of a Proportion fact. \nIn addition, if the proportion is larger than 50\\%, we set the significance to 1 because the focus part dominates the group.\n\n\\paragraph{\\bf Trend}\nWe follow the definition of \\underline{Trend} in DataShot and \\underline{Shape Insight} in Top-K Insights\\cite{tang2017extracting}.\nDataShot states that ``A sharp increase corresponds to a high score for Trend\".\nTop-K Insights states that ``In business intelligence applications, data analysts are attracted to a clear rising\/falling trend, whose slope is very different from 0\".\nThe following is the method mentioned in Top-K Insights (a code sketch of this recipe is given at the end of this section):\n\nWe set the null hypothesis as: $X$ forms a shape with a slope near 0.\nThus, the p-value should measure how surprisingly the slope differs from 0.\n\n1. First, we fit $X$ to a line by linear regression analysis, and then compute its slope $slope$ and the goodness-of-fit value $r^{2}$.\n\n2. We use a logistic distribution to model the distribution of slopes.\n\n3. The p-value is the probability of slope values equal to or larger than the observed slope of the rising trend.\n\n4. Finally, we define the significance as $S(f)=r^2\\cdot (1-p)$, where the goodness-of-fit value $r^2$ is used as a weight.\n\n\\paragraph{\\bf Categorization}\nWe follow the definition of \\underline{Evenness} in QuickInsights\\cite{ding2019quickinsights}.\nQuickInsights states that Evenness covers the cases where all the values of a measure for a given category are close to each other. The method is as follows:\n\n1. Perform a chi-square test for the hypothesis that the counts in each category are equal.\n\n2. We obtain the significance as $S(f)=1-p$.\n\n\\paragraph{\\bf Distribution}\nThe significance of a distribution should reveal how surprisingly $X$ differs from the Gaussian distribution, which is the common distribution in the natural world.\nThe method is as follows:\n\nWe set the null hypothesis as: $X$ follows a Gaussian distribution.\n\n1. Perform the Shapiro-Wilk test for $X$.\n\n2. We obtain the significance as $S(f)=1-p$.\n\n\\paragraph{\\bf Rank}\nWe follow the definition of \\underline{Point Insight} in Top-K Insights\\cite{tang2017extracting}.\nTop-K Insights states that ``In the business domain, the sale of products often follows a power-law distribution\".\nThe following is the method mentioned in Top-K Insights:\n\nWe set the null hypothesis as: $X$ follows a power-law distribution with Gaussian noise.\n\n1. First, we sort $X$ in descending order and obtain the maximum value $x_{max}$.\n\n2. Then, we fit the values in $X$ to a power-law distribution; if it is a good fit, the prediction errors (i.e., subtracting the observed value $\\hat{x_i}$ from the estimated value $x_i$, also called residuals) approximately follow a Gaussian distribution.\n\n3.
Next, we determine how surprising the observed $X$ is against the hypothesis and calculate the p-value $p$.\n\n4. Finally, we obtain the significance as $S(f)=1-p$.\n\n\\paragraph{\\bf Association}\nWe follow the definition of \\underline{Correlation} in QuickInsights\\cite{ding2019quickinsights}.\nThe following is the method mentioned in QuickInsights:\n\nThe significance of two \\textit{measures} is defined based on a test using Student's t-distribution with Pearson's correlation coefficient $r$.\n\n1. Specify the null and alternative hypotheses.\n\n2. Calculate the value of the test statistic: $t=r \\sqrt{\\frac{n-2}{1-r^{2}}}$.\n\n3. Use the resulting test statistic $t$ to calculate the p-value, which is determined by referring to a t-distribution with $n-2$ degrees of freedom.\n\n4. The p-value is translated into significance: the lower the p-value, the higher the significance.\n\n\\paragraph{\\bf Extreme}\nWe follow the definition of \\underline{Outstanding No.1} and \\underline{Outstanding Last} in QuickInsights\\cite{ding2019quickinsights}.\nThe following is the method mentioned in QuickInsights:\n\nTake Outstanding No.1 as an example.\nGiven a group of non-negative numerical values $X$ and their largest value $x_{max}$, the significance of $x_{max}$ being the Outstanding No.1 of $X$ is defined based on the p-value against the null hypothesis that $X$ obeys an ordinary long-tail distribution.\n\n1. We sort $X$ in descending order;\n\n2. We assume the long-tail shape obeys a power-law function. Then we conduct a regression analysis for the values in $X$ other than $x_{max}$ using power-law functions, where $i$ is an order index and, in our current implementation, we fix $\\beta = 0.7$ in the power-law fitting;\n\n3. We assume the regression residuals obey a Gaussian distribution. Then we use the residuals from the preceding regression analysis to train a Gaussian model $H$;\n\n4. We use the regression model to predict $x_{max}$ and get the corresponding residual $R$;\n\n5. The p-value is calculated via $P(R|H)$.\n\n\\paragraph{\\bf Outlier} We follow the definition of \\underline{Outlier} in DataShot and QuickInsights\\cite{wang2019datashot, ding2019quickinsights}.\nBecause no exact method is mentioned in these papers, we use Grubbs' test, a commonly used statistical method for detecting outliers.\n\n1. The first step is to quantify how far the outlier is from the others. We calculate the Grubbs test statistic $G$ as the largest absolute deviation from the sample mean in units of the sample standard deviation.\n\n2. The hypothesis of no outliers at a certain significance level is rejected if the calculated $G$ is greater than the corresponding critical value, which has been tabulated for the Grubbs test statistic. \n\n3. If there exists at least one outlier and the p-value is small, we can conclude that the deviation of the outlier from the other values is statistically significant. The lower the p-value, the higher the significance. Hence, we obtain the significance as $S(f)=1-p$. \n\n4. Otherwise, if the hypothesis of no outliers is accepted, which means there is no outlier in the sample, the significance is equal to 0.
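\n\nAs an illustration of these recipes, the following Python sketch implements the Trend case. It is a simplified sketch, not the exact implementation: the scale of the logistic null model of slopes (\\texttt{slope\\_scale}) is an assumed constant.\n\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import linregress, logistic\n\ndef trend_significance(values, slope_scale=1.0):\n    # S(f) = r^2 * (1 - p): fit a line, then score how surprising the\n    # observed slope is under a logistic null model centered at 0.\n    fit = linregress(np.arange(len(values)), values)\n    p = logistic.sf(abs(fit.slope), loc=0.0, scale=slope_scale)\n    return fit.rvalue ** 2 * (1.0 - p)\n\nprint(trend_significance([42, 35, 31, 27, 20, 14]))  # clear falling trend\n\\end{verbatim}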
\\section{An Example of the Algorithm}\n\nTo explain the algorithm, we provide a concrete example in which the process of creating a 3-fact story is explained in detail. Here, we use a simplified spreadsheet about the deaths of COVID-19 in China from March 1st to March 21st as a running example. The spreadsheet contains 3 columns: Date (temporal), Province (categorical), and Deaths (numerical). \n\nInitially, the algorithm randomly generates a set of candidate first facts according to the preliminary survey, such as the Value fact ``The value of the \\underline{total deaths} is 423\" (\\textbf{F1}), the Trend fact ``The trend of the \\underline{total deaths} over \\underline{dates} is decreasing\" (\\textbf{F2}), and the Categorization fact ``The data contains 32 \\underline{provinces}\" (\\textbf{F3}). According to the importance score, \\textbf{F2} is chosen as the root of the tree.\n\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/example-step-1.png}\n \n\\end{figure}\n\nNext, the algorithm begins an iterative process with four major steps: selection, expansion, simulation, and back-propagation. \n\n\\begin{itemize}\n \\item {\\bf Selection}: First, it finds the fact with the largest reward in the tree. At the current stage, \\textbf{F2} is picked because it is the only fact in the tree. \n\n \\item {\\bf Expansion}: Second, the algorithm expands a set of data facts from the Trend fact according to the different logical relations. For example, a contrast relation may lead to the Trend fact ``The \\underline{total deaths} over \\underline{dates} shows an increasing trend when \\underline{the province is Hong Kong}\" (\\textbf{F4}), an elaboration relation can lead to the Extreme fact ``The maximum value of the \\underline{total deaths} is 42 when \\underline{the date is 2020\/3\/2}\" (\\textbf{F5}), and a similarity relation can trigger the Distribution fact ``The distribution of the \\underline{total deaths} over different \\underline{provinces} shows an overview\" (\\textbf{F6}).\n\n \\item {\\bf Simulation}: Third, the algorithm starts to simulate from each of the expanded facts (\\textbf{F4}, \\textbf{F5}, and \\textbf{F6}). In each simulation process, the algorithm tries to explore the design space and find a path with the largest reward. \n\n \\item {\\bf Back-propagation}: The rewards of the facts in the tree are updated after each simulation. In this example, the algorithm finds that choosing \\textbf{F5} will lead to the path with the largest reward. Thus, it updates the reward $\\Delta$ along the path (\\textbf{F2}, \\textbf{F5}) during back-propagation.\n\\end{itemize}\n\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/example-step-2.png}\n \n\\end{figure}\n\n\nNow \\textbf{F5} is the node with the largest reward in the tree. Thus, it will be selected as the starting node in the next iteration. A new Distribution fact \\textbf{F7} is expanded during a similar search process. The algorithm stops when the goal is fulfilled. In the end, the path (\\textbf{F2}, \\textbf{F5}, \\textbf{F7}) with the highest reward is the best story in the tree.\n\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/example-step-3.png}\n \n\\end{figure}\n\n\nThe following figure shows the storyline of the final 3-fact data story. It illustrates that the trend of daily mortality in China was decreasing in March (Fact 1). In particular, the largest number, 42, occurred on March 2nd (Fact 2).
Finally, the distribution shows that Hubei was the most dangerous province (Fact 3).\n\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.82\\linewidth]{figures\/Example-Result.png}\n \n\\end{figure}\n\\subsection{Story Generation Algorithm}\nIn Calliope\\xspace, we introduce an intelligent algorithm that generates data facts from an input spreadsheet and threads them in a logical order to create a meaningful data story. However, designing such an algorithm is quite challenging. The story design space, formed by the collection of data facts that can be generated from the input spreadsheet, could be extremely large due to the huge number of possible combinations of the fact fields, even for a small dataset. It is almost impossible for the algorithm to generate all the facts first and then pick the important ones to build the story, as users cannot afford a long waiting time ({\\bf R2}). We addressed this issue by introducing an efficient logic-oriented searching algorithm based on Monte Carlo tree search~\\cite{browne2012survey,silver2016mastering}. It can efficiently explore a large space that contains numerous states via a searching process organized by a tree structure and guided by a reward function.\n\nAlgorithm~\\ref{alg:generation} provides an overview of the proposed technique. It takes a spreadsheet $\\mathcal{D}$, the desired story length $\\mathcal{L}$, and a set of parameters \\{$w_d, w_i, w_l$\\} as the inputs and outputs a story $\\mathcal{S} = \\{f_1,\\cdots,f_n\\}$. Here, the parameters are given by users and indicate their personal preferences for the generated story in terms of its fact diversity ($w_d$), data coverage rate ($w_i$), and logicality ($w_l$). Specifically, the algorithm first generates a set of initial facts of the three fact types, i.e., value, trend, and categorization, that are most frequently used as the starting points of a data story according to our preliminary survey, as shown in Fig.~\\ref{fig:statistic}(b). The most important one is selected as the initial fact in $\\mathcal{S}$, denoted as $f_0$, and used as the root of a search tree. Next, the algorithm expands $f_0$ in the tree by creating a set of data facts as its children that are logically relevant to $f_0$. The quality of each expansion is estimated by a reward function, based on which the search tree is gradually generated. \n\n\n\\begin{algorithm}[htb]\n \\label{alg:generation}\n \\SetAlgoLined\n \\SetKwInOut{Input}{Input}\n \\SetKwInOut{Output}{Output}\n \\Input{$data$}\n \\Output{$S$}\n \n $f_0$ $\\leftarrow$ InitialFact($data$)\\;\n $S$ $\\leftarrow$ [$f_0$]\\;\n \\While{length(S) \\textless max}{\n \\While{within computational time}{\n ($f_l$,$r_l$) $\\leftarrow$ TreePolicy($S$)\\;\n $reward$ $\\leftarrow$ DefaultPolicy($story(f_l, r_l)$)\\;\n Backpropagation(($f_l$,$r_l$), $reward$)\\;\n }\n $S$.insert(BestChild($S$))\\;\n }\n $S$ $\\leftarrow$ Aggregation($S$)\\;\n \\Return $S$\\;\n \\caption{Data Story Generation}\n\\end{algorithm}\n\n\\subsubsection{Logic Calculation}\n\n\nA \\textit{tree policy} determines the most promising node to visit in each iteration of MCTS. In data story generation, the tree policy is used to find the next fact to expand the story. To ensure that the facts in the story are meaningfully connected and to reduce the search space, we introduce coherence relations as the latent relations between adjacent facts to guide a logic-oriented tree policy in MCTS.
In discourse theory, coherence relations refer to informational relations that hold between natural language sentences \\cite{hobbs1985coherence}. According to our preliminary survey of data videos and related corpus studies \\cite{wolf2005representing, wellner2006classification}, we summarize 6 discourse relations in data stories in Table \\ref{table:relation}. A pair of facts can be described by three components $(f_a, r_{ab}, f_b)$, where $f_a$ and $f_b$ denote two adjacent facts connected by the relation $r_{ab}$.\n\n\\input{tables\/4-relation}\n\nWe regard these relations as constraints in the tree policy. Calliope\\xspace applies the rules defined for each relation to keep the connection between $f_a$ and $f_b$. In the following, when the current fact is $f_a$ and the relation is $r_{ab}$, the next fact $f_b$ can be derived from $f_a$ via a set of edit operations:\n\n\\begin{itemize}\n \\item {\\bf Similarity} modifies the $measure$\/$breakdown$\/categorical filter in the $subspace$ of $f_a$ to construct $f_b$. It juxtaposes two facts with the same measurement or the same scope of data, such as ``the confirmed count of COVID-19 in different provinces\".\n \\item {\\bf Temporal} adds\/removes\/modifies the temporal filter of the $subspace$ in $f_a$ to construct $f_b$. It relates two facts based on time, such as ``the confirmed count of COVID-19 in February and March\".\n \\item {\\bf Contrast} modifies the $measure$ or a filter in the $subspace$ of $f_a$. In addition, the result should be the opposite fact to $f_a$; thus we take it as $f_b$. In this relation, $f_a$ and $f_b$ are opposite to each other, such as ``the decreasing trend of the new confirmed count of COVID-19 in China and the increasing trend in other countries\". \n \\item {\\bf Causality} modifies the $measure$ to one that has a cause-effect relation with that of $f_a$; $f_b$ should be the effect of $f_a$, such as ``the confirmed count leads to the death count of COVID-19\". A causality discovery technique can be used to build a causal graph for the multi-dimensional data \\cite{schaechtle2013multi}. A directed link in the causal graph indicates a causal relation from one numerical field to another. Thus, we use the causal graph to modify the $measure$.\n \\item {\\bf Elaboration} adds a filter to the $subspace$ or adds a $focus$ from $f_a$ to $f_b$. It means $f_b$ is a more specific condition of $f_a$, such as ``the confirmed count of COVID-19 in China and the details in each province\".\n \\item {\\bf Generalization} removes a filter from the $subspace$ or removes the $focus$ from $f_a$ to $f_b$.
It means $f_b$ is a more general condition of $f_a$, such as ``the confirmed count of COVID-19 in China and the overall information in the world\".\n\\end{itemize}\n\nIn addition, we follow the statistics of the narrative logic frequently used after each fact type (Table~\\ref{tab:logic}) to determine the probability of each relation during the tree search. \n\n\\label{4.2.1}\n\n\\subsubsection{Reward Function}\n\nThe reward function ensures the quality of the story generation. As mentioned before, our system should generate a meaningful story with a clear logic and correct data. A meaningful story refers to a data story with a high importance score, which can be measured by the information entropy and the significance.\n\nBesides the story importance, the story should also provide a narrative logic and believable content. Furthermore, the content of the story should be diverse to avoid a boring presentation. Therefore, the final reward of a story also considers the narrative logic, data integrity, and diversity. We put these three criteria into a preference coefficient that can be adjusted by the user.\n\nThe total reward of a story is the product of the user preference coefficient and the importance score of the story. Bringing the parameters together, the final $reward$ is:\n\n\\begin{equation}\nreward = (\\alpha \\cdot Diversity + \\beta \\cdot Integrity + \\gamma \\cdot Logicality) \\cdot score(S)\n\\end{equation}\nwhere $\\alpha$, $\\beta$, and $\\gamma$ are weights in $[0,1]$ whose sum is equal to 1. Users can determine their values interactively in the storyline view. The interaction details will be introduced in Section~\\ref{5.4}.\n\nBased on information theory, we define the information entropy of a story $S$ as $H(S)$:\n\n\\begin{equation}\n H(S)=-\\sum_{i=1}^{n} P(f_i) \\log _{2}\\left(P(f_i)\\right) = \\sum_{i=1}^{n} { P(f_i) \\cdot I(f_i) }\n\\end{equation}\n\nWe add the significance of each fact into this formula to calculate the importance score of the story, $score(S)$, by substituting the fact importance into the entropy:\n\n\\begin{equation}\nscore(S) = \\sum_{i=1}^{n} { P(f_i) \\cdot S(f_i) \\cdot I(f_i) } = \\sum_{i=1}^{n} { P(f_i) \\cdot score(f_i) }\n\\end{equation}\n\n\\textbf{Logicality} represents the logic of the story.\nIn the logic-oriented tree search, we only consider the logic from the current fact to the next relation. By contrast, in the reward term, we consider the logic from a fact to its preceding relation. Logicality is the average of the conditional probabilities of the preceding relation $r_{i-1}$ given the fact $f_{i}$:\n\n\\begin{equation}\nLogicality = \\frac{1}{n-1}\\sum_{i=1}^{n-1}P\\left(r_{i-1} | f_{i}\\right)\n\\end{equation}\n\n\\textbf{Diversity} represents the diversity of the story.\nWhen there are more varied fact types in a story, the story is considered more diverse. We define $m$ as the number of fact types in the story and ${p}_{j}$ as the proportion of the $j$-th fact type in the story. The diversity is calculated as follows, where the former term encourages more unique fact types in the story and the latter term is the normalized Shannon diversity index, which keeps the fact types in roughly equal proportions:\n\n\\begin{equation}\nDiversity = \\frac{m}{\\min(n,10)} \\cdot \\frac{-\\sum_{j=1}^{m} {p}_{j} \\cdot \\ln \\left({p}_{j}\\right)}{\\ln (m)} \n\\end{equation}\n\n\\textbf{Integrity} represents the data integrity of the story.\nThe more completely the story covers the spreadsheet, the better the data integrity.
It is calculated by taking the union of the table cells covered by each fact and dividing it by the total number of table cells:

\begin{equation}
Integrity = \frac{count_{cell}(\bigcup\limits_{i=1}^{n} f_{i})}{count_{cell}(data)}
\end{equation}

\label{4.2.2}
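The following sketch illustrates how the reward could be assembled from these terms. It assumes each fact carries its probability $P(f_i)$, significance, fact type, and the set of table cells it covers, and that the narrative-logic statistics (Table~\ref{tab:logic}) are available as a lookup table keyed by (relation, fact type); all names are illustrative.

\begin{lstlisting}[caption={Sketch of the story reward computation}, basicstyle=\small, language=Python]
import math

def story_reward(story, relations, total_cells, logic_stats,
                 alpha, beta, gamma):
    # story: list of fact dicts; relations[i] connects story[i] and story[i+1].
    n = len(story)
    # score(S): sum of P(f) * Sig(f) * I(f), with I(f) = -log2(P(f))
    score = sum(f["prob"] * f["sig"] * -math.log2(f["prob"]) for f in story)
    # Logicality: average P(r_{i-1} | f_i) over adjacent pairs, read from
    # the narrative-logic statistics (an illustrative lookup table)
    logicality = sum(logic_stats[(relations[i - 1], story[i]["type"])]
                     for i in range(1, n)) / (n - 1)
    # Diversity: unique-type ratio times the normalized Shannon index
    types = [f["type"] for f in story]
    m = len(set(types))
    shannon = (-sum((types.count(t) / n) * math.log(types.count(t) / n)
                    for t in set(types)) / math.log(m)) if m > 1 else 0.0
    diversity = m / max(n, 10) * shannon
    # Integrity: fraction of all table cells covered by the union of facts
    covered = set().union(*(f["cells"] for f in story))
    integrity = len(covered) / total_cells
    return (alpha * diversity + beta * integrity + gamma * logicality) * score
\end{lstlisting}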
\subsection{Story Aggregation}

So far, each piece of the generated data story conveys a single fact. However, compound facts that consist of multiple primitive facts are also commonly used \cite{amar2005low, chen2009toward}. For instance, ``The trend of sales is decreasing and the maximum of sales is in 2007'' is a compound fact that involves a trend fact and an extreme fact. To keep the story concise, Calliope\xspace provides an option for users to aggregate the story by automatically merging single facts into compound ones.

The story aggregation process consists of three major steps: (1) similarity measuring, (2) hierarchical clustering, and (3) fact merging.

First, we measure the similarity between data facts using the overlap similarity for categorical data \cite{boriah2008similarity}. The overlap similarity of the $type$ is 0 when the types of two facts do not match, and 1 when they match. For the $measure$ and the $breakdown$, the similarity is the proportion of fields in common. For the $subspace$ and the $focus$, we use the fraction of the union covered by the intersection. The total similarity is the sum of these five results (a code sketch is given at the end of this subsection).

Then, we aggregate the data facts into a hierarchical structure based on the similarity using a hierarchical clustering technique. After clustering, pairs of leaf nodes, such as those marked by the dashed box in the figure, are the candidate facts to be merged. These pairs of facts are ranked according to their similarity: the more similar two facts are, the more likely they are to be merged. Calliope\xspace provides a slider bar that allows users to control the level of story aggregation as a percentage, where a value of 0\% presents single facts and a value of 100\% merges all pairs of data facts.

To present a compound fact, Calliope\xspace merges facts in two aspects: visualization and natural language. For simplicity, we define a rule table for each combination of two fact types. In the table, we choose the chart type of a compound fact according to the intersection of the chart candidates of each primitive fact type. We also merge the fact tuples that have no conflicts with each other. If there is no common chart type between two facts, or if the fact tuples conflict, we simply place the two visualizations in juxtaposition. For example, Fig.\ref{fig:aggregation} shows a trend fact and an extreme fact that share the same $measure$, $subspace$, and $breakdown$. The difference is that the extreme fact has an additional $focus$ that highlights the maximum value. Based on our rule table, both facts can use a line chart to present their information; as a result, the compound fact uses the line chart shown in Fig.\ref{fig:aggregation}. For the description of a compound fact, we join together the two sentences generated from both primitive facts.

\begin{figure}[htb]
    \centering
    \includegraphics[width=\linewidth]{figures/aggregation.png}
    \caption{A compound fact that merges a trend fact and an extreme fact.} 
    \label{fig:aggregation}
\end{figure}

Up to this point, Calliope\xspace does not consider the perception cost during fact merging \cite{hullman2013deeper}. Besides, the natural language of a compound fact may contain redundancy, which could be resolved by NLP techniques such as coreference resolution \cite{ng2002improving}. We leave these issues to future work.

\label{4.3}
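As a reference for step (1), the sketch below computes the pairwise fact similarity described above. Interpreting the ``proportion of fields in common'' as intersection over union is our assumption; the five components are summed as stated.

\begin{lstlisting}[caption={Sketch of the pairwise fact similarity}, basicstyle=\small, language=Python]
def fact_similarity(fa, fb):
    """Sum of the five per-component similarities between two facts."""
    def common(a, b):   # proportion of fields in common (assumed: |A&B|/|A|B|)
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if (a | b) else 1.0
    def covered(a, b):  # fraction of the union covered by the intersection
        a = {(c["field"], c.get("value")) for c in a}
        b = {(c["field"], c.get("value")) for c in b}
        return len(a & b) / len(a | b) if (a | b) else 1.0
    s_type = 1.0 if fa["type"] == fb["type"] else 0.0
    s_measure = common([m["field"] for m in fa["measure"]],
                       [m["field"] for m in fb["measure"]])
    s_breakdown = common([g["field"] for g in fa["breakdown"]],
                         [g["field"] for g in fb["breakdown"]])
    s_subspace = covered(fa["subspace"], fb["subspace"])
    s_focus = covered(fa["focus"], fb["focus"])
    return s_type + s_measure + s_breakdown + s_subspace + s_focus
\end{lstlisting}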
\section{Data Story Editor}
\label{sec:editor}

\begin{figure}[t]
    \centering
    \includegraphics[width=\linewidth]{figures/UI.png}
    \caption{The story editor of the Calliope\xspace system consists of three major views: (1) the storyline view for story configuration, generation, and storyline editing, (2) the fact view for fact editing, and (3) the story visualization view for visual data story preview and sharing.} \label{fig:ui}
\end{figure}

In this section, we describe the design details of each view in the data story editor and the corresponding interactions of the system. 

\subsection{User Interface}
The story editor, as shown in Fig.~\ref{fig:ui}, consists of three major views: the storyline view (Fig.~\ref{fig:ui}-1), the fact view (Fig.~\ref{fig:ui}-2), and the story visualization view (Fig.~\ref{fig:ui}-3).
In the storyline view, a user can upload a spreadsheet, set the story generation goal, and adjust the reward function in a group of configuration panels (Fig.~\ref{fig:ui}-1(a)). The generated data facts are shown in a row (Fig.~\ref{fig:ui}-1(b)), in which the user can remove a fact or change the generated narrative order based on his/her own preferences. Each fact is visualized by a chart and captioned by a generated text description (\textbf{R3}). When a fact is selected, its data details, importance measures, and visual and textual representations are shown in the fact view, where they are also editable (\textbf{R4}). The generated data story can be visualized in the story visualization view through three visualization modes: (1) the storyline mode, (2) the swiper mode, and (3) the fact sheet mode, which are respectively designed for representing the story on laptops/tablets, on smart phones, and in printouts to facilitate flexible story communication and sharing (\textbf{R5}).

Consider the following usage scenario of the Calliope\xspace system. Jean is a data journalist. Her primary job is to trace important news events and to collect, analyze, and visualize the corresponding data to write news stories supported by the data. Her job frequently requires a quick response to emergencies, but data analysis and visualization usually take days to finish. She uses Calliope\xspace to help her with these tasks. Recently, she has been monitoring the public health emergency caused by the propagation of the COVID-19 virus in China. A dataset containing everyday infections and deaths in different provinces and cities is collected. She uploads the data into the Calliope\xspace system, sets the story length to 6 (i.e., at most 6 data facts) in the story configuration panel, and clicks the generate button. After only a few seconds, the system automatically outputs a sequence of related data facts, represented by charts and captioned by text, in a logical order in the storyline view. She clicks a generated fact to edit it in the fact view, where she can change the generated description, the chart and fact types, and the data details based on her own observations and preferences. After a few edits on the story facts, Jean switches the visual representation to the swiper mode in the story visualization view and shares the result with her colleagues via a link, by clicking the share button, for further discussion.

\subsection{Storyline View}
The storyline view (Fig.~\ref{fig:ui}-1) contains two panels: the story configuration panel (Fig.~\ref{fig:ui}-1(a)) and the story pieces panel (Fig.~\ref{fig:ui}-1(b)). The story configuration panel is where users upload data and tweak the parameters of the generated story, such as the story length, the chart diversity, and the tolerable time limit of the generation. The chart diversity controls how many chart types are used for each fact type in the generated story, ranging from zero to one. In particular, a value of zero means that each fact type maps to only one chart type in the generated result. When users increase the value with the slider, more chart types show up for each fact type in the story.

In the reward view within the story configuration panel, users can also set up their preferences for the reward of the story generation algorithm, i.e., the three key parameters logicality, diversity, and integrity. A chart with one movable circle and three fixed circles encodes the relation between the generated story and the three key parameters. Here, the circle $S$ represents the story, the three fixed circles representing the parameters span a circular area, and the distances between $S$ and the parameter circles capture the weights of the parameters. While moving the circle $S$, the parameter circles that are closer to $S$ obtain higher weights and thus more attention during generation.

After clicking the ``Generate Story'' button in the configuration panel, Calliope\xspace yields a sequence of data facts in a logical order and lines them up successively in the story pieces panel. Each piece in this panel shows a data fact with its visualization and description. For easy revision, Calliope\xspace allows users to add, remove, or re-arrange these fact pieces with simple interactions, as well as to edit the statements of the facts. When a specific fact piece is selected, its details show up in the fact view, where users can modify it to meet their needs. These actions are applied in all the other views accordingly to reduce repetitive modification.

\subsection{Fact View}
The fact view (Fig.~\ref{fig:ui}-2) is a detailed view that not only represents the selected story piece but also enables users to customize the fact by changing its configuration. The configuration items of a fact include the \textit{fact type}, \textit{visualization}, \textit{measure}, \textit{subspace}, \textit{breakdown}, and \textit{focus}. Different fact types require specific combinations of these configuration items according to Table~\ref{tab:fact}. Through the adjustment of these configuration items, the natural language (Section~\ref{5.2.1}) and the visualization (Section~\ref{5.2.2}) of the fact are regenerated in real time. The information quantity and the significance score of the selected fact are also updated accordingly.

\subsubsection{Facts to Natural Language}
Each fact is equivalent to a context-free grammar that can be used to generate natural language. The description text of a fact is generated by filling the slots of the sentence template defined for its fact type with the corresponding tuple values and derived parameters.
\label{5.2.1}
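The following sketch illustrates this template-filling step for two fact types, reusing the slot style of our sentence templates; the helper structure is an illustrative assumption rather than Calliope\xspace's actual implementation.

\begin{lstlisting}[caption={Sketch of template-based fact description}, basicstyle=\small, language=Python]
# One sentence template per fact type, with {{slot}} placeholders
# (only two types are shown here for brevity).
TEMPLATES = {
    "trend": "The trend of {{aggregate}} {{measure}} over {{breakdown}}"
             " is {{parameter}} when {{subspace}}.",
    "proportion": "The {{focus}} accounts for {{parameter}} of the"
                  " {{aggregate}} {{measure}} when {{subspace}}.",
}

def describe(fact):
    """Fill the template of the fact type with the tuple values."""
    slots = {
        "aggregate": fact["measure"][0]["aggregate"],
        "measure": fact["measure"][0]["field"],
        "breakdown": fact["breakdown"][0]["field"] if fact["breakdown"] else "",
        "subspace": " and ".join("{} is {}".format(c["field"], c["value"])
                                 for c in fact["subspace"]),
        "focus": ", ".join(str(c["value"]) for c in fact["focus"]),
        "parameter": str(fact.get("parameter", "")),
    }
    text = TEMPLATES[fact["type"]]
    for name, value in slots.items():
        text = text.replace("{{" + name + "}}", value)
    return text
\end{lstlisting}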
\subsubsection{Facts to Visualization}

Based on our preliminary survey, we design a rule-based method that matches fact types with different chart types. To ensure visual diversity, we add alternative designs of each basic chart based on the statistical results. For example, the donut chart and the half-donut chart are provided as variants of the basic pie chart for users to choose from.

The visualization changes accordingly with the adjustment of the fact configuration. Among the configuration items, the \textit{subspace} specifies the range of data to be visualized, the \textit{measure} and the \textit{breakdown} respectively correspond to the measure and the dimension that make up the chart, and the \textit{focus} determines the highlighted part of the chart.
\label{5.2.2}

\subsection{Story Visualization View}
The story visualization view provides a variety of visualization forms to represent the generated data story in different application scenarios. In particular, a summarization view is first provided to give a textual briefing of the story that helps users obtain the data insights at a glance. It shows the data coverage rate, the total number of data facts in the story, and a textual description of the story.

To facilitate efficient story exploration on tablets and smart phones, a storyline view and a mobile view are respectively developed. The visualizations and textual descriptions of the facts in the story are grouped together and horizontally aligned in a row in the storyline view, and are shown one at a time in the mobile view. Users can swipe on the touch screen to scroll the horizontal storyline in the storyline view, or to switch to another fact in the mobile view, to navigate through the story.

Finally, a fact-sheet view is also introduced, in which the visual and textual representations of all the facts are organized into a poster that can be directly downloaded and printed. This view supports easy offline usage of the generation results. A layout algorithm is designed to enable an efficient and aesthetic representation of the story facts. Formally, the algorithm optimizes the following objective:

\begin{equation}
    f = f_s + f_d
    \label{eq:optimize}
\end{equation}
where $f_s$ indicates how well the area of each fact in the fact sheet is proportional to its importance score, as defined in Eq.(\ref{eq:opti_area}), and $f_d$ measures whether the layout of the fact sheet matches the guideline of low intra-row distances and high inter-row distances, as defined in Eq.(\ref{eq:opti_layout}).
\begin{equation}
    f_s = \frac{\sum_{i=1}^{n} s_i \times a_i }{\sum_{i=1}^{n} a_i }
    \label{eq:opti_area}
\end{equation}
where $n$ is the number of facts in the generated story, $s_i$ is the importance score of the $i$-th fact derived from the story generation algorithm, and $a_i$ is the area the $i$-th fact occupies. With $f_s$, we expect that facts with higher scores occupy larger areas, so that they attract enough attention from readers, as they carry more information.

Here we denote the layout of a fact sheet with $n$ facts as $C = \{C_1, C_2, \cdots, C_k\}$, where $k$ is the number of rows in the fact sheet.
\n\\label{StoryView}\n\n\\subsection{Interactions}\n\nCalliope\\xspace provides a set of interaction methods to support data story generation, fact exploration, story visualization, and publishing on the web.\n\n{\\bf Data story generation.} Calliope\\xspace allows users to fine-tune several details of the story in the story configuration panel within the storyline view. Users can drag sliders to pick, from a given range, the length of the generated story, the chart diversity for each fact type, and the time limit of the generation. To set the reward preference for the generation, users can drag the circle $S$, which represents the story, in the reward panel; the distances between the story circle and the three parameter nodes determine the corresponding weights (see the sketch below). After all parameters are set, users can click the ``Generate Story\" button and obtain the story from Calliope\\xspace within a few seconds. Given the generated story, Calliope\\xspace allows users to add a new fact piece or remove existing fact pieces by clicking buttons, or to re-order the fact pieces with a simple drag-and-drop interaction in the story pieces panel. Editing the fact descriptions is also enabled in the story pieces panel. All actions are applied to all the other views accordingly, to avoid repetitive modifications.
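One plausible mapping from the dragged position of $S$ to the three reward weights, sketched below in Python, weights each parameter by the inverse of its distance to $S$ and normalizes the results. The node names and positions, and the inverse-distance scheme itself, are assumptions for illustration; the exact mapping used by Calliope\\xspace may differ.\n\n\\begin{verbatim}\nimport math\n\n# Hypothetical positions of the three parameter nodes in the\n# reward panel (on a unit circle); names and scheme are\n# illustrative assumptions.\nNODES = {\"logicality\": (0.0, 1.0),\n         \"diversity\": (-0.87, -0.5),\n         \"integrity\": (0.87, -0.5)}\n\ndef reward_weights(s_pos, eps=1e-6):\n    # Closer nodes receive larger weights; eps avoids a\n    # division by zero when S sits exactly on a node.\n    inv = {name: 1.0 / (math.dist(s_pos, pos) + eps)\n           for name, pos in NODES.items()}\n    total = sum(inv.values())\n    return {name: v / total for name, v in inv.items()}\n\n# Dragging S close to the logicality node yields a high\n# logicality weight:\nprint(reward_weights((0.0, 0.8)))\n\\end{verbatim}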
{\\bf Modifying data fact.} The fact view enables users to customize the currently selected fact by changing the fact's configuration. \nThe fact view provides selectors for the \\textit{fact type}, \\textit{visualization}, \\textit{measure}, \\textit{subspace}, \\textit{breakdown}, and \\textit{focus}. When these configuration items are adjusted, the visual and textual representations of the current fact are modified accordingly. \n\n{\\bf Choosing visualization form and publishing.} \nCalliope\\xspace allows users to switch among all the visualization forms described in Sec.~\\ref{StoryView} in the story visualization view.\nBy clicking the ``share\" button, Calliope\\xspace automatically generates an online access link, as well as a code snippet that can be embedded in other web pages, for the generated data story in its current visualization form. When the user clicks the copy button, the online link or the code snippet is copied accordingly. Anyone who visits the link can access the data story, and users can easily embed the data story into their own web pages by inserting the generated code snippet.\n\n\\label{5.4}\n\n\\section{Evaluation}\nThe evaluation of the Calliope\\xspace system consists of two parts, which respectively assess the quality of the generated story logic and of the visual data story content.\n\nWe evaluated Calliope\\xspace via two controlled experiments that examine the generated logic, and via case studies with 10 expert users followed by a series of expert interviews to collect their feedback.\n\n\\begin{figure*}[tb]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/case23.png}\n \\vspace{-1.5em}\n \\caption{Two alternative visualization modes, (a) the \\textit{\\textbf{swiper}}\\xspace mode and (b) the \\textit{\\textbf{factsheet}}\\xspace mode, which are developed to facilitate story exploration, sharing, and communication via mobile phones and printouts.} \n \\label{fig:vmodes}\n \\vspace{-1em}\n\\end{figure*}\n\n\\subsection{Evaluation of the Generated Logic}\nTwo controlled experiments were performed to evaluate the generated story logic. The first compares the fact orders produced by Calliope\\xspace with those produced by human users; the second compares the stories generated by Calliope\\xspace with the fact sheets produced by an existing baseline.\n\nIn the first experiment, we randomized the order of the data facts in each visual data story generated by Calliope\\xspace and asked a group of users to restore the order of the data facts based on their own observations and understanding. We then checked the consistency between the human-generated orders and the order produced by Calliope\\xspace. We recruited 20 participants (17 female) aged from 22 to 30 (mean 26). 11 of them had data analysis experience and 14 had basic knowledge of data visualization. 3 participants had storytelling experience and 16 had design skills. Participants took 19.5 minutes on average ($SD=10$) to finish the study.\n\n\\subsubsection{User Study \\uppercase\\expandafter{\\romannumeral1}}\nThe first study evaluates the logicality of the data stories generated by Calliope\\xspace by comparing the fact orders produced by Calliope\\xspace and by humans.\n\n\\textbf{Hypotheses} \nSince we apply a logic-oriented MCTS to explore the space of the input data and generate the data facts of a story, we hypothesize that:\n\\begin{enumerate}\n\\itemsep -1mm\n\\item[{\\bf H1}] The logical order of the facts in a story generated by Calliope\\xspace correlates strongly with users' logical ordering of the same facts.\n\\end{enumerate}\n\n\\textbf{Procedure and Task} We collected three datasets on different topics to generate data stories for the study. The \\textit{CarSales} dataset (275 rows, 4 columns) describes the sales of different automobile brands from 2007 to 2011; the \\textit{COVID-19} dataset (903 rows, 6 columns) provides worldwide epidemic data from Mar. 1, 2020 to Mar. 21, 2020, recording the daily numbers of infections, deaths, and cures; the \\textit{Startup Failures} dataset (1234 rows, 6 columns) reports on companies that failed during the tide of the new economy in China from 2010 to 2019, including each company's industry, the fund status it ended up with, and the reasons for its failure. For each dataset, we uploaded it to Calliope\\xspace and generated 4 stories of length 6, and then shuffled the fact order of each story. \n\nIn the study, we presented a brief introduction of each dataset before the corresponding questions.
For each question, participants were asked to re-arrange the facts of an out-of-order story, based on the information in the charts and descriptions, so as to form a logical and reasonable story. After answering all 12 re-ordering questions, participants were asked to write down their ordering criteria and their personal information. \n\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/userstudy1.png}\n \\vspace{-2em}\n \\caption{The evaluation results of the story logic: (a) the average Kendall's $\\tau_b$ values and standard errors for each dataset; (b) the average Kendall's $\\tau_b$ values and standard errors of Human and Calliope\\xspace; (c) the means and standard errors of the participants' ratings of the logic of DataShot and Calliope\\xspace on a 5-point Likert scale.} \n \\label{fig:userstudy_tau}\n \\vspace{-1.7em}\n\\end{figure}\n\n\\textbf{Results}\nTo measure the correspondence between the rankings of Calliope\\xspace and those of the participants, we used Kendall's $\\tau_b$ correlation~\\cite{kendall1945treatment}, whose value lies between $-1$ (reversed order) and $1$ (identical order). We first computed the Kendall's $\\tau_b$ scores between the orderings of the participants and of Calliope\\xspace and then averaged them for each dataset, as shown in Fig.~\\ref{fig:userstudy_tau}(a). The results indicate that the two rankings correlate, especially for the latter two datasets, with which the participants were more familiar. \n\nTo understand whether Calliope\\xspace truly captures logic and generates meaningful stories, we invited an expert who is familiar with the three datasets to assign a reasonable order to the facts of each story. We then computed the Kendall's $\\tau_b$ correlation between the participants' rankings and the expert's rankings on the 12 questions, marked as \\textit{Human}, and the Kendall's $\\tau_b$ correlation between Calliope\\xspace's rankings and the expert's rankings, marked as \\textit{Calliope\\xspace}. We conducted an unpaired t-test to examine the difference between the $\\tau_b$ scores of the two groups. The result shown in Fig.~\\ref{fig:userstudy_tau}(b) indicates that there is no significant difference ($t(22)=.097, p=.92$) between the participants ($M=.50, SD=.13$) and Calliope\\xspace ($M=.55, SD=.17$) when comparing their rankings with those of the expert. Hence, \\textbf{H1} is accepted.
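As a concrete illustration, the agreement between two orderings of the same six facts can be computed with SciPy, which returns the tie-corrected $\\tau_b$ variant by default; the two orderings below are made-up examples.\n\n\\begin{verbatim}\nfrom scipy.stats import kendalltau\n\n# Sketch: agreement between Calliope's fact order and a\n# participant's reconstructed order (made-up data). Facts\n# are labeled 0..5 by their position in Calliope's story.\ncalliope_order = [0, 1, 2, 3, 4, 5]\nparticipant_order = [0, 2, 1, 3, 4, 5]  # one adjacent swap\n\n# kendalltau computes the tau-b variant by default; 1 means\n# an identical order and -1 a fully reversed one.\ntau, p_value = kendalltau(calliope_order, participant_order)\nprint(round(tau, 3))  # 0.867 for a single adjacent swap\n\\end{verbatim}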
When asked about their criteria for a logical story, the participants' answers can be summarized into four aspects. \nFirst, most of the participants valued the consistency of stories. \\textit{``I would pay attention to the key words in the facts, and place those with similar key words together\"}, one participant said. Since our stories are backed by data, some participants also arranged the facts by tracking the relevant data scopes. \nSecond, participants tried to organize the data facts using common discourse relations. Among the answers (9 of 20), the overall-to-detail structure was mentioned most frequently, in which the story gives the whole picture at the beginning, followed by a series of details. \nThird, participants interpreted the semantic content of each fact. One commented that \\textit{``I assigned the orders through the emotions from negative to positive for the COVID-19 dataset, first reporting the number of infections, then deaths and cures.\"}, while another said that \\textit{``I followed the chronological order and put facts depicting infections of COVID-19 ahead of those with deaths situation.\"} \nFinally, since Calliope\\xspace presents data stories with visualizations, some participants (5 of 20) mentioned that they also took the visualizations into account. One commented that one of his judgements was \\textit{``the correlation of types of charts in each fact\"}.\n\n\\subsubsection{User Study \\uppercase\\expandafter{\\romannumeral2}}\nTo further evaluate the quality of the output of Calliope\\xspace, we conducted a second user study to validate whether the data stories generated by our approach are more logical and reasonable than those of existing work. For comparison, we chose DataShot~\\cite{wang2019datashot} as the baseline, which automatically generates sheets of ``random facts\" from tabular data. \n\n\\textbf{Hypotheses} \nSince the facts in our stories follow a logical order and are displayed sequentially in all visualization modes, we hypothesize that:\n\\begin{enumerate}\n\\itemsep -1mm\n\\item[{\\bf H2}] The visual data story generated by Calliope\\xspace reads significantly better than a story on the same topic whose facts are in random order.\n\\end{enumerate}\n\n\\textbf{Participants} In this study, we invited 20 participants (14 female) whose ages range from 23 to 30 (mean: 25). All the participants are students majoring in design, and 13 of them have experience in designing infographic posters. They took 13 minutes on average ($SD=12$) to complete the study.\n\n\\textbf{Procedure and Task} Our study followed a 2 (visualization tool) $\\times$ 3 (dataset) mixed design. We collected the three datasets presented in DataShot and uploaded them to Calliope\\xspace to generate stories in the \\textit{\\textbf{factsheet}}\\xspace mode. This produced 6 fact sheets in total: 3 collected from the DataShot paper and 3 generated by Calliope\\xspace. To avoid confounding factors, we unified the design style of the fact sheets from both tools. We also marked the sequential number of each fact in every fact sheet to guide the participants. Finally, we randomized the order of the 6 fact sheets in each questionnaire.\n\nEach participant performed 6 trials in the questionnaire. In each trial, participants were shown one fact sheet and asked to read it following the sequential order from left to right and top to bottom. After each trial, they rated the logic of the fact sheet on a 5-point Likert scale ranging from ``Very Unclear\" to ``Very Clear\".\n\n\\textbf{Results} We analyzed a total of 120 user ratings and conducted an unpaired t-test between DataShot and Calliope\\xspace. The result (Fig.~\\ref{fig:userstudy_tau}(c)) shows a significant difference ($t(118)=3.99, p<.01$) between the ratings of DataShot ($M=3.02, SD=.95$) and Calliope\\xspace ($M=3.72, SD=.98$), which means that the stories generated by Calliope\\xspace have a clearer logic for reading and comprehension than sheets of facts in random order. Therefore, \\textbf{H2} is accepted.
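The significance test above can be reproduced with SciPy in a few lines; the rating lists below are placeholders for the 60 ratings collected per condition in the study.\n\n\\begin{verbatim}\nfrom scipy.stats import ttest_ind\n\n# Sketch: unpaired two-sample t-test comparing the 5-point\n# Likert ratings of the two tools (placeholder data).\ndatashot_ratings = [3, 4, 2, 3, 3, 4, 2, 3, 4, 3]\ncalliope_ratings = [4, 4, 3, 5, 4, 3, 4, 5, 3, 4]\n\nt_stat, p_value = ttest_ind(calliope_ratings,\n                            datashot_ratings)\nprint(round(t_stat, 2), round(p_value, 3))\n\\end{verbatim}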
\\subsection{Case Studies and Expert Interview}\n\nTo evaluate the utility of our system, we conducted in-depth interviews with ten experts. Given the application fields of the system, we chose experts from three relevant areas: 4 professional data journalists, 3 data analysts from a BI department, and 3 visualization researchers. In the following, these 10 participants are denoted as \\textbf{P1}, \\textbf{P2}, ..., \\textbf{P10}, where \\textbf{P1-P4} are the data journalists, \\textbf{P5-P7} are the data analysts, and \\textbf{P8-P10} are the visualization researchers.\n\n\\textbf{Interview methodology} \nEach interview started with a 10-minute system introduction, in which we presented the system interface and each module to the interviewee. \nAll the participants were then asked to explore our system freely and to generate a high-quality data story from a provided dataset within 15-20 minutes. During the exploration process, we promptly provided hints whenever a participant had a question about Calliope\\xspace. After finishing the exploration and completing a data story, each participant was asked to use the sharing feature to generate a link as the delivered work.\nWe then conducted a brief interview with the participant covering the following three aspects: the generated data story, the visual and interaction design, and the overall system. \nAt the end of the interview, the participants were asked to fill out a questionnaire assessing these three aspects on a 5-point Likert scale.\nOn average, each interview lasted approximately one hour.\n\n\\textbf{Case Studies}\nWe collected all the visual data stories authored by the participants. Here, we present three of them, based on different spreadsheets: CarSales, COVID-19, and Startup Failures.\n\n\\textit{CarSales.} \\textbf{P7} (data analyst, male, 34) generated and authored the story about CarSales shown in Fig.~\\ref{fig:vmodes}(a). During the authoring process, he acknowledged that the generated story was well structured, easy to understand, and provided a good overview of the dataset. He therefore used the generated story as the backbone and edited the facts to convey his insights about the data. The final story starts with the sales trend from 2007 to 2011, whose decrease he attributed to the financial crisis. The story then shows the ranking of sales for each brand, with the top brand, Ford, accounting for 25.04\\% of the total sales. After that, it displays the ranking of sales for the different vehicle models and a comparison between the top 1 (SUV) and the top 3 (Subcompact) models. At the end of the story, P7 noted that the Subcompact model is the only one whose sales did not recover after the financial crisis. After finishing the story, he shared it in the \\textit{\\textbf{swiper}}\\xspace mode, which he considered the best way to spread it quickly via phones. \n\n\\textit{COVID-19.} This story was generated and authored by \\textbf{P4} (data journalist, female, 25) and is shown in Fig.~\\ref{fig:teaser}. P4 is an experienced data journalist. Because her daily work requires cooperation with designers and engineers, she appreciated that the system lets her create data news on her own. After the story generation, she used the data story editor to reorder the facts, based on her background knowledge, to make the story more attractive. The story first presents the distribution of death cases in the provinces of China during March and points out that the maximum, 42, occurred on March 2. It then displays the distribution of death cases geographically, followed by the detailed data for Hubei.
The story then concludes with the total death and cured cases in March, indicating that the situation in China had turned for the better. She decided to represent the story in the \\textit{\\textbf{storyline}}\\xspace mode, which can easily be embedded into websites for news reports and read in web browsers.\n\n\\textit{Startup Failures.} \\textbf{P10} (visualization researcher, female, 29) was interested in the spreadsheet about startup failures in the past 10 years, about which she had no prior background knowledge. Instead of exploring the data first, she started directly with the story generation in the Calliope\\xspace system. \nThe generated story begins with the trend and the distribution of failed startups in terms of time and space, respectively, then presents the ranking of the survival time of failed startups in each industry and the situation of failed startups in each fund status. An outlier is also detected, namely that the no-funding status is abnormal among all fund statuses. At the end, the story shows the distribution of the failure reasons of the startups. After viewing the initially generated story, she pointed out that the logic and the wording of the generated story were very clear. She then checked the schema of the data table and found that the generated story covered all aspects of the data. She was thus satisfied with the story, made no significant changes, and only adjusted the visualizations of some facts. Finally, she chose to print the story in the \\textit{\\textbf{factsheet}}\\xspace mode, as shown in Fig.~\\ref{fig:vmodes}(b). \n\n\\textbf{Participant Feedback} \nAfter the interviews, we summarized all the participants' feedback and collated the results of the user ratings of Calliope\\xspace.\nFig.~\\ref{fig:interview} shows the results of the 5-point Likert scale ratings, which express how much the users agree or disagree with 8 statements on 3 aspects of our system. Generally speaking, all the expert users gave promising and positive feedback on Calliope\\xspace.\n\n\\begin{figure}[tbh]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/interview.png}\n \\vspace{-1.5em}\n \\caption{The interview ratings of Calliope\\xspace on a 5-point Likert scale.} \n \\label{fig:interview}\n \\vspace{-0.5em}\n\\end{figure}\n\n\\textit{\\textbf{The generated data story.}} \nAll the experts agreed that the data stories generated by Calliope\\xspace can effectively convey the necessary information in the data. \n\\textbf{P10} commented, \\textit{``Each part of the story has the information that I want to present, and the story has both the overall content and the details. It's amazing!\"}.
\nRegarding data exploration, the experts commented that \\textit{``This system can effectively help users explore data in order to discover intriguing data facts\"} (\\textbf{P3}) and that \\textit{``The system automatically generates a structured framework of data stories about the dataset, helping users to explore and understand the data easily\"} (\\textbf{P7}).\nMeanwhile, participants commented on the logic of the generated story: \\textit{``The logic of the generated story is very clear, especially when the logicality attribute in the reward panel was adjusted to a larger value\"} (\\textbf{P9}), and \\textit{``The logical order of the generated story can help the readers get to the point\"} (\\textbf{P1}).\nMoreover, the experts also reported that the current presentation of the data facts is meaningful and easy to understand.\n\\textbf{P5} commented, \\textit{``Instead of presenting multi-dimensional data in complex visualizations, the system arranges the appropriate dimensions and measures, and uses a series of simple visualizations to make the generated story easier to understand.\" }\n\n\\textit{\\textbf{Visual and interaction design.}}\nAll the experts provided positive comments on the visual and interaction design of Calliope\\xspace.\nRegarding the three visual forms of the data story, the expert users acknowledged that they are very useful and can be selected for different scenarios. \nThe experts commented: \\textit{``Each fact in the \\textit{\\textbf{factsheet}}\\xspace mode is sized according to how important it is. Therefore, it makes sense that this layout allows people to see the relatively important information of the data story at a glance. It's also very nice that the \\textit{\\textbf{factsheet}}\\xspace can be printed as a PDF\"} (\\textbf{P3}); \\textit{``The \\textit{\\textbf{swiper}}\\xspace mode shows a fact per page, which is suitable for mobile devices\"} (\\textbf{P9}); \\textit{``The \\textit{\\textbf{storyline}}\\xspace mode is a great form for presenting stories in web pages\"} (\\textbf{P7}).\nAs for the storyline view and the fact view, \\textbf{P9} appreciated the interaction between these two views. He commented, \\textit{``I can roughly explore the whole data story through the storyline view, and then further explore the data details of each field of the selected fact through the fact view.\"}\n\\textbf{P3} also expressed her thoughts: \\textit{``The storyline view enables me to add, delete, reorder and modify generated data facts, which is really convenient for me to make a data story according to my preferences\"}.\nAt the same time, two experts (\\textbf{P3} and \\textbf{P9}) mentioned that the adjustment of the reward configuration can help users with different needs to generate stories with different emphases: \\textit{``For example, data journalists like me may focus more on the logicality of a data story, while data analysts may prefer integrity because they may want to cover as much data as possible\"} (\\textbf{P3}).\n\n\\textit{\\textbf{The overall system.}}\nAll 10 experts gave an overall assessment of Calliope\\xspace from the perspective of their areas of expertise.\n\\textbf{P1}, a data journalist with four years of experience, commented that the system can help users generate stories quickly and intelligently, compared to traditional journalism, which uses templates to draw charts. \n\\textbf{P2} also appreciated the effectiveness of Calliope\\xspace.
She commented, \\textit{``This system is very useful and gives a lot of effective advice, especially for those who have no experience in data visualization or data journalism.\"}\nAnother data journalist, \\textbf{P4}, said that the system enables users to quickly create graphics for easy-to-read articles with little interaction, so there is no need for designers to do the typesetting. She also noted that the charts provided in Calliope\\xspace, such as line charts, bar charts, and pie charts, are commonly used in journalism.\nWhile the data journalists offered insights on our system from a news perspective, the data analysts provided insights from the perspective of data analysis.\n\\textbf{P7} commented, \\textit{``With this system, analysts can quickly generate stories that help quickly summarize the information of the data without much exploration or insight. It is really convenient in simple application scenarios.\"}\nIn addition, the data visualization experts also put forward their own views.\n\\textbf{P9} commented, \\textit{``The system lays out the overall story frame of a dataset, giving us a direction on how to create a data story. It is worth mentioning that the visual design is also very good.\"}\n\\textbf{P10} said, \\textit{``The overall system is very useful, especially for people who are not professional in data visualization.\"}\n\n\\textbf{Discussions}\nIn spite of the positive feedback mentioned above, some suggestions for our system were also put forward, which can be summarized in the following aspects:\n\\begin{itemize}[leftmargin=10pt,topsep=2pt]\n\\itemsep -.5mm\n \\item {\\textbf{Story optimization.}} \n Both \\textbf{P2} and \\textbf{P9} said that the automatically generated text may be suitable for simple application scenarios or for an initial story exploration, whereas formal scenarios, such as professional news, place high demands on expression. They suggested that the wording of the stories could be more diverse and vivid. \n Journalists are more concerned about the specific cases behind the data and hope to dig deeper into the underlying information.\n Meanwhile, the data analysts pointed out that the data stories only describe the data facts, without further insights that could guide users on how to take action.\n These concerns indicate the gap between generated and real-world data stories. Since Calliope\\xspace provides editing functions, users can further optimize the generated stories to meet their needs in these respects. \n \n \\item {\\textbf{Semantic understanding.}} Three experts (\\textbf{P3}, \\textbf{P6}, \\textbf{P7}) pointed out that the generated data stories lack the semantic understanding and common sense needed to build bridges between pieces of information. For example, in news about COVID-19, the number of confirmed cases is usually reported before the numbers of deaths and cures; however, our system does not have this background knowledge. We acknowledge this as a limitation of the current system. Spreadsheets in the real world are usually tied to domain knowledge, which has a significant impact on the story structure. We consider addressing this a promising direction for future work.\n \n \\item {\\textbf{Visualization.}}\n \\textbf{P2}, \\textbf{P3}, and \\textbf{P4} all suggested that the system should provide the ability to customize the charts (such as changing colors, transparency, axis labels, etc.) and should offer more visual designs for users to choose from.
Regarding the \\textit{\\textbf{factsheet}}\\xspace mode, 3 of the experts suggested that the reading order of the data facts should be made clearer. Moreover, \\textbf{P10} suggested, \\textit{``I'd appreciate it if there was a visualization form of slides, which can be helpful when I need to prepare a presentation.\"} As future work, we plan to integrate more charts and layouts to enrich our system.\n\\end{itemize}\n\nOverall, Calliope\\xspace was appreciated by these ten experts, and the interviews also pointed out directions for the next steps of the system.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}