diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzcypn" "b/data_all_eng_slimpj/shuffled/split2/finalzzcypn" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzcypn" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nAt the extremely high temperatures reached in heavy-ion collisions, a phase transition occurs from ordinary nuclear matter to a QGP state in which quarks and gluons are not confined into hadrons. The quark formation time during the collision is proportional to the inverse of the quark mass \\cite{MassDep}. Therefore, heavy quarks are generated early during the collision and can experience the full evolution of the medium \\cite{qgptime}. The quarks lose energy while moving through the medium by collisional and radiative processes. This energy loss is expected to depend on the path length, the QGP density, the parton colour charge (Casimir factor), and the quark mass (dead-cone effect) \\cite{deadcone,charmbeauty}. Because of this, the following energy loss hierarchy is expected: $\\Delta E_\\mathrm{loss}$(g) $>$ $\\Delta E_\\mathrm{loss}$(u,d) $>$ $\\Delta E_\\mathrm{loss}$(c) $>$ $\\Delta E_\\mathrm{loss}$(b). \n\nThe nuclear modification factor ($R_\\mathrm{AA}$) quantifies the medium effects experienced by heavy quarks as they traverse the QGP. This factor, defined as $$R_\\mathrm{AA} = \\frac{1}{\\langle N^\\mathrm{AA}_{coll}\\rangle} \\frac{\\mathrm{d}N^\\mathrm{AA} \/ \\mathrm{d}p_\\mathrm{T}}{\\mathrm{d}N^\\mathrm{pp} \/ \\mathrm{d}p_\\mathrm{T}},$$ is obtained from the ratio of the transverse-momentum-differential yields measured in Pb--Pb and pp collisions. The scaling factor $\\langle N^\\mathrm{AA}_{coll}\\rangle$ represents the average number of binary nucleon-nucleon collisions in Pb--Pb collisions for a given centrality interval. If heavy quarks do not lose energy in the medium, $R_\\mathrm{AA} = 1$, while it drops below unity if they do. Heavy quarks are also expected to be affected by the collective motion of the medium. This gives rise to an anisotropic flow usually described by the components of a Fourier expansion of the azimuthal distribution of the outgoing particles. The second coefficient of this expansion is called elliptic flow ($v_2$). \n\n\\begin{figure}[t!]\n\\begin{minipage}[b]{6cm}\n\\centerline{\n\\includegraphics[width=\\textwidth]{1.pdf}}\n\\end{minipage}\n\\begin{minipage}[b]{6cm}\n\\centerline{\n\\includegraphics[width=\\textwidth]{2.pdf}}\n\\end{minipage}\n\\caption{Left: $R_\\mathrm{AA}$ of non-strange D mesons in central Pb--Pb collisions compared with theoretical calculations. Right: Ratio of $R_\\mathrm{AA}$ of non-prompt D$^0$ mesons over the $R_\\mathrm{AA}$ of prompt D$^0$ mesons. The data are compared with models with different energy loss for charm and beauty. Copyright CERN, reused with permission.}\n\\label{Fig:1}\n\\end{figure}\n\n\\section{Open heavy flavour}\nThe left panel in Fig. \\ref{Fig:1} shows a comparison of the $R_\\mathrm{AA}$ of non-strange D-mesons in central Pb--Pb collisions with theoretical calculations. The low momentum reach allows setting stringent constraints on energy-loss models in central Pb--Pb collisions. Models without shadowing, like the BAMPS model \\cite{bamps}, overestimate the $R_\\mathrm{AA}$ spectrum at low $p_\\mathrm{T}$. 
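\n\nTo make the meaning of such suppression factors concrete, consider a simple numerical illustration with purely hypothetical values: in a centrality class with $\\langle N^\\mathrm{AA}_{coll}\\rangle = 1600$, a measured Pb--Pb yield of $\\mathrm{d}N^\\mathrm{AA} \/ \\mathrm{d}p_\\mathrm{T} = 80$ and a pp yield of $\\mathrm{d}N^\\mathrm{pp} \/ \\mathrm{d}p_\\mathrm{T} = 0.2$ in the same $p_\\mathrm{T}$ interval would give $R_\\mathrm{AA} = (1\/1600) \\times (80\/0.2) = 0.25$, i.e. a suppression of the yield by a factor of four with respect to the binary-collision scaling expectation.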
\n\nThe models can be tested more rigorously by requiring a description of multiple observables, like $R_\\mathrm{AA}$ and $v_2$, at the same time, over a wide momentum range, and in different centrality intervals \\cite{dmesonRAA, dmesonFlow}. This shows that accurate modeling of the data requires a combination of collisional and radiative energy loss, hadronization via coalescence, cold-nuclear-matter effects, and a realistic description of the medium evolution. \n\nThe right panel shows the ratio of the $R_\\mathrm{AA}$ of non-prompt D$^0$-mesons over the $R_\\mathrm{AA}$ of prompt D$^0$-mesons. Prompt D$^0$-mesons, which come directly from the charm quarks produced in the initial collision, and non-prompt D$^0$-mesons, which are produced later by the decay of beauty hadrons, show a different $R_\\mathrm{AA}$ at intermediate $p_\\mathrm{T}$. Models with different energy loss for charm and beauty can describe the ratio of non-prompt over prompt D$^0$-meson $R_\\mathrm{AA}$ within uncertainties. This is an indication that energy loss depends on the quark mass. \n\n\\begin{figure}[t!]\n\\begin{minipage}[b]{6cm}\n\\centerline{\n\\includegraphics[width=\\textwidth]{3.pdf}}\n\\end{minipage}\n\\begin{minipage}[b]{6cm}\n\\centerline{\n\\includegraphics[width=\\textwidth]{4.pdf}}\n\\end{minipage}\n\\caption{Left: $R_\\mathrm{AA}$ in central Pb--Pb collisions for multiple particle species. Right: $\\Lambda_c^+$ \/ D$^0$ ratio as a function of multiplicity for several $p_\\mathrm{T}$ intervals. Copyright CERN, reused with permission.}\n\\label{Fig:2}\n\\end{figure}\n\nThe left panel in Fig. \\ref{Fig:2} shows the $R_\\mathrm{AA}$ for different particle species with a hierarchy that is consistent with the expected difference in energy loss for charm versus light flavours and gluons. Strange D-mesons and $\\Lambda_{c}$ baryons show a hint of lower suppression, compared to non-strange D-mesons, that may point to recombination effects. Models that include hadronization via coalescence reproduce the D$_\\mathrm{S}$ data within uncertainties. \n\nThe right panel in Fig. \\ref{Fig:2} shows the $\\Lambda_c^+$ \/ D$^0$ ratio as a function of multiplicity in pp, p--Pb, and Pb--Pb collisions for several $p_\\mathrm{T}$ intervals. This ratio shows an enhancement at low $p_\\mathrm{T}$ compared to e$^{+}$e$^{-}$ collider measurements, in which $\\Lambda_c^+$ \/ D$^0$ $\\approx 0.1$ \\cite{epref}. The multiplicity dependence of the $\\Lambda_c^+$ \/ D$^0$ ratio shows that the enhancement with respect to electron-positron collider measurements remains even for low-multiplicity pp collisions, suggesting that charm-quark recombination with quarks from the surrounding hadronic environment may already occur in small systems.\n\n\n\\begin{figure}[t!]\n\\begin{minipage}[b]{6cm}\n\\centerline{\n\\includegraphics[width=\\textwidth]{5.pdf}}\n\\end{minipage}\n\\begin{minipage}[b]{6cm}\n\\centerline{\n\\includegraphics[width=\\textwidth]{6.pdf}}\n\\end{minipage}\n\\caption{Left: $R_\\mathrm{AA}$ as a function of multiplicity for inclusive J\/$\\psi$ in two rapidity intervals. Right: $R_\\mathrm{AA}$ as a function of $\\langle N_\\mathrm{part} \\rangle$ for two $\\Upsilon$ states along with model predictions. Copyright CERN, reused with permission.}\n\\label{Fig:3}\n\\end{figure}\n\n\n\\section{Quarkonium}\nAt high temperatures, colour screening in the QGP results in the suppression of quarkonium production \\cite{quarkonium}. 
Different quarkonium states have different binding energies, which leads to the expectation of a sequential melting of the states when nuclei are collided at higher energies \\cite{melting}. On the other hand, the c$\\bar{\\mathrm{c}}$ multiplicity increases at higher collision energies. This leads to the expectation of an enhancement of quarkonium production via recombination at hadronization.\n\nThe left panel of Fig. \\ref{Fig:3} shows the $R_\\mathrm{AA}$ as a function of multiplicity for inclusive J\/$\\psi$-mesons in two rapidity intervals. This $R_\\mathrm{AA}$ measurement has significantly improved precision and $p_\\mathrm{T}$ reach compared to previous measurements \\cite{improved}. At higher multiplicities, the $R_\\mathrm{AA}$ at midrapidity is higher than at forward rapidity. This observation may suggest that recombination effects are stronger at midrapidity, where the charm-quark density is higher.\n\nThe centrality dependence of the $R_\\mathrm{AA}$ is shown in the right panel of Fig. \\ref{Fig:3}. The data show a slight centrality dependence for bottomonium and match well with the model predictions \\cite{du}. A stronger suppression of the $\\Upsilon$(2S) than of the $\\Upsilon$(1S) is observed.\n\nFor J\/$\\psi$-mesons, measurements show a positive $v_2$ in a large $p_\\mathrm{T}$ range at forward rapidity. This is illustrated in the left panel of Fig. \\ref{Fig:4}. The bottomonium $v_2$ is consistent with zero; however, more data are needed for a conclusive interpretation of the difference between the J\/$\\psi$ and bottomonium $v_2$.\n\n\\begin{figure}[t!]\n\\begin{minipage}[b]{6cm}\n\\centerline{\n\\includegraphics[width=\\textwidth]{7.pdf}}\n\\end{minipage}\n\\begin{minipage}[b]{6cm}\n\\centerline{\n\\includegraphics[width=\\textwidth]{8.pdf}}\n\\end{minipage}\n\\caption{Left: $v_2$ as a function of $p_\\mathrm{T}$ for inclusive J\/$\\psi$. Right: $\\Upsilon$(1S) $v_2$ as a function of $p_\\mathrm{T}$ compared with inclusive J\/$\\psi$ $v_2$ and different models \\cite{Yflow}. Copyright CERN, reused with permission.}\n\\label{Fig:4}\n\\end{figure}\n\n\n\\section{Heavy-flavour jets}\nJets originate from hard parton-parton interactions. In ALICE, heavy-flavour tagged jets are measured down to low jet $p_\\mathrm{T}$ (5 GeV\/$c$). The study of jets provides experimental data for gluon-to-hadron fragmentation functions and the gluon PDF at low $x$. The study of jet quenching provides additional information to further characterise parton energy loss in the QGP.\n\nThe left panel of Fig. \\ref{Fig:5} shows the first measurement of the probability density distribution of the parallel jet momentum fraction (z$_{||}^{ch}$) for $\\Lambda_c^+$-tagged jets, compared to expectations from Monte Carlo generators. The Pythia 8 SoftQCD model shows the best agreement with the data.\n\nJets containing beauty hadrons were reconstructed by exploiting the large impact parameter of b-hadron decay tracks with respect to the primary vertex. The observed yields are consistent with POWHEG. The nuclear modification factor in p--Pb ($R_\\mathrm{pPb}$) for B-tagged jets is shown in the right panel of Fig. \\ref{Fig:5}. 
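Here $R_\\mathrm{pPb}$ is defined analogously to $R_\\mathrm{AA}$, with the Pb--Pb yield replaced by the p--Pb yield and with $\\langle N_{coll}\\rangle$ the average number of binary nucleon-nucleon collisions in p--Pb collisions. 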
No cold-nuclear-matter effects are observed within uncertainties using B-tagged jets.\n\n\\begin{figure}[t!]\n\\begin{minipage}[b]{6cm}\n\\centerline{\n\\includegraphics[width=\\textwidth]{9.pdf}}\n\\end{minipage}\n\\hspace{0.1cm}\n\\begin{minipage}[b]{6cm}\n\\centerline{\n\\includegraphics[width=\\textwidth]{10.pdf}}\n\\end{minipage}\n\\caption{Left: probability density distribution of the parallel jet momentum fraction (z$_{||}^{ch}$) for $\\Lambda_c^+$-tagged jets compared to expectations from Monte Carlo generators. Right: $R_\\mathrm{pPb}$ for B-tagged jets with a comparison of measurements by ALICE and CMS \\cite{CMSbjet}. Copyright CERN, reused with permission.}\n\\label{Fig:5}\n\\end{figure}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\label{sec:intro}\n\nThe cosmic microwave background (CMB) anisotropies observed by the\nWilkinson Microwave Anisotropy Probe (WMAP) have revealed that the\nprimordial density fluctuations are almost adiabatic and scale invariant\n\\cite{Komatsu:2008hk,Dunkley:2008ie}. Inflationary cosmology\n\\cite{inflation} is currently the favored scenario to explain such\nprimordial fluctuations. According to it, density perturbations are\nproduced as quantum vacuum fluctuations on sub-Hubble scales and then\nstretched to super-Hubble scales during the phase of accelerated\nexpansion of space. However, inflationary cosmology is not without its\nproblems (see e.g. \\cite{RHBrev}),\\footnote{One of the problems in the\nusually considered mechanism, where quantum fluctuations are the origin\nof the present cosmic fluctuations, is the transition from quantum\nquantities to classical observables. However, in the mechanism proposed\nin this paper, such a problem does not exist because they are classical\nfluctuations from the beginning.} and thus it is important to study\nscenarios alternative to inflation. In the 1980s, topological defect\nmodels such as those based on cosmic strings were investigated intensely\nas a possible alternative to generate primordial density fluctuations\n\\cite{topologicaldefects}. However, the fluctuations induced by defects\nin an expanding universe are isocurvature and, even if they might mimic\nthe inflationary predictions for the temperature-temperature (TT)\ncorrelation of the CMB \\cite{Turok:1996wa}, observations of the\nanti-correlation of the temperature and the E-mode polarization (TE),\nprecisely measured by WMAP, confirmed that such fluctuations could not\nbe the dominant source of CMB anisotropies\n\\cite{Peiris:2003ff,Komatsu:2008hk}. Thus, causal scaling seed models\nare ruled out as a main component of primordial density fluctuations.\n\nOn the other hand, in recent years other types of scenarios alternative\nto inflation motivated by developments in string theory have been\nproposed. Examples are the Pre-Big-Bang model (PBB)\n\\cite{Gasperini:1992em} and the Cyclic\/Ekpyrotic scenario\n\\cite{Khoury:2001wf,Steinhardt:2001st}.\\footnote{Another model is string\ngas cosmology \\cite{BV,NBV}, in which even the dimensionality of the\nobservable universe may be determined dynamically.} The common feature\nof these models is that the universe begins in a contracting phase\nbefore emerging into the expanding phase of Standard Big Bang cosmology\nafter a cosmological bounce. In the contracting phase, comoving scales\nexit the Hubble radius unless the contraction is too rapid. 
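To see this, note that for a power-law contraction $a(t) \\propto (-t)^{p}$ with $0<p<1$ (e.g. $p=1\/2$ for radiation domination and $p=2\/3$ for matter domination), the comoving Hubble radius $(a|H|)^{-1} \\propto (-t)^{1-p}$ shrinks as $t \\to 0^-$, so any fixed comoving scale eventually exits it. 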
Previous studies have considered quantum mechanical vacuum fluctuations of a scalar matter field evaluated when the corresponding scales exit the Hubble radius during the contracting phase. Of course, once they are produced in the contracting phase, the fluctuations must be coupled to fluctuations in the expanding phase after the bounce. The propagation of fluctuations through the bounce phase depends on the details of the bounce \\cite{Durrer}. There are models which yield an almost scale invariant spectrum (see e.g. \\cite{Tolley}) after the bounce.\n\nIn this paper, we suggest the possibility that primordial density fluctuations are produced by causal seeds such as cosmic strings in the contracting phase, and show that they could generate adiabatic, almost scale invariant, and super-Hubble curvature fluctuations in the expanding universe.\\footnote{Another attempt to produce adiabatic fluctuations from cosmic strings is discussed in the context of two-metric theories of gravity \\cite{Avelino:2000iy}.} One simple possibility to realize cosmic strings in the contracting universe is to embed our model into a so-called cyclic universe, in which cosmic strings are formed in the usual way during a phase transition in the expanding phase, if the matter Lagrangian admits cosmic strings. In our scenario, different from the cyclic scenario of Steinhardt and Turok \\cite{Steinhardt:2001st} where quantum fluctuations source the primordial density fluctuations, cosmic strings seed the perturbations. Of course, in the cyclic scenario, topological defects may be dangerous because they may dominate the energy density of the universe, as pointed out by Avelino et al. in Refs.~\\cite{Avelino:2002hx,Avelino:2002xy}. However, Avelino et al. also give a solution to that problem: they point out that a relatively long period of cosmic acceleration at low energies (the late period of one cycle) can dilute topological defects so that they do not overdominate the universe. A second possibility is to consider the birth of the universe directly in the contracting phase (not cyclic). If the universe has a finite birth time, the correlated region at the birth of the universe does not necessarily cover the whole volume of the universe. Then, the randomness of the values of the underlying field beyond the correlation length leads to the formation of topological defects.\n\nDensity fluctuations produced by causal seed models naturally become super-Hubble in the contracting phase. More specifically, the key point is that the defect-seeded perturbations which are initially isocurvature in nature seed a growing adiabatic mode. At the time when the symmetry (whose breaking yields the topological defects) is restored, the seed term disappears and the fluctuations become frozen-in super-Hubble scale adiabatic perturbations. As long as no dominant isocurvature fluctuations are produced in the expanding phase, the fluctuations in the expanding phase will be adiabatic and thus able to explain the TE anti-correlation observed in the CMB. In the following we consider cosmic strings as a concrete example and investigate the nature of the density fluctuations in detail.\n\n\\section{Adiabatic fluctuations from cosmic strings in a contracting universe}\n\nThe evolution of cosmic strings in a contracting universe was investigated in Refs. \\cite{Avelino:2002hx,Avelino:2002xy}. As in these references, we will assume that the distribution of strings on super-Hubble scales is like a random walk. 
We make the simplest assumption that the universe is initially matter and then radiation dominated in the contracting phase. In this context, it was shown that cosmic strings obey the scaling solution asymptotically both in the radiation and matter dominated epochs. Specifically, the correlation length $L$ is proportional to $a^{2} \\ln a \\propto (-t) \\ln (-t)$ in the radiation era ($a$ is the scale factor and $t(<0)$ is cosmic time, with the bounce time taken as $t=0$). If we take the string loop chopping efficiency $\\tilde{c}$ to be a non-zero constant, then the ratio of the energy density in cosmic strings to the total one stays almost constant.\\footnote{There remains the logarithmic dependence in the radiation dominated era, which can lead to a small deviation from scale invariance of the final curvature perturbations.} Therefore, the density fluctuations produced by cosmic strings are almost scale invariant, at least initially at Hubble radius exit.\n\nOn super-Hubble scales, the dynamics of the defect network sets up isocurvature fluctuations which in turn act as a seed for growing curvature perturbations. As the universe contracts, the temperature of radiation increases, eventually leading to symmetry restoration and the disappearance of the topological defects. Thus, there will no longer be any isocurvature fluctuations, and the source on super-Hubble scales in the differential equation for the adiabatic mode vanishes. Hence, the fluctuations become frozen in as adiabatic ones.\n\nOn super-Hubble scales, the equation for the evolution of the curvature perturbation on uniform total density hypersurfaces, $\\zeta$, is given by\\footnote{Although the density fluctuations of cosmic strings can be large and of the order of unity, linear perturbation theory applies because the metric perturbations remain small as long as the energy density of the cosmic strings is subdominant.} \\cite{Wands:2000dp}\n\\begin{equation}\n \\dot{\\zeta} = - \\frac{H}{\\rho+P} \\delta P_{\\rm nad},\n \\label{eq:zetaevo}\n\\end{equation}\nwhere $H \\equiv \\dot{a}\/a$ is the Hubble parameter, and $\\rho$ and $P$ are the total energy density and pressure, respectively. A dot represents a derivative with respect to the cosmic time. The non-adiabatic pressure perturbation $\\delta P_{\\rm nad}$ is defined as $\\delta P_{\\rm nad} \\equiv \\delta P - c_s^2 \\delta \\rho$, with $\\delta \\rho$ being the total density fluctuation and $c_s^2 = \\dot{P}\/\\dot{\\rho}$ being the adiabatic sound speed. In the multi-fluid case, the total non-adiabatic pressure perturbation consists of two parts. The first part comes from intrinsic entropy perturbations, which vanish for a barotropic fluid. Therefore, the intrinsic entropy perturbations of matter and radiation vanish. On the other hand, it is non-trivial whether the intrinsic entropy perturbation of cosmic strings also vanishes, even though the string network can be modeled as a simple fluid with equation of state $w_{st} \\equiv P_{st} \/ \\rho_{st} = - 1\/3$. However, we expect that the intrinsic entropy perturbation of cosmic strings is negligible and will assume this in the following. 
The second part, $\\delta P_{\\rm rel}$, comes from the relative entropy perturbation between different fluids $S_{\\alpha\\beta}$ \\cite{Malik:2002jb},\n\\begin{equation}\n \\delta P_{\\rm rel} \\equiv \\frac{1}{6H\\dot{\\rho}} \\sum_{\\alpha,\\beta}\n \\dot{\\rho}_{\\alpha} \\dot{\\rho}_{\\beta} (c_{\\alpha}^2-c_{\\beta}^2) \n S_{\\alpha\\beta},\n \\label{eq:relnonad}\n\\end{equation}\nwhere the relative entropy perturbation between different fluids $S_{\\alpha\\beta}$ is given by\n\\begin{equation}\n S_{\\alpha\\beta} = - 3H \\left( \\frac{\\delta\\rho_{\\alpha}}{\\dot{\\rho_{\\alpha}}} - \\frac{\\delta\\rho_{\\beta}}{\\dot{\\rho_{\\beta}}} \\right),\n \\label{eq:relentropy}\n\\end{equation}\nand the adiabatic sound speed of each component, $c_\\alpha^2$, is given by $\\dot{P_\\alpha}\/\\dot{\\rho_\\alpha}$, with $\\rho_\\alpha$ and $P_\\alpha$ being the energy density and pressure of the component.\n\nNow, let us estimate the amplitude of the curvature perturbation $\\zeta$ for a comoving scale $k$. First, we consider a scale $k$ which exits the Hubble radius during the radiation dominated era. We neglect the curvature perturbations which are generated when the corresponding scale is sub-Hubble.\\footnote{This can be justified by assuming that the string loop distribution is subdominant and the initial value of the curvature perturbation at $t \\rightarrow - \\infty$ vanishes. In fact, we expect that the string loop distribution will be subdominant in a contracting universe compared to an expanding universe since comoving scales are exiting rather than entering the Hubble radius. Loops exit the Hubble radius before they collapse through emission of gravitational waves. This effect reduces the loop chopping efficiency $\\tilde{c}$, and hence the number of produced loops is smaller in a contracting phase.} Thus, we just follow the evolution of the curvature perturbation from the epoch $t_H(k)$ when a comoving scale $k$ exits the Hubble radius until the time $t_{\\rm cs}$ when the symmetry is restored and the strings disappear.\n\nThe relative entropy perturbation between radiation and cosmic strings is \n\\begin{equation}\n S_{rs} \\simeq - 3H \\frac{\\delta \\rho_{\\rm st}}{\\dot{\\rho}_{\\rm st}},\n \\label{eq:entropyrs}\n\\end{equation}\nwhere we have used the fact that $\\dot{\\rho}_{\\rm st}\/\\dot{\\rho}_{\\rm rad}$ is almost constant and much smaller than unity, and $|\\delta \\rho_{\\rm rad}|$ is at most comparable to $|\\delta \\rho_{\\rm st}|$. From the scaling solution, it follows that cosmic strings can be modeled as a random walk on scales larger than the Hubble radius with step length comparable to the Hubble radius. Then, the density fluctuations of cosmic strings on a super-Hubble scale can be easily estimated as\n\\begin{equation}\n \\delta \\rho_{st}(k) \\simeq N |H| \\mu k_{\\rm phys},\n \\label{eq:deltarhosr}\n\\end{equation}\nwhere $\\mu$ is the mass per unit length of a string, $N={\\cal O}(1)$ is the number of long strings crossing any given Hubble volume, and $k_{\\rm phys} \\equiv k\/a$. Notice that $N$ can be different in the radiation and matter dominated epochs, although the difference is expected to be at most ${\\cal O}(1)$. Inserting these equations into Eq. 
(\\ref{eq:zetaevo}) yields\n\\begin{equation}\n \\dot{\\zeta} \\simeq N G \\mu k_{\\rm phys}.\n\\end{equation}\n\nAs stated above, cosmic strings disappear at the time $t_{\\rm cs}$ due to symmetry restoration. Once cosmic strings disappear, the curvature perturbation is conserved at least until the bounce. Thus, the final curvature perturbation before the bounce is estimated as\n\\begin{equation}\n \\zeta = \\int_{t_H(k)}^{t_{\\rm cs}} dt \\dot{\\zeta}\n \\sim N_{\\rm r} G \\mu k (-t_H(k))^{\\frac 12} \n \\sim N_{\\rm r} G \\mu.\n\\end{equation}\nHere we have made use of $k (-t_H(k))^{\\frac 12} = 1\/2$ (in the radiation era), and $N_{\\rm r}$ is the value of $N$ (in the radiation epoch), which is ${\\cal O}(1)$. Thus, the curvature perturbations are independent of the comoving scale $k$ and hence scale invariant, at least before the bounce. In the same way, the curvature perturbation for comoving scales whose physical scales exit the Hubble radius during the matter era is estimated as\n\\begin{equation}\n \\zeta = \\int_{t_H(k)}^{t_{\\rm eq}} dt \\dot{\\zeta} +\n \\int_{t_{\\rm eq}}^{t_{\\rm cs}} dt \\dot{\\zeta}\n \\sim N_{\\rm m} G \\mu k (-t_H(k))^{\\frac 13} \n \\sim N_{\\rm m} G \\mu,\n\\end{equation}\nwhere $t_{\\rm eq}$ is the matter-radiation equality time in the contracting phase, we have made use of $k (-t_H(k))^{\\frac 13} = 2\/3$ (in the matter era), and $N_{\\rm m}$ is the value of $N$ in the matter epoch. Therefore, the curvature perturbation is scale invariant on these scales as well, at least before the bounce. \n\nAccording to the often-used Hwang-Vishniac \\cite{HV} (Deruelle-Mukhanov \\cite{DM}) matching conditions for fluctuations across a space-like hypersurface, the curvature perturbation is conserved across the bounce. If we apply these matching conditions, we conclude that the final curvature perturbations in the expanding phase are almost scale invariant and hence could be responsible for the present density fluctuations if $G \\mu \\sim 10^{-5}$. As emphasized in \\cite{Durrer}, there are problems with blindly applying these matching conditions. Subsequent studies have shown that the actual transfer of the fluctuations depends quite sensitively on the details of the bounce. There are cases where the curvature perturbation is conserved (see e.g. \\cite{Tsujikawa:2002qc,Copeland:2006tn,Bozza}), but there are other examples where this does not hold \\cite{Hassan,Tirtho,Cai}. However, if the bounce time is short compared to the time scale of the fluctuations of interest, it can be rather rigorously shown that the spectrum of $\\zeta$ is maintained through the bounce. This can be shown \\cite{Cai,Cai2} by modeling the background cosmology with three phases: the initial contracting radiation phase, the ``bounce phase'' during which $H = \\alpha t$, where $\\alpha$ is some constant, and the expanding radiation phase. 
The matching conditions at the two hypersurfaces between these phases can be consistently applied (since the background also satisfies the matching conditions, unlike what happens in the single matching between the contracting and expanding phase which has been applied in the case of the singular Ekpyrotic bounce). However, we would like to point out that, even if the curvature perturbation is {\\it not} conserved through the bounce, the scale invariance of the final curvature perturbation still holds true as long as the change in the amplitude of the fluctuations across the bounce is independent of the comoving scale. This can be reasonably expected for the modes we are interested in because their momenta are much smaller than the maximal value of $|H|$ around the bounce point (assuming that the bounce is smooth), the only energy scale which can be set by the bounce.\\footnote{Even in the case when the change through the bounce depends on the comoving scale, a scale invariant spectrum may be realized by considering the time varying tension of cosmic strings discussed in Refs. \\cite{Yamaguchi:2005gp,Ichikawa:2006rw,Takahashi:2006yc}, which compensates for the variation of the curvature perturbation.}\n\nFinally, we comment on some subtleties. First of all, cosmic strings may be formed again in the expanding phase. In this case, cosmic strings again produce isocurvature fluctuations in the expanding phase, which should be suppressed to less than $10\\%$ of the total curvature perturbations \\cite{Pogosian:2003mz,Endo:2003fr}. Such a suppression may be realized in the case when the constant $N$ for cosmic strings in the contracting phase is much larger than that in the expanding phase. The fact that the loop chopping efficiency $\\tilde{c}$ is smaller in the contracting phase implies that the constant $N$ might be larger there. Therefore, it is plausible that the constant $N$ for cosmic strings in the contracting phase is larger than that in the expanding phase. Another possibility is that the curvature perturbation sourced by fluctuations of cosmic strings in a contracting phase is amplified at the bounce. Another solution is to simply assume that cosmic strings are not produced in the expanding phase because the symmetry breaking patterns of scalar fields are not necessarily the same in the contracting and the expanding phases.\n\nAnother issue is that cosmic strings generate not only density perturbations but also vector and tensor perturbations (gravitational waves). Gravitational waves are produced by oscillations of loops as well as of long strings. As explained before, the radius of a loop could become larger than the Hubble radius before there has been a significant amount of gravitational radiation. Therefore, we expect that the relative amplitude of gravitational waves to scalar metric fluctuations will be smaller in the contracting phase than in the standard scenario of cosmic strings in an expanding universe. Regarding the vector mode, it has been shown that vector perturbations exhibit growing mode solutions in the contracting phase \\cite{Battefeld:2004cd}. In particular, the metric perturbations always grow while the matter perturbations stay constant in the radiation dominated era. However, the vector perturbations will decrease in the expanding phase. 
If vector fluctuations are suppressed (even slightly) across the bounce (or the scalar fluctuations enhanced), then the vector modes will be sufficiently small today not to destroy the successful agreement with the CMB angular power spectrum. Small but not negligible vector fields could, in fact, be useful for generating the observed large scale magnetic fields, as pointed out in Ref. \\cite{Battefeld:2004cd}. As a final remark, we mention possible non-Gaussianities in the scenario. Since the distribution of cosmic strings is highly non-Gaussian, the produced density fluctuations may give large non-Gaussianity, though the bispectrum for a simulated string model in the expanding universe was shown not to be large for the diagonal contribution \\cite{Gangui:2001fr}. All these topics are worth investigating.\n\n\\section{Summary}\n\nWe have shown that adiabatic, super-Hubble, and almost scale invariant density fluctuations can be produced by cosmic strings in a contracting universe. Although cosmic strings can only generate isocurvature fluctuations in an expanding universe, they can produce adiabatic fluctuations in a contracting universe because the strings eventually disappear due to symmetry restoration. Our findings open the possibility that topological defects could be resurrected as the main source of the current cosmic density fluctuations.\n\n\n\\ack\nM.Y. is grateful to M. Kawasaki for useful discussions. T.T. and M.Y. would like to thank R. H. Brandenberger for kind hospitality at McGill University where this work was finished. This work is supported in part by a Canadian NSERC Discovery Grant and by the Canada Research Chair program (R.B.), by the Sumitomo Foundation (T.T.), and by Grant-in-Aid for Scientific Research from the Ministry of Education, Science, Sports, and Culture of Japan No.\\,19740145 (T.T.), No.\\,18740157, and No.\\,19340054 (M.Y.).\n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{Introduction}\nAs artificial intelligence (AI) technologies are playing key roles in our daily lives, developing intelligent systems which can work with humans more effectively (instead of replacing them) is becoming a central research theme \\cite{9153877,peng2022investigations,russell2021human}. Such a theme is often referred to as \\textsl{hybrid intelligence}, aiming to benefit from the strengths of both human and machine intelligence in solving problems. Developing systems of such capability demands fundamentally novel approaches to major research problems in AI: state-of-the-art systems outperform humans in many cognitive tasks, from playing video games \\cite{hester_deep_2017} to pattern recognition \\cite{liu2019comparison}; however, they fall short when it comes to other tasks such as common sense reasoning and causal discovery, and to behavioural human capabilities such as explaining their own decisions, adapting to different environments, collaborating with others, etc. A particular challenge in developing such systems lies in making them more interpretable \\cite{9153877,tjoa2020survey,TIDDI2022103627}, which is the main focus of this paper. \n\nAn obvious route to making such systems interpretable is to employ an existing knowledge representation formalism which is inherently tailored towards expressing human knowledge. 
One such type of human knowledge that is relevant in problem solving is captured by the notion of \\textit{interrogative agenda} (also called research agenda \\cite{enqvist2012modelling}) of an epistemic agent (which will be explained further in detail in Section~\\ref{prelim: imterrogative agenda}). Intuitively, given a context, an interrogative agenda abstracts a set of features that an epistemic agent is interested in. In order to express interrogative agendas, we employ the knowledge representation formalism of \\textit{formal concept analysis}. \n\nFormal concept analysis (FCA) is an influential foundational theory in knowledge representation and reasoning \\cite{priss2006formal, qadi2010formal, poelmans2010formal, valtchev2004formal, poelmans2013formal, ganter2012formal, wille1996formal} which provides a framework for categorizing objects w.r.t.~a given set of features.\nThe set of features used in the categorization (the formal context in FCA) can be identified as its agenda, and different agendas will correspond to different categorizations. The agenda used to categorize a set of objects may be chosen based on several factors, such as the availability and precision of the data, the categorization methodology, and the purpose of the categorization.\\footnote{A logical framework for studying these different categorizations obtained from different agendas and their interaction was developed in our earlier work \\cite{FLexiblecat2022} and applied to the auditing domain.} In this paper, we focus on obtaining concept lattices (possibly fuzzy) corresponding to different agendas (possibly non-crisp).\nHowever, in many applications, it is unclear which interrogative agenda (Sec.~\\ref{prelim: imterrogative agenda}) is best suited to obtain a categorization that can be useful in dealing with a given problem. Thus, in this work, we focus on the task of using a machine learning algorithm to learn such agendas, and hence a ``good categorization'' for the problem at hand. In particular, we will address the tasks of classification and outlier detection.\n\nIn the realm of machine learning, formal concept analysis has been used in the past for classification, outlier detection, rare concept mining and identification of rare patterns (Sec.~\\ref{Sec:Classification and outlier detection using concept lattice}). However, to the best of our knowledge, all these methods use a single concept lattice (or its sublattice) to deal with the problems mentioned above. That is, the agenda of the categorization is fixed beforehand. The main difficulty in using such techniques lies in the fact that there are exponentially many subsets of features (and weights) one has to take into account. \nOn the other hand, since some features may not be relevant for a given classification task, removing them can reduce the data collection cost and complexity, and may even improve the accuracy for some tasks. However, determining the set of relevant features can be difficult, and it is an important part of the preprocessing phase for many such algorithms. \n\nIn this paper, we propose a meta-learning algorithm to identify the best-suited agenda (and hence categorization), that is, to estimate the significance of different sets of features for the given task. The incorporation of such an outer loop on top of an existing classification or outlier detection algorithm can potentially increase its generalising power and performance. 
Another major advantage of such a method is that the learned agendas provide us with an estimate of the importance of different sets of features for the given task, making our results more explainable. \n\n\\paragraph{Structure of the paper.} In Section \\ref{sec:preliminaries}, we provide the relevant preliminaries. In Section \\ref{Sec:Classification and outlier detection using concept lattice}, we give an overview of FCA-based classification and outlier detection algorithms. In Section \\ref{sec:Learning interrogative agendas}, we describe the framework for learning agendas and provide a generic learning algorithm. In Section \\ref{sec:Conclusion}, we conclude and give some directions for future research. \n\\section{Preliminaries}\\label{sec:preliminaries}\n\\subsection{Formal concept analysis}\nA {\\em formal context} \\cite{ganter2012formal} is a structure $\\mathbb{P} = (A, X, I)$ such that $A$ and $X$ are sets of {\\em objects} and {\\em features}, respectively, and $I\\subseteq A\\times X$ is the so-called {\\em incidence relation}, which records whether a given object has a given feature. That is, for any object $a$ and feature $x$, $a I x$ iff $a$ has feature $x$. \nFormal contexts can be thought of as abstract representations of, e.g., databases and tabular data.\nEvery formal context as above induces maps $I^{(1)}: \\mathcal{P}(A)\\to \\mathcal{P}(X)$ and $I^{(0)}: \\mathcal{P}(X)\\to \\mathcal{P}(A)$, respectively defined by the assignments \n\\begin{equation}\n I^{(1)}[B] := \\{x\\in X\\mid \\forall a(a\\in B\\Rightarrow aIx)\\},\\quad \n I^{(0)}[Y] := \\{a\\in A\\mid \\forall x(x\\in Y\\Rightarrow aIx)\\}.\n\\end{equation}\nA {\\em formal concept} of $\\mathbb{P}$ is a pair \n$c = (\\val{c}, \\descr{c})$ such that $\\val{c}\\subseteq A$, $\\descr{c}\\subseteq X$, and $I^{(1)}[\\val{c}] = \\descr{c}$ and $I^{(0)}[\\descr{c}] = \\val{c}$. \nA subset $B \\subseteq A$ (resp.\\ $Y\\subseteq X$) is said to be {\\em closed} or {\\em Galois-stable} if $Cl(B)=I^{(0)}[I^{(1)}[B]]=B$ (resp.\\ $Cl(Y)=I^{(1)}[I^{(0)}[Y]]=Y$).\nThe set of objects $\\val{c}$ is intuitively understood as the {\\em extension} of the concept $c$, while the set of features $\\descr{c}$ is understood as its {\\em intension}. \nThe set of all formal concepts of $\\mathbb{P}$ (denoted by $L(\\mathbb{P})$) can be partially ordered as follows: for any $c, d\\in L(\\mathbb{P})$, \n\\begin{equation}\nc\\leq d\\quad \\mbox{ iff }\\quad \\val{c}\\subseteq \\val{d} \\quad \\mbox{ iff }\\quad \\descr{d}\\subseteq \\descr{c}.\n\\end{equation}\nWith this order, $L(\\mathbb{P})$ is a complete lattice, the {\\em concept lattice} $\\mathbb{P}^+$ of $\\mathbb{P}$. \n\\subsection{Interrogative agendas}\\label{prelim: imterrogative agenda} \n\nIn epistemology and formal philosophy, the interrogative agenda (or research agenda \\cite{enqvist2012modelling}) of an epistemic agent (or group of agents, e.g.,~users) indicates the set of questions they are interested in, or what they want to know relative to a certain circumstance. \nIntuitively, in any context, interrogative agendas act as cognitive filters that block content which is deemed irrelevant by the agent. Only the information the agent considers relevant is used, e.g.,~in the formation of their beliefs, or actions, etc. 
Deliberation and negotiation processes can be described in terms of whether and how agents interact and succeed in shaping their interrogative agendas, and the outcomes of these processes can be described in terms of the aggregated (or \"common ground\") agenda.\nAlso, phenomena such as polarization \\cite{myers1976group}, echo chambers \\cite{sunstein2001republic} and self-fulfilling prophecies \\cite{merton1948self} can be described in terms of the formation and dynamics of interrogative agendas among networks of agents. \n\nDealing with a classification or outlier detection problem, we may have different agendas for different aims. For example, the agenda for the classification of consumers for a grocery store based on their buying preferences is very different from the agenda of a political analyst trying to classify the same set of people based on their political inclinations. Thus, interrogative agendas play an important role in determining a natural or useful categorization for a specific purpose. \n\n\\subsection{Interrogative agendas and flexible categorization}\n\\label{ssec:Interrogative agendas and flexible categorization}\nLet $\\mathbb{P}=(A,X,I)$ be a formal context. For a set of features $Y \\subseteq X$, the formal context induced by $Y$ from $\\mathbb{P}$ is $(A,X,I \\cap A \\times Y)$. Given the set of all the features $X$, the (non-crisp) interrogative agenda of an agent can be described by a mass function on $\\mathcal{P}(X)$. For an agenda represented by $m:\\mathcal{P}(X) \\to [0,1]$, and any $Y \\subseteq X$, $m(Y)$ represents the importance (or intensity of the preference) of the set of features $Y$ according to the agenda given by $m$. We assume that mass functions are normalized, that is, \n\\begin{equation}\n\\sum_{Y \\subseteq X} m(Y)=1.\n\\end{equation}\nAny such mass function induces a probability or preference function $p_m: \\mathcal{R} \\to [0,1]$ such that $p_m((A,X,I \\cap A \\times Y))= m(Y)$, where $\\mathcal{R}$ is the set of all the formal contexts corresponding to the crisp agendas induced by subsets of $X$ (i.e. the formal contexts corresponding to each $Y \\subseteq X$).\n\nThe agendas of different agents can be aggregated using different Dempster-Shafer rules \\cite{shafer1992dempster, sentz2002combination, denoeux2006cautious} to obtain a categorization corresponding to the aggregated agendas. A logical framework for deliberation between different agents having different agendas is developed in \\cite{FLexiblecat2022}. This framework can be applied to study categorizations when different agents with different interests interact with each other for communication or joint decision making, as is the case in auditing, community analysis, linguistics, etc. We also describe a method to approximate the importance of individual features from mass functions describing agendas by the plausibility transform \\cite{cobb2006plausibility} or the pignistic transformation \\cite{smets2005decision}, methods used in Dempster-Shafer theory to transform Dempster-Shafer mass functions into probability functions. These importance values of individual features can be useful in several different applications like feature analysis, clustering, etc. 
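\n\nTo make the notions recalled above concrete, the following minimal Python sketch (with a small, purely hypothetical toy context, not taken from the paper) computes the derivation maps $I^{(1)}$ and $I^{(0)}$ and the formal context induced by a crisp agenda $Y \\subseteq X$:\n\\begin{verbatim}\n# Minimal sketch of the FCA notions used above (illustrative only).\n# A formal context is (A, X, I), with I a set of (object, feature) pairs.\ndef up(B, A, X, I):\n    # I^(1)[B]: the features shared by all objects in B\n    return {x for x in X if all((a, x) in I for a in B)}\n\ndef down(Y, A, X, I):\n    # I^(0)[Y]: the objects having all features in Y\n    return {a for a in A if all((a, x) in I for x in Y)}\n\ndef induced_context(A, X, I, Y):\n    # the context induced by the crisp agenda Y: (A, X, I restricted to A x Y)\n    return A, X, {(a, x) for (a, x) in I if x in Y}\n\n# Toy example (hypothetical data):\nA = {'a1', 'a2', 'a3'}\nX = {'x1', 'x2', 'x3'}\nI = {('a1', 'x1'), ('a1', 'x2'), ('a2', 'x2'), ('a3', 'x3')}\nB = {'a1', 'a2'}\nprint(up(B, A, X, I))         # {'x2'}\nprint(down({'x2'}, A, X, I))  # {'a1', 'a2'}\n\\end{verbatim}\nSince $I^{(1)}[B]=\\{x_2\\}$ and $I^{(0)}[\\{x_2\\}]=B$ in this toy context, the pair $(B,\\{x_2\\})$ is a formal concept in the sense defined above.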
\n\\label{ssec:interrogativeag}\n\n\\section{Classification and outlier detection using concept lattices} \\label{Sec:Classification and outlier detection using concept lattice}\nIn this section, we give an overview of different classification and outlier detection techniques using concept lattices.\n\\subsection{Classification using concept lattices}\nDifferent algorithms have been applied to classify objects using formal concept analysis, that is, using concept lattices. Fu et al. \\cite{fu2004comparative} provide a comparison between different FCA-based classification algorithms, such as LEGAL \\cite{liquiere1990legal}, GALOIS \\cite{carpineto1993galois}, RULEARNER \\cite{sahami1995learning}, CLNN and CLNB \\cite{xie2002concept}. Prokasheva et al. \\cite{prokasheva2013classification} describe different classification algorithms using FCA and the challenges faced by such methods. \n \n In \\cite{kuznetsov2004machine}, Kuznetsov describes a classification algorithm that uses the JSM-method \\cite{finn1989generalized,FINN1983351}. He proposes to use concept lattices and training examples to form hypotheses as follows. Let $(A, X, I)$ be a formal context for the set of objects $A$ and the set of features $X$. We add an additional target feature $x \\not\\in X$ denoting the class of an object. This partitions $A$ into three sets of objects $A_+$, $A_-$, and $A_\\tau$, consisting of objects known to have feature $x$, objects known not to have feature $x$, and objects for which it is unknown whether or not they have it, respectively. Positive hypotheses for the JSM-method based on this formal context are given by the sets of features that are shared by a set of positive examples but not by any negative example. That is, a set $H \\subseteq X$ is a positive hypothesis iff $I^{(0)}[H] \\cap A_+ \\neq \\emptyset$ and $H \\not\\subseteq I^{(1)}[\\{a\\}] $ for any $a \\in A_-$. Negative hypotheses are defined analogously. Any object $b$ will be classified positively (resp. negatively) if $I^{(1)}[\\{b\\}]$ contains a positive (resp. negative) hypothesis but no negative (resp. positive) hypotheses. In case $I^{(1)}[\\{b\\}]$ contains both or neither, the classification is undetermined, or some other method like majority voting can be used to classify $b$. The method sketched above has been used with different modifications in many FCA-based classification algorithms \\cite{ganter2000formalizing,kuznetsov2013fitting,onishchenko2012classification}. Some classification algorithms based on FCA use concept lattices to augment other classifiers like SVM \\cite{carpineto2009concept}, the Naive Bayes classifier and the nearest neighbour classifier \\cite{xie2002concept} in preprocessing or feature selection. Other FCA-based classification methods include biclustering \\cite{onishchenko2012classification} and cover-based classification \\cite{maddouri2004towards}. \n\\subsection{Outlier detection using concept lattices}\nOutlier detection can be considered as a special case of binary classification where the classes are outliers and non-outliers. Thus, any of the above-mentioned algorithms can be used for outlier detection using concept lattices. Some other methods or algorithms based on formal concept analysis have also been studied specifically for outlier detection or similar tasks like mining rare concepts or patterns \\cite{sugiyama2013semi,okubo2010algorithm,zhang2014outlier}. The simplest method to define the outlier degree of an element from a concept lattice is by using the size of its closure (i.e. 
the smallest category containing the element). A smaller closure of an object indicates that only a small number of elements have the same features as the object, and thus it is likely to be an outlier. Sugiyama \\cite{sugiyama2013semi} suggests that the outlierness of an object in a concept lattice should not depend on the size of its closure, but rather on the number of concepts it creates. He suggests defining the outlierness score of a set of objects $B \\subseteq A$ as\n\\begin{equation}\nq(B) := |\\{ (G,Y) \\in \\mathbb{P}^+ \\mid B \\subseteq G \\, \\text{or}\\, I^{(1)}[B] \\subseteq Y \\}|.\n\\end{equation}\nThis definition is more suited to detecting outliers that belong to a densely agglomerated cluster which is sparsely located when the whole set of objects is viewed. Zhang et al.~\\cite{zhang2014outlier} propose an outlier mining algorithm based on constrained concept lattices to detect local outliers using a sparsity-based method. One of the key advantages of using formal concept analysis in classification or outlier detection over other algorithms is that FCA can be used to deal with both continuous and discrete attributes simultaneously, through the discretization of continuous attributes by conceptual scaling (Sec. \\ref{ssec:COnceptual scaling}).\n\nOne of the major issues in applications of formal concept analysis is the complexity of the algorithms involved. The fundamental reason behind the high complexity is that, in the worst-case scenario, the number of categories in a concept lattice grows exponentially with the number of objects and features involved. Several techniques have been devised in the past to overcome this complexity problem \\cite{cole1999scalability,dias2010reducing,singh2017concepts}. \n\n\\subsection{Discretization of continuous attributes and conceptual scaling} \\label{ssec:COnceptual scaling}\nIn order to apply formal concept analysis to attributes with continuous values, we need to discretize them. The process of converting many-valued (possibly continuous-valued) attributes into binary attributes or features for FCA is known as conceptual scaling \\cite{ganter1989conceptual}. Scaling is an important part of most FCA-based techniques and has been studied extensively \\cite{ganter1989conceptual,prediger1997logical,prediger1999lattice}. Choosing the correct scaling method depends on the specific task the concept lattice is used for. \n\\section{Learning interrogative agendas} \\label{sec:Learning interrogative agendas}\nFormal concept analysis categorizes a given set of objects w.r.t.~a given set of features. Thus, the outlier detection (or classification) task at hand depends on the features (or attributes) under consideration. However, in many applications it is hard to estimate which features are important and how important they are, that is, the correct agenda for a given task. Here we describe a machine learning framework that addresses this problem by learning a ``good'' agenda for the given task. This provides a way to improve the performance of FCA-based classification or outlier detection algorithms by choosing the correct agenda. It also makes the results more explainable by providing the importance value of each set of features. 
\n\\subsection{Space of possible agendas}\nAs discussed in Section \\ref{ssec:Interrogative agendas and flexible categorization}, a (non-crisp) interrogative agenda on a given set of features $X$ is given by a mass function $m:\\mathcal{P}(X) \\to [0,1]$, where for any $Y \\subseteq X$, $m(Y)$ denotes the importance of the set of features $Y$ in the categorization.\nThe mass function $m$ induces a probability function $p_m:\\mathcal{R} \\to [0,1]$, where $\\mathcal{R}$ is the set of all the (crisp) formal contexts induced from $\\mathbb{P}=(A,X,I)$ by different crisp agendas, i.e., subsets of $X$. For any categorization (formal context) $\\mathbb{P} \\in \\mathcal{R}$, $p_m(\\mathbb{P})$ denotes the likelihood assigned or preference given to the categorization $\\mathbb{P}$ by the agenda $m$. Thus, the set of all possible non-crisp categorizations (resp. non-crisp agendas) induced from a context $\\mathbb{P}$ is given by the set of all the probability functions on $\\mathcal{R}$ (resp. the set of all the possible mass functions on $\\mathcal{P}(X)$). As discussed in the introduction, we want to learn a ``good'' agenda that leads to a categorization that can be used to complete a given task effectively. This corresponds to learning a probability function $p$ on $\\mathcal{R}$ which represents a suitable categorization for the given task. That is, we use machine learning to search for a ``good'' function in the space of probability functions on $\\mathcal{R}$.\nFor the sake of computational and notational convenience, here we propose the following simplifications.\n\n\nLet $\\mathbb{R}$ be the set of real numbers. Let $f:\\mathcal{R} \\to \\mathbb{R}$ be a map assigning a weight $f(\\mathbb{P}) \\in \\mathbb{R}$ to every $\\mathbb{P} \\in \\mathcal{R}$. For any $\\mathbb{P} \\in \\mathcal{R}$, $f(\\mathbb{P})$ denotes the importance (or preference) assigned to the context $\\mathbb{P}$ or to the corresponding set of features $Y$, where $\\mathbb{P}=(A,X, I \\cap A \\times Y)$. We call any such function $f$ a non-crisp agenda, as it gives weights (representing importance) to different sets of features. Any such function can be seen as a real-valued vector of dimension $|\\mathcal{R}|$. Thus, the set of all such functions is isomorphic to the space $\\mathbb{R}^{|\\mathcal{R}|}$. As this space is linear, the shift from probability functions on $\\mathcal{R}$ to real-valued functions simplifies the task of learning an agenda (weight function) that minimizes the loss using a simple gradient descent method. \n\nThe weights assigned to lattices can be interpreted as probabilities on $\\mathcal{R}$ (and hence mass functions on $\\mathcal{P}(X)$) via normalization when all the weights are non-negative. Negative weights suggest that the corresponding categorization is opposite to the preferred categorization for the task at hand. For example, suppose we are interested in detecting elements with an abnormally high value of a feature $f_1$, while the outlier detection method used finds outliers with a low value of $f_1$. Then the learning algorithm is likely to assign a negative weight to the agenda $\\{f_1\\}$. \n\nAs discussed earlier, one of the major problems in applications of formal concept analysis is the complexity of the algorithms involved. Here, we are proposing to consider priority (or weight) functions on a set of different concept lattices corresponding to different agendas. 
As the number of different (crisp) agendas induced from a set $X$ of features is exponential in $|X|$, this may add another exponential factor to the complexity of the algorithm. In many applications where the number of features is large, this may make the problem computationally infeasible. Thus, in most applications we need to choose a smaller set of concept lattices or (crisp) agendas as a basis, that is, the set of (crisp) concept lattices on which the weight functions are defined. We propose the following strategies for this choice. \n\n\\begin{enumerate}\n \\item \\textbf{Choosing agendas that consist of a small number of features.} In this strategy, we choose the (crisp) agendas consisting of at most $\\alpha$ features, for some fixed $\\alpha\\ll |X|$, to construct the basis concept lattices. This is based on the idea that tasks like classification or outlier detection can be performed with good accuracy by considering only a small number of features together. This is especially the case with tasks involving epistemic components, as humans use a limited number of features in combination for basic tasks like comparison and classification. As these agendas consist of a small number of features, the number of concepts in these concept lattices is small. This makes the computational complexity low for most algorithms operating on concept lattices. Thus, this method can be applied to find agendas even when the algorithms have high computational complexity for lattices with a large number of concepts. In some situations, it may also be useful to add the full concept lattice (the lattice corresponding to the full feature set $X$) to the set of basis lattices. This allows us to consider the full concept lattice with all available information for the task at hand while having the possibility of giving higher or lower (compared to other features) importance to some small subsets of features. For example, if the weights attached to all the lattices except those given by the agendas $\\{f_1\\}$ and $X$ are close to $0$ and the weights assigned to these two agendas are similar, this corresponds to the agenda in which the set of all features and $\\{f_1\\}$ are the only important sets of features. Thus, the concept lattice based on $f_1$ alone would be of high significance. \n \n \\item \\textbf{Choosing important agendas based on prior or expert knowledge.} For some tasks, we may have prior or expert knowledge assigning different importance or priority to some lattices or agendas. In such cases, these lattices are taken as the set of basis lattices. This provides us with a way to incorporate prior or expert knowledge into other algorithms using formal concept analysis. \n \\item \\textbf{Choosing agendas adaptively.} In this strategy, we start with a set of agendas given by all the sets consisting of at most $\\alpha$ features for some small $\\alpha$ (usually taken as 1). We use machine learning to learn the weights assigned to them, and then drop all the ones which get assigned a very low weight (after normalization). We then consider agendas consisting of any set of features that is a subset of the union of the agendas that were not removed in the first step. Choosing these agendas can be interpreted as considering combinations of features that are deemed important in the first learning step. We then repeat the learning process with this new set of lattices. We keep repeating this process until all the agendas (lattices) added in the last step get assigned low weights or we reach the full feature set $X$ (the full concept lattice). 
In this way, we recursively check the possible combinations of agendas deemed to be important so far in the next recursive step. This method works on the assumption that if a feature is not important on its own, then it is unlikely to be part of a set of features that is important. However, this assumption may fail in several situations. In such cases, this method should not be used to choose a basis. \n \\end{enumerate}\n There can be other effective strategies for choosing basis lattices for different tasks and algorithms. \n \\subsection{Learning algorithm}\n Once the set of possible agendas (or concept lattices) is chosen, we apply some \n classification or outlier detection algorithm on each of these. For every lattice $\\mathbb{L} \\in \\mathcal{R}$, we start by assigning it a random weight $w \\in \\mathbb{R}$. Let $Alg$ be any algorithm which performs classification or outlier detection for a fixed concept lattice.\n \n Suppose $Alg$ is a classification (resp. outlier detection) algorithm classifying a set $A$ of objects into $n$ classes using concept lattices. For any object $a$ and a class $k$, let $Alg_k(a, \\mathbb{L})$ (resp. $Alg(a, \\mathbb{L})$) denote the membership of the object $a$ in the class $k$ (resp. the outlier degree) according to the classification algorithm $Alg$ acting on the lattice $ \\mathbb{L}$. Notice that we allow for our classifiers (resp. outlier detection algorithms) to be interpreted as fuzzy or probabilistic, such that the membership value (resp. outlier degree) of $a$ belongs to $[0,1]$. For an algorithm $Alg$ with crisp output, the value $Alg_k(a, \\mathbb{L})$ (resp. $Alg(a, \\mathbb{L})$) will be either $0$ or $1$. For a given weight function $w: \\mathcal{R} \\to \\mathbb{R}$, we say that the membership of $a$ in the class $k$ (resp. the outlier degree of $a$) assigned by the algorithm $Alg$ acting on a non-crisp categorization described by $w$ is \n \\begin{equation}\n \\label{eqn:outputs}\n out_k(a,w) = \\frac{\\sum_{\\mathbb{L} \\in \\mathcal{R} } w(\\mathbb{L})Alg_k(a, \\mathbb{L})}{\\sum_{\\mathbb{L} \\in \\mathcal{R} }w(\\mathbb{L})}.\n \\end{equation}\n Intuitively, this corresponds to taking the weighted sum of the results given by $Alg$ on the lattices, with weights provided by the agenda $w$. Let $loss$ be a loss function for a given classification task, and let $loss(out)$ be the total loss for the classification (resp. outlier detection) when classes (resp. outlier degrees) are assigned by $ out_k(a,w) $ (resp. $ out(a,w) $). We use a gradient descent method to learn the agenda $f_0$ that minimizes the loss. We then use the learnt agenda $f_0$ to assign a class to an object; that is, for any test object $b$, its predicted membership in class $k$ (resp. outlier degree) is $ out_k(b,f_0) $ (resp. $out(b, f_0)$). 
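\n\nThe following minimal sketch (ours, purely illustrative, and not the implementation used in this paper) implements this weight-learning step, assuming that the per-lattice outputs $Alg_k(a, \\mathbb{L}_i)$ have been precomputed and that the PyTorch library is available for automatic differentiation; all names and dimensions are hypothetical. The generic procedure is summarized in the pseudocode below.\n\\begin{verbatim}\nimport torch\n\nn_objects, n_classes, n_lattices = 8, 2, 5\n# preds[a, k, i] = Alg_k(a, L_i); random placeholders in [0, 1]\npreds = torch.rand(n_objects, n_classes, n_lattices)\nlabels = torch.randint(0, n_classes, (n_objects,))\n\nw = torch.randn(n_lattices, requires_grad=True)  # random initial weights\nopt = torch.optim.SGD([w], lr=0.1)\n\nfor epoch in range(100):  # M training epochs\n    # out_k(a, w) as in the equation above: weighted sum of the\n    # per-lattice outputs, normalized by the sum of the weights\n    # (assumed to stay away from zero during training)\n    out = (preds * w).sum(dim=-1) \/ w.sum()\n    loss = torch.nn.functional.cross_entropy(out, labels)\n    opt.zero_grad()\n    loss.backward()  # gradient of the loss w.r.t. the agenda weights\n    opt.step()\n\\end{verbatim}\nHere the cross-entropy loss is just one possible choice for $loss$.\n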
\n \n\\begin{algorithm}\n\\footnotesize\n\\caption{Meta-Learning Algorithm for Interrogative Agendas}\n\\hspace*{\\algorithmicindent} \\textbf{Input:} a set of objects $A$, a set of features $X$, a training set $T\\subseteq A$, and a map $y:T \\to C$ representing the labels on the training set, an algorithm $Alg$ that takes as input an object and a concept lattice in $\\mathcal{R}$, and outputs an element in $\\mathbb{R}^C$ representing its prediction for each class; a loss function $loss$ that compares two classifications and outputs a real number, and a number of training epochs $M$.\\\\\n\\hspace*{\\algorithmicindent} \\textbf{Output:} A model that classifies objects in $A$.\n\\begin{algorithmic}[1]\n\\Procedure{Train}{$A$, $X$, $T$, $y$, $Alg$, $loss$, $M$}\n \\State $\\mathbb{L}_1,\\ldots,\\mathbb{L}_n \\leftarrow $ \\textbf{compute} the concept lattices of $\\mathcal{R}$\n \\State \\textbf{let} $predictions$ be an empty map from $A$ to $\\mathbb{R}^C$\n \\State \\textbf{let} $w$ be an array of weights of length $n$ initialized with random values in $\\mathbb{R}$\n \\For{$e = 1, \\ldots, M$ } \n \\For{$a \\in T$, $k \\in C$}\n \\State $predictions[a][k] \\leftarrow \\frac{\\sum_{i = 1}^n w(\\mathbb{L}_i)Alg_k(a, \\mathbb{L}_i)}{\\sum_{i = 1}^n w(\\mathbb{L}_i)}$\n \\EndFor\n \\State \\textbf{update} $w$ with an iteration of gradient descent using $loss(predictions)$\n \\EndFor\n\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}\nA generic algorithm for outlier detection can be given in a similar manner. \n\n\\subsection{Example} \nLet us consider the following toy data table providing some information on different types of apples. It contains information on the color, volume, sweetness, origin, and price of the apples. We assume that all apples under consideration are either green or red. For conceptual scaling, we divide sweetness, price, and volume into low, medium, and high. This converts these continuous-valued attributes into discrete-valued ones. The set of features is obtained by considering each value of an attribute as a different feature. For example, high volume, red color, and medium price are a few of them. \n\n\\begin{table*}[h]\n \\label{Table:data table}\n \\centering\n \\begin{tabular}{c c c c c c}\n \\hline\n \\textbf{Type}& \\textbf{Color}&\\textbf{ Volume}&\\textbf{ Sweetness} &\\textbf{ Local}& \\textbf{Price}\\\\\n \\hline\n 1 & red & High & High & Yes & Medium\\\\\n 2 & green & High & High & Yes & Medium \\\\\n 3 & red & Medium & Medium & Yes& Medium\\\\\n 4 & green & Low & High & No& Medium\\\\\n 5 & green & High & Medium & No & Low\\\\\n 6 & red & Medium & Low & Yes& Low \\\\\n 7 & green & High & Medium & Yes &Low \\\\\n 8 & green & High & Medium & Yes& High\\\\\n \\hline\n \\end{tabular}\n \\caption{Exemplary data table containing the information on different types of apples w.r.t. attributes such as color, volume, sweetness, et cetera. }\n\\end{table*}\n\nLet $A$ and $X$ be the sets of all types of apples and of features, respectively. The (non-crisp) agendas of interest to us are the ones assigning mass to an attribute and not to an individual feature. That is, we consider basis lattices corresponding to feature sets that contain all the values for a given many-valued attribute. As an example, if a set $Y \\subseteq X$ corresponding to a basis lattice contains the feature high volume, then it must also contain the features low and medium volume. We use volume to denote the set of features \\{high volume, low volume, medium volume\\}. 
A similar convention is used for the other attributes as well. \n\n\nLet $I \\subseteq A \\times X$ be the incidence relation and\nlet $P$ be a customer. Suppose we are interested in classifying apples into types that customer $P$ likes (class 1) and does not like (class 2). Given a formal context (concept lattice) $\\mathbb{P}= (A, X, I \\cap A \\times Y)$, describing a categorization of these types of apples for a given agenda of interest $Y$, we use the following simple algorithm to predict the class for a new type of apple. \nLet $A_+$ and $A_-$ be the sets of apples known to be in class 1 and class 2, respectively (from the training set). A set of features $H \\subseteq Y$ is said to be a positive (resp. negative) hypothesis w.r.t.~a lattice $(A, X, I \\cap A \\times Y)$ iff $H$ is Galois-stable, $I^{(0)}[H]$ is non-empty, and $I^{(0)}[H] \\cap A_-=\\emptyset$ (resp.~$I^{(0)}[H] \\cap A_+=\\emptyset$). For any new element $t$, \nwe put it in class 1 (resp. class 2) if the category $ I^{(1)}[\\{t\\}]$ contains only positive (resp. negative) hypotheses. The algorithm is inconclusive when it contains neither type of hypothesis (no information) or contains both types of hypotheses (inconsistent information). \n\n Suppose the classification of apples of types 1-8 into classes 1 and 2 for customer $P$ is given by $Class \\, 1=\\{1,2,4,5, 7\\}$ and $Class\\, 2=\\{3,6,8\\}$, and suppose also that we use the full concept lattice (that is, the agenda $Y=X$). Let $t_0$ be a new type of apple that is green, has high volume, high sweetness, is local, and has a high price. Consider the hypotheses $H_1=$ \\{High sweetness\\} and $H_2=$ \\{Green, local\\}, which are both contained in $ I^{(1)}[\\{t_0\\}]$. The hypothesis $H_1$ is positive while $H_2$ is negative. Thus, the above classification algorithm cannot classify this object, as the available information is inconsistent. However, in many cases, some subsets of features are of much more importance to a customer than others. For example, from the above classification, it is hinted that the customer $P$ considers sweetness and price as more important features than color or location. Our algorithm for learning agendas can help to recognize this difference in the importance of different sets of features and allow us to classify such elements.\n \n Suppose we use our method to find the best categorization (or agenda) for the completion of this task using the above classification algorithm, with the set of basis lattices consisting of the lattices given by agendas comprising one attribute (as discussed earlier, one attribute can correspond to multiple features due to scaling). We start with random weights assigned to each of these lattices. We then use the classification algorithm described above to classify new types of apples into classes 1 and 2 using each of these lattices. We then sum over the weights of the lattices in which elements are assigned to either class. The new object is assigned to the class which has a higher mass (the algorithm is inconclusive if such a class does not exist). We use machine learning (gradient descent) to train the algorithm to find the best weights for this classification task. \n \nDuring the training phase, our algorithm can (generally) learn that the attribute (or set of features) \\{sweetness\\} matters much more than the other features to the customer $P$. Thus, a high weight will be attached to the lattice with agenda \\{sweetness\\}, and the above algorithm in combination with our method assigns $t_0$ to class 1. 
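\n\nThe hypothesis-based prediction above can be made concrete with a small brute-force sketch. The following Python snippet is our own illustration of this example (the feature encoding and all names are ours, and the subsets of features are enumerated naively, which is viable only for toy data):\n\\begin{verbatim}\nimport itertools\n\n# Apple types 1-8 from the data table, encoded as sets of features.\nobjects = {\n    1: {'red', 'vol:high', 'sweet:high', 'local', 'price:med'},\n    2: {'green', 'vol:high', 'sweet:high', 'local', 'price:med'},\n    3: {'red', 'vol:med', 'sweet:med', 'local', 'price:med'},\n    4: {'green', 'vol:low', 'sweet:high', 'price:med'},\n    5: {'green', 'vol:high', 'sweet:med', 'price:low'},\n    6: {'red', 'vol:med', 'sweet:low', 'local', 'price:low'},\n    7: {'green', 'vol:high', 'sweet:med', 'local', 'price:low'},\n    8: {'green', 'vol:high', 'sweet:med', 'local', 'price:high'},\n}\nA_pos, A_neg = {1, 2, 4, 5, 7}, {3, 6, 8}\n\ndef extent(H, Y):  # I^(0)[H]: objects having all features of H\n    return {a for a, ft in objects.items() if H <= (ft & Y)}\n\ndef intent(S, Y):  # I^(1)[S]: features in Y shared by all of S\n    ft = set(Y)\n    for a in S:\n        ft &= objects[a]\n    return ft\n\ndef classify(t_features, Y):\n    base = t_features & Y\n    pos = neg = False\n    for r in range(1, len(base) + 1):\n        for H in map(set, itertools.combinations(base, r)):\n            ext = extent(H, Y)\n            if not ext or intent(ext, Y) != H:\n                continue  # skip empty or non-Galois-stable H\n            if not ext & A_neg:\n                pos = True  # positive hypothesis found\n            elif not ext & A_pos:\n                neg = True  # negative hypothesis found\n    if pos != neg:\n        return 'class 1' if pos else 'class 2'\n    return 'inconclusive'\n\nX = set().union(*objects.values())\nt0 = {'green', 'vol:high', 'sweet:high', 'local', 'price:high'}\nprint(classify(t0, X))  # full agenda\nprint(classify(t0, {'sweet:low', 'sweet:med', 'sweet:high'}))\n\\end{verbatim}\nWith the agenda restricted to the single attribute sweetness, as the learnt weights suggest, the sketch returns class 1 for $t_0$, while the full agenda remains inconclusive.\n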
\nAdding this method on top of a classification algorithm may give a better classification (that is, more elements classified correctly with a given number of training samples), provided our learnt information ``sweetness is much more important for $P$ in decision-making'' is true. \n\nSimilarly, higher (resp. lower) masses attached to agendas consisting of different sets of a single attribute are helpful in better categorization when this attribute is more (resp. less) important for the customer. Thus, using machine learning techniques (for example, gradient descent when possible) to learn the best possible agenda to provide a categorization to complement the classification algorithm can improve its accuracy with less training. Considering more basis lattices may further improve the accuracy and sample complexity. For example, it can be seen that the types 5 and 7, which have medium sweetness and low price, belong to class 1. This provides us with another likely hypothesis that the customer likes apples that are of medium sweetness (not necessarily high) but have a low price. This hints to us that the agenda \\{sweetness, price\\} may be of significant importance to the customer. In case this agenda is indeed more important to the agent, the learning would assign it a high weight during training and thus allow us to make more accurate predictions with fewer samples. However, increasing the number of basis lattices may increase the computational complexity significantly, meaning that such a decision needs to be made judiciously. \n\nThis simple example shows that the classification algorithm described above can be improved in terms of accuracy, sample complexity, and explainability by adding a learning step for finding out the best agenda for categorization. Adding this step to the different algorithms discussed in Section \\ref{Sec:Classification and outlier detection using concept lattice}, used for classification and outlier detection using concept lattices, can improve these algorithms in a similar manner. This is especially the case for the tasks in which the importance of different features may be hard to estimate beforehand. The obtained agendas can be defined formally using the logical framework described in \\cite{FLexiblecat2022}. In that paper, a logical model was used to represent deliberation between agents with different agendas. This framework can also be used to model deliberation or interaction between different learning algorithms by aggregating learnt agendas using different techniques described in \\cite{FLexiblecat2022}. The agendas inferred by our learning algorithm can be used for further tasks like aggregation from different sources. For example, if for two different classifiers the agendas learned are given by the mass functions $m_1$ and $m_2$ on $\\mathcal{P}(X)$, then a combined classifier that takes both into account can be obtained by choosing the agenda $F(m_1, m_2)$, where $F$ is a suitable Dempster-Shafer combination rule \\cite{sentz2002combination,smets1993belief,denoeux2006cautious}, and then applying the classification algorithm to the resulting lattice. \n\n\\section{Conclusion and future directions} \\label{sec:Conclusion}\nIn this paper, targeting the explainability line of hybrid intelligence research~\\cite{9153877}, we proposed a meta-learning algorithm to learn a \"good\" (interrogative) agenda for categorization (which is used by a potential FCA-based classification or outlier detection algorithm). 
Adding such a learning step to a given algorithm allows us to improve the accuracy and sample complexity of the procedure while also making it more explainable. On the empirical side, a performance evaluation and an ablation study on the results of employing different FCA-based classification and outlier detection algorithms are an avenue of future research. Another line of investigation is the transferability analysis of \"good\" agendas, e.g., how much knowledge is transferred and how good the data efficiency is when such an agenda is used on previously unseen environments\/categorizations. It is also worth extending this methodology towards other interesting application domains such as knowledge discovery, data visualization, information retrieval, etc.\n\nOn the theoretical side, this framework can be used to model deliberation between agendas learnt from different algorithms, providing us with a way to study their interaction, comparison, or combination. Within the interpretation of taking the concept lattice as expert knowledge, the learnt agendas can also be aggregated or compared with the agendas of different experts, allowing us to incorporate learning and expert knowledge in categorization. From a multiagent systems perspective, it is especially useful to model subjective categorizations involving multiple agents (human experts and algorithms) with different agendas or goals interacting with each other. In future work, we are considering investigations in a variety of directions, e.g., investigating desirable properties of various aggregation mechanisms, representational power such as proportionality and fairness of induced agendas for multiple parties, convergence and robustness guarantees for the \"good\" agendas, computational complexity analysis on hard and easy cases for (non-)crisp agendas, and extending our method to a more general framework in order to tackle the problem of feature selection in a uniform way.\n\nThe meta-algorithm described in the present paper is currently employed in the development of an outlier detection algorithm with good results. So far, it has been tested on the datasets from the ELKI toolkit \\cite{Campos2016} and compared against the algorithms discussed there. A detailed report of the results will be available in the future.\n\n\n\n\\begin{acknowledgments}\n Erman Acar is generously supported by the Hybrid Intelligence Project which is financed by the Dutch Ministry of Education, Culture and Science with project number 024.004.022. Krishna Manoorkar is supported by the NWO grant KIVI.2019.001 awarded to Alessandra Palmigiano.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\label{sec:intro}Introduction}\nRecently the Collider Detector at Fermilab (CDF) Collaboration has measured the mass of the $W$ boson to be $80.4335\\pm 0.0094~\\mathrm{GeV}$~\\cite{CDF:2022hxs}, which deviates from the Standard Model (SM) prediction of $80.357\\pm 0.006~\\mathrm{GeV}$~\\cite{ParticleDataGroup:2020ssz} and seems to indicate new physics beyond the SM. 
And there are lots of works appeared to discuss this topic~\\cite{Cirigliano:2022qdm,Borah:2022obi,Chowdhury:2022moc,Arcadi:2022dmt,Zhang:2022nnh,Mondal:2022xdy,Nagao:2022oin,Kanemura:2022ahw,Kawamura:2022uft,Peli:2022ybi,Ghoshal:2022vzo,Perez:2022uil,Zheng:2022irz,Ahn:2022xeq,Heo:2022dey,Crivellin:2022fdf,Endo:2022kiw,Du:2022brr,Cheung:2022zsb,DiLuzio:2022ziu,Balkin:2022glu,Biekotter:2022abc,Krasnikov:2022xsi,Paul:2022dds,Babu:2022pdn,DiLuzio:2022xns,Bagnaschi:2022whn,Heckman:2022the,Lee:2022nqz,Cheng:2022jyi,Bahl:2022xzi,Song:2022xts,Asadi:2022xiy,Athron:2022isz,Sakurai:2022hwh,Fan:2022yly,Zhu:2022scj,Arias-Aragon:2022ats,Cacciapaglia:2022xih,Blennow:2022yfm,Strumia:2022qkt,Athron:2022qpo,Yang:2022gvz,deBlas:2022hdk,Tang:2022pxh,Du:2022pbp,Campagnari:2022vzx,Zhu:2022tpr,Fan:2022dck}.\nIn this work we will explore physics beyond SM which can give the observed mass of the $W$ boson at tree level. \n\nIn the SM, the mass of the $W$ boson and the $Z$ boson are given by the Higgs mechanism. Since $Z$ is combination of the $B$ boson and the $W^{3}$ boson, which is a component of the gauge triplet $W^{i}$, the mass of the $W$ boson and the $Z$ boson are connected. And it is difficult to change the mass of the $W$ boson solely. One way to alter the mass of the $W$ boson is to mix the $Z$ boson with an other vector boson. Mix the $Z$ boson with another boson will inevitably alter the mass expression of the $Z$ boson which may alter the value of the $\\mathrm{SU}(2)_L$ gauge coupling and thus the mass of the $W$ boson. There are usually two kinds of mixing: direct mixing in mass matrix and kinetic mixing. Though the normalization of the kinetic mixing terms will result in mass mixing, we will consider two models in this work: Derivative Portal Dark Matter (DPDM) model~\\cite{Zeng:2022llh} and the U(1) model~\\cite{Holdom:1990xp,Lao:2020inc}. In these two models, the extra gauge boson are connects to the SM through kinetic mixing to $Z$ boson and $B$ boson respectively. The kinetic mixing will alter the mass expression of the $Z$ boson and thus the mass of the $W$ boson at tree level. Since electroweak oblique parameters have a strong constraint on electroweak physics, we will also consider the electroweak oblique parameters constraints to these models. For the DPDM model, we also consider constraints from the observed Dark Matter (DM) relic density. \n\nThis work is organized as follows: In Sec.~\\ref{sec:general} we generally discuss the mechanism that the mixing between extra boson and $Z$ boson will change the mass of the $W$ boson. In Sec.~\\ref{sec:bsm} we explore two models and discuss their capability of altering the $W$ mass. As well as explore constraints from electroweak oblique parameters and DM relic density. And we conclude in Sec.~\\ref{sec:con} \n\\section{general discussion of prediction of the mass of $W$ boson}%\n\\label{sec:general}\nIn this section we will discuss in general how can an extra boson mix with the $Z$ boson will change the mass of $W$ boson. To see this we first write down the mass of the $W$ boson $m_{W}$ and the mass of the $Z$ boson $m_{Z}$ given by SM:\n\\begin{eqnarray}\n m_{W}^2=\\frac{1}{4}g^2v^2,\\ m_{Z}^2=\\frac{1}{4}(g^2+g^{\\prime 2})v^2.\n\\end{eqnarray}\nWhere $g$ and $g^{\\prime}$ are the gauge couplings of $\\mathrm{SU}(2)_\\mathrm{L}$ and $\\mathrm{U}(1)_\\mathrm{Y}$. And $v$ is the vacuum expectation value (vev) of the Higgs boson. 
When choosing the Fermi coupling constant $G_{F}$, the $Z$ boson mass $m_{Z}$ and the fine-structure constant $\\alpha$ as input parameter, the $W$ boson mass will then be determined. Because \n\\begin{eqnarray}\n G_{F}=\\frac{1}{\\sqrt{2}v^2 },\\ e=\\sqrt{4\\pi\\alpha} =\\frac{gg^{\\prime}}{\\sqrt{g^2+g^{\\prime 2}} }.\n\\end{eqnarray}\nGoing beyond the SM, we will mix the $Z$ boson with another vector boson. After that the real mass of the $Z$ will be the square root of one of the eigenvalues of the following mass matrix:\n\\begin{eqnarray}\n \\begin{pmatrix}\n\t\\frac{1}{4}(g^2+g^{\\prime 2})v^2&b\\\\\n\tb&a\n \\end{pmatrix}.\n\\end{eqnarray}\nWhere we have used $a$ and $b$ to denote some general mass term. The eigenvalues of the mass matrix can be written as: \n\\begin{eqnarray}\n m_{Z,Z^{\\prime}}^2=\\frac{1}{2}\\left( \\frac{1}{4}(g^2+g^{\\prime 2})v^2+a\\pm \\sqrt{\\left( \\frac{1}{4}(g^2+g^{\\prime 2})v^2+a \\right)^2-a(g^2+g^{\\prime 2})v^2+4b^2} \\right) \\label{mass}.\n \n\\end{eqnarray}\nDefine $c=\\frac{1}{4}(g^2+g^{\\prime 2})v^2$, we have a compact form of $m_{Z,Z^{\\prime}}^2=\\frac{1}{2}\\left( a+c\\pm\\sqrt{(a-c)^2+4b^2} \\right)$. \nAnd we can see the heavier mass of $m_{Z,Z^{\\prime}}$ will be bigger than both $a$ and $c$, and the lighter mass of $m_{Z,Z^{\\prime}}$ will be smaller than both $a$ and $c$. Therefore in order to have a bigger $c$, since observation of the $W$ mass indicate larger $g$, the value of $a$ must be lager than $c$. And the mass of $Z$ boson should corresponds to the minus sign in Eq.~\\eqref{mass}. \nAdopting the input parameters as $G_{F}=1.1663787*10^{-5}~\\mathrm{GeV}^{-2},\\ m_{Z}=91.1876~\\mathrm{GeV},\\ \\alpha\\approx 1\/128$~\\cite{ParticleDataGroup:2020ssz}, we can draw a blue band which saturate the observed mass of the $W$ boson in $3\\sigma$ confidence level in Fig.~\\ref{fig:abband}.\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{abband}\n \\caption{Band which gives the $W$ mass between $80.4053$ and $80.4617$. }\n \\label{fig:abband}\n\\end{figure}\n\nActually we can calculate the analytic relation between $a$ and $b$ by taking the mass of $W$ boson $m_{W}$ as an input parameter. From Eq.~\\eqref{mass} we can write:\n\\begin{eqnarray}\n b^2&&=c(a-m_{Z}^2)+m_{Z}^{4}-m_{Z}^2a\\nonumber\\\\\n \n \n\t&&=\\frac{4m_{W}^4}{4m_{W}^2-e^2v^2}(a-m_{Z}^2)+m_{Z}^{4}-m_{Z}^2a\\label{abconstranit}.\n\\end{eqnarray}\nThen we can given constraint to models beyond SM according to Eq.~\\eqref{abconstranit}. Actually the above discussion does not take loop corrections from SM into consideration. Considering the loop corrections from SM we should replace $m_{W}$ in Eq.~\\eqref{abconstranit} with $m_{W}-\\delta m_{W} $, where $\\delta m_{W}$ represents the loop corrections to $m_{W}$ from SM. \n\\section{models beyond SM}%\n\\label{sec:bsm}\nIn this section we will explore two models beyond SM which mix the $Z$ boson with an extra vector boson and might give the observed $W$ boson mass. We also consider other constraints like electroweak oblique parameters constraint and DM relic density constraint. \n\\subsection{Derivative Portal Dark Matter}\n\\label{sub:u_1_model}\nThe DPDM model extends the SM with an extra vector boson which links the dark sector and the SM through its kinetic mixing with the $Z$ boson. 
The relevant Lagrangian of the DPDM model can be written as~\\cite{Zeng:2022llh}:\n\\begin{eqnarray}\n \\mathcal{L}=&&-\\frac{1}{4}Z^{\\mu\\nu}Z_{\\mu\\nu}-\\frac{1}{4}Z^{\\prime\\mu\\nu}Z^{\\prime}_{\\mu\\nu}-\\frac{\\epsilon}{2} Z^{\\mu\\nu}Z_{\\mu\\nu}^{\\prime}\\\\\n\t&&+\\sum\\limits_{f} Z_{\\mu}\\bar{f}\\gamma^{\\mu}(g_{V}-g_{A}\\gamma^{5})f+g_{\\chi}Z_{\\mu}^{\\prime}\\bar{\\chi}\\gamma^{\\mu}\\chi\\nonumber\\\\\n\t&&+\\frac{1}{2}m_{Z}^2Z_{\\mu}Z^{\\mu}+\\frac{1}{2}m_{Z^{\\prime}}^2Z_{\\mu}^{\\prime}Z^{\\prime\\mu}-m_{\\chi}\\bar{\\chi}\\chi\\nonumber.\n\\end{eqnarray}\nAfter normalization of the kinetic terms, the kinetic mixing between $Z$ and $Z^{\\prime}$ actually results in mass mixing between them. The kinetic terms of the Lagrangian can be normalized by:\n\\begin{eqnarray}\n K=\\begin{pmatrix}\n -k_1&k_2\\\\\n k_1&k_2\n \\end{pmatrix},\n\\end{eqnarray}\nwhere $k_1=1\/\\sqrt{2-2\\epsilon} $ and $k_2=1\/\\sqrt{2+2\\epsilon} $. This operation results in the following mass matrix for the two vector bosons: \n\\begin{eqnarray}\n\t \\begin{pmatrix}\n\t k_1^2M_1&k_1k_2M_2\\\\\n\t k_1k_2M_2&k_2^2M_1\n\t \\end{pmatrix},\n\\end{eqnarray}\nwhere $M_1=m_{Z}^2+m_{Z^{\\prime}}^2$ and $M_2=m_{Z^{\\prime}}^2-m_{Z}^2$. \nOne can use an orthogonal matrix $O$ to diagonalize the mass matrix, and $O$ can be defined as \n\\begin{eqnarray}\n\t O=\\begin{pmatrix}\n\t\t\\cos \\theta&\\sin \\theta\\\\\n\t\t-\\sin \\theta& \\cos \\theta\n\t \\end{pmatrix},\\ \\text{with}\\ \\tan 2\\theta=\\frac{2k_1k_2M_2}{(k_2^2-k_1^2)M_1}.\n\\end{eqnarray}\nTherefore, according to Eq.~\\eqref{abconstranit}, we can constrain $m_{Z^{\\prime}}$ and $\\epsilon$ via:\n\\begin{eqnarray}\n k_1k_2M_2=\\sqrt{(k_1^2M_1-m_{Z}^{exp\\ 2})(k_2^2M_1-m_{Z}^{exp\\ 2})}\\label{DPDMcons}, \n\\end{eqnarray}\nwhere we have used $m_{Z}^{exp}$ to represent the experimentally observed mass of the $Z$ boson, to distinguish it from the Lagrangian parameter $m_{Z}$. \n\nApart from giving the observed $W$ boson mass, we will also calculate the tree-level $S,T,U$ constraints on this model. The neutral-current coupling between the $Z$ boson and the SM fermions in the DPDM model can be written as:\n\\begin{eqnarray}\n L_{NC,Zff}&&=\\sum\\limits_{f} (-k_2\\sin \\theta -k_1\\cos \\theta) \\hat{Z}_{\\mu}\\bar{f}\\gamma^{\\mu}(g_{V}-g_{A}\\gamma^{5})f\\\\\n\t &&=\\sum\\limits_{f} (-k_2\\sin \\theta -k_1\\cos \\theta) \\hat{Z}_{\\mu}\\bar{f}\\gamma^{\\mu}\\frac{e}{s_{w}c_{w} }(T^{3}_{f}\\frac{1-\\gamma^{5}}{2}-Q_{f}s_{w}^2 )f,\n\\end{eqnarray}\nwhere $\\hat{Z}_{\\mu} $ is the mass eigenstate of the $Z$ boson. The form of the charged current in the DPDM model is the same as in the SM. 
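\n\nEquation~\\eqref{DPDMcons} can be cross-checked numerically. The following sketch (ours, purely illustrative; masses in GeV, using \\texttt{numpy}) fixes $\\epsilon$ and $m_{Z^{\\prime}}$ near the best-fit point quoted below, and solves by bisection for the Lagrangian parameter $m_{Z}$ at which the lighter mass eigenvalue equals $m_{Z}^{exp}$:\n\\begin{verbatim}\nimport numpy as np\n\nm_Z_exp, m_Zp, eps = 91.1876, 116.0, 0.025\n\ndef light_mass(m_Z):\n    k1, k2 = 1 \/ np.sqrt(2 - 2 * eps), 1 \/ np.sqrt(2 + 2 * eps)\n    M1, M2 = m_Z**2 + m_Zp**2, m_Zp**2 - m_Z**2\n    M = np.array([[k1**2 * M1, k1 * k2 * M2],\n                  [k1 * k2 * M2, k2**2 * M1]])\n    return np.sqrt(np.linalg.eigvalsh(M)[0])  # lighter eigenvalue\n\nlo, hi = 80.0, m_Zp  # bracket for the Lagrangian parameter m_Z\nfor _ in range(60):  # bisection on light_mass(m_Z) = m_Z_exp\n    mid = 0.5 * (lo + hi)\n    lo, hi = (mid, hi) if light_mass(mid) < m_Z_exp else (lo, mid)\nprint(lo)  # Lagrangian m_Z reproducing the observed Z mass\n\\end{verbatim}\nThe heavier eigenvalue then gives the physical $Z^{\\prime}$ mass at this parameter point.\n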
\nUsing the effective-Lagrangian techniques given in~\\cite{burgess1994model}:\n\\begin{eqnarray}\n &&\\mathcal{L}_{CC, Wff}=-\\frac{e}{\\sqrt{2} \\hat{s}_{w}}(1-\\frac{\\alpha S}{4(\\hat{c}_{w}^2-\\hat{s}_{w}^2)}+\\frac{\\hat{c}_{w}^2\\alpha T}{2(\\hat{c}_{w}^2-\\hat{s}_{w}^2)}+\\frac{\\alpha U}{8\\hat{s}_{w}^2})\\sum\\limits_{ij}V_{ij}\\bar{f}_{i}\\gamma^{\\mu}\\gamma_{L}f_{j}W_{\\mu}^{\\dagger}+\\mathrm{c.c.},\\\\ \n&& \\mathcal{L}_{NC, Zff}=\\frac{e}{\\hat{s}_{w}\\hat{c}_{w}}(1+\\frac{\\alpha T}{2})\\sum\\limits_{f}\\bar{f}\\gamma^{\\mu}[T^{3}_{f}\\frac{1-\\gamma^{5}}{2}-Q_{f}(\\hat{s}_{w}^2+\\frac{\\alpha S}{4(\\hat{c}_{w}^2-\\hat{s}_{w}^2)}-\\frac{\\hat{c}_{w}^2\\hat{s}_{w}^2\\alpha T}{\\hat{c}_{w}^2-\\hat{s}_{w}^2})]fZ_{\\mu},\n\\end{eqnarray}\nwhere $\\hat{s}_{w}\\hat{c}_{w}m_{\\hat{Z}}=s_{w}c_{w}\\frac{1}{2}\\sqrt{ g^2+g^{\\prime 2}}v=s_{w}c_{w}m_{Z}$, we can write $S$, $T$ and $U$ in the DPDM model as\n\\begin{eqnarray}\n \\alpha T=2(\\frac{\\hat{s}_{w}\\hat{c}_{w}}{s_{w}c_{w}}(-k_2\\sin \\theta-k_1\\cos \\theta)-1)\\\\\n \\alpha S=4\\hat{c}_{w}^2\\hat{s}_{w}^2\\alpha T+4(\\hat{c}_{w}^2-\\hat{s}_{w}^2)(s_{w}^2-\\hat{s}_{w}^2)\\\\\n \\alpha U=8\\hat{s}_{w}^2(\\frac{\\hat{s}_{w}}{s_{w}}-1+\\frac{\\alpha S}{4(\\hat{c}_{w}^2-\\hat{s}_{w}^2)}-\\frac{\\hat{c}_{w}^2\\alpha T}{2(\\hat{c}_{w}^2-\\hat{s}_{w}^2)}).\n\\end{eqnarray}\nWe constrain the DPDM model with the global fit results given in Table 5 of~\\cite{deBlas:2022hdk}:\n\\begin{eqnarray}\n S=0.005\\pm 0.097,\\ T=0.04\\pm 0.12,\\ U=0.134\\pm 0.087,\n\\end{eqnarray}\nwith the correlation coefficients $\\rho_{ST}=0.91,\\ \\rho_{SU}=-0.65,\\ \\rho_{TU}=-0.88$. \n\nThe DPDM model can naturally escape the stringent constraints from dark matter direct detection due to a cancellation mechanism~\\cite{Cai:2021evx}, and in Fig.~\\ref{fig:DPDMres} we have drawn the constraints from the observed DM relic density, the observed $W$ mass and the electroweak oblique parameters. \n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{DPDMres}\n \\caption{The light blue area is excluded by the Planck experiment~\\cite{Planck:2018vyg}. The red line gives the observed $W$ mass at tree level. The green line takes the SM loop corrections into consideration and gives the observed $W$ mass, with the dashed green lines corresponding to the $3\\sigma$ upper and lower deviations. The blue line gives the observed DM relic density. The magenta contour gives the constraints from the electroweak oblique parameters at 95\\% C.L., with the red star giving the best fit. }\n \\label{fig:DPDMres}\n\\end{figure}\nHere the red line gives the observed $W$ boson mass at tree level alone, and the green line gives the observed $W$ boson mass with the SM loop corrections taken into consideration; the dashed green lines correspond to the $3\\sigma$ upward and downward deviations from the $W$ boson mass. The blue line saturates the observed DM relic density, while the light blue area is excluded by the Planck experiment~\\cite{Planck:2018vyg}. The DM relic density is calculated with the settings $m_{\\chi}=60~\\mathrm{GeV},\\ g_{\\chi}=0.1$ using the numerical tools \\texttt{FeynRules~2}~\\cite{Alloul:2013bka}, \\texttt{MadGraph}~\\cite{Alwall:2014hca}, and \\texttt{MadDM}~\\cite{Ambrogi:2018jqj}. The magenta contour corresponds to the 95\\% C.L. constraints from the electroweak oblique parameters, and the red star represents the best fit of $STU$: $m_{Z^{\\prime}}\\approx 116~\\mathrm{GeV},\\ \\epsilon\\approx 0.0250$. We see that this point also meets the direct calculation of $m_W$ and the observed DM relic density. 
Also from Fig.~\\ref{fig:DPDMres} we see that the magenta area is consistent with the green lines, which reflects the fact that the $STU$ fit contains not only the fitting of the observed $W$ boson mass, but also the constraints from the electroweak couplings. To give the observed $W$ boson mass, $m_{Z^{\\prime}}$ should satisfy $105~\\mathrm{GeV}<m_{Z^{\\prime}}<130~\\mathrm{GeV}$. To make the area with $m_{Z^{\\prime}}>130~\\mathrm{GeV}$ not excluded by the Planck experiment, one can change the DM mass $m_{\\chi}$, and thus the annihilation resonance area will move accordingly. On the other hand, one can increase the extra gauge coupling $g_{\\chi}$ or simply not introduce dark matter in this model.\n\n\\subsection{U(1) model}\n\\label{sub:u_1_model}\nIn the U(1) model there is a gauge boson of an extra $\\mathrm{U}(1)_\\mathrm{X}$ gauge symmetry which connects to the gauge boson of the SM $\\mathrm{U}(1)_\\mathrm{Y}$ symmetry through kinetic mixing. In this section we will adopt the same model setting as~\\cite{Lao:2020inc}. The kinetic terms can then be written as:\n\\begin{eqnarray}\n \\mathcal{L}_{\\mathrm{K}}=-\\frac{1}{4}B^{\\mu\\nu}B_{\\mu\\nu}-\\frac{1}{4}X^{\\mu\\nu}X_{\\mu\\nu}-\\frac{\\epsilon}{2}B^{\\mu\\nu}X_{\\mu\\nu},\n\\end{eqnarray}\nwhere $B_{\\mu}$ and $X_{\\mu}$ are the gauge fields of the $\\mathrm{U}(1)_\\mathrm{Y}$ and $\\mathrm{U}(1)_\\mathrm{X}$ gauge symmetries. There will also be a mass mixing term between $B_{\\mu}$ and $W^{3}_{\\mu}$ after the Higgs field gets its vev. Therefore the mass matrix in the basis $(W_{\\mu}^{3},\\ B_{\\mu},\\ X_{\\mu} )$ can be written as:\n\\begin{eqnarray}\n &&\\frac{1}{2}\\begin{pmatrix}\n W^{3\\mu}&B^{\\mu}&X^{\\mu} \n \\end{pmatrix}\n \\begin{pmatrix}\n\tg^2v^2\/4&-gg^{\\prime}v^2\/4&0\\\\\n\t-gg^{\\prime}v^2\/4&g^{\\prime 2}v^2\/4&0\\\\\n\t0&0&g_{x}^2v_{s}^2\n \\end{pmatrix}\n \\begin{pmatrix}\n W_{\\mu}^{3}\\\\\n B_{\\mu}\\\\\n X_{\\mu} \n \\end{pmatrix}\\nonumber\\\\\n =&&\\frac{1}{2}\\begin{pmatrix}\n W^{3\\mu}&B^{\\mu}&X^{\\mu} \n \\end{pmatrix}K^{-1T}OO^TK^{T}\n \\begin{pmatrix}\n\tg^2v^2\/4&-gg^{\\prime}v^2\/4&0\\\\\n\t-gg^{\\prime}v^2\/4&g^{\\prime 2}v^2\/4&0\\\\\n\t0&0&g_{x}^2v_{s}^2\n \\end{pmatrix}KOO^T K^{-1}\n \\begin{pmatrix}\n W_{\\mu}^{3}\\\\\n B_{\\mu}\\\\\n X_{\\mu} \n \\end{pmatrix}\\\\\n =&&\\frac{1}{2}\\begin{pmatrix}\n A^{\\mu}&Z^{\\mu}&Z^{\\prime\\mu} \n \\end{pmatrix}\n \\begin{pmatrix}\n\t0&0&0\\\\\n\t0&m_{Z}^2&0\\\\\n\t0&0&m_{Z^{\\prime}}^2\n \\end{pmatrix}\n \\begin{pmatrix}\n A_{\\mu}\\\\\n Z_{\\mu}\\\\\n Z^{\\prime}_{\\mu} \n \\end{pmatrix},\\nonumber\\label{massmatrix}\n\\end{eqnarray}\nwhere we have used $K$ to normalize the kinetic terms of $B_{\\mu}$ and $X_{\\mu}$ and used $O$ to diagonalize the mass matrix and transform the fields to their mass eigenstates. The masses of the two massive vector bosons $Z$ and $Z^{\\prime}$ will be:\n\\begin{eqnarray}\n m_{Z,Z^{\\prime}}^2=\n \\frac{1}{8} (\n g^2 v^2+g^{\\prime 2} k_1^2 v^2+g^{\\prime 2} k_2^2 v^2+4 g_x^2 k_1^2 v_s^2+4 g_x^2 k_2^2 v_s^2\\\\\n\\pm\\sqrt{\\left(g^2 v^2+\\left(k_1^2+k_2^2\\right) \\left(g^{\\prime 2} v^2+4 g_x^2 v_s^2\\right)\\right)^2-16 g_x^2 v^2 v_s^2 \\left(g^2 \\left(k_1^2+k_2^2\\right)+4 g^{\\prime 2} k_1^2 k_2^2\\right)}).\\nonumber\n\\end{eqnarray}\nNote that the kinetic mixing between $B_{\\mu}$ and $X_{\\mu}$ will not change the form of the electric charge $e$. The definition of the electric charge can be extracted from the couplings between the photon and the Higgs doublet, 
Which in this model will be:\n\\begin{eqnarray}\n e=g[KO]_{11}=g^{\\prime}[KO]_{21}=\\frac{2g^{\\prime}k_2}{\\sqrt{1+\\frac{4g^{\\prime 2}k_2^2}{g^2}+\\frac{k_2^2}{k_1^2}} }=\\frac{gg^{\\prime}}{\\sqrt{ g^2+g^{\\prime 2}}}.\n\\end{eqnarray}\nWhere $[KO]_{ij}$ represents the element which lies in the $i^{th}$ row and the $j^{th}$ column of matrix $KO$. \nThe neutral-current coupling between $Z$ boson and SM fermions in the U(1) model can be written as:\n\\begin{eqnarray}\n L_{NC,Zff}&&=\\sum\\limits_{f}Z_{\\mu}\\bar{f}\\gamma^{\\mu}(g_{V}-g_{A}\\gamma^{5})f,\\\\\n \\mathrm{with}\\ g_{V}&&=g_{A}+g^{\\prime}[KO]_{22}Q_{f},\\ g_{A}=\\frac{T_{f}^3}{2}(-g^{\\prime}[KO]_{22}+g[KO]_{12}).\n\\end{eqnarray}\nAnd we can read $S,T,U$ as:\n\\begin{eqnarray}\n \\alpha T=\\frac{2\\hat{s}_{w}\\hat{c}_{w}(-g^{\\prime}[KO]_{22}+g[KO]_{12})}{e}-2\\\\\n \\alpha S=\\frac{-4g^{\\prime}[KO]_{22}(\\hat{c}_{w}^2-\\hat{s}_{w}^2)}{-g^{\\prime}[KO]_{22}+g[KO]_{12}}-4\\hat{s}_{w}^2(\\hat{c}_{w}^2-\\hat{s}_{w}^2)+4\\hat{c}_{w}^2\\hat{s}_{w}^2\\alpha T\\\\\n \\alpha U=8\\hat{s}_{w}^2(\\frac{\\hat{s}_{w}}{s_{w}}-1+\\frac{\\alpha S}{4(\\hat{c}_{w}^2-\\hat{s}_{w}^2)}-\\frac{\\hat{c}_{w}^2\\alpha T}{2(\\hat{c}_{w}^2-\\hat{s}_{w}^2)})\n\\end{eqnarray}\n\n\nNow we can give a line which predict the observed $W$ mass in this model in Fig.~\\ref{fig:u1res}.\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{u1res}\n \\caption{The red line gives the observed $W$ at tree level. The green line has taken the SM model loop corrections into consideration and gives the observed $W$ mass, with the dashed green lines correspond to the $3\\sigma$ upper and lower deviation. The magenta contour gives the constraints from electroweak oblique parameters at 99\\% C.L., with the red star gives the best fit. }\n \\label{fig:u1res}\n\\end{figure}\nWhere we also use red dashed line to show that the U(1) model can solely give the observed mass of $W$ boson. And the green line takes the SM loop corrections into consideration and gives observed $W$ boson mass, with the dashed green lines correspond to the $3\\sigma$ upper and lower deviation. The magenta contour stands for the 99\\% C.L. constraints from the electroweak oblique parameters constraints, with the red star being the best fit: $m_{Z^{\\prime}}=139.9~\\mathrm{GeV},\\ \\epsilon=0.068$. From Fig.~\\ref{fig:u1res} we see that the electroweak oblique parameters constraints are in accordance with the direct calculation of $m_{W}$. And there are large area in the U(1) model that can give the observed $W$ boson mass. The lower mass limit and the lower $\\epsilon$ limit is about $m_{Z^{\\prime}}\\approx 109.8~\\mathrm{GeV}$ and $\\epsilon\\approx 0.036$. \n\\section{\\label{sec:con}Conclusion}\nIn this work we have explored the possibility of altering the $W$ boson mas at tree level through mixing between an extra gauge boson and the $Z$ boson. We first give general discussion of the effects from mixing extra boson with $Z$ boson, then explored two realistic models: the DPDM model and the U(1) model. In the DPDM model the extra gauge boson mixes with the $Z$ boson through the kinetic mixing between extra boson and the $Z$ boson, while in the U(1) model the extra gauge boson mixes with the $Z$ boson through the kinetic mixing between extra boson and the $B$ boson. Apart from giving the $W$ boson mass, we also discussed the electroweak oblique parameters constraints for both model, and also explored DM relic density constraints for the DPDM model. 
We find that there are area in both model which can give the observed $W$ boson mass at tree level, and the best fit value for the extra vector boson mass is around $120~\\mathrm{GeV}$. \n\n\\begin{acknowledgments}\n This work is supported by the National Natural Science Foundation of China (NSFC) under Grant Nos. 11875327 and 11905300, the China Postdoctoral Science Foundation under Grant\nNo. 2018M643282, the Fundamental Research Funds for the Central Universities, and the Sun Yat-Sen University Science Foundation.\n\\end{acknowledgments}\n\\section*{Note added}\nDuring the finalizing of this manuscript, we noticed that ~\\cite{Zhang:2022nnh} appeared on arxiv. ~\\cite{Zhang:2022nnh} discusses explaination of the $W$ boson mass with U(1) dark matter model as well as several phenomenology constraints on DM. Our work discusses models with an extra gauge boson which can explain the $W$ boson mass. Apart from the DPDM model, we also discussed the U(1) model, but in different scenarios.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\n One extension of Malliavin calculus from the Brownian motion to general L\\'{e}vy processes was made using the It\\^{o} chaos \n decomposition on the $L_2$-space over the L\\'evy space. This approach was used for\n instance by Nualart and Vives \\cite{nualart-vives}, Privault \\cite{privault_extension}, Benth, Di Nunno, L{\\o}kka, {\\O}ksendal and Proske \n \\cite{benth-dinunno-lokka-oksendal-proske}, Lee and Shih \\cite{lee-shih_product_formula}, Sol\\'e, Utzet and Vives\n \\cite{sole-utzet-vives} and Applebaum\n \\cite{applebaum2}. \n \n The wide interest in Malliavin calculus for L\\'{e}vy processes in stochastics and \n applications motivates the study of an accessible characterization\n of differentiability and fractional differentiability.\n Fractional differentiability is defined by real interpolation between the Malliavin Sobolev space $\\mathbbm{D}_{1,2}$ and $L_2({\\mathbbm{P}})$ and\n we recall the definition in Section \\ref{section:fractional} of this paper. \n Geiss and Geiss \\cite{geiss-geiss} and Geiss and Hujo \\cite{geiss-hujo} have shown that Malliavin differentiability and \n fractional differentiability \n are in a close connection to discrete-time approximation of certain stochastic integrals when the underlying process is a (geometric)\n Brownian motion. Geiss et al. \\cite{geiss-geiss-laukkarinen} proved that this applies also to L\\'{e}vy processes with jumps.\n These works assert that knowing the parameters of \n fractional smoothness allow to design discretization time-nets\n such that the optimal approximation rate can be achieved. \n For details, see \\cite{geiss-geiss}, \\cite{geiss-hujo} and \\cite{geiss-geiss-laukkarinen}.\n \n\n \n Steinicke \\cite{steinicke} and Geiss and Steinicke \\cite{geiss-steinicke} take advantage of the fact that any random variable $Y$ on \n the L\\'evy space can be represented as a functional $Y = F(X)$ of the L\\'evy process $X$, where $F$ is a real valued measurable mapping \n on the Skorohod space of right continuous functions. Let us restrict to the case that $F(X)$ only depends on the jump part of $X$. 
\n Using the corresponding result from \n Sol\\'e, Utzet and Vives \\cite{sole-utzet-vives} and Al\\`os, Le\\'on and Vives \\cite{alos-leon-vives} on the canonical space, Geiss and Steinicke\n \\cite{geiss-steinicke} show that the condition $F(X)\\in\\mathbbm{D}_{1,2}$ is equivalent to\n $$\\iint_{\\mathbbm{R}_+\\times\\mathbbm{R}}\\mathbbm{E}\\left[ \\left(F(X+x\\mathbbm{1}_{[t,\\infty)}) - F(X) \\right)^2 \\right] {\\mathrm{d}} t\\nu({\\mathrm{d}} x) < \\infty, $$\n where $\\nu$ is the L\\'evy measure of $X$.\n On the other hand, one gets from Mecke's formula \\cite{mecke} that\n %\n $$\\iint_A \\mathbbm{E}[F( X+x\\mathbbm{1}_{[t,\\infty)} )]{\\mathrm{d}} t\\nu({\\mathrm{d}} x) = \\mathbbm{E}[N(A)F(X)]$$\n %\n for any nonnegative measurable $F$ and any $A\\in\\mathcal{B}([0,\\infty)\\times\\mathbbm{R}\\setminus\\{0\\})$, \n where $N$ is the Poisson random measure associated with $X$ as in Section \\ref{section:preliminaries}. \n These results raise the following questions: when can Malliavin \n differentiability be described using a weight function such as $N(A)$, and is there a weight function for fractional differentiability?\n \n\n In this paper we search for weight functions $\\Lambda$ and measurability conditions on $Y$ such that the criterion \n \\begin{equation}\\|Y \\Lambda\\|_{L_2({\\mathbbm{P}})} < \\infty \\label{equation:weight_criteria}\\end{equation}\n describes the smoothness of $Y$. We begin by recalling the orthogonal It\\^{o} \n chaos decomposition\n $$Y = \\sum_{n=0}^\\infty I_n(f_n)$$\n on $L_2({\\mathbbm{P}})$ and the Malliavin Sobolev space\n $$\\mathbbm{D}_{1,2} = \\left\\{ Y\\in L_2({\\mathbbm{P}}): \\|Y\\|_{\\mathbbm{D}_{1,2}}^2 = \\sum_{n=0}^\\infty (n+1) \\|I_n(f_n)\\|_{L_2({\\mathbbm{P}})}^2 < \\infty \\right\\}$$\n %\n in Section \\ref{section:preliminaries}.\n Then, in Section \\ref{section:differentiability}, we obtain an equivalent condition for Malliavin differentiability. The assertion is that \n $$Y\\in\\mathbbm{D}_{1,2} \\text{ if and only if } \\normP{ Y \\sqrt{N(A) +1}} < \\infty,$$\n whenever $Y$ is measurable with respect to ${\\cal F}_A$, the\n completion of the sigma-algebra generated by $\\left\\{ N(B) : B\\subseteq A, \\,B\\in\\mathcal{B}([0,\\infty)\\times\\mathbbm{R})\\right\\}$, \n and the set $A\\in\\mathcal{B}([0,\\infty)\\times\\mathbbm{R}\\setminus\\{0\\})$ satisfies $\\mathbbm{E}[N(A)]<\\infty$.\n\n Section \\ref{section:fractional} treats fractional differentiability and our aim is to adjust the weight function $\\Lambda$ so that the \n condition \\eqref{equation:weight_criteria} describes \n a given degree of smoothness. We recall the $K$-method of real interpolation which we \n use to determine the interpolation spaces $(L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})_{\\theta,q}$ for $\\theta\\in(0,1)$ and $q\\in[1,\\infty]$. 
These spaces are intermediate\n between $\\mathbbm{D}_{1,2}$ and $L_2({\\mathbbm{P}})$.\n We show that when $Y$ is ${\\cal F}_A$-measurable and $\\mathbbm{E}[N(A)]<\\infty$, then $Y$ has fractional differentiability of order $\\theta$ for $q=2$ \n if and only if\n$$\\normP{ Y\\sqrt{ N(A) +1}^{\\,\\theta} } < \\infty.$$\n \n \n\n\n\\section{Preliminaries} \\label{section:preliminaries}\n\nConsider a L\\'evy process $X = (X_t)_{t\\geq0}$ with c\\`{a}dl\\`{a}g paths on a\ncomplete probability space $({\\Omega},{\\cal F},\\mathbbm{P})$, where ${\\cal F}$ is the completion of the sigma-algebra generated by $X$.\nThe L\\'{e}vy-It\\^{o} decomposition states that there exist $\\gamma\\in\\mathbbm{R}$, $\\sigma\\geq0$, a standard Brownian motion $W$ and \na Poisson random measure $N$ on $\\mathcal{B}([0,\\infty)\\times\\mathbbm{R})$ such that\n\\[X_t=\\gamma t + \\sigma W_t + \\iint_{(0,t]\\times \\{ |x|>1\\}} x N(\\mathrm{d} s,\\mathrm{d} x) \n+ \\iint_{(0,t]\\times\\left\\{0<|x|\\leq1\\right\\}}x\\tilde{N}(\\mathrm{d} s,\\mathrm{d} x)\\]\n holds for all $t\\geq0$ a.s.\nHere $\\tilde{N}(\\mathrm{d} s,\\mathrm{d} x) = N(\\mathrm{d} s,\\mathrm{d} x)-\\mathrm{d} s\\nu(\\mathrm{d} x)$ is the compensated Poisson random measure and \n$\\nu:\\mathcal{B}(\\mathbbm{R})\\to[0,\\infty]$ is the L\\'{e}vy measure of $X$ satisfying $\\nu(\\{0\\})=0$,\n$\\int_\\mathbbm{R} (x^2\\wedge1)\\nu(\\mathrm{d} x)<\\infty$ and $\\nu(B)=\\mathbbm{E} \\left[ N((0,1]\\times B) \\right]$ when $0\\not\\in \\bar{B}$.\nThe triplet $(\\gamma,\\sigma,\\nu)$ is called the L\\'evy triplet.\n\n\nLet us recall the It\\^{o} chaos decomposition from \\cite{ito}:\nDenote $\\mathbbm{R}_+ := [0,\\infty)$. We consider the following measure $\\mathbbm{m}$ defined as \n\\begin{eqnarray*}\n \\mathbbm{m}:\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R})\\to[0,\\infty],& \n & \\mathbbm{m}(\\mathrm{d} s,\\mathrm{d} x):= \\mathrm{d} s \\left[ {\\sigma}^2 \\delta_0(\\mathrm{d} x) + x^2 \\nu(\\mathrm{d} x) \\right].\n\\end{eqnarray*}\nFor sets $B\\in\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R})$ such that $\\mathbbm{m}(B) < \\infty$, a random measure $M$ is defined by\n\\[M(B) := \\sigma \\int_{\\left\\{ s\\in\\mathbbm{R}_+:(s,0)\\in B\\right\\}} \\mathrm{d} W_s + \\lim_{n\\to\\infty}\\iint_{\\left\\{(s,x)\\in B: \\frac{1}{n} < |x| < n\\right\\}} x\\ \\tilde{N}(\\mathrm{d} s,\\mathrm{d} x),\\]\nwhere the convergence is taken in $L_2(\\mathbbm{P}):=L_2({\\Omega},{\\cal F},\\mathbbm{P})$.\nThe random measure $M$ is independently scattered and it holds that $\\mathbbm{E}\nM(B_1) M(B_2) = \\mathbbm{m}(B_1 \\cap B_2)$ for all $B_1,B_2\\in\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R})$ with\n$\\mathbbm{m}(B_1)<\\infty$ and $\\mathbbm{m}(B_2)<\\infty$.\n\n\n\nFor $n=1,2,\\ldots$ write \n\\[ {L_2\\left( \\mm^{\\otimes n} \\right)} = L_2 \\left((\\mathbbm{R}_+\\times\\mathbbm{R})^n, \\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R})^{\\otimes n}, \\mathbbm{m}^{\\otimes n}\\right)\\] \nand set $L_2 \\left(\\mathbbm{m}^{\\otimes 0} \\right):=\\mathbbm{R}$. 
A function $f_n:(\\mathbbm{R}_+\\times\\mathbbm{R})^n\\to\\mathbbm{R}$ is said to\nbe symmetric if it coincides with its symmetrization\n$\\tilde{f}_n$,\n\\[\\tilde{f}_n((s_1,x_1),\\ldots,(s_n,x_n))=\\frac{1}{n!}\\sum_{\\pi}f_n\\left( \\left( s_{\\pi(1)},x_{\\pi(1)} \\right),\\ldots, \\left( s_{\\pi(n)},x_{\\pi(n)} \\right) \\right),\\]\nwhere the sum is taken over all permutations\n$\\pi:\\{1,\\ldots,n\\}\\to\\{1,\\ldots,n\\}$.\n\nWe let $I_n$ denote the multiple integral of order $n$ defined by It\\^{o} \\cite{ito} and briefly recall the definition. \nFor pairwise disjoint \n$B_1,\\ldots, B_n\\in\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R})$ with $\\mathbbm{m}(B_i)<\\infty$, the \nintegral of $\\mathbbm{1}_{B_1}\\otimes\\cdots\\otimes\\mathbbm{1}_{B_n}$ \nis defined by\n\\begin{equation} I_n\\left( \\mathbbm{1}_{B_1}\\otimes\\cdots\\otimes\\mathbbm{1}_{B_n}\\right) := M(B_1)\\cdots M(B_n). \\label{equation:multiple_integral}\\end{equation}\nIt is then extended to a linear and continuous operator $I_n:{L_2\\left( \\mm^{\\otimes n} \\right)}\\to L_2(\\mathbbm{P})$. We let $I_0(f_0):=f_0$ for $f_0\\in\\mathbbm{R}$. \nFor the multiple integral we have\n\\begin{equation}\\label{equation:inner_product_L_2}\nI_n(f_n)=I_n(\\tilde{f}_n) \\text{ and } \\mathbbm{E} \\left[ I_n(f_n)I_k(g_k) \\right] \n=\\begin{cases}\n 0, & \\text{ if }n\\neq k\\\\\n n! \\left( \\tilde{f_n},\\tilde{g_n}\\right)_{L_2(\\mathbbm{m}^{\\otimes n})}, & \\text{ if }n=k\n \\end{cases}\n\\end{equation}\nfor all $f_n\\in {L_2\\left( \\mm^{\\otimes n} \\right)}$ and $g_k\\in L_2\\left(\\mathbbm{m}^{\\otimes k}\\right)$. \n\nAccording to \\cite[Theorem 2]{ito}, for any $Y\\in L_2({\\mathbbm{P}})$ there exist functions $f_n\\in {L_2\\left( \\mm^{\\otimes n} \\right)}$,\n$n=0,1,2,\\ldots,$ such that\n\\[ Y=\\sum_{n=0}^\\infty\nI_n(f_n) \\quad \\text{ in } L_2(\\mathbbm{P})\\]\nand the functions $f_n$ are unique in $L_2(\\mathbbm{m}^{\\otimes n})$ when they are chosen to be\nsymmetric. We have\n\\[ \\norm{Y}^2_{L_2(\\mathbbm{P})} = \\sum_{n=0}^\\infty n! \\norm{\\tilde{f}_n}^2_{L_2\\left( \\mm^{\\otimes n} \\right)}.\\]\n\n\n\n\n\nWe recall the definition of the Malliavin Sobolev space $\\mathbbm{D}_{1,2}$ based on the\nIt\\^{o} chaos decomposition. We denote by $\\mathbbm{D}_{1,2}$ \nthe space of all $Y = \\sum_{n=0}^\\infty\nI_n(f_n) \\in L_2(\\mathbbm{P})$ such that\n\\[\\|Y\\|^2_{\\mathbbm{D}_{1,2}} := \\sum_{n=0}^\\infty (n+1)! \\norm{\\tilde{f}_n}^2_{L_2\\left( \\mm^{\\otimes n} \\right)} < \\infty.\\]\nLet us write $L_2(\\mathbbm{m}\\otimes\\mathbbm{P}) := L_2(\\mathbbm{R}_+\\times\\mathbbm{R}\\times{\\Omega},\n\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R})\\otimes{\\cal F}, \\mathbbm{m}\\otimes\\mathbbm{P})$ and define the\nMalliavin derivative $D:\\mathbbm{D}_{1,2}\\to L_2(\\mathbbm{m}\\otimes\\mathbbm{P})$ in the following way.\nFor $B_1,\\ldots,B_n \\in \\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R})$, which are pairwise disjoint and such that $\\mathbbm{m}(B_i)< \\infty$ for all $i=1,\\ldots,n$, we let\n\\begin{align*}D_{t,x} I_n\\left( \\mathbbm{1}_{B_1}\\otimes\\cdots\\otimes\\mathbbm{1}_{B_n}\\right) \n&= nI_{n-1}\\left( \\tilde{\\mathbbm{1}_{B_1}\\otimes\\cdots\\otimes\\mathbbm{1}_{B_n}}(\\cdot,(t,x))\\right)\\\\\n&:= \\sum_{i=1}^n \\prod_{j\\neq i} M(B_j) \\mathbbm{1}_{B_i}(t,x). 
\n\\end{align*}\nIt holds $\\normmP{DI_n\\left( \\mathbbm{1}_{B_1}\\otimes\\cdots\\otimes\\mathbbm{1}_{B_n}\\right)} \n= \\sqrt{n}\\normP{I_n\\left( \\mathbbm{1}_{B_1}\\otimes\\cdots\\otimes\\mathbbm{1}_{B_n}\\right) }$ and\nthe operator is extended to $\\left\\{ I_n(f_n): f_n\\in L_2(\\mathbbm{m}^{\\otimes n}) \\right\\}$ by linearity and continuity. For \n$Y = \\sum_{n=0}^\\infty I_n(f_n) \\in \\mathbbm{D}_{1,2}$ it then holds that\n\\[D_{t,x}Y := \\sum_{n=1}^\\infty n I_{n-1} \\left(\\tilde{f}_n(\\cdot,(t,x))\\right) \\] \nconverges in $L_2(\\mathbbm{m}\\otimes\\mathbbm{P})$.\n\n\n\\begin{remark}\\label{remark:inner_product_DD}\nNote that also for any $u\\in L_2(\\mathbbm{m}\\otimes{\\mathbbm{P}})$ one finds a chaos representation $u=\\sum_{n=0}^\\infty I_n(g_{n+1})$, \nwhere the functions $g_{n+1} \\in L_2\\left(\\mathbbm{m}^{\\otimes(n+1)}\\right)$\nare symmetric in the first $n$ variables. For $u,v\\in L_2(\\mathbbm{m}\\otimes{\\mathbbm{P}})$ with \n$u=\\sum_{n=0}^\\infty I_n(g_{n+1})$ and $v=\\sum_{n=0}^\\infty I_n(h_{n+1})$ it then holds\n\\begin{equation}\\label{equation:inner_product_DD}\n (u,v)_{L_2(\\mathbbm{m}\\otimes{\\mathbbm{P}})} = \\sum_{n=0}^\\infty n! \\left(g_{n+1},h_{n+1} \\right)_{L_2\\left(\\mathbbm{m}^{\\otimes (n+1)}\\right)}.\n \\end{equation}\n\\end{remark}\nFor more information, see for example \\cite{nualart-vives}, \\cite{privault_extension}, \\cite{benth-dinunno-lokka-oksendal-proske}, \n\\cite{lee-shih_product_formula}, \\cite{sole-utzet-vives} and\n \\cite{applebaum2}. \n\n\\section{Differentiability}\n\\label{section:differentiability}\n\nWe shall use the notation $\\mathbbm{R}_0 = \\mathbbm{R}\\setminus\\{0\\}$. For $A\\in\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R}_0)$ we denote by\n${\\cal F}_A$ the completion of the sigma-algebra $ \\sigma \\left( N(B) : B\\subseteq A \\text{ and } B\\in\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R}) \\right) $. \nThe following theorem implies that if $Y\\in L_2({\\mathbbm{P}})$ is ${\\cal F}_A$-measurable and $\\mathbbm{E}[N(A)]<\\infty$, then \n$Y\\in\\mathbbm{D}_{1,2}$ if and only if $\\mathbbm{E}[Y^2 N(A)]<\\infty$.\n\n\\begin{theorem}\\label{theorem:second_double_inequality}\n Let $A\\in\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R}_0)$ be such that $ \\mathbbm{E}\\left[N(A)\\right] = ({\\mathrm{d}} t \\otimes \\nu)(A) < \\infty$ and \n $Y\\in L_2({\\mathbbm{P}})$. 
\n \\begin{enumerate}\n \\item If $Y\\in\\mathbbm{D}_{1,2}$, then $Y\\sqrt{N(A)}\\in L_2({\\mathbbm{P}})$ and\n \\begin{align}\\label{equation:second_double_inequality1}\n \\left| \\normP{Y\\sqrt{N(A)}} - \\normP{Y} \\sqrt{\\mathbbm{E}\\left[ N(A)\\right]} \\right| \n \\leq \\normmP{DY\\mathbbm{1}_A}.\\end{align}\n \\item If $Y\\sqrt{N(A)}\\in L_2({\\mathbbm{P}})$ and $Y$ is ${\\cal F}_A$-measurable, then $Y\\in\\mathbbm{D}_{1,2}$ and\n \\begin{align}\\label{equation:second_double_inequality2}\n \\normmP{DY}\n \\leq \\normP{Y\\sqrt{N(A)}} + \\normP{Y}\\sqrt{\\mathbbm{E}\\left[ N(A)\\right]}.\n \\end{align}\n \\end{enumerate}\n\\end{theorem}\n\n\n\n\nWe denote by $\\mathcal{S}$ the set of random variables $Y$ such that there exist $m\\geq 1$, $f\\in C_c^\\infty(\\mathbbm{R}^m)$ and \n$0\\leq t_0 < t_1 < \\cdots < t_m < \\infty$ such that\n$$Y= f\\left( X_{t_1} - X_{t_0},\\ldots,X_{t_m} - X_{t_{m-1}} \\right).$$\n\n\n\\begin{lemma}[Theorem 4.1, Corollaries 4.1 and 3.1 in \\cite{geiss-laukkarinen}]\\label{lemma:smooths_dense_in}\\hfill\n\n\n\\begin{itemize}\n \\itemm{a} $\\mathcal{S}$ is dense in $\\mathbbm{D}_{1,2}$ and $L_2({\\mathbbm{P}})$.\\label{lemma:smooths_dense_in_a}\n \\itemm{b} For $Y,Z\\in\\mathcal{S}$ it holds\n $D_{t,x}(YZ) = Y D_{t,x}Z + Z D_{t,x}Y + x D_{t,x} Y D_{t,x} Z$ $\\mathbbm{m}\\otimes{\\mathbbm{P}}$-a.e.\n\\end{itemize}\n\n\\end{lemma}\n\n\n\n\n\\begin{proposition}\\label{proposition:first_double_inequality}\n Let $Y = \\sum_{n=0}^\\infty I_n(f_n)$ be bounded and $A\\in\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R}_0)$ be such that $ \\mathbbm{E}\\left[N(A)\\right] = ({\\mathrm{d}} t \\otimes \\nu)(A) < \\infty$.\n Then $\\sum_{n=1}^\\infty nI_{n-1}\\left( \\tilde{f_n}(\\cdot,*)\\right) \\mathbbm{1}_A(*)$ converges in $L_2(\\mathbbm{m}\\otimes{\\mathbbm{P}})$ and\n \\begin{align}\\label{equation:first_double_inequality}\n &\\left| \\normP{Y\\sqrt{N(A)}} - \\normP{Y} \\sqrt{\\mathbbm{E}\\left[ N(A)\\right]} \\right| \n \\leq \\normmP{\\sum_{n=1}^\\infty \\left( nI_{n-1}\\!\\left( \\tilde{f_n} \\right) \\mathbbm{1}_A \\right) } \\nonumber \\\\*\n &\\qquad\\qquad\\leq \\normP{Y\\sqrt{N(A)}} + \\normP{Y}\\sqrt{\\mathbbm{E}\\left[ N(A)\\right]} .\n \\end{align}\n\\end{proposition}\n\\begin{proof}\n Assume first that $Y\\in\\mathcal{S}$. Then also $Y^2 = \\sum_{n=0}^\\infty I_n(g_n) \\in\\mathcal{S}$. 
\n %\nLetting $h(t,x) = \\frac{1}{x}\\mathbbm{1}_A(t,x)$ we have $I_1(h) = N(A) - \\mathbbm{E}\\left[ N(A) \\right]$ and we get using\n\\eqref{equation:inner_product_L_2} and \\eqref{equation:inner_product_DD} that\n %\n \\begin{align*}\n \\mathbbm{E}\\left[ Y^2 N(A)\\right] - \\mathbbm{E}\\left[ Y^2 \\right] \\mathbbm{E}\\left[ N(A)\\right]\n & = \\mathbbm{E}\\left[ Y^2 I_1(h) \\right] \n = (g_1,h)_{L_2(\\mathbbm{m})}\\\\\n & = (DY^2,h{\\mathbbm{1}_{{\\Omega}}})_{L_2(\\mathbbm{m}\\otimes{\\mathbbm{P}})}.\n\\end{align*}\nFrom Lemma \\ref{lemma:smooths_dense_in} (b) we obtain\n\\begin{align*}\n \\mathbbm{E}\\left[ Y^2 N(A)\\right]\n & = \\mathbbm{E}\\left[ Y^2 \\right] \\mathbbm{E}\\left[ N(A)\\right] + (DY^2,h)_{L_2(\\mathbbm{m}\\otimes{\\mathbbm{P}})}\\\\\n & = \\mathbbm{E}\\left[ Y^2 \\right] \\mathbbm{E}\\left[ N(A)\\right] + 2 \\iint_A \\mathbbm{E}\\left[ YD_{t,x}Y \\right] x{\\mathrm{d}} t \\nu({\\mathrm{d}} x)\\\\*\n &\\qquad + \\iint_A \\mathbbm{E}\\left[ \\left( D_{t,x} Y\\right)^2\\right]\\mathbbm{m}({\\mathrm{d}} t,{\\mathrm{d}} x).\n \\end{align*}\n Using H\\\"{o}lder's inequality we get\n %\n $$\\left| 2 \\iint_A \\mathbbm{E}\\left[ YD_{t,x}Y \\right] x{\\mathrm{d}} t \\nu({\\mathrm{d}} x)\\right| \\leq 2\\normP{Y}\\sqrt{\\mathbbm{E}\\left[ N(A)\\right]} \\normmP{DY\\mathbbm{1}_A},$$\n so that\n %\n \\begin{align*}\n & \\left( -\\normP{Y}\\sqrt{\\mathbbm{E}\\left[ N(A)\\right]} + \\normmP{DY\\mathbbm{1}_A} \\right)^2\n \\leq \\mathbbm{E} \\left[ Y^2 N(A)\\right]\\\\*\n & \\qquad\\qquad\\leq \\left( \\normP{Y}\\sqrt{\\mathbbm{E}\\left[ N(A)\\right]} + \\normmP{DY\\mathbbm{1}_A} \\right)^2.\\end{align*}\n Taking the square root yields the double inequality \\eqref{equation:first_double_inequality}.\n\n Using Lemma \\ref{lemma:smooths_dense_in} (a) we find for any bounded $Y$ a uniformly bounded sequence $(Y_k)\\subset \\mathcal{S}$ such that \n $Y_k\\to Y$ a.s. Since inequality \\eqref{equation:first_double_inequality} holds for all random variables \n $Y_k-Y_m \\in \\mathcal{S}$, and since these are uniformly bounded with $Y_k-Y_m\\to 0$ a.s. as $k,m\\to\\infty$, we have by dominated convergence that\n \\begin{align*}\n & \\normmP{D(Y_k-Y_m) \\mathbbm{1}_A}\\\\\n &\\leq \\normP{(Y_k-Y_m)\\sqrt{N(A)}} + \\normP{Y_k-Y_m}\\sqrt{\\mathbbm{E}\\left[ N(A)\\right]} \\\\ \n &\\to 0 \n \\end{align*}\n as $k,m\\to\\infty$.\n Thus the sequence $(DY_{k}\\mathbbm{1}_A)_{k=1}^\\infty$ converges\n in $L_2(\\mathbbm{m}\\otimes{\\mathbbm{P}})$ to some mapping $u\\in L_2(\\mathbbm{m}\\otimes{\\mathbbm{P}})$. Write $Y_k=\\sum_{n=0}^\\infty I_n \\left( \\tilde{f}_n^{(k)} \\right)$.\n The mapping $u$ has a representation $u = \\sum_{n=0}^\\infty I_n(h_{n+1})$\n (see Remark \\ref{remark:inner_product_DD}), where for all $n\\geq0$ we have that\n %\n \\begin{align*}\n \\left\\| n\\tilde{f_n}\\mathbbm{1}_A - h_n \\right\\|_{L_2({\\mathbbm{m}^{\\otimes n}})} \n & \\leq \\left\\| n\\tilde{f_n}\\mathbbm{1}_A - n\\tilde{f}_n^{(k)}\\mathbbm{1}_A \\right\\|_{L_2({\\mathbbm{m}^{\\otimes n}})} \n \\!+ \\left\\| n\\tilde{f}_n^{(k)}\\mathbbm{1}_A - h_n \\right\\|_{L_2({\\mathbbm{m}^{\\otimes n}})} \\\\*\n & \\to 0 \n \\end{align*} \n as $k\\to\\infty$.\n We obtain \\eqref{equation:first_double_inequality} for the random variable $Y$ using dominated convergence, the convergence\n $DY_k\\mathbbm{1}_A \\to \\sum_{n=0}^\\infty \\left( D I_n(f_n) \\mathbbm{1}_A \\right)$ in $L_2(\\mathbbm{m}\\otimes{\\mathbbm{P}})$ and the fact that \n \\eqref{equation:first_double_inequality} holds for all random variables $Y_{k}$. 
\n\\end{proof}\n\n\n\\begin{lemma}\\label{lemma:Lipschitz}\n If $Y=\\sum_{n=0}^\\infty I_n(f_n\\mathbbm{1}_{\\mathbbm{R}_+\\times\\mathbbm{R}_0}^{\\otimes n}) \\in \\mathbbm{D}_{1,2}$ and $g:\\mathbbm{R}\\to\\mathbbm{R}$ is Lipschitz-continuous, then\n $g(Y)\\in\\mathbbm{D}_{1,2}$ and\n $$D_{t,x}g(Y) = \\frac{g(Y + x D_{t,x}Y) - g(Y)}{x} \\quad \\text{ in } L_2(\\mathbbm{m}\\otimes{\\mathbbm{P}}).$$ \n\\end{lemma}\n\\begin{proof}\nThe lemma is an immediate consequence of \\cite[Lemma 5.1 (b)]{geiss-laukkarinen}.\n\\end{proof}\n\n\n\\begin{lemma}\\label{lemma:measurability}\nLet $Y = \\sum_{n=0}^\\infty I_n(f_n)\\in L_2({\\mathbbm{P}})$ and $A\\in\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R})$. Then\n$$\\mathbbm{E} \\left[ Y | {\\cal F}_A \\right] = \\sum_{n=0}^\\infty I_n \\left( f_n\\mathbbm{1}_A^{\\otimes n} \\right) \\text{ in }L_2({\\mathbbm{P}}).$$\n\\end{lemma}\n\\begin{proof}\nThe equality can be shown via the construction of the chaos analogously to the proof of \\cite[Lemma 1.2.4]{nualartv1}.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{theorem:second_double_inequality}]\n{ \\emph{1.}} Assume $Y \\in\\mathbbm{D}_{1,2}$ and define $g_m(x)= (-m \\vee x)\\wedge m$ for $m\\geq 1$. From Lemma \\ref{lemma:Lipschitz} we get \n$g_m(Y)\\in\\mathbbm{D}_{1,2}$ and $|Dg_m(Y)|\\leq |DY|$. Then, using monotone convergence and Proposition\n\\ref{proposition:first_double_inequality}, we obtain\n\\begin{align*}\n& \\left| \\normP{Y\\sqrt{N(A)}} - \\normP{Y} \\sqrt{\\mathbbm{E}\\left[ N(A)\\right]} \\right| \\\\\n& = \\lim_{m\\to\\infty} \\left| \\normP{g_m(Y)\\sqrt{N(A)}} - \\normP{g_m(Y)} \\sqrt{\\mathbbm{E}\\left[ N(A)\\right]} \\right| \\\\\n&\\leq \\limsup_{m\\to\\infty} \\normmP{Dg_m(Y)\\mathbbm{1}_A}\\\\\n& \\leq \\normmP{DY\\mathbbm{1}_A} < \\infty.\n\\end{align*}\nHence $Y\\sqrt{N(A)}\\in L_2({\\mathbbm{P}})$.\n\n{ \\emph{2.}} Assume $\\| Y\\sqrt{N(A)}\\| < \\infty$ and define $g_m(Y)$ as above. \nWrite $Y = \\sum_{n=0}^\\infty I_n \\left( f_n \\right)$ and $g_m(Y) = \\sum_{n=0}^\\infty I_n ( f_n^{(m)})$. \nSince $g_m(Y)\\to Y$ in $L_2({\\mathbbm{P}})$, it holds \n$\\| \\tilde{f}_n^{(m)} \\|^2_{L_2(\\mathbbm{m}^{\\otimes n})}\\to \\| \\tilde{f_n} \\|^2_{L_2(\\mathbbm{m}^{\\otimes n})}$ as $m\\to\\infty$.\n Since $g_m(Y)$ is ${\\cal F}_A$-measurable, we have $\\tilde{f}_n^{(m)} = \\tilde{f}_n^{(m)}\\mathbbm{1}_A^{\\otimes n}$ $\\mathbbm{m}^{\\otimes n}$-a.e. \nby Lemma \\ref{lemma:measurability} for all $m\\geq 1$. \n By Fatou's Lemma, \nProposition \\ref{proposition:first_double_inequality} and monotone convergence we get \n\\begin{align*}\n & \\sqrt{ \\sum_{n=1}^\\infty nn! \\left\\| \\tilde{f_n} \\right\\|^2_{L_2(\\mathbbm{m}^{\\otimes n})}} \\\\*\n &\\leq \\liminf_{m\\to\\infty}\\sqrt{\\sum_{n=1}^\\infty nn! 
\\left\\| \\tilde{f}_n^{(m)} \\right\\|^2_{L_2(\\mathbbm{m}^{\\otimes n})}} \\\\*\n &\\leq \\liminf_{m\\to\\infty} \\left( \\normP{g_m(Y)\\sqrt{N(A)}} + \\normP{g_m(Y)} \\sqrt{\\mathbbm{E}\\left[ N(A)\\right]} \\right)\\\\*\n & = \\normP{Y\\sqrt{N(A)}} + \\normP{Y} \\sqrt{\\mathbbm{E}\\left[ N(A)\\right]} < \\infty.\n\\end{align*}\nThus $Y\\in\\mathbbm{D}_{1,2}$.\n\\end{proof}\n\nWe use the notation $ \\alpha \\sim_c \\beta$ for $\\frac{1}{c} \\beta \\leq \\alpha \\leq c \\beta$ for $c\\geq 1$ and \n$\\alpha, \\beta\\in[0,\\infty]$.\n \n\\begin{corollary}\\label{corollary:norm_equivalence}\nLet $A\\in\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R}_0)$ be such that $ \\mathbbm{E} \\left[ N(A)\\right] < \\infty$ and assume that \n$Y= \\sum_{n=0}^\\infty I_n(f_n)\\in L_2({\\mathbbm{P}})$\n is ${\\cal F}_A$-measurable. Then\n$$ \\|Y\\|_{\\mathbbm{D}_{1,2}} \\sim_{\\sqrt{2}\\left( \\sqrt{ \\mathbbm{E} \\left[ N(A) \\right]} +1 \\right)} \n\\left\\| Y \\sqrt{N(A) +1 } \\right\\|_{L_2({\\mathbbm{P}})}, $$\n where the norms may be infinite. \n\\end{corollary}\n\\begin{proof}\nThe inequalities \\eqref{equation:second_double_inequality1} and \\eqref{equation:second_double_inequality2} give the relation\n$$\\left \\| Y \\sqrt{N(A)} \\right\\|_{L_2({\\mathbbm{P}})} + \\|Y\\|_{L_2({\\mathbbm{P}})} \\sim_{ \\sqrt{ \\mathbbm{E} \\left[ N(A) \\right]} +1 } \\|Y\\|_{L_2({\\mathbbm{P}})} \n+ \\|DY\\|_{L_2(\\mathbbm{m}\\otimes{\\mathbbm{P}})}. $$ \nThe claim follows from $\\|Y\\|_{\\mathbbm{D}_{1,2}} \\leq \\|Y\\|_{L_2({\\mathbbm{P}})} + \\|DY\\|_{L_2(\\mathbbm{m}\\otimes{\\mathbbm{P}})} \\leq \\sqrt{2}\\|Y\\|_{\\mathbbm{D}_{1,2}} $ and\n\\begin{align*}\n { \\normP{Y\\sqrt{N(A)+1}} }\n & \\leq \\left\\| Y \\left( \\sqrt{N(A)} + 1 \\right) \\right\\|_{L_2({\\mathbbm{P}})} \\\\\n & \\leq \\left\\| Y \\sqrt{N(A)} \\right\\|_{L_2({\\mathbbm{P}})} + \\|Y\\|_{L_2({\\mathbbm{P}})}\\\\\n & \\leq { \\sqrt{ 2 \\left( \\left\\| Y \\sqrt{N(A)} \\right\\|^2_{L_2({\\mathbbm{P}})} + \\|Y\\|^2_{L_2({\\mathbbm{P}})} \\right) } }\\\\\n & = { \\sqrt{2} \\left\\| Y \\sqrt{N(A) +1 } \\right\\|_{L_2({\\mathbbm{P}})}. } \n\\end{align*}\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\\section{Fractional differentiability}\n\\label{section:fractional}\n\n\nWe consider fractional smoothness in the sense of real interpolation spaces between $L_2({\\mathbbm{P}})$ and $\\mathbbm{D}_{1,2}$. \nFor parameters $\\theta\\in(0,1)$ and $q\\in[1,\\infty]$ the interpolation space $(L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})_{\\theta,q}$\nis a Banach space, intermediate between $L_2({\\mathbbm{P}})$ and $\\mathbbm{D}_{1,2}$.\n\n \n\n\nWe shortly recall the $K$-method of real interpolation. \nThe K-functional of $Y\\in L_2({\\mathbbm{P}})$ is the mapping\n$K(Y,\\cdot; L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2}): (0,\\infty) \\to [0,\\infty)$ defined by \n\\begin{align*} \n&K(Y,s; L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})\\\\ & := \\inf\\{ \\|Y_0\\|_{L_2({\\mathbbm{P}})} + s\\|Y_1\\|_{\\mathbbm{D}_{1,2}}:\\ Y=Y_0+Y_1,\\,Y_0\\in L_2({\\mathbbm{P}}),\\, Y_1\\in \\mathbbm{D}_{1,2} \\}\n\\end{align*}\nand we shall use the abbreviation $K(Y,s)$ for $K(Y,s; L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})$.\nLet $\\theta\\in(0,1)$ and $q\\in[1,\\infty]$. 
The space $(L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})_{\\theta,q}$ consists of all $Y\\in L_2({\\mathbbm{P}})$\nsuch that\n\\[\n\\|Y\\|_{(L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})_{\\theta,q}} \n= \\begin{cases}\n \\left[ \\int_0^\\infty \\left| s^{-\\theta} K(Y,s) \\right|^q \\frac{\\mathrm{d}s}{s} \\right]^{\\frac{1}{q}}, & q\\in[1,\\infty)\\\\\n \\sup_{s>0} s^{-\\theta} K(Y,s), & q=\\infty\n \\end{cases}\n\\]\nis finite.\n\n\nThe interpolation spaces are nested in a lexicographical order:\n\\[\n\\mathbbm{D}_{1,2} \\subset (L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})_{\\eta,p} \\subset (L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})_{\\theta,q} \\subseteq (L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})_{\\theta,p} \\subset L_2({\\mathbbm{P}})\n\\]\n for $ 1 \\leq q \\leq p \\leq \\infty$ and\n $ 0 < \\theta < \\eta < 1$.\nFor further properties of interpolation we refer to \\cite{bennet-sharpley} and \\cite{triebel}.\n\n\n\\begin{theorem} \\label{theorem:fractional}\n Let $\\theta\\in(0,1)$, $A\\in\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R}_0)$ be such that $ \\mathbbm{E}\\left[ N(A) \\right]< \\infty$ \n and $Y\\in L_2({\\mathbbm{P}})$\n be ${\\cal F}_A$-measurable. Then \n$$Y \\in (L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})_{\\theta,2}\\text{ if and only if } \\mathbbm{E}\\left[ Y^2 N(A)^\\theta \\right] < \\infty.$$ If\n $Y \\in (L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})_{\\theta,2}$, then\n %\n$$ \\|Y\\|_{(L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})_{\\theta,2}}\n \\sim_{\\sqrt{2}\\frac{ \\sqrt{ \\mathbbm{E}\\left[ N(A) \\right] } +1}{\\sqrt{\\theta(1-\\theta)}}} \\left\\| Y \\sqrt{N(A) +1}^{\\,\\theta} \\right\\|_{L_2({\\mathbbm{P}})}.$$\n\\end{theorem}\n\\begin{proof}\nWe first show that\n \\begin{equation} \\label{equation:Kfunctional}\n K(Y,s) \n\\sim_{ 2\\left( \\sqrt{\\mathbbm{E}\\left[ N(A)\\right]} +1 \\right)} \\left\\| Y \\min\\left\\{1, s \\sqrt{N(A) +1} \\right\\} \\right \\|_{L_2({\\mathbbm{P}})}.\n \\end{equation}\nFrom Lemma \\ref{lemma:measurability} we obtain the inequalities $ \\| \\mathbbm{E} \\left[ Y_0 | {\\cal F}_A \\right] \\|_{L_2({\\mathbbm{P}})} \\leq \\|Y_0\\|_{L_2({\\mathbbm{P}})} $\nand $\\| \\mathbbm{E} \\left[ Y_1 | {\\cal F}_A \\right] \\|_{\\mathbbm{D}_{1,2}} \\leq \\|Y_1\\|_{\\mathbbm{D}_{1,2}} $ for any $Y_0\\in L_2({\\mathbbm{P}})$ and $Y_1\\in\\mathbbm{D}_{1,2}$. Hence\n\\begin{align}\\label{align:K_relation_one}\n & K(Y,s) \\nonumber\\\\*\n& = \\inf\\left\\{ \\|Y_0\\|_{L_2({\\mathbbm{P}})} + s\\|Y_1\\|_{\\mathbbm{D}_{1,2}}:\\ Y_0+Y_1 = Y,\\, Y_0\\in L_2({\\mathbbm{P}}),\\, Y_1\\in \\mathbbm{D}_{1,2} \\right\\}\\nonumber \\\\\n& = \\inf\\left\\{ \\| \\mathbbm{E} \\left[ Y_0 | {\\cal F}_A \\right] \\|_{L_2({\\mathbbm{P}})} + s\\| \\mathbbm{E} \\left[ Y_1 | {\\cal F}_A \\right] \\|_{\\mathbbm{D}_{1,2}}: Y_0+Y_1 = Y,\\, Y_1\\in \\mathbbm{D}_{1,2} \\right\\}\\nonumber \\\\*\n& \\sim_{c} \n \\inf\\left \\{ \\|Y_0\\|_{L_2({\\mathbbm{P}})} + s \\left\\| Y_1 \\sqrt{N(A)+1} \\right\\|_{L_2({\\mathbbm{P}})} : \n Y_0 + Y_1 = Y, Y_1\\in\\mathbbm{D}_{1,2} \\right \\}\n\\end{align}\nfor $c = \\sqrt{2}\\left( \\sqrt{\\mathbbm{E}\\left[ N(A) \\right]} +1\\right) $ by Corollary \\ref{corollary:norm_equivalence}. 
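Observe that for any $s>0$ the random variable $Y\mathbbm{1}_{\left\{ \sqrt{N(A)+1} \leq \frac{1}{s}\right\}}$ is ${\cal F}_A$-measurable and satisfies\n$$\normP{ Y\mathbbm{1}_{\left\{ \sqrt{N(A)+1} \leq \frac{1}{s}\right\}} \sqrt{N(A)} } \leq \frac{1}{s}\normP{Y} < \infty,$$\nso it belongs to $\mathbbm{D}_{1,2}$ by Theorem \ref{theorem:second_double_inequality} and is an admissible choice of $Y_1$ in \eqref{align:K_relation_one}.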
\nNext we approximate the $K$-functional from above with the choice $Y_0 = Y\mathbbm{1}_{\left\{ \sqrt{N(A)+1} > \frac{1}{s}\right\}} $ \nand get from \eqref{align:K_relation_one} that\n\begin{align*}\n& \frac{1}{c} K(Y,s)\\\n& \leq \left( \left\|Y \mathbbm{1}_{ \left\{{ \sqrt{N(A) +1}} > \frac{1}{s} \right\}} \right\|_{L_2({\mathbbm{P}})} \n + s \left\| Y{ \sqrt{N(A) +1} } \mathbbm{1}_{ \left\{ { \sqrt{N(A) +1 }} \leq \frac{1}{s} \right\}} \right\|_{L_2({\mathbbm{P}})} \right) \\\n& \leq \sqrt{2} \left\| Y \min\left\{ 1, s { \sqrt{N(A) +1 }} \right\} \right\|_{L_2({\mathbbm{P}})}.\n\end{align*}\nUsing the triangle inequality and the fact that $$|Y({\omega}) -y| + |y|a \geq |Y({\omega})| \min\{1,a\}$$ for all ${\omega}\in{\Omega}$, \n$y\in\mathbbm{R}$ and $a\geq 0$ (for $a\geq 1$ this follows from $|Y({\omega})-y| + |y| \geq |Y({\omega})|$, and for $a<1$ from $|Y({\omega})-y| + |y|a \geq a\left( |Y({\omega})-y| + |y| \right) \geq a|Y({\omega})|$), we obtain from \eqref{align:K_relation_one} the lower bound \n\begin{align*}\n & c K(Y,s)\\*\n& \geq \inf \left\{ \left\| |Y_0| + |Y_1|s { \sqrt{N(A) +1} } \right\|_{L_2({\mathbbm{P}})} : Y = Y_0+Y_1,\, Y_1\in\mathbbm{D}_{1,2} \right\}\\*\n& \geq \left\| Y \min \left\{ 1, s { \sqrt{N(A) +1}} \right\} \right\|_{L_2({\mathbbm{P}})}.\n\end{align*}\nWe have shown that \eqref{equation:Kfunctional} holds.\nFrom \eqref{equation:Kfunctional} we get \n\begin{align*}\n& \|Y\|_{(L_2({\mathbbm{P}}),\mathbbm{D}_{1,2})_{\theta,2}} \\*\n\n&\sim_{2\left( \sqrt{\mathbbm{E} \left[ N(A) \right]} +1 \right)} \n\left( \int_0^\infty \left| s^{-\theta} \normP{ Y \min\left\{ 1, s { \sqrt{N(A) +1 } } \right\} } \right|^2 \n\frac{{\mathrm{d}} s}{s} \right)^{\frac{1}{2}}.\n\end{align*}\nWe finish the proof by computing the integral, first applying Fubini's theorem and then splitting the inner integral at $s = (N(A)+1)^{-1/2}$. We get\n\begin{align*}\n& \int_0^\infty \left| s^{-\theta} \normP{ Y \min\left\{ 1, s { \sqrt{N(A) +1 } } \right\} } \right|^2 \frac{{\mathrm{d}} s}{s}\\*\n& = \mathbbm{E} \left[ Y^2 \int_0^\infty s^{-2\theta} \min\left\{ 1, s^2 {\left( N(A) +1 \right) } \right\}\frac{{\mathrm{d}} s}{s} \right] \\*\n& = \mathbbm{E} \left[ Y^2 \left( \left( N(A)+1 \right) \int_0^{(N(A)+1)^{-1/2}} s^{1-2\theta} {\mathrm{d}} s + \int_{(N(A)+1)^{-1/2}}^\infty s^{-1-2\theta} {\mathrm{d}} s \right) \right] \\*\n& = \mathbbm{E} \left[ Y^2 \frac{1}{2\theta(1-\theta)} { \left( N(A) + 1 \right)^{\theta} } \right].\n\end{align*}\n\end{proof}\n\n\n\n\n\section{Concluding remarks}\n\n\n\n\nFrom assertion \emph{2.} of Theorem \ref{theorem:second_double_inequality} we can conclude that integrability slightly stronger than square integrability\ncan already imply Malliavin differentiability. For example,\nall the spaces $L_p({\Omega},{\cal F}_A, {\mathbbm{P}})$ are subspaces of $\mathbbm{D}_{1,2}$ when $p>2$ and $\mathbbm{E}[N(A)]< \infty$, as we can deduce from the following corollary.\n\n\n\n\begin{corollary} Let $A\in\mathcal{B}(\mathbbm{R}_+\times\mathbbm{R}_0)$ be such that $\lambda:=\mathbbm{E}[N(A)] \in (0, \infty)$, so that $N(A)\sim \mathrm{Poisson}(\lambda)$. \nThen for the space\n$$L_2 \log^+ L_2({\Omega},{\cal F}_A,{\mathbbm{P}}) := \left\{ Y \in L_2({\Omega},{\cal F}_A,{\mathbbm{P}}): \mathbbm{E} \left[ Y^2 \ln^+ Y^2 \right] < \infty \right\},$$\nwhere $\ln^+x = \max\{\ln x , 0\}$, it holds that\n$$L_2 \log^+ L_2({\Omega},{\cal F}_A,{\mathbbm{P}})\subsetneq \mathbbm{D}_{1,2} \cap L_2({\Omega},{\cal F}_A,{\mathbbm{P}}).$$\n\end{corollary}\n\begin{proof}\nSuppose $\mathbbm{E} \left[ Y^2 \ln^+ Y^2 \right] < \infty$ and let ${\varphi}(y)=\ln(y+1)$.
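The strategy is to bound $\mathbbm{E}\left[ Y^2 N(A) \right]$, which by Theorem \ref{theorem:second_double_inequality} controls membership in $\mathbbm{D}_{1,2}$, through a Young-type inequality that pairs the growth $x\ln^+ x$ with the finite exponential moment $\mathbbm{E}\left[ e^{N(A)} \right] = e^{(e-1)\lambda}$ of the Poisson random variable $N(A)$.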
The functions $\\Phi$ and $\\Phi^\\star$ with\n$$\\Phi(x) = \\int_0^x {\\varphi}(y) {\\mathrm{d}} y = (x+1)\\ln(x+1) -x \\leq 1 + x\\ln^+ x$$ \nand\n$$\\Phi^\\star(x)= \\int_0^x {\\varphi}^{-1}(y) {\\mathrm{d}} y =e^x - x -1$$\nare a complementary pair of Young functions. They satisfy the\nYoung inequality $xy \\leq \\Phi(x) + \\Phi^\\star(y)$ for all $x,y\\geq0$ and we get\n\\begin{align*}\n \\mathbbm{E} \\left[ Y^2 N(A) \\right] \n& \\leq \\mathbbm{E} \\left[ \\Phi\\left( Y^2 \\right) \\right] + \\mathbbm{E} \\left[ \\Phi^\\star(N(A)) \\right] \\\\*\n& \\leq \\mathbbm{E} \\left[ Y^2 \\ln^+\\left( Y^2 \\right) \\right] + e^{(e-1){\\lambda} } - {\\lambda} \\\\*\n &< \\infty. \\end{align*}\n %\nHence $Y\\in\\mathbbm{D}_{1,2}$ by Theorem \\ref{theorem:second_double_inequality}.\n\nTo see that the inclusion is strict, let $a\\in(1,2]$ and choose a Borel function $f:\\mathbbm{R}\\to\\mathbbm{R}$ such that\n$f(0)=f(1)=0$ and\n$$f(n) = \\sqrt{e^\\lambda \\frac{n!}{\\lambda^n} \\frac{1}{n^2 \\ln^a n}} \\quad \\text{ for }n = 2,3,\\ldots.$$\nThen, since\n $\\ln n! = \\sum_{k=2}^n \\ln k \\geq \\int_1^n \\ln x\\, {\\mathrm{d}} x = n\\ln n - n + 1 $ for $n\\geq 2$ and $a\\leq 2$, we have\n\\begin{align*}\n \\mathbbm{E}\\left[f^2(N(A))\\ln^+ f^2(N(A))\\right] \n& = \\sum_{n=2}^\\infty \\frac{1}{n^2 \\ln^a n}\\ln \\left( e^\\lambda \\frac{n!}{\\lambda^n} \\frac{1}{n^2 \\ln^a n} \\right)\\\\\n& = \\sum_{n=2}^\\infty \\frac{\\ln n!}{n^2 \\ln^a n}\n + \\sum_{n=2}^\\infty \\frac{1}{n^2 \\ln^a n}\\ln \\left(e^\\lambda\\frac{1}{\\lambda^n} \\frac{1}{n^2 \\ln^a n} \\right) \\\\\n& = \\infty,\n\\end{align*}\nbut\n$$\\mathbbm{E}\\left[N(A) f^2(N(A))\\right] = \\sum_{n=2}^\\infty nf^2(n) e^{-\\lambda} \\frac{\\lambda^n}{n!} = \\sum_{n=2}^\\infty \\frac{1}{n\\ln^a n} < \\infty $$\nso that $f(N(A)) \\in \\mathbbm{D}_{1,2}$ by Theorem \\ref{theorem:second_double_inequality}.\n\\end{proof}\n\n\n\n\n\n\n\\begin{remark}\\label{remark:compound_differentiability}\n Suppose ${\\sigma}=0$ and $\\nu(\\mathbbm{R})<\\infty$, which means that $X$ is a compound Poisson process (with drift) and\n $$X_t = \\beta t + \\int_{(0,t]\\times\\mathbbm{R}_0} x N({\\mathrm{d}} s, {\\mathrm{d}} x)\\quad \\text{ for all }t\\geq 0 \\text{ a.s.}$$\n for some $\\beta\\in\\mathbbm{R}$. The process $(N_t)_{t\\geq0}$, with $N_t = N((0,t]\\times\\mathbbm{R}_0)$ a.s., is the Poisson process associated to $X$.\n Let $T\\in(0,\\infty)$ and ${\\cal F}_T$ be the completion of the sigma-algebra generated by $(X_t)_{t\\in[0,T]}$. Then\n ${\\cal F}_T = {\\cal F}_{[0,T]\\times\\mathbbm{R}}$ and by Theorems \\ref{theorem:second_double_inequality} and \\ref{theorem:fractional} for\n any\n ${\\cal F}_T$-measurable random variable $Y$ and any $\\theta\\in(0,1)$ it holds that\n \\begin{itemize} \\item[(a)] $Y\\in\\mathbbm{D}_{1,2}$ if and only if\n $\\normP{ Y\\sqrt{N_T+1}} < \\infty$ and\n \\item[(b)] $Y\\in(L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})_{\\theta,2}$ if and only if $\\normP{ Y\\sqrt{N_T+1}^{\\,\\theta}} < \\infty$.\n \\end{itemize}\n\\end{remark}\n\n\n\n\n\n\n\n{\\bf Acknowledgements.} The author is grateful to Christel Geiss and Stefan Geiss for several\nvaluable ideas and suggestions regarding this work.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}