diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzziqdw" "b/data_all_eng_slimpj/shuffled/split2/finalzziqdw" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzziqdw" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\tThe classical binomial process has been studied by \\citet{jakeman} and has been used to model fluctuations in a train of events in quantum optics.\n\tRecall that the classical binomial process $ \\mathcal{N}(t)$, $t\\ge 0 $, with birth rate $\\lambda>0$ and\n\t\tdeath rate $\\mu>0$, has state probabilities\n\t\t$p_n(t) = \\Pr \\{ \\mathcal{N}(t) = n | \\mathcal{N}(0) = M \\}$ which solve the following Cauchy problem: \n\t\t\\begin{align}\n\t\t\t\\label{state}\n\t\t\t\\begin{cases}\n\t\t\t\t\\frac{\\mathrm d}{\\mathrm dt} p_n(t) = \\mu (n+1) p_{n+1}(t) - \\mu n p_n(t) - \\lambda\n\t\t\t\t(N-n) p_n(t) \\\\\n\t\t\t\t\\qquad \\qquad + \\lambda (N-n+1) p_{n-1}(t), \\qquad \\qquad \\qquad 0 \\leq n \\leq N, \\\\\n\t\t\t\tp_n(0) =\n\t\t\t\t\\begin{cases}\n\t\t\t\t\t1, & n=M, \\\\\n\t\t\t\t\t0, & n \\neq M.\n\t\t\t\t\\end{cases}\n\t\t\t\\end{cases}\n\t\t\\end{align}\n\t\tThe initial number of individuals is $M \\geq 1$, and $N \\geq M$.\n\n\t\tNotice that the binomial process has a completely different behaviour compared to the\n\t\tclassical linear birth-death process. Here the birth rate is proportional to the\n\t\tdifference between a larger fixed number and the number of\tindividuals present while the\n\t\tdeath rate remains linear. The whole evolution of the binomial process develops\n\t\tin the region $[0, N]$.\n\t\tFurthermore it is shown that at large times, an equilibrium is reached and displays a binomial distribution.\n\t\n\t\tFrom \\eqref{state}, it is straightforward to realise that the generating function\n\t\t\\begin{align}\n\t\t\tQ(u,t) = \\sum_{n=0}^N (1-u)^n p_n(t), \\qquad |1-u| \\leq 1,\n\t\t\\end{align}\n\t\tis the solution to\n\t\t\\begin{align}\n\t\t\t\\label{fgp}\n\t\t\t\\begin{cases}\n\t\t\t\t\\frac{\\partial}{\\partial t} Q(u,t) = -\\mu u \\frac{\\partial}{\\partial u} Q(u,t) - \\lambda u\n\t\t\t\t(1-u) \\frac{\\partial}{\\partial u} Q(u,t) -\\lambda N u Q(u,t), \\\\\n\t\t\t\tQ(u,0) = (1-u)^M.\n\t\t\t\\end{cases}\n\t\t\\end{align}\nMoreover, \\citet{jakeman} showed that at large times, the evolving population follows a binomial distribution with parameter $\\lambda \/ (\\lambda + \\mu)$. \n\nIn this paper, we propose a fractional generalisation of the classical binomial process. The fractional generalization includes non-markovian and rapidly dissipating or bursting birth-death processes at small and regular times. We also derive more statistical and related properties of the newly developed fractional stochastic process, which are deemed useful in real applications. Note that the theory and results presented here may have applications beyond quantum optics and may be of interest in other disciplines. \tAs in the preceding works on fractional Poisson process (e.g.\\ \\citet{laskin}) and other\n\t\tfractional point processes (see e.g.\\ \\citet{cah,pol}), fractionality is obtained by replacing the\n\t\tinteger-order derivative in the governing differential equations with a fractional-order derivative. 
\tIn particular, we use the Caputo fractional derivative of a well-behaved function $f(t)$, which is defined as\n\t\t\\begin{align}\n\t\t\t\\label{caputo}\n\t\t\t\\frac{\\mathrm d^\\nu}{\\mathrm dt^\\nu} f(t) = \\frac{1}{\\Gamma(m-\\nu)} \\int_0^t \\frac{\\frac{\\mathrm d^m}{\n\t\t\t\\mathrm d \\tau^m}f(\\tau)}{(t-\\tau)^{\\nu-m+1}}\n\t\t\t\\mathrm d\\tau, \\qquad m=\\lceil \\nu \\rceil,\n\t\t\\end{align}\n\twhere ``$\\lceil y \\rceil$\" is the smallest integer that is not less than $y$. Note that the Caputo fractional derivative operator is in practice a convolution of the standard derivative\nwith a power-law kernel, which adds memory to the process. This characteristic is certainly an improvement from a physical viewpoint. By simple substitution, we obtain the following initial value problems for the probability generating function and the state\n\t\tprobabilities:\n\t\t\\begin{align}\n\t\t\t\\label{fgpfrac}\n\t\t\t\\begin{cases}\n\t\t\t\t\\frac{\\partial^\\nu}{\\partial t^\\nu} Q^\\nu(u,t) = -\\mu u \\frac{\\partial}{\\partial u} Q^\\nu(u,t) - \\lambda u\n\t\t\t\t(1-u) \\frac{\\partial}{\\partial u} Q^\\nu(u,t) -\\lambda N u Q^\\nu(u,t), \\\\\n\t\t\t\tQ^\\nu(u,0) = (1-u)^M, \\qquad \\qquad \\qquad |1-u|\\leq 1,\n\t\t\t\\end{cases}\n\t\t\\end{align}\t\t\n\t\t\\begin{align}\n\t\t\t\\label{statefrac}\n\t\t\t\\begin{cases}\n\t\t\t\t\\frac{\\mathrm d^\\nu}{\\mathrm dt^\\nu} p_n^\\nu(t) = \\mu (n+1) p_{n+1}^\\nu(t) - \\mu n p_n^\\nu(t) - \\lambda\n\t\t\t\t(N-n) p_n^\\nu(t) \\\\\n\t\t\t\t\\qquad \\qquad \\qquad + \\lambda (N-n+1) p_{n-1}^\\nu(t), & 0 \\leq n \\leq N, \\\\\n\t\t\t\tp_n^\\nu(0) =\n\t\t\t\t\\begin{cases}\n\t\t\t\t\t1, & n=M, \\\\\n\t\t\t\t\t0, & n \\neq M,\n\t\t\t\t\\end{cases}\n\t\t\t\\end{cases}\n\t\t\\end{align}\n\t\twhere $\\nu \\in (0,1]$.\n\nThe rest of the paper is organized as follows. In Section 2, the statistical properties of the fractional binomial process are derived by solving the preceding initial-value problem. Section 3 explores the sub-models that are directly extractable from the fractional binomial process. We then conclude the paper with further discussion and future extensions of the study in Section 4.\n\n\n\t\\section{Main properties of the fractional binomial process}\n\t\t\\label{se}\n\t\t\n\t\tFirstly, we prove a subordination relation which is of fundamental importance to deriving many of our results.\n\n\t\t\\begin{thm}\n\t\t\t\\label{sub}\n\t\t\tThe fractional binomial process $\\mathcal{N}^\\nu(t)$ has the following one-dimensional representation:\n\t\t\t\\begin{align}\n\t\t\t\t\\mathcal{N}^\\nu(t) \\overset{\\text{d}}{=} \\mathcal{N}(V_t^\\nu), \n\t\t\t\\end{align}\n\t\t\twhere $\\mathcal{N}(t)$ is a classical binomial process, $V_t^\\nu$, $t\\ge 0$, is the inverse process\n\t\t\tof the $\\nu$-stable subordinator (see e.g.\\ \\citet{meer}), and $\\nu \\in (0,1]$.\n\t\t\t\n\t\t\t\\begin{proof}\n\t\t\t\tLet $\\text{Pr} \\{ V_t^\\nu \\in \\mathrm ds \\} = h(s,t) \\, \\mathrm ds$\n\t\t\t\tbe the law of the inverse $\\nu$-stable subordinator. We now show that\n\t\t\t\t\\begin{align}\n\t\t\t\t\tQ^\\nu(u,t) = \\sum_{n=0}^N (1-u)^n p_n^\\nu(t) = \\int_0^\\infty Q(u,s) \\, h(s,t) \\, \\mathrm ds\n\t\t\t\t\\end{align}\n\t\t\t\tsatisfies the fractional differential equation \\eqref{fgpfrac}. 
We can then write\n\t\t\t\t\\begin{align}\n\t\t\t\t\t\\frac{\\partial^\\nu}{\\partial t^\\nu} \\int_0^\\infty Q(u,s) h(s,t) \\mathrm ds\n\t\t\t\t\t= \\int_0^\\infty Q(u,s) \\frac{\\partial^\\nu}{\\partial t^\\nu} h(s,t) \\mathrm ds.\n\t\t\t\t\\end{align}\n\t\t\t\tSince it can be easily verified that $h(s,t)$ is a solution to the fractional equation\n\t\t\t\t\\begin{align}\n\t\t\t\t\t\\frac{\\partial^\\nu}{\\partial t^\\nu} h(s,t) = - \\frac{\\partial}{\\partial s} h(s,t),\n\t\t\t\t\\end{align}\n\t\t\t\twe readily obtain\n\t\t\t\t\\begin{align}\n\t\t\t\t\t& \\frac{\\partial^\\nu}{\\partial t^\\nu} Q^\\nu(u,t) \\\\\n\t\t\t\t\t& = - \\int_0^\\infty Q(u,s)\n\t\t\t\t\t\\frac{\\partial}{\\partial s} h(s,t) \\mathrm ds \\notag \\\\\n\t\t\t\t\t& = \\left. - h(s,t) Q(u,s) \\right|_{s=0}^\\infty + \\int_0^\\infty h(s,t) \\frac{\\partial}{\\partial s}\n\t\t\t\t\tQ(u,s) \\mathrm ds \\notag \\\\\n\t\t\t\t\t& = \\int_0^\\infty \\left[ -\\mu u \\frac{\\partial}{\\partial u} Q(u,s)\n\t\t\t\t\t-\\lambda u(1-u) \\frac{\\partial}{\\partial u} Q(u,s)\n\t\t\t\t\t-\\lambda NuQ(u,s) \\right] h(s,t) \\mathrm ds \\notag \\\\\n\t\t\t\t\t& = -\\mu u \\frac{\\partial}{\\partial u} Q^\\nu(u,t)\n\t\t\t\t\t-\\lambda u(1-u) \\frac{\\partial}{\\partial u} Q^\\nu(u,t)\n\t\t\t\t\t-\\lambda NuQ^\\nu(u,t). \\notag\n\t\t\t\t\\end{align}\n\t\t\t\\end{proof}\n\t\t\t\\begin{flushright} \\qed \\end{flushright}\n\t\t\\end{thm}\n\t\t\n\t\tIn the following theorem, we derive the expected number of individuals $\\mathbb{E}\\,\\mathcal{N}^\\nu(t)$ or the expected population size of the fractional binomial process at any time $t \\ge 0$.\n\t\t\\begin{thm}\n\t\t\tFor the fractional binomial process $\\mathcal{N}^\\nu(t)$, $t \\ge 0$, $\\nu \\in (0,1]$, we have\n\t\t\t\\begin{align}\n\t\t\t\t\\label{mea}\n\t\t\t\t\\mathbb{E}\\,\\mathcal{N}^\\nu(t) = \\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right)\n\t\t\t\tE_{\\nu,1} \\left( -(\\lambda+\\mu) t^\\nu \\right) + N\\frac{\\lambda}{\\lambda+\\mu},\n\t\t\t\\end{align}\n\twhere\n\t\\[\n\tE_{\\alpha, \\beta} \\left( \\xi \\right) = \\sum\\limits_{r=0}^\\infty \\frac{\\xi^r}{\\Gamma (\\alpha r + \\beta) }\t\n\t\\]\nis the Mittag-Leffler function.\t\t\t\n\t\t\t\\begin{proof}\n\t\t\t\tBy considering that\n\t\t\t\t\\begin{align}\n\t\t\t\t\t\\label{sharp}\n\t\t\t\t\t\\left. 
-\\frac{\\partial}{\\partial u} Q^\\nu(u,t) \\right|_{u=0} = \\mathbb{E}\\,\\mathcal{N}^\\nu(t)\n\t\t\t\t\\end{align}\n\t\t\t\tand on the basis of \\eqref{fgpfrac},\n\t\t\t\twe can write\n\t\t\t\t\\begin{align}\n\t\t\t\t\t-\\frac{\\partial^\\nu}{\\partial t^\\nu} \\frac{\\partial}{\\partial u} Q^\\nu(u,t) = {} &\n\t\t\t\t\t\\mu \\left( \\frac{\\partial}{\\partial u} Q^\\nu(u,t) + u \\frac{\\partial^2}{\\partial u^2}\n\t\t\t\t\tQ^\\nu(u,t) \\right) \\\\\n\t\t\t\t\t& + \\lambda \\left( \\frac{\\partial}{\\partial u} Q^\\nu(u,t)\n\t\t\t\t\t+ u \\frac{\\partial^2}{\\partial u^2} Q^\\nu(u,t) \\right) \\notag \\\\\n\t\t\t\t\t& - \\lambda \\left( 2 u \\frac{\\partial}{\\partial u} Q^\\nu(u,t) + u^2\n\t\t\t\t\t\\frac{\\partial^2}{\\partial u^2}Q^\\nu(u,t) \\right) \\notag \\\\\n\t\t\t\t\t& + \\lambda N \\left( Q^\\nu(u,t)\n\t\t\t\t\t+ u \\frac{\\partial}{\\partial u} Q^\\nu(u,t) \\right), \\notag\n\t\t\t\t\\end{align}\n\t\t\t\tthus leading to the Cauchy problem\n\t\t\t\t\\begin{align}\n\t\t\t\t\t\\label{bel}\n\t\t\t\t\t\\begin{cases}\n\t\t\t\t\t\t\\frac{\\mathrm d^\\nu}{\\mathrm dt^\\nu} \\mathbb{E} \\mathcal{N}^\\nu(t) =\n\t\t\t\t\t\t-(\\mu +\\lambda) \\mathbb{E} \\mathcal{N}^\\nu(t) + \\lambda N, \\\\\n\t\t\t\t\t\t\\mathbb{E} \\mathcal{N}^\\nu(0) = M.\n\t\t\t\t\t\\end{cases}\n\t\t\t\t\\end{align}\n\t\t\t\tThe solution to \\eqref{bel} can be written as (using formula (4.1.65) of \\citet{kilbas})\n\t\t\t\t\\begin{align}\n\t\t\t\t\t\\mathbb{E}\\,\\mathcal{N}^\\nu(t)\n\t\t\t\t\t= {} & M E_{\\nu,1} \\left(-(\\lambda+\\mu)t^\\nu\\right) \\\\\n\t\t\t\t\t& + \\int_0^t (t-s)^{\\nu-1}\n\t\t\t\t\tE_{\\nu,\\nu} \\left(-(\\lambda+\\mu)(t-s)^\\nu\\right) \\lambda N \\mathrm ds \\notag \\\\\n\t\t\t\t\t= {} & M E_{\\nu,1} \\left(-(\\lambda+\\mu)t^\\nu\\right) + \\lambda N \\int_0^t y^{\\nu-1} E_{\\nu,\\nu}\n\t\t\t\t\t\\left(-(\\lambda+\\mu) y^\\nu\\right) \\mathrm dy \\notag \\\\\n\t\t\t\t\t= {} & M E_{\\nu,1} \\left(-(\\lambda+\\mu)t^\\nu\\right) + \\lambda N \\biggl[ -\\frac{\n\t\t\t\t\tE_{\\nu,1}\\left( -(\\lambda+\\mu)y^\\nu \\right)}{(\\lambda+\\mu)} \\biggr]_0^t \\notag \\\\\n\t\t\t\t\t= {} & M E_{\\nu,1} \\left(-(\\lambda+\\mu)t^\\nu\\right) - \\frac{\\lambda}{\\lambda+\\mu}\n\t\t\t\t\tN \\left( E_{\\nu,1} \\left( -(\\lambda+\\mu) t^\\nu \\right) -1 \\right) \\notag \\\\\n\t\t\t\t\t= {} & \\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right)E_{\\nu,1}\n\t\t\t\t\t\\left( -(\\lambda+\\mu) t^\\nu \\right) + N\\frac{\\lambda}{\\lambda+\\mu}. \\notag\n\t\t\t\t\\end{align}\n\t\t\t\\end{proof}\n\t\t\t\\begin{flushright} \\qed \\end{flushright}\n\t\t\\end{thm}\n\t\tFigure \\ref{afig} shows the mean value \\eqref{mea} in both cases $\\left[ M-N\\lambda\/(\\lambda+\\mu)\n\t\t\\right] < 0$ and $\\left[ M-N\\lambda\/(\\lambda+\\mu)\n\t\t\\right] > 0$ for specific values of the remaining parameters.\n\t\tNote also that when $M=N\\lambda\/(\\lambda+\\mu)$ the mean value $\\mathbb{E}\\,\n\t\t\\mathcal{N}^\\nu(t) = N\\lambda\/(\\lambda+\\mu)$ is constant.\n\t\n\t\t\\begin{figure}[h!t!b!p!]\n\t\t\t\\centering\n\t\t\t\\includegraphics[height=2.5in, width=2.2in]{p1.pdf}\n\t\t\t\\includegraphics[height=2.5in, width=2.2in]{p2.pdf}\n\t\t \n\t\t\n\t\t\t\\caption{\\label{afig}The mean value of the fractional binomial process $\\mathbb{E}\\,\n\t\t\t\t\\mathcal{N}^\\nu(t)$. For both graphs we have $N=100$, $M=40$, $\\nu=0.7$. 
The rates are respectively\n\t\t\t\t$(\\lambda,\\mu)=(1,1)$ (left) and $(\\lambda,\\mu)=(1,3)$ (right).}\n\t\t\\end{figure}\n\n\t\n\t\tWe now proceed to deriving the variance $\\mathbb{V}\\text{ar} \\, \\mathcal{N}^\\nu(t)$ of the fractional\n\t\tbinomial process, starting from the second factorial moment.\n\t\t\n\t\t\\begin{thm}\t\t\n\t\t\t\t\tFor the fractional binomial process $\\mathcal{N}^\\nu(t)$, $t \\ge 0$, $\\nu \\in (0,1]$, we have\n\t\t\t\\begin{align}\n\t\t\t\t\\label{va}\n\t\t\t\t\\mathbb{V}\\text{ar} & \\, \\mathcal{N}^\\nu(t) \\\\\t\t\t\n\t\t\t\t= {} & \\left( \\frac{\\lambda^2 N(N-1)}{(\\lambda +\\mu)^2} -\\frac{2\\lambda M(N-1)}{\\lambda+\\mu}\n\t\t\t\t+ M(M-1) \\right) E_{\\nu,1} (-2(\\lambda+\\mu)t^\\nu) \\notag \\\\\n\t\t\t\t& + \\left( \\frac{2\\lambda^2 N}{(\\lambda+\\mu)^2} - \\frac{\\lambda}{\\lambda+\\mu}\n\t\t\t\t(N+2M) +M \\right)\n\t\t\t\tE_{\\nu,1} (-(\\lambda+\\mu)t^\\nu) \\notag \\\\\n\t\t\t\t& - \\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right)^2 \\left( E_{\\nu,1}(-(\\lambda+\\mu)t^\\nu) \\right)^2\n\t\t\t\t+ \\frac{N \\lambda\\mu}{(\\lambda+\\mu)^2}. \\notag\n\t\t\t\\end{align}\n\t\t\t\n\t\t\t\\begin{proof}\n\t\t\t\tFrom \\eqref{fgpfrac}, we have\n\t\t\t\t\\begin{align}\n\t\t\t\t\t\\frac{\\partial^\\nu}{\\partial t^\\nu} \\frac{\\partial^2}{\\partial u^2} Q^\\nu(u,t) = {} &\n\t\t\t\t\t-\\mu \\frac{\\partial^2}{\\partial u^2} Q^\\nu(u,t) -\\mu \\left( \\frac{\\partial^2}{\\partial u^2}\n\t\t\t\t\tQ^\\nu(u,t) + u \\frac{\\partial^3}{\\partial u^3} Q^\\nu(u,t) \\right) \\\\\n\t\t\t\t\t& -\\lambda \\left( (1-2u) \\frac{\\partial^2}{\\partial u^2} Q^\\nu(u,t) -2 \\frac{\\partial}{\\partial u}\n\t\t\t\t\tQ^\\nu(u,t) \\right) \\notag \\\\\n\t\t\t\t\t& - \\lambda \\left( (1-2u) \\frac{\\partial^2}{\\partial u^2}Q^\\nu(u,t)\n\t\t\t\t\t+(u-u^2)\\frac{\\partial^3}{\\partial u^3} Q^\\nu(u,t) \\right) \\notag \\\\\n\t\t\t\t\t& -\\lambda N \\frac{\\partial}{\\partial u} Q^\\nu(u,t) - \\lambda N \\left( \\frac{\\partial}{\\partial u}\n\t\t\t\t\tQ^\\nu(u,t) + u \\frac{\\partial^2}{\\partial u^2} Q^\\nu(u,t) \\right). \\notag\n\t\t\t\t\\end{align}\n\t\t\t\tRecalling \\eqref{sharp} and the equality\n\t\t\t\t\\begin{align}\n\t\t\t\t\t\\left. \\frac{\\partial^2}{\\partial u^2} Q^\\nu(u,t) \\right|_{u=0} = \\mathbb{E}\n\t\t\t\t\t\\left( \\mathcal{N}^\\nu(t)(\\mathcal{N}^\\nu(t) -1) \\right) = H^\\nu(t),\n\t\t\t\t\\end{align}\n\t\t\t\twe obtain\n\t\t\t\t\t\\begin{align}\n\t\t\t\t\t\\label{sarp}\n\t\t\t\t\t\\frac{\\mathrm d^\\nu}{\\mathrm dt^\\nu} H^\\nu(t) & =\n\t\t\t\t\t-2 \\mu H^\\nu(t) -2\\lambda H^\\nu(t) -2\\lambda \\mathbb{E}\\,\\mathcal{N}^\\nu(t) + 2\\lambda N\n\t\t\t\t\t\\mathbb{E}\\,\\mathcal{N}^\\nu(t) \\\\\n\t\t\t\t\t& = -2 (\\lambda+\\mu) H^\\nu(t) + 2 \\lambda (N-1) \\mathbb{E}\\,\\mathcal{N}^\\nu(t). 
\\notag\n\t\t\t\t\\end{align}\n\t\t\t\tBy substituting \\eqref{mea} into \\eqref{sarp}, we arrive at the Cauchy problem\n\t\t\t\t\\begin{align}\n\t\t\t\t\t\\label{cacio}\n\t\t\t\t\t\\begin{cases}\n\t\t\t\t\t\t\\frac{\\mathrm d^\\nu}{\\mathrm dt^\\nu} H^\\nu(t) = -2 (\\lambda+\\mu) H^\\nu(t)\n\t\t\t\t\t\t+ 2 \\lambda (N-1) \\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right)\n\t\t\t\t\t\tE_{\\nu,1} \\left( -(\\lambda+\\mu)t^\\nu \\right) \\\\\n\t\t\t\t\t\t\\qquad \\qquad \\qquad + 2 \\lambda^2 N (N-1) \\frac{1}{\\lambda+\\mu} \\\\\n\t\t\t\t\t\tH^\\nu(0) = M(M-1),\n\t\t\t\t\t\\end{cases}\n\t\t\t\t\\end{align}\n\t\t\t\tthat can be solved using the Laplace transform $\\widetilde{H}^\\nu(z) = \\int_0^\\infty\n\t\t\t\te^{-zt} H^\\nu(t)\\, \\mathrm dt$ as follows:\n\t\t\t\t\\begin{align}\n\t\t\t\t\tz^\\nu \\widetilde{H}^\\nu(z) &- z^{\\nu-1} M(M-1) \\\\\n\t\t\t\t\t= {} & -2 (\\lambda+\\mu) \\widetilde{H}^\\nu(z) + 2 \\lambda (N-1)\n\t\t\t\t\t\\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right) \\frac{z^{\\nu-1}}{z^\\nu + (\\lambda+\\mu)} \\notag \\\\\n\t\t\t\t\t& + \\frac{1}{z} 2 \\lambda^2 N(N-1)\\frac{1}{\\lambda+\\mu}. \\notag\n\t\t\t\t\\end{align}\n\t\t\t\tThe Laplace transform then reads\n\t\t\t\t\\begin{align}\n\t\t\t\t\t\\label{elastic}\n\t\t\t\t\t\\widetilde{H}^\\nu(z) = {} & M(M-1) \\frac{z^{\\nu-1}}{z^\\nu+2(\\lambda+\\mu)} \\\\\n\t\t\t\t\t& +2\\lambda (N-1)\n\t\t\t\t\t\\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right) \\frac{z^{\\nu-1}}{(z^\\nu+(\\lambda+\\mu))\n\t\t\t\t\t(z^\\nu+2(\\lambda+\\mu))} \\notag \\\\\n\t\t\t\t\t& + 2 \\lambda^2 N(N-1) \\frac{1}{\\lambda+\\mu} \\cdot \\frac{z^{-1}}{z^\\nu\n\t\t\t\t\t+2(\\lambda+\\mu)} \\notag \\\\\n\t\t\t\t\t= {} & M(M-1) \\frac{z^{\\nu-1}}{z^\\nu+2(\\lambda+\\mu)} \\notag \\\\\n\t\t\t\t\t& + \\frac{2\\lambda(N-1)}{\\lambda+\\mu}\n\t\t\t\t\t\\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right) \\left( \\frac{z^{\\nu-1}}{z^\\nu+(\\lambda+\\mu)}\n\t\t\t\t\t- \\frac{z^{\\nu-1}}{z^\\nu + 2(\\lambda+\\mu)} \\right) \\notag \\\\\n\t\t\t\t\t& + 2 \\lambda^2 N(N-1) \\frac{1}{\\lambda+\\mu} \\cdot \\frac{z^{-1}}{z^\\nu\n\t\t\t\t\t+2(\\lambda+\\mu)} \\notag.\n\t\t\t\t\\end{align}\n\t\t\t\tEquation \\eqref{elastic} then implies that\n\t\t\t\t\\begin{align}\n\t\t\t\t\tH^\\nu&(t) \\\\\n\t\t\t\t\t= {} & M(M-1) E_{\\nu,1} \\left( -2(\\lambda+\\mu)t^\\nu \\right) \\notag \\\\\n\t\t\t\t\t& +\\frac{2\\lambda (N-1)}{\\lambda+\\mu} \\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right)\n\t\t\t\t\t\\left( E_{\\nu,1}(-(\\lambda+\\mu)t^\\nu) - E_{\\nu,1}(-2(\\lambda+\\mu)t^\\nu) \\right) \\notag \\\\\n\t\t\t\t\t& + 2\\lambda^2 N(N-1)\\frac{1}{\\lambda+\\mu} t^\\nu E_{\\nu,\\nu+1} \\left( -2(\\lambda+\\mu)t^\\nu\n\t\t\t\t\t\\right). 
\\notag\n\t\t\t\t\\end{align}\n\t\t\t\tConsidering that $t^\\nu E_{\\nu,\\nu+1}(at^\\nu) = a^{-1} (E_{\\nu,1}(at^\\nu)-1)$, we obtain\n\t\t\t\t\\begin{align}\n\t\t\t\t\tH^\\nu(t) = {} & M(M-1) E_{\\nu,1} \\left( -2(\\lambda+\\mu)t^\\nu \\right) \\\\\n\t\t\t\t\t& +\t\\frac{2\\lambda(N-1)}{\\lambda+\\mu} \\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right)\n\t\t\t\t\tE_{\\nu,1} (-(\\lambda+\\mu)t^\\nu) \\notag \\\\\n\t\t\t\t\t& - \\frac{2\\lambda (N-1)}{\\lambda+\\mu} \\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right)\n\t\t\t\t\tE_{\\nu,1} (-2(\\lambda+\\mu)t^\\nu) \\notag \\\\\n\t\t\t\t\t& -\\frac{\\lambda^2}{(\\lambda+\\mu)^2} N(N-1) E_{\\nu,1} (-2(\\lambda+\\mu)t^\\nu)\n\t\t\t\t\t+ \\frac{\\lambda^2}{(\\lambda+\\mu)^2} N(N-1) \\notag \\\\\n\t\t\t\t\t= {} & M(M-1) E_{\\nu,1} \\left( -2(\\lambda+\\mu)t^\\nu \\right) \\notag \\\\\n\t\t\t\t\t& +\n\t\t\t\t\t\\frac{2\\lambda M(N-1)}{\\lambda+\\mu} E_{\\nu,1}(-(\\lambda+\\mu)t^\\nu) \\notag \\\\\n\t\t\t\t\t& - \\frac{2 \\lambda^2 N(N-1)}{(\\lambda+\\mu)^2} E_{\\nu,1} (-(\\lambda+\\mu)t^\\nu) \\notag \\\\\n\t\t\t\t\t& - \\frac{2\\lambda M(N-1)}{\\lambda+\\mu} E_{\\nu,1} (-2(\\lambda+\\mu)t^\\nu) \\notag \\\\\n\t\t\t\t\t& + \\frac{2\\lambda^2 N(N-1)}{(\\lambda+\\mu)^2} E_{\\nu,1}(-2(\\lambda+\\mu)t^\\nu) \\notag \\\\\n\t\t\t\t\t& -\\frac{\\lambda^2 N(N-1)}{(\\lambda+\\mu)^2} E_{\\nu,1}(-2(\\lambda+\\mu)t^\\nu)\n\t\t\t\t\t+\\frac{\\lambda^2 N(N-1)}{(\\lambda+\\mu)^2} \\notag \\\\\n\t\t\t\t\t= {} & \\frac{\\lambda^2 N(N-1)}{(\\lambda+\\mu)^2} \\notag \\\\\n\t\t\t\t\t& + E_{\\nu,1}(-2(\\lambda+\\mu)t^\\nu)\n\t\t\t\t\t\\left( \\frac{\\lambda^2 N(N-1)}{(\\lambda + \\mu)^2} - \\frac{2\\lambda M(N-1)}{\\lambda+\\mu}\n\t\t\t\t\t+M(M-1) \\right) \\notag \\\\\n\t\t\t\t\t& - E_{\\nu,1} (-(\\lambda+\\mu)t^\\nu) \\left( \\frac{2 \\lambda^2 N(N-1)}{(\\lambda+\\mu)^2}\n\t\t\t\t\t-\\frac{2\\lambda M (N-1)}{\\lambda+\\mu} \\right). \\notag\n\t\t\t\t\\end{align}\n\t\t\t\tThe variance can thus be written as\n\t\t\t\t\\begin{align}\n\t\t\t\t\t\\mathbb{V}\\text{ar} & \\, \\mathcal{N}^\\nu(t) \\\\\n\t\t\t\t\t= {} & H^\\nu(t) + \\mathbb{E}\\, \\mathcal{N}^\\nu(t)\n\t\t\t\t\t- (\\mathbb{E}\\, \\mathcal{N}^\\nu(t))^2 \\notag \\\\\n\t\t\t\t\t= {} & H^\\nu(t) + \\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right) E_{\\nu,1} (-(\\lambda+\\mu)t^\\nu)\n\t\t\t\t\t\\notag \\\\\n\t\t\t\t\t& + N \\frac{\\lambda}{\\lambda+\\mu} - \\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right)^2\n\t\t\t\t\t\\left( E_{\\nu,1}(-(\\lambda+\\mu)t^\\nu) \\right)^2 \\notag \\\\\n\t\t\t\t\t& - N^2 \\frac{\\lambda^2}{(\\lambda+\\mu)^2} - 2 \\frac{N \\lambda}{\\lambda+\\mu}\n\t\t\t\t\t\\left( M-N \\frac{\\lambda}{\\lambda+\\mu} \\right) E_{\\nu,1}(-(\\lambda+\\mu)t^\\nu) \\notag \\\\\n\t\t\t\t\t= {} & \\left( \\frac{\\lambda^2 N(N-1)}{(\\lambda +\\mu)^2} -\\frac{2\\lambda M(N-1)}{\\lambda+\\mu}\n\t\t\t\t\t+ M(M-1) \\right) E_{\\nu,1} (-2(\\lambda+\\mu)t^\\nu) \\notag \\\\\n\t\t\t\t\t& + \\left( \\frac{2\\lambda^2 N}{(\\lambda+\\mu)^2} - \\frac{\\lambda}{\\lambda+\\mu}\n\t\t\t\t\t(N+2M) +M \\right)\n\t\t\t\t\tE_{\\nu,1} (-(\\lambda+\\mu)t^\\nu) \\notag \\\\\n\t\t\t\t\t& - \\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right)^2 \\left( E_{\\nu,1}(-(\\lambda+\\mu)t^\\nu) \\right)^2\n\t\t\t\t\t+ \\frac{N \\lambda\\mu}{(\\lambda+\\mu)^2}. 
\\notag\n\t\t\t\t\\end{align}\n\t\t\t\\end{proof}\n\t\t\t\\begin{flushright} \\qed \\end{flushright}\n\t\t\\end{thm}\n\t\t\n\t\tExploiting Theorem \\ref{sub}, we derive the explicit expression of the extinction\n\t\tprobability $p_0^\\nu(t) = \\text{Pr} \\{ \\mathcal{N}^\\nu(t) = 0 | \\mathcal{N}^\\nu(0) = M \\}$ below.\n\t\t\n\t\t\\begin{thm}\n\t\t\tThe extinction probability $p_0^\\nu(t) = \\text{Pr} \\{ \\mathcal{N}^\\nu(t) = 0 | \\mathcal{N}^\\nu(0) = M \\}$\n\t\t\tfor a fractional binomial process $\\mathcal{N}^\\nu(t)$, $t \\ge 0$ is\n\t\t\t\\begin{align}\n\t\t\t\t\\label{extinction}\n\t\t\t\tp_0^\\nu(t) = {} & \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N \\sum_{r=0}^{N-M} \\binom{N-M}{r} \\left( \n\t\t\t\t\\frac{\\lambda}{\\mu} \\right)^r \\\\\n\t\t\t\t& \\times \\sum_{h=0}^M \\binom{M}{h} (-1)^h E_{\\nu,1}(-(r+h)(\\lambda+\\mu)t^\\nu). \\notag\n\t\t\t\\end{align}\n\t\t\t\n\t\t\t\\begin{proof}\n\t\t\t\tIt is known \\citep{jakeman} that the generating function $Q(u,t) = \\sum_{n=0}^N (1-u)^n p_n(t)$\n\t\t\t\tfor the classical binomial process can be written as\n\t\t\t\t\\begin{align}\n\t\t\t\t\tQ(u,t)\n\t\t\t\t\t= {} & \\left[ 1-\\left( 1-e^{-(\\mu+\\lambda)t} \\right)\\frac{\\lambda}{\\lambda+\\mu}u \\right]^{N-M} \\\\\n\t\t\t\t\t& \\times \\left[ 1-\\left( \\left(1-e^{-(\\mu+\\lambda)t}\\right)\\frac{\\lambda}{\\lambda+\\mu} +\n\t\t\t\t\te^{-(\\mu+\\lambda)t} \\right)u \\right]^M. \\notag\n\t\t\t\t\\end{align}\n\t\t\t\tThis suggests that the extinction probability for the classical case can be written as\n\t\t\t\t\\begin{align}\n\t\t\t\t\tp_0(t) = {} & \\left[1-\\frac{\\lambda}{\\lambda+\\mu}\n\t\t\t\t\t+ \\frac{\\lambda}{\\lambda+\\mu}\n\t\t\t\t\te^{-(\\mu+\\lambda)t}\\right]^{N-M} \\\\\n\t\t\t\t\t& \\times \\left[ 1-\\left(\\frac{\\lambda}{\\lambda+\\mu}\n\t\t\t\t\t-e^{-(\\mu+\\lambda)t} \\frac{\\lambda}{\\lambda+\\mu}+e^{-(\\mu+\\lambda)t}\\right) \\right]^M \\notag \\\\\n\t\t\t\t\t= {} & \\left[ \\frac{\\mu}{\\lambda+\\mu} + \\frac{\\lambda}{\\lambda+\\mu} e^{-(\\mu+\\lambda)t} \\right]^{N-M}\n\t\t\t\t\t\\left[ \\frac{\\mu}{\\lambda+\\mu} - \\frac{\\mu}{\\lambda+\\mu}e^{-(\\mu+\\lambda)t} \\right]^M \\notag \\\\\n\t\t\t\t\t= {} & \\left( \\frac{1}{\\lambda+\\mu} \\right)^N \\mu^M \\left( \\mu+\\lambda e^{-(\\lambda+\\mu)t} \\right)^{N-M}\n\t\t\t\t\t\\left( 1-e^{-(\\lambda+\\mu)t} \\right)^M \\notag \\\\\n\t\t\t\t\t= {} & \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N \\left( 1+\\frac{\\lambda}{\\mu}\n\t\t\t\t\te^{-(\\lambda+\\mu)t} \\right)^{N-M} \\left( 1-e^{-(\\lambda+\\mu)t} \\right)^M \\notag \\\\\n\t\t\t\t\t= {} & \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N \\sum_{r=0}^{N-M} \\binom{N-M}{r} \\left( \n\t\t\t\t\t\\frac{\\lambda}{\\mu} \\right)^r e^{-r(\\lambda+\\mu)t} \\notag \\\\\n\t\t\t\t\t& \\times \\sum_{h=0}^M\n\t\t\t\t\t\\binom{M}{h} (-1)^h e^{-h(\\lambda+\\mu)t}\n\t\t\t\t\t\\notag \\\\\n\t\t\t\t\t= {} & \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N \\sum_{r=0}^{N-M} \\binom{N-M}{r} \\left( \n\t\t\t\t\t\\frac{\\lambda}{\\mu} \\right)^r \\sum_{h=0}^M \\binom{M}{h} (-1)^h e^{-(r+h)(\\lambda+\\mu)t}. 
\\notag\n\t\t\t\t\\end{align}\n\t\t\t\tUsing Theorem \\ref{sub}, we now obtain\n\t\t\t\t\\begin{align}\n\t\t\t\t\tp_0^\\nu(t) = {} & \\int_0^\\infty p_0(s) h(s,t) \\mathrm ds \\\\\n\t\t\t\t\t= {} & \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N\n\t\t\t\t\t\\sum_{r=0}^{N-M} \\binom{N-M}{r}\\left( \\frac{\\lambda}{\\mu} \\right)^r \\notag \\\\\n\t\t\t\t\t& \\times \\sum_{h=0}^M \\binom{M}{h}\n\t\t\t\t\t(-1)^h \\int_0^\\infty e^{-(r+h)(\\lambda+\\mu)s} h(s,t) \\mathrm ds \\notag \\\\\n\t\t\t\t\t= {} & \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N \\sum_{r=0}^{N-M} \\binom{N-M}{r} \\left( \n\t\t\t\t\t\\frac{\\lambda}{\\mu} \\right)^r \\notag \\\\\n\t\t\t\t\t& \\times \\sum_{h=0}^M \\binom{M}{h} (-1)^h\n\t\t\t\t\tE_{\\nu,1}(-(r+h)(\\lambda+\\mu)t^\\nu). \\notag\n\t\t\t\t\\end{align}\n\t\t\t\\end{proof}\n\t\t\t\\begin{flushright} \\qed \\end{flushright}\n\t\t\\end{thm}\n\n\t\t\\begin{thm}\n\t\t\tThe state probabilities $p_n^\\nu(t) = \\text{Pr} \\{ \\mathcal{N}^\\nu(t) = n | \\mathcal{N}^\\nu(0)\n\t\t\t= M \\}$, $\\lambda >0$, $\\mu > 0$, have the following form:\n\t\t\t\\begin{align}\n\t\t\t\t\\label{stato}\n\t\t\t\tp_n^\\nu(t) =\n\t\t\t\t\\begin{cases}\n\t\t\t\t\t\\sum_{r=0}^n g_{n,r}^\\nu(t), & 0 \\leq n < \\min(M,N-M), \\\\\n\t\t\t\t\t\\sum_{r=0}^{N-M} g_{n,r}^\\nu(t), & N-M \\leq n < M, \\: M > N-M, \\\\\n\t\t\t\t\t\\sum_{r=n-M}^n g_{n,r}^\\nu(t), & M \\leq n < N-M, \\: M < N-M, \\\\\n\t\t\t\t\t\\sum_{r=n-M}^{N-M} g_{n,r}^\\nu(t), & \\max(M,N-M) \\leq n \\leq N,\n\t\t\t\t\\end{cases}\n\t\t\t\\end{align}\n\t\t\tand\n\t\t\t\\begin{align}\n\t\t\t\tp_n^\\nu(t) =\n\t\t\t\t\\begin{cases}\n\t\t\t\t\t\\sum_{r=0}^n g_{n,r}^\\nu(t), & 0 \\leq n< M, \\\\\n\t\t\t\t\t\\sum_{r=n-M}^M g_{n,r}^\\nu(t), & M \\leq n \\leq N,\n\t\t\t\t\\end{cases}\t\t\t\n\t\t\t\\end{align}\n\t\t\twhen $N-M=M$, and where\n\t\t\t\\begin{align}\n\t\t\t\tg_{n,r}^\\nu(t)\n\t\t\t\t= {} & \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N \\binom{N-M}{r} \\binom{M}{n-r} \\\\\n\t\t\t\t& \\times \\sum_{m_1=0}^r \\binom{r}{m_1} (-1)^{m_1}\n\t\t\t\t\\sum_{m_2=0}^{N-M-r} \\binom{N-M-r}{m_2} \\left( \\frac{\\lambda}{\\mu} \\right)^{m_2} \\notag \\\\\n\t\t\t\t& \\times \\sum_{m_3=0}^{n-r} \\binom{n-r}{m_3} \\left( \\frac{\\lambda}{\\mu} \\right)^{n-m_3}\n\t\t\t\t\\sum_{m_4=0}^{M-n+r} \\binom{M-n+r}{m_4} (-1)^{m_4} \\notag \\\\\n\t\t\t\t& \\times E_{\\nu,1} \\left(\n\t\t\t\t-(m_1+m_2+m_3+m_4)(\\mu+\\lambda)t^\\nu \\right). \\notag\n\t\t\t\\end{align}\n\t\t\t\n\t\t\t\\begin{proof}\n\t\t\t\tWe start by rewriting the probability generating function of the classical binomial\n\t\t\t\tprocess as\n\t\t\t\t\\begin{align}\n\t\t\t\t\tQ(u,t)& \\\\\n\t\t\t\t\t= {} & \\left[ 1-\\left( 1-e^{-(\\mu+\\lambda)t} \\right)\\frac{\\lambda}{\\lambda+\\mu}u \\right]^{N-M}\\notag\\\\\n\t\t\t\t\t& \\times \\left[ 1-\\left( \\left(1-e^{-(\\mu+\\lambda)t}\\right)\\frac{\\lambda}{\\lambda+\\mu} +\n\t\t\t\t\te^{-(\\mu+\\lambda)t} \\right)u \\right]^M \\notag \\\\\n\t\t\t\t\t= {} & \\left[ (1-u) \\left( \\frac{\\lambda}{\\lambda+\\mu} - \\frac{\\lambda}{\\lambda+\\mu}\n\t\t\t\t\te^{-(\\mu+\\lambda)t}\t\\right) + \\frac{\\mu}{\\lambda+\\mu} + \\frac{\\lambda}{\\lambda+\\mu}\n\t\t\t\t\te^{-(\\mu+\\lambda)t} \\right]^{N-M} \\notag \\\\\n\t\t\t\t\t& \\times \\left[ \\frac{\\lambda}{\\lambda+\\mu}(1-u) + \\frac{\\mu}{\\lambda+\\mu} + \\frac{\\mu}{\\lambda+\\mu}\n\t\t\t\t\te^{-(\\mu+\\lambda)t} \\right. \\notag \\\\\n\t\t\t\t\t& \\left. 
-\\frac{\\mu}{\\lambda+\\mu} e^{-(\\mu+\\lambda)t}\n\t\t\t\t\t-u \\frac{\\mu}{\\lambda+\\mu} e^{-(\\mu+\\lambda)t} \\right]^M \\notag \\\\\n\t\t\t\t\t= {} & \\left( \\frac{\\lambda}{\\lambda+\\mu} \\right)^N\n\t\t\t\t\t\\left[ (1-u) \\left( 1-e^{-(\\mu+\\lambda)t} \\right) + \\frac{\\mu}{\\lambda}\n\t\t\t\t\t+e^{-(\\mu+\\lambda)t} \\right]^{N-M} \\notag \\\\\n\t\t\t\t\t& \\times \\left[ (1-u) \\left( 1+\\frac{\\mu}{\\lambda}e^{-(\\mu+\\lambda)t} \\right)\n\t\t\t\t\t+ \\frac{\\mu}{\\lambda} - \\frac{\\mu}{\\lambda} e^{-(\\mu+\\lambda)t} \\right]^M \\notag \\\\\n\t\t\t\t\t= {} & \\left( \\frac{\\lambda}{\\lambda+\\mu} \\right)^N\n\t\t\t\t\t\\sum_{r=0}^{N-M} \\binom{N-M}{r} (1-u)^r \\notag \\\\\n\t\t\t\t\t& \\times \\left( 1-e^{-(\\mu+\\lambda)t} \\right)^r\n\t\t\t\t\t\\left( \\frac{\\mu}{\\lambda} + e^{-(\\mu+\\lambda)t} \\right)^{N-M-r} \\notag \\\\\n\t\t\t\t\t& \\times \\sum_{h=0}^M \\binom{M}{h} (1-u)^h \\left( 1+\\frac{\\mu}{\\lambda}\n\t\t\t\t\te^{-(\\mu+\\lambda)t} \\right)^h \\left( \\frac{\\mu}{\\lambda}-\\frac{\\mu}{\\lambda}\n\t\t\t\t\te^{-(\\mu+\\lambda)t} \\right)^{M-h} \\notag \\\\\n\t\t\t\t\t= {} & \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N \\sum_{r=0}^{N-M} \\sum_{j=r}^{M+r}\n\t\t\t\t\t(1-u)^j \\binom{N-M}{r} \\binom{M}{j-r} \\left( \\frac{\\lambda}{\\mu}\n\t\t\t\t\t-\\frac{\\lambda}{\\mu} e^{-(\\mu+\\lambda)t} \\right)^r \\notag \\\\\n\t\t\t\t\t& \\times \\left( 1+\\frac{\\lambda}{\\mu} e^{-(\\mu+\\lambda)t} \\right)^{N-M-r}\n\t\t\t\t\t\\left( \\frac{\\lambda}{\\mu} + e^{-(\\mu+\\lambda)t} \\right)^{j-r} \\notag \\\\\n\t\t\t\t\t& \\times \\left( 1-e^{-(\\mu+\\lambda)t} \\right)^{M-j+r}. \\notag\n\t\t\t\t\\end{align}\n\t\t\tLetting\n\t\t\t\t\\begin{align}\n\t\t\t\t\tg_{j,r}(t) = {} & \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N\n\t\t\t\t\t\\binom{N-M}{r} \\binom{M}{j-r} \\left( \\frac{\\lambda}{\\mu}\n\t\t\t\t\t-\\frac{\\lambda}{\\mu} e^{-(\\mu+\\lambda)t} \\right)^r \\\\\n\t\t\t\t\t& \\times \\left( 1+\\frac{\\lambda}{\\mu} e^{-(\\mu+\\lambda)t} \\right)^{N-M-r}\n\t\t\t\t\t\\left( \\frac{\\lambda}{\\mu} + e^{-(\\mu+\\lambda)t} \\right)^{j-r} \\notag \\\\\n\t\t\t\t\t& \\times \\left( 1-e^{-(\\mu+\\lambda)t} \\right)^{M-j+r}, \\notag\n\t\t\t\t\\end{align}\n\t\t\t\twe have\n\t\t\t\t\\begin{align}\n\t\t\t\t\tQ(u,t) =\n\t\t\t\t\t\\begin{cases}\n\t\t\t\t\t\t\\sum_{j=0}^{M-1} (1-u)^j \\sum_{r=0}^j g_{j,r}(t) \\\\\n\t\t\t\t\t\t\\qquad + \\sum_{j=M}^{N-M-1}\n\t\t\t\t\t\t(1-u)^j \\sum_{r=j-M}^j g_{j,r}(t) \\\\\n\t\t\t\t\t\t\\qquad + \\sum_{j=N-M}^N (1-u)^j \\sum_{r=j-M}^{N-M} g_{j,r}(t),\n\t\t\t\t\t\t& M < N-M, \\\\\n\t\t\t\t\t\t\\sum_{j=0}^{N-M-1} (1-u)^j \\sum_{r=0}^j g_{j,r}(t) \\\\\n\t\t\t\t\t\t\\qquad + \\sum_{j=N-M}^{M-1}\n\t\t\t\t\t\t(1-u)^j \\sum_{r=0}^{N-M} g_{j,r}(t)\\\\\n\t\t\t\t\t\t\\qquad + \\sum_{j=M}^N (1-u)^j \\sum_{r=j-M}^{N-M} g_{j,r}(t),\n\t\t\t\t\t\t& M > N-M, \\\\\n\t\t\t\t\t\t\\sum_{j=0}^{M-1} (1-u)^j \\sum_{r=0}^j g_{j,r}(t) \\\\\n\t\t\t\t\t\t\\qquad + \\sum_{j=M}^N (1-u)^j \\sum_{r=j-M}^{M} g_{j,r}(t), & N-M=M.\n\t\t\t\t\t\\end{cases}\t\t\t\t\t\t\n\t\t\t\t\\end{align}\n\t\t\t\tThe classical state probabilities therefore read\n\t\t\t\t\\begin{align}\n\t\t\t\t\tp_n(t) =\n\t\t\t\t\t\\begin{cases}\n\t\t\t\t\t\t\\sum_{r=0}^n g_{n,r}(t), & 0 \\leq n < \\min(M,N-M), \\\\\n\t\t\t\t\t\t\\sum_{r=0}^{N-M} g_{n,r}(t), & N-M \\leq n < M, \\: M > N-M, \\\\\n\t\t\t\t\t\t\\sum_{r=n-M}^n g_{n,r}(t), & M \\leq n < N-M, \\: M < N-M, \\\\\n\t\t\t\t\t\t\\sum_{r=n-M}^{N-M} g_{n,r}(t), & \\max(M,N-M) \\leq n \\leq N,\n\t\t\t\t\t\\end{cases}\n\t\t\t\t\\end{align}\n\t\t\t\twhich 
reduce to\n\t\t\t\t\\begin{align}\n\t\t\t\t\tp_n(t) =\n\t\t\t\t\t\\begin{cases}\n\t\t\t\t\t\t\\sum_{r=0}^n g_{n,r}(t), & 0 \\leq n< M, \\\\\n\t\t\t\t\t\t\\sum_{r=n-M}^M g_{n,r}(t), & M \\leq n \\leq N,\n\t\t\t\t\t\\end{cases}\t\t\t\n\t\t\t\t\\end{align}\n\t\t\t\twhen $N-M=M$.\n\t\t\t\t\n\t\t\t\tExploiting Theorem \\ref{sub}, we can derive the state probabilities for the\n\t\t\t\tfractional binomial process $\\mathcal{N}^\\nu(t)$, $t \\ge 0$, as\n\t\t\t\t\\begin{align}\n\t\t\t\t\tp_n^\\nu(t) & = \\int_0^\\infty p_n(s) h(s,t) \\mathrm ds \\\\\n\t\t\t\t\t& =\n\t\t\t\t\t\\begin{cases}\n\t\t\t\t\t\t\\sum_{r=0}^n g_{n,r}^\\nu(t), & 0 \\leq n < \\min(M,N-M), \\\\\n\t\t\t\t\t\t\\sum_{r=0}^{N-M} g_{n,r}^\\nu(t), & N-M \\leq n < M, \\: M > N-M, \\\\\n\t\t\t\t\t\t\\sum_{r=n-M}^n g_{n,r}^\\nu(t), & M \\leq n < N-M, \\: M < N-M, \\\\\n\t\t\t\t\t\t\\sum_{r=n-M}^{N-M} g_{n,r}^\\nu(t), & \\max(M,N-M) \\leq n \\leq N,\n\t\t\t\t\t\\end{cases} \\notag\n\t\t\t\t\\end{align}\n\t\t\t\tor\n\t\t\t\t\\begin{align}\n\t\t\t\t\tp_n^\\nu(t) = \\int_0^\\infty p_n(s) h(s,t) \\mathrm ds =\n\t\t\t\t\t\\begin{cases}\n\t\t\t\t\t\t\\sum_{r=0}^n g_{n,r}^\\nu(t), & 0 \\leq n< M, \\\\\n\t\t\t\t\t\t\\sum_{r=n-M}^M g_{n,r}^\\nu(t), & M \\leq n \\leq N,\n\t\t\t\t\t\\end{cases}\t\t\t\t\t\t\n\t\t\t\t\\end{align}\t\t\t\t\n\t\t\t\tfor $N-M = M$. Note that\n\t\t\t\t\\begin{align}\n\t\t\t\t\tg_{n,r}^\\nu(t) = {} & \\int_0^\\infty g_{n,r}(s) h(s,t) \\mathrm ds \\\\\n\t\t\t\t\t= {} & \\int_0^\\infty \\left[ \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N\n\t\t\t\t\t\\binom{N-M}{r} \\binom{M}{n-r} \\left( \\frac{\\lambda}{\\mu}\n\t\t\t\t\t-\\frac{\\lambda}{\\mu} e^{-(\\mu+\\lambda)s} \\right)^r \\right. \\notag \\\\\n\t\t\t\t\t& \\left. \\times \\left( 1+\\frac{\\lambda}{\\mu} e^{-(\\mu+\\lambda)s} \\right)^{N-M-r}\n\t\t\t\t\t\\left( \\frac{\\lambda}{\\mu} + e^{-(\\mu+\\lambda)s} \\right)^{n-r} \\right. \\notag \\\\\n\t\t\t\t\t& \\times \\left. \\left( 1-e^{-(\\mu+\\lambda)s} \\right)^{M-n+r} \\right] h(s,t) \\mathrm ds \\notag \\\\\n\t\t\t\t\t= {} & \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N \\binom{N-M}{r} \\binom{M}{n-r}\n\t\t\t\t\t\\sum_{m_1=0}^r \\binom{r}{m_1} (-1)^{m_1} \\notag \\\\\n\t\t\t\t\t& \\times \\sum_{m_2=0}^{N-M-r} \\binom{N-M-r}{m_2} \\left( \\frac{\\lambda}{\\mu} \\right)^{m_2} \\notag \\\\\n\t\t\t\t\t& \\times \\sum_{m_3=0}^{n-r} \\binom{n-r}{m_3} \\left( \\frac{\\lambda}{\\mu} \\right)^{n-m_3}\n\t\t\t\t\t\\sum_{m_4=0}^{M-n+r} \\binom{M-n+r}{m_4} (-1)^{m_4} \\notag \\\\\n\t\t\t\t\t& \\times \\int_0^\\infty e^{\n\t\t\t\t\t-(m_1+m_2+m_3+m_4)(\\mu+\\lambda)s} h(s,t) \\mathrm ds \\notag \\\\\n\t\t\t\t\t= {} & \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N \\binom{N-M}{r} \\binom{M}{n-r}\n\t\t\t\t\t\\sum_{m_1=0}^r \\binom{r}{m_1} (-1)^{m_1} \\notag \\\\\n\t\t\t\t\t& \\times \\sum_{m_2=0}^{N-M-r} \\binom{N-M-r}{m_2} \\left( \\frac{\\lambda}{\\mu} \\right)^{m_2} \\notag \\\\\n\t\t\t\t\t& \\times \\sum_{m_3=0}^{n-r} \\binom{n-r}{m_3} \\left( \\frac{\\lambda}{\\mu} \\right)^{n-m_3}\n\t\t\t\t\t\\sum_{m_4=0}^{M-n+r} \\binom{M-n+r}{m_4} (-1)^{m_4} \\notag \\\\\n\t\t\t\t\t& \\times E_{\\nu,1} \\left(\n\t\t\t\t\t-(m_1+m_2+m_3+m_4)(\\mu+\\lambda)t^\\nu \\right). 
\\notag\n\t\t\t\t\\end{align}\n\t\t\t\tThis concludes the proof.\n\t\t\t\\end{proof}\n\t\t\t\\begin{flushright} \\qed \\end{flushright}\n\t\t\\end{thm}\n\t\t\n\t\t\\begin{remark}\n\t\t\tFrom \\eqref{stato}, we retrieve the extinction probability \\eqref{extinction} when $n=0$.\n\t\t\\end{remark}\n\t\t\n\t\t\\begin{remark}\n\t\t\tAs $t \\rightarrow \\infty$, the population in a fractional binomial\n\t\t\tprocess obeys a binomial distribution, i.e., \n\t\t\t\\begin{align}\n\t\t\t\t& \\lim_{t \\rightarrow \\infty} Q^\\nu(u,t) \\\\\n\t\t\t\t& = \\sum_{r=0}^{N-M} \\sum_{j=r}^{M+r} (1-u)^j \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N\n\t\t\t\t\\binom{N-M}{r} \\binom{M}{j-r} \\left( \\frac{\\lambda}{\\mu} \\right)^j \\notag \\\\\n\t\t\t\t& = \\sum_{r=0}^{N-M} \\sum_{h=0}^M (1-u)^{h+r} \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N\n\t\t\t\t\\binom{N-M}{r} \\binom{M}{h} \\left( \\frac{\\lambda}{\\mu} \\right)^{h+r} \\notag \\\\\n\t\t\t\t& = \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N \\left( 1+(1-u)\\frac{\\lambda}{\\mu} \\right)^{N-M}\n\t\t\t\t\\left( 1+(1-u)\\frac{\\lambda}{\\mu} \\right)^M \\notag \\\\\n\t\t\t\t& = \\left( \\frac{\\mu}{\\lambda+\\mu} + \\frac{\\lambda}{\\lambda+\\mu}(1-u) \\right)^N \\notag \\\\\n\t\t\t\t& = \\left( 1- \\frac{\\lambda}{\\lambda+\\mu}u \\right)^N, \\notag\n\t\t\t\\end{align}\n\t\t\twhich is the probability generating function of a binomial random variable of parameter $\\lambda\/(\\lambda+\\mu)$\n\t\t\tand must be compared with equation $\\mathrm{(9)}$ of \\citet{jakeman}.\n\\end{remark}\nNote also that, as $t \\rightarrow \\infty$, we indeed observe from \\eqref{mea} and \\eqref{va} that\n\t\t\t\\begin{align}\n\t\t\t\t\\mathbb{E}\\, \\mathcal{N}^\\nu(t) \\longrightarrow N \\frac{\\lambda}{\\lambda+\\mu}\n\t\t\t\\end{align}\nand \n\t\t\\begin{align}\n\t\t\t\t\\mathbb{V}\\text{ar}\\, \\mathcal{N}^\\nu(t) \\longrightarrow N \\frac{\\lambda}{\\lambda+\\mu}\\left(1- \\frac{\\lambda}{\\lambda+\\mu} \\right),\n\t\t\t\\end{align}\nwhich are the mean and variance of the binomial equilibrium process. This suggests that the fractional generalization still preserves the binomial limit. \n\t\t\n\t\\section{Related fractional stochastic processes}\n\t\n\t\tIn this section, we focus our attention on two pure branching processes which are in fact sub-models of the more general fractional binomial process described in Section \\ref{se}. These are\n\t\tthe fractional linear pure death process and the saturable fractional pure birth process. More specifically,\tthese processes can be directly obtained from the fractional binomial process by letting $\\lambda=0$ and \t\t$\\mu=0$, respectively. The main motivation underlying the analysis of these specific cases\n\t\tis that they are widely used in practice, particularly in modeling evolving populations \tin interacting environments, possibly causing extinction or saturation.\n\t\tOur discussion on the fractional linear pure death process complements that of \\citet{sakhno}. We then analyze the saturable fractional pure birth\n\t\tprocess in more detail.\n\t\n\t\tWhen $\\lambda=0$, we obtain the mean value \\eqref{mea} of the fractional linear pure\n\t\tdeath process $\\mathcal{N}^\\nu_d(t)$, $t \\ge 0$, $\\nu \\in (0,1]$ (see \\citet{sakhno}):\n\t\t\\begin{align}\n\t\t\t\\mathbb{E}\\,\\mathcal{N}^\\nu_d(t) = M E_{\\nu,1}(-\\mu t^\\nu), \\qquad t \\ge 0. 
\n\t\t\\end{align}\n\t\tFigure \\ref{bfig} shows the mean value of the fractional linear pure death process (left)\n\t\tfor specific values of the parameters $\\mu$ and $\\nu$.\n\t\n\t\t\t\\begin{figure}[h!t!b!p!]\n\t\t\t\\centering\n\t\t\t\\includegraphics[height=2.5in, width=2.2in]{p3.pdf}\n\t\t\t\\includegraphics[height=2.5in, width=2.2in]{p4.pdf}\n\t\t \n\t\t\n\t\t\t\\caption{\\label{bfig}The mean value of the fractional binomial process $\\mathbb{E}\\,\n\t\t\t\t\\mathcal{N}^\\nu(t)$ in the two different cases of pure death (left, $(\\lambda,\\mu)=(0,1)$)\n\t\t\t\tand pure birth (right, $(\\lambda,\\mu)=(1,0)$).\n\t\t\t\tFor both cases we have $N=100$, $M=40$, $\\nu=0.7$.}\n\t\t\\end{figure}\n\t\n\t\tThe variance can be easily determined using \\eqref{va} as\n\t\t\\begin{align}\n\t\t\t\\mathbb{V}\\text{ar}\\,\\mathcal{N}^\\nu_d(t) = {} & M(M-1) E_{\\nu,1} \\left( -2\\mu t^\\nu \\right)\n\t\t\t+ M E_{\\nu,1}\\left( -\\mu t^\\nu \\right) \\\\\n\t\t\t& - M^2 \\left[ E_{\\nu,1}\\left( - \\mu\n\t\t\tt^\\nu \\right) \\right]^2, \\qquad t \\ge 0, \\: \\nu \\in (0,1], \\notag\n\t\t\\end{align}\n\t\tand this reduces for $\\nu = 1$ to the variance of the classical process (see \\citet{bailey}, page 91, formula (8.32)):\n\t\t\\begin{align}\n\t\t\t\\mathbb{V}\\text{ar}\\,\\mathcal{N}^1_d(t) = M e^{-\\mu t}\\left( 1-e^{-\\mu t} \\right).\n\t\t\\end{align}\n\t\tFurthermore, the extinction probability of the fractional linear pure death\n\t\tprocess $\\mathcal{N}^\\nu_d(t)$, $t \\ge 0$ (see \\citet{sakhno}, page 73, formula (2.1)),\n\t\t\\begin{align}\n\t\t\t\\text{Pr} \\{ \\mathcal{N}^\\nu_d(t) = 0 | \\mathcal{N}^\\nu_d(0) = M \\}\n\t\t\t= \\sum_{h=0}^M \\binom{M}{h} (-1)^h E_{\\nu,1} \\left( -h \\mu t^\\nu \\right), \\qquad t \\ge 0,\n\t\t\\end{align}\n\t\tcan also be derived directly from \\eqref{extinction}.\n\t\t\n\t\tFor the saturable fractional pure birth process $\\mathcal{N}^\\nu_b(t)$, \n\t\tthe mean value reduces to\n\t\t\\begin{align}\n\t\t\t\\label{satur}\n\t\t\t\\mathbb{E}\\,\\mathcal{N}^\\nu_b(t) = N - \\left( N-M \\right)E_{\\nu,1}\n\t\t\t\\left( -\\lambda t^\\nu \\right).\n\t\t\\end{align}\n\t\tFigure \\ref{bfig} shows the expected value of the saturable fractional pure birth process (right)\n\t\tdetermined for specific values of the parameters $\\lambda$ and $\\nu$. The variance can be written by specialising \\eqref{va} with $\\mu=0$ as\n\t\t\\begin{align}\n\t\t\t\\mathbb{V}\\text{ar}\\,\\mathcal{N}^\\nu_b(t) = {} &\n\t\t\t(N-M)(N-M-1) E_{\\nu,1}\\left( -2\\lambda t^\\nu \\right) \\\\\n\t\t\t& + (N-M) E_{\\nu,1}\\left( -\\lambda t^\\nu \\right) - (N-M)^2 \\left[ E_{\\nu,1}\\left( -\\lambda t^\\nu\n\t\t\t\\right) \\right]^2, \\notag\n\t\t\\end{align}\nwhich for $\\nu=1$ reduces to the classical variance $(N-M) e^{-\\lambda t}\\left( 1-e^{-\\lambda t} \\right)$.\nAs $t\\rightarrow \\infty$, \n\t\t\\begin{equation}\n\t\t\t\t\t\\mathbb{E}\\,\\mathcal{N}^\\nu_b(t) \\longrightarrow N \n\t\t\\end{equation}\n\t\tand\n\t\\begin{equation}\n\t\t\t\t\t\\mathbb{V}\\text{ar}\\,\\mathcal{N}^\\nu_b(t) \\longrightarrow 0\n\t\t\\end{equation}\t\t\nas expected. 
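\n\nThe Mittag-Leffler means above are straightforward to evaluate numerically. The following Python sketch (ours, not part of the original text; the truncated series it uses is adequate only for moderate arguments) computes $E_{\\nu,1}$ and the mean \\eqref{mea}; the pure birth mean \\eqref{satur} follows by setting $\\mu=0$.\n\\begin{verbatim}\nfrom math import gamma\n\ndef mittag_leffler(xi, nu, beta=1.0, max_terms=300, tol=1e-12):\n    # Truncated power series E_{nu,beta}(xi); fine for moderate |xi| only.\n    s = 0.0\n    for r in range(max_terms):\n        try:\n            term = xi ** r \/ gamma(nu * r + beta)\n        except OverflowError:\n            break\n        s += term\n        if r > 10 and abs(term) < tol:\n            break\n    return s\n\ndef mean_fractional_binomial(t, lam, mu, N, M, nu):\n    # Eq. (mea): (M - N p) E_{nu,1}(-(lam+mu) t^nu) + N p, with p = lam\/(lam+mu)\n    p = lam \/ (lam + mu)\n    return (M - N * p) * mittag_leffler(-(lam + mu) * t ** nu, nu) + N * p\n\n# Example (cf. the parameters used in the figures): N=100, M=40, nu=0.7\n# mean_fractional_binomial(1.0, 1.0, 3.0, 100, 40, 0.7)\n\\end{verbatim}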
\n\n\t\nWe now determine the state probabilities $p_{n,b}^\\nu(t) = \\Pr \\{ \\mathcal{N}^\\nu_b(t)=n\n\t\t| \\mathcal{N}^\\nu_b(0) = M \\}$.\n\t\tWhen $\\mu = 0$, the state probabilities can be derived from those of a\n\t\tnonlinear fractional pure birth process of \\citet{pol}, and is given as\n\t\t\\begin{equation}\n\t\t\tp_{n,b}^\\nu(t) =\n\t\t\t\\begin{cases}\n\t\t\t\t\\prod_{j=M}^{n-1} \\lambda_j \\sum_{m=M}^n \\frac{1}{\\prod_{l=M,l \\neq m}^n(\\lambda_l-\\lambda_m)}\n\t\t\t\tE_{\\nu,1}\\left( -\\lambda_m t^\\nu \\right), & M < n \\leq N ,\\\\\n\t\t\tE_{\\nu,1} \\left( -\\lambda_M t^\\nu \\right), & n=M.\n\t\t\t\\end{cases}\n\t\\end{equation}\n\tSubstituting the rates $\\lambda_j= \\lambda(N-j)$, we obtain\n\t\t\\begin{align}\n\t\t\tp_{n,b}^\\nu&(t) \\\\\n\t\t\t= {} & \\prod_{j=M}^{n-1} \\lambda(N-j) \\sum_{m=M}^n\n\t\t\t\\frac{E_{\\nu,1}\\left( -\\lambda (N-m) t^\\nu \\right)}{\\prod_{l=M,l \\neq m}^n(\\lambda (N-l)-\\lambda(N-m))}\n\t\t\t\\notag \\\\\n\t\t\t= {} & \\sum_{m=M}^n \\frac{(N-M)(N-M-1)\\dots (N-n+1)}{(m-M)(m-M-1)\\dots (m-m+1)\n\t\t\t(m-m-1)\\dots (m-n)} \\notag \\\\\n\t\t\t& \\times E_{\\nu,1}\\left( -\\lambda (N-m) t^\\nu \\right) \\notag \\\\\n\t\t\t= {} & \\sum_{m=M}^n \\frac{(N-M)!}{(N-n)!(m-M)!(n-m)!} (-1)^{n-m}\n\t\t\tE_{\\nu,1}\\left( -\\lambda (N-m) t^\\nu \\right) \\notag \\\\\n\t\t\t= {} & \\binom{N-M}{N-n} \\sum_{m=M}^n \\binom{n-M}{m-M} (-1)^{n-m}\n\t\t\tE_{\\nu,1}\\left( -\\lambda (N-m) t^\\nu \\right), \\quad M \\leq n \\leq N. \\notag\n\t\t\\end{align}\n\t\tThis and formula \\eqref{satur} show\tthat the behaviour of the saturable fractional pure birth\n\t\tprocess is subtantially different from that of the fractional Yule process. Similarly, the inter-birth waiting time $T_j^\\nu$, i.e.\\ the random\n\t\ttime separating the $j$th and $(j+1)$th birth, has law\n\t\t\\begin{align}\n\t\t\t\\Pr \\{ T_j^\\nu \\in \\mathrm ds \\} = \\lambda(N-j) s^{\\nu-1} E_{\\nu,\\nu} \\left( -\\lambda (N-j)s^\\nu \\right)\n\t\t\t\\mathrm ds, \\qquad j \\ge M, \\: s \\ge 0.\n\t\t\\end{align}\n\tThe figure below shows the sample paths of the saturable fractional (bottom) and classical (top) linear pure birth processes. Apparently, the proposed model naturally includes processes or populations that saturate faster than the classical linear pure birth process. The figure also indicates that saturation of the fractional binomial process is faster due to the explosive growth\/birth bursts at small times and as $\\nu \\to 0$. Note that the parameters of these related fractional point processes can be estimated using the procedures of \\citet{cap11b}.\n\n\t\t\\begin{figure}[h!t!b!p!]\n\t\t\t\\centering\n\t\t\t\\includegraphics[height=3in, width=4.5in]{satbirth.pdf}\n\t\t\n\t\t\t\\caption{\\label{bfig3}Sample trajectories of classical (top) and fractional (bottom) saturable linear pure birth processes using values $(N,M,\\lambda, \\nu)=(100,5,1,1)$ and $(N,M,\\lambda, \\nu)=(100,5,1,0.75)$, correspondingly.}\n\t\t\\end{figure}\n\t\n\n\\section{Concluding Remarks}\n We have proposed a generalization of the binomial process using the techniques of fractional calculus. The fractional generalization In addition, more statistical properties of the fractional binomial process were derived. One interesting property of the fractional binomial process was that it still preserved the binomial limit at large times while enlarging the class of models at small and regular times that naturally include non-Markovian fluctuations with long memory. 
This makes the proposed fractional binomial process appealing for real applications, especially to the quantum optics community. New sub-models such as the saturable fractional pure birth process can also be directly extracted from the proposed model. The generated sample trajectories of the saturable fractional linear pure birth process showed interesting features of the process, such as isolated bursts of population growth, particularly at small times. Overall, the fractional binomial process can be considered a viable generalization of the classical binomial process.\n \n Although theoretical investigations have been carried out in the present paper, a number of issues remain open and could be pursued as research extensions\nof the current exploration. These include the application of this model to rapidly saturable binomial processes, and the formalization of parameter estimation procedures for the proposed model. \n \n\t\t\t\t\n\t\t\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{Introduction}\\label{sec:intro}\n\\input{intro.tex}\n\n\n\\section{Related Works}\\label{sec:relwork}\n\\input{relwork.tex}\n\n\\section{Problem Definition}\\label{sec:problemDefinition}\n\\input{probdef.tex} \n\n\\section{Metric Embedding of Node-Pairs}\\label{sec:Methodology}\n\\input{methodology.tex}\n\n\\section{Experiments and Results}\\label{sec:experiemnts}\n\\input{experiments.tex}\n\n\\section{Conclusion}\\label{sec:conclusion}\n\\input{conclusion.tex}\n\n\n\n\\bibliographystyle{spmpsci}\n\n\\subsection{Dataset Descriptions}\\label{sec:DataSet}\n\nHere we discuss the construction and characteristics of the datasets used for\nexperiments. \n\n\\noindent {\\em Enron} email corpus \\cite{priebe2005scan} consists of email\nexchanges between Enron employees. The Enron dataset has $11$ time\nstamps and $16,836$ possible node-pairs; the task is to use the first $10$ snapshots for predicting links in the\n$11^{th}$ snapshot. Following \\cite{KevinS:2013}, we aggregate data into time\nsteps of $1$ week. We use the data from weeks $147$ to $157$ of the data trace\nfor the experiments. The reason for choosing that window is that the snapshot\nof the graph at week $157$ has the highest number of edges. \\\\\n\n{\\noindent} {\\em Collaboration} dataset has $10$ time stamps with author collaboration information about $49,455$ author-pairs. The Collaboration dataset is constructed from citation\ndata containing $1.4$ million papers \\cite{Tang:2008}. We process the data to\nconstruct a network of authors with edges between them if they co-author a\npaper. Considering each year as a time stamp, the data of years $2000$-$2009$\n($10$ time stamps) is used for this experiment, where the data from the first\nnine time stamps is used for training and the last for prediction. Since this\ndata is very sparse, we pre-process the data to retain only the active authors,\nwho last published on or after year $2010$; moreover, the selected\nauthors participate in at least two edges in seven or more time stamps. \\\\\n\n{\\noindent} {\\em Facebook1 and Facebook2} are networks of Facebook \nwall posts \\cite{viswanath2009evolution}. Each vertex is a Facebook user account and\nan edge represents the event that one user posts a message on the wall of another user.\nBoth Facebook1 and Facebook2 have $9$ time stamps. Facebook1 has\n$219,453$ node-pairs. Facebook2 is an extended version of the Facebook1 dataset with $883,785$\nnode-pairs. 
For pre-processing Facebook1 we follow the same setup as discussed in\n\\cite{KevinS:2015}; wall posts of 90 days are aggregated in one time step.\n\nWe filter out all people who are active for less than 6 of the 9 \ntime steps, along with the people who have degree less than 30. \nFacebook2 is created using a similar\nmethod, but a larger sample of Facebook wall posts is used for this dataset.\n\n\\subsection{Evaluation Metrics}\nFor evaluating the proposed method we use two metrics, namely, area under\nPrecision-Recall (PR) curve (PRAUC) \\cite{Davis:2006} and an information\nretrieval metric, Normalized Discounted Cumulative Gain (NDCG). PRAUC is best suited for evaluating two class classification performance when class\nmembership is skewed towards one of the classes. This is exactly the case for\nlink prediction; the number of edges $(|E|)$ is very small compared to the\nnumber of possible node-pairs ${|V|\\choose 2}$. In such scenarios, area under the Precision-Recall curve (PRAUC) gives a more informative assessment of the algorithm's performance than other metrics such as accuracy. The reason why PRAUC is more suitable for the skewed problem is that it does not factor in the count of true negatives in its calculation. In skewed data where the number of negative\nexamples is huge compared to the number of positive examples, true negatives\nare not that meaningful. \n\nWe also use NDCG, an information\nretrieval metric (widely used by the recommender systems community), to evaluate\nthe proposed method. NDCG measures the performance of a link prediction system\nbased on the graded relevance of the recommended links. $NDCG_k$ varies from\n0.0 to 1.0, with 1.0 representing ideal ranking of edges. Here, $k$ is a parameter\nchosen by the user, representing the number of links ranked by the method. We use\n$k=50$ in all our experiments. \n\n\nSome of the earlier works on link prediction have used area under the ROC curve\n(AUC) for evaluation \\cite{Gunes2015,Wang:2007}. But recent\nworks \\cite{Yang2015} have demonstrated the limitations of AUC and argued\nin favor of PRAUC over AUC for the evaluation of link prediction, so we do not use AUC in this work.\n\n\\subsection{Competing Methods for Comparison}\n\nWe compare the performance of the {\\sc DyLink2Vec}\\ based link prediction method with methods from four categories: (1) topological feature based methods, (2) feature time series\nbased methods~\\cite{Gunes2015}, (3) a deep learning based method, namely\nDeepWalk~\\cite{Perozzi:2014}, and (4) a tensor factorization based method\nCANDECOMP\/PARAFAC (CP)~\\cite{Dunlavy:2011}. \n\nBesides these four works, there are two other existing works for link\nprediction in the dynamic network setting; one is based on deep learning\n\\cite{Li:2014} (Conditional Temporal Restricted Boltzmann machine) and the other is based on a signature-based nonparametric method \\cite{ICML2012Sarkar_828}. We did not compare with these models as implementations are not readily available; besides, both of these methods have numerous parameters, which would make reproducing their results highly improbable, and conclusions derived from such experiments may not align with a true understanding of the usefulness of the methods. Moreover, none of these methods provides an unsupervised feature representation for node-pairs, which is our main contribution. 
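\n\nAs a concrete reference for the first category (detailed next), the following Python sketch (ours, not part of the paper's implementation; it assumes the networkx and numpy libraries and truncates the Katz sum at a finite path length) computes the four scores on the collapsed network for a given node-pair.\n\\begin{verbatim}\nimport numpy as np\nimport networkx as nx\n\ndef pair_features(G, u, v, beta=0.05, max_len=5):\n    # Topological scores on the collapsed (static) graph G for pair (u, v).\n    cns = list(nx.common_neighbors(G, u, v))\n    cn = len(cns)                              # Common Neighbors\n    aa = sum(1.0 \/ np.log(G.degree(w))\n             for w in cns if G.degree(w) > 1)  # Adamic-Adar\n    union = len(set(G[u]) | set(G[v]))\n    j = cn \/ union if union else 0.0           # Jaccard's Coefficient\n    # Truncated Katz: sum of beta^l * (number of u-v paths of length l)\n    A = nx.to_numpy_array(G)\n    idx = {node: i for i, node in enumerate(G.nodes())}\n    P, katz = A.copy(), 0.0\n    for l in range(1, max_len + 1):\n        katz += (beta ** l) * P[idx[u], idx[v]]\n        P = P @ A\n    return cn, aa, j, katz\n\\end{verbatim}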
\n\n\nFor \\textbf{topological feature based methods}, we consider four prominent topological\nfeatures: Common Neighbors ($CN$), Adamic-Adar ($AA$), Jaccard's\nCoefficient ($J$) and Katz measure ($Katz$). \nHowever, in existing works, these features are defined for\nstatic networks only; so we adapt these features for the dynamic network setting\nby computing the feature values over the collapsed\\footnote{Collapsed network is\nconstructed by superimposing all network snapshots(see Figure \\ref{fig:StaticLimitaion}).} dynamic network. \n\nWe also combine the above four features to construct a combined\nfeature vector of length four (\\textbf{J}accard's\nCoefficient, \\textbf{A}damic-Adar, \\textbf{C}ommon Neighbors and \\textbf{K}atz), which we call $JACK$ and use it with a classifier to build a\nsupervised link prediction method, and include this model in our comparison.\n\nSecond, we compare {\\sc DyLink2Vec}\\ with \\textbf{time-series based} neighborhood similarity scores proposed in \\cite{Gunes2015}. In this work, the authors consider several\nneighborhood-based node similarity scores combined with connectivity\ninformation (historical edge information). Authors use time-series of\nsimilarities to model the change of node similarities over time. Among $16$\nproposed methods, we consider $4$ that are relevant to the link prediction task\non unweighted networks and also have the best performance. \n$TS\\mbox{-}CN\\mbox{-}Adj$ represents time-series on normalized score of Common\nNeighbors and connectivity values at time stamps $[1,t]$. Similarly, we get\ntime-series based scores for Adamic-Adar ($TS\\mbox{-}AA\\mbox{-}Adj$), Jaccard's\nCoefficient ($TS\\mbox{-}J\\mbox{-}Adj$) and Preferential Attachment\n($TS\\mbox{-}PA\\mbox{-}Adj$). \n\nThird, we compare {\\sc DyLink2Vec}\\ with \\textbf{DeepWalk} \\cite{Perozzi:2014}, a latent node\nrepresentation based method. We use DeepWalk to construct latent\nrepresentation of nodes from the collapsed dynamic\nnetwork. Then we construct latent representation of node-pairs by computing\ncross product of latent representation of the participating nodes. For example,\nif the node representations in a network are vectors of size $l$, then the\nrepresentation of a node-pair $(u,v)$ will be of size $l^2$, constructed from\nthe cross product of $u$ and $v$'s representation. The DeepWalk based\nnode-pair representation is then used with a classifier to build a supervised\nlink prediction method. We choose node representation size $l=2,4,6,8,10$ and \nreport the best performance.\n\nFinally, we compare {\\sc DyLink2Vec}\\ with a tensor factorization based method, called\n\\textbf{CANDECOMP\/PARAFAC (CP)} \\cite{Dunlavy:2011}. In this method,\nthe dynamic network is represented as a three-dimensional tensor \n$\\mathcal{Z}(n\\times n\\times t)$. Using CP decomposition $\\mathcal{Z}$ is factorized\ninto three factor matrices. The link prediction score is computed by using \nthe factor matrices.\nWe adapted the CP link prediction method for unipartite networks; which has \noriginally been developed for bipartite networks.\n\n\n\n\n\\subsection{Implementation Details}\n\nWe implemented {\\sc DyLink2Vec}\\ algorithm in Matlab version $R2014b$. The learning method runs for a maximum of $100$ iterations\nor until it converges to a local optimal solution. We use coding size $l=100$\nfor all datasets\\footnote{We experiment with different coding sizes\nranging from 100 to 800. The change in link prediction performance is not\nsensitive to the coding size. 
At most 2.9\\% change in PRAUC was observed for\ndifferent coding sizes.}. For the supervised link prediction step we use several\nclassification algorithms provided by Matlab, namely AdaBoostM1, RobustBoost, and\nSupport Vector Machine (SVM). We could have used a neural network classifier, but since our main goal is to evaluate the quality of the unsupervised feature representation, we use simple classifiers. A supervised neural network architecture might yield better performance, but it is outside the scope of this paper. We use Matlab for computing the feature values (CN, AA, J, Katz) that we use in other competing methods. Time-series methods are implemented using Python. We use the ARIMA (autoregressive integrated\nmoving average) time series model implemented in the Python module\n\\textbf{statsmodels}. The DeepWalk implementation is provided by the authors\nof~\\cite{Perozzi:2014}. We use it to extract node features and extend it for\nlink prediction (using Matlab). The tensor factorization based method CP was\nimplemented using the Matlab Tensor Toolbox. \n\n\n\\subsection{Performance Comparison Results with Competing Methods}\nIn Figure \\ref{fig:CompareAll} we present the performance comparison results\nof the {\\sc DyLink2Vec}\\ based link prediction method with the four kinds of competing methods that we have discussed\nearlier. The figure has eight bar charts. The bar charts from the top to the\nbottom rows display the results for the Enron, Collaboration, Facebook1 and\nFacebook2 datasets, respectively. The bar charts in a row show comparison\nresults using the PRAUC (left) and $NDCG_{50}$ (right) metrics. \nEach chart has twelve bars, each representing a link prediction method, where the height\nof a bar is indicative of the performance metric value of the corresponding\nmethod. In each chart, from left to right, the first five bars (blue) correspond to the\ntopological feature based methods, the next four (green) represent time series\nbased methods, the tenth bar (black) is for DeepWalk, the eleventh bar (brown)\nrepresents the tensor factorization based method CP, and the final bar (purple)\nrepresents the proposed method {\\sc DyLink2Vec}.\n\n\\subsubsection{{\\sc DyLink2Vec}\\ vs. Topological} \nWe first analyze the performance comparison between the {\\sc DyLink2Vec}\\ based method and the topological feature based methods\n(first five bars). The best of the topological feature based methods has a\nPRAUC value of 0.30, 0.22, 0.137 and 0.14 in the Enron, Collaboration, Facebook1,\nand Facebook2 datasets (see Figures\n\\ref{fig:CompareAll}(a), \\ref{fig:CompareAll}(c), \\ref{fig:CompareAll}(e) and\n\\ref{fig:CompareAll}(g)), whereas the corresponding PRAUC values for {\\sc DyLink2Vec}\\ are\n0.531, 0.362, 0.308, and 0.27, which translates to 77\\%, 65\\%, 125\\%, and\n93\\% improvement of PRAUC by {\\sc DyLink2Vec}\\ for these datasets. The superiority of {\\sc DyLink2Vec}\\\nover all the topological feature based baseline methods can be attributed to\nthe capability of the neighborhood based feature representation to capture temporal\ncharacteristics of the local neighborhood. A similar trend is observed using \nthe $NDCG_{50}$ metric; see Figures \\ref{fig:CompareAll}(b),\n\\ref{fig:CompareAll}(d), \\ref{fig:CompareAll}(f) and \\ref{fig:CompareAll}(h).\n\n\\subsubsection{{\\sc DyLink2Vec}\\ vs. Time-Series} \nThe performance of the time-series based\nmethods (four green bars) is generally better than that of the topological feature based\nmethods. 
The best of the time-series based methods has PRAUC values of 0.503,\n0.28, 0.19, and 0.19 on these datasets, and {\\sc DyLink2Vec}'s PRAUC values are better than\nthese values by 6\\%, 29\\%, 62\\%, and 42\\%, respectively. Time-series\nbased methods, though they model the temporal behavior well, probably fail to\ncapture signals from the neighborhood topology of the node-pairs. The superiority of\n{\\sc DyLink2Vec}\\ over time-series methods is similarly indicated by the information retrieval\nmetric $NDCG_{50}$.\n\n\\subsubsection{{\\sc DyLink2Vec}\\ vs. DeepWalk} \nThe DeepWalk based method (black bars in Figure \\ref{fig:CompareAll}) performs quite poorly\nin terms of both PRAUC and $NDCG_{50}$---even poorer than the topological feature based methods in all four\ndatasets. A possible reason could be the following: the latent encoding of nodes\nby DeepWalk is good for node classification, but the cross-product of those\ncodes fails to encode the information needed for effective link prediction.\n\n\\subsubsection{{\\sc DyLink2Vec}\\ vs. CANDECOMP\/PARAFAC (CP)} \n\nFinally, the tensor factorization based method CP performs marginally better\n(around 5\\% in PRAUC, and 6\\% in $NDCG_{50}$) than {\\sc DyLink2Vec}\\ on small and simple\nnetworks, such as Enron (see Figure \\ref{fig:CompareAll}(a, b)). \nBut its\nperformance degrades on comparatively large and complex networks, such as Collaboration, Facebook1\nand Facebook2. On the Facebook networks, the performance of CP is\neven worse than that of the time-series based methods (see Figures \\ref{fig:CompareAll}(e)\nand \\ref{fig:CompareAll}(g)). {\\sc DyLink2Vec}\\ comfortably outperforms CP on larger\ngraphs, see Figures \\ref{fig:CompareAll}(c, d, e, f, g, h). In terms of PRAUC,\n{\\sc DyLink2Vec}\\ outperforms CP by 28\\%, 94\\%, and 120\\% for the Collaboration, Facebook1 and Facebook2 networks, respectively.\nThis demonstrates the superiority of {\\sc DyLink2Vec}\\ over one of the best state-of-the-art\ndynamic link prediction methods. A reason for CP's poor\nperformance on large graphs can be its inability to capture network structure \nand dynamics using a high-dimensional tensor representation. \n\n\\subsubsection{Performance across datasets}\nWhen we compare the performance of all the methods across different\ndatasets, we observe varying performance. For example, for both metrics,\nthe performance of dynamic link prediction on the Facebook graphs is lower than the\nperformance on the Collaboration graph, which, in turn, is lower than the\nperformance on the Enron graph, indicating that link prediction on the Facebook data is\na harder problem to solve. In these harder networks, {\\sc DyLink2Vec}\\ performs substantially\nbetter than all the other competing methods that we consider in this experiment.\n\n\\begin{figure}[H]\n\\centering\n\\subfloat[Collaboration Network]{%\n\\includegraphics[width=.5\\linewidth]{DBLP2TW.eps}\n}\n\\subfloat[Facebook1 Network]{%\n\\includegraphics[width=.5\\linewidth]{FaceBookTW.eps}\n}\n\\caption{Change in link prediction performance with the number of time stamps. The x-axis represents the size of the training window used\nfor link prediction. The largest possible window size depends on the number of time\nstamps available for the dataset.\n}\n\\label{fig:TimeWindow}\n\\vspace{-0.3in}\n\\end{figure}\n\n\\subsection{Performance with varying length of Time Stamps}\n\nBesides comparing with competing methods, we also demonstrate the performance\nof {\\sc DyLink2Vec}\\ with a varying number of available time snapshots. 
For this\npurpose, we use {\\sc DyLink2Vec}\\ with different counts of past snapshots. For example,\nthe Collaboration dataset has 10 time stamps. The task is to predict links at time\nstamp 10. The largest number of past snapshots we can consider for this data\nis 8, where $\\mathbf{\\widehat{E}}$ is constructed using time stamps $[1-8]$,\nand $\\mathbf{\\overline{E}}$ is constructed using time stamps $[2-9]$. The\nsmallest number of time stamps we can consider is 1, where\n$\\mathbf{\\widehat{E}}$ is constructed using $[8-8]$, and\n$\\mathbf{\\overline{E}}$ is constructed using $[9-9]$.\nIn this way, by varying the length of the historical time stamps, \nwe can evaluate the effect of the history length on the performance of a link prediction method.\n\n\nThe result is illustrated in Figure \\ref{fig:TimeWindow}. The x-axis represents\nthe number of time stamps used by {\\sc DyLink2Vec}, the left y-axis represents the\n$NDCG_{50}$ and the right y-axis represents the\nPRAUC. Figures \\ref{fig:TimeWindow}(a)\nand \\ref{fig:TimeWindow}(b) correspond to the results obtained on Collaboration and Facebook1, respectively. \n\n\nWe observe from Figure \\ref{fig:TimeWindow} that the performance ($NDCG_{50}$\nand PRAUC) of link prediction increases with an increased number of time stamps.\nHowever, beyond a given number of snapshots, the performance increment becomes\ninsignificant. The performance starts to deteriorate after a certain number of snapshots (see Figure \\ref{fig:TimeWindow}(a)). This may be because of the added complexity of the optimization framework with an increased number of time stamps. We also observe\na consistent improvement of performance with the number of snapshots for the\nFacebook1 data (Figure \\ref{fig:TimeWindow}(b)),\nwhich indicates that for this dataset link information from the distant\nhistory is useful for dynamic link prediction. We do not show the results of Enron and Facebook2 for this experiment \nbecause of space constraints; however, they show similar trends.\n\n\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=.8\\linewidth]{DBLP2.eps}\n\\caption{Effect of class imbalance on link prediction performance on the Collaboration network.}\n\\label{fig:classImbalance}\n\\vspace{-0.3in}\n\\end{figure}\n\n\\subsection{Effect of Class Imbalance on Performance}\nIn the link prediction problem, class imbalance is a prevalent issue. The class imbalance problem appears in a classification task when the dataset contains an imbalanced number of samples for the different classes. In the link prediction problem, the number of positive node-pairs (with an edge) is very small compared to the number of negative node-pairs (with no edge), causing the class imbalance problem. \n\nTo demonstrate the effect of class imbalance on the link prediction task, we perform link prediction using {\\sc DyLink2Vec}\\ embeddings with different levels of class imbalance in the training dataset.\nWe construct the training dataset by taking all positive node-pairs and sampling from the set of negative node-pairs. \nFor a balanced dataset, the number of negative samples will be equal to the number of all positive node-pairs considered. Thus, the balanced training dataset has a positive node-pairs to negative node-pairs ratio of $1:1$. At this point, the only way to increase the size of the data is to increase the sample size for negative node-pairs. 
Consequently, the class ratio becomes increasingly skewed towards negative node-pairs.\nFigure \\ref{fig:classImbalance} shows a gradual decrease in link prediction performance on the Collaboration network with the increase of imbalance (see the ratios on the x-axis) in the dataset (despite the fact that the dataset gets larger by adding negative node-pairs). \n\nThis result advocates the design choice of under-sampling \\cite{Lichtenwalter:2010} of negative node-pairs by uniformly sampling from all negative node-pairs, so that the training set has equal numbers of positive and negative node-pairs. Under-sampling helps\nto mitigate the problem of class imbalance while also reducing the size of the training dataset.\n\n\n\n\n\\subsection{Optimization framework for {\\sc DyLink2Vec}}\\label{subsec:UnsupervisedFeatureExtraction}\n\nIn this section, we discuss the optimization framework which obtains the\noptimal metric embedding of a node pair by learning an optimal coding function\n$h$. For this\nlearning task, let us assume $\\mathbf{\\widehat{E}}$ is the training dataset\nmatrix containing a collection of node-pair feature vectors. Each row of this\nmatrix represents a node-pair (say, $u$ and $v$) and contains the feature\nvector $\\mathbf{e}^{uv}$, which stores information about the neighborhood and link\nhistory, as we discussed earlier. The actual link status of the node-pairs in $\\mathbf{\\widehat{E}}$ \nin $G_{t+1}$ is not used for the learning of $h$, so the metric embedding process\nis unsupervised. In the subsequent discussion, we write $\\mathbf{e}$ to represent\nan arbitrary node-pair vector in $\\mathbf{\\widehat{E}}$.\n\nNow, the coding function $h$ compresses $\\mathbf{e}$ to a code vector $\\mathbf{\\alpha}$\nof dimension $l$, such that $l < k$. Here $l$ is a user-defined parameter which\nrepresents the code length and $k$ is the size of the feature vector. Many\ndifferent coding functions exist in the dimensionality reduction literature,\nbut for {\\sc DyLink2Vec}\\ we choose the coding function which incurs the minimum\nreconstruction error, in the sense that from the code $\\mathbf{\\alpha}$ we can\nreconstruct $\\mathbf{e}$ with the minimum error over all $\\mathbf{e} \\in\n\\mathbf{\\widehat{E}}$. We frame the learning of $h$ as an optimization\nproblem, which we discuss below through two operations: Compression and\nReconstruction. \\\\\n\n\\noindent {\\bf Compression:} It obtains $\\mathbf{\\alpha}$ from $\\mathbf{e}$.\nThis transformation can be expressed as a nonlinear function of a linear weighted\nsum of the entries in the vector $\\mathbf{e}$.\n\\begin{equation}\n\\mathbf{\\alpha} = f(\\mathbf{W}^{(c)}\\mathbf{e} + \\mathbf{b}^{(c)})\n\\label{eq:Compression}\n\\end{equation}\n$\\mathbf{W}^{(c)}$ is an $(l \\times k)$ dimensional matrix (so that the product with the $k$-dimensional vector $\\mathbf{e}$ yields an $l$-dimensional code). It represents the weight matrix for compression,\nand $\\mathbf{b}^{(c)}$ represents the biases. $f(\\cdot)$ is the Sigmoid function, $f(x)=\\frac{1}{1+e^{-x}}$. \\\\\n\n\\noindent {\\bf Reconstruction:} It performs the reverse operation of compression, i.e., it \nobtains $\\mathbf{e}$ from $\\mathbf{\\alpha}$ (which was constructed during the compression \noperation).\n\\begin{equation}\n\\mathbf{\\beta} = f(\\mathbf{W}^{(r)}\\mathbf{\\alpha} + \\mathbf{b}^{(r)})\n\\label{eq:Reconstruction}\n\\end{equation}\n$\\mathbf{W}^{(r)}$ is a matrix of dimensions $(k \\times l)$ representing the weight matrix for reconstruction, \nand $\\mathbf{b}^{(r)}$ represents the biases. 
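\n\nTo make the two operations concrete, the following fragment is a minimal NumPy sketch of a single compression--reconstruction pass (illustrative only: the actual implementation of {\\sc DyLink2Vec}\\ is in Matlab, and all names and toy sizes below are ours).\n\\begin{verbatim}\nimport numpy as np\n\ndef f(x):                                 # Sigmoid, f(x) = 1\/(1+e^(-x))\n    return 1.0 \/ (1.0 + np.exp(-x))\n\nk, l = 12, 4                              # toy feature and code lengths\nrng = np.random.default_rng(0)\nW_c = rng.normal(scale=0.1, size=(l, k))  # compression weights (l x k)\nb_c = np.zeros(l)                         # compression biases\nW_r = rng.normal(scale=0.1, size=(k, l))  # reconstruction weights (k x l)\nb_r = np.zeros(k)                         # reconstruction biases\n\ne = rng.random(k)                         # toy node-pair feature vector\nalpha = f(W_c @ e + b_c)                  # compression: code of length l\nbeta  = f(W_r @ alpha + b_r)              # reconstruction: estimate of e\nerr = 0.5 * np.sum((beta - e)**2)         # reconstruction error for e\n\\end{verbatim}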
\n\nThe optimal coding function $h$, constituted by the compression and reconstruction operations, is defined by the parameters $(\\mathbf{W},\\mathbf{b}) = (\\mathbf{W}^{(c)},\\mathbf{b}^{(c)},\\mathbf{W}^{(r)},\\mathbf{b}^{(r)})$.\nThe objective is to minimize the reconstruction error.\nThe reconstruction error for a neighborhood based feature vector $\\mathbf{e}$ is defined as\n$J(\\mathbf{W,b,e}) = \\frac{1}{2}\\parallel \\mathbf{\\beta}-\\mathbf{e} \\parallel^2$.\nOver all possible feature vectors, the average reconstruction error augmented with a regularization\nterm yields the final objective function $J(\\mathbf{W},\\mathbf{b})$:\n\\begin{equation}\n\\begin{split}\nJ(\\mathbf{W},\\mathbf{b}) = \\frac{1}{|\\mathbf{\\widehat{E}}|} \\sum_{\\mathbf{e} \\in \\mathbf{\\widehat{E}}}\\left(\\frac{1}{2}\\parallel \\mathbf{\\beta}-\\mathbf{e} \\parallel^2\\right)\n\\\\+\\frac{\\lambda}{2} (\\parallel\\mathbf{W}^{(c)}\\parallel_F^2 + \\parallel\\mathbf{W}^{(r)}\\parallel_F^2 )\n\\end{split}\n\\label{eq:lossfun2}\n\\end{equation}\n\nHere, $\\lambda$ is a user-assigned regularization parameter, responsible for\npreventing over-fitting, and $\\parallel\\cdot\\parallel_F$ represents the \nFrobenius norm of a matrix. In this work we use $\\lambda = 0.1$. \n\nWe now discuss the motivation of our proposed optimization framework\nfor learning the coding function $h$. Note that\nthe dimensionality of $\\mathbf{\\alpha}$ is much smaller than that of $\\mathbf{e}$, so the\noptimal compression of the vector $\\mathbf{e}$ must extract patterns composed of\nthe entries of $\\mathbf{e}$ and use them as high-order latent features in\n$\\mathbf{\\alpha}$. In fact, the entries in $\\mathbf{e}$ contain the neighborhood (the sum\nof the adjacency vectors of the node pair) and the link history of a node-pair for all\nthe timestamps; for a real-life network, this vector is sparse, and substantial\ncompression is possible while incurring only a small loss. Through this compression the\ncoding function $h$ learns the patterns that are similar across different node-pairs (used in $\\mathbf{\\widehat{E}}$).\nThus the function $h$ learns a metric embedding of the node-pairs that packs node-pairs\nhaving similar local structures in close proximity in the embedded feature space.\nAlthough the function $h$ acts as a black box, it captures patterns involving the neighborhood\naround a node pair across various time stamps, which obviates the manual construction \nof a node-pair feature---a cumbersome task for the case of a dynamic network. \n\n\n\\subsubsection{Optimization}\n\nThe training of the optimal coding defined by the parameters $(\\mathbf{W,b})$ begins with\na random initialization of the parameters. Since the cost function\n$J(\\mathbf{W},\\mathbf{b})$ defined in Equation \\eqref{eq:lossfun2} is non-convex in\nnature, we obtain a locally optimal solution using the gradient descent approach.\nSuch an approach usually provides practically useful results (as shown in Section \n\\ref{sec:experiemnts}).\nThe parameter updates of the gradient descent are similar to the parameter updates\nfor optimizing an auto-encoder in machine learning. 
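\n\nAs a concrete illustration, one gradient descent update of the objective in Equation \\eqref{eq:lossfun2} can be sketched in NumPy as follows (a schematic continuation of the previous fragment with $n$ training vectors stacked in the rows of \\texttt{E}; the paper's implementation instead uses Matlab's L-BFGS, as described in the complexity analysis below).\n\\begin{verbatim}\nn = 32                                # toy batch size\nE = rng.random((n, k))                # feature vectors, one per row\nlam, sigma = 0.1, 0.5                 # regularization, learning rate\n\nA = f(E @ W_c.T + b_c)                # codes, shape (n, l)\nB = f(A @ W_r.T + b_r)                # reconstructions, shape (n, k)\n\nD_r = (B - E) * B * (1 - B)           # output deltas (sigmoid layer)\nD_c = (D_r @ W_r) * A * (1 - A)       # hidden deltas (back-propagated)\n\nW_r -= sigma * (D_r.T @ A \/ n + lam * W_r)  # average grad + lam*W term\nW_c -= sigma * (D_c.T @ E \/ n + lam * W_c)\nb_r -= sigma * D_r.mean(axis=0)       # biases are not regularized\nb_c -= sigma * D_c.mean(axis=0)\n\\end{verbatim}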
\nOne iteration of gradient descent updates the parameters using the following equations:\n\n\\begin{equation}\n\\begin{split}\nW_{ij}^{(c)} = W_{ij}^{(c)} - \\sigma \\frac{\\partial}{\\partial W_{ij}^{(c)}}J(W,b) \\\\\nW_{ij}^{(r)} = W_{ij}^{(r)} - \\sigma \\frac{\\partial}{\\partial W_{ij}^{(r)}}J(W,b) \\\\\nb_{i}^{(c)} = b_{i}^{(c)} - \\sigma \\frac{\\partial}{\\partial b_{i}^{(c)}}J(W,b) \\\\\nb_{i}^{(r)} = b_{i}^{(r)} - \\sigma \\frac{\\partial}{\\partial b_{i}^{(r)}}J(W,b) \n\\end{split}\n\\label{eq:updatefun1}\n\\end{equation}\n\nHere, the superscripts $(c)$ and $(r)$ identify the compression and reconstruction parameters, respectively, and $\\sigma$ is the learning rate. $W_{ij}^{(c)}$ is the weight of the connection from node $j$ of the input layer to node $i$ of the hidden layer. \n\nNow, from Equation \\eqref{eq:lossfun2}, the partial derivative terms in Equations \\eqref{eq:updatefun1} can be written as\n\\begin{equation}\n\\begin{split}\n\\frac{\\partial}{\\partial W_{ij}^{(c)}}J(W,b) = \\frac{1}{|\\mathbf{\\widehat{E}}| } \\sum_{\\mathbf{e} \\in \\mathbf{\\widehat{E}}}\\frac{\\partial}{\\partial W_{ij}^{(c)}}J(\\mathbf{W,b,e})+\\lambda W_{ij}^{(c)} \\\\\n\\frac{\\partial}{\\partial W_{ij}^{(r)}}J(W,b) = \\frac{1}{|\\mathbf{\\widehat{E}}|} \\sum_{\\mathbf{e} \\in \\mathbf{\\widehat{E}}}\\frac{\\partial}{\\partial W_{ij}^{(r)}}J(\\mathbf{W,b,e})+\\lambda W_{ij}^{(r)} \\\\\n\\frac{\\partial}{\\partial b_{i}^{(c)}}J(W,b) = \\frac{1}{|\\mathbf{\\widehat{E}}| } \\sum_{\\mathbf{e} \\in \\mathbf{\\widehat{E}}}\\frac{\\partial}{\\partial b_{i}^{(c)}}J(\\mathbf{W,b,e}) \\\\\n\\frac{\\partial}{\\partial b_{i}^{(r)}}J(W,b) = \\frac{1}{|\\mathbf{\\widehat{E}}| } \\sum_{\\mathbf{e} \\in \\mathbf{\\widehat{E}}}\\frac{\\partial}{\\partial b_{i}^{(r)}}J(\\mathbf{W,b,e}) \\\\\n\\end{split}\n\\label{eq:derivative}\n\\end{equation}\n\n\nThe optimization problem is solved by computing the partial derivatives of the cost function $J(\\mathbf{W,b,e})$ using the back-propagation approach \\cite{Rumelhart:1986}. \n\nOnce the optimization is done,\nthe metric embedding of\nany node-pair ($u,v$) can be obtained by taking the output of the compression\nstage (Equation \\eqref{eq:Compression}) of the trained optimal coding\n($\\mathbf{W,b}$).\n\n\\begin{equation}\n\\begin{split}\n\\mathbf{\\alpha}^{uv} = f(\\mathbf{W}^{(c)}\\mathbf{e}^{uv} + \\mathbf{b}^{(c)})=h(\\mathbf{e}^{uv})\n\\end{split}\n\\label{eq:unsupervisedFeatures}\n\\end{equation}\n\n\\subsubsection{Complexity Analysis}\nWe use the Matlab implementation of the optimization algorithm L-BFGS (Limited-memory\nBroyden-Fletcher-Goldfarb-Shanno) for learning the optimal coding. We execute the\nalgorithm for a limited number of iterations to obtain unsupervised features\nwithin a reasonable period of time. Each iteration of L-BFGS executes two tasks\nfor each node-pair: back-propagation to compute the partial derivatives of the cost\nfunction, and updating the parameters $\\mathbf{(W,b)}$. \nTherefore, the time complexity\nof one iteration is $O(|NP_t|kl)$. 
Here, $NP_t$ is the set of node-pairs used to construct the training dataset $\\mathbf{\\widehat{E}}$,\n$k$ is the length of $\\mathbf{e}$ (the dimensionality of the initial edge features), and $l$ is the\nlength of $\\mathbf{\\alpha}$ (the optimal coding).\n\n\n\\section{Link Prediction using the proposed metric embedding}\\label{subsec:SupervisedLinkPredictionModel}\nFor the link prediction task in a dynamic network $\\mathbb{G} = \\{G_1,G_2, \\dots ,G_t\\}$, we split the snapshots into two overlapping time windows, $[1,t-1]$ and $[2,t]$.\nThe training dataset $\\mathbf{\\widehat{E}}$ is the feature representation for time\nsnapshots $[1,t-1]$; the ground truth ($\\widehat{\\mathbf{y}}$) is constructed from\n$G_t$. {\\sc DyLink2Vec}\\ learns the optimal embedding $h(\\cdot)$ using the training dataset $\\mathbf{\\widehat{E}}$.\nAfter training a supervised classification model using\n$\\widehat{\\mathbf{\\alpha}}$=$h(\\widehat{\\mathbf{E}})$ and $\\widehat{\\mathbf{y}}$, the prediction\ndataset $\\mathbf{\\overline{E}}$ is used to predict links in $G_{t+1}$. For this\nsupervised prediction task, we experiment with several classification\nalgorithms. Among them, SVM (support vector machine) and\nAdaBoost perform the best. \n\n\n\\begin{algorithm}[h]\n\t\\SetKwInOut{Input}{Input}\n \\SetKwInOut{Output}{Output}\n\\begin{algorithmic}[1]\n\t\t\\Procedure{LP{\\sc DyLink2Vec}}{$\\mathbb{G},t$}\n\t\t\\Input{$\\mathbb{G}$: Dynamic Network, $t$: Time steps}\n \t\t\\Output{$\\overline{y}$: Forecasted links at time step $t+1$}\n \t\t\n\t\t\\State $\\widehat{\\mathbf{E}}$=NeighborhoodFeature($\\mathbb{G}$,$1$,$t-1$)\n\t\t\\State $\\widehat{\\mathbf{y}}$=Connectivity($G_t$)\n\t\t\\State $\\overline{\\mathbf{E}}$=NeighborhoodFeature($\\mathbb{G}$,$2$,$t$)\n\t\t\\State $h$=LearningOptimalCoding($\\widehat{\\mathbf{E}}$)\n\t\t\\State $\\widehat{\\mathbf{\\alpha}}$=$h(\\widehat{\\mathbf{E}})$\n\t\t\\State $\\overline{\\mathbf{\\alpha}}$=$h(\\overline{\\mathbf{E}})$\n\t\t\\State $C$=TrainClassifier($\\widehat{\\mathbf{\\alpha}},\\widehat{\\mathbf{y}}$)\n\t\t\\State $\\overline{\\mathbf{y}}$=LinkForecasting($C,\\overline{\\mathbf{\\alpha}}$)\n\t\t\\State \\textbf{return} $\\overline{\\mathbf{y}}$\n\t\t\\EndProcedure\n\t\t\\end{algorithmic}\n\t\t\\caption{Link Prediction using {\\sc DyLink2Vec}}\\label{alg:LPUFE}\n\\end{algorithm}\n\n\nThe pseudo-code of the {\\sc DyLink2Vec}\\ based link prediction method is given in Algorithm \\ref{alg:LPUFE}. For training the\nlink prediction model, we split the available network snapshots into two\noverlapping time windows, $[1,t-1]$ and $[2,t]$. The neighborhood based features\n$\\widehat{\\mathbf{E}}$ and $\\overline{\\mathbf{E}}$ are constructed in Lines 2 and 4,\nrespectively. Then we learn the optimal coding for node-pairs using the neighborhood\nfeatures $\\widehat{\\mathbf{E}}$ (Line 5). Embeddings\nare then constructed from the learned optimal coding (Lines 6 and 7) using the output of\nthe compression stage (Equation \\ref{eq:unsupervisedFeatures}). Finally, a\nclassification model $C$ is learned (Line 8), which is used for predicting\nlinks in $G_{t+1}$ (Line 9).\n\n\n\\section{MLJ Contribution Information Sheet}\n{\\em 1. What is the main claim of the paper? Why is this an important contribution to the machine learning literature?}\n\n\\noindent \\textbf{Our Answer:} The temporal dynamics of a complex system such as a social network or a\ncommunication network can be studied by understanding the patterns of link\nappearance and disappearance over time. 
A critical task towards this\nunderstanding is to predict the link state of the network at a future time,\ngiven a collection of link states at earlier time points. In the existing literature,\nthis task is known as link prediction in dynamic networks. In this work, we propose {\\sc DyLink2Vec}, a scalable method for finding the metric embedding of arbitrary node-pairs (both edges and non-edges) of a \ndynamic network with many temporal snapshots. This work is different from existing \nwork on representation learning on graphs in two ways:\nfirst, existing works find the embedding of vertices, whereas we consider the embedding\nof arbitrary node-pairs; second, our work considers a dynamic network, which has\nmany temporal snapshots, whereas existing works consider only a static network. Therefore, \nour embedding method is particularly suitable for link forecasting\nin dynamic networks and thus would be an important contribution to the machine learning literature.\\\\\n\n\\noindent {\\em 2. What is the evidence you provide to support your claim? Be precise.}\n\n\\noindent \\textbf{Our Answer:} We present the following evidence: \n\\begin{itemize}\n\\item We validate the effectiveness of our representation learning method ({\\sc DyLink2Vec}) by utilizing it for link prediction on four real-life dynamic networks: {\\em Enron}, {\\em Collaboration}, and two versions of {\\em Facebook} networks.\n\\item We compare the quality of the embedding generated by {\\sc DyLink2Vec}\\ (our algorithm) with multiple state-of-the-art methods for the dynamic link prediction task. Our comparison results show that the proposed method is superior to all the competing methods.\n\\end{itemize}\n\n\\noindent {\\em 3. What papers by other authors make the most closely related contributions, and how is your paper related to them?}\n\n\\noindent \\textbf{Our Answer:} A key challenge of link prediction in a dynamic network is to find a suitable feature representation of the node-pair instances which are used for training the prediction model. For the static link prediction problem, various topological metrics (common neighbors, Adamic-Adar, Jaccard's coefficient) are used as features, but they cannot be extended easily to the dynamic setting with multiple snapshots of the network. In this work, we propose a very general method for constructing an unsupervised feature representation in the \ndynamic setting. The following papers also provide methods for generating useful node-pair (edge) feature representations, and thus they are related to our work. \n\\begin{itemize}\n\\item Temporal Link prediction using matrix and tensor factorization by D.M. Dunlavy, T. G. Kolda, and E. Acar, in ACM Trans. Knowl. Discov. Data, 2011\n\n\\item Link prediction using time series of neighborhood-based node similarity scores by\nG{\\\"u}ne{\\c{s}}, {\\.I}smail and G{\\\"u}nd{\\\"u}z-{\\\"O}{\\u{g}}{\\\"u}d{\\\"u}c{\\\"u}, {\\c{S}}ule and {\\c{C}}ataltepe, Zehra, in\nData Mining and Knowledge Discovery, 2015\n\n\\item DeepWalk: Online Learning of Social Representations by\nPerozzi, Bryan and Al-Rfou, Rami and Skiena, Steven, in Proc.\nof SIGKDD, 2014. \n\n\\end{itemize}\n\n\\noindent {\\em 4. Have you published parts of your paper before, for instance in a conference? 
If so, give details of your previous paper(s) and a precise statement detailing how your paper provides a significant contribution beyond the previous paper(s).}\n\n\\noindent \\textbf{Our Answer:} No, we did not publish parts of our paper anywhere.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe steady decrease of estimates of metal abundances in\nthe solar atmosphere over the\nlast years (Holweger \\cite{hol01}, Lodders \\cite{lod03}, Asplund et al. \\cite{ags04})\nis a serious {\\rm challenge} not only {\\rm to} solar physics, but\nalso to dust modelling.\n This calls for new {\\rm dust} models able to produce the same extinction\nwith a smaller amount of solid material.\nA solution to the problem could be provided by\nan ``admixture of vacuum'' i.e. by the porosity of interstellar grains.\n\nGrain aggregates with {\\rm large} voids can {\\rm form} during\nthe growth of interstellar grains due to their coagulation\nin dense molecular cloud cores (Dorschner \\& Henning \\cite{dh95}).\n The internal structure of such composite grains can\nbe very complicated, but\ntheir optical properties are {\\rm often} calculated using the Mie theory\nfor homogeneous spheres with an average refractive index\nderived from effective medium theory\n(EMT; see, e.g., Mathis \\& Whiffen \\cite{mw89},\nJones \\cite{jones88}, Ossenkopf \\cite{oss91},\nMathis \\cite{m96}, Li \\& Greenberg \\cite{li:gre98}, Il'in \\& Krivova \\cite{ik00}).\n\n Another approach to calculate the optical properties of\nsuch aggregates is the application of complex, computationally time consuming\nmethods such as the discrete dipole approximation (DDA;\nsee, e.g., Wright \\cite{wr87}, Kozasa et al.~\\cite{kbm92}, \\cite{ko93},\nWolff et al.~{\\cite{wo94}}, Stognienko et al.~\\cite{st95},\nKimura \\& Mann \\cite{ki98}).\n\n Using the DDA, Voshchinnikov et al.~(\\cite{vih05}) examined\nthe ability of the EMT-Mie approach to treat porous\nparticles of different structure.\nThey show that the latter approach can give relatively\naccurate results only if the very porous particles have small\n(in comparison with the wavelength of incident radiation) ``Rayleigh''\ninclusions.\nOtherwise, the approach becomes inaccurate\nwhen the porosity exceeds $\\sim$0.5.\n At the same time, the optical properties of heterogeneous\nspherical particles having inclusions of various sizes\n(Rayleigh and non-Rayleigh) and very large porosity\nwere found to closely resemble those of\nspheres with a large number ($\\ga 15-20$) of different layers.\nThe errors in extinction efficiency factors are smaller than 10~--~20\\% if\nthe size parameter $\\la 15$ and porosity is equal to $0.9$.\nNote that this consideration was restricted by spheres,\nnot very absorbing materials (silicate and amorphous carbon)\nand the integral scattering characteristics\n(extinction, scattering, absorption efficiency factors, albedo\nand asymmetry parameter)\nbut not the differential cross sections or elements of the\nscattering matrix.\nNevertheless, very simple computational models instead of\ntime-consuming DDA calculations give us a useful way to treat\ncomposite grains of different structure.\n\nIn this paper, we apply the\nparticle models of porous interstellar dust grains\nbased on the EMT-Mie and layered-sphere calculations.\nThe models described\nin Sect.~\\ref{model} are assumed to represent composite\nparticles with small (Rayleigh) inclusions and inclusions\nof different sizes (Rayleigh and non-Rayleigh).\n The wavelength 
dependence of extinction is discussed in Sect.~\\ref{ext1}.\n Sections~\\ref{st_ext} and \\ref{ir_ext}\n contain the application of the models to\ncalculations of the extinction curves in the directions\nof two stars, using new solar abundances\nand the near-infrared (IR) extinction in the directions along the\nGalactic plane.\n The next sections deal with grain temperatures\n(Sect.~\\ref{temp}), profiles of IR silicate bands (Sect.~\\ref{ir_b}),\nand grain opacities at $\\lambda = 1$\\,~mm (Sect.~\\ref{opa}).\nThese quantities are especially important for the analysis of observations\nof protoplanetary discs (Henning et al. \\cite{heal05}).\nConcluding remarks are presented in Sect.~\\ref{concl}.\n\n\\section{Particle models}\\label{model}\n\nInformation about the structure of grains can be included in light\nscattering calculations directly (layered spheres) or can be\nused to find the optical constants (EMT-Mie model).\nWe consider models of both types.\n\nFollowing previous papers (Voshchinnikov \\& Mathis \\cite{vm99},\nVoshchinnikov et al. \\cite{vih05}),\nwe construct layered grains as particles consisting of many concentric\nspherical layers of various materials, each with a specified\nvolume fraction $V_i$ ($\\Sigma_i V_i \/V_{\\rm total} = 1)$.\nVacuum can be one of the materials, so a\ncomposite particle may have a central cavity or voids in the form\nof concentric layers.\nThe number of layers is taken to be 18 {\\rm since}\nVoshchinnikov et al.~(\\cite{vih05}) have shown that this was enough\nto preclude an influence of the order of materials on the results.\nFor a larger number of layers,\none can speak of the optical characteristics determined by\nthe volume fractions of different constituents only.\n\nIn the case of the EMT-Mie model, an average (effective) refractive index\nis calculated using the popular rule of Bruggeman\n(see, e.g., Ch\\'ylek et al. \\cite{cval00}, Kr\\\"ugel \\cite{kru03}).\nIn this case, the average dielectric permittivity\n$\\varepsilon_{\\rm eff}$$^($\\footnote{The dielectric permittivity\nis related to the refractive index as $\\varepsilon=m^2$.}$^)$\nis calculated from\n\\begin{equation}\n\\sum_i f_{i} \\frac{\\varepsilon_{i} - \\varepsilon_{\\rm eff}}{\\varepsilon_{i} + 2 \\varepsilon_{\\rm eff}} = 0,\n\\label{bru}\n\\end{equation}\nwhere $f_{i}=V_i \/V_{\\rm total}$ is the volume fraction\nof the $i$th material with the permittivity $\\varepsilon_{i}$.\n\nThe amount of vacuum in a particle can be characterized by its porosity\n${\\cal P}$ ($0 \\leq {\\cal P} < 1$)\n\\begin{equation}\n{\\cal P} = V_{\\rm vac} \/V_{\\rm total}\n= 1 - V_{\\rm solid} \/V_{\\rm total}. \\label{por}\n\\end{equation}\nTo compare the optical properties of porous and compact particles,\none should consider the porous particles of radius (or size parameter)\n\\begin{equation}\nr_{\\rm porous} = \\frac{r_{\\rm compact}}{(1-{\\cal P})^{1\/3}}\n= \\frac{r_{\\rm compact}}{(V_{\\rm solid} \/V_{\\rm total})^{1\/3}}. 
\\label{xpor}\n\\end{equation}\n\n As ``basic'' {\\rm constituents}, we choose\namorphous carbon (AC1; Rouleau \\& Martin \\cite{rm91})\nand astronomical silicate (astrosil; Laor \\& Draine \\cite{laordr93}).\n The refractive indices for {\\rm these materials and some others} considered\nin Sect.~\\ref{ab_ext} were taken from the Jena--Petersburg Database of\nOptical Constants (JPDOC) described by Henning et al.~(\\cite{heal99})\nand J\\\"ager et al.~(\\cite{jetal02}).\n\nThe application of the standard EMT is known to correspond to the case\nof particles having small randomly located inclusions of\ndifferent materials (Bohren \\& Huffman \\cite{bh83}).\n The optical properties of such particles have been well studied\n(see, e.g., Voshchinnikov \\cite{v02} and references therein).\n\n However, one needs the DDA or other computationally\ntime consuming techniques to treat particles with inclusions\nlarger than the wavelength of incident radiation.\nThe difference in the optical characteristics of particles with\nsmall and large inclusions has been discussed in\nprevious studies (e.g., Wolff et al.~\\cite{wo94}, \\cite{wo98}).\nThe fact that this difference drastically\ngrows with the porosity ${\\cal P}$ and becomes quite essential\nalready for $ {\\cal P} \\ga 0.5$ has been discovered\n{\\rm only recently} by Voshchinnikov et al.~(\\cite{vih05}).\n They also found that the scattering properties of\nparticles with inclusions of different sizes (including\nthose with sizes comparable to the wavelength) were\nvery close to those of layered\nparticles, having the same size and material volume fractions.\nA similar conclusion was reached for an ensemble of particles\nwhere each particle has inclusions of one (but different) size only.\nIn both cases of ``internal'' or ``external'' mixing, the model\nof layered spheres can be applied.\n\nThese results are of particular importance for applications including the\nastronomical ones where we deal with very porous particles.\nNote that instead of time consuming calculations with DDA-like\ncodes, one can use the exact solution to the light scattering\nproblem for layered spheres, which is as quick as Mie theory, to estimate\nthe properties of particles with Rayleigh\/non-Rayleigh inclusions.\n\nThus, the models of the homogeneous sphere (with EMT) and layered spheres,\nboth having fast implementations, allow one to probe the difference\nin optical properties caused by the different structure of\nscatterers.\n\n\\section{Interstellar extinction and interstellar abundances}\\label{ab_ext}\n\n\n\\subsection{Wavelength dependence of extinction}\\label{ext1}\n\n\n\nAs it is well known,\nthe wavelength dependence of interstellar extinction $A(\\lambda)$\nis determined by the wavelength dependence of the\nextinction efficiencies $Q_{\\rm ext}(\\lambda)$.\n This quantity is shown in Fig.~\\ref{ext_w} for particles of the same mass,\nbut different porosity.\nThe volume fractions of AC1 and astrosil are equal, i.e.\n$V_{\\rm AC1} \/V_{\\rm total} = V_{\\rm astrosil} \/V_{\\rm total} =\n1\/2 \\, (V_{\\rm solid} \/V_{\\rm total}) = 1\/2 \\, (1 - {\\cal P})$.\n The radius of compact grains is $r_{\\rm s,\\,compact}=0.1\\,\\mu{\\rm m}$.\n The dependence of $Q_{\\rm ext}$ on $\\lambda$ for compact particles\nis close to that of the average interstellar extinction curve in the\nvisible--near UV ($1 \\,\\mu{\\rm m}^{-1} \\leq\\lambda^{-1} \\leq 3 \\,\\mu{\\rm m}^{-1}$)\n{\\rm where it} can be approximated by the power law\n$A(\\lambda) \\propto \\lambda^{-1.33}$\n(see 
discussion in Voshchinnikov~\\cite{v02}).\n From Fig.~\\ref{ext_w} one can conclude that the extinction\nproduced by particles with small inclusions and layers\n differs little for compact\nand slightly porous particles. The difference becomes most pronounced in\nthe near- and far-UV (for $\\lambda^{-1} \\ga 2.5 - 3 \\,\\mu{\\rm m}^{-1}$).\n\\begin{figure}[htb]\n\\begin{center}\n\\resizebox{\\hsize}{!}{\\includegraphics{3371f1.eps}}\n\\caption{Wavelength dependence of the extinction efficiency factor\nfor spherical particles with $r_{\\rm s,\\,compact}=0.1\\,\\mu{\\rm m}$.\nThe particles are of the same mass but of different porosity.\nUpper panel: calculations based on the EMT-Mie theory.\nLower panel: calculations based on the layered-sphere theory.\n}\\label{ext_w}\\end{center}\n\\end{figure}\n\nAs follows from Fig.~\\ref{ext_w}, the wavelength\ndependence of extinction flattens as porosity increases.\nIt is well known (see, e.g., Greenberg \\cite{g78}) that\ndifferent particles produce comparable extinction if the products of their\nsize $r$ and refractive index are close, i.e.\n\\begin{equation}\n r \\, |m-1| \\approx {\\rm const.}\n\\label{mr}\\end{equation}\n The average refractive index of particles with a larger fraction of vacuum\nis closer to 1.\nDespite a larger radius (e.g., from Eq.~(\\ref{xpor}) it follows that\n$r_{\\rm s}=0.22 \\,\\mu{\\rm m}$ if ${\\cal P}=0.9$ and $r_{\\rm s,\\,compact}=0.1 \\,\\mu{\\rm m}$),\nthe product given by Eq.~(\\ref{mr}) decreases because of a\ndrop of $|m-1|$. This implies that a steeper extinction,\nwith a wavelength dependence closer to $\\propto \\lambda^{-1.33}$,\nis produced by compact particles with larger radii and,\nconsequently, a larger amount of solid material.\nThis behaviour is also observed for particles of other masses (i.e.,\ncompact spheres of other radii). Therefore, an interpretation\nof the observed interstellar extinction curve\nusing only very porous grains should\nnot give any gain in dust-phase abundances and would contradict\nthe wavelength behaviour.\n\n\\begin{figure}\\begin{center}\n\\resizebox{\\hsize}{!}{\\includegraphics{3371f2.eps}}\n\\caption{\nThe same as in Fig.~\\ref{ext_w} but now for the normalized extinction cross section.\n}\\label{cn_w}\\end{center}\n\\end{figure}\nThe role of porosity in extinction is better seen from\nFig.~\\ref{cn_w} where\nwe give the wavelength dependence of the normalized cross section\n\\begin{eqnarray}\nC_{\\rm ext}^{\\rm (n)} = \\frac{C_{\\rm ext}({\\rm porous \\, grain})}\n{C_{\\rm ext}({\\rm compact \\, grain \\, of \\, same \\, mass})} = \\nonumber \\\\\n\\,\\,\\,\\,\\,\\,\\, (1-{\\cal P})^{-2\/3}\\, \\frac{Q_{\\rm ext}({\\rm porous \\, grain})}\n{Q_{\\rm ext}({\\rm compact \\, grain \\, of \\, same \\, mass})}. \\label{cn}\n\\end{eqnarray}\nThis quantity shows how porosity influences the extinction cross section.\nAs follows from Fig.~\\ref{cn_w}, both models predict a growth of\nextinction of porous particles in the far-UV and a decrease in the\nvisual--near-UV part with growing ${\\cal P}$. 
However, the wavelength interval\nwhere $C_{\\rm ext}^{\\rm (n)} <1$ is narrower and the minimum is\nless deep in the case of layered spheres.\nIn comparison with compact grains and particles with Rayleigh inclusions,\nparticles with Rayleigh\/non-Rayleigh inclusions can also produce rather\nlarge extinction in the near-IR part of spectrum.\nThis is especially\nimportant for the explanation of the flat extinction across the $3 - 8\\,\\mu{\\rm m}$\nwavelength range measured for several lines of sight\n(see Sect.~\\ref{ir_ext}).\nAt the same time, as follows from Fig.~\\ref{pp09},\nfor particles with $r_{\\rm s,\\,compact}>0.1 \\,\\mu{\\rm m}$\nthe normalized UV cross sections grow faster for the\nBruggeman--Mie theory than for layered spheres.\nThus, an addition of vacuum into particles does not mean an automatic\ngrowth of extinction at all wavelengths and a saving in terms\nof solid-phase elements.\nEvidently, the final decision can be made after fitting the theoretical\ncalculations with observations at many wavelengths.\n\\begin{figure}\\begin{center}\n\\resizebox{\\hsize}{!}{\\includegraphics{3371f3.eps}}\n\\caption{Wavelength dependence of the normalized extinction cross section\nfor spherical particles with the same porosity ${\\cal P}=0.9$.\nUpper panel: calculations based on the EMT-Mie theory.\nLower panel: calculations based on the layered-sphere theory.\n}\\label{pp09}\\end{center}\n\\end{figure}\n\n\\subsection{Extinction in the directions\nto the stars $\\zeta$ Oph and $\\sigma$ Sco}\\label{st_ext}\n\nThe basic requirement for any model of interstellar dust is\nthe explanation of the observed extinction law along with\nthe dust-phase abundances of elements in the interstellar medium.\nThese abundances are obtained as the difference between the\ncosmic reference abundances and the observed gas-phase abundances.\nHowever, the cosmic abundances are not yet\nconclusively established and usually this causes a problem.\nFor many years, the solar abundances were used as the reference ones,\nuntil the photospheres of the early-type stars were found\nnot to be as rich in heavy elements as the solar\nphotosphere was (Snow \\& Witt \\cite{sw96}).\nThese stellar abundances caused the so-called ``carbon crisis''.\nAbundances of the most important dust-forming elements (C, O, Mg, Si, Fe)\nrequired by the dust models were larger than available.\nHowever, during the past several years the solar abundances\ndropped and now they approach the stellar ones\n(see Asplund et al. \\cite{ags04}).\nEvidently, some abundances determined by Sofia \\& Meyer (\\cite{sm01})\nfor F and G stars must be revised downward\nas it has been done recently\nfor the Sun. This should lead to the agreement between abundances found\nfor stars of different types.\nNote also that the current solar abundances of oxygen and iron\n(see Table~\\ref{ida}) are close to those found from\nhigh-resolution \\mbox{X-ray} spectroscopy.\nJuett et al.~(\\cite{jsc03}) investigated the oxygen\nK-shell interstellar absorption edge in seven X-ray binaries and evaluated\nthe total O abundances. 
These abundances lie between\n467~ppm$^{(}$\\footnote{parts per million hydrogen atoms}$^{)}$ and 492~ppm.\nSchulz et al.~(\\cite{schet02}) evaluated the\ntotal abundance of iron towards the object Cyg~X-1 to be\n$[{\\rm Fe}\/{\\rm H}]_{\\rm cosmic} \\approx 25$~ppm.\n\nWe applied the model of multi-layered porous particles\nto explain the absolute extinction in the direction to the two stars.\nA first estimate was made in order to find a way\nto enlarge the extinction per unit mass and to minimize the amount\nof solid-phase material.\nSeveral materials were considered as components of the composite grains.\nAmong the carbonaceous species,\namorphous carbon in the form of Be1 (Rouleau \\& Martin \\cite{rm91})\nwas found to produce the largest extinction. The extinction of iron\noxides also strongly increases with the growth of porosity.\nAlthough there are no very good constraints on the abundance of oxides,\nFeO is considered a possible carrier of the 21~$\\mu{\\rm m}$ emission\nobserved in the spectra of post-AGB stars (Posch et al. \\cite{pma04}).\nVery likely, such particles can be produced in redox\nreactions (Duley \\cite{dul80}, Jones \\cite{jones90}).\n\n\\begin{figure}[htb]\n\\begin{center}\n\\resizebox{\\hsize}{!}{\\includegraphics{3371f4.eps}}\n\\caption\n{Observed and calculated extinction in the direction to\n$\\zeta$ Oph.\nThe errors of the observations are the result of a parameterization\nof the observations (see Fitzpatrick \\& Massa \\cite{fm90}).\nThe contribution to the theoretical extinction from different components\nis also shown.\nThe dot-dashed curve is the approximation with the observed value of\n$R_{\\rm V}$ as suggested by Cardelli et al. (\\cite{ccm89}).\n}\n\\label{zeta}\\end{center}\\end{figure}\n\n\\begin{figure}[htb]\n\\begin{center}\n\\resizebox{\\hsize}{!}{\\includegraphics{3371f5.eps}}\n\\caption\n{The same as in Fig.~\\ref{zeta} but now for $\\sigma$ Sco.\nThe observational data were taken from Wegner~(\\cite{ww02}).\n}\\label{sigma}\\end{center}\\end{figure}\n\nWe fitted the observed extinction toward the stars\n$\\zeta$~Oph and $\\sigma$ Sco.\nFor these stars there exist well-determined extinction curves and\ngas-phase abundances. It is also important that the major\npart of the extinction in these directions is produced in one\ndiffuse interstellar cloud (Savage \\& Sembach \\cite{ss96},\nZubko et al. \\cite{zkw96}). This allows us to exclude possible large\nvariations in dust composition along the line of sight.\n\nObserved and calculated extinction curves are plotted in\nFigs.~\\ref{zeta} ($\\zeta$~Oph) and \\ref{sigma} ($\\sigma$ Sco).\nAs follows from the previous Section,\nthe use of only porous or only compact grains apparently\ndoes not result in a significant benefit in the solid-phase abundances.\nTherefore, our models are combinations of compact and\nporous particles. They consist of three or four grain populations:\n\n(I). Porous composite (multi-layered) particles\n(Be1 --- 5\\%,\npyroxene, Fe$_{0.5}$Mg$_{0.5}$SiO$_3$ --- 5\\% for $\\zeta$ Oph or\nforsterite, Mg$_2$SiO$_4$ --- 5\\% for $\\sigma$ Sco and\nvacuum --- 90\\%)\nwith a power-law size distribution having an exponential decay,\n$n_{\\rm d}(r_{\\rm s})\\propto r_{\\rm s}^{-2.5}\\exp (-10\/r_{\\rm s})$.\nThe lower\/upper cut-offs in the size distribution are\n$0.015\\,\\mu{\\rm m}$\/$0.25\\,\\mu{\\rm m}$ and $0.05\\,\\mu{\\rm m}$\/$0.50\\,\\mu{\\rm m}$\nfor $\\zeta$~Oph and $\\sigma$ Sco, respectively.\n\n(II). 
Small compact graphite\\footnote{\nThe calculations for graphite were made in the ``2\/3--1\/3'' approximation:\n$Q_{\\rm ext} = {2}\/{3}\\, Q_{\\rm ext}(\\varepsilon_{\\bot}) +\n {1}\/{3}\\, Q_{\\rm ext}(\\varepsilon_{||})$,\nwhere $\\varepsilon_{\\bot}$ and $\\varepsilon_{||}$ are the dielectric\nfunctions for two cases of orientation of the electric field relative\nto the basal plane of graphite.} grains\nwith a narrow power-law size distribution\n($n_{\\rm d}(r_{\\rm s})\\propto r_{\\rm s}^{-2.5}$,\n$r_{\\rm s}=0.01 - 0.02\\,\\mu{\\rm m}$).\n\n(III). Porous composite grains of magnetite\n(Fe$_3$O$_4$ --- 2\\%, vacuum --- 98\\% for $\\zeta$ Oph and\nFe$_3$O$_4$ --- 8\\%, vacuum --- 92\\% for $\\sigma$ Sco)\nwith a power-law size distribution\n$n_{\\rm d}(r_{\\rm s})\\propto r_{\\rm s}^{-2.5}$.\nThe lower\/upper cut-off in the size distribution is\n$0.005\\,\\mu{\\rm m}$\/$0.25\\,\\mu{\\rm m}$ and $0.05\\,\\mu{\\rm m}$\/$0.35\\,\\mu{\\rm m}$\nfor $\\zeta$~Oph and $\\sigma$ Sco, respectively.\n\n(IV). Compact grains of forsterite (Mg$_2$SiO$_4$)\nwith the power-law size distribution (only for $\\zeta$ Oph,\n$n_{\\rm d}(r_{\\rm s})\\propto r_{\\rm s}^{-3.5}$,\n$r_{\\rm s,\\,min}=0.10\\,\\mu{\\rm m}$, $r_{\\rm s,\\,max}=0.25\\,\\mu{\\rm m}$). \\\\\n\n\\begin{table*}[htb]\n\\begin{center}\n\\caption[]{Contribution of different grain populations\nto $A_{\\rm V}$ and dust-phase abundances\nfor the model of $\\zeta$ Oph (in ppm)}\\label{da-oph}\n\\begin{tabular}{llccccc}\n\\hline\n\\noalign{\\smallskip}\nComponent & $A_{\\rm V}$ & C & O & Mg& Si & Fe \\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n(I) Be1\/pyroxene\/vacuum & 0\\fm58 & 123 & 68 & 11.3 & 22.6 & 11.3 \\\\\n(II) Graphite & 0\\fm075& ~96 & & & & \\\\\n(III) Magnetite\/vacuum & 0\\fm22& & 33 & & & 24.8 \\\\\n(IV) Forsterite & 0\\fm065& & 23 & 11.4 & 5.7 & \\\\\n\\noalign{\\smallskip}\\hline\n\\noalign{\\smallskip}\nTotal & 0\\fm94 & 219 & 124 & 22.7 & 28.2 & 36.1\\\\\n\\noalign{\\smallskip}\\hline\n\\end{tabular}\\end{center}\n\\end{table*}\n\nFigures~\\ref{zeta} and \\ref{sigma} also contain\nthe extinction curves calculated using the approximation\nsuggested by Cardelli et al. (\\cite{ccm89}) with\nthe coefficients revised by O'Donnell (\\cite{odonn94}).\nCardelli et al. (\\cite{ccm89}) found that\nthe extinction curves from the UV through the IR\ncould be characterized as a one-parameter family dependent\non the ratio of the total extinction to the selective one\n$R_{\\rm V}=A_{\\rm V}\/E({\\rm B-V})$.\nWe used the observed values of $R_{\\rm V}$ in order\nto plot the CCM approximation. 
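\n\nFor orientation, the structure of this one-parameter family, $A(\\lambda)\/A_{\\rm V} = a(x) + b(x)\/R_{\\rm V}$ with $x=1\/\\lambda$, can be sketched numerically. The Python fragment below (our own illustrative code, not part of the original modelling) implements only the infrared branch $0.3 \\leq x \\leq 1.1\\,\\mu{\\rm m}^{-1}$, for which Cardelli et al. (\\cite{ccm89}) give $a(x)=0.574\\,x^{1.61}$ and $b(x)=-0.527\\,x^{1.61}$; the optical and UV branches use longer polynomial fits that we omit here.\n\\begin{verbatim}\ndef ccm_ir(wavelength_um, R_V=3.1):\n    # Infrared branch of the CCM law, valid for\n    # 0.3 <= 1\/lambda <= 1.1 (inverse microns).\n    x = 1.0 \/ wavelength_um\n    if not 0.3 <= x <= 1.1:\n        raise ValueError(\"outside the IR branch\")\n    a = 0.574 * x**1.61\n    b = -0.527 * x**1.61\n    return a + b \/ R_V        # A(lambda)\/A(V)\n\nprint(ccm_ir(2.2))            # K band: ~0.11 for R_V = 3.1\n\\end{verbatim}\n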
It is seen that\nthis relation describes quite well the extinction for\n$\\zeta$~Oph but not for $\\sigma$ Sco.\n\n\\begin{table*}[htb]\n\\begin{center}\n\\caption[]{Contribution of different grain populations\nto $A_{\\rm V}$ and dust-phase abundances\nfor the model of $\\sigma$ Sco (in ppm)}\\label{da-sco}\n\\begin{tabular}{llccccc}\n\\hline\n\\noalign{\\smallskip}\nComponent & $A_{\\rm V}$ & C & O & Mg& Si & Fe \\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n(I) Be1\/forsterite\/vacuum & 0\\fm50 & 58 & 35.4 & 17.7 & 8.8 & \\\\\n(II) Graphite & 0\\fm11& 79 & & & & \\\\\n(III) Magnetite\/vacuum & 0\\fm52& & 35.4 & & & 26.6 \\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\nTotal & 1\\fm13 & 137 & 71 & 17.7 & 8.8 & 26.6\\\\\n\\noalign{\\smallskip}\\hline\n\\end{tabular}\\end{center}\n\\end{table*}\n\nThe contributions from different components to the calculated extinction\nare given in Tables~\\ref{da-oph} and \\ref{da-sco}\nand shown in Figs.~\\ref{zeta} and \\ref{sigma}.\nThe Tables contain also the dust-phase abundances\nof five dust-forming elements for several grain populations.\nThey were calculated for ratios of the extinction cross-section\nto particle volume averaged over grain size distribution\n(see Eq.~(3.36) in Voshchinnikov~\\cite{v02}).\n\nTable~\\ref{ida} gives the current solar abundances\nof five dust-forming elements according to Asplund et al.~(\\cite{ags04})\nas well as the ``observed''\n(solar minus gaseous) and model abundances.\n\nThe dust-phase abundances in the line of sight to the star\n$\\zeta$~Oph (HD~149757) were taken from\nTable~2 of Snow \\& Witt~(\\cite{sw96}).\nIn our calculations, we adopted\nthe following quantities\nfor $\\zeta$~Oph: a total extinction\n$A_{\\rm V}=0\\fm94^($\\footnote{This value was obtained from the\nrelation $A_{\\rm V}= 1.12 \\, E({\\rm V-K})$ (Voshchinnikov \\& Il'in \\cite{vi87})\nand a colour excess $E({\\rm V-K})=0\\fm84$ (Serkowski et al. \\cite{smf75}).}$^)$,\n colour excess $E({\\rm B}-{\\rm V})=0\\fm32$ and\ntotal hydrogen column density $N({\\rm H})=1.35\\,10^{21}\\,{\\rm cm}^{-2}$\n(Savage \\& Sembach, \\cite{ss96}).\nThe extinction curve was reproduced according to the parameterization\nof Fitzpatrick \\& Massa~(\\cite{fm90}).\n\nFor $\\sigma$ Sco (HD~147165), we used the extinction curve,\nthe colour excess $E({\\rm B}-{\\rm V})=0\\fm35$ and\nthe total extinction $A_{\\rm V}=1\\fm13$ according to Wegner~(\\cite{ww02}).\nThe hydrogen column density\n$N({\\rm H})=2.46 \\, 10^{21}\\, {\\rm cm}^{-2}$ was adopted from\nZubko et al.~(\\cite{zkw96}). The gas-phase abundances were taken from\nAllen et al.~(\\cite{asj90}).\n\nThe dust-phase abundances required by the model are larger\nthan the observed ones in the direction to $\\zeta$~Oph\n(for C and Fe) and\nsmaller than the observed abundances in the direction to $\\sigma$ Sco.\nNote that for $\\sigma$ Sco the required amount of C and Si in dust\ngrains is the lowest in comparison with previous\nmodelling. This is due to the use of highly porous particles\nwhich give considerable extinction in the UV and near-IR (see\nFigs.~\\ref{cn_w} and \\ref{pp09}) and allow one to ``save'' material.\nFor example, the extinction model of $\\sigma$~Sco with compact grains\npresented by Zubko et al.~(\\cite{zkw96})\nrequires 240~--~260 ppm of C and 20~--~30 ppm of Si\nand the model of Clayton et al.~(\\cite{cletal03})\nneeds 155 ppm of C and 36 ppm of Si\n(cf. 
137 ppm and 8.8 ppm from Table~\\ref{ida}).\n\nThe models presented above are based on the light scattering\ncalculations for particles with Rayleigh\/non-Rayleigh inclusions\n(layered spheres). It is evident that the observed extinction\ncan be also reproduced if we use the particles with Rayleigh inclusions\n(i.e., if we apply the EMT-Mie theory). Our estimates show that\ndespite a larger extinction in the UV\nthis model requires more material in the solid phase\nin comparison with layered spheres because of\na smaller extinction in the visual-near IR part of the spectrum.\n\n\\begin{table}[htb]\n\\begin{center}\n\\caption[]{Observed and model dust-phase abundances (in ppm)}\\label{ida}\n\\begin{tabular}{cccccc}\n\\hline\n\\noalign{\\smallskip}\nElement & Solar$^\\ast$ & \\multicolumn{2}{c} {$\\zeta$ Oph} &\\multicolumn{2}{c} {$\\sigma$ Sco}\\\\\n & abundance& obs & model & obs & model \\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n~C& 245 & 110 & 219 & 176 & 137 \\\\\n~O& 457 & 126 & 124 & ~85 & ~71 \\\\\nMg$^{\\ast\\ast}$& ~33.9 & ~31.9& ~22.7& ~30.9 & ~~~17.7 \\\\\nSi& ~34.2 & ~32.6& ~28.2& ~~32.4 & ~~~8.8 \\\\\nFe& ~28.2 & ~28.2& ~36.1& ~~~27.9 & ~~~26.6 \\\\\n\\noalign{\\smallskip}\\hline\n\\end{tabular}\\end{center}\n\\noindent $^\\ast$ According to Asplund et al.~(\\cite{ags04}). \\\\\n\\noindent $^{\\ast\\ast}$ The abundance of Mg was recalculated with the oscillator\nstrengths from Fitzpatrick~(\\cite{fp97}).\n\n\\end{table}\n\n\n\\subsection{Near infrared extinction in the Galactic plane}\\label{ir_ext}\n\nWe now consider the possibility of explaining the flat extinction\nacross the $3 - 8\\,\\mu{\\rm m}$ wavelength range observed for\nseveral lines of sight. This flattening was first measured by\nLutz et al.~(\\cite{luetal96}) toward the Galactic center\nwith {\\it ISO}, using hydrogen recombination lines.\nLater Lutz~(\\cite{lutz99}) confirmed the effect using more\nrecombination lines. Recently, Indebetouw et al.~(\\cite{inetal05})\nfound a similar flat extinction along two lines of sight:\n$l=42\\degr$ and $l=284\\degr$. The extinction was obtained at seven\nwavelengths ($1.2 - 8\\,\\mu{\\rm m}$) by combining images from the {\\it Spitzer\nSpace Telescope} with the 2MASS point-source catalog.\n\n\\begin{figure}[htb]\n\\begin{center}\n\\resizebox{\\hsize}{!}{\\includegraphics{3371f6.eps}}\n\\caption\n{Observed and calculated extinction in the near-IR part\nof spectrum. The observations correspond to the average extinction\nfor two lines of sight along the Galactic plane\n(Indebetouw et al.~\\cite{inetal05})\ntransformed into magnitudes of extinction per kpc.\nThe theoretical extinction was calculated for component~(I)\nof the model used for $\\zeta$ Oph (${\\cal P}=0.9$, short dashed curve).\nTwo other curves correspond to the same component but with\nanother particle porosity.\n{\\rm The dot-dashed curve is the approximation of Cardelli et al. 
(\\cite{ccm89})\nwith $R_{\\rm V}=3.1$.}\n}\n\\label{gc}\\end{center}\\end{figure}\nWe used the average extinction given in Table~1 of\nIndebetouw et al.~(\\cite{inetal05}) and transformed\nit into magnitudes of extinction per kpc using\nthe measured value $A_{\\rm K}\/D = 0\\fm15 \\pm 0\\fm1\\,{\\rm kpc}^{-1}$\n(Indebetouw et al.~\\cite{inetal05}).\nThe observations are plotted in Fig.~\\ref{gc} together with\nthree theoretical curves.\nBecause we have little information about the UV-visual extinction and\ngas-phase abundances in these directions, we (rather arbitrarily)\napplied the model used for\n$\\zeta$ Oph (porous component (I): Be1\/pyroxene, porosity 90\\%; see\nSect.~\\ref{st_ext}). This model (short dashed curve on Fig.~\\ref{gc})\nwell explains the flat extinction at $\\lambda > 3\\,\\mu{\\rm m}$$^($\\footnote{Note\nthat porous particles from magnetite (component (III); see\nSect.~\\ref{st_ext}) cannot fit well the extinction at these wavelengths\nbecause of a bump at $\\lambda \\approx 2\\,\\mu{\\rm m}$.}$^)$\nbut the extinction in the J and H bands is too small.\nCompact particles (long-dashed curve in Fig.~\\ref{gc}) produce\neven larger extinction at these bands than the observed one.\nHowever, the extinction from such particles at longer wavelengths\ndecreases rapidly. Our preliminary analysis shows that\nparticles with a porosity of\nabout 0.6 (solid curve on Fig.~\\ref{gc})\ncan be chosen as an appropriate model.\nEvidently, a similar curve can be obtained as a combination\nof compact and very porous particles.\nQuite close extinction can be found with\nthe CCM approximation and the standard value $R_{\\rm V}=3.1$.\nWe arbitrarily extrapolated this approximation to long wavelengths\n($\\lambda^{-1} < 0.3 \\,\\mu{\\rm m}^{-1}$) where it gives too small extinction.\n\n Extinction produced by porous grains was also rather flat\nbetween 1.0 and 2.2~$\\mu{\\rm m}$ (for example,\n$A(\\lambda) \\propto \\lambda^{-1.3}$ for ${\\cal P}=0.6$) as was detected\nfor several ultracompact HII regions with $A_{\\rm V} \\ga 15^{\\rm m}$\n(Moore et al. \\cite{metal05}).\n\nIn order to calculate the dust-phase abundances\nfor models presented in Fig.~\\ref{gc}\nwe estimated the total hydrogen column density\nusing Eqs.~(3.26), (3.27) and (3.22) from Voshchinnikov~(\\cite{v02}).\nFirst, we found the column density of\natomic hydrogen $N({\\rm HI})$ from the extinction at J band and then\ntransformed $N({\\rm HI})$ into a total hydrogen column density\n$N({\\rm H})$ using the ratio of total to selective extinction\n$R_{\\rm V}=3.1$. A value of\n$N({\\rm H})\/D=2.79 \\, 10^{21}\\, {\\rm cm}^{-2}{\\rm kpc}^{-1}$\nwas obtained.\nThe calculated abundances are given in Table~\\ref{igc}, which also\ncontains the visual extinction calculated for three\nmodels. 
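\n\nThe bookkeeping behind such dust-phase abundance estimates can be illustrated by a short numerical sketch (our own simplified example with assumed round-number inputs, not the full machinery of Eq.~(3.36) in Voshchinnikov~\\cite{v02}): the extinction fixes the dust volume column through the ratio $C_{\\rm ext}\/V$, and the volume column together with an assumed bulk density and elemental mass fraction gives the number of atoms locked in grains.\n\\begin{verbatim}\nM_H = 1.66e-24      # hydrogen mass, g\nA_V = 0.79          # visual extinction, mag (P=0.6 model)\nN_H = 2.79e21       # hydrogen column density, cm^-2\nC_over_V = 7.0e4    # assumed size-averaged C_ext\/V at V, cm^-1\nrho = 2.5           # assumed bulk density, g cm^-3\nf_X = 0.24          # assumed mass fraction of element X\nmu_X = 28.0         # atomic weight of X (e.g. Si)\n\ntau_V = A_V \/ 1.086            # optical depth from extinction\nV_col = tau_V \/ C_over_V       # dust volume per unit area\nN_X = f_X * rho * V_col \/ (mu_X * M_H)\nprint(1e6 * N_X \/ N_H)         # abundance of X in ppm\n\\end{verbatim}\nWith these inputs the sketch yields a few tens of ppm, i.e. the correct order of magnitude; the exact entries of Table~\\ref{igc} follow from the full size- and wavelength-dependent calculation.\n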
Note that the model of grains with porosity ${\\cal P}=0.6$\ngives the largest contribution to $A_{\\rm V}$ in comparison with the\ntwo other models.\n\n\\begin{table}[htb]\n\\begin{center}\n\\caption[]{Dust-phase abundances (in ppm)\nand visual extinction for the three models presented\nin Fig.~\\ref{gc}.}\\label{igc}\n\\begin{tabular}{cccc}\n\\hline\n\\noalign{\\smallskip}\nElement & ${\\cal P} = 0$ & ${\\cal P} = 0.6$ & ${\\cal P} = 0.9$\\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n~C& 96 & 84 & 62 \\\\\n~O& 53 & 46 & 34 \\\\\nMg& ~8.9& ~7.7 & ~5.7 \\\\\nSi& 18 & 16 & 11 \\\\\nFe& ~8.9 & ~7.7 & ~5.7 \\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\n$A_{\\rm V}$ & 0\\fm60 & 0\\fm79 & 0\\fm61 \\\\\n\\noalign{\\smallskip}\\hline\n\\end{tabular}\\end{center}\n\\end{table}\n\nThere are useful observational data\nfor foreground stars in the field $l=284\\degr$.\nFor the stars HD~90273, HD~93205 and HD~93222,\nWegner~(\\cite{ww02}) and Barbaro et al.~(\\cite{betal04})\nestimate values of $R_{\\rm V}$ which lie between 3.4 and 4.0.\nFor HD~90273, Barbaro et al.~(\\cite{betal04}) also find\nan anomalously high gas-to-dust ratio,\n$N({\\rm H})\/E({\\rm B}-{\\rm V})=\n1.04\\,10^{22}\\,{\\rm atoms}\\,{\\rm cm}^{-2}\\,{\\rm mag.}^{-1}$\nAn enlargement of $R_{\\rm V}$ increases the required dust-phase\nabundances, while a decrease of the gas-to-dust ratio reduces them.\nAndr\\'e et al.~(\\cite{and03}) measured the interstellar gas-phase\noxygen abundances along the sight lines toward 5 early-type stars\nwith $l=285\\fdg3-287\\fdg7$ and $b=-5\\fdg5~-~+0\\fdg1$.\nThe values of $[{\\rm O}\/{\\rm H}]_{\\rm g}$ vary from\n 356~ppm to 512~ppm, the average value being 443~ppm.\nThis gives a mean dust-phase\nabundance of $[{\\rm O}\/{\\rm H}]_{\\rm d} = 14$~ppm.\nHowever, the extinction curves for HD~93205 and HD~93222\npublished by Wegner~(\\cite{ww02}) have strong UV bumps and flat\nextinction in the far-UV. This means that the extinction can be mainly produced\nby carbonaceous grains.\nEvidently, the most reasonable way to solve the problem of\nabundances is a re-examination of the reference cosmic\nabundances and a detailed study of their local values.\n\n\\section{Infrared radiation}\\label{irr}\n\\subsection{Dust temperature}\\label{temp}\n\n\n\\begin{figure}\\begin{center}\n\\resizebox{\\hsize}{!}{\\includegraphics{3371f7.eps}}\n\\caption{Size dependence of the temperature\nfor spherical particles.\nThe particles are located at a distance of\n10$^4\\,R_\\star$ from a star with an effective temperature\n$T_\\star=2500$\\,K.\nUpper panel: calculations based on the EMT-Mie theory.\nLower panel: calculations based on the layered-sphere theory.\n}\\label{td}\\end{center}\n\\end{figure}\nThe commonly used equilibrium temperature\nof cosmic grains is derived from a balance between the energy gain\ndue to absorption of the UV and visual stellar photons\nand the energy loss due to re-emission of IR photons.\nThe temperature of porous and compact particles\nof different size and porosity is shown in\nFigs.~\\ref{td} and \\ref{td-ppp} as a function of particle\nsize and porosity, respectively.\nThe results were calculated for particles located at a distance of\n10$^4\\,R_\\star$ from a star with an effective temperature\n$T_\\star=2500$\\,K. In the case of the layered spheres,\nan increase of the vacuum fraction\ncauses a decrease of the grain temperature if the amount of the\nsolid material is kept constant. 
This behaviour holds for particles of all\nsizes as well as for particles located closer to the star or farther away\nand for other values of $T_\\star$. If the EMT-Mie theory is applied,\nthe temperature drops as the porosity grows up to $\\sim 0.7$ and then\nstarts to increase (see Fig.~\\ref{td}, upper panel, and Fig.~\\ref{td-ppp}).\nSuch a behaviour corresponds to the results\nof Greenberg \\& Hage~(\\cite{grha91}), who found an increase of\ntemperature for large grain porosity (see Fig.~4 in their paper).\n\n\n\\begin{figure}[htb]\\begin{center}\n\\resizebox{\\hsize}{!}{\\includegraphics{3371f8.eps}}\n\\caption{Dependence of the dust temperature on particle porosity.\nSolid lines: calculations based on the EMT-Mie theory.\nDashed lines: calculations based on the layered-sphere theory.\nOther parameters are the same as in Fig.~\\ref{td}.\n}\\label{td-ppp}\\end{center}\\end{figure}\nAs follows from Figs.~\\ref{td} and \\ref{td-ppp},\nthe difference in the temperature of very porous grains\ncalculated using the two models can reach $\\sim 6$\\,K or $\\sim 15$\\%,\nwhile the temperature of compact (${\\cal P} = 0$)\ncomposite grains differs by less than 1\\%.\nNote that the relative difference in temperatures of $\\sim 15$\\%\nbetween particles of the two types persists for other stellar temperatures\n(e.g., in the case of the Sun or an interstellar radiation field).\n\n\nAn intermediate porosity of grains leads to a shift of the\npeak position of the IR emission to longer wavelengths\nin comparison with compact particles. This occurs independently of\nthe particle structure (small or variously sized inclusions).\nHowever, very porous grains with Rayleigh\/non-Rayleigh inclusions\nare expected to be systematically cooler than\nparticles with Rayleigh inclusions. Such a difference can be of\ngreat importance in the low-temperature regime because it can influence\nthe growth\/destruction of mantles on grains in molecular clouds.\n\n \n\n\\subsection{Infrared features}\\label{ir_b}\n\nIt is well known that the shape of the IR dust features is a good indicator\nof the particle size and chemical composition. 
With an increase of the size,\na feature becomes wider and eventually fades away.\nFor example, in the case of compact spherical grains of astrosil,\nthe 10~$\\mu{\\rm m}$ and 18~$\\mu{\\rm m}$ features disappear when the grain radius\nexceeds $\\sim 2-3 \\,\\mu{\\rm m}$.\nObserved differences in the small-scale structure of the features\nare usually attributed to variations of the composition\n(e.g., changes of the ratio of magnesium\nto iron in silicates) or of the material state (amorphous\/crystalline).\n\n\\begin{figure}\\begin{center}\n\\resizebox{\\hsize}{!}{\\includegraphics{3371f9.eps}}\n\\caption{Wavelength dependence of the absorption efficiency factors\nfor spherical particles\nof radius $r_{\\rm s, \\, compact}=0.1 \\,\\mu{\\rm m}$.\nUpper panel: calculations based on the EMT-Mie theory.\nLower panel: calculations based on the layered-sphere theory.\n}\\label{abs01}\\end{center}\n\\end{figure}\nIn Fig.~\\ref{abs01} we compare the wavelength dependence of\nthe absorption efficiency factors for particles of the same mass but different\nstructure.\n The upper panel shows results obtained with the EMT-Mie model\nfor particles with Rayleigh inclusions.\nIt can be seen that the central position and the width of the dust features\ndo not change appreciably.\nLarger changes occur for the layered-sphere model\n(Fig.~\\ref{abs01}, lower panel).\n In this case a growth of ${\\cal P}$ causes a shift of the center\nof the feature to longer wavelengths as well as its broadening.\n For particles with ${\\cal P}=0.9$, the 10~$\\mu{\\rm m}$ feature transforms into\na plateau while the 18~$\\mu{\\rm m}$ feature disappears.\n\n\\begin{figure}\\begin{center}\n\\resizebox{\\hsize}{!}{\\includegraphics{3371f10.eps}}\n\\caption[]\n{Emission in the disc around the star ${\\beta}~\\rm Pictoris$ in the region of the silicate\n10\\,$\\mu{\\rm m}$ band.\nStars and squares are the observations of Knacke et al.~(\\cite{kn93})\nand of Telesco \\& Knacke~(\\cite{tk91}). 
The curves present the results\nof calculations for particles of radius $r_{\\rm s, \\, compact}=0.1 \\,\\mu{\\rm m}$\nas shown in Fig.~\\ref{abs01} but normalized at $\\lambda=9.6\\,\\mu{\\rm m}$.\n}\\label{bet}\\end{center}\n\\end{figure}\nWe plotted in Fig.~\\ref{bet} our data from Fig.~\\ref{abs01} in a normalized\nmanner together with the observations of ${\\beta}~\\rm Pictoris$ made by\nKnacke et al.~(\\cite{kn93}) and Telesco \\& Knacke~(\\cite{tk91}).\nAs follows from Fig.~\\ref{bet}, for the given optical constants of the silicate\nthe observed shape of the 10~$\\mu{\\rm m}$ feature\nis better reproduced by either compact particles or porous particles\nwith small-size inclusions of materials.\nNote that in the case of ${\\beta}~\\rm Pictoris$ EMT-Mie calculations\nwere used earlier by Li \\& Greenberg~(\\cite{li:gre98}) for the\nexplanation of the 10~$\\mu{\\rm m}$ emission feature and by\nVoshchinnikov \\& Kr\\\"ugel~(\\cite{vk99}) for\nthe interpretation of the positional\nand wavelength dependence of polarization.\nThe best fit was obtained for very\nporous particles: ${\\cal P} \\approx 0.95$ and 0.76, respectively.\n\n\n\\begin{figure}\\begin{center}\n\\resizebox{\\hsize}{!}{\\includegraphics{3371f11.eps}}\n\\caption{Wavelength dependence of the normalized absorption efficiency factors\nfor spherical particles of radius $r_{\\rm s, \\, compact}=2 \\,\\mu{\\rm m}$.\nUpper panel: calculations based on the EMT-Mie theory.\nLower panel: calculations based on the layered-sphere theory.\n}\\label{abs2}\\end{center}\n\\end{figure}\nFigure~\\ref{abs2} shows the normalized absorption efficiency factors\nfor spherical particles of radius $r_{\\rm s, \\, compact}=2 \\,\\mu{\\rm m}$.\nFor compact grains the 10~$\\mu{\\rm m}$ feature almost disappears.\nWhen the porosity increases, the strength of the feature grows\nin the case of the Bruggeman-Mie calculations.\nThis tendency coincides with the results\nshown in Fig.~7 of Hage \\& Greenberg~(\\cite{hagr90}), who found\nthat the higher the porosity, the sharper the silicate emission became.\nIn the case of layered spheres the feature becomes only slightly\nstronger, but its peak shifts to longer wavelengths.\n\n\nA standard ``compact'' approach to the modelling of\nthe 10~$\\mu{\\rm m}$ feature was used by\nvan Boekel et al.~(\\cite{vb03}, \\cite{vb05})\nand Przygodda et al. (\\cite{pr03}), who considered the flattening of the 10~$\\mu{\\rm m}$ feature\nas evidence of grain growth in the discs around Herbig Ae\/Be\nstars and T Tauri stars, respectively.\nOur investigations show that variations\nof the shape, position and strength of the feature can\nalso be attributed to changes of the porosity and of the relative amount\nof carbon in composite grains of small sizes.\n\n\n\\subsection{Dust opacities}\\label{opa}\n\n\\begin{table*}[htb]\n\\caption[]{Mass absorption coefficients at $\\lambda = 1$\\,mm of\ncompact and porous spheres consisting of\nAC1$^\\ast$ and (or) astrosil$^{\\ast\\ast}$.} \\label{t1}\n\\begin{center}\\begin{tabular}{cccccccccc}\n\\hline\n\\noalign{\\smallskip}\n&\\multicolumn{3}{c}{AC1 + astrosil}&\\multicolumn{3}{c}{astrosil}\n&\\multicolumn{3}{c}{AC1 } \\\\\n\\noalign{\\smallskip} \\cline{2-10} \\noalign{\\smallskip}\n${\\cal P}$ & $\\rho_{\\rm d}$&\\multicolumn{2}{c}{$\\kappa$, cm$^2$\/g}\n& $\\rho_{\\rm d}$ &\\multicolumn{2}{c}{$\\kappa$, cm$^2$\/g}\n& $\\rho_{\\rm d}$ &\\multicolumn{2}{c}{$\\kappa$, cm$^2$\/g} \\\\\n\\noalign{\\smallskip} \\cline{2-10} \\noalign{\\smallskip}\n&&Brugg.--Mie & lay. 
spheres\n&&Brugg.--Mie & lay. spheres\n&&Brugg.--Mie & lay. spheres \\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n 0.00 & ~2.58 & 1.58 & 1.58 & ~3.30 & 0.310& 0.310 & ~1.85 & 4.37 & ~4.37\\\\\n 0.10 & ~2.32 & 1.89 & 1.55 & ~2.97 & 0.371& 0.334 & ~1.66 & 5.13 & ~4.60\\\\\n 0.30 & ~1.80 & 2.75 & 1.87 & ~2.31 & 0.548& 0.446 & ~1.30 & 7.09 & ~5.77\\\\\n 0.50 & ~1.29 & 3.83 & 2.57 & ~1.65 & 0.778& 0.646 & 0.925 & 9.22 & ~7.88\\\\\n 0.70 & 0.772 & 3.94 & 4.04 & 0.990 & 0.794& ~1.05 & 0.555 & 9.14 & 11.9 \\\\\n 0.90 & 0.258 & 2.45 & 8.12 & 0.330 & 0.431& ~2.20 & 0.185 & 5.94 & 21.8 \\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n\\end{tabular}\\end{center}\n $^{\\ast}$ $m(\\lambda=1 {\\rm mm})=2.93+0.276i$, $\\rho_{\\rm d}=1.85$\\,g\/cm$^3$\n\n $^{\\ast\\ast}$ $m(\\lambda=1 {\\rm mm})=3.43+0.050i$, $\\rho_{\\rm d}=3.3$\\,g\/cm$^3$\n\\end{table*}\n\nThe dust opacity or\nmass absorption coefficient of a grain material, $\\kappa(\\lambda)$,\nenters directly into the expression for\nthe dust mass of an object, $M_{\\rm d}$, which is determined from\noptically thin millimeter emission\n\\begin{equation}\nM_{\\rm d} = \\frac{F_{\\rm mm}(\\lambda) D^2}{\\kappa(\\lambda) B_\\lambda(T_{\\rm d})}.\n \\label{m}\n\\end{equation}\nHere, $F_{\\rm mm}(\\lambda)$ is the observed flux,\n$D$ the distance to the object, $B_\\lambda(T_{\\rm d})$ the Planck function,\nand $T_{\\rm d}$ the dust temperature.\nThe mass absorption coefficient $\\kappa(\\lambda)$ depends on\nthe particle volume $V_{\\rm total}$, the material density $\\rho_{\\rm d}$\nand the extinction cross-section $C_{\\rm ext}$ as follows:\n\\begin{equation}\n\\kappa(\\lambda) = \\frac{C_{\\rm ext}}{\\rho_{\\rm d} V_{\\rm total}} \\approx\n \\frac{3}{\\rho_{\\rm d}} \\, \\left(\\frac{2 \\pi}{\\lambda}\\right) \\,\n {\\rm Im} \\left\\{\\frac{\\varepsilon_{\\rm eff}-1}{\\varepsilon_{\\rm eff}+2} \\right\\}.\n \\label{kap}\n\\end{equation}\nAt long wavelengths the scattering can be neglected\n($C_{\\rm ext} \\approx C_{\\rm abs}$) and\n$C_{\\rm abs}$ can be evaluated in the Rayleigh approximation.\nThen the mass absorption coefficient does not depend on the particle size,\nas shown by the right-hand side of Eq.~(\\ref{kap}).\nThe effective dielectric permittivity $\\varepsilon_{\\rm eff}$\nin Eq.~(\\ref{kap}) can be found from the Bruggeman rule (see Eq.~(\\ref{bru})) or\nthe layered-sphere rule of the EMT (see Eqs.~(7), (8) in\nVoshchinnikov et al. \\cite{vih05}).\n\n\nExtensive studies of the dependence of the mass absorption coefficient\non the material properties and grain shape are\nsummarized by Henning~(\\cite{h96}), who,\nin particular, notes that the opacities at 1~mm are considerably\nlarger for non-spherical particles than for spheres\n(see also Ossenkopf \\& Henning \\cite{oh94}).\nWe find that a similar effect (an increase of opacity in comparison\nwith compact spheres) is\nproduced by the inclusion of a large fraction of vacuum\ninto the particles. This follows from Table~\\ref{t1},\nwhere the opacities at $\\lambda = 1$\\,mm are presented;\na short numerical sketch reproducing several of its entries is given below. 
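\n\nAs an illustration of Eq.~(\\ref{kap}), the following minimal sketch (in Python; the script, its starting guess and its tolerances are our illustrative choices, not part of the published calculations) evaluates $\\kappa$ at $\\lambda = 1$\\,mm in the Rayleigh limit with $\\varepsilon_{\\rm eff}$ from the Bruggeman rule for a solid--vacuum mixture. It reproduces, e.g., $\\kappa \\approx 4.37\\,{\\rm cm}^2\/{\\rm g}$ for compact AC1.\n\\begin{verbatim}\n# Sketch: kappa at 1 mm for porous AC1 grains in the Rayleigh limit,\n# with eps_eff from the Bruggeman rule for a solid+vacuum mixture.\n# Optical constants and density are taken from the table footnotes.\nimport numpy as np\nfrom scipy.optimize import fsolve\n\nm_s, rho_s, lam = 2.93 + 0.276j, 1.85, 0.1   # AC1; g\/cm^3; lambda in cm\neps_s = m_s**2\n\ndef eps_eff(P):\n    def resid(v):\n        e = v[0] + 1j * v[1]\n        r = ((1 - P) * (eps_s - e) \/ (eps_s + 2 * e)\n             + P * (1 - e) \/ (1 + 2 * e))\n        return [r.real, r.imag]\n    guess = (1 - P) * eps_s + P            # volume-averaged starting point\n    v = fsolve(resid, [guess.real, guess.imag])\n    return v[0] + 1j * v[1]\n\ndef kappa(P):                              # right-hand side of the equation above\n    e = eps_eff(P)\n    rho = (1 - P) * rho_s                  # volume-averaged density\n    return 3 \/ rho * (2 * np.pi \/ lam) * ((e - 1) \/ (e + 2)).imag\n\nfor P in [0.0, 0.5, 0.9]:\n    print(P, round(kappa(P), 2))   # ~4.37, ~9.2, ~6.0 (cf. Brugg.-Mie column)\n\\end{verbatim}\n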
Table~\\ref{t1} contains the results for particles consisting of three\nmaterials (AC1, astrosil and vacuum) or\ntwo materials (AC1 or astrosil and vacuum).\nIn the first case,\nthe volume fractions of AC1 and astrosil are equal\n($V_{\\rm AC1} \/V_{\\rm total} = V_{\\rm astrosil} \/V_{\\rm total} =\n1\/2 \\, (1 - {\\cal P})$), while in the second case the volume fraction\nof solid material is $1 - {\\cal P}$.\nIt can be seen that the values of $\\kappa$ are generally larger for particles\nwith a larger fraction of vacuum. This is related\nto the decrease of the particle density $\\rho_{\\rm d}$, which is calculated\nas a volume-averaged quantity.\n As the mass of dust in an object is proportional to $\\rho_{\\rm d}$\n(see Eqs.~(\\ref{m}) and (\\ref{kap})), the assumption of porous grains\n can lead to considerably smaller mass estimates.\nNote that the opacities are larger for the more absorbing\ncarbon particles. A similar effect was noted by\nQuinten et al.~(\\cite{qkhm02}), who theoretically studied the wavelength\ndependence of the extinction of different carbonaceous particles.\nThey also showed that the far-IR extinction was larger\nfor clusters of spheres and spheroids than for compact spheres.\nA very large enhancement of the submm opacities was found\nby Ossenkopf \\& Henning~(\\cite{oh94}) in the case of pure\ncarbon aggregates or carbon on silicate grains.\n\nUsing Eq.~(\\ref{m}) and the data from Table~\\ref{t1}, one can see\nhow the particle porosity and structure\ncan influence estimates of the dust mass of an object.\nIn the Rayleigh--Jeans approximation, the mass ratio can be estimated as\n$$\n\\frac{M_{\\rm d} ({\\rm \\mbox{compact}})}{M_{\\rm d} ({\\rm porous})}\n= \\frac{\\kappa_{\\rm porous}(\\lambda) \\, T_{\\rm d, porous}}\n {\\kappa_{\\rm compact}(\\lambda) \\, T_{\\rm d, compact}}.\n$$\nWith grain temperatures from Fig.~\\ref{td-ppp}\nand the values of $\\kappa$ for composite grains and the EMT-Mie theory\n(the third column in Table~\\ref{t1}), we find that the ratio\n$M_{\\rm d} ({\\rm \\mbox{compact}})\/M_{\\rm d} ({\\cal P} = 0.9)$\nlies between $\\sim 1.3$ and $\\sim 1.5$. If the layered-sphere model\nis used (the fourth column in Table~\\ref{t1}), the ratio increases to\n$3.8 - 4.3$. This means that the calculated mass of an object can be reduced\nif compact grains are replaced by porous ones.\n\nThe ratio of dust masses calculated for the two grain models is\n$$\n\\frac{M_{\\rm d} ({\\rm \\mbox{EMT-Mie}})}{M_{\\rm d} ({\\rm layered \\, sphere})}\n= \\frac{\\kappa_{\\rm lay \\, sphere}(\\lambda) \\, T_{\\rm d, lay \\, sphere}}\n {\\kappa_{\\rm EMT-Mie}(\\lambda) \\, T_{\\rm d, EMT-Mie}} \\approx 2.8.\n$$\nThe numerical value was obtained for particles\nconsisting of AC1, astrosil and vacuum with ${\\cal P} = 0.9$\nand the temperature ratio\n$T_{\\rm d, lay \\, sphere}\/T_{\\rm d, EMT-Mie}=0.85$\ndiscussed in Sect.~\\ref{temp}.\nIf we consider particles of the same porosity but consisting of two\nmaterials, the ratio of masses becomes\neven larger (3.1 for AC1 and 4.3 for astrosil).\nThus, one can overestimate the mass of an object\nby a factor of 3 or more if the EMT-Mie model is applied,\nsince real dust grains in molecular cloud cores\nshould be very porous and should have non-Rayleigh inclusions.\nAnother case where the effect can be important is circumstellar\ndiscs: e.g., Takeuchi et al.~(\\cite{tcl05}) used $\\kappa = 0.3\\,{\\rm cm^2\/g}$\nat $\\lambda = 1$\\,mm for highly porous silicate grains,\nwhich is a good approximation only for particles with small-size\ninclusions (see Table~\\ref{t1}). 
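\n\nThe mass ratios quoted above can be checked directly from the ${\\cal P}=0.9$ entries of Table~\\ref{t1} and the temperature ratio of 0.85; the following short sketch (ours, purely for illustration) performs the arithmetic.\n\\begin{verbatim}\n# Sketch: M_d(EMT-Mie) \/ M_d(layered sphere) in the Rayleigh-Jeans limit,\n# from the P = 0.9 opacities (cm^2\/g) in the table and the ratio\n# T_d(layered) \/ T_d(EMT-Mie) = 0.85 quoted in the text.\nkappa_emt = {'AC1 + astrosil': 2.45, 'AC1': 5.94, 'astrosil': 0.431}\nkappa_lay = {'AC1 + astrosil': 8.12, 'AC1': 21.8, 'astrosil': 2.20}\nT_ratio = 0.85\n\nfor mat in kappa_emt:\n    print(mat, round(kappa_lay[mat] \/ kappa_emt[mat] * T_ratio, 1))\n# -> 2.8, 3.1 and 4.3, as quoted above\n\\end{verbatim}\n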
\n\\section{Concluding remarks}\\label{concl}\n\nWe have considered how the porosity of composite cosmic dust grains\ncan affect the optical properties that are important for the interpretation of\nobservations of interstellar, circumstellar and cometary dust.\n Two models of particle structure were used.\n Particles of the first kind had well-mixed inclusions\nsmall in comparison with the wavelength,\nwhile those of the second kind consisted of very thin,\ncyclically repeating layers.\n Earlier we showed that the optical properties of such layered particles\nare close to those of particles with small and large inclusions\n(see Voshchinnikov et al. \\cite{vih05}).\n As effective medium theories give reliable results\nfor particles with small inclusions,\nthe two very different particle structure models can be\neasily realized and extensive computations can be performed.\n\nFor both models, we studied\nhow an increase of the volume fraction of vacuum changes\nthe extinction efficiencies at different wavelengths,\nthe temperature of dust grains, the profiles of the IR silicate bands and\nthe dust millimeter opacities.\n It is found that the models begin to differ\nsubstantially when the porosity exceeds $\\sim 0.5$.\n This difference appears as lower temperatures (Sect.~\\ref{temp}),\nshifted central peaks of the silicate bands\n(Sect.~\\ref{ir_b}) and larger millimeter opacities\n(Sect.~\\ref{opa})\nfor the layered-particle model in comparison with the model based\non EMT calculations.\n The latter model also requires larger dust-phase abundances\nthan the layered model (Sect.~\\ref{st_ext})\nto produce the same interstellar extinction.\n\nThe assumption that interstellar particles have only small-size inclusions\nlooks to some extent artificial\n(excluding, of course, the case of special laboratory samples).\n Therefore, we believe that the layered-sphere model, which describes well\nthe light scattering by very porous quasispherical particles with\ninclusions of different sizes,\nshould find wide application in the interpretation of different phenomena.\n In particular, this model holds good prospects\nfor explaining the flat interstellar extinction observed\nin the near-IR part of the spectrum (Sect.~\\ref{ir_ext}) and the\nvariations of the shape of the silicate feature detected in spectra\nof T~Tau and Herbig Ae\/Be stars\n(the results will be presented in a subsequent paper).\n\n\n\\acknowledgements{\nWe are grateful to Walter Wegner for the opportunity to use unpublished data\nand to Bruce Draine for comments on an earlier version of the paper.\nWe are also grateful to the referee Michael Wolff and the scientific editor\nAnthony Jones for useful suggestions.\nNVV acknowledges the hospitality of the Max-Planck-Institut f\\\"ur Astronomie,\nwhere this work was finished.\nThe work was partly supported by grant 1088.2003.2 of the President of the\nRussian Federation for leading scientific schools.\n}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nThe properties and dynamics of physical systems are closely tied to their symmetries.\nOften these symmetries are known from fundamental principles. 
There are also, however, systems with unknown or emergent symmetries.\nDiscovering and characterizing these symmetries is an essential component of physics research.\n\n\nBeyond their inherent interest, symmetries are also practically useful for increasing the statistical power of datasets for various analysis goals.\nFor example, a dataset can be augmented with pseudodata generated by applying symmetry transformations to existing data, thereby creating a larger training sample for machine learning tasks.\nNeural network architectures can be constructed to respect symmetries (e.g.~convolutional neural networks and translation symmetries~\\cite{6795724}), in order to improve generalization and reduce the number of model parameters.\nFurthermore, symmetries can significantly increase the size of a useful synthetic dataset created from a generative model trained on a limited set of examples~\\cite{2008.06545}.\n\n\nDeep learning is a powerful tool for identifying patterns in high-dimensional data and is therefore a promising technique for symmetry discovery.\nA variety of deep learning methods have been proposed for symmetry discovery and related tasks.\nNeural networks can parametrize the equations of motion for physical systems, which can have conserved quantities resulting from symmetries~\\cite{greydanus2019hamiltonian,cranmer2020lagrangian}.\nGeneric neural networks targeting classification tasks can encode symmetries in their hidden layers~\\cite{Barenboim:2021vzh,Krippendorf:2020gny}. %\nThis possibility can be used to actively learn symmetries by encoding a shared equivariance in hidden layers across learning tasks~\\cite{zhou2021metalearning}.\nDirectly learning symmetries can be framed as an inference problem given access to parametric symmetry transformations of the same dataset~\\cite{benton2020learning}.\nA given symmetry can be identified in data if a classifier is unable to distinguish a dataset from its symmetric counterpart~\\cite{Tombs:2021wae,Lester:2021kur,Lester:2021aks} (similar to anomaly detection methods comparing data to a reference~\\cite{Collins:2018epr,Collins:2019jip,DAgnolo:2018cun}).\nAnother class of targeted approaches can be found in the domain of automatic data augmentation.\nIf a dataset can be augmented without changing its statistical properties, then one has learned a symmetry. Significant advances in this area have used reinforcement learning~\\cite{cubuk2019autoaugment,lim2019fast}. 
\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.45\\textwidth]{Images\/Plots\/SchematicSymmetry2.pdf}\n \\caption{\n A schematic diagram of (top) the training setup for a usual GAN and (bottom) the SymmetryGAN variation discussed in this paper for automatically discovering symmetries.\n %\n Here, $g$ is the generator and $d$ is the discriminator.\n %\n Not represented here is the incorporation of the inertial reference dataset.\n %\n In our numerical examples, this is accomplished by directly imposing constraints on $g$.\n }\n \\label{fig:schematic}\n\\end{figure}\n\n\nAn alternative symmetry discovery approach that is flexible, fully differentiable, and simple is based on generative models~\\cite{hataya2019faster,antoniou2018data}.\nUsually, a generative model is a function that maps random numbers to structured data.\nFor example, a deep generative surrogate model can be trained such that the resulting probability density matches that of a target dataset.\nFor symmetry discovery, by contrast, the random numbers are replaced with the target dataset itself.\nIn this way, a well-trained generator designed to confound an adversary will implement a symmetry transformation. \nWe call this generative model framework for symmetry discovery \\emph{SymmetryGAN}, since it has the same basic training strategy as a generative adversarial network (GAN)~\\cite{Goodfellow:2014upx}, as shown in \\Fig{schematic}.\n\n\nIn this paper, we extend the SymmetryGAN approach and introduce it to the physics community.\nIn particular, we build a rigorous statistical framework for describing the symmetries of a dataset and construct a learning paradigm for automatically detecting generic symmetries.\nThe key idea is that symmetries of a target dataset have to be defined with respect to an \\emph{inertial} reference dataset, analogous to inertial frames in classical mechanics.\nOur deep learning setup is simpler than existing approaches and we develop an analytic understanding of the algorithm's performance in simple cases.\nThis in turn allows us to understand the dynamics of the machine learning as it trains from a random initialization to an element of the symmetry group.\n\n\n\n The rest of this paper is organized as follows.\n %\n In \\Sec{stats}, we build a rigorous statistical framework for discovering the symmetries of a dataset, contrasting it with discovering the symmetries of an individual data element.\n %\n Our machine learning approach with an inertial restriction is introduced in \\Sec{MLa} and the deep learning implementation is described in \\Sec{ML}.\n %\n Empirical studies of simple Gaussian examples, including both analytic and numerical results, are presented in \\Sec{results}.\n %\n We then apply our method to a high energy physics dataset in \\Sec{hepexample}.\n %\n In \\Sec{inference}, we discuss possible ways to go beyond symmetry discovery and towards symmetry inference, with further studies in \\App{symmetry_discovery_map}.\n %\n Our conclusions and outlook are in \\Sec{conclusions}.\n\n\n\n\\section{Statistics of Symmetries}\n\\label{sec:stats}\n\nWhat is a symmetry?\nLet $X$ be a random variable on an open set $\\O\\subseteq\\mathbb{R}^n$, and let $x$ be an instantiation of $X$.\nWhen we refer to the symmetry of an individual data element $x \\in X$, we usually mean a transformation $h:\\O\\rightarrow\\O$ such that:\n\\begin{equation}\n h(x) = x,\n\\end{equation}\ni.e.\\ $x$ is invariant to the transformation $h$.\nMore generally, we can consider functions of individual 
data elements, $f:\\O\\subseteq\\mathbb{R}^n\\rightarrow\\O'\\subseteq\\mathbb{R}^m$.\nIn that case, the function is symmetric if\n\\begin{equation}\n\\label{eq:functionsymmetry}\n f(h(x)) = f(x),\n\\end{equation}\ni.e.\\ the output of $f$ is invariant to the transformation $h$ acting on $x$.\nOne can also consider equivariances, where the output of $f$ has well-defined transformation properties under the symmetry~\\cite{Dolan:2020qkr,serviansky2020set2graph,Bogatskiy:2020tje,Shimmin:2021pkm}.\nWhile symmetries acting on individual data elements are interesting, they are \\emph{not} the focus of this paper.\n\n\nWe are interested in the symmetries of a dataset as a whole, treated as a statistical distribution.\nLet $X$ be governed by the probability density function (PDF) $p$.\nNaively, a symmetry of the dataset $X$ is a map $g:\\mathbb{R}^n\\rightarrow\\mathbb{R}^n$ such that $g$ preserves the PDF:\n\\begin{equation}\n\\label{eq:naivesymmetry}\np(X = x) = p(X = g(x)) \\, |g'(x)|,\n\\end{equation}\nwhere $|g'(x)|$ is the Jacobian determinant of $g$.\nWhile it is necessary that any candidate symmetry preserves the probability density, it is not sufficient, at least not in the usual way that physicists think about symmetries.\n\n\nConsider the simple case of $n=1$.\nLet $F$ be the cumulative distribution function (CDF) of $X$.\n$F(X)$ is itself a random variable satisfying \n\\begin{equation}\n F(X)\\sim\\mathcal{U}[0,1],\n\\end{equation}\nwhere $\\mathcal{U}(\\O)$ is the uniform random variable on $\\O$.\nConversely, $F^{-1}(\\mathcal{U}[0,1])$ is a random variable governed by the PDF $p$ (for technical details, see~\\Ref{10.2307\/2132726}).\nThe uniform distribution on the interval $[0,1]$ has many PDF-preserving maps, such as the quantile inversion map:\n\\begin{equation}\n\\widetilde{g}(x)=1-x.\n\\end{equation}\nThis map has the additional property that $\\widetilde{g}^2(x)=x$, so it appears to represent a $\\mathbb{Z}_2$ (i.e.~parity) symmetry.\nUsing the CDF map from above, every probability density $p$ admits a $\\mathbb{Z}_2$ PDF-preserving map:\n\\begin{equation}\n\\label{eq:PDFpreserveZ2}\n g=F^{-1}\\circ\\widetilde{g}\\circ F.\n\\end{equation}\n\n\nIf we were to accept \\Eq{naivesymmetry} as the definition of a symmetry, then \\emph{all} one-dimensional random variables would have a $\\mathbb{Z}_2$ symmetry, namely the one in \\Eq{PDFpreserveZ2}.\nWhile true in a technical sense, this is not what physicists (or, to our knowledge, any domain experts) think of as a symmetry of a dataset.\nThe precise definition of a symmetry must therefore be stricter than simply PDF-preserving.\nIn particular, while this $\\mathbb{Z}_2$ PDF-preserving map applies to every one-dimensional random variable, it requires a different map for each such variable.\nWhen we usually think about symmetries, we imagine common maps that can be applied to a variety of physical systems that share the same underlying symmetry structure.\n\n\n\n\nThis line of thinking suggests a sharper definition of a symmetry that makes use of a reference distribution.\nConsider two probability densities\n\\begin{equation}\n\\label{eq:twoPDFs}\np:\\mathbb{R}^n\\rightarrow\\mathbb{R}, \\quad p_I:\\mathbb{R}^n\\rightarrow\\mathbb{R}.\n\\end{equation}\nA map $g:\\mathbb{R}^n\\rightarrow\\mathbb{R}^n$ is defined to be a symmetry of $p$ \\emph{relative} to $p_I$ if it is PDF-preserving for both $p$ and $p_I$:\n\\begin{equation}\n\\label{eq:improvedsymmetry}\np(x) = p(g(x)) \\, |g'(x)|, \\quad p_I(x) = p_I(g(x)) \\, |g'(x)|.\n\\end{equation}
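\n\nAs a concrete check of these two conditions, the following minimal sketch (ours, purely illustrative) numerically tests whether a candidate map is PDF-preserving for both a target density and a uniform inertial density: the reflection $g(x)=1-x$ passes both tests for a Gaussian with mean $0.5$, while the quantile map $F^{-1}\\circ\\widetilde{g}\\circ F$ of an exponential random variable preserves $p$ but fails for the inertial density.\n\\begin{verbatim}\n# Sketch: numerically test p(x) == p(g(x)) |g'(x)| on a few probe points.\nimport numpy as np\nfrom scipy.stats import norm, expon\n\nx = np.linspace(0.1, 4.0, 9)        # positive probes (for the exponential)\n\ndef preserves(pdf, g, dg):\n    return np.allclose(pdf(x), pdf(g(x)) * np.abs(dg(x)))\n\n# A genuine symmetry of p = N(0.5, 1): the reflection g(x) = 1 - x.\np   = lambda t: norm.pdf(t, loc=0.5)\nuni = lambda t: np.ones_like(t)     # (improper) uniform inertial density\ng   = lambda t: 1.0 - t\ndg  = lambda t: -np.ones_like(t)\nprint(preserves(p, g, dg), preserves(uni, g, dg))          # True True\n\n# The quantile map for an exponential: PDF-preserving but 'fake'.\nq   = lambda t: expon.ppf(1.0 - expon.cdf(t))              # F^-1(1 - F(x))\ndq  = lambda t: -expon.pdf(t) \/ expon.pdf(q(t))            # F(q(x)) = 1-F(x)\nprint(preserves(expon.pdf, q, dq), preserves(uni, q, dq))  # True False\n\\end{verbatim}\n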
The reference or \\textit{inertial} density $p_I$ is the analogue of an inertial reference frame in classical mechanics.\nThis new definition of a symmetry will typically exclude quantile maps, like $\\widetilde{g}$ above, because the $\\widetilde{g}$ that works for one random variable will typically not work for another (e.g.\\ Gaussian and exponential random variables).\n\n\nWhile this new definition solves the problem of ``fake'' symmetries, it also introduces a dependence on the inertial distribution. \nJust as with inertial reference frames, however, there is often a canonical choice for $p_I$ which reduces the number of possibilities in practice.\nA natural choice for many physics datasets is to pick the uniform distribution on $\\mathbb{R}^{n}$, where $n$ is the dimension of the dataset.\nThis is not a proper (i.e.\\ normalizable) probability density, though, so we discuss techniques below to use it as the inertial distribution nonetheless.\n\n\nFinally, it is instructive to relate the definitions of symmetries for datasets and functions.\nGiven the two PDFs in \\Eq{twoPDFs}, we can construct the likelihood ratio\n\\begin{equation}\n \\ell(x) \\equiv \\frac{p(x)}{p_I(x)}.\n\\end{equation}\nApplying the symmetry map $g$ as in \\Eq{improvedsymmetry}, the likelihood ratio transforms as:\n\\begin{equation}\n \\ell(g(x)) = \\frac{p(g(x))}{p_I(g(x))} = \\frac{p(x)}{p_I(x)} = \\ell(x),\n\\end{equation}\nwhere the Jacobian factor $|g'(x)|$ cancels between the numerator and denominator.\nTherefore the likelihood ratio, which is an ordinary function, is symmetric by the definition in \\Eq{functionsymmetry}.\nThis cancelling of the Jacobian factor is an intuitive way to understand why an inertial reference density is necessary to define the symmetry of a dataset.\n\n\n\n\\section{Machine Learning with Inertial Restrictions}\n\\label{sec:MLa}\n\nThe SymmetryGAN paradigm for discovering symmetries in a dataset involves simultaneously learning two functions:\n\\begin{align}\n g:\\mathbb{R}^n &\\rightarrow\\mathbb{R}^n,\\\\\n d:\\mathbb{R}^n &\\rightarrow[0,1].\n\\end{align}\nThe function $g$ is a \\emph{generator} that represents the symmetry map.%\n\\footnote{Here, we are using the machine learning meaning of a ``generator'', which differs from the generator of a symmetry group, though they are closely related.}\nThe function $d$ is a \\emph{discriminator} that tries to distinguish the input data $\\{x_i\\}$ from the transformed data $\\{g(x_i)\\}$.\nWhen the discriminator cannot distinguish the original data from the transformed data, then $g$ will be a symmetry.\nThe technical details of this approach are provided in \\Sec{ML} using the framework of adversarial networks.\n\n\nAs described in \\Sec{stats}, it is not sufficient to require that $g$ preserves the PDF of the input data; it must also preserve the inertial density.\nThere are several methods to implement an inertial restriction into the machine learning strategy.\n\\begin{itemize}\n \\item \\emph{Simultaneous discrimination:}\n %\n In this method, the discriminator $d$ is applied both to the input dataset and to data drawn from the inertial density $p_I$.\n %\n The training procedure penalizes any map $g$ that does not fool $d$ for both datasets.\n %\n In practice, it might be advantageous to use two separate discriminators $d$ and $d_I$ for this approach.\n %\n \\item \\emph{Two stage selection:}\n %\n Here, one first identifies all PDF-preserving maps $g$.\n %\n Then one \\textit{post hoc} 
selects the ones that also preserve the inertial density.\n %\n \\item \\emph{Upfront restriction:}\n %\n If the PDF-preserving maps of $p_I$ are already known, then one could restrict the set of maps $g$ at the outset.\n %\n This allows one to perform an unconstrained optimization on the restricted search space.\n %\n\\end{itemize}\n\n\n\nEach of these methods has advantages and disadvantages.\nThe first two options require sampling from the inertial density $p_I$.\nThis is advantageous in cases where the symmetries of the inertial density are not known analytically.\nWhen $p_I$ is uniform on $\\mathbb{R}^n$ or another unbounded domain, though, these approaches are not feasible.%\n\\footnote{One could try to leverage approximate strategies, such as cutting off the support for $p_I$ a few standard deviations away from the mean of $p$. Still, one can run into edge effects if there is a mismatch between the domain and range of $g$.}\nThe second option is computationally wasteful, as the space of PDF-preserving maps is generally much larger than the space of symmetry maps.\nWe focus on the third option: restricting the set of functions $g$ to be automatically PDF-preserving for $p_I$.\nThis in turn requires a way to parametrize all such $g$, or at least a large subset of them.\n\nFor all of the studies in this paper, we further focus on the case where the inertial distribution $p_I$ is uniform on $\\mathbb{R}^n$.\nFor any open set $\\O\\subseteq\\mathbb{R}^n$, a differentiable function $g:\\O\\to \\O$ preserves the PDF of the uniform distribution $\\mathcal{U}(\\O)$ if and only if $g$ is an equiareal map.%\n\\footnote{By carefully taking suitable limits, these ideas go through even if $\\mathcal{U}(\\O)$ is an improper prior. The important takeaway is that uniform distributions are preserved by equiareal maps.}\nTo see this, note that the PDF of $X\\sim\\mathcal{U}(\\O)$ is $p(X = x) = 1\/\\operatorname{Vol}(\\O)$.\nHence, the PDF-preserving condition $p = p\\circ g\\cdot |g'|$ is met if and only if $|g'| = 1$.\nA map is equiareal if and only if its Jacobian determinant is $1$, which proves our claim.\nTherefore, our search space to discover symmetries of physics datasets will be the space of equiareal maps of appropriate dimension.\nOf course, there are interesting physics symmetries that do not preserve uniform distributions on $\\mathbb{R}^n$; these would require an alternative approach.\n\n\nThe set of equiareal maps for $n > 1$ is not well characterized.\nFor example, even for $n = 2$, not all equiareal maps are linear.\nA simple example of a non-linear area-preserving map is the H\\'{e}non map~\\cite{10.2307\/43635985}: $g(x,y)=(x,y-x^2)$.\nThis makes the space of equiareal maps difficult to directly encode into the learning.\nWhile the general set of equiareal maps is difficult to parametrize, the set of area-preserving affine maps on $\\mathbb R^n$ is well understood:\n\\begin{multline*}\n \\mathbb{A} SL^\\pm_n(\\mathbb{R}) = \\{g: \\mathbb{R}^n\\to\\mathbb{R}^n \\; | \\; g(x) = Mx + V,\\\\ M\\in \\mathbb{R}^{n\\times n}, \\det M = \\pm 1, V\\in \\mathbb{R}^n\\}.\n\\end{multline*}\nThis is a subgroup of the general affine group $\\operatorname{Aff}_n(\\mathbb{R})$, and it can be characterized as a topological group of dimension $n(n+1) - 1$.\nThese maps even have complete parametrizations, such as the \\textit{Iwasawa decomposition}~\\cite{10.2307\/1969548}, which significantly aid the symmetry discovery process.\n\n\nNot all symmetries are linear, however, and if one chooses $\\mathbb{A} SL^\\pm_n(\\mathbb{R})$ as the search space, one cannot discover non-linear maps.
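\nAs a quick symbolic check (ours, for illustration), the H\\'{e}non map quoted above is indeed equiareal yet non-linear, so it lies outside any such linear search space:\n\\begin{verbatim}\n# Sketch: g(x, y) = (x, y - x^2) has unit Jacobian determinant everywhere\n# (area-preserving) but is manifestly non-linear.\nimport sympy as sp\n\nx, y = sp.symbols('x y')\ng = sp.Matrix([x, y - x**2])\nprint(g.jacobian(sp.Matrix([x, y])).det())   # -> 1 for all (x, y)\n\\end{verbatim}\n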
Even so, the subset of symmetries discoverable within $\\mathbb{A} SL^\\pm_n(\\mathbb{R})$ is rich enough, and the benefits of having a known parameterization valuable enough, that we focus on linear symmetries in this paper and leave the study of non-linear symmetries to future work.\n\n\n\\section{Deep Learning Implementation}\n\\label{sec:ML}\n\n\nTo implement the SymmetryGAN procedure, we modify the learning setup of a GAN~\\cite{Goodfellow:2014upx}.\nFor a typical GAN, a generator function $g$ surjects a latent space onto a data space.%\n\\footnote{While all the GANs discussed here are (approximately) bijective, GANs in general need not be. Symmetry discovery requires the generator to be bijective, so one may want to leverage normalizing flows~\\cite{10.5555\/3045118.3045281,Kobyzev2020} in future work.}\nThen, a discriminator distinguishes generated examples from target examples.\n\n\nFor a SymmetryGAN, the latent probability density is the \\emph{same} as the target probability density, as illustrated in \\Fig{schematic}.\nThe generator $g$ and discriminator $d$ are parametrized as neural networks.\nThey are then trained simultaneously to optimize the binary cross entropy loss functional:\n\\begin{align}\n\\label{eq:numericloss}\n L[g,d]=-\\frac1N\\sum_{x\\in\\{x_i\\}_{i=1}^N}\\Big[\\log\\big(d(x)\\big) + \\log\\big(1-d(g(x))\\big)\\Big]\\,.\n\\end{align}\nThis differs from the usual binary cross entropy in that the same samples appear in the first and second terms.\nA similar structure appears in neural resampling~\\cite{Nachman:2020fff} and in step 2 of the \\textsc{OmniFold} algorithm~\\cite{Andreassen:2019cjw}.\nFollowing \\Sec{MLa}, we assume that the generator $g$ already preserves the inertial distribution. \n\n\nThe behavior of \\Eq{numericloss} can be understood analytically by considering the limit of infinite data:\n\\begin{align}\\nonumber\n L[g,d]&=-\\int \\Big[\\log\\big(d(x)\\big) \\, p(x)\\\\\\label{eq:loss}\n &\\qquad+\\log\\big(1-d(g(x))\\big)\\, p(g(x))\\, |g'(x)|\\Big]\\dd x\\,,\n\\end{align}\nwhere the Jacobian factor $|g'(x)|$ is now made manifest.\nThe discriminator tries to minimize this loss with respect to $d$, while the generator tries to maximize it with respect to $g$.\nFor a fixed $g$, the optimal $d$ is the usual result from binary classification (see e.g.\\ \\Ref{hastie01statisticallearning,sugiyama_suzuki_kanamori_2012}):\n\\begin{align}\n\\label{eq:optimalf}\n d_*=\\frac{p(x)}{p(x)+p(g(x)) \\, |g'(x)|}\\,,\n\\end{align}\nwhich is the ratio of the probability density of the first term in \\Eq{loss} to the sum of the densities of both terms.\nInserting $d_*$ into \\Eq{loss} and optimizing using the Euler-Lagrange equation:\n\\begin{equation}\n\\frac{\\delta L[g,g']}{\\delta g}=\\frac{\\partial L}{\\partial g}-\\frac{\\dd}{\\dd x} \\frac{\\partial L}{\\partial g'} = 0,\n\\end{equation}\none can show that the optimal $g$ satisfies\n\\begin{equation}\np(x)=p(g_*(x)) \\, |g_{*}'(x)|,\n\\end{equation}\ni.e.\\ $g$ is PDF-preserving as in \\Eq{naivesymmetry}.\nFor such a $g$, we have that $d_* = \\frac12$, the loss is maximized at a value of $2 \\log 2$, and the discriminator is maximally confounded. 
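\n\nFor concreteness, a minimal implementation of this adversarial setup is sketched below (in \\textsc{Keras}\/\\textsc{Tensorflow}; the hyperparameters and the explicit gradient-ascent step for $g$ are our illustrative choices, not a prescription), here for a one-dimensional dataset and a linear generator.\n\\begin{verbatim}\n# Sketch: SymmetryGAN for 1D data with a linear generator g(x) = b + c x.\nimport numpy as np\nimport tensorflow as tf\n\nx = np.random.normal(0.5, 1.0, size=(128, 1)).astype('float32')\n\nc = tf.Variable(np.random.uniform(-5, 5), dtype=tf.float32)\nb = tf.Variable(np.random.uniform(-5, 5), dtype=tf.float32)\n\nd = tf.keras.Sequential([            # discriminator: two hidden layers of 25\n    tf.keras.layers.Dense(25, activation='relu', input_shape=(1,)),\n    tf.keras.layers.Dense(25, activation='relu'),\n    tf.keras.layers.Dense(1, activation='sigmoid')])\n\nopt_g = tf.keras.optimizers.Adam(1e-3)\nopt_d = tf.keras.optimizers.Adam(1e-3)\n\nfor step in range(5000):\n    with tf.GradientTape(persistent=True) as tape:\n        gx = b + c * x               # the *same* samples, transformed by g\n        loss = -tf.reduce_mean(tf.math.log(d(x) + 1e-8)\n                               + tf.math.log(1.0 - d(gx) + 1e-8))\n    # d descends the loss; g ascends it (negated gradients)\n    opt_d.apply_gradients(zip(tape.gradient(loss, d.trainable_variables),\n                              d.trainable_variables))\n    dc, db = tape.gradient(loss, [c, b])\n    opt_g.apply_gradients([(-dc, c), (-db, b)])\n    del tape\n\\end{verbatim}\nFor the Gaussian dataset chosen here, the learned $(b, c)$ should approach either the identity or the reflection about the mean, depending on the initialization.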
\n\n\nThe SymmetryGAN approach has the potential to find any symmetry representable by $g(x)$.\nTo target a particular symmetry subgroup, $G \\leq \\mathbb{A} SL_n^\\pm(\\mathbb{R})$, we can add a term to the loss function.\nFor example, to discover a cyclic symmetry group, $G = \\mathbb{Z}_q, q\\in\\mathbb N$, the loss function can be augmented with a mean squared error term:\n\\begin{align}\n\\label{eq:cyclicloss}\n L[g,d] = L_\\text{BCE}[g,d]-\\frac\\alpha N\\sum_{x\\in\\{x_i\\}_{i=1}^N}(g^q(x) - x)^2,\n\\end{align}\n %\n %\nwhere $L_\\text{BCE}$ is the binary cross entropy loss in \\Eq{numericloss}, $g^q$ is $g$ composed with itself $q$ times, and $\\alpha>0$ is a weighting hyperparameter.\nA SymmetryGAN with this loss function will discover the largest subgroup of $G$ that is a symmetry of the dataset.\n\n\n\\section{Empirical Gaussian Experiments}\n\\label{sec:results}\n\nIn this section, we study the SymmetryGAN approach both analytically and numerically in a variety of simple Gaussian examples.\nFor the empirical studies here and in \\Sec{hepexample}, all neural networks are implemented using \\textsc{Keras}~\\cite{keras} with the \\textsc{Tensorflow} backend~\\cite{tensorflow} and optimized with \\textsc{Adam}~\\cite{adam}.\nThe generator function $g$ is parametrized as a linear function, with constraints that vary by example and are described further below.\nThe discriminator function $d$ is parametrized with two hidden layers, using 25 nodes per layer.\nRectified Linear Unit (ReLU) activation functions are used for the intermediate layers and a sigmoid function is used for the last layer.\nFor the empirical studies, $128$ events are generated for each example.\n\n\\subsection{One-Dimensional Gaussian}\n\\label{sec:1d_example}\n\nOur first example involves data drawn from a one-dimensional Gaussian distribution with a $\\mathbb{Z}_2$ reflection symmetry.\nData are distributed according to the probability distribution $\\mathcal N(0.5, 1.0),$ i.e.\\ a Gaussian with $\\mu = 0.5$ and $\\sigma^2 = 1.0$.\nThis distribution has precisely two symmetries, both linear:\n\\begin{equation}\n\\label{eq:1D_minima}\ng(x) = x, \\qquad g(x) = 1-x.\n\\end{equation}\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{Images\/Plots\/Z2analytic.pdf}\n \\caption{\n %\n The analytic loss landscape in the slope ($c$) vs.\\ intercept ($b$) space for the one-dimensional Gaussian example.\n %\n The two maxima are indicated by stars.}\n \\label{fig:Z2analytic}\n\\end{figure}\n\n\nImplicitly, we are taking the inertial distribution to be uniform on $\\mathbb{R}$.\nAs stated earlier, the PDF-preserving maps of $\\mathcal{U}(\\mathbb{R})$ are equiareal.\nIn one dimension, the only equiareal maps are linear.\nLinear maps in one dimension are defined by two numbers, so the generator function can be parametrized as\n\\begin{equation}\n \\label{eq:linear_form}\n g(x) = b + c \\, x. 
\n\\end{equation}\nIn \\Fig{Z2analytic}, we show the analytically computed loss from \\Eq{loss} as a function of $b$ and $c$.\nIn this figure, the discriminator $d$ is taken to be the analytic optimum in \\Eq{optimalf}.\nThere are two maxima in the loss landscape, one corresponding to each of the linear symmetries from \\Eq{1D_minima}.\nHere, and in most subsequent examples below, we have shifted the output such that the maximum loss value is $0$.\n\n\nAnother interesting feature of the loss landscape is the deep minimum at $c=0$ that divides the space into two parts.\nThis gives rise to the prediction that, under gradient descent, the neural network will find $g(x)= 1-x$ when $c$ is initialized negative and find $g(x) = x$ when $c$ is initialized positive.\nIn the edge case when $c$ is initialized to precisely zero, the generator is degenerate (it is no longer even bijective) and the outcome is indeterminate; the probability of sampling $c$ to be precisely zero is, of course, zero.\nFor the rest of the paper, we ignore such edge cases.\nThere are no such features in the loss landscape as a function of $b$, suggesting that there should be little dependence on the initial value of $b$.\n\n\\begin{figure*}[t]\n \\centering\n \\subfloat[]{\\includegraphics[width=0.31\\textwidth]{Images\/Plots\/b_fc_f.pdf}\n \\label{fig:Z2numeric_i}}\n $\\quad$\n \\subfloat[]{\\includegraphics[width=0.31\\textwidth]{Images\/Plots\/c_ic_f.pdf}\n \\label{fig:Z2numeric_ii}}\n $\\quad$\n \\subfloat[]{\\includegraphics[width=0.31\\textwidth]{Images\/Plots\/b_ib_f.pdf}\n \\label{fig:Z2numeric_iii}}\n \\caption{\n %\n The empirical symmetry discovery process for the one-dimensional Gaussian example.\n %\n The initial parameters have a subscript $i$ and the final parameters have a subscript $f$.\n %\n (i) Final slope ($c_f$) vs.\\ final intercept ($b_f$), showing that the network finds the two maxima.\n %\n (ii) Final slope ($c_f$) vs.\\ initial slope ($c_i$), showing the phase transition at $c_i = 0$.\n %\n (iii) Final intercept ($b_f$) vs.\\ initial intercept ($b_i$), showing the independence from $b_i$.}\n \\label{fig:Z2numeric}\n\\end{figure*}\n\n\n These predictions are tested empirically in \\Fig{Z2numeric}, where the initialized parameters are $(b_i,c_i)\\sim \\mathcal{U}([-5, 5]^2)$ and the learned parameters are $(b_f,c_f)$.\n %\n In \\Fig{Z2numeric_i}, there are distinct clusters at $(b_f, c_f) = (0, 1)$ and $(1, -1)$,\n showing that the SymmetryGAN correctly finds both symmetries of the distribution and nothing else.\n %\n In \\Fig{Z2numeric_ii}, there is a demonstration of the loss barrier in slope space; if the initial slope is positive, the final slope is $+1$, whereas if the initial slope is negative, the final slope is $-1$.\n %\n Finally, \\Fig{Z2numeric_iii} shows the absence of a loss barrier in intercept space; the final intercepts are scattered between $0$ and $1$ independent of the initialized intercept. 
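\n\n The analytic landscape of \\Fig{Z2analytic} is straightforward to reproduce numerically; the short sketch below (ours, with arbitrary grid choices) evaluates \\Eq{loss} with the optimal discriminator of \\Eq{optimalf} substituted in.\n\\begin{verbatim}\n# Sketch: analytic SymmetryGAN loss for N(0.5, 1) with g(x) = b + c x,\n# using the optimal discriminator d* = p \/ (p + q), q = p(g(x)) |g'(x)|.\nimport numpy as np\nfrom scipy.stats import norm\n\nx = np.linspace(-8.0, 9.0, 4001)\n\ndef loss(b, c):\n    p = norm.pdf(x, loc=0.5)\n    q = norm.pdf(b + c * x, loc=0.5) * abs(c)\n    d = p \/ (p + q)\n    integrand = -(np.log(d) * p + np.log(1.0 - d) * q)\n    return np.trapz(integrand, x) - 2.0 * np.log(2.0)  # shift maximum to 0\n\nprint(loss(0.0, 1.0), loss(1.0, -1.0))   # the two symmetries: both ~0\nprint(loss(0.5, 0.3))                    # a generic point: negative\n\\end{verbatim}\n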
\n %\n We discuss further the \\emph{symmetry discovery map} from initialized to learned parameters in \\Sec{symmetry_discovery_map} and \\App{symmetry_discovery_map}.\n \n\\begin{figure}\n \\centering\n \\includegraphics[width=0.45\\textwidth]{Images\/Plots\/restrictedZ2.pdf}\n \\caption{\n The analytic loss landscape for the restricted generator $g(x) = 1 + cx$, with two local maxima at $c = -1$ and $c = 0.5$.}\n \\label{fig:Z2restricted}\n\\end{figure}\n \nIn the above example, the parameterization of $g$ was sufficiently flexible that the SymmetryGAN could find both symmetries and the loss landscape had no other maxima.\nIf the space is incompletely parameterized, though, then local maxima can manifest as false symmetries.\nFor example, suppose instead of a two-parameter $g$ as above, $g$ were parameterized as $g(x) = 1 + c\\, x$.\nThe corresponding analytic loss landscape is shown in \\Fig{Z2restricted}.\nA SymmetryGAN initialized with a negative slope correctly finds the only symmetry of this form, $g(x) = 1 - x$, but a neural network initialized with positive slope is unable to cross over the loss barrier at $c = 0$ and instead settles at the locally loss-maximizing $g(x) = 1 + 0.5 \\,x$.\nWhile our investigations of $\\mathbb{A} SL_n^\\pm(\\mathbb{R})$ suggest that this does not happen with the full parametrization, the topology of the set of equiareal maps is not known and therefore obstructions like the one illustrated here are possible.\nIt is always possible to check if a solution is a symmetry, however.\nSpecifically, one can apply the learned function to the data and train a \\textit{post hoc} discriminator to ensure that its performance is equivalent to random guessing.\nFor an analytic symmetry, we know that at the point of loss maximization $p = p\\circ g\\cdot |g'|$, and consequently $d = \\frac{p}{p + p\\circ g\\cdot |g'|} = \\frac12$.\nHence, at the global (symmetry) maxima, $L = - \\frac1N\\sum_{x_i}\\qty[\\log d + \\log (1 - d\\circ g)] = 2\\log2$.\nOn the other hand, there is no way for the neural network to get stuck at non-symmetry local maxima with $L = 2 \\log 2$.\nThus, the true symmetries can be distinguished from local optima by checking the value of the loss.\n\n\\begin{figure*}[t]\n \\centering\n \\subfloat[]{\\includegraphics[width=0.45\\textwidth]{Images\/Plots\/SO2symmAnalytic.pdf} \\label{fig:SO2_i}}\n $\\quad$\n \\subfloat[]{\\includegraphics[width=0.45\\textwidth]{Images\/Plots\/4O2asymmAnalytic.pdf}\n \\label{fig:SO2_iii}}\n %\n \\caption{\n The analytic loss landscapes overlaid with empirically discovered symmetries for the two-dimensional Gaussian examples with the generator restriction in \\Eq{rotation}.\n %\n (i) The Gaussian $N_{1,1}$ with uniform covariance, which has loss maxima on the unit circle $c^2 + s^2 = 1$.\n %\n(ii) The Gaussian $N_{1,2}$ whose covariance matrix has non-equal diagonal elements, which only has symmetries at $c = \\pm 1$ and $s = 0$.\n }\n \\label{fig:SO2}\n\\end{figure*}\n\n\\subsection{Two-Dimensional Gaussian}\n\\label{sec:2d_example}\n\nNext, we consider cases of two-dimensional Gaussian random variables.\nThese examples offer much richer symmetry groups for study as well as a greater scope for variations.\nWe take the inertial distribution to be uniform on $\\mathbb{R}^2$.\n\n\nWe start with the standard normal distribution in two dimensions,\n\\begin{equation}\nN_{1,1}\\equiv\\mathcal{N}\\qty(\\vec{0},\\mathbbm{1}_2),\n\\end{equation}\nwhere $\\mathbbm{1}_n$ is the $n\\times n$ identity matrix.\nThis distribution has 
as its linear symmetries all rotations about the origin and all reflections about lines through the origin, which constitute the group $O(2)$.\nFor further exploration, we consider a two-dimensional Gaussian with covariance not proportional to the identity,\n\\begin{equation}\nN_{1,2} \\equiv \\mathcal{N}\\qty(\\vec{0},\\mqty[1&0\\\\0&2]).\n\\end{equation}\nThe symmetry group of this distribution is quite complicated and described below.\nAmong other features, it contains the Klein 4--group, $V_4 = \\qty{\\mathbbm{1}, -\\mathbbm{1}, \\sigma_3, -\\sigma_3}$, for Pauli matrix $\\sigma_3$.\n\n\n\nThe linear search space that preserves $\\mathbb{R}^2$, the general affine group in two dimensions, $\\operatorname{Aff}_2(\\mathbb{R}) = \\mathbb{A} GL_2(\\mathbb{R})$, has six real parameters.\nBefore exploring the entire space, we first examine the subspace:\n\\begin{align}\n\\label{eq:rotation}\ng(X) = \\mqty[c&s\\\\-s&c]\\,X,\n\\end{align}\nfor $c, s\\in \\mathbb{R}^\\times$, where $\\mathbb{R}^\\times$ is the set of non-zero real numbers.\nWhile this is only a rotation if $c^2+s^2=1$, we want to test if a SymmetryGAN can discover this relationship starting from this more general representation.\nThe symmetries represented by \\Eq{rotation} are a subgroup of $GL_2(\\mathbb{R})$: $SO(2)\\times \\mathbb{R}^+ = \\ev{\\theta, r|\\theta\\in [0, 2\\pi), r\\in\\mathbb{R}^+}$, where $\\mathbb{R}^+$ is the set of positive real numbers.\nFor the $ N_{1,1}$ Gaussian, this means looking for the $r = 1$ subgroup, which is indicated by the red circle in the loss landscape in \\Fig{SO2_i}.\nTo test the SymmetryGAN, we sample the parameters $c$ and $s$ uniformly at random from $[-1, 1]^2$, and the learned $c$ and $s$ values correspond to the expected $SO(2)$ unit circle, also shown in \\Fig{SO2_i}.\nWe repeat this exercise for the $ N_{1,2}$ Gaussian in \\Fig{SO2_iii}, where the SymmetryGAN discovers the $\\mathbb{Z}_2$ subgroup of $V_4$ generated by a rotation by $\\pi$.\n\n\n\\begin{figure*}[p]\n \\centering\n \\subfloat[]{\\includegraphics[width=0.31\\textwidth]{Images\/Plots\/C2MSE.pdf}\n \\label{fig:MSE_i}\n }\n $\\quad$\n \\subfloat[]{\\includegraphics[width=0.31\\textwidth]{Images\/Plots\/C3MSE.pdf}}\n $\\quad$\n \\subfloat[]{\\includegraphics[width=0.31\\textwidth]{Images\/Plots\/C7MSE.pdf}}\\\\\n\n \\caption{The analytic loss landscapes overlaid with empirically discovered symmetries for the $N_{1,1}$ example with a cyclic-enforcing term added to the loss, to be compared to \\Fig{SO2_i}.\n %\n The cases studied are (i) $\\mathbb{Z}_2$, (ii) $\\mathbb{Z}_3$, and (iii) $\\mathbb{Z}_7$.}\n \\label{fig:MSE}\n\\end{figure*}\n\n\\begin{figure*}[p]\n \\centering\n \\subfloat[]{\\includegraphics[width=0.31\\textwidth]{Images\/Plots\/6O2d-tsymm.pdf}} $\\quad$\n \\subfloat[]{\\includegraphics[width=0.31\\textwidth]{Images\/Plots\/6GL2symmRU.pdf}} $\\quad$\n \\subfloat[]{\\includegraphics[width=0.31\\textwidth]{Images\/Plots\/6GL2symmAB.pdf}}\\\\\n \\caption{\n %\n Slices through the analytic loss landscape together with empirically discovered symmetries for $\\mathcal N_{1,1}$ with the full $\\mathbb{A} GL_2(\\mathbb{R})$ search space.\n %\n (i) The determinant-rotation angle space. The maxima are indicated by vertical red lines.\n %\n (ii) The dilatation-shear space. The maximum is indicated by a red star.\n %\n (iii) The affine translation space. 
The maximum is indicated by a red star at the origin.}\n \\label{fig:AGL2symm}\n\\end{figure*}\n\n\\begin{figure*}[p]\n \\centering\n \\subfloat[]{\\includegraphics[width=0.31\\textwidth]{Images\/Plots\/6GL2asymmDU.pdf}}\n $\\quad$\n \\subfloat[]{\\includegraphics[width=0.31\\textwidth]{Images\/Plots\/6GL2asymmTR.pdf}}\n $\\quad$\n \\subfloat[]{\\includegraphics[width=0.31\\textwidth]{Images\/Plots\/6GL2asymmAB.pdf}}\\\\\n \n \\caption{\n %\n Similar to \\Fig{AGL2symm} but for the $N_{1,2}$ distribution.\n %\n (i) The determinant-shear space. The maxima are indicated by two red stars.\n %\n (ii) The dilatation-rotation angle space. The maxima are indicated by four red stars.\n %\n (iii) The affine translation space. The maximum is indicated by a red star at the origin.}\n \\label{fig:AGL2asymm}\n\\end{figure*}\n\n\n\nThis two-dimensional example allows us to test the approach in \\Eq{cyclicloss} for finding $\\mathbb{Z}_q$ subgroups of the full symmetry group.\nRestricting our attention to the $N_{1, 1}$ example and the $SO(2)\\times \\mathbb{R}^+$ subgroup in \\Eq{rotation}, we add the cyclic-enforcing mean squared error term to the loss with $\\alpha=0.1$.\nResults are shown in \\Fig{MSE} for $q = 2$, $3$, and $7$, where the analytic loss optima and empirically found symmetries are broken into discretely many solutions, with the number corresponding to the $q^{\\textrm{th}}$ roots of unity, as expected.\n\nWe now consider the general affine group, $\\operatorname{Aff}_2(\\mathbb{R})$.\nIn two dimensions, the elements of this group can be represented as a matrix with six parameters:\n\\begin{itemize}\n \\item $d\\in\\mathbb{R}^\\times$, the determinant;\n \\item $\\theta\\in [0, 2\\pi)$, the angle of rotation;\n \\item $r\\in \\mathbb{R}^+$, the dilatation;\n \\item $u\\in \\mathbb{R}$, the shear in the $x$-direction; and\n \\item $(a, b)\\in \\mathbb{R}^2$, the overall affine shift.\n\\end{itemize}\nBy Iwasawa's decomposition~\\cite{10.2307\/1969548}, the full transformation can be written as\n\\begin{align}\n&g(X) =\\sqrt{|d|}\\,\\mqty[1&0\\\\0&-1]^{\\delta}\\mqty[c_\\theta&s_\\theta\\\\-s_\\theta&c_\\theta]\\mqty[r&0\\\\0&\\frac{1}{r}]\\mqty[1&u\\\\0&1]\\,X+\\mqty[a\\\\b]\\,,\n\\end{align}\nwhere $\\delta=\\frac{1 - \\operatorname{sgn}(d)}{2}$, $c_\\theta=\\cos(\\theta)$, and $s_\\theta=\\sin(\\theta)$.\n\n\n\nFor the distribution $N_{1,1}$, the symmetry group is $O(2)$, described by the parameters $d = \\pm 1, \\theta\\in[0, 2\\pi), r = 1$, and $u = a = b = 0$.\nVisualizing this space is difficult, but multiple slices through the analytic loss landscape are presented in \\Fig{AGL2symm}.\nThe neural network is trained over all six parameters of the Iwasawa decomposition of $\\operatorname{Aff}_2(\\mathbb{R})$.\nThe empirically discovered symmetries, shown as yellow dots in \\Fig{AGL2symm}, are two-parameter slices of the discovered symmetry group, where slices are chosen such that the parameters not under study are closest to $d = r = 1$, $\\theta = a = b = 0$.\nThe empirical data agree well with the predictions.\n\n\n\n\n\nThe same analysis of $N_{1,2}$ is more complex because the corresponding symmetry group is more complicated than for $N_{1,1}$.\nWhen $r = 1$ and $u = 0$, the symmetries are the $V_4$ we saw earlier ($\\theta = 0,\\pi$ and $d = \\pm 1$).\nBy varying $r$ and $u$, however, one can in fact undo the symmetry-breaking induced by the non-identity covariance, thereby restoring the rotational symmetry.\nFor example, when $r = \\sqrt{2}$, $N_{1,2}$ is transformed into a Gaussian with covariance $\\mathrm{diag}[2,1]$; therefore, $r = \\sqrt 2$ combined with $\\theta = \\frac\\pi2$ or $\\frac{3\\pi}2$ constitutes a symmetry.
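\nThis particular claim is easy to verify directly; the sketch below (ours, illustrative) assembles the linear part of $g$ in the Iwasawa form and checks that it maps the covariance of $N_{1,2}$ to itself.\n\\begin{verbatim}\n# Sketch: assemble the linear part M in Iwasawa form and check that it\n# preserves the covariance of N_{1,2} for d = 1, theta = pi\/2, r = sqrt(2).\nimport numpy as np\n\ndef iwasawa(d, theta, r, u):\n    refl  = np.diag([1.0, -1.0]) if d < 0 else np.eye(2)\n    c, s  = np.cos(theta), np.sin(theta)\n    rot   = np.array([[c, s], [-s, c]])\n    dila  = np.diag([r, 1.0 \/ r])\n    shear = np.array([[1.0, u], [0.0, 1.0]])\n    return np.sqrt(abs(d)) * refl @ rot @ dila @ shear\n\nM   = iwasawa(1.0, np.pi \/ 2, np.sqrt(2.0), 0.0)\ncov = np.diag([1.0, 2.0])\nprint(np.round(M @ cov @ M.T, 12))     # diag(1, 2): covariance preserved\nprint(np.round(np.linalg.det(M), 12))  # +1: the map is also equiareal\n\\end{verbatim}\n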
It is difficult to describe the whole symmetry group in closed form, or even to visualize it, because it does not live in any single planar slice of $\\mathbb{A} GL_2(\\mathbb{R})$.\nAs shown for various parameter slices in \\Fig{AGL2asymm}, though, the empirical results agree well with the analytic predictions.\n\n\n\\begin{figure*}[p]\n \\centering\n \\subfloat[]{\n \\includegraphics[height=0.4\\textwidth]{Images\/Plots\/1d_bimodalplot.pdf}\n \\label{fig:otherdistributions_1Di}\n }\n \\subfloat[]{\n \\includegraphics[height=0.4\\textwidth]{Images\/Plots\/1d_bimodalsymm.pdf}\n \\label{fig:otherdistributions_1Dii}\n }\n \\caption{\n Empirical distribution (i) and empirically discovered symmetries overlaid on the analytic loss landscape (ii) for a one-dimensional bimodal distribution inspired by \\Ref{fisher2018boltzmann}.\n }\n \\label{fig:otherdistributions_1D}\n\\end{figure*}\n\n\\begin{figure*}\n \\subfloat[]{\\includegraphics[height=0.27\\textwidth]{Images\/Plots\/2d_circularplot.pdf}}\\subfloat[]{ \\includegraphics[height=0.28\\textwidth]{Images\/Plots\/2doctagonalrotations.pdf}}\\subfloat[]{ \\includegraphics[height=0.28\\textwidth]{Images\/Plots\/2doctagonalreflections.pdf}}\\\\\n \\subfloat[]{\\includegraphics[height=0.27\\textwidth]{Images\/Plots\/2d_squareplot.pdf}}\\subfloat[]{\\includegraphics[height=0.28\\textwidth]{Images\/Plots\/2dsquarerotations.pdf}}\\subfloat[]{\\includegraphics[height=0.28\\textwidth]{Images\/Plots\/2dsquarereflections.pdf}}\n \\caption{\n Empirical distributions (left column) and empirically discovered rotations (middle column) and reflections (right column) overlaid on the analytic loss landscape for two two-dimensional Gaussian mixture models inspired by \\Ref{fisher2018boltzmann}.\n %\n The studied examples are (i,ii,iii) a two-dimensional octagonal distribution, and (iv,v,vi) a two-dimensional 5$\\times$5 distribution. 
Note that antipodal points in (iii) and (vi) represent the same reflection.}\n \\label{fig:otherdistributions_2D}\n\\end{figure*}\n\n\n\\subsection{Gaussian Mixtures}\n\n\nAs our last set of simple examples, we apply the SymmetryGAN approach to three Gaussian mixture models, inspired by the examples in \\Ref{fisher2018boltzmann}.\nThe first is a one-dimensional bimodal probability distribution:\n\\begin{align}\np(x) = \\frac12\\mathcal N (-1, 1) + \\frac12 \\mathcal N (1, 1),\n\\end{align}\nwhich respects the $\\mathbb{Z}_2$ symmetry group $g(x) = \\pm x$.\nThe empirical distribution for this example is shown in \\Fig{otherdistributions_1Di}.\nApplying a SymmetryGAN with the linear generator of \\Eq{linear_form}, we find the predicted symmetries with great accuracy, as shown in \\Fig{otherdistributions_1Dii}.\n\n\n\n\n\nWe next consider two two-dimensional Gaussian mixtures.\nThe octagonal distribution,\n\\begin{align}\np(x) = \\frac18\\sum_{i = 1}^8\\mathcal N\\left(\\cos\\frac{2\\pi i}{8}, 0.1\\right)\\times \\mathcal N\\left(\\sin\\frac{2\\pi i}{8}, 0.1\\right),\n\\end{align}\nhas the dihedral symmetry group of an octagon, $D_{8}$.\nThe two-dimensional $5\\times 5$ square distribution,\n\\begin{align}\np(x) = \\frac1{25} \\sum_{i = 1}^5\\sum_{j = 1}^5\\mathcal N (i - 2 , 0.1)\\times\\mathcal N (j-2, 0.1),\n\\end{align}\nhas the symmetry group of a square, $D_4$.\nWe use the generator\n\\begin{equation}\n \\label{eq:O2}\n g(X) = \\mqty[c&s\\\\-s&(-1)^\\delta c]X,\n\\end{equation}\nwhich can discover the entire symmetry subgroup (rotations and reflections) in $O(2)$.\nData sampled from these distributions are shown in the left column of \\Fig{otherdistributions_2D}.\nIn the middle and right columns of \\Fig{otherdistributions_2D}, we see that SymmetryGAN finds the expected rotations and reflections, respectively.\n\n\n\\section{Particle Physics Example}\n\\label{sec:hepexample}\n\nWe now turn to an application of SymmetryGANs in particle physics.\nHere, we are interested in whether this approach can recover well-known azimuthal symmetries in collider physics and possibly identify symmetries that are not immediately obvious.\n\n\\subsection{Dataset and Preprocessing}\n\nThis case study is based on dijet events.\nJets are collimated sprays of particles produced from the fragmentation of quarks and gluons, and pairs of jets are one of the most common configurations encountered at the LHC.\nWith a suitable jet clustering algorithm, each jet has a well-defined momentum, and we can search for symmetries of the jet momentum distributions.\n\nThe dataset we use is the background dijet sample from the LHC Olympics anomaly detection challenge~\\cite{gregor_kasieczka_2019_4536377,Kasieczka:2021xcg}.\nThese events are generated using \\texttt{Pythia} 8.219~\\cite{Sjostrand:2006za,Sjostrand:2007gs} with detector simulation provided by \\texttt{Delphes} 3.4.1~\\cite{deFavereau:2013fsa,Mertens:2015kba,Selvaggi:2014mya}.\nThe reconstructed particle-like objects in each event are clustered into\n$R=1$ anti-$k_T$~\\cite{Cacciari:2008gp} jets using \\texttt{FastJet} 3.3.0~\\cite{Cacciari:2011ma,Cacciari:2005hq}.\nAll events are required to satisfy a single $p_T>1.2$~TeV jet trigger, and our analysis is based on the leading two jets in each event, where leading refers to the ones with the largest transverse momenta ($p_T^2 = {p_x^2 + p_y^2}$).\n\n\nEach event is represented as a four-dimensional vector:\n\\begin{equation}\nX = (p_{1x},p_{1y},p_{2x},p_{2y}),\n\\end{equation}\nwhere 
\section{Particle Physics Example}
\label{sec:hepexample}

We now turn to an application of SymmetryGANs in particle physics.
Here, we are interested in whether this approach can recover the well-known azimuthal symmetries of collider physics and possibly identify symmetries that are not immediately obvious.

\subsection{Dataset and Preprocessing}

This case study is based on dijet events.
Jets are collimated sprays of particles produced from the fragmentation of quarks and gluons, and pairs of jets are one of the most common configurations encountered at the LHC.
With a suitable jet clustering algorithm, each jet has a well-defined momentum, and we can search for symmetries of the jet momentum distributions.

The dataset we use is the background dijet sample from the LHC Olympics anomaly detection challenge~\cite{gregor_kasieczka_2019_4536377,Kasieczka:2021xcg}.
These events are generated using \texttt{Pythia} 8.219~\cite{Sjostrand:2006za,Sjostrand:2007gs} with detector simulation provided by \texttt{Delphes} 3.4.1~\cite{deFavereau:2013fsa,Mertens:2015kba,Selvaggi:2014mya}.
The reconstructed particle-like objects in each event are clustered into $R=1$ anti-$k_T$~\cite{Cacciari:2008gp} jets using \texttt{FastJet} 3.3.0~\cite{Cacciari:2011ma,Cacciari:2005hq}.
All events are required to satisfy a single $p_T>1.2$~TeV jet trigger, and our analysis is based on the leading two jets in each event, where the leading jets are those with the largest transverse momenta ($p_T^2 = p_x^2 + p_y^2$).

Each event is represented as a four-dimensional vector:
\begin{equation}
X = (p_{1x},p_{1y},p_{2x},p_{2y}),
\end{equation}
where $p_1$ refers to the momentum of the leading jet, $p_2$ refers to the momentum of the subleading jet, and $x$ and $y$ are the Cartesian coordinates in the transverse plane.
We focus on the transverse plane because the jets are typically back-to-back in this plane as a result of momentum conservation.
The longitudinal momentum of the parton-parton interaction is not known, so there is no corresponding conservation law for $p_z$.%
\footnote{In principle, we could use SymmetryGAN to confirm the absence of a symmetry in $p_z$.}

Since we have a four-dimensional input space, a natural search space for symmetries is $SO(4)$, the group of all rotations of $\mathbb{R}^4$.
Before exploring the whole candidate symmetry space, we first consider an $SO(2)\times SO(2)$ subspace in which the two leading jets are independently rotated.

\subsection{$SO(2) \times SO(2)$ Subspace}

\begin{figure*}[p]
    \centering
    \subfloat[]{\includegraphics[height=0.45\textwidth]{Images/Plots/LHCO.pdf}
    \label{fig:LHCO_i}}
    $\quad$
    \subfloat[]{\includegraphics[height=0.45\textwidth]{Images/Plots/LHCO_avg.pdf}
    \label{fig:LHCO_ii}}
    \caption{
        %
        (i) Empirically discovered symmetries in the LHC Olympics dijet dataset.
        %
        The final values of $\theta_1$ and $\theta_2$ from the SymmetryGAN are plotted together with the line $\theta_1 = \theta_2$.
        %
        (ii) The map between initial and final symmetry parameters.
        %
        The final rotation angle is the average of the initialized rotation angles, offset by $\pi$ if the angle between the initialized angles is reflex.
    }
    \label{fig:LHCO}
\end{figure*}

Because of momentum conservation, we expect that only those rotations that simultaneously rotate both jets by the same angle will be symmetries.
We start from a generic $SO(2)\times SO(2)$ group element:
\begin{equation}
    g_{\theta_1, \theta_2}\mqty[p_{1x}\\ p_{1y}\\p_{2x}\\p_{2y}] = \mqty[\cos\theta_1&\sin\theta_1&0&0\\-\sin\theta_1&\cos\theta_1&0&0\\0&0&\cos\theta_2&\sin\theta_2\\0&0&-\sin\theta_2&\cos\theta_2]\mqty[p_{1x}\\ p_{1y}\\p_{2x}\\p_{2y}],
\end{equation}
where $(\theta_1, \theta_2) \in [0, 2\pi)^2$.
We expect the symmetries to correspond to the subgroup $\qty{g_{\theta_1, \theta_2}|\theta_1 = \theta_2}\cong SO(2)$.
This prediction is borne out in \Fig{LHCO_i}.

We can also study the training dynamics of the SymmetryGAN.
More information about this procedure is given in \App{symmetry_discovery_map}, but the idea is to find a symmetry discovery map $\Omega: SO(2)\times SO(2) \to SO(2)$, $(\theta_{1i}, \theta_{2i})\mapsto\theta_{f}$, that describes how the initial parameters map to the learned ones.
We propose the map given by
\begin{equation}
\begin{split}
    \Omega(\theta_1, \theta_2) &= \begin{cases}\frac{\theta_1 + \theta_2}{2}& |\theta_1 - \theta_2| < \pi\,,\\
    \frac{\theta_1 + \theta_2}{2} - \pi & |\theta_1 - \theta_2| > \pi\,,
    \end{cases}
\end{split}
\end{equation}
where the output is a single angle because the learned rotations lie on the diagonal $\theta_1=\theta_2$ of the two-dimensional parameter space.
This map posits that the final angle bisects the smaller angle between $\theta_1$ and $\theta_2$, which is validated by the empirical results shown in \Fig{LHCO_ii}.
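As a quick numerical illustration of this map (the input angles below are arbitrary), the two branches can be evaluated directly:
\begin{verbatim}
import numpy as np

def symmetry_discovery_map(theta1, theta2):
    """Proposed SO(2) x SO(2) -> SO(2) discovery map: the learned
    angle bisects the smaller arc between theta1 and theta2."""
    mid = 0.5 * (theta1 + theta2)
    if abs(theta1 - theta2) > np.pi:
        mid -= np.pi   # reflex separation: bisector is opposite
    return mid % (2 * np.pi)

print(symmetry_discovery_map(0.2, 1.0))  # 0.6
print(symmetry_discovery_map(0.2, 6.0))  # ~6.24, across 0 = 2 pi
\end{verbatim}
The second call illustrates the offset branch: when the initializations are separated by a reflex angle, the bisector of the shorter arc between them lies across the $\theta = 0$ boundary.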
\subsection{$SO(4)$ Search Space}

We now turn to the four-dimensional rotation group.
$SO(4)$ is a six-parameter group, specified by $\qty{\theta_i}_{i=1}^6$, which parametrize the six independent rotations:
\begin{align}
R_1\colon p_{1x}&\leadsto p_{1y},&
R_2\colon p_{1x}&\leadsto p_{2x},\\
R_3\colon p_{1x}&\leadsto p_{2y},&
R_4\colon p_{1y}&\leadsto p_{2x},\\
R_5\colon p_{1y}&\leadsto p_{2y},&
R_6\colon p_{2x}&\leadsto p_{2y},
\end{align}
where the notation $R: a \leadsto b$ means
\begin{align}
    R(a) &= a\cos\theta + b\sin\theta,\\
    R(b) &= b\cos\theta - a\sin\theta.
\end{align}
One way to describe a generic generator $g_{\vb*\theta}$ is by
\begin{equation}
    g_{\vb*\theta}(X) = R_1 R_2 R_3 R_4 R_5 R_6 \, X.
\end{equation}

\begin{figure*}[p]
    \centering
    \subfloat[]{\includegraphics[width=0.45\textwidth]{Images/Plots/LHCO_Comparison1.pdf} \label{fig:LHCOComparison_i}}
    $\quad$
    \subfloat[]{\includegraphics[width=0.45\textwidth]{Images/Plots/LHCO_Comparison2.pdf}
    \label{fig:LHCOComparison_ii}}
    \caption{
        %
        Two-dimensional projection of (i) the original LHC Olympics dijet dataset and (ii) its transformation by one of the generators discovered by the SymmetryGAN.
        %
        Here, we plot the momenta of the two leading jets in the transverse plane.}
    \label{fig:LHCO_Comparison}
\end{figure*}

It is not easy to visualize a six-dimensional space, and the symmetries discovered by SymmetryGAN do not lie in any single $2$-plane or even $3$-plane.
Therefore, we need alternative methods to verify that the maps discovered by the neural network are indeed symmetries.

One verification strategy is to visually inspect $X$ and $g_{\vb*\theta}(X)$ to see if the spectra look the same.
In \Fig{LHCO_Comparison}, we show a projection of the distribution of $X$ and one instance of $g_{\vb*\theta}(X)$, which suggests that the found $g_{\vb*\theta}$ is indeed a symmetry.
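For concreteness, the six-angle parametrization above can be assembled explicitly; the following sketch builds $g_{\vb*\theta}$ as a product of plane rotations (an illustration of the parametrization, not the trained network itself):
\begin{verbatim}
import numpy as np

# Planes acted on by R1..R6, with axes ordered
# (p1x, p1y, p2x, p2y) = (0, 1, 2, 3).
PLANES = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

def plane_rotation(a, b, theta):
    """4x4 rotation in the (a, b) plane: a ~> b in the text."""
    R = np.eye(4)
    R[a, a] = R[b, b] = np.cos(theta)
    R[a, b] = np.sin(theta)
    R[b, a] = -np.sin(theta)
    return R

def so4_generator(thetas):
    """g_theta = R1 R2 R3 R4 R5 R6 as a single 4x4 matrix."""
    g = np.eye(4)
    for (a, b), th in zip(PLANES, thetas):
        g = g @ plane_rotation(a, b, th)
    return g

g = so4_generator(np.random.default_rng(1).uniform(0, 2 * np.pi, 6))
assert np.allclose(g @ g.T, np.eye(4))      # orthogonal
assert np.isclose(np.linalg.det(g), 1.0)    # special
\end{verbatim}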
\begin{figure*}[t]
    \centering
    \subfloat[]{\includegraphics[width=0.45\textwidth]{Images/Plots/KL_rand1.pdf} \label{fig:KLrand_i}}
    $\quad$
    \subfloat[]{\includegraphics[width=0.45\textwidth]{Images/Plots/KL_rand2.pdf}
    \label{fig:KLrand_ii}}
    %
    \caption{
        %
        An example of the jet azimuthal angle distributions, (i) $\widetilde{\phi}_{1}$ and (ii) $\widetilde{\phi}_{2}$, of the LHC Olympics dijet data rotated by a randomly selected rotation in $SO(4)$.
        The distributions are not uniform, so a random rotation is not a symmetry.}
    \label{fig:KL_rand}
\end{figure*}

\begin{figure*}[t]
    \centering
    \subfloat[]{\includegraphics[width=0.45\textwidth]{Images/Plots/KL_symm1.pdf} \label{fig:KLsymm_i}}
    $\quad$
    \subfloat[]{\includegraphics[width=0.45\textwidth]{Images/Plots/KL_symm2.pdf}
    \label{fig:KLsymm_ii}}
    %
    \caption{
        %
        The same as \Fig{KL_rand}, but for a symmetry in $SO(4)$.
        %
        The distributions are uniform, so this rotation is a candidate symmetry.
    }
    \label{fig:KL_symm}
\end{figure*}

Another verification strategy is to test whether the discovered symmetries preserve special projections of the dataset.
Each of the two jets has an azimuthal angle $\phi_{j} = \text{arctan2}(p_{jy}, p_{jx})$ for $j = 1,2$ that is uniformly distributed over $[-\pi, \pi)$, where $\text{arctan2}$ is the two-argument arctangent function, which returns the principal value of the polar angle $\theta\in (-\pi, \pi]$.
Symbolically, the data can be represented as
\begin{equation}
    X = \mqty[p_{1x}\\ p_{1y}\\p_{2x}\\p_{2y}] = \mqty[p_{1T}\cos\phi_{1}\\p_{1T}\sin\phi_{1}\\p_{2T}\cos\phi_{2}\\p_{2T}\sin\phi_{2}],\qquad \phi_{j}\sim\mathcal{U}[-\pi, \pi)\,,
\end{equation}
where $p_{jT}$ is the transverse momentum of each jet (which is approximately the same for both jets, since they are roughly back to back).
If one applies an arbitrary rotation, there is no reason the new azimuthal angles,
\begin{align}
    \widetilde{\phi}_{1} &= \text{arctan2}(g_{\vb*\theta}(X)_2, g_{\vb*\theta}(X)_1), \\
    \widetilde{\phi}_{2} &= \text{arctan2}(g_{\vb*\theta}(X)_4, g_{\vb*\theta}(X)_3),
\end{align}
should remain uniformly distributed, as \Fig{KL_rand} demonstrates.
If one of the symmetry rotations discovered by the neural network is applied to $X$, however, $\widetilde{\phi}_{j}$ must remain uniformly distributed, as shown in \Fig{KL_symm}.

\begin{figure*}[t]
    \centering
    \subfloat[]{\includegraphics[width=0.45\textwidth]{Images/Plots/KL_Div1.pdf} \label{fig:KLdiv_i}}
    $\quad$
    \subfloat[]{\includegraphics[width=0.45\textwidth]{Images/Plots/KL_DIv2.pdf}
    \label{fig:KLdiv_ii}}
    %
    \caption{
        %
        The KL divergence between the jet azimuthal angle distribution before and after a random rotation or a symmetry rotation, for the (i) leading jet and (ii) subleading jet.
        %
        The KL divergence between two samples drawn from $\mathcal U[-\pi, \pi)$ is shown for comparison.
    }
    \label{fig:KL_div}
\end{figure*}

This effect can be quantified by computing the Kullback-Leibler (KL) divergence of the two $\widetilde{\phi}_{j}$ distributions against that of $\phi_{j}$.
In \Fig{KL_div}, we see that the KL divergence of the symmetries is much smaller than the KL divergence of the random rotations.
Also plotted on the same figure is the KL divergence between two samples drawn from $\mathcal{U}[-\pi, \pi)$, which represents the irreducible effect of a finite dataset: it is the KL divergence that $\widetilde{\phi}_{j}$ would exhibit if an ideal analytic symmetry were applied to $X$.
It is instructive to consider the means of the histograms.
The KL divergence of randomly selected elements of $SO(4)$ has means of $0.37$ ($0.34$) for the leading (subleading) jet, while the KL divergence of symmetries in $SO(4)$ has respective means of $0.0058$ ($0.0090$).
The irreducible statistical noise has a mean of $0.0010$.

Clearly, the symmetries reconstruct the distribution much better than randomly selected elements of $SO(4)$, and are in fact quite close to the irreducible KL divergence due to the finite sample size.
Note that the $x$-axis of \Fig{KL_div} is logarithmic, which magnifies the region near zero, so the difference between the symmetry histogram and the statistical noise histogram is smaller than it might appear.
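A minimal sketch of this computation is given below; the binning and regularization are illustrative choices and not necessarily those used to produce \Fig{KL_div}:
\begin{verbatim}
import numpy as np

def kl_binned(sample_p, sample_q, bins=50):
    """Histogram estimate of KL(p || q) for angles on [-pi, pi)."""
    edges = np.linspace(-np.pi, np.pi, bins + 1)
    p, _ = np.histogram(sample_p, bins=edges, density=True)
    q, _ = np.histogram(sample_q, bins=edges, density=True)
    eps = 1e-12                       # regularize empty bins
    p, q = p + eps, q + eps
    width = edges[1] - edges[0]
    return float(np.sum(width * p * np.log(p / q)))

rng = np.random.default_rng(0)
phi = rng.uniform(-np.pi, np.pi, 100_000)        # original azimuths
phi_tilde = rng.uniform(-np.pi, np.pi, 100_000)  # ideal-symmetry proxy
print(kl_binned(phi_tilde, phi))  # small, finite-sample noise only
\end{verbatim}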
\begin{figure}[t]
    \centering
    \includegraphics[width=0.4\textwidth]{Images/Plots/LHCOLosses.pdf}
    \caption{
        %
        The loss of random rotations in $SO(4)$ compared to the loss of rotations learned by SymmetryGAN, overlaid with the analytic loss of a symmetry, $2\log2$.}
    \label{fig:LHCOLosses}
\end{figure}

A final method to independently verify that the rotations SymmetryGAN finds are symmetries of the LHC Olympics data is to compute the loss function.
As discussed at the end of \Sec{1d_example}, when $g$ represents a symmetry and $d$ is an ideal discriminator, the binary cross entropy loss is $2\log2$.
By training a \textit{post hoc} classifier, we can therefore compute the loss of a specific symmetry generator.%
\footnote{In principle, one could look at the value of the loss after training the discriminator.
In practice, a \textit{post hoc} classifier yields more reliable behavior; see the related discussion in \Ref{Diefenbacher:2020rna}.}
In \Fig{LHCOLosses}, we compare the loss of randomly sampled rotations from $SO(4)$ to the loss of rotations discovered by SymmetryGAN.
The latter is quite close to the analytic optimum, $2\log 2$.

From these tests, we conclude that SymmetryGAN has discovered symmetries of the LHC Olympics dataset.
As discussed further in \Sec{inference} below, though, discovering symmetries is different from inferring the structure of the found subgroup within the six-dimensional search space.
Mimicking the study from \Fig{MSE_i}, we can probe its $\mathbb{Z}_2$ subgroups through the loss function in \Eq{cyclicloss} with $q=2$.
The backbone of this subgroup is expected to be the reflections $p_{1k}\leftrightarrow p_{2k}$ (because both jets have approximately the same momenta) and $p_{jx}\leftrightarrow p_{jy}$ (because $\sin\phi$ and $\cos\phi$ look the same upon drawing sufficiently many samples of $\phi$).
The learning process reveals a much larger group, though.
There is in fact a continuous family of $\mathbb{Z}_2$ symmetries, which combine an overall azimuthal rotation with one of the aforementioned backbone reflections.
In retrospect, these $\mathbb{Z}_2$ symmetries should have been expected, since they are compositions of well-known symmetry transformations.
This example highlights the need to go beyond symmetry discovery and towards symmetry inference.

\section{Towards Symmetry Inference}
\label{sec:inference}

The examples in \Secs{results}{hepexample} highlight the potential power of SymmetryGAN for discovering symmetries using deep learning.
Despite the many maps discovered by the neural network, though, it is difficult to infer, for example, the precise Lie subgroup of $SO(4)$ respected by the LHC Olympics data.
This highlights a limitation of this approach and the distinction between ``symmetry discovery'' and ``symmetry inference''.
Though SymmetryGAN can identify points on the Lie group manifold, there is no simple way to infer precisely which Lie group has been discovered.
In this section, we mention three potential methods to assist in the process of symmetry inference.

\subsection{Finding Discrete Subgroups}
\label{sec:discrete_subgroups}

One way to better understand the structure of the learned symmetries is to look for discrete subgroups.
As already shown in \Fig{MSE} and mentioned in the particle physics case, we can identify discrete $\mathbb{Z}_q$ symmetry transformations by augmenting the loss with \Eq{cyclicloss}.
By forcing the symmetries to take a particular form, we can infer the presence (or absence) of such a subgroup.

It is interesting to consider possible modifications to \Eq{cyclicloss} to handle non-Abelian discrete symmetries.
The goal would be to learn multiple symmetries simultaneously that satisfy known group theoretic relations.
For example, in the Abelian case, a loss term like
\begin{equation}
\label{eq:abeliansymm}
    -\frac\alpha N\sum_{x\in\{x_i\}_{i=1}^N}(g_1 \circ g_2(x) - g_2 \circ g_1(x))^2
\end{equation}
could be used to identify any two symmetries $g_1$ and $g_2$ that commute.
We leave a study of these possibilities to future work.
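To make \Eq{abeliansymm} concrete, the following sketch evaluates the commutator penalty for hypothetical two-dimensional generators, reading the squared difference as a squared Euclidean norm:
\begin{verbatim}
import numpy as np

def commutator_penalty(g1, g2, X, alpha=1.0):
    """Eq. (abeliansymm): -alpha times the mean squared
    difference between g1(g2(x)) and g2(g1(x))."""
    diff = g1(g2(X)) - g2(g1(X))
    return -alpha * np.mean(np.sum(diff ** 2, axis=-1))

def rot(th):
    M = np.array([[np.cos(th), np.sin(th)],
                  [-np.sin(th), np.cos(th)]])
    return lambda X: X @ M.T

reflect_x = lambda X: X * np.array([1.0, -1.0])

X = np.random.default_rng(0).normal(size=(1000, 2))
print(commutator_penalty(rot(0.3), rot(1.1), X))   # ~ 0: commute
print(commutator_penalty(rot(0.3), reflect_x, X))  # < 0: do not
\end{verbatim}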
\subsection{Group Composition}

By running SymmetryGAN a few times, one may discover a few points on the symmetry manifold.
By composing these discovered symmetries together, one can rapidly increase the number of known points on the manifold: the discovered symmetries are, by construction, elements of a group, so their compositions are also elements of the group.

This notion is quite powerful.
The ergodicity of the orbits of group elements is a richly studied and complex area of mathematics (see e.g.~\Ref{bams/1183548783}).
Many groups of physical interest are locally connected, compact, and have additional structure.
In that context, it is likely that the full symmetry group is generated by $\qty{r_1,\dots, r_\nu}$, where $r_i$ is randomly drawn from the group and $\nu$ is the product of the representation dimension and the number of connected components.

For example, consider the group $U(1)\cong SO(2)$, which has $\nu=1$.
Almost every element $e^{i\theta}\in U(1)$ has a rotation angle that is an irrational multiple of $\pi$, i.e., $\frac\theta\pi\in\mathbb{R}\setminus\mathbb{Q}$.
We can therefore approximate any element $e^{i\phi}\in U(1)$ by repeated applications of $e^{i\theta}$:
\begin{equation}
    \forall e^{i\phi}\in U(1)\;\forall\epsilon > 0\;\exists n\in \mathbb{N}: \norm{e^{i\phi} - e^{in\theta}} < \epsilon\,.
\end{equation}
In other words, the subgroup generated by $e^{i\theta}$ is dense in $U(1)$.

In practice, the symmetries discovered by SymmetryGAN will not be exact, due to numerical considerations.
Since the network learns approximate symmetries with some associated error, each composition compounds this error.
Thus, there are practical limits on the number of compositions that can be carried out with numeric data.
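The density statement above can be checked with a short numerical search; the values of $\theta$ and the target $\phi$ below are arbitrary illustrative choices:
\begin{verbatim}
import numpy as np

theta = np.sqrt(2) * np.pi   # theta / pi is irrational
phi = 1.234                  # arbitrary target angle
n = np.arange(1, 100_001)
angles = (n * theta) % (2 * np.pi)
best = n[np.argmin(np.abs(angles - phi))]
err = abs((best * theta) % (2 * np.pi) - phi)
print(best, err)  # the error shrinks as the search range grows
\end{verbatim}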
\subsection{The Symmetry Discovery Map}
\label{sec:symmetry_discovery_map}

So far, we have initialized a SymmetryGAN with uniformly distributed values of certain parameters, and then trained it to return the values of those parameters that constitute a symmetry.
We can define a \textit{symmetry discovery map}, which connects the initialized parameters of $g$ to the parameters of the learned function:
\begin{equation}
\Omega: \mathbb{R}^k \to \mathbb{R}^k,
\end{equation}
where $k$ is the dimension of the parameter space.
This is a powerful object, not only for characterizing the learning dynamics but also for assisting in the process of symmetry discovery and inference.

There are at least two distinct reasons why knowledge of this symmetry discovery map is useful.
First, the map is of theoretical interest.
We discussed in \Sec{1d_example} the importance of understanding the topology of the symmetry group.
The symmetry discovery map induces a \emph{deformation retract} from the search space to the symmetry space.
Every deformation retract is a homotopy equivalence, and by the Eilenberg-Steenrod axiom of homotopy equivalence~\cite{Eilenberg117}, the homology groups of the symmetry group can be constructed from the homology groups of the search space.
Even in low dimensions, the topology of the symmetry group can be non-trivial (cf.~\Sec{2d_example} for an example in 2D).
The topology of $GL_n(\mathbb{R})$, however, has been studied for over half a century, and the homotopy and homology groups of several non-trivial subgroups of $\operatorname{Aff}_n(\mathbb{R})$ have been fully determined~\cite{SCHLICHTING20171}.
Hence, if the symmetry discovery map were known, one could leverage the full scope of algebraic topology and the known results for the linear groups to understand the topology of the symmetry group.

Second, this map has practical value.
Every time a SymmetryGAN is trained, it must relearn how to move the initialized values of $g$ to the final values.
Intuitively, nearby initial values should map to nearby final values, so learning the symmetry discovery map should enable a more efficient exploration of the symmetry group.
In practice, this can be accomplished by augmenting the loss function in \Eq{numericloss}.
Let $g(x|c)$ be the symmetry generator, with the parameters $c$ made explicit.
Let $\Omega(c)$ be a neural network representing the symmetry discovery map.
Sampling parameters from $\mathbb{R}^k$ and data points from $X$, we can optimize the following loss:
\begin{align}
\label{eq:numericloss_parameters}
    L[\Omega,d]&=-\sum_{c \in \{c_a\}} \sum_{x\in\{x_i\}} \Big[\log\big(d(x)\big) \\
    & \nonumber \qquad \qquad \qquad \qquad + \log\big(1-d\big(g(x|\Omega(c))\big)\big)\Big]\,.
\end{align}
Note that this loss is now a functional of $\Omega$ instead of $g$.
If $\Omega(c)$ can be initialized to the identity function, then gradient descent acting on $\Omega(c)$ is (asymptotically) the same as gradient descent acting on the original parameters.
Thus, as long as $\Omega(c)$ has a sufficiently flexible parametrization, the learned $\Omega(c)$ will be a good approximation to the symmetry discovery map of the original SymmetryGAN.

We defer a full exploration of the symmetry discovery map to future work.
Preliminary analytic and numerical studies of the symmetry discovery map are shown in \App{symmetry_discovery_map}.
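To make the structure of \Eq{numericloss_parameters} explicit, the following toy sketch (with a hypothetical discovery map and an exactly symmetric one-dimensional density) evaluates the loss with an ideal discriminator, recovering the per-pair analytic optimum $2\log 2$ quoted above:
\begin{verbatim}
import numpy as np

def omega_loss(omega, d, g, X, C):
    """Monte Carlo estimate of Eq. (numericloss_parameters),
    a functional of the discovery map omega."""
    total = 0.0
    for c in C:                  # sampled initial parameters
        gX = g(X, omega(c))      # generator with mapped parameters
        total += np.sum(np.log(d(X)) + np.log(1.0 - d(gX)))
    return -total

# Toy setup: p(x) standard normal, symmetric under x -> -x.
g = lambda X, c: c * X
omega = lambda c: -1.0 if c < 0 else 1.0  # hypothetical map to {-1, +1}
d = lambda X: np.full_like(X, 0.5)        # ideal discriminator

X = np.random.default_rng(0).normal(size=1000)
C = np.random.default_rng(1).uniform(-2, 2, size=16)
print(omega_loss(omega, d, g, X, C) / (len(C) * len(X)))
# ~ 2 log 2 = 1.386..., the analytic optimum per (c, x) pair
\end{verbatim}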
\section{Conclusions and Outlook}
\label{sec:conclusions}

In this paper, we provided a rigorous statistical definition of the term ``symmetry'' in the context of probability densities.
This is highly relevant for the field of high energy collider physics, where the key objects of study are scattering cross sections.
We proposed SymmetryGAN as a novel, flexible, and fully differentiable deep learning approach to symmetry discovery.
SymmetryGAN showed promising results when applied to Gaussian datasets as well as to dijet events from the LHC, conforming to our analytic predictions and providing new insights in some cases.

A key takeaway lesson is that the symmetry of a probability density only makes sense when compared to an inertial density.
For our studies, we focused exclusively on the inertial density corresponding to the uniform distribution on (an open subset of) $\mathbb{R}^n$, since Euclidean symmetries are ubiquitous in physics.
Furthermore, we only considered area-preserving linear maps on $\mathbb{R}^n$, a simple yet rich group of symmetries that maintain this inertial density.
Moving forward, there are many opportunities to further develop the concepts introduced in this paper.
As a straightforward extension, non-linear equiareal maps over $\mathbb{R}^n$ could be added to the linear parameterizations we explored, as could Lorentz-like symmetries.
In more complex cases where there is no obvious notion of an inertial density, one could study the relative symmetries between two different datasets.
It would also be interesting to discover approximate symmetries and rigorously quantify the degree of symmetry breaking.
This is relevant in cases where the complete symmetry group is obscured by experimental acceptances and efficiencies.

A key open question is how to go beyond symmetry discovery and towards symmetry inference.
We showed how one can introduce loss function modifications to emphasize the discovery of discrete subgroups.
One could imagine extending this strategy to continuous subgroups to gain a better handle on group theoretic structures.
The symmetry discovery map is a potentially powerful tool for symmetry inference, since it in principle allows the entire symmetry group to be discovered in a single training.
In practice, though, we found learning the symmetry discovery map to be particularly challenging.
We hope future algorithmic and implementation developments will enable more effective strategies for symmetry discovery and inference, in particle physics and beyond.

The code for this paper can be found in this \href{https://github.com/hep-lbdl/symmetrydiscovery}{\underline{GitHub repository}}.

\section*{Acknowledgments}

KD and BN are supported by the U.S. Department of Energy, Office of Science under contract DE-AC02-05CH11231.
JT is supported by the National Science Foundation under Cooperative Agreement PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, \url{http://iaifi.org/}), and by the U.S. DOE Office of High Energy Physics under grant number DE-SC0012567.

\section{INTRODUCTION}

A quantum-chromodynamic potential model was proposed by us\cite{GRR} in 1982, which not only yielded results for the $c\bar{c}$ and $b\bar{b}$ energy levels and their spin splittings in good agreement with the existing experimental data, but whose predictions were also confirmed by later experiments at the Cornell Electron Storage Ring\cite{CESR}.
An essential feature of our model was the inclusion of the one-loop radiative corrections to the quark-antiquark potential, which had been derived by us in an earlier investigation\cite{GR}.
Subsequently, the model was improved by using relativistic kinematics\cite{GRR2} and a nonsingular form of the quarkonium potential\cite{GRS}.
As shown by us, in addition to the energy levels of $c\bar{c}$ and $b\bar{b}$, our model also yields results in good agreement with the experimental data for the leptonic and E1 transition widths.
It was further shown by
It was further shown by\nZhang, Sebastian, and Grotch\\cite{zhang} that the M1 transition\nwidths for $c\\bar{c}$ and $b\\bar{b}$ obtained from our model\nare in better agreement with the experimental data than those predicted\nusing other potential models.\n\n\tRecently the mass of the ${}^1P_1$ state of charmonium has\nbeen determined by the E760 collaboration\\cite{E760} in $p\\bar{p}$\nannihilations at Fermilab, and the splitting between the center of\ngravity of the ${}^3P_J$ states and the ${}^1P_1$ state, denoted as\n$\\Delta M_P$, is found to be approximately $-0.9$~MeV. This experimental\nresult has created much interest since it provides a new test for the\npotential models for heavy quarkonia.\n\n\tIf the spin-dependent forces in the quarkonium potential could be\ntreated perturbatively, the $\\Delta M_P$ splitting would arise solely\nfrom the spin-spin (color hyperfine) interaction. However, the spin-dependent\nforces are known to be quite large and, as observed by Lichtenberg and Potting\n\\cite{lichten}, the contributions of the spin-orbit and tensor interactions to\n$\\Delta M_P$ cannot be ignored in a nonperturbative treatment. We shall\nanalyze this complex situation with the use of our model which\navoids the use of an illegitimate perturbative treatment, and provide an\nexplanation for the observed splittings of the charmonium P~states.\n\n\tSeveral authors\\cite{halzen,chen,grotch} have recently shown that\na theoretical value for $\\Delta M_P$ in close agreement with the\nexperimental value can be readily obtained from the spin-spin\ninteraction terms in the quarkonium potential. However,\nsince they have employed an illegitimate perturbative treatment, the\nsignificance of this simple interpretation remains an open question.\n\n\tOnly a quarkonium model which is in good overall agreement with\nthe experimental data can be taken seriously. Our model for heavy quarkonia\nsatisfies this requirement.\n\n\\section{$\\symbol{'143}\\bar{\\symbol{'143}}$ SPECTRUM}\n\n\tOur model is based on the Hamiltonian\n\\begin{equation}\nH=H_0+V_p+V_c , \\label{hamiltonian}\n\\end{equation}\nwhere\n\\begin{equation}\nH_0=2(m^2+{\\bf p}^2)^{1\/2}\n\\end{equation}\nis the relativistic kinetic energy term, and $V_p$ and $V_c$ are nonsingular\nquasistatic perturbative and confining potentials, which are\ngiven in the Appendix.\nWe found a trial wave function introduced by Jacobs, Olsson, and Suchyta\n\\cite{jacobs} particularly suitable for obtaining the quarkonium energy levels\nand wave functions.\n\n\tOur results for the energy-level splittings as well as the individual\nenergy levels of $c\\bar{c}$ are given in Tables~\\ref{splittings}\nand~\\ref{levels}.\nFor experimental data we have generally relied on the Particle Data Group\n\\cite{PDG}, but for the $\\eta_c$ state we have used the new result announced by\nthe E760 collaboration \\cite{appel}. The two sets\nof theoretical results in these tables correspond to the scalar-exchange and\nthe scalar-vector-exchange forms of the confining\npotential, given by\n\\begin{mathletters}\n\\begin{equation}\nV_c=V_S ,\n\\end{equation}\nand\n\\begin{equation}\nV_c=(1-B)V_S+BV_V ,\n\\label{vconfine}\n\\end{equation}\n\\end{mathletters}\nrespectively. The results obtained with the scalar-exchange confining\npotential are\nunsatisfactory, while the scalar-vector-exchange results\nare in surprisingly close agreement with the\nexperimental data, including the observed mass of the ${}^1P_1$ state\nand the $\\Delta M_P$ splitting. 
The scalar-vector mixing parameter $B$ is found to be about $\frac{1}{4}$.

In Table~\ref{mp}, we display the contributions to $\Delta M_P$ from the various types of terms in the Hamiltonian (\ref{hamiltonian}) with the confining potential (\ref{vconfine}).
The table shows comparable contributions to $\Delta M_P$ from several sources, which brings out the complexity of this splitting when spin-dependent potential terms are included in the unperturbed Hamiltonian.
The $\Delta M_P$ splitting, therefore, does not provide a direct test of the spin-spin interaction in heavy quarkonia.

In Tables~IV and~V, we give the results for the leptonic and E1 transition widths corresponding to the scalar-vector-exchange confining potential by using the formulae
\begin{equation}
\Gamma_{ee}({}^3S_1 \to e^+e^-) = \frac{16\pi\alpha^2 e_Q^2}{M^2(Q\overline{Q})}
	\left| \Psi(0)\right|^2 \left( 1-\frac{16\alpha_s}{3\pi}\right),
\end{equation}
and
\begin{eqnarray}
\Gamma_{E1}({}^3S_1 \to {}^3P_J) &=& \frac{4}{9}\frac{2J+1}{3}\alpha e_Q^2
	k_J^3 |r_{fi}|^2,\nonumber \\
\Gamma_{E1}({}^3P_J \to {}^3S_1) &=&\frac{4}{9}\alpha e_Q^2
	k_J^3 |r_{fi}|^2,\\
\Gamma_{E1}({}^1P_1 \to {}^1S_0) &=&\frac{4}{9}\alpha e_Q^2
	k_J^3 |r_{fi}|^2.\nonumber
\end{eqnarray}
The photon energies for the E1 transition widths have been obtained from the energy difference of the initial and final $c\bar{c}$ states by taking into account the recoil correction.
Our results are in good agreement with the available experimental data \cite{PDG}, and our prediction for the E1 transition width of $1{}^1P_1\to 1{}^1S_0$ is 341.8 keV.

\section{CONCLUSION}

We conclude with explanatory remarks concerning some features of our quarkonium potential.

\subsection{Renormalization scheme}

We have used the Gupta-Radford (GR) renormalization scheme \cite{GR2} for the one-loop radiative corrections to the quarkonium potential rather than the modified minimal-subtraction ($\overline{\rm MS}$) scheme.
The GR scheme is a simplified momentum-space subtraction scheme, and the parameter $\mu$ can be interpreted as representing the momentum scale of the physical process.
This scheme also has the desirable feature that it satisfies the decoupling theorem\cite{appelquist}.
On the other hand, in the $\overline{\rm MS}$ scheme $\mu$ appears as a mathematical parameter, and in this scheme decoupling-theorem-violating terms are simply ignored.

The one-loop radiative corrections in the GR scheme can be converted into those in the $\overline{\rm MS}$ scheme by means of the relation \cite{GR2}
\begin{equation}
\alpha_s=\bar{\alpha_s}\left[ 1+\frac{\bar{\alpha_s}}{4\pi}\left(
	\frac{49}{3}-\frac{10}{9}n_l+\frac{2}{3}\sum_{n_h}
	\ln\frac{m^2}{\mu^2} \right)\right]\, , \label{renorm}
\end{equation}
where $\bar{\alpha_s}$ refers to the $\overline{\rm MS}$ scheme, and $n_l$ and $n_h$ are the numbers of light and heavy quark flavors.
If we drop the decoupling-theorem-violating terms that appear in the $\overline{\rm MS}$ scheme, we can put $n_l=n_f$ and $n_h=0$, and (\ref{renorm}) reduces to
\begin{equation}
\alpha_s=\bar{\alpha_s}\left[ 1+\frac{\bar{\alpha_s}}{4\pi}\left(
	\frac{49}{3}-\frac{10}{9}n_f \right)\right]\, .
\end{equation}
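For orientation, this one-loop conversion is elementary to evaluate numerically; the input coupling below is an arbitrary illustrative value, not a fitted parameter of our model:
\begin{verbatim}
import numpy as np

def alpha_gr_from_msbar(alpha_msbar, n_f):
    """One-loop relation between the MS-bar coupling and the
    GR-scheme coupling, with n_l = n_f and n_h = 0."""
    return alpha_msbar * (1.0 + alpha_msbar / (4.0 * np.pi)
                          * (49.0 / 3.0 - 10.0 / 9.0 * n_f))

print(alpha_gr_from_msbar(0.3, n_f=4))  # ~ 0.385
\end{verbatim}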
\subsection{Quasistatic potential}

In an earlier investigation \cite{GRR2}, we arrived at the surprising conclusion that while the quasistatic form of the quarkonium potential yields results in good agreement with the experimental data, this is not the case for the momentum-dependent form.
This conclusion has also been confirmed by the recent investigations of Gara {\it et al.} \cite{gara} and Lucha {\it et al.} \cite{lucha}.

It appears to us that the success of the quasistatic potential is related to the phenomenon of quark confinement.
Since a rigorous treatment of quark confinement does not exist at this time, we shall only offer a plausible argument.
It was argued earlier \cite{GR3}, with the use of a renormalization-group-improved quantum-chromodynamic treatment, that quark confinement can be understood as a consequence of the fact that quarks and antiquarks are unable to exchange low-momentum gluons.
Moreover, since for quark-antiquark scattering in the center-of-mass system
\begin{equation}
{\bf p}^2=\frac{1}{4}{\bf k}^2+\frac{1}{4}{\bf s}^2,
\end{equation}
where
\begin{equation}
{\bf k}={\bf p'}-{\bf p},\qquad {\bf s}={\bf p'}+{\bf p},
\end{equation}
it follows that if ${\bf k}^2$ is allowed to take only large values, ${\bf s}^2$ can be treated as small.
This may be regarded as a justification for the quasistatic approximation, in which terms of second and higher orders in ${\bf s}$ are ignored.

Our quarkonium perturbative and confining potentials are not only quasistatic but also nonsingular.
In momentum space, these potentials are obtained by first expanding in powers of ${\bf p}^2/(m^2+{\bf p}^2)$, and then approximating ${\bf p}^2$ as $\frac{1}{4}{\bf k}^2$.
The perturbative potential in powers of ${\bf p}^2/(m^2+{\bf p}^2)$ includes, among others, terms of the form
\begin{equation}
f({\bf p}^2)=\frac{a+b\;{\bf S}_1\cdot{\bf S}_2}{m^2+{\bf p}^2}\ ,
\label{zero}
\end{equation}
which becomes in the quasistatic approximation
\begin{equation}
f({\bf k}^2)=\frac{a+b\;{\bf S}_1\cdot{\bf S}_2}{m^2+\frac{1}{4}{\bf k}^2}\ .
\label{quasizero}
\end{equation}
It has been observed by Grotch, Sebastian, and Zhang \cite{grotch} that while the contribution of $f({\bf p}^2)$ vanishes for the P states due to the vanishing of the wave function at the origin, $f({\bf k}^2)$ yields a small but nonvanishing contribution for these states.
Consequently, for P and higher angular-momentum states it would be more accurate to drop terms of the form (\ref{zero}) than to convert them into the approximate form (\ref{quasizero}).
We agree with the observation of Grotch {\it et al.}
Accordingly, in the treatment of states with $l\neq 0$ we shall drop terms of the form (\ref{quasizero}) in the momentum-space potentials and the corresponding terms of the form
\begin{equation}
f({\bf r})=\frac{a+b\;{\bf S}_1\cdot{\bf S}_2}{\pi r}e^{-2mr}
\label{coordzero}
\end{equation}
in the coordinate-space potentials.
Note that (\ref{coordzero}) is precisely the coordinate-space Fourier transform of (\ref{quasizero}).
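As a consistency check (with arbitrary illustrative values of $m$ and the momentum transfer, in natural units, and suppressing the spin factor $a+b\;{\bf S}_1\cdot{\bf S}_2$), one can verify numerically that the three-dimensional Fourier transform of (\ref{coordzero}) reproduces (\ref{quasizero}):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

m, k = 1.5, 2.0   # illustrative mass and momentum transfer

# Radially reduced 3D Fourier transform of exp(-2 m r)/(pi r):
# FT(k) = (4/k) * int_0^inf sin(k r) exp(-2 m r) dr
integral, _ = quad(lambda r: np.sin(k * r) * np.exp(-2 * m * r),
                   0, np.inf)
ft = 4.0 / k * integral

print(ft, 1.0 / (m**2 + k**2 / 4.0))  # the two values agree
\end{verbatim}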
\subsection{Confining potential}

In our theoretical treatment, our aim has been to avoid phenomenology except in the choice of the long-range confining potential, which cannot be derived sufficiently accurately by any known theoretical technique.
It is indeed remarkable that the results obtained from our field-theoretical perturbative potential, supplemented with a phenomenological confining potential, are in excellent overall agreement with the experimental data, including the $\Delta M_P$ splitting.
It should be noted that we have neglected the effect of the coupling of the energy levels to virtual decay channels, and possibly other small effects.
Such effects have presumably been absorbed into our phenomenological confining potential.

\acknowledgments
This work was supported in part by the U.S. Department of Energy under Grant No. DE-FG02-85ER40209 and the National Science Foundation under Grant No. PHY-93-07980.
W.~W.~R. would like to acknowledge conversations with R.~Lewis and G.~A.~Smith regarding the results of the E760 collaboration.

\cleardoublepage
\widetext