diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzeiyy" "b/data_all_eng_slimpj/shuffled/split2/finalzzeiyy" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzeiyy" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction} Let $\\S$ be a co-dimension two spacelike submanifold \nof a spacetime $M$. Under suitable orientation assumptions, there exists two families\nof future directed null geodesics issuing orthogonally from $\\S$. If one of the families has vanishing expansion along $\\S$ then\n$\\S$ is called a marginally outer trapped surface (or an apparent horizon). The notion of a\nmarginally outer trapped surface (MOTS) was introduced early on in the development of the\ntheory of black holes, and plays a fundamental role in quasi-local descriptions of \nblack holes; see e.g., \\cite{AK}. MOTSs arose in a more purely mathematical context \nin the work of Schoen and Yau \\cite{SY2} concerning the existence\nof solutions to the Jang equation, in connection with their proof\nof positivity of mass. \n\nMathematically, MOTSs may be viewed as spacetime\nanalogues of minimal surfaces in Riemannian manifolds. Despite the\nabsence of a variational characterization for MOTSs\nlike that for minimal surfaces, MOTSs have recently\nbeen shown to satisfy a number of analogous properties; see for example,\n\\cite{AMS0,AMS,AM1,AM2, AG, E, GS}. Of importance to many of these developments\nis the fact, first discussed in \\cite{AMS0}, that MOTSs admit a notion of stability analogous, in the analytic sense, to that of minimal \nsurfaces (cf., Section 2). \n\nIn this paper we consider applications of stable MOTSs to two problems\nin general relativity. In Section 3 we address the issue of how the size of a\nmaterial body tends to be restricted by the amount of matter contained within it. \nMore specifically, we consider an extension of a result of Schoen and Yau \\cite{SY}\nconcerning the size of material bodies to nonmaximal initial data sets. \nIn Section 4 we discuss a higher dimensional version of the lower area (entropy) bounds\nobtained by Gibbons \\cite{Gi} and Woolgar \\cite{Wo} for ``topological black holes\" which\ncan arise in spacetimes with negative cosmological constant. This extends a\nresult in \\cite{CG} to the general nontime-symmetric setting. We defer \nfurther discussion of these problems until Sections 3 and 4. In the next section we \npresent some basic background material on MOTSs relevant to our needs.\n\n\\section{Marginally outer trapped surfaces}\n\nWe recall here some basic definitions and facts about marginally outer\ntrapped surfaces. We refer the reader to \\cite{AMS, AM1, GS,G} for further\ndetails. \n\nLet $V$ be a spacelike hypersurface in an $n+1$ dimensional, $n \\ge 3$, spacetime $(M,g_M)$. \nLet $g = \\<\\,,\\,\\>$ and $K$ denote the induced metric and second fundamental form \nof $V$, respectively. To set sign conventions, for vectors $X,Y \\in T_pV$, $K$ is defined\nas, $K(X,Y) = \\<\\D_X u,Y\\>$, where $\\D$ is the Levi-Civita connection of $M$ and $u$ is the future directed timelike unit vector field to $V$. Note that we are using the `Wald',\nrather than the `ADM\/MTW', convention for the extrinsic curvature, i.e., positive ${\\rm tr}\\,K$ implies expansion.\n\nLet $\\S$ be a smooth compact hypersurface in $V$, perhaps with boundary $\\delta\\S$, \nand assume $\\S$ is two-sided in $V$. Then $\\S$ admits a smooth unit normal field\n$\\nu$ in $V$, unique up to sign. By convention, refer to such a choice as outward pointing. 
\nThen $l = u+\nu$ is a future directed outward pointing null normal vector field along $\S$, unique\nup to positive scaling. \n\nThe null second fundamental form of $\S$ with respect to $l$ is, for each $p \in \S$,\nthe bilinear form defined by,\n\beq\n\chi : T_p\S \times T_p\S \to \Bbb R , \qquad \chi(X,Y) = g_M(\D_Xl, Y) \,.\n\eeq\nThe null expansion $\th$ of $\S$ with respect to $l$ is obtained by tracing the\nnull second fundamental form, \n$\theta = {\rm tr}_h \chi = h^{AB}\chi_{AB} = {\rm div}\,_{\S} l$, where\n$h$ is the induced metric on $\S$. In terms of the initial data $(V,g,K)$,\n$\th = {\rm tr}_h K + H$, where $H$ is the mean curvature of $\S$ within\n$V$. It is well known that the sign of $\th$ is invariant under positive scaling \nof the null vector field $l$.\n\nIf $\th$ vanishes then $\S$ is called a marginally outer trapped surface (MOTS). As mentioned in the introduction, MOTSs may be viewed as spacetime analogues of minimal\nsurfaces in Riemannian geometry. In fact in the time-symmetric case ($K=0$)\na MOTS $\S$ is simply a minimal surface in $V$. Of particular relevance\nfor us is the fact that MOTSs admit a notion of stability analogous to that of minimal \nsurfaces, as we now discuss.\n\nLet $\S$ be a MOTS in $V$ with outward unit normal $\nu$. We consider\nvariations $t \to \S_t$ of $\S = \S_0$, \n$- \epsilon < t < \epsilon,$ with variation vector field \n$\calV = \left . \frac{\delta}{\delta t}\right |_{t=0} = \phi \nu$, $\phi \in C_0^{\infty}(\S)$, where\n$C_0^{\infty}(\S)$ denotes the space of smooth functions on $\S$ that vanish on the boundary of $\S$, if there is one. \n Let $\th(t)$ denote\nthe null expansion of $\S_t$ with respect to $l_t = u + \nu_t$, where $u$ is the future directed timelike unit normal to $V$ and $\nu_t$ is the\nouter unit normal to $\S_t$ in $V$. A computation shows,\n\beq\label{op}\n\left . \frac{\delta\th}{\delta t} \right |_{t=0} = L(\f) \nonumber \n\eeq\nwhere $L : C_0^{\infty}(\S) \to C_0^{\infty}(\S)$ is the operator,\n\beq\nL(\phi) = -\triangle \phi + 2\<X,\D\phi\> + \n\left( \frac12 S - (\mu + \<J,\nu\>) - \frac12 |\chi|^2+{\rm div}\, X - |X|^2 \right)\phi \,.\n\eeq\nIn the above, $S$ is the scalar curvature of $\S$, $\mu = G(u,u)$, where $G = \ric_M -\frac12R_Mg_M$ is the Einstein tensor of spacetime, $J$ is the vector field on $V$ dual to the one form $G(u,\cdot)$, and $X$ is the vector field on $\S$ defined by taking the tangential part of $\D_{\nu}u$ along $\S$.\n In terms of initial data, the Gauss-Codazzi equations\nimply, $\mu = \frac12\left(S_V + ({\rm tr}\,K)^2 - |K|^2\right)$ and \n$J = (\div K)^{\sharp} - \D({\rm tr}\, K)$. \n\nIn the time-symmetric case, $\th$ becomes the mean curvature $H$, the vector field\n$X$ vanishes and $L$ reduces to the classical stability operator of minimal surface theory.\nIn analogy with the minimal surface case, we refer to $L$ in \eqref{op} as the stability\noperator associated with variations in the null expansion $\th$. Although in general $L$ is not self-adjoint, its principal eigenvalue\footnote{If $\S$ has nonempty boundary, \nwe mean the principal Dirichlet eigenvalue.} (eigenvalue with smallest real part) \n$\l_1(L)$ is real. Moreover there exists\nan associated eigenfunction $\phi$ which is positive on $\S \setminus \delta\S$. \nContinuing the analogy with the minimal surface case,\nwe say that a MOTS is stable provided $\l_1(L) \ge 0$. 
(In the minimal surface case\nthis is equivalent to the second variation of area being nonnegative.) Note that if $\phi$ is positive, we are moving `outwards' from the MOTS $\S$, and if there are no outer trapped surfaces outside of $\S$, then there can exist no positive $\phi$ for which $L(\phi) < 0$. It follows in this case that $\S$ is stable \cite{AMS,AM1,G}.\n\nAs it turns out, stable MOTSs share a number of properties in common with minimal surfaces. This sometimes depends on the following fact. Consider the \n``symmetrized\" operator\n$L_0: C_0^{\infty}(\S) \to C_0^{\infty}(\S)$,\n\beq\label{symop}\nL_0(\phi) = -\triangle \phi + \left( \frac12 S - (\mu + \<J,\nu\>) - \frac12 |\chi|^2\right)\phi \,,\n\eeq\nformally obtained by setting $X= 0$ in \eqref{op}. Then arguments in \cite{GS}\nshow the following (see also \cite{AMS}, \cite{G}).\n\n\begin{prop}\label{eigen} \n$\l_1(L_0) \ge \l_1(L)$.\n\end{prop}\n\nWe will say that a MOTS is symmetric-stable if $\l_1(L_0) \ge 0$; hence ``stable\"\nimplies ``symmetric-stable\". \n\n\section{On the size of material bodies}\n\nIn this section we restrict attention to four dimensional spacetimes $M$,\nand hence three dimensional initial data sets $(V,g,K)$, $\dim V = 3$. \n\nIt is a long-held view in general relativity that the size of a material body\nis limited by the amount of matter contained within it. There are several\nprecise results in the literature supporting this point of view. In \cite{FG},\nit was shown, roughly, that the size of a stationary fluid body is bounded by the reciprocal\nof the difference of the density and rotation of the fluid. In this case ``size\" refers to the\nradius of the largest distance ball contained in the body. \n\nMore closely related to the considerations of the present paper is the result of Schoen and Yau \cite{SY}, which asserts that\nfor a maximal (${\rm tr}\, K = 0$) initial data set $(V,g,K)$, the size of a body \n$\Omega \subset V$ is bounded by the reciprocal of the square root of the\nminimum of the energy density $\mu$ on $\Om$.\nIn this case ``size\" refers to the radius of the largest tubular neighborhood in $\Omega$\nof a loop contractible in $\Om$ but not contractible in the tubular neighborhood.\nAs was discussed in \cite{OM1},\nthis notion of size can be replaced by a notion based on the size of the largest\nstable minimal surface contained in $\Omega$.\footnote{This is formulated most simply\nwhen $\Omega$ is bounded and {\it mean convex}, meaning that the boundary\nof $\Omega$ has mean curvature $H > 0$. Then geometric measure theory\nguarantees the existence of many smooth least area surfaces contained in $\Omega$.} As argued there, this in general\ngives a larger measure of the size of a body, but must still satisfy the same \nSchoen-Yau bound. The aim of this section is to observe that a similar result holds\nwithout the maximality assumption if one replaces minimal surfaces with MOTSs.\n\nLet $V$ be a $3$-dimensional spacelike hypersurface, which gives rise to the\ninitial data set $(V,g,K)$, as in Section 2. Consider a {\it body} in $V$, by which we mean a\nconnected open set $\Om \subset V$ with smooth boundary $\delta\Om$. \n We describe a precise\nmeasure of the size of $\Omega$ in terms of MOTSs contained within $\Omega$. \nLet $\S$ be a compact connected surface with boundary $\delta\S$ contained in $\Omega$. 
Let $x$ be a\npoint in $\S$ furthest from $\delta\S$ in $\Omega$, i.e., $x$ satisfies, $d_{\Omega}(x, \delta\S) \n= \sup_{y \in \S}\, d_{\Omega}(y, \delta\S)$, where $d_{\Omega}$ is distance measured\nwithin $\Omega$. Then the (ambient) radius of $\S$, $R(\S)$, is defined as $R(\S) = d_{\Omega}(x, \delta\S)$. \n\nWe then define the radius of $\Om$, $R(\Om)$, as follows,\n\beq\nR(\Om) = \sup_{\S} R(\S) \,,\n\eeq\nwhere the sup is taken over all compact connected symmetric-stable MOTSs with boundary contained\nin $\Om$. Now this can only be a reasonable measure of the size of\n$\Om$ if there is a plentiful supply of large symmetric-stable MOTSs contained\nin $\Omega$. But in fact a recent result of Eichmair \cite{E} guarantees the existence of such MOTSs, subject to a natural convexity condition on the body $\Om$. \nWe say that $\Om$\nis a {\it null mean convex body} provided its boundary $\delta\Om$ has positive \noutward null expansion, $\th_+ > 0$, and negative inward null expansion, $\th_-< 0$.\nThe following is an immediate consequence of Theorem 5.1 in \cite{E}.\n\n\begin{thm} Let $\Omega$ be a relatively compact null mean convex body, with connected boundary, in the\n$3$-dimensional initial data set $(V,g,K)$. Let $\s$ be a closed curve on $\delta\Om$\nthat separates $\delta\Om$ into two connected components. Then there exists a smooth \nsymmetric-stable MOTS $\S$ contained in $\Om$ with boundary~$\s$.\n\end{thm}\n\nThe fact that $\S$ is symmetric-stable follows from a straightforward modification of arguments in \cite[p. 254]{SY2}; see also the discussion at the end of Section 4 in \cite{E}. \nIn fact, a variation of the arguments in \cite[Section 4]{AM2} may well imply that the MOTS \n$\S$ constructed in Eichmair's theorem is actually stable. If that were the case, then\n$R(\Omega)$ could be defined in terms of stable, rather than symmetric-stable, MOTS, which we believe would be conceptually preferable. \n\nWe now state our basic result about the size of bodies.\n\n\begin{thm}\label{bound} Let $\Om$ be a body in the initial data set $(V,g,K)$, and suppose \nthere exists $c> 0$ such that $\mu - |J| \ge c$ on $\Om$. Then, \n\beq\nR(\Om) \le \frac{2\pi}{\sqrt {3}}\cdot \frac1{\sqrt{c}} \,.\n\eeq\n\end{thm} \n\n\noindent \emph{Proof:\ } The proof is similar to the proof of Theorem 1 in \cite{SY}. The latter\nfollows essentially as a special case of the more general Proposition 1 in \cite{SY}.\nFor the convenience of the reader we present here a simple direct proof of Theorem\n\ref{bound}, which involves a variation of the arguments in \cite{SY}.\n\nLet $\S$ be a symmetric-stable MOTS with boundary $\delta\S$ in $\Om$; hence\n$\l_1 = \l_1(L_0) \ge~0$. Choose an associated eigenfunction $\psi$ such that\n$\psi > 0$ on $\S\setminus \delta\S$. In fact, by perturbing the boundary $\delta\S$ ever so slightly\ninto $\S$, we may assume without loss of generality that $\psi > 0$ on $\S$. Substituting\n$\phi = \psi$ into Equation \eqref{symop}, we obtain,\n\beq\label{laplace}\n\triangle \psi = - (\mu + \<J,\nu\> + \frac12 |\chi|^2 + \l_1 - \kappa) \psi\n\eeq\nwhere $\kappa= \frac12 S$ is the Gaussian curvature of $\S$ in the induced metric $h$.\n\nNow consider $\S$ in the conformally related metric $\tilde h = \psi^2 h$. 
The Gaussian\ncurvature of $(\S, \tilde h)$ is related to the Gaussian curvature of $(\S, h)$ by,\n\beq\label{relate}\n\tilde \kappa = \psi^{-2} \kappa - \psi^{-3} \triangle \psi + \psi^{-4} |\D\psi|^2 \,.\n\eeq\nCombining \eqref{laplace} and \eqref{relate} we obtain,\n\beq\label{gauss}\n\tilde \kappa = \psi^{-2}(Q + \psi^{-2} |\D\psi|^2) \,,\n\eeq\nwhere, \n\beq\label{Q}\nQ = \mu + \<J,\nu\> + \frac12 |\chi|^2 + \l_1 \,.\n\eeq \n\nNow let $x$ be a point in $\S$ furthest from $\delta\S$ in $\Om$, as in the \ndefinition of $R(\S)$. Let $\g$ be a shortest curve in $(\S, \tilde h)$ from $x$\nto $\delta\S$. Then $\g$ is a geodesic in $(\S, \tilde h)$, and by Synge's formula \cite{ON}\nfor the second variation of arc length, we have along $\g$,\n\beq\label{ineq}\n\int_0^{\tilde \ell} \left(\frac{df}{d\tilde s}\right)^2 - \tilde \kappa f^2\, d \tilde s \ge 0 \,,\n\eeq\nfor all smooth functions $f$ defined on $[0,\tilde \ell]$ that vanish at the end points, where\n$\tilde \ell$ is the $\tilde h$-length of $\g$ and $\tilde s$ is $\tilde h$-arc length along $\g$.\nBy making the change of variable $s = s(\tilde s)$, where $s$ is $h$-arc length along $\g$,\nand using Equation \eqref{gauss}, we arrive at,\n\beq\label{ineq2}\n\int_0^{\ell} \psi^{-1}(f')^2 - (Q + \psi^{-2} |\D\psi|^2)\psi^{-1} f^2 \, d s \ge 0 \,,\n\eeq\nfor all smooth functions $f$ defined on $[0,\ell]$ that vanish at the endpoints, where\n$\ell$ is the $h$-length of $\g$, and $' = \frac{d}{ds}$.\n\nSetting $k= \psi^{-1\/2}f$ in \eqref{ineq2}, we obtain after a small\nmanipulation,\n\beq\label{ineq3}\n\int_0^{\ell} (k')^2 - Q\,k^2+ \psi^{-1}\psi'kk' -\frac34\psi^{-2}(\psi')^2k^2 \, ds \ge 0 \,,\n\eeq\nwhere $\psi'$ is shorthand for $(\psi \circ \g)'$, etc.\nCompleting the square on the last two terms of the integrand,\n\beq\n\frac34\psi^{-2}(\psi')^2k^2 - \psi^{-1}\psi'kk' = \left(\frac{\sqrt{3}}2 \psi^{-1}\psi'k -\frac1{\sqrt{3}} k'\right)^2 - \frac13(k')^2 , \nonumber\n\eeq\nwe see that \eqref{ineq3} implies,\n\beq\label{ineq4}\n\int_0^{\ell} \frac43(k')^2 - Q\,k^2 \, ds \ge 0 \,.\n\eeq\nSince, from \eqref{Q}, we have that $Q \ge \mu - |J| \ge c$,\n\eqref{ineq4} implies,\n\beq\label{ineq5} \n\frac43\int_0^{\ell} (k')^2 \,ds \ge c \int_0^{\ell} k^2 \, ds \,.\n\eeq\nSetting $k = \sin \frac{\pi s}{\ell}$ in \eqref{ineq5} then gives,\n\beq\n\ell \le \frac{2\pi}{\sqrt {3}}\cdot \frac1{\sqrt{c}} \,.\n\eeq \nSince $R(\S) \le \ell$, the result follows. \hfill $\Box$ \medskip\n\n\section{On the area of black holes in asymptotically anti-de Sitter spacetimes}\n\nA basic step in the classical black hole uniqueness theorems is Hawking's theorem\non the topology of black holes \cite{HE} \nwhich asserts that cross sections of the event horizon\nin $3+1$-dimensional asymptotically flat stationary black hole spacetimes obeying the\ndominant energy condition are topologically 2-spheres. As shown by Hawking \cite{Hawking},\nthis conclusion also holds for outermost MOTSs in spacetimes that\nare not necessarily stationary. In \cite{GS, G} a natural\ngeneralization of these results to higher dimensional spacetimes was \nobtained by showing that cross sections of the event horizon (in the\nstationary case) and outermost MOTSs (in the general case)\nare of positive Yamabe type, i.e., admit metrics of positive\nscalar curvature. 
This implies many well-known restrictions on\nthe topology, and is consistent with recent examples of five\ndimensional stationary black hole spacetimes with horizon topology\n$S^2 \\times S^1$ \\cite{Emp}. \n\nThese results on black hole topology depend crucially on the dominant\nenergy condition. Indeed, there is a well-known class of $3+1$-dimensional static locally anti-de Sitter black hole spacetimes which are solutions to the vacuum Einstein equations \nwith negative cosmological constant $\\Lambda$\nhaving horizon topology of arbitrary genus $g$ \\cite{Brill, Mann}. Higher dimensional versions\nof these topological black holes have been considered in \\cite{Bir, Mann}. However,\nas Gibbons pointed out in \\cite{Gi}, although Hawking's theorem does not hold in the asymptotically locally anti-de Sitter setting, his basic argument still\nleads to an interesting conclusion. Gibbons showed that for $3$-dimensional time-symmetric initial data sets that\ngive rise to spacetimes satisfying the Einstein equations with $\\Lambda <0$, \noutermost MOTSs $\\S$ (which are stable minimal surfaces in this case) must satisfy the area bound,\n\\beq\\label{areabound}\n{\\rm Area}(\\S)\\ge \\frac{4\\pi(g-1)}{|\\Lambda|} \\, ,\n\\eeq\nwhere $g$ is the genus of $\\S$. Woolgar \\cite{Wo} obtained a similar bound in the general, nontime-symmetric, case.\nHence, at least for stationary black holes, black hole entropy has a lower bound depending on a global topological invariant. \n\n\nIn \\cite{CG} Cai and Galloway considered an extension of Gibbon's result to higher dimensional spacetimes. There it was shown, for time-symmetric initial data, that a bound similar to that obtained by Gibbons still holds, but where the genus is replaced by the\nso-called $\\s$-constant (or Yamabe invariant). The $\\s$-constant is a diffeomorphism invariant of smooth compact manifolds that in dimension two reduces to a multiple of the \nEuler characteristic; see \\cite{CG} and references therein for further details. The\naim of this section is to observe that this result extends to the general, nontime-symmetric case.\n\nWe begin by recalling the definition of the $\\s$-constant. \nLet $\\S^{n-1}$, $n\\ge 3$, be a smooth compact (without boundary) $(n-1)$-dimensional\nmanifold. If $g$ is a Riemannian metric on $\\S^{n-1}$, let $[g]$ denote the conformal\nclass of $g$. The Yamabe constant with respect to $[g]$, which we denote by $\\calY[g]$, is the number, \n\\beq\\label{yam}\n\\calY[g] = \\inf_{\\tilde g\\in [g]} \n\\frac{\\int_{\\S}S_{\\tilde g}d\\mu_{\\tilde g}}\n{(\\int_{\\S}d\\mu_{\\tilde g})^{\\frac{n-3}{n-1}}}\\, ,\n\\eeq \nwhere $S_{\\tilde g}$ and $d\\mu_{\\tilde g}$ are respectively the scalar curvature and volume measure of $\\S^{n-1}$ \nin the metric $\\tilde g$. The expression involving integrals is just the volume-normalized total\nscalar curvature of $(\\S,\\tilde g)$.\nThe solution to the Yamabe problem, due to Yamabe, Trudinger, Aubin and Schoen, \nguarantees that the infimum \nin (\\ref{yam}) is achieved by a metric of constant scalar curvature. \n\nThe $\\s$-constant of $\\S$ is \n defined by taking the supremum of the Yamabe constants over all conformal\nclasses,\n\\beq\n\\s(\\S) = \\sup_{[g]} \\calY[g] \\, .\n\\eeq \nAs observed by Aubin, the supremum is finite, and in fact bounded above in terms of the volume\nof the standard unit $(n-1)$-sphere $S^{n-1} \\subset \\Bbb R^n$. 
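Explicitly, Aubin's bound states that for every conformal class $[g]$ on $\S^{n-1}$,\n\beq\n\calY[g] \, \le \, (n-1)(n-2)\, {\rm vol}(S^{n-1})^{\frac2{n-1}} \,, \nonumber\n\eeq\nthe right hand side being the Yamabe constant of the round conformal class on $S^{n-1}$; in particular, $\s(\S)$ is bounded above by the same quantity.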
The $\\s$-constant divides\nthe family of compact manifolds into three classes according to: (1) $\\s(\\S) > 0$, (2) $\\s(\\S) = 0$,\nand (3) $\\s(\\S) < 0$. \n\nIn the case $\\dim \\S =2$, the Gauss-Bonnet theorem implies $\\s(\\S) = 4\\pi\\chi(\\S)=8\\pi(1-g)$. \nNote that the inequality \\eqref{areabound} only gives information when $\\chi(\\S) < 0$.\nCorrespondingly, in higher dimensions, we shall only be interested in the case \nwhen $\\s(\\S) < 0$. It follows from the resolution of the Yamabe problem that\n$\\s(\\S) \\le 0$ if and only if $\\S$ does not carry a metric of positive scalar curvature.\nIn this case, and with $\\dim \\S = 3$, Anderson \\cite{An} has shown, as an application of Perlman's work on\nthe geometrization conjecture, that $\\s(\\S)$ is \ndetermined by the volume of the ``hyperbolic part'' of $\\S$, which when present\nimplies $\\s(\\S) < 0$. In particular, all closed\nhyperbolic $3$-manifolds have negative $\\s$-constant.\n\nWe now turn to the spacetime setting. In what follows, all MOTSs are compact\nwithout boundary. The following theorem extends Theorem 5 in \\cite{CG}\nto the nontime-symmetric case. \n\n\\begin{thm}\\label{volbound} \nLet $\\S^{n-1}$ be a stable MOTS in the initial data set $(V^n,g,K)$, $n \\ge 4$,\nsuch that $\\s(\\S) < 0$. Suppose there exists $c > 0$, such that $\\mu +\\ \\ge -c$.\nThen the $(n-1)$-volume of $\\S$ satisfies,\n\\beq\n{\\rm vol}(\\S^{n-1}) \\ge \\left(\\frac{|\\s(\\S)|}{2c}\\right)^{\\frac{n-1}2} \\, .\n\\eeq \n\\end{thm}\n\nWe make some comments about the assumptions. Suppose $V$ is a spacelike hypersurface in a spacetime $(M,g_M)$, satisfying the Einstein equation with cosmological\nterm,\n\\beq\nG + \\Lambda g_M = \\calT\n\\eeq\nwhere, as in Section 2, $G = \\ric_M -\\frac12R_Mg_M$ is the Einstein tensor, and $\\calT$\nis the energy-momentum tensor. \nThus,\nsetting $\\ell = u+\\nu$, we have along $\\S$ in $V$,\n\\begin{align}\n\\mu +\\ &= G(u,\\ell) = \\calT(u,\\ell) + \\Lambda \\nonumber \\\\\n& \\ge - |\\Lambda| \\,,\n\\end{align}\nprovided $\\Lambda < 0$ and $\\calT(u,\\ell) \\ge 0$. Hence, when $\\Lambda < 0$ and the fields giving rise to $\\calT$ obey the dominant energy condition, the energy condition in Theorem \\ref{volbound} is satisfied with $c = |\\Lambda|$. \n\nWe briefly comment on the stability assumption. As defined in \\cite{G}, a MOTS $\\S$ is {\\it weakly outermost} in $V$ provided there are no strictly outer trapped surfaces outside of, and homologous to $\\S$ in $V$. Weakly outermost MOTSs are necessarily stable, as noted in Section 2, \nand arise naturally in a number of physical\nsituations. For example, smooth compact cross sections of the\nevent horizon in \nstationary black hole spacetimes obeying the null energy condition, are \nnecessarily weakly outermost MOTSs.\nMoreover, results of Andersson and Metzger \\cite{AM2} provide natural criteria for the existence of weakly outermost MOTSs in general black hole spacetimes containing\ntrapped regions. \n\n\\noindent \\emph{Proof:\\ }[Proof of Theorem \\ref{volbound}] The proof is a simple modification of the\nproof of Theorem 5 in \\cite{CG}.\nBy the stability assumption and Proposition\n\\ref{eigen}, we have $\\l_1(L_0) \\ge 0$, where $L_0$ is the operator given in \\eqref{symop}. 
\nThe Rayleigh formula,\n$$\n\l_1(L_0) = \inf_{\phi \ne 0} \frac{ \int_{\S} \phi L_0(\phi)d\mu}{\int_{\S} \phi^2 d\mu}\n$$ \ntogether with an integration by parts yields the {\it stability inequality},\n\beq\label{stab}\n\int_{\S} |\D \f|^2 + \left( \frac12 S - (\mu + \<J,\nu\>) - \frac12 |\chi|^2\right)\phi^2 \, d\mu \ge 0 \, ,\n\eeq\nfor all $\phi \in C^{\infty}(\S)$.\n\nThe Yamabe constant $\calY[h]$, where $h$ is the induced metric on $\S$, can\nbe expressed as \cite{Be},\n\beq\label{yam2}\n\calY[h] = \inf_{\f\in C^{\8}(\S), \f>0} \frac{\int_{\S} (\frac{4(n-2)}{n-3}|\D\f|^2 +S\f^2)\, d\mu}\n{(\int_{\S}\f^{\frac{2(n-1)}{n-3}}\, d\mu)^{\frac{n-3}{n-1}}} \, .\n\eeq\n\nNoting that $\frac{4(n-2)}{n-3}> 2$, the stability inequality implies,\n\begin{align}\label{ineqc}\n\int_{\S} (\frac{4(n-2)}{n-3}|\D \f|^2 + S\f^2) \, d\mu & \ge \int_{\S}\n2( \mu + \<J,\nu\> )\phi^2\,d\mu\n\nonumber\\\n& \ge - 2c \int_{\S} \f^2\,d\mu \, .\n\end{align}\n\nBy H\"older's inequality we have,\n\beq\n\int_{\S} \f^2\,d\mu \le \left(\int_{\S} \f^{\frac{2(n-1)}{n-3}}\, d\mu\right)^{\frac{n-3}{n-1}}\n\left(\int_{\S} 1 \, d\mu\right)^{\frac2{n-1}} \, ,\n\eeq\nwhich, when combined with (\ref{ineqc}), gives,\n\beq\n\frac{\int_{\S} (\frac{4(n-2)}{n-3}|\D\f|^2 + S\f^2)\, d\mu}\n{(\int_{\S}\f^{\frac{2(n-1)}{n-3}}\, d\mu)^{\frac{n-3}{n-1}}} \ge - 2c\,(\mbox{vol($\S$)})^{\frac2{n-1}} \, .\n\eeq\nMaking use of this inequality in (\ref{yam2}) gives,\n$\calY[h] \ge - 2c\, (\mbox{vol($\S$)})^{\frac2{n-1}}$, or, equivalently,\n\beq\n{\rm vol}(\S^{n-1}) \ge \left(\frac{|\calY[h]|}{2c}\right)^{\frac{n-1}2} \, .\n\eeq \nSince $|\s(\S)| \le |\calY[h]|$, the result follows.\hfill $\Box$ \medskip\n\n\section*{Acknowledgements}\n\n\vspace{-.1in}\nThis work was supported in part by NSF grant\nDMS-0708048 (GJG) and SFI grant 07\/RFP\/PHYF148 (NOM). \n\n\n\n\\providecommand{\\bysame}{\\leavevmode\\hbox to3em{\\hrulefill}\\thinspace}\n\\providecommand{\\MR}{\\relax\\ifhmode\\unskip\\space\\fi MR }\n\\providecommand{\\MRhref}[2]{%\n \\href{http:\/\/www.ams.org\/mathscinet-getitem?mr=#1}{#2}\n}\n\\providecommand{\\href}[2]{#2}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:introduction}\n\n\n\n\n\n \n\nStock Movement Prediction (SMP) \nis a hot topic in the \\textit{Fintech} area, since investors continuously\nattempt to predict the future stock trend of listed companies in order to seek maximized profit in the volatile financial market \\cite{Li2017Web,Hu2018Listening,Cheng2021Modeling,Wang2021Coupling}. \nThe task has spurred the interest of researchers over the years to develop better predictive models \\cite{Wang2021Coupling}. \nIn particular, the application of machine learning approaches yields a promising performance for the SMP task \\cite{Feng2019Temporal,Ye2020Multi-Graph}. \nPrevious studies on stock movement prediction in both the finance and AI research fields \nrely on time-series analysis techniques \nusing a stock's own historical prices \n(e.g. \\textit{opening price, closing price, volume}, etc.)\n\\cite{Lin2017Hybrid,Feng2019Enhancing}. 
\nAccording to the Efficient Market Hypothesis (EMH), which holds that the financial market is informationally efficient \\cite{Malkiel1970Efficient}, other researchers therefore mine, besides these stock trading factors, more indicative features from outside-market data such as web media \\cite{Li2017Web}, including news information \\cite{Ming2014Stock,Liu2018Hierarchical,Li2020A} and social media \\cite{Bollen2011Twitter,Si2013Exploiting,Nguyen2015Sentiment}, while ignoring the diffusion of stock fluctuations from related companies, which is also known as the momentum spillover effect \\cite{Ali2020Shared} in finance. \n\n\nRecent studies attempt to model stock momentum spillover via Graph Neural Networks (GNN) \\cite{Velickovic2018Graph}. However, most of them only consider the simple explicit relations among related companies \\cite{Feng2019Temporal,Ye2020Multi-Graph,Sawhney2020Spatiotemporal,Li2020Modeling}, which inevitably fail to model the complex connections of listed companies in the real financial market, such as the implicit relation and the associated executives-based meta-relations \\cite{cai2016price,jing2021online}.\n\n \n\n\n\n\n\nTo address this issue, we construct a more comprehensive {M}arket {K}nowledge {G}raph (MKG), which consists of a considerable number of triples in the form of ({\\emph{head entity}}, {\\emph{relation}}, {\\emph{tail entity}}),\nindicating that there exists a relation between the two entities. \nDifferent from previous graphs in other SOTA works \\cite{Chen2018Incorporating,Feng2019Temporal,Ye2020Multi-Graph,Li2020Modeling,Sawhney2021Stock,Cheng2021Modeling}, the newly constructed MKG has two essential characteristics: \\textbf{(1)} \\textit{\\textbf{Bi-typed}}, i.e. containing the significant associated executive entities aside from the ordinary company entities; \\textbf{(2)} \\textit{\\textbf{Hybrid-relational}}, i.e. providing an additional implicit relation among listed companies aside from their typical explicit relations. \nFigure \\ref{figure-instance} shows a toy example of the MKG (see Section \\ref{section-mkgc} for more details).\n\n\n\n\n\nAfterward, to learn the stock\\footnote{The terms ``listed company\" and ``stock\" are used interchangeably.} momentum spillover signals on such a bi-typed hybrid-relational MKG for stock movement prediction, we propose novel \\textbf{D}ual \\textbf{A}ttention \\textbf{N}etworks (\\textsc{DanSmp}), as shown in Figure \\ref{figure-DanSmp-Model}-II.\nSpecifically, the proposed model \\textsc{DanSmp} is equipped with dual attention modules that are able to learn the inter-class interaction between listed companies and associated executives, as well as their own complex intra-class interactions, alternately. \nDifferent from previous methods that can only model a homogeneous stock graph \\cite{Chen2018Incorporating,Cheng2021Modeling} or the heterogeneity of explicit stock relations \\cite{Nelson2017Stock,Chen2019Investment,Sawhney2021Stock}, our method is able to learn from bi-typed heterogeneous entities and hybrid-relations in the newly constructed stock market graph for its spillover effects.
\nThe comprehensive comparison between the existing state-of-the-art (SOTA) methods with our newly proposed \\textsc{DanSmp} model in terms of used market signals and main ideas is shown in Table \\ref{table-model-comparison-characteristics}, demonstrating the distinguished advantage of our work.\n\n\n\n\n\n\n\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{Figure1.pdf}\n \\caption{Example of a bi-typed (i.e. \\textit{listed companies}, and \\textit{executives}) hybrid-relational (i.e. \\textit{explicit relations}, and \\textit{implicit relation}) market knowledge graph (MKG).\n \n The relational information in MKG is essential for stock prediction but has not been well utilized in previous works.\n }\n \\label{figure-instance}\n\\end{figure}\n\n\n\nWe collect public data and construct two new SMP datasets (named \\textbf{CSI100E}\\footnote{\"E\" denotes an extension version.} and \\textbf{CSI300E}) based on Chinese Stock Index to evaluate the proposed method, since no existing benchmark datasets can satisfy our need. \nAside from the typical stock historical prices and media news, our newly published benchmark datasets CSI100E and CSI300E also provide rich market knowledge graph as mentioned above.\nThe empirical experimental results on CSI100E and CSI300E against nine SOTA methods demonstrate the better performance of our model \\textsc{DanSmp} with MKG. \nThe ablation studies reaffirm that the performance gain mainly comes from the use of the associated executives, and additional implicit relation among companies in MKG via the proposed \\textsc{DanSmp}.\n\nThe {contributions} of this paper are threefold:\n\n\\begin{itemize}\n \\item To model stock momentum spillover via the complex relations among companies in real market, we first construct a novel market knowledge graph. To the best of our knowledge, this study is the first attempt to explore such bi-typed hybrid-relational knowledge graph of stock via heterogeneous GNNs for its spillover effects.\n \n \n \\item We then propose \\textsc{DanSmp}, a novel Dual Attention Networks to learn the stock momentum spillover features based on the newly constructed bi-typed hybrid-relational MKG for stock prediction, which is also a non-trivial and challenging task.\n \n \n \n \n \\item We propose two new benchmark datasets (CSI100E and CSI300E) to evaluate our method, which are also expected to promote Fintech research field further. The empirical experiments on our constructed datasets demonstrate our method can successfully improve stock prediction with bi-typed hybrid-relational MKG via the proposed \\textsc{DanSmp}\\footnote{The source code and our newly constructed benchmark datasets (CSI100E and CSI300E) will be released on \\url{Github}: {https:\/\/github.com\/trytodoit227\/\\textsc{DANSMP}}}.\n \n\\end{itemize}\n\n\n\n\nThe rest of the paper is organized as follows. In Section \\ref{section-related-work}, we summarize and compare the related work. In Section \\ref{section-market-signal}, we introduce the market signals for stocks prediction. Section \\ref{section-method} introduces the details of the proposed methodology. \nExtensive experiments are conducted to evaluate the effectiveness of the proposed model in Section \\ref{section-experiments}. Finally, we conclude the paper in Section \\ref{section-conclusion}.\n\n\n\n\n\n \n\n \n \n \n\n\n\n\n\n\\section{Related Work}\n\\label{section-related-work}\n\nIn this section, we evaluate the existing relevant research on stock prediction. 
\nStock movement prediction (SMP) has received a great deal of attention from both investors and researchers since it helps investors to make good investment decisions \\cite{Rather2015Recurrent,Ding2016Knowledge-driven,Li2016A,Deng2019Knowledge-Driven}. In general, traditional SMP methods mainly can be categorized into two classes: {technical analysis} and {fundamental analysis}, according to the different types of the available stock own information they mainly used. Another major aspect for yielding better stock prediction is to utilize the stock connection information \\cite{Chen2018Incorporating,Ye2020Multi-Graph,Sawhney2020Spatiotemporal,Cheng2021Modeling}. We review them in the following.\n\n\n\\subsection{Technical Analysis}\nTechnical analysis takes time-series historical market data of a stock, such as trading price and volume, as features to make prediction \\cite{Edwards2018Technical,Chen2019Investment}. The basic idea behind this type of approach is to discover the hidden trading patterns that can be used for SMP. Most recent methods of this type predict stock movement trend using deep learning models \\cite{Nelson2017Stock,Bao2017A,Lin2017Hybrid}. \nTo further capture the long-term dependency in time series, the Recurrent Neural Networks (RNN) especially Long Short-Term Memory networks (LSTM) have been usually leveraged for prediction \\cite{Gao2016Stock}. \n\\citet{Bao2017A} presented a deep learning framework for stock forecasting using stacked auto-encoders and LSTM. \n\\citet{Nelson2017Stock} studied the usage of LSTM networks to predict future trends of stock prices based on the price history, alongside with technical analysis indicators. \n\\citet{Lin2017Hybrid} proposed an end-to-end hybrid neural networks that leverage convolutional neural networks (CNNs) and LSTM to learn local and global contextual features respectively for predicting the trend of time series. \n\\citet{Zhang2017Stock} proposed a state frequency memory recurrent network to capture the multi-frequency trading patterns for stock price prediction. \n\\citet{Feng2019Enhancing} proposed to employ adversarial training and add perturbations to simulate the stochasticity of price variable, and train the model to work well under\nsmall yet intentional perturbations. \nDespite their achieved progress, however, technical analysis faces an issue that it is incapable of unveiling the rules that govern the fluctuation of the market beyond stock price data.\n\n\\subsection{Fundamental Analysis}\nOn the contrary, fundamental analysis takes advantage of information from outside market price data, such as economic and financial environment, and other qualitative and quantitative factors \\cite{Hu2018Listening,Zhang2018Improving,Xu2018Stock}. Many methods are proposed to explore the relation between stock market and web media, e.g., news information, and social media opinion \\cite{Li2017Web,Akita2016Deep,Zhang2018Improving,Wu2018Hybrid}. For instance, \n\\citet{Ming2014Stock} mined text information from Wall Street Journal for SMP. \n\\citet{Akita2016Deep} presented a deep learning method for stock prediction using numerical and textual information. \n\\citet{Vargas2017Deep} proposed a deep learning method for stock market prediction from financial news articles. \n\\citet{Xu2018Stock} put forward a novel deep generative model jointly exploiting text and price signals for this task. \n\\citet{Liu2018Hierarchical} presented a hierarchical complementary attention network for predicting stock price movements with news. 
\n\\citet{Li2020A} proposed a multimodal event-driven LSTM model for stock prediction using online news. \nSome researchers mined market media news via analyzing its sentiment, and used it for SMP \\cite{Sedinkina2019Automatic,Qin2019What}. For instance, \\citet{Bollen2011Twitter} analyzed twitter mood to predict the stock market. \n\\citet{Si2013Exploiting} exploited topics based on twitter sentiment for stock prediction. \n\\citet{Nguyen2015Sentiment} incorporated the sentiments of the specific topics of the company into the stock prediction model using social media. \n\\citet{Rekabsaz2017Volatility} investigated the sentiment of annual disclosures of companies in stock markets to forecast volatility. \n\\citet{Qin2019What} proposed a multimodal method that takes CEO's vocal features, such as emotions and voice tones, into consideration. \nIn this paper, we extract the signals from stock historical prices and media news sentiments as the sequential embeddings of stocks.\n\n\n\n\n\n\n\n\n\\subsection{Stock Relations Modeling}\n\\label{Section-srm}\nRecent SMP studies take stock relations into consideration \\cite{Chen2018Incorporating,Sawhney2020Spatiotemporal,Li2020Modeling,Cheng2021Modeling}.\nFor instance, \n\\citet{Chen2018Incorporating} proposed to incorporate corporation relationship via graph convolutional neural networks for stock price prediction. \n\\citet{Feng2019Temporal} captured the stock relations in a time-sensitive manner for stock\nprediction. \n\\citet{Kim2019HATS} proposed a hierarchical attention network for stock prediction using relational data. \n\\citet{Li2019Multi-task} presented a multi-task recurrent neural network (RNN) with high-order Markov random fields (MRFs) to predict stock price movement direction using stock's historical records together with its correlated stocks. \n\\citet{Li2020Modeling} proposed a LSTM Relational Graph Convolutional Network (LSTM-RGCN) model, which models the connection among stocks with their correlation matrix. \n\\citet{Ye2020Multi-Graph} encoded multiple relationships among stocks into graphs based on financial domain knowledge and utilized GCN to extract the cross effect based on these pre-defined graphs for stock prediction. \n\\citet{Sawhney2020Spatiotemporal} proposed a spatio-temporal hypergraph convolution network for stock movement forecasting. \n\\citet{Cheng2021Modeling} proposed to model the momentum spillover effect for stock prediction via attribute-driven graph attention networks. \nDespite the substantial efforts of these SOTA methods, surprisingly, most of them only focus on modeling the momentum spillover via the explicit relations among stocks, while ignoring their complex relations in real market.\n\n\nTable \\ref{table-model-comparison-characteristics} summarizes the key advantages of our model, comparing with a variety of previous state-of-the-art (SOTA) studies in terms of the used market signals, their methods and GNN types. (1) Different from previous studies, our method takes advantage of all three types of stock market signals, including stock historical data, media news, and market knowledge graph. In particular, we construct a more comprehensive heterogeneous market graph that contains explicit relations, implicit relations and executive relations. 
(2) Different from most existing models that can only {model homogeneous stock graph} \\cite{Chen2018Incorporating,Cheng2021Modeling}, or heterogeneity of stock explicit relations \\cite{Feng2019Temporal,Sawhney2020Deep,Ye2020Multi-Graph,Sawhney2021Stock}, which fall down in modeling heterogeneity of entities in real market, we propose a novel dual attention networks that is able to model bi-typed heterogeneous entities and hybrid-relations in newly constructed market graph of stock for its spillover effects. (3) To the best of our knowledge, this work is the first attempt to study stock movement prediction via heterogeneous GNNs. \n\n\n\\begin{sidewaystable}[thp]\n~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\ ~\\\\~\\\\ \n \\begin{center}\n \\caption{Comparison between several SOTA methods and the proposed model in terms of used market signals and main ideas.}\n \\label{table-model-comparison-characteristics}\n \\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}}\n \\resizebox{\\textwidth}{!}{\n \\begin{tabular}{l|l|c|c|c|c|c|c|c|c|c}\n \\toprule\n \\multirow{3}*{\\bf{Literature} } & \\multirow{3}*{\\bf{Main ideas} }& \\multirow{3}*{\\bf{Market} } & \\multirow{3}*{\\bf{Metrics} } & \\multirow{3}*{\\bf{Method} }&\n \\multirow{3}*{\\bf{GNN Types}}&\n \\multicolumn{5}{c}{\\textbf{Stock Market Signals}} \\\\\n \\cline{7-11}\n & & & & & &\\multirow{2}*{\\tabincell{c}{\\textbf{Historical}\\\\ \\textbf{Data}} }&\\multirow{2}*{\\tabincell{c}{\\textbf{Media}\\\\ \\textbf{News}} } & \\multicolumn{3}{c}{\\textbf{Market Knowledge Graph}} \\\\\n \\cline{9-11}\n & & & & & & & &\\tabincell{c}{\\textbf{Explicit}\\\\ \\textbf{Relation}} & \\tabincell{c}{\\textbf{Implicit}\\\\ \\textbf{Relation}} &\\tabincell{c}{\\textbf{Executives}\\\\ \\textbf{Relation}}\\\\\n \\midrule\n \\midrule\n \\tabincell{l}{\\textbf{EB-CNN}\\\\ \\cite{ding2015deep}}&\\tabincell{l}{$\\bullet$ Neural tensor network for \\\\ \\quad learning event embedding \\\\ $\\bullet$ Deep CNN to model the \\\\ \\quad combined influence} &\\textbf{S\\&P500} & \\tabincell{c}{\\textbf{DA},\\\\ \\textbf{MCC},\\\\ \\textbf{Profit}}&\\tabincell{c}{\\textbf{Open IE},\\\\ \\textbf{CNN}} & \\textbf{-}&\\XSolidBrush & \\Checkmark & \\XSolidBrush & \\XSolidBrush &\\XSolidBrush \\\\ \n \\midrule\n \n \n \\tabincell{l}{\\textbf{SFM}\\\\ \\cite{Zhang2017Stock}} & \\tabincell{l}{$\\bullet$ Extending LSTM by decomposing \\\\ \\quad the hidden memory states \\\\ $\\bullet$ Modeling the latent trading \\\\ \\quad patterns with multiple frequencies } &\\textbf{NASDAQ} & \\textbf{Average square error}& \\tabincell{c}{\\textbf{LSTM},\\\\ \\textbf{DFT}} &\\textbf{-}&\\Checkmark & \\XSolidBrush & \\XSolidBrush & \\XSolidBrush &\\XSolidBrush \\\\\n \\midrule\n \\tabincell{l}{\\textbf{Chen's}\\\\ \\cite{Chen2018Incorporating}} & \\tabincell{l} {$\\bullet$ Single investment relation} &\\textbf{CSI300} & \\textbf{DA}&\\tabincell{c}{\\textbf{LSTM},\\\\ \\textbf{GCN}} &\\textbf{Homogeneous GNNs}& \\Checkmark & \\XSolidBrush & \\Checkmark & \\XSolidBrush &\\XSolidBrush \\\\\n \\midrule\n \\tabincell{l}{\\textbf{TGC}\\\\ \\cite{Feng2019Temporal}} &\\tabincell{l}{$\\bullet$ Single historical data \\\\ $\\bullet$ Dynamically adjust predefined\\\\ \\quad firm relations } &\\tabincell{c}{\\textbf{NASDAQ},\\\\ \\textbf{NYSE}} & \\tabincell{c}{\\textbf{MSE},\\\\ \\textbf{MRR},\\\\ 
\\textbf{IRR}}& \\tabincell{c}{\\textbf{LSTM},\\\\ \\textbf{Temporal Graph}\\\\ \\textbf{Convolution}}&\\textbf{Homogeneous GNNs}& \\Checkmark & \\XSolidBrush & \\Checkmark &\\XSolidBrush & \\XSolidBrush \\\\\n \\midrule\n \\tabincell{l}{\\textbf{HATS}\\\\ \\cite{Kim2019HATS}}& \\tabincell{l}{$\\bullet$ Hierarchical aggregate different \\\\ \\quad types of firm relational data } &\\textbf{S\\&P500} & \\tabincell{c}{\\textbf{SR},\\\\ \\textbf{F1},\\\\ \\textbf{DA},\\\\ \\textbf{Return}}& \\textbf{GAT}&\\textbf{Homogeneous GNNs}& \\Checkmark & \\XSolidBrush & \\Checkmark & \\XSolidBrush &\\XSolidBrush\\\\\n \n \n \n \n \\midrule\n\n \\tabincell{l}{\\textbf{ALBERT+eventHAN}\\\\ \\cite{wu-2020-event}}&\\tabincell{l}{$\\bullet$ ALBERT enhanced event\\\\ \\quad representations \\\\ $\\bullet$ Event-enhanced hierarchical\\\\ \\quad attention network }&\\tabincell{c}{\\textbf{S\\&P500},\\\\ \\textbf{DOW},\\\\ \\textbf{NASDAQ}} & \\tabincell{c}{\\textbf{DA},\\\\ \\textbf{Annualized return}}&\\tabincell{c}{\\textbf{Open IE},\\\\ \\textbf{ALBERT},\\\\ \\textbf{HAN}} & \\textbf{\n -}& \\XSolidBrush & \\Checkmark & \\XSolidBrush & \\XSolidBrush &\\XSolidBrush \\\\ \n \\midrule\n \n \n \\tabincell{l}{\\textbf{MAN-SF}\\\\ \\cite{Sawhney2020Deep}} & \\tabincell{l}{$\\bullet$ Multi-modal market information \\\\ $\\bullet$ Hierarchical graph attention method } & \\textbf{S\\&P500} & \\tabincell{c}{\\textbf{F1},\\\\ \\textbf{MCC}}& \\tabincell{c}{\\textbf{GAT},\\\\ \\textbf{GRU}}& \\textbf{Homogeneous GNNs}& \\Checkmark & \\Checkmark & \\Checkmark & \\XSolidBrush &\\XSolidBrush\\\\\n \\midrule\n \\tabincell{l}{\\textbf{STHAN-SR}\\\\ \\cite{Sawhney2021Stock}} & \\tabincell{l}{$\\bullet$ A neural hypergraph architecture \\\\ \\quad for stock selection \\\\ $\\bullet$ Temporal hawkes attention \\\\ \\quad mechanism } & \\tabincell{c}{\\textbf{NASDAQ},\\\\ \\textbf{NYSE},\\\\ \\textbf{TSE}} & \\tabincell{c}{\\textbf{SR},\\\\ \\textbf{IRR},\\\\ \\textbf{NDCG\\@5}} & \\tabincell{c}{\\textbf{Hypergraph} \\\\ \\textbf{Convolution}} & \\textbf{Homogeneous GNNs}& \\Checkmark & \\Checkmark & \\Checkmark & \\XSolidBrush &\\XSolidBrush\\\\\n \n \\midrule\n \\tabincell{l}{\\textbf{AD-GAT}\\\\ \\cite{Cheng2021Modeling}} & \\tabincell{l}{$\\bullet$ Multi-modal market information \\\\ $\\bullet$ Attribute-driven graph \\\\ \\quad attention network }&\\textbf{S\\&P500} & \\tabincell{c}{\\textbf{DA},\\\\ \\textbf{AUC}}& \\textbf{GAT}&\\textbf{Homogeneous GNNs}& \\Checkmark & \\Checkmark & \\XSolidBrush & \\Checkmark &\\XSolidBrush\\\\\n \\midrule\n \n \\textbf{\\textsc{DanSmp} (ours)} &\\tabincell{l}{$\\bullet$ Multi-modal market information \\\\ $\\bullet$ Bi-typed hybrid-relation data \\\\ $\\bullet$ Dual attention network} &\\tabincell{c}{\\textbf{CSI100E},\\\\ \\textbf{CSI300E}} & \\tabincell{c}{\\textbf{DA},\\\\ \\textbf{AUC},\\\\ \\textbf{SR},\\\\ \\textbf{IRR}}& \\tabincell{c}{\\textbf{GRU},\\\\ \\textbf{Dual Attention Network}} &\\textbf{Heterogeneous GNNs}& \\Checkmark & \\Checkmark & \\Checkmark & \\Checkmark &\\Checkmark \\\\\n \n \\bottomrule\n \\end{tabular}\n }\n \\end{center}\n\\end{sidewaystable}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n \n \n \n \n \n \n \n \n\n\n\n\n\n\n\n\n\\section{Market Signals}\n\\label{section-market-signal}\nIn this section, we introduce the significant signals of stocks in real financial market. 
We first give the details of the newly proposed bi-typed hybrid-relational market knowledge graph. Afterward, we introduce the historical price data and media news of stocks. Most of previous works focus on partial financial market information, which makes their modeling insufficient. In this work, we take advantage of all three types of market data, fusing numerical, textual, and relational data together, for stock prediction. The features of each data used in this study are summarized in Table \\ref{Features-statistics}.\n\n\\begin{table*}[htb]\n \\begin{center}\n \\caption{Features}\n \\label{Features-statistics}\n \\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}}\n \\resizebox{0.95\\textwidth}{!}{\n \\begin{tabular}{l|l}\n \\toprule\n \\multicolumn{1}{c|}{\\textbf{Information data}} &\n \\multicolumn{1}{c}{\\textbf{Features}} \\\\\n \\midrule\n \\textbf{Entities}& companies, executives.\\\\\n \\midrule\n \\textbf{Relations}& explicit relations (industry category, supply chain, business partnership, investment), implicit relation.\\\\\n \\midrule\n \\textbf{Historical price}& opening price (op), closing price (cp), highest price (hp), lowest price (lp), trade volume (tv).\\\\\n \\midrule\n \\textbf{Media news}& positive media sentiment of a stock $Q(i)^{+}$, negative media sentiment of a stock $Q(i)^{-}$, media sentiment divergence of a stock $D(i)$. \\\\\n \\bottomrule\n \\end{tabular}\n }\n \\end{center}\n\\end{table*}\n\n\n\\subsection{Bi-typed Hybrid-relational Market Knowledge Graph (MKG)}\n\\label{section-mkgc}\n\\subsubsection{Bi-typed Entities}\nMost existing SMP methods solely learn from company relationships in market \\cite{Kim2019HATS,Ye2020Multi-Graph,Li2020Modeling}. \n{In fact, in most stock markets there are also significant associated executives for listed companies, with rich associative information about these companies \\cite{cai2016price,jing2021online}. Hence, our constructed MKG contains not only company entities but also executive entities.}\nThe executive entities can act as the intermediary among companies to build the meta-relations involved in company entities (e.g. \\textit{Company-Executive-Company (CEC)}, \\textit{Company-Executive-Executive-Company (CEEC)}). \nFor example, in Fig. \\ref{figure-instance} we show the associated executives of the companies sampled from Chinese Stock Index. \nThe stock spillover signals can pass from neighboring companies to a target company through the meta-relations established by their connected executives, such as Company 4-Executive C-Company 1; \nCompany 2-Executive A$\\stackrel{classmate}{\\longleftrightarrow}$Executive B-Company 1.\nIn sum, the newly constructed MKG contains bi-typed entities, i.e. \\textbf{\\textit{listed companies}} and their \\textbf{\\textit{associated executives}}.\n\n\n\n\n\\subsubsection{Hybrid-relations}\nMost existing methods only take the explicit relations among companies, such as \\textit{industry category, supply chain, business partnership} and \\textit{investment}, into consideration \\cite{Chen2018Incorporating}. However, the limited explicit company relationships are always insufficient for market knowledge graph construction due to the complex nature of financial market. To solve the MKG incompleteness issue, here we propose an attribute-driven method to conduct MKG completion by inferring missing implicit correlative relation among stocks, which employs stocks attributions. 
\n\n\nSpecifically, the attribute-driven implicit (unobserved) relation is calculated based on the features \nfrom both historical prices and news information, filtered by a normalized threshold. \nA single-layer feed-forward neural network is adopted to calculate the attention value $\\alpha_{ij}^{t}$ between company $i$ and $j$ for inferring their implicit relation.\n\n\\begin{equation}\n\\alpha_{ij}^{t}=\\mathop{\\text{LeakyRelu}} \\Big ({\\bf u} ^{\\top} [ {\\bf s}_i ^t \\parallel {\\bf s}_j^t ]\\Big) \\ ,\n\\end{equation}\nwhere \n${\\bf s}_{i}^{t}$ and ${\\bf s}_{j}^{t}$ are the fused market signals of $i$ and $j$, which are calculated by Equation \\ref{equaion-seq-learn} (Section \\ref{section-se}). $\\|$ denotes the concatenation operation. ${\\bf u}$ denotes a learnable weight vector and LeakyReLU is a nonlinear activation function. Borrowing the gate mechanism from \\cite{Cheng2021Modeling}, we set an implicit relation between $i$ and $j$ if $\\alpha_{ij}^{t} > \\eta$, where $\\eta$ denotes a pre-defined threshold. \nIn short, the constructed MKG contains hybrid-relations, i.e. \\textbf{\\textit{explicit relations}} and the \\textbf{\\textit{implicit relation}}.\n \n\n\n\\subsection{Historical Price and Media News}\n\\label{section-msr}\n\\subsubsection{Technical Indicators}\nTransactional data is the main manifestation of firms' intrinsic value and investors' expectations. We collect the daily stock price and volume data, including \\textit{opening price (op), closing price (cp), highest price (hp), lowest price (lp), and trade volume (tv)}. \nIn order to better compare and observe the fluctuation of the stock price, the stock price is converted to the return ratio, and the trade volume is converted to the turnover ratio before being fed into our model. The return ratio is an index reflecting the level of stock return: the higher the return ratio, the better the profitability of the stock. The turnover is the total value of stocks traded over a period of time; a higher share turnover indicates that the share has good liquidity. ${\\bf p}_i \\in {\\mathbb R}^5$ indicates the {technical indicators} of company $i$, as follows:\n\\begin{equation}\n\\label{equation-technical-indicator}\n {\\bf p}_i = [op(i),\\ cp(i),\\ hp(i),\\ lp(i),\\ tv(i)]^{\\top} \\ .\n\\end{equation} \n\n\n\n\\subsubsection{Sentiment Signals}\nModern behavioral finance theory \\cite{Li2017Web} holds that investors are irrational and tend to be influenced by the opinions expressed in the media. Media sentiment reflects investors' expectations concerning the future of a company or the whole stock market, resulting in fluctuations of the stock price. To capture media sentiment signals, we extract the following characteristics: \\textit{positive media sentiment, negative media sentiment} and \\textit{media sentiment divergence} \\cite{Li2020A}. They are denoted respectively as follows: \n \n \n\n\\begin{equation}\n\\begin{aligned}\n Q(i)^{+} &=\\frac{N(i)^{+}}{N(i)^{+}+N(i)^{-}} \\ , \\\\\n Q(i)^{-} &=\\frac{N(i)^{-}}{N(i)^{+}+N(i)^{-}}\\ , \\\\\n D(i)& =\\frac{N(i)^{+}-N(i)^{-}}{N(i)^{+}+N(i)^{-}} \\ ,\n\\end{aligned}\n\\end{equation}\nwhere $N(i)^{+}$ and $N(i)^{-}$ are the total frequencies of positive and negative sentiment words found in the financial news articles of company $i$, respectively. $D(i)$ denotes the sentiment divergence. 
Since many negative sentiment words in the general sentiment dictionary no longer express negative emotional meanings\nin the financial field, we resort to a finance-oriented sentiment dictionary created in previous study \\cite{Li2016A}. ${\\bf q}_i \\in {\\mathbb R}^3 $ indicates the news sentiment signals of company $i$.\n\\begin{equation}\n\\label{equation-sentiment-feature}\n {\\bf q}_i =[Q(i)^{+},\\ Q(i)^{-},\\ D(i)]^{\\top}\\ .\n\\end{equation}\n\nNote that we do not have everyday news for all companies since the randomness of the occurrence of media news. In order to make the technical indicators aligned with the media sentiment signals and keep pace with the real situation, the sentiment feature ${\\bf q}_i$ of the firm $i$ is assigned to zero on the day when there are no any media news about it.\n\n\n\n\n\\section{Methodology}\n\\label{section-method}\nIn this section, we introduce the details of our proposed method. \nFigure \\ref{figure-DanSmp-Model} gives an overview of the proposed framework. \n(I) First, the stock sequential embeddings are learned with historical price and media news via multi-modal feature fusion and sequential learning. \n(II) Second, a Dual Attention Networks is proposed to learn the stock relational embeddings based upon the constructed MKG. \n(III) Last, the combinations of sequential embeddings and relational embeddings are utilized to make stock prediction. \n\n \n\n\n\n\n\\subsection{Learning Stock Sequential Embeddings}\n\\label{section-se}\nThe stocks are influenced by multi-modal time-series market signals. \nConsidering the strong temporal dynamics of stock markets, the historical state of the stock is useful for predicting its future trend. \nDue to the fact that the influence of market signals on the stock price would last for some time, we should consider the market signals in the past couple of days when predicting stock trend $\\hat{y}_{i}^{t}$ of company $i$ at date $t$. \nWe first capture the multimodal interactions of technical indicators and sentiment signals. \nWe then feed the fused features into a one-layer GRU and take the last hidden state as the sequential embedding of stock $i$ which preserves the time dependency, as shown in Figure \\ref{figure-DanSmp-Model}-I.\n\n\\subsubsection{Multimodal Features Fusion} \n\nTo learn the fusion of the technical indicators vector ${\\bf p}_i$ and media news sentiment features ${\\bf q}_i$, we adopt a Neural Tensor Network (NTN) which replaces a standard linear neural network layer with a $M$-dimensional bilinear tensor layer that can directly relate the two features across multiple dimensions. \nThe fused daily market signals\\footnote{Here, the superscript $t$ is omitted for simplicity.} of stock $i$, ${\\bf x}_{i} \\in {\\mathbb R}^M$, are calculated by the tensor-based formulation as follows:\n\\begin{equation}\n{\\bf x}_{i}=\\sigma \\left ( {\\bf p}_{i}W_{{\\mathcal{T}}}^{\\left [ 1: M\\right ]}{\\bf q}_{i}+{\\mathcal{V}}\\begin{bmatrix}\n{\\bf p}_{i}\\\\ {\\bf q}_{i}\n\\end{bmatrix}+{\\bf b}\\right ) \\ ,\n\\end{equation}\nwhere $\\sigma$ is an activation function, $W_{{\\mathcal{T}}}^{\\left [ 1: M \\right ]}\\in {\\mathbb R}^{5\\times 3 \\times M}$ is a trainable tensor, ${\\mathcal{V}} \\in {\\mathbb R} ^{ 8\\times M}$ is the learned parameters matrix and ${\\bf b}\\in {\\mathbb R}^M$ is the bias vector. Three parameters are shared by all stocks. 
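To make this fusion step concrete, the following minimal sketch (a PyTorch-style illustration with assumed dimensions and variable names, not the exact implementation; the activation $\\sigma$ is taken here to be a sigmoid) shows how ${\\bf p}_i$ and ${\\bf q}_i$ are combined by the bilinear tensor layer:
\\begin{verbatim}
import torch
import torch.nn as nn

class TensorFusion(nn.Module):
    """Neural Tensor Network fusing the 5-dim technical indicators p_i
    and the 3-dim sentiment signals q_i into an M-dim vector x_i."""
    def __init__(self, dp=5, dq=3, M=16):
        super().__init__()
        self.W_T = nn.Parameter(0.1 * torch.randn(M, dp, dq))  # bilinear tensor slices
        self.V = nn.Linear(dp + dq, M)                          # linear term V and bias b
    def forward(self, p, q):
        # bilinear term: the m-th entry is p^T W_T[m] q
        bilinear = torch.einsum('bi,mij,bj->bm', p, self.W_T, q)
        return torch.sigmoid(bilinear + self.V(torch.cat([p, q], dim=-1)))

fusion = TensorFusion()
p = torch.randn(4, 5)   # a batch of 4 stocks: technical indicators
q = torch.randn(4, 3)   # sentiment signals (zeros on days without news)
x = fusion(p, q)        # fused daily market signals, shape (4, 16)
\\end{verbatim}
A window of $T$ such fused vectors is then fed into the GRU of the next subsection to produce the sequential embedding.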
\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=1\\textwidth]{Figure2_model_framework.pdf}\n \\caption{The overall framework of the proposed method. \\textbf{(I) Learning Stock Sequential Embeddings} based on Tensor Fusion and GRU. Tensor Fusion is the Neural Tensor Network (NTN) used to learn the fusion of the technical indicators vector ${\\bf p}_i$ and the media news sentiment features ${\\bf q}_i$. The GRU is designed to learn the sequential embedding ${\\bf s}_{i}^{t}$. \\textbf{(II) Learning Stock Relational Embeddings} by a dual mechanism that models the mutual effects and inner interactions among the bi-typed entities (i.e. companies and executives) alternately, including: \\textbf{(a)} inter-class attention, and \\textbf{(b)} intra-class attention. The former aims to deal with the interaction between listed companies and their associated executives, and the latter aims to learn the interaction among entities of the same type. \\textbf{(III) Stock Movement Prediction} via a Feed-forward Neural Network (FNN) with the learned firm embeddings.\n \n }\n \\label{figure-DanSmp-Model}\n\\end{figure} \n\n\\subsubsection{Sequential Learning} \nWe feed the fused daily market signals of the past $T$ days into the GRU to learn its sequential embedding ${\\bf s}_{i}^{t}$, as follows:\n\\begin{equation}\n\\label{equaion-seq-learn}\n{\\bf s}_{i}^{t}=\\textbf{GRU}\\left ({\\bf x}_{i}^{t-T}, {\\bf x}_{i}^{t-T+1}, \\dots ,{\\bf x}_{i}^{t-1} \\right ) \\ ,\n\\end{equation}\nwhere ${\\bf s}_{i}^{t}\\in {\\mathbb R}^{F}$ denotes the last hidden state of the GRU. ${F}$ is the hidden size of the GRU.\n\n\\subsection{Learning Stock Relational Embeddings via Dual Attention Networks}\nIn real markets, the fluctuation of a stock is partially affected by its related stocks, which is known as the momentum spillover effect in finance \\cite{Ali2020Shared}. \nIn this section, based upon our newly constructed bi-typed hybrid-relational MKG, we propose Dual Attention Networks to learn the relational embeddings of stocks that represent their received spillover signals. Specifically, we employ a \\textit{\\textbf{dual mechanism}} to model the mutual effects and inner influence among the \\textit{bi-typed} entities (i.e. companies and executives) alternately, including {inter-class interaction} and {intra-class interaction}, as shown in Figure \\ref{figure-DanSmp-Model}-II.\n\n\n\\subsubsection{Inter-class Attention Networks}\nThe inter-class attention aims to deal with the interaction between listed companies and their associated executives, as shown in Figure \\ref{figure-DanSmp-Model} (II-a). Since they are different types of entities, their features usually lie in different spaces. Hence, we first project their embeddings into a common space. Specifically, for a company entity $u \\in {\\mathcal{E}}_1$ with type $\\tau(u)$ and an executive entity $v \\in {\\mathcal{E}}_2$ with type $\\tau(v)$, we design two type-specific matrices\n${\\bf W}^{\\tau(\\cdot)}$ \nto map their features ${\\bf h}_{u},{\\bf h}_v$ into a common space.\n\\begin{equation}\n\\begin{array}{l}\n\\begin{aligned}\n \\label{equation-projection}\n {\\bf h}_u'&={\\bf W}^{\\tau(u)}{\\bf h}_u \\ , \\quad \\\\\n {\\bf h}_v'&={\\bf W}^{\\tau(v)}{\\bf h}_v \\ ,\n\\end{aligned}\n\\end{array}\n\\end{equation}\nwhere ${\\bf h}_u'\\in {\\mathbb R}^{F'}$ and ${\\bf h}_u\\in {\\mathbb R}^{F}$ denote the transformed and original features of the entity $u$, respectively.
\n${\\mathcal{E}}_1$ and ${\\mathcal{E}}_2$ are the sets of listed companies and the executives, respectively.\nHere, the original vectors of company entities are initialized with the sequential embeddings learned in Section \\ref{section-se} (e.g. ${\\bf h}_u = {\\bf s}_u$), which brings rich semantic information into the downstream learning. The initial features of executives are then simply an average of the features of the companies they work for.\n\nWe assume that the target company entity $u$ connects with executives via a relation ${\\theta_i \\in \\Theta_{\\text{inter} }}$, where $\\Theta_{\\text{inter}}$ denotes the set of inter-class relations, so the neighboring executives of a company $u$ with relation ${\\theta_i}$ can be defined as ${\\mathcal{N}}_\\text{inter}^{\\theta_i}(u)$. For entity $u$, different types of inter-class relations contribute different semantics to its embeddings, and so do different entities with the same relation. Hence, we employ attention mechanisms at both the entity level and the relation level to hierarchically aggregate signals from neighbors of the other type to the target entity $u$.\n\nWe first design an entity-level attention to learn the importance of entities within the same relation. \nTo learn the importance $e_{u\\upsilon}^{\\theta_i}$, which measures how important an executive $v$ is for a company $u$ under a specific relation $\\theta_i$, we perform {self-attention} \\cite{Vaswani2017Attention} on the entities as follows:\n\\begin{equation}\n\\begin{array}{l}\n\\begin{aligned}\n \\label{equation-att-node}\n e_{uv}^{\\theta_i} &= att_{node}({\\bf h}_u',{\\bf h}_v';\\theta_i) \\\\\n &= \\text{LeakyReLU}({\\bf a}_{\\theta_i}^\\top \\cdot [ {\\bf h}_u' \\| {\\bf h}_v']) \\ ,\n\\end{aligned}\n\\end{array}\n\\end{equation}\nwhere ${\\bf h}_u'$ and ${\\bf h}_\\upsilon'$ are the transformed representations of the nodes $u$ and $\\upsilon$, respectively. ${\\bf a}_{\\theta_i}\\in {\\mathbb R}^{2F'}$ is a trainable weight vector. $\\|$ denotes the concatenation operation. LeakyReLU is the nonlinear activation function. \nTo make $e_{u\\upsilon}^{\\theta_i}$ comparable over different entities, we normalize it using the softmax function.\n\\begin{equation}\n \\gamma_{uv}^{\\theta_i} =\\text{softmax}_\\upsilon (e_{uv}^{\\theta_i})= \\frac{\\exp{(e_{uv}^{\\theta_i})}}{\\sum\\limits_{\\bar{v} \\in {\\mathcal{N}}_\\text{inter}^{\\theta_i}(u) }\\exp{(e_{u\\bar{v}}^{\\theta_i})}} \\ ,\n\\end{equation}\nwhere $\\gamma_{uv}^{\\theta_i}$ denotes the attention value of entity $v$ with relation $\\theta_i$ to entity $u$. ${\\mathcal{N}}_\\text{inter}^{\\theta_i}(u)$ denotes the relation-$\\theta_i$-based neighbors of $u$ of the other entity type.\nWe apply entity-level attention to fuse the inter-class neighbors with a specific relation $\\theta_i$:\n\\begin{equation}\n \\label{equation-node-level-aggregation}\n {\\bf h}_u^{\\theta_i} = \\sigma \\Big(\\sum_{v \\in {\\mathcal{N}}_\\text{inter}^{\\theta_i}(u)}\\gamma_{uv}^{\\theta_i} \\cdot {\\bf h}_v' \\Big ) \\ ,\n\\end{equation}\nwhere $\\sigma$ is a nonlinear activation, and ${\\bf h}_v'$ is the projected feature of entity $v$.
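\n\nFor illustration, the entity-level step above can be sketched as follows (PyTorch-style pseudocode; the helper name and shapes are ours, and $\\sigma$ is instantiated as a sigmoid only for concreteness):\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef entity_level_attention(h_u, H_v, W_u, W_v, a_theta):\n    # h_u: (F,) raw company feature; H_v: (n, F) raw features of its\n    # executive neighbours under one relation; W_u, W_v: (F', F)\n    # type-specific projections; a_theta: (2 F',) attention vector.\n    hu = W_u @ h_u                          # projected company feature h_u'\n    Hv = H_v @ W_v.t()                      # projected executive features h_v'\n    cat = torch.cat([hu.expand_as(Hv), Hv], dim=-1)\n    e = F.leaky_relu(cat @ a_theta)         # importance e_{uv}\n    gamma = torch.softmax(e, dim=0)         # normalized attention gamma_{uv}\n    return torch.sigmoid((gamma.unsqueeze(-1) * Hv).sum(dim=0))  # h_u^{theta_i}\n\\end{verbatim}\n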
\n\nOnce we have learned all relation embeddings $\\{{\\bf h}_u^{\\theta_i}\\}$, we utilize relation-level attention to fuse them together to obtain the inter-class relational embedding ${\\bf z}_u$ for entity $u$.\nWe first calculate the importance of each relation $w^{\\theta_i}$ as follows:\n\\begin{equation}\n w^{\\theta_i}=\\frac{1}{|{\\mathcal{E}}_1|}\\sum\\limits_{u\\in {\\mathcal{E}}_1} {\\bf q}^{\\tau(u)} \\cdot {\\bf h}_u^{\\theta_i} + \\frac{1}{|{\\mathcal{E}}_2|}\\sum\\limits_{v\\in {\\mathcal{E}}_2} {\\bf q}^{\\tau(v)} \\cdot {\\bf h}_v^{\\theta_i} \\ ,\n\\end{equation}\n\n\\begin{equation}\n \\epsilon^{\\theta_i} = \\frac{\\exp{(w^{\\theta_i})}}{\\sum\\limits_{\\theta_j \\in \\Theta_{\\text{inter} }}\\exp{(w^{\\theta_j})}} \\ ,\n\\end{equation}\nwhere ${\\bf q}^{\\tau(\\cdot)}\\in {\\mathbb R}^{F'\\times 1}$ is a learnable parameter. We fuse all relation embeddings to obtain the inter-class relational embedding ${\\bf z}_u\\in {\\mathbb R}^{F'}$ of entity $u$. \n\\begin{equation}\n \\label{equation-relation-level-aggregation}\n {\\bf z}_u = \\sum_{\\theta_i \\in \\Theta_{\\text{inter} }} \\epsilon^{\\theta_i }\\cdot {\\bf h}_u^{\\theta_i} \\ .\n\\end{equation}\nIn the inter-class attention, the embeddings of the two types of entities are seamlessly aggregated, so that companies and executives interactively affect each other, as shown in Figure \\ref{figure-DanSmp-Model} (II-a).\n\n\\subsubsection{Intra-class Attention Networks}\nThe intra-class attention aims to learn the interaction among entities of the same type, as shown in Figure \\ref{figure-DanSmp-Model} (II-b).\nSpecifically, given a relation $\\phi_k \\in \\Phi_\\text{intra}^{\\tau(u)}$ that starts from entity $u$, we can get the intra-class relation-based neighbors ${\\mathcal{N}}_\\text{intra}^{\\phi_k}(u)$. $\\Phi_\\text{intra}^{\\tau(u)}$ indicates the set of all intra-class relations of $u$. For instance, as shown in Figure \\ref{figure-instance}, Company 5 is a neighbor of Company 3 based on an implicit relation, and Company 4 is a neighbor of Company 1 based on the meta-relation \\textit{CEC}. Each intra-class relation represents one semantic interaction, and we apply relation-specific attention to encode this characteristic. We first calculate the attention value of entity $\\tilde{u}$ with relation $\\phi_k$ to entity $u$ as follows:\n\\begin{equation}\n \\alpha_{u\\tilde{u}}^{\\phi_k} =\n \\frac{\\exp{(\\text{LeakyReLU}({\\bf a}_{\\phi_k}^\\top \\cdot [{\\bf W} {\\bf z}_u \\| {\\bf W} {\\bf z}_{\\tilde{u}}]))}}{\\sum\\limits_{u' \\in {\\mathcal{N}}_\\text{intra}^{\\phi_k}(u) }\\exp{(\\text{LeakyReLU}({\\bf a}_{\\phi_k}^\\top \\cdot [{\\bf W} {\\bf z}_u \\| {\\bf W} {\\bf z}_{u'}]))}} \\ ,\n\\end{equation}\nwhere ${\\bf z}_u$ and ${\\bf z}_{\\tilde{u}}$ are the output representations of the inter-class attention for $u$ and $\\tilde{u}$, respectively. ${\\bf W}\\in {\\mathbb R}^{F'\\times F'}$ is a trainable weight matrix which is shared by all nodes of the same type.\n${\\bf a}_{\\phi_k}\\in {\\mathbb R}^{2F'}$ is the node-level attention weight vector for relation $\\phi_k$.\n${\\mathcal{N}}_\\text{intra}^{\\phi_k}(u)$ denotes the intra-class neighbors of $u$ under relation $\\phi_k$.
\nThe embedding ${\\bf h}_u^{\\phi_k}$ of entity $u$ for the given relation $\\phi_k$ is calculated as follows.\n\\begin{equation}\n \\label{equation-node-level-aggregation}\n {\\bf h}_u^{\\phi_k} = \\sigma \\Big(\\sum_{\\tilde{u} \\in {\\mathcal{N}}_\\text{intra}^{\\phi_k}(u)}\\alpha_{u\\tilde{u}}^{\\phi_k} \\cdot {\\bf W}{\\bf z}_{\\tilde{u}} \\Big ) \\ ,\n\\end{equation}\nwhere $\\sigma$ is a non-linear activation. In total, we can get $|\\Phi_\\text{intra}^{\\tau(u)}|$ embeddings for entity $u$. Then, we conduct relation-level attentions to fuse them into the relational embedding ${\\bf h}_u\\in {\\mathbb R}^{F'}$:\n\n\\begin{equation}\n \\label{equation-node-level-aggregation}\n {\\bf h}_u = \\sum_{\\phi_k \\in \\Phi_{\\text{intra}}^{\\tau(u)}} \\beta^{\\phi_k} \\cdot {\\bf h}_u^{\\phi_k} \\ ,\n\\end{equation}\nwhere $\\Phi_\\text{intra}^{\\tau(u)}$ denotes the set of all intra-class relationships of entity $u$. $\\beta^{\\phi_k}$ denotes the importance of intra-class relation $\\phi_k$, which is calculated as follows:\n\\begin{equation}\n\\begin{array}{l}\n\\begin{aligned}\n {\\bf g}^{\\phi_k}&=\\frac{1}{|{\\mathcal{E}}_1|}\\sum\\limits_{u\\in {\\mathcal{E}}_1} {\\bf q}^{\\tau(u)} \\cdot {\\bf h}_u^{\\phi_k} \\ ,\\\\\n \\beta^{\\phi_k} &= \\frac{\\exp{({\\bf g}^{\\phi_k})}}{{\\sum\\limits_{\\phi_l \\in \\Phi_\\text{intra}^{\\tau(u)} }}\n \\exp{({\\bf g}^{\\phi_l})}} \\ .\n\\end{aligned}\n\\end{array}\n\\end{equation}\nwhere ${\\bf q}^{\\tau(u)}\\in {\\mathbb R}^{F'}$ is a learnable parameter.\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\\subsection{SMP with Stock Final Embeddings}\nFinally, the stock final embeddings by combining learned sequential embeddings and relational embeddings are utilized to make stock prediction by a dense layer feed-forward neural network (FNN) and a softmax function, as shown in Figure \\ref{figure-DanSmp-Model}-III.\n\\begin{equation}\n\\begin{aligned}\n \\hat{y}_{i}^{t}&=\\textbf{SMP} \\Big({\\bf s}_{i}^{t} \\parallel {\\bf h}_{i}^{t} \\Big) \\\\ \n &= \\text{Softmax} \\Big ({\\bf W}_{smp} [ {\\bf s}_{i}^{t} \\parallel {\\bf h}_{i}^{t} ]+ b_{smp} \\Big ) \\ ,\n\\end{aligned}\n\\end{equation}\nwhere ${\\bf W}_{smp}$ is a trainable weight matrix, and $b_{smp}$ is the bias vector. \nWe leverage the Adam algorithm \\cite{Kingma2014Adam} for optimization by minimizing the cross entropy loss function ${\\mathcal{L}}$.\n\n\\begin{equation}\n{\\mathcal{L}}=-\\sum_{i=1}^{\\left | N\\right |}\\sum_t {y}_{i}^t ln\\left ( \\hat{y}_{i}^t\\right ) \\ ,\n\\end{equation}\nwhere ${y}_{i}^t$ and $\\hat{y}_{i}^t$ represent the ground truth and predict stock trend of stock $i$ at $t$ day, respectively. $\\left | N\\right |$ is the total number of stocks.\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Experiments}\n\\label{section-experiments}\n\nIn this section, we present our experiments, mainly focusing on the following research questions:\n\n$\\bullet$ RQ1: Can our model achieve better performance than the state-of-the-art stock prediction methods?\n\n$\\bullet$ RQ2: Can our model achieve a higher investment return and lower risk in the investment simulation on real-world datasets?\n\n$\\bullet$ RQ3: How is the effectiveness of different components in our model?\n\n$\\bullet$ RQ4: Are all firm relations equally important for SMP? 
How do different parameters influence our model's performance?\n\nIn the following, we first present the experimental settings and then answer these research questions by analyzing the experimental results.\n\n\\subsection{Experimental Settings} \n\\subsubsection{Data Collection}\nSince no existing stock prediction benchmark datasets can satisfy our need to evaluate the effectiveness of our method, we collect public available data about the stocks from the famous China Securities Index (CSI) and construct two new datasets. \nWe name them \\textbf{CSI100E} and \\textbf{CSI300E}\nwith different number of listed companies, respectively. \n185 stocks in CSI300E index without missing transaction data and having at least 60 related news articles during the selected period are kept. Similarly, 73 stocks in CSI100E index are kept.\nFirst, we get historical price of stocks\\footnote{We collect daily stock price and volume data from \\url{https:\/\/www.wind.com.cn\/}} from November 21, 2017 to December 31, 2019 which include 516 transaction days. \nSecond, we collect web news published in the same period from four financial mainstream sites, including \\textit{Sina\\footnote{\\url{http:\/\/www.sina.com}}, Hexun\\footnote{\\url{http:\/\/www.hexun.com}}, Sohu\\footnote{\\url{http:\/\/www.sohu.com}}} and \\textit{Eastmoney}\\footnote{\\url{http:\/\/www.eastmoney.com}}.\nLast, we collect four types of company relations\\footnote{We collect four types of company relations by a publicly available API tushare: \\url{https:\/\/tushare.pro\/}.} and the connections of executives\\footnote{We collect executives relationships from : \\url{http:\/\/www.51ifind.com\/}.} for CSI300E and CSI100E.\nThe basic statistics of the datasets are summarized in Table \\ref{data-stcs}.\nThe usage details of the multimodal market signals are described in Section \\ref{section-msr}. \n\n\n\\begin{table}[t]\n\\caption{Statistics of datasets.}\n\\label{data-stcs}\n\\centering\n\\begin{tabular}[t]{l||c|c}\n\\toprule\n{ } &\\textbf{CSI100E} & \\textbf{CSI300E}\\\\\n\\midrule\n\\textbf{\\textit{\\#Companies(Nodes)}} & 73 & 185 \\\\\n\\textbf{\\textit{\\#Executives(Nodes)}} & 163& 275 \\\\\n\\midrule\n\\textbf{\\textit{\\#Investment(Edges)}} & 7& 44 \\\\\n\\textbf{\\textit{\\#Industry category(Edges)}} & 272& 1043 \\\\\n\\textbf{\\textit{\\#Supply chain(Edges)}} & 27& 37 \\\\\n\\textbf{\\textit{\\#Business partnership(Edges)}} & 98& 328 \\\\\n\\textbf{\\textit{\\#Implicit relation(Edges)}} & \\textit{dynamic}& \\textit{dynamic} \\\\\n\\textbf{\\textit{\\#meta-relation CEC}} & 18& 42 \\\\\n\\textbf{\\textit{\\#meta-relation CEEC}} & 134& 252 \\\\\n\\midrule\n\\textbf{\\textit{\\#Classmate(Edges) }} & 338& 592 \\\\\n\\textbf{\\textit{\\#Colleague(Edges)}} & 953& 2224 \\\\\n\\midrule\n\\textbf{\\textit{\\#Management(Edges)}} & 166& 275 \\\\\n\\textbf{\\textit{\\#Investment(Edges)}} & 1& 8 \\\\\n\\midrule\n\\textbf{\\textit{\\bf\\#Train Period}} & 21\/11\/2017-05\/08\/2019 & 21\/11\/2017-05\/08\/2019 \\\\\n\\textbf{\\textit{\\bf\\#Valid Period}} & 06\/08\/2019-22\/10\/2019 & 06\/08\/2019-22\/10\/2019 \\\\\n\\textbf{\\textit{\\bf\\#Test Period}} & 23\/10\/2019-31\/12\/2019 & 23\/10\/2019-31\/12\/2019 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n \n\\subsubsection{Evaluation Protocols}\nSMP is usually treated as a binary classification problem. 
\nIf the closing price of a stock $i$ is higher than its opening price at day $t$, the stock movement trend is defined as \"upward\" $\\left ( y_{i}^{t}=1\\right )$, otherwise as \"downward\" $\\left ( y_{i}^{t}=0\\right )$. According to statistics, there are 46.7$\\%$ \"upward\" {stocks} and 53.3$\\%$ \"downward\" ones in CSI100E, and 47.8$\\%$ \"upward\" and 52.2$\\%$ \"downward\" {stocks} in CSI300E. Hence, the datasets are roughly balanced. \n\nSome indicators \\cite{sousa2019bert,Ye2020Multi-Graph} are selected to demonstrate the effectiveness of the proposed method, i.e. Directional Accuracy (DA), Precision, (AUC), Recall, F1-score. \nWe use the \\textbf{Directional Accuracy (DA)} and \\textbf{AUC} (the area under the precision-recall curve) \nto evaluate classification performance in our experiments, \nwhich are widely adopted in previous works \\cite{Li2020A,Cheng2021Modeling}. \nSimilar to \\cite{Sawhney2020Spatiotemporal,Sawhney2021Stock}, \nto evaluate \\textsc{DanSmp}'s applicability to real-world trading, we assess its profitability on CSI100E and CSI300E using metrics: cumulative investment return rate (\\textbf{IRR}) and \\textbf{Sharpe Ratio} \\cite{sharpe1994sharpe}.\nSimilar to previous method \\cite{Li2020A,Cheng2021Modeling}, we use the market signals of the past $T$ trading days (also called lookback window size) to predict stock movement on $t^{th}$ day. The DA, IRR and SR are defined as follows:\n\\begin{equation}\n\\begin{aligned}\n DA &=\\frac{n}{N} \\ , \\\\\n IRR^{t} &=\\sum_{i\\in S^{t-1}}\\frac{p_{i}^{t}-p_{i}^{t-1}}{p_{i}^{t-1}} \\ , \\\\\n SR_{a}& =\\frac{E\\left [ R_{a}-R_{f}\\right ]}{std\\left [ R_{a}-R_{f}\\right ]} \\ ,\n\\end{aligned}\n\\end{equation}\nwhere $n$ is the number of predictions, which witness the same direction of stock movements for the predicted trend and the actual stock trend and $N$ is the total number of predictions. $S^{t-1}$ denotes the set of stocks on day $t-1$, and $p_{i}^{t}$ is the price of stock i at day $t$. $R_{a}$ denotes an asset return and $R_{f}$ is the risk-free rate. In this study, the risk-free rate is set as the one-year deposit interest rate of the People's Bank of China in 2019, i.e. $R_{f}=1.5\\%$.\n\nNote that, to ensure the robustness of the evaluation, we repeat the testing procedure 10 times with different initialization for all the experimental results and the average performance is reported as the final model result.\n\n\n\n\\subsubsection{Parameter Settings}\nAll trainable parameters vectors and matrices are initialized using the Glorot initialization \\cite{glorot2010understanding}.\nIn our \\textsc{DanSmp}, we set the lookback window size $T$ among [10, 15, 20, ... , 40]. We search the learning rate from\n[0.00005, 0.0001, 0.00015, ... , 0.002]. The slice size of NTN $M$ and attention layer hidden size ${F}'$ are determined in [5,10,15, ... ,50] and [10, 11, 12, ... , 50], respectively. The GRU hidden size $F$ is set from [20, 22, 24, ... , 100]. In our model, all hyperparameters were optimized with the validation set, and Table \\ref{table-hyperparameters} shows the hyper-parameter settings of our method. The proposed\n\\textsc{DanSmp} is implemented with PyTorch\\footnote{\\url{https:\/\/pytorch.org\/}.} \nand PyTorch Geometric\\footnote{\\url{https:\/\/pytorch-geometric.readthedocs.io\/en\/latest\/}.}, \nand each training process costs 1.5hrs averagely using a GTX 1080 GPU. 
To prevent overfitting, we use early stopping based on AUC (the area under the precision-recall curve) over the validation set.\n\n\\begin{table}[htb]\n\\caption{The hyper-parameter settings on two datasets.}\n\\label{table-hyperparameters}\n\\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}}\n\\centering\n\\begin{tabular}[t]{l||c c}\n\\toprule\n\\bf Parameter & \\bf CSI100E & \\bf CSI300E\\\\\n\\midrule\nLookback window size $T$& 20 & 20 \\\\ \nThe slice size of NTN $M$ & 10 & 10\\\\\nAttention layer hidden size ${F}'$ & 39 & 22\\\\\nGRU hidden size $F$& 78 & 44 \\\\\nLearing rate & 0.0008 & 0.00085\\\\\nImplicit relation threshold $\\eta$ & 0.0054 & 0.0052 \\\\\nMaximum number of epochs & 400 & 400 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\n\n\\subsubsection{Baselines}\n\nTo demonstrate the effectiveness of our proposed model \\textsc{DanSmp}, we compare the results with the following baselines. \n\n\n\n\n$\\bullet$ LSTM \\cite{Hochreiter1997Long}: a typical RNN model that has promising performance on time-series data. In the evaluation, two-layer LSTM networks are implemented. \\textcolor{blue}{}\n\n$\\bullet$ GRU \\cite{Cho2014Learning}: a simpler RNN that achieves similar performance with LSTM. In the comparison, two-layer GRU networks are implemented.\n\n\n$\\bullet$ GCN \\cite{Kipf2017Semi-supervised}: It performs graph convolutions to linearly aggregate the attributes of the neighbor nodes. In this study, two-layer GCN network was implemented. \\textcolor{blue}{}\n\n$\\bullet$ GAT \\cite{Velickovic2018Graph}: It introduces attention mechanism which assigns different importance to the neighbors adaptively. Two-layer GAT networks are implemented.\n\n$\\bullet$ RGCN \\cite{Schlichtkrull2018Modeling}: It designs specialized mapping matrices for each relations. Two-layer RGCN network was implemented.\n\n$\\bullet$ HGT \\cite{Hu2020Heterogeneous}: It uses transformer architecture to capture features of different nodes based on type-specific transformation matrices.\n\n\n$\\bullet$ MAN-SF \\cite{Sawhney2020Deep}: It fuses chaotic temporal signals from financial data, social media and stock relations in a hierarchical fashion to predict future stock movement.\n\n$\\bullet$ STHAN-SR \\cite{Sawhney2021Stock}: It uses hypergraph and temporal Hawkes attention mechanism to rank stocks with only historical price data and explicit firm relations. We only need to slightly modify the objective function of MAN-SF to predict future stock movement.\n\n$\\bullet$ AD-GAT \\cite{Cheng2021Modeling}: a SOTA method to use an attribute-driven graph attention network\nto capture attribute-sensitive momentum spillover of stocks, which can modeing market information space with feature interaction to further improve stock movement prediction\n\n\nThese baselines cover different model characters. \nSpecifically, the sequential-based LSTM \\cite{Hochreiter1997Long} and GRU \\cite{Cho2014Learning} can capture the time dependency of stock data, and the fused market signals were used as the input to the LSTM and GRU model. \nThe homogeneous GNNS-based GCN \\cite{Kipf2017Semi-supervised}, GAT \\cite{Velickovic2018Graph}, RGCN \\cite{Schlichtkrull2018Modeling}, HGT \\cite{Hu2020Heterogeneous}, MAN-SF \\cite{Sawhney2020Deep}, STHAN-SR \\cite{Sawhney2021Stock} and AD-GAT \\cite{Cheng2021Modeling} can capture the influence of related stocks based on the fused market signals and simple firm relations. 
\nNote that, for fair comparison, we do not select the methods that are incapable of dealing with all fused multi-modal market signals (i.e. historical price, media news and stock relations) as baselines. \n\n\n\n\n\n\n\n\\begin{table}[t]\n \\centering\n \\caption{Stock prediction results of different models.}\n \n \\label{table-model-comparison}\n \\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}}\n \\centering\n \n \\begin{tabular}{l||cc|cc}\n \\toprule \n \\multirow{2}*{\\bf{Methods} } &\\multicolumn{2}{c|}{\\bf CSI100E} &\\multicolumn{2}{|c}{\\bf CSI300E}\\\\\n \n &\\textbf{Accuracy}&\\textbf{AUC}&\\textbf{Accuracy} &\\textbf{AUC} \\\\\n \\midrule\n \\midrule\n LSTM \\cite{Hochreiter1997Long} & 51.14 & 51.33&51.78 &52.24 \\\\\n \n GRU \\cite{Cho2014Learning} & 51.66 & 51.46&51.11 &52.30 \\\\\n \\midrule\n GCN \\cite{Kipf2017Semi-supervised} & 51.58 &52.18&51.68 & 51.81 \\\\\n \n GAT \\cite{Velickovic2018Graph} & 52.17 & 52.78&51.40 & 52.24 \\\\\n \n RGCN \\cite{Schlichtkrull2018Modeling} & 52.33 & 52.69&51.79 & 52.59 \\\\\n \n HGT \\cite{Hu2020Heterogeneous} & 53.01 &52.51&51.70 & 52.19 \\\\\n \n \n \n \n MAN-SF \\cite{Sawhney2020Deep}& 52.86 & 52.23 &51.91 & 52.48\\\\\n \n STHAN-SR \\cite{Sawhney2021Stock}& 52.78 & 53.05 & \\underline{52.89}& 53.48\\\\\n \n AD-GAT \\cite{Cheng2021Modeling} &\\underline{54.56} & \\underline{55.46}&52.63 &\\underline{54.29}\\\\\n \\midrule\n \\textbf{\\textsc{DanSmp} (ours)} & \\textbf{57.75} & \\textbf{60.78} & \\textbf{55.79}& \\textbf{59.36} \\\\\n \\bottomrule\n \\end{tabular}\n \n\\end{table}\n\\subsection{Experimental Results and Analysis (RQ1)}\nTable \\ref{table-model-comparison} shows the evaluation results of the two datasets against nine state-of-the-art (SOTA) baselines, from which we observe that our proposed method outperforms all baselines for stock movement prediction in terms of all metrics on CSI100E and CSI300E. \nIt confirms the capability of our method in modeling the comprehensive market signal representations via dual attention networks.\n\n\\paragraph{\\textbf{Analysis.}}\n\n(1) The LSTM and GRU, which only consider historical prices and media news, perform largely worse than our method. The results indicate that the relation datas contribute to stock movement prediction and the proposed method can take full advantage of the relational information in MKG to improve performance. \n(2) The graph-based methods, such as GCN and GAT, are homogeneous GNNs which are incapable of modeling heterogeneous market graph. Although being able to model multi-relational graph, RGCN can not sufficiently encode bi-typed heterogeneous graph become of the fact that it ignores the heterogeneity of node attributes and calculates the importance of neighbors within the same relation based on predefined constants.\nHGT focuses on handling web-scale heterogeneous graphs via graph sampling strategy, which thus is prone to overfitting when dealing with relative sparse MKG. HGT can not learns multi-level representation by sufficiently utilize interactions between two types of nodes.\nWe believe that is the reason they perform worse than our model \\textsc{DanSmp} which is pertinently designed to model bi-typed hybrid-relational MKG.\n(3) The proposed \\textsc{DanSmp} consistently outperforms three other SMP competitors, including AD-GAT, STHAN-SR and MAN-SF.\nSpecifically, it exceeds the second place by approximately 3.19$\\%$ and 5.32$\\%$ in terms of Accuracy and AUC in CSI100E, and 3.16$\\%$ and 5.07$\\%$ in CSI300E. 
The results clearly demonstrate the effectiveness of \\textsc{DanSmp} and the explicit relation and executives relation are meaningful for stock movement prediction. \n\n\n\n\n\\begin{table}[t]\n \\centering\n \\caption{Profitability of all methods in back-testing.}\n \n \\label{table-as-four}\n \\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}}\n \\centering\n \n \\begin{tabular}{l||rr|rr}\n \\toprule \n \\multirow{2}*{\\bf{Methods} } &\\multicolumn{2}{c|}{\\bf CSI100E} &\\multicolumn{2}{|c}{\\bf CSI300E}\\\\\n \n &\\textbf{IRR}&\\textbf{SR}&\\textbf{IRR} &\\textbf{SR} \\\\\n \\midrule\n \\midrule\n LSTM \\cite{Hochreiter1997Long} &-4.57\\% & -2.1713&-0.38\\% &-0.326 \\\\\n \n GRU\\cite{Cho2014Learning} & -2.55\\% & -1.053&-3.73\\% &-1.197 \\\\\n \\midrule\n GCN \\cite{Kipf2017Semi-supervised} & 1.59\\% &0.719&3.55\\% & 1.873 \\\\\n \n GAT \\cite{Velickovic2018Graph} & 0.3\\% & 0.050&-1.82\\% & -1.121 \\\\\n \n RGCN \\cite{Schlichtkrull2018Modeling}& 6.41\\% & 3.789&-3.64\\% & -1.905 \\\\\n \n HGT\\cite{Hu2020Heterogeneous} & 2.54\\% &1.716&0.36\\% & 0.076 \\\\\n \n \n \n \n MAN-SF\\cite{Sawhney2020Deep}& -2.91\\% & -1.590 &1.38\\% & 0.604\\\\\n \n STHAN-SR\\cite{Sawhney2021Stock}& -0.12\\% & -0.092 & 5.41\\%& 1.565\\\\\n \n AD-GAT \\cite{Cheng2021Modeling} &2.34\\% & 1.190&15.12\\% &4.081\\\\\n \\midrule\n \\textbf{\\textsc{DanSmp} (ours)} & \\textbf{10.18\\%} & \\textbf{4.112} & \\textbf{16.97\\%}& \\textbf{4.628} \\\\\n \\bottomrule\n \\end{tabular}\n \n\\end{table}\n\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{simulation.png}\\\\\n \\caption{Profitability analysis on CSI100E and CSI300E.}\n \n \n \\label{figure-investment-simulation}\n\\end{figure}\n\n\n\\subsection{Investing simulation (RQ2)}\nTo test whether \na model can make a profit, we set up a back-testing via simulating the stock investment in CSI100E and CSI300E over the test period, during which the CSI100 and CSI300 index increased by 4.20\\% and 5.14\\% (from 4144.05 to 4317.93 and 3896.31 to 4096.58), respectively. Specifically, the top-15 stocks with the highest predicted ranking score in each model are bought and held for one day. We choose RMB 10,000 as the investment budget, and take into account a transaction cost of 0.03\\% when calculating the investment return rate, which is in accordance with the stock market practice in China. The cumulative profit will be invested into the next trading day.\nFrom Table \\ref{table-as-four} and Figure \\ref{figure-investment-simulation}, we can find that \\textsc{DanSmp} achieves approximately stable and continuous positive returns throughout the back-testing. Particularly, the advantage of \\textsc{DanSmp} over all baselines mainly lies in its superior performance when the stock market is in a bear stage. The proposed \\textsc{DanSmp} achieves significantly higher returns than all baselines with the cumulative rate of 10.18\\% and 16.97\\% in CSI100E and CSI300E. In addition, \\textsc{DanSmp} results in a more desirable risk-adjusted return with Sharpe Ratio of 4.113 and 4.628 in CSI100E and CSI300E, respectively. These results further demonstrate the superiority of the proposed method in terms of the trade-off between the return and the risk.\n\n\n\n\n\n\n\n\n\n\\subsection{Ablation Study (RQ3)}\nTo examine the usefulness of each component in \\textsc{DanSmp}, we conduct ablation studies on CSI100E and CSI300E. We design four variants: \n(1) \\textbf{\\textsc{DanSmp} w\/o executives}, which deletes the executive entities. 
MKG is degraded into a simple uni-type knowledge graph.\n(2) \\textbf{\\textsc{DanSmp} w\/o implicit relation}, which removes the implicit relation when we model the stock momentum spillover effect.\n(3) \\textbf{\\textsc{DanSmp} w\/o explicit relation}, which deletes the explicit relations and only use the implicit relation to predict stock movement.\n(4) \\textbf{\\textsc{DanSmp} w\/o dual}, which replaces the dual attention module by conventional attention mechanism and does not distinguish the node intra-class and inter-class relation. \n\nFrom Table \\ref{table-as-three}, we observe that \nremoving any component of \\textsc{DanSmp} would lead to worse results. \nThe effects of the four components vary in different datasets, but all of them contribute to improving the prediction performance. \nSpecifically, removing executives relations and implicit relations leads to the most performance drop, compared to the other two, which means a company can influence the share price of other companies through interactions between executives. In contrast, using the conventional attention mechanism produces the least performance drop. Compared with conventional attention mechanism, the dual attention module enables \\textsc{DanSmp} to adaptively select more important nodes and relations.\nThis finding further proves that the proposed \\textsc{DanSmp} fully leverages bi-typed hybrid-relational information in MKG via dual mechanism for better stock prediction.\n\n\\begin{table}[t]\n \\centering\n \\caption{The ablation study over \\textsc{DanSmp}.}\n \\label{table-as-three}\n \\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}}\n \\centering\n \n \\begin{tabular}{l||cc|cc}\n \\toprule \n \\multirow{2}*{\\bf{Variants} } &\\multicolumn{2}{c|}{\\bf CSI100E} &\\multicolumn{2}{|c}{\\bf CSI300E}\\\\\n \n &\\textbf{Accuracy}&\\textbf{AUC}&\\textbf{Accuracy} &\\textbf{AUC} \\\\\n \\midrule\n \n\n \\textsc{DanSmp} & \\textbf{57.75} &\\textbf{60.78}& \\textbf{55.79}& \\textbf{59.36} \\\\\n \\midrule\n \\ w\/o executives &52.71 & 52.62&53.80&55.88 \\\\\n \\ w\/o implicit rel. &53.52 & 54.38&52.13&53.37 \\\\\n \\ w\/o explicit rel. &55.12 & 57.05&54.10&55.49 \\\\\n \\midrule\n \\ w\/o dual &56.12 & 58.60&55.43&57.85 \\\\\n\n \\bottomrule\n \\end{tabular}\n \n\\end{table}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{2.png}\\\\\n \\caption{The presentation of the learned attention scores of \\textsc{DanSmp} on CSI100E and CSI300E. Here, IC denotes the industry category; BP stands for the business partnership; IV denotes the investment; SC is the supply chain; IR denotes the implicit relation.}\n \n \n \\label{figure-attention-scores}\n\\end{figure}\n\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.95\\textwidth]{Figure4_paremeters.pdf}\n \\caption{Sensitivity to parameters $T$ and $\\eta$.}\n \n \n \\label{figure-parameter-analysis-1}\n\\end{figure}\n\n\\subsection{Analysis of Firm Relation (RQ4)}\nTo investigate the impact of using different types of relations for stock prediction, we show the learned attention scores of our model in Figure \\ref{figure-attention-scores}. The attention score is learned parameter for different firm relations. 
Some main findings are as follows: \n(1) We can observe from Figure \\ref{figure-attention-scores} that the implicit relation gains a larger attention weight than the other relations, which indicates that the implicit relation contains valuable information and is helpful for stock movement prediction. \n(2) The industry category, supply chain and investment relations get almost the same attention scores, and all of them contribute to the\nperformance of the model. \n(3) Compared with the other relations, the business partnership has the lowest score. Although the number of business partnership edges is greater than that of the investment and supply chain relations, the relatively dense business partnership relation may carry some noise, which adds irrelevant information to the representations of target nodes. The results further demonstrate the necessity of considering the implicit relation in modeling the momentum spillover effect. In addition, our model can adaptively weight important company relations to obtain better representations of target nodes, which can improve the performance of the model for stock movement prediction.\n\n\\subsection{Parameter Sensitivity Analysis (RQ4)}\nWe also investigate the sensitivity of \\textsc{DanSmp} to two parameters. We report the results of \\textsc{DanSmp} under different parameter settings on CSI100E and CSI300E in Figure \\ref{figure-parameter-analysis-1}.\n\n\\noindent \\textbf{Lookback window size $T$.} \nWe analyze the performance variation with different lookback window sizes $T$ in Figure \\ref{figure-parameter-analysis-1} (a). Our model performs best when $T$ is set to about 20 in both datasets.\n\n\\noindent \\textbf{Implicit relation threshold $\\eta$.} \nThe results of our model with different implicit relation thresholds are reported in Figure \\ref{figure-parameter-analysis-1} (b). \nOn CSI100E, the performance of our model grows with the increment of $\\eta$ and peaks when $\\eta$ is set to 0.0054. On CSI300E, the performance rises at first and then drops gradually as $\\eta$ increases. When $\\eta$ becomes too large, the performance decreases, possibly because some meaningful implicit edges are neglected.\n\n\n\n\\section{Conclusion and Future Work}\n\\label{section-conclusion}\nIn this paper, we focus on the stock movement prediction task. To model stock momentum spillover in real financial markets, we first construct a novel bi-typed hybrid-relational market knowledge graph. Then, we propose novel Dual Attention Networks, which are equipped with both an inter-class attention module and an intra-class attention module, to learn the stock momentum spillover features on the newly constructed MKG. To evaluate our method, we construct two new datasets, CSI100E and CSI300E. The empirical experiments on the constructed datasets demonstrate that our method can successfully improve stock prediction with the bi-typed hybrid-relational MKG via the proposed \\textsc{DanSmp}.\nThe ablation studies reaffirm that the performance gain mainly comes from the use of the associated executives and of the additional implicit relations between companies in the MKG.\n\nAn interesting future work direction is to explore web media about the executives, including: (i) negative facts from news, such as accusations of crime or health issues; (ii) improper speech on social media, such as Twitter and Weibo.
We believe these factual event information of executives can be detected and utilized to feed into graph-based methods for better SMP performance.\n\n\\begin{acks}\nThe authors would like to thank all anonymous reviewers in advance.\nThis research has been partially supported by grants from the National Natural Science Foundation of China under Grant No. 71725001, 71910107002, 61906159, 62176014, U1836206, 71671141, 71873108, 62072379, the State key R \\& D Program of China under Grant No. 2020YFC0832702, the major project of the National Social Science Foundation of China under Grant No. 19ZDA092, and the Financial Intelligence and Financial Engineering Key Laboratory of Sichuan Province.\n\\end{acks}\n\n\\bibliographystyle{ACM-Reference-Format}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nPhysiological information about neural structure and activity \nwas employed from the very beginning to construct effective mathematical models of \nbrain functions. Typically, \nneural networks were introduced as assemblies of elementary dynamical units, that interact with \neach other through a graph of connections \\cite{boccaletti}. Under the stimulus of experimental investigations, \nthese models have been including finer and finer details.\nFor instance, the combination of complex single--neuron dynamics, delay and plasticity in\nsynaptic evolution, endogenous noise and specific network topologies revealed quite crucial\nfor reproducing experimental observations, like the spontaneous emergence of synchronized\nneural activity, both {\\sl in vitro} (see, e.g., \\cite{volman}) and {\\sl in vivo}, and the appearance \nof peculiar fluctuations, the so--called ``up--down\" states, in\ncortical sensory areas \\cite{miguel,torres}. \n\n\nSince the brain activity is a dynamical process, its statistical\ndescription needs to take into account time as an intrinsic variable. \nAccordingly, non--equilibrium statistical mechanics should be the proper \nconceptual frame, where effective models of collective brain activity should be casted in. \nMoreover, the large number of units and the redundancy of connections suggest that\na mean--field approach can be the right mathematical tool for\nunderstanding the large--scale dynamics of neural network\nmodels. Several analytical and numerical investigations have been devoted to mean field approaches to \nneural dynamics. In particular, stability analysis of asynchronous states in globally coupled networks and collective observables in highly connected sparse network can be deduced in relatively simple neural network models through mean field techniques \\cite{mfbrunel,mfcessac,mfbress,polmf,millman}.\n\nIn this paper we provide a detailed account of a mean--field approach, that\nhas been inspired by the \n``heterogeneous mean--field\" (HMF) formulation,\nrecently introduced for general interacting networks \\cite{vespignani,mendes}.\nThe overall method is applied here to the simple case \nof random networks of leaky integrate--and--fire (LIF) excitatory neurons in the\npresence of synaptic plasticity. 
On the other hand, it can be applied\nto a much wider class of neural network models, based on a similar mathematical\nstructure.\n\nThe main advantages of the HMF method are the following: ({\\sl i}) it can identify the \nrelation between the dynamical properties of the global ({\\sl synaptic}) activity \nfield and the network topology, ({\\sl ii}) it allows one to establish under which conditions partially\nsynchronized or irregular firing events may appear , ({\\sl iii}) it provides a solution to the inverse \nproblem of recovering the network structure from the features of the global activity\nfield.\n\nIn Section \\ref{sec2}, we describe the network model of excitatory LIF neurons with\nshort--term plasticity. The dynamical properties of the model are discussed at\nthe beginning of Section \\ref{sec3}. In particular, we recall that the random structure of the\nnetwork is responsible for the spontaneous organization of neurons in two\nfamilies of {\\sl locked} and {\\sl unlocked} ones \\cite{DLLPT}. In the rest of this Section \nwe summarize how to define a {\\sl heterogeneous thermodynamic limit}, that\npreserves the effects of the network randomness and allows one to transform\nthe original dynamical model into its HMF representation\n\\cite{BCDLV}). The HMF equations provide a relevant computational advantage with respect\nto the original system. Actually, they describe the dynamics\nof classes of equal--in--degree neurons, rather than that of individual neurons.\nIn practice, one can take advantage of a suitable sampling, according to its probability\ndistribution, of the continuous\nin--degree parameter present in the HMF formulation.\nFor instance, by properly \"sampling\" the HMF model into 300 equations one can \nobtain an effective description of the dynamics engendered by a random\nErd\\\"os--Renyi network made of ${\\mathcal O} (10^4)$ neurons.\n\nIn Section \\ref{sec4} we show that the HMF formulation\nallows also for a clear interpretation of the presence of classes of {\\sl locked} and {\\sl unlocked}\nneurons in QSE: they correspond to the presence of a {\\sl fixed point} or of an {\\sl intermittent-like} map of the \nreturn time of firing events, respectively. Moreover, we analyze in details the stability properties of the model and we find that any finite sampling of the\nHMF dynamics is chaotic, i.e. it is characterized by a positive maximum Lyapunov exponent,\n$\\lambda_{\\mathrm max}$. Its value depends indeed on the finite sampling\nof the in--degree parameter. On the other hand, chaos is found to be relatively weak and, when the number \nof samples, $M$, is increased,\n$\\lambda_{\\mathrm max}$ vanishes with a power--law decay, $M^{-\\gamma}$, with $\\gamma \\sim 1\/2$.\nThis is consistent with the mean--field like nature of the HMF equations: in fact, it\ncan be argued that, in the thermodynamic limit, any chaotic component of the dynamics\nshould eventually disappear, as it happens for the original LIF model, when a naive\nthermodynamic limit is performed \\cite{DLLPT}.\n\nIn Section \\ref{sec5} we analyze the HMF dynamics for networks with different topologies (e.g., Erd\\\"os--Renyi\nand in particular scale free). We find that the dynamical phase characterized by QSE is robust with\nrespect to the network topology and it can be observed only if the variance of the considered\nin--degree distributions is sufficiently small. 
In fact, quasi-synchronous events are\nsuppressed for too broad in--degree distributions, thus yielding a transition between \na fully asynchronous dynamical phase and a quasi-synchronous one, controlled by the\nvariance of the in--degree distribution. In all the cases analyzed in this Section, we find\nthat the global synaptic--activity field characterizes completely\nthe dynamics in any network topology. \n\nAccordingly, the HMF formulation appears as an\neffective algorithmic tool \nfor solving the following {\\sl inverse problem}: given a global\nsynaptic--activity field, which kind of network topology \nhas generated it? In Section \\ref{sec6}, after a summary of the\nnumerical procedure used to solve such an inverse problem, we analyze \nthe robustness of the method in two circumstances: $a)$ when a noise is \nadded to the average synaptic--activity field, and $b)$ when there are\nnoise and disorder in the external currents.\n\nSuch robustness studies are particularly relevant in view of applying this strategy to\nreal data obtained from experiments. \nFinally, in Section \\ref{sec7} we show that a HMF formulation can be \nstraightforwardly extended to non--massive networks, i.e. random networks, where the in--degree does not\nincrease proportionally to the number of neurons. In this case the relevant quantity\nin the HMF-like formulation is the average value of the in--degree\ndistribution, and the HMF equations are expected to reproduce confidently the dynamics of\nnon--massive networks, provided this average is sufficiently large.\nConclusions and perspectives are contained in Section \\ref{sec8}.\n\n\n\n\\section{The model}\\label{sec2}\nWe consider a network of $N$ excitatory LIF neurons\ninteracting via a synaptic current and regulated by short--term plasticity, \naccording to a model introduced in \\cite{tsodyksnet}. \nThe membrane potential $V_j$ of each neuron evolves in time following the\ndifferential equation\n\\begin{equation}\n\\label{eq1}\n\\tau_\\mathrm{m} \\dot V_j= E_{\\mathrm{c}} -V_j + R_\\mathrm{in}I_{\\mathrm{syn}}(j)\\, ,\n\\end{equation}\nwhere $\\tau_\\mathrm{m}$ is the membrane time constant, \n$R_{\\mathrm{in}}$ is the membrane resistance,\n$I_{\\mathrm{syn}}(j)$ is the synaptic current received by neuron $j$ from \nall its presynaptic neurons (see below\nfor its mathematical definition) and \n$E_{\\mathrm{c}}$ is the contribution of an external current \n(properly multiplied by a unit resistance). \n \nWhenever the potential $V_j(t)$ reaches the threshold value $V_{\\mathrm{th}}$, it is \nreset to $V_{\\mathrm{r}} $, and a spike is sent towards the postsynaptic neurons. \nFor the sake of simplicity the spike is assumed to be a $\\delta$--like function of time. \nAccordingly, the spike--train $S_j(t)$ produced by neuron $j$, is defined as,\n\\begin{equation}\n\\label{eq2}\nS_j(t)=\\sum_m \\delta(t-t_{j}(m)),\n\\end{equation}\nwhere $t_{j}(m)$ is the time when neuron $j$ fires its $m$-th spike.\n\nThe transmission of the spike--train $S_j(t)$ is mediated by the\nsynaptic dynamics.\nWe assume that all efferent synapses of a given neuron follow \nthe same evolution (this is justified in so far as no inhibitory \ncoupling is supposed to be present). The state of the $i$-th synapse is\ncharacterized by three variables, $x_i$, $y_i$, and $z_i$, which represent the\nfractions of synaptic transmitters in the recovered, active, and inactive state, \nrespectively ($x_i+y_i+z_i=1$) \\cite{plast1,plast2,tsodyksnet}. 
\nThe evolution equations are\n\\begin{align}\n\\label{dynsyn}\n& \\dot y_{i} = -\\frac{y_{i}}{\\tau_{\\mathrm{in}}} +ux_{i}S_i\\\\\n\\label{contz}\n& \\dot z_{i} = \\frac{y_{i}}{\\tau_{\\mathrm{in}}} - \\frac{z_{i}}{\\tau_{\\mathrm{r}}} \\ .\n\\end{align} \nOnly the active transmitters react to the incoming spikes: the parameter $u$ \ntunes their effectiveness. Moreover, $\\tau_{\\mathrm{in}}$ is the characteristic decay time of the\npostsynaptic current, while $\\tau_{\\mathrm{r}}$ is the recovery time from synaptic depression. \nFor the sake of simplicity, we assume also that all parameters appearing in the above \nequations are independent of the neuron indices. \nThe model equations are finally closed, by representing the synaptic current \nas the sum of all the active transmitters delivered to neuron $j$ \n\\begin{equation}\nI_{\\mathrm{syn}}(j) = \\frac{ G}{N}\\sum_{i\\ne j} \\epsilon_{ij}y_i,\n\\label{input}\n\\end{equation}\nwhere $G$ is the strength of the synaptic coupling (that we assume \nindependent of both $i$ and $j$), while $\\epsilon_{ij}$ is the directed connectivity\nmatrix whose entries are set equal to 1 or 0 if the presynaptic neuron\n$i$ is connected or disconnected with the postsynaptic neuron $j$, respectively. \nSince we suppose the input resistance\n$R_{\\mathrm{in}}$ independent of $j$, it can be included into $G$.\nIn this paper we study the case of excitatory coupling between neurons,\ni.e. $G > 0$. We assume that each neuron\nis connected to a macroscopic number, \n${\\mathcal O}(N)$, of pre-synaptic neurons: this is the reason why the sum is divided by the \nfactor $N$. \nTypical values of the parameters contained in the model have phenomenological\norigin \\cite{volman,tsodyksnet}. Unless otherwise stated, we adopt the following set of values: \n$\\tau_\\mathrm{in} = 6$ ms, \n$\\tau_\\mathrm{m} = 30$ ms, $\\tau_\\mathrm{r} = 798$ ms, \n$V_{\\mathrm{r}} = 13.5$ mV, $V_{\\mathrm{th}} = 15$ mV, \n$E_{\\mathrm{c}} =15.45$ mV, ${ G} = 45$ mV and $u = 0.5$.\nNumerical simulations can be performed much more effectively by introducing\ndimensionless quantities,\n\\begin{align}\n& a = \\frac{E_c-V_{\\mathrm{r}}}{V_{\\mathrm{th}}-V_{\\mathrm{r}}}\\\\\n& g = \\frac{G}{V_{\\mathrm{th}}-V_{\\mathrm{r}}}\\\\\n& v=\\frac{V-V_\\mathrm{r}}{V_{\\mathrm{th}}-V_\\mathrm{r}},\n\\end{align} \nand by rescaling time, together with all the other temporal parameters, in units of the membrane time \nconstant $\\tau_\\mathrm{m}$ \n(for simplicity, we leave the notation unchanged after rescaling). The values of the\nrescaled parameters are:\n$\\tau_\\mathrm{in} = 0.2$, $\\tau_{\\mathrm{r}} = 133\\tau_{\\mathrm{in}}$, $v_{\\mathrm{r}} = 0$, \n$v_{\\mathrm{th}} = 1$, $a= 1.3$, $g = 30$ and $u = 0.5$. \nAs to the normalized external current $a$, its value for the first part of our analysis corresponds to the firing regime for neurons.\nWhile the rescaled Eqs. (\\ref{dynsyn}) and (\\ref{contz}) keep the same form, Eq.~(\\ref{eq1}) \nchanges to,\n\\begin{equation}\n\\label{eq1n}\n\\dot v_j= a -v_j + \\frac{g}{N} \\sum_{i \\ne j} \\epsilon_{ij} y_i \\, .\n\\end{equation}\nA major advantage for numerical simulations comes from the possibility of transforming \nthe set of differential equations (\\ref{dynsyn})--(\\ref{input}) and (\\ref{eq1n})\ninto an event--driven map (for details see \\cite{DLLPT} and also \\cite{brette,zill}). 
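\n\nAs a purely illustrative complement, the rescaled equations can be integrated with a simple Euler scheme, as in the following Python sketch; this is not the event--driven map used for the simulations discussed below, and the time step, duration and initial conditions are arbitrary choices.\n\\begin{verbatim}\nimport numpy as np\n\ndef simulate_network(eps, a=1.3, g=30.0, u=0.5, tau_in=0.2,\n                     tau_r=0.2 * 133, dt=1e-3, steps=100000, seed=0):\n    # eps[i, j] = 1 if neuron i is presynaptic to neuron j (the i != j\n    # constraint of the model is omitted here for brevity).\n    rng = np.random.default_rng(seed)\n    N = eps.shape[0]\n    v = rng.random(N)            # membrane potentials, v_r = 0, v_th = 1\n    y = np.zeros(N)              # active transmitters\n    z = np.zeros(N)              # inactive transmitters\n    spikes = []\n    for n in range(steps):\n        field = eps.T @ y / N                  # synaptic input of each neuron\n        v += dt * (a - v + g * field)\n        z += dt * (y / tau_in - z / tau_r)\n        y += dt * (-y / tau_in)\n        fired = v >= 1.0\n        if fired.any():\n            v[fired] = 0.0                     # reset to v_r\n            y[fired] += u * (1.0 - y[fired] - z[fired])   # delta spike acts on x\n            spikes.extend((n * dt, i) for i in np.flatnonzero(fired))\n    return spikes\n\\end{verbatim}\n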
\n\n\n\\section{Dynamics and heterogeneous mean field limit}\\label{sec3}\n \n\nThe dynamics of the fully coupled neural network (i.e., $\\epsilon_{ij}=1, \\, \\forall i,j$),\ndescribed by Eq.s (\\ref{eq1n}) and (\\ref{eq2})--(\\ref{input}), converges to a periodic synchronous state, \nwhere all neurons fire simultaneously and the period depends on the model parameters \\cite{DLLPT}. \nA more interesting dynamical regime appears when some disorder is introduced in the network structure.\nFor instance, this can be obtained by maintaining each link between neurons with probability\n$p$, so that the in-degree of a neuron (i.e. the number of presynaptic connections acting on it)\ntakes the average value $\\langle k_i \\rangle = p N$, and the\nstandard deviation of the corresponding in-degree distribution is given by the relation $\\sigma_{k} = \\sqrt{Np(1-p)}$.\nIn such an Erd\\\"os-Renyi random network one typically \nobserves quasi--synchronous events (QSE), where a large fraction of neurons fire in a short\ntime interval of a few milliseconds, separated by an irregular firing activity lasting over some tens of ms\n(e.g., see \\cite{DLLPT}). \nThis dynamical regime emerges as a collective phenomenon, where neurons separate spontaneously into\ntwo different families: the {\\it locked} and the {\\sl unlocked} ones. \nLocked neurons determine the QSE and exhibit a periodic behavior, with a common period but different phases.\nTheir in--degree $k_i$ ranges over a finite interval below the average value $\\langle k_i \\rangle$.\nThe unlocked ones participate to the irregular firing activity and exhibit a sort of intermittent evolution \\cite{DLLPT}.\nTheir in-degree is either very small or higher than $\\langle k_i \\rangle$.\n\n\nAs the dynamics is very sensitive to the different values of of $k_i $, in a recent publication \\cite{BCDLV} we have shown that one can design a\n{\\it heterogeneous mean-field} (HMF) approach by a suitable\nthermodynamic limit preserving, for increasing values of $N$, the main features associated with topological disorder. \nThe basic step of this approach is the\nintroduction of a probability distribution, $P(\\tilde k)$, for the\nnormalized in-degree variable ${\\tilde k} = k\/N$, where the average $\\langle \\tilde k \\rangle$\nand the variance $\\sigma_{\\tilde k}^2 = \\langle {\\tilde k}^2 \\rangle - \\langle \\tilde k \\rangle^2$ \nare fixed independently of $N$. \nA realization of the random network containing $N$ nodes (neurons) \nis obtained by extracting for each neuron $i$ ($i =1, \\cdots , N$) a value $\\tilde k_i$ from $P(\\tilde k)$, and \nby connecting the neuron $i$ with $\\tilde k_i N$ randomly chosen neurons (i.e., $\\epsilon_{i,j} = 1$, $j(i)= 1, \\cdots , \\tilde k_i N $). \nFor instance, one can consider a suitably normalized Gaussian--like distribution \ndefined on the compact support, $\\tilde k \\in (0,1]$,\ncentered around $\\langle \\tilde k \\rangle$ with a sufficiently small value of the standard deviation \n$\\sigma_{\\tilde k}$, so that the tails of the distribution vanish at the boundaries of the support.\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.35]{newrp.eps}\n\\caption{\nRaster plot of a randomly diluted network containing 500 \nneurons, ordered along the\nvertical axis according to their in--degree. The distribution $P(\\tilde k)$ is a Gaussian with $\\langle \\tilde k \\rangle =0.7$, standard deviation $\\sigma_{\\tilde k} =0.077$. A black dot in the raster plot indicates that\nneuron $s$ has fired at time $t$. 
The red line is the global field $Y(t)$ and the green curve is its analytic fit by the function $Y_f(t)=\nAe^{-\\frac{t}{\\tau_1}}+B(e^{\\frac{t}{\\tau_2}}-1)$, that repeats over each \nperiod of $Y(t)$; the parameter values are $A=2 \\cdot 10^{-2}$, $B=3.56\\cdot 10^{-6}$, $\\tau_1=0.268$ and $\\tau_2=0.141$. Notice that the amplitude of both $Y(t)$ and $Y_f(t)$ has been suitably rescaled to be appreciated on the same scale of the Raster plot.}\n\\label{rp1}\n\\end{figure}\n\nIn Fig.\\ref{rp1} we show the raster plot for a network of $N=500$ neurons and a Gaussian distribution $P(\\tilde k)$ \nwith $\\langle \\tilde k \\rangle=0.7$ and $\\sigma_{\\tilde k}=0.077$. One can observe\na quasi-synchronous dynamics characterized by the presence of locked\nand unlocked neurons, and such a distinctive dynamical feature is preserved\nin the thermodynamic limit \\cite{BCDLV}. For example the time average of the inter--spike time interval between firing events of each neuron, (in formulae $ISI_m=t_m-t_{m-1}$, where the integer $m$ labels the $m$-th firing event) as a function of the connectivity $\\tilde k$ is, apart from fluctuations, the same for each network size $N$. This confirms that the main features of the dynamics are maintained for increasing values of $N$. \n\n\n\n\n\n\n The main advantage of this approach is that one can \nexplicitly perform the limit $N\\to \\infty$ on the set of equations\n(\\ref{eq1n}) and (\\ref{eq2})--(\\ref{input}), thus\nobtaining the corresponding HMF equations:\n\n\\begin{align}\n\\label{vk}\n&\\dot v_{\\tilde k}(t)= a -v_{\\tilde k}(t) + g\\tilde kY(t)\\\\\n\\label{sk}\n&S_{\\tilde k}(t) = \\sum_m \\delta(t-t_{\\tilde k}(m)) \\\\\n\\label{yk}\n& \\dot y_{\\tilde k}(t) = -\\frac{y_{\\tilde k}(t)}{\\tau_{\\mathrm{in}}} +u(1-y_{\\tilde k}(t)-z_{\\tilde k}(t))S_{\\tilde k}(t)\\\\\n\\label{zk}\n& \\dot z_{\\tilde k}(t) = \\frac{y_{\\tilde k}(t)}{\\tau_{\\mathrm{in}}} - \\frac{z_{\\tilde k}(t)}{\\tau_{\\mathrm{r}}}\\\\\n\\label{meanfield}\n&Y(t)=\\int_{0}^{1}P(\\tilde k) y_{\\tilde k}(t)d\\tilde k .\n\\end{align}\nThe dynamical variables depend now on the continuous in--degree index $\\tilde k$, and\nthis set of equations represents the dynamics of equivalence classes of neurons. In fact, in this HMF formulation,\nneurons with the same $\\tilde k$ follow the same evolution\n\\cite{vespignani, mendes}. In practice, Eq.s (\\ref{vk})--(\\ref{meanfield}) can be integrated numerically by sampling the \nprobability distribution $P(\\tilde k)$: one can subdivide the support $(0,1]$ of $\\tilde k$ \nby $M$ values $\\tilde k_i \\,\\,\\, (i=1,\\cdots , M)$, in such a way that $\\int_{\\tilde k_i}^{\\tilde k_{i+1}}P(\\tilde k)d\\tilde k$ is \nconstant (importance sampling). 
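\n\nA possible implementation of this importance sampling, written for a Gaussian $P(\\tilde k)$ truncated to $(0,1]$ and with a numerically tabulated cumulative distribution, is sketched below (the grid size and the parameter values are illustrative):\n\\begin{verbatim}\nimport numpy as np\n\ndef importance_sampling(M=300, k_mean=0.7, k_std=0.077, grid=100000):\n    # Return M values of the normalized in-degree such that consecutive\n    # values enclose the same probability mass of the distribution.\n    k = np.linspace(1e-6, 1.0, grid)\n    p = np.exp(-0.5 * ((k - k_mean) / k_std) ** 2)   # unnormalized density\n    cdf = np.cumsum(p)\n    cdf /= cdf[-1]                                   # numerical CDF on (0, 1]\n    levels = (np.arange(M) + 0.5) / M                # equally spaced quantiles\n    return np.interp(levels, cdf, k)                 # k_i = F^{-1}(levels)\n\\end{verbatim}\n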
Notice that the integration of the discretized HMF equations is much less\ntime-consuming than the simulations performed on a random network.\nFor instance, numerical tests indicate that \nthe dynamics of a network with $N=10^4$ neurons can be confidently reproduced by an importance sampling with $M= 300$.\n\nThe effect of the discretization of ${\\tilde k}$ on the HMF dynamics can be analyzed\nby considering the distance $d(Y_{M_1}(t),Y_{M_2}(t))$ between the global\nactivity fields $Y_{M_1}(t)$ and $Y_{M_2}(t)$ (see Eq.(\\ref{meanfield})) obtained for two different values $M_1$ and $M_2$ of the sampling,\n i.e.:\n\\begin{equation}\nd(Y_{M_1}(t),Y_{M_2}(t))=\\Bigg(\\frac{1}{T}\\sum_{i =1}^T\\frac{(Y_{M_1}(t_i)-Y_{M_2}(t_i))^2}{Y_{M_1}(t_i)^2}\\Bigg)^{\\frac{1}{2}}.\n\\end{equation}\nIn general $Y(t)$ exhibits a quasi-periodic behavior and $d(Y_{M_1}(t),Y_{M_2}(t))$ is evaluated over a time interval equal to its period $T$.\nIn order to avoid an overestimation of $d(Y_{M_1}(t),Y_{M_2}(t))$\ndue to different initial conditions, the field $Y_{M_2}(t)$ is suitably translated in time in order to make its\nfirst maximum coincide with the first maximum of $Y_{M_1}(t)$ in the time interval $[1,T]$. \nIn Fig. \\ref{scarto} we plot $d_M=d(Y_M,Y_{M\/2})$ as a function of $M$. We find that $d_M\\sim 1\/\\sqrt M$,\nthus confirming that the finite size simulation of the HMF dynamics is consistent with the HMF model ($M\\to \\infty$).\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.35]{scarto.eps}\n\\caption{(Color online) The effect of sampling the probability distribution\n$P(\\tilde k)$ with $M$ classes of neurons in the HMF dynamics. Finite size\neffects are controlled by plotting the distance\nbetween the activity fields obtained for two sampling values $M$ and $M\/2$, $d_M=d(Y_M(t),Y_{M\/2}(t))$ (defined in the text), vs. $M$. The red dashed line is \nthe power law $1\/\\sqrt M$. Data are obtained for a Gaussian distribution $P(\\tilde k)$, with $\\langle \\tilde k \\rangle=0.7$ and $\\sigma_{\\tilde k}=0.077$.}\n\\label{scarto}\n\\end{figure} \n \nAs a final remark, notice that the presence of short--term synaptic plasticity \nplays a fundamental role in determining the partially synchronized regime.\nIn fact, numerical simulations show that the discretized HMF dynamics without plasticity, \ni.e. $Y(t) = \\int_{0}^{1}P(\\tilde k) S_{\\tilde k}(t)d\\tilde k$, \nconverges to a synchronous periodic dynamics for any value of $M$ \\cite{DL}.\n\n\\section{Stability analysis of the HMF dynamics}\n\\label{sec4}\n\nIn the HMF equations (\\ref{vk})--(\\ref{meanfield}) the dynamics of each neuron is\ndetermined by its in--degree $\\tilde k$ and by the global synaptic activity field $Y(t)$.\nFor the stability analysis of these equations, we follow a procedure \nintroduced in \\cite{tso_locked} and employed also in \\cite{BCDLV}.\nFor sufficiently large $M$ the discretized HMF dynamics allows one to \nobtain a precise fit of the periodic function $Y(t)$ and to estimate its period $T$. \nAs an instance of its periodic behavior, \nin Fig.\\ref{rp1} we report also $Y(t)$ (red line) and its fit (green line and the\nformula in the caption). The fitted field is exactly periodic and is a good approximation of the global field that\n one expects to observe in the mean field model corresponding to an infinite discretization $M$. As a result, the analysis performed using this periodic field refers to the dynamics of the HMF model, i.e. 
in the limit $M\\to \\infty$.\nUsing this fit, one can represent the dynamics of each class $\\tilde k$ of neurons \nby the discrete--time map \n\\begin{equation}\n\\label{mappa}\n\\tau_{\\tilde k}(n+1)=R_{\\tilde k}[ \\tau_{\\tilde k}(n)],\n\\end{equation}\nwhere $\\tau_{\\tilde k}(n) = | t_{\\tilde k}(n) - nT |$ is the modulus of the time difference \nbetween the $n$-th spike of neuron $\\tilde k$ and $nT$, i.e. the $n$-th QSE, that \nis conventionally identified by the corresponding maximum of $Y(t)$ (see Fig. \\ref{rp1}). \n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.35]{newmap1.eps}\n\\caption{The return map $R_{\\tilde k}$ of the rescaled variable \n$\\tau_{\\tilde k}\/T$ (see Eq.(\\ref{mappa})) for different values of \n$\\tilde k$, corresponding \nto lines of different colors (see the legend in the inset: the black line \nis the bisector of the square).\n}\n\\label{maps}\n\\end{figure}\n\nIn Fig. \\ref{maps} we show $R_{\\tilde k} $ for different values of $\\tilde k$. The map\nof each class of {\\sl locked} neurons has a stable fixed point, whose value decreases with \n$\\tilde k$. As a consequence, different classes of {\\sl locked} neurons share the\nsame periodic behavior, but exhibit different phase shifts\nwith respect to the maximum of $Y(t)$. This analysis describes in a clear \nmathematical language what is observed in simulations (see Fig \\ref{rp1}):\nequally periodic classes of locked neurons determine\nthe QSE by firing sequentially, over a very short time interval, that depends on\ntheir relative phase shift.\nIn general, the values of $\\tilde k$ identifying the family of locked neurons\nbelong to a subinterval $(\\tilde k_1, \\tilde k_2)$ of $(0,1]$: the values of \n$\\tilde k_1$ and $\\tilde k_2$ mainly depend on $P(\\tilde k)$ and on its \nstandard deviation $\\sigma_{\\tilde k}$ (more details are reported in \\cite{BCDLV}).\nFor what concerns {\\sl unlocked} neurons, $R_{\\tilde k} $ exhibits the features of an intermittent-like dynamics. In fact, unlocked neurons with $\\tilde k$\nclose to $\\tilde k_1$ and $\\tilde k_2$ spend a long time in an almost periodic\nfiring activity, contributing to a QSE, then they depart from it, firing irregularly\nbefore possibly coming back again close to a QSE. The duration of the irregular\nfiring activity of unlocked neurons typically increases for values of $\\tilde k$ far from\nthe interval $ (\\tilde k_1, \\tilde k_2 )$.\n\nUsing the deterministic map (\\ref{mappa}), one can tackle in full\nrigor the stability problem of the HMF model. The existence of stable\nfixed points for the locked neurons implies that they yield a negative\nLyapunov exponent associated with their periodic evolution.\n\nAs for the unlocked neurons, their Lyapunov\nexponent, $\\lambda_{\\tilde k}$, can be calculated numerically by \n the time-averaged expansion rate \nof nearby orbits of map (\\ref{mappa}):\n\\begin{equation}\n\\label{lyn}\n\\lambda_{\\tilde k}(n)= \\frac{1}{n} \\sum_{j=1}^n \\mathrm{log}\\Bigg[\\frac{|\\delta(j)|}{|\\delta(0)|}\\Bigg],\n\\end{equation}\nwhere $\\delta(0)$ is the initial distance between nearby orbits and\n$\\delta(j)$ is their distance at the $j$--th iterate, so that\n\\begin{equation}\n\\label{lyap}\n\\lambda_{\\tilde k}= \\lim_{n \\to \\infty} \\lambda_{\\tilde k}(n)\n\\end{equation}\nif this limit exists. 
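A minimal numerical sketch of this estimate (our own illustration, not taken from the original implementation) is reported below; the return map {\\tt R} is a placeholder for the map of Eq.~(\\ref{mappa}) obtained from the fitted field, and the variant used here renormalizes the separation of the two orbits at every step, in the spirit of the Benettin-type procedure mentioned later in the text.
\\begin{verbatim}
import numpy as np

def lyapunov_exponent(R, tau0, n_iter=100000, d0=1e-8):
    # Two-orbit estimate of the Lyapunov exponent of a 1-d map R:
    # follow a reference orbit and a perturbed one, accumulate the
    # logarithmic expansion rate, and renormalize the separation
    # back to d0 at every iterate.
    tau, acc = tau0, 0.0
    tau_pert = tau0 + d0
    for _ in range(n_iter):
        tau, tau_pert = R(tau), R(tau_pert)
        d = abs(tau_pert - tau)
        if d == 0.0:
            d = 1e-300  # avoid log(0) for exactly periodic orbits
        acc += np.log(d / d0)
        tau_pert = tau + d0 * (1 if tau_pert >= tau else -1)
    return acc / n_iter
\\end{verbatim}
For a locked class, starting from a small perturbation of its fixed point this estimate converges to a negative value, while for unlocked classes it stays positive and slowly decreases with the number of iterates, consistently with the behavior described below.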
The Lyapunov exponents for the unlocked component vanish as\n$\\lambda_{\\tilde k} (n) \\sim 1\/n$.\n According to these results, one expects that the maximum Lyapunov exponent\n $\\lambda_{\\mathrm{max}}(M)$ goes to zero in the limit $M \\to \\infty$. \nAt each finite $M$, $\\lambda_{\\mathrm{max}}$ can be evaluated by using the standard algorithm by Benettin et al.\n\\cite{BGGS}. \nIn Fig.\\ref{lyup_max} we plot $\\lambda_{\\mathrm{max}}$ as a function of the discretization parameter $M$: \n$\\lambda_{\\mathrm{max}}(M)$ is positive and\ndecays approximately as $M^{-\\gamma}$, with $\\gamma \\sim 1\/2$\n(actually, we find $\\gamma = 0.55$).\n\nThe scenario in any discretized version of the HMF dynamics is the following: \n{\\sl (i)} all {\\sl unlocked neurons} exhibit positive Lyapunov exponents, i.e. they represent\nthe chaotic component of the dynamics; {\\sl (ii)} $\\lambda_{\\mathrm{max}}$ is typically \nquite small, and its value depends on the discretization parameter\n$M$ and on $P(\\tilde k)$; {\\sl (iii)} in the limit $M\\to \\infty$ $ \\lambda_{\\mathrm{max}}$ and\nall $\\lambda_{\\tilde k}$'s of unlocked neurons vanish, thus converging to a quasi-periodic\ndynamics, while the {\\sl locked neurons} persist in their periodic behavior.\n\nThe same scenario is observed in the dynamics of random networks built with the\nHMF strategy, where the variance of the distribution $P(\\tilde k)$ is kept independent of the \nsystem size $N$, so that the fraction of locked neurons is constant.\n\nFor the LIF dynamics in an Erd\\\"os--Renyi random network with $N$ neurons, it was found that \n$ \\lambda_{\\mathrm{max}}(N) \\approx N^{- 0.27}$ in the limit $N\\to\\infty$ \\cite{DLLPT}. According to the argument proposed in \\cite{DLLPT}, the value of the power-law exponent is associated with the\n scaling of the number of unlocked neurons, $N_u$, with the system size $N$, namely $N_u \\sim N^{0.9}$.\nThe same argument applied to the HMF dynamics indicates that the exponent \n$\\gamma \\sim 1\/2$, ruling the vanishing of $\\lambda_{\\mathrm{max}} (M)$ in the limit $M\\to\\infty$,\nstems from the fact that the HMF dynamics keeps the fraction of unlocked neurons constant.\n\n\\vskip 30pt\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.35]{lmnew.eps}\n\\caption{(Color online) The maximum Lyapunov exponent $\\lambda_{\\mathrm{max}}$ \nas a function of the sampling parameter $M$: $\\lambda_{\\mathrm{max}}$ has\nbeen averaged also over ten different realizations of the network (the error bars refer to the maximum deviation from the average). The dashed red line is the power law $M^{-\\gamma}$, with $\\gamma=0.55$. }\n\\label{lyup_max}\n\\end{figure}\n\n\nWhen the distribution $P(\\tilde k)$ is sufficiently broad, the system becomes asynchronous and locked neurons disappear. The global field $Y(t)$ exhibits fluctuations due to finite size effects and in the thermodynamic limit \nit tends to a constant value $Y^*$. From Eq.s (\\ref{vk})--(\\ref{zk}), one obtains that in this regime each neuron\nwith in--degree $\\tilde k$ fires periodically with a period \n $$\n T_{\\tilde k}=\\mathrm{ln}\\Bigg[\\frac{a+g\\tilde k Y^*}{a+g\\tilde k Y^* -1} \\Bigg]~,\n $$\nwhile its phase depends on the initial conditions. 
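The above expression can be checked directly from Eq.~(\\ref{vk}) (a brief derivation of ours, assuming the usual LIF normalization with firing threshold $v_{\\tilde k}=1$ and reset value $v_{\\tilde k}=0$, which is fixed in the model definition and not repeated in this section): replacing $Y(t)$ with the constant $Y^*$, the solution starting from the reset value is
$$
v_{\\tilde k}(t)=\\big(a+g\\tilde k Y^*\\big)\\big(1-e^{-t}\\big)~,
$$
so that the threshold is reached after a time $T_{\\tilde k}$ satisfying $e^{-T_{\\tilde k}}=1-\\big(a+g\\tilde k Y^*\\big)^{-1}$, which gives the period quoted above.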
In this case all the Lyapunov exponents \n$\\lambda_{\\tilde k}$ are negative.\n\n\n\\section{Topology and collective behavior}\n\\label{sec5}\nFor a given in--degree probability distribution\n$P(\\tilde k)$, the fraction of locked neurons (i.e., $f_{\\mathrm{l}}=\\int_{\\tilde k_1 }^{\\tilde k_2} P(\\tilde k)d\\tilde k$) decreases \nby increasing $\\sigma_{\\tilde k}$ \\cite{BCDLV}. In particular, there is a critical value $\\sigma^*$ at which $f_{\\mathrm{l}}$ vanishes.\nThis signals a very interesting dynamical transition between the quasi-synchronous phase ($\\sigma_{\\tilde k} < \\sigma^*$)\nto a multi-periodic phase ($\\sigma_{\\tilde k} > \\sigma^*$), where all neurons are periodic with different periods. Here we focus on the different collective dynamics that may emerge for choices of $P(\\tilde k)$\nother than the Gaussian case, discussed in the previous section.\n \nFirst, we consider a power--law distribution\n\\begin{equation}\nP(\\tilde k)=A\\tilde k^{-\\alpha},\n\\label{eqplaw}\n\\end{equation}\nwhere the constant $A$ is given by the normalization condition $\\int_{\\tilde k_m}^{1}P(\\tilde k)d\\tilde k=1$. \nThe lower bound $\\tilde k_m$ is introduced in order to maintain $A$ finite. \nFor simplicity, we fix the parameter $\\tilde k_m$ and analyze the dynamics by varying $\\alpha$. \nNotice that the standard deviation $\\sigma_{\\tilde k}$ of distribution (\\ref{eqplaw}) decreases for increasing\nvalues of $\\alpha$. The dynamics for relatively high $\\alpha$ is very similar to the quasi--synchronous regime \nobserved for $\\sigma_{\\tilde k} < \\sigma^*$ in the Gaussian case (see Fig. \\ref{rp1}). \nBy decreasing $\\alpha$ one can observe again a transition to the asynchronous phase observed \nfor $\\sigma_{\\tilde k} > \\sigma^*$ in the Gaussian case. Accordingly, also for the power--law distribution (\\ref{eqplaw})\na phase with locked neurons may set in only when there is a sufficiently large group of neurons sharing close values of $\\tilde k$.\nIn fact, the group of locked neurons is concentrated at values of $\\tilde k$ quite close to the lower bound \n $\\tilde k_m$, while in the Gaussian case they concentrate at values smaller than $\\langle \\tilde k\\rangle $.\n \nAnother distribution, generating an interesting dynamical phase, is\n\\begin{equation}\nP(\\tilde k)=B \\mathrm{exp}\\Bigg(-\\frac{(\\tilde k-p_1)^2}{2\\sigma_s^2}\\Bigg) + B\\mathrm{exp}\\Bigg(-\\frac{(\\tilde k-p_2)^2}{2\\sigma_s^2}\\Bigg) ,\n\\label{dgauss}\n\\end{equation}\ni.e. the sum of two Gaussians peaked around different values, $p_1$ and $p_2$, of $\\tilde k$, \nwith the same variance $\\sigma_s^2$. $B$ is the normalization constant such that $\\int_{0}^{1}P(\\tilde k)=1$.\nWe fix $p_1=0.5$ and vary both the variance,\n$\\sigma_s$, and the distance between the peaks, $\\Delta= |p_2-p_1|$.\n \nIf $\\sigma_s$ is very large ($\\sigma \\gtrsim 0.1$), the situation is the same observed for a single Gaussian with large variance, yielding \na multi--periodic asynchronous dynamical phase.\n\nFor intermediate values of $\\sigma_s$ i.e. $0.05\\lesssim\\sigma\n\\lesssim 0.1$, the dynamics of the network can exhibit \na quasi--synchronous phase or a multi--periodic asynchronous phase, depending on the value of $\\Delta$.\nIn fact, one can easily realize that this parameter tunes the standard deviation of the overall distribution: small separations\namount to broad distributions. \n \nFinally, when $\\sigma_s\\lesssim 0.05$, a new dynamical phase appears. 
\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.35]{isikerp.eps}\n\\caption{ The time average of the inter--spike interval \n$\\overline{ ISI_{\\tilde k}}$ vs. $\\tilde k$ for the probability distribution\n$P(\\tilde k)$ defined in Eq.(\\ref{dgauss}), with $\\Delta= |p_2- p_1|= 0.4$, and $\\sigma_{\\mathrm{s}}=0.03$. We have obtained the global field $Y(t)$ by simulating the HMF dynamics with a discretization of $M=300$ classes of neurons. We have then used $Y(t)$ to calculate the $ISI$ of neurons evolving according to Eq. (\\ref{vk}). In the inset we show the raster plot of the dynamics: as\nin Fig.1, neurons are ordered along the vertical axis according to their \nin--degree.\n}\n\\label{rpdue}\n\\end{figure} \nFor small values of $\\Delta$ (e.g. $\\Delta \\approx 0.1$), we observe the usual QSE scenario with one family of locked neurons\n(data not shown). However, when $\\Delta$ is sufficiently large\n(e.g. $\\Delta \\approx 0.4$), each peak of the distribution generates its own group of locked neurons. More precisely, neurons separate into three different sets: \ntwo locked groups, which evolve with different periods, $T_1$ and $T_2$, and the unlocked group. In Fig.\\ref{rpdue} we show the dependence of $\\overline{ ISI_{\\tilde k} }$ on $\\tilde k$ and the raster plot of the dynamics (see the inset)\nfor $\\sigma_s=0.03$. Notice that the plateaus of locked neurons extend over values\nof $\\tilde k$ on the left of $p_1$ and $p_2$. \nIn the inset of Fig. \\ref{fourier} we plot the global activity field $Y(t)$: the peaks signal the quasi-synchronous firing events of the two groups of locked neurons. One can\nalso observe that very long oscillations are present over a time scale much larger than $T_1$ and $T_2$. They are the effect of the {\\sl firing synchrony} of the two locked families. In fact, the two frequencies $\\omega_1=2\\pi\/T_1$ and $\\omega_2=2\\pi\/T_2$ are in general not commensurate, and the resulting global field is a quasi--periodic function. \nThis can be better appreciated by looking at Fig.\\ref{fourier}, where we report the frequency spectrum of the signal $Y(t)$ (red curve). We observe peaks at frequencies $\\omega=n\\omega_1+m\\omega_2$, for integer values of $n$ and $m$. For comparison, we report also the spectrum of a periodic $Y(t)$, generated by the HMF with power law probability distribution \n(\\ref{eqplaw}), with $\\alpha = 4.9$ (black curve): in this case the peaks are located at frequencies that are multiples of the frequency of the locked group of neurons.\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.35]{fourierecampo.eps}\n\\caption{The frequency spectra of the global activity field $Y(t)$ for\ndifferent in--degree probability distributions. 
The \nblack spectrum has been obtained for the HMF dynamics with $M=350$, generated \nby the power law probability distribution $P(\\tilde k)\\sim\\tilde k^{-4.9}$\n(see Eq.(\\ref{eqplaw})),\nwith $\\tilde k_m=0.1$: in this case there is a unique family of locked neurons\ngenerating a periodic global activity field $Y(t)$.\nThe red spectrum has been obtained for a random network of $N = 300$ neurons\ngenerated by the double Gaussian distribution\n(see Eq.(\\ref{dgauss})) described in Fig.s 6 and 7: in this case two families\n of locked neurons are present while, as reported in the inset, $Y(t)$ exhibits a quasi--periodic \nevolution.}\n\\label{fourier}\n\\end{figure} \nOn the basis of this analysis, we can conclude that slow oscillations of the global activity field $Y(t)$ may signal the presence \nof more than one group of topologically homogeneous (i.e. locked) neurons. Moreover, we have also learnt\n that one can generate a large variety of global synaptic activity fields by selecting suitable\nin-degree distributions $P(\\tilde k)$, thus unveiling unexpected perspectives for exploiting a sort of {\\sl topological engineering} of the \nneural signals. For instance, one could investigate which kind of $P(\\tilde k)$ could give rise to an almost resonant dynamics,\nwhere $\\omega_2$ is close to a multiple of $\\omega_1$. \n\n\\section{HMF and the Inverse problem in presence of noise}\n\\label{sec6}\nThe HMF formulation allows one to define and solve the following\nglobal inverse problem: how to recover the \nin--degree distribution $P(\\tilde k)$ from the knowledge of the global synaptic activity field $Y(t)$ \\cite{BCDLV}. \n\nHere we just sketch the basic steps of the procedure.\nGiven $Y(t)$, each class of neurons of in-degree $\\tilde k$ evolves according to the HMF equations:\n\\begin{align}\n\\label{vktil}\n&\\dot {\\mathcal{ V}}_{ \\tilde k}(t)= a -\\mathcal{ V}_{ \\tilde k}(t) + g \\tilde kY(t)\\\\\n\\label{yktil}\n& \\dot {\\mathcal{ Y}}_{ \\tilde k}(t) = -\\frac{\\mathcal{Y}_{\\tilde k}(t)}{\\tau_{\\mathsf{in}}} +u(1-\\mathcal{ Y}_{ \\tilde{k}}(t)-\\mathcal{ Z}_{ \\tilde{k}}(t))\\tilde S_{ \\tilde{k}}(t)\\\\\n\\label{zktil}\n& \\dot {\\mathcal{ Z}}_{ \\tilde k}(t) = \\frac{\\mathcal{ Y}_{ \\tilde k}(t)}{\\tau_{\\mathsf{in}}} - \\frac{\\mathcal{ Z}_{ \\tilde{k}}(t)}{\\tau_{\\mathsf{r}}} \\,\\,\\, .\n\\end{align}\nThe different fonts used here, with respect to \nEq.s (\\ref{vk})--(\\ref{meanfield}), point out that\nin this framework the choice of the initial conditions is arbitrary and the dynamical variables $\\mathcal{V}(t)$, $\\mathcal{Y}(t)$, $\\mathcal{Z}(t)$ \nin general may take different values from those assumed by $v(t)$,\n$y(t)$, $z(t)$, i.e. the variables generating $Y(t)$ in (\\ref{vk})--(\\ref{meanfield}). \nHowever, one can exploit the self consistent relation for the global field $Y(t)$:\n\\begin{equation}\\label{global}\nY(t)=\\int_0^1 P({\\tilde k}) \\mathcal{Y}_{\\tilde k}(t)d{\\tilde k} \\,\\,\\, .\n\\end{equation}\nIf $Y(t)$ and $\\mathcal{Y}_{\\tilde k}(t)$ are known, this is a Fredholm\nequation of the first kind for the unknown $P(\\tilde k)$ \\cite{kress}.\nIf $Y(t)$ is a periodic signal, Eq. 
(\\ref{global}) can be easily solved by a functional Montecarlo minimization\nprocedure, yielding a faithful reconstruction of $P(\\tilde k)$\n\\cite{BCDLV}.\nThis method applies successfully also when $Y(t)$ is a quasi-periodic signal, like the\none generated by in--degree distribution (\\ref{dgauss}).\n\n \nIn this section we want to study the robustness of the HMF equations and \nof the corresponding inverse problem procedure in the presence of noise. \nThis is quite an important test for the reliability of the overall HMF approach. \nIn fact, a real neural structure is always affected by some level of noise, that,\nfor instance, may emerge in the form of fluctuations of ionic or synaptic currents. \nMoreover, it has been observed that noise is crucial for reproducing dynamical phases,\nthat exhibit some peculiar synchronization patterns observed in {\\it in vitro} experiments \\cite{volman,DL}.\n\nFor the sake of simplicity, here we introduce noise by turning the\nexternal current $a$, in Eq. (\\ref{vk}), from a constant to a time and neuron dependent stochastic processes\n$a_{\\tilde k}(t)$. Precisely, the $a_{\\tilde k}(t)$ are assumed to be \ni.i.d. stochastic variables, that evolve in time as a random walk with boundaries, \n$a_{\\mathrm{min}}$ and $a_{\\mathrm{max}}$ (the same rule adopted in \\cite{DL}).\nAccordingly, the average value, $\\bar a$ of $a_{\\tilde k}(t)$ is given by the expression $\\bar\na=(a_{\\mathrm{min}}+a_{\\mathrm{max}})\/2$, while the amplitude of fluctuations is\n$\\delta = a_{\\mathrm{max}}-a_{\\mathrm{min}}$.\nAt each step of the walk, the values of $a_{\\tilde k}(t)$ are independently updated by adding or subtracting, with equal\nprobability, a fixed increment $\\Delta a$. Whenever the value of $a_{\\tilde k}(t)$ crosses one of the boundaries, it is reset to the boundary value.\n\nSince the dynamics has lost its deterministic character, its numerical integration cannot exploit an event driven algorithm, and \none has to integrate Eq.s (\\ref{vk}) --(\\ref{zk}) by a scheme based on\nexplicit time discretization. The results reported hereafter \nrefer to an integration time step $\\Delta t=9\\cdot 10^{-4}$, that guarantees an effective sampling of the dynamics over the whole\nrange of parameter values that we have explored. We have assumed that $\\Delta t$ is also the time step of the stochastic evolution of $a_{\\tilde k}(t)$.\n\nHere we consider the case of uncorrelated noise, that can be obtained by a suitable choice of $\\Delta a$\n\\cite{DL}. In our simulations $\\Delta a = 10^{-2}$, that yields \na value $\\mathcal{O}(10^{-2})$ of the correlation time of the random walk with boundaries. This value, \nmuch smaller than the value $\\mathcal{O}(1)$ typical of the ISI of neurons, makes the stochastic evolution\nof the external currents, $a_{\\tilde k}(t)$, an effectively uncorrelated process with respect to the typical time scales of the neural\ndynamics.\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.35]{campi_rum.eps}\n\\caption{ The global activity field $Y(t)$ of the HMF dynamics, sampled by\n$M=4525$ classes of neurons, for a gaussian\nprobability distribution $P(\\tilde k)$, with $\\langle \\tilde k\\rangle=0.7$ \nand $\\sigma_{\\tilde k}=0.0455$. 
Lines of different colors correspond to\ndifferent values of the noise amplitude, $\\delta$, added to the external \ncurrents $a_{\\tilde k}(t)$: $\\delta = 0$ (black line), $\\delta = 0.1$ \n(red line), $\\delta = 0.15$ (green line), $\\delta = 0.2$ (blue line) and\n$\\delta = 0.3$ (orange line).}\n\\label{camponoise}\n\\end{figure} \nIn Fig. \\ref{camponoise} we show $Y(t)$, produced by the discretized HMF dynamics with \n$M=4525$ and for a Gaussian distribution $P(\\tilde k)$, with $\\langle \\tilde k \\rangle=0.7$ and $\\sigma_{\\tilde k}=0.0455$. \nCurves of different colors correspond to different values of $\\delta$. \n We have found that up to $\\delta\\simeq 0.1$, i.e. also for non negligible noise\namplitudes ($\\bar a =1$), the HMF dynamics is practically unaffected by noise.\nBy further increasing $\\delta$, the amplitude of $Y(t)$ decreases, as a result\nof the desynchronization of the network induced by large amplitude noise.\n\n\nAlso the inversion procedure exhibits the same robustness\nwith respect to noise. As a crucial test, we have solved the inverse problem \nto recover $P(\\tilde k)$ by injecting the noisy signal $Y(t)$ in the noiseless equations \n(\\ref{vktil})--(\\ref{zktil}), where $a = \\bar a$ (see Fig.\\ref{camponoise}). \nThe reconstructed distributions $P(\\tilde k)$, for different $\\delta$, are shown in Fig. \\ref{noise_invert_1}.\nFor relatively small noise amplitudes ($\\delta< 0.1$) the recovered form of $P(\\tilde k)$ is quite\nclose to the original one, as expected because the noisy $Y(t)$ does\nnot differ significantly from the noiseless one.\n On the contrary, for relatively large noise amplitudes\n ($\\delta>0.1$), the recovered distribution\n $P(\\tilde k)$ is broader than the original one and centered around\n a shifted average value $\\langle \\tilde k \\rangle$.\nThe dynamics exhibits much weaker synchrony effects, the same \nindeed one could observe for the noiseless dynamics on the\nlattice built up with this broader $P(\\tilde k)$ given by the inversion method.\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.35]{inv_rum1.eps}\n\\caption{\nSolution of the inverse problem by the HMF equations in the presence\nof noise added to the external currents. We consider the same setup of Fig. 9 \nand we compare, for different\nvalues of the noise amplitude $\\delta$, the \nreconstructed probability distribution $P(\\tilde k)$ (red circles) with the\noriginal gaussian distribution (black line): the upper--left panel \ncorresponds to the\nnoiseless case ($\\delta =0$), while the upper--right, the lower--left and \nand the lower--right correspond to $\\delta = 0.1, 0.2, 0.3$, respectively.\n}\n\\label{noise_invert_1}\n\\end{figure}\n\nAs a matter of fact, the global neural activity fields obtained by experimental\nmeasurements are unavoidably affected by some level of noise. \nAccordingly, it is worth investigating the robustness of the inversion\nmethod also in the case of noise acting directly on $Y(t)$. \nIn order to tackle this problem, we have considered a simple\nnoisy version of the global synaptic activity field, defined as\n$Y_{\\delta} (t) =(1+\\eta(t))Y(t) $, where the random number $\\eta(t)$ is\nuniformly extracted, at each integration time step, in the interval $[-\\frac{\\delta}{2},\\frac{\\delta}{2}]$.\nIn Fig. \\ref{noise_invert_2} we show \nthe distributions $P(\\tilde k)$ obtained for different values of $\\delta$. \nWe can conclude that the inversion method is quite stable with respect to this \nadditive noise. 
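In practice, producing such a noisy field from a sampled $Y(t)$ is a one-line operation; a possible sketch (ours, purely illustrative and not part of the original implementation) is the following.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def noisy_field(Y, delta):
    # Y: 1-d array with the sampled global field Y(t_i).
    # Returns Y_delta(t_i) = (1 + eta_i) * Y(t_i),
    # with eta_i uniform in [-delta/2, delta/2].
    eta = rng.uniform(-delta / 2.0, delta / 2.0, size=Y.shape)
    return (1.0 + eta) * Y
\\end{verbatim}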
In fact, even for large noise amplitudes (e.g. $\\delta = 0.8$ in Fig. \\ref{noise_invert_2})\nthe main features of the original distribution are still recovered,\nwithin a reasonable approximation.\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.35]{rumsuy.eps}\n\\caption{\nSolution of the inverse problem by the HMF equations in the presence\nof noise added to the activity field. We consider the same setup as in \nFig. 9, where now $a = 1$ and $Y_{\\delta}(t) = (1 + \\eta(t))Y(t)$ (the random\nvariable $\\eta(t)$ is extracted from a uniform probability distribution \nin the interval $[-\\delta\/2, \\delta\/2]$).\nWe compare, for different\nvalues of the noise amplitude $\\delta$, the \nreconstructed probability distribution $P(\\tilde k)$ (red circles) with the\noriginal Gaussian distribution (black line): the upper--left, \nthe upper--right, the lower--left and \nthe lower--right panels correspond to $\\delta = 0.1, 0.4, 0.8, 1.2$, \nrespectively.\n}\n\\label{noise_invert_2}\n\\end{figure} \n\n\n\n\\section{HMF in sparse networks}\n\\label{sec7}\n\nIn this section we analyze the effectiveness of the HMF\napproach for sparse networks, i.e. networks where the neurons'\ndegrees do not scale linearly with $N$ and, in particular, the average degree\n$\\langle k \\rangle$ is independent of the system size. \nIn this context, the membrane potential of a generic neuron $j$, in a network of $N$ neurons, \nevolves according to the following equation:\n\\begin{equation}\n\\label{vsparsa}\n\\dot v_j= a -v_j + \\frac{g}{\\langle k\\rangle} \\sum_{i \\ne j} \\epsilon_{ij} y_i, \\, \n\\end{equation} \nwhile the dynamics of $y_i$ is the same as in Eq.s\n(\\ref{dynsyn})--(\\ref{contz}). The coupling term is now independent of\n$N$, and the normalization factor, \n$\\langle k\\rangle$, has been introduced in order to compare models with different average connectivity. \nThe structure of the adjacency matrix $\\epsilon_{ij}$ is determined \nby choosing for each neuron $i$ its in-degree $k_i$ from a probability distribution $P(k_i)$\n(with support over positive integers) independent of the system size.\n\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.35]{sparsa_mf.eps}\n\\vskip 30pt\n\\includegraphics[scale=0.35]{scale_confr.eps}\n\\caption{\nComparison of the global synaptic activity field $Y(t)$ from \nsparse random networks with the same quantity generated by the corresponding\nHMF dynamics. We have considered sparse random networks with \n$N= 10^4$ neurons. In the upper panel we consider\nGaussian probability distributions $P(k)$ with different averages\n$\\langle k \\rangle$ and \nvariances $\\sigma_k$, such that $ \\sigma_k\/\\langle k \\rangle = 0.06$:\n$\\langle k \\rangle = 10, 20, 60, 100$ correspond to the violet, orange, red and blue\nlines, respectively. The black line represents $Y(t)$ from the HMF dynamics\n($M=10^3$), where $\\hat P(\\hat k)$ is a Gaussian probability distribution\nwith $\\langle \\hat k \\rangle = 1$ and $\\sigma_{\\hat k} = \\sigma_k\/\\langle k \n\\rangle = 0.06$. In the lower panel we consider \nthe scale free case with fixed power exponent $\\alpha$ and different $k_m$:\n$k_m = 10, 30, 70$ correspond to the orange, red and blue\nlines, respectively. 
The black line represents $Y(t)$ from the HMF dynamics\n($M=10^3$), where $\\hat P(\\hat k)=(\\alpha-1)\\hat{k}^{-\\alpha} $\nwith cutoff $\\hat k_m = 1$.}\n\\label{campi_sparsa}\n\\end{figure}\n\n\nOn sparse networks the HMF model is not recovered in the thermodynamic limit, \nas the fluctuations of the field received by each neuron of in--degree $k_i$ \ndo not vanish for $N\\to \\infty$.\nNevertheless, for large enough values of $k_i$, one can expect that the\nfluctuations become negligible in such a limit,\ni.e. the synaptic activity field received by different neurons with the same \nin-degree is approximately the same. \nEq. (\\ref{vsparsa}) can be turned into a mean--field-like\nform as follows:\n\\begin{equation}\n\\label{vsparsa_mean}\n\\dot v_j= a -v_j + \\frac{g}{\\langle k\\rangle} k_jY ~, \n\\end{equation} \nwhere $Y(t)$ represents the global field, averaged over all neurons in the\nnetwork. This implies that the equation is the same for all neurons with \nin--degree $k_j$, depending only on the ratio $\\hat{k}_j=k_j\/\\langle\nk\\rangle$. \nConsequently, also in this case one can read Eq. (\\ref{vsparsa_mean}) as an HMF formulation of Eq. (\\ref{vsparsa}), \nwhere each class of neurons $\\hat{k}$ evolves according \nto Eq.s (\\ref{vk})--(\\ref{zk}), with $\\hat{k}$ replacing $\\tilde k$, while the global activity field is given by\nthe relation \n$Y(t)=\\int_0^{\\infty}\\hat{P}(\\hat{k})y_{\\hat{k}}(t)d\\hat{k}$. \n\nIn order to analyze the validity of the HMF as an approximation of models defined \non sparse networks, we consider two main cases: ({\\sl i}) $\\hat{P}(\\hat{k})$ is a truncated Gaussian\nwith average $ \\langle \\hat{k}\\rangle=1$ \nand standard deviation $\\sigma_{\\hat{k}}$; ({\\sl ii}) $\\hat{P}(\\hat{k})=(\\alpha-1)\\hat{k}^{-\\alpha}$ is a \npower--law (i.e., scale free) distribution with a lower cutoff $\\hat{k}_m=1$.\nThe Gaussian case ({\\sl i}) is an approximation of\nany sparse model, where $P(k_j)$ is a discretized Gaussian distribution \nwith parameters $\\langle k\\rangle$ and $\\sigma_k$, chosen in such a way that\n$\\sigma_{\\hat{k}} = \\sigma_k\/\\langle k\\rangle$. The scale free case ({\\sl ii}) approximates\nany sparse model, where $P(k_j)$ is a power law with exponent $\\alpha$ and a generic cutoff. Such an\napproximation is expected to provide better results for larger $\\langle k \\rangle$, i.e. \nfor a larger cutoff $k_m$ of the scale free distribution.\nIn Fig. \\ref{campi_sparsa} we plot the global field emerging from \nthe HMF model, superposed on those coming from a large finite size realization \nof the sparse network, with different values of $\\langle k\\rangle$ for the Gaussian case (upper panel) and of $k_m$ for the scale free case (lower panel).\nThe HMF equations exhibit a remarkable agreement with models on sparse \nnetworks, even for relatively small values of $ \\langle k\\rangle$ and $k_m$. \nThis analysis indicates that the HMF approach works also for\nnon--massive topologies, provided the typical connectivities\nin the network are large enough, e.g. $\\langle k\\rangle \\sim {\\mathcal O} (10^2)$\nin a Gaussian random network with $N=10^4$ neurons (see Fig. \\ref{campi_sparsa}).\n \n \n \n\\section{Conclusions}\n\\label{sec8}\n\n\nFor systems with a very large number of components, the effectiveness\nof a statistical approach, at the price of some necessary approximation, has been\nextensively proven, and mean--field methods are typical in this sense. 
\nIn this paper we discuss how such a method, in the specific form of\nHeterogeneous Mean--Field, can be defined in order to fit an \neffective description of neural dynamics on random networks. \n\nThe relative simplicity of the model studied here, \nexcitatory leaky--integrate--and fire neurons with short term\nsynaptic plasticity, is also a way of providing a pedagogical\ndescription of the HMF and of its potential interest in similar contexts \\cite{BCDLV}. \n\nWe have reported a detailed study of the HMF approach including\ninvestigations on \n{\\sl (i)} its stability properties, \n{\\sl (ii)} its effectiveness in describing the dynamics and in solving\nthe associated inverse problem for different network topologies,\n {\\sl (iii)} its robustness with respect to noise, \nand {\\sl (iv)} its adaptability to different formulations of the model\nat hand. In the light of {\\sl (ii)} and {\\sl (iii)}, the HMF approach\nappears quite a promising tool to match\nexperimental situations, such as the identification of topological features\nof real neural structures, through the inverse analysis of signals\nextracted as time series from small, but not microscopic, domains.\nOn a mathematical ground, the HMF approach is a simple and effective\nmean--field formulation, that can be extended to other neural network models\nand also to a wider class of dynamical models on random graphs.\nThe first step in this direction could be the extension of the HMF method\nto the more interesting case, where the random network contains \nexcitatory and inhibitory neurons, according to distributions of interest\nfor neurophysiology \\cite{abeles, bonif} . This will be the subject of our future work.\n\n\n\n\\begin{acknowledgments}\nR.L. acknowledges useful discussions\nwith A. Pikovsky and L. Bunimovich.\n\n\\end{acknowledgments}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAn $(n,k,\\lambda)-$\\textit{design} (or $(n,k,\\lambda)$-BIBD) is a pair $(P, \\mathcal{B})$ where $P$ is a finite set of $n$ \\textit{points} and $\\mathcal{B}$ is a collection of $k-$subsets of $P$, called \\textit {blocks}, such that every two distinct points in $P$ is contained in exactly $\\lambda$ blocks. \nIn case $|P|={|\\cal B|}$, it is called a \\textit{symmetric design}. For positive integer $ q $, \na $(q^2+q+1, q+1, 1)$-BIBD and a $(q^2,q,1)$-BIBD are called a \\textit{projective plane} and an \\textit{affine plane} of order $q$, respectively.\nA design is called \\textit{resolvable}, if there exists a partition of the set of blocks\n$\\mathcal{B}$ into \\textit{parallel classes}, each of which is a partition of $P$.\n\nA \\textit{pairwise balanced design} (PBD) is a pair $(P,\\mathcal{B})$, where $P$ is a finite set of $n$ points and $\\mathcal{B}$ is a family of subsets of $P$, called \\textit {blocks}, such that every two distinct points in $P$, appear in exactly one block. A \\textit{nontrivial} PBD is a PBD where $P\\not \\in \\mathcal{B}$. 
A PBD $(P,\\mathcal{B})$ on $n$ points with one block of size $n-1$ and the others of size two is called \\textit{near-pencil}.\n\nThe problem of determining the minimum number of blocks in a pairwise balanced design when the size of its largest block is specified or the size of a particular block is specified, has been the subject of many researches in recent decades.\nThe most important and well-known result about this problem is due to de Bruijn and Erd\\H{o}s~\\cite{deBruijn48} which states that every nontrivial PBD on $n$ points has at least $n$ blocks and the only nontrivial PBDs on $n$ points with exactly $n$ blocks are near-pencil and projective plane.\nFor every positive integers $n,m$, where $m\\leq n$, let $\\mathscr{G}(n,m)$ be the minimum number of blocks in a PBD on $n$ points whose largest block has size $m$. Also, let $\\mathscr{G}'(n,m)$ be the minimum number of blocks in a PBD on $n$ points which has a block of size $m$.\nA classical result known as Stanton-Kalbfleisch Bound \\cite{stanton70} states that $\\mathscr{G}'(n,m)\\geq 1+(m^2(n-m))\/(n-1)$ and equality holds if and only if there exists a resolvable $(n-m,(n-1)\/m,1)$- BIBD. Also, a corollary of Stanton-Kalbfleisch is that $\\mathscr{G}(n,m)\\geq \\max\\{n(n-1)\/m(m-1), 1+(m^2(n-m))\/(n-1)\\}$. For a survey on these and more bounds, see \\cite{Rees90,stanton97}.\n\n\nIn this paper, we are interested in minimizing the sum of block sizes in a PBD, where there are some constraints on the size of one block or the size of the largest block. For every positive integers $n,m$, where $m\\leq n$, let $\\mathscr{S}(n,m)$ be the smallest integer $s$ for which there exists a PBD on $n$ points whose largest block has size $m$ and the sum of its block sizes is equal to $s$. Also, let $\\mathscr{S}'(n,m)$ be the smallest integer $s$ for which there exists a PBD on $n$ points which has a block of size $m$ and the sum of it block sizes is equal to $s$. \n In Section~\\ref{sec:pbd}, we prove some lower bounds for $\\mathscr{S}(n,m)$ and $\\mathscr{S}'(n,m)$. In particular, we show that $\\mathscr{S}(n,m)\\geq 3n-3$, for every $m$, $2\\leq m\\leq n-1$. Also, we prove that, for every $2\\leq m\\leq n $,\n \\[\\mathscr{S}'(n,m)\\geq \\max\\left\\{(n+1)m-\\frac{m^2(m-1)}{n-1}, m+\\frac{(n-m)(n-5m-1)}{2}\\right\\},\\]\n where equality holds for $ m\\geq n\/2 $. Furthermore, we prove that if $ n\\geq 10 $ and $2\\leq m\\leq n-\\frac{1}{2} ( \\sqrt{n} +1)$, then\n $\\mathscr{S}(n,m)\\geq n(\\lfloor\\sqrt{n}\\rfloor+1)-1$.\n\nThe connection of pairwise balanced designs and clique partition of graphs is already known in the literature. Given a simple graph $ G $, by a \\textit{clique} in $G$ we mean a subset of mutually adjacent vertices. A \\textit{clique partition } $\\mathcal{C}$ of $G$ is a family of cliques in $G$ such that the endpoints of every edge of $G$ lie in exactly one member of $\\mathcal{C}$. The minimum size of a clique partition of $G$ is called the \\textit{clique partition number} of $G$ and is denoted by $\\cp(G)$. \n\nFor every graph $G$ with $ n $ vertices, the union of a clique partition of $G$ and a clique partition of its complement, $\\overline{G}$, form a PBD on $n$ points. This connection has been deployed to estimate $\\cp(G)$, when $G$ is some special graph such as $K_n-K_m$~\\cite{WallisAsymp,erdos88,pullman82,Rees}, Cocktail party graphs and complement of paths and cycles \\cite{Wallis82,Wallis84,Wallis87}. 
\n\nOur motivation for study of the above mentioned problem is a weighted version of clique partition number. The \\textit{sigma clique partition number} of a graph $G$, denoted by $ \\scp(G) $, is defined as the smallest integer $s$ for which there exists a clique partition of $G$ where the sum of the sizes of its cliques is equal to $s$. It is shown that for every graph $G$ on $n$ vertices, $\\scp(G)\\leq \\lfloor n^2\/2\\rfloor$, in which equality holds if and only if $ G $ is the complete bipartite graph $K_{\\lfloor n\/2\\rfloor, \\lceil n\/2\\rceil } $ \\cite{chung81,kahn81,gyori80}. \n\n\nGiven a clique partition $\\mathcal{C}$ of a graph $G$, for every vertex $x\\in V(G)$, the \\textit{valency} of $x$ (with respect to $\\mathcal{C}$), denoted by $v_\\mathcal{C}(x)$, is defined to be the number of cliques in $\\mathcal{C}$ containing $x$. In fact,\n\\[\\scp(G)=\\min_{\\mathcal{C}} \\sum_{C\\in \\mathcal{C}}|C|= \\min_{\\mathcal{C}} \\sum_{x\\in V(G)} v_{\\mathcal{C}}(x),\\]\nwhere the minimum is taken over all possible clique partitions of $G$.\n\n\n In Section~\\ref{sec:Kn-Km}, we apply the results of Section~\\ref{sec:pbd} to determine the asymptotic behaviour of the sigma clique partition number of the graph $K_n-K_m$. In fact, we prove that if $m\\leq {\\sqrt{n}}\/{2}$, then $\\scp(K_n-K_m)\\sim (2m-1)n$, if ${\\sqrt{n}}\/{2}\\leq m\\leq \\sqrt{n}$, then $\\scp(K_n-K_m)\\sim n\\sqrt{n}$ and if $m\\geq \\sqrt{n}$ and $m=o(n)$, then $\\scp(K_n-K_m)\\sim mn$. Also, if $G$ is Cocktail party graph, complement of path or cycle on $n$ vertices, then we prove that $\\scp(G)\\sim n\\sqrt{n}$.\n\n\n\n\\section{Pairwise balanced designs}\\label{sec:pbd}\n\nA celebrated result of de Bruijn and Erd\\H{o}s states that for every nontrivial PBD $(P,\\mathcal{B})$, we have $|\\mathcal{B}|\\geq |P|$ and equality holds if and only if $(P,\\mathcal{B})$ is near-pencil or projective plane~\\cite{deBruijn48}. In this section, we are going to answer the question that what is the minimum sum of block sizes in a PBD.\n\n The following theorem can be viewed as a de Bruijn-Erd\\H{o}s-type bound, which shows that $\\mathscr{S}(n,m)\\geq 3n-3$, for every $m$, $2\\leq m\\leq n-1$.\n\n\n\\begin{theorem}\\label{thm:PBDsigma}\nLet $(P,\\mathcal{B})$ be a nontrivial PBD with $n$ points, then we have\n\\begin{equation}\\label{eq:pbd}\n\\sum_{B\\in\\mathcal{B}} |B| \\geq 3n-3,\n\\end{equation}\nand equality holds if and only if $(P,\\mathcal{B})$ is near-pencil.\n\\end{theorem}\n\\begin{proof}\nWe use induction on the number of points. Let $(P,\\mathcal{B})$ be a nontrivial PBD with $n$ points. Inequality~\\eqref{eq:pbd} clearly holds when $n=3$. So assume that $n\\geq 4$ and for every $x\\in P$, let $r_x$ be the number of blocks containing $x$. First note that for every block $B\\in\\mathcal{B}$ and every $x\\in P\\setminus B$, we have $r_x\\geq |B|$. \n\nIf there is a block $B_0\\in \\mathcal{B}$ of size $n-1$ and $x_0$ is the unique point in \n$P\\setminus B_0$, then for every $x\\in B_0$, $x$ and $x_0$ appear within a block of size two. Therefore, $(P,\\mathcal{B})$ is near-pencil and $\\sum_{B\\in\\mathcal{B}}|B|=(n-1)+2(n-1)=3n-3$. \n\nOtherwise, all blocks are of size at most $n-2$. First we prove that there exists some point $x\\in P$ with $r_x\\geq 3$. Since there is no block of size $n$, $r_x\\geq 2$ for all $x\\in P$. Now for some $y\\in P$, assume that $B_1, B_2$ are the only two blocks containing $y$. Since $n\\geq 4$, the size of at least one of these blocks, say $B_1$, is greater than two. 
Let $x\\neq y$ be an element of $B_2$. Then, $r_x\\geq |B_1|\\geq 3$. \nHence, there exists some point $x\\in P$ which appears in at least three blocks.\n\nNow, remove $x$ from all blocks to obtain the nontrivial PBD $(P',\\mathcal{B'})$, where $P'=P\\setminus \\{x\\}$ and $\\mathcal{B'}=\\{B\\setminus\\{x\\}\\ :\\ B\\in\\mathcal{B}\\}$. Therefore, \n\\begin{equation}\\label{eq:case2}\n\\sum_{B\\in \\mathcal{B}}|B| = r_x+\\sum_{B'\\in\\mathcal{B'}}|B'|\\geq 3+3(n-2),\n\\end{equation}\nwhere the last inequality follows from the induction hypothesis.\n\nNow, assume that for a PBD $(P,\\mathcal{B})$ equality holds in \\eqref{eq:pbd}. If $(P,\\mathcal{B})$ is not a near-pencil, then equality holds in \\eqref{eq:case2} as well and thus we have $2\\leq r_x\\leq 3$, for every $x\\in P$. On the other hand, $\\sum_{B\\in\\mathcal{B}}|B|=\\sum_{x\\in P} r_x=3n-3$. Therefore, there are exactly $3$ points, say $x,y,z$, each of which appears in exactly two blocks and each of the other points appears in exactly three blocks. Also, let $B_1, B_2$ be the only two blocks containing $y$ and assume that $x\\in B_1$. Therefore, $2=r_x\\geq |B_2|$ and then $|B_1|=n-1$, which is a contradiction.\n\\end{proof}\nSince the union of every clique partition of $G$ and $\\overline{G}$ forms a clique partition for $K_n$ which is equivalent to a PBD on $n$ points, the following corollaries are straightforward.\n\\begin{cor}\nLet $\\mathcal{C}$ be a clique partition of $K_n$ whose cliques are of size at most $n-1$. Then, $\\sum_{C\\in \\mathcal{C}} |C|\\geq 3n-3$.\n\\end{cor}\n\n\\begin{cor}\nFor every graph $G$ on $n$ vertices except the empty and complete graph, we have \n\\[\\scp(G)+\\scp(\\overline{G})\\geq 3n-3,\\]\nand equality holds if and only if $G$ or $\\overline{G}$ contains a clique of size $n-1$.\n\\end{cor}\n\nIn the same vein, one can prove the following theorem which states a lower bound on the maximum number of appearance of the points in a PBD.\n\n\\begin{theorem}\nLet $(P,\\mathcal{B})$ be a nontrivial PBD with $n$ points, and for every $x\\in P$, let $r_x$ be the number of blocks containing $x$. Then, we have\n\\begin{equation}\\label{eq:pbd2}\n\\max_{x\\in P}{r_x}\\geq \\frac{1+\\sqrt{4n-3}}{2},\n\\end{equation}\nand equality holds if and only if $(P,\\mathcal{B})$ is a projective plane or near-pencil.\n\\end{theorem}\n\\begin{proof}\nLet $(P,\\mathcal{B})$ be a nontrivial PBD with $n$ points and define $r=\\max_{x\\in P} r_x$. Fix a point $x\\in P$ and let $\\mathcal{B}_x \\subset \\mathcal{B}$ be the set of blocks containing $x$. The family of sets $\\{B\\setminus \\{x\\} \\ : \\ B\\in \\mathcal{B}_x\\}$ is a partition of the set $P\\setminus \\{x\\}$. Thus, \n\\begin{equation}\\label{eq:PBDproof1}\nn-1= \\sum_{B\\in\\mathcal{B}_x}(|B|-1)\\leq r_x(\\max_{B\\in\\mathcal{B}_x} |B|-1).\n\\end{equation}\nTherefore, there exists some block $B_0$ containing $x$, where $r_x (|B_0|-1)\\geq n-1$. Now, let $y$ be a point not in $B_0$. By a note within the proof of Theorem~\\ref{thm:PBDsigma}, we have $r_y\\geq |B_0|$ and then\n\\begin{equation}\\label{eq:PBDproof2}\nr(r-1)\\geq r_x(r_y-1)\\geq r_x(|B_0|-1)\\geq n-1.\n\\end{equation}\nThis yields the assertion.\n\n Now, assume that equality holds in $\\eqref{eq:pbd2}$. Then, we have equalities in \\eqref{eq:PBDproof1} and \\eqref{eq:PBDproof2}. Thus, all valencies $r_x$ are equal and all blocks have the same size, say $k$, which shows that $(P,\\mathcal{B})$ is an $(n,k,1)-$design. Also by \\eqref{eq:PBDproof2}, we have $r=k$, i.e. 
$(P,\\mathcal{B})$ is a symmetric design.\n\\end{proof}\n\nAlthough the given bound in \\eqref{eq:pbd} is sharp, it can be improved if the PBD avoids blocks of large sizes. The following theorem, as an improvement of Theorem~\\ref{thm:PBDsigma}, provides some lower bounds on the sum of block sizes, when there are some constraints on the size of a block.\n\n\\begin{theorem} \\label{thm:PBD}\nIf $(P,\\mathcal{B})$ is a PBD with $n$ points where $\\tau$ is the maximum size of blocks in $\\mathcal{B}$, then\n\\begin{equation}\\label{eq:pbdA}\n\\sum_{B\\in \\mathcal{B}} |B| \\geq \\frac{n(n-1)}{\\tau-1}.\n\\end{equation}\nAlso if there is a block of size $\\kk$, then \n\\begin{equation}\\label{eq:pbdB}\n\\sum_{B\\in \\mathcal{B}} |B| \\geq (n+1) \\kk - \\frac{\\kk^2(\\kk-1)}{n-1},\n\\end{equation}\nand\n\\begin{equation}\\label{eq:pbdC}\n\\sum_{B\\in \\mathcal{B}} |B| \\geq \\kk-\\frac{(n-\\kk)(n-5\\kk-1)}{2}.\n\\end{equation}\nMoreover, if $\\kk\\geq n\/2$, then there exists a PBD on $n$ points with a block of size $\\kk$, for which equality holds in {\\em (\\ref{eq:pbdC})}.\n\\end{theorem}\n\n\\begin{proof}\nFor every $x\\in P$, let $r_x$ be the number of blocks containing $x$. By Inequality~(\\ref{eq:PBDproof1}), we have\n\\[\\sum_{B\\in \\mathcal{B}} |B| =\\sum_{x\\in P} r_x\\geq \\sum_{x\\in P} \\frac{n-1}{\\tau-1}= \\frac{n(n-1)}{\\tau-1}.\\]\nIn order to prove \\eqref{eq:pbdB}, let $B_0\\in\\mathcal{B}$ and $|B_0|=\\kk$. Define,\n\\[\\tilde{\\mathcal{B}}= \\{B\\setminus B_0 \\ : \\ B\\in\\mathcal{B}, B\\cap B_0\\neq \\emptyset\\}.\\]\nWe have\n\\[ \\sum_{B\\in \\tilde{\\mathcal{B}} } |B|=\\kk (n-\\kk).\\]\nNow, consider the following set\n\\[S=\\{(x,y) \\ : \\ x\\neq y, x,y \\in B, B\\in \\tilde{\\mathcal{B}}\\}.\\]\nWe have\n\\begin{equation}\\label{eq:S1}\n|S|=\\sum_{B\\in \\tilde{\\mathcal{B}} } |B| (|B|-1)\\geq \\frac{1}{|\\tilde{\\mathcal{B}}|} \\left(\\sum_{B\\in \\tilde{\\mathcal{B}} } |B|\\right)^2 - \\sum_{B\\in \\tilde{\\mathcal{B}} } |B|=\\frac{1}{|\\tilde{\\mathcal{B}}|} \\kk^2(n-\\kk)^2 - \\kk(n-\\kk).\n\\end{equation}\n\nOn the other hand, $S\\subseteq \\{(x,y) \\ : \\ x,y\\in P\\setminus B_0\\}$. Thus, \n\\begin{equation}\\label{eq:S2}\n|S|\\leq (n-\\kk)(n-\\kk-1).\n\\end{equation}\nInequalities (\\ref{eq:S1}) and (\\ref{eq:S2}) yield\n\\[|\\tilde{\\mathcal{B}}|\\geq \\frac{\\kk^2(n-\\kk)}{n-1}. \\]\nFinally, \n\\[\\sum_{B\\in\\mathcal{B}} |B|\\geq |B_0|+\\sum_{B\\in \\tilde{\\mathcal{B}}} (|B|+1)\\geq \\kk+\\kk(n-\\kk)+\\frac{\\kk^2(n-\\kk)}{n-1}.\\]\nThus, we conclude \n\\[\\sum_{B\\in \\mathcal{B}} |B| \\geq (n+1) \\kk - \\frac{\\kk^2(\\kk-1)}{n-1}\\]\n\nTo prove Inequality \\eqref{eq:pbdC}, let $B_0\\in\\mathcal{B}$ and $|B_0|=\\kk$ and assume that $\\mathcal{B}$ has $u$ blocks of size 2 intersecting $B_0$. \nDefine, \n\\[\\hat{\\mathcal{B}}= \\{B\\setminus B_0 \\ :\\ B\\in\\mathcal{B}, \\ B\\cap B_0\\neq \\emptyset, \\ |B|\\geq 3\\}.\\]\nThus,\n\\begin{equation*}\n\\binom{n-\\kk}{2}\\geq \\sum_{B\\in\\hat{\\mathcal{B}}} {\\binom{|B|}{2}}\\geq\\sum_{B\\in\\hat{\\mathcal{B}}}(|B|-1).\n\\end{equation*}\nAlso,\n\\begin{equation*}\n\\kk(n-\\kk)= u+\\sum_{B\\in \\hat{\\mathcal{B}}} |B|.\n\\end{equation*}\nHence, \n\\begin{align*}\n\\sum_{B\\in \\mathcal{B}}|B| &\\geq |B_0|+ 2u+ \\sum_{B\\in \\hat{\\mathcal{B}}} (|B|+1) = \\kk+ 2\\kk(n-\\kk)- \\sum_{B\\in \\hat{\\mathcal{B}}} (|B|-1) \\\\\n& \\geq \\kk+ 2\\kk(n-\\kk)- \\binom{n-\\kk}{2}.\n\\end{align*}\n\nNow, assume that $\\kk\\geq n\/2$ and $B_0=\\{x_1,\\ldots, x_k\\}$. 
We provide a PBD with a block $B_0$ for which\nequality holds in (\\ref{eq:pbdC}). Consider a proper edge coloring of $K_{n-\\kk}$ by $n-\\kk$ colors and let $C_1,\\ldots, C_{n-\\kk}$ be color classes. Each $C_i$ is a collection of subsets of size $2$. For every $i$, $1\\leq i\\leq n-\\kk$, add $x_i$ to each member of $C_i$. Now, we have exactly $(n-\\kk)(n-\\kk-1)\/2$ blocks of size $3$. By adding missing pairs as blocks of size $2$, we get a PBD $(P,\\mathcal{B})$ on $n$ points, with blocks of size $2$ and $3$ and a block of size $\\kk$. In fact, each block of size $3$ contains two pairs from the set $\\{(x,y) \\ :\\ x\\in B_0, y\\not\\in B_0\\}$. Hence,\n\n\\begin{align*}\n\\sum_{B\\in\\mathcal{B}}|B| &= \\kk+\\frac{3(n-\\kk)(n-\\kk-1)}{2}+ 2 (\\kk(n-\\kk)- (n-\\kk)(n-\\kk-1))\\\\\n& =\\kk-\\frac{(n-\\kk)(n-5\\kk-1)}{2}.\n\\end{align*}\n\\end{proof}\n\\begin{remark}\nLet $(P,\\mathcal{B})$ be a PBD with $n$ points where $\\tau$ is the maximum size of blocks in $\\mathcal{B}$. It is easy to check that among the lower bounds (\\ref{eq:pbdA}), (\\ref{eq:pbdB}) and (\\ref{eq:pbdC}), if $1\\leq \\tau\\leq (\\sqrt{4n-3}+1)\/2$, then (\\ref{eq:pbdA}) is the best one, if $ (\\sqrt{4n-3}+1)\/2\\leq \\tau\\leq (n-1)\/2$, then (\\ref{eq:pbdB}) is the best one and if $(n-1)\/2\\leq \\tau \\leq n-1$, then (\\ref{eq:pbdC}) is the best one. The diagram of the lower bounds in terms of $\\tau$ are depicted in Figure~\\ref{fig:PBD} for $n=21$. \n\n\\begin{center}\n\\begin{tikzpicture}\n\\pgfdeclareimage[width=12cm]{main}{PBD.pdf}\n\\pgftext{\\pgfuseimage{main}}\n\\node at (-6.1cm,.2cm)[rotate=90]{\\small ${\\mathscr{S}}(21,k)$ };\n\\node at (-.2cm,-3.7cm){\\small $k=\\tau$ };\n\\end{tikzpicture}\n\\captionof{figure}{Diagram of the lower bounds in (\\ref{eq:pbdA}), (\\ref{eq:pbdB}) and (\\ref{eq:pbdC}) for $n=21$.}\\label{fig:PBD}\n\\end{center}\n\n\\end{remark}\n\nNow, we apply Theorem \\ref{thm:PBD} to improve the bound in (\\ref{eq:pbd}), whenever the PBD does not contain large blocks.\n\\begin{theorem}\\label{thm:LargeBlock}\nLet $n\\geq 10$ and $(P,\\mathcal{B})$ be a PBD on $n$ points and assume that $\\mathcal{B}$ contains no block of size larger than $n-\\frac{1}{2} ( \\sqrt{n} +1)$. Then, we have\n\\[\\sum_{B\\in \\mathcal{B}} |B|\\geq n(\\lfloor\\sqrt{n}\\rfloor+1)-1.\\]\nAlso, the bound is tight in the sense that equality occurs for infinitely many $n$.\n\\end{theorem}\n\\begin{proof}\nLet $\\tau$ be the maximum size of the blocks in $\\mathcal{B}$. If $\\tau\\leq \\sqrt{n}$, then by (\\ref{eq:pbdA}),\n\\[\\sum_{B\\in\\mathcal{B}}|B| \\geq \\frac{n(n-1)}{\\tau-1}\\geq \\frac{n(n-1)}{\\sqrt{n}-1}\\geq n(\\sqrt{n}+1).\\]\nNow, suppose that $\\tau\\geq \\lfloor\\sqrt{n}\\rfloor+1$. Then, $\\mathcal{B}$ contains a block of size larger than or equal $\\lfloor\\sqrt{n}\\rfloor+1$. First assume that $\\mathcal{B}$ contains a block of size $\\kk$, where $\\lfloor\\sqrt{n}\\rfloor+1\\leq \\kk\\leq \\frac{n}{2}$. Then, by (\\ref{eq:pbdB}),\n\\[\\sum_{B\\in\\mathcal{B}}|B|\\geq (n+1) \\kk - \\frac{\\kk^2(\\kk-1)}{n-1}.\\]\nThe right hand side of the above inequality as a function of $\\kk$ takes its minimum on the interval $[\\lfloor\\sqrt{n}\\rfloor+1, \\frac{n}{2}]$ at $\\lfloor\\sqrt{n}\\rfloor+1$. 
Thus, \n\\begin{align*}\n\\sum_{B\\in\\mathcal{B}}|B|&\\geq (n+1) (\\lfloor\\sqrt{n}\\rfloor+1) - \\frac{(\\lfloor\\sqrt{n}\\rfloor+1)^2\\lfloor\\sqrt{n}\\rfloor}{n-1}\\\\\n&\\geq n (\\lfloor\\sqrt{n}\\rfloor+1) + (\\lfloor\\sqrt{n}\\rfloor+1)(1- \\frac{(\\sqrt{n}+1)\\sqrt{n}}{n-1}) \\\\\n& = n (\\lfloor\\sqrt{n}\\rfloor+1) - \\frac{\\lfloor\\sqrt{n}\\rfloor+1}{\\sqrt{n}-1} \\\\\n& > n (\\lfloor\\sqrt{n}\\rfloor+1) - 2.\n\\end{align*}\nThe last inequality is due to the fact that $n\\geq 10$.\nFinally, assume that $\\mathcal{B}$ contains a block of size $\\kk$, where $\\frac{n}{2}< \\kk\\leq n-\\frac{1}{2} ( \\sqrt{n} +1)$. Then, by (\\ref{eq:pbdC})\n\\[\\sum_{B\\in \\mathcal{B}} |B| \\geq \\kk-\\frac{(n-\\kk)(n-5\\kk-1)}{2}.\\]\n Again, the right hand side of the above inequality as a function of $\\kk$ takes its minimum on the interval $[\\frac{n}{2},n-\\frac{1}{2} ( \\sqrt{n} +1)]$ at $n-\\frac{1}{2} (\\sqrt{n}+1)$. Hence,\n \\begin{align*}\n\\sum_{B\\in \\mathcal{B}} |B| & \\geq n-\\frac{1}{2}(\\sqrt{n} +1) -\\frac{(\\sqrt{n}+1)(-4n+\\frac{5}{2} (\\sqrt{n}+1)-1)}{4} \\\\\n&= n(\\sqrt{n}+1)+\\frac{3n-7}{8}-\\frac{3}{2}\\sqrt{n}\\\\\n&> n(\\sqrt{n}+1)-2,\n \\end{align*}\n where the last inequality is because $ n\\geq 10 $. This completes the proof. \n\nFinally, in order to prove tightness of the bound, let $q$ be a prime power and $(P,\\mathcal{B})$ be an affine plane of order $q$. Suppose that $\\{B_1,\\ldots, B_{q}\\}$ is a parallel class. Add a single new point to all the blocks $B_1,\\ldots, B_q$. The new PBD has $n=q^2+1$ points, $q^2$ blocks of size $q$ and $q$ blocks of size $q+1$. Hence, the sum of its block sizes is \n\\[q^3+q^2+q=(q^2+1)(q+1)-1= n(\\lfloor \\sqrt{n}\\rfloor+1)-1. \\]\n\\end{proof}\n\\section{Sigma clique partition of complement of graphs }\\label{sec:Kn-Km}\nGiven a graph $G$ and its subgraph $H$, the complement of $H$ in $G$ denoted by $G-H$ is obtained from $G$ by removing all edges (but no vertices) of $H$. If $H$ is a graph on $n$ vertices, then $K_n-H$ is called the complement of $H$ and is denoted by $\\overline{H}$.\n\n In this section, applying the results of Section~\\ref{sec:pbd}, we are going to determine the asymptotic behaviour of the sigma clique partition number of the graph $K_n-K_m$, when $ m $ is a function of $ n $, as well as Cocktail party graph, the complement of path and cycle on $n$ vertices.\n\nThe clique partition number of the graph $K_n-K_m$, for $m\\leq n$, has been studied by several authors. \nIn order to notice the hardness of determining the exact value of $\\cp(K_n-K_m)$, note that if we could show that $\\cp(K_{111}-K_{11})\\geq 111$, then we could determine whether there exists a projective plane of order $10$~\\cite{pullman82}.\nWallis in \\cite{WallisAsymp}, proved that $\\cp(K_n-G)\\sim n$, if $G$ has $o(\\sqrt{n})$ vertices. Also, Erd\\H{o}s et al. in \\cite{erdos88} showed that $\\cp(K_n-K_m)\\sim m^2$, if $\\sqrt{n}< m< n$ and $m=o(n)$. Moreover, if $m=cn$ and $1\/2\\leq c\\leq 1$, then Pullman et al. in \\cite{pullmanII} proved that $\\cp(K_n-K_m)=1\/2 (n-m) (3m-n-1)$.\n\n \nIn the following theorem, we present upper and lower bounds for $ \\scp(K_n-K_m) $ and then we improve these bounds in order to determine asymptotic behaviour of $ \\scp(K_n-K_m) $. 
\n\\begin{theorem}\\label{thm:kn-km}\nFor every $m,n$, $1\\leq m\\leq n$, we have \n\\begin{equation}\\label{eq:kn-km}\nmn-\\dfrac{m^2(m-1)}{n-1}\\leq\\scp(K_n-K_m)\\leq(2m-1)(n-m)+1.\n\\end{equation}\n\\end{theorem}\n\\begin{proof}\nAdding the clique $K_m$ to every clique partition of $K_n-K_m$ forms a PBD on $n$ points. Thus, the lower bound is obtained from Inequality~(\\ref{eq:pbdB}).\n\nFor the upper bound, let $V(K_n)=\\{x_1,\\ldots, x_n\\}$ and $V(K_m)=\\{x_{n-m+1},\\ldots, x_n\\}$. Note that the clique $\\{x_1,\\ldots, x_{n-m+1}\\}$ along with $(m-1)(n-m)$ remaining edges form a clique partition of $K_n-K_m$. Hence, $\\scp(K_n-K_m)\\leq (n-m+1)+2(m-1)(n-m)$. \n\\end{proof}\n\nIn the following theorem, for $m\\leq\\frac{\\sqrt{n}}{2}$, we improve\nthe lower bound in (\\ref{eq:kn-km}).\n\\begin{theorem}\\label{thm:kn-km2}\nIf $m\\leq \\frac{\\sqrt{n}}{2}$, then \n\\[(2m-1) n - O(m^2) \\leq \\scp(K_n-K_m)\\leq (2m-1) n - \\Omega(m^2).\\]\n\\end{theorem}\n\\begin{proof}\nThe upper bound holds by (\\ref{eq:kn-km}).\nFor the lower bound, consider an arbitrary clique partition of $K_n-K_m$, say $\\mathcal{C}$, and add the clique $K_m$ to obtain a PBD $(P,\\mathcal{B})$ with $n$ points. Let $\\tau$ be the size of maximum block in $\\mathcal{B}$. It is clear that $m\\leq \\tau\\leq n-m+1$. We give the lower bound in the following cases. First note that since $m\\leq \\sqrt{n}\/2$, we have $(2m-1)^2\\leq n-1$.\n\nIf $\\tau\\leq \\frac{n-1}{2m-1}$, then by (\\ref{eq:pbdA}), we have\n\\[\\sum_{C\\in\\mathcal{C}} |C|\\geq (2m-1)n-m.\\]\n\nIf $\\frac{n-1}{2m-1}\\leq \\tau\\leq n\/2$, then $2m-1\\leq \\tau\\leq n\/2$, and by (\\ref{eq:pbdB}),\n\\[\\sum_{C\\in\\mathcal{C}} |C|\\geq (n+1) \\tau - \\frac{\\tau^2(\\tau-1)}{n-1}-m. \\]\nThe right hand side of this inequality is increasing as a function of $\\tau$ within the interval $[2m-1,n\/2]$. Hence, \n\\[\\sum_{C\\in\\mathcal{C}} |C|\\geq (n+1) (2m-1) - \\frac{(2m-1)^2(2m-2)}{n-1}-m\\geq (2m-1) n-m. \\]\nFinally, if $n\/2\\leq \\tau\\leq n-m+1$, then, by (\\ref{eq:pbdC}),\n\\[\\sum_{C\\in\\mathcal{C}} |C|\\geq\\tau-\\frac{(n-\\tau)(n-5\\tau-1)}{2}-m.\\]\nConsider the right hand side of this inequality as a function of $\\tau$ within the interval $[n\/2,n-m+1]$. It attains its minimum at $\\tau=n-m+1$. Hence, \n\\[\\sum_{C\\in\\mathcal{C}} |C|\\geq n-2m+1-\\frac{(m-1)(5m-4n-6)}{2} = (2m-1) n - O(m^2).\\]\n\\end{proof}\n\nThe following lemma is a direct application of Theorem~\\ref{thm:LargeBlock} that gives a lower bound for $ \\scp(K_n-H) $ in terms of $ \\scp(H) $. Here, $\\omega(G)$ stands for the clique number of graph $G$. \n\n\\begin{lemma}\\label{lem:lower bound K_n-H}\nLet $ H $ be a graph on $ m $ vertices. If $ \\omega(H)\\leq n-\\frac{1}{2}(\\sqrt{n}+1) $ and $\\omega(\\overline{H})\\leq m-\\frac{1}{2}(\\sqrt{n}+1)$, then $$ \\scp(K_n-H)+\\scp(H)\\geq n(\\lfloor\\sqrt{n}\\rfloor+1)-1. $$\n\\end{lemma}\n\\begin{proof}\nAssume that $ \\cal C $ is an arbitrary clique partition for $ K_n-H $ and $ \\tau $ is the size of largest clique in $ \\cal C $. Then, $ \\tau\\leq n-m+\\omega(\\overline{H})\\leq n-m+m-\\frac{1}{2}(\\sqrt{n}+1)=n-\\frac{1}{2}(\\sqrt{n}+1)$. Also, by assumption, $H$ has no clique of size larger than $n-\\frac{1}{2}(\\sqrt{n}+1)$. Moreover, every clique partition of $H$ along with every clique partition for $K_n-H$ form a PBD. 
Hence, by Theorem~\\ref{thm:LargeBlock}, $ \\scp(K_n-H)+\\scp(H)\\geq n(\\lfloor\\sqrt{n}\\rfloor+1)-1$.\n\\end{proof}\n\n\nWe need the following lemma in order to improve the upper bound in \\eqref{eq:kn-km} whenever $ \\sqrt{n}\\leq m\\leq n $. The idea is similar to \n\\cite{WallisAsymp} that uses a projective plane of appropriate size to give a clique partition for the graph $K_n-K_m$.\n\\begin{lemma}\\label{lem:design}\nLet $ H $ be a graph on $ m $ vertices. If there exists a $(v,k,1)-$design, such that $k\\geq m$ and $v-k\\geq n-m$, then $\\scp(K_n-H)\\leq n(v-1)\/(k-1)+\\scp(\\overline{H})-m$.\n\\end{lemma}\n\\begin{proof}\nLet $ (P,\\mathcal{B}) $ be a $ (v,k,1)-$design. \nSelect a block $ B_1\\in \\mathcal{B} $ and delete $ k-m $ points from it. Also, delete $ v-k-(n-m) $ points not in $ B_1 $. Now, consider the remaining points as vertices of $ K_n-H $ and each block except $ B_1 $ as a clique in $ K_n-H $. Thus, $ \\scp(K_n-H)\\leq r(n-m)+(r-1)m+\\scp(\\overline{H})=nr-m+\\scp(\\overline{H})$, where $ r=(v-1)\/(k-1) $ is the number of blocks containing a single point.\n\\end{proof}\nWe are going to apply Lemma~\\ref{lem:design} to projective planes and provide a clique covering for $ K_n-H $. Since the existence of projective planes of order $ q $ is only known for prime powers, we need the following well-known theorem to approximate an integer by a prime. \n\\begin{thm}{\\em\\cite{baker}}\\label{lem:prime gap}\nThere exists a constant $ x_0 $ such that for every integer $ x > x_0 $, the interval \u2030$ [x , x+x^{.525}] $ contains prime numbers.\n\\end{thm}\nThe following two theorems determine asymptotic behaviour of $ \\scp(K_n-K_m) $, when $\\sqrt{n}\/2\\leq m$ and $ m=o(n) $. \n\\begin{theorem}\\label{thm:kn--km}\nLet $ H $ be a graph on $ m $ vertices. If $\\frac{\\sqrt{n}}{2}\\leq m\\leq \\sqrt{n}$, then $\\scp(K_n-H)\\leq (1+o(1))\\, n\\sqrt{n}$. Moreover, $ \\scp(K_n-K_m)= (1+o(1))\\, n\\sqrt{n}$.\n\\end{theorem}\n\\begin{proof}\nLet $q$ be the smallest prime power greater than or equal to $\\sqrt{n}$. By Theorem~\\ref{lem:prime gap}, we have $\\sqrt{n}\\leq q\\leq \\sqrt{n}+\\sqrt{n}^{.525}$. Thus, \n$ q\\geq\\sqrt{n}>m-1 $\nand\n$ q^2\\geq n\\geq n-m $. Since there exists a projective plane of order $q$, by Lemma~\\ref{lem:design}, we have\n\\[\\scp(K_n-H)\\leq n (q+1)-m+\\scp(\\overline{H}) \\leq n (q+1)-m+\\frac{m^2}{2},\\]\nwhere the last inequality is due to the fact that sigma clique partition number of every graph on $ n $ vertices is at most $ n^2\/2 $ \\cite{chung81,gyori80}. Hence,\n\\[\\scp(K_n-H)\\leq n^{1.5}+n^{1.2625}+1.5\\ n= (1+o(1))\\, n\\sqrt{n}.\\]\nAlso, by Lemma \\ref{lem:lower bound K_n-H}, $ \\scp(K_n-K_m)\\geq (1+o(1))\\, n\\sqrt{n} $.\n\\end{proof}\nIn the following theorem, for $\\sqrt{n}\\leq m\\leq n$, we improve\nthe upper bound in (\\ref{eq:kn-km}).\n\\begin{theorem}\\label{thm:kn-km1}\nIf $\\sqrt{n}\\leq m\\leq n$, then $\\scp(K_n-K_m)\\leq (1+o(1))\\, nm$ and if in addition $ m=o(n) $, then $\\scp(K_n-K_m)= (1+o(1))\\,nm$.\n\\end{theorem}\n\\begin{proof}\nLet $\\sqrt{n}\\leq m\\leq n$, and also let $q$ be the smallest prime power which is greater than or equal to $m$. By Lemma \\ref{lem:prime gap}, $m\\leq q\\leq m+m^{.525}$. Thus, $q=(1+o(1))\\, m$. 
Since there exists a projective plane of order $q$, by Lemma~\\ref{lem:design}, we have\n\\[\\scp(K_n-K_m)\\leq n (q+1)-m = (1+o(1))\\, nm.\\]\nOn the other hand, when $ m=o(n) $, Inequality~(\\ref{eq:kn-km}) yields $ \\scp(K_n-K_m)\\geq (1+o(1))\\, nm$, which completes the proof.\n\\end{proof}\nTheorems \\ref{thm:kn-km2}, \\ref{thm:kn--km} and \\ref{thm:kn-km1} make clear asymptotic behaviour of $K_n-K_m$ in case $m=o(n)$.\n\\begin{cor}\\label{cor: K_n-K_m}\nLet $m$ be a function of $n$. Then\n\\begin{itemize}\n\\item[\\rm i)] If $m\\leq \\frac{\\sqrt{n}}{2}$, then $\\scp(K_n-K_m)\\sim (2m-1)n$.\n\\item[\\rm ii)] If $\\frac{\\sqrt{n}}{2}\\leq m\\leq \\sqrt{n}$, then $\\scp(K_n-K_m)\\sim n\\sqrt{n}$.\n\\item[\\rm iii)] If $m\\geq \\sqrt{n}$ and $m=o(n)$, then $\\scp(K_n-K_m)\\sim mn$.\n\\end{itemize}\n\\end{cor}\nIn what follows, we consider the case $m=cn$, where $c$ is a constant.\n First note that if $1\/2\\leq c\\leq 1$, then by Theorem~\\ref{thm:PBD}, since $m\\geq n\/2$, there exists a PBD on $n$ points with a block of size $m$, for which equality holds in (\\ref{eq:pbdC}). Hence, we have $\\scp(K_n-K_m)= \\frac{(1-c)}{2}\\bigg((5c-1)n^2+n\\bigg)$.\n In order to deal with the case $c<1\/2$, we need the following well-known existence theorem of resolvable designs.\n \\begin{thm}{\\em \\cite{Lu}}\\label{thm:resol}\n Given any integer $k\\geq 2$, there exists an integer $v_0(k)$ such that for every $ v\\geq v_0(k) $, a $(v,k,1)-$resolvable design exists if and only if $v \\overset{k}{\\equiv} 0$ and $v-1\\overset{k-1}{\\equiv} 0$.\n \\end{thm}\n\\begin{theorem}\nLet $02$, then $ck\\leq 1$ and thus $m-t\\leq 1\/(k-1)<1$. Therefore, $m\\leq t$.\n\n\nNow, let $v_1,\\ldots, v_m$ be $m$ new points and for every $i$, $1\\leq i\\leq m$, add point $v_i$ to all blocks of $i$-th parallel class. These blocks form a clique partition $\\mathcal{C}$ for $K_n-K_m$, where\n\\begin{equation*}\n\\sum_{C\\in\\mathcal{C}}|C|\\leq \\sum_{B\\in\\mathcal{B}}|B|+\\frac{v}{k} m= (n-m)\\frac{v-1}{k-1}+\\frac{mv}{k}.\n\\end{equation*}\nHence,\n\\begin{align*}\n\\sum_{C\\in\\mathcal{C}}|C|&\\leq \\left(\\frac{(1-c)^2}{k-1}+ \\frac{c(1-c)}{k}\\right)n^2+O(n)\\\\\n&= \\frac{(1-c)(k-c)}{k(k-1)} n^2+O(n).\n\\end{align*}\n\\end{proof} \nWe close the paper by proving that if $ G $ is Cocktail party graph, complement of path or cycle on $ n $ vertices, then $ \\scp(G)\\sim n\\sqrt{n} $. \nGiven an even positive integer $n$, Cocktail party graph $T_n$ is obtained from the complete graph $K_{n}$ by removing a perfect matching. If $n$ is an odd positive integer, then $T_n$ is obtained from $T_{n+1}$ by removing a single vertex. \nIn \\cite{Wallis87,gregory86} it is proved that if $G$ is Cocktail party graph or complement of a path or a cycle on $n$ vertices, then $n\\leq \\cp(G)\\leq (1+o(1))\\, n \\log\\log n$ and it is conjectured that for such a graph, $\\cp(G)\\sim n$.\n\n\\begin{theorem}\\label{thm:complment of path}\nLet $ P_n $ be the path on $ n $ vertices. Then, $ \\scp(\\overline{P_n})\\sim n^{3\/2} $.\n\\end{theorem}\n\\begin{proof}\nBy Lemma~\\ref{lem:lower bound K_n-H}, we have $ \\scp(\\overline{P_n})\\geq n^{3\/2}-2n-3$. Now, by induction on $n$, we prove that there exists a constant $c$, such that $\\scp(\\overline{P_n})\\leq n^{3\/2}+c\\, n^{13\/10}$. The idea is similar to \\cite{Wallis87}.\\\\\nLet $ d=\\lfloor\\sqrt{n}\\rfloor $, $ e=\\lceil\\frac{n}{d}\\rceil $ and $ q $ be the smallest prime greater than $ \\sqrt{n} $. By Lemma~\\ref{lem:prime gap}, $q\\leq \\sqrt{n}+n^{3\/10}$. 
In an affine plane of order $ q $, choose a parallel class, say $ C_1 $, and delete $ q-d $ blocks in $C_1$. Then, remove $ q-e $ blocks in a second parallel class, say $ C_2 $. The collection of remaining blocks is a PBD on $de$ points.\n\nAssume that $ a_{ij} $ is the intersection point of block $ i $ of $ C_1 $ and block $ j $ of $ C_2 $ in the remaining PBD. Thus, \n $C_1=\\{\\{a_{i1}, a_{i2},\\ldots , a_{ie}\\} \\ : \\ 1\\leq i\\leq d\\}$ and $C_2=\\{\\{a_{1j}, a_{2j},\\ldots , a_{dj}\\} \\ : \\ 1\\leq j\\leq e \\}$.\nNow, replace each block in $ C_2 $ by members of a clique partition of a copy of $\\overline{P_d }$ on the same vertices. Also, replace each of the blocks $ \\{a_{11}, a_{12},\\ldots , a_{1e}\\} $ and $ \\{a_{d1}, a_{d2},\\ldots , a_{de}\\} $ in $ C_1 $ by members of a clique partition of a copy of $\\overline{P_e}$ on the same vertices. In fact, we have replaced $ e+2 $ blocks by some clique partitions of complement of paths and $ q(q+1)-(e+2) $ blocks are left unchanged. It can be seen that the resulting collection, is a partition of all edges of $\\overline{P_{de}}$ except $(e-1)$ edges namely $a_{11}a_{12}, a_{d2}a_{d3}, a_{13}a_{14}, a_{d4}a_{d5},\\dots$ . Adding these $e-1 $ edges to this collection comprise a clique partition for $\\overline{P_{de}}$. Hence,\n\n\\begin{align*}\n\\scp(\\overline{P_n})\\leq\\scp(\\overline{P_{de}})&\\leq qde -2e+e\\scp(\\overline{P_d})+2\\scp(\\overline{P_e})+2(e-1)\n\\end{align*}\nSince $ e\\leq d+3 $, $ \\scp(\\overline{P_e})\\leq\\scp(\\overline{P_d})+6d $. Thus,\n\\begin{align*}\n\\scp(\\overline{P_n})&\\leq qd(d+3)+(d+5)\\scp(\\overline{P_d})+12d.\n\\end{align*}\nTherefore, by the induction hypothesis, we have\n\\begin{align*}\n\\scp(\\overline{P_n})&\\leq (\\sqrt{n}+n^{3\/10}) \\sqrt{n}(\\sqrt{n}+3)+(\\sqrt{n}+5)(n^{3\/4}+c\\, n^{13\/20})+12\\sqrt{n}\\\\\n&\\leq n^{3\/2}+ (1+o(1))\\, n^{13\/10}\\\\\n& \\leq n^{3\/2}+c\\, n^{13\/10}.\n\\end{align*}\n\\end{proof}\nAsymptotic behavior of $\\scp(T_n)$ and $\\scp(\\overline{C_n})$ can be easily determined using $\\scp(\\overline{P_n})$, as follows.\n\\begin{cor}\nLet $ T_n $ and $ C_n $ be Cocktail party graph and cycle on $ n $ vertices, respectively. Then, $ \\scp(\\overline{C_n})\\sim n^{3\/2} $ and $ \\scp(T_n)\\sim n^{3\/2} $.\n\\end{cor}\n\\begin{proof}\nBy Lemma \\ref{lem:lower bound K_n-H}, $ \\scp(\\overline{C_n})\\geq n^{3\/2}-2n-1 $ and $ \\scp(T_n)\\geq n^{3\/2}-n-1 $.\\\\\nNote that $\\overline{P_n}$ is obtained from $\\overline{C_{n+1}}$ by removing an arbitrary vertex $v$. Adding $n-2$ edges incident with $v$ to any clique partition of $ \\overline{P_{n}} $ forms a clique partition for $ \\overline{C_{n+1}} $. Therefore, $ \\scp(\\overline{C_{n+1}})\\leq\\scp(\\overline{P_n})+2(n-1) $. Also, adding at most $n\/2$ edges to any clique partition for $\\overline{P_n}$ forms a clique partition for $T_n$. Thus, $ \\scp(T_n)\\leq\\scp(\\overline{P_n})+2\\frac{n}{2} $. Hence, by Theorem~\\ref{thm:complment of path}, $ \\scp(\\overline{C_n}), \\scp(T_n)\\leq (1+o(1))\\, n^{3\/2}$. \n\\end{proof}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\t\\footnotetext[1]{The authors contributed to this work when they were at NVIDIA.}\n\t\n\tMulti-view stereo algorithms \\cite{Bleyer11bmvc, Campbell08eccv, Furukawa07cvpr, Galliani15iccv, schoenberger2016mvs, Seitz2006mvs} have played an important role in 3D reconstruction and scene understanding for applications such as augmented reality, robotics, and autonomous driving. 
However, if a scene contains motion, such as non-stationary rigid objects or non-rigid surface deformations, the assumption of an epipolar constraint is violated~\\cite{Hartley04book}, causing algorithms to fail in most cases. \n\tScenes with rigidly-moving objects have been reconstructed by segmenting foreground objects from the background, and treating these regions independently \\cite{Zhang11-multibody, MulVSLAM_Abhijit_ICCV2011, Wang15}. However, the reconstruction of scenes with surfaces undergoing deformation is still a challenging task.\n\t\n\tTo solve this problem for sparse point pairs, various non-rigid structure from motion (NRSfM) methods have been introduced~\\cite{Jensen18}. \n\tThese methods often require either dense views (video frames)~\\cite{Ansari17} for the acquisition of dense correspondences with flow estimation, or prior information to constrain the problem~\\cite{Dai17}.\n\tNewcombe et al.~\\cite{Newcombe15cvpr} and Innmann et al.~\\cite{Innmann16eccv} recently demonstrated solutions for the 3D reconstruction of arbitrary, non-rigid, dynamic scenes using a dense stream of known metric depths captured from a commercial depth camera. \n\tHowever, there are many common scenarios one may encounter for which this method cannot be used, for example, when capturing scenes that contain non-rigid changes that are neither acquired as video nor captured by stereo or depth cameras, but rather come from independent single views.\n\t\n\tIn this paper, we are specifically interested in \\emph{dense 3D scene reconstruction with dynamic non-rigid deformations} acquired from images with wide spatial baselines, and sparse, unordered samples in time. This requires two solutions: (1) a method to compute the most plausible deformation from millions of potential deformations between given wide-baseline views, and (2) a dense surface reconstruction algorithm that satisfies a photometric consistency constraint between images of surfaces undergoing non-rigid changes.\n\t\n\tIn our solution, we first compute a \\emph{canonical surface} from views that have minimal non-rigid changes (Fig.~\\ref{fig:teaser}(a)), and then estimate the deformation between the canonical pose and other views by joint optimization of depth and photometric consistency. This allows the expansion of 3D points of canonical views (Fig.~\\ref{fig:teaser}(b)). 
\n\tThen, through the individual deformation fields estimated from each view to the canonical surface, we can reconstruct a dense 3D point cloud of each single view (Fig.~\\ref{fig:teaser}(c)).\n\tA brief overview of our entire framework is described in Fig.~\\ref{fig:overview}.\\\\\n\t\n\t\\noindent Our contributions are as follows:\n\t\\begin{itemize}[topsep=0pt,noitemsep]\n\t\t\\item The first non-rigid MVS pipeline that densely reconstructs dynamic 3D scenes with non-rigid changes from wide-baseline and sparse RGB views.\n\t\t\\item A new formulation to model non-rigid motion using a deformation graph~\\cite{sumner2007embedded} and the approximation of the inverse-deformation used for the joint optimization with photometric consistency.\n\t\t\\item Patchmatch-based~\\cite{Bleyer11bmvc} dense sample propagation on top of an existing MVS pipeline~\\cite{schoenberger2016mvs}, which allows flexible implementation depending on different MVS architectures.\n\t\\end{itemize}\n\t\n\t\n\t\n\t\\section{Related Work}\n\t\\label{sec:related}\n\t\n\t\\noindent \\textbf{Dynamic RGB-D Scene Reconstruction.}\n\tA prior step to full dynamic scene reconstruction is dynamic template tracking of 3D surfaces.\n\tThe main idea is to track a shape template over time while non-rigidly deforming its surface \\cite{deAguiar2008,allain2015efficient,l0Norigid2015,hernandez2007non,li2008global,li2009robust,li2012temporally,gall2008driftfree,zollhoefer2014deformable}.\n\tJointly tracking and reconstructing a non-rigid surface is significantly more challenging.\n\tIn this context, researchers have developed an impressive line of works based on RGB-D or depth-only input~\\cite{zeng2013templateless,mitra2007dynamic,tevs2012animation,bojsen2012tracking,dou2013scanning,Dou_2015_CVPR,Malleson2014,wang2016capturing,Li13}.\n\tDynamicFusion~\\cite{Newcombe15cvpr} jointly optimizes a Deformation Graph \\cite{sumner2007embedded}, then fuses the deformed surface with the current depth map.\n\tInnmann et al.~\\cite{Innmann16eccv} follows up on this work by using an as-rigid-as-possible regularizer to represent deformations~\\cite{sorkine2007rigid}, and incorporate RGB features in addition to a dense depth tracking term.\n\tFusion4D~\\cite{dou2016fusion4d} brings these ideas a level further by incorporating a high-end RGB-D capture setup, which achieves very impressive results.\n\tMore recent RGB-D non-rigid fusion frameworks include KillingFusion~\\cite{slavcheva2017killingfusion} and \n\tSobolevFusion~\\cite{slavcheva2018sobolevfusion}, which allow for implicit topology changes using advanced regularization techniques.\n\tThis line of research has made tremendous progress in the recent years; but given the difficulty of the problem, all these methods either rely on depth data or calibrated multi-camera rigs.\n\t\n\t\\noindent \\textbf{Non-Rigid Structure from Motion.} \n\tSince the classic structure from motion solutions tend to work well for many real world applications~\\cite{Triggs:1996, Kanade1153, Tomasi92}, many recent efforts have been devoted to computing the 4D structure of sparse points in the spatio-temporal domain, which we call non-rigid structure from motion (NRSfM)~\\cite{Jensen18, kumar17, Rabaud08, garg2013dense}. \n\tHowever, most of the NRSfM methods consider the optimization of sparse points rather than a dense reconstruction, and often require video frames for dense correspondences~\\cite{Ansari17} or prior information~\\cite{Dai17}. 
\n\tScenes with rigidly moving objects (e.g., cars or chairs) have been reconstructed by segmenting rigidly moving regions~\\cite{Zhang11-multibody, MulVSLAM_Abhijit_ICCV2011, Ladick10, Wang15}.\n\tIn our work, we focus on a new scenario of reconstructing scenes with non-rigid changes from a few images, and estimate deformations that satisfy each view.\n\t\n\t\\noindent \\textbf{Multi-View Stereo and Dense Reconstruction.} \n\tVarious MVS approaches for dense 3D scene reconstruction have been introduced in the last few decades~\\cite{Furukawa07cvpr, Galliani15iccv, schoenberger2016mvs, Seitz2006mvs}. \n\tWhile many of these methods work well for static scenes, they often reject regions that are not consistent with the epipolar geometry~\\cite{Hartley04book}, e.g., if the scene contains changing regions. \n\tReconstruction failure can also occur if the ratio of static to non-rigid parts present in the scene is too low~\\cite{Lv18eccv}. \n\tA recent survey~\\cite{Schoeps2017CVPR} on MVS shows that COLMAP~\\cite{schoenberger2016mvs} performs the best among state-of-the-art methods. Therefore, we adopt COLMAP's Patchmatch framework for dense photometric consistency.\n\t\n\t\n\t\n\t\\section{Approach}\n\tThe input to our non-rigid MVS method is a set of images of a scene taken from unique (wide-baseline) locations at different times. An overview of our method is shown in Fig.~\\ref{fig:overview}.\n\tWe do not assume any knowledge of temporal order, i.e., the images are an unorganized collection. \n\tHowever, we assume there are at least two images with minimal deformation, and the scene contains sufficient background in order to measure the ratio of non-rigidity and to recover the camera poses. We discuss the details later in Sec~\\ref{sec:implementation}.\n\tThe output of our method is an estimate of the deformation within the scene from the canonical pose to every other view, as well as a depth map for each view.\n\tAfter the canonical view selection, we reconstruct an initial canonical surface that serves as a template for the optimization.\n\tGiven another arbitrary input image and its camera pose, we estimate the deformation between the canonical surface and the input.\n\tFurthermore, we compute a depth map for this processed frame using a non-rigid variant of PatchMatch.\n\tHaving estimated the motion and the geometry for every input image, we recompute the depth for the entire set of images to maximize the growth of the canonical surface.\n\t\n\t\\subsection{Modeling Deformation in Sparse Observations}\n\t\\label{sec:deformation_model}\n\t\n\tTo model the non-rigid motion in our scenario, we use the well known concept of deformation graphs \\cite{sumner2007embedded}. Each graph node represents a rigid body transform, similar to the as-rigid-as-possible deformation model \\cite{sorkine2007rigid}. 
These transforms are locally blended to deform nearby space.\n\t\n\tGiven a point $\\mathbf{v} \\in \\mathbb{R}^3$, the deformed version $\\hat{\\mathbf{v}}$ of the point is computed as:\n\t\\[\n\t\\hat{\\mathbf{v}} = \\sum_{i=1}^k w_i(\\mathbf{v}) \\left[ \\mathbf{R}_i (\\mathbf{v} - \\mathbf{g}_i) + \\mathbf{g}_i + \\mathbf{t}_i \\right],\n\t\\]\n\twhere $\\mathbf{R}_i$ and $\\mathbf{t}_i$ represent the rotation and translation of a rigid body transform about position $\\mathbf{g}_i$ of the $i$-nearest deformation node, and $k$ is the user-specified number of nearest neighbor nodes (we set to $k = 4$ throughout our paper).\n\tThe weights $w_i$ are defined as:\n\t\\[\n\tw_i(\\mathbf{v}) = \\frac{1}{\\sum_{j=1}^k w_j (\\mathbf{v})} \\left( 1 - \\frac{\\|\\mathbf{v} - \\mathbf{g}_i \\|_2}{\\| \\mathbf{v} - \\mathbf{g}_{k+1} \\|_2}\\right)^2.\n\t\\]\n\tFor a complete description of deformation graphs, we refer to the original literature~\\cite{sumner2007embedded}. \n\t\n\tWhen projecting points between different images, we also need to invert the deformation. \n\tThe exact inverse deformation can be derived given known weights:\n\t\\[\n\t\\mathbf{v} = \\left(\\sum_{i=1}^k w_i(\\mathbf{v}) \\mathbf{R}_i \\right)^{-1} \\left[ \\hat{\\mathbf{v}} + \\sum_{i=1}^k w_i(\\mathbf{v}) \\left[ \\mathbf{R}_i \\mathbf{g}_i - \\mathbf{g}_i - \\mathbf{t}_i \\right] \\right]\n\t\\]\n\tHowever, because we do not know the weights a priori, which requires the nearest neighbor nodes and their distances, this becomes a non-linear problem. \n\tSince this computationally expensive step is necessary at many stages of our pipeline, we introduce an approximate solution:\n\t\\[\n\t\\mathbf{v} \\approx \\left(\\sum_{i=1}^k \\hat{w}_i(\\mathbf{\\hat v}) \\mathbf{R}_i \\right)^{-1} \\left[ \\hat{\\mathbf{v}} + \\sum_{i=1}^k \\hat{w}_i(\\mathbf{\\hat v}) \\left[ \\mathbf{R}_i \\mathbf{g}_i - \\mathbf{g}_i - \\mathbf{t}_i \\right] \\right],\n\t\\]\n\twhere the weights $\\hat{w}_i$ are given by\n\t\\[\n\t\\hat{w}_i(\\hat{\\mathbf{v}}) = \\frac{1}{\\sum_{j=1}^k \\hat{w}_i (\\hat{\\mathbf{v}})} \\left( 1 - \\frac{\\|\\mathbf{\\hat v} - (\\mathbf{g}_i + \\mathbf{t}_i) \\|_2}{\\|\\mathbf{\\hat v} - (\\mathbf{g}_{k+1} + \\mathbf{t}_{k+1}) \\|_2}\\right)^2.\n\t\\]\n\tNote that our approximation can be computed directly and efficiently, without leading to any error of observable influence in our synthetic experiments. \n\t\n\t\\begin{figure}[t]\n\t\t\\centering\n\t\t\\includegraphics[width=.9\\linewidth]{figures\/node-visualization}\n\t\t\\caption{\\textbf{Deformation nodes and correspondences:} (Left) shows the deformation nodes at $t_0$ (orange), and another set of nodes at $t_1$ (red) overlaid in the canonical view. (Right) we show the relationship between deformation nodes from two views and sparse 3D matches (after lifting) in the context of a non-rigid change. Note that we only show the sparse matching for simpler visualization while there is also a dense term for photometric consistency that drives the displacement of deformation nodes together with the sparse matches.}\n\t\t\\label{fig:node-viz}\n\t\t\\vspace{-.2cm}\n\t\\end{figure}\n\t\n\t\\subsection{Non-rigid Photometric Consistency and Joint Optimization}\n\t\\label{sec:photometric-conssitency}\n\tWith the deformation model in hand, we next estimate the depth of the other views by estimating deformations that are photometrically consistent with the collection images and subject to constraints on the geometry. 
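Before turning to the estimation, the deformation model of Sec.~\ref{sec:deformation_model} can be made concrete with a short sketch. The following is hypothetical NumPy code (not our actual implementation), assuming the node positions $\mathbf{g}_i$, rotations $\mathbf{R}_i$, and translations $\mathbf{t}_i$ are given and $k=4$; it evaluates the blended forward deformation of a point and recovers the point with the approximate inverse.
\begin{verbatim}
import numpy as np

def blend_weights(p, centers, k=4):
    # Distance-based blending weights w.r.t. the k nearest centers;
    # the (k+1)-th nearest center only normalizes the distances.
    d = np.linalg.norm(centers - p, axis=1)
    idx = np.argsort(d)[:k + 1]
    w = (1.0 - d[idx[:k]] / d[idx[k]]) ** 2
    return idx[:k], w / w.sum()

def deform(v, g, R, t, k=4):
    # Forward deformation: blend the k nearest rigid node transforms.
    idx, w = blend_weights(v, g, k)
    return sum(w[a] * (R[i] @ (v - g[i]) + g[i] + t[i]) for a, i in enumerate(idx))

def deform_inv(v_hat, g, R, t, k=4):
    # Approximate inverse: weights are evaluated at the deformed nodes g + t.
    idx, w = blend_weights(v_hat, g + t, k)
    A = sum(w[a] * R[i] for a, i in enumerate(idx))
    b = v_hat + sum(w[a] * (R[i] @ g[i] - g[i] - t[i]) for a, i in enumerate(idx))
    return np.linalg.solve(A, b)

# Toy check: for a small random deformation, the approximate inverse
# recovers the original point up to a small error.
rng = np.random.default_rng(0)
g = rng.uniform(-1.0, 1.0, (50, 3))          # node positions
R = np.stack([np.eye(3)] * 50)               # node rotations (identity for simplicity)
t = 0.05 * rng.standard_normal((50, 3))      # small node translations
v = rng.uniform(-0.5, 0.5, 3)
print(np.linalg.norm(deform_inv(deform(v, g, R, t), g, R, t) - v))
\end{verbatim}
We now return to the estimation of the deformation and the depth.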
\n\tThis entire step can be interpreted as a non-rigid version of a multi-view stereo framework.\\\\\n\t\\noindent \\textbf{Canonical View Selection} \n\tFrom the set of input images, we select two views with a minimal amount of deformation.\n\tWe run COLMAP's implementation of PatchMatch \\cite{schoenberger2016mvs} \n\tto acquire an initial temple model of the canonical pose.\n\tBased on this template, we compute the deformation graph by distributing a user-specified number of nodes on the surface.\n\tTo this end, we start with all points of the point cloud as initial nodes.\n\tWe iterate over all nodes, and for each node remove all its neighbors within a given radius.\n\tThe process is repeated with a radius that is increased by 10\\%, until we have reached the desired number of nodes.\n\tIn our experiments, we found that 100 to 200 nodes are sufficient to faithfully reconstruct the motion. Fig.~\\ref{fig:node-viz}(left) shows an example of the node distribution.\\\\\n\t\\noindent \\textbf{Correspondence Association}\n\tFor sparse global correspondences, we detect SIFT keypoints \\cite{lowe1999object} in each image and match descriptors for every pair of images to compute a set of feature tracks $\\{ \\mathbf{u}_i \\}$. A \\textit{feature track} represents the same 3D point and is computed by connecting each keypoint with each of its matches. We reject inconsistent tracks, i.e.,~if there is a path from a keypoint $\\mathbf{u}^{(j)}_i$ in image $i$ to a different keypoint $\\mathbf{u}^{(k)}_i$ with $j \\neq k$ in the same image. \n\t\n\tWe lift keypoints $\\mathbf{u}_i$ to 3D points $\\mathbf{x}_i$, if there is a depth value in at least one processed view, compute its coordinates in the canonical pose $\\mathbf{D}_i^{-1}(\\mathbf{x}_i)$ and apply the current estimate of our deformation field $\\mathbf{D}_j$ for frame $j$ to these points. 
\n\tTo establish a sparse 3D-3D correspondence $(\\mathbf{D}_i^{-1}(\\mathbf{x}_i), \\mathbf{x}_j)$ between the canonical pose and the current frame $j$ for the correspondence set $S$, we project $\\mathbf{D}_j(\\mathbf{D}_i^{-1}(\\mathbf{x}_i))$ to the ray of the 2D keypoint $\\mathbf{u}_j$ (see Fig.~\\ref{fig:concept1}).\n\tTo mitigate ambiguities and to constrain the problem, we also aim for dense photometric consistency across views.\n\tThus, for each point of the template of the canonical pose, we also add a photometric consistency constraint with a mask $C_i \\in \\{ 0, 1 \\}$.\n\t\n\t\n\t\n\t\\noindent \\textbf{Deformation and Depth Estimation}\n\tIn our main iteration (see also Algorithm \\ref{alg:algorithm1}), we estimate the deformation $\\hat{\\mathbf{D}}$ between the canonical pose and the currently selected view by minimizing the joint optimization problem:\n\t\\begin{align}\n\t&E = w_\\text{sparse} E_\\text{sparse} + w_\\text{dense} E_\\text{dense} + w_\\text{reg} E_\\text{reg} \\\\\n\t&E_\\text{sparse} = \\sum_{(i, j) \\in S} \\| \\hat{\\mathbf{D}}(\\mathbf{x}_i) - \\mathbf{x}_j \\|_2^2 \\nonumber\\\\ \n\t&E_\\text{dense} = \\sum_r \\sum_s \\sum_i C_i \\cdot (1 - \\rho_{r, s}(\\hat{\\mathbf{D}}(\\mathbf{x}_i), \\hat{\\mathbf{D}}(\\mathbf{n}_i), \\mathbf{x}_i, \\mathbf{n}_i))^2 \\nonumber \\\\\n\t&E_\\text{reg} = \\sum_{j=1}^m \\sum_{k \\in N(j)} \\| \\mathbf{R}_j (\\mathbf{g}_k - \\mathbf{g}_j) + \\mathbf{g}_j + \\mathbf{t}_j - (\\mathbf{g}_k + \\mathbf{t}_k) \\|_2^2 \\nonumber\n\t\\end{align}\n\t\n\tTo measure photometric consistency $\\rho_{r,s}$ between a reference image $r$, i.e.~the canonical pose, and a source view $s$, we use the bilaterally weighted adaption of normalized cross-correlation (NCC) as defined by Schoenberger et al. \\cite{schoenberger2016mvs}. Throughout our pipeline, we employ COLMAP's default settings, i.e. a window of size $11 \\times 11$. \n\tThe regularizer $E_\\text{reg}$ as defined in \\cite{sumner2007embedded} ensures a smooth deformation result.\n\tTo ensure non-local convergence, we solve the problem in a coarse-to-fine manner using an image pyramid with 3 levels in total. \n\t\n\tBoth the sparse and dense matches are subject to outliers. 
\n\tIn the sparse case, these outliers manifest as incorrect keypoint matches across images.\n\tFor the dense part, outliers mainly occur due to occlusions, either because of the camera pose or because of the observed deformation.\n\t\n\tTo reject outliers in both cases, we reject correspondences with the highest residuals calculated from the result of the non-linear solution.\n\tWe re-run the optimization until a user-specified maximum error is satisfied.\n\tThis rejection is run in a 2-step process.\n\tFirst, we only solve for the deformation considering the sparse 3D-3D matches.\n\tSecond, we fix the retained 3D-3D matches and solve the joint optimization problem, discarding only dense correspondences, resulting in a consistency map $C_i \in \{ 0, 1 \}$.\n\t\n\tWe iterate this process (starting with the correspondence association) until we reach convergence.\n\tIn our experiments, we found that 3 to 5 iterations suffice to ensure a converged state.\n\t\n\tTo estimate the depth for the currently processed view, we then run a modified, non-rigid variant of COLMAP's PatchMatch \cite{schoenberger2016mvs}.\n\tInstead of simple homography warping, we apply the deformation to the point and its normal.\n\t\n\t\begin{figure}[t]\n\t\t\def2.1cm{0.42\linewidth}\n\t\t\centering\n\t\t\subfloat[Iteration 1]{\includegraphics[width=2.1cm]{concept12_newa}}\n\t\t\hfill\n\t\t\subfloat[Iteration 2]{\includegraphics[width=2.1cm]{concept12_newb}}\n\t\t\caption{\textbf{Sparse correspondence association}: In iteration $i$, we transform the 3D point $\mathbf{x}_0$ according to the previous estimate of the deformation $\mathbf{D}_2^{(i-1)}$ and project $\mathbf{D}_2^{(i - 1)}(\mathbf{x}_0)$ onto the ray defined by $\mathbf{u}_2$. The projection is used to define a force $F$ pulling the point towards the ray.}\n\t\t\label{fig:concept1}\n\t\t\vspace{-.3cm}\n\t\end{figure}\n\t\n\t\n\t\subsection{Implementation Details}\n\t\label{sec:implementation}\n\tIn this section, we provide more details on our implementation of the NRMVS framework (Fig.~\ref{fig:overview}).\n\tAlgorithm~\ref{alg:algorithm1} shows the overall method, introduced in Sec.~\ref{sec:deformation_model} and Sec.~\ref{sec:photometric-conssitency}.\n\t\n\tGiven input RGB images, we first pre-process the input.\n\tTo estimate the camera pose for the images, we use the SfM implementation of Agisoft Photoscan \cite{photoscan}. Our tests showed accurate results for scenes containing at least 60\% static background. A recent study~\cite{Lv18eccv} shows that scenes with 60$\sim$90\% static regions yield less than $0.02$ degree RPE~\cite{Sturm12iros} error for standard pose estimation techniques (see more discussion in the appendix~\ref{sec:pose}).\n\tGiven the camera pose, we triangulate sparse SIFT matches \cite{lowe1999object}, i.e.,~we compute the 3D position of the associated point by minimizing the reprojection error. 
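A minimal sketch of this triangulation step is given below; it is hypothetical code (not our actual implementation) that uses the linear DLT solution as a common surrogate for full reprojection-error minimization, assuming $3\times 4$ projection matrices for the two views. The resulting reprojection residuals drive the inlier test described next.
\begin{verbatim}
import numpy as np

def triangulate_dlt(P1, P2, u1, u2):
    # Linear (DLT) two-view triangulation of one feature track;
    # P1, P2 are 3x4 projection matrices, u1, u2 pixel coordinates.
    A = np.stack([u1[0] * P1[2] - P1[0],
                  u1[1] * P1[2] - P1[1],
                  u2[0] * P2[2] - P2[0],
                  u2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def reprojection_error(P, X, u):
    # Pixel distance between the projection of X and the detected keypoint u.
    x = P @ np.append(X, 1.0)
    return np.linalg.norm(x[:2] / x[2] - np.asarray(u))
\end{verbatim}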
\n\tWe consider matches with a reprojection error of less than 1 pixel to be successfully reconstructed (static inliers).\n\tThe ratio of static inliers to the number of total matches is a simple yet effective indication of the non-rigidity in the scene.\n\tWe pick the image pair with the highest ratio to indicate the minimum amount of deformation and use these as the canonical views to bootstrap our method.\n\t\n\tTwo important aspects of our main iteration are described in more detail:\n\tOur method to filter sparse correspondences (line 16 in Algorithm~\\ref{alg:algorithm1}) is given in Algorithm~\\ref{alg:filter}.\n\tThe hierarchical optimization algorithm (line 17 in Algorithm~\\ref{alg:algorithm1}) including filtering for dense correspondences is given in Algorithm~\\ref{alg:optimization}.\n\t\n\tThe joint optimization in our framework is a computationally expensive task. \n\tThe deformation estimation, which strongly dominates the overall run-time, is CPU intensive, while the depth computation runs on the GPU. \n\tSpecifically, for the face example shown in Fig.~\\ref{fig:teaser} (6 images with 100 deformation nodes) the computation time needed is approximately six hours (Intel i7-6700 3.4 GHz, NVIDIA GTX 980Ti).\n\tMore details about the computational expense will be discussed in the appendix~\\ref{sec:performance}.\n\t\n\t\\begin{algorithm}\n\t\t\\KwData{RGB input images $\\{ \\mathbf{I}_k \\}$}\n\t\t\\KwResult{Deformations $\\{ \\mathbf{D}_k \\}$, depth $\\{ d_k \\}$}\n\t\t$P := \\{ 1, \\ldots, k \\}, Q := \\emptyset$ \\;\n\t\t$\\{ \\mathbf{C}_k \\}$ = PhotoScanEstimateCameraPoses()\\;\n\t\t$(i, j)$ = selectCanonicalViews()\\;\n\t\t$(d_i^{(0)}, \\mathbf{n}_i^{(0)}, d_j^{(0)}, \\mathbf{n}_j^{(0)})$ = ColmapPatchMatch($\\mathbf{I}_i, \\mathbf{I}_j$)\\;\n\t\t$\\mathbf{D}_i^{(0)} = \\mathbf{D}_j^{(0)}$ = initDeformationGraph($d_i^{(0)}, d_j^{(0)}$)\\;\n\t\t$\\{ \\mathbf{u}_k \\}$ = computeFeatureTracks()\\;\n\t\t$Q := Q \\cup \\{ i, j \\}$\\;\n\t\t\\While{$Q \\neq P$}{\n\t\t\t$l$ = nextImage($P \\setminus Q$)\\;\n\t\t\t$\\{ \\mathbf{x}_k \\}$ = liftKeyPointsTo3D($\\{ \\mathbf{u}_k \\}_{k \\in Q}$) \\;\n\t\t\t$\\{ \\mathbf{x}_i \\} = \\mathbf{D}_k^{-1}(\\{ \\mathbf{x}_k \\})$ \\;\n\t\t\t$\\mathbf{D}_l^{(1)} = \\mathbf{Id}$ \\;\n\t\t\t\\For{$m = 1$ \\KwTo $N$}{\n\t\t\t\t$\\{ \\mathbf{\\hat{x}}_l^{(m)} \\} = \\mathbf{D}_l^{(m)}(\\{ \\mathbf{x}_i \\})$ \\;\n\t\t\t\t$\\{ \\mathbf{x}_l^{(m)} \\}$ = projToRays($\\{ \\mathbf{\\hat{x}}_l^{(m)} \\}, \\{ \\mathbf{u}_l \\})$ \\;\n\t\t\t\t$\\{ (\\mathbf{\\tilde{x}}_i, \\mathbf{\\tilde{x}}_l^{(m)}) \\}$ = filter($\\mathbf{D}_l^{(m)}, \\{ (\\mathbf{x}_i, \\mathbf{x}_l^{(m)}) \\}$)\\;\n\t\t\t\t$\\mathbf{D}_l^{(m + 1)}$ = solve($\\mathbf{D}_l^{(m)}, \\{ (\\mathbf{\\tilde{x}}_i, \\mathbf{\\tilde{x}}_l^{(m)}) \\}$, $\\mathbf{I}_i, d_i^{(0)}, \\mathbf{n}_i^{(0)}, \\mathbf{I}_j, d_j^{(0)}, \\mathbf{n}_j^{(0)}, \\mathbf{I}_l$)\n\t\t\t}\n\t\t\t$\\mathbf{D}_l = \\mathbf{D}_l^{(m + 1)}$ \\;\n\t\t\t$Q := Q \\cup \\{ l \\}$\\;\n\t\t\t$(d_l^{(0)}, \\mathbf{n}_l^{(0)})$ = NRPatchMatch($\\{ \\mathbf{I}_k, \\mathbf{D}_k \\}_{k \\in Q}$)\\;\n\t\t}\n\t\t$\\{ (d_k, \\mathbf{n}_k) \\}_{k \\in Q}$ = NRPatchMatch($\\{ \\mathbf{I}_k, \\mathbf{D}_k \\}_{k \\in Q}$)\\;\n\t\t\\caption{Non-rigid multi-view stereo}\n\t\t\\label{alg:algorithm1}\n\t\\end{algorithm}\n\t\\vspace{-0.2cm}\n\t\\begin{algorithm}\n\t\t\\SetKwFunction{FMain}{filter}\n\t\t\\SetKwProg{Fn}{Function}{:}{}\n\t\t\\KwData{Threshold $d_\\text{max}$, Ratio $\\tau \\in (0, 1)$}\n\t\t\\Fn{\\FMain{$\\mathbf{D}_l, 
\\{ (\\mathbf{x}_i, \\mathbf{x}_l) \\}$}}{\n\t\t\t\\While{true}{\n\t\t\t\t$\\mathbf{D}_l^\\ast$ = solve($\\mathbf{D}_l, \\{ (\\mathbf{x}_i, \\mathbf{x}_l) \\}$)\\;\n\t\t\t\t$\\{ r_k \\} = \\{ \\| \\mathbf{D}_l^\\ast(\\mathbf{x}_i) - \\mathbf{x}_l \\|_2 \\}$\\;\n\t\t\t\t$e_\\text{max} = \\max \\{ r_k \\} $\\;\n\t\t\t\t\\If{$e_\\text{max} < d_\\text{max}$}{\n\t\t\t\t\tbreak\\;\n\t\t\t\t}\n\t\t\t\t$d_\\text{cut} := \\max \\{ d_\\text{max}, \\tau \\cdot e_\\text{max} \\}$ \\;\n\t\t\t\t$\\{ (\\mathbf{x}_i, \\mathbf{x}_l) \\} := \\{ (\\mathbf{x}_i, \\mathbf{x}_l) : r_k < d_\\text{cut} \\} $\\;\n\t\t\t}\n\t\t\t\\Return $\\{ (\\mathbf{x}_i, \\mathbf{x}_l) \\}$\\;\n\t\t}\n\t\t\\caption{Filtering of sparse correspondences}\n\t\t\\label{alg:filter}\n\t\\end{algorithm}\n\t\\vspace{-0.2cm}\n\t\\begin{algorithm}\n\t\t\\SetKwFunction{FMain}{solve}\n\t\t\\SetKwProg{Fn}{Function}{:}{}\n\t\t\\KwData{Threshold $\\rho_\\text{max}$, Ratio $\\tau \\in (0, 1)$}\n\t\t\\Fn{\\FMain{$\\mathbf{D}_l, \\{ (\\mathbf{x}_i, \\mathbf{x}_l) \\}, \\mathbf{I}_i, d_i, \\mathbf{n}_i, \\mathbf{I}_j, d_j, \\mathbf{n}_j, \\mathbf{I}_l$}}{\n\t\t\t$\\hat{\\mathbf{D}}_l = \\mathbf{D}_l$\\;\n\t\t\t\\For{$m = 1$ \\KwTo levels}{\n\t\t\t\t$\\rho_\\text{cut} := \\tau \\cdot (1 - \\text{NCC}_\\text{min}) = \\tau \\cdot 2$ \\;\n\t\t\t\t$C_p := 1 \\quad \\forall p$\\;\n\t\t\t\t\\While{true}{\n\t\t\t\t\t$\\mathbf{D}_l^\\ast$ = solveEq1($\\hat{\\mathbf{D}_l}$) \\;\n\t\t\t\t\t$\\{ r_p \\} = \\{ C_p \\cdot~(1~-~\\rho(\\mathbf{D}_l^\\ast(\\mathbf{x}_p), \\mathbf{D}_l^\\ast(\\mathbf{n}_p), \\mathbf{x}_p, \\mathbf{n}_p)) \\}$\\;\n\t\t\t\t\t$e_\\text{max} = \\max \\{ r_p \\} $\\;\n\t\t\t\t\t\\If{$e_\\text{max} < \\rho_\\text{max}$}{\n\t\t\t\t\t\t$\\hat{\\mathbf{D}}_l := \\mathbf{D}_l^\\ast$\\;\n\t\t\t\t\t\tbreak\\;\n\t\t\t\t\t}\n\t\t\t\t\t\\If{$m = levels$}{\n\t\t\t\t\t\t$\\hat{\\mathbf{D}}_l := \\mathbf{D}_l^\\ast$\\;\n\t\t\t\t\t}\n\t\t\t\t\t$C_p := 0 \\quad \\forall p : r_p > \\rho_\\text{cut}$\\;\n\t\t\t\t\t$\\rho_\\text{cut} := \\max \\{ \\rho_\\text{max}, \\tau \\cdot \\rho_\\text{cut} \\}$\n\t\t\t\t}\n\t\t\t}\n\t\t\t\\Return $\\hat{\\mathbf{D}}_l$\n\t\t}\n\t\t\\caption{Solving the joint problem}\n\t\t\\label{alg:optimization}\n\t\\end{algorithm}\n\t\\vspace{-0.2cm}\n\t\n\t\n\t\\begin{table*}[t]\n\t\t\\caption{\\textbf{Evaluation for ground truth data:} (a) using COLMAP, i.e., assuming a static scene, (b) applying our dense photometric optimization on top of an implementation of non-rigid ICP (NRICP), and (c) using different variants of our algorithm. S denotes \\emph{sparse}, D denotes \\emph{dense}, photometric objective. $N$ equals the number of iterations for sparse correspondence association (see paper for more details). We compute the mean relative error (MRE) for all reconstructed values as well as the overall completeness. 
The last row (w\/o filter) shows the MRE, with disabled rejection of outlier depth values, i.e., a completeness of 100 \\%.\n\t\t}\n\t\t\\vspace{-.2cm}\n\t\t\\begin{tabularx}{\\textwidth}{@{}rccccccc@{}}\n\t\t\t\\toprule\n\t\t\t& & \\multicolumn{6}{c}{Ours} \\\\\n\t\t\t\\cmidrule{4-8}\n\t\t\t& COLMAP \\cite{schoenberger2016mvs} & NRICP~\\cite{Li09} & S ($N=1$) & S ($N=10$) & D & S ($N=1$) + D & S ($N=10$) + D \\\\ \n\t\t\t\\midrule \n\t\t\tCompleteness & 68.74 \\% & 99.30 \\% & 97.24 \\% & 97.71 \\% & 96.41 \\% & 98.76 \\% & \\textbf{98.99} \\% \\\\\n\t\t\t\\hline\n\t\t\tMRE & 2.11 \\% & 0.53 \\% & 1.48 \\% & 1.50 \\% & 2.37 \\% & 1.12 \\% & \\textbf{1.11} \\% \\\\ \n\t\t\t\\hline \n\t\t\t\\hline\n\t\t\tMRE w\/o filter & 6.78 \\% & 0.74 \\% & 2.16 \\% & 2.05 \\% & 3.32 \\% & 1.63 \\% & \\textbf{1.34} \\% \\\\ \n\t\t\t\\bottomrule \n\t\t\\end{tabularx} \n\t\t\\label{tab:eval}\n\t\t\\vspace{-0.2cm}\n\t\\end{table*}\n\t\n\t\n\t\\begin{figure*}\n\t\t\\centering\n\t\t\\newlength\\exlen\n\t\t\\setlength\\exlen{.15\\linewidth}\n\t\t\\def1.5cm{1.5cm}\n\t\t\\captionsetup[subfloat]{labelformat=empty}\n\t\n\t\t\\subfloat{\\parbox[t]{.02\\linewidth}{\\begin{sideways}\\centering \\footnotesize \\, \\, \\, \\, Input\\end{sideways}}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=11cm 7cm 11cm 4cm, clip=true,width=\\exlen]{figures\/gt2\/Cam0_0}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=12cm 6cm 10cm 5cm, clip=true,width=\\exlen]{figures\/gt2\/Cam0_1}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=7.7cm 4.9cm 7.7cm 2.2cm, clip=true,width=\\exlen]{figures\/gt2\/Cam4_6}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=12.5cm 5.5cm 9.5cm 5.5cm, clip=true,width=\\exlen]{figures\/gt2\/Cam7_1}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=11cm 7cm 11cm 4cm, clip=true,width=\\exlen]{figures\/gt2\/Cam8_2}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=11cm 7cm 11cm 4cm, clip=true,width=\\exlen]{figures\/gt2\/Cam13_2}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[height=1.5cm]{gt_eval_colorbar_dummy}} \\\\\n\t\t\\vspace{-0.3cm}\n\t\n\t\t\\subfloat{\\parbox[t]{.02\\linewidth}{\\begin{sideways}\\centering \\footnotesize \\quad \\, 3D points\\end{sideways}}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=9cm 7cm 8cm 5cm, clip=true,width=\\exlen]{figures\/gt2\/snapshot0_0}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=10cm 6cm 7cm 6cm, clip=true,width=\\exlen]{figures\/gt2\/snapshot0_1}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=9cm 6cm 8cm 6cm, clip=true,width=\\exlen]{figures\/gt2\/snapshot4_6}} %\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=9cm 6cm 8cm 6cm, clip=true,width=\\exlen]{figures\/gt2\/snapshot7_1-2}} %\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=8.5cm 6cm 8.5cm 6cm, clip=true,width=\\exlen]{figures\/gt2\/snapshot8_2}} %\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=7cm 6cm 10cm 6cm, clip=true,width=\\exlen]{figures\/gt2\/snapshot13_2}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[height=1.5cm]{gt_eval_colorbar_dummy}} \\\\ %\n\t\t\\vspace{-0.3cm}\n\t\n\t\t\\subfloat{\\parbox[t]{.02\\linewidth}{\\begin{sideways}\\centering \\footnotesize Relative error \\end{sideways}}}\n\t\t\\hfill\n\t\t\\subfloat[\\scriptsize{[0.51 \\% \/\/ 97.4 \\%]}]{\\includegraphics[trim=5.5cm 3.5cm 5.5cm 2cm, clip=true,width=\\exlen]{figures\/gt2\/error_rel_depth_0_0}}\n\t\t\\hfill\n\t\t\\subfloat[\\scriptsize{[0.08 \\% \/\/ 100 \\%]}]{\\includegraphics[trim=6cm 3cm 5cm 2.5cm, 
clip=true,width=\\exlen]{figures\/gt2\/error_rel_depth_0_1}}\n\t\t\\hfill\n\t\t\\subfloat[\\scriptsize{[1.23 \\% \/\/ 99.25 \\%]}]{\\includegraphics[trim=3.85cm 2.45cm 3.85cm 1.1cm, clip=true,width=\\exlen]{figures\/gt2\/error_rel_depth_4_6}}\n\t\t\\hfill\n\t\t\\subfloat[\\scriptsize{[1.31 \\% \/\/ 99.44 \\%]}]{\\includegraphics[trim=6.25cm 2.75cm 4.75cm 2.75cm, clip=true,width=\\exlen]{figures\/gt2\/error_rel_depth_7_1}}\n\t\t\\hfill\n\t\t\\subfloat[\\scriptsize{[0.97 \\% \/\/ 99.98 \\%]}]{\\includegraphics[trim=5.5cm 3.5cm 5.5cm 2cm, clip=true,width=\\exlen]{figures\/gt2\/error_rel_depth_8_2}}\n\t\t\\hfill\n\t\t\\subfloat[\\scriptsize{[1.00 \\% \/\/ 99.14 \\%]}]{\\includegraphics[trim=5.5cm 3.5cm 5.5cm 2cm, clip=true,width=\\exlen]{figures\/gt2\/error_rel_depth_13_2}} \n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[height=1.5cm]{gt_eval_colorbar}}\n\t\t\\caption{\\textbf{Quantitative evaluation with synthetic data:} We created images of a deforming surface with 10 different views. For the evaluation, we randomly chose six examples from the set and reconstructed the surface. The first row shows the input images. The first two columns show the chosen canonical views. The results of the reconstructed surface (with the point cloud propagated to each view) are shown in the second row. In the third row, we visualize the relative depth error compared to the ground truth. We also show the mean relative depth error value (\\%) and the completeness (\\%). The overall quantitative evaluation including a comparison to other baselines are shown in Table~\\ref{tab:eval}.}\n\t\t\\label{fig:results-synth}\n\t\t\\vspace{-.2cm}\n\t\\end{figure*}\n\t\n\t\\begin{figure*}\n\t\t\\def\\vspace{-1em}{\\vspace{-1em}}\n\t\t\\def\\vspace{-0.2cm}{\\vspace{-0.2cm}}\n\t\t\\centering\n\t\n\t\t\\subfloat{\\parbox[t]{.02\\linewidth}{\\begin{sideways}\\centering \\footnotesize \\qquad Input images\\end{sideways}}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=3cm 0.6cm 1.8cm 0cm, clip=true,width=.16\\linewidth]{figures\/res-kihwan1\/1.jpg}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=2cm 0.3cm 2.8cm 0.3cm, clip=true,width=.16\\linewidth]{figures\/res-kihwan1\/0.jpg}} \n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=2.5cm 0.6cm 2.3cm 0cm, clip=true,width=.16\\linewidth]{figures\/res-kihwan1\/2.jpg}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=2.5cm 0.3cm 2.3cm 0.3cm, clip=true,width=.16\\linewidth]{figures\/res-kihwan1\/3.jpg}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=2.5cm 0.6cm 2.3cm 0cm, clip=true,width=.16\\linewidth]{figures\/res-kihwan1\/4.jpg}} \n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=2.5cm 0.6cm 2.3cm 0cm, clip=true,width=.16\\linewidth]{figures\/res-kihwan1\/5.jpg}} \n\t\t\\\\\n\t\t\\vspace{-1em}\n\t\t\\subfloat{\\parbox[t]{.02\\linewidth}{\\begin{sideways}\\centering \\footnotesize \\qquad \\, 3D point cloud\\end{sideways}}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=14cm 6cm 13cm 4cm, clip=true,width=.16\\linewidth]{figures\/canonical\/canno-kihwan2.jpg}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=14cm 6cm 13cm 4cm, clip=true,width=.16\\linewidth]{figures\/res-kihwan1\/snapshot00_L00.jpg}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=14cm 6cm 13cm 4cm, clip=true,width=.16\\linewidth]{figures\/res-kihwan1\/snapshot00_L01.jpg}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=14cm 6cm 13cm 4cm, clip=true,width=.16\\linewidth]{figures\/res-kihwan1\/snapshot00_L02.jpg}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=14cm 6cm 13cm 4cm, 
clip=true,width=.16\\linewidth]{figures\/res-kihwan1\/snapshot00_L03.jpg}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=14cm 6cm 13cm 4cm, clip=true,width=.16\\linewidth]{figures\/res-kihwan1\/snapshot00_L04.jpg}} \n\t\t\\\\\n\t\t\\vspace{-0.2cm}\n\t\n\t\t\\subfloat{\\parbox[t]{.02\\linewidth}{\\begin{sideways}\\centering \\footnotesize \\, \\, Input images\\end{sideways}}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=1cm 0cm 1cm 0cm, clip=true,width=.16\\linewidth]{figures\/globe-v3\/g1}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=1cm 0cm 1cm 0cm, clip=true,width=.16\\linewidth]{figures\/globe-v3\/g2}} \n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=1cm 0cm 1cm 0cm, clip=true,width=.16\\linewidth]{figures\/globe-v3\/g3}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=1cm 0cm 1cm 0cm, clip=true,width=.16\\linewidth]{figures\/globe-v3\/g4}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=1cm 0cm 1cm 0cm, clip=true,width=.16\\linewidth]{figures\/globe-v3\/g5}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=1cm 0cm 1cm 0cm, clip=true,width=.16\\linewidth]{figures\/globe-v3\/g6}} \n\t\t\\\\\n\t\t\\vspace{-1em}\n\t\t\\subfloat{\\parbox[t]{.02\\linewidth}{\\begin{sideways}\\centering \\footnotesize \\, 3D point cloud \\end{sideways}}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=12cm 8cm 12cm 8cm, clip=true,width=.16\\linewidth]{figures\/globe-v3\/globe-cano}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=12cm 8cm 12cm 8cm, clip=true,width=.16\\linewidth]{figures\/globe-v3\/globe-1}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=12cm 8cm 12cm 8cm, clip=true,width=.16\\linewidth]{figures\/globe-v3\/globe-2}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=12cm 8cm 12cm 8cm, clip=true,width=.16\\linewidth]{figures\/globe-v3\/globe-3}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=12cm 8cm 12cm 8cm, clip=true,width=.16\\linewidth]{figures\/globe-v3\/globe-4-2}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=12cm 8cm 12cm 8cm, clip=true,width=.16\\linewidth]{figures\/globe-v3\/globe-5-2}}\n\t\t\\\\\n\t\t\\vspace{-0.2cm}\n\t\n\t\t\\subfloat{\\parbox[t]{.02\\linewidth}{\\begin{sideways}\\centering \\footnotesize \\quad\\ Input images\\end{sideways}}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=2.4cm 0.5cm 1.6cm 0.5cm, clip=true,width=.16\\linewidth]{figures\/shirt-v2\/s1}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=2.3cm 0.5cm 1.7cm 0.5cm, clip=true,width=.16\\linewidth]{figures\/shirt-v2\/s2}} \n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=2.2cm 0.5cm 1.8cm 0.5cm, clip=true,width=.16\\linewidth]{figures\/shirt-v2\/s3}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=2cm 0.5cm 2cm 0.5cm, clip=true,width=.16\\linewidth]{figures\/shirt-v2\/s5}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=2.5cm 0.5cm 1.5cm 0.5cm, clip=true,width=.16\\linewidth]{figures\/shirt-v2\/s6}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=2.4cm 1cm 1.6cm 0cm, clip=true,width=.16\\linewidth]{figures\/shirt-v2\/s7}}\n\t\t\\\\\n\t\t\\vspace{-1em}\n\t\n\t\t\\subfloat{\\parbox[t]{.02\\linewidth}{\\begin{sideways}\\centering \\footnotesize 3D point cloud \\end{sideways}}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=11cm 5cm 8cm 5cm, clip=true,width=.16\\linewidth]{figures\/canonical\/shirt-cano}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=10cm 5cm 9cm 5cm, clip=true,width=.16\\linewidth]{figures\/shirt-v2\/snapshot00_L00}} \n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=10cm 5cm 9cm 5cm, 
clip=true,width=.16\\linewidth]{figures\/shirt-v2\/snapshot00_L01}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=11cm 5cm 8cm 5cm, clip=true,width=.16\\linewidth]{figures\/shirt-v2\/snapshot00_L03}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=10cm 5cm 9cm 5cm, clip=true,width=.16\\linewidth]{figures\/shirt-v2\/snapshot00_L04}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=10cm 5cm 9cm 5cm, clip=true,width=.16\\linewidth]{figures\/shirt-v2\/snapshot00_L05}}\n\t\t\\\\\n\t\t\\vspace{-0.2cm}\n\t\n\t\t\\subfloat{\\parbox[t]{.02\\linewidth}{\\begin{sideways}\\centering \\footnotesize \\quad Input images\\end{sideways}}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=2.5cm 0.5cm 1.5cm 0.5cm, clip=true,width=.16\\linewidth]{figures\/paper\/0_0}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=2.5cm 0.5cm 1.5cm 0.5cm, clip=true,width=.16\\linewidth]{figures\/paper\/0_1}} \n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=2cm 0cm 2cm 1cm, clip=true,width=.16\\linewidth]{figures\/paper\/1}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=2.3cm 0.8cm 1.7cm 0.2cm, clip=true,width=.16\\linewidth]{figures\/paper\/2}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=2cm 0cm 2cm 1cm, clip=true,width=.16\\linewidth]{figures\/paper\/3}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=2cm 0.8cm 2cm 0.2cm, clip=true,width=.16\\linewidth]{figures\/paper\/4}}\n\t\t\\\\\n\t\t\\vspace{-1em}\n\t\t\\subfloat{\\parbox[t]{.02\\linewidth}{\\begin{sideways}\\centering \\footnotesize 3D point cloud \\end{sideways}}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=10cm 5cm 9cm 9cm, clip=true,width=.16\\linewidth]{figures\/paper\/canno}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=10cm 5cm 9cm 9cm, clip=true,width=.16\\linewidth]{figures\/paper\/p1}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=10cm 8cm 9cm 6cm, clip=true,width=.16\\linewidth]{figures\/paper\/p3}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=10cm 9cm 9cm 5cm, clip=true,width=.16\\linewidth]{figures\/paper\/p4}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=10cm 6cm 9cm 8cm, clip=true,width=.16\\linewidth]{figures\/paper\/p5}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[trim=10cm 8.5cm 9cm 5.5cm, clip=true,width=.16\\linewidth]{figures\/paper\/p6}}\n\t\t\n\t\t\\caption{\\textbf{Qualitative evaluation with real data}: In each row, the first two columns show the views used to create each canonical surface. The first column of each result row (even row) shows the original canonical surface. The remaining views from the second column of each result row shows the propagated version of reconstructed surfaces for each view. }\n\t\t\\label{fig:results}\n\t\t\\vspace{-.2cm}\n\t\\end{figure*}\n\t\n\n\t\\begin{figure}\n\t\t\\centering\n\t\n\t\t\\includegraphics[clip=true,width=.9\\linewidth]{figures\/interpolation1\/eye-interp-final}\n\t\t\\includegraphics[clip=true,width=.9\\linewidth]{figures\/interpolation1\/globe-interp-final}\n\t\t\\parbox[h]{.29\\linewidth}{\\centering \\scriptsize Source}\n\t\t\\parbox[h]{.29\\linewidth}{\\centering \\scriptsize Intermediate scene}\n\t\t\\parbox[h]{.29\\linewidth}{\\centering \\scriptsize Target}\n\t\t\\caption{\\textbf{Dynamic 3D scene interpolation with in-between deformations:} We interpolate a point cloud between two reconstructed views from their depth and deformation. 
We show two key-frames, source and target, denoted as red and yellow frames respectively, and then demonstrate the interpolated \\emph{intermediate point cloud} in the middle column. For the top row, zoomed in-set images of the eye region show how the deformation is applied to the intermediate point cloud. More interpolated frames and 4D animations created from our deformation estimation are shown in the supplementary video with various views\\textsuperscript{\\ref{ftn:video}}.}\n\t\t\\label{fig:results-interpolation}\n\t\t\\vspace{-.2cm}\n\t\\end{figure}\n\t\n\t\\section{Evaluation}\n\tFor existing non-rigid structure from motion methods, different types of datasets are used to evaluate sparse points~\\cite{Jensen18, Dai17}, and dense video frames with small baseline (including actual camera view variation)~\\cite{Ansari17}. \n\tSince our problem formulation is intended for dense reconstruction of scenes with sufficient variation in both camera view and deformation, there are only few examples applicable to our scenario~\\cite{Li13, Wang15, Innmann16eccv}. Unfortunately, these datasets are either commercial and not available~\\cite{Li13}, or only exhibit rigid changes~\\cite{Wang15}. Few depth-based approaches share the input RGB as well~\\cite{Innmann16eccv}, but the quality of the images is not sufficient for our method (i.e., severe motion blur, low resolution (VGA) that does not provide sufficient detail for capturing non-rigid changes). \n\tThus, we created both synthetic data and captured real-world examples for the evaluation.\n\tTo quantitatively evaluate how our method can accurately capture a plausible deformation and reconstruct each scene undergoing non-rigid changes, we rendered several synthetic scenes with non-rigid deformations as shown in the first row of Fig.~\\ref{fig:results-synth}. \n\tWe also captured several real-world scenes containing deforming surfaces from different views at different times. Some examples (face, rubber globe, cloth and paper) appear in Fig.~\\ref{fig:results}, and several more viewpoints are contained in the supplementary video\\footnote{\\url{https:\/\/youtu.be\/et_DFEWeZ-4}\\label{ftn:video}}.\n\t\n\t\\subsection{Quantitative Evaluation with Synthetic Data}\n\tFirst, we evaluate the actual depth errors of the reconstructed depth of each time frame (i.e., propagated\/refined to a specific frame), and of the final refined depth of the \\emph{canonical view}. \n\tBecause we propose the challenging new problem of reconstructing non-rigid dynamic scenes from a small set of images, it is not easy to find other baseline methods. \n\tThus, we conduct the evaluation with an existing MVS method, COLMAP~\\cite{schoenberger2016mvs}, as a lower bound, and use as an upper bound a non-rigid ICP method similar to Li~et~al.~\\cite{Li09} based on the ground truth depth. \n\tThe non-rigid ICP using the point-to-plane error metric serves as a geometric initialization.\n\tWe refine the deformation using our dense photometric alignment (see Sec.~\\ref{sec:photometric-conssitency}).\n\t\n\tTo compare the influence of our proposed objectives for deformation estimation, i.e. sparse 3D-3D correspondences and dense non-rigid photometric consistency, we evaluate our algorithm with different settings. \n\tThe relative performance of these variants can be viewed as an ablation study. 
\n\tWe perform evaluation on the following variants: 1) considering only the sparse correspondence association using different numbers of iterations (see Sec.~\\ref{sec:photometric-conssitency}), 2) considering only the dense photometric alignment, and 3) the combination of sparse and dense.\n\tThe results of the quantitative evaluation can be found in Table~\\ref{tab:eval}.\n\tAs can be seen, all methods\/variants obtain a mean relative error $< 2.4 \\%$, overall resulting in faithfully reconstructed geometry.\n\tOur joint optimization algorithm considerably improves the reconstruction result both in terms of accuracy (by a factor of $1.9$) and completeness (by $30$ pp, a factor of $1.4$).\n\tAdditionally, we compute the mean relative depth error (MRE) without rejecting outliers; i.e., resulting in depth images with a completeness of 100\\%.\n\t\n\t\\subsection{Qualitative Evaluation with Real Data}\n\tFig.~\\ref{fig:results} shows results of our non-rigid 3D reconstruction. \n\tFor each pair of rows, we show six input images and the corresponding deformed 3D point clouds. \n\tNote that the \\emph{deformed} surfaces belong to the collection of 3D reconstructed points propagated by the computed deformations using the other views as described in Sec.~\\ref{sec:photometric-conssitency}. \n\tThe point cloud of each first column of Fig.~\\ref{fig:results} shows the first canonical surface (triangulated points from two views with minimal deformation).\n\tFor evaluation purposes, we visualize each reconstructed scene from a similar viewpoint as one of the canonical views. \n\tMore viewpoints of the reconstructed 3D results can be found in the supplementary video\\textsuperscript{\\ref{ftn:video}}.\n\t\n\t\\subsection{Dynamic Scene Interpolation} \n\tSince we estimate deformations between each view and the canonical surface, once all deformation pairs have been created, we can easily interpolate the non-rigid structure.\n\tTo blend between the deformations, we compute interpolated deformation graphs by blending the rigid body transform at each node using dual-quaternions \\cite{kavan2007skinning}.\n\t\n\tIn Fig.~\\ref{fig:results-interpolation}, we show interpolated results from reconstructed scenes of the face example and the globe example shown in Fig.~\\ref{fig:results}. \n\tThe scene deformations used for this interpolation (like key-frames) are framed in as red and yellow. \n\tNote that even though the estimated deformation is defined between each view and the canonical pose, any combination of deformation interpolation is possible. \n\tMore examples and interpolated structures from various viewpoints can be found in the supplementary video\\textsuperscript{\\ref{ftn:video}}.\n\t\n\t\\begin{figure}[t]\n\t\t\\centering\n\t\t\\def2.1cm{2.1cm}\n\t\t\\vspace{-0.3cm}\n\t\t\\subfloat[Bad canonical views selection]{\\includegraphics[height=2.1cm]{temp-fail}}\n\t\t\\hfill\n\t\t\\subfloat[Ambiguity along view direction]{\\includegraphics[height=2.1cm]{globe_broken}}\n\t\t\\caption{\\textbf{Failure cases:} (a) shows the result of canonical surface reconstruction from two views that are incorrectly selected (large deformation between two views: images in first and third column in top row of Fig.~\\ref{fig:results}). While the camera pose is successfully computed, since there are large portions of non-rigid changes happening in the upper part of face and near the mouth, there are many holes on the face, which is not the best case if we choose this pair. 
(b) shows a failure case when deformation (red circles) occurs along the view direction, which causes the ambiguity.}\n\t\t\\label{fig:failure}\n\t\t\\vspace{-.2cm}\n\t\\end{figure}\n\t\n\t\n\t\n\t\\section{Conclusion and Discussion}\n\tWe propose a challenging new research problem for dense 3D reconstruction of scenes containing deforming surfaces from sparse, wide-baseline RGB images. \n\tAs a solution, we present a joint optimization technique that optimizes over depth, appearance, and the deformation field in order to model these non-rigid scene changes. \n\tWe show that an MVS solution for non-rigid change is possible, and that the estimated deformation field can be used to interpolate motion in-between views.\n\t\n\tIt is also important to point out the limitations of our approach (Fig.~\\ref{fig:failure}). We first assume that there is at least one pair of images that has minimal deformation for the initial canonical model. \n\tThis can be interpreted as the first step used by many SLAM or 3D reconstruction algorithms for the initial triangulation. \n\tFig.~\\ref{fig:failure}(a) shows an example of a canonical surface created from two views that contain too much deformation, only leading to a partial triangulation. \n\tFig.~\\ref{fig:failure}(b) shows an example where the deformation occurs mostly along the view direction. \n\tWhile we successfully estimate the deformation and reconstruct a similar example shown in Fig.~\\ref{fig:results}, depending on the view this can cause an erroneous estimation of the deformation.\n\tOn the other hand, we believe that recent advances in deep learning-based approaches to estimate depth from single RGB input~\\cite{Fu2018DeepOR} or learning local rigidity~\\cite{Lv18eccv} for rigid\/non-rigid classification can play a key role for both the initialization and further mitigation of these ambiguities. \n\t\n\t{\\small\n\t\t\\bibliographystyle{ieee}\n\t\t","meta":{"redpajama_set_name":"RedPajamaArXiv"}}