diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzqbxk" "b/data_all_eng_slimpj/shuffled/split2/finalzzqbxk" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzqbxk" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n \\pagenumbering{arabic}\nIn many physical systems, even the\nqualitative features along a line of phase transitions may\ndepend on the choice of the coupling constants which\nparametrize the bare interaction energies. Quite often one\nencounters the case where a continuous higher order\ntransition changes into a discontinuous first-order one. In\nmost cases, the cross-over behaviour is\ncharacterized by a tricritical point (TCP)\\cite{TCP}. In\nfield theoretic language, the origin of a TCP is easy\nto understand on the basis of a Landau expansion of the\neffective potential\nin terms of\na order- (or disorder-) parameter. One source for a\nTCP is the sign-change in the quartic term (assuming a\nstabilizing sextic term) in some range of the parameter\nspace. A positive quartic term leads to a second-order\ntransition, and a negative one to a first-order transition. A\nvanishing quartic term is associated with the TCP. Another\nimportant mechanism arises from the presence of a cubic\nterm. Such a cubic term may be generated,\nin some range of the parameter space, by fluctuations\nof other fields. This always drives the transition\nfirst-order. While the analysis of the Landau expansion\nis very elementary, the\nnon-trivial problem is, of course, its derivation\nfrom the underlying physical system with\nquite complicated microscopic forces (the most prominent\nexample being the Gorkov derivation \\cite{gorkov} of the effective\nGinzburg-Landau theory of superconductivity). It is\ntherefore desirable to find alternative properties of the system which\nallow at least for a semi-quantitative\nunderstanding of the appearance of\nTCP's. In this lecture we exhibit such properties for compact U(1)\nlattice models, having in mind quite different physical\nsystems (D=dimension):\n\n1. D=4 Quantumelectrodynamics (U(1)-LGT)\n\n2. D=3 Superfluid Helium (XY-model)\n\n3. D=3 Defect models of melting\n\nThe common point of these systems is that they can be studied in\n\na)\nthe weak-coupling or ``defect'' expansion. This is formulated\nin terms of lines\\footnote\n{When the dimension is reduced by one, $D \\rightarrow\nD-1$, ``lines'' have to be replaced by ``points''.}\nwith long-range Biot-Savart-like interactions. This is why all three\nsystems can be described, alternatively, by Abelian Higgs models\nin which the complex fields account for the line-like disorder of the systems\nand the gauge fields for the long-range interactions \\cite{book}.\n\nb) The\nstrong-coupling or ``stress'' expansion. This is also formulated in\nterms of geometrical objects (surfaces, lines), but with\nno long-range interactions. This simplifies their\nstatistical behaviour and will provide for the desired alternative\ncriterion for the existence of TCP's.\n\nFor the specific example\nof U(1) lattice gauge theory the different representations,\nbeing dual to each other, and their corresponding\nfield-theoretic descriptions are summarized in\nfig.1. 
The diagrams for the other systems are\ncompletely analogous.\n \\begin{figure}\n \\vspace{7.0cm}\n \\caption[1]{Interrelation between the dual ``defect''\n (weak-coupling) and ``stress''\n (strong-coupling) representations of U(1)\n lattice gauge theory and their equivalent\nfield theoretic descriptions.}\n \\end{figure}\n \\section{The ``Defect'' Representation}\nLet us first look at the weak-coupling expansion of the various\nU(1) lattice models. They all contain topological excitations with\nlong-range interactions caused by the Nambu-Goldstone\nbosons of the U(1) symmetry. In a three-dimensional solid, the\ntopological excitations are line-like defects, dislocations and\ndisclinations \\cite{defect} , and the Nambu-Goldstone bosons\nare elastic\ndistortions (phonons), leading to a stress-field around each\ndefect and to Biot-Savart-like interactions between defect\nelements. In superfluid Helium in three dimensions, the ``defects''\nare vortex-lines \\cite{vortex} and the long-range forces are\ncaused by the superflow. In four-dimensional U(1)-LGT\nfinally, the ``defects'' are world lines of magnetic\nmonopoles \\cite{monopole} and the forces are due to electromagnetism.\nThe upper right box in fig.1 symbolizes the\nequivalent disorder field description of these ``defect'' lines\nin terms of a complex field\n$\\Psi$ interacting with a gauge field $A_{\\mu}$. In\nsuperfluid Helium and U(1)-LGT, the disorder field theory of\nline-like defects turns out \\cite{hk3d,hk4d} to be an\nAbelian Higgs model. In a solid, it is a more complicated field theory\nof a similar type, involving two disorder fields, one for dislocation\nand one for disclination lines \\cite{book}.\n\nLet us briefly recall the\nphysical content of the dual equivalence. From Feynman's\npath-integral representation of quantum mechanics, we know\nthat the statistical mechanics of one\nfluctuating line (``orbit'') corresponds to the quantum\nmechanics of one particle. It is then easy to see that the\ngrand-canonical ensemble of fluctuating lines corresponds to\nthe quantum mechanics of a\nmany-particle system. This in turn is described most conveniently by\na second quantized\nfield theory.\nThe long-range Biot-Savart-like nature of the interactions\nbetween line-elements is what permits their description by\nan Abelian gauge-field\n$A_{\\mu}$. The minimal coupling of $A_{\\mu}$\nto the disorder field $\\Psi$ leads then immediately to an\nAbelian Higgs model (scalar electrodynamics).\nIndeed, in the lattice formulation of\ninteracting defect lines, these steps can be carried out\nrigorously \\cite{hk3d,hk4d}, leading to the Higgs action\n\\begin{equation}\n{\\cal A} = \\int d^{D}\\!x \\left[ \\frac{1}{4}F_{\\mu\\nu}^{2} + \\frac{1}{2}\n|(\\partial\n_{\\mu} - ieA_{\\mu}) \\Psi|^{2} + \\frac{1}{2} m^{2} |\\Psi|^{2} +\n\\frac{1}{4} g |\\Psi|^{4} + \\ldots \\right]\n\\label{eq:1}\n\\end{equation}\nwhere $F_{\\mu\\nu} = \\partial _{\\mu} A_{\\nu} - \\partial _{\\nu}\nA_{\\mu}$, and $e,m^{2},g>0$ can in principle be calculated from\nthe couplings of\nthe original compact U(1) lattice model, indicated by the circle in\nthe center of fig.1. 
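To make the starting point of the following argument explicit, note that for constant $\\Psi$ and frozen gauge fields the action (\\ref{eq:1}) reduces to the tree-level potential\n\\begin{displaymath}\nV_{0}(\\Psi) = \\frac{1}{2} m^{2} |\\Psi|^{2} + \\frac{1}{4} g |\\Psi|^{4} ,\n\\end{displaymath}\nwhich, for $g>0$, describes a continuous transition at $m^{2}=0$, in line with the Landau criteria recalled in the introduction; whether this conclusion survives is decided by the gauge-field fluctuations. 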
According to the famous\nColeman-Weinberg argument \\cite{coleman,halperin},\nthis Abelian Higgs model should\nalways undergo a first-order transition when the mass\nparameter turns negative.\nThe argument goes\nas follows: Assuming $\\Psi = $ const., the gauge fields can\nbe integrated out, yielding a one-loop correction to (\\ref{eq:1})\n\\begin{equation}\n{\\rm tr} \\log (k^{2} + e^{2}|\\Psi|^{2})\n\\label{eq:1.1}\n\\end{equation}\nThis amounts to additional $\\Psi$ interactions of the\nform\n\\begin{equation}\n{\\rm ~~}-|\\Psi|^{3} {\\rm ~~~~~~~~~~in~D=3}\n\\label{eq:1.2}\n\\end{equation}\nand\n\\begin{equation}\n|\\Psi|^{4} \\log |\\Psi| {\\rm ~~~~~~~in~D=4}\n\\label{eq:1.3}\n\\end{equation}\nIn both cases, the additional term should drive the transition\nfirst-order.\n\nIn three\ndimensions, this conclusion is in clear contradiction to the\nwell established fact that the original D=3 XY model has a continuous\nphase transition \\cite{3dxy,dasgupta}. The crucial\npoint in the whole argument is, of course, the assumption of\nan almost constant $\\Psi$-field. If this assumption breaks\ndown, then also the generation of a cubic term (in D=3) is\nno longer reliable and the transition may stay continuous\neven with $A_{\\mu}$-field fluctuations. The decisive parameter is\nthe ratio of the length-scales of\n$A_{\\mu}$- and $\\Psi$- fluctuations,\n\\begin{equation}\n\\kappa = \\frac{1}{\\sqrt{2}} \\frac{{\\rm penetration~depth }(A_{\\mu})}\n{{\\rm coherence~length }\n(\\Psi)}\n\\label{eq:1.4}\n\\end{equation}\nIn three dimensions, it was possible to show that there\nexists a tricritical value of $\\kappa \\approx 1\/\\sqrt{2}$,\ni.e. near the separation line between type-I and type-II\nsuperconductivity \\cite{hk3d}.\nFor small $\\kappa$, the Coleman-Weinberg mechanism is\nvalid, leading to a first-order transition,\nwhereas for large $\\kappa$, the transition stays second\norder.\n\nThis led to the suggestion that also four-dimensional scalar\nelectrodynamics could have a TCP \\cite{hk4db}. By the above duality\narguments, this would also hold for U(1)-LGT.\nIndeed, the\nfirst-order nature predicted by the Coleman-Weinberg mechanism\nwas at odds with early Monte Carlo\ninvestigations of the dual U(1)-LGT which all claimed evidence\nfor a continuous transition and even reported estimates for\ncritical indices \\cite{earlyu1}. More recent work favors the existence\nof both transition regions, first-order {\\em and} second-order, in\nagreement with ref.\\cite{hk4db}. Only one paper claims the validity\nof the Coleman-Weinberg mechanism everywhere.\nThe present status\nwill be reviewed in more detail in the next section.\n\nThe goal of this lecture is to present semi-quantitative\nexplanations for these TCP's, working within the dual\nstrong-coupling expansions of the various U(1) models. This\nhas the advantage that, since the geometrical\nobjects appearing in this expansion have no long-range\ninteractions, subtleties of the type (\\ref{eq:1.1})--(\\ref{eq:1.3})\nare absent.\n \\section{The ``Stress'' Representation}\nLet us start by recalling the definitions of the various\nU(1) models mentioned in the introduction. 
The partition\nfunctions\n\\begin{equation}\nZ = \\sum_{\\{\\gamma{\\rm - conf.}\\}} (\\prod B)\n\\label{eq:2}\n\\end{equation}\nare products of local Boltzmann factors which may be chosen\nas\n\\begin{equation}\nB = \\left\\{\n\\begin{array}{ll}\ne^{\\beta \\cos \\Theta} & {\\rm Wilson}\\\\\ne^{\\beta \\cos \\Theta + \\gamma \\cos 2\\Theta} & {\\rm Mixed}\\\\\n\\sum_{n}e^{-\\frac{\\beta_{V}}{2}(\\Theta - 2 \\pi n)^{2}} & {\\rm\nVillain}\n\\end{array}\n\\right\\} {\\rm action}\n\\label{eq:3}\n\\end{equation}\nwith $\\Theta$ standing short for\n\\begin{equation}\n\\Theta = \\left\\{\n\\begin{array}{ll}\n\\nabla_{i}\\gamma_{j} - \\nabla_{j}\\gamma_{i} \\equiv\n (\\nabla_{i}\\gamma_{j})^{A} & {\\rm Gauge}\\\\\n\\nabla_{i}\\gamma\n& {\\rm XY}\\\\\n\\nabla_{i}\\gamma_{j} + \\nabla_{j}\\gamma_{i} \\equiv\n (\\nabla_{i}\\gamma_{j})^{S} & {\\rm Melting}\n\\end{array}\n\\right\\} {\\rm model}\n\\label{eq:4}\n\\end{equation}\nThe product in (\\ref{eq:2}) runs over the plaquettes or\nlinks of the lattice, and\n$\\nabla_{i}$ are the usual lattice derivatives\n($\\nabla_{i}f(\\vec x) = f(\\vec x + \\vec i) - f(\\vec x))$.\nThe $\\gamma$-configurations are, in U(1)-LGT,\nthe euclidean\nelectromagnetic fields, in the case of superfluid\nHelium, the phase angles of the condensate, and in the\nmelting model, the\natomic displacements.\nThe common important property of these models\nis the U(1) invariance\n\\begin{equation}\n\\gamma \\rightarrow \\gamma + 2 \\pi\n\\label{eq:5}\n\\end{equation}\nsuggesting a Fourier (``character'') expansion of the\nBoltzmann factors\n\\begin{equation}\nB(\\Theta) = \\sum_{b=-\\infty}^{\\infty} W_{b} e^{ib\\Theta}\n\\label{eq:6}\n\\end{equation}\nwith Fourier coefficients\n\\begin{equation}\nW_{b} =\n\\int_{-\\pi}^{\\pi}\\frac{d\\Theta}{2\\pi}B(\\Theta)e^{-ib\\Theta}\n\\label{eq:7}\n\\end{equation}\nInserting (\\ref{eq:6}) into the partition function and\nsumming over the $\\gamma$-configurations, one obtains the\nstrong-coupling expansion $(\\bar{W}_{b} \\equiv\nW_{b}\/W_{0})$\n\\begin{equation}\nZ = (\\prod W_{0}) \\sum_{\\{b{\\rm-conf.}\\}} (\\prod \\bar{W}_{b})\n\\label{eq:8}\n\\end{equation}\nThe algebraic structure of $\\Theta$ in eq.(\\ref{eq:4}) leads\nto conservation laws which constraint the admissible\n$b$-configurations. It is most transparent to characterize\nthem in terms of geometrical objects:\n\\begin{equation}\n\\{b{\\rm-conf.}\\} = {\\rm closed} \\left\\{\n\\begin{array}{ll}\n{\\rm surfaces} & {\\rm Gauge}\\\\\n{\\rm lines} & {\\rm XY}\\\\\n{\\rm complicated~lines} & {\\rm Melting}\n\\end{array}\n\\right.\n\\label{eq:8.1}\n\\end{equation}\nIn the U(1)-LGT, the $b$-variables correspond to the\nelectromagnetic field strengths, in the superfluid to the\nsuperfluid currents, and in the melting model to the\nphysical stress. This is why we call the expansion (\\ref{eq:8})\ngenerically the ``stress''-representation. The geometrical\nobjects in (\\ref{eq:8.1}) are subject only to short-range\ncontact interactions.\nNotice that the geometrical characterization of these\n``stress''-graph configurations does not depend on the\ndimension D. 
This is in contrast to the\n``defect''-representation where the dimensionality of the\ngeometrical objects depends on the dimension in which\nthe duality transformation is performed.\n \\section{Analytical and Numerical Results}\nLet us now\nanalyze the ``stress'' representation (\\ref{eq:8}) in some detail.\nTo each choice of action in (\\ref{eq:3}) corresponds a\nspecial ``natural'' set of weights $\\bar {W}_{b}$ in\n(\\ref{eq:8}). Their relative importance with increasing\nstrength $b=\\pm 1, \\pm 2, \\ldots$ can be studied\nconveniently by simulating each model with different actions and\ncomparing their thermodynamic quantities such as their\ninternal energies.\n\nWe start with the comparison\nbetween Wilson's and Villain's action for which the weights\nare\n$\\bar{W}_{b}^{W} = I_{b}(\\beta)\/I_{0}(\\beta)$ ($I_{b}:$ modified\nBessel function) and $\\bar{W}_{b}^{V} =\n\\exp(-b^{2}\/(2\\beta_{V}))$, respectively. The two actions\nare made as similar as possible by equating the\n$b=\\pm 1$ weights which amounts to relating the\nVillain parameter $\\beta_{V}$ to the Wilson parameter\n$\\beta$ as follows (see fig.2)\n\\begin{equation}\n\\beta_{V}(\\beta) = -\\frac{1}{2 \\log\n[I_{1}(\\beta)\/I_{0}(\\beta)]}\n\\label{eq:9}\n\\end{equation}\nThis relation was first written down by Villain\n\\cite{villain} when analyzing the Wilson type of\naction of the XY model in terms of the discrete Gaussian model\n(Villain approximation).\nIf furthermore the overall normalizations of the partition\nfunctions are adjusted, then the difference between the\ntwo actions lies all in the higher weights,\n$\\bar{W}_{b}^{W}(\\beta)=I_{b}(\\beta)\/I_{0}(\\beta) \\neq\n[I_{1}(\\beta)\/I_{0}(\\beta)]^{b}=\\exp(-b^{2}\/(2\\beta_{V}))=\n\\bar{W}_{b}^{V}$ for $b \\geq 2$.\n \\begin{figure}\n \\vspace{7.5cm}\n \\caption[2]{The parameter $\\beta_{V}$ of the Villain\napproximation versus $\\beta$ of the corresponding Wilson\naction. The curve results from the requirement of equal\nweights for graphs of\nstrength-1 in the ``stress'' representation for both\nactions.}\n \\end{figure}\nIn order to measure their importance,\nwe have performed Monte Carlo simulations in the\nrepresentation (\\ref{eq:2})\nwith both Wilson's and Villains's action \\cite{jkvill}. As a typical\nexample, we compare in fig.3 our results for the internal energy of\nthe U(1) lattice gauge model.\nThe excellent agreement for\nlow $\\beta$ up to the phase transition around $\\beta \\approx\n1$ demonstrates that, in this range, the systems are dominated\nby $b=\\pm1$ excitations.\n \\begin{figure}\n \\vspace{7.5cm}\n \\caption[3]{The internal energy of D=4 U(1) lattice gauge model\nwith Wilson action in comparison with the Villain approximation.}\n \\end{figure}\nIt is therefore not surprising that also the\ntransition temperatures of the Villain action (with\n$\\beta_{V}$ as a free parameter) are mapped by (\\ref{eq:9})\nvery precisely onto the corresponding ones of the Wilson\naction (see table 1). Furthermore, from fig.3 we read off\nthat at and\n \\input{egertab1}\nabove the phase transition the graphs of higher strengths\nproliferate much stronger using the Wilson action.\nAmong these we expect the graphs of strength-2 to be most\nimportant. This suggests that the weight ratio\n$\\bar{W}_{2}\/\\bar{W}_{1}$ should be the relevant distinguishing\nfeature of the different actions. In the sequel, it will be\ncalled the ``2:1 ratio''. 
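These ratios are elementary to evaluate; a minimal numerical sketch (written here in Python with SciPy's modified Bessel functions, purely for illustration) of the quantities entering this comparison is\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import iv  # modified Bessel function I_b(beta)\n\ndef beta_villain(beta):\n    # matching relation for equal strength-1 weights of both actions\n    return -1.0 \/ (2.0 * np.log(iv(1, beta) \/ iv(0, beta)))\n\nfor beta in (0.8, 1.0, 1.2):\n    bV = beta_villain(beta)\n    r_wilson = iv(2, beta) \/ iv(1, beta)    # 2:1 ratio, Wilson\n    r_villain = np.exp(-3.0 \/ (2.0 * bV))   # 2:1 ratio, Villain\n    print(beta, bV, r_wilson, r_villain)\n\\end{verbatim}\n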
In the Wilson action, the ``2:1\nratio'' is much larger than in the Villain action as is\nshown in table 2.\n\nIt is easy to convince ourselves that the\nhigher ratios $\\bar{W}_{3}\/\\bar{W}_{1},\\ldots$ are\npractically irrelevant near the phase transition.\n \\input{egertab2}\nFor this purpose we compare the Villain action\nwith further Monte Carlo simulations of the mixed\n$\\beta-\\gamma$ action (see eq.(\\ref{eq:3})), which is chosen\nin such a way that the ``2:1 ratio'' coincides with that of\nthe Villain action. The coincidence is achieved by requiring\n\\begin{eqnarray}\n\\frac{I_{1}(\\beta,\\gamma)}{I_{0}(\\beta,\\gamma)} & = &\n\\exp(-\\frac{1}{2\\beta_{V}}) \\label{eq:10a}\\\\\n\\frac{I_{2}(\\beta,\\gamma)}{I_{0}(\\beta,\\gamma)} & = &\n\\exp(-\\frac{4}{2\\beta_{V}})\n\\label{eq:10b}\n\\end{eqnarray}\nwhere $I_{b}(\\beta,\\gamma)$ are generalized modified Bessel\nfunctions\n\\begin{equation}\nI_{b}(\\beta,\\gamma) = \\int_{-\\pi}^{\\pi} \\frac{d\\phi}{2\\pi}\n\\cos(b\\phi) e^{\\beta \\cos(\\phi) + \\gamma \\cos(2\\phi)}\n\\label{eq:11}\n\\end{equation}\nHere, it is convenient to solve eqs.(\\ref{eq:10a},\\ref{eq:10b}) in the\nform $\\beta(\\beta_{V}),\\gamma(\\beta_{V})$ which we shall\ncall the ``Villain locus'' in the $\\beta-\\gamma$\nplane. The comparison of our Monte Carlo data for the\ninternal energy of U(1)-LGT\nin fig.4 clearly\ndemonstrates that the differences due to graphs of higher strength\n$b = \\pm 3,\\ldots$\n(which still have different weights) are completely\nnegligible - the data for the Villain action and the\nmixed $\\beta-\\gamma$ action\n(along the ``Villain locus'') fall almost\nperfectly on top of each other.\n \\begin{figure}\n \\vspace{8.5cm}\n \\caption[4]{The internal energy of U(1)-LGT with Villain\naction. The lower data points are obtained from simulations\nof the $\\beta\\cos\\Theta$ model transformed according to\neq.(\\ref{eq:9}). The\nstars which fall practically on top of the Villain\ndata are from a simulation with mixed action\n$\\beta\\cos\\Theta + \\gamma\\cos 2\\Theta$ treated according to\neqs.(\\ref{eq:10a},\\ref{eq:10b}).}\n \\end{figure}\n\nAs a conclusion, the\ndifferences between the Wilson and the Villain action\nnear the phase transition are completely\nexplained by the different weights of strength-2 ``stress''\ngraphs, whose proliferation is much more pronounced in the Wilson\ncase.\n\nThat an admixture of strength-2 graphs with large enough weight\ncan in principle drive the transition first-order, can be demonstrated\neasily in the disorder field theory of ``stress'' lines (i.e. in the\nmean-field formulation of the D=3 XY model with mixed action). This is\ndescribed by an effective action \\cite{jkxy}\n\\begin{equation}\n{\\cal A} = a|\\phi_{1}|^{2} + b|\\phi_{1}|^{4} + c|\\phi_{2}|^{2}\n-\\frac{1}{2} [ \\phi_{1}^{2} \\phi_{2}^{+} + c.c.] + {\\rm gradients}\n+ \\ldots\n\\label{eq:last}\n\\end{equation}\nwhere the complex fields $\\phi_{1},\\phi_{2}$ represent strength-1 and\nstrength-2 ``stress'' lines, respectively. The coupling $\\phi_{1}\n\\phi_{1} \\phi_{2}^{+}$ corresponds to the merging of two lines of strength-1\ninto one line of strength-2. The complex conjugate coupling describes\nthe reversed process. 
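Since $\\phi_{2}$ enters (\\ref{eq:last}) only quadratically, and linearly coupled to $\\phi_{1}^{2}$, it can be eliminated explicitly at the mean-field level: varying (\\ref{eq:last}) with respect to $\\phi_{2}^{+}$ at fixed $\\phi_{1}$ gives $\\phi_{2} = \\phi_{1}^{2}\/(2c)$, and re-inserting this value,\n\\begin{displaymath}\nc|\\phi_{2}|^{2} - \\frac{1}{2}\\left[ \\phi_{1}^{2} \\phi_{2}^{+} + c.c. \\right] = -\\frac{1}{4c}|\\phi_{1}|^{4} .\n\\end{displaymath}\n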
Integrating out the field $\\phi_{2}$, one obtains\nan additional quartic term in $\\phi_{1}$, $-\\frac{1}{4c} |\\phi_{1}|^{4}$,\nsuch that the total quartic term in $\\phi_{1}$ may change sign, signalizing\na first-order transition.\n\nThis explains at least qualitatively why in careful\nMonte Carlo studies with the Wilson action a\nfirst-order transition was reported while with the Villain\naction evidence for a continuous transition was claimed.\nMoreover, this picture is consistent with Monte Carlo\nstudies of the mixed $\\beta-\\gamma$ action by\nJers\\'{a}k et al.\\cite{jersak}\nwho located a TCP for slightly negative $\\gamma$.\nLooking at fig.5, we see that our ``Villain locus'' indeed\ncrosses the phase transition line in the range\nof second-order transitions. It would be interesting to\ninvestigate whether the TCP is connected with a universal\ntricritical ``2:1 ratio'' $\\bar{W}_{2}\/\\bar{W}_{1}$.\n \\begin{figure}\n \\vspace{9.5cm}\n \\caption[5]{The parameters $\\beta,\\gamma$ of the mixed\naction $\\beta\\cos\\Theta + \\gamma \\cos2\\Theta$ which can be\nstudied by means of the improved Villain approximation\n(``Villain locus''). The\nfat and dashed lines show an interpolation to the critical\npoints found by Jers\\'{a}k et al.\\cite{jersak}. The dotted line was estimated\nby those authors to be the locus of the Villain model in a\ndifferent way from ours (which we believe to be less\naccurate).}\n \\end{figure}\n\nThe most recent Monte Carlo renormalization group (MCRG) studies\nare inconsistent with this nice picture. They are\ninterpreted as evidence for a first-order transition, with both\nthe Wilson and the Villain action. A historic summary is\ncompiled in table 3.\n\\input{egertab3}\nThe latest study \\cite{hase} even speculates that the transition is always\nfirst-order, downto $\\gamma=-\\infty$.\nThe last result would imply that in the whole parameter range of the D=4\nAbelian Higgs model which is covered by the dual\n$\\beta-\\gamma$ action, the Coleman-Weinberg mechanism is\nindeed working. We feel, however, that even in view of this\nimpressive list of work in table 3, the\nanswer has not yet settled and much more high precision\nstudies are necessary to provide the final answer.\n\nWhile in D=4 the situation is still quite\ncontroversial, in D=3 dimensions the numerical evidence is\nclearly against the analog of the Coleman-Weinberg\nmechanism, as advanced by Halperin,Lubensky and Ma\n\\cite{halperin}. Here, it\nwas confirmed many times by different methods that both\nWilson's and Villain's action undergo continuous\ntransitions.\nThe\nquestion was then whether there exists at all a parameter range in the mixed\n$\\beta-\\gamma$ action which shows first-order transitions.\nAccording to our above analysis, a good candidate was only\nthe range $\\gamma \\geq 0$. We therefore concentrated on this\nrange and found \\cite{jkxy}, first in a mean-field (MF) treatment,\nfirst-order\ntransitions for $0.166 \\leq \\gamma \\leq 0.375$. 
The full\nMF phase diagram is displayed in fig.6 (use the\nlabelings on the right and top axis).\n \\begin{figure}\n \\vspace{10.5cm}\n \\caption[6]{Phase diagram of the D=3 XY model with mixed action as\nobtained by mean-field methods and Monte Carlo simulations.}\n \\end{figure}\nSince already in the pure\nXY model ($\\beta = 0$ or $\\gamma=0$ axis), the MF\ntransition temperatures are off by a factor\n$0.45\/0.33=1.36$, we have rescaled the MF curves by this\nfactor when comparing with our Monte Carlo simulations which\nwe run in the range where the rescaled MF results show the\nlargest entropy jump, $\\gamma \\approx 0.35...0.40$.\nThe numerical results confirmed the first-order nature of\nthe transition. As a typical signal, we show in fig.7 the\ndouble peak structure in the internal energy histogram,\ncorresponding to tunnelings between two metastable states.\n \\begin{figure}\n \\vspace{6.0cm}\n \\caption[7]{Energy histogram near the first-order\ntransition line of\nthe D=3 XY model with mixed action.}\n \\end{figure}\nSince the observed first-order transition is\nvery weak and the $\\gamma$-range is probably very small, we\nwere not able to locate the two tricritical points with\nresonable accuracy. We can therefore only claim evidence for\na short line of first-order transitions in the range $\\gamma\n\\approx 0.35...0.40$.\n\nIn defect models of melting, no TCP seems to exist. The reason is the\nvery large activation energy of the graphs in the stress expansion\nso that there are no pretransitional excitations up to the point\nwhere the free energy intercepts the defect expansion. This is\nresponsible for a jump in the slope of the free energy, i.e. for\nfirst-order transitions \\cite{book}.\n\\newpage\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Background}\n\nIn this section, we present a brief review of time series classification, and make a focus on shapelet methods. From now on, we use calligraphic font (\\({\\cal X}\\)) to denote a collection of elements (e.g. a set of time series), capital (\\(X\\)) for one element (e.g. a time series), and lowercase (\\(x\\)) for a value of this element. In this work we consider supervised classification: the ensemble of input time series will be denoted by \\({\\cal X}=\\{X_1, ..., X_n\\}\\) with \\(X_i=\\{x_1, ..., x_m\\}\\) a time series and \\(Y=\\{y_1, ..., y_n\\}\\) their respective classes.\n\n\\subsection{Time series classification}\n\\label{sec:TSC}\nWe present a brief overview of the algorithms identified as state-of-the-art and used in our experimental section, and we report the reader to a recent review \\cite{TSCreview} for a more detailed view of the field.\n\\begin{itemize}\n\\item \\textbf{Shapelet Transform Classifier (STC)} \\cite{STC}, is regarded as a state of the art for shapelet algorithms in terms of accuracy. This algorithm iteratively initializes new shapelets, assesses their discriminative power, and removes those that are too similar. The goal being to maximize the discriminative power of an ensemble of shapelets. A Rotation Forest is then applied as a classifier.\n\\item \\textbf{Temporal Dictionary Ensemble (TDE)} \\cite{TDE} is an ensemble of dictionary-based classifiers. It uses some variants of the BOSS classifier \\cite{BOSS} and WEASEL \\cite{WEASEL}, as base estimators and optimizes their parameters through a Gaussian process.\n\\item \\textbf{Diverse Representation Canonical Interval Forest Classifier (DrCIF)} \\cite{HC2}, is an extension of the CIF algorithm \\cite{CIF}. 
After selecting random intervals from different representations of the time series, it uses the Catch22 \\cite{catch22} method to extract a feature matrix.\n\\item \\textbf{RandOm Convolutional KErnel Transform (ROCKET)} \\cite{Rocket}, randomly generates a huge set of convolutional kernels, and extracts as features the maximum and the proportion of positive values of the convolution for each time series and each kernel. It is followed by a Ridge classifier. An ensemble version of this method, called ARSENAL, was introduced by \\cite{HC2}.\n\\item \\textbf{Inception-Time} \\cite{Inception} is an ensemble of Inception networks, which introduce Inception modules as replacement for traditional fully convolutional layers, notably to mitigate the vanishing gradient problem.\n\\item \\textbf{Hierarchical Vote Collective of Transformation-based Ensembles (HC1)} \\cite{HC1}is a meta-ensemble classifier using a variety of time series classifiers, such as STC, with a novel ensemble learning scheme which estimate the weight of each base classifier in the final decision. This estimation is based on performance in a 10-fold validation scheme.\n\nVariants of this method were developed, such as TS-CHIEF \\cite{TSCHIEF} and HC2 \\cite{HC2}, that both modify the set of base classifiers to improve accuracy. HC2 also modified the meta-ensemble procedure, using a Out-Of-Bag estimate instead of a 10-fold validation to estimate the performance of each base classifier, which improved scalability compared to HC1.\n\\end{itemize}\n\n\n\\subsection{Shapelets}\n\\label{sec:shapelet}\nShapelets \\cite{Shapelets} were originally defined as time series subsequences representative of class membership. In the following, we define a shapelet $S$ as a vector $S=\\{s_1, ..., s_l\\}$ with $l$ its length. All shapelet-based algorithms have the same protocol to extract features from a shapelet $S$ and a time series $X=\\{x_1, ..., x_m\\}$, by using a distance vector $f(S,X) = \\{f_1, ..., f_{m-(l-1)}\\} $ defined as :\n\n\\begin{equation}\nf_i = \\sqrt{\\sum_{j=1}^{l} (X_{i+(j-1)} - s_j)^2}\n\\end{equation}\n\nIn this definition, a point $f_i$ is simply the Euclidean distance between $S$ and the subsequence of length $l$ starting at index $i$ in $X$. The minimum value of $f(S,X)$ is then extracted as a feature, which can be interpreted as an indicator of the presence of the pattern represented by $S$ in $X$. A popular variant of this distance function consists in using a z-normalized Euclidean distance, where $S$ and all subsequences of $X$ are z-normalized independently, allowing to add scale invariance to the translation invariance of the initial formulation.\nThen, as presented in the Shapelet Transform \\cite{ST}, by using a set of shapelets $\\cal S$, one can then transform an ensemble of time series $\\cal X$ into a feature matrix of shape $(|{\\cal X}|,|{\\cal S}|)$, and use it as input in a non-temporal classifier such as a decision tree.\\newline\n\nThe step of generating and selecting shapelet candidates is the main difference between most approaches. In order to speed up the exhaustive search, Fast Shapelet \\cite{FastShp} use input discretization, while Ultra Fast Shapelet \\cite{UFShp} use random sampling. FLAG \\cite{FLAG} build shapelets location indicators from the data to reduce the set of admissible candidates, and GENDIS \\cite{GENDIS} use an evolutionary algorithm initialized by a clustering on the set of possible candidates. 
Learning Time Series Shapelet \\cite{LTS} use a gradient-descent optimization that iteratively change the values of a set of shapelets. MrSEQL \\cite{MrSEQL}, while not strictly speaking a shapelet algorithm, searches for discriminative symbolic sequences in a variety of symbolic representations of the inputs.\n\nSince the publication of Localized Random Shapelet (LRS) \\cite{LRS}, which showed the benefit of extracting $argmin\\ d(S,X)$ to discriminate time series based on the location of the minimum between $S$ and $X$, it has been included in most recent approaches. Based on their results, we will also use this feature in our method.\n\\section{Conclusions and future work}\nThe Random Dilated Shapelet Transform introduces new ways of increasing the global performance of shapelet algorithms, notably through the use of dilation, allowing the use of small non-contiguous subsequences as shapelets, efficiently covering areas of interest in the data. We have shown in our experiments that this new method improves on the state-of-the-art for shapelet algorithms with a good scalability compared to most of the approaches. This work offers many perspectives for future work, notably a generalized version to process uneven length or multivariate time series, as well as modifications of the shapelet generation process to better leverage class information. A more formal explainability framework is also one of our main priorities with this work, since being able to extract clear and visual explanations for domain experts is an extremely desirable property.\n\\section{Experiments}\n\\label{sec:Experiments}\nOur focus in this section is to study the influence of the four parameters of our method on classification accuracy, as well as comparing its performance to recent state-of-the-art approaches. All the experiments were run on a DELL PowerEdge R730 on Debian 9 with 2 XEON E5-2630 Corei7 (92 cores) and 64GB of RAM. We provide a python package\\footnote{https:\/\/github.com\/baraline\/convst} using community standards to run the method and the interpretability tool on any dataset, along with all result tables, and reproducibility instructions for our experiments.\n\nIn the following, we use the 112 univariate datasets from the UCR archive \\cite{Dataset} and when comparing to state-of-the-art results, we use the same resamples scheme as the one used in their experiments. We use critical difference diagrams to display the mean ranks of objects, with cliques (formed by horizontal bars) computed using the Wilcoxon-Holm post-hoc analysis \\cite{CriticalDiagram}, with a p-value of $0.05$. A clique indicates that the accuracy difference between objects is not statistically significant. \n\n\\subsection{Sensitivity Analysis}\nWe conduct a sensitivity analysis on the four input parameters of our algorithm and their effect on classification accuracy on 40 datasets selected randomly, with raw results and selected datasets in a specific file in the online repository. \nFor each parameter analysis, all other parameters remain fixed at the following default values : $n\\_shapelets = 10000$, $p\\_norm=0.9$, $L=[7,9,11]$, $P1=5$ and $P2=15$. 
Figure \\ref{fig:sensi_len_nshp} and Figure \\ref{fig:sensi_bound_pnorm} give the mean accuracy ranks of each method over the 40 datasets, with the accuracy of each method and each dataset computed as the mean of the same 10 resamples.\nGiven the tested set of values, the most impactful parameter is the number of shapelets, with a noticeable increase in performance above 10000 shapelets. All other parameters only display minor gains and thus seem to be stable. Based on those results, for all further experiments we set as default parameters $n\\_shapelets = 10000$, $p\\_norm=0.8$, $L=[11]$ and $P1=5$, $ P2=10$, and report results for datasets used in sensitivity analysis and the others.\n\n\\begin{figure}[h]\n \\includegraphics[width=1.\\textwidth]{len.png}\n \\centering\n \\caption{Accuracy ranks for (a) different number of shapelets, and (b) different shapelet lengths}\n \\label{fig:sensi_len_nshp}\n\\end{figure}\n\n\\begin{figure}[h]\n \\includegraphics[width=1.\\textwidth]{ppnorm.png}\n \\centering\n \\caption{Accuracy ranks for (a) different percentiles bounds, and (b) proportion of z-normalized shapelets}\n \\label{fig:sensi_bound_pnorm}\n\\end{figure}\n\n\\subsection{Scalability}\nWe perform a comparison of the scalability of our approach against Hive-Cote 1.0 (HC1), Hive-Cote 2.0 (HC2), DrCIF, ROCKET, and the Shapelet Transform Classifier (STC). Note that when used as a component in HC1 and HC2, STC is by default subject to a time contract of two hours. Except from this default configuration in HC1 and HC2, we are not setting any time contract in other algorithms. Both STC and RDST are by default sampling 10000 shapelets, and ROCKET use 10000 kernels.\n\nWe are aware that the runtime of HC1, HC2 and STC could be reduced with time contracts. But, as our goal in this section is to contextualize the gain in classification accuracy against the time complexity of each method, we present the results with the time contracts used to generate the accuracy results of the next section.\n\nWe use the Crop Dataset and the Rock Dataset of the UCR archive for evaluating the scalability respectively on the number of time series and their length. As all competing algorithms implemented in the sktime package of \\cite{sktime} can use parallel processing, we set each algorithm to use 90 cores. Figure \\ref{fig:scal} reports the mean training time over 10 resamples, showing the very competitive scalability of RDST. Note that due to job time limitation on our machine and the computational cost of HC2, we could not consider all samples for the Crop dataset. We report the reader interested in the implementation details of our algorithm to the web page of the project.\n\n\\begin{figure}[h]\n \\includegraphics[width=1.\\textwidth]{scalability.png}\n \\centering\n \\caption{Result of the scalability study of the competing algorithms for current state-of-the-art, for (a) number of time series and (b) time series length. Y-axis use log-scale.}\n \\label{fig:scal}\n\\end{figure}\n\n\\subsection{Comparative study}\nWe present the results of our comparative study using the mean accuracy over the same 30 resamples for each of the 112 datasets as HC2 \\cite{HC2} used in their study, and compare our approach against their experimental result. Figure \\ref{fig:ranksdiv} gives the mean accuracy rank of each method over the 40 datasets used for setting the defaults parameters in sensitivity analysis, and for the 72 others. 
The full result tables including standard deviation per dataset and more visualizations of the results are available online as supplementary materials. \n\n\\begin{figure}[h]\n \\includegraphics[width=1.0\\textwidth]{ranks2.png}\n \\centering\n \\caption{Mean accuracy ranks of each method for the 40 dataset used in sensitivity analysis and the 72 others.}\n \\label{fig:ranksdiv}\n\\end{figure}\n\nGiven the scalability and simplicity of our method, having an accuracy comparable to the prior developments of HC2 and to deep learning approaches is a very promising result. Notably for future developments where focus would shift to accuracy rather than scalability. For reference, using RDST without any distance normalization is equivalent to STC in terms of mean accuracy rank, with the same protocol as above.\n\n\\subsection{Interpretability}\nGiven a set of $M$ shapelets, RDST generates $3M$ features. Each feature is linked to a weight for each class in the Ridge classifier, as it is trained in a one-vs-all fashion. \nGiven a class, we can then visualize either global or local information. Locally, we can inspect a shapelet to show how it discriminates the current class, and where the shapelet is positioned with either training or testing data, as shown in Figure \\ref{fig:interp}. Globally, we can display the distribution of weights for each feature type ($min$, $\\argmin$ and $SO$) or by shapelet characteristics such as length, dilation, or use of normalization as shown in Figure \\ref{fig:interp2}. \nWhile this only provides a basic interpretation of the results, we believe a more formal framework could be developed to extract explanations from this data.\n\\begin{figure}[h!]\n \\includegraphics[width=1.0\\textwidth]{interp1.png}\n \\centering\n \\caption{The most important shapelet for class 0 of the Coffee dataset, according to weights of the Ridge classifier, with distribution displayed on the testing data, and two testing samples for visualization.}\n \\label{fig:interp}\n\\end{figure}\n\\begin{figure}[h!]\n \\includegraphics[width=1.\\textwidth]{inter2.png}\n \\centering\n \\caption{A global interpretation of RDST, with (a) distribution of weights for each type of feature, and (b) distribution of weights per dilation.}\n \\label{fig:interp2}\n\\end{figure}\n\n\\section{Introduction}\nTime series occur in a multitude of domains, covering a wide range of applications, which have impacts in many parts of society. The ever-increasing quantity of data and the publications of laws regarding models interpretability are setting new constraints for applications across industries. \n\nRecent research in time series classification produced highly accurate classifiers, using either deep learning approaches \\cite{Inception}, or meta-ensemble methods \\cite{HC1,HC2}. Despite being the most accurate approaches, they are among the slowest, which make them hard to apply on use-cases with huge amount of data. \nOther methods, based on random approaches, notably the RandOm Convolutional KErnel Transform (ROCKET) \\cite{Rocket}, achieve comparable accuracy with extreme scalability. Even though recent works on post-hoc methods and specific frameworks \\cite{XAITS} improved the interpretability of those approaches, they lack a \"by design\" interpretability.\n\nOn the other hand, time series shapelets \\cite{Shapelets} have been widely used in time series classification for their ease of interpretation, which is a critical aspect to some application domains such as health and security. 
\nThe downside is that shapelet algorithms are often outperformed by recent approaches, both in terms of accuracy and scalability. Most Shapelet approaches tried to solve the scalability issues at the expense of some classification accuracy, notably through the use of symbolic approximation techniques \\cite{FastShp}, while others used random shapelets \\cite{UFShp}. Recently, a symbolic sequence ensemble learning \\cite{MrSEQL} method was proposed, which improved the predictive power of approximation-based methods, while other work focused on finding a new discriminative feature \\cite{LRS} to consider during the extraction process.\n\nIn this work, we present the Random Dilated Shapelet Transform, an adaptation of time series shapelets that includes the notion of dilation, one of the core mechanism of the success of convolutional kernel approaches. We also extend on the work of \\cite{LRS} and introduce a new feature to enhance the discriminative power of shapelets. Our contributions can be summarized as follows:\n\n\\begin{itemize}\n\\item an adaptation of time series shapelets allowing the use of dilation, and a feature to capture a new discriminative property of shapelets, \n\\item an interpretable, scalable and accurate shapelet algorithm, which allows shapelet based algorithm to catch-up with the state-of-the-art,\n\\item an experimental study about the sensitivity of our method parameters and a comparative study against the state-of-the-art algorithms for time series classification.\n\\end{itemize}\n\n\n\\section{Proposed method}\n\\label{sec:RDST}\nIn this section, we introduce the main components of our method: the use of dilation in the shapelet formulation and the features extracted from the distance vector between a shapelet and a time series. We put emphasis on the dilation and on the Shapelet Occurrence feature that are new contributions to shapelet algorithms. We give some simple visual examples to illustrate these notions, and report the visualization on real data to the experimental section.\n\n\n\\subsection{Dilated Shapelets}\nTo introduce the notion of dilation in shapelets, we define now a shapelet $S$ as $S=\\{ \\{v_1, ..., v_l\\}, d \\}$ with $l$ the length parameter and $d$ the dilation parameter. In practice, the dilation is used in the distance function $f$, where each value of the shapelet will be compared to a dilated subsequence of the input time series. More formally, consider a time series $X=\\{x_1, ..., x_m\\}$ and a dilated shapelet $S$, we now define $f(S,X) = \\{f_1, ..., f_{m-(l-1)\\times d}\\}$ as :\n\\begin{equation}\nf_i = \\sqrt{\\sum_{j=1}^{l} (X_{i+(j-1) \\times d} - s_j)^2}\n\\end{equation}\n\nThe interest of using dilation in shapelets is to make them non-contiguous subsequences. It allows a shapelet to either match a non-contiguous pattern, or a contiguous one, by focusing on key points of the pattern without covering it entirely, as illustrated in Figure \\ref{fig:ex_dil_shp}. 
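As a concrete illustration, a direct (and deliberately unoptimized) transcription of this distance into Python might look as follows; it is a sketch rather than the implementation of the released Python package, and the z-normalized variant discussed earlier is omitted for brevity:\n\\begin{verbatim}\nimport numpy as np\n\ndef dilated_distance_vector(X, values, d):\n    # f(S, X) for a shapelet S = {values, d}\n    X = np.asarray(X, dtype=float)\n    values = np.asarray(values, dtype=float)\n    l = len(values)\n    n = len(X) - (l - 1) * d\n    f = np.empty(n)\n    for i in range(n):\n        window = X[i : i + (l - 1) * d + 1 : d]  # dilated subsequence\n        f[i] = np.sqrt(np.sum((window - values) ** 2))\n    return f\n\\end{verbatim}\n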
Note that this formulation is equivalent to the original shapelet formulation when $d=1$.\n\n\\begin{figure}[h]\n \\includegraphics[width=0.9\\textwidth]{ex_dil_shp.png}\n \\centering\n \\caption{An example of two possible shapelets (in orange), positioned on a synthetic pattern (in blue): (a) one without dilation, and (b) a much smaller one but with dilation}\n \\label{fig:ex_dil_shp}\n\\end{figure}\n\n\\subsection{Shapelet Occurrence feature}\n\\label{sec:SO}\nIf we consider a shapelet $S$ and two time series $X_1$ and $X_2$, we can imagine multiple ways of discriminating $X_1$ and $X_2$ using $S$.\n\\begin{itemize}\n\\item $S$ can be present in $X_1$ but not in $X_2$. This is captured by $min\\ f(S,X_i)$, with smaller distances indicating better matches between the shapelet and the series.\n\\item $S$ can be present in both series, but not at the same place. This is captured by the $argmin$ feature introduced by LRS \\cite{LRS}.\n\\item $S$ can be present in both series, but not at the same scale. In this case, a normalized distance would not be able to discriminate the series.\n\\item $S$ can be present in both series, but occurs a different number of times in $X_1$ compared to $X_2$. This is captured by a new feature, called Shapelet Occurrence (SO).\n\\end{itemize}\n\nThose points are illustrated in Figure \\ref{fig:shp_disc}. Deciding whether scaling is important or not is highly dependent on the application, but without prior knowledge, one cannot know which to choose. For this reason, we introduce a parameter in Section \\ref{sec:RandDST} allowing to tune the amount of normalized shapelets.\n\n\\begin{figure}[h]\n \\includegraphics[width=1.\\textwidth]{class_diff.png}\n \\centering\n \\caption{Synthetic examples of possible discriminative properties between two classes}\n \\label{fig:shp_disc}\n\\end{figure}\n\nTo the best of our knowledge, the number of occurrences of a shapelet has never been considered as a feature. This requires another modification to the definition of $S$ as $S=\\{ \\{v_1, ..., v_l\\}, d, \\lambda \\}$, with $\\lambda$ a threshold allowing us to compute the Shapelet Occurrence ($SO$) feature as $SO = |\\{i| f(S,X)_i <\\lambda\\}|$. Although the parameter $\\lambda$ could be set randomly, we discuss in Section \\ref{sec:RandDST} a method to set the value of this threshold.\n\n\\subsection{Random Dilated Shapelet Transform (RDST)}\n\\label{sec:RandDST}\nOur objective for this algorithm is to produce an accurate but scalable approach. As our shapelet formulation adds attributes compared to the initial formulation \\cite{Shapelets}, optimizing a set of dilated shapelets with a threshold $\\lambda$ will be costly, and this explains why we choose a random approach.\n\nFor simplicity, we present our approach in the context of univariate and even length time series, with ${\\cal X} = \\{X_1, ..., X_n\\}$ a set of time series ($X_i = \\{x_1, ..., x_m \\}$) and $Y=\\{y_1, ..., y_n\\}$ their respective classes. Our method takes as input four parameters that are: $n\\_shapelets$ the number of shapelets to generate, $L$ a set of possible lengths for the shapelets, $p\\_norm$ the proportion of shapelets that will use z-normalization, and $(P_1, P_2) \\in [0,100]$ a pair used as percentile bounds for the sampling of the threshold $\\lambda$. 
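Once a shapelet $S=\\{ \\{v_1, ..., v_l\\}, d, \\lambda \\}$ has been generated, the three features it contributes are immediate to compute; continuing the earlier sketch (again without z-normalization, and purely as an illustration of the definitions rather than the actual implementation):\n\\begin{verbatim}\ndef rdst_features_one_series(X, shapelets):\n    # 3 features (min, argmin, SO) per shapelet S = (values, d, lam)\n    feats = []\n    for (values, d, lam) in shapelets:\n        f = dilated_distance_vector(X, values, d)\n        feats.extend([np.min(f), np.argmin(f), np.sum(f < lam)])\n    return np.array(feats)\n\\end{verbatim}\nEach input series contributes one such row to the feature matrix described at the end of this section. 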
\n\nGiven the definition of a shapelet $S=\\{ \\{v_1, ..., v_l\\}, d, \\lambda \\}$, we initialize each parameter as follows:\n\\begin{itemize}\n\\item the length $l$ is uniformly drawn from $L$,\n\\item the dilation $d$, in the same way as ROCKET \\cite{Rocket}, is set to $d = \\left\\lfloor 2^x \\right\\rfloor$ with $x$ uniformly drawn in $[0, log_2 \\frac{m}{l}]$,\n\\item we randomly choose whether the shapelet will use a z-normalized distance with probability $p\\_norm$,\n\\item for setting the values, a sample $X$ is uniformly drawn from ${\\cal X}$, and an admissible start point $i$ (given $l, d$) is randomly selected. Then, values are set to $[X_i, ..., X_{i+(l-1)\\times d}]$.\n\\item finally, given a shapelet $S$, to fix the value of $\\lambda$, we take a sample $X$ from the same class as the one used for extracting the shapelet value, and uniformly draw a value between the two percentiles $(P_1, P_2)$ of $f(S,X)$.\n\\end{itemize}\n\nThe strategy employed to find $\\lambda$ is a classic trade-off between time and accuracy. If scalability was not a focus, we could compute the distance vector for more samples in ${\\cal X}$, and optimize the value of $\\lambda$ based on an information measure.\nAfter computing the distance vector between all pairs of time series and shapelets, the output of our method is a feature matrix of size $(|{\\cal X}|,3 \\times n\\_shapelets)$, with the three features extracted from the distance vector $f(S,X)$ being the $min, argmin, SO(S,X)$.\n\nFollowing the arguments of the authors of ROCKET \\cite{Rocket}, we use a Ridge Classifier after the transformation of ${\\cal X}$, as the L2 regularization used in Ridge is of critical importance due to the high number of features that are generated, while being scalable and interpretable.\n\\subsubsection*{Acknowledgements}\nThis work is supported by the ANRT CIFRE grant n\u00b02019\/0281 in partnership with Worldline and the University of Orl\u00e9ans.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nWe present a graphical hidden variable framework for modeling non-negative sequential data with hierarchical structure. Our framework is intended for applications where the observed data is non-negative and is well-modeled as a non-negative linear combination of underlying non-negative components. Provided that we are able to adequately model these underlying components individually, the full model will then be capable of representing any observed additive mixture of the components due to the linearity property. This leads to an economical modeling representation, since a compact parameterization can explain any number of components that combine additively. Thus, in our approach, we do not need to be concerned with explicitly modeling the maximum number of observed components nor their relative weights in the mixture signal.\n\nTo motivate the approach, consider the problem of computational auditory scene analysis (CASA), which involves identifying auditory ``objects'' such as musical instrument sounds, human voice, various environmental noises, etc, from an audio recording. Speech recognition and music transcription are specific examples of CASA problems. When analyzing audio, it is common to first transform the audio signal into a time-frequency image, such as the spectrogram (i.e., magnitude of the short-time Fourier transform (STFT)). 
We empirically observe that the spectrogram of a mixture of auditory sources is often well-modeled as a linear combination of the spectrograms of the individual audio sources, due to the sparseness of the time-frequency representation for typical audio sources. For example, consider a recording of musical piece performed by a band. We empirically observe that the spectrogram of the recording tends to be well-approximated as the sum of the spectrograms of the individual instrument notes played in isolation. If one could construct a model using our framework that is capable of representing any individual instrument note played in isolation, the model would then automatically be capable of representing observed data corresponding to arbitrary non-negative linear combinations of the individual notes. Likewise, if one could construct a model under our framework capable of representing a recording of a single human speaker (possibly including a language model), such a model would then be capable of representing an audio recording of multiple people speaking simultaneously. Such a model would have obvious applications to speaker source separation and simultaneous multiple-speaker speech recognition. We do not attempt to construct such complex models in this paper, however. Rather, our primary objective here will be to construct models that are simple enough to illustrate the interesting properties of our approach, yet complex enough to show that our approach is noise-robust, capable of learning from training data, and at least somewhat scalable. We hope that the results presented here will provide sufficient motivation for others to extend our ideas and begin experimenting with more sophisticated PFNs, perhaps applying them to the above-mentioned CASA problems. \n\nAn existing area of research that is related to our approach is non-negative matrix factorization (NMF) and its extensions. NMF is a data modeling and analysis tool for approximating a non-negative matrix $X$ as the product of two non-negative matrices $W$ and $H$ so that the reconstruction error between $X$ and $W H$ is minimized under a suitable cost function. NMF was originally proposed by Paatero as \\emph{positive matrix factorization} \\cite{paatero_1994}. Lee and Seung later developed robust and simple to implement multiplicative update rules for iteratively performing the factorization \\cite{Lee_seung}. Various sparse versions of NMF have also been recently proposed \\cite{hoyer_sparse_NMF}, \\cite{cichockiNMF}, \\cite{nsNMF2006}. NMF has recently been applied to many applications where a representation of non-negative data as an additive combination of non-negative basis vectors seems reasonable. Such applications include object modeling in computer vision, magnitude spectra modeling of audio signals \\cite{wang_mag_spectrogram_NMF}, and various source separation applications \\cite{NMFParis}. The non-negative basis decomposition provided by NMF is, by itself, not capable of representing complex model structure. For this reason, extensions have been proposed to make NMF more expressive. Smaragdis extended NMF in \\cite{NMFParis} to model the temporal dependencies in successive spectrogram time slices. His NMF extension, which he termed \\emph{Convolutive NMF}, also appears to be a special case of one of our example models in Section~\\ref{sec:sparseHierarchicalModel}. 
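For concreteness, the multiplicative updates of Lee and Seung referred to above take the following standard form for the Euclidean cost $\\|X - WH\\|^{2}$; the sketch below is purely illustrative, and the slightly modified, numerically safeguarded rules that we actually use are given in Appendix~\\ref{appendix_nmf}:\n\\begin{verbatim}\nimport numpy as np\n\ndef nmf_euclidean(X, r, n_iter=200, eps=1e-9):\n    # Lee-Seung multiplicative updates for X ~ W H, all entries >= 0\n    n, m = X.shape\n    W = np.random.rand(n, r)\n    H = np.random.rand(r, m)\n    for _ in range(n_iter):\n        # eps guards against division by zero (illustrative safeguard only)\n        H *= (W.T @ X) \/ (W.T @ W @ H + eps)\n        W *= (X @ H.T) \/ (W @ H @ H.T + eps)\n    return W, H\n\\end{verbatim}\nUpdates of this type, applied independently to each factorization equation while tying shared variables together, are the building block of the PFN inference and learning algorithms proposed below. 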
We are unaware of any existing work in the literature that allow for the general graphical representation of complex hidden variable models, particularly sequential data models, that is provided by our approach, however. \n\nA second existing area of research that is related to our approach is probabilistic graphical models \\cite{jordan_graphical_models} and in particular, dynamic Bayesian networks (DBNs) \\cite{Murphy_Thesis}, \\cite{Friedman1999}, which are probabilistic graphical models for sequential data. We note that the Hidden Markov Model (HMM) is a special case of a DBN. DBNs are widely used for speech recognition and other sequential data modeling applications. Probabilistic graphical models are appealing because they they can represent complex model structure using an intuitive and modular graphical modeling representation. A drawback is that the corresponding exact and\/or approximate inference and learning algorithms can be complex and difficult to implement, and overcoming tractability issues can be a challenge.\n\nOur objective in this paper is to present a framework for modeling non-negative data that retains the non-negative linear representation of NMF, while also supporting more structured hidden variable data models with a graphical means for representing variable interdependencies analogously to that of the probabilistic graphical models framework. We will be particularly interested in developing models for sequential data consisting of the spectrograms of audio recordings. Our framework is essentially a modular extension of NMF in which the full graphical model corresponds to several coupled NMF sub-models. The overall model then corresponds to a system of coupled vector or matrix factorization equations. Throughout this paper, we will refer to a particular system of factorizations and the corresponding graphical model as a \\emph{positive factor network (PFN)}. We will refer to the dynamical extension of a PFN as a \\emph{dynamic positive factor network (DPFN)}. Given an observed subset of the PFN model variables, we define inference as solving for the values of the hidden subset of variables and learning as solving for the model parameters in the system of factorization equations. \n\nNote that our definition of inference is distinct from the probabilistic notion of inference. In a PFN, inference corresponds to solving for actual values of the hidden variables, whereas in a probabilistic model inference corresponds to solving for probability distributions over the hidden variables given the values of the observed variables. Performing inference in a PFN is therefore more analogous to computing the MAP estimates for the hidden variables in a probabilistic model. One could obtain an analogous probabilistic model from a PFN by considering the model variables to be non-negative continuous-valued random vectors and defining suitable conditional probability distributions that are consistent with the non-negative linear variable model. Let us call this class of models \\emph{probabilistic PFNs}. Exact inference is generally intractable in such a model since the hidden variables are continuous-valued and the model is not linear-Gaussian. However, one could consider deriving algorithms for performing approximate inference and developing a corresponding EM-based learning algorithm. We are unaware of any existing algorithms for performing tractable approximate inference in a probabilistic PFN. 
It is possible that our PFN inference algorithm may also have a probabilistic interpretation, but exploring the idea further is outside the scope of this paper. Rather, in this paper our objective is to to develop and motivate the inference and learning algorithms by taking a modular approach in which existing NMF algorithms are used and coupled in a way that seems to be intuitively reasonable. We will be primarily interested in empirically characterizing the performance of the proposed inference and learning algorithms on various example PFNs and test data sets in order to get a sense of the utility of this approach to interesting real-world applications.\n\nWe propose general joint inference and learning algorithms for PFNs which correspond to performing NMF update steps independently (and therefore potentially also in parallel) on the various factorization equations while simultaneously enforcing coupling constraints so that variables that appear in multiple factorization equations are constrained to have identical values. Our empirical results show that the proposed inference and learning algorithms are fairly robust to additive noise and have good convergence properties. By leveraging existing NMF multiplicative update algorithms, the PFN inference and learning algorithms have the advantage of being straightforward to implement, even for relatively large networks. Sparsity constraints can also be added to a module in a PFN model by leveraging existing sparse NMF algorithms. We note that the algorithms for performing inference and learning in PFNs should be understandable by anyone with a knowledge of elementary linear algebra and basic graph theory, and do not require a background in probability theory. Similar to existing NMF algorithms, our algorithms are highly parallel and can be optimized to take advantage of parallel hardware such as multi-core CPUs and potentially also stream processing hardware such as GPUs. More research will be needed to determine how well our approach will scale to very large or complex networks.\n\nThe remainder of this paper has the following structure. In Section~\\ref{sec:pfn_main_section}, we present the basic PFN model. In Section~\\ref{sec:main_factored_model}, we present an example of how a DPFN can be used to represent a transition model and present empirical results. In Section~\\ref{sec:main_hierarchical_state_model}, we present an example of using a PFN to model sequential data with hierarchical structure and present empirical results for a regular expression example. In Section~\\ref{sec:target_tracking}, we present a target tracking example and provide results for synthetic observation data which serve to illustrate the interesting properties of PFNs and motivate their potential usefulness in applications such as music transcription, source separation, and speech recognition. We show how a target process characterized by a hierarchical state transition model can be represented by a PFN. Our results illustrate that a PFN which is defined in terms of a single target observation can then be used to effectively track the states of multiple simultaneous targets in the observed data. In Section~\\ref{sec:hierarchicalSeqDecomp} we present results for an example in which meaningful hierarchical features are extracted from a spectrogram. Such a hierarchical representation could be useful for music transcription and source separation applications. 
In Section~\\ref{sec:main_language_model}, we propose a DPFN for modeling the sequence of words or characters in a text document as an additive factored transition model of word features. We also propose slightly modified versions Lee and Seung's update rules to avoid numerical stability issues. The resulting modified update rules are presented in Appendix~\\ref{appendix_nmf}. \n\n\n\\section{Positive Factor Networks}\n\\label{sec:pfn_main_section}\n\nIn this section, we specify the basic data model and present a graphical representation. We then propose inference algorithms for solving for the hidden variables and learning the model parameters. \n\n\\subsection{Model specification}\n\\label{sec:model_specification}\n\nWe now specify a data model for a set of non-negative continuous vector-valued variables $\\{x_i: i = 1, \\dots, N\\}$ where the dimension of each $x_i$, in general, can be distinct. We will refer to the set $\\{x_i\\}$ as the \\emph{model variables}. We assume that a subset $X_E$ of the model variables is observed and the rest of the variables comprise the hidden subset, $X_H$. \n\nThe model is specified by a system of $Q$ of non-negative factorization equations where the $j$'th equation is given by:\n\n\\begin{align}\nx_{f(j,0)} =& \\sum_{k=1}^{P_j} W^j_k x_{f(j,k)} \\notag \\\\\n=& \\left[ \\begin{array}{cccc} W^j_1 & W^j_2 & \\dots & W^j_{P_j} \\end{array} \\right] \\left[ \\begin{array}{c} x_{f(j,1)}\\\\\nx_{f(j,2)} \\\\\n\\vdots \\\\\nx_{f(j,{P_j})} \\end{array} \\right] \\notag \\\\\n=& W^j x^j\n\\label{eq:factorization_equation}\n\\end{align}\n\nwhere $P_j \\geq 1$ for all $j \\in \\{1, \\dots, Q\\}$ and all $W^j_k$ are non-negative matrices with a possibly distinct number of columns for each $k$. The function $f(j,k)$ maps each $(j,k)$ above into a corresponding index $i \\in \\{1, \\dots, N\\}$ and satisfies $f(j,k) = f(m,n)$ implies $j \\ne m$. Thus, $x_{f(j,k)}$ refers to one of the model variables $x_i$, and it is possible for a given variable $x_i$ to appear in multiple equations above but only at most once in any given equation. The matrix $W^j$ is defined as the horizontal concatenation of the non-negative $W^j_k$ matrices.\n\nSince there are no constant terms in the above equations, the system corresponds to a a homogeneous system of linear equations, subject to a non-negativity constraint on the variables \\{$x_i$\\}, which can be written as:\n\n\\begin{align}\nA y = 0 \\text{ , where } y = \\left[ \\begin{array}{c} x_1\\\\\nx_2 \\\\\n\\vdots \\\\\nx_N \\end{array} \\right] \\geq 0\n\\end{align}\n\nNote that $A$ will contain both negative and positive values, however, since all of the terms are grouped together on one side. It follows that our model satisfies the linearity property subject to non-negativity constraints (non-negative superposition property): if $y_1$ and $y_2$ are solutions to the system, then $\\alpha_1 y_1 + \\alpha_2 y_2$ is also a solution, for any choice of scalars $\\alpha_1 > 0, \\alpha_2 > 0$. \n\n\n\n\\subsection{Graphical representation}\nWe now develop a graphical representation to facilitate the visualization of the local linear relationships between the model variables in the various factorization equations. Hidden variables correspond to shaded nodes in the graph, and observable variables correspond to unshaded nodes. We construct the graphical model such that the $j$'th factorization equation corresponds to a subgraph with $x_{f(j,0)}$ as the child node and the $\\{x_{f(j,k)} : k = 1, \\dots, P_j\\}$ as the parent nodes. 
Thus, a child node variable is a linear function its parent nodes in the subgraph. We then arrive at the complete graphical model by superimposing the subgraphs corresponding to the various factorization equations. We allow the same variable $x_i$ to correspond to the child node in multiple subgraphs (i.e., distinct factorization equations), provided that its parents do not overlap between the subgraphs. That is, we do not allow a single arc to correspond to multiple linear relationships. We annotate arcs with dash marks where necessary, to disambiguate subsets of parent nodes that correspond to the same factorization equation. Arcs annotated with the same number of dash marks connecting a child node to its parents denote a subset of parent nodes that corresponds to a single factorization equation. The pseudocode in Algorithm~\\ref{alg:create_graph} outlines a procedure for creating a graphical model representation from a system of factorization equations. \n\n\\begin{algorithm}\n\\caption{Create a directed graph from a system of factorizations.}\n\\label{alg:create_graph}\n\\begin{tabbing}\n{\\bf for} \\= {$j$ = 1 to $Q$} \\\\\n\\> \/\/ for each factorization equation $x_{f(j,0)} = \\sum_{k=1}^{P_j} W^j_k x_{f(j, k)}$ \\\\\n\\> Create a node for the corresponding variable $x_{f(j, 0)}$, if it does not already exist. \\\\\n\\> $dashCount \\gets$ 1 + maximum dash count on any existing arc from a parent node of $x_{f(j, 0)}$ to $x_{f(j, 0)}$. \\\\\n\\> {\\bf for} \\= {$k$ = 1 to $P_j$} \\\\\n\\> \\> {\\bf if} \\= a node corresponding to $x_{f(j, k)}$ does not already exist \\\\\n\\> \\> \\> Create a node corresponding to $x_{f(j, k)}$ \\\\\n\\> \\> \\> {\\bf if} \\= $x_{f(j, k)}$ is observed \\\\\n\\> \\> \\> \\> Shade the node corresponding to $x_{f(j, k)}$ \\\\\n\\> \\> \\> {\\bf end} \\\\\n\\> \\> {\\bf end} \\\\\n\\> \\> Create a directed arc from the $x_{f(j,k)}$ node to the $x_{f(j, 0)}$ node. \\\\\n\\> \\> Annotate the arc with the number of dash marks given by the current value of $dashCount$. \\\\\n\\> \\> Annotate the arc with $W^j_k$ \\\\\n\\> {\\bf end} \\\\\n{\\bf end} \n\\end{tabbing}\n\\end{algorithm}\n\n\nAs an example, consider the set of variables \\{$x_1, x_2, \\dots, x_{12}$\\} that are described by the following system of factorization equations:\n\n\\begin{align}\n\\label{eqn:factor1}\nx_1 =& W_1 x_5 + W_2 x_6 \\\\\n\\label{eqn:factor2}\nx_2 =& W_3 x_6 + W_4 x_7 \\\\\n\\label{eqn:factor3}\nx_2 =& W_5 x_8 + W_6 x_9 \\\\\n\\label{eqn:factor4}\nx_2 =& W_7 x_{10} \\\\\nx_3 =& W_8 x_{10} \\\\\n\\label{eqn:factor6}\nx_3 =& W_9 x_{11} \\\\\n\\label{eqn:factor7}\nx_4 =& W_{10} x_{11} \\\\\nx_6 =& W_{11} x_{12} \\\\\nx_{10} =& W_{12} x_{12} \\\\\n\\label{eqn:factor10}\nx_{11} =& W_{13} x_{12}\n\\end{align}\n\nFigure~\\ref{fig:sampleModel1} shows the graphical model associated with the above system of factorizations. Note that nodes $x_3$ and $x_4$ are connected by a dashed line. This is an optional annotation that specifies that the corresponding connected variables have \\emph{forced factored co-activations}, or simply \\emph{forced co-activations}. So far the only constraint we have placed on the parameter matrices \\{$W_i$\\} is that they be non-negative. However, in some models we might wish to place the additional constraint that the columns $\\{w_n\\}$ of $W_i$ are normalized in some way. For example, we might consider requiring that each $w_n$ have column sum = 1. Consider the subgraph corresponding to Equations (\\ref{eqn:factor6}) and (\\ref{eqn:factor7}). 
The dashed line connecting $x_3$ and $x_4$ specify that these variables are factorized in terms of a common parent ($x_{11}$ in this case) and that the columns of the corresponding \\{$W_i$\\} ($W_9$ and $W_{10}$ in this case) are normalized to have unit sum. This constraint ensures that these three variables will have equal column sums. Thus, if any one of these variables is observed in the model, then we can infer that all three must have the same column sum. An activation of the parent $x_{11}$ implies that its children $x_3$ and $x_4$ will then be activated with the same column sum (activated together), hence the term forced co-activations.\n\nConsider the subgraph corresponding to Equation (\\ref{eqn:factor1}). An activation in $x_1$ (i.e., $x_1$ is nonzero) could be explained by just one of the parents being nonzero. Now consider the subgraph consisting of $x_2$ and its parents, corresponding to Equations (\\ref{eqn:factor2}), (\\ref{eqn:factor3}), and (\\ref{eqn:factor4}). In this case, an activation of $x_2$ corresponds to at least one of $x_6$ and $x_7$ being activated, at least one of $x_8$ and $x_9$ being activated, as well as $x_{10}$ being activated.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=60ex]{.\/figures\/sampleModel1.pdf}\n\\caption{A graphical model corresponding to the example system of factorizations in Equations~\\ref{eqn:factor1} - \\ref{eqn:factor10}.}\n\\label{fig:sampleModel1}\n\\end{figure}\n\n\nWe borrow the notion of a \\emph{plate} from the probabilistic graphical models formalism \\cite{jordan_graphical_models}, in which plates are used to represent replicated random variables. A plate consists of a box that is drawn around the replicated variables, with the replication count specified in a corner. Figure~\\ref{fig:simpleNmfPlate} shows an example of using the graphical plate notation. This graphical model corresponds to the following system of factorization equations:\n\n\\begin{align}\n\\label{eqn:nmf_vector}\nx^1_1 =& W_1 x^2_1 \\notag \\\\\nx^1_2 =& W_1 x^2_2 \\notag \\\\\n\\vdots \\notag \\\\\nx^1_N =& W_1 x^2_N\n\\end{align}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=25ex]{.\/figures\/simpleNmfPlate.pdf}\n\\caption{An example of using plate notation, corresponding to $N$ observable variables \\{$x^1_i$\\} and $N$ hidden variables \\{$x^2_i$\\}. This model corresponds to standard NMF.}\n\\label{fig:simpleNmfPlate}\n\\end{figure}\n\nLetting $X^1 = \\left[ \\begin{array}{cccc} x^1_1 & x^1_2 & \\dots & x^1_N \\end{array} \\right]$, and letting $X^2 = \\left[ \\begin{array}{cccc} x^2_1 & x^2_2 & \\dots & x^2_N \\end{array} \\right]$, we can then write the system of factorization equations more compactly as the matrix equation:\n\n\\begin{align}\n\\label{eqn:nmf_matrix}\nX^1 = W_1 X^2\n\\end{align}\n\nNote that this corresponds to standard NMF, since the observed $X^1$ matrix is factored as the product of two non-negative matrices. \n\n\n\\subsection{Algorithms for Inference and Learning}\n\\label{sec:inference_and_learning}\n\n\n\nTypically, a subset of the model variables $X_E = \\{x_{E_n}: n = 1, \\dots, N_E\\}$ is observed, and we are interested in solving for the values of the hidden variables $X_H = \\{x_{H_n}: n = 1, \\dots, N_H\\}$, which we refer to as the \\emph{inference problem}. We are also interested in solving for the values of the model parameters $\\theta = \\{W^1, W^2, \\dots, W^Q\\}$, which we refer to as the \\emph{learning problem}. 
The observed variables may deviate from the modeling assumptions and\/or contain noise so that an exact solution will not be possible in general. We will thus seek an approximate solution by optimizing a reasonable cost function.\n\nGiven a subset of observable variables and the model parameters, we define the inference solution as the values of the hidden variables that minimize some reasonable cost function $g(\\theta, X_H, X_E)$. That is, we consider $X_E$ and $\\theta$ to be fixed and solve for $X_H$:\n\n\\begin{align}\nX_H = \\argmin_{X_H} g(\\theta, X_H, X_E)\n\\end{align}\n\nThe learning problem corresponds to solving for $\\theta$ given observations $X_E$. The joint learning and inference problem corresponds to minimizing $g(\\theta, X_H, X_E)$ jointly for $\\theta$ and $X_H$ given $X_E$: \n\n\\begin{align}\n(\\theta, X_H) = \\argmin_{\\theta, X_H} g(\\theta, X_H, X_E)\n\\end{align}\n\nThe cost function should have the property that its value approaches zero as the approximation errors of the system of factorization equations approach zero. One possibility is the following function, specified as the sum of the squared approximation errors of the factorization equations (\\ref{eq:factorization_equation}):\n\n\\begin{align}\ng(\\theta, X_H, X_E) =& \\sum_{j = 1}^Q ||x_{f(j,0)} -W^j x^j ||^2 \\notag \\\\\n=& \\sum_{j = 1}^Q ||x_{f(j,0)} -W^j \\left[ \\begin{array}{c} x_{f(j,1)}\\\\\nx_{f(j,2)} \\\\\n\\vdots \\\\\nx_{f(j,{P_j})} \\end{array} \\right] ||^2 \n\\end{align}\n\nOther possibilities could involve using the generalized KL-divergence (\\ref{eq:kl_div}), for example. However, we will not develop inference and learning algorithms by directly optimizing any particular cost function, and so will not be too concerned with its exact form. Rather, we propose algorithms that seem intuitively reasonable and for which we have observed good empirical convergence properties on test data sets, but do not offer any proof of convergence, even to a local minimum of any particular cost function. We will only make use of a particular cost function in order to quantify the empirical performance of our algorithms.\n\n\n\nWe require that the graphical model correspond to a directed acyclic graph (DAG). Since there are no cycles, the graph can be arranged so that it flows from top to bottom. We then rename each of the model variables $\\{x_n : n=1, \\dots, N\\}$ to $x^l_i$ where $l = 1,\\dots, L$ denotes the level of the variable (vertical index), and $i$ is its position within the level (horizontal index). We then have a graph with $L$ levels, such that the top level nodes (level $L$) have no parents and the bottom level nodes (level 1) have no children. A level is defined such that no pair of nodes with a parent-child relationship are allowed to be in the same level. That is, for any pair of variables $(x^l_i, x^l_j), i \\ne j$ in the same level $l$, we disallow that $x^l_i$ is the parent of $x^l_j$ or vice versa. If an arc exists between a higher level variable and a lower level variable, then the higher level variable must be the parent of the lower level variable. That is, we require that for any two variables $(x^m_i, x^n_j), \\text{ s.t }. m > n$, $x^m_i$ cannot be a child of $x^n_j$. For example, the graph in Figure~\\ref{fig:sampleModel1} corresponds to an $L = 3$ level graph with the following renamed variables. Variables $(x_1, \\dots, x_4)$ would be renamed to $(x^1_1, \\dots, x^1_4)$. Variables $(x_5, \\dots, x_{11})$ would be renamed to $(x^2_1, \\dots, x^2_7)$. 
Variable $x_{12}$ would be renamed to $x^3_1$.\n\n\nThe inference and learning algorithm will require a set of \\emph{local variables} \\{$v^j_k$\\}. We use the term local variable because a given $v^j_k$ is associated only with the $j$'th factorization equation, unlike a model variable $x_{f(j,k)}$ which is allowed to appear in multiple factorization equations. Specifically, a distinct $v^j_k$ will be associated with each allowable combination of $j$ and $k$ that appears in the factorization equations in (\\ref{eq:factorization_equation}). Thus, several distinct $v^j_k$ may be associated with a given model variable and this will be the case when a given model variable appears in more than one factorization equation. Replacing the $x_{f(j,k)}$ with the associated $v^j_k$, we then have $Q$ factorization equations $FactorSystem = \\{eq_j : j = 1, \\dots, Q\\}$ where equation $eq_j$ is given by:\n\n\\begin{align}\nv^j_0 =& \\sum_{k=1}^{P_j} W^j_k v^j_k \\notag \\\\\n=& \\left[ \\begin{array}{cccc} W^j_1 & W^j_2 & \\dots & W^j_{P_j} \\end{array} \\right] \\left[ \\begin{array}{c} v^j_1\\\\\nv^j_2 \\\\\n\\vdots \\\\\nv^j_{P_j} \\end{array} \\right] \\notag \\\\\n=& W^j v^j\n\\label{eq:factorization_equation333}\n\\end{align}\n\nWe say that the above equations are in a \\emph{consistent state} if each model variable $x_{f(j,k)}$ and all of its associated local variables \\{$v^j_k$\\} have the same value. Otherwise, we say that the equations are in an \\emph{inconsistent state}. Note being in a consistent state does not imply that the corresponding $x_{f(j,k)}$ actually constitute a solution to the system.\n\nThe basic idea of our approach is to learn a generative model of data by iteratively performing inference and learning in a bottom-up pass through the network and then perform data generation through activation propagation in a top-down pass. In the inference and learning pass, parent node values (activations) and parameters are updated based on the values of their child node variables. These inference and learning updates are performed locally using NMF algorithms. Once the top level nodes have been updated, we then propagate values downward to the lowest level nodes in a data generation pass. This is performed by computing the new child node values as the value of the right hand side of the corresponding factorization equations in which they appear. Throughout this process, the multiple $v^j_k$ variables that correspond to a single model variable $x_{f(j,k)}$ are repeatedly replaced by their mean value in order to put the system of factorization equations back into a consistent state. This process of a bottom-up inference and learning step followed by a top-down value propagation (data generation) step is iterated until convergence. \n\nAlgorithm~\\ref{alg:inference_learning1} and the corresponding procedures in Algorithm~\\ref{alg:upStepDownStep} and Algorithm~\\ref{alg:averagingProcedures} show the pseudocode for the basic inference and learning algorithm that was used to obtain all of the empirical results in this paper. We start by initializing the hidden variables $X_H$ to small random positive values. We then make the system consistent by copying the value of each model variable $x_{f(j,k)}$ into each of its corresponding local variables $v^j_k$. The only distinction between hidden and observed variables from the perspective of the learning and inference algorithm is that model variables in the observed set $X_E$ are never modified in the algorithm. 
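
To make this bookkeeping concrete, the following minimal numpy sketch (illustrative only; the variable names and the single averaging helper are ours and are not part of the formal model) shows the local copies associated with the model variable $x_6$ from the earlier example, which appears as a parent in Equations (\ref{eqn:factor1}) and (\ref{eqn:factor2}) and as a child in its own factorization equation, together with the averaging step that returns the system to a consistent state. For brevity the sketch averages all copies at once; the procedures given below average the child-side and parent-side copies at separate points in the upward and downward passes.

\begin{verbatim}
import numpy as np

# Three local copies of the same model variable x6 (assumed hidden here),
# one per factorization equation in which x6 appears (values arbitrary).
v_as_parent_1 = np.array([0.9, 0.1, 0.0])  # copy used in x1 = W1 x5 + W2 x6
v_as_parent_2 = np.array([0.7, 0.3, 0.0])  # copy used in x2 = W3 x6 + W4 x7
v_as_child    = np.array([0.8, 0.2, 0.0])  # copy used in x6 = W11 x12

# Averaging step: the model variable is set to the mean of its local
# copies, and every copy is then overwritten with that mean, putting the
# system of factorization equations back into a consistent state.
x6 = np.mean([v_as_parent_1, v_as_parent_2, v_as_child], axis=0)
v_as_parent_1 = v_as_parent_2 = v_as_child = x6.copy()
\end{verbatim}
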
In computing the mean values of the variables, the new mean for the model variables is computed only from the subset of the local variables that were updated in the corresponding inference or value propagation step. After performing the value propagation step, the updated values of all lower level variables in the model are a function of the top level variables.\n\nWhen a parameter matrix $W$ is tied across multiple vector factorization equations, the corresponding equations can be merged into a single matrix factorization equation, as we did in going from (\ref{eqn:nmf_vector}) to (\ref{eqn:nmf_matrix}). This merging will also be possible in the dynamic models that we present starting in Section~\ref{sec:main_factored_model}. In this case, the learning update of the upStep() procedure is simply modified to perform a single NMF left update instead of multiple left update steps. Appendix~\ref{appendix_inf_learn_2} describes a similar inference and learning algorithm in which the mean values of the variables are computed differently.\n\n\begin{algorithm}\n\caption{Perform inference and learning}\n\label{alg:inference_learning1}\n\begin{tabbing}\nInitialize hidden variables to random positive values \\\n\/\/ Main loop \\\n{\bf repe}\={\bf at} \\\n\> \/\/ Bottom-to-top inference and learning \\\n\> {\bf for} \= $l$ = 1 to $L -1$ \\\n\> \> upStep($l$) \\\n\> \> averageParents($l$) \\\n\> {\bf end} \\\n\> \/\/ Top-to-bottom value propagation \\\n\> {\bf for} $l$ = $L -1$ downto 1 \\\n\> \> downStep($l$) \\\n\> \> averageChildren($l$) \\\n\> {\bf end} \\\n{\bf until} convergence\n\end{tabbing}\n\end{algorithm}\n\n\n\n\begin{algorithm}\n\caption{upStep() and downStep() procedures}\n\label{alg:upStepDownStep}\n\begin{tabbing}\n\/\/ Using the values of the child variables at level $l$, update the values of the parent variables and \\\n\/\/ update the parameter matrices by performing NMF update steps. \\\n\/\/ Let $X_l$ denote the set of model variables \{$x^l_1, x^l_2, \dots, x^l_{LevelCount_l}$\} corresponding to the level $l$ nodes.\\\n\/\/ Let $FactorSystem_l$ denote the subset of the factorization equations from (\ref{eq:factorization_equation333})\\\n\/\/ \{$eq_j: $ such that $v^j_0$ in $eq_j$ corresponds to an $x^l_i \in X_l$\}. \\\n\/\/ Let $duplicationSetChild(i,l)$ = \{$j : eq_j \in FactorSystem_l$ and $v^j_0$ corresponds to $x^l_i$\}. \\\n{\bf upSt}\={\bf ep}($l$) \\\n\> {\bf for} \={\bf each} $j \in FactorSystem_l$ \\\n\> \> {\bf if} \= learning is enabled \\\n\> \> \> Learning update: Using $v^j_0 = W^j v^j$, perform a left NMF update on $W^j$, using, e.g. (\ref{eqn:left_nmf_update}) \\\n\> \> {\bf end} \\\n\> \> Inference update: Using $v^j_0 = W^j v^j$, perform a right NMF update on $v^j$, using, e.g. (\ref{eqn:right_nmf_update}) \\\n\> {\bf end} \\\n{\bf end} \\\n\\\n\/\/ Using the values of the parent variables, update the values of the level $l$ child variables by performing \\\n\/\/ value propagation. 
\\\\\n{\\bf down}\\={\\bf step}($l$) \\\\\n\\> {\\bf for} \\={\\bf each} $j \\in FactorSystem_l$ \\\\\n\\> \\> \/\/ Perform value propagation \\\\\n\\> \\> $v^j_0 \\gets W^j v^j$ \\\\\n\\> {\\bf end} \\\\\n{\\bf end} \n\\end{tabbing}\n\\end{algorithm}\n\n\\begin{algorithm}\n\\caption{averageChildren() and averageParents() procedures}\n\\label{alg:averagingProcedures}\n\\begin{tabbing}\n\/\/ Let $X_l$ denote the set of model variables \\{$x^l_1, x^l_2, \\dots, x^l_{LevelCount_l}$\\} corresponding to the level $l$ nodes.\\\\\n\/\/ Let $duplicationCountChild(i,l)$ = $|duplicationSetChild(i,l)|$.\\\\\n\/\/ Let $duplicationSet(i,l)$ = \\{$(j,k) : eq_j \\in FactorSystem$ and $v^j_k$ corresponds to $x^l_i$\\}. \\\\\n\/\/ Update the value of each $x^l_i \\in X_l$ as the mean value of the corresponding $v^j_0$ \\\\\n\/\/ that appear in $FactorSystem_l$. Then set all $v^j_k$ corresponding to $x^l_i$ to this value as well.\\\\\n{\\bf aver}\\={\\bf ageChildren}($l$) \\\\\n\\> {\\bf for} \\= $i$ = 1 to $LevelCount_l$ \\\\\n\\> \\> \/\/ For each (child) variable $x^l_i \\in X_l$ \\\\\n\\> \\> {\\bf if} \\=$x^l_i$ is hidden \\\\\n\\> \\> \\> meanValue = $\\frac{1}{duplicationCountChild(i,l)} \\sum_{j \\in duplicationSetChild(i,l)} v^j_0$ \\\\\n\\> \\> \\> {\\bf for} \\= {\\bf each} $(j,k) \\in duplicationSet(i,l)$ \\\\\n\\> \\> \\> \\> $v^j_k \\gets meanValue $ \\\\\n\\> \\> \\> {\\bf end} \\\\\n\\> \\> \\> $x^l_i \\gets meanValue $ \\\\\n\\> \\> {\\bf else if} $x^l_i$ is observed \\\\\n\\> \\> \\> {\\bf for each} $(j,k) \\in duplicationSet(i,l)$ \\\\\n\\> \\> \\> \\> $v^j_k \\gets x^l_i $ \\\\\n\\> \\> \\> {\\bf end} \\\\\n\\> \\> {\\bf end} \\\\\n\\> {\\bf end} \\\\\n{\\bf end} \\\\\n\\\\\n\/\/ Let $X_{l+1}$ denote the set of model variables \\{$x^{l+1}_1, x^{l+1}_2, \\dots, x^{l+1}_{LevelCount_{l+1}}$\\} corresponding to the level $l+1$ nodes.\\\\\n\/\/ Note that it is possible for a level $l$ variable to have some or all of its parents in a level higher than $l+1$, in\\\\\n\/\/ which case the set $X_{l+1}$ will only represent a proper subset of the parents of level $l$. \\\\\n\/\/ Let $duplicationSetParent(i,l+1)$ = \\{$(j,k) : eq_j \\in FactorSystem_l$ and $v^j_k, k \\geq 1$ corresponds to $x^{l+1}_i$\\}. \\\\\n\/\/ Let $duplicationCountParent(i,l+1) = |duplicationSetParent(i,l+1)|$. \\\\\n\/\/ For each $x^{l+1}_i \\in X_{l+1}$, update the corresponding $v^j_k, k \\geq 1$ to their mean value.\\\\\n{\\bf aver}\\={\\bf ageParents}($l$) \\\\\n\\> {\\bf for} \\= $i$ = 1 to $LevelCount_{l+1}$ \\\\\n\\> \\> \/\/ For each (parent) variable $x^{l+1}_i \\in X_{l+1}$ \\\\\n\\> \\> {\\bf if} \\=$x^{l+1}_i$ is hidden \\\\\n\\> \\> \\> meanValue = $\\frac{1}{duplicationCountParent(i,l+1)} \\sum_{(j,k) \\in duplicationSetParent(i,l+1)} v^j_k$ \\\\\n\\> \\> \\> {\\bf for} \\= {\\bf each} $(j,k) \\in duplicationSet(i,l+1)$ \\\\\n\\> \\> \\> \\> $v^j_k \\gets meanValue $ \\\\\n\\> \\> \\> {\\bf end} \\\\\n\\> \\> \\> $x^{l+1}_i \\gets meanValue$ \\\\\n\\> \\> {\\bf else if} $x^{l+1}_i$ is observed \\\\\n\\> \\> \\> {\\bf for each} $(j,k) \\in duplicationSet(i,l+1)$ \\\\\n\\> \\> \\> \\> $v^j_k \\gets x^{l+1}_i $ \\\\\n\\> \\> \\> {\\bf end} \\\\\n\\> \\> {\\bf end} \\\\\n\\> {\\bf end} \\\\\n{\\bf end} \n\\end{tabbing}\n\\end{algorithm}\n\n\nNote that the only distinction the algorithm makes between $X_E$ and $X_H$ is that the values of the $X_E$ variables are not updated during inference. 
If we wish to make some previously hidden variables observed, or vice versa, it is then trivial to modify the algorithm to handle this by simply enabling or disabling updates on those variables. \n\nWe also note that typically, some kind of normalization of the parameter matrices $W$ will be performed during the left NMF update steps. For example, one could normalize the columns of $W$ to have unit sum. \n\nNote also that inference and learning are performed jointly. We can perform inference alone by simply disabling the learning updates. If a subset of the parameters $\theta$ is known, then the learning updates can be disabled for those parameters.\n\nThe computational cost of performing a single iteration of the inference and learning algorithm is given by the sum of the costs of performing the learning and inference NMF updates and the value propagation multiplication on each factorization equation. The number of iterations to convergence can depend on several factors, such as the longest path in the graph, the NMF algorithm employed, and so on.\n\n\n\n\n\n\n\section{Factored sequential data models}\n\label{sec:main_factored_model}\n\nWe now consider PFNs for modeling sequential data, which we will refer to as dynamic positive factor networks (DPFNs). A DPFN provides for the modeling of a data sequence as an additive combination of realizations of an underlying process model. Analogously to a dynamic Bayesian network (DBN) \cite{Friedman1999}, a DPFN is created by horizontally replicating a PFN subgraph. Thus, a DPFN is simply a PFN that has a regular repeating graphical structure. The corresponding parameters are also typically replicated. We only require that the particular subgraph that is replicated correspond to a valid PFN. We will refer to the subgraph that is replicated as corresponding to a time slice. Although we use terminology that implies a time series, such an interpretation is not required in general. For example, the data could correspond to a biological sequence, or to the characters or words in a text document.\n\n\n\nOur models will make use of a non-negative linear state representation. Intuitively, one can think of the model as supporting the simultaneous representation of any additive combination of allowable realizations of an underlying state transition model, or more generally, an underlying dynamic process model. A state variable in the model is defined such that the dimensionality of the variable corresponds to the number of possible states. We allow these variables to be general non-negative vectors, so that the non-zero-valued components of the variable represent a (positive) weight or strength of the corresponding state. Any given non-zero valued state variable then corresponds to some superposition of states. A given pair of state variables in adjacent time slices is then factored in terms of a set of basis vectors that specify the allowable transitions, and a corresponding encoding vector. The factored state representation seems to allow significant representational power, while making use of a compact parameter set.\n\nIn this section we present a factored state transition model and present empirical results to illustrate the performance of the inference and learning algorithms.\n\n\subsection{Model}\n\label{sec:fact_state_tran_model}\n\nConsider a transition model with $M$ states and $R$ possible state transitions. Figure~\ref{fig:fsm1} shows an example state transition diagram for a 4-state automaton. 
This transition model contains $M = 4$ states, labeled $S_1, \dots, S_4$, and $R = 6$ state transitions, labeled $t_1, \dots, t_6$. \n\n\begin{figure}\n\centering\n\includegraphics[width=60ex]{.\/figures\/fsm1.pdf}\n\caption{A state transition diagram for a 4-state automaton.}\n\label{fig:fsm1}\n\end{figure}\n\nFigure~\ref{fig:1layerDynamic} shows the DPFN that we will use to model a process that evolves according to the transition model given in Figure~\ref{fig:fsm1}. This DPFN corresponds to a system of $T-1$ factorization equations where for each $t \in \{1, \dots, T-1\}$ we have:\n\n\begin{align}\n\left[ \begin{array}{c} \nx_t\\\nx_{t+1} \end{array} \right] =&\ W h_t \notag \\\n =&\ \left[ \begin{array}{c}\nW_1 \\\nW_2 \end{array} \right] h_t \n\label{eqn:single_timeslice_factorization}\n\end{align}\n\n\n\begin{figure}\n\centering\n\includegraphics[width=60ex]{.\/figures\/1layerDynamic.pdf}\n\caption{The DPFN corresponding to the factorized state model in Equation (\ref{eqn:key_factorization}). The first four time slices are shown. The state at time slice $t$ is represented by $x_t$. We place constraints on states at adjacent time slices such that the states $x_t$, $x_{t+1}$ are represented as an additive combination of the allowable state transition vectors. $h_t$ represents the encoded transitions for $x_t, x_{t+1}$.}\n\label{fig:1layerDynamic}\n\end{figure}\n\n\nSince the same parameter matrix $W$ appears in each equation, the above system of $T-1$ vector factorization equations can be expressed as a single matrix factorization equation:\n\n\begin{align}\n\label{eqn:expanded_key_factorization}\n\left[ \begin{array}{ccccc}\nx_1 & x_2 & x_3 & \dots & x_{T-1} \\\nx_{2} & x_{3} & x_{4} & \dots & x_{T} \end{array} \right] =&\ W \left[ \begin{array}{ccccc} h_1 & h_2 & h_3 & \dots & h_{T-1} \end{array} \right] \notag \\\n=&\ \left[ \begin{array}{c}\nW_1 \\\nW_2 \end{array} \right] \left[ \begin{array}{ccccc} h_1 & h_2 & h_3 & \dots & h_{T-1} \end{array} \right]\n\end{align}\n\n\n\nLet us define $x_{c_t}$ as the vertical concatenation of any two adjacent state variables $x_t$, $x_{t+1}$ so that:\n\n\begin{align}\nx_{c_t} = \left[ \begin{array}{c} \nx_t\\\nx_{t+1} \end{array} \right]\n\end{align}\n\nWe define the $2M \times (T-1)$ matrix $X_c$ as the following horizontal concatenation of the time-ordered $x_{c_t}$ vectors:\n\n\begin{align}\nX_c =& \left[ \begin{array}{cccccc} x_{c_1} & x_{c_2} & x_{c_3} & \dots & x_{c_{T-1}} \end{array} \right] \notag \\\n=& \left[ \begin{array}{ccccc}\nx_1 & x_2 & x_3 & \dots & x_{T-1} \\\nx_{2} & x_{3} & x_{4} & \dots & x_{T} \end{array} \right]\n\end{align}\n\nWe define the $R \times (T-1)$ matrix $H$ as the following horizontal concatenation of the time-ordered \{$h_t$\} vectors:\n\n\begin{align}\nH = \left[ \begin{array}{cccccc} h_1 & h_2 & h_3 & \dots & h_{T-1} \end{array} \right]\n\end{align}\n\nUsing the above matrix definitions, we can then write the matrix factorization equation (\ref{eqn:expanded_key_factorization}) more compactly as:\n\n\begin{align}\nX_c = W H\n\label{eqn:key_factorization}\n\end{align}\n\nWe now illustrate how this DPFN can be used to represent the transition model in Figure~\ref{fig:fsm1}. We let vector $x_t \in \mathbb{R}^M$ represent the model state at time $t$. The sequence $\{x_t : t = 1, \dots, T\}$ is then modeled as a realization of the transition model specified in the state transition diagram. 
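
Concretely, forming $X_c$ from $X$ in (\ref{eqn:key_factorization}) is just a stacking of adjacent columns; a minimal numpy sketch (the helper name is ours) is:

\begin{verbatim}
import numpy as np

def stack_adjacent_columns(X):
    """Return X_c, whose column t is [x_t; x_{t+1}]; shape is (2M, T-1)."""
    return np.vstack([X[:, :-1], X[:, 1:]])

# Example with M = 4 states and T = 5 time slices of non-negative data.
X = np.random.rand(4, 5)
X_c = stack_adjacent_columns(X)
assert X_c.shape == (8, 4)
\end{verbatim}
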
The parameter matrix $W$ specifies the transition model, and the vector $h_t$ represents an encoding of $(x_t, x_{t+1})$ in terms of the basis columns of $W$.\n\nThe parameter matrix $W$ is constructed directly from the state transition diagram as follows. For each state $S_i$, we define a corresponding \emph{state basis vector} $s_i \in \mathbb{R}^M$ such that the $i$'th component is 1 and all other components are zero. States $S_1, \dots, S_4$ then correspond to the following state basis vectors, respectively:\n\n\begin{align}\ns_1=& \left[ \begin{array}{c} \n1\\\n0\\\n0\\\n0 \end{array} \right], \ns_2 = \left[ \begin{array}{c} \n0\\\n1\\\n0\\\n0 \end{array} \right], \ns_3 = \left[ \begin{array}{c} \n0\\\n0\\\n1\\\n0 \end{array} \right], \ns_4 = \left[ \begin{array}{c} \n0\\\n0\\\n0\\\n1 \end{array} \right]\n\end{align}\n\nFor each transition $t_k$ in the state transition diagram that represents a transition from state $S_i$ to state $S_j$, we define the corresponding \emph{transition basis vector} $w_k$ as the vertical concatenation of the corresponding state basis vectors $s_i$ on top of $s_j$. Each of the $R$ transitions $t_k$ will then have a corresponding $2M \times 1$ transition basis vector $w_k$ given by:\n\n\begin{align}\nw_k = \left[ \begin{array}{c} \ns_i\\\ns_j \end{array} \right]\n\end{align}\n\nFor example, transition $t_2$ in the diagram, which represents a transition from state $S_2$ to state $S_3$, has a corresponding transition basis vector $w_2$ given by:\n\n\begin{align}\nw_2 = \left[ \begin{array}{c} \ns_2\\\ns_3 \end{array} \right] = \left[ \begin{array}{c} \n0\\\n1\\\n0\\\n0\\ \hline\n0\\\n0\\\n1\\\n0 \end{array} \right]\n\end{align}\n\nLet the $2M \times R$ \emph{transition basis matrix} $W$ be defined as the horizontal concatenation of the transition basis vectors \{$w_i$\}. Any ordering of the columns is possible. The columns of $W$ then specify the allowable transitions in our model:\n\n\begin{align}\nW =& \left[ \begin{array}{ccccc} w_1 & w_2 & w_3 & \dots & w_R \end{array} \right]\n\end{align}\n\nWe will find it useful to partition $W$ into upper and lower sub-matrices $W_1$ and $W_2$ such that the $M \times R$ upper sub-matrix $W_1$ represents the time slice $t$ state basis vectors and the $M \times R$ lower sub-matrix $W_2$ represents the corresponding time slice $t+1$ state basis vectors. 
One possible $W$ corresponding to the transition diagram in Figure~\\ref{fig:fsm1} is given by:\n\n\\begin{align}\nW =& \\left[ \\begin{array}{cccccc} w_1 & w_2 & w_3 & w_4 & w_5 & w_6 \\end{array} \\right] \\notag \\\\\n=&\\ \\left[ \\begin{array}{c}\nW_1 \\\\\nW_2 \\end{array} \\right] \\notag\\\\\n=&\\ \\left[ \\begin{array}{cccccc}\ns_1 & s_2 & s_3 & s_4 & s_3 & s_2 \\\\\ns_2 & s_3 & s_4 & s_1 & s_3 & s_4 \\end{array} \\right] \\notag \\\\\n=& \\left[ \\begin{array}{cccccc}\n1 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 & 0 & 1 \\\\\n0 & 0 & 1 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 1 & 0 & 0 \\\\ \\hline\n0 & 0 & 0 & 1 & 0 & 0 \\\\\n1 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 & 1 & 0 \\\\\n0 & 0 & 1 & 0 & 0 & 1 \\end{array} \\right] \n\\label{eqn:W_non_deterministic}\n\\end{align}\n\nConsider the following state sequence of length $T=10$, which is a valid sequence under the example transition diagram:\n\n\\begin{align}\n\\label{eq:example_state_seq}\nseq_1 = (S_1, S_2, S_3, S_4, S_1, S_2, S_3, S_4, S_1, S_2)\n\\end{align}\n\nNow suppose we construct a sequence of state variables $\\{x_t: t=1,\\dots,T\\}$ corresponding to this state sequence by setting $x_t$ equal to the corresponding state basis vector $s_i$ for state $S_i$ at time $t$:\n\n\\begin{align}\nx_1 = s_1 = \\left[ \\begin{array}{c} \n1\\\\\n0\\\\\n0\\\\\n0 \\end{array} \\right], \nx_2 = s_2 = \\left[ \\begin{array}{c} \n0\\\\\n1\\\\\n0\\\\\n0 \\end{array} \\right],\nx_3 = s_3 = \\left[ \\begin{array}{c} \n0\\\\\n0\\\\\n1\\\\\n0 \\end{array} \\right],\n\\dots,\nx_{10} = s_2 = \\left[ \\begin{array}{c} \n0\\\\\n1\\\\\n0\\\\\n0 \\end{array} \\right]\n\\label{eqn:simple_state_seq}\n\\end{align}\n\n\nLet us define the $M$ x $T$ matrix $X$ as the horizontal concatenation of the state variables $x_t$ as:\n\n\\begin{align}\nX =& \\left[ \\begin{array}{ccccc} x_1 & x_2 & x_3 & \\dots & x_T \\end{array} \\right]\n\\end{align}\n\n\nWe then have the following sequence $X$ corresponding to the state sequence (\\ref{eq:example_state_seq}):\n\n\n\\begin{align}\nX =& \\left[ \\begin{array}{cccccccccc} x_1 & x_2 & x_3 & x_4 & x_5 & x_6 & x_7 & x_8 & x_9 & x_{10} \\end{array} \\right] \\notag \\\\\n=& \\left[ \\begin{array}{cccccccccc} \n1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0\\\\\n0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1\\\\\n0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\end{array} \\right]\n\\label{eqn:fsm1_determ_X1}\n\\end{align}\n\nUsing the above $X$ and (\\ref{eqn:key_factorization}), we then have the following model factorization:\n\n\\begin{align}\nX_c =& W H \\notag \\\\\n\\left[ \\begin{array}{ccccccccc} \n1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1\\\\\n0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\\\\n0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0\\\\\n0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0\\\\ \\hline\n0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0\\\\\n1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1\\\\\n0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\\\\n0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\end{array} \\right] =&\n\\left[ \\begin{array}{cccccc}\n1 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 & 0 & 1 \\\\\n0 & 0 & 1 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 1 & 0 & 0 \\\\ \\hline\n0 & 0 & 0 & 1 & 0 & 0 \\\\\n1 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 & 1 & 0 \\\\\n0 & 0 & 1 & 0 & 0 & 1 \\end{array} \\right] \n\\left[ \\begin{array}{ccccccccc} \n1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1\\\\\n0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\\\\n0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0\\\\\n0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0\\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\end{array} 
\right]\n\end{align}\n\n Note that given $X_C$ and $W$, only one solution for $H$ is possible. We see that each pair of state vectors $x_{c_t}$ in $X_c$ corresponds to exactly one transition basis vector in the resulting factorization, since the value of each $x_t$ was chosen equal to one of the $s_i$ and $X$ corresponds to a valid state sequence. The columns of $H$ then give an encoding of $X$ in terms of the basis vectors of $W$ so that exactly one component of each column $h_t$ has value 1, corresponding to the transition that explains $x_{c_t}$. We say that a sequence $X$ of state vector observations forms an \emph{elementary state sequence} if there exists a factorization $X_C = W H$ such that each column of $H$ contains at most one nonzero component. \n\n\nNow let us return to the case where $x_t$ is no longer constrained to be equal to one of the state basis vectors. We only require $x_t$ to be a non-negative vector in $\mathbb{R}^M$. Although $W$ is specified in the example above as containing only 0 and 1 valued components, in general we only place the constraint that the matrices of the factorization equation be non-negative. Note that since the model is a PFN, the linearity property from Section~\ref{sec:model_specification} holds, so that any additive combination of solutions is also a solution. That is, any additive mixture of state sequences that are valid under the transition model of $W$ is representable. We can also see that this property follows directly from (\ref{eqn:key_factorization}). For non-negative scalars $\alpha$, $\beta$, we have:\n\n\begin{align}\nX_{c_a} =& W H_a \notag \\\nX_{c_b} =& W H_b \notag \\\n\Rightarrow \alpha X_{c_a} + \beta X_{c_b} =& W (\alpha H_a + \beta H_b)\n\end{align}\n\nFor the case where multiple components of $x_t$ are positive, the interpretation is that multiple states are simultaneously active at time $t$. For example, suppose that $x_t = (\alpha, 0, 0, \beta)^T$. That is, at time $t$, the model is in state $S_1$ with magnitude $\alpha$ and is simultaneously in state $S_4$ with magnitude $\beta$. Suppose that $x_{t+1}$ is hidden. We can see from the factorization equation above that the solution is $x_{t+1} = (\beta, \alpha, 0, 0)^T$. That is, our factored transition model specifies that if the model is in state $S_1$ with magnitude $\alpha$ at time $t$, then the model must be in state $S_2$ with magnitude $\alpha$ at time $t+1$. Likewise, if the model is in state $S_4$ with magnitude $\beta$ at time $t$, then the model must be in state $S_1$ with magnitude $\beta$ at time $t+1$. Due to the factored state representation, the model can represent both realizations simultaneously. A sequence of state vectors is modeled as a non-negative linear combination of the allowable transition basis vectors.\n\nNote also that since we do not impart a probabilistic interpretation, all allowable outgoing transitions from a given state can be considered equally likely. For example, suppose $x_{t+1}$ is hidden and that $x_t = (0, \alpha, 0, 0)^T$, corresponding to the model being in state $S_2$ with magnitude $\alpha$ at time $t$. Since there are multiple outgoing transitions from $S_2$ ($t_2$ and $t_6$), multiple solutions for $x_{t+1}$ are possible. 
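
A small numpy sketch (illustrative only; the helper names are ours) makes the construction of $W$ and the factored-state propagation above concrete. It builds the transition basis matrix of (\ref{eqn:W_non_deterministic}) from the transition list of Figure~\ref{fig:fsm1} and checks the $x_t = (\alpha, 0, 0, \beta)^T$ example:

\begin{verbatim}
import numpy as np

M = 4                                    # number of states
transitions = [(1, 2), (2, 3), (3, 4),   # (from, to) pairs for t_1, ..., t_6,
               (4, 1), (3, 3), (2, 4)]   # read off the transition diagram

def basis(i, n):
    """Standard basis vector of length n with a 1 in component i (1-indexed)."""
    s = np.zeros(n)
    s[i - 1] = 1.0
    return s

# Column k of W is the transition basis vector [s_i; s_j] for S_i -> S_j.
W = np.column_stack([np.concatenate([basis(i, M), basis(j, M)])
                     for (i, j) in transitions])
W1, W2 = W[:M, :], W[M:, :]

# The example from the text: x_t = (alpha, 0, 0, beta)^T, i.e. transition
# t_1 (S_1 -> S_2) active with weight alpha and t_4 (S_4 -> S_1) with beta.
alpha, beta = 0.7, 0.3
h_t = alpha * basis(1, len(transitions)) + beta * basis(4, len(transitions))
assert np.allclose(W1 @ h_t, [alpha, 0, 0, beta])   # x_t
assert np.allclose(W2 @ h_t, [beta, alpha, 0, 0])   # x_{t+1}
\end{verbatim}
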
We will explore the issue of performing inference when multiple solutions are possible in the following results sections.\n\nA learning and inference algorithm for this network can be obtained by applying Algorithm~\\ref{alg:inference_learning1} to the system of factorizations (\\ref{eqn:key_factorization}). Example pseudocode is presented in Appendix~\\ref{sec:simpleDyn1_inf_learn}.\n\n\n\n \n \n \n\\subsection{Empirical results}\n\\label{sec:dyn_model}\n\nIn this section, we perform inference and learning on the dynamic network in Figure~\\ref{fig:1layerDynamic}, for two distinct transition models, using synthetic data sets. We present results for the cases of fully and partially observed input sequences. We also present results for sequences that have been corrupted by noise. We first consider the case where the underlying transition model is known. We then present results for the case where the transition model is learned from training data.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=60ex]{.\/figures\/fsm1_deterministic.pdf}\n\\caption{A deterministic state transition diagram for a 4-state automaton.}\n\\label{fig:fsm1_deterministic}\n\\end{figure}\n\n\n\\subsubsection{Fully observed input sequences under a known transition model}\n\nWe start with a simple deterministic transition model, and will later revisit the earlier nondeterministic model from Figure~\\ref{fig:fsm1}. Figure~\\ref{fig:fsm1_deterministic} shows the state transition diagram for a deterministic model that is obtained by removing transitions $t_5$ and $t_6$ from the earlier nondeterministic model. Using the previously outlined procedure, we obtain a parameter matrix $W$ corresponding to this transition diagram as follows. First we construct a transition basis vector $w_k$ corresponding to each transition $t_k$:\n\n\\begin{align}\nw_1=& \\left( \\begin{array}{c} s_1\\\\\ns_2 \\end{array} \\right) ,\nw_2= \\left( \\begin{array}{c} s_2\\\\\ns_3 \\end{array} \\right) ,\nw_3= \\left( \\begin{array}{c} s_3\\\\\ns_4 \\end{array} \\right) ,\nw_4= \\left( \\begin{array}{c} s_4\\\\\ns_1 \\end{array} \\right) ,\n\\end{align}\n\nThe transition basis matrix $W$ is then constructed as the horizontal concatenation of the transition basis vectors (again, the column ordering is arbitrary):\n\n\\begin{align}\nW = \\left[ \\begin{array}{cccc} w_1 & w_2 & w_3 & w_4 \\end{array} \\right]\n\\end{align}\n\nWe start by considering the case where the state sequence $X$ is fully observed, and the transition sequence $H$ is hidden. We then wish to infer the values of $H$ given the observed $X$. Suppose we wish to specify an input sequence $X$ that corresponds to the following sequence of states: $(S_1, S_2, S_3, S_4, S_1, S_2, S_3, S_4, S_1, S_2)$, which is a valid state sequence under the transition diagram in Figure~\\ref{fig:fsm1_deterministic}. This sequence can be represented by setting $X =\\alpha \\left[ \\begin{array}{cccccccccc} s_1 & s_2 & s_3 & s_4 & s_1 & s_2 & s_3 & s_4 & s_1 & s_2 \\end{array} \\right]$, where $\\alpha$ is any positive scalar. 
For example, the choice $\\alpha = 1$ results in:\n\n\\begin{align}\nX = \\left[ \\begin{array}{cccccccccc} \n1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0\\\\\n0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1\\\\\n0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\end{array} \\right]\n\\label{eqn:not_used}\n\\end{align}\n\nWe will be dealing with matrices that are non-negative, typically sparse, and such that many or all of the non-zero components will take on the same value. For visualization purposes, the particular value will typically not be relevant. For these reasons we find that, rather than simply printing out the numerical values of a matrix as above, an image plot can be a more visually appealing way to take in the relevant information. In the remainder of this paper, we will therefore make extensive use of image plots of the matrices that appear in the various PFN factorization equations. In displaying the image plots, we make use of the ``hot'' colormap, so that zero-valued components appear black, small values appear red, larger values yellow, and the maximum valued component(s) in a matrix appear white. Figure~\\ref{fig:fsm1_determ_X1} shows an image plot of the observed $X$ from above.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=60ex]{.\/figures\/fsm1_determ_X1.jpg}\n\\caption{An image plot of the observed $X$ from Equation~\\ref{eqn:fsm1_determ_X1}, corresponding to the state sequence: $(S_1, S_2, S_3, S_4, S_1, S_2, S_3, S_4, S_1, S_2)$. Row $i$ of $X$ corresponds to state $S_i$. Column $i$ corresponds to the $i$'th time slice. Here and throughout this paper we use the \"hot\" color map shown on the right, so that components with a minimum component value are displayed in black, components with the maximum component value are displayed in white, and values in between are displayed in shades of red, orange, and yellow.}\n\\label{fig:fsm1_determ_X1}\n\\end{figure}\n\nRecall that the network in Figure~\\ref{fig:1layerDynamic} corresponds to the matrix factorization Equation (\\ref{eqn:key_factorization}), which we reproduce here:\n\n\\begin{align}\nX_C = W H\n\\label{eqn:repeatedDPN1}\n\\end{align}\n\nRecall also that $X_C$ is defined as the vertical stacking of adjacent columns of $X$ and so only $H$ is unknown (hidden). We solve for the values of $H$ by applying the inference algorithm from Appendix~\\ref{sec:simpleDyn1_inf_learn}, which is a special case of the general learning and inference algorithm specified in Algorithm~\\ref{alg:inference_learning1}. In solving for $H$, this corresponds to first initializing $H$ to random positive values and then iterating the inference algorithm (learning is disabled since $W$ is known) until convergence. For all empirical results in this paper, hidden variables are initialized to random positive values uniformly distributed between 0 and $10^{-6}$. Each iteration of the inference algorithm in this case corresponds to performing an NMF right update step, e.g., using (\\ref{eqn:right_nmf_update}).\n\nFigure~\\ref{fig:fsm1_determ_factor} shows a plot of the matrices in the factorization, where $H$ is shown after convergence of the inference algorithm. For this input sequence, a few hundred iterations was sufficient for the RMSE (root mean squared error between $X$ and the reconstruction $W H$) to drop below $10^-4$. 
The inference computation, implemented in Java and Python, took approximately 1 second to run on a desktop PC with a 3 GHz Core 2 Duo processor, which is representative of the time required to run each of the examples in this section.\n\n\begin{figure}\n\centering\n\includegraphics[width=80ex]{.\/figures\/fsm1_determ_factor.jpg}\n\caption{A plot of the matrices of the factorization $X_C = W H$, after solving for $H$. Note that the upper half submatrix of $X_C$ is equal to $X$, which corresponds to the state sequence $(S_1, S_2, S_3, S_4, S_1, S_2, S_3, S_4, S_1, S_2)$. Here, $W$ corresponds to the deterministic state transition diagram in Figure~\ref{fig:fsm1_deterministic}.}\n\label{fig:fsm1_determ_factor}\n\end{figure}\n\nWe now add some noise consisting of uniformly distributed values in $[0, 0.1]$ to $X$. Figure~\ref{fig:fsm1_determ_factor_noise} shows the resulting estimate for $H$. The noisy observations can no longer be represented exactly as a linear combination of the transition basis vectors, yielding an RMSE of 0.034. We observe empirically that the approximate factorization found by the NMF updates appears to be relatively insensitive to small amounts of additive noise, and produces an inference result that appears visually similar to applying additive noise to the previous noiseless solution for $H$. \n\begin{figure}\n\centering\n\includegraphics[width=80ex]{.\/figures\/fsm1_determ_factor_noise.jpg}\n\caption{A plot of the matrices of the factorization $X_C = W H$, where the observations $X$ are corrupted by additive noise. The inferred hidden transition sequence $H$ is shown after convergence of the inference algorithm. Here, $W$ corresponds to the deterministic state transition diagram in Figure~\ref{fig:fsm1_deterministic}.}\n\label{fig:fsm1_determ_factor_noise}\n\end{figure}\n\nWe now consider an example where the observation sequence corresponds to a mixture of realizations of the underlying transition model. Recall that if the observation sequence $X_a$ and the hidden sequence $H_a$ are one solution to the model, and the observation sequence $X_b$ and the hidden sequence $H_b$ are another solution to the model, then the observation sequence $X = X_a + X_b$ and the hidden sequence $H = H_a + H_b$ are also a solution of the model. As an example, consider the observation sequence $X = X_a + X_b$, where $X_a$ and $X_b$ are given by:\n\n\begin{align}\n X_a =& 0.5 \left[ \begin{array}{cccccccccc} s_1 & s_2 & s_3 & s_4 & s_1 & s_2 & s_3 & s_4 & s_1 & s_2 \end{array} \right] \notag \\\n X_b =& 1.0 \left[ \begin{array}{cccccccccc} s_3 & s_4 & s_1 & s_2 & s_3 & s_4 & s_1 & s_2 & s_3 & s_4 \end{array} \right]\n \label{eqn: superposition1}\n \end{align}\n\nFigure~\ref{fig:fsm1_determ_X1_superpos} shows an image plot of the observed sequence $X = X_a + X_b$. Note that both sequences $X_a$ and $X_b$ correspond to a valid realization of the underlying transition model of the network. From the superposition property, we know that the sum sequence $X$ has a corresponding hidden sequence $H$ which satisfies the factorization Equation~\ref{eqn:repeatedDPN1} with equality. The expressiveness of this network is such that any observations and hidden variables corresponding to any non-negative superposition of valid realizations of the underlying transition model specified by $W$ are representable. This does not necessarily mean that our particular choice of inference algorithm will always be able to find the corresponding solution, however. 
We do observe, though, that for this particular network choice, repeated runs of our inference algorithm always converged to an exact solution (specifically, inference was stopped when the RMSE dropped below a certain threshold: $10^{-4}$). Figure~\ref{fig:fsm1_determ_factor_superpos} shows an image plot of the inferred $H$ along with the other matrices in Equation~\ref{eqn:repeatedDPN1}. \n\n\n\begin{figure}\n\centering\n\includegraphics[width=60ex]{.\/figures\/fsm1_determ_X1_superpos.jpg}\n\caption{An image plot of an observation sequence $X = X_a + X_b$ consisting of an additive combination of two sequences $X_a$ and $X_b$, which are each a realization of the underlying network transition model. Here, $X_a$ begins in state $S_1$ in the first time slice, and corresponds to the orange components since it has half the magnitude of sequence $X_b$. Sequence $X_b$ begins in state $S_3$ and corresponds to the white components.}\n\label{fig:fsm1_determ_X1_superpos}\n\end{figure}\n\n\begin{figure}\n\centering\n\includegraphics[width=80ex]{.\/figures\/fsm1_determ_factor_superpos.jpg}\n\caption{An image plot of the matrices of the factorization $X_C = W H$, where the observations are an additive combination of two sequences: $X = X_a + X_b$. $H$ is shown after convergence of the inference algorithm. Here, $W$ corresponds to the deterministic state transition diagram in Figure~\ref{fig:fsm1_deterministic}.}\n\label{fig:fsm1_determ_factor_superpos}\n\end{figure}\n\n\subsubsection{Partially observed input sequences under a known deterministic transition model}\n\label{sec:example_deterministic_fsm}\n\n\n\n\nWe now consider the case where the sequence $X$ is partially observed so that we are given the values of $x_t$ for some subset of time slices, and the values of $x_t$ for the remaining time slices are hidden. We would then like to infer the values of all hidden variables in the model, which will consist of $H$ along with the hidden subset of $\{x_t\}$. \n\nSuppose that $X$ represents a sequence of length 10 where the first time slice $x_1$ is observed and corresponds to state $S_1$. That is, $x_1 = \alpha s_1$. We arbitrarily choose $\alpha = 1$. We then wish to infer the values of the future time slices of $X$ as well as the values of $H$. That is, we wish to perform prediction. We again use the same inference algorithm and initialize all hidden variables to small positive random values.\n\nFigure~\ref{fig:fsm1_determ_factor_X1_after_various_iterations} shows the initial $X$ and the estimates after 10, 50, and 500 iterations of the inference algorithm. We see that convergence is effectively reached by 500 iterations. Figure~\ref{fig:fsm1_determ_factor_partial} shows the corresponding plots of the matrices of the network factorization equation after 500 iterations. We observe that the inference estimates for $X$ appear to converge outward in time from the observed time slice $x_1$. The local variable averaging step of the inference algorithm causes the transition model to propagate a positive value one time slice forward and backward for each iteration. 
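
The following self-contained numpy sketch illustrates this prediction experiment in the spirit of the specialized algorithm in Appendix~\ref{sec:simpleDyn1_inf_learn}; the clamping and averaging details below are our paraphrase of Algorithm~\ref{alg:inference_learning1} for this network and may differ from the exact implementation used to produce the figures. Each iteration performs a right NMF update of every $h_t$ from the current state estimates, propagates $W h_t$ back down, and averages the two local copies of each interior $x_t$, with the observed time slice held fixed.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, T = 4, 10
transitions = [(1, 2), (2, 3), (3, 4), (4, 1)]      # deterministic model
basis = lambda i, n: np.eye(n)[:, i - 1]
W = np.column_stack([np.concatenate([basis(i, M), basis(j, M)])
                     for (i, j) in transitions])
W1, W2 = W[:M], W[M:]

observed = {0: basis(1, M)}                          # only x_1 = s_1 observed
X = rng.uniform(0, 1e-6, size=(M, T))                # hidden slices: random init
for t, value in observed.items():
    X[:, t] = value
H = rng.uniform(0, 1e-6, size=(len(transitions), T - 1))

for _ in range(500):
    # Bottom-to-top: right NMF update of each h_t from [x_t; x_{t+1}].
    Xc = np.vstack([X[:, :-1], X[:, 1:]])
    H *= (W.T @ Xc) / (W.T @ W @ H + 1e-12)
    # Top-to-bottom: propagate W h_t, then average the two local copies
    # of each interior x_t (one from equation t-1, one from equation t).
    top, bottom = W1 @ H, W2 @ H       # estimates of x_1..x_{T-1}, x_2..x_T
    X[:, 0] = top[:, 0]
    X[:, -1] = bottom[:, -1]
    X[:, 1:-1] = 0.5 * (top[:, 1:] + bottom[:, :-1])
    for t, value in observed.items():  # observed slices are never modified
        X[:, t] = value

print(np.argmax(X, axis=0) + 1)        # recovers 1 2 3 4 1 2 3 4 1 2
\end{verbatim}
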
Note also that, given the observed time slice, there is only one possible solution for the hidden variables (and our inference algorithm successfully finds it), since the transition model is deterministic.\n\n\begin{figure}\n\centering\n\includegraphics[width=80ex]{.\/figures\/fsm1_determ_factor_X1_after_various_iterations.jpg}\n\caption{A plot of the estimate for $X$ after various iteration counts of the inference algorithm. The top image shows the initial $X$ which only has the first time slice (time index 1 in the figure) set to state $S_1$ (i.e., $x_1 = s_1$). The future time slices $(x_2 \dots x_{10})$ are initialized to small random values, which appear black in the figure since their maximum value of $10^{-6}$ is small compared to the largest value of 1 in the figure. The next lower figures show the $X$ estimates after 10, 50, and 500 iterations, respectively. We see that 500 iterations is sufficient to effectively reach convergence.}\n\label{fig:fsm1_determ_factor_X1_after_various_iterations}\n\end{figure}\n\nAlthough the first time slice $x_1$ was chosen as the only observed variable above, we could have just as easily chosen any subset of time slices (or even any subset of components) of $X$ and $H$ as the observed subset $X_E$. For example, Figure~\ref{fig:fsm1_determ_factor_X1_after_various_iterations2} shows the estimates of $X$ for various iteration counts for the case where the time slice $x_4$ is observed and all other time slices are hidden. As expected, we observe that the estimates propagate outward (both backward and forward) from $x_4$ with increasing iteration count.\n\n\n\n\begin{figure}\n\centering\n\includegraphics[width=80ex]{.\/figures\/fsm1_determ_factor_partial.jpg}\n\caption{An image plot of the matrices of the factorization $X_C = W H$, after solving for $H$ and the hidden time slices $(x_2 \dots x_{10})$ of $X$. Here, $W$ corresponds to the deterministic state transition diagram in Figure~\ref{fig:fsm1_deterministic}.}\n\label{fig:fsm1_determ_factor_partial}\n\end{figure}\n\n\begin{figure}\n\centering\n\includegraphics[width=80ex]{.\/figures\/fsm1_determ_factor_X1_after_various_iterations2.jpg}\n\caption{A plot of the estimate for $X$ after various iteration counts of the inference algorithm. The top image shows the initial $X$ in which $x_4$ is observed and all other variables are hidden. Here we let $x_4$ correspond to state $S_1$ by setting $x_4 = s_1$. All other variables (i.e., $(x_1,\dots,x_3), (x_5,\dots,x_{10}), (h_1,\dots,h_9)$) are initialized to small positive random values. We see that convergence is effectively reached by 500 iterations.}\n\label{fig:fsm1_determ_factor_X1_after_various_iterations2}\n\end{figure}\n\n\subsubsection{Partially observed input sequences under a known nondeterministic transition model}\n\nIn this section we perform experiments to observe how the inference algorithm performs when multiple solutions for the hidden variables are possible. We will employ a nondeterministic state transition model and supply partial state observations so that multiple solutions will be possible for at least some of the hidden states. We will now use the nondeterministic state transition diagram in Figure~\ref{fig:fsm1}, which corresponds to the transition basis matrix $W$ from Equation~\ref{eqn:W_non_deterministic}.\n\nWe first perform an experiment to verify that the inferred values for the hidden transitions $H$ are correct for a short fully observed sequence of observations $X$. 
Figure~\\ref{fig:fsm1_nondeterm_X1} shows an image plot for $X = \\left[ \\begin{array}{cccccccccc} s_1 & s_2 & s_3 & s_3 & s_4 & s_1 & s_2 & s_4 & s_1 & s_2 \\end{array} \\right]$, which represents a valid sequence of state transitions under the transition model in Figure~\\ref{fig:fsm1}. We then perform inference on $H$ using the algorithm from Appendix~\\ref{sec:simpleDyn1_inf_learn}, which is a special case of the general learning and inference algorithm specified in Algorithm~\\ref{alg:inference_learning1}. The inference results are shown in Figure~\\ref{fig:fsm1_nondeterm_factor} and correspond to the correct factorization.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=60ex]{.\/figures\/fsm1_nondeterm_X1.jpg}\n\\caption{An image plot of $X$ corresponding to the state sequence: $(S_1, S_2, S_3, S_3, S_4, S_1, S_2, S_4, S_1, S_2)$. This is a valid realization under the transition diagram in Figure~\\ref{fig:fsm1}.}\n\\label{fig:fsm1_nondeterm_X1}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=80ex]{.\/figures\/fsm1_nondeterm_factor.jpg}\n\\caption{An image plot of the matrices of the factorization $X_C = W H$, after solving for $H$. Note that $X$ is equal to the upper half submatrix of $X_C$ and corresponds to the state sequence $(S_1, S_2, S_3, S_3, S_4, S_1, S_2, S_4, S_1, S_2)$. Here, $W$ corresponds to the nondeterministic state transition diagram in Figure~\\ref{fig:fsm1}.}\n\\label{fig:fsm1_nondeterm_factor}\n\\end{figure}\n\nWe now consider a partially observed sequence of state observations for $X$. Let\n\n\\begin{align}\nX = \\left[ \\begin{array}{cccccccccc}? & s_2 & ? & s_3 & s_4 & ? & ? & ? & ? & ?\\end{array} \\right]\n\\label{eqn:x1PartiallyObserved1}\n\\end{align}\n\nwhere a \"?\" state vector denotes a hidden state. We initialize $H$ and all hidden variables in $X$ to small positive random values and run the inference algorithm to convergence. Figure~\\ref{fig:fsm1_nondeterm2_X1} shows $X$ before and after performing inference. An exact factorization is found and is shown in Figure~\\ref{fig:fsm1_nondeterm2_factor}. We observed that repeated runs always converged to an exact solution, although a different (but valid) result was obtained each time. Observe that some of the hidden variables correspond to exactly one state, while others correspond to a superposition of states, such that the column sums are equal. We do not explicitly normalize the columns to have equal column sums in the inference procedure; rather, the equality follows from the choice of $W$ and the observed variables. Looking at the transition diagram in Figure~\\ref{fig:fsm1}, we see that the inferred variables with a single state component (i.e., $x_t: t \\in \\{1, 3, 6, 7\\}$) occur at the time slices for which there is exactly one possible state that satisfies the transition model, given the observed variables in $X$. For example, time slice 2 has an observed state $x_2 = s_2$, corresponding to $S_2$. From the transition diagram we see that the only state that can transition to $S_2$ is $S_1$, and therefore the hidden state at time slice 1 must correspond to state $S_1$. The variables with multiple distinct state components (i.e., $x_t, t \\in \\{ 8,9,10\\}$) correspond to time slices for which there are multiple possible transitions that satisfy the transition model. We can think of these time slices as simultaneously being in multiple distinct states. 
For example, a valid solution for hidden state $x_8$ can consist of any additive combination of the state vectors $s_3, s_4$ such that the column sum of $x_8$ is 1.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=80ex]{.\/figures\/fsm1_nondeterm2_X1.jpg}\n\\caption{The top image plot shows $X$ after initialization, corresponding to the partially observed state sequence: $(?, S_2, ?, S_3, S_4, ?, ?, ?, ?, ?)$, where \"?\" denotes a hidden state. The bottom image plot shows the inference results for $X$ in which the hidden variables have now been estimated. Observe that variables $x_1, x_3, x_6, x_7$ correspond to exactly one state and variables $x_8,x_9,x_{10}$ correspond to a superposition of states, since multiple solutions are possible for $x_8,x_9,x_{10}$.}\n\\label{fig:fsm1_nondeterm2_X1}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=80ex]{.\/figures\/fsm1_nondeterm2_factor.jpg}\n\\caption{An image plot of the matrices of the factorization $X_C = W H$, after solving for both $H$ and the hidden variables of $X$.}\n\\label{fig:fsm1_nondeterm2_factor}\n\\end{figure}\n\nSuppose that we now make the final time slice observed by setting $x_{10} = s_2$, corresponding to the partially observed sequence $(?, S_2, ?, S_3, S_4, ?, ?, ?, ?, S_2)$. In this case, there is only one possible configuration of hidden states that is consistent with the observed states and with the underlying transition model. Figures~\\ref{fig:fsm1_nondeterm3_X1} and \\ref{fig:fsm1_nondeterm3_factor} show the corresponding inference results. We observed that repeated runs of the algorithm always converged to the correct solution.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=80ex]{.\/figures\/fsm1_nondeterm3_X1.jpg}\n\\caption{The top image plot shows $X$ after initialization, corresponding to the partially observed state sequence: $(?, S_2, ?, S_3, S_4, ?, ?, ?, ?, S_2)$, where \"?\" denotes a hidden state. The bottom image plot shows $X$ again after running the inference algorithm to infer the hidden states. In this case, there is only one possible configuration of hidden states that is consistent with the observed states and with the underlying transition model.}\n\\label{fig:fsm1_nondeterm3_X1}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=80ex]{.\/figures\/fsm1_nondeterm3_factor.jpg}\n\\caption{An image plot of the matrices of the factorization $X_C = W H$, after solving for $H$. This is for the case where $X$ corresponds to the partially observed sequence: $(?, S_2, ?, S_3, S_4, ?, ?, ?, ?, S_2)$. In this case, there is only one possible configuration of hidden states that is consistent with the observed states and with the underlying transition model.}\n\\label{fig:fsm1_nondeterm3_factor}\n\\end{figure}\n\nIn the case where multiple valid solutions are possible for the hidden variables, it is interesting to consider modifying the inference algorithm to attempt to find sparse solutions. One possibility that is simple to implement and seems to provide robust solutions consists of replacing the standard NMF update steps of the basic inference algorithm specified in Algorithm~\\ref{alg:inference_learning1} with corresponding sparse NMF update steps. For these experiments, we chose to use the nonsmooth non-negative matrix factorization (nsNMF) algorithm \\cite{nsNMF2006}, which is described in Appendix~\\ref{appendix_sparse_nmf} for reference. 
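The core idea behind the sparse update is easy to state: a smoothing matrix is interposed between $W$ and $H$ in the multiplicative update, which pushes the inferred activations toward sparser solutions as the sparseness value increases. The following sketch shows this idea in its simplest (Euclidean-cost) form, with $\\theta$ playing the role of the sparseness value and $\\theta = 0$ recovering the plain update; the nsNMF algorithm we actually use is the one described in Appendix~\\ref{appendix_sparse_nmf} and may differ in detail.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef sparse_H_update(W, X_C, H, theta=0.1, eps=1e-9):\n    # One nonsmooth-NMF style multiplicative update of H, with W held fixed.\n    r = W.shape[1]\n    # smoothing matrix: theta = 0 gives the identity (no sparsity pressure)\n    S_theta = (1.0 - theta) * np.eye(r) + (theta / r) * np.ones((r, r))\n    WS = W @ S_theta                   # smoothed basis used in place of W\n    H *= (WS.T @ X_C) / (WS.T @ WS @ H + eps)\n    return H\n\\end{verbatim}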
Using the partially observed sequence from Equation~\\ref{eqn:x1PartiallyObserved1} and the modified inference algorithm with an nsNMF sparseness value of 0.1, we obtain the results shown in Figures~\\ref{fig:fsm1_nondeterm4_X1} and \\ref{fig:fsm1_nondeterm4_factor}. Repeated runs of the algorithm appear to produce distinct solutions such that each inferred hidden state $x_t$ has only a single nonzero component, corresponding to a single active state in any given time slice. For sparseness values much lower than 0.1, we observe that a solution consisting of a superposition of states can still occur, but most of the weight tends to be concentrated in a single state for each time slice.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=80ex]{.\/figures\/fsm1_nondeterm4_X1.jpg}\n\\caption{The top image plot shows $X$ after initialization, corresponding to the partially observed state sequence: $(?, S_2, ?, S_3, S_4, ?, ?, ?, ?, ?)$, where \"?\" denotes a hidden state. The bottom image plot shows $X$ again after running the inference algorithm to infer the hidden states. Sparse NMF with sparseness = 0.1 was used. We observe that multiple solutions are possible (since $x_{10}$ is now hidden), but the sparsity constraint leads to solutions in which only one state is active in any given time slice.}\n\\label{fig:fsm1_nondeterm4_X1}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=80ex]{.\/figures\/fsm1_nondeterm4_factor.jpg}\n\\caption{An image plot of the matrices of the factorization $X_C = W H$, after solving for $H$. Note that $X$ is equal to the upper half submatrix of $X_C$ and corresponds to the state sequence $(?, S_2, ?, S_3, S_4, ?, ?, ?, ?, ?)$. Sparse NMF with sparseness = 0.1 was used. We observe that multiple solutions are possible (since $x_{10}$ is now hidden), but the sparsity constraint leads to solutions in which only one state is active in any given time slice.}\n\\label{fig:fsm1_nondeterm4_factor}\n\\end{figure}\n\nWe now perform an experiment to see what happens if all model variables are made hidden (i.e., $X$ and $H$ are initialized to positive random values) and we perform inference using the sparse NMF updates. The top plot of Figure~\\ref{fig:fsm1_nondeterm5_X1} shows $X$ after being initialized to random positive values uniformly distributed between 0 and 1. The bottom plot shows the result of one run of the inference algorithm using an nsNMF sparseness value of 0.1. Figure~\\ref{fig:fsm1_nondeterm5_factor} shows the resulting factorization, which converged to an exact solution. We observe that repeated runs of the inference algorithm produce factorizations corresponding to random samples of the underlying transition model. This interesting result only appears to occur when we use the sparse inference algorithm. The non-sparse algorithm also converges to an exact factorization, but the inferred $X$ and $H$ then tend to consist of a superposition of many states in any given time slice.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=80ex]{.\/figures\/fsm1_nondeterm5_X1.jpg}\n\\caption{The top image plot shows $X$ after being initialized to random positive values. 
The bottom image plot shows $X$ again after convergence of the inference algorithm, using sparse NMF with sparseness = 0.1.}\n\\label{fig:fsm1_nondeterm5_X1}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=80ex]{.\/figures\/fsm1_nondeterm5_factor.jpg}\n\\caption{An image plot of the matrices of the factorization $X_C = W H$, after solving for $X$ and $H$, which were initialized to random positive values. Even though all model variables are hidden, the sparse NMF updates caused the inference algorithm to converge to a sparse solution corresponding to a valid state sequence under the state transition diagram in Figure~\\ref{fig:fsm1}, in which only a single state is active in any given time slice.}\n\\label{fig:fsm1_nondeterm5_factor}\n\\end{figure}\n\n\\subsubsection{Learning the transition model from training data}\n\\label{sec:learn_tran_model_from_data}\n\n\nWe now attempt to learn a transition model from training data. The training data will consist of an observed sequence $X$, and $H$ will be hidden. The model parameters $W$ are therefore unknown and will be learned from the observed sequence $X$. We first consider the case where the input sequence $X$ is an elementary sequence, so that any two adjacent state vectors $x_t, x_{t+1}$ can be represented by a single transition basis vector in $W$. That is, any state vector $x_t$ contains only one nonzero component and therefore corresponds to exactly one state in the transition model. We let $X$ consist of a sequence of state basis vectors, such as:\n\n\\begin{align}\nX = \\left[ \\begin{array}{ccccc}s_1 & s_2 & s_3 & s_3 & \\dots \\end{array} \\right]\n\\end{align}\n\nwhere the states are constrained to evolve according to the transition diagram in Figure~\\ref{fig:fsm1}. The training sequence is generated as follows: we choose the initial state $x_1$ randomly with uniform probability. When multiple outgoing transitions from a state are possible, the next state vector is chosen randomly with equal probability from the set of allowable transitions. For all experiments in this section, we use a training sequence length of 1000. \n\n\n\nRecall that inference and learning are performed jointly, so that in the process of learning $W$, we also end up with an estimate for $H$ (the transition activations for the training sequence). The column count of $W$ is a free parameter that must be specified before performing learning. Since we already know the transition model that was used to generate the training data, we know that we must specify at least 6 columns for $W$ in order to have any hope of achieving an exact factorization. We will therefore specify that $W$ have 6 columns. We then run the inference and learning algorithm from Appendix~\\ref{sec:simpleDyn1_inf_learn} to convergence. Figure~\\ref{fig:fsm1_learnW_1component_6basis_factor} shows the inference and learning results on a length 10 sub-sequence of $X$. Observe that the learned $W$ contains the 6 correct transition basis vectors. However, we have observed that it can sometimes take a few runs of the algorithm in order to find an exact factorization. When the column count of $W$ was then increased slightly to 8, we found that the inference and learning algorithm always converged to an exact factorization. Figure~\\ref{fig:fsm1_learnW_1component_8basis_factor} shows the inference and learning results for a $W$ with 8 columns. Note that although the transition basis vectors are correct, there are now some duplicate columns in $W$. 
This is due to the normalization step in which the columns of $W$ are normalized to have unit sum as part of the NMF left (learning) update step. \n\nWe have observed that the number of duplicate columns in $W$ can be reduced if we use sparse NMF updates in the inference and learning algorithm and also remove the column sum normalization step from the NMF left update step. We instead normalize each column of $W$ so that the upper and lower sub-columns (corresponding to $W_1$ and $W_2$) are constrained to have equal sum, which may be 0. This allows unneeded columns of $W$ to tend towards zero during the learning updates. Using an nsNMF sparseness value of 0.1, we obtain the results in Figure~\\ref{fig:fsm1_learnW_1component_8basis_factorSparse}. We observe that even though it was specified that $W$ have two more columns than required to represent the transition model, the learned $W$ contains only the required 6 columns and two zero-valued columns. Note also that since we have omitted the step of normalizing all columns to have sum equal to one, the nonzero columns of $W$ no longer sum to 1. If we wish for the nonzero columns to sum to the same value (e.g., 1), we could consider adding column normalization for the nonzero columns as a post-processing step. We have also observed that the use of the nsNMF algorithm leads to a small approximation error, so that the resulting factorization is no longer exact.\n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=80ex]{.\/figures\/fsm1_learnW_1component_6basis_factor.jpg}\n\\caption{An image plot of the matrices of the factorization $X_C = W H$, after learning $W$ and solving for $H$, which were initialized to random positive values. $W$ was constrained to have 6 transition basis vectors. The training sequence $X$ consisted of state basis vectors that evolved according to the transition diagram in Figure~\\ref{fig:fsm1}.}\n\\label{fig:fsm1_learnW_1component_6basis_factor}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=80ex]{.\/figures\/fsm1_learnW_1component_8basis_factor.jpg}\n\\caption{An image plot of the matrices of the factorization $X_C = W H$, after learning $W$ and solving for $H$, which were initialized to random positive values. $W$ was constrained to have 8 transition basis vectors. Since there are two more columns than necessary to represent the transition model, observe that there are two duplicate columns in the learned $W$. The training sequence $X$ consisted of state basis vectors that evolved according to the transition diagram in Figure~\\ref{fig:fsm1}.}\n\\label{fig:fsm1_learnW_1component_8basis_factor}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=80ex]{.\/figures\/fsm1_learnW_1component_8basis_factorSparse.jpg}\n\\caption{An image plot of the matrices of the factorization $X_C = W H$, after learning $W$ and solving for $H$. $W$ was constrained to have 8 transition basis vectors. There are two more columns than necessary to represent the transition model, but we remove the normalization step from the $W$ update and set the nsNMF sparseness value to 0.1. The learning algorithm then learns only the 6 required columns. That is, we see that two columns of the learned $W$ are zero-valued.}\n\\label{fig:fsm1_learnW_1component_8basis_factorSparse}\n\\end{figure}\n\n\nWe now consider the more challenging learning problem in which the training sequence consists of an additive mixture of realizations of the transition model in Figure~\\ref{fig:fsm1}. 
We construct the training sequence $X$ as the following additive mixture of three scaled elementary state transition realization sequences:\n\n\\begin{align}\nX = 0.5\\, X^{elem}_1 + 1.0\\, X^{elem}_2 + 1.5\\, X^{elem}_3\n\\end{align}\n\nwhere the $X^{elem}_i$ are independent elementary state sequences, each consisting of a sequence of state basis vectors that conform to the transition model in Figure~\\ref{fig:fsm1}. Figure~\\ref{fig:fsm1_learnW_mixture_component_X1} shows the first 10 time slices of the training sequence. Note that each time slice of $X$ now corresponds to a state vector that represents a superposition of three states. Given this training sequence, we perform inference and learning using 8 columns for $W$. The algorithm converges to an exact factorization, which is shown in Figure~\\ref{fig:fsm1_learnW_mixture_component_factor}, and we see that the underlying transition model was learned, even though the training data consisted of an additive mixture of elementary state sequences. Thus, even though a realization of the transition model was not presented individually as training data, the model was still able to learn an underlying transition model that explained the mixture training sequence.\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=80ex]{.\/figures\/fsm1_learnW_mixture_component_X1.jpg}\n\\caption{An image plot showing the first 10 time slices of the length 1000 training sequence $X$ consisting of an additive mixture of 3 elementary state transition sequences.}\n\\label{fig:fsm1_learnW_mixture_component_X1}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=80ex]{.\/figures\/fsm1_learnW_mixture_component_factor.jpg}\n\\caption{An image plot of the matrices of the factorization $X_C = W H$, after learning $W$ and solving for $H$ where the $X$ training sequence consists of an additive mixture of three state transition sequences. $W$ was specified to have 8 transition basis vectors. The underlying transition model used to generate the training data corresponds to the transition diagram in Figure~\\ref{fig:fsm1}.}\n\\label{fig:fsm1_learnW_mixture_component_factor}\n\\end{figure}\n\nWe now add a noise component to the training sequence so that:\n\n\\begin{align}\nX = 0.5\\, X^{elem}_1 + 1.0\\, X^{elem}_2 + 1.5\\, X^{elem}_3 + \\epsilon\n\\end{align}\n\nwhere $\\epsilon$ is a 4 x $L$ matrix of uniformly distributed noise between 0 and 0.1. Figure~\\ref{fig:fsm1_learnW_mixture_noise_component_X1} shows a portion of the noisy training sequence. The inference and learning algorithm converges to the approximate factorization shown in Figure~\\ref{fig:fsm1_learnW_mixture_noise_component_factor}. We see that the underlying transition model was still recovered in $W$, although with a small error. 
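For reference, the sketch below shows one way such a noisy mixture training sequence could be generated. It assumes the sub-blocks $W_1$ (``from'' states) and $W_2$ (``to'' states) of the known transition basis matrix are available as NumPy arrays; the helper \\texttt{sample\\_elementary\\_sequence} is hypothetical and simply random-walks the transition diagram, choosing uniformly among the allowable outgoing transitions at each step.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef sample_elementary_sequence(W1, W2, length, rng):\n    # Random-walk the transition diagram encoded by W = [W1; W2], returning\n    # an (n_states, length) array of one-hot state columns.\n    n = W1.shape[0]\n    X = np.zeros((n, length))\n    state = int(rng.integers(n))                 # uniform random initial state\n    X[state, 0] = 1.0\n    for t in range(1, length):\n        outgoing = np.flatnonzero(W1[state] > 0)              # transitions leaving `state`\n        state = int(np.argmax(W2[:, rng.choice(outgoing)]))   # follow one, chosen uniformly\n        X[state, t] = 1.0\n    return X\n\n# W1, W2: known sub-blocks of the transition basis matrix (assumed available)\nrng = np.random.default_rng(0)\nL = 1000\nX = (0.5 * sample_elementary_sequence(W1, W2, L, rng)\n     + 1.0 * sample_elementary_sequence(W1, W2, L, rng)\n     + 1.5 * sample_elementary_sequence(W1, W2, L, rng)\n     + rng.uniform(0.0, 0.1, size=(W1.shape[0], L)))   # additive noise in [0, 0.1]\n\\end{verbatim}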
These experiments indicate that it is possible to learn an underlying transition model, given only noisy training data corresponding to an additive mixture of realizations of the underlying model.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=80ex]{.\/figures\/fsm1_learnW_mixture_noise_component_X1.jpg}\n\\caption{An image plot showing the first 10 time slices of the length 1000 training sequence $X$ consisting of an additive mixture of 3 elementary state transition sequences and a noise component.}\n\\label{fig:fsm1_learnW_mixture_noise_component_X1}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=80ex]{.\/figures\/fsm1_learnW_mixture_noise_component_factor.jpg}\n\\caption{An image plot of the matrices of the factorization $X_C = W H$, after learning $W$ and solving for $H$ where the training sequence $X$ consists of an additive mixture of three state transition sequences plus a noise component. $W$ was specified to have 8 transition basis vectors. The underlying transition model used to generate the training data corresponds to the transition diagram in Figure~\\ref{fig:fsm1}.}\n\\label{fig:fsm1_learnW_mixture_noise_component_factor}\n\\end{figure}\n\n\n\n\n\\section{Hierarchical factored state model}\n\\label{sec:main_hierarchical_state_model}\n\n\n\nIn this section we present DPFNs for modeling sequential data with hierarchical structure. The general DPFN framework allows for the representation of dynamical models with complex hierarchical structure. We present multilevel hierarchical networks in which only certain combinations of upper level and lower level variables and state transitions may be simultaneously active. We show how such networks provide for the modeling of factored hierarchical state transition models.\n\nThe models that we present in this section can be thought of as two copies of the network from Figure~\\ref{fig:1layerDynamic}, one for each level of the hierarchy, along with an additional coupling module to enforce the desired coupling constraints between pairs of variables within a time slice. As a concrete example, we present a hierarchical state model for a regular expression and present empirical results.\n\n\n\\subsection{Model}\n\\label{sec:dyn2level}\nConsider a 2-level network in which each level represents a state transition model. One can think of each of the levels as having a corresponding state transition diagram. The active state transitions in the 2 levels are then coupled so that only certain combinations of level 2 and level 1 state transitions may be simultaneously active. Figure~\\ref{fig:2layerDynamic} shows a 2-level network for modeling a factored hierarchical state model with two levels in the transition hierarchy. The model immediately extends to an arbitrary number of levels. The level 2 state variables are denoted by \\{$x^2_t: t = 1, \\dots, T$\\} and the level 1 state variables are denoted by $\\{x^1_t: t=1, \\dots, T\\}$. The level 2 transition variables $\\{h^2_t: t=1, \\dots, T-1 \\}$ are coupled to the level 1 transition variables $\\{h^1_t: t = 1, \\dots, T -1\\}$ via the coupling variables $\\{v_t: t = 1, \\dots, T -1\\}$.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=60ex]{.\/figures\/2layerDynamic.pdf}\n\\caption{A DPFN with a coupled 2-level transition model. The level 2 state variables are denoted by $x^2_t$ and the level 1 state variables are denoted by $x^1_t$. The level 2 transition variables $h^2_t$ are coupled to the level 1 transition variables $h^1_t$. 
The first four time slices of the $T$ time slice network are shown. The dashed lines represent (optional) forced factored co-activations, which are enforced if all sub-columns of the parameter matrices are normalized to have equal column sums.}\n\\label{fig:2layerDynamic}\n\\end{figure}\n\nNote the symmetry between levels 1 and 2. One can think of the level 1 and level 2 transition models as executing independently, except for where the coupling constraints forbid it by constraining the combinations of level 1 and level 2 state transitions that can be simultaneously activated. Any two consecutive time slices ($t, t+1$) of Figure~\\ref{fig:2layerDynamic} correspond to the following three factorizations:\n\nFor level 1, we have:\n\\begin{align}\n\\left[ \\begin{array}{c}\nx^1_t \\\\\nx^1_{t+1} \\end{array} \\right] =&\\ W^1 h^1_t\n\\nonumber \\\\\n =&\\ \\left[ \\begin{array}{c}\nW^1_1 \\\\\nW^1_2 \\end{array} \\right] h^1_t\n\\end{align}\n\nFor level 2, we have:\n\\begin{align}\n\\left[ \\begin{array}{c}\nx^2_t \\\\\nx^2_{t+1} \\end{array} \\right] =&\\ W^2 h^2_t\n\\nonumber \\\\\n =&\\ \\left[ \\begin{array}{c}\nW^2_1 \\\\\nW^2_2 \\end{array} \\right] h^2_t\n\\end{align}\n\nFor the level 1 to level 2 transition coupling, we have:\n\\begin{align}\n\\left[ \\begin{array}{c}\nh^2_t \\\\\nh^1_t \\end{array} \\right] =&\\ U v_t\n\\nonumber \\\\\n =&\\ \\left[ \\begin{array}{c}\nU_1 \\\\\nU_2 \\end{array} \\right] v_t\n\\end{align}\n\nFor $T$ time slices, we then have the following three matrix factorization equations:\n\nFor level 1, we have:\n\\begin{align}\n\\left[ \\begin{array}{ccccc}\nx^1_1 & x^1_2 & x^1_3 & \\dots & x^1_{T-1} \\notag \\\\\nx^1_{2} & x^1_{3} & x^1_{4} & \\dots & x^1_{T} \\end{array} \\right] =&\\ W^1 \\left[ \\begin{array}{ccccc} h^1_1 & h^1_2 & h^1_3 & \\dots & h^1_{T-1} \\end{array} \\right] \\\\\n=&\\ \\left[ \\begin{array}{c}\nW^1_1 \\\\\nW^1_2 \\end{array} \\right] \\left[ \\begin{array}{ccccc} h^1_1 & h^1_2 & h^1_3 & \\dots & h^1_{T-1} \\end{array} \\right]\n\\end{align}\n\nwhich we can write more concisely as:\n\n\\begin{align}\n\\label{eq:dynamic1_fact1}\nX^1_c =& W^1 H^1 \\notag \\\\\n=& \\left[ \\begin{array}{c}\nW^1_1 \\\\\nW^1_2 \\end{array} \\right] H^1\n\\end{align}\n\nFor level 2, we have:\n\\begin{align}\n\\left[ \\begin{array}{ccccc}\nx^2_1 & x^2_2 & x^2_3 & \\dots & x^2_{T-1} \\notag \\\\\nx^2_{2} & x^2_{3} & x^2_{4} & \\dots & x^2_{T} \\end{array} \\right] =&\\ W^2 \\left[ \\begin{array}{ccccc} h^2_1 & h^2_2 & h^2_3 & \\dots & h^2_{T-1} \\end{array} \\right] \\\\\n=&\\ \\left[ \\begin{array}{c}\nW^2_1 \\\\\nW^2_2 \\end{array} \\right] \\left[ \\begin{array}{ccccc} h^2_1 & h^2_2 & h^2_3 & \\dots & h^2_{T-1} \\end{array} \\right]\n\\end{align}\n\nwhich we can write more concisely as:\n\n\\begin{align}\n\\label{eq:dynamic1_fact2}\nX^2_c =& W^2 H^2 \\notag \\\\\n=& \\left[ \\begin{array}{c}\nW^2_1 \\\\\nW^2_2 \\end{array} \\right] H^2\n\\end{align}\n\nFor the level 1 to level 2 transition coupling, we have:\n\\begin{align}\n\\left[ \\begin{array}{ccccc}\nh^2_1 & h^2_2 & h^2_3 & \\dots & h^2_{T-1} \\notag \\\\\nh^1_1 & h^1_2 & h^1_3 & \\dots & h^1_{T-1} \\end{array} \\right] =&\\ U \\left[ \\begin{array}{ccccc} v_1 & v_2 & v_3 & \\dots & v_{T-1} \\end{array} \\right] \\\\\n=& \\left[ \\begin{array}{c}\nU_1 \\\\\nU_2 \\end{array} \\right] \\left[ \\begin{array}{ccccc} v_1 & v_2 & v_3 & \\dots & v_{T-1} \\end{array} \\right]\n\\end{align}\n\nDefining $V = \\left[ \\begin{array}{ccccc} v_1 & v_2 & v_3 & \\dots & v_{T-1} \\end{array} \\right]$, we can then write 
the above factorization more concisely as:\n\n\\begin{align}\n\\label{eq:dynamic1_fact3}\n\\left[ \\begin{array}{c}\nH^2 \\\\\nH^1 \\end{array} \\right] =& U V \\notag \\\\\n=& \\left[ \\begin{array}{c}\nU_1 \\\\\nU_2 \\end{array} \\right] V\n\\end{align}\n\nThe columns of $U$ define the coupling basis. The positive components of each column $u_i$ specify components of $h^2_t$ and $h^1_t$ that can be co-activated. That is, the stacked vector of $h^2_t$ and $h^1_t$ is representable as a non-negative combination of the columns of $U$, with $v_t$ giving the encoding in terms of the basis columns. A learning and inference algorithm for this network can be obtained by applying Algorithm~\\ref{alg:inference_learning1} to the system of factorizations (\\ref{eq:dynamic1_fact1}), (\\ref{eq:dynamic1_fact2}), and (\\ref{eq:dynamic1_fact3}). Example pseudocode is presented in Appendix~\\ref{sec:inf_learn_dyn2level}.\n\n\n\n\n\nWe now consider a regular expression example similar in spirit to the one used by Murphy in \\cite{Murphy_Thesis}. Murphy showed how a hierarchical hidden Markov model (HHMM) \\cite{murphy01linear} could be used to model the hierarchical transition model corresponding to a regular expression. In this section, we show how a regular expression can be modeled in a somewhat analogous manner by using a DPFN with hierarchical structure. We stress that there are many significant differences, however. For example, in an HHMM, each possible state transition is assigned a probability, which is not the case here for the DPFN. Also, in an HHMM, the hidden states are discrete-valued, whereas they are non-negative continuous-valued here. Figure~\\ref{fig:hfsm1_crop} shows a state transition diagram for the DPFN in Figure~\\ref{fig:2layerDynamic} that models the regular expression ``a+b(de)*c(de)+''. Under this transition model, an exact state factorization is only possible if the input sequence satisfies the given regular expression. This regular expression specifies that we have one or more repetitions of ``a'' followed by ``b'' followed by zero or more repetitions of ``de'' followed by ``c'' followed by one or more repetitions of ``de.'' We allow the expression to repeat after reaching the end state. Observe that states $S^2_3$ and $S^2_5$ of the top level transition model (which we refer to as the level 2 transition model) refine to the same lower level transition model (the level 1 transition model). Some states (the ones with letters under them) are \\emph{production states} that correspond to the production of the corresponding letters. The initial state is $S^2_1$, and states $S^2_6$ and $S^1_3$ denote explicit end states. Although explicit end states are used here, they are not needed in general, as we will see in the next section. The semantics is such that when a transition to a refining state, such as $S^2_3$ or $S^2_5$, occurs, the refinement executes while the parent remains in the same state. We will choose the basis vectors for the coupling activations $U$ so that at any time step, the sum of active state values in level 2 is equal to the corresponding sum of active state values in level 1. For example, the transition $t^2_3$ corresponds to a transition from $S^2_2$ to $S^2_3$ in level 2 and a simultaneous transition from the off\/end state to the starting state $S^1_1$ in level 1. The next transition must correspond to a transition from $S^1_1$ (production of ``d'') to $S^1_2$ (production of ``e''), while level 2 remains in the parent state $S^2_3$. 
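As a quick sanity check on the language being modeled, the following snippet (using Python's standard \\texttt{re} module, purely for illustration) lists a few strings that the transition model should and should not admit:\n\n\\begin{verbatim}\nimport re\n\npattern = re.compile(r\"a+b(de)*c(de)+\")\n\n# strings the hierarchical transition model should admit\nfor s in [\"abcde\", \"aabdecdede\", \"abdedecde\"]:\n    assert pattern.fullmatch(s) is not None\n\n# strings it should reject\nfor s in [\"bcde\", \"abc\", \"abdce\"]:\n    assert pattern.fullmatch(s) is None\n\\end{verbatim}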
The hierarchical transition diagram places implicit constraints on transitions that may simultaneously occur (co-occur) in levels 1 and 2. For example, the level 2 transition $t^2_5$ can co-occur with the level 1 transition $t^1_2$. An implicit self-loop transition from $S^2_3$ to $S^2_3$ can co-occur with level 1 transitions $t^1_1$ and $t^1_3$. Figure~\\ref{fig:hfsm1_detailed_crop} shows the same transition diagram, annotated to include the implicit transitions. Table~\\ref{table:trans_coupling} shows the transition coupling constraints that specify the pairs of transitions in levels 1 and 2 that can co-occur. For example, note that when the refinement is not producing a ``d'' or ``e'' the end state remains in a self-loop.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=60ex]{.\/figures\/hfsm1_crop.pdf}\n\\caption{A state transition diagram for the regular expression ``a+b(de)*c(de)+''.}\n\\label{fig:hfsm1_crop}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=60ex]{.\/figures\/hfsm1_detailed_crop.pdf}\n\\caption{The state transition diagram for the regular expression ``a+b(de)*c(de)+''. The implicit transitions are shown in blue.}\n\\label{fig:hfsm1_detailed_crop}\n\\end{figure}\n\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{| c ||c | c | c | c | c | c | c | c | c | c | c | c | c |}\n \\hline\n Level 2 transition & $t^2_3$ & $t^2_{10}$ & $t^2_{10}$ & $t^2_5$ & $t^2_6$ & $t^2_9$ & $t^2_9$ & $t^2_7$ & $t^2_{11}$ & $t^2_8$ & $t^2_1$ & $t^2_2$ & $t^2_4$ \\\\ \\hline\n Level 1 transition & $t^1_4$ & $t^1_1$ & $t^1_3$ & $t^1_2$ & $t^1_4$ & $t^1_1$ & $t^1_3$ & $t^1_2$ & $t^1_5$ & $t^1_5$ & $t^1_5$ & $t^1_5$ & $t^1_5$ \\\\ \\hline\n\\end{tabular}\n \\caption{Transition coupling constraints for the example hierarchical transition model. Each column specifies a level 2 transition and a level 1 transition that can co-occur.}\n \\label{table:trans_coupling}\n \\end{center}\n \\end{table}\n\nThe level 1 transition model, level 2 transition model, and coupling constraints from the state transition diagram directly map into the corresponding parameters $W^1$, $W^2$, and $U$ as follows. The $W^1$ and $W^2$ matrices are constructed following the procedure outlined in Section~\\ref{sec:fact_state_tran_model}. Each $w^1_i$ is the vertical concatenation of the two state basis vectors associated transition $t^1_i$. $W^1$ is then constructed as the horizontal concatenation of the transition basis vector columns \\{$w^1_i$\\}. 
We then have:\n\n\\begin{align}\nW^1 =& \\left[ \\begin{array}{ccccc} w^1_1 & w^1_2 & w^1_3 & w^1_4 & w^1_5\\end{array} \\right] \\notag \\\\\n=&\\ \\left[ \\begin{array}{c}\nW^1_1 \\\\\nW^1_2 \\end{array} \\right] \\notag\\\\\n=&\\ \\left[ \\begin{array}{ccccc}\ns^1_1 & s^1_2 & s^1_2 & s^1_3 & s^1_3 \\\\\ns^1_2 & s^1_3 & s^1_1 & s^1_1 & s^1_3 \\end{array} \\right] \\notag \\\\\n=& \\left[ \\begin{array}{cccccc}\n1 & 0 & 0 & 0 & 0 \\\\\n0 & 1 & 1 & 0 & 0 \\\\\n0 & 0 & 0 & 1 & 1 \\\\ \\hline\n0 & 0 & 1 & 1 & 0 \\\\\n1 & 0 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 & 1 \\end{array} \\right] \n\\label{eqn:W_1hfsm}\n\\end{align}\n\nNote that $W^1_1$ is given by:\n\n\\begin{align}\nW^1_1 =& \\left[ \\begin{array}{ccccc} s^1_1 & s^1_2 & s^1_2 & s^1_3 & s^1_3 \\end{array} \\right] \\notag \\\\\n=& \\left[ \\begin{array}{cccccc}\n1 & 0 & 0 & 0 & 0 \\\\\n0 & 1 & 1 & 0 & 0 \\\\\n0 & 0 & 0 & 1 & 1\\end{array} \\right] \n\\end{align}\n\nand $W^1_2$ is given by:\n\n\\begin{align}\nW^1_2 =& \\left[ \\begin{array}{ccccc} s^1_2 & s^1_3 & s^1_1 & s^1_1 & s^1_3 \\end{array} \\right] \\notag \\\\\n=& \\left[ \\begin{array}{cccccc}\n0 & 0 & 1 & 1 & 0 \\\\\n1 & 0 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 & 1\\end{array} \\right] \n\\end{align}\n\nWe construct $W^2$ as:\n\n\\begin{align}\nW^2 =& \\left[ \\begin{array}{ccccccccccc} w^2_1 & w^2_2 & w^2_3 & w^2_4 & w^2_5 & w^2_6 & w^2_7 & w^2_8 & w^2_9 & w^2_{10} & w^2_{11} \\end{array} \\right] \\notag \\\\\n=&\\ \\left[ \\begin{array}{c}\nW^2_1 \\\\\nW^2_2 \\end{array} \\right] \\notag\\\\\n=&\\ \\left[ \\begin{array}{ccccccccccc}\ns^2_1 & s^2_1 & s^2_2 & s^2_2 & s^2_3 & s^2_4 & s^2_5 & s^2_6 & s^2_5 & s^2_3 & s^2_6 \\\\\ns^2_1 & s^2_2 & s^2_3 & s^2_4 & s^2_4 & s^2_5 & s^2_6 & s^2_1 & s^2_5 & s^2_3 & s^2_6 \\end{array} \\right] \\notag \\\\\n=& \\left[ \\begin{array}{ccccccccccc}\n1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 \\\\ \\hline\n1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\end{array} \\right] \n\\label{eqn:W_2hfsm}\n\\end{align}\n \nLikewise, $W^2_1$ corresponds to the upper sub-matrix of $W^2$ and $W^2_2$ corresponds to the lower sub-matrix.\n\nWe construct the transition coupling matrix $U$ as follows. We start with the $Q$ pairs of level 2 - level 1 couplings specified in Table~\\ref{table:trans_coupling}. We construct $U$ from this table such that the $i$'th column of $U$ corresponds to the transition pair in the $i$'th column of the table. For the level 2 transitions, we replace each transition $t^2_j$ in the table with the column vector $u_{upper_j} \\in \\mathbb{R}^{R2}$ that contains a 1 in the component corresponding to the row of $H^2$ that activates the corresponding transition in $W^2$. Likewise, for the level 1 transitions, we replace each transition $t^1_j$ in the table with the column vector $u_{lower_j} \\in \\mathbb{R}^{R1}$ that contains a 1 in the component corresponding to the row of $H^1$ that activates the corresponding transition in $W^1$. 
We then form the $j$'th column $u_j$ of $U$ as the vertical concatenation of $u_{upper_j}$ on top of $u_{lower_j}$, so that we have for the $(R2 + R1)$ x $Q$ matrix $U$:\n\n\\begin{align}\nU =& \\left[ \\begin{array}{ccccc} u_1 & u_2 & u_3 & \\dots & u_Q \\end{array} \\right] \\notag \\\\\n=& \\left[ \\begin{array}{c} U_1 \\\\\nU_2 \\end{array} \\right] \\notag \\\\\n=& \\left[ \\begin{array}{ccccc} u_{upper_1} & u_{upper_2} & u_{upper_3} & \\dots & u_{upper_Q} \\\\\nu_{lower_1} & u_{lower_2} & u_{lower_3} & \\dots & u_{lower_Q} \\end{array} \\right]\n\\end{align}\n\nwhere the $R2$ x $Q$ submatrix $U_1$ is given by:\n\n\\begin{align}\nU_1 = \\left[ \\begin{array}{ccccc} u_{upper_1} & u_{upper_2} & u_{upper_3} & \\dots & u_{upper_Q} \\end{array} \\right]\n\\end{align}\n\nand the $R1$ x $Q$ submatrix $U_2$ is given by:\n\n\\begin{align}\nU_2 = \\left[ \\begin{array}{ccccc} u_{lower_1} & u_{lower_2} & u_{lower_3} & \\dots & u_{lower_Q} \\end{array} \\right]\n\\end{align}\n\n\n\nFrom Table~\\ref{table:trans_coupling}, we then have the following for $U$:\n\n\\begin{align}\n\\label{eq:U_coupling}\nU = \\left[ \\begin{array}{ccccccccccccc}\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\\\\n 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\\\\n 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\\\ \\hline\n 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 \\end{array} \\right] \n\\end{align}\n\nNote that $U_1$ activates $H^2$ (the activations for $W^2$) and $U_2$ activates $H^1$ (the activations for $W^1$).\n\n\n\\subsection{Empirical results}\n\nIn this section, we perform inference on the network in Figure~\\ref{fig:2layerDynamic} using the parameter matrices for the regular expression hierarchical transition diagram in Figure~\\ref{fig:hfsm1_detailed_crop}. We use parameter matrices $W^1$, $W^2$, and $U$ from Equations (\\ref{eqn:W_1hfsm}), (\\ref{eqn:W_2hfsm}), and (\\ref{eq:U_coupling}), respectively.\n\nWe choose to model a length 10 sequence for ease of display of the results. We let the observed set $X_E$ consist of $\\{x^2_2 = s^2_2, x^2_7 = s^2_4\\}$. We let the hidden set $X_H$ consist of all other model variables. That is, we are only given that in time slice 2, the level 2 state is $S^2_2$ (production state ``b'') in Figure~\\ref{fig:hfsm1_detailed_crop} and that in time slice 7, the level 2 state is $S^2_4$ (production state ``c''). We are given no information about the level 1 states, transitions, or coupling variable states. However, even given this limited information, the constraints imposed by the hierarchical transition model result in a single possible solution for the hidden variables in time slices $t=1,\\dots,9$. Multiple solutions (due to multiple outgoing transitions) are possible for time slice 10. 
The reader can verify this by plugging in the observed states and studying the transition diagram to trace the path of allowable transitions.\n\nWe now perform inference using an instance of the general inference and learning algorithm given in Appendix~\\ref{sec:inf_learn_dyn2level}. All hidden variables are initialized to random positive values, and the algorithm is run for 500 iterations, which is sufficient for convergence to an exact factorization (RMSE less than $10^{-4}$) for all three factorization equations. We observed that repeated runs always converged, although a different solution for time slice 10 was reached in each case (since multiple solutions are possible for this time slice).\n\nFigure~\\ref{fig:hdyn1_X1_X2_results} shows both the initialized and inferred values for $X^1$ and $X^2$. Model variables $x^2_2 = s^2_2$ and $x^2_7 = s^2_4$ were observed and all other model variables were hidden and estimated by the inference algorithm. Note that the hidden values of $X^2$ (all time slices except $t=2,7$) appear black in the figure because the initialized random values are very small (order of $10^{-6}$) compared to the maximum value of the observed time slices in the figure. The inference results for $X^1$ and $X^2$ are shown in the rightmost image plots. We see that the inference algorithm successfully found the single possible solution for time slices $t=1,\\dots,9$. In time slice 10 there are two possible solution states for both $x^1_{10}$ and $x^2_{10}$, and we see that the inferred values represent a mixture of the two possible state configurations. That is, in time slice 9, the inferred state is $x^2_9 = s^2_5$ and $x^1_9 = s^1_2$. According to the transition diagram in Figure~\\ref{fig:hfsm1_detailed_crop}, we see that in the next time slice (slice 10), we can either remain in level 2 state $S^2_5$ while level 1 transitions to $S^1_1$, or both level 2 and level 1 can transition to their respective end states: $S^2_6$ and $S^1_3$. We observe that the inferred state values represent a superposition of these two possibilities. We also note that when sparse NMF updates were used in inference, the solution tended to choose one alternative or the other, but not a superposition of the two.\n\nFigure~\\ref{fig:hdyn1_X_W_H_factor} shows the inference results for $X^2_C$ and $H^2$ in the factorization $X^2_C = W^2 H^2$ in the top three image plots. The bottom three image plots show the inference results for $X^1_C$ and $H^1$ in the factorization $X^1_C = W^1 H^1$. Figure~\\ref{fig:hdyn1_H_U_V_factor} shows an image plot of the inference results for the matrices in the transition coupling factorization.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=80ex]{.\/figures\/hdyn1_X1_X2_results.jpg}\n\\caption{The initialized $X^2$ is shown in the upper left. Model variables $x^2_2 = s^2_2$ and $x^2_7 = s^2_4$ were observed and all other model variables were hidden and estimated by the inference algorithm. The initialized $X^1$ is shown in the lower left. All hidden variables were initialized to random positive values. The inference results for $X^1$ and $X^2$ are shown in the rightmost image plots. We note that given the two observed variables, only one possible solution exists for time slices $t=1,\\dots,9$. 
Multiple solutions exist for the final time slice $t=10$.}\n\\label{fig:hdyn1_X1_X2_results}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=80ex]{.\/figures\/hdyn1_X_W_H_factor.jpg}\n\\caption{The top three matrices show the inference results for $X^2_C$ and $H^2$ in the factorization $X^2_C = W^2 H^2$. The bottom three matrices show the inference results for $X^1_C$ and $H^1$ in the factorization $X^1_C = W^1 H^1$. Model variables $x^2_2$ and $x^2_7$ were observed and all other model variables were hidden and inferred by the inference algorithm. Parameter matrices $W^1$ and $W^2$ are given in Equations (\\ref{eqn:W_1hfsm}) and (\\ref{eqn:W_2hfsm}), respectively.}\n\\label{fig:hdyn1_X_W_H_factor}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=80ex]{.\/figures\/hdyn1_H_U_V_factor.jpg}\n\\caption{An image plot of the inference results for the matrices in the transition coupling factorization. All variables $H^1, H^2, V$ were hidden and inferred. Parameter matrix $U$ is given in Equation (\\ref{eq:U_coupling}).}\n\\label{fig:hdyn1_H_U_V_factor}\n\\end{figure}\n\n\n\n\\section{Multiple target tracking model}\n\\label{sec:target_tracking}\n\nWe now consider a multiple target tracking model that is motivated by the problem of tracking auditory objects in computational auditory scene analysis (CASA) \\cite{ref:bregman}. This example serves to motivate the potential usefulness of our approach to problems such as CASA. Recall that a DPFN supports the representation of an additive combination of realizations of underlying dynamical process models. Such a representation seems to fit very naturally within CASA problems, since the sound pressure waves that arrive at the eardrum are well modeled as an additive mixture of scaled versions of the sound pressure waves produced by each auditory object alone. If it turns out to be possible to use a DPFN to model auditory objects such as musical instruments, environmental noises, speech, etc., then the linearity property would imply that any additive mixture of the various supported auditory objects at various sound levels would also be representable by the model. Of course, this does not imply that any particular inference and\/or learning algorithm will successfully be able to deal with such a mixture, but the fact that an additive mixture is representable at all seems to be quite powerful. Our approach has the desirable property that nowhere in our model do we need to explicitly specify anything about the maximum number of simultaneous targets\/objects or the allowable magnitude range (volume levels) for targets.\n\nWe will specify a dynamical model for a single target, and then observe how a scaled mixture of multiple targets is automatically supported. This illustrates how the factored representation of a dynamical system allows for a compact parameter set, while still being expressive enough to support multiple simultaneous scaled realizations of the underlying process model. All parameters in the target model will be manually specified here for ease of illustration, although in principle one could attempt to learn them from training data as well. We therefore perform inference only in the results that follow. We will also work with synthetic target observation data, rather than actual audio recordings, again for ease of explanation and illustration. 
It is hoped that after reading this paper, the interested reader will find it straightforward to come up with ideas for developing more complex extensions of these models that may then be applied to practical problems involving real-world data sets. \n\n\n\nThe problem setup is as follows. We are given a sequence of non-negative observation column vectors, one for each time slice. When concatenated horizontally, they form an image such that the time axis runs from left to right. The vertical axis then corresponds to sensor position or frequency bin index, depending on the interpretation. We therefore use position and frequency bin interchangeably in the following discussion. In the context of CASA, each observation vector can be considered a time slice of a magnitude spectrogram such that the components of an observation vector represent the short-time spectra of an audio signal. In the CASA interpretation, we can then think of the observation image as a magnitude time-frequency image, such as a spectrogram.\n\nThe observations may also be potentially corrupted by additive non-negative noise. Our goal is to infer the types and positions of all targets given only the target observation sequence. We may also be interested in predicting future target trajectories and\/or inferring missing observation data. \n\n\nSpecifically, we wish to model a target that has the following properties:\n\\begin{enumerate}\n\n\\item A new target can come into existence (become alive) at any time instant. \n\n\\item Each target has a certain localized \\emph{pattern signature}. That is, its position\/frequency evolves in time according to an underlying transition model. Target classes are distinguished by their pattern signatures. For example, one can think of two distinct musical instruments: one produces a certain type of vibrato regardless of the note pitch, while the other instrument produces a short chirp followed by a more steady pitch.\n\n\\item A target's pattern signature is modeled as being independent of the target's overall position\/location\/frequency bin index. Thus, we model a target's location as a position shift or frequency shift of the pattern signature. We only allow a target's overall position to change at certain constrained points in the pattern signature transition model. In the context of CASA, the target's overall location might correspond to the pitch of a note played by an instrument, which is independent of the instrument's pattern signature (e.g., each note is performed with a vibrato).\n\n\\item A target may cease to exist (die) after some number of time slices, which may or may not be deterministic. \n\n\\item When a target comes into existence, it can take on any positive magnitude value, which remains constant during the time that the target is alive. This corresponds to an auditory object, such as an instrument note event, that can begin sounding at an arbitrary sound level or loudness and remains at the same sound level until the note ends. It is possible to relax this constraint, but we enforce it in this example.\n\n\\item Any number of targets may be simultaneously present, each potentially having a distinct magnitude. This corresponds to multiple auditory objects simultaneously occurring, such as several instruments performing together, or several people talking at the same time, for example.\n\n\\end{enumerate}\n\n\n\n\\subsection{Model}\n\nFigure~\\ref{fig:targetTracking1} shows the DPFN for the target tracker. 
$x^1_t, t=1,\\dots,T$ are observed and all other variables are hidden. We now describe the role of each of the 7 factorization equations that specify this model. We make use of the matrix notation developed in Section \\ref{sec:fact_state_tran_model} so that the factorizations may be expressed concisely.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=60ex]{.\/figures\/targetTracking1.pdf}\n\\caption{The DPFN for the multiple target tracker. The target observation variables $x^1_t, t=1,\\dots,T$ are observed and all other variables are hidden. The first two time slices are shown.}\n\\label{fig:targetTracking1}\n\\end{figure}\n\n\\subsubsection{Target state transition model}\n\n\n\nThe target state transition model corresponds to the factorization:\n\\begin{align}\n\\label{eq:target_state_transition_model}\nX^4_c =& W^6 H^3 \\notag \\\\\n=& \\left[ \\begin{array}{c}\nW^6_1 \\\\\nW^6_2 \\end{array} \\right] H^3\n\\end{align}\n\n\nFigure~\\ref{fig:fsm_target_tracker1} shows the target state transition diagram. Observe that states $S^2_1$ through $S^2_5$ correspond to an ``A-type'' target and states $S^2_6$ through $S^2_9$ correspond to a ``B-type'' target. The initial state when an ``A-type'' target comes into existence is $S^2_1$, and the minimum duration (i.e., target lifetime) is 5 time slices, since the target must pass consecutively through states $S^2_1$ through $S^2_5$ before it is possible for the target to die (i.e., transition to the 0\/off state). We see that when the current state is $S^2_5$, the following state can be $S^2_2$ or the 0 state. Recall that this is not a probabilistic model. Thus, multiple outgoing transitions from a given state can be thought of as transition possibilities that are equally likely. The particular transition (or superposition of transitions) that is chosen during inference depends on the values of the observed variables as well as any other constraints imposed by other parts of the model that are coupled to the transition model. The state transition model for a ``B-type'' target is similar but only consists of 4 states. We construct $W^6$ directly by inspection of the state transition diagram using the procedure described in Section \\ref{sec:fact_state_tran_model}. Note that there are 9 states and 13 transitions. The ``off'' state is not an explicit state, and is represented by the 9-dimensional 0 vector in the transition basis matrix. Thus $W^6$ has size 18 x 13. The state basis vectors \\{$s^2_i$\\} are therefore 9-dimensional column vectors, as are the target state variables \\{$x^4_t$\\}. Each $w^6_i$ is the vertical concatenation of the two basis state vectors associated with transition $t^2_i$. $W^6$ is then constructed as the horizontal concatenation of the transition basis vector columns \\{$w^6_i$\\}. 
We then have for $W^6$:\n\n\\begin{align}\nW^6 =& \\left[ \\begin{array}{ccccccccccccc} w^6_1 & w^6_2 & w^6_3 & w^6_4 & w^6_5 & w^6_6 & w^6_7 & w^6_8 & w^6_9 & w^6_{10} & w^6_{11} & w^6_{12} & w^6_{13} \\end{array} \\right] \\notag \\\\\n=&\\ \\left[ \\begin{array}{c}\nW^6_1 \\\\\nW^6_2 \\end{array} \\right] \\notag\\\\\n=&\\ \\left[ \\begin{array}{ccccccccccccc}\ns^2_1 & s^2_2 & s^2_3 & s^2_4 & s^2_5 & s^2_5 & 0 & s^2_6 & s^2_7 & s^2_8 & s^2_9 & s^2_9 & 0\\\\\ns^2_2 & s^2_3 & s^2_4 & s^2_5 & 0 & s^2_2 & s^2_1 & s^2_7 & s^2_8 & s^2_9 & 0 & s^2_7 & s^2_6\\end{array} \\right] \\notag \\\\\n\\label{eqn:W_6target}\n\\end{align}\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=80ex]{.\/figures\/fsm_target_tracker1.pdf}\n\\caption{The state transition diagram for the target state variables $x^4_t$. The corresponding state emission vectors $x^3_t$ are also shown.}\n\\label{fig:fsm_target_tracker1}\n\\end{figure}\n\n\n\n\\subsubsection{Target label to target state coupling}\n\nThe target label to target state coupling corresponds to the factorization:\n\n\\begin{align}\n\\label{eq:target_label_to_target_state_coupling}\n\\left[ \\begin{array}{c}\nX^5 \\\\\nX^4 \\end{array} \\right] =& W^7 U^3 \\notag \\\\\n=& \\left[ \\begin{array}{c}\nW^7_1 \\\\\nW^7_2 \\end{array} \\right] U^3\n\\end{align}\n\nThis module performs the task of assigning a target type label $x^5_t$ to a target state $x^4_t$. From Figure~\\ref{fig:fsm_target_tracker1} we see that there are two distinct target types. We let $x^5_t$ denote the 2-dimensional column vector such that the first (top) component denotes the strength of an ``A-type'' target and the second (bottom) component denotes the strength of a ``B-type'' target.\n\nWe wish to add coupling constraints so that the ``A-type'' basis vector $s^3_1 = (1,0)^T$ can co-occur with any of the basis state vectors \\{$s^2_1, s^2_2, s^2_3, s^2_4, s^2_5$\\} and the ``B-type'' basis vector $s^3_2 = (0,1)^T$ can co-occur with any of the basis state vectors \\{$s^2_6, s^2_7, s^2_8, s^2_9$\\}. We construct each column of $W^7$ as the vertical concatenation of an $s^3_i$ on top of an $s^2_j$ that can co-occur. Noting that the column ordering is arbitrary, we have for the 11 x 9 matrix $W^7$:\n\n\\begin{align}\nW^7 =& \\left[ \\begin{array}{c} W^7_1 \\\\\nW^7_2 \\end{array} \\right] \\notag \\\\\n=& \\left[ \\begin{array}{ccccccccc} s^3_1 & s^3_1 & s^3_1 & s^3_1 & s^3_1 & s^3_2 & s^3_2 & s^3_2 & s^3_2 \\\\\ns^2_1 & s^2_2 & s^2_3 & s^2_4 & s^2_5 & s^2_6 & s^2_7 & s^2_8 & s^2_9 \\end{array} \\right]\n\\end{align}\n\n\n\\subsubsection{Target state to target emission coupling}\n\nThe target state to target emission coupling corresponds to the factorization:\n\n\\begin{align}\n\\label{eq:target_state_to_target_emission_coupling}\n\\left[ \\begin{array}{c}\nX^4 \\\\\nX^3 \\end{array} \\right] =& W^3 U^2 \\notag \\\\\n=& \\left[ \\begin{array}{c}\nW^3_1 \\\\\nW^3_2 \\end{array} \\right] U^2\n\\end{align}\n\nThis module performs the task of coupling the target state variable $x^4_t$ to the corresponding production (emission) variable $x^3_t \\in \\mathbb{R}^{patternSize}$, where $patternSize = 5$ in this example. Figure~\\ref{fig:fsm_target_tracker1} shows the emission basis vector $s^e_i \\in \\mathbb{R}^{patternSize}$ below the corresponding state $S^2_i$. 
If a given state $S^2_i$ is present in $x^4_t$ with magnitude $\\alpha$, then we wish for the corresponding emission basis vector $s^e_i$ to also be present in $x^3_t$ with magnitude $\\alpha$, and vice versa. We construct each column of $W^3$ as the vertical concatenation of a basis state vector $s^2_i$ on top of the corresponding emission basis vector $s^e_i$. The 14 x 9 matrix $W^3$ is then given as:\n\n\\begin{align}\nW^3 =& \\left[ \\begin{array}{c} W^3_1 \\\\\nW^3_2 \\end{array} \\right] \\notag \\\\\n=& \\left[ \\begin{array}{ccccccccc} s^2_1 & s^2_2 & s^2_3 & s^2_4 & s^2_5 & s^2_6 & s^2_7 & s^2_8 & s^2_9 \\\\\ns^e_1 & s^e_2 & s^e_3 & s^e_4 & s^e_5 & s^e_6 & s^e_7 & s^e_8 & s^e_9 \\end{array} \\right]\n\\end{align}\n\n\\subsubsection{Pattern translation transition model}\n\nThe pattern translation transition model corresponds to the factorization:\n\n\\begin{align}\n\\label{eq:pattern_translation_transition_model}\nX^2_c =& W^2 H^1 \\notag \\\\\n=& \\left[ \\begin{array}{c}\nW^2_1 \\\\\nW^2_2 \\end{array} \\right] H^1\n\\end{align}\n\nThis module represents the transition model for the amount by which the target emission pattern is translated (i.e., the target position) between time steps $t$ and $t+1$. Figure~\\ref{fig:targetTrackingPosition} shows the state transition diagram for the position shift variables $x^2_t$ with $P$ possible distinct position shift amounts. Note that $x^2_t$ is therefore a $P$-dimensional column vector, with the $i$'th component corresponding to the magnitude of $S^1_i$ at time $t$. Each state $S^1_i$ corresponds to a distinct target position. The index $i$ represents the downward translation amount by which the target pattern is translated when it appears in the observation vector $x^1_t$. The state $S^1_1$ corresponds to the minimum downward translation amount, so that the target emission pattern appears as the top $patternSize$ components of $x^1_t$. The state $S^1_P$ corresponds to the maximum downward translation amount, so that the target emission pattern appears as the bottom $patternSize$ components of $x^1_t$.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=80ex]{.\/figures\/targetTrackingPosition.pdf}\n\\caption{The state transition diagram for the position shift variables $x^2_t$ with $P$ possible distinct position shift amounts.}\n\\label{fig:targetTrackingPosition}\n\\end{figure}\n\nWe specify in the state transition diagram that a transition $t_{on_i}$ from the off state at time $t$ to any of the $P$ possible position shift amounts $S^1_i$ at time $t+1$ is allowed. Let $t_{on}$ denote the set \\{$t_{on_i}, i = 1, \\dots, P$\\} of all ``on'' transitions. We specify that a transition $t_{off_i}$ from any of the $P$ possible positions $S^1_i$ to the off state is allowed. Let $t_{off}$ denote the set \\{$t_{off_i}, i = 1, \\dots, P$\\} of all ``off'' transitions. A self-loop transition $t_{self_i}$ from any position state $S^1_i$ to itself is allowed. Let $t_{self}$ denote the set \\{$t_{self_i}, i = 1, \\dots, P$\\} of all ``self-loop'' transitions. Finally, we also allow a transition $t_{inc_i}$ from state $S^1_i$ to $S^1_{i+1}$, as well as a transition $t_{dec_i}$ from state $S^1_{i+1}$ to $S^1_{i}$. Let $t_{inc}$ denote the set \\{$t_{inc_i}, i = 1, \\dots, P-1$\\} of all ``position increment'' transitions. Let $t_{dec}$ denote the set \\{$t_{dec_i}, i = 1, \\dots, P-1$\\} of all ``position decrement'' transitions. Let $t_{change}$ denote the union of $t_{inc}$ and $t_{dec}$, which is the set of all ``position change'' transitions. 
The complete set of all position transitions is then $Tpos =\\{t_{self}, t_{change}, t_{on}, t_{off}\\}$.\n\nWe construct $W^2$ directly by inspection of the state transition diagram using the procedure described in Section \\ref{sec:fact_state_tran_model}. Using our convention for basis state vectors, state $S^1_i$ in the transition diagram corresponds to a 9-dimensional column vector $s^1_i$ such that the $i$'th component is 1 and all other components are 0. The ``off'' state is not an explicit state, and is represented by the 9-dimensional 0 vector in the transition basis matrix. Since there are $P$ states, the state basis vectors \\{$s^1_i$\\} are 9-dimensional column vectors. Each column of the $2 P$ x $|Tpos|$ matrix $W^2$ then corresponds to the vertical concatenation of the two basis state vectors associated with each allowable transition in $Tpos$. Again, note that the column ordering of the transition basis vectors is arbitrary. For example, the column of $W^2$ corresponding to transition $t_{inc_2}$ would consist of the vertical concatenation of $s^1_2$ on top of $s^1_3$.\n\n\n\n\\subsubsection{Target state transition to position shift transition coupling}\n\nWe now place coupling constraints on the target state transition and pattern translation transition models. Without these constraints, it would be possible for a target to have a positive state magnitude but a zero-valued corresponding position magnitude and vice versa. However, we wish for these magnitudes to be identical for a given target. Specifically, we add coupling constraints so that a target state ``on'' transition co-occurs with a target position ``on'' transition, and likewise for the ``off'' transitions. We also constrain the target position value so that a pattern position change (pattern translation) can co-occur with the repetition transitions ($t^2_6, t^2_{13}$) for the target pattern, but is otherwise disallowed.\n\nAlthough it would be possible to couple the target state transitions $h^3_t$ directly with the target position transitions $h^1_t$, each of the transitions $t^2_i$ would then have couplings for each of the $P$ positions, resulting in a potentially large number of coupling basis vectors. We can reduce the number of required coupled transitions by introducing an intermediate high-level position transition variable $h^2_t$. The target state transition variable $h^3_t$ is coupled to $h^2_t$ via the factorization:\n\n\\begin{align}\n\\label{eq:h3Toh2CouplingTarget}\n\\left[ \\begin{array}{c}\nH^3 \\\\\nH^2 \\end{array} \\right] =& W^5 V^2 \\notag \\\\\n=& \\left[ \\begin{array}{c}\nW^5_1 \\\\\nW^5_2 \\end{array} \\right] V^2\n\\end{align}\n\nand $h^2_t$ is coupled to the target position transition variable $h^1_t$ via the factorization:\n\n\\begin{align}\n\\label{eq:h2Toh1CouplingTarget}\n\\left[ \\begin{array}{c}\nH^2 \\\\\nH^1 \\end{array} \\right] =& W^4 V^1 \\notag \\\\\n=& \\left[ \\begin{array}{c}\nW^4_1 \\\\\nW^4_2 \\end{array} \\right] V^1\n\\end{align}\n\n\nLet $h^2_t$ correspond to a 4-dimensional column vector such that each component corresponds to a distinct type of position transition. 
Specifically, we let $h^2_t = a_1 = (1,0,0,0)^T$ represent ``remaining at the same position.'' We let $h^2_t = a_2 = (0,1,0,0)^T$ represent ``position change.'' We let $h^2_t = a_3 = (0,0,1,0)^T$ represent ``position turn on.'' We let $h^2_t = a_4 = (0,0,0,1)^T$ represent ``position turn off.''\n\nTable~\\ref{table:trans_coupling_target_tran_abstract_pos} shows the transition coupling constraints between $h^3_t$ and $h^2_t$. We construct the parameter matrix $W^5$ from this table as follows. We see from \\ref{eqn:W_6target} that setting $h^3_t$ equal to the standard unit basis vector $e_i \\in \\mathbb{R}^9$ will activate transition $t^2_i$. We then construct the $j$'th column of $W^5$ as the vertical concatenation of $e^j$ on top of $a_k$ such that $e^j$ and $a_k$ correspond to the $j$'th target state transition and high-level position transition in the table, respectively. We then arrive at the following 9 x 15 matrix for $W^5$:\n\n\\begin{align}\nW^5 =& \\left[ \\begin{array}{ccccccccccccccc} e_6 & e_6 & e_{12} & e_{12} & e_{7} & e_{13} & e_5 & e_{11} & e_1 & e_2 & e_3 & e_4 & e_8 & e_9 & e_{10}\\\\\na_2 & a_1 & a_2 & a_1 & a_3 & a_3 & a_4 & a_4 & a_1 & a_1& a_1& a_1& a_1& a_1& a_1 \\end{array} \\right]\n\\end{align}\n\n\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{| c ||c | c | c | c | c | c | c | c |}\n \\hline\n Target state transition & $t^2_6$ & $t^2_6$ & $t^2_{12}$ & $t^2_{12}$ & $t^2_7$ & $t^2_{13}$ & $t^2_5$ & $t^2_{11}$ \\\\ \\hline\n High-level position transition & $t_{change}$ & $t_{self}$ & $t_{change}$ & $t_{self}$ & $t_{on}$ & $t_{on}$ & $t_{off}$ & $t_{off}$ \\\\ \\hline\n\\end{tabular}\n\n\\begin{tabular}{| c ||c | c | c | c | c | c | c | }\n \\hline\n Target state transition & $t^2_1$ & $t_2$ & $t_3$ & $t_4$ & $t_8$ & $t_9$ & $t_{10}$ \\\\ \\hline\n High-level position transition & $t_{self}$ & $t_{self}$ & $t_{self}$ & $t_{self}$ & $t_{self}$ & $t_{self}$ & $t_{self}$ \\\\ \\hline\n\\end{tabular}\n \\caption{Transition coupling constraints between the target state transitions $h^3_t$ and the high-level position transitions $h^2_t$. Each column specifies a target state transition and high-level position transition that can co-occur.}\n \\label{table:trans_coupling_target_tran_abstract_pos}\n \\end{center}\n \\end{table}\n\nWe construct $W^4$ as follows. Note that the top 4 rows of $W^4$ correspond to the submatrix $W^4_1$, which contain the basis activations for $h^2_t$. The bottom submatrix $W^4_2$ of $W^4$ contains the activations for $h^1_t$. Note that setting $h^1_t$ equal to the standard unit basis vector $e_i \\in \\mathbb{R}^{|Tpos|}$ activates the $i$'th column of $W^2$, corresponding to a position transition in $Tpos =\\{t_{self}, t_{change}, t_{on}, t_{off}\\}$. For each $i \\in 1 , \\dots, |Tpos|$, we add a corresponding column $w^4_i$ to $W^4$ as follows. If the $h^1_t = e_i$ activates a transition in $W^2$ in the set $t_{self}$, let $w^4_i$ equal the vertical concatenation of $a_1$ on top of $e_i$. Otherwise, if $h^1_t = e_i$ activates a transition in $W^2$ in the set $t_{change}$, let $w^4_i$ equal the vertical concatenation of $a_2$ on top of $e_i$. Otherwise, if $h^1_t = e_i$ activates a transition in $W^2$ in the set $t_{on}$, let $w^4_i$ equal the vertical concatenation of $a_3$ on top of $e_i$. 
Otherwise, if $h^1_t = e_i$ activates a transition in $W^2$ in the set $t_{off}$, let $w^4_i$ equal the vertical concatenation of $a_4$ on top of $e_i$.\n\n\\subsubsection{Target pattern translation coupling}\n\nThe target pattern translation coupling corresponds to the factorization:\n\n\\begin{align}\n\\label{eq:target_pattern_translation_coupling}\n\\left[ \\begin{array}{c}\nX^3 \\\\\nX^2 \\\\\nX^1 \\end{array} \\right] =& W^1 U^1 \\notag \\\\\n=& \\left[ \\begin{array}{c}\nW^1_1 \\\\\nW^1_2 \\\\\nW^1_3 \\end{array} \\right] U^1\n\\end{align}\n\nwhere $W^1_1$ has $patternSize$ rows, $W^1_2$ has $P$ rows, and $W^1_3$ has $P -1 + patternSize$ rows.\n\n\nWe wish for the target emission vectors to appear translated (vertically shifted) in the observation vectors. An observation vector $x^1_t$ has dimension greater than the target emission vector $x^3_t$ so that $x^3_t$ may appear as a subvector in $x^1_t$. The number of components by which $x^3_t$ is translated is specified by the pattern shift amount $x^2_t$. If component $i$ of $x^3_t$ has value $\\alpha$ and component $j$ of $x^2_t$ also has value $\\alpha$, then we wish for component $k = i + j - 1$ of $x^1_t$ to also have value $\\alpha$. We then construct $W^1$ as follows. For each combination of $i \\in 1, \\dots, dimension(x^3_t)$ and $j \\in 1, \\dots, dimension(x^2_t)$, add a column $q_{i_j}$ to $W^1$. The subcolumn of $q_{i_j}$ corresponding to $W^1_1$ has only the $i$'th component equal to 1 and all other components 0-valued. The subcolumn of $q_{i_j}$ corresponding to $W^1_2$ has only the $j$'th component equal to 1 and all other components 0-valued. The subcolumn of $q_{i_j}$ corresponding to $W^1_3$ has only the $k$'th component equal to 1 and all other components 0-valued. \n\nFigure~\\ref{fig:W1W2W3} shows an image plot of the submatrices of $W^1$ for the case where $patternSize = 5$, $dimension(x^2_t) = 15$, and $dimension(x^1_t) = P -1 + patternSize = 19$. Note that $W^1$ has $patternSize P = 75$ columns. Note that each column in $W^1_1, W^1_2, W^1_3$ has column sum equal to 1 so that $x^3_t$, $x^2_t$, and $x^1_t$ are modeled as having equal column sums. In our model, $x^1_t$ is observed and all other variables are hidden. Note that if a given component of $x^1_t$ has some positive value, then there are typically many basis columns in $W^1$ that can explain it, meaning that many solutions for $x^2_t, x^3_t$ are generally possible. However, the other factorizations in the model will add enough additional constraints that a unique solution for the hidden variables can still be possible.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=100ex]{.\/figures\/W1W2W3.jpg}\n\\caption{Submatrices $W^1_1$, $W^1_2$, and $W^1_3$ of $W^1$.}\n\\label{fig:W1W2W3}\n\\end{figure}\n\n\n\nA learning and inference algorithm for this network can be obtained by applying Algorithm~\\ref{alg:inference_learning1} to the system of factorization equations (\\ref{eq:target_state_transition_model}) - (\\ref{eq:target_pattern_translation_coupling}). Appendix~\\ref{sec:learn_inf_target_tracking} shows the pseudocode for the algorithm. Since all parameter matrices are known, we run the algorithm in inference-only mode with learning disabled in the following section. \n\n\\subsection{Empirical Results}\n\nWe now present empirical results for the target tracking model. We make use of the parameter matrices $\\{W^i\\}$ presented in the previous section so that the model parameters can be considered known. 
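\n\nAs an aside, the inference-only mode is conceptually simple: with every parameter matrix held fixed, each factorization equation is repeatedly re-solved for its hidden factor using multiplicative NMF update steps. The following NumPy sketch is our own illustrative code rather than the pseudocode of Appendix~\\ref{sec:learn_inf_target_tracking}; the function and variable names are hypothetical, and in the full network one such update would be applied to each factorization equation on every iteration, with shared variables re-synchronized between updates.\n\n\\begin{verbatim}\n# Illustrative sketch: inference-only multiplicative updates for a single\n# factorization V ~ W H, where W is known (fixed) and H is hidden.\nimport numpy as np\n\ndef infer_hidden(V, W, n_iter=10000, eps=1e-9):\n    # Initialize the hidden activations to random positive values.\n    H = np.random.rand(W.shape[1], V.shape[1])\n    for _ in range(n_iter):\n        # Standard multiplicative NMF update for H (Euclidean cost);\n        # it preserves the non-negativity of H.\n        H *= (W.T @ V) / (W.T @ W @ H + eps)\n    return H\n\\end{verbatim}\n\n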
We present inference-only results for the case where the observation sequence $X^1$ is either fully or partially observed.\n\n\\subsubsection{Clean target observations}\n\\label{sec:clean_target_obs}\nWe first consider the case where $X^1$ is fully observed and consists of noiseless observations. All other model variables are hidden and will be inferred. The bottom image plot in Figure~\\ref{fig:hdyn2_clean_X5X4X3X2X1} shows the target observations. There are three target objects represented: two A-type targets and one B-type target. Observe that there are two targets present during time slices $t = 4,5,6,7$. The targets have magnitudes of 1.0, 0.8, and 0.6. These target objects were chosen such that they conform to the target model and therefore are exactly representable under the model. We used $P = 15$ position values for these experiments, yielding a pattern shift amount from -7 to 7. That is, the position index $i = 8$ corresponds to the 0 shift amount, where a pattern is in the center of the observation vector. \n\n\nWe used the inference algorithm described in Appendix~\\ref{sec:learn_inf_target_tracking}, which is a special case of the general inference and learning algorithm presented in Algorithm~\\ref{alg:inference_learning1}. The top four image plots in Figure~\\ref{fig:hdyn2_clean_X5X4X3X2X1} show the inference results for $X^2$, $X^3$, $X^4$, $X^5$. We note that only $X^1$ was observed, so that higher-level variables $U^1$, $U^2$, $U^3$, $H^1$, $H^2$, $H^3$, $V^1$, and $V^2$ were also hidden, but are not shown here since they are less relevant.\n\nWe found that 10000 iterations were sufficient for all factorization equations to converge completely (RMSE less than 10e-4), which took approximately 1 minute to run on a PC with a 3.0 GHz Intel Core2 Duo CPU. Given the observed sequence, exactly 1 solution is possible. All results for this model were obtained by running the inference algorithm for 10000 iterations. We observed that on repeated runs, the inference algorithm always converged to the correct exact solution (within our RMSE tolerance). We see that the observed sequence $X^1$ was successfully decomposed into the target label ($X^5$), target state ($X^4$), target emission pattern ($X^3$), pattern shift amount ($X^2$), and the other higher-level hidden variables which are not shown for space reasons.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=100ex]{.\/figures\/hdyn2_clean_X5X4X3X2X1.jpg}\n\\caption{Inference results for $X^2$, $X^3$, $X^4$, $X^5$. Note that only the bottom $X^1$ is observed and consists of clean target observations. All other variables are hidden and were correctly inferred.}\n\\label{fig:hdyn2_clean_X5X4X3X2X1}\n\\end{figure}\n\n\n\\subsubsection{Noisy target observations}\n\nWe now add some noise to the previous clean target observations to see how inference will perform. The bottom image plot in Figure~\\ref{fig:hdyn2_noisy_X5X4X3X2X1_0_sparseness} shows the observed noisy target observations. Noise consisting of uniformly distributed values in the range $[0,0.06]$ was added to the clean target observations from the previous section to form the noisy target observations $X^1$. The maximum noise amount is thus one tenth of the smallest target magnitude.\n\n\nFigure~\\ref{fig:hdyn2_noisy_X5X4X3X2X1_0_sparseness} shows the inference results for $X^2$, $X^3$, $X^4$, $X^5$. As before, the results for the other variables in the model are not shown for space reasons. 
We observe that the inferred values also appear somewhat noisy, since the model is also trying to explain the noise in the target observations. We observed that the visual cleanness of the results appears to degrade gradually as more noise is added. Small amounts of noise are tolerated quite well. A noise amount less than 0.02 or so produces visually clean results.\n \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=100ex]{.\/figures\/hdyn2_noisy_X5X4X3X2X1_0_sparseness.jpg}\n\\caption{Inference results for $X^2$, $X^3$, $X^4$, $X^5$. Note that only the bottom $X^1$ is observed and consists of target observations corrupted by additive noise.}\n\\label{fig:hdyn2_noisy_X5X4X3X2X1_0_sparseness}\n\\end{figure}\n\n\n\n\\subsubsection{Noisy target observations using sparse inference}\n\nGiven the previous results for the case of noisy observations, we might like for the inference algorithm to attempt to ignore as much of the noise as possible while still accounting for the non-noise components. We modify the inference algorithm so that the NMF update steps are replaced by corresponding sparse NMF update steps using the nsNMF algorithm \\cite{nsNMF2006}, which is described in Appendix~\\ref{appendix_sparse_nmf}. We have observed empirically that the quality of the inference results seems to be improved if the sparseness parameter value is gradually increased during the inference procedure. For these results, the sparseness value was increased linearly from 0 at iteration 5000 to 0.05 at iteration 10000.\n\nThe input observations are generated with the same targets plus additive noise as in the previous section. Figure~\\ref{fig:hdyn2_noisy_X5X4X3X2X1_gradual_sparseness} shows the inference results, where the bottom subplot shows the noisy target observations for reference. We observe that the inference results are visually less cluttered with noise compared to the previous non-sparse inference case. However, we observe that the magnitudes of the inference results are off by approximately a factor of 2, so that some renormalization would be needed as a post-processing step.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=100ex]{.\/figures\/hdyn2_noisy_X5X4X3X2X1_gradual_sparseness.jpg}\n\\caption{Inference results for $X^2$, $X^3$, $X^4$, $X^5$. Note that only the bottom $X^1$ is observed and consists of target observations corrupted by additive noise. Gradually increasing sparseness was used in the inference algorithm.}\n\\label{fig:hdyn2_noisy_X5X4X3X2X1_gradual_sparseness}\n\\end{figure}\n\n\\subsubsection{Noisy target observations using modified sparse inference}\n\nAs an experiment, we now modify the previous sparse inference procedure so that the observation sequence $X^1$ is made to be hidden (i.e., all model variables are now hidden) during the final several inference iterations. The motivation for this is that after basically reaching convergence on the noisy observations data, the model has settled on a somewhat sparse solution. However, the noise in $X^1$ cannot be ignored completely, and so the model tries to find a configuration of the hidden variables that best explains the noise to some extent. However, if we stop updating $X^1$ on each iteration with the actual observations, the model can then be free to ignore the noise and converge to a sparser solution. We need to be careful, though. 
If the model is allowed to run a large number of iterations with $X^1$ now hidden, it may gradually ``forget'' $X^1$ and settle on some other configuration of variables.\n\nWe again supply the same target observations corrupted by additive noise as the previous examples. Sparseness was 0 for iterations 0 through 5000, and then gradually increased linearly reaching a maximum sparseness of 0.05 at iteration 9980. For the final 20 iterations, $X^1$ was made hidden. This was accomplished by simply disabling the re-copying of the observations data into the corresponding $X^1$ local variable during inference. Thus, on each iteration, the down propagated values for $X^1$ would be reused on the successive up propagation. \n\nFigure~\\ref{fig:hdyn2_noisy_X5X4X3X2X1} shows the inference results for $X^1$, $X^2$, $X^3$, $X^4$, $X^5$. Note that the inference results appear quite clean. However, the relative magnitudes of two of the targets are not correct. Note that the lowest-magnitude target in the figure corresponds to an inferred target with magnitude greater than one of the other inferred targets.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=100ex]{.\/figures\/hdyn2_noisy_X5X4X3X2X1.jpg}\n\\caption{Inference results for $X^1$, $X^2$, $X^3$, $X^4$, $X^5$. Gradually increasing sparseness was used. Here $X^1$ was observed for the first 9980 iterations and then made to be hidden for the last 20 iterations. Note that since $X^1$ was hidden for several iterations, the inferred values for $X^1$ here differ from those of the actual observed $X^1$ in the bottom image plot.}\n\\label{fig:hdyn2_noisy_X5X4X3X2X1}\n\\end{figure}\n\n\\subsubsection{Prediction}\n\nWe now consider the case where some of the target observations are hidden. We will then infer their values as part of the inference procedure, along with the other hidden variables. We first consider the case where time slice 25 of the clean $X^1$ from Section~\\ref{sec:clean_target_obs} is hidden. The regular inference algorithm was used (i.e., the usual non-sparse NMF updates were used).\n\nFigure~\\ref{fig:hdyn2_clean_X5X4X3X2X1_prediction1} shows the inference results for $X^1$, $X^2$, $X^3$, $X^4$, $X^5$. We observe that the inferred values for $x^1_{25}$ are the same as for the case of a fully observed $X^1$. This makes sense since the transition model in Figure~\\ref{fig:targetTracking1} specifies that target state 9 ($x^4_{25}$) must follow target state 8 ($x^4_{24}$) and a pattern shift is not allowed during this transition (i.e., $X^2$ must remain in the same state). Thus, given the observed time slices of $X^1$ before and after 25, there is only one possible configuration of variable values for time slice 25 that satisfies the model, and our inference algorithm successfully solved for it.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=100ex]{.\/figures\/hdyn2_clean_X5X4X3X2X1_prediction1.jpg}\n\\caption{Inference results for $X^1$, $X^2$, $X^3$, $X^4$, $X^5$. $x^1_{25}$ was hidden and all other time slices of $X^1$ are observed. We observe that the inference results are the same as for the case of a fully observed $X^1$ since only a single solution is possible.}\n\\label{fig:hdyn2_clean_X5X4X3X2X1_prediction1}\n\\end{figure}\n\nWe now set the last 5 time slices of $X^1$ ($x^1_{22}, x^1_{23}, x^1_{24}, x^1_{25}, x^1_{26}$) to be hidden and run the inference algorithm. Figure~\\ref{fig:hdyn2_clean_X5X4X3X2X1_prediction2} shows the corresponding inference results. 
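\n\nAs an implementation note, hiding selected time slices of $X^1$ amounts to clamping only the observed columns on each iteration, while the hidden columns simply keep whatever values were propagated down to them. A minimal NumPy sketch of this masking step (our own illustrative code with hypothetical names, not the implementation used for the experiments) is:\n\n\\begin{verbatim}\n# Illustrative sketch: re-copy only the observed time slices of X1 on each\n# inference iteration; hidden slices keep their propagated (inferred) values.\nimport numpy as np\n\ndef clamp_observed(X1_current, X1_observed, observed_cols):\n    X1 = X1_current.copy()\n    X1[:, observed_cols] = X1_observed[:, observed_cols]\n    return X1\n\n# Example: with T = 26 time slices, hide the last 5 (time slices 22-26).\nobserved_cols = np.arange(26 - 5)  # 0-based columns for time slices 1-21\n\\end{verbatim}\n\n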
We observe that the inferred values for the final 5 time slices now correspond to a superposition of states since multiple solutions are possible.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=100ex]{.\/figures\/hdyn2_clean_X5X4X3X2X1_prediction2.jpg}\n\\caption{Inference results for $X^1$, $X^2$, $X^3$, $X^4$, $X^5$. Time slices 22 through 26 of $X^1$ are hidden. We observe that the inferred values for the final 5 time slices now correspond to a superposition of states since multiple solutions are possible.}\n\\label{fig:hdyn2_clean_X5X4X3X2X1_prediction2}\n\\end{figure}\n\n\nIn performing these experiments, we observed that the inference procedure appears to be quite robust, since for the noiseless case, complete convergence (to a very small RMSE) was achieved on every run that we performed. Even in the noisy case, the noise in the inferred results appeared to increase gradually in proportion to the noise in the observation sequence. A possible drawback is that a large number of iterations were needed in order for the algorithm to converge to a solution. We have not yet explored how the number of iterations to convergence varies with network size or graphical structure.\n\n\n\\section{Sparse hierarchical sequential data model}\n\\label{sec:hierarchicalSeqDecomp}\n\n\n\nIn this section we consider a multilevel hierarchical DPFN for sequential data in which the activations required in order to explain a given sequence (or non-negative superposition of sequences) become more sparse as we ascend the hierarchy. We then present learning and inference results on magnitude spectrogram data.\n\n\\subsection{Model}\n\\label{sec:sparseHierarchicalModel}\n\nIn the sequential data models presented in the previous sections, we modeled $x_t$ as being simultaneously representable as both a linear function of parent $h_{t-1}$ and another linear function of parent $h_t$. Thus, a nonzero value of the child node $x_t$ implied nonzero values for all of its parents and vice versa. In this section, we consider an alternative network structure for sequential data in which the value of a child node $x_t$ is modeled as a sum of linear functions of multiple parents. Under this representation, a nonzero value of a child node implies that at least one parent node must also have a nonzero value and vice versa. Thus, a nonzero value of a single parent node can explain a sequence of possibly several nonzero child node values. \n\n\n \nFigure~\\ref{fig:1hiddenLayerRep2} shows a DPFN with two layers. We note that Non-Negative Matrix Factor Deconvolution \\cite{NMFParis} appears to be a special case of this network. The variables are non-negative vectors such that $x^i_t \\in \\mathbb{R}^{M_i}$. Typically, $\\{x^1_t\\}$ would correspond to the observed data sequence and $\\{x^2_t\\}$ would be hidden. A node $x^1_t$ with parents $x^2_{t-1}$ and $x^2_t$ corresponds to the factorization:\n\n\\begin{align}\nx^1_t =& W_1 x^2_t + W_2 x^2_{t-1} \\notag \\\\\n=& \\left[ \\begin{array}{cc} W_1 & W_2 \\end{array} \\right] \\left[ \\begin{array}{c} x^2_t\\\\\nx^2_{t-1} \\end{array} \\right] \n\\end{align}\n\nFor time slices $x^1_1 \\dots x^1_T$ we then have:\n\n\\begin{align}\n\\left[ \\begin{array}{ccccc} x^1_1 & x^1_2 & x^1_3 & \\dots & x^1_T \\end{array} \\right] = \\left[ \\begin{array}{cc} W_1 & W_2 \\end{array} \\right] \\left[ \\begin{array}{ccccc}\nx^2_1 & x^2_2 & x^2_3 & \\dots & x^2_T \\\\\nx^2_0 & x^2_1 & x^2_2 & \\dots & x^2_{T-1} \\end{array} \\right] \n\\end{align}\n\nwhere we note that $x^2_0 = 0$. 
Denoting the right-most matrix above as $X^2_S$, we can then express the factorization more concisely as:\n\n \\begin{align}\n X^1 =& \\left[ \\begin{array}{cc} W_1 & W_2 \\end{array} \\right] X^2_S\\\\\n=& W X^2_S\n \\end{align}\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=60ex]{.\/figures\/1hiddenLayerRep2.pdf}\n\\caption{A 2-level DPFN for a sparse hierarchical sequence model.}\n\\label{fig:1hiddenLayerRep2}\n\\end{figure}\n\nNote the duplicated variables in $X^2_S$. Since $x^2_j$ appears in two consecutive columns of $X^2_S$, we see that activating $x^2_j$ will correspond to the $j$'th column of $W_1$ being activated in time slice $j$ and the $j$'th column of $W_2$ being activated in time slice $j+1$. Thus, one can think of corresponding columns of $W_1$ and $W_2$ as comprising a transition basis pair. \n\nWe might consider normalizing $W$ so that all columns sum to 1. Alternatively, we could consider normalizing $W$ so that the sum of the corresponding columns of all the $W_j$ sum to 1. That is, we can view the matrix consisting of the $j$'th column of each consecutive $W_i$ as a basis sequence. \n\n\n\nFigure~\\ref{fig:2hiddenLayerRep2V2} shows a 3-level network. A sequence of nonzero values $x^1_t, x^1_{t+1}, x^1_{t+2}, x^1_{t+3}$ in level 1 can be represented by two nonzero values $x^2_t, x^2_{t+2}$ in level 2, and by a single nonzero value of $x^3_t$ in level 3. That is, a single nonzero value $x^{i+1}_t$ in level $i+1$ represents a pair of consecutive values in level $i$. If viewed in a generative setting, a single nonzero level 3 variable $x^3_1$ can cause the level 2 variables $x^2_1, x^2_3$ to be nonzero, which in turn can cause the level 1 variables $x^1_1, x^1_2, x^1_3, x^1_4$ to be nonzero.\n\nSuppose that the level 1 variables $x^1_1, \\dots, x^1_T$ are observed. Provided that the parameter matrices $W^1 = \\left[ \\begin{array}{cc} W^1_1 & W^1_2 \\end{array} \\right]$ and $W^2 = \\left[ \\begin{array}{cc} W^2_1 & W^2_2 \\end{array} \\right]$ contain sufficient basis columns to represent the possible observation pairs $(x^1_t, x^1_{t+1})$, a given sequence can be represented such that only every other level 2 variable is nonzero, and only every 4th level 3 variable is nonzero. Thus, as we ascend the hierarchy, the activations become increasingly sparse.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=60ex]{.\/figures\/2hiddenLayerRep2V2.pdf}\n\\caption{A 3-level DPFN for a sparse hierarchical sequence model. 
The first five time slices are shown.}\n\\label{fig:2hiddenLayerRep2V2}\n\\end{figure}\n\nWe represent each level 1 variable in Figure~\\ref{fig:2hiddenLayerRep2V2} as:\n\n\\begin{align}\nx^1_t =& W^1_1 x^2_t + W^1_2 x^2_{t-1} \\notag \\\\\n=& \\left[ \\begin{array}{cc} W^1_1 & W^1_2 \\end{array} \\right] \\left[ \\begin{array}{c} x^2_t\\\\\nx^2_{t-1} \\end{array} \\right] \n\\end{align}\n\nLikewise, we represent each level 2 variable as:\n\n\\begin{align}\nx^2_t =& W^2_1 x^3_t + W^2_2 x^3_{t-2} \\notag \\\\\n=& \\left[ \\begin{array}{cc} W^2_1 & W^2_2 \\end{array} \\right] \\left[ \\begin{array}{c} x^3_t\\\\\nx^3_{t-2} \\end{array} \\right] \n\\end{align}\n\nFor a network with $T$ time slices, we then have the following factorizations:\n\n\\begin{align}\n\\left[ \\begin{array}{ccccc} x^1_1 & x^1_2 & x^1_3 & \\dots & x^1_T \\end{array} \\right] = \\left[ \\begin{array}{cc} W^1_1 & W^1_2 \\end{array} \\right] \\left[ \\begin{array}{ccccc}\nx^2_1 & x^2_2 & x^2_3 & \\dots & x^2_T \\\\\nx^2_0 & x^2_1 & x^2_2 & \\dots & x^2_{T-1} \\end{array} \\right] \n\\end{align}\n\nNote that $x^2_0 = 0$.\n\n\\begin{align}\n\\left[ \\begin{array}{ccccc} x^2_1 & x^2_2 & x^2_3 & \\dots & x^2_T \\end{array} \\right] = \\left[ \\begin{array}{cc} W^2_1 & W^2_2 \\end{array} \\right] \\left[ \\begin{array}{ccccc}\nx^3_1 & x^3_2 & x^3_3 & \\dots & x^3_T \\\\\nx^3_{-1} & x^3_0 & x^3_1 & \\dots & x^3_{T-2} \\end{array} \\right] \n\\end{align}\n\nNote that $x^3_{-1} = x^3_0 = 0$.\n\n\n\nThe network in Figure~\\ref{fig:2hiddenLayerRep2V2} extends immediately to networks with an arbitrary number of levels, arbitrary numbers of parent variables, and arbitrary separation (in time slices) between consecutive child nodes in any given level. Consider a network with $L$ levels and such that the variables $x^{i+1}_t$ in level $i+1$ each have $p$ child nodes with a separation of $q$ time slices between nodes. Let $x^{i+1}_t$ at each time slice $t$ be an $M_{i+1}$ dimensional column vector and let $x^i_t$ at each time slice $t$ be $M_i$ dimensional column vector. 
For each time slice $t$ and $i \\in \\{1, \\dots, L-1\\}$, $x^i_t$ is then expressed by the factorization:\n\n\\begin{align}\nx^i_t =& W^i_1 x^{i+1}_t + W^i_2 x^{i+1}_{t-q} + W^i_3 x^{i+1}_{t - 2 q} + \\dots + W^i_p x^{i+1}_{t- (p-1) q}\\\\\n=& \\left[ \\begin{array}{ccccc} W^i_1 & W^i_2 & W^i_3 & \\dots & W^i_p \\end{array} \\right] \\left[ \\begin{array}{c} x^{i+1}_t\\\\\nx^{i+1}_{t-q}\\\\\nx^{i+1}_{t - 2 q}\\\\\n\\vdots\\\\\nx^{i+1}_{t- (p-1) q} \\notag \\\\\n \\end{array} \\right] \n\\end{align}\n\nFor a network with $T$ time slices, we then have the following factorization equation for each $i \\in \\{1, \\dots, L-1\\}$, $x^i_t$:\n\n\\begin{align}\n\\left[ \\begin{array}{ccccc} x^i_1 & x^i_2 & x^i_3 & \\dots & x^i_T \\end{array} \\right] = \\left[ \\begin{array}{ccccc} W^i_1 & W^i_2 & W^i_3 & \\dots & W^i_p \\end{array} \\right] \\left[ \\begin{array}{ccccc}\nx^{i+1}_1 & x^{i+1}_2 & x^{i+1}_3 & \\dots & x^{i+1}_T \\\\\nx^{i+1}_{1-q} & x^{i+1}_{2-q} & x^{i+1}_{3-q} & \\dots & x^{i+1}_{T-q} \\\\\nx^{i+1}_{1-2 q} & x^{i+1}_{2-2 q} & x^{i+1}_{3-2 q} & \\dots & x^{i+1}_{T-2 q}\\\\\n\\vdots & \\vdots & \\vdots & \\vdots & \\vdots \\\\\nx^{i+1}_{1-(p-1) q} & x^{i+1}_{2-(p-1) q} & x^{i+1}_{3-(p-1) q} & \\dots & x^{i+1}_{T-(p-1) q} \\\\\n \\end{array} \\right] \n\\end{align}\n\nAll $x^{i+1}_j$ such that $j < 1$ are zero-valued.\n\nLetting \n\\begin{align}\nX^i = \\left[ \\begin{array}{ccccc} x^i_1 & x^i_2 & x^i_3 & \\dots & x^i_T \\end{array} \\right]\n\\end{align}\n\n\\begin{align}\nW^i = \\left[ \\begin{array}{ccccc} W^i_1 & W^i_2 & W^i_3 & \\dots & W^i_p \\end{array} \\right]\n\\end{align}\n\n\\begin{align}\nX^{i+1}_S = \\left[ \\begin{array}{ccccc}\nx^{i+1}_1 & x^{i+1}_2 & x^{i+1}_3 & \\dots & x^{i+1}_T \\\\\nx^{i+1}_{1-q} & x^{i+1}_{2-q} & x^{i+1}_{3-q} & \\dots & x^{i+1}_{T-q} \\\\\nx^{i+1}_{1-2 q} & x^{i+1}_{2-2 q} & x^{i+1}_{3-2 q} & \\dots & x^{i+1}_{T-2 q}\\\\\n\\vdots & \\vdots & \\vdots & \\vdots & \\vdots \\\\\nx^{i+1}_{1-(p-1) q} & x^{i+1}_{2-(p-1) q} & x^{i+1}_{3-(p-1) q} & \\dots & x^{i+1}_{T-(p-1) q}\\\\\n\\end{array} \\right]\n\\end{align}\n \nwe can express the factorization more concisely as:\n \n \\begin{align}\n\\label{eq:sparse_hierarchical_fact}\n X^i = W^i X^{i+1}_S\n \\end{align}\n\nNote that $X^i$ has size $M_i$ by $T$. Each $W^i_j$ has $M_{i+1}$ columns. The matrix $W^i$ has size $M_i$ by $ M_{i+1}$. The matrix $X^{i+1}_S$ has size $p M_{i+1}$ by $T$.\nFor example, a network with $L = 2$ levels, $p = 3$ children and $q = 2$ seperation will correspond to the factorization:\n\n\\begin{align}\n\\left[ \\begin{array}{ccccccccc} x^1_1 & x^1_2 & x^1_3 & x^1_4 & x^1_5 & x^1_6 & x^1_7 & \\dots & x^1_T \\end{array} \\right] = \\left[ \\begin{array}{ccc} W^1_1 & W^1_2 & W^1_3 \\end{array} \\right] \\left[ \\begin{array}{ccccccccc}\nx^2_1 & x^2_2 & x^2_3 & x^2_4 & x^2_5 & x^2_6 & x^2_7 & \\dots & x^2_T \\\\\n0 & 0 & x^2_1 & x^2_2 & x^2_3 & x^2_4 & x^2_5 & \\dots & x^2_{T-2} \\\\\n0 & 0 & 0 & 0 & x^2_1 & x^2_2 & x^2_3 & \\dots & x^2_{T-4} \\\\\n\\end{array} \\right] \n\\end{align}\n\nA learning and inference algorithm for a network with $L$ levels can be obtained by applying Algorithm~\\ref{alg:inference_learning1} to the system of factorizations equations (\\ref{eq:sparse_hierarchical_fact}). Appendix~\\ref{sec:sparse_hierarchical_inf_alg} shows the pseudocode for the algorithm.\n\n\\subsection{Empirical results}\n\nWe now present learning and inference results on magnitude spectrogram data. 
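\n\nBefore turning to the data, we note that the shifted matrix $X^{i+1}_S$ defined above is mechanical to assemble. The following NumPy sketch is our own illustrative code (the function name is ours, and we assume $(p-1)q < T$); each factorization $X^i = W^i X^{i+1}_S$ can then be handled by an off-the-shelf NMF update.\n\n\\begin{verbatim}\n# Illustrative sketch: build the shifted matrix X_S from X by stacking p\n# copies of X, where the j-th copy is delayed by j*q time slices and\n# zero-padded (entries with time index < 1 are zero-valued).\nimport numpy as np\n\ndef build_shifted(X, p, q):\n    M, T = X.shape\n    blocks = []\n    for j in range(p):\n        d = j * q                      # delay of the j-th block\n        shifted = np.zeros((M, T))\n        shifted[:, d:] = X[:, :T - d]\n        blocks.append(shifted)\n    return np.vstack(blocks)           # (p*M) x T matrix X_S\n\\end{verbatim}\n\n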
We let the lowest level variables $X^1$ correspond to a magnitude spectrogram such that $x^1_t$ represents time slice $t$ of a spectrogram. For this experiment, we use a 15 second audio clip from Beethoven's Piano Sonata No. 8 in C minor, op. 13, Adagio cantabile, performed by the author. The corresponding spectrogram, with 512 frequency bins and 622 time slices (41 time slices per second), is shown in the bottom plot of Figure~\\ref{fig:actual_vs_inferred_spectrogram_0sparseness}.\n\nWe make use of a DPFN with the following network parameters: we use a 4-level network with $X^1$ observed and $X^2$, $X^3$, and $X^4$ hidden. Let $x^2_t \\in \\mathbb{R}^{50}$ have $p^2 = 4$ child nodes and a separation of $q^2 = 1$ time slice. Let $x^3_t \\in \\mathbb{R}^{40}$ have $p^3 = 4$ child nodes and a separation of $q^3 = 4$ time slices. Let $x^4_t \\in \\mathbb{R}^{40}$ have $p^4 = 4$ child nodes and a separation of $q^4 = 16$ time slices.\n\nWe use the learning and inference algorithm described in Appendix~\\ref{sec:sparse_hierarchical_inf_alg}, which is a special case of the general algorithm described in Algorithm~\\ref{alg:inference_learning1}. We start by setting $X^1$ to the spectrogram data values and initializing all other variables and model parameters $W^i$ to random positive values. We have observed that model convergence can sometimes be improved by performing the parameter learning update steps more frequently on parameter matrices that are closer to the observed data in the graphical model. In this case, we perform $W^1$ updates more frequently than $W^2$ updates, and perform $W^2$ updates more frequently than $W^3$ updates, and so on. For each level, $W^i$ was normalized so that the sum of the corresponding columns of all the $W^i_j$ sum to 1.\n\n\nFigure~\\ref{fig:inferred_spectrogram_0sparseness} shows the observed spectrogram $X^1$ and inferred activations for $X^2$, $X^3$, and $X^4$ after running the learning and inference algorithm for 800 iterations. Note that the activations become progressively more sparse as we ascend the hierarchy from $X^2$ to $X^4$, even though sparse NMF update steps were not used in the inference and learning algorithm. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=100ex]{.\/figures\/inferred_spectrogram_0sparseness.jpg}\n\\caption{Observed spectrogram $X^1$ and inferred activations for $X^2$, $X^3$, and $X^4$. Sparseness was 0.}\n\\label{fig:inferred_spectrogram_0sparseness}\n\\end{figure}\n\nFigure~\\ref{fig:actual_vs_inferred_spectrogram_0sparseness} shows the actual observed spectrogram as well as the approximation to the spectrogram obtained by using only the inferred top-level activations $X^4$ and propagating them downward to form an approximation to $X^1$. The resulting RMSE was 0.07. Only the first 250 frequency bins are shown since the higher frequency bins have negligible energy.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=100ex]{.\/figures\/actual_vs_inferred_spectrogram_0sparseness.jpg}\n\\caption{Actual vs approximated spectrogram $X^1$ for the 0 sparseness case.}\n\\label{fig:actual_vs_inferred_spectrogram_0sparseness}\n\\end{figure}\n\n\nFigure~\\ref{fig:inferred_spectrogram_sparse} shows the observed spectrogram $X^1$ and inferred activations for $X^2$, $X^3$, and $X^4$ for the case where sparse NMF updates were used in the inference and learning algorithm. 
Here, we ran the inference and learning algorithm for 400 iterations with 0 sparseness, and then increased the nsNMF sparseness parameter gradually from 0 at iteration 400 to 0.2 at iteration 800.\n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=100ex]{.\/figures\/inferred_spectrogram_sparse.jpg}\n\\caption{Observed spectrogram $X^1$ and inferred activations for $X^2$, $X^3$, and $X^4$. Sparseness was gradually increased to a value of 0.2 at 800 iterations.}\n\\label{fig:inferred_spectrogram_sparse}\n\\end{figure}\n\n\nFigure~\\ref{fig:actual_vs_inferred_spectrogram_sparse} shows the actual observed spectrogram as well as the approximation to the spectrogram obtained by using only the inferred top-level activations $X^4$ and propagating them downward to form an approximation to $X^1$. The resulting RMSE was 0.2. Only the first 250 frequency bins are shown since the higher frequency bins have negligible energy. These results are for the gradually increasing sparse NMF case. We observe that the results using sparse NMF updates appear slightly more sparse than those using the non-sparse NMF updates. However, the reconstruction error using sparse updates is also higher.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=100ex]{.\/figures\/actual_vs_inferred_spectrogram_sparse.jpg}\n\\caption{Actual vs approximated spectrogram $X^1$ for the gradually increasing sparseness case.}\n\\label{fig:actual_vs_inferred_spectrogram_sparse}\n\\end{figure}\n\n\n\nWe feel that it may be interesting to consider applying this or a similar DPFN to problems where a sparse hierarchical representation of sequential data could be useful, such as compression, noise reduction, and transcription.\n \n\n\n\n\n\n\n\\section{A DPFN for language modeling}\n\\label{sec:main_language_model}\n\n\nIn this section we propose a data model for the sequence of words in a text document, but do not present any corresponding empirical results. In this model, we perform dimensionality reduction by extracting low dimensional feature vectors corresponding to each distinct word. We then make use of a transition model for the feature vectors. By performing dimensionality reduction to extract features, we can use a much more compact transition model than would be possible by operating directly on the words themselves. The idea of extracting low-dimensional word features in order to model sequential word data in text documents and make predictions of future words was used in \\cite{bengioy-journal}.\n\nFigure~\\ref{fig:wordFeatureMode} shows the graphical model. In this model, the observed \\{$y_t : t = 1, \\dots, T$\\} correspond to the sequence of words that appear in some text document. Suppose our vocabulary size is $N$ words. Then let $y_t$ denote an $N$ dimensional vector such that the $t$'th word $i \\in \\{1, \\dots, N\\}$ in the document is represented by setting the $i$'th component of $y_t$ to 1 and all other components to zero. This model can also be used for prediction of future words, given a sequence of words up to the present $t_p$, by simply considering the word vector nodes for future time slices \\{$y_{t_p}, y_{{t_p}+1}, \\dots$\\} to be hidden.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=60ex]{.\/figures\/wordFeatureModel.pdf}\n\\caption{A DPFN representing a transition model over word features. The observed $y_t$ denotes a word vector, $x_t$ denotes a word feature vector, and $h_t$ denotes a word feature transition. 
The first four time slices are shown.}\n\\label{fig:wordFeatureMode}\n\\end{figure}\n\nWe perform dimensionality reduction to extract low-dimensional word features $x_t$ by left multiplying each word vector $y_t$ by a \\emph{word feature matrix} $B$ so that we have:\n\n\\begin{align}\nx_t = B y_t\n\\end{align}\n\nThe non-negative matrix $B$ then represents the word feature matrix, where the $i,j$'th component represents the amount of feature $i$ contained by word $j$. One might consider normalizing the columns of $B$ to have unit column sum. Letting $Y = \\left[ \\begin{array}{ccccc} y_1 & y_2 & y_3 & \\dots & y_T \\end{array} \\right]$, we then have the following matrix factorization for the word features:\n\n\\begin{align}\nX = B Y\n\\label{eqn:word_feature_factorization}\n\\end{align}\n\nThe factorization equations that describe this model are exactly the same as those for the model in Figure~\\ref{fig:1layerDynamic} with the additional above factorization for the word features in terms of the word vectors. Thus, the complete system of factorizations for this model can be expressed as the two matrix factorizations (\\ref{eqn:key_factorization}) and (\\ref{eqn:word_feature_factorization}).\n\n\nSuppose, for example, that the word ``cat'' has the features ``animal'' and ``has claws.'' The feature transition model (parametrized by $W_1$, $W_2$) could possibly have distinct basis transitions containing the ``animal'' and ``has claws'' features. For a word such as ``cat'' that has both features, both basis transitions could be simultaneously activated due to the non-negative superposition property (Section~\\ref{sec:model_specification}). Thus, a given word sequence could be explained such that each word has possibly multiple features, and that multiple distinct basis transitions in the feature transition model could simultaneously be activated to explain the words in a sequence as an additive combination of basis feature transitions. One could then think of the word feature transition model as operating on the individual word features independently and simultaneously such that the observed or inferred word sequence is consistent with them. We feel that such an additive factored language model might be a powerful way of modeling word and\/or character sequences. However, preliminary empirical results suggest that additional sparseness and\/or orthogonality constraints might need to be imposed in order to obtain meaningful solutions. \n\n\n\n\\section{Discussion}\n\nWe have presented a framework for modeling non-negative data that retains the linear representation of NMF, while providing for more structured representations using (non-probabilistic) graphical models. The graphical model provides a visual representation of the factored correlation constraints between variables. We have presented algorithms for performing the inference and learning that leverage existing non-negative matrix factorization (NMF) algorithms. Since the algorithms are inherently parallel, we expect that optimized implementations for hardware such as multi-core CPUs and GPUs will be possible. These algorithms have the advantage of being quite straightforward to implement, even for relatively complex network structures. We expect that they can be understood and implemented by anyone with a basic understanding of computer programming, matrix algebra, and graph theory, and require no background in probability theory. 
\n\nWe presented empirical results for several example networks that illustrated the robustness of the inference and learning algorithms. Our goal was to provide a sufficient variety of model structures so that the interested reader will hopefully be able to extend this work and apply it to interesting and practical applications. We observed that in all models in which noiseless synthetic data sets were used, learning and\/or inference converged to an essentially exact factorization, corresponding to a correct solution for the hidden variables. In the case of additive noise, the quality of the solutions tended to degrade gradually as the noise level was increased. In particular, in Section~\\ref{sec:learn_tran_model_from_data} we observed the somewhat surprising result that it was possible to learn an underlying transition model even when the training sequence contained only an additive mixture of realizations of the underlying model. In our magnitude spectrogram modeling example, we observed that the inference and learning solution appeared to correspond to a hierarchical sparse decomposition, with the solution sparseness increasing as we ascend the model hierarchy. As with existing NMF algorithms, however, there is no guarantee that convergence to a global optimum for any particular cost function will occur. We did observe good convergence properties in our empirical results, however. We also proposed a PFN for language modeling and plan to experiment with related models as future research. \n\nIt is still unknown how well models using this framework will perform on hard real-world problems such as speech recognition, language modeling, and music transcription, and this will be an interesting area of future research. It could also be interesting to explore the use of dynamic networks that are directional in time, such as DPFNs with a similar state space model to an HMM or HHMM, for example. Source code for implementing the models in this paper will be made available by the author.\n\n\\section{Acknowledgments}\n\nI would like to thank Michael I. Jordan for reviewing a draft of this paper and offering helpful suggestions.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nApart from giving deep insight to strongly coupled gauge theories\nand string\/M-theory compactifications in various dimensions, the\nAdS\/CFT correspondence has been recently used to explain the entropy\nof asymptotically $AdS_4$ black holes\n\\cite{Zaffaroni_BH1,Zaffaroni_BH2,Zaffaroni_BH3}. In this context,\nthe black hole entropy is computed using topologically twisted index\nof three-dimensional superconformal field theories (SCFTs)\ncompactified on a Riemann surface $\\Sigma_2$\n\\cite{twisted_index1,twisted_index2,twisted_index3,twisted_index4,twisted_index5}.\nIn the dual gravity solutions, the black holes interpolate between\nthe asymptotically $AdS_4$ and the near horizon $AdS_2\\times\n\\Sigma_2$ geometries. 
These can be interpreted as RG flows from\nthree dimensional SCFTs in the form of Chern-Simons-Matter (CSM)\ntheories possibly with flavor matters to superconformal quantum\nmechanics corresponding to the $AdS_2$ geometry.\n\\\\\n\\indent Along this line of research, BPS black hole solutions in\nfour-dimensional gauged supergravity, in particular near horizon\ngeometries, with known higher dimensional origins are very useful.\nMost of the solutions have been studied within $N=2$ gauged\nsupergravities\n\\cite{BH_M_theory1,BH_M_theory2,AdS4_BH1,AdS4_BH2,AdS4_BH3,AdS4_BH4,AdS4_BH5},\nfor recent results, see \\cite{Guarino_BH1,Guarino_BH2}. Many of\nthese solutions can be uplifted to string\/M-theory since these $N=2$\ngauged supergravities can be obtained either from truncations of the\nmaximal $N=8$ gauged supergravity, whose higher dimensional origin\nis known, or direct truncations of M-theory on Sasaki-Einstein\nmanifolds.\n\\\\\n\\indent In this work, we give an evidence for a new class of BPS\nblack hole solutions in the half-maximal $N=4$ gauged supergravity\nwith known higher dimensional origin by finding a number of new\n$AdS_2\\times \\Sigma_2$ solutions. This gauged supergravity has\n$SO(3)\\ltimes (\\mathbf{T}^3,\\hat{\\mathbf{T}}^3)$ gauge group and can\nbe obtained from a compactification of M-theory on a tri-sasakian\nmanifold \\cite{N010_truncation_Cassani}. Holographic RG flows and\nsupersymmetric Janus solutions, describing $(1+1)$-dimensional\ninterfaces in the dual SCFTs have recently appeared in\n\\cite{trisasakian_flow}. In the present paper, we will look for\nsupersymmetric solutions of the form $AdS_2\\times \\Sigma_2$ within\nthis tri-sasakian compactification.\n\\\\\n\\indent Apart from giving this type of solutions in gauged\nsupergravity with more supersymmetry, to the best of the author's\nknowledge, the results are the first example of $AdS_2\\times\n\\Sigma_2$ solutions from the truncation of M-theory on a\ntri-sasakian manifold. The truncation given in\n\\cite{N010_truncation_Cassani} gives a reduction ansatz for\neleven-dimensional supergravity on a generic tri-sasakian manifold\nincluding massive Kaluza-Klein modes. Among this type of manifolds,\n$N^{010}$ with isometry $SU(2)\\times SU(3)$ is of particular\ninterest. In this case, there is a non-trivial two-form giving rise\nto an extra vector multiplet, see \\cite{N3_spectrum1} and\n\\cite{N3_spectrum2} for the Kaluza-Klein spectrum of $AdS_4\\times\nN^{010}$. This background, discovered long ago in\n\\cite{Castellani_Romans}, preserves $N=3$ out of the original $N=4$\nsupersymmetry. There is another supersymmetric $AdS_4$ vacuum with\n$SO(3)$ symmetry and $N=1$ supersymmetry, the one broken by\n$AdS_4\\times N^{010}$. This vacuum corresponds to $AdS_4\\times\n\\tilde{N}^{010}$ geometry, with $\\tilde{N}^{010}$ being a squashed\nversion of $N^{010}$.\n\\\\\n\\indent Not much is known about the dual $N=1$ SCFT, but the dual\n$N=3$ SCFT has been proposed in a number of previous works\n\\cite{Ring_N3_superfield,Shadow_N3_multiplet}, see also\n\\cite{Hanany_Zaffaroni1,Hanany_Zaffaroni2}. This SCFT takes the form\nof a CSM theory with $SU(N)\\times SU(N)$ gauge group. It is a theory\nof interacting three hypermultiplets transforming in a triplet of\nthe $SU(3)$ flavor symmetry, and each hypermultiplet transforms as a\nbifundamental under the $SU(N)\\times SU(N)$ gauge group and as a\ndoublet of the $SU(2)_R\\sim SO(3)_R$ R-symmetry. 
There are also a\nnumber of previous works giving holographic studies of this theory\nboth in the eleven-dimensional context and in the effective $N=3$ and\n$N=4$ gauged supergravities\n\\cite{trisasakian_flow,N3_flow_Ahn1,N3_flow_Ahn2,Yi_4D_flow,N3_SU2_SU3,N3_Janus}.\nSolutions given in these works are holographic RG flows, Janus\nsolutions and supersymmetric $AdS_2\\times \\Sigma_2$ solutions with magnetic\ncharges.\n\\\\\n\\indent In this work, we consider $N=4$ gauged supergravity\nconstructed in the embedding tensor formalism in\n\\cite{N4_gauged_SUGRA}. This construction is the most general\nsupersymmetric gauging of $N=4$ supergravity in which both the\n``electric'' vector fields, appearing in the ungauged Lagrangian, and\ntheir magnetic duals can participate. Therefore, magnetic and dyonic gaugings are allowed\nin this formulation. Furthermore, this formulation\ncontains the ``purely electric'' gauged $N=4$ supergravity\nconstructed a long time ago in \\cite{Eric_N4_4D} and the non-trivial\n$SL(2,\\mathbb{R})$ phases of \\cite{de_Roo_N4_4D,N4_Wagemans} as\nspecial cases. We will look for supersymmetric $AdS_2\\times\n\\Sigma_2$ solutions in the $N=4$ gauged supergravity with a dyonic\ngauging of the non-semisimple group $SO(3)\\ltimes\n(\\mathbf{T}^3,\\hat{\\mathbf{T}}^3)$. The solutions are required to\npreserve $SO(2)\\subset SO(3)_R$, so only a particular combination of\nvector fields corresponding to this $SO(2)$ residual gauge symmetry\nappears in the gauge covariant derivative. The strategy is essentially\nsimilar to the wrapped brane solutions of\n\\cite{Maldacena_Nunez_nogo}, implementing the twist by cancelling\nthe spin connections on $\\Sigma_2$ by the $SO(2)$ gauge connection.\n\\\\\n\\indent These $AdS_2\\times \\Sigma_2$ solutions should appear as near\nhorizon geometries of supersymmetric black holes in asymptotically\n$AdS_4$ space-time. Since the $N=4$ gauged supergravity admits two\nsupersymmetric $AdS_4$ vacua with unbroken $SO(3)_R$ symmetry and\n$N=1,3$ supersymmetries, the $AdS_2\\times \\Sigma_2$ solutions should\nbe RG fixed points in one dimension of the dual $N=1,3$ SCFTs.\nAlthough the structure of the dual $N=1$ SCFT is presently not\nclear, we expect that there should be RG flows from these twisted\n$N=1,3$ SCFTs on $\\Sigma_2$ to one-dimensional superconformal\nquantum mechanics dual to the $AdS_2\\times \\Sigma_2$ solutions. In\nthis sense, the entropy of these black holes could possibly be computed from\nthe topologically twisted indices of the dual $N=1,3$ SCFTs.\nFurthermore, these solutions should provide a new class of $AdS_2$\ngeometries within M-theory.\n\\\\\n\\indent The paper is organized as follows. In section \\ref{N4theory},\nwe review $N=4$ gauged supergravity coupled to vector\nmultiplets and relevant formulae for uplifting the resulting solutions to eleven dimensions. The analysis of BPS equations for $SO(2)\\subset SO(3)_R$ singlet scalars and Yang-Mills equations, for static black hole ansatzes consistent with the symmetry of $\\Sigma_2$, will be carried out in section \\ref{flow_eq}. In section \\ref{AdS2_solution}, we will explicitly give $AdS_2\\times \\Sigma_2$ solutions to the BPS flow equations. We separately consider the $N=1$ and $N=3$ cases and end the section with the uplift formulae for embedding the solutions in eleven dimensions. We finally give some conclusions and\ncomments on the results in section \\ref{conclusions}. 
In the appendix, we collect the convention regarding `t Hooft matrices and give the explicit form of the Yang-Mills and BPS equations.\n\n\\section{$N=4$ gauged supergravity with dyonic gauging}\\label{N4theory}\nIn this section, we review $N=4$ gauged supergravity in the\nembedding tensor formalism following \\cite{N4_gauged_SUGRA}. We\nmainly focus on the bosonic Lagrangian and supersymmetry\ntransformations of fermions which provide basic ingredients for\nfinding supersymmetric solutions. Since the gauged supergravity\nunder consideration is known to arise from a tri-sasakian truncation\nof eleven-dimensional supergravity, we will also give relevant\nformulae which are useful to uplift four-dimensional solutions to\neleven dimensions. The full detail of this truncation can be found\nin \\cite{N010_truncation_Cassani}.\n\n\\subsection{$N=4$ gauged supergravity coupled to vector multiplets}\nIn the half-maximal $N=4$ supergravity in four\ndimensions, the supergravity multiplet consists of the graviton\n$e^{\\hat{\\mu}}_\\mu$, four gravitini $\\psi^i_\\mu$, six vectors\n$A_\\mu^m$, four spin-$\\frac{1}{2}$ fields $\\chi^i$ and one complex\nscalar $\\tau$. The complex scalar can be parametrized by the $SL(2,\\mathbb{R})\/SO(2)$ coset. The supergravity\nmultiplet can couple to an arbitrary number $n$ of vector multiplets containing a vector\nfield $A_\\mu$, four gaugini $\\lambda^i$ and six scalars $\\phi^m$.\nThe scalar fields can be parametrized by the $SO(6,n)\/SO(6)\\times\nSO(n)$ coset.\n\\\\\n\\indent Space-time and tangent space indices are denoted\nrespectively by $\\mu,\\nu,\\ldots =0,1,2,3$ and\n$\\hat{\\mu},\\hat{\\nu},\\ldots=0,1,2,3$. Indices $m,n=1,\\ldots, 6$ and\n$i,j=1,2,3,4$ respectively describe the vector and spinor\nrepresentations of the $SO(6)_R\\sim SU(4)_R$ R-symmetry or\nequivalently a second-rank anti-symmetric tensor and fundamental\nrepresentations of $SU(4)_R$. The $n$ vector multiplets are labeled by\nindices $a,b=1,\\ldots, n$. All the fields in the vector multiplets\nwill accordingly carry an additional index in the form of\n$(A^a_\\mu,\\lambda^{ia},\\phi^{ma})$.\n\\\\\n\\indent All fermionic fields and the supersymmetry parameters\ntransform in the fundamental representation of $SU(4)_R$ R-symmetry\nwith the chirality projections\n\\begin{equation}\n\\gamma_5\\psi^i_\\mu=\\psi^i_\\mu,\\qquad \\gamma_5\\chi^i=-\\chi^i,\\qquad \\gamma_5\\lambda^i=\\lambda^i\\, .\n\\end{equation}\nSimilarly, for the fields transforming in the anti-fundamental\nrepresentation of $SU(4)_R$, we have\n\\begin{equation}\n\\gamma_5\\psi_{\\mu i}=-\\psi_{\\mu i},\\qquad \\gamma_5\\chi_i=\\chi_i,\\qquad \\gamma_5\\lambda_i=-\\lambda_i\\, .\n\\end{equation}\n\\indent General gaugings of the matter-coupled $N=4$ supergravity\ncan be efficiently described by the embedding tensor $\\Theta$ which\nencodes the information about the embedding of any gauge group $G_0$\nin the global or duality symmetry $SL(2,\\mathbb{R})\\times SO(6,n)$.\nThere are two components of the embedding tensor $\\xi^{\\alpha M}$\nand $f_{\\alpha MNP}$ with $\\alpha=(+,-)$ and $M,N=(m,a)=1,\\ldots,\nn+6$ denoting fundamental representations of $SL(2,\\mathbb{R})$ and\n$SO(6,n)$, respectively. The electric vector fields\n$A^{M+}=(A^m_\\mu,A^a_\\mu)$, appearing in the ungauged Lagrangian,\ntogether with their magnetic dual $A^{M-}$ form a doublet under\n$SL(2,\\mathbb{R})$. 
These are denoted collectively by $A^{M\\alpha}$.\nIn general, a subgroup of both $SL(2,\\mathbb{R})$ and $SO(6,n)$ can\nbe gauged, and the magnetic vector fields can also participate in\nthe gauging. However, in this paper, we only consider gaugings with\nonly $f_{\\alpha MNP}$ non-vanishing. We then set $\\xi^{\\alpha M}$ to\nzero from now on. This also considerably simplifies many formulae\ngiven below.\n\\\\\n\\indent The full covariant derivative can be written as\n\\begin{equation}\nD_\\mu=\\nabla_\\mu-gA_\\mu^{M\\alpha}f_{\\alpha M}^{\\phantom{\\alpha\nM}NP}t_{NP}\n\\end{equation}\nwhere $\\nabla_\\mu$ is the space-time covariant derivative including the spin connections.\n$t_{MN}$ are $SO(6,n)$ generators which can be chosen as\n\\begin{equation}\n(t_{MN})_P^{\\phantom{P}Q}=2\\delta^Q_{[M}\\eta_{N]P},\n\\end{equation}\nwith $\\eta_{MN}$ being the $SO(6,n)$ invariant tensor. The gauge\ncoupling constant $g$ can be absorbed in the embedding tensor\n$\\Theta$. The original gauging considered in \\cite{Eric_N4_4D} only\ninvolves electric vector fields and is called electric gauging. In\nthis case, only $f_{+MNP}$ are non-vanishing. In the following\ndiscussions, we will consider dyonic gauging involving both electric\nand magnetic vector fields. In this case, both $A^{M+}$ and $A^{M-}$\nenter the Lagrangian, and $f_{\\alpha MNP}$ with $\\alpha=\\pm$ are\nnon-vanishing. Consistency requires the presence of two-form fields\nwhen magnetic vector fields are included. In the present case with\n$\\xi^{\\alpha M}=0$, these two-forms transform as an anti-symmetric\ntensor under $SO(6,n)$ and will be denoted by $B^{MN}_{\\mu\\nu}=B^{[MN]}_{\\mu\\nu}$.\nThe two-forms modify the gauge field strengths to\n\\begin{equation}\n\\mathcal{H}^{M\\pm}=dA^{M\\pm}-\\frac{1}{2}\\eta^{MQ}f_{\\alpha QNP}A^{N\\alpha}\\wedge A^{P\\pm}\\pm\\frac{1}{2}\\eta^{MQ}f_{\\mp QNP}B^{NP}\\, .\n\\end{equation}\nNote that for non-vanishing $f_{-MNP}$ the field strengths of\nelectric vectors $\\mathcal{H}^{M+}$ have a contribution from the two-form\nfields.\n\\\\\n\\indent Before moving to the Lagrangian, we explicitly give the parametrization of the scalar coset\nmanifold $SL(2,\\mathbb{R})\/SO(2)\\times SO(6,n)\/SO(6)\\times SO(n)$.\nThe first factor can be described by a coset representative\n\\begin{equation}\n\\mathcal{V}_\\alpha=\\frac{1}{\\sqrt{\\textrm{Im} \\tau}}\\left(\n \\begin{array}{c}\n \\tau \\\\\n 1 \\\\\n \\end{array}\n \\right)\\label{Valpha}\n\\end{equation}\nor equivalently by a symmetric matrix\n\\begin{equation}\nM_{\\alpha\\beta}=\\textrm{Re} (\\mathcal{V}_\\alpha\\mathcal{V}^*_\\beta)=\\frac{1}{\\textrm{Im}\n\\tau}\\left(\n \\begin{array}{cc}\n |\\tau|^2 & \\textrm{Re} \\tau \\\\\n \\textrm{Re} \\tau & 1 \\\\\n \\end{array}\n \\right).\n\\end{equation}\nIt should also be noted that $\\textrm{Im}(\\mathcal{V}_\\alpha\\mathcal{V}^*_\\beta)=\\epsilon_{\\alpha\\beta}$.\nThe complex scalar $\\tau$ can also be written in terms of the dilaton $\\phi$ and the axion $\\chi$ as\n\\begin{equation}\n\\tau=\\chi+ie^\\phi\\, .\n\\end{equation}\n\\indent For the $SO(6,n)\/SO(6)\\times SO(n)$ factor, we can introduce the\ncoset representative $\\mathcal{V}_M^{\\phantom{M}A}$ transforming by\nleft and right multiplications under $SO(6,n)$ and $SO(6)\\times\nSO(n)$, respectively. 
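\nFor later convenience, we note that substituting $\\tau=\\chi+ie^{\\phi}$\nin the matrix $M_{\\alpha\\beta}$ given above yields the explicit\ndilaton-axion form\n\\begin{equation}\nM_{\\alpha\\beta}=e^{-\\phi}\\left(\n \\begin{array}{cc}\n \\chi^2+e^{2\\phi} & \\chi \\\\\n \\chi & 1 \\\\\n \\end{array}\n \\right),\n\\end{equation}\nwhich follows directly from the coset representative \\eqref{Valpha}\nand is manifestly symmetric with unit determinant.\n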
The $SO(6)\\times SO(n)$ index will be split as\n$A=(m,a)$ according to which the coset representative can be written as\n$\\mathcal{V}_M^{\\phantom{M}A}=(\\mathcal{V}_M^{\\phantom{M}m},\\mathcal{V}_M^{\\phantom{M}a})$.\nAs an element of $SO(6,n)$, the matrix $\\mathcal{V}_M^{\\phantom{M}A}$ also\nsatisfies the relation\n\\begin{equation}\n\\eta_{MN}=-\\mathcal{V}_M^{\\phantom{M}m}\\mathcal{V}_N^{\\phantom{M}m}+\\mathcal{V}_M^{\\phantom{M}a}\\mathcal{V}_N^{\\phantom{M}a}\\,\n.\n\\end{equation}\nAs in the $SL(2,\\mathbb{R})\/SO(2)$ factor, the $SO(6,n)\/SO(6)\\times\nSO(n)$ coset can also be parametrized in term of a symmetric matrix\ndefined by\n\\begin{equation}\nM_{MN}=\\mathcal{V}_M^{\\phantom{M}m}\\mathcal{V}_N^{\\phantom{M}m}+\\mathcal{V}_M^{\\phantom{M}a}\\mathcal{V}_N^{\\phantom{M}a}\\,\n.\n\\end{equation}\n\\indent The bosonic Lagrangian of the $N=4$ gauged supergravity is given by\n\\begin{eqnarray}\ne^{-1}\\mathcal{L}&=&\\frac{1}{2}R+\\frac{1}{16}\\mathcal{D}_\\mu M_{MN}\\mathcal{D}^\\mu\nM^{MN}-\\frac{1}{4(\\textrm{Im}\\tau)^2}\\partial_\\mu \\tau \\partial^\\mu \\tau^*-V\\nonumber \\\\\n& &-\\frac{1}{4}\\textrm{Im}\\,\\tau M_{MN}\\mathcal{H}^{M+}_{\\mu\\nu}\\mathcal{H}^{N+\\mu\\nu}-\\frac{1}{8}\\textrm{Re}\\,\\tau e^{-1}\\epsilon^{\\mu\\nu\\rho\\sigma}\\eta_{MN}\\mathcal{H}^{M+}_{\\mu\\nu}\\mathcal{H}^{N+}_{\\rho\\sigma}\\nonumber \\\\\n& &-\\frac{1}{2}e^{-1}\\epsilon^{\\mu\\nu\\rho\\sigma}\\left[f_{-MNP}A^{M-}_\\mu A^{N+}_\\nu\\partial_\\rho A^{P-}_\\sigma +\\frac{1}{4}f_{\\alpha MNR}f_{\\beta PQS}\\eta^{RS}A^{M\\alpha}_\\mu A^{N+}_\\nu A^{P\\beta}_\\rho A^{Q-}_\\sigma \\right.\\nonumber \\\\\n& &-\\frac{1}{4}f_{-MNP}B^{NP}_{\\mu\\nu}\\left(2\\partial_\\rho A^{M-}_\\sigma -\\frac{1}{2}\\eta^{MS}f_{\\alpha SQR}A^{Q\\alpha}_\\rho A^{R-}_\\sigma\\right)\\nonumber \\\\\n& &\\left.-\\frac{1}{16}f_{+MNR}f_{-PQS}\\eta^{RS}B^{MN}_{\\mu\\nu}B^{PQ}_{\\rho\\sigma}\n\\right]\n\\end{eqnarray}\nwhere $e$ is the vielbein determinant. The scalar potential is given\nby\n\\begin{eqnarray}\nV&=&\\frac{g^2}{16}\\left[f_{\\alpha MNP}f_{\\beta\nQRS}M^{\\alpha\\beta}\\left[\\frac{1}{3}M^{MQ}M^{NR}M^{PS}+\\left(\\frac{2}{3}\\eta^{MQ}\n-M^{MQ}\\right)\\eta^{NR}\\eta^{PS}\\right]\\right.\\nonumber \\\\\n& &\\left.-\\frac{4}{9}f_{\\alpha MNP}f_{\\beta\nQRS}\\epsilon^{\\alpha\\beta}M^{MNPQRS}\\right]\n\\end{eqnarray}\nwhere $M^{MN}$ is the inverse of $M_{MN}$, and $M^{MNPQRS}$ is\ndefined by\n\\begin{equation}\nM_{MNPQRS}=\\epsilon_{mnpqrs}\\mathcal{V}_{M}^{\\phantom{M}m}\\mathcal{V}_{N}^{\\phantom{M}n}\n\\mathcal{V}_{P}^{\\phantom{M}p}\\mathcal{V}_{Q}^{\\phantom{M}q}\\mathcal{V}_{R}^{\\phantom{M}r}\\mathcal{V}_{S}^{\\phantom{M}s}\\label{M_6}\n\\end{equation}\nwith indices raised by $\\eta^{MN}$. The covariant derivative of $M_{MN}$ is defined by\n\\begin{equation}\n\\mathcal{D}M_{MN}=dM_{MN}+2A^{P\\alpha}\\eta^{QR}f_{\\alpha QP(M}M_{N)R}\\, .\n\\end{equation}\n\\indent It should be pointed out here that the magnetic vectors and\nthe two-forms do not have kinetic terms. They are auxiliary fields\nwhose field equations give rise to the duality relation between\ntwo-forms and scalars and the electric-magnetic duality of $A^{M+}$\nand $A^{M-}$, respectively. 
Together with the Yang-Mills equations\nobtained from the variation with respect to $A^{M+}$, these\nequations are given by\n\\begin{eqnarray}\n\\eta_{MN}*\\mathcal{D}\\mathcal{H}^{N-}&=&-\\frac{1}{4}{f_{+MP}}^NM_{NQ}\\mathcal{D}M^{QP},\\label{YM1}\\\\\n\\eta_{MN}*\\mathcal{D}\\mathcal{H}^{N+}&=&\\frac{1}{4}{f_{-MP}}^NM_{NQ}\\mathcal{D}M^{QP},\\label{YM2}\\\\\n\\mathcal{H}^{M-}&=&\\textrm{Im}\\, \\tau M^{MN}\\eta_{NP}*\\mathcal{H}^{P+}-\\textrm{Re}\\, \\tau\\mathcal{H}^{M+}\\label{YM3}\n\\end{eqnarray}\nwhere we have used differential form language for later\ncomputational convenience. By substituting $\\mathcal{H}^{M-}$ from\n\\eqref{YM3} in \\eqref{YM1}, we obtain the usual Yang-Mills equations\nfor $\\mathcal{H}^{M+}$ while equation \\eqref{YM2} simply gives the\nrelation between the Hodge dual of the three-form field strength and\nthe scalars due to the usual Bianchi identity of the gauge field\nstrengths\n\\begin{equation}\n\\mathcal{F}^{M\\pm}=dA^{M\\pm}-\\frac{1}{2}\\eta^{MQ}f_{\\alpha QNP}A^{N\\alpha}\\wedge A^{P\\pm}\n\\end{equation}\n\\indent In this paper, we are interested in $N=4$ gauged\nsupergravity coupled to three vector multiplets. The gauge group of\ninterest here is a non-semisimple group $SO(3)\\ltimes\n(\\mathbf{T}^3,\\hat{\\mathbf{T}}^3)\\subset SO(6,3)$ described by the\nfollowing components of the embedding tensor\n\\begin{eqnarray}\nf_{+IJ,K+6}&=&-f_{+I+3,J+6,K+6}=-2\\sqrt{2}\\epsilon_{IJK},\\qquad I,J,K=1,2,3,\\nonumber \\\\\nf_{+I+6,J+6,K+6}&=&6\\sqrt{2}k\\epsilon_{IJK},\\qquad f_{-I,J+6,K+6}=-4\\epsilon_{IJK}\\, .\\label{embedding_tensor}\n\\end{eqnarray}\nThe constant $k$ is related to the four-form flux along the\nfour-dimensional space-time, see equation \\eqref{4_form_flux} below.\n\\\\\n\\indent We should also remark that we follow the convention of\n\\cite{N010_truncation_Cassani} in all of the computations carried\nout here. In particular, the $SO(6,3)$ tensor $\\eta_{MN}$ is\noff-diagonal\n\\begin{equation}\n\\eta_{MN}=\\left(\n \\begin{array}{ccc}\n -\\mathbf{I}_3 & \\mathbf{0}_3 & \\mathbf{0}_3 \\\\\n \\mathbf{0}_3 & \\mathbf{0}_3 & \\mathbf{I}_3 \\\\\n \\mathbf{0}_3 & \\mathbf{I}_3 & \\mathbf{0}_3 \\\\\n \\end{array}\n \\right)\n\\end{equation}\nwhere $\\mathbf{0}_3$ and $\\mathbf{I}_3$ denote $3\\times 3$ zero and\nidentity matrices, respectively. As a result, the computation of\n$M_{MNPQRS}$ in \\eqref{M_6} and parts of the supersymmetry\ntransformations given below which involve $\\mathcal{V}_M^{\\phantom{M}m}$ and\n$\\mathcal{V}_M^{\\phantom{M}a}$ must be done with the projection to the negative and\npositive eigenvalues of $\\eta_{MN}$, respectively. This can be achieved by using the projection matrix\n\\begin{equation}\nP=\\left(\n \\begin{array}{ccc}\n \\mathbf{0}_3 & \\sqrt{2}\\tilde{P}_3 & \\mathbf{0}_3 \\\\\n -\\tilde{P}_3 & \\mathbf{0}_3 & \\tilde{P}_3 \\\\\n \\tilde{P}_3 & \\mathbf{0}_3 & \\tilde{P}_3 \\\\\n \\end{array}\n \\right)\n\\end{equation}\nwhere the $3\\times 3$ matrix $\\tilde{P}_3$ is given by\n\\begin{equation}\n\\tilde{P}_3=\\frac{1}{\\sqrt{2}}\\left(\n \\begin{array}{ccc}\n 0 & 0 & 1 \\\\\n 0 & 1 & 0 \\\\\n 1 & 0 & 0 \\\\\n \\end{array}\n \\right).\n\\end{equation}\n\\indent We now turn to the supersymmetry transformations of\nfermionic fields. 
These are given by\n\\begin{eqnarray}\n\\delta\\psi^i_\\mu &=&2D_\\mu \\epsilon^i-\\frac{2}{3}gA^{ij}_1\\gamma_\\mu\n\\epsilon_j+\\frac{i}{4}(\\mathcal{V}_\\alpha)^*{\\mathcal{V}_M}^{ij}\\mathcal{H}^{M\\alpha}_{\\nu\\rho}\\gamma^{\\nu\\rho}\\gamma_\\mu\\epsilon_j,\\\\\n\\delta \\chi^i &=&i\\epsilon^{\\alpha\\beta}\\mathcal{V}_\\alpha D_\\mu\n\\mathcal{V}_\\beta\\gamma^\\mu \\epsilon^i-\\frac{4}{3}igA_2^{ij}\\epsilon_j+\\frac{i}{2}\\mathcal{V}_\\alpha{\\mathcal{V}_M}^{ij}\\mathcal{H}^{M\\alpha}_{\\mu\\nu}\\epsilon_j,\\\\\n\\delta \\lambda^i_a&=&2i\\mathcal{V}_a^{\\phantom{a}M}D_\\mu\n\\mathcal{V}_M^{\\phantom{M}ij}\\gamma^\\mu\\epsilon_j+2igA_{2aj}^{\\phantom{2aj}i}\\epsilon^j\n-\\frac{1}{4}\\mathcal{V}_\\alpha\\mathcal{V}_{Ma}\\mathcal{H}^{M\\alpha}_{\\mu\\nu}\\gamma^{\\mu\\nu}\\epsilon^i\\,\n.\n\\end{eqnarray}\nThe fermion shift matrices are defined by\n\\begin{eqnarray}\nA_1^{ij}&=&\\epsilon^{\\alpha\\beta}(\\mathcal{V}_\\alpha)^*\\mathcal{V}_{kl}^{\\phantom{kl}M}\\mathcal{V}_N^{\\phantom{N}ik}\n\\mathcal{V}_P^{\\phantom{P}jl}f_{\\beta M}^{\\phantom{\\beta M}NP},\\nonumber\n\\\\\nA_2^{ij}&=&\\epsilon^{\\alpha\\beta}\\mathcal{V}_\\alpha\\mathcal{V}_{kl}^{\\phantom{kl}M}\\mathcal{V}_N^{\\phantom{N}ik}\n\\mathcal{V}_P^{\\phantom{P}jl}f_{\\beta M}^{\\phantom{\\beta M}NP},\\nonumber\n\\\\\nA_{2ai}^{\\phantom{2ai}j}&=&\\epsilon^{\\alpha\\beta}\\mathcal{V}_\\alpha\n\\mathcal{V}^M_{\\phantom{M}a}\\mathcal{V}^N_{\\phantom{N}ik}\\mathcal{V}_P^{\\phantom{P}jk}f_{\\beta\nMN}^{\\phantom{\\beta MN}P}\n\\end{eqnarray}\nwhere $\\mathcal{V}_M^{\\phantom{M}ij}$ is defined in terms of the `t Hooft\nmatrices $G^{ij}_m$ and $\\mathcal{V}_M^{\\phantom{M}m}$ as\n\\begin{equation}\n\\mathcal{V}_M^{\\phantom{M}ij}=\\frac{1}{2}\\mathcal{V}_M^{\\phantom{M}m}G^{ij}_m\n\\end{equation}\nand similarly for its inverse\n\\begin{equation}\n\\mathcal{V}^M_{\\phantom{M}ij}=-\\frac{1}{2}\\mathcal{V}_M^{\\phantom{M}m}(G^{ij}_m)^*\\,\n.\n\\end{equation}\n$G^{ij}_m$ satisfy the relations\n\\begin{equation}\nG_{mij}=(G^{ij}_m)^*=\\frac{1}{2}\\epsilon_{ijkl}G^{kl}_m\\, .\n\\end{equation}\nThe explicit form of these matrices is given in the appendix. 
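As a simple consistency check of these definitions, note that, since the $SO(6,n)$ coset representative $\\mathcal{V}_M^{\\phantom{M}m}$ is real, the relation $G_{mij}=(G^{ij}_m)^*=\\frac{1}{2}\\epsilon_{ijkl}G^{kl}_m$ immediately implies the pseudo-reality property\n\\begin{equation}\n(\\mathcal{V}_M^{\\phantom{M}ij})^*=\\frac{1}{2}\\mathcal{V}_M^{\\phantom{M}m}G_{mij}=\\frac{1}{2}\\epsilon_{ijkl}\\mathcal{V}_M^{\\phantom{M}kl}\n\\end{equation}\nwhich reflects the usual identification of the vector representation of $SO(6)$ with the anti-symmetric two-index representation of $SU(4)$.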
It should also be noted that the scalar\npotential can be written in terms of $A_1$ and $A_2$ tensors as\n\\begin{equation}\nV=-\\frac{1}{3}A^{ij}_1A_{1ij}+\\frac{1}{9}A^{ij}_2A_{2ij}+\\frac{1}{2}A_{2ai}^{\\phantom{2ai}j}\nA_{2a\\phantom{i}j}^{\\phantom{2a}i}\\, .\n\\end{equation}\n\\indent With the explicit form of $\\mathcal{V}_\\alpha$ given in\n\\eqref{Valpha} and equation \\eqref{YM3}, it is straightforward to\nderive the following identities\n\\begin{eqnarray}\ni\\mathcal{V}_\\alpha{\\mathcal{V}_M}^{ij}\\mathcal{H}^{M\\alpha}_{\\mu\\nu}\\gamma^{\\mu\\nu}&=&-(\\mathcal{V}_-)^{-1}{\\mathcal{V}_M}^{ij}\\mathcal{H}^{M+}_{\\mu\\nu}\\gamma^{\\mu\\nu}(1-\\gamma_5),\\\\\ni\\mathcal{V}_\\alpha{\\mathcal{V}_M}^a\\mathcal{H}^{M\\alpha}_{\\mu\\nu}\\gamma^{\\mu\\nu}&=&-(\\mathcal{V}_-)^{-1}{\\mathcal{V}_M}^a\\mathcal{H}^{M+}_{\\mu\\nu}\\gamma^{\\mu\\nu}(1+\\gamma_5),\\\\\ni(\\mathcal{V}_\\alpha)^*{\\mathcal{V}_M}^{ij}\\mathcal{H}^{M\\alpha}_{\\mu\\nu}\\gamma^{\\mu\\nu}\\gamma_\\rho\n&=&(\\mathcal{V}_-)^{-1}{\\mathcal{V}_M}^{ij}\\mathcal{H}^{M+}_{\\mu\\nu}\\gamma^{\\mu\\nu}\\gamma_\\rho(1-\\gamma_5).\n\\end{eqnarray}\nIn obtaining these results, we have used the following relations for\nthe $SO(6,n)$ coset representative \\cite{Eric_N4_4D}\n\\begin{eqnarray}\n\\eta_{MN}&=&-\\frac{1}{2}\\epsilon_{ijkl}{\\mathcal{V}_M}^{ij}{\\mathcal{V}_N}^{kl}+{\\mathcal{V}_M}^{a}{\\mathcal{V}_N}^{a},\\qquad {\\mathcal{V}_M}^a{\\mathcal{V}^M}_{ij}=0,\\nonumber \\\\\n{\\mathcal{V}_M}^{ij}{\\mathcal{V}^{M}}_{kl}&=&-\\frac{1}{2}(\\delta^i_k\\delta^j_l-\\delta^i_l\\delta^j_k),\\qquad {\\mathcal{V}_M}^a{\\mathcal{V}^M}_b=\\delta^a_b\\, .\n\\end{eqnarray}\nThese relations are useful in simplifying the BPS equations\nresulting from the supersymmetry transformations. Note also that\nthese relations are slightly different from those given in\n\\cite{N4_gauged_SUGRA} due to a different convention on $\\mathcal{V}_\\alpha$ in term of the scalar\n$\\tau$. In more detail, $\\mathcal{V}_\\alpha$ used in this paper and in\n\\cite{N010_truncation_Cassani} satisfies $\\mathcal{V}_+\/\\mathcal{V}_-=\\tau$ while $\\mathcal{V}_\\alpha$ used in\n\\cite{N4_gauged_SUGRA} gives $\\mathcal{V}_+\/\\mathcal{V}_-=\\tau^*$. This results in some sign changes in the\nabove equations compared to those of \\cite{N4_gauged_SUGRA}.\n\n\\subsection{Uplift formulae to eleven dimensions}\nAs mentioned above, four-dimensional $N=4$ gauged supergravity\ncoupled to three vector multiplets with $SO(3)\\ltimes\n(\\mathbf{T}^3,\\hat{\\mathbf{T}}^3)$ gauge group has been obtained\nfrom a truncation of eleven-dimensional supergravity on a\ntri-sasakian manifold in \\cite{N010_truncation_Cassani}. We will\nbriefly review the structure and relevant formulae focusing on the\nreduction ansatz which will be useful for embedding four-dimensional\nsolutions. 
Essentially, we simply collect some formulae without\ngiving detailed explanations for which we refer the interested readers\nto \\cite{N010_truncation_Cassani}.\n\\\\\n\\indent The eleven-dimensional metric can be written as\n\\begin{equation}\nds^2_{11}=e^{2\\varphi}ds^2_4+e^{2U}ds^2(B_\\textrm{QK})+g_{IJ}(\\eta^I+A_1^I)(\\eta^J+A_1^J)\\,\n.\n\\end{equation}\nThe three-dimensional internal metric $g_{IJ}$ can be written in\nterms of the vielbein as\n\\begin{equation}\ng=Q^TQ\\, .\n\\end{equation}\nFollowing \\cite{N010_truncation_Cassani}, we will parametrize the\nmatrix $Q$ in term of a product of a diagonal matrix $V$ and an\n$SO(3)$ matrix $O$\n\\begin{equation}\nQ=VO,\\qquad V=\\textrm{diag}(e^{V_1},e^{V_2},e^{V_3})\\, .\\label{Q_def}\n\\end{equation}\nThe scalar $\\varphi$ is chosen to be\n\\begin{equation}\n\\varphi=-\\frac{1}{2}(4U+V_1+V_2+V_3)\n\\end{equation}\nin order to obtain the Einstein frame action in four dimensions.\n$B_{\\textrm{QK}}$ denotes a four-dimensional quaternionic Kahler\nmanifold whose explicit metric is not needed in the following\ndiscussions.\n\\\\\n\\indent The ansatz for the four-form field is given by\n\\begin{eqnarray}\nG_4&=&H_4+H_{3I}\\wedge (\\eta+A_1)^I+\\frac{1}{2}\\epsilon_{IJK}\\tilde{H}_2^I\\wedge(\\eta+A_1)^J\n\\wedge (\\eta+A_1)^K+4\\textrm{Tr}c\\,\\textrm{vol}(\\textrm{QK})\\nonumber \\\\\n& &H_{1IJ}\\wedge (\\eta+A_1)^I\\wedge J^I+\\frac{1}{6}\\epsilon_{IJK}d\\chi\\wedge(\\eta+A_1)^I\\wedge (\\eta+A_1)^J\\wedge (\\eta+A_1)^K\\nonumber \\\\\n& &+H_{2I}\\wedge\nJ^I+\\epsilon_{IJL}\\left[(\\chi+\\textrm{Tr}c)\\delta_{LK}-2c_{(LK)}\\right](\\eta+A_1)^I\\wedge(\\eta+A_1)^J\\wedge\nJ^K\\, .\\nonumber \\\\\n& &\n\\end{eqnarray}\n$c_{IJ}$ is a $3\\times 3$ matrix and $\\textrm{Tr}\nc=\\delta^{IJ}c_{IJ}$. The volume form of $B_{\\textrm{QK}}$,\n$\\textrm{vol}(\\textrm{QK})$, can be written in terms of the two-forms $J^I$\nas\n\\begin{equation}\n\\textrm{vol}(\\textrm{QK})=\\frac{1}{6}J^I\\wedge J^I\\, .\n\\end{equation}\nVarious forms in the above equation are defined by\n\\begin{eqnarray}\nH_4&=&dc_3+c_{2I}\\wedge F^I_2,\\qquad H_{3I}=Dc_{2I}+\\epsilon_{IJK}F^J_2\\wedge \\tilde{c}_{1K},\\nonumber \\\\\n\\tilde{H}_{2I}&=&D\\tilde{c}_{1I}-2c_{2I}+\\chi F_{2I},\\qquad H_{2I}=Dc_{1I}+2c_{2I}+c_{JI}F^J_2,\\nonumber \\\\\nH_{1IJ}&=&Dc_{IJ}+2\\epsilon_{IJK}(c_{1K}+\\tilde{c}_{1K})\n\\end{eqnarray}\nwith the $SO(3)$ covariant derivative\n\\begin{equation}\nDc_{I_1\\ldots I_n}=dc_{I_1\\ldots I_n}+2\\sum_{l=1}^n\\epsilon_{JI_lK}A^J_1\\wedge c_{I_1\\ldots K\\ldots I_n}\\, .\n\\end{equation}\nThe $SO(3)_R$ field strengths are defined by\n\\begin{equation}\nF^I_2=dA^I_1-\\epsilon_{IJK}A^J_1\\wedge A^K_1\\, .\n\\end{equation}\nIt is useful to note here that the $SL(2,\\mathbb{R})\/SO(2)$ scalars\nare given by\n\\begin{equation}\n\\tau=\\chi+ie^{V_1+V_2+V_3}\\, .\n\\end{equation}\n\\indent Although we will not directly need the explicit form of\n$ds^2(B_{\\textrm{QK}})$ and $\\eta^I$'s in the remaining parts of\nthis paper, it is useful to give some information on the $N^{010}$\ntri-sasakian manifold. $N^{010}$ is a 7-manifold with $SU(2)\\times\nSU(3)$ isometry. The $SU(2)$ is identified with the R-symmetry of\nthe dual $N=3$ SCFT while $SU(3)$ is the flavor symmetry. A simple\ndescription of $N^{010}$ can be obtained in term of a coset manifold\n$SU(3)\/U(1)$. With the standard Gell-Mann matrices, the $SU(3)$\ngenerators can be chosen to be $-\\frac{i}{2}\\lambda_\\alpha$,\n$\\alpha=1,\\ldots, 8$. 
The coset and $U(1)$ generators are\naccordingly identified as\n\\begin{equation}\nK_i=-\\frac{i}{2}(\\lambda_1,\\lambda_2,\\lambda_3,\\lambda_4,\\lambda_5,\\lambda_6,\\lambda_7),\\qquad\nH=-\\frac{i\\sqrt{3}}{2}\\lambda_8\\, .\n\\end{equation}\nThe vielbein on $N^{010}$ can eventually be obtained from the decomposition of the Maurer-Cartan one-form\n\\begin{equation}\nL^{-1}dL=e^iK_i+\\omega H\n\\end{equation}\nwhere $L$ is the coset representative for $SU(3)\/U(1)$, and $\\omega$ is\nthe corresponding $U(1)$ connection.\n\\\\\n\\indent Following \\cite{N010_truncation_Cassani}, we can use the tri-sasakian structures of the form\n\\begin{eqnarray}\n\\eta^I&=&\\frac{1}{2}(e^1,e^2,e^7),\\nonumber\\\\\nJ^I&=&\\frac{1}{8}(e^4\\wedge e^5-e^3\\wedge e^6,-e^3\\wedge e^5-e^4\\wedge e^6,e^5\\wedge e^6-e^3\\wedge e^4).\n\\end{eqnarray}\nFrom these, we find the metric on the Quaternionic-Kahler base $B_{\\textrm{QK}}$ to be\n\\begin{equation}\nds^2(B_{\\textrm{QK}})=\\frac{1}{256}\\left[(e^3)^2+(e^4)^2+(e^5)^2+(e^6)^2\\right]\n\\end{equation}\nwith the volume form given by\n\\begin{equation}\n\\textrm{vol}(\\textrm{QK})=\\frac{1}{6}J^I\\wedge J^I=-\\frac{1}{64}e^3\\wedge e^4\\wedge e^5\\wedge e^6\\, .\n\\end{equation}\nAs mentioned before, all of the fields appearing in the reduction of \\cite{N010_truncation_Cassani} are $SU(3)$ singlets.\n\n\\section{BPS flow equations}\\label{flow_eq}\nIn this section, we perform the analysis of Yang-Mills equations\nand supersymmetry transformations in order to obtain BPS equations\nfor the flows between $AdS_4$ vacua and possible $AdS_2\\times\n\\Sigma_2$ geometries. We set all fermions to zero and truncate the\nbosonic fields to $SO(2)\\subset SO(3)_R$ singlets. This $SO(2)$ is\ngenerated by\n\\begin{equation}\n\\hat{X}=X_{9+}+X_{6+}+X_{3-}\n\\end{equation}\nwhere the gauge generators are defined by\n\\begin{equation}\nX_{M\\alpha}=-f_{\\alpha MNP}t^{NP}\\, .\n\\end{equation}\nWe see that a combination of the electric vectors $A^{9+}$, $A^{6+}$\nand the magnetic vector $A^{3-}$ becomes the corresponding $SO(2)$\ngauge field.\n\\\\\n\\indent We are interested in supersymmetric solutions of the form\n$AdS_2\\times \\Sigma_2$ with $\\Sigma_2=S^2,H^2$. We will then take\nthe ansatz for the four-dimensional metric to be\n\\begin{equation}\nds^2_4=-e^{2f(r)}dt^2+dr^2+e^{2g(r)}(d\\theta^2+F(\\theta)^2d\\phi^2)\n\\end{equation}\nwith\n\\begin{equation}\nF(\\theta)=\\sin\\theta\\qquad \\textrm{and}\\qquad F(\\theta)=\\sinh\\theta\n\\end{equation}\nfor the $S^2$ and $H^2$, respectively. We will also use the\nparameter $\\kappa=\\pm 1$ to denote the $S^2$ and $H^2$ cases. The\nfunctions $f(r)$, $g(r)$ and all other fields only depend on the\nradial coordinate $r$ for static solutions. With the obvious\nvielbein\n\\begin{equation}\ne^{\\hat{t}}=e^{f}dt,\\qquad e^{\\hat{r}}=dr,\\qquad e^{\\hat{\\theta}}=e^{g}d\\theta,\\qquad e^{\\hat{\\phi}}=e^gFd\\phi,\n\\end{equation}\nit is now straightforward to compute the spin connections of the above metric\n\\begin{eqnarray}\n\\omega^{\\hat{t}\\hat{r}}&=&f'e^{\\hat{t}},\\qquad \\omega^{\\hat{\\theta}\\hat{r}}=g'e^{\\hat{\\theta}},\\nonumber \\\\\n\\omega^{\\hat{\\phi}\\hat{r}}&=&g'e^{\\hat{\\phi}},\\qquad \\omega^{\\hat{\\theta}\\hat{\\phi}}=\\frac{F'(\\theta)}{F(\\theta)}e^{-g}e^{\\hat{\\phi}}\\, .\n\\end{eqnarray}\nIn the above expressions, we have used the hat to denote ``flat''\nindices while $'$ stands for the $r$-derivative with the only\nexception that $F'(\\theta)=\\frac{dF(\\theta)}{d\\theta}$. 
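It is also useful to note that in both cases the function $F(\\theta)$ satisfies\n\\begin{equation}\nF''(\\theta)=-\\kappa F(\\theta),\n\\end{equation}\nso that the Gaussian curvature of $\\Sigma_2$ computed from the metric $d\\theta^2+F(\\theta)^2d\\phi^2$ is $-F''(\\theta)\/F(\\theta)=\\kappa$, namely $+1$ for $S^2$ and $-1$ for $H^2$.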
The ansatz for electric and magnetic vector fields are given by\n\\begin{eqnarray}\nA^{M+}&=&\\mathcal{A}^M_tdt- p^MF'(\\theta)d\\phi, \\\\\nA^{M-}&=&\\tilde{\\mathcal{A}}^M_tdt- e_MF'(\\theta)d\\phi\\label{AM_minus}\n\\end{eqnarray}\nwhere we have chosen the gauge such that $A^{M\\alpha}_r=0$. $p^M$\nand $e_M$ correspond to magnetic and electric charges, respectively.\nIn the present case, only $A^{M\\alpha}$ with $M=3,6,9$ are relevant.\n\\\\\n\\indent We finally give the explicit form of the scalar coset representative for $SO(6,3)\/SO(6)\\times SO(3)$. The parametrization of \\cite{N010_truncation_Cassani} which is directly related to the higher dimensional origin is given by\n\\begin{equation}\n\\mathcal{V}=\\mathcal{C}\\mathcal{Q}\n\\end{equation}\nwhere the matrices $\\mathcal{Q}$ and $\\mathcal{C}$ are defined by\n\\begin{equation}\n\\mathcal{Q}=\\left(\n \\begin{array}{ccc}\n \\mathbf{I}_3 & \\mathbf{0}_3 & \\mathbf{0}_3 \\\\\n \\mathbf{0}_3 & e^{-2U}Q^{-1} & \\mathbf{I}_3 \\\\\n \\mathbf{0}_3 & \\mathbf{0}_3 & e^{2U}Q^T \\\\\n \\end{array}\n \\right),\n \\qquad\n\\mathcal{C}=\\text{exp}\\,\\left(\n \\begin{array}{ccc}\n \\mathbf{0}_3 & \\sqrt{2}c^T & \\mathbf{0}_3 \\\\\n \\mathbf{0}_3 & \\mathbf{0}_3 & \\mathbf{0}_3 \\\\\n \\sqrt{2}c & a & \\mathbf{0}_3 \\\\\n \\end{array}\n \\right).\n\\end{equation}\nFor $SO(2)$ invariant scalars, the $3\\times 3$ matrices $c$ and $a$ are given by\n\\begin{equation}\nc=\\left(\n \\begin{array}{ccc}\n Z_1 & Z_3 & 0 \\\\\n -Z_3 & Z_1 & 0 \\\\\n 0 & 0 & Z_2 \\\\\n \\end{array}\n \\right),\n \\qquad\na=\\left(\n \\begin{array}{ccc}\n 0 & \\Phi & 0 \\\\\n -\\Phi & 0 & 0 \\\\\n 0 & 0 & 0 \\\\\n \\end{array}\n \\right)\n\\end{equation}\nwhile $Q$ can be obtained from \\eqref{Q_def} with $V_2=V_1$ and $O$ being\n\\begin{equation}\nO=\\textrm{exp}\\left(\n \\begin{array}{ccc}\n 0 & \\beta & 0 \\\\\n -\\beta & 0 & 0 \\\\\n 0 & 0 & 0 \\\\\n \\end{array}\n \\right).\n\\end{equation}\nThis is a generalization of the coset representative of the\n$SO(3)_R$ singlet scalars used in \\cite{trisasakian_flow} in which\n$\\Phi=\\beta=Z_3=0$, $Z_1=Z_2$ and $V_1=V_2=V_3$. In the following,\nwe will rename the scalars $V_3\\rightarrow V_2$ such that the\ncomplex scalar $\\tau$ becomes\n\\begin{equation}\n\\tau=\\chi+ie^{2V_1+V_2}\\, .\n\\end{equation}\n\\indent We now give the scalar potential for $SO(2)$ singlet scalars\n\\begin{eqnarray}\nV&=&e^{-3(4U+2V_1+V_2)}\\left[e^{4(U+V_2)}(e^{4U}+2e^{4V_1})+9k^2+4\\chi^2e^{4U+2V_1} \\right.\\nonumber\\\\\n& &-4e^{6U+4V_1+2V_2}(6+e^{2(U-V_1)}-e^{-2(U-V_1)})+24k\\chi Z_1+16\\chi^2Z_1^2\\nonumber \\\\\n& &+8\\chi Z_2e^{4U+2V_1}-12k\\chi Z_2+(16\\chi^2-24k)Z_1Z_2+32\\chi Z_1^2Z_2\\nonumber \\\\\n& &+4Z_2^2e^{4U+2V_1}+4\\chi^2Z_2^2+8\\chi Z_1Z_2^2+16Z_1^2Z_2^2-4\\chi Z_2^3 -8Z_1Z_2^3\\nonumber \\\\\n& &\\left.+6kZ_2^2+Z_2^4+2e^{2V_2}\\left[e^{4U}(\\chi+2Z_1-Z_2)^2+2e^{4V_1}(2Z_1+Z_2)^2\\right]\\right].\n\\end{eqnarray}\nThe scalars $\\beta$, $\\Phi$ and $Z_3$ do not appear in the\npotential. It can also be checked that setting $\\beta=\\Phi=Z_3=0$ is\na consistent truncation. In fact, $\\beta$ never appears in any\nequations, so we can set it to zero. On the other hand, the\nYang-Mills equations, to be given later, demand that $\\Phi$ and\n$Z_3$ must be constant. Since we are interested in the flow\nsolutions interpolating between $AdS_2\\times \\Sigma_2$ and $AdS_4$\nvacua, and at supersymmetric $AdS_4$ critical points, both $\\Phi$\nand $Z_3$ vanish. We then choose $Z_3=\\Phi=0$. 
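As an illustration of the structure of the scalar potential given above, we may further restrict to vanishing pseudoscalars, $\\chi=Z_1=Z_2=0$, for which it collapses to\n\\begin{equation}\nV\\big|_{\\chi=Z_1=Z_2=0}=e^{-3(4U+2V_1+V_2)}\\left[e^{4(U+V_2)}(e^{4U}+2e^{4V_1})+9k^2-4e^{6U+4V_1+2V_2}\\left(6+e^{2(U-V_1)}-e^{-2(U-V_1)}\\right)\\right].\n\\end{equation}\nOne can check, for example, that evaluating this expression at $U=V_1=V_2=\\frac{1}{6}\\ln k$ with $k>0$ gives $V=-12k^{-\\frac{3}{2}}$, in agreement with the cosmological constant of the $N=3$ $AdS_4$ vacuum quoted below.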
\n\\\\\n\\indent The kinetic terms for\nthe remaining scalars read\n\\begin{eqnarray}\n\\mathcal{L}_{\\textrm{kin}}&=&-6{U'}^2-2U'(2V_1'+V_2')-2V_1^{\\prime 2}-V'_1V'_2\\nonumber\\\\\n& &-\\frac{1}{4}\\left[3V_2^{\\prime 2}+e^{-2(2V_1+V_2)}{\\chi'}^2+4e^{-2(2U+V_1)}Z_1^{\\prime 2}+2e^{-2(2U+V_2)}Z_2^{\\prime 2}\\right].\n\\end{eqnarray}\nWe now redefine the scalars such that the kinetic terms are diagonal\n\\begin{equation}\n\\tilde{V}=2V_1+V_2,\\qquad \\tilde{U}_1=2U+V_1,\\qquad \\tilde{U}_2=2U+V_2\n\\end{equation}\nin terms of which we find\n\\begin{equation}\n\\mathcal{L}_{\\textrm{kin}}=-\\frac{1}{4}\\left(4\\tilde{U}_1^{\\prime 2}+2\\tilde{U}_2^{\\prime 2}+\\tilde{V}^{\\prime 2}+e^{-2\\tilde{V}}{\\chi'}^2+4e^{-2\\tilde{U}_1}Z_1^{\\prime 2}\n+2e^{-2\\tilde{U}_2}Z_2^{\\prime 2} \\right).\n\\end{equation}\nThese new scalars will also be useful in the analysis of the BPS equations below.\n\\\\\n\\indent The above scalar potential admits two supersymmetric $AdS_4$\nvacua with $N=1$ and $N=3$ supersymmetries\n\\cite{N010_truncation_Cassani}. At these vacua the symmetry is\nenhanced from $SO(2)$ to $SO(3)$. For convenience, before carry out\nthe analysis of the Yang-Mills and BPS equations, we review the\n$N=3$ and $N=1$ $AdS_4$ critical points in terms of the new scalars\ndefined above:\n\\begin{eqnarray}\nN=3&:&\\qquad \\tilde{V}=\\tilde{U}_1=\\tilde{U}_2=\\frac{1}{2}\\ln k,\\qquad V_0=-12|k|^{-\\frac{3}{2}},\\qquad k>0, \\\\\nN=1&:&\\qquad \\tilde{U}_1=\\tilde{U}_2=\\ln 5+\\frac{1}{2}\\ln \\left[-\\frac{k}{15}\\right],\\qquad \\tilde{V}=\\frac{1}{2}\\ln \\left[-\\frac{k}{15}\\right],\\nonumber \\\\\n\\qquad & &\\qquad V_0=-12|k|^{-\\frac{3}{2}}\\sqrt{\\frac{3^7}{5^5}},\\qquad k<0\\, .\n\\end{eqnarray}\n$V_0$ is the cosmological constant related to the $AdS_4$ radius by\n\\begin{equation}\nL^2=-\\frac{3}{V_0}\\, .\n\\end{equation}\n\n\\subsection{The analysis of Yang-Mills equations}\nWe now solve the equations of motion for the gauge fields given in\n\\eqref{YM1}, \\eqref{YM2} and \\eqref{YM3}. We should emphasize that, in the reduction of\n\\cite{N010_truncation_Cassani}, the magnetic vectors $A^{M-}$ with\n$M=4,5,6$ do not appear in the reduction ansatz. These might arise\nfrom the reduction of the dual internal seven-dimensional metric.\nFurthermore, in this reduction, the two-form fields corresponding to\nthese magnetic vectors do not appear. \n\\\\\n\\indent Although the present analysis involves $A^{6+}$, we will truncate out the $A^{6-}$ in order to use the reduction ansatz of \\cite{N010_truncation_Cassani} to uplift the resulting solutions to eleven dimensions. This amounts to setting $e_6$ and $\\tilde{\\mathcal{A}}_t^6$ in \\eqref{AM_minus} to zero. It turns out that this truncation is consistent provided that the two-form fields are properly truncated. Therefore, we will set $e_6=\\tilde{\\mathcal{A}}_t^6=0$ in the following analysis. Note also that the vanishing of $A^{6-}$ does not mean\nthe covariant field strength $\\mathcal{H}^{6-}$ vanishes although the\nusual gauge field strength $\\mathcal{F}^{6-}$ vanishes. This is due to\nthe fact that $\\mathcal{H}^{6-}$ gets a contribution from the two-form\nfields. 
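To see this explicitly, note that the only non-vanishing entry of $\\eta^{6Q}$ is $\\eta^{69}=1$, so that the two-form contribution to this field strength reads\n\\begin{equation}\n\\mathcal{H}^{6-}\\supset -\\frac{1}{2}\\eta^{6Q}f_{+QNP}B^{NP}=-\\frac{1}{2}f_{+9NP}B^{NP}\\, .\n\\end{equation}\nFor the two-forms retained in the truncation described below, only the component $f_{+978}=6\\sqrt{2}k$ of the embedding tensor \\eqref{embedding_tensor} contributes, giving $\\mathcal{H}^{6-}\\supset-6\\sqrt{2}kB^{78}$, in agreement with the explicit field strengths given below.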
\n\\\\\n\\indent In order to consistently remove\n$A^{6-}$, we truncate the two-form fields to only $B^{18}$ and $B^{78}$.\nWith the symmetry of $AdS_2\\times \\Sigma_2$ background and a\nparticular choice of tensor gauge transformations\n\\begin{equation}\nB^{MN}\\rightarrow B^{MN}+d\\Xi^{MN},\n\\end{equation}\nwe will take the ansatz for the two-forms to be\n\\begin{equation}\nB^{78}=B(r)F(\\theta)d\\theta \\wedge d\\phi,\\qquad B^{18}=\\tilde{B}(r)F(\\theta)d\\theta \\wedge d\\phi\\, .\n\\end{equation}\n\\indent With the explicit form of the embedding tensor, we can compute the\ncovariant field strengths\n\\begin{eqnarray}\n\\mathcal{H}^{3+}&=&{\\mathcal{A}_t^3}'dr\\wedge dt+(p^3+4B)F(\\theta)d\\theta\\wedge d\\phi,\\nonumber\\\\\n\\mathcal{H}^{6+}&=&{\\mathcal{A}_t^6}'dr\\wedge dt+(p^6-4\\tilde{B})F(\\theta)d\\theta\\wedge d\\phi,\\nonumber\\\\\n\\mathcal{H}^{9+}&=&{\\mathcal{A}_t^9}'dr\\wedge dt+p^9F(\\theta)d\\theta\\wedge d\\phi,\\nonumber\\\\\n\\mathcal{H}^{3-}&=&\\tilde{\\mathcal{A}}^{3\\prime }_tdr\\wedge dt+(e_3-2\\sqrt{2}\\tilde{B})F(\\theta)d\\theta\\wedge d\\phi,\\nonumber\\\\\n\\mathcal{H}^{6-}&=&-6\\sqrt{2}kBF(\\theta)d\\theta\\wedge d\\phi,\\nonumber\\\\\n\\mathcal{H}^{9-}&=&\\tilde{\\mathcal{A}}^{9\\prime}_tdr\\wedge dt+(e_9-2\\sqrt{2}B)F(\\theta)d\\theta\\wedge d\\phi\\, .\n\\end{eqnarray}\nNote the non-vanishing covariant field strength $\\mathcal{H}^{6-}$, as mentioned above, due to the contribution from the two-form fields despite $A^{6-}=0$.\n\\\\\n\\indent Equations arising from \\eqref{YM1} and \\eqref{YM2} are explicitly\ngiven in the appendix. They can be solved by imposing the following\nconditions\n\\begin{eqnarray}\nZ_3'&=&0,\\qquad \\Phi'=2Z_1Z_3'-2Z_3Z_1',\\nonumber \\\\\nB'F(\\theta)dr\\wedge d\\theta \\wedge d\\phi&=&\\sqrt{2}e^{-4(2U+V_1)}(3k*A^{9+}+*A^{6+}-\\sqrt{2}*A^{3-}),\\nonumber \\\\\n\\tilde{B}'F(\\theta)dr\\wedge d\\theta \\wedge d\\phi&=&4Z_1e^{-4(2U+V_1)}(3k*A^{9+}+*A^{6+}-\\sqrt{2}*A^{3-}).\\label{Con_YM1}\n\\end{eqnarray}\nThe first condition implies that $Z_3$ is constant. As mentioned\nabove, this allows to set $Z_3=0$. The second condition then\nrequires that $\\Phi$ is constant. We can also set $\\Phi=0$. Together\nwith $\\beta=0$, we are left with only six scalars $(U,V_1,V_2,\\chi,Z_1,Z_2)$ or equivalently\n$(\\tilde{U}_1,\\tilde{U}_2,\\tilde{V},\\chi,Z_1,Z_2)$.\n\\\\\n\\indent We move to the last two conditions in \\eqref{Con_YM1}. First\nof all, the $dt\\wedge dr\\wedge d\\theta$ component gives\n\\begin{equation}\n3kp^9+p^6-\\sqrt{2}e_3=0 \\label{twist1}\n\\end{equation}\nwhile the $dr\\wedge d\\theta\\wedge d\\phi$ component leads to\nfirst-order differential equations for $B$ and $\\tilde{B}$\n\\begin{eqnarray}\nB'&=&\\sqrt{2}e^{-4(2U+V_1)+2g-f}(3k\\mathcal{A}^9_t+\\mathcal{A}^6_t-\\sqrt{2}\\tilde{\\mathcal{A}}^3_t),\\label{2-form-eq1}\\\\\n\\tilde{B}'&=&-4Z_1e^{-4(2U+V_1)+2g-f}(3k\\mathcal{A}^9_t+\\mathcal{A}^6_t-\\sqrt{2}\\tilde{\\mathcal{A}}^3_t).\\label{2-form-eq2}\n\\end{eqnarray}\n\\indent After solving all of the Yang-Mills equations and Bianchi\nidentities, we now consider the duality equation for electric and\nmagnetic vector fields. These equations whose explicit form is given\nin the appendix lead to the relations between\n$(\\mathcal{A}^{M\\prime}_t,\\tilde{\\mathcal{A}}^{M\\prime}_t)$ and scalars. We\ncan accordingly express the former in terms of the latter. 
These\nrelations are given by\n\\begin{eqnarray}\n\\mathcal{A}^{3\\prime}_t&=&e^{f-2g-2(U+V_1)-3V_2}\\left[e^{4U+2V_2}\\left[e_3+\\sqrt{2}e_9Z_2-4BZ_2+\\chi(p^3+4B+\\sqrt{2}Z_2)\\right] \\right.\\nonumber \\\\& &+Z_2^2[2(e_3+p^3\\chi)+\\sqrt{2}Z_2(e_9+p^9\\chi)]-4Z_2B(3k-2\\chi Z_2+Z_2^2)\\nonumber \\\\\n& &\\left.-2\\sqrt{2}\\tilde{B}(e^{4U+2V_2}+2Z_2\\chi+2Z_2^2)+\\sqrt{2}p^6Z_2\\chi\\right],\\label{Ap1}\\\\\n{\\mathcal{A}_t^6}'&=&e^{f-2g-2(2U+V_1)-3V_2}\\left[(2\\sqrt{2}B-e_9-p^9\\chi)e^{8U+4V_2}-p^6Z_2^2\\chi \\right.\\nonumber \\\\\n& &-e^{4U+2V_2}Z_2[\\sqrt{2}e_3-4\\tilde{B}+2e_9Z_2+\\chi(\\sqrt{2}p^3+2p^9Z_2)]\\nonumber\\\\\n& &+4\\tilde{B}Z_2^2(\\chi+Z_2)-Z_2^3[\\sqrt{2}(e_3+p^3\\chi)+Z_2(e_9+p^9\\chi)]\\nonumber \\\\\n& &\\left.+4\\sqrt{2}BZ_2e^{4U+2V_2}(Z_2-\\chi)+2\\sqrt{2}BZ_2^2(3k-2\\chi Z_2+Z_2^2)\\right],\\\\\n{\\mathcal{A}_t^9}'&=&-e^{f-2g-2(2U+V_1)-3V_2}\\left[Z_2(\\sqrt{2}e_3-4\\tilde{B}+e_9Z_2)-2\\sqrt{2}B(3k-2\\chi Z_2 +Z_2^2) \\right.\\nonumber \\\\\n& &\\left. +\\chi(p^6-4\\tilde{B}+\\sqrt{2}Z_2+p^9Z_2^2)\\right],\\\\\n\\tilde{\\mathcal{A}}^{3\\prime}_t&=&\\frac{e^{f-2g-2V_1-V_2}}{Z_2}\\left[-e^{4V_1+2V_2}[\\sqrt{2}e^{4U+2V_2}p^9+Z_2(p^3+4B+\\sqrt{2}p^9Z_2)] \\right.\\nonumber \\\\\n& &+\\chi Z_2[e_3-2\\sqrt{2}\\tilde{B}+\\sqrt{2}e_9Z_2-4BZ_2+\\chi(p^3+4B+\\sqrt{2}p^9Z_2)]\\nonumber \\\\\n& &\\left.+\\chi e^{4U+2V_2}\\left[\\sqrt{2}(e_9+p^9\\chi)-4B\\right]\\right],\\\\\n\\tilde{\\mathcal{A}}^{9\\prime}_t&=&\\frac{e^{f-2g-2V_1-V_2}}{Z_2^2} \\left[e^{4(U+V_1+V_2)}p^9-e^{4U+2V_1}\\chi(e_9-2\\sqrt{2}B+p^9\\chi) \\right.\\nonumber \\\\\n& &-\\chi Z_2[\\sqrt{2}e_3-4\\tilde{B}+4\\sqrt{2}B(\\chi-Z_2)+2e_9Z_2+\\chi(\\sqrt{2}p^3+2p^9Z_2)]\\nonumber \\\\\n& &\\left.\n+e^{4V_1+2V_2}Z_2(\\sqrt{2}p^3+4\\sqrt{2}B+2p^9Z_2)\\right].\\label{Ap6}\n\\end{eqnarray}\nIt turns out that only $\\mathcal{A}_t^{9}$, $\\mathcal{A}_t^{6}$ and\n$\\tilde{\\mathcal{A}}_t^{3}$ appear in other equations while the remaining\nones only appear through their derivatives. Therefore, these fields\ncan be integrated out.\n\n\\subsection{BPS equations for $SO(2)$ invariant scalars}\nWe now use the ansatz for all the fields given in the previous\nsection to set up the BPS equations for finding supersymmetric\nsolutions. We will use Majorana representation for the gamma\nmatrices in which all $\\gamma_\\mu$ are real, and\n\\begin{equation}\n\\gamma_5=i\\gamma_{\\hat{0}}\\gamma_{\\hat{r}}\\gamma_{\\hat{\\theta}}\\gamma_{\\hat{\\phi}}\n\\end{equation}\nis purely imaginary. We then have, for example,\n\\begin{equation}\n\\epsilon^i=\\frac{1}{2}(1+\\gamma_5)\\epsilon^i_M,\\qquad \\epsilon_i=\\frac{1}{2}(1-\\gamma_5)\\epsilon^i_M\n\\end{equation}\nwith $\\epsilon^i_M$ being four-component Majorana spinors. It follows that $\\epsilon_i=(\\epsilon^i)^*$.\n\\\\\n\\indent We first consider the gravitino transformations. As in other\nholographic solutions involving twisted compactifications of the dual\nSCFTs, the strategy is to use the gauge connection to cancel the\nspin connection on $\\Sigma_2$. Equations from\n$\\delta\\psi^i_{\\hat{\\theta}}=0$ and $\\delta\\psi^i_{\\hat{\\phi}}=0$\nthen reduce to the same equation. The gauge connection enters the\ncovariant derivative of $\\epsilon^i$ through the composite\nconnection ${Q_j}^i$. 
With the $SO(2)$ singlet scalars, we find that\n${Q_j}^i$ takes the form of\n\\begin{equation}\n{Q_j}^i=\\frac{1}{2}\\hat{A}\\left(\n \\begin{array}{cccc}\n 0 & 1 & 0 & 0 \\\\\n -1 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0\\\\\n 0 & 0 & 0 & 0\\\\\n \\end{array}\n \\right)\n\\end{equation}\nwhere $\\hat{A}$ is given by\n\\begin{equation}\n\\hat{A}=\\sqrt{2}e^{-2(2U+V_1)}(3kA^{9+}+A^{6+}-\\sqrt{2}A^{3-}-4e^{4U+2V_1}A^{9+}).\n\\end{equation}\nFrom the form of ${Q_i}^j$, we can see that supersymmetry\ncorresponding to $\\epsilon^{3,4}$ is broken for spherical and\nhyperbolic $\\Sigma_2$ since we cannot cancel the spin connections\nalong $\\epsilon^{3,4}$. The $N=4$ supersymmetry is then broken to\n$N=2$.\n\\\\\n\\indent After using the condition \\eqref{twist1} in the\n${Q_{\\hat{\\phi}i}}^j$ components, the twist is achieved by imposing\nthe projection\n\\begin{equation}\n\\gamma^{\\hat{\\theta}\\hat{\\phi}}\\epsilon^{\\hat{i}}={\\epsilon^{\\hat{i}}}_{\\hat{j}}\\epsilon^{\\hat{j}}\\label{projector1}\n\\end{equation}\nprovided that we impose the following twist condition\n\\begin{equation}\n2\\sqrt{2}\\kappa p^9=1\\, .\n\\end{equation}\nIndices $\\hat{i},\\hat{j}=1,2$ denote the Killing spinors\ncorresponding to the unbroken supersymmetry. From equation\n\\eqref{projector1}, the chirality condition on $\\epsilon^{\\hat{i}}$\nimplies that\n\\begin{equation}\n\\gamma^{\\hat{0}\\hat{r}}\\epsilon^{\\hat{i}}=-i{\\epsilon^{\\hat{i}}}_{\\hat{j}}\\epsilon^{\\hat{j}}\\, .\\label{projector2}\n\\end{equation}\nWith these projections, we can write the $\\delta\n\\psi^i_{\\hat{\\theta}}=0$ equation, which is the same as $\\delta\n\\psi^i_{\\hat{\\phi}}$ equation, as\n\\begin{equation}\ng'\\gamma_{\\hat{r}}\\epsilon^{\\hat{i}}-\\frac{2}{3}A_1^{\\hat{i}\\hat{j}}\\epsilon_{\\hat{j}}+\\frac{i}{2}(\\mathcal{V}_\\alpha)^*{\\mathcal{V}_M}^{\\hat{i}\\hat{j}}(i\\mathcal{H}^{M\\alpha}_{\\hat{0}\\hat{r}}-\\mathcal{H}^{M\\alpha}_{\\hat{\\theta}\\hat{\\phi}}){\\epsilon_{\\hat{j}}}^{\\hat{k}}\\epsilon_{\\hat{k}}=0\n\\end{equation}\nwhere we have multiplied the resulting equation by $\\gamma^{\\hat{\\theta}}$. We further impose the projector\n\\begin{equation}\n\\gamma_{\\hat{r}}\\epsilon^{\\hat{i}}=e^{i\\Lambda}\\delta^{\\hat{i}\\hat{j}}\\epsilon_{\\hat{j}}\\label{projector3}\n\\end{equation}\nin which $e^{i\\Lambda}$ is an $r$-dependent phase. By equation \\eqref{projector2}, this projector implies\n\\begin{equation}\n\\gamma_{\\hat{0}}\\epsilon^{\\hat{i}}=ie^{i\\Lambda}\\epsilon^{\\hat{i}\\hat{j}}\\epsilon_{\\hat{j}}\\, .\\label{projector4}\n\\end{equation}\nIt should be noted that there are only two independent projectors\ngiven in \\eqref{projector1} and \\eqref{projector3}. Therefore, the\nentire flows preserve $\\frac{1}{4}$ supersymmetry. 
On the other\nhand, the $AdS_2\\times \\Sigma_2$ vacua is $\\frac{1}{2}$\nsupersymmetric since the $\\gamma_{\\hat{r}}$ projection is not needed\nfor constant scalars.\n\\\\\n\\indent As a next step, we introduce the ``superpotential'' $\\mathcal{W}$\nand ``central charge'' $\\mathcal{Z}$ defined respectively by the\neigenvalues of\n\\begin{equation}\n\\frac{2}{3}A^{\\hat{i}\\hat{j}}_1=\\mathcal{W}_{\\hat{i}}\\delta^{\\hat{i}\\hat{j}}\n\\end{equation}\nand\n\\begin{equation}\n-\\frac{i}{2}(\\mathcal{V}_\\alpha)^*{\\mathcal{V}_M}^{\\hat{i}\\hat{j}}(i\\mathcal{H}^{M\\alpha}_{\\hat{0}\\hat{r}}\n-\\mathcal{H}^{M\\alpha}_{\\hat{\\theta}\\hat{\\phi}}){\\epsilon_{\\hat{j}}}^{\\hat{k}}=\\mathcal{Z}_{\\hat{i}}\\delta^{\\hat{i}\\hat{k}}\\,\n.\n\\end{equation}\nIt should be emphasized that no summation is implied in the above two equations.\n\\\\\n\\indent With all these, we obtain the BPS equation from $\\delta\\psi^{\\hat{i}}_{\\hat{\\theta}}=0$ equation\n\\begin{equation}\ne^{i\\Lambda}g'-\\mathcal{W}_i-\\mathcal{Z}_i=0\n\\end{equation}\nwhich gives\n\\begin{equation}\ng'=|\\mathcal{W}_i+\\mathcal{Z}_i|\\qquad \\textrm{and}\\qquad e^{i\\Lambda}=\\frac{\\mathcal{W}_i+\\mathcal{Z}_i}{|\\mathcal{W}_i+\\mathcal{Z}_i|}\\, .\n\\end{equation}\n\\\\\n\\indent Using all of these results, we find that equation $\\delta\\psi^{\\hat{i}}_{\\hat{0}}=0$ gives\n\\begin{equation}\ne^{i\\Lambda}(f'+i\\hat{A}_te^{-f})-\\mathcal{W}_i+\\mathcal{Z}_i=0\\, .\n\\end{equation}\nTaking the real and imaginary parts leads to the following BPS equations\n\\begin{equation}\nf'=\\textrm{Re}[e^{-i\\Lambda}(\\mathcal{W}_i-\\mathcal{Z}_i)]\\label{f_prime}\n\\end{equation}\nand\n\\begin{equation}\n\\hat{A}_t=e^f\\textrm{Im}[e^{-i\\Lambda}(\\mathcal{W}_i-\\mathcal{Z}_i)].\\label{At_constraint}\n\\end{equation}\nWe now come to $\\delta\\psi^{\\hat{i}}_{\\hat{r}}=0$ equation which\ngives the $r$-dependence of the Killing spinors. 
When combined with the\n$\\delta\\psi^{\\hat{i}}_{\\hat{0}}=0$ equation, this equation reads\n\\begin{equation}\n2\\epsilon^{\\hat{i}\\prime}-\\left(f'+i\\hat{A}_te^{-f}\\right)\\epsilon^{\\hat{i}}=0\n\\end{equation}\nwhich can be solved by\n\\begin{equation}\n\\epsilon^{\\hat{i}}=e^{\\frac{f}{2}+\\frac{i}{2}\\int \\hat{A}_te^{-f}dr}\\tilde{\\epsilon}^{\\hat{i}}.\n\\end{equation}\n$\\tilde{\\epsilon}^{\\hat{i}}$ are constant spinors satisfying the projections\n\\begin{equation}\n\\gamma_{\\hat{r}}\\tilde{\\epsilon}^{\\hat{i}}=\\delta^{\\hat{i}\\hat{j}}\\tilde{\\epsilon}_{\\hat{j}},\\qquad\n\\gamma_{\\hat{\\theta}\\hat{\\phi}}\\tilde{\\epsilon}^{\\hat{i}}={\\epsilon^{\\hat{i}}}_{\\hat{j}}\\tilde{\\epsilon}^{\\hat{j}}\\,\n.\n\\end{equation}\n\\indent Using the $\\gamma_{\\hat{r}}$ projector, we obtain the\nfollowing BPS equations from $\\delta\\chi^i$ and $\\delta\\lambda^i_a$\n\\begin{eqnarray}\n-e^{i\\Lambda}\\epsilon^{\\alpha\\beta}\\mathcal{V}_\\alpha \\mathcal{V}'_\\beta\\delta_{\\hat{i}\\hat{j}}-\\frac{4i}{3}A^{\\hat{j}\\hat{i}}_2\n+i\\mathcal{V}_\\alpha{\\mathcal{V}_M}^{\\hat{i}\\hat{k}}\\epsilon^{\\hat{k}\\hat{j}}(i\\mathcal{H}^{M\\alpha}_{\\hat{0}\\hat{r}}+\\mathcal{H}^{M\\alpha}_{\\hat{\\theta}\\hat{\\phi}})&=&0,\\\\\n{\\mathcal{V}_a}^M{\\mathcal{V}_M}^{ij\\prime}e^{-i\\Lambda}+\\frac{1}{4}\\mathcal{V}_\\alpha \\mathcal{V}_{Ma}(\\mathcal{H}^{M\\alpha}_{\\hat{0}\\hat{r}}+i\\mathcal{H}^{M\\alpha}_{\\hat{\\theta}\\hat{\\phi}})\\delta^i_{\\hat{i}}\\delta^j_{\\hat{j}}\\epsilon^{\\hat{i}\\hat{j}}\n+{A_{2aj}}^i&=&0\\, .\n\\end{eqnarray}\nNote that there are four equations from $\\delta\\lambda^i_a$ for each\nvalue of $a=1,2,3$, but $\\delta\\lambda^{i=3,4}_a$ do not get any\ncontribution from the gauge fields. However, the scalars appearing\nin these equations cannot be consistently set to zero since\n${A_{2aj}}^i$ is not diagonal in $ij$ indices.\n\\\\\n\\indent It should be pointed out that the $N=3$ supersymmetric\n$AdS_4$ vacuum corresponds to the Killing spinors $\\epsilon^{2,3,4}$\nwhile $\\epsilon^1$ is the Killing spinor of the $N=1$ $AdS_4$\ncritical point. In the next section, we will look for possible\n$AdS_2\\times \\Sigma_2$ solutions to the above BPS equations. As\nmentioned before, in the twist given above, the supersymmetry\ncorresponding to $\\epsilon^{3,4}$ is broken. Therefore, the\nresulting $AdS_2\\times \\Sigma_2$ solutions will preserve only two\nsupercharges or half of the $N=1$ supersymmetry corresponding to\neither $\\epsilon^1$ or $\\epsilon^2$. We will analyze these two cases\nseparately.\n\n\\section{Supersymmetric $AdS_2\\times \\Sigma_2$ solutions}\\label{AdS2_solution}\nIn this section, we look for the $AdS_2\\times \\Sigma_2$ fixed points\nof the above BPS flow equations with constant scalars. These\nsolutions should correspond to IR fixed points of the RG flows from\ntwisted compactifications of the dual $N=3$ and $N=1$ SCFTs in three\ndimensions. They also describe near horizon geometries of BPS black\nholes arising from M2-branes wrapped on $\\Sigma_2$. Before giving\nthe solutions, we first discuss the conditions for obtaining the\n$AdS_2$ fixed points.\n\\\\\n\\indent At the $AdS_2\\times \\Sigma_2$ geometries, the scalars are\nconstant, and we can choose the gauge in which $A_t^{M\\alpha}\\sim\n0$. Furthermore, the warp factor $g(r)$ is required to be\nconstant, $g'(r)=0$. 
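In other words, near the horizon the metric takes, up to a trivial rescaling of $t$, the direct product form\n\\begin{equation}\nds^2_4=-e^{\\frac{2r}{L_{AdS_2}}}dt^2+dr^2+L_{\\Sigma_2}^2\\left(d\\theta^2+F(\\theta)^2d\\phi^2\\right)\n\\end{equation}\nin which the first two terms describe $AdS_2$ with radius $L_{AdS_2}$ and the last term describes $\\Sigma_2$ with radius $L_{\\Sigma_2}$.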
Let $r_h$ be the position of the horizon. We\ncan then summarize the conditions for $AdS_2\\times \\Sigma_2$ solutions\nand their properties as follows\n\\begin{eqnarray}\nf(r_h)=\\frac{r_h}{L_{AdS_2}},\\qquad e^{g(r_h)}&=&L_{\\Sigma_2},\\qquad \\textrm{Im}[e^{-i\\Lambda}(\\mathcal{W}_i-\\mathcal{Z}_i)]=0,\\nonumber \\\\\n|\\mathcal{W}_i+\\mathcal{Z}_i|&=&0,\\qquad \\frac{4}{3}A_2^{\\hat{i}\\hat{j}}=\\mathcal{V}_\\alpha {\\mathcal{V}_M}^{\\hat{i}\\hat{k}}\\epsilon^{\\hat{k}\\hat{j}}(i\\mathcal{H}^{M\\alpha}_{\\hat{0}\\hat{r}}+\\mathcal{H}^{M\\alpha}_{\\hat{\\theta}\\hat{\\phi}}),\\nonumber \\\\\n\\frac{i}{4}\\mathcal{V}_\\alpha \\mathcal{V}_{Ma}(-i\\mathcal{H}^{M\\alpha}_{\\hat{0}\\hat{r}}+\\mathcal{H}^{M\\alpha}_{\\hat{\\theta}\\hat{\\phi}})\\epsilon^{\\hat{i}\\hat{j}}&=&-{A_{2a\\hat{j}}}^{\\hat{i}},\\qquad {A_{2a\\tilde{j}}}^{\\hat{i}}=0,\\quad \\tilde{j}=3,4\n\\end{eqnarray}\nwhere $L_{AdS_2}$ and $L_{\\Sigma_2}$ are respectively the radii of\n$AdS_2$ and $\\Sigma_2$. These conditions can be viewed as attractor\nequations for the scalars at the black hole horizon.\n\n\\subsection{Solutions in the $N=3$ case}\nWe begin with the $N=3$ case. The $AdS_2\\times \\Sigma_2$ solutions\nwill describe the fixed points of the RG flows from $N=3$ SCFTs dual\nto the $N^{010}$ compactification of eleven-dimensional supergravity\nto supersymmetric CFT$_1$'s dual to the $AdS_2\\times \\Sigma_2$\ngeometries. These flows are examples of the twisted\ncompactifications of the $N=3$ SCFT on $\\Sigma_2$.\n\\\\\n\\indent In this case, the superpotential and central charge are\ngiven in terms of the redefined scalars\n$(\\tilde{U}_1,\\tilde{U}_2,\\tilde{V})$ by\n\\begin{eqnarray}\n\\mathcal{W}_2&=&\\frac{1}{2}e^{-\\frac{1}{2}(4\\tilde{U}_1+2\\tilde{U}_2+\\tilde{V})}\\left[e^{2\\tilde{U}_2}+4e^{\\tilde{U}_1+\\tilde{U}_2}-2e^{\\tilde{U}_2+\\tilde{V}}+4e^{\\tilde{U}_1+\\tilde{V}}\n\\right.\\nonumber \\\\\n& & -3k+2iZ_2e^{\\tilde{U}_2}+4iZ_2e^{\\tilde{U}_1}-4iZ_1(e^{\\tilde{U}_2}+e^{\\tilde{V}}+iZ_2)\\nonumber \\\\\n& &\\left.-2iZ_2e^{\\tilde{V}}-Z_2^2+2\\chi(2ie^{\\tilde{U}_1}-ie^{\\tilde{U}_2}+2Z_1+Z_2) \\right],\\\\\n\\mathcal{Z}_2&=&\\frac{1}{4}e^{-\\frac{1}{2}(4g+2\\tilde{U}_2+\\tilde{V})}\\left[2e_3e^{\\tilde{U}_2}-\\sqrt{2}ie_9e^{2\\tilde{U}_2}+2ie_3\\chi+2p^3\\chi e^{\\tilde{U}_2} \\right.\\nonumber\\\\\n& &-\\sqrt{2}ip^9\\chi(e^{2\\tilde{U}_2}+3k)-4\\sqrt{2}\\tilde{B}[e^{\\tilde{U}_2}+e^{\\tilde{V}}+i(\\chi+Z_2)]\\nonumber\\\\\n& &+2ie_3Z_2+2\\sqrt{2}e_9Z_2e^{\\tilde{U}_2}+2ip^3\\chi Z_2+2\\sqrt{2}p^9\\chi Z_2e^{\\tilde{U}_2}\\nonumber\\\\\n& &+\\sqrt{2}i(e_9+p^9\\chi)Z_2^2+4iB(e^{2\\tilde{U}_2}-2e^{\\tilde{U}_2+\\tilde{V}}-3k)\\nonumber \\\\\n& &+4B[2\\chi (e^{\\tilde{U}_2}+iZ_2)+Z_2(e^{\\tilde{V}}-2e^{\\tilde{U}_2}-iZ_2)]\\nonumber \\\\\n& &+e^{\\tilde{V}}(2e_3-3\\sqrt{2}p^9-\\sqrt{2}p^9e^{2\\tilde{U}_2}+2p^3Z_2+\\sqrt{2}p^9Z_2^2)\\nonumber \\\\\n& &\\left. 
-2ie^{\\tilde{U}_2+\\tilde{V}}(p^3+\\sqrt{2}p^9Z_2)\\right]\n\\end{eqnarray}\nin which the subscript $2$ on $\\mathcal{W}_2$ and $\\mathcal{Z}_2$ refers to the\nsuperpotential and central charge associated to the Killing spinor\n$\\epsilon^2$.\n\\\\\n\\indent The BPS equations are given by\n\\begin{eqnarray}\nf'&=&\\textrm{Re}[e^{-i\\Lambda}(\\mathcal{W}_2-\\mathcal{Z}_2)],\\qquad e^{i\\Lambda}=\\frac{\\mathcal{W}_2+\\mathcal{Z}_2}{|\\mathcal{W}_2+\\mathcal{Z}_2|},\\\\\ng'&=&|\\mathcal{W}_2+\\mathcal{Z}_2|,\n\\end{eqnarray}\n\\begin{eqnarray}\ne^{i\\Lambda}\\tilde{V}'-ie^{-\\tilde{V}+i\\Lambda}\\chi'&=&\\frac{1}{2}\n\\left[e^{-\\frac{\\tilde{V}}{2}-\\tilde{U}_2-2\\tilde{U}_1}\\left[2e^{\\tilde{U}_2}+8e^{2\\tilde{U}_1}\n-6k+Z_2(8Z_1-2Z_2)\n\\right]\\right.\\nonumber \\\\\n &\n&-e^{-2g-2\\tilde{U}_1+\\frac{\\tilde{V}}{2}}\\left[4e^{2g}+2e^{2\\tilde{U}_1}(p^3+4B+\\sqrt{2}p^9Z_2)\\right]\\nonumber\n\\\\\n& & \\left.\n+4\\chi(2Z_1+Z_2)e^{-\\frac{\\tilde{V}}{2}-\\tilde{U}_2-2\\tilde{U}_1}+\\sqrt{2}e_9e^{\\tilde{U}_2-2g-\\frac{\\tilde{V}}{2}}\\right]\n \\nonumber \\\\\n& &+\\frac{1}{2}e^{-2g-\\tilde{U}_2-\\frac{\\tilde{V}}{2}}\\left[\\sqrt{2}Z_2(4\\tilde{B}-e_9Z_2)-2e_3(\\chi+Z_2)+4\\sqrt{2}\\chi \\tilde{B} \\right.\\nonumber\\\\\n& &-4B(e^{2\\tilde{U}_2}-3k+2\\chi Z_2-Z_2^2)\n+\\sqrt{2}p^9\\chi(e^{\\tilde{U}_2}+3k)\\nonumber \\\\\n& &\\left.-Z_2\\chi(2p^3+\\sqrt{2}p^9Z_2) \\right]\\nonumber \\\\\n&\n&-\\frac{i}{2}e^{-\\tilde{U}_2-\\frac{\\tilde{V}}{2}}\\left[4e^{\\tilde{U}_2-2\\tilde{U}_1}(Z_2-2Z_1-\\chi)\n-4e^{\\tilde{V}-2\\tilde{U}_1}(2Z_1+Z_2) \\right.\\nonumber \\\\\n&\n&-2e^{\\tilde{U}_2-2g}\\left[Z_2(\\sqrt{2}e_9-4B-2\\sqrt{2}\\tilde{B})+\\chi\n(p^3+4B+\\sqrt{2}p^9Z_2)\n\\right]\\nonumber \\\\\n& &+\ne^{\\tilde{V}-2g}\\left[2e_3-4\\sqrt{2}\\tilde{B}-\\sqrt{2}p^9(3k+e^{2\\tilde{U}_2})\n-4\\sqrt{2}\\tilde{B}\\right.\\nonumber\n\\\\\n&\n&\\left.\\left.+Z_2(2p^3+8B+\\sqrt{2}p^9Z_2)\\right]-2e^{\\tilde{U}_2-2g}e_3\\right],\n\\end{eqnarray}\n\\begin{eqnarray}\ne^{-i\\Lambda}\\tilde{U}_2'+ie^{-\\tilde{U}_2-i\\Lambda}Z_2'&=&\\frac{1}{2}e^{-2g-\\tilde{U}_2\n-2\\tilde{U}_1-\\frac{\\tilde{V}}{2}}\\left[2e^{2(g+\\tilde{U}_2)}+\\sqrt{2}ie_9e^{2(\\tilde{U}_1+\\tilde{U}_2)}\n+6ke^{2g} \\right.\\nonumber \\\\\n& &-2ie_3\\chi e^{2\\tilde{U}_1}+\\sqrt{2}ip^9\\chi e^{2(\\tilde{U}_1+\\tilde{U}_2)}\n+3\\sqrt{2}ikp^9\\chi e^{2\\tilde{U}_1}\\nonumber \\\\\n& &+8iZ_2e^{2g+\\tilde{U}_1}-2ie_3Z_2e^{2\\tilde{U_1}}-4\\chi Z_2e^{2g}-2ip^3\\chi Z_2e^{2\\tilde{U}_1}\\nonumber \\\\\n&\n&-8Z_1Z_2e^{2g}+2Z_2^2e^{2g}-\\sqrt{2}ie_9Z_2^2e^{2\\tilde{U}_1}-8\\chi\nZ_1 e^{2g}\n\\nonumber \\\\\n&\n&-4iBe^{2\\tilde{U}_1}\\left[e^{2\\tilde{U}_2}-3k+Z_2(2\\chi-Z_2-2ie^{\\tilde{V}})\\right]+8i\\chi\ne^{2g+\\tilde{U}_1}\\nonumber \\\\\n&\n&+4\\sqrt{2}\\tilde{B}e^{2\\tilde{U}_1}(e^{\\tilde{V}}+i\\chi+iZ_2)-\\sqrt{2}ip^9\\chi\nZ_2^2e^{2\\tilde{U}_1}\n\\nonumber \\\\\n& &\n-4ie^{2g+\\tilde{V}}(2Z_1+Z_2)-Z_2e^{2\\tilde{U}_1+\\tilde{V}}(2p^3+\\sqrt{2}p^9Z_2)\n\\nonumber \\\\\n&\n&\\left.-e^{\\tilde{U}_1+\\tilde{V}}\\left[8e^{2g}+e^{\\tilde{U}_1}\\left[2e_3-\\sqrt{2}p^9(e^{2\\tilde{U}_2}+3k)\\right]\\right]\n\\right],\n\\end{eqnarray}\n\\begin{eqnarray}\ne^{-i\\Lambda}\\tilde{U}_1'-ie^{-\\tilde{U}_1-i\\Lambda}Z_1'&=&e^{-\\tilde{U}_2-2\\tilde{U}_1-\\frac{\\tilde{V}}{2}}\\left[2e^{\\tilde{U}_2+\\tilde{V}}-e^{2\\tilde{U}_2}-2e^{\\tilde{U}_1}(e^{\\tilde{U}_2}+e^{\\tilde{V}})+3k \\right.\\nonumber \\\\\n& &-4iZ_1(e^{\\tilde{U}_2}+e^{\\tilde{V}}-iZ_2)+2i\\chi (e^{\\tilde{U}_1}-e^{\\tilde{U}_2}+2iZ_1+iZ_2)\\nonumber \\\\\n& 
&\\left.+2iZ_2(e^{\\tilde{U}_2}+e^{\\tilde{U}_1}-e^{\\tilde{V}})+Z_2^2\n\\right]\n\\end{eqnarray}\nwhere we have used the relation \\eqref{twist1} to express $p^6$ in terms of $p^9$ and $e_3$.\n\\\\\n\\indent To obtain the complete flow solutions, we have to solve\nthese equations together with the two-form equations\n\\eqref{2-form-eq1}, \\eqref{2-form-eq2} and the equations for the\ngauge fields \\eqref{Ap1} to \\eqref{Ap6} as well as the algebraic\nconstraint given by equation \\eqref{At_constraint}. These equations\nare highly complicated to solve, even numerically, let alone\nanalytically. In what follows, we will present only\nthe $AdS_2\\times\\Sigma_2$ solutions and will not give the numerical\nflow solutions, which may be obtained with suitable boundary\nconditions. In principle, the horizon is characterized by the values\nof the scalars as functions of the electric and magnetic charges.\nHowever, due to the complexity of the BPS equations, it is more\nconvenient to solve the horizon conditions for the charges in terms\nof the scalar fields, although inverting these relations to express the\nscalars in terms of the charges would be desirable.\n\\\\\n\\indent In the present case, although it is straightforward to solve\nthe above equations for $(B,\\tilde{B},\\chi, Z_1,p^9,p^3,e_3,e_9)$ in\nterms of $(\\tilde{U}_1, \\tilde{U}_2, \\tilde{V},Z_2)$, the resulting\nexpressions turn out to be cumbersome and not very illuminating.\nAccordingly, we refrain from giving the general result here but\ninstead present some solutions with specific values of the\nparameters. These are obtained from truncating the full result and\nrepresent some examples of $AdS_2\\times \\Sigma_2$ geometries within the solution space.\n\\\\\n\\indent Examples of $AdS_2\\times \\Sigma_2$ solutions are as\nfollows:\n\\begin{itemize}\n\\item We begin with a simple solution with vanishing pseudoscalars.\nFrom the M-theory point of view, only scalars coming from the eleven-dimensional metric are turned on.\nThe solution is given by\n\\begin{eqnarray}\nk&=&\\frac{1}{5},\\qquad \\chi=Z_1=Z_2=0,\\qquad e_9=0,\\qquad \\tilde{V}=\\frac{1}{2}\\ln \\left[\\frac{27}{5}\\right],\\nonumber \\\\\n\\tilde{U}_1&=&\\frac{1}{2}\\ln \\left[\\frac{27}{80}\\right],\\qquad\n\\tilde{U}_2=-\\frac{1}{2}\\ln \\left[\\frac{5}{3}\\right],\\qquad\n\\tilde{B}=\\frac{1}{20}(5\\sqrt{2}e_3-27p^9),\\nonumber \\\\\ng&=&\\frac{1}{2}\\ln\\left[-\\frac{81}{80}\\sqrt{\\frac{3}{10}}\\kappa\np^9\\right],\\qquad B=-\\frac{p^3}{4},\\qquad\nL_{AdS_2}=\\frac{3^{\\frac{9}{4}}}{32(5)^{\\frac{3}{4}}}\\, .\n\\end{eqnarray}\nIt is clearly seen that only the hyperbolic horizon ($\\kappa=-1$) is possible, since otherwise\n$g(r_h)$ would become complex. Therefore, we find that this is an $AdS_2\\times H^2$ solution.\n\\item We next consider a solution with scalars and pseudoscalars turned on.\nIn the eleven-dimensional context, the solution involves scalar fields from both the metric\nand the four-form field. 
This solution is characterized by\n\\begin{eqnarray}\nk&=&1,\\qquad Z_1=Z_2=\\tilde{U}=0,\\qquad \\tilde{U}=\\tilde{V}=\\ln \\left[\\frac{12}{7}\\right],\\nonumber \\\\\np^3&=&\\frac{41e_9+220p^9}{41\\sqrt{2}},\\qquad\nB=-\\frac{41e_9+136p^9}{164\\sqrt{2}},\\qquad\n\\tilde{B}=\\frac{e_3}{2\\sqrt{2}}-\\frac{111}{41}p^9,\\nonumber \\\\\n\\chi&=&-\\frac{1}{7},\\qquad\ng=\\frac{1}{2}\\ln\\left[-2^{\\frac{5}{2}}\\kappa p^9\\sqrt{\n\\frac{21}{41}}\\right],\\qquad L_{AdS_2}=\\frac{\\sqrt{21}}{19}\\, .\n\\end{eqnarray}\nThis solution is also $AdS_2\\times H^2$.\n\\item As a final example, we consider a solution with more scalars turned on\nand hence more general than the previous two solutions. This solution is given by\n\\begin{eqnarray}\nZ_1&=&0,\\qquad Z_2=-\\frac{2\\sqrt{k}}{7},\\qquad \\chi=-\\frac{\\sqrt{k}}{7},\\qquad \\tilde{U}_1=\\tilde{U}_2\n=\\frac{1}{2}\\ln k, \\nonumber \\\\\np^3&=&\\frac{128,447k-104,895}{4,116\\sqrt{2k}}p^9,\\qquad e_9=\\frac{32,723k-13,923}{4,116\\sqrt{2k}}p^9,\n \\nonumber \\\\\n\\tilde{B}&=&\\frac{e_3}{2\\sqrt{2}}+\\frac{567-667k}{98}p^9,\\qquad\ng=\\frac{1}{2}\\ln \\left[\\frac{21(1-k)\\sqrt{k}\\kappa\np^9}{2\\sqrt{2}}\\right],\\nonumber \\\\\n\\tilde{V}&=&\\ln (2\\sqrt{k}),\\qquad\nB=-25p^9\\left[\\frac{3,809k-2,961}{16,464\\sqrt{2k}}\\right], \\qquad\nL_{AdS_2}=\\frac{k^{\\frac{3}{4}}}{3\\sqrt{2}}\\, .\\nonumber \\\\\n& &\n\\end{eqnarray}\nIn this case, the flux parameter $k$ is not fixed, and there are two\ntypes of solutions, $AdS_2\\times S^2$ and $AdS_2\\times H^2$,\ndepending on the value of $k$. For $k>1$, we have an $AdS_2\\times\nH^2$ solution with $\\kappa=-1$ while the solution with $k<1$ is\n$AdS_2\\times S^2$ for which $\\kappa=1$.\n\\end{itemize}\n\n\\subsection{Solutions in the $N=1$ case}\nWe now repeat a similar analysis for the $N=1$ case in which the\n$N=1$ $AdS_4$ vacuum arises from the squashed $N^{010}$ manifold.\nThis critical point exists only for $k<0$, and the $AdS_2\\times\n\\Sigma_2$ solutions would be IR fixed points of the twisted\ncompactifications of the dual $N=1$ SCFT. The superpotential and\ncentral charge are given by\n\\begin{eqnarray}\n\\mathcal{W}_1&=&\\frac{1}{2}e^{-\\tilde{U}_2-2\\tilde{U}_1-\\frac{\\tilde{V}}{2}}\\left[e^{2\\tilde{U}_2}\n-4e^{\\tilde{U}_1+\\tilde{U}_2}-2e^{\\tilde{V}}(e^{\\tilde{U}_2}+2e^{\\tilde{U}_1}) +4Z_1(Z_2-ie^{\\tilde{U}_2}-ie^{\\tilde{V}})\\right. \\nonumber \\\\\n&\n&\\left.-3k+iZ_2(2e^{\\tilde{U}_2}-4e^{\\tilde{U}_1}-2e^{\\tilde{V}}+iZ_2)+2\\chi\n(2Z_1+Z_2-ie^{\\tilde{U}_2}-2ie^{\\tilde{U}_1})\\right],\\nonumber \\\\\n& &\\\\\n\\mathcal{Z}_1&=&\\frac{1}{4}e^{-2g-\\tilde{U}_2-\\frac{\\tilde{V}}{2}}\\left[2e_3(e^{\\tilde{U}_2}+i\\chi)-\\sqrt{2}ie_9e^{2\\tilde{U}_2}+2p^3\\chi e^{\\tilde{U}_2}-3\\sqrt{2}ikp^9\\chi \\right.\\nonumber \\\\\n& &-\\sqrt{2}ip^9 \\chi e^{2\\tilde{U}_2}-4\\sqrt{2}\\tilde{B}(e^{\\tilde{U}_2}+e^{\\tilde{V}+i\\chi+i Z_2})+2ie_3Z_2\\nonumber \\\\\n& &+2\\sqrt{2}e_9Z_2e^{\\tilde{U}_2}+2ip^3\\chi Z_2+2\\sqrt{2}p^9\\chi Z_2+\\sqrt{2}ie_9Z_2^2\\nonumber \\\\\n& &+\\sqrt{2}ip^9\\chi Z_2^2+4B[2\\chi(e^{\\tilde{U}_2}+iZ_2)+i(e^{2\\tilde{U}_2}-2e^{\\tilde{U}_2+\\tilde{V}}-3k)]\\nonumber \\\\\n& &+4BZ_2(2e^{\\tilde{V}}-2e^{\\tilde{U}_2}-iZ_2)-2ie^{\\tilde{U}_2+\\tilde{V}}(p^3+\\sqrt{2}p^9Z_2)\\nonumber \\\\\n& &\\left. 
+e^{\\tilde{V}}(2e_3-6\\sqrt{2}p^9-\\sqrt{2}p^9e^{2\\tilde{U}_2}+2p^3Z_2+\\sqrt{2}p^9Z_2^2)\\right].\n\\end{eqnarray}\n\\indent The procedure is essentially the same, so we will just\npresent the result of $AdS_2\\times \\Sigma_2$ solutions and leave the\nexplicit form of the corresponding BPS equations to the appendix. In\nthis case, it turns out to be more difficult to find the solutions\nin particular we have not found any solutions without the\npseudoscalars turned on. With some effort, we obtain the following\nsolutions:\n\\begin{itemize}\n\\item We begin with a simple solution in which all scalars have the same value as the $N=1$\nsupersymmetric $AdS_4$ vacuum\n\\begin{eqnarray}\nk&=&-\\frac{18}{11},\\qquad Z_1=Z_2=\\chi=0,\\qquad \\tilde{U}_1=\\tilde{U}_2=\\ln5-\\frac{1}{2}\\ln \\left[\\frac{55}{6}\\right],\\nonumber \\\\\n\\tilde{V}&=&-\\frac{1}{2}\\ln \\left[\\frac{55}{6}\\right],\\qquad B=-\\frac{p^3}{4},\\qquad \\tilde{B}=\\frac{e_3}{2\\sqrt{2}},\\qquad e_9=-\\frac{14p^3}{5\\sqrt{2}},\\nonumber \\\\\ng&=&\\frac{1}{2}\\ln \\left[-\\frac{10}{11}\\sqrt{\\frac{15}{11}}\\kappa p^9\\right],\\qquad L_{AdS_2}=\\frac{5^{\\frac{5}{4}}}{2^{\\frac{5}{4}}(3^{\\frac{1}{4}})(11^{\\frac{3}{4}})}\\, .\n\\end{eqnarray}\nThe solution is of the $AdS_2\\times H^2$ form.\n\\item We now give a more complicated solution\n\\begin{eqnarray}\nk&=&-\\frac{18}{11},\\qquad Z_1=\\chi=0,\\qquad \\tilde{U}_1=\\tilde{V}=\\ln\\left[7\\sqrt{-\\frac{3k}{319}}\\right],\\nonumber \\\\\np^3&=&\\sqrt{\\frac{3}{638}}\\left(\\frac{p^9}{3,190\\sqrt{-k}}\\right)(567,365k-1,002,298),\\nonumber \\\\\nB&=&\\sqrt{\\frac{3}{638}}\\left(\\frac{p^9}{89,320\\sqrt{-k}}\\right)(13,987,355k-27,368,286),\\nonumber \\\\\n\\tilde{B}&=&\\frac{e_3}{2\\sqrt{2}}+\\frac{3p^9}{8,932}(63,162-32,267k),\\qquad Z_2=-5\\sqrt{-\\frac{3k}{319}},\n\\nonumber \\\\\ng&=&\\ln\\left[7\\left(\\frac{3}{638}\\right)^{\\frac{1}{4}}\\sqrt{(k-2)\\sqrt{-k}\\kappa\np^9}\\right],\\nonumber \\\\\n\\tilde{U}_2&=&\\frac{1}{2}\\ln \\left[-\\frac{588k}{319}\\right],\\qquad\nL_{AdS_2}=\\frac{21(3^{\\frac{1}{4}})}{11}\\sqrt{\\frac{7}{21}}\\left(\\frac{2}{29}\\right)^{\\frac{3}{4}}\\,\n.\n\\end{eqnarray}\nThis solution also gives $AdS_2\\times H^2$ geometry. To show that\nthis leads to real solutions, we explicitly give one example of the\npossible solutions\n\\begin{eqnarray}\nZ_1&=&\\chi=0,\\qquad e_9=54.35,\\qquad p^3=-11.56,\\qquad \\tilde{U}_1=\\tilde{V}=-0.14,\\nonumber \\\\\n\\tilde{U}_2&=&0.55,\\qquad Z_2=-0.62,\\qquad B=10.66,\\qquad\n\\tilde{B}=-13.77+0.35e_3,\\nonumber \\\\\n g&=&1.06\\, .\n\\end{eqnarray}\n\\end{itemize}\n\n\\subsection{Uplift formulae}\nWe end this section by giving the uplift formulae for embedding the\npreviously found $AdS_2\\times \\Sigma_2$ solutions in eleven\ndimensions. 
We first identify the vector and tensor fields in the\n$N=4$ gauged supergravity and those obtained from the dimensional\nreduction of eleven-dimensional supergravity on a tri-sasakian\nmanifold\n\\begin{eqnarray}\nA^3_1&=&\\sqrt{2}A^{9+},\\qquad a^3_1=-\\sqrt{2}A^{6+},\\qquad c^3_1=A^{3+},\\qquad \\tilde{a}^3_1=-A^{3-},\\nonumber \\\\\n\\tilde{c}^3_1&=&\\sqrt{2}A^{9-},\\qquad a^{12}_2=\\sqrt{2}B^{18},\\qquad c^3_2=B^{78}\\, .\n\\end{eqnarray}\nWith this identification and the ansatz for the scalars and vector fields, the eleven-dimensional metric and the four-form field are given by\n\\begin{eqnarray}\nds^2_{11}&=&e^{-\\frac{1}{3}(4\\tilde{U}_1+2\\tilde{U}_2+\\tilde{V})}\\left[-e^{2f}dt^2+dr^2+e^{2g}\n(d\\theta^2+F(\\theta)^2d\\phi^2)\\right]\\nonumber \\\\\n& &+e^{\\frac{1}{3}(2\\tilde{U}_1+\\tilde{U}_2-\\tilde{V})}ds^2(B_{\\textrm{QK}})+e^{\\frac{2}{3}(\\tilde{U}_1-\\tilde{U}_2+\\tilde{V})}\n\\left[(\\eta^1)^2+(\\eta^2)^2\\right]\\nonumber \\\\\n& &+e^{\\frac{2}{3}(\\tilde{V}-2\\tilde{U}_1-2\\tilde{U}_2)}(\\eta^3+\\sqrt{2}\\mathcal{A}^9_tdt\n-\\sqrt{2}p^9F'(\\theta)d\\phi)^2\n\\end{eqnarray}\nand\n\\begin{eqnarray}\nG_4&=&-\\left[6ke^{-(4\\tilde{U}_1+2\\tilde{U}_2+\\tilde{V})+f+2g}-\\sqrt{2}B\\mathcal{A}^{9\\prime}_t-\\sqrt{2}B'\\mathcal{A}^9_t\\right]F(\\theta)dt\\wedge dr\\wedge d\\theta \\wedge d\\phi\\nonumber \\\\\n& &+B'F(\\theta)dr\\wedge d\\theta \\wedge d\\phi\\wedge \\eta^3+dZ_1\\wedge (\\eta^1\\wedge J^1+\\eta^2\\wedge J^2)\\nonumber \\\\\n& &[\\sqrt{2}(\\tilde{\\mathcal{A}}^{9\\prime}_t+\\chi \\mathcal{A}^{9\\prime}_t)dr\\wedge dt+\\sqrt{2}(e_9+\\chi p^9-\\sqrt{2}B)F(\\theta)d\\theta \\wedge d\\phi]\\wedge \\eta^1\\wedge \\eta^2\\nonumber \\\\\n& &[(\\mathcal{A}^{3\\prime}_t+\\sqrt{2}Z_2\\mathcal{A}^{9\\prime}_t)dr\\wedge dt+(p^3+\\sqrt{2}p^9Z_2+2B)F(\\theta)d\\theta \\wedge d\\phi ]\\wedge J^3\\nonumber \\\\\n& &+2(\\chi+2Z_1)\\eta^1\\wedge \\eta^2\\wedge J^3+(dZ_2\\wedge J^3+d\\chi \\wedge \\eta^2\\wedge \\eta^2)\\wedge (\\eta^3-\\sqrt{2}p^9F(\\theta)d\\phi)\\nonumber \\\\\n& &+2[(\\mathcal{A}^3_t+\\sqrt{2}\\tilde{\\mathcal{A}}^9_t)dt-(\\sqrt{2}e_9+p^3)F(\\theta)d\\phi+4(2Z_1+Z_2)\\textrm{vol}(B_{\\textrm{QK}})\n\\nonumber \\\\\n& &\n+(\\chi+Z_2)(\\eta^3+\\sqrt{2}\\mathcal{A}^9_tdt-\\sqrt{2}p^9F(\\theta)d\\phi)]\\wedge(\\eta^1\\wedge\nJ^2-\\eta^2\\wedge J^1).\\label{4_form_flux}\n\\end{eqnarray}\n\n\\section{Conclusions}\\label{conclusions}\nIn this paper, we have found a number of $AdS_2\\times \\Sigma_2$\nsolutions in $N=4$ gauged supergravity with $SO(3)\\ltimes\n(\\mathbf{T}^3,\\hat{\\mathbf{T}}^3)$ gauge group. The solutions can be\nuplifted to M-theory since the $N=4$ gauged supergravity is a\nconsistent truncation of eleven-dimensional supergravity on a\ntri-sasakian manifold. These $AdS_2\\times \\Sigma_2$ gemetries are\nexpected to arise from the near horizon limit of certain dyonic BPS\nblack holes which can be identified as holographic RG flows from\ntwisted compactifications of the dual $N=1,3$ SCFTs in the UV to\nsuperconformal quantum mechanics corresponding to the $AdS_2$ geometry\nin the IR. We have found that most of the solutions have hyperbolic\nhorizons, but some of them have spherical horizons depending on the\nvalues of the four-form flux parameter. These solutions provide examples\nof $AdS_2$ geometries from M-theory compactified on a tri-sasakian\nmanifold such as $N^{010}$ and are hopefully useful in the\nholographic study of the $N=1,3$ Chern-Simons-Matter theories\nin three dimensions. 
They should also be useful in the study of\nblack hole entropy along the line of recent results in \\cite{Zaffaroni_BH_entropy,BH_entropy_benini,BH_entropy_Passias}. In\nthis aspect, the near horizon solutions given here are enough\nalthough we have not constructed the full black hole solutions,\nnumerically. It would be interesting to compute the topologically\ntwisted index in the dual $N=1,3$ SCFTs and compare with the black\nhole entropy computed from the area of the horizon $A\\sim\nL^2_{\\Sigma_2}$.\n\\\\\n\\indent The solutions found here might constitute only a small\nnumber of all possible solutions due to the complexity of the\nresulting BPS equations. It could be interesting to look for more\nsolutions or even to identify all possible black hole solutions to\nthis $N=4$ gauged supergravity similar to the analysis in $N=2$\ngauged supergravity. For the case of $N^{010}$ manifold, there\nexists an invariant two-form in addition to the universal forms on a\ngeneric tri-sasakian manifold. This leads to an additional vector\nmultiplet, called Betti multiplet, in $N=4$ gauged supergravity.\nThis vector multiplet corresponds to a baryonic symmetry in the dual\nSCFTs. Finding a reduction that includes the Betti multiplet and\n$SU(3)$ non-singlet fields would be very useful in order to find\nmore interesting black hole and other holographic solutions. We\nleave all these issues for future work. \\vspace{1cm}\n\\\\\n{\\large{\\textbf{Acknowledgement}}} \\\\\nThe author would like to thank Davide Cassani for useful\ncorrespondences and the Abdus Salam Centre for Theoretical Physics\nfor hospitality while most of this work has been done. This work is\nsupported by The Thailand Research Fund (TRF) under grant\nRSA5980037.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nHuman society benefits greatly from the discovery and design of new drug candidates with desired properties. Promising drug candidates (also known as ligands) can be small molecules that interact with macromolecules like proteins to form complexes. Such interactions modulate the cellular functions in biological processes for the treatment of diseases. As shown in Figure~\\ref{fig:task}, in many related tasks like molecular property prediction and protein-ligand binding affinity prediction, 3D structures of small molecules or macromolecule complexes are used to predict the related properties including internal energy, binding affinity, etc. These properties can be traditionally either measured by experiments or computed based on simulation methods like density functional theory (DFT)~\\cite{becke2014perspective} and molecular dynamics (MD)~\\cite{karplus2002molecular}. However, such approaches are time-consuming. Recently, novel machine learning approaches, especially deep learning (DL) algorithms, play an increasingly important role in accelerating drug discovery~\\cite{chen2018rise}. Among various DL methods, Graph Neural Networks (GNNs) have shown superior performance by treating each 3D structure as a graph and performing message passing scheme on it~\\cite{wu2020comprehensive,sun2020graph,atz2021geometric,xia2021geometric,li2021structure}. Molecular graphs can naturally represent the structural information in molecules by treating atoms (or set of atoms like functional groups) as nodes, and chemical bonds (or any predefined pairwise interactions) between atoms as edges. 
In addition, the success of GNNs relies on the implementation of the underlying physics laws associated with chemical molecules, such as symmetry and rotational invariance~\\cite{noe2020machine,klicpera_dimenet_2020}. To better capture molecular structures and increase the expressive power of models, previous GNNs have adopted auxiliary information such as chemical properties~\\cite{duvenaud2015convolutional,kearnes2016molecular,yang2019analyzing}, atomic pairwise distances in 3D space~\\cite{gilmer2017neural,schutt2018schnetpack,unke2019physnet}, and angular information~\\cite{klicpera_dimenet_2020,klicpera_dimenetpp_2020,shui2020heterogeneous,zhang2020molecular,li2021structure}. \n\nIn spite of their successes, the application of GNNs in 3D representation learning for drug discovery is still in its infancy. Existing GNN models suffer from several limitations: \\textbf{(1) All interactions within or between molecules are modeled by the same message passing scheme in each GNN.} Such a design fails to consider the diversity of interactions (e.g. covalent or non-covalent, local or non-local) that are modeled in physics-aware approaches like molecular mechanics~\\cite{schlick2010molecular}. \\textbf{(2) The adoption of additional auxiliary information in GNNs requires higher computational complexity.} For example, by incorporating angular information, the resulting GNNs~\\cite{klicpera_dimenet_2020,klicpera_dimenetpp_2020,shui2020heterogeneous,li2021structure} require at least $O(Nk^2)$ messages to be computed, where $N$ is the number of nodes and $k$ is the average number of nearest neighbors per node. As a comparison, the less powerful GNNs without angular information require only $O(Nk)$ messages~\\cite{schutt2018schnetpack,unke2019physnet}. With restricted computational resources, those computationally expensive GNNs will exhibit limited expressive power or even fail when applied to macromolecules like proteins and nucleic acids. \n\\textbf{(3) Most works only focus on predicting scalar properties while ignoring vectorial properties.} Although scalar properties like energy and binding affinity are essential in drug discovery, there are still many important vectorial properties like dipole moment and force. The flexibility of predicting such vectorial quantities is highly desirable in various tasks~\\cite{mailoa2019fast,kouza2018role}. \n\nTo tackle the aforementioned limitations, we propose a novel GNN framework, known as Multiple\\underline{X} \\underline{M}essage \\underline{P}assing (XMP\\xspace), that enables the flexibility of using different message passing schemes to handle different kinds of interactions in either 2D or 3D molecules. In particular, we use multiplex graphs to represent molecular structures. Each plex (or layer) in a multiplex graph contains one type of interaction. With such input, we assign a different message passing scheme to each plex to model the corresponding interactions and update the node embeddings. \n\nBased on our proposed XMP\\xspace and inspired by ideas from physics, we build an efficient and accurate GNN, \\underline{P}hysics-\\underline{a}ware Multiple\\underline{x} Graph Neural \\underline{Net}work (PaxNet\\xspace), for the 3D representation learning of both small molecules and macromolecule complexes. PaxNet\\xspace takes advantage of XMP\\xspace and is guided by molecular mechanics~\\cite{schlick2010molecular} to model local and non-local interactions in 3D molecular structures differently. 
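\n\nFor concreteness, the following is a minimal, self-contained PyTorch sketch of this multiplex design: each plex keeps its own edge list and edge features, a separate message passing module is assigned to each plex, and all plexes share and successively update the same node embeddings before a simple readout. It is only an illustration of the framework; the module names, dimensions, and toy inputs below are placeholders and do not correspond to the exact architecture evaluated in our experiments.\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass PlexMP(nn.Module):\n    # one message passing step on a single plex: a message is built from the\n    # two endpoint embeddings and the edge feature, then summed into each node\n    def __init__(self, hidden_dim, edge_dim):\n        super().__init__()\n        self.msg = nn.Sequential(\n            nn.Linear(2 * hidden_dim + edge_dim, hidden_dim),\n            nn.SiLU(),\n            nn.Linear(hidden_dim, hidden_dim))\n\n    def forward(self, h, edge_index, edge_attr):\n        src, dst = edge_index                  # edge_index has shape [2, num_edges]\n        m = self.msg(torch.cat([h[src], h[dst], edge_attr], dim=-1))\n        agg = torch.zeros_like(h)\n        agg.index_add_(0, dst, m)              # aggregate messages per target node\n        return h + agg                         # residual node update\n\nclass XMPSketch(nn.Module):\n    # one message passing module per plex, acting on shared node embeddings\n    def __init__(self, plex_names, hidden_dim, edge_dim):\n        super().__init__()\n        self.mps = nn.ModuleDict({p: PlexMP(hidden_dim, edge_dim) for p in plex_names})\n        self.readout = nn.Linear(hidden_dim, 1)\n\n    def forward(self, h, plexes):\n        # plexes: dict mapping plex name -> (edge_index, edge_attr)\n        for name, (edge_index, edge_attr) in plexes.items():\n            h = self.mps[name](h, edge_index, edge_attr)\n        return self.readout(h).sum()           # graph-level scalar prediction\n\n# toy usage with two plexes and six atoms (all numbers are arbitrary)\nh = torch.randn(6, 16)\nplexes = {'local': (torch.tensor([[0, 1], [1, 0]]), torch.randn(2, 4)),\n          'global': (torch.tensor([[0, 1, 2, 5], [1, 0, 5, 2]]), torch.randn(4, 4))}\nmodel = XMPSketch(['local', 'global'], hidden_dim=16, edge_dim=4)\ny = model(h, plexes)\n\\end{verbatim}\n\n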
With this flexibility, PaxNet\\xspace achieves efficiency by incorporating the angular information \\textit{only} in the local interactions to avoid using expensive angle-related computations on all interactions. In addition to scalar properties, PaxNet\\xspace can predict vectorial properties by capturing the geometric vectors in molecular structures that originate from quantum mechanics~\\cite{veit2020predicting}.\n\nTo comprehensively verify the effectiveness of PaxNet\\xspace, we conduct experiments on benchmark datasets that involve small molecules and macromolecule complexes. For small molecules, we use QM9~\\cite{ramakrishnan2014quantum} whose task is to predict quantum chemical properties of small organic molecules. For macromolecule complexes, we choose PDBbind~\\cite{wang2004pdbbind} whose task is to predict the binding affinities between proteins and their small-molecule ligands. On both datasets, our model outperforms the state-of-the-art baselines in terms of not only accuracy but also efficiency regarding inference time and memory consumption. Thus, PaxNet\\xspace is suitable to large-scale machine learning for drug discovery. We summarize the main contributions as follows:\n\\begin{itemize}[leftmargin=10pt]\n\\item We propose a novel GNN framework, Multiplex Message Passing (XMP\\xspace), that builds multiplex graphs to represent molecular structures and uses diverse message passing schemes to model different kinds of molecular interactions.\n\\item We build Physics-aware Multiplex Graph Neural Network (PaxNet\\xspace) based on XMP\\xspace for the representation learning of 3D molecular structures. PaxNet\\xspace differently models the local and non-local molecular interactions with an effective and efficient design. PaxNet\\xspace is also extended to directly predict vectorial values besides scalar values.\n\\item Comprehensive experiments related to small molecules and macromolecule complexes are conducted to demonstrate the accuracy and efficiency of our proposed PaxNet\\xspace.\n\\end{itemize}\n\n\n\\section{Related Work}\n\\textbf{Graph Neural Networks for Molecular Structures.}\nGraph Neural Networks (GNNs) have been proposed~\\cite{duvenaud2015convolutional,niepert2016learning,kipf2017semi} to learn the representation of graph-structured data using neural networks. The architectures of GNNs implicitly encode translational, rotational, and permutation invariance, which are physics laws obeyed by molecules~\\cite{noe2020machine,klicpera_dimenet_2020}. These motivate researchers to use GNNs for learning robust and generalizable representations of molecular structures in many challenging tasks such as molecular property prediction~\\cite{kearnes2016molecular,gilmer2017neural,schutt2018schnetpack,klicpera_dimenet_2020}, protein-ligand binding affinity prediction~\\cite{li2021structure}. Initial related works treat chemical bonds in molecules as edges and atoms as nodes to create graphs for molecules~\\cite{duvenaud2015convolutional,kearnes2016molecular}. These GNNs also integrate many hand-picked chemical features to improve performance. However, they do not take account of the 3D structures of molecules, which are critical for molecular representations~\\cite{townshend2020atom3d}. Thus later works turn to take atomic positions into consideration to compute interatomic distances as edge features between atoms~\\cite{gilmer2017neural,schutt2017quantum,schutt2018schnetpack,unke2019physnet,liu2021transferable}. 
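\n\nAs a simplified illustration of this distance-based construction (a PyTorch sketch with a toy geometry; the cutoff value and coordinates are arbitrary placeholders rather than settings taken from any of the cited models), the edges and their distance features can be obtained directly from the atomic coordinates, with a cutoff used to limit the number of edges as discussed next:\n\\begin{verbatim}\nimport torch\n\ndef distance_edges(pos, cutoff):\n    # pos: [N, 3] atomic coordinates; returns edge_index [2, E] and the\n    # corresponding interatomic distances [E, 1] used as edge features\n    diff = pos.unsqueeze(0) - pos.unsqueeze(1)   # [N, N, 3] displacement vectors\n    dist = diff.norm(dim=-1)                     # [N, N] pairwise distances\n    src, dst = ((dist < cutoff) & (dist > 0)).nonzero(as_tuple=True)\n    return torch.stack([src, dst]), dist[src, dst].unsqueeze(-1)\n\n# toy example: four atoms and a 2.0 Angstrom cutoff\npos = torch.tensor([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0],\n                    [0.0, 1.2, 0.0], [4.0, 0.0, 0.0]])\nedge_index, edge_dist = distance_edges(pos, cutoff=2.0)\n\\end{verbatim}\n\n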
Usually, a cutoff distance is used to create molecular graphs instead of using a complete graph to reduce computational complexity and overfitting. However, GNNs may fail to distinguish certain molecules due to the choice of cutoff distance~\\cite{klicpera_dimenet_2020}. To solve this issue, angular information in 3D molecular structures is further incorporated in GNNs to achieve higher expressive power~\\cite{klicpera_dimenet_2020,klicpera_dimenetpp_2020, shui2020heterogeneous,li2021structure}. However, those angle-aware GNNs have significantly higher computational complexity than the previous works. With limited computational resources, they are hard to be scaled to macromolecules or large-batch learning.\n\nOur proposed model reduces the high complexity caused by angle-related computations: As inspired by molecular mechanics, we exclude the use of angular information in the modeling of non-local interactions in molecules. Other 3D geometric-related information is carefully incorporated.\n\n\\textbf{Multiplex Graph.}\nA multiplex graph (also known as a multi-view graph) consists of multiple types of edges among a set of nodes. Informally, it can be considered as a collection of graphs, where each type of edge with the same set of nodes forms a graph or a layer. To get the representation of each node, both intra-layer relationships and cross-layer relationships have to be addressed properly. In practice, various methods have been proposed to learn the embedding of the multiplex graph~\\cite{zhang2018scalable,schlichtkrull2018modeling,cen2019representation} and the multiplex graph can be applied in many fields~\\cite{lee2020heterogeneous,wang2020abstract}. For the representation learning on molecules, previous work~\\cite{shi2020graph} implicitly represents molecular graphs as multiplex graphs and passes messages according to the edge types. Different from the existing work, we explicitly represent molecules as multiplex graphs based on the 3D geometric information in molecules. Moreover, we propose different message passing schemes for different layers in the multiplex graph.\n\n\\textbf{Neural Networks with Geometric Vectors.}\nA geometric vector is a feature in $\\mathbb{R}^3$ that has a direction as well as a magnitude. In the real world, geometric vectors are used to define many properties such as force, dipole moment, position vector. For the tasks related to geometric vectors, the adoption or prediction of geometric vectors is crucial in the design of neural network architectures. Especially for molecules, neural networks have been proposed to deal with geometric vectors: ~\\cite{mailoa2019fast} predicts force vectors by treating them as sets of scalars and makes scalar predictions separately with expensive data augmentation to approximate rotational invariance. In other physics-informed approaches~\\cite{chmiela2018towards,schutt2018schnetpack,unke2019physnet,klicpera_dimenet_2020}, the force vectors are computed by taking gradients of a predicted scalar energy field since the force field satisfies the conservation of energy. However, many geometric vectors do not have the underlying conservative scalar field. Another approach for predicting geometric vectors is to use equivariant networks with components that are all carefully designed to implement the equivariance of geometric vectors~\\cite{thomas2018tensor,fuchs2020se,schutt2021equivariant}. However, for molecule-related tasks, usually the molecular representations only need to be invariant rather than equivariant. 
Thus the complicated equivariance may not be necessary. Recently, geometric vectors like position vectors are used to better encode the geometric features in 3D protein backbone structures~\\cite{jing2020learning} using GNNs. The node embeddings are updated by geometric vector features to capture more information. \n\nDifferent from the previous works, when predicting geometric vectors, we deal with general 3D molecular structures and directly make an extension based on GNNs that encode invariance, without the need to carefully design all components to be equivariant. In particular, we initialize vectorial node-level contributions with position vectors and update them using the node embeddings. The final vectorial node-level contributions are vector summed to obtain the vectorial prediction.\n\n\\section{Preliminaries}\n\\textbf{Notations.}\nLet $G = ( V , E )$ be a graph with $N=|V|$ nodes and $M = |E|$ edges. The nearest neighbors of node $i$ are defined as $\\mathcal { N } ( i ) = \\{ j | d ( i , j ) = 1 \\}$, where $d ( i , j )$ is the shortest distance between nodes $i$ and $j$. The average number of the nearest neighbors of each node is $k = 2M \/ N$. In later formulations, we will use $\\boldsymbol{h}_{i}$ as the embedding of node $i$, $\\boldsymbol{e}_{j i}$ as the edge embedding between nodes $i$ and $j$, $\\boldsymbol{m}_{ji}$ as the message being sent from node $j$ to node $i$ in the message passing scheme~\\cite{gilmer2017neural}, $F$ as the hidden dimension in our model, $\\mathrm{MLP}$ as a multi-layer perceptron, $\\concat$ as the concatenation operation, $\\odot$ as the element-wise product, and $\\boldsymbol{W}$ as a weight matrix. \n\nHere we give the definition of a multiplex molecular graph, which is the input of our model, as follows:\n\\begin{definition} \\label{def:Multiplex} \\textbf{Multiplex Molecular Graph.} \nWe denote a molecular structure as an $(L+1)$-tuple $G = (V, E^1, \\ldots, E^L)$ where $V$ is the set of nodes (atoms) and for each $l\\in\\{1, 2, \\ldots, L\\},$ $E^l$ is the set of edges (molecular interactions) of type $l$ between pairs of nodes (atoms) in $V$. By defining the graph $G^l = (V,E^l)$, which is also called a plex or a layer, the multiplex molecular graph can be seen as the set of graphs $G = \\{G^1, G^2, ..., G^L\\}$.\n\\end{definition}\nNext we introduce the message passing scheme~\\cite{gilmer2017neural}, which is the basis of our model and a framework widely used in spatial-based GNNs~\\cite{wu2020comprehensive}:\n\\begin{definition} \\label{def:MP} \\textbf{Message Passing Scheme.} \nGiven a graph $G$, the node feature of each node $i$ is $\\boldsymbol{x}_i$, and the edge feature for each node pair $j$ and $i$ is $\\boldsymbol{e}_{ji}$. The message passing scheme iteratively updates the message $\\boldsymbol{m}_{ji}$ and the node embedding $\\boldsymbol{h}_{i}$ for each node $i$ using the following functions:\n\\begin{align}\n\\boldsymbol{m}_{ji}^{t} &= f_{\\text {m}}(\\boldsymbol{h}_{i}^{t-1}, \\boldsymbol{h}_{j}^{t-1}, \\boldsymbol{e}_{ji}), \\\\\n\\boldsymbol{h}_{i}^{t} &= f_{\\text {u}}(\\boldsymbol{h}_{i}^{t-1}, \\sum\\nolimits_{j \\in \\mathcal{N}(i)} \\boldsymbol{m}_{ji}^{t}),\n\\end{align}\nwhere the superscript $t$ denotes the $t$-th iteration, and $f_{\\text {m}}$ and $f_{\\text {u}}$ are learnable functions. 
For each node $i$, $\\boldsymbol{x}_i$ is the input node embedding $\\boldsymbol{h}_{i}^{0}$.\n\\end{definition}\nIn recent works~\\cite{klicpera_dimenet_2020,klicpera_dimenetpp_2020,shui2020heterogeneous,li2021structure}, the message passing scheme has been modified to capture the angular information in a molecular graph $G = (V, E)$ with $N$ nodes and position vectors $\\boldsymbol{r}=\\left\\{\\boldsymbol{r}_{1}, \\ldots, \\boldsymbol{r}_{N}\\right\\}$, where $\\boldsymbol{r}_{i} \\in \\mathbb{R}^3$ is the position of node $i$. We here analyze their computational complexity by addressing the number of angles in $G$:\n\\begin{theorem} \\label{theo}\nGiven a molecular graph $G$ with position vectors $\\boldsymbol{r}$, an angle is defined by a pair of adjacent edges that share a common node in $G$. By definition, there are at least $O(Nk^2)$ angles in $G$, where $N$ is the number of nodes and $k$ is the average number of nearest neighbors for each node.\n\\end{theorem}\nProof of Theorem~\\ref{theo} can be found in Appendix~\\ref{A1}. Based on Theorem~\\ref{theo}, we have:\n\\begin{corollary} \\label{corollary}\nFor a message passing-based GNN that requires at least one message to encode an angle, its computational complexity is at least $O(Nk^2)$ for each graph in a message passing iteration.\n\\end{corollary}\nWith Corollary~\\ref{corollary}, we find that the related works~\\cite{klicpera_dimenet_2020,klicpera_dimenetpp_2020,shui2020heterogeneous,li2021structure} all have at least $O(Nk^2)$ complexity.\n\n\n\\begin{figure*}[ht]\n \\begin{center}\n\t\\centerline{\\includegraphics[width=1.6\\columnwidth]{framework.png}}\n\t\\vskip -0.05in\n\t\\caption{\\label{fig:framework} Illustration of XMP\\xspace framework. XMP\\xspace consists of three steps including (1) the construction of multiplex molecular graph, (2) the assignment of message passing schemes, and (3) the communication and fusion of updated node embeddings.}\n\t\\end{center}\n\t\\vskip -0.25in\n\\end{figure*}\n\n\\section{Methodology}\nIn this section, we present PaxNet\\xspace for efficient and accurate representation learning of 3D molecular structures. We first describe the underlying XMP\\xspace framework of PaxNet\\xspace. Then we describe the components of PaxNet\\xspace comprehensively.\n\n\\subsection{Multiplex Message Passing Framework}\nIn this work, we propose a novel GNN framework, Multiplex Message Passing (XMP\\xspace), that enables the modeling of different molecular interactions with different message passing schemes to learn the representations of 2D\/3D molecular structures. \n\nAs shown in Figure~\\ref{fig:framework}, XMP\\xspace consists of three parts: (1) Given molecular structures, the corresponding multiplex molecular graphs are constructed based on predefined interactions in 2D\/3D molecules. In detail, for each kind of interaction, we use a plex to contain them accordingly. The resulting multiplex molecular graphs are considered as input of XMP\\xspace. (2) For each plex in a multiplex molecular graph, we assign a corresponding message passing scheme to model the interactions and update the node embeddings in it. (3) To address the connections across plexes, XMP\\xspace communicates the updated node embeddings in different plexes. 
All node embeddings are finally fused together to get graph-level representations to be fed into downstream tasks.\n\n\\subsection{Physics-aware Multiplex Graph Neural Network}\\label{PaxNet}\nBased on XMP\\xspace, we focus on learning the representations of 3D molecular structures and build Physics-aware Multiplex Graph Neural Network (PaxNet\\xspace) as an efficient and accurate approach. The design of PaxNet\\xspace is inspired by physics-based knowledge to be a novel instance of XMP\\xspace.\n\nIn this section, we will first introduce the input graphs of PaxNet\\xspace, which are two-plex multiplex molecular graphs based on 3D molecular structures. Next we will present the components in PaxNet\\xspace specifically designed for the two-layer multiplex molecular graphs to learn 3D molecular representations. For vectorial property prediction, we will describe our geometric vector-based approach in detail, which is used for the prediction of dipole moments based on quantum mechanics as a practical application. Finally, we give a theoretical analysis of the computational complexity of PaxNet\\xspace to demonstrate the efficiency.\n\n\\textbf{Multiplex Molecular Graphs Inspired by Molecular Mechanics. } \\label{sec:multiplex}\nIn molecular mechanics~\\cite{schlick2010molecular}, the molecular energy $E$ is modeled with a separate consideration of local and non-local interactions: $E=E_{\\text {local}}+E_{\\text {nonlocal}}$. Local interactions $E_{\\text {local}}=E_{\\text {bond}}+E_{\\text {angle}}+E_{\\text {dihedral}}$ models local, covalent interactions including $E_{\\text {bond}}$ that depends on bond lengths, $E_{\\text {angle}}$ on bond angles, and $E_{\\text {dihedral}}$ on dihedral angles. Non-local interactions $E_{\\text {nonlocal}}=E_{\\text {electro}}+E_{\\text {vdW}}$ models non-local, non-covalent interactions including electrostatic and van der Waals interactions which depend on interatomic distances. For the geometric information in molecular mechanics, the local interactions need pairwise distances and angles, while the non-local interactions only need pairwise distances. These inspire us to avoid using expensive angle-related computations when modeling non-local interactions to achieve efficiency. \n\nTo reach our goal, we first decouple the local and non-local interactions in 3D molecular structures by using different cutoff distances when defining those interactions or just treating chemical bonds as local interactions. With the interactions, we then construct a two-plex multiplex molecular graph $G = \\{G_{global}, G_{local}\\}$ as shown in Figure~\\ref{fig:method}a. The local plex $G_{local}$ contains only local, covalent interactions, while the global layer $G_{global}$ further includes non-local, non-covalent interactions besides local interactions. As for the geometric information, $G_{local}$ captures the related adjacency matrix $\\textbf{A}_{local}$, pairwise distances $d _{local}$ and angles $\\theta _{local}$, while $G_{global}$ contains only the related adjacency matrix $\\textbf{A}_{global}$ and pairwise distances $d _{global}$. As illustrated in Figure~\\ref{fig:geometric}, given a node $i$ in $G$, we use the geometric information within the two-hop neighborhoods around $i$.\n\n\\begin{figure*}[t]\n \\centering\n\t\\includegraphics[width=1.7\\columnwidth]{method.png}\n\t\\vskip -0.05in\n\t\\caption{\\label{fig:method}Illustration of PaxNet\\xspace method. 
(a) Given a 3D molecular structure, a two-plex molecular graph $G$ is constructed to be the input of PaxNet\\xspace. (b) The overall architecture of PaxNet\\xspace. (c) The detailed components in PaxNet\\xspace.}\n\t\\vskip -0.1in\n\\end{figure*}\n\n\\textbf{Overall Architecture of PaxNet\\xspace. }\nAs shown in Figure~\\ref{fig:method}b, with the aforementioned multiplex molecular graph $G$ as input, PaxNet\\xspace uses \\textit{Global Message Passing} and \\textit{Local Message Passing} for $G_{global}$ and $G_{local}$ in $G$ accordingly. The node embeddings $\\boldsymbol{h}$ are updated in each plex and communicated across the two plexes in $G$. To fuse $\\boldsymbol{h}$ for a final representation or prediction of each molecular structure, we use an \\textit{Attention Module} for each hidden layer of PaxNet\\xspace and finally sum the results together.\n\n\\textbf{Global Message Passing. }\nIn this module, we update node embeddings in the global plex $G_{global}$ by capturing the pairwise distances $d_{global}$ based on the message passing in Definition~\\ref{def:MP}. Each message passing operation is:\n\\begin{align}\n\\boldsymbol{m}_{ji}^{t-1} &= \\mathrm{MLP}_m([\\boldsymbol{h}_{j}^{t-1} \\concat \\boldsymbol{h}_{i}^{t-1} \\concat \\boldsymbol{e}_{j i}]),\\label{message_embedding}\\\\\n\\boldsymbol{h}_{i}^{t} &= \\boldsymbol{h}_{i}^{t-1} + \\sum\\nolimits_{j \\in \\mathcal{N}(i)} \\boldsymbol{m}_{ji}^{t-1}\\odot \\phi_{d}(\\boldsymbol{e}_{j i}), \\label{node_update_g}\n\\end{align} \nwhere $i, j \\in G_{global}$ are connected nodes that define a message embedding, $\\phi_{d}$ is a learnable function for pairwise distance. The edge embedding $\\boldsymbol{e}_{j i}$ encodes the corresponding pairwise distance information. In total, this step needs $O(Nk)$ messages.\n\nAfter the message passing, an update function containing multiple residual modules is used to get the node embeddings for the next layer as well as an output module for this layer. The output module is a three-layer MLP to get the output $\\boldsymbol {h}_{global}$ to be fused together. Illustrations of these operations are shown in Figure~\\ref{fig:method}c.\n\n\\textbf{Local Message Passing. } \nFor the updates of node embeddings in the local plex $G_{local}$, we incorporate both pairwise distances $d_{local}$ and angles $\\theta_{local}$. When updating the embedding of node $i$, we consider the one-hop neighbors $\\{j\\}$ and the two-hop neighbors $\\{k\\}$ of $i$. Specifically for the angles related to those nodes, we show an example in Figure~\\ref{fig:geometric} and cluster them depending on the edges around $i$: (a) The \\textit {one-hop angles} are angles between the one-hop edges ($\\theta_{1}$, $\\theta_{2}$ and $\\theta_{3}$ in Figure~\\ref{fig:geometric}). (b) The \\textit {two-hop angles} are angles between the one-hop edges and two-hop edges ($\\theta_{4}$, $\\theta_{5}$ and $\\theta_{6}$ in Figure~\\ref{fig:geometric}). In previous GNNs that incorporate angular information, they either only address the two-hop angles~\\cite{klicpera_dimenet_2020,klicpera_dimenetpp_2020} or only captures the one-hop angles~\\cite{shui2020heterogeneous}. In our model, we contain all those angles to encode more related geometric information. \n\nTo perform message passing, we use the same way as Equation (\\ref{message_embedding}) to compute the message embeddings $\\boldsymbol{m}$. 
The message passing operation in the $t$-th iteration is:\n\\begin{align}\n\\boldsymbol{m}_{ji}^{'t-1} = \\boldsymbol{m}_{ji}^{t-1} &+ \\sum_{j' \\in \\mathcal{N}(i)\\setminus\\{j\\}} \\boldsymbol{m}_{j'i}^{t-1} \\odot \\phi_{d}(\\boldsymbol{e}_{j'i}) \\odot \\phi_{\\theta}(\\boldsymbol{\\theta}_{j'i, j i}) \\nonumber\\\\\n&+ \\sum_{k \\in \\mathcal{N}(j)\\setminus\\{i\\}} \\boldsymbol{m}_{kj}^{t-1} \\odot \\phi_{d}(\\boldsymbol{e}_{kj}) \\odot \\phi_{\\theta}(\\boldsymbol{\\theta}_{k j, j i}), \\label{message_update} \\\\\n\\boldsymbol{h}_{i}^{t} = \\boldsymbol{h}_{i}^{t-1} &+ \\sum_{j \\in \\mathcal{N}(i)} \\boldsymbol{m}_{ji}^{'t-1}\\odot \\phi_{d}(\\boldsymbol{e}_{ji}) , \\label{node_update_l}\n\\end{align}\nwhere $i, j, k \\in G_{local}$, $\\phi_{d}$ is a learnable function for pairwise distance, $\\phi_{\\alpha}$ is a learnable function for angles. $\\boldsymbol{e}_{j i}$ encodes the corresponding pairwise distance information. $\\boldsymbol{\\theta}_{k j, j i}$ encodes the angle $\\theta_{k j, j i}=\\angle k j i$ accordingly.\n\nIn Equation (\\ref{message_update}), we use two summation terms to separately encode the one-hop and two-hop angles with the associated pairwise distances to update $\\boldsymbol{m}_{j i}$. Motivated by Theorem~\\ref{theo}, there are both $O(Nk^2)$ one-hop and two-hop angles. Thus this step needs $O(2Nk^2)$ messages. In Equation (\\ref{node_update_l}), a similar operation as Equation (\\ref{node_update_g}) is used to update node embedding $\\boldsymbol{h}_{i}$, which requires $O(Nk)$ messages.\n\n\\begin{figure*}[t]\n \\begin{center}\n\t\\centerline{\\includegraphics[width=1.6\\columnwidth]{geometry.png}}\n\t\\vskip -0.05in\n\t\\caption{\\label{fig:geometric} An example of the geometric information in $G$. By defining the one-hop neighbors $\\{j\\}$ and two-hop neighbors $\\{k\\}$ of node $i$, we can define the pairwise distances $d$ and the related angles $\\theta$ including one-hop angles and two-hop angles.}\n\t\\end{center}\n\t\\vskip -0.25in\n\\end{figure*}\n\nAs shown in Figure~\\ref{fig:method}c, we design the remaining functions in Local Message Passing similarly to those in Global Message Passing to get $\\boldsymbol {h}_{local}$ to be fused and the input for the next iteration.\n\n\\textbf{Cross-layer Communication. }\nTo address the cross-layer relations between $G_{global}$ and $G_{local}$, we let the information in those layers communicate with each other as depicted in Figure~\\ref{fig:method}b. We first perform Global Message Passing on $G_{global}$ in each iteration. Then the updated node embeddings are transferred to $G_{local}$ for Local Message Passing. Finally, the further updated node embeddings are passed back to $G_{global}$ for the next iteration. \n\n\\textbf{Attention Module. }\nAs shown in Figure~\\ref{fig:method}b, to fuse the node embeddings for a final graph-level representation or prediction as shown in the next, we design Attention Module with attention mechanism for each hidden layer $t$ of PaxNet\\xspace to get the corresponding node-level scalar prediction ${y}_{out}^t$ using $\\boldsymbol {h}_{global}^t$ and $\\boldsymbol {h}_{local}^t$ computed by the output modules in hidden layer $t$. 
\n\nWe first compute the attention weight $\\alpha_{m,i}$ that measures the contribution of $\\boldsymbol {h}_{m,i}$, which belongs to node $i$ on plex $m$ in $G$:\n\\begin{align}\n\\alpha _ {m,i}^ {t} = \\frac { \\exp (\\operatorname { LeakyReLU } (\\boldsymbol{W}_m^t\\boldsymbol {h}_{m,i}^t)) } { \\sum _ { m} \\exp ( \\operatorname { LeakyReLU } (\\boldsymbol{W}_m^t\\boldsymbol {h}_{m,i}^t)) },\\label{softmax}\n\\end{align}\nwhere $m$ can be either global or local, $\\boldsymbol{W}_m^t \\in \\mathbb{R}^{1\\times F}$ is a learnable weight matrix different for each hidden layer $t$ and plex $m$. With $\\alpha _ {m,i}^ {t}$, we then compute the node-level scalar prediction ${y}_{out,i}^t$ of node $i$ using weighted summation:\n\\begin{align}\n{y}_{out,i}^t = \\sum\\nolimits _ { m } \\alpha _ {m,i}^ {t} (\\boldsymbol{W}_{out_{m}}^t\\boldsymbol {h}_{m,i}^t),\\label{weight_sum}\n\\end{align}\nwhere $\\boldsymbol{W}_{out_{m}}^t \\in \\mathbb{R}^{1\\times F}$ is a learnable weight matrix different for each hidden layer $t$ and plex $m$. The node-level predictions ${y}_{out,i}^t$ are used to compute the final graph-level predicted result ${y}$:\n\\begin{align}\n{y} = \\sum\\nolimits_{i=1}^{N}\\sum\\nolimits_{t=1}^{T} {y}_{out,i}^t.\\label{sum}\n\\end{align}\n\n\\textbf{Geometric Vector-Based Approach for Predicting Vectorial Properties. }\nTo enable the prediction of vectorial properties instead of only scalar properties for 3D molecular structures, we design an extension of PaxNet\\xspace which can also be applied to any message-passing-based GNNs.\n\nWe extend the idea of learning scalar node-level contributions to be summed together to compute the final scalar property using GNNs. To predict vectorial properties, we aim to have \\textit {vectorial node-level contributions} to be \\textit {vector summed} together. We define each vectorial node-level contribution $\\vec{y_i}$ to be the multiplication of the scalar atomic contribution $y_i$ and an associated vector $\\vec{v}_i$. To learn $\\vec{v}_i$, we propose an update function to be added in message passing operations:\n\\begin{align}\n\\vec{v}_{i}^t = f_{\\vec{v}}(\\boldsymbol{h}^{t}, \\vec{r}),\\label{vector}\n\\end{align}\nwhere $\\boldsymbol{h}^{t}=\\{\\boldsymbol{h}_{1}^{t}, \\ldots, \\boldsymbol{h}_{N}^{t}\\}$, $\\vec{r}=\\{\\vec{r}_{1}, \\ldots, \\vec{r}_{N}\\}$, $\\boldsymbol{h}_{i}^{t}$ is the learned embedding of node $i$ in $t$-th iteration, and $\\vec{r}_{i} \\in \\mathbb{R}^3$ is the position vector of node $i$. In concept, $f_{\\vec{v}}$ is a function that outputs $\\vec{v}_i$ to be equivariant with respect to an arbitrary composition $R$ of rotations and reflections in $\\mathbb{R}^3$. For example, given Equation (\\ref{vector}), there should be\n\\begin{align}\nR(\\vec{v}_{i}^t)=f_{\\vec{v}}(\\boldsymbol{h}^t, R(\\vec{r})). \\label{equivariant}\n\\end{align}\nSuch constraint is necessary for vectorial property prediction and is physics-aware since the vectorial properties are also equivariant with respect to $R$ in the real world. \n\nTo have a final predicted vectorial value $\\vec{y}$, we modify Equation (\\ref{weight_sum}) and (\\ref{sum}) to be:\n\\begin{align}\n\\vec{y}_{out,i}^t &= \\sum\\nolimits _ { m } \\alpha _ {m,i}^ {t} (\\boldsymbol{W}_{out_{m}}^t\\boldsymbol {h}_{m,i}^t)\\vec{v}_{m,i}^t,\\\\\n\\vec{y} &= \\sum\\nolimits_{i=1}^{N}\\sum\\nolimits_{t=1}^{T} \\vec{y}_{out,i}^t,\n\\end{align}\nwhere $\\vec{v}_{m,i}^t$ is the vector for node $i$ on plex $m$ in the $t$-th iteration. 
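\n\nAs a simplified illustration of this vectorial readout (not the exact implementation used in our experiments), the sketch below takes centered atomic positions as one simple equivariant choice of $\\vec{v}_{i}$, corresponding to the first approximation discussed below, multiplies them by learned scalar node-level contributions, and vector sums the results; the last line numerically checks the equivariance property in Equation (\\ref{equivariant}).\n\\begin{verbatim}\nimport torch\n\ndef vectorial_readout(scalar_contrib, pos):\n    # scalar_contrib: [N] learned per-node scalar coefficients\n    # pos: [N, 3] atomic positions\n    v = pos - pos.mean(dim=0, keepdim=True)       # centered positions as v_i\n    return (scalar_contrib.unsqueeze(-1) * v).sum(dim=0)   # vector sum over nodes\n\n# rotating (or reflecting) the input rotates the output in the same way\npos, w = torch.randn(5, 3), torch.randn(5)\nrot, _ = torch.linalg.qr(torch.randn(3, 3))       # a random orthogonal matrix\nprint(torch.allclose(vectorial_readout(w, pos @ rot.T),\n                     vectorial_readout(w, pos) @ rot.T, atol=1e-5))\n\\end{verbatim}\n\n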
We multiply $\\vec{v}_{m,i}^t$ with the learned scalar atomic contributions. The final prediction $\\vec{y}$ is a vector sum of all vectorial contributions $\\vec{y}_{out,i}^t$.\n\nIn practice, we apply our approach to predict dipole moments $\\vec{\\mu}$ using two different $f_{\\vec{v}}$ as motivated by different quantum mechanics approximations~\\cite{veit2020predicting}:\n\n1) With the approximation that electronic charge densities are concentrated at each atomic position, we can compute $\\vec{\\mu}=\\sum\\nolimits_{i}\\vec{r}_{c,i}q_i$, where $q_i$ is the partial charge of node $i$, and $\\vec{r}_{c,i}=\\vec{r}_i - (\\sum\\nolimits_{i}\\vec{r}_i)\/N$. We treat $q_i$ to be modeled by the scalar node-level contribution, and define $f_{\\vec{v}}(\\boldsymbol{h}, \\vec{r})=\\vec{r}_{c,i}$ to be the vector $\\vec{v}_{i}$ associated with each atom $i$.\n\n2) With the approximation of adding dipoles onto atomic positions in the distributed multipole analysis (DMA) approach~\\cite{stone1981distributed}, we can write $\\vec{\\mu}=\\sum\\nolimits_{i}(\\vec{r}_{c,i}q_i+\\vec{\\mu}_i)$, where $\\vec{\\mu}_i$ is the associated partial dipole of node $i$. We can rewrite the equation as $\\vec{\\mu}=\\sum\\nolimits_{i}f(\\vec{r}_{i})q_i$, where $q_i$ can be modeled by the scalar node-level contribution. We treat $f(\\vec{r}_{i})$ as $\\vec{v}_{i}$ and define $f_{\\vec{v}}(\\boldsymbol{h}, \\vec{r})=\\sum\\nolimits_{j \\in \\mathcal{N}(i)}|m_{i j}|(\\vec{r}_{i} - \\vec{r}_{j})$ to compute $\\vec{v}_{i}$.\n\nTo be noted that both of the two $f_{\\vec{v}}$ guarantee the equivariance of $\\vec{v}_{i}$ as given by Equation (\\ref{equivariant}). The reason is that $f_{\\vec{v}}$ only contains linear combination of $\\vec{r}_{i}$ to compute $\\vec{v}_{i}$.\n\n\\textbf{Computational Complexity. }\nWe here analyze the computational complexity of PaxNet\\xspace by addressing the number of messages as an approximation: We denote the cutoff distance when creating the edges as $d_g$ and $d_l$ in $G_{global}$ and $G_{local}$. The average number of the nearest neighbors per node is $k_g$ in $G_{global}$ and is $k_l$ in $G_{local}$. As mentioned in previous sections, the message passings in PaxNet\\xspace require the computation of $O(Nk_g+2N{k_l}^2+Nk_l)$ messages, while previous approaches~\\cite{klicpera_dimenet_2020,klicpera_dimenetpp_2020, shui2020heterogeneous,li2021structure} require $O(N{k_g}^2)$ messages. For 3D molecular structures, we have $k_g \\propto {d_g}^3$ and $k_l \\propto {d_l}^3$. With proper choices of $d_l$ and $d_g$, we have $k_l \\ll k_g$ (e.g. $d_l=2\\text{\\normalfont\\AA}$ and $d_g=5\\text{\\normalfont\\AA}$). In such cases, our PaxNet\\xspace requires much fewer messages in message passings than those related GNNs.\n\n\n\\begin{table*}[t]\n\\caption{Performance comparison on QM9. We report the averaged results together with the standard deviations for PaxNet\\xspace. 
We mark the best results in bold and the second-best results with underline.}\n\\label{table:QM9}\n\\centering\n\\begin{tabular}{lcccccc}\n\\toprule\n{Target} & {SchNet} & {PhysNet} & {MGCN} & {HMGNN} & {DimeNet++} & \\textbf{PaxNet\\xspace}\\\\\n\\midrule\n$\\mu$ (D) & \\underline{0.021} & 0.0529 & 0.056 & 0.0272 & 0.0297 & \\textbf{0.0108} (0.0001)\\\\\n$\\alpha$ ($a_0^3$) & 0.124 & 0.0615 & \\textbf{0.030} & 0.0561 & \\underline{0.0435} & 0.0447 (0.0003)\\\\\n$\\epsilon_{\\text{HOMO}}$ (meV) & 47 & 32.9 & 42.1 & 24.78 & \\underline{24.6} & \\textbf{22.8} (0.3)\\\\\n$\\epsilon_{\\text{LUMO}}$ (meV) & 39 & 24.7 & 57.4 & 20.61 & \\underline{19.5} & \\textbf{19.2} (0.2)\\\\\n$\\Delta\\epsilon$ (meV) & 74 & 42.5 & 64.2 & 33.31 & \\underline{32.6} & \\textbf{31.0} (0.3)\\\\\n$\\left\\langle R^{2}\\right\\rangle$ ($a_0^2$) & 0.158 & 0.765 & \\underline{0.11} & 0.416 & 0.331 & \\textbf{0.093} (0.020)\\\\\nZPVE (meV) & 1.616 & 1.39 & \\textbf{1.12} & 1.18 & 1.21 & \\underline{1.17} (0.02)\\\\\n$U_0$ (meV) & 12 & 8.15 & 12.9 & \\underline{5.92} & 6.32 & \\textbf{5.90} (0.12)\\\\\n$U$ (meV) & 12 & 8.34 & 14.4 & 6.85 & \\underline{6.28} & \\textbf{5.92} (0.14)\\\\\n$H$ (meV) & 12 & 8.42 & 16.2 & \\underline{6.08} & 6.53 & \\textbf{6.04} (0.14)\\\\\n$G$ (meV) & 13 & 9.40 & 14.6 & 7.61 & \\underline{7.56} & \\textbf{7.14} (0.12)\\\\\n$c_v$ ($\\frac{\\mathrm{cal}}{\\mathrm{mol} \\mathrm{K}}$) & 0.034 & 0.0280 & 0.038 & 0.0233 & \\textbf{0.0230} & \\underline{0.0231} (0.0002)\\\\\n\\midrule\nstd. MAE ($\\%$ ) & 1.78 & 1.37 & 1.89 & 1.00 & \\underline{0.98} & \\textbf{0.83}\\\\\n\\bottomrule\n\\end{tabular}\n\\end{table*}\n\n\\section{Experiments}\nIn this section, we present our experiments using PaxNet\\xspace on two benchmark datasets to answer the following questions:\n\\begin{itemize}[leftmargin=10pt]\n\\item{\\textbf{Q1}:} How accurate is our proposed PaxNet\\xspace compared with the state-of-the-art models?\n\\item{\\textbf{Q2}:} How efficient is PaxNet\\xspace compared with the state-of-the-art models?\n\\item{\\textbf{Q3}:} How does PaxNet\\xspace perform for predicting vectorial property?\n\\item{\\textbf{Q4}:} Do all components in PaxNet\\xspace contribute to the performance?\n\\end{itemize}\n\n\\subsection{Experimental Setup}\n\\subsubsection{Datasets} To comprehensively evaluate the performance of PaxNet\\xspace, we use two datasets to address the 3D molecular structures in different scales from small organic molecules to protein-ligand complexes.\n\n\\textbf{QM9. }\nQM9 is a widely used benchmark for the prediction of 12 molecular properties in\nequilibrium~\\cite{ramakrishnan2014quantum}. It consists of around 130k small organic molecules with up to 9 non-hydrogen atoms. Following~\\cite{klicpera_dimenet_2020}, we randomly use 110000 molecules for training, 10000 for validation and 10831 for testing. Mean absolute error (MAE) and mean standardized MAE (std. MAE)~\\cite{klicpera_dimenet_2020} are used for quantitative evaluation of the target properties.\n\n\\textbf{PDBbind. }\nPDBbind is a database of experimentally measured binding affinities for protein-ligand complexes~\\cite{wang2004pdbbind}. The goal is to predict the binding affinity of each protein-ligand complex based on its 3D structure. We use the same subsets in PDBbind v2016 dataset which contains ~4k structures and the same data splitting approach as in~\\cite{li2021structure}. 
We preprocess each original complex to a structure that contains around 300 nonhydrogen atoms on average with only the ligand and the protein residues within 6$\\text{\\normalfont\\AA}$ around it. To comprehensively evaluate the performance, following~\\cite{li2021structure}, we use Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Pearson's correlation coefficient (R), and the standard deviation (SD) in regression.\n\n\\subsubsection{Baselines}\nOn QM9, we compare PaxNet\\xspace with 5 baseline models: \\textbf{SchNet}~\\cite{schutt2018schnetpack}, \\textbf{PhysNet}~\\cite{unke2019physnet}, \\textbf{MGCN}~\\cite{lu2019molecular}, \\textbf{HMGNN}~\\cite{shui2020heterogeneous}, and \\textbf{DimeNet++}~\\cite{klicpera_dimenetpp_2020}. On PDBbind, PaxNet\\xspace is compared with 4 baselines including \\textbf{D-MPNN}~\\cite{yang2019analyzing}, \\textbf{CMPNN}~\\cite{song2020communicative}, \\textbf{DimeNet}~\\cite{klicpera_dimenet_2020}, and \\textbf{SIGN}~\\cite{li2021structure}. In all experiments, we use the original results reported in the related works for baselines. More descriptions of the baselines can be found in Appendix~\\ref{A2}.\n\n\\subsubsection{Model Setup}\nIn our message passing operations, we define $\\phi_{d}(\\boldsymbol{e})=\\boldsymbol{W}_{\\boldsymbol{e}}\\boldsymbol{e}$ and $\\phi_{\\alpha}(\\boldsymbol{\\alpha})=\\mathrm{MLP}_{\\alpha}(\\boldsymbol{\\alpha})$, where $\\boldsymbol{W}_{\\boldsymbol{e}}$ is a weight matrix, $\\mathrm{MLP}_{\\alpha}$ is a multi-layer perceptrons (MLP). For the MLPs used in our model, they all have 2 layers to take advantage of the approximation capability of MLP. For all activation functions, we use the self-gated Swish activation function as in~\\cite{klicpera_dimenet_2020}. All results are reported with the averages and the standard deviations based on five random runs. More details of the settings are included in Appendix~\\ref{A3}.\n\n\n\\subsection{Result}\n\\textbf{Performance on small molecule dataset (Q1). }\nWe show and compare PaxNet\\xspace with baseline methods on QM9 in Table~\\ref{table:QM9}. PaxNet\\xspace achieves 9 best and 2 second best results among all 12 properties. It also achieves a new state-of-the-art result regarding std. MAE, which evaluates the overall performance. From the results, we can observe that the models that incorporate only atomic pairwise distance $d$ as geometric information like SchNet, PhysNet, and MGCN perform worse than those models that incorporate both $d$ and related angles $\\theta$ like HMGNN, DimeNet++, and our PaxNet\\xspace. This shows the importance of capturing rich geometric information when representing 3D small molecules. \n\nBesides, although PaxNet\\xspace takes advantage of the concept in molecular mechanics, which is mainly used for predicting potential energies, PaxNet\\xspace works also very well for properties that are not energy-related like electronic structure properties ($\\epsilon_{\\text{HOMO}}$, $\\epsilon_{\\text{LUMO}}$, and $\\Delta\\epsilon$), electronic spatial extent $\\left\\langle R^{2}\\right\\rangle$, heat capacity $c_v$, and dipole moment $\\mu$. The superior performance on those properties demonstrates the power of separating the modeling of different interactions in molecules based on XMP\\xspace and the effectiveness of the message passing schemes involved in PaxNet\\xspace. \n\n\\textbf{Performance on macromolecule dataset (Q1). }\nOn PDBbind, we compare the results of PaxNet\\xspace and baselines in Table~\\ref{table:PDBBind}. 
PaxNet\\xspace achieves the best performance regarding all 4 evaluation metrics in our experiments. When compared with the second-best model, SIGN, PaxNet\\xspace performs significantly better (p-value < 0.05 as shown in Appendix~\\ref{A4}). These results clearly demonstrate the accuracy of our model when learning representations of 3D macromolecules. The success of PaxNet\\xspace relies on the separate modeling of local and non-local interactions based on XMP\\xspace. For protein-ligand complexes, the local interactions mainly capture the interactions inside a protein and a ligand, while the non-local interactions can capture the interactions between protein and ligand. PaxNet\\xspace is able to effectively handle the diverse interactions and achieve accurate results.\n\nAmong all models, D-MPNN and CMPNN only implicitly encode the geometric information in 3D structures. While DimeNet, SIGN, and our PaxNet\\xspace all explicitly use pairwise distances $d$ and related angles $\\theta$. With explicitly encoded geometric information, DimeNet performs excellently on small molecule datasets. However, we find it loses superiority when compared to models designed for macromolecules like CMPNN and SIGN. This shows that DimeNet does not generalize well on macromolecules. As for SIGN which performs the best among all baselines on PDBbind, its algorithm is not directly suitable for small molecules. As a contrast, our proposed PaxNet\\xspace is generalizable for both small molecules and macromolecule complexes, and can both achieve state-of-the-art performance in our experiments.\n\n\\begin{table}[t]\n\\vspace{-0.5em}\n\\caption{Performance comparison on PDBbind. We report the averaged results together with the standard deviations. For the evaluation metrics, $\\downarrow$ denotes the lower the better, while $\\uparrow$ denotes the higher the better. We mark the best results in bold and the second-best results with underline.}\n\\label{table:PDBBind}\n\\centering\n\\small\n\\vskip -0.05in\n\\begin{tabular}{ccccc}\n\t\\toprule\n\tModel & RMSE $\\downarrow$ & MAE $\\downarrow$ & SD $\\downarrow$ & R $\\uparrow$ \\\\\n\t\\midrule\n\tD-MPNN & 1.493 (0.016) & 1.188 (0.009) & 1.489 (0.014) & 0.729 (0.006)\\\\\n\tCMPNN & 1.408 (0.028) & 1.117 (0.031) & 1.399 (0.025) & 0.765 (0.009)\\\\\n\tDimeNet & 1.453 (0.027) & 1.138 (0.026) & 1.434 (0.023) & 0.752 (0.010)\\\\\n\tSIGN & \\underline{1.316} (0.031) & \\underline{1.027} (0.025) & \\underline{1.312} (0.035) & \\underline{0.797} (0.012)\\\\\n\t\\midrule\n\t\\textbf{PaxNet\\xspace} & \\textbf{1.263 (0.017)} & \\textbf{0.987 (0.013)} & \\textbf{1.261 (0.015)} & \\textbf{0.815 (0.005)}\\\\\n\t\\bottomrule\n\\end{tabular}\n\\vskip -0.05in\n\\end{table}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\columnwidth]{result.png}\n \\vskip -0.05in\n \\caption{\\textbf{Results of efficiency evaluation.} (a): Std. MAE vs. memory consumption on QM9. (b): Memory consumption vs. the largest cutoff distance $d$ on PDBbind.}\n \\label{fig:efficiency}\n \\vskip -0.15in\n\\end{figure}\n\n\\textbf{Efficiency Evaluation (Q2). }\nTo analyze the efficiency of PaxNet\\xspace, we first use QM9 to compare PaxNet\\xspace with SchNet, PhysNet, and DimeNet++ for small molecules. The model configurations of the baselines are the same as those in their original papers. We depict the std. MAE and the memory consumption during training of each model in Figure~\\ref{fig:efficiency}a. 
For SchNet and PhysNet, although they require less memory than PaxNet\\xspace, they perform significantly worse than PaxNet\\xspace, with 114$\\%$ and 65$\\%$ higher std. MAE, respectively. When compared with DimeNet++, PaxNet\\xspace uses 73$\\%$ less memory and reduces the std. MAE by 15$\\%$. Thus PaxNet\\xspace achieves high efficiency with superior accuracy. As for the training time per epoch, PaxNet\\xspace is 26$\\%$ slower than DimeNet++ because it has two successive message passing schemes in each iteration. However, since PaxNet\\xspace is more memory-efficient than DimeNet++, PaxNet\\xspace can be trained with a larger batch size than DimeNet++ to achieve a speedup with the same computational resources.\n\nWe further evaluate the efficiency on PDBbind to compare PaxNet\\xspace with DimeNet and SIGN, which explicitly use 3D information for macromolecules. In macromolecules, the cutoff distance $d$ plays an important role in capturing inter-molecular interactions~\\cite{li2021structure} and directly affects the computational complexity. Thus we compare the memory consumption when using different $d$ in related models in Figure~\\ref{fig:efficiency}b. From the results, we observe that the memory consumed by DimeNet and SIGN increases much faster than that of PaxNet\\xspace as $d$ increases. When fixing $d=5\\text{\\normalfont\\AA}$ as an example, PaxNet\\xspace requires 80$\\%$ and 71$\\%$ less memory than DimeNet and SIGN, respectively. Thus PaxNet\\xspace is much more memory-efficient and is able to capture longer-range interactions than these baselines with restricted resources. When comparing SIGN with PaxNet\\xspace, both of which use the configurations that achieve the results in Table~\\ref{table:PDBBind}, we find that PaxNet\\xspace outperforms SIGN while reducing the memory consumption by 33$\\%$ and the inference time by 85$\\%$. This clearly demonstrates the accuracy and efficiency of PaxNet\\xspace.\n\n\\textbf{Performance on Vectorial Property Prediction (Q3). }\nOn QM9, when predicting the dipole moment $\\mu$, we can simply predict $\\mu$ as a scalar property, or predict the original vectorial value $\\vec{\\mu}$ and get its magnitude: $\\mu = |\\vec{\\mu}|$. We compare three different methods of using PaxNet\\xspace to predict $\\mu$: 1) No geometric vectors are used in PaxNet\\xspace. 2) Geometric vectors are defined as $\\vec{v}=\\vec{r}_{c,i}$. 3) Geometric vectors are defined as $\\vec{v}=\\sum\\nolimits_{j \\in \\mathcal{N}(i)}|m_{i j}|(\\vec{r}_{i} - \\vec{r}_{j})$. Method 1 directly predicts scalar values, while methods 2 and 3 are those we proposed in Section~\\ref{PaxNet} and can directly predict vectorial values.\n\nAmong the three methods, method 3 achieves the lowest MAE of 0.0108D, which significantly outperforms the best baseline result (0.021D) in Table~\\ref{table:QM9}. Method 2 gets an MAE of 0.0120D, which is still much better than the other baselines. In contrast, the scalar-based method 1 gets 0.0240D, which is the worst among our three methods based on PaxNet\\xspace. The success of method 3 relies on our geometric vector-based approach and a better physics-aware approximation than the one used in method 2.\n\n\\begin{table}[t]\n\\caption{Ablation study of PaxNet\\xspace on QM9. 
We compare the variants with the original PaxNet\\xspace and report the differences in MAE on different targets.} \\label{table:ablation}\n\\vskip -0.05in\n\\centering\n\\small\n\\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}}\n\\begin{tabular}{lrr}\n\t\\toprule\n\t\\multirow{2}{*}{PaxNet\\xspace Variant} & \\multicolumn{2}{c}{MAE on Target}\\\\\n\t & $U_0$, $U$, $H$, $G$ & $\\mu$\\\\\n\t\\midrule\n\tNo Attention Module & +4\\% & +3\\%\\\\\n\tNo Communication between MPs & +10\\% & +21\\%\\\\\n Global MP + Local MP (One-hop $\\theta$) & +4\\% & +8\\%\\\\\n Global MP + Local MP (Two-hop $\\theta$) & +6\\% & +11\\%\\\\\n\tGlobal MP & +28\\% & +29\\%\\\\\n Local MP (all $\\theta$) & +49\\% & +102\\%\\\\\n Local MP (One-hop $\\theta$) & +57\\% & +107\\%\\\\\n Local MP (Two-hop $\\theta$) & +64\\% & +111\\%\\\\\n\t\\bottomrule\n\\end{tabular}\n\\vskip -0.05in\n\\end{table}\n\n\\textbf{Ablation Study for (Q4). }\nTo test whether all of the components in PaxNet\\xspace, including the attention module, the message passing schemes, and the communication across plexes, contribute to the success of PaxNet\\xspace, we conduct an ablation study on QM9 by designing PaxNet\\xspace variants: Without the attention module, we use an average of all output results. By removing the communication across plexes, we perform the message passing on two plexes in parallel without communication. We also remove either Global Message Passing or Local Message Passing. For Local Message Passing, we design variants by considering different $\\theta$. The performance of each variant is evaluated on the scalar targets $U_0$, $U$, $H$, and $G$, and on the vectorial target $\\mu$. The results in Table~\\ref{table:ablation} show that all variants perform worse than the original PaxNet\\xspace. These results validate the contributions of those components. For $\\theta$, we find that using all $\\theta$ performs the best, which demonstrates the importance of including more geometric information.\n\n\\section*{Conclusion}\nIn this work, we propose XMP\\xspace, a novel GNN framework that addresses diverse molecular interactions better than state-of-the-art algorithms by using different message passing schemes. Based on XMP\\xspace and inspired by ideas from physics, we propose to separate the modeling of local and non-local interactions in molecules. Such a design makes it possible to model the interactions effectively and to avoid many computationally expensive operations. The resulting GNN, PaxNet\\xspace, can also predict vectorial properties by learning an associated vector for each node. The experiments conducted on both small molecules and macromolecule complexes clearly demonstrate the efficiency and accuracy of PaxNet\\xspace.\n\n\n\n\\bibliographystyle{ACM-Reference-Format}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}