diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzgili" "b/data_all_eng_slimpj/shuffled/split2/finalzzgili" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzgili" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{s:intro}\n\n\\subsection{Interval and circular-arc hypergraphs}\n\nAn \\emph{interval ordering} of a hypergraph $\\ensuremath{\\mathcal{H}}$ with a finite vertex set $V=V(\\ensuremath{\\mathcal{H}})$ is\na linear ordering $v_1,\\ldots,v_n$ of $V$ such that every hyperedge of $\\ensuremath{\\mathcal{H}}$ is an interval of\nconsecutive vertices. This notion admits generalization to an \\emph{arc ordering} where\n$v_1,\\dots,v_n$ is \\emph{circularly ordered} (i.e., $v_1$ succeeds $v_n$) \nso that every hyperedge is an \\emph{arc} of consecutive vertices.\n\nAn \\emph{interval hypergraph} is a hypergraph\nadmitting an interval ordering.\nSimilarly, if a hypergraph admits an arc ordering, we call it \\emph{circular-arc} \n(using also the shorthand \\emph{CA}).\nIn the terminology stemming from computational genomics,\ninterval hypergraphs are exactly those hypergraphs\nwhose incidence matrix has the \\emph{consecutive ones property}; e.g.,~\\cite{Dom09}.\nSimilarly, a hypergraph is CA exactly when its\nincidence matrix has the \\emph{circular ones property}; e.g.,~\\cite{HM03,GPZ08,OBS11}.\n\n\nOur goal is to study the conditions under which interval and circular-arc hypergraphs\nare \\emph{rigid} in the sense that they have a unique interval or, respectively, arc\nordering. Since any interval (or arc) ordering can be changed to another interval (or arc) \nordering by reversing, we always mean uniqueness \\emph{up to reversal}.\nAn obvious necessary condition of the uniqueness\nis that a hypergraph has no \\emph{twins}, that is, no two vertices such that\nevery hyperedge contains either both or none of them. \n\nWe say that two sets~$A$ and~$B$ \\emph{overlap} and write $A\\between B$ if\n$A$ and $B$ have nonempty intersection and neither of the two sets includes the other.\nTo facilitate notation, we use the same character $\\ensuremath{\\mathcal{H}}$ to denote a hypergraph\nand the set of its hyperedges.\nWe call $\\ensuremath{\\mathcal{H}}$ \\emph{overlap-connected} if it has no isolated vertex\n(i.e., every vertex is contained in a hyperedge) and the graph\n$(\\ensuremath{\\mathcal{H}},\\between)$ is connected.\nAs a starting point, we refer to the following rigidity result.\n\n\\begin{theorem}[Chen and Yesha~\\cite{ChenY91}]\\label{thm:unique-overlap-1}\nA twin-free, overlap-connected interval hypergraph has, \nup to reversal, a unique interval ordering.\n\\end{theorem}\n\nIf we want to extend this result to CA hypergraphs,\nthe overlap-connectedness obviously does not suffice.\nFor example, the twin-free overlap-connected hypergraph $\\ensuremath{\\mathcal{H}}=\\big\\{\\{a,b\\},\\{a,b,c\\},\\{b,c,d\\}\\big\\}$\nhas essentially different arc orderings. \nWe, therefore, need a stronger\nnotion of connectedness. When $A$ and $B$ are overlapping subsets of $V$\n(i.e.,~$A\\between B$) that additionally satisfy $A\\cup B\\ne V$, we say that \n$A$ and~$B$ \\emph{strictly overlap} and write $A\\between^* B$. 
\n\nQuilliot~\\cite{Quilliot84} proves that\na CA hypergraph~$\\ensuremath{\\mathcal{H}}$ on~$n$ vertices has a unique\narc ordering if and only if for every set $X\\subset V(\\ensuremath{\\mathcal{H}})$ with\n$1<|X|< n-1$ there exists a hyperedge $H\\in\\ensuremath{\\mathcal{H}}$ such that $H\\between^* X$.\nNote that this criterion does not admit efficient verification\nas it involves quantification over all subsets~$X$.\n\nWe call a hypergraph~$\\ensuremath{\\mathcal{H}}$ \n\\emph{strictly overlap-connected} if it has no isolated vertex and the graph\n$(\\ensuremath{\\mathcal{H}},\\between^*)$ is connected.\nWe prove the following analog of Theorem~\\ref{thm:unique-overlap-1}\nfor CA hypergraphs.\n\n\\begin{theorem}\\label{thm:unique-overlap-2}\nA twin-free, strictly overlap-connected CA hypergraph has, \nup to reversal, a unique arc representation.\n\\end{theorem}\n\n\n\n\\subsection{Tight orderings}\n\nLet us use notation $A\\bowtie B$ to say that sets $A$ and $B$\nhave a non-empty intersection. By the standard terminology,\na hypergraph $\\ensuremath{\\mathcal{H}}$ is \\emph{connected} if it has no isolated vertex\nand the graph $(\\ensuremath{\\mathcal{H}},\\bowtie)$ is connected.\nNote that the assumption made in Theorem~\\ref{thm:unique-overlap-1}\ncannot be weakened just to connectedness;\nconsider $\\ensuremath{\\mathcal{H}}=\\big\\{\\{a\\},\\{a,b,c\\}\\big\\}$ as the simplest example.\nThus, if we want to weaken the assumption,\nwe have also to weaken the conclusion.\n\nCall an arc ordering of a hypergraph $\\ensuremath{\\mathcal{H}}$ \\emph{tight} if,\nfor any two hyperedges $A$ and $B$ such that\n$\\emptyset\\ne A\\subseteq B\\ne V$,\nthe corresponding arcs share an endpoint.\nThe definition of a \\emph{tight interval ordering}\nis similar: We require that the arcs corresponding\nto hyperedges $A$ and $B$ share an endpoint whenever $\\emptyset\\ne A\\subseteq B$\n(the condition $B\\ne V$ is now dropped as the complete interval $V$ has two endpoints, while the\ncomplete arc $V$ has none).\\footnote{%\nThe class of hypergraphs admitting a tight interval ordering\nis characterized in terms of forbidden subhypergraphs in~\\cite{Moore77};\nsuch a characterization of interval hypergraphs is given in~\\cite{TrotterM76}.}\n\n\nFor nonempty $A$ and~$B$, note that\n$A\\bowtie B$ iff $A\\between B$ or $A\\subseteq B$ or $A\\supseteq B$.\nBy similarity, we define\n\\[\nA\\bowtie^* B\\text{ iff }A\\between^* B\\text{ or }A\\subseteq B\\text{ or }A\\supseteq B\n\\]\nand say that such two nonempty sets \\emph{strictly intersect}.\nWe call a hypergraph~$\\ensuremath{\\mathcal{H}}$ \n\\emph{strictly connected} if it has no isolated vertex and the graph\n$(\\ensuremath{\\mathcal{H}},\\bowtie^*)$ is connected.\nIn Section~\\ref{s:hgs} we establish the following result.\n\n\n\\begin{theorem}\\label{thm:unique}\n\\begin{bfenumerate}\n\\item \nA twin-free, connected hypergraph has, up to reversal,\nat most one tight interval ordering.\n\\item \nA twin-free, strictly connected hypergraph has, up to reversal, \nat most one tight arc ordering.\n\\end{bfenumerate}\n\\end{theorem}\n\n\n\\subsection{The neighborhood hypergraphs of proper interval and proper circular-arc graphs}\n\nFor a vertex $v$ of a graph $G$, the set of vertices adjacent to $v$\nis denoted by $N(v)$. Furthermore, $N[v]=N(v)\\cup\\{v\\}$.\nWe define the \\emph{closed neighborhood hypergraph} of~$G$ \nby $\\ensuremath{\\mathcal{N}}[G]=\\{N[v]\\}_{v\\in V(G)}$. 
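As a small illustration (again only a sketch; the adjacency-matrix representation and the helper name are our own), the hyperedges of $\\ensuremath{\\mathcal{N}}[G]$ can be read off as the rows of the following Boolean matrix.
\\begin{verbatim}
import numpy as np

def closed_neighborhood_hypergraph(A):
    # A: symmetric 0/1 adjacency matrix of G (zero diagonal).
    # Row v of the returned Boolean matrix is the hyperedge N[v] = N(v) + {v}.
    A = np.asarray(A, dtype=bool)
    return A | np.eye(A.shape[0], dtype=bool)

# Example: the path a-b-c gives N[a]={a,b}, N[b]={a,b,c}, N[c]={b,c}.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
print(closed_neighborhood_hypergraph(A).astype(int))
\\end{verbatim}
Identical rows of this incidence matrix correspond to vertices with the same closed neighborhood, i.e., to the twins discussed below.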
\n\nRoberts~\\cite{Roberts71} discovered that $G$ is a proper interval graph \nif and only if $\\ensuremath{\\mathcal{N}}[G]$ is an interval hypergraph.\nThe case of proper circular-arc (PCA) graphs is more complex.\\footnote{For a\n definition of proper interval and PCA graphs, see the beginning of\n Section~\\ref{s:Nhgs}.}\nIf $G$ is a PCA graph, then $\\ensuremath{\\mathcal{N}}[G]$ is a CA hypergraph.\nThe converse is not always true.\nThe class of graphs with circular-arc closed neighborhood hypergraphs,\nknown as \\emph{concave-round graphs}~\\cite{Bang-JHY00},\ncontains PCA graphs as a proper subclass.\nTaking a closer look at the relationship between PCA graphs\nand CA hypergraphs, Tucker~\\cite{Tucker71}\\footnote\nTucker~\\cite{Tucker71} uses an equivalent language of matrices.}\ndistinguishes the case when\nthe complement graph $\\overline{G}$ is non-bipartite and shows that then\n$G$ is PCA exactly when $\\ensuremath{\\mathcal{N}}[G]$ is~CA.\n\n \nOur interest in tight orderings has the following motivation.\nIn fact, $G$ is a proper interval graph \nif and only if the hypergraph $\\ensuremath{\\mathcal{N}}[G]$ has a tight interval ordering\n(this follows from the Roberts theorem and Lemma~\\ref{lem:geomistight} in\nSection~\\ref{s:Nhgs}). Moreover, $G$ is a PCA graph \nif and only if $\\ensuremath{\\mathcal{N}}[G]$ has a tight arc ordering\n(we observed this in~\\cite{fsttcs} based on Lemma~\\ref{lem:geomistight}\nand Tucker's analysis in~\\cite{Tucker71}).\n\nNow, it is natural to consider the connectedness properties of $\\ensuremath{\\mathcal{N}}[G]$\nfor proper interval and PCA graphs and derive from here\nrigidity results. For proper interval graphs this issue has been\nstudied in the literature earlier, but we discuss also this class of graphs for\nexpository purposes.\n\nWe call two vertices~$u$ and~$v$ of a graph $G$ \\emph{twins} if $N[u]=N[v]$. \nNote that $u$ and~$v$ are twins in the graph $G$ if\nand only if they are twins in the hypergraph $\\ensuremath{\\mathcal{N}}[G]$.\nThus, the absence of twins in $G$ is a necessary condition for rigidity of $\\ensuremath{\\mathcal{N}}[G]$.\nAnother obvious necessary condition is the connectedness of $G$ (and, hence, of\n$\\ensuremath{\\mathcal{N}}[G]$).\\footnote{Small graphs are an exception, as all interval orderings\n of at most two vertices are the same up to reversal, and all arc orderings of\n up to three vertices are the same up to reversal and rotation.}\nBy Theorem~\\ref{thm:unique}.1, if a proper interval graph $G$ is\ntwin-free and connected, then $\\ensuremath{\\mathcal{N}}[G]$ has a unique tight interval ordering. \nMaking the same assumptions, Roberts~\\cite{Roberts71} proves that\neven an interval ordering of $\\ensuremath{\\mathcal{N}}[G]$ is unique.\n\nSuppose now that $G$ is a PCA graph. Consider first the case \nwhen $\\overline{G}$ is non-bipartite. In Section~\\ref{s:Nhgs} we prove\nthat then $\\ensuremath{\\mathcal{N}}[G]$ is strictly connected. Theorem~\\ref{thm:unique}.2\napplies and shows that, if $G$ is also twin-free and connected, then\n$\\ensuremath{\\mathcal{N}}[G]$ has a unique tight arc ordering.\nMoreover, we prove that any arc ordering of $\\ensuremath{\\mathcal{N}}[G]$\nis tight and, hence, unique as well.\n\nIf $\\overline{G}$ is bipartite, it is convenient to switch to\nthe complement hypergraph $\\overline{\\ensuremath{\\mathcal{N}}[G]}=\\{V(G)\\setminus N[v]\\}_{v\\in V(G)}$.\nThis hypergraph is interval. 
Applying Theorem~\\ref{thm:unique}.1\nto the connected components of $\\overline{\\ensuremath{\\mathcal{N}}[G]}$,\nwe conclude that $\\ensuremath{\\mathcal{N}}[G]$ has, up to reversing, exactly two tight arc orderings\nprovided $\\overline{G}$ is connected.\n\nIn~\\cite{KoeblerKLV11} we noticed that,\nif a proper interval graph $G$ is connected, then the hypergraph\n$\\ensuremath{\\mathcal{N}}[G]\\setminus\\{V(G)\\}$ is overlap-connected.\nThis allows to derive Roberts' aforementioned rigidity result\nfrom Theorem~\\ref{thm:unique-overlap-1}.\nIn Section~\\ref{s:ov-conn}, we use Theorem~\\ref{thm:unique-overlap-2}\nto obtain a similar result for PCA graphs:\nIf $G$ is an $n$-vertex connected PCA graph with non-bipartite complement,\nthen removal of all $(n-1)$-vertex hyperedges from $\\ensuremath{\\mathcal{N}}[G]$\ngives a strictly overlap-connected hypergraph.\n\n\n\n\\subsection{Intersection representations of graphs}\n\nA proper interval representation $\\alpha$ of a graph $G$\ndetermines a linear ordering of $V(G)$\naccordingly to the appearance of the left (or, equivalently, right)\nendpoints of the intervals $\\alpha(v)$, $v\\in V(G)$, in the intersection model.\nWe call it the \\emph{geometric order} associated with $\\alpha$.\nSimilarly, a PCA representation of $G$ determines the \\emph{geometric}\ncircular order on the vertex set. Any geometric order is a tight\ninterval or, respectively, arc ordering of $\\ensuremath{\\mathcal{N}}[G]$\n(see Lemma~\\ref{lem:geomistight}). The rigidity results\noverviewed above imply that the geometric order is unique\nfor twin-free, connected proper interval graphs and\ntwin-free, connected PCA graphs with non-bipartite complement.\nIn Section~\\ref{s:repr} we show that this holds true also\nin the case of PCA graphs with bipartite connected complement.\n\nLet us impose reasonable restrictions on proper interval and PCA\nmodels of graphs. Specifically, we always suppose that a model of an $n$-vertex\ngraph has $2n$ points and consists of intervals\/arcs that never\nshare an endpoint. It turns out that such intersection representations\nare determined by the associated geometric order uniquely up to\nreflection (and rotation in the case of arcs representations).\nThis implies that any twin-free, connected proper interval\nor PCA graph has a unique intersection representation.\n\nThe last result is implicitly contained in the work by Deng, Hell, and Huang~\\cite{DengHH96}, that relies on\na theory of local tournaments~\\cite{Huang95}; see the discussion in the end of Section~\\ref{s:repr}.\n\n\n\n\n\n\n\\section{Interval and circular-arc hypergraphs}\\label{s:hgs}\n\nLet $V=\\{v_1,\\dots,v_n\\}$.\nSaying that the sequence $v_1,\\dots,v_n$\nis \\emph{circularly ordered}, \nwe mean that $V$ is endowed with\nthe circular successor relation~$\\prec$ under which\n$v_i\\prec v_{i+1}$ for $i 3 - 4 $ fm\/$c$), the shape of the resolved splittings are nearly consistent with the charged hadron splittings. With such specially selected resolved splitting, we can now identify and specify a time within the jet clustering shower where all the splittings that come afterwards, can be described as the splitting one calculated using hadrons as opposed to a partonic splitting. 
This measurement now provides a unique method whereby one can use the resolved splittings within a jet shower, to select a specific jet topology entirely based on the time a jet spends in the perturbative sector ($\\tau^{res}_{f} < 2$ fm\/$c$), or in the non-perturbative regime ($\\tau^{res}_{f} > 5$ fm\/$c$). These results will serve as a baseline in proton-proton collisions to first identify the region of perturbative calculability and then compare different hadronization mechanisms to explore the non-perturbative sector in QCD. \n\nSpace-time tomography of the quark-gluon plasma (QGP) via jets produced in heavy ion collisions is one of the standard methods of studying the transport properties of deconfined quarks and gluons. Since the jets evolve concurrently with the QGP, selecting jet topologies with a particular formation time enables a time-dependent study of the jet-QGP interactions. Since there is a large heavy ion underlying event there is a requirement that the selection of the formation time observable in a jet be robust to the background and still be sensitive to the kinematics of the jet shower. The first SD split for jets in heavy ion collisions, with the nominal value of the grooming parameters, have been shown to be extremely sensitive to the heavy ion background~\\cite{STAR:2021kjt}. This can be overcome by utilizing a stricter grooming criterion~\\cite{Mulligan:2020tim} which affects the selection bias of the surviving jet population. Since the leading and subleading charged particles in a jet are predominantly higher momentum, their reconstruction is unaffected by the presence of the underlying event. Thus, the resolved splitting is expected to be a robust guide to selecting specific jet topologies and a formation time which can be translated to their path-length traversed in the QGP medium. \n\n\\section{Hadronization and jet substructure}\n\nIn order to study the impact of selecting on the charged hadron formation time, we look at the charge of the particles which make up the first, second and third (ordered in $p_{T}$) charged hadrons in the jet constituents. The charged particle formation time is then plotted for same charged leading and subleading hadrons with the third particle being either positively or negatively charged. The ratios of these formation times are shown in Fig~\\ref{fig4} for negatively (top) and positively (bottom) charged leading and subleading charged particles. The red circle (black box) markers in each of the panels correspond to the third charged particle being positively (negatively) charged. In each of the cases, we observe a clear separation of the formation times based on the charge of the third particle predominantly expected to oppositely charged to the leading and subleading particles. There is no dependence to the formation time on this ratio estimated from PYTHIA 8 simulations~\\cite{Bierlich:2022pfr} at jet kinematics accessible at RHIC energies. This particular separation shown in Fig~\\ref{fig4} is a feature of the Lund string breaking hadronization mechanism implemented in PYTHIA where one expects an overall balancing of electric charge in the hadrons produced due to the jet fragmentation. Ongoing experimental measurements at both RHIC and LHC are expected to highlight this feature and aim to understand hadronization mechanism via data-driven approaches. 
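For reference, a minimal sketch of how a charged-particle formation time can be estimated from the two leading charged hadrons is given below; the specific convention $\\tau \\approx 1\/(2E\\,z(1-z)(1-\\cos\\theta))$ and the massless-prong approximation are common choices in the literature and are stated here as assumptions, not as the exact prescription used in the measurements discussed above.
\\begin{verbatim}
import numpy as np

HBARC = 0.1973  # GeV fm, converts 1/GeV to fm/c

def formation_time_fm(p1, p2):
    # Estimate of the formation time (fm/c) of the splitting defined by two
    # constituent momenta p1, p2 (3-vectors in GeV), using
    # tau ~ 1 / (2 E z (1-z) (1 - cos(theta))) with massless prongs.
    # The prefactor and the use of |p| rather than pT are assumptions.
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    e1, e2 = np.linalg.norm(p1), np.linalg.norm(p2)
    E = e1 + e2
    z = min(e1, e2) / E                      # momentum-sharing fraction
    cos_theta = p1 @ p2 / (e1 * e2)          # opening angle of the pair
    return HBARC / (2.0 * E * z * (1.0 - z) * (1.0 - cos_theta))
\\end{verbatim}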
\n\n\\begin{figure}\n \\includegraphics[width=0.49\\linewidth]{Fig4a.png}\n \\includegraphics[width=0.49\\linewidth]{Fig4b.png}\n \\caption{Top: Normalized ratio of ch-particle formation time when the leading and subleading charged particles are negatively charged and the third particle, ordered in $p_{T}$ is either positive (red circles) or negative (black boxes) as a function of formation time. Bottom: Similar ratio of ch-particle formation time in the scenario that the leading and subleading charged particles are positively charged and the third highest $p_{T}$ particle is either positive (red) or negative (black).}\n \\label{fig4}\n\\end{figure}\n\n\\section{Conclusions}\n\nWe defined the concept of jet substructure and introduced many of the recently studied analysis techniques such as SoftDrop and iterative clustering. We also introduced the idea of formation times at varying stages of the jet shower and presented the selection of the resolved splittings. Recent STAR data of the formation times corresponding to the SD first splits, leading and subleading charged hadron and the resolved splits was discussed. The resolved splitting is shown to identify a particular time within the jet clustering tree wherein the splittings are consistent with the formation time estimated via hadronized charged particles. For jets at RHIC energies, the transition region from pQCD to npQCD is expected to occur $\\tau \\approx 3-5$ fm\/$c$. We finally highlighted the ability of such formation time observables to potentially study hadronization mechanisms in data by looking at the dependence of the electric charge of the particles considered in the calculation of the charged particle formation time. This work was supported by the Office of Nuclear Physics of the U.S. Department of Energy under award number DE-SC004168. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n{\nMultivariate time-series (temporal signals) have been studied in the statistics and signal processing societies for many years (e.g., see \\cite{tsay2013multivariate,gomez2016multivariate} for a non-exhaustive literature survey), and traditional analysis methods usually highly depend on some predefined models and do not consider the unknown nonlinear structure of the variables that often exists underlying the high-dimensional time samples.\nIn order to accommodate contemporary data acquisitions and collections, large research activity has been devoted to developing spatiotemporal analysis that is specifically-designed to infer this underlying structure and\/or take it explicitly into account.\nIn the last decade, perhaps the most notable attempts to handle such signals with a geometry defined by graphs are graph signal processing \\cite{shuman2013emerging,sandryhaila2013discrete,sandryhaila2014discrete,ortega2018graph}, graph neural networks \\cite{kipf2016semi,scarselli2008graph}, and geometric deep learning \\cite{bronstein2017geometric}.\nAnother prominent line of work is based on an operator-theoretic approach for dynamical systems analysis \\cite{schmid2010dynamic,budivsic2012applied,tu2014dynamic,kutz2016dynamic}, where the time samples have a manifold structure.\nStill, despite these recent efforts, to the best of our knowledge, when there is a time-varying manifold structure underlying the time samples, only a few works are available, e.g. 
\\cite{froyland2015dynamic,banisch2017understanding,froyland2020dynamic}.\n\n \nIn this work, we propose a new multi-resolution spatiotemporal analysis of multivariate time-series. In contrast to standard multi-resolution analysis using wavelets defined on Euclidean space \\cite{daubechies1992ten,mallat1999wavelet}, we present an operator-based analysis approach combining manifold learning and Riemannian geometry, which we term {\\em Riemannian multi-resolution analysis} (RMRA).\nConcretely, consider a multivariate time-series $\\{\\mathbf{x}_t\\}$. Suppose the temporal propagation of the time-series at time step $t$ can be modelled by two diffeomorphic manifolds $f_t:\\mathcal{M}_t \\rightarrow \\mathcal{M}_{t+1}$, and suppose the corresponding pairs of time samples $(\\mathbf{x}_t, \\mathbf{x}_{t+1})$ are given by $\\mathbf{x}_t[i] \\in \\mathcal{M}_t$ and $\\mathbf{x}_{t+1}[i] = f_t(\\mathbf{x}_t[i]) \\in \\mathcal{M}_{t+1}$, where $\\mathbf{x}_t[i]$ is the $i$th entry of the sample $\\mathbf{x}_t$ for $i=1,\\ldots,N$. \nNote that the entries of the samples $\\mathbf{x}_t$ lie on a manifold, and therefore, each entry is typically high-dimensional. In other words, at each time $t$, we have $N$ high-dimensional points that are distributed on the manifold $\\mathcal{M}_t$.\nOur RMRA consists of the following steps. First, we construct a diffusion operator for each time sample $\\mathbf{x}_t$, characterizing its underlying manifold $\\mathcal{M}_t$. This step is performed using a manifold learning technique, diffusion maps \\cite{Coifman2006}, that facilitates a finite-dimensional matrix approximation of the Laplacian operator of the manifold based on the time sample. This approximation is informative because the Laplacian operator is known to bear the geometric information of the manifold \\cite{berard1994embedding,jones2008manifold}. Then, for each pair of temporally consecutive time frames $(\\mathbf{x}_t,\\mathbf{x}_{t+1})$, we present two composite operators based on ``Riemannian combinations'' of the two respective diffusion operators. \nTypically, diffusion operators are not symmetric, but they are similar to symmetric positive-definite (SPD) matrices. We could thus define diffusion operators as SPD matrices, whose space is endowed with a Riemannian structure.\nTherefore, taking into account this Riemannian manifold structure for the composition of the operators is natural. \nIndeed, we show, both theoretically and in practice, that one operator enhances common components that are expressed similarly in $\\mathcal{M}_t$ and $\\mathcal{M}_{t+1}$, while the other enhances common components that are expressed differently. These properties could be viewed as analogous to low-pass and high-pass filters in this setting, leading to a spatiotemporal decomposition of the multivariate time series into ``low frequency'' and ``high frequency'' components, by considering the common components expressed similarly (resp. differently) as the slowly (resp. rapidly) varying components.\n\nTo facilitate the multi-resolution analysis of the entire temporal sequence, the construction of the composite operators is recursively repeated at different time scales. Since the composite operators are viewed as low-pass and high-pass filters, the proposed framework can be viewed as analogous to the wavelet decomposition for time-varying manifolds in the following sense. 
At each iteration, the two consecutive time samples are ``fused'' using the composite operators, ``decomposing'' the multivariate time-series into two components: one that varies slowly and one that varies rapidly. The fast varying component is viewed as the ``spectral feature'' of the ``first layer'', and the slowly varying component is ``downsampled'', decomposed again in the next iteration using the composite operators into a slow component and a fast component. Again, the fast component leads to the ``spectral feature'' of the ``second layer''. By iterating this procedure, the multivariate time series is decomposed in multiple resolutions.\n\nBroadly, the basic building block of our analysis, focusing on one time step, consists of two construction steps. First, given two consecutive time samples $(\\mathbf{x}_t, \\mathbf{x}_{t+1})$, we learn the underlying manifolds $\\mathcal{M}_t$ and $\\mathcal{M}_{t+1}$, and then, we study the (unknown) diffeomorphism $f_t$.\nWe posit that this building block can serve as an independent analysis module by itself.\nIndeed, the setting of one time step we consider can be recast as a related multiview data analysis problem (see Section \\ref{sec:related_work}). Consider two diffeomorphic manifolds $\\mathcal{M}_1$ and $\\mathcal{M}_2$ and the diffeomorphism $f:\\mathcal{M}_1 \\rightarrow \\mathcal{M}_2$. Let $x \\in \\mathcal{M}_1$ and $y=f(x)\\in \\mathcal{M}_2$. The pair $(x,y)$ could be considered as two views of some object of interest, providing distinct and complementary information. Applying the proposed two-step procedure to this case first learns the manifold of each view, and then, studies the diffeomorphism representing the relationship between the two views.\nIn \\cite{shnitzer2019recovering}, the diffeomorphism was analyzed in terms of common and unique components, which were represented by the eigenvectors of the diffusion operators.\nHere, we further characterize these spectral common components. Roughly, the common components are classified into two kinds: components that are expressed similarly in the two manifolds in the sense that they have similar eigenvalues in both manifolds, and components that are expressed differently in the sense that they have different eigenvalues. Furthermore, we refine the analysis and in addition to considering\n{\\em strictly common components}, i.e., the same eigenvectors in both manifolds, we also consider {\\em weakly common components}, i.e., similar but not the same eigenvectors. \nIn contrast to the local analysis presented in \\cite{shnitzer2019recovering}, we provide global and spectral analyses, showing that our method indeed extracts and identifies these different components.\n\nWe demonstrate the proposed RMRA on a dynamical system with a transitory double gyre configuration. We show that this framework is sensitive to the change rate of the dynamical system at different time-scales. Such a framework may be especially suitable for studying non-stationary multivariate time-series, particularly when there is a nontrivial geometric relationship among the multivariate coefficients.\nIn addition, for the purpose of multimodal data analysis, we demonstrate that the proposed Riemannian composite operators enhance common structures in remote sensing data captured using hyperspectral and LiDAR sensors.\n\nThe remainder of this paper is organized as follows. In Section \\ref{sec:related_work}, we review related work. In Section \\ref{sec:prelim}, we present preliminaries. 
In Section \\ref{sec:rmra}, we present the proposed approach for multi-resolution spatiotemporal analysis using Riemannian composition of diffusion operators. Section \\ref{sec:results} shows experimental results. In Section \\ref{sec:analysis}, we present spectral analysis of the proposed composite operators. Finally, in Section \\ref{sec:conc}, we conclude the paper.\n}\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\\section{Related work}\n\\label{sec:related_work}\n\n\\subsection{Manifold learning, diffusion maps, and diffusion wavelets}\n\n{\nManifold learning is a family of methods that consider data lying on some inaccessible manifold and provide a new low-dimensional representation of the data based on intrinsic patterns and similarities in the data \\cite{tenenbaum2000global,Roweis2000,Belkin2003,Coifman2006}.\nFrom an algorithmic viewpoint, manifold learning techniques are broadly based on two stages. The first stage is the computation of a typically positive kernel that provides a notion of similarity between the data points. The second stage is the spectral analysis of the kernel, giving rise to an embedding of the data points into a low-dimensional space.\nSuch a two-stage procedure results in aggregation of multiple pairwise similarities of data points, facilitating the extraction of the underlying manifold structure. This procedure was shown to be especially useful when there are limited high-dimensional data, plausibly circumventing the curse of dimensionality.\n\nWhile the spectral analysis of the kernels has been the dominant approach and well investigated, recent work explores different directions as well.\nOne prominent direction employs an operator-based analysis, which has led to the development of several key methods.\nArguably the first and most influential is diffusion maps \\cite{Coifman2006}\\footnote{Laplacian eigenmaps could also be considered if the diffusion time is not taken into account \\cite{Belkin2003}.}, where a transition matrix is constructed based on the kernel, forming a random walk on the dataset; such transition matrix is viewed as a diffusion operator on the data.\nThere has been abundant theoretical support for diffusion maps. For example, it was shown in \\cite{Belkin2003,hein2006uniform,singer2006graph} that the operator associated with diffusion maps converges point-wisely to the Laplace-Beltrami operator of the underlying manifold, which embodies the geometric properties of the manifold, and its eigenfunctions form a natural basis for square integrable functions on the manifold. The spectral convergence of the eigenvalues and eigenvectors of the operator associated with diffusion maps to the eigenvalues and eigenfunctions of the Laplace-Beltrami operator was first explored in \\cite{belkin2007convergence}, and recently, the $L^\\infty$ spectral convergence with convergence rate was reported in \\cite{dunson2019spectral}. See \\cite{dunson2019spectral} and references therein for additional related work in this direction. The robustness of the diffusion maps operator was studied in \\cite{el2010information,el2016graph}, and recently, its behavior under different noise levels and kernel bandwidths was explored using random matrix theory \\cite{ding2021impact}.\n\nThe propagation rules associated with this diffusion operator are in turn used for defining a new distance, the so-called diffusion distance, which was shown to be useful and informative in many domains and applications \\cite{Lafon2006,li2017efficient,TalmonMagazine,wu2014assess}. 
\nThis notion of diffusion promoted the development of well-designed and controlled anisotropic diffusions for various purposes, e.g., nonlinear independent component analysis \\cite{singer2008non}, intrinsic representations \\cite{talmon2013empirical}, reduction of stochastic dynamical systems \\cite{singer2009detecting,dsilva2016data}, and time-series forecasting \\cite{zhao2016analog} and filtering \\cite{shnitzer2016manifold}, to name but a few.\nIn another line of work, the combination and composition of diffusion operators led to the development of new manifold learning techniques for learning multiple manifolds \\cite{lederman2015alternating,shnitzer2019recovering,lindenbaum2020multi} as well as for time-series analysis \\cite{froyland2015dynamic}.\n\nA related line of work that considers multivariate time-series (high-dimensional temporal signals) introduces ways to define wavelets on graphs and manifolds, e.g., \\cite{coifman2006wavelets,hammond2011wavelets,ram2011generalized}. These techniques extend the classical wavelet analysis \\cite{mallat1999wavelet} from one or two dimensional Euclidean space to high-dimensional non-Euclidean spaces represented by graphs and manifolds. Specifically, diffusion wavelets \\cite{coifman2006wavelets} makes use of a hierarchy of diffusion operators with multiple well-designed diffusion scales organized in a dyadic tree.\nImportantly, none of these methods addresses an underlying manifold with a time-varying metric, but rather a fixed metric that exhibits different characteristics in different scales.\n}\n\n\\subsection{Manifold learning for sensor fusion}\n\n{\nThe basic building block of our RMRA is based on two diffeomorphic manifolds $\\mathcal{M}_t$ and $\\mathcal{M}_{t+1}$, which represent the temporal evolution at time $t$.\nA similar setting consisting of two diffeomorphic manifolds, say $\\mathcal{M}_1$ and $\\mathcal{M}_2$, has recently been investigate in the context of multimodal data analysis and sensor fusion.\n\nThe sensor fusion problem typically refers to the problem of harvesting useful information from data collected from multiple, often heterogeneous, sensors. \nSensor fusion is a gigantic field. One line of work focuses on the extraction, analysis and comparison of the components expressed by the different sensors for the purpose of gaining understanding of the underlying scene \\cite{murphy2018diffusion,swatantran2011mapping,czaja2016fusion}. However, due to the complex nature of such data, finding informative representations and metrics of these components by combining the information from the different sensors is challenging.\nRecently, several papers propose data fusion methods relying on manifold learning techniques and operator-based data analysis \\cite{de2005spectral,eynard2015multimodal,lederman2015alternating,shnitzer2019recovering,talmon2019latent,katz2019alternating,lindenbaum2020multi}.\nThe basic idea is that data from different modalities or views are fused by constructing kernels that represent the data from each view and operators that combine those kernels.\nDifferent approaches are considered for the combination of kernels. 
\nPerhaps the most relevant to the present work is the alternating diffusion operator, which was introduced in \\cite{lederman2015alternating,talmon2019latent} and shown to recover the common latent variables from multiple modalities.\nThis operator is defined based on a product of two diffusion operators and then used for extracting a low dimensional representation of the common components shared by the different sensors.\nOther related approaches include different combinations of graph Laplacians \\cite{de2005spectral,eynard2015multimodal}, product of kernel density estimators and their transpose for nonparametric extension of canonical correlation analysis \\cite{michaeli2016nonparametric}, and various other combinations of diffusion operators \\cite{katz2019alternating,shnitzer2019recovering,lindenbaum2020multi}. For a more comprehensive review of the different approaches, see \\cite{shnitzer2019recovering} and references therein.\n\nLargely, most existing sensor fusion algorithms, and particularly those based on kernel and manifold learning approaches, focus on the extraction and representation of the common components, in a broad sense. The sensor fusion framework proposed in \\cite{shnitzer2019recovering} extends this scope and considers both the common components and the unique components of each sensor. \nTherefore, in the context of the present work, it could be used for the analysis of the basic building block consisting of two diffeomorphic manifolds. \nHowever, similarly to the other methods described above, the kernel combination in \\cite{shnitzer2019recovering} is achieved through linear operations, thereby ignoring the prototypical geometry of the kernels.\nConversely, in this work, by taking the Riemannian structure of SPD matrices into account, we propose a new geometry-driven combination of kernels and a systematic analysis.\nThis aspect of our work could be viewed as an extension of \\cite{shnitzer2019recovering} for the purpose of sensor fusion and multimodal data analysis, in addition to the new utility for multivariate time-series analysis.\n}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Preliminaries}\\label{sec:prelim}\nIn this section we briefly present the required background for our method. \nFor further details on the theory and derivations we refer the readers to \\cite{bhatia2009positive} and \\cite{Coifman2006}.\n\n\\subsection{Riemannian {Structure} of SPD Matrices\\label{sub:bg_spd}}\n\n{\nIn many recent studies, representing the raw data using SPD matrices and taking into account their specific Riemannian geometry have shown promising results, e.g. in computer vision \\cite{bergmann2018priors,tuzel2008pedestrian}, for domain adaptation \\cite{yair2019parallel}, on medical data \\cite{barachant2013classification} and in recognition tasks \\cite{harandi2014manifold}.\nFor example, Barachant et al. \\cite{barachant2013classification} proposed a support-vector-machine (SVM) classifier that takes into account the Riemannian geometry of the features, which are SPD covariance matrices, representing Electroencephalogram (EEG) recordings. They showed that their ``geometry-aware'' classifier obtains significantly better results compared with a classifier that simply vectorizes the covariance matrices.\n\nHere, we consider the space of SPD matrices endowed with the so-called affine-invariant metric \\cite{pennec2006riemannian}. 
Using this particular Riemannian geometry results in closed-form expressions for useful properties and operations, such as the geodesic path connecting two points on the manifold \\cite{bhatia2009positive} and the logarithmic map and the exponential map \\cite{pennec2006riemannian}, which locally project SPD matrices onto the tangent space and back. \nWhile the focus is on the affine-invariant metric, which is arguably the most widely used, we remark that other geometries of SPD matrices exist, e.g., the log-Euclidean \\cite{arsigny2007geometric,quang2014log}, the log-det \\cite{sra2012new,chebbi2012means}, the log-Cholesky \\cite{lin2019riemannian}, and the Bures-Wasserstein \\cite{malago2018wasserstein,bhatia2019bures}, which could be considered as well.\nIn the context of this work, since diffusion operators are strictly positive in principal but in practice often have negligible eigenvalues, one particular advantage of the affine-invariant geometry is its existing extensions to symmetric positive semi-definite (SPSD) matrices (see Section \\ref{subsec:SPSD}).\n}\n\nConsider the set of symmetric matrices in $\\mathbb{R}^{N\\times N}$, denoted by $\\mathcal{S}_N$.\nA symmetric matrix $\\mathbf{W}\\in\\mathcal{S}_N$ is an SPD matrix if it has strictly positive eigenvalues.\nLet $\\mathcal{P}_N$ denote the set of all $N\\times N$ SPD matrices.\nThe tangent space at any point in this set is the space of symmetric matrices $\\mathcal{S}_N$. \nWe denote the tangent space at $\\mathbf{W}\\in\\mathcal{P}_N$ by $\\mathcal{T}_\\mathbf{W}\\mathcal{P}_N$.\nIn this work we consider the following affine-invariant metric in the tangent space at each matrix $\\mathbf{W}\\in\\mathcal{P}_N$, which forms a differentiable Riemannian manifold \\cite{moakher2005differential}:\n\\begin{equation}\n \\left\\langle \\mathbf{D}_1,\\mathbf{D}_2\\right\\rangle_\\mathbf{W} = \\left\\langle \\mathbf{W}^{-1\/2}\\mathbf{D}_1\\mathbf{W}^{-1\/2}, \\mathbf{W}^{-1\/2}\\mathbf{D}_2\\mathbf{W}^{-1\/2} \\right\\rangle\\label{eq:riemann_spd_metric}\n\\end{equation}\nwhere $\\mathbf{D}_1,\\mathbf{D}_2\\in\\mathcal{T}_\\mathbf{W}\\mathcal{P}_N$ denote matrices in the tangent space at $\\mathbf{W}\\in\\mathcal{P}_N$ and $\\left\\langle\\cdot,\\cdot\\right\\rangle$ is given by the standard Frobenius inner product $\\left\\langle \\mathbf{A},\\mathbf{B}\\right\\rangle=\\mathrm{Tr}\\left(\\mathbf{A}^T\\mathbf{B}\\right)$.\nUsing this metric, there is a unique geodesic path connecting any two matrices $\\mathbf{W}_1,\\mathbf{W}_2\\in\\mathcal{P}_N$ \\cite{bhatia2009positive}, which is explicitly given by:\n\\begin{equation}\n\\gamma_{\\mathbf{W}_1\\rightarrow\\mathbf{W}_2}(p)=\\mathbf{W}_1^{1\/2}\\left(\\mathbf{W}_1^{-1\/2}\\mathbf{W}_2\\mathbf{W}_1^{-1\/2}\\right)^p\\mathbf{W}_1^{1\/2}, \\ \\ p\\in[0,1],\\label{eq:riemann_spd_geodesic}\n\\end{equation}\n{The arc-length of this geodesic path defines the Riemannian distance on the manifold, \n\\begin{equation}\nd^2_R\\left(\\mathbf{W}_1,\\mathbf{W}_2\\right)=\\left\\Vert \\log\\left(\\mathbf{W}_1^{-1\/2}\\mathbf{W}_2\\mathbf{W}_1^{-1\/2}\\right)\\right\\Vert^2_F. \n\\end{equation}\nUsing the Fr\\'echet mean, we define the Riemannian mean of a set of matrices, {$\\mathbf{W}_1,\\ldots,\\mathbf{W}_n$,} by $\\arg\\min_{\\mathbf{W}\\in\\mathcal{P}_N}{\\sum_{i=1}^n} d^2_R\\left(\\mathbf{W},\\mathbf{W}_i\\right)$. 
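As a numerical illustration, the geodesic \\eqref{eq:riemann_spd_geodesic} and the distance $d_R$ can be evaluated in a few lines of code; this is a sketch assuming NumPy, and the helper names are ours.
\\begin{verbatim}
import numpy as np

def powm(W, p):
    # matrix power of a symmetric positive-definite matrix via eigendecomposition
    lam, U = np.linalg.eigh(W)
    return (U * lam**p) @ U.T

def spd_geodesic(W1, W2, p):
    # gamma_{W1->W2}(p) = W1^{1/2} (W1^{-1/2} W2 W1^{-1/2})^p W1^{1/2}
    W1h, W1ih = powm(W1, 0.5), powm(W1, -0.5)
    return W1h @ powm(W1ih @ W2 @ W1ih, p) @ W1h

def spd_distance(W1, W2):
    # d_R(W1, W2) = Frobenius norm of log(W1^{-1/2} W2 W1^{-1/2}),
    # i.e. the root sum of squared logs of the generalized eigenvalues
    W1ih = powm(W1, -0.5)
    lam = np.linalg.eigvalsh(W1ih @ W2 @ W1ih)
    return np.sqrt(np.sum(np.log(lam)**2))
\\end{verbatim}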
The Riemannian mean of two matrices is a special case, which coincides with the mid-point of the geodesic path connecting them and has the following closed form: }\n\\begin{equation}\n \\gamma_{\\mathbf{W}_1\\rightarrow\\mathbf{W}_2}(1\/2)=\\mathbf{W}_1^{1\/2}\\left(\\mathbf{W}_1^{-1\/2}\\mathbf{W}_2\\mathbf{W}_1^{-1\/2}\\right)^{1\/2}\\mathbf{W}_1^{1\/2}\n\\end{equation}\n\nThe mapping between the Riemannian manifold of SPD matrices and its tangent space is given by the exponential map and the logarithmic map.\nEach matrix $\\mathbf{D}$ in the tangent space at $\\mathbf{W}\\in\\mathcal{P}_N$ \ncan be seen as the derivative of the geodesic connecting $\\mathbf{W}$ and $\\tilde{\\mathbf{W}}=\\mathrm{Exp}_\\mathbf{W}(\\mathbf{D})$, i.e., $\\gamma_{\\mathbf{W}\\rightarrow\\tilde{\\mathbf{W}}}(p)$, at $p=0$.\nThe exponential map in this setting has a known closed-form given by \\cite{moakher2005differential}:\n\\begin{equation}\n \\mathrm{Exp}_{\\mathbf{W}}\\left(\\mathbf{D}\\right)=\\mathbf{W}^{1\/2}\\exp\\left(\\mathbf{W}^{-1\/2}\\mathbf{D}\\mathbf{W}^{-1\/2}\\right)\\mathbf{W}^{1\/2}\\label{eq:spd_expmap},\n\\end{equation}\nwhere $\\mathrm{Exp}_{\\mathbf{W}}(\\mathbf{D})\\in\\mathcal{P}_N$ and $\\exp(\\cdot)$ is applied to the eigenvalues.\nThe inverse of the exponential map is the logarithmic map, which is explicitly given by:\n\\begin{equation}\n \\mathrm{Log}_{\\mathbf{W}}(\\tilde{\\mathbf{W}})=\\mathbf{W}^{1\/2}\\log\\left(\\mathbf{W}^{-1\/2}\\tilde{\\mathbf{W}}\\mathbf{W}^{-1\/2}\\right)\\mathbf{W}^{1\/2},\\label{eq:spd_logmap}\n\\end{equation}\nwhere $\\log(\\cdot)$ is applied to the eigenvalues, $\\tilde{\\mathbf{W}}\\in\\mathcal{P}_N$, and $\\mathrm{Log}_{\\mathbf{W}}(\\tilde{\\mathbf{W}})\\in\\mathcal{T}_\\mathbf{W}\\mathcal{P}_N$.\n\n\n\\subsection{An extension to SPSD Matrices\\label{subsec:SPSD}}\n\nIn practice, the matrices of interest are often not strictly positive, but rather symmetric positive \\emph{semi}-definite (SPSD) matrices, that is symmetric matrices with non-negative eigenvalues.\nBelow is a summary of a Riemannian geometry introduced in \\cite{bonnabel2010riemannian} that extends the affine-invariant metric. We remark that the existence of such an extension serves as an additional motivation to particularly consider the affine-invariant metric over the alternatives.\n\nLet $\\mathcal{S}^+(r,N)$ denote the set of $N\\times N$ SPSD matrices of rank $r0$, where $\\left(\\mathbf{\\Delta}_1,\\mathbf{D}_1\\right),\\left(\\mathbf{\\Delta}_2,\\mathbf{D}_2\\right)\\in\\mathcal{T}_\\mathbf{W}\\mathcal{S}^+\\left(r,N\\right)$ and $\\left\\langle\\cdot,\\cdot\\right\\rangle_\\mathbf{\\Lambda}$ denotes the metric defined in \\eqref{eq:riemann_spd_metric} for SPD matrices.\n\nLet $\\mathbf{W}_1=\\mathbf{V}_1\\mathbf{\\Lambda}_1\\mathbf{V}_1^T$ and $\\mathbf{W}_2=\\mathbf{V}_2\\mathbf{\\Lambda}_2\\mathbf{V}_2^T$ denote the decompositions of two SPSD matrices $\\mathbf{W}_1,\\mathbf{W}_2\\in\\mathcal{S}^+\\left(r,N\\right)$, where $\\mathbf{V}_1,\\mathbf{V}_2\\in\\mathcal{V}_{N,r}$ and $\\mathbf{\\Lambda}_1,\\mathbf{\\Lambda}_2\\in\\mathcal{P}_r$.\nThe closed-form expression of the geodesic path connecting any two such matrices in $\\mathcal{S}^+\\left(r,N\\right)$ using the metric in \\eqref{eq:SPSDmetric} is unknown. 
\nHowever, the following approximation of it was proposed in \\cite{bonnabel2010riemannian}.\nDenote the singular value decomposition (SVD) of $\\mathbf{V}_2^T\\mathbf{V}_1$ by $\\mathbf{O}_2\\mathbf{\\Sigma}\\mathbf{O}_1^T$, where $\\mathbf{O}_1,\\mathbf{O}_2\\in\\mathbb{R}^{r\\times r}$ and $\\mathrm{diag}(\\mathbf{\\Sigma})$ are the cosines of the principal angles between $\\mathrm{range}(\\mathbf{W}_1)$ and $\\mathrm{range}(\\mathbf{W}_2)$, where $\\mathrm{range}(\\mathbf{W})$ denotes the column space of $\\mathbf{W}$.\nDefine $\\mathbf{\\Theta}=\\arccos\\left(\\mathbf{\\Sigma}\\right)$, which is a diagonal matrix of size $r\\times r$ with the principal angles between the two subspaces on its diagonal.\nThe approximation of the geodesic path connecting two points in $\\mathcal{S}^+\\left(r,N\\right)$ is then given by:\n\\begin{equation}\n \\tilde{\\gamma}_{\\mathbf{W}_1\\rightarrow\\mathbf{W}_2}(p)=\\mathbf{U}_{\\mathbf{W}_1\\rightarrow\\mathbf{W}_2}(p)\\mathbf{R}_{\\mathbf{W}_1\\rightarrow\\mathbf{W}_2}(p)\\mathbf{U}^T_{\\mathbf{W}_1\\rightarrow\\mathbf{W}_2}(p), \\ \\ p\\in[0,1]\\label{eq:riemann_spsd_geodesic}\n\\end{equation}\nwhere $\\mathbf{R}_{\\mathbf{W}_1\\rightarrow\\mathbf{W}_2}(p)$ is the geodesic path connecting SPD matrices as defined in \\eqref{eq:riemann_spd_geodesic} calculated between the matrices $\\mathbf{R}_1=\\mathbf{O}_1^T\\mathbf{\\Lambda}_1\\mathbf{O}_1$ and $\\mathbf{R}_2=\\mathbf{O}_2^T\\mathbf{\\Lambda}_2\\mathbf{O}_2$, i.e. $\\mathbf{R}_{\\mathbf{W}_1\\rightarrow\\mathbf{W}_2}(p)=\\gamma_{\\mathbf{R}_1\\rightarrow\\mathbf{R}_2}(p)$, and $\\mathbf{U}_{\\mathbf{W}_1\\rightarrow\\mathbf{W}_2}(p)$ is the geodesic connecting $\\mathrm{range}(\\mathbf{W}_1)$ and $\\mathrm{range}(\\mathbf{W}_2)$ on the Grassman manifold (the set of $r$ dimensional subspaces of $\\mathbb{R}^N$), defined by:\n\\begin{equation}\n \\mathbf{U}_{\\mathbf{W}_1\\rightarrow\\mathbf{W}_2}(p) = \\mathbf{U}_1\\cos\\left(\\mathbf{\\Theta}p\\right)+\\mathbf{X}\\sin\\left(\\mathbf{\\Theta}p\\right)\\label{eq:grassman_geodesic}\n\\end{equation}\nwhere $\\mathbf{U}_1=\\mathbf{V}_1\\mathbf{O}_1$, $\\mathbf{U}_2=\\mathbf{V}_2\\mathbf{O}_2$ and $\\mathbf{X}=\\left(\\mathrm{I}-\\mathbf{U}_1\\mathbf{U}_1^T\\right)\\mathbf{U}_2\\left(\\sin\\left(\\mathbf{\\Theta}\\right)\\right)^{\\dagger}$, with $(\\cdot)^\\dagger$ denoting the Moore-Penrose pseudo-inverse.\n\n\n\n\n\\subsection{Diffusion Operator\\label{sub:meas_rep}}\n{As described in Section \\ref{sec:related_work}, most manifold learning methods, and particularly diffusion maps \\cite{Coifman2006}, are based on positive kernel matrices. Here, we briefly present the construction of such a kernel, which we term the diffusion operator, as proposed in \\cite{Coifman2006}. In the sequel, we employ this diffusion operator in our framework {to recover the geometry} underlying each modality.}\n\nGiven a set of $N$ points, $\\{\\mathbf{x}[i]\\}_{i=1}^N$, which are sampled from some hidden manifold $\\mathcal{M}$ embedded in $\\mathbb{R}^n$, consider the following affinity kernel matrix $\\mathbf{K}\\in\\mathbb{R}^{N\\times N}$, whose $(i,j)$th entry is given by:\n\\begin{equation}\n\\mathrm{K}[i,j] = \\exp\\left(-\\frac{\\left\\Vert\\mathbf{x}[i]-\\mathbf{x}[j]\\right\\Vert_2^2}{\\sigma^2}\\right),\\label{eq:SPDkernK}\n\\end{equation}\nwhere $\\left\\Vert\\cdot\\right\\Vert_2$ denotes the $\\ell_2$ norm and $\\sigma$ denotes the kernel scale, typically set to the median of the Euclidean distances between the sample points multiplied by some scalar. 
{By Bochner's theorem, $\\mathbf{K}$ is an SPD matrix.}\nThe kernel is normalized twice according to:\n\\begin{eqnarray}\n\\widehat{\\mathbf{W}} & = & \\widehat{\\mathbf{D}}^{-1}\\ \\mathbf{K}\\ \\widehat{\\mathbf{D}}^{-1}\\nonumber\\\\\n\\mathbf{W} & = & \\mathbf{D}^{-1\/2}\\ \\widehat{\\mathbf{W}}\\ \\mathbf{D}^{-1\/2},\\label{eq:SPDKern}\n\\end{eqnarray}\nwhere $\\widehat{\\mathbf{D}}$ and $\\mathbf{D}$ are diagonal matrices with $\\widehat{\\mathrm{D}}[i,i]=\\sum_{j=1}^N \\mathrm{K}[i,j]$ and $\\mathrm{D}[i,i]=\\sum_{j=1}^N \\widehat{\\mathrm{W}}[i,j]$, respectively.\n\nThe matrix $\\mathbf{W}$ defined in \\eqref{eq:SPDKern} is similar to the diffusion {operator considered in} \\cite{Coifman2006}{, which we call the diffusion maps operator for simplicity,} with a normalization that removes the point density influence, given by $\\mathbf{W}_{\\texttt{DM}}=\\mathbf{D}^{-1}\\widehat{\\mathbf{W}}$.\nDue to this similarity, the matrix $\\mathbf{W}$ and the diffusion maps operator $\\mathbf{W}_{\\texttt{DM}}$ share the same eigenvalues and their eigenvectors are related by $\\psi_{\\texttt{DM}}=\\mathbf{D}^{-1\/2}\\psi$ and $\\tilde{\\psi}_{\\texttt{DM}}=\\mathbf{D}^{1\/2}\\psi$, where $\\psi$ denotes an eigenvector of $\\mathbf{W}$ and $\\psi_{DM}$ and $\\tilde{\\psi}_{\\texttt{DM}}$ denote the right and left eigenvectors of $\\mathbf{W}_{\\texttt{DM}}$, respectively.\n\n\n\n\n\n\n\n\n\\section{Riemannian Multi-resolution Analysis}\\label{sec:rmra}\n\nWe are ready to present our multi-resolution framework for multivariate time-series analysis from a manifold learning perspective. In this section, we focus on the algorithmic aspect, and in Section \\ref{sec:analysis} we present the theoretical justification. We start by introducing the Riemannian composition of two operators that capture the relationship between two datasets sampled from two underlying diffeomorphic manifolds. Then, we generalize the setting to a sequence of datasets in time by presenting a wavelet-like analysis using the composite operators. Finally, we conclude this section with important implementation remarks.\n\n\n\\subsection{Riemannian composition of two operators\\label{sub:op_def}}\n\n{Consider two datasets of $N$ points denoted by $\\{\\mathbf{x}_1[i]\\}_{i=1}^N,\\{\\mathbf{x}_2[i]\\}_{i=1}^N$. Suppose there is some correspondence between the datasets and that they are ordered according to this correspondence, i.e., the two points $\\mathbf{x}_1[i]$ and $\\mathbf{x}_2[i]$ correspond. Such a correspondence could be the result of simultaneous recording from two, possibly different, sensors.\nWe aim to recover the common structures in these datasets and characterize their expression in each dataset. Specifically, we consider two types of common components: common components that are expressed similarly in the two datasets and common components that are expressed differently.}\n\nTo this end, we propose a two-step method.\nFirst, we assume that each dataset lies on some manifold and we characterize its underlying geometry using a diffusion operator constructed according to \\eqref{eq:SPDKern}. 
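A minimal sketch of the construction in \\eqref{eq:SPDkernK}--\\eqref{eq:SPDKern} is the following; SciPy is assumed only for the pairwise distances, the function name is illustrative, and the bandwidth follows the median heuristic described in Section \\ref{sub:meas_rep}.
\\begin{verbatim}
import numpy as np
from scipy.spatial.distance import pdist, squareform

def diffusion_operator(X, scale=1.0):
    # X: (N, d) array whose rows are the points x[1],...,x[N]
    d2 = squareform(pdist(X, 'sqeuclidean'))
    sigma = scale * np.median(np.sqrt(d2[d2 > 0]))     # kernel scale
    K = np.exp(-d2 / sigma**2)                         # affinity kernel
    r = K.sum(axis=1)
    W_hat = K / np.outer(r, r)                         # D_hat^{-1} K D_hat^{-1}
    s = W_hat.sum(axis=1)
    return W_hat / np.sqrt(np.outer(s, s))             # D^{-1/2} W_hat D^{-1/2}
\\end{verbatim}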
\nThis results in two SPD matrices denoted by $\\mathbf{W}_1$ and $\\mathbf{W}_2$ (see Section \\ref{sub:bg_spd}).\nThen, we propose to ``fuse'' the two datasets by considering compositions of $\\mathbf{W}_1$ and $\\mathbf{W}_2$ based on Riemannian geometry.\nIn contrast to previous studies that consider linear combinations involving addition, subtraction, and multiplication, e.g., \\cite{lederman2015alternating,shnitzer2019recovering,lindenbaum2020multi}, which often results in non-symmetric or non-positive matrices violating the fundamental geometric structure of diffusion operators, our Riemannian compositions yield symmetric and SPD matrices. \n{{Specifically, we define} two new operators by:\n\\begin{eqnarray}\n\\mathbf{S}_{p} & = & \\mathbf{W}_1\\#_{p}\\mathbf{W}_2=\\mathbf{W}_1^{1\/2}\\left(\\mathbf{W}_1^{-1\/2}\\mathbf{W}_2\\mathbf{W}_1^{-1\/2}\\right)^{p}\\mathbf{W}_1^{1\/2},\\label{eq:Sr}\\\\\n\\mathbf{F}_{p} & = & \\mathrm{Log}_{\\mathbf{S}_{p}}\\left(\\mathbf{W}_1\\right)=\\mathbf{S}_{p}^{1\/2}\\log\\left(\\mathbf{S}_{p}^{-1\/2}\\mathbf{W}_1\\mathbf{S}_{p}^{-1\/2}\\right)\\mathbf{S}_{p}^{1\/2},\\label{eq:Ar}\n\\end{eqnarray}\nwhere $0 \\le p \\le 1$ denotes the position along the geodesic path connecting $\\mathbf{W}_1$ and $\\mathbf{W}_2$ on the SPD manifold, $\\mathbf{W}_1\\#_{p}\\mathbf{W}_2$ with $p=1\/2$ denotes the midpoint on this geodesic, and $\\mathrm{Log}_{\\mathbf{S}_{p}}\\left(\\mathbf{W}_1\\right)$ denotes the logarithmic map, projecting the matrix $\\mathbf{W}_1$ onto the tangent space of the SPD manifold at point $\\mathbf{S}_{p}=\\mathbf{W}_1\\#_{p}\\mathbf{W}_2$. Figure \\ref{fig:SA_vis} presents an illustration of the definitions of the operators on the Riemannian manifold of SPD matrices and its tangent space.\n\n\nIntuitively, {$\\mathbf{S}$} describes the mean of the two matrices, so it enhances the components that are expressed similarly; that is, common eigenvectors with similar eigenvalues. Conversely, $\\mathbf{F}_{p}$ can be seen as the difference of $\\mathbf{S}_{p}$ and $\\mathbf{W}_1$ along the geodesic connecting {them}, and therefore, it is related to the components expressed differently; that is, the common eigenvectors with different eigenvalues. In Section \\ref{sec:analysis} we {provide a theoretical justification for the above statements}.\n\nWe remark that $\\mathbf{F}_{p}$ is symmetric but not positive-definite, since it is defined as a projection of an SPD matrix onto the tangent space, and that using $\\mathbf{W}_2$ instead of $\\mathbf{W}_1$ in the projection leads only to a change of sign.\nIn addition, given $\\mathbf{S}_p$ and $\\mathbf{F}_p$, both SPD matrices $\\mathbf{W}_1$ and $\\mathbf{W}_2$ can be reconstructed using the exponential map $\\mathrm{Exp}_{\\mathbf{S}_p}\\left(\\pm\\mathbf{F}_p\\right)$ (as defined in \\eqref{eq:spd_expmap}).}\nFor simplicity of notations, we focus in the following on $\\mathbf{F}_p$ and $\\mathbf{S}_p$ with $p=0.5$ and omit the notation of $p$. The extension to other values of $p$ is straightforward.\n\n\n\\begin{figure}[t]\n\\centering\n\\subfloat[]{\\includegraphics[trim=30 10 30 20,clip,width=0.5\\textwidth]{figures\/SA_vis\/S_operator_vis-eps-converted-to.pdf}}\n\\subfloat[]{\\includegraphics[trim=30 60 30 70,clip,width=0.5\\textwidth]{figures\/SA_vis\/F_operator_vis-eps-converted-to.pdf}}\n\\caption{Illustration of the definitions of the operators $\\mathbf{S}$ and $\\mathbf{F}$. 
(a) Illustration of the operator $\\mathbf{S}$ (blue point) on the geodesic path (solid line in magenta) connecting $\\mathbf{W}_1$ and $\\mathbf{W}_2$ (magenta points) on the manifold of SPD matrices (gray surface represents one level-set of the Riemannian manifold of SPD matrices). The dashed gray line denotes the shortest Euclidean path connecting the two matrices. (b) Illustration of the operator $\\mathbf{F}$ (cyan point) on the tangent space at $\\mathbf{S}$ (colored plane). Both plots present the same region in different orientations. \n\\label{fig:SA_vis}}\n\\end{figure}\n\n\n{In the second step, we} propose new embeddings of the data points, representing the common components between the two datasets based on the operators $\\mathbf{S}$ and $\\mathbf{F}$.\nSince both operators are symmetric, their eigenvalues and eigenvectors are real, and the eigenvectors are orthogonal. \nDenote the eigenvalues and eigenvectors of the operator $\\mathbf{S}$ by $\\lambda_n^{(\\mathbf{S})}$ and $\\psi_n^{(\\mathbf{S})}$, respectively, and the eigenvalues and eigenvectors of the operator $\\mathbf{F}$ by $\\lambda_n^{(\\mathbf{F})}$ and $\\psi_n^{(\\mathbf{F})}$, respectively, where $n=1,\\dots,N$.\nThe new embeddings are constructed based on the eigenvectors of $\\mathbf{S}$ and $\\mathbf{F}$ by taking the $M\\leq N$ leading eigenvectors, i.e. eigenvectors that correspond to the $M$ largest eigenvalues (in absolute value for $\\mathbf{F}$), which are organized in decreasing order $\\lambda^{(\\mathbf{S})}_1\\geq\\lambda^{(\\mathbf{S})}_2\\geq\\dots\\geq\\lambda^{(\\mathbf{S})}_M$ and $\\lambda^{(\\mathbf{F})}_1\\geq\\lambda^{(\\mathbf{F})}_2\\geq\\dots\\geq\\lambda^{(\\mathbf{F})}_M$.\nThe new embeddings are defined by:\n\\begin{eqnarray}\n(\\mathbf{x}_1[i],\\mathbf{x}_2[i]) & \\mapsto & \\Psi^{(\\mathbf{S})}[i]=\\left\\lbrace\\psi^{(\\mathbf{S})}_1[i],\\dots,\\psi^{(\\mathbf{S})}_M[i]\\right\\rbrace\\\\\n(\\mathbf{x}_1[i],\\mathbf{x}_2[i]) & \\mapsto & \\Psi^{(\\mathbf{F})}[i]=\\left\\lbrace\\psi^{(\\mathbf{F})}_1[i],\\dots,\\psi^{(\\mathbf{F})}_M[i]\\right\\rbrace.\n\\end{eqnarray}\n\n{Algorithm \\ref{alg:twomod} summarizes the above two-step operator and embedding construction.}\nIn Section \\ref{subsub:ills_exmp}, we demonstrate the properties of the operators $\\mathbf{S}$ and $\\mathbf{F}$ and the proposed embeddings on an illustrative toy example. In Section \\ref{sec:analysis}, we present some analysis.\n\n{As a final remark, we note that} other SPD kernels and matrices {may be considered} instead of the proposed diffusion operators, e.g. 
covariance or correlation matrices, which can simply substitute $\\mathbf{W}_1$ and $\\mathbf{W}_2$ in the above definitions.\n\n\\begin{algorithm}[hbt!]\n\\caption{{Operator composition and spectral embedding based on Riemannian geometry}}\n\\label{alg:twomod}\n\\textbf{Input:} Two datasets $\\{\\mathbf{x}_1[i]\\}_{i=1}^N,\\{\\mathbf{x}_2[i]\\}_{i=1}^N$; {the embedding dimension $M$}\\\\\n\\textbf{Output:} Operators $\\mathbf{S}$ and $\\mathbf{F}$ and new embeddings\n$\\mathbf{\\Psi}^{(\\mathbf{S})}$ and $\\mathbf{\\Psi}^{(\\mathbf{F})}$\n\\begin{algorithmic}[1]\n\\Statex\n\\State{Construct a diffusion operator for each dataset, $\\mathbf{W}_\\ell\\in\\mathbb{R}^{N\\times N}$, $\\ell=1,2$, according to \\eqref{eq:SPDkernK} and \\eqref{eq:SPDKern}}\n\\Statex\n\\State{Build operators $\\mathbf{S}$ and $\\mathbf{F}$:}\n\\State{$\\ \\ \\ \\ \\ \\mathbf{S}=\\mathbf{W}_1^{1\/2}\\left(\\mathbf{W}_1^{-1\/2}\\mathbf{W}_2\\mathbf{W}_1^{-1\/2}\\right)^{1\/2}\\mathbf{W}_1^{1\/2}$}\n\\State{$\\ \\ \\ \\ \\ \\mathbf{F}=\\mathbf{S}^{1\/2}\\log\\left(\\mathbf{S}^{-1\/2}\\mathbf{W}_1\\mathbf{S}^{-1\/2}\\right)\\mathbf{S}^{1\/2}$}\n\\Statex\n\\State{Compute the eigenvalue decomposition of the operators $\\mathbf{S}$ and $\\mathbf{F}$}\n\\Statex\n\\State{Take the $M$ largest eigenvalues (in absolute value) and order them such that $\\lambda^{(\\mathbf{S})}_1\\geq\\lambda^{(\\mathbf{S})}_2\\geq\\dots\\geq\\lambda^{(\\mathbf{S})}_M$ and $\\lambda^{(\\mathbf{F})}_1\\geq\\lambda^{(\\mathbf{F})}_2\\geq\\dots\\geq\\lambda^{(\\mathbf{F})}_M$}\n\\Statex\n\\State{Take the corresponding $M$ eigenvectors of $\\mathbf{S}$ and define:\\\\\n$\\mathbf{\\Psi}^{(\\mathbf{S})}=\\left\\lbrace\\psi^{(\\mathbf{S})}_1,\\dots,\\psi^{(\\mathbf{S})}_M\\right\\rbrace\\in\\mathbb{R}^{N\\times M}$}\n\\Comment{{Capture} similarly expressed common components}\n\\Statex\n\\State{Take the corresponding $M$ eigenvectors of $\\mathbf{F}$ and define:\\\\\n$\\mathbf{\\Psi}^{(\\mathbf{F})}=\\left\\lbrace\\psi^{(\\mathbf{F})}_1,\\dots,\\psi^{(\\mathbf{F})}_M\\right\\rbrace\\in\\mathbb{R}^{N\\times M}$}\n\\Comment{{Capture} differently expressed common components}\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\n\n\n\\subsection{Operator-based analysis of a sequence of datasets\\label{sub:rmra}}\n\n\n{Let $\\{\\mathrm{x}_t[i]\\}_{i=1}^{N}$ denote a temporal sequence of datasets, where $t=1,\\dots,T=2^m$, $m\\in\\mathbb{N}$, denotes time, and $\\mathrm{x}_t[i]\\in\\mathbb{R}^d$ is the $i$-th point sampled at time $t$.\nConsidering first just two consecutive datasets $\\{\\mathrm{x}_t[i]\\}_{i=1}^{N}$ and $\\{\\mathrm{x}_{t+1}[i]\\}_{i=1}^{N}$ is analogous to the setting presented in Section \\ref{sub:op_def}. Applying the same analysis gives rise to the operators $\\mathbf{S}$ and $\\mathbf{F}$ corresponding to \n$\\{\\mathrm{x}_t[i]\\}_{i=1}^{N}$ and $\\{\\mathrm{x}_{t+1}[i]\\}_{i=1}^{N}$, which facilitate the extraction of the two types of underlying common components. Unlike the general setting in Section \\ref{sub:op_def}, the temporal order of the two datasets considered here allows us to view the common components that are expressed similarly and extracted by $\\mathbf{S}$ as the slowly changing components. Similarly, the common components that are expressed differently and extracted by $\\mathbf{F}$ are considered as rapidly changing components.}\n\n\nThe above description constitutes the basic building block of our analysis. 
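A compact sketch of this building block, namely Algorithm \\ref{alg:twomod} applied to two consecutive diffusion operators, is given below. It assumes NumPy, repeats the matrix-power helper of Section \\ref{sub:bg_spd} for self-containment, and uses illustrative function names; in practice, near-zero eigenvalues of the diffusion operators may require regularization or the SPSD machinery of Section \\ref{subsec:SPSD}.
\\begin{verbatim}
import numpy as np

def powm(W, p):                          # symmetric matrix power
    lam, U = np.linalg.eigh(W)
    return (U * lam**p) @ U.T

def logm_sym(W):                         # symmetric matrix logarithm
    lam, U = np.linalg.eigh(W)
    return (U * np.log(lam)) @ U.T

def riemannian_composition(W1, W2, p=0.5):
    # S_p = W1 #_p W2 (point on the geodesic) and F_p = Log_{S_p}(W1) (log map)
    W1h, W1ih = powm(W1, 0.5), powm(W1, -0.5)
    S = W1h @ powm(W1ih @ W2 @ W1ih, p) @ W1h
    Sh, Sih = powm(S, 0.5), powm(S, -0.5)
    F = Sh @ logm_sym(Sih @ W1 @ Sih) @ Sh
    return S, F

def leading_eigenvectors(A, M):
    # M eigenvectors of a symmetric operator with largest |eigenvalue|
    lam, U = np.linalg.eigh(A)
    idx = np.argsort(-np.abs(lam))[:M]
    return lam[idx], U[:, idx]
\\end{verbatim}
Feeding the resulting $\\mathbf{S}$ operators back into the same routine is exactly the recursion used for the coarser levels below.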
\nWith that in mind, we proceed to the construction of the proposed multi-resolution analysis of the entire sequence.\nAt the first step, we build a diffusion operator according to \\eqref{eq:SPDKern} for the dataset $\\{\\mathrm{x}_t[i]\\}_{i=1}^{N}$ at each time point $t$, resulting in $T$ kernels $\\mathbf{W}_t\\in\\mathbb{R}^{N\\times N}$, $t=1,\\dots,T$.\nThen, for every pair of consecutive time-points, $(2t-1,2t)$, $t=1,\\dots,T\/2$, we construct the two operators\\footnote{Note that the operator underscore notation now denotes the time index rather than the geodesic curve parameter $p$ as in Section \\ref{sub:op_def}.} \n$\\mathbf{S}_t^{(1)}$ and $\\mathbf{F}_t^{(1)}$ according to \\eqref{eq:Sr} and \\eqref{eq:Ar} with $p=0.5$. \nThese $2\\times T\/2$ operators represent the fine level, denoted $\\ell=1$, of the multi-resolution framework and recover components which are common to consecutive time-frames.\nAt coarser (higher) levels, i.e. for $\\ell>1$, the operators are constructed according to \\eqref{eq:Sr} and \\eqref{eq:Ar} (with $p=0.5$) using the operators from the previous level as input. Specifically, at level $\\ell>1$, the operators are given by\n\\begin{eqnarray}\n\\mathbf{S}^{(\\ell)}_t&=&\\mathbf{S}_{2t-1}^{(\\ell-1)}\\#\\mathbf{S}_{2t}^{(\\ell-1)}\\label{eq:Swav}\\\\\n\\mathbf{F}_t^{(\\ell)}&=&\\mathrm{Log}_{\\mathbf{S}_t^{(\\ell)}}\\left(\\mathbf{S}_{2t-1}^{(\\ell-1)}\\right),\\label{eq:Awav}\n\\end{eqnarray}\nwhere $t=1,\\dots,T\/2^\\ell$ and $\\ell=2,\\dots,\\log_2T$.\nAt each level, only the operator $\\mathbf{S}$ is used to construct the operators of the next level. The reason for this construction choice is that $\\mathbf{S}$ enhances similarly expressed common components. \nIn the present setting, the similarly expressed common components of consecutive time frames are in fact components that change slowly in time.\nTherefore, using the operators $\\mathbf{S}^{(\\ell-1)}_t$, $t=1,\\dots,T\/2^{\\ell-1}$, to construct the operators of the coarser level, $\\ell$, has a smoothing effect.\nThis is analogous to the construction of the ordinary wavelet decomposition, where the outputs of the low-pass filters at each level are used as inputs to the coarser level.\nIn this analogy, the operator $\\mathbf{F}^{(\\ell)}_t$ can be viewed as a high-pass filter, since it enhances components that are expressed significantly different in consecutive time frames, i.e. 
rapidly changing components.\n\nSimilarly to the embedding defined based on $\\mathbf{S}$ and $\\mathbf{F}$ in Subsection \\ref{sub:op_def}, we define an embedding based on the eigenvectors of the operators $\\mathbf{S}^{(\\ell)}_t$ and $\\mathbf{F}^{(\\ell)}_t$ at different levels and time-frames.\nDenote the eigenvalues and eigenvectors of the operator $\\mathbf{S}^{(\\ell)}_t$ by $\\lambda_n^{(\\mathbf{S}^{(\\ell)}_t)}$ and $\\psi_n^{(\\mathbf{S}^{(\\ell)}_t)}$, respectively, and the eigenvalues and eigenvectors of the operator $\\mathbf{F}^{(\\ell)}_t$ by $\\lambda_n^{(\\mathbf{F}^{(\\ell)}_t)}$ and $\\psi_n^{(\\mathbf{F}^{(\\ell)}_t)}$, respectively, where $n=1,\\dots,N$ and the eigenvalues are ordered in decreasing magnitude.\nThe embedding at each level and each time frame is then defined by taking the $M\\leq N$ leading eigenvectors of the operators as follows:\n\\begin{eqnarray}\n\\mathbf{x}_t[i] & \\rightarrow & \\Psi^{(\\mathbf{S}^{(\\ell)}_t)}[i]=\\left\\lbrace\\psi^{(\\mathbf{S}^{(\\ell)}_t)}_1[i],\\dots,\\psi^{(\\mathbf{S}^{(\\ell)}_t)}_M[i]\\right\\rbrace\\\\\n\\mathbf{x}_t[i] & \\rightarrow & \\Psi^{(\\mathbf{F}^{(\\ell)}_t)}[i]=\\left\\lbrace\\psi^{(\\mathbf{F}^{(\\ell)}_t)}_1[i],\\dots,\\psi^{(\\mathbf{F}^{(\\ell)}_t)}_M[i]\\right\\rbrace,\n\\end{eqnarray}\nwhere $t=1,\\dots,T\/2^\\ell$ and $\\ell=1,\\dots,\\log_2T$.\nThese embedding coordinates capture the slowly varying components ($\\Psi^{(\\mathbf{S}^{(\\ell)}_t)}$) and the rapidly varying components ($\\Psi^{(\\mathbf{F}^{(\\ell)}_t)}$) at each time frame $t$ and each level $\\ell$.\n\nThe proposed algorithm is summarized in Algorithm \\ref{alg:wavelet}.\n\n\\begin{algorithm}[hbt!]\n\\caption{Riemannian multi-resolution analysis algorithm}\n\\label{alg:wavelet}\n\\textbf{Input:} A time-varying dataset $\\{\\mathrm{x}_t[i]\\}_{i=1}^N$, $\\mathrm{x}_t[i]\\in\\mathbb{R}^d$, $t=1,\\dots,T$.\\\\\n\\textbf{Output:} Operators $\\{\\mathbf{S}_t^{(\\ell)},\\mathbf{F}_t^{(\\ell)}\\}_{t=1}^{T\/2^\\ell}$ and new representations for each level\\\\\n$\\{\\mathbf{\\Psi}^{(\\mathbf{S}_t^{(\\ell)})},\\mathbf{\\Psi}^{(\\mathbf{F}_t^{(\\ell)})}\\}_{t=1}^{T\/2^\\ell}$, where $\\ell=1,\\dots,\\log_2T$\n\\begin{algorithmic}[1]\n\\Statex\n\\State{Construct an SPD kernel representing each time point $t$, denoted by $\\{\\mathbf{W}_t\\}_{t=1}^T$, according to \\eqref{eq:SPDKern}.}\n\\Statex\n\\For{$t=1:T\/2$} \\Comment{Construct the operators for level $1$}\n\t\\State{$\\mathbf{S}_t^{(1)}=\\mathbf{W}_{2t-1}\\#_{0.5}\\mathbf{W}_{2t}$}\n\t\\State{$\\mathbf{F}_t^{(1)}=\\mathrm{Log}_{\\mathbf{S}_t^{(1)}}\\left(\\mathbf{W}_{2t-1}\\right)$}\n\\EndFor\n\\For{$\\ell=2:\\log_2T$} \\Comment{Construct the operators for level $\\ell$}\n\\For{$t=1:T\/2^\\ell$}\n\t\\State{$\\mathbf{S}_t^{(\\ell)}=\\mathbf{S}_{2t-1}^{(\\ell-1)}\\#_{0.5}\\mathbf{S}_{2t}^{(\\ell-1)}$}\n\t\\State{$\\mathbf{F}_t^{(\\ell)}=\\mathrm{Log}_{\\mathbf{S}_t^{(\\ell)}}\\left(\\mathbf{S}_{2t-1}^{(\\ell-1)}\\right)$}\n\\EndFor\n\\EndFor\n\\Statex\n\\For{$\\ell=1:\\log_2T$} \\Comment{Construct the new representations}\n\\For{$t=1:T\/2^\\ell$}\n\t\\State{$\\Psi^{(\\mathbf{S}_t^{(\\ell)})}=\\left[\\psi^{(\\mathbf{S}_t^{(\\ell)})}_1,\\dots,\\psi^{(\\mathbf{S}_t^{(\\ell)})}_M\\right]$}\n\t\\State{$\\Psi^{(\\mathbf{F}_t^{(\\ell)})}=\\left[\\psi^{(\\mathbf{F}_t^{(\\ell)})}_1,\\dots,\\psi^{(\\mathbf{F}_t^{(\\ell)})}_M\\right]$}\n\t\\State{where $\\psi^{(\\mathbf{S}_t^{(\\ell)})}$ and $\\psi^{(\\mathbf{F}_t^{(\\ell)})}$ are the eigenvectors of $\\mathbf{S}_t^{(\\ell)}$ and 
$\\mathbf{F}_t^{(\\ell)}$.}\n\\EndFor\n\\EndFor\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\n\n\\subsection{Implementation Remarks\\label{sec:implement}}\n\n{The numerical implementation of the proposed algorithm, particularly the Riemannian composition of operators, needs some elaboration. While the diffusion operators we consider in \\eqref{eq:SPDKern} are SPD matrices by definition,} in practice, some of their eigenvalues could be {close to zero numerically,} forming in effect SPSD matrices instead.\nIn order to address this issue, we propose an equivalent definition of the operators $\\mathbf{S}$ and $\\mathbf{F}$ for SPSD matrices of fixed rank, based on the Riemannian metric and mean that were introduced in \\cite{bonnabel2010riemannian}.\n\n\nBased on the approximated geodesic path in \\eqref{eq:riemann_spsd_geodesic}, we define the operators $\\mathbf{S}$ and $\\mathbf{F}$ for SPSD matrices as follows. First, define\n\\begin{eqnarray}\n \\mathbf{S} & = & \\tilde{\\gamma}_{\\mathbf{W}_1\\rightarrow\\mathbf{W}_2}(0.5)=\\mathbf{U}_{\\mathbf{W}_1\\rightarrow\\mathbf{W}_2}(0.5)\\left(\\mathbf{R}_1\\#_{0.5}\\mathbf{R}_2\\right)\\mathbf{U}^T_{\\mathbf{W}_1\\rightarrow\\mathbf{W}_2}(0.5)\\,.\n\\end{eqnarray}\n{Next, evaluate $\\mathbf{V}_1^T\\mathbf{V}_\\mathbf{S}=\\mathbf{O}_\\mathbf{S}\\tilde{\\mathbf{\\Sigma}}\\tilde{\\mathbf{O}}^T_1$ by SVD, where $\\mathbf{\\Lambda}_\\mathbf{S}$ and $\\mathbf{V}_\\mathbf{S}$ denote the eigenvalues and eigenvectors of $\\mathbf{S}$, respectively. Also, define $\\mathbf{R}_\\mathbf{S}:=\\mathbf{O}_\\mathbf{S}^T\\mathbf{\\Lambda}_\\mathbf{S}\\mathbf{O}_\\mathbf{S}$.\nThen, define}\n\\begin{eqnarray}\n \\mathbf{F} & = & \\mathbf{U}_{\\mathbf{S}\\rightarrow\\mathbf{W}_1}(1)\\mathrm{Log}_{\\mathbf{R}_\\mathbf{S}}\\left(\\tilde{\\mathbf{O}}_1^T\\mathbf{\\Lambda}_1\\tilde{\\mathbf{O}}_1\\right)\\mathbf{U}^T_{\\mathbf{S}\\rightarrow\\mathbf{W}_1}(1)\n\\end{eqnarray}\nwhere $\\mathbf{U}_{\\mathbf{S}\\rightarrow\\mathbf{W}_1}(p)$ is defined for matrices $\\mathbf{S}$ and $\\mathbf{W}_1$ correspondingly to \\eqref{eq:grassman_geodesic} and the derivations leading to it.\nIntuitively, this can be viewed as applying the operators $\\mathbf{S}$ and $\\mathbf{F}$ to matrices expressed in the bases associated with the non-trivial SPD matrices of rank $r$, and then projecting the resulting operators back to the original space of SPSD matrices by applying $\\mathbf{U}_{\\mathbf{W}_1\\rightarrow\\mathbf{W}_2}(p)$.\n\n\n\nA summary of this derivation is presented in Algorithm \\ref{alg:SA_implementation}.\nA demonstration of the properties of these new operators for SPSD matrices using a simple simulation of $4\\times 4$ matrices, similar to the one presented in Subsection \\ref{subsub:ills_exmp} but with matrices of rank $3$, could be found in Section \\ref{subsub:mat4x4_exmp_fxd_rnk}.\n\n\\begin{algorithm}\n\\caption{Implementation of the operators for SPSD matrices}\n\\textbf{Input:} Two datasets with point correspondence $\\{\\mathbf{x}_1[i],\\mathbf{x}_2[i]\\}_{i=1}^N$\\\\\n\\textbf{Output:} Operators $\\mathbf{S}$ and $\\mathbf{F}$ and their eigenvectors: $\\mathbf{\\Psi}^{(\\mathbf{S})},\\mathbf{\\Psi}^{(\\mathbf{F})}$\\\\\n\\begin{algorithmic}[1]\n\\Statex\n\\Function{SPSD-Geodesics}{$\\mathbf{G}_1,\\mathbf{G}_2,p$}\n\\Comment{As defined in \\cite{bonnabel2010riemannian}}\n\\State{Set $r=\\min\\left\\lbrace\\textrm{rank}\\left(\\mathbf{G}_1\\right),\\textrm{rank}\\left(\\mathbf{G}_2\\right)\\right\\rbrace$.}\n\\For{$i\\in[1,2]$}\n\\State{Set 
$\\mathbf{G}_i=\\mathbf{V}_i\\mathbf{\\Lambda}_i\\mathbf{V}_i^T$} \\Comment{Eigenvalue decomposition}\n\\State{Set $\\tilde{\\mathbf{V}}_i=\\mathbf{V}_i(:,1:r)$}\n\\State{Set $\\tilde{\\mathbf{\\Lambda}}_i=\\mathbf{\\Lambda}_i(1:r,1:r)$}\n\\EndFor\n\\State{Set $[\\mathbf{O}_1,\\mathbf{\\Sigma},\\mathbf{O}_2]=\\mathrm{SVD}\\left(\\tilde{\\mathbf{V}}_2^T\\tilde{\\mathbf{V}}_1\\right)$}\n\\State{Set $\\mathbf{\\Theta}=\\arccos(\\mathbf{\\Sigma})$}\n\\For{$i\\in[1,2]$}\n\\State{Set $\\mathbf{U}_i=\\tilde{\\mathbf{V}}_i\\mathbf{O}_i$}\n\\State{Set $\\mathbf{R}_{\\mathbf{G}_i}=\\mathbf{O}_i^T\\tilde{\\mathbf{\\Lambda}}_i\\mathbf{O}_i$}\n\\EndFor\n\\State{Compute $\\mathbf{U}_{\\mathbf{G}_1\\rightarrow\\mathbf{G}_2}(p)$} \\Comment{According to \\eqref{eq:grassman_geodesic}}\n\\State{Compute $\\mathbf{R}_{\\mathbf{G}_1\\rightarrow\\mathbf{G}_2}(p)=\\mathbf{R}_1^{1\/2}\\left(\\mathbf{R}_1^{-1\/2}\\mathbf{R}_2\\mathbf{R}_1^{-1\/2}\\right)^{p}\\mathbf{R}_1^{1\/2}$}\n\\State{\\textbf{return} $\\mathbf{U}_{\\mathbf{G}_1\\rightarrow\\mathbf{G}_2}(p)$, $\\mathbf{R}_{\\mathbf{G}_1\\rightarrow\\mathbf{G}_2}(p)$, $\\mathbf{R}_{\\mathbf{G}_1}$, $\\mathbf{R}_{\\mathbf{G}_2}$}\n\\EndFunction\n\\Statex\n\\Function{Main}{}\n\\State{Construct SPSD matrices for the two datasets $\\mathbf{W}_1$ and $\\mathbf{W}_2$} \\Comment{According to \\eqref{eq:SPDKern}}\n\\Statex\n\\State{\\Call{SPSD-Geodesics}{$\\mathbf{W}_1,\\mathbf{W}_2,0.5$}}\n\\State{$\\mathbf{S}=\\mathbf{U}_{\\mathbf{W}_1\\rightarrow\\mathbf{W}_2}(0.5)\\mathbf{R}_{\\mathbf{W}_1\\rightarrow\\mathbf{W}_2}(0.5)\\mathbf{U}^T_{\\mathbf{W}_1\\rightarrow\\mathbf{W}_2}(0.5)$}\n\\Statex\n\\State{\\Call{SPSD-Geodesics}{$\\mathbf{S},\\mathbf{W}_1,1$}}\n\\State{$\\mathbf{F}=\\mathbf{U}_{\\mathbf{S}\\rightarrow\\mathbf{W}_1}(1)\\mathrm{Log}_{\\mathbf{R}_{\\mathbf{S}}}\\left(\\mathbf{R}_{\\mathbf{W}_1}\\right)\\mathbf{U}^T_{\\mathbf{S}\\rightarrow\\mathbf{W}_1}(1)$} \\Comment{$\\mathrm{Log}_\\cdot(\\cdot)$ is defined as in \\eqref{eq:spd_logmap}}\n\\EndFunction\n\\end{algorithmic}\n\\label{alg:SA_implementation}\n\\end{algorithm}\n\n\n\n\n\n\\section{Experimental Results\\label{sec:results}}\n\n\n\n\\subsection{Illustrative Toy Example: SPD Case\\label{subsub:ills_exmp}}\nWe demonstrate the properties of the composite operators $\\mathbf{S}$ and $\\mathbf{F}$, constructed in Algorithm \\ref{alg:twomod}, using a simple simulation of $4\\times4$ matrices.\nDefine two matrices, $\\mathbf{M}_1=\\mathbf{\\Psi}\\mathbf{\\Lambda}^{(1)}\\mathbf{\\Psi}^T$ and $\\mathbf{M}_2=\\mathbf{\\Psi}\\mathbf{\\Lambda}^{(2)}\\mathbf{\\Psi}^T$, with the following common eigenvectors:\n\\begin{eqnarray}\\label{toy example eigenvector matrix}\n\\mathbf{\\Psi} = \\left[\n\\begin{matrix}\n\\psi_1, \\psi_2, \\psi_3, \\psi_4\n\\end{matrix}\n\\right] = \\frac{1}{2}\\left[\n\\begin{matrix}\n1,\\ \\ 1,\\ \\ 1,\\ \\ 1\\\\\n1,\\ \\ 1, -1, -1\\\\\n1, -1, -1,\\ \\ 1\\\\\n1, -1,\\ \\ 1, -1\n\\end{matrix}\n\\right]\n\\end{eqnarray}\nand the following eigenvalues:\n\\begin{eqnarray}\n\\mathbf{\\Lambda}^{(1)} = \\mathrm{diag}\\left(\\left[\\lambda^{(1)}_1,\\lambda^{(1)}_2,\\lambda^{(1)}_3,\\lambda^{(1)}_4\\right]\\right) = \\mathrm{diag}(\\left[ 0.5,\\ \\ 1,\\ 0.01,\\ 0.2 \\right])\\\\\n\\mathbf{\\Lambda}^{(2)} = \\mathrm{diag}\\left(\\left[\\lambda^{(2)}_1,\\lambda^{(2)}_2,\\lambda^{(2)}_3,\\lambda^{(2)}_4\\right]\\right) = \\mathrm{diag}(\\left[ 0.01, \\ 1,\\ 0.5,\\ \\ 0.2\\right])\n\\end{eqnarray}\nIn this example, $\\psi_1$ is a common eigenvector that is dominant in $\\mathbf{M}_1$ and weak in 
$\\mathbf{M}_2$, $\\psi_3$ is a common eigenvector that is dominant in $\\mathbf{M}_2$ and weak in $\\mathbf{M}_1$, and $\\psi_2$ and $\\psi_4$ are common eigenvectors that are similarly expressed in both $\\mathbf{M}_1$ and $\\mathbf{M}_2$.\n\nWe construct the operators $\\mathbf{S}=\\mathbf{M}_1\\#\\mathbf{M}_2$ and $\\mathbf{F}=\\mathrm{Log}_{\\mathbf{S}}\\left(\\mathbf{M}_1\\right)$ and compute their eigenvalues and eigenvectors. \nFigure \\ref{fig:4x4mat_full} presents the $4$ eigenvalues of $\\mathbf{M}_1$, $\\mathbf{M}_2$, $\\mathbf{S}$ and $\\mathbf{F}$, denoted by $\\{\\lambda^{(1)}_n\\}_{n=1}^4$, $\\{\\lambda^{(2)}_n\\}_{n=1}^4$, $\\{\\lambda^{(\\mathbf{S})}_n\\}_{n=1}^4$ and $\\{\\lambda^{(\\mathbf{F})}_n\\}_{n=1}^4$, respectively, in the left plots, and the corresponding eigenvectors in the right plots. \nThis figure depicts that the two matrices $\\mathbf{M}_1$ and $\\mathbf{M}_2$ share the same $4$ eigenvectors (as defined) and that the resulting eigenvectors of $\\mathbf{S}$ and $\\mathbf{F}$ are similar to these $4$ eigenvectors.\nNote that eigenvectors $2$ and $4$ of operator $\\mathbf{F}$ are not identical to the eigenvectors of $\\mathbf{M}_1$ and $\\mathbf{M}_2$ due to numerical issues, which arise since these eigenvectors in $\\mathbf{F}$ correspond to negligible eigenvalues.\nThe left plots show that the eigenvalues of $\\mathbf{S}$ and $\\mathbf{F}$ capture the similarities and differences in the expression of the spectral components of $\\mathbf{M}_1$ and $\\mathbf{M}_2$.\nSpecifically, since $\\lambda^{(1)}_2=\\lambda^{(2)}_2$ and $\\lambda^{(1)}_4=\\lambda^{(2)}_4$, the corresponding eigenvalues of $\\mathbf{S}$ attain these same values.\nIn contrast, due to this equality, the corresponding eigenvalues of $\\mathbf{F}$ are negligible.\nThe two other eigenvectors, $\\psi_1$ and $\\psi_3$, correspond to eigenvalues that differ by more than an order of magnitude in the two matrices and are therefore the most dominant components in $\\mathbf{F}$.\nIn addition, note the opposite sign of eigenvalues $\\lambda^{(\\mathbf{F})}_1$ and $\\lambda^{(\\mathbf{F})}_3$, which indicates the source of the more dominant component, i.e., 
whether $\\lambda^{(1)}_n>\\lambda^{(2)}_n$ or $\\lambda^{(1)}_n{<}\\lambda^{(2)}_n$.\nThese properties are proved and explained in more detail in Section \\ref{sec:analysis}.\n\n\\begin{figure}[bhtp!]\n\\centering\n\\includegraphics[width=1\\textwidth]{figures\/matrices4x4\/full_rank_F-eps-converted-to.pdf}\n\\caption{Application of $\\mathbf{S}$ and $\\mathbf{F}$ to two $4\\times 4$ matrices with identical eigenvectors.\\label{fig:4x4mat_full}}\n\\end{figure}\n\n\\subsection{Illustrative Toy Example: SPSD case\\label{subsub:mat4x4_exmp_fxd_rnk}}\n\nConsider two matrices, $\\mathbf{M}_1=\\mathbf{\\Psi}\\mathbf{\\Lambda}^{(1)}\\mathbf{\\Psi}^T$ and $\\mathbf{M}_2=\\mathbf{\\Psi}\\mathbf{\\Lambda}^{(2)}\\mathbf{\\Psi}^T$, {with $\\mathbf{\\Psi}$ defined in \\eqref{toy example eigenvector matrix}}\nand the following eigenvalues:\n\\begin{eqnarray}\n\\mathbf{\\Lambda}^{(1)} = \\mathrm{diag}\\left(\\left[\\lambda^{(1)}_1,\\lambda^{(1)}_2,\\lambda^{(1)}_3,\\lambda^{(1)}_4\\right]\\right) = \\mathrm{diag}(\\left[ 0.5,\\ \\ 1,\\ 0.01,\\ 0 \\right])\\\\\n\\mathbf{\\Lambda}^{(2)} = \\mathrm{diag}\\left(\\left[\\lambda^{(2)}_1,\\lambda^{(2)}_2,\\lambda^{(2)}_3,\\lambda^{(2)}_4\\right]\\right) = \\mathrm{diag}(\\left[ 0.01, \\ 1,\\ 0.5,\\ \\ 0\\right])\n\\end{eqnarray}\nNote that the $4$th eigenvalue is zero in both matrices resulting in SPSD matrices of rank $3$.\n\nWe construct the operators $\\mathbf{S}$ and $\\mathbf{F}$ according to Algorithm \\ref{alg:SA_implementation} and compute their eigenvalues and eigenvectors. The results are presented in Figure \\ref{fig:4x4mat_rank3}. Same as Figure \\ref{fig:4x4mat_full},\nFigure \\ref{fig:4x4mat_rank3} presents in the left plots the $4$ eigenvalues of $\\mathbf{M}_1$, $\\mathbf{M}_2$, $\\mathbf{S}$ and $\\mathbf{F}$, denoted by $\\{\\lambda^{(1)}_n\\}_{n=1}^4$, $\\{\\lambda^{(2)}_n\\}_{n=1}^4$, $\\{\\lambda^{(\\mathbf{S})}_n\\}_{n=1}^4$ and $\\{\\lambda^{(\\mathbf{F})}_n\\}_{n=1}^4$, respectively, and the corresponding eigenvectors in the right plots.\nBoth matrices, $\\mathbf{M}_1$ and $\\mathbf{M}_2$, share the same $4$ eigenvectors as depicted in the right plots, and the resulting eigenvectors of $\\mathbf{S}$ and $\\mathbf{F}$ are similar to these $4$ eigenvectors.\nIn this example, $\\psi_2$ is a dominant component in both $\\mathbf{M}_1$ and $\\mathbf{M}_2$ with the same large eigenvalue. 
\nTherefore, similarly to the SPD case, the eigenvalue of $\\mathbf{S}$ associated with this eigenvector remains large, whereas the eigenvalue of $\\mathbf{F}$ associated with this eigenvector is negligible, as expected.\nIn contrast, $\\psi_1$ and $\\psi_3$ are eigenvectors that are differently expressed in the two matrices (corresponding to eigenvalues $0.5$ and $0.01$), and therefore, they correspond to dominant eigenvalues in $\\mathbf{F}$.\nThe left plot demonstrates that this behavior is indeed captured by the operators $\\mathbf{S}$ and $\\mathbf{F}$ for SPSD matrices.\nMoreover, the eigenvalues of the operators $\\mathbf{S}$ and $\\mathbf{F}$ for SPSD matrices are equal to the eigenvalues that were obtained by the operators for SPD matrices in a corresponding toy example, presented in Figure \\ref{fig:4x4mat_full}.\nNote that all eigenvalues that correspond to $\\psi_4$ are very close to zero, as expected due to the definition of the matrices $\\mathbf{M}_1$ and $\\mathbf{M}_2$.\n\n\\begin{figure}[bhtp!]\n\\centering\n\\includegraphics[width=1\\textwidth]{figures\/matrices4x4\/rank_3_F-eps-converted-to.pdf}\n\\caption{Application of the operators $\\mathbf{S}$ and $\\mathbf{F}$ for SPSD matrices to two $4\\times 4$ matrices of rank $3$ with identical eigenvectors.\\label{fig:4x4mat_rank3}}\n\\end{figure}\n\n\n\n\\subsection{Transitory Double Gyre Flow\\label{sub:2gyre}}\n\nTo demonstrate the proposed Riemannian multi-resolution analysis described in Section \\ref{sec:rmra}, we consider a variation of the transitory double gyre flow presented in \\cite{mosovsky2011transport,froyland2014almost}.\n\nWe simulate a 2D dynamical system with coordinates $(x_t,y_t)$ using the following equations:\n\\begin{eqnarray}\n\\dot{x}_t & = & -\\frac{\\partial}{\\partial y_t}H\\left(x_t,y_t,t\\right)\\\\\n\\dot{y}_t & = & \\frac{\\partial}{\\partial x_t}H\\left(x_t,y_t,t\\right)\n\\end{eqnarray}\nwith the function:\n\\begin{eqnarray}\nH\\left(x_t,y_t,t\\right) & = & (1-g(t))H_1\\left(x_t,y_t\\right) + g(t)H_2\\left(x_t,y_t\\right)\\\\\nH_1\\left(x_t,y_t\\right) & = & c_1\\sin(2\\pi x_t)\\sin(\\pi y_t)\\\\\nH_2\\left(x_t,y_t\\right) & = & c_2\\sin(\\pi x_t)\\sin(2\\pi y_t)\\\\\ng(t) & = & t^2(3-2t),\n\\end{eqnarray}\nwhere $c_1=2$, $c_2=10$, $i=1,...,N$ and $t\\in[0,1]$.\n\nThese equations describe a double gyre pattern, which is horizontal at $t=0$ and transitions into a vertical double gyre pattern at $t=1$.\nNote that in our simulations we add the parameters $c_1$ and $c_2$ to the dynamics, which lead to a change in rate between the (slower) horizontal and (faster) vertical double gyre patterns. 
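For concreteness, a small simulation sketch of these dynamics is provided below. It is our illustration rather than the authors' code: the trajectories are integrated with an off-the-shelf ODE solver, the solver tolerance is an arbitrary placeholder, and the number of trajectories and the time grid follow the description in the next paragraph.
\\begin{verbatim}
# Illustrative sketch (ours) of generating the transitory double gyre
# trajectories; N, T and rtol are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

c1, c2 = 2.0, 10.0

def rhs(t, z):
    x, y = z[::2], z[1::2]
    g = t**2 * (3.0 - 2.0 * t)
    # partial derivatives of H = (1 - g) * H1 + g * H2
    dHdx = ((1 - g) * c1 * 2 * np.pi * np.cos(2 * np.pi * x) * np.sin(np.pi * y)
            + g * c2 * np.pi * np.cos(np.pi * x) * np.sin(2 * np.pi * y))
    dHdy = ((1 - g) * c1 * np.pi * np.sin(2 * np.pi * x) * np.cos(np.pi * y)
            + g * c2 * 2 * np.pi * np.sin(np.pi * x) * np.cos(2 * np.pi * y))
    dz = np.empty_like(z)
    dz[::2], dz[1::2] = -dHdy, dHdx      # x' = -dH/dy,  y' = dH/dx
    return dz

N, T = 2500, 256
z0 = np.random.rand(2 * N)               # uniform initial conditions in [0,1]^2
t_eval = np.arange(T) / T                # uniform time grid with step 1/256
sol = solve_ivp(rhs, (0.0, 1.0), z0, t_eval=t_eval, rtol=1e-6)
x_t = sol.y.reshape(N, 2, T)             # x_t[i] is the i-th 2 x T trajectory
\\end{verbatim}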
\nThese parameters are added in order to demonstrate the time-varying multi-resolution properties of our analysis.\n\nWe generate $N=2500$ trajectories with initial values uniformly distributed in $\\left(x_0,y_0\\right)\\in [0,1]\\times[0,1]$, where each trajectory has $T=256$ time points on a discrete uniform time-grid with a step size of $\\Delta t =1\/256$.\nWe denote each of these trajectories with an index $i=1,\\ldots, N$ by a matrix $\\mathbf{x}[i] \\in \\mathbb{R}^{2 \\times T}$, whose columns are the pair of time samples $(x_t[i],y_t[i])^T$.\nA short GIF file demonstrating the resulting trajectories is available on \\href{https:\/\/github.com\/shnitzer\/Manifold-based-temporal-analysis\/blob\/main\/Transitory_Double_Gyre_Flow_Data.gif}{GitHub}, where each point is colored according to its initial location along the x-axis to illustrate the point movement in time.\nThe point movement demonstrated in this GIF exhibits two main structures: (i) points that rotate in two circular structures (transitioning from a horizontal setting into a vertical setting), which can be described as almost-invariant (coherent) sets as defined and captured by \\cite{froyland2014almost}, and (ii) points that are located on the boundary of these almost-invariant sets and their movement changes significantly over time.\nOur goal in this example is to analyze these two movement types and to recover their different trajectories over time.\n\nFor this purpose, we construct an SPD kernel for each time frame $t$, denoted by $\\mathbf{W}_t\\in\\mathbb{R}^{N\\times N}$, according to \\eqref{eq:SPDKern} based on the distances between the points in that time frame, i.e., $\\left\\Vert(x_t[i]-x_t[j],y_t[i]-y_t[j])\\right\\Vert_2^2$, $i,j=1,\\dots,N$, with $\\sigma$ set to $0.5$ times the median of these distances.\nWe then apply Algorithm \\ref{alg:wavelet} and obtain the multi-resolution representation and $\\ell=\\log_2(T)=8$ levels of operators.\nWe denote the operators of different levels and different time frames by $\\mathbf{S}_r^{(\\ell)}\\in\\mathbb{R}^{N\\times N}$ and $\\mathbf{F}_r^{(\\ell)}\\in\\mathbb{R}^{N\\times N}$, where $\\ell=1,\\dots,8$ and $r=1,\\dots,T\/2^\\ell$. Note that $r$ is associated with the time-frame indices, e.g., at level $\\ell=6$, $r=\\lceil t\/2^\\ell\\rceil=3$ corresponds to time points $t=129,\\dots,192$.\n\nIn the following, we focus on the second eigenvector of $\\mathbf{S}_r^{(\\ell)}$ and show that it indeed captures the common components and common trajectory behavior at the different operator levels.\nFigure \\ref{fig:DG_S} presents the data-points colored according to the second eigenvector of $\\mathbf{S}_r^{(\\ell)}$, denoted by $\\psi_2^{(\\mathbf{S}_r^{(\\ell)})}$ at levels $\\ell=8$, $\\ell=4$ and $\\ell=3$ and with $r$ values corresponding to different time frames.\nMore specifically, the locations of all $N=2500$ points are presented at $8$ different time-instances along the trajectory. 
\nEach point is colored according to its value in: (a) $\\psi^{(\\mathbf{S}_1^{(8)})}_2$, (b) $\\psi^{(\\mathbf{S}_4^{(4)})}_2$, (c) $\\psi^{(\\mathbf{S}_{10}^{(4)})}_2$, (d) $\\psi_2^{(\\mathbf{S}_7^{(3)})}$ and (e) $\\psi_2^{(\\mathbf{S}_8^{(3)})}$.\nNote that Figure \\ref{fig:DG_S} (d) and Figure \\ref{fig:DG_S} (e) correspond to the preceding (finer) level and together span the same time frames as Figure \\ref{fig:DG_S} (b).\nWe maintained a consistent color coding in all time-instances, i.e., each point has the same color throughout its trajectory in time, and the most significant values (largest in absolute value) are colored in either yellow or blue.\n\n\n\n\\begin{landscape}\n\\begin{figure}[bhtp!]\n\\centering\n\\includegraphics[height=0.8\\textheight]{figures\/DoubleGyre\/S_tree.png}\n\\caption{Data points colored according to the second eigenvector of the operator $\\mathbf{S}_r^{(\\ell)}$ at different levels and time frames: (a) $\\mathbf{S}_1^{(8)}$, (b) $\\mathbf{S}_4^{(4)}$, (c) $\\mathbf{S}_{10}^{(4)}$, (d) $\\mathbf{S}_7^{(3)}$ and (e) $\\mathbf{S}_8^{(3)}$. Plots (d) and (e) present the same $8$ time points as in plot (b), where the points are colored according to a different eigenvector in each plot.\n \\label{fig:DG_S}}\n\\end{figure}\n\\end{landscape}\n\n\n\nIn the figure we see the multi-resolution properties of the proposed framework.\nAt the highest level, $\\ell=8$ in plot (a), the eigenvector of $\\mathbf{S}_1^{(8)}$ captures the coherent circular structures, i.e., the almost-invariant sets, which change from a horizontal orientation at the beginning of the trajectory to a vertical orientation at the end. These structures are consistent with the ones described by \\cite{froyland2014almost}.\nIn contrast, in plots (b)-(c) (level $\\ell=4$), the effect of the velocity change over time is apparent, demonstrating that our framework is capable of detecting such properties. \nThese plots present two equal-length sub-segments of the trajectory: $t\\in\\{49,\\ldots,64\\}$ in plot (b) and $t\\in\\{145,\\ldots,160\\}$ in plot (c).\nPlot (c), which corresponds to the faster regime closer to the end of the trajectory, depicts that the circular structures are captured by the eigenvector, whereas in plot (b), which corresponds to the slower regime, these structures are not visible. \nDue to the increase in point movement velocity over time, the components that are similarly expressed over time in the sub-segment that is closer to the end of the trajectory (plot (c)) are mainly the almost-invariant sets, as captured by the operator from the highest level in plot (a). \nConversely, in the slower regime, there are other components that are similarly expressed over short sub-segments in time, as captured by the eigenvector presented in plot (b).\nPlots (d) and (e) correspond to the two sub-segments $t\\in\\{49,\\ldots,56\\}$ and $t\\in\\{57,\\ldots,64\\}$, respectively, whose union is the sub-segment presented in plot (b). \nNote the similarity between the captured point dynamics in the two sub-segments of plots (d) and (e). This similarity explains the component emphasized by the eigenvector in plot (b), which is constructed based on these two sub-segments.\n\nNote that the leading eigenvector of the operator $\\mathbf{S}_r^{(\\ell)}$ was omitted throughout this example since it mostly captures the common point distribution at the different time-frames. 
\nThe point distribution is of less interest in this example since it provides a general geometric description of the problem setting rather than the common trajectory properties.\n\nIn the following we present the eigenvectors of the operators $\\mathbf{F}$ and show that they indeed capture the time-varying trajectory behavior in consecutive time-frames.\n\nFigure \\ref{fig:DG_A} presents the data-points colored according to the leading eigenvectors of the respective operators $\\mathbf{F}$ corresponding to the largest positive and negative (in absolute value) eigenvalues. \nIn this setting, the eigenvectors corresponding to negative eigenvalues describe the components that are significantly more dominant in the first half of the time segment and the eigenvectors corresponding to positive eigenvalues describe components that are significantly more dominant in the second half of the segment, as we will show in Section \\ref{sec:analysis} (Theorem \\ref{theo:A_eigs}).\n\nFigure \\ref{fig:DG_A} (a) and Figure \\ref{fig:DG_A} (b) present the eigenvectors of the operator $\\mathbf{F}_1^{(8)}$ corresponding to the smallest negative eigenvalue in plot (a) and to the largest positive eigenvalue in plot (b).\nThese plots depict that $\\mathbf{F}_1^{(8)}$ captures the differences between the slower point movement in $t\\in\\{1,\\ldots,128\\}$ and the faster point movement in $t\\in\\{129,\\ldots,256\\}$. \nDue to the change in point movement velocity over time, the component describing the circular structures (the almost-invariant sets) is significantly more dominant in the sub-segment from the faster regime ($t\\in\\{129,\\ldots,256\\}$) than the slower regime ($t\\in\\{1,\\ldots,128\\}$), as captured by the eigenvector in plot (b). In contrast, in the slower regime other components are dominant (as demonstrated also by Figure \\ref{fig:DG_S} (b)), leading to different structures being emphasized in Figure \\ref{fig:DG_A} (a), which mostly captures the boundary points.\nFigure \\ref{fig:DG_A} (c) and Figure \\ref{fig:DG_A} (d) correspond to the time segment $t\\in\\{129,\\ldots,256\\}$.\nPlot (c) presents the leading eigenvector of the operator $\\mathbf{F}_2^{(7)}$ with a negative eigenvalue, describing the components that are more dominant in the slower regime ($t\\in\\{129,\\ldots,192\\}$) and plot (d) presents the leading eigenvector with a positive eigenvalue, describing the components that are more dominant in the faster regime ($t\\in\\{193,\\ldots,256\\}$).\nNote that both plots (c) and (d) emphasize circular structures, however, the structures in plot (d) are smaller than the ones in plot (b) and are approximately complemented by the structures in plot (c).\nThis behavior implies that our framework decomposes the almost-invariant sets into smaller components in short sub-segments (at lower operator-tree levels), and therefore, indicates that the proposed method indeed captures meaningful dynamical information in different time-scales.\nFigure \\ref{fig:DG_A} (e) presents the eigenvector of $\\mathbf{F}_1^{(6)}$ (describing $t\\in[\\{1,\\ldots,64\\}$) with the largest negative eigenvalue.\nThis plot depicts that in the slower regime (at the beginning of the trajectory) the operator $\\mathbf{F}$ highlights high-resolution fine components of the point movement dynamics.\n\n\\begin{landscape}\n\\begin{figure}[bhtp!]\n\\centering\n\\includegraphics[height=0.7\\textheight]{figures\/DoubleGyre\/F_tree.png}\n\\caption{Data points colored according to the eigenvector of (a-b) 
$\\mathbf{F}_1^{(8)}$ with the smallest negative and largest positive eigenvalues, (c-d) $\\mathbf{F}_2^{(7)}$ with the smallest negative and largest positive eigenvalues, and (e) $\\mathbf{F}_1^{(6)}$ with the smallest negative eigenvalue.\n\\label{fig:DG_A}}\n\\end{figure}\n\n\\end{landscape}\n\n\nWe remark that different choices of the kernel scale in \\eqref{eq:SPDKern} lead to different resolutions. \nFor example, taking a smaller kernel scale leads to a slower ``convergence'' to the almost-invariant sets of the representations obtained at the different levels of the operator $\\mathbf{S}_r^{(\\ell)}$, as well as an enhancement of finer structures captured by the operator $\\mathbf{F}_r^{(\\ell)}$.\n\n\n\n\n\n \n \n\n\n\nIn order to evaluate our framework with respect to previous work, we compare the operators $\\mathbf{S}$ and $\\mathbf{F}$, which serve as the building blocks of our algorithm, with related operators: (i) the dynamic Laplacian \\cite{froyland2015dynamic}, which was shown to recover coherent sets from multiple time-frames of dynamical systems, and (ii) symmetric and anti-symmetric diffusion operators that were shown to recover similar and different components in multimodal data \\cite{shnitzer2019recovering}. \nIn \\cite{froyland2015dynamic}, with a slight abuse of notation and in analogy to \\eqref{eq:Sr}, the dynamic Laplacian is defined by $\\mathbf{L}^\\top\\mathbf{L}$, where $\\mathbf{L}=\\mathbf{W}_{1}\\mathbf{W}_{2}$. The common and difference operators in \\cite{shnitzer2019recovering} are defined by $\\mathbf{\\hat{S}}=\\mathbf{W}_{1}\\mathbf{W}_{2}^\\top+\\mathbf{W}_{2}\\mathbf{W}_{1}^\\top$ and $\\mathbf{\\hat{A}}=\\mathbf{W}_{1}\\mathbf{W}_{2}^\\top-\\mathbf{W}_{2}\\mathbf{W}_{1}^\\top$, respectively, which are analogous to the operators $\\mathbf{S}$ and $\\mathbf{F}$ in \\eqref{eq:Sr} and \\eqref{eq:Ar}.\n\nFigure \\ref{fig:op_compare_S} presents point clustering using k-means applied to the second eigenvector of the $3$ operators used for recovering similarities: the proposed operator $\\mathbf{S}$ in plot (a), the operator $\\mathbf{\\hat{S}}$ from \\cite{shnitzer2019recovering} in plot (b) and the operator $\\mathbf{L}^\\top\\mathbf{L}$ from \\cite{froyland2015dynamic} in plot (c).\nAll $3$ operators were constructed from time frames $t=250$ and $t=256$.\nWe see in this figure that the proposed formulation of the operator is significantly better at capturing the almost-invariant sets (the circular structures). 
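For completeness, the following sketch (ours) shows how these operators can be formed from the two diffusion kernels of the frames $t=250$ and $t=256$; the sign of the second eigenvector is used here as a crude stand-in for the k-means step applied in the figures.
\\begin{verbatim}
# Sketch (ours) of the operators compared in this subsection, given the two
# diffusion kernels W1 and W2 built from the frames t = 250 and t = 256.
import numpy as np
from scipy.linalg import fractional_matrix_power as fmp, logm

# proposed Riemannian pair (eq:Sr and eq:Ar)
S = fmp(W1, 0.5) @ fmp(fmp(W1, -0.5) @ W2 @ fmp(W1, -0.5), 0.5) @ fmp(W1, 0.5)
F = fmp(S, 0.5) @ logm(fmp(S, -0.5) @ W1 @ fmp(S, -0.5)) @ fmp(S, 0.5)

# Euclidean counterparts of shnitzer2019recovering
S_hat = W1 @ W2.T + W2 @ W1.T
A_hat = W1 @ W2.T - W2 @ W1.T      # anti-symmetric analog of F (not used below)

# two-frame analog of the dynamic Laplacian of froyland2015dynamic
L = W1 @ W2
LtL = L.T @ L

def second_eigvec(A):              # A is assumed symmetric here
    lam, vec = np.linalg.eigh(A)   # ascending eigenvalues
    return vec[:, -2]

# crude two-way clustering by the sign of the second eigenvector
clusters = {name: np.sign(second_eigvec(op))
            for name, op in (("S", S), ("S_hat", S_hat), ("LtL", LtL))}
\\end{verbatim}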
\nNote that the results presented here for $\\mathbf{L}^\\top\\mathbf{L}$ are different than those in \\cite{froyland2015dynamic}, since we take into account only two close time frames, whereas in \\cite{froyland2015dynamic} the operator is constructed using all the points along the trajectory.\n\n\\begin{figure}[t]\n \\centering\n \\subfloat[$\\psi^{(\\mathbf{S})}_2$]{\\includegraphics[width=0.33\\textwidth]{figures\/DoubleGyre\/operator_comparison\/S2_rie_250_256_kmeans-eps-converted-to.pdf}}\n \\subfloat[$\\psi^{(\\mathbf{\\hat{S}})}_2$ \\cite{shnitzer2019recovering}]{\\includegraphics[width=0.33\\textwidth]{figures\/DoubleGyre\/operator_comparison\/S2_old_250_256_kmeans-eps-converted-to.pdf}}\n \\subfloat[$\\psi^{(\\mathbf{L}^T\\mathbf{L})}_2$ \\cite{froyland2015dynamic}]{\\includegraphics[width=0.33\\textwidth]{figures\/DoubleGyre\/operator_comparison\/S2_dlp_250_256_kmeans-eps-converted-to.pdf}}\n \\caption{Clustering of trajectory points based on eigenvectors of the proposed operator $\\mathbf{S}$ in plot (a), and the operators from \\cite{shnitzer2019recovering} in plot (b) and from \\cite{froyland2015dynamic} in plot (c). All operators are constructed by combining two time frames at $t=250$ and $t=256$.}\n \\label{fig:op_compare_S}\n\\end{figure}\n\nIn Figure \\ref{fig:op_compare_A}, we present the point clustering obtained by applying k-means to the second eigenvector of the $2$ operators used for recovering differences: the proposed operator $\\mathbf{F}$ in plot (a) and the operator $\\mathbf{\\hat{A}}$ from \\cite{shnitzer2019recovering} in plot (b). In contrast to plot (b), in plot (a) we clearly see the swirl of the flow from and to the invariant sets (outward and inward).\n\n\\begin{figure}[bhtp!]\n \\centering\n \\subfloat[$\\psi^{(\\mathbf{F})}_2$]{\\includegraphics[width=0.33\\textwidth]{figures\/DoubleGyre\/operator_comparison\/A2_rie_250_256_kmeans-eps-converted-to.pdf}}\n \\subfloat[$\\psi^{(\\mathbf{\\hat{A}})}_2$ \\cite{shnitzer2019recovering}]{\\includegraphics[width=0.33\\textwidth]{figures\/DoubleGyre\/operator_comparison\/A2_euc_250_256_kmeans-eps-converted-to.pdf}}\n \n \\caption{Clustering of trajectory points based on eigenvectors of the proposed operator $\\mathbf{F}$ in plot (a) and operator $\\mathbf{\\hat{A}}$ from \\cite{shnitzer2019recovering} in plot (b).}\n \\label{fig:op_compare_A}\n\\end{figure}\n\nIn addition to the differences demonstrated in Figure \\ref{fig:op_compare_S} and Figure \\ref{fig:op_compare_A}, another crucial advantage of our formulation relates to the construction of the multi-resolution framework and its theoretical justification presented in Section \\ref{sec:analysis}.\nThe operators in \\cite{shnitzer2019recovering} and \\cite{froyland2015dynamic} do not have any theoretical guarantees in such an operator-tree construction and may not be suitable to such a setting. 
\nIndeed, we report that a similar operator-tree constructed using the operators from \\cite{shnitzer2019recovering} did not exhibit the expected behavior and no meaningful representations were obtained.\n\nWe conclude by noting that such a multi-resolution analysis of the dynamics may be especially useful in applications where the parameters of interest are inaccessible, e.g., for oceanic current analysis based on ocean drifters data \\cite{froyland2015rough,banisch2017understanding}, since the data is represented using non-linear kernels.\n\n\n\\subsection{Hyperspectral and LiDAR Imagery\\label{sec:HSI_LIDAR}}\n\nIn Section \\ref{sub:op_def}, we consider two datasets and present the two Riemannian composite operators $\\mathbf{S}$ and $\\mathbf{F}$.\nLater, in Section \\ref{sub:rmra}, these two datasets are considered as two consecutive sets in a temporal sequence of datasets, and the two Riemannian composite operators are used as a basic building block for our Riemannian multi-resolution analysis. \nAlternatively, similarly to the setting in \\cite{lederman2015alternating,shnitzer2019recovering}, the two datasets could arise from simultaneous observations from two views or modalities. \nHere, we demonstrate the properties of the Riemannian composite operators $\\mathbf{S}$ and $\\mathbf{F}$ on real remote sensing data obtained by two different modalities. \n\nWe consider data from the 2013 IEEE GRSS data fusion contest\\footnote{\\url{http:\/\/www.grss-ieee.org\/community\/technical-committees\/data-fusion\/2013-ieee-grss-data-fusion-contest\/}}, which includes a hyperspectral image (HSI) with 144 spectral bands ($380-1050$nm range) and a LiDAR Digital Surface Model of the University of Houston campus and its neighboring urban area in Houston, Texas. \nThe data from both modalities have the same spatial resolution of $2.5$m.\nThis data was previously considered in the context of manifold learning in \\cite{murphy2018diffusion}.\n\nWe focus on two $60\\times 90$ image patches from the full dataset, in order to reduce computation time of the operators and their eigenvalue decomposition.\nWe first preprocess the $60\\times 90$ LiDAR image and each image in the $60\\times 90\\times 144$ HSI data.\nThe preprocessing stage includes dividing each $60\\times 90$ image by its standard deviation and removing outliers: for the LiDAR image, pixel values larger than the $99$th percentile were removed, and for the HSI data, in each image, pixel values larger than the $95$th percentile or smaller than the $5$th percentile were removed. In both modalities the outliers were replaced by their nearest non-outlier values.\nFigure \\ref{fig:HSI_LDR_1_all} (a) and Figure \\ref{fig:HSI_LDR_2_allEV} (a) present the two LiDAR image patches after preprocessing, and Figure \\ref{fig:HSI_LDR_1_all} (b) and Figure \\ref{fig:HSI_LDR_2_allEV} (b) present the two average HSI image patches after preprocessing.\n\n\nWe apply the operators $\\mathbf{S}$ and $\\mathbf{F}$ to this data in order to analyze the scene properties captured by both the LiDAR and the HSI sensors and extract the similarities and differences between them.\nThis can be viewed as a manifold-driven component analysis.\n\nWe construct the operators according to Algorithm \\ref{alg:SA_implementation}, where the LiDAR image and the HSI images are defined as two datasets with point correspondence between them given by the pixel location. 
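As a rough sketch of this pipeline (ours, for illustration only), the construction can be organized as follows; the Gaussian affinity with a median-based scale is a stand-in for the kernel in \\eqref{eq:SPDKern}, the patch variables are placeholders for the preprocessed data, and the final call stands for a hypothetical wrapper around Algorithm \\ref{alg:SA_implementation}.
\\begin{verbatim}
# Rough sketch (ours) of the HSI/LiDAR pipeline.  "lidar_patch" (60 x 90) and
# "hsi_patch" (60 x 90 x 144) are placeholders for the preprocessed patches;
# the Gaussian affinity stands in for the kernel of eq:SPDKern; and
# spsd_operators() is a hypothetical wrapper around Algorithm alg:SA_implementation.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def gaussian_kernel(X, scale=0.5):
    D2 = squareform(pdist(X, "sqeuclidean"))
    sigma = scale * np.median(D2[D2 > 0])     # median-based scale (placeholder)
    W = np.exp(-D2 / sigma)
    return 0.5 * (W + W.T)

x1 = lidar_patch.reshape(-1, 1)               # N x 1,   N = 60 * 90 = 5400
x2 = hsi_patch.reshape(-1, 144)               # N x 144, same pixel ordering
W1, W2 = gaussian_kernel(x1), gaussian_kernel(x2)
S, F = spsd_operators(W1, W2)                 # rank-aware operators S and F
\\end{verbatim}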
\nWe reshape both datasets such that $\\mathbf{x}_1\\in\\mathbb{R}^{N\\times 1}$ is the reshaped LiDAR image and $\\mathbf{x}_2\\in\\mathbb{R}^{N\\times 144}$ is the reshaped HSI images, where $N=5400$. \nThe resulting kernels, $\\mathbf{W}_1$ and $\\mathbf{W}_2$, and operators, $\\mathbf{S}$ and $\\mathbf{F}$, are matrices of size $N\\times N$.\n\nWe begin with an analysis of the first chosen image patch, shown in Figure \\ref{fig:HSI_LDR_1_all} (a) and (b).\nTo depict the advantages of applying the proposed operators, we visually compare the eigenvectors of the kernels, $\\mathbf{W}_1$ and $\\mathbf{W}_2$, with the eigenvectors of the operators $\\mathbf{S}$ and $\\mathbf{F}$.\n\nFigure \\ref{fig:HSI_LDR_1_all} (c-k) presents the absolute values of the leading eigenvectors of $\\mathbf{S}$ in (c), $\\mathbf{W}_1$ in (d-e), $\\mathbf{W}_2$ in (h-i), and of $\\mathbf{F}$ that correspond to the largest positive eigenvalues in (f-g) and largest negative (in absolute value) eigenvalues in (j-k).\nAll eigenvectors are reshaped into images of size $60\\times 90$.\nThe absolute value of the eigenvectors is presented in order to emphasize the dominant structures in the images and the differences between the leading eigenvectors of the two kernels and the leading eigenvectors of the operators $\\mathbf{S}$ and $\\mathbf{F}$.\n\nFigure \\ref{fig:HSI_LDR_1_all} (c) presents the absolute values of the leading eigenvector of $\\mathbf{S}$ and depicts that the operator $\\mathbf{S}$ indeed recovers common structures strongly expressed in both images.\nSpecifically, this figure mostly highlights an `L'-shaped building at the top of the image, which is the most dominant structure (represented by the high pixel values) in both modalities.\nFigure \\ref{fig:HSI_LDR_1_all} (d-k) depicts that the eigenvectors of the operator $\\mathbf{F}$ capture and enhance differently expressed common structures.\nConsider for example the most dominant structures (with highest absolute values) in the LiDAR image presented in Figure \\ref{fig:HSI_LDR_1_all} (a).\nThese structures include the `L'-shaped building at the top of the image and trees at the bottom.\nBoth structures are represented by high values in the eigenvectors of $\\mathbf{W}_1$ in Figure \\ref{fig:HSI_LDR_1_all} (d-e).\nHowever, in Figure \\ref{fig:HSI_LDR_1_all} (f-g), which presents the leading eigenvectors of $\\mathbf{F}$ with positive eigenvalues, only the trees are significantly highlighted, whereas the `L'-shaped building is significantly attenuated.\nThis is due to the differences between the two modalities, since the HSI image highlights this `L' shaped building but not the trees.\nOther structures exhibiting such properties are marked by black arrows in Figure \\ref{fig:HSI_LDR_1_all} (f-g).\nIn addition, Figure \\ref{fig:HSI_LDR_1_all} (h-k) depicts that the structures which are dominant only in the HSI images are emphasized by the eigenvectors of $\\mathbf{F}$ corresponding to negative eigenvalues, whereas structures that are dominant in both modalities are significantly attenuated.\nExamples for such structures are marked by black arrows in Figure \\ref{fig:HSI_LDR_1_all} 
(j-k).\n\n\\begin{figure}[bhtp!]\n\\centering\n\\subfloat[]{\\includegraphics[width=0.29\\textwidth]{figures\/HSI_LDR\/example1\/data1_ldr-eps-converted-to.pdf}}\n\\subfloat[]{\\includegraphics[width=0.29\\textwidth]{figures\/HSI_LDR\/example1\/data1_hsi-eps-converted-to.pdf}}\n\\subfloat[]{\\includegraphics[width=0.25\\textwidth]{figures\/HSI_LDR\/example1\/S_psi_2_arrow-eps-converted-to.pdf}}\n\n\\subfloat[]{\\includegraphics[width=0.25\\textwidth]{figures\/HSI_LDR\/example1\/W2_psi_1-eps-converted-to.pdf}}\n\\subfloat[]{\\includegraphics[width=0.25\\textwidth]{figures\/HSI_LDR\/example1\/W2_psi_2-eps-converted-to.pdf}}\n\\subfloat[]{\\includegraphics[width=0.25\\textwidth]{figures\/HSI_LDR\/example1\/Fpos_psi_1_arrow-eps-converted-to.pdf}}\n\\subfloat[]{\\includegraphics[width=0.25\\textwidth]{figures\/HSI_LDR\/example1\/Fpos_psi_2_arrow-eps-converted-to.pdf}}\n\n\\subfloat[]{\\includegraphics[width=0.25\\textwidth]{figures\/HSI_LDR\/example1\/W1_psi_1-eps-converted-to.pdf}}\n\\subfloat[]{\\includegraphics[width=0.25\\textwidth]{figures\/HSI_LDR\/example1\/W1_psi_2-eps-converted-to.pdf}}\n\\subfloat[]{\\includegraphics[width=0.25\\textwidth]{figures\/HSI_LDR\/example1\/Fneg_psi_1_arrow-eps-converted-to.pdf}}\n\\subfloat[]{\\includegraphics[width=0.25\\textwidth]{figures\/HSI_LDR\/example1\/Fneg_psi_2_arrow-eps-converted-to.pdf}}\n\\caption{The two chosen image patches of (a) the LiDAR image and (b) the HSI image after preprocessing, along with the leading eigenvectors of (c) $\\mathbf{S}$, (d-e) $\\mathbf{W}_1$ (LiDAR), (f-g) $\\mathbf{F}$ corresponding to its $2$ largest positive eigenvalues,\n(h-i) $\\mathbf{W}_2$ (HSI), (j-k) $\\mathbf{F}$ corresponding to its two smallest negative eigenvalues.\n\\label{fig:HSI_LDR_1_all}}\n\\end{figure}\n\nWe repeat the presentation for the second image patch, shown in Figure \\ref{fig:HSI_LDR_2_allEV} (a) and (b).\nFigure \\ref{fig:HSI_LDR_2_allEV} (c-g) presents the absolute value of the leading eigenvector of $\\mathbf{W}_1$ in plot (c), of $\\mathbf{W}_2$ in plot (d), of $\\mathbf{F}$ with a positive eigenvalue in plot (e), of $\\mathbf{F}$ with a negative eigenvalue in plot (f) and of $\\mathbf{S}$ in plot (g).\n\\begin{figure}[bhtp!]\n\\centering\n\\subfloat[]{\\includegraphics[width=0.33\\textwidth]{figures\/HSI_LDR\/example2\/data2_ldr-eps-converted-to.pdf}}\n\\subfloat[]{\\includegraphics[width=0.33\\textwidth]{figures\/HSI_LDR\/example2\/data2_hsi-eps-converted-to.pdf}}\n\n\\subfloat[]{\\includegraphics[width=0.33\\textwidth]{figures\/HSI_LDR\/example2\/W1_psi1-eps-converted-to.pdf}}\n\\subfloat[]{\\includegraphics[width=0.33\\textwidth]{figures\/HSI_LDR\/example2\/W2_psi1-eps-converted-to.pdf}}\n\n\\subfloat[]{\\includegraphics[width=0.33\\textwidth]{figures\/HSI_LDR\/example2\/Fpos_psi1-eps-converted-to.pdf}}\n\\subfloat[]{\\includegraphics[width=0.33\\textwidth]{figures\/HSI_LDR\/example2\/Fneg_psi1_arrow-eps-converted-to.pdf}}\n\\subfloat[]{\\includegraphics[width=0.33\\textwidth]{figures\/HSI_LDR\/example2\/S_psi1-eps-converted-to.pdf}}\n\\caption{The two chosen image patches of (a) the LiDAR image and (b) the HSI image after preprocessing along with the leading eigenvectors of (c) $\\mathbf{W}_1$ (LiDAR data), (d) $\\mathbf{W}_2$ (HSI data), (e) $\\mathbf{F}$ corresponding to the largest positive eigenvalue, (f) $\\mathbf{F}$ corresponding to the smallest negative eigenvalue and (g) $\\mathbf{S}$.\n\\label{fig:HSI_LDR_2_allEV}}\n\\end{figure}\n\nNote the dominant structures (with high absolute values) in the leading eigenvectors of 
the two modalities in Figure \\ref{fig:HSI_LDR_2_allEV} (c) and (d). \nThe dominant structures of the eigenvector representing the LiDAR image in plot (c) include buildings and trees, whereas in plot (d), which relates to the HSI image, only some of the buildings appear with high intensities and the trees are not clearly visible. \nThis corresponds to the data presented in Figure \\ref{fig:HSI_LDR_2_allEV} (a) and (b).\nThe leading eigenvector of $\\mathbf{F}$ with a positive eigenvalue, presented in Figure \\ref{fig:HSI_LDR_2_allEV} (e), captures buildings and trees that are expressed more dominantly in the LiDAR image compared with the HSI image. \nIn addition, the structures that are dominant in both modalities appear to be less dominant in this plot. \nFor example, the ``attenuated'' structures in plot (e) include the small rectangular roof part around pixels $(x,y)=(10,25)$ (where $x$ denotes the horizontal axis and $y$ denotes the vertical axis) and the building around pixels $(x,y)=(80,10)$.\nConversely, the leading eigenvector of $\\mathbf{F}$ with a negative eigenvalue, presented in Figure \\ref{fig:HSI_LDR_2_allEV} (f), significantly enhances a specific location, marked by a black arrow in this plot, that is clearly visible in the HSI image presented in Figure \\ref{fig:HSI_LDR_2_allEV} (b) but barely visible in Figure \\ref{fig:HSI_LDR_2_allEV} (a).\nNote that this building is not represented by high pixel values in the raw HSI average image, and therefore a simple subtraction between the two images will not lead to a similar emphasis of the building.\n\nThe leading eigenvector of $\\mathbf{S}$, presented in Figure \\ref{fig:HSI_LDR_2_allEV} (g), captures some combination of the structures that are dominant in both modalities.\n\nTo summarize this example, we showed that the operator $\\mathbf{F}$ captures common components that are expressed strongly only by one of the modalities and that the sign of the eigenvalues of $\\mathbf{F}$ indicates in which modality the component is stronger. \nIn addition, we showed that the operator $\\mathbf{S}$ captures some combination of the dominant components in both modalities.\n\n\n\\section{Spectral Analysis\\label{sec:analysis}}\n\nTo provide theoretical justification for the proposed RMRA framework for spatiotemporal analysis presented in Section \\ref{sec:rmra}, we analyze the operators $\\mathbf{S}$ and $\\mathbf{F}$ defined in \\eqref{eq:Sr} and \\eqref{eq:Ar} and show that they admit the desired properties. \nSpecifically, we show that the operator $\\mathbf{S}$ enhances common eigenvectors that are expressed similarly in two consecutive time frames in the sense that they have similar eigenvalues. In addition, we show that the operator $\\mathbf{F}$ enhances common eigenvectors that are expressed differently in two consecutive time frames in the sense that they have different eigenvalues.\n\nIn the following theoretical analysis, we focus on two cases: (i) $\\mathbf{W}_1$ and $\\mathbf{W}_2$ have \\emph{strictly} common components, i.e., some of the eigenvectors of the two matrices are identical, and (ii) $\\mathbf{W}_1$ and $\\mathbf{W}_2$ have \\emph{weakly} common components, i.e., 
some of the eigenvectors of the two matrices differ by a small perturbation.\n\n\n\\subsection{Strictly Common Components\\label{sub:ident_eig}}\n\nGiven that some of the eigenvectors of matrices $\\mathbf{W}_1$ and $\\mathbf{W}_2$ are identical, we show in the following that for these identical eigenvectors, the operator $\\mathbf{S}$ enhances the eigenvectors that have similar dominant eigenvalues and the operator $\\mathbf{F}$ enhances the eigenvectors that have significantly different eigenvalues.\n\nWe begin by reiterating a theorem from \\cite{ok19} along with its proof, which shows that the eigenvectors that are similarly expressed in both matrices, $\\mathbf{W}_1$ and $\\mathbf{W}_2$, correspond to the largest eigenvalues of $\\mathbf{S}$.\n\\begin{theorem}\\label{theo:S_eigs}\nConsider a vector $\\psi$, which is an eigenvector of both $\\mathbf{W}_1$ and $\\mathbf{W}_2$ with possibly different eigenvalues: $\\mathbf{W}_1\\psi=\\lambda^{(1)}\\psi$ and $\\mathbf{W}_2\\psi=\\lambda^{(2)}\\psi$.\nThen, $\\psi$ is also an eigenvector of $\\mathbf{S}$ with the corresponding eigenvalue:\n\\begin{equation}\n\\lambda^{(\\mathbf{S})}=\\sqrt{\\lambda^{(1)}\\lambda^{(2)}} \n\\end{equation}\n\\end{theorem}\n\\begin{proof}\nFrom \\eqref{eq:Sr} we have:\n\\begin{align*}\n\\mathbf{S}\\psi=&\\,\\mathbf{W}_1^{1\/2}\\left(\\mathbf{W}_1^{-1\/2}\\mathbf{W}_2\\mathbf{W}_1^{-1\/2}\\right)^{1\/2}\\mathbf{W}_1^{1\/2}\\psi\\\\\n=&\\,\\mathbf{W}_1^{1\/2}\\left(\\mathbf{W}_1^{-1\/2}\\mathbf{W}_2\\mathbf{W}_1^{-1\/2}\\right)^{1\/2}\\sqrt{\\lambda^{(1)}}\\psi\\\\\n=&\\,\\mathbf{W}_1^{1\/2}\\sqrt{\\lambda^{(2)}\/\\lambda^{(1)}}\\sqrt{\\lambda^{(1)}}\\psi\\\\\n=&\\,\\sqrt{\\lambda^{(1)}\\lambda^{(2)}}\\psi\n\\end{align*}\nwhere the transition before last is due to $\\mathbf{W}_1^{-1\/2}\\mathbf{W}_2\\mathbf{W}_1^{-1\/2}\\psi=(\\lambda^{(2)}\/\\lambda^{(1)})\\psi$.\n\\end{proof}\nThis result implies that strictly common components that are dominant and similarly expressed in both datasets (with similar large eigenvalues) are dominant in $\\mathbf{S}$ (have a large eigenvalue $\\lambda^{(\\mathbf{S})}$), i.e., 
if $\\lambda^{(1)}\\approx\\lambda^{(2)}$ then $\\lambda^{(\\mathbf{S})}\\approx\\lambda^{(1)},\\lambda^{(2)}$.\n\nWe derive a similar theoretical analysis for the operator $\\mathbf{F}$.\n\\begin{theorem}\\label{theo:A_eigs}\nConsider a vector $\\psi$, which is an eigenvector of both $\\mathbf{W}_1$ and $\\mathbf{W}_2$ with possibly different eigenvalues: $\\mathbf{W}_1\\psi=\\lambda^{(1)}\\psi$ and $\\mathbf{W}_2\\psi=\\lambda^{(2)}\\psi$.\nThen $\\psi$ is also an eigenvector of $\\mathbf{F}$ with the corresponding eigenvalue:\n\\begin{equation}\n \\lambda^{(\\mathbf{F})}=\\frac{1}{2}\\sqrt{\\lambda^{(1)}\\lambda^{(2)}}(\\log(\\lambda^{(1)})-\\log(\\lambda^{(2)}))\n\\end{equation}\n\\end{theorem}\n\\begin{proof}\nFrom \\eqref{eq:Ar} we have:\n\\begin{align*}\n\\mathbf{F}\\psi=&\\,\\mathbf{S}^{1\/2}\\log\\left(\\mathbf{S}^{-1\/2}\\mathbf{W}_1\\mathbf{S}^{-1\/2}\\right)\\mathbf{S}^{1\/2}\\psi\\\\\n=&\\,\\mathbf{S}^{1\/2}\\log\\left(\\mathbf{S}^{-1\/2}\\mathbf{W}_1\\mathbf{S}^{-1\/2}\\right)(\\lambda^{(1)}\\lambda^{(2)})^{0.25}\\psi\\\\\n=&\\,\\mathbf{S}^{1\/2}(0.5\\log(\\lambda^{(1)})-0.5\\log(\\lambda^{(2)}))(\\lambda^{(1)}\\lambda^{(2)})^{0.25}\\psi\\\\\n=&\\,\\frac{1}{2}\\sqrt{\\lambda^{(1)}\\lambda^{(2)}}(\\log(\\lambda^{(1)})-\\log(\\lambda^{(2)}))\\psi\n\\end{align*}\nwhere the transition before last is due to $\\mathbf{S}^{-1\/2}\\mathbf{W}_1\\mathbf{S}^{-1\/2}\\psi=\\sqrt{\\lambda^{(1)}\/\\lambda^{(2)}}\\psi$ and the application of $\\log$ to the matrix multiplication, which is equivalent to applying $\\log$ to its eigenvalues, leading to $\\log(\\sqrt{\\lambda^{(1)}\/\\lambda^{(2)}})=0.5\\log(\\lambda^{(1)})-0.5\\log(\\lambda^{(2)})$. \n\\end{proof}\nThis result indicates that the strictly common components of the two datasets are also expressed by $\\mathbf{F}$ and that their order is determined by their relative expression in each dataset.\nFor example, if $\\psi$ is an eigenvector of both $\\mathbf{W}_1$ and $\\mathbf{W}_2$ and corresponds to equal eigenvalues, $\\lambda^{(1)}=\\lambda^{(2)}$, then this component is part of the null space of $\\mathbf{F}$;\nif $\\lambda^{(1)}\\neq\\lambda^{(2)}$, then $\\psi$ corresponds to a nonzero eigenvalue in $\\mathbf{F}$. \nNote that $\\lambda^{(\\mathbf{F})}$ depends also on the multiplication by $\\sqrt{\\lambda^{(1)}\\lambda^{(2)}}$. \n\n\nAnother notable result of Theorem \\ref{theo:A_eigs} is that the sign of the eigenvalues of $\\mathbf{F}$ indicates in which dataset their corresponding eigenvector is more dominant. \nFor example, if $\\psi$ is an eigenvector of both $\\mathbf{W}_1$ and $\\mathbf{W}_2$ that has a large corresponding eigenvalue in $\\mathbf{W}_1$ but a small eigenvalue in $\\mathbf{W}_2$ ($\\lambda^{(1)}\\gg\\lambda^{(2)}$), then the corresponding eigenvalue in $\\mathbf{F}$ is large and positive.\nConversely, if $\\psi$ is more dominant in $\\mathbf{W}_2$, then its corresponding eigenvalue in $\\mathbf{F}$ is large (in absolute value) and negative.\n\nAn example depicting these properties is presented in Subsection \\ref{subsub:ills_exmp}, which demonstrates that the eigenvalues of the operators $\\mathbf{S}$ and $\\mathbf{F}$ are indeed equal to the expected values based on Theorem \\ref{theo:S_eigs} and Theorem \\ref{theo:A_eigs}.\n\n\n\\subsection{Weakly Common Components}\n\n{\nTo further demonstrate the power of the operators $\\mathbf{S}$ and $\\mathbf{F}$, we provide stability analysis by investigating how a small variation of the common eigenvector affects the results. 
For this purpose, we make use of the concept of a pseudo-spectrum \\cite{trefethen2005spectra}. While pseudo-spectra are typically used to provide an analytic framework for investigating non-normal matrices and operators, here we apply the concept to symmetric matrices in order to analyze eigenvectors that are nearly, but not exactly, common.\nWe begin by recalling three equivalent definitions of the $\\epsilon$-pseudo-spectrum as presented in \\cite{trefethen2005spectra}. \n\n\n\\begin{definition}[Pseudo-spectrum]\\label{def:pseudo}\nGiven a matrix $\\mathbf{M}\\in\\mathbb{R}^{N\\times N}$, the following definitions of the $\\epsilon$-pseudo-spectrum are equivalent \nfor a small $\\epsilon>0$:\n\\begin{enumerate}\n \\item $\\sigma_\\epsilon(\\mathbf{M}) = \\left\\lbrace\\lambda\\in\\mathbb{R}:\\left\\Vert(\\lambda\\mathrm{I}-\\mathbf{M})^{-1}\\right\\Vert\\geq\\epsilon^{-1}\\right\\rbrace$\n \\item $\\sigma_\\epsilon(\\mathbf{M}) = \\left\\lbrace\\lambda\\in\\mathbb{R}:\\lambda\\in\\sigma(\\mathbf{M}+\\mathbf{E})\\text{ for some }\\mathbf{E}\\text{ with }\\left\\Vert\\mathbf{E}\\right\\Vert\\leq\\epsilon\\right\\rbrace$\n \\item $\\sigma_\\epsilon(\\mathbf{M}) = \\left\\lbrace\\lambda\\in\\mathbb{R}:\\exists v\\in\\mathbb{R}^N\\text{ with }\\left\\Vert v\\right\\Vert_2=1\\text{ s.t. }\\left\\Vert(\\mathbf{M}-\\lambda\\mathrm{I})v\\right\\Vert_2\\leq\\epsilon\\right\\rbrace$\n\\end{enumerate}\nwhere $\\sigma(\\mathbf{M})$ denotes the set of eigenvalues of $\\mathbf{M}$, $\\mathrm{I}$ denotes the identity matrix, $\\left\\Vert\\cdot\\right\\Vert_2$ denotes the $\\ell_2$ norm, and $\\left\\Vert\\cdot\\right\\Vert$ denotes the induced operator norm. Moreover, we term a vector $v$ that satisfies the third definition an $\\epsilon$-pseudo-eigenvector.\n\\end{definition}\n\nThe following theorem is the counterpart of Theorem \\ref{theo:S_eigs} for the case where the eigenvector is slightly perturbed.\n\n\\begin{theorem}\\label{prop:pseudo_sa}\n\nSuppose there exists an eigenpair $\\lambda^{(k)}$ and $\\psi^{(k)}$ of $\\mathbf{W}_k$ for $k=1,2$ so that $\\psi^{(1)}=\\psi^{(2)}+\\psi_{\\epsilon_{\\mathbf{S}}}$, where $\\left\\Vert \\psi_{\\epsilon_{\\mathbf{S}}}\\right\\Vert_2\\leq \\frac{\\sqrt{\\lambda^{(2)}}}{\\tilde{\\lambda}^{(2)}_{max}\\sqrt{\\lambda^{(1)}}}\\epsilon_{\\mathbf{S}}$ for a small $\\epsilon_{\\mathbf{S}}>0$, and $\\tilde{\\lambda}^{(2)}_{max} = \\Vert\\mathbf{W}_2-\\lambda^{(2)}\\mathrm{I}\\Vert$. 
Then, we have\n\\begin{equation}\n\\sqrt{\\lambda^{(1)}\\lambda^{(2)}}\\in \\sigma_{\\epsilon_{\\mathbf{S}}}(\\mathbf{S})\\,.\\label{pseudospectrum of S}\n\\end{equation}\nSpecifically, we have \n\\[\n\\left\\Vert(\\mathbf{S}-\\sqrt{\\lambda^{(1)}\\lambda^{(2)}}\\mathrm{I})\\psi^{(1)}\\right\\Vert_2\\leq\\epsilon_{\\mathbf{S}}\\,.\n\\]\nand $\\psi^{(1)}$ is a corresponding $\\epsilon_{\\mathbf{S}}$-pseudo-eigenvector of $\\mathbf{S}$.\n\\end{theorem}\n\nInformally, this theorem implies that when $\\psi^{(1)}$ is a slight perturbation of $\\psi^{(2)}$, then $\\psi^{(1)}$ is ``almost'' an eigenvector of $\\mathbf{S}$ with a corresponding eigenvalue $\\sqrt{\\lambda^{(1)}\\lambda^{(2)}}$.\nEquivalently, $\\psi^{(2)}$ can also be shown to be an $\\epsilon_{\\mathbf{S}}$-pseudo-eigenvector of $\\mathbf{S}$ with the same corresponding eigenvalue, which suggests that the operator $\\mathbf{S}$ is ``stable'' to finite perturbations.\nWe remark that since $\\|\\mathbf{W}_2\\|=1$, we have that $\\tilde{\\lambda}^{(2)}_{max}=\\max(1-\\lambda^{(2)}, \\lambda^{(2)})\\in[0.5,1)$, guaranteeing that the perturbation of the eigenvector is small.\nWe also remark that our numerical study shows that the bound for the $\\epsilon$-pseudo-eigenvalues and $\\epsilon$-pseudo-eigenvectors of $\\mathbf{S}$ is tight.\n\n\n\n\\begin{proof}\nBy Proposition \\ref{prop_app:sa_equiv_forms} (see Appendix \\ref{app:add_state}), we have\n\\begin{equation}\n\\mathbf{S}=\\mathbf{W}_1^{1\/2}\\left(\\mathbf{W}_1^{-1\/2}\\mathbf{W}_2\\mathbf{W}_1^{-1\/2}\\right)^{1\/2}\\mathbf{W}_1^{1\/2}=(\\mathbf{W}_2\\mathbf{W}_1^{-1})^{1\/2}\\mathbf{W}_1.\\label{eq:equivexp}\n\\end{equation}\nSince $\\psi^{(1)}$ is an eigenvector of $\\mathbf{W}_1$ with an eigenvalue $\\lambda^{(1)}$, we have \n\\begin{align}\n \\mathbf{S}\\psi^{(1)} = \\left(\\mathbf{W}_2\\mathbf{W}_1^{-1}\\right)^{1\/2}\\mathbf{W}_1\\psi^{(1)}\n = \\lambda^{(1)}\\left(\\mathbf{W}_2\\mathbf{W}_1^{-1}\\right)^{1\/2}\\psi^{(1)}\\label{eq:pseudo_s_psi}.\n\\end{align}\nTherefore, it is sufficient to show that $\\psi^{(1)}$ is an $\\epsilon$-pseudo-eigenvector of $\\left(\\mathbf{W}_2\\mathbf{W}_1^{-1}\\right)^{1\/2}$. 
\nBy a direct expansion, we have\n\\begin{align}\n\\mathbf{W}_2\\mathbf{W}_1^{-1}\\psi^{(1)} = & \\, \\frac{1}{\\lambda^{(1)}}\\mathbf{W}_2 \\psi^{(1)}\n = \\frac{1}{\\lambda^{(1)}}\\mathbf{W}_2 (\\psi^{(2)}+\\psi_{\\epsilon_{\\mathbf{S}}})\\nonumber\\\\\n = & \\, \\frac{\\lambda^{(2)}}{\\lambda^{(1)}}\\psi^{(2)}+\\frac{1}{\\lambda^{(1)}}\\mathbf{W}_2 \\psi_{\\epsilon_{\\mathbf{S}}} = \\frac{\\lambda^{(2)}}{\\lambda^{(1)}}\\psi^{(1)}+\\frac{1}{\\lambda^{(1)}}(\\mathbf{W}_2-\\lambda^{(2)}\\mathrm{I}) \\psi_{\\epsilon_{\\mathbf{S}}}\\,,\\label{eq:pseudo_s_1}\n\\end{align}\nwhere the last transition is obtained by replacing $\\psi^{(2)}$ with $\\psi^{(1)}-\\psi_{\\epsilon_{\\mathbf{S}}}$.\nBy reorganizing the elements in \\eqref{eq:pseudo_s_1}, we have\n\\begin{align}\n&\\left(\\mathbf{W}_2\\mathbf{W}_1^{-1}-\\frac{\\lambda^{(2)}}{\\lambda^{(1)}}\\mathrm{I}\\right)\\psi^{(1)} = \\frac{1}{\\lambda^{(1)}}(\\mathbf{W}_2-\\lambda^{(2)}\\mathrm{I}) \\psi_{\\epsilon_{\\mathbf{S}}}\\,,\n\\end{align}\nand applying the $\\ell_2$ norm leads to:\n\\begin{align}\n&\\left\\Vert\\left(\\mathbf{W}_2\\mathbf{W}_1^{-1}-\\frac{\\lambda^{(2)}}{\\lambda^{(1)}}\\mathrm{I}\\right)\\psi^{(1)}\\right\\Vert_2 = \\left\\Vert\\frac{1}{\\lambda^{(1)}}(\\mathbf{W}_2-\\lambda^{(2)}\\mathrm{I}) \\psi_{\\epsilon_{\\mathbf{S}}}\\right\\Vert_2\n \\leq \\frac{1}{\\lambda^{(1)}}\\left\\Vert \\mathbf{W}_2-\\lambda^{(2)}\\mathrm{I}\\right\\Vert\\left\\Vert \\psi_{\\epsilon_{\\mathbf{S}}}\\right\\Vert_2\\label{eq:pseudo_s_2}\\\\\n&\\qquad\\qquad \\leq \\,\\frac{1}{\\lambda^{(1)}}\\left\\Vert\\mathbf{W}_2-\\lambda^{(2)}\\mathrm{I}\\right\\Vert\\frac{\\sqrt{\\lambda^{(2)}}}{\\tilde{\\lambda}^{(2)}_{max}\\sqrt{\\lambda^{(1)}}}\\epsilon_{\\mathbf{S}} = \\frac{\\tilde{\\lambda}^{(2)}_{max}}{\\lambda^{(1)}}\\frac{\\sqrt{\\lambda^{(2)}}}{\\tilde{\\lambda}^{(2)}_{max}\\sqrt{\\lambda^{(1)}}}\\epsilon_{\\mathbf{S}}=\\frac{\\sqrt{\\lambda^{(2)}}}{\\lambda^{(1)}\\sqrt{\\lambda^{(1)}}}\\epsilon_{\\mathbf{S}}\\,.\\nonumber\n\\end{align}\nThis derivation shows that $\\psi^{(1)}$ is a pseudo-eigenvector of $\\mathbf{W}_2\\mathbf{W}_1^{-1}$ with a pseudo-eigenvalue $\\frac{\\lambda^{(2)}}{\\lambda^{(1)}}$, i.e. $\\left\\Vert\\left(\\mathbf{W}_2\\mathbf{W}_1^{-1}-\\frac{\\lambda^{(2)}}{\\lambda^{(1)}}\\mathrm{I}\\right)\\psi^{(1)}\\right\\Vert_2\\leq\\frac{\\sqrt{\\lambda^{(2)}}}{\\lambda^{(1)}\\sqrt{\\lambda^{(1)}}}\\epsilon_{\\mathbf{S}}$.\n{Thus, by definition (see Proposition \\ref{prop_app:pseudo_explicit} in Appendix \\ref{app:add_state})}, there exists a matrix $\\mathbf{E}$ such that $(\\mathbf{W}_2\\mathbf{W}_1^{-1}+\\mathbf{E})\\psi^{(1)}=\\frac{\\lambda^{(2)}}{\\lambda^{(1)}}\\psi^{(1)}$ and $\\left\\Vert\\mathbf{E}\\right\\Vert\\leq\\frac{\\sqrt{\\lambda^{(2)}}}{\\lambda^{(1)}\\sqrt{\\lambda^{(1)}}}\\epsilon_{\\mathbf{S}}$. 
\nTherefore, we can write the following:\n\\begin{align}\n\\mathbf{E}\\psi^{(1)} = &\\, -\\left(\\mathbf{W}_2\\mathbf{W}_1^{-1}-\\frac{\\lambda^{(2)}}{\\lambda^{(1)}}\\mathrm{I}\\right)\\psi^{(1)}\\label{eq:pseudo_s_3}\\\\\n = &\\, -\\left((\\mathbf{W}_2\\mathbf{W}_1^{-1})^{1\/2}+\\sqrt{\\frac{\\lambda^{(2)}}{\\lambda^{(1)}}}\\mathrm{I}\\right)\\left((\\mathbf{W}_2\\mathbf{W}_1^{-1})^{1\/2}-\\sqrt{\\frac{\\lambda^{(2)}}{\\lambda^{(1)}}}\\mathrm{I}\\right)\\psi^{(1)}.\\nonumber\n\\end{align}\nSince $(\\mathbf{W}_2\\mathbf{W}_1^{-1})^{1\/2}+\\sqrt{\\lambda^{(2)}\/\\lambda^{(1)}}\\mathrm{I}$ is positive definite, \\eqref{eq:pseudo_s_3} leads to\n\\begin{equation}\n\\left((\\mathbf{W}_2\\mathbf{W}_1^{-1})^{1\/2}-\\sqrt{\\frac{\\lambda^{(2)}}{\\lambda^{(1)}}}\\mathrm{I}\\right)\\psi^{(1)}= - \\left((\\mathbf{W}_2\\mathbf{W}_1^{-1})^{1\/2}+\\sqrt{\\frac{\\lambda^{(2)}}{\\lambda^{(1)}}}\\mathrm{I}\\right)^{-1}\\mathbf{E}\\psi^{(1)}. \n\\label{eq:pseudo_s_sqrt_eq}\n\\end{equation}\nWith the above preparation, we have \n\\begin{align}\n\\left(\\mathbf{S}-\\sqrt{\\lambda^{(1)}\\lambda^{(2)}}\\mathrm{I}\\right)\\psi^{(1)} \\underset{\\eqref{eq:pseudo_s_psi}}{=} & \\lambda^{(1)}(\\mathbf{W}_2\\mathbf{W}_1^{-1})^{1\/2}\\psi^{(1)}-\\sqrt{\\lambda^{(1)}\\lambda^{(2)}}\\psi^{(1)} \\nonumber \\\\ \n= \\,& \\lambda^{(1)}\\left((\\mathbf{W}_2\\mathbf{W}_1^{-1})^{1\/2}-\\sqrt{\\frac{\\lambda^{(2)}}{\\lambda^{(1)}}}\\mathrm{I}\\right)\\psi^{(1)} \\nonumber \\\\\n\\underset{\\eqref{eq:pseudo_s_sqrt_eq}}{=} & -\\lambda^{(1)}\\left((\\mathbf{W}_2\\mathbf{W}_1^{-1})^{1\/2}+\\sqrt{\\frac{\\lambda^{(2)}}{\\lambda^{(1)}}}\\mathrm{I}\\right)^{-1}\\mathbf{E}\\psi^{(1)}.\\label{eq:pseudo_s_eq}\n\\end{align}\nTaking the norm gives\n\\begin{align}\n\\left\\Vert \\left(\\mathbf{S}-\\sqrt{\\lambda^{(1)}\\lambda^{(2)}}\\mathrm{I}\\right)\\psi^{(1)} \\right\\Vert_2 \\le \\, &\\lambda^{(1)}\\left\\Vert\\left((\\mathbf{W}_2\\mathbf{W}_1^{-1})^{1\/2}+\\sqrt{\\frac{\\lambda^{(2)}}{\\lambda^{(1)}}}\\mathrm{I}\\right)^{-1}\\right\\Vert\\left\\Vert\\mathbf{E}\\right\\Vert\\\\\n\\leq \\,& \\frac{\\sqrt{\\lambda^{(2)}\/\\lambda^{(1)}}}{\\sigma_{\\min}\\left((\\mathbf{W}_2\\mathbf{W}_1^{-1})^{1\/2}\\right)+\\sqrt{\\lambda^{(2)}\/\\lambda^{(1)}}}\\epsilon_{\\mathbf{S}} \\nonumber \\\\\n\\leq \\,& \\frac{\\sqrt{\\lambda^{(2)}\/\\lambda^{(1)}}}{\\sqrt{\\lambda^{(2)}\/\\lambda^{(1)}}}\\epsilon_{\\mathbf{S}}=\\epsilon_{\\mathbf{S}} \\nonumber, \\label{eq:pseudo_s_sqrt_norm}\n\\end{align}\nwhere $\\sigma_{\\min}$ denotes the minimum eigenvalue. \nThus, $\\sqrt{\\lambda^{(1)}\\lambda^{(2)}}$ is an $\\epsilon_{\\mathbf{S}}$-pseudo-eigenvalue of $\\mathbf{S}$, where $\\psi^{(1)}$ is a corresponding $\\epsilon_{\\mathbf{S}}$-pseudo-eigenvector.\n\\end{proof}\n\nNote that the above proof shows that $\\psi^{(1)}$ is a pseudo-eigenvector of $\\mathbf{S}$, when $\\mathbf{S}$ is defined as the midpoint of the geodesic curve connecting $\\mathbf{W}_1$ and $\\mathbf{W}_2$ (by setting $p=0.5$).\nHowever, due to the decomposition in \\eqref{eq:pseudo_s_3}, the proof is not compatible with definitions of $\\mathbf{S}$ at other points $p \\in (0,1)$ along the geodesic path.\nFor such cases, a different proof is required, specifically, without using the algebraic relationship in \\eqref{eq:pseudo_s_3} that leads to \\eqref{eq:pseudo_s_sqrt_eq}. 
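\n\nTo make these spectral properties concrete, the following minimal numerical sketch (not part of the original derivation; the matrix size, the eigenvalues, and the perturbation are arbitrary choices for illustration) builds two SPD matrices with a strictly common eigenvector, computes $\\mathbf{S}$ and $\\mathbf{F}$ through the equivalent forms used above, and checks the eigenvalues predicted by Theorem \\ref{theo:S_eigs} and Theorem \\ref{theo:A_eigs}; the last lines rotate the corresponding eigenvector of $\\mathbf{W}_2$ by a small angle, in the spirit of Theorem \\ref{prop:pseudo_sa}.\n\\begin{verbatim}\nimport numpy as np\n\ndef fun_spd(M, f):\n    # Apply a scalar function f to a symmetric positive definite matrix M.\n    w, V = np.linalg.eigh(M)\n    return (V * f(w)) @ V.T\n\ndef operators_S_F(W1, W2):\n    # S: geodesic midpoint of W1 and W2; F: the log-based operator.\n    W1h, W1ih = fun_spd(W1, np.sqrt), fun_spd(W1, lambda x: x ** -0.5)\n    S = W1h @ fun_spd(W1ih @ W2 @ W1ih, np.sqrt) @ W1h\n    Sh, Sih = fun_spd(S, np.sqrt), fun_spd(S, lambda x: x ** -0.5)\n    F = Sh @ fun_spd(Sih @ W1 @ Sih, np.log) @ Sh\n    return S, F\n\nrng = np.random.default_rng(0)\nN = 6\nV, _ = np.linalg.qr(rng.standard_normal((N, N)))  # shared orthonormal eigenvectors\nlam1 = np.linspace(1.0, 0.4, N)\nlam2 = lam1.copy()\nlam2[2] = 0.5 * lam1[2]              # one common component, expressed differently\nW1, W2 = (V * lam1) @ V.T, (V * lam2) @ V.T\n\nS, F = operators_S_F(W1, W2)\npsi = V[:, 2]\nlamS = np.sqrt(lam1[2] * lam2[2])\nlamF = 0.5 * lamS * (np.log(lam1[2]) - np.log(lam2[2]))\nprint(np.linalg.norm(S @ psi - lamS * psi))   # ~1e-15, matches the predicted S eigenvalue\nprint(np.linalg.norm(F @ psi - lamF * psi))   # ~1e-15, matches the predicted F eigenvalue\n\n# Stability: rotate the corresponding eigenvector of W2 by a small angle,\n# so psi is only an approximate (pseudo-) eigenvector of S.\neps = 1e-3\nc, s = np.cos(eps), np.sin(eps)\nG = np.eye(N)\nG[2, 2] = G[3, 3] = c\nG[2, 3], G[3, 2] = s, -s\nW2p = ((V @ G) * lam2) @ (V @ G).T\nSp, _ = operators_S_F(W1, W2p)\nprint(np.linalg.norm(Sp @ psi - lamS * psi))  # on the order of eps, cf. the bound above\n\\end{verbatim}\nIn this sketch the two exact checks recover $\\sqrt{\\lambda^{(1)}\\lambda^{(2)}}$ and $\\frac{1}{2}\\sqrt{\\lambda^{(1)}\\lambda^{(2)}}(\\log(\\lambda^{(1)})-\\log(\\lambda^{(2)}))$ up to machine precision, while the perturbed check stays on the order of the perturbation, as the pseudo-spectrum analysis suggests.\n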
In the following statement, which is the counterpart of Theorem \\ref{theo:A_eigs} for the case where the eigenvector is not strictly common, we control the ``pseudo'' part by a straightforward perturbation argument.\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{theorem}\\label{prop:pseudo_apart}\nFor $k=1,2$, consider the eigendecomposition $\\mathbf{W}_k=\\mathbf{U}_k \\mathbf{L}_k \\mathbf{U}_k^\\top\\in \\mathbb{R}^{N\\times N}$, where $\\mathbf{L}_k:=\\text{diag}(\\lambda_1^{(k)},\\ldots,\\lambda_N^{(k)})$ so that $\\lambda_1^{(1)}\\geq\\ldots\\geq\\lambda_N^{(1)}$ and $\\mathbf{U}_k:=\\begin{bmatrix}\\psi_1^{(k)}&\\ldots&\\psi_N^{(k)}\\end{bmatrix}\\in \\mathcal{V}_{N,N}$. \nAssume the above eigendecomposition satisfies $\\mathbf{U}_2=\\mathbf{U}_1+\\epsilon \\mathbf{A}$, where $\\|\\mathbf{A}\\|=1$, $\\epsilon>0$ is a small constant, and $c^{-1}\\leq \\ell_i:= \\lambda_i^{(2)}\/\\lambda_i^{(1)}\\leq c$ for some constant $c\\geq1$ for all $i=1,\\ldots,N$. For any $i=1,\\ldots,N$, denote the spectral gap $\\gamma_i:=\\min_{k,\\,\\ell_k\\neq \\ell_i}|\\ell_i-\\ell_k|$.\n\nFix $j$. Then, for the $j$-th eigenpair of $\\mathbf{W}_1$, when $\\epsilon$ is sufficiently small, we have\n\\[\n\\left\\Vert\\left(\\mathbf{F}-0.5\\sqrt{\\lambda_j^{(1)}\\lambda_j^{(2)}}\\log\\left(\\frac{\\lambda_j^{(1)}}{\\lambda_j^{(2)}}\\right)\\mathrm{I}\\right)\\psi_j^{(1)}\\right\\Vert_2=O(\\epsilon)\\,,\n\\]\nwhere the implied constant depends on \n$\\frac{\\sqrt{c}\\ln c}{\\min_i\\left(\\gamma_i\\sqrt{\\lambda_i^{(1)}}\\right)}$.\n\n\\end{theorem}\n\n\n\n\\begin{proof} \n\nBy Proposition \\ref{prop_app:sa_equiv_forms} (see Appendix \\ref{app:add_state}), we rewrite the operator $\\mathbf{F}=\\mathbf{S}^{1\/2}\\log\\left(\\mathbf{S}^{-1\/2}\\mathbf{W}_1\\mathbf{S}^{-1\/2}\\right)\\mathbf{S}^{1\/2}$ as $\\mathbf{F} = \\log\\left(\\mathbf{W}_1\\mathbf{S}^{-1}\\right)\\mathbf{S}$. \nDenote \n\\[\n\\hat{\\lambda}_{\\mathbf{F}}=0.5\\sqrt{\\lambda_j^{(1)}\\lambda_j^{(2)}}\\log\\left(\\frac{\\lambda_j^{(1)}}{\\lambda_j^{(2)}}\\right)\\,.\n\\]\n{By Theorem \\ref{prop:pseudo_sa}, there exists $\\mathbf{E}_{\\mathbf{S}}$ with a sufficiently small norm, such that\n\\[\n \\mathbf{S}\\psi_j^{(1)} = \\left(\\sqrt{\\lambda_j^{(1)}\\lambda_j^{(2)}}\\mathrm{I}-\\mathbf{E}_{\\mathbf{S}}\\right)\\psi_j^{(1)}, \\\n\\]\nand therefore, we have}\n\\begin{align}\n\\left(\\mathbf{F}-\\hat{\\lambda}_{\\mathbf{F}}\\mathrm{I}\\right)\\psi_j^{(1)} = &\\, \\left(\\log\\left(\\mathbf{W}_1\\mathbf{S}^{-1}\\right)\\mathbf{S}-\\hat{\\lambda}_{\\mathbf{F}}\\mathrm{I}\\right)\\psi_j^{(1)}\n\\nonumber\\\\\n=&\\,\\sqrt{\\lambda_j^{(1)}\\lambda_j^{(2)}}\\left(\\log\\left(\\mathbf{W}_1\\mathbf{S}^{-1}\\right)-\\frac{\\hat{\\lambda}_{\\mathbf{F}}}{\\sqrt{\\lambda_j^{(1)}\\lambda_j^{(2)}}}\\mathrm{I}\\right)\\psi_j^{(1)}-\\log\\left(\\mathbf{W}_1\\mathbf{S}^{-1}\\right)\\mathbf{E}_{\\mathbf{S}}\\psi_j^{(1)}.\n\\label{eq:pseudo_a_1}\n\\end{align}\n\nSince both $\\mathbf{W}_1$ and $\\mathbf{W}_2$ are positive definite, they are invertible and also $\\left(\\mathbf{W}_2\\mathbf{W}_1^{-1}\\right)^{1\/2}$ is invertible. \nTherefore, by \\eqref{eq:equivexp}, $\\mathbf{W}_1\\mathbf{S}^{-1}=\\mathbf{W}_1\\mathbf{W}_1^{-1}\\left(\\mathbf{W}_2\\mathbf{W}_1^{-1}\\right)^{-1\/2}=\\left(\\mathbf{W}_2\\mathbf{W}_1^{-1}\\right)^{-1\/2}$. 
\nSubstituting this relationship into \\eqref{eq:pseudo_a_1} yields\n\\begin{align}\n\\left(\\mathbf{F}-\\hat{\\lambda}_{\\mathbf{F}}\\mathrm{I}\\right)\\psi^{(1)}_j\n= - \\frac{\\sqrt{\\lambda_j^{(1)}\\lambda_j^{(2)}}}{2}\n\\left(\\log\\left(\\mathbf{W}_2\\mathbf{W}_1^{-1}\\right)-\\log\\left(\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}\\right)\\mathrm{I}\\right)\\psi_j^{(1)}+\\frac{1}{2}\\log\\left(\\mathbf{W}_2\\mathbf{W}_1^{-1}\\right)\\mathbf{E}_{\\mathbf{S}}\\psi_j^{(1)}\\,. \\label{eq:pseudo_a_2}\n\\end{align}\nNow we control the right hand side term by term. \nRecall that for any analytic function $f$ over an open set in $\\mathbb{R}$ that contains the spectrum of $\\mathbf{W}_2\\mathbf{W}_1^{-1}$, we can define $f(\\mathbf{W}_2\\mathbf{W}_1^{-1})$.\nSince $\\mathbf{W}_2\\mathbf{W}_1^{-1}$ and $\\mathbf{W}_1^{-1\/2}\\mathbf{W}_2 \\mathbf{W}_1^{-1\/2}$ are similar, we have\n\\begin{equation}\nf(\\mathbf{W}_2\\mathbf{W}_1^{-1})=\\mathbf{W}_1^{1\/2}f\\left(\\mathbf{W}_1^{-1\/2}\\mathbf{W}_2 \\mathbf{W}_1^{-1\/2}\\right)\\mathbf{W}_1^{-1\/2}\\,,\\label{eq:pseudo_a_6}\n\\end{equation}\nand hence\n\\begin{align}\nf\\left(\\mathbf{W}_1^{-1\/2}\\mathbf{W}_2\\mathbf{W}_1^{-1\/2}\\right)\\psi_j^{(1)}= \\mathbf{W}_1^{-1\/2}f(\\mathbf{W}_2\\mathbf{W}_1^{-1})\\mathbf{W}_1^{1\/2}\\psi_j^{(1)}=\\sqrt{\\lambda_j^{(1)}}\\mathbf{W}_1^{-1\/2}f(\\mathbf{W}_2\\mathbf{W}_1^{-1})\\psi_j^{(1)}.\\label{eq:pseudo_a_6q}\n\\end{align}\n\nLet $\\mu_i$ and $v_i$ denote the eigenvalues and eigenvectors of the matrix $\\mathbf{W}_1^{-1\/2}\\mathbf{W}_2\\mathbf{W}_1^{-1\/2}$, respectively, for $i=1,\\dots,N$. %\nSince $\\{\\psi_j^{(1)}\\}_{j=1}^N$ and $\\{v_j\\}_{j=1}^N$ are both orthonormal bases of $\\mathbb{R}^N$, we have $\\psi_j^{(1)}=\\sum_i \\alpha_{ji} v_i$, where $\\alpha_{ji}\\in\\mathbb{R}$ and $\\sum_i \\alpha_{ji}^2=1$ for all $j$.\nBy \\eqref{eq:pseudo_a_6q}, we have \n\n\\begin{align}\n&\\frac{\\sqrt{\\lambda_j^{(1)}\\lambda_j^{(2)}}}{2}\\left(\\log(\\mathbf{W}_2\\mathbf{W}_1^{-1})-\\log\\left(\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}\\right)\\mathrm{I}\\right)\\psi_j^{(1)}\\nonumber\\\\\n=&\\,\\frac{\\sqrt{\\lambda_j^{(2)}}}{2}\\mathbf{W}_1^{1\/2}\\left(\\log(\\mathbf{W}_1^{-1\/2}\\mathbf{W}_2\\mathbf{W}_1^{-1\/2})-\\log\\left(\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}\\right)\\mathrm{I}\\right)\\psi_j^{(1)}\\nonumber\\\\\n=&\\,\\frac{\\sqrt{\\lambda_j^{(2)}}}{2}\\mathbf{W}_1^{1\/2}\\sum_{i=1}^N\\left(\\log(\\mathbf{W}_1^{-1\/2}\\mathbf{W}_2\\mathbf{W}_1^{-1\/2})-\\log\\left(\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}\\right)\\mathrm{I}\\right)\\alpha_{ji}v_i.\n\\end{align}\nUsing the fact that \n\\begin{eqnarray}\n \\left(\\log\\left(\\mathbf{W}_1^{-1\/2}\\mathbf{W}_2\\mathbf{W}_1^{-1\/2}\\right)-\\log\\left(\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}\\right)\\mathrm{I}\\right)v_i = \\left(\\log\\mu_i-\\log\\left(\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}\\right)\\right)v_i\\,,\\label{eq:pseudo_a_5}\n\\end{eqnarray}\nyields\n\\begin{align}\n&\\frac{\\sqrt{\\lambda_j^{(2)}}}{2}\\mathbf{W}_1^{1\/2}\\sum_{i=1}^N\\left(\\log(\\mathbf{W}_1^{-1\/2}\\mathbf{W}_2\\mathbf{W}_1^{-1\/2})-\\log\\left(\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}\\right)\\mathrm{I}\\right)\\alpha_{ji}v_i\\nonumber\\\\\n=&\\,\\frac{\\sqrt{\\lambda_j^{(2)}}}{2}\\mathbf{W}_1^{1\/2}\\sum_{i=1}^N\\left(\\log\\mu_i-\\log\\left(\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}\\right)\\right)\\alpha_{ji}v_i\\label{eq to control 00}\n\\end{align}\nTherefore, the squared $L^2$ norm of the first term in the right hand side of \\eqref{eq:pseudo_a_2} 
becomes\n\\begin{align}\n&\\left\\| \\frac{\\sqrt{\\lambda_j^{(1)}\\lambda_j^{(2)}}}{2}\\left(\\log(\\mathbf{W}_2\\mathbf{W}_1^{-1})-\\log\\left(\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}\\right)\\mathrm{I}\\right)\\psi_j^{(1)}\\right\\|^2_2\\nonumber\\\\\n=&\\left\\|\\frac{\\sqrt{\\lambda_j^{(2)}}}{2}\\mathbf{W}_1^{1\/2}\\sum_{i=1}^N\\left(\\log\\mu_i-\\log\\left(\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}\\right)\\right)\\alpha_{ji}v_i\\right\\|^2_2\n\\leq \\frac{\\lambda_j^{(2)}}{4}\\left\\|\\mathbf{W}_1^{1\/2}\\right\\|^2\\left\\|\\sum_{i=1}^N\\left(\\log\\mu_i-\\log\\left(\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}\\right)\\right)\\alpha_{ji}v_i\\right\\|^2_2\\nonumber\\\\\n=&\\frac{\\lambda_j^{(2)}}{4} \\sum_{i=1}^N\\alpha_{ji}^2\n\\left(\\log\\mu_i-\\log\\left(\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}\\right)\\right)^2\\, ,\\label{eq to control 1}\n\\end{align}\nwhere we use the fact that $\\{v_j\\}$ form an orthonormal basis and that the operator norm $\\left\\|\\mathbf{W}_1^{1\/2}\\right\\|^2=1$, since $\\mathbf{W}_1$ is normalized.\n\n\nNext, as in \\eqref{eq:pseudo_s_eq}, we set\n\\begin{align}\n\\mathbf{E}_{\\mathbf{S}}\\psi_j^{(1)}\n=\\lambda_j^{(1)}\\left((\\mathbf{W}_2\\mathbf{W}_1^{-1})^{1\/2}+\\sqrt{\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}}\\mathrm{I}\\right)^{-1}\\mathbf{E}\\psi_j^{(1)} \n\\underset{\\eqref{eq:pseudo_s_sqrt_eq}}{=}-\\lambda_j^{(1)}\\left((\\mathbf{W}_2\\mathbf{W}_1^{-1})^{1\/2}-\\sqrt{\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}}\\mathrm{I}\\right)\\psi_j^{(1)}\n\\end{align}\nfor some $\\mathbf{E}$ with a sufficiently small norm.\nSince $f(x)=\\log(x)\\sqrt{x}$ is analytic over an open set that contains the spectrum of $\\mathbf{W}_2\\mathbf{W}_1^{-1}$, by the same argument as that for \\eqref{eq to control 00}, we have \n\\begin{align}\n\\log\\left(\\mathbf{W}_2\\mathbf{W}_1^{-1}\\right)\\mathbf{E}_{\\mathbf{S}}\\psi_j^{(1)}&\\,=-\\lambda_j^{(1)}\\log\\left(\\mathbf{W}_2\\mathbf{W}_1^{-1}\\right)\\left((\\mathbf{W}_2\\mathbf{W}_1^{-1})^{1\/2}-\\sqrt{\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}}\\mathrm{I}\\right)\\psi_j^{(1)}\\nonumber\\\\\n&\\,=-\\sqrt{\\lambda_j^{(1)}} \\mathbf{W}_1^{1\/2}\\sum_{i=1}^N\\alpha_{ji}\\log(\\mu_i)\\left(\\sqrt{\\mu_i}-\\sqrt{\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}}\\right)v_i\\,,\\nonumber\n\\end{align}\nand hence the squared $L^2$ norm of the second term in the right hand side of \\eqref{eq:pseudo_a_2} becomes\n\\begin{align}\n\\left\\|\\log\\left(\\mathbf{W}_2\\mathbf{W}_1^{-1}\\right)\\mathbf{E}_{\\mathbf{S}}\\psi_j^{(1)}\\right\\|_2^2 \\leq \\lambda_j^{(1)} \\sum_{i=1}^N \n\\alpha^2_{ji}(\\log \\mu_i)^2\\left(\\sqrt{\\mu_i}-\\sqrt{\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}}\\right)^2\\,,\\label{eq to control 2}\n\\end{align}\nwhere we again use the fact that $\\{v_j\\}$ form an orthonormal basis and that the operator norm $\\left\\|\\mathbf{W}_1^{1\/2}\\right\\|^2=1$.\nTo finish the proof, we control $\\alpha_{ji}$ and the relationship between $\\mu_i$ and $\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}$ in \\eqref{eq to control 1} and \\eqref{eq to control 2} using matrix perturbation theory.\nBy a direct expansion, we have\n\\begin{align}\n\\mathbf{W}_1^{-1\/2}\\mathbf{W}_2\\mathbf{W}_1^{-1\/2}=\\mathbf{U}_1 \\mathbf{L}_2 \\mathbf{L}_1^{-1} \\mathbf{U}_1^\\top+\\epsilon \\mathbf{B}\\,,\n\\end{align}\nwhere $\\mathbf{B}=\\mathbf{U}_1 \\mathbf{L}_1^{-1\/2} \\mathbf{U}_1^\\top(\\mathbf{A}\\mathbf{L}_2\\mathbf{U}_1^\\top+\\mathbf{U}_1\\mathbf{L}_2\\mathbf{A}^\\top)\\mathbf{U}_1\\mathbf{L}_1^{-1\/2}\\mathbf{U}_1^\\top+\\epsilon 
\\mathbf{U}_1\\mathbf{L}_1^{-1\/2}\\mathbf{U}_1^\\top \\mathbf{A} \\mathbf{L}_2 \\mathbf{A}^\\top \\mathbf{U}_1 \\mathbf{L}_1^{-1\/2}\\mathbf{U}_1^\\top$. \nTo simplify the notation, we assume that the diagonal entries of $\\mathbf{L}_2\\mathbf{L}_1^{-1}=\\text{diag}\\left(\\frac{\\lambda_1^{(2)}}{\\lambda_1^{(1)}},\\ldots,\\frac{\\lambda_N^{(2)}}{\\lambda_N^{(1)}}\\right)=\\text{diag}\\left(\\ell_1,\\ldots,\\ell_N\\right)$ are all distinct; otherwise, the following argument could be carried out with eigenprojections. Thus, by a standard perturbation argument, when $\\epsilon$ is sufficiently small, we have \n\\begin{equation}\nv_i=\\psi_i^{(1)}+\\epsilon\\sum_{k\\neq i}\\frac{\\langle \\psi_k^{(1)}, \\mathbf{B}\\psi_i^{(1)}\\rangle}{\\lambda_i^{(2)}\/\\lambda_i^{(1)}-\\lambda_k^{(2)}\/\\lambda_k^{(1)}}\\psi_k^{(1)}+O(\\epsilon^2),\\ \\ \n\\mu_i=\\frac{\\lambda_i^{(2)}}{\\lambda_i^{(1)}}+\\epsilon \\langle \\psi_i^{(1)}, \\mathbf{B}\\psi_i^{(1)}\\rangle +O(\\epsilon^2)\\label{relationship vi and psii}\n\\end{equation}\nfor each $i=1,\\ldots,N$.\nNote that\n\\[\nI_N=\\mathbf{U}_2^\\top\\mathbf{U}_2=(\\mathbf{U}_1+\\epsilon\\mathbf{A})^\\top(\\mathbf{U}_1+\\epsilon\\mathbf{A})=I_N+\\epsilon(\\mathbf{A}^\\top\\mathbf{U}_1+\\mathbf{U}_1^\\top\\mathbf{A}) +\\epsilon^2 \\mathbf{A}^\\top \\mathbf{A}\\,,\n\\]\nso we have\n\\begin{equation}\n\\mathbf{A}^\\top\\mathbf{U}_1=-\\mathbf{U}_1^\\top\\mathbf{A} -\\epsilon \\mathbf{A}^\\top \\mathbf{A}\\,.\\label{AU=-UA-eps AA}\n\\end{equation}\nThus, by a direct expansion with the definition of $\\mathbf{B}$, we have \n\\begin{align}\n\\langle \\psi_k^{(1)}, \\mathbf{B}\\psi_i^{(1)}\\rangle\n=&\\,(\\lambda_i^{(1)}\\lambda_k^{(1)})^{-1\/2}e_k^\\top \\mathbf{U}_1^\\top(\\mathbf{A}\\mathbf{L}_2\\mathbf{U}_1^\\top+\\mathbf{U}_1\\mathbf{L}_2\\mathbf{A}^\\top)\\mathbf{U}_1e_i +O(\\epsilon)\\nonumber\\\\\n=&\\, (\\lambda_i^{(1)}\\lambda_k^{(1)})^{-1\/2} (\\lambda_i^{(2)}e_k^\\top \\mathbf{U}_1^\\top\\mathbf{A} e_i + \\lambda_k^{(2)}e_k^\\top \\mathbf{A}^\\top\\mathbf{U}_1 e_i)\n+O(\\epsilon)\\nonumber\\\\\n=&\\,(\\lambda_i^{(1)}\\lambda_k^{(1)})^{-1\/2}\\left((\\lambda_i^{(2)}-\\lambda_k^{(2)})e_k^\\top\\mathbf{U}_1^\\top\\mathbf{A}e_i-\\epsilon \\lambda_k^{(2)} e_k^\\top\\mathbf{A}^\\top\\mathbf{A}e_i\\right)+(\\lambda_i^{(1)}\\lambda_k^{(1)})^{-1\/2}O(\\epsilon)\\nonumber\\\\\n=&\\,(\\lambda_i^{(1)}\\lambda_k^{(1)})^{-1\/2}(\\lambda_i^{(2)}-\\lambda_k^{(2)})e_k^\\top\\mathbf{U}_1^\\top\\mathbf{A}e_i+O(\\epsilon)\\,,\\nonumber\n\\end{align}\nwhere the second-to-last equality comes from \\eqref{AU=-UA-eps AA}, the last equality is due to $|e_k^\\top \\mathbf{A}^\\top \\mathbf{A}e_i|\\leq 1$, since $\\|\\mathbf{A}\\|=1$, and the constant in this derivation depends on $(\\lambda_i^{(1)}\\lambda_k^{(1)})^{-1\/2}$.\nThus, for the $j$-th eigenpair we are concerned with, we have $\\mu_j=\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}+O(\\epsilon^2)$, since $\\langle \\psi_j^{(1)}, \\mathbf{B}\\psi_j^{(1)}\\rangle=O(\\epsilon)$.\nFor the eigenvector, when $k\\neq i$,\nwe have\n\\begin{align}\n\\left|\\frac{\\langle \\psi_k^{(1)}, \\mathbf{B}\\psi_i^{(1)}\\rangle}{\\lambda_i^{(2)}\/\\lambda_i^{(1)}-\\lambda_k^{(2)}\/\\lambda_k^{(1)}}\\right|\n\\leq &\\,\\frac{(\\lambda_i^{(1)}\\lambda_k^{(1)})^{-1\/2}\\left|\\lambda_i^{(2)}-\\lambda_k^{(2)}\\right|}{\\left|\\lambda_i^{(2)}\/\\lambda_i^{(1)}-\\lambda_k^{(2)}\/\\lambda_k^{(1)}\\right|}\\left|e_k^\\top \\mathbf{U}_1^\\top\\mathbf{A}e_i\\right|+O(\\epsilon)\\nonumber\\\\\n\\leq &\\,\\frac{1}{\\gamma_i} 
\\frac{\\left|(\\lambda_i^{(2)}-\\lambda_k^{(2)})\\right|}{\\sqrt{\\lambda_i^{(1)}\\lambda_k^{(1)}}} \\left|e_k^\\top \\mathbf{U}_1^\\top\\mathbf{A}e_i\\right|+O(\\epsilon)\\,,\\nonumber\n\\end{align}\nwhere the constant depends on $\\frac{1}{\\sqrt{\\lambda_i^{(1)}\\lambda_k^{(1)}} \\gamma_i}$.\n\nBy \\eqref{relationship vi and psii} we have for $j\\neq i$:\n\\begin{align}\n\t\\alpha_{ji} &= \\langle \\psi_j^{(1)}, v_i \\rangle = \\langle \\psi_j^{(1)}, \\psi_i^{(1)}+\\epsilon\\sum_{k\\neq i}\\frac{\\langle \\psi_k^{(1)}, \\mathbf{B}\\psi_i^{(1)}\\rangle}{\\lambda_i^{(2)}\/\\lambda_i^{(1)}-\\lambda_k^{(2)}\/\\lambda_k^{(1)}}\\psi_k^{(1)}+O(\\epsilon^2) \\rangle\\nonumber \\\\\n\t&= \\epsilon\\sum_{k\\neq i}\\frac{\\langle \\psi_k^{(1)}, \\mathbf{B}\\psi_i^{(1)}\\rangle}{\\lambda_i^{(2)}\/\\lambda_i^{(1)}-\\lambda_k^{(2)}\/\\lambda_k^{(1)}}\\langle \\psi_j^{(1)}, \\psi_k^{(1)}\\rangle +O(\\epsilon^2)\\nonumber \\\\\n\t&= \\epsilon \\frac{\\langle \\psi_j^{(1)}, \\mathbf{B}\\psi_i^{(1)}\\rangle}{\\lambda_i^{(2)}\/\\lambda_i^{(1)}-\\lambda_j^{(2)}\/\\lambda_j^{(1)}} +O(\\epsilon^2)\\, .\\nonumber\n\\end{align}\nCombining this with the inequality above, we have for $j\\neq i$:\n\\begin{align}\n\t\\left| \\alpha_{ji} \\right| \\leq \\epsilon\n\t\\frac{1}{\\gamma_i} \\frac{\\left|(\\lambda_i^{(2)}-\\lambda_j^{(2)})\\right|}{\\sqrt{\\lambda_i^{(1)}\\lambda_j^{(1)}}}\n\t\\left|e_j^\\top\\mathbf{U}_1^\\top\\mathbf{A}e_i \\right| + O(\\epsilon^2)\\, ,\n\\end{align}\nwhere the constant depends on $\\frac{1}{\\sqrt{\\lambda_i^{(1)}\\lambda_j^{(1)}} \\gamma_i}$.\n\nWe thus have for \\eqref{eq to control 1}:\n\\begin{align}\n &\\frac{\\lambda_j^{(2)}}{4}\\sum_{i=1}^N\\alpha_{ji}^2\n \\left(\\log\\mu_i-\\log\\left(\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}\\right)\\right)^2\\nonumber\\\\\n=&\\,\\frac{\\lambda_j^{(2)}}{4}\\alpha_{jj}^2\n\\left(\\log\\mu_j-\\log\\left(\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}\\right)\\right)^2+ \\frac{\\lambda_j^{(2)}}{4}\\sum_{i\\neq j}\\alpha_{ji}^2\n\\left(\\log\\mu_i-\\log\\left(\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}\\right)\\right)^2\n=O(\\epsilon^2)\\,,\\label{final bound part 1}\n\\end{align}\nwhere the constant depends on $\\frac{c(\\ln c)^2}{\\min_i\\{\\gamma_i^2\\lambda_i^{(1)}\\}}$.\n\nThis is because the first term is $O(\\epsilon^2)$ by\n\\begin{align}\n\t\\alpha_{jj}^2 \\left(\\log\\mu_j-\\log\\left(\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}\\right)\\right)^2 = &\\, \\alpha_{jj}^2 \\left(\\epsilon \\langle \\psi_j^{(1)}, \\mathbf{B}\\psi_j^{(1)}\\rangle \\frac{\\lambda_j^{(1)}}{\\lambda_j^{(2)}}\\right)^2 + O(\\epsilon^2) \\\\\n\t= &\\, \\epsilon^2 \\alpha_{jj}^2 \\left( \\frac{\\lambda_j^{(1)}}{\\lambda_j^{(2)}} \\right)^2 (\\lambda_j^{(1)}\\lambda_j^{(1)})^{-1} \\left((\\lambda_j^{(2)}-\\lambda_j^{(2)})e_j^\\top\\mathbf{U}_1^\\top\\mathbf{A}e_j+O(\\epsilon)\\right) ^2 + O(\\epsilon^2)\n\\end{align}\nusing the Taylor expansion $\\log (x+h)=\\log(x)+h\/x+O(h^2)$ for $x>0$ and sufficiently small $h$.\nIn addition, noting that $\\left(\\log\\mu_i-\\log\\left(\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}\\right)\\right)^2\\leq 5(\\ln c)^2$, we have that the second term is bounded by:\n\\begin{align}\n\t\\frac{\\lambda_j^{(2)}}{4}\\sum_{i\\neq j}\\alpha_{ji}^2 \\left(\\log\\mu_i-\\log\\left(\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}\\right)\\right)^2 & \\leq \\frac{\\lambda_j^{(2)}}{4}5(\\ln c)^2 \\sum_{i\\neq j}\\alpha_{ji}^2 \\nonumber \\\\\n\t& \\leq \\frac{5\\lambda_j^{(2)}}{4}(\\ln c)^2 \\sum_{i\\neq j} \\epsilon^2 \n\t\\frac{1}{\\gamma_i^2} 
\\frac{(\\lambda_i^{(2)}-\\lambda_j^{(2)})^2}{\\lambda_i^{(1)}\\lambda_j^{(1)}}\n\t\\left|e_j^\\top\\mathbf{U}_1^\\top\\mathbf{A}e_i \\right|^2 \\nonumber\\\\\n\t& \\leq \\frac{5}{4}\\epsilon^2(\\ln c)^2\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}} \\sum_{i\\neq j} \n\t\\frac{1}{\\gamma_i^2} \\frac{(\\lambda_i^{(2)}-\\lambda_j^{(2)})^2}{\\lambda_i^{(1)}}\\left|e_j^\\top\\mathbf{U}_1^\\top\\mathbf{A}e_i \\right|^2 \\, .\n\\end{align}\nContinuing with a few coarse steps:\n\\begin{align}\n \\frac{\\lambda_j^{(2)}}{4}\\sum_{i\\neq j}\\alpha_{ji}^2 \\left(\\log\\mu_i-\\log\\left(\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}\\right)\\right)^2 & \\leq \\frac{5}{4}\\epsilon^2\\frac{c(\\ln c)^2}{\\min_i\\{\\gamma_i^2\\lambda_i^{(1)}\\}} \\sum_{i\\neq j} \n\t\\left|e_j^\\top\\mathbf{U}_1^\\top\\mathbf{A}e_i \\right|^2\\nonumber\\\\\n\t& \\leq \\frac{5}{2}\\epsilon^2\\frac{c(\\ln c)^2}{\\min_i\\{\\gamma_i^2\\lambda_i^{(1)}\\}}\n\\end{align}\nwhere $\\left|\\lambda_i^{(2)}-\\lambda_j^{(2)}\\right|^2\\leq 1$ due to the normalization of $\\mathbf{W}_1$ and $\\mathbf{W}_2$, and $\\sum_{i\\neq j} \\left|e_j^\\top\\mathbf{U}_1^\\top\\mathbf{A}e_i \\right|^2 \\leq 2 \\| \\mathbf{A} e_j \\|^2 \\leq 2$ due to $\\| \\mathbf{A}\\|=1$.\n\n\nSimilarly, it can be shown for \\eqref{eq to control 2} that\n\\[\n\\sum_{i=1}^N \\alpha^2_{ji}(\\log \\mu_i)^2\\left(\\sqrt{\\mu_i}-\\sqrt{\\frac{\\lambda_j^{(2)}}{\\lambda_j^{(1)}}}\\right)^2=O(\\epsilon^2)\\,,\n\\]\nand the proof is concluded.\n\\end{proof}\n\n\\begin{remark}\nNote that the implied constant $\\frac{\\sqrt{c}\\ln c}{\\min_i\\left(\\gamma_i\\sqrt{\\lambda_i^{(1)}}\\right)}$ might be large, narrowing the scope of Theorem \\ref{prop:pseudo_apart}. Particularly in our context, the matrix $\\mathbf{W}_1$ (and $\\mathbf{W}_2$) tends to be close to low rank, for which $\\min_i\\left(\\gamma_i\\sqrt{\\lambda_i^{(1)}}\\right)$ is small.\n\\end{remark}\n\\begin{remark}\nEmpirically, we observe that $\\psi_j^{(1)}=\\sum_i \\alpha_{ji} v_i \\simeq \\sum_{i \\sim j} \\alpha_{ji} v_i$, i.e., only a small number of expansion coefficients $\\alpha_{ji}$ are non-negligible, for which $\\lambda_i^{(1)}$ is close to $\\lambda_j^{(1)}$. \nTherefore, in practice, the implied constant depends on $1\/\\min_{i \\sim j}\\left(\\gamma_i\\sqrt{\\lambda_i^{(1)}}\\right)$.\nSince we are usually interested in principal components $\\psi_j^{(1)}$ (i.e., with large $\\lambda_j^{(1)}$), the implied constant is typically sufficiently small. \n\\end{remark}\n}\n\n\n\n\n\\section{Conclusions\\label{sec:conc}}\n\n\nIn this work, we introduced a new multi-resolution analysis of temporal high-dimensional data with an underlying time-varying manifold structure. Our analysis is based on the definition of two new composite operators that represent the relation of two aligned datasets jointly sampled from two diffeomorphic manifolds in terms of their spectral components. Specifically, we showed that these operators not only recover but also distinguish different types of common spectral components of the underlying manifolds and that each operator emphasizes different properties. \nOne operator was shown to emphasize common components that are similarly expressed in the two manifolds, and the other operator was shown to emphasize the common components that are expressed with significantly different eigenvalues.\nIn the context of spatiotemporal data analysis, the application of the new operators is analogous to low-pass and high-pass filters. 
Therefore, by applying them in multiple resolutions, we devise a wavelet-like analysis framework.\nWe demonstrated this framework on a dynamical system describing a transitory double-gyre flow, showing that such a framework can be used for the analysis of non-stationary multivariate time-series.\n\nIn addition to spatiotemporal analysis, we showed that the new composite operators may be useful for multimodal data analysis as well. Specifically, we showed application to remote sensing, demonstrating the recovery of meaningful properties expressed by different sensing modalities.\n\n\nIn the future, we plan to extend the definition of the operators $\\mathbf{S}$ and $\\mathbf{F}$ from two to more time frames (datasets). \nIn addition, since our analysis results in a large number of vectors representing the common components at different scales and time-points, we plan to develop compact representations of these components, which may lead to improved, more conclusive results for highly non-stationary time-series.\n\nFinally, we remark that in our model, we represent each sample by an undirected weighted graph, and then, analyze the temporal sequence of graphs. Another interesting future work would be to investigate our Riemannian composite operators in the context of graph neural networks (GNNs) and graph convolutional networks (GCNs) \\cite{scarselli2008graph,kipf2016semi,bronstein2017geometric}.\n\n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nNeural machine translation (NMT) systems \nare sensitive to the data they are trained on. \nThe available parallel corpora come from various genres and have different stylistic variations and \nsemantic ambiguities. \nWhile\nsuch data is often beneficial for a general purpose machine translation system, a problem arises when building systems for specific domains such as lectures \\cite{guzman-sajjad-etal:iwslt13,cettolo2014report}, patents \\cite{Fujii10overviewof} or medical text \\cite{bojar-EtAl:2014:W14-33}, where either the in-domain bilingual text does not exist or is available in small \nquantities.\n\nDomain adaptation aims to preserve the identity of the in-domain data while exploiting the out-of-domain data in favor of the in-domain data and avoiding possible drift towards out-of-domain jargon and style. \nThe most commonly used approach to train a domain-specific neural MT system is to fine-tune an existing model (trained on generic data) with the new domain\n\\cite{luong-manning:iwslt15,FreitagA16,ServanCS16,ChuDK17} or \nto add domain-aware tags in building a concatenated system \\cite{KobusCS16}. \n\\cite{wees:da:emnlp17} proposed a gradual fine-tuning method that starts training with complete in- and out-of-domain data and gradually reduces the out-of-domain data for next epochs. Other approaches that have been recently proposed for domain adaptation of neural machine translation are instance weighting \\cite{wang:da:emnlp17,chen:da:wnmt17} and data selection \\cite{wang:da:acl17}. \n\n\n\nIn this paper we explore NMT in a multi-domain scenario. 
\nConsidering a small in-domain corpus and a number of out-of-domain corpora, we target questions like: \n\n\\begin{itemize}\n\n\\item What are the different ways to combine multiple domains during a training process?\n\\item What is the best strategy to build an optimal in-domain system?\n\\item Which training strategy results in a robust system?\n\\item Which strategy should be used to build a decent in-domain system given limited time?\n\\end{itemize}\nTo answer these, we try the following approaches: \\textbf{i) data concatenation:} train a system by concatenating all the available in-domain and out-of-domain data; \\textbf{ii) model stacking:} build NMT in an online fashion starting from the most distant domain, fine-tune on the closer domain and finish by fine-tuning the model on the in-domain data; \\textbf{iii) data selection:} select a certain percentage of the available out-of-domain corpora that is closest to the in-domain data and use it for training the system; \\textbf{iv) multi-model ensemble:} separately train models for each available domain and combine them during decoding using balanced or weighted averaging.\nWe experiment with Arabic-English and German-English language pairs. Our results demonstrate the following findings: \n\\begin{itemize}\n\\item A concatenated system fine-tuned on the in-domain data achieves the most optimal in-domain system.\n\\item Model stacking works best when\nstarting from the furthest domain, fine-tuning on closer domains and then\nfinally fine-tuning on the in-domain data.\n\\item A concatenated system on all available data results in the most robust system.\n\\item Data selection\ngives a decent trade-off between translation quality and training time.\n\\item Weighted ensemble is helpful when several individual models have been already\ntrained\nand there is no time for retraining\/fine-tuning.\n\\end{itemize}\n\n\n\n\n\nThe paper is organized as follows:\nSection \\ref{sec:approaches} describes the adaptation approaches explored in this work. We present experimental design in Section \\ref{sec:experiments}. Section \\ref{sec:results} summarizes the results and Section \\ref{sec:conclusion} concludes.\n\n\n\n\n\n\n\\section{Approaches}\n\\label{sec:approaches}\n\\begin{figure} \n\t\\centering\n\t\\includegraphics[width=\\linewidth]{data-approaches.png}\n\t\\caption{Multi-domain training approaches}\n\t\\label{fig:data-approaches}\n\\end{figure}\n\nConsider an in-domain data D$_i$ and a set of out-of-domain data D$_o$ = {D$_{o_1}$, D$_{o_2}$, ..D$_{o_n}$}. We explore several methods to benefit from the available data with an aim to optimize translation quality on the in-domain data. Specifically, we try data concatenation, model stacking, data selection and ensemble. Figure \\ref{fig:data-approaches} presents them graphically. In the following, we describe each approach briefly.\n\n\\subsection{Concatenation}\n\nA na\\\"ive yet commonly used method when training both statistical \\cite{williams-EtAl:2016:WMT}\\footnote{State-of-the-art baselines are trained on plain concatenation of the data with MT feature functions (such as Language Model) skewed towards in-domain data, through interpolation.} and neural machine translation systems \\cite{sennrich-haddow-birch:2016:WMT} is to simply concatenate all the bilingual parallel data before training the system. 
During training an in-domain validation set is used to guide the training loss.\nThe resulting system has an advantage of seeing a mix of all available data at every time interval, and is thus robust to handle heterogeneous test data.\n\n\n\\subsection{Fine Tuning and Model Stacking}\n\nNeural machine translation follows an online training strategy. It sees only a small portion of the data in every training step and estimates the value of network parameters based on that portion. Previous work has exploited this strategy in the context of domain adaptation. \\cite{luong-manning:iwslt15} trained an initial model on an out-of-domain data and later extended the training on in-domain data. In this way the final model parameters are tuned towards the in-domain data. The approach is referred as \\emph{fine-tuning} later on.\n\nSince in this work we deal with several domains, we propose a stacking method that uses \nmulti-level\nfine-tuning to train a system.\nFigure \\ref{fig:data-approaches} \n(second row) shows the complete procedure: first, \nthe\nmodel is trained on the out-of-domain data D$_{o_1}$ for $N$ epochs; training is resumed from $N+1$-th epoch \nto the $M$-th epoch\nbut using the \nnext \navailable out-of-domain data D$_{o_2}$; repeat the process till all of the available out-of-domain corpora have been used; in the last step, resume training on the in-domain data D$_i$ for a few \nepochs.\nThe resulting model has seen all of the available data as in the case of \nthe \ndata concatenation approach. However, here the system learns from the data domain by domain. We call this technique \\emph{model stacking}. \n\nThe model stacking and fine-tuning approaches \nhave the \nadvantage of seeing the in-domain data in the end \nof training, \nthus making the system parameters more optimized for the in-domain data. They also provide flexibility in extending an existing model to any new domain without having to retrain the complete system again on the available corpora.\n\n\n\\subsection{Data Selection}\n\n\nBuilding a model, whether concatenated or stacked, on all the available data is computationally expensive. \nAn alternative approach is \\emph{data selection}, where we select a part of the out-of-domain data which is close to the in-domain data for training. The intuition here is two \nfold:\ni) \nthe \nout-of-domain data is huge and takes a lot of time to train on, and ii) not all parts of the out-of-domain data are beneficial for the in-domain data. \nTraining only on a selected part of the out-of-domain data reduces the training time significantly \nwhile at the same time creating\na model closer to the in-domain. \n\nIn this work, we use \nthe\nmodified Moore-Lewis \\cite{Axelrod_2011_emnlp} for data selection.\nIt trains in- and out-of-domain n-gram models and then ranks sequences in the out-of-domain data based on cross-entropy difference. The out-of-domain sentences below a certain threshold \nare selected for training. Since we are dealing with several out-of-domain corpora, we apply data selection separately on each of them and build a concatenated system using \nin-domain\nplus selected out-of-domain data as shown in Figure \\ref{fig:data-approaches}.\nData selection significantly reduces data size thus \nimproving training time for NMT. However, finding the optimal threshold to filter data is a cumbersome process. Data selection using joint neural networks has been explored in \\cite{durraniEtAl:MT-Summit2015}. 
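\n\nFor illustration, a minimal sketch of the cross-entropy difference ranking described above is given below; it is only a toy version in which a unigram language model with add-one smoothing stands in for proper in- and out-of-domain n-gram models, and the function names and the default \\texttt{keep\\_ratio} value are ours rather than part of the cited method.\n\\begin{verbatim}\nimport math\nfrom collections import Counter\n\ndef unigram_lm(corpus):\n    # Tiny stand-in for the n-gram LMs used by modified Moore-Lewis.\n    counts = Counter(w for sent in corpus for w in sent.split())\n    total, vocab = sum(counts.values()), len(counts) + 1\n    return lambda w: math.log((counts[w] + 1) \/ (total + vocab))\n\ndef cross_entropy(lm, sent):\n    words = sent.split()\n    return -sum(lm(w) for w in words) \/ max(len(words), 1)\n\ndef moore_lewis_select(in_domain, out_domain, keep_ratio=0.05):\n    lm_in, lm_out = unigram_lm(in_domain), unigram_lm(out_domain)\n    # Smaller cross-entropy difference = closer to the in-domain data.\n    ranked = sorted(out_domain,\n                    key=lambda s: cross_entropy(lm_in, s) - cross_entropy(lm_out, s))\n    return ranked[:int(keep_ratio * len(out_domain))]\n\\end{verbatim}\nHere \\texttt{keep\\_ratio} plays the role of the selection threshold; the selected sentences are then concatenated with the in-domain data for training, as described above.\n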
We explore data selection as an alternative to the above mentioned techniques.\n\n\\subsection{Multi-domain Ensemble}\n\nOut-of-domain data is generally available in larger quantity. Training a concatenated system whenever a new in-domain becomes available \nis expensive in terms of both time and computation. An alternative to fine-tuning the system with new in-domain is to do ensemble of the new model with the existing model.\nThe ensemble approach brings the flexibility to use them\nduring decoding without a need of retraining and fine-tuning.\n\nConsider $N$ models that we would like to use to generate translations. For each decoding step, we use the scores over the vocabulary from each of these $N$ models and combine them by averaging. We then use these averaged scores to choose the output word(s) for each hypothesis in our beam. The intuition is to combine the knowledge of the $N$ models to generate a translation. We refer to this approach as \\emph{balanced ensemble} later on. Since \nhere\nwe deal with several different domains, averaging scores of all the models equally may not result in optimum performance. We explore a variation of balanced ensemble called \\emph{weighted ensemble} that performs a weighted average of these scores, where the weights can be pre-defined or learned on a development set.\n \nBalanced ensemble using several models of a single training run saved at different iterations has shown to improve performance by 1-2 BLEU points \\cite{sennrich-haddow-birch:2016:WMT}. Here our goal is not to improve the best system but to benefit from individual models built using several domains during a single decoding process. We experiment with both balanced and weighted ensemble under \nthe\nmulti-domain condition only.\\footnote{Weighted fusion of Neural Networks trained on different domains has been explored in \\cite{durrani-EtAl:2016:COLING} for phrase-based SMT. Weighted training for Neural Network Models has been proposed in \\cite{joty-etAL:2015:EMNLP}.} \n\n\n\n\\section{Experimental Design}\n\\label{sec:experiments}\n\n\\subsection{Data}\nWe experiment with Arabic-English and German-English language pairs \nusing \nthe \nWIT$^3$ TED corpus\n\\cite{cettolol:SeMaT:2016} made available for IWSLT 2016 as our in-domain data. For Arabic-English, we take the UN corpus \\cite{ZiemskiJP16} and the OPUS corpus \\cite{LISON16.947} as out-of-domain corpora.\nFor German-English, we use the Europarl (EP), and the Common Crawl (CC) corpora made available for the {\\it $1^{st}$} Conference on Statistical Machine Translation\\footnote{http:\/\/www.statmt.org\/wmt16\/translation-task.html} as out-of-domain corpus. We tokenize Arabic, German and English using the default \\emph{Moses} tokenizer. We did not do morphological segmentation of Arabic. Instead we apply sub-word based segmentation \\cite{sennrich-haddow-birch:2016:P16-12} that implicitly segment as part of the compression process.\\footnote{\\cite{sajjad-etal:2017:ACLShort} showed that using BPE performs comparable to\nmorphological tokenization \\cite{abdelali-EtAl:2016:N16-3} in Arabic-English machine translation.} \nTable \\ref{tab:corpusstats} shows the data statistics after running the Moses tokenizer.\n\nWe use a concatenation of dev2010 and tst2010 sets for validation during\ntraining. Test sets tst2011 and tst2012 served as development sets\nto find the best model for fine-tuning and tst2013 and tst2014 are used for evaluation. We use BLEU \\cite{Papineni:Roukos:Ward:Zhu:2002} to measure performance. 
\n\n\\begin{table}\n\\centering\n\\begin{tabular}{lrrr}\n\\toprule\n\n\\multicolumn{4}{c}{\\bf Arabic-English} \\\\\nCorpus & Sentences & Tok${_{ar}}$ & Tok${_{en}}$ \\\\\n\\midrule\nTED & 229k & 3.7M & 4.7M \\\\\nUN & 18.3M & 433M & 494M \\\\\nOPUS & 22.4M & 139M & 195M \\\\\n\\midrule\n\\multicolumn{4}{c}{\\bf German-English} \\\\\nCorpus & Sentences & Tok${_{de}}$ & Tok${_{en}}$ \\\\\n\\midrule\nTED & 209K & 4M & 4.2M \\\\\nEP & 1.9M & 51M & 53M \\\\\nCC & 2.3M & 55M & 59M \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{\\label{tab:corpusstats} Statistics of the Arabic-English and German-English training corpora in terms of Sentences and Tokens. EP = Europarl, CC = Common Crawl, UN = United Nations.}\n\\end{table}\n\n\\subsection{System Settings}\n\nWe use the Nematus tool \\cite{sennrich-EtAl:2017:EACLDemo} to train a 2-layered LSTM encoder-decoder with attention \\cite{bahdanau:ICLR:2015}. We use the default settings: embedding layer size: 512, hidden layer size: 1000. We limit the vocabulary to 50k words\nusing BPE \\cite{sennrich-haddow-birch:2016:P16-12} with 50,000 operations. \n\n\\section{Results}\n\\label{sec:results}\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{ar-curves-shorthand.png}\n\t\\caption{Arabic-English system development life line evaluated on development set tst-11 and tst-12. Here, \\texttt{ALL} refers to \\texttt{UN+OPUS+TED}, and \\texttt{OD} refers to \\texttt{UN+OPUS}}\n\t\\label{fig:ar-curves}\n\\end{figure*}\n\nIn this section, we empirically compare several approaches to combine in- and out-of-domain data to train an NMT system. Figure \\ref{fig:ar-curves} and Figure \\ref{fig:de-curves} show the learning curve on development sets using various approaches mentioned in this work. We will go through them individually later in this section.\n\n\n\n\\subsection{Individual Systems}\n\nWe trained systems on each domain individually (for 10 epochs)\\footnote{For German-English, we ran the models until they converged because the training data is much smaller compared to Arabic-English direction} and chose the best model using the development set. We tested every model on the in-domain testsets. Table \\ref{tab:baseline} shows the results. On Arabic-English, the system trained on the out-of-domain data OPUS performed the best. This is due to the large size of the corpus and its spoken nature which makes it close to TED in style and genre.\nHowever, despite the large size of UN, the system trained using UN performed poorly. The reason is the difference in genre of UN from the TED corpus where \nthe former consists of United Nations proceedings and \nthe latter is based on talks. \n\nFor German-English, the systems built using out-of-domain corpora performed better than the in-domain corpus. \nThe CC corpus appeared to be very close to the TED domain.\nThe system trained on it performed even better than the in-domain system by an average of 2 BLEU points.\n\n\\begin{table}\n\\centering\n\\begin{tabular}{lrrrr}\n\\toprule\n\\multicolumn{4}{c}{\\bf Arabic-English} \\\\\n& TED & UN & OPUS & \\\\\n\\midrule\ntst13 & 23.6 & 22.4 & {\\bf 32.2} \\\\\ntst14 & 20.5 & 17.8 & {\\bf 27.3} \\\\\navg. & 22.1 & 20.1 & {\\bf 29.7} \\\\\n\\midrule\n\\multicolumn{4}{c}{\\bf German-English} \\\\\n& TED & CC & EP \\\\\n\\midrule\ntst13 & 29.5 & {\\bf 29.8} & 29.1 \\\\\ntst14 & 23.3 & {\\bf 25.7} & 25.1 \\\\\navg. 
& 26.4 & {\\bf 27.7} & 27.1 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{\\label{tab:baseline} Individual domain models evaluated on TED testsets}\n\\end{table}\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{de-curves-shorthand.png}\n\t\\caption{German-English system development life line evaluated on development set tst-11 and tst-12. Here, \\texttt{ALL} refers to \\texttt{EP+CC+TED}, and \\texttt{OD} refers to \\texttt{EP+CC}}\n\t\\label{fig:de-curves}\n\\end{figure*}\n\n\\subsection{Concatenation and Fine-tuning}\nNext we evaluated how the models performed when trained on concatenated data. We mainly tried two variations:\ni) concatenating all the available data (\\emph{ALL}) ii) combine only the available out-of-domain data (\\emph{OD}) and later fine-tune the model on the in-domain data. Table \\ref{tab:concatenate} shows the results. The fine-tuned system outperformed a full concatenated system by 1.8 and 2.1 average BLEU points in Arabic-English and German-English systems respectively.\n\nLooking at the development life line of these systems (Figures \\ref{fig:ar-curves}, \\ref{fig:de-curves}), since \\emph{ALL} has seen all of the data, it is better than \\emph{OD } till the point \\emph{OD} is fine-tuned on the in-domain corpus. Interestingly, at that point \\emph{ALL} and \\emph{OD}$\\rightarrow$TED have seen the same amount of data but the parameters of the latter model are fine-tuned towards the in-domain data. This gives it \naverage improvements of up to 2 BLEU points over \\emph{ALL}. \n\nThe \\emph{ALL} system does not give any explicit weight to any domain \\footnote{other than the data size itself} \nduring training. In order to revive the in-domain data, we fine-tuned it on the in-domain data. We achieved comparable results to that of the OD$\\rightarrow$TED model which means that one can adapt an already trained model on all the available data to a specific domain by fine tuning it on the domain of interest. This can be helpful in cases where in-domain data is not known beforehand. \n\n\n\\begin{table}\n\\centering\n\\begin{tabular}{lccc|c}\n\\toprule\n\\multicolumn{5}{c}{\\bf Arabic-English} \\\\\n& TED & ALL & OD$\\rightarrow$TED & ALL$\\rightarrow$TED \\\\\n\\midrule\ntst13 & 23.6 & 36.1 & 37.9 & 38.0 \\\\\ntst14 & 20.5 & 30.2 & 32.1 & 32.2 \\\\\navg. & 22.1 & 33.2 & 35.0 & 35.1 \\\\\n\\midrule\n\\multicolumn{5}{c}{\\bf German-English} \\\\\n& TED & ALL & OD$\\rightarrow$TED & ALL$\\rightarrow$TED\\\\\n\\midrule\ntst13 & 29.5 & 35.7 & 38.1 & 38.1 \\\\\ntst14 & 23.3 & 30.8 & 32.8 & 32.9 \\\\\navg. & 28.0 & 33.3 & 35.4 & 35.5 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{\\label{tab:concatenate} Comparing results of systems built on a concatenation of the data. OD represents a concatenation of the out-of-domain corpora and ALL represents a concatenation of OD and the in-domain data. $\\rightarrow$ sign means fine-tuning}\n\\end{table}\n\n\\subsection{Model Stacking}\n\nPreviously we concatenated all out-of-domain data and fine-tuned it with the in-domain TED corpus. \nIn this approach,\nwe picked one out-of-domain corpus at a time, trained a model and fine-tuned it with the other available domain. We repeated this process till all out-of-domain data had been used. In the last step, we fine-tuned the model on the in-domain data. \nSince we have a number of out-of-domain corpora available, we experimented with using them in different permutations for training and analyzed their effect on the development sets. 
Figure \\ref{fig:ar-curves} and Figure \\ref{fig:de-curves} show the results. It is interesting to see that the order of stacking has a significant effect on achieving a high-quality system. The best combination for the Arabic-English language pair started with the UN data, was then fine-tuned on OPUS, and was finally fine-tuned on TED. \nWhen we started with OPUS and fine-tuned the model on UN, the results dropped drastically as shown in Figure \\ref{fig:ar-curves} (see OPUS$\\rightarrow$UN). The model started forgetting the previously used data and focused on the newly provided data, which is very distant from the in-domain data. We saw similar trends in the case of the German-English language pair, where CC$\\rightarrow$EP dropped the performance drastically. We did not fine-tune CC$\\rightarrow$EP and OPUS$\\rightarrow$UN on TED since a better option was to completely ignore the second corpus (UN and EP for Arabic and German, respectively) and fine-tune OPUS and CC directly on TED. The results of OPUS$\\rightarrow$TED and CC$\\rightarrow$TED are shown in Figures \\ref{fig:ar-curves} and \\ref{fig:de-curves}.\n\nComparing the OPUS$\\rightarrow$TED system with the UN$\\rightarrow$OPUS$\\rightarrow$TED system, the results of OPUS$\\rightarrow$TED are lower by 0.62 BLEU points than those of the UN$\\rightarrow$OPUS$\\rightarrow$TED system. \nSimilarly, we saw a drop of 0.4 BLEU points for the German-English language pair when we did not use EP and directly fine-tuned CC on TED. \nThere are two ways to look at these results, considering quality vs. time: i) by using UN and EP in model stacking, the model learned to remember only those parts of the data that are beneficial for achieving better translation quality on the in-domain development sets. Thus using them as part of the training pipeline is helpful for building a better system. ii) training on UN and EP is expensive. Dropping them from the pipeline significantly reduced the training time and resulted in a loss of only 0.62 and 0.4 BLEU points.\n\nTo summarize, model stacking performs best when it starts from the domain furthest from the in-domain data. In the following, we compare it with the data concatenation approach. \n\n\n\\subsection{Stacking versus Concatenation}\n\nWe compared model stacking with different forms of concatenation. In terms of data usage, all models are exposed to identical data. Table \\ref{tab:stackvscat} shows the results. The best systems are achieved using a concatenation of all of the out-of-domain data for initial model training and then fine-tuning the trained model on the in-domain data. The concatenated system \\emph{ALL} performed the worst of the three. \n\n\\emph{ALL} learned a generic model from all the available data without giving explicit weight to any particular domain whereas model stacking resulted in a specialized system for the in-domain data. In order to confirm the generalization ability of \\emph{ALL} vs. model stacking, we tested them on a new domain, News. \\emph{ALL} performed 4 BLEU points better than model stacking in translating the news NIST MT04 testset. \nThis shows that a concatenated system is not the optimal solution for one particular domain but is robust enough to perform well in new testing conditions.\n\n\n\\begin{table}\n\\centering\n\\begin{tabular}{lccc}\n\\toprule\n\\multicolumn{4}{c}{\\bf Arabic-English} \\\\\n& ALL & OD$\\rightarrow$TED & UN$\\rightarrow$OPUS$\\rightarrow$TED \\\\\n\\midrule\ntst13 & 36.1 & 37.9 & 36.8\\\\\ntst14 & 30.2 & 32.1 & 31.2\\\\\navg. 
& 33.2 & 35.0 & 34.0 \\\\\n\\midrule\n\\multicolumn{4}{c}{\\bf German-English} \\\\\n& ALL & OD$\\rightarrow$TED & EP$\\rightarrow$CC$\\rightarrow$TED \\\\\n\\midrule\ntst13 & 35.7 & 38.1 & 36.8 \\\\\ntst14 & 30.8 & 32.8 & 31.7 \\\\\navg. & 33.3 & 35.4 & 34.3 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{\\label{tab:stackvscat} Stacking versus concatenation}\n\\end{table}\n\n\\subsection{Data Selection}\nSince training on large out-of-domain data is time inefficient, we selected a small portion of the out-of-domain data that is close to the in-domain data. For Arabic-English, we selected 3\\% and 5\\% from the UN and OPUS data respectively, which constitutes roughly 2M sentences. For German-English, we selected 20\\% from a concatenation of EP and CC, which roughly constitutes 1M training sentences.\\footnote{These data-selection percentages have been previously found to be optimal when training phrase-based systems using the same data. For example see \\cite{sajjad-etal:iwslt13}.}\n \nWe concatenated the selected data and the in-domain data to train an NMT system.\nTable \\ref{tab:selection} presents the results. \nThe selected system is worse than the \\emph{ALL} system. This is contrary to the results reported in the literature on phrase-based machine translation, where data selection on UN improves translation quality \\cite{sajjad-etal:iwslt13}. This shows that NMT is not as sensitive as phrase-based MT to the presence of the out-of-domain data. \n\nData selection comes with a cost of reduced translation quality. \nHowever, the selected system is better than all individual systems shown in Table \\ref{tab:baseline}. Each of these out-of-domain systems takes more time to train than a selected system. For example, compared to the individual UN system, the selected system took approximately 1\/10th of the time to train.\nOne can look at the data-selected system as a decent trade-off between training time and translation quality.\n\n\\begin{table}\n\\centering\n\\begin{tabular}{lcc|cc}\n\\toprule\n&\\multicolumn{2}{c}{\\bf Arabic-English}&\\multicolumn{2}{c}{\\bf German-English} \\\\\n& ALL & Selected & ALL & Selected \\\\\n\\midrule\ntst13 & 36.1 & 32.7 & 35.7 & 34.1 \\\\\ntst14 & 30.2 & 27.8 & 30.8 & 29.9 \\\\\navg. & 33.2 & 30.3 & 33.3 & 32.0 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{\\label{tab:selection} Results of systems trained on a concatenation of selected data and on a concatenation of all available data}\n\\end{table}\n\n\n\\subsection{Multi-domain Ensemble}\n\nWe took the best model for every domain according to the average BLEU on the development sets and ensembled them during decoding. \nFor the weighted ensemble, we did a grid search and selected the weights using the development set.\nTable \\ref{tab:ensemble} presents the results of an ensemble on the Arabic-English language pair and compares them with the individual best model, OPUS, and a model built on \\emph{ALL}. As expected, the balanced ensemble (\\emph{ENS$_b$}) performed worse than the best individual model. Since the domains are very distant, giving equal weights to them hurts the overall performance. The weighted ensemble (\\emph{ENS$_w$}) improved over the best individual model by 1.8 BLEU points but is still 1.7 BLEU points below the concatenated system. The weighted ensemble approach is beneficial when individual domain-specific models are already available for testing. 
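\n\nTo make the score-combination step concrete, the following minimal sketch shows how the per-step output distributions of $N$ domain models can be averaged with (possibly tuned) weights; the toy distributions, the weight values, and the greedy word choice are placeholders, whereas the actual systems combine the scores inside beam search as described in Section \\ref{sec:approaches}.\n\\begin{verbatim}\nimport numpy as np\n\ndef ensemble_step(model_probs, weights=None):\n    # model_probs: one distribution over the target vocabulary per domain model,\n    # all computed for the same decoding step and hypothesis.\n    P = np.stack(model_probs)              # shape: (N, vocab)\n    if weights is None:                    # balanced ensemble\n        weights = np.ones(len(model_probs))\n    w = np.asarray(weights, dtype=float)\n    w = w \/ w.sum()\n    return w @ P                           # weighted average of the scores\n\n# Toy usage: three domain models; weights would be tuned on a development set.\nrng = np.random.default_rng(1)\nprobs = [rng.dirichlet(np.ones(8)) for _ in range(3)]\ncombined = ensemble_step(probs, weights=[0.6, 0.3, 0.1])\nnext_word = int(np.argmax(combined))       # greedy pick; beam search keeps the top-k\n\\end{verbatim}\n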
Decoding with multiple models is more efficient compared to training a system from scratch on a concatenation of the entire data.\n\n\\begin{table}\n\\centering\n\\begin{tabular}{lcc|cc}\n\\toprule\n\\multicolumn{5}{c}{\\bf Arabic-English} \\\\\n& OPUS & ALL & ENS$_b$ & ENS$_w$ \\\\\n\\midrule\ntst13 & 32.2 & 36.1 & 31.9 & 34.3\\\\\ntst14 & 27.3 & 30.2 & 25.8 & 28.6\\\\\navg. & 29.7 & 33.2 & 28.9 & 31.5 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{\\label{tab:ensemble} Comparing results of balanced ensemble (ENS$_b$) and weighted ensemble (ENS$_w$) with the best individual model and the concatenated model}\n\\end{table}\n\n\\subsection{Discussion}\n\nThe concatenation system showed robust behavior in translating new domains. \\cite{KobusCS16} proposed a domain aware concatenated system by introducing domain tags for every domain. We trained a system using their approach and compared the results with simple concatenated system. The domain aware system performed slightly better than the concatenated system (up to 0.3 BLEU points) when tested on the in-domain TED development sets. However, domain tags bring a limitation to the model since it can only be tested on the domains it is trained on. Testing on an unknown domain would first require to find its closest domain from the set of domains the model is trained on. The system can then use that tag to translate unknown domain sentences.\n\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nWe explored several approaches to train a neural machine translation system under multi-domain conditions and evaluated them based on three metrics: translation quality, training time and robustness. Our results showed that an optimum in-domain system can be built using a concatenation of the out-of-domain data and then fine-tuning it on the in-domain data. A system built on the concatenated data resulted in a generic system that is robust to new domains. Model stacking is sensitive to the order of domains it is trained on. Data selection and weighted ensemble resulted in a less \noptimal\nsolution. \nThe\nformer is efficient to train in a short time and \nthe\nlatter is useful when different individual models are available for testing. It provides a mix of all\ndomains\nwithout retraining or fine-tuning the system. \n\n\\section{Acknowledgments}\nThe research presented in this paper is partially conducted as part of the European Union's Horizon 2020\nresearch and innovation programme under grant\nagreement 644333 (SUMMA).\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}