diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzbgrf" "b/data_all_eng_slimpj/shuffled/split2/finalzzbgrf" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzbgrf" @@ -0,0 +1,5 @@ +{"text":"\n\\section{Exponential lower bound for planar graphs}\\label{sec:main}\nIn this section we present the main result of the paper.\nWe provide a construction that proves that there are planar graphs with $k$ terminals whose mimicking networks are of size $\\Omega(2^k)$. \n\nIn order to present the desired graph, for the sake of simplicity, we describe its dual graph $(\\du{G},\\du{c})$. We let $\\du{\\ensuremath{Q}}=\\{\\du{f_n},\\du{f_s},\\du{f_1},\\du{f_2},\\dots,\\du{f_{k-2}} \\}$ be the set of faces in $\\du{G}$ corresponding to terminals in the primal graph $\\duu{G}$.%\n\\footnote{Since the argument mostly operates on the dual graph, for notational simplicity,\n we use regular symbols for objects in the dual graph, e.g., $G$, $c$, $f_i$,\n while starred symbols refer to the dual of the dual graph, that is, the primal graph.}\nThere are two special terminal faces $\\du{f_n}$ and $\\du{f_s}$, referred to as the north face and the south face. The remaining faces of $\\du{\\ensuremath{Q}}$ are referred to as equator faces.\n\nA set $\\du{\\ensuremath{S}} \\subset \\du{\\ensuremath{Q}}$ is \\emph{important} if $\\du{f_n} \\in \\du{\\ensuremath{S}}$ and $\\du{f_s} \\notin \\du{\\ensuremath{S}}$. Note that there are $2^{k-2}$ important sets; in what follows we care only\nabout minimum cuts in the primal graph for separations between important sets and their complements.\nFor an important set $\\du{\\ensuremath{S}}$, \nwe define its \\emph{signature} as a bit vector $\\sign{\\du{\\ensuremath{S}}} \\in \\bitv{|\\du{\\ensuremath{Q}}|-2}$ whose $i$'th position is defined as $\\sign{\\du{\\ensuremath{S}}}[i]= 1 \\text{ iff } \\du{f_{i}} \\in \\du{\\ensuremath{S}}$. \nGraph $\\du{G}$ will be composed of $2^{k-2}$ cycles referred to as important cycles, each corresponding to an important subset $\\du{\\ensuremath{S}} \\subset \\du{\\ensuremath{Q}}$.\nA cycle corresponding to $\\du{\\ensuremath{S}}$ is referred to as $\\ensuremath{\\mathcal{C}}_{\\sign{\\du{\\ensuremath{S}}}}$ and it separates $\\du{\\ensuremath{S}}$ from $\\overline{\\du{\\ensuremath{S}}}$.\nTopologically, we draw the equator faces on a straight horizontal line that we call the equator. We put the north face $\\du{f_n}$ above the equator and the south face $\\du{f_s}$ below the equator. For any important $\\du{\\ensuremath{S}} \\subset \\du{\\ensuremath{Q}}$, in the plane drawing of $\\du{G}$ the corresponding cycle $\\ensuremath{\\mathcal{C}}_{\\sign{\\du{\\ensuremath{S}}}}$ is a curve that goes to the south of $\\du{f_i}$ if $\\du{f_i} \\in \\du{\\ensuremath{S}}$ and otherwise to the north of $\\du{f_i}$. 
We formally define important cycles later on, see Definition~\\ref{def:impcyc}.\n\nWe now describe in detail the construction of $\\du{G}$.\nWe start with a graph $H$ that is almost a tree, and then embed $H$ in the plane\nwith a number of edge crossings, introducing a new vertex on every edge crossing.\nThe graph $H$ consists of a complete binary tree of height $k-2$ with root $v$ and an extra vertex\n$w$ that is adjacent to the root $v$ and every one of the $2^{k-2}$ leaves of the tree.\nIn what follows, the vertices of $H$ are called \\emph{branching vertices}, contrary\nto \\emph{crossing vertices} that will be introduced at edge crossings\nin the plane embedding of $H$.\n\nTo describe the plane embedding of $H$, we need to introduce some notation of the vertices\nof $H$.\nThe starting point of our construction is the edge $\\du{e}=\\{ \\du{w}, \\du{v} \\}$.\nVertex $\\du{v}$ is the first branching vertex and also the root of $H$.\nIn vertex $\\du{v}$, edge $\\du{e}$ branches into $\\du{e_0}=\\{\\du{v},\\du{v_0}\\}$ and $\\du{e_1}=\\{\\du{v},\\du{v_1} \\}$. Now $\\du{v_0}$ and $\\du{v_1}$ are also branching vertices.\nThe branching vertices are partitioned into layers $L_0,\\ldots,L_{k-2}$. Vertex $\\du{v}$ is in layer $L_0=\\{ \\du{v} \\}$, while $\\du{v_0}$ and $\\du{v_1}$ are in layer $L_1=\\{ \\du{v_0}, \\du{v_1} \\}$. Similarly, we partition edges into layers $\\mathcal{E}^H_0,\\ldots \\mathcal{E}^H_{k-1}$. So far we have $\\mathcal{E}^H_0=\\{ \\du{e} \\}$ and $\\mathcal{E}^H_1=\\{ \\du{e_0}, \\du{e_1} \\}$. \n\nThe construction continues as follows. For any layer $L_i, i \\in \\{1, \\ldots , k-3 \\}$, all the branching vertices of $L_i=\\{ \\du{v_{00 \\ldots 0}} \\ldots \\du{v_{11 \\ldots 1}} \\}$ are of degree $3$. In a vertex $\\du{v_a} \\in L_i$, $a \\in \\bitv{i}$, edge $\\du{e_a} \\in \\mathcal{E}^H_i$ branches into edges $\\du{e_{0a}}=\\{ \\du{v_a}, \\du{v_{0a}} \\},\\du{e_{1a}}=\\{ \\du{v_a}, \\du{v_{1a}} \\} \\in \\mathcal{E}^H_{i+1}$, where $\\du{v_{0a}},\\du{v_{1a}} \\in L_{i+1}$. We emphasize here that the new bit in the index is added \\emph{as the first symbol}. \nEvery next layer is twice the size of the previous one, hence $|L_i|=|\\mathcal{E}^H_i|=2^i$. Finally the vertices of $L_{k-2}$ are all of degree $2$. Each of them is connected to a vertex in $L_{k-3}$ via an edge in $\\mathcal{E}^H_{k-2}$ and to the vertex $w$ via an edge in $\\mathcal{E}^H_{k-1}$.\n\nWe now describe the drawing of $H$, that we later make planar by adding crossing vertices, in order to obtain the graph $G$.\nAs we mentioned before, we want to draw equator faces $\\du{f_1}, \\ldots \\du{f_{k-2}}$ in that order from left to right on a horizontal line (referred to as an equator). Consider equator face $\\du{f_i}$ and vertex layer $L_i$ for some $i>0$. Imagine a vertical line through $\\du{f_i}$ perpendicular to the equator, and let us refer to it as an $i$'th meridian. We align the vertices of $L_i$ along the $i$'th meridian, from the north to the south. We start with the vertex of $L_i$ with the (lexicographically) lowest index, and continue drawing vertices of $L_i$ more and more to the south while the indices increase. Moreover, the first half of $L_i$ is drawn to the north of $\\du{f_i}$, and the second half to the south of $\\du{f_i}$.\nEvery edge of $H$, except for $e$, is drawn as a straight line segment connecting its endpoints.\nThe edge $\\du{e}$ is a curve encapsulating the north face $\\du{f_n}$ and separating it from $\\du{f_s}$-the outer face of $\\du{G}$. 
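\n\nThe layered structure of $H$ is easy to generate programmatically. The following Python sketch (illustrative only, with our own data-structure conventions) builds the vertex layers $L_0,\\ldots,L_{k-2}$ and the edge layers $\\mathcal{E}^H_0,\\ldots,\\mathcal{E}^H_{k-1}$, labelling the vertices of $L_i$ by bit vectors with the newest bit prepended as the first symbol.\n\\begin{verbatim}\ndef build_H(k):\n    # 'almost tree' H: a complete binary tree of height k-2 rooted at v, plus a\n    # vertex w adjacent to the root and to every leaf; branching vertices are\n    # labelled by bit strings, the empty string standing for the root v\n    layers = {0: ['']}                      # L_0 = {v}\n    edge_layers = {0: [('w', '')]}          # E^H_0 = { e = {w, v} }\n    for i in range(1, k - 1):               # build L_1, ..., L_{k-2}\n        layers[i], edge_layers[i] = [], []\n        for a in layers[i - 1]:\n            for b in '01':\n                layers[i].append(b + a)             # the new bit goes first\n                edge_layers[i].append((a, b + a))   # e_{ba} = {v_a, v_{ba}}\n    edge_layers[k - 1] = [(a, 'w') for a in layers[k - 2]]   # leaves joined to w\n    return layers, edge_layers\n\\end{verbatim}\nOne can check that this reproduces $|L_i|=|\\mathcal{E}^H_i|=2^i$ for $i \\leq k-2$ and $|\\mathcal{E}^H_{k-1}|=2^{k-2}$, in agreement with the description above.\n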
\n\n\n\\begin{figure}[t]\n\\begin{center}\n\\input{figures\/dual}\n\\caption{The graph $\\du{G}$.\\label{dual}}\n\\end{center}\n\\end{figure}\n\nThe crossing vertices are added whenever the line segments cross. This way the edges of $H$\nare subdivided and the resulting graph is denoted by $\\du{G}$.\nThis completes the description of the structure and the planar drawing of $\\du{G}$.\nWe refer to Figure~\\ref{dual} for an illustration of the graph $G$.\nThe set $\\mathcal{E}_i$ consists of all edges of $G$ that are parts of the (subdivided) edges of $\\mathcal{E}^H_i$ from $H$, see Figure~\\ref{subdivide}.\nWe are also ready to define important cycles formally.\n\n\\begin{figure}[t]\n\\centering\n\\input{figures\/subdivide}\n\\caption{The layer $\\mathcal{E}_{i+1}$.\n The vertex and edge names are black, their weights are blue.\\label{subdivide}}\n\\end{figure}\n\n\n\\begin{definition}\\label{def:impcyc}\nLet $\\du{\\ensuremath{S}} \\subset \\du{\\ensuremath{Q}}$ be important.\nLet $\\pi$ be the unique path in the binary tree $H-\\{w\\}$ from the root\n$\\du{v}$ to $\\du{v_{\\rev{\\sign{\\du{\\ensuremath{S}}}}}}$, \nwhere the $\\rev{\\cdot}$ operator reverses the bit vector.\nLet $\\pi'$ be the path in $G$ corresponding to $\\pi$.\nThe important cycle $\\ensuremath{\\mathcal{C}}_{\\sign{\\du{\\ensuremath{S}}}}$ is composed of $\\du{e}$, $\\pi'$, and an edge in $\\mathcal{E}_{k-1}$ adjacent to $\\du{v_{\\rev{\\sign{\\du{\\ensuremath{S}}}}}}$. \n\\end{definition}\n\nWe now move on to describing how weights are assigned to the edges of $\\du{G}$. \nThe costs of the edges in $\\du{G}$ take one of $k-1$ values: $c_1, c_2, \\ldots c_{k-2}$, and $C$. Let $c_{k-2}=1$. For $i \\in \\{1 \\dots k-3 \\}$ let $c_i= \\sum_{j=i+1}^{k-2}|\\mathcal{E}_{j}|c_{j}$.\nLet $C=\\sum_{j=1}^{k-2} |\\mathcal{E}_j|c_j$. Let us consider an arbitrary edge $\\du{e_{ba}}=\\{ \\du{v_{a}}, \\du{v_{ba}} \\}$ for some $a \\in \\bitv{i}, i \\in \\{ 0 \\ldots k-3 \\}, b \\in \\{ 0,1 \\}$ (see Figure~\\ref{subdivide} for an illustration). As we mentioned before, $\\du{e_{ba}}$ is subdivided by crossing vertices into a number of edges. If $b=0$, then edge $\\du{e_{ba}}$ is subdivided by\\footnote{For a bit vector $a$, $\\dec{a}$ denotes the integral value of $a$ read as a number in binary.} $\\dec{a}$ crossing vertices into $\\dec{a}+1$ edges: $\\du{e^1_{ba}}=\\{ \\du{v_a}, \\du{x^1_{ba}} \\}, \\du{e^2_{ba}}=\\{ \\du{x^1_{ba}},\\du{x^2_{ba}} \\} \\ldots \\du{e^{\\dec{a}+1}_{ba}}=\\{ \\du{x^{\\dec{a}}_{ba}}, \\du{v_{ba}} \\}$. Among those edges $\\du{e^{\\dec{a}+1}_{ba}}$ is assigned cost $C$, and the remaining edges subdividing $\\du{e_{ba}}$ are assigned cost $c_i$. Analogously, if $b=1$, then edge $\\du{e_{ba}}$ is subdivided by $2^i-1-\\dec{a}$ crossing vertices into $2^i-\\dec{a}$ edges: $\\du{e^1_{ba}}=\\{ \\du{v_a}, \\du{x^1_{ba}} \\}, \\du{e^2_{ba}}=\\{ \\du{x^1_{ba}},\\du{x^2_{ba}} \\} \\ldots \\du{e^{2^i-\\dec{a}}_{ba}}=\\{ \\du{x^{2^i-1-\\dec{a}}_{ba}}, \\du{v_{ba}} \\}$. Again, we let edge $\\du{e^{2^i-\\dec{a}}_{ba}}$ have cost $C$, and the remaining edges subdividing $\\du{e_{ba}}$ are assigned cost $c_i$.\nFinally, all the edges connecting the vertices of the last layer with $w$ have weight $c_{k-2} = 1$.\nThe cost assignment within an edge layer is presented in Figure~\\ref{subdivide}. \n\nThis finishes the description of the dual graph $G$. 
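As a sanity check (not needed for the proofs), the cost scale can be tabulated explicitly; the snippet below is our own illustrative Python, where the layer sizes follow from the subdivision rule above, which in our reading gives $|\\mathcal{E}_{i+1}|=2^i(2^i+1)$ for $i \\in \\{0,\\ldots,k-3\\}$.\n\\begin{verbatim}\ndef cost_scale(k):\n    # layer sizes |E_1|, ..., |E_{k-2}| implied by the subdivision rule\n    E = {j: 2 ** (j - 1) * (2 ** (j - 1) + 1) for j in range(1, k - 1)}\n    c = {k - 2: 1}                                   # c_{k-2} = 1\n    for i in range(k - 3, 0, -1):                    # c_i = sum_{j > i} |E_j| c_j\n        c[i] = sum(E[j] * c[j] for j in range(i + 1, k - 1))\n    C = sum(E[j] * c[j] for j in range(1, k - 1))    # C dominates the non-C weight\n    return E, c, C\n\\end{verbatim}\nThe values grow very rapidly as $i$ decreases, reflecting the hierarchy of weights exploited in the cut analysis below.\n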
We now consider the primal graph $\\duu{G}$ with the set of terminals $\\duu{\\ensuremath{Q}}$ consisting of the\n$k$ vertices of $\\duu{G}$ corresponding to the faces $\\ensuremath{Q}$ of $G$. In the remainder of this section we show that there is a cost function on the edges of $\\duu{G}$, under which any mimicking network for $\\duu{G}$\ncontains at least $2^{k-2}$ edges. This cost function is in fact a small perturbation of the edge costs implied by the dual graph $G$.\n\nIn order to accomplish this,\nwe use the framework introduced in~\\cite{KrauthgamerR13}. In what follows,\n $\\mincut{G}{c}{S}{S'}$ stands for the minimum cut separating $S$ from $S'$ in a graph $G$ with cost function $c$. Below we provide the definition of the cutset-edge incidence matrix and the Main Technical Lemma from~\\cite{KrauthgamerR13}. \n\n\\begin{definition}[Incidence matrix between cutsets and edges] Let $(G,c)$ be a $k$-terminal network, and fix an enumeration $S_1, \\ldots S_m$ of all $2^{k-1}-1$ distinct and nontrivial bipartitions $Q=S_i \\cup \\overline{S}_i$. The cutset-edge incidence matrix of $(G,c)$ is the matrix $A_{G,c} \\in \\{ 0,1 \\}^{m \\times E(G)}$ given by\n$$\n(A_{G,c})_{i,e}=\n\\begin{cases}\n1 \\text{ if } e \\in \\mincut{G}{c}{S_i}{\\overline{S}_i}\\\\\n0 \\text{ otherwise.}\n\\end{cases}\n$$\n\\end{definition}\n\n\\begin{lemma}[Main Technical Lemma of \\cite{KrauthgamerR13}]\\label{lem:mtl}\nLet $(G,c)$ be a $k$-terminal network. Let $A_{G,c}$ be its cutset-edge incidence matrix, and assume that for all $S \\subset Q$ the minimum $S$-separating cut of $G$ is unique. Then there is for $G$ an edge cost function $\\tilde{c}: E(G) \\mapsto \\mathbb{R}^+$, under which every mimicking network $(G',c')$ satisfies $|E(G')| \\geq \\rank{A_{G,c}}$. \n\\end{lemma}\n\nRecall that $\\duu{G}$ is the dual graph to the graph $\\du{G}$ that we constructed.\nBy slightly abusing the notation, we will use the cost function $c$ defined on the dual edges\nalso on the corresponding primal edges.\nLet $\\duu{\\ensuremath{Q}}=\\{ \\duu{f_n}, \\duu{f_s}, \\duu{f_1}, \\ldots \\duu{f_{k-2}} \\}$ be the set of terminals in $\\duu{G}$ corresponding to $\\du{f_n}, \\du{f_s}, \\du{f_1}, \\ldots \\du{f_{k-2}}$ respectively. We want to apply Lemma~\\ref{lem:mtl} to $\\duu{G}$ and $\\duu{\\ensuremath{Q}}$. For that we need to show that the cuts in $\\duu{G}$ corresponding to important sets are unique and that $\\rank{A_{\\duu{G},c}}$ is high.\n\n\\begin{figure}[tb]\n\\begin{center}\n\\input{figures\/primal}\n\\caption{Primal graph $G^\\ast$.\\label{primal}}\n\\end{center}\n\\end{figure}\n\nAs an intermediate step let us argue that the following holds.\n\\begin{claim}\\label{clm:kc}\nThere are $k$ edge disjoint simple paths in $\\duu{G}$ from $\\duu{f_n}$ to $\\duu{f_s}$: $\\pi_0, \\pi_1, \\ldots, \\pi_{k-2}, \\pi_{k-1}$. Each $\\pi_i$ is composed entirely of edges dual to the edges of $\\mathcal{E}_i$ whose cost equals $C$. For $i \\in \\{ 1 \\ldots k-2 \\}$, $\\pi_i$ contains vertex $\\duu{f_i}$. Let $\\pi_i^n$ be the prefix of $\\pi_i$ from $\\duu{f_n}$ to $\\duu{f_i}$ and $\\pi_i^s$ be the suffix from $\\duu{f_i}$ to $\\duu{f_s}$. The number of edges on $\\pi_i$ is $2^i$, and the number of edges on $\\pi_i^n$ and $\\pi_i^s$ is $2^{i-1}$. \n\\end{claim}\n\\begin{proof}The primal graph $\\duu{G}$ together with paths $\\pi_0, \\pi_1 \\ldots \\pi_{k-2},\\pi_{k-1}$ is pictured in Figure~\\ref{primal}. 
The paths $\\pi_{k-2},\\pi_{k-1}$ visit the same vertices in the same manner, so for the sake of clarity only one of these paths is shown in the picture. This proof contains a detailed description of these paths and how they emerge from in the dual graph $\\du{G}$.\n\nConsider a layer $L_i$. Recall that for any $ba \\in \\bitv{i}$ edge $\\du{e_{ba}}$ of the almost tree is subdivided in $\\du{G}$, and all the resulting edges are in $\\mathcal{E}_i$. If $b=0$, then edge $\\du{e_{ba}}$ is subdivided by $\\dec{a}$ crossing vertices into $\\dec{a}+1$ edges: $\\du{e^1_{ba}}=\\{ \\du{v_a}, \\du{x^1_{ba}} \\}, \\du{e^2_{ba}}=\\{ \\du{x^1_{ba}},\\du{x^2_{ba}} \\} \\ldots \\du{e^{\\dec{a}+1}_{ba}}=\\{ \\du{x^{\\dec{a}}_{ba}}, \\du{v_{ba}} \\}$, where $\\du{c}(\\du{e^{\\dec{a}+1}_{ba}})=C$. Analogically, if $b=1$, then edge $\\du{e_{ba}}$ is subdivided by $2^i-1-\\dec{a}$ crossing vertices into $2^i-\\dec{a}$ edges: $\\du{e^1_{ba}}=\\{ \\du{v_a}, \\du{x^1_{ba}} \\}, \\du{e^2_{ba}}=\\{ \\du{x^1_{ba}},\\du{x^2_{ba}} \\} \\ldots \\du{e^{2^i-\\dec{a}}_{ba}}=\\{ \\du{x^{2^i-1-\\dec{a}}_{ba}}, \\du{v_{ba}} \\}$. Again, $\\du{c}(\\du{e^{2^i-\\dec{a}}_{ba}})=C$. Consider the edges of $\\mathcal{E}_i$ incident to vertices in $L_i$. If we order these edges lexicographically by their lower index, then each consecutive pair of edges shares a common face. Moreover, the first edge $\\du{e^1_{00\\ldots0}}$ is incident to $\\du{f_n}$ and the last edge $\\du{e^1_{11\\ldots1}}$ is incident to $\\du{f_s}$. This gives a path $\\pi_i$ from $f_n$ to $f_s$ through $f_i$ in the primal graph where all the edges on $\\pi_i$ have cost $C$. Path $\\pi_{k-1}$ is given by the edges of $\\mathcal{E}_{k-1}$ in a similar fashion and path $\\pi_0$ is composed of a single edge dual to $\\du{e}$. \n\\end{proof}\n\nWe move on to proving that the condition in Lemma~\\ref{lem:mtl} holds.\nWe extend the notion of important sets $\\ensuremath{S} \\subseteq \\ensuremath{Q}$ to sets $\\duu{\\ensuremath{S}} \\subseteq \\duu{\\ensuremath{Q}}$\nin the natural manner.\n\\begin{lemma}\\label{lem:uniquecuts}\nFor every important $\\duu{\\ensuremath{S}} \\subset \\duu{\\ensuremath{Q}}$, the minimum cut separating $\\duu{\\ensuremath{S}}$ from $\\overline{\\duu{\\ensuremath{S}}}$ is unique and corresponds to cycle $\\ensuremath{\\mathcal{C}}_{\\sign{\\du{\\ensuremath{S}}}}$ in $\\du{G}$. \n\\end{lemma}\n\n\\begin{proof}\nLet $\\ensuremath{\\mathcal{C}}$ be the set of edges of $G$ corresponding to some\nminimum cut between $\\duu{\\ensuremath{S}}$ and $\\duu{\\overline{\\ensuremath{S}}}$ in $\\duu{G}$.\nLet $\\ensuremath{S} \\subseteq \\ensuremath{Q}$ be the set of faces of $G$ corresponding to the set $\\duu{\\ensuremath{S}}$.\nWe start by observing that the edges of $\\duu{G}$ corresponding to $\\ensuremath{\\mathcal{C}}_{\\sign{\\du{\\ensuremath{S}}}}$\nform a cut between $\\duu{\\ensuremath{S}}$ and $\\duu{\\overline{\\ensuremath{S}}}$. Consequently,\n the total weight of edges of $\\ensuremath{\\mathcal{C}}$ is at most the total weight of the edges of\n $\\ensuremath{\\mathcal{C}}_{\\sign{\\du{\\ensuremath{S}}}}$.\n\nBy Claim~\\ref{clm:kc}, $\\ensuremath{\\mathcal{C}}$ contains at least $k$ edges of cost $C$, at least one edge of cost $C$ per edge layer (it needs to hit an edge in every path $\\pi_0 , \\ldots \\pi_{k-1}$). Note that $\\ensuremath{\\mathcal{C}}_{\\sign{ \\du{\\ensuremath{S}} }}$ contains exactly $k$ edges of cost $C$. 
We assign the weights in a way that $C$ is larger than all other edges in the graph taken together.\nThis implies that $\\ensuremath{\\mathcal{C}}$ contains exactly one edge of cost $C$ in every edge layer $\\mathcal{E}_i$.\nIn particular, $\\ensuremath{\\mathcal{C}}$ contains the edge $e = \\{ v,w \\}$.\n\nFurthermore, the fact that $\\duu{f_i}$ lies on $\\pi_i$ implies that\nthe edge of weight $C$ in $\\mathcal{E}_i \\cap \\ensuremath{\\mathcal{C}}$ lies on $\\pi_i^n$ if $\\duu{f_i} \\notin \\ensuremath{S}$\nand lies on $\\pi_i^s$ otherwise.\nConsequently, in $\\duu{G}-\\ensuremath{\\mathcal{C}}$ there is one connected component containing all vertices\nof $\\duu{\\ensuremath{S}}$ and one connected component containing all vertices of $\\overline{\\duu{\\ensuremath{S}}}$.\nBy the minimality of $\\ensuremath{\\mathcal{C}}$, we infer that $\\duu{G}-\\ensuremath{\\mathcal{C}}$ contains \nno other connected components apart from the aforementioned two components.\nBy planarity, since any minimum cut in a planar graph corresponds to a collection of cycles\nin its dual, this implies that $\\ensuremath{\\mathcal{C}}$ is a single cycle in $G$.\n\nLet $e_i$ be the unique edge of $\\mathcal{E}_i \\cap \\ensuremath{\\mathcal{C}}$ of weight $C$\nand let $e_i'$ be the unique edge of $\\mathcal{E}_i \\cap \\ensuremath{\\mathcal{C}}_{\\sign{\\du{\\ensuremath{S}}}}$ of weight $C$.\nWe inductively prove that $e_i = e_i'$ and\nthat the subpath of $\\ensuremath{\\mathcal{C}}$ between $e_i$ and $e_{i+1}$ is the same as on\n$\\ensuremath{\\mathcal{C}}_{\\sign{\\du{\\ensuremath{S}}}}$.\nFor the base of the induction, note that $e_0 = e_0' = e$.\n\nConsider an index $i > 0$ and the face $\\du{f_i}$. If $\\du{f_i} \\in \\du{\\ensuremath{S}}$, i.e., $\\du{f_i}$ belongs to the north side, then $e_i$ lies south of $f_i$, that is, lies on $\\pi_i^s$.\nOtherwise, if $f_i \\notin \\ensuremath{S}$, then $e_i$ lies north of $f_i$, that is, lies on $\\pi_i^n$.\n\nLet $v_a$ and $v_{ba}$ be the vertices of $\\ensuremath{\\mathcal{C}}_{\\sign{\\ensuremath{S}}}$ that lie \non $L_{i-1}$ and $L_i$, respectively. By the inductive assumption, $v_a$ is an endpoint\nof $e_{i-1}' = e_{i-1}$ that lies on $\\ensuremath{\\mathcal{C}}$.\nLet $e_i = xv_{bc}$, where $v_{bc} \\in L_i$ and let $e_i' = x'v_{ba}$.\nSince $\\ensuremath{\\mathcal{C}}$ is a cycle in $G$ that contains exactly one edge on each path $\\pi_i$,\nwe infer that $\\ensuremath{\\mathcal{C}}$ contains a path between $v_a$ and $v_{bc}$ that consists of\n$e_i$ and a number of edges of $\\mathcal{E}_i$ of weight $c_i$.\nA direct check shows that the subpath from $v_a$ to $v_{ba}$ on $\\ensuremath{\\mathcal{C}}_{\\sign{\\ensuremath{S}}}$\nis the unique such path with minimum number of edges of weight $c_i$.\nSince the weight $c_i$ is larger than the total weight of all edges of smaller weight,\nfrom the minimality of $\\ensuremath{\\mathcal{C}}$ we infer that $v_{ba} = v_{bc}$ and $\\ensuremath{\\mathcal{C}}$\nand $\\ensuremath{\\mathcal{C}}_{\\sign{\\ensuremath{S}}}$ coincide on the path from $v_a$ to $b_{ba}$.\n\nConsequently, $\\ensuremath{\\mathcal{C}}$ and $\\ensuremath{\\mathcal{C}}_{\\sign{\\ensuremath{S}}}$ coincide on the path from the edge $e=vw$\nto the vertex $v_{\\rev{\\sign{\\ensuremath{S}}}} \\in L_{k-2}$. 
From the minimality of $\\ensuremath{\\mathcal{C}}$\nwe infer that also the edge $\\{w,v_{\\rev{\\sign{\\ensuremath{S}}}} \\}$ lies on the cycle $\\ensuremath{\\mathcal{C}}$ and, hence,\n $\\ensuremath{\\mathcal{C}} = \\ensuremath{\\mathcal{C}}_{\\sign{\\ensuremath{S}}}$. This completes the proof.\n\\end{proof}\n\n\n\\begin{claim}\\label{clm:rank}\n$\\rank{A_{G,c}} \\geq 2^{k-2}$. \n\\end{claim}\n\\begin{proof}\nRecall Definition~\\ref{def:impcyc} and the fact that $\\ensuremath{\\mathcal{C}}_{\\sign{\\du{\\ensuremath{S}}}}$ is defined for every important $\\ensuremath{S} \\subseteq \\ensuremath{Q}$.\nThis means that the only edge in $\\mathcal{E}_{k-1}$ that belongs to $\\ensuremath{\\mathcal{C}}_{\\sign{\\du{\\ensuremath{S}}}}$ is the edge adjacent to $\\du{v_{\\rev{\\sign{\\du{\\ensuremath{S}}}}}}$. Let us consider the part of adjacency matrix where rows correspond to the cuts corresponding to $\\ensuremath{\\mathcal{C}}_{\\sign{\\du{\\ensuremath{S}}}}$ for important $\\ensuremath{S} \\subset \\ensuremath{Q}$ and where columns correspond to the edges in $\\mathcal{E}_{k-1}$ of weight $C$. Let us order the cuts according to $\\rev{\\sign{\\du{\\ensuremath{S}}}}$ and the edges by the index of the adjacent vertex in $L_{k-2}$ (lexicographically). Then this part of $A_{G,c}$ is an identity matrix. Hence, $\\rank{A_{G,c}} \\geq 2^{k-2}$. \n\\end{proof}\n\nLemma \\ref{lem:uniquecuts} and Claim~\\ref{clm:rank} provide the conditions necessary for Lemma~\\ref{lem:mtl} to apply. This proves our main result stated in Theorem~\\ref{thm:main}. \n\n\\section{Doubly exponential example}\\label{sec:side}\n\n\\begin{figure}[tb]\n\\centering\n\\input{figures\/double-exp}\n\\caption{Illustration of the construction. The two panels correspond to two cases in the proof, either $u_{S_0} \\in Z$ (top panel) or $u_{S_0} \\notin Z$ (bottom panel).}\\label{fig:double-exp}\n\\end{figure}\n\nIn this section we show an example graph for which the compression technique introduced by Hagerup et al~\\cite{HagerupKNR98} does indeed produce a mimicking network on\nroughly $2^{\\binom{k-1}{\\lfloor (k-1)\/2 \\rfloor}}$ vertices.\nOur example relies on doubly exponential edge costs. Note that an example with single exponential costs can be compressed into a mimicking network of size single exponential in $k$ using the techniques of~\\cite{KratschW12}.\n\nBefore we go on, let us recall the technique of Hagerup et al~\\cite{HagerupKNR98}. Let $G$ be a weighted graph and $Q$ be the set of terminals. Observe that a minimum cut separating $S \\subset Q$ from $\\overline{S}=Q \\setminus S$, when removed from $G$, divides the vertices of $G$ into two sides: the side of $S$ and the side of $\\overline{S}$. The side is defined for each vertex, as all connected components obtained by removing the minimum cut contain a terminal. Now if two vertices $u$ and $v$\nare on the same side of the minimum cut between $S$ and $\\overline{S}$ for every $S \\subset Q$, then they can be merged without changing the size of any minimum $S$-separating cut. As a result there is at most $2^{2^k}$ vertices in the graph;\nas observed by~\\cite{ChambersE13,KhanR14}, this bound can be improved to roughly $2^{\\binom{k-1}{\\lfloor (k-1)\/2 \\rfloor}}$. 
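For concreteness, this merging step is straightforward to prototype. The sketch below is our own illustrative Python (it uses the networkx library and computes each multi-terminal minimum cut through auxiliary super-terminals attached by uncapacitated edges); it groups the non-terminal vertices by the pattern of sides they take over all bipartitions, and every group can then be merged into a single vertex.\n\\begin{verbatim}\nfrom itertools import combinations\nimport networkx as nx\n\ndef merge_classes(G, Q):\n    # Edge costs are stored in the 'capacity' attribute.  If some minimum cut is\n    # not unique, the particular cut returned by networkx is used.\n    q0, rest = Q[0], Q[1:]\n    patterns = {v: [] for v in G if v not in Q}\n    for r in range(len(rest)):                    # all bipartitions with q0 in S\n        for extra in combinations(rest, r):\n            S = {q0} | set(extra)\n            H = G.copy()\n            for q in Q:                           # attach the super-terminals\n                H.add_edge('source*' if q in S else 'sink*', q)\n            _, (side_S, _) = nx.minimum_cut(H, 'source*', 'sink*')\n            for v in patterns:\n                patterns[v].append(v in side_S)\n    classes = {}                                  # group vertices by side pattern\n    for v, p in patterns.items():\n        classes.setdefault(tuple(p), []).append(v)\n    return list(classes.values())\n\\end{verbatim}\n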
After this brief introduction we move on to describing our example.\n\nOur construction builds up on the example provided in~\\cite{KrauthgamerR13} in the proof of Theorem 1.2.\nAs stated in Theorem~\\ref{thm:side} of this paper, our construction works for parameter $k$ equal to $6$ modulo $8$.\nLet $k = 2r+2$, that is, $r$ is equal to $2$ modulo $4$.\nThese remainder assumptions give the following observation via standard calculations.\n\\begin{lemma}\\label{lem:ell-even}\nThe integer $\\ell := \\binom{2r+1}{r}$ is even.\n\\end{lemma}\n\\begin{proof}\nRecall that $r$ equals $2$ modulo $4$.\nSince $\\binom{2r+1}{r} = \\frac{(2r+1)!}{r!(r+1)!}$, while the largest power of $2$ that divides $a!$ equals $\\sum_{i=1}^\\infty \\lfloor \\frac{a}{2^i} \\rfloor$, we have that\nthe largest power of $2$ that divides $\\binom{2r+1}{r}$ equals:\n\\begin{align*}\n& \\sum_{i=1}^\\infty \\left\\lfloor \\frac{2r+1}{2^i} \\right\\rfloor - \\sum_{i=1}^\\infty \\left\\lfloor \\frac{r}{2^i} \\right\\rfloor - \\sum_{i=1}^\\infty \\left\\lfloor \\frac{r+1}{2^i} \\right\\rfloor \n = r + \\sum_{i=1}^\\infty \\left\\lfloor \\frac{r}{2^i} \\right\\rfloor - 2 \\sum_{i=1}^\\infty \\left\\lfloor \\frac{r}{2^i} \\right\\rfloor \\\\\n&\\quad = r - \\sum_{i=1}^\\infty \\left\\lfloor \\frac{r}{2^i} \\right\\rfloor \n = r - \\frac{r}{2} - \\frac{r-2}{4} - \\sum_{i=1}^\\infty \\left\\lfloor \\frac{r}{4 \\cdot 2^i} \\right\\rfloor \n \\geq \\frac{1}{2} + \\frac{r}{4} - \\sum_{i=1}^\\infty \\frac{r}{4 \\cdot 2^i}\n = \\frac{1}{2}.\n\\end{align*}\nIn particular, it is positive. This finishes the proof of the lemma.\n\\end{proof}\n\nWe start our construction with a complete bipartite graph $G_0 = (Q_0, U, E)$, where one side of the graph consists of $2r+1 = k-1$ terminals $Q_0$, and the other side of the graph consists of $\\ell = \\binom{2r+1}{r}$ non-terminals\n$U = \\{u_S~|~S \\in \\binom{Q_0}{r}\\}$. That is, the vertices $u_S \\in U$ are indexed by subsets of $Q_0$ of size $r$.\nThe cost of edges is defined as follows. Let $\\alpha$ be a large constant that we define later on.\nEvery non-terminal $u_S$ is connected by edges of cost $\\alpha$ to every terminal $q \\in Q_0 \\setminus S$ and by edges of cost $(1+\\frac{1}{r} + \\frac{1}{r^2})\\alpha$ to every terminal $q \\in S$.\nTo construct the whole graph $G$, we extend $G_0$ with a last terminal $x$ (i.e., the terminal set is $Q = Q_0 \\cup \\{x\\}$)\n and build a third layer of $m = \\binom{\\ell}{\\ell\/2}$ non-terminal vertices $W = \\{w_Z~|~Z \\in \\binom{U}{\\ell\/2}\\}$. That is, the vertices $w_Z \\in W$ are indexed by subsets of $U$ of size $\\ell\/2$.\nThere is a complete bipartite graph between $U$ and $W$ and every vertex of $W$ is adjacent to $x$.\nThe cost of edges is defined as follows. An edge $u_S w_Z$ is of cost $1$ if $u_S \\in Z$, and of cost $0$ otherwise. Every edge of the form $xw_Z$ is of cost $\\ell\/2 - 1$.\nThis finishes the description of the construction. For the reference see the top picture in Figure~\\ref{fig:double-exp}.\n\nWe say that a set $S \\subseteq Q$ is \\emph{important} if $x \\in S$ and $|S| = r+1$. Note that there are $\\ell = \\binom{2r+1}{r} = \\binom{k-1}{\\lfloor (k-1)\/2 \\rfloor}$ important sets.\nWe observe the following.\n\\begin{lemma}\\label{lem:dblexp}\nLet $S \\subset Q$ be important and let $S_0 = S \\setminus \\{x\\} = S \\cap Q_0$. For $\\alpha > r^2 \\ell|W|$,\n the vertex $w_Z$ is on the $S$ side of the minimum cut between $S$ and $Q \\setminus S$\n if and only if $u_{S_0} \\in Z$. 
\n\\end{lemma}\n\n\\begin{proof}\nFirst, note that if $\\alpha > r^2 \\ell |W|$, then the total cost of all the edges incident to vertices of $W$ is less than $\\frac{1}{r^2} \\alpha$.\nIntuitively, this means that cost of the cut inflicted by the edges of $G_0$ is of absolutely higher importance than the ones incident with $W$.\n\nConsider an important set $S \\subseteq Q$ and let $S_0 = S \\setminus \\{x\\} = S \\cap Q_0$.\n\nLet $u_{S'} \\in U$. The \\emph{balance} of the vertex $u_{S'}$, denoted henceforth $\\beta(u_{S'})$, is the difference of the cost of edges\nconnecting $u_{S'}$ with $S_0$ and the ones connecting $u_{S'}$ and $Q_0 \\setminus S$. Note that we have\n$$\\beta(u_{S_0}) = r \\cdot \\left(1+\\frac{1}{r}+\\frac{1}{r^2}\\right) \\alpha - \\left(r+1\\right) \\cdot \\alpha = \\frac{1}{r} \\alpha.$$\nOn the other hand, for $S' \\neq S_0$, the balance of $u_{S'}$ can be estimated as follows:\n$$\\beta(u_{S'}) \\leq (r-1) \\cdot \\left(1+\\frac{1}{r}+\\frac{1}{r^2}\\right) \\alpha + \\alpha - r \\cdot \\alpha - \\left(1+\\frac{1}{r}+\\frac{1}{r^2}\\right) \\alpha \n = -\\frac{r+2}{r^2} \\alpha < -\\frac{1}{r^2} \\alpha.$$\nConsequently, as $\\frac{1}{r^2} \\alpha$ is larger than the cost of all edges incident with $W$,\nin a minimum cut separating $S$ from $Q \\setminus S$, the vertex $u_{S_0}$ picks the $S$ side, while every vertex $u_{S'}$ for $S' \\neq S_0$ picks the $Q \\setminus S$ side.\n\nConsider now a vertex $w_Z \\in W$ and consider two cases: either $u_{S_0} \\in Z$ or $u_{S_0} \\notin Z$; see also Figure~\\ref{fig:double-exp}.\n\n\\myparagraph{Case 1: $u_{S_0} \\in Z$. } As argued above, all vertices of $U$ choose their side according to what is best in $G_0$, so $u_{S_0}$ is the only vertex in $U$ on the $S$ side.\nTo join the $S$ side, $w_Z$ has to cut $\\ell\/2-1$ edges $u_{S'} w_Z$ of cost $1$ each, inflicting a total cost of $\\ell\/2-1$;\nnote that it does not need to cut the edge $u_{S_0}w_Z$, which is of cost $1$ as $u_{S_0} \\in Z$.\nTo join the $Q \\setminus S$ side, $w_Z$ needs to cut $xw_Z$ of cost $\\ell\/2-1$\nand $u_{S_0}w_Z$ of cost $1$, inflicting a total cost of $\\ell\/2$.\nConsequently, $w_Z$ joins the $S$ side.\n\n\\myparagraph{Case 2: $u_S \\notin Z$. 
} Again all vertices of $U$ choose their side according to what is best in $G$, so $u_{S_0}$ is the only vertex in $U$ on the $S$ side.\nTo join the $S$ side, $w_Z$ has to cut $\\ell\/2$ edges $u_{S'}w_Z$ of cost $1$ each, inflicting a total\ncost of $\\ell\/2$.\nTo join the $Q \\setminus S$ side, $w_Z$ has to cut one edge of positive cost, namely the edge $xw_Z$ of cost $\\ell\/2-1$.\nConsequently, $w_Z$ joins the $Q \\setminus S$ side.\n\nThis finishes the proof of the lemma.\n\\end{proof}\nLemma~\\ref{lem:dblexp} shows that $G$ cannot be compressed using the technique presented in~\\cite{HagerupKNR98}.\nTo see that let us fix two vertices $w_Z$ and $w_{Z'}$ in $W$,\nand let $u_S \\in Z \\setminus Z'$.\nThen, Lemma~\\ref{lem:dblexp} shows that $w_Z$ and $w_{Z'}$ lie on different\nsides of the minimum cut between $S$ and $Q \\setminus S$.\nThus, $w_Z$ and $w_{Z'}$ cannot be merged.\nSimilar but simpler arguments show that no other pair of vertices in $G$ can be merged.\nTo finish the proof of Theorem~\\ref{thm:side}, observe that\n$$|W| = \\binom{\\ell}{\\ell\/2} = \\Omega\\left(2^{\\ell}\/\\sqrt{\\ell}\\right) = \\Omega\\left(2^{\\binom{k-1}{\\lfloor (k-1)\/2 \\rfloor} - k\/2}\\right).$$\n\n\n\n\n\n\\section{Introduction}\\label{sec:intro}\nOne of the most popular paradigms when designing effective algorithms is preprocessing.\nThese days in many applications, in particular mobile ones, even though fast running time is desired, the memory usage is the main limitation. The preprocessing needed for such applications is to reduce the size of the input data prior to some resource-demanding computations,\n without (significantly) changing the answer to the problem being solved.\nIn this work we focus on this kind of preprocessing, known also as graph compression, for flows and cuts.\nThe input graph needs to be compressed while preserving its essential flow and cut properties.\n\nCentral to our work is the concept of a \\emph{mimicking network}, introduced by \nHagerup, Katajainen, Nishimura, and Ragde~\\cite{HagerupKNR98}.\nLet $G$ be an edge-weighted graph with a set $Q \\subseteq V(G)$ of $k$ terminals.\nFor a partition $Q = S \\uplus \\bar{S}$, \na minimum cut between $S$ and $\\bar{S}$ is called a \\emph{minimum $S$-separating cut}. \nA \\emph{mimicking network} is an edge-weighted graph\n$G'$ with $Q \\subseteq V(G')$ such that the weights of minimum $S$-separating cuts\nare equal in $G$ and $G'$ for every partition $Q = S \\uplus \\bar{S}$.\nHagerup et al~\\cite{HagerupKNR98} \nobserved the following simple preprocessing step: if two vertices $u$ and $v$\nare always on the same side of the minimum cut between $S$ and $\\bar{S}$ for every choice\nof the partition $Q = S \\uplus \\bar{S}$, then they can be merged without changing the size\nof any minimum $S$-separating cut. \nThis procedure always\nleads to a mimicking network with at most $2^{2^k}$ vertices. 
\n\nThe above upper bound can be improved to a still double-exponential bound\nof roughly $2^{\\binom{k-1}{\\lfloor (k-1)\/2 \\rfloor}}$, as observed both by \nKhan and Raghavendra~\\cite{KhanR14} and by Chambers and Eppstein~\\cite{ChambersE13}.\nIn 2013, Krauthgamer and Rika~\\cite{KrauthgamerR13} observed that the aforementioned preprocessing\nstep can be adjusted to yield a mimicking network of size $\\mathcal{O}(k^2 2^{2k})$ for planar graphs.\nFurthermore, they introduced a framework for proving lower bounds, and showed that\nthere are (non-planar) graphs, for which any mimicking network \nhas $2^{\\Omega(k)}$ edges; a slightly stronger lower bound \nof $2^{(k-1)\/2}$ has been shown by Khan and Raghavendra~\\cite{KhanR14}.\nOn the other hand, for planar graphs the lower bound of~\\cite{KrauthgamerR13} is $\\Omega(k^2)$. \nFurthermore, the planar graph lower bound applies even in the special case when all the terminals\nlie on the same face.\n\nVery recently, two improvements upon these results for planar graphs have been announced. \nIn a sequel paper, Krauthgamer and Rika~\\cite{KrauthgamerR17} improve the \npolynomial factor in the upper bound for planar graphs to $\\mathcal{O}(k 2^{2k})$ and show that the exponential dependency\nactually adheres only to the \\emph{number of faces containing terminals}: if\nthe terminals lie on $\\gamma$ faces, one can obtain a mimicking network\nof size $\\mathcal{O}(\\gamma 2^{2\\gamma} k^4)$. \nIn a different work, Goranci, Henzinger, and Peng~\\cite{GoranciHP17} showed a tight $\\mathcal{O}(k^2)$ upper bound\nfor mimicking networks for planar graphs with all terminals on a single face.\n\n\\myparagraph{Our results.}\nWe complement these results by showing an exponential lower bound for mimicking networks in planar graphs.\n\\begin{theorem}\\label{thm:main}\nFor every integer $k \\geq 3$,\nthere exists a planar graph $G$ with a set $Q$ of $k$ terminals\nand edge cost function under which every mimicking network for $G$ has\nat least $2^{k-2}$ edges.\n\\end{theorem}\nThis nearly matches the upper bound of $\\mathcal{O}(k2^{2k})$ of Krauthgamer and Rika~\\cite{KrauthgamerR17}\nand is in sharp contrast with the polynomial bounds when the terminals lie on a constant\nnumber of faces~\\cite{GoranciHP17,KrauthgamerR17}.\nNote that it also nearly matches the improved bound of $\\mathcal{O}(\\gamma 2^{2\\gamma} k^4)$ for terminals on $\\gamma$ faces~\\cite{KrauthgamerR17},\nas $k$ terminals lie on at most $k$ faces.\n\nAs a side result, we also show a hard instance for mimicking networks in general graphs.\n\\begin{theorem}\\label{thm:side}\nFor every integer $k \\geq 1$ that is equal to $6$ modulo $8$, there\nexists a graph $G$ with a set $Q$ of $k$ terminals \nand $\\Omega(2^{\\binom{k-1}{\\lfloor (k-1)\/2 \\rfloor} - k\/2})$ vertices,\n such that no two vertices can be identified without strictly increasing the size of some minimum $S$-separating cut.\n\\end{theorem}\nThe example of Theorem~\\ref{thm:side}, obtained by iterating the construction of Krauthgamer and Rika~\\cite{KrauthgamerR13},\nshows that the doubly exponential bound\nis natural for the preprocessing step of Hagerup et al~\\cite{HagerupKNR98}, and\none needs different techniques to improve upon it.\nNote that the bound of Theorem~\\ref{thm:side} is very close to the upper bound given\nby~\\cite{ChambersE13,KhanR14}.\n\n\\myparagraph{Related work.}\nApart from the aforementioned work on mimicking\nnetworks~\\cite{GoranciHP17,HagerupKNR98,KhanR14,KrauthgamerR13,KrauthgamerR17},\nthere 
has been substantial work on preserving cuts and flows approximately,\nsee e.g.~\\cite{robi-apx,robi-old,mm-sparsifiers}.\nIf one wants to construct mimicking networks for vertex cuts in\nunweighted graphs with deletable terminals (or with small integral\nweights), the representative sets approach of Kratsch and Wahlstr\\\"{o}m~\\cite{KratschW12}\nprovides a mimicking network with $\\mathcal{O}(k^3)$ vertices, improving upon a previous\nquasipolynomial bound of Chuzhoy~\\cite{Chuzhoy12}.\n\n\n\\medskip\n\nWe prove Theorem~\\ref{thm:main} in Section~\\ref{sec:main}\nand show the example of Theorem~\\ref{thm:side} in Section~\\ref{sec:side}.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{introduction}\nAbout $100$ young massive stars are located in the central half-parsec of the Milky-Way's nuclear star cluster, with a supermassive black hole at is center. The proper motions of these stars indicate an average sense of clockwise rotation around the black hole \\citep{2000MNRAS.317..348G}. More detailed 3-dimensional analyses show that a significant fraction ($\\sim 30-60 \\%$) of these stars are moving in a thin clockwise disc \\citep{2003ApJ...590L..33L, 2003ApJ...594..812G, 2006ApJ...643.1011P, 2006ApJ...648..405B, 2009ApJ...690.1463L, 2009ApJ...697.1741B, 2014ApJ...783..131Y, 2022ApJ...932L...6V}. The other stars clearly do not belong to the disc, with several groups disagreeing about the interpretation of their precise geometry. \\cite{2003ApJ...594..812G}, \\cite{2006ApJ...643.1011P}, \\cite{2009ApJ...697.1741B}, and \\cite{2022ApJ...932L...6V} argue that the other stars form another disc-like structure, albeit with somewhat diffuse clustering of the stars' orbital planes, possibly connected by a strong warp to the clockwise disc. \\cite{2022ApJ...932L...6V} goes further and claims that several diffuse disc-like structures can be identified. By contrast, \\cite{2009ApJ...690.1463L} and \\cite{2014ApJ...783..131Y} argue that all these secondary disc-like structures are not statistically significant. These groups also disagree on the precise fraction of the young stars that belong to the clockwise disc.\n\nIt is appealing to consider this kinematic data as a result of a partial disruption of an initially coherent stellar disc. There is strong evidence that the young stars were formed {\\it in situ} \\citep{2005MNRAS.364L..23N, 2006ApJ...643.1011P, 2009ApJ...690.1463L}, and by far the most natural scenario for this is the star formation inside a gravitationally unstable gas disc \\citep{2003ApJ...590L..33L, 2008Sci...321.1060B} . \\cite{2009A&A...496..695S} suggested that a torque from the circumnuclear gas ring located at a distance $\\sim 2$pc from the black hole, could exert a disruptive torque on the disc, but their analysis did not take the disc's self-gravity into account. A direct $N$-body simulations of this process by \\cite{2016ApJ...818...29T} showed that it was not efficient in disrupting the disc. \\cite{2011MNRAS.412..187K, 2015MNRAS.448.3265K, 2022MNRAS.tmp.2848P} explored whether the disc could be disrupted by stochastic torques from vector resonant relaxation (VRR), but found that in order for this to happen in $5\\times 10^6$ years, the masses of the background stars in the cluster had to be unrealistically high, over $100M_{\\odot}$.\n\nWe think that the key to the puzzle of the disrupted disc is in its gravitational interaction with the rotating nuclear star cluster inside which it resides. 
More specifically, we show that the effect called ``resonant friction'' \ncan produce a very strong torque on the disc that tends to align the disc's rotation with that of the cluster, and in the process can partially disrupt \nthe disc in less than $5\\times 10^6$ years. The Milky Way's nuclear cluster is rotating neither clockwise nor counterclockwise relative to the line of sight; instead its rotational axis is close to that of the Galaxy, and thus the cluster's rotation is misaligned with that of the disc \\citep{2014A&A...570A...2F}. In Section 2 we demonstrate using numerical simulations that this configuration naturally leads to the disc disruption, and that the expected orbital distribution is qualitatively similar to the one that is observed in the Galactic Center. Typically we see an inner disc, sometimes warped or broken up into rings, co-existing with more diffuse, disoriented or weakly clustered orbits of the outer stars, with the inner disc containing $\\sim 50\\%$ of the total stars. The reader uninterested in the details of this paper should just look at Figure 1: it contains the paper's most interesting results. In this section we also explain how we choose the range of rotational parameters of the cluster based on existing observations of the cluster's mean radial velocities. In Section 3 we back up the numerical results by obtaining an analytical estimate of the Resonant Friction timescale inside a slowly-rotating cluster. In Section 4 we speculate that resonant friction would reorient gaseous accretion discs that are formed inside a nuclear star cluster from radially infalling, tidally captured gas clouds. We discuss what impact this would have on spins of supermassive black holes. We present our numerical algorithm in the Appendix.\n\\begin{figure*}\n\\centering\n \n \\includegraphics[width=.450\\textwidth]{norotation.jpg}\n \\includegraphics[width=.450\\textwidth]{rotation0.1octupole.jpg} \n \\includegraphics[width=.450\\textwidth]{rotation0.2octupole.jpg}\n \\includegraphics[width=.450\\textwidth]{rotation0.3octupole.jpg}\n \\includegraphics[width=.45\\textwidth]{rotation0.4octupole.jpg}\n \\includegraphics[width=.45\\textwidth]{rotation0.5octupole.jpg}\n \n \\caption{{\\bf Evolution of the stellar disc.} The background cluster is initially rotating along the $z$ axis with the dimensionless rotation rate $\\tilde{\\gamma}$ defined in the text and specified at the top of each subfigure. The counter-rotating disc is injected at $t=0$. The figure shows some randomly chosen examples of the time evolution of $n_z$, where $\\vec{n}$ is the unit vector directed along the star's angular momentum, for all the stars in the disc. The red lines mark the inner half of the disc's stars and the green lines mark the outer half. As the rotation rate increases, we observe the disruption of the outer parts of the disc and a tendency for the inner disc to align with the cluster. At high rotation rates, the disc gets warped or broken into transient rings. 
These effects are very important at $5$ Myrs, the likely age of the young stellar population in the Galactic Center.\n\n \\small\n }\n\\end{figure*}\n\\newline\\newline\n\\section{The stellar disc inside a rotating cluster}\n\\subsection{The results of numerical experiments}\nWe model only the inner part of the central cluster, which in our simulations consists of $1.2\\times 10^5$ solar-mass stars, located inside the central $0.4$pc with the spatial number density of stars given by the Peebles-Young distribution, $n(r)\\propto r^{-1.5}$. The cluster, if it were extended out to $1$pc, would contain about half a million solar masses. The numbers are based on \\cite{2007A&A...469..125S}, but it is questionable how faithfully they represent the distribution of dynamical scatterers in the Galactic Center, since (a) there seems to be a hole in the stellar distribution in the central $\\sim 0.2$pc \\citep{2009A&A...499..483B, 2009ApJ...703.1323D, 2010ApJ...708..834B}, and (b) stellar-mass black holes are expected to segregate inside the central $0.1$pc \\citep{2000ApJ...545..847M, 2009ApJ...697.1861A}. \n\n\nDespite these uncertainties, we believe that the numbers are correct to within an order of magnitude. Our young massive disc consists of $105$ stars of $80 M_\\odot$ each. It is the total mass of the disc that plays the key role, and we get similar results if we increase the number of stars and decrease their masses, while keeping the total mass constant. The disc's surface density is $\\Sigma\\propto r^{-2}$ \\citep{2006ApJ...643.1011P} and its outer edge is at $0.4$pc.\n\nIn our numerical experiments, we sacrifice the precision of individual interactions between stellar orbits in favour of extremely rapid speed and the ease with which we can explore physical effects. The fastest dynamics responsible for the time evolution in the orientation of orbital planes is captured when the precessing orbits are replaced with the annuli that they trace; this type of relaxation is called ``vector resonant relaxation'' \\citep{1996NewA....1..149R}. We, however, replace each annulus with a circular ring of comparable radius, and keep only quadrupolar and octupolar terms in the interaction potential between the rings. Furthermore, instead of considering a continuum of the ring radii, we choose a modest number of discrete radius values, i.e., ``the bins'' ($10$ for the computations shown in Figure 1). Our computational vectorization benefits from each bin having an equal number of stars, including the stars that belong to the disc; there is no restriction on the stars' individual masses. The requirement that the overall distribution (approximately) represents $n(r)\\propto r^{-1.5}$ determines the value of the radius assigned to each bin. The main advantage of the binning is that it allows us to perform $\\sim N_{s}$ instead of $\\sim N_s^2$ operations per step. This point, together with other details of our numerical procedure, is explained in the Appendix. The commented Matlab code is available upon request.\n\nEach simulation models $3\\times 10^7$ years of evolution on an ordinary desktop computer. It takes several minutes to complete if only quadrupolar terms are taken into account, and several hours if the octupole terms are included.\nWe use the ``octupolar'' simulations for the production runs, and the ``quadrupolar'' ones as a super-fast way to explore the parameter space. 
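To make the scheme concrete, the fragment below gives a minimal Python illustration of the quadrupole ring-torque idea; it is not the production code, and the ring-ring coupling, written for two circular rings as $\\Phi_{ij}=-G m_i m_j a_<^2\/(4 a_>^3)\\, P_2(\\vec{n}_i\\cdot\\vec{n}_j)$, as well as the explicit Euler update, are our own simplifying choices, while the radius binning and the octupole terms are omitted.\n\\begin{verbatim}\nimport numpy as np\n\nG_N, M_BH = 6.674e-11, 4.0e6 * 1.989e30     # SI units; 4e6 solar-mass black hole\n\ndef step(n, m, a, dt):\n    # One Euler step for the unit angular-momentum vectors n[i] of circular\n    # rings with masses m[i] and radii a[i] (numpy arrays), under mutual\n    # quadrupole torques.\n    N = len(m)\n    L = m * np.sqrt(G_N * M_BH * a)         # orbital angular momenta of the rings\n    torque = np.zeros((N, 3))\n    for i in range(N):\n        for j in range(i + 1, N):\n            a_in, a_out = min(a[i], a[j]), max(a[i], a[j])\n            K = G_N * m[i] * m[j] * a_in**2 \/ (4.0 * a_out**3)\n            t = 3.0 * K * np.dot(n[i], n[j]) * np.cross(n[i], n[j])\n            torque[i] += t                  # torque on ring i from ring j\n            torque[j] -= t                  # equal and opposite reaction\n    n_new = n + dt * torque \/ L[:, None]\n    return n_new \/ np.linalg.norm(n_new, axis=1, keepdims=True)\n\\end{verbatim}\nIn the actual computations the pairwise double loop above is avoided altogether by the radius binning described in the previous paragraph. 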
The only significant difference is that the inner half of the disc tends to remain somewhat more coherent if the octupolar terms are included. This allows us to quickly change the parameters of the system, and explore the convergence and stability of the results. The results of the numerical simulations are illustrated in Figure 1. The disc is injected in counter-rotation with respect to the cluster, so that the $\\vec{n}_i=\\vec{l}_i\/l_i$ are clustered in the southern hemisphere relative to the cluster rotation; here $\\vec{l}_i$ is the angular momentum of the $i$'th star. If the cluster is non-rotating, the disc stays coherent even though its bending modes are excited, consistent with the findings of \\cite{2011MNRAS.412..187K,2015MNRAS.448.3265K} and, most convincingly, with the direct simulations of \\cite{2022MNRAS.tmp.2848P}. For greater rotation rates, we see that within several million years, the orbits of the inner half of the stars (represented by the red lines) are dragged towards corotation with the cluster, while the outer stars get dispersed. For more rapidly rotating clusters we observe the disc being warped or split into distinctly oriented rings, which are seen as coherent groups of the $\\vec{n}$-vectors. This complexity is consistent with the phenomenology seen in the galactic center.\n\nWhat is the rotation rate of the cluster?\nIn the next subsection we mathematically define the dimensionless rotation rate, and show how to estimate it from the radial velocity data. Using existing literature, we obtain a rough estimate of $\\tilde{\\gamma}\\sim 0.3$.\n\n\\subsection{The rotation rate of the relaxed cluster}\n\n Inside the gravitational radius of influence of a supermassive black hole, the orbit-averaged torques between the stars moving on slowly-evolving elliptical orbits, drive a fast stochastic evolution of the stars' inclinations and eccentricities \\citep{1996NewA....1..149R, 2018ApJ...860L..23B}. This is known as Resonant Relaxation. Importantly, the orbit-averaged dynamics leaves the semimajor axes unchanged. As argued in the original discovery paper, the relaxed state should be in a statistical equilibrium, with the probability distribution function proportional to the appropriate Boltzmann weight:\n\\begin{eqnarray}\n P(m, a, l, l_z,\\omega,\\Omega)&=&N(m, a)\\times\\nonumber\\\\\n & &\\exp\\left[-m\\left(\\beta\\epsilon-\\vec{\\gamma}\\cdot\\vec{l}~\\right)\\right].\n \\label{Prob}\n\\end{eqnarray}\nHere $m$ is the mass and $a$ is the semimajor axis. On the left-hand side of the equation, the two Delaunay actions $l, l_z$ are the magnitude and the z-component of the specific angular momentum vector $\\vec{l}$, and the two Delaunay angles $\\omega, \\Omega$ are the argument of periapsis and the longitude of ascending node. On the right-hand side $N(m,a)$ is the normalization factor that will play no role in the following discussion, and $\\epsilon$ is the specific orbit-averaged potential energy of the star's interaction with the other stars in the cluster. The inverse temperature $\\beta$ can be either positive, zero or negative, and $\\vec{\\gamma}$ is the rotational vector of the cluster; $\\vec{\\Omega}_{\\rm cl}\\equiv \\vec{\\gamma}\/\\beta$ is known as a thermodynamic angular velocity of the cluster, and it coincides with the angular velocity of the cluster's precession if the cluster is lopsided (e.g., Gruzinov et al.~2020). 
We can define the dimensionless rotation rate of the cluster as follows:\n\\begin{equation}\n \\tilde{\\gamma}=m_0 l_0 \\gamma,\n \\label{tildegamma}\n\\end{equation}\nwhere $m_0$ is the characteristic mass of a star in the cluster and $l_0$ is the characteristic specific angular momentum of a stellar orbit in the cluster. We choose $m_0=M_\\odot$ and $l_0=\\sqrt{GMr_0}$, where $M$ is the mass of the supermassive black hole and $r=0.1$pc, the characteristic radius of the stellar disc. It is this rotation parameter that labels the plots in Figure 1.\n\nA number of authors have explored thermodynamic equilibria of this form \\citep{1996NewA....1..149R,2014JPhA...47C2001T, 2015MNRAS.448.3265K, 2017ApJ...842...90R, 2018PhRvL.121j1101S, 2019PhRvL.123b1103T, 2020MNRAS.493.2632T, 2020ApJ...905...11G, 2022MNRAS.514.3452M, 2022arXiv220207665M}.\nWe now show that for a slowly-rotating edge-on cluster, Eq.~(\\ref{Prob}) together with some reasonably natural assumptions leads to a simple relationship\nbetween the mean radial velocity and its dispersion, both measured from, e.g., pixel-integrated spectroscopy. This relationship can be used to estimate the rotational velocity of the relaxed cluster (or infer it, if the data is detailed enough).\n\nConsider a cluster whose axis of rotation $z$ lies in the plane of the sky; thus $\\vec{\\gamma}=\\gamma \\hat{z}$. We shall assume that the cluster has a rotational symmetry about this axis. The Milky Way nuclear star cluster's rotation axis is perpendicular to the galaxy \\citep{2008A&A...492..419T,2014A&A...570A...2F}, and the cluster appears to be axially symmetric in its inner parts despite some inferred triaxiality in its outer parts \\citep{2017MNRAS.466.4040F}. Let the $x$-axis be directed away from the observer along line of sight to the cluster, and let the origin of the $x, y, z$ coordinate system coincide with the center of the cluster. Each line of sight is characterized by coordinates $y,z$. The mean line-of-sight radial velocity is given by\n\\begin{equation}\n \\langle v_r(y,z)\\rangle=\\int P(\\vec{r}, \\vec{v})~v_x~dv_x~dv_y~dv_z~dx,\n \\label{radvel}\n\\end{equation}\nwhere $P(\\vec{r}, \\vec{v})$ is the probability distribution function for a star moving with velocity $\\vec{v}$ to be located at position $\\vec{r}$. The latter can be obtained by using Eq.~(\\ref{Prob}) and expressing all the Delaunay variables in terms of $\\vec{r}, \\vec{v}$ (this is true because the Jacobian of a canonical transformation equals $1$). Consider a reflection \n\\begin{eqnarray}\n x&\\rightarrow&-x\\nonumber\\\\\n v_x&\\rightarrow&-v_x\\label{symmetry}\n\\end{eqnarray}\nabout the plane of the sky. This leaves $a,l,\\epsilon$ unchanged, but flips the sign of $\\vec{\\gamma} \\cdot \\vec{l}$. Therefore,\n\\begin{eqnarray}\nP(-x,y,z,-v_x, v_y, v_z)&=&\\exp\\left[-2 m\\gamma(y v_x-x v_y)\\right]\\times\\nonumber\\\\\n & &P(x,y,z,v_x,v_y, v_z).\\label{flip1}\n\\end{eqnarray}\nWe assume that the cluster is rotating slowly, with $m \\gamma l\\ll 1$\nWe therefore expand $\\exp\\left[-2 m\\gamma(y v_x-x v_y)\\right]\\simeq 1-2 m\\gamma(y v_x-x v_y)$. 
Multiplying the above equation by $v_x$ and integrating over the velocities and over $x$, we obtain the following relation:\n\\begin{equation}\n \\langle v_r \\rangle -m \\gamma y \\langle v_r \\rangle^2=m\\gamma \\left(y\\sigma_r^2-\\langle x T_{xy}\\rangle\\right).\n \\label{relation1}\n\\end{equation}\nHere $\\sigma$ is the radial velocity dispersion, $T_{ij}=\\overline{v_i v_j}$ is the velocity tensor, and $\\langle \\rangle$ stands for the average along the line of sight. Solving for ${\\gamma}$ and using Eq.~(\\ref{tildegamma}), and assuming that $m=m_0$ is the typical mass of the observed stars, we get\n\\begin{equation}\n \\tilde{\\gamma}={l_0 \\langle v_r\\rangle\\over y(\\sigma_r^2+\\langle v_r\\rangle^2)-\\langle x T_{xy}\\rangle}.\n\\end{equation}\nIn the above equation everything is measurable except $\\langle x T_{xy}\\rangle$, since even if $v_y$ for individual stars could be measured, we would have no information about $x$. Moreover, this term is in fact non-zero for anisotropic velocity ellipsoid and could be similar to $y \\sigma_r^2$ in magnitude. Note however, $T_{xy}=0$ for an isotropic velocity distribution, which is expected to be produced by scalar resonant relaxation near the black hole\\footnote{The prediction from the Resonant Relaxation is that the closer to the black hole, the more isotropic is the velocity ellipsoid.}. We shall assume this; in principle this assumption could be tested for consistency by checking that so-measured $\\tilde{\\gamma}$ is pixel-independent\\footnote{\\cite{2017MNRAS.466.4040F} performed orbit-modelling of the whole nuclear cluster out to $\\sim 8$pc, and found that the anisotropy of the velocity ellipsoid is significant in the intermediate range of radii around $\\sim 1$pc, but smaller at greater or much smaller radii. The inference is clearly not very precise in the regions where the stellar disc is located.}. We expect the result thus obtained to be correct to within a factor of $\\sim 2$.\n\nThe data in \\cite{2014A&A...570A...2F} is quite noisy at distances of interest. From Figure $11$ of that paper, the inner bin at $10$ arcsec (which corresponds to $\\sim 0.4$pc) has $\\langle v_r\\rangle\\sim 25 $km\/sec and $\\sigma\\sim 85$km\/sec. Plugging the numbers in the above equation, we get \n\n\\begin{equation}\\tilde{\\gamma}\\sim 0.3\\end{equation}. \n\nWe emphasize that this number is only an order of magnitude estimate and therefore we explore a range of values, as indicated in Figure $1$.\n\n\n \n\n\n\n\n\n \\section{Analytical estimate of Resonant Friction timescale}\nResonant friction was first discussed as a phenomenon by \\cite{1996NewA....1..149R}, as a dissipative counterpart to the stochastic resonant relaxation. Both are necessary for the thermal equilibrium to be established. The origin of the friction can be understood as follows. Consider the probability distribution in Eq.~(\\ref{Prob}) for high masses, $m\\gg m_0$. A star with such mass will tend to be near the orbit that minimizes the Jacoby constant $\\epsilon-\\vec{l}\\cdot\\vec{\\gamma}\/\\beta$. \\cite{2020ApJ...905...11G} studied such orbits and showed that they are stationary in the frame of reference that is rotating with the angular velocity $\\vec{\\gamma}\/\\beta$, and are typically aligned with the cluster's rotation. This occurs despite stochastic Vector Resonant Relaxation torques that are perturbing the massive orbit, because Resonant Friction {\\it drives} massive objects in a cluster towards these special orbits. 
The existence of Resonant Friction is thus \nclosely connected to the existence of thermodynamical equilibrium.\n\nResonant Friction explains why in numerical simulations of an Intermediate-Mass Black Hole inspiraling through a nuclear cluster, the former's orbit rapidly orients itself with the cluster's rotation. \\cite{2012ApJ...754...42M} demonstrated explicitly in numerical experiments that this reorientation occurs because of secular torques, but they did not make the connection to thermodynamics (the community has been somewhat reluctant to accept Madigan \\& Levin's arguments, instead attributing the reorientation to the $2$-body scattering processes).\n\nIt is the balance between the fluctuations and the dissipation that establishes the Boltzmann distribution. Below we use this fact to estimate the dissipation timescale in a rotating cluster.\nFirst, we observe that the $z$-component of the angular momentum of an object inside the cluster experiences a. stochastic walk, due\nto random torques from other orbits, and b. systematic drift upwards, which tends to align the orbit with the\ncluster rotation. The corresponding evolution equation\nfor the $l_z$-distribution $f(m,l_z,t)$ can be written as\n\\begin{equation}\n {\\partial f\\over \\partial t}=-{\\partial\\over \\partial l_z}\\left[F_{\\rm drift}+F_{\\rm stochastic}\\right].\n \n \\label{evolution}\n\\end{equation}\n\n\n\nHere $m$ is the mass of a star, and $l_z=L_z\/m$ where $L_z$ is the $z$-component of its angular momentum, and $F_{\\rm drift}$ and $F_{\\rm stochastic}$ are the fluxes in $l_z$-space due to the resonant friction and stochastic resonant relaxation, respectively.\nThe Fokker-Planck form of the fluxes is given by\n\\begin{eqnarray}\n F_{\\rm drift}(m,l_z)&=&V(m,l_z)~f(m,l_z)\\nonumber\\\\\n F_{\\rm stochastic}(m,l_z)&=&-{\\partial\\over\\partial l_z}\\left[ D(l_z) f(m,l_z)\\right]\\label{fluxes}\n\\end{eqnarray}\nThe drift velocity $V(m,l_z)$ is mass-dependent, while the stochastic diffusion coefficient $D(l_z)$ is mass-independent because of the equivalence principle (the torque per mass on the orbit from the other stars depends only on the orbit, and not on the mass of the star).\nMoreover, since flipping the direction of the angular momentum $\\vec{l}\\rightarrow-\\vec{l}$ does not charge the period-averaged orbit,\nwe must have $D(l_z)=D(-l_z)$.\n\nFor slowly rotating, nearly spherically symmetric clusters $\\epsilon$ depends only weakly on the orbit's orientation. Therefore, the equilibrium distribution is given by\n\\begin{equation}\n f(m,l_z)=f_0(m)\\exp(m\\gamma l_z),\n \\label{equilibrium}\n\\end{equation}\nwhere $\\vec{\\gamma}=\\gamma~\\hat{z}$ is the rotational vector of the cluster. In equilibrium, $F_{\\rm drift}+F_{\\rm stochastic}=0$. Therefore,\nsubstituting Eq.~(\\ref{equilibrium}) into Eq~(\\ref{fluxes}), we obtain the relationship for the drift velocity\n\\begin{equation}\n V(m,l_z)=V(0,l_z)+m\\gamma D(l_z).\n\\end{equation}\nThe second term of the right-hand side is the resonant-friction induced\npart of the drift velocity. The first term\n\\begin{equation}\n V(0,l_z)=D^{\\prime}(l_z)\n\\end{equation}\nis the drift velocity of the zero-mass particles, required to enforce their\nfully isotropic equilibrium distribution. 
The parity symmetry of $D(l_z)$ results in $V(0,0)=0$, and thus\n\\begin{equation}\n V(m,0)=m\\gamma D(0),\n\\end{equation}\nwhich establishes a key relationship between $V$, $\\gamma$, and $m$.\nIt allows us to relate the frictional timescale $t_{\\rm fr}$ to that of the vector resonant relaxation, $t_{\\rm VRR}$. We have\n\\begin{eqnarray}\n t_{\\rm fr}&\\sim & l_0\/V\\sim l\/(m\\gamma D),\\nonumber\\\\\n t_{\\rm VRR}&\\sim& l_0^2\/D,\\label{timescales}\n\\end{eqnarray}\nwhere like in the previous section, $l_0$ is the characteristic specific angular momentum of an orbit in the cluster. Thus we have\n\\begin{equation}\n t_{\\rm fr}\\sim \\tilde{\\gamma}^{-1} {m_0\\over m} t_{\\rm VRR},\n \\label{tfr1}\n\\end{equation}\nwhere $m_0$ is the mass of a typical star in the cluster and like before, the the dimensionless rotation parameter is given by\n\\begin{equation}\n \\tilde{\\gamma}=m_0\\gamma l_0.\n\\end{equation}\n\nSo what is $t_{\\rm VRR}$? Following the arguments of \\cite{1996NewA....1..149R}, one can show that\n\\begin{equation}\n t_{\\rm VRR}\\sim {M_{\\rm BH}^2\\over M_* m_0} {P^2\\over t_{\\rm coh}},\n\\end{equation}\nwhere $M_{\\rm BH}$ is the mass of the central black hole, $M_*$ is the mass of the stellar cluster, $P$ is the characteristic orbital period, and $t_{\\rm coh}$ is the characteristic coherence timescale for the fluctuating torque. \\cite{1996NewA....1..149R} took $t_{\\rm coh}\\sim t_{\\rm VRR}$, arguing that VRR is the main mechanism for the change in orientation of the orbits. Thus they obtain\n\\begin{equation}\n t_{\\rm VRR}\\sim {M_{\\rm BH}\\over \\sqrt{M_* m_0}}P.\n \\label{tVRR1}\n\\end{equation}\nSubstituting this into Eq.~(\\ref{tfr1}), we obtain\n\\begin{equation}\n t_{\\rm fr}\\sim {M_{\\rm BH}~P\\over \\tilde{\\gamma}\\sqrt{M_* m_0}}.\n \\label{tfr2}\n\\end{equation}\nHowever, this argument has a serious limitation. If a cluster is rotating, its stellar distribution is flattened, with the ellipticity $\\sim \\tilde{\\gamma}^2$. This ellipticity drives precession of the stellar orbits with characteristic timescale\n\\begin{equation}\n t_{\\rm prec}\\sim {M_{\\rm BH}\\over M_*}\\tilde{\\gamma}^{-2}P.\n\\end{equation}\nThis timescale becomes comparable the one given in Eq.~(\\ref{tVRR1}) for $\\tilde{\\gamma}\\sim N_s^{-1\/4}\\sim 0.1$, where $N_s=M_*\/m_0$ is the number of stars in the cluster. Therefore for $\\tilde{\\gamma}\\gg N_s^{-1\/4}$, we should consider $t_{\\rm coh}\\sim t_{\\rm prec}$. In that case, we get\n\\begin{equation}\nt_{\\rm VRR}\\sim {M_{\\rm BH}\\over m_0}\\tilde{\\gamma}^2 P.\n\\end{equation}\nSubstituting this into Eq.~(\\ref{tfr1}), we get\n\\begin{equation}\nt_{\\rm fr}\\sim \\tilde{\\gamma} {M\\over m} P.\n\\label{tfr3}\n\\end{equation}\nNote that this does not depend on the cluster mass $M_*$. To sum up: if $\\tilde{\\gamma}\\ll N_s^{-1\/4}$, one should use Eq.~(\\ref{tfr2}); in the opposite case, one should use Eq.~(\\ref{tfr3}). Clearly, these expressions are very approximate, and one needs more precise arguments which are beyond the scope of this paper,\nto obtain more reliable expressions with more precisely stated domains of validity. \n\nHaving derived the Resonant Friction timescale for a stellar orbit of mass $m$, we note that the effect should be there for any massive object inside the cluster that is gravitationally coherent. Thus it should apply to a disc of stars, for as long as the orbital planes of the stellar orbits remain clustered. We thus apply the expression above to the whole stellar disc. 
Taking $m=M_{\\rm disc}=8000 M_\\odot$, $M_{\\rm BH}=4\\times 10^6 M_\\odot$, $\\tilde{\\gamma}=0.3$, and $P=1500$yr (corresponding to a circular orbit at $0.1$pc), we get \n$t_{\\rm fr}\\sim 2.5$Myrs. This is consistent with the results of our numerical experiments described in Section 2.\n\n\n\n\\section{Alignment of accretion discs with the nuclear cluster rotation}\n\\begin{figure*}[t]\n \\centering\n \\epsfxsize=16cm \n \\epsfbox{scenario2.jpeg}\n \n \\caption{\n\n \\small\nPictorial description of the scenario described here. An infalling cloud of gas is tidally disrupted and its material forms an accretion disc around the SMBH. The initial orientation of the disc is random, but the Resonant Dynamical Friction drives the disc's angular momentum $\\vec{L}_{\\rm disc}$ into alignment with that of the cluster, $\\vec{L}_{\\rm cluster}$. }\n \\end{figure*} \n \n\\subsection{General remarks}\nA classic argument by \\cite{1982MNRAS.200..115S}, \nand its more recent elaborations \n\\citep{2002MNRAS.335..965Y, 2012AdAst2012E...7K}, \nstrongly suggest that supermassive black holes \n(SMBH) \nacquire most of their mass by accreting gas from thin discs in galactic nuclei. If the orientation of an accretion disc\nrelative to the black hole could be maintained, this mode of accretion would drive the black hole to a very high spin, with dimensionless spin parameter $\\alpha>0.9$ \n\\citep{1973blho.conf..343N}. \nThe discovery of \n\\cite{1975ApJ...195L..65B} \nthat gravito-magnetic forces drive an accretion\ndisc into alignment with the equatorial plane near the black hole, would suggest that we could expect essentially all supermassive black holes to be rapidly spinning. Measurements of x-ray and radio emission from accreting supermassive black holes in galactic nuclei indicate that they are indeed rotating rapidly \\citep{2020arXiv200808588J,2019ApJ...886...37D}. The spin energy of SMBHs is thought to be responsible for powering relativistic jets emanating from galactic nuclei, and thus SMBH spin is the key agent of feedback during the formation of elliptical galaxies. \n\nHowever, much remains to be understood about the spins of SMBHs. The measurements of the black hole spins rely on rather complex models of accretion discs and jets, and thus may contain systematic uncertainties. On the theoretical side,\n\\cite{2006MNRAS.373L..90K} \nargued that the accretion is likely to be driven by randomly-oriented infall episodes; \nthis is supported observationally by the fact that many radio jets seem to be randomly oriented relative to their host spiral galaxies. In this stochastic-accretion picture, one expects nearly half of the transient accretion discs to be counter-rotating relative to the black hole. \nImportantly, a counter-rotating innermost stable orbit has a greater radius than a co-rotating innermost stable orbit, and therefore the counter-rotating discs have a larger lever arm. \\cite{2008MNRAS.385.1621K} argued that as a result of this, the supermassive black holes are spun down, on average, to low spins of $\\alpha\\sim 0.3$. A more elaborated model of \\cite{2018MNRAS.477.3807F} makes a distinction between lower-mass black holes of $M_{\\rm BH}\\lesssim 10^7M_\\odot$ and the more massive ones. 
These authors argue that despite of the stochastic feeding, the spin directions of the lower mass black holes are expected to remain adiabatically aligned with the\nangular momentum directions of their accretion discs, while the higher-mass black holes might experience both spin-ups and spin-downs in equal measure. \n\nLISA will provide exquisitely precise measurements of the supermassive black hole spins, by measuring the gravitational waves from mergers of stellar-mass and intermediate-mass black holes with SMBHs. It is thus imperative to develop reliable theoretical predictions for the SMBH spins.\nBelow we argue that a key piece of physics is missing in the current theoretical analyses, namely the stabilizing effect of rotating black hole clusters on the orientation of the accretion disc. \nIndeed if the stellar disc in our Galactic Center gets reoriented into alignment with the cluster, why would not an accretion disc in a galactic nucleus which is more active than ours? \n\n\n\\subsection{Resonant friction on accretion discs}\nThe scenario we consider is depicted in Figure 2. In its first stage, a cloud of gas, with the mass of $\\sim 10^4M_\\odot$ falls in, gets tidally disrupted and forms a gaseous accretion disc. \nIf the cluster is flattened due to its rotation, it will exert a gravitational torque on the\ndisc, causing it to precess. The differential precession could distort the disc. The hydrodynamics of distorted discs could be complicated, but it is reasonable to expect that after much dissipation from shocks etc., the disc would reassemble in the equatorial plane of the cluster. Naively it would seem that the disc could equally likely be co-rotating and counter-rotating with the cluster.\n\nHowever, the resonant friction is expected to break the symmetry between co- and counter-rotation, and drive the disc's and the cluster's angular momenta into alignment. \nResonant friction should act on any massive object inside the cluster, in particular it should affect the accretion disc, so long as the disc remains coherent and creates a gravitational perturbation that affects the cluster. Whether the disc remains coherent long enough to experience the effect is the key question.\nIn absence of a full hydrodynamical treatment, our intuition can be guided by simulations of stellar disc evolution inside a rotating cluster, such as the one illustrated in Figure $1$. \nThe initial disc remains coherent enough, until its overall orbital angular momentum flips into co-rotation with the cluster. If instead of the stellar disc we introduced a gaseous disc of the same mass, it would\nfirstly flip into co-rotation with the cluster, and then settle into the equatorial plane of the cluster due to the hydrodynamic torques that are caused by the disc's differential precession.\n\nIf this picture were correct (and it obviously needs to be tested by hydrodynamic simulations!), then the rotating clusters could serve as\na stabilizing flywheel for the material that accretes onto the SMBH. 
It would imply that\n\\begin{itemize}\n \\item The SMBH spin direction is aligned with that of its host cluster\n \\item The SMBH spin magnitude could reach a very high value, unless there exists an as yet undetermined mechanism for the SMBH spindown (such as its super-radiant coupling to a scalar field or a cosmic string)\n \\item Since the resonant friction from cluster rotation will affect inspiraling Intermediate-Mass \n Black Holes, one expects a non-trivial alignment between the inspiraling orbit and the SMBH spin. Such an alignment will be detectable by LISA.\n\\end{itemize}\n\nIn proposing hydrodynamic numerical experiments, we need to ascertain that the scenario is reasonable by comparing the resonant friction timescale with that of the dissolution of the disc due to a differential precession. \nAs argued in the previous section, the timescale for the friction to align the disc is\n\\begin{equation}\n t_{\\rm fr}\\sim \\tilde{\\gamma} ~{M_{\\rm BH}\\over M_{\\rm disc}}~P,\n\\end{equation}\nthis equation is valid so long as $\\tilde{\\gamma}>N_s^{-1\/4}$, where $N_s$ is the number of stars in the cluster. Note that this timescale is comparable to the period of a bending mode of the disc due to its self-gravity, $t_{\\rm bend}\\sim (M_{\\rm BH}\/M_{\\rm disc}) P$. Therefore the reorientation of the disc due to Resonant Friction is expected to be accompanied by excitation of the bending motion of the disc. This explains the excitation of the\ndisc warps, and the disc's breaking into several rings in some of our simulations.\n\nNote also that the mass of the cluster, assumed to satisfy $M_{\\rm disc}2$ is prime.\nFor a positive integer $m$, $\\mathbb{Z}_p^m$ and $\\mathbb{Z}_{p^2}^m$ are the ring extension.\nThe linear codes over $\\mathbb{Z}_p$ and $\\mathbb{Z}_{p^2}$ are subgroups of $\\mathbb{Z}_p^m$ and $\\mathbb{Z}_{p^2}^m$, respectively.\n\nThe classical Gray map from $\\mathbb{Z}_{p^{k+1}}$ to $\\mathbb{Z}_p^{p^k}$, where $p$ is a prime and $k\\geq 1$, is given in \\cite{LS}.\nHere, let $k=1$, the Gray map $\\phi$ from $\\mathbb{Z}_{p^2}$ to $\\mathbb{Z}_p^{p}$ is as follows.\n\n$$\\begin{array}{ccc}\n\\mathbb{Z}_{p^2} & \\longrightarrow & \\mathbb{Z}_p^p\\\\\n\\theta & \\longmapsto & \\phi(\\theta)\n\\end{array},$$\n$\\phi(\\theta)=\\theta''(1,1,\\ldots,1)+ \\theta' (0,1,\\ldots,p-1)$, where\n$\\theta = \\theta'' p + \\theta',$ and $\\theta',\\theta''\\in\\{0,\\ldots,p-1\\}.$\n\nThe homogeneous weight \\cite{LS} $wt_{hom}$ on $\\mathbb{Z}_{p^2}$ is $$\nwt_{hom}(x)=\\left\\{\n\\begin{array}{ll}\np,& \\text{ if } x\\in p\\mathbb{Z}_{p^2}\\backslash\\{0\\}, \\\\\np-1,& \\text{ if } x\\notin p\\mathbb{Z}_{p^2}, \\\\\n0,& \\text{ if } x=0.\n\\end{array}\n\\right.\n$$\nAnd the homogeneous distance is $d_{hom}=wt_{hom}(\\mathbf{a}-\\mathbf{b})$, where $\\mathbf{a},\\mathbf{b}\\in \\mathbb{Z}_{p^2}^n$.\nThen, $\\phi$ is an isometry from $(\\mathbb{Z}_{p^2}^n, d_{hom})$ to $(\\mathbb{Z}_p^{pn}, d_H)$, where $d_H$ is the Hamming distance on $\\mathbb{Z}_p^{p}$.\n\nIf $\\mathcal{C}$ is a linear code over $\\mathbb{Z}_{p^2}$, then the code $C=\\phi(\\mathcal{C})$ is called a \\emph{$\\mathbb{Z}_{p^2}$-linear} code.\nThe dual of a linear code $\\mathcal{C}$ of length $n$ over $\\mathbb{Z}_{p^2}$, denoted by $\\mathcal{C}^{\\bot}$, is defined as\n$$\\mathcal{C}^\\bot=\\{\\mathbf{x}\\in \\mathbb{Z}_{p^2}^n:\\langle\\mathbf{x},\\mathbf{y}\\rangle=0 ~\\rm {for ~all}~ \\textbf{y}\\in \\mathcal{C}\\},$$\nwhere $\\langle,\\rangle$ denotes the usual Euclidean inner product.\nThe code 
$C_\\bot=\\phi(\\mathcal{C}^{\\bot})$ is called the \\emph{$\\mathbb{Z}_{p^2}$-dual code} of $C=\\phi(\\mathcal{C})$.\n\n\nIn 1973, Delsarte first defined the additive codes in terms of association schemes \\cite{DP}, it is a subgroup of the underlying abelian group.\nBorges et al. \\cite{BFPR10} studied the standard generator matrix and the duality of $\\mathbb{Z}_2\\mathbb{Z}_4$-additive codes. Since then, a lot of work has been devoted to characterizing $\\mathbb{Z}_2\\mathbb{Z}_4$-additive codes. Dougherty et al. \\cite{DLY16} constructed one weight $\\mathbb{Z}_2\\mathbb{Z}_4$-additive codes and analyzed their parameters. Benbelkacem et al. \\cite{BBDF20} studied $\\mathbb{Z}_2\\mathbb{Z}_4$-additive complementary dual codes and their Gray images. In fact, these codes can be viewed as a generalization of the linear complementary dual (LCD for short) codes \\cite{M92} over finite fields. Joaquin et al. \\cite{BBCM} introduced a decoding method of the $\\mathbb{Z}_2\\mathbb{Z}_4$-linear codes.\nMore structure properties of $\\mathbb{Z}_2\\mathbb{Z}_4$-additive codes can be found in \\cite{BBDF11,JTCR2}. Moreover, the additive codes over different mixed alphabet have also been intensely studied, for example $\\mathbb{Z}_2\\mathbb{Z}_2[u]$-additive codes \\cite{BC2}, $\\mathbb{Z}_2\\mathbb{Z}_{2^s}$-additive codes \\cite{AS13}, $\\mathbb{Z}_{p^r}\\mathbb{Z}_{p^s}$-additive codes \\cite{AS15}, $\\mathbb{Z}_2\\mathbb{Z}_2[u,v]$-additive codes \\cite{SW}, $\\mathbb{Z}_p\\mathbb{Z}_{p^k}$-additive codes \\cite{SWD} and $\\mathbb{Z}_p(\\mathbb{Z}_p+u\\mathbb{Z}_p)$-additive codes \\cite{WS}, and so on. It is worth mentioning that $\\mathbb{Z}_2\\mathbb{Z}_4$-additive cyclic codes form an important family of $\\mathbb{Z}_2\\mathbb{Z}_4$-additive codes, many optimal binary codes can be obtained from the images of this family of codes. More details of $\\mathbb{Z}_2\\mathbb{Z}_4$-additive cyclic codes can be found in \\cite{BC,JCR,JTCR,JTCR2,YZ20}.\n\nIn \\cite{CFC}, Fern\\'{a}ndez et al. studied the rank and kernel of $\\mathbb{Z}_2\\mathbb{Z}_4$-linear codes, where $\\mathbb{Z}_2\\mathbb{Z}_4$-linear codes are $\\mathbb{Z}_2\\mathbb{Z}_4$-additive codes under the generalized Grap map $\\Phi$.\nThe authors also studied the rank and kernel of $\\mathbb{Z}_2\\mathbb{Z}_4$-additive cyclic codes in \\cite{JTCR2}.\nIt is a natural problem: can we study the rank and kernel of $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-linear codes in terms of $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-additive codes?\nIn this paper, we succeed in considering this general case for the kernel, and we also generalize the rank over $\\mathbb{Z}_3\\mathbb{Z}_9$.\n\nLet $\\mathcal{C}$ be a $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-additive code of length $\\alpha+\\beta$ defined by a subgroup of $\\mathbb{Z}_p^\\alpha\\times\\mathbb{Z}_{p^2}^\\beta$. Let $n=\\alpha+p\\beta$ and\n$\\Phi:\\mathbb{Z}_p^\\alpha\\times\\mathbb{Z}_{p^2}^\\beta\\rightarrow \\mathbb{Z}_p^n$ is an extension of the Gray map, which\n$$\\Phi(\\mathbf{x},\\mathbf{y})=(\\mathbf{x},\\phi(y_1),\\ldots,\\phi(y_\\beta)),$$\nfor any $\\mathbf{x}\\in \\mathbb{Z}_p^\\alpha$, and $\\mathbf{y}=(y_1,\\ldots,y_\\beta)\\in \\mathbb{Z}_{p^2}^\\beta$. The code $C=\\Phi(\\mathcal{C})$ is called a $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$\\emph{-linear code}.\nThere are two parameters of this nonlinear code: the rank and the dimension of the kernel.\nWe denote $\\langle C\\rangle$ the linear span of the codewords of $C$. 
The dimension of $\\langle C\\rangle$ is called the \\emph{rank} of the code $C$, denoted by $rank(C)$.\nThe \\emph{kernel} of the code $C$, which is denoted as $K(C)$, is defined as:\n$$K(C)=\\{\\mathbf{x}\\in\\mathbb{Z}_p^{n}|C+\\mathbf{x}=C\\},$$\nwhere $C+\\mathbf{x}$ means $\\mathbf{x}$ adds all codewords in $C$.\nWe will denote the dimension of the kernel of $C$ by \\emph{$ker(C)$}.\n\nThe rank and dimension of the kernel have been studied for some families of $\\mathbb{Z}_2\\mathbb{Z}_4$-linear codes \\cite{BPR,FPV,PRV}.\nThese two parameters are helpful to the classification of $\\mathbb{Z}_2\\mathbb{Z}_4$-linear codes.\nTherefore, we try to generalize them to the more general case over $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$.\n\n\\par\nThe paper is organized as follows.\nIn Section \\ref{sec:2}, we give some properties about $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-additive codes and $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-linear codes.\nIn Section \\ref{sec:3}, we find all values of the rank for $\\mathbb{Z}_3\\mathbb{Z}_{9}$-linear codes, and we construct a $\\mathbb{Z}_3\\mathbb{Z}_{9}$-linear code for each value.\nIn Section \\ref{sec:4}, we determine all values of the dimension of the kernel for $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-linear codes.\nWe also construct all $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-linear codes for the values of the dimension of the kernel.\nIn Section \\ref{sec:5}, pairs of rank and the dimension of kernel of $\\mathbb{Z}_3\\mathbb{Z}_{9}$-additive codes are studied.\nFor each fixed value of the dimension of the kernel, the range of rank is given.\nMoreover, the construction method of the $\\mathbb{Z}_3\\mathbb{Z}_{9}$-linear codes with pairs of rank and the dimension of the kernel is provided.\nIn Section \\ref{sec:6}, we conclude the paper.\n\n\\section{Preliminaries} \\label{sec:2}\n\nLet $\\mathcal{C}$ be a $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-additive code. Then $\\mathcal{C}$ is isomorphic to an abelian structure $\\mathbb{Z}_p^\\gamma \\times \\mathbb{Z}_{p^2}^\\delta$ since it is a subgroup of $\\mathbb{Z}_p^\\alpha \\times \\mathbb{Z}_{p^2}^\\beta$.\nThe \\emph{order} of a codeword $\\mathbf{c}$ means the minimal positive integer $a$ such that $a\\cdot\\mathbf{c}=\\mathbf{0}$.\nTherefore, the size of $\\mathcal{C}$ is $p^{\\gamma+2\\delta}$, and the number of codewords of order $p$ in $\\mathcal{C}$ is $p^{\\gamma+\\delta}$.\n\nLet $X$ be the set of $\\mathbb{Z}_p$ coordinate positions, and $Y$ be the set of $\\mathbb{Z}_{p^2}$ coordinate positions, so $|X|=\\alpha$ and $|Y|=\\beta$.\nIn general, it is notice that the first $\\alpha$ positions corresponds to the set $X$ and the last $\\beta$ positions corresponds to the set $Y$.\nWe denote $\\mathcal{C}_X$ and $\\mathcal{C}_Y$ are the punctured codes of $\\mathcal{C}$ by deleting the coordinates outside $X$ and $Y$, respectively.\nLet $\\mathcal{C}_p$ be the subcode consisting of all codewords of order $p$ in $\\mathcal{C}$. 
Let $\\kappa$ be the dimension of the linear code $(\\mathcal{C}_p)_X$ over $\\mathbb{Z}_p$.\nIf $\\alpha=0$, then $\\kappa=0$.\nConsidering all these parameters, we will say that $\\mathcal{C}$ or $C=\\Phi(\\mathcal{C})$ is of type $(\\alpha, \\beta; \\gamma, \\delta; \\kappa)$.\n\\par\nFor a $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-additive code, every codeword $\\mathbf{c}$ can be uniquely expressible in the form\n$$\\mathbf{c}=\\sum_{i=1}^\\gamma \\lambda_i\\mathbf{u}_i+\\sum_{j=1}^\\delta \\nu_j\\mathbf{v}_j,$$\nwhere $\\lambda_i \\in \\mathbb{Z}_p$ for $1\\leq i \\leq \\gamma$, $\\nu_j \\in \\mathbb{Z}_{p^2}$ for $1 \\leq j \\leq \\delta$ and $\\mathbf{u}_i$, $\\mathbf{v}_j$ are vectors in $\\mathbb{Z}_p^\\alpha \\times \\mathbb{Z}_{p^2}^\\beta$ of order $p$ and $p^2$, respectively.\nThen, we get the generator matrix $\\mathcal{G}$ for the code $\\mathcal{C}$ by vectors $\\mathbf{u}_i$ and $\\mathbf{v}_j$.\n$$\\mathcal{G}=\\left( \\begin{array}{c}\\mathbf{u}_1\\\\ \\vdots \\\\\\mathbf{u}_\\gamma \\\\ \\mathbf{v}_1 \\\\\\vdots \\\\\\mathbf{v}_\\delta \\end{array}\\right)=\\left( \\begin{array}{c|c}B_1 & pB_3 \\\\\n \\hline B_2 & Q\\end{array}\\right),$$\nwhere $B_1$, $B_2$ and $B_3$ are matrices over $\\mathbb{Z}_p$ of size $\\gamma \\times \\alpha, \\delta \\times \\alpha$ and $\\gamma \\times \\beta$, respectively; and $Q$ is a matrix over $\\mathbb{Z}_{p^2}$ of size $\\delta\\times \\beta$.\n\nRecall that in coding theory, two linear codes $C_1$ and $C_2$ of length $n$ are \\emph{permutation equivalent}\nif there exists a coordinate permutation $\\pi$ such that $C_2=\\{\\pi(c)|c\\in C_1\\}$.\nThe permutation equivalent of $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-additive codes can be defined similarly.\nThen we have the following theorem.\n\n\\begin{theorem}{\\rm\\cite{AS15}} \\label{th:1}\nLet $\\mathcal{C}$ be a $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-additive code of type $(\\alpha,\\beta;\\gamma,\\delta;\\kappa)$. 
Then, $\\mathcal{C}$ is permutation equivalent to a $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-additive code $\\mathcal{C}^\\prime$ with generator matrix of the form:\n$$\\mathcal{G}_S=\\left(\\begin{array}{cc|ccc}I_\\kappa & T' & pT_2 & \\mathbf{0} & \\mathbf{0}\\\\\n\\mathbf{0} & \\mathbf{0} & pT_1 & pI_{\\gamma-\\kappa} & \\mathbf{0}\\\\\n\\hline\n\\mathbf{0} & S' & S & R & I_\\delta\n\\end{array}\\right),$$\nwhere $I_\\delta$ is the identity matrix of size $\\delta\\times \\delta$; $T', S', T_1, T_2$ and $R$ are matrices over $\\mathbb{Z}_p$; and $S$ is a matrix over $\\mathbb{Z}_{p^2}$.\n\\end{theorem}\n\\begin{proof}\nStraightforward by setting $r=1,s=2$ of $\\mathbb{Z}_{p^r}\\mathbb{Z}_{p^s}$-linear codes in \\cite{AS15}.\n\\end{proof}\n\nFrom Theorem \\ref{th:1}, there is a $\\mathbb{Z}_p \\mathbb{Z}_{p^2}$-additive code $\\mathcal{C}$ of type $(\\alpha,\\beta;\\gamma,\\delta;\\kappa)$ if and only if\n\\begin{equation}\\label{eq:1}\n\\begin{array}{c}\n\\alpha,\\beta,\\gamma,\\delta,\\kappa\\geq 0, \\alpha+\\beta>0,\\\\\n0<\\delta+\\gamma \\leq \\beta+\\kappa \\text{\\rm{ and }} \\kappa\\leq \\rm{min}(\\alpha,\\gamma).\n\\end{array}\n\\end{equation}\n\nThe definition of the duality for $\\mathbb{Z}_p\\mathbb{Z}_{p^k}$-additive codes is shown in \\cite{SWD}, which includes $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-additive codes.\nThe \\emph{inner product} between $(\\mathbf{v}_1|\\mathbf{w}_1)$ and $(\\mathbf{v}_2|\\mathbf{w}_2)$ in $\\mathbb{Z}_p^\\alpha \\times \\mathbb{Z}_{p^2}^\\beta$ can be written as follows:\n$$\\langle(\\mathbf{v}_1|\\mathbf{w}_1),(\\mathbf{v}_2|\\mathbf{w}_2)\\rangle=p\\langle \\mathbf{v}_1,\\mathbf{v}_2\\rangle + \\langle \\mathbf{w}_1,\\mathbf{w}_2\\rangle \\in \\mathbb{Z}_{p^2}.$$\nNote that the result of the inner product $\\langle \\mathbf{v}_1,\\mathbf{v}_2\\rangle$ is from $\\mathbb{Z}_p$,\nand multiplication of its value by $p$ should be formally understood as the natural homomorphism from $\\mathbb{Z}_p$ into $\\mathbb{Z}_{p^2}$,\nthat is, $p\\langle \\mathbf{v}_1,\\mathbf{v}_2\\rangle \\in p\\mathbb{Z}_{p^2} \\subseteq \\mathbb{Z}_{p^2}$. 
More detail are suggested to see \\cite{SWD}.\n\nThe \\emph{dual code} $\\mathcal{C}^\\bot$ of a $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-additive code $\\mathcal{C}$ is defined in the standard way by\n\\begin{eqnarray*}\n \\mathcal{C}^\\bot=\\big\\{(\\mathbf{x}|\\mathbf{y})\n {}\\in \\mathbb{Z}_p^\\alpha \\times \\mathbb{Z}_{p^2}^\\beta:\n {\n \\langle(\\mathbf{x}|\\mathbf{y}),(\\mathbf{v}|\\mathbf{w})\\rangle=0 ~\\rm {for ~all}~ (\\mathbf{v}|\\mathbf{w})\\in \\mathcal{C}\\big\\}.}\n\\end{eqnarray*}\nReadily, the dual code $\\mathcal{C}^\\bot$ is also a $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-additive code \\cite{AS15}.\nIn particular, it is shown that the type of the code $\\mathcal{C}^\\bot$ is $(\\alpha, \\beta; \\bar{\\gamma}, \\bar{\\delta}; \\bar{\\kappa})$, where\n$$\\begin{array}{l}\\bar{\\gamma}=\\alpha+\\gamma-2\\kappa, \\\\\n\\bar{\\delta}=\\beta-\\gamma-\\delta+\\kappa, \\\\\n\\bar{\\kappa}=\\alpha-\\kappa.\n\\end{array}\n$$\nThe code $\\Phi(\\mathcal{C}^\\bot)$ is denoted by $C_\\bot$ and called the \\emph{$\\mathbb{Z}_p \\mathbb{Z}_{p^2}$-dual code} of $C$.\n\n\\begin{lemma}\\label{l:2}\nFor all $\\mathbf{u},\\mathbf{v}\\in \\mathbb{Z}_p^\\alpha \\times \\mathbb{Z}_{p^2}^\\beta$,\n$\\mathbf{u}=(u_1,\\ldots,u_{\\alpha+\\beta})$,\n$\\mathbf{v}=(v_1,\\ldots,v_{\\alpha+\\beta})$,\nwe have\n$$\n\\Phi(\\mathbf{u}+\\mathbf{v})=\\Phi(\\mathbf{u})+\\Phi(\\mathbf{v})+\\Phi(pP(\\mathbf{u},\\mathbf{v})),\n$$\nwhere $P(\\mathbf{u},\\mathbf{v})=(0,\\ldots,0,P(u_{\\alpha+1},v_{\\alpha+1}),\\ldots,P(u_{\\alpha+\\beta},v_{\\alpha+\\beta}))$ and\n$$\nP(u_i,v_i)=\nP(u'_i,v'_i)=\n\\begin{cases}\n 1 &\n \\mbox{if $u'_i+v'_i \\ge p$}\n \\\\ 0 &\\mbox{otherwise},\n \\end{cases}\n\\qquad u_i=u''_i p + u'_i, \\quad v_i=v''_i p + v'_i.\n$$\n\\end{lemma}\n\\begin{proof}\nIt is sufficient to prove the claim for one\n$\\mathbb{Z}_{p^2}$ coordinate.\nAssume\n $w_i\n= u_i + v_i$ (the addition is modulo $p^2$),\nwhere\n$w_i=w''_i p + w'_i$,\n$u_i=u''_i p + u'_i$, and\n$v_i=v''_i p + v'_i$.\nSince\n$w_i\n= u_i + v_i = (u_i''+v_i'')p + (u_i'+v_i') \\bmod p^2 $,\nwe have $w'_i = u'_i+v'_i$ and $w''_i = u''_i+v''_i \\bmod p$\nif $u'_i+v'_i < p$. If $u'_i+v'_i \\ge p$,\nthe formula is different:\n$w'_i = u'_i+v'_i \\bmod p$ and $w''_i = u''_i+v''_i + 1 \\bmod p$.\nUtilizing the definition of $P$, we get\n\\begin{equation}\\label{eq:'''}\nw'_i = u'_i+v'_i \\bmod\\ p \\qquad\\mbox{and}\\quad\nw''_i = u''_i+v''_i+P(u'_i,v'_i) \\bmod\\ p.\n\\end{equation}\nNow, from the definition of $\\phi$ and \\eqref{eq:'''}, it is straightforward\n$ \\phi( pP(u'_i,v'_i)) = P(u'_i,v'_i) (1,1,\\ldots,1),$\nand we find\n\\begin{multline}\\label{eq:phiu+v}\n \\phi(u_i + v_i) =\n ( u''_i+v''_i+P(u'_i,v'_i) ) (1,1,\\ldots,1)\n+ (u'_i+v'_i) (0,1,\\ldots,p-1)\\\\\n= \\phi(u_i) + \\phi(v_i) + \\phi( pP(u'_i,v'_i)).\n\\end{multline}\n Applying \\eqref{eq:phiu+v} to each coordinate from\n $\\alpha+1$ to $\\alpha+\\beta$ completes the proof.\n\\end{proof}\n\\begin{remark}\n The function $P$ can be treated as\n a $\\{0,1,\\ldots,p-1\\}$-valued function in\n two $\\{0,1,\\ldots,p-1\\}$-valued arguments\n $u'_i$, $v'_i$.\n As any such function, it can be represented\n as a polynomial of degree at most $p-1$ in each variable. For example $P(u'_i,v'_i) = u'_i v'_i$\n for $p=2$ and\n $P(u'_i,v'_i) = 2u'_i v'_i(1 + u'_i + v'_i) $\n for $p=3$. 
We also note that substituting\n $u_i$ and $v_i$ instead of $u'_i$ and $v'_i$\n in this polynomial does not change the value\n of $p P(u_i,v_i)$; so, for example,\n for $p=3$, it is safe to write\n $P(u_i,v_i) = 2u_i v_i(1 + u_i + v_i) $.\n For the rigor of the paper, we still use a new function $P'(\\mathbf{u},\\mathbf{v})$,\n where $P(\\mathbf{u},\\mathbf{v})=(p-1)P'(\\mathbf{u},\\mathbf{v})$.\n Note that $\\Phi(pP(\\mathbf{u},\\mathbf{v}))=(p-1)\\Phi(pP'(\\mathbf{u},\\mathbf{v}))$.\n\\end{remark}\n\n\\begin{lemma}\\label{l:4}\nLet $\\mathcal{C}$ be a $\\mathbb{Z}_p \\mathbb{Z}_{p^2}$-additive code. The $\\mathbb{Z}_p \\mathbb{Z}_{p^2}$-linear code $C=\\Phi(\\mathcal{C})$ is linear if and only if\n$pP'(\\mathbf{u},\\mathbf{v})\\in \\mathcal{C}$ for all $\\mathbf{u},\\mathbf{v}\\in \\mathcal{C}$.\n\\end{lemma}\n\n\\begin{proof}\nAssume that $C$ is linear. Because $\\mathcal{C}$ is additive, for any $\\mathbf{u},\\mathbf{v}\\in \\mathcal{C}$, we have $\\mathbf{u}+\\mathbf{v}\\in \\mathcal{C}$.\nThen, $\\Phi(\\mathbf{u}),\\Phi(\\mathbf{v}),\\Phi(\\mathbf{u+v})\\in C$.\nSince $C$ is linear, $\\Phi(pP'(\\mathbf{u},\\mathbf{v}))\\in C$ by Lemma \\ref{l:2}.\nTherefore, $pP'(\\mathbf{u},\\mathbf{v})\\in \\mathcal{C}$. Conversely, assume that $pP'(\\mathbf{u},\\mathbf{v})\\in \\mathcal{C}$ for all $\\mathbf{u},\\mathbf{v}\\in \\mathcal{C}$. Let $\\mathbf{x},\\mathbf{y}\\in C$, then there are $\\mathbf{u'},\\mathbf{v'}\\in \\mathcal{C}$ such that $\\mathbf{x}=\\phi(\\mathbf{u'}),\\mathbf{y}=\\Phi(\\mathbf{v'})$.\nThus $pP'(\\mathbf{u'},\\mathbf{v'})\\in \\mathcal{C}$.\nSince $\\mathcal{C}$ is additive, $\\mathbf{u'}+\\mathbf{v'}+pP'(\\mathbf{u'},\\mathbf{v'})\\in \\mathcal{C}$.\nTherefore, $\\Phi(\\mathbf{u'}+\\mathbf{v'}+pP'(\\mathbf{u'},\\mathbf{v'}))\\in C$.\nThen $$\\begin{aligned}\n\\Phi(\\mathbf{u'}+\\mathbf{v'}+pP'(\\mathbf{u'},\\mathbf{v'}))\n&=\\Phi(\\mathbf{u'}+\\mathbf{v'})+\\Phi(pP'(\\mathbf{u'},\\mathbf{v'}))\\\\\n&=\\Phi(\\mathbf{u'})+\\Phi(\\mathbf{v'})+\\Phi(pP(\\mathbf{u'},\\mathbf{v'}))+\\Phi(pP'(\\mathbf{u'},\\mathbf{v'}))\\\\\n&=\\Phi(\\mathbf{u'})+\\Phi(\\mathbf{v'})+(p-1)\\Phi(pP'(\\mathbf{u'},\\mathbf{v'}))+\\Phi(pP'(\\mathbf{u'},\\mathbf{v'}))\\\\\n&=\\Phi(\\mathbf{u'})+\\Phi(\\mathbf{v'}).\n\\end{aligned}$$\nHence, $\\Phi(\\mathbf{u'})+\\Phi(\\mathbf{v'})=\\mathbf{x}+\\mathbf{y} \\in C$.\n\\end{proof}\n\n\\section{Rank of $\\mathbb{Z}_3 \\mathbb{Z}_{9}$-additive codes}\\label{sec:3}\n\nLet $\\mathcal{C}$ be a $\\mathbb{Z}_p \\mathbb{Z}_{p^2}$-additive code of type $(\\alpha, \\beta;\\gamma,\\delta;\\kappa)$ and $C=\\Phi(\\mathcal{C})$ of length $\\alpha+p\\beta$.\nWe have known the definition of $rank(C)$ before. 
For general prime $p$, it is very difficult to discuss all values of $r=rank(C)$ clearly.\nTherefore, in this section, we only consider $p=3$, then give the range of values $r=rank(C)$ and prove that there is a $\\mathbb{Z}_3 \\mathbb{Z}_{9}$-linear code of type $(\\alpha, \\beta;\\gamma,\\delta;\\kappa)$ with $r=rank(\\mathcal{C})$ for any positive integer $r$.\n\nIf $p=3$, for all $\\mathbf{u},\\mathbf{v}\\in \\mathbb{Z}_3^\\alpha \\times \\mathbb{Z}_{9}^\\beta$,\nit is easy to check that $\\Phi(\\mathbf{u}+\\mathbf{v})=\\Phi(\\mathbf{u})+\\Phi(\\mathbf{v})+2\\Phi(3(\\mathbf{u}\\ast \\mathbf{v}+\\mathbf{u}\\ast \\mathbf{u}\\ast \\mathbf{v}+\\mathbf{u}\\ast \\mathbf{v}\\ast \\mathbf{v}))$ by Lemma \\ref{l:2}, where $\\ast$ denotes the componentwise multiplication.\nThen we have the following theorem.\n\n\\begin{theorem}\\label{th:5}\nLet $\\mathcal{C}$ be a $\\mathbb{Z}_3\\mathbb{Z}_{9}$-additive code of type $(\\alpha, \\beta;\\gamma,\\delta;\\kappa)$ which satisfies (\\ref{eq:1}),\n$C=\\Phi(\\mathcal{C})$ be the corresponding $\\mathbb{Z}_3\\mathbb{Z}_{9}$-linear code of length $n=\\alpha+3\\beta$.\n\n\\begin{itemize}\n\\item[(i)] Let $\\mathcal{G}$ be the generator matrix of $\\mathcal{C}$,\nand let $\\{\\mathbf{u}_i\\}_{i=1}^\\gamma$, $\\{\\mathbf{v}_j\\}_{j=1}^\\delta$ be the rows of order $3$ and $9$ in $\\mathcal{G}$, respectively.\nThen $\\langle C\\rangle$ is generated by $\\{\\Phi(\\mathbf{u}_i)\\}_{i=1}^\\gamma$, $\\{\\Phi(\\mathbf{v}_j)\\}_{j=1}^\\delta$, $\\{\\Phi(3\\mathbf{v}_k*\\mathbf{v}_l)\\}_{1\\leq l\\leq k\\leq \\delta}$\nand $\\{\\Phi(3\\mathbf{v}_x*\\mathbf{v}_y*\\mathbf{v}_z)\\}_{1\\leq x\\leq y\\leq z\\leq\\delta}$.\n\\item[$(ii)$] $rank(C)\\in \\bigg\\{\\gamma+2\\delta,...,\\rm{min}\\bigg(\\beta+\\gamma+\\kappa,\\gamma+\\delta+\\dbinom{\\delta+1}{2}+\\dbinom{\\delta+2}{3}\\bigg)\\bigg\\}$.\nLet $rank(C)=r=\\gamma+2\\delta+\\overline{r}$. 
Then $\\overline{r}\\in\\bigg\\{0,1,...,\\rm{min}\\bigg(\\beta-(\\gamma-\\kappa)-\\delta,\\dbinom{\\delta+1}{2}+\\dbinom{\\delta+2}{3}-\\delta\\bigg)\\bigg\\}$.\n\\item[$(iii)$] The linear code $\\langle C\\rangle$ over $\\mathbb{Z}_3$ is $\\mathbb{Z}_3 \\mathbb{Z}_{9}$-linear.\n\\end{itemize}\n\\end{theorem}\n\\begin{proof}\n$(i)$\nLet $\\mathbf{c}\\in \\mathcal{C}$, without loss of generality, $\\mathbf{c}$ can be expressed as $\\mathbf{c}=\\Sigma_{j=1}^\\zeta \\mathbf{v}_j+\\omega$,\nwhere $\\zeta\\leq\\delta$ and $\\omega$ is a codeword in $\\mathcal{C}$ of order $3$.\nBy Lemma \\ref{l:2}, we have $\\Phi(\\mathbf{c})=\\Phi(\\Sigma_{j=1}^\\zeta \\mathbf{v}_j)+\\Phi(\\omega)$,\nwhere $\\Phi(\\omega)$ is a linear combination of $\\{\\Phi(\\mathbf{u}_i)\\}_{i=1}^\\gamma$, $\\{\\Phi(3\\mathbf{v}_j)\\}_{j=1}^\\delta$.\nNote that\n$\\Phi(\\Sigma_{j=1}^\\zeta \\mathbf{v}_j)=\\Sigma_{j=1}^\\zeta\\Phi(\\mathbf{v}_j)+\\Sigma_{1\\leq l \\leq k\\leq \\zeta}(\\tilde{a}\\Phi(3(\\mathbf{v}_k\\ast v_l))+\\Sigma_{1\\leq x \\leq y\\leq z\\leq \\zeta}(\\tilde{b}\\Phi(3(\\mathbf{v}_x\\ast \\mathbf{v}_y\\ast \\mathbf{v}_z))$, $\\tilde{a},\\tilde{b}\\in \\mathbb{Z}_3$.\nSince $\\Phi(3\\mathbf{v}_j)=\\Phi(3\\mathbf{v}_j\\ast\\mathbf{v}_j\\ast\\mathbf{v}_j)$ by Lemma \\ref{l:2},\n$\\Phi(\\mathbf{c})$ is generated by $\\{\\Phi(\\mathbf{u}_i)\\}_{i=1}^\\gamma$, $\\{\\Phi(\\mathbf{v}_j)\\}_{j=1}^\\delta$,\n$\\{\\Phi(3\\mathbf{v}_k*\\mathbf{v}_l)\\}_{1\\leq l\\leq k\\leq \\delta}$\nand $\\{\\Phi(3\\mathbf{v}_x*\\mathbf{v}_y*\\mathbf{v}_z)\\}_{1\\leq x\\leq y\\leq z\\leq\\delta}$.\n\n$(ii)$ The bound $\\gamma+\\delta+\\dbinom{\\delta+1}{2}+\\dbinom{\\delta+2}{3}$ is straightforward by (i),\nand the bound $\\beta+\\gamma+\\kappa$ is trivial by the form of $\\mathcal{G}_S$ in Theorem \\ref{th:1}.\n\n$(iii)$ Let $\\mathcal{S}_\\mathcal{C}$ be a $\\mathbb{Z}_3\\mathbb{Z}_9$-additive code generated by\n$\\{\\mathbf{u}_i\\}_{i=1}^\\gamma$, $\\{\\mathbf{v}_j\\}_{j=1}^\\delta$, $\\{3\\mathbf{v}_k*\\mathbf{v}_l\\}_{1\\leq l\\leq k\\leq \\delta}$\nand $\\{3\\mathbf{v}_x*\\mathbf{v}_y*\\mathbf{v}_z\\}_{1\\leq x\\leq y\\leq z\\leq\\delta}$,\nthen $\\mathcal{S}_\\mathcal{C}$ is of type $(\\alpha,\\beta;\\gamma+\\overline{r},\\delta;\\kappa)$ and $\\Phi(\\mathcal{S}_\\mathcal{C})=\\langle C\\rangle$.\n\\end{proof}\nFor convenience, we use $\\mathbf{v}_j^2$ to denote $\\mathbf{v}_j*\\mathbf{v}_j$, and write the vectors in Theorem \\ref{th:5} (i) in the following way:\n$\\{\\Phi(\\mathbf{u}_i)\\}_{i=1}^\\gamma$, $\\{\\Phi(\\mathbf{v}_j)\\}_{j=1}^\\delta$, $\\{\\Phi(3\\mathbf{v}_j)\\}_{j=1}^\\delta$, $\\{\\Phi(3\\mathbf{v}_j^2)\\}_{j=1}^\\delta$, $\\{\\Phi(3\\mathbf{v}_k*\\mathbf{v}_l)\\}_{1\\leq l< k\\leq \\delta}$, $\\{\\Phi(3\\mathbf{v}_x^2*\\mathbf{v}_y)\\}_{1\\leq x< y\\leq \\delta}$, $\\{\\Phi(3\\mathbf{v}_x*\\mathbf{v}_y^2)\\}_{1\\leq x< y\\leq \\delta}$, $\\{\\Phi(3\\mathbf{v}_x*\\mathbf{v}_y*\\mathbf{v}_z)\\}_{1\\leq x< y0$, consider the normalized state\n\\begin{equation}\n\\rho_{U\\cup X \\cup W} = \\frac{4}{(1-\\epsilon)^{\\frac 1n} + 3(\\epsilon\/3)^{\\frac 1n}}\n\\rho_X \\pns{n} \\Big(P^-_{U\\cup X^L} \\otimes P^-_{W\\cup X^R}\\Big),\n\\end{equation}\nwhere\n\\begin{equation}\n\\rho_X = (1-\\epsilon) P^-_{X^L\\cup X^R} + \\frac{\\epsilon}{3} P^+_{X^L\\cup X^R}\n\\end{equation}\nand where $P_{A\\cup B}^\\pm$ denote the projector onto the symmetric and anti-symmetric subspaces of $\\mathcal{H}_A \\otimes \\mathcal{H}_B$. 
The conditional states are \n\\begin{align}\n\\rho_{U| X}^{(n)} &= \\frac{2}{\\sqrt{(1-\\epsilon)^{\\frac 1n} + 3(\\epsilon\/3)^{\\frac 1n}}} P^-_{U\\cup X^L} \\otimes I_{X^R} \\ \\mathrm{and} \\\\ \n\\rho_{W| X}^{(n)} &= \\frac{2}{\\sqrt{(1-\\epsilon)^{\\frac 1n} + 3(\\epsilon\/3)^{\\frac 1n}}} I_{X^L} \\otimes P^-_{W\\cup X^R}.\n\\end{align} \nBy construction, condition \\eqref{Cond:CIo4} is easily verified $\\rho_{U\\cup X\\cup W} = \\rho_X \\pns{n} (\\rho_{U| X}^{(n)} \\rho_{W| X}^{(n)})$. In the limit $\\epsilon \\rightarrow 0$, the state $\\rho_{U\\cup X \\cup W} \\rightarrow P^-_{U\\cup W} \\otimes P^-_{X^L\\cup X^R}$, which has $S(U:W|X) = 2$. By continuity, we claim that for all $n<\\infty$, there exists an $\\epsilon > 0$ such that $\\rho_{U\\cup X\\cup W}$ is a density operator that does not saturate strong subadditivity. \n\\end{Exa}\n\nThe preceding example shows that some of the conditions given in eqs. (\\ref{Cond:CIo1}-\\ref{Cond:CIo4}) are not sufficient to imply quantum conditional independence on their own. Therefore, additional constraints need to be imposed in order to obtain converse results. Two alternative approaches are considered here, one based on additional commutation conditions that hold for conditionally independent states and one based on the algebraic structure of such states. The approach based on commutation conditions is perhaps more elegant, but the algebraic conditions are also relevant because they are used in theorem \\ref{Graph:THCC} in \\S\\ref{Graph:HC} to provide a characterization result for quantum Markov Networks on trees. The following sequence of results provides the approach based on commutation conditions.\n\n\\begin{The}\n\\label{Cond:EqThe}\nFor a fixed $n$, if $\\rho_X^{-\\frac{1}{2n}}\\rho_{U\\cup X}^{\\frac{1}{2n}}$ and its adjoint commute with $\\rho_X^{-\\frac{1}{2n}}\\rho_{W \\cup X}^{\\frac{1}{2n}}$, then the conditions given in eqs. (\\ref{Cond:CIo1}-\\ref{Cond:CIo4}) are all equivalent.\n\\end{The}\n\\begin{proof}\nWe start by showing that $\\rho\\ns{n}_{U|W\\cup X} = \\rho\\ns{n}_{U|X}$ is equivalent to $\\rho\\ns{n}_{W|U \\cup X} = \\rho\\ns{n}_{W|X}$. The first of these can be written explicitly in terms of joint and reduced density operators as\n\\begin{equation}\n\\rho_{W \\cup X}^{-\\frac{1}{2n}} \\rho_{U \\cup W \\cup X}^{\\frac{1}{n}} \\rho_{W \\cup X}^{-\\frac{1}{2n}} = \\rho_{X}^{-\\frac{1}{2n}} \\rho_{U \\cup X}^{\\frac{1}{n}} \\rho_{X}^{-\\frac{1}{2n}}.\n\\end{equation}\nLeft and right multiplying by $\\rho_{W\\cup X}^{\\frac{1}{2n}}$ gives\n\\begin{equation}\n\\label{Cond:WOIF1}\n\\rho_{U\\cup W\\cup X}^{\\frac{1}{n}} = \\rho_{W\\cup X}^{\\frac{1}{2n}} \\rho_{X}^{-\\frac{1}{2n}} \\rho_{U \\cup X}^{\\frac{1}{n}} \\rho_{X}^{-\\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}}.\n\\end{equation}\nNow, define $T = \\rho_{W\\cup X}^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}}$ so that $\\rho_{U\\cup W\\cup X}^{\\frac{1}{n}} = TT^\\dagger$. In a similar fashion, $\\rho\\ns{n}_{W|U\\cup X} = \\rho_{W|X}$ can be shown to be equivalent to $\\rho_{U\\cup W\\cup X}^{\\frac{1}{n}} = T^\\dagger T$. 
\n\nNow,\n\\begin{align}\nT^\\dagger & = \\rho_{U\\cup X}^{\\frac{1}{2n}} \\rho_X^{- \\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}} \\\\\n& = \\rho_X^{\\frac{1}{2n}} \\rho_X^{- \\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}} \\rho_X^{- \\frac{1}{2n}} \\rho_{W \\cup X}^{\\frac{1}{2n}} \\\\\n& = \\rho_X^{\\frac{1}{2n}} \\rho_X^{- \\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}} \\rho_X^{- \\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}} \\label{Cond:CommUse1} \\\\\n& = \\rho_{W\\cup X}^{\\frac{1}{2n}} \\rho_X^{- \\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}} \\\\\n& = T,\n\\end{align}\nwhere the assumption that $\\rho_X^{- \\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}}$ commutes with $\\rho_X^{- \\frac{1}{2n}} \\rho_{W\\cup X}$ has been used to derive eq. \\eqref{Cond:CommUse1}. Hence, $T$ is Hermitian and the two conditions are equivalent.\n\nFor the remaining condition note that $\\rho\\ns{n}_{U\\cup W|X} = \\rho\\ns{n}_{U|X} \\rho\\ns{n}_{W|X}$ is equivalent to\n\\begin{align}\n\\rho_{U\\cup W\\cup X}^{\\frac{1}{n}} & = \\rho_{U\\cup X}^{\\frac{1}{n}} \\rho_X^{-\\frac{1}{n}} \\rho_{W\\cup X}^{\\frac{1}{n}} \\\\\n& = \\rho_{U\\cup X}^{\\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}} \n\\end{align}\nThe commutativity of $\\rho_{U\\cup X}^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}}$ and $\\rho_X^{-\\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}}$ then gives\n\\begin{align}\n\\rho_{U\\cup W\\cup X}^{\\frac{1}{n}} & = \\rho_{U\\cup X}^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}} \\\\\n& = \\rho_X^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}},\n\\end{align}\nand the commutativity of $ \\rho_X^{-\\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}}$ and $\\rho_X^{-\\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}}$ gives\n\\begin{align}\n\\rho_{U\\cup W\\cup X}^{\\frac{1}{n}} & = \\rho_X^{+\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{W \\cup X}^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}} \\\\\n& = \\rho_{W\\cup X}^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{n}} \\rho_X^{-\\frac{1}{n}} \\rho_{W\\cup X}^{\\frac{1}{2n}},\n\\end{align}\nwhich is equivalent to $\\rho\\ns{n}_{U|W\\cup X} = \\rho\\ns{n}_{U|X}$.\n\\end{proof}\n\nTheorem \\ref{Cond:EqThe} relates the conditions eqs. 
(\\ref{Cond:CIo1}-\\ref{Cond:CIo3}) for a fixed value of $n$, but the conditions for different values of $n$ can also be related via the following corollary.\n\n\\begin{Cor}\n\\label{Cond:InductCor}\nFor fixed $n$, if $\\rho_X^{-\\frac{1}{2n}}\\rho_{U\\cup X}^{\\frac{1}{2n}}$ and its adjoint commute with $\\rho_X^{-\\frac{1}{2n}}\\rho_{W\\cup X}^{\\frac{1}{2n}}$, then $\\rho\\ns{n}_{U|W\\cup X} = \\rho\\ns{n}_{U|X}$ implies $\\rho\\ns{2n}_{U\\cup W|X} = \\rho\\ns{2n}_{U|X}\\rho\\ns{2n}_{W|X}$.\n\\end{Cor}\n\\begin{proof}\nIn the preceding proof it was shown that $\\rho\\ns{n}_{U|W\\cup X} = \\rho\\ns{n}_{U|X}$ is equivalent to $\\rho_{U\\cup W \\cup X}^{\\frac{1}{n}} = TT^{\\dagger}$, where $T = \\rho_{W \\cup X}^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}}$, and that the commutativity conditions imply that $T$ is Hermitian. Therefore, $\\rho_{U \\cup W\\cup X}^{\\frac{1}{n}} = \\left ( T^{\\dagger} \\right )^2$, which implies $\\rho_{U\\cup W\\cup X}^{\\frac{1}{2n}} = T^{\\dagger} = \\rho_{U\\cup X}^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}}$. The latter is straightforwardly equivalent to $\\rho\\ns{2n}_{U\\cup W|X} = \\rho\\ns{2n}_{U|X}\\rho\\ns{2n}_{W|X}$\n\\end{proof}\n\nPutting these results together leads to a set necessary and sufficient condition for conditional independence.\n\n\\begin{Cor}\nIf $\\rho_X^{-\\frac{1}{2n}}\\rho_{U\\cup X}^{\\frac{1}{2n}}$ and its adjoint commute with $\\rho_X^{-\\frac{1}{2n}}\\rho_{W\\cup X}^{\\frac{1}{2n}}$ for every $n$, then any of the conditions given in eqs. (\\ref{Cond:CIo1}-\\ref{Cond:CIo3}) imply that $S(U:W|X) = 0$.\n\\end{Cor}\n\\begin{proof}\nUnder these commutativity conditions, theorem \\ref{Cond:EqThe} implies that eqs. (\\ref{Cond:CIo1}-\\ref{Cond:CIo3}) are equivalent for any fixed $m$ and corollary \\ref{Cond:InductCor} shows that $\\rho\\ns{2m}_{U\\cup W|X} = \\rho\\ns{2m}_{U|X}\\rho\\ns{2m}_{W|X}$ can be derived from $\\rho\\ns{m}_{U|W \\cup X} = \\rho\\ns{m}_{U|X}$. By applying theorem \\ref{Cond:EqThe} with $n=2m$, it follows that $\\rho\\ns{m}_{U|W\\cup X} = \\rho\\ns{m}_{U|X}$ implies $\\rho\\ns{2m}_{U|W\\cup X} = \\rho\\ns{2m}_{U|X}$. By induction, this implies that $\\rho\\ns{2^s m}_{U|W\\cup X} = \\rho\\ns{2^s m}_{U|X}$ for any positive integer $s$. Taking the limit $s \\rightarrow \\infty$ gives $\\rho\\ensuremath{^{(\\infty)}}_{U|W\\cup X} = \\rho\\ensuremath{^{(\\infty)}}_{U|X}$, which implies $S(U:W|X) = 0$ by theorem \\ref{Cond:RuskaiThe}.\n\\end{proof}\n\nWe now turn to the algebraic approach to proving converse results. Firstly, note that eq.~\\eqref{Cond:CIo3} implies that $\\rho\\ns{n}_{U|X}$ and $\\rho\\ns{n}_{W|X}$ commute, since $\\rho\\ns{n}_{U \\cup W|X}$ is Hermitian. It can be shown that whenever two operators $A_{U\\cup X}\\otimes I_W$ and $I_U\\otimes B_{W \\cup X}$ commute there exists a decomposition of $\\mathcal{H}_X$ as in eq.~\\eqref{eq:decomp_X} such that\n\\begin{align}\nA_{U X} &= \\sum^d_{j = 1} a_{U X_j^L} \\otimes I_{X^R_j} \\ \\mathrm{and} \\label{Cond:Decomp1} \\\\\nB_{W X} &= \\sum^d_{j = 1} I_{X^L_j}\\otimes b_{ X_j^RW}, \\label{Cond:Decomp2}\n\\end{align}\nso eq. \\eqref{Cond:CIo3} implies that $\\rho\\ns{n}_{U|X}$ and $\\rho\\ns{n}_{W|X}$ have this structure, as would be expected if the joint state is conditionally independent and hence satisfies eq.~\\eqref{Cond:Hayden}. However, eq. 
\\eqref{Cond:Hayden} implies an additional constraint that has not been used so far, namely that $\\rho_X$ also respects the same tensor product structure on $\\mathcal{H}_X$, i.e. $\\rho_X$ is of the form\n\\begin{equation}\n\\rho_X = \\sum_{j = 1}^d p_j \\sigma_{X_j^L} \\otimes \\tau_{X_j^R}.\n\\end{equation}\nMore generally, we will say that an operator $C_X$ is {\\em decomposable with respect to} the pair of commuting operators $A_{U\\cup X}$ and $B_{W \\cup X}$ if it has the same algebraic structure on $\\mathcal{H}_X$, i.e. if\n\\begin{equation}\nC_{X} = \\sum^d_{j = 1} c_{X_j^R} \\otimes c_{X_j^R}.\n\\label{def:decomposable}\n\\end{equation}\nfor some factorization of $\\mathcal{H}_X$, such that eqs. \\eqref{Cond:Decomp1} and \\eqref{Cond:Decomp2} hold.\nImposing the commutativity of $\\rho\\ns{n}_{U|X}$ and $\\rho\\ns{n}_{W|X}$, along with the decomposability of $\\rho_X$ with respect to $\\rho\\ns{n}_{U|X}$ and $\\rho\\ns{n}_{W|X}$ as additional constraints is enough to straightforwardly show that any of eqs.~(\\ref{Cond:CIo1}-\\ref{Cond:CIo4}) imply conditional independence for all values of $n$. \n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Graphical Models}\n\n\\label{Graph}\n\nIn this section, quantum conditional independence is used to define quantum Graphical Models that generalize their classical counterparts. The main focus is on quantum Markov Networks and $n$-Bifactor Networks, since these allow for the simplest formulation of the Belief Propagation algorithms to be described in \\S\\ref{QBP}. \\S\\ref{Graph:MN} reviews the definition of classical Markov Networks and the Hammersley-Clifford theorem, which gives an explicit representation for the probability distributions associated with classical Markov Networks. Motivated by this, \\S\\ref{Graph:QGS} defines the class of quantum $n$-Bifactor Networks, which are the most general class of networks on which our Belief Propagation algorithms operate. \\S\\ref{Graph:DMG} reviews the theory of dependency models and graphoids, which is useful for proving theorems about Graphical Models, and shows that quantum conditional independence can be used to define a graphoid. \\S\\ref{Graph:HC} defines quantum Markov Networks and gives some partial characterization results for the associated quantum states, along similar lines to the Hammersley-Clifford theorem. Most of these definitions and characterization results are summarized on Fig. \\ref{fig:worldview}.\n\nThe remaining two subsections briefly outline two other quantum Graphical Models: Quantum Factor Graphs in \\S\\ref{Graph:FG} and Quantum Bayesian Networks in \\S\\ref{Graph:BN}. These structures are equivalent from the point of view of the efficiency of Belief Propagation algorithms, since it is always possible to convert them into $n$-Bifactor Networks and vice-versa with only a linear overhead in graph size. 
An explicit method for converting a quantum factor graph into a quantum $1$-Bifactor Network is given because factor graphs are used in the application to quantum error correction developed in \\S\\ref{App:QEC}.\n\n\\subsection{Classical Markov Networks}\n\n\\label{Graph:MN}\n\n\\begin{figure}\n\\center\\includegraphics{MN}\n\\caption{The equalities $H(a:d\\cup e\\cup f | b \\cup c) = 0$, $H(f:a\\cup b \\cup c \\cup d | e) = 0$, and $H(a\\cup b : e \\cup f | c \\cup d) = 0$ are examples of constraints that are satisfied when $(G,P(V))$ is a Markov Network.}\n\\label{fig:MN}\n\\end{figure}\n\nLet $G = (V,E)$ be an undirected graph and suppose that each vertex $v \\in V$ is associated with a random variable, also denoted $v$. Let $P(V)$ be the joint distribution of the variables. $(G,P(V))$ is a \\emph{Classical Markov Network} if for all $U \\subseteq V$, $H(U:V - (n(U) \\cup U)|n(U)) = 0$, where $n(U)$ is the set of nearest neighbors of $U$ in $G$ (see Fig.~\\ref{fig:MN}). Further, if $P(V)$ is strictly positive for all possible valuations of the variables, then $(G,P(V))$ is called a \\emph{Positive Classical Markov Network}. For such positive networks there is a powerful characterization theorem \\cite{Gri73a,Bes74a}.\n\\begin{The}[Hammersley-Clifford \\cite{HC71a}]\n\\label{thm:HC}\n$(G,P(V))$ is a positive classical Markov network iff it can be written as\n\\begin{equation}\n\\label{Graph:HCE}\nP(V) = \\frac{1}{Z} \\prod_{C \\in \\mathfrak{C}} \\psi(C),\n\\end{equation}\nwhere $\\mathfrak{C}$ is the set of cliques of $G$, $\\psi(C)$ is a positive function defined on the random variables in $C$ and $Z$ is a normalization factor.\n\\end{The}\nA set of vertices $C \\subseteq V$ in a graph is a clique if $\\forall u,v \\in C$, $u \\neq v \\rightarrow (u,v) \\in E$, i.e. every vertex in $C$ is connected to every other vertex in $C$ by an edge. Note that the decomposition in eq.~\\eqref{Graph:HCE} is generally not unique, even up to normalization. A distribution of the form of eq.~\\eqref{Graph:HCE} is said to factorize with respect to the graph $G$.\n\nMarkov chains are a special case of Markov Networks in which the graph is a chain. These are included in the slightly more general class of networks where the graph is a tree. For trees the only cliques are the individual vertices and the pairs of vertices that are connected by an edge, and the associated probability distributions have a representation in terms of marginal and mutual probability distributions of the form\n\\begin{equation}\n\\label{Graph:HCMut}\nP(V) = \\prod_{v \\in V} P(v) \\prod_{(u,v) \\in E} P(u:v),\n\\end{equation}\nwhich generalizes the decomposition for three variable Markov chain given in eq. \\eqref{Cond:MCMDecomp}. For more general networks wherein the graph has cycles, there is no Hammersley-Clifford decomposition in which the functions $\\psi(C)$ are marginal and mutual probability distributions.\n\nThe Hammersley-Clifford decomposition can be put in a form more familiar to physicists by introducing a positive constant $\\beta$ and defining the functions $H(C) = - \\beta^{-1} \\log \\psi(C)$, which are always well defined since $\\psi(C)$ is positive. Then eq.~\\eqref{Graph:HCE} can be written as\n\\begin{equation}\nP(V) = \\frac{1}{Z} \\exp \\left ( -\\beta \\sum_{C \\in \\mathfrak{C}} H(C) \\right ),\n\\end{equation}\nwhich is a Gibbs state for a system with a Hamiltonian $\\sum_{C \\in \\mathfrak{C}} H(C)$ and partition function $Z$. 
This is a generalization of the lattice models studied in statistical physics to arbitrary graphs. Indeed, if $G$ is a lattice, then, as for trees, the only cliques are the individual vertices and pairs of vertices connected by an edge, so for lattices the edges represent local nearest-neighbor interactions. \n\nIn many applications, such as in statistical physics, the functions $\\psi(C)$ are often constants for cliques containing three or more vertices even in the case where the graph has cliques with more than two vertices. In this case, we again have that the only nontrivial functions are defined on the vertices and edges of the graph, so the state can be written as\n\\begin{equation}\n\\label{Graph:CGS}\nP(V) = \\frac{1}{Z} \\prod_{v \\in V} \\psi(v) \\prod_{(u,v) \\in E} \\psi(u:v).\n\\end{equation}\nHere, the edge functions are denoted $\\psi(u:v)$ because of the close parallel with eq.~\\eqref{Graph:HCMut}, but they are general positive functions rather than mutual distributions. We adopt the terminology \\emph{bifactor distribution} to describe distributions of the form of eq.~\\eqref{Graph:CGS} and \\emph{Bifactor Network} for the pair $(G,P(V))$. For example, the distribution associated with a local nearest-neighbor model on an arbitrary graph, such as the spin-glasses studied in statistical physics, would be a bifactor distribution. \n\n\\subsection{Quantum Bifactor Networks}\n\n\\label{Graph:QGS}\n\nA proper generalization of Markov Networks to quantum theory involves the replacement of random variables with quantum systems and the replacement of classical conditional independence with its quantum counterpart. This theory is developed in the following sections, but it is convenient to first introduce a class of states that parallels the classical bifactor distributions of eq.~\\eqref{Graph:CGS}.\n\nLet $G = (V,E)$ be a graph, let each vertex $v \\in V$ be associated to a quantum system with Hilbert space $\\mathcal{H}_v$. Let $\\mathcal{H}_V = \\bigotimes_{v \\in V} \\mathcal{H}_v$ and consider the class of states $\\rho_V$ that can be expressed as\n\\begin{equation}\n\\label{Graph:QGSEPre}\n\\rho_V = \\frac 1 Z \\left(\\bigotimes_{u\\in V} \\mu_u\\right) \\pns{n} \\left(\\left (\\pns{n} \\right )_{(v,w) \\in E} \\nu_{v:w}\\right),\n\\end{equation}\nwhere $Z$ is normalization constant, the $\\mu_u$'s are operators on $\\mathcal{H}_u$ and the $\\nu_{v:w} = \\nu_{w:v}$ are operators on $\\mathcal{H}_v\\otimes\\mathcal{H}_w$. As stated, this expression is ambiguous because the $\\pns{n}$ product is neither commutative or associative apart from in the limit $n \\rightarrow \\infty$. To avoid this ambiguity we impose the additional constraint that $[\\nu_{u:v},\\nu_{w:x}]=0$ for finite $n$, in which case the expression $\\left ( \\pns{n} \\right )_{(v,w) \\in E} \\nu_{v:w}$ reduces to $\\prod_{(v,w) \\in E} \\nu_{v:w}$. 
The state $\\rho_V$ is an \\emph{$n$-bifactor state} if it can be written as \n\\begin{equation}\n\\label{Graph:QGSE}\n\\rho_V = \\frac 1 Z \\left(\\bigotimes_{u\\in V} \\mu_u\\right) \\pns{n} \\left(\\prod_{(v,w) \\in E} \\nu_{v:w}\\right),\n\\end{equation}\nwith $[\\nu_{u:v},\\nu_{w:x}]=0$, and it is an \\emph{$\\infty$-bifactor state} if it can be written as\n\\begin{equation}\n\\label{Graph:QGSEInf}\n\\rho_V = \\frac 1 Z \\left(\\bigotimes_{u\\in V} \\mu_u\\right) \\ensuremath{\\odot} \\left(\\ensuremath{\\odot}_{(v,w) \\in E} \\nu_{v:w}\\right),\n\\end{equation}\nwith no commutativity constraint on the $\\nu_{v:w}$.\nThe pair $(G,\\rho_V)$ is referred to as a quantum \\emph{$n$-Bifactor Network}, or \\emph{$\\infty$-Bifactor Network}, respectively.\n\nIt turns out that not every quantum Bifactor Network is a quantum Markov Network, but the quantum generalizations of Belief Propagation algorithms to be developed in \\S\\ref{QBP} can be formulated for any Bifactor Network. Therefore, readers who are mainly interested in algorithms and applications rather than proofs can skip to \\S\\ref{QBP}, perhaps pausing to read \\S\\ref{Graph:FG} on the way in order to understand the application to quantum error correction. \n\nThe next goal is to formulate the theory of quantum Markov Networks and provide characterization theorems analogous to the Hammersley-Clifford theorem. In order to do so it is convenient to first introduce the theory of dependency models and graphoids, which is useful for proving theorems about Graphical Models.\n\n\\subsection{Dependency Models and Graphoids}\n\n\\label{Graph:DMG}\n\nGraphs and conditional independence relations share a number of important properties that are responsible for the structure of Graphical Models. These properties are also shared by a number of other mathematical structures and they can be abstracted into structures known as dependency models and graphoids, which were introduced by Gieger, Verma, and Pearl \\cite{VP90a, GVP90a}. Here, the theory is briefly reviewed and quantum conditional independence is shown to also give rise to a graphoid.\n\nA \\emph{dependency model} $M$ over a finite set $V$ is a tripartite relation over disjoint subsets of $V$. The statement that $(U,W,X) \\in M$ will be denoted $I(U,W|X)$, with a possible subscript on the $I$ to denote the type of dependency model. $I(U,W|X)$ should be taken to mean that ``$U$ and $W$ only interact via $X$'', or that ``$U$ and $W$ are independent given $X$''. \n\n\\begin{Exa}\nAn \\emph{Undirected Graph Dependency Model} $I_G$ is defined in terms of an undirected graph $G$. Let $V$ be the set of vertices of $G$ and then let $I_G(U,W|X)$ if every path from a vertex in $U$ to a vertex in $W$ passes through a vertex in $X$. $I_G$ is often called the \\emph{Global Markov Property}.\n\\end{Exa}\n\n\\begin{Exa}\nA \\emph{Probabilistic Dependency Model} $I_P$ is defined in terms of a probability distribution $P(V)$ over a set $V$ of random variables. $I_P(U,W|X)$ is true if $U$ and $W$ are conditionally independent given $X$. \n\\end{Exa}\n\n\\begin{Exa}\nA \\emph{Quantum Dependency Model} $I_\\rho$ is defined in terms of a density operator $\\rho_V$ acting on the tensor product of Hilbert spaces labeled by elements of a set $V$. 
$I_\\rho(U,W|X)$ is true if $U$ and $W$ are quantum conditionally independent given $X$.\n\\end{Exa}\n\nA \\emph{graphoid} is a dependency model that for all disjoint $U,W,X,Y \\subseteq V$ satisfies the following axioms:\n\\begin{align}\n\\text{Symmetry:}\\quad \t\t& I(U,W|X) \\Rightarrow I(W,U|X) \\\\\n\\text{Decomposition:}\\quad \t& I(U,W\\cup Y|X) \\Rightarrow I(U,W|X) \\\\\n\\text{Weak Union:}\\quad \t\t& I(U,W \\cup Y|X) \\Rightarrow I(U,W|X \\cup Y) \\\\\n\\text{Contraction:}\\quad \t\t& I(U,W|X) \\,\\, \\text{and} \\,\\, I(U,Y|X \\cup W) \\Rightarrow I(U,W \\cup Y|X).\n\\end{align}\nA \\emph{positive graphoid} is a graphoid that also satisfies the additional axiom\n\\begin{align}\n\\text{Intersection:}\\quad\t\t& I(U,W |X \\cup Y) \\,\\, \\text{and} \\,\\, I(U,Y|W \\cup X) \\Rightarrow I(U,W \\cup Y|X).\n\\end{align}\n\n\\begin{The}\nThe quantum dependency model is a graphoid.\n\\end{The}\n\\begin{proof}\nSymmetry is immediate because $S(U:W|X)$ is invariant under exchange of $U$ and $W$. Decomposition and Weak Union follow from the strong subadditivity inequality. Specifically, for $A,B,C \\subseteq V$, strong subadditivity asserts that $S(A:B|C) \\geq 0$, or in terms of von Neumann entropies\n\\begin{equation}\n\\label{Cond:SS}\nS(A\\cup C) + S(B\\cup C) - S(C) - S(A\\cup B \\cup C) \\geq 0.\n\\end{equation}\nDecomposition asserts that if $S(U:W \\cup Y|X) = 0$ then $S(U:W|X) = 0$. This is true if $S(U:W \\cup Y|X) - S(U:W|X) \\geq 0$, since $S(U:W|X)$ is guaranteed to be nonnegative by strong subadditivity. Expanding $S(U:W \\cup Y|X) - S(U:W|X)$ and canceling terms gives\n\\begin{equation}\n\\begin{split}\nS(U:W \\cup Y|X) - S(U:W|X) = & S(U \\cup W \\cup X) + S(W \\cup X \\cup Y) \\\\\n& - S(W \\cup X) - S(U \\cup W \\cup X \\cup Y),\n\\end{split}\n\\end{equation}\nbut the right-hand side is nonnegative by eq.~\\eqref{Cond:SS} with $A= U, B = Y, C = W\\cup X$.\n\nWeak Union is proved via a similar argument applied to $S(U:W \\cup Y|X) - S(U:W|X \\cup Y)$. It follows from eq.~\\eqref{Cond:SS} by taking $A = U, B = Y, C = X$. Finally, Contraction follows from noting that $S(U:W|X) + S(U:Y|X \\cup W) = S(U:W \\cup Y|X)$, which is straightforward to show by expanding in terms of von Neumann entropies.\n\\end{proof}\n\nThe well-known analogous result for classical probability distributions follows immediately because classical probability distributions can be represented by density matrices that are diagonal in an orthonormal product basis, and for such states the von Neumann entropies of subsystems are equal to the Shannon entropies of the corresponding marginal distributions. Additionally, if $P(V)$ is positive for all possible valuations of the variables then the associated dependency model is actually a positive graphoid. The analogous quantum property would be to require that $\\rho_V$ is a strictly positive operator, i.e. it is of full rank, but we have not been able to prove that this property implies intersection.\n\nThe undirected graph dependency model is also a positive graphoid. The proof is straightforward, so it is not given here. 
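\n\nAs an illustration of the entropy bookkeeping used in this proof, the following numerical check (our own sketch, not part of the original argument; it assumes only the \\texttt{numpy} library) verifies the chain rule $S(U:W|X) + S(U:Y|X \\cup W) = S(U:W \\cup Y|X)$ invoked in the Contraction step on a randomly generated four-qubit state, with the subsystems ordered as $(U,W,X,Y)$.\n\\begin{verbatim}\nimport numpy as np\n\n# Check of S(U:W|X) + S(U:Y|X u W) = S(U:W u Y|X) on a random 4-qubit state.\nrng = np.random.default_rng(0)\n\ndef random_state(dim):\n    # Random full-rank density matrix from a Ginibre ensemble.\n    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))\n    rho = g @ g.conj().T\n    return rho / np.trace(rho)\n\ndef reduced(rho, keep, dims):\n    # Reduced density matrix on the subsystems listed in `keep`.\n    n = len(dims)\n    rho = rho.reshape(dims + dims)\n    for i in sorted(set(range(n)) - set(keep), reverse=True):\n        rho = np.trace(rho, axis1=i, axis2=i + rho.ndim // 2)\n    d = int(np.prod([dims[i] for i in sorted(keep)]))\n    return rho.reshape(d, d)\n\ndef S(rho, keep, dims):\n    # Von Neumann entropy (in bits) of the reduced state on `keep`.\n    evals = np.linalg.eigvalsh(reduced(rho, keep, dims))\n    evals = evals[evals > 1e-12]\n    return float(-np.sum(evals * np.log2(evals)))\n\ndef cmi(rho, U, W, X, dims):\n    # Conditional mutual information S(U:W|X).\n    return (S(rho, U + X, dims) + S(rho, W + X, dims)\n            - S(rho, X, dims) - S(rho, U + W + X, dims))\n\ndims = (2, 2, 2, 2)                     # subsystems U, W, X, Y\nrho = random_state(int(np.prod(dims)))\nU, W, X, Y = [0], [1], [2], [3]\nlhs = cmi(rho, U, W, X, dims) + cmi(rho, U, Y, X + W, dims)\nrhs = cmi(rho, U, W + Y, X, dims)\nprint(abs(lhs - rhs))                   # ~1e-13, so the identity holds\n\\end{verbatim}\n\n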
The following theorem is important for the theory of Markov networks.\n\\begin{The}[Lauritzen \\cite{Lau06a}]\n\\label{Cond:UDGraph}\nThe undirected graph dependency model is equivalent to the dependency model obtained by setting \n$I\\left (U,V - (U\\cup n(U))|n(U) \\right )$ for all $U \\subseteq V$, where $n(U)$ is the set of nearest neighbors of $U$, and demanding closure under the positive graphoid axioms.\n\\end{The}\n\nThe condition $I\\left (U,V - (U\\cup n(U))|n(U) \\right )$ defines the \\emph{Local Markov Property} on a graph. Note that although its closure under the positive graphoid axioms is equivalent to the Global Markov Property, this is not the case for a graphoid that doesn't satisfy intersection \\cite{Lau06a}.\n\n\\subsection{Quantum Markov Networks}\n\n\\label{Graph:HC}\n\nUsing the terminology of the previous section, the definition of a classical Markov Network can be conveniently reformulated as a pair $(G,P(V))$, where $G=(V,E)$ is an undirected graph and $P(V)$ is a probability distribution over random variables represented by the vertices, such that the graphoid $I_P$ satisfies the local Markov property with respect to the graph $G$. The definition of a quantum Markov network can now be obtained by replacing the probabilistic dependency model with a quantum dependency model.\n\nLet $G = (V,E)$ be an undirected graph and suppose that each vertex $v \\in V$ is associated with a quantum system, also denoted $v$, with Hilbert space $\\mathcal{H}_{v}$. Let $\\rho_V$ be a state on $\\mathcal{H}_V = \\bigotimes_{v \\in V} \\mathcal{H}_v$. $(G,\\rho_V)$ is a \\emph{Quantum Markov Network} if the graphoid $I_\\rho$ satisfies the local Markov property with respect to the graph $G$. Further, if $\\rho_V$ is of full rank, then $(G,\\rho_V)$ is called a \\emph{Positive Quantum Markov Network}. Note that unlike in the classical case, we cannot conclude that the global Markov property holds for positive quantum Markov networks because the intersection axiom has not been proved.\n\nThe remainder of this section provides some partial characterization results for quantum Markov networks, along the lines of the Hammersley-Clifford theorem. The most generally applicable of these results makes use of the $\\ensuremath{\\odot}$ product.\n\\begin{The}\n\\label{Graph:QHC}\nLet $G = (V,E)$ be an undirected graph and let $\\mathfrak{C}$ be the set of cliques of $G$. If $(G,\\rho_V)$ is a positive quantum Markov network then there exist positive operators $\\sigma_C$ acting on the cliques of $G$, i.e. $C \\in \\mathfrak{C}$, such that\n\\begin{equation}\n\\label{Graph:QHCDecomp}\n\\rho_V = \\bigodot_{C \\in \\mathfrak{C}} \\sigma_C .\n\\end{equation}\n\\end{The}\nThis theorem is analogous to one direction of the Hammersley-Clifford theorem and the proof is very similar to a standard proof for the classical case \\cite{Pol04a}, but is somewhat involved so it is given in appendix \\ref{Proof}. However, unlike the classical case, the converse does not hold, i.e. there are states of the form eq. \\eqref{Graph:QHCDecomp} that do not satisfy the local Markov property as illustrated by the following example. 
\n\n\\begin{Exa}\n\\label{ex:heisenberg}\nConsider a chain of 3 qubits $A$, $B$, and $C$ coupled through an anti-ferromagnetic Heisenberg interaction $H = \\sigma^x_A\\sigma^x_BI_C + \\sigma^y_A\\sigma^y_BI_C + \\sigma^z_A\\sigma^z_BI_C + I_A\\sigma^x_B\\sigma^x_C + I_A\\sigma^y_B\\sigma^y_C + I_A\\sigma^z_B\\sigma^z_C$ where $\\sigma^x$, $\\sigma^y$, and $\\sigma^z$ denote the Pauli operators\n\\begin{equation}\n\\sigma^x= \n\\left( \\begin{array}{cc}\n0 & 1 \\\\\n1 & 0\n\\end{array} \\right),\\ \\ \n\\sigma^z= \n\\left( \\begin{array}{cc}\n1 & 0 \\\\\n0 & -1\n\\end{array} \\right), \\ \\ \n\\mathrm{and}\\ \\ \n\\sigma^y= \\sigma^z \\sigma^x.\n\\end{equation}\nThe Gibbs state $\\rho_{A\\cup B \\cup C}(\\beta) = \\frac{1}{Z(\\beta)} \\exp(-\\beta H)$ has the form of eq.~\\eqref{Graph:QHCDecomp}, but for any finite $\\beta$ it has a non-zero mutual information between $A$ and $C$ conditioned on $B$ as shown in Fig.~\\ref{Heisenberg}.\n\\begin{figure}[h!]\n\\center \\includegraphics[height=2.5in]{Heis}\n\\caption{Conditional mutual information for a 3-vertex anti-ferromagnetic Heisenberg spin-$\\frac 12$ chain as a function of inverse temperature $\\beta$.}\n\\label{Heisenberg}\n\\end{figure}\n\\end{Exa}\n\nFor trees, a decomposition into reduced and mutual density operators analogous to eq.~\\eqref{Graph:HCMut} is possible. For this, we need the following lemma.\n\n\\begin{Lem}\n\\label{Graph:Remove}\nLet $G= (V,E)$ be a graph, let $(G,\\rho_V)$ be a quantum Markov network and let $u \\in V$. Let $G' = (V',E')$ be the graph obtained by removing $u$ from $V$ and removing all edges that connect $u$ to any other vertex from the graph. Let $G'' = (V',E'')$ be the graph obtained by adding to $G'$ an edge between every pair of distinct neighbors of $u$ in the original graph $G$. Let $\\rho_{V'} = \\PTr{u}{\\rho_{V}}$. Then $(G'',\\rho_{V'})$ is a quantum Markov network.\n\\end{Lem}\n\\begin{proof}\nFor $U \\subset V$, let $U_u = U-u$ if $u \\in U$ and $U_u = U$ otherwise, and denote by $n_G(U_u)$ and $n_{G''}(U_u)$ the neighbors of $U_u$ in the graphs $G$ and $G''$ respectively. It must be shown that $I_{\\rho_V}(U, V-(U \\cup n_G(U)) | n_G(U))$ for all $U \\subset V$ implies $I_{\\rho_{V'}}(U_u, V'-(U_u \\cup n_{G''}(U_u)) | n_{G''}(U_u))$ for every $U_u \\subset V'$. By symmetry, we can assume without loss of generality that $u \\in U$. There are two different cases to consider:\n\n\\noindent{\\bf Case I:} $n_G(u) \\cap U \\neq \\emptyset$.\\\\\nThis implies that $n_{G''}(U_u) = n_G(U)$ and so $V' - (U_u \\cup n_{G''}(U_u)) = V - (U \\cup n_G(U))$. We conclude that $I_{\\rho_{V'}}(U_u, V'-(U_u \\cup n_{G''}(U_u)) | n_{G''}(U_u))$ is equivalent to $I_{\\rho_V}(U - u, V-(U \\cup n_G(U)) | n_G(U))$, and the result follows from decomposition.\n\n\\noindent{\\bf Case II:} $n_G(u) \\cap U = \\emptyset$.\\\\\nThis implies that $n_{G''}(U_u) = n_G(U_u)$. Consider the local Markov property on the original graph $G$ applied to $U_u$: $I_{\\rho_{V}}(U_u, V-(U_u \\cup n_{G}(U_u)) | n_{G}(U_u))$ which is equivalent to $I_{\\rho_{V}}(U_u, u \\cup V'-(U_u \\cup n_{G''}(U_u)) | n_{G''}(U_u))$, and the result follows from decomposition. \n\\end{proof}\n\n\\begin{The}\nLet $G= (V,E)$ be a tree. 
If $(G,\\rho_V)$ is a positive quantum Markov network then it can be written as\n\\begin{equation}\n\\rho_V = \\left (\\bigotimes_{v \\in V} \\rho_v \\right ) \\pns{n} \\left ( \\prod_{(v,u) \\in E} \\rho\\ns{n}_{v:u} \\right ).\n\\label{eq:standard_graph_state}\n\\end{equation}\n\\label{The:mutual}\n\\end{The}\n\n\\begin{proof}\nThe proof is by induction on the number of vertices in the tree. It is clearly true for a single vertex, so consider a tree $G=(V,E)$ with $N$ vertices and choose a leaf vertex $u \\in V$. Construct the quantum Markov network $(G'',\\rho_{V'})$ as in Lemma~\\ref{Graph:Remove}. Since $u$ is a leaf it only has one neighbor in $G$, denoted $w$, so the only difference between $G$ and $G''$ is that $u$ and the single edge connecting $u$ to the rest of the graph have been removed. By the inductive assumption, $\\rho_{V'}$ has a decomposition of the form\n\\begin{equation}\n\\label{Graph:InductAssump}\n\\rho_{V'} = \\left (\\bigotimes_{v \\in V'} \\rho_v \\right ) \\pns{n} \\left ( \\prod_{(v,x) \\in E''} \\rho\\ns{n}_{v:x} \\right ).\n\\end{equation} \nGenerally, $\\rho_{V} = \\rho_{V' \\cup \\{u\\}} = \\rho_{V'} \\pns{n} \\rho\\ns{n}_{u|V'}$. The local Markov property implies that $I_\\rho (u,V' - w|w)$, so that $\\rho_{u|V'} = \\rho_{u|w}$, which in turn can be written as $\\rho_{u|w} = \\rho_u \\pns{n} \\rho_{u:w}$, so \n\\begin{equation}\n\\rho_V = \\rho_{V'} \\pns{n} \\left ( \\rho_u \\pns{n} \\rho_{u:w}\\right ).\n\\end{equation} \nEvery term in eq.~\\eqref{Graph:InductAssump} commutes with $\\rho_u$, because they are defined on different tensor product factors. Also, $\\rho_{u:w}$ commutes with all the other mutual density operators either because they act on different tensor product factors or because the fact that $w$ is the only neighbor of $u$ implies that $u$ is quantum conditionally independent of any other subsystem given $w$. Combining these commutation relations with eq.~\\eqref{Graph:InductAssump} allows $\\rho_V$ to be rearranged into the form of eq.~\\eqref{eq:standard_graph_state}, which completes the induction.\n\\end{proof}\n\nIn the classical case, the Hammersley-Clifford decomposition is not necessarily unique, and when the graph is a tree the decomposition into marginal and mutual distributions is only one possibility. Similarly, a state $\\rho_V$ might have a decomposition of the form of eq.~\\eqref{eq:standard_graph_state} but with more general operators in place of the mutual and marginal states. This provides another motivation for the definition of an $n$-bifactor state that was given in eq.~\\eqref{Graph:QGSE}. As mentioned in \\S\\ref{Graph:QGS}, not all $n$-bifactor states are quantum Markov networks, but a subset of them are, as shown by the following theorem.\n\n\\begin{The}\n\\label{Graph:THCC}\nLet $G = (V,E)$ be a tree with each vertex $v \\in V$ associated to a quantum system with Hilbert space $\\mathcal{H}_v$. Let $\\mathcal{H}_V = \\bigotimes_{v \\in V} \\mathcal{H}_v$ and let $\\rho_V$ be an $n$-bifactor state on $\\mathcal{H}_V$. If $\\mu_v$ is decomposable with respect to all pairs $\\nu_{u:v}$ and $\\nu_{w:v}$, then $(G, \\rho_V)$ is a quantum Markov network.\n\\end{The}\n\nThe notion of decomposability used in the statement of this theorem is defined in eq.~\\eqref{def:decomposable}. The proof is straightforward and we leave it as an exercise. \n\n\\subsection{Other Graphical Models}\n\n\\label{Graph:OM}\n\nIn this section quantum generalizations of two other Graphical Models are described: Factor Graphs and Bayesian Networks. Generally, the choice of which model to use depends on the application and Belief Propagation algorithms have been developed for all of them in the classical case. 
For example, Factor Graphs arise naturally in the theory of error correcting codes, Bayesian Networks are commonly used to model causal reasoning in artificial intelligence, and Markov Networks are useful in statistical physics. However, it is now understood that the classical versions of these three models are interconvertible, and that upon such conversion the different Belief Propagation algorithms are all equivalent in complexity \\cite{AM00a, YFW02a, KFL01a}. Some similar results also hold for the quantum case, as we illustrate by showing how a quantum factor graph can be converted into a $1$-Bifactor Network. This construction is used in the application to quantum error correction described in \\S\\ref{App:QEC}. \n\n\\subsubsection{Quantum Factor Graphs}\n\n\\label{Graph:FG}\n\n\\begin{figure}\n\\center\\includegraphics{FG}\n\\caption{Factor graph representation of the state $(\\Ket{000} + \\Ket{111})_{uvw}$, with $\\mu_u = \\mu_v = \\mu_w = I$ and $X_a = (I+\\sigma^z_u\\otimes \\sigma^z_v)$, $X_b = (I+\\sigma^x_u\\otimes \\sigma^x_v\\otimes\\sigma^x_w)$, and $X_c = (I+\\sigma^z_v\\otimes\\sigma^z_w)$.}\n\\label{fig:FG}\n\\end{figure}\n\nA \\emph{quantum factor graph} consists of a pair $(G, \\rho_V)$, where $G = (U,E)$ is a bipartite graph and $\\rho_V$ is a quantum state. A bipartite graph is an undirected graph for which the set of vertices can be partitioned into two disjoint sets, $V$ and $F$, such that $(v,f) \\in E$ only if $v \\in V$ and $f \\in F$. The vertices in $V$ are referred to as ``variable nodes'' and those in $F$ as ``function nodes''. Each variable node $v$ is associated with a quantum system, also labeled $v$, with a Hilbert space $\\mathcal{H}_v$, and $\\rho_V$ is a state on $\\bigotimes_{v \\in V} \\mathcal{H}_v$. The Hilbert space associated to a function node $f$ is the tensor product of the Hilbert spaces of the adjacent variable nodes\\footnote{The following equality is not merely an isomorphism; they are the same Hilbert spaces.}: $\\mathcal{H}_f = \\bigotimes_{v \\in n(f)} \\mathcal{H}_v$. The state associated with a factor graph is of the form\n\\begin{equation}\n\\label{Graph:FGE}\n\\rho_V = \\frac 1Z \\prod_{f \\in F} X_f \\ensuremath{\\star} \\bigotimes_{v \\in V} \\mu_v \n\\end{equation}\nwhere $\\mu_v$ is an operator on $\\mathcal{H}_v$, $X_f$ is an operator on $\\mathcal{H}_f$ and $[X_f,X_g] = 0$ for all $f,g \\in F$. \n\nFor example, such a state would be obtained after performing a sequence of projective von Neumann measurements on a product state of the variable nodes (see Fig.~\\ref{fig:FG}). More precisely, for each $f \\in F$, let $\\{P_f^j\\}$ be a complete set of orthogonal projectors, and let $\\bigotimes _{v \\in V} \\mu_v$ be the initial state of $V$. When the projective measurements $\\{P_f^j\\}$ are performed at each function node and commuting outcomes $P_f^j = X_f$ are obtained, the post-measurement state is of the form of eq. \\eqref{Graph:FGE}. Similarly, factor graph states could be obtained from more general POVM measurements $\\{E_f^j\\}$, provided the state update rule $\\rho_V \\rightarrow \\frac{(E_f^j)^{\\frac{1}{2}} \\rho_V (E_f^j)^{\\frac{1}{2}}}{\\Tr{E_f^j \\rho_V}}$ is used. In that case, the $X_f$ could be any positive operator rather than being restricted to projectors as in the case of a von Neumann measurement. 
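\n\nAs a concrete illustration of eq.~\\eqref{Graph:FGE} (this snippet is our own sketch and is not part of the original text; it assumes only \\texttt{numpy}), the following fragment builds the commuting function-node operators of Fig.~\\ref{fig:FG} with $\\mu_u = \\mu_v = \\mu_w = I$, in which case the $\\ensuremath{\\star}$ product reduces to an ordinary operator product, and checks that the resulting state is the projector onto $(\\Ket{000} + \\Ket{111})/\\sqrt{2}$.\n\\begin{verbatim}\nimport numpy as np\n\n# Factor-graph state of Fig. FG: rho = (1/Z) X_a X_b X_c with mu = I,\n# which should equal the projector onto (|000> + |111>)/sqrt(2).\nI2 = np.eye(2)\nsx = np.array([[0, 1], [1, 0]], dtype=complex)\nsz = np.array([[1, 0], [0, -1]], dtype=complex)\n\ndef kron(*ops):\n    out = np.eye(1, dtype=complex)\n    for op in ops:\n        out = np.kron(out, op)\n    return out\n\nI8 = np.eye(8)\nX_a = I8 + kron(sz, sz, I2)   # I + sigma^z_u sigma^z_v\nX_b = I8 + kron(sx, sx, sx)   # I + sigma^x_u sigma^x_v sigma^x_w\nX_c = I8 + kron(I2, sz, sz)   # I + sigma^z_v sigma^z_w\n\n# The X_f commute, so with mu = I the star product is just their product.\nrho = X_a @ X_b @ X_c\nrho = rho / np.trace(rho)\n\nghz = np.zeros(8, dtype=complex)\nghz[0] = ghz[7] = 1 / np.sqrt(2)\nprint(np.allclose(rho, np.outer(ghz, ghz.conj())))   # True\n\\end{verbatim}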
\n\nTo convert a factor graph into a $1$-Bifactor Network, we need to treat the function nodes as distinct quantum systems, and so endow them with their own Hilbert spaces $\\mathcal{H}_f = \\bigotimes_{v \\in n(f)} \\mathcal{H}_{R_v^f}$ where $ \\mathcal{H}_{R_v^f}$ is isomorphic to $ \\mathcal{H}_{v}$. The system $R_v^f$ is called a reference system for $v$ in $f$. Then a $1$-bifactor state $\\rho_U$ can be defined on the graph $G = (U,E)$, where $U = V\\cup F$, such that $\\rho_V = \\PTr{F}{\\rho_{U}}$ and\n\\begin{equation}\n\\rho_U = \\frac 1Z \\bigotimes_{u \\in U} \\mu_u \\ensuremath{\\star} \\prod_{(v,f) \\in E} \\nu_{v:f},\n\\end{equation}\nwhere for $u \\in V$ the $\\mu_u$ are the operators appearing in eq.~\\eqref{Graph:FGE}, for $u \\in F$, $\\mu_u = X_u^T$, $\\nu_{v:f} = d_v\\kb{\\Phi}{\\Phi}_{v\\cup R_v^f} \\otimes I_{f-R^f_v}$ and $\\Ket\\Phi_{v\\cup R_v^f} = \\frac{1}{\\sqrt d_v} \\sum_{j=1}^{d_v} \\Ket{j}_v\\Ket{j}_{R_v^f}$ denotes the maximally entangled state between $v$ and its reference $R_v^f$. \n\n\\subsubsection{Quantum Bayesian Networks}\n\n\\label{Graph:BN}\n\n\\begin{figure}\n\\center\\includegraphics{BN}\n\\caption{This directed acyclic graph has two distinct ancestral orderings: $(a,b,c,d)$ and $(a,c,b,d)$. The equalities $S(d:a|b\\cup c) = 0$ and $S(b:c | a) = 0$ are examples of constraints that are satisfied when $(G,\\rho_V)$ is a Quantum Bayesian Network.}\n\\label{fig:BN}\n\\end{figure}\n\nApart from Markov Networks, there are other Graphical Models that make use of the theory of dependency models and graphoids. Bayesian Networks provide an example, and they are commonly applied in expert systems to model causal reasoning \\cite{Nea90a,Nea04a}. The basic idea is to replace the undirected graph of a Markov network with a Directed Acyclic Graph (DAG), wherein the directed edges represent direct cause-effect relationships. The quantum graphoid can be used to give a straightforward generalization of the classical networks, which we only treat briefly here. To describe the generalization, a few definitions and facts about DAGs are required.\n\nFor a vertex $v$ in a DAG $G = (V,E)$, let $m(v)$ denote the parents of $v$, i.e. $m(v) = \\{u \\in V|(u,v) \\in E\\}$. The set of ancestors of $v$ is denoted $a(v)$ and consists of those vertices $u$ for which there exists a path in the graph starting at $u$ and ending at $v$. Conversely, the set of descendants of $v$ is denoted $d(v)$ and consists of those vertices $u$ for which there exists a path in the graph starting at $v$ and ending at $u$. The set of parents of a subset $U \\subseteq V$ of vertices is defined as $m(U) = \\cup_{u \\in U} m(u) - U$ and similarly $a(U) = \\cup_{u \\in U} a(u) - U$ and $d(U) = \\cup_{u \\in U} d(u) - U$. The set of nondescendants of a subset $U \\subseteq V$ of vertices is defined to be $nd(U) = V - (d(U)\\cup U)$. Note that the vertices in $U$ are not considered to be nondescendants of $U$ for technical convenience. Finally, every DAG has at least one ancestral ordering of its vertices $(v_1,v_2,\\ldots,v_n)$, such that if $v_j \\in a(v_k)$ then $j < k$ (see Fig.~\\ref{fig:BN}).\n\nA \\emph{Quantum Bayesian Network} is a pair $(G,\\rho_V)$, where $G = (V,E)$ is a DAG, each vertex $v \\in V$ is associated with a quantum system, also denoted $v$, with Hilbert space $\\mathcal{H}_v$, and $\\rho_V$ is a quantum state on $\\mathcal{H}_V = \\bigotimes_{v \\in V} \\mathcal{H}_v$. 
The state $\\rho_V$ is required to satisfy the conditional independence constraints $I_\\rho (U, nd(U)-m(U)|m(U) )$ for all subsets $U \\subseteq V$.\n\nThe definition of a classical Bayesian Network is obtained by replacing the quantum systems with classical random variables. It can be shown that $(G,P(V))$ is a classical Bayesian Network iff $P(V) = \\prod_{v \\in V} P(v|m(v))$, and a partial quantum generalization of this can be obtained using the conditional density operator.\n\nDue to the nonassociativity of the $\\pns{n}$ products, expressions like $A \\pns{n} B \\pns{n} C$ are ambiguous. It is convenient to adopt the convention that they are evaluated left-to-right, so that $A \\pns{n} B \\pns{n} C = \\left ( A \\pns{n} B \\right ) \\pns{n} C$. Similarly, we adopt the convention that\n\\begin{equation}\n\\left ( \\pns{n} \\right )_{j=1}^N A_j = \\left ( \\left ( \\left ( A_1 \\pns{n} A_2 \\right ) \\pns{n} A_3 \\right ) \\ldots \\right ) \\pns{n} A_N.\n\\end{equation}\n\n\\begin{The}\nIf $(G,\\rho_V)$ is a Quantum Bayesian Network and $(v_1,v_2,\\ldots,v_N)$ is an ancestral ordering of $V$ then\n\\begin{equation}\n\\rho_V = \\left ( \\pns{n} \\right )_{j=1}^N \\rho\\ns{n}_{v_j|m(v_j)}.\n\\end{equation}\n\\end{The}\n\\begin{proof}\nFor any ordering $(v_1,v_2,\\ldots,v_N)$ of the vertices, an arbitrary state can always be written as\n\\begin{equation}\n\\rho_V = \\left ( \\pns{n} \\right )_{j=1}^{N} \\rho\\ns{n}_{v_{j}|v_{j - 1} v_{j - 2} \\ldots v_1}.\n\\end{equation}\nThis is a quantum generalization of the chain rule for conditional probabilities, which follows straightforwardly from the definition of conditional density operators. If $(v_1,v_2,\\ldots,v_N)$ is in fact an ancestral ordering, then $\\{v_{j-1}, v_{j - 2}, \\ldots, v_1\\} \\subseteq nd(v_{j})$, so $I_\\rho (v_{j}, nd(v_{j})-m(v_j)|m(v_{j}))$, together with Decomposition, implies that $\\rho\\ns{n}_{v_j|v_{j-1} v_{j-2} \\ldots v_1} = \\rho\\ns{n}_{v_j|m(v_j)}$.\n\\end{proof}\n\n\n\\section{Quantum Belief Propagation}\n\n\\label{QBP}\n\nIn this section, we discuss algorithms for solving the inference problem that we started with in \\S\\ref{Problem} for the case of $n$-Bifactor Networks. In fact, we start with the seemingly simpler problem of computing the reduced density operators of the state on the vertices and on pairs of vertices connected by an edge, and then present a simple modification of the algorithm to solve the inference problem for local measurements. \n\nRecall that $n$-bifactor states are of the form\n\\begin{equation}\n\\label{Graph:QGSERem}\n\\rho_V = \\frac 1 Z \\left(\\bigotimes_{u\\in V} \\mu_u\\right) \\pns{n} \\left(\\prod_{(v,w) \\in E} \\nu_{v:w}\\right),\n\\end{equation}\nand that the operators associated with vertices and edges do not have to be straightforwardly related to the reduced and mutual density operators. Therefore, it is not clear a priori that even the simpler task can be done efficiently. {\\em Quantum Belief Propagation} (QBP) algorithms are designed to solve this problem by exploiting the special structure of $n$-bifactor states. Since the class of states under consideration is different for each value of $n$, there is not one algorithm but a family of them. The algorithm that is designed to solve inference problems on $n$-Bifactor Networks is denoted QBP$^{(n)}$. \n\nTo avoid cumbersome notation, focus will be given to $n$-bifactor states with $n < \\infty$. Recall that the operators $\\nu_{u:v}$ defining these states mutually commute. This is not true of $\\infty$-bifactor states. 
Nevertheless, a Belief Propagation algorithm for $\\infty$-bifactor states can be readily defined from the finite $n$ one, by replacing {\\em all} products appearing in eqs.~(\\ref{message})--(\\ref{belief2}) by the $\\odot$ product. Under this modification, the convergence Theorem \\ref{thm:QBP} applies to $\\infty$-Bifactor Networks, and its proof only requires straightforward modifications.\n\nThe remainder of this section is structured as follows. \\S\\ref{QBP:Desc} gives a description of the QBP algorithms and \\S\\ref{QBP:Conv} shows that QBP$\\ns{n}$ converges on trees if the $n$-Bifactor Network is also a quantum Markov Network and that QBP$\\ns{1}$ converges on trees in general. In both cases, the algorithm converges in a time that scales linearly with the diameter of the tree. Finally, \\S\\ref{QBP:Inf} explains how to modify the algorithm to solve inference problems for local measurements.\n\n\\subsection{Description of the Algorithm}\n\n\\label{QBP:Desc}\n\nTo describe the operation of the QBP algorithms, it is helpful to imagine that the graph $G$ represents a network of computers with a processor situated at each vertex. The algorithm could equally well be implemented on a single processor, in which case the network is just a convenient fiction. Pairs of processors are connected by a communication channel if there is an edge between the corresponding vertices. The processor at vertex $u$ has a memory that stores the value of $\\mu_u$ as well as the value of $\\nu_{u:v}$ for each vertex $v$ that is adjacent to $u$ in the graph. The task assigned to each processor is to compute the local reduced state $\\rho_u$ and the joint states $\\rho_{u\\cup v}$\\footnote{Of course, it would be sufficient to only have one processor compute $\\rho_{u\\cup v}$ for each edge.}. At each time step $t$, the processor at $u$ updates its ``beliefs'' about $\\rho_u$ and $\\rho_{u\\cup v}$ via an iterative formula. These beliefs are denoted $b_u\\ns{n}(t)$ and $b_{uv}\\ns{n}(t)$, and are supposed to be approximations to the true reduced states $\\rho_u$ and $\\rho_{u\\cup v}$ based on the information available to the processor at time step $t$. Since the reduced states may depend on information stored at other vertices, the processors pass operator-valued messages $m\\ns{n}_{u \\rightarrow v}(t)$ along the edges at each time step in order to help their neighbors. The message $m\\ns{n}_{u \\rightarrow v}(t)$ is an operator on $\\mathcal{H}_v$ and is initialized to the identity operator $m\\ns{n}_{u\\rightarrow v}(0) = I_v$ at $t = 0$. For $t > 0$ it is computed via the iterative formula\n\\begin{equation}\nm^{(n)}_{u \\rightarrow v}(t) = \\frac 1Y \\PTr{u}{\\mu_u \\pns{n} \\Bigg[ \\Big\\{\\prod_{v' \\in n(u)-v} m^{(n)}_{v' \\rightarrow u}(t-1)\\Big\\} \\pns{n} \\nu_{u:v} \\Bigg]}.\n\\label{message}\n\\end{equation}\nHere, $Y$ is an arbitrary normalization factor that should be chosen to prevent the matrix elements of $m\\ns{n}_{u \\rightarrow v}(t)$ from becoming increasingly small as the algorithm proceeds. It is convenient to choose $Y$ such that $\\PTr{v}{m\\ns{n}_{u \\rightarrow v}(t)} = 1$. 
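\n\nTo make the update rule concrete, the following sketch (our own illustration, not part of the original text; it assumes only \\texttt{numpy}) implements one application of eq.~\\eqref{message} for $n=1$, where $A \\ensuremath{\\star} B = A^{1/2} B A^{1/2}$ (cf.\\ the proof of Theorem~\\ref{thm:commute1}); the variable names, the helper functions and the list \\texttt{incoming} of messages $m^{(1)}_{v' \\rightarrow u}(t-1)$ are all assumptions of the sketch.\n\\begin{verbatim}\nimport numpy as np\n\n# One message update of eq. (message) for n = 1, where A * B = A^{1/2} B A^{1/2}.\n# mu_u acts on H_u, nu_uv acts on H_u (x) H_v, and `incoming` is a list of the\n# messages m_{v'->u}(t-1) from the neighbors of u other than v (operators on H_u).\n\ndef psd_sqrt(A):\n    # Square root of a positive semidefinite matrix via its spectral decomposition.\n    A = (A + A.conj().T) / 2\n    w, v = np.linalg.eigh(A)\n    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T\n\ndef star(A, B):\n    # The 1-product A * B = A^{1/2} B A^{1/2} for positive A.\n    r = psd_sqrt(A)\n    return r @ B @ r\n\ndef ptrace_first(X, d_u, d_v):\n    # Partial trace over the first factor of an operator on H_u (x) H_v.\n    return np.trace(X.reshape(d_u, d_v, d_u, d_v), axis1=0, axis2=2)\n\ndef message_update(mu_u, nu_uv, incoming, d_u, d_v):\n    # m_{u->v}(t) = (1/Y) Tr_u[ mu_u * ( {prod of incoming messages} * nu_uv ) ].\n    prod = np.eye(d_u, dtype=complex)\n    for m in incoming:\n        prod = prod @ m\n    inner = star(np.kron(prod, np.eye(d_v)), nu_uv)\n    out = ptrace_first(star(np.kron(mu_u, np.eye(d_v)), inner), d_u, d_v)\n    return out / np.trace(out)   # normalize so that Tr_v[m_{u->v}(t)] = 1\n\\end{verbatim}\nIn an actual implementation one such update would be performed for every directed edge at each time step.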
\n\n\nThe beliefs about the local density operator $\\rho_u$ at time $t$ are given by the simple formula\n\\begin{equation}\nb^{(n)}_u(t) = \\frac 1{Y'} \\mu_u \\pns{n} \\prod_{v' \\in n(u)} m^{(n)}_{v' \\rightarrow u}(t), \\label{belief1}\n\\end{equation}\nwhere $Y'$ is again a normalization factor that should be chosen to make $\\PTr{u}{b^{(n)}_u(t)} = 1$.\nOn the other hand, the beliefs about $\\rho_{u\\cup v}$ also depend on the messages received by the processor at $v$, so we have to imagine that each vertex shares its messages with its neighbors. Having done so, the beliefs about $\\rho_{u\\cup v}$ are computed via\n\\begin{equation}\nb^{(n)}_{uv}(t) = \\frac 1{Y''} (\\mu_u \\mu_v) \\pns{n} \\Bigg[\\Big\\{\\prod_{w \\in n(u)-v} m^{(n)}_{w \\rightarrow u}(t) \\prod_{w' \\in n(v)-u} m^{(n)}_{w' \\rightarrow v}(t)\\Big\\} \\pns{n} \\nu_{u:v} \\Bigg] \\label{belief2},\n\\end{equation}\nwhere $Y''$ is again a normalization factor.\n\nThe beliefs obtained from the QBP$^{(n)}$ algorithm on input $\\{\\mu_u\\}_{u\\in V}$ and $\\{\\nu_{u:v}\\}_{(u,v) \\in E}$ after $t$ time steps are denoted $[b^{(n)}_{u}(t), b^{(n)}_{uv}(t) ] = \\mathrm{QBP}_t^{(n)}(\\mu_u,\\nu_{u:v})$. The goal of the next section is to provide conditions under which the beliefs represent the exact solution to the inference problem, i.e. to find states and values of $t$ such that $\\mathrm{QBP}_t\\ns{n}(\\mu_u,\\nu_{u:v}) = [\\rho_u,\\rho_{u\\cup v}]$. \n\\subsection{Convergence on Trees}\n\n\\label{QBP:Conv}\n\nAt time $t$, the beliefs $b^{(n)}_{u}(t)$ and $b^{(n)}_{uv}(t)$ represent estimates of the reduced states $\\rho_u$ and $\\rho_{u\\cup v}$ of the input $n$-bifactor state $\\rho_V$. \nNote that when the $\\mu_u$ and the $\\nu_{u:v}$ all commute with one another and are diagonal in a local basis, the QBP$^{(n)}$ algorithms all coincide for different $n$ (including $n=\\infty$) and correspond to the well-known classical Belief Propagation algorithm. This algorithm always converges on trees in a time that scales like the diameter of the tree. Its convergence on general graphs is not fully understood and constitutes an active area of research \\cite{Yed01a, YFW02a}. In the quantum setting, the $\\mu_u$ and the $\\nu_{u:v}$ do not commute in general, but for finite $n$, the $\\nu_{u:v}$ commute with each other by assumption. This has a straightforward consequence that will be of use later.\n\n\\begin{Prop}\n\\label{prop:commute}\nFor all $u,v \\in V$, $x \\in n(u)$, and $w \\in n(v)$, the following commutation relations hold: $[\\nu_{u:v},m^{(n)}_{x\\rightarrow u}(t)] = 0$ and $[m^{(n)}_{w\\rightarrow v}(t),m^{(n)}_{x\\rightarrow u}(t)] = 0$.\n\\end{Prop}\n\nBefore proving the convergence of Quantum Belief Propagation, the following classical example can help build intuition about its workings, and also serves to outline the crucial steps in proving convergence.\n\n\\begin{figure}\n\\center\\includegraphics{belief}\n\\caption{Belief $b_{uv}$ is a function of $\\mu_u$, $\\mu_v$, $\\nu_{u:v}$, and the incoming messages at vertices $u$ and $v$, except $m_{u\\rightarrow v}$ and $m_{v\\rightarrow u}$.}\n\\label{fig:belief}\n\\end{figure}\n\n\\begin{Exa}\n\\label{ex:classical}\nConsider the function $P$ of $N$ discrete variables $x_j \\in \\{1,2,\\ldots,d\\}$ \n\\begin{equation}\nP(x_1,x_2,\\ldots,x_N) = \\psi(x_1,x_2)\\psi(x_2,x_3)\\ldots\\psi(x_{N-1},x_N)\n\\label{eq:classic_ex}\n\\end{equation}\nwhich could be for instance a classical bifactor distribution on a chain with $N$ sites. 
To evaluate the marginal function $P(x_N) = \\sum_{x_1,x_2,\\ldots,x_{N-1}} P(x_1,x_2,\\ldots,x_N)$, one can proceed directly and carry out the sum over $d^N$ terms. A more efficient solution is obtained by invoking the distributive law to reorder the various sums and products into\n\\begin{equation*}\nP(x_N) = \\sum_{x_{N-1}}\\Big(\\psi(x_{N-1},x_N)\\Big( \\ldots \\Big(\\sum_{x_2} \\psi(x_2,x_3)\\Big(\\sum_{x_1} \\psi(x_1,x_2) \\Big)\\Big) \\ldots\\Big) \\Big),\n\\end{equation*}\nand performing the sums sequentially, starting with $\\sum_{x_1}$, then $\\sum_{x_2}$, and so on\n\\begin{eqnarray*}\nP(x_N) &=& \\sum_{x_{N-1}}\\Big(\\psi(x_{N-1},x_N)\\Big( \\ldots \\Big(\\sum_{x_2} \\psi(x_2,x_3)M_{1\\rightarrow 2}(x_2)\\Big) \\ldots\\Big) \\Big) \\\\\n&=& \\sum_{x_{N-1}}\\Big(\\psi(x_{N-1},x_N)\\Big( \\ldots M_{2\\rightarrow 3}(x_3) \\ldots\\Big) \\Big) \\\\\n&&\\vdots \\\\\n&=& \\sum_{x_{N-1}} \\psi(x_{N-1},x_N) M_{N-2 \\rightarrow N-1}(x_{N-1}) \n\\end{eqnarray*}\nwhere the ``messages'' are defined recursively by $M_{j \\rightarrow j+1}(x_{j+1}) = \\sum_{x_{j}} \\psi(x_j,x_{j+1}) M_{j-1 \\rightarrow j}(x_j)$, with $M_{1 \\rightarrow 2}(x_2) = \\sum_{x_1} \\psi(x_1,x_2)$. Each of these steps involves the sum of $d^2$ terms, so $P(x_N)$ can be computed with order $Nd^2$ operations. \n\\end{Exa}\n\nThis example differs from the Belief Propagation algorithm described in the previous section in three important respects. Firstly, it relied on the distributive law, which does not hold in general for the $\\pns{n}$ product, i.e. $\\PTr{u}{X_{uv} \\pns{n} Y_{vw}} \\neq \\PTr{u}{X_{uv}} \\pns{n} Y_{vw}$ in general. This motivates Theorems \\ref{thm:commute} and \\ref{thm:commute1}, which establish conditions under which a suitable form of the distributive law does hold. Secondly, the graph in that example is a chain, whereas Belief Propagation operates on any graph. However, Belief Propagation is only guaranteed to converge on trees, and the above example generalizes straightforwardly to such graphs. Thirdly, the messages in the example must be computed in a prescribed order: $M_{i-1\\rightarrow i}$ is required to compute $M_{i \\rightarrow i+1}$. This last point is important and deserves a more extensive explanation. \n\nSuppose that instead of computing the messages $M_{i\\rightarrow i+1}$ sequentially, messages at each vertex were computed at every time step, following the rule $m_{i \\rightarrow i\\pm1}(t,x_{i\\pm1}) = \\sum_{x_i} m_{i\\mp1 \\rightarrow i}(t-1,x_i)\\psi(x_i,x_{i\\pm1})$, as in eq.~\\eqref{message}, with the initialization $m_{i\\pm1 \\rightarrow i}(0,x_i) = 1$. Then, one can easily verify that for $t\\geq i$, $m_{i \\rightarrow i+1}(t,x_{i+1}) = M_{i \\rightarrow i+1}(x_{i+1})$. In other words, the messages $m_{i\\rightarrow i+1}$ become time independent after a time equal to the distance between vertex $i$ and the beginning of the chain. This observation can in fact be generalized as follows.\n\n\\begin{figure}\n\\center\\includegraphics{depth}\n\\caption{For $(u,v) \\in E$, the graph $G_v^u$ is obtained from $G$ by considering $u$ as the root and removing the subtree associated to vertex $v$. In this example, $depth(G_v^u) = 2$.}\n\\label{fig:depth}\n\\end{figure} \n\n\\begin{Lem}\nWhen $G$ is a tree, the QBP$^{(n)}$ messages $m^{(n)}_{u\\rightarrow v}(t)$ are time independent for $t> depth(G_v^u)$, where $G_v^u$ is the tree obtained from $G$ by choosing $u$ as the root, and removing the subtree associated to $v$ (see Fig.~\\ref{fig:depth}).\n\\end{Lem}\n\n\\begin{proof}\nThe proof is by induction. 
If $u$ is a leaf, it has a unique neighbor $n(u)$ and $m_{u \\rightarrow n(u)}^{(n)}(t) = \\PTr{u}{\\mu_u \\pns{n} \\nu_{u:n(u)}}$ which is time independent. If $u$ is not a leaf, it has two neighbors $L(u)$ and $R(u)$. Clearly, if $m_{L(u)\\rightarrow u}^{(n)}(t)$ is time independent for $t \\geq t^*$, then $m_{u\\rightarrow R(u)}^{(n)}(t) = \\PTr{u}{\\mu_u \\pns{n}\\big[m_{L(u)\\rightarrow u}^{(n)}(t-1) \\pns{n} \\nu_{u:R(u)}\\big]}$ is time independent for $t \\geq t^*+1$. \n\\end{proof}\n\nWhen operated on a tree, all beliefs computed by the QBP algorithm converge to a steady state after a time equal to the diameter of the tree. Note that when the graph contains loops, the beliefs do not necessarily reach a steady state. It remains to be shown that on trees, this steady state is the correct solution. For this, we need a technical result that requires some new notation. Let $U$ and $W$ be two non-intersecting subsets of $V$. Define the two subsets of edges $E_U = \\{ (u,w) \\in E : u,w \\in U\\}$ and $E_{U:W}= \\{ (u,w) \\in E : u \\in U \\text{ and } w \\in W\\}$. Let $\\Gamma_U = \\bigotimes_{u \\in U} \\mu_u$ and for any $F \\subset E$, let $\\Lambda_{F} = \\prod_{(u,w) \\in F} \\nu_{u:w}$. \n\n\\begin{The}\\label{thm:commute}\nLet $(G, \\rho_V)$ be an $n$-Bifactor Network with graph $G = (V,E)$. Let $U,W,X$ be non-intersecting subsets of $V$ such that $U\\cup W \\cup X =V$. When $S(U:X|W) = 0$, the following diagram is commutative.\n\\begin{equation}\\begin{CD}\n \\Gamma_{U\\cup W} \\pns{n} (\\Lambda_{E_{U\\cup W}} \\Lambda_{E_{U\\cup W:X}}) \n @>{\\text{Tr}_U}>> \n \\PTr{U}{\\Gamma_{U\\cup W} \\pns{n} (\\Lambda_{E_{U\\cup W}} \\Lambda_{E_{U\\cup W:X}})}\\\\\n @VV{\\Gamma_X \\pns{n}(\\cdot \\Lambda_{E_X})}V \n @VV{\\Gamma_X \\pns{n}(\\cdot \\Lambda_{E_X})}V \\\\\n \\rho_V = \\Gamma_V \\pns{n} \\Lambda_{E_V} \n @>{\\text{Tr}_U}>> \n \\PTr{U}{\\rho_V}\n\\end{CD}\\label{eq:commute_diag}\\end{equation}\n\\end{The}\n\n\\begin{proof}\nThe down-right path is the simplest. The first equality follows from the fact that $\\Lambda_{E_X}$ commutes with $\\Gamma_{U\\cup W}$ and all other $\\Lambda_E$'s, and the definition $\\rho_V = (\\Gamma_{U \\cup W} \\otimes \\Gamma_X ) \\pns{n} (\\Lambda_{E_{U\\cup W}} \\Lambda_{E_{U\\cup W:X}} \\Lambda_{E_X})$. The second equality is just a definition. The right-down path uses the representation of states that saturate strong subadditivity, eq.~\\eqref{Cond:Hayden}, which implies that $\\rho_V$ has a decomposition of the form $\\rho_V = \\sum^d_{j = 1} p_j \\sigma_{U W_j^{(1)}} \\otimes \\tau_{W_j^{(2)} X}$. 
First observe that \n\\begin{align}\n\\Gamma_{U \\cup W} \\pns{n} (\\Lambda_{E_{U\\cup W}} \\Lambda_{E_{U\\cup W:X}} ) \n&= (\\Gamma_X^{-1}\\pns{n} \\rho_V) \\Lambda_{E_X}^{-1} \\\\\n&= \\Big(\\Gamma_X^{-1}\\pns{n} \\sum^d_{j = 1} p_j \\sigma_{U W_j^{(1)}} \\otimes \\tau_{W_j^{(2)} X}\\Big)\\Lambda_{E_X}^{-1} \\\\\n&= \\sum^d_{j = 1} p_j \\sigma_{U W_j^{(1)}} \\otimes \\Big[\\big(\\Gamma_X^{-1}\\pns{n} \\tau_{W_j^{(2)} X}\\big)\\Lambda_{E_X}^{-1}\\Big].\n\\end{align}\nIt follows that\n\\begin{align*}\n\\Gamma_X\\pns{n}\\left[\\PTr{U}{\\Gamma_{U \\cup W} \\pns{n} (\\Lambda_{E_{U\\cup W}} \\Lambda_{E_{U\\cup W:X}} )}\\Lambda_{E_X} \\right] \n&= \\sum^d_{j = 1} p_j \\sigma_{W_j^{(1)}} \\otimes \\tau_{W_j^{(2)} X}\\\\\n&= \\PTr{U}{\\rho_V}.\n\\end{align*}\n\\end{proof}\n\nSpecializing to the case $n=1$ enables a stronger result to be derived that does not require independence assumptions.\n\n\\begin{The}\\label{thm:commute1}\nLet $(G,\\rho_V)$ be a 1-Bifactor Network with graph $G = (V,E)$. Let $U,W,X$ be non-intersecting subsets of $V$ such that $U\\cup W \\cup X =V$. The following diagram is commutative.\n\\begin{equation}\\begin{CD}\n \\Gamma_{U} \\ensuremath{\\star} (\\Lambda_{E_{U\\cup W}} \\Lambda_{E_{U\\cup W:X}}) \n @>{\\text{Tr}_U}>> \n \\PTr{U}{\\Gamma_{U} \\ensuremath{\\star} (\\Lambda_{E_{U\\cup W}} \\Lambda_{E_{U\\cup W:X}})}\\\\\n @VV{\\Gamma_{W\\cup X} \\ensuremath{\\star}(\\cdot \\Lambda_{E_X})}V \n @VV{\\Gamma_{W\\cup X} \\ensuremath{\\star}(\\cdot \\Lambda_{E_X})}V \\\\\n \\rho_V = \\Gamma_V \\ensuremath{\\star} \\Lambda_{E_V} \n @>{\\text{Tr}_U}>> \n \\PTr{U}{\\rho_V}\n\\end{CD}\\end{equation}\n\\end{The}\n\n\\begin{proof}\nThe theorem follows simply from the cyclic property of the partial trace:\n\\begin{align}\n\\PTr{U}{\\rho_V}\n&= \\PTr{U}{[\\Gamma_U^\\frac 12 \\otimes \\Gamma_{W\\cup X}^\\frac 12] \\Lambda_{E} [\\Gamma_U^\\frac 12 \\otimes \\Gamma_{W\\cup X}^\\frac 12]} \\\\\n&= \\Gamma_{W\\cup X}^\\frac 12 \\PTr{U}{ \\Gamma_U \\Lambda_{E}} \\, \\Gamma_{W\\cup X}^\\frac 12 \\\\\n&= \\Gamma_{W\\cup X}^\\frac 12 \\PTr{U}{ \\Gamma_U \\Lambda_{E_{U\\cup W}} \\Lambda_{E_{U\\cup W:X}}} \\Lambda_{E_X} \\, \\Gamma_{W\\cup X}^\\frac 12.\n\\end{align}\n\\end{proof}\n\nWe are now in a position to state and prove the main result of this section. \n\n\\begin{The}\n\\label{thm:QBP}\nLet $(G,\\rho_V)$ be an $n$-Bifactor Network with graph $G = (V,E)$, and let $[b^{(n)}_{u}(t), b^{(n)}_{uv}(t) ] = \\mathrm{QBP}_t^{(n)}(\\mu_u,\\nu_{u:v})$. If $(G,\\rho_V)$ is a quantum Markov network and $G$ is a tree, then for all $t \\geq diameter(G)$, $b^{(n)}_u(t) = \\rho_u$ and $b^{(n)}_{uv}(t) = \\rho_{u\\cup v}$.\n\\end{The}\n\n\\begin{proof}\nFirst, observe that $b^{(n)}_u(t) = \\PTr{v}{b^{(n)}_{uv}(t)}$, so it is sufficient to prove that $b^{(n)}_{uv}(t) = \\rho_{u\\cup v}$. Consider $u \\cup v$ to be the root of the tree. We proceed by induction, repeatedly tracing out leaves from the bifactor state except $u$ and $v$ until we are left with only vertices $u$ and $v$. Set $G(0) = G$ and let $G(t) = (V(t),E(t))$ be the tree left after $t$ such rounds of removing leaves. Denote the leaves of $G(t)$ apart from $u$ and $v$ by $l(t)$, the children of $x$ by $c(x)$, and the unique parent of $x$ by $m(x)$. 
At $t=0$, consider tracing out a leaf $w$ of $G$:\n\\begin{align}\n\\PTr{w}{\\rho_V}\n&= \\PTr{w}{(\\mu_w\\otimes \\Gamma_{V-w})\\pns{n} (\\nu_{w:m(w)} \\Lambda_{E_{V-w}})}\\\\\n&= \\Gamma_{V-w} \\pns{n}\\left[\\PTr{w}{\\mu_w \\pns{n} \\nu_{w:m(w)}}\\Lambda_{E_{V-w}}\\right] \\\\\n&= \\Gamma_{V-w} \\pns{n}\\left[m_{w\\rightarrow m(w)}^{(n)}(1)\\Lambda_{E_{V-w}}\\right]\n\\end{align}\nwhere we have used Theorem~\\ref{thm:commute} going from the first to the second line. Since this holds for all leaves, we conclude that \n\\begin{equation}\n\\PTr{l(0)}{\\rho_{V}} = \\Gamma_{V(1)} \\pns{n} \\left(\\prod_{x \\in l(0)} \\prod_{y \\in c(x)} m_{y\\rightarrow x}^{(n)}(1)\\Lambda_{V(1)}\\right).\n\\end{equation}\nWe thus make the inductive assumption that\n\\begin{equation}\n\\label{Graph:IndAss}\n\\rho_{V(t)} = \\Gamma_{V(t)} \\pns{n} \\left(\\prod_{x \\in l(t)} \\prod_{y \\in c(x)} m_{y\\rightarrow x}^{(n)}(t)\\Lambda_{V(t)}\\right).\n\\end{equation}\nIt follows that\n\\begin{align}\n\\rho_{V(t+1)} \n&= \\PTr{l(t)}{\\rho_{V(t)}} \\\\\n&= \\PTr{l(t)}{\\Gamma_{V(t)} \\pns{n} \\left[\\prod_{x \\in l(t)} \\prod_{y \\in c(x)} m_{y\\rightarrow x}^{(n)}(t)\\Lambda_{V(t)}\\right]} \\\\\n&= \\PTr{l(t)}{\\Gamma_{V(t+1)} \\pns{n} \\left[\\prod_{x \\in l(t)} \\mu_x \\pns{n}\\Big( \\prod_{y \\in c(x)} m_{y\\rightarrow x}^{(n)}(t) \\nu_{x:m(x)} \\Lambda_{V(t+1)}\\Big)\\right]} \\\\\n&= \\Gamma_{V(t+1)} \\pns{n} \\left[\\prod_{x \\in l(t)} \\PTr{x}{ \\mu_x \\pns{n}\\Big( \\prod_{y \\in c(x)} m_{y\\rightarrow x}^{(n)}(t) \\nu_{x:m(x)}\\Big)} \\Lambda_{V(t+1)}\\right] \\\\\n&= \\Gamma_{V(t+1)} \\pns{n} \\left[\\prod_{x \\in l(t)} m_{x\\rightarrow m(x)}^{(n)}(t+1) \\Lambda_{V(t+1)}\\right] \\\\\n&= \\Gamma_{V(t+1)} \\pns{n} \\left[\\prod_{x \\in l(t+1)} \\prod_{y \\in c(x)} m_{y\\rightarrow x}^{(n)}(t+1)\\Lambda_{V(t+1)}\\right]\n\\end{align}\nwhich is of the same form as eq.~\\eqref{Graph:IndAss} with $t$ replaced by $t+1$, so eq.~\\eqref{Graph:IndAss} follows by induction. We have again used Theorem~\\ref{thm:commute} in going from the third to the fourth line. When $V(t)$ contains only $u$ and $v$ then this reduces to $\\rho_{u\\cup v} = b\\ns{n}_{uv}(t)$, which is what we set out to prove. \n\\end{proof}\n\nOnce again, specializing to the case $n=1$ enables a stronger result to be derived that does not rely on independence assumptions.\n\n\\begin{Cor}\n\\label{cor:QBP1}\nLet $(G,\\rho_V)$ be a $1$-Bifactor Network with graph $G = (V,E)$, and let $[b_{u}(t), b_{uv}(t) ] = \\mathrm{QBP}^{(1)}_t(\\mu_u,\\nu_{u:v})$. If $G$ is a tree, then for all $t \\geq diameter(G)$, $b_u(t) = \\rho_u$ and $b_{uv}(t) = \\rho_{u\\cup v}$.\n\\end{Cor}\n\n\\begin{proof} This corollary is a consequence of Theorem~\\ref{thm:commute1} and the fact that the proof of Theorem~\\ref{thm:QBP} only relies on the commutativity of the diagram eq.~\\eqref{eq:commute_diag}.\n\\end{proof}\n\nThis last result gives us additional information about the structure of correlations in 1-bifactor states that is captured by the following corollary.\n\n\\begin{Cor}\nLet $(G,\\rho_V)$ be a $1$-Bifactor Network on graph $G = (V,E)$. If $G$ is a tree, then the mutual density operators commute: $[\\rho_{u:v},\\rho_{w:x}] = 0$ for all $(u,v)$ and $(w,x) \\in E$.\n\\end{Cor}\n\n\\begin{proof}\nThe only nontrivial case is $[\\rho_{u:v},\\rho_{v:w}]$ with $u \\neq w$. 
Let $[b_{u}(t), b_{uv}(t) ] = \\mathrm{QBP}^{(1)}_t(\\mu_u,\\nu_{u:v})$ and denote\n\\begin{equation}\nA_{u-v}(t) = \\prod_{w\\in n(u)-v} m_{w\\rightarrow u}(t).\n\\end{equation}\nObserve that $A_{u-v}(t)$ is an operator on $\\mathcal{H}_u$, and by Proposition~\\ref{prop:commute}, $[A_{u-v}(t),\\nu_{u:w}] = 0$ for all $u$, $v$, and $w\\in V$. From Theorem~\\ref{thm:QBP}, we have for $t \\geq diameter(G)$\n\\begin{equation}\n[\\rho_{u:v},\\rho_{v:w}] = \n[ A_{u-v}(t) A_{v-u}(t) \\nu_{u:v}, A_{v-w}(t) A_{w-v}(t) \\nu_{v:w} ] = 0.\n\\end{equation}\n\\end{proof}\n\nCorollary~\\ref{cor:QBP1} shows that for 1-bifactor states on trees, QBP$^{(1)}$ enables an efficient evaluation of the one-vertex and two-vertex reduced density operators $\\rho_{u}$ for all $u \\in V$ and $\\rho_{u\\cup v}$ for all $(u,v) \\in E$. Can this result be generalized to arbitrary bifactor states? This question is of interest since, as we will detail in \\S\\ref{App:Stat}, the Gibbs states used in statistical physics are $\\infty$-bifactor states. However, it is known that approximating the ground state energy of a two-local Hamiltonian on a chain is QMA-complete \\cite{AGK07a,Ira07a}\\footnote{QMA stands for Quantum Merlin-Arthur and it is the natural quantum generalization of the classical complexity class NP. To the best of our knowledge, solving a QMA-complete problem would require an exponential amount of time even on a quantum computer.}. Knowledge of $\\rho_{u\\cup v}$ leads to an efficient evaluation of the energy. Therefore, without any independence assumptions, it is unlikely that an efficient QBP algorithm for $n$-Bifactor Networks will converge to the correct marginals for $n>1$. This contrasts with classical BP, which always converges to the exact solution on trees. However, \\S\\ref{Heur:Rep} gives a QBP algorithm that solves the inference problem for any $n$-bifactor state on a tree in a time that scales exponentially with $n$. \n\n\\subsection{Solving Inference Problems}\n\n\\label{QBP:Inf}\n\nWe close this section with a discussion of how the QBP algorithm can solve inference problems when local measurements are executed on a bifactor state. In other words, for an outcome of a local measurement on a subsystem $U$ described by a POVM element $E^{(j)}_U = \\bigotimes_{u \\in U} E_u^{(j)}$, we are interested in evaluating the marginal states $\\rho_{u|E^{(j)}_U}$ and $\\rho_{u\\cup v|E^{(j)}_U}$ conditioned on the outcome, where\n\\begin{align}\n\\rho_{u|E^{(j)}_U} & = \\frac 1 Y \\PTr{V - u}{(E^{(j)}_U)^{\\frac{1}{2}} \\rho_V (E^{(j)}_U)^{\\frac{1}{2}}} \\\\\n\\rho_{u\\cup v|E^{(j)}_U} & = \\frac 1 Y \\PTr{V - \\{u,v\\}}{(E^{(j)}_U)^{\\frac{1}{2}} \\rho_V (E^{(j)}_U)^{\\frac{1}{2}}},\n\\end{align}\nand $Y$ is a normalization factor.\nFor $u,v \\notin U$, this amounts to a local modification of the bifactor state that accounts for the action of the measurement, the QBP algorithm being otherwise unaltered. We focus on 1-Bifactor Networks and return to the general case at the end of this section.\n\n\\begin{The}\n\\label{thm:update}\nLet $(G,\\rho_V)$ be a 1-Bifactor Network with $G = (V,E)$ a tree. For $U \\subset V$, let $\\{E^{(j)}_U\\} = \\Big\\{\\bigotimes_{u \\in U} E_u^{(j)}\\Big\\}$ be a POVM on the subsystem $U$ and let $W = V-U$. Define $\\mu_u^{(j)} = \\mu_u \\ensuremath{\\star} E_u^{(j)} $ for $u\\in U$ and $\\mu_u^{(j)} = \\mu_u$ for $u \\in W$. Let $[b_{u}(t), b_{uv}(t) ] = \\mathrm{QBP}^{(1)}_t(\\mu_u^{(j)},\\nu_{u:v})$. 
Then for all $t \\geq diameter(G)$, $b_u(t) = \\rho_{u|E_U^{(j)}}$ for all $u \\in W$ and $b_{uv}(t) = \\rho_{u\\cup v|E_U^{(j)}}$ for all $(u,v) \\in E_W$.\n\\end{The} \n\n\\begin{proof}\nThe reduced state on $W$ conditioned on the measurement outcome $E_U^{(j)}$ is given by\n\\begin{align}\n\\rho_{W|E_U^{(j)}} & = \\frac 1Y \\PTr{U}{(E_U^{(j)})^{\\frac{1}{2}} \\rho_V (E_U^{(j)})^{\\frac{1}{2}}} \\\\\n&= \\frac 1Y \\prod_{\\substack{v \\in W \\\\ u\\in U}} \\prod_{(w,x) \\in E} \\mu_v^\\frac 12 \\PTr{U}{(E_u^{(j)})^{\\frac{1}{2}} \\mu_u^\\frac 12 \\nu_{w:x} \\mu_u^\\frac 12 (E_u^{(j)})^{\\frac{1}{2}}}\\mu_v^\\frac 12 \\\\\n&= \\frac 1Y \\prod_{\\substack{v \\in W \\\\ u\\in U}} \\prod_{(w,x) \\in E} \\mu_v^\\frac 12 \\PTr{U}{ \\nu_{w:x} \\mu_u^\\frac 12 E_u^{(j)} \\mu_u^\\frac 12 }\\mu_v^\\frac 12 \\\\\n&= \\frac 1Y \\prod_{\\substack{v \\in W \\\\ u\\in U}} \\prod_{(w,x) \\in E} \\Big(\\mu_v^{(j)}\\Big)^\\frac 12 \\PTr{U}{ \\Big(\\mu_u^{(j)}\\Big)^\\frac 12 \\nu_{w:x} \\Big(\\mu_u^{(j)}\\Big)^\\frac 12 }\\Big(\\mu_v^{(j)}\\Big)^\\frac 12 .\n\\end{align}\nThe result thus follows from Corollary~\\ref{cor:QBP1}.\n\\end{proof}\n\n\nThe result of Theorem~\\ref{thm:update} can easily be extended to compute the conditional marginal states $\\rho_{u|E^{(j)}_U}$ and $\\rho_{u\\cup v|E^{(j)}_U}$ for any $u$ and $v$, not just those in $W = V-U$. This is achieved by altering the beliefs as follows:\n\\begin{equation}\nb_u(t) = \\frac 1Z E_u^{(j)} \\ensuremath{\\star} \\mu_u \\ensuremath{\\star} \\prod_{v' \\in n(u)} m_{v' \\rightarrow u}(t)\n\\end{equation}\nfor $u \\in U$, \n\\begin{equation}\nb_{uv}(t) = \\frac 1Z E_{uv}^{(j)} \\ensuremath{\\star} (\\mu_u \\mu_v)\\ensuremath{\\star} \\Bigg[\\prod_{w \\in n(u)-v} m_{w \\rightarrow u}(t) \\prod_{w' \\in n(v)-u} m_{w' \\rightarrow v}(t) \\ensuremath{\\star} \\nu_{u:v} \\Bigg]\n\\end{equation}\nwith $E_{uv}^{(j)} = E_{u}^{(j)} \\otimes I_v$ when $u\\in U$ and $v \\in W$ and $E_{uv}^{(j)} = E_{u}^{(j)} \\otimes E_{v}^{(j)}$ when $u,v \\in U$. The proof is straightforward and we omit it.\n\nTheorem~\\ref{thm:update} shows how QBP leads to an efficient algorithm for solving inference problems on 1-bifactor states on trees with local measurements. This immediately implies an efficient algorithm for general $n$-bifactor states when $(G,\\rho_V)$ is a quantum Markov network. Indeed, Theorem~\\ref{thm:QBP} demonstrates that in that case the QBP$^{(n)}$ algorithm can be used to efficiently compute the marginal density operators $\\rho_{u\\cup v}$ for all $(u,v) \\in E$. From these, one can straightforwardly obtain the marginal operators $\\rho_u$ for all $u \\in V$ and mutual operators $\\rho_{u:v}$ for all $(u,v) \\in E$. Theorem~\\ref{The:mutual} states that $\\rho_V$ can be represented as a 1-bifactor state in terms of its marginal and mutual operators. The inference problem can then be solved using the QBP$^{(1)}$ algorithm as explained above. \n\n\n\\section{Heuristic Methods}\n\n\\label{Heur}\n\nThe previous section provided conditions under which QBP algorithms give exact solutions to inference problems on $n$-Bifactor Networks. Namely, the underlying graph must be a tree, and the state must be either a quantum Markov network or a 1-bifactor state. When these conditions are not met, QBP algorithms may still be used as heuristic methods to obtain approximate solutions to the inference problem, although in general these approximations will be uncontrolled. 
\n\nTo draw a parallel, classical Belief Propagation algorithms have found applications in numerous distinct scientific fields where they are sometimes known under different names: Gallager decoding, Viterbi's algorithm, sum-product, and iterative turbo decoding in information theory; the cavity method and the Bethe-Peierls approximation in statistical physics; and the junction-tree and Shafer-Shenoy algorithms in machine learning, to name a few. In many of these examples, BP algorithms exhibit good performance on graphs with loops, even though the algorithm does not converge to the exact solution on such graphs. In fact, ``Loopy Belief Propagation'' is often the best-known heuristic method to find approximate solutions to hard problems. Important examples include the near-Shannon-capacity-achieving turbo codes and low-density parity-check codes. On the other hand, there are known examples for which loopy BP fails to converge, and its general realm of applicability is not yet fully understood. \n\nAs in the classical case, one can expect loopy QBP to give reasonable approximations in some circumstances, for instance when the size of typical loops is very large. Intuitively, one expects a local algorithm to be relatively insensitive to the large-scale structure of the underlying graph. However, quantum inference problems also pose a new challenge. Quite apart from issues regarding the graph's topology, an $n$-bifactor state with $n>1$ may not obey the independence conditions required to ensure the convergence of QBP. The goal of this section is to suggest three techniques that are expected to improve the performance of QBP in such circumstances. \n\n\\subsection{Coarse-graining}\n\n\\label{Heur:CG}\n\nBy definition, a quantum Markov network has the property that the correlations from one vertex to the rest of the graph are screened off by its neighbors. When this property fails, QBP will not in general produce the correct solution to an inference problem. Coarse graining is a simple way of modifying a graph in such a way that the state may be closer to forming a quantum Markov network with respect to the new graph than it was with respect to the original graph. \n\nA coarse graining of a graph $G = (V,E)$ is a graph $\\tilde G = (\\tilde V, \\tilde E)$, where $\\tilde V$ is a partition of $V$ into disjoint subsets, and $(U,W) \\in \\tilde{E}$ if there is an edge connecting a vertex in $U$ to a vertex in $W$ in $G$. The coarse-grainings that are of most interest are those that partition $V$ into connected sets of vertices (see Fig.~\\ref{CG} for example). It is an elementary exercise to show that if $(G,\\rho_V)$ is an $n$-Bifactor Network, then $(\\tilde G, \\rho_{\\tilde{V}})$ is an $n$-Bifactor Network for any coarse graining $\\tilde G$. The intuition for why coarse graining might get us closer to a Markov network is that it effectively ``thickens'' the neighborhood of each vertex, which may then be more efficient at screening off correlations. This intuition is illustrated in Fig.~\\ref{CG} and is supported by the fact that Markov networks are fixed points of the coarse graining procedure, i.e. if $\\tilde G$ is a coarse-graining of $G$, then $(\\tilde G, \\rho_{\\tilde{V}})$ is a quantum Markov network whenever $(G,\\rho_V)$ is a Markov network.\n\n\\begin{figure}[h!]\n\\center \\includegraphics[height=1.6in]{CG}\n\\caption{Example of a coarse-grained graph. Figure a) shows in light gray the neighborhood of the darkened vertex in the original graph. 
In b) the dashed ellipses represent coarse-grained vertices. The neighborhood of the darkened coarse-grained vertex is represented by the light gray set.}\n\\label{CG}\n\\end{figure}\n\nAlso note that every graph $G$ can be turned into a tree by a suitable coarse graining. When the obtained Bifactor Network is a Markov Network or when $n=1$, QBP is then guaranteed to converge to the exact solution. The Hilbert space dimension at the vertices of the coarse-grained graph is bounded by an exponential in the tree-width of $G$, so this technique is efficient only for graphs of $O(\\log(N))$ tree-width. \n\n\\subsection{Sliding window QBP}\n\n\\label{Heur:SW}\n\nSliding window QBP is similar in spirit to coarse-graining but is mainly suitable for chains (although the idea is easily generalized to arbitrary trees of low degree). Consider an $n$-bifactor state $\\rho_V$ on a one-dimensional lattice $G = (V,E)$ with $V = \\{v_1,v_2,\\ldots,v_N\\}$ and $E = \\{(v_j,v_{j+1})\\}_{j=1,\\ldots ,N-1}$. When $(G,\\rho_V)$ is not a quantum Markov Network, the diagram of eq.~\\eqref{eq:commute_diag} will generally fail to be commutative. The commutativity of this diagram is essential for the success of QBP, as for instance it implies\n\\begin{align}\n\\PTr{v_1}{(\\mu_{v_1} \\otimes \\mu_{v_2}) \\pns{n} (\\nu_{v_1:v_2} \\nu_{v_2:v_3})} &= \\mu_{v_2} \\pns{n}\\left[ \\PTr{v_1}{\\mu_{v_1} \\pns{n} \\nu_{v_1:v_2}} \\pns{n} \\nu_{v_2:v_3}\\right] \\\\\n&= \\mu_{v_2} \\pns{n} \\big[ m_{v_1\\rightarrow v_2} \\pns{n} \\nu_{v_2:v_3} \\big].\n\\end{align}\nThus, the Hilbert space of vertex $v_1$ is traced out before operators on vertex $v_3$ are brought into the picture. This enables the algorithm to progress along the lattice by evaluating a cumulative operator of constant dimension (i.e. the messages), much in the spirit of the transfer matrix of statistical physics. Without the Markov property, this is generally not possible. \n\nHowever, when vertices separated by a distance $\\ell$ are conditionally independent given the vertices between them, sliding window QBP can be operated efficiently to produce the exact solution of the inference problem. This works by defining new message operators\n\\begin{equation}\n\\tilde m_{v_{j +\\ell-1}\\rightarrow v_{j+\\ell}} = \\PTr{\\{v_1,v_2, \\ldots ,v_j\\}}{\\left[\\bigotimes_{k = 1}^{\\ell + j-1} \\mu_{v_k}\\right] \n\\pns{n} \\left[\\prod_{k = 1}^{\\ell+j-1} \\nu_{v_k:v_{k+1}}\\right]}\n\\end{equation}\nwhich act on $\\mathcal{H}_{v_{j+1}} \\otimes \\mathcal{H}_{v_{j+2}} \\otimes \\ldots \\otimes \\mathcal{H}_{v_{j+\\ell}}$. When\n\\begin{equation}\nS(v_j:v_{j+\\ell}|\\{v_{j+1}, v_{j+2}, \\ldots , v_{j+\\ell-1}\\}) = 0\n\\label{correlation-length}\n\\end{equation}\nfor all $v_j \\in V$, we have the equality\n\\begin{equation}\n\\tilde m_{v_{j+\\ell} \\rightarrow v_{j+\\ell+1}} = \\PTr{v_{j+1}}{\\mu_{v_{j+\\ell}}\\pns{n}\\big[\\tilde m_{v_{j+\\ell-1}\\rightarrow v_{j+\\ell}} \\pns{n} \\nu_{v_{j+\\ell}:v_{j+\\ell+1}}\\big]},\n\\end{equation}\nso inference problems can be solved exactly with operators whose dimension grows exponentially with $\\ell$ rather than with the lattice size $N$. In particular, this method can be applied to spin systems that have a finite correlation length because then eq.~\\eqref{correlation-length} can be expected to hold approximately for some finite $\\ell$.\n\n\\subsection{Replicas}\n\n\\label{Heur:Rep}\n\nThe method of replicas maps $n$-bifactor states to 1-bifactor states on which QBP$^{(1)}$ can be implemented without concerns for independence. 
This is achieved by replacing the systems $v$ on each vertex of the graph $G$ by $n$ replicas, so that the Hilbert space associated to vertex $v$ becomes $\\mathcal{H}_v^{\\otimes n}$. As a consequence, the algorithm suffers an overhead exponential in $n$. The name ``replica'' is borrowed from the analogous technique used in the study of classical quenched disordered systems. The validity of this technique is based on the following observation. \n\n\\begin{Prop}\\label{prop:product}\nLet $\\{\\mathcal{H}_j\\}_{j=1,\\ldots,n}$ be isomorphic Hilbert spaces. Let $T^{(n)}$ be the operator that cyclically permutes these $n$ systems. Let $A_1$ be an arbitrary operator on $\\mathcal{H}_1$, and define $A_j = (T^{(n)})^{j-1} A_1 (T^{(n)\\dagger})^{j-1}$ to be the corresponding operators on $\\mathcal{H}_j$. Then for any set of operators $\\{A_1^{(k)}\\}$ on $\\mathcal{H}_1$, the following equality holds:\n\\begin{equation}\nA_1^{(1)}A_1^{(2)}\\ldots A_1^{(n)} = \\PTr{2,3,\\ldots, n}{[A_1^{(1)}\\otimes A_2^{(2)}\\otimes\\ldots\\otimes A_n^{(n)}]T^{(n)}}.\n\\end{equation}\n\\end{Prop}\n\nWe are now in a position to formalize the replica method. \n\n\\begin{The} Let $(G,\\rho_V)$ be an $n$-Bifactor Network, with operators $\\mu_u$ and $\\nu_{u:v}$. Then, $\\rho_V$ is locally isomorphic to a 1-bifactor state with Hilbert spaces comprising $n$ replicas of the original system $\\mathcal{H}_u' = \\mathcal{H}_{u_1} \\otimes \\mathcal{H}_{u_2} \\otimes \\ldots \\otimes \\mathcal{H}_{u_n}$ for all $u \\in V$. The partial isomorphism at vertex $u$ is given by $\\PTr{u_2,u_3,\\ldots,u_n}{(T^{(n)\\dagger}_u)^\\frac 12 \\cdot (T^{(n)}_u)^\\frac 12}$. More precisely, we claim that\n\\begin{equation}\n\\rho_V = \\PTr{\\{u_2,u_3,\\ldots, u_n\\}_{u \\in V}}{U^\\dagger \\left(\\bigotimes_{u\\in V} \\tilde \\mu_u\\right) \\ensuremath{\\star} \\left(\\prod_{(v,w) \\in E} \\tilde\\nu_{v:w}\\right) U}\n\\end{equation}\nwhere\n\\begin{align}\n\\tilde \\mu_u &= \\left(\\mu_u^{\\frac{1}{n}} \\right)^{\\otimes n} \\left(T_u^{(n)}\\right) \\\\\n\\tilde \\nu_{u:v} &= \\left(\\nu_{u:v}^{\\frac 1n}\\right)^{\\otimes n} \\\\\nU &= \\bigotimes_{u \\in V} (T^{(n)}_u)^\\frac 12\n\\end{align}\nand $\\tilde\\mu_u$ acts on $\\mathcal{H}_u'$, $\\tilde\\nu_{u:v}$ acts on $\\mathcal{H}_u' \\otimes \\mathcal{H}_v'$, and $U$ acts on $\\bigotimes_{u \\in V} \\mathcal{H}_u'$.\n\\end{The}\n\n\\begin{proof}\nFirst, note that $T_u^{(n)}$ commutes with $\\left(\\mu_u^{\\frac{1}{n}} \\right)^{\\otimes n}$, so $\\tilde \\mu_u^\\frac 12 = \\left(\\mu_u^{\\frac{1}{2n}} \\right)^{\\otimes n} \\left(T_u^{(n)}\\right)^\\frac 12 = \\left(T_u^{(n)}\\right)^\\frac 12 \\left(\\mu_u^{\\frac{1}{2n}} \\right)^{\\otimes n}$. 
Thus\n\\begin{align}\n& \\PTr{\\{u_2,u_3,\\ldots, u_n\\}_{u \\in V}}{U^\\dagger \\left(\\bigotimes_{u\\in V} \\tilde \\mu_u\\right) \\ensuremath{\\star} \\left(\\prod_{(v,w) \\in E} \\tilde\\nu_{v:w}\\right)U} \\\\\n&= \\PTr{\\{u_2,u_3,\\ldots, u_n\\}_{u \\in V}}{U^\\dagger \\left(\\bigotimes_{u\\in V} T_u^{(n)} \\left(\\mu_u^{\\frac 1n} \\right)^{\\otimes n}\\right) \\ensuremath{\\star} \\left(\\prod_{(v,w) \\in E} \\left(\\nu_{v:w}^{\\frac 1n}\\right)^{\\otimes n} \\right)U} \\\\\n&= \\PTr{\\{u_2,u_3,\\ldots, u_n\\}_{u \\in V}}{\\left(\\bigotimes_{u\\in V} \\left(\\mu_u^{\\frac 1n} \\right)^{\\otimes n}\\right) \\ensuremath{\\star} \\left(\\prod_{(v,w) \\in E} \\left(\\nu_{v:w}^{\\frac 1n}\\right)^{\\otimes n} \\right) \\bigotimes_{u \\in V} T_u^{(n)} } \\\\\n&= \\PTr{\\{u_2,u_3,\\ldots, u_n\\}_{u \\in V}}{\\left[\\left(\\bigotimes_{u\\in V} \\mu_u^{\\frac 1n} \\right) \\ensuremath{\\star} \\left(\\prod_{(v,w) \\in E} \\nu_{v:w}^{\\frac 1n} \\right)\\right]^{\\otimes n} \\bigotimes_{u \\in V} T_u^{(n)} } \\\\\n&= \\left[\\left(\\bigotimes_{u\\in V} \\mu_u^{\\frac 1n} \\right) \\ensuremath{\\star} \\left(\\prod_{(v,w) \\in E} \\nu_{v:w}^{\\frac 1n} \\right)\\right]^n = \\rho_V\n\\end{align}\nwhere we used Proposition~\\ref{prop:product} to obtain the last line. \n\\end{proof}\n\nSince the dimension of the Hilbert space at each vertex grows exponentially with $n$, the QBP$^{(1)}$ algorithm used to solve the corresponding inference problem suffers an exponential overhead. One can make a replica symmetry ansatz, assuming that the state is symmetric under exchange of replica systems at any given vertex. Since the symmetric subspace of $\\mathcal{H}_v^{\\otimes n}$ grows polynomially\\footnote{More precisely, it grows as $\\binom{n+d-1}{n} \\approx n^{d-1}$.} with $n$, the QBP algorithm can be executed efficiently. The validity of this ansatz cannot be verified in general, but it may serve as a good heuristic method. \n\n\n\\section{Applications}\n\n\\label{App}\n\nThis section explains in some detail how QBP can be used as a heuristic algorithm to find approximate solutions to important problems in quantum error correction and the simulation of many-body quantum systems. The focus will be on the reduction of well-established problems to inference problems on $n$-Bifactor Networks. One can make use of the techniques discussed in the previous section whenever the resulting Graphical Model does not meet the requirements to ensure convergence of QBP, or when these conditions cannot be verified efficiently.\n\n\\subsection{Quantum Error Correction}\n\n\\label{App:QEC}\n\nMaximum-likelihood decoding is an important task in quantum error correction (QEC). As in classical error correction, this problem reduces to the evaluation of marginals on a factor graph, also called a Tanner graph in this context. More precisely, for independent error models, the quantum channel conditioned on the error syndrome is a 1-bifactor state. As a consequence, qubit-wise maximum-likelihood decoding of a QEC stabilizer code reduces to an inference problem on a 1-Bifactor Network. Thus, there is no independence condition that needs to be verified, although the graph will generally contain loops. Before demonstrating this reduction, a brief summary of stabilizer QEC is in order; see \\cite{Got97a} for more details. For details on the use of Belief Propagation for the decoding of classical error correction codes, the reader is referred to the text of MacKay \\cite{Mac03a} and the forthcoming book of Richardson and Urbanke \\cite{RU05a}. 
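\n\nTo fix ideas, the following toy sketch (in Python) performs qubit-wise maximum-likelihood decoding by brute force for the standard three-qubit bit-flip code with stabilizers $Z_1Z_2$ and $Z_2Z_3$ and independent $X$ errors of probability $p$; the numerical values of $p$ and of the observed syndrome are illustrative choices only, and on larger codes the exhaustive sum below is what Belief Propagation on the factor graph constructed in the remainder of this section is meant to replace.\n\\begin{verbatim}\nfrom itertools import product\n\n# Toy example: three-qubit bit-flip code, stabilizers Z1Z2 and Z2Z3,\n# independent X errors with probability p (all values illustrative).\np = 0.1\nchecks = [(0, 1), (1, 2)]   # qubit pairs entering each Z-type check\nobserved = (1, 0)           # example syndrome: Z1Z2 violated, Z2Z3 not\n\ndef prior(e):\n    # Probability of the X-error pattern e under i.i.d. bit-flip noise.\n    pr = 1.0\n    for b in e:\n        pr *= p if b else 1.0 - p\n    return pr\n\ndef syndrome(e):\n    # A check fires iff the error overlaps it on an odd number of qubits.\n    return tuple((e[a] + e[b]) % 2 for a, b in checks)\n\n# Qubit-wise posteriors P(e_i = 1 | syndrome), by exhaustive summation.\npatterns = [e for e in product((0, 1), repeat=3) if syndrome(e) == observed]\nZ = sum(prior(e) for e in patterns)\nmarginals = [sum(prior(e) for e in patterns if e[i]) / Z for i in range(3)]\nprint(marginals)   # a flip on qubit 0 is by far the most likely explanation\n\\end{verbatim}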
\n\nConsider a collection of $N$ two-dimensional quantum systems (qubits) $V = \\{u\\}_{u=1,\\ldots,N}$ with $\\mathcal{H}_u = \\mathbb{C}^2$. A QEC code is a subspace $\\mathcal{C} \\subseteq \\mathcal{H}_V$ that is the $+1$ eigensubspace of a collection of commuting operators $S_j$, $j=1,\\ldots,N-K$, called stabilizer generators. Each stabilizer generator is a tensor product of Pauli operators on a subset $U_j$ of $V$:\n\\begin{equation}\nS_j = \\bigotimes_{u \\in U_j} \\sigma^{\\alpha^u_j}_u\n\\end{equation}\nwhere $\\alpha_j^u \\in \\{x,y,z\\}$. When the stabilizer generators are multiplicatively independent, the code encodes $K$ qubits, i.e. $\\mathcal{C}$ has dimension $2^K$. For each $j=1,\\ldots,N-K$, define the two projectors $P_j^\\pm = (I \\pm S_j)\/2$. The code space is therefore defined as $\\mathcal C = (\\prod_j P_j^+) \\mathcal H_V$. \n\nError correction consists of three steps. First, the system $V$ is prepared in a code state $\\rho_V$ supported on $\\mathcal C$, in such a way that $P_j^+ \\rho_V P_j^+ = \\rho_V$ for all $j$. The state is then subjected to the channel $\\rho_V \\rightarrow \\mathcal E_{V|V}(\\rho_V)$. Second, each stabilizer generator $S_j$ is measured, yielding an outcome $s_j = \\pm$ with probability $\\Tr{P^\\pm_j \\mathcal{E}_{V|V}(\\rho_V)}$. The collection of all $N-K$ measurement outcomes $s_j$, called the error syndrome, is denoted ${\\bf s} = (s_1,s_2,\\ldots, s_{N-K})\\in \\{-,+\\}^{N-K}$. Third, the channel $\\mathcal E_{V|V}$ is updated conditioned on the error syndrome $\\bf s$. Based on this updated channel, the optimal recovery is computed and implemented. \n\nThe computationally difficult step in the above protocol consists in conditioning the channel on the error syndrome. To understand this problem, it is useful to express the channel in a Kraus form $\\mathcal{E}_{V|V}(\\rho_V) = \\sum_k M^{(k)}_{V|V} \\rho_V M^{(k)\\dagger}_{V|V}$ where $\\{M^{(k)}\\}$ are operators on $\\mathcal{H}_V$. When $s_j = +$, we learn that the error that has affected the state commutes with $S_j$, while $s_j = -$ indicates that the error anti-commutes with $S_j$. To update the channel conditioned on the error syndrome $s_j = +$, say, we first decompose each Kraus operator $M^{(k)}_{V|V}$ as the sum of an operator that commutes with $S_j$ and an operator that does not commute with $S_j$: $M^{(k)}_{V|V} = M^{(k)+}_{V|V} + M^{(k)\\prime}_{V|V}$ where $ M^{(k)+}_{V|V} = P_j^+ M^{(k)}_{V|V} P_j^+$ and $ M^{(k)\\prime}_{V|V} = M^{(k)}_{V|V} - M^{(k)+}_{V|V}$. The updated channel is obtained by throwing away the primed component $ M^{(k)\\prime}_{V|V}$ of each Kraus operator, and renormalizing. \n\nIn what follows, we demonstrate how the conditional channel can be expressed as a factor graph. This is most easily done using the Jamio\\l kowski representation of quantum channels. For each quantum system $v$, let $R_v$ denote a reference for $v$, with Hilbert space $\\mathcal{H}_{R_v} \\simeq \\mathcal H_v$. Define the maximally entangled state between system $v$ and its reference by $\\Ket\\Phi_{vR_v} = \\frac{1}{\\sqrt d} \\sum_j \\Ket{j}_v\\Ket{j}_{R_v}$. Then, the Jamio\\l kowski representation of a channel $\\mathcal E_{V|V}$ is a density operator $\\rho_{\\overline V}$ on $\\mathcal H_{\\overline V} = \\mathcal H_V \\otimes \\mathcal H_{R_V}$ given by $\\rho_{\\overline V} = (\\mathcal E_{V|V} \\otimes \\mathcal I_{R_V|R_V})(\\kb{\\Phi}{\\Phi}_{VR_V})$, where $\\mathcal I$ denotes the identity channel. For the independent error models considered here, $\\rho_{\\overline V} = \\bigotimes_{u \\in V} \\rho_{\\overline u}$. 
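\n\nFor concreteness, the following short numerical sketch builds such a single-qubit factor $\\rho_{\\overline u}$ directly from the Kraus operators of a channel, $\\rho_{\\overline u} = \\sum_k (M^{(k)} \\otimes I)\\kb{\\Phi}{\\Phi}(M^{(k)} \\otimes I)^\\dagger$; the depolarizing channel and the value of $p$ are illustrative assumptions, not tied to any particular code discussed here.\n\\begin{verbatim}\nimport numpy as np\n\n# Kraus operators of a single-qubit depolarizing channel (illustrative).\nI2 = np.eye(2, dtype=complex)\nsx = np.array([[0, 1], [1, 0]], dtype=complex)\nsy = np.array([[0, -1j], [1j, 0]])\nsz = np.diag([1.0 + 0j, -1.0])\np = 0.1\nkraus = [np.sqrt(1 - p) * I2] + [np.sqrt(p / 3) * s for s in (sx, sy, sz)]\n\n# Maximally entangled state |Phi> = (1/sqrt(d)) sum_j |j>_v |j>_R, with d = 2.\nd = 2\nphi = sum(np.kron(np.eye(d)[j], np.eye(d)[j]) for j in range(d)) / np.sqrt(d)\nPhi = np.outer(phi, phi.conj()).astype(complex)\n\n# Jamiolkowski matrix: the channel acts on the system factor only.\nrho = sum(np.kron(M, I2) @ Phi @ np.kron(M, I2).conj().T for M in kraus)\n\nprint(np.isclose(np.trace(rho).real, 1.0))   # normalized (trace one)\nprint(np.allclose(rho, rho.conj().T))        # Hermitian\n\\end{verbatim}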
\n\nFor each stabilizer generator $S_j$, denote $\\overline{S}_j = \\bigotimes_{u \\in U_j} \\sigma^{\\alpha_j^u}_u \\otimes \\sigma^{\\alpha_j^u}_{R_u}$, and construct the associated projectors $\\overline{P}_j^\\pm = (I \\pm \\overline{S}_j)\/2$. An important property of these operators is that they fix the maximally entangled state: $\\overline{S}_j \\Ket{\\Phi}_{VR_V} = \\overline{P}_j^+ \\Ket{\\Phi}_{VR_V} = \\Ket{\\Phi}_{VR_V}$. Let $E$ be an operator on $V$. If $E$ commutes with $S_j$, we have $\\overline{P}_j^+ (E\\otimes I_{R_V}) \\Ket{\\Phi}_{VR_V} = (E\\otimes I_{R_V}) \\Ket{\\Phi}_{VR_V}$ and $\\overline{P}_j^- (E\\otimes I_{R_V}) \\Ket{\\Phi}_{VR_V} = 0$, while if $E$ anti-commutes with $S_j$, the same identities hold with $\\overline P_j^+$ and $\\overline P_j^-$ exchanged. It follows from this observation that, conditioned on the error syndrome $\\bf s$, the channel is described by the Jamio\\l kowski matrix\n\\begin{equation}\n\\rho_{\\overline V|{\\bf s}} = \\frac 1Z \\prod_{j} \\overline P_j^{s_j} \\ensuremath{\\star} \\bigotimes_{v \\in V} \\rho_{\\overline v}, \n\\end{equation}\nwhich is a quantum factor graph. \n\nThere are a number of relevant quantities that can be evaluated from this factor graph. For instance, one can efficiently evaluate the conditional channel on any constant-size set of qubits $W \\subset V$ via partial trace. This is useful in iterative decoding schemes such as those used for quantum turbo-codes \\cite{OPT07a} and low-density parity-check codes \\cite{COT05a}. In those cases, the conditional channel on $W$ can only be evaluated approximately since it requires loopy QBP. The factor graph also enables exact evaluation of the logical error in a concatenated block coding scheme \\cite{Pou06b} such as that used in fault-tolerant protocols. \n\n\\subsection{Simulation of Many-Body Quantum Systems}\n\n\\label{App:Stat}\n\nIn statistical physics, the state of a many-body quantum system $V$ is a Gibbs state $ \\rho_V = \\frac 1Z \\exp(-\\beta H)$ for some Hamiltonian $H$, where $\\beta = 1\/T$ is the inverse temperature. Typically, $H$ is the sum of single- and two-body interactions $H = \\sum_{u \\in V} H_u + \\sum_{(v,w) \\in E} H_{vw}$ on some graph $G = (V,E)$. Understanding the correlations present in these states is a great challenge in theoretical physics. In this section, we describe how QBP can serve as a heuristic method to accomplish this task approximately. For an account of the use of Belief Propagation in classical statistical mechanical systems, we refer the reader to the text of M\\'ezard and Montanari \\cite{MM07a}.\n\nDefining $\\mu_u = \\exp(-\\beta H_u)$ and $\\nu_{v:w} = \\exp(-\\beta H_{vw})$ gives an expression for $\\rho_V$ of the form of eq.~\\eqref{Graph:QGSEInf}:\n\\begin{equation}\n\\rho_V = \\left(\\bigotimes_{v \\in V} \\mu_v\\right) \\odot \\left(\\bigodot_{(v,w) \\in E} \\nu_{v:w}\\right).\n\\end{equation}\nThus, $\\rho_V$ is an $\\infty$-bifactor state. As mentioned in \\S\\ref{QBP}, a QBP$^{(\\infty)}$ algorithm can easily be formulated for this type of bifactor state, and it still converges to the exact solutions of the corresponding inference problem when $\\rho_V$ is a quantum Markov network and $G$ is a tree. This requires replacing all matrix products $\\prod$ by the commutative product $\\odot$ in the defining equations of QBP$^{(\\infty)}$, eqs.~(\\ref{message}-\\ref{belief2}). The proof of the convergence Theorem~\\ref{thm:QBP} under these more general conditions follows essentially the same reasoning. 
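\n\nAs a quick numerical sanity check of the bifactor representation of the Gibbs state above, the sketch below verifies on a three-site chain (a Heisenberg coupling with a small field, all parameter values chosen for illustration only) that the local factors $\\mu_u = \\exp(-\\beta H_u)$ and $\\nu_{u:u+1} = \\exp(-\\beta H_{u,u+1})$ indeed reassemble $\\exp(-\\beta H)$; here the commutative product is taken to be $A \\odot B = \\exp(\\log A + \\log B)$ for strictly positive operators, which we assume agrees with the definition used earlier in the text.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import expm, logm\n\nI2 = np.eye(2, dtype=complex)\nsx = np.array([[0, 1], [1, 0]], dtype=complex)\nsy = np.array([[0, -1j], [1j, 0]])\nsz = np.diag([1.0 + 0j, -1.0])\n\ndef embed(op, first_site, n_sites):\n    # Embed an operator acting on consecutive sites into the full chain.\n    width = int(round(np.log2(op.shape[0])))\n    out = np.eye(1, dtype=complex)\n    for _ in range(first_site):\n        out = np.kron(out, I2)\n    out = np.kron(out, op)\n    for _ in range(n_sites - first_site - width):\n        out = np.kron(out, I2)\n    return out\n\nN, beta, h, J = 3, 0.7, 0.3, 1.0\nH1 = h * sz                                                     # vertex term\nH2 = J * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))  # edge term\n\nH = sum(embed(H1, u, N) for u in range(N))\nH = H + sum(embed(H2, u, N) for u in range(N - 1))\nrho_exact = expm(-beta * H)              # unnormalized Gibbs state exp(-beta H)\n\n# Local factors mu_u and nu_{u:u+1}, embedded in the full Hilbert space.\nmus = [embed(expm(-beta * H1), u, N) for u in range(N)]\nnus = [embed(expm(-beta * H2), u, N) for u in range(N - 1)]\n\n# Commutative product of all factors: exp of the sum of their logarithms.\nrho_bifactor = expm(sum(logm(f) for f in mus + nus))\n\nprint(np.max(np.abs(rho_bifactor - rho_exact)))   # numerically zero\n\\end{verbatim}\nThe check manipulates full $2^N \\times 2^N$ matrices and is therefore only a consistency test; the point of QBP$^{(\\infty)}$ is precisely to avoid constructing $\\rho_V$ globally.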
\n\nTo obtain a bifactor state that satisfies the commutation condition $[\\nu_{u:v},\\nu_{w:x}]=0$, it is possible to coarse-grain $G$ in such a way that the resulting interactions between coarse-grained neighbors commute. Consider for instance a one-dimensional chain $G = (V,E)$ with $V = \\{u\\}_{u=1,\\ldots,N}$ and $E = \\{(u,u+1)\\}_{u=1,\\ldots,N-1}$. We can construct a coarse-grained graph $\\tilde G$ by identifying all vertices $2 u-1$ and $2 u$ for $u=1,\\ldots,\\lfloor\\frac N 2\\rfloor$. The state $\\rho_V$ is then an $\\infty$-bifactor state on $\\tilde G$, with operators\n\\begin{align}\n\\tilde\\mu_u &= \\mu_{2u-1} \\odot \\mu_{2u} \\odot \\nu_{2u-1:2u} \\\\\n\\tilde\\nu_{u:u+1} &= \\nu_{2u:2u+1},\n\\end{align}\nsatisfying $[\\tilde{\\nu}_{u:u+1},\\tilde{\\nu}_{v:v+1}]=0$ for all $u,v$.\nThus, $\\infty$-bifactor states are commonplace in quantum many-body physics. Unfortunately, the convergence of the QBP algorithm in this case requires the state to be a quantum Markov network, which cannot be tested directly in general. As we will now explain, it is often possible to reasonably approximate a Gibbs state by an $n$-bifactor state with finite $n$, and sometimes even $n=1$.\n\nA simple way to obtain an $n$-bifactor state is to approximate $\\odot$ by $\\pns{n}$ for some large value of $n$. In the context of many-body physics, this is called a Trotter-Suzuki decomposition of the Gibbs state, and becomes more accurate as the ratio $\\beta\/n$ decreases. The QBP$^{(n)}$ algorithm can then be operated on this $n$-bifactor state, but its convergence again requires some independence condition that cannot be verified systematically. Alternatively, one can use the replica method described in section \\ref{Heur:Rep} and solve the inference problem exactly with QBP$^{(1)}$, but with an increase in complexity exponential in $n$. The replica method is then reminiscent of the well-known correspondence between quantum statistical mechanics in $d$ dimensions and classical statistical mechanics in $d+1$ dimensions, where the extra dimension represents inverse temperature. \n\nThe 1-bifactor states also capture the correlations of some non-trivial quantum many-body systems. {\\em Valence bond solid} (VBS) states were introduced in Refs.~\\cite{AKLT87a,AKLT88a} as exact ground states (i.e. $T=0$ Gibbs states) of spin systems with interesting properties. Recent work has generalized these constructions to {\\em matrix product states} (MPS) in one dimension \\cite{FNW92a,Vid04a,Vid06a}, and {\\em projected entangled-pair states} (PEPS) for higher dimensions \\cite{VC04a,SDV06a}. These form an important class of states for the description of quantum many-body systems. For instance, {\\em density matrix renormalization group} (DMRG) \\cite{Whi92a} --- one of the most successful methods for the numerical study of spin chains --- is now understood as a variational method over MPS \\cite{OR95a,DMNS98a,VPC04b}. All these states are instances of 1-bifactor states.\n\n\\begin{figure}[h!]\n\\center \\includegraphics[height=1.6in]{VBS}\n\\caption{Projected entangled-pair state on a two-dimensional square lattice. The vertices are associated with the dashed circles. Each $\\bullet$---$\\bullet$ represents a maximally entangled state of dimension $D$ shared between neighboring vertices. A partial isometry $A_u: (\\mathbb{C}^D)^{c_u} \\rightarrow \\mathbb{C}^d$ is applied at each vertex, where $c_u$ is the degree of vertex $u$. 
}\n\\label{VBS}\n\\end{figure}\n\nFor the sake of simplicity, we will demonstrate this claim for one-dimensional MPS, but the same argument holds for higher dimensions. The MPS $\\Ket\\Psi$ is a pure state of a collection of $N$ $d$-dimensional quantum systems arranged on a one-dimensional lattice. Each vertex $u$ is assigned two ``virtual particles'' $L_u$ and $R_u$, where $L$ and $R$ stand for left and right (see Fig.~\\ref{VBS} for an illustration of this construction in two dimensions). Each of these particles is associated with a Hilbert space $\\mathcal{H}_{L_u} = \\mathcal{H}_{R_u} = \\mathbb{C}^D$. Initially, the right particle of vertex $u$ is in a maximally entangled state with the left particle of vertex $u+1$; $\\Ket{\\Phi}_{R_u\\cup L_{u+1}} = \\frac{1}{\\sqrt D} \\sum_{\\alpha=1}^D \\Ket{\\alpha}_{R_u}\\Ket{\\alpha}_{L_{u+1}}$ where the $\\Ket\\alpha$ are orthonormal basis vectors for $\\mathbb{C}^D$. (The lattice can be closed to form a circle, in which case we identify $N+1 = 1$.) The initial state is therefore $\\Ket{\\Phi_0} = \\bigotimes_u \\Ket{\\Phi}_{R_u\\cup L_{u+1}}$. \n\nTo obtain the MPS, apply an operator $A_u: \\mathcal{H}_{L_u}\\otimes \\mathcal{H}_{R_u} \\rightarrow \\mathbb{C}^d$,\n\\begin{equation}\nA_u = \\sum_{j=1}^d \\sum_{\\alpha,\\beta = 1}^D A_u^{j,\\alpha,\\beta} \\kb{j}{\\alpha,\\beta},\n\\end{equation}\nto each vertex of the lattice. The vectors $\\Ket j$ form an orthonormal basis for $\\mathbb{C}^d$. The resulting state is\n\\begin{equation}\n\\Ket{\\Psi} = \\bigotimes_{u=1}^N A_u \\Ket{\\Phi_0} \n\\propto \\sum_{j_1,j_2,\\ldots,j_N=1}^d \\Tr{B_1^{j_1}B_2^{j_2}\\ldots B_N^{j_N}} \\Ket{j_1,j_2,\\ldots,j_N}\n\\label{eq:MPS}\n\\end{equation}\nwhere the matrices $B_u^j$ are the submatrices of $A_u$ with matrix elements $(B_u^j)_{(\\alpha,\\beta)} = A_u^{j,\\alpha,\\beta}$. \n\n\nFor the corresponding 1-bifactor state, the underlying graph $G = (V,E)$ is also a one-dimensional lattice with $V = \\{1,2,\\ldots,N\\}$ and $E = \\{(1,2),(2,3),\\ldots ,(N-1,N)\\}$. The Hilbert space associated to vertex $u$ is $\\mathcal{H}_u = \\mathbb{C}^D \\otimes\\mathbb{C}^D$. As above, it is convenient to imagine that each vertex $u$ is composed of two $D$-dimensional subsystems $L_u$ and $R_u$. Then, up to a local isometry, the MPS of eq.~\\eqref{eq:MPS} can be expressed as a 1-bifactor state of the form of eq.~\\eqref{Graph:QGSE} with\n\\begin{equation}\n\\mu_u = A_u^\\dagger A_u \\quad \\mathrm{and}\\quad \\nu_{u:v} = \\kb{\\Phi}{\\Phi}_{R_u\\cup L_v}.\n\\end{equation}\nMoreover, the operators $\\nu_{u:v}$ mutually commute. To see the relation with eq.~\\eqref{eq:MPS}, note that the operators $A_u$ admit a polar decomposition $A_u = U_u \\sqrt{A_u^\\dagger A_u} = U_u \\mu_u^\\frac 12$.\\footnote{Note that $\\mu_u$ has rank $\\leq d$. This can be seen straightforwardly by writing $(\\mu_u)_{(\\alpha,\\beta),(\\gamma,\\delta)} = \\sum_{j=1}^d A_u^{*j,\\alpha,\\beta}A_u^{j,\\gamma,\\delta}$, i.e. $\\mu_u = \\sum_{j=1}^d \\kb{h_u^j}{h_u^j}$ where $|h_u^j\\rangle = \\sum_{\\alpha,\\beta} A_u^{j,\\alpha,\\beta} \\Ket{\\alpha,\\beta} \\in \\mathcal{H}_u$.} The matrix $U_u$ is a partial isometry $\\mathcal{H}_u \\rightarrow \\mathbb{C}^d$ and\n\\begin{align}\n\\kb\\Psi\\Psi &= \\frac 1Z \\left(\\prod_{u \\in V} A_u\\right) \\kb{\\Phi_0}{\\Phi_0}\\left(\\prod_{u \\in V} A^\\dagger_u\\right) \\\\\n&= \\frac 1Z \\left(\\prod_{u \\in V} U_u\\mu_u^\\frac 12 \\right) \\left(\\prod_{(v,w) \\in E} \\nu_{v:w}\\right) \\left(\\prod_{u \\in V} \\mu_u^\\frac 12 U^\\dagger_u \\right) \\\\\n&= \\frac 1Z \\left(\\prod_{u \\in V} U_u \\right) \\left(\\bigotimes_{u \\in V} \\mu_u \\right) \\ensuremath{\\star} \\left( \\prod_{(v,w) \\in E} \\nu_{v:w}\\right) \\left(\\prod_{u \\in V} U^\\dagger_u \\right)\n\\end{align}\nas claimed. 
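\n\nFor readers who prefer a concrete check, the short Python sketch below builds a small random MPS in two ways, first by applying the maps $A_u$ to maximally entangled virtual pairs on a ring and then through the trace formula of eq.~\\eqref{eq:MPS}, and verifies that the two coincide; the system sizes and the random tensors are arbitrary illustrative choices.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nN, d, D = 3, 2, 2                       # sites, physical and bond dimensions\n\n# Random site tensors A_u^{j,alpha,beta}; B_u^j are the D x D submatrices.\nA = [rng.standard_normal((d, D, D)) + 1j * rng.standard_normal((d, D, D))\n     for _ in range(N)]\n\n# Construction 1: apply the tensor product of the A_u to the entangled pairs.\n# Virtual factors are ordered (L_1, R_1, L_2, R_2, ..., L_N, R_N).\nphi0 = np.zeros(D ** (2 * N), dtype=complex)\nfor m in np.ndindex(*([D] * N)):        # m[u] labels the pair R_u -- L_{u+1}\n    digits = []\n    for u in range(N):\n        digits += [m[(u - 1) % N], m[u]]   # (L_u, R_u) = (m_{u-1}, m_u)\n    idx = 0\n    for dig in digits:\n        idx = idx * D + dig\n    phi0[idx] = D ** (-N / 2)\n\nA_mats = [A[u].reshape(d, D * D) for u in range(N)]   # maps L_u, R_u -> C^d\nbig_A = A_mats[0]\nfor u in range(1, N):\n    big_A = np.kron(big_A, A_mats[u])\npsi1 = big_A @ phi0\n\n# Construction 2: the trace formula of eq. (MPS), with the same prefactor.\npsi2 = np.zeros([d] * N, dtype=complex)\nfor js in np.ndindex(*([d] * N)):\n    M = np.eye(D, dtype=complex)\n    for u, j in enumerate(js):\n        M = M @ A[u][j]\n    psi2[js] = np.trace(M)\npsi2 = psi2.reshape(-1) * D ** (-N / 2)\n\nprint(np.allclose(psi1, psi2))          # True: both constructions agree\n\\end{verbatim}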
\n\nBifactor states are thus relevant to the description of quantum many-body systems. QBP can sometimes be used to efficiently compute correlation functions, but in general, for spatial dimensions larger than one, its convergence is not guaranteed. This is mainly due to the presence of small loops in the underlying graph. Partial solutions have been proposed to overcome this difficulty \\cite{VC04a}, and it is conceivable that techniques from loopy Belief Propagation and its generalizations \\cite{YFW02a} will improve these algorithms. As in the classical case, however, QBP may be more appropriate for the study of quantum systems on irregular sparse graphs, such as those encountered in classical spin glasses. \n\nFinally, it should be noted that the Markov conditions required to certify the convergence of QBP --- or the associated coarse-grained Markov conditions as explained in the previous section --- are weaker than those typically studied in statistical physics, namely the vanishing of connected correlation functions beyond some length scale. For pure quantum states, the two notions coincide and are equivalent to the absence of long-range entanglement. At finite temperature, however, the state is mixed and the vanishing of the mutual information between vertices $u$ and $u+\\ell$ conditioned on vertices $u+1,\\ldots,u+\\ell-1$, eq.~\\eqref{correlation-length}, does not imply the absence of connected correlations $\\langle A_uA_{u+\\ell}\\rangle = \\Tr{\\rho_V A_u A_{u+\\ell}} - \\Tr{\\rho_V A_u}\\Tr{\\rho_VA_{u+\\ell}}$.\n\n\\section{Related Work}\n\n\\label{Relate}\n\nIn this section, our approach to quantum Graphical Models and Belief Propagation is compared to other proposals that have appeared in the literature. Firstly, Tucci has developed an approach to quantum Bayesian Networks \\cite{Tuc95a}, Markov Networks \\cite{Tuc07a} and Belief Propagation \\cite{Tuc98a} based on a different analogy between quantum theory and classical probability, namely the idea that probabilities should be replaced by complex-valued amplitudes. Tucci's models require that these amplitudes should factorize according to conditions similar to those used in classical Graphical Models. One disadvantage of this is that the definition requires a fixed basis to be chosen for the system at each vertex of the graph, and the factorization condition for Bayesian Networks is not preserved under changes of this basis. In contrast, our definition of quantum conditional independence is based on an explicitly basis-independent quantity, so it does not have this problem. 
Another difficulty with using amplitudes is that they are only well-defined for pure states, so that mixed states have to be represented as purifications on larger networks. In our approach, density operators are taken as primary, so mixed states can be represented without purification. On the other hand, Tucci's definitions can easily accommodate unitary time evolution, whereas we do not have a general treatment of dynamics in our approach at the present time. A related definition of quantum Markov Networks, also based on amplitudes but without a development of the corresponding Belief Propagation algorithm, has been proposed by La Mura and Swiatczak \\cite{LMS07a}, to which similar comments apply.\n\nThere has also been work on Quantum Markov networks within the quantum probability literature \\cite{Lei01a,AF03a,AF03b}, although Belief Propagation has not been investigated in this literature. This is closer to the spirit of the present work, in the sense that it is based on the generalization of classical probability to a noncommutative, operator-valued probability theory. These works are primarily concerned with defining the Markov condition in such a way that it can be applied to systems with an infinite number of degrees of freedom, and hence an operator algebraic formalism is used. This is important for applications to statistical physics because the thermodynamic limit can be formally defined as the limit of an infinite number of systems, but it is not so important for numerical simulations, since these necessarily operate with a finite number of discretized degrees of freedom. Also, conditional independence is defined in a different way, via quantum conditional expectations, rather than via the approach based on conditional mutual information and conditional density operators used in the present work. Nevertheless, it seems likely that there are connections to our approach that should be investigated in future work. \n\nLastly, during the final stage of preparation of this manuscript, two related papers have appeared on the physics preprint archive. An article by Laumann, Scardicchio and Sondhi \\cite{LSS07a} used a QBP-like algorithm to solve quantum models on sparse graphs. Hastings \\cite{Has07b} proposed a QBP algorithm for the simulation of quantum many-body systems based on ideas similar to the ones presented here. The connection between the two approaches, and in particular the application of the Lieb-Robinson bound \\cite{LR72a} to conditional mutual information, is worthy of further investigation. \n\n\n\\section{Conclusion}\n\n\\label{Conc}\n\n\\begin{figure}\n\\center\\includegraphics[width=11cm]{veine}\n\\caption{Relation between Markov Networks, Bifactor Networks, and 1-Bifactor Networks in a) quantum theory and b) classical probability theory. The hatched regions indicate the domain of convergence of the associated Belief Propagation algorithms. Figure a): Convergence of Belief Propagation on trees for Markov Networks is Theorem \\ref{thm:QBP} and for 1-Bifactor states is Corollary \\ref{cor:QBP1}. That all Markov Networks on trees are Bifactor states is Theorem \\ref{The:mutual}. The existence of Bifactor Networks on trees that are not Markov Networks is given by Example \\ref{ex:notMarkov} for $n<\\infty$ and the Heisenberg anti-ferromagnetic spin chain of Example \\ref{ex:heisenberg} for $n = \\infty$. Markov Networks on trees with cliques of size $>2$ are generally not Bifactor Networks, cf. Theorem \\ref{Graph:QHC}. Figure b): 
That all classical Bifactor Networks are Markov Networks is the Hammersley-Clifford Theorem \\ref{thm:HC}, and convergence of Belief Propagation on trees follows from Theorem \\ref{thm:QBP}. }\n\\label{fig:worldview}\n\\end{figure}\n\nIn this paper, we have presented quantum Graphical Models and Belief Propagation based on the idea that quantum theory is a noncommutative, operator-valued generalization of probability theory. Our main results are summarized in Fig.~\\ref{fig:worldview}. We expect these methods to have significant applications in quantum error correction and the simulation of many-body quantum systems. We are currently in the process of implementing these algorithms numerically in both of these contexts. Belief Propagation based decoding of several types of quantum error correction codes has already been implemented quite successfully, e.g. on concatenated block codes \\cite{Pou06b}, turbo codes \\cite{OPT07a}, and sparse codes \\cite{COT05a}. However, for the noise models considered there, the corresponding bifactor states only involve commuting operators, and thus the associated inference problem could be solved by means of a classical Belief Propagation algorithm. We conclude with several open questions suggested by this work. \n\nIn the context of many-body physics, it would be interesting to relate the class of solutions obtained by QBP to other approximation schemes used in statistical physics, much in the spirit of the work of Yedidia \\cite{Yed01a} in the classical setting. A related problem would be to understand how the different classes of bifactor states relate to each other. We suspect that when the Hilbert space dimension at each vertex of the graph is held fixed, the $n$-bifactor states on that graph form a subset of the $m$-bifactor states when $n