diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzmsxp" "b/data_all_eng_slimpj/shuffled/split2/finalzzmsxp" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzmsxp" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nIn the mathematics literature, lowering and rasing operators operators are known as generators of step algebras, which were originally\nintroduced by Mickelsson \\cite{Mick} for reductive pairs of Lie algebras, $\\mathfrak{g}'\\subset \\mathfrak{g}$. These algebras naturally act on $\\mathfrak{g}'$-singular vectors in $U(\\mathfrak{g})$-modules and are important in representation theory,\n\\cite{Zh1,Mol}.\n\nThe general theory of step algebras for classical universal enveloping algebras was developed in \\cite{Zh1,Zh2} and\nwas extended to the special liner and orthogonal quantum groups in \\cite{Kek}. They admit a natural description in\nterms of extremal projectors, \\cite{Zh2}, introduced for classical groups in \\cite{AST1,AST2}\nand extended to the quantum group case in \\cite{T}. It is known that the step algebra $Z(\\mathfrak{g},\\mathfrak{g}')$ is generated by\nthe image of the orthogonal complement $\\mathfrak{g}\\ominus \\mathfrak{g}'$ under the extremal projector of the $\\mathfrak{g}'$.\nAnother description of lowering\/rasing operators for classical groups was obtained in\n\\cite{NM,Pei,PH,W} in an explicit form of polynomials in $\\mathfrak{g}$.\n\nA generalization of the results of \\cite{NM,Pei} to quantum $\\mathfrak{g}\\l(n)$ can be found in \\cite{ABHST}. In this special case, the lowering operators can be also conveniently expressed through \"modified commutators\" in the Chevalley generators of $U(\\mathfrak{g})$\nwith coefficients in the field of fractions of $U(\\mathfrak{h})$. Extending\n\\cite{PH,W} to a general quantum group is not straightforward, since there are no\nimmediate candidates for the nilpotent triangular Lie subalgebras $\\mathfrak{g}_\\pm $ in $U_q(\\mathfrak{g})$. We suggest such a generalization, where the lack of\n$\\mathfrak{g}_\\pm $ is compensated by the entries of the universal R-matrix with one leg projected to the natural representation.\nThose entries are nicely expressed through modified commutators in the Chevalley generators turning into elements of $\\mathfrak{g}_\\pm$\nin the quasi-classical limit. Their commutation relation with the Chevalley generators modify the classical commutation\nrelations with $\\mathfrak{g}_\\pm$ in a tractable way. 
This enabled us to generalize the results of \\cite{NM,Pei,PH,W} and construct\ngenerators of Mickelsson algebras for the non-exceptional quantum groups.\n\n\\subsection{Quantized universal enveloping algebra}\n\\label{ssecQUEA}\nIn this paper, $\\mathfrak{g}$ is a complex simple Lie algebra of type $B$, $C$, or $D$.\nThe case of $\\mathfrak{g}\\l(n)$ can be easily derived from here due to the natural inclusion $U_q\\bigl(\\mathfrak{g}\\l(n)\\bigr)\\subset U_q(\\mathfrak{g})$, so we do not pay special attention to it.\nWe choose a Cartan subalgebra $\\mathfrak{h}\\subset \\mathfrak{g}$ with the canonical inner product $(.,.)$ on $\\mathfrak{h}^*$.\nBy $\\mathrm{R}$ we denote the root system of $\\mathfrak{g}$ with a fixed subsystem of\npositive roots $\\mathrm{R}^+\\subset \\mathrm{R}$ and the basis of simple roots $\\Pi^+\\subset \\mathrm{R}^+$.\nFor every $\\lambda\\in \\mathfrak{h}^*$ we denote by $h_\\lambda$ its image under the isomorphism $\\mathfrak{h}^*\\simeq \\mathfrak{h}$,\nthat is, $(\\lambda,\\beta)=\\beta(h_\\lambda)$ for all $\\beta\\in \\mathfrak{h}^*$.\nWe put $\\rho=\\frac{1}{2}\\sum_{\\alpha\\in \\mathrm{R}^+}\\alpha $ for the Weyl vector.\n\nSuppose that $q\\in \\mathbb{C}$ is not a root of unity. Denote by $U_q(\\mathfrak{g}_\\pm)$ the $\\mathbb{C}$-algebra generated by $e_{\\pm\\alpha}$, $\\alpha\\in \\Pi^+$, subject to the q-Serre relations\n$$\n\\sum_{k=0}^{1-a_{ij}}(-1)^k\n\\left[\n\\begin{array}{cc}\n1-a_{ij} \\\\\n k\n\\end{array}\n\\right]_{q_{\\alpha_i}}\ne_{\\pm \\alpha_i}^{1-a_{ij}-k}\ne_{\\pm \\alpha_j}e_{\\pm \\alpha_i}^{k}\n=0\n,\n$$\nwhere $a_{ij}=\\frac{2(\\alpha_i,\\alpha_j)}{(\\alpha_i,\\alpha_i)}$,\n$i,j=1,\\ldots, n=\\mathrm{rk}\\: \\mathfrak{g}$, is the Cartan matrix, $q_{\\alpha}= q^{\\frac{(\\alpha,\\alpha)}{2}}$, and\n$$\n\\left[\n\\begin{array}{cc}\nm \\\\ k\n\\end{array}\n\\right]_{q}\n=\n\\frac{[m]_q!}{[k]_q![m-k]_q!},\n\\quad\n[m]_q!=[1]_q\\cdot [2]_q\\ldots [m]_q.\n$$\nHere and further on, $[z]_q=\\frac{q^z-q^{-z}}{q-q^{-1}}$ whenever $q^{\\pm z}$ make sense.\n\nDenote by $U_q(\\mathfrak{h})$ the commutative $\\mathbb{C}$-algebra generated by $q^{\\pm h_\\alpha}$, $\\alpha\\in \\Pi^+$. The quantum group $U_q(\\mathfrak{g})$ is a $\\mathbb{C}$-algebra generated by $U_q(\\mathfrak{g}_\\pm)$ and $U_q(\\mathfrak{h})$ subject\nto the relations\n$$\nq^{ h_\\alpha}e_{\\pm \\beta}q^{-h_\\alpha}=q^{\\pm(\\alpha,\\beta)} e_{\\pm \\beta},\n\\quad\n[e_{\\alpha},e_{-\\beta}]=\\delta_{\\alpha, \\beta}\\frac{q^{h_{\\alpha}}-q^{-h_{\\alpha}}}{q_\\alpha-q^{-1}_\\alpha}.\n$$\nRemark that $\\mathfrak{h}$ is not contained in $U_q(\\mathfrak{g})$; still, it is convenient for us to keep the reference to $\\mathfrak{h}$.\n\nFix the comultiplication in $U_q(\\mathfrak{g})$ as in \\cite{CP}:\n\\begin{eqnarray}\n&\\Delta(e_{\\alpha})=e_{\\alpha}\\otimes q^{h_{\\alpha}} + 1\\otimes e_{\\alpha},\n\\quad\n\\Delta(e_{-\\alpha})=e_{-\\alpha}\\otimes 1 + q^{-h_{\\alpha}} \\otimes e_{-\\alpha},\n\\nonumber\\\\&\n\\Delta(q^{\\pm h_{\\alpha}})=q^{\\pm h_{\\alpha}}\\otimes q^{\\pm h_{\\alpha}},\n\\nonumber\n\\end{eqnarray}\nfor all $\\alpha \\in \\Pi^+$.\n\nThe subalgebras $U_q(\\b_\\pm)\\subset U_q(\\mathfrak{g})$ generated by $U_q(\\mathfrak{g}_\\pm)$ over $U_q(\\mathfrak{h})$ are quantized universal enveloping algebras of the\nBorel subalgebras $\\b_\\pm=\\mathfrak{h}+\\mathfrak{g}_\\pm\\subset \\mathfrak{g}$.\n\nThe Chevalley generators $e_{\\alpha}$ can be extended to a set of higher root vectors $e_{\\beta}$ for\nall $\\beta\\in \\mathrm{R}$. 
A normally ordered set of root vectors generates a Poincar\\'{e}-Birkhoff-Witt (PBW) basis of $U_q(\\mathfrak{g})$\nover $U_q(\\mathfrak{h})$, \\cite{CP}. We will use $\\mathfrak{g}_\\pm$ to denote the vector space spanned by $\\{e_{\\pm \\beta}\\}_{\\beta\\in \\mathrm{R}^+}$.\n\nThe universal R-matrix is an element of a certain extension of $U_q(\\mathfrak{g})\\otimes U_q(\\mathfrak{g})$.\nWe heavily use the intertwining relation\n\\begin{eqnarray}\n\\mathcal{R} \\Delta(x)= \\Delta^{op}(x)\\mathcal{R},\n\\label{intertiner}\n\\end{eqnarray}\nbetween the coproduct and its opposite for all $x\\in U_q(\\mathfrak{g})$.\nLet $\\{\\varepsilon_i\\}_{i=1}^n\\subset \\mathfrak{h}^*$ be the standard orthonormal basis and $\\{h_{\\varepsilon_i}\\}_{i=1}^n$ the corresponding dual basis in $\\mathfrak{h}$.\nThe exact expression for $\\mathcal{R}$ can be extracted from \\cite{CP}, Theorem 8.3.9, as the ordered product\n\\begin{eqnarray}\n\\mathcal{R}= q^{\\sum_{i=1}^n h_{\\varepsilon_i}\\otimes h_{\\varepsilon_i}}\\prod_{\\beta} \\exp_{q_\\beta}\\{(1-q_\\beta^{-2})(e_\\beta\\otimes e_{-\\beta} )\\} \\in U_q(\\b_+)\\hat \\otimes U_q(\\b_-),\n\\label{Rmat}\n\\end{eqnarray}\nwhere $\\exp_{q}(x )=\\sum_{k=0}^\\infty q^{\\frac{1}{2}k(k+1)}\\frac{x^k}{[k]_q!}$.\n\nWe use the notation $e_i=e_{\\alpha_i}$ and $f_{i}=e_{-\\alpha_i}$ for $\\alpha_i\\in \\Pi^+$, in all cases apart\nfrom $i=n$, $\\mathfrak{g}=\\mathfrak{s}\\o(2n+1)$, where we set $f_n=[\\frac{1}{2}]_q e_{-\\alpha_n}$.\nThe reason for this is two-fold.\nFirstly, the natural representation can be defined through the classical assignment on the generators, as given below.\nSecondly,\nwe get rid of $q_{\\alpha_n}=q^{\\frac{1}{2}}$ and can work over $\\mathbb{C}[q]$, as the relations involved turn into\n$$\n[e_{n},f_{n}]=\\frac{q^{h_{\\alpha_n}}-q^{-h_{\\alpha_n}}}{q-q^{-1}},\n$$\n$$\nf_{n}^3f_{n-1}-(q+1+q^{-1})f_{n}^2f_{n-1}f_{n}+(q+1+q^{-1})f_{n}f_{n-1}f_{n}^2-f_{n-1}f_{n}^3=0.\n$$\nIt is easy to see that the square root of $q$ disappears from the corresponding factor in the presentation\n(\\ref{Rmat}).\n\n\nIn what\nfollows, we regard $\\mathfrak{g}\\l(n)\\subset \\mathfrak{g}$ as the Lie subalgebra with the simple roots $\\{\\alpha_i\\}_{i=1}^{n-1}$ and $U_q\\bigl(\\mathfrak{g}\\l(n)\\bigr)$ the corresponding\nquantum subgroup in $U_q(\\mathfrak{g})$.\n\nConsider the natural representation of $\\mathfrak{g}$ in the vector space $\\mathbb{C}^N$.\nWe use the notation $i'=N+1-i$ for all integers $i=1,\\ldots,N$. 
The assignment\n$$\n\\pi(e_{i})=e_{i,i+1}\\pm e_{i'-1,i'}, \\quad \\pi(f_{i})= e_{i+1,i}\\pm e_{i',i'-1}, \\quad \\pi(h_{\\alpha_i})= e_{ii}-e_{i+1,i+1}+e_{i'-1,i'-1}-e_{i'i'},\n$$\nfor $i=1, \\ldots,n-1$, defines a direct sum of two representations of $\\mathfrak{g}\\l(n)$ for each sign.\nIt extends to the natural representation of the whole $\\mathfrak{g}$ by\n$$\n\\pi(e_{n})= e_{n,n+1}\\pm e_{n'-1,n'}, \\quad \\pi(f_{n})= e_{n+1,n}\\pm e_{n',n'-1}, \\quad \\pi(h_{\\alpha_n})=\ne_{nn}-e_{n'n'},\n$$\n$$\n\\pi(e_{n})= e_{nn'}, \\quad \\pi(f_{n})= e_{n'n}, \\quad \\pi(h_{\\alpha_n})=\n2e_{nn}-2e_{n'n'},\n$$\n$$\n\\pi(e_{n})= e_{n-1,n'}\\pm e_{n,n'+1}, \\quad \\pi(f_{n})= e_{n',n-1}\\pm e_{n'+1,n}, \\quad \\pi(h_{\\alpha_n})=\ne_{n-1,n-1}+e_{nn}-e_{n'n'}-e_{n'+1,n'+1},\n$$\nrespectively, for $\\mathfrak{g}=\\mathfrak{s}\\o(2n+1)$, $\\mathfrak{g}=\\mathfrak{s}\\mathfrak{p}(2n)$, and $\\mathfrak{g}=\\mathfrak{s}\\o(2n)$.\n\nThe two values of the sign give equivalent representations.\nThe choice of minus corresponds to the standard representation that preserves the bilinear form\nwith entries $C_{ij}=\\delta_{i'j}$, for $\\mathfrak{g}=\\mathfrak{s}\\o(N)$, and $C_{ij}=\\mathrm{sign}(i'-i)\\delta_{i'j}$, for $\\mathfrak{g}=\\mathfrak{s}\\mathfrak{p}(N)$.\nHowever, we fix the sign to $+$ in order to simplify calculations. The above assignment also defines representations of $U_q(\\mathfrak{g})$.\n\n\\section{$R$-matrix of non-exceptional quantum groups}\nDefine $\\check{\\mathcal{R}}=q^{-\\sum_{i=1}^n h_{\\varepsilon_i}\\otimes h_{\\varepsilon_i}}\\mathcal{R} $.\nDenote by $\\check{R}^-=(\\pi\\otimes \\mathrm{id})(\\check{\\mathcal{R}}) \\in \\mathrm{End}(\\mathbb{C}^N)\\otimes U_q(\\mathfrak{g}_-)$\nand by $\\check{R}^+=(\\pi\\otimes \\mathrm{id})(\\check{\\mathcal{R}}_{21}) \\in \\mathrm{End}(\\mathbb{C}^N)\\otimes U_q(\\mathfrak{g}_+)$.\nIn this section, we deal only with $\\check{R}^-$ and suppress the label \"$-$\" for simplicity,\n$\\check{R}=\\check{R}^-$.\n\nDenote by $N_+$ the ring of all upper triangular matrices in $\\mathrm{End}(\\mathbb{C}^N)$ and by $N'_+$ its ideal spanned by $e_{ij}$, $i<j$. Due to the $\\mathfrak{h}$-invariance\nof $\\check R$, the entry $\\check{R}_{ij}\\in U_q(\\mathfrak{g}_-)$ carries weight $\\varepsilon_j-\\varepsilon_i$.\n\n\nFor all $\\mathfrak{g}$, we have $f_{k,k+1}=f_k=f_{k'-1,k'}$ once $k<n$. [\\dots]\n\nLet $\\mathfrak{g}'\\subset \\mathfrak{g}$ be a reductive subalgebra of rank $>1$, and\nlet $\\mathfrak{h}'\\subset \\mathfrak{g}'$ denote its Cartan subalgebra.\nLet the triangular decomposition $\\mathfrak{g}'_-\\oplus \\mathfrak{h}'\\oplus \\mathfrak{g}'_+$ be compatible with the triangular decomposition of $\\mathfrak{g}$. Recall the definition of the step algebra\n$Z_q(\\mathfrak{g},\\mathfrak{g}')$ of the pair $(\\mathfrak{g},\\mathfrak{g}')$.\nConsider the left ideal $J=U_q(\\mathfrak{g})\\mathfrak{g}'_+$ and its normalizer $\\mathcal{N}=\\{x\\in U_q(\\mathfrak{g}): e_\\alpha x \\subset J, \\forall \\alpha \\in \\Pi^+_{\\mathfrak{g}'}\\}$. By construction, $J$ is a two-sided ideal in the algebra $\\mathcal{N}$. 
Then $Z_q(\\mathfrak{g},\\mathfrak{g}')$ is\nthe quotient $\\mathcal{N}\/J$.\n\nFor all $\\beta_i\\in \\mathrm{R}^+_\\mathfrak{g}\\backslash \\mathrm{R}^+_{\\mathfrak{g}'}$ let $e_{\\beta_i}$ be the corresponding PBW generators\nand let $Z$ be the vector space spanned by $e_{-\\beta_l}^{k_l}\\ldots e_{-\\beta_1}^{k_1}e_0^{k_0} e_{\\beta_1}^{m_1}\\ldots e_{\\beta_l}^{m_l}$,\nwhere $e_0=q^{h_{\\alpha_1}}$, $k_i\\in \\mathbb{Z}_+$, and $k_0\\in \\mathbb{Z}$.\nThe PBW factorization\n$\nU_q(\\mathfrak{g})=U_q(\\mathfrak{g}'_-)Z U_q(\\mathfrak{h}') U_q(\\mathfrak{g}'_+)\n$\ngives rise to the decomposition\n$$\nU_q(\\mathfrak{g})=Z U_q(\\mathfrak{h}') \\oplus (\\mathfrak{g}'_- U_q(\\mathfrak{g})+ U_q(\\mathfrak{g})\\mathfrak{g}'_+).\n$$\n\\begin{propn}[\\cite{Kek}, Theorem 1]\nThe projection $U_q(\\mathfrak{g})\\to Z U_q(\\mathfrak{h}')$ implements an embedding of $Z_q(\\mathfrak{g},\\mathfrak{g}')$ in $Z U_q(\\mathfrak{h}')$.\n\\label{Kekalainen}\n\\end{propn}\n\\begin{proof}\nThe statement is proved in \\cite{Kek} for the orthogonal and special linear quantum groups,\n but the arguments apply to symplectic groups too.\n\\end{proof}\nIt is proved within the theory of extremal projectors that\ngenerators \nof $Z_q(\\mathfrak{g},\\mathfrak{g}')$ are labeled by\nthe roots $\\beta\\in \\mathrm{R}_\\mathfrak{g}\\backslash \\mathrm{R}_{\\mathfrak{g}'}$ plus $z_0=q^{h_{\\alpha_1}}$.\nWe calculate them in the subsequent sections, cf. Propositions \\ref{negative} and \\ref{positive}.\n\n\\subsection{Lowering operators}\nIn what follows, we extend $U_q(\\mathfrak{g})$ along with its subalgebras containing $U_q(\\mathfrak{h})$ over the field of fractions of $U_q (\\mathfrak{h})$\nand denote such an extension by a hat, e.g. $\\hat U_q(\\mathfrak{g})$. In this section we calculate representatives\nof the negative generators of $Z_q(\\mathfrak{g},\\mathfrak{g}')$ in $\\hat U_q(\\b_-)$.\n\nSet $h_i=h_{\\varepsilon_i}\\in \\mathfrak{h}$ for all $i=1,\\ldots,N$ and introduce $\\eta_{ij}\\in \\mathfrak{h}+\\mathbb{C}$ for $i,j=1,\\ldots,N$, by\n\\begin{eqnarray}\n\\eta_{ij}=h_i-h_j+(\\varepsilon_i-\\varepsilon_j, \\rho)-\\frac{1}{2}|\\!|\\varepsilon_i-\\varepsilon_j|\\!|^2.\n\\end{eqnarray}\nHere $|\\!|\\mu|\\!|$ is the Euclidean norm on $\\mathfrak{h}^*$.\n\\begin{lemma}\n\\label{thetas}\nSuppose that $(l,r)\\in P(\\alpha)$ for some $\\alpha\\in \\Pi^+$. Then\n\\begin{itemize}\n\\item[i)] if $l<r$ [\\dots]\n\\end{itemize}\n\\end{lemma}\n\n[\\dots]\n\n\\begin{lemma}\nFor $i>1$,\n$\ne_\\alpha g_{i1}=\\sum_{(l,r)\\in P(\\alpha)}\\delta_{il} g_{r1} \\mod \\hat U_q(\\mathfrak{g})e_\\alpha\n$.\n\\label{[e,g]}\n\\end{lemma}\n\\begin{proof}\nFollows from the intertwining property of the R-matrix.\n\\end{proof}\nConsider the right $\\hat U_q(\\mathfrak{h})$-module $\\Psi_{i1}$ freely generated by $f_{(\\vec m,k)}g_{k1}$ with $i\\leqslant\\vec m< k$.\nWe define operators $\\partial_{lr}\\colon \\Psi_{i1}\\to \\hat U_q(\\mathfrak{g})$ similarly to what we did for $\\Phi_{1j}$.\nFor a simple pair $(l,r)\\in P(\\alpha)$, put\n$$\n\\partial_{l,r}f_{(\\vec m,k)}g_{k1}=\n\\left\\{\n\\begin{array}{ll}\nf_{(\\vec m,l)}g_{r1},&l =k,\\\\\n\\bigl(\\partial_{l,r}f_{(\\vec m,k)}\\bigr)g_{k1},&l\\not =k,\n\\end{array}\n\\right.\n \\quad i\\leqslant \\vec m< r.\n$$\nThe Cartan factors appearing in $\\partial_{lr}f_{(\\vec m,k)}$ depend on $h_\\alpha$. When pushed to the right-most position,\n$h_\\alpha$ is shifted by $(\\alpha,\\varepsilon_1-\\varepsilon_r)$.\nWe extend $\\partial_{lr}$ to an action on $\\Psi_{i1}$ by the requirement that $\\partial_{lr}$ commutes with the right action of $\\hat U_q(\\mathfrak{h})$. 
Let $p$ denote the natural homomorphism of $\\hat U_q(\\mathfrak{h})$-modules,\n$p\\colon \\Psi_{i1}\\to \\hat U_q(\\mathfrak{g})$.\nOne can prove the following analog of Lemma \\ref{partial}.\n\\begin{lemma}\n\\label{partial_+}\nFor all $\\alpha \\in \\Pi^+_{\\mathfrak{g}'}$ and all $x \\in \\Psi_{i1}$,\n$e_{\\alpha}\\circ p(x)=\\sum_{(l,r)\\in P(\\alpha)} \\partial_{lr} x \\mod \\hat U_q(\\mathfrak{g})e_\\alpha$.\n\\end{lemma}\n\\begin{proof}\nStraightforward.\n\\end{proof}\n\\noindent\nWe suppress the symbol of projection $p$ to simplify the formulas.\n\nDefine $\\sigma_i$ for all $i=1,\\ldots,N$ as follows. For $i<j$ [\\dots]\n\n\\begin{assum}\\label{Assumption 1}\n$(1.1)$ The noise process $\\{\\omega_{t}\\}$ is a martingale difference sequence with respect to the filtration $\\{\\mathcal{F}_{t}\\}$, and $\\mathbb{E}[\\omega_{t+1}\\omega_{t+1}^\\top|\\;\\mathcal{F}_{t}]=\\bar{\\sigma}_{\\omega}^2I$ for some $\\bar{\\sigma}_{\\omega}>0$;\n\n$(1.2)$ $\\omega_{t}$ are component-wise sub-Gaussian, i.e., there exists $\\sigma_{\\omega}>0$ such that for any $\\gamma \\in \\mathbb{R}$ and $j=1,2,...,n$\n\\begin{align*}\n\\mathbb{E}[e^{\\gamma\\omega_{j}(t+1)}|\\;\\mathcal{F}_{t}]\\leq e^{\\gamma^2\\sigma_{\\omega}^2\/2}.\n\\end{align*}\n\\end{assum}\n\n\n\nThe problem is to design a sequence $\\{u_t\\}$ of control inputs such that the regret $\\mathcal{R}_T$, defined by\n\n\\begin{align}\n\t\\mathcal{R}_T =\\sum _{t=1}^{T} \\bigg( x_{t}^\\top Q_*x_{t} + u^\\top_{t} R_* u_{t}-J_*(\\Theta_*, Q_*, R_*)\\bigg)\\label{eq:Reg} \n\\end{align}\n\nscales sublinearly in $T$. The term $J_*(\\Theta_*, Q_*, R_*)$ in (\\ref{eq:Reg}), where $\\Theta_*=(A_*\\; B_*)^\\top$, denotes the optimal average expected cost. For the LQR setting with a controllable pair $(A,\\;B)$ we have $J_*(\\Theta,Q,R)=\\bar{\\sigma}_{\\omega}^2 \\mathrm{trace}( P(\\Theta,Q,R))$, where $P(\\Theta,Q,R)$ is the unique solution of the discrete algebraic Riccati equation (DARE) and the policy minimizing the average expected cost has feedback gain \n\\begin{align*}\nK(\\Theta,Q,R)= -(B^\\top P(\\Theta, Q,R)B+R)^{-1}B^\\top P(\\Theta, Q, R)A.\n\\end{align*}\n\n\n\n\n\n\n\n\nWhile the regret's exponential dependency on the system dimension appears in the long run in \\cite{abbasi2011regret}, the recent results of \\cite{mania2019certainty} on the existence of a stabilizing neighborhood make it possible to design an algorithm that only exhibits this dependency during an initial exploration phase (see \\cite{lale2020explore}). \n\nAfter this period, the controller designed for any estimated value of the parameters is guaranteed to be stabilizing, and the exponentially dependent term thus only appears as a constant in the overall regret bound. As explained in the introduction, this suggests using only a subset of actuators during initial exploration to even further reduce the guaranteed upper bound on the state.\n\nIn the remainder of the paper, we pick the best actuating mode (i.e. subset of actuators) so as to minimize the state norm upper bound achieved during initial exploration and characterize the needed duration of this phase for all system parameter estimates to reside in the stabilizing neighborhood. This is necessary to guarantee both closed-loop stability and acceptable regret, and makes it possible to switch to the full actuation mode. \n\nLet $\\mathbb{B}$ be the set of all columns $b^i_*$ ($i\\in \\{1,...,d\\}$) of $B_*$. An element of its power set $2^{\\mathbb{B}}$ is a subset $\\mathcal{B}_{j}$, $j\\in \\{1,..., 2^d\\}$, of columns corresponding to a submatrix $B_*^j$ of $B_*$ and mode $j$. For simplicity, we assume that $B^1_*=B_*$, i.e., that the first mode contains all actuators. 
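\n\nAs a concrete illustration of the quantities $P(\\Theta,Q,R)$, $K(\\Theta,Q,R)$, and $J_*(\\Theta,Q,R)$ introduced above, the following minimal Python sketch (assuming NumPy and SciPy are available; the function name is ours) evaluates them for a given parameter $\\Theta=(A\\; B)^\\top$:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import solve_discrete_are\n\ndef lqr_average_cost(A, B, Q, R, sigma_w=1.0):\n    # P(Theta,Q,R): unique stabilizing solution of the DARE\n    P = solve_discrete_are(A, B, Q, R)\n    # K(Theta,Q,R) = -(B^T P B + R)^{-1} B^T P A\n    K = -np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)\n    # J_*(Theta,Q,R) = sigma_w^2 * trace(P)\n    J = sigma_w**2 * np.trace(P)\n    return P, K, J\n\\end{verbatim}\nEvery controller used below, for any actuating mode, is of this certainty-equivalent form, computed for the mode's own $(A,B^i)$ pair and cost blocks.\n\n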
Given this definition, we write down the dynamics of the different actuating modes with extra exploratory noise as follows:\n\n\\begin{align}\n\tx _{t+1} ={\\Theta^i_{*}}^\\top z_{t}+B_*\\nu_t+\\omega_{t+1}, \\quad z_t=\\begin{pmatrix} x_t \\\\ u^i_t \\end{pmatrix}. \\label{eq:dynam_by_theta} \n\\end{align}\nwhere $\\Theta^i_*=(A_*,B^i_*)^\\top$ and the pair $(A_*,B^i_*)$ is controllable. \n\nThe cost associated with this mode is\n\\begin{align}\n\t\tc^i_{t} &=x_{t}^\\top Q_* x_{t} + {u^i_t}^\\top R^i_* u^i_t \\label{eq:obsswitch}\n\\end{align}\nwhere $R_*^i\\in \\mathbb{R}^{d_i\\times d_i}$ is a block of $R_*$ which penalizes the control inputs of the actuators of mode $i$. \n\n\nWe have the following assumption on the modes, which assists us in designing the proposed strategy.\n\n\n\n\n\n\\begin{assum}(Side Information) \\label{Assumption2}\n\\begin{enumerate}\n\\item There exist $s^i$ and $\\Upsilon_i$ such that $\\Theta_{*}^i \\in \\mathcal{S}^i_c$ for all modes $i$, where\n\\begin{align*}\n \\mathcal{S}^i_c =&\\{{\\Theta^i} \\in R^{(n+d_i)\\times n} \\mid trace({\\Theta^i}^\\top \\Theta^i)\\leq ({s^i})^2,\\\\\n &\\text{$(A,B^i)$ is \n controllable,}\\\\\n & \n\\|A+B^iK(\\Theta^i,Q_*,R^i_*)\\|\\leq \\Upsilon_i<1 \\\\\n&\n\\text{and $(A,M)$ is observable,} \\text{where $Q=M^\\top M$} \\}.\n\\end{align*}\n\\item There are known positive constants $\\eta^i$, $\\vartheta_i$, $\\gamma^i$ such that $\\|B_*^i\\|\\leq \\vartheta_i$,\n\\begin{align}\n&\\sup_{\\Theta^i\\in \\mathcal{S}^i_c}\\|A_*+B^i_*K({\\Theta}^i,Q_*,R^i_*)\\|\\leq \\eta^i \\label{Assum3_1}\n\\end{align}\nand\n\\begin{align}\nJ_*(\\Theta_*^{i},Q_*,R^i_*)-J_*(\\Theta_*,Q_*,R_*)\\leq \\gamma^i. \\label{Assum3_4}\n\\end{align}\t\nfor every mode $i$.\n\\end{enumerate}\n\\end{assum}\n\nBy slightly abusing notation, we drop the superscript label for the actuating mode 1 (e.g. $\\Upsilon_1=\\Upsilon$, $s^1=s$, and $\\mathcal{S}_c^1=\\mathcal{S}_c$). It is obvious that $s^i\\leq s$ for all $i$. \n\nNote that item (1) in Assumption \\ref{Assumption2} is typical in the literature on OFU-based algorithms (see e.g., \\cite{abbasi2011regret,lale2020explore}) while (2) in fact always holds in the sense that $\\sup_{\\Theta^i\\in \\mathcal{S}^i_c}\\|A_*+B^i_*K({\\Theta}^i,Q_*,R^i_*)\\|$ and $J_*(\\Theta_*^{i},Q_*,R^i_*)-J_*(\\Theta_*,Q_*,R_*)$ are always bounded (see e.g., \\cite{abbasi2011regret,lale2020explore}). The point of (2), then, is that upper bounds on their suprema are available, which can in turn be used to bound the regret explicitly. The knowledge of these bounds does not affect Algorithms 1 and 2, but their values enter Algorithm 3 for the determination of the best actuating mode and the corresponding exploration duration. In that sense \"best actuating mode\" should be understood as \"best given the available information\".\n\nBoundedness of the sets $\\mathcal{S}^i_c$ implies boundedness of $P(\\Theta^i,Q_*,R_*^i)$ by a finite constant $D_c^i$ (see \\cite{anderson1971linear}), i.e., $D_c^i=\\sup \\{\\left\\lVert P(\\Theta^i,Q_*,R_*^i)\\right\\rVert \\mid\\Theta^i \\in \\mathcal{S}^i_c \\}$. We define $D=\\max_{i\\in \\mathcal{B}^*} D_c^i$. Furthermore, there exists $\\kappa^i_c>1$ such that $\\kappa^i_c=\\sup \\{\\left\\lVert K(\\Theta^i, Q_*, R_*^i)\\right\\rVert \\mid\\Theta ^i\\in \\mathcal{S}^i_c \\}$.\n\n\nRecalling that the set of actuators of mode $i$ is $\\mathcal{B}_{i}$, we denote its complement by $\\mathcal{B}_i^c$ (i.e. $\\mathcal{B}_i\\cup \\mathcal{B}_i^c=\\{1,...,d\\}$). 
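\n\nSince the candidate modes range over the power set $2^{\\mathbb{B}}$, it is convenient to enumerate them programmatically. The following minimal Python sketch (ours, for illustration only) builds, for each mode, the index set $\\mathcal{B}_i$, the submatrix $B_*^i$, and the submatrix formed by the complementary columns $\\mathcal{B}_i^c$ (denoted $\\bar{B}^i_*$ just below):\n\\begin{verbatim}\nfrom itertools import combinations\nimport numpy as np\n\ndef actuating_modes(B):\n    # yield (index set, B^i, complement submatrix) for all modes\n    d = B.shape[1]\n    for r in range(d, 0, -1):\n        for cols in combinations(range(d), r):\n            comp = tuple(j for j in range(d) if j not in cols)\n            yield cols, B[:, cols], B[:, comp]\n\\end{verbatim}\n\n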
Furthermore, we denote the complement of the control matrix $B_*^i$ by $\\bar{B}^i_*$.\n\n\nIf some modes fail to satisfy Assumption\n\\ref{Assumption2}, they can simply be removed from the set $2^{\\mathbb{B}}$ without affecting the algorithm or the derived guarantees.\n\n\\section{Overview of Proposed Strategy}\\label{problem statement}\n\n\n\nIn this section, we propose an algorithm in the spirit of the one first proposed by \\cite{lale2020explore}, which leverages actuator redundancy in the \"more exploration\" step to avoid blow-up in the state norm while minimizing the regret bound. We break the strategy down into two phases: initial exploration (IExp), presented in Algorithm 1, and optimism (Opt), given by the SOFUA algorithm (Algorithm 2). \n\nThe IExp algorithm, which leverages exploratory noise, is deployed in the actuating mode $i^*$ for duration $T_c^{i^*}$ to reach a stabilizing neighborhood of the full-actuation mode and alleviate state explosion while minimizing regret. \n\nAfterwards, Algorithm 2, which leverages all the actuators, comes into play. This algorithm has the central confidence set, given by Algorithm 1, as an input. The best actuating mode $i^*$, which guarantees the minimum possible state norm upper bound, and the initial exploration duration $T^{i^*}_c$ are determined by running Algorithm 3 of Subsection \\ref{deterBest}. \n\n\n\\begin{algorithm} \n\t\\caption{\\small Initial Exploration (IExp) \\normalsize} \\label{alg:IExp}\n\t\t\\begin{algorithmic}[1]\n\t\t\\STATE \\textbf{Inputs:} $T^{i^*}_c$, $s^{i^*}>0$, $\\delta>0$, $\\sigma_{\\omega}$, $\\sigma_{\\nu}$, $\\lambda>0$\n\t\t\\STATE Set $V^{i^*}_0=\\lambda I$, $\\hat{\\Theta}^{i^*}_0=0$ \n\t\t\\STATE $\\tilde{\\Theta}^{i^*}_0=\\arg\\min_{\\Theta^{i^*} \\in \\mathcal{C}^{i^*}_0(\\delta)\\cap \\mathcal{S}^{i^*}_c}\\,\\, J(\\Theta^{i^*},Q_*,R_*^{i^*})$\n\t\t\\FOR {$t = 0,1,..., T^{i^*}_c$}\n\t\t\\IF {$\\det(V^{i^*}_t)> 2 \\det(V^{i^*}_{\\tau})$ or $t=0$} \n\t\t\\STATE Calculate $\\hat{\\Theta}^{i^*}_t$ by (\\ref{eq:LSE_Solf}) and set $\\tau=t$\n\n\t\t\\STATE Find $\\tilde{\\Theta}^{i^*}_t$ by (\\ref{eq:nonconvexOpt}) for $i=i^*$\n\t\t\\ELSE\n\t\t\\STATE $\\tilde{\\Theta}^{i^*}_t=\\tilde{\\Theta}^{i^*}_{t-1}$\n\t\t\\ENDIF \n\t\t\\STATE For the parameter $\\tilde{\\Theta}^{i^*}_t$ solve the Riccati equation and find $u^{i^*}_t=K(\\tilde{\\Theta}^{i^*}_t,Q_*, R_*^{i^*})x_t$\n\t\t\\STATE Construct $\\bar{u}^{i^*}_t$ using (\\ref{eq:controlAlg}), apply it to the system $\\Theta_*$ (\\ref{eq:dynam_by_theta2}), and observe the new state $x_{t+1}$.\n\t\t\\STATE Using $u^{i^*}_t$ and $x_t$, form $z^{i^*}_t$ and save $(z^{i^*}_t,x_{t+1})$ into the dataset\n\t\t\\STATE Set $V^{i^*}_{t+1}=V^{i^*}_t+z^{i^*}_t{z^{i^*}_t}^\\top$ and form $\\mathcal{C}^{i^*}_{t+1}$\n\t\t\\STATE Using $\\bar{u}^{i^*}_t$ and $x_t$, form $\\bar{z}^{i^*}_t$\n\t\t\\STATE Form $(\\bar{z}^{i^*}_t,x_{t+1})$\n\t\t\\STATE Set $V_{t+1}=V_t+\\bar{z}^{i^*}_t {\\bar{z}_t^{i^*}}^\\top$ and form $\\mathcal{C}_{t+1}$\\ENDFOR\n\t\t\\STATE Return $V_{T_c+1}$ and the corresponding $\\mathcal{C}_{T_c}$ \n\t\\end{algorithmic}\n\t\\end{algorithm}\n\n\\begin{algorithm} \n\t\\caption{Stabilizing OFU Algorithm (SOFUA)} \\label{alg:SOFUA}\n\t\t\\begin{algorithmic}[1]\n\t\t\\STATE \\textbf{Inputs:} $T$, $S>0$, $\\delta>0$, $Q$, $L$, $V_{T_c}$, $\\mathcal{C}_{T_c}$, $\\hat{\\Theta}_{T_c}$\n\t\t\\STATE $\\tilde{\\Theta}_{T_c}=\\arg\\min_{\\Theta \\in \\mathcal{C}_{T_c}(\\delta)\\cap \\mathcal{S}_c}\\,\\, J(\\Theta,Q_*,R_*)$\n\t\t\\FOR {$t = T_c,T_c+1,T_c+2,...$}\n\t\t\\IF {$\\det(V_t)> 2 \\det(V_{\\tau})$ or $t=T_c$} \n\t\t\\STATE Calculate $\\hat{\\Theta}_t$ by (\\ref{eq:LSE_Solf}) 
and set $\\tau=t$\n\n\t\t\\STATE Find $\\tilde{\\Theta}_t$ by (\\ref{eq:nonconvexOpt}) for $i=1$\n\t\t\\ELSE\n\t\t\\STATE $\\tilde{\\Theta}_t=\\tilde{\\Theta}_{t-1}$\n\t\t\\ENDIF \n\t\t\\STATE For the parameter $\\tilde{\\Theta}_t$ solve the Riccati equation and calculate $\\bar{u}_t=K(\\tilde{\\Theta}_t, Q_*,R_*)x_t$\n\t\t\\STATE Apply the control to the system $\\Theta_*$ and observe the new state $x_{t+1}$.\n\t\t\\STATE Save $(z_t,x_{t+1}) $ into the dataset\n\t\t\\STATE $V_{t+1}=V_t+z_tz_t^\\top$\n\t\t\\ENDFOR\n\t\\end{algorithmic}\n\t\\end{algorithm}\n\\subsection{Main steps of Algorithm 1}\n\n\\subsubsection{Confidence Set Construction}\n\nIn the IExp phase, we add extra exploratory Gaussian noise $\\nu$ to the inputs of all actuators, even those not in the actuator set of mode $i$. Assuming that the system actuates in an arbitrary mode $i$, the system dynamics used for confidence set construction (i.e., system identification) are written as\n\\begin{align}\n\tx _{t+1} ={\\Theta^i _{*}}^\\top \\underline{z}^i_{t}+\\bar{B}^i_*\\bar{\\nu}^i_t+\\omega_{t+1}, \\quad \\underline{z}^i_{t}=\\begin{pmatrix} x_t \\\\ \\underline{u}^i_t \\end{pmatrix}. \\label{eq:dynam_by_theta3} \n\\end{align}\n\nin which $\\bar{B}^i_*\\in \\mathbb{R}^{n\\times (d-d_i)}$, $\\underline{u}^i_{t}=u^i_t+\\nu_t(\\mathcal{B}_i)$, and $\\bar{\\nu}^i_t=\\nu_t(\\mathcal{B}_i^c)$, where, if $\\nu_t \\in \\mathbb{R}^{d}$ and $\\mathcal{N}\\subset\\mathbb{B}$, the vector $\\nu_t (\\mathcal{N})\\in \\mathbb{R}^{card(\\mathcal{N})}$ is constructed by only keeping the entries of $\\nu_t$ corresponding to the index set of elements in $\\mathcal{N}$. Note that (\\ref{eq:dynam_by_theta3}) is equivalent to (\\ref{eq:dynam_by_theta}) but separates used and unused actuators.\n\nFollowing the self-normalized process approach, the regularized least squares estimation error $e(\\Theta^{i})$ is given by:\n\t\\begin{align}\n\t\\nonumber & e(\\Theta^{i})= \\lambda \\operatorname{Tr}({\\Theta^{i}}^\\top\\Theta^{i})\\\\\n\t&+\\sum _{s=0}^{t-1} \\operatorname{Tr} \\big((x_{s+1}-{\\Theta^{i}}^\\top \\underline{z}^{i}_{s})(x_{s+1}-{\\Theta^{i}}^\\top \\underline{z}^{i}_{s})^\\top\\big) \\label{eq:LSE_op}\n\t\\end{align}\nwith regularization parameter $\\lambda$. 
This yields the $l^{2}$-regularized least squares estimate:\n\n\\begin{align}\n\t\\hat{\\Theta}^i_t &=\\operatorname*{arg\\,min}_{\\Theta^{i}} e(\\Theta^{i})=({\\underline{Z}_t^{i}}^\\top \\underline{Z}_t^{i}+\\lambda I)^{-1}{\\underline{Z}_t^{i}}^\\top X_t\n\t\\label{eq:LSE_Solf}\n\\end{align}\n\nwhere $\\underline{Z}_t^{i}$ and $X_t$ are matrices whose rows are ${\\underline{z}^{i}_{0}}^\\top,..., {\\underline{z}^{i}_{t-1}}^\\top$ and $x_{1}^\\top,...,x_{t}^\\top$, respectively.\nDefining the covariance matrix $V^{i}_{t}$ as follows:\n\\begin{align*}\nV^{i}_{t}=\\lambda I + \\sum_{s=0}^{t-1} \\underline{z}^{i}_{s}{\\underline{z}^{i}_{s}}^\\top=\\lambda I +{\\underline{Z}_t^{i}}^\\top \\underline{Z}_t^{i},\n\\end{align*}\n\nit can be shown that, with probability at least $1-\\delta$, where $0<\\delta<1$, the true system parameter $\\Theta^i_*$ belongs to the confidence set defined by (see Theorem \\ref{thm:Conficence_Set_Attacked}): \n\\begin{align}\n\t\\nonumber\\mathcal{C}^{i}_{t}(\\delta)& =\\{{\\Theta^i}^\\top \\in R^{n \\times (n+d_i)} \\mid \\\\ & \\nonumber\\operatorname{Tr}((\\hat{\\Theta}^{i}_{t}-\\Theta^{i})^\\top V_{t}^i(\\hat{\\Theta}^{i}_{t}-\\Theta^{i}))\\leq \\beta^{i}_{t}(\\delta) \\}, \\\\\n\t\\nonumber\\beta^{i}_t(\\delta) &=\\bigg(\\lambda^{1\/2}s^i+\\sigma_{\\omega}\\sqrt{2n\\log(n\\frac{\\det(V^{i}_{t})^{1\/2}\\det(\\lambda I)^{-1\/2}}{\\delta}})\\\\\n\t&+\\|\\bar{B}_*^i\\|\\sigma_{\\nu}\\sqrt{2d_i\\log(d_i\\frac{ \\det(V^{i}_{t})^{1\/2}\\det(\\lambda I)^{-1\/2}}{\\delta}})\\bigg)^{2} \\label{eq:Conf_set_radius_unAtt}\n\\end{align}\nAfter finding high-probability confidence sets for the unknown parameter, the core step is implementing the Optimism in the Face of Uncertainty (OFU) principle. At any time $t$, we choose a parameter $\\tilde{\\Theta}^{i}_t \\in \\mathcal{S}^i_c\\cap \\mathcal{C}^{i}_{t}(\\delta)$ such that:\n\n\\begin{align}\nJ(\\tilde{\\Theta}^{i}_t, Q_*, R_*^i)\\leq \\inf\\limits_{\\Theta^{i} \\in \\mathcal{C}^{i}_t(\\delta)\\cap \\mathcal{S}^i_c }J(\\Theta^{i},Q_*,R^i_*)+\\frac{1}{\\sqrt{t}}. \\label{eq:nonconvexOpt} \\end{align}\n\nThen, by using the chosen parameters as if they were the true parameters, the linear feedback gain $K(\\tilde{\\Theta}^{i}, Q_*, R_*^{i})$ is designed. We synthesize the control $\\underline{u}^{i}_{t}=u^{i}_t+\\nu_t(\\mathcal{B}_{i})$ and apply it to (\\ref{eq:dynam_by_theta3}), where $u^{i}_t=K(\\tilde{\\Theta}^{i}, Q_*, R_*^{i})x_t$. The extra exploratory noise $\\nu_t \\sim \\mathcal{N}(\\mu,\\,\\sigma_{\\nu}^{2}I)\\in \\mathbb{R}^{d}$ with $\\sigma_{\\nu}^{2}=2\\kappa^2\\bar{\\sigma}_{\\omega}^2$ is the random ``more exploration'' term. \n\nAs can be seen in the regret bound analysis, recurrent switches in policy may worsen the performance, so a criterion is needed to prevent frequent policy switches. As such, at each time step $t$ the algorithm checks the condition $\\det(V^{i}_{t})>2\\det(V^{i}_{\\tau})$ to determine whether updates to the control policy are needed, where $\\tau$ is the last time of policy update.\n\n\\subsubsection{Central Ellipsoid Construction}\nNote that (\\ref{eq:Conf_set_radius_unAtt}) holds regardless of the control signal $\\underline{z}^{i}_t$. The formulation above also holds for any actuation mode, being mindful that the\ndimension of the covariance matrix changes. Even while actuating in the IExp phase, by applying an augmentation technique, we can build a confidence set (which we call the central ellipsoid) around the parameters of the full actuation mode thanks to the extra exploratory noise. 
For $t\\leq T^{i}_c$, this can simply be carried out by rewriting (\\ref{eq:dynam_by_theta3}) as follows:\n\\begin{align}\n\tx _{t+1} =\\Theta _{*}^\\top \\bar{z}^i_{t}+\\omega_{t+1}, \\quad \\bar{z}^i_t=\\begin{pmatrix} x_t \\\\ \\bar{u}^i_t \\end{pmatrix} \\label{eq:dynam_by_theta2} \n\\end{align}\nwhere $\\bar{z}^i_t\\in \\mathbb{R}^{n+d}$ and $\\bar{u}^i_t\\in \\mathbb{R}^d$ are constructed by augmentation as follows:\n\n\\begin{align}\n\\bar{u}^i_t(\\mathcal{B}_i)=u^i_t+\\nu_t(\\mathcal{B}_i),\\quad \\bar{u}^i_t(\\mathcal{B}^c_i)=\\nu_t(\\mathcal{B}^c_i). \\label{eq:controlAlg}\n\\end{align}\n\nBy this augmentation, we can construct the central ellipsoid \n\n\\begin{align}\n&\t\\nonumber\\mathcal{C}_{t}(\\delta) =\\{\\Theta^\\top \\in R^{n \\times (n+d)} \\mid \\operatorname{Tr}((\\hat{\\Theta}_{t}-\\Theta)^\\top V_{t}(\\hat{\\Theta}_{t}-\\Theta))\\\\\n& \\nonumber\\leq \\beta_{t}(\\delta) \\} \\\\\n\t& \\beta_t(\\delta) =(\\sigma_{\\omega}\\sqrt{2n\\log(\\frac{\\det(V_{t})^{1\/2}\\det(\\lambda I)^{-1\/2}}{\\delta}})+\\lambda^{1\/2}s)^{2}. \\label{eq:Conf_set_radius_centralElips} \n\\end{align} \n\nwhich is an input to Algorithm 2 and is used to compute the IExp duration. \n\n\\subsection{Main steps of Algorithm 2}\n\nThe main steps of Algorithm 2 are quite similar to those of Algorithm 1 with a minor difference in confidence set construction. Algorithm 2 receives $V_{T^{i^*}_c}$, $Z_{T^{i^*}_c}$, and $X_{T^{i^*}_c}$ from Algorithm 1, using which for $t>T^{i^*}_c$ we have \n\\begin{align*}\nV_{t}&=V_{T^{i^*}_c} + \\sum_{s=T^{i^*}_c}^{t-1} {z}_{s}{{z}_{s}}^\\top\\\\\nZ_t^\\top X_t&=Z_{T^{i^*}_c}^\\top X_{T^{i^*}_c} + \\sum_{s=T^{i^*}_c}^{t-1} {z}_{s}{{x}_{s+1}}^\\top\n\\end{align*}\nand the confidence set is easily constructed.\n\nThe following theorem summarizes the boundedness of the state norm when Algorithms 1 and 2 are deployed.\n\n\n\n\n\\begin{thm} \\label{lemma:stabilization}\n\\begin{enumerate}\n\\item The IExp algorithm keeps the states of the underlying system actuating in any mode $i$ bounded with probability at least $1-\\delta$ during initial exploration, i.e., \n\n\\begin{align} \\label{eq:upperbound_state}\n\\nonumber \\|x_t\\|&\\leq \\frac{1}{1-\\Upsilon_i}\\big(\\frac{\\eta_i}{\\Upsilon_i}\\big)^{n+d_i}\\bigg[G_iZ_t^{\\frac{n+d_i}{n+d_i+1}}\\beta^i_t(\\delta)^{\\frac{1}{2(n+d_i+1)}}+\\\\\n&\\quad (\\sigma_{\\omega}\\sqrt{2n\\log\\frac{nt}{\\delta}}+\\|s I\\|\\sigma_{\\nu}\\sqrt{2d_i\\log\\frac{d_it}{\\delta}})\\bigg]=:\\alpha_t^i,\n\\end{align}\nfor all modes $i \\in \\{1,..., 2^d\\} $. 
\n\n\\item For $t>T^{i^*}_c+\\frac{(n+d_{i^*})\\log(n+d_{i^*})+\\log c^{i^*}-\\log \\chi_s}{\\log\\frac{2}{1-\\Upsilon}}:=T_{rc}$ we, with probability at least $1-\\delta$, have $\\|x_t\\|\\leq 2\\chi_s$ where\n\\begin{align}\n\\chi_s:=\\frac{2\\sigma_{\\omega}}{1-\\Upsilon}\\sqrt{2n\\log\\frac{n(T-T^{i^*}_c)}{\\delta}}.\n\\end{align}\n\\end{enumerate}\n\\end{thm}\n\n\n\n\n\n\n\nFrom parts (1) and (2) of Theorem \\ref{lemma:stabilization} we define the following \\textit{good events}:\n\\begin{align}\nF^i_{t}=\\{\\omega \\in\\Omega \\mid \\forall s \\leq T^{i}_c, \\left\\lVert x_{s}\\right\\rVert \\leq \\alpha^i_{t} \\}.\\label{eq:GoodEven_state_unat} \n\\end{align}\nand \n\\begin{align}\nF^{op,c}_{t}=\\{\\omega \\in\\Omega \\mid \\forall\\;\\; T^{i^*}_c\\leq s \\leq t, \\left\\lVert x_{s}\\right\\rVert^2 \\leq X_c^2 \\}.\\label{eq:GoodEven} \n\\end{align}\nin which\n\\begin{align}\nX_c^2=\\frac{32n\\sigma^2_{\\omega}(1+\\kappa^2)}{(1-\\Upsilon)^2}\\log \\frac{n(T-T_c)}{\\delta}.\\label{eq:upperBoundOptimisimphase} \n\\end{align}\nwhere both the events are used for regret bound analysis and the former one specifically is used to obtain best actuating mode for initial exploration.\n\n\\subsection{Determining the Optimal Mode for IExp}\n\\label{deterBest}\nWe still need to specify the best actuating mode $i^*$ for initial exploration along with its corresponding upperbound $X_t^{i^*}$. Theorem \\ref{lemma:estimationerror_timeMinumum22} specifies $i^*$. First we need the following Lemma.\n\n\\begin{lem} \\label{lemma:estimationerror_timeMinumum}\nAt the end of initial exploration, for any mode $\\forall i\\in \\{1,..., 2^d\\}$ the following inequality holds\n\\begin{align}\n||\\hat{\\Theta}_{T^i_{\\omega}}-\\Theta_*||_2\\leq \\frac{\\mu^i_c}{\\sqrt{T^i_{\\omega}}} \\label{eq:stab_neighb}\n\\end{align}\nwhere $\\mu^i_c$ is given as follows\n\n\\begin{align}\n\\nonumber \\mu^i_c:= & \\frac{1}{\\sigma_{\\star}}\n \\bigg(\\sigma_{\\omega}\\sqrt{n(n+d)\n\\log\\big(1+\\frac{\\mathcal{P}_c}{\\lambda (n+d)}\\big)+2n\\log\\frac{1}{\\delta}}+\\\\\n&\\sqrt{\\lambda}s\\bigg)\n \\label{eq:kappaDefControllable}\n\\end{align}\nwith,\n\\begin{align*}\n\\mathcal{P}_c&:={X_{T^i_{\\omega}}^{i}}^2(1+2{\\kappa^i}^2)T^i_{\\omega}+\n4T^i_{\\omega}\\sigma^2_{\\nu}d_i\\log (dT^i_{\\omega}\/\\delta)\n\\end{align*}\n\nin which $T^i_{\\omega}$ stands for initial exploration duration of actuating in mode $i$. Furthermore, if we define\n\\begin{align}\nT^i_c:=\\frac{4(1+\\kappa)^2\\mu^{i2}_c}{(1-\\Upsilon)^2}\\label{eq:T_cDef}\n\\end{align}\nthen for $T^i_{\\omega}>T^i_c$, $||\\hat{\\Theta}_{T^i_{\\omega}}-\\Theta_*||_2\\leq \\frac{1-\\Upsilon}{2(1+\\kappa)}$ holds with probability at least $1-2\\delta$.\n\\end{lem}\nThe proof is provided in Appendix \\ref{appendix}.\n\n\\begin{thm}\\label{lemma:estimationerror_timeMinumum22}\nSuppose Assumptions \\ref{Assumption 1} and \\ref{Assumption2} hold true. 
Then, for a system actuating in mode $i$ during the initial exploration phase, the following results hold:\n\\begin{enumerate}\n\n\\item $I_{F^i_t} \\max_{1\\leq s\\leq t} \\|x_s\\|\\leq x_t$ \nwhere $I_{F^i_t}$ is the indicator function of the set $F^i_t$ and\n\\begin{align}\n& x_t=Y_{i,t}^{n+d_i+1}\\\\\n& \\nonumber Y_{i,t}:=\\max \\big(e, \\lambda (n+d_i)(e-1), \\frac{-\\bar{L}_i+\\sqrt{\\bar{L}_i^2+4\\bar{K}_i}}{2\\bar{K}_i}\\big),\n\\end{align}\nwith\n\\begin{align*}\n&\\bar{L}_i=(\\mathcal{D}^i_1+\\mathcal{D}^i_2)\\big(2n\\sigma_{\\omega}\\log\\frac{1}{\\delta}+\\sigma_{\\omega}\\sqrt{\\lambda}s^i\\big)\\log t +\\\\\n&\\mathcal{D}^i_3\n\\sqrt{\\log t\/\\delta}+\n(\\mathcal{D}^i_1+\\mathcal{D}^i_2)n\\sigma_{\\omega}(n+d_i)\\times \\\\ &\\bigg(\\log\\frac{(n+d_i)\\lambda+2{\\mathcal{V}^i_t}^2}{(n+d_i)\\lambda}t+\\log\\frac{(1+2{\\kappa^i}^2)}{(n+d_i)\\lambda}t\\bigg)\\log t\\\\\n&\\quad \\bar{K}_i=2(\\mathcal{D}^i_1+\\mathcal{D}^i_2)n\\sigma_{\\omega}(n+d_i)(n+d_i+1)\\log t.\n\\end{align*}\nwhere \n\t\\begin{align*}\n\t &\\mathcal{D}^i_1:=\\frac{4}{1-\\Upsilon_i}\\big(\\frac{\\eta_i}{\\Upsilon_i}\\big)^{n+d_i}\\bar{G}_i (1+2{\\kappa^i}^2)^{\\frac{n+d_i}{2(n+d_i+1)}}\\\\\n\t & \\mathcal{D}^i_2:=\\frac{4}{1-\\Upsilon_i}\\big(\\frac{\\eta_i}{\\Upsilon_i}\\big)^{n+d_i}\\bar{G}_i 2^{\\frac{n+d_i}{2(n+d_i+1)}}\\mathcal{V}_T^i,\\\\ &\\mathcal{D}^i_3:=\\frac{n\\sqrt{2}}{1-\\Upsilon_i}\\big(\\frac{\\eta_i}{\\Upsilon_i}\\big)^{n+d_i}\\sigma_{\\omega}\n\t\\end{align*}\n\t\nin which $\\mathcal{V}^i_t=\\sigma_{\\nu}\\sqrt{2d_i\\log (d_it\/\\delta)}$, and the bound holds with probability at least $1-\\delta\/2$.\n\n\\item The best actuating mode $i^*$ for initial exploration is\n\\begin{align}\n\\nonumber i^*&=\\arg\\min_{i\\in \\{1,...,2^{d}\\}} Y_{i,T^i_{\\omega}}^{n+d_i+1}\\\\\n& \\mathrm{s.t.}\\;\\; \\;\\; T^i_{\\omega}\\geq T^i_c \n\\label{optimiza}\n\\end{align}\n\n\\item The upper bound on the state norm of the system actuating in mode $i^*$ during the initial exploration phase can be written as follows:\n\\begin{align}\n\\|x_t\\|\\leq c_c^{i^*}(n+d_{i^*})^{n+d_{i^*}}\\label{simpleBound}\n\\end{align}\nfor some finite system parameter-dependent constant $c_c^{i^{*}}$.\n\\end{enumerate}\n\\end{thm}\n\n\\begin{rem}\\label{Remark:Find Best Actuating mode}\nWhile the optimization problem (\\ref{optimiza}) cannot be solved \nanalytically, because\n$T^i_c$ itself depends \non $x_{T^i_{\\omega}}$, the best mode can be \ndetermined using Algorithm 3.\n\\end{rem}\n\n\n\n\\begin{algorithm} \n\\label{alg3}\n\n\t\\caption{\\small Find best actuating mode $i^*$ and its corresponding $T_c^{i^*}$\\normalsize} \\label{alg:FindMode}\n\t\t\\begin{algorithmic}[1]\n\t\t\\STATE \\textbf{Inputs:} $\\lambda$, $\\kappa$, $s^i>0$, $\\delta>0$, $\\sigma_{\\omega}$, $\\vartheta_i$, $\\eta_i$, $\\Upsilon_i$ $\\forall i$\n\t\t\\FOR {$i\\in \\{1,...,2^{d}\\} $}\n\t\t\\STATE $T^i_{itr}=1$\n \t\t\\FOR {$t=1,2,..., T^i_{itr} $}\n\t\t\\STATE compute $T_c^i$ by (\\ref{eq:T_cDef}) \n\t\t\\IF {$t < T_c^i$} [\\dots]\n\t\t\\ENDIF\n\t\t\\ENDFOR\n\t\t\\ENDFOR\n\t\\end{algorithmic}\n\t\\end{algorithm}\n\n[\\dots] for $t > T^{i^*}_c$.\n\nAn upper bound for $\\mathcal{R}_T$ is given by the following theorem, which is the next core result of our analysis. 
\n\n\\begin{thm} (Regret Bound of IExp+SOFUA) \\label{lemma:RegretBoundControllable}\nUnder Assumptions \\ref{Assumption 1} and \\ref{Assumption2}, with probability at least $1-\\delta$ the algorithm SOFUA together with additional exploration algorithm IExp which runs for $T^{i^*}_{c}$ time steps achieves regret of $\\mathcal{O}\\big((n+d_{i^*})^{(n+d_{i^*})}T^{i^*}_{rc}\\big)$ for $t\\leq T^{i^*}_{c}$ and $\\mathcal{O}\\big(poly(n+d)\\sqrt{T-T^{i^*}_{rc}}\\big)$ for $t>T^{i^*}_{c}$ where $\\mathcal{O}(.)$ absorbs logarithmic terms.\n\\end{thm}\t\n\n\\section{Numerical Experiment}\n\\label{simulation}\nIn this section, we demonstrate on a practical example how the use of our algorithms successfully alleviates state explosion during the initial exploration phase. We consider a control system with drift and control matrices to be set as follows:\n\n\\begin{align*}\n\tA_*=\\begin{pmatrix}\n\t\t1.04 & 0 & -0.27 \\\\\n\t\t0.52 & -0.81 & 0.83\\\\\n\t\t 0 & 0.04 & -0.90\n\t\\end{pmatrix},\\; B_*=\\begin{pmatrix}\n\t0.61 &-0.29 & -0.47\\\\\n\t0.58 & 0.25 & -0.5\\\\\n\t0 & -0.72 & 0.29\n\\end{pmatrix}.\n\\end{align*}\n\nWe choose the cost matrices as follows:\n\n\\begin{align*}\n\tQ_*=\\begin{pmatrix}\n\t0.65 &-0.08& -0.14 \\\\\n\t\t-0.08 & 0.57 & 0.26\\\\\n\t\t-0.14& 0.26& 2.5\n\t\\end{pmatrix},\\;R_*=\\begin{pmatrix}\n\t\t0.14 &0.04& 0.05 \\\\\n\t\t0.04 &0.24 &0.08\\\\\n\t\t0.05 &0.08 &0.2\n\t\\end{pmatrix}.\n\\end{align*}\n\t\nThe Algorithm 3 outputs the exploration duration $T^{i^*}_c=50s$ and best actuating mode $i^*$ for initial exploration with corresponding control matrix $B_*^{i^*}$ and $R_*^{i^*}$\n\\begin{align*}\nB_*^{i^*}=\\begin{pmatrix}\n\t0.61 &-0.29\\\\\n\t0.58 & 0.25\\\\\n\t0 & -0.72\n\\end{pmatrix},\\;\\;\\;\\;\\;\\; R_*^{i^*}=\\begin{pmatrix}\n\t\t0.14 &0.04 \\\\\n\t\t0.04 &0.24\n\t\\end{pmatrix}.\n\\end{align*}\t\n\n\\begin{figure}\n\t\\centering\n\t\\vspace{5pt}\n\t\t\\hspace{4pt}\n\t\t\\includegraphics[trim=4cm 6cm 6cm 8cm, scale=.4]{MainNormState.pdf}\n\t\\hfill\n\t\n\t\t\\hspace{4pt}\n\t\t\\includegraphics[trim=4cm 7cm 6cm 8cm, scale=.4]{mainRegretBound.pdf}\n\t\n\t\\hfill\n\t\\caption{Top. State norm, Bottom. regret bound}\n\t\\label{fig:naive_attacked_estimateANDregret}\n\\end{figure}\n\nIt has graphically been shown in \\cite{abbasi2013online} that the optimization problem (\\ref{eq:nonconvexOpt}) is generally non-convex for $n,d>1$. Because of this fact, we decided to solve optimization problem (\\ref{eq:nonconvexOpt}) using a projected gradient descent method in Algorithm 1 and 2, with basic step\n\n\\begin{align}\n\\tilde{\\Theta}^i_{t+1}\\leftarrow PROJ_{\\mathcal{C}^i_t(\\delta)} \\bigg(\\tilde{\\Theta}^i_{t}-\\gamma \\nabla_{\\Theta^i}(L^itr (P(\\Theta^i,Q_*,R^{i}_*))) \\bigg)\n\\end{align}\n\nwhere $L^{i}=\\bar{\\sigma}^2_{\\omega}+\\vartheta^2\\bar{\\sigma}^2_{\\nu}$ for $i=i^*$ and $L^{i}=\\bar{\\sigma}^2_{\\omega}$ for $i=1$. $\\nabla_{\\Theta^i}f$ is the gradient of $f$ with respect to $\\Theta^i$. $\\mathcal{C}^i_t(\\Theta^i)$ is the confidence set, $PROJ_g$ is Euclidean projection on $g$ and finally $\\gamma$ is the step size. 
The computation of the gradient $\\nabla_{\\Theta^i}$ as well as the exact formulation of the projection are detailed in \\cite{abbasi2013online}; following that work, we choose the learning rate as follows:\n\\begin{align*}\n\t\\gamma=\\sqrt{\\frac{0.001}{tr(V^i_t)}}.\n\\end{align*}\n\nWe apply the gradient method for 100 iterations to solve each OFU optimization problem and apply the projection technique until the projected point lies\ninside the confidence ellipsoid. The inputs to the OFU algorithm are $T=10000$, $\\delta=1\/T$, $\\lambda=1$, $\\sigma_{\\omega}=0.1$, $s=1$, and we repeat the simulation $10$ times. \n\nAs can be seen in Fig. \\ref{fig:naive_attacked_estimateANDregret}, the maximum value of the state norm (attained during the initial exploration phase) is smaller when using mode $i^*$ than when all actuators are in action.\n\n\nThe regret bound for both cases is linear during the initial exploration phase; however, SOFUA guarantees $\\mathcal{O}(\\sqrt{T})$ regret for $T>50$\\,s.\n\n\\section{Conclusion}\\label{conclusion}\nIn this work, we proposed an OFU principle-based controller for over-actuated systems, which combines\na step of \"more-exploration\" (to produce a stabilizing\nneighborhood of the true parameters while guaranteeing a bounded state during exploration) with one of \"optimism\", which efficiently\ncontrols the system. Due to the redundancy, it is possible to further optimize the speed of convergence of the exploration phase to the stabilizing neighborhood by choosing among actuation modes, and then to switch to full actuation to guarantee an $\\mathcal{O}(\\sqrt{T})$ regret in closed loop, with\npolynomial dependency on the system dimension.\n\nA natural extension of this work is to classes of systems in which some modes are only stabilizable. Speaking more broadly, the theme of this paper also opens the door to more applications of switching as a way to facilitate learning-based control of unknown systems, some of which are the subject of current work.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nTo control magnetic behavior for device applications, it is crucial to engineer magnetic interactions.\nUsually, the interaction between spins has the symmetric form $\\bm S_i \\cdot \\bm S_j$ and, as a result, the spins tend to be (anti)parallel.\nOn the other hand, in magnetic materials with broken inversion symmetry, a qualitatively different interaction, $\\bm S_i \\times \\bm S_j$, called the Dzyaloshinskii--Moriya (DM) interaction, appears as a consequence of spin-orbit interactions\\cite{dzyaloshinskii1958,moriya1960}. 
\nThe DM interaction is antisymmetric in the spin operators and favors twisted spin structures, which induces numerous interesting magnetic behaviors such as chiral soliton lattices in chiral helimagnets\\cite{kishine2015}, skyrmion formation\\cite{bogdanov1989,roszler2006,muhlbauer2009,yu2010,nagaosa2013}, and the enhancement of domain wall mobility\\cite{thiaville2012,chen2013,ryu2013,emori2013,torrejon2014}.\nIn addition, the DM interaction relates magnetic properties and electric polarizations in multiferroic materials\\cite{katsura2005,sergienko2006,tokura2014}.\n\nIn 1960, Moriya first proposed the microscopic derivation of the DM interaction at first order in the spin-orbit coupling and discussed two different contributions\\cite{moriya1960};\nthe first one is the extension of the superexchange mechanism to multiorbital spin-orbit systems, and the other is the combination of the direct exchange interaction and the spin-orbit coupling.\nAlthough the physical pictures of these mechanisms are clear, particularly for insulating systems, these formulations are not suitable for practical calculation.\nFor quantitative analysis, several techniques have been developed to calculate the DM interaction from first-principles calculations\\cite{liechtenstein1987,igor1996,katsnelson2000,katsnelson2010,heide2008,ferriani2008,heide2009,freimuth2014,kikuchi2016,koretsune2015}.\nMost of these approaches consider the energy difference by twisting the magnetic structures in various ways.\nOne of the most direct approaches is to calculate the energies of spirals with finite wavevector $\\bm q$, $E(\\bm q)$, and extract the $q$-linear term\\cite{heide2008,ferriani2008,heide2009},\nalthough this approach is sometimes time-consuming.\nWhen the twisting angle is small, the energy change can be evaluated from the information of the uniform magnetic structure\nby utilizing, for example, the magnetic force theorem\\cite{liechtenstein1987,igor1996,katsnelson2000},\nthe Berry phase\\cite{freimuth2014}, or\nthe spin gauge field transformation\\cite{kikuchi2016}.\nOn the other hand, perturbation expansion with respect to the exchange couplings gives a different formulation to evaluate the DM interaction\\cite{fert1980,imamura2004,kundu2015,wakatsuki2015,koretsune2015,shibuya2016}.\n\n\nIn this paper, we overview three approaches to evaluate the DM interaction from first-principles calculations,\nthat is, the methods using the energy of spirals, $E(\\bm q)$, the spin current, and the off-diagonal spin susceptibility.\nBy applying these methods to chiral ferromagnets, namely, Mn$_{1-x}$Fe$_x$Ge\\ and Fe$_{1-x}$Co$_x$Ge\\ systems, we discuss the relationship among these approaches,\nhow the band structures affect the DM interactions, and how each approach explains experimental results.\n\n\\section{Formulation to Compute the DM Interaction}\nIn this section, we describe three approaches to calculate the DM interaction.\nHereafter, we consider the low-energy effective Hamiltonian for the local direction of the magnetic continuum, $\\bm n(\\bm r)$, and neglect the charge degrees of freedom.\nThen, the exchange interaction and the DM interaction are given as\n\\begin{align}\n\tH = \\int d \\bm r \\;\\sum_\\mu\\left[ J_{\\mu} (\\nabla_{\\mu} \\bm n)^2 + \\sum_\\alpha D^{\\alpha}_{\\mu} (\\nabla_{\\mu} \\bm n \\times \\bm n)^{\\alpha}\\right],\n\t\\label{eq:Hamiltonian_continuum}\n\\end{align}\nwhere $J_{\\mu}$ and $D_{\\mu}^{\\alpha}$ denote the spin stiffness and the DM interaction, respectively.\nThis Hamiltonian can be derived from the local-spin Hamiltonian\n\\begin{align}\n\tH = \\sum_{ij} \\left[ - J_{ij} \\bm S_i \\cdot \\bm S_j + \\bm D_{ij} \\cdot ( \\bm S_i \\times \\bm S_j ) \\right].\n\\end{align}\nHere, ${\\mu}=x,y,z$ 
represents the relative direction of the two spins in the lattice representation, and ${\\alpha}=x,y,z$ represents the spin rotation axis.\nIn fact, when we assume an isotropic system, $J_{\\mu}$ and $D_{\\mu}^{\\alpha}$ can be written as\n\\begin{align}\n\tJ &= \\frac{1}{V}\\sum_{j} \\frac{1}{6}| \\bm r_j |^2J_{0j},\\\\\n\t\\bm D_\\mu &= \\frac{1}{V} \\sum_{j} (-r_j)^\\mu \\bm D_{0j},\n\t\\label{eq:JDcontinuum}\n\\end{align}\nwhere $V$ is the volume of one site.\nIn this paper, we consider the total magnetic moment along the $z$-axis as a starting point and the DM interaction with ${\\alpha}=x,y$.\n\n\\subsection{Twisting magnetic structures}\nUsing the Hamiltonian in Eq.~\\eqref{eq:Hamiltonian_continuum}, we can easily calculate the energy for a given $\\bm n(\\bm r)$.\nFor example, the exchange term always gives a non-negative value for $J_{\\mu} > 0$ and the uniform magnetic structure, $\\nabla_{\\mu} \\bm n = 0$, is most favorable.\nOn the other hand, the DM interaction term favors the twisted magnetic structure and its chirality depends on the sign of $D_{\\mu}^{\\alpha}$.\nIn fact, when $J_{\\mu} = J$ and $D_{\\mu}^{\\alpha} = D \\delta_{{\\mu} {\\alpha}}$, a helical structure such as $\\bm n = (\\cos q z, \\sin q z, 0)$ has $q$-dependent energy, $E(q) = J q^2 - D q$,\nand the most stable wavevector is given as $q = D\/2J$.\nThis means that once we can calculate the energy of the helical structure, $E(q)$, in actual systems, we can evaluate the DM interaction as well as the spin stiffness in this effective Hamiltonian.\nBy changing the twisting axis and direction of $\\bm n$, we can discuss each component of the DM interaction, $D_{\\mu}^{\\alpha}$.\n\n\nOne way to compute $E(q)$ in first-principles calculations is to employ the generalized Bloch theorem,\nthat is, to apply different boundary conditions for up and down spins to simulate twisted magnetic structures\\cite{heide2008,heide2009}.\nCalculations of energies for actual twisted magnetic structures with supercells have also been performed\\cite{yang2015}.\n\n\\subsection{Perturbation with respect to spin gauge field}\nHere, we describe the spin current approach to the DM interaction using the method of the spin gauge field\\cite{kikuchi2016}.\nIn Eq.~\\eqref{eq:Hamiltonian_continuum}, electron degrees of freedom are integrated out and the Hamiltonian depends only on the magnetic moment $\\bm n(\\bm r)$.\nTo obtain this Hamiltonian from the microscopic model, we consider the following Hamiltonian in the field representation:\n\\begin{align}\n\tH &= \\int d \\bm r \\; \\sum_{l} c^\\dagger_{l} \\left[ -\\frac{\\hbar^2{\\bm \\nabla}^2}{2m} - J_{\\rm ex}^{l} {\\bm n}\\cdot \\bm \\sigma\n +\\frac{i}{2}\\sum_{\\mu} \\bm \\lambda_{\\mu} \\cdot\\bm \\sigma \\overleftrightarrow{\\nabla}_{\\mu} \\right] c_{l},\n\\label{eq:Hamiltonian}\n\\end{align}\nwhere $c^\\dagger_{l}$ and $c_{l}$ are electron creation and annihilation operators for orbital ${l}$, respectively, \n$c^\\dagger \\overleftrightarrow{\\nabla}_{\\mu} c\\equiv c^\\dagger \\nabla_{\\mu} c- (\\nabla_{\\mu} c^\\dagger) c$, and $\\bm \\lambda_{\\mu}$ denotes the spin-orbit interaction.\nThe local direction of the magnetization $\\bm n(\\bm r)$, with $\\bm n=(\\sin\\theta\\cos\\phi, \\sin\\theta\\sin\\phi, \\cos\\theta)$, is static, and $J_{\\rm ex}^{l}$ 
denotes the exchange constant. \nThe form of $\\bm \\lambda_{\\mu}$ is determined by the symmetries of the system; for example, $\\lambda^\\alpha_\\mu \\propto \\delta^\\alpha_\\mu$ for B20 alloys such as MnSi and FeGe, while $\\lambda^\\alpha_\\mu\\propto \\varepsilon_{\\alpha\\mu z}$ for Rashba systems with $z$ as its perpendicular direction. \nWe consider here a simplified model with a quadratic dispersion and a spin--orbit interaction linear in the momentum, but the extension to general cases is straightforward.\n\nFor this Hamiltonian, we consider the local spin gauge transformation so that the local magnetic moment $\\bm n(\\bm r)$ points in the $z$ direction.\nFor this purpose, we introduce a unitary transformation in spin space as\n$c_{l}(\\bm r)=U(\\bm r)a_{l}(\\bm r)$, where $U$ is a $2\\times2$ unitary matrix satisfying \n$U^\\dagger(\\bm n\\cdot\\bm \\sigma) U=\\sigma^z$ \\cite{TKS_PR08}. \nAn explicit form of $U$ is given as $U=\\bm m\\cdot \\bm \\sigma$ with $\\bm m\\equiv (\\sin\\frac{\\theta}{2}\\cos\\phi, \\sin\\frac{\\theta}{2}\\sin\\phi, \\cos\\frac{\\theta}{2})$. \nGeometrically, $U$ rotates the spin space by $\\pi$ around the $\\bm m$-axis.\nUsing this unitary transformation, derivatives of the electron field become covariant derivatives as $\\nabla_{\\mu} c_{l} = U(\\nabla_{\\mu} + iA_{{\\rm s},{\\mu}})a_{l}$, where \n$A_{{\\rm s},{\\mu}} \\equiv \\sum_{\\alpha} A_{{\\rm s},{\\mu}}^{\\alpha} \\frac{\\sigma^{\\alpha}}{2} = -iU^\\dagger \\nabla_{\\mu} U$ is an SU(2) gauge field, called a spin gauge field, given by $A_{{\\rm s},{\\mu}}^{\\alpha}=2(\\bm m\\times \\nabla_{\\mu}\\bm m)^{\\alpha}$.\nThen, the Hamiltonian for the electron in the rotated frame is given as $H=H_0+H_A$ with \n\\begin{align}\n\tH_0\\equiv & \\int d \\bm r \\; \\sum_{{l}} a^\\dagger_{{l}} \\left[- \\frac{\\hbar^2 {\\bm \\nabla}^2}{2m} - J_{\\rm ex}^{{l}} \\sigma^z + \\frac{i}{2}\\sum_{\\mu} \\tilde{\\bm \\lambda}_{\\mu} \\cdot \\bm \\sigma \\overleftrightarrow{\\nabla}_{\\mu} \\right]a_{{l}},\n\\label{H0}\\\\\nH_A \\equiv & \\int d\\bm r \\; \\sum_{{l}}\\left[\\sum_{{\\mu} {\\alpha}}\\hat{\\tilde{j}}_{{\\rm s},{l},{\\mu}}^\\alpha A^{\\alpha}_{{\\rm s},{\\mu}} +\\frac{\\hbar^2}{8m}\\hat{n}_{{\\rm el},{l}}(A_{{\\rm s},{\\mu}}^{\\alpha})^2\\right].\n\\label{HA}\n\\end{align}\nHere, \n$\\tilde{\\lambda}^\\sb_{\\mu}\\equiv \\sum_{\\alpha} R_{{\\alpha} \\sb}\\lambda^{\\alpha}_{\\mu}$ is the spin--orbit coupling constant rotated by the SO(3) matrix $R_{{\\alpha} \\sb}\\equiv 2m_{\\alpha} m_\\sb - \\delta^{{\\alpha}\\sb}$ satisfying $U^\\dagger \\sigma^{\\alpha} U=\\sum_\\sb R_{{\\alpha} \\sb}\\sigma^\\sb$, and $\\hat{n}_{{\\rm el},{l}}\\equiv a^\\dagger_{l} a_{l} $.\nThe spin current density operator $\\hat{\\tilde{j}}_{{\\rm s},{l},{\\mu}}^{\\alpha}$ in the rotated frame is given by $\\hat{\\tilde{j}}_{{\\rm s},{l},{\\mu}}^{\\alpha} \\equiv - \\frac{i\\hbar^2}{4m}a^\\dagger_{l} \\sigma^{\\alpha} \\overleftrightarrow{\\nabla}_{\\mu} a_{l} - \\frac{1}{2}\\tilde{\\lambda}^\\alpha_{\\mu} a_{l}^\\dagger a_{l}$.\n\nThe interaction between electrons and the magnetization structure is originally given by the exchange interaction, $J_{\\rm ex}^{l} c^\\dagger_{l}{\\bm n}(\\bm r)\\cdot \\bm \\sigma c_{l}$ in Eq.~\\eqref{eq:Hamiltonian}. After the spin gauge transformation, \nthis exchange interaction becomes a trivial one, $J_{\\rm ex}^{l} a^\\dagger_{l} \\sigma^z a_{l}$ in Eq.~\\eqref{H0}.\nThe interaction between electrons and the magnetization structure is instead given by $H_A$ in Eq.~\\eqref{HA}. 
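\n\nAs a quick illustration of these formulas (our example), consider the planar helix $\\theta=\\pi\/2$, $\\phi=qz$, i.e., $\\bm n=(\\cos qz, \\sin qz, 0)$. Then $\\bm m=\\frac{1}{\\sqrt{2}}(\\cos qz, \\sin qz, 1)$ and\n\\begin{align}\nA_{{\\rm s},z}^{\\alpha}=2(\\bm m\\times \\nabla_{z}\\bm m)^{\\alpha}=q\\,(-\\cos qz, -\\sin qz, 1)^{\\alpha},\n\\nonumber\n\\end{align}\nso a uniform twist of pitch $2\\pi\/q$ translates into a spin gauge field of constant magnitude proportional to $q$; in particular, $A_{{\\rm s},z}^{z}=q$. The perturbation $H_A$ is thus explicitly first order in the gradient of the magnetization structure.\n\n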
\nWe regard $H_0$ as the non-perturbative Hamiltonian and treat $H_A$ as a perturbation.\nSince $A_{{\\rm s},{\\mu}}^{\\alpha} =2(\\bm m\\times \\nabla_{\\mu} \\bm m)^{\\alpha}$ is proportional to the derivative of $\\bm n(\\bm r)$, the perturbative expansion by $H_A$ gives the derivative expansion with respect to the magnetization structure. \n\n\nLet us derive the effective Hamiltonian for magnetization, $H_{\\rm eff}$, by integrating out the electron degrees of freedom in this rotated frame: \n\\begin{equation}\n\\exp\\left(-\\frac{i}{\\hbar}\\int dt\\; H_{\\rm eff}\\right) \\equiv \\int \\mathcal D a^\\dagger \\mathcal D a \\exp\\left(\\frac{i}{\\hbar}(S_0-\\int dt\\; H_A)\\right),\n\\label{Seff}\n\\end{equation}\nwhere $S_0$ is the action corresponding to $H_0$, that is, $S_0\\equiv \\int dt[\\int d\\bm r\\sum_{l} i\\hbar a_{l}^\\dagger \\partial_t a_{l} - H_0]$. We expand the right-hand side of Eq.~\\eqref{Seff} by $H_A$, and obtain $H_{\\rm eff}$ as \n\\begin{align}\n\\int dt\\; H_{\\rm eff} =& i\\hbar \\ln Z_0 + \\int dt \\average{H_A} + \\mathcal O((\\partial \\bm n)^2), \n\\label{Seff2} \\\\\nZ_0 \\equiv& \\int \\mathcal D a^\\dagger \\mathcal D a \\exp\\left(\\frac{i}{\\hbar}S_0\\right). \n\\label{Z0}\n\\end{align}\nLet us extract the DM interaction from $H_{\\rm eff}$. Since $\\tilde{\\lambda}^{\\alpha}_{\\mu}(\\bm r)=\\sum_\\sb R_{\\sb {\\alpha}}(\\bm r)\\lambda^\\sb_{\\mu}$ in $H_0$ depends on the magnetization structure, not only $\\average{H_A}$ but also $\\ln Z_0$ contributes to the DM interaction. However, the contribution from $\\ln Z_0$ is only higher order in $\\lambda$, while the contribution from $\\average{H_A}$ contains first-order terms in $\\lambda$. Therefore, we can neglect the contribution from $\\ln Z_0$ when $\\lambda$ is sufficiently small. \nFor $\\average{H_A}$, since the DM interaction is the first-order derivative term in the effective Hamiltonian of the magnetization, it is sufficient to consider only the terms in $\\average{H_A}$ proportional to $A_{\\rm s}$. Therefore, the DM interaction is included in \n\\begin{align}\n\tH_{\\rm eff} = & \\int d\\bm r \\; \\sum_{{l} {\\mu} {\\alpha}} \\tilde{j}_{{\\rm s},{l},{\\mu}}^\\alpha A^\\alpha_{{\\rm s},{\\mu}} ,\n\\label{Heff}\n\\end{align}\nwhere $\\tilde{j}_{{\\rm s},{l},{\\mu}}^\\alpha\\equiv \\average{ \\hat{\\tilde{j}}_{{\\rm s},{l},{\\mu}}^{\\alpha} }$ is the expectation value of the spin current density in the rotated frame evaluated by $H_0$ in Eq.~\\eqref{H0}. 
\nThe spin current density $j_{{\\rm s},{l},{\\mu}}^{\\alpha}$ in the laboratory frame is related as ${j}_{{\\rm s},{l},{\\mu}}^{\\alpha}=\\sum_\\sb R_{{\\alpha} \\sb}\\tilde{j}_{{\\rm s},{l},{\\mu}}^\\sb$.\nThen, by using the identity \n$\\sum_\\sb R_{{\\alpha} \\sb}A_{{\\rm s},\\mu}^\\sb = (\\nabla_{\\mu} \\bm n\\times \\bm n)^{\\alpha} + n^{\\alpha} A_{{\\rm s},{\\mu}}^{z}\n$,\nthe effective Hamiltonian reads \n\\begin{align}\n\tH_{\\rm eff} = & \\int d\\bm r \\; \\left[\\sum_{{\\mu} {\\alpha}} D_{\\mu}^{\\alpha} (\\nabla_{\\mu} \\bm n\\times \\bm n)^{\\alpha} + \\sum_{\\mu {l}} {j}_{{\\rm s},{l},{\\mu}}^\\parallel A_{{\\rm s},{\\mu}}^{z} \\right],\n\\label{Heff2}\n\\end{align}\nwhere \n${j}_{{\\rm s},{l},{\\mu}}^\\parallel \\equiv \\tilde{j}_{{\\rm s},{l},{\\mu}}^z=\\bm n\\cdot\\bm{j}_{{\\rm s},{l},{\\mu}}$, and \n\\begin{align}\n D_{\\mu}^{\\alpha} \\equiv\n \\sum_{{l}} {j}_{{\\rm s},{l},{\\mu}}^{\\perp, {\\alpha}}\n\\label{eq:dm_spincurrent}\n\\end{align}\nwith $j_{{\\rm s},{l},{\\mu}}^{\\perp, {\\alpha}}\\equiv {j}_{{\\rm s},{l},{\\mu}}^{\\alpha}-n^{\\alpha} {j}_{{\\rm s},{l},{\\mu}}^\\parallel$.\nThus, the DM interaction is given by the expectation value of the spin current density of electrons.\nMore precisely, the transversely polarized component of the spin current $j_{{\\rm s},{l},{\\mu}}^{\\perp}$ contributes to the DM interaction. \nOn the other hand, the longitudinally polarized component $j_{{\\rm s},{l},{\\mu}}^{\\parallel}$ contributes to the spin-transfer torque term, $j_{{\\rm s},{l},{\\mu}}^{\\parallel}A_{{\\rm s},{\\mu}}^z=j_{{\\rm s},{l},{\\mu}}^{\\parallel}(1-\\cos\\theta)\\nabla_{\\mu}\\phi$. We have checked that, in the case of the simplified model in Eq.~\\eqref{eq:Hamiltonian}, the contribution to the spin-transfer torque term from $j_{\\rm s}^\\parallel$ is cancelled by that from $\\ln Z_0$ in Eq.~\\eqref{Seff2} up to the $\\lambda^3$-order. This is expected since the spin-transfer torque will not arise spontaneously at equilibrium states. \n\nIn the practical calculation, we consider a general form of the spin current density as\n\\begin{align}\n\tj_{{\\rm s},{l},{\\mu}}^{\\alpha} = \\sum_{\\bm k}\\frac{1}{4} \\langle c_{\\bm k {l}}^\\dagger ( v_{\\mu} \\sigma^{\\alpha} + \\sigma^{\\alpha} v_{\\mu} ) c_{\\bm k {l}} \\rangle,\n\t\\label{J_DFT}\n\\end{align}\nwhere the velocity operator is defined as $v_{\\mu} = d H_{\\bm k}\/ d k_{\\mu}$ with \n$H_{\\bm k}=e^{-i\\bm k\\cdot \\bm x}H e^{i\\bm k\\cdot \\bm x}$. \nThis general form of spin current actually corresponds to the DM interaction when we apply the spin gauge field technique to a generalized Hamiltonian instead of Eq.~\\eqref{eq:Hamiltonian}.\nNote that the DM interaction for the local-spin model derived from the tight-binding Hamiltonian\\cite{katsnelson2010} corresponds to the discrete representation of Eqs.~\\eqref{eq:dm_spincurrent} and \\eqref{J_DFT}, which can be confirmed using Eq.~\\eqref{eq:JDcontinuum}.\n\nThe spin current in Eq.~\\eqref{eq:dm_spincurrent} is the expectation value at equilibrium states. 
Spin currents at equilibrium states arise mainly by two mechanisms.\nOne is due to the magnetization structure, which induces a spin current of the form $j_{{\\rm s},{\\mu}}^{\\alpha} \\propto (\\bm n\\times \\nabla_{\\mu} \\bm n)^{\\alpha}$.\nSuch a magnetization-induced spin current is known to be relevant, for example, in multiferroic systems\\cite{katsura2005}.\nSince this spin current is proportional to $\\nabla_{\\mu} \\bm n$, it does not contribute to the first-order derivative terms in Eq.~\\eqref{Heff2}, but contributes to higher-order derivative terms in the effective Hamiltonian.\nThe other mechanism that induces a spin current at equilibrium states is the spin--orbit interaction with broken inversion symmetry, represented by the last term in Eq.~\\eqref{eq:Hamiltonian}. \nThis interaction tends to lock the relative angle of the spin and the momentum of electrons and yields a finite spin current. The existence of such a spin--orbit-induced spin current has been noticed and discussed in the literature\\cite{rashba2003,usaj2005,wang2006_2,sonin2007,sonin2007_prb,tokatly2008}. \nThis spin--orbit-induced spin current arises even when $\\nabla_{\\mu} \\bm n=0$ and contributes to the first-order derivative terms in Eq.~\\eqref{Heff2}.\nThus, to be precise, $j_{\\rm s}^\\perp$ in Eq.~\\eqref{eq:dm_spincurrent} is generally not the total amount of spin current flowing in the system; when we expand a spin current in powers of $\\nabla_\\mu \\bm n$, then $j_{\\rm s}^\\perp$ in Eq.~\\eqref{eq:dm_spincurrent} corresponds to the non-derivative part. In the practical calculation to evaluate, for example, the $D_\\mu^x$ and $D_\\mu^y$ components of the DM interaction, we set the magnetization direction $\\bm n$ uniformly in the $z$-direction and calculate $j_{{\\rm s},\\mu}^x$ and $j_{{\\rm s},\\mu}^y$, respectively. \n\n\nOur result, Eq.~\\eqref{eq:dm_spincurrent}, clarifies that a spin current is a direct origin of the DM interaction. Let us give a physical interpretation of this result. Generally, when an interaction is mediated by some medium, the interaction changes with its flow. This phenomenon is known as the Doppler effect. In the case of magnets, magnetic interactions are mediated by electron spin hopping among magnetic moments. Therefore, when the electron spin flows as a spin current, the magnetic interaction changes and an additional interaction will emerge. Since the spin current makes two adjacent magnetic moments inequivalent in the sense that one is located upstream and the other downstream, the additional magnetic interaction is antisymmetric with respect to the exchange of the two adjacent magnetic moments. Thus, an antisymmetric magnetic interaction, that is, the DM interaction, arises as the Doppler effect due to the spin current. Let us see this Doppler effect more closely in Fig.~\\ref{fig:Doppler}. Consider two adjacent magnetic moments with directions $\\bm n$ and $\\bm n'=\\bm n+ (\\bm a\\cdot \\bm \\nabla)\\bm n$, where $\\bm a$ is the vector connecting the two sites, and electrons hopping between them. When an electron with spin $\\bm s$ hops from $\\bm n$ to $\\bm n'$, its spin precesses by $\\epsilon (\\bm s\\times \\bm n)$, with $\\epsilon$ a small coefficient, due to the torque from the magnetic moments. For the electron hopping with such a precession, the spatial variation of the magnetic moments will look like not $(\\bm a\\cdot \\bm \\nabla)\\bm n$ but rather $(\\bm a\\cdot \\bm \\nabla)\\bm n - \\epsilon (\\bm s\\times \\bm n)$. 
When we express a spin current as $j_{{\\rm s},\\mu}^\\alpha=s^\\alpha v_\\mu$ with $v_\\mu$ the velocity of an electron, the discussion so far suggests that the derivative of the magnetization vector changes from $\\nabla_\\mu \\bm n$ to $\\mathfrak{D}_\\mu\\bm n\\equiv\\nabla_\\mu\\bm n-\\eta(\\bm j_{{\\rm s},\\mu}\\times \\bm n)$, with $\\eta$ some coefficient, due to the presence of the spin current. Accordingly, the interaction energy, which was originally $(\\nabla_\\mu \\bm n)^2$ when there are no spin currents, changes into \n$(\\mathfrak{D}_\\mu\\bm n)^2\\cong (\\nabla_\\mu\\bm n)^2+2\\sum_\\mu\\eta\\bm j_{{\\rm s},\\mu}\\cdot(\\nabla_\\mu\\bm n\\times \\bm n)$. Thus, the DM interaction $\\sum_\\mu\\bm D_\\mu\\cdot(\\nabla_\\mu\\bm n\\times \\bm n)$ arises as the spin-current-induced Doppler shift of the exchange energy $(\\nabla_\\mu \\bm n)^2$. \nNote that only the spin current component $j_{\\rm s}^\\perp$ perpendicular to $\\bm n$ contributes to the Doppler shift $\\mathfrak{D}_\\mu\\bm n\\equiv\\nabla_\\mu\\bm n-\\eta(\\bm j_{{\\rm s},\\mu}\\times \\bm n)$, which is consistent with the result of Eq.~\\eqref{eq:dm_spincurrent}. \n\n\nFinally, let us mention that the spin current method can be applied to insulating systems.\nIn fact, the intrinsic spin current can be finite even in insulators.\nThe application to insulators such as Cu$_2$OSeO$_3$ will be discussed elsewhere.\n\n\\begin{figure}\n\t\\includegraphics[bb=0 0 1224 738, width=0.45\\textwidth]{81712Fig1.png}\n\t\\caption{\n\t\t(Color online) Schematic picture showing the mechanism of the spin-current-induced Doppler shift.\t\n A spin current $j_{{\\rm s},\\mu}^\\alpha=s^\\alpha v_\\mu$ is flowing with spin polarization $\\bm s$ and velocity $\\bm v$. Due to the torque from the localized spin $\\bm n$, this spin current flows with its spin precessing. Such a spin current with precession changes the spatial variation of the localized spins from $\\nabla_\\mu \\bm n$ to $\\nabla_\\mu \\bm n - \\bm j_{{\\rm s},\\mu}\\times \\bm n$, which is the Doppler shift due to the spin current. 
\n Adapted with permission.\\cite{kikuchi2016} Copyright 2016, American Physical Society.\n\t}\n\\label{fig:Doppler}\n\\end{figure}\n\n\\subsection{Perturbation with respect to exchange couplings}\nHere, we consider Eq.~\\eqref{eq:Hamiltonian} directly to derive the DM interaction.\nWe assume that $J_{\\rm ex}$ is small and consider the second-order perturbation to integrate out the electron degrees of freedom, as in the derivation of the RKKY interaction.\nSince there is a spin-orbit interaction, the effective Hamiltonian for $\\bm n(\\bm r)$ includes the DM interaction term $D_{\\mu}^{\\alpha} (\\nabla_{\\mu} \\bm n\\times \\bm n)^{\\alpha}$ with\\cite{koretsune2015}\n\\begin{align}\n\tD_{\\mu}^\\sb = \\frac{J_{\\rm ex}^2}{2} \n\t\\lim_{\\bm q \\to 0} \\frac{\\partial \\chi_0^{{\\alpha} \\gamma}(\\bm q,i\\omega_n=0)}{i \\partial q^{\\mu}}.\n\t\\label{eq:dm_chi}\n\\end{align}\nHere, $({\\alpha},\\sb,\\gamma)=(x,y,z), (y,z,x),$ or $(z,x,y)$, and\n$\\chi_0$ is the non-interacting spin susceptibility defined as\n\\begin{align}\n\t&\\chi_0^{{\\alpha} \\gamma}(\\bm q, i\\omega_n) = -\\frac{T}{V} \\sum_{l,l',s_1,s_2,s_3,s_4}\\sum_{\\bm k, m} \\sigma^{\\alpha}_{s_4 s_1}\\nonumber\\\\\n\t&\\times G^0_{ls_1 l' s_2} (\\bm k, i\\omega_m) \\sigma^\\gamma_{s_2 s_3} G^0_{l' s_3 l s_4}(\\bm k + \\bm q, i\\omega_m + i \\omega_n),\n\\end{align}\nwhere $\\sigma$ is the Pauli matrix, $G^0$ is the non-interacting Green's function in the orbital basis, and $l, l'$ are orbital indices.\nUsing this spin susceptibility, we can write the DM interaction as\n\\begin{align}\n\tD_{\\mu}^\\sb &= \\frac{1}{V} \\sum_{\\bm k} D_{\\mu}^\\sb(\\bm k)\\\\\n\tD_{\\mu}^\\sb(\\bm k) &= \n\t\\lim_{\\bm q \\to 0} \\frac{\\partial}{i \\partial q^{\\mu}} \\sum_{n,n'}\n\t\\frac{f(\\varepsilon_{n' \\bm k+ \\bm q}) - f(\\varepsilon_{n \\bm k})}{\\varepsilon_{n' \\bm k+ \\bm q} - \\varepsilon_{n \\bm k}}\\nonumber\\\\\n\t& \\times \\langle n\\bm k| \\sigma^\\alpha | n' \\bm k + \\bm q \\rangle\n\t\\langle n'\\bm k+ \\bm q| \\sigma^\\gamma | n \\bm k \\rangle,\n\\end{align}\t\nwhere $| n \\bm k \\rangle$ is the eigenvector of the Kohn-Sham Hamiltonian with eigenvalue $\\varepsilon_{n \\bm k}$.\nNote that this representation involves a product of two Green's functions, which is similar to approaches based on the magnetic force theorem and is in sharp contrast to the spin current approach.\n\n\n\\subsection{Relationship between band structures and DM interactions}\nOne advantage of the spin current and spin susceptibility approaches is that we can discuss the relationship between band structures and the DM interaction.\nIn fact, we can easily calculate the contribution from the band anticrossing points, where the nontrivial spin texture arises owing to the spin-orbit couplings.\nHere, let us consider the simple 2$\\times$2 Hamiltonian\n\\begin{align}\n\tH = \\gamma (k_x \\sigma^x + k_y \\sigma^y + m \\sigma^z).\n\t\\label{eq:2x2ham}\n\\end{align}\nThen, using Eq.~\\eqref{eq:dm_spincurrent} or Eq.~\\eqref{eq:dm_chi},\nwe obtain\n\\begin{align}\n\tD_{\\mu}^{\\alpha} = \\gamma D \\begin{pmatrix}\n\t\t1 & 0 \\\\\n\t\t0 & 1\n\t\\end{pmatrix}\n\\end{align}\nwith\n\\begin{align}\n\tD = n_{\\rm el} = \\pi (\\mu^2 - m^2) [ \\theta(\\mu - m) - \\theta(-\\mu - m) ]\n\\end{align}\nfrom Eq.~\\eqref{eq:dm_spincurrent}\nand\n\\begin{align}\n\tD = \\frac{J_{\\rm ex}^2}{16 \\pi} [ \\theta(\\mu - m) - \\theta(-\\mu - m) ]\n\\end{align}\nfrom Eq.~\\eqref{eq:dm_chi}.\nHere we assume that the number of electrons, $n_{\\rm el}$, is zero at $\\mu = 0$, and the row and column correspond to spatial 
(${\\mu}$) and spin (${\\alpha}$) indices, respectively.\nSchematic pictures of the energy band and the DM interactions are shown in Fig.~\\ref{fig:schematic2x2}.\nIt is interesting to note that the DM interaction is negative for $\\mu < -m$ and positive for $\\mu > m$ for $\\gamma > 0$,\nindicating that this anticrossing point gives positive contributions to the DM interaction.\nThat is, when the chemical potential sweeps across the anticrossing points from below to above, the DM interaction increases.\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[scale=0.35]{81712Fig2.eps}\n\t\t\\caption{\n\t\t\t(Color online) (a) Band structure of the two-band model defined in Eq.~\\eqref{eq:2x2ham} and (b) the chemical potential dependence of the DM interaction corresponding to this band structure calculated by the spin current method (black line) and spin susceptibility method (red line).\n\t\t}\n\t\\label{fig:schematic2x2}\n\t\\end{center}\n\\end{figure}\n\nThe relationship between the spin configurations and the symmetry of the DM coefficient is also clear in these formalisms.\nLet us consider three typical spin configurations of a conduction electron in the momentum space, the Rashba (which arises in polar systems), Dresselhaus, and Weyl (in chiral systems) configurations, represented by the Hamiltonians\n$H_{\\rm R}=\\alpha(k_x\\sigma^y-k_y\\sigma^x)$,\n$H_{\\rm D}=\\beta(k_x\\sigma^x-k_y\\sigma^y)$, and\n$H_{\\rm W}=\\gamma(k_x\\sigma^x+k_y\\sigma^y)$, respectively.\nThe schematic spin textures are shown in Fig.\\ \\ref{fig:spin_textures}.\nThe DM interactions in these cases (denoted by $D_{\\rm R}$, $D_{\\rm D}$, and $D_{\\rm W}$, respectively) are\n\\begin{align}\n\t&D_{{\\rm R},{\\mu}}^{\\alpha} = \\alpha n_{\\rm el} \\left(\\begin{array}{ccc} 0 &1&0\\\\ -1 & 0 & 0 \\\\ 0&0& 0\\end{array}\\right),\n\\;\nD_{{\\rm D},{\\mu}}^{\\alpha} = \\beta n_{\\rm el} \\left(\\begin{array}{ccc} 1 &0&0\\\\ 0 & -1 & 0 \\\\ 0&0& 0\\end{array}\\right),\n\\;\n\\notag \\\\\n&D_{ {\\rm W},{\\mu}}^{\\alpha} = \\gamma n_{\\rm el} \\left(\\begin{array}{ccc} 1 &0&0\\\\ 0 & 1 & 0 \\\\ 0&0& 0\\end{array}\\right).\n\\label{DR and DD and DW}\n\\end{align}\nThis result clearly relates the symmetry of crystal structures, spin-orbit couplings, and the DM interactions.\nFor example, DM interactions are antisymmetric in polar systems whereas diagonal DM interactions are expected in non-polar systems, as discussed by a different approach\\cite{Kim13,Gungordu16}.\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[bb=0 0 1200 600, scale=0.20]{81712Fig3.png}\n\t\t\\caption{\n\t\t\t(Color online) Spin texture in momentum space for (a) Rashba, (b) Dresselhaus, and (c) Weyl-type Hamiltonians.\n Adapted with permission.\\cite{kikuchi2016} Copyright 2016, American Physical Society.\n\t\t}\n\t\t\\label{fig:spin_textures}\n\t\\end{center}\n\\end{figure}\n\n\n\n\n\n\\section{Application to Chiral Ferromagnets}\n\n\\subsection{Electronic structure of FeGe}\nFeGe is a B20-type chiral ferromagnet and is extensively studied experimentally.\nA skyrmion crystal state due to the DM interaction has been observed near room temperature\\cite{yu2011} and the divergence of the skyrmion size has been pointed out in Mn$_{1-x}$Fe$_x$Ge\\ with $x=0.8$, indicating the sign change of the DM interaction\\cite{shibata2013}.\nNeutron scattering experiments also suggest the sign change of the DM interaction in Mn$_{1-x}$Fe$_x$Ge\\ with $x=0.8$\\cite{grigoriev2013} and Fe$_{1-x}$Co$_x$Ge\\ with $x=0.6$\\cite{grigoriev2014}.\nIn MnGe, a 
unique three-dimensional spin structure has been observed\\cite{kanazawa2011,kanazawa2012}.\nThe crystal symmetry of the B20 compounds is P2$_1$3, and owing to its symmetry, the DM interaction should be given as $D_\\mu^{\\alpha} = D \\delta_{{\\alpha}\\mu}$.\n\nFigure \\ref{fig:band}(a) shows the DFT band structure of FeGe (black solid lines).\nHere, the spin-orbit couplings are included and the total ferromagnetic moment is parallel to the (001) direction ($z$ axis).\nThe calculated local magnetic moment is 1.18 $\\mu_B$ per Fe atom, which is consistent with the results of experiments\\cite{wappling1968,lundgren1968} and previous calculations\\cite{yamada2003}.\nThe red broken lines are the Wannier-interpolated band structure with Fe 3d and Ge 4p Wannier orbitals.\nAccording to the average energy difference between up and down spins for the Fe 3d orbitals, the exchange splitting of the 3d orbitals is estimated to be $\\Delta = 1.17$ eV.\nFigure \\ref{fig:band}(b) shows the obtained tight-binding band structure around the Fermi level with the color representing the weight of the up spin.\nSince the spin-orbit coupling of FeGe is not strong, each band is basically characterized as either an up-spin or down-spin band, and the complex spin texture emerges only around the band anticrossing region.\nHereafter, to discuss the atomic composition dependences of Mn$_{1-x}$Fe$_x$Ge\\ and Fe$_{1-x}$Co$_x$Ge, we show the results obtained using the electronic structures with self-consistent charge densities for corresponding carrier densities by fixing the atomic geometries and the lattice constant to the experimental values of FeGe\\cite{lebech1989}.\n\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[scale=0.60]{81712Fig4a.eps}\n\t\t\\includegraphics[scale=0.60]{81712Fig4b.eps}\n\t\t\\caption{\n\t\t\t(Color online) (a) Comparison between DFT band structure (black solid lines) and tight-binding band structure (red broken lines).\n\t\t\t(b) Detailed band structure around the Fermi level with colors representing the weight of the up spin;\n\t\t\tthat is, red (blue) lines correspond to up-spin (down-spin) bands.\n\t\t\tThe Fermi level is set to zero.\n\t\t\tAdapted from Ref.\\ \\citen{koretsune2015} under the CC-BY 4.0 license.\n\t\t}\n\t\t\\label{fig:band}\n\t\\end{center}\n\\end{figure}\n\n\\subsection{DM interaction using $E(q)$}\n\n\\begin{figure*}\n\t\\begin{center}\n\t\t\\includegraphics[scale=0.70]{81712Fig5.eps}\n\t\t\\caption{\n\t\t\t(Color online) Spin stiffness $J$ and\n\t\t\tDM interaction $D$ for Mn$_{1-x}$Fe$_x$Ge\\ and Fe$_{1-x}$Co$_x$Ge\\ calculated using the energies of helical spin structures $E(q)$ from Ref.\\ \\citen{gayles2015} (green line) and Ref.\\ \\citen{kikuchi2016} (red lines).\n\t\t\tThe error bars for the red lines indicate the fitting errors of $E(q) = J q^2 - D q$.\n\t\t}\n\t\t\\label{fig:DM_Eq}\n\t\\end{center}\n\\end{figure*}\n\nFigure \\ref{fig:DM_Eq} shows the spin stiffness and the DM interaction calculated using $E(q)$ for Mn$_{1-x}$Fe$_x$Ge\\ and Fe$_{1-x}$Co$_x$Ge$\\;$ in two different studies.\\cite{gayles2015,kikuchi2016} \nAlthough there is a slight difference in the two studies due to the treatment of the alloys, the results well reproduce the sign change of the DM interaction observed in the experiments, that is, at $x=0.8$ in Mn$_{1-x}$Fe$_x$Ge\\ and $x=0.6$ in Fe$_{1-x}$Co$_x$Ge.\nIn addition, a recent spin-wave spectroscopy experiment has shown that the DM interaction for FeGe is $\\sim$ -3.6 meV\\AA, which is also consistent with the 
calculation.\nAround MnGe, in contrast, the calculated DM interaction is very small, while the experiments show a very small skyrmion size, indicating a large DM interaction\\cite{kanazawa2011}.\nThis discrepancy may come from the limited validity of the evaluation using $E(q)$, since this method assumes that the spatial spin variation is small.\nIn contrast with other skyrmion materials like FeGe and MnSi, MnGe is proposed to have a unique three-dimensional skyrmion lattice structure\\cite{kanazawa2012}, which suggests that not only the DM interaction but also other mechanisms such as the frustration of the exchange coupling may play an important role.\n\nThe values of $J$ do not change much for Mn$_{1-x}$Fe$_x$Ge\\ and are on the order of 1 eV\\AA$^2$.\nFor Fe$_{1-x}$Co$_x$Ge, on the other hand, the values of $J$ decrease with increasing $x$ and almost vanish for $x > 0.6$, indicating that the ferromagnetic state becomes unstable.\nThe wavelength of the helix, $4 \\pi J\/D$, for FeGe is estimated to be 160 nm, which is of the same order as the experimental value.\n\n\n\n\\subsection{DM interaction using the spin current}\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[scale=0.60]{81712Fig6.eps}\n\t\t\\caption{\n\t\t\t(Color online) Spin currents, $j_{{\\rm s},x}^x$ and $j_{{\\rm s}, y}^y$ for FeGe as a function of spin-orbit coupling strength.\n\t\t}\n\t\t\\label{fig:lambda_dep}\n\t\\end{center}\n\\end{figure}\n\n\\begin{figure*}\n\t\\begin{center}\n\t\t\\includegraphics[scale=0.70]{81712Fig7.eps}\n\n\t\t\\caption{\n\t\t\t(Color online) DM interactions $D$ for Mn$_{1-x}$Fe$_x$Ge\\ and Fe$_{1-x}$Co$_x$Ge\\ calculated using the spin current (blue line) and the $\\lambda$-linear contribution in the spin current (red line).\n\t\t\tThe error bars indicate the variances of $D_x^x$ and $D_y^y$.\n\t\t}\n\t\t\\label{fig:DM_js}\n\t\\end{center}\n\\end{figure*}\n\n\\begin{figure*}\n\t\\begin{center}\n\t\t\\includegraphics[scale=0.70]{81712Fig8.eps}\n\t\t\\caption{\n\t\t\t(Color online) (a) Contribution of each band to the DM interaction, $D_{n\\bm k}$, with dominant band anticrossing points circled, and (b) the energy distribution of the DM interaction, $D(E)$ (black line), for FeGe. 
\n\t\t\tThe Fermi energy dependence of the DM interaction, $D\\equiv\\int^{E_F} D(E')f(E')dE'$, within the rigid band approximation is also shown as the red line.\n\t\t\tAdapted with permission.\\cite{kikuchi2016} Copyright 2016, American Physical Society.\n\t\t\t}\n\t\t\\label{fig:Dk}\n\t\\end{center}\n\\end{figure*}\n\n\nNext, let us discuss the DM interaction using Eq.~\\eqref{J_DFT}.\nSince this equation is valid only to first order in the spin-orbit coupling, it is important to check how the spin current depends on the spin-orbit coupling strength.\nFor this purpose, an electronic structure calculation without the spin-orbit coupling is performed, and a tight-binding model that reproduces this band structure is constructed.\nBy mixing the two tight-binding Hamiltonians with and without the spin-orbit coupling, the spin current as a function of the spin-orbit coupling strength is calculated as shown in Fig.\\ \\ref{fig:lambda_dep}.\nAs can be seen, the spin current is determined almost entirely by the first-order contribution in the spin-orbit coupling in this system, indicating that the spin current gives a good approximation for the DM interaction.\nHere, the difference between $j_{{\\rm s},x}^x$ and $j_{{\\rm s},y}^y$ originates from the magnetic moment, which breaks the symmetries that connect the $x$- and $y$-axes.\nIn fact, if we consider the magnetic moment along the (111) direction, the two perpendicular spin currents have the same values due to the C$_3$ symmetry.\nNote that for systems with large spin-orbit coupling, such as a Co\/Pt bilayer, the higher-order contributions cannot be neglected\\cite{freimuth2017}.\n\nFigure \\ref{fig:DM_js} shows the DM interaction estimated directly using the spin current and by extracting the first-order contribution from the spin current.\nSince the idea of the spin current approach is essentially the same as that of the method using $E(q)$, these results are almost the same as those shown in Fig.\\ \\ref{fig:DM_Eq}.\nThe agreement between the first-order contribution and the original spin current supports the validity of this estimation.\nInterestingly, the difference between $j_{{\\rm s},x}^x$ and $j_{{\\rm s},y}^y$ becomes smaller for the first-order contribution.\nThe higher-order contribution of the spin-orbit coupling and the possible anisotropy of the DM interaction are important future issues.\n\nIn the spin current approach, it is possible to discuss the relationship between the band structure and the DM interaction and, as a result, the chemical potential dependence of the DM interaction.\nIn fact, we can rewrite Eq.~\\eqref{J_DFT} as\n\\begin{align}\n\tD = \\sum_{n \\bm k} D_{n\\bm k} f(\\varepsilon_{n\\bm k}) = \\int D(E) f(E) dE,\n\\end{align}\nwhere $n$, $f(E)$, $D_{n\\bm k}$, and $D(E)$ are the band index, the Fermi distribution function, the contribution of each band to the DM interaction, and the density of the DM interaction, respectively.\nFigure \\ref{fig:Dk}(a) shows the band structure of FeGe with the color representing $D_{n\\bm k}$.\nAs discussed in Sect. 2, we can see that the DM interaction comes from the restricted regions of the band structure where the complex spin texture arises due to band anticrossing.\nThe density of the DM interaction, $D(E)$, shown in Fig.\\ \\ref{fig:Dk}(b), also gives useful information for discussing the carrier density dependence of the DM interaction. 
\nThat is, in this case, $D(E) < 0$ for $E<0$ and $D(E) > 0$ for $E>0$ explain the dip structure around FeGe ($E=0$) and the resulting two sign changes in Mn$_{1-x}$Fe$_x$Ge\\ and Fe$_{1-x}$Co$_x$Ge. \n\n\n\\subsection{DM interaction using the spin susceptibility}\n\n\\begin{figure*}\n\t\\begin{center}\n\t\t\\includegraphics[scale=0.70]{81712Fig9.eps}\n\t\t\\caption{\n\t\t\tDM interactions for Mn$_{1-x}$Fe$_x$Ge\\ and Fe$_{1-x}$Co$_x$Ge\\ calculated using the off-diagonal spin susceptibility.\n\t\t\tThe error bars indicate the variances of $D_x^x$ and $D_y^y$.\n\t\t}\n\t\t\\label{fig:DM_chiq}\n\t\\end{center}\n\\end{figure*}\n\nThe DM interaction using Eq.~\\eqref{eq:dm_chi} is shown in Fig.\\ \\ref{fig:DM_chiq}.\nHere, the exchange coupling $J_{\\rm ex}$ is estimated by the exchange splitting of the $d$ orbitals, that is, the average energy difference of $3d$ Wannier orbitals for up and down spins.\nIn this result, the two sign-change points ($x\\sim0.4$ for Mn$_{1-x}$Fe$_x$Ge\\ and $x\\sim0.02$ for Fe$_{1-x}$Co$_x$Ge) are shifted to some extent from the positions in the experiments.\nThis may be because the perturbation expansion with respect to $J_{\\rm ex}$ is not a good approximation in this material.\nIn fact, $J_{\\rm ex}$ is not so small, and the effect of exchange splitting on the electronic structure cannot be explained only by $J_{\\rm ex} {\\bm n} \\cdot \\bm \\sigma$.\nOn the other hand, the DM interaction for Mn$_{1-x}$Fe$_x$Ge\\ with small $x$ is larger than the previous two results and is more consistent with the experimental result.\nThis suggests that the perturbation expansion with respect to $J_{\\rm ex}$ works reasonably well even for large-$J_{\\rm ex}$ systems and can give a better estimation when the derivative expansion with respect to the magnetic structure does not work.\nIn this way, this method can play a complementary role in the evaluation of the DM interaction.\n\n\n\\section{Conclusion}\nIn this paper, we have reviewed three approaches to evaluate the DM interaction.\nThe first one is to evaluate the energy of the helical spin structure, $E(\\bm q)$, directly from first principles and extract the DM interaction.\nThe method is very powerful, although the information obtained is limited.\nThe second one is the perturbation expansion with respect to the spin gauge field, which represents the spin twisting.\nThe idea is to extract the DM interaction term by twisting the spin structure, which is almost the same as the first approach, while this method needs only the electronic structure calculation for the uniform magnetic state.\nThis method gives a clear picture that the DM interaction can be expressed in terms of the spin current at equilibrium to first order in the spin-orbit coupling.\nFurthermore, the relationship to the band structure and the effect of carrier doping can be easily discussed.\nThe third one is the perturbation expansion with respect to the exchange coupling, which can be understood as an extension of the RKKY mechanism.\nThis method also clarifies the relationship between the band structure and the DM interaction, which is roughly consistent with the spin current approach.\n\nBy applying these methods to chiral ferromagnets, it has been shown that the first two approaches give almost the same results, which agree well with the experiments, while the third one gives a slightly different result.\nSince the starting points of the first two approaches and the third approach are completely different, these approaches will play 
complementary roles in the evaluation of the DM interaction.\nThe application of these methods to various systems and clarifying their validity are important future issues for designing materials utilizing this unique antisymmetric interaction.\n\n\n\\acknowledgement\nThe authors thank H. Fukuyama, N. Kanazawa, H. Kawaguchi, W. Koshibae, H. Kohno, T. Momoi, D. Morikawa, N. Nagaosa, M. Ogata, S. Seki, K. Shibata, Y. Suzuki, G. Tatara, and Y. Tokura for valuable discussions.\nIn particular, the authors thank G. Tatara for the collaboration in Ref.\\ \\citen{kikuchi2016} and T. Ko. and R. A. thank N. Nagaosa for the collaboration in Ref. \\citen{koretsune2015}.\nThis work was supported by JSPS KAKENHI Grant Numbers JP25400344 and JP26103006 and JST PRESTO Grant Number JPMJPR15N5. T. Ki. is a Yukawa Research Fellow supported by the Yukawa Memorial Foundation. \n\n\\bibliographystyle{jpsj}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nDetecting, interpreting and comparing structures and properties of network data about social interactions and complex physical phenomena is critically important to a variety of problems. However, this is a difficult task because comparisons between two or more networks can involve checking for graph or subgraph isomorphism, for which no tractable solution is known. Instead, various network properties (\\textit{e.g.}, degree distribution, centrality distributions) have been used to describe and compare networks. \n\nAnother approach is to consider a network's global structure as a by-product of a graph's local substructures~\\cite{ugander2013subgraph}. More sophisticated graph statistics are based on counting the number of small motifs \\cite{milo2002network} or graphlets~\\cite{ahmed2015efficient} present in the graph and comparing their distributions \\cite{yaveroglu2014revealing}. Unfortunately, graphlet counting presupposes that all possible graphlets be enumerated ahead of time. Because the number of unique graphlets increases exponentially with the number of nodes in the graphlet, previous work has been limited to graphlets of at most five nodes.\n\nAn alternative to developing sophisticated graph statistics is to learn graph generation models that encode properties of the graph in various ways. Graph generators like the Exponential Random Graph Model (ERGM) \\cite{robins+al}, the Chung-Lu Edge Configuration Model (CL) \\cite{chung+lu}, the Stochastic Kronecker Graph (SKG) \\cite{leskovec2010kronecker}, and the Block Two-Level Erd\\H{o}s-R\\'{e}nyi Model (BTER) \\cite{seshadhri2012community} can be fitted to real-world graphs.\n\nRecent work has found that many social and information networks have a more-or-less tree-like structure, which implies that detailed topological properties can be identified and extracted from a graph's {\\em tree decomposition}~\\cite{adcock2016tree}. \nBased on these findings, Aguinaga~\\textit{et~al.}~described a method to turn a graph's tree decomposition into a \\emph{Hyperedge Replacement Grammar} (HRG)\n~\\cite{aguinaga2016growing}. The HRG model can then generate new graphs with properties similar to the original.\n\nOne limitation of the HRG model is that the instructions for reassembling the building blocks, \\textit{i.e.}, the graph grammar, encode only enough information to ensure that the result is well-formed. 
HRG production rules are extracted directly from the tree decomposition: some rules come from the top of the tree, some from the middle, and some from the bottom. Then, to generate a new graph that is similar to the original graph, we would expect that rules from the top of the tree decomposition are applied first, rules from the middle next, and rules from the leaves of the tree last. However, when generating a new graph, the HRG model applies rules probabilistically, with each rule's probability proportional to its frequency in the grammar; the rules carry no context indicating when they should fire. HRG models therefore need a mechanism that corrects for this problem by providing context to the rules.\n\nIn the present work, we make three contributions:\n\\begin{enumerate}\n \\item We improve the HRG model by encoding context in latent variables.\n \\item We propose a methodology for evaluating our model that enforces a strict separation between training and test data, in order to guard against overfitting.\n \\item We test our model on 6 train\/test pairs of graphs and find that it discovers a better model, that is, one that generalizes better to the test data, than the original HRG model as well as Kronecker and Chung-Lu models.\n\\end{enumerate} \n\n\n\\section{Background}\n\nBefore we introduce our model, we first provide an overview and examples of the HRG model.\n\n\\subsection{Hyperedge Replacement Grammars}\n\nLike a context-free string grammar (CFG), an HRG has a set of production rules $A \\rightarrow R$, where $A$ is called the left-hand side (LHS) and $R$ is called the right-hand side (RHS).\nIn an HRG, a rule's RHS is a graph (or hypergraph) with zero or more \\emph{external} nodes. Applying the rule replaces a hyperedge labeled $A$ with the graph $R$; the nodes formerly joined by the hyperedge are merged with the external nodes of $R$. The HRG generates a graph by starting with the start nonterminal, $S$, and applying rules until no more nonterminal-labeled hyperedges remain.\n\n\\subsection{Tree Decomposition}\n\\begin{figure}\n\\centering\n\\input{.\/figs\/td}\n\\caption{An example graph and its tree decomposition. The width of this tree decomposition is 3, \\textit{i.e.}, the size of the largest bag minus 1. The sepset between each bag and its parent is labeled in blue. Bags are labeled ($\\eta_0$, etc.) for illustration purposes only.\n}\n\\label{fig:td}\n\\end{figure}\n\nGiven a graph $H = (V,E)$, a \\emph{tree decomposition} is a tree whose nodes, called \\emph{bags}, are labeled with subsets of $V$, in such a way that the following properties are satisfied:\n\\begin{itemize}\n\\item For each node $v \\in V$, there is a bag $\\eta$ that contains $v$.\n\\item For each edge $(u,v) \\in E$, there is a bag $\\eta$ that contains $u$ and $v$.\n\\item If bags $\\eta$ and $\\eta^\\prime$ contain $v$, then all the bags on the path from $\\eta$ to $\\eta^\\prime$ also contain $v$.\n\\end{itemize}\n\nIf $\\eta^\\prime$ is the parent of $\\eta$, define $\\bar{\\eta} = \\eta' \\cap \\eta$ (also known as the \\emph{sepset} between $\\eta'$ and $\\eta$). If $\\eta$ is the root, then $\\bar{\\eta} = \\emptyset$.\n\nAll graphs can be decomposed (though not uniquely) into a tree decomposition, as shown in Fig.~\\ref{fig:td}. In simple terms, a tree decomposition of a graph organizes its nodes into overlapping \\emph{bags} that form a tree. 
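\nTo make these three properties concrete, the following minimal sketch (our own illustration; the example graph, bags, and tree are hypothetical and are not the example of Fig.~\\ref{fig:td}) checks whether a candidate decomposition is valid:\n\\begin{verbatim}\ndef is_tree_decomposition(edges, bags, tree):\n    vertices = {v for e in edges for v in e}\n    # Property 1: every vertex appears in some bag.\n    if not all(any(v in b for b in bags.values())\n               for v in vertices):\n        return False\n    # Property 2: every edge is contained in some bag.\n    if not all(any(u in b and v in b for b in bags.values())\n               for u, v in edges):\n        return False\n    # Property 3: for each vertex v, the bags containing v\n    # must form a connected subtree of the decomposition.\n    for v in vertices:\n        holding = {name for name, b in bags.items() if v in b}\n        start = next(iter(holding))\n        seen, stack = {start}, [start]\n        while stack:\n            x = stack.pop()\n            for a, b in tree:\n                y = b if a == x else a if b == x else None\n                if y in holding and y not in seen:\n                    seen.add(y)\n                    stack.append(y)\n        if seen != holding:\n            return False\n    return True\n\n# Hypothetical example: a triangle 1-2-3 with a pendant node 4.\nedges = [(1, 2), (2, 3), (1, 3), (3, 4)]\nbags = {'a': {1, 2, 3}, 'b': {3, 4}}\nprint(is_tree_decomposition(edges, bags, [('a', 'b')]))  # True\n\\end{verbatim}\n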
The \\emph{width} of the tree decomposition, defined as the size of the largest bag minus one, measures how tree-like the graph is. Finding optimal tree decompositions is NP-hard, but there is significant interest in finding fast approximations because many computationally difficult problems can be solved efficiently when the data is constrained to be a tree-like structure. \n\n\\subsection{Grammar Extraction}\n\\label{sec:extraction}\n\n\\begin{figure}\n\\centering\n\\input{.\/figs\/rule}\n\\caption{Extraction of an HRG production rule from $\\eta_2$ containing graph vertices \\{2,3,4,5\\}. The LHS of the production rule corresponds to the sepset of the bag and its parent. The RHS of the production rule contains nodes from the bag, terminal edges induced from the original graph, and nonterminal edges from the sepset between the bag and its children.} \n\\label{fig:rule}\n\\end{figure}\n\n\\iffalse\n\\begin{figure*}\n\\centering\n\\input{.\/figs\/grammar}\n\\caption{The full grammar extracted from the example of Figure~\\ref{fig:td}.}\n\\label{fig:grammar}\n\\end{figure*}\n\\fi\n\nAguinaga et al.~\\cite{aguinaga2016growing} extract HRG rules from a graph, guided by a tree decomposition of the graph. For example, Figure~\\ref{fig:rule} illustrates how one HRG rule is extracted from a tree decomposition. \n\nIf we assume that the tree decomposition is rooted, then every bag $\\eta$ of the tree decomposition corresponds to an edge-induced subgraph, which we write $H_\\eta$, defined as follows: For each edge $(u,v) \\in E$, if every bag $\\eta'$ containing $u,v$ is either equal to $\\eta$ or a descendant of $\\eta$, then $(u,v) \\in H_\\eta$.\nFor example, in Figure~\\ref{fig:td}, the bag $\\eta_2 = \\{2,3,4,5\\}$ corresponds to the subgraph $H_{\\eta_2}$ whose edges are 1--2, 1--5, 2--3, 2--4, and 3--5.\n\nIf $H=(V,E)$ is a graph and $H^\\prime=(V^\\prime,E^\\prime)$ is an edge-induced subgraph of $H$, we define an \\emph{external} node of $H^\\prime$ to be any node of $H^\\prime$ that has a neighbor not in $H^\\prime$. Then, define the operation of \\emph{replacing} $H^\\prime$ with a hyperedge to be:\n\\begin{itemize}\n\\item Remove all edges in $E'$.\n\\item Remove all nodes in $V'$ except for the external nodes.\n\\item Add a hyperedge joining the external nodes.\n\\end{itemize}\n\nEvery bag $\\eta$ also induces an HRG rule $\\text{N}^{|\\bar{\\eta}|} \\rightarrow R$, where $R$ is constructed as follows. \n\\begin{itemize}\n\\item Make a copy of $H_\\eta$.\n\\item Label the nodes in $\\bar{\\eta}$ as external nodes.\n\\item For each child $\\eta_i$ of $\\eta$, replace $H_{\\eta_i}$ with a hyperedge labeled $\\text{N}^{|\\bar{\\eta}_i|}$.\n\\end{itemize}\nFor example, in Figure~\\ref{fig:rule}, the bag $\\eta_2$ induces the rule shown at right. The LHS is $\\text{N}^3$ because the sepset between $\\eta_2$ and its parent has three nodes ($3,4,5$); in the RHS, these three nodes are marked as external. The node numbers are for illustration purposes only; they are not actually stored with the production rules. The RHS has two terminal edges (2--3, 2--4) from the original graph and one nonterminal edge (2--5) corresponding to the sepset between $\\eta_2$ and its one child. \n\n\\iffalse\nThree other HRG production rules are extracted from the example tree decomposition in the same way (Figure~\\ref{fig:grammar}). Leaf bags in the tree decomposition do not produce RHSs with nonterminals, because they do not have children. 
The root bag has no parent, thus its sepset is empty and its LHS is $\\text{N}^0$, the start nonterminal.\n\\fi\n\nAfter an HRG is extracted from the tree decomposition, its production rules are gathered into a set, merging identical rules and assigning to each unique rule $(A\\rightarrow R)$ a probability $P(A \\rightarrow R) = P(R \\mid A)$ proportional to how many times it was encountered during extraction. This grammar can then be used to randomly generate new graphs, or compute the probability of another graph.\n\n\\section{Latent Variable Probabilistic Hyperedge Replacement Grammars}\n\n\nHere, we improve upon the HRG model by encoding more context into the model via latent variables, in a process that is analogous to how a first-order Markov chain can simulate a higher-order Markov chain by expanding the state space.\n\nIn this section, we adopt a notational shortcut. In an HRG production $A \\rightarrow R$, the RHS $R$ is a hypergraph fragment containing zero or more hyperedges with nonterminal labels $Y_1, \\ldots, Y_r$. We suppress the graph structure of $R$ and write the rule simply as $X \\rightarrow Y_1 \\cdots Y_r$.\n\n\\subsection{Nonterminal Splitting}\n\nFollowing previous work on probabilistic CFGs \\cite{matsuzaki+al:2005,petrov+al:2006}, we increase the context-sensitivity of the grammar by splitting each nonterminal symbol $X$ in the grammar into $n$ different \\emph{subsymbols}, $X_1, \\ldots, X_n$, which can represent the different contexts in which the rule is used. Thus, each rule in the original grammar is replaced with several \\emph{subrules} that use all possible combinations of the subsymbols of its nonterminal symbols. \n\nFor example, if $n=2$, the rule $\\n2\\rightarrow \\epsilon$ would be split into $\\n2_1\\rightarrow \\epsilon$ and $\\n2_2\\rightarrow \\epsilon$.\n\n\\iffalse\nFor example, if $n=2$, the rules in Fig.~\\ref{fig:grammar} would be split as follows:\n\\begin{align*}\n\\text{(from }\\n0&\\rightarrow \\n2~\\n3\\text{)} & \\n0_1 &\\rightarrow \\n2_1~\\n3_1 & \\n0_2 &\\rightarrow \\n2_1~\\n3_1 \\\\\n&&\\n0_1 &\\rightarrow \\n2_1~\\n3_2 &\\n0_2 &\\rightarrow \\n2_1~\\n3_2 \\\\\n&&\\n0_1 &\\rightarrow \\n2_2~\\n3_1 &\\n0_2 &\\rightarrow \\n2_2~\\n3_1 \\\\\n&&\\n0_1 &\\rightarrow \\n2_2~\\n3_2 &\\n0_2 &\\rightarrow \\n2_2~\\n3_2 \\\\\n\\text{(from }\\n2&\\rightarrow \\epsilon\\text{)} & \\n2_1 &\\rightarrow \\epsilon & \\n2_2 &\\rightarrow \\epsilon \\\\\n\\text{(from }\\n3&\\rightarrow \\n2\\text{)} & \\n3_1 &\\rightarrow \\n2_1 & \\n3_2 &\\rightarrow \\n2_1 \\\\\n&&\\n3_1 &\\rightarrow \\n2_2 & \\n3_2 &\\rightarrow \\n2_2.\n\\end{align*}\n\\fi\n\nIn general, a rule with $r$ nonterminal symbols on its right-hand side is split into $n^{r+1}$ subrules. For instance, with $n=2$, a rule such as $\\n0\\rightarrow \\n2~\\n3$, with $r=2$ nonterminals on its RHS, is split into $2^3=8$ subrules.\n\n\n\\subsection{Learning}\n\n\\newcommand{\\inside}[2]{\\ensuremath{P_{\\textit{in}}( #1,#2)}}\n\\newcommand{\\outside}[2]{\\ensuremath{P_{\\textit{out}}(#1,#2)}}\n\\newcommand{\\ruleprob}[1]{\\ensuremath{P(#1)}}\n\n\n\nAfter obtaining an $n$-split grammar from the training graphs, we want to learn probabilities for the subrules that maximize the likelihood of the training graphs and their tree decompositions.\nHere we use Expectation-Maximization (EM)~\\cite{dempster+al:1977}. 
In the E (Expectation) step, we use the Inside-Outside algorithm \\cite{lari+young:1990} to compute the expected count of each subrule given the training data, and in the M (Maximization) step, we update the subrule probabilities by normalizing their expected counts.\n\n\n\\begin{figure}\n\\centering\n\\input{.\/figs\/egg_ill}\n\\caption{Inside and outside probabilities. Here, a hyperedge labeled $X_i$ with three external nodes has generated a subgraph $\\eta$. The inside probability (left) of $\\eta,X_i$ is the probability of all the subderivations that generate subgraph $\\eta$ from $X_i$. The outside probability (right) is the probability of all the partial derivations that generate the complement of subgraph $\\eta$, with a hyperedge labeled $X_i$ in place of the subgraph.}\n\\label{fig:egg_ill}\n\\end{figure}\n\n\nThese expected counts can be computed efficiently using dynamic programming. \nGiven a tree decomposition $T$, consider a bag $\\eta$ and its corresponding subgraph $H_\\eta$. The grammar extraction method of Section~\\ref{sec:extraction} assigns $H_\\eta$ a nonterminal symbol, which we write $X$. Let $X_i$ be a subsymbol of $X$.\nThe \\emph{inside} probability of $H_\\eta$ with label $X_i$, written as $\\inside{\\eta}{X_i}$, is the total probability of all derivations starting from $X_i$ and ending in $H_\\eta$. The \\emph{outside} probability of $H_\\eta$ with label $X_i$, written as $\\outside{\\eta}{X_i}$, is the total probability of all derivations starting from $S$ and ending in $H$ with $H_\\eta$ replaced with a hyperedge labeled $X_i$. See Figure~\\ref{fig:egg_ill}.\n\n\\begin{figure}\n\\centering\n\\input{.\/figs\/egg_calc}\n\\caption{Computation of inside and outside probabilities. Here, a hyperedge labeled $X_i$ has been rewritten with a rule RHS with two hyperedges labeled $Y_j$ and $Z_k$. At left, the inside probability of $\\eta,X_i$ is incremented by the product of the rule and the inside probabilities of $\\eta_1,Y_j$ and $\\eta_2,Z_k$. At right, the outside probability of $\\eta_1,Y_j$ is incremented by the product of the outside probability of $\\eta,X_i$, the rule, and the inside probability of $\\eta_2,Z_k$.}\n\\label{fig:egg_calc}\n\\end{figure}\n\nThe inside probabilities can be calculated recursively, from smaller subgraphs to larger subgraphs. \nWe assume that bag $\\eta$ has at most two children, which holds if $T$ is binarized, analogously to Chomsky Normal Form~\\cite{chomsky1959certain}. If $\\eta$ has two children, let $\\eta_1$ and $\\eta_2$ be the children, let $Y$ and $Z$ be the labels of $H_{\\eta_1}$ and $H_{\\eta_2}$, and let $Y_j$ and $Z_k$ be subsymbols of $Y$ and $Z$. Then the inside probability of $H_\\eta$ with subsymbol $X_i$ is defined by:\n\\begin{align*}\n \\inside{\\eta}{X_i} &= \\sum_{j, k} \\ruleprob{X_i\\rightarrow Y_j Z_k} \\, \\inside{\\eta_1}{Y_j} \\, \\inside{\\eta_2}{Z_k} \\\\\n\\intertext{and similarly if $\\eta$ has only one child:} \n\\inside{\\eta}{X_i} &= \\sum_{j} \\ruleprob{X_i\\rightarrow Y_j} \\, \\inside{\\eta_1}{Y_j} \\\\\n\\intertext{or no children:}\n\\inside{\\eta}{X_i} &= \\ruleprob{X_i\\rightarrow \\epsilon}.\n\\end{align*} \n\nThe outside probabilities are calculated top-down. 
\nIf a bag~$\\eta$ has two children, then the outside probabilities of its children are defined by:\n\\begin{align*} \n \\outside{\\eta_1}{Y_j} &= \\sum_{i, k} \\ruleprob{X_i\\rightarrow Y_j Z_k} \\, \\outside{\\eta}{X_i} \\, \\inside{\\eta_2}{Z_k} \\\\\n \\outside{\\eta_2}{Z_k} &= \\sum_{i, j} \\ruleprob{X_i\\rightarrow Y_j Z_k} \\, \\outside{\\eta}{X_i} \\, \\inside{\\eta_1}{Y_j}. \\\\\n\\intertext{See Figure~\\ref{fig:egg_calc} for an illustration of this computation. Similarly, if $\\eta$ has only one child:}\n \\outside{\\eta_1}{Y_j} &= \\sum_{i} \\ruleprob{X_i\\rightarrow Y_j} \\, \\outside{\\eta}{X_i}.\n\\end{align*}\n\nIn the Expectation step, we compute the posterior probability of each subrule at each bag of each training tree decomposition~$T$:\n\\begin{align*}\n P(\\eta, X_i\\rightarrow Y_j Z_k\\mid T) &= \\frac1{P(T)}\\outside{\\eta}{X_i} \\ruleprob{X_i\\rightarrow Y_j Z_k} \\cdot{} \\\\ &\\qquad \\inside{\\eta_1}{Y_j} \\inside{\\eta_2}{Z_k} \\\\\n P(\\eta, X_i \\rightarrow Y_j \\mid T) &= \\frac1{P(T)}\\outside{\\eta}{X_i} \\ruleprob{X_i \\rightarrow Y_j} \\inside{\\eta_1}{Y_j} \\\\\n P(\\eta, X_i \\rightarrow \\epsilon \\mid T) &= \\frac1{P(T)}\\outside{\\eta}{X_i} \\ruleprob{X_i \\rightarrow \\epsilon}\n\\end{align*}\nwhere $P(T) = \\inside{\\eta_0}{S}$ and $\\eta_0$ is the root bag of $T$.\nThe expected count of each subrule is calculated by summing the posterior probability of the rule over all nodes of all training trees:\n\\begin{equation*}\n E[c(X_i\\rightarrow \\alpha)] = \\sum_{\\text{trees $T$}} \\sum_{\\eta \\in T} P(\\eta, X_i\\rightarrow \\alpha\\mid T) \n\\end{equation*}\nwhere $\\alpha$ is any right-hand side.\n\nIn the Maximization step, we use the expected counts calculated above to update the rule probabilities:\n\\begin{align*}\n \\ruleprob{X_i\\rightarrow \\alpha} &:= \\frac{E[c(X_i\\rightarrow \\alpha)]}{\\sum_{\\alpha'} E[c(X_i \\rightarrow \\alpha')]}.\n\\end{align*}\n\nThese probabilities are then used to repeat the E step. The method is guaranteed to converge to a local maximum of the likelihood function, but not necessarily to a global maximum.\n\n\\section{Evaluation}\n\nCurrent research in graph modelling and graph generation evaluates its results by comparing the generated graphs with the original graph in terms of aggregate properties such as degree distribution, clustering coefficients, or diameter~\\cite{aguinaga2016growing,leskovec2010kronecker,aguinaga2016infinity,seshadhri2012community,leskovec2005graphs,chakrabarti2006graph}. There are two potential problems with such metrics. First, these metrics do not test how well the model generalizes to model other graphs that represent similar data. Second, they are heuristics for which a generated graph's ``goodness'' is difficult to define or standardize. We discuss and address both of these problems in this section.\n\n\\subsection{Train\/Test Data}\n\nComparing generated graphs with the original graph cannot test how well the model generalizes to other graphs that represent similar data or different versions of the same network phenomena. To see why, consider the extreme case, in which a model simply memorizes the entire original graph. Then, the generated graphs are all identical to the original graph and therefore score perfectly according to these metrics. 
This is akin to overfitting a machine learning classifier on training data and then testing it on the same data, which would not reveal whether the model is able to generalize to unseen instances.\n\nIn standard data mining and machine learning tasks, the overfitting problem is typically addressed through cross-validation or by evaluating on heldout test data sets. In the present work, we adapt the idea of using heldout test data to evaluate graph grammars.\nIn experiments on synthetic graphs, this means that we generate two random graphs using the same model and parameters; we designate one as the training graph and the other as the test graph. In experiments on real-world graphs, we identify two graphs that represent the same phenomenon, \\textit{e.g.}, citations or collaborations, and we mark one as the training graph and one as the test graph.\n\nIn reality, we might not be able to find test graphs that have properties similar to those of the training graph. Fortunately, cross-validation can also be adapted to cases where no test graph is available by using disjoint subgraph samples from a single graph.\n\n\\subsection{Likelihood}\n\nIn addition to the possibility of overfitting, high-level aggregations of graph properties may not always be good comparators of two or more graphs. Indeed, examples abound in related literature showing how vastly different graphs can share similar aggregate statistics~\\cite{yaveroglu2014revealing}. We propose, as an additional metric, to evaluate models by using them to measure the likelihood of a test graph or graphs. Intuitively, this measures how well a model extracted from the training graph generalizes to a test graph. If the model simply memorizes the entire training graph, then it will have zero likelihood (the worst possible) on the test graph. If the model is better able to generalize to new graphs, then it will have higher likelihood on the test graph.\n\nUnfortunately, it is not always computationally feasible to compute the likelihood of graphs under previous models. But with HRGs, it can be computed in linear time given a tree decomposition. (It would also be possible, but slower, to sum the probabilities of \\emph{all} possible tree decompositions \\cite{chiang2013parsing}.) The likelihood on a test graph is simply $\\inside{\\eta_0}{S}$, where $\\eta_0$ is the root of the tree decomposition. Note that the model probabilities are estimated from the training graphs, even when computing likelihood on test graphs. Because this number is usually very small, it is common to take logs and work with log-likelihoods.\n\n\\subsection{Smoothing}\nA problem arises, however, if the test graph uses a rule that does not exist in the grammar extracted from the training graph. Then the inside probability will be zero (or a log-probability of $-\\infty$). This is because an HRG missing any necessary rules to construct the test graph cannot generate the test graph exactly, and therefore results in a zero probability.\n\nIn this case, we would still like to perform meaningful comparisons between models, if possible. So we apply smoothing as follows. To test an HRG $H$ on a test graph, we first extract an HRG, $H^\\prime$, from the test graph using the latent-variable HRG method. Define an \\emph{unknown rule} to be a rule in $H^\\prime$ but not in $H$. Then for each unknown rule, we add the rule to $H$ with a probability of $\\epsilon$. We can then compute the log likelihood on the test graph under the augmented grammar $H \\cup H^\\prime$. 
The final test log likelihood is calculated as\n\\begin{align*}\n L = L_{H \\cup H^\\prime} - c(H^\\prime \\setminus H) \\cdot \\log \\epsilon\n\\end{align*}\nwhere $L_{H \\cup H^\\prime}$ is the log likelihood of the test graph under the augmented grammar, $c(H^\\prime \\setminus H)$ is the number of times that unknown rules are used in the test graph, and $\\epsilon$ is the probability of each unknown rule. Note that as long as $\\epsilon$ is much smaller than the probability of any known rule, its value is irrelevant because $L$ does not depend on it.\n\nIdeally, we would like the number of unknown rules to be zero. In our experiments, we find that increasing the number of training graphs and\/or decreasing the size of the training graphs can bring this number to zero or close to zero. Note that if two HRGs have differing sets of unknown rules, then it is \\emph{not} meaningful to compare their log-likelihood on test graphs. But if two HRGs have identical sets of unknown rules, then their log-likelihoods can be meaningfully compared. We will exploit this fact when evaluating models with latent variables in the next section.\n\n\n\\section{Experiments}\n\nIn this section we test the ability of the latent-variable HRG (laHRG) to generate graphs using the train-test framework described above. We vary $n$, the number of subsymbols that each nonterminal symbol is split into, from 1 to 4. Note that the 1-split laHRG model is identical to the original HRG method. By varying the number of splits, we will be able to find the value that optimizes the test likelihood. We have provided all of the source code and data analysis scripts at \\url{https:\/\/github.com\/cindyxinyiwang\/laHRG}.\n\n\\subsection{Setup}\nGiven a training graph, we extract and train a latent-variable HRG from the graph. Then we evaluate the goodness of the grammar by calculating the log likelihood that the test graph could be generated from the grammar. \n\n\\begin{table}\n\\tabcolsep=1.5pt\\relax\n \\smal\n\\caption{Datasets used in experiments}\n\\label{table: real-world-graphs}\n\\centering\n\\begin{tabular}{lllrr} \n\\toprule\n & Type & Name & Nodes & Edges \\\\ \n\\midrule\n \\multirow{4}{*}{Synth} &\n \\multirow{2}{*}{Barabasi-Albert}\n & train-ba & 30,000 & 59,996 \\\\ \n & & test-ba & 30,000 & 59,996 \\\\\n \\cmidrule{2-5}\n & \\multirow{2}{*}{Watts-Strogatz}\n & train-ws & 30,000 & 60,000 \\\\\n & & test-ws & 30,000 & 60,000 \\\\\n\n\\midrule\n\n \\multirow{10}{*}{Real} &\n \\multirow{2}{*}{Citation}\n & train-cit-HepTh & 27,770 & 352,807 \\\\ \n & & test-cit-HepPh & 34,546 & 421,578 \\\\\n \\cmidrule{2-5}\n & \\multirow{2}{*}{Internet}\n & train-as-topo & 34,761 & 171,403 \\\\\n & & test-as-733 & 6,474 & 13,895 \\\\\n \\cmidrule{2-5}\n & \\multirow{2}{*}{Purchase}\n & train-amazon0312 & 400,727 & 3,200,440 \\\\\n && test-amazon0302 & 262,111 & 1,234,877 \\\\ \n \\cmidrule{2-5}\n & \\multirow{2}{*}{Wikipedia}\n & train-wiki-vote & 7,115 & 103,689 \\\\ \n & & test-wiki-talk & 2,394,385 & 5,021,410 \\\\ \n\\bottomrule\n\\end{tabular}\n\\end{table}\n\nFor evaluation, we need to make sure that the test graph is disjoint from the training graph to guard against overfitting. Here we introduce two techniques to achieve that: 1. we partition a single graph into two disjoint parts so that they do not have overlapping vertices. Then we train laHRG from one part of the graph, and calculate the log likelihood of the other disjoint part; 2. 
we choose two graphs of the same type, and use one for training and the other for evaluation.\n\nWe evaluate laHRG using the log-likelihood metric, for both evaluation methods mentioned above, on 6 types of graphs: 2 synthetic types (generated from random graph generators) and 4 real-world types. For the first evaluation method, we use one graph from each of the 6 types and partition it into training and testing parts. For the second evaluation method, we do not partition the training graph, but choose one additional graph of each type for testing. The graphs were obtained from the SNAP\\footnote{\\url{https:\/\/snap.stanford.edu\/data}} and KONECT\\footnote{\\url{http:\/\/konect.uni-koblenz.de}} graph repositories and are listed in Table~\\ref{table: real-world-graphs}.\n\n\\iffalse\nThe graph types are described below:\n\\begin{itemize}\\setlength\\itemsep{1em}\n \\item Barabasi-Albert: two random graphs generated using the preferential attachment algorithm with $\\alpha=2$.\n \\item Watts-Strogatz: two random graphs generated using the Watts-Strogatz ring method with 2 neighbors and a rewiring probability of 0.2.\n \\item Citation: cit-HepPh and cit-HepTh are the citation graphs of research papers in high energy physics phenomenology and theory, respectively, from January 1993 to April 2003. An edge in a citation graph represents a citation between two papers.\n \\item Internet: The as-topo network of the relationships between autonomous systems (controlled by independent network operators) on the Internet collected between Feb. 15, 2010 and Mar 10, 2010. The as-733 is a similar network from Jan. 2, 2000 -- the largest of 733 such graphs.\n \\item Purchase: The amazon0302 and amazon0312 graphs are co-purchasing networks from Mar. 2, 2003, and Mar. 12, 2003, respectively, as crawled from the ``customers who bought $x$ also bought $y$'' feature on Amazon.\n \\item Wikipedia: The wiki-talk network has an edge between two users if one user edits the talk page of another user on Wikipedia anytime before Jan. 2008. The wiki-vote network has an edge between two Wikipedia users if one user votes (for or against) the promotion of another user as an administrator.\n \n\\end{itemize}\n\\fi \n\nMany of these graphs are too large for a tree decomposition to be calculated. Instead, we randomly sampled a set of fixed-size subgraphs from the training graph and a set of fixed-size subgraphs from the test graph. Besides the cost of computing tree decompositions for large graphs, sampling multiple subgraphs is also important for extracting a broad set of rules. Recall that if a single rule required to generate the test graph is not found within the training graph, then the likelihood will be 0. Therefore, large test graphs would require many more training graphs in order to reduce (or hopefully eliminate) the need for smoothing.\n\n\nIn all experiments, we extract 500 samples of size-25 subgraphs from the training graph. We extract an HRG from each size-25 subgraph, and perform nonterminal splitting and EM training. The 500 HRGs are then combined and their weights are normalized to create the final laHRG model. We also take 4 samples of size-25 subgraphs from the test graph, calculate the log-likelihood of each under the laHRG model, and report the mean log-likelihood and confidence interval. 
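\n\nAt its core, this log-likelihood evaluation is the inside recursion from the Learning section. The following self-contained toy sketch illustrates the bottom-up computation; the grammar, its probabilities, and the binarized decomposition tree are invented purely for illustration and are not an extracted grammar or the interface of our released code:\n\\begin{verbatim}\nfrom collections import defaultdict\n\n# Toy subrule probabilities P(lhs -> rhs): nonterminals are\n# (symbol, subsymbol) pairs; an empty rhs encodes an epsilon rule.\nP = {\n    (('N0', 1), (('N2', 1), ('N3', 1))): 0.6,\n    (('N0', 1), (('N2', 2), ('N3', 1))): 0.4,\n    (('N2', 1), ()): 1.0,\n    (('N2', 2), ()): 1.0,\n    (('N3', 1), (('N2', 1),)): 0.7,\n    (('N3', 1), (('N2', 2),)): 0.3,\n}\n\n# A toy binarized tree decomposition: a node is (symbol, children).\ntree = ('N0', [('N2', []), ('N3', [('N2', [])])])\n\ndef inside(node):\n    # Map each subsymbol of this bag to its inside probability.\n    symbol, children = node\n    child_tables = [inside(c) for c in children]\n    table = defaultdict(float)\n    for (lhs, rhs), p in P.items():\n        if lhs[0] != symbol or len(rhs) != len(children):\n            continue\n        if any(y[0] != c[0] for y, c in zip(rhs, children)):\n            continue\n        prod = p\n        for y, tbl in zip(rhs, child_tables):\n            prod *= tbl.get(y[1], 0.0)\n        table[lhs[1]] += prod\n    return table\n\n# P(T) is the inside probability of the root with the start symbol.\nprint(dict(inside(tree)))  # {1: 1.0} for this toy grammar\n\\end{verbatim}\nIn the actual experiments the subrule probabilities come from EM training on the 500 sampled subgraphs, and the logarithm of this root quantity is the reported log-likelihood.\n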

We chose these parameters empirically such that there is no need for smoothing.
If we were to increase the subgraph size for the test graphs, then we would also need to increase the number of training graph samples or rely on smoothing to ensure non-zero likelihood.

To compute tree decompositions, we used a reimplementation of the QuickBB algorithm \cite{gogate-dechter-uai04}, with only the ``simplicial'' and ``almost-simplicial'' heuristics.

\subsection{Log-Likelihood Results} \label{sec:loglikelihood}

\begin{figure}[t]
\centering
\input{figs/gen_loglikelihood.tex}
\caption{On synthetic graphs, splitting nonterminal symbols ($n \geq 2$) always improves log-likelihood on the test graph, as compared to no splitting ($n=1$). We did not observe overfitting up to $n=4$. Left/right column: train on Barabasi-Albert (ba) or Watts-Strogatz (ws). Top/bottom row: test on ba/ws.
}
\label{fig:gen_loglikelihood}
\end{figure}

This section reports the performance of laHRG in terms of test-graph log-likelihood for the two train-test split methods described in the previous section. We mainly analyze the results for the second method: train on one graph and test on another graph of the same type. The first method yields similar results, and we include them to show that our evaluation also works for graphs for which it is difficult to find a different test graph of the same type.

\subsubsection{Validate on Different Graph of Same Type}
We first show the log-likelihood results on synthetic datasets. The two random graph models, the Barabasi-Albert graph and the Watts-Strogatz graph, generate very different graph types.
The four panels in Fig.~\ref{fig:gen_loglikelihood} show the log-likelihood results of the four combinations of training graphs and test graphs. Higher is better.

As a sanity check, we also trained a laHRG model on a Barabasi-Albert graph and tested it against a Watts-Strogatz graph and vice versa. We expect to see much lower log-likelihood scores, because a laHRG trained on one type of graph should fit graphs of another type poorly. The top-right and bottom-left panels in Fig.~\ref{fig:gen_loglikelihood} show that this is indeed the case; the log-likelihood measure and the laHRG model pass the sanity check.

\begin{figure}
\centering
\input{figs/real_loglikelihood.tex}
\caption{On real-world graphs, splitting nonterminal symbols ($n \geq 2$) always improves log-likelihood on the test graph, as compared to no splitting ($n=1$), peaking at 2 or 3 splits and then dropping due to overfitting. Error bars indicate 95\% confidence intervals. Higher is better.}
\label{fig:real_loglikelihood}
\end{figure}

Next we extracted and tested the laHRG model on real-world graphs. The log-likelihood results are illustrated in Fig.~\ref{fig:real_loglikelihood} for laHRG models of up to 4 splits.
We find that the log-likelihood scores peak at $n=2$ or $3$ and then decrease at $n=4$.

Recall that laHRG is the same as HRG~\cite{aguinaga2016growing} when $n=1$. Based on the results from Fig.~\ref{fig:real_loglikelihood}, we find that splitting does indeed increase HRG's ability to generate the test graph. However, increasing $n$ shows diminishing returns and sometimes decreases performance. The decrease in log-likelihood when $n>2$ is caused by model overfitting: increased splitting allows laHRG to fine-tune the rule probabilities to the training graph, which we find does not always generalize to the test graph.

\subsubsection{Validate on Disjoint Subgraph}

If it is difficult to find a test graph of the same type as the training graph, it is still possible to evaluate laHRG with the log-likelihood metric. We can partition the graph into two disjoint parts, and use one for training and the other for testing. Fig.~\ref{fig:single_loglikelihood} shows the log-likelihood of the test subgraphs that are disjoint from the training graphs. Again, splitting nonterminal symbols increases the log-likelihood on the test graph, but as the number of splits ($n$) grows further, the log-likelihood decreases due to overfitting.
\begin{figure}
\centering
\input{figs/single_loglikelihood.tex}
\caption{Log-likelihood on subgraphs that are disjoint from the training graphs. Trends similar to Fig.~\ref{fig:real_loglikelihood} and Fig.~\ref{fig:gen_loglikelihood} can be observed with this method. Error bars indicate 95\% confidence intervals. Higher is better.}
\label{fig:single_loglikelihood}
\end{figure}


\subsection{Comparing against other Graph Generators}

The log-likelihood metric is a principled approach to measuring the performance of a graph generator. Unfortunately, other graph generators are not capable of performing this type of likelihood calculation. In order to compare the laHRG model to other state-of-the-art graph generators, including the Kronecker~\cite{leskovec2010kronecker} and Chung-Lu~\cite{chung+lu} models, we revert to traditional graph metrics that compare a generated graph to a test graph (not to the original training graph).

Among the many choices for heuristic graph comparison metrics, we chose the degree distribution distance and the graphlet correlation distance (GCD).

Recall that the sampling of 25-node subgraphs was necessary to ensure a non-zero probability for the log-likelihood evaluation. No such requirement exists for evaluation on GCD.
Nevertheless, to maintain an apples-to-apples comparison, we performed similar graph sampling for the degree distribution distance and GCD: we trained Kronecker and Chung-Lu models on a 25-node subgraph from the training graph, generated a 25-node graph, compared the generated graph against a 25-node subgraph of the test graph, and repeated this process 500 times.

As a baseline, we also compared the training and test graphs directly to get a basic sense of their similarity. That is, we directly compared 25-node subgraphs from the training graph to 25-node subgraphs of the test graph without any model. We repeated this direct comparison 500 times and report the mean and 95\% confidence interval.
We call this the ``Direct'' comparison because it does not involve any graph generation.

\subsubsection{Degree Distribution Distance}

In the present work we apply the degree distribution distance of Pr{\v{z}}ulj~\cite{prvzulj2007biological} to compare two degree distributions, which is defined as follows. Given a graph $H$, we first scale and normalize the degree distribution of $H$:

$$S_H(k) = \frac{d_H(k)}{k}$$
$$T_H = \sum_{k=1}^\infty{S_H(k)}$$
$$N_H(k) = \frac{S_H(k)}{T_H}$$

\noindent in order to reduce the effect of high-degree nodes, where $d_H(k)$ is the number of nodes in $H$ that have degree $k$. Then we calculate the distance between two degree distributions $D\left(d_H, d_{H^\prime}\right)$ as:

$$D\left(d_H, d_{H^\prime}\right) = \frac{1}{\sqrt{2}}\sqrt{ \sum^\infty_{k=1}\left( N_H(k) - N_{H^\prime}(k) \right)^2 },$$

\noindent which is essentially a normalized sum of squared differences between the two distributions. We call this metric the degree distribution distance. Because this is a ``distance'' metric, low values indicate high similarity.
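
A direct transcription of this metric into code might look as follows (a minimal sketch for networkx-style graphs; padding the two distributions to a common maximum degree is our choice of implementation detail):

\begin{verbatim}
import numpy as np

def degree_distribution_distance(G, H):
    def normalized(graph):
        degs = [d for _, d in graph.degree() if d > 0]
        counts = np.bincount(degs)[1:]      # d(k) for k = 1..kmax
        k = np.arange(1, len(counts) + 1)
        S = counts / k                      # S(k) = d(k) / k
        return S / S.sum()                  # N(k) = S(k) / T
    NG, NH = normalized(G), normalized(H)
    m = max(len(NG), len(NH))               # pad to a common length
    NG = np.pad(NG, (0, m - len(NG)))
    NH = np.pad(NH, (0, m - len(NH)))
    return float(np.sqrt(np.sum((NG - NH) ** 2)) / np.sqrt(2))
\end{verbatim}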

Figure~\ref{fig:degree_distribution} illustrates the results of the degree distribution distance. Recall that laHRG is identical to HRG~\cite{aguinaga2016growing} when $n=1$. The Kronecker and Chung-Lu models do not have an $n$ parameter, so their plots are flat. All points represent the mean of 500 repetitions; each point has error bars indicating the 95\% confidence intervals -- although many error bars are too small to be seen.

The laHRG model generates graphs that more closely follow the degree distribution of the test graph than graphs generated by the Kronecker and Chung-Lu models. Higher nonterminal splitting, \textit{i.e.}, $n>1$, has little effect on the degree distribution distance.

It is expected that the Direct baseline outperforms all graph models, because the Direct baseline simply compares two graphs generated from the exact same generation process, which rewards an overfit model.

Here, the HRG models predict the test graph's degree distribution better than the Direct baseline does; whether this is because they generalize better, or due to chance, or some other reason, would need further analysis to determine. In any case, nonterminal splitting ($n \geq 2$) has only a slight effect on the model, generally attracting the degree distribution toward the training graph's and away from the test graph's.

Interestingly, the Direct baseline has similar or better performance than the Kronecker and Chung-Lu methods. It is unlikely that these results can be completely explained by overfitting. Instead, the Kronecker and Chung-Lu methods may perform poorly due to underfitting, wherein these models do not fit the training graph well enough. More work is needed to understand these results.

\begin{figure}[t]
\centering
\input{figs/degree_distribution.tex}
\input{figs/legend.tex}
\caption{HRG models are shown to generate graphs with lower (= better) degree distribution distance to the test graph when compared to other models. Splitting nonterminals ($n \geq 2$) sometimes decreases degree distance but sometimes increases it.}
\label{fig:degree_distribution}
\end{figure}


\begin{figure}[t]
\centering
\input{figs/gcd.tex}
\input{figs/legend.tex}
\caption{HRG models are shown to generate graphs with lower (= better) graphlet correlation distance (GCD) to the test graph, when compared with other models. Splitting nonterminals ($n \geq 2$) sometimes improves GCD and sometimes worsens it.}
\label{gcd_real}
\end{figure}



\subsubsection{Graphlet Correlation Distance}

Although the degree distribution is the most well-known and widely adopted graph comparison metric, it is far from complete. Two large networks can have nearly identical degree distributions yet be structurally very different.
For example, previous work has shown that it is easy to construct two or more networks with exactly the same degree distribution but substantially different structure and function~\cite{prvzulj2004modeling,li2005towards}.
There is mounting evidence that graphlet comparisons are a better way to measure the similarity between two graphs~\mbox{\cite{prvzulj2007biological,ugander2013subgraph}}.
Recent work from systems biology has identified a metric called the Graphlet Correlation Distance (GCD). The GCD computes the distance between two graphlet correlation matrices -- one matrix for each graph~\cite{yaveroglu2015proper}.
Because the GCD is a distance metric, low values indicate high similarity; in particular, the GCD of two isomorphic graphs is 0.
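
A schematic of the GCD computation, assuming a helper that supplies each graph's node-by-orbit graphlet degree matrix (such counts are typically produced by an external orbit-counting tool; the input format here is our assumption):

\begin{verbatim}
import numpy as np
from scipy.stats import spearmanr

def gcd(orbits_G, orbits_H):
    # orbits_G, orbits_H: (num_nodes x num_orbits) count matrices.
    # Each graph is summarized by the Spearman correlation matrix
    # of its orbit columns; the GCD is the Euclidean distance
    # between the two graphs' correlation matrices.
    C_G, _ = spearmanr(orbits_G)
    C_H, _ = spearmanr(orbits_H)
    iu = np.triu_indices_from(C_G, k=1)  # upper triangles only
    return float(np.linalg.norm(C_G[iu] - C_H[iu]))
\end{verbatim}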

Figure~\ref{gcd_real} illustrates the results of the GCD; the experimental setup and reporting conventions are the same as for Fig.~\ref{fig:degree_distribution}.

The Direct baseline illustrates how similar the training and test graphs are. As expected, we find that the Direct comparison is best on the random Watts-Strogatz graphs. But laHRG outperforms it on all of the real-world graphs.


\subsection{Comparison with Log-Likelihood Metric}
The GCD and degree distribution metrics indicate that laHRG is a better graph generator than alternatives like the Kronecker and Chung-Lu generators, but our experiments suggest that splitting nonterminals in HRG does not have much effect in terms of GCD and degree distribution. However, nonterminal splitting does increase the log-likelihood of the test graph, as explained in Section~\ref{sec:loglikelihood}. This discrepancy is probably because the log-likelihood metric captures more general structure and properties of a graph than GCD and the degree distribution do. Both GCD and the degree distribution focus on a specific graph property, which might not be perfectly correlated with the overall structure of a graph. The log-likelihood metric we propose, on the other hand, does not overemphasize a particular graph property, but directly measures the ability of the graph generator to generate the test graph.


\begin{table}[t]
\caption{Number of rules/parameters in grammars extracted from training graphs}
\label{fig:sizes}
\begin{center}
\begin{tabular}{lrrrr}
\toprule
& \multicolumn{4}{c}{$n$} \\
\addlinespace[1ex]
Train & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{3} & \multicolumn{1}{c}{4} \\
\midrule
Citation & 1,156 & 7,193 & 19,885 & 44,410 \\
Internet & 1,005 & 5,686 & 14,057 & 29,247 \\
Purchase & 969 & 6,196 & 18,237 & 38,186 \\
Wikipedia & 1,065 & 6,891 & 20,891 & 42,841 \\
\midrule
Barabasi-Albert & 48 & 298 & 930 & 2,126 \\
Watts-Strogatz & 60 & 346 & 1,023 & 2,380 \\
\bottomrule
\end{tabular}
\end{center}
\end{table}

\section{Grammar Analysis}

Recall that the HRG model merges two production rules if they are identical. Splitting rules produces subrules that have the same structure but different symbols, so they cannot be merged; splitting nonterminal nodes will therefore increase the size of the grammar. In the worst case, the blowup could be cubic in $n$: a rule with two nonterminals on its right-hand side splits into $n^3$ variants, one factor of $n$ for the left-hand side and one for each right-hand-side nonterminal. Table~\ref{fig:sizes} shows the sizes of all the grammars used in the experiments. Because rules with probability zero are excluded, the blowup is slightly less than cubic.

Here we see a trade-off between model size and performance. The larger the grammar gets, the better it is able to fit the training graph. On the other hand, we prefer smaller models to mitigate the possibility of overfitting. If we had not used separate training and test graphs, it would not be clear how to manage this trade-off; our evaluation, however, demonstrates that larger grammars (up to a point) do generalize to new data.
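
For intuition, a small counting sketch of this worst-case blowup (the arity histogram is hypothetical bookkeeping, not a structure from the actual implementation):

\begin{verbatim}
def worst_case_size(rules_by_arity, n):
    # rules_by_arity maps k, the number of nonterminals on a rule's
    # right-hand side, to the number of such rules; each rule can
    # yield up to n**(k+1) split variants.
    return sum(count * n ** (k + 1)
               for k, count in rules_by_arity.items())

# For example, 100 rules with two RHS nonterminals at n = 4
# contribute 100 * 4**3 = 6,400 variants before pruning:
print(worst_case_size({0: 50, 1: 80, 2: 100}, 4))  # 7880
\end{verbatim}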

\begin{figure}
\[\begin{aligned}
\n0 &\xrightarrow{0.22}
\begin{tikzpicture}[baseline=-2pt]
\node (n0)[externalnode] at (0,0) {};
\node (n1)[externalnode] at (1,0) {};
\node (e0)[nonterminal] at (0,-1) {$\nss11$};
\node (e01)[nonterminal] at (0.5,1) {$\nss22$};
\draw (n0)--(n1);
\draw (e0)--(n0);
\draw (e01)--(n0);
\draw (e01)--(n1);
\end{tikzpicture} \\[3ex]
\n1_1 &\xrightarrow{0.88} \begin{tikzpicture}[baseline=-2pt]
\node (n0)[externalnode] at (1,0) {};
\node (na)[internalnode] at (0,0) {};
\draw (n0)--(na);
\end{tikzpicture} \\[3ex]
\n1_2 &\xrightarrow{0.64} \begin{tikzpicture}[baseline=-2pt]
\node (na)[internalnode] at (0,0) {};
\end{tikzpicture}
\end{aligned}
\hspace{0.5in}
\begin{aligned}
\n2_1 &\xrightarrow{0.70}
\begin{tikzpicture}[baseline=-2pt]
\node (na)[internalnode] at (0,0) {};
\node (nb)[internalnode] at (1,0) {};
\node (n0)[externalnode] at (0.5,1) {};
\draw (na)--(n0);
\draw (nb)--(n0);
\end{tikzpicture} \\[1ex]
\n2_2 &\xrightarrow{0.60}
\begin{tikzpicture}[baseline=-2pt]
\node (na)[internalnode] at (0,0) {};
\node (nb)[internalnode] at (1,0) {};
\node (eab)[nonterminal] at (0.5,1) {$\nss22$};
\node (ea)[nonterminal] at (0,-1) {$\nss11$};
\draw (eab)--(na);
\draw (eab)--(nb);
\draw (ea)--(na);
\end{tikzpicture} \\[1ex]
\n2_2 &\xrightarrow{0.18}
\begin{tikzpicture}[baseline=-2pt]
\node (na)[internalnode] at (0,0) {};
\node (nb)[internalnode] at (1,0) {};
\node (ea1)[nonterminal] at (-0.35,-1) {$\nss11$};
\node (ea2)[nonterminal] at (0.35,-1) {$\nss11$};
\draw (ea1)--(na);
\draw (ea2)--(na);
\end{tikzpicture} \\[1ex]
\n2_2 &\xrightarrow{0.14}
\begin{tikzpicture}[baseline=-2pt]
\node (na)[internalnode] at (0,0) {};
\node (nb)[internalnode] at (1,0) {};
\node (eb)[nonterminal] at (1,-1) {$\nss12$};
\draw (eb)--(nb);
\end{tikzpicture}
\end{aligned}\]
\caption{Rules extracted from as-topo, 2-split nonterminals, showing only those with probability at least $0.1$ and with at most two external nodes.}
\label{fig:as-grammar}
\end{figure}

What do the HRG grammars look like? The models learned by unsupervised methods like EM can often be difficult to interpret, especially when the number of splits is high and the grammar is large. Figure~\ref{fig:as-grammar} shows selected 2-split rules extracted from the as-topo training graph, namely, those with probability at least 0.1 and with at most two external nodes.

We can see that the subsymbols behave quite differently from each other.
For example, the $\n2_1$ rule adds a connection between its two external nodes (via a third node), whereas none of the $\n2_2$ rules adds a connection (perhaps because, as can be seen in the RHS of the $\n0$ rule, they are already neighbors).

What can we learn from these graph grammars? This is an open question. If we assume that the tree decomposition provides a meaningful representation of the original graph, then we may be able to interrogate and assign meaning to these rules depending on their context. We leave this for future work.

\section{Conclusion}

The present work identifies and addresses two problems in applying Hyperedge Replacement Grammars (HRGs) to network data~\cite{aguinaga2016growing}: it adds latent variables to make production rules more sensitive to context, and it introduces a principled evaluation methodology that computes the log-likelihood that an HRG model generates a given graph.

To guard against the possibility of the new model overfitting the original graph, we enforced a separation between the original graph from which the model is trained and a different graph on which the model is tested. This methodology should be better at selecting models that generalize well to new data. We confirmed Aguinaga et al.'s original finding that HRGs perform better than the widely used Kronecker and Chung-Lu models, and showed that adding latent variables usually improves performance further.

Furthermore, we evaluated our method against the original HRG model by directly measuring the log-likelihood of the test graphs under all models. This metric is more principled than aggregating statistics of selected graph properties. Under this metric, our method improves over the original in all cases, peaking at either $n=2$ or $3$ splits.

HRGs extracted from tree decompositions are large, and splitting nonterminals grows the model even more. But our finding that 2- or 3-split grammars still generalize better to unseen graphs suggests that these models are not unreasonably large.

It remains for future work to test this claim by evaluating other generative graph models on test graphs distinct from the training graphs. It should also be possible to simplify the HRG and laHRG models by pruning low-probability rules while maintaining high performance. Finally, more analysis is needed to provide an interpretation for the patterns automatically discovered by laHRGs.