{"text":"\\section*{Introduction}\n\nSet $[n]:=\\{x_1,\\ldots,x_n\\}$. Let $K$ be a field and\n$S=K[x_1,\\ldots,x_n]$, a polynomial ring over $K$. Let $\\Delta$ be a\nsimplicial complex over $[n]$. For an integer $t\\geq 0$, Haghighi,\nYassemi and Zaare-Nahandi introduced the concept of ${\\mathrm{CM}}_t$-ness\nwhich is the pure version of simplicial complexes\n\\emph{Cohen-Macaulay in codimension $t$} studied in \\cite{MiNoSw}. A\nreason for the importance of ${\\mathrm{CM}}_t$ simplicial complexes is that\nthey generalizes two notions for simplicial complexes: being\nCohen-Macaulay and Buchsbaum. In particular, by the results from\n\\cite{Re,Sh}, ${\\mathrm{CM}}_0$ is the same as Cohen-Macaulayness and ${\\mathrm{CM}}_1$\nis identical with Buchsbaum property.\n\nIn \\cite{HaYaZa1}, the authors described some combinatorial properties\nof ${\\mathrm{CM}}_t$ simplicial complexes and gave some characterizations of them\nand generalized some results of \\cite{Hi,Mi}. Then, in \\cite{HaYaZa2},\nthey generalized a characterization of Cohen-Macaulay bipartite graphs\nfrom \\cite{HeHi} and \\cite{CoNa} on unmixed Buchsbaum graphs.\n\nBayati and Herzog defined the expansion functor in the category of\nfinitely generated multigraded $S$-modules and studied some homological\nbehaviors of this functor (see \\cite{BaHe}). The expansion functor helps\nus to present other multigraded $S$-modules from a given finitely generated\nmultigraded $S$-module which may have some of algebraic properties of the\nprimary module. This allows to introduce new structures of a given multigraded\n$S$-module with the same properties and especially to extend some homological\nor algebraic results for larger classes (see for example \\cite[Theorem 4.2]{BaHe}.\nThere are some combinatorial versions of expansion functor which we will recall in this paper.\n\nThe purpose of this paper is the study of behaviors of expansion\nfunctor on ${\\mathrm{CM}}_t$ complexes. We first recall some notations and\ndefinitions of ${\\mathrm{CM}}_t$ simplicial complexes in Section 1. In the\nnext section we describe the expansion functor in three contexts,\nthe expansion of a simplicial complex, the expansion of a simple\ngraph and the expansion of a monomial ideal. We show that there is a\nclose relationship between these three contexts. In Section 3 we\nprove that the expansion of a ${\\mathrm{CM}}_t$ complex $\\Delta$ with respect to\n$\\alpha$ is ${\\mathrm{CM}}_{t+e-k+1}$ but it is not ${\\mathrm{CM}}_{t+e-k}$ where\n$e=\\dim(\\Delta^\\alpha)+1$ and $k$ is the minimum of the components of\n$\\alpha$ (see Theorem \\ref{main}). In Section 4, we introduce a new\nfunctor, called contraction, which acts in contrast to expansion\nfunctor. As a main result of this section we show that if the\ncontraction of a ${\\mathrm{CM}}_t$ complex is pure and all components of the\nvector obtained from contraction are greater than or equal to $t$\nthen it is Buchsbaum (see Theorem \\ref{contract,CM-t}). The section\nis finished with a view towards the contraction of simple graphs.\n\n\\section{Preliminaries}\n\nLet $t$ be a non-negative integer. We recall from \\cite{HaYaZa1}\nthat a simplicial complex $\\Delta$ is called ${\\mathrm{CM}}_t$ or\n\\emph{Cohen-Macaulay in codimension $t$} if it is pure and for every\nface $F\\in\\Delta$ with $\\#(F)\\geq t$, $\\mathrm{link}_\\Delta(F)$ is Cohen-Macaulay.\nEvery ${\\mathrm{CM}}_t$ complex is also ${\\mathrm{CM}}_r$ for all $r\\geq t$. For $t<0$,\n${\\mathrm{CM}}_t$ means ${\\mathrm{CM}}_0$. 
The properties ${\mathrm{CM}}_0$ and ${\mathrm{CM}}_1$ are the same as Cohen-Macaulayness and Buchsbaumness, respectively.

The link of a face $F$ in a simplicial complex $\Delta$ is denoted by $\mathrm{link}_\Delta(F)$ and is given by
$$\mathrm{link}_\Delta(F)=\{G\in\Delta: G\cap F=\emptyset,\ G\cup F\in\Delta\}.$$
The following lemma is useful for checking the ${\mathrm{CM}}_t$ property of simplicial complexes:

\begin{lem}\label{CM-t eq}
(\cite[Lemma 2.3]{HaYaZa1}) Let $t\geq 1$ and let $\Delta$ be a nonempty complex. Then $\Delta$ is ${\mathrm{CM}}_t$ if and only if $\Delta$ is pure and $\mathrm{link}_\Delta(v)$ is ${\mathrm{CM}}_{t-1}$ for every vertex $v\in\Delta$.
\end{lem}

Let $\mathcal G=(V(\mathcal G),E(\mathcal G))$ be a simple graph with vertex set $V(\mathcal G)$ and edge set $E(\mathcal G)$. The \emph{independence complex} of $\mathcal G$ is the complex $\Delta_\mathcal G$ with vertex set $V(\mathcal G)$ and with faces consisting of independent sets of vertices of $\mathcal G$. Thus $F$ is a face of $\Delta_\mathcal G$ if and only if there is no edge of $\mathcal G$ joining any two vertices of $F$.

The \emph{edge ideal} of a simple graph $\mathcal G$, denoted by $I(\mathcal G)$, is the ideal of $S$ generated by all squarefree monomials $x_ix_j$ with $x_ix_j\in E(\mathcal G)$.

A simple graph $\mathcal G$ is called ${\mathrm{CM}}_t$ if $\Delta_\mathcal G$ is ${\mathrm{CM}}_t$, and it is called \emph{unmixed} if $\Delta_\mathcal G$ is pure.

For a monomial ideal $I\subset S$, we denote by $G(I)$ the unique minimal set of monomial generators of $I$.

\section{The expansion functor in combinatorial and algebraic concepts}

In this section we define the expansion of a simplicial complex and recall the expansion of a simple graph from \cite{Sc} and the expansion of a monomial ideal from \cite{BaHe}. We show that these concepts are intimately related to each other.

(1) Let $\alpha=(k_1,\ldots,k_n)\in\mathbb N^n$. For $F=\{x_{i_1},\ldots,x_{i_r}\}\subseteq \{x_1,\ldots,x_n\}$ define
$$F^\alpha=\{x_{i_11},\ldots,x_{i_1k_{i_1}},\ldots,x_{i_r1},\ldots,x_{i_rk_{i_r}}\}$$
as a subset of $[n]^\alpha:=\{x_{11},\ldots,x_{1k_1},\ldots,x_{n1},\ldots,x_{nk_n}\}$. $F^\alpha$ is called \emph{the expansion of $F$ with respect to $\alpha$}.

For a simplicial complex $\Delta=\langle F_1,\ldots,F_r\rangle$ on $[n]$, we define \emph{the expansion of $\Delta$ with respect to $\alpha$} as the simplicial complex
$$\Delta^\alpha=\langle F^\alpha_1,\ldots,F^\alpha_r\rangle.$$

(2) The \emph{duplication} of a vertex $x_i$ of a simple graph $\mathcal G$ was first introduced by Schrijver \cite{Sc}; it means extending the vertex set $V(\mathcal G)$ by a new vertex $x'_i$ and replacing $E(\mathcal G)$ by
$$E(\mathcal G)\cup\{(e\backslash\{x_i\})\cup\{x'_i\}:x_i\in e\in E(\mathcal G)\}.$$
For an $n$-tuple $\alpha=(k_1,\ldots,k_n)\in\mathbb N^n$ with positive integer entries, the \emph{expansion} of the simple graph $\mathcal G$ is denoted by $\mathcal G^\alpha$; it is obtained from $\mathcal G$ by successively duplicating every vertex $x_i$ a total of $k_i-1$ times.

(3) In \cite{BaHe} Bayati and Herzog defined the expansion functor in the category of finitely generated multigraded $S$-modules and studied some homological behaviors of this functor.
We recall the expansion functor defined by them only in the category of monomial ideals and refer the reader to \cite{BaHe} for the more general case of the category of finitely generated multigraded $S$-modules.

Let $S^\alpha$ be a polynomial ring over $K$ in the variables
$$x_{11},\ldots,x_{1k_1},\ldots,x_{n1},\ldots,x_{nk_n}.$$
Whenever $I\subset S$ is a monomial ideal minimally generated by $u_1,\ldots,u_r$, the expansion of $I$ with respect to $\alpha$ is defined by
$$I^\alpha=\sum^r_{i=1}P^{\nu_1(u_i)}_1\ldots P^{\nu_n(u_i)}_n\subset S^\alpha,$$
where $P_j=(x_{j1},\ldots,x_{jk_j})$ is a prime ideal of $S^\alpha$ and $\nu_j(u_i)$ is the exponent of $x_j$ in $u_i$.

It was shown in \cite{BaHe} that the expansion functor is exact and so $(S/I)^\alpha=S^\alpha/I^\alpha$. In the following lemmas we describe the relations between the above three concepts of the expansion functor.

\begin{lem}\label{epansion s-R}
For a simplicial complex $\Delta$ we have $I^\alpha_\Delta=I_{\Delta^\alpha}$. In particular, $K[\Delta]^\alpha=K[\Delta^\alpha]$.
\end{lem}
\begin{proof}
Let $\Delta=\langle F_1,\ldots,F_r\rangle$. Since $I_\Delta=\bigcap^r_{i=1}P_{F^c_i}$, it follows from Lemma 1.1 in \cite{BaHe} that $I^\alpha_\Delta=\bigcap^r_{i=1}P^\alpha_{F^c_i}$. The result is obtained from the fact that $P^\alpha_{F^c_i}=P_{(F^\alpha_i)^c}$.
\end{proof}

Let $u=x_{i_1}\ldots x_{i_t}\in S$ be a monomial and $\alpha=(k_1,\ldots,k_n)\in\mathbb N^n$. We set $u^\alpha=G((u)^\alpha)$, and for a set $A$ of monomials in $S$, $A^\alpha$ is defined as
$$A^\alpha=\bigcup_{u\in A} u^\alpha.$$
One can easily obtain the following lemma.

\begin{lem}
Let $I\subset S$ be a monomial ideal and $\alpha\in\mathbb N^n$. Then $G(I^\alpha)=G(I)^\alpha$.
\end{lem}

\begin{lem}
For a simple graph $\mathcal G$ on the vertex set $[n]$ and $\alpha\in\mathbb N^n$ we have $I(\mathcal G^\alpha)=I(\mathcal G)^\alpha$.
\end{lem}
\begin{proof}
Let $\alpha=(k_1,\ldots,k_n)$ and $P_j=(x_{j1},\ldots,x_{jk_j})$. Then it follows from Lemma 11(ii,iii) of \cite{BaHe} that
$$I(\mathcal G^\alpha)=(x_{ir}x_{js}:x_ix_j\in E(\mathcal G), 1\leq r\leq k_i,1\leq s\leq k_j)=\sum_{x_ix_j\in E(\mathcal G)}P_iP_j$$
$$=\sum_{x_ix_j\in E(\mathcal G)}(x_i)^\alpha (x_j)^\alpha=\Big(\sum_{x_ix_j\in E(\mathcal G)}(x_i)(x_j)\Big)^\alpha=I(\mathcal G)^\alpha.$$
\end{proof}

\section{The expansion of a ${\mathrm{CM}}_t$ complex}

The following proposition gives us some information about the expansion of a simplicial complex which is useful in the proofs of the next results.

\begin{prop}\label{ex indepen}
Let $\Delta$ be a simplicial complex and let $\alpha\in\mathbb N^n$.
\begin{enumerate}[\upshape (i)]
 \item For all $i\leq\dim(\Delta)$, there exists an epimorphism $\theta:\tilde{H}_{i}(\Delta^\alpha;K)\rightarrow\tilde{H}_{i}(\Delta;K)$. In particular,
$$\tilde{H}_{i}(\Delta^\alpha;K)/\ker(\theta)\cong\tilde{H}_{i}(\Delta;K);$$
 \item For $F\in\Delta^\alpha$ such that $F=G^\alpha$ for some $G\in\Delta$, we have
$$\mathrm{link}_{\Delta^\alpha}(F)=(\mathrm{link}_\Delta (G))^\alpha;$$
 \item For $F\in\Delta^\alpha$ such that $F\neq G^\alpha$ for every $G\in\Delta$, we have
$$\mathrm{link}_{\Delta^\alpha}F=\langle U^\alpha\backslash F\rangle\ast \mathrm{link}_{\Delta^\alpha}U^\alpha$$
for some $U\in\Delta$ with $F\subseteq U^\alpha$.
Here $\ast$ denotes the join of two simplicial complexes.

In the third case, $\mathrm{link}_{\Delta^\alpha}F$ is a cone and so acyclic, i.e., $\tilde{H}_i(\mathrm{link}_{\Delta^\alpha}F;K)=0$ for all $i>0$.

\end{enumerate}
\end{prop}
\begin{proof}
(i) Consider the map $\pi:[n]^\alpha\rightarrow [n]$ given by $\pi(x_{ij})=x_i$ for all $i,j$. Let the simplicial map $\varphi:\Delta^\alpha\rightarrow\Delta$ be defined by $\varphi(\{x_{i_1j_1},\ldots,x_{i_qj_q}\})=\{\pi(x_{i_1j_1}),\ldots,\pi(x_{i_qj_q})\}=\{x_{i_1},\ldots,x_{i_q}\}$. Actually, $\varphi$ is an extension of $\pi$ to $\Delta^\alpha$ by linearity. Define $\varphi_\#:\tilde{\mathcal C}_q(\Delta^\alpha;K)\rightarrow\tilde{\mathcal C}_q(\Delta;K)$, for each $q$, by
$$\varphi_\#([x_{i_0j_0},\ldots,x_{i_qj_q}])=\left\{
 \begin{array}{ll}
 0 & \mbox{if}\ i_r=i_t \ \mbox{for some indices}\ r\neq t, \\
 \left[\varphi(\{x_{i_0j_0}\}),\ldots,\varphi(\{x_{i_qj_q}\})\right] & \mbox{otherwise}.
 \end{array}
\right.
$$
It is clear from the definitions of $\tilde{\mathcal C}_q(\Delta^\alpha;K)$ and $\tilde{\mathcal C}_q(\Delta;K)$ that $\varphi_\#$ is well-defined. Also, define $\varphi_\alpha:\tilde{H}_{i}(\Delta^\alpha;K)\rightarrow\tilde{H}_{i}(\Delta;K)$ by
$$\varphi_\alpha:z+B_i(\Delta^\alpha)\mapsto \varphi_\#(z)+B_i(\Delta).$$
It is trivial that $\varphi_\alpha$ is onto.

(ii) The inclusion $\mathrm{link}_{\Delta^\alpha}(F)\supseteq(\mathrm{link}_\Delta (G))^\alpha$ is trivial, so we show the reverse inclusion. Let $\sigma\in\mathrm{link}_{\Delta^\alpha}(G^\alpha)$. Then $\sigma\cap G^\alpha=\emptyset$ and $\sigma\cup G^\alpha\in\Delta^\alpha$. We want to show $\pi(\sigma)\in\mathrm{link}_\Delta (G)$; in that case, $\pi(\sigma)^\alpha\in(\mathrm{link}_\Delta (G))^\alpha$ and, since $\sigma\subseteq \pi(\sigma)^\alpha$, we can conclude that $\sigma\in (\mathrm{link}_\Delta (G))^\alpha$.

Clearly, $\pi(\sigma)\cup G\in\Delta$. To show that $\pi(\sigma)\cap G=\emptyset$, suppose, on the contrary, that $x_i\in \pi(\sigma)\cap G$. Then $x_{ij}\in \sigma$ for some $j$. In particular, $x_{ij}\in G^\alpha$. Therefore $\sigma\cap G^\alpha\neq\emptyset$, a contradiction.

(iii) Let $\tau\in\mathrm{link}_{\Delta^\alpha}F$. First suppose that $\tau\cap \pi(F)^\alpha=\emptyset$. It follows from $\tau\cup F\in \Delta^\alpha$ that $\pi(\tau)^\alpha\cup\pi(F)^\alpha\in \Delta^\alpha$. Now from $\tau\subset\pi(\tau)^\alpha$ it follows that $\tau\cup\pi(F)^\alpha\in \Delta^\alpha$. Hence $\tau\in\mathrm{link}_{\Delta^\alpha}(\pi(F)^\alpha)$. So suppose instead that $\tau\cap \pi(F)^\alpha\neq\emptyset$. We write $\tau=(\tau\cap \pi(F)^\alpha)\cup (\tau\backslash \pi(F)^\alpha)$. It is clear that $\tau\cap \pi(F)^\alpha\subset \pi(F)^\alpha\backslash F$ and $\tau\backslash \pi(F)^\alpha\in\mathrm{link}_{\Delta^\alpha}\pi(F)^\alpha$. The reverse inclusion is trivial.
\end{proof}

\begin{rem}\label{pure expan}
Let $\Delta=\langle x_1x_2,x_2x_3\rangle$ be a complex on $[3]$ and $\alpha=(2,1,1)\in\mathbb N^3$. Then $\Delta^\alpha=\langle x_{11}x_{12}x_{21},x_{21}x_{31}\rangle$ is a complex on $\{x_{11},x_{12},x_{21},x_{31}\}$. Notice that $\Delta$ is pure but $\Delta^\alpha$ is not. Therefore, the expansion of a pure simplicial complex is not necessarily pure.
\end{rem}
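The construction in the remark above is easy to experiment with. The following Python sketch (ours, purely for illustration; the function names are not from the literature) computes $F^\alpha$ facet by facet and checks purity, reproducing the example $\Delta=\langle x_1x_2,x_2x_3\rangle$ with $\alpha=(2,1,1)$:

\begin{verbatim}
# Illustrative sketch: the expansion of a simplicial complex with
# respect to alpha, following the definition of F^alpha in Section 2.
def expand_face(F, alpha):
    # F^alpha: replace each vertex i in F by its copies (i,1),...,(i,alpha_i)
    return frozenset((i, j) for i in F for j in range(1, alpha[i - 1] + 1))

def is_pure(facets):
    # a complex is pure iff all facets have the same cardinality
    return len({len(F) for F in facets}) == 1

facets = [frozenset({1, 2}), frozenset({2, 3})]   # Delta = <x1x2, x2x3>
alpha = (2, 1, 1)
expanded = [expand_face(F, alpha) for F in facets]
print(expanded)                            # [{(1,1),(1,2),(2,1)}, {(2,1),(3,1)}]
print(is_pure(facets), is_pure(expanded))  # True False
\end{verbatim}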
\begin{thm}\label{main}
Let $\Delta$ be a simplicial complex on $[n]$ of dimension $d-1$ and let $t\geq 0$ be the least integer such that $\Delta$ is ${\mathrm{CM}}_t$. Suppose that $\alpha=(k_1,\ldots,k_n)\in\mathbb N^n$ is such that $k_i>1$ for some $i$ and $\Delta^\alpha$ is pure. Then $\Delta^\alpha$ is ${\mathrm{CM}}_{t+e-k+1}$ but it is not ${\mathrm{CM}}_{t+e-k}$, where $e=\dim(\Delta^\alpha)+1$ and $k=\min\{k_i:k_i>1\}$.
\end{thm}
\begin{proof}
We use induction on $e\geq 2$. If $e=2$, then $\dim(\Delta^\alpha)=1$ and $\Delta$ must be of the form $\Delta=\langle x_1,\ldots,x_n\rangle$. In particular, $\Delta^\alpha$ is of the form
$$\Delta^\alpha=\langle \{x_{i_11},x_{i_12}\},\{x_{i_21},x_{i_22}\},\ldots,\{x_{i_r1},x_{i_r2}\}\rangle.$$
It is clear that $\Delta^\alpha$ is ${\mathrm{CM}}_1$ but it is not Cohen-Macaulay.

Assume that $e>2$. Let $\{x_{ij}\}\in\Delta^\alpha$. We want to show that $\mathrm{link}_{\Delta^\alpha}(x_{ij})$ is ${\mathrm{CM}}_{t+e-k}$. Consider the following cases:

Case 1: $k_i>1$. Then
$$\mathrm{link}_{\Delta^\alpha}(x_{ij})=\langle\{x_i\}^\alpha\backslash x_{ij}\rangle\ast(\mathrm{link}_{\Delta}(x_i))^\alpha.$$
$(\mathrm{link}_{\Delta}(x_i))^\alpha$ is of dimension $e-k_i-1$ and, by the induction hypothesis, it is ${\mathrm{CM}}_{t+e-k_i-k+1}$. On the other hand, $\langle\{x_i\}^\alpha\backslash x_{ij}\rangle$ is Cohen-Macaulay of dimension $k_i-2$. Therefore, it follows from Theorem 1.1(i) of \cite{HaYaZa2} that $\mathrm{link}_{\Delta^\alpha}(x_{ij})$ is ${\mathrm{CM}}_{t+e-k}$.

Case 2: $k_i=1$. Then
$$\mathrm{link}_{\Delta^\alpha}(x_{ij})=(\mathrm{link}_{\Delta}(x_i))^\alpha,$$
which is of dimension $e-2$ and, by induction, it is ${\mathrm{CM}}_{t+e-k}$.

Now suppose that $e>2$ and $k_s=k$ for some $s\in [n]$. Let $F$ be a facet of $\Delta$ such that $x_s$ belongs to $F$.

If $\dim(\Delta)=0$, then $k_l=k$ for all $l\in [n]$. In particular, $e=k$. It is clear that $\Delta^\alpha$ is not ${\mathrm{CM}}_{t+e-k}$ (i.e., not Cohen-Macaulay). So suppose that $\dim(\Delta)>0$. Choose $x_i\in F\backslash x_s$. Then
$$\mathrm{link}_{\Delta^\alpha}(x_{ij})=\langle \{x_i\}^\alpha\backslash x_{ij}\rangle\ast (\mathrm{link}_\Delta(x_i))^\alpha.$$
By the induction hypothesis, $(\mathrm{link}_\Delta(x_i))^\alpha$ is not ${\mathrm{CM}}_{t+e-k_i-k}$. It follows from Theorem 3.1(ii) of \cite{HaYaZa2} that $\mathrm{link}_{\Delta^\alpha}(x_{ij})$ is not ${\mathrm{CM}}_{t+e-k-1}$. Therefore $\Delta^\alpha$ is not ${\mathrm{CM}}_{t+e-k}$.
\end{proof}

\begin{cor}
Let $\Delta$ be a non-empty Cohen-Macaulay simplicial complex on $[n]$. Then for any $\alpha\in\mathbb N^n$ with $\alpha\neq\mathbf 1$, $\Delta^\alpha$ can never be Cohen-Macaulay.
\end{cor}

\section{The contraction functor}

Let $\Delta=\langle F_1,\ldots,F_r\rangle$ be a simplicial complex on $[n]$. Consider the equivalence relation `$\sim$' on the vertices of $\Delta$ given by
$$x_i\sim x_j\Leftrightarrow\langle x_i\rangle\ast\mathrm{link}_\Delta(x_i)=\langle x_j\rangle\ast\mathrm{link}_\Delta(x_j).$$
In fact $\langle x_i\rangle\ast\mathrm{link}_\Delta(x_i)$ is the cone over $\mathrm{link}_\Delta(x_i)$, and the elements of $\langle x_i\rangle\ast\mathrm{link}_\Delta(x_i)$ are those faces of $\Delta$ which contain $x_i$.
Hence $\langle x_i\rangle\ast\mathrm{link}_\Delta(x_i)=\langle x_j\rangle\ast\mathrm{link}_\Delta(x_j)$ means that the cone with vertex $x_i$ is equal to the cone with vertex $x_j$. In other words, $x_i\sim x_j$ is equivalent to saying that a facet $F\in\Delta$ contains $x_i$ if and only if it contains $x_j$.

Let $[\bar{m}]=\{\bar{y}_1,\ldots,\bar{y}_m\}$ be the set of equivalence classes under $\sim$. Let $\bar{y}_i=\{x_{i1},\ldots,x_{ia_i}\}$. Set $\alpha=(a_1,\ldots,a_m)$. For each facet $F_t\in\Delta$, define $G_t=\{\bar{y}_i:\bar{y}_i\subset F_t\}$, and let $\Gamma$ be the simplicial complex on the vertex set $[\bar{m}]$ with facets $G_1,\ldots,G_r$. We call $\Gamma$ the \emph{contraction of $\Delta$ by $\alpha$}, and $\alpha$ is called \emph{the vector obtained from contraction}.

For example, consider the simplicial complex $\Delta=\langle x_1x_2x_3,x_2x_3x_4,x_1x_4x_5,x_2x_3x_5\rangle$ on the vertex set $[5]=\{x_1,\ldots,x_5\}$. Then $\bar{y}_1=\{x_1\}$, $\bar{y}_2=\{x_2,x_3\}$, $\bar{y}_3=\{x_4\}$, $\bar{y}_4=\{x_5\}$ and $\alpha=(1,2,1,1)$. Therefore, the contraction of $\Delta$ by $\alpha$ is $\Gamma=\langle \bar{y}_1\bar{y}_2,\bar{y}_2\bar{y}_3,\bar{y}_1\bar{y}_3\bar{y}_4,\bar{y}_2\bar{y}_4\rangle$, a complex on the vertex set $[\bar{4}]=\{\bar{y}_1,\ldots,\bar{y}_4\}$.
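This example can be reproduced mechanically. The following Python sketch (ours, for illustration only) groups vertices by the set of facets containing them, which by the discussion above is exactly the relation $\sim$, and returns the classes, the vector $\alpha$, and the facets of $\Gamma$:

\begin{verbatim}
# Illustrative sketch: contraction of Delta = <x1x2x3, x2x3x4, x1x4x5, x2x3x5>.
def contraction(facets, vertices):
    classes = {}
    for v in vertices:
        # x_i ~ x_j iff x_i and x_j lie in exactly the same facets
        signature = frozenset(i for i, F in enumerate(facets) if v in F)
        classes.setdefault(signature, []).append(v)
    bars = list(classes.values())          # equivalence classes y_1,...,y_m
    alpha = tuple(len(c) for c in bars)    # the vector obtained from contraction
    gamma = [frozenset(i + 1 for i, c in enumerate(bars) if set(c) <= F)
             for F in facets]              # facets G_t of the contraction
    return bars, alpha, gamma

facets = [{1, 2, 3}, {2, 3, 4}, {1, 4, 5}, {2, 3, 5}]
bars, alpha, gamma = contraction(facets, range(1, 6))
print(bars)   # [[1], [2, 3], [4], [5]]
print(alpha)  # (1, 2, 1, 1)
print(gamma)  # {1,2}, {2,3}, {1,3,4}, {2,4}: i.e. <y1y2, y2y3, y1y3y4, y2y4>
\end{verbatim}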
\begin{rem}
Note that if $\Delta$ is a pure simplicial complex then the contraction of $\Delta$ is not necessarily pure (see the above example). In the special case where $\alpha=(k_1,\dots,k_n)\in\mathbb N^n$ and $k_i=k_j$ for all $i,j$, it is easy to check that $\Delta$ is pure if and only if $\Delta^\alpha$ is pure. Another case is introduced in the following proposition.
\end{rem}

\begin{prop}
Let $\Delta$ be a simplicial complex on $[n]$ and assume that $\alpha=(k_1,\dots,k_n)\in\mathbb N^n$ satisfies the following condition:

$(\dag)$ for all facets $F,G\in\Delta$, if $x_i\in F\backslash G$ and $x_j\in G\backslash F$ then $k_i=k_j$.

Then $\Delta$ is pure if and only if $\Delta^\alpha$ is pure.
\end{prop}
\begin{proof}
Let $\Delta$ be a pure simplicial complex and let $F,G\in\Delta$ be two facets of $\Delta$. Then
$$|F^\alpha|-|G^\alpha|=\sum_{x_i\in F}k_i-\sum_{x_i\in G}k_i=\sum_{x_i\in F\backslash G}k_i-\sum_{x_i\in G\backslash F}k_i.$$
Now the condition $(\dag)$ implies that $|F^\alpha|=|G^\alpha|$. This means that all facets of $\Delta^\alpha$ have the same cardinality.

Conversely, let $\Delta^\alpha$ be pure. Suppose that $F,G$ are two facets in $\Delta$. If $|F|>|G|$ then $|F\backslash G|>|G\backslash F|$, and so, by $(\dag)$, $\sum_{x_i\in F\backslash G}k_i>\sum_{x_i\in G\backslash F}k_i$. It follows that $|F^\alpha|=\sum_{x_i\in F}k_i>\sum_{x_i\in G}k_i=|G^\alpha|$, a contradiction.
\end{proof}

There is a close relationship between a simplicial complex and its contraction. In fact, the expansion of the contraction of a simplicial complex is isomorphic to the original complex. The precise statement is the following.

\begin{lem}
Let $\Gamma$ be the contraction of $\Delta$ by $\alpha$. Then $\Gamma^\alpha\cong \Delta$.
\end{lem}
\begin{proof}
Suppose that $\Delta$ and $\Gamma$ are on the vertex sets $[n]=\{x_1,\ldots,x_n\}$ and $[\bar{m}]=\{\bar{y}_1,\ldots,\bar{y}_m\}$, respectively. Let $\alpha=(a_1,\ldots,a_m)$. For $\bar{y}_i\in\Gamma$, suppose that $\{\bar{y}_i\}^\alpha=\{\bar{y}_{i1},\ldots,\bar{y}_{ia_i}\}$. So $\Gamma^\alpha$ is a simplicial complex on the vertex set $[\bar{m}]^\alpha=\{\bar{y}_{ij}:i=1,\ldots,m,\ j=1,\ldots,a_i\}$. Now define $\varphi:[\bar{m}]^\alpha\rightarrow [n]$ by $\varphi(\bar{y}_{ij})=x_{ij}$. Extending $\varphi$, we obtain the isomorphism $\varphi:\Gamma^\alpha\rightarrow \Delta$.
\end{proof}

\begin{prop}\label{CM indepen}
Let $\Delta$ be a simplicial complex and assume that $\Delta^\alpha$ is Cohen-Macaulay for some $\alpha\in\mathbb N^n$. Then $\Delta$ is Cohen-Macaulay.
\end{prop}
\begin{proof}
By Proposition \ref{ex indepen}(i) and (ii), for all $F\in\Delta$ and all $i\leq \dim(\mathrm{link}_\Delta F)$ there exists an epimorphism $\theta:\tilde{H}_{i}(\mathrm{link}_{\Delta^\alpha} F^\alpha;K)\rightarrow \tilde{H}_{i}(\mathrm{link}_\Delta F;K)$ such that
$$\tilde{H}_{i}(\mathrm{link}_{\Delta^\alpha} F^\alpha;K)/\ker(\theta)\cong\tilde{H}_{i}(\mathrm{link}_\Delta F;K).$$
Now suppose that $i<\dim(\mathrm{link}_\Delta F)$. Then $i<\dim(\mathrm{link}_{\Delta^\alpha} F^\alpha)$ and, by the Cohen-Macaulayness of $\Delta^\alpha$, $\tilde{H}_{i}(\mathrm{link}_{\Delta^\alpha} F^\alpha;K)=0$. Therefore $\tilde{H}_{i}(\mathrm{link}_\Delta F;K)=0$. This means that $\Delta$ is Cohen-Macaulay.
\end{proof}

It follows from Proposition \ref{CM indepen} that:

\begin{cor}\label{CM-ness}
The contraction of a Cohen-Macaulay simplicial complex $\Delta$ is Cohen-Macaulay.
\end{cor}

This can be generalized in the following theorem.

\begin{thm}\label{contract,CM-t}
Let $\Gamma$ be the contraction of a ${\mathrm{CM}}_t$ simplicial complex $\Delta$, for some $t\geq 0$, by $\alpha=(k_1,\ldots,k_n)$. If $k_i\geq t$ for all $i$ and $\Gamma$ is pure, then $\Gamma$ is Buchsbaum.
\end{thm}
\begin{proof}
If $t=0$, then we saw in Corollary \ref{CM-ness} that $\Gamma$ is Cohen-Macaulay and hence Buchsbaum. So assume that $t>0$. Let $\Delta=\langle F_1,\ldots,F_r\rangle$. We have to show that $\tilde{H}_i(\mathrm{link}_\Gamma G;K)=0$ for all faces $G\in\Gamma$ with $|G|\geq 1$ and all $i<\dim(\mathrm{link}_\Gamma G)$.

Let $G\in\Gamma$ with $|G|\geq 1$. Then $|G^\alpha|\geq t$. It follows from Lemma \ref{CM-t eq} and the ${\mathrm{CM}}_t$-ness of $\Delta$ that
$$\tilde{H}_{i}(\mathrm{link}_\Gamma G;K)\cong\tilde{H}_{i}(\mathrm{link}_\Delta G^\alpha;K)=0$$
for $i<\dim(\mathrm{link}_\Delta G^\alpha)$ and, in particular, for $i<\dim(\mathrm{link}_\Gamma G)$. Therefore $\Gamma$ is Buchsbaum.
\end{proof}

\begin{cor}
Let $\Gamma$ be the contraction of a Buchsbaum simplicial complex $\Delta$. If $\Gamma$ is pure, then $\Gamma$ is also Buchsbaum.
\end{cor}

Let $\mathcal G$ be a simple graph on the vertex set $[n]$ and let $\Delta_\mathcal G$ be its independence complex on $[n]$, i.e., the simplicial complex whose faces are the independent vertex sets of $\mathcal G$. Let $\Gamma$ be the contraction of $\Delta_\mathcal G$. In the following we show that $\Gamma$ is the independence complex of a simple graph $\mathcal H$. We call $\mathcal H$ the \emph{contraction} of $\mathcal G$.

\begin{lem}
Let $\mathcal G$ be a simple graph. The contraction of $\Delta_\mathcal G$ is the independence complex of a simple graph $\mathcal H$.
\end{lem}
\begin{proof}
It suffices to show that $I_\Gamma$ is a squarefree monomial ideal generated in degree 2.
Let $\Gamma$ be the contraction of $\Delta_\mathcal G$ and let $\alpha=(k_1,\ldots,k_n)$ be the vector obtained from the contraction. Let $[n]=\{x_1,\ldots,x_n\}$ be the vertex set of $\Gamma$. Suppose that $u=x_{i_1}\ldots x_{i_t}\in G(I_\Gamma)$. Then $u^\alpha\subset G(I_\Gamma)^\alpha=G(I_{\Delta_\mathcal G})=G(I(\mathcal G))$. Since $u^\alpha=\{x_{i_1j_1}\ldots x_{i_tj_t}:1\leq j_l\leq k_{i_l},\ 1\leq l\leq t\}$, we have $t=2$ and the proof is complete.
\end{proof}

\begin{exam}
Let $\mathcal G_1$ and $\mathcal G_2$ be, from left to right, the following graphs:

$$\begin{array}{cccc}
\begin{tikzpicture}
\coordinate (a) at (0,0);\fill (0,0) circle (1pt);
\coordinate (b) at (0,1);\fill (0,1) circle (1pt);
\coordinate (c) at (1,0);\fill (1,0) circle (1pt);
\coordinate (d) at (0,-1);\fill (0,-1) circle (1pt);
\coordinate (e) at (-1,0);\fill (-1,0) circle (1pt);
\draw[black] (a) -- (c) -- (d) -- (e) -- (b);
\end{tikzpicture}
&&&
\begin{tikzpicture}
\coordinate (a) at (0,0);\fill (0,0) circle (1pt);
\coordinate (b) at (0,1);\fill (0,1) circle (1pt);
\coordinate (c) at (1,0);\fill (1,0) circle (1pt);
\coordinate (d) at (0,-1);\fill (0,-1) circle (1pt);
\coordinate (e) at (-1,0);\fill (-1,0) circle (1pt);
\draw[black] (e) -- (a) -- (c) -- (d) -- (e) -- (b) -- (c);
\end{tikzpicture}
\end{array}$$

The contractions of $\mathcal G_1$ and $\mathcal G_2$ are, respectively,

$$\begin{array}{cccc}
\begin{tikzpicture}
\coordinate (a) at (0,0);\fill (0,0) circle (1pt);
\coordinate (b) at (0,1);\fill (0,1) circle (1pt);
\coordinate (c) at (1,0);\fill (1,0) circle (1pt);
\coordinate (d) at (0,-1);\fill (0,-1) circle (1pt);
\coordinate (e) at (-1,0);\fill (-1,0) circle (1pt);
\draw[black] (a) -- (c) -- (d) -- (e) -- (b);
\end{tikzpicture}
&&&
\begin{tikzpicture}
\coordinate (a) at (0,0);\fill (0,0) circle (1pt);
\coordinate (b) at (1,0);\fill (1,0) circle (1pt);
\draw[black] (a) -- (b);
\end{tikzpicture}
\end{array}$$
The contraction of $\mathcal G_1$ is equal to $\mathcal G_1$ itself, but $\mathcal G_2$ is contracted to an edge, and the vector obtained from contraction is $\alpha=(2,3)$.
\end{exam}

We recall that a simple graph is ${\mathrm{CM}}_t$, for some $t\geq 0$, if the associated independence complex is ${\mathrm{CM}}_t$.

\begin{rem}
The simple graph $\mathcal G'$ obtained from $\mathcal G$ in Lemma 4.3 and Theorem 4.4 of \cite{HaYaZa2} is the expansion of $\mathcal G$. Actually, suppose that $\mathcal G$ is a bipartite graph on the vertex set $V(\mathcal G)=V\cup W$ where $V=\{x_1,\ldots,x_d\}$ and $W=\{x_{d+1},\ldots,x_{2d}\}$. Then for $\alpha=(n_1,\ldots,n_d,n_1,\ldots,n_d)$ we have $\mathcal G'=\mathcal G^\alpha$. It follows from Theorem \ref{main} that if $\mathcal G$ is ${\mathrm{CM}}_t$ for some $t\geq 0$ then $\mathcal G'$ is ${\mathrm{CM}}_{t+n-n_{i_0}+1}$, where $n=\sum^d_{i=1}n_i$ and $n_{i_0}=\min\{n_i>1:i=1,\ldots,d\}$. This implies that the first part of Theorem 4.4 of \cite{HaYaZa2} is an immediate consequence of Theorem \ref{main} for $t=0$.
\end{rem}

\subsection*{Acknowledgment}

The author would like to thank Hassan Haghighi from K. N.
Toosi University of Technology and Rahim Zaare-Nahandi from the University of Tehran for carefully reading an earlier version of this article and for their helpful comments.

\section{Introduction}

Random unitary operators have often been used to approximate chaotic dynamics. Notably, in the context of black holes, Hayden and Preskill used a model of random dynamics to show that such systems can be efficient information scramblers; initially localized information will quickly thermalize and become thoroughly mixed across the entire system \cite{Hayden07}. It was later conjectured \cite{Sekino08} and then proven \cite{Shenker:2013pqa,Maldacena:2015waa} that black holes are the fastest scramblers in nature, suggesting their dynamics must have much in common with random unitary evolution. Such ``scrambling'' is a byproduct of strongly-coupled chaotic dynamics \cite{Lashkari13,Almheiri13b,Hosur:2015ylk}, and so the work of \cite{Hayden07} suggests there should be a strong quantitative connection between such chaos and pseudorandomness.\footnote{
 Scrambling has a close connection to the decoupling theorem, see e.g.~\cite{Berta, Brown15}. Also, see \cite{Chamon14,Hayden:2016cfa,Nahum:2016muy} for studies connecting randomness to entanglement.
}

The connection between pseudorandomness and chaotic dynamics can also be understood at the level of operators. For example, consider $W$ a local operator of low weight (e.g.\ a Pauli operator acting on a single spin). With a chaotic Hamiltonian $H$, the operator $W(t)=e^{iHt} W e^{-iHt}$ will be a complicated nonlocal operator that has an expansion as a sum of products of many local operators with an exponential number of terms, each with a pseudorandom coefficient \cite{Roberts:2014isa}. We can gain intuition for this by considering the Baker-Campbell-Hausdorff expansion of $W(t)$
\begin{equation}
 W(t) = \sum_{j=0}^\infty \frac{(it)^j}{j!}\underbrace{[H, \dots[H}_j,W\underbrace{] \dots]}_j. \label{eq:BCH}
\end{equation}
If $H$ is $q$-local and sufficiently ``generic'' to contain all possible $q$-local interactions, then the $j$th term will consist of a sum of roughly $\sim (n/q)^{qj}$ terms, each of weight ranging from $1$ to $\sim j(q-1)$, where we assume the system consists of $n$ spins so that the Hilbert space is of dimension $d=2^n$:
\begin{itemize}
 \item At roughly $j \sim n / (q-1)$, there will be many terms in the sum of weight $n$. These terms are delocalized over the entire system. For a system without spatial locality, the relationship between time $t$ and when the $j$th term becomes $O(1)$ is roughly $t \sim \log j$. The timescale $t \sim O(\log n)$ for the operator to cover the entire system is indicative of fast-scrambling behavior.
 \item At around $j \sim 2n / q \log(n/q)$, the total number of terms will reach $2^{2n}$, equal to the total number of orthogonal linear operators acting on the Hilbert space.\footnote{The number of terms is actually not a well defined quantity, since it can change under local rotations of spin, and one should possibly instead consider the minimum number of terms under all local changes of basis. For chaotic systems, the distinction should not be important, since there will not exist a local change of basis where the number of terms drastically simplifies.
We thank Douglas Stanford who thanks Juan Maldacena for raising this point.} Even after the operator covers the entire system, it continues to spread over the unitary group (though possibly only until a time roughly $O(\log n)$ plus a constant).
\end{itemize}
Furthermore, the coefficient of any given term will be incredibly complicated, depending on the details of the path through the interaction graph and the time. Over time, $W(t)$ should cover the entire unitary group (possibly quotiented by a group set by the symmetries of the Hamiltonian). At sufficiently large $t$, one might even suspect that for many purposes $W(t)$ can be approximated by a random operator $\tilde{W}\equiv U^\dagger W U$, with $U$ sampled randomly from the unitary group.\footnote{Of course, for a less generic and sparser Hamiltonian, e.g.\ a free system, the expansion of $W(t)$ will organize itself in a way such that the commutator in Eq.~\eqref{eq:BCH} does not produce an exponential number of terms. Said another way, the terms in the expansion of $W(t)$ will only cover a small subset of the unitary group, and the assumption of uniform randomness will not hold.} If this is true, then we would say that $W(t)$ behaves pseudorandomly.

\subsubsection*{Chaos}

This pattern of growth of $W(t)$ can be measured by a second local operator $V$. For example, the group commutator of $W(t)$ with $V$, given by $W(t)^\dagger\, V^\dagger \, W(t)\, V$, measures the effect of the small perturbation $V$ on a later measurement of $W$. In other words, it is a measure of the butterfly effect and the strength of chaos:
\begin{itemize}
 \item If $W(t)$ is of low weight and few terms, then $W(t)$ and $V$ approximately commute, $[W(t),V]\approx 0$, and the operator $W(t)^\dagger\, V^\dagger \, W(t)\, V$ is close to the identity.
 \item If instead the dynamics are strongly chaotic, $W(t)$ will grow to eventually have a large commutator with all other local operators in the system (in fact, just about all other operators), and so $W(t)^\dagger\, V^\dagger \, W(t)\, V$ will be nearly random and have a small expectation in most states.
\end{itemize}
Thus, the decay of out-of-time-order (OTO) four-point functions of the form
\begin{equation}
 \langle W(t)^\dagger\, V^\dagger \, W(t)\, V \rangle = \langle U(t)^\dagger W^\dagger U(t) ~ V^\dagger ~ U(t)^\dagger W U(t) ~ V \rangle, \label{oto-4pt-def}
\end{equation}
can act as a simple diagnostic of quantum chaos, where $U(t)=e^{-iHt}$ is the unitary time evolution operator, and the correlator is usually evaluated on the thermal state $\langle \cdot \rangle \equiv \text{tr} \, \{ e^{-\beta H} \, \cdot \, \} / \text{tr} \, e^{-\beta H}$ \cite{Larkin:1969abc,Almheiri13b,Shenker:2013pqa,Kitaev:2014t1,Maldacena:2015waa}.\footnote{Usually $W, V$ are taken to be Hermitian, but here we will more generally allow them to be unitary.} For further discussion, please see a selection (but by no means a complete set) of recent work on out-of-time-order four-point functions and chaos \cite{Shenker:2013pqa,Shenker:2013yza,Roberts:2014isa,Roberts:2014ifa,Kitaev:2014t1,Shenker:2014cwa,Maldacena:2015waa,Hosur:2015ylk,Stanford:2015owe,Fitzpatrick:2016thx,Gu:2016hoy,Caputa:2016tgt,Swingle:2016var,Perlmutter:2016pkf,Blake:2016wvh,Roberts:2016wdl,Blake:2016sud,Swingle:2016jdj,Huang:2016knw,fan2016out,Halpern:2016zcm}.

For sufficiently chaotic systems and sufficiently large times, the correlators Eq.~\eqref{oto-4pt-def} will reach a floor value, equivalent to the substitution $W(t) \to U^\dagger W U$ with $U$ chosen randomly. Furthermore, it can be shown \cite{Hosur:2015ylk} that the decay of the correlators Eq.~\eqref{oto-4pt-def} implies the sort of information-theoretic scrambling studied by Hayden and Preskill in \cite{Hayden07}. This explains why the random dynamics model of \cite{Hayden07} was such a good approximation for strongly-chaotic systems, such as black holes. However, are out-of-time-order four-point functions Eq.~\eqref{oto-4pt-def} actually a sufficient diagnostic of chaos?

In \cite{Hayden07} the authors did not actually require the dynamics to be a uniformly random unitary operator sampled from the Haar measure on the unitary group. Instead, it would have been sufficient to sample from a simpler ensemble of operators that could reproduce only a few moments of the larger Haar ensemble.\footnote{Inspection of Eq.~\eqref{oto-4pt-def} suggests that only two moments are required since there are only two copies of $U$ and two copies of $U^\dagger$ in the correlator. We will explain this more carefully in the rest of the paper.} Of course, there may be other finer-grained information-theoretic properties of a system that depend on higher moments and would require a larger ensemble to replicate the statistics of the Haar random dynamics. If random dynamics is a valid approximation for computing some, but not all, of these finer-grained quantities, then those quantities can represent a measure of the degree of pseudorandomness of the underlying dynamics.
In this paper, we will make some progress in developing some of these finer-grained quantities and thereby connect measures of chaos to measures of randomness.

\subsubsection*{Unitary design}

The extent to which an ensemble of operators behaves like the uniform distribution can be quantified by the notion of unitary $k$-designs \cite{DiVincenzo02, Emerson05, Ambainis07, Gross07, Dankert09}.\footnote{N.B. in the literature, these are often referred to as unitary $t$-designs. Here, we will reserve $t$ for time. For recent work on unitary designs, see~\cite{Emerson03, Emerson05, Brown10, Brown15, Hayden07, Harrow09, Knill08, Brandao12, Kueng15, Webb15, Low_thesis, Scott08, Zhu15, Collins16,Nakata:2016blv}.} A unitary $k$-design is a subset of the unitary group that replicates the statistics of at least $k$ moments of the distribution. Consider a finite-dimensional Hilbert space $\mathcal{H}^{\otimes k}=(\mathbb{C}^{d})^{\otimes k}$ consisting of $k$ copies of $\mathcal{H}=\mathbb{C}^d$. Given an ensemble of unitary operators $\mathcal{E}=\{p_{j},U_{j}\}$ acting on $\mathcal{H}$ with probability distribution $p_{j}$, the ensemble $\mathcal{E}$ is a unitary $k$-design if and only if
 \begin{align}
 \sum_{j}p_{j} (\underbrace{U_{j}\otimes \cdots \otimes U_{j}}_{k}) \rho (\underbrace{U_{j}^{\dagger}\otimes \cdots \otimes U_{j}^{\dagger}}_{k}) = \int_{\text{Haar}}dU(\underbrace{U\otimes \cdots \otimes U}_{k}) \rho (\underbrace{U^{\dagger}\otimes \cdots \otimes U^{\dagger}}_{k}),
 \end{align}
for all quantum states $\rho$ in $\mathcal{H}^{\otimes k}$. Therefore, whether or not a given ensemble $\mathcal{E}$ forms at most a $k$-design is a fine-grained notion of the randomness of the ensemble.\footnote{With this definition, a $(k+1)$-design is automatically a $k$-design.}

Inspection of Eq.~\eqref{oto-4pt-def} suggests that a $2$-design is sufficient for OTO four-point functions to decay. This was also the requirement for a system to appear scrambled \cite{Page93,Hayden07,Sekino08}. However, the four-point functions Eq.~\eqref{oto-4pt-def} are not the only observables related to chaos and the butterfly effect. In an attempt to understand the geometry and interior of a typical microstate of a holographic black hole, Shenker and Stanford constructed a series of black hole geometries by making multiple perturbations of an essentially maximally entangled state \cite{Shenker:2013yza}. While they were unfortunately unsuccessful at constructing a generic microstate, studying correlation functions in these geometries leads one to a family of $2k$-point out-of-time-order correlation functions of the form
 \begin{equation}
 \T{\mc{W}^\dagger \, V(0)^\dagger \, \mc{W} \, V(0) } = \T{W_1(t_1)^\dagger \cdots W_{k-1}(t_{k-1})^\dagger ~ V(0)^\dagger ~ W_{k-1}(t_{k-1}) \cdots W_1(t_1) ~ V(0) }, \label{multiple-shocks-correlator}
 \end{equation}
where $V(0)$ and the $W_j(0)$ are local operators, and $\mc{W} \equiv W_{k-1}(t_{k-1}) \cdots W_1(t_1)$ is a composite operator that lets us understand how the correlator is organized. The existence of these correlators raises the question: do they contain any additional information about the system, or are they redundant with the four-point functions? The fact that the correlators Eq.~\eqref{multiple-shocks-correlator} involve many more copies of unitary time evolution suggests a relationship between higher-point functions and unitary design. In fact, we will show that a generalization of the OTO higher-point correlators Eq.~\eqref{multiple-shocks-correlator} is a probe of unitary design.
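As a toy illustration of the design condition above (ours, not part of the results below), one can test $k$-design-ness numerically. The following Python sketch compares the $k$-fold channel of the one-qubit Pauli ensemble $\{I,X,Y,Z\}$ with a Monte-Carlo estimate of the Haar channel (\texttt{scipy.stats.unitary\_group} samples Haar-random unitaries); the two agree at $k=1$ but differ at $k=2$:

\begin{verbatim}
import numpy as np
from scipy.stats import unitary_group

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
paulis = [I2, X, Y, Z]

def channel(ensemble, rho, k):
    # k-fold channel: average of U^{(x)k} rho (U^dagger)^{(x)k}
    out = np.zeros_like(rho)
    for U in ensemble:
        Uk = U
        for _ in range(k - 1):
            Uk = np.kron(Uk, U)
        out += Uk @ rho @ Uk.conj().T / len(ensemble)
    return out

rho1 = np.diag([1.0, 0.0]).astype(complex)   # |0><0|
rho2 = np.kron(rho1, rho1)                   # |00><00|
haar = [unitary_group.rvs(2) for _ in range(20000)]

print(np.round(channel(paulis, rho1, 1), 2))  # I/2: matches Haar at k = 1
print(np.round(channel(haar, rho1, 1), 2))
print(np.round(channel(paulis, rho2, 2), 2))  # the k = 2 outputs differ:
print(np.round(channel(haar, rho2, 2), 2))    # the Paulis are not a 2-design
\end{verbatim}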
\subsubsection*{Complexity}

Probes of chaos and unitary design can only indirectly address a sharp question of recent interest in holography: the existence and growth of the black hole interior via the time evolution of the state. To be precise, one cannot make use of unitary designs, since evolution by a time-independent Hamiltonian does not define any ensemble. However, even in this setting we still expect that higher-point OTO correlators should be able to see fine-grained properties of time-evolved states that are not captured by four-point OTO correlators.

One fine-grained quantity is the quantum circuit complexity of a quantum state $|\psi(t)\rangle = U(t)|\psi(0)\rangle$. Consider a simple initial state, such as the product state $|\psi(0)\rangle= |0\rangle^{\otimes n}$, undergoing chaotic time evolution. After a short thermalization time of $O(1)$, the system will evolve to an equilibrium in which local quantities will reach their thermodynamic values. Next, after the scrambling time of $O(\log n)$, the initial state $|\psi(0)\rangle$ will be forgotten: the information will be distributed in such a way that measurements of even a large number of collections of local quantities of $|\psi(t)\rangle$ will not reveal the initial state. However, even after the scrambling time, the quantum circuit complexity of the time-evolved state $|\psi(t)\rangle$, as quantified by the number of elementary quantum gates necessary to reach it from the initial product state, will continue to evolve. In fact, it is expected to keep growing linearly in time until it saturates at a time exponential in the system size, $e^{O(n)}$ \cite{Knill95,Susskind:2015toa}.

\sbreak

We hope this presents an intuitive picture that chaos, pseudorandomness, and quantum circuit complexity should be related. To that end, having first established a connection between higher-point out-of-time-order correlators and pseudorandomness, we will next connect the randomness of an ensemble to computational complexity. Finally, we will use correlation functions to probe pseudorandomness by comparing different random averages to expectations from time evolution.

Below, we will summarize our main results, deferring the technical statements to the body and appendices.

\subsection*{Main results}

We will focus on a particular form of $2k$-point correlation functions evaluated on the maximally mixed state $\rho=\frac{1}{d}I$
 \begin{equation}
 \langle A_{1} \, U^{\dagger}B_1U \cdots A_{k} \, U^{\dagger}B_kU \rangle := \frac{1}{2^n}\text{tr} \, \{ A_{1}\, U^{\dagger}B_1U \cdots A_{k}\, U^{\dagger}B_kU \}, \label{OCO-correlator}
 \end{equation}
where each of the $A_{j}, B_{j}$ may be a product of Pauli operators that act on single spins.\footnote{To be clear, this means that the operators we are correlating are not necessarily simple or local.} Note that each of the $B_j$ is conjugated by the same unitary $U$ (which is similar to picking all the time arguments in Eq.~\eqref{multiple-shocks-correlator} to be either $0$ or $t$). Furthermore, $U$ will not necessarily represent Hamiltonian time evolution, and instead we will let $U$ be sampled from some ensemble.\footnote{As a result, these correlators are not really out-of-\emph{time}-order, since there may not be a notion of time.
Instead, they are probably more accurately called \emph{out-of-complexity-order} (OCO), since we might generalize the notion of time ordering to complexity ordering, where we put unitaries of smaller complexity to the right of unitaries of larger complexity. In order to (hopefully) avoid confusion, we will continue to call the $2k$-point functions in Eq.~\eqref{OCO-correlator} out-of-time-order, despite there not necessarily being a notion of time.} From this point forward, we will use the notation
\begin{equation}
\tilde{B} \equiv U^{\dagger}BU,
\end{equation}
to simplify expressions involving unitary conjugation. Therefore, we can represent the ensemble average of OTO $2k$-point correlation functions as
\begin{align}
\langle A_{1}\tilde{B_{1}}\cdots A_{k}\tilde{B_{k}} \rangle_{\mathcal{E}} := \int_{\mathcal{E}} dU \langle A_{1}\tilde{B_{1}}\cdots A_{k}\tilde{B_{k}} \rangle,
\end{align}
where the integral is with respect to the probability distribution of an ensemble of unitary operators $\mathcal{E}=\{p_{j},U_{j}\}$. Finally, the $k$-fold channel over the ensemble $\mathcal{E}$ is
\begin{align}
\Phi_{\mathcal{E}}^{(k)}(\cdot) = \int_{\mathcal{E}} dU (U\otimes \cdots \otimes U) (\cdot )(U^{\dagger}\otimes \cdots \otimes U^{\dagger}),
\end{align}
which is a superoperator.

\subsubsection*{Chaos and $k$-designs}

First, we will prove a theorem stating that a particular set of $2k$-point OTO correlators, averaged over an ensemble $\mathcal{E}$, is in a one-to-one correspondence with the $k$-fold channel~$\Phi_{\mathcal{E}}^{(k)}$
\begin{align}
\text{$2k$-point OTO correlators}\ \leftrightarrow \ \text{$k$-fold channel $\Phi_{\mathcal{E}}^{(k)}$},
\end{align}
and we provide a simple formula to convert from one to the other. Such an explicit relation between OTO correlators and the $k$-fold channel may have practical and experimental applications, such as statistical testing (e.g.\ a quantum analog of the $\chi^2$-test) and randomized benchmarking~\cite{Knill08}.

Next, we prove that generic ``smallness'' of $2k$-point OTO correlators implies that the ensemble $\mathcal{E}$ is close to a $k$-design. We will make this statement precise by relating OTO correlators to a useful quantity known as the frame potential
\begin{align}
F_{\mathcal{E}}^{(k)}=\frac{1}{|\mathcal{E}|^{2}}\sum_{U,V\in\mathcal{E}}\big|\text{tr}\, \{ U^{\dagger}V \} \big|^{2k}.
\end{align}
This quantity, first introduced in \cite{Scott08}, measures the ($2$-norm) distance between $\Phi_{\mathcal{E}}^{(k)}(\cdot)$ and $\Phi_{\text{Haar}}^{(k)}(\cdot)$, and has been shown to be minimized if and only if the ensemble $\mathcal{E}$ is a $k$-design. We will derive the following formula:
\begin{align}
\text{Average of $|\text{$2k$-point OTO correlator}|^{2}$}\ \propto \ \text{$k$th frame potential $F_{\mathcal{E}}^{(k)}$},
\end{align}
which shows that $2k$-point OTO correlators $\langle A_{1}\tilde{B_{1}}\cdots A_{k}\tilde{B_{k}} \rangle_{\mathcal{E}}$ are measures of whether an ensemble $\mathcal{E}$ is a unitary $k$-design. Thus, the decay of OTO correlators can be used to quantify an increase in pseudorandomness.
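To make the frame potential concrete, consider the $n$-qubit Pauli ensemble: distinct Pauli operators are orthogonal and traceless, so only the $d^2$ diagonal pairs $U=V$ contribute to $F^{(k)}$, giving $F^{(k)}_{\text{Pauli}}=d^{2k-2}$. The following sketch (ours, an illustration rather than a result from the text) compares this with the Haar value $k!$ derived in \S\ref{sec:review}:

\begin{verbatim}
import math

def pauli_frame_potential(n, k):
    # F = (1/|E|^2) sum_{U,V} |tr(U^dag V)|^{2k}; only the d^2 pairs with
    # U = V contribute, each with |tr|^{2k} = d^{2k}
    d = 2 ** n
    return (d ** 2 * d ** (2 * k)) / (d ** 2) ** 2

for k in (1, 2):
    print(k, pauli_frame_potential(n=3, k=k), math.factorial(k))
# k = 1: 1.0 vs 1  -> the Pauli ensemble saturates the 1-design minimum
# k = 2: 64.0 vs 2 -> far above the Haar value: not a 2-design
\end{verbatim}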
\subsubsection*{Chaos, randomness, and complexity}

We prove a lower bound on the quantum circuit complexity needed to generate an ensemble of unitary operators $\mathcal{E}$
\begin{align}
\text{Complexity of $\mathcal{E}$}\ \geq \ \frac{2kn \log(2) - \log F_{\mathcal{E}}^{(k)} }{\log( \text{choices})}.\label{intro-complexity-lower-bound}
\end{align}
This bound is actually given by a rather simple counting argument. The denominator should be thought of as (the log of) the number of choices made at each step in generating the circuit. For instance, if we have a set $\mc{G}$ of cardinality $g$ of $q$-qubit quantum gates available and at each step randomly select $q$ qubits out of $n$ total and select one of the gates in $\mc{G}$ to apply, then we would make $g\binom{n}{q}$ choices at each step. Recalling our result relating OTO correlators and the frame potential, this result implies that generic smallness of OTO correlators leads to higher quantum circuit complexity. This is a direct and quantitative link between chaos and complexity.

However, we caution the reader that in many cases Eq.~\eqref{intro-complexity-lower-bound} may not be a very tight lower bound. We will provide some discussion of this point as well as a few examples; however, further work is most likely required to better understand the utility of this bound.

\subsubsection*{Haar vs.\ simpler ensemble averages}
Finally, we present calculations of the Haar average of some higher-point OTO correlators and compare them to averages in simpler ensembles. These results suggest that the floor values of OTO correlators of local operators might be good diagnostics of pseudorandomness.

For $4$-point OTO correlators, we find
\begin{equation}
\begin{split}
&\langle A\tilde{B}C\tilde{D} \rangle_{\text{Haar}} = \langle AC \rangle \langle B \rangle\langle D \rangle + \langle A \rangle\langle C \rangle \langle BD \rangle - \langle A\rangle \langle C \rangle \langle B \rangle\langle D \rangle - \frac{1}{d^2-1}\langle\!\langle AC\rangle\!\rangle \langle\!\langle BD\rangle\!\rangle,
\end{split} \label{intro-4p-haar-average}
\end{equation}
where $\langle\!\langle AC\rangle\!\rangle$ represents a connected correlator and $d=2^{n}$ is the total number of states. In contrast, if the $U$ are averaged over the Pauli operators (which form a $1$-design but not a $2$-design), we find
\begin{align}
\langle A\tilde{B}C\tilde{D} \rangle_{\text{Pauli}}&=
\langle AC \rangle \langle B D \rangle. \label{intro-4p-pauli-average}
\end{align}
We will present an intuitive explanation of this difference from the viewpoint of local thermal dissipation vs.\ global thermalization or scrambling.
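Both averages are straightforward to test numerically. In the sketch below (ours), we take two qubits with $A=C=X_1$ and $B=D=Z_2$, so that $\langle AC\rangle=\langle BD\rangle=1$ while all one-point functions vanish; with $d=4$, the Haar formula then reduces to the floor value $-1/(d^2-1)=-1/15$, while the Pauli average gives $1$:

\begin{verbatim}
import numpy as np
from itertools import product
from scipy.stats import unitary_group

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
A = C = np.kron(X, I2)                    # X on the first qubit
B = D = np.kron(I2, Z)                    # Z on the second qubit

def oto4(U):
    # <A B~ C D~> on the maximally mixed state, with B~ = U^dag B U
    Bt = U.conj().T @ B @ U
    Dt = U.conj().T @ D @ U
    return np.trace(A @ Bt @ C @ Dt).real / 4

paulis = [np.kron(P, Q) for P, Q in product([I2, X, Y, Z], repeat=2)]
haar_avg = np.mean([oto4(unitary_group.rvs(4)) for _ in range(20000)])
print(round(haar_avg, 2))                            # ~ -0.07 = -1/15
print(round(float(np.mean([oto4(P) for P in paulis])), 2))  # 1.0
\end{verbatim}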
For $8$-point OTO correlators with Pauli operators $A,B,C,D$, we compute averages over the unitary group and the Clifford group (which forms a $3$-design on qubits but not a $4$-design)
\begin{equation}
\begin{split}
&\langle A\tilde{B}C\tilde{D}A^{\dagger} \tilde{D}^{\dagger} C^{\dagger}\tilde{B}^{\dagger}\rangle_{\text{Haar}} \sim \frac{1}{d^4},\\
&\langle A\tilde{B}C\tilde{D}A^{\dagger} \tilde{D}^{\dagger} C^{\dagger}\tilde{B}^{\dagger}\rangle_{\text{Clifford}} \sim \frac{1}{d^2}.
\end{split} \label{intro-8p-averages}
\end{equation}
This suggests that forming a higher $k$-design leads to a lower value of the correlator.

The results Eq.~\eqref{intro-4p-haar-average}-\eqref{intro-8p-averages} are exact for any choice of operators. (The extended results for the correlators in Eq.~\eqref{intro-8p-averages} are presented in Appendix~\ref{sec:8-pt-functions}.) However, for a particular ordering of OTO $4m$-point functions where we average over choices of operators, we will also show that the Haar average over the unitary group scales as
\begin{equation}
\textrm{OTO}^{(4m)} \sim \frac{1}{d^{2m}}.\label{intro-4m-averages}
\end{equation}
This result hints that these correlators continue to be probes of increasing pseudorandomness.

\subsection*{Organization of the paper}
For the convenience of the reader, we include an (almost) self-contained introduction to Haar random unitary operators and unitary design in \S\ref{sec:review}. In \S\ref{sec:OTO_channel}, we establish a formal connection between chaos and unitary design by proving the theorems mentioned above. In \S\ref{sec:complexity-bound}, we connect complexity to unitary design by proving the complexity lower bound \eqref{intro-complexity-lower-bound}. In \S\ref{sec:haar-averages}, we include the explicit calculations of $2$-point and $4$-point functions averaged over different ensembles and discuss how these averages relate to expectations from time evolution with chaotic Hamiltonians. We also discuss results and expectations for higher-point functions. We conclude in \S\ref{sec:discussion} with an extended discussion of these results, their relevance for physical situations, and an outline of some future research directions.

Despite page counting to the contrary, this is actually a short paper. A knowledgeable reader may learn our results by simply reading \S\ref{sec:OTO_channel}, \S\ref{sec:complexity-bound} and \S\ref{sec:haar-averages}. On the other hand, a large number of extended calculations and digressions are relegated to the Appendices:
\begin{itemize}
\item In \S\ref{app:proof}, we collect some proofs that we felt interrupted the flow of the main discussion.
\item In \S\ref{sec:appendix:orthogonal}, we discuss the number of nearly orthogonal states in large-dimensional Hilbert spaces.
\item In \S\ref{sec:complexity-appendix}, we extend our complexity lower bound to minimum circuit depth by considering gates that may be applied in parallel.
We also derive a bound on early-time complexity for evolution with an ensemble of Hamiltonians.
\item In \S\ref{sec:appendix-averages}, we hide the details of our $8$-point function Haar averages and also derive the ${}\sim d^{-2m}$ scaling of Haar averages of certain $4m$-point functions.
\item In \S\ref{sec:sub-space-randomization}, we provide a generalization of the frame potential that equals an average of the square of OTO correlators for arbitrary states rather than just the maximally mixed state.
\item Finally, in \S\ref{sec:more-chaos} we prove some extended results relating to our earlier work \cite{Hosur:2015ylk} that are somewhat outside the main focus of the current paper.
\end{itemize}

\section{Measures of Haar}\label{sec:review}

The goal of this section is to provide a review of the theory of Haar random unitary operators and unitary design in a self-contained manner. The presentation of this section owes a lot to a recent paper by Webb~\cite{Webb15} as well as a course note by Kitaev \cite{Kitaev-Haar}.\footnote{We would also like to highlight a Master's thesis by Yinzheng Gu, which considers a certain matrix multiplication problem with conjugations by Haar random unitary operators. Although not directly relevant to the work, these are actually out-of-time-order correlation functions in a disguised form~\cite{gu2013moments}.}

\subsection*{Haar random unitaries}

\subsubsection*{Schur-Weyl duality}
Consider a finite-dimensional Hilbert space $\mathcal{H}^{\otimes k}=(\mathbb{C}^{d})^{\otimes k}$ consisting of $k$ copies of $\mathcal{H}=\mathbb{C}^d$. A permutation operator $W_{\pi}$ with a permutation $\pi=\pi(1)\ldots \pi (k)$ is defined as follows
\begin{align}
W_{\pi} |a_{1},\ldots,a_{k} \rangle = |a_{\pi(1)},\ldots,a_{\pi(k)} \rangle,
\end{align}
and $W_{\pi}(A_{1}\otimes \cdots \otimes A_{k})W_{\pi}^{-1} = A_{\pi(1)}\otimes \cdots \otimes A_{\pi(k)}$. If $\pi$ is a cyclic permutation ($\pi(j)=j+1$), then $W_{\text{cyc}}$ acts as follows
\begin{align}
\includegraphics[width=0.35\linewidth]{fig_permutation}.
\end{align}

\begin{theorem}\emph{\tb{[Schur-Weyl duality]}}
Let $L(\mathcal{H}^{\otimes k})$ be the algebra of all the operators acting on $\mathcal{H}^{\otimes k}$. Let $U(\mathcal{H})$ be the unitary group on $\mathcal{H}$. An operator $A \in L(\mathcal{H}^{\otimes k})$ commutes with all operators $V^{\otimes k}$ with $V\in U(\mathcal{H})$ if and only if $A$ is a linear combination of permutation operators $W_{\pi}$:
\begin{align}
[A,V^{\otimes k}]=0, ~ \forall V \ \Leftrightarrow \ A = \sum_{\pi\in S_k} c_{\pi}\cdot W_{\pi}.\label{Schur-Weyl-duality}
\end{align}
\end{theorem}

When an operator $A$ is a linear combination of permutation operators, it is clear that $A$ commutes with $V^{\otimes k}$ ($\Leftarrow$). The difficult part is to prove the converse ($\Rightarrow$), which relies on von Neumann's double commutant theorem.

\subsubsection*{Pauli operators}
Pauli operators for $\mathbb{C}^d$ (i.e.\ $d$-state spins or qudits) are defined by
\begin{align}
X|j\rangle = |j+1 \bmod d\rangle, \qquad Z|j\rangle = \omega^j|j\rangle, \label{eq:pauli-def}
\end{align}
where $\omega \equiv e^{2\pi i/d}$. We note that Eq.~\eqref{eq:pauli-def} implies $ZX = \omega XZ$ and $X^d = Z^d = I$, and that for $d>2$ the Pauli operators are unitary and traceless, but not Hermitian.
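These operators are easy to realize explicitly. The following sketch (ours, for illustration) builds the shift and clock matrices for $d=3$ and verifies the relations just stated:

\begin{verbatim}
import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)         # shift: X|j> = |j+1 mod d>
Z = np.diag(omega ** np.arange(d))        # clock: Z|j> = omega^j |j>

print(np.allclose(Z @ X, omega * X @ Z))  # True: ZX = omega XZ
print(np.allclose(np.linalg.matrix_power(X, d), np.eye(d)),
      np.allclose(np.linalg.matrix_power(Z, d), np.eye(d)))  # X^d = Z^d = I
print(abs(np.trace(X)), abs(np.trace(Z))) # both ~0: traceless for d > 2
\end{verbatim}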
The Pauli group is $\tilde{\mathcal{P}}=\langle \tilde{\omega}I,X,Z \rangle$, where $\tilde{\omega}=\omega$ for odd $d$, and $\tilde{\omega}=e^{i\pi/d}$ for even $d$. Since we are usually uninterested in global phases, we will consider the quotient of the group:
\begin{align}
\mathcal{P} = \tilde{\mathcal{P}} \backslash \langle \tilde{\omega} I \rangle.
\end{align}
There are $d^2$ (representative) Pauli operators in $\mathcal{P}$. When the Hilbert space is built up from the space of $n$ qubits, we will denote the Pauli group by $\mathcal{P}_{n}$. For such systems, (the representatives of) $\mathcal{P}_{n}$ consist of tensor products of qubit Pauli operators, such as $X \otimes Y \otimes I \otimes Z \otimes \cdots$, without any global phases.

The Pauli operators provide a basis for the space of linear operators acting on the Hilbert space. They are orthogonal, $\text{tr} \, \{P_{i}^{\dagger}P_{j} \}=d\delta_{ij}$ for $P_{i},P_{j}\in \mathcal{P}$, and therefore we can expand any operator $A$ acting on $\mathcal{H}$ as
\begin{align}
A = \sum_{j} a_{j} P_{j},\qquad a_{j}=\frac{1}{d}\text{tr} \, \{P_{j}^{\dagger}A\}.
\end{align}
With this property, the cyclic permutation operator $W_{\text{cyc}}$ on $\mathcal{H}^{\otimes k}$ can be decomposed as
\begin{align}
W_{\text{cyc}} = \frac{1}{d^{k-1}}\sum_{P_{1},\ldots,P_{k-1}\in \mathcal{P}} P_{1}\otimes P_{2} \otimes \cdots P_{k-1}\otimes Q^{\dagger},
\qquad Q = P_{1}P_{2}\cdots P_{k-1}, \label{eq:pauli-decomposition-of-cyc}
\end{align}
where the sum is over $k-1$ copies of $\mathcal{P}$.

The case of $k=2$ is particularly important, giving an operator that swaps two subsystems. Explicitly, we have $\text{SWAP}=\frac{1}{d}\sum_{P} P\otimes P^{\dagger}$, or graphically (up to a multiplicative factor)
\begin{align}
\includegraphics[width=0.35\linewidth]{fig_SWAP}.\label{eq:fig_SWAP}
\end{align}
Here and in what follows, a dotted line represents an average over all Pauli operators. For example, consider the Pauli channel
\begin{align}
\frac{1}{d^2}\sum_{P \in \mathcal{P}}P^{\dagger} A P = \frac{1}{d}\, \text{tr} \{A \}\, I,\label{eq:Pauli_twirl}
\end{align}
where $A$ is any operator on the system. This equation can be derived graphically by applying Eq.~(\ref{eq:fig_SWAP}):
\begin{align}
\includegraphics[width=0.50\linewidth]{fig_Pauli_twirl}.
\end{align}

\subsubsection*{$k$-fold channel}
Let $A$ be an operator acting on $\mathcal{H}^{\otimes k}$. The $k$-fold channel of $A$ with respect to the unitary group is defined as
\begin{align}
\Phi_{\text{Haar}}^{(k)}(A) := \int_{\text{Haar}} (U^{\otimes k})^{\dagger} A \, U^{\otimes k} dU,
\end{align}
where the integral is taken over the Haar measure. Note that this is sometimes referred to as the $k$-fold twirl of $A$. The Haar measure is the unique probability measure on the unitary group that is both left-invariant and right-invariant~\cite{Watrous}:
\begin{align}
\int_{\text{Haar}} dU = 1,\qquad \int_{\text{Haar}} f(VU)dU=\int_{\text{Haar}} f(UV)dU = \int_{\text{Haar}} f(U)dU, \label{eq:def_Haar}
\end{align}
for all $V\in U(\mathcal{H})$, where $f$ is an arbitrary function.
If we take $f(U)= (U^{\\otimes k})^{\\dagger}A\\,U^{\\otimes k}$, then we can show that the twirl of $A$ is invariant under $k$-fold unitary conjugation, \n\\begin{align}\n(V^{\\otimes k})^{\\dagger}\\big(\\Phi^{(k)}_{\\text{Haar}}(A)\\big) V^{\\otimes k} &= \\int_{\\text{Haar}} f(UV)dU = \\Phi^{(k)}_{\\text{Haar}}(A),\\label{eq:duality1}\n\\end{align}\nand that twirl of the $k$-fold unitary conjugation of $A$ equals the twirl of $A$\n\\begin{align}\n\\Phi^{(k)}_{\\text{Haar}}\\big((V^{\\otimes k})^{\\dagger}A V^{\\otimes k}\\big)&=\\int_{\\text{Haar}} f(VU)dU = \\Phi^{(k)}_{\\text{Haar}}(A),\\label{eq:duality2}\n\\end{align}\nwhere for Eq.~\\eqref{eq:duality1} we used the right-invariance property, and for Eq.~\\eqref{eq:duality2} we used left-invariance.\n\n\\subsubsection*{Weingarten function}\nThe content of Eq.~(\\ref{eq:duality1}) is that $\\Phi^{(k)}_{\\text{Haar}}(A)$ commutes with all operators $V^{\\otimes k}$. Thus, we may use the Schur-Weyl duality Eq.~\\eqref{Schur-Weyl-duality} to rewrite it as\n\\begin{align}\n\\Phi^{(k)}_{\\text{Haar}}(A) = \\sum_{\\pi \\in S_k} W_{\\pi} \\cdot u_{\\pi}(A).\n\\end{align}\nHere, $S_k$ is the permutation group, and $u_{\\pi}(A)$ is some linear function of $A$. Since $u_{\\pi}(A)$ is a linear function, it can be written as\n\\begin{align}\nu_{\\pi}(A) = \\text{tr} \\{ C_{\\pi} A\\},\n\\end{align}\nfor some operators $C_{\\pi}$. From Eq.~(\\ref{eq:duality2}), we find that $C_{\\pi}$ commutes with all operators $V^{\\otimes k}$. Then again, by the Schur-Weyl duality, we have\n\\begin{align}\n\\Phi^{(k)}_{\\text{Haar}}(A) =\\sum_{\\pi,\\sigma\\in S_k} c_{\\pi,\\sigma} W_{\\pi}\\, \\text{tr} \\{ W_{\\sigma}A \\}.\n\\end{align}\nThe coefficients $c_{\\pi,\\sigma}$ are called the Weingarten matrix~\\cite{Collins03}. Since $\\Phi^{(k)}_{\\text{Haar}}(W_{\\lambda})=W_{\\lambda}$, we have $W_{\\lambda} = \\sum_{\\pi,\\sigma} c_{\\pi,\\sigma}W_{\\pi} \\, \\text{tr} \\{W_{\\sigma}W_{\\lambda} \\}$. Recalling that $\\text{tr} \\{ W_{\\sigma}W_{\\lambda}\\}=d^{\\# \\text{cycles}(\\sigma\\lambda)}$, we have\n\\begin{align}\n\\delta_{\\pi,\\lambda} = \\sum_{\\sigma\\in S_k}c_{\\pi,\\sigma} Q_{\\sigma,\\lambda},\\qquad Q_{\\sigma,\\lambda}:=d^{\\# \\text{cycles}(\\sigma\\lambda)}.\n\\end{align}\nSo finally, we find\n\\begin{align}\n\\Phi^{(k)}_{\\text{Haar}}(A) = \\sum_{\\pi,\\sigma\\in S_k} (Q^{-1})_{\\pi,\\sigma} W_{\\pi} \\cdot \\text{tr} \\{W_{\\sigma}A\\}. \\label{eq:Weingarten_decomposition}\n\\end{align}\nHere, we assumed the presence of the inverse $Q^{-1}$ which is guaranteed for $k \\leq d$.\n\n\\subsection*{Examples}\nFor $k=1$, $Q_{I,I}=d$, so one has\n\\begin{align}\n\\Phi^{(1)}_{\\text{Haar}}(A) = \\frac{1}{d} \\text{tr}\\{A\\}.\n\\end{align}\nFor $k=2$, so one has\n\\begin{align}\nQ = \\left(\n\\begin{array}{cc}\nQ_{I,I}, Q_{I,S} \\\\\nQ_{S,I} , Q_{S,S}\n\\end{array}\n\\right) =\n\\left(\n\\begin{array}{cc}\nd^2,&d \\\\\nd , &d^2\n\\end{array}\n\\right) \\quad \nC = \\left(\n\\begin{array}{cc}\nC_{I,I}, C_{I,S} \\\\\nC_{S,I} , C_{S,S}\n\\end{array}\n\\right) =\n\\left(\n\\begin{array}{cc}\n\\frac{1}{d^2-1} ,& \\frac{-1}{d(d^2-1)} \\\\\n\\frac{-1}{d(d^2-1)} , &\\frac{1}{d^2-1}\n\\end{array}\n\\right).\n\\end{align}\nExplicitly, we have\n\\begin{align}\n\\Phi^{(2)}_{\\text{Haar}}(A) = \\frac{1}{d^2-1} \\left( I \\ \\text{tr} \\{A\\} + S\\ \\text{tr} \\{ SA\\} - \\frac{1}{d} S\\ \\text{tr} \\{A\\} - \\frac{1}{d} I\\ \\text{tr} \\{SA\\} \\right),\n\\end{align}\nwhere $S$ is the SWAP operator. 
\n\n\n\\subsubsection*{Haar random states}\nWe can also consider the $k$-fold average of a Haar random state. Define a random state by $|\\psi\\rangle = U|0\\rangle$, where $U$ is sampled uniformly from the unitary group. Then, we have\n\\begin{align}\n\\int_{\\text{Haar}} (|\\psi\\rangle \\langle \\psi| )^{\\otimes k} d\\psi := \\Phi^{(k)}_{\\text{Haar}}\\big((|0\\rangle\\langle 0 |)^{\\otimes k}\\big) = \\sum_{\\pi}c_{\\pi}W_{\\pi},\\label{eq:Haar_state}\n\\end{align}\nfor some coefficients $c_{\\pi}$.\nSince $\\int_{\\text{Haar}} (|\\psi\\rangle \\langle \\psi| )^{\\otimes k}$ commutes with $V^{\\otimes k}$, we again decomposed it with permutation operators. Furthermore, one has\n\\begin{align}\nW_{\\pi}\\int_{\\text{Haar}} (|\\psi\\rangle \\langle \\psi| )^{\\otimes k}=\\int_{\\text{Haar}} (|\\psi\\rangle \\langle \\psi| )^{\\otimes k}, \\qquad \\forall \\pi \\in S_{k}\n\\end{align}\nwhich implies $c_{\\pi}=c$ for all $\\pi$. By taking the trace in Eq.~(\\ref{eq:Haar_state}), we have\n\\begin{align}\nc_{\\pi}= \\frac{1}{\\sum_{\\pi\\in S_k} d^{\\text{\\#cycles($\\pi$)}}} = \\frac{k!}{\n\\Big(\\begin{array}{c}\nk+d-1 \\\\\nk\n\\end{array}\\Big)\n}.\n\\end{align}\nDefining the projector onto the symmetric subspace as $\\Pi_{\\text{sym}} = \\frac{1}{k!}\\sum_{\\pi\\in S_k}W_{\\pi}$, one has\n\\begin{align}\n\\int_{\\text{Haar}} (|\\psi\\rangle \\langle \\psi| )^{\\otimes k} d\\psi = \\frac{\\Pi_{\\text{sym}}}{\n\\Big(\\begin{array}{c}\nk+d-1 \\\\\nk\n\\end{array}\\Big) \\label{haar-random-states-equation}\n}.\n\\end{align}\n\n\n\n\\subsubsection*{Frame potential} In this paper, we will be particularly interested in the following quantity\n\\begin{align}\nF_{\\text{Haar}}^{(k)}=\\iint dU dV\\big|\\text{tr}\\, \\{ U^{\\dagger}V \\} \\big|^{2k}.\n\\end{align}\nUsing Eq.~\\eqref{eq:Weingarten_decomposition}, we can rewrite this\n\\begin{equation}\n\\begin{split}\nF_{\\text{Haar}}^{(k)} &= \\sum_{\\pi_{1},\\pi_{2},\\pi_{3},\\pi_{4}\\in S_{k}} \\text{tr} \\{ W_{\\pi_{1}}W_{\\pi_{2}} \\} (Q^{-1})_{\\pi_{2},\\pi_{3}} \\text{tr} \\{ W_{\\pi_{3}}W_{\\pi_{4}} \\} (Q^{-1})_{\\pi_{4},\\pi_{1}}, \\\\\n&= \\sum_{\\pi_{1},\\pi_{2},\\pi_{3},\\pi_{4}\\in S_{k}}Q_{\\pi_{1},\\pi_{2}} (Q^{-1})_{\\pi_{2},\\pi_{3}}Q_{\\pi_{3},\\pi_{4}}(Q^{-1})_{\\pi_{4},\\pi_{1}} = k!,\n\\end{split}\n\\end{equation}\nfor $k\\leq d$, where we used $C=Q^{-1}$ and $|S_{k}|=k!$.\n\n\n\\subsection*{Unitary design}\n\n\nConsider an ensemble of unitary operators $\\mathcal{E}=\\{p_{j},U_{j}\\}$ where $p_{j}$ are probability distributions such that $\\sum_{j}p_{j}=1$, and $U_{j}$ are unitary operators. The action of the $k$-fold channel with respect to the ensemble $\\mathcal{E}$ is given by \n\\begin{align}\n\\Phi^{(k)}_{\\mathcal{E}}(A) := \\sum_{j} p_{j} (U_{j}^{\\otimes k})^{\\dagger}A (U_{j})^{\\otimes k },\n\\end{align}\nor for continuous distributions\n\\begin{align}\n\\Phi^{(k)}_{\\mathcal{E}}(A) := \\int_{\\mathcal{E}} dU (U_{j}^{\\otimes k})^{\\dagger}A (U_{j})^{\\otimes k }.\n\\end{align}\nThe ensemble $\\mathcal{E}$ is a unitary $k$-design if and only if $ \\Phi^{(k)}_{\\mathcal{E}}(A)=\\Phi^{(k)}_{\\text{Haar}}(A)$ for all $A$. Intuitively, a unitary $k$-design is as random as the Haar ensemble up to the $k$th moment. That is, a unitary $k$-design is an ensemble which satisfies the definition of the Haar measure in Eq.~(\\ref{eq:def_Haar}) when $f(U)$ contains up to $k$th powers of $U$ and $U^{\\dagger}$ (i.e. balanced monomials of degree at most $k$). 
By this definition, if an ensemble is a $k$-design, then it is also a $(k-1)$-design. However, the converse is not true in general. 


It is convenient to write the above definition of a $k$-design in terms of Pauli operators. An ensemble $\mathcal{E}$ is a $k$-design if and only if $ \Phi^{(k)}_{\mathcal{E}}(P)=\Phi^{(k)}_{\text{Haar}}(P)$ for all Pauli operators $P\in (\mathcal{P})^{\otimes k}$, since the Pauli operators form a basis of the operator space. Furthermore, for an arbitrary ensemble $\mathcal{E}$
\begin{align}
\Phi^{(k)}_{\text{Haar}}\left( \Phi^{(k)}_{\mathcal{E}}\big( A \big) \right) =\Phi^{(k)}_{\text{Haar}}\left( A\right), \qquad \forall \mathcal{E},\label{eq:ensemble-followed-by-Haar}
\end{align}
due to the left/right-invariance of the Haar measure. By using Eq.~\eqref{eq:ensemble-followed-by-Haar}, we can derive the following useful criterion for $k$-designs~\cite{Webb15}
\begin{align}
\text{$\mathcal{E}$ is a $k$-design} \quad \Leftrightarrow \quad \text{$\Phi^{(k)}_{\mathcal{E}}(P)$ is a linear combination of $W_{\pi}$ for all $P\in (\mathcal{P})^{\otimes k}$.} \label{eq:criteria}
\end{align}
To make use of this, we will look at some illustrative examples. 

\subsubsection*{Pauli is a $1$-design}
The Pauli operators form a unitary $1$-design. In fact, we have already shown this with Eq.~\eqref{eq:Pauli_twirl}. We have shown that an average over Pauli operators gives $\frac{1}{d^2}\sum_{P\in \mathcal{P}}P^{\dagger} A P = \frac{1}{d}\, \text{tr}\{A\}\, I$, and the Haar random channel gives $\Phi^{(1)}_{\text{Haar}}(A)=\frac{1}{d}\, \text{tr}\{A\}\, I$, so therefore
\begin{align}
\frac{1}{d^2}\sum_{P\in \mathcal{P}}P^{\dagger} \rho P=\Phi^{(1)}_{\text{Haar}}(\rho),
\end{align}
for all $\rho$. For example, if $d=2$ and $A=X$ (Pauli $X$ operator), then we have
\begin{align}
\frac{1}{4}(IXI + XXX + YXY + ZXZ) = \frac{1}{4}(X + X - X - X) =0,
\end{align}
which is consistent with $\text{tr} \{X \}=0$. 

Thus, an average over Pauli operators is equivalent to taking a trace. Since Pauli operators can be written in a tensor product form, $P_{1}\otimes P_{2}\otimes \ldots \otimes P_{n}$ for a system of $n$ qubits, they do not create entanglement between different qubits and do not scramble. Instead, they can only mix quantum information locally. This implies some kind of relationship between $1$-designs and local thermalization that we will revisit in the discussion \S\ref{sec:discussion}.

\subsubsection*{Clifford is a $2$-design}
The Clifford operators form a unitary $2$-design. The Clifford group $\mathcal{C}_{n}$ is a group of unitary operators acting on a system of $n$ qubits that transform Pauli operators into Pauli operators under conjugation
\begin{align}
C^{\dagger} P C = Q, \qquad P,Q\in \tilde{\mathcal{P}}, \quad C\in \mathcal{C}_{n}.
\end{align}
Clearly, Pauli operators are Clifford operators, since $PQP^{\dagger} = e^{i\theta} Q$ for any pair of Pauli operators $P,Q$: Pauli operators transform a Pauli operator to itself up to a global phase. However, non-trivial Clifford operators are those which transform a Pauli operator into a different Pauli operator. An example of such an operator is the Control-$Z$ gate
\begin{align}
\text{C$Z$}|a,b\rangle = (-1)^{ab}|a,b\rangle, \qquad a,b =0,1,
\end{align}
acting on a pair of qubits.
The conjugation of a Pauli operator with a Control-$Z$ gate is as follows
\begin{align}
X\otimes I \rightarrow X \otimes Z, \qquad Z\otimes I \rightarrow Z \otimes I, 
\qquad
I\otimes X \rightarrow Z \otimes X, \qquad I\otimes Z \rightarrow I \otimes Z. \label{eq:control-z}
\end{align}

Let us prove that the Clifford group is a $2$-design by using Eq.~(\ref{eq:criteria})~\cite{Webb15}. For qubit Pauli operators of the form $P\otimes P$ ($P\not=I$), the action of the Clifford $2$-fold channel is
\begin{align}
\sum_{C\in \mathcal{C}_{n} } (C^{\dagger}\otimes C^{\dagger}) P\otimes P (C\otimes C) \propto \sum_{Q\in \mathcal{P}_{n}, Q\not=I }Q\otimes Q,
\end{align}
because a random Clifford operator will transform $P$ into some other non-identity Pauli operator. Recalling the definition of the swap operator $\text{SWAP}=\frac{1}{d}\sum_{P\in \mathcal{P}_{n}} P\otimes P^{\dagger}$, the RHS is a linear combination of $I\otimes I$ and $\text{SWAP}$. On the other hand, for other Pauli operators $P\otimes Q$ with $P\not=Q$, the action of the channel is
\begin{align}
\sum_{C\in \mathcal{C}_{n} } (C^{\dagger}\otimes C^{\dagger}) P\otimes Q (C\otimes C) =0.
\end{align}
This can be seen by rewriting the sum as 
\begin{equation}
\frac{1}{2}\sum_{C\in \mathcal{C}_{n} } (C^{\dagger}\otimes C^{\dagger}) P\otimes Q (C\otimes C) + \frac{1}{2}\sum_{C\in \mathcal{C}_{n} } \big( (RC)^{\dagger}\otimes (RC)^{\dagger}\big) P\otimes Q (RC\otimes RC), 
\end{equation}
since the Clifford group is invariant under element-wise multiplication by a fixed Pauli operator $R$. Since we assumed the Pauli operators are different, we have $PQ \neq I$, and thus we can pick $R$ such that it anti-commutes with $PQ$. This implies $R^\dagger P R \otimes R^\dagger Q R = - P \otimes Q$, and therefore the two terms cancel.
Finally, $I\otimes I$ remains invariant. Thus, since the action of the channel on Pauli operators gives a linear combination of permutation operators, by Eq.~\eqref{eq:criteria} the Clifford group is a unitary $2$-design. 

Note that Clifford operators do not have a tensor product form, in general. This means that, unlike evolution restricted to Pauli operators, they can change the size of an operator, as seen in the Control-$Z$ gate example Eq.~\eqref{eq:control-z}. In other words, they can grow local operators into global operators, indicative of the butterfly effect. 

In fact, Clifford operators can prepare a large class of interesting quantum states called stabilizer states. Let $|\psi_{0}\rangle=|0\rangle^{\otimes n}$ be an initial product state. This state satisfies $Z_{j}|\psi_{0}\rangle = |\psi_{0}\rangle$ for all $j$. Let $U$ be an arbitrary Clifford operator and consider $|\psi\rangle = U |\psi_{0}\rangle$. This state $|\psi\rangle$ satisfies the following
\begin{align}
S_{j}|\psi\rangle = |\psi\rangle, \qquad S_{j}=UZ_{j}U^{\dagger}.
\end{align}
By definition, $S_{j}$ are Pauli operators and will commute with each other. A quantum state that can be represented by a set of commuting Pauli operators $S_{j}$ is called a stabilizer state. Examples of stabilizer states include ground states of the toric code and the perfect tensors used in the construction of holographic quantum error-correcting codes \cite{Pastawski15b}. The upshot is that Clifford operators can create global entanglement and can scramble quantum information. We will return to this point again in the discussion \S\ref{sec:discussion}.
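As a brute-force cross-check for a single qubit ($n=1$, $d=2$), one can generate the full Clifford group (24 elements, modulo global phases) from the Hadamard and phase gates and evaluate the frame potential $\frac{1}{|\mathcal{E}|^{2}}\sum_{U,V\in\mathcal{E}}|\text{tr}\{U^{\dagger}V\}|^{2k}$, a diagnostic introduced above for the Haar ensemble and discussed for general ensembles in the next section, which equals $k!$ for a $k$-design. This is a self-contained Python sketch of ours, not part of the original argument:
\begin{verbatim}
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
S = np.array([[1, 0], [0, 1j]])                # phase gate

def canonical(U):
    # remove the global phase: make the first nonzero entry real positive
    idx = np.flatnonzero(np.round(U, 8))[0]
    return np.round(U / (U.flat[idx] / abs(U.flat[idx])), 8) + 0j

# generate the group closure of {H, S} modulo phases
eye = np.eye(2, dtype=complex)
group = {canonical(eye).tobytes(): eye}
frontier = [eye]
while frontier:
    frontier = [V for U in frontier for G in (H, S)
                for V in [canonical(G @ U)]
                if group.setdefault(V.tobytes(), V) is V]
print(len(group))                              # 24 Clifford elements

def frame_potential(ops, k):
    return np.mean([abs(np.trace(U.conj().T @ V)) ** (2 * k)
                    for U in ops for V in ops])

ops = list(group.values())
print(frame_potential(ops, 1))                 # 1.0 = 1!  (1-design)
print(frame_potential(ops, 2))                 # 2.0 = 2!  (2-design)
\end{verbatim}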
\subsubsection*{? is a higher-design}
Currently there is no known method of constructing an ensemble which forms an exact $k$-design for $k\geq 4$ in a way which generalizes to large $d$. Instead, there are several constructions for preparing approximate $k$-designs in an efficient manner~\cite{Brandao12,Nakata:2016blv}.

\section{Measures of chaos and design}\label{sec:OTO_channel}

In this section, we show that $2k$-point OTO correlators are probes of unitary $k$-designs. We will focus on a Hilbert space $\mathcal{H}=\mathbb{C}^d$ with $2k$-point correlators of the following form
\begin{align}
\big\langle A_{1}\tilde{B_{1}}\cdots A_{k}\tilde{B_{k}} \big\rangle := \frac{1}{d}\text{tr}\, \{A_{1}\tilde{B_{1}}\cdots A_{k}\tilde{B_{k}}\},\label{eq-form-of-correlator}
\end{align}
where $\tilde{B_{j}}=U^{\dagger}B_{j}U$. We can think of this as a correlator evaluated in a maximally mixed or infinite temperature state $\rho = \frac{1}{d}I$. The trace can be rewritten as
\begin{align}
\text{tr} \,\big\{ (A_{1} \otimes \cdots \otimes A_{k}) (U^{\dagger}\otimes \cdots\otimes U^{\dagger})
(B_{1} \otimes \cdots \otimes B_{k})(U\otimes \cdots\otimes U)
 W_{\pi_{\text{cyc}}} \big\},\label{eq:cyclic}
\end{align}
by considering an enlarged Hilbert space $\mathcal{H}^{\otimes k}$ that consists of $k$ copies of the original Hilbert space $\mathcal{H}$, where $W_{\pi_{\text{cyc}}}$ represents a cyclic permutation operator on $\mathcal{H}^{\otimes k}$. The action of $W_{\pi_{\text{cyc}}}$ is to send the $j$th Hilbert space to the $(j+1)$th Hilbert space (modulo $k$), see Fig.~\ref{fig_k-fold_twirl} for a graphical representation.\footnote{N.B. this trick of using cyclic permutation operators is similar to the method used for computing R\'enyi-$k$ entanglement entropies in quantum field theories via the insertion of twist operators.} 

\begin{figure}[htb!]
\centering
\includegraphics[width=0.40\linewidth]{fig_k-fold_twirl}
\caption{Schematic form of the $2k$-point OTO correlation functions Eq.~\eqref{eq-form-of-correlator}, interpreted as a correlation function on the enlarged $k$-copied system. The dotted line diagram surrounds the $k$-fold channel $\Phi_{\mathcal{E}}(B_{1} \otimes \cdots \otimes B_{k})$, which is probed by $A_{1}\otimes \ldots \otimes A_{k}$. (Periodic boundary conditions are implied to take the trace.)
} 
\label{fig_k-fold_twirl}
\end{figure}

Observe that Eq.~(\ref{eq:cyclic}) contains a $k$-fold unitary action 
\begin{equation}
(U^{\dagger}\otimes \cdots\otimes U^{\dagger}){}
(B_{1} \otimes \cdots \otimes B_{k})
(U\otimes \cdots\otimes U), \notag
\end{equation} 
suggesting correlators of the form Eq.~\eqref{eq-form-of-correlator} have the potential to be sensitive to whether an ensemble is or is not a $k$-design.
Unitary $k$-designs concern the \emph{$k$-fold channel} of an ensemble of unitary operators
\begin{align}
\Phi_{\mathcal{E}}(B_{1} \otimes \cdots \otimes B_{k}) = \int_{\mathcal{E}} dU\, (U^{\dagger}\otimes \cdots\otimes U^{\dagger})
(B_{1} \otimes \cdots \otimes B_{k})
(U\otimes \cdots\otimes U),
\end{align} 
where $\mathcal{E}$ is an ensemble of unitary operators.
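The rewriting in Eq.~\eqref{eq:cyclic} is easy to verify numerically: for $k=2$ the cyclic permutation is just SWAP, and the single-copy and two-copy expressions agree for arbitrary operators (a Python sketch of ours with toy dimensions; not part of the original argument):
\begin{verbatim}
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(1)
d = 3                                             # toy dimension, k = 2
rand = lambda: rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A1, A2, B1, B2 = rand(), rand(), rand(), rand()
U = unitary_group.rvs(d, random_state=rng)

# LHS: tr{ A1 (U^dag B1 U) A2 (U^dag B2 U) }
Bt = lambda B: U.conj().T @ B @ U
lhs = np.trace(A1 @ Bt(B1) @ A2 @ Bt(B2))

# RHS: the k-copy expression of Eq. (eq:cyclic), with W_cyc = SWAP for k = 2
SWAP = np.eye(d*d).reshape(d, d, d, d).transpose(1, 0, 2, 3).reshape(d*d, d*d)
U2 = np.kron(U, U)
rhs = np.trace(np.kron(A1, A2) @ U2.conj().T @ np.kron(B1, B2) @ U2 @ SWAP)

assert np.allclose(lhs, rhs)
\end{verbatim}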
To further this point, let us consider an average of these correlators over an ensemble of unitary operators $\mathcal{E}$
\begin{align}
\big\langle A_{1}\tilde{B_{1}}\cdots A_{k}\tilde{B_{k}} \big\rangle_{\mathcal{E}} := \frac{1}{d}\int_{\mathcal{E}} dU\, \text{tr}\, \{ A_{1}\tilde{B_{1}}\cdots A_{k}\tilde{B_{k}}\}. \label{eq-form-of-average-oto}
\end{align}
Looking back at Eq.~\eqref{eq:cyclic}, the idea is that the $A_{1},\ldots,A_{k}$ operators probe the outcome of the $k$-fold channel $\Phi_{\mathcal{E}}(B_{1} \otimes \cdots \otimes B_{k})$. Indeed, the part of Fig.~\ref{fig_k-fold_twirl} surrounded by a dotted line is $\Phi_{\mathcal{E}}(B_{1} \otimes \cdots \otimes B_{k})$. Below, we will make this intuition precise by proving that this set of OTO correlators Eq.~\eqref{eq-form-of-correlator} completely determines the $k$-fold channel $\Phi^{(k)}_{\mathcal{E}}$. 



\subsection{Chaos and $k$-designs}

In this subsection, we prove that $2k$-point OTO correlators completely determine the $k$-fold channel of an ensemble $\mathcal{E}$, denoted by $\Phi_{\mathcal{E}}^{(k)}$. 
To recap, we write the $k$-fold channel of an ensemble $\mathcal{E}=\{p_{j},U_{j}\}$ as 
\begin{align}
\Phi^{(k)}_{\mathcal{E}}(\rho) = \sum_{j} p_{j} (U_{j}^{\dagger}\otimes \cdots \otimes U_{j}^{\dagger})\rho (U_{j}\otimes \cdots \otimes U_{j}),
\end{align}
where $\rho$ is defined over $k$ copies of the system, $\mathcal{H}^{\otimes k}$. The map is linear, completely positive, and trace-preserving (CPTP), i.e. a quantum channel. For simplicity of discussion, we assume that the system $\mathcal{H}$ is made of $n$ qubits so that $\mathcal{H}=\mathbb{C}^{d}$ with $d=2^n$. The input density matrix $\rho_{\text{in}}$ can be expanded in Pauli operators 
\begin{align}
\rho_{\text{in}} = \sum_{B_{1},\ldots,B_{k}}\beta_{B_{1},\ldots,B_{k}}(B_{1}\otimes \cdots \otimes B_{k}),
\end{align}
where $\beta_{B_{1},\ldots,B_{k}}= d^{-k} \, \text{tr} \, \big\{(B_{1}^{\dagger}\otimes \cdots \otimes B_{k}^{\dagger}) \rho_{\text{in}}\big\}$. The output density matrix is given by
\begin{align}
\rho_{\text{out}} = \sum_{B_{1},\ldots,B_{k}}\beta_{B_{1},\ldots,B_{k}}\Phi_{\mathcal{E}}(B_{1}\otimes \cdots \otimes B_{k}).
\end{align}
For given Pauli operators $B_{1},\ldots,B_{k}$, we would like to examine $\Phi_{\mathcal{E}}(B_{1}\otimes \cdots \otimes B_{k})$. 

Let us fix the Pauli operators $B_{1},\ldots,B_{k}$ for the rest of the argument. Note that the output $\Phi_{\mathcal{E}}(B_{1}\otimes \cdots \otimes B_{k})$ can also be expanded in Pauli operators
\begin{align}
\Phi_{\mathcal{E}}(B_{1}\otimes \cdots \otimes B_{k}) = \sum_{C_{1},\ldots,C_{k}}\gamma_{C_{1},\ldots,C_{k}}(C_{1}\otimes \cdots \otimes C_{k}).\label{eq:expansion}
\end{align}
Since we have fixed $B_{1}\otimes \cdots \otimes B_{k}$, for notational simplicity we have not included the $B_{1}, \cdots, B_{k}$ indices on the tensor $\gamma$. In order to characterize the $k$-fold channel, we need to know the values of $\gamma_{C_{1},\ldots,C_{k}}$ for given $B_{1}, \cdots, B_{k}$. We would like to show that we can determine the values of $\gamma_{C_{1},\ldots,C_{k}}$ by knowing a certain collection of OTO correlators.
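The key technical fact, stated as the theorem below, is that the tensor $M^{C_{1},\ldots,C_{k}}_{A_{1},\ldots,A_{k}}=\text{tr}\{A_{1}C_{1}\cdots A_{k}C_{k}\}$ introduced shortly is invertible. For small systems this orthogonality property can be checked directly (a Python sketch of ours for $d=2$, $k=2$; not part of the original argument):
\begin{verbatim}
import numpy as np
from itertools import product

# single-qubit Pauli basis; k = 2 copies, d = 2
Id = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
pairs = list(product([Id, X, Y, Z], repeat=2))

# M^{C1,C2}_{A1,A2} = tr{A1 C1 A2 C2}, flattened to a 16 x 16 matrix
M = np.array([[np.trace(A1 @ C1 @ A2 @ C2) for (C1, C2) in pairs]
              for (A1, A2) in pairs])

# check M^dag M = d^{2k} x identity, with d^{2k} = 2^4 = 16
print(np.allclose(M.conj().T @ M, 16 * np.eye(16)))   # True
\end{verbatim}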
Consider a $2k$-point OTO correlator labeled by the set of $A$ operators, averaged over an ensemble $\mathcal{E}$
\begin{align}
\alpha_{A_{1},\ldots,A_{k}} = \big\langle A_{1} \tilde{B_{1}} \cdots A_{k}\tilde{B_{k}} \big\rangle_{\mathcal{E}}\label{eq:definition},
\end{align}
where as always $\tilde{B_{j}}=U^{\dagger}B_{j}U$ and $A_{1},\ldots,A_{k}$ are Pauli operators. As before, for simplicity of notation we have not included $B_{1},\ldots,B_{k}$ indices on $\alpha$. Now that the notation is set up, the main question is whether one can determine the coefficients $\gamma_{C_{1},\ldots,C_{k}}$ from the numbers $\alpha_{A_{1},\ldots,A_{k}}$.

 Substituting Eq.~\eqref{eq:expansion} into Eq.~\eqref{eq:definition}, we see
\begin{align}
\alpha_{A_{1},\ldots,A_{k}} = \frac{1}{d}\cdot M^{C_{1},\ldots,C_{k}}_{A_{1},\ldots,A_{k}} \gamma_{C_{1},\ldots,C_{k}},
\qquad
M^{C_{1},\ldots,C_{k}}_{A_{1},\ldots,A_{k}} =
 \text{tr} \, \{ A_{1}C_{1}\cdots A_{k}C_{k} \}, \label{eq:normal}
\end{align}
where tensor contractions are implicit following the Einstein summation convention.
This shows that we can compute the OTO correlators $\alpha_{A_{1},\ldots,A_{k}}$ from the coefficients $\gamma_{C_{1},\ldots,C_{k}}$ defining the $k$-fold channel. To establish the converse, we must prove that the tensor $M$ is invertible. 


\begin{theorem}\label{theorem:OTO_chaos}
Consider the tensor $M^{C_{1},\ldots,C_{k}}_{A_{1},\ldots,A_{k}}$ 
and its conjugate transpose ${M^{\dagger}}_{C_{1},\ldots,C_{k}}^{A_{1},\ldots,A_{k}}$. Then
\begin{align}
\sum_{C_{1},\ldots,C_{k}}{M^{\dagger}}_{C_{1},\ldots,C_{k}}^{A_{1}',\ldots,A_{k}'}M^{C_{1},\ldots,C_{k}}_{A_{1},\ldots,A_{k}} = d^{2k}\cdot 
\delta^{A_{1}'}_{A_{1}}\cdots \delta^{A_{k}'}_{A_{k}},\label{eq:tensor_M}
\end{align}
where $\delta^{P}_{Q}$ is the delta function for Pauli operators $P,Q$
\begin{align}
\delta^{P}_{Q} = 1, \quad (P=Q), \qquad \delta^{P}_{Q} = 0, \quad (P\not=Q).
\end{align}
\end{theorem}

The proof of this theorem is somewhat technical and has been relegated to Appendix~\ref{sec:proof:oto-channnel}. Thus, from the OTO correlators $\alpha_{A_{1},\ldots,A_{k}}$, we can completely determine the coefficients $\gamma_{C_{1},\ldots,C_{k}}$ of the $k$-fold channel
\begin{align}
\gamma_{C_{1},\ldots,C_{k}}= \frac{1}{d^{2k-1}} {M^{\dagger}}_{C_{1},\ldots,C_{k}}^{A_{1},\ldots,A_{k}}\alpha_{A_{1},\ldots,A_{k}}.
\end{align}
As an immediate corollary, this means that $2k$-point OTO correlators can measure whether or not an ensemble forms a $k$-design.



\subsection{Frame potentials}


In this subsection we introduce the \emph{frame potential}, a single quantity that can measure whether an ensemble is a $k$-design. Furthermore, we show how the frame potential may be computed from OTO correlators.

Given an ensemble of unitary operators $\mathcal{E}$, the $k$th frame potential is defined by the following double sum \cite{Scott08}
\begin{align}
F_{\mathcal{E}}^{(k)} := \frac{1}{|\mathcal{E}|^{2}}\sum_{U,V\in \mathcal{E}} \left| \text{tr} \{U^{\dagger}V\} \right|^{2k},
\end{align}
where $|\mathcal{E}|$ denotes the cardinality of $\mathcal{E}$. Denote the frame potential for the Haar ensemble by $F^{(k)}_{\text{Haar}}$.
Then, the following theorem holds.

\begin{theorem}
For any ensemble $\mathcal{E}$ of unitary operators, 
\begin{align}
F^{(k)}_{\mathcal{E}} \geq F^{(k)}_{\mathrm{Haar}},
\end{align}
with equality if and only if $\mathcal{E}$ is a $k$-design.
\end{theorem}

The proof of this theorem is insightful and beautiful; we reproduce it from~\cite{Scott08}. 

\begin{proof}
Letting $S=\int_{\mathcal{E}}dU\,(U^{\dagger})^{\otimes k}\otimes U^{\otimes k} - \int_{\text{Haar}}dU\,(U^{\dagger})^{\otimes k}\otimes U^{\otimes k}$, we have
\begin{equation}
\begin{split}
0 \leq \text{tr} \{ S^{\dagger}S\} = &\int_{U\in\mathcal{E}}\int_{V\in\mathcal{E}}dUdV\, |\text{tr} \{U^{\dagger}V \} |^{2k} - 2 \int_{U\in\mathcal{E}}\int_{V\in\text{Haar}}dUdV \, |\text{tr} \{U^{\dagger}V \} |^{2k} \\
&+ \iint_{U,V\in\text{Haar}}dUdV \, |\text{tr} \{ U^{\dagger}V \} |^{2k}.
\end{split}
\end{equation}
The first term is $F^{(k)}_{\mathcal{E}}$, and the third term is $F^{(k)}_{\text{Haar}}$ by definition. The second term is equal to $- 2F^{(k)}_{\text{Haar}}$ by the left/right-invariance of the Haar measure. Thus, we see that
\begin{align}
0 \leq F^{(k)}_{\mathcal{E}} - 2F^{(k)}_{\text{Haar}} + F^{(k)}_{\text{Haar}}
=F^{(k)}_{\mathcal{E}} - F^{(k)}_{\text{Haar}},
\end{align}
with equality if and only if $\mathcal{E}$ is a $k$-design. 
\end{proof}

Note that we derived the minimal value of the frame potential in \S\ref{sec:review},
\begin{equation}
F_{\text{Haar}}^{(k)}=k!,
\end{equation}
 which holds for $k \le d$. The frame potential quantifies the $2$-norm distance between the $k$-fold channel of $\mathcal{E}$ and that of the Haar ensemble.\footnote{The distance between two quantum channels is typically measured by the diamond norm, while the frame potential measures the $2$-norm (or more precisely the Frobenius norm) distance between a given ensemble and the Haar ensemble. Importantly, the $2$-norm is weaker than the diamond norm. For precise statements regarding the different norms and bounds relating them, see~\cite{Low_thesis}.} Here we show that the frame potential can be expressed as a certain average of OTO correlation functions. 

\begin{theorem}
For any ensemble $\mathcal{E}$ of unitary operators,
\begin{align}
\frac{1}{d^{4k}}\sum_{A_{1},\cdots,B_{1},\cdots}\left|\big\langle A_{1}\tilde{B_{1}}\cdots A_{k}\tilde{B_{k}} \big\rangle_{\mathcal{E}}\right|^{2} = \frac{1}{d^{2(k+1)}}\cdot F^{(k)}_{\mathcal{E}}\label{eq:OTO_frame},
\end{align}
where summations are over all possible Pauli operators.
\end{theorem} 

The LHS of the equation is the operator average of the $2$-norm of OTO correlators, and the RHS is the $k$th frame potential up to a constant factor. There are $d^{4k}$ choices of the Pauli operators $A_{1},\ldots, A_k, B_{1},\ldots, B_k$, which leads to the normalization $1/d^{4k}$. The theorem implies that the quantitative effect of random unitary evolution is to decrease the frame potential, which is equivalent to the decay of OTO correlators. 


\begin{proof}
We take the averages over $A_{1},\cdots,A_k, B_{1},\cdots, B_k$ first.
Expanding the LHS gives 
\begin{align}
\frac{1}{d^{4k}}\frac{1}{|\mathcal{E}|^{2}d^2}\sum_{U,V\in \mathcal{E}}\sum_{A_{1},\cdots,B_{1},\cdots}\text{tr} \{A_{1}U^{\dagger}B_{1}U\cdots A_{k}U^{\dagger}B_{k}U \} \cdot\text{tr} \{ V^{\dagger}B_{k}^{\dagger}VA_{k}^{\dagger}\cdots V^{\dagger}B_{1}^{\dagger}VA_{1}^{\dagger}\}.
\end{align}
For $k=2$, this can be depicted graphically as
\begin{align}
\includegraphics[width=0.40\linewidth]{fig_frame}.
\end{align}
Recall that the SWAP operator is given by $\text{SWAP} = \frac{1}{d}\sum_{P} P \otimes P^{\dagger}$, or graphically
\begin{align}
\includegraphics[width=0.35\linewidth]{fig_SWAP}.
\end{align}
Thus, we replace each average over the Pauli operators $A_{1},\cdots,A_k, B_{1},\cdots, B_k$ by a SWAP operator. There are $2k$ loops, where $k$ of them contribute to $\text{tr} \{UV^{\dagger}\}$ and the remaining $k$ loops contribute to $\text{tr} \{ VU^{\dagger} \}$. Keeping track of the number of factors of $d$, we find 
\begin{align}
\frac{1}{d^{2(k+1)}|\mathcal{E}|^2}\sum_{U,V\in\mathcal{E}}\text{tr} \{ UV^{\dagger} \} ^k\cdot \text{tr} \{ VU^{\dagger}\}^k = \frac{1}{d^{2(k+1)}} \cdot F^{(k)}_{\mathcal{E}},
\end{align}
which is the desired result.
\end{proof}


Lastly, we note that we can use the frame potential to lower bound the size of the ensemble. Since all the terms in the sum are positive, by taking the diagonal part of the double sum, we find
\begin{align}
F_{\mathcal{E}}^{(k)} \geq \frac{1}{|\mathcal{E}|^2}\sum_{U,V\in \mathcal{E},\, U=V} \big|\text{tr} \{I \}\big|^{2k}=
 \frac{1}{|\mathcal{E}|}d^{2k}, \label{frame-potential-vs-size}
\end{align}
meaning the cardinality is lower bounded as $|\mathcal{E}| \ge d^{2k} / F_{\mathcal{E}}^{(k)}$. Using the fact that $F^{(k)}_\mathrm{Haar} = k!$, we can simply bound the size of a $k$-design\footnote{Actually, one can slightly improve this lower bound for a $k$-design, see~\cite{Roy:2009aa, Brandao12}.}
\begin{equation}
\mathcal{|E|} \ge \frac{d^{2k}}{k!}, \qquad \text{($k$-design)}. \label{eq-cardinality-of-k-design}
\end{equation}



\subsubsection*{Large $k$}


In Appendix~\ref{sec:appendix:orthogonal}, we provide an intuitive way of counting the number of nearly orthogonal states in a high-dimensional vector space $\mathbb{C}^d$ by introducing a precision tolerance $\epsilon$. A similar argument holds for the number of operators; if we can only distinguish operators up to some precision $\epsilon$, then the total number of unitary operators is given by the volume of the unitary group measured in balls of size $\epsilon$, which roughly goes like $\sim \epsilon^{-2^{2n}}$.

The key point is that if we have a tolerance $\epsilon$, then the number of operators is finite even for a continuous ensemble. This means that there is a maximum size to any ensemble that is a subset of the unitary group. Looking at our bound Eq.~\eqref{eq-cardinality-of-k-design}, we see that for $k\sim d$ the lower bound begins to approach this maximum size for fixed $\epsilon$. 

As a corollary, this means that for large $k \gtrsim d$, being a $k$-design implies that the ensemble is an approximate $(k+1)$-design. In this sense, there are really only $O(d)$ nontrivial moments of the Haar ensemble.

As an explicit example, we will consider the case of $d=2$. Here, for the Haar ensemble we can compute the frame potential exactly for all $k$ \cite{Scott08}
\begin{align}
F^{(k)}_{\text{Haar}} = \frac{(2k)!}{k!\,(k+1)!},
\end{align}
and thus we have the following relationship
\begin{align}
\frac{F_{\text{Haar}}^{(k+1)}}{F_{\text{Haar}}^{(k)}} = 4\, \frac{k+1/2}{k+2}.
\end{align}
On the other hand, for any ensemble the frame potential satisfies
\begin{align}
F^{(k+1)} \leq d^2 F^{(k)}=4F^{(k)}.
\end{align}
Therefore, for $d=2$ we see that if an ensemble is a $k$-design, for $k \gtrsim 2$ it will automatically be close to being a $(k+1)$-design.




\subsection{Ensemble vs. time averages}\label{sec:F-time-average}

In this subsection, we consider the time average of the frame potential
to ask whether the ensemble specified by sampling $U(t)=e^{-iHt}$ at different times $t$ will ever form a $k$-design. In the classical statistical mechanics and chaos literature, this is the question of whether a system is ergodic---whether the average over phase space equals an average over the time evolution of some initial state.

Consider the one-parameter ensemble of unitary matrices defined by evolving with a fixed time-independent Hamiltonian $H$, 
\begin{equation}
\mathcal{E} = \big\{ e^{-iHt} \big\}_{t=0}^{\infty}.
\end{equation} 
We can compute the frame potential as
\begin{align}
F_{\mathcal{E}}^{(k)} &= \lim_{T\to\infty} \frac{1}{T^2}\int_0^T dt_1 dt_2 \, \Big|\trr{e^{-iH(t_1 - t_2)}}\Big|^{2k}, \\
 &= \lim_{T\to\infty} \frac{1}{T^2}\int_0^T dt_1 dt_2 \, \sum_{\ell_{1}\dots \ell_{k}} \sum_{m_{1}\dots m_{k}} \exp \Big\{ -i (t_2 - t_1) \sum_{i=1}^k E_{\ell_{i}} + i(t_2-t_1)\sum_{j=1}^k E_{m_{j}} \Big\}. \notag
\end{align}
If the spectrum is chaotic/generic, then the energy levels are all incommensurate. The terms in the sum average to zero unless the two sets of energies coincide, i.e. unless $E_{\ell_{i}} = E_{m_{\sigma(i)}}$ for some permutation $\sigma \in S_{k}$, so that
\begin{align}
F_{\mathcal{E}}^{(k)} &= \sum_{\ell_{1}\dots \ell_{k}} \sum_{m_{1}\dots m_{k}} \sum_{\sigma\in S_{k}} \prod_{i=1}^{k} \delta_{\ell_{i}m_{\sigma(i)}}, \\
 &= k! \Big(\sum_{\ell} 1 \Big)^k = k!\,d^k. \notag
\end{align}
This is larger than $F_{\text{Haar}}^{(k)}$ by a factor of $d^k$; the ensemble $\mathcal{E}=\{e^{-iHt}\}$ does not form a $k$-design!
Recall that $F^{(k)}_{\mathcal{E}}=d^{2k}$ for the trivial ensemble $\mathcal{E}=\{I\}$, while $F^{(k)}_{\mathcal{E}}=k!$ for the Haar ensemble. The time-averaged ensemble sits between these two extremes.

This ``half-randomness'' can be understood in the following way. Let us write the Hamiltonian as
\begin{align}
H = \sum_{j} E_{j} |\psi_{j}\rangle \langle \psi_{j}|.
\end{align}
Rotating $H$ by a unitary operator does not affect the frame potential, so we can consider a classical Hamiltonian 
\begin{align}
H' = \sum_{j} E_{j} |j\rangle \langle j|,
\end{align}
with the same frame potential. Even though $H'$ is classical, it has an ability to generate entanglement. Namely, if an initial state is $|+\rangle\propto\sum_{j}|j\rangle$, then time evolution will create entanglement. In terms of the original Hamiltonian $H$, we see that the system keeps evolving in a non-trivial manner as long as the initial state is ``generic'' and is different from eigenstates of the Hamiltonian $H$. This argument suggests that the frame potential, in a time-averaged sense, is sensitive only to the distribution of the spectrum.
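This conclusion is simple to test numerically (a Monte Carlo sketch of ours in Python; the spectrum and sampling parameters are toy choices, and the time average is approximated by random sampling of $t_1,t_2$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
d, k = 4, 1                              # toy values
T, samples = 1e4, 200_000

E = rng.uniform(0, 1, size=d)            # generic, incommensurate spectrum
dt = rng.uniform(0, T, samples) - rng.uniform(0, T, samples)   # t_1 - t_2
tr = np.exp(-1j * np.outer(dt, E)).sum(axis=1)   # tr exp(-iH(t_1 - t_2))

print(np.mean(np.abs(tr) ** (2 * k)))    # ~ k! d^k = 4, versus F_Haar = k! = 1
\end{verbatim}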
This fact, that Hamiltonian time evolution can never lead to random unitaries, can also be understood in terms of the distribution of the spacing of the eigenvalues.\footnote{This argument has been made by Michael Berry, who related it to Steve Shenker, who related it to Douglas Stanford, who related it to us. As far as we know, it does not otherwise appear in the literature.} The phases of Haar random unitary operators have Wigner-Dyson statistics; neighboring phases repel each other. The eigenvalues of a typical Hamiltonian $H$ also have this property. However, these eigenvalues live on the real line, while the phases of $e^{-iHt}$ live on the circle. In mapping the line to the circle, the eigenvalues of $H$ wrap around many times. This means that the differences of neighboring phases of $e^{-iHt}$ will not exhibit this repulsion; $e^{-iHt}$ will have Poisson statistics! In this sense, the ensemble formed by sampling $e^{-iHt}$ over time may never become Haar random.\footnote{Relatedly, the space of unitaries generated by a fixed Hamiltonian is $d$-dimensional, while the space of unitaries is $d^2$-dimensional.} The calculation at the beginning of this subsection makes this explicit: it computes the $2$-norm distance between such an ensemble and a $k$-design, and quantifies the degree to which the ensemble average is ``too random'' as compared to the time average.



\section{Measures of complexity}\label{sec:complexity-bound}
For an operator or state, the computational complexity is a measure of the minimum number of basic elementary gates necessary to generate the operator or the state from a simple reference operator (e.g. the identity operator) or reference state (e.g. a product state). With the assumption that the elementary gates are easy to apply, the complexity is a measure of how difficult it is to create the state or operator.


On the other hand, an ensemble contains many different operators with different weights or probabilities. In that case, the computational complexity of the ensemble should be understood as the number of steps it takes to generate the ensemble by probabilistic applications of elementary gates. For instance, to generate the ensemble of Pauli operators, we randomly choose with probability $1/4$ one of the Pauli operators $I,X,Y,Z$ to apply to a qubit, and then repeat this procedure for all the qubits.

The complexity of an ensemble is related to the complexity of an operator in the following way. If an ensemble can be prepared in $\mathcal{C}$ steps, then all the operators in the ensemble can be generated by applications of at most $\mathcal{C}$ elementary gates. On the other hand, if an ensemble cannot be prepared (or approximated) in $\mathcal{C}$ steps, then---for the sorts of ensembles we are interested in---most of the operators cannot be generated by applications of $\mathcal{C}$ elementary gates. 
For example, generating the Haar ensemble will take exponential complexity since, on average, individual elements have exponential complexity.



The complexity of the ensemble can be lower bounded in terms of the number of elements, or cardinality, of the ensemble $|\mathcal{E}|$. If all the elements are represented equally (with uniform probabilities), then clearly at least $|\mathcal{E}|$ distinct circuits need to be generated from probabilistic applications of the elementary gates.
Making use of the fact, introduced in the previous section, that $F_{\mathcal{E}}^{(k)}$ provides a lower bound on $|\mathcal{E}|$, here we show that the frame potential provides a lower bound on the circuit complexity of generating $\mathcal{E}$. We will also explain how this bound applies to ensembles that depend continuously on some parameters and thus have a divergent number of elements. 

Two additional bounds that are somewhat outside the scope of the main presentation, one on circuit depth and one on the early-time complexity growth with a disordered ensemble of Hamiltonians, are relegated to Appendix~\ref{sec:complexity-appendix}.

\subsection{Discrete ensembles}\label{sec:complexity-discrete}

Consider a system of $n$ qubits. Let $\mathcal{G}$ denote an elementary gate set that consists of a finite number of two-qubit quantum gates. We denote the number of elementary two-qubit gates by $g:=|\mathcal{G}|$. At each time step we assume that we can implement any of the gates from $\mathcal{G}$. One typically chooses $\mathcal{G}$ so that the gates in $\mathcal{G}$ enable universal quantum computation. A well-known example is
\begin{align}
\mathcal{G} = \{\text{$2$-qubit Clifford}, T\}, 
\end{align}
where $T$ is the $\pi/4$ phase shift operator, $T=\text{diag}(1,e^{i\pi/4})$. (Of course, this is not the only possible choice of elementary gate set.) 

Our goal is to generate an ensemble of unitary operators $\mathcal{E}$ by sequentially implementing quantum gates from $\mathcal{G}$. Let us denote the necessary number of steps (i.e. the circuit complexity) to generate $\mathcal{E}$ by $\mathcal{C}(\mathcal{E})$. Then one has the following complexity lower bound. 
\begin{theorem}
Let $g$ be the number of distinct two-qubit gates from the elementary gate set. Then the circuit complexity $\mathcal{C}(\mathcal{E})$ to generate an ensemble $\mathcal{E}$ is lower bounded by 
\begin{align}
\mathcal{C}(\mathcal{E}) \geq \frac{\log |\mathcal{E}| }{\log(g n^2)}.\label{eq-for-theorem-complexity-ensemble-size-bound}
\end{align}
\label{theorem-complexity-ensemble-size-bound}
\end{theorem}

The proof relies on an elementary counting argument; arguments along these lines are common in the literature. 


\begin{proof}
 At each step, we randomly pick a pair of qubits. Since there are $g$ implementable quantum gates and $\binom{n}{2}$ qubit pairs, there are in total $\simeq g n^2$ choices at each step. 
If this procedure runs for $\mathcal{C}$ steps, the number of unique circuits this procedure can implement is upper bounded by
\begin{align}
\text{$\#$ of circuits}\leq (g n^2)^{\mathcal{C}}.
\end{align}
Since there are $|\mathcal{E}|$ unitary operators in the ensemble $\mathcal{E}$, we must have
\begin{align}
(g n^2)^{\mathcal{C}} \geq |\mathcal{E}|, 
\end{align}
which implies
\begin{align}
\mathcal{C}(\mathcal{E}) \geq \frac{\log |\mathcal{E}| }{\log(g n^2)}. 
\end{align} 
\end{proof}


In Appendix~\ref{sec:appendix:orthogonal}, we provide an intuitive way of counting the number of states in a $d=2^n$-dimensional Hilbert space (a counting long known from \cite{Knill95}).
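To put numbers in, here is a tiny numerical evaluation of the bound (a Python sketch; the values of $n$ and $g$ are toy choices of ours). Combining Eq.~\eqref{eq-for-theorem-complexity-ensemble-size-bound} with the $k$-design cardinality bound Eq.~\eqref{eq-cardinality-of-k-design} gives a weak but nontrivial gate count:
\begin{verbatim}
import math
import numpy as np

def complexity_lower_bound(log2_E, g, n):
    # Eq. (eq-for-theorem-complexity-ensemble-size-bound) in base 2:
    # C >= log2|E| / log2(g n^2)
    return log2_E / np.log2(g * n**2)

# toy values of ours: n = 100 qubits, g = 12 elementary 2-qubit gates
n, g, k = 100, 12, 2
log2_E = 2 * k * n - math.log2(math.factorial(k))  # |E| >= d^{2k}/k!, d = 2^n
print(complexity_lower_bound(log2_E, g, n))        # ~ 24 gates (a weak bound)
\end{verbatim}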
As a sanity check on Theorem~\ref{theorem-complexity-ensemble-size-bound}, if we substitute $|\mathcal{E}| \gtrsim 2^{2^n}$ into Eq.~\eqref{eq-for-theorem-complexity-ensemble-size-bound} we see that the complexity of most states is exponential in the number of qubits, $\mathcal{C} \gtrsim 2^n \log 2/\log(g n^2)$.


Finally, let us examine the relation between the frame potential and the circuit complexity. Using Eq.~\eqref{frame-potential-vs-size} to trade $|\mathcal{E}|$ for $F_{\mathcal{E}}^{(k)}$, we find
\begin{align}
\mathcal{C}(\mathcal{E}) \geq \frac{2kn \log(2) - \log F_{\mathcal{E}}^{(k)} }{\log(g n^2)}.
\end{align}
Of course, this bound depends on the choice of basic elements $g$ and the fact that we are using two-qubit gates. If we had considered $q$-body gates, we would have found a denominator $\log(g\binom{n}{q}) \approx \log(gn^q)$ for $n\gg q$.
This is no more than a choice of ``units'' with which to measure complexity. Thus, we state our key result as:
\begin{theorem}\label{theorem-bound}
For an ensemble $\mathcal{E}$ with the $k$th frame potential $F_{\mathcal{E}}^{(k)}$, the circuit complexity is lower bounded by
\begin{align}
\mathcal{C}(\mathcal{E}) \geq \frac{2kn \log(2) - \log F_{\mathcal{E}}^{(k)} }{\log( \mathrm{choices})}.\label{eq:lower-bound}
\end{align}
\end{theorem}
In this context, $\log( \mathrm{choices})$ simply indicates the logarithm of the number of decisions that are made at each step. If we imagine we have some kind of decision tree for determining which gate to apply, where we make a binary decision at each step (and use $\log_2$), then we may set the denominator to unity and measure complexity in bits rather than gates, i.e. $\mathcal{C}(\mathcal{E}) \geq 2kn - \log_2 F_{\mathcal{E}}^{(k)}$.


In the above discussion, we glossed over a subtlety. Here, we considered the quantum circuit complexity to prepare or approximate an entire ensemble $\mathcal{E}$. A closely related but different question concerns the quantum circuit complexity required to implement a typical unitary operator from the ensemble $\mathcal{E}$. Nevertheless, in ordinary settings the typical operator complexity and the ensemble complexity are roughly of the same order. While establishing a rigorous result in this direction is beyond the scope of this paper, see~\cite{Knill95} for some basic proof techniques that are useful in establishing this connection. 

An important consequence of Theorem~\ref{theorem-bound} is that the smallness of the frame potential (i.e. the generic smallness of OTO correlators) implies an increase in the quantum circuit complexity of generating the ensemble $\mathcal{E}$. As a corollary, we can simply rewrite Eq.~\eqref{eq:lower-bound} as
\begin{align}
\mathcal{C}(\mathcal{E}) \geq (2k-1)\,2n - \log_2 \sum_{A_{1},\cdots,B_{1},\cdots}\left|\big\langle A_{1}\tilde{B_{1}}\cdots A_{k}\tilde{B_{k}} \big\rangle_{\mathcal{E}}\right|^{2}.\label{eq:lower-bound-oto} 
\end{align}
In this sense, we see how the decay of OTO correlators is directly related to an increase in the (lower bound on the) complexity.


Next, recall that the frame potential for a $k$-design is given by $F^{(k)}_{\text{Haar}}=k!$, which does not grow with $n$.
The complexity of a $k$-design is thus lower bounded as
\begin{equation}
\mathcal{C}(k\text{-design}) \ge 2kn - \log_2(k!) = 2kn - k\log_2 k + k\log_2 e + \dots, \label{eq:complexity-k-design}
\end{equation}
which for large $n$ grows roughly linearly in $k$ and $n$ (at least). In \S\ref{sec:complexity:depth}, we show that the minimum depth circuit to make a $k$-design also grows linearly in $k$ (at least).

Finally, we offer an additional information-theoretic interpretation for our lower bound and generalize it to ensembles with non-uniform probability distributions. Consider an ensemble $\mathcal{E}=\{ p_{j}, U_{j} \}$ with probability distribution $\{p_{j} \}$ such that $\sum_{j}p_{j}=1$. The second R\'{e}nyi entropy of the distribution $\{p_{j} \}$ is defined as $S^{(2)}= - \log_2 \big( \sum_{j} p_{j}^2 \big)$. In this more general situation, we can still bound the frame potential by considering the diagonal part of the sum
\begin{align}
F^{(k)}_{\mathcal{E}} = \sum_{i,j} p_{i}p_{j} \, \big|\text{tr} \{ U_{i}U_{j}^{\dagger} \}\big|^{2k} \geq \sum_{i} p_{i}^2 \, \big|\text{tr} \{ I \} \big|^{2k} = 2^{-S^{(2)}}d^{2k}. 
\end{align}
Since the von Neumann entropy $S^{(1)} = - \sum_{j} p_j \log_2(p_j)$ is always greater than the second R\'enyi entropy, $S^{(1)} \geq S^{(2)}$, we can bound the von Neumann entropy as
\begin{align}
S^{(1)} \geq 2kn - \log_2 F^{(k)}_{\mathcal{E}}.
\end{align}
The entropy of the ensemble is a notion of complexity measured in bits.\footnote{However, this is not to be confused with the entanglement entropy. The entropy of the ensemble is essentially the logarithm of the number of different operators, and therefore can be exponential in the size of the system. Instead, the entanglement entropy (as a measure of entanglement) can only be as large as (half) the size of the system.} 

\subsection{Continuous ensembles}\label{sec:continuous}

Many interesting ensembles of unitary operators are defined by continuous parameters, e.g. a disordered system has a time evolution that may be modeled by an ensemble of Hamiltonians.\footnote{A notable example of recent interest to the holography and condensed matter community is the ensemble implied by time evolving with the Sachdev-Ye-Kitaev Hamiltonian \cite{Sachdev:1992fk,Kitaev:2014t2,Maldacena:2016hyu}.} While the counting argument in \S\ref{sec:complexity-discrete} is not directly applicable to these systems with continuous parameters, the complexity lower bound generalizes to such systems by allowing approximations of unitary operators. To be concrete, imagine that we wish to create some unitary operator $U_{0}$ by combining gates from the elementary gate set. In practice, we do not need to create the exact unitary operator $U_{0}$. Instead, we may be fine with preparing some $U$ that faithfully approximates $U_{0}$ to within a trace distance
\begin{align}
||U_{0}-U||_{1}\leq \epsilon,
\end{align}
where the notation $||\cdot||_p \equiv (\text{tr} \, \{ |\cdot|^p \} )^{1/p}$ specifies the $p$-norm of an operator.

Now, let us derive a complexity lower bound up to an $\epsilon$-tolerance.
We begin by taking $N_s$ samples from the ensemble and use them to estimate the frame potential of the continuous distribution
\begin{equation}
F_{\mathcal{E}}^{(k)} \approx \frac{1}{N_s^2}\sum_{i,j} \big|\text{tr}\, \{ U_i^{\dagger}V_j \} \big|^{2k},\label{sampled-frame-potential}
\end{equation}
where each of the two sums runs over all $N_s$ samples. We can lower bound Eq.~\eqref{sampled-frame-potential} as follows
\begin{equation}
F_{\mathcal{E}}^{(k)} \ge \frac{1}{N_s^2}\sum_{i} \sum_{\substack{j, \\ \mathllap{||}U_i\mathrlap{ - V_j||_1 < \epsilon} } } \big|\text{tr}\, \{ U_i^{\dagger}V_j \} \big|^{2k},\label{epsilon-frame-potential}
\end{equation}
where the sum over $i$ runs from $1$ to $N_s$. The sum over $j$ is restricted to a smaller subset of size $N_\epsilon(U_i)$, the number of sampled operators within a trace distance $\epsilon$ of a particular $U_i$.


To continue, let's bound the summand. First, note that
\begin{equation}
\big| \text{tr} \{ U^\dagger V \} \big| \geq \text{Re} \big\{ \, \text{tr} \, \{ U^\dagger V \} \big\} = d - \frac{1}{2}||U-V||_2^2.
\end{equation}
The $2$-norm is upper bounded by the $1$-norm as $|| \mathcal{O} ||_2 \le \sqrt{d} || \mathcal{O}||_1$, which lets us rewrite this as
\begin{equation}
\big| \text{tr}\, \{ U^{\dagger}V \} \big|^{2k} \geq d^{2k}\bigg(1 - \frac{1}{2}||U-V||_1^2\bigg)^{2k}.\label{summand-bound}
\end{equation}
For this bound to be sensible, we require $\epsilon < \sqrt{2}$. Substituting Eq.~\eqref{summand-bound} into Eq.~\eqref{epsilon-frame-potential}, we can bound the frame potential
\begin{equation}
F_{\mathcal{E}}^{(k)} \ge \frac{1}{N_s} d^{2k}\bigg(1 - \frac{\epsilon^2}{2}\bigg)^{2k} \bigg[ \frac{1}{N_s} \sum_{i=1}^{N_s} N_{\epsilon}(U_i) \bigg] = d^{2k}\bigg(\frac{ \overline{N_\epsilon} }{N_s} \bigg) \bigg(1 - \frac{\epsilon^2}{2}\bigg)^{2k}. 
\end{equation}
The term in brackets in the middle expression is the average number of operators within a trace distance $\epsilon$ of an operator in our sample set. In the final expression this is represented by the symbol $\overline{N_\epsilon}$.


Now, let's run the counting argument again. If we want to make $N_s$ circuits exactly, then in $\mathcal{C}$ steps we must have
\begin{equation}
(\text{choices})^\mathcal{C} > N_s,
\end{equation}
where as before $(\text{choices})$ summarizes the information about our choice of gate set, etc. Instead, if we only care about making circuits to within an $\epsilon$-accuracy, then in $\mathcal{C}_\epsilon$ steps $N_s$ instead satisfies
\begin{equation}
(\text{choices})^{\mathcal{C}_\epsilon} ~ \overline{N_\epsilon} > N_s.
\end{equation}
This lets us lower bound the complexity of the ensemble at precision $\epsilon$ as
\begin{equation}
\mathcal{C}_\epsilon(\mathcal{E}) > \frac{2k \log(d) - k \epsilon^2 - \log F_{\mathcal{E}}^{(k)}}{\log (\text{choices}) }. \label{complexity-boudnd-eps}
\end{equation}
We then take the continuum limit by taking $N_s \to \infty$. The number of operators within an $\epsilon$-ball of a given sample will also diverge, but the ratio $ N_s/ \overline{N_\epsilon}$ should remain finite and converge to some value, roughly the volume of the ensemble as measured in balls of $\epsilon$-radius.\footnote{Note that $\log(\text{choices})$ diverges if the elementary gate set is continuous.
This problem may also be fixed by employing the $\epsilon$-tolerance.}

Finally, in \S\ref{sec:complexity:early}, we further extend this notion of bounding complexity for continuous ensembles to show that the early-time complexity of evolution with an ensemble of Hamiltonians grows as $t^2$ for times $t < 1/\sqrt{\log(d)}$.



\section{Measures of correlators}\label{sec:haar-averages}

While much of the focus of this paper has been on the behavior of ensembles, we were originally motivated by the following question: When is a random unitary operator an appropriate approximation to the underlying dynamics? In this section, we will attempt to return the focus to this question by computing random averages over correlation functions and comparing them to expectations for chaotic time evolution in physical systems.

\subsection{Haar random averages}

In this subsection, we will explicitly compute some ensemble averages of OTO correlators for different choices of ensembles. A particular goal will be to understand the asymptotic behavior of these averages in the limit of a large number of degrees of freedom $d\to \infty$. We present explicit calculations for $2$-point and $4$-point functions here and provide results for $6$-point and $8$-point functions. Additional calculations may be found in Appendix~\ref{sec:appendix-averages}.


\subsubsection*{$2$-point functions}

Consider a $2$-point correlator, averaged over Haar random unitary operators
\begin{align}
\langle A \tilde{B} \rangle_{\text{Haar}}=\frac{1}{d}\int_{\text{Haar}} dU \, \text{tr} \, \{ A \, U^{\dagger}BU\} .
\end{align}
Since $U$ and $U^\dagger$ each appear in the expression only once, we will obtain the same answer if the average is instead performed over a $1$-design: $\langle A \tilde{B} \rangle_{\text{Haar}} = \langle A \tilde{B} \rangle_{\text{1-design}}$. By using a formula from \S\ref{sec:review}, we can derive the following expression
\begin{align}
\langle A \tilde{B} \rangle_{\text{Haar}} = \langle A\rangle\langle B \rangle. \label{2-pt-haar}
\end{align}
Graphically, the calculation goes as follows
\begin{align}
\includegraphics[width=0.65\linewidth]{fig_2point.pdf}.
\end{align}
It is often convenient to consider physical observables with zero mean by shifting $A \rightarrow A -\langle A\rangle$. Then, we see that these $2$-point correlation functions vanish. Of course, this always holds for Pauli operators
\begin{align}
\langle A \tilde{B} \rangle_{\text{Haar}} = 0, \qquad A,B\in \mathcal{P},\quad A,B\not=I.
\end{align}

Next, let us consider the norm squared of a $2$-point correlator averaged over Haar random unitary operators, $|\langle A \tilde{B} \rangle|^2_{\text{Haar}}=\frac{1}{d^2}\int_{\text{Haar}} dU\, \text{tr} \{ A\,U^{\dagger}BU \} \, \text{tr} \{ U^{\dagger}B^\dagger U \, A^{\dagger}\}$. Note that we take the Haar average after squaring the correlator. Since there are two pairs of $U$ and $U^{\dagger}$ appearing, we can perform the average over a $2$-design: $| \langle A \tilde{B} \rangle|^{2}_{\text{Haar}} = |\langle A \tilde{B} \rangle|^{2}_{\text{2-design}}$. Let us assume that $A,B$ are non-identity Pauli operators so that we can neglect contributions from $\langle A\rangle$ and $\langle B\rangle$. There are four terms, but only one term survives because the trace of non-identity Pauli operators is zero.
We depict the calculation graphically as
\begin{align}
\includegraphics[width=0.6\linewidth]{fig_2point_absolute.pdf},
\end{align}
where $C_{I,I}=1/(d^2-1)$ comes from the Weingarten function as shown in \S\ref{sec:review}. The final result is
\begin{align}
|\langle A \tilde{B}\rangle|^2_{\text{Haar}}=\frac{1}{d^2-1},\qquad A,B\in \mathcal{P},\quad A,B\not=I.\label{eq:2-pt-variance}
\end{align}
Thus, the variance of the Haar averaged $2$-point function is exponentially small in the number of qubits. 

\subsubsection*{$4$-point functions}

Next, consider a $4$-point OTO correlator averaged over Haar random unitary operators: 
\begin{align}
\langle A \tilde{B} C \tilde{D} \rangle_{\text{Haar}} = \frac{1}{d}\int_{\text{Haar}} dU \, \text{tr} \{ A\, U^\dagger B U \,C \, U^\dagger D U \}. 
\end{align}
As has already been explained, we will obtain the same answer if the average is performed with a $2$-design: $\langle A \tilde{B} C \tilde{D} \rangle_{\text{Haar}} = \langle A \tilde{B} C \tilde{D} \rangle_{\text{2-design}}$. By using formulas from \S\ref{sec:review} we can derive the following expression\footnote{Note: this formula was independently obtained by Kitaev.}
\begin{equation}
\langle A \tilde{B} C \tilde{D} \rangle_{\text{Haar}} = \langle AC \rangle \langle B \rangle\langle D \rangle + \langle A \rangle\langle C \rangle \langle BD \rangle - \langle A\rangle \langle C \rangle \langle B \rangle\langle D \rangle - \frac{1}{d^2-1}\langle\!\langle AC\rangle\!\rangle \langle\!\langle BD\rangle\!\rangle, \label{eq:OTO_formula}
\end{equation}
where $d=2^n$ and $\langle \! \langle AC\rangle \! \rangle\equiv\langle AC\rangle-\langle A\rangle\langle C\rangle$. In particular, for Pauli operators $A,B,C,D\not=I$, one has
\begin{equation}
\begin{split}
\langle A \tilde{B} C \tilde{D} \rangle_{\text{Haar}} &= - \frac{1}{d^2-1} \qquad (A=C^{\dagger},\ B=D^{\dagger} ), \\
&= 0 \qquad \quad \qquad (\text{otherwise}).
\end{split}
\end{equation}
When nonzero, the result is exponentially small in the number of qubits $n$. The derivation of the aforementioned formula can be understood graphically as follows
\begin{equation}
\includegraphics[width=0.88\linewidth]{fig_4pt_OTO}.
\end{equation}
By rewriting the expression in terms of connected correlators, we obtain the formula Eq.~\eqref{eq:OTO_formula}.

Of course, we can obtain the same result by instead averaging over the Clifford group. When $A\not=C^{\dagger}$ or $B\not=D^{\dagger}$, it is easy to show $\langle A\tilde{B}C\tilde{D} \rangle_{\text{Clifford}}=0$. When $A=C^{\dagger}$ and $B=D^{\dagger}$, the value of the correlator depends on the commutation relations between $A$ and~$B$
\begin{equation}
\begin{split}
\langle A\tilde{B}A^{\dagger}\tilde{B}^{\dagger} \rangle&=1, \quad \qquad [A,B]=0,\\
&=-1, \qquad \{A,B\}=0.
\end{split}
\end{equation}
Since by definition Clifford operators transform a Pauli operator to another Pauli operator, $\tilde{B}$ is a random Pauli operator with $\tilde{B}\not=I$. There are $d^2-1$ non-identity Pauli operators. Among them, $d^2/2-1$ Pauli operators commute with $A$, and $d^2/2$ anti-commute with $A$.
Therefore, we find
\begin{align}
\langle A\tilde{B}A^{\dagger}\tilde{B}^{\dagger} \rangle_{\text{Clifford}}
= \frac{d^2/2-1}{d^2-1} - \frac{d^2/2}{d^2-1} = \frac{-1}{d^2-1}.
\end{align}
As expected, the $4$-point OTO correlator averaged over the Clifford group equals the Haar average. Recalling our result from \S\ref{sec:OTO_channel} that $4$-point OTO values completely determine the $2$-fold channel, this explicit calculation gives an alternative proof that the Clifford group forms a unitary $2$-design.

Finally, we will present one additional way of computing the Haar average of $4$-point OTO correlators. For convenience, we introduce the following notation
\begin{align}
\text{OTO}^{(4)}(A,B) :=\langle A \tilde{B} A^{\dagger}\tilde{B}^{\dagger} \rangle_{\text{Haar}}.
\end{align}
Since $U$ is sampled uniformly over the unitary group, $\text{OTO}^{(4)}(A,B)$ does not depend on $A,B$ as long as $A,B\not=I$. Consider the average of OTO correlation functions over all Pauli operators $A$, including the identity operator: $\sum_{A} \langle A \tilde{B} A^{\dagger}\tilde{B}^{\dagger} \rangle_{\text{Haar}}$. Since $\frac{1}{d}\sum_{A}A\otimes A^{\dagger}$ is the swap operator, we have
\begin{align}
\includegraphics[width=0.65\linewidth]{fig_short_cut},
\end{align}
where a dotted line represents an average over all the Pauli operators. This expression must be zero for $B\not=I$. Thus, we have
\begin{align}
\sum_{A}\text{OTO}^{(4)}(A,B) = 0, \qquad B\not=I.
\end{align}
If $A=I$, we have $\text{OTO}^{(4)}(A,B)=1$. Since there are $d^2$ Pauli operators, we have
\begin{align}
\text{OTO}^{(4)}(A,B) = -\frac{1}{d^2-1}, \qquad A,B\not=I.
\end{align}
In Appendix~\ref{sec:k-point-averages}, we use this method to estimate the scaling of higher-point OTO correlators with $d$, finding that $4m$-point functions of a related ordering scale as $\sim 1/d^{2m}$.

\subsubsection*{$6$-point functions}

Next, consider the Haar average of $6$-point OTO correlators, $\langle A \tilde{B} C \tilde{D} E \tilde{F} \rangle_{\text{Haar}}$. We will assume that $A,\ldots,F\not=I$ are Pauli operators. In order to have non-zero contributions, we must have $ACE \propto I$ and $BDF \propto I$, so we will only consider such cases.

The results depend on the commutation relations between $A,C$ and $B,D$, but always have the following scaling
\begin{align}
\langle \text{6-point} \rangle_{\text{Haar}} \sim \frac{1}{d^2}, \qquad (A C E=I, \quad B D F=I).
\end{align}
Explicitly, when $[A,C]=0$ and $[B,D]=0$, we find
\begin{align}
\langle \text{6-point} \rangle_{\text{Haar}} = \frac{2d^2}{(d^2-1)(d^2-4)}.
\end{align}
The more general expression is slightly complicated but has the same scaling. Thus, the Haar average of $6$-point OTO correlators does not reach a lower floor value than the Haar average of $4$-point OTO correlators.

\subsubsection*{$8$-point functions}
Finally, we will study Haar averages of $8$-point OTO correlators. In this case, there are two different types of nontrivial out-of-time ordering, which behave differently at large $d$.
These computations are annoyingly technical, and so the details are hidden in Appendix~\ref{sec:8-pt-functions}.

The $8$-point OTO correlators of the first type can be written in the following manner
\begin{align}
\langle A\tilde{B}C\tilde{D}A^{\dagger} \tilde{B}^{\dagger} C^{\dagger}\tilde{D}^{\dagger}\rangle, \qquad \text{(non-commutator type)}.\label{eq:non-commutator-type}
\end{align}
For Hermitian operators, this essentially repeats $A\tilde{B}C\tilde{D}$ twice. For reasons that will subsequently become clear, we will call such OTO correlators ``non-commutator type.'' (The result does, however, depend on the commutation relations between $A,C$ and $B,D$.) For these correlators, the scaling of the Haar average with respect to $d$ is
\begin{align}
\langle A\tilde{B}C\tilde{D}A^{\dagger} \tilde{B}^{\dagger} C^{\dagger}\tilde{D}^{\dagger}\rangle_{\text{Haar}} \sim \frac{1}{d^2}, \qquad \text{(non-commutator type)},
\end{align}
and this scaling does not depend on any commutation relations. Similar to what we found for the Haar average of the $6$-point functions, these non-commutator type $8$-point OTO correlators have the same scaling with $d$ as the Haar average of $4$-point OTO correlators.

The $8$-point OTO correlators of the second type take the form
\begin{align}
\langle A\tilde{B}C\tilde{D}A^{\dagger} \tilde{D}^{\dagger} C^{\dagger}\tilde{B}^{\dagger}\rangle, \qquad \text{(commutator type)},
\end{align}
and are denoted ``commutator-type'' correlators. These correlators have the property that they can be written in the form
\begin{align}
\langle A\tilde{B}C\tilde{D}A^{\dagger} \tilde{D}^{\dagger} C^{\dagger}\tilde{B}^{\dagger}\rangle = \langle AKA^{\dagger} K^{\dagger}\rangle,
\qquad K = \tilde{B}C\tilde{D},
\end{align}
i.e., they are the expectation value of the group commutator $AKA^{\dagger} K^{\dagger}$. The OTO correlators Eq.~\eqref{eq:non-commutator-type} cannot be written in this way. As with the non-commutator type, the exact Haar average depends on the commutation relations between $A,C$ and $B,D$. However, the scaling with respect to $d$ does not
\begin{align}
\langle A\tilde{B}C\tilde{D}A^{\dagger} \tilde{D}^{\dagger} C^{\dagger}\tilde{B}^{\dagger}\rangle_{\text{Haar}} \sim \frac{1}{d^4}, \qquad \text{(commutator type)}.
\end{align}
The Haar average of these correlators is much smaller than the Haar averages of the non-commutator type and of the $4$- and $6$-point functions!
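These analytic floor values are straightforward to check numerically. As a minimal illustration (our own sketch, not part of the original derivation; the qubit number, operator choices and sample count are arbitrary assumptions), the following Python snippet samples Haar random unitaries for $n=3$ qubits and estimates the $2$-point variance Eq.~\eqref{eq:2-pt-variance} and the $4$-point value $-1/(d^2-1)$:
\begin{verbatim}
import numpy as np
from scipy.stats import unitary_group  # Haar-distributed unitaries

n = 3                       # number of qubits (an assumption for the demo)
d = 2 ** n

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def kron_all(ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

A = kron_all([X, I2, I2])   # two non-identity Pauli operators (arbitrary)
B = kron_all([I2, Z, I2])

samples = 2000
two_pt_sq = 0.0
four_pt = 0.0
for _ in range(samples):
    U = unitary_group.rvs(d)
    Bt = U.conj().T @ B @ U                      # \tilde{B} = U^dag B U
    two_pt_sq += abs(np.trace(A @ Bt) / d) ** 2  # |<A Btilde>|^2
    four_pt += (np.trace(A @ Bt @ A.conj().T @ Bt.conj().T) / d).real

print("|<A Bt>|^2 :", two_pt_sq / samples, " expected:", 1 / (d**2 - 1))
print("OTO^(4)    :", four_pt / samples, " expected:", -1 / (d**2 - 1))
\end{verbatim}
With a few thousand samples the estimates should agree with the analytic values to within sampling error; the commutator-type $8$-point floor $\sim 1/d^4$ can be probed the same way, though resolving it requires many more samples.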
The smallness of the commutator-type average suggests that these correlators might be a useful statistic for distinguishing ensembles that form a $4$-design from ensembles that form a $2$-design but do not form a higher design.

To test this idea, we can average the commutator-type $8$-point OTO correlators over the Clifford group. Since we have assumed the operators $A, \dots, D$ are Pauli operators, we find
\begin{align}
\langle A\tilde{B}C\tilde{D}A^{\dagger} \tilde{D}^{\dagger} C^{\dagger}\tilde{B}^{\dagger}\rangle_{\text{Clifford}} = \langle A\tilde{B}\tilde{D}CA^{\dagger} C^{\dagger} \tilde{D}^{\dagger} \tilde{B}^{\dagger} \rangle_{\text{Clifford}} = \frac{K(A,C^{\dagger})}{d} \, \text{tr} \{ A\tilde{B}\tilde{D}A^{\dagger} \tilde{D}^{\dagger} \tilde{B}^{\dagger}\},
\end{align}
where in the first equality we commuted $C,\tilde{D}$ and $C^{\dagger},\tilde{D}^{\dagger}$, which holds because $C, D$ are Pauli operators, and in the second equality we have defined $K(P,Q)$ by
\begin{equation}
Q^{\dagger}PQ = K(P,Q)P,
\end{equation}
for Pauli operators $P,Q$. The final answer is
\begin{align}
\langle A\tilde{B}C\tilde{D}A^{\dagger} \tilde{D}^{\dagger} C^{\dagger}\tilde{B}^{\dagger}\rangle_{\text{Clifford}} = \frac{-K(A,C^{\dagger})}{d^2-1}\sim \frac{1}{d^2}.
\end{align}
Recall that the Clifford group is a unitary $2$-design, is not a $3$-design in general (except for a system of qubits) \cite{Webb15}, and is never a $4$-design. Therefore, we see that commutator-type correlators may provide a statistical test of whether an ensemble forms a $k$-design but not a $(k+1)$-design. We explore this idea further in \S\ref{sec:k-point-averages}.

\subsection{Dissipation vs. scrambling}
In this subsection, we will return to the $2$- and $4$-point averages and compare them against expectations from time evolution. Furthermore, we will attempt to provide some physical intuition for the behavior of these averages over different ensembles. This will support our picture of chaotic time evolution leading to increased pseudorandomness.

For strongly coupled thermal systems, it is expected that the connected part of the $2$-point correlation functions decays exponentially within a time scale $t_{d}$ of order the inverse temperature $\beta$
\begin{align}
\langle A(0)B(t) \rangle \rightarrow \langle A\rangle \langle B \rangle + O(e^{-t/t_d}). \label{2-pt-decay}
\end{align}
This time scale is often referred to as a ``dissipation'' or ``thermalization'' time and is related to the time it takes local thermodynamic quantities to reach equilibrium.\footnote{For weakly coupled systems where the quasiparticle picture is valid (e.g., Fermi liquids), the time scale is instead given by $t_{d}\sim \beta^2$.} It is suggestive that the results Eq.~\eqref{2-pt-haar} and Eq.~\eqref{2-pt-decay} are so similar: after a short time $t_d$, the chaotic dynamics give the same results for these $2$-point functions as the Haar random dynamics.

Next, we turn to the variance of the $2$-point correlator $\langle A(0)B(t) \rangle$. For a closed system with a finite number of degrees of freedom, the $2$-point function will be quasi-periodic, with recurrences after a timescale $t_r \sim e^{d}$ that is exponential in the Hilbert space dimension $d=2^n$ and thus doubly exponential in the number of degrees of freedom $n$. As such, the long-time average of $|\langle A(0)B(t) \rangle|^2$ must be nonzero.
This can be estimated by performing a time average and gives a well known result \cite{Dyson:2002pf,Barbon:2003aq,Barbon:2014rma}\footnote{N.B. there is an error in this calculation as presented in \cite{Dyson:2002pf,Barbon:2003aq}, so interested readers should consult \cite{Barbon:2014rma} for the actual details.}
\begin{align}
\lim_{T\rightarrow \infty}\frac{1}{T}\int_{0}^{T} |\langle A(0)B(t)\rangle|^2 \, dt \sim \frac{1}{d^2}.
\end{align}
Comparing against our result for the Haar-averaged dynamics Eq.~\eqref{eq:2-pt-variance}, we see that they coincide.

Next, let's consider $4$-point correlators in strongly coupled theories with a large number of degrees of freedom $N$.\footnote{We thank Douglas Stanford for conversations relating to the dissipative behavior of $4$-point functions.} (For example, this can be thought of as a system of $N$ qubits where all the qubits interact but the interactions are at most $q$-local, with $q\ll N$ and $N\to \infty$.) First, let's consider the case of a \emph{time-ordered} correlator. Similar to the case of the $2$-point functions, two of the three Wick contractions are expected to decay exponentially within a dissipation time $t_{d}$
\begin{equation}
\begin{split}
\langle A(0)C(0)B(t)D(t) \rangle \rightarrow
\langle AC \rangle \langle B D \rangle + O(e^{-t/t_d}),
\end{split}
\end{equation}
which for qubits is analogous to considering a $2$-point function between the composite operators $AC$ and $BD$.\footnote{Note that the exact timescale $t_{d}$ might depend on the operators being correlated and the particular contraction.} Thus, this correlator will equilibrate after a time $t_{d}$, with a late-time value that depends on the expectations $\langle AC \rangle$ and $\langle B D \rangle$.

Now, let's consider the out-of-time-order $4$-point correlator in a large-$N$ strongly interacting theory. For $t \sim t_d$, this will behave similarly to the time-ordered correlator, with two of the three Wick contractions decaying exponentially
\begin{equation}
\begin{split}
\langle A(0)B(t)C(0)D(t) \rangle \rightarrow
\langle AC \rangle \langle B D \rangle + O(e^{-t/t_d}), \qquad t< t_d.
\end{split}
\end{equation}
However, for $t > t_d$ the correlator obtains an exponentially growing connected component
\begin{equation}
\langle A(0)B(t)C(0)D(t) \rangle \rightarrow
\langle AC \rangle \langle B D \rangle - O(e^{\lambda (t-t_*)}), \qquad t_d < t < t_*.
\end{equation}
This growth occurs in the regime $t_d < t < t_*$. The time scale $t_* = \lambda^{-1} \log N$, known as the fast scrambling time, is the time at which the exponentially growing piece of the correlator compensates its $1/N$ suppression and becomes $O(1)$. The coefficient $\lambda$ has the interpretation of a new kind of Lyapunov exponent \cite{Kitaev:2014t1} and is bounded from above by $2\pi / \beta$ \cite{Maldacena:2015waa}. Finally, for $t > t_*$, these OTO $4$-point correlators are expected to decay to a small floor value that is exponentially small in $N$. A natural guess for this floor is
\begin{equation}
\langle A(0)B(t)C(0)D(t) \rangle \rightarrow e^{-O(N)}\langle AC\rangle \langle BD\rangle, \qquad t > t_*,
\end{equation}
which is reproduced from Eq.~\eqref{eq:OTO_formula} with $1$-point functions assumed to be subtracted off.

As we mentioned, the $2$-point function Eq.~\eqref{2-pt-decay} reaches its Haar random value after a short dissipation time $t_d$.
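This late-time matching is easy to illustrate numerically. The sketch below (our own illustration; the GUE draw, the single-site Pauli operators, and the time window are all assumptions) evolves an infinite-temperature $2$-point function with a single random time-independent Hamiltonian and time-averages its square at late times:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 4                      # qubits (assumption)
d = 2 ** n

# Draw a GUE Hamiltonian: H = (G + G^dagger)/2 with complex Gaussian G.
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (G + G.conj().T) / 2
evals, V = np.linalg.eigh(H)

# Single-site Pauli operators embedded in the full Hilbert space.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
A = np.kron(X, np.eye(d // 2))
B = np.kron(np.eye(d // 2), Z)

def two_pt(t):
    """Infinite-temperature <A(0)B(t)> = tr(A e^{iHt} B e^{-iHt}) / d."""
    U = V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T
    return np.trace(A @ U.conj().T @ B @ U) / d

ts = np.linspace(50, 500, 400)     # "late" times, past the initial decay
floor = np.mean([abs(two_pt(t)) ** 2 for t in ts])
print("time-averaged |<A(0)B(t)>|^2 :", floor)
print("1/d^2 for comparison         :", 1 / d**2)
\end{verbatim}
For a typical draw, the late-time plateau should sit at the same $\sim 1/d^2$ scale as the Haar variance Eq.~\eqref{eq:2-pt-variance}, which is precisely the coincidence noted above.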
This is very suggestive of a picture where chaotic dynamics behave as a pseudo-$1$-design after a time $t_d$. Taking this point further, let's consider the $4$-point OTO correlator averaged over Pauli operators, an ensemble that forms a $1$-design but not a $2$-design. Furthermore, we will assume that the operators $A$ and $B$ have zero overlap. Under this assumption, we can show that
\begin{equation}
\begin{split}
\langle A \tilde{B} C \tilde{D} \rangle_{\text{Pauli}} =
\langle AC \rangle \langle B D \rangle. \label{eq:some_result}
\end{split}
\end{equation}
The proof of this is relegated to \S\ref{sec:proof:pauli}. Apparently, Pauli operators capture the behavior of the dynamics around $t\sim t_d$, i.e., from after the dissipative regime until the scrambling regime, but then a $2$-design is required to capture the behavior after $t\sim t_*$, i.e., the post-scrambling regime.\footnote{Note that these observations depend on the ensemble we average over actually being the Pauli operators, and not just any ensemble that forms a $1$-design without forming a $2$-design. Furthermore, we assume that $A,B,C,D$ are simply few-body operators so that the correlator is of local operators. These choices are determined for us by the basis in which the Hamiltonian is $q$-local.}

Thus, we might say that after a time $\sim t_d$, the system becomes a pseudo-$1$-design, and then after $\sim t_*$ the system becomes a pseudo-$2$-design. (See Fig.~\ref{fig_scrambling} for a cartoon of this behavior.) However, it remains an open question whether there are any additional meaningful timescales that can be probed with correlators after $t_*$, though we are hopeful that such timescales might be hiding in higher-point OTO correlators.

\begin{figure}[htb!]
\centering
\includegraphics[width=0.70\linewidth]{fig_plot2}
\caption{Thermalization and scrambling in the decay of a four-point OTO correlator. These correlators typically decay to $\langle AC \rangle \langle B D \rangle$ in a thermal dissipation time $t_d$, and then they decay to a floor value $\sim d^{-2}$ at around the scrambling time $t_*$. These regimes are very well captured by replacing the dynamics with $1$-designs or $2$-designs, respectively.
}
\label{fig_scrambling}
\end{figure}

\section{Discussion}\label{sec:discussion}
In this paper, we have connected the related ideas of chaos and complexity to pseudorandomness and unitary design. A cartoon of these ideas is expressed nicely by Fig.~\ref{fig-universe}. Operators can be thought of as being organized by increasing complexity. Regions defined by circles of larger and larger radius can be thought of as defining designs with increasing $k$.\footnote{While this picture is a cartoon, the manifold of the unitary group $U(n)$ has a dimension exponentially large in $n$. However, following \cite{Dowling2006:gt}, one can find a metric in which the length of a minimal geodesic between points computes operator complexity and in which most sections are expected to be hyperbolic. Taking this further, \cite{Brown:2016wib} considered an interesting analog system on a hyperbolic plane that captures many of the expected properties of complexity.} In the rest of the discussion, we will make some related points, tie up loose ends, and mention future work.

\begin{figure}[htb!]
\centering
\includegraphics[scale=.45]{fig_universe_rev4}
\caption{A cartoon of the unitary group, with operators arranged by design.
We pick the identity operator to be the reference operator of zero complexity and place it at the center. Typical operators have exponential complexity and live near the edge. Operators closer to the center have lower complexity, which makes them both atypical and more physically realizable in a particular computational model.
}
\label{fig-universe}
\end{figure}

\subsection*{Generalized frame potentials and designs}

In realistic physical systems, one usually does not have access to the full Hilbert space. For example, there may be some conserved quantities, such as energy or particle number, or the system may be at some finite temperature $\beta$. In that case, one would be interested in understanding pseudorandomness inside a subspace of the Hilbert space, i.e., restricted to some state $\rho$. In Appendix~\ref{sec:sub-space-randomization}, we generalize the frame potential to an arbitrary state $\rho$, finding that the quantity
\begin{align}
\mathcal{F}^{(k)}_{\mathcal{E}}(\rho)&= \iint dU dV \ [\text{tr} \{ \rho^{1/k} UV^{\dagger}\} \, \text{tr} \{ \rho^{1/k} VU^{\dagger} \}]^{k} \label{eq:discussion:generalized}
\end{align}
has all the useful properties desired of a frame potential. In particular, it is minimized by the Haar ensemble, and it provides a lower bound on ensemble size and complexity.

However, if the state $\rho$ is the thermal density matrix $\propto e^{-\beta H}$ and the ensemble is given by time evolution with an ensemble of Hamiltonians $\mathcal{E}= \{ e^{-iHt}\}$, then we need to take into account the fact that the state itself depends on the ensemble. Instead, we can define a \emph{thermal} frame potential
\begin{align}
\mathcal{W}^{(k)}_{\beta}(t)= \iint dG dH \ \frac{
\big|\text{tr} \, \{e^{-(\beta/2k-it )G } e^{-(\beta/2k+it )H }\}\big|^{2k}
}{\text{tr} \, \{e^{-\beta G} \}\,\text{tr} \, \{e^{-\beta H} \}}.
\end{align}
In this case, even at $t=0$ one can derive an interesting bound on the complexity of the ensemble. We hope to return to this in the future to analyze the \emph{complexity of formation}: the computational complexity of forming the thermal state $\rho_{\beta}$ from a suitable reference state.\footnote{This is also a question of interest in holography, see e.g. \cite{CofFormation}.}

Finally, it would be similarly interesting to consider a different generalization of unitary designs where, under some physical constraints, we can only access some limited degrees of freedom in the system. In this sense, one could think of the unitary ensemble (as opposed to the state) as being generated by tracing over these additional degrees of freedom. These ``subsystem designs'' would then be ``purified'' by integrating back in the original degrees of freedom.\footnote{We thank Patrick Hayden and Michael Walter for initial discussions about this idea.} This interesting direction is a potential subject of future work.

\subsection*{More chaos in quantum channels}

In Appendix~\ref{sec:more-chaos}, we revisit some ideas from our previous work \cite{Hosur:2015ylk}, though these ideas are also relevant to the current work. In particular, in \S\ref{sec:more:HP} we reconsider the Hayden-Preskill notion of black hole scrambling \cite{Hayden07}. In this thought experiment, Alice throws a secret quantum state into a black hole.
Assuming Bob knows the initial state and dynamics of the black hole, we show how the question of whether Bob can reconstruct Alice's secret is related to the decay of an average of a certain set of OTO four-point correlators.

In \S\ref{sec:more:op}, we provide an operational interpretation of taking averages over OTO correlators. We show that this is related to a quantum game of ``catch'' where Alice may ``spit on'' or otherwise perturb the ball before throwing it to Bob. The average over four-point OTO correlators gives the probability that Alice did not modify the ball. We also show that an average over higher-point OTO correlators can be interpreted as an ``iterated'' game of catch (i.e., what people normally just call ``catch'') where both Alice and Bob have the opportunity to modify the ball each turn. In this case, the OTO correlator average is related to the joint probability that neither Alice nor Bob perturbs the ball.

Finally, in \S\ref{sec:more:renyi} we show that an average over a particular ordering of $2k$-point OTO correlators can be related to the $k$th R\'enyi entropy of the operator $U$ interpreted as a state. We find that
\begin{equation}
-\log( \text{a certain average of $2k$-point OTO correlators}) \propto S_{ \text{subsystem}}^{(k)}(U),\label{eq:summary-of-renyi-formula}
\end{equation}
where $S_{\text{subsystem} }^{(k)}$ is the R\'enyi $k$-entropy of a particular subsystem of the density matrix $\rho = \ket{U}\bra{U}$.

\subsection*{Volume of unitary operators}

The argument in \S\ref{sec:continuous} led to a bound on the ratio $N_s / \overline{N_\epsilon}$, which can be interpreted as the volume of $\mathcal{E}$ in units of $\epsilon$-balls. An interesting application of this bound might be to think about the volume of unitary operators in $U(d)$ that can be probed in a finite time scale $T$, i.e., the volume of operators with depth $\mathcal{D} \sim T$. (See \S\ref{sec:complexity:depth} for further discussion of a lower bound on circuit depth in terms of the frame potential.)

In fact, in certain situations (such as the Brownian circuit introduced in \cite{Lashkari13} or the random circuit model), it is not difficult to show, by computing the $k=1$ frame potential, that the volume of unitary operators with depth $T$ grows at least as $\sim \exp(\text{const} \cdot n \, T)$ for small $T$ and some constant independent of $n$. This implies that the space of unitary operators, with the metric being quantum gate complexity, has hyperbolic structure with constant curvature, as discussed in e.g. \cite{Dowling2006:gt,Susskind:2014jwa,Brown:2016wib} (see also \cite{Chemissany:2016qqq}). On the other hand, one can upper bound the volume of unitary operators with circuit depth $T$ by counting the ways the circuit can grow per layer: $V(T) \sim \left[\binom{n}{2}\binom{n-2}{2}\binom{n-4}{2} \cdots \binom{2}{2}\right]^{T} \approx \exp(n \log n \cdot T)$. Thus, for small $T$ and large $n$, the lower bound seems to be reasonably tight.

Once a lower bound on the volume of unitary operators in an ensemble is obtained in units of $\epsilon$-balls, we can also obtain a lower bound (of the same order) on the complexity of a typical operator in the ensemble. This seems possible by using the formal arguments given in \cite{Knill95}, even when the elementary gate set is not discrete, e.g.
all the two-qubit gates.

Finally, it's a curious fact that for systems with time-dependent Hamiltonian ensembles (such as the random circuit models or the Brownian circuit of \cite{Lashkari13}) we get an initial linear growth of the volume with $T$. As argued in \S\ref{sec:complexity:early} (and confirmed numerically), for time-independent Hamiltonian evolution---e.g., in SYK or in the Gaussian unitary ensemble (GUE)---we instead get a lower bound $V(T) \sim \exp(\text{const} \cdot n\, T^2 )$, which persists for a short time $T \sim 1/\sqrt{n}$. It would be very interesting to understand the difference in this scaling.\footnote{One might worry that this bound saturates at a value smaller than unity. However, we expect that there may be a continuous definition of complexity sensible for small complexities, see e.g. \cite{Brown:2017jil}.}

\subsection*{Tightness of the complexity bound}

While the frame potential provides a rigorous lower bound on the complexity of generating an ensemble of unitary operators, there may be a cost: the bound may not be very tight when applied to time evolution by an ensemble of Hamiltonians.\footnote{We have learned this from some numerical investigations of the frame potential.}

Let us try to understand this better. To be concrete, consider the $k=2$ frame potential for a strongly coupled spin system that scrambles in a time $t_* \sim \log n$. In such a system, for local operators $W, V$ of unit weight, OTO four-point correlators $\langle W(t)^{\dagger}V^{\dagger}W(t)V\rangle$ will begin to decay after $t\sim O(\log n)$. Since the $k=2$ frame potential is the average of four-point OTO correlators, one might expect that the frame potential will also start to decay at $t\sim O(\log n)$.

However, this is not quite right. We expect the decay time for more general correlators of larger operators to be reduced to $\bar{t}_* \sim t_* - O\big(\log(\text{size}~W)\big) - O\big(\log(\text{size}~V)\big)$, where $t_*\sim O(\log n)$ is the scrambling time when $V$ and $W$ are low-weight operators. If we randomly select Pauli operators $W$ and $V$, they will typically be nonlocal operators of weight $O(n)$, and therefore the OTO decay time $\bar{t}_*$ will be reduced to $O(1)$ for $V$ and $W$ of typical sizes.

In fact, the above estimate suggests that most of the correlators determining the complexity bound should begin to decay immediately. As we can see in Eq.~\eqref{eq:lower-bound-oto}, each correlator itself only makes a logarithmic contribution to the complexity, so we shouldn't expect the remaining slowly decaying local correlators to be dominant. (To be sure, a further investigation of this point is required.)

One possible way to fix this problem would be to generalize the frame potential by using a $p$-norm with $p \neq 2$ so that it is more sensitive to the slower decaying local correlators. We leave the study of such a generalization to the future.

\subsection*{Complexity and holography}
Finally, we will return to the question of complexity and holography discussed in the introduction. In the context of holography, computational complexity was ``introduced'' \cite{Harlow:2013tf} as a possible resolution to the firewall paradox of \cite{Almheiri13,Almheiri13b}.
A direct connection between complexity and black hole geometry was first proposed by Susskind \cite{Susskind:2014rva,Susskind:2014ira}, which culminated in proposals that the interior of the black hole geometry is holographically dual to the spatial volume \cite{Stanford:2014jda} or to the spacetime action \cite{Brown:2015bva,Brown:2015lvg}. These proposals are motivated by the fact that the black hole interior continues to grow as the state evolves, long past the time when entropic quantities equilibrate \cite{Hartman13}. While there is nice qualitative evidence for both of these proposals \cite{Susskind:2014jwa,Roberts:2014isa,Brown:2016wib}, what is missing is a direct understanding of computational complexity in systems that evolve continuously in time with a time-independent Hamiltonian.

A hint can be obtained by considering some of the motivations for these holographic complexity proposals. In particular, building on the work of \cite{Hartman13} and previous work of Swingle \cite{Swingle12,Swingle12b}, Maldacena suggested that the black hole interior could be found in the boundary theory by a tensor network construction of the state \cite{Maldacena:2013t1}. A tensor network toy model of the evolution of the black hole interior was investigated in \cite{Hosur:2015ylk}. In this toy model, the interior of the black hole was modeled as a flat tiling of perfect tensors, see Fig.~\ref{tn-erb}. These tensors were elements of the Clifford group and acted as two-qubit unitary operators that highly entangled neighboring qubits at each time step. From the perspective of the boundary theory, this is a model for Hamiltonian time evolution.\footnote{In addition to providing a model for the black hole interior, Swingle's identification of the ground state of AdS with a tensor network \cite{Swingle12,Swingle12b} has led to numerous other quantum information insights in holography (e.g., quantum error correction and its relationship to bulk operator reconstruction \cite{Almheiri14}) as well as additional toy models that demonstrate those features (e.g., \cite{Pastawski15b,Hayden:2016cfa}).}

\begin{figure}[htb!]
\centering
\includegraphics[width=1\linewidth]{fig_circuit}
\caption{A $6$-qubit tensor network model for the geometry of the interior of a black hole. Via holography, the growth of the interior is expected to correspond to chaotic time evolution of a strongly coupled quantum theory. Here, each node corresponds to a perfect tensor and the numbers label the qubits.
}
\label{tn-erb}
\end{figure}

This toy model captures some important features of the complexity growth of the black hole state. The number of tensors in the network grows linearly in time, by construction. Operators grow ballistically, exhibiting the butterfly effect, and the network scrambles in linear time. Thus, this network captures the aspects of black hole chaos related to local scrambling and ballistic operator growth discussed in \cite{Roberts:2014isa}, as well as aspects of complexity growth discussed in \cite{Hartman13,Susskind:2014ira,Stanford:2014jda,Brown:2015bva}.

However, since in this model the perfect tensor is a repeated element of the Clifford group, the complexity can never actually grow to be very big. In fact, the quantum recurrence time of the model was investigated in \cite{Hosur:2015ylk} and was found to be exponential in the entropy, $\sim e^{n}$, rather than doubly exponential, $\sim e^{e^n}$, as expected in a fully chaotic model.
This is related to our oft-stated fact that the Clifford group generally does not form a higher-than-$2$-design. In fact, this model can actually be mapped to a classical problem, and by the Gottesman-Knill theorem its complexity can be no greater than $O(n^2)$ gates \cite{Nielsen_Chuang}.\footnote{Since the Clifford group is a group, any particular circuit in our model is an element of the group. Thus, at most we should be able to reach it by a polynomial number of applications of $2$-qubit gates.}

These observations were the inspiration for the current work, since in this toy model $4$-point OTO correlators behave chaotically, but higher-point OTO correlators do not. Nevertheless, this model can be ``improved'' by using random $2$-qubit tensors rather than a repeated perfect tensor.\footnote{Note: the case of averaging over a single random tensor is like time evolution with a time-independent Hamiltonian. On the other hand, the case of averaging separately over all tensors has a continuum limit of time evolution with a time-dependent Hamiltonian with couplings that evolve at each time step. This model is known as the Brownian circuit \cite{Lashkari13}.} In \cite{Brandao12}, it was shown that this local random quantum circuit approaches a unitary $k$-design in a circuit depth that scales at most as $O(k^{10})$. Our complexity lower bound for a $k$-design, Eq.~\eqref{eq:complexity-k-design}, suggests that the time to become a $k$-design is lower bounded by $k$, and we suspect that this can be saturated.\footnote{It is believed that this is actually saturated by the local random circuit of \cite{Brandao12}. For example, there is some numerical evidence for this claim in \cite{mozrzymas2013local}.}

It is in this sense that we speculate that the complexity growth of the chaotic black hole is pseudorandom. That is, we suspect that as the complexity of the black hole state increases linearly with time evolution $t$, the dynamics evolve to become pseudo-$k$-designs, with the value of $k$ roughly scaling with $t$
\begin{equation}
\mathcal{C}(e^{-iHt}) \sim t \sim k,
\end{equation}
and that this may be quantified by either representative $2k$-point OTO correlators or by an appropriate generalization of unitary design. With this in mind, it would be interesting to see whether one could use the tools of unitary design to prove a version of the conjectures of \cite{Brown:2015bva,Brown:2015lvg} suggesting that complexity is (greater than or) equal to action.

\section*{Acknowledgments}

We are grateful to Fernando Brandao, Adam Brown, Jordan Cotler, Guy Gur-Ari, Patrick Hayden, Alexei Kitaev, M\'ark Mezei, Xiao-Liang Qi, Steve Shenker, Lenny Susskind, Douglas Stanford, and Michael Walter for discussions.

DR and BY both acknowledge support from the Simons Foundation through the ``It from Qubit'' collaboration. DR is supported by the Fannie and John Hertz Foundation, the National Science Foundation grant number PHY-1606531, and the Paul Dirac Fund, and is also very thankful for the hospitality of the Stanford Institute for Theoretical Physics and the Perimeter Institute of Theoretical Physics during the completion of this work. Research at the Center for Theoretical Physics at MIT is supported by the U.S.
Department of Energy under cooperative research agreement Contract Number DE-SC0012567. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation. This paper was brought to you unitarily by the Haar measure.\footnote{Finally, we would like to thank one of our anonymous JHEP referees for pointing out the prior work \cite{cbd:cd} and suggesting an acknowledgment. While we were unaware of \cite{cbd:cd} at the time of submission, we are nevertheless quite happy to include a citation in our revision.}

\section{Introduction}
Creating realistic music pieces automatically has always been regarded as one of the frontier subjects in the field of computational creativity. With recent advances in deep learning, deep generative models and their variants have been widely used in automatic music generation \cite{HerremansCC17}\cite{deepgenerationsurvey}. However, most deep composition methods focus on Western music rather than Chinese music. How to employ deep learning to model the structure and style of Chinese music is a novel and challenging problem.

Chinese folk songs, an important part of traditional Chinese music, are improvised by local people and passed on orally from one generation to the next. Folk tunes from the same region exhibit a similar style, while tunes from different areas present different regional styles \cite{Miao1985}\cite{LiLDZY19}. For example, the songs named \emph{Mo Li Hua} have different versions in many areas of China and show various music styles, though they share the same name and similar lyrics\footnote{The Chorus of \emph{Mo Li Hua Diao} from various regions in China by the Central National Orchestra: \url{http://ncpa-classic.cntv.cn/2017/05/11/VIDEEMEg82W5MuXUMM1jpEuL170511.shtml}.}. The regional characteristics of Chinese folk songs are not well explored and should be utilized to guide automatic composition of Chinese folk tunes. Furthermore, folk song composition based on regional style provides abundant potential material for Chinese national music creation, and promotes the spread and development of Chinese national music and even Chinese culture in the world.

There are many studies on music style composition for Western music \cite{Dai2018}. However, few studies employ deep generative models for Chinese music composition. There is a clear difference between Chinese and Western music: unlike Western music, which focuses on the vertical structure of music, Chinese music focuses on the horizontal structure, i.e., the development of the melody, and the regional style of Chinese folk songs is mainly reflected in their rhythm and pitch interval patterns \cite{Guan2014}.

In this paper, we propose a deep music generation model named MG-VAE to capture the regional style of Chinese folk songs (\emph{Min Ge}) and create novel tunes with controlled regional style. Firstly, a MIDI dataset with more than 2000 Chinese folk songs covering six regions is collected. After that, we encode the input music representations into a latent space and decode the latent space to reconstruct the music notes. In detail, the latent space is divided into two parts representing the pitch features and rhythm features, namely, the \emph{pitch variable} and the \emph{rhythm variable}.
We then further divide the pitch latent space into a \emph{style variable} part and a \emph{content variable} part, representing the style and style-less features of pitch, and the same operation is applied to the rhythm variable. In order to capture the regional style of Chinese folk songs precisely and generate regional-style songs in a controllable way, we propose a method based on adversarial training to disentangle the four latent variables, where temporal supervision is employed in the separation of the pitch and rhythm variables, and label supervision is used for the disentanglement of the style and content variables. The experimental results and the visualization of the latent spaces show that our model is effective in disentangling the latent variables and is able to generate folk songs with a specific regional style.

The rest of the paper is structured as follows: after introducing related work on deep music generation in Section 2, we present our music representations and model in Section 3. Section 4 describes the experimental results and analysis of our methods. Conclusions and future work are presented in Section 5.

\section{Related Work}
RNNs (Recurrent Neural Networks) were among the earliest models introduced into the domain of deep music generation. Researchers employ RNNs to model musical structure and generate different formats of music, including monophonic folk melodies \cite{SturmSBK16}, rhythm composition \cite{MakrisKKK19}, expressive music performance \cite{Sageev19}, and multi-part music harmonization \cite{YanLVD18}. Other recent studies combine convolutional structures or explore VAEs, GANs (Generative Adversarial Networks) and Transformers for music generation. MidiNet \cite{YangCY17} and MuseGAN \cite{DongHYY18} combine the CNN (Convolutional Neural Network) and GAN architectures to generate music with multiple MIDI tracks. MusicVAE \cite{RobertsERHE18} introduces a hierarchical decoder into the general VAE model to generate music note sequences with long-term structure. Due to the impressive results of the Transformer in neural machine translation, Huang et al. modify this sequence model's relative attention mechanism and generate minutes-long music clips with high long-range structural coherence \cite{MusicTransformer}. In addition to the study of music structure, researchers also employ deep generative models to model music styles, such as producing jazz melodies through two LSTM networks \cite{JohnsonKW17} or harmonizing a user-made melody in Bach's style \cite{Coconet2019}. Most of these models are trained on a dataset of a single specific style, so the generated music can only mimic the style embodied in the training data.

Moreover, little attention has been paid to Chinese music generation with deep learning techniques, especially to modeling the style of Chinese music, though some researchers utilize a Seq2Seq model to create multi-track Chinese popular songs from scratch \cite{ZhuLYQLZZWXC18} or generate melodies of Chinese popular songs from given lyrics \cite{Bao2018}. Existing generation algorithms for traditional Chinese songs are mostly based on non-deep models such as Markov models \cite{HuangLNC16} and genetic algorithms \cite{ZhengWLSGGW17}. These studies cannot break through the bottleneck in melody creation and style imitation.

Some recent work in the domain of music style transfer generates music with mixed styles or recombines music content and style. For example, Mao et al.
propose an end-to-end generative model to produce music with a mixture of different classical composer styles \cite{MaoSC18}. Lu et al. study deep style transfer between Bach chorales and jazz \cite{LuS18a}. Nakamura et al. perform melody style conversion among different music genres \cite{NakamuraSNY19}. The above studies are based on music data from different genres or composing periods. However, the regional style generation of Chinese folk songs studied here models style within the same genre, which is more challenging.

\section{Approach}

\subsection{Music Representation}
A monophonic folk song $M$ can be represented as a sequence of note tokens, each a combination of its pitch, interval and rhythm. Pitch and rhythm are essential information for music, and the interval is an important indicator for distinguishing regional music features, especially for Han Chinese folk songs \cite{Han1989}. The detailed processing is described below and shown in Fig.~\ref{fig:1}.

\begin{figure}[htb]
	\centering
	\includegraphics[width=3.6in]{data_representation.png}
	\caption{Chinese folk song representation including the pitch sequence, interval sequence and rhythm sequence.}
	\label{fig:1}
\end{figure}

\begin{itemize}
\item \textbf{Pitch Sequence} $P$: Sequence of pitch tokens consisting of the pitch types present in the melody sequence. The rest note is assigned a special token.

\item \textbf{Interval Sequence} $I$: Sequence of interval tokens derived from $P$. Each interval token is represented as the deviation between the next pitch and the current pitch in steps of semitones.

\item \textbf{Rhythm Sequence} $R$: Sequence of duration tokens comprising the duration types present in the melody sequence.
\end{itemize}

\subsection{Model}

As mentioned in Section 1, the regional characteristics of Chinese folk songs are mainly reflected in their pitch patterns and rhythm patterns. In some areas, the regional characteristics of folk songs depend more on pitch features, while in other areas the rhythm patterns are more distinctive. For example, in terms of pitch, folk songs in northern Shaanxi tend to use the perfect fourth, Hunan folk songs often use the combination of major third and minor third \cite{Miao1985}, and Uighur folk songs employ non-pentatonic scales. In terms of rhythm, Korean folk songs have their special rhythm system named \emph{Jangdan}, while Mongolian \emph{Long Song}s generally prefer notes of long duration \cite{Du2014}.

Inspired by the above observations, it is necessary to further refine the style of folk songs both in pitch and in rhythm. Therefore, we propose a VAE-based model to separate the pitch space and rhythm space, and to further disentangle the music style and content spaces from the pitch and rhythm spaces, respectively.

\subsubsection{VAE and its Latent Space Division}

The VAE introduces a continuous latent variable $z$ with a Gaussian prior $p_{\theta}(z)$, and then generates the sequence $x$ from the distribution $p_{\theta}(x|z)$ \cite{KingmaW13}. Concisely, a VAE consists of an encoder $q_{\phi}(z|x)$, a decoder $p_{\theta}(x|z)$ and a latent variable $z$.
The loss function of the VAE is
\begin{equation}
J(\phi, \theta) = -\mathbb{E}_{q_{\phi}(z|x)}[{\rm log}\,p_{\theta}(x|z)] + \beta KL(q_{\phi}(z|x)\|p_{\theta}(z))
\label{eq:1}
\end{equation}
where the first term denotes the reconstruction loss, and the second term is the Kullback-Leibler (KL) divergence, which is added to regularize the latent space. The weight $\beta$ is a hyperparameter that balances the two loss terms. By setting $\beta<1$, we can improve the generation quality of the model \cite{Higgins2017}. $p_{\theta}(z)$ is the prior and generally obeys the standard normal distribution, i.e., $p_{\theta}(z) = \mathcal{N}(0,I)$. The posterior approximation $q_{\phi}(z|x)$ is parameterized by the encoder and is also assumed to be Gaussian, and the reparameterization trick is used to sample from its mean and variance.

\begin{figure}[htb]
	\centering
	\includegraphics[width=4.4in]{model1.png}
	\caption{Architecture of our model, which consists of the melody encoder $E$, pitch decoder $D_P$, rhythm decoder $D_R$ and melody decoder $D_M$.}
	\label{fig:2}
\end{figure}

With labeled data, we can disentangle the latent space of the VAE in such a way that different parts of the latent space correspond to different external attributes, which enables a more controllable generation process. In our case, we assume that the latent space can first be divided into two independent parts, i.e., the \emph{pitch variable} and the \emph{rhythm variable}. The pitch variable learns the pitch features of Chinese folk songs, while the rhythm variable captures the rhythm patterns. Further, we assume that both the pitch variable and the rhythm variable consist of two independent parts, referring to the music \emph{style variable} and the music \emph{content variable}, respectively.

Specifically, given a melody sequence $M=\{m_1,m_2,\cdots,m_n\}$ as the input sequence with $n$ tokens (notes), where $m_k$ denotes the feature combination of the corresponding pitch token $p_k$, interval token $i_k$ and rhythm token $r_k$, we first encode $M$ and obtain four latent variables from linear transformations of the encoder's output. The four latent variables are the pitch style variable $Z_{P_s}$, pitch content variable $Z_{P_c}$, rhythm style variable $Z_{R_s}$ and rhythm content variable $Z_{R_c}$, respectively. Then, we concatenate $Z_{P_s}$ and $Z_{P_c}$ into the total pitch variable $Z_{P}$, which is used to predict the pitch sequence $\hat{P}$. The same operation is applied to the rhythm variable to predict $\hat{R}$. Finally, all latent variables are concatenated to predict the total melody sequence $\hat{M}$. The architecture of our model is shown in Fig.~\ref{fig:2}.

Based on the above assumptions and operations, it is easy to extend the basic loss function:
\begin{equation}
J_{vae} = H(\hat{P},P) + H(\hat{R},R) + BCE(\hat{M},M) + \beta KL_{total}
\label{eq:2}
\end{equation}
where $H(\cdot,\cdot)$ and $BCE(\cdot,\cdot)$ denote the cross entropy and binary cross entropy between predicted and target values, respectively, and $KL_{total}$ denotes the summed KL loss of the four latent variables.

\subsubsection{Adversarial Training for Latent Space Disentanglement}

Here, we propose an adversarial-training-based method for the disentanglement of pitch and rhythm, and of music style and content.
The detailed processing is shown in Fig.~\ref{fig:3}.

\begin{figure}[htb]
	\centering
	\includegraphics[width=4.5in]{model2.png}
	\caption{Detailed processing of the latent space disentanglement. The dashed lines indicate the adversarial training parts.}
	\label{fig:3}
\end{figure}

As shown in Fig.~\ref{fig:2}, we use two parallel decoders to reconstruct the pitch sequence and rhythm sequence, respectively. Ideally, we expect the pitch variable $Z_P$ and rhythm variable $Z_R$ to be independent of each other. In practice, however, pitch features may leak into the rhythm variable, and vice versa, since the two variables are sampled from the same encoder output.

In order to separate the pitch and rhythm variables explicitly, temporal supervision is employed, similar to the work on disentangled representations of pitch and timbre \cite{IJCAI2019}. Specifically, we deliberately feed each latent variable to the wrong decoder and force that decoder to predict nothing, i.e., an all-zero sequence, resulting in the following two loss terms based on cross entropy:
\begin{equation}
J_{adv,P} = -\Sigma[\mathbf{0}\cdot {\rm log}\hat{P}_{adv} + (1-\mathbf{0})\cdot {\rm log}(1-\hat{P}_{adv})]
\label{eq:3}
\end{equation}
\begin{equation}
J_{adv,R} = -\Sigma[\mathbf{0}\cdot {\rm log}\hat{R}_{adv} + (1-\mathbf{0})\cdot {\rm log}(1-\hat{R}_{adv})]
\label{eq:4}
\end{equation}
where $\mathbf{0}$ denotes the all-zero sequence and `$\cdot$' denotes the element-wise product.

For the disentanglement of music style and content, we first obtain the total music style variable $Z_s$ and content variable $Z_c$:
\begin{equation}
Z_s = Z_{P_s} \oplus Z_{R_s}, \quad Z_c = Z_{P_c} \oplus Z_{R_c}
\label{eq:5}
\end{equation}
where $\oplus$ denotes the concatenation operation.

Then two classifiers are defined to force the separation of style and content in the latent space using the regional information. The style classifier ensures that the style variable is discriminative for the regional label, while the adversary classifier forces the content variable to be non-distinctive for the regional label. The style classifier is trained with the cross entropy defined by
\begin{equation}
J_{dis, Z_s} = -\Sigma\, y\, {\rm log}\,p(y|Z_s)
\label{eq:6}
\end{equation}
where $y$ denotes the ground truth and $p(y|Z_s)$ is the predicted probability distribution from the style classifier.

The adversary classifier is trained by maximizing the empirical entropy of its predictions \cite{FuTPZY18}\cite{Vineet2018}. The training proceeds in two steps. Firstly, the parameters of the adversary classifier are trained independently, i.e., its gradients do not propagate back to the VAE. Secondly, we compute the empirical entropy based on the output of the adversary classifier, as defined by
\begin{equation}
J_{adv, Z_c} = -\Sigma\, p(y|Z_c)\, {\rm log}\,p(y|Z_c)
\label{eq:7}
\end{equation}
where $p(y|Z_c)$ is the predicted probability distribution from the adversary classifier.
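To make the interplay of these loss terms concrete, the following PyTorch-style sketch (our own illustration; the tensor shapes and function signature are assumptions consistent with the description above, not the authors' code) computes Eqs.~(\ref{eq:3}), (\ref{eq:4}), (\ref{eq:6}) and (\ref{eq:7}) for one mini-batch:
\begin{verbatim}
import torch
import torch.nn.functional as F

def adversarial_losses(p_adv_logits, r_adv_logits,
                       style_logits, content_logits, region_labels):
    # Eqs. (3)-(4): push the mismatched decoder toward an all-zero sequence.
    zeros_p = torch.zeros_like(p_adv_logits)
    zeros_r = torch.zeros_like(r_adv_logits)
    j_adv_p = F.binary_cross_entropy_with_logits(p_adv_logits, zeros_p)
    j_adv_r = F.binary_cross_entropy_with_logits(r_adv_logits, zeros_r)

    # Eq. (6): the style classifier should predict the regional label.
    j_dis_zs = F.cross_entropy(style_logits, region_labels)

    # Eq. (7): empirical entropy of the adversary classifier's prediction;
    # the VAE maximizes it, hence the minus sign in Eq. (8).
    p_y = F.softmax(content_logits, dim=-1)
    j_adv_zc = -(p_y * torch.log(p_y + 1e-8)).sum(dim=-1).mean()

    return j_adv_p, j_adv_r, j_dis_zs, j_adv_zc
\end{verbatim}
In the two-step schedule, the adversary classifier would first be updated on its own cross-entropy loss with gradients detached from the encoder, and only then does $J_{adv, Z_c}$ enter the total objective with a negative sign.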
In summary, the overall training objective of our model is the minimization of the loss function defined by
\begin{equation}
J_{total} = J_{vae} + J_{adv,P} + J_{adv, R} + J_{dis, Z_s} - J_{adv, Z_c}
\label{eq:8}
\end{equation}

\section{Experimental Results and Analysis}
\subsection{Datasets and Preprocessing}

The lack of large-scale Chinese folk song datasets has made it difficult to apply deep learning methods to the automatic generation and analysis of Chinese music. Therefore, we digitize more than 2000 Chinese folk songs in MIDI format from the records of \emph{Chinese Folk Music Integration}\footnote{\emph{Chinese Folk Music Integration} is one of the major national cultural projects led by the former Ministry of Culture, the National Ethnic Affairs Commission and the Chinese Musicians Association from 1984 to 2001. This set of books contains more than 40000 selected folk songs of different nationalities. The project website is \url{http://www.cefla.org/project/book}.}. These songs comprise Han folk songs from the Wu dialect district, the Xiang dialect district\footnote{According to the analysis of Han Chinese folk songs \cite{Han1989}\cite{Du1993}, the folk song style of each region is closely related to the local dialects. Therefore, we classify Han folk songs based on dialect divisions. The Wu dialect district here mainly includes Southern Jiangsu, Northern Zhejiang and Shanghai. The Xiang dialect district here mainly includes Yiyang, Changsha, Hengyang, Loudi and Shaoyang in Hunan province.} and northern Shaanxi, as well as folk songs of three ethnic minorities: the Uygur in Xinjiang, the Mongolian in Inner Mongolia and the Korean in northeast China.

All melodies in the dataset are transposed to the C key. We use the Pretty-midi Python toolkit \cite{raffel2014intuitive} to process each MIDI file, and count the numbers of pitch, interval and rhythm token types as the feature dimensions of the corresponding sequences, which are 40, 46 and 58, respectively. Then the pitch sequence, interval sequence and rhythm sequence are extracted from the raw note sequences with an overlapping window of length 32 tokens and a hop size of 1. Finally, we obtain 65508 ternary sequences in total. The regional labels of token sequences drawn from the same song are consistent.

\subsection{Experimental Setup}

\begin{figure}[htb]
	\centering
	\includegraphics[width=2.8in]{encoder.png}
	\caption{Encoder with residual connections.}
	\label{fig:4}
\end{figure}

In order to extract melody features into the latent space effectively, we employ a bidirectional GRU with residual connections \cite{HeZRS16} as the encoder, illustrated in Fig.~\ref{fig:4}. The decoder is a plain two-layer GRU. All recurrent hidden sizes in this paper are 128. Both the style classifier and the adversary classifier are one-layer linear classifiers with a Softmax function. The sizes of the pitch style variable and rhythm style variable are set to 32, while the sizes of the pitch content variable and rhythm content variable are 96. During training, the KL term coefficient $\beta$ increases linearly from 0.0 to 0.15 to alleviate the impact of posterior collapse.

The Adam optimizer is employed with an initial learning rate of 0.01 for VAE training, and a vanilla SGD optimizer with an initial learning rate of 0.005 for the classifiers; a minimal sketch of this configuration is given below.
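As a concrete illustration of the optimization setup described above (our own sketch; the placeholder modules and the per-epoch annealing granularity are assumptions, and the input size 144 is simply the sum of the 40, 46 and 58 token-type dimensions), the two optimizers and the linear KL annealing schedule could be wired up as follows:
\begin{verbatim}
import torch

# Hypothetical modules standing in for the VAE and the two classifiers.
vae = torch.nn.GRU(input_size=144, hidden_size=128, num_layers=2)
style_clf = torch.nn.Linear(64, 6)       # 32 + 32 style dims -> 6 regions
adversary_clf = torch.nn.Linear(192, 6)  # 96 + 96 content dims -> 6 regions

opt_vae = torch.optim.Adam(vae.parameters(), lr=0.01)
opt_clf = torch.optim.SGD(
    list(style_clf.parameters()) + list(adversary_clf.parameters()),
    lr=0.005)

epochs = 30
for epoch in range(epochs):
    # Linear KL annealing: beta grows from 0.0 to 0.15 over training.
    beta = 0.15 * epoch / (epochs - 1)
    # Per batch: compute J_vae (with this beta) and the adversarial terms,
    # step opt_clf on the adversary alone, then opt_vae on J_total.
\end{verbatim}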
All test models are trained for 30 epochs and the mini-batch size is set to 50.

\subsection{Evaluation and Results Analysis}

To evaluate the generated music, we employ the following metrics from objective and subjective perspectives.

\begin{itemize}
	\item \textbf{Reconstruction Accuracy}: We calculate the accuracy between the target note sequences and the reconstructed note sequences on our test set to evaluate the music generation quality.
	\item \textbf{Style Recognition Accuracy}: We train a separate style evaluation classifier using the architecture in Fig.~\ref{fig:4} to predict the regional style of the tunes that are generated from different latent variables. The classifier achieves a reasonable regional accuracy on an independent test set, up to 82.71\%.
	\item \textbf{Human Evaluation}: As humans should be the ultimate judges of creations, human evaluations are conducted to overcome discrepancies between objective metrics and user studies. We invite three experts who are well educated and have expertise in Chinese music. Each expert is asked to listen on-site to five randomly selected folk songs for each region, and to rate each song on a 5-point scale from 1 (very low) to 5 (very high) according to the following two criteria: a) \emph{Musicality}: Does the song have a clear musical pattern or structure? b) \emph{Style Significance}: Does the song's style match the given regional label?
\end{itemize}

\begin{table}[htb]
	\centering
	\caption{Results of automatic evaluations.}
	\begin{tabular}{ccc}
		\hline
		Objectives & Reconstruction Accuracy & Style Recognition Accuracy \\
		\hline
		$J_{vae}$ & 0.7684 & 0.1726/0.1772/0.1814 \\
		\hline
		$J_{vae}, J_{adv,P,R}$ & 0.7926 & 0.1835/0.1867/0.1901 \\
		\hline
		$J_{vae}, J_{adv, P,R}, J_{adv, Z_c}$ & 0.7746 & 0.4797/0.4315/0.5107 \\
		\hline
		$J_{vae}, J_{adv, P,R}, J_{dis, Z_s}$ & \textbf{0.8079} & 0.5774/0.5483/0.6025 \\
		\hline
		$J_{total}$ & 0.7937 &\textbf{0.6271}/\textbf{0.5648}/\textbf{0.6410} \\
		\hline
	\end{tabular}
	\label{tab:1}
\end{table}

Table~\ref{tab:1} shows all evaluation results of our models. The three values in the third column denote the accuracies derived from the following three kinds of latent variables: a) the concatenation of the pitch style variable $Z_{P_s}$ and a random variable sampled from the standard normal distribution; b) the concatenation of the rhythm style variable $Z_{R_s}$ and the random variable; c) the concatenation of the total style variable $Z_s$ and the random variable. $J_{adv,P,R}$ denotes the sum of $J_{adv,P}$ and $J_{adv, R}$.

The model with $J_{total}$ achieves the best results in style recognition accuracy and a sub-optimal result in reconstruction accuracy. The model without any constraints performs poorly on both objective metrics. The addition of $J_{adv,P,R}$ improves the reconstruction accuracy but fails to bring a meaningful improvement to style classification. With the addition of either $J_{adv, Z_c}$ or $J_{dis, Z_s}$, all three recognition accuracies improve considerably, which indicates that the latent spaces are disentangled into style and content subspaces as expected.
Moreover, employing only the pitch style or rhythm style variable for style recognition also obtains fair results, demonstrating that the disentanglement of pitch and rhythm is effective.

\begin{figure}[htb]
	\centering
	\includegraphics[width=4.6in]{human_new.png}
	\caption{Results of human evaluations including musicality and style significance. The heights of the bars represent the means of the ratings and the error bars represent the standard deviations.}
	\label{fig:5_add}
\end{figure}

The results of the human evaluations are shown in Fig.~\ref{fig:5_add}. In terms of musicality, all test models have similar performance, which demonstrates that the additional loss terms have no negative impact on the generation quality of the original VAE. Moreover, the model with the total objective $J_{total}$ performs significantly better than the other models in terms of style significance (two-tailed $t$-test, $p<0.05$), which is consistent with the results in Table~\ref{tab:1}.

\begin{figure*}[!t]
	\centering
	\subfigure[Pitch style latent space ]{\includegraphics[width=2.2in]{ps_space.png} \label{fig:5a}}~~
	\subfigure[Rhythm style latent space ]{\includegraphics[width=2.2in]{ds_space.png} \label{fig:5b}}
	\subfigure[Total style latent space]{\includegraphics[width=2.2in]{style_space.png} \label{fig:5c}}~~
	\subfigure[Total content latent space ]{\includegraphics[width=2.2in]{content_space.png} \label{fig:5d}}
	\caption{t-SNE visualization of the model with $J_{total}$.}
	\label{fig:5}
\end{figure*}

Fig.~\ref{fig:5} shows the t-SNE visualization \cite{maaten2008visualizing} of our model with $J_{total}$. We can observe that music with different regional labels is noticeably separated in the pitch style space, rhythm style space and total style space, but looks chaotic in the content space. This further demonstrates the validity of our proposed method for disentangling pitch, rhythm, style and content.

Finally, we present in Fig.~\ref{fig:6} several examples\footnote{Online Supplementary Material: \url{https://csmt201986.github.io/mgvaeresults/}.} of generating folk songs with given regional labels using our method. As seen, we can create novel folk songs with salient regional features, such as long-duration notes and large intervals in Mongolian songs, or the combination of major third and minor third in Hunan folk songs. However, there are still several failure cases. For instance, a few generated songs repeat the same melody pattern. More commonly, some songs do not show the correct regional features, especially when the given regions belong to Han nationality areas. This may be due to the fact that folk tunes in those regions share the same tonal system.

\begin{figure}[htb]
	\centering
	\includegraphics[width=4.8in]{music_example.png}
	\caption{Examples of folk song generation given regional labels. In order to align the rows, the scores of several regions are not completely displayed.}
	\label{fig:6}
\end{figure}

\section{Conclusion}

In this paper, we focus on how to capture the regional style of Chinese folk songs and generate novel folk songs with specific regional labels. We first collect a database including more than 2000 Chinese folk songs for analysis and generation.
Then, inspired by the observed regional characteristics of Chinese folk songs, a model named MG-VAE, based on adversarial learning, is proposed to disentangle the pitch, rhythm, style and content variables in the latent space of the VAE. Three metrics, covering automatic and subjective evaluation, are used to assess the proposed model in our experiments. Finally, the experimental results and the t-SNE visualization show that the disentanglement of the four variables is successful and that our model is able to generate folk songs with controllable regional style. In the future, we plan to extend the proposed model to generate longer melody sequences using more powerful models such as Transformers, and to explore the evolution of tune families like \\emph{Mo Li Hua Diao} and \\emph{Chun Diao} across different regions.\n\n\n\\bibliographystyle{spmpsci}\n\n\\section{Analysis and Results}\n\\label{sec:results}\n\\subsection{RQ1- Influence of grammatical, temporal and sentimental sentence characteristics on FR\/NFR classification}\nFor the classification of functional and non-functional requirements, we used the approach of Hussain et al. \\cite{hussain2008using}. We applied this approach to the unprocessed data set of requirements as well as to the processed one resulting from our preprocessing.\n\n{\\it Classification Process:}\nFirst, we clean the respective data set by iteratively removing encoding and formatting errors to enable further processing. Subsequently, we apply the part-of-speech tagger of the Stanford Parser \\cite{Klein:2003:AUP:1075096.1075150} to assign parts of speech to each word in each requirement.\n\nBased on the tagging of all requirements, we extract the five syntactic features \\textit{number of adjectives}, \\textit{number of adverbs}, \\textit{number of adverbs that modify verbs}, \\textit{number of cardinals}, and \\textit{number of degree adjectives\/adverbs}. For each feature, we determine its rank based on the feature's probability of occurrence in the requirements of the data set. Following Hussain et al.~\\cite{hussain2008using}, we selected a cutoff threshold of $>0.8$. Accordingly, we determined \\textit{number of cardinals} and \\textit{number of degree adjectives\/adverbs} as the valid features among the five for the unprocessed data set. For the processed data set, we identified \\textit{number of cardinals} and \\textit{number of adverbs} as the valid features.\n\nAfterwards, we extract the required keyword features for the eight defined part-of-speech keyword groups \\textit{adjective}, \\textit{adverb}, \\textit{modal}, \\textit{determiner}, \\textit{verb}, \\textit{preposition}, \\textit{singular noun}, and \\textit{plural noun}. For each keyword group, we calculate the smoothed probability measure and select the respective cutoff threshold manually to determine the most discriminating keywords for each data set, following Hussain et al. 
\\cite{hussain2008using}.\n\nOur final feature list for the unprocessed data set consisted of the ten features \\textit{number of cardinals}, \\textit{number of degree adjectives\/adverbs}, \\textit{adjective}, \\textit{adverb}, \\textit{modal}, \\textit{determiner}, \\textit{verb}, \\textit{preposition}, \\textit{singular noun}, and \\textit{plural noun}.\n\nOur final feature list for the processed data set consisted of the ten features \\textit{number of cardinals}, \\textit{number of adverbs}, \\textit{adjective}, \\textit{adverb}, \\textit{modal}, \\textit{determiner}, \\textit{verb}, \\textit{preposition}, \\textit{singular noun}, and \\textit{plural noun}.\n\nTo classify each requirement of the respective data set, we implemented a Java-based feature extraction prototype that parses all requirements from the data set and extracts the values for the ten features mentioned above. Subsequently, we used Weka \\cite{witten2016data} to train a C4.5 decision tree \\cite{quinlan2014c4}, which ships with Weka as the J48 implementation. Following Hussain et al. \\cite{hussain2008using}, we set the minimum number of instances per leaf to 6 to reduce the chance of over-fitting.\n\n\n\n\n\nSince the data set is not very large (625 requirements), we performed 10-fold cross-validation; a minimal sketch of this setup is given below. In the following, we report our classification results for each data set.\\\\\n{\\bf Results:} The classification of the \\emph{unprocessed data set} results in $89.92\\%$ correctly classified requirements with a weighted average precision and recall of $0.90$. The classification of the \\emph{processed data set} results in $94.40\\%$ correctly classified requirements with a weighted average precision of $0.95$ and recall of $0.94$. \\tablename{ \\ref{tb:classification_unprocessed}} and \\tablename{ \\ref{tb:classification_processed}} show the details. By applying our approach, we could achieve an improvement of $4.48\\%$ correctly classified requirements. In total, we could correctly classify $28$ additional requirements, consisting of $9$ functional and $19$ non-functional ones.\nWhen classifying NFRs into sub-categories, the influence of our preprocessing is much stronger. The last two columns of \\tablename{ \\ref{tab:compare}} show the overall precision and recall of six different machine learning algorithms for sub-classifying NFRs into the categories listed in columns 1--10 of the table. 
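The following is a minimal, hedged sketch of the classification setup described above, under stated assumptions: the ten feature values are assumed to be already extracted into a numeric matrix (the file names are hypothetical), and scikit-learn's CART implementation is used here merely as a convenient stand-in for Weka's J48 (both are C4.5-style decision trees).\n\n\\begin{verbatim}\n# Sketch of the FR\/NFR decision-tree setup (assumption: features already\n# extracted; sklearn's CART used as a stand-in for Weka's J48).\nimport numpy as np\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.model_selection import cross_val_predict\nfrom sklearn.metrics import precision_recall_fscore_support\n\nX = np.loadtxt("features.csv", delimiter=",")    # hypothetical 625 x 10 matrix\ny = np.loadtxt("labels.csv", dtype=str)          # hypothetical "FR" \/ "NFR" labels\n\nclf = DecisionTreeClassifier(min_samples_leaf=6) # analogous to J48's -M 6\ny_pred = cross_val_predict(clf, X, y, cv=10)     # 10-fold cross-validation\n\np, r, f, _ = precision_recall_fscore_support(y, y_pred, average="weighted")\nprint("weighted precision=%.2f, recall=%.2f, F-measure=%.2f" % (p, r, f))\n\\end{verbatim}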
For all algorithms, results are dramatically better when using the preprocessed data (column Total P) compared to using the raw data (column Total UP).\n\n\n\\begin{table}[htbp]\n\t\\centering\n\t\\caption{Classification results of the unprocessed data set}\n\n\t\\label{tb:classification_unprocessed}\n\t\\resizebox{\\linewidth}{!}{\\begin{tabular}{c|c|c|c|c|c|c|}\n\t\t\\cline{2-7}\n\t\t& \\begin{tabular}[c]{@{}c@{}}Correctly \\\\ Classified\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Incorrectly \\\\ Classified\\end{tabular} & Precision & Recall & F-Measure & Kappa \\\\ \\hline\n\t\t\\multicolumn{1}{|c|}{NFR} & 325 (87.84\\%) & 45 (12.16\\%) & 0.95 & 0.88 & 0.91 & \\multirow{3}{*}{0.79} \\\\ \\cline{1-6}\n\t\t\\multicolumn{1}{|c|}{FR} & 237 (92.94\\%) & 18 (7.06\\%)& 0.84 & 0.93 & 0.88 & \\\\ \\cline{1-6}\n\t\t\\multicolumn{1}{|c|}{Total} & 562 (89.92\\%) & 63 (10.08\\%) & 0.90 & 0.90 & 0.90 & \\\\ \\hline\n\t\\end{tabular}}\n\\end{table}\n\n\n\n\n\n\n\\begin{table}[htbp]\n\t\\centering\n\t\\caption{Classification results of the processed data set}\n\n\t\\label{tb:classification_processed}\n\t\\resizebox{\\linewidth}{!}{\\begin{tabular}{c|c|c|c|c|c|c|}\n\t\t\t\\cline{2-7}\n\t\t\t& \\begin{tabular}[c]{@{}c@{}}Correctly \\\\ Classified\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Incorrectly \\\\ Classified\\end{tabular} & Precision & Recall & F-Measure & Kappa \\\\ \\hline\n\t\t\t\\multicolumn{1}{|c|}{NFR} & 344 (92.97\\%) & 26 (7.03\\%) & 0.98 & 0.93 & 0.95 & \\multirow{3}{*}{0.89} \\\\ \\cline{1-6}\n\t\t\t\\multicolumn{1}{|c|}{FR} & 246 (96.47\\%) & 9 (3.53\\%)& 0.90 & 0.97 & 0.93 & \\\\ \\cline{1-6}\n\t\t\t\\multicolumn{1}{|c|}{Total} & 590 (94.40\\%) & 35 (5.60\\%) & 0.95 & 0.94 & 0.94 & \\\\ \\hline\n\t\\end{tabular}}\n\n\\end{table}\n\n\\vspace{-3mm}\n\n\n\n\n\n\n\\subsection{RQ2- Classifying Non-functional Requirements}\nIn this section, we describe the machine learning algorithms we used to classify NFRs. The performance of each method is assessed in terms of its recall and precision.\n\n\n\\begin{figure*}[!ht]\n\\centering\n\n\\subfloat[{\\scriptsize Hopkins statistic to assess the clusterability of the data set (hopkins-stat = 0.1)}]{\\includegraphics[scale=0.26]{Figures\/Tendency}}\n\\subfloat[Hierarchical= 0.13]{\\includegraphics[scale=0.28]{Figures\/HCL}}\n\\subfloat[K-means= 0.1]{\\includegraphics[scale=0.28]{Figures\/Kmeans}}\n\\subfloat[Hybrid= 0.13]{\\includegraphics[scale=0.28]{Figures\/HB}}\n\\subfloat[A visual representation of the confusion matrix (BNB algorithm)]{\\includegraphics[scale=0.2]{Figures\/ConfMatrix}}\\\\\n\\caption{Detailed visual representation of classifying NFRs }\n\\label{fig:cluster}\n\\end{figure*}\n\n\n\n\n\\subsubsection{Topic Modeling}\nTopic modeling is an unsupervised text analysis technique that groups small sets of highly correlated words from a large volume of unlabelled text into {\\it topics} \\cite{TM1}. \n\n\n{\\bf Algorithms:} The {\\it Latent Dirichlet Allocation (LDA)} algorithm classifies documents based on the frequency of word co-occurrences. Unlike the LDA approach, the {\\it Biterm Topic Model} (BTM) method models topics directly from word co-occurrence patterns, learning them by exploring word-word (i.e., biterm) pairs. 
Some recent studies on the application of topic modeling to the classification of short text documents reported that the BTM approach is better at modeling short and sparse texts, such as those typical of requirements specifications.\n\n\n\n\n\n\n\\begin{table*}\n\\centering\n\\footnotesize\n\\caption{Comparison between classification algorithms for classifying non-functional requirements [(U)P= (Un)Processed]}\n\\label{tab:compare}\n\n\\begin{tabular}{ |c|cc|cc|cc|cc|cc|cc|cc|cc|cc|cc|cc|cc| cc| }\n \\hline\n{\\bf Algorithm}& \\multicolumn{2}{c|}{\\bf A}& \\multicolumn{2}{c|}{\\bf US}& \\multicolumn{2}{c|}{\\bf SE}& \\multicolumn{2}{c|}{\\bf SC}& \\multicolumn{2}{c|}{\\bf LF}& \\multicolumn{2}{c|}{\\bf L}& \\multicolumn{2}{c|}{\\bf MN}& \\multicolumn{2}{c|}{\\bf FT}& \\multicolumn{2}{c|}{\\bf O}& \\multicolumn{2}{c|}{\\bf PE} & \\multicolumn{2}{c|}{\\bf PO}& \\multicolumn{2}{c|}{\\bf Total [P]}& \\multicolumn{2}{c|}{\\bf Total [UP]}\\\\ \\cline{2-27}\n&{\\bf R}&{\\bf P} &{\\bf R}&{\\bf P} & {\\bf R}&{\\bf P}& {\\bf R}&{\\bf P} &{\\bf R}&{\\bf P} &{\\bf R}&{\\bf P} &{\\bf R}&{\\bf P} &{\\bf R}&{\\bf P} &{\\bf R}&{\\bf P} &{\\bf R}&{\\bf P} & {\\bf R}&{\\bf P} & {\\bf R}&{\\bf P}&{\\bf R}&{\\bf P}\\\\\\hline\n{\\bf LDA}&95&60 &61&76 & 87&87& 81&57 &60&85 &47&20 &70&52 &10&2 &35&70 &70&95 & -&- &\\cellcolor{Gray}62&\\cellcolor{Gray}62&31&31\\\\\\cline{2-27}\n{\\bf BTM}&0&0 &6&12 & 13&18& 9&8 &5&7 &0&0 &0&0 &40&17 &0&0 &18&43 & -&- &\\cellcolor{Gray}8&\\cellcolor{Gray}8&3&3\\\\\\cline{2-27}\n{\\bf Hierarchical }&13&14 &25&20 &24&17 &16&29 &5&3 &6&15 &19&35 &18&40 &32&29 &26&22 &-&- & \\cellcolor{Gray}21&\\cellcolor{Gray}21&16&16\\\\ \\cline{2-27}\n{\\bf K-means}&10&23 &19&14 &29&18 &14&14 &21&21 &8&15 &22&47 &18&40 &26&30 &31&11& -&- &\\cellcolor{Gray}20&\\cellcolor{Gray}20&15&15\\\\ \\cline{2-27}\n{\\bf Hybrid}&15&14 &27&22& 29&18& 20&4& 26&24 &6&15 &17&35 &18&40 &22&27 &26&22& -&- &\\cellcolor{Gray}22&\\cellcolor{Gray}22&19&17\\\\\\cline{2-27}\n{\\bf Na\\\"{i}ve Bayes} &\\cellcolor{Gray}90&\\cellcolor{Gray}90&\\cellcolor{Gray} 97&\\cellcolor{Gray}77 &\\cellcolor{Gray}97&\\cellcolor{Gray}100 &\\cellcolor{Gray}83&\\cellcolor{Gray}83 &\\cellcolor{Gray}94&\\cellcolor{Gray}94 &\\cellcolor{Gray}75&\\cellcolor{Gray}100 &\\cellcolor{Gray}90&\\cellcolor{Gray}82 &\\cellcolor{Gray}97&\\cellcolor{Gray}90 &\\cellcolor{Gray}78&\\cellcolor{Gray}91 &\\cellcolor{Gray}90&\\cellcolor{Gray}100 &-&- &\\cellcolor{Gray}91&\\cellcolor{Gray}90&45&45\\\\\\hline\n\\end{tabular}\n\n\\end{table*}\n{\\bf Results and Evaluation:} The modeled topics for both LDA and BTM, including the top frequent words and the NFR assigned to each topic, are provided in our source code package\\footnote{http:\/\/wcm.ucalgary.ca\/zshakeri\/projects}. We characterized each topic by the most probable words assigned to it. For instance, LDA yields the word set \\{user, access, allow, prior, and detail\\} for the topic describing the Fault Tolerance sub-category, while BTM yields the set \\{failure, tolerance, case, use and data\\}. \n\n\nGenerally, the word lists generated by BTM for each topic are more intuitive than those produced by LDA. This confirms previous research showing that BTM performs better than LDA in terms of modeling and generating the topics\/themes of a corpus consisting of short texts. However, surprisingly, BTM performed much worse than LDA for sub-classifying NFRs, as shown in Table~\\ref{tab:compare}. 
This might be because BTM performs its modeling directly at the corpus level and biterms are generated independently from topics. \n\n\n \\noindent\\makebox[\\linewidth]{\\resizebox{0.3333\\linewidth}{1pt}{$\\bullet$}}\\bigskip \n \\vspace{-3mm}\n\\subsubsection{Clustering}\nClustering is an unsupervised classification technique which categorizes documents into groups based on likeness \\cite{Cluster}. This likeness can be defined as the numerical distance between two documents \\(D_i\\) and \\(D_j\\), measured as:\n\\vspace{-3mm}\n\n\\begingroup\n\\everymath{\\scriptstyle}\n\\scriptsize\n\\[\nd(D_i, D_j)= \\sqrt{({d_i}_1-{d_j}_1)^2+ ({d_i}_2-{d_j}_2)^2 + ...+ ({d_i}_n-{d_j}_n)^2}\n\\] \\endgroup\n\nwhere \\(({d_i}_1, {d_i}_2, ..., {d_i}_n)\\) and \\(({d_j}_1, {d_j}_2, ..., {d_j}_n)\\) represent the coordinates (i.e., word frequencies) of the two documents. \n\n\n{\\bf Algorithms: }The {\\it Hierarchical} (i.e., agglomerative) algorithm first assigns each document to its own cluster and then iteratively merges the two closest clusters until the entire corpus forms a single cluster. Unlike the hierarchical approach, in which we do not need to specify the number of clusters upfront, the {\\it K-means} algorithm randomly assigns documents to {\\it k} bins; it computes the location of the centroid of each bin and the distance between each document and each centroid. We set {\\it k}=10 to run this algorithm. However, the K-means approach is highly sensitive to the initial random selection of cluster centroids (i.e., means), which might lead to different results on each run. Thus, we also used a {\\it Hybrid} algorithm, which combines the hierarchical and K-means algorithms: it first computes the center (i.e., mean) of each cluster by applying the hierarchical approach, and then runs K-means initialized with this set of cluster centers (see the sketch below). \n\n\n{\\bf Results and Evaluation: }Before applying the clustering algorithms, we used the Hopkins (H) statistic to test spatial randomness and assess the {\\it clustering tendency} (i.e., clusterability) of our data set. To this end, we formulated the following null hypothesis: \\(\\big(\\){\\it \\(H_0\\): the NFR data set is uniformly distributed and has no meaningful clusters}\\(\\big)\\). As presented in Figure \\ref{fig:cluster} (a), the {\\it H-value} of this test is 0.1 (close to zero), which rejects this hypothesis and indicates that our data set is significantly clusterable. However, as presented in Table \\ref{tab:compare}, the clustering algorithms performed poorly at classifying NFRs. This may imply that the data set under study is quite unstructured and that the sub-categories of NFRs are not well separated, so an unsupervised algorithm (e.g., hierarchical or K-means) cannot accurately achieve the segmentation.\n\nMoreover, we used silhouette (s) analysis to assess the cohesion of the resulting clusters. We used the function {\\it silhouette()} of the {\\it cluster} package to compute the silhouette coefficient. A small {\\it s-value} (i.e., around 0) means that the observation lies between two clusters and has low cohesion. The results of this test and the details of each cluster, including the number of requirements assigned to it and its {\\it s-value}, are illustrated in Figure \\ref{fig:cluster}(b-d). 
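As an illustration, the following is a minimal sketch of the hybrid procedure, assuming the requirements have already been vectorized into a dense term-frequency matrix (the variable names are ours); scikit-learn is used here in place of the R {\\it cluster} package employed in our experiments.\n\n\\begin{verbatim}\n# Hybrid clustering sketch: hierarchical centers used to seed K-means.\n# Assumption: X is a dense term-frequency matrix of shape (n_docs, n_terms).\nimport numpy as np\nfrom sklearn.cluster import AgglomerativeClustering, KMeans\nfrom sklearn.metrics import silhouette_score\n\ndef hybrid_clustering(X, k=10, seed=0):\n    hier = AgglomerativeClustering(n_clusters=k).fit(X)\n    centers = np.vstack([X[hier.labels_ == c].mean(axis=0) for c in range(k)])\n    km = KMeans(n_clusters=k, init=centers, n_init=1, random_state=seed).fit(X)\n    return km.labels_\n\n# labels = hybrid_clustering(X)\n# print(silhouette_score(X, labels))  # cohesion, analogous to silhouette()\n\\end{verbatim}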
\n\n \\noindent\\makebox[\\linewidth]{\\resizebox{0.3333\\linewidth}{1pt}{$\\bullet$}}\\bigskip \n \\vspace{-3mm}\n \n\\subsubsection{Na\\\"{i}ve Bayes Classification}\n\nThis approach is a supervised learning method which predicts unseen data based on {\\it Bayes' theorem} \\cite{Naive}, used to calculate the conditional probability:\n\n\\begingroup\n\\everymath{\\scriptstyle}\n\\scriptsize\n\\[\nP(C=c_k\\mid F=f)= \\dfrac{P(F=f\\mid C=c_k) P(C=c_k)}{P(F=f)}\n\\] \\endgroup\n\nwhere \\(C=(c_1, c_2, ..., c_k)\\) represents the classes and \\(F= (f_1, f_2, ..., f_d)\\) is a vector random variable, comprising one vector per document. \n\n\n{\\bf Algorithm:} We use a variation of the multinomial Na\\\"{i}ve Bayes algorithm known as {\\it Binarized Na\\\"{i}ve Bayes} (BNB). In this method, the term frequencies are replaced by Boolean presence\/absence features. The rationale is that word occurrence is more important than word frequency for sentiment classification.\n\n\n{\\bf Results and Evaluation:} To apply this algorithm we employed 5-fold cross-validation. To reduce data-splitting bias, we ran five runs of the 5-fold cross-validation. Overall accuracy is calculated at just over 90\\% with a {\\it p-value} of 2.2e-16. As illustrated in Table \\ref{tab:compare}, the results obtained using the BNB algorithm were generally more accurate than those of the other methods. All of the NFRs (except PO) were recalled at relatively high values, ranging from 75\\% (Legal requirements) to 97\\% (Security and Performance requirements). To provide more detail on the performance of our classifier for each NFR, we visualized the confusion matrix resulting from applying the BNB algorithm (Figure \\ref{fig:cluster} (e)). The columns and rows of this matrix represent the actual (i.e., reference) and the predicted classes, respectively. The blocks are colored based on the frequency of the intersection between the actual and predicted classes (e.g., the diagonal represents the correct predictions for the actual class). Since some of the NFRs in our data set occur more frequently than others, we normalized our data set before visualizing the confusion matrix. As illustrated in Figure \\ref{fig:cluster} (e), requirements in the classes FT, L, MN, O, and SC were often assigned to class US. This implies that {\\it the terminology we use for representing usability requirements is very general, covering other NFRs that are indirectly related to usability}. This shows a clear need for additional (or better) sentimental patterns that differentiate usability requirements from other, similar categories of NFRs.\n\n\n\\begin{tcolorbox}[colback=white, title= Findings]\n\\footnotesize\n \\textcolor{white}{.................. } \\\\\n\\vspace{-5mm}\n \n{\\bf Finding 1:} Our preprocessing approach positively impacted the performance of the applied classification of functional and non-functional requirements. We could improve the accuracy from 89.92\\% to 94.40\\%.\\\\\n\\vspace{-2mm}\n\n{\\bf Finding 2:} Our preprocessing approach strongly impacted the performance of all applied sub-classification methods. For LDA and BNB, both precision and recall doubled. 
\\\\\n\\vspace{-2mm}\n\n{\\bf Finding 3:} Among the machine learning algorithms LDA, BTM, Hierarchical, K-means, Hybrid and Binarized Na\\\"{i}ve Bayes (BNB), {\\it BNB} had the highest performance for sub-classifying NFRs.\\\\\n\\vspace{-2mm}\n\n\n{\\bf Finding 4:} While BTM generally works better than LDA for exploring the general themes and topics of a short-text corpus, it did not perform well for sub-classifying NFRs. \\\\\n\\vspace{-2mm}\n\n{\\bf Finding 5:} There is a clear need for additional sentimental patterns\/sentence structures to differentiate usability requirements from other types of NFRs.\n\\end{tcolorbox}\n\\vspace{-2mm}\n\n\n\n\n\n\n\\section{Conclusion and Implications}\n\\label{sec:agenda}\n\n\n\n\nOur findings are summarized in the box at the end of Section~\\ref{sec:results}. \nIn particular, we conclude that using our preprocessing approach improves the performance both of classifying FR\/NFR and of sub-classifying NFRs into sub-categories. Further, we found that, among popular machine learning algorithms, Binarized Na\\\"{i}ve Bayes (BNB) performed best for the task of classifying NFRs into sub-categories. Our results further show that, although BTM generally works better than LDA for extracting the topics of short texts, BTM does not perform well for classifying NFRs into sub-categories. Finally, additional (or better) sentimental patterns and sentence structures are needed for differentiating usability requirements from other types of NFRs.\n\\vspace{-2mm}\n\\section{Introduction}\n \n In requirements engineering, classifying the requirements of a system by their kind into \\emph{functional requirements}, \\emph{quality requirements} and \\emph{constraints} (the latter two usually called \\emph{non-functional requirements}) \\cite{re_glossary} is a widely accepted standard practice today.\n \nWhile the different kinds of requirements are known and well-described today~\\cite{Martin}, automated classification of requirements written in natural language into functional requirements (FRs) and the various sub-categories of non-functional requirements (NFRs) is still a challenge \\cite{Ernst2010}. This is particularly due to the fact that stakeholders, as well as requirements engineers, use different terminologies and sentence structures to describe the same kind of requirements \\cite{RO, IWSPM}. The high level of inconsistency in documenting requirements makes automated classification more complicated and, therefore, error-prone.\n\nIn this paper, we investigate how automated classification algorithms for requirements can be improved and how well some frequently used machine learning approaches work in this context. We make two contributions. (1)~We investigate whether and to what extent an existing decision tree learning algorithm~\\cite{hussain2008using} for classifying requirements into FRs and NFRs can be improved by preprocessing the requirements with a set of rules for (automated) standardizing and normalizing the requirements found in a requirements specification. (2)~We study how well several existing machine learning methods perform for automated classification of NFRs into sub-categories such as availability, security, or usability.\n\nWith this work, we address the \\emph{RE Data Challenge} posed by the 25th IEEE International Requirements Engineering Conference (RE'17). 
\n\n \n\n \n\n\n\n\n\n\n\\section{Limitations and Threats to Validity}\n\\label{sec:limits}\nIn this section, we discuss the potential threats to the validity of our findings in two main threads: \n\n{\\bf (1) Data Analysis Limitations:} The biggest threat to the validity of this work is the fact that our preprocessing model was developed on the basis of the data set given for the RE Data Challenge and that we had to use the same data set for evaluating our approach.\nWe mitigate this threat by using sentence-structure features, such as temporal, entity, and functional features, which are applicable to sentences with different structures from different contexts. We also created a set of regular expressions which are less context-dependent and have been formed mainly based on the semantics of NFRs.\n\nAnother limiting factor is that our work depends on the number and choice of the NFR sub-categories used by the creators of our data set.\nHowever, our preprocessing can be adapted to a different set of NFR sub-categories. In terms of the co-occurrence rules and regular expressions presented in Table \\ref{tab:reg}, we aim to expand these rules by adding more NFR sub-categories in future work, as we gain additional insights from processing real-world requirements specifications.\n\n{\\bf (2) Dataset Limitations:} Due to the nature of the RE'17 Data Challenge, we used the data set as is, although it has major data quality issues: (1)~Some requirements are incorrectly labeled. For example, R2.18 ``The product shall allow the user to view previously downloaded search results, CMA reports and appointments'' is labeled as an NFR. Obviously, however, this is a functional requirement.\n(2)~The important distinction between quality requirements and constraints is not properly reflected in the labeling. (3)~The selection of the requirements has some bias. For example, the data set does not contain any compatibility, compliance or safety requirements. Neither does it contain any cultural, environmental or physical constraints. (4)~Only a single requirement is classified as PO, which makes this sub-category useless for our study. The repetition of our study on a data set of higher quality is subject to future work.\n\nFurthermore, the {\\it unbalanced data set} we used for classifying the NFRs may affect the findings of this study. However, a study by Xue and Titterington \\cite{unbalanced} revealed that there is no reliable empirical evidence to support the claim that an unbalanced data set negatively impacts the performance of the LDA\/BTM approaches. Further, a recent study by L\\'{o}pez et al. \\cite{unbalanced2} shows that the unbalanced ratio by itself does not have the most significant effect on the classifiers' performance; rather, other issues must be taken into account, such as (a) the presence of small disjuncts, (b) the lack of density, (c) class overlapping, (d) noisy data, (e) the management of borderline examples, and (f) dataset shift. The pre-processing step we conducted before applying the classification algorithms helps discriminate the NFR sub-classes more precisely and mitigates the negative impact of the noisy-data and borderline problems. Moreover, we employed the n-fold cross-validation technique, which helps generate enough positive class instances in the different folds and reduces additional problems in the data distribution, especially for highly unbalanced datasets. 
This technique, to a great extent, mitigates the negative impact of the class-overlapping, dataset-shift, and small-disjunct issues on the performance of the classification algorithms we applied in this study. \n \n\\section{Preprocessing of Requirements Specifications}\n\\label{sec:PP}\nIn this section, we describe the preprocessing we applied to reduce the inconsistency of requirements specifications by leveraging rich sentence features and latent co-occurrence relations.\n\\subsection{Part Of Speech (POS) Tagging} We used the part-of-speech tagger of the Stanford Parser \\cite{Klein:2003:AUP:1075096.1075150} to assign parts of speech, such as noun, verb, adjective, etc., to each word in each requirement. \nThe POS tags\\footnote{Check https:\/\/gist.github.com\/nlothian\/9240750 for a complete list of tags} are necessary to perform the FR\/NFR classification based on the approach of Hussain et al. \\cite{hussain2008using}.\n\\subsection{Entity Tagging}\nTo improve the generalization of the input requirements, we used a ``supervised training data'' method in which all context-specific products and users are blinded by renaming them {\\it PRODUCT} and {\\it USER}, respectively. To this end, we used the LingPipe NLP toolkit\\footnote{http:\/\/alias-i.com\/lingpipe\/} and created the \\(SRS\\_dictionary\\) by defining project-specific users\/customers and products (e.g., program administrators, nursing staff members, realtor, or card member marked as USER), such as below:\n\\vspace{-2mm}\n\\begin{figure}[H]\n\\centering\n{\\includegraphics[scale=0.85]{Figures\/User}}\n\n\\caption*{}\n\n\\end{figure}\n\n\nThen, each sentence is tokenized and POS tagged with the developed SRS dictionary. All of the tokens associated with {\\it user} and {\\it product} were discarded, and we kept only these two keywords to represent the two entities. Finally, we used the POS tagger of the Stanford Parser \\cite{Klein:2003:AUP:1075096.1075150} and replaced all Noun Phrases (NPs) containing ``USER'' and ``PRODUCT'' with \\(USER\\) and \\(PRODUCT\\), respectively. For instance, \\mybox[fill=gray!30]{registered USER} is replaced with \\(USER\\). \n\\subsection{Temporal Tagging} {\\it Time} is a key factor in characterizing non-functional requirements, such as availability, fault tolerance, and performance. \n\nFor this purpose, we used SUTime, a rule-based temporal tagger for recognizing and normalizing temporal expressions according to the TIMEX3 standard\\footnote{See http:\/\/www.timeml.org for details on the TIMEX3 tag}. SUTime detects the following basic types of temporal objects \\cite{SUTIME}:\n\\begin{enumerate}\n\\item {\\it Time:} A particular instance on a time scale. SUTime also handles absolute times, such as {\\it Date}. As in: \n\n\\begin{figure}[H]\n\\centering\n{\\includegraphics[scale=0.67]{Figures\/TIME}}\n\n\\caption*{}\n\n\\end{figure}\n\n\\item {\\it Duration and Intervals:} The amount of intervening time in a time interval. As in:\n\n\\begin{figure}[H]\n\\centering\n{\\includegraphics[scale=0.67]{Figures\/Duration}}\n\n\\caption*{}\n\n\\end{figure}\n\nIntervals can be described as a range of time defined by start and end time points. SUTime represents this type in the form of the other types. \n\\item{\\it Set:} A set of temporals, representing times that occur with some frequency. 
As in:\n\n\\begin{figure}[H]\n\\centering\n{\\includegraphics[scale=0.67]{Figures\/SET1}}\n\n\\caption*{}\n\\end{figure}\n\\end{enumerate}\n\n\n\\begin{table*}[!htb]\n\\centering\n\\caption{\\small Proposed co-occurrence and regular expressions for preprocessing SRSs [\\(CO(w)\\): the set of terms co-occurring with word \\(w\\)] }\n\n\\label{tab:reg}\n\\begin{tabular}{ |c|p{4.1cm}|p{5.1cm}p{5.2cm}| }\n \\hline\n {\\bf NFR}&{\\bf Keywords} & {\\bf Part of Speech (POS) and Regular Expressions} & {\\bf Replacements}\\\\\\hline \n\n\n\n{\\bf Security [SE]} &protect, encrypt, policy, authenticate, prevent, malicious, login, logon, password, authorize, secure, ensure, access&\n\\( (\\mybox[fill=gray!30]{only}\/ ...\/\\mybox[fill=gray!30]{ nsubj}\/...\/\\mybox[fill=gray!30]{root}\/... )\\)\n\\( \\mid\\mid (\\mybox[fill=gray!30]{only}\/ ...\/\\mybox[fill=gray!30]{ root}\/...\/\\mybox[fill=gray!30]{nmod: agent}\/... )\\)\n\\(\\wedge \\big(\\mybox[fill=gray!30]{root}\\text{ is a } VBP\\big)\\) \n\n\n& \\( \\Rightarrow \\begin{cases}\\mybox[fill=gray!30]{nsubj}\\mybox[fill=gray!30]{agent} \\gets\\text{\\it authorized user}&\\\\ \\mybox[fill=gray!30]{root} \\gets access& \\end{cases}\\)\n\n{\\raisebox{-\\totalheight}{\\includegraphics[scale=.43]{Figures\/SE2}}}\\\\\\cline{3-4}\n\n&&\\( \\forall \\omega \\in \\{\\text{\\it username \\& password, login, logon,}\\)\n\n \\(\\text{\\it security, privacy, right, integrity, policy}\\}\\)\n \n \\(\\wedge\\ CO(\\omega)\\cap CO(NFR_{se}) \\neq \\emptyset\\)\n &\\(\\Rightarrow \\omega \\gets authorization\\)\\\\\\cline{3-4}\n \n &&\\( \\forall \\omega \\in \\{\\text{\\it reach, enter, protect, input, interface}\\}\\)\n \n \\(\\wedge \\text{ product is obj }\\wedge CO(\\omega)\\cap CO(NFR_{se}) \\neq \\emptyset\\)\n &\\(\\Rightarrow \\omega \\gets access\\)\\\\\\hline\n\n \n \\hline\n\\end{tabular}\n\\vspace{-6mm}\n\\end{table*}\n\\vspace{-6mm}\n\nAfter tagging a sample set of all the classified NFRs, we identified the following patterns and used them to normalize the entire data set. To do this, we first replaced all expressions of the form ``24[-\/\/*]7'' and ``\\(24\\times7\\times365\\)'' with ``24 hours per day 365 days per year'', and ``everyday'' with ``every day''. Likewise, we replaced all ``sec(s)'' and ``min(s)'' with ``seconds'' and ``minutes'', respectively; a minimal sketch of these surface substitutions is given below. The box that follows presents the rules we defined and applied to normalize the temporal expressions embedded in the requirements' descriptions. 
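The following is a hedged sketch of these substitutions (the exact patterns used in our prototype may differ slightly; the function name is ours):\n\n\\begin{verbatim}\n# Sketch of the simple surface-level temporal normalizations described above.\nimport re\n\nSUBS = [\n    (r"24 ?[-\/*x] ?7( ?[-\/*x] ?365)?", "24 hours per day 365 days per year"),\n    (r"\\beveryday\\b", "every day"),\n    (r"\\bsecs?\\b", "seconds"),\n    (r"\\bmins?\\b", "minutes"),\n]\n\ndef normalize_temporal(requirement):\n    for pattern, replacement in SUBS:\n        requirement = re.sub(pattern, replacement, requirement,\n                             flags=re.IGNORECASE)\n    return requirement\n\nprint(normalize_temporal("The product shall be available 24\/7."))\n\\end{verbatim}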
\\\\\n\\vspace{-3mm}\n\\begin{tcolorbox}[colback=white, title= Temporal Rules]\n\\scriptsize\n\\begin{enumerate}\n\\item \\(\\forall \\text{ } [\\backslash exp]\\ \\langle\\backslash DURATION\\rangle,\\ exp \\gets within\\) \\\\\nwhere \\(exp \\in\\)\\{\\it no longer than, under, no more than, not be more than, no later, in, for less than, at a maximum\\} \\\\\n\\vspace{-3mm}\n\n\n \\vspace{2mm}\n\\item \\(\\forall \\text{ } [\\backslash DURATION \\backslash TIME \\backslash DATE]^+ \\gets alltimes\\) \\\\\n\n\\begin{minipage}[t]{1\\linewidth}\n \\includegraphics[scale=.55]{Figures\/Temporal1}\\hfill\n \\vspace{-7mm}\n \\captionsetup{labelformat=empty}\n { \\captionof{figure}{ }\n \\label{fig:Terminology}}\n \\end{minipage}\\hfill\\\\\n \\vspace{2mm}\n\n\n\\item \\(within \\text{ } \\langle\\backslash DURATION\\rangle \\gets fast\\) \\\\\n\\(if\\ \\langle\\backslash DURATION\\rangle == [\\backslash seconds \\backslash minutes]\\)\\\\\n \\vspace{-2mm}\n \n\\begin{minipage}[t]{1\\linewidth}\n\\vspace{-2mm}\n \\includegraphics[scale=.6]{Figures\/Temporal2}\n \\vspace{-3mm}\n \\captionsetup{labelformat=empty}\n { \\captionof{figure}{ }\n \\label{fig:Terminology}}\n \\end{minipage}\\hfill\\\\\n\\vspace{-5mm}\n\n \\vspace{2mm}\n\\item \\(\\{timely, quick\\} \\mid\\mid [\\backslash \\text{\\it positive adj } \\backslash time] \\gets fast\\)\\\\\n\\item \\({[8\\text{-}9][0\\text{-}9][{\\bf \\backslash.}?[0\\text{-}9]?{\\bf \\%}?]}[IN\\mid DET]^* time \\gets alltimes\\)\n\n\\vspace{-1mm}\n \\noindent \n\n\\end{enumerate}\n\\end{tcolorbox}\n\\vspace{-5mm}\n\n\n\n\n\n\n\n\n\\subsection{Co-occurrence and Regular Expressions}\nOnce the sentence features have been used to reduce the complexity of the text, we use co-occurrence relations and regular expressions to increase the weight of the influential words for each type of NFR. To explore these rules, we manually analyzed 6 to 10 requirements of each NFR sub-category and deployed different components of the Stanford Parser, such as part-of-speech tags, named entities, sentiment, and relations. Moreover, in this step, we recorded the co-occurrence counts of each term within the provided NFR data set as a co-occurrence vector. We used this parameter as a supplement for exploring the SRS regular expressions. For instance, Table \\ref{tab:reg} presents the details of the rules we propose for the Security (SE) NFR. Please refer to the footnote\\footnote{ http:\/\/wcm.ucalgary.ca\/zshakeri\/projects} for the complete list of these rules, containing regular expressions for all of the provided NFRs. \n\n\n\n\n\\section{Related Work}\n \\label{sec:RW}\n Software Requirements Specifications (SRSs) are written in natural language, with mixed statements of functional and non-functional requirements. There is a growing body of research comparing the effect of using manual and automatic approaches for the classification of requirements~\\cite{nanniu,Ernst2010}. An efficient classification enables focused communication and prioritization of requirements~\\cite{janerej2}, and categorization of requirements allows filtering the requirements relevant for a given important aspect. Our work is also closely related to the research on automatic classification of textual requirements.\n\nKnauss and Ott \\cite{knaussREFSQ} introduced a model of a socio-technical system for requirements classification. They evaluated their model in an industrial setting with a team of ten practitioners by comparing a manual, a semi-automatic, and a fully-automatic approach to requirements classification. 
\nThey reported that a semi-automatic approach offers the best ratio of quality and effort, as well as the best learning performance, and is therefore the most promising of the three approaches.\n\nCleland-Huang et al. \\cite{janere} investigated mining large requirements documents for non-functional requirements. Their results indicate that their NFR-Classifier adequately distinguishes several types of NFRs. However, further work is needed to improve the results for some other NFR types, such as 'look-and-feel'. Their study is similar to ours in that they trained a classifier to recognize a set of weighted indicator terms indicative of each type of requirement; in contrast, we used several different classification algorithms and additionally assessed their precision and recall to compare their performance.\n\nRahimi et al. \\cite{monaRE} present a set of machine learning and data mining methods for automatically extracting quality concerns from requirements, feature requests, and online forums. They then generate a basic goal model from the requirements specification, where each concern is modeled as a softgoal. For attaching topics to softgoals, they used an LDA approach to estimate the similarity between each requirement and the discovered topics. In addition, they used LDA to identify the best sub-goal placement for each of the unattached requirements. In contrast, we use LDA as one of our approaches for classifying the non-functional requirements. \n\n\nThe Na\\\"{i}ve Bayes classifier is used in several studies \\cite{KnaussRE, koj} for the automatic classification of requirements. Therefore, we included Na\\\"{i}ve Bayes in our study to be comparable with other classifiers.\n\\section{The Challenge and Research Questions}\n\\label{sec:RQ}\n\n\n\\subsection{Context and Data Set}\nThe challenge put forward by the Data Track of RE'17 consists of taking a given data set and performing an automated RE task on the data, such as tracing, identifying\/classifying requirements or extracting knowledge. For this paper, we chose the task of automated classification of requirements.\n\nThe data set given for this task comes from the OpenScience tera-PROMISE repository\\footnote{https:\/\/terapromise.csc.ncsu.edu\/!\/\\#repo\/view\/head\/requirements\/nfr}. It consists of 625 labeled natural language requirements (255 FRs and 370 NFRs). The labels classify the requirements first into FR and NFR. Within the latter category, eleven sub-categories are defined: (a)~ten \\emph{quality requirement categories}: Availability (A), Look \\& Feel (LF), Maintainability (MN), Operability (O), Performance (PE), Scalability (SC), Security (SE), Usability (US), Fault Tolerance (FT), and Portability (PO); (b)~one \\emph{constraint category}: Legal \\& Licensing (L). These labels constitute the ground truth for our investigations.\n\n\n\\subsection{Research Questions}\nWe frame the goal of our study in two research questions:\n {\\it RQ1. How do grammatical, temporal and sentimental characteristics of a sentence affect the accuracy of classifying requirements into functional and non-functional ones?}\n \nWith this research question, we investigate whether our preprocessing approach, which addresses the aforementioned characteristics, has a positive impact on the classification into FRs and NFRs in terms of precision and recall.\n\n {\\it RQ2. 
To what extent is the performance of classifying NFRs into sub-categories influenced by the chosen machine learning classification method?}\n\n\nWith this research question, we study the effects of the chosen machine learning method on the precision and recall achieved when classifying the NFRs in the given data set into the sub-categories defined in the data set.\n\n\n\\section{Introduction}\n\n\nDeep neural networks have achieved state-of-the-art performance in various natural- and medical-imaging problems \\cite{litjens2017survey}. However, they tend to under-perform when the test-image distribution differs from the one seen during training. In medical imaging, this is due to, for instance, variations in imaging modalities and protocols, vendors, machines, clinical sites and subject populations. For semantic segmentation problems, labelling a large number of images for each different target distribution is impractical, time-consuming, and often impossible. To circumvent those impediments, methods learning robust networks with less supervision have triggered interest in medical imaging \\cite{Cheplygina2018Notsosupervised}.\n\nThis motivates {\\em Domain Adaptation} (DA) methods: DA amounts to adapting a model trained on an annotated source domain to another target domain, with no or minimal new annotations for the latter. Popular strategies involve minimizing the discrepancy between the source and target distributions in the feature or output spaces \\cite{ADDA,tsai2018learning}; integrating a domain-specific module in the network \\cite{dou2018pnp}; translating images from one domain to the other \\cite{cyclegan}; or integrating a domain-discriminator module and penalizing its success in the loss function \\cite{ADDA}.\n\nIn medical applications, separating the source training and adaptation phases is critical for privacy and regulatory reasons, as the source and target data may come from different clinical sites. Therefore, it is crucial to develop adaptation methods that neither assume access to the source data nor modify the pre-training stage. Standard DA methods, such as \\cite{dou2018pnp,tsai2018learning,ADDA,cyclegan}, do not comply with these restrictions. This has recently motivated \\emph{Source-Free Domain Adaptation} (SFDA) \\cite{Bateson2020,KARANI2021101907}, a setting where the source data (neither the images nor the ground-truth masks) is unavailable during the training of the adaptation phase. \n\nEvaluating SFDA methods consists of: (i) adapting on a dedicated training set \\textit{Tr} from the target domain; and (ii) measuring the generalization performance on an unseen test set \\textit{Te} in the target domain. However, emerging and very recent \\emph{Test-Time Adaptation} (TTA) works in machine learning \\cite{wang2021tent,Sun2020} and medical imaging \\cite{KARANI2021101907,Varsavsky} argue that this is not as useful as adapting directly to the test set \\textit{Te}. In various practical applications, access to the target training distribution might not be possible. This is particularly common in medical image segmentation, where only a single target-domain subject may be available for test-time inference. 
In the context of image classification, the authors of \\cite{wang2021tent} showed recently that a simple adaptation of batch normalization's scale and bias parameters on a set of test-time samples can deal competitively with domain shifts.\n\nWith this context in mind, we propose a simple formulation for source-free and single-subject test-time adaptation of segmentation networks. During inference on a single testing subject, we optimize a loss integrating shape priors and the entropy of the predictions with respect to the batch normalization's scale and bias parameters. Unlike the standard SFDA setting, we perform test-time adaptation on each subject separately, and forgo the use of the target training set \\textit{Tr} during adaptation. Our setting is most similar to the image classification work in \\cite{wang2021tent}, which minimized a label-free entropy loss defined over test-time samples. Building on this entropy loss, we further guide segmentation adaptation with domain-invariant shape priors on the target regions, and show the substantial effect of such shape priors on TTA performance. \nWe report comprehensive experiments and comparisons with state-of-the-art TTA, SFDA and DA methods, which show the effectiveness of our shape-guided entropy minimization in two different adaptation scenarios: cross-modality cardiac segmentation (from MRI to CT) and prostate segmentation in MRI across different sites. Our method exhibits substantially better performance than the existing TTA methods. Surprisingly, it also fares better than various state-of-the-art SFDA and DA methods, although it does not train on source or additional target data during adaptation, but just performs joint inference and adaptation on a single 3D data point in the target domain.\nOur results and ablation studies question the usefulness of training on the target set \\textit{Tr} during adaptation and point to the surprising and substantial effect of embedding shape priors during inference on domain-shifted testing data. \nOur framework can be readily used for integrating various priors and adapting any segmentation network at test time. \n\n\n\\begin{figure}[t]\n \\includegraphics[width=1\\linewidth]{figures\/over2.png}\n \\caption[]{Overview of our framework for Test-Time Adaptation with Shape Moments: we leverage entropy minimization and shape priors to adapt a segmentation network on a single subject at test time.}\n \\label{fig:overview}\n\n\\end{figure}\n\n\\section{Method} \n\nWe consider a set of $M$ source images ${I}_{m}: \\Omega_s\\subset \\mathbb R^{2} \\rightarrow {\\mathbb R}$, $m=1, \\dots, M$, and denote their ground-truth $K$-class segmentations, for each pixel $i \\in \\Omega_s$, as $K$-simplex vectors ${\\mathbf y}_m (i) = \\left(y^{(1)}_{m} (i), \\dots, y^{(K)}_{m} (i)\\right) \\in \\{0,1\\}^K$. 
For each pixel $i$, its coordinates in the 2D space are represented by the tuple $\\left(u_{(i)}, v_{(i)}\\right) \\in \\mathbb{R}^{2}$.\n\n\\paragraph{Pre-training Phase} The network is first trained on the source domain only, by minimizing the cross-entropy loss with respect to the network parameters $\\theta$: \n\\begin{equation}\\label{eq:crossent}\n\\begin{aligned}\n\\min_{\\theta} \\sum_{{m}=1}^{M} \\frac{1}{\\left|\\Omega_{s}\\right|} \\sum_{i \\in \\Omega_s} \\ell\\left({\\mathbf y}_{m} (i), {\\mathbf s}_{m} (i, \\theta)\\right)\n \\end{aligned}\n\\end{equation}\nwhere ${\\mathbf s}_{m} (i, \\theta) = (s^{(1)}_{m} (i,\\theta), \\dots, s^{(K)}_{m} (i, \\theta)) \\in [0,1]^K$ denotes the softmax output at pixel $i$, whose $k$-th component is the predicted probability of class $k \\in\\{1, \\ldots, K\\}$.\n\\paragraph{Shape moments and descriptors}\nShape moments are well-known in classical computer vision \\cite{nosrati2016incorporating}, and were recently shown useful in the different context of supervised training \\cite{KervadecMIDL2021}. Each moment is parametrized by its orders $p, q \\in \\mathbb{N}$, and each order represents a different characteristic of the shape. For given $p, q \\in \\mathbb{N}$ and class $k$, the shape moments of the segmentation prediction for an image $I_n$ can be computed as follows from the softmax matrix $\\mathrm{S}_{n}(\\theta)=\\left(\\mathrm{s}_{n}^{(k)}(\\theta)\\right)_{k=1\\ldots K}$:\n$$\n\\mu_{p, q}\\left(\\mathrm{s}_{n}^{(k)}(\\theta)\\right)=\\sum_{i \\in \\Omega_t} s_{n}^{(k)}(i, \\theta) u_{(i)}^{p} v_{(i)}^{q}\n$$\nCentral moments are derived from shape moments to guarantee translation invariance. They are computed as follows:\n$$\n \\bar{\\mu}_{p, q}\\left({\\mathbf s}_{n}^{(k)}(\\theta)\\right)=\\sum_{i \\in \\Omega_t} s_{n}^{(k)}(i,\\theta)\\left(u_{(i)}-\\bar{u}^{(k)}\\right)^{p}\\left(v_{(i)}-\\bar{v}^{(k)}\\right)^{q},\n$$\nwhere \n$\\left(\\bar{u}^{(k)}, \\bar{v}^{(k)}\\right)=\\left(\\frac{\\mu_{1,0}(s_{n}^{(k)}(\\theta))}{\\mu_{0,0}(s_{n}^{(k)}(\\theta))},\\frac{\\mu_{0,1}(s_{n}^{(k)}(\\theta))}{\\mu_{0,0}(s_{n}^{(k)}(\\theta))}\\right)$ are the components of the centroid. \nWe use the vectorized form from here on, e.g., $\\mu_{p, q}\\left(s_{n}(\\theta)\\right) =\\left ( \\mu_{p, q}(s_{n}^{(1)}(\\theta)), \\dots, \\mu_{p, q}(s_{n}^{(K)}(\\theta)) \\right )^\\top$.\nBuilding on these definitions, we obtain 2D shape moments from the network predictions, and then derive the shape descriptors $\\mathcal{R} ,\\mathcal{C},\\mathcal{D}$ defined in Table \\ref{table:shapes}, which respectively inform on the size, position, and compactness of a shape. 
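As an illustration, the following is a minimal PyTorch sketch of these computations, under stated assumptions: the softmax maps of one slice are stored in a tensor of shape \\(K \\times H \\times W\\), and the function names are ours.\n\n\\begin{verbatim}\nimport torch\n\ndef moment(s, p, q):\n    # mu_{p,q} of the softmax maps s: one scalar per class\n    K, H, W = s.shape\n    u = torch.arange(H, dtype=s.dtype).view(H, 1).expand(H, W)\n    v = torch.arange(W, dtype=s.dtype).view(1, W).expand(H, W)\n    return (s * u**p * v**q).sum(dim=(1, 2))\n\ndef class_ratio(s):\n    # R(s) = mu_00 \/ |Omega|, shape [K]\n    _, H, W = s.shape\n    return moment(s, 0, 0) \/ (H * W)\n\ndef centroid(s):\n    # C(s) = (mu_10\/mu_00, mu_01\/mu_00), shape [K, 2]\n    m00 = moment(s, 0, 0).clamp_min(1e-6)\n    return torch.stack((moment(s, 1, 0) \/ m00, moment(s, 0, 1) \/ m00), dim=1)\n\\end{verbatim}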
\n\\vspace{-0.3em}\n\\begin{table}[h!]\n\\centering\n \\caption{Examples of shape descriptors based on softmax predictions.}\n\\begin{tabular}{ll}\n\\toprule\nShape Descriptor & \\multicolumn{1}{c}{Definition} \\\\\n\\midrule\nClass-Ratio & $\\mathcal{R}(s):=\\frac{1}{\\left| \\Omega_T \\right|}\\mu_{0, 0}\\left(s\\right) $ \\\\\nCentroid & $\\mathcal{C}\\left(s\\right):=\\left(\\frac{\\mu_{1,0}\\left(s\\right)}{\\mu_{0,0}\\left(s\\right)}, \\frac{\\mu_{0,1}\\left(s\\right)}{\\mu_{0,0}\\left(s\\right)}\\right)$\\\\ \nDistance to Centroid & $\\mathcal{D}\\left(s\\right):=\\left(\\sqrt[2]{\\frac{\\bar{\\mu}_{2,0}\\left(s\\right)}{\\mu_{0,0}\\left(s\\right)}}, \\sqrt[2]{\\frac{\\bar{\\mu}_{0,2}\\left(s\\right)}{\\mu_{0,0}\\left(s\\right)}}\\right) $\\\\ \n\\bottomrule\n\\label{table:shapes}\n\\end{tabular}\n\\end{table}\n\n\\vspace{-1.5em}\n\\paragraph{Test-time adaptation and inference with shape-prior constraints}\n\nGiven a single new subject in the target domain composed of $N$ 2D slices, ${I}_n: \\Omega_t\\subset \\mathbb R^{2} \\rightarrow {\\mathbb R}$, $n=1, \\ldots, N$, the first loss term in our adaptation phase, derived from \\cite{wang2021tent}, encourages high confidence in the softmax predictions by minimizing their weighted Shannon entropy: $\\ell_{ent}({\\mathbf s}_n (i,\\theta)) = - \\sum_k \\nu_k s^{(k)}_n (i,\\theta) \\log s^{(k)}_n (i, \\theta)$, where $\\nu_k$, $k=1, \\ldots, K$, are class weights added to mitigate imbalanced class-ratios.\n\n\nIdeally, to guide adaptation, for each slice $I_n$ we would penalize the deviations between the shape descriptors of the softmax predictions ${S}_{n}(\\theta)$ and those corresponding to the ground truth $\\mathbf{y}_n$. As the ground-truth labels are unavailable, we instead estimate the shape descriptors using the predictions for the whole subject, $ \\left\\{ S_{n}(\\theta), n=1,\\dots,N\\right\\}$, and denote the resulting estimates $\\mathcal{\\bar{C}}$ and $\\mathcal{\\bar{D}}$, respectively.\n\n\nThe first shape moment we leverage is the simplest one: the zero-order class-ratio $\\mathcal{R}$. 
Seeing these class-ratios as distributions, we integrate a KL divergence with the Shannon entropy: \n\\begin{equation}\\label{eq:AdaMI}\n\\begin{aligned}\n \\mathcal{L}_{TTAS}(\\theta) = \\sum_n \\left[ \\frac{1}{\\left|\\Omega_{t}\\right|} \\sum_{i \\in \\Omega_t} \\ell_{ent}({\\mathbf s}_n (i, \\theta))+ \\mbox{KL}(\\mathcal{R}(S_n (\\theta)),\\mathcal{\\bar{R}}) \\right].\n \\end{aligned}\n\\end{equation}\nIt is worth noting that, unlike \\cite{Bateson2022}, which used a loss of the form in Eq~\\eqref{eq:AdaMI} for training on target data, here we use this term for inference on a test subject, as a part of our overall shape-based objective.\nAdditionally, we integrate the centroid ($\\mathcal{M}=\\mathcal{C}$) and the distance to the centroid ($\\mathcal{M}=\\mathcal{D}$) to further guide the adaptation towards plausible solutions:\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:ineq}\n \\min_{\\theta} &\\quad\\mathcal{L}_{TTAS}(\\theta) \n \\\\\n &\\text{s.t. } \\left|\\mathcal{M}^{(k)}(S_n (\\theta))-\\mathcal{\\bar{M}}^{(k)} \\right|\\leq 0.1, & k=\\{2,\\dots, K\\},\\ n = \\{1, \\dots, N\\}.\n \\end{aligned}\n\\end{equation}\n\n\nImposing such hard constraints is typically handled through the minimization of the Lagrangian dual in standard convex optimization. As this is computationally intractable in deep networks, inequality constraints such as Eq~\\eqref{eq:ineq} are typically relaxed to soft penalties \\cite{He2017,kervadec2019constrained,Jia2017}.\nTherefore, we integrate $\\mathcal{C}$ and $\\mathcal{D}$ through a quadratic penalty, leading to the following unconstrained objective for joint test-time adaptation and inference:\n\\begin{equation}\\label{eq:TTAS}\n \\sum_n \\left[ \\frac{1}{\\left|\\Omega_{t}\\right|} \\sum_{i \\in \\Omega_t} \\ell_{ent}({\\mathbf s}_n (i, \\theta))+ \\mbox{KL}(\\mathcal{R}(S_n (\\theta)),\\mathcal{\\bar{R}}) + \\lambda \\mathcal{F}(\\mathcal{M}(S_n (\\theta)),\\mathcal{\\bar{M}}) \\right],\n\\end{equation}\nwhere $\\mathcal{F}$ is a quadratic penalty function corresponding to the relaxation of Eq~\\eqref{eq:ineq}: $\\mathcal{F}(m_1,m_2)= [m_1-0.9m_2]_+^2 + [1.1m_2-m_1]_+^2$ with $[m]_+ = \\max (0,m)$, and $\\lambda$ denotes a weighting hyper-parameter.\nFollowing recent TTA methods \\cite{wang2021tent,KARANI2021101907}, we optimize only the scale and bias parameters of the batch normalization layers, while the rest of the network is frozen; a condensed sketch of this test-time optimization is given below, after the results discussion. Figure \\ref{fig:overview} shows an overview of the proposed framework.\n\n\\section{Experiments}\n \n\\subsection{Test-time Adaptation with shape descriptors} \n\n\\paragraph{\\textbf{Heart Application}} We employ the 2017 Multi-Modality Whole Heart Segmentation (MMWHS) Challenge dataset for cardiac segmentation \\cite{Zhuang2019}. The dataset consists of 20 MRI (source domain) and 20 CT (target domain) volumes of non-overlapping subjects, with manual annotations of four cardiac structures: the Ascending Aorta (AA), the Left Atrium (LA), the Left Ventricle (LV) and the Myocardium (MYO). 
We employ the pre-processed data provided by \\cite{dou2018pnp}.\nThe scans were normalized to zero mean and unit variance, and data augmentation based on affine transformations was performed. For the domain adaptation benchmark methods (DA and SFDA), we use the data split in \\cite{dou2018pnp}: 14 subjects for training, 2 for validation, and 4 for testing. Each subject has $N=256$ slices.\n\n\\paragraph{\\textbf{Prostate Application}} We employ the dataset from the publicly available NCI-ISBI 2013 Challenge\\footnote{https:\/\/wiki.cancerimagingarchive.net}. It is composed of manually annotated T2-weighted MRI from two different sites: 30 samples from Boston Medical Center (source domain), and 30 samples from Radboud University Medical Center (target domain). For the DA and SFDA benchmark methods, 19 scans were used for training, one for validation, and 10 scans for testing. We used the pre-processed dataset from \\cite{SAML}, in which each sample was resized to $384\\times384$ in the axial plane and normalized to zero mean and unit variance. We employed data augmentation based on affine transformations on the source domain. Each subject has $N \\in \\left [15,24 \\right ] $ slices.\n\n\n\\paragraph{\\textbf{Benchmark Methods}} Our first model, denoted $TTAS_{\\mathcal{R}\\mathcal{C}}$, constrains the class-ratio $\\mathcal{R}$ and the centroid $\\mathcal{C}$ using Eq~\\eqref{eq:TTAS}; similarly, $TTAS_{\\mathcal{R}\\mathcal{D}}$ constrains $\\mathcal{R}$ and the distance-to-centroid $\\mathcal{D}$. We compare to two \\emph{TTA} methods: the method in \\cite{KARANI2021101907}, denoted $TTDAE$, where an auxiliary branch is used to denoise the segmentation, and $Tent$ \\cite{wang2021tent}, which is based on the following loss: $\\min_{\\theta}\\sum_{n} \\sum_{i \\in \\Omega_t} \\ell_{ent}({\\mathbf s}_n (i, \\theta))$. Note that $Tent$ corresponds to an ablation of both shape-moment terms in our loss. As an additional ablation study, $TTAS_{\\mathcal{R}}$ is trained with the class-ratio matching loss in Eq~\\eqref{eq:AdaMI} only.\nWe also compare to two \\emph{DA} methods based on class-ratio matching, \\textit{CDA} \\cite{Bateson2021} and $CurDA$ \\cite{zhang2019curriculum}, and to the recent source-free domain adaptation (\\emph{SFDA}) method $AdaMI$ \\cite{Bateson2022}. \nA model trained on the source only, \\textit{NoAdap}, was used as a lower bound. 
A model trained on the target domain with the cross-entropy loss, $Oracle$, served as an upper bound.

\paragraph{\textbf{Estimating the shape descriptors}}\label{sec:sizeprior} For the estimation of the class-ratio prior $\mathcal{\bar{R}}$, we employed the coarse estimation in \cite{Bateson2021}, which is derived from anatomical knowledge available in the clinical literature.
For $\mathcal{M} \in \left\{ \mathcal{C},\mathcal{D}\right\}$, we estimate the target shape descriptor from the network prediction masks $\mathbf{\hat{y}}_n$ after each epoch: $\mathcal{\bar{M}}^{(k)} = \frac{1}{\left|V^k \right|}\sum_{v \in V^{k}}v$, with $V^{k} = \left\{ \mathcal{M}^{(k)}(\mathbf{\hat{y}}_n) : \mathcal{R}^{(k)}(\mathbf{\hat{y}}_n)>\epsilon^k,\; n=1,\dots,N \right\}$.

Note that, for a fair comparison, we used exactly the same class-ratio priors and weak supervision employed in the benchmark methods \cite{Bateson2021,Bateson2022,zhang2019curriculum}.
Weak supervision takes the form of simple image-level tags: we set $\mathcal{\bar{R}}^{(k)}=\textbf{0}$ and $\lambda=0$ for the target images that do not contain structure $k$.

\paragraph{\textbf{Training and implementation details}} For all methods, the segmentation network employed was UNet \cite{UNet}. A model trained on the source data with Eq~\eqref{eq:crossent} for 150 epochs was used as initialization. Then, for the TTA models, adaptation is performed on each test subject independently, without any target training. Our model was initialized with Eq~\eqref{eq:AdaMI} for 150 epochs, after which the additional shape constraint was added using Eq~\eqref{eq:TTAS} for 200 epochs. As there is no training or validation set in the target domain, the hyper-parameters follow those used in the source training and are fixed across experiments: we trained with the Adam optimizer \cite{Adam}, a batch size of $\min(N,22)$, an initial learning rate of $5\times10^{-4}$, a learning rate decay of 0.9 every 20 epochs, and a weight decay of $10^{-4}$. The weights $\nu_k$ are calculated as $\nu_k=\frac{\bar{\mathcal{R}}_k^{-1}}{\sum_k \bar{\mathcal{R}}_k^{-1}}$. We set $\lambda=1\times10^{-4}$.

\paragraph{\textbf{Evaluation}} The 3D Dice similarity coefficient (DSC) and the 3D Average Surface Distance (ASD) were used as evaluation metrics in our experiments.
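For reproducibility, we sketch below a minimal PyTorch implementation of the unconstrained objective in Eq~\eqref{eq:TTAS}, instantiated with the centroid moment ($\mathcal{M}=\mathcal{C}$) on the 2D slices of a single test subject. This is an illustrative simplification rather than our exact implementation: the helper names are ours, and the class weights $\nu_k$ and the image-level tags are omitted for brevity.

\begin{verbatim}
import torch
import torch.nn.functional as F

def class_ratio(probs):
    # R(S): mean predicted proportion of each class, (B,K,H,W) -> (K,)
    return probs.mean(dim=(0, 2, 3))

def centroid(probs, eps=1e-6):
    # C(S): probability-weighted centroid per class, (B,K,H,W) -> (K,2)
    B, K, H, W = probs.shape
    ys = torch.arange(H, device=probs.device).view(1, 1, H, 1)
    xs = torch.arange(W, device=probs.device).view(1, 1, 1, W)
    mass = probs.sum(dim=(0, 2, 3)) + eps
    cy = (probs * ys).sum(dim=(0, 2, 3)) / mass
    cx = (probs * xs).sum(dim=(0, 2, 3)) / mass
    return torch.stack((cy, cx), dim=-1)

def quad_penalty(m, m_bar):
    # F(m1,m2): zero inside the +/-10% band around the prior m_bar
    return (F.relu(m - 1.1 * m_bar) ** 2
            + F.relu(0.9 * m_bar - m) ** 2).sum()

def ttas_loss(logits, ratio_prior, centroid_prior, lam=1e-4, eps=1e-10):
    # logits: (B,K,H,W) outputs for the B slices of one test subject;
    # ratio_prior (K,) and centroid_prior (K,2) estimated as described above.
    probs = logits.softmax(dim=1)
    ent = -(probs * (probs + eps).log()).sum(dim=1).mean()
    r = class_ratio(probs)
    kl = (r * ((r + eps).log() - (ratio_prior + eps).log())).sum()
    return ent + kl + lam * quad_penalty(centroid(probs), centroid_prior)

def bn_params(model):
    # Only the scale/bias of BatchNorm layers are adapted; rest is frozen.
    for m in model.modules():
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            for p in (m.weight, m.bias):
                if p is not None:
                    yield p

# for p in model.parameters(): p.requires_grad_(False)
# for p in bn_params(model):   p.requires_grad_(True)
# optimizer = torch.optim.Adam(bn_params(model), lr=5e-4,
#                              weight_decay=1e-4)
\end{verbatim}

Restricting the optimization to the batch-normalization scale and bias keeps the number of adapted parameters small, which helps stabilize adaptation when only a single subject is available.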
\subsection{Results \& discussion}

Table~\ref{table:resultswhs} and Table~\ref{table:resultspro} report the quantitative metrics for the heart and prostate applications, respectively. Among DA methods, the source-free $AdaMI$ achieves the best DSC improvement over the lower baseline \textit{NoAdap}, with a mean DSC of 75.7\% (cardiac) and 79.5\% (prostate). Surprisingly though, in both applications, our method $TTAS_{\mathcal{R}\mathcal{D}}$ yields comparable or better scores: 76.5\% DSC, 5.4 vox. ASD (cardiac) and 79.5\% DSC, 3.9 vox. ASD (prostate), while $TTAS_{\mathcal{R}\mathcal{C}}$ achieves the best DSC across methods: 80.0\% DSC, 5.3 vox. ASD (cardiac) and 80.2\% DSC, 3.79 vox. ASD (prostate).
Finally, compared to the TTA methods, both $TTAS_{\mathcal{R}\mathcal{C}}$ and $TTAS_{\mathcal{R}\mathcal{D}}$ widely outperform $TTDAE$, which yields 40.7\% DSC, 12.9 vox. ASD (cardiac) and 73.2\% DSC, 5.80 vox. ASD (prostate), and $Tent$, which reaches 48.2\% DSC, 11.2 vox. ASD (cardiac) and 68.7\% DSC, 5.87 vox. ASD (prostate).

\begin{table}[t]
\footnotesize
 \caption{Test-time metrics on the cardiac dataset, for our method and various \textit{Domain Adaptation} (DA), \textit{Source-Free Domain Adaptation} (SFDA) and \textit{Test-Time Adaptation} (TTA) methods.}
 \centering
 \resizebox{\textwidth}{!}{
 \begin{tabular}{lccccccc|cp{5pt}cccc|c}
\toprule
\multirow{2}{3em}{Methods} & \multirow{2}{3em}{DA}& \multirow{2}{3em}{SFDA} & \multirow{2}{3em}{TTA} &
 \multicolumn{5}{c}{DSC (\%)} && \multicolumn{5}{c}{ASD (vox)} \\
 \cmidrule(lr){5-9}\cmidrule(lr){11-15}
& & & & AA & LA & LV & Myo & Mean && AA & LA & LV & Myo & Mean \\
\midrule
NoAdap (lower b.) &&&& 49.8&62.0&21.1&22.1&38.8 && 19.8&13.0&13.3&12.4&14.6 \\
Oracle \, \,(upper b.) &&&& 91.9 & 88.3 & 91.0 & 85.8 & 89.2 && 3.1 & 3.4 & 3.6 & 2.2 & 3.0 \\
\midrule
CurDA \cite{zhang2019curriculum}&$\checkmark$&$\times$& $\times$&79.0 & 77.9 & 64.4 & 61.3 & 70.7&& 6.5 & 7.6 & 7.2 & 9.1 & 7.6 \\
CDA \cite{Bateson2021} & $\checkmark$& $\times$&$\times$&77.3 & 72.8 & 73.7 & 61.9 &71.4 && \textbf{4.1} &6.3 & 6.6 &6.6 & 5.9 \\
AdaMI \cite{Bateson2022} & $\times$&$\checkmark$&$\times$ &83.1&78.2&74.5&66.8& 75.7&&5.6&\textbf{4.2}&\textbf{5.7}&6.9&5.6\\
TTDAE \cite{KARANI2021101907} &$\times$& $\times$& $\checkmark$& 59.8 & 26.4 & 32.3& 44.4 & 40.7 && 15.1 & 11.7 & 13.6 &11.3 & 12.9 \\
Tent \cite{wang2021tent} & $\times$& $\times$& $\checkmark$& 55.4 & 33.4 &63.0 &41.1 & 48.2 && 18.0 & 8.7 & 8.1 & 10.1 & 11.2 \\
\rowcolor{Gray} Proposed Method && & & & & & && & & & & &\\
\textbf{TTAS$_{\mathcal{RC}}$} (Ours) &$\times$&$\times$&$\checkmark$&\textbf{85.1}&\textbf{82.6}&\textbf{79.3}&\textbf{73.2}& \textbf{80.0}&&5.6&4.3&6.1&\textbf{5.3}&\textbf{5.3}\\
\textbf{TTAS$_{\mathcal{RD}}$} (Ours) &$\times$&$\times$&$\checkmark$&82.3&78.9&76.1&68.4& 76.5&&4.0&5.8&6.1&5.7&5.4\\
\rowcolor{Gray} Ablation study & & & & & & & && & & & & &\\
\textbf{TTAS$_{\mathcal{R}}$} &$\times$&$\times$&$\checkmark$ &78.9&77.7&74.8&65.3& 74.2&& 5.2&4.9&7.0&7.6&6.2\\
\bottomrule
 \end{tabular}
 }
 \label{table:resultswhs}
\end{table}

\vspace{-0.5em}
\begin{table}[h!]
\centering
\footnotesize
\caption{Test-time metrics on the prostate dataset.}
\begin{tabular}{lccccc}
\toprule
Methods & DA & SFDA & TTA & DSC (\%) & ASD (vox) \\
\midrule
NoAdap (lower bound) &&&& 67.2 & 10.60\\
Oracle \, \,(upper bound) &&&& 88.9 & 1.88\\
\midrule
CurDA \cite{zhang2019curriculum} & $\checkmark$ &$\times$& $\times$ & 76.3 & 3.93\\
CDA \cite{Bateson2021} & $\checkmark$&$\times$ &$\times$ & 77.9 & \textbf{3.28}\\
AdaMI \cite{Bateson2022} & $\times$&$\checkmark$& $\times$& 79.5 & 3.92\\
TTDAE \cite{KARANI2021101907} &$\times$ & $\times$ & $\checkmark$ & 73.2 & 5.80\\
Tent \cite{wang2021tent} &$\times$ &$\times$ & $\checkmark$ & 68.7 & 5.87\\
\rowcolor{Gray} Proposed Method & & & & & \\
TTAS$_{\mathcal{RC}}$
 (Ours) & $\times$& $\times$ & $\checkmark$& \textbf{80.2} & 3.79\\
TTAS$_{\mathcal{RD}}$ (Ours) & $\times$& $\times$& $\checkmark$ & 79.5 & 3.90\\
\rowcolor{Gray} Ablation study & & & & & \\
TTAS$_{\mathcal{R}}$ (Ours) & $\times$& $\times$& $\checkmark$ & 75.3 & 5.06\\
\bottomrule
\end{tabular}
\label{table:resultspro}
\end{table}

Qualitative segmentations are depicted in Figure~\ref{fig:seg}. These visual results confirm that, without adaptation, a model trained only on source data cannot properly segment the structures in the target images. The segmentation masks obtained with the TTA methods $Tent$ \cite{wang2021tent} and $TTDAE$ \cite{KARANI2021101907} show only little improvement. Both methods are unable to recover existing structures when the initialization \textit{NoAdap} fails to detect them (see the fourth and fifth rows of Figure~\ref{fig:seg}). In contrast, the masks produced by the ablated model $TTAS_\mathcal{R}$ show more regular edges and are closer to the ground truth. Moreover, the improvement obtained over $TTAS_\mathcal{R}$ by our two models $TTAS_{\mathcal{R}\mathcal{C}}$ and $TTAS_{\mathcal{R}\mathcal{D}}$ is remarkable regarding the shape and position of each structure: the prediction masks show a better centroid position (first row of Figure~\ref{fig:seg}, see LA and LV) and better compactness (third, fourth and fifth rows of Figure~\ref{fig:seg}).

\begin{figure}[t]
\centering
 \includegraphics[width=\textwidth]{figures/segall8.png}
 \caption[]{Qualitative performance on cardiac images (top) and prostate images (bottom): examples of the segmentations achieved by our formulations ($TTAS_{\mathcal{R}\mathcal{C}}$, $TTAS_{\mathcal{R}\mathcal{D}}$) and by the benchmark TTA models. The cardiac structures of MYO, LA, LV and AA are depicted in blue, red, green and yellow, respectively.}
 \label{fig:seg}
\end{figure}

\section{Conclusion}

In this paper, we proposed a simple formulation for \emph{single-subject} test-time adaptation (TTA), which requires neither access to the source data nor the availability of a target training set.
Our approach performs inference on a test subject by minimizing, over the batch-normalization parameters, the entropy of predictions together with a KL divergence to a class-ratio prior. To further guide adaptation, we integrate shape priors through penalty constraints.
We validated our method on two challenging tasks: the MRI-to-CT adaptation of cardiac segmentation and the cross-site adaptation of prostate segmentation. Our formulation achieved better performance than state-of-the-art TTA methods, with DSC improvements of 31.8\% on cardiac and 7.0\% on prostate images. Surprisingly, it also fares better than various state-of-the-art DA and SFDA methods. These results highlight the effectiveness of shape priors for test-time inference, and question the usefulness of training on target data in segmentation adaptation.
Future work will involve the introduction of higher-order shape moments, as well as the integration of multiple shape moments in the adaptation loss. Our test-time adaptation framework is straightforward to use with any segmentation network architecture.

\bibliographystyle{splncs04}

\section{Introduction}

Deep convolutional neural networks have demonstrated state-of-the-art performance in many natural and medical imaging problems \cite{litjens2017survey}. However, deep-learning methods tend to under-perform when trained on a dataset whose underlying distribution differs from that of the target images. In medical imaging, this is due, for instance, to variations in imaging modalities and protocols, vendors, machines and clinical sites (see Fig~\ref{fig:s_t_im}). For semantic segmentation problems, labelling for each different target distribution is impractical, time-consuming, and often impossible.
To circumvent these impediments, methods learning robust networks with less supervision have been popularized in computer vision.

\paragraph{Domain Adaptation} This motivates Domain Adaptation (DA) methods, which adapt a model trained on an annotated source domain to another target domain with no or minimal annotations. While some approaches to the problem require labelled data from the target domain, others adopt an unsupervised approach to domain adaptation (UDA).

\paragraph{Source-Free Domain Adaptation} Additionally, a more recent line of work tackles source-free domain adaptation (SFDA) \cite{Bateson2020}, a setting where the source data (neither images nor labeled masks) is unavailable during the adaptation phase. In medical imaging, this may be the case when the source and target data come from different clinical sites, due to, for instance, privacy concerns or the loss or corruption of source data.

\paragraph{Test-Time Domain Adaptation} In the standard DA setting, the first step consists in fine-tuning or retraining the model with some (unlabeled) samples from the target domain. The evaluation then measures the model's ability to generalise to unseen data in the target domain. However, the emerging field of Test-Time Adaptation (TTA) \cite{wang2021tent} argues that this is not as useful as adapting to the test set directly. In some applications, it might not even be possible, such as when only a single target-domain subject is available.

Our framework will therefore be Source-Free Test-Time Adaptation for Segmentation. We propose an evaluation framework where we perform test-time adaptation on each subject from the target data separately. This can be seen as an extreme case of the active field of ``learning with less data'', as our method should work with only a single sample from a new data distribution.

\begin{figure}[t]
 \includegraphics[width=1\linewidth]{figures/s_t_im_crop.png}
 \caption[]{Visualization of two aligned slice pairs in two different MRI modalities, Water and In-Phase, showing the differing appearance of the structures to be segmented.}
 \label{fig:s_t_im}
\end{figure}

\section{Project}

In this project, we will propose a simple formulation for test-time adaptation (TTA), which removes the need for concurrent access to the source and target data, as well as the need for multiple target subjects, in the context of semantic segmentation.
Our approach will substitute the standard supervised loss in the source domain with a direct minimization of the entropy of predictions in the target domain. To prevent trivial solutions, we will integrate the entropy loss with different shape moments, such as those introduced in the less challenging context of semi-supervised learning in \cite{KervadecMIDL2021}. Our formulation is usable with any segmentation network architecture; we will use UNet \cite{UNet}.

TTA is a nascent field, with only a few works in segmentation \cite{KARANI2021101907}. The question we wish to answer is the following: can we leverage the inherent structure of anatomical shapes to further guide test-time adaptation?

\section{Datasets}

\subsection{Test-time Adaptation with shape moments}

\subsubsection{\textbf{Prostate Application}} We will first evaluate the proposed method on the publicly available NCI-ISBI 2013 Challenge\footnote{https://wiki.cancerimagingarchive.net/display/Public/NCI-ISBI+2013+Challenge+-+Automated+Segmentation+of+Prostate+Structures} dataset. It is composed of manually annotated 3D T2-weighted magnetic resonance images from two different sites: 30 samples from Radboud University Nijmegen Medical Centre (Site A, target domain $T$) and 30 samples from Boston Medical Center (Site B, source domain $S$).

\subsubsection{\textbf{Heart Application}} We will employ the 2017 Multi-Modality Whole Heart Segmentation (MMWHS) Challenge dataset for cardiac segmentation \cite{Zhuang2019}. The dataset consists of 20 MRI (source domain $S$) and 20 CT volumes (target domain $T$) of non-overlapping subjects, with their manual annotations. We will adapt the segmentation network for parsing four cardiac structures: the Ascending Aorta (AA), the Left Atrium blood cavity (LA), the Left Ventricle blood cavity (LV) and the Myocardium of the left ventricle (MYO). We will employ the pre-processed data provided by \cite{dou2018pnp}.

\subsubsection{\textbf{Benchmark Methods}}

A model trained on the source only, \textit{NoAdaptation}, will be used as a lower bound. A model trained with the cross-entropy loss on the target domain, referred to as $Oracle$, will be the upper bound. Regarding test-time adaptation methods, we will compare our method to \cite{wang2021tent}, the seminal test-time adaptation method, as well as to \cite{KARANI2021101907}. We will also compare the TTA setting to the DA setting, where a set of subjects from the target distribution is available for adapting the network. We will compare to both SFDA methods \cite{Bateson2020,zhang2019curriculum} and standard DA methods, such as adversarial methods \cite{tsai2018learning}.

\subsubsection{\textbf{Evaluation.}}
The Dice similarity coefficient (DSC) and the Hausdorff distance (HD) will be used as evaluation metrics in our experiments.

\section{Outline}

Table \ref{tab:our} shows the outline of the project.

\begin{table}[]
\caption{Outline of the project.}
\begin{tabular}{ll}
\toprule
1 week & Feasibility study \\
2 weeks & Literature study \\
1 week & Project proposal \\
1 week & Literature report \\
5 weeks & Implementation, evaluation, and report writing \\
\bottomrule
\end{tabular}
\label{tab:our}
\end{table}

\section{Critical Points}

\paragraph{Experiments} We will introduce an adaptation loss combining an entropy term with various shape moments to constrain the structures to be segmented. We will investigate simple shape priors such as the size, the centroid, the distance to centroid, and the eccentricity; a preliminary sketch is given below.
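As a preliminary illustration, such soft (differentiable) shape descriptors could be computed directly from the softmax predictions, for instance as in the sketch below. This is only a working assumption at this stage: the helper names are ours, and the exact definitions (in particular for the eccentricity) will be settled during the project.

\begin{verbatim}
import torch

def soft_size(probs):
    # size / class-ratio: (B,K,H,W) -> (K,)
    return probs.mean(dim=(0, 2, 3))

def soft_centroid(probs, eps=1e-6):
    # probability-weighted centroid per class: (B,K,H,W) -> (K,2)
    B, K, H, W = probs.shape
    ys = torch.arange(H, device=probs.device).view(1, 1, H, 1)
    xs = torch.arange(W, device=probs.device).view(1, 1, 1, W)
    mass = probs.sum(dim=(0, 2, 3)) + eps
    cy = (probs * ys).sum(dim=(0, 2, 3)) / mass
    cx = (probs * xs).sum(dim=(0, 2, 3)) / mass
    return torch.stack((cy, cx), dim=-1)

def soft_dist_to_centroid(probs, eps=1e-6):
    # probability-weighted mean distance to the class centroid: (K,)
    B, K, H, W = probs.shape
    c = soft_centroid(probs, eps)                     # (K,2)
    ys = torch.arange(H, device=probs.device).view(H, 1).float()
    xs = torch.arange(W, device=probs.device).view(1, W).float()
    d = ((ys - c[:, 0].view(K, 1, 1)) ** 2
         + (xs - c[:, 1].view(K, 1, 1)) ** 2).sqrt()  # (K,H,W)
    mass = probs.sum(dim=(0, 2, 3)) + eps
    return (probs * d.unsqueeze(0)).sum(dim=(0, 2, 3)) / mass
\end{verbatim}

Because these quantities are computed from the soft predictions, they remain differentiable with respect to the network parameters and can be used directly inside an adaptation loss.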
\paragraph{Questions to be answered} How can we derive good priors for the shape moments? Should the whole network be adapted, or only the batch normalization parameters, as advocated in recent works \cite{KARANI2021101907,wang2021tent}?

\paragraph{Code}
The code will build on previous work \cite{Bateson2020}, using PyTorch.

\bibliographystyle{splncs04}